Science.gov

Sample records for 3-d vision system

  1. Computer-aided 3D display system and its application in 3D vision test

    NASA Astrophysics Data System (ADS)

    Shen, XiaoYun; Ma, Lan; Hou, Chunping; Wang, Jiening; Tang, Da; Li, Chang

    1998-08-01

    A computer-aided 3D display system, a flicker-free field-sequential stereoscopic image display system, has been newly developed. The system is composed of a personal computer, a liquid crystal glasses driver card, stereoscopic display software, and liquid crystal shutter glasses. It can display field-sequential stereoscopic images at refresh rates of 70 Hz to 120 Hz. A typical application of this system, a 3D vision test system, is the main subject of this paper. The stereoscopic vision test system can quantitatively test stereoscopic acuity, crossed disparity, uncrossed disparity, and dynamic stereoscopic vision. Random-dot stereograms are used as the stereoscopic vision test charts. A comparative test between anaglyph stereoscopic vision test charts and this stereoscopic vision test system was carried out, and the statistics and test results are reported.
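
    Random-dot stereograms of the kind used here as test charts can be generated directly from a disparity assignment: duplicate a random dot field and shift a central region laterally, then refill the uncovered strip. A minimal NumPy sketch of that construction (the sizes, disparity value and helper name are illustrative assumptions, not taken from the paper):

        import numpy as np

        def random_dot_stereogram(size=256, target=96, disparity=6, seed=0):
            """Return a left/right random-dot pair whose central square appears
            at a different depth (set by `disparity` pixels) when fused."""
            rng = np.random.default_rng(seed)
            left = rng.integers(0, 2, (size, size), dtype=np.uint8) * 255
            right = left.copy()
            lo, hi = (size - target) // 2, (size + target) // 2
            # Shift the central square to the right by `disparity` pixels ...
            right[lo:hi, lo + disparity:hi + disparity] = left[lo:hi, lo:hi]
            # ... and fill the strip uncovered at its left edge with fresh dots.
            right[lo:hi, lo:lo + disparity] = rng.integers(
                0, 2, (target, disparity), dtype=np.uint8) * 255
            return left, right

        left_img, right_img = random_dot_stereogram()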

  2. Adaptive fuzzy system for 3-D vision

    NASA Technical Reports Server (NTRS)

    Mitra, Sunanda

    1993-01-01

    An adaptive fuzzy system using the concept of the Adaptive Resonance Theory (ART) type neural network architecture and incorporating fuzzy c-means (FCM) system equations for reclassification of cluster centers was developed. The Adaptive Fuzzy Leader Clustering (AFLC) architecture is a hybrid neural-fuzzy system which learns on-line in a stable and efficient manner. The system uses a control structure similar to that found in the Adaptive Resonance Theory (ART-1) network to identify the cluster centers initially. The initial classification of an input takes place in a two-stage process: a simple competitive stage and a distance-metric comparison stage. The cluster prototypes are then incrementally updated by relocating the centroid positions using the Fuzzy c-Means (FCM) system equations for the centroids and the membership values. The operational characteristics of AFLC and the critical parameters involved in its operation are discussed. The performance of the AFLC algorithm is presented through application of the algorithm to the Anderson Iris data and laser-luminescent fingerprint image data. The AFLC algorithm successfully classifies features extracted from real data, discrete or continuous, indicating the potential strength of this new clustering algorithm in analyzing complex data sets. The hybrid neuro-fuzzy AFLC algorithm will enhance analysis of a number of difficult recognition and control problems involved with Tethered Satellite Systems and the on-orbit space shuttle attitude controller.
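
    The FCM equations referenced for relocating the AFLC centroids are the standard ones: membership weights depend on relative distances to all centroids, and centroids are membership-weighted means. A minimal sketch of one FCM iteration in NumPy (a generic illustration, not the authors' AFLC code; `m` is the usual fuzzifier):

        import numpy as np

        def fcm_step(X, centroids, m=2.0, eps=1e-9):
            """One fuzzy c-means iteration: update memberships, then centroids.
            X: (n, d) data points, centroids: (c, d)."""
            # Distances from every point to every centroid, shape (n, c).
            d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + eps
            # Membership u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
            power = 2.0 / (m - 1.0)
            u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** power, axis=2)
            # Centroid v_j = sum_i u_ij^m x_i / sum_i u_ij^m
            um = u ** m
            new_centroids = (um.T @ X) / um.sum(axis=0)[:, None]
            return u, new_centroids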

  3. Fiber optic coherent laser radar 3D vision system

    SciTech Connect

    Clark, R.B.; Gallman, P.G.; Slotwinski, A.R.; Wagner, K.; Weaver, S.; Xu, Jieping

    1996-12-31

    This CLVS will provide a substantial advance in high-speed computer vision performance to support robotic Environmental Management (EM) operations. The 3D system employs a compact fiber-optic-based scanner and operates at a 128 x 128 pixel frame at one frame per second with a range resolution of 1 mm over its 1.5 meter working range. Using acousto-optic deflectors, the scanner is completely randomly addressable. This can provide live 3D monitoring for situations where it is necessary to update once per second. It can be used for decontamination and decommissioning operations in which robotic systems are altering the scene, such as waste removal, surface scarifying, or equipment disassembly and removal. The fiber-optic coherent laser radar based system is immune to variations in lighting, color, or surface shading, which have plagued the reliability of existing 3D vision systems, while providing substantially superior range resolution.
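
    For an FMCW coherent laser radar of this kind, the range resolution is set by the optical chirp bandwidth, delta_R = c / (2 B). A quick illustrative check of the bandwidth implied by the quoted 1 mm resolution (numbers other than the 1 mm figure are assumptions):

        # FMCW range resolution: delta_R = c / (2 * B)
        c = 3.0e8          # speed of light, m/s
        delta_R = 1.0e-3   # 1 mm range resolution, as quoted in the abstract
        B = c / (2.0 * delta_R)
        print(f"implied chirp bandwidth: {B / 1e9:.0f} GHz")  # ~150 GHz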

  4. The 3D laser radar vision processor system

    NASA Technical Reports Server (NTRS)

    Sebok, T. M.

    1990-01-01

    Loral Defense Systems (LDS) developed a 3D Laser Radar Vision Processor system capable of detecting, classifying, and identifying small mobile targets as well as larger fixed targets, using three-dimensional laser radar imagery, for use with a robotic-type system. The processor system is designed to interface with the NASA Johnson Space Center in-house Extra Vehicular Activity (EVA) Retriever robot program and provide the information it needs to fetch and grasp targets in a space-type scenario.

  5. 3D vision system for intelligent milking robot automation

    NASA Astrophysics Data System (ADS)

    Akhloufi, M. A.

    2013-12-01

    In a milking robot, the correct localization and positioning of the milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for estimating approximate teat positions. This technology has reached its limit and does not allow optimal positioning of the milking cups. Also, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers and for animal health (e.g. the development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-Of-Flight (TOF) and RGBD cameras. The proposed algorithms permit online segmentation of the teats by combining 2D and 3D visual information. The obtained results permit computation of the 3D teat positions. This information is then sent to the milking robot for teat cup positioning. The vision system has real-time performance and monitors the optimal positioning of the cups even in the presence of motion. The results obtained with both TOF and RGBD cameras show the good performance of the proposed system. The best performance was obtained with RGBD cameras. This latter technology will be used in future real-life experimental tests.

  6. Fiber optic coherent laser radar 3d vision system

    SciTech Connect

    Sebastian, R.L.; Clark, R.B.; Simonson, D.L.

    1994-12-31

    Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions which is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.

  7. Structure light telecentric stereoscopic vision 3D measurement system based on Scheimpflug condition

    NASA Astrophysics Data System (ADS)

    Mei, Qing; Gao, Jian; Lin, Hui; Chen, Yun; Yunbo, He; Wang, Wei; Zhang, Guanjin; Chen, Xin

    2016-11-01

    We designed a new three-dimensional (3D) measurement system for micro components: a structured-light telecentric stereoscopic vision 3D measurement system based on the Scheimpflug condition. The system creatively combines the telecentric imaging model and the Scheimpflug condition on the basis of structured-light stereoscopic vision, offering the benefits of a wide measurement range, high accuracy, fast speed, and low price. The system measurement range is 20 mm×13 mm×6 mm, the lateral resolution is 20 μm, and the practical vertical resolution reaches 2.6 μm, which is close to the theoretical value of 2 μm and well satisfies the 3D measurement needs of micro components such as semiconductor devices, photoelectronic elements, and micro-electromechanical systems. In this paper, we first introduce the principle and structure of the system and then present the system calibration and 3D reconstruction. We then present an experiment performed for the 3D reconstruction of the surface topography of a wafer, followed by a discussion. Finally, the conclusions are presented.

  8. Angle extended linear MEMS scanning system for 3D laser vision sensor

    NASA Astrophysics Data System (ADS)

    Pang, Yajun; Zhang, Yinxin; Yang, Huaidong; Zhu, Pan; Gai, Ye; Zhao, Jian; Huang, Zhanhua

    2016-09-01

    The scanning system is often considered the most important part of a 3D laser vision sensor. In this paper, we propose a method for the optical design of an angle-extended linear MEMS scanning system, which features a large scanning angle, small beam divergence angle and small spot size for a 3D laser vision sensor. The design principle and theoretical formulas are derived rigorously. With the help of the software ZEMAX, a linear scanning optical system based on MEMS has been designed. Results show that the designed system can extend the scanning angle from ±8° to ±26.5° with a divergence angle smaller than 3.5 mrad, while the spot size is reduced by a factor of 4.545.

  9. Morphological features of the macerated cranial bones registered by the 3D vision system for potential use in forensic anthropology.

    PubMed

    Skrzat, Janusz; Sioma, Andrzej; Kozerska, Magdalena

    2013-01-01

    In this paper we present the potential usage of a 3D vision system for registering features of macerated cranial bones. The applied 3D vision system collects height profiles of the object surface and from these data builds a three-dimensional image of the surface. This method appeared to be accurate enough to capture anatomical details of the macerated bones. With the aid of the 3D vision system we generated images of the surface of a human calvaria, which was used for testing the system. The performed reconstruction visualized the imprints of the dural vascular system, the cranial sutures, and the three-layer structure of the cranial bones observed in cross-section. We conclude that the 3D vision system may deliver data which can enhance the estimation of sex from osteological material.

  10. Estimation of 3D reconstruction errors in a stereo-vision system

    NASA Astrophysics Data System (ADS)

    Belhaoua, A.; Kohler, S.; Hirsch, E.

    2009-06-01

    The paper presents an approach for error estimation for the various steps of an automated 3D vision-based reconstruction procedure for manufactured workpieces. The process is based on a priori planning of the task and is built around a cognitive intelligent sensory system using so-called Situation Graph Trees (SGT) as a planning tool. Such an automated quality control system requires the coordination of a set of complex processes performing, in sequence, data acquisition, its quantitative evaluation, and comparison with a reference model (e.g., a CAD object model) in order to evaluate the object quantitatively. To ensure efficient quality control, the aim is to be able to state whether reconstruction results fulfill tolerance rules or not. Thus, the goal is to evaluate independently the error for each step of the stereo-vision based 3D reconstruction (e.g., calibration, contour segmentation, matching and reconstruction) and then to estimate the error for the whole system. In this contribution, we analyze in particular the segmentation error due to localization errors for extracted edge points supposed to belong to the lines and curves composing the outline of the workpiece under evaluation. The fitting parameters describing these geometric features are used as a quality measure to determine confidence intervals and finally to estimate the segmentation errors. These errors are then propagated through the whole reconstruction procedure, making it possible to evaluate their effect on the final 3D reconstruction result, specifically on position uncertainties. Lastly, analysis of these error estimates allows the quality of the 3D reconstruction to be evaluated, as illustrated by the experimental results shown.
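
    The segmentation-error idea described above, treating the scatter of edge points about a fitted geometric feature as the quality measure, can be illustrated with a plain least-squares line fit whose residual variance yields a covariance (and hence confidence intervals) for the fitted parameters. This is a generic sketch of the approach, not the authors' code:

        import numpy as np

        def fit_line_with_confidence(x, y):
            """Least-squares fit y = a*x + b for more than two edge points.
            Returns the parameters, their standard errors, and the residual
            RMS usable as a segmentation quality measure."""
            A = np.column_stack([x, np.ones_like(x)])
            params, *_ = np.linalg.lstsq(A, y, rcond=None)
            r = y - A @ params
            dof = len(x) - 2
            sigma2 = r @ r / dof                   # residual variance
            cov = sigma2 * np.linalg.inv(A.T @ A)  # parameter covariance
            std_err = np.sqrt(np.diag(cov))        # std errors of (a, b)
            return params, std_err, np.sqrt(sigma2)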

  11. A binocular machine vision system for non-melanoma skin cancer 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Gorpas, Dimitris S.; Politopoulos, Kostas; Alexandratou, Eleni; Yova, Dido

    2006-02-01

    Computer vision advancements have not until now achieved accurate 3D reconstruction of objects smaller than 1 cm in diameter. Although this problem is of great importance in dermatology for Non-Melanoma Skin Cancer (NMSC) diagnosis and therapy, it has not yet been solved. This paper describes the development of a novel volumetric method for NMSC animal model tumors, using a binocular vision system. Monitoring NMSC tumor volume changes during PDT will provide important information for assessing the therapeutic progress and the efficiency of the applied drug. The vision system was designed taking into account the target size and the required flexibility. By using high-resolution cameras with telecentric lenses, most distortion factors were reduced significantly. Furthermore, z-axis movement was possible without requiring recalibration, in contrast to wide-angle lenses. Calibration was achieved by means of an adapted photogrammetric technique. The time required for calibrating both cameras was less than a minute. To increase accuracy, a structured light projector was used. The captured stereo-pair images were processed with modified morphological filters to improve background contrast and minimize noise. The determination of conjugate points was achieved via maximum correlation values and region properties, thus decreasing the computational cost significantly. The 3D reconstruction algorithm has been assessed with objects of known volumes and applied to animal model tumors of less than 0.6 cm diameter. The achieved precision was at a very high level, with a standard deviation of 0.0313 mm. The robustness of our system is based on the overall approach and on the size of the targets.
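
    The conjugate-point search by maximum correlation mentioned above can be sketched as a normalized cross-correlation along the same row of a rectified pair; the window half-size and search range below are illustrative assumptions only:

        import numpy as np

        def match_along_row(left, right, row, col, half=7, search=40):
            """Find the column in `right`, on the same row of a rectified pair,
            whose window best matches (by normalized cross-correlation) the
            window around (row, col) in `left`. Searches non-negative
            disparities, i.e. columns to the left of `col`."""
            tpl = left[row - half:row + half + 1, col - half:col + half + 1].astype(float)
            tpl = (tpl - tpl.mean()) / (tpl.std() + 1e-9)
            best_col, best_score = col, -np.inf
            for c in range(max(half, col - search), min(right.shape[1] - half, col + 1)):
                win = right[row - half:row + half + 1, c - half:c + half + 1].astype(float)
                win = (win - win.mean()) / (win.std() + 1e-9)
                score = float((tpl * win).mean())
                if score > best_score:
                    best_score, best_col = score, c
            return best_col, best_score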

  12. Evaluating the performance of close-range 3D active vision systems for industrial design applications

    NASA Astrophysics Data System (ADS)

    Beraldin, J.-Angelo; Gaiani, Marco

    2004-12-01

    In recent years, three-dimensional (3D) active vision systems, or range cameras for short, have come out of research laboratories to find niche markets in application fields as diverse as industrial design, automotive manufacturing, geomatics, space exploration and cultural heritage, to name a few. Many publications address different issues linked to 3D sensing and processing, but currently these technologies pose a number of challenges to many recent users, i.e., "what are they, how good are they and how do they compare?". The need to understand, test and integrate those range cameras with other technologies, e.g. photogrammetry, CAD, etc., is driven by the quest for optimal resolution, accuracy, speed and cost. Before investing, users want to be certain that a given range camera satisfies their operational requirements. An understanding of the basic theory and best practices associated with those cameras is in fact fundamental to fulfilling the requirements listed above in an optimal way. This paper addresses the evaluation of active 3D range cameras as part of a study to better understand and select one or a number of them to fulfill the needs of industrial design applications. In particular, the effects of object material and surface features, calibration, and performance evaluation are discussed. Results are given for six different range cameras for close-range applications.

  13. Evaluating the performance of close-range 3D active vision systems for industrial design applications

    NASA Astrophysics Data System (ADS)

    Beraldin, J.-Angelo; Gaiani, Marco

    2005-01-01

    In recent years, three-dimensional (3D) active vision systems, or range cameras for short, have come out of research laboratories to find niche markets in application fields as diverse as industrial design, automotive manufacturing, geomatics, space exploration and cultural heritage, to name a few. Many publications address different issues linked to 3D sensing and processing, but currently these technologies pose a number of challenges to many recent users, i.e., "what are they, how good are they and how do they compare?". The need to understand, test and integrate those range cameras with other technologies, e.g. photogrammetry, CAD, etc., is driven by the quest for optimal resolution, accuracy, speed and cost. Before investing, users want to be certain that a given range camera satisfies their operational requirements. An understanding of the basic theory and best practices associated with those cameras is in fact fundamental to fulfilling the requirements listed above in an optimal way. This paper addresses the evaluation of active 3D range cameras as part of a study to better understand and select one or a number of them to fulfill the needs of industrial design applications. In particular, the effects of object material and surface features, calibration, and performance evaluation are discussed. Results are given for six different range cameras for close-range applications.

  14. Error analysis and system implementation for structured light stereo vision 3D geometric detection in large scale condition

    NASA Astrophysics Data System (ADS)

    Qi, Li; Zhang, Xuping; Wang, Jiaqi; Zhang, Yixin; Wang, Shun; Zhu, Fan

    2012-11-01

    Stereo-vision-based 3D metrology is an effective approach for 3D geometric detection of relatively large-scale objects. In this paper, we present a dedicated image capture system, which implements a CMOS sensor with an embedded LVDS interface and a CAN bus to ensure synchronous triggering and exposure. We performed an error analysis for structured-light vision measurement under large-scale conditions, based on which we built and tested the system prototype both indoors and in the field. The results show that the system is very suitable for large-scale metrology applications.

  15. Development of a stereo vision measurement system for a 3D three-axial pneumatic parallel mechanism robot arm.

    PubMed

    Chiang, Mao-Hsiung; Lin, Hao-Ting; Hou, Chien-Lun

    2011-01-01

    In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The stereo vision 3D position measurement system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector of the robot arm, the circle detection algorithm is used to detect the desired target and the SAD algorithm is used to track the moving target and to search the corresponding target location along the conjugate epipolar line in the stereo pair. After camera calibration, both intrinsic and extrinsic parameters of the stereo rig can be obtained, so images can be rectified according to the camera parameters. Thus, through the epipolar rectification, the stereo matching process is reduced to a horizontal search along the conjugate epipolar line. Finally, 3D trajectories of the end-effector are computed by stereo triangulation. The experimental results show that the stereo vision 3D position measurement system proposed in this paper can successfully track and measure the fifth-order polynomial trajectory and sinusoidal trajectory of the end-effector of the three-axial pneumatic parallel mechanism robot arm.
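
    After epipolar rectification, triangulating a matched point pair reduces to intersecting rays from the two projection matrices. A hedged OpenCV sketch (it assumes the rectified projection matrices P1 and P2 have already been obtained, e.g. from a prior stereo calibration/rectification step; variable names are illustrative):

        import numpy as np
        import cv2

        def triangulate(P1, P2, pt_left, pt_right):
            """Triangulate one matched pixel pair into a 3D point, given the
            3x4 rectified projection matrices of the left and right cameras."""
            xl = np.array(pt_left, dtype=float).reshape(2, 1)
            xr = np.array(pt_right, dtype=float).reshape(2, 1)
            X_h = cv2.triangulatePoints(P1, P2, xl, xr)   # 4x1 homogeneous
            return (X_h[:3] / X_h[3]).ravel()             # Euclidean 3D point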

  16. Stereo 3-D Vision in Teaching Physics

    ERIC Educational Resources Information Center

    Zabunov, Svetoslav

    2012-01-01

    Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The…

  17. Precision calibration method for binocular vision measurement systems based on arbitrary translations and 3D-connection information

    NASA Astrophysics Data System (ADS)

    Yang, Jinghao; Jia, Zhenyuan; Liu, Wei; Fan, Chaonan; Xu, Pengtao; Wang, Fuji; Liu, Yang

    2016-10-01

    Binocular vision systems play an important role in computer vision, and high-precision system calibration is a necessary and indispensable process. In this paper, an improved calibration method for binocular stereo vision measurement systems based on arbitrary translations and 3D-connection information is proposed. First, a new method for calibrating the intrinsic parameters of binocular vision system based on two translations with an arbitrary angle difference is presented, which reduces the effect of the deviation of the motion actuator on calibration accuracy. This method is simpler and more accurate than existing active-vision calibration methods and can provide a better initial value for the determination of extrinsic parameters. Second, a 3D-connection calibration and optimization method is developed that links the information of the calibration target in different positions, further improving the accuracy of the system calibration. Calibration experiments show that the calibration error can be reduced to 0.09%, outperforming traditional methods for the experiments of this study.
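
    The intrinsic-then-extrinsic split described above mirrors the usual two-stage stereo calibration workflow. A minimal OpenCV sketch of that generic workflow (not the authors' translation-based method; it assumes checkerboard object and image point lists have already been collected as float32 arrays):

        import cv2

        # obj_pts: list of (N, 3) board points; img_pts_l / img_pts_r: matching
        # (N, 2) detections in each camera; image_size = (width, height).
        def calibrate_stereo(obj_pts, img_pts_l, img_pts_r, image_size):
            # Per-camera intrinsics first (these play the role of the initial value).
            _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, img_pts_l, image_size, None, None)
            _, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, img_pts_r, image_size, None, None)
            # Then the extrinsics (R, T) between the two cameras.
            rms, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
                obj_pts, img_pts_l, img_pts_r, K1, d1, K2, d2, image_size,
                flags=cv2.CALIB_FIX_INTRINSIC)
            return rms, K1, d1, K2, d2, R, T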

  18. Simple and inexpensive stereo vision system for 3D data acquisition

    NASA Astrophysics Data System (ADS)

    Mermall, Samuel E.; Lindner, John F.

    2014-10-01

    We describe a simple stereo-vision system for tracking motion in three dimensions using a single ordinary camera. A simple mirror system divides the camera's field of view into left and right stereo pairs. We calibrate the system by tracking a point on a spinning wheel and demonstrate its use by tracking the corner of a flapping flag.

  19. Stereo 3-D Vision in Teaching Physics

    NASA Astrophysics Data System (ADS)

    Zabunov, Svetoslav

    2012-03-01

    Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The current paper describes the modern stereo 3-D technologies that are applicable to various tasks in teaching physics in schools, colleges, and universities. Examples of stereo 3-D simulations developed by the author can be observed online.

  20. The experiment study of image acquisition system based on 3D machine vision

    NASA Astrophysics Data System (ADS)

    Zhou, Haiying; Xiao, Zexin; Zhang, Xuefei; Wei, Zhe

    2011-11-01

    Binocular vision is one of the key technologies in three-dimensional scene reconstruction for 3D machine vision, since important three-dimensional information can be acquired with it. Two or more pictures are first captured by cameras, and the three-dimensional information contained in these pictures is then recovered through geometric and other relationships. In order to improve the measurement accuracy of the image acquisition system, an image acquisition system for binocular-vision three-dimensional scene reconstruction is studied in this article. Based on the parallax principle and human binocular imaging, an image acquisition scheme using a double optical path and dual CCDs is proposed. Experiments determine the best angle between the optical axes of the double optical path and its best operating distance; from these, the center distance between the two CCDs can be determined. Two images of the same scene from different viewpoints are then shot by the dual CCDs, which provides a good foundation for the subsequent three-dimensional reconstruction in later image processing. The experimental data demonstrate the rationality of this method.
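
    For a parallel-axis binocular rig, the parallax principle mentioned above reduces to the familiar depth-from-disparity relation Z = f·B/d. A small illustrative calculation (the numbers are assumptions, not from the paper):

        # Depth from disparity for a parallel-axis binocular rig: Z = f * B / d
        f_px = 1200.0     # focal length in pixels (assumed)
        baseline = 0.12   # distance between the two CCD centres in metres (assumed)
        disparity = 18.0  # measured disparity in pixels (assumed)
        Z = f_px * baseline / disparity
        print(f"estimated depth: {Z:.3f} m")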

  1. The study of dual camera 3D coordinate vision measurement system using a special probe

    NASA Astrophysics Data System (ADS)

    Liu, Shugui; Peng, Kai; Zhang, Xuefei; Zhang, Haifeng; Huang, Fengshan

    2006-11-01

    Due to its high precision and convenient operation, the vision coordinate measurement machine with a single probe has become a research focus in the vision industry. In general such a visual system can be set up conveniently with just one CCD camera and a probe. However, the price of the system becomes unacceptably high when top-performance hardware, such as the CCD camera and image capture card, has to be used in order to obtain high axis-oriented measurement precision. In this paper, a new dual-CCD-camera vision coordinate measurement system based on the redundancy principle is proposed to achieve high precision at a moderate price. The two CCD cameras are placed with their optical axes at an angle of about 90 degrees, so that each camera together with the probe forms a sub-system. With the help of the probe, the intrinsic and extrinsic parameters of the cameras are first calibrated, and the system is then set up using the redundancy technique. The axis-oriented error, which is large and always exists in a single-camera system, is eliminated between the two sub-systems, and high-precision measurement is obtained. Experimental results compared with those from a CMM show that the proposed system is excellent in stability and precision, with an uncertainty of about +/-0.1 mm in the x, y and z directions within a working distance of 2 m, using two common CCD cameras.

  2. A fast 3-D object recognition algorithm for the vision system of a special-purpose dexterous manipulator

    NASA Technical Reports Server (NTRS)

    Hung, Stephen H. Y.

    1989-01-01

    A fast 3-D object recognition algorithm that can be used as a quick-look subsystem to the vision system for the Special-Purpose Dexterous Manipulator (SPDM) is described. Global features that can be easily computed from range data are used to characterize the images of a viewer-centered model of an object. This algorithm will speed up the processing by eliminating the low level processing whenever possible. It may identify the object, reject a set of bad data in the early stage, or create a better environment for a more powerful algorithm to carry the work further.

  3. Development of a system based on 3D vision, interactive virtual environments, ergonometric signals and a humanoid for stroke rehabilitation.

    PubMed

    Ibarra Zannatha, Juan Manuel; Tamayo, Alejandro Justo Malo; Sánchez, Angel David Gómez; Delgado, Jorge Enrique Lavín; Cheu, Luis Eduardo Rodríguez; Arévalo, Wilson Alexander Sierra

    2013-11-01

    This paper presents a stroke rehabilitation (SR) system for the upper limbs, developed as an interactive virtual environment (IVE) based on a commercial 3D vision system (a Microsoft Kinect), a humanoid robot (an Aldebaran Nao), and devices producing ergonometric signals. In one environment, the rehabilitation routines, developed by specialists, are presented to the patient simultaneously by the humanoid and an avatar inside the IVE. The patient follows the rehabilitation task, while his avatar copies his gestures as captured by the Kinect 3D vision system. The information on the patient's movements, together with the signals obtained from the ergonometric measurement devices, is also used to supervise and evaluate the rehabilitation progress. The IVE can also present an RGB image of the patient. In another environment, which uses the same base elements, four game routines--Touch the balls 1 and 2, Simon says, and Follow the point--are used for rehabilitation. These environments are designed to create a positive influence in the rehabilitation process, reduce costs, and engage the patient. PMID:23827333

  4. Development of a system based on 3D vision, interactive virtual environments, ergonometric signals and a humanoid for stroke rehabilitation.

    PubMed

    Ibarra Zannatha, Juan Manuel; Tamayo, Alejandro Justo Malo; Sánchez, Angel David Gómez; Delgado, Jorge Enrique Lavín; Cheu, Luis Eduardo Rodríguez; Arévalo, Wilson Alexander Sierra

    2013-11-01

    This paper presents a stroke rehabilitation (SR) system for the upper limbs, developed as an interactive virtual environment (IVE) based on a commercial 3D vision system (a Microsoft Kinect), a humanoid robot (an Aldebaran Nao), and devices producing ergonometric signals. In one environment, the rehabilitation routines, developed by specialists, are presented to the patient simultaneously by the humanoid and an avatar inside the IVE. The patient follows the rehabilitation task, while his avatar copies his gestures as captured by the Kinect 3D vision system. The information on the patient's movements, together with the signals obtained from the ergonometric measurement devices, is also used to supervise and evaluate the rehabilitation progress. The IVE can also present an RGB image of the patient. In another environment, which uses the same base elements, four game routines--Touch the balls 1 and 2, Simon says, and Follow the point--are used for rehabilitation. These environments are designed to create a positive influence in the rehabilitation process, reduce costs, and engage the patient.

  5. Revitalizing the Space Shuttle's Thermal Protection System with Reverse Engineering and 3D Vision Technology

    NASA Technical Reports Server (NTRS)

    Wilson, Brad; Galatzer, Yishai

    2008-01-01

    The Space Shuttle is protected by a Thermal Protection System (TPS) made of tens of thousands of individually shaped heat protection tiles. With every flight, tiles are damaged on take-off and return to Earth. After each mission, the heat tiles must be fixed or replaced depending on the level of damage. As part of the return-to-flight mission, the TPS requirements are more stringent, leading to a significant increase in heat tile replacements. The replacement operation requires scanning tile cavities and, in some cases, the actual tiles. The 3D scan data is used to reverse engineer each tile into a precise CAD model, which, in turn, is exported to a CAM system for the manufacture of the heat protection tile. Scanning is performed while other activities are going on in the shuttle processing facility. Many technicians work simultaneously on the space shuttle structure, which results in structural movements and vibrations. This paper will cover a portable, ultra-fast data acquisition approach used to scan surfaces in this unstable environment.

  6. A Novel Identification Methodology for the Coordinate Relationship between a 3D Vision System and a Legged Robot.

    PubMed

    Chai, Xun; Gao, Feng; Pan, Yang; Qi, Chenkun; Xu, Yilin

    2015-04-22

    Coordinate identification between vision systems and robots is quite a challenging issue in the field of intelligent robotic applications, involving steps such as perceiving the immediate environment, building the terrain map and planning the locomotion automatically. It is now well established that current identification methods have non-negligible limitations, such as difficult feature matching, the requirement of external tools and the intervention of multiple people. In this paper, we propose a novel methodology to identify the geometric parameters of 3D vision systems mounted on robots without involving other people or additional equipment. In particular, our method focuses on legged robots, which have complex body structures and excellent locomotion ability compared to their wheeled/tracked counterparts. The parameters can be identified simply by moving the robot on relatively flat ground. Concretely, an estimation approach is provided to calculate the ground plane. In addition, the relationship between the robot and the ground is modeled. The parameters are obtained by formulating the identification problem as an optimization problem. The methodology is integrated on a legged robot called "Octopus", which can traverse rough terrains with high stability after obtaining the identification parameters of its mounted vision system using the proposed method. Diverse experiments in different environments demonstrate that our novel method is accurate and robust.
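
    The ground-plane estimation step mentioned above can be illustrated with a least-squares plane fit to terrain points expressed in the sensor frame. This is a generic sketch of such a fit, not the paper's identification procedure:

        import numpy as np

        def fit_plane(points):
            """Least-squares plane through an (N, 3) point set.
            Returns the unit normal n and offset d with n . x + d = 0."""
            centroid = points.mean(axis=0)
            # The normal is the singular vector of the centred points with the
            # smallest singular value.
            _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
            normal = vt[-1]
            d = -normal @ centroid
            return normal, d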

  7. A Novel Identification Methodology for the Coordinate Relationship between a 3D Vision System and a Legged Robot

    PubMed Central

    Chai, Xun; Gao, Feng; Pan, Yang; Qi, Chenkun; Xu, Yilin

    2015-01-01

    Coordinate identification between vision systems and robots is quite a challenging issue in the field of intelligent robotic applications, involving steps such as perceiving the immediate environment, building the terrain map and planning the locomotion automatically. It is now well established that current identification methods have non-negligible limitations, such as difficult feature matching, the requirement of external tools and the intervention of multiple people. In this paper, we propose a novel methodology to identify the geometric parameters of 3D vision systems mounted on robots without involving other people or additional equipment. In particular, our method focuses on legged robots, which have complex body structures and excellent locomotion ability compared to their wheeled/tracked counterparts. The parameters can be identified simply by moving the robot on relatively flat ground. Concretely, an estimation approach is provided to calculate the ground plane. In addition, the relationship between the robot and the ground is modeled. The parameters are obtained by formulating the identification problem as an optimization problem. The methodology is integrated on a legged robot called “Octopus”, which can traverse rough terrains with high stability after obtaining the identification parameters of its mounted vision system using the proposed method. Diverse experiments in different environments demonstrate that our novel method is accurate and robust. PMID:25912350

  8. Multi-Camera and Structured-Light Vision System (MSVS) for Dynamic High-Accuracy 3D Measurements of Railway Tunnels

    PubMed Central

    Zhan, Dong; Yu, Long; Xiao, Jian; Chen, Tanglong

    2015-01-01

    Railway tunnel 3D clearance inspection is critical to guaranteeing railway operation safety. However, it is a challenge to inspect railway tunnel 3D clearance using a vision system, because both the spatial range and field of view (FOV) of such measurements are quite large. This paper summarizes our work on dynamic railway tunnel 3D clearance inspection based on a multi-camera and structured-light vision system (MSVS). First, the configuration of the MSVS is described. Then, the global calibration for the MSVS is discussed in detail. The onboard vision system is mounted on a dedicated vehicle and is expected to suffer from multiple degrees of freedom vibrations brought about by the running vehicle. Any small vibration can result in substantial measurement errors. In order to overcome this problem, a vehicle motion deviation rectifying method is investigated. Experiments using the vision inspection system are conducted with satisfactory online measurement results. PMID:25875190

  9. Multi-camera and structured-light vision system (MSVS) for dynamic high-accuracy 3D measurements of railway tunnels.

    PubMed

    Zhan, Dong; Yu, Long; Xiao, Jian; Chen, Tanglong

    2015-04-14

    Railway tunnel 3D clearance inspection is critical to guaranteeing railway operation safety. However, it is a challenge to inspect railway tunnel 3D clearance using a vision system, because both the spatial range and field of view (FOV) of such measurements are quite large. This paper summarizes our work on dynamic railway tunnel 3D clearance inspection based on a multi-camera and structured-light vision system (MSVS). First, the configuration of the MSVS is described. Then, the global calibration for the MSVS is discussed in detail. The onboard vision system is mounted on a dedicated vehicle and is expected to suffer from multiple degrees of freedom vibrations brought about by the running vehicle. Any small vibration can result in substantial measurement errors. In order to overcome this problem, a vehicle motion deviation rectifying method is investigated. Experiments using the vision inspection system are conducted with satisfactory online measurement results.

  10. Multi-camera and structured-light vision system (MSVS) for dynamic high-accuracy 3D measurements of railway tunnels.

    PubMed

    Zhan, Dong; Yu, Long; Xiao, Jian; Chen, Tanglong

    2015-01-01

    Railway tunnel 3D clearance inspection is critical to guaranteeing railway operation safety. However, it is a challenge to inspect railway tunnel 3D clearance using a vision system, because both the spatial range and field of view (FOV) of such measurements are quite large. This paper summarizes our work on dynamic railway tunnel 3D clearance inspection based on a multi-camera and structured-light vision system (MSVS). First, the configuration of the MSVS is described. Then, the global calibration for the MSVS is discussed in detail. The onboard vision system is mounted on a dedicated vehicle and is expected to suffer from multiple degrees of freedom vibrations brought about by the running vehicle. Any small vibration can result in substantial measurement errors. In order to overcome this problem, a vehicle motion deviation rectifying method is investigated. Experiments using the vision inspection system are conducted with satisfactory online measurement results. PMID:25875190

  11. Use of 3D vision for fine robot motion

    NASA Technical Reports Server (NTRS)

    Lokshin, Anatole; Litwin, Todd

    1989-01-01

    An integration of 3-D vision systems with robot manipulators will allow robots to operate in a poorly structured environment by visually locating targets and obstacles. However, using computer vision for object acquisition makes the problem of overall system calibration even more difficult. Indeed, in CAD-based manipulation a control architecture has to find an accurate mapping between the 3-D Euclidean work space and the robot configuration space (joint angles). If stereo vision is involved, then one needs to map a pair of 2-D video images directly into the robot configuration space. Neural network approaches aside, a common solution to this problem is to calibrate the vision system and the manipulator independently, and then tie them via a common mapping into the task space. In other words, both vision and robot refer to some common absolute Euclidean coordinate frame via their individual mappings. This approach has two major difficulties. First, the vision system has to be calibrated over the total work space. Second, the absolute frame, which is usually quite arbitrary, has to be the same with a high degree of precision for both the robot and vision subsystem calibrations. The use of computer vision to allow robust fine-motion manipulation in a poorly structured world, work which is currently in progress, is described along with preliminary results and the problems encountered.

  12. Enhanced operator perception through 3D vision and haptic feedback

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Light, Kenneth; Bodenhamer, Andrew; Bosscher, Paul; Wilkinson, Loren

    2012-06-01

    Polaris Sensor Technologies (PST) has developed a stereo vision upgrade kit for TALON® robot systems comprised of a replacement gripper camera and a replacement mast zoom camera on the robot, and a replacement display in the Operator Control Unit (OCU). Harris Corporation has developed a haptic manipulation upgrade for TALON® robot systems comprised of a replacement arm and gripper and an OCU that provides haptic (force) feedback. PST and Harris have recently collaborated to integrate the 3D vision system with the haptic manipulation system. In multiple studies done at Fort Leonard Wood, Missouri it has been shown that 3D vision and haptics provide more intuitive perception of complicated scenery and improved robot arm control, allowing for improved mission performance and the potential for reduced time on target. This paper discusses the potential benefits of these enhancements to robotic systems used for the domestic homeland security mission.

  13. 3D vision assisted flexible robotic assembly of machine components

    NASA Astrophysics Data System (ADS)

    Ogun, Philips S.; Usman, Zahid; Dharmaraj, Karthick; Jackson, Michael R.

    2015-12-01

    Robotic assembly systems either make use of expensive fixtures to hold components in predefined locations, or the poses of the components are determined using various machine vision techniques. Vision-guided assembly robots can handle subtle variations in geometries and poses of parts. Therefore, they provide greater flexibility than the use of fixtures. However, the currently established vision-guided assembly systems use 2D vision, which is limited to three degrees of freedom. The work reported in this paper is focused on flexible automated assembly of clearance fit machine components using 3D vision. The recognition and the estimation of the poses of the components are achieved by matching their CAD models with the acquired point cloud data of the scene. Experimental results obtained from a robot demonstrating the assembly of a set of rings on a shaft show that the developed system is not only reliable and accurate, but also fast enough for industrial deployment.
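
    Pose estimation by matching CAD models against acquired point clouds is commonly done by sampling the CAD surface into points and refining the alignment with ICP. A hedged sketch with Open3D showing that generic approach (not necessarily the authors' pipeline; file paths are placeholders):

        import numpy as np
        import open3d as o3d

        # Sample the CAD model into a point cloud and refine its pose against
        # the acquired scene cloud with point-to-plane ICP.
        model_mesh = o3d.io.read_triangle_mesh("ring_cad_model.ply")   # placeholder path
        model = model_mesh.sample_points_uniformly(number_of_points=20000)
        scene = o3d.io.read_point_cloud("scene_scan.ply")              # placeholder path
        for cloud in (model, scene):
            cloud.estimate_normals()

        # Arguments: source, target, max correspondence distance, initial guess,
        # estimation method (point-to-plane).
        result = o3d.pipelines.registration.registration_icp(
            model, scene, 0.005, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPlane())
        print("estimated pose of the CAD model in the scene:\n", result.transformation)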

  14. Development of a 3D parallel mechanism robot arm with three vertical-axial pneumatic actuators combined with a stereo vision system.

    PubMed

    Chiang, Mao-Hsiung; Lin, Hao-Ting

    2011-01-01

    This study aimed to develop a novel 3D parallel mechanism robot driven by three vertical-axial pneumatic actuators with a stereo vision system for path tracking control. The mechanical system and the control system are the primary novel parts for developing a 3D parallel mechanism robot. In the mechanical system, the 3D parallel mechanism robot contains three serial chains, a fixed base, a movable platform and a pneumatic servo system. The parallel mechanism is designed and analyzed first for realizing a 3D motion in the X-Y-Z coordinate system of the robot's end-effector. The inverse kinematics and the forward kinematics of the parallel mechanism robot are investigated by using the Denavit-Hartenberg notation (D-H notation) coordinate system. The pneumatic actuators in the three vertical motion axes are modeled. In the control system, the Fourier series-based adaptive sliding-mode controller with H(∞) tracking performance is used to design the path tracking controllers of the three vertical servo pneumatic actuators for realizing 3D path tracking control of the end-effector. Three optical linear scales are used to measure the positions of the three pneumatic actuators. The 3D position of the end-effector is then calculated from the measured positions of the three pneumatic actuators by means of the kinematics. However, the calculated 3D position of the end-effector cannot account for the manufacturing and assembly tolerances of the joints and the parallel mechanism, so errors exist between the actual position and the calculated 3D position of the end-effector. In order to improve this situation, sensor collaboration is developed in this paper. A stereo vision system is used to collaborate with the three position sensors of the pneumatic actuators. The stereo vision system, combining two CCDs, serves to measure the actual 3D position of the end-effector and calibrate the error between the actual and the calculated 3D position of the end-effector. Furthermore, to
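
    Forward kinematics via D-H notation, as referenced above, reduces to chaining one homogeneous transform per link. A generic sketch (the parameter table in the usage line is a placeholder, not this robot's actual geometry):

        import numpy as np

        def dh_transform(theta, d, a, alpha):
            """Standard Denavit-Hartenberg homogeneous transform for one link."""
            ct, st = np.cos(theta), np.sin(theta)
            ca, sa = np.cos(alpha), np.sin(alpha)
            return np.array([
                [ct, -st * ca,  st * sa, a * ct],
                [st,  ct * ca, -ct * sa, a * st],
                [0.0,      sa,       ca,      d],
                [0.0,     0.0,      0.0,    1.0],
            ])

        def forward_kinematics(dh_table):
            """Chain the link transforms; returns the end-effector pose in the base frame."""
            T = np.eye(4)
            for theta, d, a, alpha in dh_table:
                T = T @ dh_transform(theta, d, a, alpha)
            return T

        # Placeholder D-H parameters (theta, d, a, alpha), for illustration only.
        pose = forward_kinematics([(0.1, 0.3, 0.0, np.pi / 2), (0.2, 0.0, 0.25, 0.0)])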

  15. Improving automated 3D reconstruction methods via vision metrology

    NASA Astrophysics Data System (ADS)

    Toschi, Isabella; Nocerino, Erica; Hess, Mona; Menna, Fabio; Sargeant, Ben; MacDonald, Lindsay; Remondino, Fabio; Robson, Stuart

    2015-05-01

    This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive 3D coordinates of few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.

  16. 2D/3D Synthetic Vision Navigation Display

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, J. J., III; Bailey, Randall E.; Sweeters, jason L.

    2008-01-01

    Flight-deck display software was designed and developed at NASA Langley Research Center to provide two-dimensional (2D) and three-dimensional (3D) terrain, obstacle, and flight-path perspectives on a single navigation display. The objective was to optimize the presentation of synthetic vision (SV) system technology that permits pilots to view multiple perspectives of flight-deck display symbology and 3D terrain information. Research was conducted to evaluate the efficacy of the concept. The concept has numerous unique implementation features that would permit enhanced operational concepts and efficiencies in both current and future aircraft.

  17. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.

  18. The New Realm of 3-D Vision

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Dimension Technologies Inc. developed a line of 2-D/3-D Liquid Crystal Display (LCD) screens, including a 15-inch model priced at consumer levels. DTI's family of flat panel LCD displays, called the Virtual Window(TM), provides real-time 3-D images without the use of glasses, head trackers, helmets, or other viewing aids. Most of the company's initial 3D display research was funded through NASA's Small Business Innovation Research (SBIR) program. The images on DTI's displays appear to leap off the screen and hang in space. The display accepts input from computers or stereo video sources, and can be switched from 3-D to full-resolution 2-D viewing with the push of a button. The Virtual Window displays have applications in data visualization, medicine, architecture, business, real estate, entertainment, and other research, design, military, and consumer applications. Displays are currently used for computer games, protein analysis, and surgical imaging. The technology greatly benefits the medical field, as surgical simulators are helping to increase the skills of surgical residents. Virtual Window(TM) is a trademark of Dimension Technologies Inc.

  19. Evaluation of vision training using 3D play game

    NASA Astrophysics Data System (ADS)

    Kim, Jung-Ho; Kwon, Soon-Chul; Son, Kwang-Chul; Lee, Seung-Hyun

    2015-03-01

    The present study aimed to examine the effect of vision training, a potential benefit of watching 3D video images (a 3D video shooting game in this study), focusing on accommodative facility and vergence facility. Both facilities, which are scales used to measure human visual performance, are very important factors for leading a comfortable and easy life. The study was conducted on 30 participants in their 20s and 30s (19 males and 11 females, aged 24.53 ± 2.94 years) who were able to watch 3D video images and play the 3D game. Their accommodative and vergence facility were measured before and after they played the 2D and 3D games. Their accommodative facility improved after they played both the 2D and 3D games, and improved more right after the 3D game than after the 2D game. Likewise, their vergence facility improved after they played both games, and improved more soon after the 3D game than after the 2D game. In addition, their accommodative facility improved to a greater extent than their vergence facility. While studies have so far been conducted on the adverse effects of 3D contents, from the perspective of human factors, on the imbalance between visual accommodation and convergence, the present study is expected to broaden the applicable scope of 3D contents by utilizing this visual benefit of 3D contents for vision training.

  20. 3-D rigid body tracking using vision and depth sensors.

    PubMed

    Gedik, O Serdar; Alatan, A Aydın

    2013-10-01

    In robotics and augmented reality applications, model-based 3-D tracking of rigid objects is generally required, and accurate pose estimates are needed to increase reliability and decrease overall jitter. Among the many pose estimation solutions in the literature, pure vision-based 3-D trackers require either manual initialization or offline training stages. On the other hand, trackers relying on pure depth sensors are not suitable for AR applications. An automated 3-D tracking algorithm, based on the fusion of vision and depth sensors via an extended Kalman filter, is proposed in this paper. A novel measurement-tracking scheme, based on estimation of optical flow using the intensity and shape index map data of the 3-D point cloud, increases 2-D, as well as 3-D, tracking performance significantly. The proposed method requires neither manual initialization of the pose nor offline training, while enabling highly accurate 3-D tracking. The accuracy of the proposed method is tested against a number of conventional techniques, and a superior performance is clearly observed, both objectively via error metrics and subjectively for the rendered scenes. PMID:23955795

  1. 3-D rigid body tracking using vision and depth sensors.

    PubMed

    Gedik, O Serdar; Alatan, A Aydın

    2013-10-01

    In robotics and augmented reality applications, model-based 3-D tracking of rigid objects is generally required, and accurate pose estimates are needed to increase reliability and decrease overall jitter. Among the many pose estimation solutions in the literature, pure vision-based 3-D trackers require either manual initialization or offline training stages. On the other hand, trackers relying on pure depth sensors are not suitable for AR applications. An automated 3-D tracking algorithm, based on the fusion of vision and depth sensors via an extended Kalman filter, is proposed in this paper. A novel measurement-tracking scheme, based on estimation of optical flow using the intensity and shape index map data of the 3-D point cloud, increases 2-D, as well as 3-D, tracking performance significantly. The proposed method requires neither manual initialization of the pose nor offline training, while enabling highly accurate 3-D tracking. The accuracy of the proposed method is tested against a number of conventional techniques, and a superior performance is clearly observed, both objectively via error metrics and subjectively for the rendered scenes.
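
    Sensor fusion via an extended Kalman filter, as used above, amounts to a predict step from a motion model followed by an update from the vision/depth measurements. A minimal generic EKF predict/update sketch (the linearised model matrices F, H and the measurement function h are assumed inputs; this is not the paper's exact state vector or measurement model):

        import numpy as np

        def ekf_predict(x, P, F, Q):
            """Propagate state and covariance with a (linearised) motion model F."""
            return F @ x, F @ P @ F.T + Q

        def ekf_update(x, P, z, h, H, R):
            """Correct the prediction with measurement z; h(x) is the measurement
            model and H its Jacobian evaluated at x."""
            y = z - h(x)                       # innovation
            S = H @ P @ H.T + R                # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
            x_new = x + K @ y
            P_new = (np.eye(len(x)) - K @ H) @ P
            return x_new, P_new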

  2. 3D Vision on Mars: Stereo processing and visualizations for NASA and ESA rover missions

    NASA Astrophysics Data System (ADS)

    Huber, Ben

    2016-07-01

    Three dimensional (3D) vision processing is an essential component of planetary rover mission planning and scientific data analysis. Standard ground vision processing products are digital terrain maps, panoramas, and virtual views of the environment. Such processing is currently being developed for the PanCam instrument of ESA's ExoMars Rover mission by the PanCam 3D Vision Team under JOANNEUM RESEARCH coordination. Camera calibration, quality estimation of the expected results, and the interfaces to other mission elements such as operations planning, the rover navigation system and global Mars mapping are a specific focus of the current work. The main goals of the 3D Vision team in this context are: instrument design support and calibration processing; development of 3D vision functionality; visualization (development of a 3D visualization tool for scientific data analysis); 3D reconstructions from stereo image data during the mission; and support for 3D scientific exploitation to characterize the overall landscape geomorphology, processes, and the nature of the geologic record using the reconstructed 3D models. The developed processing framework PRoViP establishes an extensible framework for 3D vision processing in planetary robotic missions. Examples of processing products and capabilities are: Digital Terrain Models, ortho images, 3D meshes, and occlusion, solar illumination, slope, roughness, and hazard maps. Another important processing capability is the fusion of rover- and orbiter-based images, with support for multiple missions and sensors (e.g. MSL Mastcam stereo processing). For 3D visualization a tool called PRo3D has been developed to analyze and directly interpret digital outcrop models. Stereo image products derived from Mars rover data can be rendered in PRo3D, enabling the user to zoom, rotate and translate the generated 3D outcrop models. Interpretations can be digitized directly onto the 3D surface, and simple measurements of the outcrop and sedimentary features

  3. Unexpected Regularity in Swimming Behavior of Clausocalanus furcatus Revealed by a Telecentric 3D Computer Vision System

    PubMed Central

    Bianco, Giuseppe; Botte, Vincenzo; Dubroca, Laurent; Ribera d’Alcalà, Maurizio; Mazzocchi, Maria Grazia

    2013-01-01

    Planktonic copepods display a large repertoire of motion behaviors in a three-dimensional environment. Two-dimensional video observations demonstrated that the small copepod Clausocalanus furcatus, one of the most widely distributed calanoids at low to medium latitudes, presented a unique swimming behavior that was continuous and fast and followed notably convoluted trajectories. Furthermore, previous observations indicated that the motion of C. furcatus resembled a random process. We characterized the swimming behavior of this species in three-dimensional space using a video system equipped with telecentric lenses, which allow tracking of zooplankton without the distortion errors inherent in common lenses. Our observations revealed unexpected regularities in the behavior of C. furcatus that appear primarily in the horizontal plane and could not have been identified in previous observations based on lateral views. Our results indicate that the swimming behavior of C. furcatus is based on a limited repertoire of basic kinematic modules but exhibits greater plasticity than previously thought. PMID:23826331
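
    Regularities of this kind are typically quantified directly from the reconstructed 3D tracks, for example via the distribution of turning angles between successive displacement vectors. A generic sketch of that computation (an illustration, not the authors' analysis):

        import numpy as np

        def turning_angles(track):
            """Turning angles (radians) between successive displacement vectors
            of an (N, 3) array of 3D positions."""
            v = np.diff(track, axis=0)
            v = v / (np.linalg.norm(v, axis=1, keepdims=True) + 1e-12)
            cosang = np.clip(np.sum(v[:-1] * v[1:], axis=1), -1.0, 1.0)
            return np.arccos(cosang)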

  4. 3D optical measuring technologies and systems

    NASA Astrophysics Data System (ADS)

    Chugui, Yuri V.

    2005-02-01

    The results of the R&D activity of TDI SIE SB RAS in the field of 3D optical measuring technologies and systems for noncontact 3D optical dimensional inspection, applied to atomic and railway industry safety problems, are presented. This activity includes investigations of diffraction phenomena on some 3D objects, using an original constructive calculation method. Efficient algorithms for precisely determining the transverse and longitudinal sizes of 3D objects of constant thickness by the diffraction method were suggested, together with peculiarities of the formation of shadows and images of typical elements of extended objects. Ensuring the safety of nuclear reactors and running trains, as well as their high operational reliability, requires 100% noncontact precise inspection of the geometrical parameters of their components. To solve this problem we have developed methods and produced the technical vision measuring systems LMM, CONTROL and PROFIL, and technologies for noncontact 3D dimensional inspection of grid spacers and fuel elements for the nuclear reactors VVER-1000 and VVER-440, as well as the automatic laser diagnostic COMPLEX for noncontact inspection of the geometric parameters of running freight car wheel pairs. The performances of these systems and the results of industrial testing are presented and discussed. The created devices are in pilot operation at atomic and railway companies.

  5. Indirectly online 3D position measurement based on machine vision using auxiliary gauge

    NASA Astrophysics Data System (ADS)

    Wu, Qinghua; He, Tao

    2008-12-01

    Accurate and rapid 3D position measurement is required in many industrial applications. Traditional 3D position measurement is usually performed in laboratories using a coordinate measuring machine (CMM). A CMM can achieve high accuracy, but its efficiency is low. Machine vision is a newer technology for position measurement; measurement based on machine vision offers non-contact operation, high speed, high accuracy and other prominent advantages. However, because depth information is lost during image formation, the synthesizing operation becomes more complicated, and direct 3D position measurement based on machine vision has rarely been used in online industrial applications. In this paper, an indirect online 3D position measurement system is discussed. The system consists of an auxiliary gauge, a machine vision system and a computer. Through the auxiliary gauge, the 3D position measurement is transformed into a 2D measurement. Thus, by making full use of existing 2D image processing theory and methods, the accuracy and speed of 3D position measurement can be improved effectively.
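
    As a minimal illustration of reducing the 3D measurement to a 2D one, the sketch below maps an image point of a feature lying on the gauge plane to 3D world coordinates. It assumes a pre-calibrated image-to-gauge homography H and a known gauge pose (R, t); these names and the whole routine are hypothetical, not the paper's implementation.

```python
import numpy as np

def gauge_point_to_3d(u, v, H, R, t):
    """Map an image pixel (u, v) of a feature on the gauge plane to 3D.

    H    : 3x3 homography from image pixels to 2D gauge-plane coordinates (mm),
           obtained beforehand from a 2D calibration (assumed known).
    R, t : rotation (3x3) and translation (3,) of the gauge plane in the
           world frame, known from the gauge design and setup.
    """
    p = H @ np.array([u, v, 1.0])
    x, y = p[:2] / p[2]                    # 2D position on the gauge plane
    return R @ np.array([x, y, 0.0]) + t   # lift the planar point into 3D

# Example with made-up numbers: identity pose, simple pixel-to-mm scaling
H = np.diag([0.1, 0.1, 1.0])
print(gauge_point_to_3d(320, 240, H, np.eye(3), np.zeros(3)))
```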

  6. Electrotactile vision substitution for 3D trajectory following.

    PubMed

    Chekhchoukh, A; Goumidi, M; Vuillerme, N; Payan, Y; Glade, N

    2013-01-01

    Navigation for blind persons represents a challenge for researchers in vision substitution. In this field, one of the techniques used to navigate is guidance. In this study, we develop a new approach for 3D trajectory following in which the requested task is to track a light path using computer input devices (keyboard and mouse) or a rigid body handled in front of a stereoscopic camera. The light path is visualized either by direct vision or by way of an electro-stimulation device, the Tongue Display Unit, a 12 × 12 matrix of electrodes. We assess our method through a series of experiments in which the effect of the modality of perception and that of the input device are examined. Preliminary results indicated a close correlation between the stimulated and recorded trajectories.

  7. Obstacle avoidance using predictive vision based on a dynamic 3D world model

    NASA Astrophysics Data System (ADS)

    Benjamin, D. Paul; Lyons, Damian; Achtemichuk, Tom

    2006-10-01

    We have designed and implemented a fast predictive vision system for a mobile robot based on the principles of active vision. This vision system is part of a larger project to design a comprehensive cognitive architecture for mobile robotics. The vision system represents the robot's environment with a dynamic 3D world model based on a 3D gaming platform (Ogre3D). This world model contains a virtual copy of the robot and its environment, and outputs graphics showing what the virtual robot "sees" in the virtual world; this is what the real robot expects to see in the real world. The vision system compares this output in real time with the visual data. Any large discrepancies are flagged and sent to the robot's cognitive system, which constructs a plan for focusing on the discrepancies and resolving them, e.g. by updating the position of an object or by recognizing a new object. An object is recognized only once; thereafter its observed data are monitored for consistency with the predictions, greatly reducing the cost of scene understanding. We describe the implementation of this vision system and how the robot uses it to locate and avoid obstacles.
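
    A highly simplified sketch of this predict-and-compare step is given below. It assumes the rendered expectation and the camera frame are already registered, grayscale and the same size; the threshold and minimum blob area are arbitrary assumptions, and the routine only illustrates the idea rather than the system's actual implementation.

```python
import numpy as np
import cv2

def find_discrepancies(expected_gray, observed_gray, diff_thresh=40, min_area=200):
    """Flag regions where the observed camera frame departs from the
    rendered expectation produced by the 3D world model."""
    diff = cv2.absdiff(expected_gray, observed_gray)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    # Suppress small, isolated differences before extracting regions
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Bounding boxes of large discrepancies, to be resolved by the cognitive layer
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```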

  8. 3-D measuring of engine camshaft based on machine vision

    NASA Astrophysics Data System (ADS)

    Qiu, Jianxin; Tan, Liang; Xu, Xiaodong

    2008-12-01

    Non-contact 3D measurement based on machine vision is introduced into precise camshaft measurement. Because CCD-based 3-dimensional measurement currently cannot meet the precision requirements of camshaft measurement, its measuring precision needs to be improved. In this paper, we put forward a method to improve the measurement: a Multi-Character Match method based on a polygonal non-regular model, built on the theory of corner extraction and corner matching. This method solves the problems of matching difficulty and low precision that arise when the Coded Marked Point method and the Self-Character Match method are used in the measuring process. A 3D measurement experiment on a camshaft, based on the Multi-Character Match method with the polygonal non-regular model, shows that the average measuring precision of the point-cloud merge is improved to better than 0.04 mm. This measuring method can effectively increase the 3D measuring precision of binocular CCD measurement.

  9. Efficient data association for robot 3D vision-SLAM

    NASA Astrophysics Data System (ADS)

    Wang, Xiao-hua; Zhu, Dai-xian

    2010-08-01

    A new approach to vision-based simultaneous localization and mapping (SLAM) is proposed. Scale-invariant feature transform (SIFT) features are used as landmarks, and a minimal connected dominating set (CDS) approach is used for data association, which addresses the problem that the scale of data association grows as the map grows during SLAM. SLAM is completed by fusing binocular vision information and the robot pose with an Extended Kalman Filter (EKF). The system has been implemented and tested on data gathered with a mobile robot in a typical office environment. The experiments presented demonstrate that the proposed method improves data association and in this way leads to more accurate maps.
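
    The landmark association step could look roughly like the sketch below, which matches SIFT descriptors from the current frame against a stored landmark descriptor set using OpenCV. This is a generic ratio-test matcher for illustration, not the paper's CDS-based scheme; the ratio value is an assumption.

```python
import cv2

def associate_landmarks(frame_gray, landmark_descriptors, ratio=0.75):
    """Match SIFT features in the current frame against stored landmark
    descriptors; returns (frame_keypoint_index, landmark_index) pairs."""
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(frame_gray, None)
    if descriptors is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(descriptors, landmark_descriptors, k=2)
    # Lowe's ratio test rejects ambiguous associations
    return [(m.queryIdx, m.trainIdx)
            for m, n in matches if m.distance < ratio * n.distance]
```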

  10. Fast vision-based catheter 3D reconstruction.

    PubMed

    Moradi Dalvand, Mohsen; Nahavandi, Saeid; Howe, Robert D

    2016-07-21

    Continuum robots offer better maneuverability and inherent compliance and are well-suited for surgical applications as catheters, where gentle interaction with the environment is desired. However, sensing their shape and tip position is a challenge, as traditional sensors cannot be employed in the way they are in rigid robotic manipulators. In this paper, a high-speed vision-based shape sensing algorithm for real-time 3D reconstruction of continuum robots based on the views of two arbitrarily positioned cameras is presented. The algorithm is based on the closed-form analytical solution of the reconstruction of quadratic curves in 3D space from two arbitrary perspective projections. High-speed image processing algorithms are developed for the segmentation and feature extraction from the images. The proposed algorithms are experimentally validated for accuracy by measuring the tip position, length, and bending and orientation angles for known circular and elliptical catheter-shaped tubes. Sensitivity analysis is also carried out to evaluate the robustness of the algorithm. Experimental results demonstrate good accuracy (maximum errors of ±0.6 mm and ±0.5 deg), performance (200 Hz), and robustness (maximum absolute error of 1.74 mm and 3.64 deg for the added noise) of the proposed high-speed algorithms. PMID:27352011
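
    The underlying two-view geometry, recovering a 3D point from its projections in two arbitrarily positioned calibrated cameras, can be sketched with standard linear (DLT) triangulation as below. This is a generic illustration assuming known 3x4 projection matrices P1 and P2; it is not the paper's closed-form quadratic-curve solution.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two calibrated views.

    P1, P2 : 3x4 camera projection matrices (from calibration, assumed known).
    x1, x2 : (u, v) pixel coordinates of the same point in each image.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)       # homogeneous least squares
    X = Vt[-1]
    return X[:3] / X[3]               # dehomogenize to (x, y, z)
```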

  11. Fast vision-based catheter 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Moradi Dalvand, Mohsen; Nahavandi, Saeid; Howe, Robert D.

    2016-07-01

    Continuum robots offer better maneuverability and inherent compliance and are well-suited for surgical applications as catheters, where gentle interaction with the environment is desired. However, sensing their shape and tip position is a challenge, as traditional sensors cannot be employed in the way they are in rigid robotic manipulators. In this paper, a high-speed vision-based shape sensing algorithm for real-time 3D reconstruction of continuum robots based on the views of two arbitrarily positioned cameras is presented. The algorithm is based on the closed-form analytical solution of the reconstruction of quadratic curves in 3D space from two arbitrary perspective projections. High-speed image processing algorithms are developed for the segmentation and feature extraction from the images. The proposed algorithms are experimentally validated for accuracy by measuring the tip position, length, and bending and orientation angles for known circular and elliptical catheter-shaped tubes. Sensitivity analysis is also carried out to evaluate the robustness of the algorithm. Experimental results demonstrate good accuracy (maximum errors of ±0.6 mm and ±0.5 deg), performance (200 Hz), and robustness (maximum absolute error of 1.74 mm and 3.64 deg for the added noise) of the proposed high-speed algorithms.

  12. 3D World Building System

    ScienceCinema

    None

    2016-07-12

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  13. 3D World Building System

    SciTech Connect

    2013-10-30

    This video provides an overview of the Sandia National Laboratories developed 3-D World Model Building capability that provides users with an immersive, texture rich 3-D model of their environment in minutes using a laptop and color and depth camera.

  14. Sketch on dynamic gesture tracking and analysis exploiting vision-based 3D interface

    NASA Astrophysics Data System (ADS)

    Woo, Woontack; Kim, Namgyu; Wong, Karen; Tadenuma, Makoto

    2000-12-01

    In this paper, we propose a vision-based 3D interface exploiting invisible 3D boxes, arranged in the personal space (i.e. the space reachable by the body without traveling), which allows robust yet simple dynamic gesture tracking and analysis, without exploiting complicated sensor-based motion tracking systems. Vision-based gesture tracking and analysis is still a challenging problem, even though we have witnessed rapid advances in computer vision over the last few decades. The proposed framework consists of three main parts, i.e. (1) object segmentation without bluescreen and 3D box initialization with depth information, (2) movement tracking by observing how the body passes through the 3D boxes in the personal space and (3) movement feature extraction based on Laban's Effort theory and movement analysis by mapping features to meaningful symbols using time-delay neural networks. Obviously, exploiting depth information using multiview images improves the performance of gesture analysis by reducing the errors introduced by simple 2D interfaces. In addition, the proposed box-based 3D interface lessens the difficulties both in tracking movement in 3D space and in extracting low-level features of the movement. Furthermore, the time-delay neural networks lessen the difficulties in movement analysis through training. Due to its simplicity and robustness, the framework will provide interactive systems, such as the ATR I-cubed Tangible Music System or the ATR Interactive Dance system, with improved quality of the 3D interface. The proposed simple framework can also be extended to other applications requiring dynamic gesture tracking and analysis on the fly.
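
    A minimal sketch of the box-based tracking idea follows: it records which invisible 3D boxes a tracked body point activates over time, producing a symbol sequence for later analysis. The box layout, sizes and point format are all hypothetical.

```python
import numpy as np

# Axis-aligned invisible 3D boxes in the personal space: (min_corner, max_corner), metres
BOXES = [
    (np.array([-0.3, 0.8, 0.2]), np.array([0.0, 1.2, 0.5])),   # left-front box
    (np.array([ 0.0, 0.8, 0.2]), np.array([0.3, 1.2, 0.5])),   # right-front box
]

def boxes_hit(point):
    """Indices of the boxes containing a tracked 3D body point."""
    return [i for i, (lo, hi) in enumerate(BOXES)
            if np.all(point >= lo) and np.all(point <= hi)]

def track_sequence(points):
    """Turn a trajectory of 3D points into a sequence of box activations."""
    return [boxes_hit(p) for p in points]

# Example: a hand moving from the left-front box to the right-front box
trajectory = [np.array([-0.1, 1.0, 0.3]), np.array([0.1, 1.0, 0.3])]
print(track_sequence(trajectory))   # [[0], [1]]
```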

  15. Tool 3D geometry measurement system

    NASA Astrophysics Data System (ADS)

    Zhao, Huijie; Ni, Jun; Sun, Yi; Lin, Xuewen

    2001-10-01

    A new non-contact 3D tool geometry measurement system based on machine vision is described. In this system, analytical and optimization methods are used, respectively, to achieve system calibration, which determines the rotation center of the drill. A data merging method is studied in detail; it translates the scattered groups of raw data from sensor coordinates into drill coordinates and yields the 3-D topography of the drill body. Corresponding data processing methods for drill geometry are also studied. Statistical methods are used to remove outliers. A Laplacian of Gaussian operator is used to detect the boundary of the drill cross-section and the drill tip profile. The arithmetic method for calculating the parameters is introduced. Initial measurement results are presented, including the cross-section profile and the drill tip geometry; pictures of wear on the drill tip are given, and parameters extracted from the cross-section are listed. Compared with measurement results obtained using a CMM, the differences between this drill geometry measurement system and the CMM are: drill radius, 0.020 mm; helix angle, 1.31°; web thickness, 0.034 mm.
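
    For reference, boundary detection with a Laplacian of Gaussian operator can be sketched as below; the smoothing sigma and response threshold are arbitrary assumptions, and this is only a generic illustration of the operator, not the system's exact processing chain.

```python
import cv2
import numpy as np

def log_edges(gray, sigma=2.0, thresh=4.0):
    """Detect boundaries (e.g. a drill cross-section outline) with a
    Laplacian of Gaussian: smooth the image, apply the Laplacian, and
    keep pixels with a strong response."""
    blurred = cv2.GaussianBlur(gray.astype(np.float32), (0, 0), sigma)
    log = cv2.Laplacian(blurred, cv2.CV_32F, ksize=3)
    return (np.abs(log) > thresh).astype(np.uint8) * 255
```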

  16. Random-profiles-based 3D face recognition system.

    PubMed

    Kim, Joongrock; Yu, Sunjin; Lee, Sangyoun

    2014-01-01

    In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition highly depends on the precision of the acquired 3D face data, while also requiring more computational power and storage capacity than 2D face recognition systems. In this paper, we present a developed nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consist of more than 50 k 3D points and that a reliable recognition rate is achieved under pose variation.

  17. Random-Profiles-Based 3D Face Recognition System

    PubMed Central

    Joongrock, Kim; Sunjin, Yu; Sangyoun, Lee

    2014-01-01

    In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition highly depends on the precision of the acquired 3D face data, while also requiring more computational power and storage capacity than 2D face recognition systems. In this paper, we present a developed nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consist of more than 50 k 3D points and that a reliable recognition rate is achieved under pose variation. PMID:24691101

  18. Magmatic Systems in 3-D

    NASA Astrophysics Data System (ADS)

    Kent, G. M.; Harding, A. J.; Babcock, J. M.; Orcutt, J. A.; Bazin, S.; Singh, S.; Detrick, R. S.; Canales, J. P.; Carbotte, S. M.; Diebold, J.

    2002-12-01

    Multichannel seismic (MCS) images of crustal magma chambers are ideal targets for advanced visualization techniques. In the mid-ocean ridge environment, reflections originating at the melt-lens are well separated from other reflection boundaries, such as the seafloor, layer 2A and Moho, which enables the effective use of transparency filters. 3-D visualization of seismic reflectivity falls into two broad categories: volume and surface rendering. Volumetric-based visualization is an extremely powerful approach for the rapid exploration of very dense 3-D datasets. These 3-D datasets are divided into volume elements or voxels, which are individually color coded depending on the assigned datum value; the user can define an opacity filter to reject plotting certain voxels. This transparency allows the user to peer into the data volume, enabling an easy identification of patterns or relationships that might have geologic merit. Multiple image volumes can be co-registered to look at correlations between two different data types (e.g., amplitude variation with offsets studies), in a manner analogous to draping attributes onto a surface. In contrast, surface visualization of seismic reflectivity usually involves producing "fence" diagrams of 2-D seismic profiles that are complemented with seafloor topography, along with point class data, draped lines and vectors (e.g. fault scarps, earthquake locations and plate-motions). The overlying seafloor can be made partially transparent or see-through, enabling 3-D correlations between seafloor structure and seismic reflectivity. Exploration of 3-D datasets requires additional thought when constructing and manipulating these complex objects. As numbers of visual objects grow in a particular scene, there is a tendency to mask overlapping objects; this clutter can be managed through the effective use of total or partial transparency (i.e., alpha-channel). In this way, the co-variation between different datasets can be investigated
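
    As a toy illustration of the voxel opacity idea described above, the sketch below builds a per-voxel RGBA volume in which low-amplitude voxels are made fully transparent, so stronger reflectivity (e.g. a melt-lens event) can be seen through the volume. The threshold, colour ramp and random test volume are all hypothetical.

```python
import numpy as np

def opacity_volume(amplitude, reject_below=0.2):
    """Map a 3-D seismic amplitude volume to RGBA voxels, making weak
    voxels fully transparent so the viewer can peer into the data."""
    norm = (amplitude - amplitude.min()) / (np.ptp(amplitude) + 1e-9)
    alpha = np.where(norm < reject_below, 0.0, norm)   # opacity filter
    rgba = np.zeros(amplitude.shape + (4,), dtype=np.float32)
    rgba[..., 0] = norm          # simple red colour ramp for amplitude
    rgba[..., 3] = alpha         # transparency channel
    return rgba

volume = np.random.rand(64, 64, 64)    # stand-in for a reflectivity volume
print(opacity_volume(volume).shape)    # (64, 64, 64, 4)
```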

  19. Development and Evaluation of 2-D and 3-D Exocentric Synthetic Vision Navigation Display Concepts for Commercial Aircraft

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, J. J., III; Bailey, Randall E.; Sweeters, Jason L.

    2005-01-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that will help to eliminate low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. The paper describes experimental evaluation of a multi-mode 3-D exocentric synthetic vision navigation display concept for commercial aircraft. Experimental results evinced the situation awareness benefits of 2-D and 3-D exocentric synthetic vision displays over traditional 2-D co-planar navigation and vertical situation displays. Conclusions and future research directions are discussed.

  20. Development and evaluation of 2D and 3D exocentric synthetic vision navigation display concepts for commercial aircraft

    NASA Astrophysics Data System (ADS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, Jarvis J., III; Bailey, Randall E.; Sweeters, Jason L.

    2005-05-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications that will help to eliminate low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. The paper describes experimental evaluation of a multi-mode 3-D exocentric synthetic vision navigation display concept for commercial aircraft. Experimental results evinced the situation awareness benefits of 2-D and 3-D exocentric synthetic vision displays over traditional 2-D co-planar navigation and vertical situation displays. Conclusions and future research directions are discussed.

  1. Robust object tracking techniques for vision-based 3D motion analysis applications

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.

    2016-04-01

    Automated and accurate spatial motion capture of an object is necessary for a wide variety of applications, including industry and science, virtual reality and movies, medicine and sports. For most applications, the reliability and accuracy of the obtained data, as well as convenience for the user, are the main characteristics defining the quality of a motion capture system. Among the existing systems for 3D data acquisition, based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems have a set of advantages such as high acquisition speed, potential for high accuracy, and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capturing process. To provide high accuracy of the obtained spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes from 2 to 4 technical vision cameras for capturing video sequences of object motion. Original camera calibration and external orientation procedures provide the basis for high accuracy of the 3D measurements. A set of algorithms, both for detecting, identifying and tracking similar targets and for marker-less object motion capture, has been developed and tested. The evaluation results show high robustness and high reliability for various motion analysis tasks in technical and biomechanical applications.

  2. Low-cost 3D rangefinder system

    NASA Astrophysics Data System (ADS)

    Chen, Bor-Tow; Lou, Wen-Shiou; Chen, Chia-Chen; Lin, Hsien-Chang

    1998-06-01

    Nowadays, 3D data are widely used in computers, and 3D browsers manipulate 3D models in virtual worlds. Yet, until now, a 3D digitizer has remained a high-cost product rather than familiar equipment. To meet the demands of the growing 3D world, this paper proposes the concept of a low-cost 3D digitizer system for capturing 3D range data from objects. The specific optical design of the 3D extraction unit effectively reduces the size of the system, and the processing software is compatible with a PC, which promotes portability. Both features yield a low-cost system in a PC environment, in contrast to large systems bundled with expensive workstation platforms. In the 3D extraction structure, a laser beam and a CCD camera are adopted to construct a 3D sensor. Instead of two CCD cameras capturing the laser lines, as used previously, a 2-in-1 arrangement is proposed that merges two images in one CCD while still retaining the information of the two fields of view to suppress occlusion problems. Besides, the optical paths of the two camera views are folded by mirrors so that the volume of the system can be reduced with only one rotary axis, making a portable system more practical. Combined with processing software that runs under the PC Windows system, the proposed system saves not only hardware cost but also software processing time. The system achieves 0.05 mm accuracy, showing that a low-cost system can also deliver high performance.
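
    A minimal sketch of the laser-triangulation geometry underlying such a sensor is given below, for the simplified case of a laser beam parallel to the camera's optical axis at a known lateral offset; all numbers are hypothetical and the routine is only illustrative.

```python
def laser_spot_depth(x_offset_px, focal_px, baseline_mm):
    """Depth of a laser spot for a beam parallel to the camera's optical
    axis at lateral offset `baseline_mm` from the lens centre. The spot
    images at `x_offset_px` pixels from the principal point, so by
    similar triangles z = f * b / x."""
    if x_offset_px <= 0:
        raise ValueError("spot must appear on the laser side of the principal point")
    return focal_px * baseline_mm / x_offset_px

# Example (made-up numbers): f = 1200 px, baseline = 50 mm, spot offset = 120 px
print(laser_spot_depth(120, 1200, 50.0))   # 500.0 mm
```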

  3. Vision-Based Long-Range 3D Tracking, applied to Underground Surveying Tasks

    NASA Astrophysics Data System (ADS)

    Mossel, Annette; Gerstweiler, Georg; Vonach, Emanuel; Kaufmann, Hannes; Chmelina, Klaus

    2014-04-01

    To address the need of highly automated positioning systems in underground construction, we present a long-range 3D tracking system based on infrared optical markers. It provides continuous 3D position estimation of static or kinematic targets with low latency over a tracking volume of 12 m x 8 m x 70 m (width x height x depth). Over the entire volume, relative 3D point accuracy with a maximal deviation ≤ 22 mm is ensured with possible target rotations of yaw, pitch = 0 - 45° and roll = 0 - 360°. No preliminary sighting of target(s) is necessary since the system automatically locks onto a target without user intervention and autonomously starts tracking as soon as a target is within the view of the system. The proposed system needs a minimal hardware setup, consisting of two machine vision cameras and a standard workstation for data processing. This allows for quick installation with minimal disturbance of construction work. The data processing pipeline ensures camera calibration and tracking during on-going underground activities. Tests in real underground scenarios prove the system's capabilities to act as 3D position measurement platform for multiple underground tasks that require long range, low latency and high accuracy. Those tasks include simultaneously tracking of personnel, machines or robots.

  4. 3D Backscatter Imaging System

    NASA Technical Reports Server (NTRS)

    Turner, D. Clark (Inventor); Whitaker, Ross (Inventor)

    2016-01-01

    Systems and methods for imaging an object using backscattered radiation are described. The imaging system comprises both a radiation source for irradiating an object that is rotationally movable about the object, and a detector for detecting backscattered radiation from the object that can be disposed on substantially the same side of the object as the source and which can be rotationally movable about the object. The detector can be separated into multiple detector segments with each segment having a single line of sight projection through the object and so detects radiation along that line of sight. Thus, each detector segment can isolate the desired component of the backscattered radiation. By moving independently of each other about the object, the source and detector can collect multiple images of the object at different angles of rotation and generate a three dimensional reconstruction of the object. Other embodiments are described.

  5. A new 3D measurement method and its calibration based on the combination of binocular and monocular vision

    NASA Astrophysics Data System (ADS)

    Li, Dong; Tian, Jindong; Yang, Xin

    2012-11-01

    The traditional structured light binocular vision measurement system consists of two cameras and a projector, which can also be regarded as two monocular vision systems, each composed of the projector and one camera. In this paper, we present a three-dimensional (3D) measurement method based on the combination of binocular vision and monocular vision. The common field of view is reconstructed by the binocular vision system, and the missing data areas are filled in by the two monocular vision systems. In order to improve the measurement accuracy and unify the three world coordinate systems, a calibration method is proposed. The calibration procedure consists of calibration of the binocular vision system, calibration of the two monocular vision systems, and a global optimization of the three systems to unify them to a common reference. For the monocular vision system calibration, a new method based on a virtual target is proposed and used to establish the coordinate relations. We use a projector and two cameras to build a vision system for testing the proposed technique. The experimental results show that the calibration algorithm ensures consistent accuracy across the three systems, which is important for data fusion, and that the proposed method efficiently improves the completeness of the measurement results and the measuring range.

  6. A technique for 3-D robot vision for space applications

    NASA Technical Reports Server (NTRS)

    Markandey, V.; Tagare, H.; Defigueiredo, R. J. P.

    1987-01-01

    An extension of the MIAG algorithm for recognition and motion parameter determination of general 3-D polyhedral objects based on model matching techniques and using Moment Invariants as features of object representation is discussed. Results of tests conducted on the algorithm under conditions simulating space conditions are presented.

  7. Demonstration of a 3D vision algorithm for space applications

    NASA Technical Reports Server (NTRS)

    Defigueiredo, Rui J. P. (Editor)

    1987-01-01

    This paper reports an extension of the MIAG algorithm for recognition and motion parameter determination of general 3-D polyhedral objects based on model matching techniques and using moment invariants as features of object representation. Results of tests conducted on the algorithm under conditions simulating space conditions are presented.

  8. Relative stereo 3-D vision sensor and its application for nursery plant transplanting

    NASA Astrophysics Data System (ADS)

    Hata, Seiji; Hayashi, Junichiro; Takahashi, Satoru; Hojo, Hirotaka

    2007-10-01

    Clone nursery plant production is one of the important applications of bio-technology. Most bio-production processes are highly automated, but the transplanting of small nursery plants cannot be automated because the shapes of small nursery plants are not uniform. In this research, a transplanting robot system for clone nursery plant production is under development. A 3-D vision system using a relative stereo method detects the shapes and positions of small nursery plants through transparent vessels. A force-controlled robot picks up the plants and transplants them into vessels with artificial soil.

  9. Coherent laser vision system

    SciTech Connect

    Sebastion, R.L.

    1995-10-01

    The Coherent Laser Vision System (CLVS) is being developed to provide precision real-time 3D world views to support site characterization and robotic operations during facilities Decontamination and Decommissioning. Autonomous or semiautonomous robotic operations require an accurate, up-to-date 3D world view. Existing technologies for real-time 3D imaging, such as AM laser radar, have limited accuracy at significant ranges and have variability in range estimates caused by lighting or surface shading. Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions that is characteristic of coherent laser radar. The random pixel addressability allows concentration of scanning and processing on the active areas of a scene, as is done by the human eye-brain system.
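
    For orientation, in an FMCW laser radar the range to a target follows from the beat frequency between the transmitted and received chirps; a minimal sketch with hypothetical chirp parameters:

```python
C = 299_792_458.0   # speed of light, m/s

def fmcw_range(beat_frequency_hz, chirp_bandwidth_hz, chirp_duration_s):
    """Range for a linear FMCW chirp: R = c * f_beat * T / (2 * B)."""
    return C * beat_frequency_hz * chirp_duration_s / (2.0 * chirp_bandwidth_hz)

# Example: a 100 GHz chirp swept over 1 ms with a measured 1 MHz beat -> ~1.5 m
print(fmcw_range(1e6, 100e9, 1e-3))
```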

  10. 3-D Imaging Systems for Agricultural Applications-A Review.

    PubMed

    Vázquez-Arellano, Manuel; Griepentrog, Hans W; Reiser, David; Paraforos, Dimitris S

    2016-01-01

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on the basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture.

  11. 3-D Imaging Systems for Agricultural Applications-A Review.

    PubMed

    Vázquez-Arellano, Manuel; Griepentrog, Hans W; Reiser, David; Paraforos, Dimitris S

    2016-01-01

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on the basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  12. 3-D Imaging Systems for Agricultural Applications—A Review

    PubMed Central

    Vázquez-Arellano, Manuel; Griepentrog, Hans W.; Reiser, David; Paraforos, Dimitris S.

    2016-01-01

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on the basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  13. Toward 3D Reconstruction of Outdoor Scenes Using an MMW Radar and a Monocular Vision Sensor

    PubMed Central

    El Natour, Ghina; Ait-Aider, Omar; Rouveure, Raphael; Berry, François; Faure, Patrice

    2015-01-01

    In this paper, we introduce a geometric method for 3D reconstruction of the exterior environment using a panoramic microwave radar and a camera. We rely on the complementarity of these two sensors considering the robustness to the environmental conditions and depth detection ability of the radar, on the one hand, and the high spatial resolution of a vision sensor, on the other. Firstly, geometric modeling of each sensor and of the entire system is presented. Secondly, we address the global calibration problem, which consists of finding the exact transformation between the sensors’ coordinate systems. Two implementation methods are proposed and compared, based on the optimization of a non-linear criterion obtained from a set of radar-to-image target correspondences. Unlike existing methods, no special configuration of the 3D points is required for calibration. This makes the methods flexible and easy to use by a non-expert operator. Finally, we present a very simple, yet robust 3D reconstruction method based on the sensors’ geometry. This method enables one to reconstruct observed features in 3D using one acquisition (static sensor), which is not always met in the state of the art for outdoor scene reconstruction. The proposed methods have been validated with synthetic and real data. PMID:26473874

  14. 3D packaging for integrated circuit systems

    SciTech Connect

    Chu, D.; Palmer, D.W.

    1996-11-01

    A goal was set for high-density, high-performance microelectronics pursued through a dense 3D packing of integrated circuits. A "tool set" of assembly processes has been developed that enables 3D system designs: 3D thermal analysis, silicon electrical through-vias, IC thinning, mounting wells in silicon, adhesives for silicon stacking, pretesting of IC chips before commitment to stacks, and bond pad bumping. Validation of these process developments occurred through both Sandia prototypes and subsequent commercial examples.

  15. A 3D surface imaging system for assessing human obesity

    NASA Astrophysics Data System (ADS)

    Xu, B.; Yu, W.; Yao, M.; Yao, X.; Li, Q.; Pepper, M. R.; Freeland-Graves, J. H.

    2009-08-01

    The increasing prevalence of obesity suggests a need to develop a convenient, reliable and economical tool for assessment of this condition. Three-dimensional (3D) body surface imaging has emerged as an exciting technology for estimation of body composition. This paper presents a new 3D body imaging system, which was designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology was used to satisfy the requirements for a simple hardware setup and fast image acquisitions. The portability of the system was created via a two-stand configuration, and the accuracy of body volume measurements was improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3D body imaging. Body measurement functions dedicated to body composition assessment also were developed. The overall performance of the system was evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.

  16. 3D Machine Vision and Additive Manufacturing: Concurrent Product and Process Development

    NASA Astrophysics Data System (ADS)

    Ilyas, Ismet P.

    2013-06-01

    The manufacturing environment changes rapidly and turbulently. Digital manufacturing (DM) plays a significant role and is one of the key strategies in setting up the vision and strategic planning toward knowledge-based manufacturing. An approach combining 3D machine vision (3D-MV) and Additive Manufacturing (AM) may finally be finding its niche in manufacturing. This paper briefly overviews the integration of 3D machine vision and AM in concurrent product and process development, the challenges and opportunities, and the implementation of 3D-MV and AM at POLMAN Bandung in accelerating product design and process development, and discusses a direct deployment of this approach on a real case from our industrial partners, who regard it as a very important and strategic approach in research as well as product/prototype development. The strategic aspects and needs of this combined approach in research, design and development are the main concerns of the presentation.

  17. Incorporating polarization in stereo vision-based 3D perception of non-Lambertian scenes

    NASA Astrophysics Data System (ADS)

    Berger, Kai; Voorhies, Randolph; Matthies, Larry

    2016-05-01

    Surfaces with specular, non-Lambertian reflectance are common in urban areas. Robot perception systems for applications in urban environments need to function effectively in the presence of such materials; however, both passive and active 3-D perception systems have difficulties with them. In this paper, we develop an approach using a stereo pair of polarization cameras to improve passive 3-D perception of specular surfaces. We use a commercial stereo camera pair with rotatable polarization filters in front of each lens to capture images with multiple orientations of the polarization filter. From these images, we estimate the degree of linear polarization (DOLP) and the angle of polarization (AOP) at each pixel in at least one camera. The AOP constrains the corresponding surface normal in the scene to lie in the plane of the observed angle of polarization. We embody this constraint in an energy functional for a regularization-based stereo vision algorithm. This paper describes the theory of polarization needed for this approach, describes the new stereo vision algorithm, and presents results on synthetic and real images to evaluate performance.
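
    The per-pixel DOLP and AOP estimation can be sketched with a standard Stokes-parameter formulation, assuming four captures with the polarizer at 0°, 45°, 90° and 135°; the specific orientations are an assumption for illustration, not necessarily those used by the authors.

```python
import numpy as np

def dolp_aop(i0, i45, i90, i135):
    """Per-pixel degree of linear polarization (DOLP) and angle of
    polarization (AOP) from intensity images at four polarizer angles."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity (Stokes S0)
    s1 = i0 - i90                        # Stokes S1
    s2 = i45 - i135                      # Stokes S2
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + 1e-9)
    aop = 0.5 * np.arctan2(s2, s1)       # radians, in (-pi/2, pi/2]
    return dolp, aop
```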

  18. Incorporating polarization in stereo vision-based 3D perception of non-Lambertian scenes

    NASA Astrophysics Data System (ADS)

    Berger, Kai; Voorhies, Randolph; Matthies, Larry

    2016-05-01

    Surfaces with specular, non-Lambertian reflectance are common in urban areas. Robot perception systems for applications in urban environments need to function effectively in the presence of such materials; however, both passive and active 3-D perception systems have difficulties with them. In this paper, we develop an approach using a stereo pair of polarization cameras to improve passive 3-D perception of specular surfaces. We use a commercial stereo camera pair with rotatable polarization filters in front of each lens to capture images with multiple orientations of the polarization filter. From these images, we estimate the degree of linear polarization (DOLP) and the angle of polarization (AOP) at each pixel in at least one camera. The AOP constrains the corresponding surface normal in the scene to lie in the plane of the observed angle of polarization. We embody this constraint in an energy functional for a regularization-based stereo vision algorithm. This paper describes the theory of polarization needed for this approach, describes the new stereo vision algorithm, and presents results on synthetic and real images to evaluate performance.

  19. 3D holoscopic video imaging system

    NASA Astrophysics Data System (ADS)

    Steurer, Johannes H.; Pesch, Matthias; Hahne, Christopher

    2012-03-01

    Since many years, integral imaging has been discussed as a technique to overcome the limitations of standard still photography imaging systems where a three-dimensional scene is irrevocably projected onto two dimensions. With the success of 3D stereoscopic movies, a huge interest in capturing three-dimensional motion picture scenes has been generated. In this paper, we present a test bench integral imaging camera system aiming to tailor the methods of light field imaging towards capturing integral 3D motion picture content. We estimate the hardware requirements needed to generate high quality 3D holoscopic images and show a prototype camera setup that allows us to study these requirements using existing technology. The necessary steps that are involved in the calibration of the system as well as the technique of generating human readable holoscopic images from the recorded data are discussed.

  20. Spatial light modulation for improved microscope stereo vision and 3D tracking

    NASA Astrophysics Data System (ADS)

    Lee, Michael P.; Gibson, Graham; Tassieri, Manlio; Phillips, Dave; Bernet, Stefan; Ritsh-Marte, Monika; Padgett, Miles J.

    2013-09-01

    We present a new type of stereo microscopy which can be used for tracking in 3D over an extended depth. The use of Spatial Light Modulators (SLMs) in the Fourier plane of a microscope sample is a common technique in Holographic Optical Tweezers (HOT). This setup is readily transferable from a tweezer system to an imaging system, where the tweezing laser is replaced with a camera. Just as a HOT system can diffract many traps of different types, in the imaging system many different imaging types can be diffracted with the SLM. The type of imaging we have developed is stereo imaging combined with lens correction. This approach has similarities with human vision where each eye has a lens, and it also extends the depth over which we can accurately track particles.

  1. Robust 3D reconstruction system for human jaw modeling

    NASA Astrophysics Data System (ADS)

    Yamany, Sameh M.; Farag, Aly A.; Tazman, David; Farman, Allan G.

    1999-03-01

    This paper presents a model-based vision system for dentistry that will replace traditional approaches used in diagnosis, treatment planning and surgical simulation. Dentistry requires accurate 3D representation of the teeth and jaws for many diagnostic and treatment purposes. For example, orthodontic treatment involves the application of force systems to teeth over time to correct malocclusion. In order to evaluate tooth movement progress, the orthodontist monitors this movement by means of visual inspection, intraoral measurements, fabrication of plastic models, photographs and radiographs, a process which is both costly and time consuming. In this paper an integrated system has been developed to record the patient's occlusion using computer vision. Data is acquired with an intraoral video camera. A modified shape from shading (SFS) technique, using perspective projection and camera calibration, is used to extract accurate 3D information from a sequence of 2D images of the jaw. A new technique for 3D data registration, using a Grid Closest Point transform and genetic algorithms, is used to register the SFS output. Triangulation is then performed, and a solid 3D model is obtained via a rapid prototype machine.

  2. Miniaturized 3D microscope imaging system

    NASA Astrophysics Data System (ADS)

    Lan, Yung-Sung; Chang, Chir-Weei; Sung, Hsin-Yueh; Wang, Yen-Chang; Chang, Cheng-Yi

    2015-05-01

    We designed and assembled a portable 3-D miniature microscopic imaging system with a size of 35 x 35 x 105 mm3. By integrating a microlens array (MLA) into the optical train of a handheld microscope, the image of a biological specimen can be captured in a single shot for ease of use. With the raw light field data and software, the focal plane can be changed digitally and the 3-D image can be reconstructed after the image is taken. To localize an object in a 3-D volume, an automated data analysis algorithm that precisely determines depth position is needed. The ability to create focal stacks from a single image allows moving specimens to be recorded. Applying the light field microscope algorithm to these focal stacks produces a set of cross-sections, which can be visualized using 3-D rendering. Furthermore, we have developed a series of design rules to enhance pixel usage efficiency and reduce the crosstalk between microlenses in order to obtain good image quality. In this paper, we demonstrate a handheld light field microscope (HLFM) that distinguishes two different-color fluorescence particles separated by a cover glass over a 600 um range, and we show its focal stacks and 3-D positions.

  3. Real-time 3D vision solution for on-orbit autonomous rendezvous and docking

    NASA Astrophysics Data System (ADS)

    Ruel, S.; English, C.; Anctil, M.; Daly, J.; Smith, C.; Zhu, S.

    2006-05-01

    Neptec has developed a vision system for the capture of non-cooperative objects on orbit. This system uses an active TriDAR sensor and a model based tracking algorithm to provide 6 degree of freedom pose information in real-time from mid range to docking. This system was selected for the Hubble Robotic Vehicle De-orbit Module (HRVDM) mission and for a Detailed Test Objective (DTO) mission to fly on the Space Shuttle. TriDAR (triangulation + LIDAR) technology makes use of a novel approach to 3D sensing by combining triangulation and Time-of-Flight (ToF) active ranging techniques in the same optical path. This approach exploits the complementary nature of these sensing technologies. Real-time tracking of target objects is accomplished using 3D model based tracking algorithms developed at Neptec in partnership with the Canadian Space Agency (CSA). The system provides 6 degrees of freedom pose estimation and incorporates search capabilities to initiate and recover tracking. Pose estimation is performed using an innovative approach that is faster than traditional techniques. This performance allows the algorithms to operate in real-time on the TriDAR's flight certified embedded processor. This paper presents results from simulation and lab testing demonstrating that the system's performance meets the requirements of a complete tracking system for on-orbit autonomous rendezvous and docking.
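
    Model-based 6-degree-of-freedom pose estimation from 3D data typically rests on a rigid alignment step like the one sketched below, which recovers the rotation and translation between corresponding model and sensed points via SVD. This is a generic Kabsch-style alignment for illustration only, not Neptec's actual tracking algorithm.

```python
import numpy as np

def rigid_pose(model_pts, sensed_pts):
    """Least-squares rigid transform (R, t) aligning model points to
    sensed 3D points; both arrays are Nx3 with corresponding rows."""
    mu_m = model_pts.mean(axis=0)
    mu_s = sensed_pts.mean(axis=0)
    H = (model_pts - mu_m).T @ (sensed_pts - mu_s)                # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_s - R @ mu_m
    return R, t   # sensed ≈ R @ model + t
```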

  4. 3D Multifunctional Ablative Thermal Protection System

    NASA Technical Reports Server (NTRS)

    Feldman, Jay; Venkatapathy, Ethiraj; Wilkinson, Curt; Mercer, Ken

    2015-01-01

    NASA is developing the Orion spacecraft to carry astronauts farther into the solar system than ever before, with human exploration of Mars as its ultimate goal. One of the technologies required to enable this advanced, Apollo-shaped capsule is a 3-dimensional quartz fiber composite for the vehicle's compression pad. During its mission, the compression pad serves first as a structural component and later as an ablative heat shield, partially consumed on Earth re-entry. This presentation will summarize the development of a new 3D quartz cyanate ester composite material, 3-Dimensional Multifunctional Ablative Thermal Protection System (3D-MAT), designed to meet the mission requirements for the Orion compression pad. Manufacturing development, aerothermal (arc-jet) testing, structural performance, and the overall status of material development for the 2018 EM-1 flight test will be discussed.

  5. Structured Light-Based 3D Reconstruction System for Plants.

    PubMed

    Nguyen, Thuy Tuong; Slaughter, David C; Max, Nelson; Maloof, Julin N; Sinha, Neelima

    2015-07-29

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance.
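
    The reported leaf-detection quality (recall of 0.97 and precision of 0.89) corresponds to the standard detection metrics sketched below; the counts used in the example are hypothetical and only chosen to roughly reproduce those figures.

```python
def precision_recall(true_positives, false_positives, false_negatives):
    """Standard detection metrics: precision = TP/(TP+FP), recall = TP/(TP+FN)."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Hypothetical leaf-detection counts roughly matching the reported figures
print(precision_recall(true_positives=89, false_positives=11, false_negatives=3))
# -> (0.89, 0.967...)
```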

  6. Structured Light-Based 3D Reconstruction System for Plants

    PubMed Central

    Nguyen, Thuy Tuong; Slaughter, David C.; Max, Nelson; Maloof, Julin N.; Sinha, Neelima

    2015-01-01

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance. PMID:26230701

  7. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals’ Behaviour

    PubMed Central

    Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs’ behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals’ quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog’s shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non

  8. Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals' Behaviour.

    PubMed

    Barnard, Shanis; Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola

    2016-01-01

    Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs' behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals' quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog's shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non

  10. Coherent laser vision system (CLVS)

    SciTech Connect

    1997-02-13

    The purpose of the CLVS research project is to develop a prototype fiber-optic based Coherent Laser Vision System suitable for DOE's EM Robotics program. The system provides three-dimensional (3D) vision for monitoring situations in which it is necessary to update geometric data on the order of once per second. The CLVS project plan required implementation in two phases of the contract, a Base Contract and a continuance option. This is the Base Program Interim Phase Topical Report presenting the results of Phase 1 of the CLVS research project. Test results and demonstration results provide a proof-of-concept for a system providing three-dimensional (3D) vision with the performance capability required to update geometric data on the order of once per second.

  11. Recording stereoscopic 3D neurosurgery with a head-mounted 3D camera system.

    PubMed

    Lee, Brian; Chen, Brian R; Chen, Beverly B; Lu, James Y; Giannotta, Steven L

    2015-06-01

    Stereoscopic three-dimensional (3D) imaging can present more information to the viewer and further enhance the learning experience over traditional two-dimensional (2D) video. Most 3D surgical videos are recorded from the operating microscope and only feature the crux, or the most important part of the surgery, leaving out other crucial parts of surgery including the opening, approach, and closing of the surgical site. In addition, many other surgeries including complex spine, trauma, and intensive care unit procedures are also rarely recorded. We describe and share our experience with a commercially available head-mounted stereoscopic 3D camera system to obtain stereoscopic 3D recordings of these seldom recorded aspects of neurosurgery. The strengths and limitations of using the GoPro(®) 3D system as a head-mounted stereoscopic 3D camera system in the operating room are reviewed in detail. Over the past several years, we have recorded in stereoscopic 3D over 50 cranial and spinal surgeries and created a library for education purposes. We have found the head-mounted stereoscopic 3D camera system to be a valuable asset to supplement 3D footage from a 3D microscope. We expect that these comprehensive 3D surgical videos will become an important facet of resident education and ultimately lead to improved patient care.

  12. Precise positioning surveillance in 3-D using night-vision stereoscopic photogrammetry

    NASA Astrophysics Data System (ADS)

    Schwartz, Jason M.

    2011-06-01

    A 3-D imaging technique is presented which pairs high-resolution night-vision cameras with GPS to increase the capabilities of passive imaging surveillance. Camera models and GPS are used to derive a registered point cloud from multiple night-vision images. These point clouds are used to generate 3-D scene models and extract real-world positions of mission critical objects. Analysis shows accuracies rivaling laser scanning even in near-total darkness. The technique has been tested on stereoscopic 3-D video collections as well. Because this technique does not rely on active laser emissions it is more portable, less complex, less costly, and less detectable than laser scanning. This study investigates close-range photogrammetry under night-vision lighting conditions using practical use-case examples of terrain modeling, covert facility surveillance, and stand-off facial recognition. The examples serve as the context for discussion of a standard processing workflow. Results include completed, geo-referenced 3-D models, assessments of related accuracy and precision, and a discussion of future activities.

  13. An Effective 3D Ear Acquisition System

    PubMed Central

    Liu, Yahui; Lu, Guangming; Zhang, David

    2015-01-01

    The human ear is a new feature in biometrics that has several merits over the more common face, fingerprint and iris biometrics. It can be easily captured from a distance without a fully cooperative subject. Also, the ear has a relatively stable structure that does not change much with the age and facial expressions. In this paper, we present a novel method of 3D ear acquisition system by using triangulation imaging principle, and the experiment results show that this design is efficient and can be used for ear recognition. PMID:26061553
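
    The abstract only names the triangulation imaging principle; the underlying depth relation is sketched below with assumed baseline and focal-length values, not the authors' actual rig.

      # Generic laser-triangulation relation: a projected feature shifts by
      # d pixels on the sensor; depth z = f * b / d (parameters are assumptions).
      import numpy as np

      def triangulation_depth(offset_px, baseline_m=0.15, focal_px=1400.0):
          """Depth from the observed pixel offset of a projected stripe or point."""
          d = np.asarray(offset_px, dtype=float)
          return focal_px * baseline_m / d

      # Larger offsets correspond to closer surface points.
      print(triangulation_depth([30.0, 60.0, 120.0]))  # -> [7.0, 3.5, 1.75] m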

  14. An Effective 3D Ear Acquisition System.

    PubMed

    Liu, Yahui; Lu, Guangming; Zhang, David

    2015-01-01

    The human ear is a new feature in biometrics that has several merits over the more common face, fingerprint and iris biometrics. It can be easily captured from a distance without a fully cooperative subject. Also, the ear has a relatively stable structure that does not change much with the age and facial expressions. In this paper, we present a novel method of 3D ear acquisition system by using triangulation imaging principle, and the experiment results show that this design is efficient and can be used for ear recognition.

  16. Vision system for telerobotics operation

    NASA Astrophysics Data System (ADS)

    Wong, Andrew K. C.; Li, Li-Wei; Liu, Wei-Cheng

    1992-10-01

    This paper presents a knowledge-based vision system for a telerobotics guidance project. The system is capable of recognizing and locating 3-D objects with unrestricted viewpoints in a simulated unconstrained space environment. It constructs object representations for vision tasks from wireframe models, recognizes and locates objects in a 3-D scene, and provides world modeling capability to establish, maintain, and update the 3-D environment description for telerobotic manipulations. In this paper, an object model is represented by an attributed hypergraph which contains direct structural (relational) information, with features grouped according to their multiple views so that the interpretation of the 3-D object and its 2-D projections is coupled. With this representation, object recognition is directed by a knowledge-directed hypothesis refinement strategy. The strategy starts with the identification of 2-D local feature characteristics for initiating feature and relation matching. Next, it continues to refine the matching by adding 2-D features from the image according to viewpoint and geometric consistency. Finally, it links the successful matches back to the 3-D model to recover the feature, relation and location information of the recognized object. The paper also presents the implementation and experimentation of the vision prototype.

  17. A dynamic 3D foot reconstruction system.

    PubMed

    Thabet, Ali K; Trucco, Emanuele; Salvi, Joaquim; Wang, Weijie; Abboud, Rami J

    2011-01-01

    Foot problems are varied and range from simple disorders through to complex diseases and joint deformities. Wherever possible, the use of insoles, or orthoses, is preferred over surgery. Current insole design techniques are based on static measurements of the foot, despite the fact that orthoses are prevalently used in dynamic conditions while walking or running. This paper presents the design and implementation of a structured-light prototype system providing dense three dimensional (3D) measurements of the foot in motion, and its use to show that foot measurements in dynamic conditions differ significantly from their static counterparts. The input to the system is a video sequence of a foot during a single step; the output is a 3D reconstruction of the plantar surface of the foot for each frame of the input. Engineering and clinical tests were carried out for the validation of the system. The accuracy of the system was found to be 0.34 mm with planar test objects. In tests with real feet, the system proved repeatable, with reconstruction differences between trials one week apart averaging 2.44 mm (static case) and 2.81 mm (dynamic case). Furthermore, a study was performed to compare the effective length of the foot between static and dynamic reconstructions using the 4D system. Results showed an average increase of 9 mm for the dynamic case. This increase is substantial for orthotics design, cannot be captured by a static system, and its subject-specific measurement is crucial for the design of effective foot orthoses.

  18. On the use of orientation filters for 3D reconstruction in event-driven stereo vision

    PubMed Central

    Camuñas-Mesa, Luis A.; Serrano-Gotarredona, Teresa; Ieng, Sio H.; Benosman, Ryad B.; Linares-Barranco, Bernabe

    2014-01-01

    The recently developed Dynamic Vision Sensors (DVS) sense visual information asynchronously and code it into trains of events with sub-micro second temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, therefore increasing the number of constraints applied to the matching algorithm. This strategy provides more reliably matched pairs of events, improving the final 3D reconstruction. PMID:24744694
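
    A minimal sketch of the orientation-gating idea, assuming small count maps of recent event activity around each candidate event; the rest of the event-driven matching pipeline (epipolar and timing constraints) is not reproduced.

      # Score a patch of recent event activity with a Gabor bank and require the
      # dominant orientation to agree before two events may be matched.
      import numpy as np

      def gabor_kernel(theta, size=9, sigma=2.0, lam=4.0, gamma=0.5):
          half = size // 2
          y, x = np.mgrid[-half:half + 1, -half:half + 1]
          xr = x * np.cos(theta) + y * np.sin(theta)
          yr = -x * np.sin(theta) + y * np.cos(theta)
          return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

      BANK = [gabor_kernel(t) for t in np.deg2rad([0, 45, 90, 135])]

      def dominant_orientation(event_patch):
          """event_patch: 9x9 count map of recent events around the new event."""
          return int(np.argmax([abs(np.sum(event_patch * k)) for k in BANK]))

      def orientation_compatible(patch_left, patch_right):
          # Extra constraint on top of epipolar geometry and event timing.
          return dominant_orientation(patch_left) == dominant_orientation(patch_right)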

  19. How the venetian blind percept emerges from the laminar cortical dynamics of 3D vision.

    PubMed

    Cao, Yongqiang; Grossberg, Stephen

    2014-01-01

    The 3D LAMINART model of 3D vision and figure-ground perception is used to explain and simulate a key example of the Venetian blind effect and to show how it is related to other well-known perceptual phenomena such as Panum's limiting case. The model proposes how lateral geniculate nucleus (LGN) and hierarchically organized laminar circuits in cortical areas V1, V2, and V4 interact to control processes of 3D boundary formation and surface filling-in that simulate many properties of 3D vision percepts, notably consciously seen surface percepts, which are predicted to arise when filled-in surface representations are integrated into surface-shroud resonances between visual and parietal cortex. Interactions between layers 4, 3B, and 2/3 in V1 and V2 carry out stereopsis and 3D boundary formation. Both binocular and monocular information combine to form 3D boundary and surface representations. Surface contour surface-to-boundary feedback from V2 thin stripes to V2 pale stripes combines computationally complementary boundary and surface formation properties, leading to a single consistent percept, while also eliminating redundant 3D boundaries, and triggering figure-ground perception. False binocular boundary matches are eliminated by Gestalt grouping properties during boundary formation. In particular, a disparity filter, which helps to solve the Correspondence Problem by eliminating false matches, is predicted to be realized as part of the boundary grouping process in layer 2/3 of cortical area V2. The model has been used to simulate the consciously seen 3D surface percepts in 18 psychophysical experiments. These percepts include the Venetian blind effect, Panum's limiting case, contrast variations of dichoptic masking and the correspondence problem, the effect of interocular contrast differences on stereoacuity, stereopsis with polarity-reversed stereograms, da Vinci stereopsis, and perceptual closure. These model mechanisms have also simulated properties of 3D neon

  1. How the venetian blind percept emerges from the laminar cortical dynamics of 3D vision

    PubMed Central

    Cao, Yongqiang; Grossberg, Stephen

    2014-01-01

    The 3D LAMINART model of 3D vision and figure-ground perception is used to explain and simulate a key example of the Venetian blind effect and to show how it is related to other well-known perceptual phenomena such as Panum's limiting case. The model proposes how lateral geniculate nucleus (LGN) and hierarchically organized laminar circuits in cortical areas V1, V2, and V4 interact to control processes of 3D boundary formation and surface filling-in that simulate many properties of 3D vision percepts, notably consciously seen surface percepts, which are predicted to arise when filled-in surface representations are integrated into surface-shroud resonances between visual and parietal cortex. Interactions between layers 4, 3B, and 2/3 in V1 and V2 carry out stereopsis and 3D boundary formation. Both binocular and monocular information combine to form 3D boundary and surface representations. Surface contour surface-to-boundary feedback from V2 thin stripes to V2 pale stripes combines computationally complementary boundary and surface formation properties, leading to a single consistent percept, while also eliminating redundant 3D boundaries, and triggering figure-ground perception. False binocular boundary matches are eliminated by Gestalt grouping properties during boundary formation. In particular, a disparity filter, which helps to solve the Correspondence Problem by eliminating false matches, is predicted to be realized as part of the boundary grouping process in layer 2/3 of cortical area V2. The model has been used to simulate the consciously seen 3D surface percepts in 18 psychophysical experiments. These percepts include the Venetian blind effect, Panum's limiting case, contrast variations of dichoptic masking and the correspondence problem, the effect of interocular contrast differences on stereoacuity, stereopsis with polarity-reversed stereograms, da Vinci stereopsis, and perceptual closure. These model mechanisms have also simulated properties of 3D neon

  2. Characterizing the influence of surface roughness and inclination on 3D vision sensor performance

    NASA Astrophysics Data System (ADS)

    Hodgson, John R.; Kinnell, Peter; Justham, Laura; Jackson, Michael R.

    2015-12-01

    This paper reports a methodology to evaluate the performance of 3D scanners, focusing on the influence of surface roughness and inclination on the number of acquired data points and measurement noise. Point clouds were captured of samples mounted on a robotic pan-tilt stage using an Ensenso active stereo 3D scanner. The samples have isotropic texture and range in surface roughness (Ra) from 0.09 to 0.46 μm. By extracting the point cloud quality indicators, point density and standard deviation, at a multitude of inclinations, maps of scanner performance are created. These maps highlight the performance envelopes of the sensor, the aim being to predict and compare scanner performance on real-world surfaces, rather than idealistic artifacts. The results highlight the need to characterize 3D vision sensors by their measurement limits as well as best-case performance, determined either by theoretical calculation or measurements in ideal circumstances.
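
    The abstract does not spell out how the two quality indicators are computed; one common formulation for a nominally planar sample patch is point density per unit area together with the standard deviation of point-to-plane residuals, as sketched here.

      # Fit a plane to the patch by SVD and report density and residual noise.
      import numpy as np

      def plane_fit_metrics(points_mm, patch_area_cm2):
          """points_mm: (N, 3) array of XYZ returns from the patch."""
          centroid = points_mm.mean(axis=0)
          _, _, vt = np.linalg.svd(points_mm - centroid)
          normal = vt[-1]                      # direction of smallest variance
          residuals = (points_mm - centroid) @ normal
          return {"point_density_per_cm2": len(points_mm) / patch_area_cm2,
                  "noise_std_mm": residuals.std(ddof=1)}

      pts = np.random.default_rng(1).normal(scale=[10.0, 10.0, 0.05], size=(500, 3))
      print(plane_fit_metrics(pts, patch_area_cm2=4.0))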

  3. CASTLE3D - A Computer Aided System for Labelling Archaeological Excavations in 3D

    NASA Astrophysics Data System (ADS)

    Houshiar, H.; Borrmann, D.; Elseberg, J.; Nüchter, A.; Näth, F.; Winkler, S.

    2015-08-01

    Documentation of archaeological excavation sites with conventional methods and tools such as hand drawings, measuring tape and archaeological notes is time consuming. This process is prone to human errors and the quality of the documentation depends on the qualification of the archaeologist on site. Use of modern technology and methods in 3D surveying and 3D robotics facilitate and improve this process. Computer-aided systems and databases improve the documentation quality and increase the speed of data acquisition. 3D laser scanning is the state of the art in modelling archaeological excavation sites, historical sites and even entire cities or landscapes. Modern laser scanners are capable of data acquisition of up to 1 million points per second. This provides a very detailed 3D point cloud of the environment. 3D point clouds and 3D models of an excavation site provide a better representation of the environment for the archaeologist and for documentation. The point cloud can be used both for further studies on the excavation and for the presentation of results. This paper introduces a Computer aided system for labelling archaeological excavations in 3D (CASTLE3D). Consisting of a set of tools for recording and georeferencing the 3D data from an excavation site, CASTLE3D is a novel documentation approach in industrial archaeology. It provides a 2D and 3D visualisation of the data and an easy-to-use interface that enables the archaeologist to select regions of interest and to interact with the data in both representations. The 2D visualisation and a 3D orthogonal view of the data provide cuts of the environment that resemble the traditional hand drawings. The 3D perspective view gives a realistic view of the environment. CASTLE3D is designed as an easy-to-use on-site semantic mapping tool for archaeologists. Each project contains a predefined set of semantic information that can be used to label findings in the data. Multiple regions of interest can be joined under

  4. 3-D Mesh Generation Nonlinear Systems

    SciTech Connect

    Christon, M. A.; Dovey, D.; Stillman, D. W.; Hallquist, J. O.; Rainsberger, R. B

    1994-04-07

    INGRID is a general-purpose, three-dimensional mesh generator developed for use with finite element, nonlinear, structural dynamics codes. INGRID generates the large and complex input data files for DYNA3D, NIKE3D, FACET, and TOPAZ3D. One of the greatest advantages of INGRID is that virtually any shape can be described without resorting to wedge elements, tetrahedrons, triangular elements or highly distorted quadrilateral or hexahedral elements. Other capabilities available are in the areas of geometry and graphics. Exact surface equations and surface intersections considerably improve the ability to deal with accurate models, and a hidden line graphics algorithm is included which is efficient on the most complicated meshes. The primary new capability is associated with the boundary conditions, loads, and material properties required by nonlinear mechanics programs. Commands have been designed for each case to minimize user effort. This is particularly important since special processing is almost always required for each load or boundary condition.

  5. Biomimetic machine vision system.

    PubMed

    Harman, William M; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael

    2005-01-01

    Real-time application of digital imaging for use in machine vision systems has proven to be prohibitive when used within control systems that employ low-power single processors without compromising the scope of vision or resolution of captured images. Development of a real-time analog machine vision system is the focus of research taking place at the University of Wyoming. This new vision system is based upon the biological vision system of the common house fly. Development of a single sensor is accomplished, representing a single facet of the fly's eye. This new sensor is then incorporated into an array of sensors capable of detecting objects and tracking motion in 2-D space. This system "preprocesses" incoming image data, resulting in minimal data processing to determine the location of a target object. Due to the nature of the sensors in the array, hyperacuity is achieved, thereby eliminating resolution issues found in digital vision systems. In this paper, we will discuss the biological traits of the fly eye and the specific traits that led to the development of this machine vision system. We will also discuss the process of developing an analog-based sensor that mimics the characteristics of interest in the biological vision system. This paper will conclude with a discussion of how an array of these sensors can be applied toward solving real-world machine vision issues.

  6. Advancements in 3D Structural Analysis of Geothermal Systems

    SciTech Connect

    Siler, Drew L; Faulds, James E; Mayhew, Brett; McNamara, David

    2013-06-23

    Robust geothermal activity in the Great Basin, USA is a product of both anomalously high regional heat flow and active fault-controlled extension. Elevated permeability associated with some fault systems provides pathways for circulation of geothermal fluids. Constraining the local-scale 3D geometry of these structures and their roles as fluid flow conduits is crucial in order to mitigate both the costs and risks of geothermal exploration and to identify blind (no surface expression) geothermal resources. Ongoing studies have indicated that much of the robust geothermal activity in the Great Basin is associated with high density faulting at structurally complex fault intersection/interaction areas, such as accommodation/transfer zones between discrete fault systems, step-overs or relay ramps in fault systems, intersection zones between faults with different strikes or different senses of slip, and horse-tailing fault terminations. These conceptualized models are crucial for locating and characterizing geothermal systems in a regional context. At the local scale, however, pinpointing drilling targets and characterizing resource potential within known or probable geothermal areas requires precise 3D characterization of the system. Employing a variety of surface and subsurface data sets, we have conducted detailed 3D geologic analyses of two Great Basin geothermal systems. Using EarthVision (Dynamic Graphics Inc., Alameda, CA) we constructed 3D geologic models of both the actively producing Brady’s geothermal system and a ‘greenfield’ geothermal prospect at Astor Pass, NV. These 3D models allow spatial comparison of disparate data sets in 3D and are the basis for quantitative structural analyses that can aid geothermal resource assessment and be used to pinpoint discrete drilling targets. The relatively abundant data set at Brady’s, ~80 km NE of Reno, NV, includes 24 wells with lithologies interpreted from careful analysis of cuttings and core, a 1

  8. 3D Viewer Platform of Cloud Clustering Management System: Google Map 3D

    NASA Astrophysics Data System (ADS)

    Choi, Sung-Ja; Lee, Gang-Soo

    A new management framework is needed for cloud environments as computing platforms converge. Independent software vendors (ISVs) and small businesses find it difficult to adopt the platform management systems offered by large providers. This article proposes a clustering management system for cloud computing environments aimed at ISVs and small-business enterprises. It applies a 3D viewer adapted from Google Map 3D and Google Earth, and is called 3DV_CCMS as an extension of the CCMS [1].

  9. Adaptive noise suppression technique for dense 3D point cloud reconstructions from monocular vision

    NASA Astrophysics Data System (ADS)

    Diskin, Yakov; Asari, Vijayan K.

    2012-10-01

    Mobile vision-based autonomous vehicles use video frames from multiple angles to construct a 3D model of their environment. In this paper, we present a post-processing adaptive noise suppression technique to enhance the quality of the computed 3D model. Our near real-time reconstruction algorithm uses each pair of frames to compute the disparities of tracked feature points, translating the distance a feature has traveled within the frame in pixels into real-world depth values. As a result, these tracked feature points are plotted to form a dense and colorful point cloud. Due to the inevitable small vibrations in the camera and the mismatches within the feature tracking algorithm, the point cloud model contains a significant number of misplaced points appearing as noise. The proposed noise suppression technique utilizes the spatial information of each point to unify points of similar texture and color into objects while simultaneously removing noise dissociated from any nearby objects. The noise filter combines all the points of similar depth into 2D layers throughout the point cloud model. By applying erosion and dilation techniques, we are able to eliminate the unwanted floating points while retaining points of larger objects. To reverse the compression process, we transform the 2D layers back into the 3D model, allowing points to return to their original position without the attached noise components. We evaluate the resulting noiseless point cloud by utilizing an unmanned ground vehicle to perform obstacle avoidance tasks. The contribution of the noise suppression technique is measured by evaluating the accuracy of the 3D reconstruction.
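
    A compact sketch of the layered erosion/dilation filter described above, with assumed cell and layer sizes; the texture and colour grouping steps are omitted.

      # Slice the cloud into depth layers, rasterise each layer into an occupancy
      # grid, apply binary opening (erosion then dilation), and keep only points
      # whose cells survive.
      import numpy as np
      from scipy import ndimage

      def suppress_floating_points(points, cell=0.05, layer=0.25, struct_size=3):
          """points: (N, 3) array [x, y, z]; sizes in metres (assumed values)."""
          keep = np.zeros(len(points), dtype=bool)
          ij = np.floor(points[:, :2] / cell).astype(int)
          ij -= ij.min(axis=0)                     # shift indices to start at zero
          layers = np.floor(points[:, 2] / layer).astype(int)
          structure = np.ones((struct_size, struct_size), dtype=bool)
          for l in np.unique(layers):
              sel = layers == l
              grid = np.zeros(ij.max(axis=0) + 1, dtype=bool)
              grid[ij[sel, 0], ij[sel, 1]] = True
              opened = ndimage.binary_opening(grid, structure=structure)
              keep[sel] = opened[ij[sel, 0], ij[sel, 1]]
          return points[keep]

      cloud = np.random.default_rng(2).uniform(0.0, 2.0, size=(5000, 3))
      print(len(suppress_floating_points(cloud)))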

  10. Fully 3D refraction correction dosimetry system

    NASA Astrophysics Data System (ADS)

    Manjappa, Rakesh; Sharath Makki, S.; Kumar, Rajesh; Mohan Vasu, Ram; Kanhirodan, Rajan

    2016-02-01

    The irradiation of selective regions in a polymer gel dosimeter results in an increase in optical density and refractive index (RI) at those regions. An optical tomography-based dosimeter depends on the rayline path through the dosimeter to estimate and reconstruct the dose distribution. The refraction of light passing through a dose region results in artefacts in the reconstructed images. These refraction errors are dependent on the scanning geometry and collection optics. We developed a fully 3D image reconstruction algorithm, algebraic reconstruction technique-refraction correction (ART-rc), that corrects for the refractive index mismatches present in a gel dosimeter scanner not only at the boundary, but also for any rayline refraction due to multiple dose regions inside the dosimeter. In this study, simulation and experimental studies have been carried out to reconstruct a 3D dose volume using 2D CCD measurements taken for various views. The study also focuses on the effectiveness of using different refractive-index matching media surrounding the gel dosimeter. Since the optical density is assumed to be low for a dosimeter, filtered backprojection is routinely used for reconstruction. We carry out the reconstructions using conventional algebraic reconstruction (ART) and refractive index corrected ART (ART-rc) algorithms. Reconstructions based on the FDK algorithm for cone-beam tomography have also been carried out for comparison. Line scanners and point detectors are used to obtain reconstructions plane by plane. Rays passing through a dose region with an RI mismatch do not reach the detector in the same plane, depending on the angle of incidence and RI. In the fully 3D scanning setup using 2D array detectors, light rays that undergo refraction are still collected and hence can still be accounted for in the reconstruction algorithm. It is found that, for the central region of the dosimeter, the usable radius using the ART-rc algorithm with water as RI matched

  11. Fully 3D refraction correction dosimetry system.

    PubMed

    Manjappa, Rakesh; Makki, S Sharath; Kumar, Rajesh; Vasu, Ram Mohan; Kanhirodan, Rajan

    2016-02-21

    The irradiation of selective regions in a polymer gel dosimeter results in an increase in optical density and refractive index (RI) at those regions. An optical tomography-based dosimeter depends on the rayline path through the dosimeter to estimate and reconstruct the dose distribution. The refraction of light passing through a dose region results in artefacts in the reconstructed images. These refraction errors are dependent on the scanning geometry and collection optics. We developed a fully 3D image reconstruction algorithm, algebraic reconstruction technique-refraction correction (ART-rc), that corrects for the refractive index mismatches present in a gel dosimeter scanner not only at the boundary, but also for any rayline refraction due to multiple dose regions inside the dosimeter. In this study, simulation and experimental studies have been carried out to reconstruct a 3D dose volume using 2D CCD measurements taken for various views. The study also focuses on the effectiveness of using different refractive-index matching media surrounding the gel dosimeter. Since the optical density is assumed to be low for a dosimeter, filtered backprojection is routinely used for reconstruction. We carry out the reconstructions using conventional algebraic reconstruction (ART) and refractive index corrected ART (ART-rc) algorithms. Reconstructions based on the FDK algorithm for cone-beam tomography have also been carried out for comparison. Line scanners and point detectors are used to obtain reconstructions plane by plane. Rays passing through a dose region with an RI mismatch do not reach the detector in the same plane, depending on the angle of incidence and RI. In the fully 3D scanning setup using 2D array detectors, light rays that undergo refraction are still collected and hence can still be accounted for in the reconstruction algorithm. It is found that, for the central region of the dosimeter, the usable radius using the ART-rc algorithm with water as RI matched
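
    The refraction-corrected raylines of ART-rc are not reproduced here, but the basic ART (Kaczmarz) update on which it builds is shown below on a toy four-ray, four-voxel system.

      # ART / Kaczmarz row action: x += relax * (b_i - a_i.x) / ||a_i||^2 * a_i
      import numpy as np

      def art(A, b, n_sweeps=50, relax=1.0):
          x = np.zeros(A.shape[1])
          row_norms = (A * A).sum(axis=1)
          for _ in range(n_sweeps):
              for i in range(A.shape[0]):
                  if row_norms[i] > 0:
                      x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
          return x

      # Four rays through a 2x2-pixel "dose volume" (rows of the system matrix).
      A = np.array([[1., 1., 0., 0.],
                    [0., 0., 1., 1.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.]])
      x_true = np.array([0.0, 1.0, 2.0, 3.0])
      print(art(A, A @ x_true).round(3))   # converges towards x_true = [0, 1, 2, 3]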

  12. COHERENT LASER VISION SYSTEM (CLVS) OPTION PHASE

    SciTech Connect

    Robert Clark

    1999-11-18

    The purpose of this research project was to develop a prototype fiber-optic based Coherent Laser Vision System (CLVS) suitable for DOE's EM Robotic program. The system provides three-dimensional (3D) vision for monitoring situations in which it is necessary to update the dimensional spatial data on the order of once per second. The system has total immunity to ambient lighting conditions.

  13. Identification and Detection of Simple 3D Objects with Severely Blurred Vision

    PubMed Central

    Kallie, Christopher S.; Legge, Gordon E.; Yu, Deyue

    2012-01-01

    Purpose. Detecting and recognizing three-dimensional (3D) objects is an important component of the visual accessibility of public spaces for people with impaired vision. The present study investigated the impact of environmental factors and object properties on the recognition of objects by subjects who viewed physical objects with severely reduced acuity. Methods. The experiment was conducted in an indoor testing space. We examined detection and identification of simple convex objects by normally sighted subjects wearing diffusing goggles that reduced effective acuity to 20/900. We used psychophysical methods to examine the effect on performance of important environmental variables: viewing distance (from 10–24 feet, or 3.05–7.32 m) and illumination (overhead fluorescent and artificial window), and object variables: shape (boxes and cylinders), size (heights from 2–6 feet, or 0.61–1.83 m), and color (gray and white). Results. Object identification was significantly affected by distance, color, height, and shape, as well as interactions between illumination, color, and shape. A stepwise regression analysis showed that 64% of the variability in identification could be explained by object contrast values (58%) and object visual angle (6%). Conclusions. When acuity is severely limited, illumination, distance, color, height, and shape influence the identification and detection of simple 3D objects. These effects can be explained in large part by the impact of these variables on object contrast and visual angle. Basic design principles for improving object visibility are discussed. PMID:23111613

  14. Perception of 3-D location based on vision, touch, and extended touch.

    PubMed

    Giudice, Nicholas A; Klatzky, Roberta L; Bennett, Christopher R; Loomis, Jack M

    2013-01-01

    Perception of the near environment gives rise to spatial images in working memory that continue to represent the spatial layout even after cessation of sensory input. As the observer moves, these spatial images are continuously updated. This research is concerned with (1) whether spatial images of targets are formed when they are sensed using extended touch (i.e., using a probe to extend the reach of the arm) and (2) the accuracy with which such targets are perceived. In Experiment 1, participants perceived the 3-D locations of individual targets from a fixed origin and were then tested with an updating task involving blindfolded walking followed by placement of the hand at the remembered target location. Twenty-four target locations, representing all combinations of two distances, two heights, and six azimuths, were perceived by vision or by blindfolded exploration with the bare hand, a 1-m probe, or a 2-m probe. Systematic errors in azimuth were observed for all targets, reflecting errors in representing the target locations and updating. Overall, updating after visual perception was best, but the quantitative differences between conditions were small. Experiment 2 demonstrated that auditory information signifying contact with the target was not a factor. Overall, the results indicate that 3-D spatial images can be formed of targets sensed by extended touch and that perception by extended touch, even out to 1.75 m, is surprisingly accurate.

  15. Synthetic Vision Systems

    NASA Technical Reports Server (NTRS)

    Prinzel, L.J.; Kramer, L.J.

    2009-01-01

    A synthetic vision system is an aircraft cockpit display technology that presents the visual environment external to the aircraft using computer-generated imagery in a manner analogous to how it would appear to the pilot if forward visibility were not restricted. The purpose of this chapter is to review the state of synthetic vision systems, and discuss selected human factors issues that should be considered when designing such displays.

  16. Automatic Quality Inspection of Percussion Cap Mass Production by Means of 3D Machine Vision and Machine Learning Techniques

    NASA Astrophysics Data System (ADS)

    Tellaeche, A.; Arana, R.; Ibarguren, A.; Martínez-Otzeta, J. M.

    The exhaustive quality control is becoming very important in the world's globalized market. One of these examples where quality control becomes critical is the percussion cap mass production. These elements must achieve a minimum tolerance deviation in their fabrication. This paper outlines a machine vision development using a 3D camera for the inspection of the whole production of percussion caps. This system presents multiple problems, such as metallic reflections in the percussion caps, high speed movement of the system and mechanical errors and irregularities in percussion cap placement. Due to these problems, it is impossible to solve the problem by traditional image processing methods, and hence, machine learning algorithms have been tested to provide a feasible classification of the possible errors present in the percussion caps.

  17. An Efficient 3D Imaging using Structured Light Systems

    NASA Astrophysics Data System (ADS)

    Lee, Deokwoo

    Structured light 3D surface imaging has been crucial in the fields of image processing and computer vision, particularly in reconstruction, recognition and others. In this dissertation, we propose approaches to the development of an efficient 3D surface imaging system using structured light patterns, covering reconstruction, recognition and a sampling criterion. To achieve an efficient reconstruction system, we address the problem in its many dimensions. In the first, we extract the geometric 3D coordinates of an object which is illuminated by a set of concentric circular patterns and reflected onto a 2D image plane. The relationship between the original and the deformed shape of the light patterns due to the surface shape provides sufficient 3D coordinate information. In the second, we consider system efficiency. The efficiency, which can be quantified by the size of the data, is improved by reducing the number of circular patterns to be projected onto an object of interest. Akin to the Shannon-Nyquist Sampling Theorem, we derive the minimum number of circular patterns which sufficiently represents the target object with no considerable information loss. Specific geometric information (e.g. the highest curvature) of an object is key to deriving the minimum sampling density. In the third, the object, represented using the minimum number of patterns, has incomplete color information (i.e. color information is given a priori along with the curves). An interpolation is carried out to complete the photometric reconstruction. The reconstruction is only approximate because the minimum number of patterns may not exactly reproduce the original object, but the result does not show considerable information loss, and the performance of the approximate reconstruction is evaluated by performing recognition or classification. For object recognition, we use facial curves, which are deformed circular curves (patterns) on a target object. We simply carry out comparison between the

  18. Quantitative data quality metrics for 3D laser radar systems

    NASA Astrophysics Data System (ADS)

    Stevens, Jeffrey R.; Lopez, Norman A.; Burton, Robin R.

    2011-06-01

    Several quantitative data quality metrics for three dimensional (3D) laser radar systems are presented, namely: X-Y contrast transfer function, Z noise, Z resolution, X-Y edge & line spread functions, 3D point spread function and data voids. These metrics are calculated from both raw and/or processed point cloud data, providing different information regarding the performance of 3D imaging laser radar systems and the perceptual quality attributes of 3D datasets. The discussion is presented within the context of 3D imaging laser radar systems employing arrays of Geiger-mode Avalanche Photodiode (GmAPD) detectors, but the metrics may generally be applied to linear mode systems as well. An example for the role of these metrics in comparison of noise removal algorithms is also provided.
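
    As an illustration of one of the listed metrics, a simple occupancy-grid version of the "data voids" indicator is the fraction of empty X-Y cells within the scanned footprint; the paper's exact definitions may differ, and the cell size below is an arbitrary assumption.

      # Fraction of empty X-Y cells over the bounding box of the returns.
      import numpy as np

      def void_fraction(points_xy, cell=0.25):
          x, y = points_xy[:, 0], points_xy[:, 1]
          bins = [np.arange(x.min(), x.max() + cell, cell),
                  np.arange(y.min(), y.max() + cell, cell)]
          counts, _, _ = np.histogram2d(x, y, bins=bins)
          return float((counts == 0).mean())

      rng = np.random.default_rng(3)
      xy = rng.uniform(0.0, 5.0, size=(2000, 2))
      xy = xy[~((xy[:, 0] > 2.0) & (xy[:, 0] < 3.0))]   # simulate an occluded stripe
      print(round(void_fraction(xy), 3))                # roughly the occluded fifth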

  19. Education System Using Interactive 3D Computer Graphics (3D-CG) Animation and Scenario Language for Teaching Materials

    ERIC Educational Resources Information Center

    Matsuda, Hiroshi; Shindo, Yoshiaki

    2006-01-01

    3D computer graphics (3D-CG) animation featuring a speaking virtual actor is very effective as an educational medium, but it takes a long time to produce. To reduce the cost of producing 3D-CG educational content and improve the capability of the education system, we have developed a new education system using Virtual Actor.…

  20. 3D Vision by Using Calibration Pattern with Inertial Sensor and RBF Neural Networks.

    PubMed

    Beṣdok, Erkan

    2009-01-01

    Camera calibration is a crucial prerequisite for the retrieval of metric information from images. The problem of camera calibration is the computation of camera intrinsic parameters (i.e., coefficients of geometric distortions, principal distance and principal point) and extrinsic parameters (i.e., 3D spatial orientations: ω, ϕ, κ, and 3D spatial translations: t(x), t(y), t(z)). The intrinsic camera calibration (i.e., interior orientation) models the imaging system of the camera optics, while the extrinsic camera calibration (i.e., exterior orientation) indicates the translation and the orientation of the camera with respect to the global coordinate system. Traditional camera calibration techniques require a predefined mathematical camera model and they use prior knowledge of many parameters. Definition of a realistic camera model is quite difficult and computation of camera calibration parameters is error-prone. In this paper, a novel implicit camera calibration method based on Radial Basis Function Neural Networks is proposed. The proposed method requires neither an exactly defined camera model nor any prior knowledge about the imaging setup or classical camera calibration parameters. The proposed method uses a calibration grid-pattern rotated around a static, fixed axis. The rotations of the calibration grid-pattern have been acquired by using an Xsens MTi-9 inertial sensor, and in order to evaluate the success of the proposed method, the 3D reconstruction performance of the proposed method has been compared with that of a traditional camera calibration method, Modified Direct Linear Transformation (MDLT). Extensive simulation results show that the proposed method achieves better performance than MDLT in terms of 3D reconstruction. PMID:22408542
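
    The sketch below illustrates the implicit (model-free) calibration idea with an RBF regressor, using scipy's RBFInterpolator on a synthetic stereo configuration; it is not the authors' single-camera, inertial-sensor setup or their neural network.

      # Learn a direct mapping from stereo image coordinates to 3D coordinates.
      import numpy as np
      from scipy.interpolate import RBFInterpolator

      rng = np.random.default_rng(4)
      world = rng.uniform([-1, -1, 4], [1, 1, 6], size=(400, 3))   # training targets

      def project(points, f=800.0, cx=320.0, cy=240.0, baseline=0.2, right=False):
          x = points[:, 0] - (baseline if right else 0.0)
          return np.column_stack([f * x / points[:, 2] + cx,
                                  f * points[:, 1] / points[:, 2] + cy])

      features = np.hstack([project(world), project(world, right=True)])  # u1,v1,u2,v2
      model = RBFInterpolator(features, world, smoothing=1e-6)

      test = rng.uniform([-1, -1, 4], [1, 1, 6], size=(50, 3))
      pred = model(np.hstack([project(test), project(test, right=True)]))
      print("mean 3D error:", np.linalg.norm(pred - test, axis=1).mean())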

  1. Multi-camera system for 3D forensic documentation.

    PubMed

    Leipner, Anja; Baumeister, Rilana; Thali, Michael J; Braun, Marcel; Dobler, Erika; Ebert, Lars C

    2016-04-01

    Three-dimensional (3D) surface documentation is well established in forensic documentation. The most common systems include laser scanners and surface scanners with optical 3D cameras. An additional documentation tool is photogrammetry. This article introduces the botscan© (botspot GmbH, Berlin, Germany) multi-camera system for the forensic markerless photogrammetric whole body 3D surface documentation of living persons in standing posture. We used the botscan© multi-camera system to document a person in 360°. The system has a modular design and works with 64 digital single-lens reflex (DSLR) cameras. The cameras were evenly distributed in a circular chamber. We generated 3D models from the photographs using the PhotoScan© (Agisoft LLC, St. Petersburg, Russia) software. Our results revealed that the botscan© and PhotoScan© produced 360° 3D models with detailed textures. The 3D models had very accurate geometries and could be scaled to full size with the help of scale bars. In conclusion, this multi-camera system provided a rapid and simple method for documenting the whole body of a person to generate 3D data with Photoscan©. PMID:26921815

  3. Synthetic 3D multicellular systems for drug development.

    PubMed

    Rimann, Markus; Graf-Hausner, Ursula

    2012-10-01

    Since the 1970s, the limitations of two dimensional (2D) cell culture and the relevance of appropriate three dimensional (3D) cell systems have become increasingly evident. Extensive effort has thus been made to move cells from a flat world to a 3D environment. While 3D cell culture technologies are meanwhile widely used in academia, 2D culture technologies are still entrenched in the (pharmaceutical) industry for most kind of cell-based efficacy and toxicology tests. However, 3D cell culture technologies will certainly become more applicable if biological relevance, reproducibility and high throughput can be assured at acceptable costs. Most recent innovations and developments clearly indicate that the transition from 2D to 3D cell culture for industrial purposes, for example, drug development is simply a question of time.

  4. NoSQL Based 3D City Model Management System

    NASA Astrophysics Data System (ADS)

    Mao, B.; Harrie, L.; Cao, J.; Wu, Z.; Shen, J.

    2014-04-01

    To manage increasingly complicated 3D city models, a framework based on NoSQL database is proposed in this paper. The framework supports import and export of 3D city model according to international standards such as CityGML, KML/COLLADA and X3D. We also suggest and implement 3D model analysis and visualization in the framework. For city model analysis, 3D geometry data and semantic information (such as name, height, area, price and so on) are stored and processed separately. We use a Map-Reduce method to deal with the 3D geometry data since it is more complex, while the semantic analysis is mainly based on database query operation. For visualization, a multiple 3D city representation structure CityTree is implemented within the framework to support dynamic LODs based on user viewpoint. Also, the proposed framework is easily extensible and supports geoindexes to speed up the querying. Our experimental results show that the proposed 3D city management system can efficiently fulfil the analysis and visualization requirements.
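
    A toy illustration (not the paper's NoSQL implementation) of the split described above: geometric quantities are aggregated with a map/reduce pattern, while semantic attributes are answered by a plain query over the stored documents.

      # Hypothetical documents in the 3D city store.
      from functools import reduce

      city_models = [
          {"name": "Hall",   "height_m": 24.0, "footprint_m2": 650.0, "price_m2": 2100},
          {"name": "Tower",  "height_m": 96.0, "footprint_m2": 400.0, "price_m2": 3400},
          {"name": "School", "height_m": 12.0, "footprint_m2": 900.0, "price_m2": 1500},
      ]

      # "Map" derives a geometric quantity per model; "reduce" aggregates it.
      volumes = map(lambda m: m["height_m"] * m["footprint_m2"], city_models)
      total_volume = reduce(lambda a, b: a + b, volumes, 0.0)

      # Semantic analysis stays a simple attribute query.
      tall_and_cheap = [m["name"] for m in city_models
                        if m["height_m"] > 20 and m["price_m2"] < 2500]

      print(total_volume, tall_and_cheap)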

  5. Volumetric 3D Display System with Static Screen

    NASA Technical Reports Server (NTRS)

    Geng, Jason

    2011-01-01

    Current display technology has relied on flat, 2D screens that cannot truly convey the third dimension of visual information: depth. In contrast to conventional visualization that is primarily based on 2D flat screens, the volumetric 3D display possesses a true 3D display volume and physically places each 3D voxel of the displayed 3D images at its true 3D (x,y,z) spatial position. Each voxel, analogous to a pixel in a 2D image, emits light from that position to form a real 3D image in the eyes of the viewers. Such true volumetric 3D display technology provides both physiological (accommodation, convergence, binocular disparity, and motion parallax) and psychological (image size, linear perspective, shading, brightness, etc.) depth cues to the human visual system to help in the perception of 3D objects. In a volumetric 3D display, viewers can watch the displayed 3D images from a complete 360° view without using any special eyewear. The volumetric 3D display techniques may lead to a quantum leap in information display technology and can dramatically change the ways humans interact with computers, which can lead to significant improvements in the efficiency of learning and knowledge management processes. Within a block of glass, a large number of tiny voxel dots is created by using a recently available machining technique called laser subsurface engraving (LSE). The LSE is able to produce tiny physical crack points (as small as 0.05 mm in diameter) at any (x,y,z) location within the cube of transparent material. The crack dots, when illuminated by a light source, scatter the light around and form visible voxels within the 3D volume. The locations of these tiny voxels are strategically determined such that each can be illuminated by a light ray from a high-resolution digital mirror device (DMD) light engine. The distribution of these voxels occupies the full display volume within the static 3D glass screen. This design eliminates any moving screen seen in previous

  6. Vision-based building energy diagnostics and retrofit analysis using 3D thermography and building information modeling

    NASA Astrophysics Data System (ADS)

    Ham, Youngjib

    localization issues of 2D thermal image-based inspection, a new computer vision-based method is presented for automated 3D spatio-thermal modeling of building environments from images and localizing the thermal images into the 3D reconstructed scenes, which helps better characterize the as-is condition of existing buildings in 3D. By using these models, auditors can conduct virtual walk-through in buildings and explore the as-is condition of building geometry and the associated thermal conditions in 3D. Second, to address the challenges in qualitative and subjective interpretation of visual data, a new model-based method is presented to convert the 3D thermal profiles of building environments into their associated energy performance metrics. More specifically, the Energy Performance Augmented Reality (EPAR) models are formed which integrate the actual 3D spatio-thermal models ('as-is') with energy performance benchmarks ('as-designed') in 3D. In the EPAR models, the presence and location of potential energy problems in building environments are inferred based on performance deviations. The as-is thermal resistances of the building assemblies are also calculated at the level of mesh vertex in 3D. Then, based on the historical weather data reflecting energy load for space conditioning, the amount of heat transfer that can be saved by improving the as-is thermal resistances of the defective areas to the recommended level is calculated, and the equivalent energy cost for this saving is estimated. The outcome provides building practitioners with unique information that can facilitate energy efficient retrofit decision-makings. This is a major departure from offhand calculations that are based on historical cost data of industry best practices. Finally, to improve the reliability of BIM-based energy performance modeling and analysis for existing buildings, a new model-based automated method is presented to map actual thermal resistance measurements at the level of 3D vertexes to the
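
    The abstract does not give the EPAR formulation itself; a textbook-style, steady-state approximation of an as-is thermal resistance from thermographic surface temperatures, with an assumed interior surface heat-transfer coefficient, looks like this.

      # q = h_in * (T_in - T_surface); U = q / (T_in - T_out); R = 1 / U
      def wall_r_value(t_in_c, t_surface_c, t_out_c, h_in=7.69):
          """h_in: interior surface heat-transfer coefficient in W/(m^2 K), assumed."""
          q = h_in * (t_in_c - t_surface_c)      # heat flux through the assembly
          u_value = q / (t_in_c - t_out_c)       # W/(m^2 K)
          return 1.0 / u_value                   # m^2 K / W

      # Example: 21 C indoors, 18.5 C on the interior wall surface, 2 C outdoors.
      print(round(wall_r_value(21.0, 18.5, 2.0), 2))   # ~0.99 m^2 K/W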

  7. A 3D digital medical photography system in paediatric medicine.

    PubMed

    Williams, Susanne K; Ellis, Lloyd A; Williams, Gigi

    2008-01-01

    In 2004, traditional clinical photography services at the Educational Resource Centre were extended using new technology. This paper describes the establishment of a 3D digital imaging system in a paediatric setting at the Royal Children's Hospital, Melbourne.

  8. a 3d Campus Information System - Initial Studies

    NASA Astrophysics Data System (ADS)

    Kahraman, I.; Karas, I. R.; Alizadehasharfi, B.; Abdul-Rahman, A.

    2013-08-01

    This paper discusses a method for developing a Campus Information System. The system can handle 3D spatial data in desktop and web environments. The method consists of texturing building facades for 3D building models and modelling the 3D Campus Information System. In this paper, some of these steps are carried out: modelling 3D buildings, toggling these models on the terrain and ortho-photo, integrating with a geo-database, transferring to the CityServer3D environment using the CityGML format, designing the service, etc. In addition, a simple but novel method of texturing building façades for 3D city modelling, based on the Dynamic Pulse Function (DPF), is used for synthetic and procedural texturing. DPF is very fast compared to other photo-realistic texturing methods. Last but not least, the aim is to present this project on the web using web mapping services. This makes 3D analysis easy for decision makers.

  9. Minimally Invasive Cardiac Surgery Using a 3D High-Definition Endoscopic System.

    PubMed

    Ruttkay, Tamas; Götte, Julia; Walle, Ulrike; Doll, Nicolas

    2015-01-01

    We describe a minimally invasive heart surgery application of the EinsteinVision 2.0 3D high-definition endoscopic system (Aesculap AG, Tuttlingen, Germany) in an 81-year-old man with severe tricuspid valve insufficiency. Fourteen years ago, he underwent a Ross procedure followed by a DDD pacemaker implantation 4 years later for tachy-brady-syndrome. His biventricular function was normal. We recommended minimally invasive tricuspid valve repair. The application of the aforementioned endoscopic system was simple, and the impressive 3D depth view offered an easy and precise manipulation through a minimal thoracotomy incision, avoiding the need for a rib-spreading retractor.

  10. [Quality system Vision 2000].

    PubMed

    Pasini, Evasio; Pitocchi, Oreste; de Luca, Italo; Ferrari, Roberto

    2002-12-01

    A recent document of the Italian Ministry of Health points out that all structures which provide services to the National Health System should implement a Quality System according to the ISO 9000 standards. Vision 2000 is the new version of the ISO standard. Vision 2000 is less bureaucratic than the old version. The specific requirements of Vision 2000 are: a) to identify, to monitor and to analyze the processes of the structure, b) to measure the results of the processes so as to ensure that they are effective, c) to implement actions necessary to achieve the planned results and the continual improvement of these processes, d) to identify customer requests and to measure customer satisfaction. Specific attention should also be dedicated to the competence and training of the personnel involved in the processes. The principles of the Vision 2000 agree with the principles of total quality management. The present article illustrates the Vision 2000 standard and provides practical examples of the implementation of this standard in cardiological departments.

  11. CONDOR Advanced Visionics System

    NASA Astrophysics Data System (ADS)

    Kanahele, David L.; Buckanin, Robert M.

    1996-06-01

    The Covert Night/Day Operations for Rotorcraft (CONDOR) program is a collaborative research and development program between the governments of the United States and the United Kingdom of Great Britain and Northern Ireland to develop and demonstrate an advanced visionics concept coupled with an advanced flight control system to improve rotorcraft mission effectiveness during day, night, and adverse weather conditions in the Nap-of-the-Earth environment. The Advanced Visionics System for CONDOR is the flight-ruggedized head-mounted display and computer graphics generator with the intended use of exploring, developing, and evaluating proposed visionic concepts for rotorcraft including: the application of color displays, wide field-of-view, enhanced imagery, virtual displays, mission symbology, stereo imagery, and other graphical interfaces.

  12. Thermal 3D modeling system based on 3-view geometry

    NASA Astrophysics Data System (ADS)

    Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-11-01

    In this paper, we propose a novel thermal three-dimensional (3D) modeling system that includes 3D shape, visual, and thermal infrared information and solves a registration problem among these three types of information. The proposed system consists of a projector, a visual camera, and a thermal camera (PVT). To generate 3D shape information, we use a structured light technique, which consists of a visual camera and a projector. A thermal camera is added to the structured light system in order to provide thermal information. To solve the correspondence problem between the three sensors, we use three-view geometry. Finally, we obtain registered PVT data, which includes visual, thermal, and 3D shape information. Among various potential applications such as industrial measurements, biological experiments, military usage, and so on, we have adapted the proposed method to biometrics, particularly for face recognition. With the proposed method, we obtain multi-modal 3D face data that includes not only textural information but also data regarding head pose, 3D shape, and thermal information. Experimental results show that the performance of the proposed face recognition system is not limited by head pose variation, which is a serious problem in face recognition.
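    As a rough illustration of how the registered thermal data can be attached to the reconstructed shape, the sketch below projects reconstructed 3D points into a calibrated thermal camera and samples one temperature per point. This is not the paper's three-view formulation; the function name and the assumption that the thermal camera's intrinsics (K_thermal) and pose (R, t) relative to the structured-light frame are already known are assumptions of this sketch.

```python
import numpy as np

def sample_thermal(points_3d, K_thermal, R, t, thermal_image):
    """Project reconstructed 3D points into a calibrated thermal camera and
    sample one temperature value per point (nearest-neighbour lookup)."""
    cam = (R @ points_3d.T).T + t              # transform into the thermal-camera frame
    uv = (K_thermal @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                # perspective division
    u = np.clip(np.rint(uv[:, 0]).astype(int), 0, thermal_image.shape[1] - 1)
    v = np.clip(np.rint(uv[:, 1]).astype(int), 0, thermal_image.shape[0] - 1)
    return thermal_image[v, u]                 # one temperature per 3D point
```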

  13. Advanced 3D Sensing and Visualization System for Unattended Monitoring

    SciTech Connect

    Carlson, J.J.; Little, C.Q.; Nelson, C.L.

    1999-01-01

    The purpose of this project was to create a reliable, 3D sensing and visualization system for unattended monitoring. The system provides benefits for several of Sandia's initiatives including nonproliferation, treaty verification, national security and critical infrastructure surety. The robust qualities of the system make it suitable for both interior and exterior monitoring applications. The 3D sensing system combines two existing sensor technologies in a new way to continuously maintain accurate 3D models of both static and dynamic components of monitored areas (e.g., portions of buildings, roads, and secured perimeters in addition to real-time estimates of the shape, location, and motion of humans and moving objects). A key strength of this system is the ability to monitor simultaneous activities on a continuous basis, such as several humans working independently within a controlled workspace, while also detecting unauthorized entry into the workspace. Data from the sensing system is used to identify activities or conditions that can signify potential surety (safety, security, and reliability) threats. The system could alert a security operator of potential threats or could be used to cue other detection, inspection or warning systems. An interactive, Web-based, 3D visualization capability was also developed using the Virtual Reality Modeling Language (VRML). The interface allows remote, interactive inspection of a monitored area (via the Internet or Satellite Links) using a 3D computer model of the area that is rendered from actual sensor data.

  14. Computer Vision Systems

    NASA Astrophysics Data System (ADS)

    Gunasekaran, Sundaram

    Food quality is of paramount consideration for all consumers, and its importance is perhaps only second to food safety. By some definition, food safety is also incorporated into the broad categorization of food quality. Hence, the need for careful and accurate evaluation of food quality is at the forefront of research and development in both academia and industry. Among the many available methods for food quality evaluation, computer vision has proven to be the most powerful, especially for nondestructively extracting and quantifying many features that have direct relevance to food quality assessment and control. Furthermore, computer vision systems serve to rapidly evaluate the most readily observable food quality attributes - the external characteristics such as color, shape, size, surface texture, etc. In addition, it is now possible, using advanced computer vision technologies, to “see” inside a food product and/or package to examine important quality attributes ordinarily unavailable to human evaluators. With rapid advances in electronic hardware and other associated imaging technologies, the cost-effectiveness and speed of computer vision systems have greatly improved and many practical systems are already in place in the food industry.

  15. Performance Analysis of a Low-Cost Triangulation-Based 3D Camera: Microsoft Kinect System

    NASA Astrophysics Data System (ADS)

    Chow, J. C. K.; Ang, K. D.; Lichti, D. D.; Teskey, W. F.

    2012-07-01

    Recent technological advancements have made active imaging sensors popular for 3D modelling and motion tracking. The 3D coordinates of signalised targets are traditionally estimated by matching conjugate points in overlapping images. Current 3D cameras can acquire point clouds at video frame rates from a single exposure station. In the area of 3D cameras, Microsoft and PrimeSense have collaborated and developed an active 3D camera based on the triangulation principle, known as the Kinect system. This off-the-shelf system costs less than 150 USD and has drawn a lot of attention from the robotics, computer vision, and photogrammetry disciplines. In this paper, the prospect of using the Kinect system for precise engineering applications was evaluated. The geometric quality of the Kinect system as a function of the scene (i.e. variation of depth, ambient light conditions, incidence angle, and object reflectivity) and the sensor (i.e. warm-up time and distance averaging) was analysed quantitatively. This system's potential in human body measurements was tested against a laser scanner and a 3D range camera. A new calibration model for simultaneously determining the exterior orientation parameters, interior orientation parameters, boresight angles, lever arm, and object space feature parameters was developed and the effectiveness of this calibration approach was explored.

  16. An annotation system for 3D fluid flow visualization

    NASA Technical Reports Server (NTRS)

    Loughlin, Maria M.; Hughes, John F.

    1995-01-01

    Annotation is a key activity of data analysis. However, current systems for data analysis focus almost exclusively on visualization. We propose a system which integrates annotations into a visualization system. Annotations are embedded in 3D data space, using the Post-it metaphor. This embedding allows contextual-based information storage and retrieval, and facilitates information sharing in collaborative environments. We provide a traditional database filter and a Magic Lens filter to create specialized views of the data. The system has been customized for fluid flow applications, with features which allow users to store parameters of visualization tools and sketch 3D volumes.

  17. Quantitative wound healing measurement and monitoring system based on an innovative 3D imaging system

    NASA Astrophysics Data System (ADS)

    Yi, Steven; Yang, Arthur; Yin, Gongjie; Wen, James

    2011-03-01

    In this paper, we report a novel three-dimensional (3D) wound imaging system (hardware and software) under development at Technest Inc. The system is designed to perform accurate 3D measurement and modeling of a wound and to track its healing status over time. Accurate measurement and tracking of wound healing enables physicians to assess, document, improve, and individualize the treatment plan given to each wound patient. In current wound care practices, physicians often visually inspect or roughly measure the wound to evaluate the healing status. This is not an optimal practice since human vision lacks precision and consistency. In addition, quantifying slow or subtle changes through perception is very difficult. As a result, an instrument that quantifies both skin color and geometric shape variations would be particularly useful in helping clinicians to assess healing status and judge the effect of hyperemia, hematoma, local inflammation, secondary infection, and tissue necrosis. Once fully developed, our 3D imaging system will have several unique advantages over traditional methods for monitoring wound care: (a) Non-contact measurement; (b) Fast and easy to use; (c) Up to 50-micron measurement accuracy; (d) 2D/3D quantitative measurements; (e) A handheld device; and (f) Reasonable cost (< $1,000).

  18. Bird Vision System

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The Bird Vision system is a multicamera photogrammetry software application that runs on a Microsoft Windows XP platform and was developed at Kennedy Space Center by ASRC Aerospace. This software system collects data about the locations of birds within a volume centered on the Space Shuttle and transmits it in real time to the laptop computer of a test director in the Launch Control Center (LCC) Firing Room.

  19. Design of a single projector multiview 3D display system

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2014-03-01

    Multiview three-dimensional (3D) display is able to provide horizontal parallax to viewers, with high-resolution and full-color images being presented to each view. Most multiview 3D display systems are designed and implemented using multiple projectors, each generating images for one view. Although this multi-projector design strategy is conceptually straightforward, implementation of such multi-projector design often leads to a very expensive system and complicated calibration procedures. Even for a multiview system with a moderate number of projectors (e.g., 32 or 64 projectors), the cost of a multi-projector 3D display system may become prohibitive due to the cost and complexity of integrating multiple projectors. In this article, we describe an optical design technique for a class of multiview 3D display systems that use only a single projector. In this single projector multiview (SPM) system design, multiple views for the 3D display are generated in a time-multiplexed fashion by the single high-speed projector with specially designed optical components, a scanning mirror, and a reflective mirror array. Images of all views are generated sequentially and projected via the specially designed optical system from different viewing directions towards a 3D display screen. Therefore, the single projector is able to generate an equivalent number of multiview images from multiple viewing directions, thus fulfilling the tasks of multiple projectors. An obvious advantage of the proposed SPM technique is the significant reduction of cost, size, and complexity, especially when the number of views is high. The SPM strategy also alleviates the time-consuming procedures for multi-projector calibration. The design method is flexible and scalable and can accommodate systems with different numbers of views.

  20. Systems biology in 3D space--enter the morphome.

    PubMed

    Lucocq, John M; Mayhew, Terry M; Schwab, Yannick; Steyer, Anna M; Hacker, Christian

    2015-02-01

    Systems-based understanding of living organisms depends on acquiring huge datasets from arrays of genes, transcripts, proteins, and lipids. These data, referred to as 'omes', are assembled using 'omics' methodologies. Currently a comprehensive, quantitative view of cellular and organellar systems in 3D space at nanoscale/molecular resolution is missing. We introduce here the term 'morphome' for the distribution of living matter within a 3D biological system, and 'morphomics' for methods of collecting 3D data systematically and quantitatively. A sampling-based approach termed stereology currently provides rapid, precise, and minimally biased morphomics. We propose that stereology solves the 'big data' problem posed by emerging wide-scale electron microscopy (EM) and can establish quantitative links between the newer nanoimaging platforms such as electron tomography, cryo-EM, and correlative microscopy.

  1. NGT-3D: a simple nematode cultivation system to study Caenorhabditis elegans biology in 3D

    PubMed Central

    Lee, Tong Young; Yoon, Kyoung-hye; Lee, Jin Il

    2016-01-01

    ABSTRACT The nematode Caenorhabditis elegans is one of the premier experimental model organisms today. In the laboratory, they display characteristic development, fertility, and behaviors in a two dimensional habitat. In nature, however, C. elegans is found in three dimensional environments such as rotting fruit. To investigate the biology of C. elegans in a 3D controlled environment we designed a nematode cultivation habitat which we term the nematode growth tube or NGT-3D. NGT-3D allows for the growth of both nematodes and the bacteria they consume. Worms show comparable rates of growth, reproduction and lifespan when bacterial colonies in the 3D matrix are abundant. However, when bacteria are sparse, growth and brood size fail to reach levels observed in standard 2D plates. Using NGT-3D we observe drastic deficits in fertility in a sensory mutant in 3D compared to 2D, and this defect was likely due to an inability to locate bacteria. Overall, NGT-3D will sharpen our understanding of nematode biology and allow scientists to investigate questions of nematode ecology and evolutionary fitness in the laboratory. PMID:26962047

  2. NGT-3D: a simple nematode cultivation system to study Caenorhabditis elegans biology in 3D.

    PubMed

    Lee, Tong Young; Yoon, Kyoung-Hye; Lee, Jin Il

    2016-01-01

    The nematode Caenorhabditis elegans is one of the premier experimental model organisms today. In the laboratory, they display characteristic development, fertility, and behaviors in a two dimensional habitat. In nature, however, C. elegans is found in three dimensional environments such as rotting fruit. To investigate the biology of C. elegans in a 3D controlled environment we designed a nematode cultivation habitat which we term the nematode growth tube or NGT-3D. NGT-3D allows for the growth of both nematodes and the bacteria they consume. Worms show comparable rates of growth, reproduction and lifespan when bacterial colonies in the 3D matrix are abundant. However, when bacteria are sparse, growth and brood size fail to reach levels observed in standard 2D plates. Using NGT-3D we observe drastic deficits in fertility in a sensory mutant in 3D compared to 2D, and this defect was likely due to an inability to locate bacteria. Overall, NGT-3D will sharpen our understanding of nematode biology and allow scientists to investigate questions of nematode ecology and evolutionary fitness in the laboratory. PMID:26962047

  3. Mechanically assisted 3D ultrasound guided prostate biopsy system.

    PubMed

    Bax, Jeffrey; Cool, Derek; Gardi, Lori; Knight, Kerry; Smith, David; Montreuil, Jacques; Sherebrin, Shi; Romagnoli, Cesare; Fenster, Aaron

    2008-12-01

    There are currently limitations associated with the prostate biopsy procedure, which is the most commonly used method for a definitive diagnosis of prostate cancer. With the use of two-dimensional (2D) transrectal ultrasound (TRUS) for needle-guidance in this procedure, the physician has restricted anatomical reference points for guiding the needle to target sites. Further, any motion of the physician's hand during the procedure may cause the prostate to move or deform to a prohibitive extent. These variations make it difficult to establish a consistent reference frame for guiding a needle. We have developed a 3D navigation system for prostate biopsy, which addresses these shortcomings. This system is composed of a 3D US imaging subsystem and a passive mechanical arm to minimize prostate motion. To validate our prototype, a series of experiments were performed on prostate phantoms. The 3D scan of the string phantom produced minimal geometric distortions, and the geometric error of the 3D imaging subsystem was 0.37 mm. The accuracy of 3D prostate segmentation was determined by comparing the known volume in a certified phantom to a reconstructed volume generated by our system and was shown to estimate the volume with less than 5% error. Biopsy needle guidance accuracy tests in agar prostate phantoms showed that the mean error was 2.1 mm and the 3D location of the biopsy core was recorded with a mean error of 1.8 mm. In this paper, we describe the mechanical design and validation of the prototype system using an in vitro prostate phantom. Preliminary results from an ongoing clinical trial show that prostate motion is small with an in-plane displacement of less than 1 mm during the biopsy procedure.

  4. A 3-D measurement system using object-oriented FORTH

    SciTech Connect

    Butterfield, K.B.

    1989-01-01

    Discussed is a system for storing 3-D measurements of points that relates the coordinate system of the measurement device to the global coordinate system. The program described here used object-oriented FORTH to store the measured points as sons of the measuring device location. Conversion of local coordinates to absolute coordinates is performed by passing messages to the point objects. Modifications to the object-oriented FORTH system are also described. 1 ref.

  5. 3D vision based on PMD-technology for mobile robots

    NASA Astrophysics Data System (ADS)

    Roth, Hubert J.; Schwarte, Rudolf; Ruangpayoongsak, Niramon; Kuhle, Joerg; Albrecht, Martin; Grothof, Markus; Hess, Holger

    2003-09-01

    A series of micro-robots (MERLIN: Mobile Experimental Robots for Locomotion and Intelligent Navigation) has been designed and implemented for a broad spectrum of indoor and outdoor tasks on the basis of standardized functional modules such as sensors, actuators, and radio-link communication. The sensors onboard the MERLIN robot can be divided into two categories: internal sensors for low-level control and for measuring the state of the robot, and external sensors for obstacle detection, modeling of the environment, and position estimation and navigation of the robot in a global co-ordinate system. The special emphasis of this paper is on describing the capabilities of MERLIN for obstacle detection, target detection, and distance measurement. Besides ultrasonic sensors, a new camera based on PMD technology is used. This Photonic Mixer Device (PMD) represents a new electro-optic device that provides a smart interface between the world of incoherent optical signals and the world of their electronic signal processing. PMD technology directly enables 3D imaging by means of the time-of-flight (TOF) principle. It offers an extremely high potential for new solutions in the robotics application field. PMD technology opens up amazing new perspectives for obstacle detection systems, target acquisition, as well as mapping of unknown environments.

  6. 3D-Pathology: a real-time system for quantitative diagnostic pathology and visualisation in 3D

    NASA Astrophysics Data System (ADS)

    Gottrup, Christian; Beckett, Mark G.; Hager, Henrik; Locht, Peter

    2005-02-01

    This paper presents the results of the 3D-Pathology project conducted under the European EC Framework 5. The aim of the project was, through the application of 3D image reconstruction and visualization techniques, to improve the diagnostic and prognostic capabilities of medical personnel when analyzing pathological specimens using transmitted light microscopy. A fully automated, computer-controlled microscope system has been developed to capture 3D images of specimen content. 3D image reconstruction algorithms have been implemented and applied to the acquired volume data in order to facilitate the subsequent 3D visualization of the specimen. Three potential application fields, immunohistology, chromogenic in situ hybridization (CISH) and cytology, have been tested using the prototype system. For both immunohistology and CISH, use of the system furnished significant additional information to the pathologist.

  7. Visualizing Terrestrial and Aquatic Systems in 3-D

    EPA Science Inventory

    The environmental modeling community has a long-standing need for affordable, easy-to-use tools that support 3-D visualization of complex spatial and temporal model output. The Visualization of Terrestrial and Aquatic Systems project (VISTAS) aims to help scientists produce effe...

  8. Field calibration of binocular stereo vision based on fast reconstruction of 3D control field

    NASA Astrophysics Data System (ADS)

    Zhang, Haijun; Liu, Changjie; Fu, Luhua; Guo, Yin

    2015-08-01

    Construction of high-speed railways in China has entered a period of rapid growth. Accurately and quickly obtaining the dynamic envelope curve of a high-speed vehicle is an important guarantee of safe driving. The measuring system is based on binocular stereo vision. Considering the difficulties of field calibration, such as environmental changes and time limits, a field calibration method based on fast reconstruction of a three-dimensional control field was carried out. After rapid assembly of the pre-calibrated three-dimensional control field, whose coordinate accuracy is guaranteed by manufacturing accuracy and calibrated by V-STARS, the two cameras take a quick shot of it at the same time. The field calibration parameters are then solved by a method combining a linear solution with nonlinear optimization. Experimental results showed that the measurement accuracy can reach +/- 0.5 mm and, more importantly, that under the premise of guaranteed accuracy, the speed of the calibration and the portability of the devices have been improved considerably.
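    The "linear solution combined with nonlinear optimization" step can be illustrated with the generic two-view case below: a DLT linear triangulation of a control point followed by a reprojection-error refinement. This is only a sketch of the idea, not the authors' solver for the rig parameters themselves; the function names and the use of scipy are choices made here.

```python
import numpy as np
from scipy.optimize import least_squares

def triangulate_linear(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.
    P1, P2: (3, 4) projection matrices; x1, x2: (2,) pixel coordinates."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def refine_point(X0, P1, P2, x1, x2):
    """Nonlinear refinement of the linear estimate by minimising reprojection error."""
    def residual(X):
        Xh = np.append(X, 1.0)
        p1, p2 = P1 @ Xh, P2 @ Xh
        return np.concatenate([p1[:2] / p1[2] - x1, p2[:2] / p2[2] - x2])
    return least_squares(residual, X0).x
```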

  9. Industrial robot's vision systems

    NASA Astrophysics Data System (ADS)

    Iureva, Radda A.; Raskin, Evgeni O.; Komarov, Igor I.; Maltseva, Nadezhda K.; Fedosovsky, Michael E.

    2016-03-01

    Due to the improved economic situation in the high-technology sectors, work on the creation of industrial robots and special mobile robotic systems has resumed. Despite this, the robotic control systems have mostly remained unchanged, with all the advantages and disadvantages of these systems. This is due to a lack of funds, which could greatly facilitate the work of the operator and, in some cases, completely replace it. The paper is concerned with the complex machine vision of a robotic system for monitoring underground pipelines, which collects and analyzes up to 90% of the necessary information. Vision systems are used to identify obstacles to movement along a trajectory and to determine their origin, dimensions and character. The object is illuminated with structured light, and a TV camera records the projected structure. Distortions of the structure uniquely determine the shape of the object in the camera's view. The reference illumination is synchronized with the camera. The main parameters of the system are the base distance between the light generator and the camera and the parallax angle (the angle between the optical axes of the projection unit and the camera).

  10. Panoramic, large-screen, 3-D flight display system design

    NASA Technical Reports Server (NTRS)

    Franklin, Henry; Larson, Brent; Johnson, Michael; Droessler, Justin; Reinhart, William F.

    1995-01-01

    The report documents and summarizes the results of the required evaluations specified in the SOW and the design specifications for the selected display system hardware. Also included are the proposed development plan and schedule as well as the estimated rough order of magnitude (ROM) cost to design, fabricate, and demonstrate a flyable prototype research flight display system. The thrust of the effort was development of a complete understanding of the user/system requirements for a panoramic, collimated, 3-D flyable avionic display system and the translation of the requirements into an acceptable system design for fabrication and demonstration of a prototype display in the early 1997 time frame. Eleven display system design concepts were presented to NASA LaRC during the program, one of which was down-selected to a preferred display system concept. A set of preliminary display requirements was formulated. The state of the art in image source technology, 3-D methods, collimation methods, and interaction methods for a panoramic, 3-D flight display system were reviewed in depth and evaluated. Display technology improvements and risk reductions associated with maturity of the technologies for the preferred display system design concept were identified.

  11. Advanced system for 3D dental anatomy reconstruction and 3D tooth movement simulation during orthodontic treatment

    NASA Astrophysics Data System (ADS)

    Monserrat, Carlos; Alcaniz-Raya, Mariano L.; Juan, M. Carmen; Grau Colomer, Vincente; Albalat, Salvador E.

    1997-05-01

    This paper describes a new method for 3D orthodontic treatment simulation developed for an orthodontic planning system (MAGALLANES). We develop an original system for 3D capture and reconstruction of dental anatomy that avoids the use of dental casts in orthodontic treatments. Two original techniques are presented: a direct one, in which data are acquired directly from the patient's mouth by means of low-cost 3D digitizers, and a mixed one, in which data are obtained by 3D digitizing of hydrocolloid molds. For this purpose we have designed and manufactured an optimized optical measuring system based on laser structured light. We apply these 3D dental models to simulate the 3D movement of teeth, including rotations, during orthodontic treatment. The proposed algorithms make it possible to quantify the effect of an orthodontic appliance on tooth movement. The developed techniques have been integrated in a system named MAGALLANES. This original system presents several tools for 3D simulation and planning of orthodontic treatments. The prototype system has been tested in several orthodontic clinics with very good results.

  12. 3D Geological Model for "LUSI" - a Deep Geothermal System

    NASA Astrophysics Data System (ADS)

    Sohrabi, Reza; Jansen, Gunnar; Mazzini, Adriano; Galvan, Boris; Miller, Stephen A.

    2016-04-01

    Geothermal applications require the correct simulation of flow and heat transport processes in porous media, and many of these media, like deep volcanic hydrothermal systems, host a certain degree of fracturing. This work aims to understand the heat and fluid transport within a new-born, sedimentary-hosted geothermal system, termed Lusi, that began erupting in 2006 in East Java, Indonesia. Our goal is to develop conceptual and numerical models capable of simulating multiphase flow within large-scale fractured reservoirs such as the Lusi region, with fractures of arbitrary size, orientation and shape. Additionally, these models can also address a number of other applications, including Enhanced Geothermal Systems (EGS), CO2 sequestration (Carbon Capture and Storage, CCS), and nuclear waste isolation. Fractured systems are ubiquitous, with a wide range of lengths and scales, making it difficult to develop a general model that can easily handle this complexity. We are developing a flexible continuum approach with an efficient, accurate numerical simulator based on an appropriate 3D geological model representing the structure of the deep geothermal reservoir. Using previous studies, borehole information and seismic data obtained in the framework of the Lusi Lab project (ERC grant n°308126), we present here the first 3D geological model of Lusi. This model is calculated using an implicit 3D potential field or multiple potential fields, depending on the geological context and complexity. This method is based on a geological pile containing the geological history of the area and the relationships between geological bodies, allowing automatic computation of intersections and volume reconstruction. Based on the 3D geological model, we developed a new mesh algorithm to create hexahedral octree meshes to transfer the structural geological information to 3D numerical simulations that quantify Thermal-Hydraulic-Mechanical-Chemical (THMC) physical processes.

  13. Analysis of 3-D images of dental imprints using computer vision

    NASA Astrophysics Data System (ADS)

    Aubin, Michele; Cote, Jean; Laurendeau, Denis; Poussart, Denis

    1992-05-01

    This paper addressed two important aspects of dental analysis: (1) location and (2) identification of the types of teeth by means of 3-D image acquisition and segmentation. The 3-D images of both maxillaries are acquired using a wax wafer as support. The interstices between teeth are detected by non-linear filtering of the 3-D and grey-level data. Two operators are presented: one for the detection of the interstices between incisors, canines, and premolars and one for those between molars. Teeth are then identified by mapping the imprint under analysis on the computer model of an 'ideal' imprint. For the mapping to be valid, a set of three reference points is detected on the imprint. Then, the points are put in correspondence with similar points on the model. Two such points are chosen based on a least-squares fit of a second-order polynomial of the 3-D data in the area of canines. This area is of particular interest since the canines show a very characteristic shape and are easily detected on the imprint. The mapping technique is described in detail in the paper as well as pre-processing of the 3-D profiles. Experimental results are presented for different imprints.
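    A minimal sketch of the least-squares fit of a second-order polynomial over the canine area, assuming the patch is available as scattered (x, y, z) samples. The helper names and the idea of taking the fitted surface's critical point as a candidate reference point are illustrative, not the authors' implementation.

```python
import numpy as np

def fit_quadratic_surface(x, y, z):
    """Least-squares fit of z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f
    over a patch of 3-D profile data (e.g. the canine area)."""
    A = np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs  # (a, b, c, d, e, f)

def surface_extremum(coeffs):
    """Critical point of the fitted quadratic (a candidate cusp-tip location)."""
    a, b, c, d, e, _ = coeffs
    return np.linalg.solve(np.array([[2 * a, c], [c, 2 * b]]), -np.array([d, e]))
```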

  14. Measurement system for 3-D foot coordinates and parameters

    NASA Astrophysics Data System (ADS)

    Liu, Guozhong; Li, Yunhui; Wang, Boxiong; Shi, Hui; Luo, Xiuzhi

    2008-12-01

    A 3-D foot-shape measurement system based on the laser-line-scanning principle and a model of the measurement system are presented. Errors caused by the nonlinearity of the CCD cameras and by installation can be eliminated by using a global calibration method for the CCD cameras, which is based on a nonlinear coordinate mapping function and an optimization method. A local foot coordinate system is defined with the Pternion and the Acropodion extracted from the boundaries of the foot projections. The characteristic points can thus be located and the foot parameters extracted automatically from the local foot coordinate system and the related sections. Foot measurements for about 200 participants were conducted, and the measurement results for male and female participants are presented. Measurement of 3-D foot coordinates and parameters makes custom shoe-making possible and shows great promise in shoe design, foot orthopaedic treatment, shoe-size standardization, and the establishment of a foot database for consumers.
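    A minimal sketch of a local foot frame built from the two landmarks named above, assuming 2-D outline points and known Pternion (heel) and Acropodion (toe tip) coordinates; the function name and the derived foot-length value are illustrative only.

```python
import numpy as np

def foot_frame(pternion, acropodion, points):
    """Express 2-D foot-outline points in a local frame whose origin is the
    Pternion and whose x-axis points toward the Acropodion."""
    x_axis = acropodion - pternion
    x_axis = x_axis / np.linalg.norm(x_axis)
    y_axis = np.array([-x_axis[1], x_axis[0]])   # 90 degrees counter-clockwise
    R = np.vstack([x_axis, y_axis])              # rows are the new axes
    local = (points - pternion) @ R.T
    foot_length = local[:, 0].max()              # one simple derived parameter
    return local, foot_length
```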

  15. Development of an advanced 3D cone beam tomographic system

    NASA Astrophysics Data System (ADS)

    Sire, Pascal; Rizo, Philippe; Martin, M.; Grangeat, Pierre; Morisseau, P.

    Due to its high spatial resolution, the 3D X-ray cone-beam tomograph (CT) maximizes understanding of test object microstructure. In order for the present X-ray CT NDT system to control ceramics and ceramic-matrix composites, its spatial resolution must exceed 50 microns. Attention is given to two experimental data reconstructions that have been conducted to illustrate system capabilities.

  16. Vision-based building energy diagnostics and retrofit analysis using 3D thermography and building information modeling

    NASA Astrophysics Data System (ADS)

    Ham, Youngjib

    localization issues of 2D thermal image-based inspection, a new computer vision-based method is presented for automated 3D spatio-thermal modeling of building environments from images and localizing the thermal images into the 3D reconstructed scenes, which helps better characterize the as-is condition of existing buildings in 3D. By using these models, auditors can conduct virtual walk-throughs in buildings and explore the as-is condition of building geometry and the associated thermal conditions in 3D. Second, to address the challenges in qualitative and subjective interpretation of visual data, a new model-based method is presented to convert the 3D thermal profiles of building environments into their associated energy performance metrics. More specifically, the Energy Performance Augmented Reality (EPAR) models are formed which integrate the actual 3D spatio-thermal models ('as-is') with energy performance benchmarks ('as-designed') in 3D. In the EPAR models, the presence and location of potential energy problems in building environments are inferred based on performance deviations. The as-is thermal resistances of the building assemblies are also calculated at the level of mesh vertices in 3D. Then, based on the historical weather data reflecting the energy load for space conditioning, the amount of heat transfer that can be saved by improving the as-is thermal resistances of the defective areas to the recommended level is calculated, and the equivalent energy cost for this saving is estimated. The outcome provides building practitioners with unique information that can facilitate energy-efficient retrofit decision-making. This is a major departure from offhand calculations that are based on historical cost data of industry best practices. Finally, to improve the reliability of BIM-based energy performance modeling and analysis for existing buildings, a new model-based automated method is presented to map actual thermal resistance measurements at the level of 3D vertices to the
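    The heat-transfer saving described in this abstract can be approximated with a standard degree-day calculation, sketched below. This is not the author's exact formulation; the function signature, the default system efficiency, and the simplification that the saving scales with the U-value improvement times area and degree-days are assumptions made here.

```python
def heat_transfer_saving(area_m2, r_asis, r_recommended, degree_days_C,
                         fuel_price_per_kwh, system_efficiency=0.9):
    """Rough annual transmission-loss saving for one defective assembly area.
    r_asis, r_recommended: thermal resistances in m^2*K/W (r_recommended > r_asis).
    degree_days_C: annual heating degree-days in K*day."""
    u_asis = 1.0 / r_asis                  # W/(m^2*K)
    u_rec = 1.0 / r_recommended
    # degree-days -> K*hours, then W*h -> kWh
    q_saved_kwh = (u_asis - u_rec) * area_m2 * degree_days_C * 24.0 / 1000.0
    cost_saving = q_saved_kwh / system_efficiency * fuel_price_per_kwh
    return q_saved_kwh, cost_saving

# e.g. 2 m^2 of wall improved from R = 0.8 to R = 3.0 m^2*K/W over 2800 K*day
print(heat_transfer_saving(2.0, 0.8, 3.0, 2800.0, 0.12))
```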

  17. 3D gel printing for soft-matter systems innovation

    NASA Astrophysics Data System (ADS)

    Furukawa, Hidemitsu; Kawakami, Masaru; Gong, Jin; Makino, Masato; Kabir, M. Hasnat; Saito, Azusa

    2015-04-01

    In the past decade, several high-strength gels have been developed, especially in Japan. These gels are expected to be used as a new kind of engineering material in industry and medicine, as substitutes for polyester fibers, which are the material of artificial blood vessels. We consider that if various gel materials, including such high-strength gels, are 3D-printable, many new soft and wet systems will be developed, since gels of the most intricate shapes can be printed regardless of their softness and brittleness. Recently we have tried to develop an optical 3D gel printer to realize the free-form formation of gel materials. We named this apparatus the Easy Realizer of Soft and Wet Industrial Materials (SWIM-ER). The SWIM-ER will be applied to print bespoke artificial organs, including artificial blood vessels, which may be used for both surgical training and actual surgery. The SWIM-ER can print one of the world's strongest gels, called Double-Network (DN) gels, by using UV irradiation through an optical fiber. We are also now developing another type of 3D gel printer for foods, named E-Chef. We believe these new 3D gel printers will broaden the applications of soft-matter gels.

  18. Progress in building a cognitive vision system

    NASA Astrophysics Data System (ADS)

    Benjamin, D. Paul; Lyons, Damian; Yue, Hong

    2016-05-01

    We are building a cognitive vision system for mobile robots that works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion to create a local dynamic spatial model. These local 3D models are composed to create an overall 3D model of the robot and its environment. This approach turns the computer vision problem into a search problem whose goal is the acquisition of sufficient spatial understanding for the robot to succeed at its tasks. The research hypothesis of this work is that the movements of the robot's cameras are only those that are necessary to build a sufficiently accurate world model for the robot's current goals. For example, if the goal is to navigate through a room, the model needs to contain any obstacles that would be encountered, giving their approximate positions and sizes. Other information does not need to be rendered into the virtual world, so this approach trades model accuracy for speed.

  19. Modelling of 3D fractured geological systems - technique and application

    NASA Astrophysics Data System (ADS)

    Cacace, M.; Scheck-Wenderoth, M.; Cherubini, Y.; Kaiser, B. O.; Bloecher, G.

    2011-12-01

    All rocks in the earth's crust are fractured to some extent. Faults and fractures are important in different scientific and industrial fields, comprising engineering, geotechnical, and hydrogeological applications. Many petroleum, gas, geothermal, and water-supply reservoirs form in faulted and fractured geological systems. Additionally, faults and fractures may control the transport of chemical contaminants into and through the subsurface. Depending on their origin and orientation with respect to the recent and palaeo-stress field as well as on the overall kinematics of chemical processes occurring within them, faults and fractures can act either as hydraulic conductors providing preferential pathways for fluid to flow or as barriers preventing flow across them. The main challenge in modelling processes occurring in fractured rocks is related to the way the heterogeneities of such geological systems are described. Flow paths are controlled by the geometry of faults and their open void space. To correctly simulate these processes, an adequate 3D mesh is a basic requirement. Unfortunately, the representation of realistic 3D geological environments is limited by the complexity of embedded fracture networks, often resulting in oversimplified models of the natural system. A technical description of an improved method to integrate generic dipping structures (representing faults and fractures) into a 3D porous medium is put forward. The automated mesh generation algorithm is composed of various existing routines from computational geometry (e.g. 2D-3D projection, interpolation, intersection, convex hull calculation) and meshing (e.g. triangulation in 2D and tetrahedralization in 3D). All routines have been combined in an automated software framework and the robustness of the approach has been tested and verified. These techniques and methods can be applied to fractured porous media, including fault systems, and have therefore found wide applications in different geo-energy related
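    The computational-geometry building blocks listed above (2D-3D projection, convex hull calculation, 2D triangulation, 3D tetrahedralization) can be exercised with off-the-shelf routines, as in the sketch below. It uses scipy in place of the authors' framework and deliberately skips the hard part, namely making the tetrahedral mesh conform to the fracture surfaces.

```python
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

rng = np.random.default_rng(0)

# Local 2-D sample points on one dipping fracture.
uv = rng.uniform(-1.0, 1.0, size=(50, 2))

# 2D -> 3D projection: embed the plane with two in-plane unit vectors (60 deg dip).
origin = np.array([0.0, 0.0, -100.0])
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, np.cos(np.deg2rad(60.0)), np.sin(np.deg2rad(60.0))])
pts3d = origin + uv[:, :1] * e1 + uv[:, 1:] * e2

hull = ConvexHull(uv)        # convex hull of the fracture trace
tri = Delaunay(uv)           # 2D triangulation of the fracture surface
block = rng.uniform(-2.0, 2.0, size=(200, 3))
tets = Delaunay(block)       # 3D tetrahedralization of the surrounding volume

print(pts3d.shape, len(hull.vertices), tri.simplices.shape, tets.simplices.shape)
```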

  20. Laboratory 3D Micro-XRF/Micro-CT Imaging System

    NASA Astrophysics Data System (ADS)

    Bruyndonckx, P.; Sasov, A.; Liu, X.

    2011-09-01

    A prototype micro-XRF laboratory system based on pinhole imaging was developed to produce 3D elemental maps. The fluorescence x-rays are detected by a deep-depleted CCD camera operating in photon-counting mode. A charge-clustering algorithm, together with dynamically adjusted exposure times, ensures a correct energy measurement. The XRF component has a spatial resolution of 70 μm and an energy resolution of 180 eV at 6.4 keV. The system is augmented by a micro-CT imaging modality. This is used for attenuation correction of the XRF images and to co-register features in the 3D XRF images with morphological structures visible in the volumetric CT images of the object.
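    A charge-clustering step of the kind mentioned above might look like the sketch below: neighbouring above-threshold pixels are grouped into one photon event and their charge is summed into a single energy estimate. The threshold and the 8-connectivity choice are assumptions for illustration, not the instrument's actual algorithm.

```python
import numpy as np
from scipy import ndimage

def cluster_photon_events(frame, threshold):
    """Group neighbouring above-threshold pixels into photon events, summing the
    split charge so that each event yields one energy estimate."""
    mask = frame > threshold
    structure = np.ones((3, 3))                         # 8-connected clusters
    labels, n = ndimage.label(mask, structure=structure)
    idx = np.arange(1, n + 1)
    energies = ndimage.sum(frame, labels, index=idx)    # total charge per event
    positions = ndimage.center_of_mass(frame, labels, index=idx)
    return np.asarray(positions), np.asarray(energies)
```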

  1. A review of computer-aided body surface area determination: SAGE II and EPRI's 3D Burn Vision.

    PubMed

    Neuwalder, J M; Sampson, C; Breuing, K H; Orgill, D P

    2002-01-01

    Estimates of percent body surface area (%BSA) burns correlate well with fluid needs, nutritional requirements, and prognosis. Most burn centers rely on the Lund-Browder chart and the "rule of nines" to calculate the %BSA. Computer-based methods may improve precision and data analysis. We studied two new methods of determining %BSA: a two-dimensional Web-based program (Sage II) and a three-dimensional computer-aided design program (EPRI 3D Burn Vision). Members of our burn team found the Sage II program easy to use and found many of the features useful for patient care. The EPRI program has the advantage of 3D images and different body morphologies but required training to use. Computer-aided methods offer the potential for improved precision and data analysis of %BSA measurements.

  2. Effective 3-D surface modeling for geographic information systems

    NASA Astrophysics Data System (ADS)

    Yüksek, K.; Alparslan, M.; Mendi, E.

    2016-01-01

    In this work, we propose a dynamic, flexible and interactive urban digital terrain platform with spatial data and query processing capabilities of geographic information systems, multimedia database functionality and graphical modeling infrastructure. A new data element, called Geo-Node, which stores image, spatial data and 3-D CAD objects is developed using an efficient data structure. The system effectively handles data transfer of Geo-Nodes between main memory and secondary storage with an optimized directional replacement policy (DRP) based buffer management scheme. Polyhedron structures are used in digital surface modeling and smoothing process is performed by interpolation. The experimental results show that our framework achieves high performance and works effectively with urban scenes independent from the amount of spatial data and image size. The proposed platform may contribute to the development of various applications such as Web GIS systems based on 3-D graphics standards (e.g., X3-D and VRML) and services which integrate multi-dimensional spatial information and satellite/aerial imagery.

  3. 3D-LZ helicopter ladar imaging system

    NASA Astrophysics Data System (ADS)

    Savage, James; Harrington, Walter; McKinley, R. Andrew; Burns, H. N.; Braddom, Steven; Szoboszlay, Zoltan

    2010-04-01

    A joint-service team led by the Air Force Research Laboratory's Munitions and Sensors Directorates completed a successful flight test demonstration of the 3D-LZ Helicopter LADAR Imaging System. This was a milestone demonstration in the development of technology solutions for a problem known as "helicopter brownout", the loss of situational awareness caused by swirling sand during approach and landing. The 3D-LZ LADAR was developed by H.N. Burns Engineering and integrated with the US Army Aeroflightdynamics Directorate's Brown-Out Symbology System aircraft state symbology aboard a US Army EH-60 Black Hawk helicopter. The combination of these systems provided an integrated degraded visual environment landing solution with landing zone situational awareness as well as aircraft guidance and obstacle avoidance information. Pilots from the U.S. Army, Air Force, Navy, and Marine Corps achieved a 77% landing rate in full brownout conditions at a test range at Yuma Proving Ground, Arizona. This paper will focus on the LADAR technology used in 3D-LZ and the results of this milestone demonstration.

  4. 3D temperature field reconstruction using ultrasound sensing system

    NASA Astrophysics Data System (ADS)

    Liu, Yuqian; Ma, Tong; Cao, Chengyu; Wang, Xingwei

    2016-04-01

    3D temperature field reconstruction is of practical interest to the power, transportation and aviation industries, and it also opens up opportunities for real-time control or optimization of high-temperature fluid or combustion processes. In our paper, a new distributed optical fiber sensing system consisting of a series of elements will be used to generate and receive acoustic signals. This system is the first active temperature field sensing system that features the advantages of the optical fiber sensors (distributed sensing capability) and the acoustic sensors (non-contact measurement). Signals along multiple paths will be measured simultaneously, enabled by a code division multiple access (CDMA) technique. A proposed Gaussian Radial Basis Function (GRBF)-based approach then approximates the temperature field as a finite summation of space-dependent basis functions and time-dependent coefficients. The travel time of the acoustic signals depends on the temperature of the medium. On this basis, the Gaussian functions are integrated along a number of paths which are determined by the number and distribution of sensors. The inversion problem to estimate the unknown parameters of the Gaussian functions can be solved with the measured times-of-flight (ToF) of the acoustic waves and the lengths of the propagation paths using the recursive least-squares (RLS) method. The simulation results show approximation errors of less than 2% in 2D and 5% in 3D, respectively. This demonstrates the feasibility and efficiency of the proposed 3D temperature field reconstruction mechanism.
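    A hedged sketch of the reconstruction pipeline described above: the field is expanded in Gaussian radial basis functions, each measured time-of-flight is modelled as a line integral of that expansion along its source-receiver path, and the weights are updated by recursive least squares. Treating the weights as an acoustic-slowness field (so the ToF model is linear in them) and converting to temperature with c = 20.05*sqrt(T) for air are assumptions of this sketch, not necessarily the authors' exact formulation.

```python
import numpy as np

def gaussian_rbf(x, centers, sigma):
    """Gaussian radial basis functions evaluated at points x: (N, 2) -> (N, K)."""
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def path_integrals(src, rcv, centers, sigma, n_samples=200):
    """Approximate line integral of each basis function along the src -> rcv path."""
    s = np.linspace(0.0, 1.0, n_samples)[:, None]
    pts = src + s * (rcv - src)
    ds = np.linalg.norm(rcv - src) / (n_samples - 1)
    return gaussian_rbf(pts, centers, sigma).sum(0) * ds   # shape (K,)

def rls_update(w, P, a, t_meas, lam=1.0):
    """One recursive-least-squares step for the model t_meas = a @ w + noise."""
    Pa = P @ a
    k = Pa / (lam + a @ Pa)
    w = w + k * (t_meas - a @ w)
    P = (P - np.outer(k, Pa)) / lam
    return w, P

# Usage outline: w = np.zeros(K); P = 1e3 * np.eye(K); for each measured path,
# a = path_integrals(src, rcv, centers, sigma); w, P = rls_update(w, P, a, tof).
# With the expansion approximating slowness 1/c(x) and c = 20.05*sqrt(T) in air,
# temperature follows as T = (1 / (20.05 * slowness))**2 at any point.
```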

  5. Visions image operating system

    SciTech Connect

    Kohler, R.R.; Hanson, A.R.

    1982-01-01

    The image operating system is a complete software environment specifically designed for dynamic experimentation in scene analysis. The IOS consists of a high-level interpretive control language (LISP) with efficient image operators in a noninterpretive language. The image operators are viewed as local operators to be applied in parallel at all pixels to a set of input images. In order to carry out complex image analysis experiments an environment conducive to such experimentation was needed. This environment is provided by the visions image operating system based on a computational structure known as a processing cone proposed by Hanson and Riseman (1974, 1980) and implemented on a VAX-11/780 running VMS. 6 references.

  6. FROMS3D: New Software for 3-D Visualization of Fracture Network System in Fractured Rock Masses

    NASA Astrophysics Data System (ADS)

    Noh, Y. H.; Um, J. G.; Choi, Y.

    2014-12-01

    A new software package (FROMS3D) is presented to visualize fracture network systems in 3-D. The software consists of several modules that handle management of borehole and field fracture data, fracture network modelling, visualization of fracture geometry in 3-D, and calculation and visualization of intersections and equivalent pipes between fractures. Intel Parallel Studio XE 2013, Visual Studio.NET 2010, and the open-source VTK library were utilized as development tools to efficiently implement the modules and the graphical user interface of the software. The results suggest that the developed software is effective in visualizing 3-D fracture network systems and can provide useful information for tackling engineering geological problems related to the strength, deformability, and hydraulic behavior of fractured rock masses.
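    Not the FROMS3D code itself (which is built with Visual Studio and Intel Parallel Studio), but a minimal Python/VTK analogue showing how circular fracture discs could be rendered with the same open-source VTK library; the disc geometry and the choice of vtkRegularPolygonSource are assumptions made for this sketch.

```python
import vtk

def fracture_disc(center, normal, radius):
    """One circular fracture approximated by a many-sided polygonal disc."""
    src = vtk.vtkRegularPolygonSource()
    src.SetNumberOfSides(48)
    src.SetRadius(radius)
    src.SetCenter(*center)
    src.SetNormal(*normal)
    mapper = vtk.vtkPolyDataMapper()
    mapper.SetInputConnection(src.GetOutputPort())
    actor = vtk.vtkActor()
    actor.SetMapper(mapper)
    return actor

renderer = vtk.vtkRenderer()
renderer.AddActor(fracture_disc((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 5.0))
renderer.AddActor(fracture_disc((2.0, 1.0, 0.0), (1.0, 1.0, 1.0), 4.0))
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)
window.Render()
interactor.Start()
```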

  7. A Skeleton-Based 3D Shape Reconstruction of Free-Form Objects with Stereo Vision

    NASA Astrophysics Data System (ADS)

    Saini, Deepika; Kumar, Sanjeev

    2015-12-01

    In this paper, an efficient approach is proposed for recovering the 3D shape of a free-form object from an arbitrary pair of its stereo images. In particular, the reconstruction problem is treated as the reconstruction of the skeleton and the external boundary of the object. The reconstructed skeleton is termed the line-like representation or curve-skeleton of the 3D object. The proposed solution for object reconstruction is based on this evolved curve-skeleton. It is used as a seed for recovering the shape of the 3D object, and the extracted boundary is used for terminating the growing process of the object. A NURBS-skeleton is used to extract the skeleton of both views. The affine-invariant property of convex hulls is used to establish the correspondence between the skeletons and boundaries in the stereo images. In the growing process, a distance field is defined for each skeleton point as the smallest distance from that point to the boundary of the object. A sphere centered at a skeleton point of radius equal to the minimum distance to the boundary is tangential to the boundary. Filling in the spheres centered at each skeleton point reconstructs the object. Several results are presented in order to check the applicability and validity of the proposed algorithm.
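    The growing process can be pictured as a union-of-spheres rasterisation over a voxel grid, as in the sketch below. The grid representation and the function name are assumptions; the paper grows spheres against the extracted boundary rather than on a fixed grid.

```python
import numpy as np

def fill_spheres(skeleton_pts, radii, grid_shape, voxel_size):
    """Mark every voxel whose centre lies inside a sphere centred at a skeleton
    point with the local distance-field radius (union of spheres)."""
    occ = np.zeros(grid_shape, dtype=bool)
    ii, jj, kk = np.indices(grid_shape)
    centers = np.stack([ii, jj, kk], axis=-1) * voxel_size + voxel_size / 2.0
    for p, r in zip(skeleton_pts, radii):       # fine for sketch-sized grids
        d2 = ((centers - p) ** 2).sum(-1)
        occ |= d2 <= r * r
    return occ
```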

  8. Vision based flight procedure stereo display system

    NASA Astrophysics Data System (ADS)

    Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng

    2008-03-01

    A virtual-reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on a Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approach area database is established through a computer 3D modeling system and GIS. The area texture is generated from the remote sensing photos and aerial photographs at various levels of detail. According to the flight approach procedure, the flight navigation information is linked to the database. The flight approach area view can be dynamically displayed according to the designed flight procedure. The flight approach area images are rendered in two channels, one for left-eye images and the other for right-eye images. Through the polarized stereoscopic projection system, the pilots and aircrew can get a vivid 3D view of the flight destination approach area. By using this system in the pilots' preflight preparation, the aircrew can get more vivid information about the flight destination approach area. This system can improve the aviator's self-confidence before carrying out the flight mission, and accordingly flight safety is improved. The system is also useful for validating the visual flight procedure design and assists in flight procedure design.

  9. Self-adaptive Vision System

    NASA Astrophysics Data System (ADS)

    Stipancic, Tomislav; Jerbic, Bojan

    Lighting conditions are an important part of every vision application. This paper describes an active behavioral scheme of one particular active vision system. This behavioral scheme enables the active system to adapt to current environmental conditions by constantly validating the amount of reflected light using a luminance meter and dynamically changing significant vision parameters. The purpose of the experiment was to determine the connections between lighting conditions and internal vision parameters. As part of the experiment, Response Surface Methodology (RSM) was used to predict the values of vision parameters with respect to luminance input values. RSM was used to approximate an unknown function for which only a few values were computed. The main output validation system parameter is called the Match Score, which indicates how well the found object matches the learned model. All obtained data are stored in a local database. By applying the new parameters predicted by the RSM in a timely manner, the vision application works stably and robustly.
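    A toy version of the RSM step, assuming a single vision parameter is predicted from luminance with a second-order polynomial; the calibration numbers below are invented for illustration, and the real system presumably fits several parameters against measured Match Scores.

```python
import numpy as np

# Hypothetical calibration data: scene luminance (cd/m^2) and the parameter value
# (e.g. an exposure setting) that gave the best Match Score at each level.
luminance = np.array([50.0, 120.0, 260.0, 480.0, 800.0])
best_setting = np.array([40.0, 26.0, 15.0, 9.0, 6.0])

# Second-order response-surface fit: setting ~ b0 + b1*L + b2*L^2.
model = np.poly1d(np.polyfit(luminance, best_setting, deg=2))

# At run time the luminance-meter reading selects the predicted parameter value.
print(model(300.0))
```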

  10. State of the art of 3D scanning systems and inspection of textile surfaces

    NASA Astrophysics Data System (ADS)

    Montilla, M.; Orjuela-Vargas, S. A.; Philips, W.

    2014-02-01

    The rapid development of hardware and software in the digital image processing field has boosted research in computer vision for applications in industry. The development of new electronic devices and the tendency for their prices to decrease make possible new developments that a few decades ago existed only in the imagination. This is the case for 3D imaging technology, which permits the detection of failures in industrial products by inspecting aspects of their 3D surface. In search of an optimal solution for scanning textiles, we present in this paper a review of existing techniques for digitizing 3D surfaces. Topographic details of textiles can be obtained by digitizing surfaces using laser-line triangulation, phase-shifting optical triangulation, projected light, stereo-vision systems, and silhouette analysis. Although we focus on methods that have been used in the textile industry, we also consider potential mechanisms used for other applications. We discuss the advantages and disadvantages of the evaluated methods and provide a summary of potential implementations for the textile industry.
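    All of the triangulation-based techniques listed above rest on the same baseline geometry; a minimal sketch of the range-from-disparity relation for a rectified camera/projector pair is given below (the symbols f, B, and d are generic, not tied to any particular scanner).

```python
def range_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a surface point for a rectified camera/laser-projector pair:
    Z = f * B / d, the basic relation behind triangulation-based scanners."""
    return focal_px * baseline_m / disparity_px

# e.g. f = 1200 px, B = 0.1 m, observed laser-line shift d = 24 px  ->  Z = 5.0 m
print(range_from_disparity(24.0, 1200.0, 0.1))
```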

  11. 3D laser optoacoustic ultrasonic imaging system for preclinical research

    NASA Astrophysics Data System (ADS)

    Ermilov, Sergey A.; Conjusteau, André; Hernandez, Travis; Su, Richard; Nadvoretskiy, Vyacheslav; Tsyboulski, Dmitri; Anis, Fatima; Anastasio, Mark A.; Oraevsky, Alexander A.

    2013-03-01

    In this work, we introduce a novel three-dimensional imaging system for in vivo high-resolution anatomical and functional whole-body visualization of small animal models developed for preclinical or other types of biomedical research. The system (LOUIS-3DM) combines multi-wavelength optoacoustic and ultrawide-band laser ultrasound tomographies to obtain coregistered maps of tissue optical absorption and acoustic properties, displayed within the skin outline of the studied animal. The most promising applications of the LOUIS-3DM include 3D angiography, cancer research, and longitudinal studies of the biological distribution of optoacoustic contrast agents (carbon nanotubes, metal plasmonic nanoparticles, etc.).

  12. Facial-paralysis diagnostic system based on 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Khairunnisaa, Aida; Basah, Shafriza Nisha; Yazid, Haniza; Basri, Hassrizal Hassan; Yaacob, Sazali; Chin, Lim Chee

    2015-05-01

    The diagnostic process for facial paralysis requires qualitative assessment for classification and treatment planning. This results in inconsistent assessments that potentially affect treatment planning. We developed a facial-paralysis diagnostic system based on 3D reconstruction of RGB and depth data using a standard structured-light camera - the Kinect 360 - and an implementation of Active Appearance Models (AAM). We also propose a quantitative assessment of facial paralysis based on a triangular model. In this paper, we report on the design and development process, including preliminary experimental results, which demonstrate the feasibility of our quantitative assessment system for diagnosing facial paralysis.

  13. Three-Dimensional Air Quality System (3D-AQS)

    NASA Astrophysics Data System (ADS)

    Engel-Cox, J.; Hoff, R.; Weber, S.; Zhang, H.; Prados, A.

    2007-12-01

    The 3-Dimensional Air Quality System (3D-AQS) integrates remote sensing observations from a variety of platforms into air quality decision support systems at the U.S. Environmental Protection Agency (EPA), with a focus on particulate air pollution. The decision support systems are the Air Quality System (AQS) / AirQuest database at EPA, the Infusing Satellite Data into Environmental Applications (IDEA) system, the U.S. Air Quality weblog (Smog Blog) at UMBC, and the Regional East Atmospheric Lidar Mesonet (REALM). The project includes an end-user advisory group with representatives from the air quality community providing ongoing feedback. The 3D-AQS data sets are UMBC ground-based LIDAR data and NASA and NOAA satellite data from MODIS, OMI, AIRS, CALIPSO, MISR, and GASP. Based on end-user input, we are co-locating these measurements with the EPA's ground-based air pollution monitors as well as re-gridding them to the Community Multiscale Air Quality (CMAQ) model grid. These data provide forecasters and the scientific community with a tool for assessment, analysis, and forecasting of U.S. air quality. The third dimension and the ability to analyze the vertical transport of particulate pollution are provided by aerosol extinction profiles from the UMBC LIDAR and CALIPSO. We present examples of a 3D visualization tool we are developing to facilitate use of these data. We also present two specific applications of 3D-AQS data. The first is a comparison between PM2.5 monitor data and remote sensing aerosol optical depth (AOD) data, which shows moderate agreement but variation with EPA region. The second is a case study for Baltimore, Maryland, as an example of 3D analysis for a metropolitan area. In that case, some improvement is found in the PM2.5/LIDAR correlations when using vertical aerosol information to calculate an AOD below the boundary layer.

  14. Simulation of 3D flows past hypersonic vehicles in FlowVision software

    NASA Astrophysics Data System (ADS)

    Aksenov, A. A.; Zhluktov, S. V.; Savitskiy, D. V.; Bartenev, G. Y.; Pokhilko, V. I.

    2015-11-01

    A new implicit velocity-pressure split method is discussed in this presentation. The method uses the conservative velocities obtained at the current time step to integrate the momentum equation and the other convection-diffusion equations. This enables simulation of supersonic and hypersonic flows while accounting for the motion of solid boundaries. Calculations of known test cases performed in the FlowVision software are demonstrated. It is shown that the method allows calculations at high Mach numbers with an integration step substantially exceeding the explicit time step.

  15. 3D in vitro modeling of the central nervous system.

    PubMed

    Hopkins, Amy M; DeSimone, Elise; Chwalek, Karolina; Kaplan, David L

    2015-02-01

    There are currently more than 600 diseases characterized as affecting the central nervous system (CNS) which inflict neural damage. Unfortunately, few of these conditions have effective treatments available. Although significant efforts have been put into developing new therapeutics, drugs which were promising in the developmental phase have high attrition rates in late stage clinical trials. These failures could be circumvented if current 2D in vitro and in vivo models were improved. 3D, tissue-engineered in vitro systems can address this need and enhance clinical translation through two approaches: (1) bottom-up, and (2) top-down (developmental/regenerative) strategies to reproduce the structure and function of human tissues. Critical challenges remain including biomaterials capable of matching the mechanical properties and extracellular matrix (ECM) composition of neural tissues, compartmentalized scaffolds that support heterogeneous tissue architectures reflective of brain organization and structure, and robust functional assays for in vitro tissue validation. The unique design parameters defined by the complex physiology of the CNS for construction and validation of 3D in vitro neural systems are reviewed here.

  17. Rapid prototyping with optical 3D measurement systems

    NASA Astrophysics Data System (ADS)

    Gaessler, J.; Blount, G. N.; Jones, R. M.

    1994-11-01

    One of the important tools for speeding up the prototyping of a new industrial or consumer product is the rapid generation of CAD data from hand-made styling models and moulds. We present a new optical 3D digitizing system which produces, in a fully automatic way, non-ambiguous, absolute and complete surface coordinate data of very complex objects in a short time. The system, named 'OptoShape', is based on the projection of sinusoidal fringes with a true grey-level matrix projector. The system measures both non-ambiguous and absolute XYZ surface data with a pronounced robustness towards optical surface properties. By moving the 3D sensor head around the object to be digitized with a 3/5-axis manipulator, multiple range images are obtained and automatically merged into a unified cloud of point coordinates. This set of surface coordinates is transferred to a software package where interactive manipulation, sectioning and semi-automatic generation of CAD surface descriptions are performed. CNC data can also be directly generated from the point surface coordinate data set.

  18. Handheld camera 3D modeling system using multiple reference panels

    NASA Astrophysics Data System (ADS)

    Fujimura, Kouta; Oue, Yasuhiro; Terauchi, Tomoya; Emi, Tetsuichi

    2002-03-01

    A novel 3D modeling system in which a target object is easily captured and modeled by using a hand-held camera with several reference panels is presented in this paper. The reference panels are designed so that the camera position can be obtained and the panels can be discriminated from each other. A conventional 3D modeling system using a single reference panel has several restrictions regarding the target object, specifically its size and location. Our system uses multiple reference panels, which are set around the target object to remove these restrictions. The main features of this system are as follows: 1) The whole shape and photo-realistic textures of the target object can be digitized from several still images or a movie captured with a hand-held camera, as well as the location of the camera for each image, which can be calculated using the reference panels. 2) Our system can be provided as a software-only product. There are no special hardware requirements, not even for the reference panels, because they can be printed from image files or software. 3) The system can be applied to digitize larger objects. In the experiments, we developed and used an interactive region selection tool to extract the silhouette from each image instead of using the chroma-keying method. We have tested our system with a toy object. The calculation time is about 10 minutes (excluding capturing the images and extracting the silhouettes with our tool) on a personal computer with a Pentium-III processor (600 MHz) and 320 MB of memory. However, it depends on how complex the images are and how many images are used. Our future plan is to evaluate the system with various kinds of objects, specifically large ones in outdoor environments.
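
    A minimal sketch of the shape-from-silhouette step such a pipeline relies on: carve a voxel grid using binary silhouette masks and the per-view projection matrices recovered from the reference panels. The inputs ('views' as (P, mask) pairs, grid bounds) are hypothetical, not the paper's actual data structures.

    ```python
    # Silhouette-based voxel carving (visual hull), generic implementation.
    import numpy as np

    def carve(views, grid_min, grid_max, n=64):
        xs = [np.linspace(grid_min[i], grid_max[i], n) for i in range(3)]
        X, Y, Z = np.meshgrid(*xs, indexing="ij")
        pts = np.stack([X, Y, Z, np.ones_like(X)], axis=-1).reshape(-1, 4)  # homogeneous voxel centres
        occupied = np.ones(len(pts), dtype=bool)
        for P, mask in views:                      # P: 3x4 projection matrix, mask: binary silhouette
            proj = pts @ P.T
            u = proj[:, 0] / proj[:, 2]
            v = proj[:, 1] / proj[:, 2]
            h, w = mask.shape
            inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
            keep = np.zeros(len(pts), dtype=bool)
            keep[inside] = mask[v[inside].astype(int), u[inside].astype(int)] > 0
            occupied &= keep                       # carve away voxels outside any silhouette
        return pts[occupied, :3]
    ```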

  19. Developmental neurotoxic effects of Malathion on 3D neurosphere system

    PubMed Central

    Salama, Mohamed; Lotfy, Ahmed; Fathy, Khaled; Makar, Maria; El-emam, Mona; El-gamal, Aya; El-gamal, Mohamed; Badawy, Ahmad; Mohamed, Wael M.Y.; Sobh, Mohamed

    2015-01-01

    Developmental neurotoxicity (DNT) refers to the toxic effects induced by various chemicals on the brain during early childhood. Because human brains are vulnerable during this period, such chemicals can have significant effects on the developing brain. Some toxicants have been confirmed to induce developmental toxic effects on the CNS; however, most agents cannot be identified with certainty, because available animal models do not cover the whole spectrum of CNS developmental periods. A novel alternative method that can overcome most of the limitations of the conventional techniques is the use of the 3D neurosphere system. This in-vitro system can recapitulate many of the changes that occur during brain development, making it an ideal model for predicting developmental neurotoxic effects. In the present study we investigated the possible DNT of Malathion, an organophosphate pesticide with suggested neurotoxic effects on nursing children. Three doses of Malathion (0.25 μM, 1 μM and 10 μM) were applied to cultured neurospheres for a period of 14 days. Malathion was found to affect the proliferation, differentiation and viability of the neurospheres; these effects were positively correlated with dose and exposure time. This study confirms the DNT effects of Malathion in the 3D neurosphere model. Further epidemiological studies will be needed to link these results to human exposure and effects data. PMID:27054080

  20. Inertial Pocket Navigation System: Unaided 3D Positioning.

    PubMed

    Diaz, Estefania Munoz

    2015-01-01

    Inertial navigation systems use dead-reckoning to estimate the pedestrian's position. There are two types of pedestrian dead-reckoning, the strapdown algorithm and the step-and-heading approach. Unlike the strapdown algorithm, which consists of the double integration of the three orthogonal accelerometer readings, the step-and-heading approach lacks a vertical displacement estimate. We propose the first step-and-heading approach based on unaided inertial data that solves 3D positioning. We present a step detector for steps up and down and a novel vertical displacement estimator. Our navigation system uses a sensor placed in the front pocket of the trousers, a likely location for a smartphone. The proposed algorithms are based on the opening angle of the leg, or pitch angle. We analyzed our step detector and our previously proposed step length estimator and compared them with the state of the art. Lastly, we assessed our vertical displacement estimator in a real-world scenario. We found that our algorithms outperform the step-and-heading algorithms in the literature and solve 3D positioning using unaided inertial data. Additionally, we found that with the pitch angle, five activities are distinguishable: standing, sitting, walking, walking up stairs and walking down stairs. This information complements the pedestrian location and is of interest for applications such as elderly care. PMID:25897501
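
    A minimal sketch of a step-and-heading position update with a vertical component, illustrating the kind of 3D dead-reckoning described above. The step length, heading and per-step height change are assumed to come from upstream estimators; they are hypothetical inputs, not the authors' algorithms.

    ```python
    # Step-and-heading dead reckoning in 3D.
    import math

    def dead_reckon(steps, start=(0.0, 0.0, 0.0)):
        """steps: iterable of (step_length_m, heading_rad, delta_height_m)."""
        x, y, z = start
        track = [(x, y, z)]
        for length, heading, dz in steps:
            x += length * math.cos(heading)
            y += length * math.sin(heading)
            z += dz   # e.g. +0.17 m per stair step up, -0.17 m down, 0 on flat ground
            track.append((x, y, z))
        return track

    # Example: three steps heading roughly north-east, the last one up a stair.
    print(dead_reckon([(0.7, 0.8, 0.0), (0.7, 0.8, 0.0), (0.6, 0.8, 0.17)]))
    ```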

  2. Dynamical Systems Analysis of Fully 3D Ocean Features

    NASA Astrophysics Data System (ADS)

    Pratt, L. J.

    2011-12-01

    Dynamical systems analysis of transport and stirring processes has been developed most thoroughly for 2D flow fields. The calculation of manifolds, turnstile lobes, transport barriers, etc. based on observations of the ocean is most often conducted near the sea surface, whereas analyses at depth, usually carried out with model output, are normally confined to constant-z surfaces. At the mesoscale and larger, ocean flows are quasi-2D, but smaller-scale (submesoscale) motions, including mixed-layer phenomena with significant vertical velocity, may be predominantly 3D. The zoology of hyperbolic trajectories becomes richer in such cases, and their attendant manifolds are much more difficult to calculate. I will describe some of the basic geometrical features and corresponding Lagrangian Coherent Features expected to arise in upper-ocean fronts, eddies, and Langmuir circulations. Traditional GFD models such as the rotating-can flow may capture the important generic features. The dynamical systems approach is most helpful when these features are coherent and persistent, and the implications and difficulties of this requirement in fully 3D flows will also be discussed.
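
    For readers unfamiliar with these diagnostics, here is a minimal sketch of a finite-time Lyapunov exponent (FTLE) field, whose ridges are commonly used to trace Lagrangian Coherent Structures. The velocity field is the classic 2D "double gyre" test flow, used purely as a stand-in (an assumption, not the author's data); a fully 3D computation follows the same pattern with a 3x3 deformation gradient.

    ```python
    # FTLE on the 2D double-gyre flow via RK4 particle advection.
    import numpy as np

    A, eps, om = 0.1, 0.25, 2 * np.pi / 10.0

    def velocity(x, y, t):
        a = eps * np.sin(om * t)
        b = 1.0 - 2.0 * eps * np.sin(om * t)
        f = a * x**2 + b * x
        dfdx = 2.0 * a * x + b
        u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
        v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
        return u, v

    def advect(x, y, t0, T, steps=200):
        dt = T / steps
        t = t0
        for _ in range(steps):                      # 4th-order Runge-Kutta
            k1 = velocity(x, y, t)
            k2 = velocity(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1], t + 0.5 * dt)
            k3 = velocity(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1], t + 0.5 * dt)
            k4 = velocity(x + dt * k3[0], y + dt * k3[1], t + dt)
            x = x + dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
            y = y + dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
            t += dt
        return x, y

    nx, ny, T = 200, 100, 15.0
    x0, y0 = np.meshgrid(np.linspace(0, 2, nx), np.linspace(0, 1, ny))
    xT, yT = advect(x0, y0, 0.0, T)

    # Flow-map gradient by centred differences, then the Cauchy-Green tensor.
    dxTdx, dxTdy = np.gradient(xT, x0[0, :], y0[:, 0], axis=(1, 0))
    dyTdx, dyTdy = np.gradient(yT, x0[0, :], y0[:, 0], axis=(1, 0))
    C11 = dxTdx**2 + dyTdx**2
    C12 = dxTdx * dxTdy + dyTdx * dyTdy
    C22 = dxTdy**2 + dyTdy**2
    lam_max = 0.5 * (C11 + C22) + np.sqrt(0.25 * (C11 - C22)**2 + C12**2)
    ftle = np.log(np.sqrt(lam_max)) / abs(T)        # ridges of this field trace LCS
    ```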

  3. Modeling moving systems with RELAP5-3D

    DOE PAGES

    Mesina, G. L.; Aumiller, David L.; Buschman, Francis X.; Kyle, Matt R.

    2015-12-04

    RELAP5-3D is typically used to model stationary, land-based reactors. However, it can also model reactors in other inertial and accelerating frames of reference. By changing the magnitude of the gravitational vector through user input, RELAP5-3D can model reactors on a space station or the moon. The field equations have also been modified to model reactors in a non-inertial frame, such as occur in land-based reactors during earthquakes or onboard spacecraft. Transient body forces affect fluid flow in thermal-fluid machinery aboard accelerating crafts during rotational and translational accelerations. It is useful to express the equations of fluid motion in the accelerating frame of reference attached to the moving craft. However, careful treatment of the rotational and translational kinematics is required to accurately capture the physics of the fluid motion. Correlations for flow at angles between horizontal and vertical are generated via interpolation where no experimental studies or data exist. The equations for three-dimensional fluid motion in a non-inertial frame of reference are developed. As a result, two different systems for describing rotational motion are presented, user input is discussed, and an example is given.
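
    For reference, the standard body-force terms that appear when the momentum equation is written in a frame translating with acceleration A(t) and rotating with angular velocity Omega(t) are sketched below; this is the textbook form, and the exact RELAP5-3D formulation may differ in detail.

    ```latex
    % Effective body acceleration in the non-inertial (craft-fixed) frame:
    \mathbf{a}_{\mathrm{body}} = \mathbf{g}
      - \mathbf{A}
      - \dot{\boldsymbol{\Omega}} \times \mathbf{r}
      - \boldsymbol{\Omega} \times (\boldsymbol{\Omega} \times \mathbf{r})
      - 2\,\boldsymbol{\Omega} \times \mathbf{v}_{\mathrm{rel}}
    ```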

  5. 3D Additive Construction with Regolith for Surface Systems

    NASA Technical Reports Server (NTRS)

    Mueller, Robert P.

    2014-01-01

    Planetary surface exploration on asteroids, the Moon, Mars and the Martian moons will require the stabilization of loose, fine, dusty regolith to avoid the effects of vertical lander rocket plume impingement, to keep abrasive and harmful dust from being lofted, and to enable dust-free operations. In addition, the same regolith stabilization process can be used for three-dimensional (3D) printing and additive construction techniques by repeating the 2D stabilization in many vertical layers. This will allow in-situ construction with regolith so that materials will not have to be transported from Earth. Recent work in the NASA Kennedy Space Center (KSC) Surface Systems Office (NE-S) Swamp Works and at the University of Southern California (USC) under two NASA Innovative Advanced Concept (NIAC) awards has shown promising results with regolith (crushed basalt rock) materials for in-situ heat shields, bricks, landing/launch pads, berms, roads, and other structures that could be fabricated using regolith that is sintered or mixed with a polymer binder. The technical goals and objectives of this project are to prove the feasibility of 3D printing additive construction using planetary regolith simulants and to show that the resulting structures have structural integrity and practical applications in space exploration.

  6. Hybrid additive manufacturing of 3D electronic systems

    NASA Astrophysics Data System (ADS)

    Li, J.; Wasley, T.; Nguyen, T. T.; Ta, V. D.; Shephard, J. D.; Stringer, J.; Smith, P.; Esenturk, E.; Connaughton, C.; Kay, R.

    2016-10-01

    A novel hybrid additive manufacturing (AM) technology combining digital light projection (DLP) stereolithography (SL) with 3D micro-dispensing alongside conventional surface mount packaging is presented in this work. This technology overcomes the inherent limitations of individual AM processes and integrates seamlessly with conventional packaging processes to enable the deposition of multiple materials. This facilitates the creation of bespoke end-use products with complex 3D geometry and multi-layer embedded electronic systems. Through a combination of four-point probe measurement and non-contact focus variation microscopy, it was found that the DLP SL embedding process had no obvious adverse effect on the electrical conductivity of the printed conductors. The resistivity remained below 4 × 10⁻⁴ Ω·cm before and after DLP SL embedding when cured at 100 °C for 1 h. The mechanical strength of SL specimens with thick polymerized layers was also determined through tensile testing. It was found that the polymerization thickness should be minimised (less than 2 mm) to maximise the bonding strength. As a demonstrator, a polymer pyramid with embedded triple-layer 555 LED blinking circuitry was successfully fabricated to prove the technical viability.
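
    As an illustration of how resistivity values like the one quoted are typically obtained, here is the common thin-film four-point-probe relation, rho = (pi / ln 2) * (V / I) * t, which assumes a sample much thinner than the probe spacing. The numbers are made up; the paper's exact measurement geometry is not stated.

    ```python
    # Thin-film four-point-probe resistivity (illustrative arithmetic only).
    import math

    def resistivity_ohm_cm(voltage_v, current_a, thickness_cm):
        return (math.pi / math.log(2.0)) * (voltage_v / current_a) * thickness_cm

    # e.g. 10 mA forced, 0.44 mV measured, 20 um (2e-3 cm) thick printed conductor
    print(resistivity_ohm_cm(4.4e-4, 1e-2, 2e-3))   # ~4.0e-4 ohm*cm, of the order reported above
    ```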

  7. Code System to Simulate 3D Tracer Dispersion in Atmosphere.

    2002-01-25

    Version 00 SHREDI is a shielding code system which executes removal-diffusion computations for bi-dimensional shields in r-z or x-y geometries. It may also deal with monodimensional problems (infinitely high cylinders or slabs). MESYST can simulate 3D tracer dispersion in the atmosphere. Three programs are part of this system: CRE_TOPO prepares the terrain data for MESYST. NOABL calculates three-dimensional free divergence windfields over complex terrain. PAS computes tracer concentrations and depositions on a given domain. Themore » purpose of this work is to develop a reliable simulation tool for pollutant atmospheric dispersion, which gives a realistic approach and allows one to compute the pollutant concentrations over complex terrains with good accuracy. The factional brownian model, which furnishes more accurate concentration values, is introduced to calculate pollutant atmospheric dispersion. The model was validated on SIESTA international experiments.« less

  8. Towards autonomic computing in machine vision applications: techniques and strategies for in-line 3D reconstruction in harsh industrial environments

    NASA Astrophysics Data System (ADS)

    Molleda, Julio; Usamentiaga, Rubén; García, Daniel F.; Bulnes, Francisco G.

    2011-03-01

    Nowadays, machine vision applications require skilled users to configure, tune, and maintain them. Because such users are scarce, the robustness and reliability of applications are usually significantly affected. Autonomic computing offers a set of principles, such as self-monitoring, self-regulation, and self-repair, which can be used to partially overcome these problems. Systems which include self-monitoring observe their internal states and extract features about them. Systems with self-regulation are capable of regulating their internal parameters to provide the best quality of service depending on the operational conditions and environment. Finally, self-repairing systems are able to detect anomalous working behavior and to provide strategies to deal with such conditions. Machine vision applications are a perfect field in which to apply autonomic computing techniques. This type of application has strong constraints on reliability and robustness, especially when working in industrial environments, and must provide accurate results even under changing conditions such as luminance or noise. In order to exploit the autonomic approach in a machine vision application, we believe the architecture of the system must be designed as a set of orthogonal modules. In this paper, we describe how autonomic computing techniques can be applied to machine vision systems, using as an example a real application: 3D reconstruction in harsh industrial environments based on laser range finding. The application is based on modules with different responsibilities at three layers: image acquisition and processing (low level), monitoring (middle level) and supervision (high level). High-level modules supervise the execution of low-level modules. Based on the information gathered by mid-level modules, they regulate low-level modules in order to optimize the global quality of service, and tune the module parameters based on operational conditions and on the environment. Regulation actions involve
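
    A minimal sketch of the three-layer structure described above: a low-level processing module, a mid-level monitor that extracts features about the module's internal state, and a high-level supervisor that regulates parameters. Class names, the quality metric and the regulation rule are hypothetical, not the paper's implementation.

    ```python
    # Toy autonomic loop: monitor a processing module and regulate its parameters.
    import random

    class LaserRangeModule:                      # low level: acquisition + processing
        def __init__(self):
            self.exposure_ms = 5.0
        def process_frame(self):
            # stand-in for real image processing; returns a quality metric in [0, 1]
            return max(0.0, min(1.0, random.gauss(0.02 * self.exposure_ms, 0.05)))

    class Monitor:                               # middle level: self-monitoring
        def observe(self, module):
            return {"quality": module.process_frame(), "exposure": module.exposure_ms}

    class Supervisor:                            # high level: self-regulation
        def regulate(self, module, state):
            if state["quality"] < 0.6:           # anomalous behaviour -> adjust parameters
                module.exposure_ms = min(20.0, module.exposure_ms * 1.2)
            elif state["quality"] > 0.9:
                module.exposure_ms = max(1.0, module.exposure_ms * 0.9)

    module, monitor, supervisor = LaserRangeModule(), Monitor(), Supervisor()
    for _ in range(100):                         # the autonomic loop
        supervisor.regulate(module, monitor.observe(module))
    ```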

  9. 3D KLT compression algorithm for camera security systems

    NASA Astrophysics Data System (ADS)

    Fritsch, Lukás; Páta, Petr

    2008-11-01

    This paper deals with an image compression algorithm based on the three-dimensional Karhunen-Loeve transform (3D KLT), whose task is to reduce the temporal redundancy of image data. In many cases the reduction of temporal redundancy is very efficient and yields a perceptible bitstream reduction. This is very desirable for transferring image data over telephone links, GSM networks, etc. Temporal redundancy is particularly noticeable in camera security systems, where relatively unchanging frames are common. The time evolution of the grabbed scene is assessed according to the energy content of the eigenimages. These eigenimages are obtained by applying the KLT to a suitable number of incoming frames. The required number of transferred eigenimages and eigenvectors is determined on the basis of the energy content of the eigenimages.
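
    A minimal sketch of the temporal-KLT idea: stack N frames, compute the KLT (equivalently, PCA) over the temporal dimension, and keep only the eigenimages that carry most of the energy. The frame source and the energy threshold are assumptions; the paper's bitstream coding is not reproduced.

    ```python
    # Temporal KLT of a frame stack with energy-based truncation.
    import numpy as np

    def klt_compress(frames, energy=0.99):
        """frames: array of shape (N, H, W); returns (mean, eigenvectors, eigenimages)."""
        N, H, W = frames.shape
        X = frames.reshape(N, -1).astype(float)           # one row per frame
        mean = X.mean(axis=0)
        Xc = X - mean
        C = Xc @ Xc.T / (H * W)                           # small N x N temporal covariance
        w, V = np.linalg.eigh(C)                          # ascending eigenvalues
        w, V = w[::-1], V[:, ::-1]
        k = int(np.searchsorted(np.cumsum(w) / w.sum(), energy)) + 1
        eigenimages = V[:, :k].T @ Xc                     # k eigenimages of size H*W
        return mean, V[:, :k], eigenimages.reshape(k, H, W)

    def klt_reconstruct(mean, V, eigenimages):
        k, H, W = eigenimages.shape
        X = V @ eigenimages.reshape(k, -1) + mean
        return X.reshape(-1, H, W)
    ```

    For a surveillance scene with little motion, most of the energy concentrates in the first eigenimage, so only a few eigenimages and the small N-by-k eigenvector matrix need to be transmitted.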

  10. MIMO based 3D imaging system at 360 GHz

    NASA Astrophysics Data System (ADS)

    Herschel, R.; Nowok, S.; Zimmermann, R.; Lang, S. A.; Pohl, N.

    2016-05-01

    A MIMO radar imaging system at 360 GHz is presented as part of the comprehensive approach of the European FP7 project TeraSCREEN, which uses multiple frequency bands for active and passive imaging. The MIMO system consists of 16 transmitter and 16 receiver antennas within a single array. Using a bandwidth of 30 GHz, a range resolution of up to 5 mm is obtained. With the 16×16 MIMO system, 256 different azimuth bins can be distinguished. Mechanical beam steering is used to measure 130 different elevation angles, with the angular resolution provided by a focusing elliptical mirror. With this system a high-resolution 3D image can be generated at 4 frames per second, each frame containing 16 million points. The principle of the system is presented starting from the functional structure, covering the hardware design and including the digital image generation. This is supported by simulated data and discussed using experimental results from a preliminary 90 GHz system, underlining the feasibility of the approach.
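
    The quoted figures follow from standard relations; a quick illustrative check (assuming the usual range-resolution formula c/2B and a fully populated MIMO virtual array):

    ```python
    # Sanity check of the numbers quoted above (illustrative arithmetic only).
    c = 3.0e8                               # speed of light, m/s
    B = 30.0e9                              # sweep bandwidth, Hz
    range_resolution = c / (2 * B)          # = 0.005 m, i.e. the 5 mm quoted
    virtual_channels = 16 * 16              # 16 TX x 16 RX -> 256 azimuth bins
    print(range_resolution, virtual_channels)
    ```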

  11. Optical 3D-coordinate measuring system using structured light

    NASA Astrophysics Data System (ADS)

    Schreiber, Wolfgang; Notni, Gunther; Kuehmstedt, Peter; Gerber, Joerg; Kowarschik, Richard M.

    1996-09-01

    The paper describes an optical shape measuring technique based on a consistent principle using fringe projection. We demonstrate a real 3D-coordinate measuring system in which the scale of the coordinates is given only by the illumination structures. This method has the advantages that the aberrations of the observing system and the depth-dependent imaging scale have no influence on the measuring accuracy and, moreover, that the measurements are independent of the position of the camera with respect to the object under test. Furthermore, it is shown that the influence of specular effects of the surface on the measuring result can be eliminated. Moreover, we developed a very simple algorithm to calibrate the measuring system. The measuring examples show that an accuracy of 10⁻⁴ (i.e., 10 micrometers) within an object volume of 100 × 100 × 70 mm³ is achievable. Furthermore, it is demonstrated that the set of coordinate values can be processed in CNC and CAD systems.
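
    A minimal sketch of N-step phase-shifting fringe analysis, the basic operation behind fringe-projection shape measurement; this is the generic textbook estimator, not the calibration or absolute-coordinate scheme of the system above.

    ```python
    # Wrapped phase from N phase-shifted fringe images (shifts of 2*pi*k/N, N >= 3).
    import numpy as np

    def wrapped_phase(images):
        """images: list/array of N fringe images I_k = A + B*cos(phi + 2*pi*k/N)."""
        I = np.asarray(images, dtype=float)           # shape (N, H, W)
        N = I.shape[0]
        k = np.arange(N).reshape(-1, 1, 1)
        num = np.sum(I * np.sin(2 * np.pi * k / N), axis=0)
        den = np.sum(I * np.cos(2 * np.pi * k / N), axis=0)
        return np.arctan2(-num, den)                  # wrapped phase in (-pi, pi]
    ```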

  12. 3D active stabilization system with sub-micrometer resolution.

    PubMed

    Kursu, Olli; Tuukkanen, Tuomas; Rahkonen, Timo; Vähäsöyrinki, Mikko

    2012-01-01

    Stable positioning between a measurement probe and its target at sub-micrometer to few-micrometer scales has become a prerequisite in precision metrology and in cellular-level measurements from biological tissues. Here we present a 3D stabilization system based on an optoelectronic displacement sensor and custom piezo-actuators driven by a feedback control loop that constantly aims to zero the relative movement between the sensor and the target. We used simulations and prototyping to characterize the developed system. Our results show that 95% attenuation of movement artifacts is achieved at 1 Hz, with stabilization performance declining to ca. 70% attenuation at 10 Hz. The stabilization bandwidth is limited by mechanical resonances within the displacement sensor that occur at relatively low frequencies and are attributable to the sensor's high force sensitivity. We successfully used brain-derived micromotion trajectories as a demonstration of complex movement stabilization. The micromotion was reduced to a level of ∼1 µm, with nearly 100-fold attenuation at the lower frequencies that are typically associated with physiological processes. These results, and possible improvements of the system, are discussed with a focus on possible ways to increase the sensor's force sensitivity without compromising overall system bandwidth. PMID:22900045
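
    A minimal sketch of the feedback idea: a discrete PI(D) loop that drives the measured relative displacement toward zero by commanding the piezo actuator. The gains, sample period and controller form are placeholders, not the authors' values.

    ```python
    # Discrete PID controller regulating relative displacement to zero.
    class PIDStabilizer:
        def __init__(self, kp=0.8, ki=40.0, kd=0.0, dt=1e-3):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, displacement_um):
            """displacement_um: sensor reading of relative motion; returns actuator command."""
            error = -displacement_um            # setpoint is zero relative movement
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative
    ```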

  13. Three-dimensional imaging system combining vision and ultrasonics

    NASA Astrophysics Data System (ADS)

    Wykes, Catherine; Chou, Tsung N.

    1994-11-01

    Vision systems are being applied to a wide range of inspection problems in manufacturing. In 2D systems, a single video camera captures an image of the object, and the application of suitable image processing techniques enables information about dimensions, shape and the presence of features and flaws to be extracted from the image. This can be used to recognize, inspect and/or measure the part. 3D measurement is also possible with vision systems but requires the use of either two or more cameras or structured lighting (i.e. stripes or grids), and the processing of such images is necessarily considerably more complex, and therefore slower and more expensive, than 2D imaging. Ultrasonic imaging is widely used in medical and NDT applications to give 3D images; in these systems, the ultrasound is propagated into a liquid or a solid. Imaging using air-borne ultrasound is much less advanced, mainly due to the limited availability of suitable sensors. Unique 2D ultrasonic ranging systems using in-house-built phased arrays have been developed in Nottingham, which enable both the range and bearing of targets to be measured. The ultrasonic/vision system will combine the excellent lateral resolution of a vision system with the straightforward range acquisition of the ultrasonic system. The system is expected to extend the use of vision systems in automation, particularly in the area of automated assembly, where it can eliminate the need for expensive jigs and orienting part-feeders.

  14. 3D X-Ray Luggage-Screening System

    NASA Technical Reports Server (NTRS)

    Fernandez, Kenneth

    2006-01-01

    A three-dimensional (3D) x-ray luggage-screening system has been proposed to reduce the fatigue experienced by human inspectors and increase their ability to detect weapons and other contraband. The system and variants thereof could supplant thousands of x-ray scanners now in use at hundreds of airports in the United States and other countries. The device would be applicable to any security checkpoint application where current two-dimensional scanners are in use. A conventional x-ray luggage scanner generates a single two-dimensional (2D) image that conveys no depth information. Therefore, a human inspector must scrutinize the image in an effort to understand ambiguous-appearing objects as they pass by at high speed on a conveyor belt. Such a high level of concentration can induce fatigue, causing the inspector to lose concentration and vigilance. In addition, because of the lack of depth information, contraband objects could be made more difficult to detect by positioning them near other objects so as to create x-ray images that confuse inspectors. The proposed system would make it unnecessary for a human inspector to interpret 2D images, which show objects at different depths as superimposed. Instead, the system would take advantage of the natural human ability to infer 3D information from stereographic or stereoscopic images. The inspector would be able to perceive two objects at different depths, in a more nearly natural manner, as distinct 3D objects lying at different depths. Hence, the inspector could recognize objects with greater accuracy and less effort. The major components of the proposed system would be similar to those of x-ray luggage scanners now in use. As in a conventional x-ray scanner, there would be an x-ray source. Unlike in a conventional scanner, there would be two x-ray image sensors, denoted the left and right sensors, located at positions along the conveyor that are upstream and downstream, respectively. X-ray illumination

  15. Repositioning accuracy of two different mask systems-3D revisited: Comparison using true 3D/3D matching with cone-beam CT

    SciTech Connect

    Boda-Heggemann, Judit . E-mail: judit.boda-heggemann@radonk.ma.uni-heidelberg.de; Walter, Cornelia; Rahn, Angelika; Wertz, Hansjoerg; Loeb, Iris; Lohr, Frank; Wenz, Frederik

    2006-12-01

    Purpose: The repositioning accuracy of mask-based fixation systems has been assessed with two-dimensional/two-dimensional or two-dimensional/three-dimensional (3D) matching. We analyzed the accuracy of commercially available head mask systems, using true 3D/3D matching, with X-ray volume imaging and cone-beam CT. Methods and Materials: Twenty-one patients receiving radiotherapy (intracranial/head-and-neck tumors) were evaluated (14 patients with rigid and 7 with thermoplastic masks). X-ray volume imaging was analyzed online and offline separately for the skull and neck regions. Translation/rotation errors of the target isocenter were analyzed. Four patients were treated to neck sites. For these patients, repositioning was aided by additional body tattoos. A separate analysis of the setup error on the basis of the registration of the cervical vertebra was performed. The residual error after correction and intrafractional motility were calculated. Results: The mean length of the displacement vector for rigid masks was 0.312 ± 0.152 cm (intracranial) and 0.586 ± 0.294 cm (neck). For the thermoplastic masks, the value was 0.472 ± 0.174 cm (intracranial) and 0.726 ± 0.445 cm (neck). Rigid masks with body tattoos had a displacement vector length in the neck region of 0.35 ± 0.197 cm. The intracranial residual error and intrafractional motility after X-ray volume imaging correction for rigid masks was 0.188 ± 0.074 cm, and was 0.134 ± 0.14 cm for thermoplastic masks. Conclusions: The results of our study have demonstrated that rigid masks have a high intracranial repositioning accuracy per se. Given the small residual error and intrafractional movement, thermoplastic masks may also be used for high-precision treatments when combined with cone-beam CT. The neck region repositioning accuracy was worse than the intracranial accuracy in both cases. However, body tattoos and image guidance improved the accuracy. Finally, the combination of both mask

  16. A 3D visualization system for molecular structures

    NASA Technical Reports Server (NTRS)

    Green, Terry J.

    1989-01-01

    The properties of molecules derive in part from their structures. Because of the importance of understanding molecular structures, various methodologies, ranging from first principles to empirical techniques, were developed for computing the structure of molecules. For large molecules such as polymer model compounds, the structural information is difficult to comprehend by examining tabulated data. Therefore, a molecular graphics display system, called MOLDS, was developed to help interpret the data. MOLDS is a menu-driven program developed to run on the LADC SNS computer systems. This program can read a data file generated by the modeling programs, or data can be entered using the keyboard. MOLDS has the following capabilities: it draws a 3-D representation of a molecule from Cartesian coordinates using a stick, ball-and-stick, or space-filled model; draws different perspective views of the molecule; rotates the molecule about the X, Y, or Z axis or about an arbitrary line in space; zooms in on a small area of the molecule in order to obtain a better view of a specific region; and makes hard-copy representations of molecules on a graphics printer. In addition, MOLDS can be easily updated and readily adapted to run on most computer systems.
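
    A generic sketch of one of the listed operations, rotating Cartesian atom coordinates about an arbitrary axis using Rodrigues' formula (assuming the axis passes through the origin; a translation handles the general line). This is an illustration, not the original LADC SNS code.

    ```python
    # Rotate a set of atom coordinates about an arbitrary axis through the origin.
    import numpy as np

    def rotate_about_axis(points, axis, angle_rad):
        """points: (N, 3) array of Cartesian coordinates; axis: 3-vector."""
        k = np.asarray(axis, dtype=float)
        k /= np.linalg.norm(k)
        K = np.array([[0, -k[2], k[1]],
                      [k[2], 0, -k[0]],
                      [-k[1], k[0], 0]])
        R = np.eye(3) + np.sin(angle_rad) * K + (1 - np.cos(angle_rad)) * (K @ K)
        return points @ R.T

    # Example: rotate a small three-atom fragment 90 degrees about the z axis.
    coords = np.array([[0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [-0.24, 0.93, 0.0]])
    print(rotate_about_axis(coords, [0, 0, 1], np.pi / 2))
    ```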

  17. Gel tomography for 3D acquisition of plant root systems

    NASA Astrophysics Data System (ADS)

    Montgomery, Kevin N.; Heyenga, Anthony G.

    1998-03-01

    A system for three-dimensional, non-destructive acquisition of the structure of plant root systems is described. The plants are grown in a transparent medium (a 'gel pack') and are then placed on a rotating stage. The stage is rotated in 5-degree increments while images are captured using either traditional photography or a CCD camera. The individual images are then used as input to a tomographic (backprojection) algorithm to recover the original volumetric data. This reconstructed volume is then used as input to a 3D-reconstruction system. The software performs segmentation and mesh generation to derive a tessellated mesh of the root structure. This mesh can then be visualized using computer graphics, or used to derive measurements of root thickness and length. For initial validation studies, a wire model of known length and gauge was used as a calibration sample. The use of the transparent gel-pack medium, together with the gel tomography software, gives the plant biologist a method for non-destructive visualization and measurement of root structure that was previously unattainable.
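
    A minimal sketch of unfiltered backprojection from a set of rotation-stage views, the kind of reconstruction step mentioned above. It assumes parallel-ray geometry; the original software and its segmentation/meshing steps are not reproduced.

    ```python
    # Simple (unfiltered) backprojection of one z-slice from 1D projections.
    import numpy as np
    from scipy.ndimage import rotate

    def backproject(sinogram, angles_deg):
        """sinogram: (n_angles, n_detectors) array of 1D projections of one slice."""
        n = sinogram.shape[1]
        recon = np.zeros((n, n))
        for proj, angle in zip(sinogram, angles_deg):
            smear = np.tile(proj, (n, 1))                    # smear projection across the slice
            recon += rotate(smear, angle, reshape=False, order=1)
        return recon / len(angles_deg)
    ```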

  18. Optical 3D laser measurement system for navigation of autonomous mobile robot

    NASA Astrophysics Data System (ADS)

    Básaca-Preciado, Luis C.; Sergiyenko, Oleg Yu.; Rodríguez-Quinonez, Julio C.; García, Xochitl; Tyrsa, Vera V.; Rivas-Lopez, Moises; Hernandez-Balbuena, Daniel; Mercorelli, Paolo; Podrygalo, Mikhail; Gurko, Alexander; Tabakova, Irina; Starostenko, Oleg

    2014-03-01

    In our current research, we are developing a practical autonomous mobile robot navigation system capable of performing obstacle-avoidance tasks in an unknown environment. In this paper, we therefore propose a robot navigation system that uses a high-accuracy localization scheme based on dynamic triangulation. Our two main ideas are (1) the integration of two principal systems, a 3D laser scanning technical vision system (TVS) and a mobile robot (MR) navigation system, and (2) a novel MR navigation scheme that benefits from the advantages of precise triangulation localization of obstacles over known camera-oriented vision systems. For practical use, mobile robots are required to continue their tasks safely and with high accuracy under temporary occlusion conditions. The prototype II TVS presented in this work is significantly improved over prototype I of our previous publications in laser-ray alignment, reduced parasitic torque and reduced friction of moving parts. The kinematic model of the MR used in this work is designed for optimal data acquisition from the TVS, with the main goal of obtaining, in real time, the values required by the kinematic model of the MR while obstacle positions are being calculated from the TVS data.
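
    A minimal sketch of the geometric core of triangulation-based localization: given a known baseline and the two angles measured at its ends, recover the target coordinates by the law of sines. This is generic geometry; the prototype's actual kinematics and angle conventions are not reproduced.

    ```python
    # Target position from two angles and a known baseline.
    import math

    def triangulate(baseline_m, beta_rad, gamma_rad):
        """beta: angle at the laser emitter, gamma: angle at the receiver,
        both measured from the baseline joining them; returns (x, y) of the target."""
        d = baseline_m * math.sin(gamma_rad) / math.sin(beta_rad + gamma_rad)
        return d * math.cos(beta_rad), d * math.sin(beta_rad)

    # Example: 0.5 m baseline, emitter sees the point at 62 deg, receiver at 75 deg.
    print(triangulate(0.5, math.radians(62), math.radians(75)))
    ```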

  19. 3D spectral imaging system for anterior chamber metrology

    NASA Astrophysics Data System (ADS)

    Anderson, Trevor; Segref, Armin; Frisken, Grant; Frisken, Steven

    2015-03-01

    Accurate metrology of the anterior chamber of the eye is useful for a number of diagnostic and clinical applications. In particular, accurate corneal topography and corneal thickness data are desirable for fitting contact lenses, screening for diseases and monitoring corneal changes. Anterior OCT systems can be used to measure anterior chamber surfaces; however, accurate curvature measurements with single-point scanning systems are known to be very sensitive to patient movement. To overcome this problem we have developed a parallel 3D spectral metrology system that captures simultaneous A-scans on a 2D lateral grid. This approach enables estimates of the elevation and curvature of the anterior and posterior corneal surfaces that are robust to sample movement. Furthermore, multiple simultaneous surface measurements greatly improve the ability to register consecutive frames and enable aggregate measurements over a finer lateral grid. A key element of our approach has been to exploit standard low-cost optical components, including lenslet arrays and a 2D sensor, to provide a path towards low-cost implementation. We demonstrate first prototypes based on a 6 Mpixel sensor using a 250 μm pitch lenslet array with 300 sample beams to achieve an RMS elevation accuracy of 1 μm with 95 dB sensitivity and a 7.0 mm range. Initial tests on porcine eyes, model eyes and calibration spheres demonstrate the validity of the concept. With the next iteration of designs we expect to be able to achieve over 1000 simultaneous A-scans at more than 75 frames per second.

  20. 3D characterization of the Astor Pass geothermal system, Nevada

    SciTech Connect

    Mayhew, Brett; Faulds, James E

    2013-10-19

    The Astor Pass geothermal system resides in the northwestern part of the Pyramid Lake Paiute Reservation, on the margins of the Basin and Range and Walker Lane tectonic provinces in northwestern Nevada. Seismic reflection interpretation, detailed analysis of well cuttings, stress field analysis, and construction of a 3D geologic model have been used in the characterization of the stratigraphic and structural framework of the geothermal area. The area is primarily composed of middle Miocene Pyramid sequence volcanic and sedimentary rocks, nonconformably overlying Mesozoic metamorphic and granitic rocks. Wells drilled at Astor Pass show a ~1 km thick section of highly transmissive Miocene volcanic reservoir with temperatures of ~95°C. Seismic reflection interpretation confirms a high fault density in the geothermal area, with many possible fluid pathways penetrating into the relatively impermeable Mesozoic basement. Stress field analysis using borehole breakout data reveals a complex transtensional faulting regime with a regionally consistent west-northwest-trending least principal stress direction. Considering possible strike-slip and normal stress regimes, the stress data were utilized in a slip and dilation tendency analysis of the fault model, which suggests two promising fault areas controlling upwelling geothermal fluids. Both of these fault intersection areas show positive attributes for controlling geothermal fluids, but hydrologic tests show the ~1 km thick volcanic section is highly transmissive. Thus, focused upwellings along discrete fault conduits may be confined to the Mesozoic basement before fluids diffuse into the Miocene volcanic reservoir above. This large diffuse reservoir in the faulted Miocene volcanic rocks is capable of sustaining high pump rates. Understanding this type of system may be helpful in examining large, permeable reservoirs in deep sedimentary basins of the eastern Basin and Range and the highly fractured volcanic geothermal

  1. A 3D polarizing display system base on backlight control

    NASA Astrophysics Data System (ADS)

    Liu, Pu; Huang, Ziqiang

    2011-08-01

    In this paper, a new three-dimensional (3D) liquid crystal display (LCD) mode based on backlight control is presented to avoid crosstalk between the left-eye and right-eye images in 3D display. There are two major elements in this new black-frame 3D display mode. One is playing every frame twice; the other is switching the backlight periodically. First, this paper explains the cause of left-right image crosstalk and presents a solution to avoid it. Then, we propose to play every frame twice by repeating each frame immediately after it is played, instead of playing the left and right images alternately frame by frame. Finally, the backlight is switched periodically instead of being on all the time: it is turned off while a frame is displayed for the first time, turned on during the second display, and turned off again as the next frame begins to refresh. Controlling the backlight in this periodic manner is the key to achieving the black-frame 3D display mode. This mode not only achieves a better 3D display effect by avoiding left-right image crosstalk, but also saves backlight power. Theoretical analysis and experiments show that our method is reasonable and efficient.

  2. 3D reconstruction of tropospheric cirrus clouds by stereovision system

    NASA Astrophysics Data System (ADS)

    Nadjib Kouahla, Mohamed; Moreels, Guy; Seridi, Hamid

    2016-07-01

    A stereo imaging method is applied to measure the altitude of cirrus clouds and provide a 3D map of the altitude of the layer centroid. These clouds are located in the upper troposphere and sometimes in the lower stratosphere, between 6 and 10 km altitude. Two simultaneous images of the same scene are taken with Canon cameras (400D) at two sites 37 km apart. Each image is processed to invert the perspective effect and provide a satellite-type view of the layer. Pairs of matched points that correspond to a physical emissive point in the common area are identified by calculating a correlation coefficient (ZNCC, Zero-mean Normalized Cross-Correlation, or ZSSD, Zero-mean Sum of Squared Differences). This method is suitable for obtaining 3D representations in the case of low-contrast objects. An observational campaign was conducted in June 2014 in France. The images were taken simultaneously at Marnay (47°17'31.5" N, 5°44'58.8" E; altitude 275 m), 25 km northwest of Besancon, and at Mont Poupet (46°58'31.5" N, 5°52'22.7" E; altitude 600 m), 43 km southwest of Besancon. 3D maps of natural cirrus clouds and of artificial ones such as aircraft trails are retrieved. They are compared with pseudo-relief intensity maps of the same region. The mean altitude of the cirrus barycenter is located at 8.5 ± 1 km on June 11.
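
    A minimal sketch of the ZNCC score mentioned above, in its generic textbook form; the campaign's epipolar search strategy and acceptance thresholds are not reproduced.

    ```python
    # Zero-mean normalized cross-correlation between two equally sized patches.
    import numpy as np

    def zncc(patch_a, patch_b):
        """Returns a similarity score in [-1, 1]; higher means a better match."""
        a = patch_a.astype(float) - patch_a.mean()
        b = patch_b.astype(float) - patch_b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom > 0 else 0.0
    ```

    Candidate point pairs whose score exceeds a chosen threshold are then treated as corresponding views of the same physical point.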

  3. Real-time vision systems

    SciTech Connect

    Johnson, R.; Hernandez, J.E.; Lu, Shin-yee

    1994-11-15

    Many industrial and defence applications require an ability to make instantaneous decisions based on sensor input of a time-varying process. Such systems are referred to as 'real-time systems' because they process and act on data as it occurs in time. When a vision sensor is used in a real-time system, the processing demands can be quite substantial, with typical data rates of 10-20 million samples per second. A real-time Machine Vision Laboratory (MVL) was established in FY94 to extend our years of experience in developing computer vision algorithms to include the development and implementation of real-time vision systems. The laboratory is equipped with a variety of hardware components, including Datacube image acquisition and processing boards, a Sun workstation, and several different types of CCD cameras, including monochrome and color area cameras and analog and digital line-scan cameras. The equipment is reconfigurable for prototyping different applications. This facility has been used to support several programs at LLNL, including O Division's Peacemaker and Deadeye Projects as well as the CRADA with the U.S. Textile Industry, CAFE (Computer Aided Fabric Inspection). To date, we have successfully demonstrated several real-time applications: bullet tracking, stereo tracking and ranging, and web inspection. This work has been documented in the ongoing development of a real-time software library.

  4. 3D fingerprint imaging system based on full-field fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Huang, Shujun; Zhang, Zonghua; Zhao, Yan; Dai, Jie; Chen, Chao; Xu, Yongjia; Zhang, E.; Xie, Lili

    2014-01-01

    As a unique, unchangeable and easily acquired biometric, the fingerprint has been widely studied in academia and applied in many fields over the years. Traditional fingerprint recognition methods are based on 2D fingerprint features. However, the fingerprint is a 3D biological characteristic; mapping from 3D to 2D loses one dimension of information and causes nonlinear distortion of the captured fingerprint. Therefore, it is becoming more and more important to obtain 3D fingerprint information for recognition. In this paper, a novel 3D fingerprint imaging system is presented based on the fringe projection technique to obtain 3D features and the corresponding color texture information. A series of color sinusoidal fringe patterns with optimum three-fringe numbers are projected onto a finger surface. Viewed from another direction, the fringe patterns are deformed by the finger surface and captured by a CCD camera. 3D shape data of the finger can be obtained from the captured fringe pattern images. This paper studies the prototype of the 3D fingerprint imaging system, including the principle of 3D fingerprint acquisition, the hardware design of the 3D imaging system, the 3D calibration of the system, and the software development. Experiments are carried out by acquiring several 3D fingerprint data sets. The experimental results demonstrate the feasibility of the proposed 3D fingerprint imaging system.

  5. Implementation of wireless 3D stereo image capture system and 3D exaggeration algorithm for the region of interest

    NASA Astrophysics Data System (ADS)

    Ham, Woonchul; Song, Chulgyu; Lee, Kangsan; Badarch, Luubaatar

    2015-05-01

    In this paper, we introduce a mobile embedded system implemented for capturing stereo images with two CMOS camera modules. We use WinCE as the operating system and capture the stereo images using a device driver for the CMOS camera interface and DirectDraw API functions. We also comment on the GPU hardware and CUDA programming used to implement a 3D exaggeration algorithm that adjusts and synthesizes the disparity values of a region of interest (ROI) in real time. We also discuss the aperture pattern used for deblurring of the CMOS camera module, based on the Kirchhoff diffraction formula, and explain why a sharper and clearer image can be obtained by blocking a portion of the aperture or by geometric sampling. The synthesized stereo image is monitored in real time on a shutter-glasses-type three-dimensional LCD monitor, and the disparity values of each segment are analyzed to demonstrate the validity of the ROI-emphasizing effect.

  6. System and method for 3D printing of aerogels

    DOEpatents

    Worsley, Marcus A.; Duoss, Eric; Kuntz, Joshua; Spadaccini, Christopher; Zhu, Cheng

    2016-03-08

    A method of forming an aerogel. The method may involve providing a graphene oxide powder and mixing the graphene oxide powder with a solution to form an ink. A 3D printing technique may be used to write the ink into a catalytic solution that is contained in a fluid containment member to form a wet part. The wet part may then be cured in a sealed container for a predetermined period of time at a predetermined temperature. The cured wet part may then be dried to form a finished aerogel part.

  7. Stereoscopic vision system

    NASA Astrophysics Data System (ADS)

    Király, Zsolt; Springer, George S.; Van Dam, Jacques

    2006-04-01

    In this investigation, an optical system is introduced for inspecting the interiors of confined spaces, such as the walls of containers, cavities, reservoirs, fuel tanks, pipelines, and the gastrointestinal tract. The optical system wirelessly transmits stereoscopic video to a computer that displays the video in real time on the screen, where it is viewed with shutter glasses. To minimize space requirements, the videos from the two cameras (required to produce stereoscopic images) are multiplexed into a single stream for transmission. The video is demultiplexed inside the computer, corrected for fisheye distortion and lens misalignment, and cropped to the proper size. Algorithms are developed that enable the system to perform these tasks. A proof-of-concept device is constructed that demonstrates the operation and the practicality of the optical system. Using this device, tests are performed to assess the validity of the concepts and the algorithms.

  8. Online updating of synthetic vision system databases

    NASA Astrophysics Data System (ADS)

    Simard, Philippe

    In aviation, synthetic vision systems render artificial views of the world (using a database of the world and pose information) to support navigation and situational awareness in low visibility conditions. The database needs to be periodically updated to ensure its consistency with reality, since it reflects at best a nominal state of the environment. This thesis presents an approach for automatically updating the geometry of synthetic vision system databases and 3D models in general. The approach is novel in that it profits from all of the available prior information: intrinsic/extrinsic camera parameters and geometry of the world. Geometric inconsistencies (or anomalies) between the model and reality are quickly localized; this localization serves to significantly reduce the complexity of the updating problem. Given a geometric model of the world, a sample image and known camera motion, a predicted image can be generated based on a differential approach. Model locations where predictions do not match observations are assumed to be incorrect. The updating is then cast as an optimization problem where differences between observations and predictions are minimized. To cope with system uncertainties, a mechanism that automatically infers their impact on prediction validity is derived. This method not only renders the anomaly detection process robust but also prevents the overfitting of the data. The updating framework is examined at first using synthetic data and further tested in both a laboratory environment and using a helicopter in flight. Experimental results show that the algorithm is effective and robust across different operating conditions.
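
    A minimal sketch of the localization idea described above: render a predicted image from the model and camera pose, compare it with the observed image, and flag regions whose prediction error exceeds an uncertainty-derived threshold. The rendering function, the uncertainty model and the cost form are placeholders, not the thesis' actual formulation.

    ```python
    # Residual-based anomaly localization and a simple updating objective.
    import numpy as np

    def anomaly_mask(observed, predicted, sigma, k=3.0):
        """observed/predicted: (H, W) images; sigma: per-pixel expected error.
        Returns a boolean mask of pixels whose residual is implausibly large."""
        residual = np.abs(observed.astype(float) - predicted.astype(float))
        return residual > k * sigma               # geometry behind these pixels is suspect

    def update_cost(depths, observed, render, camera_pose):
        """Objective to minimize over candidate model geometry 'depths':
        sum of squared differences between observation and prediction."""
        return float(np.sum((observed - render(depths, camera_pose)) ** 2))
    ```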

  9. Stereo vision and CMM-integrated intelligent inspection system in reverse engineering

    NASA Astrophysics Data System (ADS)

    Fang, Yong; Chen, Kangning; Lin, Zhihang

    1998-10-01

    3D coordinate acquisition and 3D model generation for existing parts or prototypes are critical techniques in reverse engineering. This paper presents an integrated intelligent inspection system combining stereo vision and a coordinate measuring machine that is fast, flexible and accurate for reverse engineering. The principle, structure and key techniques of the system are also discussed in detail.

  10. Application issues when using optical 3D systems in place of CMMs

    NASA Astrophysics Data System (ADS)

    Harding, Kevin G.

    2002-02-01

    A primary application of optical 3D measurement systems has been the replacement of mechanical coordinate measurement machines (CMMs). The advantage of optical 3D systems is typically greater speed and flexibility of operation than even the best CMMs. However, the two technologies are not necessarily one-to-one replacements, requiring new methods of use and, in general, proof of performance. This paper presents specific data that highlight the differences between CMMs and optical 3D systems, suggests a method for properly achieving CMM-compatible results with an optical 3D system, and discusses the problems seen in this study.

  11. Combination of Virtual Tours, 3d Model and Digital Data in a 3d Archaeological Knowledge and Information System

    NASA Astrophysics Data System (ADS)

    Koehl, M.; Brigand, N.

    2012-08-01

    The site of the Engelbourg ruined castle in Thann, Alsace, France, has for some years been the focus of attention of the city, which owns it, and of partners such as historians and archaeologists who are in charge of its study. The valorization of the site is one of the main objectives, along with its conservation and understanding. The aim of this project is to use the environment of the virtual tour viewer as a new base for an Archaeological Knowledge and Information System (AKIS). With available development tools, we add functionality, in particular through various scripts that convert the viewer into a real 3D interface. Beginning with a first virtual tour that contains about fifteen panoramic images, the site of about 150 × 150 meters can be completely documented, offering the user real interactivity that makes visualization very concrete, almost lifelike. After selecting pertinent viewpoints, panoramic images were produced. For the documentation, other sets of images were acquired in various seasons and weather conditions, which allows the site to be documented in different environments and states of vegetation. The final virtual tour was derived from them. The initial 3D model of the castle, itself virtual, was also included in the form of panoramic images to complete the understanding of the site. A variety of types of hotspots were used to connect the whole digital documentation to the site, including videos (reports during the acquisition phases, the restoration works, the excavations, etc.) and georeferenced digital documents (archaeological reports on the various constituent elements of the castle, interpretation of the excavations and surveys, descriptions of the sets of collected objects, etc.). The fully customized interface of the system allows the user either to switch from one panoramic image to another, as in classic virtual tours, or to go from a panoramic photographic image

  12. Fast 3D reconstruction of tool wear based on monocular vision and multi-color structured light illuminator

    NASA Astrophysics Data System (ADS)

    Wang, Zhongren; Li, Bo; Zhou, Yuebin

    2014-11-01

    Fast 3D reconstruction of tool wear from 2D images is of great importance for 3D measurement and objective evaluation of tool wear condition, for determining the correct tool-change time, and for ensuring machined-part quality. Extracting 3D information of the tool wear zone from monocular multi-color structured light enables fast recovery of the surface topography of tool wear, which overcomes the problems of traditional methods, such as the solution ambiguity and slow convergence of shape-from-shading (SFS) and the stereo matching required by 3D reconstruction from multiple images. In this paper, a new multi-color structured-light illuminator is put forward. An information mapping model is established among the illuminator's structural parameters, the surface morphology and the color images. The mathematical model to reconstruct 3D morphology based on monocular multi-color structured light is presented. Experimental results show that this method is effective and efficient for reconstructing the surface morphology of the tool wear zone.
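
    The depth-recovery step behind a structured-light sensor of this kind can be reduced to a ray/plane intersection: each image pixel defines a viewing ray, and the calibrated light plane that produced the detected stripe fixes the depth along that ray. The abstract does not give the authors' equations, so the following numpy sketch only illustrates this generic step under a simple pinhole model; the intrinsics K and the plane parameters are made-up placeholders.

    ```python
    import numpy as np

    def backproject_ray(u, v, K):
        """Unit viewing ray (camera frame) through pixel (u, v) for intrinsics K."""
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
        return ray / np.linalg.norm(ray)

    def intersect_light_plane(ray, n, d):
        """Intersect X = t * ray (camera at origin) with the plane n . X + d = 0."""
        t = -d / float(n @ ray)          # assumes the ray is not parallel to the plane
        return t * ray                   # 3D point in the camera frame

    # Illustrative numbers only: 800 px focal length, principal point (320, 240),
    # and a light plane known from calibration of the illuminator.
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0,   0.0,   1.0]])
    n = np.array([0.5, 0.0, -0.866])     # plane normal in the camera frame
    d = 0.20                             # plane offset in metres
    print(intersect_light_plane(backproject_ray(400.0, 250.0, K), n, d))
    ```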

  13. Stereo vision based hand-held laser scanning system design

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Xu, Jun; Wang, Jinming

    2011-11-01

    Although 3D scanning systems are used more and more broadly in many fields, such as computer animation, computer-aided design and digital museums, a convenient scanning device is too expensive for most people to afford. On the other hand, imaging devices are becoming cheaper, and a stereo vision system with two video cameras costs little. In this paper, a hand-held laser scanning system is designed based on the stereo vision principle. The two video cameras are fixed together and are both calibrated in advance. The scanned object, attached with some coded markers, is placed in front of the stereo system, and its position and orientation can be changed freely according to the needs of scanning. During scanning, the operator sweeps a line laser source and projects it onto the object. At the same time, the stereo vision system captures the projected lines and reconstructs their 3D shapes. The coded markers are used to transform the coordinate systems of points scanned from different views. Two methods are used to obtain more accurate results. One is to use NURBS curves to interpolate the sections of the laser lines to obtain accurate central points; a thin-plate spline is then used to approximate the central points, so that an exact laser central line is obtained, which guarantees an accurate correspondence between the two cameras. The other is to incorporate the constraint of the laser sweep plane on the reconstructed 3D curves through a PCA (Principal Component Analysis) algorithm, which yields more accurate results. Some examples are given to verify the system.
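
    The laser-sweep-plane constraint mentioned above can be imposed, for instance, by fitting a plane to the reconstructed 3D curve points with PCA and snapping the points onto it: the eigenvector of the point covariance with the smallest eigenvalue is the plane normal. The abstract does not detail the authors' exact formulation, so this is only a generic sketch.

    ```python
    import numpy as np

    def fit_plane_pca(points):
        """Fit a plane to an (N, 3) point set; return (centroid, unit normal)."""
        centroid = points.mean(axis=0)
        # Right singular vector with the smallest singular value = plane normal.
        _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
        return centroid, vt[-1]

    def project_to_plane(points, centroid, normal):
        """Snap noisy reconstructed laser-line points onto the fitted sweep plane."""
        offsets = (points - centroid) @ normal
        return points - np.outer(offsets, normal)

    # Toy data: noisy 3D points that should all lie on a single laser sweep plane.
    rng = np.random.default_rng(0)
    pts = np.column_stack([rng.uniform(0, 1, 200),
                           rng.uniform(0, 1, 200),
                           np.zeros(200)]) + 0.002 * rng.standard_normal((200, 3))
    c, nrm = fit_plane_pca(pts)
    flat = project_to_plane(pts, c, nrm)
    ```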

  14. 3D scanning and 3D printing as innovative technologies for fabricating personalized topical drug delivery systems.

    PubMed

    Goyanes, Alvaro; Det-Amornrat, Usanee; Wang, Jie; Basit, Abdul W; Gaisford, Simon

    2016-07-28

    Acne is a multifactorial inflammatory skin disease with high prevalence. In this work, the potential of 3D printing to produce flexible personalised-shape anti-acne drug (salicylic acid) loaded devices was demonstrated by two different 3D printing (3DP) technologies: Fused Deposition Modelling (FDM) and stereolithography (SLA). 3D scanning technology was used to obtain a 3D model of a nose adapted to the morphology of an individual. In FDM 3DP, commercially produced Flex EcoPLA™ (FPLA) and polycaprolactone (PCL) filaments were loaded with salicylic acid by hot melt extrusion (HME) (theoretical drug loading: 2% w/w) and used as feedstock material for 3D printing. Drug loading in the FPLA-salicylic acid and PCL-salicylic acid 3D printed patches was 0.4% w/w and 1.2% w/w respectively, indicating significant thermal degradation of drug during HME and 3D printing. Diffusion testing in Franz cells using a synthetic membrane revealed that the drug loaded printed samples released <187 μg/cm² within 3 h. FPLA-salicylic acid filament was successfully printed as a nose-shape mask by FDM 3DP, but the PCL-salicylic acid filament was not. In the SLA printing process, the drug was dissolved in different mixtures of poly(ethylene glycol) diacrylate (PEGDA) and poly(ethylene glycol) (PEG) that were solidified by the action of a laser beam. SLA printing led to 3D printed devices (nose-shape) with higher resolution and higher drug loading (1.9% w/w) than FDM, with no drug degradation. The results of drug diffusion tests revealed that drug diffusion was faster than with the FDM devices, 229 and 291 μg/cm² within 3 h for the two formulations evaluated. In this study, SLA printing was the more appropriate 3D printing technology to manufacture anti-acne devices with salicylic acid. The combination of 3D scanning and 3D printing has the potential to offer solutions to produce personalised drug loaded devices, adapted in shape and size to individual patients.

  16. D3D augmented reality imaging system: proof of concept in mammography

    PubMed Central

    Douglas, David B; Petricoin, Emanuel F; Liotta, Lance; Wilson, Eugene

    2016-01-01

    Purpose: The purpose of this article is to present images from simulated breast microcalcifications and assess the pattern of the microcalcifications with a technical development called “depth 3-dimensional (D3D) augmented reality”. Materials and methods: A computer, head display unit, joystick, D3D augmented reality software, and an in-house script of simulated data of breast microcalcifications in a ductal distribution were used. No patient data was used and no statistical analysis was performed. Results: The D3D augmented reality system demonstrated stereoscopic depth perception by presenting a unique image to each eye, focal point convergence, head position tracking, 3D cursor, and joystick fly-through. Conclusion: The D3D augmented reality imaging system offers image viewing with depth perception and focal point convergence. The D3D augmented reality system should be tested to determine its utility in clinical practice. PMID:27563261

  17. Mobile viewer system for virtual 3D space using infrared LED point markers and camera

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Taneji, Shoto

    2006-09-01

    The authors have developed a 3D workspace system using collaborative imaging devices. A stereoscopic display enables this system to project 3D information. In this paper, we describe the position detecting system for a see-through 3D viewer. 3D display is a useful technology for virtual reality, mixed reality and augmented reality. We have researched spatial imaging and interaction systems, and have previously proposed 3D displays using a slit parallax barrier, a lenticular screen and holographic optical elements (HOEs) for displaying active images 1)2)3)4). The purpose of this paper is to propose an interactive system using these 3D imaging technologies. The observer can view virtual images in the real world while watching the screen of a see-through 3D viewer. The goal of our research is a display system in which, when users see the real world through the mobile viewer, the system presents virtual 3D images floating in the air, and the observers can touch and interact with these floating images, for example in the way children play with modeling clay. The key technologies of this system are the position recognition system and the spatial imaging display. The 3D images are presented by an improved parallax-barrier 3D display. Here the authors discuss the measuring method for the mobile viewer using infrared LED point markers and a camera in the 3D workspace (augmented reality world). The authors present the geometric analysis of the proposed measuring method, which is the simplest approach in that it uses a single camera rather than a stereo camera, and the results of our viewer system.
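
    In its generic form, recovering the mobile viewer's pose from the known 3D positions of the infrared LED markers and their detected image locations in a single camera is a perspective-n-point (PnP) problem. The abstract does not spell out the authors' geometric analysis, so the snippet below only sketches this standard PnP step with OpenCV; the marker coordinates, pixel measurements and intrinsics are illustrative placeholders consistent with a camera about 0.5 m from the markers.

    ```python
    import numpy as np
    import cv2

    # Known 3D marker positions in the workspace frame (metres) and their detected
    # 2D centroids in the image (pixels) -- illustrative values only.
    object_points = np.array([[0.0, 0.0, 0.00],
                              [0.1, 0.0, 0.00],
                              [0.0, 0.1, 0.00],
                              [0.1, 0.1, 0.02]], dtype=np.float64)
    image_points = np.array([[240.0, 160.0],
                             [400.0, 160.0],
                             [240.0, 320.0],
                             [396.9, 316.9]], dtype=np.float64)
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0,   0.0,   1.0]])
    dist = np.zeros(5)                       # assume negligible lens distortion

    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
    R, _ = cv2.Rodrigues(rvec)               # camera rotation
    viewer_position = (-R.T @ tvec).ravel()  # camera centre in the workspace frame
    ```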

  18. Basic design principles of colorimetric vision systems

    NASA Astrophysics Data System (ADS)

    Mumzhiu, Alex M.

    1998-10-01

    Color measurement is an important part of overall production quality control in the textile, coating, plastics, food, paper and other industries. The color measurement instruments such as colorimeters and spectrophotometers used for production quality control have many limitations. In many applications they cannot be used for a variety of reasons and have to be replaced with human operators. Machine vision has great potential for color measurement. The components for color machine vision systems, such as broadcast-quality 3-CCD cameras, fast and inexpensive PCI frame grabbers, and sophisticated image processing software packages, are available. However, the machine vision industry has only started to approach the color domain. The few color machine vision systems on the market, produced by the largest machine vision manufacturers, have very limited capabilities. A lack of understanding that a vision-based color measurement system can fail if it ignores the basic principles of colorimetry is the main reason for the slow progress of color vision systems. The purpose of this paper is to clarify how color measurement principles have to be applied to vision systems and how the electro-optical design features of colorimeters have to be modified in order to implement them in vision systems. The subject far exceeds the limitations of a journal paper, so only the most important aspects are discussed. An overview of the major areas of application for colorimetric vision systems is given. Finally, the reasons why some customers are happy with their vision systems and some are not are analyzed.

  19. Single Lens Dual-Aperture 3D Imaging System: Color Modeling

    NASA Technical Reports Server (NTRS)

    Bae, Sam Y.; Korniski, Ronald; Ream, Allen; Fritz, Eric; Shearn, Michael

    2012-01-01

    In an effort to miniaturize a 3D imaging system, we created two viewpoints in a single objective lens camera. This was accomplished by placing a pair of Complementary Multi-band Bandpass Filters (CMBFs) in the aperture area. Two key characteristics of the CMBFs are that the passbands are staggered, so only one viewpoint is open at a time when a light band matched to that passband is illuminated, and that the passbands are positioned throughout the visible spectrum, so each viewpoint can render color by taking RGB spectral images. Each viewpoint takes a different spectral image from the other viewpoint, hence yielding a different color image relative to the other. This color mismatch between the two viewpoints could lead to color rivalry, where the human vision system fails to resolve two different colors. The mismatch becomes smaller as the number of passbands in a CMBF increases. (However, the number of passbands is constrained by cost and fabrication technique.) In this paper, a simulation predicting the color mismatch is reported.

  20. A navigation system for flexible endoscopes using abdominal 3D ultrasound

    NASA Astrophysics Data System (ADS)

    Hoffmann, R.; Kaar, M.; Bathia, Amon; Bathia, Amar; Lampret, A.; Birkfellner, W.; Hummel, J.; Figl, M.

    2014-09-01

    A navigation system for flexible endoscopes equipped with ultrasound (US) scan heads is presented. In contrast to similar systems, abdominal 3D-US is used for image fusion of the pre-interventional computed tomography (CT) to the endoscopic US. A 3D-US scan, tracked with an optical tracking system (OTS), is taken pre-operatively together with the CT scan. The CT is calibrated using the OTS, providing the transformation from CT to 3D-US. Immediately before the intervention, a 3D-US scan tracked with an electromagnetic tracking system (EMTS) is acquired and registered intra-modally to the pre-operative 3D-US. The endoscopic US is calibrated using the EMTS and registered to the pre-operative CT by an intra-modal 3D-US/3D-US registration. Phantom studies showed a US-to-CT registration error of 5.1 mm ± 2.8 mm. 3D-US/3D-US registration of patient data gave an error of 4.1 mm, compared to 2.8 mm with the phantom. From this we estimate an error of 5.6 mm for patient experiments.
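
    The navigation described above is ultimately a chain of rigid transforms (endoscopic US → EMTS, EMTS-tracked intra-operative 3D-US → pre-operative 3D-US, pre-operative 3D-US → CT). The sketch below only shows how such 4x4 homogeneous transforms compose and map points between frames; the matrices themselves are placeholders standing in for the calibration and registration results described in the abstract.

    ```python
    import numpy as np

    def rigid_transform(R, t):
        """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    def apply(T, points):
        """Apply a 4x4 transform to an (N, 3) array of points."""
        homog = np.c_[points, np.ones(len(points))]
        return (homog @ T.T)[:, :3]

    # Placeholder matrices standing in for the calibration/registration results
    # (identity rotations and simple offsets, in millimetres, for illustration only).
    T_endoUS_to_EMTS  = rigid_transform(np.eye(3), [1.0, 0.0, 0.0])
    T_EMTS_to_preopUS = rigid_transform(np.eye(3), [0.0, 2.0, 0.0])
    T_preopUS_to_CT   = rigid_transform(np.eye(3), [0.0, 0.0, 3.0])

    # Chaining maps a point measured in the endoscopic US frame into CT coordinates.
    T_endoUS_to_CT = T_preopUS_to_CT @ T_EMTS_to_preopUS @ T_endoUS_to_EMTS
    print(apply(T_endoUS_to_CT, np.array([[10.0, 0.0, 0.0]])))   # -> [[11., 2., 3.]]
    ```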

  1. Synthetic vision as an integrated element of an enhanced vision system

    NASA Astrophysics Data System (ADS)

    Jennings, Chad W.; Alter, Keith W.; Barrows, Andrew K.; Bernier, Ken L.; Guell, Jeff J.

    2002-07-01

    Enhanced Vision Systems (EVS) and Synthetic Vision Systems (SVS) have the potential to allow vehicle operators to benefit from the best that various image sources have to offer. The ability to see in all directions, even in reduced visibility conditions, offers considerable benefits for operational effectiveness and safety. Nav3D and The Boeing Company are conducting development work on an Enhanced Vision System with an integrated Synthetic Vision System. The EVS consists of several imaging sensors that are digitally fused together to give a pilot a better view of the outside world even in challenging visual conditions. The EVS is limited, however, to providing imagery within the viewing frustum of its imaging sensors. The SVS can provide a rendered image of an a priori database in any direction the pilot chooses to look, and thus can provide information on terrain and flight path that is outside the purview of the EVS. Design concepts of the system are discussed, and the ground and flight testing of the system is described.

  2. Visualizing Terrestrial and Aquatic Systems in 3D

    EPA Science Inventory

    The need for better visualization tools for environmental science is well documented, and the Visualization for Terrestrial and Aquatic Systems project (VISTAS) aims to both help scientists produce effective environmental science visualizations and to determine which visualizatio...

  3. Snapshot 3D optical coherence tomography system using image mapping spectrometry

    PubMed Central

    Nguyen, Thuc-Uyen; Pierce, Mark C; Higgins, Laura; Tkaczyk, Tomasz S

    2013-01-01

    A snapshot 3-Dimensional Optical Coherence Tomography system was developed using Image Mapping Spectrometry. This system can give depth information (Z) at different spatial positions (XY) within one camera integration time to potentially reduce motion artifact and enhance throughput. The current (x,y,λ) datacube of (85×356×117) provides a 3D visualization of the sample with 400 μm depth and 13.4 μm transverse resolution. Axial resolution of 16.0 μm can also be achieved in this proof-of-concept system. We present an analysis of the theoretical constraints which will guide development of future systems with increased imaging depth and improved axial and lateral resolutions. PMID:23736629

  4. Vision inspection system and method

    NASA Technical Reports Server (NTRS)

    Huber, Edward D. (Inventor); Williams, Rick A. (Inventor)

    1997-01-01

    An optical vision inspection system (4) and method for multiplexed illuminating, viewing, analyzing and recording a range of characteristically different kinds of defects, depressions, and ridges in a selected material surface (7) with first and second alternating optical subsystems (20, 21) illuminating and sensing successive frames of the same material surface patch. To detect the different kinds of surface features including abrupt as well as gradual surface variations, correspondingly different kinds of lighting are applied in time-multiplexed fashion to the common surface area patches under observation.

  5. VISION 21 SYSTEMS ANALYSIS METHODOLOGIES

    SciTech Connect

    G.S. Samuelsen; A. Rao; F. Robson; B. Washom

    2003-08-11

    Under the sponsorship of the U.S. Department of Energy/National Energy Technology Laboratory, a multi-disciplinary team led by the Advanced Power and Energy Program of the University of California at Irvine is defining the system engineering issues associated with the integration of key components and subsystems into power plant systems that meet performance and emission goals of the Vision 21 program. The study efforts have narrowed down the myriad of fuel processing, power generation, and emission control technologies to selected scenarios that identify those combinations having the potential to achieve the Vision 21 program goals of high efficiency and minimized environmental impact while using fossil fuels. The technology levels considered are based on projected technical and manufacturing advances being made in industry and on advances identified in current and future government supported research. Included in these advanced systems are solid oxide fuel cells and advanced cycle gas turbines. The results of this investigation will serve as a guide for the U. S. Department of Energy in identifying the research areas and technologies that warrant further support.

  6. Advanced integrated enhanced vision systems

    NASA Astrophysics Data System (ADS)

    Kerr, J. R.; Luk, Chiu H.; Hammerstrom, Dan; Pavel, Misha

    2003-09-01

    In anticipation of its ultimate role in transport, business and rotary wing aircraft, we clarify the role of Enhanced Vision Systems (EVS): how the output data will be utilized, appropriate architecture for total avionics integration, pilot and control interfaces, and operational utilization. Ground-map (database) correlation is critical, and we suggest that "synthetic vision" is simply a subset of the monitor/guidance interface issue. The core of integrated EVS is its sensor processor. In order to approximate optimal, Bayesian multi-sensor fusion and ground correlation functionality in real time, we are developing a neural net approach utilizing human visual pathway and self-organizing, associative-engine processing. In addition to EVS/SVS imagery, outputs will include sensor-based navigation and attitude signals as well as hazard detection. A system architecture is described, encompassing an all-weather sensor suite; advanced processing technology; inertial, GPS and other avionics inputs; and pilot and machine interfaces. Issues of total-system accuracy and integrity are addressed, as well as flight operational aspects relating to both civil certification and military applications in IMC.

  7. Computational 3-D Model of the Human Respiratory System

    EPA Science Inventory

    We are developing a comprehensive, morphologically-realistic computational model of the human respiratory system that can be used to study the inhalation, deposition, and clearance of contaminants, while being adaptable for age, race, gender, and health/disease status. The model ...

  8. A 3-D Multilateration: A Precision Geodetic Measurement System

    NASA Technical Reports Server (NTRS)

    Escobal, P. R.; Fliegel, H. F.; Jaffe, R. M.; Muller, P. M.; Ong, K. M.; Vonroos, O. H.

    1972-01-01

    A system was designed with the capability of determining 1-cm accuracy station positions in three dimensions using pulsed laser earth satellite tracking stations coupled with strictly geometric data reduction. With this high accuracy, several crucial geodetic applications become possible, including earthquake hazards assessment, precision surveying, plate tectonics, and orbital determination.

  9. Animal testing using 3D microwave tomography system for breast cancer detection.

    PubMed

    Lee, Jong Moon; Son, Sung Ho; Kim, Hyuk Je; Kim, Bo Ra; Choi, Heyng Do; Jeon, Soon Ik

    2014-01-01

    The three-dimensional microwave tomography (3D MT) system of the Electronics and Telecommunications Research Institute (ETRI) comprises an antenna array, a transmitting/receiving module, a switch matrix module and a signal processing component. The system also includes a patient interface bed as well as a 3D reconstruction algorithm. Here, we perform a comparative analysis between the image reconstruction results obtained with the assembled system, which was used to image the breasts of dogs, and MRI results. Microwave imaging reconstruction results (at 1,500 MHz) obtained using the ETRI 3D MT system are presented. The system provides computationally reliable diagnosis results from the reconstructed MT image. PMID:25160233

  10. Annular beam shaping system for advanced 3D laser brazing

    NASA Astrophysics Data System (ADS)

    Pütsch, Oliver; Stollenwerk, Jochen; Kogel-Hollacher, Markus; Traub, Martin

    2012-10-01

    As laser brazing benefits from advantages such as smooth joints and small heat-affected zones, it has become established as a joining technology that is widely used in the automotive industry. When processing complex-shaped geometries, recently developed brazing heads suffer, however, from the need for continuous reorientation of the optical system and/or limited accessibility due to lateral wire feeding. This motivates the development of a laser brazing head with coaxial wire feeding and enhanced functionality. An optical system is designed that allows an annular intensity distribution to be generated in the working zone. The utilization of complex optical components avoids obscuration of the optical path by the wire feeding. The new design overcomes the disadvantages of state-of-the-art brazing heads with lateral wire feeding and benefits from direction independence while processing complex geometries. To increase the robustness of the brazing process, the beam path also includes a seam tracking system, leading to a more challenging design of the whole optical train. This paper mainly discusses the concept and the optical design of the coaxial brazing head, and also presents results obtained with a prototype and selected application results.

  11. [A Meridian Visualization System Based on Impedance and Binocular Vision].

    PubMed

    Su, Qiyan; Chen, Xin

    2015-03-01

    To ensure that meridians can be measured and displayed correctly on the human body surface, a visualization method based on impedance and binocular vision is proposed. First, an alternating constant-current source is used to inject a current signal into the skin surface; then, exploiting the low-impedance characteristics of meridians, a multi-channel detecting instrument measures the voltage across each pair of electrodes, thereby locating the channel of the meridian, and the data are transmitted to the host computer through serial port communication. Second, the intrinsic and extrinsic parameters of the cameras are obtained by Zhang's camera calibration method, the 3D coordinates of the meridian locations are obtained by corner selection and matching on the optical target, and the coordinates are transformed according to the binocular vision principle. Finally, curve fitting and image fusion are used to realize the meridian visualization. Test results show that the system can achieve real-time detection and accurate display of meridians. PMID:26524777
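
    Once both cameras are calibrated by Zhang's method, a matched corner on the optical target can be lifted to 3D by linear (DLT) triangulation: each view contributes two homogeneous equations in the unknown point, and the least-squares solution is the SVD null vector. This is a generic sketch of that step, not the authors' code; the projection matrices and pixel coordinates are placeholders.

    ```python
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one point from two 3x4 projection matrices.

        x1, x2: (u, v) pixel coordinates of the same physical point in each view.
        """
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]                      # de-homogenise

    # Placeholder stereo rig: identical intrinsics, second camera shifted 60 mm in x.
    K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-60.0], [0.0], [0.0]])])

    X_true = np.array([10.0, 20.0, 500.0, 1.0])
    x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
    x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
    print(triangulate(P1, P2, x1, x2))           # ~ [10, 20, 500]
    ```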

  12. 3D two-photon lithographic microfabrication system

    DOEpatents

    Kim, Daekeun; So, Peter T. C.

    2011-03-08

    An imaging system is provided that includes an optical pulse generator for providing an optical pulse that has a spectral bandwidth and includes monochromatic waves of different wavelengths. A dispersive element receives a second optical pulse associated with the optical pulse and disperses the second optical pulse at different angles on the surface of the dispersive element depending on wavelength. One or more focal elements receive the dispersed second optical pulse produced on the dispersive element. The one or more focal elements recombine the dispersed second optical pulse at a focal plane on a specimen, where the width of the optical pulse is restored at the focal plane.

  13. Large LED screen 3D television system without eyewear

    NASA Astrophysics Data System (ADS)

    Nishida, Nobuo; Yamamoto, Hirotsugu; Hayasaki, Yoshio

    2004-10-01

    Since the development of high-brightness blue and green LEDs, the use of outdoor commercial LED displays has been increasing. Because of their high brightness, good visibility, and long-term durability in outdoor weather, LED displays are a preferred technology for outdoor installations such as stadiums, street advertising, and billboards. This paper deals with a large stereoscopic full-color LED display that uses a parallax barrier. We discuss optimization of the viewing area, which depends on the LED arrangement. An enlarged viewing area has been demonstrated by using a 3-in-1 chip LED panel that has wider black regions than ordinary LED lamp cluster panels. We have developed a real-time system for measuring a viewer's position and have used it to evaluate the performance of different stereoscopic LED display designs, including conventional designs that provide multiple perspective images and designs that eliminate pseudoscopic viewing areas. In order to show real-world images, it is necessary to capture stereo images, process them, and display them in real time. We have developed an active binocular camera and demonstrated the real-time display of stereoscopic movies and real-time control of convergence.
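
    For a two-view parallax-barrier display, the viewing-zone design is usually derived from similar triangles relating the emitter (pixel) pitch, the interocular distance, the design viewing distance, and the panel-to-barrier gap. The abstract does not give the authors' LED-specific equations, so the calculation below only illustrates these textbook two-view relations with made-up numbers.

    ```python
    # Textbook two-view parallax-barrier relations from similar triangles.
    # Illustrative numbers only; these are not the panel parameters of the paper.
    pixel_pitch = 3.0      # mm, pitch of one left/right sub-image column (LED pitch)
    eye_sep     = 65.0     # mm, nominal interocular distance
    view_dist   = 3000.0   # mm, design viewing distance from the emitting plane

    gap = pixel_pitch * view_dist / (eye_sep + pixel_pitch)            # panel-to-barrier gap
    barrier_pitch = 2.0 * pixel_pitch * (view_dist - gap) / view_dist  # slightly < 2 * pitch

    print(f"barrier gap   = {gap:.1f} mm")            # ~132.4 mm
    print(f"barrier pitch = {barrier_pitch:.3f} mm")  # ~5.735 mm
    ```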

  14. 3d Modeling of cultural heritage objects with a structured light system.

    NASA Astrophysics Data System (ADS)

    Akca, Devrim

    3D modeling of cultural heritage objects is an expanding application area. The selection of the right technology is very important and strictly related to the project requirements, budget and user's experience. Triangulation-based active sensors, e.g. structured-light systems, are used for many kinds of 3D object reconstruction tasks and in particular for 3D recording of cultural heritage objects. This study presents the experience gained and the results obtained in two such projects in which a close-range structured-light system was used for 3D digitization. The paper covers the essential steps of the 3D object modeling pipeline, i.e. digitization, registration, surface triangulation, editing, texture mapping and visualization. The capabilities of the hardware and software used are addressed. Particular emphasis is given to a coded structured-light system as an option for data acquisition.

  15. A Laser-Based Vision System for Weld Quality Inspection

    PubMed Central

    Huang, Wei; Kovacevic, Radovan

    2011-01-01

    Welding is a very complex process in which the final weld quality can be affected by many process parameters. In order to inspect the weld quality and detect the presence of various weld defects, different methods and systems have been studied and developed. In this paper, a laser-based vision system is developed for non-destructive weld quality inspection. The vision sensor is designed based on the principle of laser triangulation. By processing the images acquired from the vision sensor, the geometrical features of the weld can be obtained. Through visual analysis of the acquired 3D profiles of the weld, the presence as well as the positions and sizes of weld defects can be accurately identified; therefore, non-destructive weld quality inspection can be achieved. PMID:22344308
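
    In a laser-triangulation sensor like this one, a typical first processing step is to locate the laser stripe with sub-pixel precision in each image column, for example with an intensity-weighted centroid, before converting the profile to height through the calibrated geometry. The paper's own pipeline is not detailed in the abstract, so the following is only a generic sketch of that centroid step.

    ```python
    import numpy as np

    def laser_peak_per_column(image, threshold=30):
        """Sub-pixel row position of the laser stripe in every image column.

        image: 2D grey-level array; columns without a stripe return NaN.
        """
        img = image.astype(float)
        img[img < threshold] = 0.0                 # suppress background
        rows = np.arange(img.shape[0])[:, None]
        weight = img.sum(axis=0)
        with np.errstate(invalid="ignore", divide="ignore"):
            centroid = (img * rows).sum(axis=0) / weight
        centroid[weight == 0] = np.nan
        return centroid

    # Synthetic test: a bright stripe whose height bump mimics a weld bead profile.
    img = np.zeros((100, 200))
    stripe = (40 + 5 * np.exp(-((np.arange(200) - 100) / 20.0) ** 2)).astype(int)
    img[stripe, np.arange(200)] = 200
    profile = laser_peak_per_column(img)           # ~stripe, with sub-pixel precision
    ```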

  16. Characterization of 3D printing output using an optical sensing system

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2015-05-01

    This paper presents the experimental design and initial testing of a system to characterize the progress and performance of a 3D printer. The system is based on five Raspberry Pi single-board computers. It collects images of the 3D printed object, which are compared to an ideal model. The system, while suitable for printers of all sizes, can potentially be produced at a sufficiently low cost to allow its incorporation into consumer-grade printers. The efficacy and accuracy of this system are presented and discussed. The paper concludes with a discussion of the benefits of being able to characterize 3D printer performance.
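
    The paper does not specify how the captured images are compared to the ideal model, but one simple illustrative metric is to threshold both the camera view and a rendering of the expected geometry into silhouettes and score their overlap, e.g. by intersection over union (IoU); the snippet below is a hedged sketch of that idea, not the authors' method.

    ```python
    import numpy as np

    def silhouette_iou(captured, expected, threshold=128):
        """Intersection over union of two greyscale views thresholded to silhouettes."""
        a = captured >= threshold
        b = expected >= threshold
        union = np.logical_or(a, b).sum()
        if union == 0:
            return 1.0                     # both empty: trivially identical
        return np.logical_and(a, b).sum() / union

    # A print could be flagged once its overlap with the ideal model drops too low.
    ideal = np.zeros((240, 320)); ideal[60:180, 100:220] = 255
    actual = ideal.copy();        actual[60:90, 100:130] = 0   # simulated missing material
    print(silhouette_iou(actual, ideal))                       # ~0.94
    ```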

  17. A continuous surface reconstruction method on point cloud captured from a 3D surface photogrammetry system

    SciTech Connect

    Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J.; Sawant, Amit; Ruan, Dan

    2015-11-15

    achieved submillimeter reconstruction RMSE under different configurations, demonstrating quantitatively the faith of the proposed method in preserving local structural properties of the underlying surface in the presence of noise and missing measurements, and its robustness toward variations of such characteristics. On point clouds from the human subject, the proposed method successfully reconstructed all patient surfaces, filling regions where raw point coordinate readings were missing. Within two comparable regions of interest in the chest area, similar mean curvature distributions were acquired from both their reconstructed surface and CT surface, with mean and standard deviation of (μ_recon = −2.7 × 10⁻³ mm⁻¹, σ_recon = 7.0 × 10⁻³ mm⁻¹) and (μ_CT = −2.5 × 10⁻³ mm⁻¹, σ_CT = 5.3 × 10⁻³ mm⁻¹), respectively. The agreement of local geometry properties between the reconstructed surfaces and the CT surface demonstrated the ability of the proposed method in faithfully representing the underlying patient surface. Conclusions: The authors have integrated and developed an accurate level-set based continuous surface reconstruction method on point clouds acquired by a 3D surface photogrammetry system. The proposed method has generated a continuous representation of the underlying phantom and patient surfaces with good robustness against noise and missing measurements. It serves as an important first step for further development of motion tracking methods during radiotherapy.
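
    The abstract compares mean-curvature statistics between the reconstructed and CT surfaces. For a surface stored as a height field z = f(x, y), mean curvature can be evaluated with the standard graph-surface formula H = ((1 + f_y^2) f_xx − 2 f_x f_y f_xy + (1 + f_x^2) f_yy) / (2 (1 + f_x^2 + f_y^2)^(3/2)); the sketch below applies it to a synthetic bump purely as an illustration (the paper's surfaces are level-set reconstructions of point clouds, so this is an analogy, not their implementation).

    ```python
    import numpy as np

    def mean_curvature(z, spacing=1.0):
        """Mean curvature of a height-field surface z = f(x, y) on a regular grid."""
        zy, zx = np.gradient(z, spacing)           # first derivatives
        zxy, zxx = np.gradient(zx, spacing)        # second derivatives
        zyy, _ = np.gradient(zy, spacing)
        num = (1 + zy**2) * zxx - 2 * zx * zy * zxy + (1 + zx**2) * zyy
        return num / (2.0 * (1 + zx**2 + zy**2) ** 1.5)

    # Curvature statistics (mean, standard deviation) over a synthetic smooth bump,
    # analogous to the per-region statistics quoted above (units: 1/mm if the grid
    # spacing and heights are in mm).
    x, y = np.meshgrid(np.linspace(-50, 50, 201), np.linspace(-50, 50, 201))
    z = 20.0 * np.exp(-(x**2 + y**2) / (2 * 25.0**2))
    H = mean_curvature(z, spacing=0.5)
    print(H.mean(), H.std())
    ```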

  18. Generation of Multi-Scale Vascular Network System within 3D Hydrogel using 3D Bio-Printing Technology.

    PubMed

    Lee, Vivian K; Lanzi, Alison M; Haygan, Ngo; Yoo, Seung-Schik; Vincent, Peter A; Dai, Guohao

    2014-09-01

    Although 3D bio-printing technology has great potential in creating complex tissues with multiple cell types and matrices, maintaining the viability of thick tissue construct for tissue growth and maturation after the printing is challenging due to lack of vascular perfusion. Perfused capillary network can be a solution for this issue; however, construction of a complete capillary network at single cell level using the existing technology is nearly impossible due to limitations in time and spatial resolution of the dispensing technology. To address the vascularization issue, we developed a 3D printing method to construct larger (lumen size of ~1mm) fluidic vascular channels and to create adjacent capillary network through a natural maturation process, thus providing a feasible solution to connect the capillary network to the large perfused vascular channels. In our model, microvascular bed was formed in between two large fluidic vessels, and then connected to the vessels by angiogenic sprouting from the large channel edge. Our bio-printing technology has a great potential in engineering vascularized thick tissues and vascular niches, as the vascular channels are simultaneously created while cells and matrices are printed around the channels in desired 3D patterns. PMID:25484989

  20. Generation of Multi-Scale Vascular Network System within 3D Hydrogel using 3D Bio-Printing Technology

    PubMed Central

    Lee, Vivian K.; Lanzi, Alison M.; Haygan, Ngo; Yoo, Seung-Schik; Vincent, Peter A.; Dai, Guohao

    2014-01-01

    Although 3D bio-printing technology has great potential in creating complex tissues with multiple cell types and matrices, maintaining the viability of thick tissue construct for tissue growth and maturation after the printing is challenging due to lack of vascular perfusion. Perfused capillary network can be a solution for this issue; however, construction of a complete capillary network at single cell level using the existing technology is nearly impossible due to limitations in time and spatial resolution of the dispensing technology. To address the vascularization issue, we developed a 3D printing method to construct larger (lumen size of ~1mm) fluidic vascular channels and to create adjacent capillary network through a natural maturation process, thus providing a feasible solution to connect the capillary network to the large perfused vascular channels. In our model, microvascular bed was formed in between two large fluidic vessels, and then connected to the vessels by angiogenic sprouting from the large channel edge. Our bio-printing technology has a great potential in engineering vascularized thick tissues and vascular niches, as the vascular channels are simultaneously created while cells and matrices are printed around the channels in desired 3D patterns. PMID:25484989

  1. Impact of the 3-D model strategy on science learning of the solar system

    NASA Astrophysics Data System (ADS)

    Alharbi, Mohammed

    The purpose of this mixed-method study, quantitative and descriptive, was to determine whether first-middle-grade (seventh grade) students at Saudi schools are able to learn and use the Autodesk Maya software to interact with and create their own 3-D models and animations, and whether their use of the software influences their study habits and their understanding of the school subject matter. The study revealed that there is value to science students in using 3-D software to create 3-D models to complete science assignments. This study also aimed to address middle-school students' ability to learn 3-D software in art class and then ultimately use it in their science class. The success of this study may open the way to considering the impact of 3-D modeling on other school subjects, such as mathematics, art, and geography. When students start using graphic design, including 3-D software, at a young age, they tend to develop personal creativity and skills. The success of this study, if applied in schools, will provide the community with skillful young designers and increase awareness of graphic design and the new 3-D technology. An experimental method was used to answer the quantitative research question: are there significant differences among the learning methods using 3-D models (no 3-D, premade 3-D, and create 3-D) in a science class being taught about the solar system, in terms of their impact on the students' science achievement scores? A descriptive method was used to answer the qualitative research questions, which concern the difficulty of learning and using the Autodesk Maya software, the time that students take to use the basic levels of the Polygon and Animation parts of the Autodesk Maya software, and the level of quality of students' work.

  2. A fast 3D reconstruction system with a low-cost camera accessory

    NASA Astrophysics Data System (ADS)

    Zhang, Yiwei; Gibson, Graham M.; Hay, Rebecca; Bowman, Richard W.; Padgett, Miles J.; Edgar, Matthew P.

    2015-06-01

    Photometric stereo is a three dimensional (3D) imaging technique that uses multiple 2D images, obtained from a fixed camera perspective, with different illumination directions. Compared to other 3D imaging methods such as geometry modeling and 3D-scanning, it comes with a number of advantages, such as having a simple and efficient reconstruction routine. In this work, we describe a low-cost accessory to a commercial digital single-lens reflex (DSLR) camera system allowing fast reconstruction of 3D objects using photometric stereo. The accessory consists of four white LED lights fixed to the lens of a commercial DSLR camera and a USB programmable controller board to sequentially control the illumination. 3D images are derived for different objects with varying geometric complexity and results are presented, showing a typical height error of <3 mm for a 50 mm sized object.
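
    Photometric stereo recovers, at each pixel, an albedo-scaled surface normal from the intensities observed under the known light directions by solving the Lambertian model I = L·(ρn) in a least-squares sense. The numpy sketch below shows this per-pixel solve for a four-light setup like the one described; the light directions and images are placeholders, not the accessory's actual calibration.

    ```python
    import numpy as np

    def photometric_stereo(images, light_dirs):
        """Per-pixel albedo and unit normals from K images taken under K known lights.

        images:     (K, H, W) grey-level stack
        light_dirs: (K, 3) unit lighting directions
        """
        k, h, w = images.shape
        intensities = images.reshape(k, -1)                            # (K, H*W)
        g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)   # albedo * normal
        albedo = np.linalg.norm(g, axis=0)
        normals = g / np.maximum(albedo, 1e-8)
        return albedo.reshape(h, w), normals.T.reshape(h, w, 3)

    # Placeholder directions for four LEDs arranged around the lens (not the paper's
    # calibration) and random frames standing in for the four captured images.
    L = np.array([[ 0.5,  0.0, 0.866],
                  [-0.5,  0.0, 0.866],
                  [ 0.0,  0.5, 0.866],
                  [ 0.0, -0.5, 0.866]])
    frames = np.random.rand(4, 120, 160)
    rho, n = photometric_stereo(frames, L)
    ```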

  3. A fast 3D reconstruction system with a low-cost camera accessory.

    PubMed

    Zhang, Yiwei; Gibson, Graham M; Hay, Rebecca; Bowman, Richard W; Padgett, Miles J; Edgar, Matthew P

    2015-06-09

    Photometric stereo is a three dimensional (3D) imaging technique that uses multiple 2D images, obtained from a fixed camera perspective, with different illumination directions. Compared to other 3D imaging methods such as geometry modeling and 3D-scanning, it comes with a number of advantages, such as having a simple and efficient reconstruction routine. In this work, we describe a low-cost accessory to a commercial digital single-lens reflex (DSLR) camera system allowing fast reconstruction of 3D objects using photometric stereo. The accessory consists of four white LED lights fixed to the lens of a commercial DSLR camera and a USB programmable controller board to sequentially control the illumination. 3D images are derived for different objects with varying geometric complexity and results are presented, showing a typical height error of <3 mm for a 50 mm sized object.

  4. Opti-acoustic stereo imaging: on system calibration and 3-D target reconstruction.

    PubMed

    Negahdaripour, Shahriar; Sekkati, Hicham; Pirsiavash, Hamed

    2009-06-01

    Utilization of an acoustic camera for range measurements is a key advantage for 3-D shape recovery of underwater targets by opti-acoustic stereo imaging, where the associated epipolar geometry of optical and acoustic image correspondences can be described in terms of conic sections. In this paper, we propose methods for system calibration and 3-D scene reconstruction by maximum likelihood estimation from noisy image measurements. The recursive 3-D reconstruction method utilized as initial condition a closed-form solution that integrates the advantages of two other closed-form solutions, referred to as the range and azimuth solutions. Synthetic data tests are given to provide insight into the merits of the new target imaging and 3-D reconstruction paradigm, while experiments with real data confirm the findings based on computer simulations, and demonstrate the merits of this novel 3-D reconstruction paradigm.

  5. A fast 3D reconstruction system with a low-cost camera accessory

    PubMed Central

    Zhang, Yiwei; Gibson, Graham M.; Hay, Rebecca; Bowman, Richard W.; Padgett, Miles J.; Edgar, Matthew P.

    2015-01-01

    Photometric stereo is a three dimensional (3D) imaging technique that uses multiple 2D images, obtained from a fixed camera perspective, with different illumination directions. Compared to other 3D imaging methods such as geometry modeling and 3D-scanning, it comes with a number of advantages, such as having a simple and efficient reconstruction routine. In this work, we describe a low-cost accessory to a commercial digital single-lens reflex (DSLR) camera system allowing fast reconstruction of 3D objects using photometric stereo. The accessory consists of four white LED lights fixed to the lens of a commercial DSLR camera and a USB programmable controller board to sequentially control the illumination. 3D images are derived for different objects with varying geometric complexity and results are presented, showing a typical height error of <3 mm for a 50 mm sized object. PMID:26057407

  7. Quality Analysis of 3d Surface Reconstruction Using Multi-Platform Photogrammetric Systems

    NASA Astrophysics Data System (ADS)

    Lari, Z.; El-Sheimy, N.

    2016-06-01

    In recent years, the necessity of accurate 3D surface reconstruction has become more pronounced for a wide range of mapping, modelling, and monitoring applications. The 3D data for satisfying the needs of these applications can be collected using different digital imaging systems. Among them, photogrammetric systems have recently received considerable attention due to significant improvements in digital imaging sensors, the emergence of new mapping platforms, and the development of innovative data processing techniques. To date, a variety of techniques have been proposed for 3D surface reconstruction using imagery collected by multi-platform photogrammetric systems. However, these approaches suffer from the lack of a well-established quality control procedure that evaluates the quality of reconstructed 3D surfaces independently of the utilized reconstruction technique. Hence, this paper introduces a new quality assessment platform for the evaluation of 3D surface reconstruction using photogrammetric data. This quality control procedure considers the quality of the input data, the processing procedures, and the photo-realistic 3D surface modelling. The feasibility of the proposed quality control procedure is finally verified by quality assessment of 3D surface reconstruction using images from different photogrammetric systems.

  8. Development of 3D Woven Ablative Thermal Protection Systems (TPS) for NASA Spacecraft

    NASA Technical Reports Server (NTRS)

    Feldman, Jay D.; Ellerby, Don; Stackpoole, Mairead; Peterson, Keith; Venkatapathy, Ethiraj

    2015-01-01

    The development of a new class of thermal protection system (TPS) materials known as 3D Woven TPS led by the Entry Systems and Technology Division of NASA Ames Research Center (ARC) will be discussed. This effort utilizes 3D weaving and resin infusion technologies to produce heat shield materials that are engineered and optimized for specific missions and requirements. A wide range of architectures and compositions have been produced and preliminarily tested to prove the viability and tailorability of the 3D weaving approach to TPS.

  9. A 3D acquisition system combination of structured-light scanning and shape from silhouette

    NASA Astrophysics Data System (ADS)

    Sun, Changku; Tao, Li; Wang, Peng; He, Li

    2006-05-01

    A robust and accurate three-dimensional (3D) acquisition system is presented, which is a combination of structured-light scanning and shape from silhouette. Using a common world coordinate system, the two groups of point data can be integrated into the final complete 3D model without any additional integration or registration algorithm. The mathematical model of structured-light scanning is described in detail, and the shape-from-silhouette algorithm is introduced as well. The complete 3D model of a cup with a handle was obtained successfully with the proposed technique. Finally, a measurement of a ball bearing was performed, with a measurement precision better than 0.15 mm.
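
    Shape from silhouette is commonly implemented as voxel (space) carving: each voxel of a bounding volume is projected into every calibrated view and kept only if it falls inside all of the object silhouettes. The abstract does not describe the authors' implementation, so the following is only a generic sketch with placeholder projection matrices.

    ```python
    import numpy as np

    def carve(voxels, projections, silhouettes):
        """Keep the voxels whose projection falls inside every binary silhouette.

        voxels:      (N, 3) candidate voxel centres in the common world frame
        projections: list of 3x4 camera projection matrices, one per view
        silhouettes: list of 2D boolean masks for the same views
        Assumes every voxel lies in front of every camera.
        """
        keep = np.ones(len(voxels), dtype=bool)
        homog = np.c_[voxels, np.ones(len(voxels))]
        for P, mask in zip(projections, silhouettes):
            x = homog @ P.T                                   # project into this view
            u = np.round(x[:, 0] / x[:, 2]).astype(int)
            v = np.round(x[:, 1] / x[:, 2]).astype(int)
            inside = (u >= 0) & (u < mask.shape[1]) & (v >= 0) & (v < mask.shape[0])
            ok = np.zeros(len(voxels), dtype=bool)
            ok[inside] = mask[v[inside], u[inside]]
            keep &= ok
        return voxels[keep]
    ```

    Because both data sets share the world coordinate system, the carved voxel set and the structured-light points land in the same frame and can be merged directly, as the abstract notes.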

  10. Peach Bottom 2 Turbine Trip Simulation Using TRAC-BF1/COS3D, a Best-Estimate Coupled 3-D Core and Thermal-Hydraulic Code System

    SciTech Connect

    Ui, Atsushi; Miyaji, Takamasa

    2004-10-15

    The best-estimate coupled three-dimensional (3-D) core and thermal-hydraulic code system TRAC-BF1/COS3D has been developed. COS3D, based on a modified one-group neutronic model, is a 3-D core simulator used for licensing analyses and core management of commercial boiling water reactor (BWR) plants in Japan. TRAC-BF1 is a plant simulator based on a two-fluid model. TRAC-BF1/COS3D is a coupled system of both codes, which are connected using a parallel computing tool. This code system was applied to the OECD/NRC BWR Turbine Trip Benchmark. Since the two-group cross-section tables are provided by the benchmark team, COS3D was modified to apply to this specification. Three best-estimate scenarios and four hypothetical scenarios were calculated using this code system. In the best-estimate scenario, the predicted core power with TRAC-BF1/COS3D is slightly underestimated compared with the measured data. The reason seems to be a slight difference in the core boundary conditions, that is, pressure changes and the core inlet flow distribution, because the peak in this analysis is sensitive to them. However, the results of this benchmark analysis show that TRAC-BF1/COS3D gives good precision for the prediction of the actual BWR transient behavior on the whole. Furthermore, the results with the modified one-group model and the two-group model were compared to verify the application of the modified one-group model to this benchmark. This comparison shows that the results of the modified one-group model are appropriate and sufficiently precise.

  11. ProteinVista: a fast molecular visualization system using Microsoft Direct3D.

    PubMed

    Park, Chan-Yong; Park, Sung-Hee; Park, Soo-Jun; Park, Sun-Hee; Hwang, Chi-Jung

    2008-09-01

    Many tools have been developed to visualize protein and molecular structures. Most high-quality protein visualization tools use the OpenGL graphics library as their 3D graphics system. The performance of 3D graphics hardware has improved rapidly in recent years, and recent high-performance 3D graphics hardware supports the Microsoft Direct3D graphics library better than OpenGL and has become very popular in personal computers (PCs). In this paper, a molecular visualization system termed ProteinVista is proposed. ProteinVista is a well-designed visualization system built on the Microsoft Direct3D graphics library. It provides various visualization styles, such as the wireframe, stick, ball-and-stick, space-fill, ribbon, and surface model styles, in addition to display options for 3D visualization. Because ProteinVista is optimized for recent 3D graphics hardware platforms and uses a geometry instancing technique, its rendering speed is 2.7 times faster than that of other visualization tools.

  12. Microscale screening systems for 3D cellular microenvironments: platforms, advances, and challenges

    PubMed Central

    Montanez-Sauri, Sara I.; Beebe, David J.; Sung, Kyung Eun

    2015-01-01

    The increasing interest in studying cells using more in vivo-like three-dimensional (3D) microenvironments has created a need for advanced 3D screening platforms with enhanced functionalities and increased throughput. 3D screening platforms that better mimic in vivo microenvironments with enhanced throughput would provide more in-depth understanding of the complexity and heterogeneity of microenvironments. The platforms would also better predict the toxicity and efficacy of potential drugs in physiologically relevant conditions. Traditional 3D culture models (e.g. spinner flasks, gyratory rotation devices, non-adhesive surfaces, polymers) were developed to create 3D multicellular structures. However, these traditional systems require large volumes of reagents and cells, and are not compatible with high throughput screening (HTS) systems. Microscale technology offers the miniaturization of 3D cultures and allows efficient screening of various conditions. This review will discuss the development, most influential works, and current advantages and challenges of microscale culture systems for screening cells in 3D microenvironments. PMID:25274061

  13. Compact Autonomous Hemispheric Vision System

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Cunningham, Thomas J.; Werne, Thomas A.; Eastwood, Michael L.; Walch, Marc J.; Staehle, Robert L.

    2012-01-01

    Solar System Exploration camera implementations to date have involved either single cameras with wide field-of-view (FOV) and consequently coarser spatial resolution, cameras on a movable mast, or single cameras necessitating rotation of the host vehicle to afford visibility outside a relatively narrow FOV. These cameras require detailed commanding from the ground or separate onboard computers to operate properly, and are incapable of making decisions based on image content that control pointing and downlink strategy. For color, a filter wheel having selectable positions was often added, which added moving parts, size, mass, power, and reduced reliability. A system was developed based on a general-purpose miniature visible-light camera using advanced CMOS (complementary metal oxide semiconductor) imager technology. The baseline camera has a 92° FOV and six cameras are arranged in an angled-up carousel fashion, with FOV overlaps such that the system has a 360° FOV (azimuth). A seventh camera, also with a FOV of 92°, is installed normal to the plane of the other 6 cameras, giving the system a >90° FOV in elevation and completing the hemispheric vision system. A central unit houses the common electronics box (CEB) controlling the system (power conversion, data processing, memory, and control software). Stereo is achieved by adding a second system on a baseline, and color is achieved by stacking two more systems (for a total of three, each system equipped with its own filter.) Two connectors on the bottom of the CEB provide a connection to a carrier (rover, spacecraft, balloon, etc.) for telemetry, commands, and power. This system has no moving parts. The system's onboard software (SW) supports autonomous operations such as pattern recognition and tracking.
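
    As a quick geometric check of the azimuth coverage described above: six cameras at 92° each span 6 × 92° = 552°, so the full 360° ring is covered with 552° − 360° = 192° of total overlap, i.e. roughly 32° at each of the six seams if the overlap is shared evenly (an assumption; the abstract does not give the actual seam layout).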

  14. 77 FR 2342 - Seventeenth Meeting: RTCA Special Committee 213, Enhanced Flight Vision/Synthetic Vision Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-17

    ... Federal Aviation Administration Seventeenth Meeting: RTCA Special Committee 213, Enhanced Flight Vision... Transportation (DOT). ACTION: Notice of RTCA Special Committee 213, Enhanced Flight Vision/ Synthetic Vision... meeting of RTCA Special Committee 213, Enhanced Flight Vision/Synthetic Vision Systems (EFVS/SVS)....

  15. GEO3D - Three-Dimensional Computer Model of a Ground Source Heat Pump System

    SciTech Connect

    James Menart

    2013-06-07

    This file is the setup file for the computer program GEO3D. GEO3D is a computer program written by Jim Menart to simulate vertical wells in conjunction with a heat pump for ground source heat pump (GSHP) systems. This is a very detailed three-dimensional computer model. This program produces detailed heat transfer and temperature field information for a vertical GSHP system.

  16. 3-D subsurface modeling within the framework of an environmental restoration information system: Prototype results using earthvision

    SciTech Connect

    Goeltz, R.T.; Zondlo, T.F.

    1994-12-31

    As a result of the DOE Oak Ridge Reservation (DOE-ORR) placement on the EPA Superfund National Priorities List in December of 1989, all remedial activities, including characterization, remedial alternatives selection, and implementation of remedial measures, must meet the combined requirements of RCRA, CERCLA, and NEPA. The Environmental Restoration Program, therefore, was established with the mission of eliminating or reducing to prescribed safe levels the risks to the environment or to human health and safety posed by inactive and surplus DOE-ORR managed sites and facilities that have been contaminated by radioactive, hazardous, or mixed wastes. In accordance with an established Federal Facilities Agreement (FFA), waste sites and facilities across the DOE-ORR have been inventoried, prioritized, and are being systematically investigated and remediated under the direction of Environmental Restoration. EarthVision, a product of Dynamic Graphics, Inc., that provides three-dimensional (3-D) modeling and visualization, was exercised within the framework of an environmental restoration (ER) decision support system. The goal of the prototype was to investigate framework integration issues including compatibility and value to decision making. This paper describes the ER program, study site, and information system framework; selected EarthVision results are shown and discussed. EarthVision proved effective in integrating complex data from disparate sources and in providing 3-D visualizations of the spatial relationships of the data, including contaminant plumes. Work is under way to expand the analysis to the full site, covering about 1600 acres, and to include data from new sources, particularly remote-sensing studies.

  17. Mechanically assisted 3D prostate ultrasound imaging and biopsy needle-guidance system

    NASA Astrophysics Data System (ADS)

    Bax, Jeffrey; Williams, Jackie; Cool, Derek; Gardi, Lori; Montreuil, Jacques; Karnik, Vaishali; Sherebrin, Shi; Romagnoli, Cesare; Fenster, Aaron

    2010-02-01

    Prostate biopsy procedures are currently limited to using 2D transrectal ultrasound (TRUS) imaging to guide the biopsy needle. Being limited to 2D causes ambiguity in needle guidance and provides an insufficient record to allow guidance back to the same suspicious locations or avoidance of regions that were negative in previous biopsy sessions. We have developed a mechanically assisted 3D ultrasound imaging and needle tracking system, which supports a commercially available TRUS probe and integrated needle guide for prostate biopsy. The mechanical device is fixed to a cart and the mechanical tracking linkage allows its joints to be manually manipulated while fully supporting the weight of the ultrasound probe. A computer interface is provided to track the needle trajectory and display its path on a corresponding 3D TRUS image, allowing the physician to aim the needle-guide at predefined targets within the prostate. The system has been designed for use with several end-fired transducers that can be rotated about the longitudinal axis of the probe in order to generate a 3D image for 3D navigation. Using the system, 3D TRUS prostate images can be generated in approximately 10 seconds. The system reduces most of the user variability of conventional hand-held probes, which makes them unsuitable for precision biopsy, while preserving some of the user familiarity and procedural workflow. In this paper, we describe the 3D TRUS guided biopsy system and report on the initial clinical use of this system for prostate biopsy.

  18. 3D scanning characteristics of an amorphous silicon position sensitive detector array system.

    PubMed

    Contreras, Javier; Gomes, Luis; Filonovich, Sergej; Correia, Nuno; Fortunato, Elvira; Martins, Rodrigo; Ferreira, Isabel

    2012-02-13

    The 3D scanning electro-optical characteristics of a data acquisition prototype system integrating a linear array of 32 one-dimensional (1D) amorphous silicon position sensitive detectors (PSDs) were analyzed. The system was mounted on a platform for imaging 3D objects using the triangulation principle with a sheet-of-light laser. Newly obtained results reveal a minimum possible gap or simulated defect detection of approximately 350 μm. Furthermore, a first study of the angle for 3D scanning was also performed, allowing for a broad range of angles to be used in the process. The relationship between the scanning angle of the incident light onto the object and the image displacement distance on the sensor was determined for the first time in this system setup. Rendering of 3D object profiles was performed at a significantly higher number of frames than in the past and was possible for an incident light angle range of 15° to 85°.
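
    The sheet-of-light triangulation principle mentioned above relates the height of the scanned object to the lateral displacement of the laser line on the sensor. A simplified sketch of that relation, assuming a camera viewing along the surface normal and a laser sheet incident at an angle theta from that normal (all numbers are illustrative, not values from the record):

        import numpy as np

        # Simplified sheet-of-light triangulation: camera axis normal to the reference
        # plane at stand-off z_mm, laser sheet incident at theta_deg from that normal,
        # f_px the focal length in pixels. All values below are illustrative assumptions.
        def height_from_displacement(pixel_shift, theta_deg, f_px, z_mm):
            lateral_mm = pixel_shift * z_mm / f_px               # back-project the line shift to the object plane
            return lateral_mm / np.tan(np.radians(theta_deg))    # convert the lateral shift to object height

        print(height_from_displacement(pixel_shift=12, theta_deg=45.0, f_px=2400.0, z_mm=300.0))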

  19. Augmented Reality Imaging System: 3D Viewing of a Breast Cancer

    PubMed Central

    Douglas, David B.; Boone, John M.; Petricoin, Emanuel; Liotta, Lance; Wilson, Eugene

    2016-01-01

    Objective To display images of breast cancer from a dedicated breast CT using Depth 3-Dimensional (D3D) augmented reality. Methods A case of breast cancer imaged using contrast-enhanced breast CT (Computed Tomography) was viewed with augmented reality imaging, which uses a head display unit (HDU) and joystick control interface. Results The augmented reality system demonstrated 3D viewing of the breast mass with head position tracking, stereoscopic depth perception, focal point convergence, and a 3D cursor and joystick-enabled fly-through with visualization of the spiculations extending from the breast cancer. Conclusion The augmented reality system provided 3D visualization of the breast cancer with depth perception and visualization of the mass's spiculations. The augmented reality system should be further researched to determine its utility in clinical practice. PMID:27774517

  20. 3D-SoftChip: A Novel Architecture for Next-Generation Adaptive Computing Systems

    NASA Astrophysics Data System (ADS)

    Kim, Chul; Rassau, Alex; Lachowicz, Stefan; Lee, Mike Myung-Ok; Eshraghian, Kamran

    2006-12-01

    This paper introduces a novel architecture for next-generation adaptive computing systems, which we term 3D-SoftChip. The 3D-SoftChip is a 3-dimensional (3D) vertically integrated adaptive computing system combining state-of-the-art processing and 3D interconnection technology. It comprises the vertical integration of two chips (a configurable array processor and an intelligent configurable switch) through an indium bump interconnection array (IBIA). The configurable array processor (CAP) is an array of heterogeneous processing elements (PEs), while the intelligent configurable switch (ICS) comprises a switch block, 32-bit dedicated RISC processor for control, on-chip program/data memory, data frame buffer, along with a direct memory access (DMA) controller. This paper introduces the novel 3D-SoftChip architecture for real-time communication and multimedia signal processing as a next-generation computing system. The paper further describes the advanced HW/SW codesign and verification methodology, including high-level system modeling of the 3D-SoftChip using SystemC, being used to determine the optimum hardware specification in the early design stage.

  1. 3D laptop for defense applications

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed numerous 3D display systems using a US Army patented approach. These displays have been developed as prototypes for handheld controllers for robotic systems and closed hatch driving, and as part of a TALON robot upgrade for 3D vision, providing depth perception for the operator for improved manipulation and hazard avoidance. In this paper we discuss the prototype rugged 3D laptop computer and its applications to defense missions. The prototype 3D laptop combines full temporal and spatial resolution display with the rugged Amrel laptop computer. The display is viewed through protective passive polarized eyewear, and allows combined 2D and 3D content. Uses include robot tele-operation with live 3D video or synthetically rendered scenery, mission planning and rehearsal, enhanced 3D data interpretation, and simulation.

  2. A production peripheral vision display system

    NASA Technical Reports Server (NTRS)

    Heinmiller, B.

    1984-01-01

    A small number of peripheral vision display systems in three significantly different configurations were evaluated in various aircraft and simulator situations. The use of these development systems enabled the gathering of much subjective and quantitative data regarding this concept of flight deck instrumentation. However, much was also learned about the limitations of this equipment, which need to be addressed prior to widespread use. A program at Garrett Manufacturing Limited in which the peripheral vision display system is redesigned and transformed into a viable production avionics system is discussed. Modular design, interchangeable units, optical attenuators, and system fault detection are considered with respect to peripheral vision display systems.

  3. Medical image retrieval system using multiple features from 3D ROIs

    NASA Astrophysics Data System (ADS)

    Lu, Hongbing; Wang, Weiwei; Liao, Qimei; Zhang, Guopeng; Zhou, Zhiming

    2012-02-01

    Compared to a retrieval using global image features, features extracted from regions of interest (ROIs) that reflect distribution patterns of abnormalities would benefit more for content-based medical image retrieval (CBMIR) systems. Currently, most CBMIR systems have been designed for 2D ROIs, which cannot reflect 3D anatomical features and region distribution of lesions comprehensively. To further improve the accuracy of image retrieval, we proposed a retrieval method with 3D features including both geometric features such as Shape Index (SI) and Curvedness (CV) and texture features derived from 3D Gray Level Co-occurrence Matrix, which were extracted from 3D ROIs, based on our previous 2D medical images retrieval system. The system was evaluated with 20 volume CT datasets for colon polyp detection. Preliminary experiments indicated that the integration of morphological features with texture features could improve retrieval performance greatly. The retrieval result using features extracted from 3D ROIs accorded better with the diagnosis from optical colonoscopy than that based on features from 2D ROIs. With the test database of images, the average accuracy rate for 3D retrieval method was 76.6%, indicating its potential value in clinical application.
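
    The geometric features named above, Shape Index (SI) and Curvedness (CV), have standard closed-form definitions in terms of the principal curvatures k1 and k2 of the local isosurface. A generic sketch follows (Koenderink-style SI scaled to [-1, 1]; the paper may use a different normalization, and this is not the authors' code):

        import numpy as np

        def shape_index_and_curvedness(k1, k2):
            # Shape Index (SI) and Curvedness (CV) from principal curvatures, ordered k1 >= k2.
            # SI encodes the local shape type (cup ... saddle ... cap); CV its curvature magnitude.
            k1, k2 = np.maximum(k1, k2), np.minimum(k1, k2)
            si = (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)
            cv = np.sqrt((k1 ** 2 + k2 ** 2) / 2.0)
            return si, cv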

  4. Small SWAP 3D imaging flash ladar for small tactical unmanned air systems

    NASA Astrophysics Data System (ADS)

    Bird, Alan; Anderson, Scott A.; Wojcik, Michael; Budge, Scott E.

    2015-05-01

    The Space Dynamics Laboratory (SDL), working with Naval Research Laboratory (NRL) and industry leaders Advanced Scientific Concepts (ASC) and Hood Technology Corporation, has developed a small SWAP (size, weight, and power) 3D imaging flash ladar (LAser Detection And Ranging) sensor system concept design for small tactical unmanned air systems (STUAS). The design utilizes an ASC 3D flash ladar camera and laser in a Hood Technology gyro-stabilized gimbal system. The design is an autonomous, intelligent, geo-aware sensor system that supplies real-time 3D terrain and target images. Flash ladar and visible camera data are processed at the sensor using a custom digitizer/frame grabber with compression. Mounted in the aft housing are power, controls, processing computers, and GPS/INS. The onboard processor controls pointing and handles image data, detection algorithms and queuing. The small SWAP 3D imaging flash ladar sensor system generates georeferenced terrain and target images with a low probability of false return and <10 cm range accuracy through foliage in real-time. The 3D imaging flash ladar is designed for a STUAS with a complete system SWAP estimate of <9 kg, <0.2 m3 and <350 W power. The system is modeled using LadarSIM, a MATLAB® and Simulink®- based ladar system simulator designed and developed by the Center for Advanced Imaging Ladar (CAIL) at Utah State University. We will present the concept design and modeled performance predictions.

  5. Low-Cost 3D Systems: Suitable Tools for Plant Phenotyping

    PubMed Central

    Paulus, Stefan; Behmann, Jan; Mahlein, Anne-Katrin; Plümer, Lutz; Kuhlmann, Heiner

    2014-01-01

    Over the last few years, 3D imaging of plant geometry has become of significant importance for phenotyping and plant breeding. Several sensing techniques, like 3D reconstruction from multiple images and laser scanning, are the methods of choice in different research projects. The use of RGB cameras for 3D reconstruction requires a significant amount of post-processing, whereas in this context, laser scanning entails huge investment costs. The aim of the present study is a comparison between two current 3D imaging low-cost systems and a high precision close-up laser scanner as a reference method. As low-cost systems, the David laser scanning system and the Microsoft Kinect Device were used. The 3D measuring accuracy of both low-cost sensors was estimated based on the deviations of test specimens. Parameters extracted from the volumetric shape of sugar beet taproots, the leaves of sugar beets and the shape of wheat ears were evaluated. These parameters are compared regarding accuracy and correlation to reference measurements. The evaluation scenarios were chosen with respect to recorded plant parameters in current phenotyping projects. In the present study, low-cost 3D imaging devices have been shown to be highly reliable for the demands of plant phenotyping, with the potential to be implemented in automated application procedures, while saving acquisition costs. Our study confirms that a carefully selected low-cost sensor is able to replace an expensive laser scanner in many plant phenotyping scenarios. PMID:24534920

  6. Low-cost 3D systems: suitable tools for plant phenotyping.

    PubMed

    Paulus, Stefan; Behmann, Jan; Mahlein, Anne-Katrin; Plümer, Lutz; Kuhlmann, Heiner

    2014-01-01

    Over the last few years, 3D imaging of plant geometry has become of significant importance for phenotyping and plant breeding. Several sensing techniques, like 3D reconstruction from multiple images and laser scanning, are the methods of choice in different research projects. The use of RGB cameras for 3D reconstruction requires a significant amount of post-processing, whereas in this context, laser scanning entails huge investment costs. The aim of the present study is a comparison between two current 3D imaging low-cost systems and a high precision close-up laser scanner as a reference method. As low-cost systems, the David laser scanning system and the Microsoft Kinect Device were used. The 3D measuring accuracy of both low-cost sensors was estimated based on the deviations of test specimens. Parameters extracted from the volumetric shape of sugar beet taproots, the leaves of sugar beets and the shape of wheat ears were evaluated. These parameters are compared regarding accuracy and correlation to reference measurements. The evaluation scenarios were chosen with respect to recorded plant parameters in current phenotyping projects. In the present study, low-cost 3D imaging devices have been shown to be highly reliable for the demands of plant phenotyping, with the potential to be implemented in automated application procedures, while saving acquisition costs. Our study confirms that a carefully selected low-cost sensor is able to replace an expensive laser scanner in many plant phenotyping scenarios. PMID:24534920

  7. HDTV single camera 3D system and its application in microsurgery

    NASA Astrophysics Data System (ADS)

    Mochizuki, Ryo; Kobayashi, Shigeaki

    1994-04-01

    A 3D high-definition television (HDTV) system suitable for attachment to a stereoscopic operating microscope allowing 3D medical documentation using a single HDTV camera and monitor is described. The system provides 3D HDTV microneurosurgical recorded images suitable for viewing on a screen or monitor, or for printing. Visual documentation using a television and video system is very important in modern medical practice, especially for the education of medical students, the training of residents, and the display of records in medical conferences. For the documentation of microsurgery and endoscopic surgery, the video system is essential. The printed images taken from the recording by the HDTV system of the illustrative case clearly demonstrate the high quality and definition achieved, which are comparable to those of 35 mm movie film. As the system only requires a single camera and recorder, the cost performance and size make it very suitable for microsurgical and endoscopic documentation.

  8. 2D and 3D Mechanobiology in Human and Nonhuman Systems.

    PubMed

    Warren, Kristin M; Islam, Md Mydul; LeDuc, Philip R; Steward, Robert

    2016-08-31

    Mechanobiology involves the investigation of mechanical forces and their effect on the development, physiology, and pathology of biological systems. The human body has garnered much attention from many groups in the field, as mechanical forces have been shown to influence almost all aspects of human life ranging from breathing to cancer metastasis. Beyond being influential in human systems, mechanical forces have also been shown to impact nonhuman systems such as algae and zebrafish. Studies of nonhuman and human systems at the cellular level have primarily been done in two-dimensional (2D) environments, but most of these systems reside in three-dimensional (3D) environments. Furthermore, outcomes obtained from 3D studies are often quite different than those from 2D studies. We present here an overview of a select group of human and nonhuman systems in 2D and 3D environments. We also highlight mechanobiological approaches and their respective implications for human and nonhuman physiology. PMID:27214883

  9. 2D and 3D Mechanobiology in Human and Nonhuman Systems.

    PubMed

    Warren, Kristin M; Islam, Md Mydul; LeDuc, Philip R; Steward, Robert

    2016-08-31

    Mechanobiology involves the investigation of mechanical forces and their effect on the development, physiology, and pathology of biological systems. The human body has garnered much attention from many groups in the field, as mechanical forces have been shown to influence almost all aspects of human life ranging from breathing to cancer metastasis. Beyond being influential in human systems, mechanical forces have also been shown to impact nonhuman systems such as algae and zebrafish. Studies of nonhuman and human systems at the cellular level have primarily been done in two-dimensional (2D) environments, but most of these systems reside in three-dimensional (3D) environments. Furthermore, outcomes obtained from 3D studies are often quite different than those from 2D studies. We present here an overview of a select group of human and nonhuman systems in 2D and 3D environments. We also highlight mechanobiological approaches and their respective implications for human and nonhuman physiology.

  10. System for conveyor belt part picking using structured light and 3D pose estimation

    NASA Astrophysics Data System (ADS)

    Thielemann, J.; Skotheim, Ø.; Nygaard, J. O.; Vollset, T.

    2009-01-01

    Automatic picking of parts is an important challenge to solve within factory automation, because it can remove tedious manual work and save labor costs. One such application involves parts that arrive with random position and orientation on a conveyor belt. The parts should be picked off the conveyor belt and placed systematically into bins. We describe a system that consists of a structured light instrument for capturing 3D data and robust methods for aligning an input 3D template with a 3D image of the scene. The method uses general and robust pre-processing steps based on geometric primitives that allow the well-known Iterative Closest Point algorithm to converge quickly and robustly to the correct solution. The method has been demonstrated for localization of car parts with random position and orientation. We believe that the method is applicable for a wide range of industrial automation problems where precise localization of 3D objects in a scene is needed.
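
    The alignment step described above, nearest-neighbour matching refined by the Iterative Closest Point algorithm after geometric pre-processing, can be sketched generically with an SVD-based rigid fit. A minimal illustration, assuming roughly pre-aligned Nx3 point clouds in NumPy/SciPy (not the authors' implementation):

        import numpy as np
        from scipy.spatial import cKDTree

        def best_fit_transform(src, dst):
            # Closed-form (Kabsch/SVD) rigid transform mapping paired src points onto dst points.
            cs, cd = src.mean(axis=0), dst.mean(axis=0)
            H = (src - cs).T @ (dst - cd)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:          # guard against reflections
                Vt[-1, :] *= -1
                R = Vt.T @ U.T
            return R, cd - R @ cs

        def icp(template, scene, iters=30):
            # Iterative Closest Point: align 'template' (Nx3) to 'scene' (Mx3), assuming coarse pre-alignment.
            tree = cKDTree(scene)
            src = template.copy()
            R_total, t_total = np.eye(3), np.zeros(3)
            for _ in range(iters):
                _, idx = tree.query(src)                      # nearest scene point for each template point
                R, t = best_fit_transform(src, scene[idx])    # rigid fit to current correspondences
                src = src @ R.T + t
                R_total, t_total = R @ R_total, R @ t_total + t
            return R_total, t_total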

  11. Space environment robot vision system

    NASA Technical Reports Server (NTRS)

    Wood, H. John; Eichhorn, William L.

    1990-01-01

    A prototype twin-camera stereo vision system for autonomous robots has been developed at Goddard Space Flight Center. Standard charge coupled device (CCD) imagers are interfaced with commercial frame buffers and direct memory access to a computer. The overlapping portions of the images are analyzed using photogrammetric techniques to obtain information about the position and orientation of objects in the scene. The camera head consists of two 510 x 492 x 8-bit CCD cameras mounted on individually adjustable mounts. The 16 mm EFL (effective focal length) lenses are designed for minimum geometric distortion. The cameras can be rotated in the pitch, roll, and yaw (pan angle) directions with respect to their optical axes. Calibration routines have been developed which automatically determine the lens focal lengths and pan angle between the two cameras. The calibration utilizes observations of a calibration structure with known geometry. Test results show the precision attainable is plus or minus 0.8 mm in range at 2 m distance using a camera separation of 171 mm. To demonstrate a task needed on Space Station Freedom, a target structure with a movable I-beam was built. The camera head can autonomously direct actuators to dock the I-beam to another one so that the two beams can be bolted together.
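
    The quoted range precision can be related to the usual stereo triangulation model, Z = fB/d, whose error propagation gives dZ ≈ Z²·Δd/(fB). The short sketch below backs out the disparity accuracy implied by the numbers in the record; the pixel pitch used for interpretation is an assumption, not stated in the record:

        # Stereo range relations for the camera head above: Z = f*B/d, dZ ~ Z^2 * dd / (f*B).
        f_mm, B_mm, Z_mm, dZ_mm = 16.0, 171.0, 2000.0, 0.8      # values quoted in the record
        dd_mm = dZ_mm * f_mm * B_mm / Z_mm ** 2                 # implied disparity accuracy on the sensor
        print(f"implied disparity accuracy: {dd_mm * 1000:.2f} micrometres")
        # With an assumed pixel pitch of roughly 12.5 micrometres, this corresponds to a few
        # hundredths of a pixel, i.e. sub-pixel localization of the calibration/target features.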

  12. Developing a 3D Road Cadastral System: Comparing Legal Requirements and User Needs

    NASA Astrophysics Data System (ADS)

    Gristina, S.; Ellul, C.; Scianna, A.

    2016-10-01

    Road transport has always played an important role in a country's growth and, in order to manage road networks and ensure a high standard of road performance (e.g. durability, efficiency and safety), both public and private road inventories have been implemented using databases and Geographical Information Systems. They enable registering and managing significant amounts of different road information, but to date do not focus on 3D road information, data integration and interoperability. In an increasingly complex 3D urban environment, and in the age of smart cities, however, applications including intelligent transport systems, mobility and traffic management, road maintenance and safety require digital data infrastructures to manage road data: thus new inventories based on integrated 3D road models (queryable, updateable and shareable on line) are required. This paper outlines the first step towards the implementation of 3D GIS-based road inventories. Focusing on the case study of the "Road Cadastre" (the Italian road inventory as established by law), it investigates current limitations and required improvements, and also compares the required data structure imposed by cadastral legislation with real road users' needs. The study aims to: a) determine whether 3D GIS would improve road cadastre (for better management of data through the complete life-cycle of infrastructure projects); b) define a conceptual model for a 3D road cadastre for Italy (whose general principles may also be extended to other countries).

  13. Novel fully integrated computer system for custom footwear: from 3D digitization to manufacturing

    NASA Astrophysics Data System (ADS)

    Houle, Pascal-Simon; Beaulieu, Eric; Liu, Zhaoheng

    1998-03-01

    This paper presents a recently developed custom footwear system, which integrates 3D digitization technology, range image fusion techniques, a 3D graphical environment for corrective actions, parametric curved surface representation and computer numerical control (CNC) machining. In this system, a support designed with the help of biomechanics experts can stabilize the foot in a correct and neutral position. The foot surface is then captured by a 3D camera using active ranging techniques. Software using a library of documented foot pathologies suggests corrective actions on the orthosis. Three kinds of deformations can be achieved. The first method uses pad surfaces previously scanned by our 3D scanner, which can be easily mapped onto the foot surface to locally modify the surface shape. The second kind of deformation is construction of B-Spline surfaces by manipulating control points and modifying knot vectors in a 3D graphical environment to build the desired deformation. The last one is a manual electronic 3D pen, which may be of different shapes and sizes, and has adjustable 'pressure' information. All applied deformations should respect G1 surface continuity, which ensures that the surface can accommodate a foot. Once the surface modification process is completed, the resulting data is sent to manufacturing software for CNC machining.

  14. Advanced resin systems and 3D textile preforms for low cost composite structures

    NASA Technical Reports Server (NTRS)

    Shukla, J. G.; Bayha, T. D.

    1993-01-01

    Advanced resin systems and 3D textile preforms are being evaluated at Lockheed Aeronautical Systems Company (LASC) under NASA's Advanced Composites Technology (ACT) Program. This work is aimed towards the development of low-cost, damage-tolerant composite fuselage structures. Resin systems for resin transfer molding and powder epoxy towpreg materials are being evaluated for processability, performance and cost. Three developmental epoxy resin systems for resin transfer molding (RTM) and three resin systems for powder towpregging are being investigated. Various 3D textile preform architectures using advanced weaving and braiding processes are also being evaluated. Trials are being conducted with powdered towpreg, in 2D weaving and 3D braiding processes for their textile processability and their potential for fabrication in 'net shape' fuselage structures. The progress in advanced resin screening and textile preform development is reviewed here.

  15. Note: An improved 3D imaging system for electron-electron coincidence measurements

    SciTech Connect

    Lin, Yun Fei; Lee, Suk Kyoung; Adhikari, Pradip; Herath, Thushani; Lingenfelter, Steven; Winney, Alexander H.; Li, Wen

    2015-09-15

    We demonstrate an improved imaging system that can achieve highly efficient 3D detection of two electrons in coincidence. The imaging system is based on a fast frame complementary metal-oxide semiconductor camera and a high-speed waveform digitizer. We have shown previously that this detection system is capable of 3D detection of ions and electrons with good temporal and spatial resolution. Here, we show that with a new timing analysis algorithm, this system can achieve an unprecedented dead-time (<0.7 ns) and dead-space (<1 mm) when detecting two electrons. A true zero dead-time detection is also demonstrated.

  16. Note: An improved 3D imaging system for electron-electron coincidence measurements

    NASA Astrophysics Data System (ADS)

    Lin, Yun Fei; Lee, Suk Kyoung; Adhikari, Pradip; Herath, Thushani; Lingenfelter, Steven; Winney, Alexander H.; Li, Wen

    2015-09-01

    We demonstrate an improved imaging system that can achieve highly efficient 3D detection of two electrons in coincidence. The imaging system is based on a fast frame complementary metal-oxide semiconductor camera and a high-speed waveform digitizer. We have shown previously that this detection system is capable of 3D detection of ions and electrons with good temporal and spatial resolution. Here, we show that with a new timing analysis algorithm, this system can achieve an unprecedented dead-time (<0.7 ns) and dead-space (<1 mm) when detecting two electrons. A true zero dead-time detection is also demonstrated.

  17. Moving from Batch to Field Using the RT3D Reactive Transport Modeling System

    NASA Astrophysics Data System (ADS)

    Clement, T. P.; Gautam, T. R.

    2002-12-01

    The public domain reactive transport code RT3D (Clement, 1997) is a general-purpose numerical code for solving coupled, multi-species reactive transport in saturated groundwater systems. The code uses MODFLOW to simulate flow and several modules of MT3DMS to simulate the advection and dispersion processes. RT3D employs the operator-split strategy, which allows the code to solve the coupled reactive transport problem in a modular fashion. The coupling between reaction and transport is defined through a separate module where the reaction equations are specified. The code supports a versatile user-defined reaction option that allows users to define their own reaction system through a Fortran-90 subroutine, known as the RT3D-reaction package. Further, a utility code, known as BATCHRXN, allows the users to independently test and debug their reaction package. To analyze a new reaction system at a batch scale, users should first run BATCHRXN to test the ability of their reaction package to model the batch data. After testing, the reaction package can simply be ported to the RT3D environment to study the model response under 1-, 2-, or 3-dimensional transport conditions. This paper presents example problems that demonstrate the methods for moving from batch to field-scale simulations using the BATCHRXN and RT3D codes. The first example describes a simple first-order reaction system for simulating the sequential degradation of Tetrachloroethene (PCE) and its daughter products. The second example uses a relatively complex reaction system for describing the multiple degradation pathways of Tetrachloroethane (PCA) and its daughter products. References 1) Clement, T.P, RT3D - A modular computer code for simulating reactive multi-species transport in 3-Dimensional groundwater aquifers, Battelle Pacific Northwest National Laboratory Research Report, PNNL-SA-28967, September, 1997. Available at: http://bioprocess.pnl.gov/rt3d.htm.
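
    At batch scale, the sequential first-order PCE degradation chain mentioned in the first example reduces to a small ODE system, dC_i/dt = k_{i-1}·C_{i-1} − k_i·C_i, which is the kind of kinetics a user-defined reaction package encodes. A generic Python sketch follows (the chain PCE → TCE → DCE → VC → ethene, the rate constants, and the time span are illustrative placeholders, not RT3D values or the Fortran-90 package itself):

        import numpy as np
        from scipy.integrate import solve_ivp

        # Batch-scale sketch of sequential first-order decay, PCE -> TCE -> DCE -> VC -> ethene.
        k = np.array([0.05, 0.03, 0.02, 0.01])          # decay rates (1/day) of PCE, TCE, DCE, VC (illustrative)

        def rhs(t, c):
            dc = np.zeros_like(c)
            dc[0] = -k[0] * c[0]                        # PCE only decays
            for i in range(1, 4):
                dc[i] = k[i - 1] * c[i - 1] - k[i] * c[i]   # produced by its parent, decays itself
            dc[4] = k[3] * c[3]                         # ethene accumulates
            return dc

        sol = solve_ivp(rhs, (0.0, 200.0), y0=[1.0, 0.0, 0.0, 0.0, 0.0], max_step=1.0)
        print(sol.y[:, -1])                             # species concentrations after 200 days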

  18. A Molecular Image-directed, 3D Ultrasound-guided Biopsy System for the Prostate

    PubMed Central

    Fei, Baowei; Schuster, David M.; Master, Viraj; Akbari, Hamed; Fenster, Aaron; Nieh, Peter

    2012-01-01

    Systematic transrectal ultrasound (TRUS)-guided biopsy is the standard method for a definitive diagnosis of prostate cancer. However, this biopsy approach uses two-dimensional (2D) ultrasound images to guide biopsy and can miss up to 30% of prostate cancers. We are developing a molecular image-directed, three-dimensional (3D) ultrasound image-guided biopsy system for improved detection of prostate cancer. The system consists of a 3D mechanical localization system and a software workstation for image segmentation, registration, and biopsy planning. In order to plan biopsy in a 3D prostate, we developed an automatic segmentation method based on the wavelet transform. In order to incorporate PET/CT images into ultrasound-guided biopsy, we developed image registration methods to fuse TRUS and PET/CT images. The segmentation method was tested in ten patients with a DICE overlap ratio of 92.4% ± 1.1%. The registration method has been tested in phantoms. The biopsy system was tested in prostate phantoms and 3D ultrasound images were acquired from two human patients. We are integrating the system for PET/CT directed, 3D ultrasound-guided, targeted biopsy in human patients. PMID:22708023
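
    The DICE overlap ratio quoted for the segmentation has the standard definition 2|A∩B| / (|A| + |B|) for two binary masks A and B; a minimal generic sketch (not the authors' code):

        import numpy as np

        def dice(mask_a, mask_b):
            # DICE overlap ratio between two binary segmentation masks: 2|A & B| / (|A| + |B|).
            a, b = mask_a.astype(bool), mask_b.astype(bool)
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())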

  19. The advantages of stereo vision in a face recognition system

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2014-06-01

    Humans can recognize a face with binocular vision, while computers typically use a single face image. It is known that the performance of face recognition (by a computer) can be improved using the score fusion of multimodal images and multiple algorithms. A question is: Can we apply stereo vision to a face recognition system? We know that human binocular vision has many advantages such as stereopsis (3D vision), binocular summation, and singleness of vision including fusion of binocular images (cyclopean image). For face recognition, a 3D face or 3D facial features are typically computed from a pair of stereo images. In human visual processes, the binocular summation and singleness of vision are similar as image fusion processes. In this paper, we propose an advanced face recognition system with stereo imaging capability, which is comprised of two 2-in-1 multispectral (visible and thermal) cameras and three recognition algorithms (circular Gaussian filter, face pattern byte, and linear discriminant analysis [LDA]). Specifically, we present and compare stereo fusion at three levels (images, features, and scores) by using stereo images (from left camera and right camera). Image fusion is achieved with three methods (Laplacian pyramid, wavelet transform, average); feature fusion is done with three logical operations (AND, OR, XOR); and score fusion is implemented with four classifiers (LDA, k-nearest neighbor, support vector machine, binomial logical regression). The system performance is measured by probability of correct classification (PCC) rate (reported as accuracy rate in this paper) and false accept rate (FAR). The proposed approaches were validated with a multispectral stereo face dataset from 105 subjects. Experimental results show that any type of stereo fusion can improve the PCC, meanwhile reduce the FAR. It seems that stereo image/feature fusion is superior to stereo score fusion in terms of recognition performance. Further score fusion after image

  20. A web-based 3D medical image collaborative processing system with videoconference

    NASA Astrophysics Data System (ADS)

    Luo, Sanbi; Han, Jun; Huang, Yonggang

    2013-07-01

    Three-dimensional medical images have been playing an irreplaceable role in the realms of medical treatment, teaching, and research. However, collaborative processing and visualization of 3D medical images on the Internet is still one of the biggest challenges to supporting these activities. Consequently, we present a new application approach for web-based synchronized collaborative processing and visualization of 3D medical images. Meanwhile, a web-based videoconference function is provided to enhance the performance of the whole system. All functions of the system are conveniently available in common Web browsers, without any client-side installation. Finally, this paper evaluates the prototype system using 3D medical data sets, which demonstrates the good performance of our system.

  1. A Laser Line Auto-Scanning System for Underwater 3D Reconstruction

    PubMed Central

    Chi, Shukai; Xie, Zexiao; Chen, Wenzhu

    2016-01-01

    In this study, a laser line auto-scanning system was designed to perform underwater close-range 3D reconstructions with high accuracy and resolution. The system changes the laser plane direction with a galvanometer to perform automatic scanning and obtain continuous laser strips for underwater 3D reconstruction. The system parameters were calibrated with the homography constraints between the target plane and image plane. A cost function was defined to optimize the galvanometer’s rotating axis equation. Compensation was carried out for the refraction of the incident and emitted light at the interface. The accuracy and the spatial measurement capability of the system were tested and analyzed with standard balls under laboratory underwater conditions, and the 3D surface reconstruction for a sealing cover of an underwater instrument was proved to be satisfactory. PMID:27657074
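
    The refraction compensation mentioned above is essentially an application of Snell's law at the housing/water interface: each camera ray and laser ray must be bent before triangulation. A simplified flat-interface sketch (the refractive indices and the flat-port geometry are my assumptions, not details taken from the record):

        import numpy as np

        # Flat-port refraction correction: bend a ray direction at the interface using Snell's law.
        # Refractive indices are generic assumptions (air ~1.0, water ~1.33).
        def refract(direction, normal, n1=1.0, n2=1.33):
            # 'normal' is the unit interface normal pointing from medium 2 back toward medium 1.
            d = direction / np.linalg.norm(direction)
            n = normal / np.linalg.norm(normal)
            cos_i = -np.dot(d, n)
            r = n1 / n2
            sin_t2 = r ** 2 * (1.0 - cos_i ** 2)
            if sin_t2 > 1.0:
                raise ValueError("total internal reflection")
            return r * d + (r * cos_i - np.sqrt(1.0 - sin_t2)) * n

        print(refract(np.array([0.3, 0.0, -1.0]), np.array([0.0, 0.0, 1.0])))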

  2. Development of a 3D laser scanning system for the cavity

    NASA Astrophysics Data System (ADS)

    Chen, Kai; Zhang, Da; Zhang, Yuan Sheng

    2013-06-01

    Serious geological hazards in mine cavities, such as roof falls, rib spalling, and closure deformation, can adversely affect mining operations and even threaten human life. Traditional monitoring methods have several disadvantages: difficulty in obtaining data from the cavity, in monitoring unmanned cavities, and in calculating cavity volume accurately. To solve these problems, this paper describes the development of a high-precision 3D laser scanning system that enables rapid scanning of the cavity, acquisition of a uniform-resolution point cloud, calculation of cavity volume, correct marking of deformation areas, and a visualized environment. The device also provides remote-control functionality so that personnel do not have to work underground. The measurement accuracy of the 3D laser scanning system is +/-2 cm. The system can be combined with a mine microseismic monitoring system to help estimate the cavity's stability and improve the effectiveness of cavity monitoring.

  3. Morphological and Volumetric Assessment of Cerebral Ventricular System with 3D Slicer Software.

    PubMed

    Gonzalo Domínguez, Miguel; Hernández, Cristina; Ruisoto, Pablo; Juanes, Juan A; Prats, Alberto; Hernández, Tomás

    2016-06-01

    We present a technological process based on the 3D Slicer software for the three-dimensional study of the brain's ventricular system for teaching purposes. It allows the morphology of this complex brain structure to be assessed as a whole and in any spatial position, and compared with pathological studies in which its anatomy visibly changes. 3D Slicer was also used to obtain volumetric measurements in order to provide a more comprehensive and detailed representation of the ventricular system. We assess the potential of this software for processing high-resolution magnetic resonance images and generating the three-dimensional reconstruction of the ventricular system. PMID:27147517

  4. A GUI visualization system for airborne lidar image data to reconstruct 3D city model

    NASA Astrophysics Data System (ADS)

    Kawata, Yoshiyuki; Koizumi, Kohei

    2015-10-01

    A visualization toolbox system with graphical user interfaces (GUIs) was developed for the analysis of LiDAR point cloud data, as a compound object-oriented widget application in IDL (Interactive Data Language). The main features of our system include file input and output, conversion of ASCII-formatted LiDAR point cloud data to LiDAR image data whose pixel values correspond to the altitude measured by LiDAR, visualization of 2D/3D images at various processing steps, and automatic reconstruction of 3D city models. The performance and advantages of our GUI visualization system for LiDAR data are demonstrated.
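
    The conversion step described above, from ASCII x/y/z records to an image whose pixel value is the LiDAR altitude, amounts to gridding the points into cells. A generic Python sketch (the cell size and the choice of keeping the highest return per cell are assumptions, and this is not the IDL implementation):

        import numpy as np

        def points_to_altitude_image(points, cell=1.0):
            # Grid x, y, z LiDAR points (Nx3) into a 2D raster whose pixel value is the
            # highest altitude falling in each cell (cell size in the same units as x/y).
            x, y, z = points[:, 0], points[:, 1], points[:, 2]
            col = ((x - x.min()) / cell).astype(int)
            row = ((y - y.min()) / cell).astype(int)
            img = np.full((row.max() + 1, col.max() + 1), np.nan)
            for r, c, h in zip(row, col, z):
                if np.isnan(img[r, c]) or h > img[r, c]:
                    img[r, c] = h
            return img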

  5. Embodied collaboration support system for 3D shape evaluation in virtual space

    NASA Astrophysics Data System (ADS)

    Okubo, Masashi; Watanabe, Tomio

    2005-12-01

    Collaboration mainly consists of two tasks: one is each partner's individual task, and the other is communication with each other. Both are important objectives for any collaboration support system. In this paper, a collaboration support system for 3D shape evaluation in virtual space is proposed on the basis of studies in both 3D shape evaluation and communication support in virtual space. The proposed system provides two viewpoints, one for each task. One is the viewpoint from behind the user's own avatar, for smooth communication. The other is the avatar's-eye viewpoint, for 3D shape evaluation. Switching between the viewpoints satisfies the task conditions for 3D shape evaluation and communication. The system basically consists of a PC, an HMD and magnetic sensors, and users can share the embodied interaction by observing the interaction between their avatars in virtual space. However, the HMD and magnetic sensors worn by the users restrict nonverbal communication. We have therefore tried to compensate for the loss of the partner's avatar's nodding by introducing the speech-driven embodied interactive actor InterActor. Sensory evaluation by paired comparison of 3D shapes in collaborative situations in virtual and real space, together with a questionnaire, was performed. The results demonstrate the effectiveness of InterActor's nodding in the collaborative situation.

  6. Ultra-Wideband Time-Difference-of-Arrival High Resolution 3D Proximity Tracking System

    NASA Technical Reports Server (NTRS)

    Ni, Jianjun; Arndt, Dickey; Ngo, Phong; Phan, Chau; Dekome, Kent; Dusl, John

    2010-01-01

    This paper describes a research and development effort for a prototype ultra-wideband (UWB) tracking system that is currently under development at NASA Johnson Space Center (JSC). The system is being studied for use in tracking of lunar/Mars rovers and astronauts during early exploration missions when satellite navigation systems are not available. UWB impulse radio (UWB-IR) technology is exploited in the design and implementation of the prototype location and tracking system. A three-dimensional (3D) proximity tracking prototype design using commercially available UWB products is proposed to implement the Time-Difference-Of-Arrival (TDOA) tracking methodology in this research effort. The TDOA tracking algorithm is utilized for location estimation in the prototype system, not only to exploit the precise time resolution possible with UWB signals, but also to eliminate the need for synchronization between the transmitter and the receiver. Simulations show that the TDOA algorithm can achieve fine tracking resolution with low noise TDOA estimates for close-in tracking. Field tests demonstrated that this prototype UWB TDOA High Resolution 3D Proximity Tracking System is feasible for providing positioning-awareness information in a 3D space to a robotic control system. This 3D tracking system is developed for a robotic control system in a facility called "Moonyard" at Honeywell Defense & System in Arizona under a Space Act Agreement.
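
    A TDOA position fix of the kind described above is commonly obtained by least-squares over hyperbolic range-difference residuals; a generic sketch follows (the receiver layout, units, and the use of SciPy's solver are my assumptions, not details of the JSC prototype):

        import numpy as np
        from scipy.optimize import least_squares

        C = 0.2998  # propagation speed in metres per nanosecond (free-space assumption)

        def tdoa_solve(receivers, tdoas_ns, x0):
            # Least-squares 3D position estimate from time-differences-of-arrival.
            # 'receivers' is an Nx3 array; 'tdoas_ns' holds arrival-time differences of
            # receivers 1..N-1 relative to receiver 0. Generic sketch, not the JSC code.
            p0 = receivers[0]

            def residuals(x):
                d0 = np.linalg.norm(x - p0)
                return [np.linalg.norm(x - p) - d0 - C * t
                        for p, t in zip(receivers[1:], tdoas_ns)]

            return least_squares(residuals, x0).x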

  7. The PLUNC 3D treatment planning system: a dynamic alternative to commercially available systems.

    PubMed

    Tewell, Marshall A; Adams, Robert

    2004-01-01

    Three-dimensional (3D) treatment planning is an integral step in the treatment of various cancers when radiation is prescribed as either the primary or adjunctive modality, especially when the gross tumor volume lies in a difficult to reach area or is proximal to critical bodily structures. Today, 3D systems have made it possible to more precisely localize tumors in order to treat a higher ratio of cancer cells to normal tissue. Over the past 15 years, these systems have evolved into complex tools that utilize powerful computational algorithms that offer diverse functional capabilities, while simultaneously attempting to maintain a user-friendly quality. A major disadvantage of commercial systems is that users do not have access to the programming source code, resulting in significantly limited clinical and technological flexibility. As an alternative, in-house systems such as Plan-UNC (PLUNC) offer optimal flexibility that is vital to research institutions and important to treatment facilities. Despite this weakness, commercially available systems have become the norm because their commissioning time is significantly less and because many facilities do not have computer experts on-site.

  8. Remote 3D Medical Consultation

    NASA Astrophysics Data System (ADS)

    Welch, Greg; Sonnenwald, Diane H.; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Krishnan, Srinivas; Söderholm, Hanna M.

    Two-dimensional (2D) video-based telemedical consultation has been explored widely in the past 15-20 years. Two issues that seem to arise in most relevant case studies are the difficulty associated with obtaining the desired 2D camera views, and poor depth perception. To address these problems we are exploring the use of a small array of cameras to synthesize a spatially continuous range of dynamic three-dimensional (3D) views of a remote environment and events. The 3D views can be sent across wired or wireless networks to remote viewers with fixed displays or mobile devices such as a personal digital assistant (PDA). The viewpoints could be specified manually or automatically via user head or PDA tracking, giving the remote viewer virtual head- or hand-slaved (PDA-based) remote cameras for mono or stereo viewing. We call this idea remote 3D medical consultation (3DMC). In this article we motivate and explain the vision for 3D medical consultation; we describe the relevant computer vision/graphics, display, and networking research; we present a proof-of-concept prototype system; and we present some early experimental results supporting the general hypothesis that 3D remote medical consultation could offer benefits over conventional 2D televideo.

  9. Error control in the set-up of stereo camera systems for 3d animal tracking

    NASA Astrophysics Data System (ADS)

    Cavagna, A.; Creato, C.; Del Castello, L.; Giardina, I.; Melillo, S.; Parisi, L.; Viale, M.

    2015-12-01

    Three-dimensional tracking of animal systems is the key to the comprehension of collective behavior. Experimental data collected via a stereo camera system allow the reconstruction of the 3d trajectories of each individual in the group. Trajectories can then be used to compute some quantities of interest to better understand collective motion, such as velocities, distances between individuals and correlation functions. The reliability of the retrieved trajectories is strictly related to the accuracy of the 3d reconstruction. In this paper, we perform a careful analysis of the most significant errors affecting 3d reconstruction, showing how the accuracy depends on the camera system set-up and on the precision of the calibration parameters.

  10. HMI aspects of the usage of ladar 3D data in pilot DVE support systems

    NASA Astrophysics Data System (ADS)

    Münsterer, Thomas; Völschow, Philipp; Singer, Bernhard; Strobel, Michael; Kramper, Patrick; Bühler, Daniel

    2015-06-01

    The paper discusses specifics of high-resolution 3D sensor systems employed in helicopter DVE support systems and the consequences for the resulting HMI. 3D sensors have a number of characteristics that make them a cornerstone of helicopter pilot support or pilotage systems intended for use in DVE. Retrieving depth information gives specific advantages over 2D imagers. On the other hand, certain characteristics inherent in the technology and physics require a more elaborate visualization procedure than 2D image visualization. The goal of all displayed information has to be to reduce pilot workload in DVE operations. Displaying the processed information on an HMD as 3D conformal data therefore requires especially thorough HMI consideration.

  11. 3D Optical Measuring Systems and Laser Technologies for Scientific and Industrial Applications

    NASA Astrophysics Data System (ADS)

    Chugui, Yu.; Verkhoglyad, A.; Poleshchuk, A.; Korolkov, V.; Sysoev, E.; Zavyalov, P.

    2013-12-01

    Modern industry and science require novel 3D optical measuring systems and laser technologies with micro/nanometer resolution for solving practical problems. Such systems are presented, including 3D dimensional inspection of ceramic parts for the electrotechnical industry, laser diagnostic inspection of wheel pairs on running trains, and 3D super-resolution low-coherence micro-/nanoprofilometers. The newest results in the field of laser technologies for high-precision synthesis of microstructures by an updated image generator using a semiconductor laser are also given. The measuring systems and the laser image generator developed and produced by TDI SIE and IAE SB RAS have been tested by customers and used in different branches of industry and science.

  12. An optical system for detecting 3D high-speed oscillation of a single ultrasound microbubble

    PubMed Central

    Liu, Yuan; Yuan, Baohong

    2013-01-01

    As contrast agents, microbubbles have been playing significant roles in ultrasound imaging. Investigation of microbubble oscillation is crucial for microbubble characterization and detection. Unfortunately, 3-dimensional (3D) observation of microbubble oscillation is challenging and costly because of the bubble size (a few microns in diameter) and the high-speed dynamics under MHz ultrasound pressure waves. In this study, a cost-efficient optical confocal microscopic system combined with a gated and intensified charge-coupled device (ICCD) camera was developed to detect 3D microbubble oscillation. The capability of imaging microbubble high-speed oscillation with much lower costs than with an ultra-fast framing or streak camera system was demonstrated. In addition, microbubble oscillations along both lateral (x and y) and axial (z) directions were demonstrated. Accordingly, this system is an excellent alternative for 3D investigation of microbubble high-speed oscillation, especially when budgets are limited. PMID:24049677

  13. The in-situ 3D measurement system combined with CNC machine tools

    NASA Astrophysics Data System (ADS)

    Zhao, Huijie; Jiang, Hongzhi; Li, Xudong; Sui, Shaochun; Tang, Limin; Liang, Xiaoyue; Diao, Xiaochun; Dai, Jiliang

    2013-06-01

    With the development of the manufacturing industry, in-situ 3D measurement of workpieces being machined in CNC machine tools is regarded as a new trend in efficient measurement. We introduce a 3D measurement system based on stereovision and the phase-shifting method, combined with CNC machine tools, which can measure the 3D profile of workpieces between key machining processes. The measurement system utilizes a high-dynamic-range fringe acquisition method to solve the problem of saturation induced by specular light reflected from shiny surfaces such as aluminum alloy or titanium alloy workpieces. We measured two aluminum alloy workpieces on CNC machine tools to demonstrate the effectiveness of the developed measurement system.
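
    The phase-shifting part of such a system typically recovers the wrapped fringe phase per pixel from a small set of phase-shifted images; the classical four-step formula is phi = atan2(I4 - I2, I1 - I3) for shifts of 0°, 90°, 180° and 270°. A generic sketch (not the authors' implementation, which additionally handles high-dynamic-range acquisition):

        import numpy as np

        def four_step_phase(i1, i2, i3, i4):
            # Wrapped fringe phase from four images shifted by 0, 90, 180 and 270 degrees:
            # phi = atan2(I4 - I2, I1 - I3), returned per pixel in (-pi, pi].
            return np.arctan2(i4.astype(float) - i2, i1.astype(float) - i3)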

  14. An optical system for detecting 3D high-speed oscillation of a single ultrasound microbubble.

    PubMed

    Liu, Yuan; Yuan, Baohong

    2013-01-01

    As contrast agents, microbubbles have been playing significant roles in ultrasound imaging. Investigation of microbubble oscillation is crucial for microbubble characterization and detection. Unfortunately, 3-dimensional (3D) observation of microbubble oscillation is challenging and costly because of the bubble size (a few microns in diameter) and the high-speed dynamics under MHz ultrasound pressure waves. In this study, a cost-efficient optical confocal microscopic system combined with a gated and intensified charge-coupled device (ICCD) camera was developed to detect 3D microbubble oscillation. The capability of imaging microbubble high-speed oscillation with much lower costs than with an ultra-fast framing or streak camera system was demonstrated. In addition, microbubble oscillations along both lateral (x and y) and axial (z) directions were demonstrated. Accordingly, this system is an excellent alternative for 3D investigation of microbubble high-speed oscillation, especially when budgets are limited. PMID:24049677

  15. Second order superintegrable systems in conformally flat spaces. IV. The classical 3D Staeckel transform and 3D classification theory

    SciTech Connect

    Kalnins, E.G.; Kress, J.M.; Miller, W. Jr.

    2006-04-15

    This article is one of a series that lays the groundwork for a structure and classification theory of second order superintegrable systems, both classical and quantum, in conformally flat spaces. In the first part of the article we study the Staeckel transform (or coupling constant metamorphosis) as an invertible mapping between classical superintegrable systems on different three-dimensional spaces. We show first that all superintegrable systems with nondegenerate potentials are multiseparable and then that each such system on any conformally flat space is Staeckel equivalent to a system on a constant curvature space. In the second part of the article we classify all the superintegrable systems that admit separation in generic coordinates. We find that there are eight families of these systems.

  16. Evaluation of the 3d Urban Modelling Capabilities in Geographical Information Systems

    NASA Astrophysics Data System (ADS)

    Dogru, A. O.; Seker, D. Z.

    2010-12-01

    Geographical Information System (GIS) technology, which provides successful solutions to basic spatial problems, is currently widely used in 3-dimensional (3D) modeling of physical reality with its developing visualization tools. The modeling of large and complicated phenomena is a challenging problem for the computer graphics currently in use. However, it is possible to visualize such phenomena in 3D by using computer systems. 3D models are used in developing computer games, military training, urban planning, tourism, etc. The use of 3D models for planning and management of urban areas is a very popular issue for city administrations. In this context, 3D city models are produced and used for various purposes. However, the requirements of the models vary depending on the type and scope of the application. While high-level visualization, where photorealistic visualization techniques are widely used, is required for touristic and recreational purposes, an abstract visualization of physical reality is generally sufficient for the communication of thematic information. The visual variables, which are the principal components of cartographic visualization, such as color, shape, pattern, orientation, size, position, and saturation, are used for communicating the thematic information. These kinds of 3D city models are called abstract models. Standardization of technologies used for 3D modeling is now available through the use of CityGML. CityGML implements several novel concepts to support interoperability, consistency and functionality. For example, it supports different Levels-of-Detail (LoD), which may arise from independent data collection processes and are used for efficient visualization and efficient data analysis. In one CityGML data set, the same object may be represented in different LoD simultaneously, enabling the analysis and visualization of the same object with regard to different degrees of resolution. Furthermore, two CityGML data sets

  17. Optical design of wavelength selective CPVT system with 3D/2D hybrid concentration

    NASA Astrophysics Data System (ADS)

    Ahmad, N.; Ijiro, T.; Yamada, N.; Kawaguchi, T.; Maemura, T.; Ohashi, H.

    2012-10-01

    Optical design of a concentrating photovoltaic/thermal (CPVT) system is carried out. Using wavelength-selective optics, the system demonstrates 3-D concentration onto a solar cell and 2-D concentration onto a thermal receiver. Characteristics of the two types of concentrator systems are examined with ray-tracing analysis. The first system is a glazed mirror-based concentrator system mounted on a 2-axis pedestal tracker. The size of the secondary optical element is minimized to decrease the cost of the system, and it has a wavelength-selective function for performing 3-D concentration onto a solar cell and 2-D concentration onto a thermal receiver. The second system is a non-glazed beamdown concentrator system containing parabolic mirrors in the lower part. The beam-down selective mirror performs 3-D concentration onto a solar cell placed above the beam-down selective mirror, and 2-D concentration down to a thermal receiver placed at the bottom level. The system is mounted on a two-axis carousel tracker. A parametric study is performed for those systems with different geometrical 2-D/3-D concentration ratios. Wavelength-selective optics such as hot/cold mirrors and spectrum-splitting technologies are taken into account in the analysis. Results show reduced heat load on the solar cell and increased total system efficiency compared to a non-selective CPV system. Requirements for the wavelength-selective properties are elucidated. It is also shown that the hybrid concept with 2-D concentration onto a thermal receiver and 3-D concentration onto a solar cell has an advantageous geometry because of the high total system efficiency and compatibility with the piping arrangement of the thermal receiver.

  18. Dose Verification of Stereotactic Radiosurgery Treatment for Trigeminal Neuralgia with Presage 3D Dosimetry System

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Thomas, A.; Newton, J.; Ibbott, G.; Deasy, J.; Oldham, M.

    2010-11-01

    Achieving adequate verification and quality-assurance (QA) for radiosurgery treatment of trigeminal-neuralgia (TGN) is particularly challenging because of the combination of very small fields, very high doses, and complex irradiation geometries (multiple gantry and couch combinations). TGN treatments have extreme requirements for dosimetry tools and QA techniques, to ensure adequate verification. In this work we evaluate the potential of the Presage/Optical-CT dosimetry system as a tool for the verification of TGN distributions in high resolution and in 3D. A TGN treatment was planned and delivered to a Presage 3D dosimeter positioned inside the Radiological-Physics-Center (RPC) head and neck IMRT credentialing phantom. A 6-arc treatment plan was created using the iPlan system, and a maximum dose of 80 Gy was delivered with a Varian Trilogy machine. The delivered dose to Presage was determined by optical-CT scanning using the Duke Large field-of-view Optical-CT Scanner (DLOS) in 3D, with isotropic resolution of 0.7 mm³. DLOS scanning and reconstruction took about 20 minutes. 3D dose comparisons were made with the planning system. Good agreement was observed between the planned and measured 3D dose distributions, and this work provides strong support for the viability of Presage/Optical-CT as a highly useful new approach for verification of this complex technique.

  19. Vision System Measures Motions of Robot and External Objects

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2008-01-01

    A prototype of an advanced robotic vision system both (1) measures its own motion with respect to a stationary background and (2) detects other moving objects and estimates their motions, all by use of visual cues. Like some prior robotic and other optoelectronic vision systems, this system is based partly on concepts of optical flow and visual odometry. Whereas prior optoelectronic visual-odometry systems have been limited to frame rates of no more than 1 Hz, a visual-odometry subsystem that is part of this system operates at a frame rate of 60 to 200 Hz, given optical-flow estimates. The overall system operates at an effective frame rate of 12 Hz. Moreover, unlike prior machine-vision systems for detecting motions of external objects, this system need not remain stationary: it can detect such motions while it is moving (even vibrating). The system includes a stereoscopic pair of cameras mounted on a moving robot. The outputs of the cameras are digitized, then processed to extract positions and velocities. The initial image-data-processing functions of this system are the same as those of some prior systems: Stereoscopy is used to compute three-dimensional (3D) positions for all pixels in the camera images. For each pixel of each image, optical flow between successive image frames is used to compute the two-dimensional (2D) apparent relative translational motion of the point transverse to the line of sight of the camera. The challenge in designing this system was to provide for utilization of the 3D information from stereoscopy in conjunction with the 2D information from optical flow to distinguish between motion of the camera pair and motions of external objects, compute the motion of the camera pair in all six degrees of translational and rotational freedom, and robustly estimate the motions of external objects, all in real time. To meet this challenge, the system is designed to perform the following image-data-processing functions: The visual-odometry subsystem

  20. Investigation of Presage 3D Dosimetry as a Method of Clinically Intuitive Quality Assurance and Comparison to a Semi-3D Delta4 System

    NASA Astrophysics Data System (ADS)

    Crockett, Ethan Van

    The need for clinically intuitive metrics for patient-specific quality assurance in radiation therapy has been well-documented (Zhen, Nelms et al. 2011). A novel transform method has been shown to be effective at converting full-density 3D dose measurements made in a phantom to dose values in the patient geometry, enabling comparisons using clinically intuitive metrics such as dose-volume histograms (Oldham et al. 2011). This work investigates the transform method and compares its calculated dose-volume histograms (DVHs) to DVH values calculated by a Delta4 QA device (Scandidos), marking the first comparison of a true 3D system to a semi-3D device using clinical metrics. Measurements were made using Presage 3D dosimeters, which were read out by an in-house optical-CT scanner. Three patient cases were chosen for the study: one head-and-neck VMAT treatment and two spine IMRT treatments. The transform method showed good agreement with the planned dose values for all three cases. Furthermore, the transformed DVHs adhered to the planned dose with more accuracy than the Delta4 DVHs. The similarity between the Delta4 DVHs and the transformed DVHs, however, was greater for one of the spine cases than it was for the head-and-neck case, implying that the accuracy of the Delta4 Anatomy software may vary from one treatment site to another. Overall, the transform method, which incorporates data from full-density 3D dose measurements, provides clinically intuitive results that are more accurate and consistent than the corresponding results from a semi-3D Delta4 system.
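
    For readers unfamiliar with the metric being compared, the following minimal sketch computes a cumulative dose-volume histogram from a 3D dose array and a structure mask. It is a generic DVH calculation on synthetic data, not the transform method or the Delta4 software discussed in this record.

```python
import numpy as np

def cumulative_dvh(dose, mask, bins=200):
    """Cumulative dose-volume histogram for one structure.

    dose : 3D array of dose values (e.g. Gy) on the measurement/plan grid.
    mask : boolean 3D array selecting the voxels of the structure.
    Returns (dose_axis, volume_fraction) where volume_fraction[i] is the
    fraction of the structure receiving at least dose_axis[i].
    """
    d = dose[mask].ravel()
    dose_axis = np.linspace(0.0, d.max(), bins)
    volume_fraction = np.array([(d >= t).mean() for t in dose_axis])
    return dose_axis, volume_fraction

# Toy example: a 20^3 dose grid and a spherical "target" mask.
dose = np.random.normal(60.0, 2.0, size=(20, 20, 20))
zz, yy, xx = np.mgrid[:20, :20, :20]
target = (xx - 10) ** 2 + (yy - 10) ** 2 + (zz - 10) ** 2 < 36
d_axis, v_frac = cumulative_dvh(dose, target)
print(f"D95 ≈ {d_axis[np.searchsorted(-v_frac, -0.95)]:.1f} Gy")
```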

  1. Low-cost structured-light based 3D capture system design

    NASA Astrophysics Data System (ADS)

    Dong, Jing; Bengtson, Kurt R.; Robinson, Barrett F.; Allebach, Jan P.

    2014-03-01

    Most of the 3D capture products currently on the market are high-end and pricey. They are not targeted at consumers, but rather at research, medical, or industrial usage. Very few aim to provide a solution for home and small business applications. Our goal is to fill this gap by using only low-cost components to build a 3D capture system that can satisfy the needs of this market segment. In this paper, we present a low-cost 3D capture system based on the structured-light method. The system is built around the HP TopShot LaserJet Pro M275. For our capture device, we use the 8.0 Mpixel camera that is part of the M275. We augment this hardware with two 3M MPro 150 VGA (640 × 480) pocket projectors. We also describe an analytical approach to predicting the achievable resolution of the reconstructed 3D object based on differentials and small signal theory, and an experimental procedure for validating that the system under test meets the specifications for reconstructed object resolution that are predicted by our analytical model. By comparing our experimental measurements from the camera-projector system with the simulation results based on the model for this system, we conclude that our prototype system has been correctly configured and calibrated. We also conclude that with the analytical models, we have an effective means for specifying system parameters to achieve a given target resolution for the reconstructed object.
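
    The resolution prediction mentioned above is based on differentials; the sketch below applies the same first-order idea to a generic triangulation model (Z = f·b/d), with made-up focal length, baseline, and disparity-step values rather than the authors' camera-projector parameters.

```python
def depth_resolution(z, focal_px, baseline, disparity_step=1.0):
    """Approximate depth resolution of a triangulation system via differentials.

    For Z = f * b / d, a small disparity change Δd maps to
    ΔZ ≈ (Z**2 / (f * b)) * Δd  (first-order / small-signal approximation).
    """
    return (z ** 2 / (focal_px * baseline)) * disparity_step

# Assumed numbers for illustration only: 2000 px focal length, 15 cm baseline,
# object at 0.4 m, quantisation of one disparity step.
print(f"ΔZ ≈ {depth_resolution(0.4, 2000.0, 0.15) * 1e3:.2f} mm")
```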

  2. The pulsed all fiber laser application in the high-resolution 3D imaging LIDAR system

    NASA Astrophysics Data System (ADS)

    Gao, Cunxiao; Zhu, Shaolan; Niu, Linquan; Feng, Li; He, Haodong; Cao, Zongying

    2014-05-01

    An all-fiber laser with a master-oscillator power-amplifier (MOPA) configuration at 1064 nm/1550 nm for a high-resolution three-dimensional (3D) imaging light detection and ranging (LIDAR) system is reported. The pulse width and the repetition frequency could be arbitrarily tuned over 1 ns~10 ns and 10 kHz~1 MHz, and a peak power exceeding 100 kW could be obtained with the laser. Using this all-fiber laser in the high-resolution 3D imaging LIDAR system, an image resolution of 1024 × 1024 and a distance precision of +/-1.5 cm were obtained at an imaging distance of 1 km.

  3. Unified framework for generation of 3D web visualization for mechatronic systems

    NASA Astrophysics Data System (ADS)

    Severa, O.; Goubej, M.; Konigsmarkova, J.

    2015-11-01

    The paper deals with the development of a unified framework for the generation of 3D visualizations of complex mechatronic systems. It provides a high-fidelity representation of executed motion by allowing direct employment of a machine geometry model acquired from a CAD system. An open-architecture, multi-platform solution based on the latest web standards is achieved by utilizing a web browser as the final 3D renderer. The results are applicable both to simulations and to the development of real-time human-machine interfaces. A case study of autonomous underwater vehicle control is provided to demonstrate the applicability of the proposed approach.

  4. Artificial vision support system (AVS(2)) for improved prosthetic vision.

    PubMed

    Fink, Wolfgang; Tarbell, Mark A

    2014-11-01

    State-of-the-art and upcoming camera-driven, implanted artificial vision systems provide only tens to hundreds of electrodes, affording only limited visual perception for blind subjects. Therefore, real time image processing is crucial to enhance and optimize this limited perception. Since tens or hundreds of pixels/electrodes allow only for a very crude approximation of the typically megapixel optical resolution of the external camera image feed, the preservation and enhancement of contrast differences and transitions, such as edges, are especially important compared to picture details such as object texture. An Artificial Vision Support System (AVS(2)) is devised that displays the captured video stream in a pixelation conforming to the dimension of the epi-retinal implant electrode array. AVS(2), using efficient image processing modules, modifies the captured video stream in real time, enhancing 'present but hidden' objects to overcome inadequacies or extremes in the camera imagery. As a result, visual prosthesis carriers may now be able to discern such objects in their 'field-of-view', thus enabling mobility in environments that would otherwise be too hazardous to navigate. The image processing modules can be engaged repeatedly in a user-defined order, which is a unique capability. AVS(2) is directly applicable to any artificial vision system that is based on an imaging modality (video, infrared, sound, ultrasound, microwave, radar, etc.) as the first step in the stimulation/processing cascade, such as: retinal implants (i.e. epi-retinal, sub-retinal, suprachoroidal), optic nerve implants, cortical implants, electric tongue stimulators, or tactile stimulators.
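
    As a rough illustration of the pixelation step described above, the sketch below block-averages a grayscale frame down to an electrode-array grid and applies a simple contrast stretch. The grid size and the enhancement are assumptions for illustration; they are not the AVS(2) processing modules.

```python
import numpy as np

def pixelate_for_implant(frame, grid=(10, 10)):
    """Block-average a grayscale frame down to an electrode-array grid and
    stretch contrast, a crude stand-in for one processing step of a vision
    support pipeline (grid size chosen arbitrarily here)."""
    h, w = frame.shape
    gh, gw = grid
    # Crop so the frame divides evenly into blocks, then average each block.
    frame = frame[:h - h % gh, :w - w % gw].astype(float)
    blocks = frame.reshape(gh, frame.shape[0] // gh, gw, frame.shape[1] // gw)
    coarse = blocks.mean(axis=(1, 3))
    # Simple global contrast stretch to use the full stimulation range.
    lo, hi = coarse.min(), coarse.max()
    return (coarse - lo) / (hi - lo + 1e-9)

frame = np.random.randint(0, 256, size=(480, 640)).astype(np.uint8)
print(pixelate_for_implant(frame).shape)   # (10, 10) electrode activations in [0, 1]
```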

  5. Mobile 3d Mapping with a Low-Cost Uav System

    NASA Astrophysics Data System (ADS)

    Neitzel, F.; Klonowski, J.

    2011-09-01

    In this contribution it is shown how a UAV system can be built at low cost. The components of the system, the equipment, and the control software are presented. Furthermore, an implemented programme for photogrammetric flight planning and its execution is described. The main focus of this contribution is the generation of 3D point clouds from digital imagery. For this, web services and free software solutions are presented which automatically generate 3D point clouds from arbitrary image configurations. Possibilities of georeferencing are described and the achieved accuracy has been determined. The presented workflow is finally used for the acquisition of 3D geodata. Using the example of a landfill survey, it is shown that marketable products can be derived with a low-cost UAV.

  6. PRIMAS: a real-time 3D motion-analysis system

    NASA Astrophysics Data System (ADS)

    Sabel, Jan C.; van Veenendaal, Hans L. J.; Furnee, E. Hans

    1994-03-01

    The paper describes a CCD TV-camera-based system for real-time multicamera 2D detection of retro-reflective targets and software for accurate and fast 3D reconstruction. Applications of this system can be found in the fields of sports, biomechanics, rehabilitation research, and various other areas of science and industry. The new feature of real-time 3D opens an even broader perspective of application areas; animations in virtual reality are an interesting example. After presenting an overview of the hardware and the camera calibration method, the paper focuses on the real-time algorithms used for matching of the images and subsequent 3D reconstruction of marker positions. When using a calibrated setup of two cameras, it is now possible to track at least ten markers at 100 Hz. Limitations in the performance are determined by the visibility of the markers, which could be improved by adding a third camera.
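
    The 3D reconstruction of matched marker positions can be illustrated with a standard linear (DLT) two-view triangulation, sketched below on synthetic camera matrices; this is a textbook method, not necessarily the exact algorithm used in PRIMAS.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two calibrated views.

    P1, P2 : 3x4 camera projection matrices.
    x1, x2 : (u, v) pixel coordinates of the same marker in each view.
    Returns the 3D point in the world frame.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Synthetic check: two simple cameras looking at the point (0.1, 0.2, 3.0).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at origin
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0], [0]])])   # 0.2 m to the right
X_true = np.array([0.1, 0.2, 3.0, 1.0])
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))   # ≈ [0.1, 0.2, 3.0]
```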

  7. Development of a 3D ultrasound-guided prostate biopsy system

    NASA Astrophysics Data System (ADS)

    Cool, Derek; Sherebrin, Shi; Izawa, Jonathan; Fenster, Aaron

    2007-03-01

    Biopsy of the prostate using ultrasound guidance is the clinical gold standard for diagnosis of prostate adenocarcinoma. However, because early-stage tumors are rarely visible under US, the procedure carries high false-negative rates and patients often require multiple biopsies before cancer is detected. To improve cancer detection, it is imperative that throughout the biopsy procedure, physicians know where they are within the prostate and where they have sampled during prior biopsies. The current biopsy procedure is limited to using only 2D ultrasound images to find and record target biopsy core sample sites. This leaves ambiguity as the physician tries to interpret the 2D information and apply it to the 3D workspace. We have developed a 3D ultrasound-guided prostate biopsy system that provides 3D intra-biopsy information to physicians for needle guidance and biopsy location recording. The system is designed to conform to the workflow of the current prostate biopsy procedure, making it easier for clinical integration. In this paper, we describe the system design and validate its accuracy by performing an in vitro biopsy procedure on US/CT multi-modal patient-specific prostate phantoms. A clinical sextant biopsy was performed by a urologist on the phantoms and the 3D models of the prostates were generated with volume errors less than 4% and mean boundary errors of less than 1 mm. Using the 3D biopsy system, needles were guided to within 1.36 +/- 0.83 mm of 3D targets and the positions of the biopsy sites were accurately localized to 1.06 +/- 0.89 mm for the two prostates.

  8. SU-E-T-154: Establishment and Implement of 3D Image Guided Brachytherapy Planning System

    SciTech Connect

    Jiang, S; Zhao, S; Chen, Y; Li, Z; Li, P; Huang, Z; Yang, Z; Zhang, X

    2014-06-01

    Purpose: The inability to visualize dose intuitively is a limitation of existing 2D pre-implantation dose planning. Meanwhile, a navigation module is essential to improve the accuracy and efficiency of the implantation. Hence, a 3D Image Guided Brachytherapy Planning System that conducts dose planning and intra-operative navigation based on 3D multi-organ reconstruction is developed. Methods: Multiple organs, including the tumor, are reconstructed in one sweep of all the segmented images using the multi-organ reconstruction method. The reconstructed group of organs establishes a three-dimensional visualized operative environment. The 3D dose maps of the three-dimensional conformal localized dose planning are calculated with the Monte Carlo method, while the corresponding isodose lines and isodose surfaces are displayed in a stereo view. The real-time intra-operative navigation is based on an electromagnetic tracking system (ETS) and the fusion of MRI and ultrasound images. Applying the least squares method, the coordinate registration between the 3D models and the patient is realized by the ETS, which is calibrated by a laser tracker. The system is validated on eight patients with prostate cancer. The navigation has passed precision measurements in the laboratory. Results: The traditional marching cubes (MC) method reconstructs one organ at a time and assembles them together. Compared to MC, the presented multi-organ reconstruction method is superior in preserving the integrity and connectivity of the reconstructed organs. The 3D conformal localized dose planning, realizing the 'exfoliation display' of different isodose surfaces, helps ensure that the dose distribution encompasses the nidus while avoiding injury to healthy tissue. During navigation, surgeons can observe instrument coordinates in real time using the ETS. After calibration, the accuracy error of the needle position is less than 2.5 mm according to the experiments. Conclusion: The speed and
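
    The least-squares coordinate registration mentioned in the Methods can be illustrated with the closed-form SVD (Kabsch) solution on paired fiducial points, sketched below on synthetic data; the authors' exact formulation and the ETS calibration details are not reproduced here.

```python
import numpy as np

def rigid_register(model_pts, patient_pts):
    """Least-squares rigid transform (R, t) mapping model_pts onto patient_pts.

    Both inputs are (N, 3) arrays of paired fiducial coordinates.  Uses the
    closed-form SVD (Kabsch) solution minimising sum ||R p_i + t - q_i||^2.
    """
    p_mean = model_pts.mean(axis=0)
    q_mean = patient_pts.mean(axis=0)
    H = (model_pts - p_mean).T @ (patient_pts - q_mean)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Toy check: recover a known rotation about z and a translation.
rng = np.random.default_rng(0)
p = rng.uniform(-50, 50, size=(6, 3))                     # fiducials in model frame (mm)
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
q = p @ R_true.T + np.array([5.0, -2.0, 10.0])            # same fiducials in patient frame
R_est, t_est = rigid_register(p, q)
print(np.allclose(R_est, R_true), np.round(t_est, 3))
```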

  9. Biomek Cell Workstation: A Flexible System for Automated 3D Cell Cultivation.

    PubMed

    Lehmann, R; Gallert, C; Roddelkopf, T; Junginger, S; Thurow, K

    2016-08-01

    The shift from 2D cultures to 3D cultures enables improvement in cell culture research due to better mimicking of in vivo cell behavior and environmental conditions. Different cell lines and applications require altered 3D constructs. The automation of the manufacturing and screening processes can advance the charge stability, quality, repeatability, and precision. In this study we integrated the automated production of three 3D cell constructs (alginate beads, spheroid cultures, pellet cultures) using the Biomek Cell Workstation and compared them with the traditional manual methods and their consequent bioscreening processes (proliferation, toxicity; days 14 and 35) using a high-throughput screening system. Moreover, the possible influence of antibiotics (penicillin/streptomycin) on the production and screening processes was investigated. The cytotoxicity of automatically produced 3D cell cultures (with and without antibiotics) was mainly decreased. The proliferation showed mainly similar or increased results for the automatically produced 3D constructs. We concluded that the traditional manual methods can be replaced by the automated processes. Furthermore, the formation, cultivation, and screenings can be performed without antibiotics to prevent possible effects.

  10. Generic precise augmented reality guiding system and its calibration method based on 3D virtual model.

    PubMed

    Liu, Miao; Yang, Shourui; Wang, Zhangying; Huang, Shujun; Liu, Yue; Niu, Zhenqi; Zhang, Xiaoxuan; Zhu, Jigui; Zhang, Zonghua

    2016-05-30

    Augmented reality systems can be applied to provide precise guidance for various kinds of manual work. The adaptability and guiding accuracy of such systems are determined by the computational model and the corresponding calibration method. In this paper, a novel type of augmented reality guiding system and the corresponding design scheme are proposed. Guided by external positioning equipment, the proposed system can achieve high relative indication accuracy in a large working space. Meanwhile, the proposed system is realized with a digital projector, and the general back-projection model is derived from the geometric relationship between the digitized 3D model and the projector in free space. The corresponding calibration method is also designed for the proposed system to obtain the parameters of the projector. To validate the proposed back-projection model, coordinate data collected by 3D positioning equipment are used to calculate and optimize the extrinsic parameters. The final projected indication accuracy of the system is verified with a subpixel pattern projecting technique.

  11. A convolutional learning system for object classification in 3-D Lidar data.

    PubMed

    Prokhorov, Danil

    2010-05-01

    In this brief, a convolutional learning system for classification of segmented objects represented in 3-D as point clouds of laser reflections is proposed. Several novelties are discussed: (1) extension of the existing convolutional neural network (CNN) framework to direct processing of 3-D data in a multiview setting which may be helpful for rotation-invariant consideration, (2) improvement of CNN training effectiveness by employing a stochastic meta-descent (SMD) method, and (3) combination of unsupervised and supervised training for enhanced performance of CNN. CNN performance is illustrated on a two-class data set of objects in a segmented outdoor environment.
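
    A minimal sketch of the multiview idea is given below: a shared-weight 2D CNN is applied to several rendered views of a segmented object and the view features are max-pooled before classification. The layer sizes, view count, and pooling choice are illustrative assumptions; the brief's actual architecture, SMD training, and unsupervised pre-training are not reproduced.

```python
import torch
import torch.nn as nn

class MultiViewCNN(nn.Module):
    """Shared 2D CNN applied to each rendered view of a point cloud; features
    are max-pooled across views before a small classifier (illustrative sizes)."""
    def __init__(self, n_views=4, n_classes=2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(16, n_classes)
        self.n_views = n_views

    def forward(self, views):                          # views: (batch, n_views, 1, H, W)
        b, v = views.shape[:2]
        feats = self.backbone(views.flatten(0, 1))     # (batch * n_views, 16)
        feats = feats.view(b, v, -1).max(dim=1).values # pool over views
        return self.classifier(feats)

# Depth-image style views rendered from segmented lidar clusters (synthetic here).
x = torch.randn(3, 4, 1, 64, 64)
print(MultiViewCNN()(x).shape)   # torch.Size([3, 2]) class scores
```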

  12. An orthognathic simulation system integrating teeth, jaw and face data using 3D cephalometry.

    PubMed

    Noguchi, N; Tsuji, M; Shigematsu, M; Goto, M

    2007-07-01

    A method for simulating the movement of teeth, jaw and face caused by orthognathic surgery is proposed, characterized by the use of 3D cephalometric data for 3D simulation. Computed tomography data are not required. The teeth and facial data are obtained by a laser scanner and the data for the patient's mandible are reconstructed and integrated according to 3D cephalometry using a projection-matching technique. The mandibular form is simulated by transforming a generic model to match the patient's cephalometric data. This system permits analysis of bone movement at each individual part, while also helping in the choice of optimal osteotomy design considering the influences on facial soft-tissue form.

  13. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision

    NASA Astrophysics Data System (ADS)

    Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck

    2008-04-01

    One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor, an active trinocular vision and a passive stereo vision. Unlike in conventional active vision systems that use a large number of images with variations of projected patterns for dense range map acquisition or from conventional passive vision systems that work well on specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts, one in which the passive stereo vision helps active vision and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using the dynamic programming method in which image regions between laser patterns are matched pixel-by-pixel with help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms can work in real applications, the sensor system is implemented on a robotic system, and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.

  14. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision.

    PubMed

    Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck

    2008-04-10

    One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor, an active trinocular vision and a passive stereo vision. Unlike in conventional active vision systems that use a large number of images with variations of projected patterns for dense range map acquisition or from conventional passive vision systems that work well on specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts, one in which the passive stereo vision helps active vision and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using the dynamic programming method in which image regions between laser patterns are matched pixel-by-pixel with help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms can work in real applications, the sensor system is implemented on a robotic system, and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.

  15. 3D Game-Based Learning System for Improving Learning Achievement in Software Engineering Curriculum

    ERIC Educational Resources Information Center

    Su, Chung-Ho; Cheng, Ching-Hsue

    2013-01-01

    The advancement of game-based learning has encouraged many related studies, such that students can better learn the curriculum through 3-dimensional virtual reality. To enhance software engineering learning, this paper develops a 3D game-based learning system to assist teaching and assess the students' motivation, satisfaction and learning…

  16. Bore-Sight Calibration of Multiple Laser Range Finders for Kinematic 3D Laser Scanning Systems.

    PubMed

    Jung, Jaehoon; Kim, Jeonghyun; Yoon, Sanghyun; Kim, Sangmin; Cho, Hyoungsig; Kim, Changjae; Heo, Joon

    2015-01-01

    The Simultaneous Localization and Mapping (SLAM) technique has been used for autonomous navigation of mobile systems; now, its applications have been extended to 3D data acquisition of indoor environments. In order to reconstruct 3D scenes of indoor space, the kinematic 3D laser scanning system, developed herein, carries three laser range finders (LRFs): one is mounted horizontally for system-position correction and the other two are mounted vertically to collect 3D point-cloud data of the surrounding environment along the system's trajectory. However, the kinematic laser scanning results can be impaired by errors resulting from sensor misalignment. In the present study, the bore-sight calibration of multiple LRF sensors was performed using a specially designed double-deck calibration facility, which is composed of two half-circle-shaped aluminum frames. Moreover, in order to automatically achieve point-to-point correspondences between a scan point and the target center, a V-shaped target was designed as well. The bore-sight calibration parameters were estimated by a constrained least squares method, which iteratively minimizes the weighted sum of squares of residuals while constraining some highly-correlated parameters. The calibration performance was analyzed by means of a correlation matrix. After calibration, the visual inspection of mapped data and residual calculation confirmed the effectiveness of the proposed calibration approach. PMID:25946627
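
    A generic version of the adjustment described above is sketched below: the bore-sight rotation and offset of one LRF are estimated by minimizing residuals between transformed scan points and known target centres, with simple parameter bounds standing in for the constraints on highly correlated parameters. The parameterization and the synthetic data are assumptions, not the authors' formulation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, scan_pts, target_pts):
    """Residuals between scan points mapped through a candidate bore-sight
    transform (roll, pitch, yaw in rad; tx, ty, tz in m) and known target centres."""
    rot = Rotation.from_euler("xyz", params[:3])
    mapped = rot.apply(scan_pts) + params[3:]
    return (mapped - target_pts).ravel()

# Synthetic data: a true bore-sight misalignment applied to a few target centres.
rng = np.random.default_rng(1)
targets = rng.uniform(-3, 3, size=(8, 3))
true_rot = Rotation.from_euler("xyz", [0.02, -0.01, 0.03])
true_t = np.array([0.05, -0.02, 0.10])
scans = true_rot.inv().apply(targets - true_t) + rng.normal(0, 0.002, (8, 3))

# Bounds play the role of constraints on highly correlated parameters here.
sol = least_squares(residuals, x0=np.zeros(6), args=(scans, targets),
                    bounds=(-0.2 * np.ones(6), 0.2 * np.ones(6)))
print(np.round(sol.x, 4))   # ≈ [0.02, -0.01, 0.03, 0.05, -0.02, 0.10]
```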

  17. A 3D Model Based Indoor Navigation System for Hubei Provincial Museum

    NASA Astrophysics Data System (ADS)

    Xu, W.; Kruminaite, M.; Onrust, B.; Liu, H.; Xiong, Q.; Zlatanova, S.

    2013-11-01

    3D models are more powerful than 2D maps for indoor navigation in a complicated space like Hubei Provincial Museum, because they can provide accurate descriptions of the locations of indoor objects (e.g., doors, windows, tables) and context information about these objects. In addition, according to a user survey, the 3D model is the preferred navigation environment. Therefore, a 3D model based indoor navigation system is developed for Hubei Provincial Museum to guide its visitors. The system consists of three layers: application, web service and navigation, which are built to support the localization, navigation and visualization functions of the system. The system has three main strengths: it stores all needed data in one database and processes most calculations on the web server, which makes the mobile client very lightweight; the network used for navigation is extracted semi-automatically and is renewable; and the graphical user interface (GUI), which is based on a game engine, achieves high performance when visualizing the 3D model on a mobile display.

  18. A framework for human spine imaging using a freehand 3D ultrasound system.

    PubMed

    Purnama, Ketut E; Wilkinson, Michael H F; Veldhuizen, Albert G; van Ooijen, Peter M A; Lubbers, Jaap; Burgerhof, Johannes G M; Sardjono, Tri A; Verkerke, Gijbertus J

    2010-01-01

    The use of 3D ultrasound imaging to follow the progression of scoliosis, i.e., a 3D deformation of the spine, is described. Unlike other current examination modalities, in particular those based on X-rays, its non-detrimental effect enables it to be used frequently to follow the progression of scoliosis, which sometimes may develop rapidly. Furthermore, 3D ultrasound imaging provides information in 3D directly, in contrast to projection methods. This paper describes a feasibility study of an ultrasound system to provide a 3D image of the human spine, and presents a framework of procedures to perform this task. The framework consists of an ultrasound image acquisition procedure to image a large part of the human spine by means of a freehand 3D ultrasound system and a volume reconstruction procedure performed in four stages: bin-filling, hole-filling, volume segment alignment, and volume segment compounding. The overall results of the procedures in this framework show that imaging of the human spine using ultrasound is feasible. Vertebral parts such as the transverse processes, laminae, superior articular processes, and spinous process of the vertebrae appear as clouds of voxels having intensities higher than the surrounding voxels. In sagittal slices, a string of transverse processes appears, representing the curvature of the spine. In the bin-filling stage the estimated mean absolute noise level of a single measurement of a single voxel was determined. Our comparative study of the hole-filling methods, based on rank sum statistics, showed that the pixel nearest neighbour (PNN) method with variable radius and with the proposed olympic operation is the best method. Its mean absolute grey value error was less in magnitude than the noise level of a single measurement.
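
    A simplified version of the variable-radius PNN hole-filling stage is sketched below: each empty voxel is filled from progressively larger neighbourhoods of filled voxels, and the extremes are trimmed before averaging as a stand-in for the olympic operation. The radius limit and the trimming rule are assumptions, not the paper's exact implementation.

```python
import numpy as np

def pnn_hole_fill(volume, filled, max_radius=3):
    """Fill empty voxels by averaging nearby filled voxels, growing the search
    radius until neighbours are found (a simplified variable-radius PNN step).
    Extremes are trimmed before averaging ("olympic"-style mean, assumed here)."""
    out = volume.copy()
    for z, y, x in np.argwhere(~filled):
        for r in range(1, max_radius + 1):
            zs, ys, xs = (slice(max(0, z - r), z + r + 1),
                          slice(max(0, y - r), y + r + 1),
                          slice(max(0, x - r), x + r + 1))
            vals = volume[zs, ys, xs][filled[zs, ys, xs]]
            if vals.size:
                if vals.size > 2:                       # olympic mean: drop min and max
                    vals = np.sort(vals)[1:-1]
                out[z, y, x] = vals.mean()
                break
    return out

# Tiny demo: an 8^3 volume with ~30% of voxels never hit by the ultrasound sweep.
rng = np.random.default_rng(2)
vol = rng.uniform(0, 255, size=(8, 8, 8))
hit = rng.random((8, 8, 8)) > 0.3
vol[~hit] = 0.0
print(np.count_nonzero(pnn_hole_fill(vol, hit) == 0.0))  # far fewer empty voxels remain
```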

  19. Compact Through-The-Torch Vision System

    NASA Technical Reports Server (NTRS)

    Gilbert, Jeffrey L.; Gutow, David A.

    1992-01-01

    Changes in gas/tungsten-arc welding torch equipped with through-the-torch vision system make it smaller and more resistant to welding environment. Vision subsystem produces image of higher quality, flow of gas enhanced, and parts replaced more quickly and easily. Coaxial series of lenses and optical components provides overhead view of joint and weld puddle for real-time control. Designed around miniature high-resolution video camera. Smaller size enables torch to weld joints formerly inaccessible.

  20. a 3d Information System for the Documentation of Archaeological Excavations

    NASA Astrophysics Data System (ADS)

    Ardissone, P.; Bornaz, L.; Degattis, G.; Domaine, R.

    2013-07-01

    These methodologies and procedures are presented and described in the article. For the documentation of the archaeological excavations and for the management of the conservation activities (condition assessment, planning, and conservation work), Ad Hoc 3D solutions has customized 2 special plug-ins of its own software platform Ad Hoc: Ad Hoc Archaeology and Ad Hoc Conservation. The software platform integrates a 3D database management system. All information (measurements, plotting, areas of interest…) is organized according to its correct 3D position and can be queried using attributes, geometric characteristics or spatial position. The Ad Hoc Archaeology plug-in allows archaeologists to fill out UUSS sheets in an internal database, place them in the correct location within the 3D model of the site, define the mutual relations between the UUSS, and separate the different archaeological phases. A simple interface facilitates the construction of the stratigraphic chart (matrix), also in a 3D environment (matrix 3D). The Ad Hoc Conservation plug-in permits conservators and restorers to create relationships between the different approaches and descriptions of the same parts of the monument, i.e. between stratigraphic units or historical phases and architectural components and/or decay pathologies. The 3D DBMS conservation module uses a codified terminology based on the "ICOMOS illustrated glossary of stone deterioration" and other glossaries. Specific tools permit restorers to correctly compute surfaces and volumes. In this way decay extension and intensity can be measured with high precision and a high level of detail, for a correct estimation of the time and cost of each conservation step.

  1. A 3D modeling and measurement system for cultural heritage preservation

    NASA Astrophysics Data System (ADS)

    Du, Guoguang; Zhou, Mingquan; Ren, Pu; Shui, Wuyang; Zhou, Pengbo; Wu, Zhongke

    2015-07-01

    Cultural heritage reflects the human production, lifestyle and environmental conditions of various historical periods. It is one of the major carriers of national history and culture. In order to better protect and utilize these cultural heritage objects, a system for three-dimensional (3D) reconstruction and statistical measurement is proposed in this paper. The system addresses the problems of cultural heritage data storage, measurement and analysis. Firstly, for high-precision modeling and measurement, range data registration and integration algorithms are used to achieve high-precision 3D reconstruction. Secondly, a multi-view stereo reconstruction method is used to solve the problem of rapid reconstruction through procedures such as original image data pre-processing, camera calibration and point cloud modeling. Finally, the artifacts' underlying measurement database is established by calculating measurements on the 3D model's surface. These measurements include the Euclidean distance between points on the surface, the geodesic distance between points, the normal and curvature at each point, the surface area of a region, the volume of part of the model, and other quantities. These measurements provide a basis for carrying out information mining of cultural heritage. The system has been applied to 3D modeling and data measurement of the Terracotta Warriors relics, Tibetan architecture and other relics.

  2. 3D-Web-GIS RFID location sensing system for construction objects.

    PubMed

    Ko, Chien-Ho

    2013-01-01

    Construction site managers could benefit from being able to visualize on-site construction objects. Radio frequency identification (RFID) technology has been shown to improve the efficiency of construction object management. The objective of this study is to develop a 3D-Web-GIS RFID location sensing system for construction objects. An RFID 3D location sensing algorithm combining Simulated Annealing (SA) and a gradient descent method is proposed to determine target object location. In the algorithm, SA is used to stabilize the search process and the gradient descent method is used to reduce errors. The locations of the analyzed objects are visualized using the 3D-Web-GIS system. A real construction site is used to validate the applicability of the proposed method, with results indicating that the proposed approach can provide faster, more accurate, and more stable 3D positioning results than other location sensing algorithms. The proposed system allows construction managers to better understand worksite status, thus enhancing managerial efficiency.
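
    The hybrid search can be illustrated on a toy trilateration problem, sketched below: simulated annealing proposes candidate tag positions to stabilize the search, and gradient descent then refines the best candidate by minimizing squared range residuals. The cooling schedule, step sizes, and reader layout are illustrative assumptions, not parameters of the 3D-Web-GIS system.

```python
import numpy as np

def range_cost(p, readers, ranges):
    """Sum of squared differences between candidate-to-reader distances and the
    ranges estimated from RFID signal strength."""
    d = np.linalg.norm(readers - p, axis=1)
    return np.sum((d - ranges) ** 2)

def cost_grad(p, readers, ranges):
    """Analytic gradient of range_cost with respect to the candidate position p."""
    d = np.linalg.norm(readers - p, axis=1)
    return np.sum((2 * (d - ranges) / (d + 1e-9))[:, None] * (p - readers), axis=0)

def sa_then_gd(readers, ranges, iters_sa=500, iters_gd=300, step=0.02, seed=0):
    """Simulated annealing to stabilise the search, then gradient descent to refine."""
    rng = np.random.default_rng(seed)
    p = readers.mean(axis=0)                           # start at the reader centroid
    best, best_c = p.copy(), range_cost(p, readers, ranges)
    temp = 1.0
    for _ in range(iters_sa):
        cand = p + rng.normal(0.0, 0.5, size=3)
        dc = range_cost(cand, readers, ranges) - range_cost(p, readers, ranges)
        if dc < 0 or rng.random() < np.exp(-dc / temp):
            p = cand
            c = range_cost(p, readers, ranges)
            if c < best_c:
                best, best_c = p.copy(), c
        temp *= 0.99
    p = best
    for _ in range(iters_gd):                          # local refinement
        p = p - step * cost_grad(p, readers, ranges)
    return p

readers = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0], [0.0, 0.0, 5.0]])
true_p = np.array([3.0, 4.0, 1.5])
ranges = np.linalg.norm(readers - true_p, axis=1)      # noise-free ranges for the demo
print(np.round(sa_then_gd(readers, ranges), 2))        # ≈ [3.0, 4.0, 1.5]
```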

  3. The Maintenance Of 3-D Scene Databases Using The Analytical Imagery Matching System (Aims)

    NASA Astrophysics Data System (ADS)

    Hovey, Stanford T.

    1987-06-01

    The increased demand for multi-resolution displays of simulated scene data for aircraft training or mission planning has led to a need for digital databases of 3-dimensional topography and geographically positioned objects. This data needs to be at varying resolutions or levels of detail as well as be positionally accurate to satisfy close-up and long distance scene views. The generation and maintenance processes for this type of digital database requires that relative and absolute spatial positions of geographic and cultural features be carefully controlled in order for the scenes to be representative and useful for simulation applications. Autometric, Incorporated has designed a modular Analytical Image Matching System (AIMS) which allows digital 3-D terrain feature data to be derived from cartographic and imagery sources by a combination of automatic and man-machine techniques. This system provides a means for superimposing the scenes of feature information in 3-D over imagery for updating. It also allows for real-time operator interaction between a monoscopic digital imagery display, a digital map display, a stereoscopic digital imagery display and automatically detected feature changes for transferring 3-D data from one coordinate system's frame of reference to another for updating the scene simulation database. It is an advanced, state-of-the-art means for implementing a modular, 3-D scene database maintenance capability, where original digital or converted-to-digital analog source imagery is used as a basic input to perform accurate updating.

  4. Bore-Sight Calibration of Multiple Laser Range Finders for Kinematic 3D Laser Scanning Systems

    PubMed Central

    Jung, Jaehoon; Kim, Jeonghyun; Yoon, Sanghyun; Kim, Sangmin; Cho, Hyoungsig; Kim, Changjae; Heo, Joon

    2015-01-01

    The Simultaneous Localization and Mapping (SLAM) technique has been used for autonomous navigation of mobile systems; now, its applications have been extended to 3D data acquisition of indoor environments. In order to reconstruct 3D scenes of indoor space, the kinematic 3D laser scanning system, developed herein, carries three laser range finders (LRFs): one is mounted horizontally for system-position correction and the other two are mounted vertically to collect 3D point-cloud data of the surrounding environment along the system’s trajectory. However, the kinematic laser scanning results can be impaired by errors resulting from sensor misalignment. In the present study, the bore-sight calibration of multiple LRF sensors was performed using a specially designed double-deck calibration facility, which is composed of two half-circle-shaped aluminum frames. Moreover, in order to automatically achieve point-to-point correspondences between a scan point and the target center, a V-shaped target was designed as well. The bore-sight calibration parameters were estimated by a constrained least squares method, which iteratively minimizes the weighted sum of squares of residuals while constraining some highly-correlated parameters. The calibration performance was analyzed by means of a correlation matrix. After calibration, the visual inspection of mapped data and residual calculation confirmed the effectiveness of the proposed calibration approach. PMID:25946627

  5. IGUANA: a high-performance 2D and 3D visualisation system

    NASA Astrophysics Data System (ADS)

    Alverson, G.; Eulisse, G.; Muzaffar, S.; Osborne, I.; Taylor, L.; Tuura, L. A.

    2004-11-01

    The IGUANA project has developed visualisation tools for multiple high-energy experiments. At the core of IGUANA is a generic, high-performance visualisation system based on OpenInventor and OpenGL. This paper describes the back-end and a feature-rich 3D visualisation system built on it, as well as a new 2D visualisation system that can automatically generate 2D views from 3D data, for example to produce R/Z or X/Y detector displays from existing 3D display with little effort. IGUANA has collaborated with the open-source gl2ps project to create a high-quality vector postscript output that can produce true vector graphics output from any OpenGL 2D or 3D display, complete with surface shading and culling of invisible surfaces. We describe how it works. We also describe how one can measure the memory and performance costs of various OpenInventor constructs and how to test scene graphs. We present good patterns to follow and bad patterns to avoid. We have added more advanced tools such as per-object clipping, slicing, lighting or animation, as well as multiple linked views with OpenInventor, and describe them in this paper. We give details on how to edit object appearance efficiently and easily, and even dynamically as a function of object properties, with instant visual feedback to the user.

  6. 3D homogeneity study in PMMA layers using a Fourier domain OCT system

    NASA Astrophysics Data System (ADS)

    Briones-R., Manuel de J.; Torre-Ibarra, Manuel H. De La; Tavera, Cesar G.; Luna H., Juan M.; Mendoza-Santoyo, Fernando

    2016-11-01

    Micro-metallic particles embedded in polymers are now widely used in several industrial applications in order to modify the mechanical properties of the bulk. A uniform distribution of these particles inside the polymers is highly desired for instance, when a biological backscattering is simulated or a bio-framework is designed. A 3D Fourier domain optical coherence tomography system to detect the polymer's internal homogeneity is proposed. This optical system has a 2D camera sensor array that records a fringe pattern used to reconstruct with a single shot the tomographic image of the sample. The system gathers the full 3D tomographic and optical phase information during a controlled deformation by means of a motion linear stage. This stage avoids the use of expensive tilting stages, which in addition are commonly controlled by piezo drivers. As proof of principle, a series of different deformations were proposed to detect the uniform or non-uniform internal deposition of copper micro particles. The results are presented as images coming from the 3D tomographic micro reconstruction of the samples, and the 3D optical phase information that identifies the in-homogeneity regions within the Poly methyl methacrylate (PMMA) volume.

  7. Nondestructive optical testing of 3D disperse systems with micro- and nano-particles

    NASA Astrophysics Data System (ADS)

    Bezrukova, Alexandra G.

    2005-04-01

    Nondestructive testing and analysis of three-dimensional (3D) disperse systems (DS) with micro- and nano-particles of different natures, using a complex of compatible optical methods, can provide further progress in on-line control of water and air. The simultaneous analysis of 3D-DS by refractometry, absorbance, fluorescence and different types of light scattering can help in elaborating sensing elements for specific impurity control. In our research we have investigated, with a complex of optical methods, different 3D-DS such as: proteins, nucleoproteids, lipoproteids, liposomes, viruses, virosomes, lipid emulsions, blood substitutes, latexes, liquid crystals, biological cells of various forms and sizes (including bacterial cells), metallic powders, clays, kimberlites, zeolites, oils, crude oils, etc., and mixtures -- proteins with nucleic acids, liposomes and viruses, liquid crystals with surfactants, mixtures of clay with bacterial cells, samples of natural and water-supply waters, etc. This experience suggests that the set of so-called second-class optical parameters is unique for each 3D-DS. In other words, each DS can be characterized by an n-dimensional vector in an n-dimensional space of optical parameters. Mixtures can be considered as polycomponent and polymodal 3D-DS (such as natural water and air). Due to the fusion of various optical data it is possible, using information-statistical theory, to solve the inverse physical problem of detecting the presence of impurities in mixtures (viruses, bacteria, oil, metallic particles, etc.), and in this case the polymodality of the particle size distribution is not an obstacle. The bank of optical data for 3D-DS is the basis for analysis by the information-statistical method.

  8. Low Cost Vision Based Personal Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Amami, M. M.; Smith, M. J.; Kokkas, N.

    2014-03-01

    Mobile mapping systems (MMS) can be used for several purposes, such as transportation, highway infrastructure mapping and GIS data collection. However, the acceptance of these systems is not widespread and their use is still limited due to the high cost and the dependency on the Global Navigation Satellite System (GNSS). A low-cost vision-based personal MMS has been produced with the aim of overcoming these limitations. The system has been designed to depend mainly on cameras, using low-cost GNSS and inertial sensors to provide the bundle adjustment solution with initial values. The system has the potential to be used indoors and outdoors. It has been tested indoors and outdoors with different GPS coverage, surrounding features, and narrow and curvy paths. Tests show that the system is able to work in such environments, providing 3D coordinates with better than 10 cm accuracy.

  9. Volumetric imaging system for the ionosphere (VISION)

    NASA Astrophysics Data System (ADS)

    Dymond, Kenneth F.; Budzien, Scott A.; Nicholas, Andrew C.; Thonnard, Stefan E.; Fortna, Clyde B.

    2002-01-01

    The Volumetric Imaging System for the Ionosphere (VISION) is designed to use limb and nadir images to reconstruct the three-dimensional distribution of electrons over a 1000 km wide by 500 km high slab beneath the satellite with 10 km x 10 km x 10 km voxels. The primary goal of the VISION is to map and monitor global and mesoscale (> 10 km) electron density structures, such as the Appleton anomalies and field-aligned irregularity structures. The VISION consists of three UV limb imagers, two UV nadir imagers, a dual frequency Global Positioning System (GPS) receiver, and a coherently emitting three frequency radio beacon. The limb imagers will observe the O II 83.4 nm line (daytime electron density), O I 135.6 nm line (nighttime electron density and daytime O density), and the N2 Lyman-Birge-Hopfield (LBH) bands near 143.0 nm (daytime N2 density). The nadir imagers will observe the O I 135.6 nm line (nighttime electron density and daytime O density) and the N2 LBH bands near 143.0 nm (daytime N2 density). The GPS receiver will monitor the total electron content between the satellite containing the VISION and the GPS constellation. The three frequency radio beacon will be used with ground-based receiver chains to perform computerized radio tomography below the satellite containing the VISION. The measurements made using the two radio frequency instruments will be used to validate the VISION UV measurements.

  10. Automatic 3D power line reconstruction of multi-angular imaging power line inspection system

    NASA Astrophysics Data System (ADS)

    Zhang, Wuming; Yan, Guangjian; Wang, Ning; Li, Qiaozhi; Zhao, Wei

    2007-06-01

    We develop a multi-angular imaging power line inspection system. Its main objective is to monitor the relative distance between the high voltage power line and surrounding objects, and to alert if the warning threshold is exceeded. Our multi-angular imaging power line inspection system generates a DSM of the power line corridor, which comprises the ground surface and ground objects, for example trees and houses. For the purpose of revealing dangerous regions, where ground objects are too close to the power line, 3D power line information should be extracted at the same time. In order to improve the automation level of extraction and reduce labour costs and human errors, an automatic 3D power line reconstruction method is proposed and implemented. It is achieved by using the epipolar constraint and prior knowledge of the pole tower's height. After that, the 3D power line information can be obtained by space intersection using the homologous projections found. The flight experiment results show that the proposed method can successfully reconstruct the 3D power line, and the measurement accuracy of the relative distance satisfies the user requirement of 0.5 m.

  11. Color LCoS-based full-color electro-holographic 3D display system

    NASA Astrophysics Data System (ADS)

    Moon, Jae-Woong; Lee, Dong-Whi; Kim, Seung-Cheol; Kim, Eun-Soo

    2005-05-01

    In this paper, a new color LCoS (liquid crystal on silicon)-based holographic full-color 3D display system is proposed. As the color LCoS SLM can produce a full-color image pattern using a color wheel, only one LCoS panel is required in this approach for full-color reconstruction of a 3D object. In the proposed method, each color fringe pattern is generated and tinted with the corresponding color beam. The R, G, B fringe patterns are mixed and displayed on the color LCoS SLM. The red fringe pattern is then diffracted during the red state of the color wheel, and in the same manner the green/blue fringe patterns are diffracted during the green/blue states, so that a full-color electro-holographic 3D image can be easily reconstructed using simple optics. Experiments suggest the feasibility of implementing a new compact LCoS-based full-color electro-holographic 3D video display system.

  12. CELSS-3D: a broad computer model simulating a controlled ecological life support system.

    PubMed

    Schneegurt, M A; Sherman, L A

    1997-01-01

    CELSS-3D is a dynamic, deterministic, and discrete computer simulation of a controlled ecological life support system (CELSS) focusing on biological issues. A series of linear difference equations within a graphic-based modeling environment, the IThink program, was used to describe a modular CELSS system. The overall model included submodels for crop growth chambers, food storage reservoirs, the human crew, a cyanobacterial growth chamber, a waste processor, fixed nitrogen reservoirs, and the atmospheric gases, CO2, O2, and N2. The primary process variable was carbon, although oxygen and nitrogen flows were also modeled. Most of the input data used in CELSS-3D were from published sources. A separate linear optimization program, What'sBest!, was used to compare options for the crew's vegetarian diet. CELSS-3D simulations were run for the equivalent of 3 years with a 1-h time interval. Output from simulations run under nominal conditions was used to illustrate dynamic changes in the concentrations of atmospheric gases. The modular design of CELSS-3D will allow other configurations and various failure scenarios to be tested and compared.

  13. CELSS-3D: a broad computer model simulating a controlled ecological life support system.

    PubMed

    Schneegurt, M A; Sherman, L A

    1997-01-01

    CELSS-3D is a dynamic, deterministic, and discrete computer simulation of a controlled ecological life support system (CELSS) focusing on biological issues. A series of linear difference equations within a graphic-based modeling environment, the IThink program, was used to describe a modular CELSS system. The overall model included submodels for crop growth chambers, food storage reservoirs, the human crew, a cyanobacterial growth chamber, a waste processor, fixed nitrogen reservoirs, and the atmospheric gases, CO2, O2, and N2. The primary process variable was carbon, although oxygen and nitrogen flows were also modeled. Most of the input data used in CELSS-3D were from published sources. A separate linear optimization program, What'sBest!, was used to compare options for the crew's vegetarian diet. CELSS-3D simulations were run for the equivalent of 3 years with a 1-h time interval. Output from simulations run under nominal conditions was used to illustrate dynamic changes in the concentrations of atmospheric gases. The modular design of CELSS-3D will allow other configurations and various failure scenarios to be tested and compared. PMID:11540449
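
    In the spirit of the linear difference equations and 1-h time step described in these records, the toy sketch below advances a two-compartment carbon balance between crew respiration and crop uptake. All rates and initial values are invented for illustration and are not taken from CELSS-3D.

```python
# A toy discrete-time (1-hour step) carbon balance between crew respiration and a
# crop chamber, in the spirit of a linear-difference-equation CELSS model.  All
# rates below are made-up illustrative numbers, not values from CELSS-3D.
HOURS = 24 * 30                      # simulate one month
crew_co2_kg_per_h = 0.040            # assumed crew CO2 production
crop_uptake_frac_per_h = 0.002       # assumed fractional CO2 uptake by the crops

atm_co2 = 5.0                        # kg of CO2 in the cabin atmosphere
biomass_c = 0.0                      # kg of carbon fixed in crop biomass
history = []
for _ in range(HOURS):
    uptake = crop_uptake_frac_per_h * atm_co2
    atm_co2 = atm_co2 + crew_co2_kg_per_h - uptake   # linear difference equation
    biomass_c += uptake * (12.0 / 44.0)              # carbon fraction of CO2
    history.append(atm_co2)

print(f"CO2 after 30 days: {history[-1]:.2f} kg (started at 5.00 kg)")
```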

  14. GeoCube: A 3D mineral resources quantitative prediction and assessment system

    NASA Astrophysics Data System (ADS)

    Li, Ruixi; Wang, Gongwen; Carranza, Emmanuel John Muico

    2016-04-01

    This paper introduces a software system (GeoCube) for three dimensional (3D) extraction and integration of exploration criteria from spatial data. The software system contains four key modules: (1) Import and Export, supporting many formats from commercial 3D geological modeling software and offering various export options; (2) pre-process, containing basic statistics and fractal/multi-fractal methods (concentration-volume (C-V) fractal method) for extraction of exploration criteria from spatial data (i.e., separation of geological, geochemical and geophysical anomalies from background values in 3D space); (3) assessment, supporting five data-driven integration methods (viz., information entropy, logistic regression, ordinary weights of evidence, weighted weights of evidence, boost weights of evidence) for integration of exploration criteria; and (4) post-process, for classifying integration outcomes into several levels based on mineralization potentiality. The Nanihu Mo (W) camp (5.0 km×4.0 km×2.7 km) of the Luanchuan region was used as a case study. The results show that GeoCube can enhance the use of 3D geological modeling to store, retrieve, process, display, analyze and integrate exploration criteria. Furthermore, it was found that the ordinary weights of evidence, boost weights of evidence and logistic regression methods showed superior performance as integration tools for exploration targeting in this case study.
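
    The ordinary weights-of-evidence integration named above can be illustrated for a single binary evidence layer, as sketched below on a synthetic voxel grid; the formulas for W+, W-, and the contrast are standard, but the data and thresholds are assumptions, not GeoCube outputs.

```python
import numpy as np

def weights_of_evidence(evidence, deposit):
    """Ordinary weights of evidence for one binary evidence layer.

    evidence, deposit : boolean arrays over the same voxel grid.
    Returns (W_plus, W_minus, contrast) with
      W+ = ln[ P(B|D) / P(B|~D) ],  W- = ln[ P(~B|D) / P(~B|~D) ],  C = W+ - W-.
    """
    b, d = evidence.ravel(), deposit.ravel()
    p_b_given_d = (b & d).sum() / d.sum()
    p_b_given_nd = (b & ~d).sum() / (~d).sum()
    w_plus = np.log(p_b_given_d / p_b_given_nd)
    w_minus = np.log((1 - p_b_given_d) / (1 - p_b_given_nd))
    return w_plus, w_minus, w_plus - w_minus

# Synthetic voxel grid: the evidence layer is deliberately enriched around deposits.
rng = np.random.default_rng(3)
deposit = rng.random((40, 40, 27)) < 0.01
evidence = rng.random((40, 40, 27)) < np.where(deposit, 0.8, 0.2)
print([round(v, 2) for v in weights_of_evidence(evidence, deposit)])
```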

  15. Information capacity of electronic vision systems

    NASA Astrophysics Data System (ADS)

    Taubkin, Igor I.; Trishenkov, Mikhail A.

    1996-10-01

    The comparison of various electronic-optical vision systems has been conducted based on the criterion of ultimate information capacity, C, limited by fluctuations of the flux of quanta. The information capacity of daylight, night, and thermal vision systems is determined first of all by the number of picture elements, M, in the optical system. Each element, under a sufficient level of irradiation, can transfer about one byte of information during the standard frame time, so C ≈ M bytes per frame. The value of the proportionality factor, one byte per picture element, refers to systems of daylight and thermal vision, in which the photocharge in a unit cell of the imager is limited by storage capacity; in general it varies within a small interval, from 0.5 byte per frame for night vision systems to 2 bytes per frame for ideal thermal imagers. The ultimate specific information capacity, C*, of electronic vision systems under low irradiation levels rises with increasing density of optical channels until the number of irradiance gradations that can be distinguished becomes less than two in each channel. In this case, the maximum value of C* turns out to be proportional to the flux of quanta coming from the object under observation. Under a high level of irradiation, C* is limited by diffraction effects and amounts to 1/λ² bytes per cm² per frame.

  16. Laser cutting of irregular shape object based on stereo vision laser galvanometric scanning system

    NASA Astrophysics Data System (ADS)

    Qi, Li; Zhang, Yixin; Wang, Shun; Tang, Zhiqiang; Yang, Huan; Zhang, Xuping

    2015-05-01

    Irregular shape objects with different 3-dimensional (3D) appearances are difficult to be shaped into customized uniform pattern by current laser machining approaches. A laser galvanometric scanning system (LGS) could be a potential candidate since it can easily achieve path-adjustable laser shaping. However, without knowing the actual 3D topography of the object, the processing result may still suffer from 3D shape distortion. It is desirable to have a versatile auxiliary tool that is capable of generating 3D-adjusted laser processing path by measuring the 3D geometry of those irregular shape objects. This paper proposed the stereo vision laser galvanometric scanning system (SLGS), which takes the advantages of both the stereo vision solution and conventional LGS system. The 3D geometry of the object obtained by the stereo cameras is used to guide the scanning galvanometers for 3D-shape-adjusted laser processing. In order to achieve precise visual-servoed laser fabrication, these two independent components are integrated through a system calibration method using plastic thin film target. The flexibility of SLGS has been experimentally demonstrated by cutting duck feathers for badminton shuttle manufacture.

  17. Pipeline inwall 3D measurement system based on the cross structured light

    NASA Astrophysics Data System (ADS)

    Shen, Da; Lin, Zhipeng; Xue, Lei; Zheng, Qiang; Wang, Zichi

    2014-01-01

    In order to accurately detect defects on the pipeline inwall, this paper proposes a measurement system made up of cross structured light, a single CCD camera, a smart car, etc. Based on structured light measurement technology, this paper mainly introduces the structured light measurement system, the imaging mathematical model, and the parameters and method of camera calibration. Using these measuring principles and methods, the camera on the remote control car platform achieves continuous shooting of objects and real-time processing, and uses the established model to extract 3D point cloud coordinates and reconstruct pipeline defects, making automatic 3D measurement possible and verifying the correctness and feasibility of this system. It has been found that this system achieves high measurement accuracy in practice.

  18. Prediction of parallel NIKE3D performance on the KSR1 system

    SciTech Connect

    Su, P.S.; Zacharia, T.; Fulton, R.E.

    1995-05-01

    The finite element method is one of the bases for numerical solutions to engineering problems. Complex engineering problems analyzed with finite elements typically imply excessively large computation times. Parallel supercomputers have the potential to significantly increase calculation speeds in order to meet these computational requirements. This paper predicts parallel NIKE3D performance on the Kendall Square Research (KSR1) system. The first part of the prediction is based on the implementation of a parallel Cholesky (U^T DU) matrix decomposition algorithm through actual computations on the KSR1 multiprocessor system, with 64 processors, at Oak Ridge National Laboratory. The other predictions are based on actual computations for parallel element matrix generation, parallel global stiffness matrix assembly, and parallel forward/backward substitution on the BBN TC2000 multiprocessor system at Lawrence Livermore National Laboratory. The preliminary results indicate that parallel NIKE3D performance can be attractive in local/shared-memory multiprocessor system environments.
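
    For readers unfamiliar with the factorization being parallelized, the sketch below gives a serial reference implementation of the U^T D U (equivalently LDL^T) decomposition for a symmetric positive-definite matrix; it illustrates the arithmetic only and is unrelated to the KSR1 parallel implementation.

```python
import numpy as np

def utdu_factor(A):
    """Serial U^T D U (equivalently L D L^T) factorisation of a symmetric
    positive-definite matrix: A = U.T @ np.diag(d) @ U with U unit upper triangular.
    This is only a reference for the operation parallelised in the paper."""
    n = A.shape[0]
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        d[j] = A[j, j] - np.sum(d[:j] * U[:j, j] ** 2)
        for k in range(j + 1, n):
            U[j, k] = (A[j, k] - np.sum(d[:j] * U[:j, j] * U[:j, k])) / d[j]
    return U, d

# Quick check on a small SPD matrix assembled like a stiffness matrix.
rng = np.random.default_rng(4)
B = rng.normal(size=(5, 5))
A = B @ B.T + 5 * np.eye(5)
U, d = utdu_factor(A)
print(np.allclose(U.T @ np.diag(d) @ U, A))   # True
```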

  19. Micro-precise spatiotemporal delivery system embedded in 3D printing for complex tissue regeneration.

    PubMed

    Tarafder, Solaiman; Koch, Alia; Jun, Yena; Chou, Conrad; Awadallah, Mary R; Lee, Chang H

    2016-06-01

    Three dimensional (3D) printing has emerged as an efficient tool for tissue engineering and regenerative medicine, given its advantages for constructing custom-designed scaffolds with tunable microstructure/physical properties. Here we developed a micro-precise spatiotemporal delivery system embedded in 3D printed scaffolds. PLGA microspheres (μS) were encapsulated with growth factors (GFs) and then embedded inside PCL microfibers that constitute custom-designed 3D scaffolds. Given the substantial difference in the melting points between PLGA and PCL and their low heat conductivity, μS were able to maintain its original structure while protecting GF's bioactivities. Micro-precise spatial control of multiple GFs was achieved by interchanging dispensing cartridges during a single printing process. Spatially controlled delivery of GFs, with a prolonged release, guided formation of multi-tissue interfaces from bone marrow derived mesenchymal stem/progenitor cells (MSCs). To investigate efficacy of the micro-precise delivery system embedded in 3D printed scaffold, temporomandibular joint (TMJ) disc scaffolds were fabricated with micro-precise spatiotemporal delivery of CTGF and TGFβ3, mimicking native-like multiphase fibrocartilage. In vitro, TMJ disc scaffolds spatially embedded with CTGF/TGFβ3-μS resulted in formation of multiphase fibrocartilaginous tissues from MSCs. In vivo, TMJ disc perforation was performed in rabbits, followed by implantation of CTGF/TGFβ3-μS-embedded scaffolds. After 4 wks, CTGF/TGFβ3-μS embedded scaffolds significantly improved healing of the perforated TMJ disc as compared to the degenerated TMJ disc in the control group with scaffold embedded with empty μS. In addition, CTGF/TGFβ3-μS embedded scaffolds significantly prevented arthritic changes on TMJ condyles. In conclusion, our micro-precise spatiotemporal delivery system embedded in 3D printing may serve as an efficient tool to regenerate complex and inhomogeneous tissues. PMID

  20. Micro-precise spatiotemporal delivery system embedded in 3D printing for complex tissue regeneration.

    PubMed

    Tarafder, Solaiman; Koch, Alia; Jun, Yena; Chou, Conrad; Awadallah, Mary R; Lee, Chang H

    2016-04-25

    Three dimensional (3D) printing has emerged as an efficient tool for tissue engineering and regenerative medicine, given its advantages for constructing custom-designed scaffolds with tunable microstructure/physical properties. Here we developed a micro-precise spatiotemporal delivery system embedded in 3D printed scaffolds. PLGA microspheres (μS) were encapsulated with growth factors (GFs) and then embedded inside PCL microfibers that constitute custom-designed 3D scaffolds. Given the substantial difference in the melting points between PLGA and PCL and their low heat conductivity, μS were able to maintain their original structure while protecting the GFs' bioactivity. Micro-precise spatial control of multiple GFs was achieved by interchanging dispensing cartridges during a single printing process. Spatially controlled delivery of GFs, with a prolonged release, guided formation of multi-tissue interfaces from bone marrow derived mesenchymal stem/progenitor cells (MSCs). To investigate efficacy of the micro-precise delivery system embedded in 3D printed scaffold, temporomandibular joint (TMJ) disc scaffolds were fabricated with micro-precise spatiotemporal delivery of CTGF and TGFβ3, mimicking native-like multiphase fibrocartilage. In vitro, TMJ disc scaffolds spatially embedded with CTGF/TGFβ3-μS resulted in formation of multiphase fibrocartilaginous tissues from MSCs. In vivo, TMJ disc perforation was performed in rabbits, followed by implantation of CTGF/TGFβ3-μS-embedded scaffolds. After 4 wks, CTGF/TGFβ3-μS embedded scaffolds significantly improved healing of the perforated TMJ disc as compared to the degenerated TMJ disc in the control group with scaffold embedded with empty μS. In addition, CTGF/TGFβ3-μS embedded scaffolds significantly prevented arthritic changes on TMJ condyles. In conclusion, our micro-precise spatiotemporal delivery system embedded in 3D printing may serve as an efficient tool to regenerate complex and inhomogeneous tissues.

  1. Development and characterization of 3D-printed feed spacers for spiral wound membrane systems.

    PubMed

    Siddiqui, Amber; Farhat, Nadia; Bucs, Szilárd S; Linares, Rodrigo Valladares; Picioreanu, Cristian; Kruithof, Joop C; van Loosdrecht, Mark C M; Kidwell, James; Vrouwenvelder, Johannes S

    2016-03-15

    Feed spacers are important for the impact of biofouling on the performance of spiral-wound reverse osmosis (RO) and nanofiltration (NF) membrane systems. The objective of this study was to propose a strategy for developing, characterizing, and testing of feed spacers by numerical modeling, three-dimensional (3D) printing of feed spacers and experimental membrane fouling simulator (MFS) studies. The results of numerical modeling on the hydrodynamic behavior of various feed spacer geometries suggested that the impact of spacers on hydrodynamics and biofouling can be improved. A good agreement was found for the modeled and measured relationship between linear flow velocity and pressure drop for feed spacers with the same geometry, indicating that modeling can serve as the first step in spacer characterization. An experimental comparison study of a feed spacer currently applied in practice and a 3D printed feed spacer with the same geometry showed (i) similar hydrodynamic behavior, (ii) similar pressure drop development with time and (iii) similar biomass accumulation during MFS biofouling studies, indicating that 3D printing technology is an alternative strategy for development of thin feed spacers with a complex geometry. Based on the numerical modeling results, a modified feed spacer with low pressure drop was selected for 3D printing. The comparison study of the feed spacer from practice and the modified geometry 3D printed feed spacer established that the 3D printed spacer had (i) a lower pressure drop during hydrodynamic testing, (ii) a lower pressure drop increase in time with the same accumulated biomass amount, indicating that modifying feed spacer geometries can reduce the impact of accumulated biomass on membrane performance. The combination of numerical modeling of feed spacers and experimental testing of 3D printed feed spacers is a promising strategy (rapid, low cost and representative) to develop advanced feed spacers aiming to reduce the impact of

  2. Development and characterization of 3D-printed feed spacers for spiral wound membrane systems.

    PubMed

    Siddiqui, Amber; Farhat, Nadia; Bucs, Szilárd S; Linares, Rodrigo Valladares; Picioreanu, Cristian; Kruithof, Joop C; van Loosdrecht, Mark C M; Kidwell, James; Vrouwenvelder, Johannes S

    2016-03-15

    Feed spacers are important for the impact of biofouling on the performance of spiral-wound reverse osmosis (RO) and nanofiltration (NF) membrane systems. The objective of this study was to propose a strategy for developing, characterizing, and testing of feed spacers by numerical modeling, three-dimensional (3D) printing of feed spacers and experimental membrane fouling simulator (MFS) studies. The results of numerical modeling on the hydrodynamic behavior of various feed spacer geometries suggested that the impact of spacers on hydrodynamics and biofouling can be improved. A good agreement was found for the modeled and measured relationship between linear flow velocity and pressure drop for feed spacers with the same geometry, indicating that modeling can serve as the first step in spacer characterization. An experimental comparison study of a feed spacer currently applied in practice and a 3D printed feed spacer with the same geometry showed (i) similar hydrodynamic behavior, (ii) similar pressure drop development with time and (iii) similar biomass accumulation during MFS biofouling studies, indicating that 3D printing technology is an alternative strategy for development of thin feed spacers with a complex geometry. Based on the numerical modeling results, a modified feed spacer with low pressure drop was selected for 3D printing. The comparison study of the feed spacer from practice and the modified geometry 3D printed feed spacer established that the 3D printed spacer had (i) a lower pressure drop during hydrodynamic testing, (ii) a lower pressure drop increase in time with the same accumulated biomass amount, indicating that modifying feed spacer geometries can reduce the impact of accumulated biomass on membrane performance. The combination of numerical modeling of feed spacers and experimental testing of 3D printed feed spacers is a promising strategy (rapid, low cost and representative) to develop advanced feed spacers aiming to reduce the impact of

  3. Flight Testing an Integrated Synthetic Vision System

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Arthur, Jarvis J., III; Bailey, Randall E.; Prinzel, Lawrence J., III

    2005-01-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications to eliminate low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance for transport aircraft. The SVS concept being developed at NASA encompasses the integration of tactical and strategic Synthetic Vision Display Concepts (SVDC) with Runway Incursion Prevention System (RIPS) alerting and display concepts, real-time terrain database integrity monitoring equipment (DIME), and Enhanced Vision Systems (EVS) and/or improved Weather Radar for real-time object detection and database integrity monitoring. A flight test evaluation was jointly conducted (in July and August 2004) by NASA Langley Research Center and an industry partner team under NASA's Aviation Safety and Security, Synthetic Vision System project. A Gulfstream GV aircraft was flown over a 3-week period in the Reno/Tahoe International Airport (NV) local area and an additional 3-week period in the Wallops Flight Facility (VA) local area to evaluate integrated Synthetic Vision System concepts. The enabling technologies (RIPS, EVS and DIME) were integrated into the larger SVS concept design. This paper presents experimental methods and the high level results of this flight test.

  4. Flight testing an integrated synthetic vision system

    NASA Astrophysics Data System (ADS)

    Kramer, Lynda J.; Arthur, Jarvis J., III; Bailey, Randall E.; Prinzel, Lawrence J., III

    2005-05-01

    NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications to eliminate low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance for transport aircraft. The SVS concept being developed at NASA encompasses the integration of tactical and strategic Synthetic Vision Display Concepts (SVDC) with Runway Incursion Prevention System (RIPS) alerting and display concepts, real-time terrain database integrity monitoring equipment (DIME), and Enhanced Vision Systems (EVS) and/or improved Weather Radar for real-time object detection and database integrity monitoring. A flight test evaluation was jointly conducted (in July and August 2004) by NASA Langley Research Center and an industry partner team under NASA's Aviation Safety and Security, Synthetic Vision System project. A Gulfstream G-V aircraft was flown over a 3-week period in the Reno/Tahoe International Airport (NV) local area and an additional 3-week period in the Wallops Flight Facility (VA) local area to evaluate integrated Synthetic Vision System concepts. The enabling technologies (RIPS, EVS and DIME) were integrated into the larger SVS concept design. This paper presents experimental methods and the high level results of this flight test.

  5. 3D interactive augmented reality-enhanced digital learning systems for mobile devices

    NASA Astrophysics Data System (ADS)

    Feng, Kai-Ten; Tseng, Po-Hsuan; Chiu, Pei-Shuan; Yang, Jia-Lin; Chiu, Chun-Jie

    2013-03-01

    With the enhanced processing capability of mobile platforms, augmented reality (AR) has been considered a promising technology for achieving enhanced user experiences (UX). Augmented reality imposes virtual information, e.g., videos and images, onto a live-view digital display. UX of the real-world environment via the display can be effectively enhanced with the adoption of interactive AR technology. Enhancement of UX can be beneficial for digital learning systems. There are existing research works based on AR targeting the design of e-learning systems. However, none of these works focuses on providing three-dimensional (3-D) object modeling for enhanced UX based on interactive AR techniques. In this paper, the 3-D interactive augmented reality-enhanced learning (IARL) systems are proposed to provide enhanced UX for digital learning. The proposed IARL systems consist of two major components, including the markerless pattern recognition (MPR) for 3-D models and velocity-based object tracking (VOT) algorithms. A realistic implementation of the proposed IARL system is conducted on Android-based mobile platforms. UX in digital learning can be greatly improved with the adoption of the proposed IARL systems.

  6. In vivo validation of a 3D ultrasound system for imaging the lateral ventricles of neonates

    NASA Astrophysics Data System (ADS)

    Kishimoto, J.; Fenster, A.; Chen, N.; Lee, D.; de Ribaupierre, S.

    2014-03-01

    Dilated lateral ventricles in neonates can be due to many different causes, such as brain loss or congenital malformation; however, the main cause is hydrocephalus, which is the accumulation of fluid within the ventricular system. Hydrocephalus can raise intracranial pressure resulting in secondary brain damage, and up to 25% of patients with severely enlarged ventricles have epilepsy in later life. Ventricle enlargement is clinically monitored using 2D US through the fontanels. The sensitivity of 2D US to dilation is poor because it cannot provide accurate measurements of irregular volumes such as the ventricles, so most clinical evaluations are of a qualitative nature. We developed a 3D US system to image the cerebral ventricles of neonates within the confines of incubators that can be easily translated to more open environments. Ventricle volumes can be segmented from these images, giving a quantitative volumetric measurement of ventricle enlargement without moving the patient into an imaging facility. In this paper, we report on in vivo validation studies: 1) comparing 3D US ventricle volumes before and after clinically necessary interventions removing CSF, and 2) comparing 3D US ventricle volumes to those from MRI. Post-intervention ventricle volumes were less than pre-intervention measurements for all patients and all interventions. We found high correlations (R = 0.97) between the difference in ventricle volume and the reported volume of removed CSF, with the slope not significantly different from 1 (p < 0.05). Comparisons between ventricle volumes from 3D US and MR images taken within 4 (±3.8) days of each other did not show a significant difference (p = 0.44) between 3D US and MRI by a paired t-test.

  7. Display of real-time 3D sensor data in a DVE system

    NASA Astrophysics Data System (ADS)

    Völschow, Philipp; Münsterer, Thomas; Strobel, Michael; Kuhn, Michael

    2016-05-01

    This paper describes the implementation of displaying real-time processed LiDAR 3D data in a DVE pilot assistance system. The goal is to present to the pilot a comprehensive image of the surrounding world without misleading or cluttering information. 3D data which can be attributed, i.e. classified, to terrain or to predefined obstacle classes is depicted differently from data belonging to elevated objects which could not be classified. Display techniques may differ between head-down and head-up displays to avoid cluttering the outside view in the latter case. While terrain is shown as shaded surfaces with grid structures or as grid structures alone, respectively, classified obstacles are typically displayed with obstacle symbols only. Data from objects elevated above ground are displayed as shaded 3D points in space. In addition, the displayed 3D points are accumulated over a certain time frame, which allows a cohesive structure to be displayed while also rendering moving objects correctly. Finally, color coding or texturing can be applied based on known terrain features like land use.

  8. A 3D human neural cell culture system for modeling Alzheimer’s disease

    PubMed Central

    Kim, Young Hye; Choi, Se Hoon; D’Avanzo, Carla; Hebisch, Matthias; Sliwinski, Christopher; Bylykbashi, Enjana; Washicosky, Kevin J.; Klee, Justin B.; Brüstle, Oliver; Tanzi, Rudolph E.; Kim, Doo Yeon

    2015-01-01

    Stem cell technologies have facilitated the development of human cellular disease models that can be used to study pathogenesis and test therapeutic candidates. These models hold promise for complex neurological diseases such as Alzheimer’s disease (AD) because existing animal models have been unable to fully recapitulate all aspects of pathology. We recently reported the characterization of a novel three-dimensional (3D) culture system that exhibits key events in AD pathogenesis, including extracellular aggregation of β-amyloid and accumulation of hyperphosphorylated tau. Here we provide instructions for the generation and analysis of 3D human neural cell cultures, including the production of genetically modified human neural progenitor cells (hNPCs) with familial AD mutations, the differentiation of the hNPCs in a 3D matrix, and the analysis of AD pathogenesis. The 3D culture generation takes 1–2 days. The aggregation of β-amyloid is observed after 6 weeks of differentiation, followed by robust tau pathology after 10–14 weeks. PMID:26068894

  9. An object-oriented simulator for 3D digital breast tomosynthesis imaging system.

    PubMed

    Seyyedi, Saeed; Cengiz, Kubra; Kamasak, Mustafa; Yildirim, Isa

    2013-01-01

    Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast to detect breast cancer. Projections obtained with an X-ray source moving in a limited angle interval are used to reconstruct a 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for the 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of implementing different iterative and compressed sensing based reconstruction methods on 3D digital tomosynthesis data sets and phantom models. A user friendly graphical user interface (GUI) helps users to select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods including the algebraic reconstruction technique (ART) and total variation regularized reconstruction techniques (ART+TV) are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating the performances of the methods using mean structural similarity (MSSIM) values. PMID:24371468
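
    The record mentions ART and TV-regularized (ART+TV) reconstruction. Below is a hedged, generic sketch of one ART (Kaczmarz) sweep plus a small total-variation smoothing step applied to a 2D slice; it is not the simulator's C++ code, and the system matrix A, relaxation factor and step size are assumptions.

```python
import numpy as np

def art_sweep(A, b, x, relax=0.1):
    """One sweep of the Algebraic Reconstruction Technique (Kaczmarz updates).
    A : (n_rays, n_voxels) system matrix, b : measured projections, x : current image."""
    for i in range(A.shape[0]):
        ai = A[i]
        denom = ai @ ai
        if denom > 0:
            x = x + relax * (b[i] - ai @ x) / denom * ai
    return x

def tv_gradient_step(img2d, step=0.01, eps=1e-8):
    """Small gradient-descent step on a smoothed total-variation penalty of a 2D slice."""
    gx = np.diff(img2d, axis=0, append=img2d[-1:, :])
    gy = np.diff(img2d, axis=1, append=img2d[:, -1:])
    norm = np.sqrt(gx**2 + gy**2 + eps)
    div = (np.diff(gx / norm, axis=0, prepend=0) +
           np.diff(gy / norm, axis=1, prepend=0))
    # The TV gradient is -div(grad u / |grad u|), so descending TV moves along +div.
    return img2d + step * div
```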

  10. First 3D reconstruction of the rhizocephalan root system using MicroCT

    NASA Astrophysics Data System (ADS)

    Noever, Christoph; Keiler, Jonas; Glenner, Henrik

    2016-07-01

    Parasitic barnacles (Cirripedia: Rhizocephala) are highly specialized parasites of crustaceans. Instead of an alimentary tract for feeding they utilize a system of roots, which infiltrates the body of their hosts to absorb nutrients. Using X-ray micro computer tomography (MicroCT) and computer-aided 3D reconstruction, we document the spatial organization of this root system, the interna, inside the intact host and also demonstrate its use for morphological examinations of the parasite's reproductive part, the externa. This is the first 3D visualization of the unique root system of the Rhizocephala in situ, showing how it is related to the inner organs of the host. We investigated the interna of different parasitic barnacles of the family Peltogastridae, which are parasitic on anomuran crustaceans. Rhizocephalan parasites of pagurid hermit crabs and lithodid crabs were analysed in this study.

  11. Three-Dimensional Robotic Vision System

    NASA Technical Reports Server (NTRS)

    Nguyen, Thinh V.

    1989-01-01

    Stereoscopy and motion provide clues to outlines of objects. Digital image-processing system acts as "intelligent" automatic machine-vision system by processing views from stereoscopic television cameras into three-dimensional coordinates of moving object in view. Epipolar-line technique used to find corresponding points in stereoscopic views. Robotic vision system analyzes views from two television cameras to detect rigid three-dimensional objects and reconstruct numerically in terms of coordinates of corner points. Stereoscopy and effects of motion on two images complement each other in providing image-analyzing subsystem with clues to natures and locations of principal features.

  12. Development of Mobile Mapping System for 3D Road Asset Inventory

    PubMed Central

    Sairam, Nivedita; Nagarajan, Sudhagar; Ornitz, Scott

    2016-01-01

    Asset Management is an important component of an infrastructure project. A significant cost is involved in maintaining and updating the asset information. Data collection is the most time-consuming task in the development of an asset management system. In order to reduce the time and cost involved in data collection, this paper proposes a low cost Mobile Mapping System using an equipped laser scanner and cameras. First, the feasibility of low cost sensors for 3D asset inventory is discussed by deriving appropriate sensor models. Then, through calibration procedures, the respective alignments of the laser scanner, cameras, Inertial Measurement Unit and GPS (Global Positioning System) antenna are determined. The efficiency of this Mobile Mapping System is evaluated experimentally by mounting it on a truck and a golf cart. By using the derived sensor models, geo-referenced images and 3D point clouds are derived. After validating the quality of the derived data, the paper provides a framework to extract road assets both automatically and manually using techniques implementing RANSAC plane fitting and edge extraction algorithms. Then the scope of such extraction techniques along with a sample GIS (Geographic Information System) database structure for unified 3D asset inventory are discussed. PMID:26985897
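
    As an illustration of the RANSAC plane fitting mentioned for road-surface extraction, here is a generic sketch, not the authors' implementation; the iteration count and inlier tolerance are assumed parameters.

```python
import numpy as np

def ransac_plane(points, n_iters=500, tol=0.05, rng=np.random.default_rng(0)):
    """Fit a plane to a 3D point cloud with RANSAC.
    points : (N, 3) array; tol : inlier distance threshold in the cloud's units.
    Returns (normal, d, inlier_mask) for the plane n.x + d = 0."""
    best_inliers = None
    best_model = None
    n = len(points)
    for _ in range(n_iters):
        sample = points[rng.choice(n, 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample, try again
            continue
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)
        inliers = dist < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model[0], best_model[1], best_inliers
```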

  13. Development of Mobile Mapping System for 3D Road Asset Inventory.

    PubMed

    Sairam, Nivedita; Nagarajan, Sudhagar; Ornitz, Scott

    2016-01-01

    Asset Management is an important component of an infrastructure project. A significant cost is involved in maintaining and updating the asset information. Data collection is the most time-consuming task in the development of an asset management system. In order to reduce the time and cost involved in data collection, this paper proposes a low cost Mobile Mapping System using an equipped laser scanner and cameras. First, the feasibility of low cost sensors for 3D asset inventory is discussed by deriving appropriate sensor models. Then, through calibration procedures, the respective alignments of the laser scanner, cameras, Inertial Measurement Unit and GPS (Global Positioning System) antenna are determined. The efficiency of this Mobile Mapping System is evaluated experimentally by mounting it on a truck and a golf cart. By using the derived sensor models, geo-referenced images and 3D point clouds are derived. After validating the quality of the derived data, the paper provides a framework to extract road assets both automatically and manually using techniques implementing RANSAC plane fitting and edge extraction algorithms. Then the scope of such extraction techniques along with a sample GIS (Geographic Information System) database structure for unified 3D asset inventory are discussed. PMID:26985897

  14. Development of Mobile Mapping System for 3D Road Asset Inventory.

    PubMed

    Sairam, Nivedita; Nagarajan, Sudhagar; Ornitz, Scott

    2016-01-01

    Asset Management is an important component of an infrastructure project. A significant cost is involved in maintaining and updating the asset information. Data collection is the most time-consuming task in the development of an asset management system. In order to reduce the time and cost involved in data collection, this paper proposes a low cost Mobile Mapping System using an equipped laser scanner and cameras. First, the feasibility of low cost sensors for 3D asset inventory is discussed by deriving appropriate sensor models. Then, through calibration procedures, the respective alignments of the laser scanner, cameras, Inertial Measurement Unit and GPS (Global Positioning System) antenna are determined. The efficiency of this Mobile Mapping System is evaluated experimentally by mounting it on a truck and a golf cart. By using the derived sensor models, geo-referenced images and 3D point clouds are derived. After validating the quality of the derived data, the paper provides a framework to extract road assets both automatically and manually using techniques implementing RANSAC plane fitting and edge extraction algorithms. Then the scope of such extraction techniques along with a sample GIS (Geographic Information System) database structure for unified 3D asset inventory are discussed.

  15. 3D Image Acquisition System Based on Shape from Focus Technique

    PubMed Central

    Billiot, Bastien; Cointault, Frédéric; Journaux, Ludovic; Simon, Jean-Claude; Gouton, Pierre

    2013-01-01

    This paper describes the design of a 3D image acquisition system dedicated to natural complex scenes composed of randomly distributed objects with spatial discontinuities. In agronomic sciences, the 3D acquisition of natural scenes is difficult due to the complex nature of the scenes. Our system is based on the Shape from Focus technique initially used in the microscopic domain. We propose to adapt this technique to the macroscopic domain, and we detail the system as well as the image processing used to perform this technique. The Shape from Focus technique is a monocular and passive 3D acquisition method that resolves the occlusion problem affecting multi-camera systems. Indeed, this problem occurs frequently in natural complex scenes like agronomic scenes. The depth information is obtained by acting on optical parameters, mainly the depth of field. A focus measure is applied on a 2D image stack previously acquired by the system. Once this focus measure is computed, the depth map of the scene can be created. PMID:23591964
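
    A hedged sketch of the Shape from Focus principle described above: a focus measure (here a sum-modified-Laplacian, a common but assumed choice, not necessarily the one used by the authors) is evaluated on the image stack, and the depth map is taken as the focus position that maximizes the measure at each pixel.

```python
import numpy as np

def modified_laplacian(img):
    """Sum-modified-Laplacian focus measure, a common choice for Shape from Focus."""
    pad = np.pad(img.astype(float), 1, mode="edge")
    lap_x = np.abs(2 * pad[1:-1, 1:-1] - pad[1:-1, :-2] - pad[1:-1, 2:])
    lap_y = np.abs(2 * pad[1:-1, 1:-1] - pad[:-2, 1:-1] - pad[2:, 1:-1])
    return lap_x + lap_y

def depth_from_focus(stack, z_positions):
    """stack : (n_slices, H, W) images taken at known focus positions z_positions.
    Returns a per-pixel depth map by picking the slice with maximal focus measure."""
    focus = np.stack([modified_laplacian(im) for im in stack])
    best = np.argmax(focus, axis=0)            # index of the sharpest slice per pixel
    return np.asarray(z_positions)[best]
```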

  16. Near real-time stereo vision system

    NASA Technical Reports Server (NTRS)

    Anderson, Charles H. (Inventor); Matthies, Larry H. (Inventor)

    1993-01-01

    The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system is comprised of two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.
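
    For illustration only, a brute-force block-matching sketch of the least-squares (sum-of-squared-differences) correlation step on a single pyramid level; the window size and disparity range are assumptions, and the system described above implements this far more efficiently in its image-processing hardware.

```python
import numpy as np

def ssd_disparity(left, right, max_disp=16, win=3):
    """Brute-force SSD block matching on a single (e.g. Laplacian-pyramid) level.
    left, right : rectified 2D float arrays of equal shape; returns integer disparities.
    Deliberately simple and slow; intended only to show the cost being minimized."""
    H, W = left.shape
    disp = np.zeros((H, W), dtype=int)
    half = win // 2
    for y in range(half, H - half):
        for x in range(half + max_disp, W - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.sum((patch - right[y - half:y + half + 1,
                                           x - d - half:x - d + half + 1]) ** 2)
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```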

  17. Near real-time stereo vision system

    NASA Astrophysics Data System (ADS)

    Matthies, Larry H.; Anderson, Charles H.

    1991-12-01

    The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system is comprised of two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.

  18. An approach to 3D model fusion in GIS systems and its application in a future ECDIS

    NASA Astrophysics Data System (ADS)

    Liu, Tao; Zhao, Depeng; Pan, Mingyang

    2016-04-01

    Three-dimensional (3D) computer graphics technology is widely used in various areas and causes profound changes. As an information carrier, 3D models are becoming increasingly important. The use of 3D models greatly helps to improve cartographic expression and design. 3D models are more visually efficient, quicker and easier to understand, and they can express more detailed geographical information. However, it is hard to efficiently and precisely fuse 3D models in local systems. The purpose of this study is to propose an automatic and precise approach to fuse 3D models in geographic information systems (GIS). It is the basic premise for subsequent uses of 3D models in local systems, such as attribute searching, spatial analysis, and so on. The basic steps of our research are: (1) pose adjustment by principal component analysis (PCA); (2) silhouette extraction by simple mesh silhouette extraction and silhouette merger; (3) size adjustment; (4) position matching. Finally, we implement the above methods in our system Automotive Intelligent Chart (AIC) 3D Electronic Chart Display and Information Systems (ECDIS). The fusion approach we propose is a general method, and each calculation step is carefully designed. This approach solves the problem of cross-platform model fusion. 3D models can be from any source. They may be stored in the local cache or retrieved from the Internet, or may be manually created by different tools or automatically generated by different programs. The system can be any kind of 3D GIS system.
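
    Step (1) of the fusion pipeline, pose adjustment by PCA, can be sketched as follows. This is an illustrative version, not the AIC ECDIS code: the model is centred and rotated into its principal-axis frame, which gives a canonical orientation before silhouette extraction and matching.

```python
import numpy as np

def pca_pose_align(vertices):
    """Align a 3D model to its principal axes.
    vertices : (N, 3) array of mesh vertex positions.
    Returns the vertices expressed in the centred principal-axis frame."""
    centred = vertices - vertices.mean(axis=0)
    # Eigen-decomposition of the covariance gives the principal directions.
    cov = np.cov(centred, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]          # largest variance first
    R = eigvecs[:, order]
    if np.linalg.det(R) < 0:                   # keep a right-handed frame
        R[:, -1] *= -1
    return centred @ R
```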

  19. A comprehensive evaluation of the PRESAGE∕optical-CT 3D dosimetry system

    PubMed Central

    Sakhalkar, H. S.; Adamovics, J.; Ibbott, G.; Oldham, M.

    2009-01-01

    This work presents extensive investigations to evaluate the robustness (intradosimeter consistency and temporal stability of response), reproducibility, precision, and accuracy of a relatively new 3D dosimetry system comprising a leuco-dye doped plastic 3D dosimeter (PRESAGE) and a commercial optical-CT scanner (OCTOPUS 5× scanner from MGS Research, Inc). Four identical PRESAGE 3D dosimeters were created such that they were compatible with the Radiologic Physics Center (RPC) head-and-neck (H&N) IMRT credentialing phantom. Each dosimeter was irradiated with a rotationally symmetric arrangement of nine identical small fields (1×3 cm2) impinging on the flat circular face of the dosimeter. A repetitious sequence of three dose levels (4, 2.88, and 1.28 Gy) was delivered. The rotationally symmetric treatment resulted in a dose distribution with high spatial variation in axial planes but only gradual variation with depth along the long axis of the dosimeter. The significance of this treatment was that it facilitated accurate film dosimetry in the axial plane, for independent verification. Also, it enabled rigorous evaluation of robustness, reproducibility and accuracy of response, at the three dose levels. The OCTOPUS 5× commercial scanner was used for dose readout from the dosimeters at daily time intervals. The use of improved optics and acquisition technique yielded substantially better noise characteristics (reduced to ∼2%) than had been achieved previously. Intradosimeter uniformity of radiochromic response was evaluated by calculating a 3D gamma comparison between each dosimeter and axially rotated copies of the same dosimeter. This convenient technique exploits the rotational symmetry of the distribution. All points in the gamma comparison passed a 2% difference, 1 mm distance-to-agreement criterion, indicating excellent intradosimeter uniformity even at low dose levels. Postirradiation, the dosimeters were all found to exhibit a slight increase in opaqueness
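
    A small, generic sketch of the 3D gamma comparison used here (2% dose difference, 1 mm distance-to-agreement). It is a brute-force global gamma suitable only for small arrays and is not the OCTOPUS/PRESAGE analysis software; grid spacing and tolerances are passed in as assumed parameters.

```python
import numpy as np

def gamma_index(ref, evl, spacing_mm=1.0, dose_tol=0.02, dist_tol_mm=1.0):
    """Brute-force global gamma comparison of two small 3D dose arrays.
    ref, evl : 3D arrays on the same grid; dose_tol is a fraction of max(ref)."""
    dd = dose_tol * ref.max()
    # Voxel positions in mm, flattened to (N, 3).
    idx = np.indices(ref.shape).reshape(3, -1).T * spacing_mm
    evl_flat = evl.ravel()
    gamma = np.empty(ref.size)
    for i, (p, dose) in enumerate(zip(idx, ref.ravel())):
        r2 = np.sum((idx - p) ** 2, axis=1) / dist_tol_mm ** 2
        d2 = (evl_flat - dose) ** 2 / dd ** 2
        gamma[i] = np.sqrt(np.min(r2 + d2))
    return gamma.reshape(ref.shape)

# Pass rate for 2%/1 mm, e.g. comparing a dosimeter with a rotated copy of itself:
# pass_rate = np.mean(gamma_index(ref, evl) <= 1.0)
```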

  20. A comprehensive evaluation of the PRESAGE/optical-CT 3D dosimetry system.

    PubMed

    Sakhalkar, H S; Adamovics, J; Ibbott, G; Oldham, M

    2009-01-01

    This work presents extensive investigations to evaluate the robustness (intradosimeter consistency and temporal stability of response), reproducibility, precision, and accuracy of a relatively new 3D dosimetry system comprising a leuco-dye doped plastic 3D dosimeter (PRESAGE) and a commercial optical-CT scanner (OCTOPUS 5x scanner from MGS Research, Inc). Four identical PRESAGE 3D dosimeters were created such that they were compatible with the Radiologic Physics Center (RPC) head-and-neck (H&N) IMRT credentialing phantom. Each dosimeter was irradiated with a rotationally symmetric arrangement of nine identical small fields (1 x 3 cm2) impinging on the flat circular face of the dosimeter. A repetitious sequence of three dose levels (4, 2.88, and 1.28 Gy) was delivered. The rotationally symmetric treatment resulted in a dose distribution with high spatial variation in axial planes but only gradual variation with depth along the long axis of the dosimeter. The significance of this treatment was that it facilitated accurate film dosimetry in the axial plane, for independent verification. Also, it enabled rigorous evaluation of robustness, reproducibility and accuracy of response, at the three dose levels. The OCTOPUS 5x commercial scanner was used for dose readout from the dosimeters at daily time intervals. The use of improved optics and acquisition technique yielded substantially better noise characteristics (reduced to approximately 2%) than had been achieved previously. Intradosimeter uniformity of radiochromic response was evaluated by calculating a 3D gamma comparison between each dosimeter and axially rotated copies of the same dosimeter. This convenient technique exploits the rotational symmetry of the distribution. All points in the gamma comparison passed a 2% difference, 1 mm distance-to-agreement criterion, indicating excellent intradosimeter uniformity even at low dose levels. Postirradiation, the dosimeters were all found to exhibit a slight increase in

  1. A comprehensive evaluation of the PRESAGE/optical-CT 3D dosimetry system

    SciTech Connect

    Sakhalkar, H. S.; Adamovics, J.; Ibbott, G.; Oldham, M.

    2009-01-15

    This work presents extensive investigations to evaluate the robustness (intradosimeter consistency and temporal stability of response), reproducibility, precision, and accuracy of a relatively new 3D dosimetry system comprising a leuco-dye doped plastic 3D dosimeter (PRESAGE) and a commercial optical-CT scanner (OCTOPUS 5x scanner from MGS Research, Inc). Four identical PRESAGE 3D dosimeters were created such that they were compatible with the Radiologic Physics Center (RPC) head-and-neck (H and N) IMRT credentialing phantom. Each dosimeter was irradiated with a rotationally symmetric arrangement of nine identical small fields (1x3 cm{sup 2}) impinging on the flat circular face of the dosimeter. A repetitious sequence of three dose levels (4, 2.88, and 1.28 Gy) was delivered. The rotationally symmetric treatment resulted in a dose distribution with high spatial variation in axial planes but only gradual variation with depth along the long axis of the dosimeter. The significance of this treatment was that it facilitated accurate film dosimetry in the axial plane, for independent verification. Also, it enabled rigorous evaluation of robustness, reproducibility and accuracy of response, at the three dose levels. The OCTOPUS 5x commercial scanner was used for dose readout from the dosimeters at daily time intervals. The use of improved optics and acquisition technique yielded substantially better noise characteristics (reduced to {approx}2%) than had been achieved previously. Intradosimeter uniformity of radiochromic response was evaluated by calculating a 3D gamma comparison between each dosimeter and axially rotated copies of the same dosimeter. This convenient technique exploits the rotational symmetry of the distribution. All points in the gamma comparison passed a 2% difference, 1 mm distance-to-agreement criterion, indicating excellent intradosimeter uniformity even at low dose levels. Postirradiation, the dosimeters were all found to exhibit a slight increase in

  2. An innovative system for 3D clinical photography in the resource-limited settings

    PubMed Central

    2014-01-01

    Background Kaposi’s sarcoma (KS) is the most frequently occurring cancer in Mozambique among men and the second most frequently occurring cancer among women. Effective therapeutic treatments for KS are poorly understood in this area. There is an unmet need to develop a simple but accurate tool for improved monitoring and diagnosis in a resource-limited setting. Standardized clinical photographs have been considered to be an essential part of the evaluation. Methods When a therapeutic response is achieved, nodular KS often exhibits a reduction of the thickness without a change in the base area of the lesion. To evaluate the vertical space along with other characters of a KS lesion, we have created an innovative imaging system with a consumer light-field camera attached to a miniature “photography studio” adaptor. The image file can be further processed by computational methods for quantification. Results With this novel imaging system, each high-quality 3D image was consistently obtained with a single camera shot at bedside by minimally trained personnel. After computational processing, all-focused photos and measurable 3D parameters were obtained. More than 80 KS image sets were processed in a semi-automated fashion. Conclusions In this proof-of-concept study, the feasibility to use a simple, low-cost and user-friendly system has been established for future clinical study to monitor KS therapeutic response. This 3D imaging system can be also applied to obtain standardized clinical photographs for other diseases. PMID:24929434

  3. Tree root systems competing for soil moisture in a 3D soil-plant model

    NASA Astrophysics Data System (ADS)

    Manoli, Gabriele; Bonetti, Sara; Domec, Jean-Christophe; Putti, Mario; Katul, Gabriel; Marani, Marco

    2014-04-01

    Competition for water among multiple tree rooting systems is investigated using a soil-plant model that accounts for soil moisture dynamics and root water uptake (RWU), whole plant transpiration, and leaf-level photosynthesis. The model is based on a numerical solution to the 3D Richards equation modified to account for a 3D RWU, trunk xylem, and stomatal conductances. The stomatal conductance is determined by combining a conventional biochemical demand formulation for photosynthesis with an optimization hypothesis that selects stomatal aperture so as to maximize carbon gain for a given water loss. Model results compare well with measurements of soil moisture throughout the rooting zone, of total sap flow in the trunk xylem, as well as of leaf water potential collected in a Loblolly pine forest. The model is then used to diagnose plant responses to water stress in the presence of competing rooting systems. Unsurprisingly, the overlap between rooting zones is shown to enhance soil drying. However, the 3D spatial model yielded transpiration-bulk root-zone soil moisture relations that do not deviate appreciably from their proto-typical form commonly assumed in lumped eco-hydrological models. The increased overlap among rooting systems primarily alters the timing at which the point of incipient soil moisture stress is reached by the entire soil-plant system.
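
    The optimization hypothesis mentioned above, choosing the stomatal conductance that maximizes carbon gain minus λ times water loss, can be illustrated with a toy calculation. The demand and loss functions and all parameter values below are simplified stand-ins chosen only to make the idea concrete; they are not the paper's 3D Richards-equation model.

```python
import numpy as np

def optimal_stomatal_conductance(lam=500.0, vpd=0.015, a_max=20.0, k=0.05):
    """Toy illustration of the stomatal optimization hypothesis:
    pick conductance g that maximizes carbon gain A(g) minus lam * water loss E(g).
    A(g) and E(g) are simplified stand-ins, not the paper's biochemical model."""
    g = np.linspace(1e-4, 0.4, 2000)          # assumed conductance range, mol m-2 s-1
    A = a_max * g / (g + k)                   # saturating photosynthetic demand
    E = 1.6 * g * vpd                         # diffusive water loss ~ conductance * VPD
    objective = A - lam * E
    return g[np.argmax(objective)]

print(optimal_stomatal_conductance())
```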

  4. Development and application of 3-D foot-shape measurement system under different loads

    NASA Astrophysics Data System (ADS)

    Liu, Guozhong; Wang, Boxiong; Shi, Hui; Luo, Xiuzhi

    2008-03-01

    The 3-D foot-shape measurement system under different loads based on the laser-line-scanning principle was designed and the model of the measurement system was developed. 3-D foot-shape measurements without blind areas under different loads and automatic extraction of foot parameters are achieved with the system. A global calibration method for the CCD cameras using a one-axis motion unit in the measurement system and specialized calibration kits is presented. Errors caused by the nonlinearity of the CCD cameras and other devices, and by the installation of the one-axis motion platform, the laser plane and the toughened glass plane, can be eliminated by using the nonlinear coordinate mapping function and the Powell optimization method in calibration. Foot measurements under different loads for 170 participants were conducted, and the statistical foot parameter measurement results for male and female participants under the non-weight condition, as well as the changes of foot parameters under the half-body-weight, full-body-weight and over-body-weight conditions compared with the non-weight condition, are presented. 3-D foot-shape measurement under different loads makes custom shoe-making possible and shows great promise in shoe design, foot orthopaedic treatment, shoe size standardization, and the establishment of a foot database for consumers and athletes.

  5. 3D real-time measurement system of seam with laser

    NASA Astrophysics Data System (ADS)

    Huang, Min-shuang; Huang, Jun-fen

    2014-02-01

    A 3-D real-time measurement system for the seam outline based on Moiré projection is proposed and designed. The system is composed of an LD, a grating, a CCD, a video A/D converter, an FPGA, a DSP and an output interface. The principle and hardware makeup of the high-speed, real-time image processing circuit based on a Digital Signal Processor (DSP) and a Field Programmable Gate Array (FPGA) are introduced. The noise generation mechanism under harsh welding field conditions is analyzed for the case when Moiré stripes are projected onto a welding workpiece surface. A median filter is adopted to smooth the acquired laser image of the seam, and measurement results for the 3-D outline image of the weld groove are then provided.
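
    A plain software sketch of the median-filter smoothing step mentioned above (window size assumed); in the actual system this step is presumably realized on the FPGA/DSP hardware rather than in Python.

```python
import numpy as np

def median_filter(img, size=3):
    """Simple size x size median filter used to suppress impulse noise in the
    acquired laser/Moire stripe image before the seam outline is extracted."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            out[y, x] = np.median(padded[y:y + size, x:x + size])
    return out
```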

  6. An efficient solid modeling system based on a hand-held 3D laser scan device

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Xu, Jun; Xu, Chenxi; Pan, Ming

    2014-12-01

    The hand-held 3D laser scanners sold on the market are appealing because they are portable and convenient to use, but they are expensive. Developing such a system from cheap devices using the same principles as the commercial systems is impractical. In this paper, a simple hand-held 3D laser scanner is developed from cheap devices, based on a volume reconstruction method. Unlike a conventional laser scanner that collects a point cloud of the object surface, the proposed method scans only a few key profile curves on the surface. A planar section curve network can be generated from these profile curves to construct a volume model of the object. The details of the design are presented and illustrated with the example of a complex-shaped object.

  7. Remote measurement methods for 3-D modeling purposes using BAE Systems' Software

    NASA Astrophysics Data System (ADS)

    Walker, Stewart; Pietrzak, Arleta

    2015-06-01

    Efficient, accurate data collection from imagery is the key to an economical generation of useful geospatial products. Incremental developments of traditional geospatial data collection and the arrival of new image data sources cause new software packages to be created and existing ones to be adjusted to enable such data to be processed. In the past, BAE Systems' digital photogrammetric workstation, SOCET SET®, met fin de siècle expectations in data processing and feature extraction. Its successor, SOCET GXP®, addresses today's photogrammetric requirements and new data sources. SOCET GXP is an advanced workstation for mapping and photogrammetric tasks, with automated functionality for triangulation, Digital Elevation Model (DEM) extraction, orthorectification and mosaicking, feature extraction and creation of 3-D models with texturing. BAE Systems continues to add sensor models to accommodate new image sources, in response to customer demand. New capabilities added in the latest version of SOCET GXP facilitate modeling, visualization and analysis of 3-D features.

  8. Implementation of parallel matrix decomposition for NIKE3D on the KSR1 system

    SciTech Connect

    Su, Philip S.; Fulton, R.E.; Zacharia, T.

    1995-06-01

    New massively parallel computer architecture has revolutionized the design of computer algorithms and promises to have significant influence on algorithms for engineering computations. Realistic engineering problems using finite element analysis typically imply excessively large computational requirements. Parallel supercomputers that have the potential for significantly increasing calculation speeds can meet these computational requirements. This report explores the potential for the parallel Cholesky (U{sup T}DU) matrix decomposition algorithm on NIKE3D through actual computations. The examples of two- and three-dimensional nonlinear dynamic finite element problems are presented on the Kendall Square Research (KSR1) multiprocessor system, with 64 processors, at Oak Ridge National Laboratory. The numerical results indicate that the parallel Cholesky (U{sup T}DU) matrix decomposition algorithm is attractive for NIKE3D under multi-processor system environments.
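
    For reference, a serial sketch of the U{sup T}DU (i.e. U^T D U, or LDL^T) factorization that the report parallelizes; the parallel NIKE3D version distributes these column updates across the KSR1 processors, which this toy version does not attempt. The matrix in the usage check is an arbitrary small symmetric positive-definite example.

```python
import numpy as np

def utdu_decompose(A):
    """Factor a symmetric positive-definite A as A = U^T D U,
    with U unit upper-triangular and D diagonal (stored as a vector)."""
    n = A.shape[0]
    U = np.eye(n)
    D = np.zeros(n)
    for j in range(n):
        # Diagonal entry from the already-computed column of U above row j.
        D[j] = A[j, j] - np.sum(D[:j] * U[:j, j] ** 2)
        for k in range(j + 1, n):
            U[j, k] = (A[j, k] - np.sum(D[:j] * U[:j, j] * U[:j, k])) / D[j]
    return U, D

# Check on a small symmetric positive-definite matrix:
A = np.array([[4.0, 2, 1], [2, 3, 0.5], [1, 0.5, 2]])
U, D = utdu_decompose(A)
print(np.allclose(U.T @ np.diag(D) @ U, A))
```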

  9. A 3D-elastography-guided system for laparoscopic partial nephrectomies

    NASA Astrophysics Data System (ADS)

    Stolka, Philipp J.; Keil, Matthias; Sakas, Georgios; McVeigh, Elliot; Allaf, Mohamad E.; Taylor, Russell H.; Boctor, Emad M.

    2010-02-01

    We present an image-guided intervention system based on tracked 3D elasticity imaging (EI) to provide a novel interventional modality for registration with pre-operative CT. The system can be integrated in both laparoscopic and robotic partial nephrectomies scenarios, where this new use of EI makes exact intra-operative execution of pre-operative planning possible. Quick acquisition and registration of 3D-B-Mode and 3D-EI volume data allows intra-operative registration with CT and thus with pre-defined target and critical regions (e.g. tumors and vasculature). Their real-time location information is then overlaid onto a tracked endoscopic video stream to help the surgeon avoid vessel damage and still completely resect tumors including safety boundaries. The presented system promises to increase the success rate for partial nephrectomies and potentially for a wide range of other laparoscopic and robotic soft tissue interventions. This is enabled by the three components of robust real-time elastography, fast 3D-EI/CT registration, and intra-operative tracking. With high quality, robust strain imaging (through a combination of parallelized 2D-EI, optimal frame pair selection, and optimized palpation motions), kidney tumors that were previously unregistrable or sometimes even considered isoechoic with conventional B-mode ultrasound can now be imaged reliably in interventional settings. Furthermore, this allows the transformation of planning CT data of kidney ROIs to the intra-operative setting with a markerless mutual-information-based registration, using EM sensors for intraoperative motion tracking. Overall, we present a complete procedure and its development, including new phantom models - both ex vivo and synthetic - to validate image-guided technology and training, tracked elasticity imaging, real-time EI frame selection, registration of CT with EI, and finally a real-time, distributed software architecture. Together, the system allows the surgeon to concentrate

  10. Development of hybrid 3-D hydrological modeling for the NCAR Community Earth System Model (CESM)

    SciTech Connect

    Zeng, Xubin; Troch, Peter; Pelletier, Jon; Niu, Guo-Yue; Gochis, David

    2015-11-15

    This is the Final Report of our four-year (3-year plus one-year no cost extension) collaborative project between the University of Arizona (UA) and the National Center for Atmospheric Research (NCAR). The overall objective of our project is to develop and evaluate the first hybrid 3-D hydrological model with a horizontal grid spacing of 1 km for the NCAR Community Earth System Model (CESM).

  11. Template protection and its implementation in 3D face recognition systems

    NASA Astrophysics Data System (ADS)

    Zhou, Xuebing

    2007-04-01

    As biometric recognition systems are widely applied in various application areas, security and privacy risks have recently attracted the attention of the biometric community. Template protection techniques prevent stored reference data from revealing private biometric information and enhance the security of biometric systems against attacks such as identity theft and cross matching. This paper concentrates on a template protection algorithm that merges methods from cryptography, error correction coding and biometrics. The key component of the algorithm is to convert biometric templates into binary vectors. It is shown that the binary vectors should be robust, uniformly distributed, statistically independent and collision-free so that authentication performance can be optimized and information leakage can be avoided. Depending on the statistical character of the biometric template, different approaches for transforming biometric templates into compact binary vectors are presented. The proposed methods are integrated into a 3D face recognition system and tested on the 3D facial images of the FRGC database. It is shown that the resulting binary vectors provide an authentication performance that is similar to that of the original 3D face templates. A high security level is achieved with reasonable false acceptance and false rejection rates of the system, based on an efficient statistical analysis. The algorithm estimates the statistical character of biometric templates from a number of biometric samples in the enrollment database. For the FRGC 3D face database, our tests show only a small difference in robustness and discriminative power between the classification results obtained under the assumption of uniformly distributed templates and those obtained under the assumption of Gaussian distributed templates.
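
    One simple way to turn real-valued templates into roughly uniformly distributed binary vectors is quantization against per-dimension population medians, with matching done by Hamming distance. This is an illustrative, assumed scheme in the spirit of the abstract, not the paper's exact algorithm, and it omits the error-correction and cryptographic layers.

```python
import numpy as np

def binarize_templates(templates):
    """Illustrative reliable-bit style binarization (assumed, not the paper's scheme):
    each real-valued template component is quantized against the population median,
    so the resulting bits are roughly uniformly distributed across users.
    templates : (n_subjects, n_features) array of enrolled 3D-face feature vectors."""
    thresholds = np.median(templates, axis=0)   # per-dimension decision threshold
    return (templates > thresholds).astype(np.uint8)

def hamming_distance(a, b):
    """Matching a binary probe against a protected reference typically reduces to a
    Hamming-distance (or error-correcting decoding) test rather than a direct
    comparison of the original templates."""
    return int(np.sum(a != b))
```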

  12. Ultra-Compact, High-Resolution LADAR System for 3D Imaging

    NASA Technical Reports Server (NTRS)

    Xu, Jing; Gutierrez, Roman

    2009-01-01

    An eye-safe LADAR system weighs under 500 grams and has a range resolution of 1 mm at 10 m. This laser uses an adjustable, tiny microelectromechanical system (MEMS) mirror made at SiWave to sweep the laser frequency. The laser device is small (70x50x13 mm). The LADAR uses mature fiber-optic telecommunication technologies throughout the system, making this innovation an efficient performer. The tiny size and light weight make the system useful for commercial and industrial applications including surface damage inspections, range measurements, and 3D imaging.

  13. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this at the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We currently have a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects in physical or virtual museums. Nevertheless, Europeana has tremendous potential as a multi-faceted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video and sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  14. 3D optical sectioning with a new hyperspectral confocal fluorescence imaging system.

    SciTech Connect

    Nieman, Linda T.; Sinclair, Michael B.; Davidson, George S.; Van Benthem, Mark Hilary; Haaland, David Michael; Timlin, Jerilyn Ann; Sasaki, Darryl Yoshio; Bachand, George David; Jones, Howland D. T.

    2007-02-01

    A novel hyperspectral fluorescence microscope for high-resolution 3D optical sectioning of cells and other structures has been designed, constructed, and used to investigate a number of different problems. We have significantly extended new multivariate curve resolution (MCR) data analysis methods to deconvolve the hyperspectral image data and to rapidly extract quantitative 3D concentration distribution maps of all emitting species. The imaging system has many advantages over current confocal imaging systems including simultaneous monitoring of numerous highly overlapped fluorophores, immunity to autofluorescence or impurity fluorescence, enhanced sensitivity, and dramatically improved accuracy, reliability, and dynamic range. Efficient data compression in the spectral dimension has allowed personal computers to perform quantitative analysis of hyperspectral images of large size without loss of image quality. We have also developed and tested software to perform analysis of time resolved hyperspectral images using trilinear multivariate analysis methods. The new imaging system is an enabling technology for numerous applications including (1) 3D composition mapping analysis of multicomponent processes occurring during host-pathogen interactions, (2) monitoring microfluidic processes, (3) imaging of molecular motors and (4) understanding photosynthetic processes in wild type and mutant Synechocystis cyanobacteria.
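
    MCR can be sketched as an alternating least-squares factorization of the unfolded hyperspectral image into non-negative concentration maps and emission spectra. The bare-bones version below (random initialization, crude clipping to non-negativity, fixed iteration count) only illustrates the idea and is not the analysis software described in the record.

```python
import numpy as np

def mcr_als(D, n_components, n_iter=200, rng=np.random.default_rng(0)):
    """Bare-bones multivariate curve resolution by alternating least squares.
    D : (n_pixels, n_wavelengths) unfolded hyperspectral image.
    Returns C (unfolded concentration maps) and S (component spectra), both >= 0."""
    S = np.abs(rng.standard_normal((n_components, D.shape[1])))
    for _ in range(n_iter):
        # Solve D ~ C @ S for C with S fixed, then for S with C fixed.
        C = np.clip(np.linalg.lstsq(S.T, D.T, rcond=None)[0].T, 0, None)
        S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0], 0, None)
    return C, S
```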

  15. A complete system for 3D reconstruction of roots for phenotypic analysis.

    PubMed

    Kumar, Pankaj; Cai, Jinhai; Miklavcic, Stanley J

    2015-01-01

    Here we present a complete system for 3D reconstruction of roots grown in a transparent gel medium or washed and suspended in water. The system is capable of being fully automated as it is self-calibrating. The system starts with the detection of root tips in images from a sequence generated by a turntable motion. Root tips are detected using the statistics of Zernike moments on image patches centred on high-curvature points of the root boundary, together with the Bayes classification rule. The detected root tips are tracked through the image sequence using a multi-target tracking algorithm. Conics are fitted to the root tip trajectories using a novel ellipse fitting algorithm which weights the data points by their eccentricity. The conics projected from the circular trajectories have complex conjugate intersections, which are the images of the circular points. The circular points constrain the image of the absolute conic, which is directly related to the internal parameters of the camera. The pose of the camera is computed from the image of the rotation axis and the horizon. The silhouettes of the roots and the camera parameters are then used to reconstruct a 3D voxel model of the roots. We show results of real 3D reconstructions of roots that are detailed and realistic enough for phenotypic analysis. PMID:25381112
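
    The final silhouette-based reconstruction step can be illustrated by a simple voxel-carving sketch: each candidate voxel is kept only if it projects inside the root silhouette in every view. The data layout, function name and 3x4 camera-matrix convention are assumptions, not the authors' implementation.

```python
import numpy as np

def carve_voxels(voxel_centers, silhouettes, projections):
    """Silhouette-based volume reconstruction sketch.
    voxel_centers : (N, 3) candidate voxel positions
    silhouettes   : list of boolean masks, one per view
    projections   : list of 3x4 camera matrices (from the self-calibration step)
    Keeps a voxel only if it projects inside the silhouette in every view."""
    keep = np.ones(len(voxel_centers), dtype=bool)
    homog = np.hstack([voxel_centers, np.ones((len(voxel_centers), 1))])
    for mask, P in zip(silhouettes, projections):
        uvw = homog @ P.T
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
        inside = (u >= 0) & (u < mask.shape[1]) & (v >= 0) & (v < mask.shape[0])
        keep &= inside                                 # carve voxels projecting off-image
        keep[inside] &= mask[v[inside], u[inside]]     # carve voxels outside the silhouette
    return voxel_centers[keep]
```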

  16. AN INVESTIGATION OF VISION PROBLEMS AND THE VISION CARE SYSTEM IN RURAL CHINA.

    PubMed

    Bai, Yunli; Yi, Hongmei; Zhang, Linxiu; Shi, Yaojiang; Ma, Xiaochen; Congdon, Nathan; Zhou, Zhongqiang; Boswell, Matthew; Rozelle, Scott

    2014-11-01

    This paper examines the prevalence of vision problems and the accessibility to and quality of vision care in rural China. We obtained data from 4 sources: 1) the National Rural Vision Care Survey; 2) the Private Optometrists Survey; 3) the County Hospital Eye Care Survey; and 4) the Rural School Vision Care Survey. The data from each of the surveys were collected by the authors during 2012. Thirty-three percent of the rural population surveyed self-reported vision problems. Twenty-two percent of subjects surveyed had ever had a vision exam. Among those who self-reported having vision problems, 34% did not wear eyeglasses. Fifty-four percent of those with vision problems who had eyeglasses did not have a vision exam prior to receiving glasses. However, having a vision exam did not always guarantee access to quality vision care. Four channels of vision care service were assessed. The school vision examination program did not increase the usage rate of eyeglasses. Each county hospital was staffed with three eye doctors having one year of education beyond high school, serving more than 400,000 residents. Private optometrists often had low levels of education and professional certification. In conclusion, our findings show that the vision care system in rural China is inadequate and ineffective in meeting the needs of the rural population sampled.

  17. The study of calibration and epipolar geometry for the stereo vision system built by fisheye lenses

    NASA Astrophysics Data System (ADS)

    Zhang, Baofeng; Lu, Chunfang; Röning, Juha; Feng, Weijia

    2015-01-01

    A fish-eye lens is a short-focal-length (f = 6~16 mm) lens whose field of view (FOV) approaches or even exceeds 180×180 degrees. A large body of literature shows that a multiple-view geometry system built with fish-eye lenses obtains a larger stereo field than a traditional stereo vision system based on a pair of perspective projection images. Since a fish-eye camera usually has a wider-than-hemispherical FOV, most image processing approaches based on the pinhole camera model for conventional stereo vision are not suitable for this category of stereo vision built with fish-eye lenses. This paper focuses on discussing the calibration and epipolar rectification method for a novel machine vision system set up with four fish-eye lenses, called the Special Stereo Vision System (SSVS). The characteristic of SSVS is that it can produce 3D coordinate information over the whole global observation space while simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single vision device and one static shot. Parameter calibration and epipolar rectification are the basis for SSVS to realize 3D reconstruction and panoramic image generation.
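
    For a single fish-eye camera, a calibration of the kind discussed above can be sketched with OpenCV's fisheye module, which uses a generic equidistant-style distortion model and is not necessarily the model adopted by the authors; the checkerboard size and image paths are assumptions.

    ```python
    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)                                   # inner corners of the checkerboard (assumed)
    objp = np.zeros((1, pattern[0] * pattern[1], 3), np.float32)
    objp[0, :, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_pts, img_pts = [], []
    for fname in glob.glob("calib_images/*.png"):      # hypothetical image folder
        gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners.reshape(1, -1, 2))

    K = np.zeros((3, 3))                               # intrinsic matrix, estimated by calibrate()
    D = np.zeros((4, 1))                               # fisheye distortion coefficients
    flags = cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC | cv2.fisheye.CALIB_FIX_SKEW
    rms, K, D, rvecs, tvecs = cv2.fisheye.calibrate(
        obj_pts, img_pts, gray.shape[::-1], K, D, flags=flags)
    print("RMS reprojection error:", rms)
    ```

    Epipolar rectification for the four-lens SSVS itself would then build on per-camera intrinsics of this kind plus the relative poses between the lenses.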

  18. Processing system for an enhanced vision system

    NASA Astrophysics Data System (ADS)

    Yelton, Dennis J.; Bernier, Ken L.; Sanders-Reed, John N.

    2004-08-01

    An Enhanced Vision System (EVS) combines imagery from multiple sensors, possibly running at different frame rates and pixel counts, onto a display. In the case of a Helmet Mounted Display (HMD), the user's line of sight is continuously changing, with the result that the sensor pixels rendered on the display change in real time. In an EVS, the various sensors provide overlapping fields of view, which requires imagery to be stitched together into a seamless mosaic for the user. Further, sensors of different modalities may be present, requiring the fusion of imagery from those sensors. All of this takes place in a dynamic flight environment where the aircraft (with fixed-mounted sensors) is changing position and orientation while the users independently change their lines of sight. In order to provide well-registered, seamless imagery, very low throughput latencies are required while dealing with huge volumes of data. This presents both algorithmic and processing challenges which must be overcome to provide a suitable system. This paper discusses system architecture, efficient stitching and fusing algorithms, and hardware implementation issues.

  19. 3D ultrasound system to investigate intraventricular hemorrhage in preterm neonates

    NASA Astrophysics Data System (ADS)

    Kishimoto, J.; de Ribaupierre, S.; Lee, D. S. C.; Mehta, R.; St. Lawrence, K.; Fenster, A.

    2013-11-01

    Intraventricular hemorrhage (IVH) is a common disorder among preterm neonates that is routinely diagnosed and monitored by 2D cranial ultrasound (US). The cerebral ventricles of patients with IVH often go through a period of ventricular dilation (ventriculomegaly). This initial increase in ventricle size can either resolve spontaneously, which often shows clinically as a period of stabilization in ventricle size and an eventual decline back towards a more normal size, or progress as ventricular dilation that does not stabilize and may require interventional therapy to reduce symptoms related to increased intracranial pressure. To improve the characterization of ventricle dilation, we developed a 3D US imaging system that can be used with a conventional clinical US scanner to image the ventricular system of preterm neonates at risk of ventriculomegaly. A motorized transducer housing was designed specifically for hand-held use inside an incubator, using a transducer commonly employed for cranial 2D US scans. The system was validated using geometric phantoms, US/MRI-compatible ventricle volume phantoms, and patient images to determine 3D reconstruction accuracy and inter- and intra-observer volume estimation variability. 3D US geometric reconstruction was found to be accurate, with an error of <0.2%. Measured volumes of a US/MRI-compatible ventricle-like phantom were within 5% of gold-standard water displacement measurements. The intra-class correlation for the three observers was 0.97, showing very high agreement between observers. The coefficient of variation was between 1.8% and 6.3% for repeated segmentations of the same patient. The minimum detectable difference was calculated to be 0.63 cm3 for a single observer. Results from ANOVA for three observers segmenting three patients of IVH grade II did not show any significant differences (p > 0.05) in the measured ventricle volumes between observers. This 3D US system can reliably produce 3D US images of the neonatal ventricular system.
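
    The reproducibility figures quoted above (coefficient of variation, minimum detectable difference) can be computed from repeated volume measurements along the following lines; the conventions used here (CV = SD/mean, MDD = 1.96·√2·SEM with SEM = SD·√(1 − ICC)) are common choices and are not taken from the paper.

    ```python
    import numpy as np

    def reproducibility_stats(volumes, icc):
        """Reproducibility summary for repeated ventricle-volume segmentations.
        volumes: repeated measurements of one patient (cm^3); icc: intra-class
        correlation estimated separately (e.g., from an ANOVA over observers)."""
        v = np.asarray(volumes, dtype=float)
        mean, sd = v.mean(), v.std(ddof=1)
        cv = sd / mean                       # coefficient of variation
        sem = sd * np.sqrt(1.0 - icc)        # standard error of measurement
        mdd = 1.96 * np.sqrt(2.0) * sem      # minimum detectable difference
        return {"cv_percent": 100 * cv, "mdd_cm3": mdd}

    # Hypothetical repeated segmentations of one patient
    print(reproducibility_stats([12.1, 12.4, 11.9, 12.6], icc=0.97))
    ```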

  20. Design of 3D measurement system based on multi-sensor data fusion technique

    NASA Astrophysics Data System (ADS)

    Zhang, Weiguang; Han, Jun; Yu, Xun

    2009-05-01

    With the rapid development of shape measurement techniques, the multi-sensor approach has become a valid way to simultaneously improve accuracy, extend the measuring range, reduce occlusion, realize multi-resolution measurement, and increase measuring speed. Sensors in a multi-sensor system can have different system parameters, measuring ranges and precisions. The light sectioning method is a useful technique for 3D profile measurement: it is insensitive to the optical properties of the object surface and places few demands on the surroundings. A multi-sensor system scheme, which uses the light sectioning method and multi-sensor data fusion techniques, is presented for measuring aviation engine blades and spiral bevel gears. A system model is developed to relate the measuring range and precision to the system parameters, and the system parameters were set according to system error analysis, measuring range and precision. The results show that the system is more universal than its predecessor, that its accuracy is about 0.05 mm over a 60 × 60 mm² measuring range, and that it successfully measures the aerodynamic profile curve of an aviation engine blade and the tooth profile of a spiral bevel gear, with 360° multi-resolution measuring capability.

  1. Evaluating the Effects of Dimensionality in Advanced Avionic Display Concepts for Synthetic Vision Systems

    NASA Technical Reports Server (NTRS)

    Alexander, Amy L.; Prinzel, Lawrence J., III; Wickens, Christopher D.; Kramer, Lynda J.; Arthur, Jarvis J.; Bailey, Randall E.

    2007-01-01

    Synthetic vision systems provide an in-cockpit view of terrain and other hazards via a computer-generated display representation. Two experiments examined several display concepts for synthetic vision and evaluated how such displays modulate pilot performance. Experiment 1 (24 general aviation pilots) compared three navigational display (ND) concepts: 2D coplanar, 3D, and split-screen. Experiment 2 (12 commercial airline pilots) evaluated baseline 'blue sky/brown ground' or synthetic vision-enabled primary flight displays (PFDs) and three ND concepts: 2D coplanar with and without synthetic vision and a dynamic multi-mode rotatable exocentric format. In general, the results pointed to an overall advantage for a split-screen format, whether it be stand-alone (Experiment 1) or available via rotatable viewpoints (Experiment 2). Furthermore, Experiment 2 revealed benefits associated with utilizing synthetic vision in both the PFD and ND representations and the value of combined ego- and exocentric presentations.

  2. Laser electro-optic system for rapid three-dimensional /3-D/ topographic mapping of surfaces

    NASA Technical Reports Server (NTRS)

    Altschuler, M. D.; Altschuler, B. R.; Taboada, J.

    1981-01-01

    It is pointed out that the generic utility of a robot in a factory/assembly environment could be substantially enhanced by providing a vision capability to the robot. A standard videocamera for robot vision provides a two-dimensional image which contains insufficient information for a detailed three-dimensional reconstruction of an object. Approaches which supply the additional information needed for the three-dimensional mapping of objects with complex surface shapes are briefly considered and a description is presented of a laser-based system which can provide three-dimensional vision to a robot. The system consists of a laser beam array generator, an optical image recorder, and software for controlling the required operations. The projection of a laser beam array onto a surface produces a dot pattern image which is viewed from one or more suitable perspectives. Attention is given to the mathematical method employed, the space coding technique, the approaches used for obtaining the transformation parameters, the optics for laser beam array generation, the hardware for beam array coding, and aspects of image acquisition.
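
    The triangulation underlying such a projected-dot system can be roughly illustrated by intersecting the known laser ray for a dot with the camera viewing ray for its image; the rays below are illustrative assumptions, and the sketch is not the paper's space-coding method.

    ```python
    import numpy as np

    def triangulate_rays(o1, d1, o2, d2):
        """Midpoint of the common perpendicular between two 3D rays
        (origin o, direction d): a standard two-ray triangulation."""
        d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
        w0 = o1 - o2
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        d, e = d1 @ w0, d2 @ w0
        denom = a * c - b * b                # ~0 if the rays are parallel
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
        return 0.5 * ((o1 + s * d1) + (o2 + t * d2))

    # Hypothetical laser ray and back-projected camera ray for one dot
    laser_o, laser_d = np.array([0.10, 0.0, 0.0]), np.array([-0.05, 0.0, 1.0])
    cam_o, cam_d = np.array([0.0, 0.0, 0.0]), np.array([0.02, 0.01, 1.0])
    print(triangulate_rays(laser_o, laser_d, cam_o, cam_d))
    ```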

  3. A web-based 3D visualisation and assessment system for urban precinct scenario modelling

    NASA Astrophysics Data System (ADS)

    Trubka, Roman; Glackin, Stephen; Lade, Oliver; Pettit, Chris

    2016-07-01

    Recent years have seen an increasing number of spatial tools and technologies for enabling better decision-making in the urban environment. They have largely arisen because of the need for cities to be more efficiently planned to accommodate growing populations while mitigating urban sprawl, and also because of innovations in rendering data in 3D being well suited for visualising the urban built environment. In this paper we review a number of systems that are better known and more commonly used in the field of urban planning. We then introduce Envision Scenario Planner (ESP), a web-based 3D precinct geodesign, visualisation and assessment tool, developed using Agile and Co-design methods. We provide a comprehensive account of the tool, beginning with a discussion of its design and development process and concluding with an example use case and a discussion of the lessons learned in its development.

  4. Genetically Encoded Sender–Receiver System in 3D Mammalian Cell Culture

    PubMed Central

    2013-01-01

    Engineering spatial patterning in mammalian cells, employing entirely genetically encoded components, requires solving several problems. These include how to code secreted activator or inhibitor molecules and how to send concentration-dependent signals to neighboring cells, to control gene expression. The Madin–Darby Canine Kidney (MDCK) cell line is a potential engineering scaffold as it forms hollow spheres (cysts) in 3D culture and tubulates in response to extracellular hepatocyte growth factor (HGF). We first aimed to graft a synthetic patterning system onto single developing MDCK cysts. We therefore developed a new localized transfection method to engineer distinct sender and receiver regions. A stable reporter line enabled reversible EGFP activation by HGF and modulation by a secreted repressor (a truncated HGF variant, NK4). By expanding the scale to wide fields of cysts, we generated morphogen diffusion gradients, controlling reporter gene expression. Together, these components provide a toolkit for engineering cell–cell communication networks in 3D cell culture. PMID:24313393

  5. A novel sensor system for 3D face scanning based on infrared coded light

    NASA Astrophysics Data System (ADS)

    Modrow, Daniel; Laloni, Claudio; Doemens, Guenter; Rigoll, Gerhard

    2008-02-01

    In this paper we present a novel sensor system for three-dimensional face scanning applications. Its operating principle is based on active triangulation with a color-coded light approach. As it is implemented in the near-infrared band, the light used is invisible to human perception. Though the proposed sensor is primarily designed for face scanning and biometric applications, its performance characteristics are beneficial for technical applications as well. The acquisition of 3D data is real-time capable, provides accurate, high-resolution depth maps and shows high robustness against ambient light. Hence most of the limiting factors of other sensors for 3D and face scanning applications are eliminated, such as blinding and annoying light patterns, motion constraints and highly restricted scenarios due to ambient light constraints.

  6. Multi-camera sensor system for 3D segmentation and localization of multiple mobile robots.

    PubMed

    Losada, Cristina; Mazo, Manuel; Palazuelos, Sira; Pizarro, Daniel; Marrón, Marta

    2010-01-01

    This paper presents a method for obtaining the motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. The set of calibrated and synchronized cameras are placed in fixed positions within the environment (intelligent space). The proposed algorithm for motion segmentation and 3D localization is based on the minimization of an objective function. This function includes information from all the cameras, and it does not rely on previous knowledge or invasive landmarks on board the robots. The proposed objective function depends on three groups of variables: the segmentation boundaries, the motion parameters and the depth. For the objective function minimization, we use a greedy iterative algorithm with three steps that, after initialization of segmentation boundaries and depth, are repeated until convergence.

  7. Design of a 3D Magnetic Diagnostic System for DIII-D

    NASA Astrophysics Data System (ADS)

    King, J. D.; Strait, E. J.; Boivin, R. L.; La Haye, R. J.; Lao, L. L.; Battaglia, D. J.; Logan, N. C.; Hanson, J. M.; Lanctot, M. J.; Sontag, A. C.

    2012-10-01

    A new set of magnetic sensors has been designed to diagnose the 3D plasma response due to applied resonant magnetic perturbations (RMPs). The system will also allow for detailed investigation of locked modes and the effects of error fields. This upgrade adds more than 100 co-located radial and poloidal field sensors positioned on the high and low field sides of the tokamak. The sensors are arranged in toroidal and poloidal arrays. Their dimensions and spacing are determined using MARS-F and IPEC model predictions to maximize sensitivity to expected 3D field perturbations. Irregular toroidal spacing is used to minimize the condition numbers for simultaneous recovery of toroidal mode numbers n<=4. A subset of closely spaced sensors will also be installed to measure short wavelength MHD such as ELM precursors and TAEs.

  8. A novel 3D constellation-masked method for physical security in hierarchical OFDMA system.

    PubMed

    Zhang, Lijia; Liu, Bo; Xin, Xiangjun; Liu, Deming

    2013-07-01

    This paper proposes a novel 3D constellation-masked method to ensure physical security in a hierarchical optical orthogonal frequency division multiple access (OFDMA) system. The 3D constellation masking is executed on the two levels of hierarchical modulation and among different OFDM subcarriers, and is realized by masking vectors. The Lorenz chaotic model is adopted for the generation of the masking vectors in the proposed scheme. A 9.85 Gb/s encrypted hierarchical QAM OFDM signal is successfully demonstrated in the experiment. The performance of an illegal optical network unit (ONU) with different masking vectors is also investigated. The proposed method is demonstrated to be secure and efficient against commonly known attacks in the experiment.
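
    The Lorenz model mentioned above can be integrated numerically to produce pseudo-random masking sequences; the parameters, integration step and mapping of the chaotic states to masking angles below are illustrative assumptions rather than the authors' scheme.

    ```python
    import numpy as np

    def lorenz_masking_vectors(n, x0=(1.0, 1.0, 1.0),
                               sigma=10.0, rho=28.0, beta=8.0 / 3.0, dt=0.001):
        """Integrate the Lorenz system with a simple Euler step and map each
        state to a masking angle in [0, 2*pi) (illustrative mapping only)."""
        x, y, z = x0
        states = np.empty((n, 3))
        for i in range(n):
            dx = sigma * (y - x)
            dy = x * (rho - z) - y
            dz = x * y - beta * z
            x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
            states[i] = (x, y, z)
        u = (states - states.min(0)) / (np.ptp(states, axis=0) + 1e-12)  # normalize to [0, 1)
        return 2 * np.pi * u                                             # n x 3 masking angles

    mask = lorenz_masking_vectors(1024)     # e.g., one angle per subcarrier and level
    print(mask[:3])
    ```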

  9. Stereovision-based 3D field recognition for automatic guidance system of off-road vehicle

    NASA Astrophysics Data System (ADS)

    Zhang, Fangming; Ying, Yibin; Shen, Chuan; Jiang, Huanyu; Zhang, Qin

    2005-11-01

    A stereovision-based disparity evaluation algorithm was developed for rice crop field recognition. Gray-level intensities and correlation were combined to produce disparities from the stereo image pairs. The ground surface and the rice canopy were treated as two rough planes whose disparities varied within a narrow range. The cut/uncut edges of the rice crop were first detected and tracked through the images, using a step model to locate the edge positions. Points beside the edges were then matched to obtain disparity values using an area correlation method. The 3D camera coordinates were computed from those disparities, and the vehicle coordinates were obtained by applying a camera-to-vehicle transformation to the 3D camera coordinates. The system has been implemented on an agricultural robot and evaluated in a rice crop field with straight rows. The results indicate that the developed stereovision navigation system is capable of reconstructing the field image.
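
    The back-projection from disparity to vehicle coordinates described above can be sketched with the standard rectified-stereo relations (depth Z = f·B/d, pinhole back-projection, then a rigid camera-to-vehicle transform); the calibration numbers below are illustrative assumptions, not the paper's values.

    ```python
    import numpy as np

    def disparity_to_camera_xyz(u, v, d, f, B, cx, cy):
        """Rectified-stereo back-projection: depth from disparity, then X and Y
        from the pinhole model (f in pixels, baseline B in metres)."""
        Z = f * B / d
        X = (u - cx) * Z / f
        Y = (v - cy) * Z / f
        return np.array([X, Y, Z])

    def camera_to_vehicle(p_cam, R, t):
        """Rigid transform from the camera frame to the vehicle frame."""
        return R @ p_cam + t

    p = disparity_to_camera_xyz(u=410, v=260, d=18.0, f=720.0, B=0.12, cx=320, cy=240)
    R = np.eye(3)                     # camera assumed aligned with the vehicle axes
    t = np.array([0.0, 1.2, 0.4])     # assumed mounting offset in metres
    print(camera_to_vehicle(p, R, t))
    ```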

  10. Multi-Camera Sensor System for 3D Segmentation and Localization of Multiple Mobile Robots

    PubMed Central

    Losada, Cristina; Mazo, Manuel; Palazuelos, Sira; Pizarro, Daniel; Marrón, Marta

    2010-01-01

    This paper presents a method for obtaining the motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. The set of calibrated and synchronized cameras are placed in fixed positions within the environment (intelligent space). The proposed algorithm for motion segmentation and 3D localization is based on the minimization of an objective function. This function includes information from all the cameras, and it does not rely on previous knowledge or invasive landmarks on board the robots. The proposed objective function depends on three groups of variables: the segmentation boundaries, the motion parameters and the depth. For the objective function minimization, we use a greedy iterative algorithm with three steps that, after initialization of segmentation boundaries and depth, are repeated until convergence. PMID:22319297

  11. In vivo 3D visualization of peripheral circulatory system using linear optoacoustic array

    NASA Astrophysics Data System (ADS)

    Ermilov, Sergey A.; Brecht, Hans-Peter; Fronheiser, Matthew P.; Nadvoretsky, Vyacheslav; Su, Richard; Conjusteau, Andre; Oraevsky, Alexander A.

    2010-02-01

    In this work we modified the light illumination of the laser optoacoustic (OA) imaging system to improve the 3D visualization of human forearm vasculature. Computer modeling demonstrated that the new illumination design, which features laser beams converging on the surface of the skin in the imaging plane of the probe, provides superior OA images in comparison to images generated by illumination with parallel laser beams. We also developed a procedure for vein/artery differentiation based on OA imaging at 690 nm and 1080 nm laser wavelengths. The procedure includes statistical analysis of the intensities of OA images of neighboring blood vessels. Analysis of the OA images generated by computer simulation of a human forearm illuminated at 690 nm and 1080 nm resulted in successful differentiation of veins and arteries. In vivo scanning of a human forearm provided a high-contrast 3D OA image of the forearm skin and a superficial blood vessel. The blood vessel image contrast was further enhanced after the vessel was automatically traced using the developed software. The software also allowed evaluation of the effective blood vessel diameter at each step of the scan. We propose that the developed 3D OA imaging system can be used for preoperative mapping of forearm vessels, which is essential for hemodialysis treatment.

  12. Dense point-cloud creation using superresolution for a monocular 3D reconstruction system

    NASA Astrophysics Data System (ADS)

    Diskin, Yakov; Asari, Vijayan K.

    2012-05-01

    We present an enhanced 3D reconstruction algorithm designed to support an autonomously navigated unmanned aerial system (UAS). The algorithm focuses on the 3D reconstruction of a scene using only a single moving camera, so the system can be used to construct a point-cloud model of its unknown surroundings. The original reconstruction process, resulting in a point cloud, was based on feature matching and depth triangulation. Although dense, this original model was limited by its low disparity resolution: as feature points were matched from frame to frame, the resolution of the input images and the discrete nature of disparities limited the depth computations within a scene. With the recent addition of a nonlinear super-resolution preprocessing step, the accuracy of the point cloud, which relies on precise disparity measurement, has increased significantly. Using a pixel-by-pixel approach, the super-resolution technique computes the phase congruency of each pixel's neighborhood and produces nonlinearly interpolated high-resolution input frames. A feature point therefore traverses a more finely resolved set of discrete disparities. The quantity of points within the 3D point-cloud model is also significantly increased, since the number of features is directly proportional to the resolution and high-frequency content of the input image. The contribution of the newly added preprocessing steps is measured by evaluating the density and accuracy of the reconstructed point cloud for autonomous navigation and mapping tasks within unknown environments.

  13. Holographic full-color 3D display system using color-LCoS spatial light modulator

    NASA Astrophysics Data System (ADS)

    Kim, Seung-Cheol; Moon, Jaw-Woong; Lee, Dong-Hwi; Son, Kwang-Chul; Kim, Eun-Soo

    2005-04-01

    In this paper, a new color-LCoS (liquid crystal on silicon)-based holographic full-color 3D display system is proposed. As the color LCoS SLM (spatial light modulator) can produce a full-color image pattern using a color wheel, only one LCoS panel is required for full-color reconstruction of a 3D object, in contrast to the conventional three-panel method. In the proposed method, each color fringe pattern is generated and tinted with its corresponding color beam; the R, G and B fringe patterns are combined and displayed on the color LCoS SLM. The red, green and blue fringe patterns are then diffracted at the corresponding state of the color wheel, so that a full-color holographic image can be reconstructed with simple optics. Experiments suggest the feasibility of implementing a new LCoS-based holographic full-color 3D video display system.

  14. 3D cell culture systems modeling tumor growth determinants in cancer target discovery.

    PubMed

    Thoma, Claudio R; Zimmermann, Miriam; Agarkova, Irina; Kelm, Jens M; Krek, Wilhelm

    2014-04-01

    Phenotypic heterogeneity of cancer cells, cell biological context, heterotypic crosstalk and the microenvironment are key determinants of the multistep process of tumor development. They are responsible, to a significant extent, for the limited response and resistance of cancer cells to molecular-targeted therapies. Better functional knowledge of the complex intra- and intercellular signaling circuits underlying communication between the different cell types populating a tumor tissue, and of the systemic and local factors that shape the tumor microenvironment, is therefore imperative. Sophisticated 3D multicellular tumor spheroid (MCTS) systems provide an emerging tool to model the phenotypic and cellular heterogeneity as well as microenvironmental aspects of in vivo tumor growth. In this review we discuss the cellular, chemical and physical factors contributing to zonation and cellular crosstalk within tumor masses. On this basis, we further describe 3D cell culture technologies for growth of MCTS as advanced tools for exploring molecular tumor growth determinants and facilitating drug discovery efforts. We conclude with a synopsis on technological aspects of on-line analysis and post-processing of 3D MCTS models.

  15. Approach to constructing reconfigurable computer vision system

    NASA Astrophysics Data System (ADS)

    Xue, Jianru; Zheng, Nanning; Wang, Xiaoling; Zhang, Yongping

    2000-10-01

    In this paper, we propose an approach to constructing a reconfigurable vision system. We found that timely and efficient execution of early tasks can significantly enhance the performance of whole computer vision tasks. We therefore abstract out a set of basic, computationally intensive stream operations that may be performed in parallel and embody them in a series of specific front-end processors. These processors, based on FPGAs (field-programmable gate arrays), can be reprogrammed to produce a range of different feature maps, such as edge detection and linking or image filtering. The front-end processors and a powerful DSP constitute a computing platform which can perform many computer vision tasks. Additionally, we adopt focus-of-attention techniques to reduce the I/O and computational demands by performing early vision processing only within a particular region of interest. We then implement a multi-page, dual-ported image memory interface between the image input and the computing platform (front-end processors and DSP). Early vision features are loaded into banks of dual-ported image memory arrays, which are continually raster-scan updated at high speed from the input image or video data stream. Moreover, the computing platform has completely asynchronous, random access to the image data and any other early-vision feature maps through the dual-ported memory banks. In this way, the computing platform resources can be properly allocated to a region of interest and decoupled from the task of dealing with a high-speed serial raster-scan input. Finally, we chose the PCI bus as the main channel between the PC and the computing platform; the front-end processors' control registers and the DSP's program memory are mapped into the PC's memory space, which allows the user to reconfigure the system at any time. We also present test results of a computer vision application based on the system.

  16. 3-D ultrasonic strain imaging based on a linear scanning system.

    PubMed

    Huang, Qinghua; Xie, Bo; Ye, Pengfei; Chen, Zhaohong

    2015-02-01

    This paper introduces a 3-D strain imaging method based on a freehand linear scanning mode. We designed a linear sliding track with a position sensor and a height-adjustable holder to constrain the movement of an ultrasound probe operated in a freehand manner. When the probe is moved along the sliding track, the corresponding positional measurements are transmitted in real time via a wireless communication module based on Bluetooth. In a single examination, the probe is scanned in two sweeps, with the height of the probe adjusted by the holder, to collect the pre- and post-compression radio-frequency echoes, respectively. To generate a 3-D strain image, a cubic volume in which the voxels denote relative tissue strains is defined according to the range of the two sweeps. With respect to the post-compression frames, several slices in the volume are determined and the pre-compression frames are re-sampled to correspond precisely to the post-compression frames. A strain estimation method based on minimizing a cost function using dynamic programming is then used to obtain the 2-D strain image for each pair of frames from the re-sampled pre-compression sweep and the post-compression sweep. A software system was developed for volume reconstruction, visualization, and measurement of the 3-D strain images. The experimental results show that high-quality 3-D strain images of phantom and human tissues can be generated by the proposed method, indicating that the proposed system can be applied in real clinical applications (e.g., musculoskeletal assessments).
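
    As a much simplified sketch of the per-frame-pair estimation step, the axial displacement between a pre- and post-compression RF line can be found by block matching with normalized cross-correlation, and the axial strain taken as the gradient of that displacement. This uses plain exhaustive matching rather than the paper's dynamic-programming cost minimization, and the window sizes and data are illustrative assumptions.

    ```python
    import numpy as np

    def axial_displacement(pre, post, win=64, step=32, search=10):
        """Estimate axial displacement (in samples) between pre-/post-compression
        RF lines by 1-D normalized cross-correlation block matching."""
        centers, shifts = [], []
        for start in range(search, len(pre) - win - search, step):
            ref = pre[start:start + win]
            best, best_lag = -np.inf, 0
            for lag in range(-search, search + 1):
                cand = post[start + lag:start + lag + win]
                c = np.dot(ref - ref.mean(), cand - cand.mean())
                c /= (np.std(ref) * np.std(cand) * win + 1e-12)
                if c > best:
                    best, best_lag = c, lag
            centers.append(start + win // 2)
            shifts.append(best_lag)
        return np.array(centers), np.array(shifts, dtype=float)

    pre = np.random.randn(2048)            # hypothetical RF line
    post = np.roll(pre, 3)                 # crude stand-in for a compressed line
    centers, disp = axial_displacement(pre, post)
    strain = np.gradient(disp, centers)    # axial strain = spatial derivative of displacement
    ```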

  17. TBIdoc: 3D content-based CT image retrieval system for traumatic brain injury

    NASA Astrophysics Data System (ADS)

    Li, Shimiao; Gong, Tianxia; Wang, Jie; Liu, Ruizhe; Tan, Chew Lim; Leong, Tze Yun; Pang, Boon Chuan; Lim, C. C. Tchoyoson; Lee, Cheng Kiang; Tian, Qi; Zhang, Zhuo

    2010-03-01

    Traumatic brain injury (TBI) is a major cause of death and disability. Computed Tomography (CT) scans are widely used in the diagnosis of TBI, and a large amount of TBI CT data has accumulated in hospital radiology departments. Such data and the associated patient information contain valuable information for clinical diagnosis and outcome prediction. However, current hospital database systems do not provide an efficient and intuitive tool for doctors to retrieve cases relevant to the current study. In this paper, we present the TBIdoc system: a content-based image retrieval (CBIR) system which works on TBI CT images. In this web-based system, the user queries by uploading CT image slices from one study; the retrieval result is a list of TBI cases ranked according to their 3D visual similarity to the query case. Specifically, TBI CT images often present diffuse or focal lesions. In the TBIdoc system, these pathological image features are represented as bin-based binary feature vectors, and the Jaccard-Needham measure is used as the similarity measurement. Based on these, we propose a 3D similarity measure for computing the similarity score between two series of CT slices. nDCG is used to evaluate the system performance, showing that the system produces satisfactory retrieval results. The system is expected to improve current hospital data management in TBI and to give better support to the clinical decision-making process. It may also contribute to computer-aided education in TBI.
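
    A minimal sketch of the similarity computation described above is given below; the way the per-slice Jaccard scores are aggregated into a single 3D score here (a simple mean over corresponding slices) is an assumption for illustration, not necessarily the paper's 3D measure.

    ```python
    import numpy as np

    def jaccard(a, b):
        """Jaccard similarity between two bin-based binary feature vectors."""
        a, b = np.asarray(a, bool), np.asarray(b, bool)
        union = np.logical_or(a, b).sum()
        return np.logical_and(a, b).sum() / union if union else 1.0

    def series_similarity(query_slices, case_slices):
        """Aggregate per-slice Jaccard scores into one score for two CT series
        by averaging over corresponding slices (illustrative choice)."""
        n = min(len(query_slices), len(case_slices))
        return float(np.mean([jaccard(q, c)
                              for q, c in zip(query_slices[:n], case_slices[:n])]))

    # Hypothetical binary lesion features for two 3-slice studies
    rng = np.random.default_rng(0)
    query = [rng.random(32) > 0.7 for _ in range(3)]
    case = [rng.random(32) > 0.7 for _ in range(3)]
    print(series_similarity(query, case))
    ```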

  18. Using an Automated 3D-tracking System to Record Individual and Shoals of Adult Zebrafish

    PubMed Central

    Maaswinkel, Hans; Zhu, Liqun; Weng, Wei

    2013-01-01

    Like many aquatic animals, zebrafish (Danio rerio) moves in a 3D space. It is thus preferable to use a 3D recording system to study its behavior. The presented automatic video tracking system accomplishes this by using a mirror system and a calibration procedure that corrects for the considerable error introduced by the transition of light from water to air. With this system it is possible to record both single and groups of adult zebrafish. Before use, the system has to be calibrated. The system consists of three modules: Recording, Path Reconstruction, and Data Processing. The step-by-step protocols for calibration and using the three modules are presented. Depending on the experimental setup, the system can be used for testing neophobia, white aversion, social cohesion, motor impairments, novel object exploration etc. It is especially promising as a first-step tool to study the effects of drugs or mutations on basic behavioral patterns. The system provides information about vertical and horizontal distribution of the zebrafish, about the xyz-components of kinematic parameters (such as locomotion, velocity, acceleration, and turning angle) and it provides the data necessary to calculate parameters for social cohesions when testing shoals. PMID:24336189

  19. Characterization of a novel bioreactor system for 3D cellular mechanobiology studies.

    PubMed

    Cook, Colin A; Huri, Pinar Y; Ginn, Brian P; Gilbert-Honick, Jordana; Somers, Sarah M; Temple, Joshua P; Mao, Hai-Quan; Grayson, Warren L

    2016-08-01

    In vitro engineering systems can be powerful tools for studying tissue development in response to biophysical stimuli as well as for evaluating the functionality of engineered tissue grafts. It has been challenging, however, to develop systems that adequately integrate the application of biomimetic mechanical strain to engineered tissue with the ability to assess functional outcomes in real time. The aim of this study was to design a bioreactor system capable of real-time conditioning (dynamic, uniaxial strain, and electrical stimulation) of centimeter-long 3D tissue engineered constructs simultaneously with the capacity to monitor local strains. The system addresses key limitations of uniform sample loading and real-time imaging capabilities. Our system features an electrospun fibrin scaffold, which exhibits physiologically relevant stiffness and uniaxial alignment that facilitates cell adhesion, alignment, and proliferation. We have demonstrated the capacity for directly incorporating human adipose-derived stromal/stem cells into the fibers during the electrospinning process and subsequent culture of the cell-seeded constructs in the bioreactor. The bioreactor facilitates accurate pre-straining of the 3D constructs as well as the application of dynamic and static uniaxial strains while monitoring bulk construct tensions. The incorporation of fluorescent nanoparticles throughout the scaffolds enables in situ monitoring of local strain fields using fluorescent digital image correlation techniques, since the bioreactor is imaging compatible, and allows the assessment of local sample stiffness and stresses when coupled with force sensor measurements. In addition, the system is capable of measuring the electromechanical coupling of skeletal muscle explants by applying an electrical stimulus and simultaneously measuring the force of contraction. The packaging of these technologies, biomaterials, and analytical methods into a single bioreactor system has produced a

  20. Combination of a vision system and a coordinate measuring machine for rapid coordinate metrology

    NASA Astrophysics Data System (ADS)

    Qu, Yufu; Pu, Zhaobang; Liu, Guodong

    2002-09-01

    This paper presents a novel methodology that integrates a vision system and a coordinate measuring machine for rapid coordinate metrology. Rapid acquisition of coordinate data from parts with tiny dimensions, complex geometry and soft or fragile materials has many applications; typical examples include the measurement of large-scale integrated circuits and glass or plastic parts, and reverse engineering in rapid product design and realization. In this paper, a measuring methodology for a vision-integrated coordinate measuring system is developed and demonstrated. The vision coordinate measuring system is characterized by an integrated use of a high-precision coordinate measuring machine (CMM), a vision system, advanced computational software, and the associated electronics. The vision system includes a charge-coupled device (CCD) camera, a self-adaptive brightness power supply, and a graphics workstation with an image processing board. Once the system has been calibrated, the vision system, together with intelligent feature recognition and auto-focus algorithms, provides the spatial coordinates of feature points on the global part profile. The measured data may be fitted to geometric elements of the part profile, and the results are subsequently used to compute parameters such as curvature radius, distance and shape error, and to reconstruct the surface. By integrating the vision system with the CMM, a highly automated, high-speed 3D coordinate acquisition system is developed. It has potential applications in a whole spectrum of manufacturing problems, with a major impact on metrology, inspection, and reverse engineering.

  1. Multimodal 3-D reconstruction of human anatomical structures using SurLens Visualization System.

    PubMed

    Adeshina, A M; Hashim, R; Khalid, N E A; Abidin, S Z Z

    2013-03-01

    In medical diagnosis and treatment planning, radiologists and surgeons rely heavily on the slices produced by medical imaging devices. Unfortunately, these image scanners can only present the 3-D human anatomical structure in 2-D. Traditionally, this requires the medical professionals concerned to study and analyze the 2-D images based on their expert experience, which is tedious, time-consuming and prone to error, especially when certain features occlude the desired region of interest. Reconstruction procedures were proposed earlier to handle such situations. However, 3-D reconstruction systems require high-performance computation and longer processing times, and integrating an efficient reconstruction system into clinical procedures involves high cost. Previously, reconstruction of the brain's blood vessels from MRA was achieved using the SurLens Visualization System. However, adapting such a system to other image modalities, applicable to the entire human anatomy, would be a meaningful contribution towards achieving a resourceful system for medical diagnosis and disease therapy. This paper attempts to adapt SurLens to the visualisation of abnormalities in human anatomical structures using CT and MR images. The study was evaluated with brain MR images from the Department of Surgery, University of North Carolina, United States, and abdominal-pelvic CT images from the Swedish National Infrastructure for Computing. The MR images contain around 109 datasets each of T1-FLASH, T2-Weighted, DTI and T1-MPRAGE. Significantly, visualization of human anatomical structure was achieved without prior segmentation. SurLens was adapted to visualize and display abnormalities, such as indications of Waldenström's macroglobulinemia, stroke and penetrating brain injury, in the human brain using Magnetic Resonance (MR) images. Moreover, possible abnormalities in the abdominal-pelvic region were also visualized using Computed Tomography (CT) slices. The study shows SurLens' functionality as

  2. a Low-Cost and Portable System for 3d Reconstruction of Texture-Less Objects

    NASA Astrophysics Data System (ADS)

    Hosseininaveh, A.; Yazdan, R.; Karami, A.; Moradi, M.; Ghorbani, F.

    2015-12-01

    The optical methods for 3D modelling of objects can be classified into two categories: image-based and range-based methods. Structure from Motion is one of the image-based methods implemented in commercial software. In this paper, a low-cost and portable system for 3D modelling of texture-less objects is proposed. This system includes a rotating table designed and developed using a stepper motor and a very light rotation plate. The system also has eight laser light sources with very dense and strong beams, which provide a suitable pattern on texture-less objects. In this system, images are taken semi-automatically by a camera according to the steps of the stepper motor. The images can be used in Structure from Motion procedures implemented in Agisoft software. To evaluate the performance of the system, two dark objects were used. Reference point clouds of these objects were obtained by spraying a light powder on the objects and using a GOM laser scanner. The objects were then placed on the proposed turntable, and several convergent images were taken of each object while the laser light sources projected the pattern onto the objects. Afterwards, the images were imported into VisualSFM, a fully automatic software package, to generate an accurate and complete point cloud. Finally, the obtained point clouds were compared to the point clouds generated by the GOM laser scanner. The results showed the ability of the proposed system to produce a complete 3D model of texture-less objects.

  3. A compact mechatronic system for 3D ultrasound guided prostate interventions

    SciTech Connect

    Bax, Jeffrey; Smith, David; Bartha, Laura; Montreuil, Jacques; Sherebrin, Shi; Gardi, Lori; Edirisinghe, Chandima; Fenster, Aaron

    2011-02-15

    Purpose: Ultrasound imaging has improved the treatment of prostate cancer by producing increasingly higher quality images and enabling sophisticated targeting procedures for the insertion of radioactive seeds during brachytherapy. However, it is critical that the needles be placed accurately within the prostate to deliver the therapy to the planned location and avoid complications from damaging surrounding tissues. Methods: The authors have developed a compact mechatronic system, as well as an effective method, for guiding and controlling the insertion of transperineal needles into the prostate. The system has been designed to allow a needle to be guided obliquely in 3D space into the prostate, thereby reducing pubic arch interference. The choice of needle trajectory and location in the prostate can be adjusted manually or under computer control. Results: To validate the system, a series of experiments was performed on phantoms. The 3D scan of the string phantom produced minimal geometric error, which was less than 0.4 mm. Needle guidance accuracy tests in agar prostate phantoms showed that the mean error of bead placement was less than 1.6 mm along parallel needle paths that were within 1.2 mm of the intended target and 1 deg. from the preplanned trajectory. At oblique angles of up to 15 deg. relative to the probe axis, beads were placed to within 3.0 mm along trajectories that were within 2.0 mm of the target, with an angular error of less than 2 deg. Conclusions: By coupling the 3D TRUS imaging system to a needle-tracking linkage, this system should improve the physician's ability to target and accurately guide a needle to selected targets without the need for the computer to directly manipulate and insert the needle. This is beneficial because the physician has complete control of the system and can safely maneuver the needle guide around obstacles such as previously placed needles.

  4. Generic precise augmented reality guiding system and its calibration method based on 3D virtual model.

    PubMed

    Liu, Miao; Yang, Shourui; Wang, Zhangying; Huang, Shujun; Liu, Yue; Niu, Zhenqi; Zhang, Xiaoxuan; Zhu, Jigui; Zhang, Zonghua

    2016-05-30

    Augmented reality systems can be applied to provide precise guidance for various kinds of manual work. The adaptability and guiding accuracy of such systems are determined by the computational model and the corresponding calibration method. In this paper, a novel type of augmented reality guiding system and the corresponding design scheme are proposed. Guided by external positioning equipment, the proposed system can achieve high relative indication accuracy in a large working space. The proposed system is realized with a digital projector, and a general back-projection model is derived from the geometric relationship between the digitized 3D model and the projector in free space. A corresponding calibration method is also designed to obtain the parameters of the projector. To validate the proposed back-projection model, coordinate data collected by 3D positioning equipment are used to calculate and optimize the extrinsic parameters. The final projection indication accuracy of the system is verified with a subpixel pattern-projection technique. PMID:27410124

  5. The role of 3D plating system in mandibular fractures: A prospective study

    PubMed Central

    Prasad, Rajendra; Thangavelu, Kavin; John, Reena

    2013-01-01

    Aim: The aim of our study was to evaluate the advantages and disadvantages of the 3D plating system in the treatment of mandibular fractures. Patients and Methods: Twenty mandibular fractures at various anatomic locations in 18 patients were treated by open reduction and internal fixation using 3D plates. All patients were followed up at regular intervals of 4, 8 and 12 weeks. Patients were assessed post-operatively for lingual splay and occlusal stability, and the incidence of neurosensory deficit, infection, masticatory difficulty, non-union and malunion was also assessed. Results: A significant reduction in lingual splay (72.2%) and occlusal stability (72.2%) were seen. The overall complication rate was 16.6%, comprising two patients who developed post-operative paresthesia of the lip, three patients with infection, and two cases of masticatory difficulty, which later subsided with higher antibiotics and 4 weeks of MMF. No evidence of non-union or malunion was noted. Conclusion: A single 3D 2 mm miniplate with 2 mm × 8 mm screws is a reliable and effective treatment modality for mandibular fractures. PMID:23946559

  6. System crosstalk measurement of a time-sequential 3D display using ideal shutter glasses

    NASA Astrophysics Data System (ADS)

    Chen, Fu-Hao; Huang, Kuo-Chung; Lin, Lang-Chin; Chou, Yi-Heng; Lee, Kuen

    2011-03-01

    The market for stereoscopic 3D TV has grown quickly in recent years; however, for 3D TV to really take off, the interoperability of shutter glasses (SG) across different TV sets must be solved. We therefore developed a measurement method with ideal shutter glasses (ISG) to separate the contributions of time-sequential stereoscopic displays and SG. To measure the crosstalk from time-sequential stereoscopic 3D displays, the influence of the SG must be eliminated. The advantages are that the sources of crosstalk are distinguished and the interoperability of SG is broadened. Hence, this paper proposes ideal shutter glasses, whose non-ideal properties are eliminated, as a platform to evaluate the crosstalk purely from the display. In the ISG method, the illuminance of the display is measured in the time domain to analyze the system crosstalk (SCT) of the display. In this experiment, the ISG method was used to measure SCT with a high-speed-response illuminance meter. From the time-resolved illuminance signals, the slow time response of the liquid crystal leading to SCT is visualized and quantified. Furthermore, an intriguing phenomenon was observed: SCT measured through SG increases as the viewing distance shortens, which may arise from LC leakage of the display and shutter leakage at large viewing angles. We therefore measured how LC and shutter leakage depend on viewing angle and verified this explanation. In addition, we used the ISG method to evaluate two displays.

  7. Planar Gradient Diffusion System to Investigate Chemotaxis in a 3D Collagen Matrix.

    PubMed

    Stout, David A; Toyjanova, Jennet; Franck, Christian

    2015-01-01

    The importance of cell migration can be seen throughout the development of human life. When cells migrate, they generate forces and transfer these forces to their surrounding area, leading to cell movement and migration. In order to understand the mechanisms that can alter and/or affect cell migration, one can study these forces. In theory, understanding the fundamental mechanisms and forces underlying cell migration holds the promise of effective approaches for treating diseases and promoting cellular transplantation. Unfortunately, the modern chemotaxis chambers that have been developed are usually restricted to two dimensions (2D) and have complex diffusion gradients that make experiments difficult to interpret. To this end, we have developed, and describe in this paper, a direct-viewing chamber for chemotaxis studies which overcomes the obstacles of modern chemotaxis chambers and makes it possible to measure cell forces and specific concentrations within the chamber in a 3D environment, in order to study 3D cell migration. More compellingly, this approach allows one to model diffusion through 3D collagen matrices and to calculate the diffusion coefficient of a chemoattractant through multiple different collagen concentrations, while keeping the system simple and user-friendly for traction force microscopy (TFM) and digital volume correlation (DVC) analysis. PMID:26131645
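
    For illustration, planar (one-dimensional) diffusion of a chemoattractant obeying Fick's second law can be simulated with an explicit finite-difference scheme of the following kind; the channel length, diffusion coefficient and boundary treatment are illustrative assumptions, not the paper's measured values.

    ```python
    import numpy as np

    def diffuse_1d(c0, D, dx, dt, steps):
        """Explicit finite-difference solution of dc/dt = D * d^2c/dx^2 with
        zero-flux boundaries (stable when D*dt/dx**2 <= 0.5)."""
        c = c0.astype(float).copy()
        r = D * dt / dx**2
        for _ in range(steps):
            lap = np.empty_like(c)
            lap[1:-1] = c[2:] - 2 * c[1:-1] + c[:-2]
            lap[0] = c[1] - c[0]
            lap[-1] = c[-2] - c[-1]
            c += r * lap
        return c

    # Hypothetical 5 mm channel with an initial chemoattractant bolus at one end
    x = np.linspace(0, 5e-3, 101)            # metres
    c0 = np.zeros_like(x); c0[0] = 1.0       # normalized concentration
    D = 1e-10                                # m^2/s, small-molecule order of magnitude
    profile = diffuse_1d(c0, D, dx=x[1] - x[0], dt=1.0, steps=3600)   # one hour
    ```

    Fitting simulated profiles of this kind to measured concentration maps is one way an apparent diffusion coefficient through a given collagen concentration could be extracted.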

  8. Simulation and testing of a multichannel system for 3D sound localization

    NASA Astrophysics Data System (ADS)

    Matthews, Edward Albert

    Three-dimensional (3D) audio involves the ability to localize sound anywhere in a three-dimensional space. 3D audio can be used to provide the listener with the perception of moving sounds and can provide a realistic listening experience for applications such as gaming, video conferencing, movies, and concerts. The purpose of this research is to simulate and test 3D audio by incorporating auditory localization techniques in a multi-channel speaker system. The objective is to develop an algorithm that can place an audio event in a desired location by calculating and controlling the gain factors of each speaker. A MATLAB simulation displays the location of the speakers and perceived sound, which is verified through experimentation. The scenario in which the listener is not equidistant from each of the speakers is also investigated and simulated. This research is envisioned to lead to a better understanding of human localization of sound, and will contribute to a more realistic listening experience.
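
    One simple way to place a phantom source by controlling per-speaker gains, in the spirit of the gain-factor calculation described above, is two-dimensional vector base amplitude panning: the source direction is expressed in the basis of the two nearest speaker directions and the resulting coefficients are normalized to constant power. The speaker layout below is an illustrative assumption, and this is not necessarily the algorithm developed in this work.

    ```python
    import numpy as np

    def vbap_pair_gains(source_az_deg, spk1_az_deg, spk2_az_deg):
        """2-D amplitude panning: solve p = g1*l1 + g2*l2 for the gains of a
        speaker pair, then normalize for constant power."""
        def unit(az):
            a = np.radians(az)
            return np.array([np.cos(a), np.sin(a)])
        L = np.column_stack([unit(spk1_az_deg), unit(spk2_az_deg)])
        g = np.linalg.solve(L, unit(source_az_deg))
        g = np.clip(g, 0.0, None)            # no negative (out-of-arc) gains
        return g / np.linalg.norm(g)

    # Phantom source at 20 degrees between speakers at 0 and 45 degrees
    print(vbap_pair_gains(20, 0, 45))
    ```

    Compensation for a listener who is not equidistant from the speakers could be layered on top by scaling each gain and delaying each channel according to the extra path length.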

  9. Knowledge-based system for computer-aided process planning of laser sensor 3D digitizing

    NASA Astrophysics Data System (ADS)

    Bernard, Alain; Davillerd, Stephane; Sidot, Benoit

    1999-11-01

    This paper introduces results of research on automating the digitizing process for complex parts using a precision 3D laser sensor. Indeed, most digitization operations are still performed manually, so redundancies, gaps and omissions in point acquisition are possible. Moreover, the digitization time for a part, i.e. the time the machine is tied up, is thus not optimized overall. For time compression during product development, it is therefore important to minimize the time consumed by the reverse-engineering step. A new way to automatically scan a complex 3D part is presented, in order to measure the part and compare the acquired data with the reference CAD model. After introducing digitization, the environment used for the experiments is presented, based on a CMM and a plane laser sensor. The proposed strategy for adapting this environment to robotic CAD software is then introduced, in order to simulate and validate 3D laser-scanning paths. The CAPP (Computer Aided Process Planning) system used for the automatic generation of the laser scanning process is also presented.

  10. 3D-model building of the jaw impression

    NASA Astrophysics Data System (ADS)

    Ahmed, Moumen T.; Yamany, Sameh M.; Hemayed, Elsayed E.; Farag, Aly A.

    1997-03-01

    A novel approach is proposed to obtain a record of the patient's occlusion using computer vision. Data acquisition is obtained using intra-oral video cameras. The technique utilizes shape from shading to extract 3D information from 2D views of the jaw, and a novel technique for 3D data registration using genetic algorithms. The resulting 3D model can be used for diagnosis, treatment planning, and implant purposes. The overall purpose of this research is to develop a model-based vision system for orthodontics to replace traditional approaches. This system will be flexible, accurate, and will reduce the cost of orthodontic treatments.

  11. Scripting in Radiation Therapy: An Automatic 3D Beam-Naming System

    SciTech Connect

    Holdsworth, Clay; Hummel-Kramer, Sharon M.; Phillips, Mark

    2011-10-01

    Scripts can be executed within the radiation treatment planning software framework to reduce human error, increase treatment planning efficiency, reduce confusion, and promote consistency within an institution or even among institutions. Scripting is versatile, and one application is an automatic 3D beam-naming system that describes the position of the beam relative to the patient in 3D space. The naming system meets the need for nomenclature that is conducive for clear and accurate communication of beam entry relative to patient anatomy. In radiation oncology in particular, where miscommunication can cause significant harm to patients, a system that minimizes error is essential. Frequent sharing of radiation treatment information occurs not only among members within a department but also between different treatment centers. Descriptions of treatment beams are perhaps the most commonly shared information about a patient's course of treatment in radiation oncology. Automating the naming system by the use of a script reduces the potential for human error, improves efficiency, enforces consistency, and would allow an institution to convert to a new naming system with greater ease. This script has been implemented in the Department of Radiation Oncology at the University of Washington Medical Center since December 2009. It is currently part of the dosimetry protocol and is accessible by medical dosimetrists, radiation oncologists, and medical physicists. This paper highlights the advantages of using an automatic 3D beam-naming script to flawlessly and quickly identify treatment beams with unique names. Scripting in radiation treatment planning software has many uses and great potential for improving clinical care.

  12. Image quality of a cone beam O-arm 3D imaging system

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Weir, Victor; Lin, Jingying; Hsiung, Hsiang; Ritenour, E. Russell

    2009-02-01

    The O-arm is a cone-beam imaging system designed primarily to support orthopedic surgery; it is also used for image-guided and vascular surgery. Using a gantry that can be opened or closed, the O-arm can function as a 2-dimensional (2D) fluoroscopy device or collect 3-dimensional (3D) volumetric imaging data like a CT system. Clinical applications of the O-arm in spine surgical procedures, assessment of pedicle screw position, and kyphoplasty procedures show that the O-arm 3D mode provides enhanced imaging information compared to radiographs or fluoroscopy alone. In this study, the image quality of an O-arm system was quantitatively evaluated. A 20 cm diameter CATPHAN 424 phantom was scanned using the pre-programmed head protocols: small/medium (120 kVp, 100 mAs), large (120 kVp, 128 mAs), and extra-large (120 kVp, 160 mAs) in 3D mode. The high-resolution reconstruction mode (512×512×0.83 mm) was used to reconstruct images for the analysis of low- and high-contrast resolution and the noise power spectrum. The MTF was measured using the point spread function. The results show that the O-arm image is uniform but has a noise pattern which cannot be removed by simply increasing the mAs. The high-contrast resolution of the O-arm system was approximately 9 lp/cm, and the system has a 10% MTF at 0.45 mm. The low-contrast resolution could not be determined due to the noise pattern. For surgery where the location of a structure matters more than a survey of all image details, the image quality of the O-arm is clinically well accepted.
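
    Estimating the MTF from a measured point spread function, as mentioned above, amounts to taking the magnitude of its Fourier transform and normalizing it to unity at zero frequency. The sketch below assumes a 1-D PSF profile sampled at a known spacing and is not the study's exact procedure.

    ```python
    import numpy as np

    def mtf_from_psf(psf, pixel_mm):
        """MTF as the normalized magnitude of the FFT of a 1-D PSF profile.
        Returns spatial frequencies (cycles/mm) and the corresponding MTF."""
        psf = np.asarray(psf, dtype=float)
        psf = psf / psf.sum()                        # unit area
        otf = np.fft.rfft(psf)
        mtf = np.abs(otf) / np.abs(otf[0])           # normalize to 1 at f = 0
        freqs = np.fft.rfftfreq(psf.size, d=pixel_mm)
        return freqs, mtf

    # Hypothetical Gaussian PSF sampled at 0.83 mm spacing
    x = np.arange(-16, 17) * 0.83
    psf = np.exp(-x**2 / (2 * 0.6**2))
    freqs, mtf = mtf_from_psf(psf, pixel_mm=0.83)
    print("10% MTF near", freqs[mtf > 0.1].max(), "cycles/mm")
    ```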

  13. A GIS-based 3D online information system for underground energy storage in northern Germany

    NASA Astrophysics Data System (ADS)

    Nolde, Michael; Malte, Schwanebeck; Ehsan, Biniyaz; Rainer, Duttmann

    2015-04-01

    We present the concept and current state of development of a GIS-based 3D online information system for underground energy storage. Its aim is to support local authorities through the pre-selection of possible sites for thermal, electrical and substance-based underground energy storage. Since the expansion of renewable energies has become a legal requirement in Germany, the underground storage of surplus green energy (such as that produced during a heavy wind event) in the form of compressed air, gas or heated water has become increasingly important. However, the selection of suitable sites is a complex task. The presented information system uses data on geological features such as rock layers, salt domes and faults, enriched with attribute data such as rock porosity and permeability. This information is combined with surface data on the existing energy infrastructure, such as the locations of wind and biogas stations, powerline arrangement and cable capacity, and energy distribution stations. Furthermore, legal constraints such as protected areas on the surface and current underground mining permissions are used in the process of pre-selecting sites suitable for energy storage. Not only the current situation but also prospective scenarios, such as the expected growth in the amount of energy produced, are incorporated in the system. While the pre-selection process itself is completely automated, the user has full control of the weighting of the different factors via the web interface. The system is implemented as an online 3D server GIS environment, so that it can easily be used in any web browser, and the results are visualized online as interactive 3D graphics. The information system is implemented in the Python programming language in combination with current Web standards, and is built using only free and open-source software. It is being developed at Kiel University as part of the ANGUS+ project (led by Prof. Sebastian Bauer) for the federal state of

  14. Tactical 3D model generation using structure-from-motion on video from unmanned systems

    NASA Astrophysics Data System (ADS)

    Harguess, Josh; Bilinski, Mark; Nguyen, Kim B.; Powell, Darren

    2015-05-01

    Unmanned systems have been cited as one of the future enablers for all of the services to assist the warfighter in dominating the battlespace. The potential benefits of unmanned systems are being closely investigated -- from providing increased and potentially stealthy surveillance and removing the warfighter from harm's way, to reducing the manpower required to complete a specific job. In many instances, data obtained from an unmanned system is used sparingly, being applied only to the mission at hand. Other potential benefits to be gained from the data are overlooked and, after completion of the mission, the data is often discarded or lost. However, this data can be further exploited to offer tremendous tactical, operational, and strategic value. To show the potential value of this otherwise lost data, we designed a system that persistently stores the data in its original format from the unmanned vehicle and then generates a new, innovative data medium for further analysis. The system streams imagery and video from an unmanned system (original data format) and then constructs a 3D model (new data medium) using structure-from-motion. The generated 3D model provides warfighters with additional situational awareness and tactical and strategic advantages that the original video stream lacks. We present our results using simulated unmanned vehicle data, with Google Earth™ providing the imagery, as well as real-world data, including data captured from an unmanned aerial vehicle flight.
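
    A minimal two-view structure-from-motion step of the kind such a pipeline builds on is sketched below with OpenCV; the frame file names and camera intrinsics are placeholders, and a full reconstruction would chain many views and refine them with bundle adjustment.

    ```python
    import cv2
    import numpy as np

    # Two consecutive video frames and an assumed camera matrix (placeholders).
    frame1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
    frame2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
    K = np.array([[700.0, 0, 640], [0, 700.0, 360], [0, 0, 1]])

    # Match ORB features between the two frames.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(frame1, None)
    kp2, des2 = orb.detectAndCompute(frame2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float64([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float64([kp2[m.trainIdx].pt for m in matches])

    # Recover the relative camera motion and triangulate sparse 3D points.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    points3d = (pts4d[:3] / pts4d[3]).T      # 3D points, up to an unknown scale
    print(points3d.shape)
    ```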

  15. Development of a 3D Potential Field Forward Modelling System in Python

    NASA Astrophysics Data System (ADS)

    Cole, P.

    2012-12-01

    The collection of potential field data has long been a standard part of geophysical exploration. Specifically, airborne magnetic data is collected routinely in any brown-fields area because of its low cost and fast acquisition rate compared to other geophysical techniques. However, the interpretation of such data can be a daunting task, especially as 3D models become more necessary. The current trend in modelling software is either to model individual profiles, which are then "joined up" into 3D sections, or to model in full 3D using polygon-based models (Singh and Guptasarma, 2001). Unfortunately, both techniques have disadvantages. When modelling in 2.5D, the influence of other profiles on the profile currently being modelled is not truly available, and vice versa. This problem is not present in 3D, but 3D polygonal models, while easy to construct initially, are not as easy to change quickly. In some cases, the entire model must be recreated from scratch. The ability to easily change a model is the very basis of forward modelling. With this in mind, the objectives of the project were to: 1) develop software which truly models in 3D, and 2) create a system which allows rapid changes to the 3D model without the need to recreate it. The solution was to adopt a voxel-based approach rather than a polygonal approach. The solution for a cube (Blakely, 1996) was used to calculate the potential field for each voxel. The voxels are then summed over the entire volume. The language used was Python, because of its huge capacity for scientific development. It enables full 3D visualisation as well as complex mathematical routines. Some properties worth noting are: 1) although 200 rows by 200 columns by 200 layers would imply 8 million calculations, in reality, since the calculation for adjacent voxels produces the same result, only 200 calculations are necessary; 2) changes to susceptibility and density do not affect
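
    The reuse of the single-cube solution rests on translation invariance: the response depends only on the offset between source voxel and observation point, so it can be precomputed once and applied to the whole grid, for example by convolution. The sketch below illustrates that idea; `unit_cube_response` is a hypothetical stand-in for the analytic prism formula, not the project's code.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def unit_cube_response(dx, dy, dz):
        """Hypothetical placeholder for the analytic field of a unit cube
        (e.g. the prism solution in Blakely, 1996) at a relative offset
        (dx, dy, dz) in voxel units; a simple point-source fall-off is
        used here only so that the sketch runs."""
        r2 = dx * dx + dy * dy + dz * dz + 0.25
        return 1.0 / r2 ** 1.5

    # The cube response depends only on the relative offset, so it is
    # precomputed on a small kernel and reused for every voxel -- here via
    # an FFT convolution over the whole susceptibility grid.
    nx = ny = nz = 64
    susceptibility = np.zeros((nx, ny, nz))
    susceptibility[28:36, 28:36, 40:48] = 0.05        # a buried magnetic block

    offsets = np.arange(-16, 17)
    kx, ky, kz = np.meshgrid(offsets, offsets, offsets, indexing="ij")
    kernel = unit_cube_response(kx, ky, kz)

    field = fftconvolve(susceptibility, kernel, mode="same")
    print(field[:, :, -1].max())      # anomaly sampled on the top layer
    ```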

  17. A low-latency, big database system and browser for storage, querying and visualization of 3D genomic data.

    PubMed

    Butyaev, Alexander; Mavlyutov, Ruslan; Blanchette, Mathieu; Cudré-Mauroux, Philippe; Waldispühl, Jérôme

    2015-09-18

    Recent releases of genome three-dimensional (3D) structures have the potential to transform our understanding of genomes. Nonetheless, the storage technology and visualization tools need to evolve to offer to the scientific community fast and convenient access to these data. We introduce simultaneously a database system to store and query 3D genomic data (3DBG), and a 3D genome browser to visualize and explore 3D genome structures (3DGB). We benchmark 3DBG against state-of-the-art systems and demonstrate that it is faster than previous solutions, and importantly gracefully scales with the size of data. We also illustrate the usefulness of our 3D genome Web browser to explore human genome structures. The 3D genome browser is available at http://3dgb.cs.mcgill.ca/. PMID:25990738

  18. A low-latency, big database system and browser for storage, querying and visualization of 3D genomic data

    PubMed Central

    Butyaev, Alexander; Mavlyutov, Ruslan; Blanchette, Mathieu; Cudré-Mauroux, Philippe; Waldispühl, Jérôme

    2015-01-01

    Recent releases of genome three-dimensional (3D) structures have the potential to transform our understanding of genomes. Nonetheless, the storage technology and visualization tools need to evolve to offer to the scientific community fast and convenient access to these data. We introduce simultaneously a database system to store and query 3D genomic data (3DBG), and a 3D genome browser to visualize and explore 3D genome structures (3DGB). We benchmark 3DBG against state-of-the-art systems and demonstrate that it is faster than previous solutions, and importantly gracefully scales with the size of data. We also illustrate the usefulness of our 3D genome Web browser to explore human genome structures. The 3D genome browser is available at http://3dgb.cs.mcgill.ca/. PMID:25990738

  19. Stereotactic vacuum-assisted biopsies on a digital breast 3D-tomosynthesis system.

    PubMed

    Viala, Juliette; Gignier, Pierre; Perret, Baudouin; Hovasse, Claudie; Hovasse, Denis; Chancelier-Galan, Marie-Dominique; Bornet, Gregoire; Hamrouni, Adel; Lasry, Jean-Louis; Convard, Jean-Paul

    2013-01-01

    The purpose of this study was to describe our operating process and to report the results of 118 stereotactic vacuum-assisted biopsies performed on a digital breast 3D-tomosynthesis system. From October 2009 to December 2010, 118 stereotactic vacuum-assisted biopsies were performed on a digital breast 3D-tomosynthesis system. Informed consent was obtained from all patients. A total of 106 patients had one lesion and six had two lesions. Sixty-one lesions were clusters of micro-calcifications, 54 were masses and three were architectural distortions. Patients were placed in the lateral decubitus position (or seated) to allow the shortest skin-to-target approach. A dedicated compression paddle adapted to the system, graduated to allow localization in X-Y, was used. Tomosynthesis views defined the depth of the lesion, and a graduated coaxial localization kit determined the beginning of the biopsy window. Biopsies were performed with an ATEC-Suros 9-gauge handpiece. All biopsies except one reached the lesion. Five hemorrhages occurred during the procedure, but none required interruption. Eight breast hematomas all resolved spontaneously, and there was one infection. About 40% of patients had a skin ecchymosis. The procedure is fast and easy and requires a lower radiation dose than classical stereotactic biopsies. Histological analysis reported 45 benign clusters of micro-calcifications, 16 malignant clusters of micro-calcifications, 24 benign masses, and 33 malignant masses. Of 13 malignant lesions, digital 2D-mammography failed to detect eight and underestimated the classification of five. Digital breast 3D-tomosynthesis depicts malignant lesions not visualized on digital 2D-mammography. The development of a tomosynthesis biopsy unit integrated into the stereotactic system will permit histological analysis of suspicious lesions.

  20. Design and Performance Evaluation on Ultra-Wideband Time-Of-Arrival 3D Tracking System

    NASA Technical Reports Server (NTRS)

    Ni, Jianjun; Arndt, Dickey; Ngo, Phong; Dusl, John

    2012-01-01

    A three-dimensional (3D) Ultra-Wideband (UWB) Time-of-Arrival (TOA) tracking system has been studied at NASA Johnson Space Center (JSC) to provide tracking capability inside the International Space Station (ISS) modules for various applications. One application is to locate and report the places where crew members experience possibly elevated carbon-dioxide levels and feel unwell. In order to accurately locate those places in a multipath-intensive environment such as the ISS modules, a robust real-time location system (RTLS) that can provide the required accuracy and update rate is needed. A 3D UWB TOA tracking system with two-way ranging has been proposed and studied. The designed system will be tested in the Wireless Habitat Testbed, which simulates the ISS module environment. In this presentation, we discuss the 3D TOA tracking algorithm and its performance evaluation based on different tracking baseline configurations. The simulation results show that two configurations of the tracking baseline are feasible. With a 100-picosecond standard deviation (STD) of the TOA estimates, an average tracking error of 0.2392 feet (about 7 centimeters) can be achieved for the "Twisted Rectangle" configuration, while an average tracking error of 0.9183 feet (about 28 centimeters) can be achieved for the "Slightly-Twisted Top Rectangle" configuration. The tracking accuracy can be further improved by improving the STD of the TOA estimates. With a 10-picosecond STD of the TOA estimates, an average tracking error of 0.0239 feet (less than 1 centimeter) can be achieved for the "Twisted Rectangle" configuration.
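
    A common way to turn two-way range measurements into a 3D position is a nonlinear least-squares solve against the anchor geometry, sketched below with made-up anchor coordinates and the 100 ps TOA noise figure quoted above; it is not the JSC tracking algorithm itself.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Illustrative anchor geometry (metres) and true tag position; the real
    # "Twisted Rectangle" baseline coordinates are not reproduced here.
    anchors = np.array([[0, 0, 0], [4, 0, 2], [4, 3, 0], [0, 3, 2]], dtype=float)
    true_pos = np.array([2.0, 1.5, 1.0])

    c_m_per_ns = 0.299792458          # speed of light in metres per nanosecond
    toa_std_ns = 0.1                  # 100 ps standard deviation on the TOA
    ranges = np.linalg.norm(anchors - true_pos, axis=1)
    ranges += c_m_per_ns * np.random.normal(0.0, toa_std_ns, size=ranges.shape)

    def residuals(p):
        # difference between predicted and measured ranges for candidate position p
        return np.linalg.norm(anchors - p, axis=1) - ranges

    est = least_squares(residuals, x0=np.array([1.0, 1.0, 1.0])).x
    print("position error [m]:", np.linalg.norm(est - true_pos))
    ```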

  1. Evaluation of 3D Gamma index calculation implemented in two commercial dosimetry systems

    NASA Astrophysics Data System (ADS)

    Xing, Aitang; Arumugam, Sankar; Deshpande, Shrikant; George, Armia; Vial, Philip; Holloway, Lois; Goozee, Gary

    2015-01-01

    The 3D Gamma index is one of the metrics widely used for routine, patient-specific clinical quality assurance for IMRT, Tomotherapy and VMAT. The algorithms for calculating the 3D Gamma index using global and local methods, as implemented in two software tools, PTW VeriSoft® (part of the OCTAVIUS 4D dosimetry system) and 3DVH™ from Sun Nuclear, were assessed. The Gamma index calculated by the two systems was compared with manually calculated values for one data set. The Gamma pass rates calculated by the two systems were compared using 3%/3 mm, 2%/2 mm, 3%/2 mm and 2%/3 mm criteria for two additional data sets. The Gamma indices calculated by the two systems were accurate, but the Gamma pass rates calculated by the two software tools for the same data set with the same dose threshold differed, owing to different interpolation of the raw dose data and different implementations of the Gamma index calculation and other modules in the two software tools. The mean difference was -1.3% ± 3.38% (1 SD), with a maximum difference of 11.7%.
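
    For reference, a didactic brute-force version of a global 3D Gamma calculation is sketched below; real tools interpolate the dose grids much more finely and handle grid edges properly, so this only serves to make the pass/fail criterion concrete.

    ```python
    import numpy as np

    def gamma_index_3d(ref, evl, spacing_mm, dd_percent=3.0, dta_mm=3.0, search_mm=9.0):
        """Brute-force global 3D Gamma index (didactic sketch, not vendor code).
        ref, evl   : reference and evaluated dose grids with identical shape
        spacing_mm : voxel spacing, e.g. (2.0, 2.0, 2.0)
        Note: np.roll wraps around the grid edges, an acceptable shortcut here."""
        dd = dd_percent / 100.0 * ref.max()              # global dose criterion
        radius = [int(np.ceil(search_mm / s)) for s in spacing_mm]
        gamma2 = np.full(ref.shape, np.inf)
        for dz in range(-radius[0], radius[0] + 1):
            for dy in range(-radius[1], radius[1] + 1):
                for dx in range(-radius[2], radius[2] + 1):
                    dist2 = ((dz * spacing_mm[0]) ** 2 + (dy * spacing_mm[1]) ** 2 +
                             (dx * spacing_mm[2]) ** 2) / dta_mm ** 2
                    shifted = np.roll(evl, (dz, dy, dx), axis=(0, 1, 2))
                    gamma2 = np.minimum(gamma2, dist2 + (shifted - ref) ** 2 / dd ** 2)
        return np.sqrt(gamma2)

    # Pass rate at 3%/3 mm, ignoring voxels below a 10% dose threshold:
    # g = gamma_index_3d(ref, evl, (2.0, 2.0, 2.0))
    # mask = ref > 0.1 * ref.max()
    # pass_rate = 100.0 * np.mean(g[mask] <= 1.0)
    ```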

  2. Design of a large area 3D surface structure measurement system

    NASA Astrophysics Data System (ADS)

    Wang, Shenghuai; Li, Xin; Chen, Yurong; Xie, Tiebang

    2010-10-01

    Surface texture plays a vital role in modern engineering products. Surface metrology is currently undergoing a paradigm shift from 2D profile to 3D areal characterization and from stochastic to structured surfaces. Areal surface texture measurements provide more functionally significant parameters, better repeatability and more effective visual presentation than profile measurements. Existing white-light microscopy interference measurement can be used for the non-contact measurement of areal surface texture; however, the measurement field and lateral resolution of this method are restricted by the numerical aperture of the objective. To address this issue, a large-area, seamless vertical-scanning white-light interference stitching measurement system has been built. The system is based on a compound optical microscopy system and a large-travel, nanometer-level 3D precision displacement system with displacement measurement. CCD calibration and the calculation of the angles between the CCD and the level worktables are handled by the measurement system itself. A non-orthogonal worktable moving strategy is used for the seamless stitching measurement, which reduces the cost of stitching and enlarges the measurement field, thereby removing the restriction on lateral resolution and measurement field imposed by the numerical aperture of the objective. An automatic fringe search and location method for white-light interference measurement, based on the normalized standard deviation of the gray values of the interference microscopy images, is proposed to overcome the inefficiency of searching for interference fringes by hand.
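
    The fringe-search criterion can be pictured as below, where the normalized standard deviation is taken as the frame's gray-value standard deviation divided by its mean; this interpretation and the peak-picking step are assumptions of the sketch, not the authors' exact formulation.

    ```python
    import numpy as np

    def fringe_contrast(image):
        """Normalized standard deviation of the gray values of one
        interference-microscopy frame; a high value indicates that
        white-light fringes are present at the current scan height."""
        img = image.astype(np.float64)
        return img.std() / (img.mean() + 1e-12)

    def find_fringe_position(frames):
        """frames: sequence of 2-D images acquired while scanning vertically.
        Returns the index of the scan position with the strongest fringes
        together with the full contrast curve."""
        contrast = np.array([fringe_contrast(f) for f in frames])
        return int(np.argmax(contrast)), contrast
    ```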

  3. A Hybrid Synthetic Vision System for the Tele-operation of Unmanned Vehicles

    NASA Technical Reports Server (NTRS)

    Delgado, Frank; Abernathy, Mike

    2004-01-01

    A system called SmartCam3D (SC3D) has been developed to provide enhanced situational awareness for operators of a remotely piloted vehicle. SC3D is a Hybrid Synthetic Vision System (HSVS) that combines live sensor data with information from a Synthetic Vision System (SVS). By combining the dual information sources, the operators are afforded the advantages of each approach. The live sensor system provides real-time information for the region of interest. The SVS provides information-rich visuals that will function under all weather and visibility conditions. Additionally, the combination of technologies allows the system to circumvent some of the limitations of each approach. Video sensor systems are not very useful when visibility is hampered by rain, snow, sand, fog, or smoke, while an SVS can suffer from data freshness problems. Typically, an aircraft or satellite flying overhead collects the data used to create the SVS visuals; the SVS data could have been collected weeks, months, or even years ago. To that extent, the information in an SVS visual could be outdated and possibly inaccurate. SC3D was used in the remote cockpit during flight tests of the X-38 132 and 131R vehicles at the NASA Dryden Flight Research Center. SC3D was also used during the operation of military Unmanned Aerial Vehicles. This presentation will provide an overview of the system, the evolution of the system, the results of flight tests, and future plans. Furthermore, the safety benefits of the SC3D over traditional and pure synthetic vision systems will be discussed.

  4. 3-D synthetic aperture processing on high-frequency wide-beam microwave systems

    NASA Astrophysics Data System (ADS)

    Cristofani, Edison; Brook, Anna; Vandewal, Marijke

    2012-06-01

    The use of High-Frequency MicroWaves (HFMW) for high-resolution imagery has gained interest in recent years. Very promising in-depth applications can be foreseen for composite non-metal, non-polarized materials, widely used in the aeronautic and aerospace industries. Most of these materials are highly transparent in the HFMW range and, therefore, defects, delaminations or occlusions within the material can be located. This property can be exploited by applying 3-D HFMW imaging, where conventional focused imaging systems are typically used but a different approach such as Synthetic Aperture (SA) radar can be adopted. This paper presents an end-to-end 3-D imagery system for short-range, non-destructive testing based on a frequency-modulated continuous-wave HFMW sensor operating at 100 GHz, posing no health concerns to the human body and offering relatively low cost and limited power requirements. The sensor scans the material while moving sequentially in every elevation plane following a 2-D grid and uses a significantly wide-beam antenna for data acquisition, in contrast to focused systems. The collected data must be coherently combined using an SA algorithm to form focused images. Range-independent, synthetically improved cross-range resolutions are remarkable added values of SA processing. Such algorithms can be found in the literature and operate in the time or frequency domain, the former being computationally impractical and the latter being the best option for in-depth 3-D imaging. A balanced trade-off between performance and image focusing quality is investigated for several SA algorithms.

  5. Mobile robot on-board vision system

    SciTech Connect

    McClure, V.W.; Nai-Yung Chen.

    1993-06-15

    An automatic robot system is described comprising: an AGV transporting and transferring work piece, a control computer on board the AGV, a process machine for working on work pieces, a flexible robot arm with a gripper comprising two gripper fingers at one end of the arm, wherein the robot arm and gripper are controllable by the control computer for engaging a work piece, picking it up, and setting it down and releasing it at a commanded location, locating beacon means mounted on the process machine, wherein the locating beacon means are for locating on the process machine a place to pick up and set down work pieces, vision means, including a camera fixed in the coordinate system of the gripper means, attached to the robot arm near the gripper, such that the space between said gripper fingers lies within the vision field of said vision means, for detecting the locating beacon means, wherein the vision means provides the control computer visual information relating to the location of the locating beacon means, from which information the computer is able to calculate the pick up and set down place on the process machine, wherein said place for picking up and setting down work pieces on the process machine is a nest means and further serves the function of holding a work piece in place while it is worked on, the robot system further comprising nest beacon means located in the nest means detectable by the vision means for providing information to the control computer as to whether or not a work piece is present in the nest means.

  6. 3D-AQS: a three-dimensional air quality system

    NASA Astrophysics Data System (ADS)

    Hoff, Raymond M.; Engel-Cox, Jill A.; Dimmick, Fred; Szykman, James J.; Johns, Brad; Kondragunta, Shobha; Rogers, Raymond; McCann, Kevin; Chu, D. Allen; Torres, Omar; Prados, Ana; Al-Saadi, Jassim; Kittaka, Chieko; Boothe, Vickie; Ackerman, Steve; Wimmers, Anthony

    2006-08-01

    In 2006, we began a three-year project funded by the NASA Integrated Decisions Support program to develop a three-dimensional air quality system (3D-AQS). The focus of 3D-AQS is on the integration of aerosol-related NASA Earth Science Data into key air quality decision support systems used for air quality management, forecasting, and public health tracking. These will include the U.S. Environmental Protection Agency (EPA)'s Air Quality System/AirQuest and AIRNow, Infusing satellite Data into Environmental Applications (IDEA) product, U.S. Air Quality weblog (Smog Blog) and the Regional East Atmospheric Lidar Mesonet (REALM). The project will result in greater accessibility of satellite and lidar datasets that, when used in conjunction with the ground-based particulate matter monitors, will enable monitoring across horizontal and vertical dimensions. Monitoring in multiple dimensions will enhance the air quality community's ability to monitor and forecast the geospatial extent and transboundary transport of air pollutants, particularly fine particulate matter. This paper describes the concept of this multisensor system and gives current examples of the types of products that will result from it.

  8. A 3D undulatory locomotion system inspired by nematode C. elegans.

    PubMed

    Deng, Xin; Xu, Jian-Xin

    2014-01-01

    This paper presents an undulatory locomotion model inspired by C. elegans, whose nervous system and muscular structure are well studied. C. elegans is divided into 11 muscle segments according to its anatomical structure and is represented as a multi-joint rigid link model in this work. In each muscle segment there are four muscles located in four quadrants. The muscles change their lengths according to the outputs of the nervous system. In this work, dynamic neural networks (DNN) are adopted to represent the nervous system. The DNN are divided into the head DNN and the body DNN. The head DNN produces the sinusoid waves that generate the forward and backward undulatory movements. The body DNN, with 11 segments, is responsible for passing on the sinusoid wave and creating the phase lag. The 3D locomotion of this system is implemented by using the DNN to control the muscle lengths, and then using the muscle lengths to control the angles between consecutive links on both the horizontal and vertical planes. The test results show good performance of this model in both forward and backward locomotion in 3D, so it could serve as a prototype of a micro-robot for clinical use. PMID:24211936
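
    The travelling-wave idea behind the segment control can be sketched as a phase-lagged sinusoid applied to the 11 joints; the frequency, amplitude and lag values below are illustrative and are not taken from the paper's DNN.

    ```python
    import numpy as np

    N_SEGMENTS = 11
    freq_hz   = 0.5                         # undulation frequency (illustrative)
    amp_rad   = 0.4                         # joint-angle amplitude (illustrative)
    phase_lag = 2 * np.pi / N_SEGMENTS      # lag between consecutive segments

    def joint_angles(t, direction=+1):
        """Horizontal-plane joint angles of the multi-link model at time t.
        direction=+1 propagates the wave head-to-tail (forward locomotion),
        direction=-1 reverses it (backward locomotion)."""
        k = np.arange(N_SEGMENTS)
        return amp_rad * np.sin(2 * np.pi * freq_hz * t - direction * k * phase_lag)

    print(np.round(joint_angles(0.0), 3))
    print(np.round(joint_angles(0.25), 3))
    ```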

  10. Innovative LIDAR 3D Dynamic Measurement System to estimate fruit-tree leaf area.

    PubMed

    Sanz-Cortiella, Ricardo; Llorens-Calveras, Jordi; Escolà, Alexandre; Arnó-Satorra, Jaume; Ribes-Dasi, Manel; Masip-Vilalta, Joan; Camp, Ferran; Gràcia-Aguilá, Felip; Solanelles-Batlle, Francesc; Planas-DeMartí, Santiago; Pallejà-Cabré, Tomàs; Palacin-Roca, Jordi; Gregorio-Lopez, Eduard; Del-Moral-Martínez, Ignacio; Rosell-Polo, Joan R

    2011-01-01

    In this work, a LIDAR-based 3D Dynamic Measurement System is presented and evaluated for the geometric characterization of tree crops. Using this measurement system, trees were scanned from two opposing sides to obtain two three-dimensional point clouds. After registration of the point clouds, a simple and easily obtainable parameter is the number of impacts received by the scanned vegetation. The work in this study is based on the hypothesis of the existence of a linear relationship between the number of impacts of the LIDAR sensor laser beam on the vegetation and the tree leaf area. Tests performed under laboratory conditions using an ornamental tree and, subsequently, in a pear tree orchard demonstrate the correct operation of the measurement system presented in this paper. The results from both the laboratory and field tests confirm the initial hypothesis and the 3D Dynamic Measurement System is validated in field operation. This opens the door to new lines of research centred on the geometric characterization of tree crops in the field of agriculture and, more specifically, in precision fruit growing. PMID:22163926
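
    The hypothesised linear relationship between laser-beam impacts and leaf area amounts to a simple regression; the sketch below fits it on made-up numbers solely to show the calculation, not the study's data.

    ```python
    import numpy as np

    # Illustrative data: LIDAR beam impacts per scanned tree and the
    # corresponding measured leaf area (m^2); values are invented.
    impacts   = np.array([1200, 1850, 2400, 3100, 3900, 4600], dtype=float)
    leaf_area = np.array([1.1, 1.7, 2.3, 2.9, 3.8, 4.4])

    # Least-squares fit of the hypothesised linear relationship.
    slope, intercept = np.polyfit(impacts, leaf_area, 1)
    pred = slope * impacts + intercept
    r2 = 1.0 - np.sum((leaf_area - pred) ** 2) / np.sum((leaf_area - leaf_area.mean()) ** 2)
    print(f"leaf_area ~ {slope:.2e} * impacts + {intercept:.2f}  (R^2 = {r2:.3f})")
    ```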

  11. ProVac3D and Application to the Neutral Beam Injection System of ITER

    SciTech Connect

    Luo, X.; Dremel, M.; Day, Ch.

    2008-12-31

    In order to heat the confined plasma up to 100 million degrees Celsius and initiate a sustained fusion reaction, ITER will use several heating mechanisms at the same time, among which Neutral Beam Injection (NBI) systems play an important role. The NBI includes several internal gas sources and has to be operated under vacuum conditions. We have developed ProVac3D, a Monte Carlo simulation code, to calculate the gas dynamics and the density profiles in volumes of interest inside the NBI. This enables us to refine our in-situ, state-of-the-art cryogenic pump design and estimate the corresponding pumping speed.

  12. SPADAS: a high-speed 3D single-photon camera for advanced driver assistance systems

    NASA Astrophysics Data System (ADS)

    Bronzi, D.; Zou, Y.; Bellisai, S.; Villa, F.; Tisa, S.; Tosi, A.; Zappa, F.

    2015-02-01

    Advanced Driver Assistance Systems (ADAS) are the most advanced technologies to fight road accidents. Within ADAS, an important role is played by radar- and lidar-based sensors, which are mostly employed for collision avoidance and adaptive cruise control. Nonetheless, they have a narrow field-of-view and a limited ability to detect and differentiate objects. Standard camera-based technologies (e.g. stereovision) could balance these weaknesses, but they are currently not able to fulfill all automotive requirements (distance range, accuracy, acquisition speed, and frame-rate). To this purpose, we developed an automotive-oriented CMOS single-photon camera for optical 3D ranging based on indirect time-of-flight (iTOF) measurements. Imagers based on single-photon avalanche diode (SPAD) arrays offer higher sensitivity with respect to CCD/CMOS rangefinders, inherently better time resolution, higher accuracy and better linearity. Moreover, iTOF requires neither high bandwidth electronics nor short-pulsed lasers, hence allowing the development of cost-effective systems. The CMOS SPAD sensor is based on 64 × 32 pixels, each able to process both 2D intensity data and 3D depth-ranging information, with background suppression. Pixel-level memories allow fully parallel imaging and prevent motion artefacts (skew, wobble, motion blur) and partial exposure effects, which otherwise would hinder the detection of fast moving objects. The camera is housed in an aluminum case supporting a 12 mm F/1.4 C-mount imaging lens, with a 40°×20° field-of-view. The whole system is very rugged and compact and a perfect solution for a vehicle's cockpit, with dimensions of 80 mm × 45 mm × 70 mm, and less than 1 W consumption. To provide the required optical power (1.5 W, eye safe) and to allow fast (up to 25 MHz) modulation of the active illumination, we developed a modular laser source, based on five laser driver cards, with three 808 nm lasers each. We present the full characterization of
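
    The indirect time-of-flight principle the sensor relies on reduces, in its textbook four-phase form, to the demodulation below; the bucket ordering and the 25 MHz example are generic assumptions, not the SPADAS firmware.

    ```python
    import numpy as np

    C_LIGHT = 299792458.0        # speed of light, m/s

    def itof_depth(c0, c90, c180, c270, f_mod_hz):
        """Standard four-phase indirect time-of-flight demodulation.
        c0..c270 are per-pixel photon counts (correlation samples) acquired
        at 0/90/180/270 degree offsets of the modulated illumination."""
        phase = np.arctan2(c270 - c90, c0 - c180)        # in [-pi, pi]
        phase = np.mod(phase, 2 * np.pi)                 # wrap into [0, 2*pi)
        depth = C_LIGHT * phase / (4 * np.pi * f_mod_hz)
        ambiguity_range = C_LIGHT / (2 * f_mod_hz)       # unambiguous distance
        return depth, ambiguity_range

    # At 25 MHz modulation the unambiguous range c / (2 f) is about 6 m.
    ```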

  13. A 3D- and 4D-ESR imaging system for small animals.

    PubMed

    Oikawa, K; Ogata, T; Togashi, H; Yokoyama, H; Ohya-Nishiguchi, H; Kamada, H

    1996-01-01

    A new version of an in vivo ESR-CT system, composed of a custom-made 0.7 GHz ESR spectrometer, an air-core magnet with a field-scanning coil, three field-gradient coils, and two computers, enables up- and down-field rapid magnetic-field scanning under linear computer control. 3D pictures of the distribution of nitroxide radicals injected into the brains and livers of rats and mice were obtained in 1.5 min with a resolution of 1 mm. We have also succeeded in obtaining spatio-temporal images of the animals.

  14. 3D monitoring and quality control using intraoral optical camera systems.

    PubMed

    Mehl, A; Koch, R; Zaruba, M; Ender, A

    2013-01-01

    The quality of intraoral scanning systems is steadily improving, and they are becoming easier and more reliable to operate. This opens up possibilities for routine clinical applications. A special aspect is that overlaying (superimposing) situations recorded at different times facilitates an accurate three-dimensional difference analysis. Such difference analyses can also be used to advantage in other areas of dentistry where target/actual comparisons are required. This article presents potential indications using a newly developed software, explaining the functionality of the evaluation process and the prerequisites and limitations of 3D monitoring.

  15. Prototype Optical Correlator For Robotic Vision System

    NASA Technical Reports Server (NTRS)

    Scholl, Marija S.

    1993-01-01

    Known and unknown images fed in electronically at high speed. Optical correlator and associated electronic circuitry developed for vision system of robotic vehicle. System recognizes features of landscape by optical correlation between input image of scene viewed by video camera on robot and stored reference image. Optical configuration is Vander Lugt correlator, in which Fourier transform of scene formed in coherent light and spatially modulated by hologram of reference image to obtain correlation.
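
    The matched-filter operation the correlator performs optically can be reproduced digitally with FFTs, as in the sketch below; `scene` and `reference` are assumed to be grayscale numpy arrays, and the code only illustrates the principle rather than the flight hardware's processing.

    ```python
    import numpy as np

    def correlate_fft(scene, reference):
        """Digital analogue of a Vander Lugt correlator: the scene spectrum is
        multiplied by the conjugate spectrum of the reference image (a matched
        filter) and inverse-transformed to give the correlation plane."""
        S = np.fft.fft2(scene)
        R = np.fft.fft2(reference, s=scene.shape)
        corr = np.fft.ifft2(S * np.conj(R))
        return np.fft.fftshift(np.abs(corr))

    # The correlation peak marks where (and how strongly) the reference
    # feature appears in the input scene:
    # plane = correlate_fft(scene, reference)
    # peak = np.unravel_index(np.argmax(plane), plane.shape)
    ```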

  16. Zoom Vision System For Robotic Welding

    NASA Technical Reports Server (NTRS)

    Gilbert, Jeffrey L.; Hudyma, Russell M.

    1990-01-01

    Rugged zoom lens subsystem proposed for use in along-the-torch vision system of robotic welder. Enables system to adapt, via simple mechanical adjustments, to gas cups of different lengths, electrodes of different protrusions, and/or different distances between end of electrode and workpiece. Unnecessary to change optical components to accommodate changes in geometry. Easy to calibrate with respect to object in view. Provides variable focus and variable magnification.

  17. Scene interpretation module for an active vision system

    NASA Astrophysics Data System (ADS)

    Remagnino, P.; Matas, J.; Illingworth, John; Kittler, Josef

    1993-08-01

    In this paper an implementation of a high level symbolic scene interpreter for an active vision system is considered. The scene interpretation module uses low level image processing and feature extraction results to achieve object recognition and to build up a 3D environment map. The module is structured to exploit spatio-temporal context provided by existing partial world interpretations and has spatial reasoning to direct gaze control and thereby achieve efficient and robust processing using spatial focus of attention. The system builds and maintains an awareness of an environment which is far larger than a single camera view. Experiments on image sequences have shown that the system can: establish its position and orientation in a partially known environment, track simple moving objects such as cups and boxes, temporally integrate recognition results to establish or forget object presence, and utilize spatial focus of attention to achieve efficient and robust object recognition. The system has been extensively tested using images from a single steerable camera viewing a simple table top scene containing box and cylinder-like objects. Work is currently progressing to further develop its competences and interface it with the Surrey active stereo vision head, GETAFIX.

  18. Vision enhanced navigation for unmanned systems

    NASA Astrophysics Data System (ADS)

    Wampler, Brandon Loy

    A vision-based simultaneous localization and mapping (SLAM) algorithm is evaluated for use on unmanned systems. SLAM is a technique used by a vehicle to build a map of an environment while concurrently keeping track of its location within the map, without a priori knowledge. The work in this thesis is focused on using SLAM as a navigation solution when global positioning system (GPS) service is degraded or temporarily unavailable. Previous work on unmanned systems that led to the conclusion that a navigation solution better than GPS alone is needed is presented first. This previous work includes control of unmanned systems, simulation, and unmanned vehicle hardware testing. The proposed SLAM algorithm follows the work originally developed by Davison et al., who dub their algorithm MonoSLAM [1-4]. A new approach using the Pyramidal Lucas-Kanade feature tracking algorithm from Intel's OpenCV (open computer vision) library is presented as a means of keeping correct landmark correspondences as the vehicle moves through the scene. Though this landmark tracking method is unusable for long-term SLAM due to its inability to recognize revisited landmarks, in contrast to the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), its computational efficiency makes it a good candidate for short-term navigation between GPS position updates. Additional sensor information is then considered by fusing INS and GPS information into the SLAM filter. The SLAM system, in its vision-only and vision/IMU forms, is tested on a table top, in an open room, and finally in an outdoor environment. For the outdoor environment, a form of the SLAM algorithm that fuses vision, IMU, and GPS information is tested. The proposed SLAM algorithm, and its several forms, are implemented in C++ using an Extended Kalman Filter (EKF). Experiments utilizing a live video feed from a webcam are performed. The different forms of the filter are compared and conclusions are made on
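
    The OpenCV pyramidal Lucas-Kanade step described above looks roughly like the sketch below (the frame file names and tracker parameters are placeholders); in the thesis the resulting correspondences feed the EKF, which is not shown here.

    ```python
    import cv2

    # Track corner-like landmarks from one frame to the next with a
    # 3-level pyramidal Lucas-Kanade tracker.
    prev = cv2.imread("frame_prev.png", cv2.IMREAD_GRAYSCALE)
    curr = cv2.imread("frame_curr.png", cv2.IMREAD_GRAYSCALE)

    p0 = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01, minDistance=10)
    p1, status, err = cv2.calcOpticalFlowPyrLK(
        prev, curr, p0, None,
        winSize=(21, 21), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

    good_prev = p0[status.ravel() == 1].reshape(-1, 2)
    good_curr = p1[status.ravel() == 1].reshape(-1, 2)
    # good_prev/good_curr are the landmark correspondences that would be
    # passed as measurements to the EKF-based SLAM filter.
    print(len(good_curr), "landmarks tracked")
    ```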

  19. Automatic nipple detection on 3D images of an automated breast ultrasound system (ABUS)

    NASA Astrophysics Data System (ADS)

    Javanshir Moghaddam, Mandana; Tan, Tao; Karssemeijer, Nico; Platel, Bram

    2014-03-01

    Recent studies have demonstrated that applying Automated Breast Ultrasound in addition to mammography in women with dense breasts can lead to the additional detection of small, early-stage breast cancers that are occult on the corresponding mammograms. In this paper, we propose a fully automatic method for detecting the nipple location in 3D ultrasound breast images acquired from Automated Breast Ultrasound Systems. The nipple location is a valuable landmark for reporting the position of possible abnormalities in a breast or for guiding image registration. To detect the nipple location, all images were normalized. Subsequently, features were extracted in a multi-scale approach and classification experiments were performed using a gentle boost classifier to identify the nipple location. The method was applied to a dataset of 100 patients with 294 different 3D ultrasound views from Siemens and U-Systems acquisition systems. Our database is a representative sample of cases obtained in clinical practice by four medical centers. The automatic method could accurately locate the nipple in 90% of AP (anterior-posterior) views and in 79% of the other views.

  20. Stanford 3D hyperthermia treatment planning system. Technical review and clinical summary.

    PubMed

    Sullivan, D M; Ben-Yosef, R; Kapp, D S

    1993-01-01

    In the field of deep regional hyperthermia cancer therapy the Sigma 60 applicator of the BSD-2000 Hyperthermia System is one of the most widely used devices. This device employs four independent sources of radiofrequency electromagnetic energy to heat tumour sites deep within the body. The difficulty in determining the input parameters for the four sources has motivated the development of a computer-based three-dimensional (3D) treatment planning system. The Stanford 3D Hyperthermia Treatment Planning System has been in clinical use at Stanford Medical Center for the past 2 years. It utilizes a patient-specific, three-dimensional computer simulation to determine safe and effective power deposition plans. An optimization programme for the selection of the amplitudes, phases and frequency for the sources has been developed and used in the clinic. Examples of the application of the treatment planning for hyperthermia treatment of pulmonary, pelvic, and mediastinal tumours are presented. Methods for quantifying the relative effectiveness of various treatment plans are reviewed.

  1. 3D printed miniaturized spectral system for tissue fluorescence lifetime measurements

    NASA Astrophysics Data System (ADS)

    Zou, Luwei; Mahmoud, Mohamad; Fahs, Mehdi; Liu, Rui; Lo, Joe F.

    2016-04-01

    Various types of collagen, e.g. types I and III, represent the main load-bearing components in biological tissues. Their composition changes during processes such as wound healing and fibrosis. Collagens exhibit autofluorescence when excited by ultraviolet light and are distinguishable by their unique fluorescence lifetimes across a range of emission wavelengths. Therefore, we designed a miniaturized spectral-lifetime detection system for collagens as a non-invasive probe for monitoring tissue in wound healing and scarring applications. Sine-modulated LED illumination was applied to enable frequency-domain (FD) fluorescence lifetime measurements in different wavelength bands, separated via a series of longpass dichroics at 387 nm, 409 nm and 435 nm. To achieve the minute scale of the optomechanics, we employed a stereolithography-based 3D printer with <50 μm resolution to create a custom-designed optical mount in a hand-held form factor. We examined the characteristics of the 3D-printed optical system with finite element modeling to simulate the effect of thermal (LED) and mechanical (handling) strain on the optical system. Using this device, the phase shift and demodulation of collagen types were measured, where the separate spectral bands enhanced the differentiation of their lifetimes.
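
    For a single-exponential decay, the measured phase shift and demodulation each map to a lifetime through the standard frequency-domain estimators below; the 20 MHz modulation frequency and 5 ns lifetime in the example are illustrative values, not the device's operating parameters.

    ```python
    import numpy as np

    def lifetimes_from_fd(phase_rad, modulation, f_mod_hz):
        """Standard single-exponential frequency-domain estimators:
        tau_phi = tan(phase) / omega,  tau_mod = sqrt(1/m^2 - 1) / omega.
        For a true single-exponential decay the two estimates coincide."""
        omega = 2 * np.pi * f_mod_hz
        tau_phi = np.tan(phase_rad) / omega
        tau_mod = np.sqrt(1.0 / modulation ** 2 - 1.0) / omega
        return tau_phi, tau_mod

    # Example: a 5 ns lifetime probed at 20 MHz modulation.
    omega_tau = 2 * np.pi * 20e6 * 5e-9
    phase = np.arctan(omega_tau)                 # about 32 degrees
    m = 1.0 / np.sqrt(1.0 + omega_tau ** 2)      # about 0.85
    print(lifetimes_from_fd(phase, m, 20e6))     # both approximately 5e-9 s
    ```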

  2. Viewing zone duplication of multi-projection 3D display system using uniaxial crystal.

    PubMed

    Lee, Chang-Kun; Park, Soon-Gi; Moon, Seokil; Lee, Byoungho

    2016-04-18

    We propose a novel multiplexing technique for increasing the viewing zone of a multi-view based multi-projection 3D display system by employing double refraction in uniaxial crystal. When linearly polarized images from projector pass through the uniaxial crystal, two possible optical paths exist according to the polarization states of image. Therefore, the optical paths of the image could be changed, and the viewing zone is shifted in a lateral direction. The polarization modulation of the image from a single projection unit enables us to generate two viewing zones at different positions. For realizing full-color images at each viewing zone, a polarization-based temporal multiplexing technique is adopted with a conventional polarization switching device of liquid crystal (LC) display. Through experiments, a prototype of a ten-view multi-projection 3D display system presenting full-colored view images is implemented by combining five laser scanning projectors, an optically clear calcite (CaCO3) crystal, and an LC polarization rotator. For each time sequence of temporal multiplexing, the luminance distribution of the proposed system is measured and analyzed.

  3. Precision and accuracy of 3D lower extremity residua measurement systems

    NASA Astrophysics Data System (ADS)

    Commean, Paul K.; Smith, Kirk E.; Vannier, Michael W.; Hildebolt, Charles F.; Pilgram, Thomas K.

    1996-04-01

    Accurate and reproducible geometric measurement of lower extremity residua is required for custom prosthetic socket design. We compared spiral x-ray computed tomography (SXCT) and 3D optical surface scanning (OSS) with caliper measurements and evaluated the precision and accuracy of each system. Spiral volumetric CT scanned surface and subsurface information was used to make external and internal measurements, and finite element models (FEMs). SXCT and OSS were used to measure lower limb residuum geometry of 13 below knee (BK) adult amputees. Six markers were placed on each subject's BK residuum and corresponding plaster casts and distance measurements were taken to determine precision and accuracy for each system. Solid models were created from spiral CT scan data sets with the prosthesis in situ under different loads using p-version finite element analysis (FEA). Tissue properties of the residuum were estimated iteratively and compared with values taken from the biomechanics literature. The OSS and SXCT measurements were precise within 1% in vivo and 0.5% on plaster casts, and accuracy was within 3.5% in vivo and 1% on plaster casts compared with caliper measures. Three-dimensional optical surface and SXCT imaging systems are feasible for capturing the comprehensive 3D surface geometry of BK residua, and provide distance measurements statistically equivalent to calipers. In addition, SXCT can readily distinguish internal soft tissue and bony structure of the residuum. FEM can be applied to determine tissue material properties interactively using inverse methods.

  4. Accuracy evaluation of a 3D ultrasound-guided biopsy system

    NASA Astrophysics Data System (ADS)

    Wooten, Walter J.; Nye, Jonathan A.; Schuster, David M.; Nieh, Peter T.; Master, Viraj A.; Votaw, John R.; Fei, Baowei

    2013-03-01

    Early detection of prostate cancer is critical in maximizing the probability of successful treatment. Current systematic biopsy approach takes 12 or more randomly distributed core tissue samples within the prostate and can have a high potential, especially with early disease, for a false negative diagnosis. The purpose of this study is to determine the accuracy of a 3D ultrasound-guided biopsy system. Testing was conducted on prostate phantoms created from an agar mixture which had embedded markers. The phantoms were scanned and the 3D ultrasound system was used to direct the biopsy. Each phantom was analyzed with a CT scan to obtain needle deflection measurements. The deflection experienced throughout the biopsy process was dependent on the depth of the biopsy target. The results for markers at a depth of less than 20 mm, 20-30 mm, and greater than 30 mm were 3.3 mm, 4.7 mm, and 6.2 mm, respectively. This measurement encapsulates the entire biopsy process, from the scanning of the phantom to the firing of the biopsy needle. Increased depth of the biopsy target caused a greater deflection from the intended path in most cases which was due to an angular incidence of the biopsy needle. Although some deflection was present, this system exhibits a clear advantage in the targeted biopsy of prostate cancer and has the potential to reduce the number of false negative biopsies for large lesions.

  5. 3D neutronic codes coupled with thermal-hydraulic system codes for PWR, BWR and VVER reactors

    SciTech Connect

    Langenbuch, S.; Velkov, K.; Lizorkin, M.

    1997-07-01

    This paper describes the objectives of code development for coupling 3D neutronics codes with thermal-hydraulic system codes. The present status of coupling ATHLET with three 3D neutronics codes for VVER- and LWR-reactors is presented. After describing the basic features of the 3D neutronic codes BIPR-8 from Kurchatov-Institute, DYN3D from Research Center Rossendorf and QUABOX/CUBBOX from GRS, first applications of coupled codes for different transient and accident scenarios are presented. The need of further investigations is discussed.

  6. Bioinspired minimal machine multiaperture apposition vision system.

    PubMed

    Davis, John D; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael

    2008-01-01

    Traditional machine vision systems have an inherent data bottleneck that arises because data collected in parallel must be serialized for transfer from the sensor to the processor. Furthermore, much of this data is not useful for information extraction. This project takes inspiration from the visual system of the house fly, Musca domestica, to reduce this bottleneck by employing early (up front) analog preprocessing to limit the data transfer. This is a first step toward an all analog, parallel vision system. While the current implementation has serial stages, nothing would prevent it from being fully parallel. A one-dimensional photo sensor array with analog pre-processing is used as the sole sensory input to a mobile robot. The robot's task is to chase a target car while avoiding obstacles in a constrained environment. Key advantages of this approach include passivity and the potential for very high effective "frame rates."

  7. Missileborne Artificial Vision System (MAVIS)

    NASA Technical Reports Server (NTRS)

    Andes, David K.; Witham, James C.; Miles, Michael D.

    1994-01-01

    Several years ago when INTEL and China Lake designed the ETANN chip, analog VLSI appeared to be the only way to do high density neural computing. In the last five years, however, digital parallel processing chips capable of performing neural computation functions have evolved to the point of rough equality with analog chips in system level computational density. The Naval Air Warfare Center, China Lake, has developed a real time, hardware and software system designed to implement and evaluate biologically inspired retinal and cortical models. The hardware is based on the Adaptive Solutions Inc. massively parallel CNAPS system COHO boards. Each COHO board is a standard size 6U VME card featuring 256 fixed point, RISC processors running at 20 MHz in a SIMD configuration. Each COHO board has a companion board built to support a real time VSB interface to an imaging seeker, a NTSC camera, and to other COHO boards. The system is designed to have multiple SIMD machines each performing different corticomorphic functions. The system level software has been developed which allows a high level description of corticomorphic structures to be translated into the native microcode of the CNAPS chips. Corticomorphic structures are those neural structures with a form similar to that of the retina, the lateral geniculate nucleus, or the visual cortex. This real time hardware system is designed to be shrunk into a volume compatible with air launched tactical missiles. Initial versions of the software and hardware have been completed and are in the early stages of integration with a missile seeker.

  8. Visual Turing test for computer vision systems

    PubMed Central

    Geman, Donald; Geman, Stuart; Hallonquist, Neil; Younes, Laurent

    2015-01-01

    Today, computer vision systems are tested by their accuracy in detecting and localizing instances of objects. As an alternative, and motivated by the ability of humans to provide far richer descriptions and even tell a story about an image, we construct a “visual Turing test”: an operator-assisted device that produces a stochastic sequence of binary questions from a given test image. The query engine proposes a question; the operator either provides the correct answer or rejects the question as ambiguous; the engine proposes the next question (“just-in-time truthing”). The test is then administered to the computer-vision system, one question at a time. After the system’s answer is recorded, the system is provided the correct answer and the next question. Parsing is trivial and deterministic; the system being tested requires no natural language processing. The query engine employs statistical constraints, learned from a training set, to produce questions with essentially unpredictable answers—the answer to a question, given the history of questions and their correct answers, is nearly equally likely to be positive or negative. In this sense, the test is only about vision. The system is designed to produce streams of questions that follow natural story lines, from the instantiation of a unique object, through an exploration of its properties, and on to its relationships with other uniquely instantiated objects. PMID:25755262

  9. CISUS: an integrated 3D ultrasound system for IGT using a modular tracking API

    NASA Astrophysics Data System (ADS)

    Boctor, Emad M.; Viswanathan, Anand; Pieper, Steve; Choti, Michael A.; Taylor, Russell H.; Kikinis, Ron; Fichtinger, Gabor

    2004-05-01

    Ultrasound has become popular in clinical/surgical applications, both as the primary image guidance modality and also in conjunction with other modalities like CT or MRI. Three dimensional ultrasound (3DUS) systems have also demonstrated usefulness in image-guided therapy (IGT). At the same time, however, current lack of open-source and open-architecture multi-modal medical visualization systems prevents 3DUS from fulfilling its potential. Several stand-alone 3DUS systems, like Stradx or In-Vivo exist today. Although these systems have been found to be useful in real clinical setting, it is difficult to augment their functionality and integrate them in versatile IGT systems. To address these limitations, a robotic/freehand 3DUS open environment (CISUS) is being integrated into the 3D Slicer, an open-source research tool developed for medical image analysis and surgical planning. In addition, the system capitalizes on generic application programming interfaces (APIs) for tracking devices and robotic control. The resulting platform-independent open-source system may serve as a valuable tool to the image guided surgery community. Other researchers could straightforwardly integrate the generic CISUS system along with other functionalities (i.e. dual view visualization, registration, real-time tracking, segmentation, etc) to rapidly create their medical/surgical applications. Our current driving clinical application is robotically assisted and freehand 3DUS-guided liver ablation, which is fully being integrated under the CISUS-3D Slicer. Initial functionality and pre-clinical feasibility are demonstrated on phantom and ex-vivo animal models.

  10. Salt as a 3D element in structural modelling - example from the Central European Basin System

    NASA Astrophysics Data System (ADS)

    Maystrenko, Y. P.; Scheck-Wenderoth, M.; Bayer, U.

    2010-12-01

    The Central European Basin System (CEBS) covers the northern part of Central and Western Europe and contains up to 12 km of Permian to Cenozoic deposits. Initiated in the Early Permian, the Central European Basin System accumulated Lower Permian clastics overlain by significant amount of Upper Permian (Zechstein) salt. Post-Permian differentiation of the basin system was controlled by several phases of tectonic activity. These tectonic phases not only provoked regional shifts in subsidence and erosion but also triggered movements of the Upper Permian (Zechstein) salt. Salt rise strongly influenced the Meso-Cenozoic structural evolution in terms of mechanical decoupling of the sedimentary cover from its basement. As a result of several phases of salt tectonics, the CEBS displays a wide variety of salt structures (walls, diapirs and pillows). In order to investigate the interaction of salt movements, deposition and tectonics, the 3D structural model of the CEBS has been constructed covering the entire salt basin (Northern and Southern Permian basins). Seismic interpretation and 3D backstripping have been used to investigate both the present-day structure and the evolution of the CEBS. 3D backstripping includes 3D salt redistribution in response to the changing load conditions in the salt cover. The results of 3D modelling of salt movements and seismic data indicate that the primary initiation of salt movements occurred during the Triassic. The Triassic regional extensional event initiated a phase of salt movements within the coeval depocenters of the CEBS, such as the Glueckstadt Graben, the Horn Graben, the Fjerritslev Trough and the adjacent Himmerland Graben in Denmark, as well as the Polish Basin. The Early Triassic (Buntsandstein) and the Late Triassic (Middle-Late Keuper) extensional events triggered strongest salt movements within the central part of the Glueckstadt Graben. During the Late Jurassic-Early Cretaceous, major erosion regionally truncated the study

  11. A Flexible Fringe Projection Vision System with Extended Mathematical Model for Accurate Three-Dimensional Measurement

    PubMed Central

    Xiao, Suzhi; Tao, Wei; Zhao, Hui

    2016-01-01

    In order to acquire an accurate three-dimensional (3D) measurement, the traditional fringe projection technique applies complex and laborious procedures to compensate for the errors that exist in the vision system. However, the error sources in the vision system are very complex, such as lens distortion, lens defocus, and fringe pattern nonsinusoidality. Some errors cannot even be explained or rendered with clear expressions and are difficult to compensate directly as a result. In this paper, an approach is proposed that avoids the complex and laborious compensation procedure for error sources but still promises accurate 3D measurement. It is realized by the mathematical model extension technique. The parameters of the extended mathematical model for the 'phase to 3D coordinates transformation' are derived using the least-squares parameter estimation algorithm. In addition, a phase-coding method based on a frequency analysis is proposed for the absolute phase map retrieval to spatially isolated objects. The results demonstrate the validity and the accuracy of the proposed flexible fringe projection vision system on spatially continuous and discontinuous objects for 3D measurement. PMID:27136553
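
    One simple way to picture a least-squares 'phase to coordinate' calibration is a per-pixel polynomial fitted from reference planes at known heights, as sketched below; the plane positions, the synthetic phase data and the polynomial order are assumptions of the sketch and stand in for the paper's extended model.

    ```python
    import numpy as np

    # Known calibration-plane heights (mm) and a synthetic phase response;
    # in practice phase_stack[k] would be the unwrapped phase map measured
    # with the real fringe projection system on plane k.
    z_planes = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
    phase_vals = 0.05 * z_planes + 1.2 + 0.001 * z_planes ** 2
    phase_stack = phase_vals[:, None, None] * np.ones((1, 4, 4))   # fake 4x4 sensor

    order = 2
    H, W = phase_stack.shape[1:]
    coeffs = np.empty((order + 1, H, W))
    for i in range(H):                      # least-squares fit per pixel
        for j in range(W):
            coeffs[:, i, j] = np.polyfit(phase_stack[:, i, j], z_planes, order)

    def phase_to_height(phase_map):
        """Evaluate the fitted per-pixel polynomial on a new phase map."""
        z = np.zeros_like(phase_map)
        for k in range(order + 1):          # Horner evaluation
            z = z * phase_map + coeffs[k]
        return z

    print(phase_to_height(phase_stack[2]).round(2))   # recovers roughly 20 mm
    ```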

  13. A preliminary evaluation work on a 3D ultrasound imaging system for 2D array transducer

    NASA Astrophysics Data System (ADS)

    Zhong, Xiaoli; Li, Xu; Yang, Jiali; Li, Chunyu; Song, Junjie; Ding, Mingyue; Yuchi, Ming

    2016-04-01

    This paper presents a preliminary evaluation of a pre-designed 3-D ultrasound imaging system. The system mainly consists of four parts: a 7.5 MHz, 24×24 2-D array transducer; the transmit/receive circuit; the power supply; and the data acquisition and real-time imaging module. The row-column addressing scheme is adopted for the transducer fabrication, which greatly reduces the number of active channels. The element area of the transducer is 4.6 mm by 4.6 mm. Four kinds of tests were carried out to evaluate the imaging performance, including the penetration depth range, axial and lateral resolution, positioning accuracy and 3-D imaging frame rate. Several strongly reflecting metal objects, fixed in a water tank, were selected for imaging because of the low signal-to-noise ratio of the transducer. The distance between the transducer and the tested objects, the thickness of the aluminum, and the seam width of the aluminum sheet were measured by a calibrated micrometer to evaluate the penetration depth, the axial resolution and the lateral resolution, respectively. The experimental results showed that the imaging penetration depth range was from 1.0 cm to 6.2 cm, the axial and lateral resolution were 0.32 mm and 1.37 mm respectively, the imaging speed was up to 27 frames per second and the positioning accuracy was 9.2%.
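
    As a rough plausibility check (not part of the record), the reported axial resolution can be compared with the acoustic wavelength at the stated 7.5 MHz centre frequency, assuming a sound speed of about 1480 m/s in water.

      # Rough plausibility check (assumed water sound speed; not from the paper).
      c_water = 1480.0   # m/s
      f0 = 7.5e6         # Hz, centre frequency from the abstract

      wavelength_mm = c_water / f0 * 1e3
      print(f"wavelength: {wavelength_mm:.3f} mm")            # ~0.20 mm
      # Axial resolution ~ half the spatial pulse length = n_cycles * wavelength / 2;
      # the reported 0.32 mm would correspond to a pulse of roughly 3 cycles.
      n_cycles = 2 * 0.32 / wavelength_mm
      print(f"implied pulse length: {n_cycles:.1f} cycles")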

  14. Development of a Wireless and Near Real-Time 3D Ultrasound Strain Imaging System.

    PubMed

    Chen, Zhaohong; Chen, Yongdong; Huang, Qinghua

    2016-04-01

    Ultrasound elastography is an important medical imaging tool for characterization of lesions. In this paper, we present a wireless and near real-time 3D ultrasound strain imaging system. It uses a 3D translating device to control a commercial linear ultrasound transducer to collect pre-compression and post-compression radio-frequency (RF) echo signal frames. The RF frames are wirelessly transferred to a high-performance server via a local area network (LAN). A dynamic programming strain estimation algorithm is implemented with the compute unified device architecture (CUDA) on the graphic processing unit (GPU) in the server to calculate the strain image after receiving a pre-compression RF frame and a post-compression RF frame at the same position. Each strain image is inserted into a strain volume which can be rendered in near real-time. We take full advantage of the translating device to precisely control the probe movement and compression. The GPU-based parallel computing techniques are designed to reduce the computation time. Phantom and in vivo experimental results demonstrate that our system can generate strain volumes with good quality and display an incrementally reconstructed volume image in near real-time. PMID:26954841
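
    The paper's GPU dynamic-programming estimator is not described in detail here; the sketch below illustrates the simpler textbook route of estimating displacement between pre- and post-compression RF windows by normalized cross-correlation and taking the axial gradient as strain. The synthetic RF data and the 1% compression are assumptions for the example.

      import numpy as np

      def estimate_strain(rf_pre, rf_post, win=64, step=32, max_lag=50):
          """Window-wise displacement via normalized cross-correlation, then
          strain as the axial gradient of displacement (simplified estimator)."""
          centers, disp = [], []
          for start in range(max_lag, len(rf_pre) - win - max_lag, step):
              ref = rf_pre[start:start + win]
              best_lag, best_cc = 0, -np.inf
              for lag in range(-max_lag, max_lag + 1):
                  seg = rf_post[start + lag:start + lag + win]
                  cc = np.dot(ref, seg) / (np.linalg.norm(ref) * np.linalg.norm(seg) + 1e-12)
                  if cc > best_cc:
                      best_cc, best_lag = cc, lag
              centers.append(start + win // 2)
              disp.append(best_lag)
          centers, disp = np.asarray(centers, float), np.asarray(disp, float)
          return centers, np.gradient(disp, centers)   # strain = d(displacement)/d(depth)

      # Synthetic test: 1% uniform compression of a white-noise RF line.
      rng = np.random.default_rng(1)
      depth = np.arange(4096)
      rf_pre = rng.normal(size=depth.size)
      rf_post = np.interp(depth * 1.01, depth, rf_pre)   # samples shift ~1% with depth
      centers, strain = estimate_strain(rf_pre, rf_post)
      print("mean estimated strain: %.3f" % strain.mean())   # expect roughly -0.01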

  15. A PET/CT Directed, 3D Ultrasound-Guided Biopsy System for Prostate Cancer

    PubMed Central

    Master, Viraj; Nieh, Peter; Akbari, Hamed; Yang, Xiaofeng; Fenster, Aaron; Schuster, David

    2015-01-01

    Prostate cancer affects 1 in 6 men in the USA. Systematic transrectal ultrasound (TRUS)-guided biopsy is the standard method for a definitive diagnosis of prostate cancer. However, this “blind” biopsy approach can miss at least 20% of prostate cancers. In this study, we are developing a PET/CT directed, 3D ultrasound image-guided biopsy system for improved detection of prostate cancer. In order to plan the biopsy in three dimensions, we developed an automatic segmentation method based on the wavelet transform for 3D TRUS images of the prostate. The segmentation was tested in five patients with a DICE overlap ratio of more than 91%. In order to incorporate PET/CT images into ultrasound-guided biopsy, we developed a nonrigid registration algorithm for TRUS and PET/CT images. The registration method has been tested in a prostate phantom with a target registration error (TRE) of less than 0.4 mm. The segmentation and registration methods are two key components of the multimodality molecular image-guided biopsy system. PMID:26866061
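
    The DICE overlap ratio quoted above is the standard measure 2|A∩B|/(|A|+|B|); a minimal sketch for binary segmentation masks follows (the masks below are synthetic, not patient data).

      import numpy as np

      def dice(a: np.ndarray, b: np.ndarray) -> float:
          """DICE coefficient of two binary masks: 2|A∩B| / (|A|+|B|)."""
          a, b = a.astype(bool), b.astype(bool)
          return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

      auto = np.zeros((64, 64, 64), dtype=bool); auto[10:50, 12:52, 8:40] = True
      manual = np.zeros_like(auto);              manual[12:52, 12:50, 10:42] = True
      print(f"DICE = {dice(auto, manual):.3f}")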

  16. A biofidelic 3D culture model to study the development of brain cellular systems

    PubMed Central

    Ren, M.; Du, C.; Herrero Acero, E.; Tang-Schomer, M. D.; Özkucur, N.

    2016-01-01

    Little is known about how cells assemble as systems during corticogenesis to generate collective functions. We built a neurobiology platform that consists of fetal rat cerebral cortical cells grown within 3D silk scaffolds (SF). Ivermectin (Ivm), a glycine receptor (GLR) agonist, was used to modulate cell resting membrane potential (Vmem) according to methods described in a previous work that implicated Ivm in the arrangement and connectivity of cortical cell assemblies. The cells developed into distinct populations of neuroglial stem/progenitor cells, mature neurons or epithelial-mesenchymal cells. Importantly, the synchronized electrical activity in the newly developed cortical assemblies could be recorded as local field potential (LFP) measurements. This study therefore describes the first example of the development of a biologically relevant cortical plate assembly outside of the body. This model provides i) a preclinical basis for engineering cerebral cortex tissue autografts and ii) a biofidelic 3D culture model for investigating biologically relevant processes during the functional development of cerebral cortical cellular systems. PMID:27112667

  17. Active optical system for advanced 3D surface structuring by laser remelting

    NASA Astrophysics Data System (ADS)

    Pütsch, O.; Temmler, A.; Stollenwerk, J.; Willenborg, E.; Loosen, P.

    2015-03-01

    Structuring by laser remelting enables completely new possibilities for designing surfaces since material is redistributed but not wasted. In addition to technological advantages, cost and time benefits result from shortened process times, the avoidance of harmful chemicals and the elimination of subsequent finishing steps such as cleaning and polishing. The functional principle requires a completely new optical machine technology that maintains the spatial and temporal superposition and manipulation of three different laser beams emitted from two laser sources of different wavelengths. The optical system has already been developed and demonstrated for the processing of flat samples of hot- and cold-working steel. However, since the structuring of 3D injection molds in particular represents an application with high innovation potential, the optical system has to take into account the elliptical beam geometry that occurs when the laser beams irradiate a curved surface. To take full advantage of structuring by remelting for the processing of 3D surfaces, additional optical functionality, called EPS (elliptical pre-shaping), has to be integrated into the existing set-up. The development of the beam-shaping devices requires not only an analysis of the beam projection mechanisms but also a suitable optical design. Both aspects are discussed in this paper.
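
    The paper's optical design is not reproduced here; the sketch below only illustrates the geometric reason elliptical pre-shaping is needed, assuming a nominal beam diameter and a simple tilt model: a circular beam striking a surface tilted by angle theta spreads to d/cos(theta) along the tilt axis, so pre-compressing the beam by cos(theta) along that axis restores a round spot.

      import numpy as np

      d = 0.5                                    # mm, nominal beam diameter (assumed)
      for theta_deg in (0, 15, 30, 45, 60):
          theta = np.radians(theta_deg)
          major = d / np.cos(theta)              # spot elongation on the tilted surface
          print(f"tilt {theta_deg:2d} deg -> spot {d:.2f} x {major:.2f} mm, "
                f"pre-shaping factor {np.cos(theta):.2f}")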

  18. Detecting method of subjects' 3D positions and experimental advanced camera control system

    NASA Astrophysics Data System (ADS)

    Kato, Daiichiro; Abe, Kazuo; Ishikawa, Akio; Yamada, Mitsuho; Suzuki, Takahito; Kuwashima, Shigesumi

    1997-04-01

    Steady progress is being made in the development of an intelligent robot camera capable of automatically shooting pictures with a powerful sense of reality or tracking objects whose shooting requires advanced techniques. Currently, only experienced broadcasting cameramen can provide these pictures. To develop an intelligent robot camera with these abilities, we need to clearly understand how a broadcasting cameraman assesses his shooting situation and how his camera is moved during shooting. We use a real-time analyzer to study a cameraman's work and his gaze movements at studios and during sports broadcasts. Here, we have developed a method for detecting subjects' 3D positions and an experimental camera control system to help us further understand the movements required for an intelligent robot camera. The features are as follows: (1) Two sensor cameras shoot a moving subject and detect colors, producing its 3D coordinates. (2) The system can drive a camera based on camera movement data obtained by a real-time analyzer. 'Moving shoot' is the name we have given to the object position detection technology on which this system is based. We used it in a soccer game, producing computer graphics showing how players moved. These results will also be reported.
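
    How the two sensor cameras produce 3D coordinates is not spelled out in this record; a minimal two-view triangulation sketch (linear DLT) is shown below. The camera matrices, baseline and pixel detections are synthetic assumptions, not the system's calibration.

      import numpy as np

      def triangulate(P1, P2, x1, x2):
          """Least-squares 3D point from two projection matrices and pixel coords."""
          A = np.vstack([
              x1[0] * P1[2] - P1[0],
              x1[1] * P1[2] - P1[1],
              x2[0] * P2[2] - P2[0],
              x2[1] * P2[2] - P2[1],
          ])
          _, _, vt = np.linalg.svd(A)
          X = vt[-1]
          return X[:3] / X[3]

      K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
      P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])               # camera 1 at origin
      P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0], [0]])])   # camera 2, 0.5 m baseline
      X_true = np.array([0.3, -0.1, 4.0, 1.0])                        # homogeneous 3D point
      x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
      x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
      print("recovered 3D position:", np.round(triangulate(P1, P2, x1, x2), 3))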

  19. Experimental validation of a commercial 3D dose verification system for intensity-modulated arc therapies

    NASA Astrophysics Data System (ADS)

    Boggula, Ramesh; Lorenz, Friedlieb; Mueller, Lutz; Birkner, Mattias; Wertz, Hansjoerg; Stieler, Florian; Steil, Volker; Lohr, Frank; Wenz, Frederik

    2010-10-01

    We validate the dosimetric performance of COMPASS®, a novel 3D quality assurance system for verification of volumetric-modulated arc therapy (VMAT) treatment plans that can correlate the delivered dose to the patient's anatomy, taking into account tissue inhomogeneity. The accuracy of treatment delivery was assessed by COMPASS® for 12 VMAT plans, and the resulting assessments were evaluated against ionization chamber and film measurements. Dose-volume relationships were evaluated by COMPASS® for three additional treatment plans and these were used to verify the accuracy of the treatment planning dose calculations. For the phantom plans, the COMPASS® results agreed well with the ionization chamber (≤3%) and film measurements (73-99% of points with gamma(3%/3 mm) < 1 and 98-100% with gamma(5%/5 mm) < 1). Differences in dose-volume statistics for the average dose to the PTV were within 2.5% for the three treatment plans. For structures located in the low-dose region, a maximum difference of <9% was observed. In its current implementation, the system could measure the delivered dose with sufficient accuracy and could project the 3D dose distribution directly onto the patient's anatomy. Slight deviations were found for large open fields. These could be minimized by improving the COMPASS® in-built beam model.
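
    The gamma(3%/3 mm) pass rates quoted above combine a dose-difference and a distance-to-agreement criterion; the sketch below evaluates a simplified global gamma index on a synthetic 1D profile (clinical implementations work on 2D/3D dose grids, and the profiles here are assumed).

      import numpy as np

      def gamma_1d(x, d_ref, d_eval, dd=0.03, dta=3.0):
          """Global gamma index: dose difference normalised to max reference dose."""
          d_norm = dd * d_ref.max()
          gammas = np.empty_like(d_ref)
          for i, (xi, di) in enumerate(zip(x, d_ref)):
              dist2 = ((x - xi) / dta) ** 2
              dose2 = ((d_eval - di) / d_norm) ** 2
              gammas[i] = np.sqrt(np.min(dist2 + dose2))
          return gammas

      x = np.linspace(0, 100, 201)                    # mm
      d_ref = np.exp(-0.5 * ((x - 50) / 15) ** 2)     # reference dose profile
      d_eval = np.exp(-0.5 * ((x - 51) / 15) ** 2)    # evaluated profile, 1 mm shift
      g = gamma_1d(x, d_ref, d_eval)
      print(f"gamma pass rate (gamma < 1): {100 * np.mean(g < 1):.1f}%")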

  20. An efficient topology adaptation system for parametric active contour segmentation of 3D images

    NASA Astrophysics Data System (ADS)

    Abhau, Jochen; Scherzer, Otmar

    2008-03-01

    Active contour models have already been used successfully for segmentation of organs from medical images in 3D. In implicit models, the contour is given as the isosurface of a scalar function, and therefore topology adaptations are handled naturally during a contour evolution. Nevertheless, explicit or parametric models are often preferred since user interaction and special geometric constraints are usually easier to incorporate. Although many researchers have studied topology adaptation algorithms in explicit mesh evolutions, no stable algorithm is known for interactive applications. In this paper, we present a topology adaptation system which consists of two novel ingredients. A spatial hashing technique, whose expected running time is linear in the number of mesh vertices, is used to detect self-colliding triangles of the mesh. For the topology change procedure, we have developed formulas based on homology theory. During a contour evolution, we just have to choose between a few possible mesh retriangulations by local triangle-triangle intersection tests. Our algorithm has several advantages compared to existing ones. Since the new algorithm does not require any global mesh reparametrizations, it is very efficient. Since the topology adaptation system requires neither a constant sampling density of the mesh vertices nor especially smooth meshes, mesh evolution steps can be performed stably with a rather coarse mesh. We apply our algorithm to 3D ultrasonic data, showing that accurate segmentation is obtained in a few seconds.
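
    A minimal sketch of the spatial-hashing step described above: each triangle's bounding box is hashed into uniform grid cells, and only triangles sharing a cell become candidate pairs for the exact triangle-triangle intersection test (which is not performed here). The mesh data and cell size are illustrative assumptions.

      import numpy as np
      from collections import defaultdict
      from itertools import combinations

      def candidate_pairs(vertices, triangles, cell=1.0):
          """Hash triangle bounding boxes into grid cells; return candidate pairs."""
          grid = defaultdict(set)
          for t, tri in enumerate(triangles):
              pts = vertices[tri]
              lo = np.floor(pts.min(axis=0) / cell).astype(int)
              hi = np.floor(pts.max(axis=0) / cell).astype(int)
              for i in range(lo[0], hi[0] + 1):
                  for j in range(lo[1], hi[1] + 1):
                      for k in range(lo[2], hi[2] + 1):
                          grid[(i, j, k)].add(t)
          pairs = set()
          for bucket in grid.values():
              pairs.update(combinations(sorted(bucket), 2))
          return pairs

      verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                        [0.2, 0.2, 0.1], [1.2, 0.2, 0.1], [0.2, 1.2, 0.1],
                        [5, 5, 5], [6, 5, 5], [5, 6, 5]], dtype=float)
      tris = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8]])
      print("candidate pairs:", candidate_pairs(verts, tris))   # only the two nearby triangles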

  1. A small animal image guided irradiation system study using 3D dosimeters

    NASA Astrophysics Data System (ADS)

    Qian, Xin; Admovics, John; Wuu, Cheng-Shie

    2015-01-01

    In a high-resolution image-guided small animal irradiation platform, a cone beam computed tomography (CBCT) system is integrated with an irradiation unit for precise targeting. Precise quality assurance is essential for both the imaging and irradiation components. Conventional commissioning techniques with films face major challenges due to alignment uncertainty and labour-intensive film preparation and scanning. In addition, due to the novel design of this platform, the mouse stage rotation for CBCT imaging is perpendicular to the gantry rotation for irradiation. Because these two rotations are associated with different mechanical systems, a discrepancy between their rotation isocenters exists. In order to deliver x-rays precisely, it is essential to verify the coincidence of the imaging and irradiation isocenters. A 3D PRESAGE dosimeter provides an excellent tool for checking dosimetry and verifying coincidence of the irradiation and imaging coordinates in one system. Dosimetric measurements were performed to obtain beam profiles and percent depth dose (PDD). Isocentricity and coincidence of the mouse stage and gantry rotations were evaluated with starshots acquired using PRESAGE dosimeters. A single PRESAGE dosimeter can provide 3-D information on both geometric and dosimetric uncertainty, which is crucial for translational studies.

  2. Single Nanoparticle to 3D Supercage: Framing for an Artificial Enzyme System.

    PubMed

    Cai, Ren; Yang, Dan; Peng, Shengjie; Chen, Xigao; Huang, Yun; Liu, Yuan; Hou, Weijia; Yang, Shengyuan; Liu, Zhenbao; Tan, Weihong

    2015-11-01

    A facile strategy has been developed to fabricate Cu(OH)2 supercages (SCs) as an artificial enzyme system with intrinsic peroxidase-mimic activities (PMA). SCs with high catalytic activity and excellent recyclability were generated via direct conversion of amorphous Cu(OH)2 nanoparticles (NPs) at room temperature. More specifically, the process that takes a single nanoparticle to a 3D supercage involves two basic steps. First, with addition of a copper-ammonia complex, the Cu(2+) ions that are located on the surface of amorphous Cu(OH)2 NPs would evolve into a fine lamellar structure by coordination and migration and eventually convert to 1D nanoribbons around the NPs. Second, accompanied by the migration of Cu(2+), a hollow cavity is generated in the inner NPs, such that a single nanoparticle eventually becomes a nanoribbon-assembled 3D hollow cage. These Cu(OH)2 SCs were then engineered as an artificial enzymatic system with higher efficiency for intrinsic PMA than the peroxidase activity of a natural enzyme, horseradish peroxidase. PMID:26464081

  3. Model-based vision system for mobile robot position estimation

    NASA Astrophysics Data System (ADS)

    D'Orazio, Tiziana; Capozzo, Liborio; Ianigro, Massimo; Distante, Arcangelo

    1994-02-01

    The development of an autonomous mobile robot is a central problem in artificial intelligence and robotics. A vision system can be used to recognize naturally occurring landmarks located in known positions. The problem considered here is that of finding the location and orientation of a mobile robot using a 3-D image taken by a CCD camera located on the robot. The naturally occurring landmarks that we use are the corners of the room, extracted by an edge detection algorithm from a 2-D image of the indoor scene. The location and orientation of the vehicle are then calculated from the perspective information of the landmarks in the scene of the room where the robot moves.
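
    The paper's perspective computation is not given in this record; as one hedged illustration of the general idea, the sketch below recovers a 2D robot pose from bearings to known landmark positions (here, room corners) by nonlinear least squares. The landmark layout, true pose and noise level are synthetic assumptions.

      import numpy as np
      from scipy.optimize import least_squares

      landmarks = np.array([[0.0, 0.0], [6.0, 0.0], [6.0, 4.0], [0.0, 4.0]])  # corners (m)
      true_pose = np.array([2.0, 1.5, np.radians(30)])                        # x, y, heading

      def bearings(pose, pts):
          """Landmark bearings relative to the robot heading."""
          dx, dy = (pts - pose[:2]).T
          return np.arctan2(dy, dx) - pose[2]

      measured = bearings(true_pose, landmarks) + np.random.default_rng(0).normal(0, 0.01, 4)

      def residuals(pose):
          r = bearings(pose, landmarks) - measured
          return np.arctan2(np.sin(r), np.cos(r))     # wrap angle differences to [-pi, pi]

      sol = least_squares(residuals, x0=np.array([3.0, 2.0, 0.0]))
      print("estimated pose (x, y, heading):", np.round(sol.x, 3))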

  4. Feasibility Study for Ballet E-Learning: Automatic Composition System for Ballet "Enchainement" with Online 3D Motion Data Archive

    ERIC Educational Resources Information Center

    Umino, Bin; Longstaff, Jeffrey Scott; Soga, Asako

    2009-01-01

    This paper reports on "Web3D dance composer" for ballet e-learning. Elementary "petit allegro" ballet steps were enumerated in collaboration with ballet teachers, digitally acquired through 3D motion capture systems, and categorised into families and sub-families. Digital data was manipulated into virtual reality modelling language (VRML) and fit…

  5. Nonlaser-based 3D surface imaging

    SciTech Connect

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J.

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capture, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
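
    For the stereo-vision option, depth follows from the disparity between corresponding pixels in the two camera images via Z = f·B/d; a tiny numerical sketch with assumed focal length and baseline:

      # Stereo depth from disparity (all numbers are illustrative assumptions).
      focal_px = 1200.0        # focal length in pixels
      baseline_m = 0.10        # camera separation in metres
      for disparity_px in (60, 30, 15):
          depth_m = focal_px * baseline_m / disparity_px
          print(f"disparity {disparity_px:3d} px -> depth {depth_m:.2f} m")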

  6. THREE DIMENSIONAL INTEGRATED CHARACTERIZATION AND ARCHIVING SYSTEM (3D-ICAS)

    SciTech Connect

    George Jarvis

    2001-06-18

    The overall objective of this project is to develop an integrated system that remotely characterizes, maps, and archives measurement data of hazardous decontamination and decommissioning (D&D) areas. The system will generate a detailed 3-dimensional topography of the area as well as real-time quantitative measurements of volatile organics and radionuclides. The system will analyze substrate materials consisting of concrete, asbestos, and transite. The system will permanently archive the data measurements for regulatory and data integrity documentation. Exposure limits, rest breaks, and the donning and removal of protective garments generate waste in the form of contaminated protective garments and equipment. Survey times are increased, and handling and transporting potentially hazardous materials incurs additional costs. Off-site laboratory analysis is expensive and time-consuming, often necessitating delay of further activities until results are received. The Three Dimensional Integrated Characterization and Archiving System (3D-ICAS) has been developed to alleviate some of these problems. 3D-ICAS provides a flexible system for physical, chemical and nuclear measurements, reduces costs and improves data quality. Operationally, 3D-ICAS performs real-time determinations of hazardous and toxic contamination. A prototype demonstration unit is available for use in early 2000. The tasks in this phase included: (1) Mobility Platforms: integrate hardware onto mobility platforms, upgrade surface sensors, develop unit operations and protocol. (2) System Developments: evaluate metals detection capability using x-ray fluorescence technology. (3) IWOS Upgrades: upgrade the IWOS software and hardware for compatibility with the mobility platform. The system was modified, tested and debugged during 1999 and 2000. The 3D-ICAS was shipped on 11 May 2001 to FIU-HCET for demonstration and validation of the design modifications. These modifications included simplifying the design from a two

  7. Noise analysis for near field 3-D FM-CW radar imaging systems

    SciTech Connect

    Sheen, David M.

    2015-06-19

    Near field radar imaging systems are used for several applications including concealed weapon detection in airports and other high-security venues. Despite the near-field operation, phase noise and thermal noise can limit the performance in several ways including reduction in system sensitivity and reduction of image dynamic range. In this paper, the effects of thermal noise, phase noise, and processing gain are analyzed in the context of a near field 3-D FM-CW imaging radar as might be used for concealed weapon detection. In addition to traditional frequency domain analysis, a time-domain simulation is employed to graphically demonstrate the effect of these noise sources on a fast-chirping FM-CW system.
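
    The record does not give the radar's parameters; the back-of-the-envelope sketch below shows the basic FM-CW relations behind such an analysis, namely beat frequency versus range, range resolution from swept bandwidth, and the coherent processing gain of range-FFT processing against white noise. All values are illustrative assumptions.

      import numpy as np

      c = 3e8           # m/s
      B = 10e9          # swept bandwidth, Hz (assumed)
      T = 1e-3          # chirp duration, s (assumed)
      R = 0.5           # target range, m (assumed)

      f_beat = 2 * R * B / (c * T)                  # beat frequency for range R
      range_res = c / (2 * B)                       # range resolution
      N = 4096                                      # FFT length used for range processing
      processing_gain_db = 10 * np.log10(N)         # coherent SNR gain against white noise
      print(f"beat frequency  : {f_beat / 1e3:.1f} kHz")
      print(f"range resolution: {range_res * 100:.2f} cm")
      print(f"processing gain : {processing_gain_db:.1f} dB")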

  8. Multitask neural network for vision machine systems

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.; Knopf, George K.

    1991-02-01

    A multi-task dynamic neural network that can be programmed for storing, processing and encoding spatio-temporal visual information is presented in this paper. This dynamic neural network, called the PN-network, is composed of numerous densely interconnected neural subpopulations which reside in one of two coupled sublayers, P or N. The subpopulations in the P-sublayer transmit an excitatory, or positive, influence onto all interconnected units, whereas the subpopulations in the N-sublayer transmit an inhibitory, or negative, influence. The dynamical activity generated by each subpopulation is given by a nonlinear first-order system. By varying the coupling strength between these different subpopulations, it is possible to generate three distinct modes of dynamical behavior useful for performing vision-related tasks. It is postulated that the PN-network can function as a basic programmable processor for novel vision machine systems.
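
    The abstract does not give the PN-network's equations; the sketch below simulates a generic pair of coupled nonlinear first-order excitatory/inhibitory subpopulations purely to illustrate the kind of dynamics described. All weights, time constants and inputs are assumed for the example.

      import numpy as np

      def sigmoid(x):
          return 1.0 / (1.0 + np.exp(-x))

      def simulate(w_pp=12.0, w_pn=10.0, w_np=12.0, w_nn=2.0, tau=10.0,
                   drive=1.5, dt=0.1, steps=3000):
          """Euler integration of one excitatory (P) and one inhibitory (N) unit."""
          p, n = 0.1, 0.1
          trace = []
          for _ in range(steps):
              dp = (-p + sigmoid(w_pp * p - w_pn * n + drive)) / tau
              dn = (-n + sigmoid(w_np * p - w_nn * n)) / tau
              p, n = p + dt * dp, n + dt * dn
              trace.append((p, n))
          return np.array(trace)

      trace = simulate()
      print("final P, N activity:", np.round(trace[-1], 3))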

  9. Applications of Augmented Vision Head-Mounted Systems in Vision Rehabilitation

    PubMed Central

    Peli, Eli; Luo, Gang; Bowers, Alex; Rensing, Noa

    2007-01-01

    Vision loss typically affects either the wide peripheral vision (important for mobility), or central vision (important for seeing details). Traditional optical visual aids usually recover the lost visual function, but at a high cost for the remaining visual function. We have developed a novel concept of vision-multiplexing using augmented vision head-mounted display systems to address vision loss. Two applications are discussed in this paper. In the first, minified edge images from a head-mounted video camera are presented on a see-through display providing visual field expansion for people with peripheral vision loss, while still enabling the full resolution of the residual central vision to be maintained. The concept has been applied in daytime and nighttime devices. A series of studies suggested that the system could help with visual search, obstacle avoidance, and nighttime mobility. Subjects were positive in their ratings of device cosmetics and ergonomics. The second application is for people with central vision loss. Using an on-axis aligned camera and display system, central visibility is enhanced with 1:1 scale edge images, while still enabling the wide field of the unimpaired peripheral vision to be maintained. The registration error of the system was found to be low in laboratory testing. PMID:18172511

  10. Spatially monitoring oxygen level in 3D microfabricated cell culture systems using optical oxygen sensing beads.

    PubMed

    Wang, Lin; Acosta, Miguel A; Leach, Jennie B; Carrier, Rebecca L

    2013-04-21

    The capability to measure and monitor local oxygen concentration at the single-cell level (tens-of-microns scale) is often desirable but difficult to achieve in cell culture. In this study, biocompatible oxygen sensing beads were prepared and tested for their potential for real-time monitoring and mapping of local oxygen concentration in 3D micro-patterned cell culture systems. Each oxygen sensing bead is composed of a silica core loaded with both an oxygen-sensitive Ru(Ph2phen3)Cl2 dye and an oxygen-insensitive Nile blue reference dye, and a poly-dimethylsiloxane (PDMS) shell rendering biocompatibility. Human intestinal epithelial Caco-2 cells were cultivated on a series of PDMS and type I collagen based substrates patterned with micro-well arrays for 3 or 7 days, and then brought into contact with oxygen sensing beads. Using an image analysis algorithm to convert the fluorescence intensity of the beads to partial oxygen pressure in the culture system, the tens-of-microns-sized oxygen sensing beads enabled spatial measurement of local oxygen concentration in the microfabricated system. Results generally indicated a lower oxygen level inside wells than on top of wells, and a dependence of the local oxygen level on the structural features of the cell culture surfaces. Interestingly, the chemical composition of the cell culture substrates also appeared to affect the oxygen level, with type I collagen based cell culture systems having lower oxygen concentrations than PDMS based cell culture systems. In general, the results suggest that oxygen sensing beads can be utilized to achieve real-time and local monitoring of micro-environment oxygen levels in 3D microfabricated cell culture systems.
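
    The record does not give the calibration used to convert fluorescence intensity to oxygen partial pressure; a common approach for such ratiometric beads is the Stern-Volmer relation I0/I = 1 + Ksv·pO2, with the oxygen-insensitive reference dye used to normalise the signal. The constant and intensity ratios below are assumptions, not the paper's calibration.

      # Hedged Stern-Volmer sketch (assumed constants, not the paper's calibration).
      Ksv = 0.02          # 1/mmHg, assumed Stern-Volmer constant
      I0_ratio = 2.5      # sensitive/reference intensity ratio at zero oxygen (assumed)

      def pO2_from_ratio(ratio):
          """Invert Stern-Volmer: pO2 = (I0/I - 1) / Ksv, with I the ratiometric signal."""
          return (I0_ratio / ratio - 1.0) / Ksv

      for measured_ratio in (2.5, 1.8, 1.2, 0.9):
          print(f"ratio {measured_ratio:.2f} -> pO2 ~ {pO2_from_ratio(measured_ratio):6.1f} mmHg")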

  11. Portable high-intensity focused ultrasound system with 3D electronic steering, real-time cavitation monitoring, and 3D image reconstruction algorithms: a preclinical study in pigs

    PubMed Central

    2014-01-01

    Purpose: The aim of this study was to evaluate the safety and accuracy of a new portable ultrasonography-guided high-intensity focused ultrasound (USg-HIFU) system with a 3-dimensional (3D) electronic steering transducer, a simultaneous ablation and imaging module, real-time cavitation monitoring, and 3D image reconstruction algorithms. Methods: To address the accuracy of the transducer, hydrophones in a water chamber were used to assess the generation of sonic fields. An animal study was also performed in five pigs by ablating in vivo thighs by single-point sonication (n=10) or volume sonication (n=10) and ex vivo kidneys by single-point sonication (n=10). Histological and statistical analyses were performed. Results: In the hydrophone study, peak voltages were detected within 1.0 mm from the targets on the y- and z-axes and within 2.0-mm intervals along the x-axis (z-axis, direction of ultrasound propagation; y- and x-axes, perpendicular to the direction of ultrasound propagation). Twenty-nine of 30 HIFU sessions successfully created ablations at the target. The in vivo porcine thigh study showed only a small discrepancy (width, 0.5-1.1 mm; length, 3.0 mm) between the planning ultrasonograms and the pathological specimens. Inordinate thermal damage was not observed in the adjacent tissues or sonic pathways in the in vivo thigh and ex vivo kidney studies. Conclusion: Our study suggests that this new USg-HIFU system may be a safe and accurate technique for ablating soft tissues and encapsulated organs. PMID:25038809

  12. Comparison of Failure Modes in 2-D and 3-D Woven Carbon Phenolic Systems

    NASA Technical Reports Server (NTRS)

    Rossman, Grant A.; Stackpoole, Mairead; Feldman, Jay; Venkatapathy, Ethiraj; Braun, Robert D.

    2013-01-01

    NASA Ames Research Center is developing Woven Thermal Protection System (WTPS) materials as a new class of heatshields for entry vehicles (Stackpoole). Currently, there are few options for ablative entry heatshield materials, none of which is ideally suited to the planetary probe missions currently of interest to NASA. While carbon phenolic was successfully used for the Pioneer Venus and Galileo (Jupiter) missions, the heritage constituents are no longer available. An alternate carbon phenolic would need to be qualified for probe missions, and it is most efficient at heat fluxes greater than those currently of interest. Additional TPS materials such as Avcoat and PICA are not sufficiently robust for the heat fluxes required. As a result, there is a large TPS gap between the materials efficient at very high conditions (carbon phenolic) and those that are effective at low-to-moderate conditions (all others). Development of 3D Woven TPS is intended to fill this gap, targeting mid-density weaves that could withstand mid-range heat fluxes between 1100 W/sq cm and 8000 W/sq cm (Venkatapathy 2012). Preliminary experimental studies have been performed to show the feasibility of WTPS as a future mid-range TPS material. One study, performed in the mARC Jet Facility at NASA Ames Research Center, characterized the performance of a 3D Woven TPS sample and compared it to 2D carbon phenolic samples at ply angles of 0°, 23.5°, and 90°. Each sample contained similar volume fractions of phenolic and carbon fiber for experimental consistency. The goal of this study was to compare the performance of the TPS materials by evaluating the resulting recession and failure modes. After exposing both samples to similar heat flux and pressure conditions, the 2D carbon phenolic laminate was shown to experience significant delamination between layers and further pocketing underneath the separated layers. The 3D Woven TPS sample did not experience the delamination or pocketing

  13. Stereoscopic Vision System For Robotic Vehicle

    NASA Technical Reports Server (NTRS)

    Matthies, Larry H.; Anderson, Charles H.

    1993-01-01

    Distances estimated from images by cross-correlation. Two-camera stereoscopic vision system with onboard processing of image data developed for use in guiding robotic vehicle semiautonomously. Combination of semiautonomous guidance and teleoperation useful in remote and/or hazardous operations, including clean-up of toxic wastes, exploration of dangerous terrain on Earth and other planets, and delivery of materials in factories where unexpected hazards or obstacles can arise.

  14. Arena3D: visualizing time-driven phenotypic differences in biological systems

    PubMed Central

    2012-01-01

    comparison and identification of high impact knockdown targets. Conclusions We present a new visualization approach for perturbation screens with multiple phenotypic outcomes. The novel functionality implemented in Arena3D enables effective understanding and comparison of temporal patterns within morphological layers, to help with the system-wide analysis of dynamic processes. Arena3D is available free of charge for academics as a downloadable standalone application from: http://arena3d.org/. PMID:22439608

  15. A GIS Based 3D Online Decision Assistance System for Underground Energy Storage in Northern Germany

    NASA Astrophysics Data System (ADS)

    Nolde, M.; Schwanebeck, M.; Biniyaz, E.; Duttmann, R.

    2014-12-01

    We present a GIS-based 3D online decision assistance system for underground energy storage. Its aim is to support local land-use planning authorities through pre-selection of possible sites for thermal, electrical and substantial underground energy storage. Since the expansion of renewable energies has become a legal requirement in Germany, the underground storage of surplus green energy (such as that produced during a heavy wind event) in the form of compressed air, gas or heated water has become increasingly important. However, the selection of suitable sites is a complex task. The assistance system uses data on geological features such as rock layers, salt caverns and faults, enriched with attribute data such as rock porosity and permeability. This information is combined with surface data on the existing energy infrastructure, such as the locations of wind and biogas stations, power line arrangement and cable capacity, and energy distribution stations. Furthermore, legal obligations such as protected areas on the surface and current underground mining permissions are used in the decision-finding process. Not only the current situation but also prospective scenarios, such as the expected growth in the amount of energy produced, are incorporated in the system. The decision process is carried out via the 'Analytic Hierarchy Process' (AHP) methodology of the 'Multi Object Decision Making' (MODM) approach. While the process itself is completely automated, the user has full control of the weighting of the different factors via the web interface. The system is implemented as an online 3D server GIS environment, with no software needing to be installed on the user side. The results are visualized as interactive 3D graphics. The implementation of the assistance system is based exclusively on free and open source software, and utilizes the 'Python' programming language in combination with current web technologies, such as 'HTML5', 'CSS3' and 'JavaScript'. It is
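
    The AHP weighting step mentioned above derives factor priorities from a pairwise-comparison matrix; the sketch below computes them as the normalised principal eigenvector, with an illustrative (not the system's) set of factors and judgements.

      import numpy as np

      # Assumed factors: geological suitability, grid proximity, legal constraints.
      A = np.array([[1.0,   3.0, 5.0],
                    [1/3.,  1.0, 2.0],
                    [1/5.,  1/2., 1.0]])

      eigvals, eigvecs = np.linalg.eig(A)
      k = np.argmax(eigvals.real)
      weights = np.abs(eigvecs[:, k].real)
      weights /= weights.sum()

      # Consistency ratio check (Saaty's random index RI = 0.58 for a 3x3 matrix).
      ci = (eigvals[k].real - len(A)) / (len(A) - 1)
      print("weights:", np.round(weights, 3), " CR:", round(ci / 0.58, 3))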

  16. Jammed elastic shells - a 3D experimental soft frictionless granular system

    NASA Astrophysics Data System (ADS)

    Jose, Jissy; Blab, Gerhard A.; van Blaaderen, Alfons; Imhof, Arnout

    2015-03-01

    We present a new experimental system of monodisperse, soft, frictionless, fluorescently labelled elastic shells for the characterization of structure, universal scaling laws and force networks in 3D jammed matter. The interesting property of these elastic shells is that they can reversibly deform and therefore serve as sensors of local stress in jammed matter. Similar to other soft particles, like emulsion droplets and bubbles in foam, the shells can be packed to volume fractions close to unity, which allows us to characterize the contact force distribution and universal scaling laws as a function of volume fraction, and to compare them with theoretical predictions and numerical simulations. However, our shells, unlike other soft particles, deform rather differently at large stresses. They deform without conserving their inner volume, by forming dimples at contact regions. At each contact, one of the shells buckled with a dimple and the other remained spherical, closely resembling overlapping spheres. We conducted 3D quantitative analysis using confocal microscopy and image analysis routines specially developed for these particles. In addition, we analysed the randomness of the dimpling process, which was found to be volume-fraction dependent.

  17. An open source 3-d printed modular micro-drive system for acute neurophysiology.

    PubMed

    Patel, Shaun R; Ghose, Kaushik; Eskandar, Emad N

    2014-01-01

    Current commercial electrode micro-drives that allow independent positioning of multiple electrodes are expensive. Custom-designed solutions developed by individual laboratories require fabrication by experienced machinists working in well-equipped machine shops and are therefore difficult to disseminate into widespread use. Here, we present an easy-to-assemble modular micro-drive system for acute primate neurophysiology (PriED) that utilizes rapid prototyping (3-d printing) and readily available off-the-shelf parts. The use of 3-d printed parts drastically reduces the cost of the device, making it available to labs without the resources of sophisticated machine shops. The direct transfer of designs from electronic files to physical parts also gives researchers opportunities to easily modify and implement custom solutions for specific recording needs. We also demonstrate a novel model of data sharing for the scientific community: a publicly available repository of drive designs. Researchers can download the drive part designs from the repository, print, assemble and then use the drives. Importantly, users can upload their modified designs with annotations, making them easily available for others to use. PMID:24736691

  18. Probabilistic 3-D time-lapse inversion of magnetotelluric data: application to an enhanced geothermal system

    NASA Astrophysics Data System (ADS)

    Rosas-Carbajal, M.; Linde, N.; Peacock, J.; Zyserman, F. I.; Kalscheuer, T.; Thiel, S.

    2015-12-01

    S