A 3D photographic capsule endoscope system with full field of view
NASA Astrophysics Data System (ADS)
Ou-Yang, Mang; Jeng, Wei-De; Lai, Chien-Cheng; Kung, Yi-Chinn; Tao, Kuan-Heng
2013-09-01
Current capsule endoscopes use a single camera to capture images of the intestinal surface. Such a system can locate an abnormal point but cannot provide detailed information about it. Using two cameras can generate 3D images, but the visual plane changes as the capsule endoscope rotates, so the two cameras cannot capture the image information completely. To solve this problem, this research presents a new kind of capsule endoscope for capturing 3D images: a 3D photographic capsule endoscope system. The system uses three cameras to capture images in real time, increasing the viewing range up to 2.99 times with respect to the two-camera system. Paired with a 3D monitor, the system provides exact information about symptomatic points, helping doctors diagnose disease.
3D fingerprint imaging system based on full-field fringe projection profilometry
NASA Astrophysics Data System (ADS)
Huang, Shujun; Zhang, Zonghua; Zhao, Yan; Dai, Jie; Chen, Chao; Xu, Yongjia; Zhang, E.; Xie, Lili
2014-01-01
As a unique, unchangeable, and easily acquired biometric, the fingerprint has been widely studied in academia and applied in many fields over the years. Traditional fingerprint recognition methods are based on 2D fingerprint features. However, the fingerprint is a 3D biological characteristic; the mapping from 3D to 2D loses one dimension of information and causes nonlinear distortion of the captured fingerprint. It is therefore becoming more and more important to obtain 3D fingerprint information for recognition. In this paper, a novel 3D fingerprint imaging system based on the fringe projection technique is presented to obtain 3D features and the corresponding color texture information. A series of color sinusoidal fringe patterns with optimum three-fringe numbers is projected onto a finger surface. From another viewpoint, the fringe patterns deformed by the finger surface are captured by a CCD camera, and 3D shape data of the finger can be obtained from the captured fringe pattern images. This paper describes the prototype of the 3D fingerprint imaging system, including the principle of 3D fingerprint acquisition, the hardware design of the 3D imaging system, the 3D calibration of the system, and the software development. Experiments acquiring several 3D fingerprint datasets have been carried out, and the results demonstrate the feasibility of the proposed 3D fingerprint imaging system.
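The fringe-projection pipeline described above recovers a wrapped phase map from several phase-shifted fringe images. The sketch below illustrates the standard N-step phase-shifting formula on synthetic data; it is a generic illustration, not the paper's specific optimum three-fringe-number method, and the image size and modulation values are made up for the demo.

```python
import numpy as np

def wrapped_phase(images):
    """Wrapped phase from N equally phase-shifted sinusoidal fringe images.

    images: array of shape (N, H, W); the k-th image carries a phase
    shift of 2*pi*k/N.  Standard N-step phase-shifting formula.
    """
    n = len(images)
    shifts = 2 * np.pi * np.arange(n) / n
    num = sum(I * np.sin(d) for I, d in zip(images, shifts))
    den = sum(I * np.cos(d) for I, d in zip(images, shifts))
    return np.arctan2(-num, den)   # wrapped to (-pi, pi]

# Synthetic check: a known phase ramp is recovered up to 2*pi wrapping.
H, W = 32, 32
true_phase = np.linspace(0, 2, W) * np.pi * np.ones((H, 1))
imgs = np.stack([128 + 100 * np.cos(true_phase + 2 * np.pi * k / 4)
                 for k in range(4)])
phi = wrapped_phase(imgs)
```

Converting the wrapped phase to absolute height requires phase unwrapping and system calibration, which the abstract covers separately.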
Introducing the depth transfer curve for 3D capture system characterization
NASA Astrophysics Data System (ADS)
Goma, Sergio R.; Atanassov, Kalin; Ramachandra, Vikas
2011-03-01
3D technology has recently made the transition from movie theaters to consumer electronic devices such as 3D cameras and camcorders. In addition to what 2D imaging conveys, 3D content also contains information about scene depth. Scene depth is simulated through the strongest depth cue of the brain, namely retinal disparity. This can be achieved by capturing an image with horizontally separated cameras: objects at different depths are projected with different horizontal displacements in the left and right camera images. These images, when fed separately to either eye, lead to retinal disparity. Since the perception of depth is the single most important 3D imaging capability, an evaluation procedure is needed to quantify depth-capture characteristics. Evaluating depth-capture characteristics subjectively is very difficult, since the intended and/or unintended side effects of 3D image fusion (depth interpretation) by the brain are not immediately perceived by the observer, nor do such effects lend themselves easily to objective quantification. Objective evaluation of 3D camera depth characteristics is an important tool for "black box" characterization of 3D cameras. In this paper we propose a methodology to evaluate the depth-capture capabilities of 3D cameras.
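The disparity-to-depth relation underlying the retinal-disparity cue can be stated in one line for a rectified, parallel two-camera rig: depth Z = f * B / d, with focal length f, baseline B, and horizontal disparity d. A minimal sketch (the focal length and baseline values are assumed for the demo):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a point from its horizontal disparity in a rectified
    stereo pair: Z = f * B / d (pinhole model, parallel cameras)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# With an assumed 1000-px focal length and 65 mm baseline,
# 10 px of disparity corresponds to 6.5 m of depth.
z = depth_from_disparity(10.0, 1000.0, 0.065)
```

The inverse relationship between disparity and depth is what a "depth transfer curve" characterizes end to end for a real capture system.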
Three-dimensional ghost imaging lidar via sparsity constraint
NASA Astrophysics Data System (ADS)
Gong, Wenlin; Zhao, Chengqiang; Yu, Hong; Chen, Mingliang; Xu, Wendong; Han, Shensheng
2016-05-01
Three-dimensional (3D) remote imaging attracts increasing attention for capturing a target's characteristics. Although great progress in 3D remote imaging has been made with methods such as scanning imaging lidar and pulsed floodlight-illumination imaging lidar, either the detection range or the application mode of present methods is limited. Ghost imaging via sparsity constraint (GISC) enables the reconstruction of a two-dimensional N-pixel image from far fewer than N measurements. Combining the GISC technique with the depth information of targets captured through time-resolved measurements, we report a 3D GISC lidar system and experimentally show that a 3D scene at a range of about 1.0 km can be stably reconstructed from global measurements even below the Nyquist limit. Compared with existing 3D optical imaging methods, 3D GISC offers both high efficiency in information extraction and high sensitivity in detection. The approach can be generalized to nonvisible wavebands and applied to other 3D imaging areas.
Real-time 3D video compression for tele-immersive environments
NASA Astrophysics Data System (ADS)
Yang, Zhenyu; Cui, Yi; Anwar, Zahid; Bocchino, Robert; Kiyanclar, Nadir; Nahrstedt, Klara; Campbell, Roy H.; Yurcik, William
2006-01-01
Tele-immersive systems can improve productivity and aid communication by allowing distributed parties to exchange information via a shared immersive experience. The TEEVE research project at the University of Illinois at Urbana-Champaign and the University of California at Berkeley seeks to foster the development and use of tele-immersive environments by a holistic integration of existing components that capture, transmit, and render three-dimensional (3D) scenes in real time to convey a sense of immersive space. However, the transmission of 3D video poses significant challenges. First, it is bandwidth-intensive, as it requires the transmission of multiple large-volume 3D video streams. Second, existing schemes for 2D color video compression such as MPEG, JPEG, and H.263 cannot be applied directly because the 3D video data contain depth as well as color information. Our goal is to explore a different region of the 3D compression design space, considering factors including complexity, compression ratio, quality, and real-time performance. To investigate these trade-offs, we present and evaluate two simple 3D compression schemes. For the first scheme, we use color reduction to compress the color information, which we then compress along with the depth information using zlib. For the second scheme, we use motion JPEG to compress the color information and run-length encoding followed by Huffman coding to compress the depth information. We apply both schemes to 3D videos captured from a real tele-immersive environment. Our experimental results show that: (1) the compressed data preserve enough information to communicate the 3D images effectively (min. PSNR > 40) and (2) even without inter-frame motion estimation, very high compression ratios (avg. > 15) are achievable at speeds sufficient to allow real-time communication (avg. ~ 13 ms per 3D video frame).
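The depth half of the second scheme (run-length encoding followed by entropy coding) can be sketched in a few lines. This is an illustrative toy, not the TEEVE implementation: zlib's DEFLATE stage stands in for the Huffman coder mentioned in the abstract, and the flat test depth map is fabricated to show why RLE suits depth data with long constant runs.

```python
import zlib
import numpy as np

def rle_encode(depth_row):
    """Run-length encode a 1D array into (value, count) pairs, counts capped at 255."""
    runs = []
    prev, count = depth_row[0], 1
    for v in depth_row[1:]:
        if v == prev and count < 255:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = v, 1
    runs.append((prev, count))
    return runs

def compress_depth(depth):
    """RLE each row, then entropy-code the byte stream (zlib's DEFLATE
    standing in for the Huffman coder described in the text)."""
    stream = bytearray()
    for row in depth:
        for value, count in rle_encode(row):
            stream += bytes([int(value), count])
    return zlib.compress(bytes(stream))

depth = np.full((64, 64), 17, dtype=np.uint8)   # flat depth map: long runs
blob = compress_depth(depth)
ratio = depth.size / len(blob)
```

On real depth maps the runs are shorter, which is why the paper reports average (not worst-case) compression ratios above 15.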
Depth and thermal sensor fusion to enhance 3D thermographic reconstruction.
Cao, Yanpeng; Xu, Baobei; Ye, Zhangyu; Yang, Jiangxin; Cao, Yanlong; Tisse, Christel-Loic; Li, Xin
2018-04-02
Three-dimensional geometrical models with incorporated surface temperature data provide important information for various applications such as medical imaging, energy auditing, and intelligent robots. In this paper we present a robust method for mobile, real-time 3D thermographic reconstruction through depth and thermal sensor fusion. A multimodal imaging device consisting of a thermal camera and an RGB-D sensor is calibrated geometrically and used for data capture. Based on the underlying principle that temperature information remains robust against illumination and viewpoint changes, we present a Thermal-guided Iterative Closest Point (T-ICP) methodology to facilitate reliable 3D thermal scanning applications. The pose of the sensing device is initially estimated using correspondences found by maximizing the thermal consistency between consecutive infrared images. The coarse pose estimate is further refined by finding the motion parameters that minimize a combined geometric and thermographic loss function. Experimental results demonstrate that complementary information captured by multimodal sensors can be utilized to improve the performance of 3D thermographic reconstruction. Through effective fusion of thermal and depth data, the proposed approach generates more accurate 3D thermal models using significantly less scanning data.
3D palmprint data fast acquisition and recognition
NASA Astrophysics Data System (ADS)
Wang, Xiaoxu; Huang, Shujun; Gao, Nan; Zhang, Zonghua
2014-11-01
This paper presents a fast 3D (three-dimensional) palmprint capture system and develops an efficient 3D palmprint feature extraction and recognition method. To rapidly acquire the accurate 3D shape and texture of a palmprint, a DLP projector triggers a CCD camera to realize synchronization. By generating and projecting green fringe pattern images onto the measured palm surface, 3D palmprint data are calculated from the fringe pattern images. A periodic feature vector can be derived from the calculated 3D palmprint data, so undistorted 3D biometrics are obtained. Using the obtained 3D palmprint data, feature-matching tests have been carried out with Gabor filters, competitive rules, and the mean curvature. Experimental results on capturing 3D palmprints show that the proposed acquisition method can quickly obtain the 3D shape information of a palmprint, and initial recognition experiments show that the proposed method is efficient when using 3D palmprint data.
Display of travelling 3D scenes from single integral-imaging capture
NASA Astrophysics Data System (ADS)
Martinez-Corral, Manuel; Dorado, Adrian; Hong, Seok-Min; Sola-Pikabea, Jorge; Saavedra, Genaro
2016-06-01
Integral imaging (InI) is a 3D auto-stereoscopic technique that captures and displays 3D images. We present a method for easily projecting the information recorded with this technique by transforming the integral image into a plenoptic image, as well as choosing, at will, the field of view (FOV) and the focused plane of the displayed plenoptic image. Furthermore, with this method we can generate, from a single integral image, a sequence of images that simulates a camera travelling through the scene. The application of this method makes it possible to improve the quality of displayed 3D images and videos.
Comparison of three-dimensional surface-imaging systems.
Tzou, Chieh-Han John; Artner, Nicole M; Pona, Igor; Hold, Alina; Placheta, Eva; Kropatsch, Walter G; Frey, Manfred
2014-04-01
In recent decades, three-dimensional (3D) surface-imaging technologies have gained popularity worldwide, but because most published articles that mention them are technical, clinicians often have difficulty gaining a proper understanding of them. This article aims to provide the reader with relevant information on 3D surface-imaging systems. In it, we compare the most recent technologies to reveal their differences. We have accessed five international companies with the latest technologies in 3D surface-imaging systems: 3dMD, Axisthree, Canfield, Crisalix and Dimensional Imaging (Di3D; in alphabetical order). We evaluated their technical equipment, independent validation studies and corporate backgrounds. The fastest capturing devices are the 3dMD and Di3D systems, capable of capturing images within 1.5 and 1 ms, respectively. All companies provide software for tissue modifications. Additionally, 3dMD, Canfield and Di3D can fuse computed tomography (CT)/cone-beam computed tomography (CBCT) images into their 3D surface-imaging data. 3dMD and Di3D provide 4D capture systems, which allow capturing the movement of a 3D surface over time. Crisalix differs greatly from the other four systems as it is purely web based and realised via cloud computing. 3D surface-imaging systems are becoming important in today's plastic surgical set-ups, taking surgeons to a new level of communication with patients, surgical planning and outcome evaluation. Technologies used in 3D surface-imaging systems and their intended fields of application vary among the companies evaluated. Potential users should define their requirements and the intended use of 3D surface-imaging systems in their clinical and research environments before making the final decision for purchase. Copyright © 2014 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
Online coupled camera pose estimation and dense reconstruction from video
Medioni, Gerard; Kang, Zhuoliang
2016-11-01
A product may receive each image in a stream of video images of a scene and, before processing the next image, generate information indicative of the position and orientation of the image capture device that captured the image at the time of capture. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and, for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three-dimensional (3D) model of at least a portion of the scene that appear likely to correspond to the identified image feature point. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of the corresponding model feature points. The product may update a 3D model of at least a portion of the scene following the receipt of each video image and before processing the next video image, based on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update to the model.
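The core geometric step above, finding a consistent projection of the 3D model onto the image from 2D-3D feature correspondences, is classically solved by camera resection. The sketch below uses the linear Direct Linear Transform (DLT) as a stand-in for the patent's (unspecified) pose-estimation method; the camera intrinsics and point set are invented for the synthetic check.

```python
import numpy as np

def resection_dlt(points_3d, points_2d):
    """Direct Linear Transform: estimate a 3x4 projection matrix P from
    n >= 6 model/image point pairs (a linear stand-in for the
    pose-estimation step described in the abstract)."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(A, float))
    return vt[-1].reshape(3, 4)   # right singular vector of smallest singular value

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Synthetic check with an assumed camera (800-px focal length, principal
# point (320, 240), translated 5 m along the optical axis).
rng = np.random.default_rng(0)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P_true = K @ np.hstack([np.eye(3), [[0.0], [0.0], [5.0]]])
pts3d = rng.uniform(-1, 1, (8, 3))
pts2d = [project(P_true, X) for X in pts3d]
P_est = resection_dlt(pts3d, pts2d)
```

In a real system this linear estimate would be wrapped in RANSAC to handle the ambiguous and outlier correspondences the patent text describes.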
Compressive Coded-Aperture Multimodal Imaging Systems
NASA Astrophysics Data System (ADS)
Rueda-Chacon, Hoover F.
Multimodal imaging refers to the framework of capturing images that span different physical domains such as space, spectrum, depth, time, polarization, and others. For instance, spectral images are modeled as 3D cubes with two spatial coordinates and one spectral coordinate. Three-dimensional cubes spanning just the spatial domain are referred to as depth volumes, and imaging cubes varying in time, spectrum, or depth are referred to as 4D images. Nature itself spans different physical domains, so imaging our real world demands capturing information in at least six different domains simultaneously, giving rise to 3D-spatial+spectral+polarized dynamic sequences. Conventional imaging devices, however, can capture dynamic sequences with up to three spectral channels in real time through the use of color sensors. Capturing more spectral channels requires scanning methodologies, which demand long acquisition times. In general, multimodal imaging to date requires a sequence of different imaging sensors, placed in tandem, to simultaneously capture the different physical properties of a scene; different fusion techniques are then employed to merge all the individual information into a single image. Therefore, new ways to efficiently capture more than three spectral channels of 3D time-varying spatial information, with a single sensor or few sensors, are of high interest. Compressive spectral imaging (CSI) is an imaging framework that seeks to optimally capture spectral imagery (tens of spectral channels of 2D spatial information) using fewer measurements than required by traditional sensing procedures, which follow Shannon-Nyquist sampling. Instead of capturing direct one-to-one representations of natural scenes, CSI systems acquire linear random projections of the scene and then solve an optimization algorithm to estimate the 3D spatio-spectral data cube by exploiting the theory of compressive sensing (CS).
To date, the coding procedure in CSI has been realized through the use of "block-unblock" coded apertures, commonly implemented as chrome-on-quartz photomasks. These apertures either block or pass the entire spectrum from the scene at given spatial locations, thus modulating the spatial characteristics of the scene. In the first part, this thesis aims to expand the framework of CSI by replacing the traditional block-unblock coded apertures with patterned optical filter arrays, referred to as "color" coded apertures. These apertures are formed by tiny pixelated optical filters, which in turn allow the input image to be modulated not only spatially but also spectrally, enabling more powerful coding strategies. The proposed colored coded apertures are either synthesized through linear combinations of low-pass, high-pass, and band-pass filters, paired with binary pattern ensembles realized by a digital micromirror device (DMD), or experimentally realized through thin-film color-patterned filter arrays. The optical forward model of the proposed CSI architectures is presented along with the design and proof-of-concept implementations, which achieve noticeable improvements in reconstruction quality compared with conventional block-unblock coded-aperture CSI architectures. On another front, owing to the rich information contained in the infrared spectrum as well as the depth domain, this thesis aims to explore multimodal imaging by extending the range sensitivity of current CSI systems to a dual-band visible+near-infrared spectral domain. It also proposes, for the first time, a new imaging device that simultaneously captures 4D data cubes (2D spatial + 1D spectral + depth) in as few as a single snapshot. Owing to the snapshot advantage of this camera, video sequences are possible, enabling the joint capture of 5D imagery. The aim is to create super-human sensing that will enable the perception of our world in new and exciting ways.
With this, we intend to advance the state of the art in compressive sensing systems to extract depth while accurately capturing spatial and spectral material properties. The applications of such a sensor are self-evident in fields such as computer and robotic vision, because it would allow an artificial intelligence to make informed decisions not only about the location of objects within a scene but also about their material properties.
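The CS recovery step mentioned above, estimating a sparse signal from fewer random projections than unknowns, can be illustrated with the simplest proximal solver. This is a generic toy with a 1D signal and a synthetic Gaussian sensing matrix, not the thesis's coded-aperture forward model (real CSI solvers typically use total-variation or wavelet priors on the full data cube).

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=2000):
    """Iterative soft-thresholding (ISTA): a basic solver for the sparse
    recovery problem min 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L with L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * A.T @ (A @ x - y)          # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
    return x

# Recover a 5-sparse, 128-sample signal from 60 random projections
# (fewer measurements than unknowns, as in the text).
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 128)) / np.sqrt(60)
x_true = np.zeros(128)
x_true[[3, 30, 60, 90, 120]] = [1.0, -1.0, 2.0, -2.0, 1.5]
y = A @ x_true
x_hat = ista(A, y)
```

The same principle, a random linear forward model plus a sparsity-promoting solver, underlies both the coded-aperture CSI systems of this thesis and the GISC lidar abstract earlier in this list.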
Scalable Photogrammetric Motion Capture System "mosca": Development and Application
NASA Astrophysics Data System (ADS)
Knyaz, V. A.
2015-05-01
A wide variety of applications, from industrial to entertainment, needs reliable and accurate 3D information about the motion of an object and its parts. Very often the movement is rather fast, as in vehicle motion, sports biomechanics, or the animation of cartoon characters. Motion capture systems based on different physical principles are used for these purposes. Vision-based systems have great potential for high accuracy and a high degree of automation owing to progress in image processing and analysis. A scalable, inexpensive motion capture system has been developed as a convenient and flexible tool for solving various tasks requiring 3D motion analysis. It is based on photogrammetric techniques of 3D measurement and provides high-speed image acquisition, high accuracy of 3D measurements, and highly automated processing of the captured data. Depending on the application, the system can easily be adapted to working areas from 100 mm to 10 m. The developed motion capture system uses two to four technical-vision cameras to acquire video sequences of object motion. All cameras work in synchronized mode at frame rates up to 100 frames per second under the control of a personal computer, enabling accurate calculation of the 3D coordinates of points of interest. The system has been used in a range of application fields and has demonstrated high accuracy and a high level of automation.
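The photogrammetric core of such a system is triangulation: given calibrated projection matrices for the synchronized cameras and a matched marker observation in each view, the 3D point is recovered linearly. The sketch below shows the standard two-view DLT triangulation; the camera intrinsics and the 0.5 m baseline are assumed values for the synthetic check, not parameters of the "Mosca" system.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear two-view triangulation: given 3x4 projection matrices and
    one matched image point per view, solve for the 3D point."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]          # de-homogenize

def proj(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two assumed synchronized cameras 0.5 m apart along x, looking down +z.
K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), [[-0.5], [0], [0]]])
X_true = np.array([0.2, -0.1, 3.0])
X_rec = triangulate(P1, P2, proj(P1, X_true), proj(P2, X_true))
```

With three or four cameras, the same linear system simply gains two rows per extra view, which improves robustness to marker localization noise.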
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yi; Chen, Wei; Xu, Hongyi
To provide a seamless integration of manufacturing processing simulation and fiber microstructure modeling, two new stochastic 3D microstructure reconstruction methods are proposed for two types of random fiber composites: random short fiber composites, and Sheet Molding Compound (SMC) chopped fiber composites. A Random Sequential Adsorption (RSA) algorithm is first developed to embed statistical orientation information into 3D RVE reconstruction of random short fiber composites. For the SMC composites, an optimized Voronoi diagram based approach is developed for capturing the substructure features of SMC chopped fiber composites. The proposed methods are distinguished from other reconstruction works by providing a way of integrating statistical information (fiber orientation tensor) obtained from material processing simulation, as well as capturing the multiscale substructures of the SMC composites.
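Random Sequential Adsorption, as used above, places inclusions one at a time and rejects any candidate that overlaps what is already placed. The 2D toy below illustrates the idea with fibers simplified to non-overlapping disks, each carrying an orientation drawn from a normal distribution; matching a full fiber-orientation tensor and working with true 3D fiber geometries, as the paper does, is substantially more involved. All sizes and distribution parameters are invented.

```python
import numpy as np

def rsa_fibers(n_fibers, radius, theta_std, box=1.0, seed=0, max_tries=20000):
    """RSA sketch: drop fiber centres sequentially, rejecting any that
    overlap an already-placed fiber (modelled as disks for simplicity),
    and attach an in-plane orientation to each accepted fiber."""
    rng = np.random.default_rng(seed)
    centres, angles = [], []
    tries = 0
    while len(centres) < n_fibers and tries < max_tries:
        tries += 1
        c = rng.uniform(radius, box - radius, 2)
        if all(np.hypot(*(c - o)) >= 2 * radius for o in centres):
            centres.append(c)
            angles.append(rng.normal(0.0, theta_std))
    return np.array(centres), np.array(angles)

def orientation_tensor(angles):
    """Second-order orientation tensor a_ij = <p_i p_j> of unit fiber axes."""
    p = np.column_stack([np.cos(angles), np.sin(angles)])
    return p.T @ p / len(angles)

centres, angles = rsa_fibers(100, 0.02, theta_std=0.3)
a = orientation_tensor(angles)   # strongly aligned along x
```

Comparing the empirical tensor `a` against the target tensor from the processing simulation is how the statistical orientation information gets validated in the reconstruction loop.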
3D kinematic measurement of human movement using low cost fish-eye cameras
NASA Astrophysics Data System (ADS)
Islam, Atiqul; Asikuzzaman, Md.; Garratt, Matthew A.; Pickering, Mark R.
2017-02-01
3D motion capture is difficult when performed in an outdoor environment without controlled surroundings. In this paper, we propose a new approach using two ordinary cameras arranged in a special stereoscopic configuration and passive markers on a subject's body to reconstruct the subject's motion. First, for each frame of the video, an adaptive thresholding algorithm is applied to extract the markers on the subject's body. Once the markers are extracted, an algorithm for matching corresponding markers across frames is applied. Zhang's planar calibration method is used to calibrate the two cameras. Because the cameras use fisheye lenses, they cannot be modeled well by a pinhole camera model, which makes it difficult to estimate depth information; in this work, we therefore use a calibration method specific to fisheye lenses to restore the 3D coordinates. The accuracy of the 3D coordinate reconstruction is evaluated by comparison with results from a commercially available Vicon motion capture system.
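The adaptive-thresholding step for marker extraction can be sketched with a local-mean threshold computed from an integral image. This is a generic illustration, not the paper's exact algorithm; the window size, offset, and the synthetic frame with three bright 3x3 "markers" on a graded background are all assumed for the demo.

```python
import numpy as np

def adaptive_threshold(img, win=15, offset=10):
    """Mark pixels brighter than their local win x win neighbourhood mean
    plus an offset, using an integral image for the box mean."""
    h, w = img.shape
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    ii = np.zeros((h + win, w + win))              # integral image with a
    ii[1:, 1:] = np.cumsum(np.cumsum(padded, 0), 1)  # leading zero row/col
    s = ii[win:, win:] - ii[:-win, win:] - ii[win:, :-win] + ii[:-win, :-win]
    local_mean = s / (win * win)
    return img > local_mean + offset

# Synthetic frame: three bright markers on a horizontally graded background.
img = np.tile(np.linspace(50, 150, 64), (64, 1))
for cy, cx in [(10, 10), (32, 48), (50, 20)]:
    img[cy - 1:cy + 2, cx - 1:cx + 2] += 80
mask = adaptive_threshold(img, win=15, offset=20)
```

A global threshold would fail on this frame (the bright end of the background exceeds the dark markers' absolute level), which is exactly why outdoor marker extraction needs a locally adaptive rule.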
Depth-tunable three-dimensional display with interactive light field control
NASA Astrophysics Data System (ADS)
Xie, Songlin; Wang, Peng; Sang, Xinzhu; Li, Chenyu; Dou, Wenhua; Xiao, Liquan
2016-07-01
A software-defined depth-tunable three-dimensional (3D) display with interactive 3D depth control is presented. With the proposed post-processing system, the disparity of multi-view media can be freely adjusted. Benefiting from the wealth of information inherently contained in dense multi-view images captured with a parallel camera array, the 3D light field is built, and the light field structure is controlled to adjust the disparity without additionally acquired depth information, since the light field structure itself contains depth information. A statistical analysis based on least squares is carried out to extract the depth information inherent in the light field structure; the resulting accurate depth information can be used to re-parameterize light fields for the autostereoscopic display, and smooth motion parallax can be guaranteed. Experimental results show that the system is a convenient and effective way to adjust the 3D scene presentation on the 3D display.
Multiplexed phase-space imaging for 3D fluorescence microscopy.
Liu, Hsiou-Yuan; Zhong, Jingshan; Waller, Laura
2017-06-26
Optical phase-space functions describe spatial and angular information simultaneously; examples of optical phase-space functions include light fields in ray optics and Wigner functions in wave optics. Measurement of phase-space enables digital refocusing, aberration removal and 3D reconstruction. High-resolution capture of 4D phase-space datasets is, however, challenging. Previous scanning approaches are slow, light inefficient and do not achieve diffraction-limited resolution. Here, we propose a multiplexed method that solves these problems. We use a spatial light modulator (SLM) in the pupil plane of a microscope in order to sequentially pattern multiplexed coded apertures while capturing images in real space. Then, we reconstruct the 3D fluorescence distribution of our sample by solving an inverse problem via regularized least squares with a proximal accelerated gradient descent solver. We experimentally reconstruct a 101 Megavoxel 3D volume (1010×510×500µm with NA 0.4), demonstrating improved acquisition time, light throughput and resolution compared to scanning aperture methods. Our flexible patterning scheme further allows sparsity in the sample to be exploited for reduced data capture.
Micro Fourier Transform Profilometry (μFTP): 3D shape measurement at 10,000 frames per second
NASA Astrophysics Data System (ADS)
Zuo, Chao; Tao, Tianyang; Feng, Shijie; Huang, Lei; Asundi, Anand; Chen, Qian
2018-03-01
Fringe projection profilometry is a well-established technique for optical 3D shape measurement. However, in many applications it is desirable to make 3D measurements at very high speed, especially with fast-moving or shape-changing objects. In this work, we demonstrate a new 3D dynamic imaging technique, Micro Fourier Transform Profilometry (μFTP), which can realize an acquisition rate of up to 10,000 3D frames per second (fps). The high measurement speed is achieved by reducing the number of patterns as well as by high-speed fringe projection hardware. In order to capture 3D information in such a short period of time, we focus on improving the phase recovery, phase unwrapping, and error compensation algorithms, allowing an accurate, unambiguous, and distortion-free 3D point cloud to be reconstructed from every two projected patterns. We also develop high-frame-rate fringe projection hardware by pairing a high-speed camera with a DLP projector, enabling binary pattern switching and precisely synchronized image capture at frame rates up to 20,000 fps. Based on this system, we demonstrate high-quality textured 3D imaging of four transient scenes: vibrating cantilevers, rotating fan blades, a flying bullet, and a bursting balloon, which were previously difficult or even impossible to capture with conventional approaches.
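Fourier transform profilometry owes its speed to needing very few patterns: the phase modulation is isolated in the frequency domain from a single fringe image. The 1D sketch below shows the classic FTP idea (band-pass around the carrier, inverse transform, carrier removal); it is a textbook illustration, not μFTP itself, and the carrier frequency, band width, and synthetic phase bump are chosen for the demo.

```python
import numpy as np

def ftp_phase(signal, carrier_freq):
    """1D Fourier-transform profilometry sketch: isolate the fundamental
    lobe of a fringe signal around the carrier frequency, then recover
    the wrapped phase modulation from the inverse transform."""
    n = len(signal)
    spec = np.fft.fft(signal)
    keep = np.zeros(n, bool)                       # band-pass mask around
    half_band = carrier_freq // 2                  # the positive carrier
    keep[carrier_freq - half_band: carrier_freq + half_band + 1] = True
    analytic = np.fft.ifft(np.where(keep, spec, 0))
    x = np.arange(n)
    return np.angle(analytic * np.exp(-2j * np.pi * carrier_freq * x / n))

# Synthetic fringe: 16 carrier cycles over 256 px plus a smooth phase bump.
n, f0 = 256, 16
x = np.arange(n)
phi_true = 1.2 * np.exp(-((x - 128) / 40.0) ** 2)
fringe = 100 + 80 * np.cos(2 * np.pi * f0 * x / n + phi_true)
phi = ftp_phase(fringe, f0)
```

The recovered phase is wrapped; μFTP's contribution lies largely in unwrapping it reliably and compensating errors with only two projected patterns per 3D frame.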
Real-Time 3D Ultrasound for Physiological Monitoring 22258.
1999-10-01
The software acquires positioning information using a high-precision mechanical arm (MicroScribe from Immersion Corp., San Jose, CA) for 3D data acquisition, an approach also adopted by EchoTech for 3D FreeScan. The system provides medical-quality video capture and includes the MUSTPAC-2 Virtual Ultrasound Probe, based on the MicroScribe 3D articulated arm and driven by a Dell Dimension XPS computer.
Samba: a real-time motion capture system using wireless camera sensor networks.
Oh, Hyeongseok; Cha, Geonho; Oh, Songhwai
2014-03-20
There is a growing interest in 3D content following the recent developments in 3D movies, 3D TVs and 3D smartphones. However, 3D content creation is still dominated by professionals, due to the high cost of 3D motion capture instruments. The availability of a low-cost motion capture system will promote 3D content generation by general users and accelerate the growth of the 3D market. In this paper, we describe the design and implementation of a real-time motion capture system based on a portable low-cost wireless camera sensor network. The proposed system performs motion capture based on the data-driven 3D human pose reconstruction method to reduce the computation time and to improve the 3D reconstruction accuracy. The system can reconstruct accurate 3D full-body poses at 16 frames per second using only eight markers on the subject's body. The performance of the motion capture system is evaluated extensively in experiments.
Determine Age-structure of Gelatinous Zooplankton Using Optical Coherence Tomography
NASA Astrophysics Data System (ADS)
Bi, H.; Shahrestani, S.; He, Y.
2016-02-01
Gelatinous zooplankton are delicate and transparent by nature, but are conspicuous in many ecosystems when in bloom. Their proliferations are a bothersome and costly nuisance that influences important food webs and species interactions. More importantly, gelatinous zooplankton respond rapidly to climate change, and understanding their upsurge requires information on their recruitment and population dynamics, which in turn requires their age structure. However, ageing gelatinous zooplankton is often confounded by the fact that they shrink under unfavorable conditions. In the present study, we examine the potential of using optical coherence tomography (OCT) to age gelatinous zooplankton. OCT is a non-invasive imaging technique that uses light waves to examine the 2D or 3D structure of target objects at a resolution of 3-5 µm. We successfully captured both 3D and 2D images of sea nettle muscle fibers. Preliminary results on ctenophores will be discussed. Overall, this non-destructive sampling allows us to scan and capture images of mesoglea from jellyfish cultured in the lab, using the same individual repeatedly through time and documenting its growth. This will provide precise measurements to construct an age key that can be applied to gelatinous zooplankton captured in the field. Coupled with information on abundance, we can begin to quantify their recruitment timing and success rate.
Geometric rectification of camera-captured document images.
Liang, Jian; DeMenthon, Daniel; Doermann, David
2008-04-01
Compared to typical scanners, handheld cameras offer convenient, flexible, portable, and non-contact image capture, which enables many new applications and breathes new life into existing ones. However, camera-captured documents may suffer from distortions caused by non-planar document shape and perspective projection, which lead to failure of current OCR technologies. We present a geometric rectification framework for restoring the frontal-flat view of a document from a single camera-captured image. Our approach estimates 3D document shape from texture flow information obtained directly from the image without requiring additional 3D/metric data or prior camera calibration. Our framework provides a unified solution for both planar and curved documents and can be applied in many, especially mobile, camera-based document analysis applications. Experiments show that our method produces results that are significantly more OCR compatible than the original images.
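The rectification described above generalizes to curved pages, but for a flat page the restoration reduces to a homography. The following is an illustrative sketch of that planar special case only, not the authors' texture-flow method: it assumes the four page corners are already known and estimates the homography with a direct linear transform.

```python
import numpy as np

def homography_from_corners(src, dst):
    """Direct Linear Transform: solve for the 3x3 homography H (h33 = 1)
    mapping four source corners to four destination corners."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, x, y):
    """Apply H to a pixel coordinate (with perspective divide)."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Map a perspective-distorted page quadrilateral onto a unit square.
page = [(10.0, 10.0), (200.0, 30.0), (190.0, 220.0), (20.0, 200.0)]
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
H = homography_from_corners(page, square)
```

A full dewarping would evaluate the inverse mapping for every output pixel; corner detection and the curved-surface case handled by the paper are beyond this sketch.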
Yi, Faliu; Jeoung, Yousun; Moon, Inkyu
2017-05-20
In recent years, many studies have focused on authentication of two-dimensional (2D) images using double random phase encryption techniques. However, there has been little research on three-dimensional (3D) imaging systems, such as integral imaging, for 3D image authentication. We propose a 3D image authentication scheme based on a double random phase integral imaging method. All of the 2D elemental images captured through integral imaging are encrypted with a double random phase encoding algorithm and only partial phase information is reserved. All the amplitude and other miscellaneous phase information in the encrypted elemental images is discarded. Nevertheless, we demonstrate that 3D images from integral imaging can be authenticated at different depths using a nonlinear correlation method. The proposed 3D image authentication algorithm can provide enhanced information security because the decrypted 2D elemental images from the sparse phase cannot be easily observed by the naked eye. Additionally, using sparse phase images without any amplitude information can greatly reduce data storage costs and aid in image compression and data transmission.
Optimization of compressive 4D-spatio-spectral snapshot imaging
NASA Astrophysics Data System (ADS)
Zhao, Xia; Feng, Weiyi; Lin, Lihua; Su, Wu; Xu, Guoqing
2017-10-01
In this paper, a modified 3D computational reconstruction method in the compressive 4D-spectro-volumetric snapshot imaging system is proposed for better sensing of the spectral information of 3D objects. In the design of the imaging system, a microlens array (MLA) is used to obtain a set of multi-view elemental images (EIs) of the 3D scenes. These elemental images, with one-dimensional spectral information and different perspectives, are then captured by the coded aperture snapshot spectral imager (CASSI), which compresses the spectral data cube onto a 2D measurement image. Finally, the depth images of 3D objects at arbitrary depths, like a focal stack, are computed by inversely mapping the elemental images according to geometrical optics. With the spectral estimation algorithm, the spectral information of 3D objects is also reconstructed. Using a shifted translation matrix, the contrast of the reconstruction result is further enhanced. Numerical simulation results verify the performance of the proposed method. The system can obtain both 3D spatial information and spectral data on 3D objects using only a single snapshot, which is valuable for agricultural harvesting robots and other 3D dynamic scenes.
Fusion of light-field and photogrammetric surface form data
NASA Astrophysics Data System (ADS)
Sims-Waterhouse, Danny; Piano, Samanta; Leach, Richard K.
2017-08-01
Photogrammetry based systems are able to produce 3D reconstructions of an object given a set of images taken from different orientations. In this paper, we implement a light-field camera within a photogrammetry system in order to capture additional depth information, as well as the photogrammetric point cloud. Compared to a traditional camera that only captures the intensity of the incident light, a light-field camera also provides angular information for each pixel. In principle, this additional information allows 2D images to be reconstructed at a given focal plane, and hence a depth map can be computed. Through the fusion of light-field and photogrammetric data, we show that it is possible to improve the measurement uncertainty of a millimetre scale 3D object, compared to that from the individual systems. By imaging a series of test artefacts from various positions, individual point clouds were produced from depth-map information and triangulation of corresponding features between images. Using both measurements, data fusion methods were implemented in order to provide a single point cloud with reduced measurement uncertainty.
Fan, Zhencheng; Weng, Yitong; Chen, Guowen; Liao, Hongen
2017-07-01
Three-dimensional (3D) visualization of preoperative and intraoperative medical information is becoming increasingly important in minimally invasive surgery. We develop a 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display for surgeons to observe the surgical target intuitively. The spatial information of regions of interest (ROIs) is captured by the mobile device and transferred to a server for further image processing. Triangular patches of intraoperative data with texture are calculated with a dimension-reduced triangulation algorithm and a projection-weighted mapping algorithm. A point cloud selection-based warm-start iterative closest point (ICP) algorithm is also developed for fusion of the reconstructed 3D intraoperative image and the preoperative image. The fusion images are rendered for 3D autostereoscopic display using integral videography (IV) technology. Moreover, 3D visualization of the medical image corresponding to the observer's viewing direction is updated automatically using a mutual information registration method. Experimental results show that the spatial position error between the IV-based 3D autostereoscopic fusion image and the actual object was 0.38 ± 0.92 mm (n=5). The system can be utilized in telemedicine, surgical education, surgical planning, navigation, etc. to acquire spatial information conveniently and display surgical information intuitively. Copyright © 2017 Elsevier Inc. All rights reserved.
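The ICP fusion step above repeatedly solves a rigid least-squares alignment between matched point sets. A minimal sketch of that inner step (the standard Kabsch/SVD solution, not the paper's full selection-based warm-start ICP):

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rotation R and translation t such that R @ p + t ~ q
    for matched point sets P, Q (n x 3), via the Kabsch/SVD solution."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

Full ICP alternates this solution with re-matching nearest neighbors; the warm start supplies the initial correspondence.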
Yang, Y X; Teo, S-K; Van Reeth, E; Tan, C H; Tham, I W K; Poh, C L
2015-08-01
Accurate visualization of lung motion is important in many clinical applications, such as radiotherapy of lung cancer. Advancement in imaging modalities [e.g., computed tomography (CT) and MRI] has allowed dynamic imaging of lung and lung tumor motion. However, each imaging modality has its advantages and disadvantages. The study presented in this paper aims at generating synthetic 4D-CT dataset for lung cancer patients by combining both continuous three-dimensional (3D) motion captured by 4D-MRI and the high spatial resolution captured by CT using the authors' proposed approach. A novel hybrid approach based on deformable image registration (DIR) and finite element method simulation was developed to fuse a static 3D-CT volume (acquired under breath-hold) and the 3D motion information extracted from 4D-MRI dataset, creating a synthetic 4D-CT dataset. The study focuses on imaging of lung and lung tumor. Comparing the synthetic 4D-CT dataset with the acquired 4D-CT dataset of six lung cancer patients based on 420 landmarks, accurate results (average error <2 mm) were achieved using the authors' proposed approach. Their hybrid approach achieved a 40% error reduction (based on landmarks assessment) over using only DIR techniques. The synthetic 4D-CT dataset generated has high spatial resolution, has excellent lung details, and is able to show movement of lung and lung tumor over multiple breathing cycles.
Three-dimensional reconstruction of single-cell chromosome structure using recurrence plots.
Hirata, Yoshito; Oda, Arisa; Ohta, Kunihiro; Aihara, Kazuyuki
2016-10-11
Single-cell analysis of the three-dimensional (3D) chromosome structure can reveal cell-to-cell variability in genome activities. Here, we propose to apply recurrence plots, a mathematical method of nonlinear time series analysis, to reconstruct the 3D chromosome structure of a single cell based on information of chromosomal contacts from genome-wide chromosome conformation capture (Hi-C) data. This recurrence plot-based reconstruction (RPR) method enables rapid reconstruction of a unique structure in single cells, even from incomplete Hi-C information.
Three-dimensional reconstruction of single-cell chromosome structure using recurrence plots
NASA Astrophysics Data System (ADS)
Hirata, Yoshito; Oda, Arisa; Ohta, Kunihiro; Aihara, Kazuyuki
2016-10-01
Single-cell analysis of the three-dimensional (3D) chromosome structure can reveal cell-to-cell variability in genome activities. Here, we propose to apply recurrence plots, a mathematical method of nonlinear time series analysis, to reconstruct the 3D chromosome structure of a single cell based on information of chromosomal contacts from genome-wide chromosome conformation capture (Hi-C) data. This recurrence plot-based reconstruction (RPR) method enables rapid reconstruction of a unique structure in single cells, even from incomplete Hi-C information.
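A recurrence plot is simply a thresholded pairwise-distance matrix, and in the RPR setting the Hi-C contact map plays the role of such a matrix. A minimal sketch of the forward direction only (computing a recurrence plot from known 3D loci positions; the paper solves the harder inverse problem):

```python
import numpy as np

def recurrence_plot(points, epsilon):
    """Binary recurrence matrix: R[i, j] = 1 when loci i and j lie
    within Euclidean distance epsilon of each other (a 'contact')."""
    diff = points[:, None, :] - points[None, :, :]  # pairwise differences
    dist = np.linalg.norm(diff, axis=-1)            # distance matrix
    return (dist <= epsilon).astype(int)

# Toy "chromosome": five loci spaced 1 unit apart along a line.
loci = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0], [4, 0, 0]])
R = recurrence_plot(loci, epsilon=1.5)  # only adjacent loci are in contact
```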
NASA Astrophysics Data System (ADS)
Chatterjee, Amit; Bhatia, Vimal; Prakash, Shashi
2017-08-01
Fingerprint is a unique, unalterable and easily collected biometric of a human being. Although it is a 3D biological characteristic, traditional methods are designed to provide only a 2D image. This touch-based mapping of 3D shape to a 2D image loses information and leads to nonlinear distortions. Moreover, as only topographic details are captured, conventional systems are potentially vulnerable to spoofing materials (e.g. artificial fingers, dead fingers, false prints, etc.). In this work, we demonstrate an anti-spoof touchless 3D fingerprint detection system using a combination of single-shot fringe projection and biospeckle analysis. For fingerprint detection using fringe projection, light from a low-power LED source illuminates a finger through a sinusoidal grating. The fringe pattern, modulated by the features on the fingertip, is captured using a CCD camera. Fourier transform method based frequency filtering is used for the reconstruction of the 3D fingerprint from the captured fringe pattern. In the next step, for spoof detection using biospeckle analysis, a visuo-numeric algorithm based on a modified structural function and a non-normalized histogram is proposed. High-activity biospeckle patterns are generated by the interaction of collimated laser light with the internal fluid flow of a real finger. This activity reduces abruptly for layered fake prints, and is almost absent in dead or fake fingers. Furthermore, the proposed setup is fast, low-cost, involves non-mechanical scanning and is highly stable.
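The Fourier transform method mentioned above recovers the fringe phase by isolating the carrier peak in the frequency domain. A minimal 1D sketch with a synthetic fringe signal (the carrier frequency, filter bandwidth, and phase amplitude are arbitrary illustrative choices):

```python
import numpy as np

def fringe_phase(signal, carrier_bin):
    """Recover the wrapped fringe phase by keeping a narrow band
    around the positive carrier peak in the Fourier domain."""
    spectrum = np.fft.fft(signal)
    filtered = np.zeros_like(spectrum)
    band = slice(carrier_bin - 2, carrier_bin + 3)  # carrier +/- 2 bins
    filtered[band] = spectrum[band]
    return np.angle(np.fft.ifft(filtered))          # wrapped phase

# Synthetic fringe: 8-cycle carrier modulated by a slowly varying phase.
N = 256
x = np.arange(N)
true_phase = 0.5 * np.sin(2 * np.pi * x / N)             # "height" term
fringes = 1 + np.cos(2 * np.pi * 8 * x / N + true_phase)
wrapped = fringe_phase(fringes, carrier_bin=8)
height_phase = np.unwrap(wrapped) - 2 * np.pi * 8 * x / N  # remove carrier
```

In the 2D fingerprint case the same filtering is applied to the 2D spectrum, and the unwrapped phase maps to surface height via the projection geometry.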
3D Reconstruction from UAV-Based Hyperspectral Images
NASA Astrophysics Data System (ADS)
Liu, L.; Xu, L.; Peng, J.
2018-04-01
Reconstructing a 3D profile from a set of UAV-based images can provide hyperspectral information as well as the 3D coordinates of any point on the profile. Our images are captured with the Cubert UHD185 (UHD) hyperspectral camera, a new type of high-speed onboard imaging spectrometer that acquires a hyperspectral image and a panchromatic image simultaneously. The panchromatic image has a higher spatial resolution than the hyperspectral image, but each hyperspectral image provides considerable information on the spatial spectral distribution of the object. There is therefore an opportunity to derive a high-quality 3D point cloud from the panchromatic images and rich spectral information from the hyperspectral images. The purpose of this paper is to introduce our processing chain, which derives a database providing the hyperspectral information and 3D position of each point. First, we adopt VisualSFM, a free and open-source tool based on the structure-from-motion (SfM) algorithm, to recover a 3D point cloud from the panchromatic images, and then obtain the spectral information of each point from the hyperspectral images using a self-developed program written in MATLAB. The product can be used to support further research and applications.
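Once a point cloud is recovered by SfM, attaching a spectrum to each point amounts to projecting it into the hyperspectral frame and sampling the data cube. A hedged sketch of that lookup under a simple pinhole model; the intrinsics `K`, pose `R, t`, and nearest-pixel sampling are illustrative assumptions, not the authors' MATLAB implementation:

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Pinhole projection of world points (n x 3) to pixel coordinates."""
    cam = (R @ points_3d.T + t[:, None]).T   # world -> camera frame
    uv = (K @ (cam / cam[:, 2:3]).T).T       # perspective divide + intrinsics
    return uv[:, :2]

def sample_spectra(points_3d, cube, K, R, t):
    """Nearest-pixel spectra from a hyperspectral cube (H x W x bands)."""
    uv = np.rint(project_points(points_3d, K, R, t)).astype(int)
    u = np.clip(uv[:, 0], 0, cube.shape[1] - 1)
    v = np.clip(uv[:, 1], 0, cube.shape[0] - 1)
    return cube[v, u]
```

A production pipeline would also check occlusion and interpolate between pixels rather than rounding.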
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Y. X.; Van Reeth, E.; Poh, C. L., E-mail: clpoh@ntu.edu.sg
2015-08-15
Purpose: Accurate visualization of lung motion is important in many clinical applications, such as radiotherapy of lung cancer. Advancement in imaging modalities [e.g., computed tomography (CT) and MRI] has allowed dynamic imaging of lung and lung tumor motion. However, each imaging modality has its advantages and disadvantages. The study presented in this paper aims at generating synthetic 4D-CT dataset for lung cancer patients by combining both continuous three-dimensional (3D) motion captured by 4D-MRI and the high spatial resolution captured by CT using the authors' proposed approach. Methods: A novel hybrid approach based on deformable image registration (DIR) and finite element method simulation was developed to fuse a static 3D-CT volume (acquired under breath-hold) and the 3D motion information extracted from 4D-MRI dataset, creating a synthetic 4D-CT dataset. Results: The study focuses on imaging of lung and lung tumor. Comparing the synthetic 4D-CT dataset with the acquired 4D-CT dataset of six lung cancer patients based on 420 landmarks, accurate results (average error <2 mm) were achieved using the authors' proposed approach. Their hybrid approach achieved a 40% error reduction (based on landmarks assessment) over using only DIR techniques. Conclusions: The synthetic 4D-CT dataset generated has high spatial resolution, has excellent lung details, and is able to show movement of lung and lung tumor over multiple breathing cycles.
3D augmented reality with integral imaging display
NASA Astrophysics Data System (ADS)
Shen, Xin; Hua, Hong; Javidi, Bahram
2016-06-01
In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be transferred into the identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and real world scene with desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.
Super Normal Vector for Human Activity Recognition with Depth Cameras.
Yang, Xiaodong; Tian, YingLi
2017-05-01
The advent of cost-effective and easy-to-operate depth cameras has facilitated a variety of visual recognition tasks including human activity recognition. This paper presents a novel framework for recognizing human activities from video sequences captured by depth cameras. We extend the surface normal to polynormal by assembling local neighboring hypersurface normals from a depth sequence to jointly characterize local motion and shape information. We then propose a general scheme of super normal vector (SNV) to aggregate the low-level polynormals into a discriminative representation, which can be viewed as a simplified version of the Fisher kernel representation. In order to globally capture the spatial layout and temporal order, an adaptive spatio-temporal pyramid is introduced to subdivide a depth video into a set of space-time cells. In the extensive experiments, the proposed approach achieves superior performance to the state-of-the-art methods on the four public benchmark datasets, i.e., MSRAction3D, MSRDailyActivity3D, MSRGesture3D, and MSRActionPairs3D.
A CNN Regression Approach for Real-Time 2D/3D Registration.
Shun Miao; Wang, Z Jane; Rui Liao
2016-05-01
In this paper, we present a Convolutional Neural Network (CNN) regression approach to address the two major limitations of existing intensity-based 2-D/3-D registration technology: 1) slow computation and 2) small capture range. Different from optimization-based methods, which iteratively optimize the transformation parameters over a scalar-valued metric function representing the quality of the registration, the proposed method exploits the information embedded in the appearances of the digitally reconstructed radiograph and X-ray images, and employs CNN regressors to directly estimate the transformation parameters. An automatic feature extraction step is introduced to calculate 3-D pose-indexed features that are sensitive to the variables to be regressed while robust to other factors. The CNN regressors are then trained for local zones and applied in a hierarchical manner to break down the complex regression task into multiple simpler sub-tasks that can be learned separately. Weight sharing is furthermore employed in the CNN regression model to reduce the memory footprint. The proposed approach has been quantitatively evaluated on 3 potential clinical applications, demonstrating its significant advantage in providing highly accurate real-time 2-D/3-D registration with a significantly enlarged capture range when compared to intensity-based methods.
Three-dimensional capture, representation, and manipulation of Cuneiform tablets
NASA Astrophysics Data System (ADS)
Woolley, Sandra I.; Flowers, Nicholas J.; Arvanitis, Theodoros N.; Livingstone, Alasdair; Davis, Tom R.; Ellison, John
2001-04-01
This paper presents the digital imaging results of a collaborative research project working toward the generation of an on-line interactive digital image database of signs from ancient cuneiform tablets. An important aim of this project is the application of forensic analysis to the cuneiform symbols to identify scribal hands. Cuneiform tablets are amongst the earliest records of written communication, and could be considered one of the original information technologies: an accessible, portable and robust medium for communication across distance and time. The earliest examples are up to 5,000 years old, and the writing technique remained in use for some 3,000 years. Unfortunately, only a small fraction of these tablets can be made available for display in museums, and much important academic work has yet to be performed on the very large numbers of tablets to which there is necessarily restricted access. Our paper will describe the challenges encountered in the 2D image capture of a sample set of tablets held in the British Museum, explaining the motivation for attempting 3D imaging and the results of initial experiments scanning the smaller, more densely inscribed cuneiform tablets. We will also discuss the tractability of 3D digital capture, representation and manipulation, and investigate the requirements for scalable data compression and transmission methods. Additional information can be found on the project website: www.cuneiform.net
Quality and matching performance analysis of three-dimensional unraveled fingerprints
NASA Astrophysics Data System (ADS)
Wang, Yongchang; Hao, Qi; Fatehpuria, Abhishika; Hassebrook, Laurence G.; Lau, Daniel L.
2010-07-01
The use of fingerprints as a biometric is both the oldest mode of computer-aided personal identification and the most-relied-on technology in use today. However, current acquisition methods have some challenging and peculiar difficulties. For higher performance fingerprint data acquisition and verification, a novel noncontact 3-D fingerprint scanner is investigated, where both the detailed 3-D and albedo information of the finger is obtained. The obtained high-resolution 3-D prints are further converted into 3-D unraveled prints, to be compatible with traditional 2-D automatic fingerprint identification systems. As a result, many limitations imposed on conventional fingerprint capture and processing can be reduced by the unobtrusiveness of this approach and the extra depth information acquired. To compare the quality and matching performances of 3-D unraveled with traditional 2-D plain fingerprints, we collect both 3-D prints and their 2-D plain counterparts. The print quality and matching performances are evaluated and analyzed by using National Institute of Standard Technology fingerprint software. Experimental results show that the 3-D unraveled print outperforms the 2-D print in both quality and matching performances.
Three-dimensional image signals: processing methods
NASA Astrophysics Data System (ADS)
Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru
2010-11-01
Over the years, extensive studies have been carried out to apply coherent optics methods to real-time processing, communications and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. The recent progress in data-processing networks and communication systems has considerably increased the capacity of information exchange. We describe the results of a literature survey of processing methods for three-dimensional image signals. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. Images captured by an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses. This requires considerable processing power and memory to create and render the complex mix of colors, textures, virtual lighting and perspective necessary to make figures appear three-dimensional. Also, using a standard digital camera and a technique called phase-shift interferometry, we can capture "digital holograms": holograms that can be stored on a computer and transmitted over conventional networks. We present some methods to process "digital holograms" for Internet transmission, along with results.
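Phase-shift interferometry, as used to capture digital holograms, typically combines several intensity frames taken at known phase steps. A minimal sketch of the standard four-step (90°) formula, with intensities modeled as I_k = A + B·cos(phi + k·pi/2):

```python
import math

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four intensity frames with 90-degree phase
    steps: phi = atan2(I4 - I2, I1 - I3)."""
    return math.atan2(i4 - i2, i1 - i3)
```

The recovered phase is wrapped to (-pi, pi]; a full hologram reconstruction would apply this per pixel and then unwrap.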
Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras
NASA Astrophysics Data System (ADS)
Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro
2018-03-01
Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamically moving subjects with pan, tilt, and zoom camerawork as good as that in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and the view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.
NASA Astrophysics Data System (ADS)
McIntire, John; Geiselman, Eric; Heft, Eric; Havig, Paul
2011-06-01
Designers, researchers, and users of binocular stereoscopic head- or helmet-mounted displays (HMDs) face the tricky issue of what imagery to present in their particular displays, and how to do so effectively. Stereoscopic imagery must often be created in-house with a 3D graphics program or from within a 3D virtual environment, or stereoscopic photos/videos must be carefully captured, perhaps for relaying to an operator in a teleoperative system. In such situations, the question arises as to what camera separation (real or virtual) is appropriate or desirable for end-users and operators. We review some of the relevant literature regarding the question of stereo-pair camera separation using desk-mounted or larger-scale stereoscopic displays, and apply our findings to potential HMD applications, including command & control, teleoperation, information and scientific visualization, and entertainment.
Micro-optical system based 3D imaging for full HD depth image capturing
NASA Astrophysics Data System (ADS)
Park, Yong-Hwa; Cho, Yong-Chul; You, Jang-Woo; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Kwon, Jong-Oh; Lee, Seung-Wan
2012-03-01
A 20 MHz-switching high-speed image shutter device for 3D image capturing and its application to a system prototype are presented. For 3D image capturing, the system utilizes the Time-of-Flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, a so-called 'optical shutter'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure with diffractive mirrors and an optical resonance cavity that maximizes the magnitude of optical modulation. The optical shutter device is specially designed and fabricated with low resistance-capacitance cell structures having a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image. The suggested novel optical shutter device enables capture of a full HD depth image with mm-scale depth accuracy, the largest depth-image resolution among state-of-the-art systems, which have been limited to VGA. The 3D camera prototype realizes a color/depth concurrent-sensing optical architecture to capture 14 Mp color and full HD depth images simultaneously. The resulting high-definition color/depth images and the capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical shutter design, fabrication, characterization, the 3D camera system prototype, and image test results.
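With a 20 MHz modulated shutter, depth follows from the round-trip phase delay of the modulation. A small sketch of the standard TOF relations (the prototype's actual demodulation pipeline is not described here):

```python
import math

C = 299_792_458.0   # speed of light, m/s
F_MOD = 20e6        # shutter modulation frequency, Hz (20 MHz)

def tof_depth(phase_shift_rad):
    """Depth for a measured round-trip phase shift:
    d = c * dphi / (4 * pi * f)."""
    return C * phase_shift_rad / (4 * math.pi * F_MOD)

def unambiguous_range():
    """Maximum depth before the phase wraps (dphi = 2*pi): c / (2f)."""
    return C / (2 * F_MOD)
```

At 20 MHz the unambiguous range is roughly 7.5 m, consistent with indoor gesture-recognition scenarios.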
Pedestrian mobile mapping system for indoor environments based on MEMS IMU and range camera
NASA Astrophysics Data System (ADS)
Haala, N.; Fritsch, D.; Peter, M.; Khosravani, A. M.
2011-12-01
This paper describes an approach for the modeling of building interiors based on a mobile device, which integrates modules for pedestrian navigation and low-cost 3D data collection. Personal navigation is realized by a foot-mounted low-cost MEMS IMU, while 3D data capture for subsequent indoor modeling uses a low-cost range camera originally developed for gaming applications. Both steps, navigation and modeling, are supported by additional information as provided by the automatic interpretation of evacuation plans. Such emergency plans are compulsory for public buildings in a number of countries. They consist of an approximate floor plan, the current position and escape routes. Additionally, semantic information like stairs, elevators or the floor number is available. After the user has captured an image of such a floor plan, this information is made explicit again by an automatic raster-to-vector conversion. The resulting coarse indoor model then provides constraints at stairs or building walls, which restrict the potential movement of the user. This information is then used to support pedestrian navigation by eliminating drift effects of the low-cost sensor system. The approximate indoor building model additionally provides a priori information during subsequent indoor modeling. Within this process, the low-cost range camera Kinect is used for the collection of multiple 3D point clouds, which are aligned by a suitable matching step and then further analyzed to refine the coarse building model.
Three-dimensional displacement measurement of image point by point-diffraction interferometry
NASA Astrophysics Data System (ADS)
He, Xiao; Chen, Lingfeng; Meng, Xiaojie; Yu, Lei
2018-01-01
This paper presents a method for measuring the three-dimensional (3-D) displacement of an image point based on point-diffraction interferometry. An object point light source (PLS) interferes with a fixed PLS, and their interferograms are captured at the exit pupil. When the image point of the object PLS is slightly shifted to a new position, the wavefront of the image PLS changes, and so do its interferograms. By processing the interferograms captured before and after the movement, the wavefront difference of the image PLS can be obtained; it contains the 3-D displacement information of the image PLS. However, the 3-D displacement cannot be calculated until the distance between the image PLS and the exit pupil is calibrated. Therefore, we use a plane-parallel plate with known refractive index and thickness to determine this distance, based on Snell's law for small angles of incidence. With the distance between the exit pupil and the image PLS known, the 3-D displacement of the image PLS can be calculated from two interference measurements. Preliminary experimental results indicate a relative error below 0.3%. With the ability to accurately locate an image point (whether real or virtual), a fiber point light source can act as a reticle by itself in optical measurement.
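The plane-parallel-plate calibration exploits the fact that, for small angles of incidence, a plate of thickness t and refractive index n shifts an image longitudinally by a known amount. A sketch of that standard paraxial relation (the paper's full calibration geometry is not reproduced):

```python
def plate_image_shift(thickness, n):
    """Paraxial longitudinal image shift introduced by a plane-parallel
    plate: delta = t * (1 - 1/n). Units follow the thickness argument."""
    return thickness * (1.0 - 1.0 / n)

# Example: a 10 mm plate of index ~1.5 shifts the image by about 3.33 mm.
shift = plate_image_shift(10.0, 1.5)
```

Measuring the image displacement with and without the plate then fixes the unknown pupil-to-image distance.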
Capture locations and growth rates of Atlantic sturgeon in the Chesapeake Bay
Welsh, S.A.; Eyler, S.M.; Mangold, M.F.; Spells, A.J.
2002-01-01
Little information exists on temporal and spatial distributions of wild and hatchery-reared Atlantic sturgeon Acipenser oxyrinchus oxyrinchus in the Chesapeake Bay. Approximately 3,300 hatchery-reared Atlantic sturgeon comprised of two size groups were released into the Nanticoke River, a tributary of the Chesapeake Bay, on 8 July 1996. During January 1996-May 2000, 1,099 Atlantic sturgeon were captured incidentally (i.e., bycatch) by commercial watermen in the Chesapeake Bay, including 420 hatchery-reared individuals. Wild and hatchery-reared Atlantic sturgeon were captured primarily in pound nets and gill nets. Biologists tagged each fish and recorded weight, length, and location of capture. Although two adults greater than 2000 mm fork length (FL) were captured in Maryland waters, wild sturgeon were primarily juveniles from Maryland and Virginia waters (415 and 259 individuals below 1000 mm FL, respectively). A growth rate of 0.565 mm/d (N = 15, SE = 0.081) was estimated for wild individuals (487-944 mm TL at release) at liberty from 30 to 622 d. The average growth of the group of hatchery-reared Atlantic sturgeon raised at 10 °C exceeded that of the group raised at 17 °C. Our distributional data based on capture locations are biased by fishery dependence and gear selectivity. These data are informative to managers, however, because commercial effort is widely distributed in the Chesapeake Bay, and little distributional data were available before this study.
NASA Astrophysics Data System (ADS)
Malmberg, Filip; Sandberg-Melin, Camilla; Söderberg, Per G.
2016-03-01
The aim of this project was to investigate the possibility of using OCT optic nerve head 3D information captured with a Topcon OCT 2000 device for detection of the shortest distance between the inner limit of the retina and the central limit of the pigment epithelium around the circumference of the optic nerve head. The shortest distance between these boundaries reflects the nerve fiber layer thickness and measurement of this distance is interesting for follow-up of glaucoma.
Informing neutron capture nucleosynthesis on short-lived nuclei with (d,p) reactions
NASA Astrophysics Data System (ADS)
Cizewski, Jolie A.; Ratkiewicz, Andrew; Escher, Jutta E.; Lepailleur, Alexandre; Pain, Steven D.; Potel, Gregory
2018-01-01
Neutron capture on unstable nuclei is important for understanding abundances in r-process nucleosynthesis. Previously, the non-elastic breakup of the deuteron in the (d,p) reaction was shown to provide a neutron that can be captured by the nucleus, and the gamma-ray decay of the resulting compound nucleus can be modelled to predict the gamma-ray decay following the corresponding (n,γ) reaction. Preliminary results from the 95Mo(d,pγ) reaction in normal kinematics support the (d,pγ) reaction as a valid surrogate for neutron capture. Techniques to measure the (d,pγ) reaction in inverse kinematics have also been developed.
Acquiring a 2D rolled equivalent fingerprint image from a non-contact 3D finger scan
NASA Astrophysics Data System (ADS)
Fatehpuria, Abhishika; Lau, Daniel L.; Hassebrook, Laurence G.
2006-04-01
The use of fingerprints as a biometric is both the oldest mode of computer-aided personal identification and the most relied-upon technology in use today. However, current fingerprint scanning systems face some challenging and peculiar difficulties. Often, skin conditions and imperfect acquisition circumstances cause the captured fingerprint image to be far from ideal. Some acquisition techniques can also be slow and cumbersome to use and may not provide the complete information required for reliable feature extraction and fingerprint matching. Most of these difficulties arise from the contact of the fingerprint surface with the sensor platen. To attain a fast-capture, non-contact fingerprint scanning technology, we are developing a scanning system that employs structured light illumination to acquire a 3-D scan of the finger with sufficiently high resolution to record ridge-level details. In this paper, we describe the post-processing steps used to convert the acquired 3-D scan of the subject's finger into a 2-D rolled-equivalent image.
3D image processing architecture for camera phones
NASA Astrophysics Data System (ADS)
Atanassov, Kalin; Ramachandra, Vikas; Goma, Sergio R.; Aleksic, Milivoje
2011-03-01
Putting high-quality and easy-to-use 3D technology into the hands of regular consumers has become a recent challenge as interest in 3D technology has grown. Making 3D technology appealing to the average user requires that it be made fully automatic and foolproof. Designing a fully automatic 3D capture and display system requires: 1) identifying critical 3D technology issues such as camera positioning, disparity control rationale, and screen geometry dependency, and 2) designing a methodology to control them automatically. Implementing 3D capture functionality on phone cameras necessitates designing algorithms that fit within the processing capabilities of the device. Various constraints such as sensor position tolerances, sensor 3A tolerances, post-processing, 3D video resolution, and frame rate should be carefully considered for their influence on the 3D experience. Issues with migrating functions such as zoom and pan from the 2D usage model (both during capture and display) to 3D need to be resolved to ensure the highest level of user experience. It is also very important that the 3D usage scenario (including interactions between the user and the capture/display device) is carefully considered. Finally, both the processing power of the device and the practicality of the scheme need to be taken into account while designing the calibration and processing methodology.
Capsule endoscope localization based on computer vision technique.
Liu, Li; Hu, Chao; Cai, Wentao; Meng, Max Q H
2009-01-01
To build a new type of wireless capsule endoscope for interactive gastrointestinal tract examination, a localization and orientation system is needed for tracking the 3D location and 3D orientation of the capsule movement. The magnetic localization and orientation method produces only 5 DOF, missing the rotation angle about the capsule's main axis. In this paper, we present a complementary orientation approach for the capsule endoscope, in which the 3D rotation is determined by applying computer vision techniques to the captured endoscopic images. The experimental results show that the complementary orientation method has good accuracy and high feasibility.
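The missing roll angle appears as an in-plane rotation between successive endoscopic images. As a hedged sketch (a generic least-squares rotation fit, not the authors' algorithm), the angle can be recovered from matched feature points:

```python
import numpy as np

# Estimate the in-plane (roll) rotation between two frames from matched
# 2D feature points via a least-squares (Kabsch-style) fit. The point
# sets here are synthetic; real use would take tracked image features.

def roll_angle(pts_a: np.ndarray, pts_b: np.ndarray) -> float:
    """Rotation angle (radians) that best maps pts_a onto pts_b."""
    a = pts_a - pts_a.mean(axis=0)
    b = pts_b - pts_b.mean(axis=0)
    h = a.T @ b                      # 2x2 cross-covariance
    u, _, vt = np.linalg.svd(h)
    r = vt.T @ u.T                   # optimal rotation matrix
    if np.linalg.det(r) < 0:         # guard against reflections
        vt[-1] *= -1
        r = vt.T @ u.T
    return float(np.arctan2(r[1, 0], r[0, 0]))

theta = np.deg2rad(10.0)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
pts = np.random.default_rng(0).normal(size=(20, 2))
est = roll_angle(pts, pts @ rot.T)   # recovers the 10-degree roll
```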
Preparation of 2D sequences of corneal images for 3D model building.
Elbita, Abdulhakim; Qahwaji, Rami; Ipson, Stanley; Sharif, Mhd Saeed; Ghanchi, Faruque
2014-04-01
A confocal microscope provides a sequence of images, at incremental depths, of the various corneal layers and structures. From these, medical practitioners can extract clinical information on the state of health of the patient's cornea. In this work we address problems associated with capturing and processing these images, including blurring, non-uniform illumination and noise, as well as the displacement of images laterally and in the anterior-posterior direction caused by subject movement. The latter may cause some of the captured images to be out of sequence in terms of depth. In this paper we introduce automated algorithms for classification, reordering, registration and segmentation to solve these problems. The successful implementation of these algorithms could open the door for another interesting development: the 3D modelling of these sequences. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Rapid 3-D analysis of rockfalls
Stock, Greg M.; Guerin, A.; Avdievitch, Nikita N.; Collins, Brian D.; Jaboyedoff, Michel
2018-01-01
Recent fatal and damaging rockfalls in Yosemite National Park indicate the need for rapid-response data collection methods to inform public safety and assist with management response. Here we show the use of multiple-platform remote sensing methods to rapidly capture the pertinent data needed to inform management and the public following several large rockfalls from El Capitan cliff in Yosemite Valley, California.
Panoramic 3D Reconstruction by Fusing Color Intensity and Laser Range Data
NASA Astrophysics Data System (ADS)
Jiang, Wei; Lu, Jian
Technology for capturing panoramic (360-degree) three-dimensional information in a real environment has many applications in fields such as virtual and augmented reality, security, and robot navigation. In this study, we examine an acquisition device constructed of a regular CCD camera and a 2D laser range scanner, along with a technique for panoramic 3D reconstruction using a data fusion algorithm based on an energy minimization framework. The acquisition device can capture two types of data of a panoramic scene without occlusion between the two sensors: a dense spatio-temporal volume from the camera and distance information from the laser scanner. We resample the dense spatio-temporal volume to generate a dense multi-perspective panorama with spatial resolution equal to that of the original images acquired using the regular camera, and also estimate a dense panoramic depth-map corresponding to the generated reference panorama by extracting trajectories from the dense spatio-temporal volume with a selecting camera. Moreover, to determine distance information robustly, we propose a data fusion algorithm embedded in an energy minimization framework that incorporates active depth measurements from the 2D laser range scanner and passive geometry reconstruction from the image sequence obtained with the CCD camera. Thereby, measurement precision and robustness can be improved beyond those available from conventional methods using either passive geometry reconstruction (stereo vision) or a laser range scanner alone. Experimental results using both synthetic and actual images show that our approach can produce high-quality panoramas and perform accurate 3D reconstruction in a panoramic environment.
NASA Astrophysics Data System (ADS)
Guo, Zhenyan; Song, Yang; Yuan, Qun; Wulan, Tuya; Chen, Lei
2017-06-01
In this paper, a transient multi-parameter three-dimensional (3D) reconstruction method is proposed to diagnose and visualize a combustion flow field. Emission and transmission tomography based on spatial phase-shifted technology are combined to reconstruct, simultaneously, the various physical parameter distributions of a propane flame. Two cameras triggered by the internal trigger mode capture the projection information of the emission and moiré tomography, respectively. A two-step spatial phase-shifting method is applied to extract the phase distribution in the moiré fringes. By using the filtered back-projection algorithm, we reconstruct the 3D refractive-index distribution of the combustion flow field. Finally, the 3D temperature distribution of the flame is obtained from the refractive index distribution using the Gladstone-Dale equation. Meanwhile, the 3D intensity distribution is reconstructed based on the radiation projections from the emission tomography. Therefore, the structure and edge information of the propane flame are well visualized.
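The temperature step combines the Gladstone-Dale relation, n - 1 = K * rho, with the ideal gas law, rho = P / (R_s * T). A minimal sketch using air constants for illustration (the propane-flame gas properties in the paper would differ):

```python
# Temperature from a reconstructed refractive index via the
# Gladstone-Dale relation (n - 1 = K * rho) combined with the ideal gas
# law (rho = P / (R_s * T)). Constants are for air at ambient pressure
# and are illustrative, not the flame gas properties.

K = 2.26e-4      # Gladstone-Dale constant for air, m^3/kg
R_S = 287.0      # specific gas constant of air, J/(kg K)
P = 101325.0     # static pressure, Pa

def temperature_from_index(n: float) -> float:
    """T = P * K / ((n - 1) * R_s)."""
    rho = (n - 1.0) / K          # density from Gladstone-Dale
    return P / (rho * R_S)       # ideal gas law

# n = 1.000226 corresponds to rho = 1 kg/m^3, i.e. roughly 353 K
```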
Evaluation of capture techniques for long-billed curlews wintering in Texas
Woodin, Marc C.; Skoruppa, Mary K.; Edwardson, Jeremy W.; Austin, Jane E.
2012-01-01
The Texas coast harbors the largest, easternmost populations of Long-billed Curlews (Numenius americanus) in North America; however, very little is known about their migration and wintering ecology. Curlews are readily captured on their breeding grounds, but experience with capturing the species during the non-breeding season is extremely limited. We assessed the efficacy of 6 capture techniques for Long-billed Curlews in winter: 1) modified noose ropes, 2) remotely controlled bow net, 3) Coda Netgun, 4) Super Talon net gun, 5) Hawkseye whoosh net, and 6) cast net. The Coda Netgun had the highest rate of captures per unit of effort (CPUE = 0.31; 4 curlew captures/13 d of trapping effort), followed by bow net (CPUE = 0.17; 1 capture/6 d of effort), whoosh net (CPUE = 0.14; 1 capture/7 d of effort), and noose ropes (CPUE = 0.07; 1 capture/15 d of effort). No curlews were captured using the Super Talon net gun or a cast net (3 d and 1 d of effort, respectively). Multiple capture techniques should be readily available for maximum flexibility in matching capture methods with neophobic curlews that often unpredictably change preferred feeding locations among extremely different habitat types.
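The CPUE figures above are simply captures divided by trap-days of effort; reproducing them:

```python
# Captures per unit effort (CPUE) = curlew captures / days of trapping
# effort, reproducing the per-technique rates reported in the abstract.

effort = {                     # technique: (captures, trap-days)
    "Coda Netgun": (4, 13),
    "bow net": (1, 6),
    "whoosh net": (1, 7),
    "noose ropes": (1, 15),
}

cpue = {k: round(c / d, 2) for k, (c, d) in effort.items()}
# {'Coda Netgun': 0.31, 'bow net': 0.17, 'whoosh net': 0.14, 'noose ropes': 0.07}
```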
Combined Dynamic Time Warping with Multiple Sensors for 3D Gesture Recognition
Choi, Hyo-Rim; Kim, TaeYong
2017-01-01
Cyber-physical systems, which closely integrate physical systems and humans, can be applied to a wider range of applications through user movement analysis. In three-dimensional (3D) gesture recognition, multiple sensors are required to recognize various natural gestures. Several studies have been undertaken in the field of gesture recognition; however, gesture recognition was conducted based on data captured from various independent sensors, which rendered the capture and combination of real-time data complicated. In this study, a 3D gesture recognition method using combined information obtained from multiple sensors is proposed. The proposed method can robustly perform gesture recognition regardless of a user’s location and movement directions by providing viewpoint-weighted values and/or motion-weighted values. In the proposed method, the viewpoint-weighted dynamic time warping with multiple sensors has enhanced performance by preventing joint measurement errors and noise due to sensor measurement tolerance, which has resulted in the enhancement of recognition performance by comparing multiple joint sequences effectively. PMID:28817094
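The sequence comparison at the heart of the method is dynamic time warping. A plain (unweighted) DTW sketch is shown below; the paper's viewpoint- and motion-weighted variants are not reproduced here:

```python
import math

# Plain dynamic time warping (DTW) distance between two sequences, the
# base algorithm the paper extends with viewpoint/motion weights.

def dtw(seq_a, seq_b, dist=lambda a, b: abs(a - b)):
    n, m = len(seq_a), len(seq_b)
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(seq_a[i - 1], seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# a time-stretched copy of a sequence still aligns at zero cost
assert dtw([1, 2, 3], [1, 1, 2, 2, 3, 3]) == 0.0
```

In the gesture setting, the sequence elements would be joint positions and the distance function a weighted joint-to-joint metric rather than a scalar difference.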
Koban, K C; Leitsch, S; Holzbach, T; Volkmer, E; Metz, P M; Giunta, R E
2014-04-01
A new approach using photographs from smartphones for three-dimensional (3D) imaging has been introduced alongside the standard high-quality 3D camera systems. In this work, we investigated different capture preferences and compared the accuracy of this 3D reconstruction method with manual tape measurement and an established commercial 3D camera system. The facial region of one plastic mannequin head was labelled with 21 landmarks. A 3D reference model was captured with the Vectra 3D Imaging System®. In addition, 3D imaging was performed with the Autodesk 123D Catch® application using 16, 12, 9, 6 and 3 pictures from an Apple® iPhone 4s® and an iPad® 3rd generation. The accuracy of the 3D reconstruction was assessed in two steps. First, 42 distance measurements from manual tape measurement and the two digital systems were compared. Second, the surface-to-surface deviation of different aesthetic units from the Vectra® reference model to the Catch®-generated models was analysed. For each 3D system the capture and processing time was measured. The measurements showed no significant (p>0.05) difference between manual tape measurement and the digital distances from either the Catch® application or Vectra®. Surface-to-surface deviation from the Vectra® reference model showed sufficient results for the 3D reconstruction of Catch® with the 16-, 12- and 9-picture sets. Use of 6 and 3 pictures resulted in large deviations. Lateral aesthetic units showed higher deviations than central units. Catch® took 5 times longer to capture and compute 3D models (on average 10 min vs. 2 min). The models computed by Autodesk 123D Catch® suggest good accuracy of the 3D reconstruction for a standard mannequin model, in comparison with manual tape measurement and the surface-to-surface analysis against a 3D reference model. However, the prolonged capture time with multiple pictures is prone to errors. Further studies are needed to investigate its application and quality in capturing volunteer models.
Soon mobile applications may offer an alternative for plastic surgeons to today's cost intensive, stationary 3D camera systems. © Georg Thieme Verlag KG Stuttgart · New York.
Development of a piecewise linear omnidirectional 3D image registration method
NASA Astrophysics Data System (ADS)
Bae, Hyunsoo; Kang, Wonjin; Lee, SukGyu; Kim, Youngwoo
2016-12-01
This paper proposes a new piecewise linear omnidirectional image registration method. The proposed method segments an image captured by multiple cameras into 2D segments defined by feature points of the image and then stitches each segment geometrically by considering the inclination of the segment in 3D space. Depending on the intended use of image registration, the proposed method can be used to improve registration accuracy or to reduce the computation time, because the trade-off between computation time and registration accuracy can be controlled. In general, nonlinear image registration methods have been used in 3D omnidirectional image registration to reduce the image distortion caused by camera lenses. The proposed method instead relies on a linear transformation process for omnidirectional image registration, and therefore it can enhance the effectiveness of the geometry recognition process, increase image registration accuracy by increasing the number of cameras or feature points of each image, increase the image registration speed by reducing the number of cameras or feature points of each image, and provide simultaneous information on the shapes and colors of captured objects.
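Each 2D segment can be stitched with its own linear (affine) map solved from its corner correspondences. A hedged sketch for one triangular segment, with synthetic coordinates (the paper's segment construction and inclination handling are not reproduced):

```python
import numpy as np

# One building block of piecewise linear registration: the affine map
# taking a triangular image segment onto its counterpart, solved from
# the three matched corner points. Coordinates are synthetic.

def affine_from_triangle(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """2x3 affine matrix A with dst ≈ A @ [x, y, 1] for each corner."""
    a = np.hstack([src, np.ones((3, 1))])     # one 3x3 system per axis
    return np.linalg.solve(a, dst).T          # -> 2x3

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
dst = np.array([[2.0, 1.0], [4.0, 1.0], [2.0, 3.0]])
A = affine_from_triangle(src, dst)
# here A encodes scale (2, 2) plus translation (2, 1)
```

Applying a separate map per segment is what makes the overall warp piecewise linear rather than globally nonlinear.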
Integral imaging with Fourier-plane recording
NASA Astrophysics Data System (ADS)
Martínez-Corral, M.; Barreiro, J. C.; Llavador, A.; Sánchez-Ortiga, E.; Sola-Pikabea, J.; Scrofani, G.; Saavedra, G.
2017-05-01
Integral imaging is well known for its capability of recording both the spatial and the angular information of three-dimensional (3D) scenes. Based on this idea, the plenoptic concept has been developed over the past two decades, and a new camera has been designed with the capacity to capture the spatial-angular information with a single sensor after a single shot. However, the classical plenoptic design presents two drawbacks: one is the oblique recording made by the external microlenses; the other is the loss of information due to diffraction effects. In this contribution we report a change in the paradigm and propose the combination of a telecentric architecture and Fourier-plane recording. This new capture geometry permits substantial improvements in resolution, depth of field and computation time.
Towards Automatic Processing of Virtual City Models for Simulations
NASA Astrophysics Data System (ADS)
Piepereit, R.; Schilling, A.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.
2016-10-01
Especially in the field of numerical simulations, such as flow and acoustic simulations, interest in using virtual 3D models to optimize urban systems is increasing. The few instances in which simulations have already been carried out in practice have involved an extremely high manual, and therefore uneconomical, effort for the processing of models. The different ways of capturing models in Geographic Information Systems (GIS) and Computer Aided Engineering (CAE) further increase the already very high complexity of the processing. To obtain virtual 3D models suitable for simulation, we developed a tool for automatic processing with the goal of establishing ties between the worlds of GIS and CAE. In this paper we introduce a way to use Coons surfaces for the automatic processing of building models in LoD2, and investigate ways to simplify LoD3 models in order to reduce information unnecessary for a numerical simulation.
Selective 4D modelling framework for spatial-temporal land information management system
NASA Astrophysics Data System (ADS)
Doulamis, Anastasios; Soile, Sofia; Doulamis, Nikolaos; Chrisouli, Christina; Grammalidis, Nikos; Dimitropoulos, Kosmas; Manesis, Charalambos; Potsiou, Chryssy; Ioannidis, Charalabos
2015-06-01
This paper introduces a predictive (selective) 4D modelling framework in which only the spatial 3D differences are modelled at forthcoming time instances, while regions without significant spatial-temporal alterations remain intact. To accomplish this, spatial-temporal analysis is first applied between 3D digital models captured at different time instances, creating dynamic change history maps. Change history maps indicate the spatial probability that regions will need further 3D modelling at forthcoming instances. They thus serve as a predictive assessment, localizing surfaces within the objects where a high-accuracy reconstruction process needs to be activated at forthcoming time instances. The proposed 4D Land Information Management System (LIMS) is implemented using open interoperable standards based on the CityGML framework. CityGML allows the description of semantic metadata information and the rights of the land resources. Visualization aspects are also supported to allow easy manipulation, interaction and representation of the 4D LIMS digital parcels and the respective semantic information. The open source 3DCityDB, incorporating a PostgreSQL geo-database, is used to manage and manipulate the 3D data and their semantics. An application is presented that detects change through time in a 3D block of plots in an urban area of Athens, Greece. Starting with an accurate 3D model of the buildings in 1983, a change history map is created using automated dense image matching on aerial photos from 2010. For both time instances meshes are created, and through their comparison the changes are detected.
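In its simplest rasterized form, a change history map flags cells where two epochs of a 3D model differ by more than a tolerance. A toy sketch on synthetic height grids (not the Athens data or the paper's probabilistic maps):

```python
import numpy as np

# A change history map in miniature: cells where two height grids
# (3D models rasterized at different epochs) differ by more than a
# tolerance are flagged for re-modelling. The grids are synthetic.

def change_map(dsm_t0: np.ndarray, dsm_t1: np.ndarray, tol: float = 0.5) -> np.ndarray:
    """Boolean mask of cells whose elevation changed by more than tol."""
    return np.abs(dsm_t1 - dsm_t0) > tol

t0 = np.zeros((4, 4))
t1 = t0.copy()
t1[1:3, 1:3] = 3.0          # a new structure appears in the middle
mask = change_map(t0, t1)   # True only on the changed 2x2 block
```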
Programming standards for effective S-3D game development
NASA Astrophysics Data System (ADS)
Schneider, Neil; Matveev, Alexander
2008-02-01
When a video game is in development, more often than not it is rendered in three dimensions, complete with volumetric depth. It is the PC monitor that takes this three-dimensional information and artificially displays it in a flat, two-dimensional format. Stereoscopic drivers take the three-dimensional information captured from DirectX and OpenGL calls and properly display it with a unique left- and right-sided view for each eye so that a proper stereoscopic 3D image can be seen by the gamer. The two-dimensional limitation of how information is displayed on screen has encouraged programming short-cuts and work-arounds that stifle this stereoscopic 3D effect, and the purpose of this guide is to outline techniques to get the best of both worlds. While the programming requirements do not significantly add to the game development time, following these guidelines will greatly enhance your customers' stereoscopic 3D experience, increase your likelihood of earning Meant to be Seen certification, and give you instant cost-free access to the industry's most valued consumer base. While this outline is mostly based on NVIDIA's programming guide and iZ3D resources, it is designed to work with all stereoscopic 3D hardware solutions and is not proprietary in any way.
Temporal consistent depth map upscaling for 3DTV
NASA Astrophysics Data System (ADS)
Schwarz, Sebastian; Sjöström, Mârten; Olsson, Roger
2014-03-01
The ongoing success of three-dimensional (3D) cinema fuels increasing efforts to spread the commercial success of 3D to new markets. The possibility of a convincing 3D experience at home, such as three-dimensional television (3DTV), has generated a great deal of interest within the research and standardization community. A central issue for 3DTV is the creation and representation of 3D content. Acquiring scene depth information is a fundamental task in computer vision, yet complex and error-prone. Dedicated range sensors, such as the Time-of-Flight (ToF) camera, can simplify the scene depth capture process and overcome shortcomings of traditional solutions, such as active or passive stereo analysis. Admittedly, currently available ToF sensors deliver only a limited spatial resolution. However, sophisticated depth upscaling approaches use texture information to match depth and video resolution. At Electronic Imaging 2012 we proposed an upscaling routine based on error energy minimization, weighted with edge information from an accompanying video source. In this article we develop our algorithm further. By adding temporal consistency constraints to the upscaling process, we reduce disturbing depth jumps and flickering artifacts in the final 3DTV content. Temporal consistency in depth maps enhances the 3D experience, leading to a wider acceptance of 3D media content. More content in better quality can boost the commercial success of 3DTV.
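Texture-guided depth upscaling can be illustrated with a toy joint bilateral upsampling pass, in which each high-resolution depth value is a guidance-weighted average of nearby low-resolution samples. This is a generic stand-in, not the authors' error-energy minimization with temporal constraints:

```python
import numpy as np

# Toy joint bilateral upsampling: low-resolution depth is lifted to the
# guidance-image resolution, weighting each contributing depth sample
# by how similar the guidance intensities are, so depth edges tend to
# follow texture edges. A stand-in, not the paper's method.

def joint_bilateral_upsample(depth_lo, guide_hi, scale, sigma_g=10.0):
    h, w = guide_hi.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            ys, xs = y // scale, x // scale          # nearest low-res cell
            acc = wsum = 0.0
            for dy in (-1, 0, 1):                    # 3x3 low-res support
                for dx in (-1, 0, 1):
                    v, u = ys + dy, xs + dx
                    if 0 <= v < depth_lo.shape[0] and 0 <= u < depth_lo.shape[1]:
                        g = guide_hi[min(v * scale, h - 1), min(u * scale, w - 1)]
                        wgt = np.exp(-((guide_hi[y, x] - g) ** 2) / (2 * sigma_g ** 2))
                        acc += wgt * depth_lo[v, u]
                        wsum += wgt
            out[y, x] = acc / wsum
    return out

depth = np.full((2, 2), 5.0)      # constant low-res depth
guide = np.zeros((4, 4))          # featureless guidance image
hi = joint_bilateral_upsample(depth, guide, scale=2)   # stays constant
```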
Human machine interface by using stereo-based depth extraction
NASA Astrophysics Data System (ADS)
Liao, Chao-Kang; Wu, Chi-Hao; Lin, Hsueh-Yi; Chang, Ting-Ting; Lin, Tung-Yang; Huang, Po-Kuan
2014-03-01
Evaluation of a Gait Assessment Module Using 3D Motion Capture Technology
Baskwill, Amanda J.; Belli, Patricia; Kelleher, Leila
2017-01-01
Background: Gait analysis is the study of human locomotion. In massage therapy, this observation is part of an assessment process that informs treatment planning. Massage therapy students must apply the theory of gait assessment to simulated patients. At Humber College, the gait assessment module traditionally consists of a textbook reading and a three-hour, in-class session in which students perform gait assessment on each other. In 2015, Humber College acquired a three-dimensional motion capture system. Purpose: The purpose was to evaluate the use of 3D motion capture in a gait assessment module compared to the traditional gait assessment module. Participants: Semester 2 massage therapy students who were enrolled in Massage Theory 2 (n = 38). Research Design: Quasi-experimental, wait-list comparison study. Intervention: The intervention group participated in an in-class session with a Qualisys motion capture system. Main Outcome Measure(s): The outcomes included knowledge and application of gait assessment theory as measured by quizzes, and students' satisfaction as measured through a questionnaire. Results: There were no statistically significant differences in baseline and post-module knowledge between both groups (pre-module: p = .46; post-module: p = .63). There was also no difference between groups on the final application question (p = .13). The intervention group enjoyed the in-class session because they could visualize the content, whereas the comparison group enjoyed the interactivity of the session. The intervention group recommended adding the assessment of gait on their classmates to their experience. Both groups noted more time was needed for the gait assessment module. Conclusions: Based on the results of this study, it is recommended that the gait assessment module combine both the traditional in-class session and the 3D motion capture system. PMID:28293329
Remote listening and passive acoustic detection in a 3-D environment
NASA Astrophysics Data System (ADS)
Barnhill, Colin
Teleconferencing environments are a necessity in business, education and personal communication. They allow the communication of information to remote locations without the need for travel and the time and expense that travel requires. Visual information can be communicated using cameras and monitors. The advantage of visual communication is that an image can capture multiple objects and convey them, using a monitor, to a large group of people regardless of the receiver's location. This is not the case for audio. Currently, most experimental teleconferencing systems' audio is based on stereo recording and reproduction techniques. The problem with this solution is that it is only effective for one or two receivers. To accurately capture a sound environment consisting of multiple sources, and to recreate it for a group of people, is an unsolved problem. This work focuses on new methods of multiple-source 3-D environment sound capture and on applications using these captured environments. Using spherical microphone arrays, it is now possible to capture a true 3-D environment. A spherical harmonic transform on the array's surface allows us to determine the basis functions (spherical harmonics) for all spherical wave solutions (up to a fixed order). This spherical harmonic decomposition (SHD) allows us to examine not only the time and frequency characteristics of an audio signal but also its spatial characteristics. In this sense, a spherical harmonic transform is analogous to a Fourier transform: the Fourier transform takes a signal into the frequency domain, while the spherical harmonic transform takes it into the spatial domain. The SHD also decouples the input signals from the microphone locations.
Using the SHD of a soundfield, new algorithms become available for remote listening, acoustic detection, and signal enhancement. The new algorithms presented in this paper show distinct advantages over previous detection and listening algorithms, especially for multiple speech sources and room environments. The algorithms use high-order (spherical harmonic) beamforming and power signal characteristics for source localization and signal enhancement. These methods are applied to remote listening, surveillance, and teleconferencing.
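The SHD itself is a projection of the soundfield samples onto spherical harmonics. A miniature numerical sketch, using only the (0,0) and (1,0) real harmonics and a synthetic field (a real array would solve for all orders up to its design limit):

```python
import numpy as np

# Spherical harmonic decomposition in miniature: project a field
# sampled on a sphere onto (real, low-order) spherical harmonics by
# numerical quadrature, recovering the known mixing coefficients.

def y00(_theta, phi):                   # order 0 harmonic
    return np.full_like(phi, 1.0 / np.sqrt(4.0 * np.pi))

def y10(_theta, phi):                   # order 1, degree 0 (polar angle phi)
    return np.sqrt(3.0 / (4.0 * np.pi)) * np.cos(phi)

# midpoint quadrature grid over the sphere
nt, nphi = 400, 400
theta = (np.arange(nt) + 0.5) * 2.0 * np.pi / nt        # azimuth
phi = (np.arange(nphi) + 0.5) * np.pi / nphi            # polar angle
T, P = np.meshgrid(theta, phi)
dA = (2.0 * np.pi / nt) * (np.pi / nphi) * np.sin(P)    # area element

field = 2.0 * y00(T, P) + 3.0 * y10(T, P)               # known mixture
c00 = float(np.sum(field * y00(T, P) * dA))             # recovers ~2.0
c10 = float(np.sum(field * y10(T, P) * dA))             # recovers ~3.0
```

The recovered coefficients match the mixture because the harmonics are orthonormal over the sphere, which is exactly the property beamforming and source localization on spherical arrays exploit.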
Real object-based 360-degree integral-floating display using multiple depth camera
NASA Astrophysics Data System (ADS)
Erdenebat, Munkh-Uchral; Dashdavaa, Erkhembaatar; Kwon, Ki-Chul; Wu, Hui-Ying; Yoo, Kwan-Hee; Kim, Young-Seok; Kim, Nam
2015-03-01
A novel 360-degree integral-floating display based on a real object is proposed. The general procedure of the display system is similar to that of conventional 360-degree integral-floating displays. Unlike previously presented 360-degree displays, the proposed system displays a 3D image generated from a real object in the 360-degree viewing zone. In order to do so, multiple depth cameras are utilized to acquire depth information around the object. The 3D point cloud representations of the real object are then reconstructed from the acquired depth information. Using a special point cloud registration method, the multiple virtual 3D point cloud representations captured by each depth camera are combined into a single synthetic 3D point cloud model, and elemental image arrays are generated for the newly synthesized 3D point cloud model from the given angular step of the anamorphic optic system. The approach has been verified experimentally, and the results show that the proposed 360-degree integral-floating display can be an excellent way to display a real object in the 360-degree viewing zone.
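Before any registration refinement, each camera's cloud is brought into a common world frame using its pose. A minimal sketch with synthetic extrinsics, assumed known from calibration (the paper's special registration method is not reproduced):

```python
import numpy as np

# Fusing point clouds from several depth cameras into one model: apply
# each camera's extrinsic pose (rotation R, translation t) and
# concatenate the transformed points.

def to_world(points: np.ndarray, r: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Camera-frame Nx3 points -> world frame via p_w = R p_c + t."""
    return points @ r.T + t

# two cameras facing each other along the x axis, 2 m apart (synthetic)
r0, t0 = np.eye(3), np.array([0.0, 0.0, 0.0])
r1 = np.array([[-1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, -1.0]])
t1 = np.array([2.0, 0.0, 0.0])

cloud0 = np.array([[1.0, 0.0, 0.0]])        # a point seen by camera 0
cloud1 = np.array([[1.0, 0.0, 0.0]])        # same point, in camera 1's frame
merged = np.vstack([to_world(cloud0, r0, t0), to_world(cloud1, r1, t1)])
# both rows land on the same world point, as they should
```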
Obstacle Classification and 3D Measurement in Unstructured Environments Based on ToF Cameras
Yu, Hongshan; Zhu, Jiang; Wang, Yaonan; Jia, Wenyan; Sun, Mingui; Tang, Yandong
2014-01-01
Inspired by the human 3D visual perception system, we present an obstacle detection and classification method based on the use of Time-of-Flight (ToF) cameras for robotic navigation in unstructured environments. The ToF camera provides 3D sensing by capturing an image along with per-pixel 3D space information. Based on this valuable feature and human knowledge of navigation, the proposed method first removes from the scene irrelevant regions which do not affect the robot's movement. In the second step, regions of interest are detected and clustered as possible obstacles using both the 3D information and the intensity image obtained by the ToF camera. A multiple relevance vector machine (RVM) classifier is then designed to classify obstacles into four possible classes based on terrain traversability and the geometrical features of the obstacles. Finally, experimental results in various unstructured environments are presented to verify the robustness and performance of the proposed approach. We have found that, compared with existing obstacle recognition methods, the new approach is more accurate and efficient. PMID:24945679
A 3D Scan Model and Thermal Image Data Fusion Algorithms for 3D Thermography in Medicine
Klima, Ondrej
2017-01-01
Objectives At present, medical thermal imaging is still considered a mere qualitative tool, enabling us to distinguish between, but lacking the ability to quantify, the physiological and nonphysiological states of the body. Such a capability would, however, facilitate solving the problem of medical quantification, which currently manifests itself throughout the entire healthcare system. Methods A generally applicable method to enhance captured 3D spatial data with temperature-related information is presented; in this context, all equations required for the data fusion are derived. The method can be utilized for high-density point clouds or detailed meshes at high resolution, but is equally usable for large objects with sparse points. Results The benefits of the approach are experimentally demonstrated on 3D thermal scans of injured subjects. We obtained diagnostic information inaccessible via traditional methods. Conclusion Fusing a 3D model with thermal image data allows the quantification of inflammation, facilitating more precise injury and illness diagnostics and monitoring. The technique offers wide application potential in medicine and in multiple technological domains, including electrical and mechanical engineering. PMID:29250306
NASA Astrophysics Data System (ADS)
Wei, Dong; Weinstein, Susan; Hsieh, Meng-Kang; Pantalone, Lauren; Kontos, Despina
2018-03-01
The relative amount of fibroglandular tissue (FGT) in the breast has been shown to be a risk factor for breast cancer. However, automatic segmentation of FGT in breast MRI is challenging, due mainly to its wide variation in anatomy (e.g., amount, location, and pattern) and to various imaging artifacts, especially the prevalent bias-field artifact. Motivated by previous work demonstrating improved FGT segmentation with a 2-D a priori likelihood atlas, we propose a machine learning-based framework using 3-D FGT context. The framework uses features specifically defined with respect to the breast anatomy to capture the spatially varying likelihood of FGT, and allows (a) intuitive standardization across breasts of different sizes and shapes, and (b) easy incorporation of additional information helpful to the segmentation (e.g., texture). Extending the concept of the 2-D atlas, our framework not only captures the spatial likelihood of FGT in a 3-D context, but is also applicable to both sagittal and axial breast MRI, rather than being limited to the plane in which the 2-D atlas is constructed. Experimental results showed improved segmentation accuracy over the 2-D atlas method, and demonstrated further improvement from incorporating well-established texture descriptors.
Best practices for the 3D documentation of the Grotta dei Cervi of Porto Badisco, Italy
NASA Astrophysics Data System (ADS)
Beraldin, J. A.; Picard, M.; Bandiera, A.; Valzano, V.; Negro, F.
2011-03-01
The Grotta dei Cervi is a Neolithic cave where human presence has left many unique pictographs on the walls of many of its chambers. It was closed for conservation reasons soon after its discovery in 1970; it is for these reasons that a 3D documentation project was started. Two sets of high-resolution, detailed three-dimensional (3D) acquisitions were captured in 2005 and 2009, respectively, along with two-dimensional (2D) images. From this information a textured 3D model was produced for most of the 300-m-long central corridor. Carbon dating of the guano used for the pictographs and environmental monitoring (temperature, relative humidity, and radon) completed the project. This paper presents the project, some results obtained so far, the best practices that have emerged from this work, and a description of the processing pipeline, which deals with more than 27 billion 3D coordinates.
Person identification by using 3D palmprint data
NASA Astrophysics Data System (ADS)
Bai, Xuefei; Huang, Shujun; Gao, Nan; Zhang, Zonghua
2016-11-01
Person identification based on biometrics is drawing more and more attention in identity and information security. This paper presents a biometric system that identifies a person using 3D palmprint data, comprising a non-contact system that captures the 3D palmprint quickly and a fast 3D palmprint identification method. In order to reduce the effect of slight palm shaking on data accuracy, a DLP (Digital Light Processing) projector is utilized to trigger a CCD camera based on structured light and triangulation measurement, so that 3D palmprint data can be gathered within 1 second. Using the obtained database and the PolyU 3D palmprint database, a feature extraction and matching method is presented based on the MCI (Mean Curvature Image), Gabor filters, and binary code lists. Experimental results show that the proposed method can identify a person within 240 ms in the case of 4000 samples. Compared with traditional 3D palmprint recognition methods, the proposed method has high accuracy, a low EER (Equal Error Rate), small storage requirements, and fast identification speed.
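The matching stage of systems like this one often reduces to comparing binary feature codes. The following is a minimal illustration of that idea (not the paper's actual MCI/Gabor pipeline): identify a probe by normalized Hamming distance against a gallery of binary codes. The code length, subject names, and the 0.3 acceptance threshold are all assumptions:

```python
import numpy as np

def hamming_score(code_a, code_b):
    """Fraction of differing bits between two binary feature codes."""
    assert code_a.shape == code_b.shape
    return np.count_nonzero(code_a != code_b) / code_a.size

def identify(probe, gallery, threshold=0.3):
    """Return the gallery key with the smallest Hamming distance to the
    probe, or None if no entry is close enough."""
    best_key, best_d = None, 1.0
    for key, code in gallery.items():
        d = hamming_score(probe, code)
        if d < best_d:
            best_key, best_d = key, d
    return best_key if best_d <= threshold else None

rng = np.random.default_rng(0)
enrolled = rng.integers(0, 2, size=256)
gallery = {"subject_A": enrolled,
           "subject_B": rng.integers(0, 2, size=256)}

# A noisy probe of subject_A: flip 10 of its 256 bits.
probe = enrolled.copy()
probe[:10] ^= 1
```

An unrelated code differs from any enrolled one in roughly half its bits, which is why a threshold well below 0.5 separates genuine from impostor comparisons.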
Minimizing capture-related stress on white-tailed deer with a capture collar
DelGiudice, G.D.; Kunkel, K.E.; Mech, L.D.; Seal, U.S.
1990-01-01
We compared the effect of 3 capture methods for white-tailed deer (Odocoileus virginianus) on blood indicators of acute excitement and stress from 1 February to 20 April 1989. Eleven adult females were captured by Clover trap or cannon net between 1 February and 9 April 1989 in northeastern Minnesota [USA]. These deer were fitted with radio-controlled capture collars, and 9 deer were recaptured 7-33 days later. Trapping method affected serum cortisol (P < 0.0001), hemoglobin (Hb) (P < 0.06), and packed cell volume (PCV) (P < 0.07). Cortisol concentrations were lower (P < 0.0001) in capture-collared deer (0.54 ± 0.07 [SE] µg/dL) compared to Clover-trapped (4.37 ± 0.69 µg/dL) and cannon-netted (3.88 ± 0.82 µg/dL) deer. Capture-collared deer were minimally stressed compared to deer captured by traditional methods. Use of the capture collar should permit more accurate interpretation of blood profiles of deer for assessment of condition and general health.
NASA Astrophysics Data System (ADS)
Motta, Danilo A.; Serillo, André; de Matos, Luciana; Yasuoka, Fatima M. M.; Bagnato, Vanderlei S.; Carvalho, Luis A. V.
2014-03-01
Glaucoma is the second leading cause of blindness in the world, and this number tends to increase with the rising life expectancy of the population. Glaucoma refers to eye conditions that damage the optic nerve. This nerve carries visual information from the eye to the brain, so damage to it compromises the patient's visual quality. In the majority of cases the damage to the optic nerve is irreversible and is caused by increased intraocular pressure. One of the main challenges is detecting the disease early, because no symptoms are present in the initial stage; by the time it is detected, it is already at an advanced stage. Currently the evaluation of the optic disc is performed with sophisticated fundus cameras, which are inaccessible to the majority of the Brazilian population. The purpose of this project is to develop a dedicated fundus camera, without fluorescein angiography or a red-free system, to acquire 3D images of the optic disc region. The innovation is a new simplified design of a stereo-optical system that enables 3D image capture and, at the same time, quantitative measurements of the excavation and topography of the optic nerve; something traditional fundus cameras do not do. Dedicated hardware and software are developed for this ophthalmic instrument in order to permit quick capture and printing of high-resolution 3D images and videos of the optic disc region (20° field of view) in mydriatic and non-mydriatic modes.
Brute Force Matching Between Camera Shots and Synthetic Images from Point Clouds
NASA Astrophysics Data System (ADS)
Boerner, R.; Kröhnert, M.
2016-06-01
3D point clouds, acquired by state-of-the-art terrestrial laser scanning (TLS) techniques, provide spatial information with accuracies of up to several millimetres. Unfortunately, common TLS data carries no spectral information about the covered scene. However, matching TLS data with images is important for monoplotting purposes and point cloud colouration. Well-established methods solve this issue by matching close-range images to point cloud data, by fitting optical camera systems on top of laser scanners, or by using ground control points. The approach addressed in this paper aims at matching 2D image and 3D point cloud data from a freely moving camera within an environment covered by a large 3D point cloud, e.g. a 3D city model. The key advantage of free movement benefits augmented reality applications and real-time measurements. Therefore, a so-called real image, captured by a smartphone camera, has to be matched with a so-called synthetic image, generated by projecting the 3D point cloud to a synthetic projection centre whose exterior orientation parameters match those of the real image, assuming an ideal distortion-free camera.
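The synthetic image at the heart of this approach is obtained by projecting the point cloud through an ideal pinhole camera. A simplified sketch of that projection step follows; the intrinsics, image size, and test points are invented, and lens distortion is omitted, in line with the paper's distortion-free assumption:

```python
import numpy as np

def project(points_w, K, R, C, width, height):
    """Project (N, 3) world points into pixel coordinates of an ideal
    pinhole camera with intrinsics K, world-to-camera rotation R and
    projection centre C; keeps points in front of the camera and
    inside the image."""
    p_cam = (points_w - C) @ R.T           # world -> camera coordinates
    p_cam = p_cam[p_cam[:, 2] > 0]         # drop points behind the camera
    uv = p_cam[:, :2] / p_cam[:, 2:3]      # normalised image coordinates
    pix = np.round(uv * np.array([K[0, 0], K[1, 1]])
                   + np.array([K[0, 2], K[1, 2]])).astype(int)
    ok = ((pix[:, 0] >= 0) & (pix[:, 0] < width) &
          (pix[:, 1] >= 0) & (pix[:, 1] < height))
    return pix[ok]

# Hypothetical camera: 500 px focal length, 640x480 image.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
cloud = np.array([[0.0, 0.0, 2.0],     # straight ahead -> image centre
                  [0.0, 0.0, -1.0]])   # behind the camera -> discarded
pixels = project(cloud, K, np.eye(3), np.zeros(3), 640, 480)
```

In the matching scenario the exterior orientation (R, C) is the unknown being refined, and each candidate pose yields a different synthetic image to compare against the smartphone shot.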
Real-time 3D human capture system for mixed-reality art and entertainment.
Nguyen, Ta Huynh Duy; Qui, Tran Cong Thien; Xu, Ke; Cheok, Adrian David; Teo, Sze Lee; Zhou, ZhiYing; Mallawaarachchi, Asitha; Lee, Shang Ping; Liu, Wei; Teo, Hui Siang; Thang, Le Nam; Li, Yu; Kato, Hirokazu
2005-01-01
A real-time system for capturing humans in 3D and placing them into a mixed reality environment is presented in this paper. The subject is captured by nine cameras surrounding her. Looking through a head-mounted display with a front-facing camera pointed at a marker, the user can see the 3D image of this subject overlaid onto a mixed reality scene. The 3D images of the subject viewed from this viewpoint are constructed using a robust and fast shape-from-silhouette algorithm. The paper also presents several techniques to improve quality and speed up the whole system, which runs at around 25 fps using only standard Intel processor-based personal computers. Besides a remote live 3D conferencing and collaboration system, we also describe an application of the system in art and entertainment, named Magic Land, a mixed reality environment where captured human avatars and 3D computer-generated virtual animations can form an interactive story and play with each other. This system demonstrates many technologies in human-computer interaction: mixed reality, tangible interaction, and 3D communication. The result of the user study not only emphasizes the benefits, but also addresses some issues of these technologies.
3D reconstruction based on light field images
NASA Astrophysics Data System (ADS)
Zhu, Dong; Wu, Chunhong; Liu, Yunluo; Fu, Dongmei
2018-04-01
This paper proposes a method of reconstructing a three-dimensional (3D) scene from two light field images captured by a Lytro Illum camera. The work first extracts the sub-aperture images from the light field images and uses the scale-invariant feature transform (SIFT) for feature registration on the selected sub-aperture images. A structure-from-motion (SFM) algorithm is then applied to the registered sub-aperture images to reconstruct the three-dimensional scene, yielding a 3D sparse point cloud. The method shows that 3D reconstruction can be achieved with only two light field captures, rather than the dozen or more captures required by traditional cameras. This can effectively address the time-consuming and laborious nature of 3D reconstruction based on traditional digital cameras, achieving a more rapid, convenient, and accurate reconstruction.
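After SIFT registration, the SFM stage recovers camera geometry and triangulates matched features into the sparse cloud. As a rough sketch of the triangulation core (linear DLT with two known projection matrices; the intrinsics, baseline, and test point below are hypothetical, not from the paper):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.
    P1, P2 are 3x4 projection matrices; x1, x2 are pixel coordinates."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # null vector of A = homogeneous X
    X = Vt[-1]
    return X[:3] / X[3]

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
# Camera 1 at the origin, camera 2 shifted along x (a small baseline).
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

# Project a known 3D point into both views, then recover it.
X_true = np.array([0.2, -0.1, 2.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_est = triangulate(P1, P2, x1, x2)
```

With noise-free correspondences the DLT solution is exact; real SFM pipelines embed this step inside pose estimation and bundle adjustment.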
Ichihashi, Yasuyuki; Oi, Ryutaro; Senoh, Takanori; Yamamoto, Kenji; Kurita, Taiichiro
2012-09-10
We developed a real-time capture and reconstruction system for three-dimensional (3D) live scenes. In previous research, we used integral photography (IP) to capture 3D images and then generated holograms from the IP images to implement a real-time reconstruction system. In this paper, we use a 4K (3,840 × 2,160) camera to capture IP images and 8K (7,680 × 4,320) liquid crystal display (LCD) panels for the reconstruction of holograms. We investigate two methods for enlarging the 4K images that were captured by integral photography to 8K images. One of the methods increases the number of pixels of each elemental image. The other increases the number of elemental images. In addition, we developed a personal computer (PC) cluster system with graphics processing units (GPUs) for the enlargement of IP images and the generation of holograms from the IP images using fast Fourier transform (FFT). We used the Compute Unified Device Architecture (CUDA) as the development environment for the GPUs. The Fast Fourier transform is performed using the CUFFT (CUDA FFT) library. As a result, we developed an integrated system for performing all processing from the capture to the reconstruction of 3D images by using these components and successfully used this system to reconstruct a 3D live scene at 12 frames per second.
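The hologram-generation step above rests on FFT-based wave propagation. As a CPU-side illustration of one standard propagation kernel (the angular-spectrum method using numpy's FFT, standing in for the paper's unspecified GPU/CUFFT computation; the wavelength, pixel pitch, and distance are arbitrary):

```python
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a complex optical field by distance z using the
    FFT-based angular-spectrum method, a common building block when
    computing holograms from captured images."""
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=pitch)
    fy = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z)               # unit-modulus transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Hypothetical parameters: 8 um pixel pitch, 532 nm light, 5 cm distance.
field = np.zeros((256, 256), dtype=complex)
field[128, 128] = 1.0                     # a point source
hologram = angular_spectrum(field, 532e-9, 8e-6, 0.05)
```

Because the transfer function has unit modulus, the propagated field conserves energy; on the GPU this FFT pair is exactly what a CUFFT-based pipeline accelerates.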
Zhu, S; Yang, Y; Khambay, B
2017-03-01
Clinicians are accustomed to viewing conventional two-dimensional (2D) photographs and assume that viewing three-dimensional (3D) images is similar. Facial images captured in 3D are not viewed in true 3D; this may alter clinical judgement. The aim of this study was to evaluate the reliability of using conventional photographs, 3D images, and stereoscopic projected 3D images to rate the severity of the deformity in pre-surgical class III patients. Forty adult patients were recruited. Eight raters assessed facial height, symmetry, and profile using the three different viewing media and a 100-mm visual analogue scale (VAS), and appraised the most informative viewing medium. Inter-rater consistency was above good for all three media. Intra-rater reliability was not significantly different for rating facial height using 2D (P=0.704), symmetry using 3D (P=0.056), and profile using projected 3D (P=0.749). Using projected 3D for rating profile and symmetry resulted in significantly lower median VAS scores than either 3D or 2D images (all P<0.05). For 75% of the raters, stereoscopic 3D projection was the preferred method for rating. The reliability of assessing specific characteristics was dependent on the viewing medium. Clinicians should be aware that the visual information provided when viewing 3D images is not the same as when viewing 2D photographs, especially for facial depth, and this may change the clinical impression. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Building change detection via a combination of CNNs using only RGB aerial imageries
NASA Astrophysics Data System (ADS)
Nemoto, Keisuke; Hamaguchi, Ryuhei; Sato, Masakazu; Fujita, Aito; Imaizumi, Tomoyuki; Hikosaka, Shuhei
2017-10-01
Building change information extracted from remote sensing imagery is important for various applications such as urban management and marketing planning. The goal of this work is to develop a methodology for automatically capturing building changes from remote sensing imagery. Recent studies have addressed this goal by exploiting 3-D information as a proxy for building height. In contrast, because in practice it is expensive or impossible to prepare 3-D information, we do not rely on 3-D data but focus on using only RGB aerial imagery. Instead, we employ deep convolutional neural networks (CNNs) to extract effective features and improve change detection accuracy in RGB remote sensing imagery. We consider two aspects of building change detection: building detection and subsequent change detection. Our proposed methodology was tested on several areas that differ in characteristics such as dominant building type and brightness values. Over all tested areas, the proposed method provides good results for changed objects, with recall values over 75% under a strict overlap requirement of over 50% intersection-over-union (IoU). When the IoU threshold was relaxed to over 10%, the resulting recall values were over 81%. We conclude that the use of CNNs enables accurate detection of building changes without employing 3-D information.
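The recall figures quoted above depend on an IoU overlap test between predicted and ground-truth building regions. A minimal sketch of that evaluation for axis-aligned boxes (the boxes and thresholds below are made up for illustration):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def recall_at_iou(gt_boxes, pred_boxes, threshold):
    """Fraction of ground-truth boxes matched by some prediction with
    IoU above the threshold."""
    hits = sum(1 for g in gt_boxes
               if any(iou(g, p) > threshold for p in pred_boxes))
    return hits / float(len(gt_boxes))

gt = [(0, 0, 10, 10), (20, 20, 30, 30)]
pred = [(1, 1, 10, 10), (40, 40, 50, 50)]
# the first prediction overlaps its ground truth with IoU 0.81;
# the second ground-truth box is missed entirely
r_strict = recall_at_iou(gt, pred, 0.5)
r_loose = recall_at_iou(gt, pred, 0.1)
```

In the paper the evaluation is over detected change objects rather than toy boxes, but the strict (50%) versus relaxed (10%) IoU thresholds play exactly this role.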
View-invariant gait recognition method by three-dimensional convolutional neural network
NASA Astrophysics Data System (ADS)
Xing, Weiwei; Li, Ying; Zhang, Shunli
2018-01-01
Gait, as an important biometric feature, can identify a human at a long distance. View change is one of the most challenging factors for gait recognition. To address cross-view issues in gait recognition, we propose a view-invariant gait recognition method based on a three-dimensional (3-D) convolutional neural network. First, the 3-D convolutional neural network (3DCNN) is introduced to learn view-invariant features, capturing spatial and temporal information simultaneously on normalized silhouette sequences. Second, a network training method based on cross-domain transfer learning is proposed to address the limited number of gait training samples: we choose C3D as the basic model, pretrained on Sports-1M, and then fine-tune it for gait recognition. In the recognition stage, we use the fine-tuned model to extract gait features and use the Euclidean distance to measure the similarity of gait sequences. Extensive experiments carried out on the CASIA-B dataset demonstrate that our method outperforms many other methods.
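The recognition stage described above reduces to nearest-neighbour search over extracted feature vectors. A toy sketch of that Euclidean-distance matching (the three-dimensional vectors here are placeholders for real C3D embeddings, which are much higher-dimensional):

```python
import numpy as np

def nearest_gallery(probe_feat, gallery_feats):
    """Return the index of the gallery gait feature closest to the
    probe under Euclidean distance."""
    d = np.linalg.norm(gallery_feats - probe_feat, axis=1)
    return int(np.argmin(d))

# Hypothetical gallery of three enrolled identities.
gallery = np.array([[0.9, 0.1, 0.0],
                    [0.1, 0.8, 0.1],
                    [0.0, 0.1, 0.9]])
probe = np.array([0.85, 0.15, 0.05])   # closest to identity 0
match = nearest_gallery(probe, gallery)
```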
NASA Astrophysics Data System (ADS)
Willis, Andrew R.; Brink, Kevin M.
2016-06-01
This article describes a new 3D RGBD image feature, referred to as iGRaND, for use in real-time systems that use these sensors for tracking, motion capture, or robotic vision applications. iGRaND features use a novel local reference frame derived from the image gradient and depth normal (hence iGRaND) that is invariant to scale and viewpoint for Lambertian surfaces. Using this reference frame, Euclidean-invariant feature components are computed at keypoints, fusing local geometric shape information with surface appearance information. The performance of the feature for real-time odometry is analyzed, and its computational complexity and accuracy are compared with leading alternative 3D features.
Three-dimensional particle tracking via tunable color-encoded multiplexing.
Duocastella, Martí; Theriault, Christian; Arnold, Craig B
2016-03-01
We present a novel 3D tracking approach capable of locating single particles with nanometric precision over wide axial ranges. Our method uses a fast acousto-optic liquid lens implemented in a bright field microscope to multiplex light based on color into different and selectable focal planes. By separating the red, green, and blue channels from an image captured with a color camera, information from up to three focal planes can be retrieved. Multiplane information from the particle diffraction rings enables precisely locating and tracking individual objects up to an axial range about 5 times larger than conventional single-plane approaches. We apply our method to the 3D visualization of the well-known coffee-stain phenomenon in evaporating water droplets.
Motion capture for human motion measuring by using single camera with triangle markers
NASA Astrophysics Data System (ADS)
Takahashi, Hidenori; Tanaka, Takayuki; Kaneko, Shun'ichi
2005-12-01
This study aims to realize motion capture for measuring 3D human motions using a single camera. Although motion capture using multiple cameras is widely used in sports, medicine, engineering, and other fields, an optical motion capture method using one camera has not been established. In this paper, the authors achieve 3D motion capture using one camera, named Mono-MoCap (MMC), on the basis of two calibration methods and triangle markers whose side lengths are known. The camera calibration yields 3D coordinate transformation parameters and a lens distortion parameter using the modified DLT method. The triangle markers enable calculation of the depth coordinate in the camera coordinate system. Experiments on 3D position measurement using the MMC in a cubic measurement space 2 m on each side showed that the average error in the measured centre of gravity of a triangle marker was less than 2 mm. Compared with conventional motion capture using multiple cameras, the MMC has sufficient accuracy for 3D measurement. Also, by placing a triangle marker on each human joint, the MMC was able to capture walking, standing-up, and bending-and-stretching motions. In addition, a method using a triangle marker together with conventional spherical markers is proposed. Finally, a method to estimate the position of a marker from its measured velocity is proposed in order to improve the accuracy of the MMC.
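The reason a marker of known side length recovers depth from one camera is the pinhole relation between metric size and image size. The sketch below shows only that underlying geometry, not the paper's modified-DLT calibration or distortion model; the focal length and marker size are invented:

```python
def marker_depth(f_pixels, side_m, side_pixels):
    """Pinhole-model depth from a marker side of known metric length:
    the side's image length shrinks inversely with distance, so
    Z = f * L / l, with f in pixels, L in metres, l in pixels."""
    return f_pixels * side_m / side_pixels

# A 5 cm marker side imaged at 25 px through a 1000 px focal length
# lens lies about 2 m from the camera.
z = marker_depth(1000.0, 0.05, 25.0)
```

The full system additionally uses the triangle's three sides and orientation, so it recovers the marker's 3D pose rather than depth alone.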
Matsushima, Kyoji; Sonobe, Noriaki
2018-01-01
Digitized holography techniques are used to reconstruct three-dimensional (3D) images of physical objects using large-scale computer-generated holograms (CGHs). The object field is captured at three wavelengths over a wide area at high densities. Synthetic aperture techniques using single sensors are used for image capture in phase-shifting digital holography. The captured object field is incorporated into a virtual 3D scene that includes nonphysical objects, e.g., polygon-meshed CG models. The synthetic object field is optically reconstructed as a large-scale full-color CGH using red-green-blue color filters. The CGH has a wide full-parallax viewing zone and reconstructs a deep 3D scene with natural motion parallax.
TLS for generating multi-LOD of 3D building model
NASA Astrophysics Data System (ADS)
Akmalia, R.; Setan, H.; Majid, Z.; Suwardhi, D.; Chong, A.
2014-02-01
Terrestrial laser scanners (TLS) have been widely used to capture three-dimensional (3D) objects for various applications. Developments in 3D modelling have also led people to visualize the environment in 3D. Visualizing objects of a city environment in 3D can be useful for many applications; however, different applications require different kinds of 3D models. Since buildings are important objects, CityGML defines a standard for 3D building models at four levels of detail (LOD). In this research, the advantages of TLS for capturing buildings and the modelling process for the point cloud are explored. TLS is used to capture all the building details in order to generate multiple LODs, a task that in previous works usually involves the integration of several sensors. In this research, however, the point cloud from TLS is processed to generate the LOD3 model, from which LOD2 and LOD1 are then generalized. The result of this research is a guided process for generating the multi-LOD 3D building model starting from LOD3 using TLS. Lastly, the visualization of the multi-LOD model is also shown.
Geospatial Data Processing for 3d City Model Generation, Management and Visualization
NASA Astrophysics Data System (ADS)
Toschi, I.; Nocerino, E.; Remondino, F.; Revolti, A.; Soria, G.; Piffer, S.
2017-05-01
Recent developments of 3D technologies and tools have increased availability and relevance of 3D data (from 3D points to complete city models) in the geospatial and geo-information domains. Nevertheless, the potential of 3D data is still underexploited and mainly confined to visualization purposes. Therefore, the major challenge today is to create automatic procedures that make best use of available technologies and data for the benefits and needs of public administrations (PA) and national mapping agencies (NMA) involved in "smart city" applications. The paper aims to demonstrate a step forward in this process by presenting the results of the SENECA project (Smart and SustaiNablE City from Above - http://seneca.fbk.eu). State-of-the-art processing solutions are investigated in order to (i) efficiently exploit the photogrammetric workflow (aerial triangulation and dense image matching), (ii) derive topologically and geometrically accurate 3D geo-objects (i.e. building models) at various levels of detail and (iii) link geometries with non-spatial information within a 3D geo-database management system accessible via web-based client. The developed methodology is tested on two case studies, i.e. the cities of Trento (Italy) and Graz (Austria). Both spatial (i.e. nadir and oblique imagery) and non-spatial (i.e. cadastral information and building energy consumptions) data are collected and used as input for the project workflow, starting from 3D geometry capture and modelling in urban scenarios to geometry enrichment and management within a dedicated webGIS platform.
Three-dimensional photography for the evaluation of facial profiles in obstructive sleep apnoea.
Lin, Shih-Wei; Sutherland, Kate; Liao, Yu-Fang; Cistulli, Peter A; Chuang, Li-Pang; Chou, Yu-Ting; Chang, Chih-Hao; Lee, Chung-Shu; Li, Li-Fu; Chen, Ning-Hung
2018-06-01
Craniofacial structure is an important determinant of obstructive sleep apnoea (OSA) syndrome risk. Three-dimensional stereo-photogrammetry (3dMD) is a novel technique which allows quantification of the craniofacial profile. This study compares the facial images of OSA patients captured by 3dMD to three-dimensional computed tomography (3-D CT) and two-dimensional (2-D) digital photogrammetry. Measurements were correlated with indices of OSA severity. Thirty-eight patients diagnosed with OSA were included, and digital photogrammetry, 3dMD and 3-D CT were performed. Distances, areas, angles and volumes from the images captured by three methods were analysed. Almost all measurements captured by 3dMD showed strong agreement with 3-D CT measurements. Results from 2-D digital photogrammetry showed poor agreement with 3-D CT. Mandibular width, neck perimeter size and maxillary volume measurements correlated well with the severity of OSA using all three imaging methods. Mandibular length, facial width, binocular width, neck width, cranial base triangle area, cranial base area 1 and middle cranial fossa volume correlated well with OSA severity using 3dMD and 3-D CT, but not with 2-D digital photogrammetry. 3dMD provided accurate craniofacial measurements of OSA patients, which were highly concordant with those obtained by CT, while avoiding the radiation associated with CT. © 2018 Asian Pacific Society of Respirology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pichierri, Fabio, E-mail: fabio@che.tohoku.ac.jp
Using computational quantum chemistry methods we design novel 2D and 3D soft materials made of cucurbituril macrocycles covalently connected with each other via rigid linkers. Such covalent cucurbituril networks might be useful for the capture of radioactive Cs-137 (present as Cs{sup +}) in the contaminated environment.
A new 4-dimensional imaging system for jaw tracking.
Lauren, Mark
2014-01-01
A non-invasive 4D imaging system that produces high resolution time-based 3D surface data has been developed to capture jaw motion. Fluorescent microspheres are brushed onto both tooth and soft-tissue areas of the upper and lower arches to be imaged. An extraoral hand-held imaging device, operated about 12 cm from the mouth, captures a time-based set of perspective image triplets of the patch areas. Each triplet, containing both upper and lower arch data, is converted to a high-resolution 3D point mesh using photogrammetry, providing the instantaneous relative jaw position. Eight 3D positions per second are captured. Using one of the 3D frames as a reference, a 4D model can be constructed to describe the incremental free body motion of the mandible. The surface data produced by this system can be registered to conventional 3D models of the dentition, allowing them to be animated. Applications include integration into prosthetic CAD and CBCT data.
Capturing PM2.5 Emissions from 3D Printing via Nanofiber-based Air Filter.
Rao, Chengchen; Gu, Fu; Zhao, Peng; Sharmin, Nusrat; Gu, Haibing; Fu, Jianzhong
2017-09-04
This study investigated the feasibility of using polycaprolactone (PCL) nanofiber-based air filters to capture PM2.5 particles emitted from fused deposition modeling (FDM) 3D printing. Generation and aggregation of the emitted particles were investigated under different testing environments. The results show that: (1) the PCL nanofiber membranes are capable of capturing particle emissions from 3D printing; (2) relative humidity plays a significant role in aggregation of the captured particles; (3) generation and aggregation of particles from 3D printing can be divided into four stages, in which the PM2.5 concentration and particle size increase slowly (first stage), small particles are continuously generated and their concentration increases rapidly (second stage), small particles aggregate into larger particles and the growth in concentration slows down (third stage), and the PM2.5 concentration and particle aggregate sizes increase rapidly (fourth stage); and (4) ultrafine particles, denoted the "building unit", act as the fundamental components of the aggregated particles. This work has important implications for controlling particle emissions from 3D printing, which would facilitate its wider application. In addition, this study provides a potential application scenario for nanofiber-based air filters beyond laboratory investigation.
A reflection TIE system for 3D inspection of wafer structures
NASA Astrophysics Data System (ADS)
Yan, Yizhen; Qu, Weijuan; Yan, Lei; Wang, Zhaomin; Zhao, Hongying
2017-10-01
A reflection TIE system consisting of a reflecting microscope and a 4f relay system is presented in this paper, with which the transport of intensity equation (TIE) is applied to reconstruct the three-dimensional (3D) profile of opaque micro-objects, such as wafer structures, for 3D inspection. As the shape of an object affects the phase of the reflected wave, 3D information about the object can be acquired from the phases at multiple refocusing planes. By electronically controlled refocusing, multi-focal images can be captured and used to solve the TIE, yielding the phase and depth of the object. In order to validate the accuracy and efficiency of the proposed system, the phase and depth values of several samples are calculated, and experimental results are presented to demonstrate the performance of the system.
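The TIE links the axial derivative of intensity to the phase, and for uniform in-focus intensity it can be inverted with a Fourier-space inverse Laplacian. The sketch below demonstrates that simplified uniform-intensity case on synthetic data; the sign convention, grid size, and optical parameters are assumptions, and real systems estimate the derivative from the captured multi-focal images:

```python
import numpy as np

def tie_phase(dIdz, I0, wavelength, pitch):
    """Recover phase from an axial intensity derivative via the
    transport of intensity equation, assuming a uniform in-focus
    intensity I0; solved with an FFT-based inverse Laplacian."""
    k = 2 * np.pi / wavelength
    n, m = dIdz.shape
    fx = np.fft.fftfreq(m, d=pitch)
    fy = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    denom = 4 * np.pi ** 2 * (FX ** 2 + FY ** 2)
    denom[0, 0] = 1.0                      # avoid division by zero at DC
    phi_hat = (k / I0) * np.fft.fft2(dIdz) / denom
    phi_hat[0, 0] = 0.0                    # the mean phase is unrecoverable
    return np.real(np.fft.ifft2(phi_hat))

# Synthetic demo: a sinusoidal phase and the axial derivative it would
# produce under this model (I0 = 1, 633 nm light, 1 um pixels).
n, pitch, wavelength, I0 = 64, 1e-6, 633e-9, 1.0
k = 2 * np.pi / wavelength
f0 = 4 / (n * pitch)                       # 4 cycles across the field
x = np.arange(n) * pitch
phi_true = np.tile(np.cos(2 * np.pi * f0 * x), (n, 1))
dIdz = (I0 / k) * (2 * np.pi * f0) ** 2 * phi_true
phi_rec = tie_phase(dIdz, I0, wavelength, pitch)
```

With the derivative generated from the same model, the recovery is exact up to the irrecoverable mean phase; the paper's system handles the general non-uniform-intensity case.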
NASA Astrophysics Data System (ADS)
Manabe, Yoshitsugu; Imura, Masataka; Tsuchiya, Masanobu; Yasumuro, Yoshihiro; Chihara, Kunihiro
2003-01-01
Wearable 3D measurement makes it possible to acquire 3D information about an object or an environment using a wearable computer. In Japan, mobile phones can already transmit voice, sound and pictures, and it will soon become easy to capture and send short movies with them. At the same time, computers are becoming compact and powerful, and can easily connect to the Internet over wireless LAN. In the near future, we will be able to use wearable computers anytime and anywhere, so three-dimensional data measured by a wearable computer could become the next new data type to transmit. This paper proposes a method and system for measuring three-dimensional data of an object with a wearable computer. The method uses slit-light projection for 3D measurement and exploits the user's motion instead of a mechanical scanning system.
Real-time three-dimensional ultrasound-assisted axillary plexus block defines soft tissue planes.
Clendenen, Steven R; Riutort, Kevin; Ladlie, Beth L; Robards, Christopher; Franco, Carlo D; Greengrass, Roy A
2009-04-01
Two-dimensional (2D) ultrasound is commonly used for regional block of the axillary brachial plexus. In this technical case report, we described a real-time three-dimensional (3D) ultrasound-guided axillary block. The difference between 2D and 3D ultrasound is similar to the difference between plain radiograph and computed tomography. Unlike 2D ultrasound that captures a planar image, 3D ultrasound technology acquires a 3D volume of information that enables multiple planes of view by manipulating the image without movement of the ultrasound probe. Observation of the brachial plexus in cross-section demonstrated distinct linear hyperechoic tissue structures (loose connective tissue) that initially inhibited the flow of the local anesthesia. After completion of the injection, we were able to visualize the influence of arterial pulsation on the spread of the local anesthesia. Possible advantages of this novel technology over current 2D methods are wider image volume and the capability to manipulate the planes of the image without moving the probe.
NASA Astrophysics Data System (ADS)
Capocchiano, F.; Ravanelli, R.; Crespi, M.
2017-11-01
Within the construction sector, Building Information Models (BIMs) are more and more used thanks to the several benefits that they offer in the design of new buildings and the management of existing ones. Frequently, however, BIMs are not available for already built constructions, but, at the same time, range camera technology nowadays provides a cheap, intuitive and effective tool for automatically collecting the 3D geometry of indoor environments. It is thus essential to find new strategies able to perform the first step of the scan-to-BIM process, by extracting the geometrical information contained in the 3D models that are so easily collected through range cameras. In this work, a new algorithm to extract planimetries from the 3D models of rooms acquired by means of a range camera is therefore presented. The algorithm was tested on two rooms, characterized by different shapes and dimensions, whose 3D models were captured with the Occipital Structure Sensor. The preliminary results are promising: the developed algorithm is able to effectively model the 2D shape of the investigated rooms, with an accuracy level in the range of 5-10 cm. It can potentially be used by non-expert users in the first step of BIM generation, when the building geometry is reconstructed, for collecting crowdsourced indoor information in the frame of BIM Volunteered Geographic Information (VGI) generation.
A real-time 3D end-to-end augmented reality system (and its representation transformations)
NASA Astrophysics Data System (ADS)
Tytgat, Donny; Aerts, Maarten; De Busser, Jeroen; Lievens, Sammy; Rondao Alface, Patrice; Macq, Jean-Francois
2016-09-01
The new generation of HMDs coming to the market is expected to enable many new applications that allow free viewpoint experiences with captured video objects. Current applications usually rely on 3D content that is manually created or captured in an offline manner. In contrast, this paper focuses on augmented reality applications that use live captured 3D objects while maintaining free viewpoint interaction. We present a system that allows live dynamic 3D objects (e.g. a person who is talking) to be captured in real-time. Real-time performance is achieved by traversing a number of representation formats and exploiting their specific benefits. For instance, depth images are maintained for fast neighborhood retrieval and occlusion determination, while implicit surfaces are used to facilitate multi-source aggregation for both geometry and texture. The result is a 3D reconstruction system that outputs multi-textured triangle meshes at real-time rates. An end-to-end system is presented that captures and reconstructs live 3D data and allows for this data to be used on a networked (AR) device. For allocating the different functional blocks onto the available physical devices, a number of alternatives are proposed considering the available computational power and bandwidth for each of the components. As we will show, the representation format can play an important role in this functional allocation and allows for a flexible system that can support a highly heterogeneous infrastructure.
Affective SSVEP BCI to effectively control 3D objects by using a prism array-based display
NASA Astrophysics Data System (ADS)
Mun, Sungchul; Park, Min-Chul
2014-06-01
3D objects with depth information can provide many benefits to users in education, surgery, and interaction. In particular, many studies have been done to enhance the sense of reality in 3D interaction. Viewing and controlling stereoscopic 3D objects with crossed or uncrossed disparities, however, can cause visual fatigue due to the vergence-accommodation conflict generally accepted in 3D research fields. In order to avoid the vergence-accommodation mismatch and provide a strong sense of presence to users, we apply a prism array-based display to present 3D objects. Emotional pictures were used as visual stimuli in control panels to increase the information transfer rate and reduce false positives in controlling 3D objects. Selective attention, involuntarily motivated by the affective mechanism, can enhance steady-state visually evoked potential (SSVEP) amplitude and lead to increased interaction efficiency. More attentional resources are allocated to affective pictures with high valence and arousal levels than to normal visual stimuli such as white-and-black oscillating squares and checkerboards. Among representative BCI control components (i.e., event-related potentials (ERP), event-related (de)synchronization (ERD/ERS), and SSVEP), SSVEP-based BCI was chosen for the following reasons: it offers high information transfer rates, users can learn to control the BCI system within a few minutes, and only a few electrodes are required to obtain brainwave signals reliable enough to capture users' intention. The proposed BCI methods are expected to enhance the sense of reality in 3D space without causing critical visual fatigue. In addition, people who are very susceptible to (auto)stereoscopic 3D may be able to use the affective BCI.
Mobli, Mehdi; Stern, Alan S.; Bermel, Wolfgang; King, Glenn F.; Hoch, Jeffrey C.
2010-01-01
One of the stiffest challenges in structural studies of proteins using NMR is the assignment of sidechain resonances. Typically, a panel of lengthy 3D experiments is acquired in order to establish connectivities and resolve ambiguities due to overlap. We demonstrate that these experiments can be replaced by a single 4D experiment that is time-efficient, yields excellent resolution, and captures unique carbon-proton connectivity information. The approach is made practical by the use of non-uniform sampling in the three indirect time dimensions and maximum entropy reconstruction of the corresponding 3D frequency spectrum. This 4D method will facilitate automated resonance assignment procedures and it should be particularly beneficial for increasing throughput in NMR-based structural genomics initiatives. PMID:20299257
Research and Technology Development for Construction of 3d Video Scenes
NASA Astrophysics Data System (ADS)
Khlebnikova, Tatyana A.
2016-06-01
For the last two decades, surface information in the form of conventional digital and analogue topographic maps has been supplemented by new digital geospatial products, also known as 3D models of real objects. It is shown that currently there are no defined standards for 3D scene construction technologies that could be used by Russian surveying and cartographic enterprises. The requirements for source data, their capture, and their transfer to create 3D scenes have not been defined yet. Accuracy issues for 3D video scenes used for measuring purposes are hardly ever addressed in publications. The practicability of developing, researching, and implementing a technology for the construction of 3D video scenes is substantiated by the capability of 3D video scenes to expand the field of data analysis application for environmental monitoring, urban planning, and managerial decision problems. A technology for the construction of 3D video scenes with regard to specified metric requirements is offered. Techniques and methodological background are recommended for this technology, which is used to construct 3D video scenes based on DTMs created from satellite and aerial survey data. The results of accuracy estimation of 3D video scenes are presented.
Restoring the spatial resolution of refocus images on 4D light field
NASA Astrophysics Data System (ADS)
Lim, JaeGuyn; Park, ByungKwan; Kang, JooYoung; Lee, SeongDeok
2010-01-01
This paper presents a method for generating a refocus image with restored spatial resolution on a plenoptic camera, which, unlike a traditional camera, allows the depth of field to be controlled after a single image is captured. It is generally known that such a camera captures the 4D light field (angular and spatial information of light) within a limited 2D sensor, which reduces the 2D spatial resolution because of the inevitable 2D angular data. That is why a refocus image has a low spatial resolution compared with the 2D sensor. However, it has recently been shown that the angular data contain sub-pixel spatial information, so that the spatial resolution of the 4D light field can be increased. We exploit this fact to improve the spatial resolution of a refocus image. We have experimentally verified that this spatial information differs according to the depth of objects from the camera. So, for selected refocused regions (corresponding depths), we use the corresponding pre-estimated sub-pixel spatial information to reconstruct the spatial resolution of those regions, while other regions remain out of focus. Our experimental results show the effect of the proposed method compared to the existing method.
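The baseline operation such restoration builds on is synthetic refocusing of the 4D light field: sub-aperture images are shifted in proportion to their angular coordinate and averaged. A standard shift-and-add sketch (not the paper's resolution-restoration method; the integer-shift simplification is ours):

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add synthetic refocus of a 4D light field L[u, v, y, x].
    alpha selects the virtual focal plane relative to the captured one;
    shifts are rounded to whole pixels for simplicity."""
    U, V, Y, X = lightfield.shape
    out = np.zeros((Y, X))
    for u in range(U):
        for v in range(V):
            # shift grows with distance from the central sub-aperture
            du = int(round((u - U // 2) * (1 - 1 / alpha)))
            dv = int(round((v - V // 2) * (1 - 1 / alpha)))
            out += np.roll(lightfield[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)
```

Averaging over angular samples is also what costs spatial resolution: each refocused pixel pools many sensor pixels, which is the loss the paper's sub-pixel method recovers.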
Solis-Ibarra, D.; Smith, I. C.
2015-01-01
Reaction with halogen vapor allows us to post-synthetically exchange halides in both three- (3D) and two-dimensional (2D) organic–inorganic metal-halide perovskites. Films of 3D Pb–I perovskites cleanly convert to films of Pb–Br or Pb–Cl perovskites upon exposure to Br2 or Cl2 gas, respectively. This gas–solid reaction provides a simple method to produce the high-quality Pb–Br or Pb–Cl perovskite films required for optoelectronic applications. Reactivity with halogens can be extended to the organic layers in 2D metal-halide perovskites. Here, terminal alkene groups placed between the inorganic layers can capture Br2 gas through chemisorption to form dibromoalkanes. This reaction's selectivity for Br2 over I2 allows us to scrub Br2 to obtain high-purity I2 gas streams. We also observe unusual halogen transfer between the inorganic and organic layers within a single perovskite structure. Remarkably, the perovskite's crystallinity is retained during these massive structural rearrangements. PMID:29218171
Grayscale imbalance correction in real-time phase measuring profilometry
NASA Astrophysics Data System (ADS)
Zhu, Lin; Cao, Yiping; He, Dawu; Chen, Cheng
2016-10-01
Grayscale imbalance correction in real-time phase measuring profilometry (RPMP) is proposed. In the RPMP, sufficient information is obtained to reconstruct the 3D shape of the measured object in 1/24 of a second. Only one color fringe pattern, whose R, G and B channels are coded as three sinusoidal phase-shifting gratings with an equivalent shifting phase of 2π/3, is sent to a flash memory on a specialized digital light projector (SDLP). The SDLP then projects the fringe patterns in the R, G and B channels sequentially onto the measured object in 1/72 of a second, while a monochrome CCD camera captures the corresponding deformed patterns synchronously with the SDLP. Because the deformed patterns from the three color channels are captured at different times, color crosstalk is avoided completely. But because of the monochrome CCD camera's different spectral sensitivity to the R, G and B colors, there will be a grayscale imbalance among the deformed patterns captured in the R, G and B channels, which may increase measuring errors or even cause the 3D reconstruction to fail. So a new grayscale imbalance correction method based on the least-squares method is developed. The experimental results verify the feasibility of the proposed method.
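Once the three channel images are balanced, the wrapped phase follows from the standard three-step phase-shifting formula. A minimal sketch of that step only (the paper's least-squares grayscale correction itself is not reproduced; the phase-shift sign convention below is an assumption):

```python
import numpy as np

def three_step_phase(I1, I2, I3):
    """Wrapped phase from three fringe patterns shifted by 2*pi/3.
    Convention assumed here: I1, I2, I3 carry phase offsets of
    -2*pi/3, 0, +2*pi/3 respectively, so that
    phi = arctan2(sqrt(3)*(I1 - I3), 2*I2 - I1 - I3)."""
    return np.arctan2(np.sqrt(3) * (I1 - I3), 2 * I2 - I1 - I3)
```

The formula cancels both the background intensity a and the modulation b of I_k = a + b·cos(φ + δ_k), which is exactly why an uncorrected per-channel gain imbalance (different effective a and b per channel) corrupts the recovered phase.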
Robust object tracking techniques for vision-based 3D motion analysis applications
NASA Astrophysics Data System (ADS)
Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.
2016-04-01
Automated and accurate spatial motion capture of an object is necessary for a wide variety of applications in industry and science, virtual reality and film, medicine and sports. For most applications, the reliability and accuracy of the obtained data, as well as convenience for the user, are the main characteristics defining the quality of a motion capture system. Among the existing systems for 3D data acquisition, based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems have a set of advantages such as high acquisition speed and potential for high accuracy and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capture process. To provide high accuracy of the obtained spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes from 2 to 4 technical vision cameras for capturing video sequences of object motion. The original camera calibration and external orientation procedures provide the basis for high accuracy of 3D measurements. A set of algorithms, both for detecting, identifying and tracking similar targets and for marker-less object motion capture, has been developed and tested. The results of the algorithms' evaluation show high robustness and reliability for various motion analysis tasks in technical and biomechanics applications.
Ear recognition from one sample per person.
Chen, Long; Mu, Zhichun; Zhang, Baoqing; Zhang, Yi
2015-01-01
Biometrics has the advantages of efficiency and convenience in identity authentication. As one of the most promising biometric-based methods, ear recognition has received broad attention and research. Previous studies have achieved remarkable performance with multiple samples per person (MSPP) in the gallery. However, most conventional methods are insufficient when there is only one sample per person (OSPP) available in the gallery. To solve the OSPP problem by maximizing the use of a single sample, this paper proposes a hybrid multi-keypoint descriptor sparse representation-based classification (MKD-SRC) ear recognition approach based on 2D and 3D information. Because most 3D sensors capture 3D data alongside the corresponding 2D data, it is sensible to use both types of information. First, the ear region is extracted from the profile. Second, keypoints are detected and described for both the 2D texture image and the 3D range image. Then, the hybrid MKD-SRC algorithm is used to complete the recognition with only OSPP in the gallery. Experimental results on a benchmark dataset have demonstrated the feasibility and effectiveness of the proposed method in resolving the OSPP problem. A rank-one recognition rate of 96.4% is achieved for a gallery of 415 subjects, and the computation time is satisfactory compared to conventional methods.
Komives, A; Sint, A K; Bowers, M; Snow, M
2005-01-01
A measurement of the parity-violating gamma asymmetry in n-D capture would yield information on N-N parity violation independent of the n-p system. Since cold neutrons will depolarize in a liquid deuterium target in which the scattering cross section is much larger than the absorption cross section, it will be necessary to quantify the loss of polarization before capture. One way to do this is to use the large circular polarization of the gamma from n-D capture and analyze the circular polarization of the gamma in a gamma polarimeter. We describe the design of this polarimeter.
Orthogonal-blendshape-based editing system for facial motion capture data.
Li, Qing; Deng, Zhigang
2008-01-01
The authors present a novel data-driven 3D facial motion capture data editing system using automated construction of an orthogonal blendshape face model and constrained weight propagation, aiming to bridge the popular facial motion capture technique and blendshape approach. In this work, a 3D facial-motion-capture-editing problem is transformed to a blendshape-animation-editing problem. Given a collected facial motion capture data set, we construct a truncated PCA space spanned by the greatest retained eigenvectors and a corresponding blendshape face model for each anatomical region of the human face. As such, modifying blendshape weights (PCA coefficients) is equivalent to editing their corresponding motion capture sequence. In addition, a constrained weight propagation technique allows animators to balance automation and flexible controls.
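The PCA construction at the heart of this editing model can be sketched as follows: a truncated basis is built from the capture data, and editing weights then means editing the reprojected motion. This is a minimal, single-region illustration under our own naming; the paper builds one such basis per anatomical face region and adds constrained weight propagation, which is omitted here:

```python
import numpy as np

def build_blendshape_basis(frames, k):
    """Truncated PCA basis from motion-capture frames (each row is a
    flattened frame of marker positions). The k retained eigenvectors
    act as orthogonal 'blendshapes'; the projections are the editable
    weights (PCA coefficients)."""
    mean = frames.mean(axis=0)
    U, S, Vt = np.linalg.svd(frames - mean, full_matrices=False)
    basis = Vt[:k]                        # k orthogonal blendshapes
    weights = (frames - mean) @ basis.T   # per-frame weights
    return mean, basis, weights

def reconstruct(mean, basis, weights):
    """Map (possibly edited) weights back to a motion sequence."""
    return mean + weights @ basis
```

With k equal to the data rank the round trip is lossless; smaller k trades fidelity for a compact, editable parameterization.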
Geil, Mark D
2007-01-01
Computer-aided design (CAD) and computer-aided manufacturing systems have been adapted for specific use in prosthetics, providing practitioners with a means to digitally capture the shape of a patient's limb, modify the socket model using software, and automatically manufacture either a positive model to be used in the fabrication of a socket or the socket itself. The digital shape captured is a three-dimensional (3-D) model from which standard anthropometric measures can be easily obtained. This study recorded six common anthropometric dimensions from CAD shape files of three foam positive models of the residual limbs of persons with transtibial amputations. Two systems were used to obtain 3-D models of the residual limb, a noncontact optical system and a contact-based electromagnetic field system, and both experienced practitioners and prosthetics students conducted measurements. Measurements were consistent; the mean range (difference of maximum and minimum) across all measurements was 0.96 cm. Both systems provided similar results, and both groups used the systems consistently. Students were slightly more consistent than practitioners but not to a clinically significant degree. Results also compared favorably with traditional measurement, with differences versus hand measurements about 5 mm. These results suggest the routine use of digital shape capture for collection of patient volume information.
Colloidal Particles at Fluid Interfaces and the Interface of Colloidal Fluids
NASA Astrophysics Data System (ADS)
McGorty, Ryan
Holographic microscopy is a unifying theme in the different projects discussed in this thesis. The technique allows one to observe microscopic objects, like colloids and droplets, in a three-dimensional (3D) volume. Unlike scanning 3D optical techniques, holography captures a sample's 3D information in a single image: the hologram. Therefore, one can capture 3D information at video frame rates. The price for such speed is paid in computation time: the 3D information must be extracted from the image by methods such as reconstruction or fitting the hologram to scattering calculations. Using holography, we observe a single colloidal particle approach, penetrate and then slowly equilibrate at an oil-water interface. Because the particle moves along the optical axis (z-axis) and perpendicular to the interface, holography is used to determine its position. We are able to locate the particle's z-position to within a few nanometers with a time resolution below a millisecond. We find that the capillary force pulling the particle into the interface is not balanced by a hydrodynamic force. Rather, a larger-than-viscous dissipation associated with the three-phase contact line slipping over the particle's surface results in equilibration on time scales orders of magnitude longer than the minute-long time scales over which our setup allows us to observe. A separate project discussed here also examines colloidal particles and fluid-fluid interfaces, but the fluids involved are composed of colloids. With a colloid-polymer water-based mixture we study the phase separation of the colloid-rich (or liquid) and colloid-poor (or gas) regions. In comparison to the oil-water interface in the previously mentioned project, the interface between the colloidal liquid and gas phases has a surface tension nearly six orders of magnitude smaller, so interfacial fluctuations are observable under microscopy.
We also use holographic microscopy to study this system but not to track particles with great time and spatial resolution. Rather, holography allows us to observe nucleation of the liquid phase occurring throughout our sample volume.
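The numerical reconstruction that lets a single hologram yield a 3D volume is commonly implemented by angular-spectrum propagation: the recorded field is refocused to any plane z in Fourier space. A minimal sketch under idealized on-axis, scalar-field assumptions (not the thesis's scattering-fit tracking code):

```python
import numpy as np

def angular_spectrum_propagate(field, z, wavelength, pixel_size):
    """Numerically refocus a recorded complex field to distance z by
    multiplying its angular spectrum with the free-space transfer
    function; evanescent components are suppressed."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)   # transfer function, propagating waves only
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Scanning z over a stack of reconstructions is what turns one 2D image into volumetric information, at the computational price mentioned above.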
Spectral-spatial classification of hyperspectral image using three-dimensional convolution network
NASA Astrophysics Data System (ADS)
Liu, Bing; Yu, Xuchu; Zhang, Pengqiang; Tan, Xiong; Wang, Ruirui; Zhi, Lu
2018-01-01
Recently, hyperspectral image (HSI) classification has become a focus of research. However, the complex structure of an HSI makes feature extraction difficult to achieve. Most current methods build classifiers based on complex handcrafted features computed from the raw inputs. The design of an improved 3-D convolutional neural network (3D-CNN) model for HSI classification is described. This model extracts features from both the spectral and spatial dimensions through the application of 3-D convolutions, thereby capturing the important discrimination information encoded in multiple adjacent bands. The designed model views the HSI cube data altogether without relying on any pre- or postprocessing. In addition, the model is trained in an end-to-end fashion without any handcrafted features. The designed model was applied to three widely used HSI datasets. The experimental results demonstrate that the 3D-CNN-based method outperforms conventional methods even with limited labeled training samples.
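The core operation of such a model is a 3-D convolution sliding jointly over the spectral and spatial axes of the HSI cube, which is how adjacent-band structure is captured. An educational pure-NumPy sketch of that single operation (a cross-correlation without padding, as in CNN practice; not the trained 3D-CNN itself):

```python
import numpy as np

def conv3d(volume, kernel):
    """Valid-mode 3D cross-correlation of a hyperspectral cube
    volume[bands, rows, cols] with a small kernel. Each output value
    pools a local spectral-spatial neighborhood, the source of the
    joint discrimination information described above."""
    d, h, w = kernel.shape
    D, H, W = volume.shape
    out = np.zeros((D - d + 1, H - h + 1, W - w + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                out[z, y, x] = np.sum(volume[z:z + d, y:y + h, x:x + w] * kernel)
    return out
```

A 2-D convolution would slide only over rows and columns, treating bands as independent channels; the extra kernel depth d is what lets the 3D-CNN learn inter-band features directly.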
Low-cost structured-light based 3D capture system design
NASA Astrophysics Data System (ADS)
Dong, Jing; Bengtson, Kurt R.; Robinson, Barrett F.; Allebach, Jan P.
2014-03-01
Most of the 3D capture products currently on the market are high-end and pricey. They are not targeted for consumers, but rather for research, medical, or industrial usage. Very few aim to provide a solution for home and small business applications. Our goal is to fill in this gap by only using low-cost components to build a 3D capture system that can satisfy the needs of this market segment. In this paper, we present a low-cost 3D capture system based on the structured-light method. The system is built around the HP TopShot LaserJet Pro M275. For our capture device, we use the 8.0 Mpixel camera that is part of the M275. We augment this hardware with two 3M MPro 150 VGA (640 × 480) pocket projectors. We also describe an analytical approach to predicting the achievable resolution of the reconstructed 3D object based on differentials and small signal theory, and an experimental procedure for validating that the system under test meets the specifications for reconstructed object resolution that are predicted by our analytical model. By comparing our experimental measurements from the camera-projector system with the simulation results based on the model for this system, we conclude that our prototype system has been correctly configured and calibrated. We also conclude that with the analytical models, we have an effective means for specifying system parameters to achieve a given target resolution for the reconstructed object.
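The differentials-based resolution analysis mentioned above can be illustrated with the simplest triangulation model: differentiating the depth relation z = f·b/d with respect to disparity d gives the small-signal estimate dz ≈ z²/(f·b)·Δd. This toy formula and its parameter values are our own illustration, not the paper's full camera-projector model:

```python
def depth_resolution(z, baseline, focal_px, disparity_step=1.0):
    """Small-signal depth resolution of a triangulation system:
    z        depth to the object (same length unit as baseline)
    baseline camera-projector separation
    focal_px focal length expressed in pixels
    disparity_step  smallest resolvable disparity change, in pixels.
    Returns dz = z**2 / (focal_px * baseline) * disparity_step."""
    return (z ** 2) / (focal_px * baseline) * disparity_step
```

The quadratic growth with z shows why a low-cost short-baseline system must keep the working distance small to reach a target reconstruction resolution.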
Allenby, Mark C; Misener, Ruth; Panoskaltsis, Nicki; Mantalaris, Athanasios
2017-02-01
Three-dimensional (3D) imaging techniques provide spatial insight into environmental and cellular interactions and are implemented in various fields, including tissue engineering, but have been restricted by limited quantification tools that misrepresent or underutilize the cellular phenomena captured. This study develops image postprocessing algorithms pairing complex Euclidean metrics with Monte Carlo simulations to quantitatively assess cell and microenvironment spatial distributions while utilizing, for the first time, the entire 3D image captured. Although current methods only analyze a central fraction of presented confocal microscopy images, the proposed algorithms can utilize 210% more cells to calculate 3D spatial distributions that can span a 23-fold longer distance. These algorithms seek to leverage the high sample cost of 3D tissue imaging techniques by extracting maximal quantitative data throughout the captured image.
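Pairing a Euclidean metric with Monte Carlo simulation, as described above, can be sketched in a few lines: compare an observed nearest-neighbour statistic against the same statistic for uniform random placements in the imaged volume. The function names and procedure here are our own illustration, not the authors' algorithms:

```python
import numpy as np

def mean_nn_distance(points):
    """Mean Euclidean nearest-neighbour distance over a 3D point cloud."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # ignore self-distances
    return d.min(axis=1).mean()

def clustering_fraction(points, bounds, trials=200, seed=0):
    """Monte Carlo comparison: fraction of uniform random placements in
    a box of size `bounds` whose mean NN distance is smaller than the
    observed one. Values near 0 suggest clustering, near 1 dispersion."""
    rng = np.random.default_rng(seed)
    obs = mean_nn_distance(points)
    count = 0
    for _ in range(trials):
        sim = rng.random(points.shape) * bounds
        if mean_nn_distance(sim) < obs:
            count += 1
    return count / trials
```

Because the comparison uses the full captured volume rather than a central crop, every imaged cell contributes to the statistic, which is the point the abstract emphasizes.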
Motion-Capture-Enabled Software for Gestural Control of 3D Models
NASA Technical Reports Server (NTRS)
Norris, Jeffrey S.; Luo, Victor; Crockett, Thomas M.; Shams, Khawaja S.; Powell, Mark W.; Valderrama, Anthony
2012-01-01
Current state-of-the-art systems use general-purpose input devices such as a keyboard, mouse, or joystick that map to tasks in unintuitive ways. This software enables a person to control intuitively the position, size, and orientation of synthetic objects in a 3D virtual environment. It makes possible the simultaneous control of the 3D position, scale, and orientation of 3D objects using natural gestures. Enabling the control of 3D objects using a commercial motion-capture system allows for natural mapping of the many degrees of freedom of the human body to the manipulation of the 3D objects. It reduces training time for this kind of task, and eliminates the need to create an expensive, special-purpose controller.
Integration of virtual and real scenes within an integral 3D imaging environment
NASA Astrophysics Data System (ADS)
Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm
2002-11-01
The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television because it avoids adverse psychological effects. To create truly engaging three-dimensional television programs, a virtual studio that performs the tasks of generating, editing and integrating 3D content involving virtual and real scenes is required. The paper presents, for the first time, the procedures, factors and methods of integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, where the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures different elemental images of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and investigation focus on depth extraction from captured integral 3D images. The depth calculation method from disparity and the multiple-baseline method used to improve the precision of depth estimation are also presented. The concept of colour SSD and its further improvement in precision are proposed and verified.
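Depth from disparity, as discussed above, rests on matching corresponding patches between elemental images by sum of squared differences (SSD); depth then follows from z = f·B/d for focal length f and baseline B. A minimal 1D matching sketch (a simplified, assumed form of the SSD idea, not the paper's colour SSD or multiple-baseline algorithm):

```python
import numpy as np

def ssd_match(ref_patch, search_row):
    """Slide a 1D reference patch along a search row and return the
    offset with minimum sum of squared differences; the offset
    difference between views is the disparity."""
    n = len(ref_patch)
    costs = [np.sum((search_row[i:i + n] - ref_patch) ** 2)
             for i in range(len(search_row) - n + 1)]
    return int(np.argmin(costs))
```

The multiple-baseline refinement sums such SSD costs from several image pairs expressed in inverse depth, which sharpens the minimum and reduces ambiguity from repetitive texture.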
System-wide identification of RNA-binding proteins by interactome capture.
Castello, Alfredo; Horos, Rastislav; Strein, Claudia; Fischer, Bernd; Eichelbaum, Katrin; Steinmetz, Lars M; Krijgsveld, Jeroen; Hentze, Matthias W
2013-03-01
Owing to their preeminent biological functions, the repertoire of expressed RNA-binding proteins (RBPs) and their activity states are highly informative about cellular systems. We have developed a novel and unbiased technique, called interactome capture, for identifying the active RBPs of cultured cells. By making use of in vivo UV cross-linking of RBPs to polyadenylated RNAs, covalently bound proteins are captured with oligo(dT) magnetic beads. After stringent washes, the mRNA interactome is determined by quantitative mass spectrometry (MS). The protocol takes 3 working days for analysis of single proteins by western blotting, and about 2 weeks for the determination of complete cellular mRNA interactomes by MS. The most important advantage of interactome capture over other in vitro and in silico approaches is that only RBPs bound to RNA in a physiological environment are identified. When applied to HeLa cells, interactome capture revealed hundreds of novel RBPs. Interactome capture can also be broadly used to compare different biological states, including metabolic stress, cell cycle, differentiation, development or the response to drugs.
Enhanced visualization of the retinal vasculature using depth information in OCT.
de Moura, Joaquim; Novo, Jorge; Charlón, Pablo; Barreira, Noelia; Ortega, Marcos
2017-12-01
Retinal vessel tree extraction is a crucial step for analyzing the microcirculation, a frequently needed process in the study of relevant diseases. To date, this has normally been done by using 2D image capture paradigms, offering a restricted visualization of the real layout of the retinal vasculature. In this work, we propose a new approach that automatically segments and reconstructs the 3D retinal vessel tree by combining near-infrared reflectance retinography information with Optical Coherence Tomography (OCT) sections. Our proposal identifies the vessels, estimates their calibers, and obtains the depth at all the positions of the entire vessel tree, thereby enabling the reconstruction of the 3D layout of the complete arteriovenous tree for subsequent analysis. The method was tested using 991 OCT images combined with their corresponding near-infrared reflectance retinography. The different stages of the methodology were validated using the opinion of an expert as a reference. The tests offered accurate results, showing coherent reconstructions of the 3D vasculature that can be analyzed in the diagnosis of relevant diseases affecting the retinal microcirculation, such as hypertension or diabetes, among others.
DAVIS: A direct algorithm for velocity-map imaging system
NASA Astrophysics Data System (ADS)
Harrison, G. R.; Vaughan, J. C.; Hidle, B.; Laurent, G. M.
2018-05-01
In this work, we report a direct (non-iterative) algorithm to reconstruct the three-dimensional (3D) momentum-space picture of any charged particles collected with a velocity-map imaging system from the two-dimensional (2D) projected image captured by a position-sensitive detector. The method consists of fitting the measured image with the 2D projection of a model 3D velocity distribution defined by the physics of the light-matter interaction. The meaningful angle-correlated information is first extracted from the raw data by expanding the image with a complete set of Legendre polynomials. Both the particle's angular and energy distributions are then directly retrieved from the expansion coefficients. The algorithm is simple, easy to implement, fast, and explicitly takes into account the pixelization effect in the measurement.
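The first step named above, expanding the measured image in Legendre polynomials of cos θ, can be sketched as a linear least-squares fit. This 1D angular fit is an illustration of the expansion only; the full algorithm fits the 2D projection of a 3D model, which is not reproduced here:

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_coeffs(theta, intensity, lmax):
    """Least-squares coefficients c_l of I(theta) = sum_l c_l P_l(cos theta).
    Builds a design matrix whose columns are P_0..P_lmax evaluated at
    cos(theta) and solves the normal equations via lstsq."""
    x = np.cos(theta)
    A = np.stack([legendre.legval(x, np.eye(lmax + 1)[l])
                  for l in range(lmax + 1)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, intensity, rcond=None)
    return coeffs
```

Because the fit is linear and direct, no iterative inversion is needed, which mirrors the non-iterative character of the reported algorithm.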
NASA Astrophysics Data System (ADS)
Thienphrapa, Paul; Ramachandran, Bharat; Elhawary, Haytham; Taylor, Russell H.; Popovic, Aleksandra
2012-02-01
Free moving bodies in the heart pose a serious health risk, as they may be released into the arteries and disrupt blood flow. These bodies may be the result of various medical conditions and trauma. The conventional approach to removing these objects involves open surgery with sternotomy, the use of cardiopulmonary bypass, and a wide resection of the heart muscle. We advocate a minimally invasive surgical approach using a flexible robotic end effector guided by 3D transesophageal echocardiography. In a phantom study, we track a moving body in a beating heart using a modified normalized cross-correlation method, with a mean RMS error of 2.3 mm. We previously found the foreign body motion to be fast and abrupt, rendering a retrieval method based on direct tracking infeasible. We proposed a strategy based on guiding a robot to the most spatially probable location of the fragment and securing it upon its reentry to that location. To improve efficacy in the context of a robotic retrieval system, we extend this approach by exploring multiple candidate capture locations. Salient locations are identified based on spatial probability, dwell time, and visit frequency; secondary locations are also examined. Aggregate results indicate that the location of highest spatial probability (50% occupancy) is distinct from the longest-dwelled location (0.84 seconds). Such metrics are vital in informing the design of a retrieval system and capture strategies, and they can be computed intraoperatively to select the best capture location based on constraints such as workspace, time, and device manipulability. Given the complex nature of fragment motion, the ability to analyze multiple capture locations is a desirable capability in an interventional system.
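The template-tracking idea can be illustrated with a minimal zero-mean normalized cross-correlation search. This is a generic brute-force sketch of NCC matching, not the authors' modified method:

```python
import numpy as np

def ncc_match(image, template):
    """Return (row, col) of the best zero-mean normalized
    cross-correlation match of template inside image (brute force)."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * t_norm
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# Synthetic check: cut a patch from a random frame at a known offset
rng = np.random.default_rng(0)
img = rng.random((40, 40))
tmpl = img[12:20, 25:33].copy()
pos = ncc_match(img, tmpl)
```

In practice the search would be restricted to a window around the previous position and repeated per frame; the zero-mean normalization is what makes the score robust to intensity changes between ultrasound frames.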
40 CFR 52.320 - Identification of plan.
Code of Federal Regulations, 2012 CFR
2012-07-01
... of Group II VOC sources were submitted on January 6, 1981, and the supplemental information received... Gasoline Transfer at Bulk Plants-Vapor Balance System), and D (Test Procedures for Annual Pressure/Vacuum... recent EPA capture efficiency protocols, and the commitment to adopt federal capture efficiency test...
40 CFR 52.320 - Identification of plan.
Code of Federal Regulations, 2014 CFR
2014-07-01
... of Group II VOC sources were submitted on January 6, 1981, and the supplemental information received... at Bulk Plants-Vapor Balance System), and D (Test Procedures for Annual Pressure/Vacuum Testing of... recent EPA capture efficiency protocols, and the commitment to adopt federal capture efficiency test...
40 CFR 52.320 - Identification of plan.
Code of Federal Regulations, 2013 CFR
2013-07-01
... of Group II VOC sources were submitted on January 6, 1981, and the supplemental information received... at Bulk Plants-Vapor Balance System), and D (Test Procedures for Annual Pressure/Vacuum Testing of... recent EPA capture efficiency protocols, and the commitment to adopt federal capture efficiency test...
Dhont, Jennifer; Vandemeulebroucke, Jef; Burghelea, Manuela; Poels, Kenneth; Depuydt, Tom; Van Den Begin, Robbe; Jaudet, Cyril; Collen, Christine; Engels, Benedikt; Reynders, Truus; Boussaer, Marlies; Gevaert, Thierry; De Ridder, Mark; Verellen, Dirk
2018-02-01
To evaluate the short- and long-term variability of breathing-induced tumor motion. 3D tumor motion of 19 lung and 18 liver lesions captured over the course of an SBRT treatment was evaluated and compared to the motion on 4D-CT. An implanted fiducial could be used for unambiguous motion information. Fast orthogonal fluoroscopy (FF) sequences, included in the treatment workflow, were used to evaluate motion during treatment. Several motion parameters were compared between different FF sequences from the same fraction to evaluate intrafraction variability. To assess interfraction variability, amplitude and hysteresis were compared between fractions and with the 3D tumor motion registered by 4D-CT. Population-based margins, necessary on top of the ITV to capture all motion variability, were calculated based on the motion captured during treatment. Baseline drift in the cranio-caudal (CC) or anterior-posterior (AP) direction is significant (i.e., >5 mm) for a large group of patients, in contrast to intrafraction amplitude and hysteresis variability. However, a correlation between intrafraction amplitude variability and mean motion amplitude was found (Pearson's correlation coefficient, r = 0.72, p < 10^-4). Interfraction variability in amplitude is significant for 46% of all lesions. As such, 4D-CT accurately captures the motion during treatment for some fractions but not for all. Accounting for motion variability during treatment increases the PTV margins in all directions, most significantly in CC: from 5 mm to 13.7 mm for lung and 8.0 mm for liver. Both short-term and day-to-day tumor motion variability can be significant, especially for lesions moving with amplitudes above 7 mm. Abandoning passive motion management strategies in favor of more active ones is advised. Copyright © 2017 Elsevier B.V. All rights reserved.
Ravikumar, Komandur Elayavilli; Wagholikar, Kavishwar B; Li, Dingcheng; Kocher, Jean-Pierre; Liu, Hongfang
2015-06-06
Advances in next-generation sequencing technology have accelerated the pace of individualized medicine (IM), which aims to incorporate genetic/genomic information into medicine. One immediate need in interpreting sequencing data is the assembly of information about genetic variants and their corresponding associations with other entities (e.g., diseases or medications). Even with dedicated effort to capture such information in biological databases, much of it remains 'locked' in the unstructured text of biomedical publications. There is a substantial lag between publication and the subsequent abstraction of such information into databases. Multiple text mining systems have been developed, but most of them focus on sentence-level association extraction, with performance evaluation based on gold-standard text annotations specifically prepared for text mining systems. We developed and evaluated a text mining system, MutD, which extracts protein mutation-disease associations from MEDLINE abstracts by incorporating discourse-level analysis, using a benchmark data set extracted from curated database records. MutD achieves an F-measure of 64.3% for reconstructing protein mutation-disease associations in curated database records. The discourse-level analysis component of MutD contributed a gain of more than 10% in F-measure when compared against sentence-level association extraction. Our error analysis indicates that 23 of the 64 precision errors are true associations that were not captured by database curators, and 68 of the 113 recall errors are caused by the absence of associated disease entities in the abstract. After adjusting for the defects in the curated database, the revised F-measure of MutD in association detection reaches 81.5%. Our quantitative analysis reveals that MutD can effectively extract protein mutation-disease associations when benchmarked against curated database records. The analysis also demonstrates that incorporating discourse-level analysis significantly improves the performance of extracting protein-mutation-disease associations. Future work includes the extension of MutD to full-text articles.
Endoscopic pulsed digital holography for 3D measurements
NASA Astrophysics Data System (ADS)
Saucedo, A. Tonatiuh; Mendoza Santoyo, Fernando; de La Torre-Ibarra, Manuel; Pedrini, Giancarlo; Osten, Wolfgang
2006-02-01
A rigid endoscope and three different object illumination source positions are used in pulsed digital holography to measure the three orthogonal displacement components from hidden areas of a harmonically vibrating metallic cylinder. In order to obtain simultaneous 3D information from the optical setup, it is necessary to match the optical paths within each reference-object beam pair, but to incoherently mismatch the three reference-object beam pairs, such that three pulsed digital holograms are incoherently recorded within a single frame of the CCD sensor. The phase difference is obtained using the Fourier method and by subtracting two digital holograms captured for two different object positions.
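The Fourier (carrier-fringe) phase-recovery step can be sketched in one dimension: keep only the positive sideband around the carrier frequency, inverse-transform, and take the argument of the resulting analytic signal. This is the generic Takeda-style method under an assumed carrier frequency and band limits, not the exact filtering used by the authors:

```python
import numpy as np

def fourier_phase(fringe, carrier_bin):
    """Wrapped phase of a 1D carrier fringe signal via the Fourier
    method: band-pass the positive sideband, inverse FFT, take angle."""
    spec = np.fft.fft(fringe)
    keep = np.zeros_like(spec)
    lo, hi = carrier_bin // 2, 3 * carrier_bin // 2  # band around carrier
    keep[lo:hi + 1] = spec[lo:hi + 1]
    return np.angle(np.fft.ifft(keep))  # wrapped phase, carrier included

# Synthetic fringe with a known, slowly varying phase term
n, f0 = 512, 32
x = np.arange(n) / n
phi = 0.8 * np.sin(2 * np.pi * x)
fringe = 1.0 + np.cos(2 * np.pi * f0 * x + phi)
recovered = fourier_phase(fringe, f0)
# Remove the linear carrier term and re-wrap to compare with phi
residual = np.angle(np.exp(1j * (recovered - 2 * np.pi * f0 * x)))
```

Subtracting the wrapped phases of two such holograms, one per object state, gives the phase difference that encodes displacement, which is the subtraction step the abstract describes.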
Common-path biodynamic imaging for dynamic fluctuation spectroscopy of 3D living tissue
NASA Astrophysics Data System (ADS)
Li, Zhe; Turek, John; Nolte, David D.
2017-03-01
Biodynamic imaging is a novel 3D optical imaging technology based on short-coherence digital holography that measures intracellular motions of cells inside their natural microenvironments. Here both common-path and Mach-Zehnder designs are presented. Biological tissues such as tumor spheroids and ex vivo biopsies are used as targets, and backscattered light is collected as signal. Drugs are applied to samples, and their effects are evaluated by identifying biomarkers that capture intracellular dynamics from the reconstructed holograms. Through digital holography and coherence gating, information from different depths of the samples can be extracted, enabling the deep-tissue measurement of the responses to drugs.
Quantitative Understanding of SHAPE Mechanism from RNA Structure and Dynamics Analysis.
Hurst, Travis; Xu, Xiaojun; Zhao, Peinan; Chen, Shi-Jie
2018-05-10
The selective 2'-hydroxyl acylation analyzed by primer extension (SHAPE) method probes RNA local structural and dynamic information at single nucleotide resolution. To gain quantitative insights into the relationship between nucleotide flexibility, RNA 3D structure, and SHAPE reactivity, we develop a 3D Structure-SHAPE Relationship model (3DSSR) to rebuild SHAPE profiles from 3D structures. The model starts from RNA structures and combines nucleotide interaction strength and conformational propensity, ligand (SHAPE reagent) accessibility, and base-pairing pattern through a composite function to quantify the correlation between SHAPE reactivity and nucleotide conformational stability. The 3DSSR model shows the relationship between SHAPE reactivity and RNA structure and energetics. Comparisons between the 3DSSR-predicted SHAPE profile and the experimental SHAPE data show correlation, suggesting that the extracted analytical function may have captured the key factors that determine the SHAPE reactivity profile. Furthermore, the theory offers an effective method to sieve RNA 3D models and exclude models that are incompatible with experimental SHAPE data.
Epsky, Nancy D; Gill, Micah A
2017-06-01
Volatile chemicals produced by actively fermenting aqueous grape juice bait have been found to be highly attractive to the African fig fly, Zaprionus indianus Gupta. This is a highly dynamic system and time period of fermentation is an important factor in bait efficacy. A series of field tests were conducted that evaluated effects of laboratory versus field fermentation and sampling period (days after placement [DAP]) on bait effectiveness as the first step in identifying the chemicals responsible for attraction. Tests of traps with bait that had been aged in the laboratory for 0, 3, 6, and 9 d and then sampled 3 DAP found higher capture in traps with 0- and 3-d-old baits than in traps with 6- or 9-d-old baits. To further define the time period that produced the most attractive baits, a subsequent test evaluated baits aged for 0, 2, 4, and 6 d in the laboratory and sampled after 1-4 DAP, with traps sampled and bait discarded at the end of each DAP period. The highest capture was in traps with 4-d-old bait sampled 1 DAP, with the second best capture in traps with 0-d-old bait sampled 3 DAP. However, there tended to be fewer flies as DAP increased, indicating potential loss of identifiable flies owing to decomposition in the actively fermenting solutions. When traps were sampled and bait recycled daily, the highest capture was in 2- and 4-d-old baits sampled 1 DAP and in 0-d-old baits sampled 2-4 DAP. Similar patterns were observed for capture of nontarget drosophilids. Published by Oxford University Press on behalf of Entomological Society of America 2017. This work is written by US Government employees and is in the public domain in the US.
Three-Dimensional Reconstruction of Large Cultural Heritage Objects Based on UAV Video and TLS Data
NASA Astrophysics Data System (ADS)
Xu, Z.; Wu, T. H.; Shen, Y.; Wu, L.
2016-06-01
This paper investigates the synergetic use of an unmanned aerial vehicle (UAV) and a terrestrial laser scanner (TLS) in the 3D reconstruction of cultural heritage objects. Rather than capturing still images, the UAV, which carries a consumer digital camera, is used to collect dynamic videos to overcome its limited endurance capacity. Then, a set of 3D point clouds is generated from the video image sequences using automated structure-from-motion (SfM) and patch-based multi-view stereo (PMVS) methods. The TLS is used to collect information beyond the reach of UAV imaging, e.g., partial building facades. A coarse-to-fine method is introduced to integrate the two sets of point clouds, from UAV image reconstruction and TLS scanning, for complete 3D reconstruction. For increased reliability, a variant of the ICP algorithm is introduced that uses local terrain-invariant regions in the combined registration. The experimental study was conducted on the Tulou cultural heritage buildings in Fujian province, China, focusing on one of the Tulou clusters built several hundred years ago. Results show a digital 3D model of the Tulou cluster with complete coverage and textural information. This paper demonstrates the usability of the proposed method for efficient 3D reconstruction of heritage objects based on UAV video and TLS data.
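The ICP idea underlying the point-cloud integration can be sketched as a minimal point-to-point variant: alternate nearest-neighbour pairing with the closed-form (Kabsch/SVD) rigid update. The authors' terrain-invariant-region variant is not public, so this is only a generic illustration:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Closed-form (Kabsch/SVD) rotation R and translation t that best
    map the paired points src onto dst in the least-squares sense."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP: nearest-neighbour pairing (brute
    force, fine for small clouds) alternated with the rigid update."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur

# Synthetic check: re-align a slightly rotated and shifted copy
rng = np.random.default_rng(1)
dst = rng.random((60, 3))
ang = 0.03
Rz = np.array([[np.cos(ang), -np.sin(ang), 0.0],
               [np.sin(ang),  np.cos(ang), 0.0],
               [0.0, 0.0, 1.0]])
src = (dst - 0.01) @ Rz.T
aligned = icp(src, dst)
```

A coarse alignment step (as in the paper's coarse-to-fine scheme) matters because plain ICP of this kind only converges when the initial misalignment is small relative to point spacing.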
Design and Analysis of a Single-Camera Omnistereo Sensor for Quadrotor Micro Aerial Vehicles (MAVs).
Jaramillo, Carlos; Valenti, Roberto G; Guo, Ling; Xiao, Jizhong
2016-02-06
We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor's projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis of various system characteristics such as size, catadioptric spatial resolution, and field of view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) resulting from a single image captured in a real-life experiment. We expect our sensor to be reproducible, as its model parameters can be optimized to satisfy other catadioptric-based omnistereo vision designs under different circumstances.
Buser, Thaddaeus J; Sidlauskas, Brian L; Summers, Adam P
2018-05-01
We contrast 2D vs. 3D landmark-based geometric morphometrics in the fish subfamily Oligocottinae by using 3D landmarks from CT-generated models and comparing the morphospace of the 3D landmarks to one based on 2D landmarks from images. The 2D and 3D shape variables capture common patterns across taxa, such that the pairwise Procrustes distances among taxa correspond and the trends captured by principal component analysis are similar in the xy plane. We use the two sets of landmarks to test several ecomorphological hypotheses from the literature. Both 2D and 3D data reject the hypothesis that head shape correlates significantly with the depth at which a species is commonly found. However, in taxa where shape variation in the z-axis is high, the 2D shape variables show sufficiently strong distortion to influence the outcome of the hypothesis tests regarding the relationship between mouth size and feeding ecology. Only the 3D data support previous studies which showed that large mouth sizes correlate positively with high percentages of elusive prey in the diet. When used to test for morphological divergence, 3D data show no evidence of divergence, while 2D data show that one clade of oligocottines has diverged from all others. This clade shows the greatest degree of z-axis body depth within Oligocottinae, and we conclude that the inability of the 2D approach to capture this lateral body depth causes the incongruence between 2D and 3D analyses. Anat Rec, 301:806-818, 2018. © 2017 Wiley Periodicals, Inc. © 2017 Wiley Periodicals, Inc.
Integrating Depth and Image Sequences for Planetary Rover Mapping Using Rgb-D Sensor
NASA Astrophysics Data System (ADS)
Peng, M.; Wan, W.; Xing, Y.; Wang, Y.; Liu, Z.; Di, K.; Zhao, Q.; Teng, B.; Mao, X.
2018-04-01
An RGB-D camera allows the capture of depth and color information at high data rates, which makes it possible and beneficial to integrate depth and image sequences for planetary rover mapping. The proposed mapping method consists of three steps. First, the strict projection relationship among 3D space, depth data, and visual texture data is established based on the imaging principle of the RGB-D camera. Then, an extended bundle adjustment (BA) based SLAM method with integrated 2D and 3D measurements is applied to the image network for high-precision pose estimation. Next, as the interior and exterior orientation elements of the RGB image sequence are available, dense matching is completed with the CMPMVS tool. Finally, according to the registration parameters after ICP, the 3D scene from the RGB images can be well registered to the 3D scene from the depth images, and the fused point cloud can be obtained. An experiment was performed in an outdoor field to simulate the lunar surface. The experimental results demonstrate the feasibility of the proposed method.
Parallax scanning methods for stereoscopic three-dimensional imaging
NASA Astrophysics Data System (ADS)
Mayhew, Christopher A.; Mayhew, Craig M.
2012-03-01
Under certain circumstances, conventional stereoscopic imagery is subject to being misinterpreted. Stereo perception created from two static horizontally separated views can create a "cut out" 2D appearance for objects at various planes of depth. The subject volume looks three-dimensional, but the objects themselves appear flat. This is especially true if the images are captured using small disparities. One potential explanation for this effect is that, although three-dimensional perception comes primarily from binocular vision, a human's gaze (the direction and orientation of a person's eyes with respect to their environment) and head motion also contribute additional sub-process information. The absence of this information may be the reason that certain stereoscopic imagery appears "odd" and unrealistic. Another contributing factor may be the absence of vertical disparity information in a traditional stereoscopic display. Recently, Parallax Scanning technologies have been introduced, which (1) provide a scanning methodology, (2) incorporate vertical disparity, and (3) produce stereo images with substantially smaller disparities than the human interocular distances.1 To test whether these three features would improve the realism and reduce the cardboard cutout effect of stereo images, we have applied Parallax Scanning (PS) technologies to commercial stereoscopic digital cinema productions and have tested the results with a panel of stereo experts. These informal experiments show that the addition of PS information into the left and right image capture improves the overall perception of three-dimensionality for most viewers. Parallax scanning significantly increases the set of tools available for 3D storytelling while at the same time presenting imagery that is easy and pleasant to view.
Using 2 Assessment Methods May Better Describe Dietary Supplement Intakes in the United States
Nicastro, Holly L; Bailey, Regan L; Dodd, Kevin W
2015-01-01
Background: One-half of US adults report using a dietary supplement. NHANES has traditionally assessed dietary supplement use via a 30-d questionnaire but in 2007 added a supplement module to the 24-h dietary recall (24HR). Objective: We compared these 2 dietary assessment methods, examined potential biases in the methods, and determined the effect that instrument choice had on estimates of prevalence of multivitamin/multimineral dietary supplement (MVMM) use. Methods: We described prevalence of dietary supplement use by age, sex, and assessment instrument in 12,285 adults in the United States (>19 y of age) from NHANES 2007–2010. Results: When using data from the questionnaire alone, 29.3% ± 1.0% of men and 35.5% ± 1.0% of women were users of MVMMs, whereas data from the 24HR only produced prevalence estimates of 26.3% ± 1.1% for men and 33.2% ± 1.0% for women. When using data from both instruments combined, 32.3% ± 1.2% of men and 39.5% ± 1.1% of women were classified as MVMM users. Prevalence estimates were significantly higher by 2–9% in all age–sex groups when using information from both instruments combined than when using data from either instrument individually. A digit preference bias and flattened slope phenomenon were observed in responses to the dietary supplement questionnaire. A majority (67%) of MVMMs were captured on both instruments, whereas 19% additional MVMMs were captured on the questionnaire and 14% additional on the 24HR. Of those captured only on the 24HR, 26% had missing label information, whereas only 12% and 9% of those captured on the questionnaire or both, respectively, had missing information. Conclusions: Use of both the dietary supplement questionnaire and the 24HR can provide advantages to researchers over the use of a single instrument and potentially capture a larger fraction of dietary supplement users. PMID:26019244
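The gain from combining instruments follows from classifying a respondent as a user if either instrument captures a supplement, so the combined prevalence is the prevalence of the union. A toy illustration with made-up records (not NHANES data):

```python
def prevalence_pct(records, key):
    """Percent of records flagged True under the given key."""
    return 100.0 * sum(1 for r in records if r[key]) / len(records)

# Hypothetical respondents; True = instrument captured an MVMM
records = [
    {"questionnaire": True,  "recall": True},   # captured by both
    {"questionnaire": True,  "recall": False},  # questionnaire only
    {"questionnaire": False, "recall": True},   # 24HR only
    {"questionnaire": False, "recall": False},  # non-user
]
for r in records:
    r["either"] = r["questionnaire"] or r["recall"]

p_q = prevalence_pct(records, "questionnaire")  # 50.0
p_r = prevalence_pct(records, "recall")         # 50.0
p_either = prevalence_pct(records, "either")    # 75.0
```

Because each instrument misses some users the other catches (19% and 14% in the study), the union estimate always equals or exceeds either single-instrument estimate, matching the 2-9% increases reported.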
Salazar-Gamarra, Rodrigo; Seelaus, Rosemary; da Silva, Jorge Vicente Lopes; da Silva, Airton Moreira; Dib, Luciano Lauria
2016-05-25
The aim of this study is to present the development of a new technique to obtain 3D models using photogrammetry with a mobile device and free software, as a method for making digital facial impressions of patients with maxillofacial defects, for the final purpose of 3D printing of facial prostheses. With the use of a mobile device, free software, and a photo capture protocol, 2D captures of the anatomy of a patient with a facial defect were transformed into a 3D model. The resulting digital models were evaluated for visual and technical integrity. The technical process and resulting models were described and analyzed for technical and clinical usability. Generating 3D models to make digital face impressions was possible through photogrammetry with photos taken by a mobile device. The facial anatomy of the patient was reproduced in a *.3dp and a *.stl file with no major irregularities. 3D printing was possible. An alternative method for capturing facial anatomy is possible using a mobile device for the purpose of obtaining and designing 3D models for facial rehabilitation. Further studies must be conducted to compare 3D modeling among different techniques and systems. Free software and low-cost equipment could be a feasible solution for obtaining 3D models for making digital face impressions for maxillofacial prostheses, improving access for clinical centers that cannot afford high-cost technology.
The creation of a global telemedical information society.
Marsh, A
1998-01-01
Healthcare is a major candidate for improvement in any vision of the kinds of "information highways" and "information societies" that are now being visualized. The medical information management market is one of the largest and fastest growing segments of the healthcare device industry. The expected revenue by the year 2000 is US$21 billion. Telemedicine currently accounts for only a small segment but is expanding rapidly. In the United States more than 60% of federal telemedicine projects were initiated in the last two years. The concept of telemedicine captures much of what is developing in terms of technology implementations, especially if it is combined with the growth of the Internet and World Wide Web (WWW). It is foreseen that the WWW will become the most important communication medium of any future information society. If the development of such a society is to be on a global scale, it should not be allowed to develop in an ad hoc manner. The Euromed Project has identified 20 building blocks resulting in 39 steps requiring multi-disciplinary collaborations. Because the organization of information is critical, especially where healthcare is concerned, the Euromed Project has also introduced a new (global) standard called "Virtual Medical Worlds", which provides the potential to organize existing medical information and provide the foundations for its integration into future forms of medical information systems. Virtual Medical Worlds, based on 3D reconstructed medical models, utilizes the WWW as a navigational medium to remotely access multimedia medical information systems. The visualisation and manipulation of hyper-graphical 3D "body/organ" templates and patient-specific 3D/4D/VR models is an attempt to define an information infrastructure in an emerging WWW-based telemedical information society.
A geographic information system-based 3D city estate modeling and simulation system
NASA Astrophysics Data System (ADS)
Chong, Xiaoli; Li, Sha
2015-12-01
This paper introduces a 3D city simulation system based on a geographic information system (GIS), covering all commercial housing in the city. A regional-scale, GIS-based approach is used to capture, describe, and track the geographical attributes of each house in the city. A sorting algorithm of "Benchmark + Parity Rate" is developed to cluster houses with similar spatial and construction attributes. This system is applicable to digital city modeling, city planning, housing evaluation, housing monitoring, and visualizing housing transactions. Finally, taking the Jingtian area of Shenzhen as an example, each of the 35,997 housing units in the area can be displayed, tagged, and easily tracked by the GIS-based city modeling and simulation system. The model matches real market conditions well and can be provided to house buyers as a reference.
High-resolution hyperspectral ground mapping for robotic vision
NASA Astrophysics Data System (ADS)
Neuhaus, Frank; Fuchs, Christian; Paulus, Dietrich
2018-04-01
Recently released hyperspectral cameras use large, mosaiced filter patterns to capture different ranges of the light's spectrum in each of the camera's pixels. Spectral information is therefore sparse, as it is not fully available at each location. We propose an online method that avoids explicit demosaicing of camera images by fusing raw, unprocessed hyperspectral camera frames inside an ego-centric ground surface map. The map is represented as a multilayer heightmap data structure, whose geometry is estimated by combining a visual odometry system with either dense 3D reconstruction or 3D laser data. We use a publicly available dataset to show that our approach is capable of constructing an accurate hyperspectral representation of the surface surrounding the vehicle. We show that in many cases our approach increases spatial resolution over a demosaicing approach, while providing the same amount of spectral information.
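The demosaicing-free fusion can be sketched as follows: each raw pixel contributes a sample only for the band its mosaic position selects, accumulated into the map cell it projects onto. This is a schematic of the accumulation idea under an assumed repeating filter pattern, not the paper's full heightmap pipeline:

```python
import numpy as np

def accumulate_bands(raw, pattern, cell_index, n_cells):
    """Fuse one raw mosaiced frame into per-band map averages.

    raw        : (H, W) raw sensor values
    pattern    : (ph, pw) band index at each mosaic position
    cell_index : (H, W) map-cell id each pixel projects to
    Returns an (n_cells, n_bands) array of per-band means (NaN if empty).
    """
    ph, pw = pattern.shape
    n_bands = int(pattern.max()) + 1
    sums = np.zeros((n_cells, n_bands))
    counts = np.zeros((n_cells, n_bands))
    ys, xs = np.indices(raw.shape)
    bands = pattern[ys % ph, xs % pw]  # band captured at each pixel
    np.add.at(sums, (cell_index.ravel(), bands.ravel()), raw.ravel())
    np.add.at(counts, (cell_index.ravel(), bands.ravel()), 1.0)
    return np.where(counts > 0, sums / np.maximum(counts, 1.0), np.nan)

# Tiny example: a 2x2 mosaic (4 bands); every pixel falls in map cell 0
pattern = np.array([[0, 1], [2, 3]])
raw = np.array([[10.0, 20.0], [30.0, 40.0]])
cells = np.zeros((2, 2), dtype=int)
band_means = accumulate_bands(raw, pattern, cells, n_cells=1)
```

As the vehicle moves, successive frames sample the same ground cell through different mosaic positions, which is how the map fills in all bands without ever interpolating (demosaicing) a single frame.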
NASA Astrophysics Data System (ADS)
Moser, Stefan; Nau, Siegfried; Salk, Manfred; Thoma, Klaus
2014-02-01
The in situ investigation of dynamic events, ranging from car crashes to ballistics, is often key to understanding dynamic material behavior. In many cases the important processes and interactions happen on the scale of milli- to microseconds at speeds of 1000 m s-1 or more. Often, 3D information is necessary to fully capture and analyze all relevant effects. High-speed 3D-visualization techniques are thus required for in situ analysis. 3D-capable optical high-speed methods are often impaired by luminous effects and dust, while flash x-ray based methods usually deliver only 2D data. In this paper, a novel 3D-capable flash x-ray based method, in situ flash x-ray high-speed computed tomography (HSCT), is presented. The method is capable of producing 3D reconstructions of high-speed processes from an undersampled dataset consisting of only a few (typically 3 to 6) x-ray projections. The major challenges are identified and discussed, and the chosen solution is outlined. The method is illustrated with an exemplary application: a 1000 m s-1 impact event on the scale of microseconds. A quantitative analysis of the in situ measurement of the material fragments, with a 3D reconstruction at 1 mm voxel size, is presented and the results are discussed. The results show that the HSCT method allows gaining valuable visual and quantitative mechanical information for the understanding and interpretation of high-speed events.
Validation of an inertial measurement unit for the measurement of jump count and height.
MacDonald, Kerry; Bahr, Roald; Baltich, Jennifer; Whittaker, Jackie L; Meeuwisse, Willem H
2017-05-01
To validate the use of an inertial measurement unit (IMU) for the collection of total jump count and to assess the validity of an IMU for the measurement of jump height against 3D motion analysis. Cross-sectional validation study. 3D motion-capture laboratory and field-based settings. Thirteen elite adolescent volleyball players. Participants performed structured drills, played a 4-set volleyball match, and performed twelve countermovement jumps. Jump counts from structured drills and match play were validated against visual counts from recorded video. Jump height during the countermovement jumps was validated against concurrent 3D motion-capture data. The IMU device captured more total jumps (1032) than visual inspection (977) during match play. During structured practice, device jump count sensitivity was strong (96.8%) while specificity was perfect (100%). The IMU underestimated jump height compared to 3D motion capture, with mean differences for maximal and submaximal jumps of 2.5 cm (95%CI: 1.3 to 3.8) and 4.1 cm (95%CI: 3.1 to 5.1), respectively. The IMU offers a valid measuring tool for jump count. Although the IMU underestimates maximal and submaximal jump height, our findings demonstrate its practical utility for field-based measurement of jump load. Copyright © 2016 Elsevier Ltd. All rights reserved.
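The jump-height agreement reported above (a mean difference with a 95% CI) is a standard Bland-Altman-style computation. A minimal sketch, where the normal-approximation CI and the sample heights are illustrative assumptions rather than the study's data:

```python
import math
import statistics

def mean_difference_ci(device, criterion):
    """Mean device-minus-criterion difference with a 95% confidence interval.

    Uses a normal approximation (1.96 * standard error) for the CI of the
    mean difference; a negative mean indicates the device underestimates
    the criterion measure.
    """
    diffs = [d - c for d, c in zip(device, criterion)]
    m = statistics.mean(diffs)
    se = statistics.stdev(diffs) / math.sqrt(len(diffs))
    return m, (m - 1.96 * se, m + 1.96 * se)

# Illustrative jump heights in cm (not the study's data):
imu   = [38.1, 40.2, 35.6, 42.0, 39.5]
mocap = [41.0, 42.9, 38.2, 44.3, 42.1]
mean_diff, ci = mean_difference_ci(imu, mocap)
print(round(mean_diff, 2), tuple(round(x, 2) for x in ci))
```

With real data one would also check that the differences do not trend with jump magnitude, which is why the study reports maximal and submaximal jumps separately.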
Chen, Ri-Zhao; Li, Lian-Bing; Klein, Michael G; Li, Qi-Yun; Li, Peng-Pei; Sheng, Cheng-Fa
2016-02-01
Ostrinia furnacalis (Guenée) (Lepidoptera: Crambidae), commonly referred to as the Asian corn borer, is the most important corn pest in Asia. Although capturing males with pheromone traps has recently been the main monitoring tool and suppression technique, the best trap designs remain unclear. Commercially available Delta and funnel traps, along with laboratory-made basin and water traps, and modified Delta traps, were evaluated in corn and soybean fields during 2013-2014 in NE China. The water trap was superior for capturing first-generation O. furnacalis (1.37 times the captures of the Delta trap). However, the basin (8.3 ± 3.2 moths/trap/3 d), Delta (7.9 ± 2.5), and funnel traps (7.0 ± 2.3) were more effective than water traps (1.4 ± 0.4) during the second generation. Delta traps gave optimal captures when deployed at ca. 1.57× the height of the highest corn plants, 1.36× that of average soybean plants, and at the field borders. In Delta traps modified by covering 1/3 of their ends, captures increased by ca. 15.7 and 8.1% in the first and second generations, respectively. After 35 d in the field, pheromone lures were still ca. 50% as attractive as fresh lures, and retained this level of attraction for ca. 25 more days. Increased captures (first and second generation: 90.9 ± 9.5%; 78.3 ± 9.3%) were obtained by adding a lure exposed for 5 d to funnel traps baited with a 35-d lure. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
User Requirements Gathering for 3D Geographic Information in the United Kingdom
NASA Astrophysics Data System (ADS)
Wong, K.; Ellul, C.
2017-10-01
Despite significant developments, 3D technologies are still not fully exploited in practice due to a lack of awareness as well as a lack of understanding of who the users of 3D will be and what the user requirements are. From a National Mapping & Cadastral Agency and data acquisition perspective, each new 3D feature type and element within a feature added (such as doors, windows, chimneys, street lights) requires additional processing and cost to create. There is therefore a need to understand the importance of different 3D features and components for different applications. This will allow capture effort to be directed towards items that will be relevant to a wide range of users, and will help to understand the current status of, and interest in, 3D at a national level. This paper reports the results of an initial requirements gathering exercise for 3D geographic information in the United Kingdom (UK). It describes a user-centred design approach where usability and user needs are given extensive attention at each stage of the design process. Web-based questionnaires and semi-structured face-to-face interviews were used as complementary data collection methods to understand the user needs. The results from this initial study showed that while some applications lead the field with a high adoption of 3D, others are laggards, predominantly due to organisational inertia. While individuals may be positive about the use of 3D, many struggle to justify the value and business case for 3D GI. Further work is required to identify the specific geometric and semantic requirements for different applications and to repeat the study with a larger sample.
The reliability and criterion validity of 2D video assessment of single leg squat and hop landing.
Herrington, Lee; Alenezi, Faisal; Alzhrani, Msaad; Alrayani, Hasan; Jones, Richard
2017-06-01
The objective was to assess the intra-tester, within- and between-day reliability of measurements of hip adduction (HADD) and frontal plane projection angle (FPPA) during single leg squat (SLS) and single leg landing (SLL) using 2D video, and the validity of these measurements against those found during 3D motion capture. 15 healthy subjects had their SLS and SLL assessed using 3D motion capture and video analysis. Inter-tester reliability for both SLS and SLL when measuring FPPA and HADD showed excellent correlations (ICC 2,1 0.97-0.99). Within- and between-day assessment of SLS and SLL showed good to excellent correlations for both variables (ICC 3,1 0.72-0.91). 2D FPPA measures were found to have good correlation with knee abduction angle in 3D (r=0.79, p=0.008) during SLS, and also with knee abduction moment (r=0.65, p=0.009). 2D HADD showed very good correlation with 3D HADD during SLS (r=0.81, p=0.001), and good correlation during SLL (r=0.62, p=0.013). All other associations were weak (r<0.4). This study suggests that 2D video kinematics have a reasonable association with what is being measured by 3D motion capture. Copyright © 2017 Elsevier Ltd. All rights reserved.
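As a concrete illustration of the 2D measure, FPPA is commonly digitized as the angle between the thigh (hip-to-knee) and shank (knee-to-ankle) segments in the video frame. A minimal sketch under that assumption; the marker conventions are illustrative and not necessarily the paper's exact protocol:

```python
import math

def fppa_degrees(hip, knee, ankle):
    """Frontal plane projection angle (FPPA) from 2D marker coordinates.

    Computed here as the angle between the thigh segment (hip->knee) and
    the shank segment (knee->ankle); 0 deg means the segments are
    collinear (neutral frontal-plane alignment).
    """
    thigh = (knee[0] - hip[0], knee[1] - hip[1])
    shank = (ankle[0] - knee[0], ankle[1] - knee[1])
    dot = thigh[0] * shank[0] + thigh[1] * shank[1]
    norm = math.hypot(*thigh) * math.hypot(*shank)
    # Clamp guards against tiny floating-point overshoot outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# Collinear segments give 0 deg; a laterally displaced knee gives a non-zero angle.
print(round(fppa_degrees((0, 0), (0, 1), (0, 2)), 1))   # 0.0
print(round(fppa_degrees((0, 0), (1, 1), (1, 2)), 1))   # 45.0
```

A signed variant (using the cross product's sign) would distinguish knee valgus from varus, which matters when correlating against 3D knee abduction angle.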
USM3D Predictions of Supersonic Nozzle Flow
NASA Technical Reports Server (NTRS)
Carter, Melissa B.; Elmiligui, Alaa A.; Campbell, Richard L.; Nayani, Sudheer N.
2014-01-01
This study focused on the capability of the NASA Tetrahedral Unstructured Software System CFD code (USM3D) to predict supersonic plume flow. Previous studies, published in 2004 and 2009, compared USM3D's results against historical experimental data. The current study continued that comparison, focusing on the use of volume sourcing to capture the shear layers and internal shock structure of the plume. This study was conducted using two benchmark axisymmetric supersonic jet experimental data sets. The study showed that with the use of volume sourcing, USM3D was able to capture and model a jet plume's shear layer and internal shock structure.
Efficient local representations for three-dimensional palmprint recognition
NASA Astrophysics Data System (ADS)
Yang, Bing; Wang, Xiaohua; Yao, Jinliang; Yang, Xin; Zhu, Wenhua
2013-10-01
Palmprints have been broadly used for personal authentication because they are highly accurate and incur low cost. Most previous works have focused on two-dimensional (2-D) palmprint recognition in the past decade. Unfortunately, 2-D palmprint recognition systems lose the shape information when capturing palmprint images. Moreover, such 2-D palmprint images can be easily forged or affected by noise. Hence, three-dimensional (3-D) palmprint recognition has been regarded as a promising way to further improve the performance of palmprint recognition systems. We have developed a simple, but efficient method for 3-D palmprint recognition by using local features. We first utilize shape index representation to describe the geometry of local regions in 3-D palmprint data. Then, we extract local binary pattern and Gabor wavelet features from the shape index image. The two types of complementary features are finally fused at a score level for further improvements. The experimental results on the Hong Kong Polytechnic 3-D palmprint database, which contains 8000 samples from 400 palms, illustrate the effectiveness of the proposed method.
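The score-level fusion mentioned above might be sketched as a weighted sum of min-max normalized matcher scores; the weight and score ranges below are illustrative assumptions, not values from the paper:

```python
def minmax_normalize(score, lo, hi):
    """Map a raw matcher score into [0, 1] given that matcher's score range."""
    return (score - lo) / (hi - lo)

def fuse_scores(lbp_score, gabor_score, lbp_range, gabor_range, w=0.5):
    """Score-level fusion of two complementary matchers (LBP and Gabor).

    A weighted sum of min-max normalized scores; w weights the LBP matcher.
    Normalization puts heterogeneous matcher outputs on a common scale
    before combining them into a single decision score.
    """
    s1 = minmax_normalize(lbp_score, *lbp_range)
    s2 = minmax_normalize(gabor_score, *gabor_range)
    return w * s1 + (1 - w) * s2

# Hypothetical raw scores: an LBP similarity in [0, 1], a Gabor score in [0, 100].
fused = fuse_scores(0.62, 41.0, lbp_range=(0.0, 1.0), gabor_range=(0.0, 100.0))
print(round(fused, 3))  # 0.515
```

In practice the weight would be tuned on a validation set, and the ranges estimated from the score distributions of genuine and impostor comparisons.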
NASA Astrophysics Data System (ADS)
Soh, Ahmad Afiq Sabqi Awang; Jafri, Mohd Zubir Mat; Azraai, Nur Zaidi
2015-04-01
Interest in the study of human kinematics goes back very far in human history, driven by curiosity or by the need to understand the complexity of human body motion. Advanced computing technology now makes it possible to obtain new and accurate information about human movement. Martial arts (silat) was chosen and multiple types of movement were studied. This project was done using cutting-edge 3D motion capture technology to characterize and measure the motions performed by martial arts (silat) practitioners. The cameras detect the markers (via infrared reflection) placed around the performer's body (24 markers in total), which appear as dots in the computer software. The detected marker trajectories were analyzed using a kinematic-kinetic approach with time as the reference. Graphs of position, velocity, and acceleration against time t (seconds) were plotted for each marker. From this information, further parameters were determined, such as work done, momentum, and the center of mass of the body, using a mathematical approach. These data can be used to develop more effective movements in martial arts, contributing to practitioners of the art. Future work could extend this project to, for example, the analysis of martial arts competitions.
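The kinematic quantities described above (velocity and acceleration from marker positions over time) can be computed with finite differences. A minimal sketch for a single marker coordinate sampled at a fixed rate:

```python
def velocities(positions, dt):
    """Central-difference velocity at each interior sample of a 1D trajectory."""
    return [(positions[i + 1] - positions[i - 1]) / (2 * dt)
            for i in range(1, len(positions) - 1)]

def accelerations(positions, dt):
    """Second-order central-difference acceleration at interior samples."""
    return [(positions[i + 1] - 2 * positions[i] + positions[i - 1]) / dt ** 2
            for i in range(1, len(positions) - 1)]

# Marker x-coordinate sampled at 100 Hz (dt = 0.01 s); x = t^2 models a
# constant acceleration of 2 m/s^2, so v = 2t:
dt = 0.01
xs = [(i * dt) ** 2 for i in range(5)]
print([round(v, 6) for v in velocities(xs, dt)])     # [0.02, 0.04, 0.06]
print([round(a, 6) for a in accelerations(xs, dt)])  # [2.0, 2.0, 2.0]
```

Real marker data is noisy, so a smoothing filter (e.g. a low-pass or Savitzky-Golay filter) is normally applied before differentiating, since differentiation amplifies noise.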
Integration of real-time 3D capture, reconstruction, and light-field display
NASA Astrophysics Data System (ADS)
Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao
2015-03-01
Effective integration of 3D acquisition, reconstruction (modeling) and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention on synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we present our system architecture and component designs, hardware/software implementations, and experimental results. We elaborate on our recent progress on sparse-camera-array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of the proposed integrated 3D visualization system.
Image interpolation and denoising for division of focal plane sensors using Gaussian processes.
Gilboa, Elad; Cunningham, John P; Nehorai, Arye; Gruev, Viktor
2014-06-16
Image interpolation and denoising are important techniques in image processing. These methods are inherent to digital image acquisition as most digital cameras are composed of a 2D grid of heterogeneous imaging sensors. Current polarization imagers employ four different pixelated polarization filters, commonly referred to as division-of-focal-plane polarization sensors. The sensors capture only partial information of the true scene, leading to a loss of spatial resolution as well as inaccuracy of the captured polarization information. Interpolation is a standard technique to recover the missing information and increase the accuracy of the captured polarization information. Here we focus specifically on Gaussian process (GP) regression as a way to perform statistical image interpolation, where estimates of sensor noise are used to improve the accuracy of the estimated pixel information. We further exploit the inherent grid structure of this data to create a fast exact algorithm that operates in O(N^(3/2)) time (vs. the naive O(N^3)), thus making the Gaussian process method computationally tractable for image data. This modeling advance and the enabling computational advance combine to produce significant improvements over previously published interpolation methods for polarimeters, which are most pronounced in cases of low signal-to-noise ratio (SNR). We provide the comprehensive mathematical model as well as experimental results of the GP interpolation performance for a division-of-focal-plane polarimeter.
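As a toy illustration of the statistical interpolation idea, a 1D GP posterior mean with an RBF kernel and a sensor-noise term on the covariance diagonal is sketched below. This is not the paper's fast algorithm: the O(N^(3/2)) method exploits the 2D grid structure of the sensor, which a naive dense solve like this does not.

```python
import math

def rbf(x1, x2, length=1.0):
    """Squared-exponential (RBF) covariance between two 1D inputs."""
    return math.exp(-0.5 * ((x1 - x2) / length) ** 2)

def solve(A, b):
    """Naive Gauss-Jordan solve of A x = b (fine for the tiny systems used here)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))  # partial pivoting
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def gp_interpolate(xs, ys, x_star, noise=0.01, length=1.0):
    """GP posterior mean at x_star given noisy samples (xs, ys).

    The sensor-noise variance sits on the diagonal of the covariance
    matrix, so noisier measurements constrain the interpolant less.
    """
    n = len(xs)
    K = [[rbf(xs[i], xs[j], length) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, ys)
    return sum(alpha[i] * rbf(xs[i], x_star, length) for i in range(n))

# Estimate a missing sample at x = 2 from its neighbors:
print(gp_interpolate([0.0, 1.0, 3.0, 4.0], [1.0, 1.2, 1.1, 0.9], 2.0))
```

On a pixel grid the same posterior-mean formula applies with a 2D kernel; the speedup in the paper comes from exploiting the Kronecker/grid structure of that kernel matrix rather than solving it densely.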
A 176×144 148dB adaptive tone-mapping imager
NASA Astrophysics Data System (ADS)
Vargas-Sierra, S.; Liñán-Cembrano, G.; Rodríguez-Vázquez, A.
2012-03-01
This paper presents a 176×144 (QCIF) HDR image sensor where visual information is simultaneously captured and adaptively compressed by means of an in-pixel tone mapping scheme. The tone mapping curve (TMC) is calculated from the histogram of a time-stamp image captured in the previous frame, which serves as a probability indicator of the distribution of illuminations within the present frame. The chip produces 7-bit/pixel images that can map illuminations from 311 μlux to 55.3 klux in a single frame, in a way that each pixel decides when to stop observing photocurrent integration, with the extreme values captured at 8 s and 2.34 μs respectively. Pixel size is 33×33 μm2, which includes a 3×3 μm2 Nwell-Psubstrate photodiode and an autozeroing technique for establishing the reset voltage, which cancels most of the offset contributions created by the analog processing circuitry. Dark signal (10.8 mV/s) effects in the final image are attenuated by an automatic programming of the DAC top voltage. Measured characteristics are: sensitivity 5.79 V/lux·s, FWC 12.2 ke-, conversion factor 129 e-/DN, and read noise 25 e-. The chip has been designed in the 0.35 μm OPTO technology from Austriamicrosystems (AMS). Due to the focal plane operation, this architecture is especially well suited to implementation in a 3D (vertical stacking) technology using per-pixel TSVs.
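The histogram-driven tone mapping can be illustrated with a cumulative-histogram (equalization-style) curve: illumination bins holding many pixels receive more of the 128 output codes. This is a hedged sketch of the idea, not the chip's exact TMC computation:

```python
def tone_mapping_curve(histogram, out_levels=128):
    """Build a monotonic tone-mapping curve from an illumination histogram.

    The cumulative distribution of the previous frame's histogram acts as
    a probability indicator, so heavily populated bins are assigned more
    output codes (histogram equalization). 7-bit output -> 128 levels.
    """
    total = sum(histogram)
    tmc, cum = [], 0
    for count in histogram:
        cum += count
        tmc.append(round((out_levels - 1) * cum / total))
    return tmc

# A toy 8-bin illumination histogram concentrated in the dark bins:
hist = [40, 30, 15, 5, 4, 3, 2, 1]
print(tone_mapping_curve(hist))  # [51, 89, 108, 114, 119, 123, 126, 127]
```

Note how the first two (dark, heavily populated) bins span codes 0-89 while the six brighter bins share the remaining codes, which is the adaptive compression the abstract describes.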
Unified Information Access in Product Creation with an Integrated Control Desk
NASA Astrophysics Data System (ADS)
Wrasse, Kevin; Diener, Holger; Hayka, Haygazun; Stark, Rainer
2017-06-01
Customers' demand for individualized products leads to a large variety of different products in small-series and single-unit production. One result of this trend is high flexibility pressure in product creation. To counteract this pressure, the steadily increasing information generated by Industry 4.0 must be made available at the workplace. Additionally, a better exchange of information between product development, production planning and production is necessary. The improvement of individual systems, like CAD, PDM, ERP and MES, can only achieve this to a limited extent. Since SMEs mostly use systems from different manufacturers, the necessary deeper integration of information is only feasible for them to a limited extent. The presented control desk helps to ensure more flexible product creation as well as information exchange. It captures information from different IT systems in the production process and presents it in a way that is integrated, task-oriented and oriented to the user's mental model, e.g. production information combined with the 3D model of product parts, or product development information on the 3D model of the production. The solution is a digital 3D model of the manufacturing environment, which is enriched by billboards for a quick information overview and web service windows to access detailed MES and PDM information. By this, the level of abstraction is reduced, allowing users to react to changed requirements in the short term and make informed decisions. The interaction with the control desk utilizes the touch capabilities of mobile and fixed systems such as smartphones, tablets and multitouch tables.
Development of real-time motion capture system for 3D on-line games linked with virtual character
NASA Astrophysics Data System (ADS)
Kim, Jong Hyeong; Ryu, Young Kee; Cho, Hyung Suck
2004-10-01
Motion tracking is becoming an essential part of entertainment, medical, sports, education and industry applications with the development of 3-D virtual reality. Virtual human characters in digital animation and game applications have been controlled by interfacing devices: mice, joysticks, MIDI sliders, and so on. Those devices cannot make a virtual human character move smoothly and naturally. Furthermore, high-end human motion capture systems on the commercial market are expensive and complicated. In this paper, we propose a practical and fast motion capture system consisting of optical sensors, and link the data to a 3-D game character in real time. The prototype experimental setup was successfully applied to a boxing game, which requires very fast movement of the human character.
NASA Astrophysics Data System (ADS)
Markman, Adam; Carnicer, Artur; Javidi, Bahram
2017-05-01
We overview our recent work [1] on utilizing three-dimensional (3D) optical phase codes for object authentication using the random forest classifier. A simple 3D optical phase code (OPC) is generated by combining multiple diffusers and glass slides. This tag is then placed on a quick-response (QR) code, which is a barcode capable of storing information and can be scanned under non-uniform illumination conditions, rotation, and slight degradation. A coherent light source illuminates the OPC and the transmitted light is captured by a CCD to record the unique signature. Feature extraction on the signature is performed and inputted into a pre-trained random-forest classifier for authentication.
Belaghzal, Houda; Dekker, Job; Gibcus, Johan H
2017-07-01
Chromosome conformation capture-based methods such as Hi-C have become mainstream techniques for the study of the 3D organization of genomes. These methods convert chromatin interactions reflecting topological chromatin structures into digital information (counts of pair-wise interactions). Here, we describe an updated protocol for Hi-C (Hi-C 2.0) that integrates recent improvements into a single protocol for efficient and high-resolution capture of chromatin interactions. This protocol combines chromatin digestion using frequently cutting enzymes to obtain kilobase (kb) resolution. It also includes steps to reduce random ligation and the generation of uninformative molecules, such as unligated ends, to increase the amount of valid intra-chromosomal read pairs. This protocol allows for obtaining information on conformational structures such as compartments and topologically associating domains, as well as high-resolution conformational features such as DNA loops. Copyright © 2017 Elsevier Inc. All rights reserved.
Co-registered photoacoustic, thermoacoustic, and ultrasound mouse imaging
NASA Astrophysics Data System (ADS)
Reinecke, Daniel R.; Kruger, Robert A.; Lam, Richard B.; DelRio, Stephen P.
2010-02-01
We have constructed and tested a prototype test bed that allows us to form 3D photoacoustic CT images using near-infrared (NIR) irradiation (700 - 900 nm), 3D thermoacoustic CT images using microwave irradiation (434 MHz), and 3D ultrasound images from a commercial ultrasound scanner. The device utilizes a vertically oriented, curved array to capture the photoacoustic and thermoacoustic data. In addition, an 8-MHz linear array fixed in a horizontal position provides the ultrasound data. The photoacoustic and thermoacoustic data sets are co-registered exactly because they use the same detector. The ultrasound data set requires only simple corrections to co-register its images. The photoacoustic, thermoacoustic, and ultrasound images of mouse anatomy reveal complementary anatomic information as they exploit different contrast mechanisms. The thermoacoustic images differentiate between muscle, fat and bone. The photoacoustic images reveal the hemoglobin distribution, which is localized predominantly in the vascular space. The ultrasound images provide detailed information about the bony structures. Superposition of all three images onto a co-registered hybrid image shows the potential of a trimodal photoacoustic-thermoacoustic-ultrasound small-animal imaging system.
NASA Astrophysics Data System (ADS)
Trimborn, Barbara; Wolf, Ivo; Abu-Sammour, Denis; Henzler, Thomas; Schad, Lothar R.; Zöllner, Frank G.
2017-03-01
Image registration of preprocedural contrast-enhanced CT to intraprocedural cone-beam computed tomography (CBCT) can provide additional information for interventional liver oncology procedures such as transcatheter arterial chemoembolisation (TACE). In this paper, a novel similarity metric for gradient-based image registration is proposed. The metric relies on the patch-based computation of histograms of oriented gradients (HOG), which form the basis of a feature descriptor. The metric was implemented in a framework for rigid 3D-3D registration of pre-interventional CT with intra-interventional CBCT data obtained during the workflow of a TACE. To evaluate the performance of the new metric, the capture range was estimated based on the calculation of the mean target registration error and compared to the results obtained with a normalized cross-correlation metric. The results show that 3D HOG feature descriptors are suitable as an image-similarity metric and that the novel metric can compete with established methods in terms of registration accuracy.
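The patch-based HOG descriptor at the heart of such a metric can be sketched as follows. This is a generic 2D HOG computation with cosine similarity standing in for the paper's exact descriptor and metric details, which the abstract does not fully specify:

```python
import math

def hog_descriptor(patch, bins=8):
    """Histogram of oriented gradients for one image patch (2D list of floats).

    Central-difference gradients; each interior pixel votes its gradient
    magnitude into an orientation bin over [0, pi) (unsigned orientation).
    The histogram is L2-normalized so patches compare by gradient
    structure rather than contrast.
    """
    h, w = len(patch), len(patch[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.atan2(gy, gx) % math.pi
            hist[min(int(ang / math.pi * bins), bins - 1)] += mag
    norm = math.sqrt(sum(v * v for v in hist)) or 1.0
    return [v / norm for v in hist]

def cosine_similarity(a, b):
    """Similarity of two L2-normalized descriptors (1.0 = identical structure)."""
    return sum(x * y for x, y in zip(a, b))
```

A registration metric built on this would sum (or average) such patch similarities between the moving and fixed images; gradient-orientation statistics are comparatively robust to the intensity differences between CT and CBCT.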
Radiative capture reactions via indirect methods
NASA Astrophysics Data System (ADS)
Mukhamedzhanov, A. M.; Rogachev, G. V.
2017-10-01
Many radiative capture reactions of astrophysical interest occur at such low energies that their direct measurement is hardly possible. Until now, the only indirect method used to determine the astrophysical factor of a radiative capture process was Coulomb dissociation. In this paper we address another indirect method, which can provide information about resonant radiative capture reactions at astrophysically relevant energies. This method can be considered an extension of the Trojan horse method to resonant radiative capture reactions. The idea of the suggested indirect method is to use the indirect reaction A(a, sγ)F to obtain information about the radiative capture reaction A(x, γ)F, where a = (s x) and F = (x A). The main advantage of using indirect reactions is the absence of the penetrability factor in the channel x + A, which suppresses the low-energy cross sections of the A(x, γ)F reactions and does not allow one to measure these reactions at astrophysical energies. A general formalism to treat indirect resonant radiative capture reactions is developed for the case when only a few intermediate states contribute and a statistical approach cannot be applied. The indirect method requires coincidence measurements of the triple differential cross section, which is a function of the photon scattering angle, the energy, and the scattering angle of the outgoing spectator particle s. The angular dependence of the triple differential cross section at a fixed scattering angle of the spectator s is the angular γ-s correlation function. Using indirect resonant radiative capture reactions, one can obtain information about important astrophysical resonant radiative capture reactions such as (p, γ), (α, γ), and (n, γ) on stable and unstable isotopes. The indirect technique makes accessible low-lying resonances close to the threshold, and even subthreshold bound states located at negative energies.
In this paper, after developing the general formalism, we demonstrate the application of the indirect reaction 12C(6Li, dγ)16O, proceeding through 1- and 2+ subthreshold bound states and resonances, to obtain information about the 12C(α, γ)16O radiative capture at the astrophysically most effective energy of 0.3 MeV, which is impossible using standard direct measurements. The feasibility of the suggested approach is discussed.
NASA Astrophysics Data System (ADS)
He, Zhi; Liu, Lin
2016-11-01
Empirical mode decomposition (EMD) and its variants have recently been applied for hyperspectral image (HSI) classification due to their ability to extract useful features from the original HSI. However, it remains a challenging task to effectively exploit the spectral-spatial information by the traditional vector or image-based methods. In this paper, a three-dimensional (3D) extension of EMD (3D-EMD) is proposed to naturally treat the HSI as a cube and decompose the HSI into varying oscillations (i.e. 3D intrinsic mode functions (3D-IMFs)). To achieve fast 3D-EMD implementation, 3D Delaunay triangulation (3D-DT) is utilized to determine the distances of extrema, while separable filters are adopted to generate the envelopes. Taking the extracted 3D-IMFs as features of different tasks, robust multitask learning (RMTL) is further proposed for HSI classification. In RMTL, pairs of low-rank and sparse structures are formulated by trace-norm and l1,2-norm to capture task relatedness and specificity, respectively. Moreover, the optimization problems of RMTL can be efficiently solved by the inexact augmented Lagrangian method (IALM). Compared with several state-of-the-art feature extraction and classification methods, the experimental results conducted on three benchmark data sets demonstrate the superiority of the proposed methods.
Entanglement entropy for (3+1)-dimensional topological order with excitations
NASA Astrophysics Data System (ADS)
Wen, Xueda; He, Huan; Tiwari, Apoorv; Zheng, Yunqin; Ye, Peng
2018-02-01
Excitations in (3+1)-dimensional [(3+1)D] topologically ordered phases have very rich structures. (3+1)D topological phases support both pointlike and stringlike excitations, and in particular the loop (closed string) excitations may admit knotted and linked structures. In this work, we ask the following question: How do different types of topological excitations contribute to the entanglement entropy or, alternatively, can we use the entanglement entropy to detect the structure of excitations, and further obtain the information of the underlying topological order? We are mainly interested in (3+1)D topological order that can be realized in Dijkgraaf-Witten (DW) gauge theories, which are labeled by a finite group G and its group 4-cocycle ω ∈ H4[G; U(1)] up to group automorphisms. We find that each topological excitation contributes a universal constant ln di to the entanglement entropy, where di is the quantum dimension that depends on both the structure of the excitation and the data (G, ω). The entanglement entropy of the excitations of the linked/unlinked topology can capture different information of the DW theory (G, ω). In particular, the entanglement entropy introduced by Hopf-link loop excitations can distinguish certain group 4-cocycles ω from the others.
Wavefront coding for fast, high-resolution light-sheet microscopy (Conference Presentation)
NASA Astrophysics Data System (ADS)
Olarte, Omar E.; Licea-Rodriguez, Jacob; Loza-Alvarez, Pablo
2017-02-01
Some biological experiments demand the observation of dynamic processes in 3D with high spatiotemporal resolution. The use of wavefront coding to extend the depth-of-field (DOF) of the collection arm of a light-sheet microscope is an interesting alternative for fast 3D imaging. Under this scheme, the 3D features of the sample are captured at high volumetric rates while the light sheet is swept rapidly within the extended DOF. The DOF is extended by coding the pupil function of the imaging lens using a custom-designed phase mask. A posterior restoration step is required to decode the information of the captured images based on the applied phase mask [1]. This hybrid optical-digital approach is known as wavefront coding (WFC). Previously, we have demonstrated this method for performing fast 3D imaging of biological samples at medium resolution [2]. In this work, we present the extension of this approach to high-resolution microscopes. Under these conditions, the effective DOF of a standard high-NA objective is of a few micrometers. Here we demonstrate that by the use of WFC, we can extend the DOF by more than one order of magnitude while keeping high-resolution imaging. This is demonstrated for two designed phase masks using zebrafish and C. elegans samples. [1] Olarte, O.E., Andilla, J., Artigas, D., and Loza-Alvarez, P., "Decoupled Illumination-Detection Microscopy. Selected Optics in Year 2015," in Optics and Photonics News 26, p. 41 (2015). [2] Olarte, O.E., Andilla, J., Artigas, D., and Loza-Alvarez, P., "Decoupled illumination detection in light sheet microscopy for fast volumetric imaging," Optica 2(8), 702 (2015).
Dubois, Fanny; Vandermoere, Franck; Gernez, Aurélie; Murphy, Jane; Toth, Rachel; Chen, Shuai; Geraghty, Kathryn M; Morrice, Nick A; MacKintosh, Carol
2009-11-01
We devised a strategy of 14-3-3 affinity capture and release, isotope differential (d(0)/d(4)) dimethyl labeling of tryptic digests, and phosphopeptide characterization to identify novel targets of insulin/IGF1/phosphatidylinositol 3-kinase signaling. Notably four known insulin-regulated proteins (PFK-2, PRAS40, AS160, and MYO1C) had high d(0)/d(4) values meaning that they were more highly represented among 14-3-3-binding proteins from insulin-stimulated than unstimulated cells. Among novel candidates, insulin receptor substrate 2, the proapoptotic CCDC6, E3 ubiquitin ligase ZNRF2, and signaling adapter SASH1 were confirmed to bind to 14-3-3s in response to IGF1/phosphatidylinositol 3-kinase signaling. Insulin receptor substrate 2, ZNRF2, and SASH1 were also regulated by phorbol ester via p90RSK, whereas CCDC6 and PRAS40 were not. In contrast, the actin-associated protein vasodilator-stimulated phosphoprotein and lipolysis-stimulated lipoprotein receptor, which had low d(0)/d(4) scores, bound 14-3-3s irrespective of IGF1 and phorbol ester. Phosphorylated Ser(19) of ZNRF2 (RTRAYpS(19)GS), phospho-Ser(90) of SASH1 (RKRRVpS(90)QD), and phospho-Ser(493) of lipolysis-stimulated lipoprotein receptor (RPRARpS(493)LD) provide one of the 14-3-3-binding sites on each of these proteins. Differential 14-3-3 capture provides a powerful approach to defining downstream regulatory mechanisms for specific signaling pathways.
Free viewpoint TV and its international standardization
NASA Astrophysics Data System (ADS)
Tanimoto, Masayuki
2009-05-01
We have developed a new type of television, FTV (Free-viewpoint TV). FTV is an innovative visual medium that enables us to view a 3D scene by freely changing our viewpoint. We proposed the concept of FTV and constructed the world's first real-time system including the complete chain of operation from image capture to display. We also realized FTV on a single PC and FTV with free listening-point audio. FTV is based on the ray-space method, which represents one ray in real space with one point in the ray space. We have also developed new types of ray-capture and display technologies, such as a 360-degree mirror-scan ray-capturing system and a 360-degree ray-reproducing display. MPEG regarded FTV as the most challenging 3D medium and started international standardization activities for FTV. The first phase of FTV is MVC (Multi-view Video Coding) and the second phase is 3DV (3D Video). MVC was completed in March 2009. 3DV is a standard that targets serving a variety of 3D displays. It will be completed within the next two years.
Validation of 3D multimodality roadmapping in interventional neuroradiology
NASA Astrophysics Data System (ADS)
Ruijters, Daniel; Homan, Robert; Mielekamp, Peter; van de Haar, Peter; Babic, Drazenko
2011-08-01
Three-dimensional multimodality roadmapping is entering clinical routine utilization for neuro-vascular treatment. Its purpose is to navigate intra-arterial and intra-venous endovascular devices through complex vascular anatomy by fusing pre-operative computed tomography (CT) or magnetic resonance (MR) with the live fluoroscopy image. The fused image presents the real-time position of the intra-vascular devices together with the patient's 3D vascular morphology and its soft-tissue context. This paper investigates the effectiveness, accuracy, robustness and computation times of the described methods in order to assess their suitability for the intended clinical purpose: accurate interventional navigation. The mutual information-based 3D-3D registration proved to be of sub-voxel accuracy and yielded an average registration error of 0.515 mm and the live machine-based 2D-3D registration delivered an average error of less than 0.2 mm. The capture range of the image-based 3D-3D registration was investigated to characterize its robustness, and yielded an extent of 35 mm and 25° for >80% of the datasets for registration of 3D rotational angiography (3DRA) with CT, and 15 mm and 20° for >80% of the datasets for registration of 3DRA with MR data. The image-based 3D-3D registration could be computed within 8 s, while applying the machine-based 2D-3D registration only took 1.5 µs, which makes them very suitable for interventional use.
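The image-based 3D-3D registration metric referred to here, mutual information, can be sketched in a few lines of numpy (shown on 2D arrays for brevity; the interventional implementation operates on full volumes):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two equally-shaped images, from their
    joint intensity histogram: MI = sum p(x,y) log(p(x,y) / (p(x) p(y)))."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
aligned = mutual_information(img, img)                       # identical
shifted = mutual_information(img, np.roll(img, 8, axis=0))   # misaligned
```

A registration optimizer varies the pose to maximize this score; MI peaks at correct alignment, which is what makes the metric suitable across modalities (3DRA vs. CT or MR).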
Light field imaging and application analysis in THz
NASA Astrophysics Data System (ADS)
Zhang, Hongfei; Su, Bo; He, Jingsuo; Zhang, Cong; Wu, Yaxiong; Zhang, Shengbo; Zhang, Cunlin
2018-01-01
The light field includes both direction and location information. Light-field imaging can capture the whole light field in a single exposure. The four-dimensional light-field function model with a two-plane parameterization, proposed by Levoy, is adopted for the light field. Acquisition of the light field is based on microlens arrays, camera arrays or masks. We process the light-field data to synthesize light-field images. The processing techniques for light-field data include refocused rendering, synthetic aperture, and microscopic imaging. Introducing light-field imaging into the THz band makes 3D imaging more efficient than conventional THz 3D imaging technology. Its advantages over visible light-field imaging include a large depth of field, a wide dynamic range and true three-dimensional imaging. It has broad application prospects.
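The refocused rendering mentioned above can be sketched with a toy shift-and-add routine on a two-plane light field L[u, v, s, t] (a generic illustration of the principle, not the THz system's actual pipeline):

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add refocusing of a 4D light field L[u, v, s, t].
    Each sub-aperture image is shifted in (s, t) proportionally to its
    (u, v) aperture position and the refocus parameter, then averaged."""
    U, V, S, T = lightfield.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)
```

Objects whose disparity per aperture step matches `alpha` add up coherently and appear sharp; everything else is averaged out as blur.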
Engel, Lidia; Mortimer, Duncan; Bryan, Stirling; Lear, Scott A; Whitehurst, David G T
2017-07-01
The ICEpop CAPability measure for Adults (ICECAP-A) is a measure of capability wellbeing developed for use in economic evaluations. It was designed to overcome perceived limitations associated with existing preference-based instruments, where the explicit focus on health-related aspects of quality of life may result in the failure to capture fully the broader benefits of interventions and treatments that go beyond health. The aim of this study was to investigate the extent to which preference-based health-related quality of life (HRQoL) instruments are able to capture aspects of capability wellbeing, as measured by the ICECAP-A. Using data from the Multi Instrument Comparison project, pairwise exploratory factor analyses were conducted to compare the ICECAP-A with five preference-based HRQoL instruments [15D, Assessment of Quality of Life 8-dimension (AQoL-8D), EQ-5D-5L, Health Utilities Index Mark 3 (HUI-3), and SF-6D]. Data from 6756 individuals were used in the analyses. The ICECAP-A provides information above that garnered from most commonly used preference-based HRQoL instruments. The exception was the AQoL-8D; more common factors were identified between the ICECAP-A and AQoL-8D compared with the other pairwise analyses. Further investigations are needed to explore the extent and potential implications of 'double counting' when applying the ICECAP-A alongside health-related preference-based instruments.
Interpretation and mapping of geological features using mobile devices for 3D outcrop modelling
NASA Astrophysics Data System (ADS)
Buckley, Simon J.; Kehl, Christian; Mullins, James R.; Howell, John A.
2016-04-01
Advances in 3D digital geometric characterisation have resulted in widespread adoption in recent years, with photorealistic models utilised for interpretation, quantitative and qualitative analysis, as well as education, in an increasingly diverse range of geoscience applications. Topographic models created using lidar and photogrammetry, optionally combined with imagery from sensors such as hyperspectral and thermal cameras, are now becoming commonplace in geoscientific research. Mobile devices (tablets and smartphones) are maturing rapidly to become powerful field computers capable of displaying and interpreting 3D models directly in the field. With increasingly high-quality digital image capture, combined with on-board sensor pose estimation, mobile devices are, in addition, a source of primary data, which can be employed to enhance existing geological models. Adding supplementary image textures and 2D annotations to photorealistic models is therefore a desirable next step to complement conventional field geoscience. This contribution reports on research into field-based interpretation and conceptual sketching on images and photorealistic models on mobile devices, motivated by the desire to utilise digital outcrop models to generate high quality training images (TIs) for multipoint statistics (MPS) property modelling. Representative training images define sedimentological concepts and spatial relationships between elements in the system, which are subsequently modelled using artificial learning to populate geocellular models. Photorealistic outcrop models are underused sources of quantitative and qualitative information for generating TIs, explored further in this research by linking field and office workflows through the mobile device. Existing textured models are loaded to the mobile device, allowing rendering in a 3D environment. 
Because interpretation in 2D is more familiar and comfortable for users, the developed application allows new images to be captured with the device's digital camera, and an interface is available for annotating (interpreting) the image using lines and polygons. Image-to-geometry registration is then performed using a developed algorithm, initialised using the coarse pose from the on-board orientation and positioning sensors. The annotations made on the captured images are then available in the 3D model coordinate system for overlay and export. This workflow allows geologists to make interpretations and conceptual models in the field, which can then be linked to and refined in office workflows for later MPS property modelling.
Jaramillo, Carlos; Valenti, Roberto G.; Guo, Ling; Xiao, Jizhong
2016-01-01
We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor’s projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis of various system characteristics, such as size, catadioptric spatial resolution, and field-of-view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) resulting from a single image captured in a real-life experiment. We expect our sensor to be reproducible, as its model parameters can be optimized to satisfy other catadioptric-based omnistereo vision applications under different circumstances. PMID:26861351
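Triangulation of back-projected rays, as used here for 3D point computation, is commonly done via the midpoint of the common perpendicular of two rays; a minimal numpy sketch (a standard construction, not the authors' implementation):

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint of the shortest segment between rays o1 + t*d1 and o2 + s*d2."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve for t, s minimizing |(o1 + t d1) - (o2 + s d2)|^2.
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = o1 - o2
    denom = a*c - b*b          # zero only for parallel rays
    t = (b*(d2 @ w) - c*(d1 @ w)) / denom
    s = (a*(d2 @ w) - b*(d1 @ w)) / denom
    return 0.5*((o1 + t*d1) + (o2 + s*d2))
```

With noisy rays the two closest points no longer coincide, and the spread of that segment is one natural input to the kind of probabilistic uncertainty model the abstract describes.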
ARCHAEO-SCAN: Portable 3D shape measurement system for archaeological field work
NASA Astrophysics Data System (ADS)
Knopf, George K.; Nelson, Andrew J.
2004-10-01
Accurate measurement and thorough documentation of excavated artifacts are the essential tasks of archaeological fieldwork. The on-site recording and long-term preservation of fragile evidence can be improved using 3D spatial data acquisition and computer-aided modeling technologies. Once the artifact is digitized and geometry created in a virtual environment, the scientist can manipulate the pieces in a virtual reality environment to develop a "realistic" reconstruction of the object without physically handling or gluing the fragments. The ARCHAEO-SCAN system is a flexible, affordable 3D coordinate data acquisition and geometric modeling system for acquiring surface and shape information of small to medium sized artifacts and bone fragments. The shape measurement system is being developed to enable the field archaeologist to manually sweep the non-contact sensor head across the relic or artifact surface. A series of unique data acquisition, processing, registration and surface reconstruction algorithms are then used to integrate 3D coordinate information from multiple views into a single reference frame. A novel technique for automatically creating a hexahedral mesh of the recovered fragments is presented. The 3D model acquisition system is designed to operate from a standard laptop with minimal additional hardware and proprietary software support. The captured shape data can be pre-processed and displayed on site, stored digitally on a CD, or transmitted via the Internet to the researcher's home institution.
Natural 3D content on glasses-free light-field 3D cinema
NASA Astrophysics Data System (ADS)
Balogh, Tibor; Nagy, Zsolt; Kovács, Péter Tamás.; Adhikarla, Vamsi K.
2013-03-01
This paper presents a complete framework for capturing, processing and displaying free-viewpoint video on a large-scale immersive light-field display. We present a combined hardware-software solution to visualize free-viewpoint 3D video on a cinema-sized screen. The new glasses-free 3D projection technology can support a larger audience than existing autostereoscopic displays. We introduce and describe our new display system, including optical and mechanical design considerations, the capturing system and render cluster producing the 3D content, and the various software modules driving the system. The indigenous display is the first of its kind, equipped with front-projection light-field HoloVizio technology controlling up to 63 MP. It has all the advantages of previous light-field displays and, in addition, allows a more flexible arrangement with a larger screen size, matching cinema or meeting-room geometries, yet is simpler to set up. The software system makes it possible to show 3D applications in real time, in addition to natural content captured from dense camera arrangements or from sparse cameras covering a wider baseline. Our software system, on the GPU-accelerated render cluster, can also visualize pre-recorded Multi-view Video plus Depth (MVD4) videos on this glasses-free light-field cinema system, interpolating and extrapolating missing views.
Real-time tricolor phase measuring profilometry based on CCD sensitivity calibration
NASA Astrophysics Data System (ADS)
Zhu, Lin; Cao, Yiping; He, Dawu; Chen, Cheng
2017-02-01
A real-time tricolor phase measuring profilometry (RTPMP) method based on charge-coupled device (CCD) sensitivity calibration is proposed. Only one colour fringe pattern is needed, whose red (R), green (G) and blue (B) components are respectively coded as three sinusoidal phase-shifting gratings with an equivalent shifting phase of 2π/3; it is sent to an appointed flash memory on a specialized digital light projector (SDLP). A specialized time-division-multiplexing timing sequence actively controls the SDLP to project the fringe patterns in the R, G and B channels sequentially onto the measured object within one seventy-second of a second, and meanwhile actively controls a high-frame-rate monochrome CCD camera to capture the corresponding deformed patterns synchronously with the SDLP. Thus, sufficient information for reconstructing the three-dimensional (3D) shape is obtained within one twenty-fourth of a second. Because of the CCD camera's different spectral sensitivities to the R, G and B lights, the captured deformed patterns from the R, G and B channels do not share the same peak and valley, which leads to lower accuracy or even failure to reconstruct the 3D shape. A deformed-pattern amending method based on CCD sensitivity calibration is therefore developed to guarantee accurate 3D reconstruction. The experimental results verify the feasibility of the proposed RTPMP method, which can obtain the 3D shape at over the video frame rate of 24 frames per second, avoids colour crosstalk completely, and is effective for measuring objects that change in real time.
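For the three 2π/3-shifted gratings described above, the wrapped phase follows from the standard three-step phase-shifting formula; a minimal numpy sketch (the formula is generic, not specific to the RTPMP hardware):

```python
import numpy as np

def three_step_phase(i1, i2, i3):
    """Wrapped phase from three fringe images I_k = A + B cos(phi + d_k)
    with phase shifts d_k = -2*pi/3, 0, +2*pi/3:
        phi = atan2(sqrt(3) * (I1 - I3), 2*I2 - I1 - I3)
    The background A and modulation B cancel out of the ratio."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0*i2 - i1 - i3)
```

After this per-pixel step, the wrapped phase is unwrapped and mapped to height via the system calibration; the channel-sensitivity amendment in the paper exists precisely so the three intensities share common A and B.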
NASA Astrophysics Data System (ADS)
Welch, Kyle; Kumar, Santosh; Hong, Jiarong; Cheng, Xiang
2017-11-01
Understanding the 3D flow induced by microswimmers is paramount to revealing how they interact with each other and their environment. While many studies have measured 2D projections of flow fields around single microorganisms, reliable 3D measurement remains elusive due to the difficulty of imaging fast 3D fluid flows at submicron spatial and millisecond temporal scales. Here, we present a precision measurement of the 3D flow field induced by motile planktonic algae cells, Chlamydomonas reinhardtii. We manually capture and hold stationary a single alga using a micropipette, while still allowing it to beat its flagella in the breaststroke pattern characteristic of C. reinhardtii. The 3D flow field around the alga is then tracked by employing fast holographic imaging on 1 µm tracer particles, which leads to a spatial resolution of 100 nm along the optical axis and 40 nm in the imaging plane normal to the optical axis. We image the flow around a single alga continuously through thousands of flagellar beat cycles and aggregate these data into a complete 3D flow field. Our study demonstrates the power of holography in imaging fast, complex microscopic flow structures and provides crucial information for understanding the detailed locomotion of swimming microorganisms.
NASA Astrophysics Data System (ADS)
Xu, Ye; Lee, Michael C.; Boroczky, Lilla; Cann, Aaron D.; Borczuk, Alain C.; Kawut, Steven M.; Powell, Charles A.
2009-02-01
Features calculated from different dimensions of images capture quantitative information about lung nodules through one or multiple image slices. Previously published computer-aided diagnosis (CADx) systems have used either two-dimensional (2D) or three-dimensional (3D) features, though there has been little systematic analysis of the relevance of the different dimensions or of the impact of combining them. The aim of this study is to determine the importance of combining features calculated in different dimensions. We performed CADx experiments on 125 pulmonary nodules imaged using multi-detector row CT (MDCT). The CADx system computed 192 2D, 2.5D, and 3D image features of the lesions. Leave-one-out experiments were performed using five different combinations of features from different dimensions: 2D, 3D, 2.5D, 2D+3D, and 2D+3D+2.5D. The experiments were performed ten times for each group. Accuracy, sensitivity and specificity were used to evaluate the performance. Wilcoxon signed-rank tests were applied to compare the classification results from these five combinations of features. Our results showed that 3D image features generate the best result compared with the other combinations of features. This suggests one approach to potentially reducing the dimensionality of the CADx data space and the computational complexity of the system while maintaining diagnostic accuracy.
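A leave-one-out comparison of feature groups like the one described can be sketched with a simple 1-nearest-neighbour classifier on synthetic data (an illustration of the evaluation protocol only; the paper's classifier and features are not specified here, and the "informative" vs. "noisy" feature sets below are invented):

```python
import numpy as np

def loo_accuracy(features, labels):
    """Leave-one-out accuracy of a 1-nearest-neighbour classifier."""
    n = len(labels)
    correct = 0
    for i in range(n):
        d = np.linalg.norm(features - features[i], axis=1)
        d[i] = np.inf                      # exclude the held-out case
        correct += labels[np.argmin(d)] == labels[i]
    return correct / n

rng = np.random.default_rng(1)
labels = np.repeat([0, 1], 40)
informative = labels[:, None] + 0.3*rng.standard_normal((80, 2))
noise = 5.0*rng.standard_normal((80, 2))   # uninformative extra features
acc_good = loo_accuracy(informative, labels)
acc_mixed = loo_accuracy(np.hstack([informative, noise]), labels)
```

Running each feature combination through such a loop yields paired per-case results, which is what the Wilcoxon signed-rank test then compares across combinations.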
Reaction μ⁻ + ⁶Li → ³H + ³H + ν_μ and the axial current form factor in the timelike region
NASA Astrophysics Data System (ADS)
Mintz, S. L.
1983-09-01
The differential muon-capture rate dΓ/dE_T is obtained for the reaction μ⁻ + ⁶Li → ³H + ³H + ν_μ over the allowed range of E_T, the tritium energy, for two assumptions concerning the behavior of F_A, the axial current form factor, in the timelike region: analytic continuation from the spacelike region, and mirror behavior, F_A(q², timelike) = F_A(q², spacelike). The values of dΓ/dE_T under these two assumptions are found to vary substantially in the timelike region as a function of the mass M_A in the dipole fit to F_A. Values of dΓ/dE_T are given for M_A² = 2m_π², 4.95m_π², and 8m_π². NUCLEAR REACTIONS: Muon capture ⁶Li(μ⁻, ν_μ)³H³H; Γ, dΓ/dE_T calculated for two assumptions concerning the axial current form factor behavior in the timelike region.
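For reference, the dipole fit mentioned here has the standard one-parameter form (a sketch of the conventional parameterization; the abstract itself does not spell out the normalization):

```latex
% Dipole parameterization of the axial form factor, with fit mass M_A:
F_A(q^2) = \frac{F_A(0)}{\left(1 - q^2/M_A^2\right)^{2}}
% The two assumptions compared are (i) analytic continuation of this
% form into the timelike region and (ii) the mirror assumption
% F_A(q^2,\ \text{timelike}) = F_A(q^2,\ \text{spacelike}).
```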
Rectification of curved document images based on single view three-dimensional reconstruction.
Kang, Lai; Wei, Yingmei; Jiang, Jie; Bai, Liang; Lao, Songyang
2016-10-01
Since distortions in camera-captured document images significantly affect the accuracy of optical character recognition (OCR), distortion removal plays a critical role for document digitalization systems using a camera for image capturing. This paper proposes a novel framework that performs three-dimensional (3D) reconstruction and rectification of camera-captured document images. While most existing methods rely on additional calibrated hardware or multiple images to recover the 3D shape of a document page, or make a simple but not always valid assumption on the corresponding 3D shape, our framework is more flexible and practical since it only requires a single input image and is able to handle a general locally smooth document surface. The main contributions of this paper include a new iterative refinement scheme for baseline fitting from connected components of text line, an efficient discrete vertical text direction estimation algorithm based on convex hull projection profile analysis, and a 2D distortion grid construction method based on text direction function estimation using 3D regularization. In order to examine the performance of our proposed method, both qualitative and quantitative evaluation and comparison with several recent methods are conducted in our experiments. The experimental results demonstrate that the proposed method outperforms relevant approaches for camera-captured document image rectification, in terms of improvements on both visual distortion removal and OCR accuracy.
NASA Astrophysics Data System (ADS)
Themistocleous, K.; Agapiou, A.; Hadjimitsis, D.
2016-10-01
The documentation of architectural cultural heritage sites has traditionally been expensive and labor-intensive. New innovative technologies, such as Unmanned Aerial Vehicles (UAVs), provide an affordable, reliable and straightforward method of capturing cultural heritage sites, thereby providing a more efficient and sustainable approach to the documentation of cultural heritage structures. In this study, hundreds of images of the Panagia Chryseleousa church in Foinikaria, Cyprus were taken using a UAV with an attached high-resolution camera. The images were processed to generate an accurate digital 3D model using Structure from Motion techniques. A Building Information Model (BIM) was then used to generate drawings of the church. The methodology described in the paper provides an accurate, simple and cost-effective method of documenting cultural heritage sites and generating digital 3D models using novel techniques and innovative methods.
Fully automated three-dimensional microscopy system
NASA Astrophysics Data System (ADS)
Kerschmann, Russell L.
2000-04-01
Tissue-scale structures such as vessel networks are imaged at micron resolution with the Virtual Tissue System (VT System). VT System imaging of cubic millimeters of tissue and other material extends the capabilities of conventional volumetric techniques such as confocal microscopy, and allows for the first time the integrated 2D and 3D analysis of important tissue structural relationships. The VT System eliminates the need for glass-slide-mounted tissue sections and instead captures images directly from the surface of a block containing a sample. Tissues are stained en bloc with fluorochrome compounds, embedded in an optically conditioned polymer that suppresses image signals from deep within the block, and serially sectioned for imaging. Thousands of fully registered 2D images are automatically captured digitally to completely convert tissue samples into blocks of high-resolution information. The resulting multi-gigabyte data sets constitute the raw material for precision visualization and analysis. Cellular function may be seen in a larger anatomical context. VT System technology makes tissue metrics, accurate cell enumeration and cell-cycle analyses possible while preserving the full histologic setting.
Creating 3D models of historical buildings using geospatial data
NASA Astrophysics Data System (ADS)
Alionescu, Adrian; Bǎlǎ, Alina Corina; Brebu, Floarea Maria; Moscovici, Anca-Maria
2017-07-01
Recently, much interest has been shown in understanding real-world objects by acquiring 3D images of them using laser scanning technology and panoramic images. A realistic impression of geometric 3D data can be generated by draping real colour textures captured simultaneously by a colour camera. In this context, a new concept of geospatial data acquisition, based on panoramic images, has rapidly revolutionized the method of determining the spatial position of objects. This article describes an approach that combines terrestrial laser scanning with panoramic images captured with Trimble V10 Imaging Rover technology to enhance the detail and realism of the geospatial data set, in order to obtain 3D urban plans and virtual reality applications.
Sensor fusion of cameras and a laser for city-scale 3D reconstruction.
Bok, Yunsu; Choi, Dong-Geol; Kweon, In So
2014-11-04
This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near-2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate.
NASA Astrophysics Data System (ADS)
Ai, Lingyu; Kim, Eun-Soo
2018-03-01
We propose a method for refocusing-range and image-quality enhanced optical reconstruction of three-dimensional (3-D) objects from integral images, using only a 3 × 3 periodic δ-function array (PDFA), which is called a principal PDFA (P-PDFA). By directly convolving the elemental image array (EIA) captured from 3-D objects with P-PDFAs whose spatial periods correspond to each object's depth, a set of spatially filtered EIAs (SF-EIAs) is extracted, from which 3-D objects can be reconstructed refocused at their real depths. Since the convolutional operations are performed directly on each of the minimum 3 × 3 EIs of the picked-up EIA, the capturing and refocused-depth ranges of 3-D objects can be greatly enhanced, and 3-D objects with much improved image quality can be reconstructed without any preprocessing operations. Through ray-optical analysis and optical experiments with actual 3-D objects, the feasibility of the proposed method has been confirmed.
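The central operation, convolving an image with a periodic δ-function array, reduces to summing copies of the image shifted by multiples of the spatial period; a minimal numpy sketch (the period and array sizes are illustrative, not the paper's values):

```python
import numpy as np

def periodic_delta_kernel(size, period):
    """3x3 array of unit impulses spaced `period` pixels apart."""
    k = np.zeros((size, size))
    c = size // 2
    for i in (-1, 0, 1):
        for j in (-1, 0, 1):
            k[c + i*period, c + j*period] = 1.0
    return k

def convolve_pdfa(image, period):
    """Convolution with the 3x3 PDFA, written as its shift-and-sum
    equivalent: copies of the image shifted by multiples of `period`."""
    out = np.zeros_like(image)
    for i in (-1, 0, 1):
        for j in (-1, 0, 1):
            out += np.roll(image, (i*period, j*period), axis=(0, 1))
    return out
```

When `period` matches the elemental-image disparity of a given depth, contributions from neighbouring elemental images reinforce each other, which is the depth-selective filtering the abstract describes.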
High-throughput screening of metal-porphyrin-like graphenes for selective capture of carbon dioxide
Bae, Hyeonhu; Park, Minwoo; Jang, Byungryul; Kang, Yura; Park, Jinwoo; Lee, Hosik; Chung, Haegeun; Chung, ChiHye; Hong, Suklyun; Kwon, Yongkyung; Yakobson, Boris I.; Lee, Hoonkyung
2016-01-01
Nanostructured materials, such as zeolites and metal-organic frameworks, have been considered to capture CO2. However, their application has been limited largely because they exhibit poor selectivity for flue gases and low capture capacity under low pressures. We perform a high-throughput screening for selective CO2 capture from flue gases by using first principles thermodynamics. We find that elements with empty d orbitals selectively attract CO2 from gaseous mixtures under low CO2 pressures (~10⁻³ bar) at 300 K and release it at ~450 K. CO2 binding to elements involves hybridization of the metal d orbitals with the CO2 π orbitals and CO2-transition metal complexes were observed in experiments. This result allows us to perform high-throughput screening to discover novel promising CO2 capture materials with empty d orbitals (e.g., Sc- or V-porphyrin-like graphene) and predict their capture performance under various conditions. Moreover, these findings provide physical insights into selective CO2 capture and open a new path to explore CO2 capture materials. PMID:26902156
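The kind of thermodynamic capture/release behaviour described (binding at ~10⁻³ bar and 300 K, release at ~450 K) can be illustrated with a Langmuir-type site-occupancy model; the binding energy and reference chemical potential below are placeholder values for illustration, not the paper's computed ones:

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

def occupancy(e_ad, temperature, pressure, mu0=-0.6):
    """Equilibrium occupancy of an adsorption site.
    e_ad: CO2 binding energy to the site in eV (more negative = stronger).
    mu0:  illustrative reference chemical potential of CO2 gas at 1 bar (eV).
    The gas chemical potential shifts as mu = mu0 + kT ln(p / 1 bar)."""
    mu = mu0 + K_B*temperature*np.log(pressure)
    return 1.0 / (1.0 + np.exp((e_ad - mu) / (K_B*temperature)))
```

With these placeholder numbers a site with e_ad = -0.8 eV is mostly occupied at 300 K and 10⁻³ bar but releases its CO2 on heating to 450 K, mirroring the capture/release window the screening targets.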
Video-based convolutional neural networks for activity recognition from robot-centric videos
NASA Astrophysics Data System (ADS)
Ryoo, M. S.; Matthies, Larry
2016-05-01
In this evaluation paper, we discuss convolutional neural network (CNN)-based approaches for human activity recognition. In particular, we investigate CNN architectures designed to capture temporal information in videos and their application to the human activity recognition problem. There have been multiple previous works using CNN features for videos. These include CNNs using 3-D XYT convolutional filters, CNNs using pooling operations on top of per-frame image-based CNN descriptors, and recurrent neural networks that learn temporal changes in per-frame CNN descriptors. We experimentally compare some of these representative CNNs using first-person human activity videos. We especially focus on videos from a robot's viewpoint, captured during its operations and human-robot interactions.
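A 3-D XYT convolution, the first of the representations listed, can be sketched directly in numpy (a 'valid'-mode cross-correlation, which is what CNN frameworks call convolution; sizes are illustrative):

```python
import numpy as np

def conv3d_xyt(video, kernel):
    """'Valid'-mode 3-D cross-correlation of a video volume (T, H, W)
    with an XYT filter (t, h, w) -- the core op of a 3-D CNN layer,
    mixing spatial and temporal neighbourhoods in one pass."""
    T, H, W = video.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(video[i:i+t, j:j+h, k:k+w] * kernel)
    return out
```

The temporal extent `t` is what lets such filters respond to motion patterns rather than single-frame appearance, in contrast to the per-frame pooling and recurrent alternatives the paper compares.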
Novel fully integrated computer system for custom footwear: from 3D digitization to manufacturing
NASA Astrophysics Data System (ADS)
Houle, Pascal-Simon; Beaulieu, Eric; Liu, Zhaoheng
1998-03-01
This paper presents a recently developed custom footwear system, which integrates 3D digitization technology, range image fusion techniques, a 3D graphical environment for corrective actions, parametric curved surface representation and computer numerical control (CNC) machining. In this system, a support designed with the help of biomechanics experts stabilizes the foot in a correct, neutral position. The foot surface is then captured by a 3D camera using active ranging techniques. Software using a library of documented foot pathologies suggests corrective actions on the orthosis. Three kinds of deformations can be applied. The first method maps pad surfaces, previously scanned with our 3D scanner, onto the foot surface to locally modify its shape. The second is the construction of B-spline surfaces by manipulating control points and modifying knot vectors in a 3D graphical environment to build the desired deformation. The last is a manual electronic 3D pen, which may have different shapes and sizes and carries adjustable 'pressure' information. All applied deformations must respect G1 surface continuity, which ensures that the surface can accommodate a foot. Once the surface modification process is completed, the resulting data are sent to manufacturing software for CNC machining.
NASA Astrophysics Data System (ADS)
Merrill, Daniel; An, Ran; Sun, Hao; Yakubov, Bakhtiyor; Matei, Daniela; Turek, John; Nolte, David
2016-01-01
Three-dimensional (3D) tissue cultures are replacing conventional two-dimensional (2D) cultures for applications in cancer drug development. However, direct comparisons of in vitro 3D models relative to in vivo models derived from the same cell lines have not been reported because of the lack of sensitive optical probes that can extract high-content information from deep inside living tissue. Here we report the use of biodynamic imaging (BDI) to measure response to platinum in 3D living tissue. BDI combines low-coherence digital holography with intracellular Doppler spectroscopy to study tumor drug response. Human ovarian cancer cell lines were grown either in vitro as 3D multicellular monoculture spheroids or as xenografts in nude mice. Fragments of xenografts grown in vivo in nude mice from a platinum-sensitive human ovarian cell line showed rapid and dramatic signatures of induced cell death when exposed to platinum ex vivo, while the corresponding 3D multicellular spheroids grown in vitro showed negligible response. The differences in drug response between in vivo and in vitro growth have important implications for predicting chemotherapeutic response using tumor biopsies from patients or patient-derived xenografts.
Fortuny, Josep; Marcé-Nogué, Jordi; Heiss, Egon; Sanchez, Montserrat; Gil, Lluis; Galobart, Àngel
2015-01-01
Biting is an integral feature of the feeding mechanism for aquatic and terrestrial salamanders to capture, fix or immobilize elusive or struggling prey. However, little information is available on how it works and on the functional implications of this biting system in amphibians, although such approaches might be essential to understanding the feeding systems of early tetrapods. Herein, the skull biomechanics of the Chinese giant salamander, Andrias davidianus, is investigated using 3D finite element analysis. The results reveal that the prey contact position is crucial for the structural performance of the skull, which is probably related to the lack of a bony bridge between the posterior end of the maxilla and the anterior quadrato-squamosal region. Giant salamanders perform asymmetrical strikes. These strikes are an unusual and specialized behavior but might indeed be beneficial in such sit-and-wait or ambush predators for capturing laterally approaching prey. However, once captured by an asymmetrical strike, large, elusive and struggling prey have to be brought to the anterior jaw region to be subdued by a strong bite. Given their basal position within extant salamanders and their “conservative” morphology, cryptobranchids may be useful models for reconstructing the feeding ecology and biomechanics of different early tetrapods and amphibians with similar osteological and myological constraints. PMID:25853557
Performance Characteristics of a Kernel-Space Packet Capture Module
2010-03-01
AFIT/GCO/ENG/10-03. This thesis presents the design, development, and comparative performance analysis of a kernel-space packet capture module. The prototype requires no changes to kernel code and can be used for both user-space and kernel-space capture applications, enabling a controlled comparative performance analysis.
Personal photograph enhancement using internet photo collections.
Zhang, Chenxi; Gao, Jizhou; Wang, Oliver; Georgel, Pierre; Yang, Ruigang; Davis, James; Frahm, Jan-Michael; Pollefeys, Marc
2014-02-01
Given the growth of Internet photo collections, we now have a visual index of all major cities and tourist sites in the world. However, it is still a difficult task to capture that perfect shot with your own camera when visiting these places, especially when your camera itself has limitations, such as a limited field of view. In this paper, we propose a framework to overcome the imperfections of personal photographs of tourist sites using the rich information provided by large-scale Internet photo collections. Our method deploys state-of-the-art techniques for constructing initial 3D models from photo collections. The same techniques are then used to register personal photographs to these models, allowing us to augment personal 2D images with 3D information. This strong available scene prior allows us to address a number of traditionally challenging image enhancement techniques and achieve high-quality results using simple and robust algorithms. Specifically, we demonstrate automatic foreground segmentation, mono-to-stereo conversion, field-of-view expansion, photometric enhancement, and additionally automatic annotation with geolocation and tags. Our method clearly demonstrates some possible benefits of employing the rich information contained in online photo databases to efficiently enhance and augment one's own personal photographs.
Maciel, A S; Araújo, J V; Campos, A K; Benjamin, L A; Freitas, L G
2009-06-01
The interaction between the nematode-trapping fungus Duddingtonia flagrans (isolate CG768) and Ancylostoma spp. dog infective larvae (L(3)) was evaluated by means of scanning electron microscopy. Adhesive network trap formation was observed 6 h after the beginning of the interaction, and the capture of Ancylostoma spp. L(3) was observed 8 h after the inoculation of these larvae onto the cellulose membranes colonized by the fungus. Scanning electron micrographs were taken at 0, 12, 24, 36 and 48 h, where 0 is the time when Ancylostoma spp. L(3) were first captured by the fungus. Details of the capture structure formed by the fungus are described. Nematophagous fungus helper bacteria (NHB) were found at the interaction points between D. flagrans and Ancylostoma spp. L(3). Cuticle penetration by the differentiated fungal hyphae, with the exit of the nematode's internal contents, was observed 36 h after capture. Ancylostoma spp. L(3) were completely destroyed after 48 h of interaction with the fungus. The scanning electron microscopy technique was efficient for the study of this interaction, showing that the nematode-trapping fungus D. flagrans (isolate CG768) is a potential exterminator of Ancylostoma spp. L(3).
A multimodal 3D framework for fire characteristics estimation
NASA Astrophysics Data System (ADS)
Toulouse, T.; Rossi, L.; Akhloufi, M. A.; Pieri, A.; Maldague, X.
2018-02-01
In the last decade we have witnessed an increasing interest in using computer vision and image processing in forest fire research. Image processing techniques have been successfully used in different fire analysis areas such as early detection, monitoring, modeling and fire front characteristics estimation. While the majority of the work deals with the use of 2D visible-spectrum images, recent work has introduced the use of 3D vision in this field. This work proposes a new multimodal vision framework permitting the extraction of the three-dimensional geometrical characteristics of fires captured by multiple 3D vision systems. The 3D system is a multispectral stereo system operating in both the visible and near-infrared (NIR) spectral bands. The framework supports the use of multiple stereo pairs positioned so as to capture complementary views of the fire front during its propagation. Multimodal registration is conducted using the captured views in order to build a complete 3D model of the fire front. The registration process is achieved using multisensory fusion based on visual data (2D and NIR images), GPS positions and IMU inertial data. Experiments were conducted outdoors in order to show the performance of the proposed framework. The obtained results are promising and show the potential of using the proposed framework in operational scenarios for wildland fire research and as a decision-support system in fire fighting.
NASA Astrophysics Data System (ADS)
Bond, C. E.; Howell, J.; Butler, R.
2016-12-01
With an increase in flood and storm events affecting infrastructure, the role of weather systems in a changing climate, and their impact, is of increasing interest. Here we present a new workflow integrating crowd-sourced imagery from the public with UAV photogrammetry to create the first 3D hydrograph of a major flooding event. On December 30th 2015, Storm Frank brought high-magnitude rainfall to the Dee catchment in Aberdeenshire, resulting in the highest river level ever recorded for the Dee, with significant impact on infrastructure and river morphology. The worst of the flooding occurred during daylight hours and was digitally captured by the public on smart phones and cameras. After the flood event, a UAV was used to shoot photogrammetry and create a textured elevation model of the area around Aboyne Bridge on the River Dee. A media campaign solicited crowd-sourced digital imagery, resulting in over 1,000 images submitted by the public. The time and date in each image's EXIF data were used to sort the images into a time series. Markers within the images, such as signs, walls, fences and roads, were used to determine river level through the flood and were matched onto the elevation model to contour the change in river level. The resulting 3D hydrograph shows the build-up of water on the upstream side of the bridge that resulted in significant scouring and undermining during the flood. We have created the first known data-based 3D hydrograph for a river section, from a UAV photogrammetric model and crowd-sourced imagery. For future flood warning and infrastructure management, a solution that allows a real-time hydrograph to be created, using augmented reality to integrate the river level information in crowd-sourced imagery directly onto a 3D model, would significantly improve management planning and infrastructure resilience assessment.
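Sorting crowd-sourced images by their EXIF capture time is a small but central step of the workflow. A minimal sketch, with hypothetical filenames and timestamps standing in for the submitted images (EXIF's DateTimeOriginal tag stores time in the YYYY:MM:DD HH:MM:SS format):

```python
from datetime import datetime

# Hypothetical (filename, EXIF DateTimeOriginal) pairs for submitted images.
submissions = [
    ("bridge_03.jpg", "2015:12:30 14:05:11"),
    ("bridge_01.jpg", "2015:12:30 09:12:47"),
    ("bridge_02.jpg", "2015:12:30 11:30:02"),
]

def exif_key(item):
    # Parse the EXIF timestamp format into a sortable datetime.
    return datetime.strptime(item[1], "%Y:%m:%d %H:%M:%S")

time_series = sorted(submissions, key=exif_key)
print([name for name, _ in time_series])
# → ['bridge_01.jpg', 'bridge_02.jpg', 'bridge_03.jpg']
```

In practice the timestamps would be read from the image files themselves with an EXIF library; the sorted sequence then provides the time axis of the hydrograph.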
Sketching for Knowledge Capture: A Progress Report
2002-01-16
Keywords: sketch understanding, qualitative modeling, knowledge acquisition, analogy, diagrammatic reasoning, spatial reasoning. The main limits of sKEA's expressivity are (a) the predicate vocabulary in its knowledge base and (b) how natural it is to express a piece of information. Kenneth D. Forbus, Qualitative Reasoning Group, Northwestern University, 1890 Maple Avenue.
Efficient view based 3-D object retrieval using Hidden Markov Model
NASA Astrophysics Data System (ADS)
Jain, Yogendra Kumar; Singh, Roshan Kumar
2013-12-01
Recent research effort has been dedicated to view-based 3-D object retrieval, because 3-D objects are highly discriminative and have multi-view representations. State-of-the-art methods depend heavily on a particular camera array setting for capturing views of a 3-D object, and use complex Zernike descriptors and HAC for representative view selection, which limits their practical application and makes retrieval inefficient. Therefore, an efficient and effective algorithm is required for 3-D object retrieval. To move toward a general framework that is independent of the camera array setting and avoids representative view selection, we propose an Efficient View-Based 3-D Object Retrieval (EVBOR) method using a Hidden Markov Model (HMM). In this framework, each object is represented by an independent set of views, meaning views can be captured from any direction without any camera array restriction. The views (including query views) are clustered to generate view clusters, which are then used to build the query model with an HMM. In our proposed method, the HMM is used in two ways: in training (HMM estimation) and in retrieval (HMM decoding). The query model is trained using these view clusters, and retrieval is performed by combining the query model with the HMM. The proposed approach removes the static camera array setting for view capturing and can be applied to any 3-D object database to retrieve 3-D objects efficiently and effectively. Experimental results demonstrate that the proposed scheme performs better than existing methods.
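The retrieval step, scoring a query's view-cluster sequence under each object's HMM, can be sketched with a scaled forward algorithm. All parameters below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM) for a discrete-emission HMM.
    obs: sequence of view-cluster indices; pi: (S,) initial state probs;
    A: (S, S) state transitions; B: (S, K) emission probs over K view clusters."""
    alpha = pi * B[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        log_lik += np.log(c)
        alpha /= c
    return log_lik

# Two hypothetical object models: model_a mostly emits view cluster 0,
# model_b mostly emits view cluster 2 (illustrative parameters only).
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B_a = np.array([[0.8, 0.1, 0.1], [0.7, 0.2, 0.1]])
B_b = np.array([[0.1, 0.1, 0.8], [0.1, 0.2, 0.7]])

query = [0, 0, 1, 0]  # cluster labels of the query object's views
score_a = forward_log_likelihood(query, pi, A, B_a)
score_b = forward_log_likelihood(query, pi, A, B_b)
print(score_a > score_b)  # the query matches model_a better
```

Retrieval then ranks the database objects by these log-likelihoods, which is what removes the need for representative-view selection.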
A hybrid 3D SEM reconstruction method optimized for complex geologic material surfaces.
Yan, Shang; Adegbule, Aderonke; Kibbey, Tohren C G
2017-08-01
Reconstruction methods are widely used to extract three-dimensional information from scanning electron microscope (SEM) images. This paper presents a new hybrid reconstruction method that combines stereoscopic reconstruction with shape-from-shading calculations to generate highly-detailed elevation maps from SEM image pairs. The method makes use of an imaged glass sphere to determine the quantitative relationship between observed intensity and angles between the beam and surface normal, and the detector and surface normal. Two specific equations are derived to make use of image intensity information in creating the final elevation map. The equations are used together, one making use of intensities in the two images, the other making use of intensities within a single image. The method is specifically designed for SEM images captured with a single secondary electron detector, and is optimized to capture maximum detail from complex natural surfaces. The method is illustrated with a complex structured abrasive material, and a rough natural sand grain. Results show that the method is capable of capturing details such as angular surface features, varying surface roughness, and surface striations. Copyright © 2017 Elsevier Ltd. All rights reserved.
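The calibration step described above, turning the imaged glass sphere into a quantitative intensity-vs-angle relationship, amounts to fitting a curve to (angle, intensity) samples. A minimal sketch with synthetic data and an assumed polynomial model (the paper derives its own two specific equations; this shows only the general idea):

```python
import numpy as np

# Hypothetical calibration samples: angles between the beam and surface normal
# read off the imaged sphere, and the intensities observed at those angles.
theta = np.linspace(0.0, 1.2, 25)          # radians
intensity = 0.3 + 0.5 * theta**2           # synthetic stand-in for measurements

# Fit an empirical intensity-vs-angle curve; inverting it during reconstruction
# maps an observed pixel intensity back to a surface angle.
coeffs = np.polyfit(theta, intensity, deg=2)
print(np.round(coeffs, 3))  # ≈ [0.5, 0.0, 0.3]
```

Real SEM intensities would of course be noisy and depend on detector geometry as well, which is why the method calibrates against a known sphere rather than assuming a reflectance model.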
Current and Future Research at DANCE
NASA Astrophysics Data System (ADS)
Jandel, M.; Baramsai, B.; Bredeweg, T. A.; Couture, A.; Hayes, A.; Kawano, T.; Mosby, S.; Rusev, G.; Stetcu, I.; Taddeucci, T. N.; Talou, P.; Ullmann, J. L.; Walker, C. L.; Wilhelmy, J. B.
2015-05-01
An overview of the current experimental program on measurements of neutron capture and neutron-induced fission at the Detector for Advanced Neutron Capture Experiments (DANCE) at the Los Alamos Neutron Science Center is presented.
Identification of depth information with stereoscopic mammography using different display methods
NASA Astrophysics Data System (ADS)
Morikawa, Takamitsu; Kodera, Yoshie
2013-03-01
Stereoscopy in radiography was widely used in the late 80's because it could capture complex structures in the human body, proving beneficial for diagnosis and screening. When radiologists observed the images stereoscopically, they usually needed to train their eyes in order to perceive the stereoscopic effect. However, with the development of three-dimensional (3D) monitors and their use in the medical field, such training is no longer required. The question then arises as to whether there is any difference in recognizing depth information between the conventional methods and a 3D monitor. We constructed a phantom and evaluated the difference in the capacity to identify depth information between the two methods. The phantom consists of acrylic steps with 3 mm diameter acrylic pillars on the top and bottom of each step. Seven observers viewed these images stereoscopically using the two display methods and were asked to judge the direction of the pillar that was on top. We compared these judged directions with the direction of the real pillar arranged on top, and calculated the percentage of correct answers (PCA). The results showed that the PCA obtained using the 3D monitor method was about 5% higher than that obtained using the naked-eye method. This indicates that people can view images stereoscopically more precisely using a 3D monitor than with conventional methods such as crossed-eye or parallel-eye viewing. We were thus able to estimate the difference in the capacity to identify depth information between the two display methods.
The Application of Three-Dimensional Surface Imaging System in Plastic and Reconstructive Surgery.
Li, Yanqi; Yang, Xin; Li, Dong
2016-02-01
Three-dimensional (3D) surface imaging systems have gained popularity worldwide in clinical application. Unlike computed tomography and magnetic resonance imaging, they can capture 3D images with both shape and texture information. This feature has made them quite useful for plastic surgeons. This review article focuses on the current status and the future of the application of 3D surface imaging systems in plastic and reconstructive surgery. Currently, 3D surface imaging systems are mainly used in plastic and reconstructive surgery to improve the reliability of surgical planning and to assess surgical outcomes objectively. There have already been reports of their use in plastic and reconstructive surgery from head to toe. Studies on the facial aging process, the development of online applications, and so on, have also been carried out using 3D surface imaging systems. Because different types of 3D surface imaging devices have their own advantages and disadvantages, a basic knowledge of their features is required, and careful thought should be given to choosing the one that best fits a surgeon's needs. In the future, by integrating with other imaging tools and with 3D printing technology, 3D surface imaging systems will play an important role in individualized surgical planning, implant production, meticulous surgical simulation, operative technique training, and patient education.
Overview of FTV (free-viewpoint television)
NASA Astrophysics Data System (ADS)
Tanimoto, Masayuki
2010-07-01
We have developed a new type of television named FTV (Free-viewpoint TV). FTV is the ultimate 3DTV that enables us to view a 3D scene by freely changing our viewpoint. We proposed the concept of FTV and constructed the world's first real-time system including the complete chain of operation from image capture to display. FTV is based on the ray-space method, which represents one ray in real space with one point in the ray-space. We have developed ray capture, processing and display technologies for FTV. FTV can be carried out today in real time on a single PC or on a mobile player. We have also realized FTV with free listening-point audio. The international standardization of FTV has been conducted in MPEG. The first phase of FTV was MVC (Multi-view Video Coding) and the second phase is 3DV (3D Video). MVC was completed in May 2009. The Blu-ray 3D specification has adopted MVC for compression. 3DV is a standard that targets serving a variety of 3D displays. The view generation function of FTV is used to decouple capture and display in 3DV. FDU (FTV Data Unit) is proposed as a data format for 3DV. FDU can compensate for errors in the synthesized views caused by depth errors.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-23
... to capture public comment and current medical science information from presentations made by subject matter experts. This Forum is scheduled for January 17-26, 2012. VA plans to use this information to... first-come, first-served basis. FOR FURTHER INFORMATION CONTACT: Nick Olmos-Lau, M.D., Regulation Staff...
Jeong, Jiyun; Lee, Yeolin; Yoo, Yeongeun; Lee, Myung Kyu
2018-02-01
Agarose gel can be used for three-dimensional (3D) cell culture because it prevents cell attachment. A dried agarose film coated on a culture plate also prevents cell attachment and allows 3D growth of cancer cells. We developed an efficient method for agarose film coating on an oxygen-plasma-treated micropost polystyrene chip prepared by an injection molding process. The agarose film was modified with maleimide or Ni-NTA groups for covalent or cleavable attachment of photoactivatable Fc-specific antibody binding proteins (PFcBPs) via their N-terminal cysteine residues or 6xHis tag, respectively. The antibodies photocrosslinked onto the PFcBP-modified chips specifically captured the target cells without nonspecific binding, and the captured cells grew in 3D mode on the chips. The captured cells on the cleavable antibody-modified chips were easily recovered by treatment with a commercial trypsin-EDTA solution. Under fluidic conditions using an antibody-modified micropost chip, the cells were mainly captured on the micropost walls of the chip rather than on its bottom. The presented method should also be applicable to the immobilization of oriented antibodies on various microfluidic chips with different structures. Copyright © 2017 Elsevier B.V. All rights reserved.
Estimation of Ground Reaction Forces and Moments During Gait Using Only Inertial Motion Capture
Karatsidis, Angelos; Bellusci, Giovanni; Schepers, H. Martin; de Zee, Mark; Andersen, Michael S.; Veltink, Peter H.
2016-01-01
Ground reaction forces and moments (GRF&M) are important measures used as input in biomechanical analysis to estimate joint kinetics, which often are used to infer information for many musculoskeletal diseases. Their assessment is conventionally achieved using laboratory-based equipment that cannot be applied in daily life monitoring. In this study, we propose a method to predict GRF&M during walking, using exclusively kinematic information from fully-ambulatory inertial motion capture (IMC). From the equations of motion, we derive the total external forces and moments. Then, we solve the indeterminacy problem during double stance using a distribution algorithm based on a smooth transition assumption. The agreement between the IMC-predicted and reference GRF&M was categorized over normal walking speed as excellent for the vertical (ρ = 0.992, rRMSE = 5.3%), anterior (ρ = 0.965, rRMSE = 9.4%) and sagittal (ρ = 0.933, rRMSE = 12.4%) GRF&M components and as strong for the lateral (ρ = 0.862, rRMSE = 13.1%), frontal (ρ = 0.710, rRMSE = 29.6%), and transverse GRF&M (ρ = 0.826, rRMSE = 18.2%). Sensitivity analysis was performed on the effect of the cut-off frequency used in the filtering of the input kinematics, as well as the threshold velocities for the gait event detection algorithm. This study was the first to use only inertial motion capture to estimate 3D GRF&M during gait, providing comparable accuracy with optical motion capture prediction. This approach enables applications that require estimation of the kinetics during walking outside the gait laboratory. PMID:28042857
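The double-stance indeterminacy resolution can be illustrated with a generic smoothstep weighting: the known total vertical force is split between trailing and leading foot by a weight that transitions smoothly from 0 to 1. The transition shape and timings below are assumptions, not the paper's exact distribution algorithm:

```python
import numpy as np

def smooth_transition(t, t0, t1):
    """0-to-1 transition over [t0, t1] via smoothstep; clamped outside."""
    x = np.clip((t - t0) / (t1 - t0), 0.0, 1.0)
    return x * x * (3 - 2 * x)

t = np.linspace(0.0, 0.2, 5)          # hypothetical double-stance interval (s)
F_total = np.full_like(t, 800.0)      # known total vertical force (N)
w = smooth_transition(t, 0.0, 0.2)    # weight shifting load to the leading foot
F_lead = w * F_total
F_trail = (1 - w) * F_total
print(F_lead + F_trail)               # per-foot forces always sum to the total
```

The key property is that the per-foot forces remain consistent with the total derived from the equations of motion at every instant, while varying smoothly over the double-stance phase.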
Maximal privacy without coherence.
Leung, Debbie; Li, Ke; Smith, Graeme; Smolin, John A
2014-07-18
Privacy is a fundamental feature of quantum mechanics. A coherently transmitted quantum state is inherently private. Remarkably, coherent quantum communication is not a prerequisite for privacy: there are quantum channels that are too noisy to transmit any quantum information reliably that can nevertheless send private classical information. Here, we ask how much private classical information a channel can transmit if it has little quantum capacity. We present a class of channels N(d) with input dimension d(2), quantum capacity Q(N(d)) ≤ 1, and private capacity P(N(d)) = log d. These channels asymptotically saturate an interesting inequality P(N) ≤ (1/2)[log d(A) + Q(N)] for any channel N with input dimension d(A) and capture the essence of privacy stripped of the confounding influence of coherence.
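Substituting the stated channel parameters into the inequality makes the near-saturation explicit (a short check, not taken from the paper). With input dimension $d_A = d^2$ and $Q(\mathcal{N}_d) \le 1$,

```latex
P(\mathcal{N}_d) \le \tfrac{1}{2}\bigl[\log d_A + Q(\mathcal{N}_d)\bigr]
               = \log d + \tfrac{1}{2}\,Q(\mathcal{N}_d)
               \le \log d + \tfrac{1}{2},
```

so the achieved private capacity $P(\mathcal{N}_d) = \log d$ sits within half a bit of the bound for every $d$, and the channels saturate the inequality asymptotically as $d$ grows.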
de Hoogt, Ronald; Estrada, Marta F; Vidic, Suzana; Davies, Emma J; Osswald, Annika; Barbier, Michael; Santo, Vítor E; Gjerde, Kjersti; van Zoggel, Hanneke J A A; Blom, Sami; Dong, Meng; Närhi, Katja; Boghaert, Erwin; Brito, Catarina; Chong, Yolanda; Sommergruber, Wolfgang; van der Kuip, Heiko; van Weerden, Wytske M; Verschuren, Emmy W; Hickman, John; Graeser, Ralph
2017-11-21
Two-dimensional (2D) culture of cancer cells in vitro does not recapitulate the three-dimensional (3D) architecture, heterogeneity and complexity of human tumors. More representative models are required that better reflect key aspects of tumor biology. These are essential for studies of cancer biology and immunology as well as for target validation and drug discovery. The Innovative Medicines Initiative (IMI) consortium PREDECT (www.predect.eu) characterized in vitro models of three solid tumor types with the goal to capture elements of tumor complexity and heterogeneity. 2D culture and 3D mono- and stromal co-cultures of increasing complexity, and precision-cut tumor slice models were established. Robust protocols for the generation of these platforms are described. Tissue microarrays were prepared from all the models, permitting immunohistochemical analysis of individual cells, capturing heterogeneity. 3D cultures were also characterized using image analysis. Detailed step-by-step protocols, exemplary datasets from the 2D, 3D, and slice models, and refined analytical methods were established and are presented.
NASA Astrophysics Data System (ADS)
Alidoost, F.; Arefi, H.
2017-11-01
Nowadays, Unmanned Aerial System (UAS)-based photogrammetry offers an affordable, fast and effective approach to real-time acquisition of high-resolution geospatial information and automatic 3D modelling of objects for numerous applications such as topographic mapping, 3D city modelling, orthophoto generation, and cultural heritage preservation. In this paper, the capability of four different state-of-the-art software packages, 3DSurvey, Agisoft Photoscan, Pix4Dmapper Pro and SURE, to generate a high-density point cloud as well as a Digital Surface Model (DSM) over a historical site is examined. The main steps of this study are image acquisition, point cloud generation, and accuracy assessment. The overlapping images are first captured using a quadcopter and then processed by the different software packages to generate point clouds and DSMs. In order to evaluate the accuracy and quality of the point clouds and DSMs, both visual and geometric assessments are carried out and the comparison results are reported.
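The geometric assessment typically boils down to comparing DSM elevations against surveyed checkpoints. A minimal sketch with hypothetical checkpoint values and two made-up package outputs (the actual packages and numbers in the paper differ):

```python
import numpy as np

# Hypothetical elevations (m) at surveyed checkpoints, and the values two
# photogrammetric DSMs report at the same locations.
reference = np.array([102.3, 98.7, 101.1, 99.5])
dsms = {
    "package_A": np.array([102.4, 98.5, 101.3, 99.6]),
    "package_B": np.array([102.9, 98.1, 101.8, 99.0]),
}

def rmse(pred, ref):
    """Root-mean-square elevation error against the checkpoints."""
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

for name, z in dsms.items():
    print(name, round(rmse(z, reference), 3))
```

Ranking the packages by such per-checkpoint error statistics (RMSE, mean error, standard deviation) is the usual form the "geometric assessment" takes.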
The 3D Human Motion Control Through Refined Video Gesture Annotation
NASA Astrophysics Data System (ADS)
Jin, Yohan; Suk, Myunghoon; Prabhakaran, B.
In the beginning of the computer and video game industry, simple game controllers consisting of buttons and joysticks were employed, but recently game consoles have been replacing joystick buttons with novel interfaces such as remote controllers with motion sensing technology on the Nintendo Wii [1]. In particular, video-based human computer interaction (HCI) techniques have been applied to games; a representative game is 'Eyetoy' on the Sony PlayStation 2. Video-based HCI offers the great benefit of releasing players from the intractable game controller. Moreover, for communication between humans and computers, video-based HCI is crucial since it is intuitive, easy to use, and inexpensive. On the other hand, extracting semantic low-level features from video human motion data is still a major challenge; the level of accuracy depends heavily on each subject's characteristics and on environmental noise. Of late, people have been using 3D motion-capture data for visualizing real human motions in 3D space (e.g., 'Tiger Woods' in EA Sports, 'Angelina Jolie' in the Beowulf movie) and for analyzing motions for specific performance (e.g., 'golf swing' and 'walking'). A 3D motion-capture system ('VICON') generates a matrix for each motion clip, in which each column corresponds to a sub-body part and each row represents a time frame of the data capture. Thus, we can extract a sub-body part's motion simply by selecting the corresponding columns. Unlike the low-level feature values of video human motion, the entries of the 3D motion-capture data matrix are not pixel values, but are closer to the human level of semantics.
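The column-selection idea can be shown directly. The column layout below is an assumed example, not VICON's actual channel ordering:

```python
import numpy as np

# Toy motion matrix: rows are time frames, columns are channels belonging to
# sub-body parts (the mapping here is a hypothetical example).
frames, channels = 100, 9
motion = np.arange(frames * channels, dtype=float).reshape(frames, channels)
columns = {"left_arm": [0, 1, 2], "right_arm": [3, 4, 5], "torso": [6, 7, 8]}

# Extracting one sub-body part's motion is just a column selection:
left_arm = motion[:, columns["left_arm"]]
print(left_arm.shape)  # (100, 3)
```

Because each column already carries body-part semantics, no feature extraction is needed before analyzing a single part's motion over time.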
Relating transverse ray error and light fields in plenoptic camera images
NASA Astrophysics Data System (ADS)
Schwiegerling, Jim; Tyo, J. Scott
2013-09-01
Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. The camera image is focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The resultant image is an array of circular exit pupil images, each corresponding to the overlying lenslet. The position of the lenslet encodes the spatial information of the scene, whereas the sensor pixels encode the angular information for light incident on the lenslet. The 4D light field is therefore described by the 2D spatial information and 2D angular information captured by the plenoptic camera. In aberration theory, the transverse ray error relates the pupil coordinates of a given ray to its deviation from the ideal image point in the image plane and is consequently a 4D function as well. We demonstrate a technique for modifying the traditional transverse ray error equations to recover the 4D light field of a general scene. In the case of a well corrected optical system, this light field is easily related to the depth of various objects in the scene. Finally, the effects of sampling with both the lenslet array and the camera sensor on the 4D light field data are analyzed to illustrate the limitations of such systems.
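For an idealized square lenslet grid with no gaps, decoding the 2D sensor image into the 4D light field described above is a pure reindexing. A minimal sketch (the regular S×T lenslet / U×V pixel layout is an assumption; a real plenoptic sensor needs calibration and resampling first):

```python
import numpy as np

# Synthetic plenoptic sensor image: an S x T grid of lenslets, each forming a
# U x V image of the exit pupil on the sensor.
S, T, U, V = 8, 8, 5, 5
sensor = np.arange(S * U * T * V, dtype=float).reshape(S * U, T * V)

# Reorder the 2D sensor image into the 4D light field L[s, t, u, v]:
# (s, t) index the lenslet (spatial), (u, v) the pixel under it (angular).
L = sensor.reshape(S, U, T, V).transpose(0, 2, 1, 3)
print(L.shape)  # (8, 8, 5, 5)
```

Each light-field sample L[s, t, u, v] is the sensor pixel at row s*U + u, column t*V + v, which makes the spatial/angular factorization of the plenoptic image explicit.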
Biofidelic Human Activity Modeling and Simulation with Large Variability
2014-11-25
A systematic approach was developed for biofidelic human activity modeling and simulation by using body scan data and motion capture data to...replicate a human activity in 3D space. Since technologies for simultaneously capturing human motion and dynamic shapes are not yet ready for practical use, a...that can replicate a human activity in 3D space with the true shape and true motion of a human. Using this approach, a model library was built to
Scanning 3D full human bodies using Kinects.
Tong, Jing; Zhou, Jin; Liu, Ligang; Pan, Zhigeng; Yan, Hao
2012-04-01
Depth cameras such as the Microsoft Kinect are much cheaper than conventional 3D scanning devices and can thus be acquired easily by everyday users. However, the depth data captured by a Kinect beyond a certain distance is of extremely low quality. In this paper, we present a novel scanning system for capturing 3D full human body models using multiple Kinects. To avoid interference phenomena, we use two Kinects to capture the upper and lower parts of a human body, respectively, without an overlapping region; a third Kinect is used to capture the middle part of the human body from the opposite direction. We propose a practical approach for registering the various body parts of different views under non-rigid deformation. First, a rough mesh template is constructed and used to deform successive frames pairwise. Second, global alignment is performed to distribute errors in the deformation space, which solves the loop-closure problem efficiently. Misalignment caused by complex occlusion is also handled reasonably by our global alignment algorithm. The experimental results show the efficiency and applicability of our system, which obtains impressive results in a few minutes with low-priced devices and is thus practically useful for generating personalized avatars for everyday users. Our system has been used for 3D human animation and virtual try-on, and can further facilitate a range of home-oriented virtual reality (VR) applications.
Adaptive 3D Face Reconstruction from Unconstrained Photo Collections.
Roth, Joseph; Tong, Yiying; Liu, Xiaoming
2016-12-07
Given a photo collection of "unconstrained" face images of one individual captured under a variety of unknown pose, expression, and illumination conditions, this paper presents a method for reconstructing a 3D face surface model of the individual along with albedo information. Unlike prior work on face reconstruction that requires large photo collections, we formulate an approach that adapts to photo collections with high diversity in both the number of images and the image quality. To achieve this, we incorporate prior knowledge about face shape by fitting a 3D morphable model to form a personalized template, followed by a novel photometric stereo formulation to complete the fine details, under a coarse-to-fine scheme. Our scheme incorporates a structural similarity-based local selection step to help identify a common expression for reconstruction while discarding occluded portions of faces. Reconstruction performance is evaluated with a novel quality measure, in the absence of ground-truth 3D scans. Superior large-scale experimental results are reported on synthetic, Internet, and personal photo collections.
Self-expressive Dictionary Learning for Dynamic 3D Reconstruction.
Zheng, Enliang; Ji, Dinghuang; Dunn, Enrique; Frahm, Jan-Michael
2017-08-22
We target the problem of sparse 3D reconstruction of dynamic objects observed by multiple unsynchronized video cameras with unknown temporal overlap. To this end, we develop a framework to recover the unknown structure without sequencing information across video sequences. Our proposed compressed sensing framework poses the estimation of 3D structure as a dictionary learning problem, where the dictionary is defined as an aggregation of the temporally varying 3D structures. Given the smooth motion of dynamic objects, we observe that any element in the dictionary can be well approximated by a sparse linear combination of other elements in the same dictionary (i.e., self-expression). Our formulation optimizes a biconvex cost function that leverages a compressed sensing formulation and enforces both structural dependency coherence across video streams and motion smoothness across estimates from common video sources. We further analyze the reconstructability of our approach under different capture scenarios, and compare and relate it to existing methods. Experimental results on large amounts of synthetic data as well as real imagery demonstrate the effectiveness of our approach.
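The self-expression property can be illustrated with a toy example. This is not the authors' biconvex solver; it is a plain least-squares sketch showing that, for smoothly varying data, each column of a matrix is well expressed by the remaining columns:

```python
import numpy as np

# Sketch of self-expression: fit each column of X on the *other* columns,
# enforcing a zero diagonal in the coefficient matrix C by construction.
def self_expression(X):
    n = X.shape[1]
    C = np.zeros((n, n))
    for j in range(n):
        others = np.delete(np.arange(n), j)
        c, *_ = np.linalg.lstsq(X[:, others], X[:, j], rcond=None)
        C[others, j] = c
    return C

# Toy "temporally varying structures": columns sampled from a smooth curve.
t = np.linspace(0.0, 1.0, 8)
X = np.vstack([np.ones_like(t), t, t**2])   # shape (3, 8)
C = self_expression(X)
print(np.allclose(X @ C, X))                # True: X reproduces itself
```

The paper adds sparsity and smoothness terms on top of this basic reconstruction identity; the sketch only demonstrates the identity itself.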
TU-CD-207-09: Analysis of the 3-D Shape of Patients’ Breast for Breast Imaging and Surgery Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agasthya, G; Sechopoulos, I
2015-06-15
Purpose: Develop a method to accurately capture the 3-D shape of patients’ external breast surface before and during breast compression for mammography/tomosynthesis. Methods: During this IRB-approved, HIPAA-compliant study, 50 women were recruited to undergo 3-D breast surface imaging during breast compression and imaging for the cranio-caudal (CC) view on a digital mammography/breast tomosynthesis system. Digital projectors and cameras mounted on tripods were used to acquire 3-D surface images of the breast in three conditions: (a) positioned on the support paddle before compression, (b) during compression by the compression paddle, and (c) the anterior-posterior view with the breast in its natural, unsupported position. The breast was compressed to standard full compression with the compression paddle, and a tomosynthesis image was acquired simultaneously with the 3-D surface. The 3-D surface curvature and deformation with respect to the uncompressed surface were analyzed using contours. The 3-D surfaces were voxelized to capture breast shape in a format that can be manipulated for further analysis. Results: A protocol was developed to accurately capture the 3-D shape of patients’ breasts before and during compression for mammography. Using a pair of 3-D scanners, the 50 patient breasts were scanned in three conditions, resulting in accurate representations of the breast surfaces. The surfaces were post-processed, analyzed using contours, and voxelized with 1 mm³ voxels, converting the breast shape into a format that can be easily modified as required. Conclusion: Accurate characterization of the breast curvature and shape for the generation of 3-D models is possible. These models can be used for various applications such as improving breast dosimetry, accurate scatter estimation, conducting virtual clinical trials, and validating compression algorithms. Ioannis Sechopoulos is a consultant for Fuji Medical Systems USA.
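The voxelization step mentioned in the Methods can be sketched generically. The function below is an assumed illustration of converting surface points (in mm) to an occupancy grid of 1 mm³ voxels, not the study's actual pipeline:

```python
import numpy as np

# Hypothetical sketch: quantize surface points onto a 1 mm^3 grid and mark
# occupied voxels, yielding a shape representation that is easy to modify.
def voxelize(points_mm, voxel_size=1.0):
    idx = np.floor(points_mm / voxel_size).astype(int)
    idx -= idx.min(axis=0)                   # shift indices to start at 0
    grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
    grid[tuple(idx.T)] = True
    return grid

pts = np.array([[0.2, 0.7, 1.4], [2.9, 0.1, 0.5], [0.4, 0.6, 1.2]])
grid = voxelize(pts)
print(grid.shape, grid.sum())   # (3, 1, 2) 2 -- two distinct occupied voxels
```

An occupancy grid like this can then be eroded, dilated, or resampled, which is what makes the voxel format convenient for dosimetry and scatter-estimation models.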
Computer vision research with new imaging technology
NASA Astrophysics Data System (ADS)
Hou, Guangqi; Liu, Fei; Sun, Zhenan
2015-12-01
Light field imaging is capable of capturing dense multi-view 2D images in one snapshot, recording both intensity values and directions of rays simultaneously. As an emerging 3D device, the light field camera has been widely used in digital refocusing, depth estimation, stereoscopic display, etc. Traditional multi-view stereo (MVS) methods only perform well on strongly textured surfaces; the depth map contains numerous holes and large ambiguities in textureless or low-textured regions. In this paper, we exploit light field imaging technology for 3D face modeling in computer vision. Based on a 3D morphable model, we estimate the pose parameters from facial feature points. The depth map is then estimated with the epipolar plane image (EPI) method. Finally, a high-quality 3D face model is recovered via a fusion strategy. We evaluate the effectiveness and robustness of the approach on face images captured by a light field camera in different poses.
Understanding Science: Studies of Communication and Information.
ERIC Educational Resources Information Center
Griffith, Belver C.
1989-01-01
Sets bibliometrics in the context of the sociology of science by tracing the influences of Robert Merton, Thomas Kuhn, and D. J. Price. Explores the discovery of strong empirical relationships among measured communication and information that capture important features of social process and cognitive change in science. (SR)
NASA Astrophysics Data System (ADS)
Chen, Xueqin; Li, Siyuan; Zhang, Xiaoxia; Min, Qianhao; Zhu, Jun-Jie
2015-03-01
Qualitative and quantitative characterization of phosphopeptides by means of mass spectrometry (MS) is the main goal of MS-based phosphoproteomics, but suffers from their low abundance in the large haystack of various biological molecules. Herein, we introduce two-dimensional (2D) metal oxides to tackle this biological separation issue. A nanocomposite composed of titanoniobate nanosheets embedded with Fe3O4 nanocrystals (Fe3O4-TiNbNS) is constructed via a facile cation-exchange approach, and adopted for the capture and isotope labeling of phosphopeptides. In this nanoarchitecture, the 2D titanoniobate nanosheets offer enlarged surface area and a spacious microenvironment for capturing phosphopeptides, while the Fe3O4 nanocrystals not only incorporate a magnetic response into the composite but, more importantly, also disrupt the restacking process between the titanoniobate nanosheets and thus preserve a greater specific surface for binding phosphopeptides. Owing to the extended active surface, abundant Lewis acid sites and excellent magnetic controllability, Fe3O4-TiNbNS demonstrates superior sensitivity, selectivity and capacity over homogeneous bulk metal oxides, layered oxides, and even restacked nanosheets in phosphopeptide enrichment, and further allows in situ isotope labeling to quantify aberrantly-regulated phosphopeptides from sera of leukemia patients. This composite nanosheet greatly contributes to the MS analysis of phosphopeptides and gives inspiration in the pursuit of 2D structured materials for separation of other biological molecules of interest.
Electronic supplementary information (ESI) available: Sequence of phosphopeptides from the digests of α- and β-casein percentages of the 4 methylated products from peptide β1 at different labeling reaction times; sequence of serum phosphopeptides; XPS spectra of Nb 3d and Ti 2p in layered oxides and H+-stacked nanosheets; phosphopeptide enrichment sensitivity of bulk oxides, layered oxides and H+-stacked nanosheets; AFM image of TiNbNS; saturated adsorption isotherm for pNPP adsorbed on bulk oxides, layered oxides and H+-stacked nanosheets; XPS spectra of Fe3O4-TiNbNS nitrogen adsorption-desorption isotherms and pore size distribution curves for the Fe3O4 nanocrystals; phosphopeptide enrichment sensitivity, capacity and selectivity of the Fe3O4-TiNbNS composites; MS/MS spectra of phosphopeptides enriched from serum; linear relationship between the logarithms of peak area ratio and loading volume ratio. See DOI: 10.1039/c4nr07041k
3D surface pressure measurement with single light-field camera and pressure-sensitive paint
NASA Astrophysics Data System (ADS)
Shi, Shengxian; Xu, Shengming; Zhao, Zhou; Niu, Xiaofu; Quinn, Mark Kenneth
2018-05-01
A novel technique that simultaneously measures three-dimensional model geometry and surface pressure distribution with a single camera is demonstrated in this study. The technique takes advantage of light-field photography, which can capture three-dimensional information with a single light-field camera, and combines it with the intensity-based pressure-sensitive paint method. The proposed single-camera light-field three-dimensional pressure measurement technique (LF-3DPSP) utilises a hardware setup similar to that of the traditional two-dimensional pressure measurement technique, with the exception that the wind-on, wind-off and model geometry images are captured via an in-house-constructed light-field camera. The proposed LF-3DPSP technique was validated with a Mach 5 flared-cone model test. Results show that the technique is capable of measuring three-dimensional geometry with high accuracy for models of relatively large curvature, and the pressure results compare well with Schlieren tests, analytical calculations, and numerical simulations.
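The wind-on/wind-off intensity processing behind intensity-based PSP is commonly described by a Stern-Volmer-type calibration. The sketch below illustrates that step only; the coefficients are assumed placeholders, not values from this paper, and would come from a calibration experiment:

```python
import numpy as np

# Minimal sketch of intensity-based PSP evaluation (not the authors' code).
# A dimmer wind-on image means more oxygen quenching, i.e. higher pressure.
A, B = 0.15, 0.85        # assumed Stern-Volmer calibration coefficients
P_ref = 101.3            # reference (wind-off) pressure, kPa

def psp_pressure(I_wind_off, I_wind_on):
    """Stern-Volmer relation: P / P_ref = A + B * (I_ref / I)."""
    return P_ref * (A + B * I_wind_off / I_wind_on)

print(psp_pressure(1.0, 1.0))   # unchanged intensity -> reference pressure
```

In LF-3DPSP the same ratioing is applied per surface point, with the light-field camera additionally supplying the 3D location of each point.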
Incremental Multi-view 3D Reconstruction Starting from Two Images Taken by a Stereo Pair of Cameras
NASA Astrophysics Data System (ADS)
El hazzat, Soulaiman; Saaidi, Abderrahim; Karam, Antoine; Satori, Khalid
2015-03-01
In this paper, we present a new method for multi-view 3D reconstruction based on the use of a binocular stereo vision system, constituted of two unattached cameras, to initialize the reconstruction process. Afterwards, the second camera of the stereo vision system (characterized by varying parameters) moves to capture more images at different times, which are used to obtain an almost complete 3D reconstruction. The first two projection matrices are estimated by using a 3D pattern with known properties. After that, 3D scene points are recovered by triangulation of the matched interest points between these two images. The proposed approach is incremental: at each insertion of a new image, the camera projection matrix is estimated using the 3D information already calculated, and new 3D points are recovered by triangulation from the matching of interest points between the inserted image and the previous one. For the refinement of the new projection matrix and the new 3D points, a local bundle adjustment is performed. First, all projection matrices are estimated, the matches between consecutive images are detected, and a Euclidean sparse 3D reconstruction is obtained. Then, to increase the number of matches and obtain a denser reconstruction, the match propagation algorithm, well suited to this kind of camera movement, is applied to the pairs of consecutive images. The experimental results show the power and robustness of the proposed approach.
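The triangulation step referred to above is the textbook linear (DLT) method: a 3D point is recovered from two projection matrices and a pair of matched image points. This is a generic sketch of that standard technique, not the authors' implementation:

```python
import numpy as np

# Linear triangulation: each image observation contributes two rows to a
# homogeneous system A X = 0; the 3D point is the smallest singular vector.
def triangulate(P1, P2, x1, x2):
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Toy stereo pair: identity camera and a camera translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(np.allclose(triangulate(P1, P2, x1, x2), X_true))  # True
```

In the incremental pipeline, the same routine is reused each time a new image's projection matrix has been estimated, before the local bundle adjustment refines both.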
When the display matters: A multifaceted perspective on 3D geovisualizations
NASA Astrophysics Data System (ADS)
Juřík, Vojtěch; Herman, Lukáš; Šašinka, Čeněk; Stachoň, Zdeněk; Chmelík, Jiří
2017-04-01
This study explores the influence of stereoscopic (real) 3D and monoscopic (pseudo) 3D visualization on the human ability to reckon altitude information in non-interactive and interactive 3D geovisualizations. A two-phased experiment was carried out to compare the performance of two groups of participants, one of them using the real 3D and the other the pseudo 3D visualization of geographical data. A homogeneous group of 61 psychology students, inexperienced in the processing of geographical data, were tested with respect to their efficiency at identifying altitudes of the displayed landscape. The first phase of the experiment was designed as non-interactive, where static 3D visual displays were presented; the second phase was designed as interactive, and the participants were allowed to explore the scene by adjusting the position of the virtual camera. The investigated variables included accuracy at altitude identification, time demands, and the amount of motor activity performed by the participant during interaction with the geovisualization. The interface was created using a motion capture system, a Wii Remote controller, widescreen projection, and passive Dolby 3D technology (for real 3D vision). The real 3D visual display was shown to significantly increase the accuracy of landscape altitude identification in non-interactive tasks. As expected, in the interactive phase the differences in accuracy between the groups flattened out due to the possibility of interaction, with no other statistically significant differences in completion times or motor activity. The increased number of omitted objects in the real 3D condition was further subjected to an exploratory analysis.
Uav and Computer Vision, Detection of Infrastructure Losses and 3d Modeling
NASA Astrophysics Data System (ADS)
Barrile, V.; Bilotta, G.; Nunnari, A.
2017-11-01
The degradation of buildings, or rather the decline of their initial performance due to external agents, both natural (freeze-thaw, earthquakes, salt, etc.) and artificial (industrial environment, urban setting, etc.), leads over the years to the need for Non-Destructive Testing (NDT) intended to give useful information for explaining a potential deterioration without damaging the state of the buildings. An accurate examination of damage, and of the recurrence of cracks under similar stress conditions, indicates the existence of principles that govern the creation of these events. There is no doubt that a precise visual analysis is at the basis of a correct evaluation of a building. This paper deals with the creation of 3D models based on the capture of digital images, through autopiloted UAV flights, for civil buildings situated in the area of Reggio Calabria. The subsequent processing is carried out with commercial software based on specific algorithms of the Structure from Motion (SfM) technique. SfM represents an important advance in the field of aerial and terrestrial survey, obtaining results, in terms of time and quality, comparable to those achievable through more traditional data capture methodologies.
Human body motion capture from multi-image video sequences
NASA Astrophysics Data System (ADS)
D'Apuzzo, Nicola
2003-01-01
This paper presents a method for capturing the motion of the human body from multi-image video sequences without using markers. The process is composed of five steps: acquisition of video sequences, calibration of the system, surface measurement of the human body for each frame, 3-D surface tracking, and tracking of key points. The image acquisition system is currently composed of three synchronized progressive-scan CCD cameras and a frame grabber which acquires a sequence of image triplets. Self-calibration methods are applied to obtain the exterior orientation of the cameras, the parameters of interior orientation, and the parameters modeling the lens distortion. From the video sequences, two kinds of 3-D information are extracted: a three-dimensional surface measurement of the visible parts of the body for each triplet, and 3-D trajectories of points on the body. The approach for surface measurement is based on multi-image matching, using the adaptive least squares method. A fully automatic matching process determines a dense set of corresponding points in the triplets. The 3-D coordinates of the matched points are then computed by forward ray intersection using the orientation and calibration data of the cameras. The tracking process is also based on least squares matching techniques. Its basic idea is to track triplets of corresponding points in the three images through the sequence and compute their 3-D trajectories. The spatial correspondences between the three images at the same time and the temporal correspondences between subsequent frames are determined with a least squares matching algorithm. The results of the tracking process are the coordinates of a point in the three images through the sequence; the 3-D trajectory is then determined by computing the 3-D coordinates of the point at each time step by forward ray intersection. Velocities and accelerations are also computed.
The advantage of this tracking process is twofold: it can track natural points without using markers, and it can track local surfaces on the human body. In the latter case, the tracking process is applied to all the points matched in the region of interest. The result can be seen as a vector field of trajectories (position, velocity and acceleration). The last step of the process is the definition of selected key points of the human body. A key point is a 3-D region defined in the vector field of trajectories, whose size can vary and whose position is defined by its center of gravity. The key points are tracked in a simple way: the position at the next time step is established by the mean value of the displacements of all the trajectories inside the region. The tracked key points lead to a final result comparable to that of conventional motion capture systems: 3-D trajectories of key points which can afterwards be analyzed and used for animation or medical purposes.
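The key-point update rule described above is simple enough to sketch directly. The geometry and radius below are assumed toy values, not from the paper:

```python
import numpy as np

# Sketch of the key-point update: the key point's next position is its
# current position plus the mean displacement of all trajectories whose
# points lie inside its region (here a sphere of given radius).
def update_key_point(center, radius, pts_t, pts_t1):
    """pts_t, pts_t1: (N, 3) trajectory positions at times t and t+1."""
    inside = np.linalg.norm(pts_t - center, axis=1) <= radius
    return center + (pts_t1[inside] - pts_t[inside]).mean(axis=0)

pts_t = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [5.0, 0.0, 0.0]])
pts_t1 = pts_t + np.array([[0.2, 0, 0], [0.2, 0, 0], [9.0, 0, 0]])
c = update_key_point(np.zeros(3), 1.0, pts_t, pts_t1)
print(c)   # [0.2 0.  0. ] -- the far-away trajectory is ignored
```

Averaging over a whole region is what makes the scheme robust to the loss of individual trajectories between frames.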
A quartz nanopillar hemocytometer for high-yield separation and counting of CD4+ T lymphocytes
NASA Astrophysics Data System (ADS)
Kim, Dong-Joo; Seol, Jin-Kyeong; Wu, Yu; Ji, Seungmuk; Kim, Gil-Sung; Hyung, Jung-Hwan; Lee, Seung-Yong; Lim, Hyuneui; Fan, Rong; Lee, Sang-Kwon
2012-03-01
We report the development of a novel quartz nanopillar (QNP) array cell separation system capable of selectively capturing and isolating a single cell population, including primary CD4+ T lymphocytes, from the whole pool of splenocytes. Integrated with a photolithographically patterned hemocytometer structure, the streptavidin (STR)-functionalized-QNP (STR-QNP) arrays allow for direct quantitation of captured cells using high content imaging. This technology exhibits an excellent separation yield (efficiency) of ~95.3 +/- 1.1% for the CD4+ T lymphocytes from the mouse splenocyte suspensions and good linear response for quantitating captured CD4+ T-lymphoblasts, which is comparable to flow cytometry and outperforms any non-nanostructured surface capture techniques, i.e. cell panning. This nanopillar hemocytometer represents a simple, yet efficient cell capture and counting technology and may find immediate applications for diagnosis and immune monitoring in the point-of-care setting.
Electronic supplementary information (ESI) available. See DOI: 10.1039/c2nr11338d
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panaccione, Charles; Staab, Greg; Meuleman, Erik
ION has developed a mathematically driven model for a contacting device incorporating mass transfer, heat transfer, and computational fluid dynamics. This model is based upon a parametric structure for purposes of future commercialization. The most promising design from modeling was 3D printed and tested in a bench-scale CO2 capture unit and compared to commercially available structured packing tested in the same unit.
NASA Spacecraft Captures 3-D View of Massive Australian Wildfire
2013-02-05
This 3-D view was created from data acquired on Feb. 4, 2013, by NASA's Terra spacecraft, showing a massive wildfire which damaged Australia's largest optical astronomy facility, the Siding Spring Observatory.
Three-dimensional reconstruction from serial sections in PC-Windows platform by using 3D_Viewer.
Xu, Yi-Hua; Lahvis, Garet; Edwards, Harlene; Pitot, Henry C
2004-11-01
Three-dimensional (3D) reconstruction from serial sections allows identification of objects of interest in 3D and clarifies the relationships among these objects. 3D_Viewer, developed in our laboratory for this purpose, has four major functions: image alignment, movie frame production, movie viewing, and shift-overlay image generation. Color images captured from serial sections were aligned; then the contours of objects of interest were highlighted in a semi-automatic manner. These 2D images were then automatically stacked at different viewing angles, and their composite images on a projected plane were recorded by an image transform-shift-overlay technique. These composite images are used in the object-rotation movie display. The design considerations of the program and the procedures used for 3D reconstruction from serial sections are described. This program, with a digital image-capture system, a semi-automatic contour-highlighting method, and an automatic image transform-shift-overlay technique, greatly speeds up the reconstruction process. Since images generated by 3D_Viewer are in a general graphic format, data sharing with others is easy. 3D_Viewer is written in MS Visual Basic 6 and is obtainable from our laboratory on request.
Systems and Methods for Automated Water Detection Using Visible Sensors
NASA Technical Reports Server (NTRS)
Rankin, Arturo L. (Inventor); Matthies, Larry H. (Inventor); Bellutta, Paolo (Inventor)
2016-01-01
Systems and methods are disclosed that include automated machine vision that can utilize images of scenes captured by a 3D imaging system configured to image light within the visible light spectrum to detect water. One embodiment includes autonomously detecting water bodies within a scene including capturing at least one 3D image of a scene using a sensor system configured to detect visible light and to measure distance from points within the scene to the sensor system, and detecting water within the scene using a processor configured to detect regions within each of the at least one 3D images that possess at least one characteristic indicative of the presence of water.
NASA Astrophysics Data System (ADS)
Kotan, Muhammed; Öz, Cemil
2017-12-01
An inspection system is proposed that uses estimated three-dimensional (3-D) surface characteristics to detect and classify faults, in order to increase quality control of frequently used industrial components. Shape from shading (SFS) is one of the basic, classic 3-D shape recovery problems in computer vision. In our application, we developed a system using the Frankot and Chellappa SFS method, based on minimization over a selected basis. First, a specialized image acquisition system captures images of the component. To eliminate noise, a wavelet transform is applied to the captured images. The estimated gradients are then used to obtain depth and surface profiles, and the depth information is used to determine and classify surface defects. A comparison with some linearization-based SFS algorithms is also discussed. The developed system was applied to real products, and the results indicate that SFS approaches are useful and that various types of defects can be detected easily in a short period of time.
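The Frankot-Chellappa step can be sketched in its standard Fourier-basis form: the estimated gradient field is projected onto the nearest integrable surface. This is the generic textbook algorithm, not necessarily the authors' exact implementation:

```python
import numpy as np

def frankot_chellappa(p, q):
    """Integrate a gradient field (p = dz/dx, q = dz/dy) into a surface z."""
    h, w = p.shape
    wx = np.fft.fftfreq(w) * 2 * np.pi      # angular frequency per sample
    wy = np.fft.fftfreq(h) * 2 * np.pi
    WX, WY = np.meshgrid(wx, wy)
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = WX**2 + WY**2
    denom[0, 0] = 1.0                       # avoid division by zero at DC
    Z = (-1j * WX * P - 1j * WY * Q) / denom
    Z[0, 0] = 0.0                           # depth recovered up to a constant
    return np.real(np.fft.ifft2(Z))

# Demo: recover a smooth periodic surface from its analytic gradients.
N = 64
col, row = np.meshgrid(np.arange(N), np.arange(N))
z = np.sin(2 * np.pi * col / N) + np.cos(2 * np.pi * row / N)
p = (2 * np.pi / N) * np.cos(2 * np.pi * col / N)     # dz/d(col)
q = -(2 * np.pi / N) * np.sin(2 * np.pi * row / N)    # dz/d(row)
z_rec = frankot_chellappa(p, q)
print(np.allclose(z_rec, z, atol=1e-6))   # True
```

In the inspection system, `p` and `q` would come from the shading model after wavelet denoising, and the recovered depth map is what feeds the defect classifier.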
Schulz-Wendtland, Rüdiger; Harz, Markus; Meier-Meitinger, Martina; Brehm, Barbara; Wacker, Till; Hahn, Horst K; Wagner, Florian; Wittenberg, Thomas; Beckmann, Matthias W; Uder, Michael; Fasching, Peter A; Emons, Julius
2017-03-01
Three-dimensional (3D) printing has become widely available, and a few cases of its use in clinical practice have been described. The aim of this study was to explore facilities for the semi-automated delineation of breast cancer tumors and to assess the feasibility of 3D printing of breast cancer tumors. In a case series of five patients, different 3D imaging methods-magnetic resonance imaging (MRI), digital breast tomosynthesis (DBT), and 3D ultrasound-were used to capture 3D data for breast cancer tumors. The volumes of the breast tumors were calculated to assess the comparability of the breast tumor models, and the MRI information was used to render models on a commercially available 3D printer to materialize the tumors. The tumor volumes calculated from the different 3D methods appeared to be comparable. Tumor models with volumes between 325 mm 3 and 7,770 mm 3 were printed and compared with the models rendered from MRI. The materialization of the tumors reflected the computer models of them. 3D printing (rapid prototyping) appears to be feasible. Scenarios for the clinical use of the technology might include presenting the model to the surgeon to provide a better understanding of the tumor's spatial characteristics in the breast, in order to improve decision-making in relation to neoadjuvant chemotherapy or surgical approaches. J. Surg. Oncol. 2017;115:238-242. © 2016 Wiley Periodicals, Inc.
Deformation Invariant Attribute Vector for Deformable Registration of Longitudinal Brain MR Images
Li, Gang; Guo, Lei; Liu, Tianming
2009-01-01
This paper presents a novel approach to define deformation invariant attribute vector (DIAV) for each voxel in 3D brain image for the purpose of anatomic correspondence detection. The DIAV method is validated by using synthesized deformation in 3D brain MRI images. Both theoretic analysis and experimental studies demonstrate that the proposed DIAV is invariant to general nonlinear deformation. Moreover, our experimental results show that the DIAV is able to capture rich anatomic information around the voxels and exhibit strong discriminative ability. The DIAV has been integrated into a deformable registration algorithm for longitudinal brain MR images, and the results on both simulated and real brain images are provided to demonstrate the good performance of the proposed registration algorithm based on matching of DIAVs. PMID:19369031
Using focused plenoptic cameras for rich image capture.
Georgiev, T; Lumsdaine, A; Chunev, G
2011-01-01
This approach uses a focused plenoptic camera to capture the plenoptic function's rich "non 3D" structure. It employs two techniques. The first simultaneously captures multiple exposures (or other aspects) based on a microlens array having an interleaved set of different filters. The second places multiple filters at the main lens aperture.
Geographic Video 3d Data Model And Retrieval
NASA Astrophysics Data System (ADS)
Han, Z.; Cui, C.; Kong, Y.; Wu, H.
2014-04-01
Geographic video includes both spatial and temporal geographic features acquired through ground-based or non-ground-based cameras. With the popularity of video capture devices such as smartphones, the volume of user-generated geographic video clips has grown significantly, and this growth is quickly accelerating. Such a massive and increasing volume poses a major challenge to efficient video management and querying. Most of today's video management and query techniques are based on signal-level content extraction and are not able to fully utilize the geographic information of the videos. This paper introduces a geographic video 3D data model based on spatial information. The main idea of the model is to utilize the location, trajectory and azimuth information acquired by sensors such as GPS receivers and 3D electronic compasses in conjunction with the video contents. The raw spatial information is synthesized into point, line, polygon and solid geometries according to camcorder parameters such as focal length and angle of view. For video segments and video frames, we define three categories of geometry objects using the geometry model of the OGC Simple Features Specification for SQL. Video can then be queried by computing the spatial relations between query objects and these geometry objects, such as VFLocation, VSTrajectory, VSFOView and VFFovCone. We designed the query methods in detail using the structured query language (SQL). The experiments indicate that the model is an integrated, loosely coupled, flexible and extensible data model for the management of geographic stereo video.
3D Data Acquisition Platform for Human Activity Understanding
2016-03-02
In this project, we incorporated motion capture devices, 3D vision sensors, and EMG sensors to cross-validate multimodality data acquisition, and to address fundamental research problems of representation and invariant description of 3D data, human motion modeling and … The support for the acquisition of this research instrumentation has significantly facilitated our current and future research and education …
The capture and recreation of 3D auditory scenes
NASA Astrophysics Data System (ADS)
Li, Zhiyun
The main goal of this research is to develop the theory and implement practical tools (in both software and hardware) for the capture and recreation of 3D auditory scenes. Our research is expected to have applications in virtual reality, telepresence, film, music, video games, auditory user interfaces, and sound-based surveillance. The first part of our research is concerned with sound capture via a spherical microphone array. The advantage of this array is that it can be steered digitally toward any 3D direction with the same beampattern. We develop design methodologies to achieve flexible microphone layouts, optimal beampattern approximation, and robustness constraints. We also design novel hemispherical and circular microphone array layouts for more spatially constrained auditory scenes. Using the captured audio, we then propose a unified and simple approach for recreating these scenes by exploiting the reciprocity principle that holds between the two processes. Our approach makes the system easy to build and practical. Using this approach, we can capture the 3D sound field with a spherical microphone array and recreate it using a spherical loudspeaker array, ensuring that the recreated sound field matches the recorded field up to a high order of spherical harmonics. For some regular or semi-regular microphone layouts, we design an efficient parallel implementation of the multi-directional spherical beamformer by using the rotational symmetries of the beampattern and of the spherical microphone array. This can be implemented in either software or hardware and easily adapted for other regular or semi-regular microphone layouts. In addition, we extend this approach to headphone-based systems. Design examples and simulation results are presented to verify our algorithms. Prototypes are built and tested in real-world auditory scenes.
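The steering idea behind such an array can be illustrated with a plain delay-and-sum beamformer. This is a minimal sketch, not the spherical-harmonics modal beamformer the dissertation develops; the two-microphone geometry and signal parameters are invented for the example.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def arrival_advance(mic_positions, direction):
    """Seconds by which each mic hears a plane wave from `direction`
    earlier than a mic at the array center (positive = earlier)."""
    direction = np.asarray(direction, float)
    direction = direction / np.linalg.norm(direction)
    return mic_positions @ direction / SPEED_OF_SOUND

def delay_and_sum(signals, advance, fs):
    """Align and average microphone signals toward one look direction.

    Integer-sample alignment only, for clarity; real implementations use
    fractional delays or work in the spherical-harmonics domain."""
    shifts = np.round(-advance * fs).astype(int)  # delay the early mics
    shifts -= shifts.min()
    n = signals.shape[1] - shifts.max()
    return sum(sig[s:s + n] for sig, s in zip(signals, shifts)) / len(signals)

# Two mics 10 cm apart on the x-axis; a 1 kHz tone arrives from +x.
fs = 48000
mics = np.array([[0.05, 0.0, 0.0], [-0.05, 0.0, 0.0]])
adv = arrival_advance(mics, [1, 0, 0])
lag = int(round((adv[0] - adv[1]) * fs))   # mic 0 leads mic 1 by ~14 samples
src = np.sin(2 * np.pi * 1000 * np.arange(1000) / fs)
signals = np.stack([src[lag:lag + 480], src[:480]])
out = delay_and_sum(signals, adv, fs)      # coherent sum toward +x
```

Changing `direction` re-steers the beam digitally with no moving parts, which is the property the abstract highlights for the spherical array.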
The Requirement for U.S. Army Special Forces to Conduct Interrogation
2012-06-01
SCHARFF … 93; APPENDIX D. "SUGGESTIONS FOR JAPANESE INTERPRETERS BASED ON WORK IN THE FIELD" BY SHERWOOD MORAN … 107; INITIAL … reluctant to risk their men simply to capture Japanese soldiers—soldiers they were convinced would never disclose valuable intelligence. … only the most severe coercive measures of interrogation would convince a captured Japanese soldier to divulge information. Moran believed "strong
Hdr Imaging for Feature Detection on Detailed Architectural Scenes
NASA Astrophysics Data System (ADS)
Kontogianni, G.; Stathopoulou, E. K.; Georgopoulos, A.; Doulamis, A.
2015-02-01
3D reconstruction relies on accurate detection, extraction, description and matching of image features. This is even truer for complex architectural scenes, which require 3D models of high quality without any loss of detail in geometry or color. Illumination conditions influence the radiometric quality of images, as standard sensors cannot properly depict a wide range of intensities in the same scene. Indeed, overexposed or underexposed pixels cause irreplaceable information loss and degrade the digital representation. Images taken under extreme lighting conditions may thus be prohibitive for feature detection and extraction, and consequently for matching and 3D reconstruction. High Dynamic Range (HDR) images can help these operators because they broaden the limits of the illumination range that Standard or Low Dynamic Range (SDR/LDR) images can capture, thereby increasing the amount of detail contained in the image. Experimental results of this study support this assumption, examining state-of-the-art feature detectors applied to both standard dynamic range and HDR images.
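The way HDR capture recovers detail that a single exposure clips can be sketched as a weighted merge of bracketed exposures. This is a generic Debevec-style merge assuming a linear camera response, not the specific pipeline used in the study.

```python
import numpy as np

def merge_hdr(exposures, times):
    """Merge bracketed LDR exposures (values in [0, 1]) into a radiance map.

    Hat-shaped weights discount under/over-exposed pixels; a linear camera
    response is assumed for simplicity (real pipelines estimate it first).
    """
    exposures = np.asarray(exposures, float)
    times = np.asarray(times, float).reshape(-1, 1, 1)
    weights = 1.0 - np.abs(2.0 * exposures - 1.0)   # peak at mid-gray
    weights = np.maximum(weights, 1e-6)              # avoid divide-by-zero
    # Each exposure's per-pixel radiance estimate is pixel / exposure_time.
    return (weights * exposures / times).sum(0) / weights.sum(0)

# A scene with a dark and a bright patch, shot at 1/4 s and 1 s
radiance = np.array([[0.1, 0.9]])
shots = [np.clip(radiance * t, 0, 1) for t in (0.25, 1.0)]
hdr = merge_hdr(shots, [0.25, 1.0])
```

Feature detectors then run on a tone-mapped version of `hdr`, where both the dark and bright patches retain usable gradients.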
3D medical thermography device
NASA Astrophysics Data System (ADS)
Moghadam, Peyman
2015-05-01
In this paper, a novel handheld 3D medical thermography system is introduced. The proposed system consists of a thermal-infrared camera, a color camera and a depth camera rigidly attached in close proximity and mounted on an ergonomic handle. As a practitioner smoothly moves the device around the human body, the proposed system generates and builds up a precise 3D thermogram model by incorporating information from each new measurement in real time. The data are acquired in motion, thus providing multiple points of view. When processed, these multiple points of view are adaptively combined by taking into account the reliability of each individual measurement, which can vary with factors such as the angle of incidence, the distance between the device and the subject, environmental conditions, and other factors influencing the confidence in the thermal-infrared data at capture time. Finally, several case studies are presented to support the usability and performance of the proposed system.
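The confidence-weighted combination of overlapping thermal measurements can be sketched as follows. The weighting function here is an illustrative assumption; the paper does not specify its exact reliability model.

```python
import numpy as np

def measurement_confidence(angle_deg, distance_m, max_angle=75.0, max_dist=2.0):
    """Heuristic confidence in [0, 1] for one thermal reading.

    Confidence drops as the viewing angle moves away from the surface
    normal and as camera-to-subject distance grows (illustrative weighting
    only, standing in for the paper's unspecified reliability model).
    """
    angle_term = max(0.0, np.cos(np.radians(min(angle_deg, max_angle))))
    dist_term = max(0.0, 1.0 - distance_m / max_dist)
    return angle_term * dist_term

def fuse_temperatures(readings):
    """Confidence-weighted average of (temp_C, angle_deg, distance_m) readings."""
    temps = np.array([r[0] for r in readings])
    conf = np.array([measurement_confidence(r[1], r[2]) for r in readings])
    return float((conf * temps).sum() / conf.sum())

# Three views of the same surface point: the frontal close-up dominates
readings = [(36.8, 5, 0.5), (36.2, 60, 1.5), (37.4, 70, 1.8)]
fused = fuse_temperatures(readings)
```

Applied per voxel of the 3D thermogram, this keeps oblique, distant readings from corrupting well-observed regions.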
3D video-based deformation measurement of the pelvis bone under dynamic cyclic loading
2011-01-01
Background Dynamic three-dimensional (3D) deformation of the pelvic bones is a crucial factor in the successful design and longevity of complex orthopaedic oncological implants. The current solutions are often not very promising for the patient; it would therefore be valuable to measure the dynamic 3D deformation of the whole pelvic bone in order to obtain a more realistic dataset for better implant design. We therefore hypothesized that a material testing machine could be combined with a 3D video motion capture system, as used in clinical gait analysis, to measure the sub-millimetre deformation of a whole pelvis specimen. Method A pelvis specimen was placed in a standing position on a material testing machine. Passive reflective markers, traceable by the 3D video motion capture system, were fixed to the bony surface of the pelvis specimen. While a dynamic sinusoidal load was applied, the 3D movement of the markers was recorded by the cameras, and the 3D deformation of the pelvis specimen was then computed. The accuracy of the marker tracking was verified against a step-function 3D displacement curve generated with a manually driven 3D micro-motion stage. Results The accuracy of the measurement system depended on the number of cameras tracking a marker. During the stationary phase of the calibration procedure, the noise level for a marker seen by two cameras was ± 0.036 mm, and ± 0.022 mm if tracked by six cameras. The detectable 3D movement performed by the 3D micro-motion stage was smaller than the noise level of the 3D video motion capture system. The limiting factor of the setup was therefore the noise level, which resulted in a measurement accuracy of ± 0.036 mm for the dynamic test setup. Conclusion This 3D test setup opens new possibilities for dynamic testing of a wide range of materials, such as anatomical specimens, biomaterials, and their combinations.
The resulting 3D-deformation dataset can be used for a better estimation of material characteristics of the underlying structures. This is an important factor in a reliable biomechanical modelling and simulation as well as in a successful design of complex implants. PMID:21762533
2005-03-24
This high-resolution stereo anaglyph of Saturn's moon Enceladus, captured by NASA's Cassini spacecraft, shows a region of craters softened by time and torn apart by tectonic stresses. 3D glasses are necessary to view this image.
ERIC Educational Resources Information Center
Rowe, Jeremy; Razdan, Anshuman
The Partnership for Research in Spatial Modeling (PRISM) project at Arizona State University (ASU) developed modeling and analytic tools to respond to the limitations of two-dimensional (2D) data representations perceived by affiliated discipline scientists, and to take advantage of the enhanced capabilities of three-dimensional (3D) data that…
Personal Photo Enhancement Using Internet Photo Collections.
Zhang, Chenxi; Gao, Jizhou; Wang, Oliver; Georgel, Pierre; Yang, Ruigang; Davis, James; Frahm, Jan-Michael; Pollefeys, Marc
2013-04-26
Given the growth of Internet photo collections, we now have a visual index of all major cities and tourist sites in the world. However, it is still a difficult task to capture that perfect shot with your own camera when visiting these places, especially when your camera itself has limitations, such as a limited field of view. In this paper, we propose a framework to overcome the imperfections of personal photos of tourist sites using the rich information provided by large-scale Internet photo collections. Our method deploys state-of-the-art techniques for constructing initial 3D models from photo collections. The same techniques are then used to register personal photos to these models, allowing us to augment personal 2D images with 3D information. This strong scene prior allows us to address a number of traditionally challenging image enhancement tasks and achieve high-quality results using simple and robust algorithms. Specifically, we demonstrate automatic foreground segmentation, mono-to-stereo conversion, field-of-view expansion, and photometric enhancement, as well as automatic annotation with geo-location and tags. Our method clearly demonstrates some possible benefits of employing the rich information contained in online photo databases to efficiently enhance and augment one's own personal photos.
Comparison of Cyberware PX and PS 3D human head scanners
NASA Astrophysics Data System (ADS)
Carson, Jeremy; Corner, Brian D.; Crockett, Eric; Li, Peng; Paquette, Steven
2008-02-01
A common limitation of laser-line three-dimensional (3D) scanners is the inability to scan objects with surfaces that are either parallel to the laser line or that self-occlude. Filling in missing areas adds unwanted inaccuracy to the 3D model. Capturing the human head with a Cyberware PS head scanner is an example of obtaining a model where the incomplete areas are difficult to fill accurately. The PS scanner uses a single vertical laser line to illuminate the head and is unable to capture data at the top of the head, where the line of sight is tangent to the surface, and under the chin, an area occluded when the subject looks straight ahead. The Cyberware PX scanner was developed to obtain this missing 3D head data. The PX scanner uses two cameras offset at different angles to provide a more detailed head scan that captures surfaces missed by the PS scanner. The PX scanner cameras also use new technology to obtain color maps of higher resolution than those of the PS scanner. The two scanners were compared in terms of the amount of surface captured (surface area and volume) and the quality of head measurements relative to direct measurements obtained through standard anthropometry methods. Relative to the PS scanner, the PX head scans were more complete and provided the full set of head measurements, but actual measurement values, when available from both scanners, were about the same.
3D Geological Model for "LUSI" - a Deep Geothermal System
NASA Astrophysics Data System (ADS)
Sohrabi, Reza; Jansen, Gunnar; Mazzini, Adriano; Galvan, Boris; Miller, Stephen A.
2016-04-01
Geothermal applications require the correct simulation of flow and heat transport processes in porous media, and many of these media, like deep volcanic hydrothermal systems, host a certain degree of fracturing. This work aims to understand the heat and fluid transport within a newly born, sediment-hosted geothermal system, termed Lusi, that began erupting in 2006 in East Java, Indonesia. Our goal is to develop conceptual and numerical models capable of simulating multiphase flow within large-scale fractured reservoirs such as the Lusi region, with fractures of arbitrary size, orientation and shape. Additionally, these models can also address a number of other applications, including Enhanced Geothermal Systems (EGS), CO2 sequestration (Carbon Capture and Storage, CCS), and nuclear waste isolation. Fractured systems are ubiquitous, with a wide range of lengths and scales, making it difficult to develop a general model that can easily handle this complexity. We are developing a flexible continuum approach with an efficient, accurate numerical simulator based on an appropriate 3D geological model representing the structure of the deep geothermal reservoir. Using previous studies, borehole information and seismic data obtained in the framework of the Lusi Lab project (ERC grant n°308126), we present here the first 3D geological model of Lusi. This model is calculated using implicit 3D potential fields or multi-potential fields, depending on the geological context and complexity. The method is based on a geological pile containing the geological history of the area and the relationships between geological bodies, allowing automatic computation of intersections and volume reconstruction. Based on the 3D geological model, we developed a new meshing algorithm to create hexahedral octree meshes that transfer the structural geological information to 3D numerical simulations quantifying Thermal-Hydraulic-Mechanical-Chemical (THMC) physical processes.
Cryo-Scanning Electron Microscopy of Captured Cirrus Ice Particles
NASA Astrophysics Data System (ADS)
Magee, N. B.; Boaggio, K.; Bandamede, M.; Bancroft, L.; Hurler, K.
2016-12-01
We present the latest collection of high-resolution cryo-scanning electron microscopy images and microanalysis of cirrus ice particles captured by high-altitude balloon (ICE-Ball; see abstracts by K. Boaggio and M. Bandamede). Ice particle images and sublimation residues are derived from particles captured during approximately 15 balloon flights conducted in Pennsylvania and New Jersey over the past 12 months. Measurements include 3D digital elevation model reconstructions of ice particles and associated statistical analyses of entire particles as well as particle sub-facets and surfaces. This 3D analysis reveals that the morphologies of most captured ice particles deviate significantly from ideal habits, displaying geometric complexity and surface roughness at multiple measurable scales, ranging from hundreds of nanometers to hundreds of microns. The presentation suggests a potential path forward for representing scattering from a realistically complex array of ice particle shapes and surfaces.
3D cinema to 3DTV content adaptation
NASA Astrophysics Data System (ADS)
Yasakethu, L.; Blondé, L.; Doyen, D.; Huynh-Thu, Q.
2012-03-01
3D cinema and 3DTV have grown in popularity in recent years, and the recent success of 3D films presents filmmakers with a significant opportunity. In this paper we investigate whether this opportunity could be extended to the home in a meaningful way. The "3D" perceived when viewing stereoscopic content depends on the viewing geometry, which implies that stereoscopic 3D content should be captured for a specific viewing geometry in order to provide a satisfactory 3D experience. However, although it would be possible, it is clearly not viable to produce and transmit multiple streams of the same content for different screen sizes. To solve this problem, we analyze the performance of six different disparity-based transformation techniques that could be used for cinema-to-3DTV content conversion. Subjective tests are performed to evaluate the effectiveness of the algorithms in terms of depth effect, visual comfort and overall 3D quality. The resulting 3DTV experience is also compared to that of cinema. We show that by applying the proper transformation technique to content originally captured for cinema, it is possible to enhance the 3DTV experience. The selection of the appropriate transformation is highly dependent on the content characteristics.
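One simple disparity-based transformation, not necessarily among the six the paper evaluates, is a linear remap that preserves on-screen disparity in physical units and clamps it to the interocular limit:

```python
def adapt_disparity(d_pixels, src_width_px, src_width_m, dst_width_px,
                    dst_width_m, dst_max_disp_m=0.065):
    """Linearly rescale a cinema disparity (in pixels) for a TV screen.

    Keeps the on-screen disparity in metres the same on both displays,
    then clamps it near the human interocular distance (~65 mm) so that
    backgrounds never force the eyes to diverge. A deliberately simple
    linear remap for illustration; the paper compares six more elaborate
    transformations.
    """
    disp_m = d_pixels * (src_width_m / src_width_px)   # pixels -> metres
    disp_m = max(-dst_max_disp_m, min(dst_max_disp_m, disp_m))
    return disp_m * (dst_width_px / dst_width_m)       # metres -> TV pixels

# 12 m cinema screen at 2K vs. a 1 m living-room TV at 1920 px
tv_disp = adapt_disparity(10, src_width_px=2048, src_width_m=12.0,
                          dst_width_px=1920, dst_width_m=1.0)
```

Because the TV is physically much smaller, the same physical disparity occupies many more TV pixels; the clamp is what keeps extreme cinema disparities comfortable at home.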
Wallingford, Anna K; Loeb, Gregory M
2016-08-01
We investigated the influence of developmental conditions on adult morphology, reproductive arrest, and winter stress tolerance of the invasive pest of small fruit, Drosophila suzukii (Matsumura) (Diptera: Drosophilidae). Cooler rearing temperatures (15 °C) resulted in larger, darker "winter morph" (WM) adults than "summer morph" flies reared at optimal temperatures (25 °C). Abdominal pigmentation scores and body size measurements of laboratory-reared WMs were similar to those of D. suzukii females captured in late autumn in Geneva, NY. We evaluated reproductive diapause and cold hardiness in live-captured D. suzukii WMs as well as WMs reared in the laboratory from egg to adult under four developmental conditions: static cool temperatures (SWM; 15 °C, 12:12 h L:D), fluctuating temperatures (FWM; 20 °C L: 10 °C D, 12:12 h L:D), and static cool temperatures (15 °C, 12:12 h L:D) followed by posteclosion chilling (CWM; 10 °C) under short-day (SD; 12:12 h L:D) or long-day photoperiods (LD; 16:8 h L:D). Live-captured D. suzukii WMs and CWMs had longer preoviposition times than newly eclosed summer morph adults, indicating a reproductive diapause that was not observed in SWMs or FWMs. Additionally, recovery after acute freeze stress was not different between CWM-SD females and live captured WM females. More 7-d-old CWMs survived 0, -1, or - 3 °C freeze stress than summer morph adults, and more CWM-SD adults survived -3 °C freeze stress than CWM-LD adults. Survival after -3 °C freeze stress was significantly higher in diapausing, CWMs than nondiapausing SWMs and FWMs. © The Authors 2016. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Chen, Chia-Hsiung; Azari, David; Hu, Yu Hen; Lindstrom, Mary J.; Thelen, Darryl; Yen, Thomas Y.; Radwin, Robert G.
2015-01-01
Objective Marker-less 2D video tracking was studied as a practical means to measure upper limb kinematics for ergonomics evaluations. Background Hand activity level (HAL) can be estimated from speed and duty cycle. Accuracy was measured using a cross-correlation template-matching algorithm for tracking a region of interest on the upper extremities. Methods Ten participants performed a paced load transfer task while varying HAL (2, 4, and 5) and load (2.2 N, 8.9 N and 17.8 N). Speed and acceleration measured from 2D video were compared against ground-truth measurements from 3D infrared motion capture. Results The median absolute difference between 2D video and 3D motion capture was 86.5 mm/s for speed and 591 mm/s2 for acceleration, and less than 93 mm/s for speed and 656 mm/s2 for acceleration when camera pan and tilt were within ±30 degrees. Conclusion Single-camera 2D video had sufficient accuracy (< 100 mm/s) for evaluating HAL. Practitioner Summary This study demonstrated that 2D video tracking, with the camera located within ±30 degrees off the plane of motion, had sufficient accuracy relative to 3D motion capture to measure HAL for ascertaining the American Conference of Governmental Industrial Hygienists Threshold Limit Value® for repetitive motion in a simulated repetitive motion task. PMID:25978764
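The cross-correlation template matcher at the heart of such marker-less tracking can be sketched with a brute-force normalized cross-correlation search. This is an illustrative reconstruction; the study's actual implementation details are not specified here.

```python
import numpy as np

def track_roi(frame, template):
    """Locate `template` in `frame` via normalized cross-correlation.

    Returns the (row, col) of the best-matching window's top-left corner.
    Brute-force for clarity; production code would use an FFT-based
    correlation or OpenCV's matchTemplate.
    """
    th, tw = template.shape
    t = template - template.mean()
    best, best_rc = -np.inf, (0, 0)
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            w = frame[r:r + th, c:c + tw]
            w = w - w.mean()
            denom = np.sqrt((w ** 2).sum() * (t ** 2).sum())
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc

# Track a 3x3 patch that moved from (2, 2) to (4, 5) between frames
rng = np.random.default_rng(0)
frame0 = rng.random((12, 12))
template = frame0[2:5, 2:5].copy()
frame1 = rng.random((12, 12))
frame1[4:7, 5:8] = template          # paste the patch at its new location
row, col = track_roi(frame1, template)
```

Frame-to-frame displacements of the tracked region, scaled by frame rate and pixel size, yield the hand speed from which HAL is estimated.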
Mithila, Farha J; Oyola-Reynoso, Stephanie; Thuo, Martin M; Atkinson, Manza Bj
2016-01-01
Structural distortions due to hyperconjugation in organic molecules, such as norbornenes, are well captured in X-ray crystallographic data, but are sometimes difficult to visualize, especially for those who apply chemical knowledge but are not chemists. Crystal structures from the Cambridge database were downloaded and converted to .stl format. The structures were then printed at the desired scale using a 3D printer. Replicas of the crystal structures were accurately reproduced in scale, and any resulting distortions were clearly visible in the macroscale models. Through-space interactions, or the effect of through-space hyperconjugation, were illustrated through loss of symmetry or distortions thereof. The norbornene structures exhibit distortions that cannot be observed with conventional ball-and-stick modelling kits. We show that 3D printed models derived from crystallographic data capture even subtle distortions in molecules. We translate such crystallographic data into scaled-up models through 3D printing.
Advances in three-dimensional integral imaging: sensing, display, and applications [Invited].
Xiao, Xiao; Javidi, Bahram; Martinez-Corral, Manuel; Stern, Adrian
2013-02-01
Three-dimensional (3D) sensing and imaging technologies have been extensively researched for many applications in the fields of entertainment, medicine, robotics, manufacturing, industrial inspection, security, surveillance, and defense due to their diverse and significant benefits. Integral imaging is a passive multiperspective imaging technique, which records multiple two-dimensional images of a scene from different perspectives. Unlike holography, it can capture scenes, such as outdoor events, with incoherent or ambient light. Integral imaging can display a true 3D color image with full parallax and continuous viewing angles using incoherent light; thus it does not suffer from speckle degradation. Because of its unique properties, integral imaging has been revived over the past decade or so as a promising approach for massive 3D commercialization. A series of key articles on this topic have appeared in the OSA journals, including Applied Optics. Thus, it is fitting that this Commemorative Review presents an overview of the literature on the physical principles and applications of integral imaging. Several data capture configurations, reconstruction, and display methods are overviewed. In addition, applications including 3D underwater imaging, 3D imaging in photon-starved environments, 3D tracking of occluded objects, 3D optical microscopy, and 3D polarimetric imaging are reviewed.
NASA Astrophysics Data System (ADS)
Ardini, Matteo; Golia, Giordana; Passaretti, Paolo; Cimini, Annamaria; Pitari, Giuseppina; Giansanti, Francesco; Leandro, Luana Di; Ottaviano, Luca; Perrozzi, Francesco; Santucci, Sandro; Morandi, Vittorio; Ortolani, Luca; Christian, Meganne; Treossi, Emanuele; Palermo, Vincenzo; Angelucci, Francesco; Ippoliti, Rodolfo
2016-03-01
Graphene oxide (GO) is rapidly emerging worldwide as a breakthrough precursor material for next-generation devices. However, this requires the transition of its two-dimensional layered structure into more accessible three-dimensional (3D) arrays. Peroxiredoxins (Prx) are a family of multitasking redox enzymes that self-assemble into ring-like architectures. Taking advantage of both their symmetric structure and function, 3D reduced GO-based composites are hereby built up. Results reveal that the "double-faced" Prx rings can adhere flat on single GO layers and partially reduce them by their sulfur-containing amino acids, driving their stacking into 3D multi-layer reduced GO-Prx composites. This process occurs in aqueous solution at a very low GO concentration, i.e. 0.2 mg ml-1. Further, protein engineering allows the Prx ring to be enriched with metal binding sites inside its lumen. This feature is exploited both to capture presynthesized gold nanoparticles and to grow palladium nanoparticles in situ, paving the way to straightforward and "green" routes to 3D reduced GO-metal composite materials. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr08632a
3D SAPIV particle field reconstruction method based on adaptive threshold.
Qu, Xiangju; Song, Yang; Jin, Ying; Li, Zhenhua; Wang, Xuezhen; Guo, ZhenYan; Ji, Yunjing; He, Anzhi
2018-03-01
Particle image velocimetry (PIV) is an essential flow field diagnostic technique that provides instantaneous velocimetry information non-intrusively. Three-dimensional (3D) PIV methods can supply a full understanding of 3D structure, the complete stress tensor, and the vorticity vector in complex flows. In synthetic aperture particle image velocimetry (SAPIV), the flow field can be measured at high particle densities by different cameras viewing from the same direction. During SAPIV particle reconstruction, particles are commonly reconstructed by manually setting a threshold to filter out unfocused particles in the refocused images. In this paper, the particle intensity distribution in refocused images is analyzed, and a SAPIV particle field reconstruction method based on an adaptive threshold is presented. By using the adaptive threshold to filter the 3D measurement volume as a whole, the 3D location information of the focused particles can be reconstructed. The cross-correlations between the images captured by the cameras and the images projected from the reconstructed particle field are calculated for different threshold values. The optimal threshold is determined by cubic curve fitting and is defined as the threshold value at which the correlation coefficient reaches its maximum. A numerical simulation of a 16-camera array and a particle field at two adjacent time events quantitatively evaluates the performance of the proposed method. An experimental system consisting of an array of 16 cameras was used to reconstruct four adjacent frames in a vortex flow field. The results show that the proposed reconstruction method can effectively reconstruct 3D particle fields.
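The threshold selection rule, sweeping candidate thresholds, correlating the captured data with the thresholded reconstruction, and taking the maximum of a fitted cubic, can be sketched on a 1D toy volume. The toy data and particle model are assumptions for illustration.

```python
import numpy as np

def fit_optimal_threshold(thresholds, correlations):
    """Cubic-fit the correlation-vs-threshold samples and return the
    threshold where the fitted curve peaks (the paper's selection rule)."""
    coeffs = np.polyfit(thresholds, correlations, 3)
    dense = np.linspace(thresholds[0], thresholds[-1], 1000)
    return dense[np.argmax(np.polyval(coeffs, dense))]

# Toy refocused image: two in-focus particles (bright) over defocus haze
rng = np.random.default_rng(1)
captured = np.zeros(100)
captured[[20, 70]] = 1.0                       # true (focused) particles
refocused = captured + 0.3 * rng.random(100)   # haze from out-of-focus light

thresholds = np.linspace(0.05, 0.95, 10)
correlations = []
for t in thresholds:
    projected = (refocused >= t).astype(float)  # keep voxels above threshold
    correlations.append(np.corrcoef(captured, projected)[0, 1])

t_opt = fit_optimal_threshold(thresholds, np.array(correlations))
```

Low thresholds admit haze and depress the correlation, so the fitted maximum lands in the range that cleanly separates focused particles from defocus blur.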
NASA Astrophysics Data System (ADS)
Harryandi, Sheila
The Niobrara/Codell unconventional tight reservoir play at Wattenberg Field, Colorado has potentially two billion barrels of oil equivalent, requiring hundreds of wells to access this resource. The Reservoir Characterization Project (RCP), in conjunction with Anadarko Petroleum Corporation (APC), began reservoir characterization research to determine how to increase reservoir recovery while maximizing operational efficiency. Past research results indicate that targeting the highest rock quality within the reservoir section is optimal for improving horizontal well stimulation through multi-stage hydraulic fracturing. The reservoir is highly heterogeneous, consisting of alternating chalks and marls. Modeling the facies within the reservoir is very important for capturing the heterogeneity at the well-bore scale; this heterogeneity is then upscaled from the borehole scale to the seismic scale to distribute the heterogeneity in the inter-well space. I performed facies clustering analysis to create several facies defining the reservoir interval in the RCP Wattenberg Field study area. Each facies can be expressed in terms of a range of rock property values obtained from wells by cluster analysis. I used the facies classification from the wells to guide the pre-stack seismic inversion and multi-attribute transform. The seismic data extended the facies and rock quality information from the wells. From the 3D facies model, I generated a facies volume capturing the reservoir heterogeneity throughout a ten-square-mile study area within the field. Recommendations are made based on the facies modeling, including locations for future hydraulic fracturing/re-fracturing treatments to improve recovery from the reservoir, and potential deeper intervals for future exploration drilling targets.
Easy and Fast Reconstruction of a 3D Avatar with an RGB-D Sensor.
Mao, Aihua; Zhang, Hong; Liu, Yuxin; Zheng, Yinglong; Li, Guiqing; Han, Guoqiang
2017-05-12
This paper proposes a new, easy and fast 3D avatar reconstruction method using an RGB-D sensor. Users can easily perform human body scanning and modeling with just a personal computer and a single RGB-D sensor, such as a Microsoft Kinect, within a small workspace in their home or office. To make the reconstruction of 3D avatars easy and fast, a new data capture strategy is proposed for efficient human body scanning, which captures only 18 frames from six views at a close scanning distance to fully cover the body; meanwhile, efficient alignment algorithms are presented to locally align the data frames within a single view and then globally align them across views based on pairwise correspondences. The method does not rely on shape priors or subdivision tools to synthesize the model, which helps reduce modeling complexity. Experimental results indicate that the method obtains accurate reconstructed 3D avatar models and runs faster than similar approaches. This research offers a useful tool for manufacturers to quickly and economically create 3D avatars for product design, entertainment and online shopping.
3D shape representation with spatial probabilistic distribution of intrinsic shape keypoints
NASA Astrophysics Data System (ADS)
Ghorpade, Vijaya K.; Checchin, Paul; Malaterre, Laurent; Trassoudaine, Laurent
2017-12-01
The accelerated advancement in modeling, digitizing, and visualizing techniques for 3D shapes has led to an increasing amount of 3D model creation and usage, thanks to 3D sensors that are readily available and easy to use. As a result, determining the similarity between 3D shapes has become consequential and is a fundamental task in shape-based recognition, retrieval, clustering, and classification. Several decades of research in Content-Based Information Retrieval (CBIR) have resulted in diverse techniques for 2D and 3D shape or object classification/retrieval and many benchmark data sets. In this article, a novel technique for 3D shape representation and object classification is proposed based on analyses of the spatial, geometric distributions of 3D keypoints. These distributions capture the intrinsic geometric structure of 3D objects. The result of the approach is a probability distribution function (PDF) produced from the spatial disposition of 3D keypoints, which are stable on the object surface and invariant to pose changes. Each class or instance of an object can be uniquely represented by a PDF. This shape representation is robust yet conceptually simple, easy to implement, and fast to compute. Both Euclidean and topological spaces on the object's surface are considered when building the PDFs. Topology-based geodesic distances between keypoints exploit the non-planar surface properties of the object. The performance of the novel shape signature is tested through object classification accuracy. The classification efficacy of the new shape analysis method is evaluated on a new dataset acquired with a Time-of-Flight camera, and a comparative evaluation against state-of-the-art methods is performed on a standard benchmark dataset. Experimental results demonstrate superior classification performance of the new approach on RGB-D and depth data.
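The signature construction can be sketched as a normalized histogram (an empirical PDF) of pairwise keypoint distances. This simplified version uses Euclidean distances only and synthetic corner keypoints; the paper also uses geodesic distances on the object surface.

```python
import numpy as np

def keypoint_pdf(keypoints, bins=16):
    """Histogram of pairwise Euclidean keypoint distances, normalized to a PDF.

    A simplified, Euclidean-only version of the paper's signature (the
    authors additionally use geodesic distances on the surface).
    """
    kp = np.asarray(keypoints, float)
    i, j = np.triu_indices(len(kp), k=1)
    d = np.linalg.norm(kp[i] - kp[j], axis=1)
    d = d / d.max()                           # scale invariance
    hist, _ = np.histogram(d, bins=bins, range=(0, 1))
    return hist / hist.sum()

def pdf_distance(p, q):
    """L1 distance between two shape PDFs (smaller = more similar)."""
    return float(np.abs(p - q).sum())

# A cube's PDF should match a rotated copy of itself (pose invariance)
# better than a degenerate flattened shape
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)])
theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta), 0],
                [np.sin(theta), np.cos(theta), 0], [0, 0, 1]])
flat = cube * np.array([1, 1, 0])
d_same = pdf_distance(keypoint_pdf(cube), keypoint_pdf(cube @ rot.T))
d_diff = pdf_distance(keypoint_pdf(cube), keypoint_pdf(flat))
```

Rotation leaves all pairwise distances unchanged, so `d_same` is zero while the flattened shape produces a clearly different PDF; classification then reduces to nearest-PDF matching.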
Natural Interaction Based Online Military Boxing Learning System
ERIC Educational Resources Information Center
Yang, Chenglei; Wang, Lu; Sun, Bing; Yin, Xu; Wang, Xiaoting; Liu, Li; Lu, Lin
2013-01-01
Military boxing, a kind of Chinese martial arts, is widespread and health beneficial. In this paper, the authors introduce a military boxing learning system realized by 3D motion capture, Web3D and 3D interactive technologies. The interactions with the system are natural and intuitive. Users can observe and learn the details of each action of the…
AntigenMap 3D: an online antigenic cartography resource.
Barnett, J Lamar; Yang, Jialiang; Cai, Zhipeng; Zhang, Tong; Wan, Xiu-Feng
2012-05-01
Antigenic cartography is a useful technique to visualize and minimize errors in immunological data by projecting antigens onto a 2D or 3D map. However, a 2D map may not be sufficient to capture the antigenic relationships in high-dimensional immunological data. AntigenMap 3D presents an online, interactive, and robust resource for constructing and visualizing 3D antigenic cartography. AntigenMap 3D can be applied to identify antigenic variants and vaccine strain candidates for pathogens with rapid antigenic variation, such as influenza A virus. http://sysbio.cvm.msstate.edu/AntigenMap3D
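The projection step behind antigenic cartography can be illustrated with classical multidimensional scaling, which embeds a distance matrix into low-dimensional coordinates. This is a minimal sketch of the general idea, not AntigenMap 3D's actual pipeline (which additionally handles noisy and incomplete titer data):

```python
import numpy as np

def mds_3d(D):
    """Classical multidimensional scaling: embed a symmetric distance
    matrix D into 3D coordinates via the double-centred Gram matrix."""
    D = np.asarray(D, dtype=float)
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centring matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centred Gram matrix
    w, V = np.linalg.eigh(B)                   # ascending eigenvalues
    idx = np.argsort(w)[::-1][:3]              # top 3 components
    L = np.sqrt(np.clip(w[idx], 0.0, None))
    return V[:, idx] * L                       # n x 3 coordinates
```

For a distance matrix that is exactly Euclidean and of rank at most three, the embedding reproduces all pairwise distances.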
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnston, Henry; Wang, Cong; Winterfeld, Philip
An efficient modeling approach is described for incorporating arbitrary 3D discrete fractures, such as hydraulic fractures or faults, into models of fracture-dominated fluid flow and heat transfer in fractured geothermal reservoirs. This technique allows 3D discrete fractures to be discretized independently of the surrounding rock volume and inserted explicitly into a primary fracture/matrix grid that is generated without the 3D discrete fractures included a priori. An effective computational algorithm is developed to discretize these 3D discrete fractures and construct local connections between the 3D fractures and the fracture/matrix grid blocks representing the surrounding rock volume. The constructed gridding information on the 3D fractures is then added to the primary grid. This embedded fracture modeling approach can be implemented directly in a geothermal reservoir simulator via the integral finite difference (IFD) method or with TOUGH2 technology. The approach is very promising and computationally efficient for handling realistic 3D discrete fractures with complicated geometries, connections, and spatial distributions. Compared with other fracture modeling approaches, it avoids cumbersome 3D unstructured local refining procedures and increases computational efficiency by simplifying the Jacobian matrix size and sparsity, while keeping sufficient accuracy. Several numerical simulations are presented to demonstrate the utility and robustness of the proposed technique. Our numerical experiments show that this approach captures all the key patterns of fluid flow and heat transfer dominated by fractures in these cases. Thus, this approach is readily applicable to the simulation of fractured geothermal reservoirs with both artificial and natural fractures.
Performance analysis of three-dimensional ridge acquisition from live finger and palm surface scans
NASA Astrophysics Data System (ADS)
Fatehpuria, Abhishika; Lau, Daniel L.; Yalla, Veeraganesh; Hassebrook, Laurence G.
2007-04-01
Fingerprints are one of the most commonly used and relied-upon biometric technologies. However, the captured fingerprint image is often far from ideal due to imperfect acquisition techniques, which can be slow and cumbersome to use without providing complete fingerprint information. Most of the difficulties arise from the contact of the fingerprint surface with the sensor platen. To overcome these difficulties we have been developing a noncontact scanning system that acquires a 3D scan of a finger at sufficiently high resolution and then converts it into a 2D rolled-equivalent image. In this paper, we describe quantitative measures for evaluating scanner performance. Specifically, we use image software components developed by the National Institute of Standards and Technology to derive our performance metrics. Of the eleven metrics identified, three were found to be most suitable for evaluating scanner performance. A comparison is also made between 2D fingerprint images obtained by traditional means and 2D images obtained by unrolling the 3D scans, and the quality of the acquired scans is quantified using the metrics.
Phase 1 Development Report for the SESSA Toolkit.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knowlton, Robert G.; Melton, Brad J; Anderson, Robert J.
The Site Exploitation System for Situational Awareness (SESSA) toolkit, developed by Sandia National Laboratories (SNL), is a comprehensive decision support system for crime scene data acquisition and Sensitive Site Exploitation (SSE). SESSA is an outgrowth of another SNL-developed decision support system, the Building Restoration Operations Optimization Model (BROOM), a hardware/software solution for data acquisition, data management, and data analysis. SESSA was designed to meet forensic crime scene needs as defined by the DoD's Military Criminal Investigation Organization (MCIO). SESSA is a very comprehensive toolkit with a considerable amount of database information managed through a Microsoft SQL (Structured Query Language) database engine, a Geographical Information System (GIS) engine that provides comprehensive mapping capabilities, and an intuitive Graphical User Interface (GUI). An electronic sketch pad module is included. The system can also efficiently generate the forms needed for forensic crime scene investigations (e.g., evidence submittal, laboratory requests, and scene notes). SESSA allows the user to capture photos on site, and can read and generate barcode labels that limit transcription errors. SESSA runs on PC computers running Windows 7, but is optimized for touch-screen tablet computers running Windows for ease of use at crime scenes and on SSE deployments. A prototype system for 3-dimensional (3D) mapping and measurements was also developed to complement the SESSA software. The mapping system employs a visual/depth sensor that captures data to create 3D visualizations of an interior space and to make distance measurements with centimeter-level accuracy. Output of this 3D Model Builder module provides a virtual 3D "walk-through" of a crime scene. The 3D mapping system is much less expensive and easier to use than competitive systems.
This document covers the basic installation and operation of the SESSA toolkit in order to give the user enough information to start using the toolkit. SESSA is currently a prototype system and this documentation covers the initial release of the toolkit. Funding for SESSA was provided by the Department of Defense (DoD), Assistant Secretary of Defense for Research and Engineering (ASD(R&E)) Rapid Fielding (RF) organization. The project was managed by the Defense Forensic Science Center (DFSC), formerly known as the U.S. Army Criminal Investigation Laboratory (USACIL). ACKNOWLEDGEMENTS: The authors wish to acknowledge the funding support for the development of the Site Exploitation System for Situational Awareness (SESSA) toolkit from the Department of Defense (DoD), Assistant Secretary of Defense for Research and Engineering (ASD(R&E)) Rapid Fielding (RF) organization. The project was managed by the Defense Forensic Science Center (DFSC), formerly known as the U.S. Army Criminal Investigation Laboratory (USACIL). Special thanks to Mr. Garold Warner of DFSC, who served as the Project Manager. Individuals who worked on the design, functional attributes, algorithm development, system architecture, and software programming include: Robert Knowlton, Brad Melton, Robert Anderson, and Wendy Amai.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jenkins, C; Xing, L; Yu, S
Purpose: A correct body contour is essential for the accuracy of dose calculation in radiation therapy. While modern medical imaging technologies provide highly accurate representations of body contours, there are times when a patient's anatomy cannot be fully captured or there is a lack of easy access to CT/MRI scanning. Recently, handheld cameras have emerged that are capable of performing three-dimensional (3D) scans of patient surface anatomy. By combining 3D camera and medical imaging data, the patient's surface contour can be fully captured. Methods: A proof-of-concept system matches a patient surface model, created using a handheld stereo depth camera (DC), to the available areas of a body contour segmented from a CT scan. The matched surface contour is then converted to a DICOM structure and added to the CT dataset to provide additional contour information. In order to evaluate the system, a 3D model of a patient was created by segmenting the body contour with a treatment planning system (TPS) and fabricated with a 3D printer. A DC and associated software were used to create a 3D scan of the printed phantom. The surface created by the camera was then registered to a CT model that had been cropped to simulate missing scan data. The aligned surface was then imported into the TPS and compared with the originally segmented contour. Results: The RMS error for the alignment between the camera and cropped CT models was 2.26 mm. The mean distance between the aligned camera surface and the ground-truth model was −1.23 ± 2.47 mm. Maximum deviations were < 1 cm and occurred in areas of high concavity or where anatomy was close to the couch. Conclusion: This proof-of-concept study shows an accurate, easy, and affordable method of extending medical imaging for radiation therapy planning using 3D cameras, without additional radiation. Intel provided the camera hardware used in this study.
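The kind of surface-to-surface error metric reported above can be illustrated with a nearest-neighbour point-cloud RMS. A minimal sketch under stated assumptions: the study compares registered surfaces from a TPS, while this brute-force version works on small raw point clouds only:

```python
import numpy as np

def surface_rms(scan, reference):
    """RMS of nearest-neighbour distances from each scanned point to a
    reference point cloud (brute force; O(N*M) memory)."""
    scan = np.asarray(scan, dtype=float)
    ref = np.asarray(reference, dtype=float)
    d2 = ((scan[:, None, :] - ref[None, :, :]) ** 2).sum(-1)
    nearest = np.sqrt(d2.min(axis=1))           # per-point closest distance
    return float(np.sqrt((nearest ** 2).mean()))
```

A flat grid offset by a constant 0.1 units along z should report an RMS of exactly 0.1.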
Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano
2015-07-01
In the field of orthodontic planning, the creation of a complete digital dental model to simulate and predict treatments is of utmost importance. Nowadays, orthodontists use panoramic radiographs (PAN) and dental crown representations obtained by optical scanning. However, these data do not contain any 3D information regarding tooth root geometries. A reliable orthodontic treatment should instead take into account entire geometrical models of dental shapes in order to better predict tooth movements. This paper presents a methodology to create complete 3D patient dental anatomies by combining digital mouth models and panoramic radiographs. The modeling process is based on using crown surfaces, reconstructed by optical scanning, and root geometries, obtained by adapting anatomical CAD templates over patient specific information extracted from radiographic data. The radiographic process is virtually replicated on crown digital geometries through the Discrete Radon Transform (DRT). The resulting virtual PAN image is used to integrate the actual radiographic data and the digital mouth model. This procedure provides the root references on the 3D digital crown models, which guide a shape adjustment of the dental CAD templates. The entire geometrical models are finally created by merging dental crowns, captured by optical scanning, and root geometries, obtained from the CAD templates. Copyright © 2015 Elsevier Ltd. All rights reserved.
A novel 3D shape descriptor for automatic retrieval of anatomical structures from medical images
NASA Astrophysics Data System (ADS)
Nunes, Fátima L. S.; Bergamasco, Leila C. C.; Delmondes, Pedro H.; Valverde, Miguel A. G.; Jackowski, Marcel P.
2017-03-01
Content-based image retrieval (CBIR) aims at retrieving from a database objects that are similar to an object provided by a query, by taking into consideration a set of extracted features. While CBIR has been widely applied in the two-dimensional image domain, the retrieval of 3D objects from medical image datasets using CBIR remains to be explored. In this context, the development of descriptors that can capture information specific to organs or structures is desirable. In this work, we focus on the retrieval of two anatomical structures commonly imaged by Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) techniques, the left ventricle of the heart and blood vessels. Towards this aim, we developed the Area-Distance Local Descriptor (ADLD), a novel 3D local shape descriptor that employs mesh geometry information, namely facet area and distance from centroid to surface, to identify shape changes. Because ADLD only considers surface meshes extracted from volumetric medical images, it substantially diminishes the amount of data to be analyzed. A 90% precision rate was obtained when retrieving both convex (left ventricle) and non-convex structures (blood vessels), allowing for detection of abnormalities associated with changes in shape. Thus, ADLD has the potential to aid in the diagnosis of a wide range of vascular and cardiac diseases.
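The two mesh quantities the descriptor is named for, facet area and distance from the centroid to the surface, can be computed per facet as follows. This is an illustrative sketch only; the published descriptor's binning and aggregation over the mesh are not reproduced:

```python
import numpy as np

def area_distance_features(vertices, faces):
    """Per-facet (area, centroid-to-face-center distance) pairs for a
    triangle mesh: the raw geometric inputs an ADLD-style descriptor
    would be built from."""
    V = np.asarray(vertices, dtype=float)
    F = np.asarray(faces, dtype=int)
    tri = V[F]                                  # (m, 3, 3) triangle corners
    e1 = tri[:, 1] - tri[:, 0]
    e2 = tri[:, 2] - tri[:, 0]
    areas = 0.5 * np.linalg.norm(np.cross(e1, e2), axis=1)
    centroid = V.mean(axis=0)                   # mesh centroid
    face_centers = tri.mean(axis=1)
    dists = np.linalg.norm(face_centers - centroid, axis=1)
    return areas, dists
```

For a single right triangle with unit legs, the area is 0.5 and the mesh centroid coincides with the face center, so the distance is 0.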
NASA Spacecraft Captures Image of Brazil Flooding
2011-01-19
On Jan. 18, 2011, NASA Terra spacecraft captured this 3-D perspective image of the city of Nova Friburgo, Brazil. A week of torrential rains triggered a series of deadly mudslides and floods. More details about this image at the Photojournal.
Depth inpainting by tensor voting.
Kulkarni, Mandar; Rajagopalan, Ambasamudram N
2013-06-01
Depth maps captured by range scanning devices or by using optical cameras often suffer from missing regions due to occlusions, reflectivity, limited scanning area, sensor imperfections, etc. In this paper, we propose a fast and reliable algorithm for depth map inpainting using the tensor voting (TV) framework. For less complex missing regions, local edge and depth information is utilized for synthesizing missing values. The depth variations are modeled by local planes using 3D TV, and missing values are estimated using plane equations. For large and complex missing regions, we collect and evaluate depth estimates from self-similar (training) datasets. We align the depth maps of the training set with the target (defective) depth map and evaluate the goodness of depth estimates among candidate values using 3D TV. We demonstrate the effectiveness of the proposed approaches on real as well as synthetic data.
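For the simpler missing regions, the "local plane" idea can be sketched as a least-squares plane fit over the known depth pixels, with the hole filled from the plane equation. This is a hedged illustration, not the paper's tensor-voting implementation (which handles edges and large complex holes):

```python
import numpy as np

def fill_depth_with_plane(depth, mask):
    """Fill missing depth values (mask == True) by fitting a single plane
    z = a*x + b*y + c to the known pixels and evaluating it in the hole."""
    ys, xs = np.nonzero(~mask)                  # known pixel coordinates
    A = np.c_[xs, ys, np.ones(len(xs))]
    coef, *_ = np.linalg.lstsq(A, depth[~mask], rcond=None)
    out = depth.copy()
    ys_m, xs_m = np.nonzero(mask)               # missing pixel coordinates
    out[mask] = np.c_[xs_m, ys_m, np.ones(len(xs_m))] @ coef
    return out
```

On a perfectly planar depth map, punching a hole and refilling it recovers the original values exactly.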
Assessing the Potential of Low-Cost 3D Cameras for the Rapid Measurement of Plant Woody Structure
Nock, Charles A; Taugourdeau, Olivier; Delagrange, Sylvain; Messier, Christian
2013-01-01
Detailed 3D plant architectural data have numerous applications in plant science, but many existing approaches to 3D data collection are time-consuming and/or require costly equipment. Recently, there has been rapid growth in the availability of low-cost 3D cameras and related open-source software. 3D cameras may provide measurements of key components of plant architecture, such as stem diameters and lengths; however, few tests of 3D cameras for the measurement of plant architecture have been conducted. Here, we measured Salix branch segments ranging from 2–13 mm in diameter with an Asus Xtion camera to quantify the limits and accuracy of branch diameter measurement with a 3D camera. By scanning at a variety of distances we also quantified the effect of scanning distance. In addition, we tested the ability of KinFu, a program for continuous 3D object scanning and modeling, and of other similar software to accurately record stem diameters and capture plant form (<3 m in height). Given its ability to accurately capture the diameter of branches >6 mm, the Asus Xtion may provide a novel method for collecting 3D data on the branching architecture of woody plants. Improvements in camera measurement accuracy and available software are likely to further improve the utility of 3D cameras for the plant sciences in the future. PMID:24287538
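A branch diameter can be estimated from a cross-section of scanned points with an algebraic circle fit. This sketch is illustrative only (the study relied on the cameras' own software); the Kasa fit shown here assumes the cross-section slice has already been projected to 2D:

```python
import numpy as np

def fit_diameter(points_2d):
    """Algebraic (Kasa) circle fit to 2D cross-section points.

    Solves x^2 + y^2 = 2*cx*x + 2*cy*y + c in least squares, then
    recovers the radius from r^2 = cx^2 + cy^2 + c. Returns the diameter.
    """
    p = np.asarray(points_2d, dtype=float)
    x, y = p[:, 0], p[:, 1]
    A = np.c_[2 * x, 2 * y, np.ones(len(p))]
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return 2.0 * np.sqrt(c + cx ** 2 + cy ** 2)
```

On noiseless points sampled from a circle the fit is exact; on real scan data, noise and partial arcs would bias the estimate.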
3-D Flow Visualization with a Light-field Camera
NASA Astrophysics Data System (ADS)
Thurow, B.
2012-12-01
Light-field cameras have received attention recently due to their ability to acquire photographs that can be computationally refocused after they have been acquired. In this work, we describe the development of a light-field camera system for 3D visualization of turbulent flows. The camera developed in our lab, also known as a plenoptic camera, uses an array of microlenses mounted next to an image sensor to resolve both the position and angle of light rays incident upon the camera. For flow visualization, the flow field is seeded with small particles that follow the fluid's motion and are imaged using the camera and a pulsed light source. The tomographic MART algorithm is then applied to the light-field data in order to reconstruct a 3D volume of the instantaneous particle field. 3D, three-component (3C) velocity vectors are then determined from a pair of 3D particle fields using conventional cross-correlation algorithms. As an illustration of the concept, 3D/3C velocity measurements of a turbulent boundary layer produced on the wall of a conventional wind tunnel are presented. Future experiments are planned to use the camera to study the influence of wall permeability on the 3D structure of the turbulent boundary layer.
Figure captions: (1) Schematic illustrating the concept of a plenoptic camera, where each pixel represents both the position and angle of light rays entering the camera; this information can be used to computationally refocus an image after it has been acquired. (2) Instantaneous 3D velocity field of a turbulent boundary layer determined using light-field data captured by a plenoptic camera.
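The tomographic step can be illustrated with a minimal multiplicative ART (MART) iteration on a toy linear system; the weight matrix encoding the real plenoptic camera geometry is assumed given, and relaxation and stopping criteria are simplified:

```python
import numpy as np

def mart(A, b, n_iter=200, mu=1.0):
    """Multiplicative ART: reconstruct a non-negative vector x from ray
    projections b = A @ x, updating voxels row by row with a
    multiplicative correction (the standard tomographic-PIV scheme)."""
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            proj = A[i] @ x
            if proj > 0 and b[i] > 0:
                x *= (b[i] / proj) ** (mu * A[i])   # multiplicative update
            elif b[i] == 0:
                x[A[i] > 0] = 0.0                   # zero rays kill voxels
    return x
```

On a small consistent system with a unique non-negative solution, the iteration converges to that solution.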
Nearly automatic motion capture system for tracking octopus arm movements in 3D space.
Zelman, Ido; Galun, Meirav; Akselrod-Ballin, Ayelet; Yekutieli, Yoram; Hochner, Binyamin; Flash, Tamar
2009-08-30
Tracking animal movements in 3D space is an essential part of many biomechanical studies. The most popular technique for human motion capture uses markers placed on the skin which are tracked by a dedicated system. However, this technique may be inadequate for tracking animal movements, especially when it is impossible to attach markers to the animal's body either because of its size or shape or because of the environment in which the animal performs its movements. Attaching markers to an animal's body may also alter its behavior. Here we present a nearly automatic markerless motion capture system that overcomes these problems and successfully tracks octopus arm movements in 3D space. The system is based on three successive tracking and processing stages. The first stage uses a recently presented segmentation algorithm to detect the movement in a pair of video sequences recorded by two calibrated cameras. In the second stage, the results of the first stage are processed to produce 2D skeletal representations of the moving arm. Finally, the 2D skeletons are used to reconstruct the octopus arm movement as a sequence of 3D curves varying in time. Motion tracking, segmentation and reconstruction are especially difficult problems in the case of octopus arm movements because of the deformable, non-rigid structure of the octopus arm and the underwater environment in which it moves. Our successful results suggest that the motion-tracking system presented here may be used for tracking other elongated objects.
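The final reconstruction stage, lifting matched 2D skeleton points from two calibrated cameras to 3D, can be sketched with linear (DLT) triangulation. A minimal illustration; the paper's skeletonization and curve-fitting details are omitted, and the projection matrices are assumed known from calibration:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from its projections
    in two calibrated cameras (P1, P2 are 3x4 projection matrices;
    x1, x2 are the matched pixel coordinates)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                                  # null vector of A
    return X[:3] / X[3]                         # dehomogenise
```

Applied to every matched skeleton point pair, this yields the 3D curve of the arm at each frame; in the noiseless case the recovery is exact.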
Oliveira, G H; Palermo-Neto, J
1995-01-01
A gas-liquid chromatographic method with an electron-capture detector was applied to the determination of 2,4-dichlorophenoxyacetic acid (2,4-D) in the serum and brain tissue of rats acutely intoxicated with the dimethylamine salt of 2,4-D. After extraction with ethyl ether, 2,4-D derivatization was performed using 2-chloroethanol and BCl3. The average recovery values found for serum and brain tissue were 98.5 +/- 4.8% and 93.3 +/- 7.5%, respectively. The sensitivity limit of the method was 250 ng/mL for serum and 300 ng/g for brain tissue. The toxic effects of 2,4-D in rats were observed within half an hour of its oral administration. The results suggest that the toxic mechanism of 2,4-D is related to an action on the central nervous system.
Sun, Jie; Li, Zhengdong; Pan, Shaoyou; Feng, Hao; Shao, Yu; Liu, Ningguo; Huang, Ping; Zou, Donghua; Chen, Yijiu
2018-05-01
The aim of the present study was to develop an improved method, using MADYMO multi-body simulation software combined with an optimization method and three-dimensional (3D) motion capture, for identifying the pre-impact conditions of a cyclist (walking or cycling) involved in a vehicle-bicycle accident. First, a 3D motion capture system was used to analyze coupled motions of a volunteer while walking and cycling. The motion capture results were used to define the posture of the human model during walking and cycling simulations. Then, cyclist, bicycle, and vehicle models were developed. Pre-impact parameters of the models were treated as unknown design variables. Finally, a multi-objective genetic algorithm, the nondominated sorting genetic algorithm II, was used to find optimal solutions. The objective function values for the walking scenario were significantly lower than those for the cycling scenario; thus, the cyclist was more likely to have been walking with the bicycle than riding it. In the most closely matched result found, all observed contact points matched and the injury parameters correlated well with the real injuries sustained by the cyclist. Based on the reconstruction of a real accident, the present study indicates that MADYMO multi-body simulation software, combined with an optimization method and 3D motion capture, can be used to identify the pre-impact conditions of a cyclist involved in a vehicle-bicycle accident. Copyright © 2018. Published by Elsevier Ltd.
Enhanced RGB-D Mapping Method for Detailed 3D Indoor and Outdoor Modeling
Tang, Shengjun; Zhu, Qing; Chen, Wu; Darwish, Walid; Wu, Bo; Hu, Han; Chen, Min
2016-01-01
RGB-D sensors (sensors with an RGB camera and a depth camera) are novel sensing systems that capture RGB images along with pixel-wise depth information. Although they are widely used in various applications, RGB-D sensors have significant drawbacks with respect to 3D dense mapping, including limited measurement ranges (e.g., within 3 m) and depth-measurement errors that increase with distance from the sensor. In this paper, we present a novel approach that geometrically integrates the depth scene and the RGB scene to enlarge the measurement distance of RGB-D sensors and enrich the details of the model generated from the depth images. First, precise calibration of the RGB-D sensor is introduced: in addition to the internal and external parameters of both the IR camera and the RGB camera, the relative pose between the two cameras is also calibrated. Second, to ensure the pose accuracy of the RGB images, a refined rejection method for false feature matches is introduced, combining the depth information with the initial camera poses between frames of the RGB-D sensor. A global optimization model is then used to improve the accuracy of the camera poses, decreasing the inconsistencies between the depth frames in advance. To eliminate the geometric inconsistencies between the RGB scene and the depth scene, the scale ambiguity encountered during pose estimation with RGB image sequences is resolved by integrating the depth and visual information, and a robust rigid-transformation recovery method is developed to register the RGB scene to the depth scene. The benefit of the proposed joint optimization method is first evaluated with the publicly available benchmark datasets collected with Kinect. The proposed method is then examined in tests with two sets of datasets collected in outdoor and indoor environments. The experimental results demonstrate the feasibility and robustness of the proposed method. PMID:27690028
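The scale-ambiguity resolution and rigid registration between the RGB and depth scenes can be illustrated with Umeyama's least-squares similarity transform between matched point sets. A hedged sketch: the paper's robust outlier handling and global optimization are not reproduced:

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform (scale s, rotation R,
    translation t) mapping src points onto dst: dst ~ s * R @ src + t
    (Umeyama's method, built on the SVD of the cross-covariance)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    sc, dc = src - mu_s, dst - mu_d
    H = dc.T @ sc / len(src)                    # cross-covariance
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    var_s = (sc ** 2).sum() / len(src)
    s = (S * np.diag(D)).sum() / var_s          # optimal scale
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Given exact correspondences, the known scale, rotation, and translation are recovered to machine precision.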
NASA Astrophysics Data System (ADS)
Li, K.; Li, S. J.; Liu, Y.; Wang, W.; Wu, C.
2015-08-01
At present, amid the shift from the old 2D-output-oriented survey to a new 3D-output-oriented survey based on BIM technology, the corresponding working methods and workflow for data capture, processing, representation, etc. have to change. Based on case studies of two buildings in the Summer Palace of Beijing and of Jiayuguan Pass at the west end of the Great Wall (both World Heritage sites), this paper puts forward a "structure-and-type method" that draws on the typological method used in archaeology, the Revit family system, and the tectonic logic of buildings to achieve good coordination between the understanding of historic buildings and BIM modelling.
Naraghi, Safa; Mutsvangwa, Tinashe; Goliath, René; Rangaka, Molebogeng X; Douglas, Tania S
2018-05-08
The tuberculin skin test is the most widely used method for detecting latent tuberculosis infection in adults and active tuberculosis in children. We present the development of a mobile-phone-based screening tool for measuring the tuberculin skin test induration. The tool makes use of a mobile application developed on the Android platform to capture images of an induration, photogrammetric reconstruction using Agisoft PhotoScan to reconstruct the induration in 3D, followed by 3D measurement of the induration with the aid of functions from the Python programming language. The system enables capture of images by the person being screened for latent tuberculosis infection. Measurement precision was tested using a 3D-printed induration. Real-world use of the tool was simulated by applying it to a set of mock skin indurations created by a make-up artist, and the performance of the tool was evaluated. The usability of the application was assessed with the aid of a questionnaire completed by participants. The tool was found to measure the 3D-printed induration with greater precision than the current ruler-and-pen method, as indicated by the lower standard deviation produced (0.3 mm versus 1.1 mm in the literature). There was high correlation between manual and algorithmic measurement of the mock skin indurations. The height of the skin induration and the definition of its margins were found to influence the accuracy of 3D reconstruction, and therefore the measurement error, under simulated real-world conditions. Based on assessment of the user experience in capturing images, a simplified user interface would benefit wide-spread implementation. The mobile application shows good agreement with direct measurement. It provides an alternative method for measuring tuberculin skin test indurations and may remove the need for an in-person follow-up visit after test administration, thus improving latent tuberculosis infection screening throughput. Copyright © 2018 Elsevier Ltd. All rights reserved.
3D FaceCam: a fast and accurate 3D facial imaging device for biometrics applications
NASA Astrophysics Data System (ADS)
Geng, Jason; Zhuang, Ping; May, Patrick; Yi, Steven; Tunnell, David
2004-08-01
Human faces are fundamentally three-dimensional (3D) objects, and each face has its unique 3D geometric profile. The 3D geometric features of a human face can be used, together with its 2D texture, for rapid and accurate face recognition purposes. Due to the lack of low-cost and robust 3D sensors and effective 3D facial recognition (FR) algorithms, almost all existing FR systems use 2D face images. Genex has developed 3D solutions that overcome the inherent problems in 2D while also addressing limitations in other 3D alternatives. One important aspect of our solution is a unique 3D camera (the 3D FaceCam) that combines multiple imaging sensors within a single compact device to provide instantaneous, ear-to-ear coverage of a human face. This 3D camera uses three high-resolution CCD sensors and a color-encoded pattern projection system. The RGB color information from each pixel is used to compute the range data and generate an accurate 3D surface map. The imaging system uses no moving parts and combines multiple 3D views to provide detailed and complete 3D coverage of the entire face. Images are captured within a fraction of a second and full-frame 3D data is produced within a few seconds. The described method provides much better data coverage and accuracy in areas with sharp features or details (such as the nose and eyes). Using this 3D data, we have been able to demonstrate that a 3D approach can significantly improve the performance of facial recognition. We have conducted tests in which we have varied the lighting conditions and angle of image acquisition in the "field." These tests have shown that the matching results are significantly improved when enrolling a 3D image rather than a single 2D image. With its 3D solutions, Genex is working toward unlocking the promise of powerful 3D FR and transferring FR from a lab technology into a real-world biometric solution.
X-ray phase contrast tomography by tracking near field speckle
Wang, Hongchang; Berujon, Sebastien; Herzen, Julia; Atwood, Robert; Laundy, David; Hipp, Alexander; Sawhney, Kawal
2015-01-01
X-ray imaging techniques that capture variations in the x-ray phase can yield higher contrast images with lower x-ray dose than is possible with conventional absorption radiography. However, the extraction of phase information is often more difficult than the extraction of absorption information and requires a more sophisticated experimental arrangement. We here report a method for three-dimensional (3D) X-ray phase contrast computed tomography (CT) which gives quantitative volumetric information on the real part of the refractive index. The method is based on the recently developed X-ray speckle tracking technique in which the displacement of near field speckle is tracked using a digital image correlation algorithm. In addition to differential phase contrast projection images, the method allows the dark-field images to be simultaneously extracted. After reconstruction, compared to conventional absorption CT images, the 3D phase CT images show greatly enhanced contrast. This new imaging method has advantages compared to other X-ray imaging methods in simplicity of experimental arrangement, speed of measurement and relative insensitivity to beam movements. These features make the technique an attractive candidate for material imaging such as in-vivo imaging of biological systems containing soft tissue. PMID:25735237
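The speckle-tracking core, finding the displacement of a subset between two images by cross-correlation, can be sketched in the Fourier domain. An integer-pixel illustration only; real trackers add sub-pixel interpolation and windowing, which are omitted here:

```python
import numpy as np

def subset_displacement(ref, cur):
    """Integer-pixel displacement of a speckle subset via circular
    cross-correlation computed with FFTs; the correlation peak gives
    the shift of cur relative to ref."""
    R = np.fft.fft2(ref - ref.mean())
    C = np.fft.fft2(cur - cur.mean())
    corr = np.fft.ifft2(R.conj() * C).real      # circular cross-correlation
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap displacements into the signed range
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)
```

A synthetic speckle pattern shifted by a known amount should be tracked back exactly.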
3D deformable organ model based liver motion tracking in ultrasound videos
NASA Astrophysics Data System (ADS)
Kim, Jung-Bae; Hwang, Youngkyoo; Oh, Young-Taek; Bang, Won-Chul; Lee, Heesae; Kim, James D. K.; Kim, Chang Yeong
2013-03-01
This paper presents a novel method of using 2D ultrasound (US) cine images during image-guided therapy to accurately track the 3D position of a tumor even when the organ of interest is in motion due to patient respiration. Tracking is made possible by a 3D deformable organ model we have developed. The method consists of three processes in succession. The first process is organ modeling, in which we generate a personalized 3D organ model from high-quality 3D CT or MR data sets captured during three different respiratory phases. The model includes the organ surface, vessels and tumor, all of which can deform and move in accord with patient respiration. The second process is registration of the organ model to 3D US images. From 133 respiratory phase candidates generated from the deformable organ model, we select the candidate that best matches the 3D US images according to vessel centerlines and surface. As a result, we can determine the position of the US probe. The final process is real-time tracking using 2D US cine images captured by the US probe. We determine the respiratory phase by tracking the diaphragm on the image. The 3D model is then deformed according to the respiratory phase and fitted to the image by considering the positions of the vessels. The tumor's 3D position is then inferred based on the respiratory phase. Testing our method on real patient data, we found that the 3D position accuracy is within 3.79 mm and the processing time is 5.4 ms during tracking.
ERIC Educational Resources Information Center
Umino, Bin; Longstaff, Jeffrey Scott; Soga, Asako
2009-01-01
This paper reports on "Web3D dance composer" for ballet e-learning. Elementary "petit allegro" ballet steps were enumerated in collaboration with ballet teachers, digitally acquired through 3D motion capture systems, and categorised into families and sub-families. Digital data was manipulated into virtual reality modelling language (VRML) and fit…
3D morphology reconstruction using linear array CCD binocular stereo vision imaging system
NASA Astrophysics Data System (ADS)
Pan, Yu; Wang, Jinjiang
2018-01-01
Binocular vision imaging systems have a small field of view and cannot reconstruct the 3-D shape of dynamic objects. We have developed a linear array CCD binocular vision imaging system that uses different calibration and reconstruction methods. Building on the conventional binocular vision imaging system, the linear array CCD system has a wider field of view and can accurately reconstruct the 3-D morphology of objects in continuous motion. This paper introduces the composition and principle of the linear array CCD binocular vision imaging system, including calibration, capture, matching and reconstruction. The system consists of two linear array cameras placed in a special arrangement and a horizontal moving platform that carries the objects. The internal and external parameters of the cameras are obtained by calibration in advance. The cameras then capture images of the moving objects, which are matched and reconstructed in 3-D. Because the system can accurately measure the 3-D appearance of moving objects, this work is of significance for measuring the 3-D morphology of moving objects.
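As a minimal illustration of the reconstruction step, under the usual rectified-stereo assumption depth follows from disparity as Z = f·B/d. The focal length and baseline below are hypothetical calibration values, not parameters of the linear array system described above:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Idealized rectified stereo: depth Z = f * B / d, where f is the
    focal length in pixels, B the camera baseline in meters and d the
    disparity in pixels. Values here are illustrative assumptions."""
    d = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_m / d

# A point matched with 50 px disparity, f = 1000 px, B = 0.1 m:
print(depth_from_disparity(50.0, 1000.0, 0.1))  # 2.0 (meters)
```

Real calibration also recovers lens distortion and the relative pose of the two cameras; this sketch covers only the final triangulation.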
Presence capture cameras - a new challenge to the image quality
NASA Astrophysics Data System (ADS)
Peltoketo, Veli-Tapani
2016-04-01
Commercial presence capture cameras are coming to market, and a new era of visual entertainment is starting to take shape. Since true presence capture is still a very new technology, real technical solutions have only just passed the prototyping phase and vary considerably. Presence capture cameras face the same quality issues as previous generations of digital imaging, as well as numerous new ones. This work concentrates on the quality challenges of presence capture cameras. A camera system that records 3D audio-visual reality as it is must have several camera modules, several microphones and, in particular, technology that synchronizes the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features remain valid for presence capture cameras: color fidelity, noise removal, resolution and dynamic range form the basis of virtual reality stream quality. However, the cooperation of several cameras adds a new dimension to these quality factors, and new quality features can be defined. For example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how should the stitching be validated? This work describes the quality factors that remain valid for presence capture cameras and assesses their importance. Moreover, the new challenges of presence capture cameras are investigated from an image and video quality point of view, including consideration of how well current measurement methods can be applied to presence capture cameras.
Yates, Kenneth; Sullivan, Maura; Clark, Richard
2012-01-01
Cognitive task analysis (CTA) methods were used for 2 surgical procedures to determine (1) the extent to which experts omitted critical information, (2) the number of experts required to capture the optimal amount of information, and (3) the effectiveness of a CTA-informed curriculum. Six expert physicians were interviewed for both central venous catheter placement and open cricothyrotomy. The transcripts were coded, corrected, and aggregated as a "gold standard." The information captured from each surgeon was then analyzed against the gold standard. Experts omitted an average of 34% of the decisions for central venous catheter placement and 77% of the decisions for cricothyrotomy. Three to 4 experts were required to capture the optimal amount of information. A significant positive effect on performance (t([21]) = 2.08, P = .050) and self-efficacy ratings (t([18]) = 2.38, P = .029) was found for the CTA-informed cricothyrotomy curriculum. CTA is an effective method to capture expertise in surgery and a valuable component for improving surgical training. Copyright © 2012 Elsevier Inc. All rights reserved.
Epic Dimensions: A Comparative Analysis of 3D Acquisition Methods
NASA Astrophysics Data System (ADS)
Graham, C. A.; Akoglu, K. G.; Lassen, A. W.; Simon, S.
2017-08-01
When it comes to capturing the geometry of a cultural heritage artifact, there is certainly no dearth of possible acquisition techniques. As technology has rapidly developed, the availability of intuitive 3D generating tools has increased exponentially and made it possible even for non-specialists to create many models quickly. Though the by-products of these different acquisition methods may be incongruent in terms of quality, these discrepancies are not problematic, as there are many applications of 3D models, each with its own set of requirements. Comparisons of high-resolution 3D models of an iconic Babylonian tablet, captured via the four close-range technologies discussed in this paper, assess which methods of 3D digitization best suit specific intended purposes related to research, conservation and education. Taking into consideration repeatability, time and resource implications, qualitative and quantitative potential, and ease of use, this paper presents a study of the strengths and weaknesses of structured light scanning, triangulation laser scanning, photometric stereo and close-range photogrammetry, in the context of interactive investigation, condition monitoring, engagement, and dissemination.
NASA Astrophysics Data System (ADS)
Di Giulio, R.; Maietti, F.; Piaia, E.; Medici, M.; Ferrari, F.; Turillazzi, B.
2017-02-01
The generation of high-quality 3D models can still be very time-consuming and expensive, and the outcome of digital reconstructions is frequently provided in formats that are not interoperable and therefore cannot be easily accessed. This challenge is even more crucial for complex architectures and large heritage sites, which involve a large amount of data to be acquired, managed and enriched by metadata. In this framework, the ongoing EU-funded project INCEPTION - Inclusive Cultural Heritage in Europe through 3D semantic modelling - proposes a workflow aimed at achieving efficient 3D digitization methods, post-processing tools for enriched semantic modelling, and web-based solutions and applications to ensure wide access for experts and non-experts. In order to face these challenges and to start solving the issue of the large amount of captured data and time-consuming processes in the production of 3D digital models, an Optimized Data Acquisition Protocol (DAP) has been set up. The purpose is to guide the processes of digitization of cultural heritage, respecting the needs, requirements and specificities of cultural assets.
A Pipeline for 3D Digital Optical Phenotyping Plant Root System Architecture
NASA Astrophysics Data System (ADS)
Davis, T. W.; Shaw, N. M.; Schneider, D. J.; Shaff, J. E.; Larson, B. G.; Craft, E. J.; Liu, Z.; Kochian, L. V.; Piñeros, M. A.
2017-12-01
This work presents a new pipeline for digital optical phenotyping of the root system architecture of agricultural crops. The pipeline begins with a 3D root-system imaging apparatus for hydroponically grown crop lines of interest. The apparatus acts as a self-contained darkroom, which includes an imaging tank, a motorized rotating bearing and a digital camera. The pipeline continues with the Plant Root Imaging and Data Acquisition (PRIDA) software, which is responsible for image capture and storage. Once root images have been captured, image post-processing is performed using the Plant Root Imaging Analysis (PRIA) command-line tool, which extracts root pixels from color images. Following this pre-processing binarization of the digital root images, 3D trait characterization is performed using the next-generation RootReader3D software. RootReader3D measures global root system architecture traits, such as total root system volume and length, total number of roots, and maximum rooting depth and width. While designed to work together, the four stages of the phenotyping pipeline are modular and stand-alone, which provides flexibility and adaptability for various research endeavors.
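The root-pixel extraction step can be sketched as a simple intensity threshold on the color images. The threshold value, and the assumption that roots appear bright against the dark imaging tank, are illustrative and not PRIA's actual algorithm:

```python
import numpy as np

def binarize_roots(rgb, thresh=60):
    """Crude root-pixel extraction: roots imaged against a dark tank
    appear bright, so threshold the mean-channel intensity. The
    threshold is an illustrative assumption, not taken from PRIA."""
    gray = rgb.mean(axis=2)       # collapse RGB to grayscale
    return gray > thresh          # boolean root mask

# Toy image: a 2x2 bright "root" patch on a dark background.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1:3, 1:3] = 200
mask = binarize_roots(img)
print(int(mask.sum()))  # 4
```

Trait software such as RootReader3D would then operate on stacks of such binary masks taken at many rotation angles.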
Evaluation of lung tumor motion management in radiation therapy with dynamic MRI
NASA Astrophysics Data System (ADS)
Park, Seyoun; Farah, Rana; Shea, Steven M.; Tryggestad, Erik; Hales, Russell; Lee, Junghoon
2017-03-01
Surrogate-based tumor motion estimation and tracking methods are commonly used in radiotherapy despite the lack of continuous real-time 3D tumor and surrogate data. In this study, we propose a method to simultaneously track the tumor and external surrogates with dynamic MRI, which allows us to evaluate their reproducible correlation. Four MRI-compatible fiducials are placed on the patient's chest and upper abdomen, and multi-slice 2D cine MRIs are acquired to capture the lung and whole tumor, followed by two-slice 2D cine MRIs to simultaneously track the tumor and fiducials, all in sagittal orientation. A phase-binned 4D-MRI is first reconstructed from the multi-slice MR images using body area as a respiratory surrogate and group-wise registration. The 4D-MRI provides 3D template volumes for different breathing phases. The 3D tumor position is calculated by 3D-2D template matching, in which the 3D tumor templates from the 4D-MRI reconstruction and the 2D cine MRIs from the two-slice tracking dataset are registered. 3D trajectories of the external surrogates are derived by matching a 3D geometrical model to the fiducial segmentations on the 2D cine MRIs. We tested our method on five lung cancer patients. The internal target volume from 4D-CT showed an average sensitivity of 86.5% compared to the actual tumor motion over 5 min. The 3D tumor motion correlated with the external surrogate signal but showed a noticeable phase mismatch. The 3D tumor trajectory showed significant cycle-to-cycle variation, while the external surrogate was not sensitive enough to capture such variations. Additionally, there was a significant phase mismatch between surrogate signals obtained from fiducials at different locations.
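The phase-binning idea behind the 4D-MRI reconstruction can be sketched generically: derive an instantaneous respiratory phase from a 1-D surrogate (e.g. body area or diaphragm position) via its analytic signal and quantize it into bins. This is a generic sketch, not the paper's exact procedure:

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (a numpy-only stand-in for
    scipy.signal.hilbert), assuming an even-length input."""
    N = len(x)
    h = np.zeros(N)
    h[0] = h[N // 2] = 1.0
    h[1:N // 2] = 2.0               # double positive frequencies
    return np.fft.ifft(np.fft.fft(x) * h)

def phase_bins(surrogate, n_bins=10):
    """Map a 1-D respiratory surrogate to phase bins 0..n_bins-1 using
    the analytic-signal phase."""
    phase = np.angle(analytic_signal(surrogate - np.mean(surrogate)))
    frac = (phase + np.pi) / (2 * np.pi)          # phase in [0, 1)
    return np.minimum((frac * n_bins).astype(int), n_bins - 1)

t = np.linspace(0, 4 * np.pi, 400, endpoint=False)  # two breathing cycles
bins = phase_bins(np.sin(t), n_bins=10)
print(bins.min(), bins.max())  # 0 9
```

Each MR slice acquisition would then be assigned to the bin active at its timestamp, and slices in the same bin grouped into one 3D phase volume.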
Guyot, Y; Papantoniou, I; Luyten, F P; Geris, L
2016-02-01
The main challenge in tissue engineering consists in understanding and controlling the growth process of in vitro cultured neotissues toward obtaining functional tissues. Computational models can provide crucial information on appropriate bioreactor and scaffold design, but also on the bioprocess environment and culture conditions. In this study, a 3D model using the level set method was developed to capture the growth of a microporous neotissue domain in a dynamic culture environment (perfusion bioreactor). In our model, neotissue growth velocity was influenced by scaffold geometry as well as by flow-induced shear stresses. The neotissue was modeled as a homogeneous porous medium with a given permeability, and the Brinkman equation was used to calculate the flow profile in both the neotissue and the void space. Neotissue growth was modeled until the scaffold void volume was filled, capturing already established experimental observations, in particular the differences in scaffold filling under different flow regimes. This tool is envisaged as a scaffold shape and bioprocess optimization tool with predictive capacity. It will allow fluid flow to be controlled during long-term culture, in which neotissue growth alters flow patterns, in order to provide shear stress profiles and magnitudes across the whole scaffold volume that influence, in turn, the neotissue growth.
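For reference, the Brinkman flow model mentioned above can be written as follows, with μ the fluid viscosity, κ the neotissue permeability and **u** the velocity field. Taking the effective viscosity equal to μ is a common simplifying assumption; letting κ → ∞ in the void space recovers Stokes flow, so one equation covers both domains:

```latex
-\nabla p + \mu \nabla^{2}\mathbf{u} - \frac{\mu}{\kappa}\,\mathbf{u} = \mathbf{0},
\qquad \nabla \cdot \mathbf{u} = 0
```

The wall shear stress derived from this velocity field is what drives the local growth velocity in the level set update.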
NASA Astrophysics Data System (ADS)
Carlsohn, Matthias F.; Kemmling, André; Petersen, Arne; Wietzke, Lennart
2016-04-01
Cerebral aneurysms require endovascular treatment to eliminate potentially lethal hemorrhagic rupture by hemostasis of blood flow within the aneurysm. Devices (e.g. coils and flow diverters) promote hemostasis; however, measurement of blood flow within an aneurysm or cerebral vessel before and after device placement at a microscopic level has not been possible so far. Such measurement would allow better individualized treatment planning and improve the design of devices. For experimental analysis, direct measurement of real-time microscopic cerebrovascular flow in micro-structures may be an alternative to computed flow simulations. Applying microscopic aneurysm flow measurement on a regular basis, to empirically assess a high number of different anatomic shapes and the corresponding effects of different devices, would require a fast and reliable method at low cost with high-throughput assessment. Transparent three-dimensional (3D) models of brain vessels and aneurysms may be used for microscopic flow measurements by particle image velocimetry (PIV); however, up to now the size of the structures has set the limits for conventional 3D-imaging camera set-ups. Online flow assessment requires additional computational power to cope with processing the large amounts of data generated by sequences of multi-view stereo images, e.g. generated by a light field camera capturing the 3D information by plenoptic imaging of complex flow processes. Recently, a fast and low-cost workflow for producing patient-specific three-dimensional models of cerebral arteries has been established by stereo-lithographic (SLA) 3D printing. These 3D arterial models are transparent and exhibit a replication precision within the submillimeter range required for accurate flow measurements under physiological conditions. We therefore test the feasibility of microscopic flow measurements by PIV analysis using a plenoptic camera system capturing light field image sequences.
Averaging across a sequence of single, double or triple shots of flashed images enables reconstruction of the real-time corpuscular flow through the vessel system before and after device placement. This approach could enable 3D insight into microscopic flow within blood vessels and aneurysms at submillimeter resolution. We present an approach that allows real-time assessment of 3D particle flow by high-speed light field image analysis, including a solution that addresses the high computational load of the image processing. The imaging set-up accomplishes fast and reliable PIV analysis in transparent 3D models of brain aneurysms at low cost. High-throughput microscopic flow assessment of differently shaped brain aneurysms may therefore become feasible, as required for patient-specific device designs.
Miller, Michele; Kruger, Milandie; Kruger, Marius; Olea-Popelka, Francisco; Buss, Peter
2016-04-01
Ninety-four subadult and adult white rhinoceroses (Ceratotherium simum) were captured between February and October, 2009-11, in Kruger National Park and placed in holding bomas prior to translocation to other locations within South Africa. A simple three-category system was developed based on appetite, fecal consistency/volume, and behavior to assess adaptation to bomas. Individual animal and group daily median scores were used to determine trends and when rhinoceroses had successfully adapted to the boma. Seventeen rhinoceroses did not adapt to boma confinement, and 16 were released (1 mortality). No differences in boma scores were observed between rhinoceroses that adapted and those that did not, until day 8, when the first significant differences were observed (adapted score=13 versus nonadapted score=10). The time to reach a boma score determined as successful adaptation (median 19 d) matched subjective observations, which was approximately 3 wk for most rhinoceroses. Unsuccessful adaptation was indicated by an individual boma score of less than 15, typically during the first 2 wk, or a declining trend in scores within the first 7-14 d. This scoring system can be used for most locations and could also be easily adapted to other areas in which rhinoceroses are held in captivity. This tool also provides important information for assessing welfare in newly captured rhinoceroses.
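The scoring logic can be sketched as follows. The assumptions that each category (appetite, fecal consistency/volume, behavior) is scored 1 to 5 and summed (so the maximum daily score is 15), and the three-day median window, are illustrative guesses consistent with the thresholds quoted above, not the study's published scale:

```python
from statistics import median

def daily_score(appetite, fecal, behavior):
    """Daily boma score: sum of three category scores, each assumed
    here to range 1-5 (an illustrative scale, max total 15)."""
    for v in (appetite, fecal, behavior):
        if not 1 <= v <= 5:
            raise ValueError("each category is scored 1-5")
    return appetite + fecal + behavior

def adaptation_day(scores, threshold=15, window=3):
    """Return the first (1-indexed) day on which the median of the
    trailing `window` daily scores reaches `threshold`, else None."""
    for day in range(window - 1, len(scores)):
        if median(scores[day - window + 1:day + 1]) >= threshold:
            return day + 1
    return None

# A rhinoceros whose daily scores climb as it settles into the boma:
history = [9, 10, 12, 13, 14, 14, 15, 15, 15]
print(adaptation_day(history))  # 8
```

A flat or declining score trajectory returns None, matching the study's criterion for flagging animals that fail to adapt.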
He, Sijin; Yong, May; Matthews, Paul M; Guo, Yike
2017-03-01
tranSMART has a wide range of functionalities for translational research and a large user community, but it does not support imaging data. In this context, imaging data typically includes 2D or 3D sets of magnitude data and metadata information. Imaging data may summarise complex feature descriptions in a less biased fashion than user-defined plain text and numeric values. Imaging data is also contextualised by other data sets and may be analysed jointly with other data that can explain features or their variation. Here we describe the tranSMART-XNAT Connector we have developed. This connector consists of components for data capture, organisation and analysis. Data capture is responsible for image capture either from a PACS system, directly from an MRI scanner, or from raw data files. Data are organised in a similar fashion to tranSMART and are stored in a format that allows direct analysis within tranSMART. The connector enables selection and download of DICOM images and associated resources using subjects' clinical phenotypic and genotypic criteria. The tranSMART-XNAT connector is written in Java/Groovy/Grails. It is maintained and available for download at https://github.com/sh107/transmart-xnat-connector.git. sijin@ebi.ac.uk. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
miRiaD: A Text Mining Tool for Detecting Associations of microRNAs with Diseases.
Gupta, Samir; Ross, Karen E; Tudor, Catalina O; Wu, Cathy H; Schmidt, Carl J; Vijay-Shanker, K
2016-04-29
MicroRNAs are increasingly being appreciated as critical players in human diseases, and questions concerning the role of microRNAs arise in many areas of biomedical research. There are several manually curated databases of microRNA-disease associations gathered from the biomedical literature; however, it is difficult for curators of these databases to keep up with the explosion of publications in the microRNA-disease field. Moreover, automated literature mining tools that assist manual curation of microRNA-disease associations currently capture only one microRNA property (expression) in the context of one disease (cancer). Thus, there is a clear need to develop more sophisticated automated literature mining tools that capture a variety of microRNA properties and relations in the context of multiple diseases to provide researchers with fast access to the most recent published information and to streamline and accelerate manual curation. We have developed miRiaD (microRNAs in association with Disease), a text-mining tool that automatically extracts associations between microRNAs and diseases from the literature. These associations are often not directly linked, and the intermediate relations are often highly informative for the biomedical researcher. Thus, miRiaD extracts the miR-disease pairs together with an explanation for their association. We also developed a procedure that assigns scores to sentences, marking their informativeness, based on the microRNA-disease relation observed within the sentence. miRiaD was applied to the entire Medline corpus, identifying 8301 PMIDs with miR-disease associations. These abstracts and the miR-disease associations are available for browsing at http://biotm.cis.udel.edu/miRiaD . We evaluated the recall and precision of miRiaD with respect to information of high interest to public microRNA-disease database curators (expression and target gene associations), obtaining a recall of 88.46-90.78. 
When we expanded the evaluation to include sentences with a wide range of microRNA-disease information that may be of interest to biomedical researchers, miRiaD also performed very well, with an F-score of 89.4. The informativeness ranking of sentences was evaluated in terms of nDCG (0.977) and correlation metrics (0.678-0.727) when compared to an annotator's ranked list. miRiaD, a high-performance system that can capture a wide variety of microRNA-disease related information, extends beyond the scope of existing microRNA-disease resources. It can be incorporated into manual curation pipelines and serve as a resource for biomedical researchers interested in the role of microRNAs in disease. In our ongoing work we are developing an improved miRiaD web interface that will facilitate complex queries about microRNA-disease relationships, such as "In what diseases does microRNA regulation of apoptosis play a role?" or "Is there overlap in the sets of genes targeted by microRNAs in different types of dementia?"
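The nDCG metric used to evaluate the informativeness ranking can be computed as below; the relevance values are illustrative toy data, not miRiaD's:

```python
import math

def dcg(relevances):
    """Discounted cumulative gain with log2 position discounting:
    sum of rel_i / log2(i + 2) over ranks i = 0, 1, ..."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(system_ranking, ideal_ranking):
    """Normalized DCG: the system ordering's DCG divided by the DCG of
    the ideal (relevance-sorted) ordering."""
    return dcg(system_ranking) / dcg(ideal_ranking)

# Relevance of sentences in the order the system ranked them:
system = [3, 2, 3, 0, 1]
ideal = sorted(system, reverse=True)
print(round(ndcg(system, ideal), 3))  # 0.972
```

A perfect ranking yields nDCG = 1.0, which is why the reported 0.977 indicates near-ideal sentence ordering.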
Analysis of Layered Social Networks
2006-09-01
C. Anderson, O. P. John, D. Keltner, and A. M. Kring. Who attains social status?... The expected utility of an action is given by U(c) = Σ_d (P_d E_x)... Assuming that this methodology can indeed be applied to the transmission of information, the matrix powers (p > 2) actually capture a variety of walks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dhou, S; Cai, W; Hurwitz, M
Purpose: We develop a method to generate time-varying volumetric images (3D fluoroscopic images) using patient-specific motion models derived from four-dimensional cone-beam CT (4DCBCT). Methods: Motion models are derived by selecting one 4DCBCT phase as a reference image and registering the remaining images to it. Principal component analysis (PCA) is performed on the resultant displacement vector fields (DVFs) to create a reduced set of PCA eigenvectors that capture the majority of respiratory motion. 3D fluoroscopic images are generated by iteratively optimizing the weights of the PCA eigenvectors through comparison of measured cone-beam projections and simulated projections generated from the motion model. This method was applied to images from five lung-cancer patients. The spatial accuracy of this method is evaluated by comparing landmark positions in the 3D fluoroscopic images to manually defined ground truth positions in the patient cone-beam projections. Results: 4DCBCT motion models were shown to accurately generate 3D fluoroscopic images when the patient cone-beam projections contained clearly visible structures moving with respiration (e.g., the diaphragm). When no moving anatomical structure was clearly visible in the projections, the 3D fluoroscopic images generated did not capture breathing deformations and reverted to the reference image. For the subset of 3D fluoroscopic images generated from projections with visibly moving anatomy, the average tumor localization error and the 95th percentile were 1.6 mm and 3.1 mm, respectively. Conclusion: This study showed that 4DCBCT-based 3D fluoroscopic images can accurately capture respiratory deformations in a patient dataset, so long as the cone-beam projections used contain visible structures that move with respiration.
For clinical implementation of 3D fluoroscopic imaging for treatment verification, an imaging field of view (FOV) that contains visible structures moving with respiration should be selected. If no other appropriate structures are visible, the images should include the diaphragm. This project was supported, in part, through a Master Research Agreement with Varian Medical Systems, Inc., Palo Alto, CA.
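The PCA motion model at the heart of the method can be sketched as follows: flatten the DVFs into rows, extract the leading eigenvectors, and synthesize a new DVF as the mean plus weighted modes. The dimensions and data here are toy values; in the actual method the weights are optimized against measured cone-beam projections rather than chosen by hand:

```python
import numpy as np

rng = np.random.default_rng(1)
n_phases, n_voxels = 8, 300           # e.g. 100 voxels x 3 components
dvfs = rng.standard_normal((n_phases, n_voxels))  # stand-in DVFs

mean = dvfs.mean(axis=0)
# SVD of the centered data: rows of Vt are the PCA eigenvectors.
U, S, Vt = np.linalg.svd(dvfs - mean, full_matrices=False)
n_modes = 3                           # modes capturing most motion
eigvecs = Vt[:n_modes]

# Synthesize a new DVF from mode weights (here arbitrary values;
# normally the result of the projection-matching optimization).
weights = np.array([1.5, -0.7, 0.2])
new_dvf = mean + weights @ eigvecs
print(new_dvf.shape)                  # (300,)
```

Warping the reference 4DCBCT phase by `new_dvf` would then yield one time-varying volumetric (3D fluoroscopic) frame.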
Using LiDAR data to measure the 3D green biomass of Beijing urban forest in China.
He, Cheng; Convertino, Matteo; Feng, Zhongke; Zhang, Siyu
2013-01-01
The purpose of this paper is to find a new approach to measuring the 3D green biomass of urban forest and to verify its precision. In this study, the 3D green biomass was acquired on the basis of a remote sensing inversion model in which each standing tree was first scanned by a terrestrial laser scanner to capture its point cloud data; the point cloud image was then opened in a digital mapping data acquisition system to obtain the elevation in an independent coordinate system; and finally the individual volumes captured were associated with the SPOT5 (Système Probatoire d'Observation de la Terre) remote sensing image by means of tools such as SPSS (Statistical Product and Service Solutions), GIS (Geographic Information System), RS (Remote Sensing) and spatial analysis software (FARO SCENE and Geomagic Studio 11). The results showed that the 3D green biomass of the Beijing urban forest was 399.1295 million m3, of which coniferous species accounted for 28.7871 million m3 and broad-leaved species for 370.3424 million m3. The accuracy of the 3D green biomass was over 85% in comparison with values from 235 field samples collected in a typical sampling design. This suggests that the precision of the 3D forest green biomass derived from the SPOT5 imagery can meet requirements. The approach represents an improvement over the conventional method because it not only provides a basis for evaluating indices of Beijing urban greening, but also introduces a new technique to assess 3D green biomass in other cities.
Rabattu, Pierre-Yves; Massé, Benoit; Ulliana, Federico; Rousset, Marie-Christine; Rohmer, Damien; Léon, Jean-Claude; Palombi, Olivier
2015-01-01
Embryology is a complex morphological discipline involving a set of entangled mechanisms that are sometimes difficult to understand and to visualize. Recent computer-based techniques, ranging from geometrical to physically based modeling, are used to assist the visualization and simulation of virtual humans in numerous domains such as surgical simulation and learning. On the other hand, the ontology-based approach to knowledge representation is increasingly and successfully adopted in the life sciences to formalize biological entities and phenomena, thanks to a declarative approach for expressing and reasoning over symbolic information. 3D models and ontologies are two complementary ways to describe biological entities that remain largely separated. Indeed, while many ontologies providing a unified formalization of anatomy and embryology exist, they remain only descriptive and make access to the anatomical content of complex 3D embryology models and simulations difficult. In this work, we present a novel ontology describing the development of human embryology with deforming 3D models. Beyond describing how organs and structures are composed, our ontology integrates a procedural description of their 3D representations, temporal deformations and relations with respect to their development. We also created inference rules to express complex connections between entities. This results in a unified description of both the knowledge of organ deformation and their 3D representations, enabling dynamic visualization of embryo deformation during the Carnegie stages. Through a simplified ontology containing representative entities linked to spatial position and temporal process information, we illustrate the added value of such a declarative approach for interactive simulation and visualization of 3D embryos.
Combining ontologies and 3D models enables a declarative description of different embryological models that captures the complexity of human developmental anatomy. Visualizing embryos with 3D geometric models and their animated deformations may pave the way towards hypothesis-driven applications, and can also assist the learning of this complex knowledge. http://www.mycorporisfabrica.org/.
Kychakoff, George; Afromowitz, Martin A; Hugle, Richard E
2005-06-21
A system for detection and control of deposition on pendant tubes in recovery and power boilers includes one or more deposit monitoring sensors operating in infrared regions at about 4 or 8.7 microns and directly producing images of the interior of the boiler. An image pre-processing circuit (95) captures the 2-D image formed by the video data input and includes a low-pass filter for noise filtering of the video input. An image segmentation module (105) separates the image of the recovery boiler interior into background, pendant tubes, and deposition. An image-understanding unit (115) matches the derived regions to a 3-D model of the boiler, derives the 3-D structure of the deposition on the pendant tubes, and provides the information about deposits to the plant distributed control system (130) for more efficient operation of the plant's pendant tube cleaning and operating systems.
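A minimal sketch of the filtering and segmentation steps, assuming a normalized IR intensity image and two illustrative class thresholds (the patent's actual filter and thresholds are not specified here):

```python
import numpy as np

def segment(frame, t_tube=0.3, t_dep=0.7):
    """Smooth an IR frame with a 3x3 box (low-pass) filter, then split
    it into three classes with two intensity thresholds. Thresholds
    are illustrative assumptions, not the patent's values."""
    k = np.ones((3, 3)) / 9.0
    padded = np.pad(frame, 1, mode="edge")
    smooth = sum(
        padded[i:i + frame.shape[0], j:j + frame.shape[1]] * k[i, j]
        for i in range(3) for j in range(3)
    )
    labels = np.zeros(frame.shape, dtype=int)   # 0 = background
    labels[smooth >= t_tube] = 1                # 1 = pendant tube
    labels[smooth >= t_dep] = 2                 # 2 = deposit
    return labels

# Toy frame: dark background, mid-intensity tubes, bright deposits.
frame = np.zeros((6, 6))
frame[:, 2:] = 0.5
frame[:, 4:] = 0.9
labels = segment(frame)
print(labels[3, 0], labels[3, 3], labels[3, 5])  # 0 1 2
```

The labeled regions would then be matched against the 3-D boiler model to localize deposits on specific pendant tubes.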
Developing a geoscience knowledge framework for a national geological survey organisation
NASA Astrophysics Data System (ADS)
Howard, Andrew S.; Hatton, Bill; Reitsma, Femke; Lawrie, Ken I. G.
2009-04-01
Geological survey organisations (GSOs) are established by most nations to provide a geoscience knowledge base for effective decision-making on mitigating the impacts of natural hazards and global change, and on sustainable management of natural resources. The value of the knowledge base as a national asset is continually enhanced by the exchange of knowledge between GSOs as data and information providers and the stakeholder community as knowledge 'users and exploiters'. Geological maps and associated narrative texts typically form the core of national geoscience knowledge bases, but have some inherent limitations as methods of capturing and articulating knowledge. Much knowledge about the three-dimensional (3D) spatial interpretation and its derivation and uncertainty, and the wider contextual value of the knowledge, remains intangible in the mind of the mapping geologist in implicit and tacit form. To realise the value of these knowledge assets, the British Geological Survey (BGS) has established a workflow-based cyber-infrastructure to enhance its knowledge management and exchange capability. Future geoscience surveys in the BGS will contribute to a national, 3D digital knowledge base on UK geology, with the associated implicit and tacit information captured as metadata, qualitative assessments of uncertainty, and documented workflows and best practice. Knowledge-based decision-making at all levels of society requires both the accessibility and reliability of knowledge to be enhanced in the grid-based world. Establishment of collaborative cyber-infrastructures and ontologies for geoscience knowledge management and exchange will ensure that GSOs, as knowledge-based organisations, can make their contribution to this wider goal.
Digital dental surface registration with laser scanner for orthodontics set-up planning
NASA Astrophysics Data System (ADS)
Alcaniz-Raya, Mariano L.; Albalat, Salvador E.; Grau Colomer, Vincente; Monserrat, Carlos A.
1997-05-01
We present an optical measuring system based on laser structured light, suitable for daily use in orthodontic clinics, that fits four main requirements: (1) to avoid the use of stone models, (2) to automatically discriminate geometric points belonging to teeth and gum, (3) to automatically calculate diagnostic parameters used by orthodontists, and (4) to make use of low-cost and easy-to-use technology for future commercial use. The proposed technique is based on the hydrocolloid moulds used by orthodontists to obtain stone models. These moulds of the inside of the patient's mouth are composed of very fluent materials, such as alginate or hydrocolloids, that reveal fine details of dental anatomy. Alginate moulds are both easy to obtain and inexpensive. Once taken, the alginate moulds are digitized by means of a newly developed and patented 3D dental scanner. The scanner uses optical triangulation, projecting a laser line onto the alginate mould surface; the deformation of the line gives uncalibrated shape information. Relative linear movement of the mould with respect to the sensor head yields further sections, producing a full 3D uncalibrated dentition model. The device uses redundant CCDs in the sensor head and a servo-controlled linear axis for mould movement. The last step is calibration to obtain a real and precise X, Y, Z image. The whole process is automatic. The scanner has been specially adapted for capturing 3D dental anatomy in order to fulfill specific requirements such as scanning time, accuracy, security, and correct acquisition of 'hidden points' in the alginate mould. Measurements on phantoms of known geometry, quite similar to dental anatomy, show errors of less than 0.1 mm. Scanning of the complete dental anatomy takes 2 minutes, and generation of 3D graphics of the dental cast takes approximately 30 seconds on a Pentium-based PC.
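The core relation behind such a line-laser scanner can be sketched with a simplified parallel-axis triangulation model. The focal length, baseline, and pixel offsets below are hypothetical; a real scanner calibrates this mapping empirically, as the abstract's calibration step implies:

```python
import numpy as np

def triangulate_height(pixel_offset, focal_px, baseline_mm):
    """Simplified parallel-axis triangulation: a laser line shifted
    by `pixel_offset` pixels on the sensor corresponds to a range of
    z = f * b / offset (the same similar-triangles relation as stereo
    disparity). Lens distortion and the oblique projection angle of
    a real scanner are ignored here."""
    return focal_px * baseline_mm / np.asarray(pixel_offset, dtype=float)

# One sensor row of measured laser-line offsets (hypothetical values).
offsets_px = np.array([50.0, 52.0, 55.0])
z_mm = triangulate_height(offsets_px, focal_px=1000.0, baseline_mm=30.0)
```

Larger line offsets map to nearer surface points, so scanning the mould past the sensor row by row yields an uncalibrated height profile per section.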
Review of Spatial-Database System Usability: Recommendations for the ADDNS Project
2007-12-01
Provides basic GIS background information, with a closer look at spatial databases. A GIS is a computer-based system designed to capture and manage spatial data, and spatial databases serve as a foundation for deploying enterprise-wide spatial information systems that enable accurate delivery of location-based services.
Lando, David; Stevens, Tim J; Basu, Srinjan; Laue, Ernest D
2018-01-01
Single-cell chromosome conformation capture approaches are revealing the extent of cell-to-cell variability in the organization and packaging of genomes. These single-cell methods, unlike their multi-cell counterparts, allow straightforward computation of realistic chromosome conformations that may be compared and combined with other, independent, techniques to study 3D structure. Here we discuss how single-cell Hi-C and subsequent 3D genome structure determination allows comparison with data from microscopy. We then carry out a systematic evaluation of recently published single-cell Hi-C datasets to establish a computational approach for the evaluation of single-cell Hi-C protocols. We show that the calculation of genome structures provides a useful tool for assessing the quality of single-cell Hi-C data because it requires a self-consistent network of interactions, relating to the underlying 3D conformation, with few errors, as well as sufficient longer-range cis- and trans-chromosomal contacts.
The suitability of lightfield camera depth maps for coordinate measurement applications
NASA Astrophysics Data System (ADS)
Rangappa, Shreedhar; Tailor, Mitul; Petzing, Jon; Kinnell, Peter; Jackson, Michael
2015-12-01
Plenoptic cameras can capture 3D information in one exposure without the need for structured illumination, allowing grey-scale depth maps of the captured image to be created. The Lytro, a consumer-grade plenoptic camera, provides a cost-effective method of measuring the depth of multiple objects under controlled lighting conditions. In this research, camera control variables, environmental sensitivity, image distortion characteristics, and the effective working range of two first-generation Lytro cameras were evaluated. In addition, a calibration process has been created for the Lytro cameras to deliver three-dimensional output depth maps represented in SI units (metres). The novel results show depth accuracy and repeatability of +10.0 mm to -20.0 mm, and 0.5 mm, respectively. For the lateral X and Y coordinates, the accuracy was +1.56 μm to -2.59 μm and the repeatability was 0.25 μm.
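A minimal sketch of this kind of metric calibration, assuming a simple linear mapping between raw depth-map values and metres fitted against reference targets. The raw and ground-truth values below are invented for illustration; the paper's actual calibration model is not specified here:

```python
import numpy as np

# Hypothetical calibration data: raw depth-map values observed for
# targets placed at known metric distances.
raw = np.array([0.12, 0.25, 0.41, 0.55, 0.70])     # grey-scale depth values
metric = np.array([0.20, 0.30, 0.40, 0.50, 0.60])  # metres (ground truth)

# Least-squares linear model: metric = a * raw + b. A real calibration
# would use a denser target set and possibly a higher-order model.
a, b = np.polyfit(raw, metric, 1)

def to_metres(depth_map):
    """Convert a raw depth map to SI units using the fitted model."""
    return a * np.asarray(depth_map) + b
```

Residuals of the fit against the reference targets then give a first estimate of the depth accuracy achievable with the linear model.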
Programmable Spectral Source and Design Tool for 3D Imaging Using Complementary Bandpass Filters
NASA Technical Reports Server (NTRS)
Bae, Youngsam (Inventor); Korniski, Ronald J. (Inventor); Ream, Allen (Inventor); Shearn, Michael J. (Inventor); Shahinian, Hrayr Karnig (Inventor); Fritz, Eric W. (Inventor)
2017-01-01
An endoscopic illumination system for illuminating a subject for stereoscopic image capture, includes a light source which outputs light; a first complementary multiband bandpass filter (CMBF) and a second CMBF, the first and second CMBFs being situated in first and second light paths, respectively, where the first CMBF and the second CMBF filter the light incident thereupon to output filtered light; and a camera which captures video images of the subject and generates corresponding video information, the camera receiving light reflected from the subject and passing through a pupil CMBF pair and a detection lens. The pupil CMBF includes a first pupil CMBF and a second pupil CMBF, the first pupil CMBF being identical to the first CMBF and the second pupil CMBF being identical to the second CMBF, and the detection lens includes one unpartitioned section that covers both the first pupil CMBF and the second pupil CMBF.
Cha, Jungwon; Farhangi, Mohammad Mehdi; Dunlap, Neal; Amini, Amir A
2018-01-01
We have developed a robust tool for performing volumetric and temporal analysis of nodules from respiratory-gated four-dimensional (4D) CT. The method could prove useful in IMRT of lung cancer. We modified the conventional graph-cuts method by adding an adaptive shape prior as well as motion information within a signed distance function representation to permit more accurate and automated segmentation and tracking of lung nodules in 4D CT data. Active shape models (ASM) with a signed distance function were used to capture the shape prior information, preventing unwanted surrounding tissues from becoming part of the segmented object. The optical flow method was used to estimate local motion and to extend three-dimensional (3D) segmentation to 4D by warping a prior shape model through time. The algorithm has been applied to segmentation of well-circumscribed, vascularized, and juxtapleural lung nodules from respiratory-gated CT data. In all cases, 4D segmentation and tracking for five phases of high-resolution CT data took approximately 10 min on a PC workstation with an AMD Phenom II and 32 GB of memory. The method was trained on 500 breath-held 3D CT datasets from the LIDC database and was tested on 17 4D lung nodule CT datasets consisting of 85 volumetric frames. The validation tests resulted in an average Dice Similarity Coefficient (DSC) of 0.68 for all test data. An important by-product of the method is quantitative volume measurement from 4D CT from end-inspiration to end-expiration, which also has important diagnostic value. The algorithm performs robust segmentation of lung nodules from 4D CT data. The signed distance ASM provides the shape prior information, which, within the iterative graph-cuts framework, is adaptively refined to best fit the input data, preventing unwanted surrounding tissue from merging with the segmented object. © 2017 American Association of Physicists in Medicine.
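The Dice Similarity Coefficient used for validation has a standard definition that can be computed directly from two binary masks; a minimal sketch:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice Similarity Coefficient: DSC = 2|A∩B| / (|A| + |B|),
    ranging from 0 (disjoint masks) to 1 (identical masks)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy segmentation result vs. reference mask.
seg = np.array([[0, 1, 1], [0, 1, 0]])
ref = np.array([[0, 1, 0], [0, 1, 1]])
score = dice(seg, ref)  # 2 * 2 / (3 + 3)
```

For 3D or 4D data the same formula applies voxel-wise, one score per volumetric frame.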
Tansel, Berrin; Surita, Sharon C
2016-06-01
Siloxane levels in biogas can jeopardize the warranties of the engines used at biogas-to-energy facilities. The chemical structure of siloxanes consists of silicon and oxygen atoms, alternating in position, with hydrocarbon groups attached to the silicon side chain. Siloxanes can be either in cyclic (D) or linear (L) configuration and are referred to by a letter corresponding to their structure followed by a number corresponding to the number of silicon atoms present. When siloxanes are burned, the hydrocarbon fraction is lost and silicon is converted to silicates. The purpose of this study was to evaluate the adequacy of activated carbon gas samplers for quantitative analysis of siloxanes in biogas samples. Biogas samples were collected from a landfill and an anaerobic digester using multiple carbon sorbent tubes assembled in series. One set of samples was collected for 30 min (sampling 6 L of gas), and the second set was collected for 60 min (sampling 12 L of gas). Carbon particles were thermally desorbed and analyzed by Gas Chromatography Mass Spectrometry (GC/MS). The results showed that biogas sampling using a single tube would not adequately capture octamethyltrisiloxane (L3), hexamethylcyclotrisiloxane (D3), octamethylcyclotetrasiloxane (D4), decamethylcyclopentasiloxane (D5) and dodecamethylcyclohexasiloxane (D6). Even when 4 tubes were used in series, D5 was not captured effectively. The single sorbent tube sampling method was adequate only for capturing trimethylsilanol (TMS) and hexamethyldisiloxane (L2). Affinity of siloxanes for activated carbon decreased with increasing molecular weight. Using multiple carbon sorbent tubes in series can be an appropriate method for developing a standard procedure for determining siloxane levels for low molecular weight siloxanes (up to D3). Appropriate quality assurance and quality control procedures should be developed to adequately quantify the levels of the higher molecular weight siloxanes in biogas with sorbent tubes.
Copyright © 2016 Elsevier Ltd. All rights reserved.
SEIS-PROV: Practical Provenance for Seismological Data
NASA Astrophysics Data System (ADS)
Krischer, L.; Smith, J. A.; Tromp, J.
2015-12-01
It is widely recognized that reproducibility is crucial to advance science, but at the same time it is very hard to actually achieve. This results in it being recognized but also mostly ignored by a large fraction of the community. A key ingredient towards full reproducibility is to capture and describe the history of data, an issue known as provenance. We present SEIS-PROV, a practical format and data model to store provenance information for seismological data. In a seismological context, provenance can be seen as information about the processes that generated and modified a particular piece of data. For synthetic waveforms the provenance information describes which solver and settings therein were used to generate it. When looking at processed seismograms, the provenance conveys information about the different time series analysis steps that led to it. Additional uses include the description of derived data types, such as cross-correlations and adjoint sources, enabling their proper storage and exchange. SEIS-PROV is based on W3C PROV (http://www.w3.org/TR/prov-overview/), a standard for generic provenance information. It then applies an additional set of constraints to make it suitable for seismology. We present a definition of the SEIS-PROV format, a way to check if any given file is a valid SEIS-PROV document, and two sample implementations: One in SPECFEM3D GLOBE (https://geodynamics.org/cig/software/specfem3d_globe/) to store the provenance information of synthetic seismograms and another one as part of the ObsPy (http://obspy.org) framework enabling automatic tracking of provenance information during a series of analysis and transformation stages. This, along with tools to visualize and interpret provenance graphs, offers a description of data history that can be readily tracked, stored, and exchanged.
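The idea of a chained provenance record, where each processed waveform entity is derived from its predecessor by a named activity, can be sketched as follows. This is a toy illustration in the spirit of W3C PROV; the field names and structure are assumptions, not the actual SEIS-PROV schema:

```python
import json

def record_step(history, activity, params):
    """Append one processing step (activity + parameters) to a
    provenance history, PROV-style: each new entity is derived from
    the previous one by a named activity."""
    history.append({
        "entity": f"waveform_{len(history) + 1}",
        "activity": activity,
        "parameters": params,
        "derived_from": history[-1]["entity"] if history else "raw_waveform",
    })
    return history

prov = []
record_step(prov, "detrend", {"type": "linear"})
record_step(prov, "lowpass_filter", {"corner_hz": 1.0, "order": 4})

# Serializable description of the data history, ready for exchange.
document = json.dumps(prov, indent=2)
```

Storing such a chain alongside the waveform is what makes a processed seismogram reproducible: the full sequence of analysis steps travels with the data.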
2012-01-01
Background Immunomagnetic separation (IMS) and immunoassays are widely used for pathogen detection. However, novel technology platforms with highly selective antibodies are essential to improve detection sensitivity, specificity and performance. In this study, monoclonal antibodies (MAbs) against Internalin A (InlA) and p30 were generated and used on paramagnetic beads of varying diameters for concentration, as well as on a fiber-optic sensor for detection. Results Anti-InlA MAb-2D12 (IgG2a subclass) was specific for Listeria monocytogenes and L. ivanovii, and p30-specific MAb-3F8 (IgM) was specific for the genus Listeria. At all bacterial concentrations (10³–10⁸ CFU/mL) tested in the IMS assay, the 1-μm diameter MyOne beads had significantly higher capture efficiency (P < 0.05) than the 2.8-μm diameter M-280 beads with both antibodies. The highest capture efficiency for MyOne-2D12 (49.2% at 10⁵ CFU/mL) was significantly higher (P < 0.05) than that of MyOne-3F8 (16.6%) and Dynabeads anti-Listeria antibody (9%). Furthermore, capture efficiency for MyOne-2D12 was highly specific for L. monocytogenes and L. ivanovii. Subsequently, we captured L. monocytogenes by MyOne-2D12 and MyOne-3F8 from hotdogs inoculated with mono- or co-cultures of L. monocytogenes and L. innocua (10–40 CFU/g), enriched for 18 h and detected by fiber-optic sensor and confirmed by plating, light-scattering, and qPCR assays. The detection limit for L. monocytogenes and L. ivanovii by the fiber-optic immunosensor was 3 × 10² CFU/mL using MAb-2D12 as capture and reporter antibody. Selective media plating, light-scattering, and qPCR assays confirmed the IMS and fiber-optic results. Conclusions IMS coupled with a fiber-optic sensor using anti-InlA MAb is highly specific for L. monocytogenes and L. ivanovii and enabled detection of these pathogens at low levels from buffer or food. PMID:23176167
NASA Technical Reports Server (NTRS)
Hassebrook, Laurence G. (Inventor); Lau, Daniel L. (Inventor); Guan, Chun (Inventor)
2010-01-01
A technique, associated system and program code, for retrieving depth information about at least one surface of an object, such as an anatomical feature. Core features include: projecting a composite image comprising a plurality of modulated structured light patterns, at the anatomical feature; capturing an image reflected from the surface; and recovering pattern information from the reflected image, for each of the modulated structured light patterns. Pattern information is preferably recovered for each modulated structured light pattern used to create the composite, by performing a demodulation of the reflected image. Reconstruction of the surface can be accomplished by using depth information from the recovered patterns to produce a depth map/mapping thereof. Each signal waveform used for the modulation of a respective structured light pattern, is distinct from each of the other signal waveforms used for the modulation of other structured light patterns of a composite image; these signal waveforms may be selected from suitable types in any combination of distinct signal waveforms, provided the waveforms used are uncorrelated with respect to each other. The depth map/mapping to be utilized in a host of applications, for example: displaying a 3-D view of the object; virtual reality user-interaction interface with a computerized device; face--or other animal feature or inanimate object--recognition and comparison techniques for security or identification purposes; and 3-D video teleconferencing/telecollaboration.
NASA Technical Reports Server (NTRS)
Guan, Chun (Inventor); Hassebrook, Laurence G. (Inventor); Lau, Daniel L. (Inventor)
2008-01-01
A technique, associated system and program code, for retrieving depth information about at least one surface of an object. Core features include: projecting a composite image comprising a plurality of modulated structured light patterns, at the object; capturing an image reflected from the surface; and recovering pattern information from the reflected image, for each of the modulated structured light patterns. Pattern information is preferably recovered for each modulated structured light pattern used to create the composite, by performing a demodulation of the reflected image. Reconstruction of the surface can be accomplished by using depth information from the recovered patterns to produce a depth map/mapping thereof. Each signal waveform used for the modulation of a respective structured light pattern, is distinct from each of the other signal waveforms used for the modulation of other structured light patterns of a composite image; these signal waveforms may be selected from suitable types in any combination of distinct signal waveforms, provided the waveforms used are uncorrelated with respect to each other. The depth map/mapping to be utilized in a host of applications, for example: displaying a 3-D view of the object; virtual reality user-interaction interface with a computerized device; face--or other animal feature or inanimate object--recognition and comparison techniques for security or identification purposes; and 3-D video teleconferencing/telecollaboration.
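The demodulation idea described in both patent abstracts, recovering each pattern from a single composite image via its own uncorrelated carrier, can be sketched numerically. The pattern periods, carrier frequencies, and mean-based low-pass below are illustrative assumptions, not the patents' actual design:

```python
import numpy as np

H, W = 128, 64
y = np.arange(H)[:, None]
x = np.arange(W)[None, :]

# Two sinusoidal structured light patterns (fringes along x).
p1 = 0.5 + 0.5 * np.cos(2 * np.pi * x / 16)
p2 = 0.5 + 0.5 * np.cos(2 * np.pi * x / 8)

# Uncorrelated carriers along y: distinct integer frequencies are
# orthogonal over a whole number of cycles.
c1 = np.cos(2 * np.pi * 4 * y / H)
c2 = np.cos(2 * np.pi * 9 * y / H)

# Single composite image combining both modulated patterns.
composite = p1 * c1 + p2 * c2

def demodulate(image, carrier):
    """Recover one pattern: multiply by its carrier, then low-pass
    along y (a plain mean here); factor 2 compensates mean(c^2)=1/2."""
    return 2.0 * (image * carrier).mean(axis=0)

r1 = demodulate(composite, c1)  # recovers p1 up to numerical error
r2 = demodulate(composite, c2)
```

Because the carriers are mutually uncorrelated, each demodulation suppresses the other pattern's contribution, which is the property the patents require of the modulation waveforms.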
NASA Astrophysics Data System (ADS)
Parraman, Carinna
2012-01-01
This presentation highlights issues relating to the digital capture and printing of 2D and 3D artefacts and accurate colour reproduction of 3D objects. There is a range of opportunities and technologies for the scanning and printing of two-dimensional and three-dimensional artefacts [1]. A successful approach, the Polynomial Texture Mapping (PTM) technique used to create a Reflectance Transformation Image (RTI) [2-4], is being used for the conservation and heritage of artworks, as these methods are non-invasive and non-destructive for fragile artefacts. This approach captures the surface detail of two-dimensional artworks using a multidimensional approach, employing a hemispherical dome comprising 64 lamps to capture an entire surface topography. The benefit of this approach is a highly detailed visualization of the surface of materials and objects.
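The PTM fitting step can be sketched as a per-pixel least-squares problem: the luminance observed under each lamp is modeled as a biquadratic polynomial of the projected light direction. The dome geometry and coefficients below are synthetic assumptions for illustration:

```python
import numpy as np

# Hypothetical 64-lamp dome: two rings of light directions (the
# actual dome geometry is not specified in the abstract).
n = 64
theta = 2 * np.pi * np.arange(n) / n
r = np.where(np.arange(n) % 2 == 0, 0.4, 0.8)
lu, lv = r * np.cos(theta), r * np.sin(theta)

# Classic PTM design matrix: biquadratic polynomial in (lu, lv).
A = np.column_stack([lu**2, lv**2, lu * lv, lu, lv, np.ones(n)])

# Synthetic luminance for one pixel (known ground-truth coefficients).
true_c = np.array([-0.2, -0.1, 0.05, 0.3, 0.4, 0.6])
luminance = A @ true_c

# Least-squares fit of the six PTM coefficients from the 64 images.
coeffs, *_ = np.linalg.lstsq(A, luminance, rcond=None)

def relight(c, u, v):
    """Evaluate the fitted model for a new, unseen light direction."""
    return c @ np.array([u * u, v * v, u * v, u, v, 1.0])
```

Once fitted per pixel, `relight` lets the surface be re-illuminated interactively from any virtual light direction, which is the basis of RTI viewing.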
Semi-autonomous wheelchair system using stereoscopic cameras.
Nguyen, Jordan S; Nguyen, Thanh H; Nguyen, Hung T
2009-01-01
This paper is concerned with the design and development of a semi-autonomous wheelchair system using stereoscopic cameras to assist hands-free control technologies for severely disabled people. The stereoscopic cameras capture an image from both the left and right cameras, which are then processed with a Sum of Absolute Differences (SAD) correlation algorithm to establish correspondence between image features in the different views of the scene. This is used to produce a stereo disparity image containing information about the depth of objects away from the camera in the image. A geometric projection algorithm is then used to generate a 3-Dimensional (3D) point map, placing pixels of the disparity image in 3D space. This is then converted to a 2-Dimensional (2D) depth map allowing objects in the scene to be viewed and a safe travel path for the wheelchair to be planned and followed based on the user's commands. This assistive technology utilising stereoscopic cameras has the purpose of automated obstacle detection, path planning and following, and collision avoidance during navigation. Experimental results obtained in an indoor environment displayed the effectiveness of this assistive technology.
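A minimal pure-NumPy sketch of SAD block matching as described. The window size, disparity range, and the synthetic shifted scene are illustrative; the wheelchair system's actual parameters are not given in the abstract:

```python
import numpy as np

def box_sum(img, win):
    """Sum over a win x win neighbourhood via an integral image."""
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    ii = np.pad(padded.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    return (ii[win:, win:] - ii[:-win, win:]
            - ii[win:, :-win] + ii[:-win, :-win])

def sad_disparity(left, right, max_disp=8, win=5):
    """Dense disparity by SAD block matching: for each candidate
    shift d, compare left[x] against right[x - d] over a window and
    keep the d with the smallest summed absolute difference."""
    H, W = left.shape
    costs = np.full((max_disp, H, W), np.inf)
    for d in range(max_disp):
        diff = np.abs(left[:, d:] - right[:, :W - d])
        costs[d, :, d:] = box_sum(diff, win)
    return costs.argmin(axis=0)

# Synthetic pair: the left view is the right view shifted by 3 pixels,
# so the true disparity is 3 everywhere in the overlap region.
rng = np.random.default_rng(1)
right_img = rng.random((32, 40))
left_img = np.zeros_like(right_img)
left_img[:, 3:] = right_img[:, :-3]
disp = sad_disparity(left_img, right_img)
```

The resulting disparity map is what the geometric projection step then converts into a 3D point map for obstacle detection.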
Non-lambertian reflectance modeling and shape recovery of faces using tensor splines.
Kumar, Ritwik; Barmpoutis, Angelos; Banerjee, Arunava; Vemuri, Baba C
2011-03-01
Modeling illumination effects and pose variations of a face is of fundamental importance in the field of facial image analysis. Most of the conventional techniques that simultaneously address both of these problems work with the Lambertian assumption and thus fall short of accurately capturing the complex intensity variation that the facial images exhibit or recovering their 3D shape in the presence of specularities and cast shadows. In this paper, we present a novel Tensor-Spline-based framework for facial image analysis. We show that, using this framework, the facial apparent BRDF field can be accurately estimated while seamlessly accounting for cast shadows and specularities. Further, using local neighborhood information, the same framework can be exploited to recover the 3D shape of the face (to handle pose variation). We quantitatively validate the accuracy of the Tensor Spline model using a more general model based on the mixture of single-lobed spherical functions. We demonstrate the effectiveness of our technique by presenting extensive experimental results for face relighting, 3D shape recovery, and face recognition using the Extended Yale B and CMU PIE benchmark data sets.
Beyond 3-D: The New Spectrum of Lidar Applications for Earth and Ecological Sciences
NASA Technical Reports Server (NTRS)
Eitel, Jan U. H.; Hofle, Bernhard; Vierling, Lee A.; Abellan, Antonio; Asner, Gregory P.; Deems, Jeffrey S.; Glennie, Craig L.; Joerg, Phillip C.; LeWinter, Adam L.; Magney, Troy S.;
2016-01-01
Capturing and quantifying the world in three dimensions (x, y, z) using light detection and ranging (lidar) technology drives fundamental advances in the Earth and Ecological Sciences (EES). However, additional lidar dimensions offer the possibility to transcend basic 3-D mapping capabilities, including i) the physical time (t) dimension from repeat lidar acquisition and ii) the laser return intensity (LRIλ) data dimension based on the brightness of single- or multi-wavelength (λ) laser returns. The additional dimensions thus add to the x, y, and z dimensions to constitute the five dimensions of lidar (x, y, z, t, LRIλ1...λn). This broader spectrum of lidar dimensionality has already revealed new insights across multiple EES topics, and will enable a wide range of new research and applications. Here, we review recent advances based on repeat lidar collections and analysis of LRI data to highlight novel applications of lidar remote sensing beyond 3-D. Our review outlines the potential and current challenges of time and LRI information from lidar sensors to expand the scope of research applications and insights across the full range of EES applications.
A genome-wide 3C-method for characterizing the three-dimensional architectures of genomes.
Duan, Zhijun; Andronescu, Mirela; Schutz, Kevin; Lee, Choli; Shendure, Jay; Fields, Stanley; Noble, William S; Anthony Blau, C
2012-11-01
Accumulating evidence demonstrates that the three-dimensional (3D) organization of chromosomes within the eukaryotic nucleus reflects and influences genomic activities, including transcription, DNA replication, recombination and DNA repair. In order to uncover structure-function relationships, it is necessary first to understand the principles underlying the folding and the 3D arrangement of chromosomes. Chromosome conformation capture (3C) provides a powerful tool for detecting interactions within and between chromosomes. A high throughput derivative of 3C, chromosome conformation capture on chip (4C), executes a genome-wide interrogation of interaction partners for a given locus. We recently developed a new method, a derivative of 3C and 4C, which, similar to Hi-C, is capable of comprehensively identifying long-range chromosome interactions throughout a genome in an unbiased fashion. Hence, our method can be applied to decipher the 3D architectures of genomes. Here, we provide a detailed protocol for this method. Published by Elsevier Inc.
Veli, Muhammed; Ozcan, Aydogan
2018-03-27
We present a cost-effective and portable platform based on contact lenses for noninvasively detecting Staphylococcus aureus, which is part of the human ocular microbiome and resides on the cornea and conjunctiva. Using S. aureus-specific antibodies and a surface chemistry protocol that is compatible with human tears, contact lenses are designed to specifically capture S. aureus. After the bacteria capture on the lens and right before its imaging, the captured bacteria are tagged with surface-functionalized polystyrene microparticles. These microbeads provide sufficient signal-to-noise ratio for the quantification of the captured bacteria on the contact lens, without any fluorescent labels, by 3D imaging of the curved surface of each lens using only one hologram taken with a lens-free on-chip microscope. After the 3D surface of the contact lens is computationally reconstructed using rotational field transformations and holographic digital focusing, a machine learning algorithm is employed to automatically count the number of beads on the lens surface, revealing the count of the captured bacteria. To demonstrate its proof-of-concept, we created a field-portable and cost-effective holographic microscope, which weighs 77 g, controlled by a laptop. Using daily contact lenses that are spiked with bacteria, we demonstrated that this computational sensing platform provides a detection limit of ∼16 bacteria/μL. This contact-lens-based wearable sensor can be broadly applicable to detect various bacteria, viruses, and analytes in tears using a cost-effective and portable computational imager that might be used even at home by consumers.
Ma, Qian; Khademhosseinieh, Bahar; Huang, Eric; Qian, Haoliang; Bakowski, Malina A; Troemel, Emily R; Liu, Zhaowei
2016-08-16
The conventional optical microscope is an inherently two-dimensional (2D) imaging tool. The objective lens, eyepiece and image sensor are all designed to capture light emitted from a 2D 'object plane'. Existing technologies, such as confocal or light sheet fluorescence microscopy have to utilize mechanical scanning, a time-multiplexing process, to capture a 3D image. In this paper, we present a 3D optical microscopy method based upon simultaneously illuminating and detecting multiple focal planes. This is implemented by adding two diffractive optical elements to modify the illumination and detection optics. We demonstrate that the image quality of this technique is comparable to conventional light sheet fluorescent microscopy with the advantage of the simultaneous imaging of multiple axial planes and reduced number of scans required to image the whole sample volume.
Treberg, Jason R; Crockett, Elizabeth L; Driedzic, William R
2006-01-01
Elasmobranch fishes are an ancient group of vertebrates that have unusual lipid metabolism whereby storage lipids are mobilized from the liver for peripheral oxidation largely as ketone bodies rather than as nonesterified fatty acids under normal conditions. This reliance on ketones, even when feeding, implies that elasmobranchs are chronically ketogenic. Compared to specimens sampled within 2 d of capture (recently captured), spiny dogfish Squalus acanthias that were held for 16-33 d without apparent feeding displayed a 4.5-fold increase in plasma concentration of D-β-hydroxybutyrate (from 0.71 to 3.2 mM) and were considered ketotic. Overt activity of carnitine palmitoyltransferase-1 in liver mitochondria from ketotic dogfish was characterized by an increased apparent maximal activity, a trend of increasing affinity (reduced apparent K(m); P=0.09) for L-carnitine, and desensitization to the inhibitor malonyl-CoA relative to recently captured animals. Acetoacetyl-CoA thiolase (ACoAT) activity in isolated liver mitochondria was also markedly increased in the ketotic dogfish compared to recently captured fish, whereas no difference in 3-hydroxy-3-methylglutaryl-CoA synthase activity was found between these groups, suggesting that ACoAT plays a more important role in the activation of ketogenesis in spiny dogfish than in mammals and birds.
Zhang, C. J.; Hua, J. F.; Xu, X. L.; ...
2016-07-11
A new method capable of capturing coherent electric field structures propagating at nearly the speed of light in plasma, with a time resolution as small as a few femtoseconds, is proposed. This method uses a few-femtosecond-long relativistic electron bunch to probe the wake produced in a plasma by an intense laser pulse or an ultra-short relativistic charged particle beam. As the probe bunch traverses the wake, its momentum is modulated by the electric field of the wake, leading to a density variation of the probe after free-space propagation. This variation of probe density produces a snapshot of the wake that can directly give much useful information about the wake structure and its evolution. Furthermore, this snapshot allows detailed mapping of the longitudinal and transverse components of the wakefield. We develop a theoretical model for field reconstruction and verify it using 3-dimensional particle-in-cell (PIC) simulations. This model can accurately reconstruct the wakefield structure in the linear regime, and it can also qualitatively map the major features of nonlinear wakes. Finally, the capturing of injection in a nonlinear wake is demonstrated through 3D PIC simulations as an example of the application of this new method.
Virtual viewpoint synthesis in multi-view video system
NASA Astrophysics Data System (ADS)
Li, Fang; Yang, Shiqiang
2005-07-01
In this paper, we present a virtual viewpoint video synthesis algorithm designed to satisfy three aims: low computational cost, real-time interpolation, and acceptable video quality. In contrast with previous techniques, this method obtains incomplete 3D structure from neighboring video sources instead of recovering full 3D information from all video sources, so the computation is greatly reduced. We demonstrate our interactive multi-view video synthesis algorithm on a personal computer. Furthermore, by choosing feature points to build correspondence between the frames captured by neighboring cameras, camera calibration is not required. Finally, our method works when the angle between neighboring cameras is 25-30 degrees, much larger than in common computer vision experiments. In this way, our method can be applied in many applications such as live sports broadcasting, video conferencing, etc.
Shape measurement and vibration analysis of moving speaker cone
NASA Astrophysics Data System (ADS)
Zhang, Qican; Liu, Yuankun; Lehtonen, Petri
2014-06-01
Surface three-dimensional (3-D) shape information is needed for many fast processes, such as structural testing of materials or standing waves on a loudspeaker cone. Usually, measurement is done at a limited number of points using electrical sensors or laser distance meters. Fourier Transform Profilometry (FTP) enables fast shape measurement of the whole surface. The method is based on angled sinusoidal fringe pattern projection and image capture. FTP requires only one image of the deformed fringe pattern to restore the 3-D shape of the measured object, which makes real-time or dynamic data processing possible. In our experiment the method was used for loudspeaker cone distortion measurement in dynamic conditions. For sound quality it is important that the whole cone moves in the same phase and there are no partial waves. Our imaging resolution was 1280x1024 pixels and the frame rate was 200 fps. Using our setup we found unwanted spatial waves in our sample cone.
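The core FTP pipeline (FFT, band-pass around the fundamental lobe, inverse FFT, carrier removal, phase unwrapping) can be sketched on one fringe row as follows. The carrier frequency, window width, and synthetic fringe are illustrative assumptions, not the authors' experimental setup.

```python
import numpy as np

def ftp_phase(fringe_row, carrier_freq):
    """Recover the unwrapped phase of one row of a deformed sinusoidal
    fringe pattern via Fourier Transform Profilometry: isolate the
    fundamental spectral lobe, demodulate the carrier, and unwrap."""
    n = fringe_row.size
    spectrum = np.fft.fft(fringe_row)
    # Band-pass window around the carrier frequency (fundamental lobe).
    window = np.zeros(n)
    half = max(1, carrier_freq // 2)
    window[carrier_freq - half:carrier_freq + half + 1] = 1.0
    fundamental = np.fft.ifft(spectrum * window)
    # Remove the carrier to leave only the height-induced phase.
    x = np.arange(n)
    phase = np.angle(fundamental * np.exp(-2j * np.pi * carrier_freq * x / n))
    return np.unwrap(phase)

# Synthetic fringe with a smooth phase bump standing in for object height:
x = np.arange(256)
true_phase = 1.5 * np.exp(-((x - 128) / 40.0) ** 2)
fringe = 0.5 + 0.5 * np.cos(2 * np.pi * 16 * x / 256 + true_phase)
recovered = ftp_phase(fringe, 16)
```

Because only one image per frame is needed, the same demodulation can run on every frame of a 200 fps sequence, which is what enables the dynamic measurement described above.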
NASA Technical Reports Server (NTRS)
Blackmon, Theodore
1998-01-01
Virtual reality (VR) technology has played an integral role in Mars Pathfinder mission operations. Using an automated machine vision algorithm, the 3D topography of the Martian surface was rapidly recovered from the stereo images captured by the lander camera to produce photo-realistic 3D models. An advanced interface was developed for visualization and interaction with the virtual environment of the Pathfinder landing site for mission scientists at the Space Flight Operations Facility of the Jet Propulsion Laboratory. The VR aspect of the display allowed mission scientists to navigate on Mars in 3D while remaining here on Earth, thus improving their spatial awareness of the rock field that surrounds the lander. Measurements of positions, distances and angles could be easily extracted from the topographic models, providing valuable information for science analysis and mission planning. Moreover, the VR map of Mars has also been used to assist with the archiving and planning of activities for the Sojourner rover.
Spectra of Full 3-D PIC Simulations of Finite Meteor Trails
NASA Astrophysics Data System (ADS)
Tarnecki, L. K.; Oppenheim, M. M.
2016-12-01
Radars detect plasma trails created by the billions of small meteors that impact the Earth's atmosphere daily, returning data used to infer characteristics of the meteoroid population and upper atmosphere. Researchers use models to investigate the dynamic evolution of the trails. Previously, all models assumed a trail of infinite length, due to the constraints of simulation techniques. We present the first simulations of 3D meteor trails of finite length. This change more accurately captures the physics of the trails. We characterize the turbulence that develops as the trail evolves and study the effects of varying the external electric field, altitude, and initial density. The simulations show that turbulence develops in all cases, and that trails travel with the neutral wind rather than electric field. Our results will allow us to draw more detailed and accurate information from non-specular radar observations of meteors.
Evaluation of 3-D graphics software: A case study
NASA Technical Reports Server (NTRS)
Lores, M. E.; Chasen, S. H.; Garner, J. M.
1984-01-01
An efficient 3-D geometry graphics software package which is suitable for advanced design studies was developed. The advanced design system is called GRADE--Graphics for Advanced Design. Efficiency and ease of use are gained by sacrificing flexibility in surface representation. The immediate options were either to continue development of GRADE or to acquire a commercially available system which would replace or complement GRADE. Test cases which would reveal the ability of each system to satisfy the requirements were developed. A scoring method which adequately captured the relative capabilities of the three systems was presented. While more complex multi-attribute decision methods could be used, the selected method provides all the needed information without being so complex that it is difficult to understand. If the value factors are modestly perturbed, system Z is a clear winner based on its overall capabilities. System Z is superior in two vital areas: surfacing and ease of interface with application programs.
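The abstract does not give the scoring formula, but a weighted-sum multi-attribute score of the kind it describes can be sketched as follows. The attribute names, weights ("value factors"), and scores below are hypothetical illustrations, not the study's actual data.

```python
import numpy as np

# Hypothetical capability scores (0-10) for three candidate systems
# on attributes mentioned in the study; weights are the value factors.
attributes = ["surfacing", "application interface", "ease of use"]
weights = np.array([0.4, 0.35, 0.25])        # value factors, summing to 1
scores = {
    "GRADE":    np.array([5.0, 6.0, 8.0]),
    "System Y": np.array([6.0, 5.0, 7.0]),
    "System Z": np.array([9.0, 9.0, 6.0]),
}

# Weighted sum per system; the highest total wins the comparison.
totals = {name: float(weights @ s) for name, s in scores.items()}
best = max(totals, key=totals.get)
print(best, totals[best])
```

Perturbing the weights and re-running the comparison is exactly the robustness check the study describes: a winner that survives modest weight changes is a safe choice.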
Projection-slice theorem based 2D-3D registration
NASA Astrophysics Data System (ADS)
van der Bom, M. J.; Pluim, J. P. W.; Homan, R.; Timmer, J.; Bartels, L. W.
2007-03-01
In X-ray guided procedures, the surgeon or interventionalist is dependent on his or her knowledge of the patient's specific anatomy and the projection images acquired during the procedure by a rotational X-ray source. Unfortunately, these X-ray projections fail to give information on the patient's anatomy in the dimension along the projection axis. It would be very profitable to provide the surgeon or interventionalist with a 3D insight of the patient's anatomy that is directly linked to the X-ray images acquired during the procedure. In this paper we present a new robust 2D-3D registration method based on the Projection-Slice Theorem. This theorem gives us a relation between the pre-operative 3D data set and the interventional projection images. Registration is performed by minimizing a translation invariant similarity measure that is applied to the Fourier transforms of the images. The method was tested by performing multiple exhaustive searches on phantom data of the Circle of Willis and on a post-mortem human skull. Validation was performed visually by comparing the test projections to the ones that corresponded to the minimal value of the similarity measure. The Projection-Slice Theorem Based method was shown to be very effective and robust, and provides capture ranges up to 62 degrees. Experiments have shown that the method is capable of retrieving similar results when translations are applied to the projection images.
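The translation invariance the method relies on can be illustrated with a short sketch: a spatial shift changes only the phase of an image's Fourier transform, so any similarity measure applied to the magnitude spectra is unaffected by translation. The normalized cross-correlation of magnitudes below is one such measure, not necessarily the authors' exact choice.

```python
import numpy as np

def translation_invariant_similarity(img_a, img_b):
    """Compare two images through the magnitudes of their 2-D Fourier
    transforms. A spatial shift only changes spectral phase, so the
    magnitude spectra of shifted copies are identical."""
    mag_a = np.abs(np.fft.fft2(img_a)).ravel()
    mag_b = np.abs(np.fft.fft2(img_b)).ravel()
    mag_a = (mag_a - mag_a.mean()) / mag_a.std()
    mag_b = (mag_b - mag_b.mean()) / mag_b.std()
    return float(np.mean(mag_a * mag_b))   # normalized cross-correlation

rng = np.random.default_rng(0)
image = rng.random((64, 64))
shifted = np.roll(image, (5, 9), axis=(0, 1))
print(translation_invariant_similarity(image, shifted))  # ~1.0
```

This is why the registration can recover orientation first and tolerate unknown translations of the projection images, as the experiments above report.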
NASA Astrophysics Data System (ADS)
Ali-Bey, Mohamed; Moughamir, Saïd; Manamanni, Noureddine
2011-12-01
In this paper a simulator of a multi-view shooting system with parallel optical axes and structurally variable configuration is proposed. The considered system is dedicated to the production of 3D contents for auto-stereoscopic visualization. The global shooting/viewing geometrical process, which is the kernel of this shooting system, is detailed and the different viewing, transformation and capture parameters are then defined. An appropriate perspective projection model is afterward derived to work out a simulator. At first, this latter is used to validate the global geometrical process in the case of a static configuration. Next, the simulator is used to show the limitations of a static configuration of this shooting system type by considering the case of dynamic scenes, and then a dynamic scheme is achieved to allow a correct capture of this kind of scenes. After that, the effect of the different geometrical capture parameters on the 3D rendering quality and the necessity or not of their adaptation is studied. Finally, some dynamic effects and their repercussions on the 3D rendering quality of dynamic scenes are analyzed using error images and some image quantization tools. Simulation and experimental results are presented throughout this paper to illustrate the different studied points. Some conclusions and perspectives end the paper.
Automatic priming of attentional control by relevant colors.
Ansorge, Ulrich; Becker, Stefanie I
2012-01-01
We tested whether color word cues automatically primed attentional control settings during visual search, or whether color words were used in a strategic manner for the control of attention. In Experiment 1, we used color words as cues that were informative or uninformative with respect to the target color. Regardless of the cue's informativeness, distractors similar to the color cue captured more attention. In Experiment 2, the participants either indicated their expectation about the target color or recalled the last target color, which was uncorrelated with the present target color. We observed more attentional capture by distractors that were similar to the participants' predictions and recollections, but no difference between effects of the recollected and predicted colors. In Experiment 3, we used 100%-informative word cues that were congruent with the predicted target color (e.g., the word "red" informed that the target would be red) or incongruent with the predicted target color (e.g., the word "green" informed that the target would be red) and found that informative incongruent word cues primed attention capture by a word-similar distractor. Together, the results suggest that word cues (Exps. 1 and 3) and color representations (Exp. 2) primed attention capture in an automatic manner. This indicates that color cues automatically primed temporary adjustments in attention control settings.
The future of structural fieldwork - UAV assisted aerial photogrammetry
NASA Astrophysics Data System (ADS)
Vollgger, Stefan; Cruden, Alexander
2015-04-01
Unmanned aerial vehicles (UAVs), commonly referred to as drones, are opening new and low cost possibilities to acquire high-resolution aerial images and digital surface models (DSM) for applications in structural geology. UAVs can be programmed to fly autonomously along a user defined grid to systematically capture high-resolution photographs, even in difficult to access areas. The photographs are subsequently processed using software that employ SIFT (scale invariant feature transform) and SFM (structure from motion) algorithms. These photogrammetric routines allow the extraction of spatial information (3D point clouds, digital elevation models, 3D meshes, orthophotos) from 2D images. Depending on flight altitude and camera setup, sub-centimeter spatial resolutions can be achieved. By "digitally mapping" georeferenced 3D models and images, orientation data can be extracted directly and used to analyse the structural framework of the mapped object or area. We present UAV assisted aerial mapping results from a coastal platform near Cape Liptrap (Victoria, Australia), where deformed metasediments of the Palaeozoic Lachlan Fold Belt are exposed. We also show how orientation and spatial information of brittle and ductile structures extracted from the photogrammetric model can be linked to the progressive development of folds and faults in the region. Even though there are both technical and legislative limitations, which might prohibit the use of UAVs without prior commercial licensing and training, the benefits that arise from the resulting high-resolution, photorealistic models can substantially contribute to the collection of new data and insights for applications in structural geology.
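Extracting orientation data from a georeferenced photogrammetric model typically reduces to fitting a plane to a patch of the point cloud and converting its normal into dip and dip direction. A minimal sketch follows; the axis convention (x = east, y = north, z = up) and the synthetic bedding plane are assumptions for illustration.

```python
import numpy as np

def dip_and_dip_direction(points):
    """Fit a plane to an (N,3) patch of point-cloud coordinates
    (x=east, y=north, z=up) and return (dip, dip_direction) in degrees."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # The singular vector with the smallest singular value is the
    # least-squares plane normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    n = vt[-1]
    if n[2] < 0:                      # make the normal point upward
        n = -n
    dip = np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))
    dip_dir = (np.degrees(np.arctan2(n[0], n[1])) + 360.0) % 360.0
    return dip, dip_dir

# Synthetic patch: a surface dipping 30 degrees toward the east (090).
x, y = np.meshgrid(np.linspace(0, 10, 5), np.linspace(0, 10, 5))
z = -np.tan(np.radians(30.0)) * x
pts = np.column_stack([x.ravel(), y.ravel(), z.ravel()])
dip, direction = dip_and_dip_direction(pts)
print(round(dip, 1), round(direction, 1))
```

Applied patch-by-patch over a digitally mapped outcrop, this yields the orientation datasets used in the structural analysis described above.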
NASA Astrophysics Data System (ADS)
Nakajo, A.; Cocco, A. P.; DeGostin, M. B.; Peracchio, A. A.; Cassenti, B. N.; Cantoni, M.; Van herle, J.; Chiu, W. K. S.
2016-09-01
The performance of materials for electrochemical energy conversion and storage depends upon the number of electrocatalytic sites available for reaction and their accessibility by the transport of reactants and products. For solid oxide fuel/electrolysis cell materials, standard 3-D measurements such as connected triple-phase boundary (TPB) length and effective transport properties only partially inform on how local geometry and network topology cause variability in TPB accessibility. A new measurement, the accessible TPB, is proposed to quantify these effects in detail and characterize material performance. The approach probes the reticulated pathways to each TPB using an analytical electrochemical fin model applied to a 3-D discrete representation of the heterogeneous structure provided by skeleton-based partitioning. The method is tested on artificial and real structures imaged by 3-D x-ray and electron microscopy. The accessible TPB is not uniform and the pattern varies depending upon the structure. Connected TPBs can even be passivated. The sensitivity to manipulations of the local 3-D geometry and topology that standard measurements cannot capture is demonstrated. The clear presence of preferential pathways showcases a non-uniform utilization of the 3-D structure that potentially affects the performance and the resilience to alterations due to degradation phenomena. The concepts presented also apply to electrochemical energy storage and conversion devices such as other types of fuel cells, electrolyzers, batteries and capacitors.
Producing genome structure populations with the dynamic and automated PGS software.
Hua, Nan; Tjong, Harianto; Shin, Hanjun; Gong, Ke; Zhou, Xianghong Jasmine; Alber, Frank
2018-05-01
Chromosome conformation capture technologies such as Hi-C are widely used to investigate the spatial organization of genomes. Because genome structures can vary considerably between individual cells of a population, interpreting ensemble-averaged Hi-C data can be challenging, in particular for long-range and interchromosomal interactions. We pioneered a probabilistic approach for the generation of a population of distinct diploid 3D genome structures consistent with all the chromatin-chromatin interaction probabilities from Hi-C experiments. Each structure in the population is a physical model of the genome in 3D. Analysis of these models yields new insights into the causes and the functional properties of the genome's organization in space and time. We provide a user-friendly software package, called PGS, which runs on local machines (for practice runs) and high-performance computing platforms. PGS takes a genome-wide Hi-C contact frequency matrix, along with information about genome segmentation, and produces an ensemble of 3D genome structures entirely consistent with the input. The software automatically generates an analysis report, and provides tools to extract and analyze the 3D coordinates of specific domains. Basic Linux command-line knowledge is sufficient for using this software. A typical running time of the pipeline is ∼3 d with 300 cores on a computer cluster to generate a population of 1,000 diploid genome structures at topological-associated domain (TAD)-level resolution.
THE THREE-DIMENSIONAL EVOLUTION TO CORE COLLAPSE OF A MASSIVE STAR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Couch, Sean M.; Chatzopoulos, Emmanouil; Arnett, W. David
2015-07-20
We present the first three-dimensional (3D) simulation of the final minutes of iron core growth in a massive star, up to and including the point of core gravitational instability and collapse. We capture the development of strong convection driven by violent Si burning in the shell surrounding the iron core. This convective burning builds the iron core to its critical mass and collapse ensues, driven by electron capture and photodisintegration. The non-spherical structure and motion generated by 3D convection is substantial at the point of collapse, with convective speeds of several hundreds of km s⁻¹. We examine the impact of such physically realistic 3D initial conditions on the core-collapse supernova mechanism using 3D simulations including multispecies neutrino leakage and find that the enhanced post-shock turbulence resulting from 3D progenitor structure aids successful explosions. We conclude that non-spherical progenitor structure should not be ignored, and should have a significant and favorable impact on the likelihood for neutrino-driven explosions. In order to make simulating the 3D collapse of an iron core feasible, we were forced to make approximations to the nuclear network, making this effort only a first step toward accurate, self-consistent 3D stellar evolution models of the end states of massive stars.
[Reliability of iWitness photogrammetry in maxillofacial application].
Jiang, Chengcheng; Song, Qinggao; He, Wei; Chen, Shang; Hong, Tao
2015-06-01
This study aims to test the accuracy and precision of iWitness photogrammetry for measuring the facial tissues of a mannequin head. Under ideal circumstances, the 3D landmark coordinates were repeatedly obtained from a mannequin head using the iWitness photogrammetric system with different parameters, to examine the precision of this system. The differences between the 3D data and the true distance values of the mannequin head were computed. Operator errors of the 3D system in non-zoom and zoom status were 0.20 mm and 0.09 mm, and the difference was significant (P<0.05). Image capture error of the 3D system was 0.283 mm, with no significant difference compared with the same group of images (P>0.05). Error of the 3D system with recalibration was 0.251 mm, and the difference was not statistically significant compared with the image capture error (P>0.05). Good congruence was observed between means derived from the 3D photos and direct anthropometry, with differences ranging from -0.4 mm to +0.4 mm. This study provides further evidence of the high reliability of iWitness photogrammetry for several craniofacial measurements, including landmarks and inter-landmark distances. The evaluated system can be recommended for the evaluation and documentation of the facial surface.
Analysis Methodology for Optimal Selection of Ground Station Site in Space Missions
NASA Astrophysics Data System (ADS)
Nieves-Chinchilla, J.; Farjas, M.; Martínez, R.
2013-12-01
Optimization of ground station sites is especially important in complex missions that include several small satellites (clusters or constellations), such as the QB50 project, where one ground station may have to track several space vehicles, even simultaneously. In this regard, the design of the communication system has to carefully take into account the ground station site and the relevant signal phenomena, which depend on the frequency band. These aspects become even more relevant for establishing a trusted communication link when the ground segment is sited in urban areas and/or low orbits are selected for the space segment. In addition, updated cartography with high-resolution data of the location and its surroundings helps to develop recommendations for the site design for space vehicle tracking, and hence to improve effectiveness. The objectives of this analysis methodology are: completion of cartographic information; modelling of the obstacles that hinder communication between the ground and space segments; and representation, in the generated 3D scene, of the degree of signal/noise impairment caused by the phenomena that interfere with communication. The integration of new geographic data capture technologies, such as 3D laser scanning, means that better optimization of the antenna elevation mask, at its AOS and LOS azimuths along the visible horizon, maximizes visibility time with space vehicles. Furthermore, from the captured three-dimensional point cloud, specific information is selected and, using 3D modeling techniques, a 3D scene of the antenna site and its surroundings is generated. The resulting 3D model reveals nearby obstacles related to the cartographic conditions, such as mountain formations and buildings, and any additional obstacles that interfere with the operational quality of the antenna (other antennas and electronic devices that emit or receive in the same bandwidth).
To test the proposed ground station site, this analysis methodology uses space mission simulation software to analyze and quantify how the geographic accuracy of the positions of the space vehicles along the horizon visible from the antenna increases communication time with the ground station. Experimental results obtained from a ground station located at ETSIT-UPM in Spain (the QBito nanosatellite, a UPM spacecraft mission within the QB50 project) show that selection of the optimal site increases the field of view from the antenna and hence helps to meet mission requirements.
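The antenna elevation mask described above can be sketched by binning obstacle points from the 3D scan by azimuth and keeping the maximum elevation angle per bin. The coordinate convention (antenna at the origin, x = east, y = north, z = up) and the single-building example are hypothetical.

```python
import numpy as np

def elevation_mask(obstacles, n_bins=360):
    """Maximum elevation angle (degrees) of obstacle points per azimuth
    bin, as seen from an antenna at the origin (x=east, y=north, z=up)."""
    pts = np.asarray(obstacles, dtype=float)
    az = (np.degrees(np.arctan2(pts[:, 0], pts[:, 1])) + 360.0) % 360.0
    horiz = np.hypot(pts[:, 0], pts[:, 1])
    elev = np.degrees(np.arctan2(pts[:, 2], horiz))
    mask = np.zeros(n_bins)
    bins = (az * n_bins / 360.0).astype(int) % n_bins
    # Unbuffered per-bin maximum; below-horizon points clip to zero.
    np.maximum.at(mask, bins, np.clip(elev, 0.0, None))
    return mask

# A hypothetical building 100 m due east, 30 m taller than the antenna:
building = [[100.0, 0.0, 30.0]]
mask = elevation_mask(building)
print(mask[90])  # elevation of the obstruction due east, in degrees
```

Feeding such a mask into mission simulation software is what lets AOS/LOS times, and hence total contact time per pass, be quantified for each candidate site.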
Clément, Julien; Hagemeister, Nicola; Aissaoui, Rachid; de Guise, Jacques A
2014-01-01
Numerous studies have described 3D kinematics, 3D kinetics and electromyography (EMG) of the lower limbs during quasi-static or dynamic squatting activities. One study compared these two squatting conditions, but only at low speed on healthy subjects, and provided no information on kinetics and EMG of the lower limbs. The purpose of the present study was to contrast simultaneous recordings of 3D kinematics, 3D kinetics and EMG of the lower limbs during quasi-static and fast-dynamic squats in healthy and pathological subjects. Ten subjects were recruited: five healthy and five osteoarthritis subjects. A motion-capture system, force plate, and surface electrodes respectively recorded 3D kinematics, 3D kinetics and EMG of the lower limbs. Each subject performed a quasi-static squat and several fast-dynamic squats from 0° to 70° of knee flexion. The two squatting conditions were compared at positions where quasi-static and fast-dynamic knee flexion-extension angles were similar. Mean differences between quasi-static and fast-dynamic squats were 1.5° for rotations, 1.9 mm for translations, 2.1% of subjects' body weight for ground reaction forces, 6.6 Nm for torques, 11.2 mm for center of pressure, and 6.3% of maximum fast-dynamic electromyographic activity for EMG. Some significant differences (p<0.05) were found in internal rotation, anterior translation, vertical force and EMG. All differences between quasi-static and fast-dynamic squats were small, and 69.5% of the compared data were equivalent. In conclusion, this study showed that quasi-static and fast-dynamic squatting activities are comparable in terms of 3D kinematics, 3D kinetics and EMG, although some reservations remain.
NASA Astrophysics Data System (ADS)
Zhou, Chaojie; Ding, Xiaohua; Zhang, Jie; Yang, Jungang; Ma, Qiang
2017-12-01
While global oceanic surface information with large-scale, real-time, high-resolution data is collected by satellite remote sensing instrumentation, three-dimensional (3D) observations are usually obtained from in situ measurements, with minimal coverage and spatial resolution. To meet the needs of 3D ocean investigations, we have developed a new algorithm to reconstruct the 3D ocean temperature field based on Array for Real-time Geostrophic Oceanography (Argo) profiles and sea surface temperature (SST) data. The Argo temperature profiles are first optimally fitted to generate a series of temperature functions of depth, so that the vertical temperature structure is represented continuously. By calculating the derivatives of the fitted functions, the vertical temperature gradient of the Argo profiles can be computed at arbitrary depth. A gridded 3D temperature gradient field is then found by applying inverse distance weighting interpolation in the horizontal direction. Combined with the processed SST, the 3D temperature field is reconstructed below the surface using the gridded temperature gradient. Finally, to confirm the effectiveness of the algorithm, an experiment in the Pacific Ocean south of Japan is conducted, for which a 3D temperature field is generated. Compared with other similar gridded products, the reconstructed 3D temperature field derived by the proposed algorithm achieves satisfactory accuracy, with correlation coefficients of 0.99, and a higher spatial resolution (0.25° × 0.25°) that captures smaller-scale characteristics. Both the accuracy and the superiority of the algorithm are thus validated.
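The fit-differentiate-integrate chain of the algorithm can be sketched on a toy profile: fit a smooth temperature function of depth, differentiate it to get the vertical gradient, then rebuild temperatures below the surface by integrating the gradient down from the SST. The polynomial fit, depths, and temperatures below are illustrative stand-ins for the paper's optimal fitting of real Argo profiles.

```python
import numpy as np

# Toy Argo-like profile (hypothetical): depth in km, temperature in deg C.
depth_km = np.array([0.0, 0.05, 0.1, 0.2, 0.4, 0.7, 1.0])
temp = np.array([28.0, 26.5, 24.0, 18.0, 11.0, 7.0, 5.0])

# Step 1: fit a smooth temperature function of depth.
coeffs = np.polyfit(depth_km, temp, deg=4)
# Step 2: differentiate the fit to get the vertical temperature gradient.
grad_coeffs = np.polyder(coeffs)

# Step 3: rebuild the profile below the surface by integrating the
# gradient downward from the satellite SST (trapezoidal steps).
sst = 28.0
z = np.linspace(0.0, 1.0, 201)
grad = np.polyval(grad_coeffs, z)
steps = 0.5 * (grad[1:] + grad[:-1]) * np.diff(z)
reconstructed = sst + np.concatenate([[0.0], np.cumsum(steps)])
print(reconstructed[0], reconstructed[-1])  # anchored at SST, cooler at depth
```

In the full algorithm the gradients, not the temperatures, are interpolated horizontally (inverse distance weighting) before this downward integration, which is what ties the 3D field to the high-resolution SST.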
NASA Astrophysics Data System (ADS)
Poux, F.; Neuville, R.; Billen, R.
2017-08-01
Reasoning from information extraction by point cloud data mining allows contextual adaptation and fast decision making. However, to achieve this perceptive level, a point cloud must be semantically rich, retaining the information relevant to the end user. This paper presents an automatic knowledge-based method for pre-processing multi-sensory data and classifying a hybrid point cloud from both terrestrial laser scanning and dense image matching. Using 18 features, including the sensors' biased data, each tessera in the high-density point cloud from the 3D-captured complex mosaics of Germigny-des-prés (France) is segmented via a colour multi-scale abstraction-based feature extracting connectivity. A 2D surface and outline polygon of each tessera are generated by RANSAC plane extraction and convex hull fitting. Knowledge is then used to classify every tessera based on its size, surface, shape, material properties and its neighbours' classes. The detection and semantic enrichment method shows promising results of 94% correct semantization, a first step toward the creation of an archaeological smart point cloud.
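The per-tessera outline step can be sketched in 2D: once a tessera's points are projected onto its RANSAC-fitted plane, the outline polygon is their convex hull. The monotone-chain hull below is a dependency-light stand-in for the paper's pipeline, and the point coordinates are hypothetical.

```python
import numpy as np

def convex_hull_2d(points):
    """Andrew's monotone-chain convex hull of (N,2) points,
    returned in counter-clockwise order."""
    pts = sorted(map(tuple, np.asarray(points, dtype=float)))

    def half(seq):
        out = []
        for p in seq:
            while len(out) >= 2:
                (x1, y1), (x2, y2) = out[-2], out[-1]
                # Pop while the last turn is not a strict left turn.
                if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) <= 0:
                    out.pop()
                else:
                    break
            out.append(p)
        return out

    lower, upper = half(pts), half(pts[::-1])
    return np.array(lower[:-1] + upper[:-1])

# Hypothetical in-plane tessera points: a unit square plus interior points.
pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5], [0.2, 0.7]])
outline = convex_hull_2d(pts)
print(len(outline))  # only the corner points survive as the outline polygon
```

The resulting polygons supply the size and shape features that the knowledge-based classifier then uses to assign each tessera a class.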
First measurements of water and D/H on Mars with ExoMars / NOMAD
NASA Astrophysics Data System (ADS)
Villanueva, Geronimo Luis; Liuzzi, Giuliano; Mumma, Michael J.; Carine Vandaele, Ann; Thomas, Ian; Smith, Michael D.; Daerden, Frank; Ristic, Bojan; Patel, Manish; Bellucci, Giancarlo; Lopez-Moreno, Jose; NOMAD Team
2017-10-01
We present preliminary data collected by the high-resolution NOMAD (Nadir and Occultation for MArs Discovery) instrument onboard the ExoMars / Trace Gas Orbiter (TGO) targeting several lines of water (H2O), deuterated water (HDO) and carbon dioxide (CO2). TGO is the first spacecraft on Mars specifically tailored to search for trace constituents, with the NOMAD instrument providing high spectral resolution (λ/dλ~ 20,000) over the 2-5 μm spectral region. Such capabilities allow us to probe with unprecedented accuracy and sensitivity a multitude of organic species (e.g., CH4, CH3OH, H2CO, C2H6) and to map isotopic signatures (e.g., D/H, 13C/12C) across the whole planet. In particular, isotopic ratios are among the most valuable indicators for the loss of volatiles from an atmosphere. Because the escape rates for each isotope are slightly different (larger for the lighter forms), over long times the atmosphere becomes enriched in the heavy isotopic forms. By probing the current isotopic ratios, one can then infer the amount of matter lost to space over the planet’s evolution. Deuterium fractionation also reveals information about the cycle of water on the planet and informs us of its stability on short- and long-term scales, including its release from active regions on Mars having a characteristic D/H signature. Upon its successful launch in March/2016, we acquired critical calibration data in Apr/2016 and in June/2016, while during the Mars-Orbit-Capture phase, we also acquired Mars nadir data in Nov/2016 and in Feb-Mar/2017. Full science operations are expected to start upon final orbit insertion in early 2018. In this paper, we report initial retrievals of water and D/H derived during the Mars-Orbit-Capture phase and discuss the prospects for mapping of isotopic signatures during the nominal science phase.
Deep Adaptive Log-Demons: Diffeomorphic Image Registration with Very Large Deformations
Zhao, Liya; Jia, Kebin
2015-01-01
This paper proposes a new framework for capturing large and complex deformation in image registration. Traditionally, this challenging problem relies firstly on a preregistration, usually an affine matrix containing rotation, scale, and translation and afterwards on a nonrigid transformation. According to preregistration, the directly calculated affine matrix, which is obtained by limited pixel information, may misregistrate when large biases exist, thus misleading following registration subversively. To address this problem, for two-dimensional (2D) images, the two-layer deep adaptive registration framework proposed in this paper firstly accurately classifies the rotation parameter through multilayer convolutional neural networks (CNNs) and then identifies scale and translation parameters separately. For three-dimensional (3D) images, affine matrix is located through feature correspondences by a triplanar 2D CNNs. Then deformation removal is done iteratively through preregistration and demons registration. By comparison with the state-of-the-art registration framework, our method gains more accurate registration results on both synthetic and real datasets. Besides, principal component analysis (PCA) is combined with correlation like Pearson and Spearman to form new similarity standards in 2D and 3D registration. Experiment results also show faster convergence speed. PMID:26120356
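The PCA-plus-Pearson similarity standard mentioned above can be sketched as follows. Projecting both images onto the leading principal components of one of them before correlating is an illustrative construction, not necessarily the paper's exact formulation.

```python
import numpy as np

def pca_pearson_similarity(img_a, img_b, k=8):
    """Similarity sketch: project the rows of both images onto the
    leading k principal components of img_a, then take the Pearson
    correlation of the two sets of projection coefficients."""
    a = np.asarray(img_a, dtype=float)
    b = np.asarray(img_b, dtype=float)
    mean = a.mean(axis=0)
    # Principal components of img_a via SVD of the centered rows.
    _, _, vt = np.linalg.svd(a - mean, full_matrices=False)
    basis = vt[:k].T
    proj_a = ((a - mean) @ basis).ravel()
    proj_b = ((b - mean) @ basis).ravel()
    return float(np.corrcoef(proj_a, proj_b)[0, 1])

rng = np.random.default_rng(1)
image = rng.random((32, 32))
noisy = image + 0.01 * rng.standard_normal((32, 32))
print(pca_pearson_similarity(image, image))   # 1.0 for identical images
```

Correlating in a low-dimensional PCA subspace rather than pixel space is one plausible reason for the faster convergence the paper reports: the similarity surface is smoother and cheaper to evaluate.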
Capture of unstable protein complex on the streptavidin-coated single-walled carbon nanotubes
NASA Astrophysics Data System (ADS)
Liu, Zunfeng; Voskamp, Patrick; Zhang, Yue; Chu, Fuqiang; Abrahams, Jan Pieter
2013-04-01
Purification of unstable protein complexes is a bottleneck for investigation of their 3D structure and in protein-protein interaction studies. In this paper, we demonstrate that streptavidin-coated single-walled carbon nanotubes (Strep•SWNT) can be used to capture biotinylated DNA-EcoRI complexes on a 2D surface and in solution, using atomic force microscopy and electrophoresis analysis, respectively. The restriction enzyme EcoRI forms unstable complexes with DNA in the absence of Mg2+. Capturing the EcoRI-DNA complexes on the Strep•SWNT succeeded in the absence of Mg2+, demonstrating that Strep•SWNT can be used for purifying unstable protein complexes.
Light ray field capture using focal plane sweeping and its optical reconstruction using 3D displays.
Park, Jae-Hyeung; Lee, Sung-Keun; Jo, Na-Young; Kim, Hee-Jae; Kim, Yong-Soo; Lim, Hong-Gi
2014-10-20
We propose a method to capture the light ray field of a three-dimensional scene using focal plane sweeping. Multiple images are captured with a conventional camera at different focal distances spanning the three-dimensional scene. The captured images are then back-projected into four-dimensional spatio-angular space to obtain the light ray field. The obtained light ray field can be visualized either by digital processing or by optical reconstruction using various three-dimensional display techniques, including integral imaging, layered displays, and holography.
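The back-projection idea can be illustrated in flatland (1-D images, a 2-D position-angle ray space): each image focused at depth d is smeared into ray space so that a pixel at x contributes to the rays passing through x at that depth. The shift-and-accumulate rule and the parameterization below are simplifying assumptions, not the paper's calibrated 4-D method.

```python
import numpy as np

def backproject_focal_stack(stack, depths, slopes):
    """Back-project a 1-D focal stack into (angle, position) ray space.
    A pixel at x_pix in an image focused at depth d contributes to the
    ray (u, x) with x = x_pix + u * d, i.e. rays converging at that
    focal plane (nearest-sample shifts, periodic boundary)."""
    n = stack.shape[1]
    ray_field = np.zeros((len(slopes), n))
    x = np.arange(n)
    for img, d in zip(stack, depths):
        for i, u in enumerate(slopes):
            shift = int(round(u * d))
            ray_field[i] += np.roll(img, shift)
    return ray_field / len(depths)

# Two hypothetical focal slices, each with one in-focus point source:
stack = np.zeros((2, 64))
stack[0, 20] = 1.0      # in focus at the first focal plane
stack[1, 40] = 1.0      # in focus at the second focal plane
field = backproject_focal_stack(stack, depths=[0.0, 4.0],
                                slopes=np.linspace(-2, 2, 5))
```

In the actual 4-D case the same smearing happens over two spatial and two angular axes, after which the ray field can be fed to integral-imaging, layered, or holographic displays.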
A standardization model based on image recognition for performance evaluation of an oral scanner.
Seo, Sang-Wan; Lee, Wan-Sun; Byun, Jae-Young; Lee, Kyu-Bok
2017-12-01
Accurate information is essential in dentistry. The image information of missing teeth is used by optically based medical equipment in prosthodontic treatment. To evaluate oral scanners, a standardized model was examined from cases of image recognition errors identified by linear discriminant analysis (LDA), and a model combining the variables was designed with reference to ISO 12836:2015. The basic model was fabricated by applying four factors to the tooth profile (chamfer, groove, curve, and square) and to the bottom surface. Photo-type and video-type scanners were used to analyze the 3D images after capture. Scans were performed several times in the prescribed sequence to distinguish the model that formed correctly from the one that did not, and the results confirmed the best-performing design. With the initial basic model, a 3D shape could not be obtained by scanning even when several shots were taken. Subsequently, the recognition rate of the image improved with each variable factor, and the difference depends on the tooth profile and the pattern of the floor surface. Based on the recognition error of the LDA, the recognition rate decreases when the model has a similar pattern. Therefore, to obtain accurate 3D data, the difference between each class needs to be provided when developing a standardized model.
Building generic anatomical models using virtual model cutting and iterative registration.
Xiao, Mei; Soh, Jung; Meruvia-Pastor, Oscar; Schmidt, Eric; Hallgrímsson, Benedikt; Sensen, Christoph W
2010-02-08
Using 3D generic models to statistically analyze trends in biological structure changes is an important tool in morphometrics research. Therefore, 3D generic models built for a range of populations are in high demand. However, due to the complexity of biological structures and the limited views of them that medical images can offer, it is still an exceptionally difficult task to quickly and accurately create 3D generic models (a model is a 3D graphical representation of a biological structure) from medical image stacks (a stack is an ordered collection of 2D images). We show that the creation of a generic model that captures spatial information exploitable in statistical analyses is facilitated by coupling our generalized segmentation method to existing automatic image registration algorithms. The method of creating generic 3D models consists of the following processing steps: (i) scanning subjects to obtain image stacks; (ii) creating individual 3D models from the stacks; (iii) interactively extracting the sub-volume of interest by cutting each model; (iv) creating image stacks that contain only the information pertaining to the sub-models; (v) iteratively registering the corresponding new 2D image stacks; (vi) averaging the newly created sub-models based on intensity to produce the generic model from all the individual sub-models. After several registration procedures are applied to the image stacks, we can create averaged image stacks with sharp boundaries. The averaged 3D model created from those image stacks is very close to the average representation of the population. The image registration time varies depending on the image size and the desired accuracy of the registration. Both volumetric data and a surface model for the generic 3D model are created at the final step. Our method is flexible and easy to use, allowing anyone to create models from image stacks and retrieve sub-regions from them with ease.
A Java-based implementation allows our method to be used on various visualization systems, including personal computers, workstations, computers equipped with stereo displays, and even virtual reality rooms such as the CAVE Automated Virtual Environment. The technique allows biologists to build generic 3D models of their structures of interest quickly and accurately.
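Steps (v)-(vi), iterative registration followed by intensity averaging, can be sketched for 2D slices. Phase correlation stands in here for the paper's unspecified registration algorithm, and integer translations are a simplifying assumption:

```python
import numpy as np

def translate(img, dy, dx):
    """Integer translation with wrap-around."""
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

def phase_correlation_shift(ref, img):
    """Integer translation (dy, dx) that aligns img to ref."""
    R = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    corr = np.fft.ifft2(R / (np.abs(R) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2:  # wrap large positive shifts to negative
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

def build_generic_model(stacks, n_iter=3):
    """Iteratively register each slice to the running average,
    then average intensities (sketch of steps v-vi)."""
    avg = np.mean(stacks, axis=0)
    for _ in range(n_iter):
        aligned = []
        for s in stacks:
            dy, dx = phase_correlation_shift(avg, s)
            aligned.append(translate(s, dy, dx))
        avg = np.mean(aligned, axis=0)
    return avg
```

In practice the per-slice registration would be replaced by a full 3D registration, but the register-then-average loop is the same.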
A cross-platform solution for light field based 3D telemedicine.
Wang, Gengkun; Xiang, Wei; Pickering, Mark
2016-03-01
Current telehealth services are dominated by conventional 2D video conferencing systems, which are limited in their ability to provide a satisfactory communication experience due to the lack of realism. The "immersiveness" provided by 3D technologies has the potential to extend telehealth services to a wider range of applications. However, conventional stereoscopic 3D technologies are deficient in many aspects, including low resolution, the requirement for complicated multi-camera setup and calibration, and the need for special glasses. The advent of light field (LF) photography enables us to record light rays in a single shot and provide glasses-free 3D display with continuous motion parallax in a wide viewing zone, which is ideally suited for 3D telehealth applications. To the best of our knowledge, there have been no reports of 3D telemedicine systems using LF technology. In this paper, we propose a cross-platform solution for an LF-based 3D telemedicine system. Firstly, a novel system architecture based on LF technology is established, which is able to capture the LF of a patient and provide an immersive 3D display at the doctor's site. For 3D modeling, we further propose an algorithm which is able to convert the captured LF to a 3D model with a high level of detail. For the software implementation on different platforms (i.e., desktop, web-based and mobile phone platforms), a cross-platform solution is proposed. Demo applications have been developed for 2D/3D video conferencing, 3D model display and editing, blood pressure and heart rate monitoring, and patient data viewing. The demo software can be extended to multi-discipline telehealth applications, such as tele-dentistry, tele-wound and tele-psychiatry. The proposed 3D telemedicine solution has the potential to revolutionize next-generation telemedicine technologies by providing a high-quality immersive tele-consultation experience. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Fast high resolution reconstruction in multi-slice and multi-view cMRI
NASA Astrophysics Data System (ADS)
Velasco Toledo, Nelson; Romero Castro, Eduardo
2015-01-01
Cardiac magnetic resonance imaging (cMRI) is a useful tool in diagnosis, prognosis and research, since it functionally tracks the heart structure. Although useful, this imaging technique is limited in spatial resolution because the heart is a constantly moving organ; other uncontrolled conditions, such as patient movements and volumetric changes during the apnea periods in which data are acquired, further limit the time available to capture high-quality information. This paper presents a very fast and simple strategy to reconstruct high-resolution 3D images from a set of low-resolution series of 2D images. The strategy is based on an information reallocation algorithm which uses the DICOM header to relocate voxel intensities in a regular grid. An interpolation method is applied to fill empty places with estimated data: the interpolation resamples the low-resolution information to estimate the missing information. As a final step, a Gaussian filter denoises the result. The reconstructed image is evaluated using a super-resolution reconstructed image as a reference. The evaluation reveals that the method maintains the general heart structure with a small loss of detailed information (edge sharpening and blurring); some artifacts related to input information quality are detected. The proposed method requires little time and few computational resources.
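A minimal sketch of the reallocate-interpolate-denoise pipeline described above. The linear/nearest interpolation choices and the unit grid spacing are assumptions of this sketch; the abstract does not fix the exact resampling scheme:

```python
import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import gaussian_filter

def reconstruct_volume(points, values, grid_shape, sigma=1.0):
    """Relocate scattered voxel intensities (world positions would come
    from the DICOM header) onto a regular grid, interpolate the gaps,
    then denoise with a Gaussian filter."""
    axes = [np.arange(n) for n in grid_shape]
    grid = np.meshgrid(*axes, indexing="ij")
    coords = np.stack([g.ravel() for g in grid], axis=1)
    # resample the low-resolution samples onto the regular grid
    vol = griddata(points, values, coords, method="linear")
    # fill query points outside the convex hull with nearest neighbours
    nan = np.isnan(vol)
    if nan.any():
        vol[nan] = griddata(points, values, coords[nan], method="nearest")
    return gaussian_filter(vol.reshape(grid_shape), sigma)
```

A real implementation would convert each slice's pixel indices to world coordinates via the DICOM `ImagePositionPatient` and `ImageOrientationPatient` tags before calling this function.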
GIS-Based Smart Cartography Using 3D Modeling
NASA Astrophysics Data System (ADS)
Malinverni, E. S.; Tassetti, A. N.
2013-08-01
3D city models have evolved into important tools for urban decision processes and information systems, especially in planning, simulation, analysis, documentation and heritage management. On the other hand, existing numerical cartography in current use is often not suitable for GIS because it is not geometrically and topologically structured correctly. The aim of this research is to structure and organize numeric cartography for 3D GIS and turn it into CityGML-standardized features. The work is framed around a first phase of methodological analysis aimed at underlining which existing standards (such as ISO and OGC rules) can be used to improve the quality requirements of a cartographic structure. Subsequently, starting from these technical specifications, the translation into formal content was investigated using proprietary interchange software (SketchUp) to support guideline implementations for generating a 3D GIS structured in GML3. A test three-dimensional numerical cartography (scale 1:500, generated from range data captured by a 3D laser scanner) was therefore prepared, tested for quality against the above standards, and edited where necessary. CAD files and shapefiles are converted into a final 3D model (Google SketchUp model) and then exported into a 3D city model (CityGML LoD1/LoD2). The 3D GIS structure has been managed in a GIS environment to run further spatial analyses and energy performance estimates not achievable in a 2D environment. In particular, geometric building parameters (footprint, volume, etc.) are computed and building envelope thermal characteristics are derived from them. Lastly, a simulation is carried out to deal with asbestos and home renovation charges and to show how the built 3D city model can support municipal managers in risk diagnosis of the present situation and in developing strategies for sustainable redevelopment.
A new method for automatic tracking of facial landmarks in 3D motion captured images (4D).
Al-Anezi, T; Khambay, B; Peng, M J; O'Leary, E; Ju, X; Ayoub, A
2013-01-01
The aim of this study was to validate the automatic tracking of facial landmarks in 3D image sequences. 32 subjects (16 males and 16 females) aged 18-35 years were recruited. 23 anthropometric landmarks were marked on the face of each subject with non-permanent ink using a 0.5 mm pen. The subjects were asked to perform three facial animations (maximal smile, lip purse and cheek puff) from the rest position. Each animation was captured by the 3D imaging system. A single operator manually digitised the landmarks on the 3D facial models and their locations were compared with those of the automatically tracked ones. To investigate the accuracy of manual digitisation, the operator re-digitised the same set of 3D images of 10 subjects (5 male and 5 female) at a 1-month interval. The discrepancies in x, y and z coordinates between the 3D positions of the manually digitised landmarks and those of the automatically tracked facial landmarks were within 0.17 mm. The mean distance between the manually digitised and the automatically tracked landmarks using the tracking software was within 0.55 mm. The automatic tracking of facial landmarks demonstrated satisfactory accuracy, which should facilitate the analysis of dynamic motion during facial animations. Copyright © 2012 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Voxel-Based 3-D Tree Modeling from Lidar Images for Extracting Tree Structural Information
NASA Astrophysics Data System (ADS)
Hosoi, F.
2014-12-01
Recently, lidar (light detection and ranging) has been used to extract tree structural information. Portable scanning lidar systems can capture the complex shape of individual trees as a 3-D point-cloud image. 3-D tree models reproduced from the lidar-derived 3-D image can be used to estimate tree structural parameters. We have proposed voxel-based 3-D modeling for extracting tree structural parameters. One of the tree parameters derived from the voxel modeling is leaf area density (LAD); we refer to the method as the voxel-based canopy profiling (VCP) method. In this method, several measurement points surrounding the canopy, together with optimally inclined laser beams, are adopted for full laser-beam illumination of the whole canopy, including its interior. From the obtained lidar image, the 3-D information is reproduced as voxel attributes in a 3-D voxel array. Based on the voxel attributes, the contact frequency of laser beams on leaves is computed and the LAD in each horizontal layer is obtained. This method offered accurate LAD estimation for individual trees and woody canopies. For more accurate LAD estimation, the voxel model was constructed by combining airborne and portable ground-based lidar data. The profiles obtained by the two types of lidar complemented each other, eliminating blind regions and yielding more accurate LAD profiles than could be obtained using either type of lidar alone. Based on the estimation results, we proposed an index named the laser-beam coverage index, Ω, which relates the lidar's laser-beam settings to a laser-beam attenuation factor. This index can be used to adjust the measurement set-up of lidar systems and to explain the LAD estimation error of different types of lidar systems. Moreover, we proposed a method to estimate woody material volume as another application of voxel tree modeling.
In this method, a voxel solid model of the target tree is produced from the lidar image, composed of consecutive voxels that fill the outer surface and the interior of the stem and large branches. From this model, the woody material volume of any part of the target tree can be calculated directly by counting the corresponding voxels and multiplying the count by the per-voxel volume.
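The voxel-counting volume estimate, and the layer-wise contact-frequency idea behind the LAD computation, both reduce to a few lines. The `correction` factor in `layer_lad` is purely illustrative, not the paper's calibrated value:

```python
import numpy as np

def woody_volume(voxel_solid, voxel_size):
    """Woody material volume from a filled voxel model: count occupied
    voxels and multiply by the per-voxel volume (voxel_size ** 3)."""
    return int(np.count_nonzero(voxel_solid)) * voxel_size ** 3

def layer_lad(hits, incident, layer_thickness, correction=1.1):
    """LAD of one horizontal layer from laser contact frequency:
    fraction of incident beams contacting leaves, per unit thickness.
    The correction factor (illustrative) accounts for leaf inclination."""
    return correction * (hits / incident) / layer_thickness
```

Any sub-part of the tree (e.g. the stem only) can be measured by masking the corresponding voxels before counting.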
Margolin, Ezra J; Mlynarczyk, Carrie M; Mulhall, John P; Stember, Doron S; Stahl, Peter J
2017-06-01
Non-curvature penile deformities are prevalent and bothersome manifestations of Peyronie's disease (PD), but the quantitative metrics that are currently used to describe these deformities are inadequate and non-standardized, presenting a barrier to clinical research and patient care. To introduce erect penile volume (EPV) and percentage of erect penile volume loss (percent EPVL) as novel metrics that provide detailed quantitative information about non-curvature penile deformities and to study the feasibility and reliability of three-dimensional (3D) photography for measurement of quantitative penile parameters. We constructed seven penis models simulating deformities found in PD. The 3D photographs of each model were captured in triplicate by four observers using a 3D camera. Computer software was used to generate automated measurements of EPV, percent EPVL, penile length, minimum circumference, maximum circumference, and angle of curvature. The automated measurements were statistically compared with measurements obtained using water-displacement experiments, a tape measure, and a goniometer. Accuracy of 3D photography for average measurements of all parameters compared with manual measurements; inter-test, intra-observer, and inter-observer reliabilities of EPV and percent EPVL measurements as assessed by the intraclass correlation coefficient. The 3D images were captured in a median of 52 seconds (interquartile range = 45-61). On average, 3D photography was accurate to within 0.3% for measurement of penile length. It overestimated maximum and minimum circumferences by averages of 4.2% and 1.6%, respectively; overestimated EPV by an average of 7.1%; and underestimated percent EPVL by an average of 1.9%. All inter-test, inter-observer, and intra-observer intraclass correlation coefficients for EPV and percent EPVL measurements were greater than 0.75, reflective of excellent methodologic reliability. 
By providing highly descriptive and reliable measurements of penile parameters, 3D photography can empower researchers to better study volume-loss deformities in PD and enable clinicians to offer improved clinical assessment, communication, and documentation. This is the first study to apply 3D photography to the assessment of PD and to accurately measure the novel parameters of EPV and percent EPVL. This proof-of-concept study is limited by the lack of data in human subjects, which could present additional challenges in obtaining reliable measurements. EPV and percent EPVL are novel metrics that can be quickly, accurately, and reliably measured using computational analysis of 3D photographs and can be useful in describing non-curvature volume-loss deformities resulting from PD. Margolin EJ, Mlynarczyk CM, Mulhall JP, et al. Three-Dimensional Photography for Quantitative Assessment of Penile Volume-Loss Deformities in Peyronie's Disease. J Sex Med 2017;14:829-833. Copyright © 2017 International Society for Sexual Medicine. Published by Elsevier Inc. All rights reserved.
Kim, Jonghyun; Moon, Seokil; Jeong, Youngmo; Jang, Changwon; Kim, Youngmin; Lee, Byoungho
2018-06-01
Here, we present dual-dimensional microscopy that captures both two-dimensional (2-D) and light-field images of an in-vivo sample simultaneously, synthesizes an upsampled light-field image in real time, and visualizes it with a computational light-field display system in real time. Compared with conventional light-field microscopy, the additional 2-D image greatly enhances the lateral resolution at the native object plane up to the diffraction limit and compensates for the image degradation at the native object plane. The whole process from capturing to displaying is done in real time with the parallel computation algorithm, which enables the observation of the sample's three-dimensional (3-D) movement and direct interaction with the in-vivo sample. We demonstrate a real-time 3-D interactive experiment with Caenorhabditis elegans. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Tacit Knowledge Capture and the Brain-Drain at Electrical Utilities
NASA Astrophysics Data System (ADS)
Perjanik, Nicholas Steven
As a consequence of an aging workforce, electric utilities are at risk of losing their most experienced and knowledgeable electrical engineers. In this research, the problem was a lack of understanding of what electric utilities were doing to capture the tacit knowledge, or know-how, of these engineers. The purpose of this qualitative research study was to explore the tacit knowledge capture strategies currently used in the industry by conducting a case study of 7 U.S. electrical utilities that have demonstrated an industry commitment to improving operational standards. The research question addressed the strategies implemented to capture the tacit knowledge of retiring electrical engineers and technical personnel. The research methodology involved a qualitative embedded case study. The theories used in this study included knowledge creation theory, resource-based theory, and organizational learning theory. Data were collected through one-time interviews of a senior electrical engineer or technician within each utility and a workforce planning or training professional within 2 of the 7 utilities. The analysis included the use of triangulation and content analysis strategies. Ten tacit knowledge capture strategies were identified: (a) formal and informal on-boarding mentorship and apprenticeship programs, (b) formal and informal off-boarding mentorship programs, (c) formal and informal training programs, (d) using lessons learned during training sessions, (e) communities of practice, (f) technology-enabled tools, (g) storytelling, (h) exit interviews, (i) rehiring of retirees as consultants, and (j) knowledge risk assessments. This research contributes to social change by offering strategies to capture the know-how needed to ensure operational continuity in the delivery of safe, reliable, and sustainable power.
Real-Time and High-Resolution 3D Face Measurement via a Smart Active Optical Sensor.
You, Yong; Shen, Yang; Zhang, Guocai; Xing, Xiuwen
2017-03-31
The 3D measuring range and accuracy in traditional active optical sensing, such as Fourier transform profilometry, are influenced by the zero frequency of the captured patterns. The phase-shifting technique is commonly applied to remove the zero component. However, this phase-shifting method must capture several fringe patterns with phase difference, thereby influencing the real-time performance. This study introduces a smart active optical sensor, in which a composite pattern is utilized. The composite pattern efficiently combines several phase-shifting fringes and carrier frequencies. The method can remove zero frequency by using only one pattern. Model face reconstruction and human face measurement were employed to study the validity and feasibility of this method. Results show no distinct decrease in the precision of the novel method unlike the traditional phase-shifting method. The texture mapping technique was utilized to reconstruct a nature-appearance 3D digital face.
Three-dimensional imaging of cultural heritage artifacts with holographic printers
NASA Astrophysics Data System (ADS)
Kang, Hoonjong; Stoykova, Elena; Berberova, Nataliya; Park, Jiyong; Nazarova, Dimana; Park, Joo Sup; Kim, Youngmin; Hong, Sunghee; Ivanov, Branimir; Malinowski, Nikola
2016-01-01
Holography is defined as a two-step process of capturing and reconstructing the light wavefront scattered from three-dimensional (3D) objects. Capture of the wavefront is possible because both amplitude and phase are encoded in the hologram as a result of interference between the light beam coming from the object and a mutually coherent reference beam. The three-dimensional imaging provided by holography motivates the development of digital holographic imaging methods based on computer generation of holograms, in the form of a holographic display or a holographic printer. The holographic printing technique relies on combining digital 3D object representation and encoding of the holographic data with the recording of analogue white-light-viewable reflection holograms. The paper considers 3D content generation for a holographic stereogram printer and a wavefront printer as a means of analogue recording of specific artifacts, which are complicated objects with regard to the restrictions of conventional analogue holography.
Responsive 3D microstructures from virus building blocks.
Oh, Seungwhan; Kwak, Eun-A; Jeon, Seongho; Ahn, Suji; Kim, Jong-Man; Jaworski, Justyn
2014-08-13
Fabrication of 3D biological structures reveals dynamic response to external stimuli. A liquid-crystalline bridge extrusion technique is used to generate 3D structures allowing the capture of Rayleigh-like instabilities, facilitating customization of smooth, helical, or undulating periodic surface textures. By integrating intrinsic biochemical functionality and synthetic components into controlled structures, this strategy offers a new form of adaptable materials. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
3D Human Motion Editing and Synthesis: A Survey
Wang, Xin; Chen, Qiudi; Wang, Wanliang
2014-01-01
The ways to compute the kinematics and dynamic quantities of human bodies in motion have been studied in many biomedical papers. This paper presents a comprehensive survey of 3D human motion editing and synthesis techniques. Firstly, four types of methods for 3D human motion synthesis are introduced and compared. Secondly, motion capture data representation, motion editing, and motion synthesis are reviewed successively. Finally, future research directions are suggested. PMID:25045395
Nanocellulosic materials as bioinks for 3D bioprinting.
Piras, Carmen C; Fernández-Prieto, Susana; De Borggraeve, Wim M
2017-09-26
3D bioprinting is a newly developing technology that holds great promise in tissue engineering and regenerative medicine. Being biocompatible, biodegradable, renewable and cost-effective, cellulosic nanomaterials have recently captured the attention of researchers due to their applicability as inks for 3D bioprinting. Although a number of cellulose-based bioinks have been reported, the potential of cellulose nanofibrils and nanocrystals has not yet been fully explored. This minireview aims at highlighting the use of nanocellulosic materials for 3D bioprinting as an emerging and promising new research field.
NASA Astrophysics Data System (ADS)
Yoon, Soweon; Jung, Ho Gi; Park, Kang Ryoung; Kim, Jaihie
2009-03-01
Although iris recognition is one of the most accurate biometric technologies, it has not yet been widely used in practical applications, mainly due to user inconvenience during the image acquisition phase. Specifically, users must adjust their eye position within a small capture volume at a close distance from the system. To overcome these problems, we propose a novel iris image acquisition system that provides users with an unconstrained environment: a large operating range, freedom of movement from a standing posture, and capture of good-quality iris images in an acceptable time. The proposed system makes the following three contributions compared with previous work: (1) the capture volume is significantly increased by using a pan-tilt-zoom (PTZ) camera guided by light stripe projection, (2) the iris location in the large capture volume is found quickly thanks to 1-D vertical face searching from the user's horizontal position obtained by the light stripe projection, and (3) zooming and focusing on the user's irises at a distance are accurate and fast using the 3-D face position estimated by the light stripe projection and the PTZ camera. Experimental results show that the proposed system can capture good-quality iris images in 2.479 s on average at a distance of 1.5 to 3 m, while allowing a limited amount of movement by the user.
Active 3D camera design for target capture on Mars orbit
NASA Astrophysics Data System (ADS)
Cottin, Pierre; Babin, François; Cantin, Daniel; Deslauriers, Adam; Sylvestre, Bruno
2010-04-01
During the ESA Mars Sample Return (MSR) mission, a sample canister launched from Mars will be autonomously captured by an orbiting satellite. We present the concept and design of an active 3D camera supporting the orbiter navigation system during the rendezvous and capture phase. This camera aims at providing the range and bearing of a 20 cm diameter canister from 2 m to 5 km within a 20° field-of-view, without moving parts (scannerless). The concept exploits the sensitivity and the gating capability of a gated intensified camera. It is supported by a pulsed source based on an array of laser diodes with adjustable amplitude and pulse duration (from nanoseconds to microseconds). The ranging capability is obtained by adequately controlling the timing between the acquisition of 2D images and the emission of the light pulses. Three modes of acquisition are identified to accommodate the different levels of ranging and bearing accuracy and the 3D data refresh rate. To produce a single 3D image, each mode requires a different number of images to be processed. These modes can be applied to the different approach phases. The entire concept of operation of this camera is detailed, with an emphasis on the extreme lighting conditions. Its use in other space missions and terrestrial applications is also highlighted. The design is implemented in a prototype with shorter ranging capability for concept validation, and preliminary results obtained with this prototype are presented. This work is financed by the Canadian Space Agency.
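The time-of-flight relation underlying gated ranging is simple: a gate opened t seconds after the pulse is emitted captures returns from range c·t/2, since the light travels out and back. A sketch of this generic physics (not the camera's actual control code):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def gate_delay_to_range(delay_s):
    """Range of a return captured by a gate opened delay_s after pulse
    emission; the factor 2 accounts for the round trip."""
    return C * delay_s / 2.0

def range_to_gate_delay(range_m):
    """Gate delay needed to capture returns from a given range."""
    return 2.0 * range_m / C
```

By sweeping the gate delay across successive 2D acquisitions, the 3D image is assembled slice by slice in range, which is why each acquisition mode trades refresh rate against ranging accuracy.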
Do the contents of working memory capture attention? Yes, but cognitive control matters.
Han, Suk Won; Kim, Min-Shik
2009-10-01
There has been controversy over whether working memory can guide attentional selection. Some researchers have reported that the contents of working memory guide attention automatically in visual search (D. Soto, D. Heinke, G. W. Humphreys, & M. J. Blanco, 2005). On the other hand, G. F. Woodman and S. J. Luck (2007) reported that they could not find any evidence of attentional capture by working memory. In the present study, we sought an integrative explanation for the different sets of results. We report evidence for attentional capture by working memory, but this effect was eliminated when search was perceptually demanding or when the onset of the search was delayed long enough for cognitive control of search to be implemented under particular conditions. We suggest that perceptual difficulty and the time course of cognitive control are important factors that determine when information in working memory influences attention. PsycINFO Database Record (c) 2009 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Marshall, Jason P.; Hudson, Troy L.; Andrade, José E.
2017-10-01
The InSight mission launches in 2018 to characterize several geophysical quantities on Mars, including the heat flow from the planetary interior. This quantity will be calculated using measurements of the thermal conductivity and the thermal gradient down to 5 meters below the Martian surface. One of the components of InSight is the Mole, which hammers into the Martian regolith to facilitate these thermal property measurements. In this paper, we experimentally investigated the effect of the Mole's penetrating action on regolith compaction and mechanical properties. Quasi-static and dynamic experiments were run with a 2D model of the 3D cylindrical Mole. Force resistance data were captured with load cells. Deformation information was captured in images and analyzed using Digital Image Correlation (DIC). Additionally, we used existing approximations of Martian regolith thermal conductivity to estimate the change in the surrounding granular material's thermal conductivity due to the Mole's penetration. We found that the Mole has the potential to cause a high degree of densification, especially if the initial granular material is relatively loose. The effect of this densification on the thermal conductivity was found to be relatively small in first-order calculations, though more complete thermal models incorporating the densification should be a subject of further investigation. The results obtained provide an initial estimate of the Mole's impact on Martian regolith thermal properties.
Biomechanics Analysis of Combat Sport (Silat) By Using Motion Capture System
NASA Astrophysics Data System (ADS)
Zulhilmi Kaharuddin, Muhammad; Badriah Khairu Razak, Siti; Ikram Kushairi, Muhammad; Syawal Abd. Rahman, Mohamed; An, Wee Chang; Ngali, Z.; Siswanto, W. A.; Salleh, S. M.; Yusup, E. M.
2017-01-01
‘Silat’ is a Malay traditional martial art that is practiced at both amateur and professional levels. The intensity of the motion spurs scientific research in biomechanics. The main purpose of this study is to present the biomechanics method used in the study of ‘silat’. Using a 3D Depth Camera motion capture system, two subjects each perform ‘Jurus Satu’ in three repetitions, with one subject set as the benchmark for the research. The videos are captured and their data are processed by the 3D Depth Camera server system in the form of 16 3D body-joint coordinates, which are then transformed into displacement, velocity and acceleration components using Microsoft Excel for data calculation and Matlab software for simulation of the body. The translated data serve as input to differentiate the two subjects' execution of the ‘Jurus Satu’. Nine primary movements, with the addition of five secondary movements, are observed visually frame by frame from the simulation to find the exact frame in which each movement takes place. Further analysis differentiates the two subjects' execution by referring to the mean and standard deviation of the joints for each stated parameter. The findings provide useful data on joint kinematic parameters, help improve the execution of ‘Jurus Satu’, and exhibit the process of learning a relatively unknown movement through the use of a motion capture system.
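The displacement, velocity and acceleration transformation described above amounts to finite differences over the tracked joint trajectories. This sketch assumes a fixed frame rate and a (frames, joints, 3) coordinate array; the actual study performed the calculation in Microsoft Excel:

```python
import numpy as np

def joint_kinematics(positions, fps):
    """Per-frame displacement, velocity and acceleration of tracked
    joints from a (frames, joints, 3) motion-capture coordinate array."""
    dt = 1.0 / fps
    disp = np.diff(positions, axis=0)  # displacement between frames
    vel = disp / dt                    # first derivative, units/s
    acc = np.diff(vel, axis=0) / dt    # second derivative, units/s^2
    return disp, vel, acc
```

Comparing the benchmark subject's kinematics with a learner's then reduces to comparing these arrays joint by joint (e.g. via their means and standard deviations).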
Networks in Social Policy Problems
NASA Astrophysics Data System (ADS)
Vedres, Balázs; Scotti, Marco
2012-08-01
1. Introduction M. Scotti and B. Vedres; Part I. Information, Collaboration, Innovation: The Creative Power of Networks: 2. Dissemination of health information within social networks C. Dhanjal, S. Blanchemanche, S. Clemençon, A. Rona-Tas and F. Rossi; 3. Scientific teams and networks change the face of knowledge creation S. Wuchty, J. Spiro, B. F. Jones and B. Uzzi; 4. Structural folds: the innovative potential of overlapping groups B. Vedres and D. Stark; 5. Team formation and performance on nanoHub: a network selection challenge in scientific communities D. Margolin, K. Ognyanova, M. Huang, Y. Huang and N. Contractor; Part II. Influence, Capture, Corruption: Networks Perspectives on Policy Institutions: 6. Modes of coordination of collective action: what actors in policy making? M. Diani; 7. Why skewed distributions of pay for executives is the cause of much grief: puzzles and few answers so far B. Kogut and J.-S. Yang; 8. Networks of institutional capture: a case of business in the State apparatus E. Lazega and L. Mounier; 9. The social and institutional structure of corruption: some typical network configurations of corruption transactions in Hungary Z. Szántó, I. J. Tóth and S. Varga; Part III. Crisis, Extinction, World System Change: Network Dynamics on a Large Scale: 10. How creative elements help the recovery of networks after crisis: lessons from biology A. Mihalik, A. S. Kaposi, I. A. Kovács, T. Nánási, R. Palotai, Á. Rák, M. S. Szalay-Beko and P. Csermely; 11. Networks and globalization policies D. R. White; 12. Network science in ecology: the structure of ecological communities and the biodiversity question A. Bodini, S. Allesina and C. Bondavalli; 13. Supply security in the European natural gas pipeline network M. Scotti and B. Vedres; 14. Conclusions and outlook A.-L. Barabási; Index.
A prototype system for forecasting landslides in the Seattle, Washington, area
Chleborad, Alan F.; Baum, Rex L.; Godt, Jonathan W.; Powers, Philip S.
2008-01-01
Empirical rainfall thresholds and related information form the basis of a prototype system for forecasting landslides in the Seattle area. The forecasts are tied to four alert levels, and a decision tree guides the use of thresholds to determine the appropriate level. From analysis of historical landslide data, we developed a formula for a cumulative rainfall threshold (CT), P3 = 88.9 − 0.67P15, defined by rainfall amounts in millimeters during consecutive 3 d (72 h) periods, P3, and the 15 d (360 h) period before P3, P15. The CT captures more than 90% of historical events of three or more landslides in 1 d and 3 d periods recorded from 1978 to 2003. However, the low probability of landslide occurrence on a day when the CT is exceeded at one or more rain gauges (8.4%) justifies only a low level of alert for possible landslide occurrence, but it does trigger more vigilant monitoring of rainfall and soil wetness. Exceedance of a rainfall intensity-duration threshold, I = 82.73D^−1.13, for intensity, I (mm/hr), and duration, D (hr), corresponds to a higher probability of landslide occurrence (30%) and forms the basis for issuing warnings of impending, widespread occurrence of landslides. Information about the area of exceedance and soil wetness can be used to increase the certainty of landslide forecasts (probabilities as great as 71%). Automated analysis of real-time rainfall and subsurface water data and digital quantitative precipitation forecasts are needed to fully implement a warning system based on the two thresholds.
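Both thresholds can be evaluated directly from rain-gauge totals. A minimal sketch of the exceedance checks (function names are illustrative, not part of the prototype system; exceedance is taken as "greater than or equal to"):

```python
def ct_exceeded(p3_mm, p15_mm):
    """Cumulative rainfall threshold (CT): the 3-day total P3 (mm)
    versus 88.9 - 0.67 * P15, where P15 is the prior 15-day total (mm)."""
    return p3_mm >= 88.9 - 0.67 * p15_mm

def id_exceeded(intensity_mm_per_hr, duration_hr):
    """Intensity-duration threshold: I = 82.73 * D**-1.13,
    with intensity I in mm/hr and duration D in hours."""
    return intensity_mm_per_hr >= 82.73 * duration_hr ** -1.13
```

In the forecasting logic described above, a CT exceedance would raise only the low alert level, while an intensity-duration exceedance supports a warning.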
[Principles of the EOS™ X-ray machine and its use in daily orthopedic practice].
Illés, Tamás; Somoskeöy, Szabolcs
2012-02-26
The EOS™ X-ray machine, based on a Nobel Prize-winning invention in physics in the field of particle detection, is capable of simultaneously capturing biplanar X-ray images by slot scanning of the whole body in an upright, physiological load-bearing position, using ultra-low radiation doses. The simultaneous capture of spatially calibrated anteroposterior and lateral images allows three-dimensional (3D) surface reconstruction of the skeletal system by dedicated software. Parts of the skeletal system in X-ray images and 3D-reconstructed models appear in true 1:1 scale for size and volume; thus spinal and vertebral parameters, lower limb axis lengths and angles, as well as any relevant clinical parameters in orthopedic practice, can be measured and calculated very precisely. Visualization of 3D-reconstructed models in various views by the sterEOS 3D software enables the presentation of top-view images, through which the rotational conditions of the lower limbs, joints and spine deformities can be analyzed in the horizontal plane, providing revolutionary new possibilities in orthopedic surgery, especially spine surgery.
Test of 4-body Theory via Polarized p-T Capture Below 80 keV
NASA Astrophysics Data System (ADS)
Canon, R. S.; Gaff, S. J.; Kelley, J. H.; Schreiber, E. C.; Weller, H. R.; Wulf, E. A.; Prior, R. M.; Spraker, M.; Tilley, D. R.
1998-10-01
Our previous study of polarized p-d capture at energies below 80 keV revealed the major role played by MEC effects and provided a clean testing ground for state-of-the-art 3-body theory (the "A_y puzzle" remains) (G. Schmid et al., PRL 76, 3088 (1996); PRC 56, 2565 (1997)). Four-body theory is on the threshold (A. Fonseca, W. Glöckle, A. Kievsky, H. Witala; private communication) of being able to make similar ab-initio predictions. The p-T capture reaction is expected to exhibit strong MEC effects at very low energies for reasons similar to those in p-d capture. Preliminary results indicate finite values of A_y(90°) in the 50-80 keV region. These results will be discussed with respect to their implications on the M1 strength present in this reaction. Plans for future measurements and analysis will also be described.
FPGA Based Adaptive Rate and Manifold Pattern Projection for Structured Light 3D Camera System †
Lee, Sukhan
2018-01-01
The quality of the captured point cloud and the scanning speed of a structured light 3D camera system depend on its ability to handle object surfaces with a large reflectance variation, traded off against the required number of projected patterns. In this paper, we propose and implement a flexible embedded framework that is capable of triggering the camera one or multiple times for capturing one or multiple projections within a single camera exposure setting. This allows the 3D camera system to synchronize the camera and projector even for mismatched frame rates, such that the system can project different types of patterns for different scan-speed applications. As a result, the system captures a high-quality 3D point cloud even for surfaces with a large reflectance variation while achieving a high scan speed. The proposed framework is implemented on a Field Programmable Gate Array (FPGA), where the camera trigger is adaptively generated such that the position and the number of triggers are automatically determined according to camera exposure settings. In other words, the projection frequency adapts to different scanning applications without altering the architecture. In addition, the proposed framework is unique in that it does not require any external memory for storage, because pattern pixels are generated in real time, which minimizes the complexity and size of the application-specific integrated circuit (ASIC) design and implementation. PMID:29642506
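The trigger-planning arithmetic that the framework derives in hardware can be illustrated in software: given a camera exposure and a pattern period, how many full projections fit in one exposure and where the triggers fall. This is a simplified sketch under assumed timing semantics, not the FPGA implementation:

```python
def plan_triggers(exposure_ms, pattern_ms):
    """Number of full pattern projections that fit within one camera
    exposure, and the trigger offsets (ms from exposure start).
    Illustrative arithmetic only; the paper's FPGA computes this
    adaptively from the camera exposure settings."""
    n = int(exposure_ms // pattern_ms)
    offsets = [k * pattern_ms for k in range(n)]
    return n, offsets
```

For example, a 100 ms exposure with a 30 ms pattern period admits three projections, triggered at 0, 30 and 60 ms.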
3D Hall MHD-EPIC Simulations of Ganymede's Magnetosphere
NASA Astrophysics Data System (ADS)
Zhou, H.; Toth, G.; Jia, X.
2017-12-01
Fully kinetic modeling of a complete 3D magnetosphere is still computationally expensive and not feasible on current computers. While magnetohydrodynamic (MHD) models have been successfully applied to a wide range of plasma simulations, they cannot capture some important kinetic effects. We have recently developed a new modeling tool to embed the implicit particle-in-cell (PIC) model iPIC3D into the Block-Adaptive-Tree-Solarwind-Roe-Upwind-Scheme (BATS-R-US) magnetohydrodynamic model. This results in a kinetic model of the regions where kinetic effects are important. In addition to the MHD-EPIC modeling of the magnetosphere, the improved model presented here is now able to represent the moon as a resistive body. We use a stretched spherical grid with adaptive mesh refinement (AMR) to capture the resistive body and its boundary. A semi-implicit scheme is employed for solving the magnetic induction equation to allow time steps that are not limited by the resistivity. We have applied the model to Ganymede, the only moon in the solar system known to possess a strong intrinsic magnetic field, and included finite resistivity beneath the moon's surface to model the electrical properties of the interior in a self-consistent manner. The kinetic effects of electrons and ions on the dayside magnetopause and tail current sheet are captured with iPIC3D. Magnetic reconnections under different upstream background conditions of several Galileo flybys are simulated to study the global reconnection rate and the magnetospheric dynamics.
García-Jacas, C R; Marrero-Ponce, Y; Barigye, S J; Hernández-Ortega, T; Cabrera-Leyva, L; Fernández-Castillo, A
2016-12-01
Novel N-tuple topological/geometric cutoffs to consider specific inter-atomic relations in the QuBiLS-MIDAS framework are introduced in this manuscript. These molecular cutoffs permit relations between more than two atoms to be taken into account, by using (dis-)similarity multi-metrics and concepts related to topological and Euclidean-geometric distances. To this end, the kth two-, three- and four-tuple topological and geometric neighbourhood quotient (NQ) total (or local-fragment) spatial-(dis)similarity matrices are defined, to represent 3D information corresponding to the relations between two, three and four atoms of the molecular structures that satisfy certain cutoff criteria. First, an analysis of a diverse chemical space for the most common values of topological/Euclidean-geometric distances, bond/dihedral angles, triangle/quadrilateral perimeters, triangle area and volume was performed in order to determine the intervals to take into account in the cutoff procedures. A variability analysis based on Shannon's entropy reveals that better distribution patterns are attained with the descriptors based on the proposed cutoffs (QuBiLS-MIDAS NQ-MDs) than when all inter-atomic relations are considered (QuBiLS-MIDAS KA-MDs, 'Keep All'). A principal component analysis shows that the novel molecular cutoffs codify chemical information captured by the respective QuBiLS-MIDAS KA-MDs, as well as information not captured by the latter. Lastly, a QSAR study to obtain deeper knowledge of the contribution of the proposed methods was carried out, using four molecular datasets (steroids (STER), angiotensin converting enzyme (ACE), thermolysin inhibitors (THER) and thrombin inhibitors (THR)) widely used as benchmarks in the evaluation of several methodologies. One- to four-variable QSAR models based on multiple linear regression were developed for each compound dataset following the original division into training and test sets.
The results obtained reveal that the novel cutoff procedures yield superior performance relative to the QuBiLS-MIDAS KA-MDs in the prediction of the biological activities considered. From these results, it can be suggested that the proposed N-tuple topological/geometric cutoffs constitute relevant criteria for generating MDs codifying particular atomic relations, ultimately useful in enhancing the modelling capacity of the QuBiLS-MIDAS 3D-MDs.
Recent advances in the application of electron tomography to materials chemistry.
Leary, Rowan; Midgley, Paul A; Thomas, John Meurig
2012-10-16
Nowadays, tomography plays a central role in pure and applied science, in medicine, and in many branches of engineering and technology. It entails reconstructing the three-dimensional (3D) structure of an object from a tilt series of two-dimensional (2D) images. Its origin goes back to 1917, when Radon showed mathematically how a series of 2D projection images could be converted to the 3D structural one. Tomographic X-ray and positron scanning for 3D medical imaging, with a resolution of ∼1 mm, is now ubiquitous in major hospitals. Electron tomography, a relatively new chemical tool, with a resolution of ∼1 nm, has recently been adopted by materials chemists as an invaluable aid for the 3D study of the morphologies, spatially-discriminating chemical compositions, and defect properties of nanostructured materials. In this Account, we review the advances that have been made in facilitating the recording of the required series of 2D electron microscopic images and the subsequent process of 3D reconstruction of specimens that are vulnerable, to a greater or lesser degree, to electron beam damage. We describe how high-fidelity 3D tomograms may be obtained from relatively few 2D images by incorporating prior structural knowledge into the reconstruction process. In particular, we highlight the vital role of compressed sensing, a recently developed procedure well-known to information theorists that exploits ideas of image compression and "sparsity" (that the important image information can be captured in a reduced data set). We also touch upon another promising approach, "discrete" tomography, which builds into the reconstruction process a prior assumption that the object can be described in discrete terms, such as the number of constituent materials and their expected densities.
Other advances made recently that we outline, such as the availability of aberration-corrected electron microscopes, electron wavelength monochromators, and sophisticated specimen goniometers, have all contributed significantly to the further development of quantitative 3D studies of nanostructured materials, including nanoparticle-heterogeneous catalysts, fuel-cell components, and drug-delivery systems, as well as photovoltaic and plasmonic devices, and are likely to enhance our knowledge of many other facets of materials chemistry, such as organic-inorganic composites, solar-energy devices, bionanotechnology, biomineralization, and energy-storage systems composed of high-permittivity metal oxides.
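The compressed-sensing idea invoked above, that a sparse object can be recovered from far fewer measurements than unknowns, can be demonstrated with iterative soft-thresholding (ISTA), a standard sparse-recovery algorithm. This is a toy 1D sketch under a random sensing matrix, not the electron-tomography reconstruction pipeline itself:

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=1000):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1:
    a gradient step followed by shrinkage toward zero."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + A.T @ (y - A @ x) / L          # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

# 3-sparse signal of length 100 recovered from only 40 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 42, 77]] = [1.0, -2.0, 1.5]
x_hat = ista(A, A @ x_true)
```

The recovered `x_hat` concentrates on the three true support indices, illustrating why far fewer tilt images than pixels can suffice when the image is sparse in some basis.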
3D printing from cardiovascular CT: a practical guide and review
Birbara, Nicolette S.; Hussain, Tarique; Greil, Gerald; Foley, Thomas A.; Pather, Nalini
2017-01-01
Current cardiovascular imaging techniques allow anatomical relationships and pathological conditions to be captured in three dimensions. Three-dimensional (3D) printing, or rapid prototyping, has also become readily available and made it possible to transform virtual reconstructions into physical 3D models. This technology has been utilised to demonstrate cardiovascular anatomy and disease in clinical, research and educational settings. In particular, 3D models have been generated from cardiovascular computed tomography (CT) imaging data for purposes such as surgical planning and teaching. This review summarises applications, limitations and practical steps required to create a 3D printed model from cardiovascular CT. PMID:29255693
Zhu, Shuyan; Li, Hualin; Yang, Mengsu; Pang, Stella W
2018-05-31
Three-dimensional (3D) multilayered plasmonic structures consisting of Au submicrometric squares on top of SU-8 submicrometric pillars, Au asymmetrical submicrometric structures in the middle, and Au asymmetrical submicrometric holes at the bottom were fabricated through reversal nanoimprint technology. Compared with two-dimensional and quasi-3D plasmonic structures, the 3D multilayered plasmonic structures showed higher electromagnetic field intensity, longer plasmon decay length and larger plasmon sensing area, which are desirable for highly sensitive localized surface plasmon resonance biosensors. The sensitivity and resonance peak wavelength of the 3D multilayered plasmonic structures could be adjusted by varying the offset between the top and bottom SU-8 submicrometric pillars from 31% to 56%, and the highest sensitivities of 382 and 442 nm per refractive index unit were observed for resonance peaks at 581 and 805 nm, respectively. Live lung cancer A549 cells at a concentration as low as 5×10³ cells/ml and a sample volume as low as 2 µl could be detected by the 3D multilayered plasmonic structures integrated in a microfluidic system. The 3D plasmonic biosensors also had the advantage of detecting DNA hybridization by capturing the complementary target DNA in the low concentration range of 10⁻¹⁴ to 10⁻⁷ M, providing a large peak shift of 82 nm for capturing 10⁻⁷ M complementary target DNA without additional signal amplification.
A mobile trauma database with charge capture.
Moulton, Steve; Myung, Dan; Chary, Aron; Chen, Joshua; Agarwal, Suresh; Emhoff, Tim; Burke, Peter; Hirsch, Erwin
2005-11-01
Charge capture plays an important role in every surgical practice. We have developed and merged a custom mobile database (DB) system with our trauma registry (TRACS), to better understand our billing methods, revenue generators, and areas for improved revenue capture. The mobile database runs on handheld devices using the Windows Compact Edition platform. The front end was written in C# and the back end in SQL. The mobile database operates as a thick client; it includes active and inactive patient lists, billing screens, hot pick lists, and Current Procedural Terminology and International Classification of Diseases, Ninth Revision code sets. Microsoft Internet Information Server provides secure data transaction services between the back ends stored on each device. Traditional, handwritten billing information for three of five adult trauma surgeons was averaged over a 5-month period. Electronic billing information was then collected over a 3-month period using handheld devices and the subject software application. One surgeon used the software for all 3 months, and two surgeons used it for the latter 2 months of the electronic data collection period. This electronic billing information was combined with TRACS data to determine the clinical characteristics of the trauma patients who were and were not captured using the mobile database. Total charges increased by 135%, 148%, and 228% for each of the three trauma surgeons who used the mobile DB application. The majority of additional charges were for evaluation and management services. Patients who were captured and billed at the point of care using the mobile DB had higher Injury Severity Scores, were more likely to undergo an operative procedure, and had longer lengths of stay compared with those who were not captured. Total charges more than doubled using a mobile database to bill at the point of care.
A subsequent comparison of TRACS data with billing information revealed a large amount of uncaptured patient revenue. Greater familiarity and broader use of mobile database technology holds the potential for even greater revenue capture.
Challenges in Flying Quadrotor Unmanned Aerial Vehicle for 3d Indoor Reconstruction
NASA Astrophysics Data System (ADS)
Yan, J.; Grasso, N.; Zlatanova, S.; Braggaar, R. C.; Marx, D. B.
2017-09-01
Three-dimensional modelling plays a vital role in indoor 3D tracking, navigation, guidance and emergency evacuation. Reconstruction of indoor 3D models is still problematic, in part, because indoor spaces provide challenges less-documented than their outdoor counterparts. Challenges include obstacles curtailing image and point cloud capture, restricted accessibility and a wide array of indoor objects, each with unique semantics. Reconstruction of indoor environments can be achieved through a photogrammetric approach, e.g. by using image frames, aligned using recurring corresponding image points (CIP) to build coloured point clouds. Our experiments were conducted by flying a QUAV in three indoor environments and later reconstructing 3D models which were analysed under different conditions. Point clouds and meshes were created using Agisoft PhotoScan Professional. We concentrated on flight paths from two vantage points: 1) safety and security while flying indoors and 2) data collection needed for reconstruction of 3D models. We surmised that the main challenges in providing safe flight paths are related to the physical configuration of indoor environments, privacy issues, the presence of people and light conditions. We observed that the quality of recorded video used for 3D reconstruction has a high dependency on surface materials, wall textures and object types being reconstructed. Our results show that 3D indoor reconstruction predicated on video capture using a QUAV is indeed feasible, but close attention should be paid to flight paths and conditions ultimately influencing the quality of 3D models. Moreover, it should be decided in advance which objects need to be reconstructed, e.g. bare rooms or detailed furniture.
Lagudi, Antonio; Bianco, Gianfranco; Muzzupappa, Maurizio; Bruno, Fabio
2016-04-14
The integration of underwater 3D data captured by acoustic and optical systems is a promising technique in various applications such as mapping or vehicle navigation. It allows for compensating the drawbacks of the low resolution of acoustic sensors and the limitations of optical sensors in bad visibility conditions. Aligning these data is a challenging problem, as it is hard to make a point-to-point correspondence. This paper presents a multi-sensor registration for the automatic integration of 3D data acquired from a stereovision system and a 3D acoustic camera in close-range acquisition. An appropriate rig has been used in the laboratory tests to determine the relative position between the two sensor frames. The experimental results show that our alignment approach, based on the acquisition of a rig in several poses, can be adopted to estimate the rigid transformation between the two heterogeneous sensors. A first estimation of the unknown geometric transformation is obtained by a registration of the two 3D point clouds, but it ends up to be strongly affected by noise and data dispersion. A robust and optimal estimation is obtained by a statistical processing of the transformations computed for each pose. The effectiveness of the method has been demonstrated in this first experimentation of the proposed 3D opto-acoustic camera.
Taking Advantage of Selective Change Driven Processing for 3D Scanning
Vegara, Francisco; Zuccarello, Pedro; Boluda, Jose A.; Pardo, Fernando
2013-01-01
This article deals with the application of the principles of SCD (Selective Change Driven) vision to 3D laser scanning. Two experimental sets have been implemented: one with a classical CMOS (Complementary Metal-Oxide Semiconductor) sensor, and the other one with a recently developed CMOS SCD sensor for comparative purposes, both using the technique known as Active Triangulation. An SCD sensor only delivers the pixels that have changed most, ordered by the magnitude of their change since their last readout. The 3D scanning method is based on the systematic search through the entire image to detect pixels that exceed a certain threshold, showing the SCD approach to be ideal for this application. Several experiments for both capturing strategies have been performed to try to find the limitations in high speed acquisition/processing. The classical approach is limited by the sequential array acquisition, as predicted by the Nyquist–Shannon sampling theorem, and this has been experimentally demonstrated in the case of a rotating helix. These limitations are overcome by the SCD 3D scanning prototype achieving a significantly higher performance. The aim of this article is to compare both capturing strategies in terms of performance in the time and frequency domains, so they share all the static characteristics including resolution, 3D scanning method, etc., thus yielding the same 3D reconstruction in static scenes. PMID:24084110
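The SCD readout principle, delivering only the pixels that changed most since the last readout, ordered by magnitude of change, can be emulated on full frames in software. A minimal sketch (an emulation for illustration; the actual SCD sensor performs this selection on-chip):

```python
import numpy as np

def scd_readout(prev, curr, k):
    """Emulate a Selective Change Driven readout: return the k pixels
    with the largest change since the previous readout, ordered by
    decreasing magnitude, as (row, col, delta) tuples."""
    delta = curr.astype(np.int32) - prev.astype(np.int32)
    order = np.argsort(np.abs(delta), axis=None)[::-1][:k]  # largest first
    rows, cols = np.unravel_index(order, delta.shape)
    return [(r, c, delta[r, c]) for r, c in zip(rows, cols)]
```

In the laser-scanning application above, only a thin stripe of pixels changes per scan step, so a small `k` suffices and the host avoids reading the full array, which is the source of the SCD speed advantage.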
Lin, Da; Hong, Ping; Zhang, Siheng; Xu, Weize; Jamal, Muhammad; Yan, Keji; Lei, Yingying; Li, Liang; Ruan, Yijun; Fu, Zhen F; Li, Guoliang; Cao, Gang
2018-05-01
Chromosome conformation capture (3C) technologies can be used to investigate 3D genomic structures. However, high background noise, high costs, and a lack of straightforward noise evaluation in current methods impede the advancement of 3D genomic research. Here we developed a simple digestion-ligation-only Hi-C (DLO Hi-C) technology to explore the 3D landscape of the genome. This method requires only two rounds of digestion and ligation, without the need for biotin labeling and pulldown. Non-ligated DNA was efficiently removed in a cost-effective step by purifying specific linker-ligated DNA fragments. Notably, random ligation could be quickly evaluated in an early quality-control step before sequencing. Moreover, an in situ version of DLO Hi-C using a four-cutter restriction enzyme has been developed. We applied DLO Hi-C to delineate the genomic architecture of THP-1 and K562 cells and uncovered chromosomal translocations. This technology may facilitate investigation of genomic organization, gene regulation, and (meta)genome assembly.
D and D Knowledge Management Information Tool - 2012 - 12106
DOE Office of Scientific and Technical Information (OSTI.GOV)
Upadhyay, H.; Lagos, L.; Quintero, W.
2012-07-01
Deactivation and decommissioning (D and D) work is a high priority activity across the Department of Energy (DOE) complex. Subject matter specialists (SMS) associated with the different ALARA (As-Low-As-Reasonably-Achievable) Centers, DOE sites, Energy Facility Contractors Group (EFCOG) and the D and D community have gained extensive knowledge and experience over the years in the cleanup of the legacy waste from the Manhattan Project. To prevent the D and D knowledge and expertise from being lost over time from the evolving and aging workforce, DOE and the Applied Research Center (ARC) at Florida International University (FIU) proposed to capture and maintain this valuable information in a universally available and easily usable system. D and D KM-IT provides single point access to all D and D related activities through its knowledge base. It is a community driven system. D and D KM-IT makes D and D knowledge available to the people who need it at the time they need it and in a readily usable format. It uses the World Wide Web as the primary source for content in addition to information collected from subject matter specialists and the D and D community. It brings information in real time through web based custom search processes and its dynamic knowledge repository. Future developments include developing a document library, providing D and D information access on mobile devices for the Technology module and Hotline, and coordinating multiple subject matter specialists to support the Hotline. The goal is to deploy a sophisticated and secure high-end system to serve as a single large knowledge base for all D and D activities. The system consolidates a large amount of information available on the web and presents it to users in the simplest way possible. (authors)
Estimating Aircraft Heading Based on Laserscanner Derived Point Clouds
NASA Astrophysics Data System (ADS)
Koppanyi, Z.; Toth, C., K.
2015-03-01
Using LiDAR sensors for tracking and monitoring an operating aircraft is a new application. In this paper, we present data processing methods to estimate the heading of a taxiing aircraft using laser point clouds. During the data acquisition, a Velodyne HDL-32E laser scanner tracked a moving Cessna 172 airplane. The point clouds captured at different times were used for heading estimation. After addressing the problem and specifying the equation of motion to reconstruct the aircraft point cloud from the consecutive scans, three methods are investigated here. The first requires a reference model to estimate the relative angle from the captured data by fitting different cross-sections (horizontal profiles). In the second approach, the iterative closest point (ICP) method is used between the consecutive point clouds to determine the horizontal translation of the captured aircraft body. Regarding the ICP, three different versions were compared, namely, the ordinary 3D, 3-DoF 3D and 2-DoF 3D ICP. It was found that 2-DoF 3D ICP provides the best performance. Finally, the last algorithm searches for the unknown heading and velocity parameters by minimizing the volume of the reconstructed airplane. The three methods were compared using three test datasets which are distinguished by object-sensor distance, heading and velocity. We found that the ICP algorithm fails at long distances and when the aircraft motion direction is perpendicular to the scan plane, but the first and the third methods give robust and accurate results at a 40 m object distance and at ~12 knots for a small Cessna airplane.
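The 2-DoF 3D ICP variant restricts the estimated motion to a horizontal (x, y) translation between consecutive point clouds, from which the heading follows directly as the direction of that translation. A minimal brute-force sketch of this idea (illustrative only, not the authors' implementation; correspondences are found by exhaustive nearest-neighbour search, so it suits small clouds):

```python
import numpy as np

def icp_2dof_translation(src, dst, n_iter=20):
    """Estimate the horizontal (x, y) translation aligning src to dst
    by iterating nearest-neighbour matching and mean-offset updates.
    src, dst: (N, 3) and (M, 3) point arrays."""
    t = np.zeros(2)
    for _ in range(n_iter):
        moved = src.copy()
        moved[:, :2] += t
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = dst[d2.argmin(axis=1)]                 # nearest neighbour of each point
        t += (nn[:, :2] - moved[:, :2]).mean(axis=0)
    return t

def heading_deg(t_xy):
    """Heading as the direction of the horizontal translation."""
    return np.degrees(np.arctan2(t_xy[1], t_xy[0]))
```

As the abstract notes, such an approach degrades when consecutive clouds overlap poorly (long range, motion perpendicular to the scan plane), since the nearest-neighbour correspondences become unreliable.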
Infinite family of three-dimensional Floquet topological paramagnets
NASA Astrophysics Data System (ADS)
Potter, Andrew C.; Vishwanath, Ashvin; Fidkowski, Lukasz
2018-06-01
We uncover an infinite family of time-reversal symmetric 3d interacting topological insulators of bosons or spins, in time-periodically driven systems, which we term Floquet topological paramagnets (FTPMs). These FTPM phases exhibit intrinsically dynamical properties that could not occur in thermal equilibrium and are governed by an infinite set of Z2-valued topological invariants, one for each prime number. The topological invariants are physically characterized by surface magnetic domain walls that act as unidirectional quantum channels, transferring quantized packets of information during each driving period. We construct exactly solvable models realizing each of these phases, and discuss the anomalous dynamics of their topologically protected surface states. Unlike previously encountered examples of Floquet SPT phases, these 3d FTPMs are not captured by group cohomology methods and cannot be obtained from equilibrium classifications simply by treating the discrete time translation as an ordinary symmetry. The simplest such FTPM phase can feature anomalous Z2 (toric code) surface topological order, in which the gauge electric and magnetic excitations are exchanged in each Floquet period, which cannot occur in a pure 2d system without breaking time-reversal symmetry.
Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera
Qu, Yufu; Huang, Jianyu; Zhang, Xuan
2018-01-01
In order to reconstruct three-dimensional (3D) structures from an image sequence captured by an unmanned aerial vehicle's (UAV) camera and improve the processing speed, we propose a rapid 3D reconstruction method that is based on an image queue, considering the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points by using the principal component analysis method. In order to select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images into the queue, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud can be obtained using the depth-map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in the calculation speed will become more noticeable. PMID:29342908
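The PCA compression step can be illustrated in 2D. This is one plausible reading of "three principal component points" (the centroid plus one point on either side of it along the dominant principal axis), labeled here as an assumption rather than the authors' exact construction:

```python
import math

def three_principal_points(pts):
    """Compress a set of 2D feature points into three representative
    points: the centroid, plus one point one standard deviation either
    side of it along the dominant principal axis. A hypothetical
    reading of the paper's PCA compression, for illustration only.
    """
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    # 2x2 covariance matrix entries
    sxx = sum((p[0] - mx) ** 2 for p in pts) / n
    syy = sum((p[1] - my) ** 2 for p in pts) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pts) / n
    # Dominant eigenvalue of [[sxx, sxy], [sxy, syy]] in closed form
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam = tr / 2 + math.sqrt(max(tr * tr / 4 - det, 0.0))
    # Corresponding eigenvector (lam - syy, sxy); axis-aligned fallback
    if abs(sxy) > 1e-12:
        vx, vy = lam - syy, sxy
    else:
        vx, vy = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    norm = math.hypot(vx, vy)
    vx, vy = vx / norm, vy / norm
    s = math.sqrt(lam)  # one standard deviation along the axis
    return [(mx, my), (mx + s * vx, my + s * vy), (mx - s * vx, my - s * vy)]
```

Comparing these triplets between frames is then far cheaper than matching full feature-point sets when deciding which images to admit to the queue.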
Forensic print extraction using 3D technology and its processing
NASA Astrophysics Data System (ADS)
Rajeev, Srijith; Shreyas, Kamath K. M.; Panetta, Karen; Agaian, Sos S.
2017-05-01
Biometric evidence plays a crucial role in crime scene analysis. Forensic prints can be extracted from any solid surface such as firearms, doorknobs, carpets and mugs. Prints such as fingerprints, palm prints, footprints and lip-prints can be classified into patent, latent, and three-dimensional plastic prints. Traditionally, law enforcement officers capture these forensic traits using an electronic device or extract them manually, and save the data electronically using special scanners. The reliability and accuracy of the method depends on the ability of the officer or the electronic device to extract and analyze the data. Furthermore, the 2-D acquisition and processing system is laborious and cumbersome. This can lead to increases in false positive and false negative rates in print matching. In this paper, a method and system to extract forensic prints from any surface, irrespective of its shape, is presented. First, a suitable 3-D camera is used to capture images of the forensic print, and then the 3-D image is processed and unwrapped to obtain 2-D equivalent biometric prints. Computer simulations demonstrate the effectiveness of using 3-D technology for biometric matching of fingerprints, palm prints, and lip-prints. This system can be further extended to other biometric and non-biometric modalities.
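The unwrapping step can be illustrated for the simplest curved surface, a cylinder (e.g. a print wrapped around a mug). The mapping (x, y, z) -> (r·theta, z) below is an assumed toy case for illustration; the paper's system handles arbitrary surface shapes:

```python
import math

def unwrap_cylinder(points_3d):
    """Unwrap 3D surface points lying near a cylinder (axis = z) into a
    flat 2D print via (x, y, z) -> (r * theta, z). The radius r is
    estimated as the mean distance of the points from the axis. A toy
    illustration of mapping a curved capture surface to a 2D
    equivalent print.
    """
    r = sum(math.hypot(x, y) for x, y, _ in points_3d) / len(points_3d)
    return [(r * math.atan2(y, x), z) for x, y, z in points_3d]
```

On the flattened coordinates, standard 2D minutiae matching can then be applied directly.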
Estimating DoD Transportation Spending: Analyses of Contract and Payment Transactions
2007-01-01
the Defense Logistics Agency (DLA) and of expenditures by shipment material and volume (or cube). The analysis of DLA expenditures appears in Appendix...Defense Logistics Agency (DLA) not completely captured in DD350 data. Limited implementation outside the United States confines inferences that can be...Defense Information Systems Agency DLA Defense Logistics Agency DoD U.S. Department of Defense DoDAAC U.S. Department of Defense Activity Address Code
Non-invasive analysis of root-soil interaction using three complementary imaging approaches
NASA Astrophysics Data System (ADS)
Haber-Pohlmeier, Sabina; Tötzke, Christian; Pohlmeier, Andreas; Rudolph-Mohr, Nicole; Kardjilov, Nikolay; Lehmann, Eberhard; Oswald, Sascha E.
2016-04-01
Plant roots are known to modify physical, chemical and biological properties of the rhizosphere, thereby altering conditions for water and nutrient uptake. We aim to capture the dynamic processes occurring at the soil-root interface in situ. A combination of neutron imaging (NI), magnetic resonance imaging (MRI) and micro-focus X-ray tomography (CT) is applied to monitor the rhizosphere of young plants grown in sandy soil in cylindrical containers (diameter 3 cm). A novel transportable low-field MRI system is operated directly at the neutron facility, allowing for combined measurements of the very same sample capturing the same hydro-physiological state. The combination of NI, MRI and CT provides three-dimensional access to the root system with respect to the structure and hydraulics of the rhizosphere and the transport of dissolved marker substances. The high spatial resolution of neutron imaging and its sensitivity to water can be exploited for the 3D analysis of the root morphology and detailed mapping of the three-dimensional water content at the root-soil interface and the surrounding soil. MRI has the potential to yield complementary information about the mobility of water, which can be bound in small pores or in the polymeric network of root exudates (mucilage layer). We inject combined tracers (GdDTPA or D2O) to study water fluxes through soil, rhizosphere and roots. Additional CT measurements reveal mechanical impacts of roots on the local microstructure of soil, e.g. showing soil compaction or the formation of cracks. We co-register the NI, MRI and CT data to integrate the complementary information into an aligned 3D data set. This allows, e.g., for co-localization of compacted soil regions or cracks with the specific local soil hydraulics, which is needed to distinguish the contribution of root exudation from mechanical impacts when interpreting altered hydraulic properties of the rhizosphere.
Differences between rhizosphere and bulk soil can be detected and interpreted in terms of root growth, root exudation, and root water uptake. Thus, we demonstrate that such a multi-imaging approach can be used as a powerful tool contributing to a more comprehensive picture of the rhizosphere.
NASA Astrophysics Data System (ADS)
Calderon, Christopher P.; Weiss, Lucien E.; Moerner, W. E.
2014-05-01
Experimental advances have improved the two- (2D) and three-dimensional (3D) spatial resolution that can be extracted from in vivo single-molecule measurements. This enables researchers to quantitatively infer the magnitude and directionality of forces experienced by biomolecules in their native environment. Situations where such force information is relevant range from mitosis to directed transport of protein cargo along cytoskeletal structures. Models commonly applied to quantify single-molecule dynamics assume that effective forces and velocity in the x, y (or x, y, z) directions are statistically independent, but this assumption is physically unrealistic in many situations. We present a hypothesis testing approach capable of determining if there is evidence of statistical dependence between positional coordinates in experimentally measured trajectories; if the hypothesis of independence between spatial coordinates is rejected, then a new model accounting for 2D (3D) interactions can and should be considered. Our hypothesis testing technique is robust, meaning it can detect interactions, even if the noise statistics are not well captured by the model. The approach is demonstrated on control simulations and on experimental data (directed transport of intraflagellar transport protein 88 homolog in the primary cilium).
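A generic version of such an independence test can be sketched as a Pearson correlation on the displacement increments with a Fisher z-transform. This stand-in is not the authors' likelihood-based procedure, but it captures the decision logic: reject independence, then move to a model with coordinate interactions:

```python
import math

def independence_test(track, z_crit=1.96):
    """Test whether the x and y displacement increments of a 2D
    trajectory are correlated, via the Fisher z-transform of their
    Pearson correlation (a generic stand-in for the paper's test).
    Returns (r, reject): reject=True suggests a model with 2D
    interactions should be considered.
    """
    steps = list(zip(track, track[1:]))
    dx = [x1 - x0 for (x0, _), (x1, _) in steps]
    dy = [y1 - y0 for (_, y0), (_, y1) in steps]
    n = len(dx)
    mx, my = sum(dx) / n, sum(dy) / n
    sx = math.sqrt(sum((d - mx) ** 2 for d in dx))
    sy = math.sqrt(sum((d - my) ** 2 for d in dy))
    r = sum((a - mx) * (b - my) for a, b in zip(dx, dy)) / (sx * sy)
    # Fisher z; |z| > z_crit rejects independence at ~5% (two-sided)
    z = 0.5 * math.log((1 + r) / (1 - r)) * math.sqrt(n - 3)
    return r, abs(z) > z_crit
```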
Evaluation of endoscopic entire 3D image acquisition of the digestive tract using a stereo endoscope
NASA Astrophysics Data System (ADS)
Yoshimoto, Kayo; Watabe, Kenji; Fujinaga, Tetsuji; Iijima, Hideki; Tsujii, Masahiko; Takahashi, Hideya; Takehara, Tetsuo; Yamada, Kenji
2017-02-01
Because the view angle of the endoscope is narrow, it is difficult to get the whole image of the digestive tract at once. If there are more than two lesions in the digestive tract, it is hard to understand the 3D positional relationship among the lesions. Virtual endoscopy using CT is the present standard method to get the whole view of the digestive tract. Because virtual endoscopy is designed to detect irregularity of the surface, it cannot detect lesions that lack irregularity, including early cancer. In this study, we propose a method of endoscopic entire 3D image acquisition of the digestive tract using a stereo endoscope. The method is as follows: 1) capture sequential images of the digestive tract by moving the endoscope, 2) reconstruct the 3D surface pattern for each frame from the stereo images, 3) estimate the position of the endoscope by image analysis, 4) reconstitute the entire image of the digestive tract by combining the 3D surface patterns. To confirm the validity of this method, we experimented with a straight tube inside which circles were placed at equal distances of 20 mm. We captured sequential images, and the reconstituted image of the tube showed that the distance between each circle was 20.2 ± 0.3 mm (n=7). The results suggest that this method of endoscopic entire 3D image acquisition may help us understand the 3D positional relationship among lesions, such as early esophageal cancer, that cannot be detected by virtual endoscopy using CT.
Ma, Wenxiu; Ay, Ferhat; Lee, Choli; Gulsoy, Gunhan; Deng, Xinxian; Cook, Savannah; Hesson, Jennifer; Cavanaugh, Christopher; Ware, Carol B; Krumm, Anton; Shendure, Jay; Blau, C Anthony; Disteche, Christine M; Noble, William S; Duan, ZhiJun
2018-06-01
The folding and three-dimensional (3D) organization of chromatin in the nucleus critically impacts genome function. The past decade has witnessed rapid advances in genomic tools for delineating 3D genome architecture. Among them, chromosome conformation capture (3C)-based methods such as Hi-C are the most widely used techniques for mapping chromatin interactions. However, traditional Hi-C protocols rely on restriction enzymes (REs) to fragment chromatin and are therefore limited in resolution. We recently developed DNase Hi-C for mapping 3D genome organization, which uses DNase I for chromatin fragmentation. DNase Hi-C overcomes RE-related limitations associated with traditional Hi-C methods, leading to improved methodological resolution. Furthermore, combining this method with DNA capture technology provides a high-throughput approach (targeted DNase Hi-C) that allows for mapping fine-scale chromatin architecture at exceptionally high resolution. Hence, targeted DNase Hi-C will be valuable for delineating the physical landscapes of cis-regulatory networks that control gene expression and for characterizing phenotype-associated chromatin 3D signatures. Here, we provide a detailed description of method design and step-by-step working protocols for these two methods. Copyright © 2018 Elsevier Inc. All rights reserved.
Ibarra Zannatha, Juan Manuel; Tamayo, Alejandro Justo Malo; Sánchez, Angel David Gómez; Delgado, Jorge Enrique Lavín; Cheu, Luis Eduardo Rodríguez; Arévalo, Wilson Alexander Sierra
2013-11-01
This paper presents a stroke rehabilitation (SR) system for the upper limbs, developed as an interactive virtual environment (IVE) based on a commercial 3D vision system (a Microsoft Kinect), a humanoid robot (an Aldebaran Nao), and devices producing ergonometric signals. In one environment, the rehabilitation routines, developed by specialists, are presented to the patient simultaneously by the humanoid and an avatar inside the IVE. The patient follows the rehabilitation task, while his avatar copies his gestures that are captured by the Kinect 3D vision system. The information of the patient movements, together with the signals obtained from the ergonometric measurement devices, is used also to supervise and to evaluate the rehabilitation progress. The IVE can also present an RGB image of the patient. In another environment, that uses the same base elements, four game routines (Touch the balls 1 and 2, Simon says, and Follow the point) are used for rehabilitation. These environments are designed to create a positive influence in the rehabilitation process, reduce costs, and engage the patient. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Fostering Persistence: 3D Printing and the Unforeseen Impact on Equity
ERIC Educational Resources Information Center
FitzPatrick, Daniel L.; Dominguez, Victoria S.
2017-01-01
Teaching persistence and problem solving must begin with selecting a problem that can be solved mathematically, that allows for multiple methods of solving, and that generally captures the attention and curiosity of the student (Marcus and Fey 2003; NCTM 1991; Van de Walle 2003). This article shows how a STEM three-dimensional (3D) printing…
3D Microstructures for Materials and Damage Models
Livescu, Veronica; Bronkhorst, Curt Allan; Vander Wiel, Scott Alan
2017-02-01
Many challenges exist with regard to understanding and representing complex physical processes involved with ductile damage and failure in polycrystalline metallic materials. Currently, the ability to accurately predict the macroscale ductile damage and failure response of metallic materials is lacking. Research at Los Alamos National Laboratory (LANL) is aimed at building a coupled experimental and computational methodology that supports the development of predictive damage capabilities by: capturing real distributions of microstructural features from real material and implementing them as digitally generated microstructures in damage model development; and distilling structure-property information to link microstructural details to damage evolution under a multitude of loading states.
Compressive self-interference Fresnel digital holography with faithful reconstruction
NASA Astrophysics Data System (ADS)
Wan, Yuhong; Man, Tianlong; Han, Ying; Zhou, Hongqiang; Wang, Dayong
2017-05-01
We developed a compressive self-interference digital holographic approach that allows retrieving three-dimensional information of spatially incoherent objects from a single-shot captured hologram. Fresnel incoherent correlation holography is combined with a parallel phase-shifting technique to instantaneously obtain spatially multiplexed phase-shifting holograms. The recording scheme is regarded as a compressive forward sensing model; thus, a compressive-sensing-based reconstruction algorithm is implemented to reconstruct the original object from the undersampled demultiplexed sub-holograms. The concept was verified by simulations and experiments with a simulated polarizer array. The proposed technique has great potential to be applied in 3D tracking of spatially incoherent samples.
Simultaneous tumor and surrogate motion tracking with dynamic MRI for radiation therapy planning
NASA Astrophysics Data System (ADS)
Park, Seyoun; Farah, Rana; Shea, Steven M.; Tryggestad, Erik; Hales, Russell; Lee, Junghoon
2018-01-01
Respiration-induced tumor motion is a major obstacle for achieving high-precision radiotherapy of cancers in the thoracic and abdominal regions. Surrogate-based estimation and tracking methods are commonly used in radiotherapy, but with limited understanding of quantified correlation to tumor motion. In this study, we propose a method to simultaneously track the lung tumor and external surrogates to evaluate their spatial correlation in a quantitative way using dynamic MRI, which allows real-time acquisition without ionizing radiation exposure. To capture the lung and whole tumor, four MRI-compatible fiducials are placed on the patient’s chest and upper abdomen. Two different types of acquisitions are performed in the sagittal orientation including multi-slice 2D cine MRIs to reconstruct 4D-MRI and two-slice 2D cine MRIs to simultaneously track the tumor and fiducials. A phase-binned 4D-MRI is first reconstructed from multi-slice MR images using body area as a respiratory surrogate and groupwise registration. The 4D-MRI provides 3D template volumes for different breathing phases. 3D tumor position is calculated by 3D-2D template matching in which 3D tumor templates in the 4D-MRI reconstruction and the 2D cine MRIs from the two-slice tracking dataset are registered. 3D trajectories of the external surrogates are derived via matching a 3D geometrical model of the fiducials to their segmentations on the 2D cine MRIs. We tested our method on ten lung cancer patients. Using a correlation analysis, the 3D tumor trajectory demonstrates a noticeable phase mismatch and significant cycle-to-cycle motion variation, while the external surrogate was not sensitive enough to capture such variations. Additionally, there was significant phase mismatch between surrogate signals obtained from the fiducials at different locations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yang; Li, Libo; Yang, Jiangfeng
Three metal–organic frameworks (MOFs), [Cu(INA)₂], [Cu(INA)₂I₂] and [Cu(INA)₂(H₂O)₂(NH₃)₂], were synthesized with 3D, 2D, and 0D structures, respectively. Reversible flexible structural changes of these MOFs were reported. Through high-temperature (60–100 °C) stimulation with I₂ or ambient-temperature stimulation with NH₃, [Cu(INA)₂] (3D) converted to [Cu(INA)₂I₂] (2D) and [Cu(INA)₂(H₂O)₂(NH₃)₂] (0D); as the temperature increased to 150 °C, the MOFs changed back to their original form. In this way, this 3D MOF has potential application in the capture of I₂ and NH₃ from polluted water and air. XRD, TGA, SEM, NH₃-TPD, and gas adsorption measurements were used to characterize the changes in structure, morphology, and properties. Graphical abstract: Through I₂ and NH₃ molecules and thermal stimulation, the three MOFs can achieve reversible flexible structural changes. Different methods were used to prove the reversible flexible changes. Highlights: • [Cu(INA)₂] can flexibly transform to [Cu(INA)₂I₂] and [Cu(INA)₂(H₂O)₂(NH₃)₂] by adsorbing I₂ or NH₃. • The reversible flexible transformation is related to material source, temperature and concentration. • Potential applications for the capture of I₂ and NH₃ from polluted water or air.
NREL Research Earns Two Prestigious R&D 100 Awards | News | NREL
The three-layered SJ3 cell captures different light frequencies; winners of the R&D 100 Awards are selected by an independent panel.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alderliesten, Tanja; Sonke, Jan-Jakob; Betgen, Anja
2013-02-01
Purpose: To investigate the applicability of 3-dimensional (3D) surface imaging for image guidance in deep-inspiration breath-hold radiation therapy (DIBH-RT) for patients with left-sided breast cancer. For this purpose, setup data based on captured 3D surfaces was compared with setup data based on cone beam computed tomography (CBCT). Methods and Materials: Twenty patients treated with DIBH-RT after breast-conserving surgery (BCS) were included. Before the start of treatment, each patient underwent a breath-hold CT scan for planning purposes. During treatment, dose delivery was preceded by setup verification using CBCT of the left breast. 3D surfaces were captured by a surface imaging system concurrently with the CBCT scan. Retrospectively, surface registrations were performed for CBCT to CT and for a captured 3D surface to CT. The resulting setup errors were compared with linear regression analysis. For the differences between setup errors, group mean, systematic error, random error, and 95% limits of agreement were calculated. Furthermore, receiver operating characteristic (ROC) analysis was performed. Results: Good correlation between setup errors was found: R² = 0.70, 0.90, and 0.82 in the left-right, craniocaudal, and anterior-posterior directions, respectively. Systematic errors were ≤0.17 cm in all directions. Random errors were ≤0.15 cm. The limits of agreement were −0.34 to 0.48, −0.42 to 0.39, and −0.52 to 0.23 cm in the left-right, craniocaudal, and anterior-posterior directions, respectively. ROC analysis showed that a threshold between 0.4 and 0.8 cm corresponds to promising true positive rates (0.78-0.95) and false positive rates (0.12-0.28). Conclusions: The results support the application of 3D surface imaging for image guidance in DIBH-RT after BCS.
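The agreement statistics reported above can be computed as in this minimal sketch (group mean, SD, and Bland-Altman 95% limits of agreement, mean ± 1.96 SD); the per-patient split into systematic and random error used in the paper is omitted here:

```python
import math

def setup_error_agreement(diffs):
    """Summarize per-fraction differences between two setup-verification
    modalities (e.g. 3D surface vs CBCT registration) along one axis:
    returns (group mean, sample SD, 95% limits of agreement). A minimal
    sketch of the Bland-Altman analysis, not the paper's full
    systematic/random error decomposition.
    """
    n = len(diffs)
    m = sum(diffs) / n
    sd = math.sqrt(sum((d - m) ** 2 for d in diffs) / (n - 1))
    return m, sd, (m - 1.96 * sd, m + 1.96 * sd)
```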
NASA Astrophysics Data System (ADS)
Høyer, Anne-Sophie; Vignoli, Giulio; Mejer Hansen, Thomas; Thanh Vu, Le; Keefer, Donald A.; Jørgensen, Flemming
2017-12-01
Most studies on the application of geostatistical simulations based on multiple-point statistics (MPS) to hydrogeological modelling focus on relatively fine-scale models and concentrate on the estimation of facies-level structural uncertainty. Much less attention is paid to the use of input data and optimal construction of training images. For instance, even though the training image should capture a set of spatial geological characteristics to guide the simulations, the majority of the research still relies on 2-D or quasi-3-D training images. In the present study, we demonstrate a novel strategy for 3-D MPS modelling characterized by (i) realistic 3-D training images and (ii) an effective workflow for incorporating a diverse group of geological and geophysical data sets. The study covers an area of 2810 km² in the southern part of Denmark. MPS simulations are performed on a subset of the geological succession (the lower to middle Miocene sediments) which is characterized by relatively uniform structures and dominated by sand and clay. The simulated domain is large and each of the geostatistical realizations contains approximately 45 million voxels with size 100 m × 100 m × 5 m. Data used for the modelling include water well logs, high-resolution seismic data, and a previously published 3-D geological model. We apply a series of different strategies for the simulations based on data quality, and develop a novel method to effectively create observed spatial trends. The training image is constructed as a relatively small 3-D voxel model covering an area of 90 km². We use an iterative training image development strategy and find that even slight modifications in the training image create significant changes in simulations. Thus, this study shows how to include both the geological environment and the type and quality of input information in order to achieve optimal results from MPS modelling.
We present a practical workflow to build the training image and effectively handle different types of input information to perform large-scale geostatistical modelling.
Wu, Hang; Mao, Yongrong; Chen, Meng; Pan, Hui; Huang, Xunduan; Ren, Min; Wu, Hao; Li, Jiali; Xu, Zhongdong; Yuan, Hualing; Geng, Ming; Weaver, David T; Zhang, Lixin; Zhang, Buchang
2015-03-01
BldD (SACE_2077), a key developmental regulator in actinomycetes, is the first identified transcriptional factor in Saccharopolyspora erythraea positively regulating erythromycin production and morphological differentiation. Although the BldD of S. erythraea binds to the promoters of erythromycin biosynthetic genes, the interaction affinities are relatively low, implying the existence of its other target genes in S. erythraea. Through the genomic systematic evolution of ligands by exponential enrichment (SELEX) method that we herein improved, four DNA sequences of S. erythraea A226, corresponding to the promoter regions of SACE_0306 (beta-galactosidase), SACE_0811 (50S ribosomal protein L25), SACE_3410 (fumarylacetoacetate hydrolase), and SACE_6014 (aldehyde dehydrogenase), were captured with all three BldD concentrations of 0.5, 1, and 2 μM, while the previously identified intergenic regions of eryBIV-eryAI and ermE-eryCI plus the promoter region of SACE_7115, the amfC homolog for aerial mycelium formation, could be captured only when the BldD's concentration reached 2 μM. Electrophoretic mobility shift assay (EMSA) analysis indicated that BldD specifically bound to above seven DNA sequences, and quantitative real-time PCR (qRT-PCR) assay showed that the transcriptional levels of the abovementioned target genes decreased when bldD was disrupted in A226. Furthermore, SACE_7115 and SACE_0306 in A226 were individually inactivated, showing that SACE_7115 was predominantly involved in aerial mycelium formation, while SACE_0306 mainly controlled erythromycin production. This study provides valuable information for better understanding of the pleiotropic regulator BldD in S. erythraea, and the improved method may be useful for uncovering regulatory networks of other transcriptional factors.
3D Geological Mapping - uncovering the subsurface to increase environmental understanding
NASA Astrophysics Data System (ADS)
Kessler, H.; Mathers, S.; Peach, D.
2012-12-01
Geological understanding is required for many disciplines studying natural processes, from hydrology to landscape evolution. The subsurface structure of rocks and soils and their properties occupies three-dimensional (3D) space, and geological processes operate in time. Traditionally, geologists have captured their spatial and temporal knowledge in two-dimensional maps and cross-sections and through narrative, because paper maps and later two-dimensional geographical information systems (GIS) were the only tools available to them. Another major constraint on using more explicit and numerical systems to express geological knowledge is the fact that a geologist only ever observes and measures a fraction of the system they study. Only on rare occasions does the geologist have access to enough real data to generate meaningful predictions of the subsurface without the input of conceptual understanding developed from knowledge of the geological processes responsible for the deposition, emplacement and diagenesis of the rocks. This in turn has led to geology becoming an increasingly marginalised science as other disciplines have embraced the digital world and have increasingly turned to implicit numerical modelling to understand environmental processes and interactions. Recent developments in geoscience methodology and technology have gone some way to overcoming these barriers, and geologists across the world are beginning to routinely capture their knowledge and combine it with all available subsurface data (of often highly varying spatial distribution and quality) to create regional and national three-dimensional geological maps. This is re-defining the way geologists interact with other science disciplines, as their concepts and knowledge are now expressed in an explicit form that can be used downstream to design process model structures.
For example, groundwater modellers can refine their understanding of groundwater flow in three dimensions or even directly parameterize their numerical models using outputs from 3D mapping. In some cases model code is being re-designed in order to deal with the increasing geological complexity expressed by geologists. These 3D maps have inherent uncertainty, just as their predecessors, 2D geological maps, did, and there remains a significant body of work to quantify and effectively communicate this uncertainty. Here we present examples of regional and national 3D maps from Geological Survey Organisations worldwide and show how these are being used to better solve real-life environmental problems. The future challenge for geologists is to make these 3D maps easily available in an accessible and interoperable form so that the environmental science community can truly integrate the hidden subsurface into a common understanding of the whole geosphere.
Comparison of the AUSM(+) and H-CUSP Schemes for Turbomachinery Applications
NASA Technical Reports Server (NTRS)
Chima, Rodrick V.; Liou, Meng-Sing
2003-01-01
Many turbomachinery CFD codes use second-order central-difference (C-D) schemes with artificial viscosity to control point decoupling and to capture shocks. While C-D schemes generally give accurate results, they can also exhibit minor numerical problems, including overshoots at shocks and at the edges of viscous layers, and smearing of shocks and other flow features. In an effort to improve predictive capability for turbomachinery problems, two C-D codes developed by Chima, RVCQ3D and Swift, were modified by the addition of two upwind schemes: the AUSM+ scheme developed by Liou et al., and the H-CUSP scheme developed by Tatsumi et al. Details of the C-D scheme and the two upwind schemes are described, and results of three test cases are shown. Results for a 2-D transonic turbine vane showed that the upwind schemes eliminated viscous-layer overshoots. Results for a 3-D turbine vane showed that the upwind schemes gave improved predictions of exit flow angles and losses, although the H-CUSP scheme predicted slightly higher losses than the other schemes. Results for a 3-D supersonic compressor (NASA rotor 37) showed that the AUSM+ scheme predicted exit distributions of total pressure and temperature that are not generally captured by C-D codes. All schemes showed similar convergence rates, but the upwind schemes required considerably more CPU time per iteration.
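The overshoot behaviour that distinguishes central-difference from upwind discretizations can be shown on 1D linear advection. This is a toy analogue chosen for illustration, not the AUSM+ or H-CUSP flux formulas or the turbomachinery solvers themselves:

```python
def advect(u, c, dt, dx, steps, scheme="upwind"):
    """Advance u_t + c u_x = 0 (c > 0, periodic domain) by explicit
    time stepping. The plain central-difference update (no artificial
    viscosity) produces oscillations and overshoots, while first-order
    upwinding keeps the solution bounded: a toy analogue of the C-D vs
    upwind comparison in the paper.
    """
    n = len(u)
    nu = c * dt / dx  # CFL number
    for _ in range(steps):
        new = u[:]
        for i in range(n):
            if scheme == "upwind":
                # Convex combination for 0 <= nu <= 1: no new extrema
                new[i] = u[i] - nu * (u[i] - u[i - 1])
            else:
                # Central difference without added dissipation
                new[i] = u[i] - 0.5 * nu * (u[(i + 1) % n] - u[i - 1])
        u = new
    return u
```

Running both on a unit pulse shows the upwind result staying in [0, 1] while the central update immediately undershoots below zero, mirroring the viscous-layer overshoots noted above.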
Chen, Chin-Sheng; Chen, Po-Chun; Hsu, Chih-Ming
2016-01-01
This paper presents a novel 3D feature descriptor for object recognition and for identifying poses with six degrees of freedom in mobile manipulation and grasping applications. Firstly, a Microsoft Kinect sensor is used to capture 3D point cloud data. A viewpoint feature histogram (VFH) descriptor for the 3D point cloud data then encodes the geometry and viewpoint, so an object can be simultaneously recognized and registered in a stable pose, and the information is stored in a database. The VFH is robust to a large degree of surface noise and missing depth information, so it is reliable for stereo data. However, pose estimation for an object fails when the object is placed symmetrically to the viewpoint. To overcome this problem, this study proposes a modified viewpoint feature histogram (MVFH) descriptor that consists of two parts: a surface shape component that comprises an extended fast point feature histogram, and an extended viewpoint direction component. The MVFH descriptor characterizes an object's pose and enhances the system's ability to identify objects with mirrored poses. Finally, the refined pose is estimated using the iterative closest point (ICP) algorithm once the object has been recognized, its pose roughly estimated by the MVFH descriptor, and the result registered in the database. The estimation results demonstrate that the MVFH feature descriptor allows more accurate pose estimation. The experiments also show that the proposed method can be applied in vision-guided robotic grasping systems. PMID:27886080
Calibration of LiBaF3: Ce Scintillator for Fission Spectrum Neutrons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reeder, Paul L.; Bowyer, Sonya M.
2002-05-21
The scintillator LiBaF3 doped with small amounts of Ce3+ has the ability to distinguish heavy charged particles (p, d, t, or α) from beta and/or gamma radiation based on the presence or absence of ns components in the scintillation light output. Because the neutron capture reaction on 6Li produces recoil alphas and tritons, this scintillator also discriminates between neutron-induced events and beta or gamma interactions. An experimental technique using a time-tagged 252Cf source has been used to measure the efficiency of this scintillator for neutron capture, the calibration of neutron capture pulse height, and the pulse height resolution, all as a function of incident neutron energy.
Pose-oblivious shape signature.
Gal, Ran; Shamir, Ariel; Cohen-Or, Daniel
2007-01-01
A 3D shape signature is a compact representation for some essence of a shape. Shape signatures are commonly utilized as a fast indexing mechanism for shape retrieval. Effective shape signatures capture some global geometric properties which are scale, translation, and rotation invariant. In this paper, we introduce an effective shape signature which is also pose-oblivious. This means that the signature is also insensitive to transformations which change the pose of a 3D shape, such as skeletal articulations. Although some topology-based matching methods can be considered pose-oblivious as well, our new signature retains the simplicity and speed of signature indexing. Moreover, contrary to topology-based methods, the new signature is also insensitive to topology changes of the shape, allowing us to match similar shapes with different genus. Our shape signature is a 2D histogram which is a combination of the distribution of two scalar functions defined on the boundary surface of the 3D shape. The first is a novel function called the local-diameter function, which measures the diameter of the 3D shape in the neighborhood of each vertex. The histogram of this function is an informative measure of the shape which is insensitive to pose changes. The second is the centricity function, which measures the average geodesic distance from one vertex to all other vertices on the mesh. We evaluate and compare a number of methods for measuring the similarity between two signatures, and demonstrate the effectiveness of our pose-oblivious shape signature within a 3D search engine application for different databases containing hundreds of models.
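As a rough illustration of the centricity function above (average geodesic distance from a vertex to all others) and of binning such a scalar function into one axis of a histogram, here is a toy sketch that approximates geodesic distance by hop counts on a small adjacency graph; the graph and bin settings are invented for illustration:

```python
from collections import deque

def centricity(adjacency):
    """Average graph distance from each vertex to all others
    (a hop-count stand-in for the paper's geodesic distance)."""
    cent = {}
    n = len(adjacency)
    for src in adjacency:
        dist = {src: 0}
        queue = deque([src])
        while queue:                       # breadth-first search from src
            u = queue.popleft()
            for v in adjacency[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        cent[src] = sum(dist.values()) / (n - 1)
    return cent

def histogram(values, bins, lo, hi):
    """Fixed-range histogram; one axis of the 2D signature."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for v in values:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    return counts

# Toy "mesh": a path of 4 vertices.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
cent = centricity(adj)
print(cent)  # end vertices of the path have the largest average distance
```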
Morin, J P; Rochat, D; Malosse, C; Lettere, M; de Chenon, R D; Wibwo, H; Descoins, C
1996-07-01
Ethyl 4-methyloctanoate, which has already been described in Oryctes monoceros, has been identified, using extracts of effluvia collected from males, as being a major component of the male pheromone of O. rhinoceros. Field trials have been carried out in North Sumatra, Indonesia. Ethyl 4-methyloctanoate synthesized in the laboratory and released at 10 mg/d resulted in the capture of 6.8 insects per week per trap, whereas ethyl chrysanthemate (40 mg/d), an allelochemical compound once used as an attractant, only led to the capture of 0.3 insects, and the control none at all. The insects captured with the pheromone were 81% females, the majority being sexually mature. Discovery of this compound opens up new prospects for O. rhinoceros control.
Optical identification using imperfections in 2D materials
NASA Astrophysics Data System (ADS)
Cao, Yameng; Robson, Alexander J.; Alharbi, Abdullah; Roberts, Jonathan; Woodhead, Christopher S.; Noori, Yasir J.; Bernardo-Gavito, Ramón; Shahrjerdi, Davood; Roedig, Utz; Fal'ko, Vladimir I.; Young, Robert J.
2017-12-01
The ability to uniquely identify an object or device is important for authentication. Imperfections, locked into structures during fabrication, can be used to provide a fingerprint that is challenging to reproduce. In this paper, we propose a simple optical technique to read unique information from nanometer-scale defects in 2D materials. Imperfections created during crystal growth or fabrication lead to spatial variations in the bandgap of 2D materials that can be characterized through photoluminescence measurements. We show that a simple setup involving an angle-adjustable transmission filter, simple optics and a CCD camera can capture spatially dependent photoluminescence to produce complex maps of unique information from 2D monolayers. Atomic force microscopy is used to verify the origin of the optical signature measured, demonstrating that it results from nanometer-scale imperfections. This solution to optical identification with 2D materials could be employed as a robust security measure to prevent counterfeiting.
Matching Real and Synthetic Panoramic Images Using a Variant of Geometric Hashing
NASA Astrophysics Data System (ADS)
Li-Chee-Ming, J.; Armenakis, C.
2017-05-01
This work demonstrates an approach to automatically initialize a visual model-based tracker, and recover from lost tracking, without prior camera pose information. These approaches are commonly referred to as tracking-by-detection. Previous tracking-by-detection techniques used either fiducials (i.e. landmarks or markers) or the object's texture. The main contribution of this work is the development of a tracking-by-detection algorithm that is based solely on natural geometric features. A variant of geometric hashing, a model-to-image registration algorithm, is proposed that searches for a matching panoramic image from a database of synthetic panoramic images captured in a 3D virtual environment. The approach identifies corresponding features between the matched panoramic images. The corresponding features are to be used in a photogrammetric space resection to estimate the camera pose. The experiments apply this algorithm to initialize a model-based tracker in an indoor environment using the 3D CAD model of the building.
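Geometric hashing, in essence, quantizes invariant feature coordinates into a hash table offline and then lets query features vote for matching models online. Below is a heavily simplified sketch of that voting step, using a single basis and 2D points; the cell size, point sets and panorama names are illustrative, not taken from the paper:

```python
def quantize(point, cell=0.5):
    """Map a feature coordinate to a discrete hash-table cell."""
    return (round(point[0] / cell), round(point[1] / cell))

def build_hash_table(models):
    """Offline stage: index each model's features by quantized coordinates."""
    table = {}
    for name, feats in models.items():
        for f in feats:
            table.setdefault(quantize(f), []).append(name)
    return table

def vote(table, query_feats):
    """Online stage: query features vote; the most-voted model wins."""
    votes = {}
    for f in query_feats:
        for name in table.get(quantize(f), []):
            votes[name] = votes.get(name, 0) + 1
    return max(votes, key=votes.get) if votes else None

models = {"pano_A": [(0.0, 0.0), (1.0, 1.0)], "pano_B": [(3.0, 0.0)]}
table = build_hash_table(models)
print(vote(table, [(0.1, 0.1), (1.1, 0.9)]))  # -> pano_A
```

In the paper's pipeline, the matched panorama's corresponding features would then feed a photogrammetric space resection to recover the camera pose.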
High Density Aerial Image Matching: State-of-the-Art and Future Prospects
NASA Astrophysics Data System (ADS)
Haala, N.; Cavegn, S.
2016-06-01
Ongoing innovations in matching algorithms are continuously improving the quality of geometric surface representations generated automatically from aerial images. This development motivated the launch of the joint ISPRS/EuroSDR project "Benchmark on High Density Aerial Image Matching", which aims at the evaluation of photogrammetric 3D data capture in view of the current developments in dense multi-view stereo-image matching. Originally, the test aimed at image-based DSM computation from conventional aerial image flights for different land-use and image block configurations. The second phase then put an additional focus on high-quality, high-resolution 3D geometric data capture in complex urban areas. This includes both the extension of the test scenario to oblique aerial image flights as well as the generation of filtered point clouds as additional output of the respective multi-view reconstruction. The paper uses the preliminary outcomes of the benchmark to demonstrate the state-of-the-art in airborne image matching, with a special focus on high-quality geometric data capture in urban scenarios.
My Corporis Fabrica: an ontology-based tool for reasoning and querying on complex anatomical models
2014-01-01
Background Multiple models of anatomy have been developed independently and for different purposes. In particular, 3D graphical models are specially useful for visualizing the different organs composing the human body, while ontologies such as FMA (Foundational Model of Anatomy) are symbolic models that provide a unified formal description of anatomy. Despite its comprehensive content concerning the anatomical structures, the lack of formal descriptions of anatomical functions in FMA limits its usage in many applications. In addition, the absence of connection between 3D models and anatomical ontologies makes it difficult and time-consuming to set up and access to the anatomical content of complex 3D objects. Results First, we provide a new ontology of anatomy called My Corporis Fabrica (MyCF), which conforms to FMA but extends it by making explicit how anatomical structures are composed, how they contribute to functions, and also how they can be related to 3D complex objects. Second, we have equipped MyCF with automatic reasoning capabilities that enable model checking and complex queries answering. We illustrate the added-value of such a declarative approach for interactive simulation and visualization as well as for teaching applications. Conclusions The novel vision of ontologies that we have developed in this paper enables a declarative assembly of different models to obtain composed models guaranteed to be anatomically valid while capturing the complexity of human anatomy. The main interest of this approach is its declarativity that makes possible for domain experts to enrich the knowledge base at any moment through simple editors without having to change the algorithmic machinery. 
This gives the MyCF software environment the flexibility to process and add semantics for various applications that incorporate not only symbolic information, such as anatomical functions, but also 3D geometric models representing anatomical entities. PMID:24936286
The D3 Middleware Architecture
NASA Technical Reports Server (NTRS)
Walton, Joan; Filman, Robert E.; Korsmeyer, David J.; Lee, Diana D.; Mak, Ron; Patel, Tarang
2002-01-01
DARWIN is a NASA-developed, Internet-based system for enabling aerospace researchers to securely and remotely access and collaborate on the analysis of aerospace vehicle design data, primarily the results of wind-tunnel testing and numeric (e.g., computational fluid-dynamics) model executions. DARWIN captures, stores and indexes data; manages derived knowledge (such as visualizations across multiple datasets); and provides an environment for designers to collaborate in the analysis of test results. DARWIN is an interesting application because it supports high volumes of data, integrates multiple modalities of data display (e.g., images and data visualizations), and provides non-trivial access control mechanisms. DARWIN enables collaboration by allowing not only sharing visualizations of data, but also commentary about and views of data. Here we provide an overview of the architecture of D3, the third generation of DARWIN. Earlier versions of DARWIN were characterized by browser-based interfaces and a hodge-podge of server technologies: CGI scripts, applets, PERL, and so forth. But browsers proved difficult to control, and a proliferation of computational mechanisms proved inefficient and difficult to maintain. D3 substitutes a pure-Java approach for that medley: A Java client communicates (through RMI over HTTPS) with a Java-based application server. Code on the server accesses information from JDBC databases, distributed LDAP security services, and a collaborative information system. D3 is a three-tier architecture, but unlike 'E-commerce' applications, the data usage pattern suggests different strategies than traditional Enterprise Java Beans: we need to move volumes of related data together, considerable processing happens on the client, and the 'business logic' on the server side is primarily data integration and collaboration. 
With D3, we are extending DARWIN to handle other data domains and to be a distributed system, where a single login allows a user transparent access to test results from multiple servers and authority domains.
Jordt, Anne; Zelenka, Claudius; von Deimling, Jens Schneider; Koch, Reinhard; Köser, Kevin
2015-12-05
Several acoustic and optical techniques have been used for characterizing natural and anthropogenic gas leaks (carbon dioxide, methane) from the ocean floor. Here, single-camera based methods for bubble stream observation have become an important tool, as they help estimating flux and bubble sizes under certain assumptions. However, they record only a projection of a bubble into the camera and therefore cannot capture the full 3D shape, which is particularly important for larger, non-spherical bubbles. The unknown distance of the bubble to the camera (making it appear larger or smaller than expected) as well as refraction at the camera interface introduce extra uncertainties. In this article, we introduce our wide baseline stereo-camera deep-sea sensor bubble box that overcomes these limitations, as it observes bubbles from two orthogonal directions using calibrated cameras. Besides the setup and the hardware of the system, we discuss appropriate calibration and the different automated processing steps deblurring, detection, tracking, and 3D fitting that are crucial to arrive at a 3D ellipsoidal shape and rise speed of each bubble. The obtained values for single bubbles can be aggregated into statistical bubble size distributions or fluxes for extrapolation based on diffusion and dissolution models and large scale acoustic surveys. We demonstrate and evaluate the wide baseline stereo measurement model using a controlled test setup with ground truth information.
Epsky, Nancy D; Gill, Micah A; Mangan, Robert L
2015-08-01
In field tests conducted in south Florida to test grape juice as a bait for the Caribbean fruit fly, Anastrepha suspensa Loew, high numbers of Zaprionus indianus Gupta were captured in traps with aqueous grape juice. These experiments included comparisons of grape juice bait with established A. suspensa protein-based baits (ammonium acetate + putrescine lures, or torula yeast) or wine, a bait found previously to be attractive to Z. indianus. Effects of different preservatives (polypropylene glycol, polyethylene glycol, proxel, or sodium tetraborate) and bait age were also tested. Traps with grape juice baits captured more A. suspensa than unbaited traps, but more were captured in traps with grape juice plus preservative baits and the highest numbers were captured in traps containing the established protein-based baits. In contrast, grape juice baits without preservative that were prepared on the day of deployment (0 d) or that were aged for 3-4 d in the laboratory captured the highest numbers of Z. indianus, while solutions that were aged in the laboratory for 6 or 9 d captured fewer. Although these studies found that aqueous grape juice is a poor bait for A. suspensa, we found that actively fermenting aqueous grape juice may be an effective bait for Z. indianus. Published by Oxford University Press on behalf of Entomological Society of America 2015. This work is written by US Government employees and is in the public domain in the US.
Kruh, Jonathan N; Garrett, Kenneth A; Huntington, Brian; Robinson, Steve; Melki, Samir A
2017-01-01
To identify risk factors for retreatment post-laser in situ keratomileusis (LASIK). A retrospective chart review from December 2008 to September 2012 identified 1,402 patients (2,581 eyes) that underwent LASIK treatment for myopia with the Intralase™ FS, STAR S4 IR™ Excimer Laser, and WaveScan WaveFront™ technology. In this group, 83 patients were retreated. All charts were reviewed for preoperative age, gender, initial manifest refraction spherical equivalent (MRSE), total astigmatism, and iris registration. Factors associated with increased incidence rates of retreatment post-LASIK were preoperative age >40 years (p < 0.001), initial MRSE > -3.0 D (p = 0.02), and astigmatism >1 D (p = 0.001). Iris registration capture did not significantly reduce the retreatment rate (p = 0.12). Risk factors for retreatment included preoperative age >40 years, initial MRSE > -3.0 D, and astigmatism >1 D. There was no difference in retreatment rate for patients based on gender or iris registration capture.
Stock, Kristin; Estrada, Marta F; Vidic, Suzana; Gjerde, Kjersti; Rudisch, Albin; Santo, Vítor E; Barbier, Michaël; Blom, Sami; Arundkar, Sharath C; Selvam, Irwin; Osswald, Annika; Stein, Yan; Gruenewald, Sylvia; Brito, Catarina; van Weerden, Wytske; Rotter, Varda; Boghaert, Erwin; Oren, Moshe; Sommergruber, Wolfgang; Chong, Yolanda; de Hoogt, Ronald; Graeser, Ralph
2016-07-01
Two-dimensional (2D) cell cultures growing on plastic do not recapitulate the three-dimensional (3D) architecture and complexity of human tumors. More representative models are required for drug discovery and validation. Here, 2D culture and 3D mono- and stromal co-culture models of increasing complexity have been established and cross-comparisons made using three standard carcinoma cell lines: MCF7, LNCaP, NCI-H1437. Fluorescence-based growth curves, 3D image analysis, immunohistochemistry and treatment responses showed that end points differed according to cell type, stromal co-culture and culture format. The adaptable methodologies described here should guide the choice of appropriate simple and complex in vitro models.
Perumal, Veeradasan; Saheed, Mohamed Shuaib Mohamed; Mohamed, Norani Muti; Saheed, Mohamed Salleh Mohamed; Murthe, Satisvar Sundera; Gopinath, Subash C B; Chiu, Jian-Ming
2018-09-30
Tuberculosis (TB) is a chronic and infectious airborne disease which requires a diagnostic system with high sensitivity and specificity. However, the traditional gold-standard method for TB detection remains unreliable, with low specificity and sensitivity. The nanostructured composite material coupled with impedimetric sensing utilised in this study offers a feasible solution. Herein, novel gold (Au) nanorods were synthesized on 3D graphene grown by chemical vapour deposition. The irregularly spaced and rippled morphology of the 3D graphene provided a path for Au nanoparticles to self-assemble and form rod-like structures on its surface. The formation of the Au nanorods was followed by scanning electron microscopy, which revealed the evolution of Au nanoparticles into Au islets and, eventually, nanorods with lengths of ~150 nm and diameters of ~30 nm. The X-ray diffractogram displayed peaks consistent with defect-free, highly crystalline graphene and face-centred cubic Au. The strong optical interrelation between the Au nanorods and the 3D graphene was elucidated by Raman spectroscopy analysis. Furthermore, the Au nanorods anchored on the 3D graphene nanocomposite enable bio-capture on the exposed Au surface of the defect-free graphene. Impedimetric sensing of a DNA sequence from TB on the 3D graphene/Au nanocomposite revealed a remarkably wide linear detection range, from 10 fM to 0.1 µM, demonstrating the capability of detecting femtomolar DNA concentrations. Overall, the novel 3D graphene/Au nanocomposite demonstrated here offers high-performance bio-sensing and opens a new avenue for TB detection. Copyright © 2018 Elsevier B.V. All rights reserved.
LIME: 3D visualisation and interpretation of virtual geoscience models
NASA Astrophysics Data System (ADS)
Buckley, Simon; Ringdal, Kari; Dolva, Benjamin; Naumann, Nicole; Kurz, Tobias
2017-04-01
Three-dimensional and photorealistic acquisition of surface topography, using methods such as laser scanning and photogrammetry, has become widespread across the geosciences over the last decade. With recent innovations in photogrammetric processing software, robust and automated data capture hardware, and novel sensor platforms, including unmanned aerial vehicles, obtaining 3D representations of exposed topography has never been easier. In addition to 3D datasets, fusion of surface geometry with imaging sensors, such as multi/hyperspectral, thermal and ground-based InSAR, and geophysical methods, create novel and highly visual datasets that provide a fundamental spatial framework to address open geoscience research questions. Although data capture and processing routines are becoming well-established and widely reported in the scientific literature, challenges remain related to the analysis, co-visualisation and presentation of 3D photorealistic models, especially for new users (e.g. students and scientists new to geomatics methods). Interpretation and measurement is essential for quantitative analysis of 3D datasets, and qualitative methods are valuable for presentation purposes, for planning and in education. Motivated by this background, the current contribution presents LIME, a lightweight and high performance 3D software for interpreting and co-visualising 3D models and related image data in geoscience applications. The software focuses on novel data integration and visualisation of 3D topography with image sources such as hyperspectral imagery, logs and interpretation panels, geophysical datasets and georeferenced maps and images. High quality visual output can be generated for dissemination purposes, to aid researchers with communication of their research results. 
The background of the software is described, and case studies from outcrop geology, hyperspectral mineral mapping and geophysical-geospatial data integration are used to showcase the novel methods developed.
Thermal Texture Generation and 3D Model Reconstruction Using SfM and GAN
NASA Astrophysics Data System (ADS)
Kniaz, V. V.; Mizginov, V. A.
2018-05-01
Realistic 3D models with textures representing thermal emission of the object are widely used in such fields as dynamic scene analysis, autonomous driving, and video surveillance. Structure from Motion (SfM) methods provide a robust approach for the generation of textured 3D models in the visible range. Still, automatic generation of 3D models from infrared imagery is challenging due to the absence of feature points and low sensor resolution. Recent advances in Generative Adversarial Networks (GAN) have proved that they can perform complex image-to-image transformations such as a transformation of day to night and generation of imagery in a different spectral range. In this paper, we propose a novel method for generation of realistic 3D models with thermal textures using the SfM pipeline and GAN. The proposed method uses visible-range images as an input. The images are processed in two ways. Firstly, they are used for point matching and dense point cloud generation. Secondly, the images are fed into a GAN that performs the transformation from the visible range to the thermal range. We evaluate the proposed method using real infrared imagery captured with a FLIR ONE PRO camera. We generated a dataset with 2000 pairs of real images captured in the thermal and visible ranges. The dataset is used to train the GAN network and to generate 3D models using SfM. The evaluation of the generated 3D models and infrared textures proved that they are similar to the ground truth model in both thermal emissivity and geometrical shape.
Electron capture from circular Rydberg atoms
NASA Astrophysics Data System (ADS)
Lundsgaard, M. F. V.; Chen, Z.; Lin, C. D.; Toshima, N.
1995-02-01
Electron capture cross sections from circular Rydberg states as a function of the angle φ between the ion velocity and the angular momentum of the circular orbital have been reported recently by Hansen et al. [Phys. Rev. Lett. 71, 1522 (1993)]. We show that the observed φ dependence can be explained in terms of the propensity rule that governs the dependence of electron capture cross sections on the magnetic quantum numbers of the initial excited states. We also carried out close-coupling calculations to show that electron capture from the circular H(3d,4f,5g) states by protons at the same scaled velocity has nearly the same φ dependence.
A 3D image sensor with adaptable charge subtraction scheme for background light suppression
NASA Astrophysics Data System (ADS)
Shin, Jungsoon; Kang, Byongmin; Lee, Keechang; Kim, James D. K.
2013-02-01
We present a 3D ToF (Time-of-Flight) image sensor with an adaptive charge subtraction scheme for background light suppression. The proposed sensor can alternately capture a high resolution color image and a high quality depth map in each frame. In depth mode, the sensor requires a sufficiently long integration time for accurate depth acquisition, but saturation will occur under strong background illumination. We propose to divide the integration time into N sub-integration times adaptively. In each sub-integration time, our sensor captures an image without saturation and subtracts the charge to prevent the pixel from saturating. The subtraction results are then accumulated over the N sub-integrations, yielding a final image free of background illumination at the full integration time. Experimental results with our own ToF sensor show high background suppression performance. We also propose an in-pixel storage and column-level subtraction circuit for chip-level implementation of the proposed method. We believe the proposed scheme will enable 3D sensors to be used in outdoor environments.
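The adaptive charge subtraction idea, splitting one long integration into N shorter ones and removing the background charge each time so the pixel never saturates, can be sketched numerically. The electron rates, the well capacity and the assumption of a known, constant background below are all illustrative, not the sensor's actual parameters:

```python
def accumulate_without_saturation(signal_rate, background_rate,
                                  total_time, n_sub, full_well):
    """Split the integration time into n_sub sub-integrations, subtract
    the background charge after each one, and accumulate the remainder,
    keeping every sub-frame below the full-well (saturation) limit."""
    t_sub = total_time / n_sub
    accumulated = 0.0
    for _ in range(n_sub):
        charge = (signal_rate + background_rate) * t_sub
        if charge > full_well:  # this sub-frame would still saturate
            raise ValueError("sub-integration still saturates; increase n_sub")
        accumulated += charge - background_rate * t_sub  # charge subtraction
    return accumulated

# A strong background (90 e-/ms) would saturate a 20 e- well in one long
# integration, but 8 sub-integrations recover the pure signal charge:
signal = accumulate_without_saturation(10.0, 90.0, 1.0, 8, 20.0)
print(signal)  # -> 10.0 (signal_rate * total_time, background removed)
```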
Real-time physics-based 3D biped character animation using an inverted pendulum model.
Tsai, Yao-Yang; Lin, Wen-Chieh; Cheng, Kuangyou B; Lee, Jehee; Lee, Tong-Yee
2010-01-01
We present a physics-based approach to generate 3D biped character animation that can react to dynamical environments in real time. Our approach utilizes an inverted pendulum model to online adjust the desired motion trajectory from the input motion capture data. This online adjustment produces a physically plausible motion trajectory adapted to dynamic environments, which is then used as the desired motion for the motion controllers to track in dynamics simulation. Rather than using Proportional-Derivative controllers whose parameters usually cannot be easily set, our motion tracking adopts a velocity-driven method which computes joint torques based on the desired joint angular velocities. Physically correct full-body motion of the 3D character is computed in dynamics simulation using the computed torques and dynamical model of the character. Our experiments demonstrate that tracking motion capture data with real-time response animation can be achieved easily. In addition, physically plausible motion style editing, automatic motion transition, and motion adaptation to different limb sizes can also be generated without difficulty.
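The velocity-driven tracking described above computes joint torques from the error in joint angular velocity rather than from position errors with hand-tuned PD gains. A one-joint toy simulation (unit inertia; the gain, timestep and target velocity are illustrative values, not from the paper) shows the idea:

```python
def velocity_driven_torque(omega_desired, omega, gain):
    """Torque proportional to the angular-velocity error, avoiding the
    stiffness/damping tuning of a Proportional-Derivative controller."""
    return gain * (omega_desired - omega)

# Toy single-joint forward simulation with unit inertia.
dt, inertia, gain = 0.01, 1.0, 5.0
omega, omega_desired = 0.0, 2.0
for _ in range(1000):
    tau = velocity_driven_torque(omega_desired, omega, gain)
    omega += (tau / inertia) * dt  # explicit Euler step of the joint dynamics
print(round(omega, 3))  # -> 2.0 (tracks the desired angular velocity)
```

The velocity error decays geometrically each step, so the joint converges to the desired angular velocity without any stiffness parameter having been chosen.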
Biomechanical ToolKit: Open-source framework to visualize and process biomechanical data.
Barre, Arnaud; Armand, Stéphane
2014-04-01
The C3D file format is widely used in the biomechanical field by companies and laboratories to store motion capture system data. However, few software packages can visualize and modify the entirety of the data in a C3D file. Our objective was to develop an open-source and multi-platform framework to read, write, modify and visualize data from any motion analysis system using the standard (C3D) and proprietary file formats (used by many companies producing motion capture systems). The Biomechanical ToolKit (BTK) was developed to provide cost-effective and efficient tools for the biomechanical community to easily deal with motion analysis data. A large panel of operations is available to read, modify and process data through a C++ API, bindings for high-level languages (Matlab, Octave, and Python), and a standalone application (Mokka). All these tools are open-source and cross-platform and run on all major operating systems (Windows, Linux, MacOS X). Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Unsupervised building detection from irregularly spaced LiDAR and aerial imagery
NASA Astrophysics Data System (ADS)
Shorter, Nicholas Sven
As more data sources containing 3-D information are becoming available, an increased interest in 3-D imaging has emerged. Among these is the 3-D reconstruction of buildings and other man-made structures. A necessary preprocessing step is the detection and isolation of individual buildings that subsequently can be reconstructed in 3-D using various methodologies. Building detection and reconstruction have commercial applications in urban planning, network planning for mobile communication (cell phone tower placement), spatial analysis of air pollution and noise nuisances, microclimate investigations, geographical information systems, security services and change detection in areas affected by natural disasters. Building detection and reconstruction are also used in the military for automatic target recognition and in entertainment for virtual tourism. Previously proposed building detection and reconstruction algorithms solely utilized aerial imagery. With the advent of Light Detection and Ranging (LiDAR) systems providing elevation data, current algorithms explore using captured LiDAR data as an additional feasible source of information. Additional sources of information can lead to automating techniques (alleviating their need for manual user intervention) as well as increasing their capabilities and accuracy. Several building detection approaches surveyed in the open literature have fundamental weaknesses that hinder their use, such as requiring multiple data sets from different sensors, mandating certain operations to be carried out manually, and functionality limited to detecting only certain types of buildings. In this work, a building detection system is proposed and implemented which strives to overcome the limitations seen in existing techniques. The developed framework is flexible in that it can perform building detection from just LiDAR data (first or last return), or just nadir, color aerial imagery. 
If data from both LiDAR and aerial imagery are available, then the algorithm will use both for improved accuracy. Additionally, the proposed approach does not employ severely limiting assumptions, thus enabling the end user to apply it to a wider variety of building types. The proposed approach is extensively tested using real data sets and is also compared with other existing techniques. Experimental results are presented.
A multimodal biometric authentication system based on 2D and 3D palmprint features
NASA Astrophysics Data System (ADS)
Aggithaya, Vivek K.; Zhang, David; Luo, Nan
2008-03-01
This paper presents a new personal authentication system that simultaneously exploits 2D and 3D palmprint features. Here, we aim to improve the accuracy and robustness of existing palmprint authentication systems using 3D palmprint features. The proposed system uses an active stereo technique, structured light, to simultaneously capture a 3D image (range data) of the palm and a registered intensity image. A surface-curvature-based method is employed to extract features from the 3D palmprint, and a Gabor-feature-based competitive coding scheme is used for the 2D representation. We individually analyze these representations and combine them using a score-level fusion technique. Our experiments on a database of 108 subjects achieve a significant improvement in performance (Equal Error Rate) with the integration of 3D features as compared to the case when 2D palmprint features alone are employed.
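The score-level fusion step described above can be sketched as a weighted sum of min-max-normalized matcher scores. This is a minimal illustration of the general technique; the weights and score ranges below are illustrative assumptions, not the paper's actual parameters.

```python
# Hedged sketch of score-level fusion for 2D + 3D palmprint matchers.
# All numeric parameters here are hypothetical.

def minmax_normalize(score, lo, hi):
    """Map a raw matcher score into [0, 1] given its observed range."""
    return (score - lo) / (hi - lo)

def fuse_scores(score_2d, score_3d, w_2d=0.6, w_3d=0.4,
                range_2d=(0.0, 1.0), range_3d=(0.0, 100.0)):
    """Weighted-sum fusion of two normalized matcher scores."""
    s2 = minmax_normalize(score_2d, *range_2d)
    s3 = minmax_normalize(score_3d, *range_3d)
    return w_2d * s2 + w_3d * s3
```

In practice the weights would be tuned on a development set to minimize the Equal Error Rate.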
Multi-Scale Porous Ultra High Temperature Ceramics
2015-01-08
Porous ultra-high temperature ceramics were processed using four different techniques: replica, particle-stabilized foams, ice templating (freeze casting), and partial sintering. The materials were characterized in terms of porosity, pore size, shape, and morphology, and X-ray tomography was used to study their 3D microstructure.
3-D rendition (Conference Presentation)
NASA Astrophysics Data System (ADS)
Izdebski, Krzysztof; Blanco, Matthew; Sova, Jaroslaw; Di Lorenzo, Enrico
2017-02-01
Growl, a style of extreme vocalization used to produce the bizarre and frightening voice of heavy metal singers, was captured by HSDP; the footage is fascinating and shows that this sound is produced predominantly by the supraglottic structures. To enhance our understanding of how this process is accomplished, the obtained images were processed for viewing in 3-D. The results are shown and discussed.
Influence of camera parameters on the quality of mobile 3D capture
NASA Astrophysics Data System (ADS)
Georgiev, Mihail; Boev, Atanas; Gotchev, Atanas; Hannuksela, Miska
2010-01-01
We investigate the effect of camera de-calibration on the quality of depth estimation. A dense depth map is a format particularly suitable for mobile 3D capture (scalable and screen independent). However, in a real-world scenario the cameras might move (vibration, thermal bending) from their designated positions. For our experiments, we created a test framework, described in the paper. We investigate how such mechanical changes affect four different stereo-matching algorithms. We also assess how different geometric corrections (none, motion-compensation-like, full rectification) affect estimation quality, and how much offset can still be compensated with a "crop" over a larger CCD. Finally, we show how the estimated camera pose change (E) relates to stereo-matching performance, which can be used as a "rectification quality" measure.
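Stereo-matching algorithms such as those evaluated above estimate, for each pixel, the horizontal disparity between rectified left and right views. A minimal sketch of sum-of-absolute-differences (SAD) block matching, one of the simplest such algorithms (not necessarily one of the four tested in the paper), illustrates why de-calibration matters: the cost search assumes correspondences lie on the same scanline, so any vertical offset degrades the match.

```python
import numpy as np

def disparity_sad(left, right, max_disp=8, win=1):
    """Per-pixel disparity for a rectified stereo pair via SAD block matching.
    Window size and disparity range are illustrative."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(win, h - win):
        for x in range(win + max_disp, w - win):
            patch = left[y-win:y+win+1, x-win:x+win+1]
            # Compare against right-image patches shifted by each candidate d
            costs = [np.abs(patch - right[y-win:y+win+1,
                                          x-d-win:x-d+win+1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

A synthetic pair shifted by 3 pixels recovers a disparity of 3 in the interior; introducing a vertical offset between the images would break this assumption, which is the de-calibration effect the paper quantifies.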
3D Land Cover Classification Based on Multispectral LiDAR Point Clouds
NASA Astrophysics Data System (ADS)
Zou, Xiaoliang; Zhao, Guihua; Li, Jonathan; Yang, Yuanxi; Fang, Yong
2016-06-01
A multispectral LiDAR system can emit simultaneous laser pulses at different wavelengths. The reflected multispectral energy is captured through a receiver on the sensor, and the return signal, together with the position and orientation information of the sensor, is recorded. These recorded data are solved with GNSS/IMU data in post-processing, forming high-density multispectral 3D point clouds. As the first commercial multispectral airborne LiDAR sensor, the Optech Titan system is capable of collecting point cloud data in three channels: 532 nm (visible green), 1064 nm (near infrared, NIR) and 1550 nm (intermediate infrared, IR). It has become a new data source for 3D land cover classification. The paper presents an Object-Based Image Analysis (OBIA) approach that uses only multispectral LiDAR point cloud datasets for 3D land cover classification. The approach consists of three steps. First, multispectral intensity images are segmented into image objects on the basis of multi-resolution segmentation integrating different scale parameters. Second, intensity objects are classified into nine categories using customized classification-index features and a combination of the multispectral reflectance with the vertical distribution of object features. Finally, accuracy is assessed by comparing random reference sample points from Google imagery tiles with the classification results. The classification results show high overall accuracy for most of the land cover types; over 90% overall accuracy is achieved using multispectral LiDAR point clouds for 3D land cover classification.
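The accuracy assessment in the final step amounts to comparing predicted class labels against reference labels at the sampled points. A minimal sketch of overall accuracy and a per-class confusion matrix (generic evaluation machinery, not the paper's specific tooling):

```python
import numpy as np

def overall_accuracy(predicted, reference):
    """Fraction of reference sample points whose predicted class matches."""
    predicted, reference = np.asarray(predicted), np.asarray(reference)
    return float((predicted == reference).mean())

def confusion_matrix(predicted, reference, n_classes):
    """cm[i, j] counts reference class i predicted as class j."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for p, r in zip(predicted, reference):
        cm[r, p] += 1
    return cm
```

Overall accuracy equals the trace of the confusion matrix divided by the total number of samples.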
Wave optics theory and 3-D deconvolution for the light field microscope
Broxton, Michael; Grosenick, Logan; Yang, Samuel; Cohen, Noy; Andalman, Aaron; Deisseroth, Karl; Levoy, Marc
2013-01-01
Light field microscopy is a new technique for high-speed volumetric imaging of weakly scattering or fluorescent specimens. It employs an array of microlenses to trade off spatial resolution against angular resolution, thereby allowing a 4-D light field to be captured using a single photographic exposure without the need for scanning. The recorded light field can then be used to computationally reconstruct a full volume. In this paper, we present an optical model for light field microscopy based on wave optics, instead of previously reported ray optics models. We also present a 3-D deconvolution method for light field microscopy that is able to reconstruct volumes at higher spatial resolution, and with better optical sectioning, than previously reported. To accomplish this, we take advantage of the dense spatio-angular sampling provided by a microlens array at axial positions away from the native object plane. This dense sampling permits us to decode aliasing present in the light field to reconstruct high-frequency information. We formulate our method as an inverse problem for reconstructing the 3-D volume, which we solve using a GPU-accelerated iterative algorithm. Theoretical limits on the depth-dependent lateral resolution of the reconstructed volumes are derived. We show that these limits are in good agreement with experimental results on a standard USAF 1951 resolution target. Finally, we present 3-D reconstructions of pollen grains that demonstrate the improvements in fidelity made possible by our method. PMID:24150383
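The inverse problem above is solved with an iterative deconvolution algorithm. As a hedged illustration of the general class of method (not the authors' GPU-accelerated solver), a 1-D Richardson-Lucy iteration shows the core update: forward-blur the current estimate, compare with the observation, and back-propagate the ratio through the flipped point spread function.

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50):
    """Iteratively estimate x from observed = psf * x (1-D sketch).
    psf is assumed normalized to sum to 1."""
    x = np.full_like(observed, observed.mean())  # flat initial guess
    psf_flip = psf[::-1]
    for _ in range(n_iter):
        blurred = np.convolve(x, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)  # avoid divide-by-zero
        x *= np.convolve(ratio, psf_flip, mode="same")
    return x
```

On a blurred point source the iteration progressively re-concentrates energy at the true location; the paper's 3-D method generalizes this idea with a depth-dependent, wave-optics-derived PSF.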
Single-frame 3D fluorescence microscopy with ultraminiature lensless FlatScope
Adams, Jesse K.; Boominathan, Vivek; Avants, Benjamin W.; Vercosa, Daniel G.; Ye, Fan; Baraniuk, Richard G.; Robinson, Jacob T.; Veeraraghavan, Ashok
2017-01-01
Modern biology increasingly relies on fluorescence microscopy, which is driving demand for smaller, lighter, and cheaper microscopes. However, traditional microscope architectures suffer from a fundamental trade-off: As lenses become smaller, they must either collect less light or image a smaller field of view. To break this fundamental trade-off between device size and performance, we present a new concept for three-dimensional (3D) fluorescence imaging that replaces lenses with an optimized amplitude mask placed a few hundred micrometers above the sensor and an efficient algorithm that can convert a single frame of captured sensor data into high-resolution 3D images. The result is FlatScope: perhaps the world’s tiniest and lightest microscope. FlatScope is a lensless microscope that is scarcely larger than an image sensor (roughly 0.2 g in weight and less than 1 mm thick) and yet able to produce micrometer-resolution, high–frame rate, 3D fluorescence movies covering a total volume of several cubic millimeters. The ability of FlatScope to reconstruct full 3D images from a single frame of captured sensor data allows us to image 3D volumes roughly 40,000 times faster than a laser scanning confocal microscope while providing comparable resolution. We envision that this new flat fluorescence microscopy paradigm will lead to implantable endoscopes that minimize tissue damage, arrays of imagers that cover large areas, and bendable, flexible microscopes that conform to complex topographies. PMID:29226243
Gallerani, Giulia; Cocchi, Claudia; Bocchini, Martine; Piccinini, Filippo; Fabbri, Francesco
2017-01-01
Circulating tumor cells (CTCs) are associated with poor survival in metastatic cancer. Their identification, phenotyping, and genotyping could lead to a better understanding of tumor heterogeneity and thus facilitate the selection of patients for personalized treatment. However, this is hampered because of the rarity of CTCs. We present an innovative approach for sampling a high volume of the patient blood and obtaining information about presence, phenotype, and gene translocation of CTCs. The method combines immunofluorescence staining and DNA fluorescent-in-situ-hybridization (DNA FISH) and is based on a functionalized medical wire. This wire is an innovative device that permits the in vivo isolation of CTCs from a large volume of peripheral blood. The blood volume screened by a 30-min administration of the wire is approximately 1.5-3 L. To demonstrate the feasibility of this approach, epithelial cell adhesion molecule (EpCAM) expression and the chromosomal translocation of the ALK gene were determined in non-small-cell lung cancer (NSCLC) cell lines captured by the functionalized wire and stained with an immuno-DNA FISH approach. Our main challenge was to perform the assay on a 3D structure, the functionalized wire, and to determine immuno-phenotype and FISH signals on this support using a conventional fluorescence microscope. The results obtained indicate that catching CTCs and analyzing their phenotype and chromosomal rearrangement could potentially represent a new companion diagnostic approach and provide an innovative strategy for improving personalized cancer treatments. PMID:29286485
Chang, Sung-A; Kim, Hyung-Kwan; Lee, Sang-Chol; Kim, Eun-Young; Hahm, Seung-Hee; Kwon, Oh Min; Park, Seung Woo; Choe, Yeon Hyeon; Oh, Jae K
2013-04-01
Left ventricular (LV) mass is an important prognostic indicator in hypertrophic cardiomyopathy. Although LV mass can be easily calculated using conventional echocardiography, it is based on geometric assumptions and has inherent limitations in asymmetric left ventricles. Real-time three-dimensional echocardiographic (RT3DE) imaging with single-beat capture provides an opportunity for the accurate estimation of LV mass. The aim of this study was to validate this new technique for LV mass measurement in patients with hypertrophic cardiomyopathy. Sixty-nine patients with adequate two-dimensional (2D) and three-dimensional echocardiographic image quality underwent cardiac magnetic resonance (CMR) imaging and echocardiography on the same day. Real-time three-dimensional echocardiographic images were acquired using an Acuson SC2000 system, and CMR-determined LV mass was considered the reference standard. Left ventricular mass was derived using the formula of the American Society of Echocardiography (M-mode mass), the 2D-based truncated ellipsoid method (2D mass), and the RT3DE technique (RT3DE mass). The mean time for RT3DE analysis was 5.85 ± 1.81 min. Intraclass correlation analysis showed a close relationship between RT3DE and CMR LV mass (r = 0.86, P < .0001). However, LV mass by the M-mode or 2D technique showed a smaller intraclass correlation coefficient compared with CMR-determined mass (r = 0.48, P = .01, and r = 0.71, P < .001, respectively). Bland-Altman analysis showed reasonable limits of agreement between LV mass by RT3DE imaging and by CMR, with a smaller positive bias (19.5 g [9.1%]) compared with that by the M-mode and 2D methods (-35.1 g [-20.2%] and 30.6 g [17.6%], respectively). RT3DE measurement of LV mass using the single-beat capture technique is practical and more accurate than 2D or M-mode LV mass in patients with hypertrophic cardiomyopathy. Copyright © 2013 American Society of Echocardiography. Published by Mosby, Inc. All rights reserved.
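The "formula of the American Society of Echocardiography" referenced for M-mode mass is the standard ASE cube formula (Devereux-corrected), which estimates LV mass from end-diastolic wall thicknesses and cavity dimension in centimeters. A direct implementation:

```python
def lv_mass_ase(ivsd_cm, lvidd_cm, pwd_cm):
    """ASE cube formula: LV mass in grams.
    ivsd_cm  -- interventricular septal thickness at end diastole (cm)
    lvidd_cm -- LV internal dimension at end diastole (cm)
    pwd_cm   -- posterior wall thickness at end diastole (cm)"""
    return 0.8 * (1.04 * ((ivsd_cm + lvidd_cm + pwd_cm) ** 3
                          - lvidd_cm ** 3)) + 0.6
```

Because the formula cubes a single linear dimension, it assumes a prolate-ellipsoid ventricle, which is exactly the geometric assumption that fails in asymmetric hypertrophic cardiomyopathy and motivates the 3D approach in this study.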
Dynamic integral imaging technology for 3D applications (Conference Presentation)
NASA Astrophysics Data System (ADS)
Huang, Yi-Pai; Javidi, Bahram; Martínez-Corral, Manuel; Shieh, Han-Ping D.; Jen, Tai-Hsiang; Hsieh, Po-Yuan; Hassanfiroozi, Amir
2017-05-01
Depth and resolution are always a trade-off in integral imaging technology. With dynamically adjustable devices, the two factors can be fully compensated through time-multiplexed addressing. Those dynamic devices can be mechanically or electrically driven. In this presentation, we mainly focus on various liquid crystal devices that can change the focal length, scan and shift the image position, or switch between 2D and 3D modes. Using liquid crystal devices, dynamic integral imaging has been successfully applied to 3D display, capture, and bio-imaging applications.
Multiple Sensor Camera for Enhanced Video Capturing
NASA Astrophysics Data System (ADS)
Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko
Camera resolution has improved drastically in response to the current demand for high-quality digital images. For example, digital still cameras have several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution is incompatible with high frame rate in ordinary cameras on the market. It is difficult to solve this problem with a single sensor, since it comes from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. Common multi-CCD cameras, such as 3CCD color cameras, use identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet to capture high-resolution and high-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.
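Fusing the two streams requires associating each high-frame-rate frame with high-resolution detail. A minimal sketch of one ingredient, pairing each 90 fps frame with the nearest 3.75 fps keyframe by timestamp (a simplified scheme for illustration, not the paper's actual enhancement algorithm):

```python
# Hedged sketch: temporal association between a high-fps stream and
# sparse high-resolution keyframes. Timestamps are in seconds.

def nearest_keyframe(t, keyframe_times):
    """Index of the high-resolution keyframe closest in time to t."""
    return min(range(len(keyframe_times)),
               key=lambda i: abs(keyframe_times[i] - t))
```

At 3.75 fps the keyframe spacing is 1/3.75 ≈ 0.267 s, so every high-fps frame lies within about 0.13 s of a high-resolution reference that can supply spatial detail.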
Capturing Cognitive Processing Time for Active Authentication
2014-02-01
...cognitive fingerprint for continuous authentication. Its effectiveness has been verified through a campus-wide experiment at Iowa State University... brief to capture a "cognitive fingerprint." In the current keystroke-authentication commercial market, some products combine the timing information of...
Nguyen, Tu Q; Simpson, Pamela M; Braaf, Sandra C; Cameron, Peter A; Judson, Rodney; Gabbe, Belinda J
2018-06-05
Many outcome studies capture the presence of mental health, drug and alcohol comorbidities from administrative datasets and medical records. How these sources compare as predictors of patient outcomes has not been determined. The purpose of the present study was to compare mental health, drug and alcohol comorbidities based on ICD-10-AM coding and medical record documentation for predicting longer-term outcomes in injured patients. A random sample of patients (n = 500) captured by the Victorian State Trauma Registry was selected for the study. Retrospective medical record reviews were conducted to collect data about documented mental health, drug and alcohol comorbidities, while ICD-10-AM codes were obtained from routinely collected hospital data. Outcomes at 12 months post-injury were the Glasgow Outcome Scale - Extended (GOS-E), European Quality of Life Five Dimensions (EQ-5D-3L), and return to work. Linear and logistic regression models, adjusted for age and gender, using medical record derived comorbidity and ICD-10-AM were compared using measures of calibration (Hosmer-Lemeshow statistic) and discrimination (C-statistic and R²). There was no demonstrable difference in predictive performance between the medical record and ICD-10-AM models for predicting the GOS-E, the EQ-5D-3L utility score, and the EQ-5D-3L mobility, self-care, usual activities and pain/discomfort items. The area under the receiver operating characteristic curve (AUC) for models using medical record derived comorbidity (AUC 0.68, 95% CI: 0.63, 0.73) was higher than for the model using ICD-10-AM data (AUC 0.62, 95% CI: 0.57, 0.67) for predicting the EQ-5D-3L anxiety/depression item. The discrimination of the model for predicting return to work was higher with inclusion of the medical record data (AUC 0.69, 95% CI: 0.63, 0.76) than the ICD-10-AM data (AUC 0.59, 95% CI: 0.52, 0.65). 
Mental health, drug and alcohol comorbidity information derived from medical record review was not clearly superior for predicting the majority of the outcomes assessed when compared to ICD-10-AM. While information available in medical records may be more comprehensive than in the ICD-10-AM, there appears to be little difference in the discriminative capacity of comorbidities coded in the two sources.
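The AUC values compared above have a direct probabilistic interpretation: the chance that a randomly chosen positive case (e.g. a patient who did not return to work) receives a higher predicted risk than a randomly chosen negative case. A minimal sketch of this Mann-Whitney formulation (generic statistics, not the study's software):

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability that a positive case outscores a negative
    case, counting ties as half a win."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.5 corresponds to chance-level discrimination, which is why the ICD-10-AM model's 0.59 for return to work represents only modest predictive value.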
NASA Technical Reports Server (NTRS)
Nickerson, Cheryl A.; Richter, Emily G.; Ott, C. Mark
2006-01-01
Representative, reproducible and high-throughput models of human cells and tissues are critical for a meaningful evaluation of host-pathogen interactions and are an essential component of the research developmental pipeline. The most informative infection models - animals, organ explants and human trials - are not suited for extensive evaluation of pathogenesis mechanisms and screening of candidate drugs. At the other extreme, more cost effective and accessible infection models such as conventional cell culture and static co-culture may not capture physiological and three-dimensional aspects of tissue biology that are important in assessing pathogenesis, and effectiveness and cytotoxicity of therapeutics. Our lab has used innovative bioengineering technology to establish biologically meaningful 3-D models of human tissues that recapitulate many aspects of the differentiated structure and function of the parental tissue in vivo, and we have applied these models to study infectious disease. We have established a variety of different 3-D models that are currently being used in infection studies - including small intestine, colon, lung, placenta, bladder, periodontal ligament, and neuronal models. Published work from our lab has shown that our 3-D models respond to infection with bacterial and viral pathogens in ways that reflect the infection process in vivo. By virtue of their physiological relevance, 3-D cell cultures may also hold significant potential as models to provide insight into the neuropathogenesis of HIV infection. Furthermore, the experimental flexibility, reproducibility, cost-efficiency, and high throughput platform afforded by these 3-D models may have important implications for the design and development of drugs with which to effectively treat neurological complications of HIV infection.
Three-dimensional surface anthropometry: Applications to the human body
NASA Astrophysics Data System (ADS)
Jones, Peter R. M.; Rioux, Marc
1997-09-01
Anthropometry is the study of the measurement of the human body. By tradition this has been carried out taking the measurements from body surface landmarks, such as circumferences and breadths, using simple instruments like tape measures and calipers. Three-dimensional (3D) surface anthropometry enables us to extend the study to 3D geometry and morphology of mainly external human body tissues. It includes the acquisition, indexing, transmission, archiving, retrieval, interrogation and analysis of body size, shape, and surface together with their variability throughout growth and development to adulthood. While 3D surface anthropometry surveying is relatively new, anthropometric surveying using traditional tools, such as calipers and tape measures, is not. Recorded studies of the human form date back to ancient times. Since at least the 17th century [1] investigators have made attempts to measure the human body for physical properties such as weight, size, and centre of mass. Martin documented 'standard' body measurement methods in a handbook in 1928 [2]. This paper reviews the past and current literature devoted to the applications of 3D anthropometry because true 3D scanning of the complete human body is fast becoming a reality. We attempt to take readers through different forms of technology which deal with simple forms of projected light to the more complex advanced forms of laser and video technology giving low and/or high resolution 3D data. Information is also given about image capture of size and shape of the whole as well as most component parts of the human body. In particular, the review describes with explanations a multitude of applications, for example, medical, product design, human engineering, anthropometry and ergonomics etc.
Hershberger, P.; Hart, A.; Gregg, J.; Elder, N.; Winton, J.
2006-01-01
Capture of wild, juvenile herring Clupea pallasii from Puget Sound (Washington, USA) and confinement in laboratory tanks resulted in outbreaks of viral hemorrhagic septicemia (VHS), viral erythrocytic necrosis (VEN) and ichthyophoniasis; however, the timing and progression of the 3 diseases differed. The VHS epidemic occurred first, characterized by an initially low infection prevalence that increased quickly with confinement time, peaking at 93 to 98% after confinement for 6 d, then decreasing to negligible levels after 20 d. The VHS outbreak was followed by a VEN epidemic that, within 12 d of confinement, progressed from undetectable levels to 100% infection prevalence with >90% of erythrocytes demonstrating inclusions. The VEN epidemic persisted for 54 d, after which the study was terminated, and was characterized by severe blood dyscrasias including reduction of mean hematocrit from 42 to 6% and replacement of mature erythrocytes with circulating erythroblasts and ghost cells. All fish with ichthyophoniasis at capture died within the first 3 wk of confinement, probably as a result of the multiple stressors associated with capture, transport, confinement, and progression of concomitant viral diseases. The results illustrate the differences in disease ecology and possible synergistic effects of pathogens affecting marine fish and highlight the difficulty in ascribing a single causation to outbreaks of disease among populations of wild fishes. © Inter-Research 2006.
NASA Astrophysics Data System (ADS)
Bognot, J. R.; Candido, C. G.; Blanco, A. C.; Montelibano, J. R. Y.
2018-05-01
Monitoring the progress of a building's construction is critical in construction management. However, measurement of construction progress is still manual, time-consuming, and error-prone, and imposes a tedious process of analysis, leading to delays, additional costs and effort. The main goal of this research is to develop a methodology for building construction progress monitoring based on a 3D as-built model of the building from unmanned aerial system (UAS) images, a 4D as-planned model (with construction schedule integrated), and GIS analysis. Monitoring was done by capturing videos of the building with a camera-equipped UAS. Still images were extracted, filtered, bundle-adjusted, and a 3D as-built model was generated using open source photogrammetric software. The as-planned model was generated from digitized CAD drawings using GIS. The 3D as-built model was aligned with the 4D as-planned model of the building, formed from extrusion of building elements and integration of the construction's planned schedule. The construction progress is visualized via color-coding the building elements in the 3D model. The developed methodology was applied to data obtained from an actual construction site. Accuracy in detecting `built' or `not built' building elements ranges from 82 to 84%, with precision of 50 to 72%. Quantified progress in terms of the number of building elements is 21.31% (November 2016), 26.84% (January 2017) and 44.19% (March 2017). The results can be used as an input for progress monitoring of construction projects and for improving the related decision-making process.
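The reported accuracy and precision follow from counting each building element as a true/false positive or negative when the as-built model is compared against the as-planned model. A minimal sketch of these two metrics (standard definitions; the counts below are illustrative, not the study's data):

```python
def detection_metrics(tp, fp, tn, fn):
    """Accuracy and precision for 'built' vs 'not built' element detection.
    tp: built elements correctly detected as built
    fp: not-built elements wrongly detected as built
    tn: not-built elements correctly detected as not built
    fn: built elements missed"""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    return accuracy, precision
```

A precision of 50-72% alongside 82-84% accuracy suggests a notable number of false 'built' detections relative to true ones, e.g. from occlusions or incomplete photogrammetric reconstruction.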
Program Plan for 2005: NASA Scientific and Technical Information Program
NASA Technical Reports Server (NTRS)
2005-01-01
Throughout 2005 and beyond, NASA will be faced with great challenges and even greater opportunities. Following a period of reevaluation, reinvention, and transformation, we will move rapidly forward to leverage new partnerships, approaches, and technologies that will enhance the way we do business. NASA's Scientific and Technical Information (STI) Program, which functions under the auspices of the Agency's Chief Information Officer (CIO), is an integral part of NASA's future. The program supports the Agency's missions to communicate scientific knowledge and understanding and to help transfer NASA's research and development (R&D) information to the aerospace and academic communities and to the public. The STI Program helps ensure that the Agency will remain at the leading edge of R&D by quickly and efficiently capturing and sharing NASA and worldwide STI to use for problem solving, awareness, and knowledge management and transfer.
Wang, Zanyu; Jiyuan, Yin; Su, Chen; Xinyuan, Qiao
2015-01-01
Abstract Porcine epidemic diarrhea virus (PEDV), a coronavirus, can cause acute diarrhea and dehydration in pigs. In the current study, two positive monoclonal cell lines (5D7 and 3H4) specific for PEDV were established, and the immunoreactivity of the monoclonal antibodies was confirmed by immunofluorescence and dot-immunobinding assays. A method, termed antigen capture enzyme-linked immunosorbent assay (AC-ELISA), which used the monoclonal antibody 5D7 as the detecting antibody and rabbit antiserum of PEDV protein S as the capture antibody, was developed. Compared with the reverse transcription polymerase chain reaction method of detecting PEDV in fecal samples, AC-ELISA showed similar sensitivity and specificity. These results suggested that AC-ELISA would be useful for the diagnosis and epidemiological studies of PEDV. PMID:25658793
3D rocket combustor acoustics model
NASA Technical Reports Server (NTRS)
Priem, Richard J.; Breisacher, Kevin J.
1992-01-01
The theory and procedures for determining the characteristics of pressure oscillations in rocket engines with prescribed burning rate oscillations are presented. Analyses including radial and hub baffles and absorbers can be performed in one, two, or three dimensions. Pressure and velocity oscillations calculated using this procedure are presented for the SSME to show the influence of baffles and absorbers on the burning rate oscillations required to achieve neutral stability. Comparisons are made between the results obtained utilizing 1D, 2D, and 3D assumptions with regard to capturing the physical phenomena of interest and computational requirements.
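As background to the 1D case compared above, the longitudinal acoustic resonances of a closed chamber follow the textbook relation f_n = n·c/(2L). This is a generic 1-D result for orientation only, not the paper's 3-D baffle/absorber model:

```python
# Hedged sketch: longitudinal acoustic mode frequencies of a closed
# chamber of length L with sound speed c (textbook 1-D approximation).

def longitudinal_modes(c, length, n_modes=3):
    """First n longitudinal resonance frequencies in Hz: f_n = n*c/(2L)."""
    return [n * c / (2.0 * length) for n in range(1, n_modes + 1)]
```

The 2D and 3D analyses add tangential and radial modes, which is where baffles and absorbers primarily act.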
NASA Astrophysics Data System (ADS)
Starks, Michael R.
1990-09-01
A variety of low-cost devices for capturing, editing and displaying field-sequential 60 Hz stereoscopic video have recently been marketed by 3D TV Corp. and others. When properly used, they give very high quality images with most consumer and professional equipment. Our stereoscopic multiplexers for creating and editing field-sequential video in NTSC or component formats (S-VHS, Betacam, RGB), together with our Home 3D Theater system employing LCD eyeglasses, have made 3D movies and television available to a large audience.
Damiati, Samar; Peacock, Martin; Leonhardt, Stefan; Damiati, Laila; Baghdadi, Mohammed A; Becker, Holger; Kodzius, Rimantas; Schuster, Bernhard
2018-02-14
Hepatic oval cells (HOCs) are considered the progeny of the intrahepatic stem cells that are found in a small population in the liver after hepatocyte proliferation is inhibited. Due to their small number, isolation and capture of these cells constitute a challenging task for immunosensor technology. This work describes the development of a 3D-printed continuous flow system and exploits disposable screen-printed electrodes for the rapid detection of HOCs that over-express the OV6 marker on their membrane. Multiwall carbon nanotube (MWCNT) electrodes have a chitosan film that serves as a scaffold for the immobilization of oval cell marker antibodies (anti-OV6-Ab), which enhances the sensitivity of the biosensor and makes the designed sensor specific for oval cells. The developed sensor can be easily embedded into the 3D-printed flow cell to allow cells to be exposed continuously to the functionalized surface. The continuous flow is intended to increase capture of most of the target cells in the specimen. Contact angle measurements were performed to characterize the nature and quality of the modified sensor surface, and electrochemical measurements (cyclic voltammetry (CV) and square wave voltammetry (SWV)) were performed to confirm the efficiency and selectivity of the fabricated sensor to detect HOCs. The proposed method is valuable for capturing rare cells and could provide an effective tool for cancer diagnosis and detection.
Lorenzetti, Silvio; Lamparter, Thomas; Lüthy, Fabian
2017-12-06
The velocity of a barbell can provide important insights into the performance of athletes during strength training. The aim of this work was to assess the validity and reliability of four simple measurement devices compared to 3D motion capture measurements during squatting. Nine participants were assessed while performing 2 × 5 traditional squats with a weight of 70% of the 1-repetition maximum and ballistic squats with a weight of 25 kg. Simultaneously, data were recorded from three linear position transducers (T-FORCE, Tendo Power and GymAware), an accelerometer-based system (Myotest) and a 3D motion capture system (Vicon) as the gold standard. Correlations between the simple measurement devices and 3D motion capture were calculated for the mean and maximal velocity of the barbell, as well as the time to maximal velocity. The correlations were significant and very high during traditional squats (r = 0.932-0.990, p < 0.01) and significant and moderate to high during ballistic squats (r = 0.552-0.860, p < 0.01). The Myotest could only be used during the ballistic squats and was less accurate. All the linear position transducers were able to assess squat performance, particularly during traditional squats and especially in terms of mean velocity and time to maximal velocity.
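The validity figures above are Pearson correlations between each device's readings and the motion-capture reference. A minimal sketch of that computation (generic statistics; the sample values in the test are illustrative):

```python
import numpy as np

def pearson_r(device_velocities, mocap_velocities):
    """Pearson correlation between a device's barbell velocities and the
    3D motion-capture reference values."""
    return float(np.corrcoef(device_velocities, mocap_velocities)[0, 1])
```

Note that a high r only establishes a linear relationship; agreement in absolute terms would additionally require a Bland-Altman-style analysis of bias.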
eLearning and eMaking: 3D Printing Blurring the Digital and the Physical
ERIC Educational Resources Information Center
Loy, Jennifer
2014-01-01
This article considers the potential of 3D printing as an eLearning tool for design education and the role of eMaking in bringing together the virtual and the physical in the design studio. eLearning has matured from the basics of lecture capture into sophisticated, interactive learning activities for students. At the same time, laptops and…
Ogawa, Y; Wada, B; Taniguchi, K; Miyasaka, S; Imaizumi, K
2015-12-01
This study clarifies the anthropometric variations of the Japanese face by presenting large-sample population data of photo anthropometric measurements. The measurements can be used as standard reference data for the personal identification of facial images in forensic practices. To this end, three-dimensional (3D) facial images of 1126 Japanese individuals (865 male and 261 female Japanese individuals, aged 19-60 years) were acquired as samples using an already validated 3D capture system, and normative anthropometric analysis was carried out. In this anthropometric analysis, first, anthropological landmarks (22 items, i.e., entocanthion (en), alare (al), cheilion (ch), zygion (zy), gonion (go), sellion (se), gnathion (gn), labrale superius (ls), stomion (sto), labrale inferius (li)) were positioned on each 3D facial image (the direction of which had been adjusted to the Frankfort horizontal plane as the standard position for appropriate anthropometry), and anthropometric absolute measurements (19 items, i.e., bientocanthion breadth (en-en), nose breadth (al-al), mouth breadth (ch-ch), bizygomatic breadth (zy-zy), bigonial breadth (go-go), morphologic face height (se-gn), upper-lip height (ls-sto), lower-lip height (sto-li)) were exported using computer software for the measurement of a 3D digital object. Second, anthropometric indices (21 items, i.e., (se-gn)/(zy-zy), (en-en)/(al-al), (ls-li)/(ch-ch), (ls-sto)/(sto-li)) were calculated from these exported measurements. As a result, basic statistics, such as the mean values, standard deviations, and quartiles, and details of the distributions of these anthropometric results were shown. All of the results except "upper/lower lip ratio (ls-sto)/(sto-li)" were normally distributed. They were acquired as carefully as possible employing a 3D capture system and 3D digital imaging technologies. The sample of images was much larger than any Japanese sample used before for the purpose of personal identification. 
The measurements will be useful as standard reference data for forensic practices and as material data for future studies in this field. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
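The photo-anthropometric measurements and indices described above reduce to Euclidean geometry on 3D landmark coordinates. A minimal sketch is shown below; the landmark positions are hypothetical illustrative values, not data from the study:

```python
import numpy as np

# Hypothetical 3D landmark coordinates (mm) for one Frankfort-aligned face.
landmarks = {
    "en_l": np.array([-16.0, 12.0, 30.0]), "en_r": np.array([16.0, 12.0, 30.0]),
    "zy_l": np.array([-65.0, 0.0, 0.0]),   "zy_r": np.array([65.0, 0.0, 0.0]),
    "se":   np.array([0.0, 18.0, 40.0]),   "gn":   np.array([0.0, -100.0, 25.0]),
}

def dist(a, b):
    """Euclidean distance between two landmarks (an absolute measurement)."""
    return float(np.linalg.norm(landmarks[a] - landmarks[b]))

en_en = dist("en_l", "en_r")     # bientocanthion breadth (en-en)
zy_zy = dist("zy_l", "zy_r")     # bizygomatic breadth (zy-zy)
se_gn = dist("se", "gn")         # morphologic face height (se-gn)
facial_index = se_gn / zy_zy     # anthropometric index (se-gn)/(zy-zy)
print(round(en_en, 1), round(zy_zy, 1), round(facial_index, 3))
```

Population statistics (means, standard deviations, quartiles) would then be computed over such measurements for all 1126 samples.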
NASA Astrophysics Data System (ADS)
Guarnieri, A.; Fissore, F.; Masiero, A.; Di Donna, A.; Coppa, U.; Vettore, A.
2017-05-01
In the last decade, advances in close-range photogrammetry, terrestrial laser scanning (TLS) and computer vision (CV) have made it possible to collect different kinds of information about Cultural Heritage objects and to produce highly accurate 3D models. Additionally, the integration of laser scanning technology with Finite Element Analysis (FEA) has gained particular interest in recent years for the structural analysis of built heritage, since increasing computational capabilities allow large datasets to be manipulated. In this note we illustrate the approach adopted for the surveying, 3D modeling and structural analysis of Villa Revedin-Bolasco, a magnificent historical building located in the small walled town of Castelfranco Veneto, in northern Italy. In 2012 CIRGEO was charged by the University of Padova to carry out a survey of the Villa and its park as a preliminary step for subsequent restoration works. The inner geometry of the Villa was captured with two Leica Disto D3a BT hand-held laser meters, while the outer walls of the building were surveyed with Leica C10 and Faro Focus 3D 120 terrestrial laser scanners. Ancillary GNSS measurements were also collected for georeferencing the 3D laser model. A solid model was then generated from the global laser point cloud in Rhinoceros software, and a portion of it was used for simulation in a Finite Element Analysis (FEA). In the paper we discuss in detail the steps taken, the challenges addressed and the solutions adopted concerning the survey, solid modeling and FEA from laser scanning data of the historical complex of Villa Revedin-Bolasco.
Taking geoscience to the IMAX: 3D and 4D insight into geological processes using micro-CT
NASA Astrophysics Data System (ADS)
Dobson, Katherine; Dingwell, Don; Hess, Kai-Uwe; Withers, Philip; Lee, Peter; Pistone, Mattia; Fife, Julie; Atwood, Robert
2015-04-01
Geology is inherently dynamic, and full understanding of any geological system can only be achieved by considering the processes by which change occurs. Analytical limitations mean understanding has largely developed from ex situ analyses of the products of geological change, rather than of the processes themselves. Most methods essentially utilise "snapshot" sampling: from thin-section petrography to high-resolution crystal chemical stratigraphy and field volcanology, we capture an incomplete view of a spatially and temporally variable system. Even with detailed experimental work, we can usually only analyse samples before and after we perform an experiment, as routine analysis methods are destructive. Serial sectioning and quenched experiments stopped at different stages can give some insight into the third and fourth dimension, but the true scaling of the processes from the laboratory to the 4D (3D + time) geosphere is still poorly understood. X-ray micro-computed tomography (XMT) can visualise the internal structures and spatial associations within geological samples non-destructively. With image resolutions of between 200 microns and 50 nanometres, tomography has the ability to provide a detailed sample assessment in 3D, and quantification of mineral associations, porosity, grain orientations, fracture alignments and many other features. This allows better understanding of the role of the complex geometries and associations within the samples, but the challenge of capturing the processes that generate and modify these structures remains. To capture processes, recent work has focused on developing experimental capability for in situ experiments on geological materials. Data presented will showcase examples from recent experiments where high-speed synchrotron X-ray tomography has been used to acquire each 3D image in under 2 seconds.
We present a suite of studies that showcase how it is now possible to take the quantification of many geological processes into 3D and 4D. These include tracking the interactions between bubbles and crystals in a deforming magma, the dissolution of individual mineral grains from low-grade ores, and the quantification of three-phase flow in sediments and soils. Our aim is to demonstrate how XMT can provide new insight into dynamic processes in all geoscience disciplines, and to give some insight into where 4D geoscience could take us next.
Terrain Modelling for Immersive Visualization for the Mars Exploration Rovers
NASA Technical Reports Server (NTRS)
Wright, J.; Hartman, F.; Cooper, B.; Maxwell, S.; Yen, J.; Morrison, J.
2004-01-01
Immersive environments are being used to support mission operations at the Jet Propulsion Laboratory. This technology contributed to the Mars Pathfinder Mission in planning sorties for the Sojourner rover and is being used for the Mars Exploration Rover (MER) missions. The stereo imagery captured by the rovers is used to create 3D terrain models, which can be viewed from any angle, to provide a powerful and information rich immersive visualization experience. These technologies contributed heavily to both the mission success and the phenomenal level of public outreach achieved by Mars Pathfinder and MER. This paper will review the utilization of terrain modelling for immersive environments in support of MER.
Modeling the Relationship Between Porosity and Permeability During Oxidation of Ablative Materials
NASA Technical Reports Server (NTRS)
Thornton, John M.; Panerai, Francesco; Ferguson, Joseph C.; Borner, Arnaud; Mansour, Nagi N.
2017-01-01
The ablative materials used in thermal protection systems (TPS) undergo oxidation during atmospheric entry which leads to an in-depth change in both permeability and porosity. These properties have a significant effect on heat transfer in a TPS during entry. X-ray micro-tomography has provided 3D images capturing the micro-structure of TPS materials. In this study, we use micro-tomography based simulations to create high-fidelity models relating permeability to porosity during oxidation of FiberForm, the carbon fiber preform of the Phenolic Impregnated Carbon Ablator (PICA) often used as a TPS material. The goal of this study is to inform full-scale models and reduce uncertainty in TPS modeling.
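The abstract does not state the functional form the simulations produce; as a hedged illustration of the kind of porosity-permeability relation involved, a Kozeny-Carman-type model for fibrous media (a common analytical baseline, not the study's fitted model) can be sketched. The fiber radius and Kozeny constant below are assumptions, not FiberForm values:

```python
import numpy as np

def kozeny_carman(porosity, fiber_radius_m, k0=5.0):
    """Kozeny-Carman-type permeability (m^2) for a fibrous medium.

    k0 is an empirical Kozeny constant; the specific surface per unit
    solid volume for cylinders of radius r is 2/r. All values here are
    illustrative, not fitted to micro-tomography data.
    """
    phi = np.asarray(porosity, dtype=float)
    s = 2.0 / fiber_radius_m                       # specific surface, 1/m
    return phi**3 / (k0 * s**2 * (1.0 - phi)**2)

# Permeability rises steeply as oxidation consumes fibers and porosity grows.
phis = np.array([0.80, 0.85, 0.90, 0.95])
ks = kozeny_carman(phis, fiber_radius_m=5e-6)
print(ks)
```

The steep growth of k with porosity is the qualitative behavior such oxidation models aim to capture quantitatively.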
Automatic 2D-to-3D image conversion using 3D examples from the internet
NASA Astrophysics Data System (ADS)
Konrad, J.; Brown, G.; Wang, M.; Ishwar, P.; Wu, C.; Mukherjee, D.
2012-03-01
The availability of 3D hardware has so far outpaced the production of 3D content. Although to date many methods have been proposed to convert 2D images to 3D stereopairs, the most successful ones involve human operators and, therefore, are time-consuming and costly, while the fully-automatic ones have not yet achieved the same level of quality. This subpar performance is due to the fact that automatic methods usually rely on assumptions about the captured 3D scene that are often violated in practice. In this paper, we explore a radically different approach inspired by our work on saliency detection in images. Instead of relying on a deterministic scene model for the input 2D image, we propose to "learn" the model from a large dictionary of stereopairs, such as YouTube 3D. Our new approach is built upon a key observation and an assumption. The key observation is that among millions of stereopairs available on-line, there likely exist many stereopairs whose 3D content matches that of the 2D input (query). We assume that two stereopairs whose left images are photometrically similar are likely to have similar disparity fields. Our approach first finds a number of on-line stereopairs whose left image is a close photometric match to the 2D query and then extracts depth information from these stereopairs. Since disparities for the selected stereopairs differ due to differences in underlying image content, level of noise, distortions, etc., we combine them by using the median. We apply the resulting median disparity field to the 2D query to obtain the corresponding right image, while handling occlusions and newly-exposed areas in the usual way. We have applied our method in two scenarios. First, we used YouTube 3D videos in search of the most similar frames. Then, we repeated the experiments on a small, but carefully-selected, dictionary of stereopairs closely matching the query. 
This, to a degree, emulates the results one would expect from the use of an extremely large 3D repository. While far from perfect, the presented results demonstrate that on-line repositories of 3D content can be used for effective 2D-to-3D image conversion. With the continuously increasing amount of 3D data on-line and with the rapidly growing computing power in the cloud, the proposed framework seems a promising alternative to operator-assisted 2D-to-3D conversion.
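The core fusion step described above, taking the per-pixel median of candidate disparity fields and warping the 2D query to synthesize a right view, can be sketched with synthetic stand-in data (the dictionary search and photometric matching are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Disparity fields (pixels) that would be extracted from k matched
# on-line stereopairs; synthetic stand-ins for illustration.
h, w, k = 4, 8, 5
true_disp = np.tile(np.arange(w, dtype=float), (h, 1))        # depth ramp
candidates = true_disp + rng.normal(0.0, 0.5, size=(k, h, w))
candidates[3] += 4.0                   # one badly matched (outlier) field

# The per-pixel median across candidate fields suppresses the outlier.
fused = np.median(candidates, axis=0)

# Minimal depth-image-based rendering: shift each left-image pixel by
# its disparity to synthesize the right view (holes/occlusions ignored).
left = rng.random((h, w))
right = np.zeros_like(left)
cols = np.arange(w)
for y in range(h):
    x_new = np.clip(np.round(cols - fused[y]).astype(int), 0, w - 1)
    right[y, x_new] = left[y, cols]

print(np.abs(fused - true_disp).mean())
```

The median's robustness to a minority of bad matches is exactly why it is preferred over the mean when the retrieved stereopairs vary in content and noise.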
Bornik, Alexander; Urschler, Martin; Schmalstieg, Dieter; Bischof, Horst; Krauskopf, Astrid; Schwark, Thorsten; Scheurer, Eva; Yen, Kathrin
2018-06-01
Three-dimensional (3D) crime scene documentation using 3D scanners and medical imaging modalities like computed tomography (CT) and magnetic resonance imaging (MRI) are increasingly applied in forensic casework. Together with digital photography, these modalities enable comprehensive and non-invasive recording of forensically relevant information regarding injuries/pathologies inside the body and on its surface. Furthermore, it is possible to capture traces and items at crime scenes. Such digitally secured evidence has the potential to similarly increase case understanding by forensic experts and non-experts in court. Unlike photographs and 3D surface models, images from CT and MRI are not self-explanatory. Their interpretation and understanding requires radiological knowledge. Findings in tomography data must not only be revealed, but should also be jointly studied with all the 2D and 3D data available in order to clarify spatial interrelations and to optimally exploit the data at hand. This is technically challenging due to the heterogeneous data representations including volumetric data, polygonal 3D models, and images. This paper presents a novel computer-aided forensic toolbox providing tools to support the analysis, documentation, annotation, and illustration of forensic cases using heterogeneous digital data. Conjoint visualization of data from different modalities in their native form and efficient tools to visually extract and emphasize findings help experts to reveal unrecognized correlations and thereby enhance their case understanding. Moreover, the 3D case illustrations created for case analysis represent an efficient means to convey the insights gained from case analysis to forensic non-experts involved in court proceedings like jurists and laymen. 
The capability of the presented approach in the context of case analysis, its potential to speed up legal procedures and to ultimately enhance legal certainty is demonstrated by introducing a number of representative forensic cases. Copyright © 2018 The Author(s). Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Saputra, Aditya; Rahardianto, Trias; Gomez, Christopher
2017-07-01
Adequate knowledge of geological structure is essential for most studies in geoscience, mineral exploration, geo-hazard and disaster management. The geological map is still one of the most commonly used datasets for obtaining information about geological structures such as faults, joints, folds, and unconformities; however, in rural areas such as Central Java the data are still sparse. Recent progress in data-acquisition technologies and computing has increased interest in how to capture high-resolution geological data effectively and at relatively low cost. Methods such as Airborne Laser Scanning (ALS), Terrestrial Laser Scanning (TLS), and Unmanned Aerial Vehicles (UAVs) have been widely used to obtain this information; however, they require a significant investment in hardware, software, and time. Addressing some of these issues, structure from motion (SfM) is an image-based photogrammetric method that can provide solutions equivalent to laser technologies at relatively low cost and with minimal time, specialization and financial investment. Using SfM photogrammetry, it is possible to generate high-resolution 3D images of rock surfaces and outcrops in order to improve the geological understanding of Indonesia. In the present contribution, it is shown that information about faults and joints can be obtained at high resolution and in a shorter time than with conventional grid mapping and remotely sensed topographic surveying. The SfM method produces a point cloud through image matching and computation. This task can be run with open-source or commercial image processing and 3D reconstruction software. As the point cloud has 3D information as well as RGB values, it allows for further analysis such as DEM extraction and image orthorectification.
The present paper describes some examples of SfM to identify the fault in the outcrops and also highlight the future possibilities in terms of earthquake hazard assessment, based on fieldwork in the South of Yogyakarta City.
Automatic guidance of attention during real-world visual search.
Seidl-Rathkopf, Katharina N; Turk-Browne, Nicholas B; Kastner, Sabine
2015-08-01
Looking for objects in cluttered natural environments is a frequent task in everyday life. This process can be difficult, because the features, locations, and times of appearance of relevant objects often are not known in advance. Thus, a mechanism by which attention is automatically biased toward information that is potentially relevant may be helpful. We tested for such a mechanism across five experiments by engaging participants in real-world visual search and then assessing attentional capture for information that was related to the search set but was otherwise irrelevant. Isolated objects captured attention while preparing to search for objects from the same category embedded in a scene, as revealed by lower detection performance (Experiment 1A). This capture effect was driven by a central processing bottleneck rather than the withdrawal of spatial attention (Experiment 1B), occurred automatically even in a secondary task (Experiment 2A), and reflected enhancement of matching information rather than suppression of nonmatching information (Experiment 2B). Finally, attentional capture extended to objects that were semantically associated with the target category (Experiment 3). We conclude that attention is efficiently drawn towards a wide range of information that may be relevant for an upcoming real-world visual search. This mechanism may be adaptive, allowing us to find information useful for our behavioral goals in the face of uncertainty.
Aggoun, Amar; Swash, Mohammad; Grange, Philippe C.R.; Challacombe, Benjamin; Dasgupta, Prokar
2013-01-01
Background and Purpose: Existing imaging modalities of urologic pathology are limited by three-dimensional (3D) representation on a two-dimensional screen. We present 3D-holoscopic imaging as a novel method of representing Digital Imaging and Communications in Medicine (DICOM) data taken from CT and MRI to produce 3D-holographic representations of anatomy without special eyewear in natural light. 3D-holoscopic technology produces images that are true optical models. This technology is based on physical principles with duplication of light fields. The 3D content is captured in real time and viewed by multiple viewers independently of their position, without 3D eyewear. Methods: We display 3D-holoscopic anatomy relevant to minimally invasive urologic surgery without the need for 3D eyewear. Results: The results demonstrate that medical 3D-holoscopic content can be displayed on a commercially available multiview autostereoscopic display. Conclusion: The next step is validation studies comparing 3D-holoscopic imaging with conventional imaging. PMID:23216303
Abraham, Leandro; Bromberg, Facundo; Forradellas, Raymundo
2018-04-01
Muscle activation level is currently being captured using impractical and expensive devices which make their use in telemedicine settings extremely difficult. To address this issue, a prototype is presented of a non-invasive, easy-to-install system for the estimation of a discrete level of muscle activation of the biceps muscle from 3D point clouds captured with RGB-D cameras. A methodology is proposed that uses the ensemble of shape functions point cloud descriptor for the geometric characterization of 3D point clouds, together with support vector machines to learn a classifier that, based on this geometric characterization for some points of view of the biceps, provides a model for the estimation of muscle activation for all neighboring points of view. This results in a classifier that is robust to small perturbations in the point of view of the capturing device, greatly simplifying the installation process for end-users. In the discrimination of five levels of effort with values up to the maximum voluntary contraction (MVC) of the biceps muscle (3800 g), the best variant of the proposed methodology achieved mean absolute errors of about 9.21% MVC - an acceptable performance for telemedicine settings where the electric measurement of muscle activation is impractical. The results prove that the correlations between the external geometry of the arm and biceps muscle activation are strong enough to consider computer vision and supervised learning an alternative with great potential for practical applications in tele-physiotherapy. Copyright © 2018 Elsevier Ltd. All rights reserved.
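The pipeline above (geometric descriptor of a 3D point cloud, then a classifier over discrete effort levels) can be sketched with toy components. The paper uses the ensemble-of-shape-functions descriptor and support vector machines; here a pairwise-distance histogram and a nearest-centroid rule stand in for both, and the "arm" geometry is purely synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

def descriptor(cloud, bins=16):
    """Toy shape descriptor: a histogram of pairwise point distances
    (a crude stand-in for the ensemble-of-shape-functions descriptor)."""
    d = np.linalg.norm(cloud[:, None, :] - cloud[None, :, :], axis=-1)
    hist, _ = np.histogram(d[np.triu_indices(len(cloud), 1)],
                           bins=bins, range=(0.0, 2.5), density=True)
    return hist

def make_arm(level, n=400):
    """Synthetic 'arm' point cloud: a cylinder whose central bulge
    grows with the activation level (purely illustrative geometry)."""
    z = rng.uniform(-1.0, 1.0, n)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    r = 0.3 + 0.15 * level * np.exp(-4.0 * z**2)   # bulge ~ activation
    return np.column_stack([r * np.cos(theta), r * np.sin(theta), z])

levels = [0, 1, 2, 3, 4]                           # five effort levels
centroids = {lv: np.mean([descriptor(make_arm(lv)) for _ in range(8)], axis=0)
             for lv in levels}

def classify(cloud):
    """Nearest-centroid classification (SVM stand-in)."""
    d = descriptor(cloud)
    return min(levels, key=lambda lv: np.linalg.norm(centroids[lv] - d))

preds = [classify(make_arm(lv)) for lv in levels]
print(preds)
```

The point of the sketch is the paper's core premise: external arm geometry alone, summarized by a view-robust descriptor, carries enough signal to discriminate discrete activation levels.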
Geologic and Landuse Controls of the Risk for Domestic Well Pollution from Septic Tank Leachate
NASA Astrophysics Data System (ADS)
Horn, J.; Harter, T.
2006-12-01
A highly resolved three-dimensional groundwater model containing a domestic drinking-water well and its surrounding gravel pack is simulated with MODFLOW. Typical recharge rates, domestic well depths and well-seal lengths are obtained by analyzing well-log data from eastern Stanislaus County, California, an area with a significant rural and suburban population relying on domestic wells and septic tank systems. The domestic well model is run for a range of hydraulic conductivities of both the gravel pack and the aquifer. Reverse particle tracking with MODPATH 3D is carried out to determine the capture zone of the well as a function of hydraulic conductivity. The resulting capture zone is divided into two areas: particles entering the top of the well screen represent water that flows downward through the gravel pack from below the well seal and above the well screen; the source area associated with these particles forms a narrow well-ward elongation of the main capture zone, which itself represents particles flowing horizontally across the gravel pack into the well screen. The properties of the modeled capture zones are compared to existing analytical capture zone models. A clear influence of the gravel pack on capture zone shape and size is shown. Using the information on capture zone geometry, a risk assessment tool is developed to estimate the chance that a domestic well capture zone intersects at least one septic tank drainfield in a checkerboard of rural or suburban lots of a given size, with random drainfield and domestic well placement. Risk is computed as a function of aquifer and gravel pack hydraulic conductivity, and as a function of lot size. We show the risk of collocation of a septic tank leach field with a domestic well capture zone for various scenarios. This risk is generally highest for high hydraulic conductivities of the gravel pack and the aquifer, limited anisotropy, and higher septic system densities.
Under typical conditions, the risk of septic leachate reaching a domestic well is significant and may range from 5% to over 50%.
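The collocation-risk idea can be illustrated with a Monte-Carlo sketch. The capture zone is idealized here as an up-gradient rectangle attached to the well; the zone dimensions, lot size, and the rectangle approximation itself are assumptions for illustration, not the MODPATH-derived geometries of the study:

```python
import numpy as np

rng = np.random.default_rng(2)

def collocation_risk(lot_m, zone_len_m, zone_w_m, trials=20000):
    """Monte-Carlo estimate of the chance that a randomly placed septic
    drainfield point falls inside a well's capture zone, for one square
    lot of side lot_m. The capture zone is idealized as an up-gradient
    rectangle attached to the well; geometry is illustrative only."""
    well = rng.uniform(0.0, lot_m, size=(trials, 2))
    septic = rng.uniform(0.0, lot_m, size=(trials, 2))
    dx = septic[:, 0] - well[:, 0]          # along-gradient offset
    dy = septic[:, 1] - well[:, 1]          # across-gradient offset
    inside = (dx > 0) & (dx < zone_len_m) & (np.abs(dy) < zone_w_m / 2)
    return inside.mean()

# Larger capture zones (higher conductivity) and smaller lots raise the risk.
risk = collocation_risk(lot_m=100.0, zone_len_m=60.0, zone_w_m=10.0)
print(round(float(risk), 3))
```

Sweeping the zone dimensions (as proxies for hydraulic conductivity) and the lot size reproduces the qualitative trend reported above: risk grows with capture-zone extent and septic-system density.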
Ubiquitous Creation of Bas-Relief Surfaces with Depth-of-Field Effects Using Smartphones.
Sohn, Bong-Soo
2017-03-11
This paper describes a new method to automatically generate digital bas-reliefs with depth-of-field effects from general scenes. Most previous methods for bas-relief generation take input in the form of 3D models. However, obtaining 3D models of real scenes or objects is often difficult, inaccurate, and time-consuming. From this motivation, we developed a method that takes as input a set of photographs that can be quickly and ubiquitously captured by ordinary smartphone cameras. A depth map is computed from the input photographs. The value range of the depth map is compressed and used as a base map representing the overall shape of the bas-relief. However, the resulting base map contains little information on details of the scene. Thus, we construct a detail map using pixel values of the input image to express the details. The base and detail maps are blended to generate a new depth map that reflects both overall depth and scene detail information. This map is selectively blurred to simulate the depth-of-field effects. The final depth map is converted to a bas-relief surface mesh. Experimental results show that our method generates a realistic bas-relief surface of general scenes with no expensive manual processing.
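The base-map compression, detail blending, and selective depth-of-field blur described above can be sketched on synthetic arrays (the depth-from-photographs step is replaced by random data, and the blend weights and focus band are assumed values):

```python
import numpy as np

rng = np.random.default_rng(3)

h, w = 64, 64
depth = rng.random((h, w)) * 100.0           # depth map from the photographs
image = rng.random((h, w))                   # gray input image

# 1. Compress the depth range into a shallow base map (5-unit relief).
base = (depth - depth.min()) / (np.ptp(depth) + 1e-9) * 5.0

# 2. Detail map from image intensities, then blend with the base.
detail = (image - image.mean()) * 0.5
relief = 0.8 * base + 0.2 * detail

# 3. Depth-of-field: blur the relief outside an in-focus depth band,
#    using a small box filter as the blur kernel.
def box_blur(a, k=5):
    p = k // 2
    padded = np.pad(a, p, mode="edge")
    out = np.zeros_like(a)
    for i in range(k):
        for j in range(k):
            out += padded[i:i + h, j:j + w]
    return out / (k * k)

blurred = box_blur(relief)
in_focus = (depth > 30.0) & (depth < 70.0)
final = np.where(in_focus, relief, blurred)   # final bas-relief height field
print(final.shape)
```

The final height field would then be triangulated into the bas-relief surface mesh.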
Hybrid ultrasound and dual-wavelength optoacoustic biomicroscopy for functional neuroimaging
NASA Astrophysics Data System (ADS)
Rebling, Johannes; Estrada, Hector; Zwack, Michael; Sela, Gali; Gottschalk, Sven; Razansky, Daniel
2017-03-01
Many neurological disorders are linked to abnormal activation or pathological alterations of the vasculature in the affected brain region. Obtaining simultaneous morphological and physiological information on neurovasculature is very challenging due to the acoustic distortions and intense light scattering by the skull and brain. In addition, the size of cerebral vasculature in murine brains spans an extended range from just a few microns up to about a millimeter, all to be recorded in 3D and over an area of several tens of mm2. Numerous imaging techniques exist that excel at characterizing certain aspects of this complex network but are only capable of providing information on a limited spatiotemporal scale. We present a hybrid ultrasound and dual-wavelength optoacoustic microscope, capable of rapid imaging of murine neurovasculature in vivo, with high spatial resolution down to 12 μm over a large field of view exceeding 50 mm2. The dual-wavelength imaging capability allows for the visualization of functional blood parameters through an intact skull, while pulse-echo ultrasound biomicroscopy images are captured simultaneously by the same scan head. The flexible hybrid design, in combination with fast high-resolution imaging in 3D, holds promise for generating better insights into the architecture and function of the neurovascular system.
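Dual-wavelength optoacoustic imaging supports functional readouts because absorption at two wavelengths can be unmixed into oxy- and deoxyhemoglobin contributions. A minimal sketch of that spectral unmixing follows; the extinction coefficients are illustrative round numbers, not the values at the wavelengths used in the paper:

```python
import numpy as np

# Illustrative molar extinction coefficients (cm^-1 M^-1) for HbO2 and Hb
# at two wavelengths; real values depend on the wavelengths chosen.
E = np.array([[290.0, 3200.0],    # lambda1: [HbO2, Hb]
              [1200.0, 800.0]])   # lambda2: [HbO2, Hb]

def unmix_so2(mu_a):
    """Solve mu_a = E @ [C_HbO2, C_Hb] for the concentrations, then
    report oxygen saturation sO2 = C_HbO2 / (C_HbO2 + C_Hb)."""
    c = np.linalg.solve(E, mu_a)
    return c[0] / c.sum()

# Forward-simulate a voxel with 85% saturation, then recover it.
c_true = np.array([0.85, 0.15])
mu = E @ c_true
print(round(unmix_so2(mu), 2))   # → 0.85
```

In practice the per-voxel absorption estimates carry fluence and noise errors, so the inversion is less clean than this noiseless round trip.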
Capturing volcanic plumes in 3D with UAV-based photogrammetry at Yasur Volcano - Vanuatu
NASA Astrophysics Data System (ADS)
Gomez, C.; Kennedy, B.
2018-01-01
As a precise volume of volcanic ash-plume is essential to understand the dynamic of gas emission, exchanges and the eruptive dynamics, we have measured in 3D using photogrammetry a small-size volcanic plume at the summit of Yasur Volcano, Vanuatu. The objective was to collect the altitude and planform shape of the plume as well as the vertical variations of the shape and size. To reach this objective, the authors have used the Structure from Motion photogrammetric method applied to a series of photographs captured in a very short period of time around and above the plume. A total of 146 photographs at 3000 × 4000 pixel were collected as well as the geolocation, the pitch, tilt and orientation of the cameras. The results revealed a "mushroom"-like shape of the plume with a narrow ascending column topped by a turbulent mixing zone. The volume of the plume was calculated to be 13,430 m3 ± 512 m3 (with the error being the cube of the linear error from the Ground Control Points) for a maximum height above the terrain of 63 m. The included error was also kept high because of the irregular distribution of the Ground Control Points that could not be collected in dangerous areas due to the ongoing eruption. Based on this research, it is therefore worth investigating the usage of multiple cameras to capture plumes in 3D over time and the method is also a good complement to the recent development of photogrammetry from space, which can tackle larger-scale eruption plumes.
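Once a plume point cloud exists, its volume can be estimated by voxel counting. The sketch below uses a synthetic "mushroom" cloud (narrow column, wide cap) standing in for the SfM output; the dimensions echo the paper's 63 m height but the geometry, point count and voxel size are assumptions, and the paper's 13,430 m3 result is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in for an SfM plume point cloud: a "mushroom" of
# points (narrow column below 40 m, wide cap above), in metres.
n = 20000
z = rng.uniform(0.0, 63.0, n)
r_max = np.where(z < 40.0, 5.0, 20.0)          # column below, cap above
r = r_max * np.sqrt(rng.random(n))             # uniform over each disc
th = rng.uniform(0.0, 2.0 * np.pi, n)
pts = np.column_stack([r * np.cos(th), r * np.sin(th), z])

def voxel_volume(points, voxel=2.0):
    """Estimate the enclosed volume by counting occupied voxels."""
    idx = np.floor(points / voxel).astype(int)
    return len(np.unique(idx, axis=0)) * voxel**3

vol = voxel_volume(pts)
analytic = np.pi * 5.0**2 * 40.0 + np.pi * 20.0**2 * 23.0   # true cylinder volumes
print(int(vol), int(analytic))
```

The voxel size trades off boundary overcount against undercount where the cloud is sparse, which is one reason plume-volume errors depend strongly on point density.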
Wang, Bo-Bo; Liu, Wei; Chen, Ming-Yue; Li, Xuan; Han, Yuan; Xu, Qiang; Sun, Long-Jie; Xie, De-Qiong; Cai, Kui-Zheng; Liu, Yi-Zhong; Liu, Jun-Lin; Yi, Lin-Xin; Wang, Hui; Zhao, Ming-Wang; Li, Xiao-Shan; Wu, Jia-Yan; Yang, Jing; Wang, Yue-Ying
2015-08-01
The nematophagous fungus Duddingtonia flagrans has been investigated in other countries as a biological agent for the control of gastrointestinal nematodes infecting domestic animals. However, D. flagrans has not been detected in China. In this study, 1,135 samples were examined from 2012 to 2014; 4 D. flagrans isolates (SDH 035, SDH 091, SFH 089, SFG 170) were obtained from the feces of domestic animals and dung compost. The 4 isolates were then characterized morphologically. The SDH 035 strain was characterized by sequencing the ITS1-5.8S rDNA-ITS2 region. A BLAST search showed that the SDH 035 strain (GenBank KP257593) was 100% identical to Arthrobotrys flagrans (AF106520) and was identified as D. flagrans. The morphological plasticity of the isolated strain and its interaction with the nematode targets were observed by subjecting infected trichostrongylide L3 to scanning electron microscopy. Hyphal ramifications were observed 6 hr after trichostrongylide L3 were added, and L3 were captured at 8 hr. Scanning electron micrographs were obtained at 0, 6, 12, 18, 24, 30, 36, 42, and 48 hr, where 0 is the time when trichostrongylide L3 were first captured by the fungus. The details of the capture process by the fungus are also described. Chlamydospores were observed in the body of L3 in the late stage of digestion.
Non-contact rapid optical coherence elastography by high-speed 4D imaging of elastic waves
NASA Astrophysics Data System (ADS)
Song, Shaozhen; Yoon, Soon Joon; Ambroziński, Łukasz; Pelivanov, Ivan; Li, David; Gao, Liang; Shen, Tueng T.; O'Donnell, Matthew; Wang, Ruikang K.
2017-02-01
Shear wave OCE (SW-OCE) uses an OCT system to track propagating mechanical waves, providing the information needed to map the elasticity of the target sample. In this study we demonstrate high speed, 4D imaging to capture transient mechanical wave propagation. Using a high-speed Fourier domain mode-locked (FDML) swept-source OCT (SS-OCT) system operating at 1.62 MHz A-line rate, the equivalent volume rate of mechanical wave imaging is 16 kvps (kilo-volumes per second), and total imaging time for a 6 x 6 x 3 mm volume is only 0.32 s. With a displacement sensitivity of 10 nanometers, the proposed 4D imaging technique provides sufficient temporal and spatial resolution for real-time optical coherence elastography (OCE). Combined with a new air-coupled, high-frequency focused ultrasound stimulator requiring no contact or coupling media, this near real-time system can provide quantitative information on localized viscoelastic properties. SW-OCE measurements are demonstrated on tissue-mimicking phantoms and porcine cornea under various intra-ocular pressures. In addition, elasticity anisotropy in the cornea is observed. Images of the mechanical wave group velocity, which correlates with tissue elasticity, show velocities ranging from 4-20 m/s depending on pressure and propagation direction. These initial results strongly suggest that 4D imaging for real-time OCE may enable high-resolution quantitative mapping of tissue biomechanical properties in clinical applications.
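The group-velocity maps mentioned above come down to fitting wavefront arrival position against time; elasticity then follows from the velocity (for a bulk shear wave, shear modulus μ = ρv², though surface-wave corrections apply in real cornea). A toy sketch with hypothetical arrival times:

```python
import numpy as np

# Hypothetical arrival times of a wavefront at points spaced along the
# surface; the group velocity is the slope of distance vs. time.
x = np.arange(5) * 1.0e-3                          # positions, metres
rng = np.random.default_rng(5)
t = x / 6.0 + rng.normal(0.0, 2.0e-6, x.size)      # ~6 m/s wave + timing jitter

v_group, _ = np.polyfit(t, x, 1)                   # fitted slope, m/s
mu = 1000.0 * v_group**2    # shear modulus (Pa), assuming density 1000 kg/m^3
print(round(v_group, 1))
```

The density value and the bulk-wave formula are assumptions of the sketch; in tissue, dispersion and boundary effects make the velocity-to-modulus conversion more involved.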
NASA Astrophysics Data System (ADS)
Chen, Chao; Gao, Nan; Wang, Xiangjun; Zhang, Zonghua
2018-03-01
Phase-based fringe projection methods have been commonly used for three-dimensional (3D) measurements. However, image saturation results in incorrect intensities in captured fringe pattern images, leading to phase and measurement errors. Existing solutions are complex. This paper proposes an adaptive projection intensity adjustment method to avoid image saturation and maintain good fringe modulation in measuring objects with a high range of surface reflectivities. The adapted fringe patterns are created using only one prior step of fringe-pattern projection and image capture. First, a set of phase-shifted fringe patterns with a maximum projection intensity value of 255 and a uniform gray-level pattern are projected onto the surface of an object. The patterns are reflected from and deformed by the object surface and captured by a digital camera. The best projection intensities corresponding to each saturated-pixel cluster are determined by fitting a polynomial function that transforms captured intensities into projected intensities. Subsequently, the adapted fringe patterns are constructed using the best projection intensities at each projector pixel coordinate. Finally, the adapted fringe patterns are projected for phase recovery and 3D shape calculation. The experimental results demonstrate that the proposed method achieves high measurement accuracy even for objects with a high range of surface reflectivities.
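The captured-to-projected intensity transform at the heart of the method can be sketched with a simulated camera. The linear response, reflectivity and target level below are assumptions for illustration (the paper fits a polynomial per saturated-pixel cluster from real captures):

```python
import numpy as np

# Simulated camera response: captured intensity saturates at 255.
def camera(projected, reflectivity=1.8):
    return np.clip(reflectivity * projected + 10.0, 0.0, 255.0)

proj = np.arange(0, 256, 5, dtype=float)   # calibration projection levels
capt = camera(proj)

# Fit projected-vs-captured on the unsaturated part only, as a stand-in
# for the paper's polynomial transform (degree 1 suffices for this
# simulated linear response).
mask = capt < 250.0
coeff = np.polyfit(capt[mask], proj[mask], deg=1)

# Best projection intensity that maps a saturated pixel cluster to a
# target captured level just under saturation (e.g. 240).
best_proj = np.polyval(coeff, 240.0)
print(round(float(best_proj), 1))
```

Projecting the adapted (lower) intensity at those pixels keeps the fringes modulated without clipping, which is what preserves the phase.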
Bergueiro, J; Igarzabal, M; Sandin, J C Suarez; Somacal, H R; Vento, V Thatar; Huck, H; Valda, A A; Repetto, M; Kreiner, A J
2011-12-01
Several ion sources have been developed, and an ion source test stand has been assembled, for the first stage of a Tandem-Electrostatic-Quadrupole facility for Accelerator-Based Boron Neutron Capture Therapy. The first source to be designed, fabricated and tested is a dual-chamber, filament-driven, magnetically compressed volume-plasma proton ion source. A 4 mA beam has been accelerated and transported into a suppressed Faraday cup. Extensive simulations of the sources have been performed using both 2D and 3D self-consistent codes. Copyright © 2011 Elsevier Ltd. All rights reserved.
The design of red-blue 3D video fusion system based on DM642
NASA Astrophysics Data System (ADS)
Fu, Rongguo; Luo, Hao; Lv, Jin; Feng, Shu; Wei, Yifang; Zhang, Hao
2016-10-01
To address the uncertainty in traditional 3D video capture, including the camera focal lengths and the distance and angle parameters between the two cameras, a red-blue 3D video fusion system based on the DM642 hardware processing platform is designed with parallel optical axes. To counter the brightness reduction typical of traditional 3D video, a brightness enhancement algorithm based on human visual characteristics is proposed, along with a luminance-component processing method based on the YCbCr color space. The BIOS real-time operating system is used to improve real-time performance. The video processing circuit, built around the DM642, enhances image brightness, converts the video signals from YCbCr to RGB, and synchronously extracts the R component from one camera and the G and B components from the other, finally outputting fused 3D images. Real-time adjustments, such as translation and scaling of the two color components, are realized through serial communication between the VC software and BIOS. By adding the red and blue components, the system reduces the loss of chrominance components and keeps the picture color saturation above 95% of the original. The optimized enhancement algorithm reduces the amount of data in the video fusion processing, shortening fusion time and improving the viewing experience. Experimental results show that the system can capture images at close range and output red-blue 3D video that gives a good experience to audiences wearing red-blue glasses.
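The red-blue fusion step (R channel from one camera, G and B from the other) can be sketched with NumPy on two aligned RGB frames; the paper's DM642 pipeline performs YCbCr-to-RGB conversion and brightness enhancement first, which this sketch omits, and the frame data here is synthetic.

```python
import numpy as np

def redblue_fuse(left_rgb, right_rgb):
    """Anaglyph fusion of two aligned H x W x 3 uint8 frames:
    R channel from the left camera, G and B from the right."""
    fused = right_rgb.copy()
    fused[..., 0] = left_rgb[..., 0]  # replace R with the left view's R
    return fused

# Synthetic 4x4 frames standing in for the two camera outputs.
left = np.zeros((4, 4, 3), dtype=np.uint8)
left[..., 0] = 200                      # left frame carries the red channel
right = np.zeros((4, 4, 3), dtype=np.uint8)
right[..., 1:] = (60, 150)              # right frame carries G and B

anaglyph = redblue_fuse(left, right)
```

Keeping both G and B from one view (rather than discarding chrominance entirely) is what preserves most of the original color saturation in the fused output.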
3D-Printed Supercapacitor-Powered Electrochemiluminescent Protein Immunoarray
Kadimisetty, Karteek; Mosa, Islam M.; Malla, Spundana; Satterwhite-Warden, Jennifer E.; Kuhns, Tyler; Faria, Ronaldo C.; Lee, Norman H.; Rusling, James F.
2015-01-01
Herein we report a low cost, sensitive, supercapacitor-powered electrochemiluminescent (ECL) protein immunoarray fabricated by an inexpensive 3-dimensional (3D) printer. The immunosensor detects three cancer biomarker proteins in serum within 35 min. The 3D-printed device employs hand screen printed carbon sensors with gravity flow for sample/reagent delivery and washing. Prostate cancer biomarker proteins, prostate specific antigen (PSA), prostate specific membrane antigen (PSMA) and platelet factor-4 (PF-4) in serum were captured on the antibody-coated carbon sensors followed by delivery of detection-antibody-coated Ru(bpy)₃²⁺ (RuBPY)-doped silica nanoparticles in a sandwich immunoassay. ECL light was initiated from RuBPY in the silica nanoparticles by electrochemical oxidation with tripropylamine (TPrA) co-reactant using supercapacitor power and ECL was captured with a CCD camera. The supercapacitor was rapidly photo-recharged between assays using an inexpensive solar cell. Detection limits were 300–500 fg mL⁻¹ for the 3 proteins in undiluted calf serum. Assays of 6 prostate cancer patient serum samples gave good correlation with conventional single protein ELISAs. This technology could provide sensitive onsite cancer diagnostic tests in resource-limited settings with the need for only moderate-level training. PMID:26406460
Probabilistic Feasibility of the Reconstruction Process of Russian-Orthodox Churches
NASA Astrophysics Data System (ADS)
Chizhova, M.; Brunn, A.; Stilla, U.
2016-06-01
Cultural heritage is important for the identity of subsequent generations and has to be preserved in a suitable manner. Over time, much information about former cultural constructions has been lost: some objects were heavily damaged by natural erosion or human activity, or were destroyed altogether. It is important to capture the still-available parts of former buildings, mostly ruins; this data can be the basis for a virtual reconstruction. Laser scanning offers, in principle, the possibility to record building surfaces extensively in their current state. In this paper we assume a priori given 3D laser-scanner data, a 3D point cloud of the partly destroyed church. There are many well-known algorithms describing different methods for the extraction and detection of geometric primitives, which are recognized separately in 3D point clouds. In our work we put them into a common probabilistic framework, which guides the complete reconstruction process of complex buildings, in our case Russian-Orthodox churches. Churches are modeled with their functional volumetric components, enriched with a priori known probabilities deduced from a database of Russian-Orthodox churches. Each set of components represents a complete church. The power of the new method is shown on a simulated dataset of 100 Russian-Orthodox churches.
NASA Astrophysics Data System (ADS)
Braidot, Ariel; Favaretto, Guillermo; Frisoli, Melisa; Gemignani, Diego; Gumpel, Gustavo; Massuh, Roberto; Rayan, Josefina; Turin, Matías
2016-04-01
Subjects who practice sports, either as professionals or amateurs, have a high incidence of knee injuries. There are few publications that study lateral-structure knee injuries, including meniscal tears or chondral injury without anterior cruciate ligament rupture, from a kinematic point of view. The use of standard motion capture systems for measuring outdoor sports is hard to implement for many operational reasons. The recently released Microsoft Kinect™ is a sensor developed to track movements for gaming purposes that has seen increasing use in clinical applications. Because this device is a simple and portable tool, it allows the acquisition of data on common sport movements in the field. The development and testing of a set of protocols for 3D kinematic measurement using the Microsoft Kinect™ system is presented in this paper. The 3D kinematic evaluation algorithms were developed from available information and with the use of Microsoft's Software Development Kit 1.8 (SDK). An algorithm for calculating the lower-limb joint angles was also implemented. Thirty healthy adult volunteers were measured using five different recording protocols for sport-characteristic gestures that involve a high knee-injury risk in athletes.
Miller, Brian; Dawson, Stephen; Vennell, Ross
2013-10-01
Observations are presented of the vocal behavior and three-dimensional (3D) underwater movements of sperm whales measured with a passive acoustic array off the coast of Kaikoura, New Zealand. Visual observations and vocal behaviors of whales were used to divide dive tracks into different phases, and depths and movements of whales are reported for each of these phases. Diving depths and movement information from 75 3D tracks of whales in Kaikoura are compared to one- and two-dimensional tracks of whales studied in other oceans. While diving, whales in Kaikoura had a mean swimming speed of 1.57 m/s and, on average, dived to a depth of 427 m (SD = 117 m), spending most of their time at depths between 300 and 600 m. Creak vocalizations, assumed to be the prey capture phase of echolocation, occurred throughout the water column from sea surface to sea floor, but most occurred at depths of 400-550 m. Three-dimensional tracking revealed several different "foraging" strategies, including active chasing of prey, lining up slow-moving or unsuspecting prey, and foraging on demersal or benthic prey. These movements provide the first 3D descriptions of the underwater behavior of whales at Kaikoura.
Madan, Jason; Khan, Kamran A; Petrou, Stavros; Lamb, Sarah E
2017-05-01
Mapping algorithms are increasingly being used to predict health-utility values based on responses or scores from non-preference-based measures, thereby informing economic evaluations. We explored whether predicted EuroQol 5-dimension 3-level instrument (EQ-5D-3L) health-utility gains from mapping algorithms might differ if estimated using differenced rather than raw scores, using the Roland-Morris Disability Questionnaire (RMQ), a widely used health-status measure for low back pain, as an example. We estimated algorithms mapping within-person changes in RMQ scores to changes in EQ-5D-3L health utilities using data from two clinical trials with repeated observations. We also used logistic regression models to estimate response mapping algorithms from these data to predict within-person changes in responses to each EQ-5D-3L dimension from changes in RMQ scores. Predicted health-utility gains from these mappings were compared with predictions based on raw RMQ data. Using differenced scores reduced the predicted health-utility gain from a unit decrease in RMQ score from 0.037 (standard error [SE] 0.001) to 0.020 (SE 0.002). Analysis of response mapping data suggests that the use of differenced data reduces the predicted impact of reducing RMQ scores across EQ-5D-3L dimensions and that patients can experience health-utility gains on the EQ-5D-3L 'usual activity' dimension independent of improvements captured by the RMQ. Mappings based on raw RMQ data overestimate the EQ-5D-3L health-utility gains from interventions that reduce RMQ scores. Where possible, mapping algorithms should reflect within-person changes in health outcome and be estimated from datasets containing repeated observations if they are to be used to estimate incremental health-utility gains.
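The abstract's central point, that regressing on raw scores inflates the estimated utility gain relative to within-person differenced scores, can be illustrated with a small simulated OLS sketch; the sample size, the between-person confounder, and the 0.02 within-person slope are synthetic assumptions, not the trial data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic repeated observations: baseline and follow-up RMQ scores.
rmq_t0 = rng.integers(5, 20, n).astype(float)
rmq_t1 = rmq_t0 - rng.integers(0, 5, n)          # scores improve (fall)

# Utilities with a within-person slope of -0.02 per RMQ point, plus a
# between-person component correlated with baseline severity that
# inflates the cross-sectional (raw-score) slope.
person = -0.01 * rmq_t0 + rng.normal(0.0, 0.05, n)
eq5d_t0 = 0.8 - 0.02 * rmq_t0 + person + rng.normal(0.0, 0.02, n)
eq5d_t1 = 0.8 - 0.02 * rmq_t1 + person + rng.normal(0.0, 0.02, n)

def slope(x, y):
    """OLS slope of y on x (first coefficient of a degree-1 fit)."""
    return float(np.polyfit(x, y, 1)[0])

raw_slope = slope(np.r_[rmq_t0, rmq_t1], np.r_[eq5d_t0, eq5d_t1])   # pooled raw scores
diff_slope = slope(rmq_t1 - rmq_t0, eq5d_t1 - eq5d_t0)              # within-person changes
```

Here the raw-score regression absorbs the between-person component and yields a steeper slope than the differenced regression, mirroring the 0.037 vs. 0.020 pattern reported above.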
Development of an inexpensive optical method for studies of dental erosion process in vitro
NASA Astrophysics Data System (ADS)
Nasution, A. M. T.; Noerjanto, B.; Triwanto, L.
2008-09-01
Teeth play important roles in the digestion of food, in supporting the facial structure, and in the articulation of speech. Abnormalities in tooth structure can be initiated by an erosion process, caused by diet or beverage consumption, that leads to destruction affecting tooth function. Research into the erosion processes that lead to dental abnormalities is therefore important for care and prevention. Accurate measurement methods capable of quantifying the degree of dental destruction are needed as research tools. In this work, an inexpensive optical method for studying the dental erosion process is developed. It is based on extracting parameters from 3D dental visual information. The 3D image is obtained by reconstruction from multiple lateral 2D projections captured from many angles. Using a simple stepper motor and a pocket digital camera, a sequence of multi-projection 2D images of a premolar tooth is obtained. These images are then reconstructed to produce a 3D image, which is useful for quantifying the relevant dental erosion parameters. Quantification is based on the shrinkage of dental volume, as well as changes in surface properties, due to the erosion process. The quantification results are correlated with measurements of the calcium released from the tooth, obtained by atomic absorption spectrometry. The proposed method would be useful as a visualization tool in engineering, dentistry, and medical research, as well as for educational purposes.
γ rays from muon capture in I, Au, and Bi
NASA Astrophysics Data System (ADS)
Measday, David F.; Stocki, Trevor J.; Tam, Heywood
2007-04-01
A significant improvement has been made in the identification of γ rays from muon capture in I, Au, and Bi, all monoisotopic elements. The (μ-,νn) reaction was clearly observed in all nuclei, but the levels excited do not correlate well with the spectroscopic factors from the (d,3He) reaction. Some (μ-,ν2n), (μ-,ν3n), (μ-,ν4n), (μ-,ν5n) and other reactions have been observed at a lower yield. The muonic x-ray cascades have also been studied in detail.
Wedge Experiment Modeling and Simulation for Reactive Flow Model Calibration
NASA Astrophysics Data System (ADS)
Maestas, Joseph T.; Dorgan, Robert J.; Sutherland, Gerrit T.
2017-06-01
Wedge experiments are a typical method for generating pop-plot data (run-to-detonation distance versus input shock pressure), which is used to assess an explosive material's initiation behavior. Such data can be utilized to calibrate reactive flow models by running hydrocode simulations and successively tweaking model parameters until a match with experiment is achieved. These simulations are typically performed in 1D and use a flyer impact to achieve the prescribed shock-loading pressure. In this effort, a wedge experiment performed at the Army Research Lab (ARL) was modeled using CTH (SNL hydrocode) in 1D, 2D, and 3D space in order to determine whether there is any justification for using simplified models. A simulation was also performed using the BCAT code (CTH companion tool), which assumes plate-impact shock loading. Results from the simulations were compared to experimental data and show that the shock imparted into an explosive specimen is accurately captured with 2D and 3D simulations, but changes significantly in 1D space and with the BCAT tool. The difference in shock profile is shown to affect numerical predictions only for large run distances. This is attributed to incorrectly capturing the energy fluence for detonation waves versus flat shock loading. Portions of this work were funded through the Joint Insensitive Munitions Technology Program.
Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes
Yebes, J. Javier; Bergasa, Luis M.; García-Garrido, Miguel Ángel
2015-01-01
Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM. PMID:25903553
Castelblanco-Martínez, Delma Nataly; Morales-Vela, Benjamin; Slone, Daniel H.; Padilla-Saldívar, Janneth Adriana; Reid, James P.; Hernández-Arana, Héctor Abuid
2015-01-01
Diving or respiratory behavior in aquatic mammals can be used as an indicator of physiological activity and, consequently, to infer behavioral patterns. Five Antillean manatees, Trichechus manatus manatus, were captured in Chetumal Bay and tagged with GPS tracking devices. The radios were equipped with a micropower saltwater sensor (SWS), which records the times when the tag assembly was submerged. The information was analyzed to establish individual fine-scale behaviors. For each fix, we established the following variables: distance (D), sampling interval (T), movement rate (D/T), number of dives (N), and total diving duration (TDD). We used logic criteria and simple scatterplots to distinguish between behavioral categories: ‘Travelling’ (D/T ≥ 3 km/h), ‘Surface’ (↓TDD, ↓N), ‘Bottom feeding’ (↑TDD, ↑N) and ‘Bottom resting’ (↑TDD, ↓N). Habitat categories were qualitatively assigned: Lagoon, Channels, Caye shore, City shore, Channel edge, and Open areas. The instrumented individuals displayed a daily rhythm of bottom activities, with surfacing activities more frequent during the night and early in the morning. More investigation into these cycles and other individual fine-scale behaviors related to their proximity to concentrations of human activity would be informative.
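The logic criteria for the behavioral categories can be written as a small per-fix classifier; only the 3 km/h travelling rule is given explicitly in the abstract, so the TDD and dive-count thresholds below are hypothetical placeholders for the paper's scatterplot-derived cut-offs.

```python
def classify_fix(d_km, t_h, tdd_min, n_dives, tdd_hi=30.0, dives_hi=10):
    """Assign a behavioral category to one GPS fix from distance D (km),
    sampling interval T (h), total diving duration TDD (min), and number
    of dives N, mirroring the abstract's logic criteria."""
    if t_h > 0 and d_km / t_h >= 3.0:
        return "Travelling"                  # D/T >= 3 km/h
    if tdd_min >= tdd_hi:                    # high TDD: bottom activity
        return "Bottom feeding" if n_dives >= dives_hi else "Bottom resting"
    return "Surface"                         # low TDD, few dives

label = classify_fix(d_km=1.6, t_h=0.5, tdd_min=5.0, n_dives=2)  # 3.2 km/h
```

The high-TDD branch splits on dive count because frequent dives with long bottom time indicate feeding, whereas long bottom time with few dives indicates resting.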
Ames Lab 101: 3D Metals Printer
Ott, Ryan
2018-01-16
To meet one of the biggest energy challenges of the 21st century - finding alternatives to rare-earth elements and other critical materials - scientists will need new and advanced tools. The Critical Materials Institute at the U.S. Department of Energy's Ames Laboratory has a new one: a 3D printer for metals research. 3D printing technology, which has captured the imagination of both industry and consumers, enables ideas to move quickly from the initial design phase to final form using materials including polymers, ceramics, paper and even food. But the Critical Materials Institute (CMI) will apply the advantages of the 3D printing process in a unique way: for materials discovery.
Full-color large-scaled computer-generated holograms for physical and non-physical objects
NASA Astrophysics Data System (ADS)
Matsushima, Kyoji; Tsuchiyama, Yasuhiro; Sonobe, Noriaki; Masuji, Shoya; Yamaguchi, Masahiro; Sakamoto, Yuji
2017-05-01
Several full-color high-definition CGHs are created for reconstructing 3D scenes that include real physical objects. The fields of the physical objects are generated or captured by employing three techniques: 3D scanning, synthetic-aperture digital holography, and multi-viewpoint images. Full-color reconstruction of the high-definition CGHs is realized using RGB color filters. Optical reconstructions are presented to verify these techniques.
Cannon, T. M.; Shah, A. T.; Skala, M. C.
2017-01-01
Two-photon microscopy of cellular autofluorescence intensity and lifetime (optical metabolic imaging, or OMI) is a promising tool for preclinical drug development. OMI, which exploits the endogenous fluorescence from the metabolic coenzymes NAD(P)H and FAD, is sensitive to changes in cell metabolism produced by drug treatment. Previous studies have shown that drug response, genetic expression, cell-cell communication, and cell signaling in 3D culture match those of the original in vivo tumor, but not those of 2D culture. The goal of this study is to use OMI to quantify dynamic cell-level metabolic differences in drug response in 2D cell lines vs. 3D organoids generated from xenograft tumors of the same cell origin. BT474 cells and Herceptin-resistant BT474 (HR6) cells were tested. Cells were treated with vehicle control, Herceptin, XL147 (PI3K inhibitor), and the combination. The OMI index was used to quantify response, and is a linear combination of the redox ratio (intensity of NAD(P)H divided by FAD), mean NADH lifetime, and mean FAD lifetime. The results confirm that the OMI index resolves significant differences (p<0.05) in drug response for 2D vs. 3D cultures, specifically for BT474 cells 24 hours after Herceptin treatment, for HR6 cells 24 and 72 hours after combination treatment, and for HR6 cells 72 hours after XL147 treatment. Cell-level analysis of the OMI index also reveals differences in the number of cell sub-populations in 2D vs. 3D culture at 24, 48, and 72 hours post-treatment in control and treated groups. Finally, significant increases (p<0.05) in the mean lifetime of NADH and FAD were measured in 2D vs. 3D for both cell lines at 72 hours post-treatment in control and all treatment groups. These whole-population differences in the mean NADH and FAD lifetimes are supported by differences in the number of cell sub-populations in 2D vs. 3D. Overall, these studies confirm that OMI is sensitive to differences in drug response in 2D vs. 3D, and provides further information on dynamic changes in the relative abundance of metabolic cell sub-populations that contribute to this difference. PMID:28663873
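A minimal sketch of an OMI-style index as described above: a linear combination of the redox ratio (NAD(P)H intensity divided by FAD intensity), mean NADH lifetime, and mean FAD lifetime, computed per cell. The z-scoring of components and the equal weights are assumptions, since the abstract does not give the coefficients, and the per-cell measurements are synthetic.

```python
import numpy as np

def omi_index(nadh_I, fad_I, tau_nadh, tau_fad, weights=(1.0, 1.0, 1.0)):
    """Per-cell OMI-style index: weighted sum of z-scored components
    (redox ratio = NAD(P)H / FAD intensity, mean NADH lifetime, mean
    FAD lifetime)."""
    comps = np.column_stack([np.asarray(nadh_I) / np.asarray(fad_I),
                             tau_nadh, tau_fad])
    z = (comps - comps.mean(axis=0)) / comps.std(axis=0)  # z-score per component
    return z @ np.asarray(weights)

# Synthetic per-cell measurements for 50 cells.
rng = np.random.default_rng(1)
scores = omi_index(rng.uniform(1.0, 2.0, 50), rng.uniform(1.0, 2.0, 50),
                   rng.uniform(0.5, 1.5, 50), rng.uniform(0.2, 0.6, 50))
```

The resulting one score per cell is what enables the sub-population analysis described in the abstract, e.g. by fitting a mixture model to the score distribution.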
Tang, Jinghua; McGrath, Michael; Laszczak, Piotr; Jiang, Liudi; Bader, Dan L; Moser, David; Zahedi, Saeed
2015-12-01
Design and fitting of artificial limbs to lower limb amputees are largely based on the subjective judgement of the prosthetist. Understanding the science of three-dimensional (3D) dynamic coupling at the residuum/socket interface could potentially aid the design and fitting of the socket. A new method has been developed to characterise the 3D dynamic coupling at the residuum/socket interface using 3D motion capture based on a single case study of a trans-femoral amputee. The new model incorporated a Virtual Residuum Segment (VRS) and a Socket Segment (SS) which combined to form the residuum/socket interface. Angular and axial couplings between the two segments were subsequently determined. Results indicated a non-rigid angular coupling in excess of 10° in the quasi-sagittal plane and an axial coupling of between 21 and 35 mm. The corresponding angular couplings of less than 4° and 2° were estimated in the quasi-coronal and quasi-transverse plane, respectively. We propose that the combined experimental and analytical approach adopted in this case study could aid the iterative socket fitting process and could potentially lead to a new socket design. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
Emergency Response Virtual Environment for Safe Schools
NASA Technical Reports Server (NTRS)
Wasfy, Ayman; Walker, Teresa
2008-01-01
An intelligent emergency response virtual environment (ERVE) that provides emergency first responders, response planners, and managers with situational awareness, as well as training and support for safe schools, is presented. ERVE incorporates an intelligent-agent facility for guiding and assisting the user in the context of emergency response operations. Response information folders capture key information about the school. The system enables interactive 3D visualization of schools and academic campuses, including the terrain and the buildings' exteriors and interiors, in an easy-to-use Web-based interface. ERVE incorporates live camera and sensor feeds and can be integrated with other simulations, such as chemical plume simulation. The system is integrated with a Geographical Information System (GIS) to enable situational awareness of emergency events and assessment of their effect on schools in a geographic area. ERVE can also be integrated with emergency text-messaging notification systems. Using ERVE, it is now possible to address safe schools' emergency management needs with a scalable, seamlessly integrated, fully interactive, intelligent, and visually compelling solution.
Optical performance analysis of plenoptic camera systems
NASA Astrophysics Data System (ADS)
Langguth, Christin; Oberdörster, Alexander; Brückner, Andreas; Wippermann, Frank; Bräuer, Andreas
2014-09-01
Adding an array of microlenses in front of the sensor transforms the capabilities of a conventional camera to capture both spatial and angular information within a single shot. This plenoptic camera is capable of obtaining depth information and providing it for a multitude of applications, e.g. artificial re-focusing of photographs. Without the need of active illumination it represents a compact and fast optical 3D acquisition technique with reduced effort in system alignment. Since the extent of the aperture limits the range of detected angles, the observed parallax is reduced compared to common stereo imaging systems, which results in a decreased depth resolution. Besides, the gain of angular information implies a degraded spatial resolution. This trade-off requires a careful choice of the optical system parameters. We present a comprehensive assessment of possible degrees of freedom in the design of plenoptic systems. Utilizing a custom-built simulation tool, the optical performance is quantified with respect to particular starting conditions. Furthermore, a plenoptic camera prototype is demonstrated in order to verify the predicted optical characteristics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weekes, B.; Ewins, D.; Acciavatti, F.
2014-05-27
To date, differing implementations of continuous scan laser Doppler vibrometry have been demonstrated by various academic institutions, but since the scan paths were defined using step or sine functions from function generators, the paths were typically limited to 1D line scans or 2D areas such as raster paths or Lissajous trajectories. The excitation was previously often limited to a single frequency due to the specific signal processing performed to convert the scan data into an ODS. In this paper, a configuration of continuous-scan laser Doppler vibrometry is demonstrated which permits scanning of arbitrary areas, with the benefit of allowing multi-frequency/broadband excitation. Various means of generating scan paths to inspect arbitrary areas are discussed and demonstrated. Further, full 3D vibration capture is demonstrated by the addition of a range-finding facility to the described configuration, and iteratively relocating a single scanning laser head. Here, the range-finding facility was provided by a Microsoft Kinect, an inexpensive piece of consumer electronics.
Numerical Study of Richtmyer-Meshkov Instability with Re-Shock
NASA Astrophysics Data System (ADS)
Wong, Man Long; Livescu, Daniel; Lele, Sanjiva
2017-11-01
The interaction of a Mach 1.45 shock wave with a perturbed planar interface between two gases with an Atwood number 0.68 is studied through 2D and 3D shock-capturing adaptive mesh refinement (AMR) simulations with physical diffusive and viscous terms. The simulations have initial conditions similar to those in the actual experiment conducted by Poggi et al. [1998]. The development of the flow and evolution of mixing due to the interactions with the first shock and the re-shock are studied together with the sensitivity of various global parameters to the properties of the initial perturbation. Grid resolutions needed for fully resolved and 2D and 3D simulations are also evaluated. Simulations are conducted with an in-house AMR solver HAMeRS built on the SAMRAI library. The code utilizes the high-order localized dissipation weighted compact nonlinear scheme [Wong and Lele, 2017] for shock-capturing and different sensors including the wavelet sensor [Wong and Lele, 2016] to identify regions for grid refinement. First and third authors acknowledge the project sponsor LANL.
Guess, Trent M; Razu, Swithin; Jahandar, Amirhossein; Skubic, Marjorie; Huo, Zhiyu
2017-04-01
The Microsoft Kinect is becoming a widely used tool for inexpensive, portable measurement of human motion, with the potential to support clinical assessments of performance and function. In this study, the relative osteokinematic Cardan joint angles of the hip and knee were calculated using the Kinect 2.0 skeletal tracker. The pelvis segments of the default skeletal model were reoriented, and 3-dimensional joint angles were compared with a marker-based system during a drop vertical jump and a hip abduction motion. Good agreement between the Kinect and marker-based systems was found for knee (correlation coefficient = 0.96, cycle RMS error = 11°, peak flexion difference = 3°) and hip (correlation coefficient = 0.97, cycle RMS = 12°, peak flexion difference = 12°) flexion during the landing phase of the drop vertical jump, and for hip abduction/adduction (correlation coefficient = 0.99, cycle RMS error = 7°, peak flexion difference = 8°) during isolated hip motion. Nonsagittal hip and knee angles did not correlate well for the drop vertical jump. When limited to activities in the optimal capture volume, and with simple modifications to the skeletal model, the Kinect 2.0 skeletal tracker can provide limited 3-dimensional kinematic information of the lower limbs that may be useful for functional movement assessment.
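The agreement metrics reported in this abstract (cycle RMS error and correlation coefficient between Kinect and marker-based angle traces) can be computed as follows; the sinusoidal "knee flexion" trace and the constant 5° offset are synthetic stand-ins for real capture data.

```python
import numpy as np

def compare_traces(kinect_deg, marker_deg):
    """Cycle RMS error (degrees) and Pearson correlation between two
    joint-angle traces sampled at the same instants."""
    k = np.asarray(kinect_deg, dtype=float)
    m = np.asarray(marker_deg, dtype=float)
    rms = float(np.sqrt(np.mean((k - m) ** 2)))
    r = float(np.corrcoef(k, m)[0, 1])
    return rms, r

t = np.linspace(0.0, 1.0, 100)
marker = 60.0 * np.sin(np.pi * t)   # synthetic knee-flexion cycle
kinect = marker + 5.0               # constant 5-degree offset
rms, r = compare_traces(kinect, marker)
```

Note the two metrics capture different failure modes: a pure offset leaves the correlation at 1.0 while inflating the RMS error, which is why the abstract reports both.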
Neutron capture on short-lived nuclei via the surrogate (d,pγ) reaction
NASA Astrophysics Data System (ADS)
Cizewski, Jolie A.; Ratkiewicz, Andrew
2018-05-01
Rapid r-process nucleosynthesis is responsible for the creation of about half of the elements heavier than iron. Neutron capture on short-lived nuclei in cold processes, or during freeze-out from hot processes, can have a significant impact on the final observed r-process abundances. We are validating the (d,pγ) reaction as a surrogate for neutron capture with measurements on 95Mo targets and a focus on discrete transitions. The experimental results have been analyzed within the Hauser-Feshbach approach, with non-elastic breakup of the deuteron providing a neutron to be captured. Preliminary results support the (d,pγ) reaction as a valid surrogate for neutron capture. We are poised to measure the (d,pγ) reaction in inverse kinematics with unstable beams following the development of the experimental techniques.
Optimization of the Reconstruction Interval in Neurovascular 4D-CTA Imaging
Hoogenboom, T.C.H.; van Beurden, R.M.J.; van Teylingen, B.; Schenk, B.; Willems, P.W.A.
2012-01-01
Time-resolved whole-brain CT angiography (4D-CTA) is a novel imaging technology providing information regarding blood flow. One of the factors that influence the diagnostic value of this examination is the temporal resolution, which is affected by the gantry rotation speed during acquisition and the reconstruction interval during post-processing. Post-processing determines the time spacing between two reconstructed volumes and, unlike rotation speed, does not affect radiation burden. The data sets of six patients who underwent a cranial 4D-CTA were used for this study. Raw data was acquired using a 320-slice scanner with a rotation speed of 2 Hz. The arterial-to-venous passage of an intravenous contrast bolus was captured during a 15 s continuous scan. The raw data was reconstructed using four different reconstruction intervals: 0.2, 0.3, 0.5 and 1.0 s. The results were rated by two observers using a standardized score sheet. The appearance of each lesion was rated correctly in all readings. Scoring for quality of temporal resolution revealed a stepwise improvement from the 1.0 s interval to the 0.3 s interval, while no discernible improvement was noted between the 0.3 s and 0.2 s intervals. An increase in temporal resolution may improve the diagnostic quality of cranial 4D-CTA. Using a rotation speed of 0.5 s, the optimal reconstruction interval appears to be 0.3 s, beyond which changes can no longer be discerned. PMID:23217631
NASA Astrophysics Data System (ADS)
Nolte, David D.
2016-03-01
Biodynamic imaging is an emerging 3D optical imaging technology that probes up to 1 mm deep inside three-dimensional living tissue using short-coherence dynamic light scattering to measure the intracellular motions of cells inside their natural microenvironments. Biodynamic imaging is label-free and non-invasive. The information content of biodynamic imaging is captured through tissue dynamics spectroscopy, which displays the changes in the Doppler signatures from intracellular constituents in response to applied compounds. The affected dynamic intracellular mechanisms include organelle transport, membrane undulations, cytoskeletal restructuring, strain at cellular adhesions, cytokinesis, mitosis, and exo- and endocytosis, among others. The development of 3D high-content assays such as biodynamic profiling can become a critical new tool for assessing efficacy of drugs and the suitability of specific types of tissue growth for drug discovery and development. The use of biodynamic profiling to predict clinical outcome of living biopsies to cancer therapeutics can be developed into a phenotypic companion diagnostic, as well as a new tool for therapy selection in personalized medicine. This invited talk will present an overview of the optical, physical and physiological processes involved in biodynamic imaging. Several biodynamic imaging modalities will be presented, including motility contrast imaging (MCI), tissue-dynamics spectroscopy (TDS) and tissue-dynamics imaging (TDI). A wide range of potential applications will be described, including process monitoring for 3D tissue culture, drug discovery and development, cancer therapy selection, and embryo assessment for in-vitro fertilization and artificial reproductive technologies, among others.
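The core signal-processing step behind tissue-dynamics spectroscopy is estimating fluctuation (Doppler) spectra from intensity time traces. A minimal sketch of that step, using a synthetic trace (the 0.5 Hz component and all parameter values are illustrative assumptions, not data from the talk):

```python
# Hedged sketch: estimate the fluctuation power spectrum of a dynamic
# light-scattering intensity trace, the quantity TDS tracks over time.
import numpy as np

def fluctuation_spectrum(intensity: np.ndarray, fs: float):
    """One-sided power spectrum of a mean-subtracted intensity trace."""
    x = intensity - intensity.mean()
    spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs, spec

# Synthetic trace: a slow 0.5 Hz oscillation (standing in for organelle
# motion) buried in measurement noise, sampled at 25 Hz for 40 s.
fs = 25.0
t = np.arange(0.0, 40.0, 1.0 / fs)
rng = np.random.default_rng(0)
trace = 1.0 + 0.2 * np.sin(2 * np.pi * 0.5 * t) + 0.05 * rng.standard_normal(t.size)
f, p = fluctuation_spectrum(trace, fs)
peak_hz = f[int(np.argmax(p[1:])) + 1]   # skip the DC bin
print(f"spectral peak near {peak_hz:.2f} Hz")
```

In the real modality this spectrum is computed as a function of time and depth, and drug response appears as spectral weight shifting between frequency bands.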
Three-dimensional positioning and structure of chromosomes in a human prophase nucleus
Chen, Bo; Yusuf, Mohammed; Hashimoto, Teruo; Estandarte, Ana Katrina; Thompson, George; Robinson, Ian
2017-01-01
The human genetic material is packaged into 46 chromosomes. The structure of chromosomes is known at the lowest level, where the DNA chain is wrapped around a core of eight histone proteins to form nucleosomes. Around a million of these nucleosomes, each about 11 nm in diameter and 6 nm in thickness, are wrapped up into the complex organelle of the chromosome, whose structure is mostly known at the level of visible light microscopy to form a characteristic cross shape in metaphase. However, the higher-order structure of human chromosomes, between a few tens and hundreds of nanometers, has not been well understood. We show a three-dimensional (3D) image of a human prophase nucleus obtained by serial block-face scanning electron microscopy, with 36 of the complete set of 46 chromosomes captured within it. The acquired image allows us to extract quantitative 3D structural information about the nucleus and the preserved, intact individual chromosomes within it, including their positioning and full spatial morphology at a resolution of around 50 nm in three dimensions. The chromosome positions were found, at least partially, to follow the pattern of chromosome territories previously observed only in interphase. The 3D conformation shows parallel, planar alignment of the chromatids, whose occupied volumes are almost fully accounted for by the DNA and known chromosomal proteins. We also propose a potential new method of identifying human chromosomes in three dimensions, on the basis of the measurements of their 3D morphology. PMID:28776025
Analysis of Uncertainty in a Middle-Cost Device for 3D Measurements in BIM Perspective
Sánchez, Alonso; Naranjo, José-Manuel; Jiménez, Antonio; González, Alfonso
2016-01-01
Medium-cost devices equipped with sensors are being developed to obtain 3D measurements. Some allow for generating geometric models and point clouds. Nevertheless, the accuracy of these measurements should be evaluated, taking into account the requirements of the Building Information Model (BIM). This paper analyzes the uncertainty in outdoor/indoor three-dimensional coordinate measures and point clouds (using Spherical Accuracy Standard (SAS) methods) for Eyes Map, a medium-cost tablet manufactured by e-Capture Research & Development Company, Mérida, Spain. To this end, in outdoor tests, the coordinates of targets at distances from 1 to 6 m were measured with the device and point clouds were obtained. Subsequently, these were compared to the coordinates of the same targets measured by a Total Station. The Euclidean average distance error was 0.005–0.027 m for measurements by Photogrammetry and 0.013–0.021 m for the point clouds. All of them satisfy the tolerance for point cloud acquisition (0.051 m) according to the BIM Guide for 3D Imaging (General Services Administration); similar results are obtained in the indoor tests, with values of 0.022 m. In this paper, we establish the optimal distances for the observations in both Photogrammetry and 3D Photomodeling modes (outdoor) and point out some working conditions to avoid in indoor environments. Finally, the authors discuss some recommendations for improving the performance and working methods of the device. PMID:27669245
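The accuracy check at the heart of this evaluation is a mean 3D Euclidean error between device coordinates and Total Station reference coordinates, compared against the GSA tolerance quoted above. A minimal sketch (the coordinate values are hypothetical, not the study's targets):

```python
# Illustrative check, not the authors' code: mean 3D Euclidean error of
# device measurements against a Total Station reference, compared with
# the GSA BIM Guide point-cloud acquisition tolerance (0.051 m).
import math

BIM_TOLERANCE_M = 0.051

def mean_euclidean_error(measured, reference):
    """Mean 3D Euclidean distance (m) between paired coordinate triples."""
    pairs = list(zip(measured, reference))
    return sum(math.dist(m, r) for m, r in pairs) / len(pairs)

# Hypothetical target coordinates (m): device vs Total Station.
device  = [(1.002, 0.498, 0.101), (2.010, 1.505, 0.097)]
station = [(1.000, 0.500, 0.100), (2.000, 1.500, 0.100)]
err = mean_euclidean_error(device, station)
print(f"mean error {err:.4f} m, within tolerance: {err <= BIM_TOLERANCE_M}")
```

The same comparison, applied per acquisition mode and per distance band, yields the 0.005–0.027 m figures reported in the abstract.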
Performance characterization of structured light-based fingerprint scanner
NASA Astrophysics Data System (ADS)
Hassebrook, Laurence G.; Wang, Minghao; Daley, Raymond C.
2013-05-01
Our group believes that the evolution of fingerprint capture technology is in transition to include 3-D non-contact fingerprint capture. More specifically we believe that systems based on structured light illumination provide the highest level of depth measurement accuracy. However, for these new technologies to be fully accepted by the biometric community, they must be compliant with federal standards of performance. At present these standards do not exist for this new biometric technology. We propose and define a set of test procedures to be used to verify compliance with the Federal Bureau of Investigation's image quality specification for Personal Identity Verification single fingerprint capture devices. The proposed test procedures include: geometric accuracy, lateral resolution based on intensity or depth, gray level uniformity and flattened fingerprint image quality. Several 2-D contact analogies, performance tradeoffs and optimization dilemmas are evaluated and proposed solutions are presented.
Towards a Quantum Interface between Diamond Spin Qubits and Phonons in an Optical Trap
NASA Astrophysics Data System (ADS)
Ji, Peng; Momeen, M. Ummal; Hsu, Jen-Feng; D'Urso, Brian; Dutt, Gurudev
2014-05-01
We introduce a method to optically levitate a pre-selected nanodiamond crystal in air or vacuum. The nanodiamond containing nitrogen-vacancy (NV) centers is suspended on a monolayer of graphene transferred onto a patterned substrate. Laser light is focused onto the sample, using a home-built confocal microscope with a high numerical aperture (NA = 0.9) objective, simultaneously burning the graphene and creating a 3D optical trap that captures the falling nanodiamond at the beam waist. The trapped diamond is an ultra-high-Q mechanical oscillator, allowing us to engineer strong linear and quadratic coupling between the spin of the NV center and the phonon mode. The system could provide an ideal quantum interface between a spin qubit and a vibrational phonon mode, potentially enabling applications in quantum information storage, processing, and sensing.
NASA Astrophysics Data System (ADS)
Prusten, Mark J.; McIntyre, Michelle; Landis, Marvin
2006-02-01
A 3D workflow pipeline is presented for High Dynamic Range (HDR) image capture of projected scenes or objects for presentation in CAVE virtual environments. The methods of HDR digital photography of environments vs. objects are reviewed. Samples of both types of virtual authoring are shown: the actual CAVE environment and a sculpture. A series of software tools are incorporated into a pipeline called CAVEPIPE, allowing high-resolution objects and scenes to be composited together in natural illumination environments [1] and presented in our CAVE virtual reality environment. We also present a way to enhance the user interface for CAVE environments. The traditional methods of controlling navigation through virtual environments include gloves, HUDs and 3D mouse devices. By integrating a wireless network that includes both WiFi (IEEE 802.11b/g) and Bluetooth (IEEE 802.15.1) protocols, the dedicated non-graphical input control device can be eliminated. Wireless devices can then be added, including PDAs, smartphones, Tablet PCs, portable gaming consoles, and Pocket PCs.
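The HDR capture step described above rests on a standard idea: merge bracketed exposures into a single radiance estimate by dividing each pixel by its exposure time and weighting mid-range values most heavily. A minimal sketch under the simplifying assumption of a linear camera response (this is the generic technique, not the CAVEPIPE implementation):

```python
# Sketch of HDR merging from bracketed exposures (linear response assumed).
import numpy as np

def merge_hdr(images, exposure_times):
    """Weighted radiance estimate: hat weights favour mid-tone pixels."""
    acc = np.zeros_like(images[0], dtype=float)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(img - 0.5) * 2.0   # 1 at mid-grey, 0 at clipping
        acc += w * img / t                  # per-exposure radiance estimate
        wsum += w
    return acc / np.maximum(wsum, 1e-6)

# Two exposures of a pixel with true radiance 2.0 (values in [0, 1]).
ims = [np.array([[0.2]]), np.array([[0.4]])]
rad = merge_hdr(ims, [0.1, 0.2])
print(rad)  # recovers ~2.0
```

Real pipelines additionally estimate the camera response curve and handle clipped pixels, but the weighted-average structure is the same.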
Band registration of tuneable frame format hyperspectral UAV imagers in complex scenes
NASA Astrophysics Data System (ADS)
Honkavaara, Eija; Rosnell, Tomi; Oliveira, Raquel; Tommaselli, Antonio
2017-12-01
A recent revolution in miniaturised sensor technology has provided markets with novel hyperspectral imagers operating on the frame format principle. In the case of unmanned aerial vehicle (UAV) based remote sensing, the frame format technology is highly attractive in comparison to the commonly utilised pushbroom scanning technology, because it offers better stability and the possibility to capture stereoscopic data sets, bringing an opportunity for 3D hyperspectral object reconstruction. Tuneable filters are one of the approaches for capturing multi- or hyperspectral frame images. The individual bands are not aligned when operating a sensor based on tuneable filters from a mobile platform, such as a UAV, because the full spectrum recording is carried out in the time-sequential principle. The objective of this investigation was to study the aspects of band registration of an imager based on tuneable filters and to develop a rigorous and efficient approach for band registration in complex 3D scenes, such as forests. The method first determines the orientations of selected reference bands and reconstructs the 3D scene using structure-from-motion and dense image matching technologies. The bands without orientation are then matched to the oriented bands, accounting for the 3D scene, to provide exterior orientations; afterwards, hyperspectral orthomosaics, or hyperspectral point clouds, are calculated. The uncertainty aspects of the novel approach were studied. An empirical assessment was carried out in a forested environment using hyperspectral images captured with a hyperspectral 2D frame format camera, based on a tuneable Fabry-Pérot interferometer (FPI) on board a multicopter and supported by a high spatial resolution consumer colour camera. A theoretical assessment showed that the method was capable of providing band registration accuracy better than 0.5-pixel size.
The empirical assessment proved the performance and showed that, with the novel method, most parts of the band misalignments were less than the pixel size. Furthermore, it was shown that the performance of the band alignment was dependent on the spatial distance from the reference band.
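The step of matching unoriented bands against the reconstructed 3D scene rests on reprojecting scene points through a candidate exterior orientation. A minimal collinearity-projection sketch (hypothetical values and conventions; not the authors' implementation):

```python
# Sketch: project a 3D scene point into a band image given its exterior
# orientation, the geometric core of 3D-scene-aware band registration.
import numpy as np

def project(X, R, C, f):
    """Collinearity projection: world point X -> image (x, y) for a camera
    at centre C with world-to-camera rotation R and principal distance f."""
    Xc = R @ (X - C)                       # point in camera coordinates
    return -f * Xc[0] / Xc[2], -f * Xc[1] / Xc[2]

X = np.array([10.0, 5.0, 0.0])             # ground point (m)
C = np.array([10.0, 5.0, 100.0])           # camera 100 m above it
R = np.eye(3)                              # nadir-looking camera
x, y = project(X, R, C, 0.01)              # f = 10 mm
print(x, y)                                # lands at the principal point
```

Band registration then amounts to finding the exterior orientation that minimises the reprojection residuals between predicted and matched image positions, with the dense 3D scene supplying the depth needed in complex targets such as forest canopies.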
75 FR 46943 - Agency Information Collection Activities: Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-04
...-capturing process. SAMHSA will place Web site registration information into a Knowledge Management database... September 3, 2010 to: SAMHSA Desk Officer, Human Resources and Housing Branch, Office of Management and...
1992-05-01
Keywords: methodology, knowledge acquisition, requirements definition, information systems, information engineering, systems engineering ... and knowledge resources. Like manpower, materials, and machines, information and knowledge assets are recognized as vital resources that can be ... evolve towards an information-integrated enterprise. These technologies are designed to leverage information and knowledge resources as the key
NASA Astrophysics Data System (ADS)
Wen, Gezheng; Park, Subok; Markey, Mia K.
2017-03-01
Multifocal and multicentric breast cancer (MFMC), i.e., the presence of two or more tumor foci within the same breast, has an immense clinical impact on treatment planning and survival outcomes. Detecting multiple breast tumors is challenging as MFMC breast cancer is relatively uncommon, and human observers do not know the number or locations of tumors a priori. Digital breast tomosynthesis (DBT), in which an x-ray beam sweeps over a limited angular range across the breast, has the potential to improve the detection of multiple tumors [1,2]. However, prior efforts to optimize DBT image quality considered only unifocal breast cancers (e.g., [3-9]), so the recommended geometries may not necessarily yield images that are informative for the task of detecting MFMC. Hence, the goal of this study is to employ a 3D multi-lesion (ml) channelized-Hotelling observer (CHO) to identify optimal DBT acquisition geometries for MFMC. Digital breast phantoms and simulated DBT scanners of different geometries (e.g., wide or narrow arc scans, different numbers of projections per scan) were used to generate image data for the simulation study. Multiple 3D synthetic lesions were inserted into different breast regions to simulate MF cases and MC cases. 3D partial least squares (PLS) channels and 3D Laguerre-Gauss (LG) channels were estimated to capture discriminant information and correlations among signals in locally varying anatomical backgrounds, enabling the model observer to make both image-level and location-specific detection decisions. The 3D ml-CHO with PLS channels outperformed that with LG channels in this study. The simulated MF and MC cases were not equally difficult for the ml-CHO to detect across the different simulated DBT geometries considered in this analysis.
Also, the results suggest that the optimal design of DBT may vary as the task of clinical interest changes, e.g., a geometry that is better for finding at least one lesion may be worse for counting the number of lesions.
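The Hotelling-observer machinery underlying the study reduces, per decision, to a linear template w = S⁻¹(μ_signal − μ_noise) applied to channel outputs. A minimal single-lesion sketch with synthetic channel data (illustrative only; not the 3D multi-lesion ml-CHO or the PLS channels of the study):

```python
# Minimal channelized-Hotelling observer sketch on synthetic channel data.
import numpy as np

rng = np.random.default_rng(1)
n_ch, n = 6, 400
mu_s = np.full(n_ch, 0.5)                        # signal-present channel means
noise = rng.standard_normal((n, n_ch))           # signal-absent samples
signal = mu_s + rng.standard_normal((n, n_ch))   # signal-present samples

# Pooled channel covariance, Hotelling template, and test statistics.
S = 0.5 * (np.cov(noise.T) + np.cov(signal.T))
w = np.linalg.solve(S, signal.mean(0) - noise.mean(0))
t_s, t_n = signal @ w, noise @ w

# Detectability index d' from the two test-statistic distributions.
dprime = (t_s.mean() - t_n.mean()) / np.sqrt(0.5 * (t_s.var() + t_n.var()))
print(f"d' = {dprime:.2f}")
```

Ranking candidate DBT geometries then amounts to comparing figures of merit such as d' (or AUC) computed on images simulated for each geometry; the multi-lesion extension adds location-specific decisions on top of this image-level statistic.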
Bruse, Jan L; McLeod, Kristin; Biglino, Giovanni; Ntsinjana, Hopewell N; Capelli, Claudio; Hsia, Tain-Yen; Sermesant, Maxime; Pennec, Xavier; Taylor, Andrew M; Schievano, Silvia
2016-05-31
Medical image analysis in clinical practice is commonly carried out on 2D image data, without fully exploiting the detailed 3D anatomical information that is provided by modern non-invasive medical imaging techniques. In this paper, a statistical shape analysis method is presented, which enables the extraction of 3D anatomical shape features from cardiovascular magnetic resonance (CMR) image data, with no need for manual landmarking. The method was applied to repaired aortic coarctation arches that present complex shapes, with the aim of capturing shape features as biomarkers of potential functional relevance. The method is presented from the user-perspective and is evaluated by comparing results with traditional morphometric measurements. Steps required to set up the statistical shape modelling analyses, from pre-processing of the CMR images to parameter setting and strategies to account for size differences and outliers, are described in detail. The anatomical mean shape of 20 aortic arches post-aortic coarctation repair (CoA) was computed based on surface models reconstructed from CMR data. By analysing transformations that deform the mean shape towards each of the individual patient's anatomy, shape patterns related to differences in body surface area (BSA) and ejection fraction (EF) were extracted. The resulting shape vectors, describing shape features in 3D, were compared with traditionally measured 2D and 3D morphometric parameters. The computed 3D mean shape was close to population mean values of geometric shape descriptors and visually integrated characteristic shape features associated with our population of CoA shapes. After removing size effects due to differences in body surface area (BSA) between patients, distinct 3D shape features of the aortic arch correlated significantly with EF (r = 0.521, p = .022) and were well in agreement with trends as shown by traditional shape descriptors. 
The suggested method has the potential to discover previously unknown 3D shape biomarkers from medical imaging data. Thus, it could contribute to improving diagnosis and risk stratification in complex cardiac disease.
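The final statistical step described above, relating a scalar shape score derived from the deformation model to a functional parameter, is an ordinary correlation analysis. A minimal sketch with synthetic numbers (the values below are illustrative, not the study's 20 patients):

```python
# Illustrative sketch: Pearson correlation between a 1-D shape score
# (e.g., projection onto a deformation mode) and ejection fraction.
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

shape_score = [0.1, 0.4, 0.2, 0.8, 0.5]   # synthetic per-patient scores
ef          = [55, 62, 57, 70, 63]        # synthetic EF values (%)
r = pearson_r(shape_score, ef)
print(f"r = {r:.3f}")
```

In the study the analogous computation, after removing BSA-driven size effects, yielded r = 0.521 (p = .022) between arch shape features and EF.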
The 3D rocket combustor acoustics model
NASA Technical Reports Server (NTRS)
Priem, Richard J.; Breisacher, Kevin J.
1992-01-01
The theory and procedures for determining the characteristics of pressure oscillations in rocket engines with prescribed burning rate oscillations are presented. Analyses including radial and hub baffles and absorbers can be performed in one, two, and three dimensions. Pressure and velocity oscillations calculated using this procedure are presented for the SSME to show the influence of baffles and absorbers on the burning rate oscillations required to achieve neutral stability. Comparisons are made between the results obtained utilizing 1-D, 2-D, and 3-D assumptions with regard to capturing the physical phenomena of interest and computational requirements.
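The 1-D limit of the chamber-acoustics problem has a closed-form baseline against which such codes are typically checked: the longitudinal resonances of a closed-closed duct, f_n = n·c/(2L). A minimal sketch (the sound speed and chamber length are illustrative assumptions, not SSME data):

```python
# Hedged sketch: closed-closed longitudinal acoustic modes f_n = n*c/(2L),
# the analytic 1-D baseline for a combustor acoustics analysis.
def longitudinal_modes(c: float, L: float, n_modes: int = 3):
    """First n longitudinal resonance frequencies (Hz) of a closed duct
    with sound speed c (m/s) and length L (m)."""
    return [n * c / (2.0 * L) for n in range(1, n_modes + 1)]

# Hot combustion gas, c ~ 1200 m/s, chamber length 0.5 m (assumed values).
print(longitudinal_modes(1200.0, 0.5))  # [1200.0, 2400.0, 3600.0]
```

The 2-D and 3-D analyses add tangential and radial mode families (and the detuning effects of baffles and absorbers) that this 1-D formula cannot represent, which is the trade-off the abstract's comparison addresses.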