Sample records for imaging time points

  1. Software for Verifying Image-Correlation Tie Points

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Yagi, Gary

    2008-01-01

    A computer program enables assessment of the quality of tie points in the image-correlation processes of the software described in the immediately preceding article. Tie points are computed in mappings between corresponding pixels in the left and right images of a stereoscopic pair. The mappings are sometimes imperfect because image data can be noisy and parallax can cause some points to appear in one image but not the other. The present computer program relies on the availability of a left-to-right correlation map in addition to the usual right-to-left correlation map. The additional map must be generated, which doubles the processing time. Such increased time can now be afforded in the data-processing pipeline, since the time for map generation has been reduced from about 60 minutes to 3 minutes by the parallelization discussed in the previous article. Parallel cluster processing thus enabled this improved science result. The first mapping is typically from a point (denoted by coordinates x,y) in the left image to a point (x',y') in the right image. The second mapping is from (x',y') in the right image to some point (x",y") in the left image. If (x,y) and (x",y") are identical, then the mapping is considered perfect. The perfect-match criterion can be relaxed by introducing an error window that admits round-off error and a small amount of noise. The mapping procedure can be repeated until all points in each image not connected to points in the other image are eliminated, so that what remains are verified correlation data.
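    The round-trip consistency test described above can be sketched with dense correlation maps in NumPy. This is a minimal illustration of the idea, not the verification software itself; the array layout (each map stores the matched (x, y) coordinate per pixel) is an assumption.

```python
import numpy as np

def verify_tie_points(lr_map, rl_map, tol=1.0):
    """Flag left-image pixels whose round trip left -> right -> left
    lands within `tol` pixels of where it started.

    lr_map: (H, W, 2) array; lr_map[y, x] = (x', y') in the right image.
    rl_map: (H, W, 2) array; rl_map[y', x'] = (x'', y'') in the left image.
    Returns an (H, W) boolean mask of verified tie points.
    """
    H, W = lr_map.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W]
    # Follow the left-to-right map, rounding to the nearest right-image pixel.
    xr = np.clip(np.rint(lr_map[..., 0]).astype(int), 0, W - 1)
    yr = np.clip(np.rint(lr_map[..., 1]).astype(int), 0, H - 1)
    # Follow the right-to-left map back to the left image.
    xb = rl_map[yr, xr, 0]
    yb = rl_map[yr, xr, 1]
    # Accept points whose round-trip error fits inside the error window.
    return np.hypot(xb - xs, yb - ys) <= tol
```

    With identity maps every point verifies; corrupting one entry of the right-to-left map eliminates exactly that tie point.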

  2. Online coupled camera pose estimation and dense reconstruction from video

    DOEpatents

    Medioni, Gerard; Kang, Zhuoliang

    2016-11-01

    A product may receive each image in a stream of video images of a scene and, before processing the next image, generate information indicative of the position and orientation of the image capture device that captured the image at the time of capture. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and, for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three-dimensional (3D) model of at least a portion of the scene that appear likely to correspond to the identified image feature point. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of the corresponding model feature points. The product may update a 3D model of at least a portion of the scene following the receipt of each video image and before processing the next video image, based on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update to the model.
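    The "consistent projection transformation" test at the core of the claim can be sketched as reprojection-error counting under a pinhole camera model. This is a hypothetical NumPy illustration, not the patented implementation; the function names and fixed intrinsics are assumptions.

```python
import numpy as np

def project(points3d, R, t, K):
    """Pinhole projection of (N, 3) world points with pose (R, t)
    and intrinsics K; returns (N, 2) pixel coordinates."""
    cam = points3d @ R.T + t
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]

def count_consistent(points3d, points2d, R, t, K, tol=2.0):
    """Number of feature correspondences whose reprojection error
    is within `tol` pixels under the candidate pose (R, t)."""
    err = np.linalg.norm(project(points3d, R, t, K) - points2d, axis=1)
    return int((err <= tol).sum())
```

    A pose-estimation loop would score candidate (pose, correspondence) combinations with a counter like this and keep the most consistent one.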

  3. Experimental Evaluation of a Deformable Registration Algorithm for Motion Correction in PET-CT Guided Biopsy.

    PubMed

    Khare, Rahul; Sala, Guillaume; Kinahan, Paul; Esposito, Giuseppe; Banovac, Filip; Cleary, Kevin; Enquobahrie, Andinet

    2013-01-01

    Positron emission tomography-computed tomography (PET-CT) images are increasingly being used for guidance during percutaneous biopsy. However, due to the physics of image acquisition, PET-CT images are susceptible to artifacts from respiratory and cardiac motion, leading to inaccurate tumor localization, shape distortion, and errors in attenuation correction. To address these problems, we present a method for motion correction that relies on respiratory-gated CT images aligned using a deformable registration algorithm. In this work, we use two deformable registration algorithms and two optimization approaches for registering the CT images obtained over the respiratory cycle. The two algorithms are BSpline and symmetric forces Demons registration. In the first optimization approach, CT images at each time point are registered to a single reference time point. In the second approach, deformation maps are obtained to align each CT time point with its adjacent time point. These deformations are then composed to find the deformation with respect to a reference time point. We evaluate these two algorithms and optimization approaches using respiratory-gated CT images obtained from 7 patients. Our results show that overall the BSpline registration algorithm with the reference optimization approach gives the best results.
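    The composition of adjacent-time-point deformations used in the second optimization approach can be illustrated in one dimension. This is a schematic sketch, not the BSpline or Demons implementation: for displacement fields u and v sampled on a grid, warping by the composed field equals warping by v and then by u.

```python
import numpy as np

def compose_displacements(u, v, x):
    """Compose 1-D displacement fields sampled on grid x.
    The composed field w satisfies w(x) = v(x) + u(x + v(x)),
    i.e. apply the adjacent-step deformation v, then the
    accumulated deformation u at the displaced location."""
    return v + np.interp(x + v, x, u)
```

    Composing with a zero field returns the other field unchanged, which is a quick sanity check on the ordering convention.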

  4. Imaging study on acupuncture points

    NASA Astrophysics Data System (ADS)

    Yan, X. H.; Zhang, X. Y.; Liu, C. L.; Dang, R. S.; Ando, M.; Sugiyama, H.; Chen, H. S.; Ding, G. H.

    2009-09-01

    The topographic structures of acupuncture points were investigated using the synchrotron-radiation-based Dark Field Image (DFI) method. The following four acupuncture points were studied: Sanyinjiao, Neiguan, Zusanli and Tianshu. We found that micro-vessels accumulate in acupuncture point regions. Images taken in the surrounding tissue outside the acupuncture points do not show this kind of structure. This is the first time the specific structure of acupuncture points has been revealed directly by X-ray imaging.

  5. Registration of 4D time-series of cardiac images with multichannel Diffeomorphic Demons.

    PubMed

    Peyrat, Jean-Marc; Delingette, Hervé; Sermesant, Maxime; Pennec, Xavier; Xu, Chenyang; Ayache, Nicholas

    2008-01-01

    In this paper, we propose a generic framework for intersubject non-linear registration of 4D time-series images. In this framework, spatio-temporal registration is defined by mapping trajectories of physical points, as opposed to spatial registration, which solely aims at mapping homologous points. First, we determine the trajectories we want to register in each sequence using a motion tracking algorithm based on the Diffeomorphic Demons algorithm. Then, we simultaneously perform pairwise registrations of corresponding time points with the constraint of mapping the same physical points over time. We show that this trajectory registration can be formulated as a multichannel registration of 3D images. We solve it using the Diffeomorphic Demons algorithm extended to vector-valued 3D images. This framework is applied to the intersubject non-linear registration of 4D cardiac CT sequences.

  6. A patient-specific segmentation framework for longitudinal MR images of traumatic brain injury

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Prastawa, Marcel; Irimia, Andrei; Chambers, Micah C.; Vespa, Paul M.; Van Horn, John D.; Gerig, Guido

    2012-02-01

    Traumatic brain injury (TBI) is a major cause of death and disability worldwide. Robust, reproducible segmentations of MR images with TBI are crucial for quantitative analysis of recovery and treatment efficacy. However, this is a significant challenge due to severe anatomy changes caused by edema (swelling), bleeding, tissue deformation, skull fracture, and other effects related to head injury. In this paper, we introduce a multi-modal image segmentation framework for longitudinal TBI images. The framework is initialized through manual input of primary lesion sites at each time point, which are then refined by a joint approach composed of Bayesian segmentation and construction of a personalized atlas. The personalized atlas construction estimates the average of the posteriors of the Bayesian segmentation at each time point and warps the average back to each time point to provide the updated priors for Bayesian segmentation. The difference between our approach and segmenting longitudinal images independently is that we use the information from all time points to improve the segmentations. Given a manual initialization, our framework automatically segments healthy structures (white matter, grey matter, cerebrospinal fluid) as well as different lesions such as hemorrhagic lesions and edema. Our framework can handle different sets of modalities at each time point, which provides flexibility in analyzing clinical scans. We show results on three subjects with acute baseline scans and chronic follow-up scans. The results demonstrate that joint analysis of all the time points yields improved segmentation compared to independent analysis of the two time points.

  7. The vectorization of a ray tracing program for image generation

    NASA Technical Reports Server (NTRS)

    Plunkett, D. J.; Cychosz, J. M.; Bailey, M. J.

    1984-01-01

    Ray tracing is a widely used method for producing realistic computer-generated images. Ray tracing involves firing an imaginary ray from a view point, through a point on an image plane, into a three-dimensional scene. The intersections of the ray with the objects in the scene determine what is visible at the point on the image plane. This process must be repeated many times, once for each point (commonly called a pixel) in the image plane. A typical image contains more than a million pixels, making this process computationally expensive. A traditional ray tracing program processes one ray at a time. In such a serial approach, as much as ninety percent of the execution time is spent computing the intersections of a ray with the surfaces in the scene. With the CYBER 205, many rays can be intersected with all the bodies in the scene with a single series of vector operations. Vectorization of this intersection process results in large decreases in computation time. The CADLAB's interest in ray tracing stems from the need to produce realistic images of mechanical parts. A high quality image of a part during the design process can increase the productivity of the designer by helping him visualize the results of his work. To be useful in the design process, these images must be produced in a reasonable amount of time. This discussion explains how the ray tracing process was vectorized and gives examples of the images obtained.
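    The vectorization idea carries over directly to modern array languages: every ray-sphere intersection test can be evaluated in one batch of vector operations rather than ray by ray. A minimal NumPy sketch of the quadratic-discriminant test (an illustration of the technique, not the original CYBER 205 code):

```python
import numpy as np

def intersect_spheres(origin, dirs, centers, radii):
    """Intersect R rays (unit directions `dirs`, shared `origin`)
    with S spheres in a single batch; returns the nearest positive
    hit distance per ray, or inf when a ray misses everything."""
    oc = origin[None, :] - centers              # (S, 3) origin-to-center offsets
    b = dirs @ oc.T                             # (R, S) per-pair d . (o - c)
    c = (oc ** 2).sum(axis=1) - radii ** 2      # (S,) per-sphere constant term
    disc = b ** 2 - c[None, :]                  # discriminant of t^2 + 2bt + c
    t = np.where(disc >= 0.0,
                 -b - np.sqrt(np.maximum(disc, 0.0)),
                 np.inf)
    t = np.where(t > 0.0, t, np.inf)            # keep hits in front of the origin
    return t.min(axis=1)                        # nearest sphere per ray
```

    The whole (rays x spheres) grid of tests is computed at once, which is exactly the restructuring that made the intersection step vectorizable.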

  8. SU-F-R-17: Advancing Glioblastoma Multiforme (GBM) Recurrence Detection with MRI Image Texture Feature Extraction and Machine Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, V; Ruan, D; Nguyen, D

    Purpose: To test the potential of early Glioblastoma Multiforme (GBM) recurrence detection utilizing image texture pattern analysis in serial MR images post primary treatment intervention. Methods: MR image sets of six time points prior to the confirmed recurrence diagnosis of a GBM patient were included in this study, with each time point containing T1 pre-contrast, T1 post-contrast, T2-Flair, and T2-TSE images. Eight Gray-level co-occurrence matrix (GLCM) texture features including Contrast, Correlation, Dissimilarity, Energy, Entropy, Homogeneity, Sum-Average, and Variance were calculated from all images, resulting in a total of 32 features at each time point. A confirmed recurrent volume was contoured, along with an adjacent non-recurrent region-of-interest (ROI), and both volumes were propagated to all prior time points via deformable image registration. A support vector machine (SVM) with radial-basis-function kernels was trained on the latest time point prior to the confirmed recurrence to construct a model for recurrence classification. The SVM model was then applied to all prior time points and the volumes classified as recurrence were obtained. Results: An increase in classified volume was observed over time, as expected. The size of the classified recurrence remained at a stable level of approximately 0.1 cm^3 up to 272 days prior to confirmation. A noticeable volume increase to 0.44 cm^3 was demonstrated at 96 days prior, followed by a significant increase to 1.57 cm^3 at 42 days prior. Visualization of the classified volume shows the merging of recurrence-susceptible regions as the volume change became noticeable. Conclusion: Image texture pattern analysis in serial MR images appears to be sensitive to detecting recurrent GBM long before the recurrence is confirmed by a radiologist. Early detection may improve the efficacy of targeted intervention including radiosurgery. More patient cases will be included to create a generalizable classification model applicable to a larger patient cohort. NIH R43CA183390 and R01CA188300. NSF Graduate Research Fellowship DGE-1144087.
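    Three of the eight GLCM features named in the abstract can be computed from a co-occurrence matrix in a few lines. A minimal NumPy sketch for a horizontal pixel offset (the exact offsets and gray-level quantization used in the study are not stated, so these are assumptions):

```python
import numpy as np

def glcm_features(img, levels):
    """Contrast, Energy, and Homogeneity from a normalized GLCM
    built with a (dx, dy) = (1, 0) horizontal offset.
    img: 2-D integer array with values in [0, levels)."""
    p = np.zeros((levels, levels))
    # Count co-occurrences of horizontally adjacent gray levels.
    np.add.at(p, (img[:, :-1].ravel(), img[:, 1:].ravel()), 1)
    p /= p.sum()
    i, j = np.indices(p.shape)
    contrast = (p * (i - j) ** 2).sum()
    energy = (p ** 2).sum()
    homogeneity = (p / (1.0 + np.abs(i - j))).sum()
    return contrast, energy, homogeneity
```

    A perfectly uniform region gives zero contrast and maximal energy, while alternating stripes maximize contrast, matching the intuition behind texture-based recurrence detection.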

  9. Hierarchical and symmetric infant image registration by robust longitudinal-example-guided correspondence detection

    PubMed Central

    Wu, Yao; Wu, Guorong; Wang, Li; Munsell, Brent C.; Wang, Qian; Lin, Weili; Feng, Qianjin; Chen, Wufan; Shen, Dinggang

    2015-01-01

    Purpose: To investigate anatomical differences across individual subjects, or longitudinal changes in early brain development, it is important to perform accurate image registration. However, due to fast brain development and dynamic tissue appearance changes, it is very difficult to align infant brain images acquired between birth and 1 yr of age. Methods: To solve this challenging problem, a novel image registration method is proposed to align two infant brain images, regardless of age at acquisition. The main idea is to utilize the growth trajectories, or spatial-temporal correspondences, learned from a set of longitudinal training images, to guide the registration of two different time-point images with different image appearances. Specifically, in the training stage, an intrinsic growth trajectory is first estimated for each training subject using the longitudinal images. To register two new infant images with a potentially large age gap, the corresponding image patches between each new image and its respective training images of similar age are identified. Finally, the registration between the two new images can be assisted by the learned growth trajectories from one time point to another that were established in the training stage. To further improve registration accuracy, the proposed method is combined with a hierarchical and symmetric registration framework that can iteratively add new key points in both images to steer the estimation of the deformation between the two infant brain images under registration. Results: To evaluate image registration accuracy, the proposed method was used to align 24 infant subjects at five different time points (2-week-old, 3-month-old, 6-month-old, 9-month-old, and 12-month-old). Compared to state-of-the-art methods, the proposed method demonstrated superior registration performance.
Conclusions: The proposed method addresses the difficulties in the infant brain registration and produces better results compared to existing state-of-the-art registration methods. PMID:26133617

  10. An adaptive clustering algorithm for image matching based on corner feature

    NASA Astrophysics Data System (ADS)

    Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song

    2018-04-01

    Traditional image matching algorithms cannot balance real-time performance and accuracy well. To solve this problem, an adaptive clustering algorithm for image matching based on corner features is proposed in this paper. The method adaptively clusters matching point pairs based on the similarity of their correspondence vectors. Harris corner detection is carried out first to extract the feature points of the reference image and the perceived image, and the feature points of the two images are initially matched with a Normalized Cross Correlation (NCC) function. Then, using the improved algorithm proposed in this paper, the matching results are clustered to reduce ineffective operations and improve matching speed and robustness. Finally, the Random Sample Consensus (RANSAC) algorithm is applied to the clustered matching points. The experimental results show that the proposed algorithm effectively eliminates most wrong matching points while retaining the correct ones, improves the accuracy of RANSAC matching, and at the same time reduces the computational load of the whole matching process.
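    The NCC similarity score used for the initial matching step can be written in a few lines of NumPy (a generic definition, not the authors' code); it is invariant to linear changes in patch brightness:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches:
    1.0 for identical patterns, -1.0 for inverted ones, and
    near 0 for unrelated patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```

    In a matching pipeline, each reference-image corner patch is scored against candidate patches in the perceived image, and the highest-NCC pairs become the initial matches that clustering and RANSAC then prune.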

  11. The near real time image navigation of pictures returned by Voyager 2 at Neptune

    NASA Technical Reports Server (NTRS)

    Underwood, Ian M.; Bachman, Nathaniel J.; Taber, William L.; Wang, Tseng-Chan; Acton, Charles H.

    1990-01-01

    The development of a process for performing image navigation in near real time is described. The process was used to accurately determine the camera pointing for pictures returned by the Voyager 2 spacecraft at Neptune Encounter. Image navigation improves knowledge of the pointing of an imaging instrument at a particular epoch by correlating the spacecraft-relative locations of target bodies in inertial space with the locations of their images in a picture taken at that epoch. More than 8,500 pictures returned by Voyager 2 at Neptune were processed in near real time. The results were used in several applications, including improving pointing knowledge for nonimaging instruments ('C-smithing'), making 'Neptune, the Movie', and providing immediate access to geometrical quantities similar to those traditionally supplied in the Supplementary Experiment Data Record.

  12. An approach of point cloud denoising based on improved bilateral filtering

    NASA Astrophysics Data System (ADS)

    Zheng, Zeling; Jia, Songmin; Zhang, Guoliang; Li, Xiuzhi; Zhang, Xiangyin

    2018-04-01

    An omnidirectional mobile platform is designed for building point clouds based on an improved filtering algorithm applied to the depth image. First, the mobile platform can move flexibly and its control interface is convenient. Then, because the traditional bilateral filtering algorithm is time-consuming and inefficient, a novel method called local bilateral filtering (LBF) is proposed. LBF is applied to depth images obtained by the Kinect sensor. The results show that noise removal is improved compared with bilateral filtering. Offline, the color images and processed depth images are used to build point clouds. Finally, experimental results demonstrate that our method improves the processing speed of depth images and the quality of the resulting point clouds.
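    A bilateral filter weights each neighbor by both spatial distance and intensity difference, which is what lets it smooth depth noise without blurring depth edges. A one-dimensional sketch of the standard filter (the paper's LBF variant differs in details not given in the abstract):

```python
import numpy as np

def bilateral_1d(signal, sigma_s=2.0, sigma_r=0.1, radius=3):
    """Edge-preserving smoothing of a 1-D signal: each output sample
    is a weighted mean of its neighbors, with weights combining a
    spatial Gaussian (sigma_s) and a range Gaussian (sigma_r)."""
    out = np.empty_like(signal, dtype=float)
    n = len(signal)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        x = np.arange(lo, hi)
        w = np.exp(-((x - i) ** 2) / (2 * sigma_s ** 2)
                   - ((signal[x] - signal[i]) ** 2) / (2 * sigma_r ** 2))
        out[i] = (w * signal[x]).sum() / w.sum()
    return out
```

    Across a depth discontinuity the range term drives the weights to zero, so the two sides of the edge are averaged separately.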

  13. Development of a piecewise linear omnidirectional 3D image registration method

    NASA Astrophysics Data System (ADS)

    Bae, Hyunsoo; Kang, Wonjin; Lee, SukGyu; Kim, Youngwoo

    2016-12-01

    This paper proposes a new piecewise linear omnidirectional image registration method. The proposed method segments an image captured by multiple cameras into 2D segments defined by feature points of the image, and then stitches each segment geometrically by considering the inclination of the segment in 3D space. Depending on the intended use of image registration, the proposed method can be used to improve image registration accuracy or to reduce the computation time, because the trade-off between computation time and registration accuracy can be controlled. In general, nonlinear image registration methods have been used in 3D omnidirectional image registration to reduce image distortion by camera lenses. The proposed method depends on a linear transformation process for omnidirectional image registration, and therefore it can enhance the effectiveness of the geometry recognition process, increase image registration accuracy by increasing the number of cameras or feature points of each image, increase the image registration speed by reducing the number of cameras or feature points of each image, and provide simultaneous information on shapes and colors of captured objects.

  14. Using learned under-sampling pattern for increasing speed of cardiac cine MRI based on compressive sensing principles

    NASA Astrophysics Data System (ADS)

    Zamani, Pooria; Kayvanrad, Mohammad; Soltanian-Zadeh, Hamid

    2012-12-01

    This article presents a compressive sensing approach for reducing data acquisition time in cardiac cine magnetic resonance imaging (MRI). In cardiac cine MRI, several images are acquired throughout the cardiac cycle, each of which is reconstructed from the raw data acquired in the Fourier transform domain, traditionally called k-space. In the proposed approach, a majority, e.g., 62.5%, of the k-space lines (trajectories) are acquired at the odd time points and a minority, e.g., 37.5%, of the k-space lines are acquired at the even time points of the cardiac cycle. Optimal data acquisition at the even time points is learned from the data acquired at the odd time points. To this end, statistical features of the k-space data at the odd time points are clustered by fuzzy c-means and the results are considered as the states of Markov chains. The resulting data is used to train hidden Markov models and find their transition matrices. Then, the trajectories corresponding to transition matrices far from an identity matrix are selected for data acquisition. At the end, an iterative thresholding algorithm is used to reconstruct the images from the under-sampled k-space datasets. The proposed approaches for selecting the k-space trajectories and reconstructing the images generate more accurate images compared to alternative methods. The proposed under-sampling approach achieves an acceleration factor of 2 for cardiac cine MRI.
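    The iterative thresholding reconstruction mentioned at the end of the abstract can be sketched as ISTA on a generic sparse-recovery problem. This is a schematic stand-in with a small random system, not the cardiac k-space pipeline:

```python
import numpy as np

def soft_threshold(z, lam):
    """Proximal operator of the l1 norm (shrink toward zero by lam)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def ista(A, y, lam=0.01, iters=300):
    """Iterative soft-thresholding for min_x 0.5||Ax - y||^2 + lam||x||_1,
    the workhorse of many compressive sensing reconstructions."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x
```

    In the MRI setting, A would be the under-sampled Fourier operator defined by the learned k-space trajectories, and the l1 term would act on a sparsifying transform of the image.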

  15. Novel techniques for data decomposition and load balancing for parallel processing of vision systems: Implementation and evaluation using a motion estimation system

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.

    1989-01-01

    Computer vision systems employ a sequence of vision algorithms in which the output of an algorithm is the input of the next algorithm in the sequence. Algorithms that constitute such systems exhibit vastly different computational characteristics, and therefore, require different data decomposition techniques and efficient load balancing techniques for parallel implementation. However, since the input data for a task is produced as the output data of the previous task, this information can be exploited to perform knowledge based data decomposition and load balancing. Presented here are algorithms for a motion estimation system. The motion estimation is based on the point correspondence between the involved images which are a sequence of stereo image pairs. Researchers propose algorithms to obtain point correspondences by matching feature points among stereo image pairs at any two consecutive time instants. Furthermore, the proposed algorithms employ non-iterative procedures, which results in saving considerable amounts of computation time. The system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from consecutive time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters.

  16. A fast image matching algorithm based on key points

    NASA Astrophysics Data System (ADS)

    Wang, Huilin; Wang, Ying; An, Ru; Yan, Peng

    2014-05-01

    Image matching is a very important technique in image processing. It has been widely used for object recognition and tracking, image retrieval, three-dimensional vision, change detection, aircraft position estimation, and multi-image registration. Based on the requirements of a matching algorithm for craft navigation, such as speed, accuracy and adaptability, a fast key point image matching method is investigated and developed. The main research tasks include: (1) Developing an improved celerity key point detection approach using a self-adapting threshold for Features from Accelerated Segment Test (FAST). A method of calculating the self-adapting threshold was introduced for images with different contrast. A Hessian matrix was adopted to eliminate insecure edge points in order to obtain key points with higher stability. This approach to detecting key points requires a small amount of computation and has high positioning accuracy and strong anti-noise ability; (2) Utilizing PCA-SIFT to describe key points. A 128-dimensional vector is formed with the SIFT method for each extracted key point. A low-dimensional feature space was established from the eigenvectors of all the key points, and each eigenvector was projected onto the feature space to form a low-dimensional eigenvector. These key points were re-described by the dimension-reduced eigenvectors. After reduction by PCA, the descriptor shrinks from the original 128 dimensions to 20. This reduces the dimensionality of approximate nearest-neighbor searching and thereby increases overall speed; (3) Using the distance ratio between the nearest and second-nearest neighbors as the measurement criterion for initial matching points, from which the original matched point pairs are obtained. Based on an analysis of the common methods used for eliminating false matching point pairs (e.g., RANSAC (random sample consensus) and Hough transform clustering), a heuristic local geometric restriction strategy is adopted to discard remaining false matched point pairs; and (4) Introducing an affine transformation model to correct the coordinate difference between the real-time image and the reference image, which results in the matching of the two images. SPOT5 remote sensing images captured on different dates and airborne images captured with different flight attitudes were used to test the performance of the method in terms of matching accuracy, operation time and ability to handle rotation. Results show the effectiveness of the approach.
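    Step (2), the PCA reduction of 128-dimensional SIFT descriptors to 20 dimensions, can be sketched via the SVD of the centered descriptor matrix (a generic PCA, not the authors' trained feature space):

```python
import numpy as np

def pca_reduce(desc, k=20):
    """Project descriptors (N, D) onto their top-k principal axes,
    mirroring the 128 -> 20 reduction described in the abstract."""
    centered = desc - desc.mean(axis=0)
    # Rows of vt are the principal axes, ordered by singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T
```

    Nearest-neighbor search then runs in the 20-dimensional space, which is where the speed gain comes from.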

  17. On the Relation Between Facular Bright Points and the Magnetic Field

    NASA Astrophysics Data System (ADS)

    Berger, Thomas; Shine, Richard; Tarbell, Theodore; Title, Alan; Scharmer, Goran

    1994-12-01

    Multi-spectral images of magnetic structures in the solar photosphere are presented. The images were obtained in the summers of 1993 and 1994 at the Swedish Solar Telescope on La Palma using the tunable birefringent Solar Optical Universal Polarimeter (SOUP) filter, a 10 Angstrom wide interference filter tuned to 4304 Angstroms in the band head of the CH radical (the Fraunhofer G-band), and a 3 Angstrom wide interference filter centered on the Ca II K absorption line. Three large-format CCD cameras with shuttered exposures on the order of 10 msec and frame rates of up to 7 frames per second were used to create time series of both quiet and active region evolution. The full field of view is 60 x 80 arcseconds (44 x 58 Mm). With the best seeing, structures as small as 0.22 arcseconds (160 km) in diameter are clearly resolved. Post-processing of the images results in rigid coalignment of the image sets to an accuracy comparable to the spatial resolution. Facular bright points with mean diameters of 0.35 arcseconds (250 km) and elongated filaments with lengths on the order of arcseconds (10^3 km) are imaged with contrast values of up to 60% by the G-band filter. Overlay of these images on contemporal Fe I 6302 Angstrom magnetograms and Ca II K images reveals that the bright points occur, without exception, on sites of magnetic flux through the photosphere. However, instances of concentrated and diffuse magnetic flux and Ca II K emission without associated bright points are common, leading to the conclusion that the presence of magnetic flux is a necessary but not sufficient condition for the occurrence of resolvable facular bright points. Comparison of the G-band and continuum images shows a complex relation between structures in the two bandwidths: bright points exceeding 350 km in extent correspond to distinct bright structures in the continuum; smaller bright points show no clear relation to continuum structures. 
    Size and contrast statistical cross-comparisons compiled from measurements of over two thousand bright point structures are presented. Preliminary analysis of the time evolution of bright points in the G-band reveals that the dominant mode of bright point evolution is fission of larger structures into smaller ones and fusion of small structures into conglomerate structures. The characteristic time scale for the fission/fusion process is on the order of minutes.

  18. Consistent and reproducible positioning in longitudinal imaging for phenotyping genetically modified swine

    NASA Astrophysics Data System (ADS)

    Hammond, Emily; Dilger, Samantha K. N.; Stoyles, Nicholas; Judisch, Alexandra; Morgan, John; Sieren, Jessica C.

    2015-03-01

    Recent growth of genetic disease models in swine has presented the opportunity to advance translation of developed imaging protocols while characterizing the genotype-to-phenotype relationship. Repeated imaging with multiple clinical modalities provides non-invasive detection, diagnosis, and monitoring of disease to accomplish these goals; however, longitudinal scanning requires repeatable and reproducible positioning of the animals. A modular positioning unit was designed to provide a fixed, stable base for the anesthetized animal through transit and imaging. After ventilation and sedation, animals were placed supine in the unit and monitored for consistent vitals. Comprehensive imaging was performed with a computed tomography (CT) chest-abdomen-pelvis scan at each screening time point. Longitudinal images were rigidly registered, accounting for rotation, translation, and anisotropic scaling, and the skeleton was isolated using a basic thresholding algorithm. Alignment was quantified via eleven pairs of corresponding points on the skeleton, with the first time point as the reference. Results were obtained with five animals over five screening time points. The developed unit aided in skeletal alignment to within an average of 13.13 +/- 6.7 mm for all five subjects, providing a strong foundation for developing qualitative and quantitative methods of disease tracking.
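    The rigid part of the longitudinal registration (rotation plus translation from corresponding skeletal landmark pairs) can be sketched with the standard Kabsch algorithm; note the study also estimated anisotropic scaling, which this minimal version omits:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (Kabsch): returns R, t such that
    dst is approximated by src @ R.T + t, given matched point sets."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

    Applied to the eleven corresponding skeletal points, the residual distances after such an alignment are exactly the kind of millimeter-scale error the abstract reports.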

  19. Ultrasound Picture Archiving And Communication Systems

    NASA Astrophysics Data System (ADS)

    Koestner, Ken; Hottinger, C. F.

    1982-01-01

    The ideal ultrasonic image communication and storage system must be flexible in order to optimize speed and minimize storage requirements. Various ultrasonic imaging modalities are quite different in data volume and speed requirements. Static imaging, for example B-Scanning, involves acquisition of a large amount of data that is averaged or accumulated in a desired manner. The image is then frozen in image memory before transfer and storage. Images are commonly a 512 x 512 point array, each point 6 bits deep. Transfer of such an image over a serial line at 9600 baud would require about three minutes. Faster transfer times are possible; for example, we have developed a parallel image transfer system using direct memory access (DMA) that reduces the time to 16 seconds. Data in this format requires 256K bytes for storage. Data compression can be utilized to reduce these requirements. Real-time imaging has much more stringent requirements for speed and storage. The amount of actual data per frame in real-time imaging is reduced due to physical limitations on ultrasound. For example, 100 scan lines (480 points long, 6 bits deep) can be acquired during a frame at a 30 per second rate. In order to transmit and save this data at a real-time rate requires a transfer rate of 8.6 Megabaud. A real-time archiving system would be complicated by the necessity of specialized hardware to interpolate between scan lines and perform desirable greyscale manipulation on recall. Image archiving for cardiology and radiology would require data transfer at this high rate to preserve temporal (cardiology) and spatial (radiology) information.
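    The transfer-rate figures in the abstract can be checked directly (treating baud as bits per second and ignoring framing overhead, which is an approximation):

```python
# Static image: 512 x 512 points, 6 bits deep, over a 9600-baud serial line.
image_bits = 512 * 512 * 6
serial_seconds = image_bits / 9600         # ~164 s, i.e. "about three minutes"

# Real-time imaging: 100 scan lines x 480 points x 6 bits, 30 frames/second.
realtime_baud = 100 * 480 * 6 * 30         # 8,640,000 -> the quoted 8.6 Megabaud

print(round(serial_seconds), realtime_baud)
```

    The quoted 256K-byte storage figure corresponds to one byte per point for the 512 x 512 array; packing the 6-bit samples would cut that to 192K bytes, which is where the data compression mentioned above comes in.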

  20. THEMIS Global Mosaics

    NASA Astrophysics Data System (ADS)

    Gorelick, N. S.; Christensen, P. R.

    2005-12-01

    We have developed techniques to make seamless, controlled global mosaics from the more than 50,000 multi-spectral infrared images of Mars returned by the THEMIS instrument aboard the Mars Odyssey spacecraft. These images cover more than 95% of the surface at 100 m/pixel resolution at both day and night local times. Uncertainties in the position and pointing of the spacecraft, varying local time, and imaging artifacts make creating well-registered mosaics from these datasets a challenging task. In preparation for making global mosaics, many full-resolution regional mosaics have been made. These mosaics typically cover an area of 10x10 degrees or smaller, and are constructed from only a few hundred images. To make regional mosaics, individual images are geo-rectified using the USGS ISIS software. This dead reckoning is sufficient to approximate position to within 400 m in cases where the SPICE information was downlinked. Further coregistration of images is handled in two ways: grayscale-difference minimization in overlapping regions through integer pixel shifting, or automatic tie-point generation using a radial symmetry transformation (RST). The RST identifies points within an image that exhibit 4-way symmetry. Martian craters tend to be very radially symmetric, and the RST can pinpoint a crater center to sub-pixel accuracy in both daytime and nighttime images, independent of lighting, time of day, or seasonal effects. Additionally, the RST works well on visible-light images and, in a 1D application, on MOLA tracks, to provide precision tie-points across multiple data sets. The RST often finds many points of symmetry that aren't related to surface features. These "false hits" are managed using a clustering algorithm that identifies constellations of points that occur in multiple images, independent of scaling or other affine transformations. This technique is able to make use of data in which the "good" tie-points comprise even less than 1% of total candidate tie-points. Once tie-points have been identified, the individual images are warped into their final shape and position, and then mosaicked and blended. To make seamless mosaics, each image can be level-adjusted to normalize its values using histogram fitting, but in most cases a linear contrast stretch to a fixed standard deviation is sufficient, although it destroys the absolute radiometry of the mosaic. For very large mosaics, a high-pass/low-pass separation, with the two components blended separately before recombining, has also produced good results.
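
    The blending step mentions a linear contrast stretch to a fixed standard deviation; a minimal sketch of that normalization (the target mean and standard deviation here are illustrative assumptions, not THEMIS processing parameters):

```python
import numpy as np

def stretch_to_std(img, target_std=40.0, target_mean=128.0):
    """Linear contrast stretch to a fixed standard deviation -- the kind of
    per-image normalization used before blending, which sacrifices the
    absolute radiometry of the mosaic."""
    img = img.astype(float)
    s = img.std()
    if s == 0:
        return np.full_like(img, target_mean)
    out = (img - img.mean()) / s * target_std + target_mean
    return np.clip(out, 0, 255)
```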

  1. Change Detection Based on Persistent Scatterer Interferometry - a New Method of Monitoring Building Changes

    NASA Astrophysics Data System (ADS)

    Yang, C. H.; Kenduiywo, B. K.; Soergel, U.

    2016-06-01

    Persistent Scatterer Interferometry (PSI) is a technique to detect a network of extracted persistent scatterer (PS) points which feature temporal phase stability and strong radar signal throughout a time series of SAR images. The small surface deformations at such PS points are estimated. PSI works particularly well in monitoring human settlements because the regular substructures of man-made objects give rise to a large number of PS points. If such structures and/or substructures substantially alter or even vanish due to a big change like construction, their PS points are discarded without additional exploration during the standard PSI procedure. Such rejected points are called big change (BC) points. On the other hand, incoherent change detection (ICD) relies on local comparison of multi-temporal images (e.g. image difference, image ratio) to highlight scene modifications at a larger size rather than detail level. However, image noise inevitably degrades ICD accuracy. We propose a change detection approach based on PSI to combine the benefits of PSI and ICD. PS points are extracted by the PSI procedure. A local change index is introduced to quantify the probability of a big change for each point. We propose an automatic thresholding method adopting the change index to extract BC points along with a clue to the period in which they emerge. In the end, PS and BC points are integrated into a change detection image. Our method is tested at a site just north of Berlin main station, where steady, demolished, and erected building substructures are successfully detected. The results are consistent with ground truth derived from a time series of aerial images provided by Google Earth. In addition, we apply our technique to traffic infrastructure, business district, and sports playground monitoring.
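
    The abstract does not spell out the automatic thresholding scheme; as a stand-in, the sketch below separates PS from BC points by applying Otsu's classic method to a per-point change index:

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Automatic threshold on a per-point change index (Otsu's method,
    used here as a stand-in for the paper's unspecified scheme): pick the
    split that maximizes the between-class variance of the histogram."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_var = centers[0], -1.0
    for i in range(1, nbins):
        w0, w1 = p[:i].sum(), p[i:].sum()
        if w0 == 0 or w1 == 0:
            continue  # one class empty -> no valid split here
        m0 = (p[:i] * centers[:i]).sum() / w0
        m1 = (p[i:] * centers[i:]).sum() / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[i]
    return best_t
```

    Points whose change index exceeds the threshold would be labeled BC; the rest remain ordinary PS points.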

  2. Temporally-Constrained Group Sparse Learning for Longitudinal Data Analysis in Alzheimer’s Disease

    PubMed Central

    Jie, Biao; Liu, Mingxia; Liu, Jun

    2016-01-01

    Sparse learning has been widely investigated for the analysis of brain images to assist the diagnosis of Alzheimer’s disease (AD) and its prodromal stage, i.e., mild cognitive impairment (MCI). However, most existing sparse learning-based studies only adopt cross-sectional analysis methods, where the sparse model is learned using data from a single time-point. In practice, multiple time-points of data are often available in brain imaging applications, which can be used by longitudinal analysis methods to better uncover disease progression patterns. Accordingly, in this paper we propose a novel temporally-constrained group sparse learning method for longitudinal analysis with multiple time-points of data. Specifically, we learn a sparse linear regression model using imaging data from multiple time-points, where a group regularization term is employed to group together the weights for the same brain region across different time-points. Furthermore, to reflect the smooth changes between data from adjacent time-points, we incorporate two smoothness regularization terms into the objective function: a fused smoothness term, which requires that the differences between two successive weight vectors from adjacent time-points be small, and an output smoothness term, which requires that the differences between the outputs of two successive models from adjacent time-points also be small. We develop an efficient optimization algorithm to solve the proposed objective function. Experimental results on the ADNI database demonstrate that, compared with conventional sparse learning-based methods, our proposed method can achieve improved regression performance and also help in discovering disease-related biomarkers. PMID:27093313
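
    The objective combines a least-squares loss, a group (L2,1) term over each region's weights across time-points, and the two smoothness terms. A numpy sketch of its value under assumed shapes (an illustration following the abstract's description, not the authors' implementation or optimizer):

```python
import numpy as np

def objective(W, Xs, ys, lam_group, lam_fused, lam_out):
    """Value of a temporally-constrained group-sparse objective.
    W: (d, T) weight matrix, one column per time-point.
    Xs, ys: length-T lists of data matrices (n_t, d) and targets (n_t,)."""
    T = W.shape[1]
    # squared loss summed over time-points
    loss = sum(np.sum((Xs[t] @ W[:, t] - ys[t]) ** 2) for t in range(T))
    # group (L2,1) term: ties the weights of one region across time-points
    group = np.sum(np.linalg.norm(W, axis=1))
    # fused smoothness: successive weight vectors should differ little
    fused = sum(np.sum((W[:, t + 1] - W[:, t]) ** 2) for t in range(T - 1))
    # output smoothness: successive models' outputs should differ little
    out = sum(np.sum((Xs[t] @ (W[:, t + 1] - W[:, t])) ** 2)
              for t in range(T - 1))
    return loss + lam_group * group + lam_fused * fused + lam_out * out
```

    Which data the output-smoothness term is evaluated on is not stated in the abstract; using the earlier time-point's data, as above, is one assumption.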

  3. Time Series UAV Image-Based Point Clouds for Landslide Progression Evaluation Applications

    PubMed Central

    Moussa, Adel; El-Sheimy, Naser; Habib, Ayman

    2017-01-01

    Landslides are major and constantly changing threats to urban landscapes and infrastructure. It is essential to detect and capture landslide changes regularly. Traditional methods for monitoring landslides are time-consuming, costly, dangerous, and the quality and quantity of the data is sometimes unable to meet the necessary requirements of geotechnical projects. This motivates the development of more automatic and efficient remote sensing approaches for landslide progression evaluation. Automatic change detection involving low-altitude unmanned aerial vehicle image-based point clouds, although proven, is relatively unexplored, and little research has been done in terms of accounting for volumetric changes. In this study, a methodology for automatically deriving change displacement rates, in a horizontal direction based on comparisons between extracted landslide scarps from multiple time periods, has been developed. Compared with the iterative closest projected point (ICPP) registration method, the developed method takes full advantage of automated geometric measuring, leading to fast processing. The proposed approach easily processes a large number of images from different epochs and enables the creation of registered image-based point clouds without the use of extensive ground control point information or further processing such as interpretation and image correlation. The produced results are promising for use in the field of landslide research. PMID:29057847

  4. Time Series UAV Image-Based Point Clouds for Landslide Progression Evaluation Applications.

    PubMed

    Al-Rawabdeh, Abdulla; Moussa, Adel; Foroutan, Marzieh; El-Sheimy, Naser; Habib, Ayman

    2017-10-18

    Landslides are major and constantly changing threats to urban landscapes and infrastructure. It is essential to detect and capture landslide changes regularly. Traditional methods for monitoring landslides are time-consuming, costly, dangerous, and the quality and quantity of the data is sometimes unable to meet the necessary requirements of geotechnical projects. This motivates the development of more automatic and efficient remote sensing approaches for landslide progression evaluation. Automatic change detection involving low-altitude unmanned aerial vehicle image-based point clouds, although proven, is relatively unexplored, and little research has been done in terms of accounting for volumetric changes. In this study, a methodology for automatically deriving change displacement rates, in a horizontal direction based on comparisons between extracted landslide scarps from multiple time periods, has been developed. Compared with the iterative closest projected point (ICPP) registration method, the developed method takes full advantage of automated geometric measuring, leading to fast processing. The proposed approach easily processes a large number of images from different epochs and enables the creation of registered image-based point clouds without the use of extensive ground control point information or further processing such as interpretation and image correlation. The produced results are promising for use in the field of landslide research.

  5. Dual time-point (18)F-FDG PET/CT to assess response to radiofrequency ablation of lung metastases.

    PubMed

    Lafuente, S; Fuster, D; Arguis, P; Granados, U; Perlaza, P; Paredes, P; Vollmer, I; Sánchez, M; Lomeña, F

    2016-01-01

    To establish the usefulness of dual time-point PET/CT imaging in determining the response to radiofrequency ablation (RFA) of solitary lung metastases from gastrointestinal cancer. This prospective study included 18 cases (3 female, 15 male, mean age 71±15 yrs) with solitary lung metastases from malignant digestive tract tumors who were candidates for RFA. PET/CT imaging 1h after injection of 4.07MBq/kg of (18)F-FDG (standard images) was performed at baseline, 1 month, and 3 months after RFA. PET/CT imaging 2h after injection, centered on the thorax, was also performed at 1 month after RFA (delayed images). A retention index (RI) of the dual time-point images was calculated as follows: RI=[(SUVmax delayed image-SUVmax standard image)/SUVmax standard image]*100. Pathological confirmation of residual tumor by histology of the treated lesion was considered local recurrence. A negative imaging follow-up was considered complete response. Local recurrence was found in 6/18 lesions, and complete response in the remaining 12. The mean percentage change in SUVmax at 1 month and at 3 months showed a sensitivity and specificity for PET/CT of 50% and 33%, and 67% and 92%, respectively. The RI at 1 month after RFA showed a sensitivity and specificity of 83% and 92%, respectively. Dual time-point PET/CT can predict the outcome at one month after RFA in lung metastases from digestive tract cancers. The RI can be used to indicate the need for further procedures to rule out persistent tumor due to incomplete RFA. Copyright © 2015 Elsevier España, S.L.U. and SEMNIM. All rights reserved.
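
    With explicit parentheses, the retention index is straightforward to compute:

```python
def retention_index(suv_standard, suv_delayed):
    """RI = (SUVmax_delayed - SUVmax_standard) / SUVmax_standard * 100,
    following the dual time-point protocol described in the abstract."""
    return (suv_delayed - suv_standard) / suv_standard * 100.0
```

    For example, an SUVmax rising from 4.0 on the standard image to 5.0 on the delayed image gives an RI of 25.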

  6. Toward a Global Bundle Adjustment of SPOT 5 - HRS Images

    NASA Astrophysics Data System (ADS)

    Massera, S.; Favé, P.; Gachet, R.; Orsoni, A.

    2012-07-01

    The HRS (High Resolution Stereoscopic) instrument carried on SPOT 5 enables quasi-simultaneous acquisition of stereoscopic images on wide segments - 120 km wide - with two forward- and backward-looking telescopes observing the Earth at an angle of 20° ahead of and behind the vertical. For 8 years IGN (Institut Géographique National) has been developing techniques to achieve spatiotriangulation of these images. During this time the capacity for bundle adjustment of SPOT 5 - HRS spatial images has largely improved. Today a single global block composed of about 20,000 images can be computed in reasonable calculation time. The progression was achieved step by step: the first computed blocks were composed of only 40 images, then bigger blocks were computed. Finally only one global block is now computed. At the same time, calculation tools have improved: for example, the adjustment of 2,000 images of North Africa takes about 2 minutes, whereas 8 hours were needed two years ago. To reach such a result a new independent software package was developed to compute fast and efficient bundle adjustments. At the same time, equipment - GCPs (Ground Control Points) and tie points - and techniques have also evolved over the last 10 years. Studies were made to derive recommendations about the equipment needed to make an accurate single block. Tie points can now be quickly and automatically computed with SURF (Speeded Up Robust Features) techniques. Today the updated equipment is composed of about 500 GCPs, and studies show that the ideal configuration is around 100 tie points per square degree. With such equipment, the location of the global HRS block becomes accurate to a few meters, whereas non-adjusted images are only accurate to about 15 m. This paper describes the methods used in IGN Espace to compute a single global block composed of almost 20,000 HRS images, 500 GCPs, and several million tie points in reasonable calculation time. Such a block offers many advantages. Because the global block is unique, it becomes easier to manage the history and the different evolutions of the computations (new images, new GCPs or tie points). The location is now unique and consequently coherent all around the world, avoiding steps and artifacts on the borders of DSMs (Digital Surface Models) and OrthoImages historically calculated from different blocks. No extrapolation far from GCPs at the limits of images is done anymore. Using the global block as a reference will allow new images from other sources to be easily located on this reference.

  7. Motion illusions in optical art presented for long durations are temporally distorted.

    PubMed

    Nather, Francisco Carlos; Mecca, Fernando Figueiredo; Bueno, José Lino Oliveira

    2013-01-01

    Static figurative images implying human body movements observed for shorter and longer durations affect the perception of time. This study examined whether images of static geometric shapes would affect the perception of time. Undergraduate participants observed two Optical Art paintings by Bridget Riley for 9 or 36 s (groups G9 and G36, respectively). Paintings implying different intensities of movement (2.0- and 6.0-point stimuli) were randomly presented. The prospective paradigm with the reproduction method was used to record time estimations. Data analysis did not show time distortions in the G9 group. In the G36 group the paintings were perceived differently: the duration of the 2.0-point painting was estimated to be shorter than that of the 6.0-point one. Also for G36, the 2.0-point painting was underestimated in comparison with the actual time of exposure. Motion illusions in static images affected time estimation according to the attention given to the complexity of movement by the observer, probably leading to changes in the storage velocity of internal clock pulses.

  8. Active point out-of-plane ultrasound calibration

    NASA Astrophysics Data System (ADS)

    Cheng, Alexis; Guo, Xiaoyu; Zhang, Haichong K.; Kang, Hyunjae; Etienne-Cummings, Ralph; Boctor, Emad M.

    2015-03-01

    Image-guided surgery systems are often used to provide surgeons with informational support. Due to several unique advantages such as ease of use, real-time image acquisition, and no ionizing radiation, ultrasound is a common intraoperative medical imaging modality used in image-guided surgery systems. To perform advanced forms of guidance with ultrasound, such as virtual image overlays or automated robotic actuation, an ultrasound calibration process must be performed. This process recovers the rigid body transformation between a tracked marker attached to the transducer and the ultrasound image. Point-based phantoms are considered to be accurate, but their calibration framework assumes that the point is in the image plane. In this work, we present the use of an active point phantom and a calibration framework that accounts for the elevational uncertainty of the point. Given the lateral and axial position of the point in the ultrasound image, we approximate a circle in the axial-elevational plane with a radius equal to the axial position. The standard approach transforms all of the imaged points to a single physical point. In our approach, we minimize the distances between the circular subsets of each image, which ideally intersect at a single point. We ran simulations for noiseless and noisy cases, presenting results on out-of-plane estimation errors, calibration estimation errors, and point reconstruction precision. We also performed an experiment using a robot arm as the tracker, resulting in a point reconstruction precision of 0.64 mm.
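
    The idea of intersecting per-image circles can be illustrated with a toy least-squares problem: find the point whose distance to every circle's surface is minimal. This gradient-descent sketch (2D, with hypothetical circle centers and radii) only illustrates the geometry, not the paper's calibration solver:

```python
import numpy as np

def fit_common_point(centers, radii, iters=500, lr=1.0):
    """Gradient descent on 0.5 * sum_i (||p - c_i|| - r_i)^2, i.e. the
    point closest in least squares to every circle surface. In the
    calibration analogy, each image constrains the phantom point to a
    circle of radius equal to its axial position, not to a single pixel."""
    centers = np.asarray(centers, float)
    radii = np.asarray(radii, float)
    p = centers.mean(axis=0)
    for _ in range(iters):
        diff = p - centers
        d = np.maximum(np.linalg.norm(diff, axis=1), 1e-12)
        r = d - radii                       # signed distance to each surface
        grad = ((r / d)[:, None] * diff).sum(axis=0)
        p = p - lr * grad / len(centers)
    return p
```

    With three circles chosen to pass through a single common point, the iteration recovers that intersection.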

  9. Implementation of an IMU Aided Image Stacking Algorithm in a Digital Camera for Unmanned Aerial Vehicles

    PubMed Central

    Audi, Ahmad; Pierrot-Deseilligny, Marc; Meynard, Christophe

    2017-01-01

    Images acquired with a long exposure time using a camera embedded on UAVs (Unmanned Aerial Vehicles) exhibit motion blur due to the erratic movements of the UAV. The aim of the present work is to be able to acquire several images with a short exposure time and use an image processing algorithm to produce a stacked image with an equivalent long exposure time. Our method is based on the feature point image registration technique. The algorithm is implemented on the light-weight IGN (Institut national de l’information géographique) camera, which has an IMU (Inertial Measurement Unit) sensor and an SoC (System on Chip)/FPGA (Field-Programmable Gate Array). To obtain the correct parameters for the resampling of the images, the proposed method accurately estimates the geometrical transformation between the first and the N-th images. Feature points are detected in the first image using the FAST (Features from Accelerated Segment Test) detector, then homologous points on other images are obtained by template matching using an initial position benefiting greatly from the presence of the IMU sensor. The SoC/FPGA in the camera is used to speed up some parts of the algorithm in order to achieve real-time performance as our ultimate objective is to exclusively write the resulting image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, resource usage summary, resulting processing time, resulting images and block diagrams of the described architecture. The resulting stacked image obtained for real surveys does not seem visually impaired. An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real time the gyrometers of the IMU. Timing results demonstrate that the image resampling part of this algorithm is the most demanding processing task and should also be accelerated in the FPGA in future work. PMID:28718788

  10. Implementation of an IMU Aided Image Stacking Algorithm in a Digital Camera for Unmanned Aerial Vehicles.

    PubMed

    Audi, Ahmad; Pierrot-Deseilligny, Marc; Meynard, Christophe; Thom, Christian

    2017-07-18

    Images acquired with a long exposure time using a camera embedded on UAVs (Unmanned Aerial Vehicles) exhibit motion blur due to the erratic movements of the UAV. The aim of the present work is to be able to acquire several images with a short exposure time and use an image processing algorithm to produce a stacked image with an equivalent long exposure time. Our method is based on the feature point image registration technique. The algorithm is implemented on the light-weight IGN (Institut national de l'information géographique) camera, which has an IMU (Inertial Measurement Unit) sensor and an SoC (System on Chip)/FPGA (Field-Programmable Gate Array). To obtain the correct parameters for the resampling of the images, the proposed method accurately estimates the geometrical transformation between the first and the N -th images. Feature points are detected in the first image using the FAST (Features from Accelerated Segment Test) detector, then homologous points on other images are obtained by template matching using an initial position benefiting greatly from the presence of the IMU sensor. The SoC/FPGA in the camera is used to speed up some parts of the algorithm in order to achieve real-time performance as our ultimate objective is to exclusively write the resulting image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, resource usage summary, resulting processing time, resulting images and block diagrams of the described architecture. The resulting stacked image obtained for real surveys does not seem visually impaired. An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real time the gyrometers of the IMU. Timing results demonstrate that the image resampling part of this algorithm is the most demanding processing task and should also be accelerated in the FPGA in future work.

  11. A 3D photographic capsule endoscope system with full field of view

    NASA Astrophysics Data System (ADS)

    Ou-Yang, Mang; Jeng, Wei-De; Lai, Chien-Cheng; Kung, Yi-Chinn; Tao, Kuan-Heng

    2013-09-01

    Current capsule endoscope uses one camera to capture the surface image in the intestine. It can only observe the abnormal point, but cannot know the exact information of this abnormal point. Using two cameras can generate 3D images, but the visual plane changes while capsule endoscope rotates. It causes that two cameras can't capture the images information completely. To solve this question, this research provides a new kind of capsule endoscope to capture 3D images, which is 'A 3D photographic capsule endoscope system'. The system uses three cameras to capture images in real time. The advantage is increasing the viewing range up to 2.99 times respect to the two camera system. The system can accompany 3D monitor provides the exact information of symptom points, helping doctors diagnose the disease.

  12. A Parallel Point Matching Algorithm for Landmark Based Image Registration Using Multicore Platform

    PubMed Central

    Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L.; Foran, David J.

    2013-01-01

    Point matching is crucial for many computer vision applications. Establishing the correspondence between a large number of data points is a computationally intensive process. Some point matching related applications, such as medical image registration, require real-time or near-real-time performance when applied to critical clinical tasks like image-assisted surgery. In this paper, we report a new multicore platform based parallel algorithm for fast point matching in the context of landmark based medical image registration. We introduce a non-regular data partition algorithm which utilizes the K-means clustering algorithm to group the landmarks based on the number of available processing cores, optimizing memory usage and data transfer. We have tested our method using the IBM Cell Broadband Engine (Cell/B.E.) platform. The results demonstrated a significant speed up over the sequential implementation. The proposed data partition and parallelization algorithm, though tested only on one multicore platform, is generic by design. Therefore the parallel algorithm can be extended to other computing platforms, as well as other point matching related applications. PMID:24308014
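
    The non-regular partition can be sketched as a plain k-means grouping of landmarks, one spatially coherent group per core (numpy-only; the authors' Cell/B.E. implementation is not reproduced here):

```python
import numpy as np

def partition_landmarks(points, n_cores, iters=20, seed=0):
    """Group landmarks with k-means so that each processing core receives
    a spatially coherent chunk, which keeps each core's working set local."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, float)
    # initialize cluster centers from randomly chosen landmarks
    centers = pts[rng.choice(len(pts), n_cores, replace=False)]
    for _ in range(iters):
        # assign each landmark to its nearest center
        labels = np.argmin(
            np.linalg.norm(pts[:, None] - centers[None], axis=2), axis=1)
        # move each center to the mean of its assigned landmarks
        for k in range(n_cores):
            if np.any(labels == k):
                centers[k] = pts[labels == k].mean(axis=0)
    return labels, centers
```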

  13. Assessment of Image Quality of Repeated Limited Transthoracic Echocardiography After Cardiac Surgery.

    PubMed

    Canty, David J; Heiberg, Johan; Tan, Jen A; Yang, Yang; Royse, Alistair G; Royse, Colin F; Mobeirek, Abdulelah; Shaer, Fayez El; Albacker, Turki; Nazer, Rakan I; Fouda, Muhammed; Bakir, Bakir M; Alsaddique, Ahmed A

    2017-06-01

    The use of limited transthoracic echocardiography (TTE) has been restricted in patients after cardiac surgery due to reported poor image quality. The authors hypothesized that the hemodynamic state could be evaluated in a high proportion of patients at repeated intervals after cardiac surgery. Prospective observational study. Tertiary university hospital. The study comprised 51 patients aged 18 years or older presenting for cardiac surgery. Patients underwent TTE before surgery and at 3 time points after cardiac surgery. Images were assessed offline using an image quality scoring system by 2 expert observers. Hemodynamic state was assessed using the iHeartScan protocol, and the primary endpoint was the proportion of limited TTE studies in which the hemodynamic state was interpretable at each of the 3 postoperative time points. Hemodynamic state interpretability varied over time and was highest before surgery (90%) and lowest on the first postoperative day (49%) (p < 0.01). This variation in interpretability over time was reflected in all 3 transthoracic windows, ranging from 43% to 80% before surgery and from 2% to 35% on the first postoperative day (p < 0.01). Image quality scores were highest with the apical window, ranging from 53% to 77% across time points, and lowest with the subcostal window, ranging from 4% to 70% across time points (p < 0.01). Hemodynamic state can be determined with TTE in a high proportion of cardiac surgery patients after extubation and removal of surgical drains. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Multi-rate, real time image compression for images dominated by point sources

    NASA Technical Reports Server (NTRS)

    Huber, A. Kris; Budge, Scott E.; Harris, Richard W.

    1993-01-01

    An image compression system recently developed for compression of digital images dominated by point sources is presented. Encoding consists of minimum-mean removal, vector quantization, adaptive threshold truncation, and modified Huffman encoding. Simulations are presented showing that the peaks corresponding to point sources can be transmitted losslessly for low signal-to-noise ratios (SNR) and high point source densities while maintaining a reduced output bit rate. Encoding and decoding hardware has been built and tested which processes 552,960 12-bit pixels per second at compression rates of 10:1 and 4:1. Simulation results are presented for the 10:1 case only.
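
    The mean-removal and threshold-truncation stages of the encoding chain can be sketched as follows (the `keep_fraction` rule is an assumption for illustration; the flight hardware's actual adaptive rule is not described in the abstract):

```python
import numpy as np

def threshold_truncate(block, keep_fraction=0.1):
    """Remove the block mean, then keep only the largest-magnitude
    residuals (the point-source peaks) and zero the rest, so the peaks
    survive losslessly while the background is truncated."""
    mean = block.mean()
    resid = block - mean                         # mean removal
    k = max(1, int(keep_fraction * resid.size))  # how many residuals to keep
    thresh = np.sort(np.abs(resid).ravel())[-k]  # adaptive threshold
    out = np.where(np.abs(resid) >= thresh, resid, 0)
    return out, mean
```

    The retained residuals plus the block mean would then feed the entropy coder (modified Huffman in the paper's pipeline).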

  15. Brute Force Matching Between Camera Shots and Synthetic Images from Point Clouds

    NASA Astrophysics Data System (ADS)

    Boerner, R.; Kröhnert, M.

    2016-06-01

    3D point clouds, acquired by state-of-the-art terrestrial laser scanning techniques (TLS), provide spatial information with accuracies of up to several millimetres. Unfortunately, common TLS data has no spectral information about the covered scene. However, the matching of TLS data with images is important for monoplotting purposes and point cloud colouration. Well-established methods solve this issue by matching close range images and point cloud data, by fitting optical camera systems on top of laser scanners or by using ground control points. The approach addressed in this paper aims at matching 2D image and 3D point cloud data from a freely moving camera within an environment covered by a large 3D point cloud, e.g. a 3D city model. The key advantage of free movement benefits augmented reality applications and real-time measurements. Therefore, a so-called real image, captured by a smartphone camera, has to be matched with a so-called synthetic image, generated by reverse-projecting the 3D point cloud data to a synthetic projection centre whose exterior orientation parameters match those of the real image, assuming an ideal distortion-free camera.
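
    The synthetic image is obtained by projecting the point cloud through an ideal, distortion-free pinhole camera; a minimal sketch (K, R, t are assumed intrinsics and exterior orientation, not values from the paper):

```python
import numpy as np

def project_points(points_xyz, K, R, t, width, height):
    """Reverse-project 3D points onto a synthetic image plane with an
    ideal pinhole camera. Returns the pixel coordinates of the points
    that land inside the image and in front of the camera."""
    pts = np.asarray(points_xyz, float)
    cam = (R @ pts.T).T + t            # world -> camera frame
    in_front = cam[:, 2] > 0           # drop points behind the camera
    uvw = (K @ cam[in_front].T).T
    uv = uvw[:, :2] / uvw[:, 2:3]      # perspective division
    ok = ((uv[:, 0] >= 0) & (uv[:, 0] < width)
          & (uv[:, 1] >= 0) & (uv[:, 1] < height))
    return uv[ok]
```

    Rendering these projected points into a raster (with per-point intensity from the scan) would yield the synthetic image to be matched against the smartphone photo.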

  16. Image Registration Algorithm Based on Parallax Constraint and Clustering Analysis

    NASA Astrophysics Data System (ADS)

    Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song

    2018-01-01

    To resolve the problem of slow computation speed and low matching accuracy in image registration, a new image registration algorithm based on parallax constraint and clustering analysis is proposed. Firstly, the Harris corner detection algorithm is used to extract the feature points of two images. Secondly, the Normalized Cross Correlation (NCC) function is used to perform approximate matching of the feature points, yielding the initial feature pairs. Then, according to the parallax constraint condition, the initial feature pairs are preprocessed by the K-means clustering algorithm, which removes feature point pairs with obvious errors from the approximate matching process. Finally, the Random Sample Consensus (RANSAC) algorithm is adopted to optimize the feature points and obtain the final feature point matching result, realizing fast and accurate image registration. The experimental results show that the image registration algorithm proposed in this paper can improve the accuracy of image matching while ensuring the real-time performance of the algorithm.
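
    The final RANSAC stage can be illustrated with a minimal translation-only variant (the paper fits a fuller registration model; this simplified stand-in shows the sample/score/refit loop):

```python
import numpy as np

def ransac_translation(src, dst, n_iter=200, tol=2.0, seed=0):
    """RANSAC for a 2D translation between matched feature points:
    hypothesize from one correspondence, count inliers within `tol`
    pixels, keep the best hypothesis, then refit on its inliers."""
    rng = np.random.default_rng(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    best_inliers = np.zeros(len(src), bool)
    for _ in range(n_iter):
        i = rng.integers(len(src))          # one sample fixes a translation
        t = dst[i] - src[i]
        inliers = np.linalg.norm(dst - (src + t), axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on the consensus set
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return t, best_inliers
```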

  17. Study of Automatic Image Rectification and Registration of Scanned Historical Aerial Photographs

    NASA Astrophysics Data System (ADS)

    Chen, H. R.; Tseng, Y. H.

    2016-06-01

    Historical aerial photographs directly provide good evidence of past times. The Research Center for Humanities and Social Sciences (RCHSS) of Taiwan's Academia Sinica has collected and scanned numerous historical maps and aerial images of Taiwan and China. Some maps or images have been geo-referenced manually, but most historical aerial images have not been registered, since no GPS or IMU data were available in the past to assist with orientation. In our research, we developed an automatic process for matching historical aerial images by SIFT (Scale Invariant Feature Transform) to handle the great quantity of images by computer vision. SIFT is one of the most popular methods of image feature extraction and matching. This algorithm extracts extreme values in scale space into invariant image features, which are robust to changes in rotation, scale, noise, and illumination. We also use RANSAC (Random Sample Consensus) to remove outliers and obtain good conjugate points between photographs. Finally, we manually add control points for registration through least squares adjustment based on the collinearity equations. In the future, we can use the image feature points of more photographs to build a control image database. Every new image will be treated as a query image. If the feature points of a query image match features in the database, the query image probably overlaps the control images. As the database is updated, more and more query images can be matched and aligned automatically. Other research on multi-period environmental change can then be investigated with these geo-referenced spatiotemporal data.
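
    Matching a query image's descriptors against a control-image database typically uses nearest-neighbour search with Lowe's ratio test; a numpy sketch with toy 2-D descriptors (real SIFT descriptors are 128-D, and production systems use approximate nearest-neighbour indexes):

```python
import numpy as np

def match_descriptors(desc_q, desc_db, ratio=0.8):
    """Lowe-style ratio-test matching: accept a query descriptor's nearest
    database neighbour only if it is clearly closer than the second
    nearest. Returns (query_index, db_index) pairs."""
    d = np.linalg.norm(desc_q[:, None] - desc_db[None], axis=2)
    order = np.argsort(d, axis=1)
    best, second = order[:, 0], order[:, 1]
    dq = np.arange(len(desc_q))
    good = d[dq, best] < ratio * d[dq, second]
    return np.stack([dq[good], best[good]], axis=1)
```

    The surviving pairs would then go through RANSAC, as in the paper, before the least squares adjustment.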

  18. Feature-based registration of historical aerial images by Area Minimization

    NASA Astrophysics Data System (ADS)

    Nagarajan, Sudhagar; Schenk, Toni

    2016-06-01

    The registration of historical images plays a significant role in assessing changes in land topography over time. By comparing historical aerial images with recent data, geometric changes that have taken place over the years can be quantified. However, the lack of ground control information and precise camera parameters has limited scientists' ability to reliably incorporate historical images into change detection studies. Other limitations include the methods of determining identical points between recent and historical images, which has proven to be a cumbersome task due to continuous land cover changes. Our research demonstrates a method of registering historical images using Time Invariant Line (TIL) features. TIL features are different representations of the same line features in multi-temporal data without explicit point-to-point or straight line-to-straight line correspondence. We successfully determined the exterior orientation of historical images by minimizing the area formed between corresponding TIL features in recent and historical images. We then tested the feasibility of the approach with synthetic and real data and analyzed the results. Based on our analysis, this method shows promise for long-term 3D change detection studies.

  19. 3D single point imaging with compressed sensing provides high temporal resolution R2* mapping for in vivo preclinical applications.

    PubMed

    Rioux, James A; Beyea, Steven D; Bowen, Chris V

    2017-02-01

    Purely phase-encoded techniques such as single point imaging (SPI) are generally unsuitable for in vivo imaging due to lengthy acquisition times. Reconstruction of highly undersampled data using compressed sensing allows SPI data to be quickly obtained from animal models, enabling applications in preclinical cellular and molecular imaging. TurboSPI is a multi-echo single point technique that acquires hundreds of images with microsecond spacing, enabling high temporal resolution relaxometry of large-R2* systems such as iron-loaded cells. TurboSPI acquisitions can be pseudo-randomly undersampled in all three dimensions to increase artifact incoherence, and can provide prior information to improve reconstruction. We evaluated the performance of CS-TurboSPI in phantoms, a rat ex vivo, and a mouse in vivo. An algorithm for iterative reconstruction of TurboSPI relaxometry time courses does not affect image quality or R2* mapping in vitro at acceleration factors up to 10. Imaging ex vivo is possible at similar acceleration factors, and in vivo imaging is demonstrated at an acceleration factor of 8, such that acquisition time is under 1 h. Accelerated TurboSPI enables preclinical R2* mapping without loss of data quality, and may show increased specificity to iron oxide compared to other sequences.

  20. Diffusion-weighted MR imaging of upper abdominal organs at different time points: Apparent diffusion coefficient normalization using a reference organ.

    PubMed

    Song, Ji Soo; Kwak, Hyo Sung; Byon, Jung Hee; Jin, Gong Yong

    2017-05-01

    To compare the apparent diffusion coefficient (ADC) of upper abdominal organs acquired at different time points, and to investigate the usefulness of normalization. We retrospectively evaluated 58 patients who underwent three rounds of magnetic resonance (MR) imaging including diffusion-weighted imaging of the upper abdomen. MR examinations were performed using three different 3.0 Tesla (T) systems and one 1.5T system, with variable b value combinations and respiratory motion compensation techniques. The ADC values of the upper abdominal organs from the three time points were analyzed, using the ADC values of the paraspinal muscle (ADCpsm) and spleen (ADCspleen) for normalization. Intraclass correlation coefficients (ICC) and comparison of dependent ICCs were used for statistical analysis. The ICCs of the original ADC and ADCpsm showed fair to substantial agreement, while ADCspleen showed substantial to almost perfect agreement. The ICC of ADCspleen for all anatomical regions showed less variability than that of the original ADC (P < 0.005). Normalized ADC using the spleen as a reference organ significantly decreased measurement variability in the upper abdominal organs across different MR systems at different time points, and could be regarded as an imaging biomarker for future multicenter, longitudinal studies. 5 J. MAGN. RESON. IMAGING 2017;45:1494-1501. © 2016 International Society for Magnetic Resonance in Medicine.

  1. Passive lighting responsive three-dimensional integral imaging

    NASA Astrophysics Data System (ADS)

    Lou, Yimin; Hu, Juanmei

    2017-11-01

    A three-dimensional (3D) integral imaging (II) technique with real-time passive lighting responsiveness and vivid 3D performance has been proposed and demonstrated. Some novel lighting responsive phenomena, including light-activated 3D imaging and light-controlled 3D image scaling and translation, have been realized optically without updating the images. By switching the on/off state of a point light source illuminating the proposed II system, the 3D images can be shown or hidden independently of the diffused illumination background. By changing the position or illumination direction of the point light source, the position and magnification of the 3D image can be modulated in real time. The lighting responsive mechanism of the 3D II system is derived analytically and verified experimentally. A flexible thin-film lighting responsive II system with a thickness of 0.4 mm was fabricated. This technique provides additional degrees of freedom in designing II systems and enables the virtual 3D image to interact with the real illumination environment in real time.

  2. Comparison of dynamic FDG-microPET study in a rabbit turpentine-induced inflammatory model and in a rabbit VX2 tumor model.

    PubMed

    Hamazawa, Yoshimasa; Koyama, Koichi; Okamura, Terue; Wada, Yasuhiro; Wakasa, Tomoko; Okuma, Tomohisa; Watanabe, Yasuyoshi; Inoue, Yuichi

    2007-01-01

    We investigated the optimum time for differentiating tumor from inflammation using dynamic FDG-microPET scans obtained with a MicroPET P4 scanner in animal models. Forty-six rabbits with 92 inflammatory lesions, induced 2, 5, 7, 14, 30 and 60 days after injection of 0.2 ml (Group 1) or 1.0 ml (Group 2) of turpentine oil, were used as inflammatory models. Five rabbits with 10 VX2 tumors were used as the tumor model. Helical CT scans were performed before the PET studies. In the PET study, after 4 hours of fasting, transmission scans were followed by dynamic emission data acquisition until 2 hours after intravenous FDG injection. Images were reconstructed every 10 minutes using a filtered back-projection method. PET images were analyzed visually with reference to the CT images. For quantitative analysis, the inflammation-to-muscle (I/M) and tumor-to-muscle (T/M) ratios were calculated after regions of interest were placed in lesions and muscles with reference to the CT images, and time-I/M ratio and time-T/M ratio curves (TRCs) were prepared to show the change in these ratios over time. The histological appearance of both inflammatory and tumor lesions was examined and compared with the CT and FDG-microPET images. In both visual and quantitative analysis, all I/M and T/M ratios increased over time, except on day 60 in Group 1, which showed an almost flat curve. The TRC of the T/M ratio increased linearly over time, while the TRCs of the I/M ratios increased at most parabolically. FDG uptake in the inflammatory lesions reflected the histological findings. For differentiating tumors from inflammatory lesions with the early image acquired at 40 min in dual-time-point imaging, the delayed image must be acquired 30 min after the early image, whereas imaging at 90 min or later after intravenous FDG injection was necessary for single-time-point imaging. Our results suggest the possibility of shortening the overall examination time in clinical practice by adopting dual-time-point rather than single-time-point imaging.

  3. Numerical modeling of a point-source image under relative motion of radiation receiver and atmosphere

    NASA Astrophysics Data System (ADS)

    Kucherov, A. N.; Makashev, N. K.; Ustinov, E. V.

    1994-02-01

    A procedure is proposed for numerical modeling of instantaneous and averaged (over various time intervals) distant-point-source images perturbed by a turbulent atmosphere that moves relative to the radiation receiver. Examples of image calculations under conditions of the significant effect of atmospheric turbulence in an approximation of geometrical optics are presented and analyzed.

  4. Image registration of naval IR images

    NASA Astrophysics Data System (ADS)

    Rodland, Arne J.

    1996-06-01

    In a real-world application, an image from a stabilized sensor on a moving platform will never be 100 percent stabilized. There will always be a small unknown error in the stabilization due to factors such as dynamic deformation of the structure between the sensor and the reference inertial navigation unit, servo inaccuracies, etc. For a high-resolution imaging sensor, this stabilization error causes the image to move several pixels in an unknown direction between frames. To be able to detect and track small moving objects with such a sensor, this unknown movement of the sensor image must be estimated. An algorithm that searches for land contours in the image has been evaluated. The algorithm searches for high-contrast points distributed over the whole image. As long as moving objects cover only a small area of the scene, most of the points are located on solid ground. By matching the list of points from frame to frame, the movement of the image due to stabilization errors can be estimated and compensated. The point list is then searched for points whose movement diverges from the estimated stabilization error; these points are assumed to lie on moving objects and are gradually exchanged with new points located in the same area. Most of the processing is performed on the list of points rather than on the complete image, so the algorithm is very fast and well suited for real-time implementation. The algorithm has been tested on images from an experimental IR scanner, with stabilization errors added artificially so that the output of the algorithm could be compared with the added errors.
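
    The core idea, estimating a global shift from tracked points and flagging divergent points as movers, can be sketched as follows; the median is one robust choice when most points sit on static terrain (toy data, not the author's implementation):

```python
import numpy as np

def estimate_shift(prev_pts, curr_pts, thresh=2.0):
    """Estimate the global image shift from tracked contour points.

    The stabilization error is taken as the median displacement, which is
    robust as long as most points lie on static ground; points whose motion
    diverges from it are flagged as belonging to moving objects.
    """
    disp = curr_pts - prev_pts
    shift = np.median(disp, axis=0)
    moving = np.linalg.norm(disp - shift, axis=1) > thresh
    return shift, moving

prev_pts = np.array([[10., 10.], [50., 20.], [80., 70.], [30., 90.], [60., 40.]])
curr_pts = prev_pts + np.array([3., -1.])   # stabilization error of the frame
curr_pts[2] += np.array([12., 8.])          # one point sits on a moving object
shift, moving = estimate_shift(prev_pts, curr_pts)
print(shift, moving)                        # [3, -1]; only index 2 is flagged
```

    Subtracting the estimated shift compensates the stabilization error before the tracker looks for genuine target motion.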

  5. 3D Surface Reconstruction and Volume Calculation of Rills

    NASA Astrophysics Data System (ADS)

    Brings, Christine; Gronz, Oliver; Becker, Kerstin; Wirtz, Stefan; Seeger, Manuel; Ries, Johannes B.

    2015-04-01

    We use the low-cost, user-friendly photogrammetric Structure from Motion (SfM) technique, implemented in the software VisualSfM, for 3D surface reconstruction and volume calculation of an 18-meter-long rill in Luxembourg. The images were taken with a Canon HD video camera (1) before a natural rainfall event, (2) after a natural rainfall event and before a rill experiment, and (3) after a rill experiment. Compared to a still camera, recording with a video camera not only yields a huge time advantage; the method also guarantees more than adequately overlapping sharp images. For each model, approximately 8 minutes of video were taken. As SfM needs single images, we automatically selected the sharpest image from every interval of 15 frames, estimating sharpness with a derivative-based metric. VisualSfM then detects feature points in each image, searches for matching feature points in all image pairs, recovers the camera positions, and finally, by triangulation of camera positions and feature points, reconstructs a point cloud of the rill surface. From the point cloud, 3D surface models (meshes) are created, and difference calculations between the pre and post models allow visualization of the changes (erosion and accumulation areas) and quantification of erosion volumes. The calculated volumes are expressed in the spatial units of the models, so real values must be obtained via reference measurements. The outputs are three models at three different points in time. The results show that, especially for images taken from suboptimal video (bad lighting conditions, low surface contrast, excessive motion blur), the sharpness-based selection leads to many more matching features; the point densities of the 3D models are thereby increased, which in turn improves the calculations.
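
    A derivative-based sharpness metric for frame selection can be sketched as below; the exact metric used by the authors is not specified, so this mean-squared-gradient score and the toy frames are assumptions:

```python
import numpy as np

def sharpness(img):
    """Derivative-based sharpness score: mean squared gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2))

def sharpest(frames):
    """Select the sharpest frame of an interval (here, per 15 frames)."""
    return max(frames, key=sharpness)

rng = np.random.default_rng(0)
frame = rng.standard_normal((64, 64))            # frame with fine detail
blurred = 0.25 * (np.roll(frame, 1, 0) + np.roll(frame, -1, 0)
                  + np.roll(frame, 1, 1) + np.roll(frame, -1, 1))
picked = sharpest([blurred, frame, blurred])
print(picked is frame)                           # True: smoothing lowers the score
```

    Motion blur acts like the smoothing above, so blurred video frames score lower and are skipped.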

  6. Semantic focusing allows fully automated single-layer slide scanning of cervical cytology slides.

    PubMed

    Lahrmann, Bernd; Valous, Nektarios A; Eisenmann, Urs; Wentzensen, Nicolas; Grabe, Niels

    2013-01-01

    Liquid-based cytology (LBC) in conjunction with Whole-Slide Imaging (WSI) enables the objective, sensitive, and quantitative evaluation of biomarkers in cytology. However, the complex three-dimensional distribution of cells on LBC slides requires manual focusing, long scanning times, and multi-layer scanning. Here, we present a solution that overcomes these limitations in two steps: first, we ensure that focus points are set only on cells; second, we check the total focus quality of the slide. In a first analysis we found that superficial dust can be separated from the cell layer (the thin layer of cells on the glass slide) itself. We then analyzed 2,295 individual focus points from 51 LBC slides stained for p16 and Ki67. Using the number of edges in a focus point image, specific color values, and size-inclusion filters, focus points detecting cells could be distinguished from focus points on artifacts (accuracy 98.6%). Sharpness, as the total focus quality of a virtual LBC slide, is computed from 5 sharpness features. We trained a multi-parameter SVM classifier on 1,600 images. On an independent validation set of 3,232 cell images we achieved an accuracy of 94.8% for classifying images as focused. Our results show that single-layer scanning of LBC slides is possible and how it can be achieved. We assembled focus point analysis and sharpness classification into a fully automatic, iterative workflow, free of user intervention, which repeats slide scanning as necessary. On 400 LBC slides we achieved a scanning time of 13.9±10.1 min with 29.1±15.5 focus points. In summary, the integration of semantic focus information into whole-slide imaging allows automatic high-quality imaging of LBC slides and subsequent biomarker analysis.

  7. Evaluation of 2-point, 3-point, and 6-point Dixon magnetic resonance imaging with flexible echo timing for muscle fat quantification.

    PubMed

    Grimm, Alexandra; Meyer, Heiko; Nickel, Marcel D; Nittka, Mathias; Raithel, Esther; Chaudry, Oliver; Friedberger, Andreas; Uder, Michael; Kemmler, Wolfgang; Quick, Harald H; Engelke, Klaus

    2018-06-01

    The purpose of this study is to evaluate and compare 2-point (2pt), 3-point (3pt), and 6-point (6pt) Dixon magnetic resonance imaging (MRI) sequences with flexible echo times (TE) for measuring the proton density fat fraction (PDFF) within muscles. Two subject groups were recruited (G1: 23 young and healthy men, 31 ± 6 years; G2: 50 elderly, sarcopenic men, 77 ± 5 years). A 3-T MRI system was used to perform Dixon imaging on the left thigh. PDFF was measured with six Dixon prototype sequences: 2pt, 3pt, and 6pt sequences, each once with optimal TEs (in- and opposed-phase echo times), lower resolution, and higher bandwidth (optTE sequences) and once with higher image resolution (highRes sequences) and the shortest possible TEs. Intra-fascia PDFF content was determined. To evaluate the comparability of the sequences, Bland-Altman analysis was performed. The highRes 6pt Dixon sequence served as the reference, as a high correlation of this sequence with magnetic resonance spectroscopy has been shown before. The PDFF difference between the highRes 6pt Dixon sequence and the optTE 6pt, both 3pt, and the optTE 2pt sequences was low (between 2.2% and 4.4%), but the difference to the highRes 2pt Dixon sequence was large (33%). For the optTE sequences, the difference decreased as the number of echoes increased. In conclusion, for Dixon sequences with more than two echoes, the fat fraction measurement was reliable with arbitrary echo times, while for 2pt Dixon sequences it was reliable only with dedicated in- and opposed-phase echo timing. Copyright © 2018 Elsevier B.V. All rights reserved.
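
    The textbook 2-point Dixon separation behind these sequences can be sketched as follows. This assumes ideal in- and opposed-phase echo timing and water-dominant voxels; it is a didactic sketch, not the vendor prototype evaluated in the study:

```python
import numpy as np

def dixon_2pt_pdff(in_phase, opposed_phase):
    """Basic 2-point Dixon fat fraction from magnitude images.

    With ideal echo timing, in_phase = W + F and opposed_phase = W - F,
    so water = (IP + OP) / 2, fat = (IP - OP) / 2, and
    PDFF = fat / (water + fat).
    """
    water = (in_phase + opposed_phase) / 2.0
    fat = (in_phase - opposed_phase) / 2.0
    return fat / (water + fat)

# A voxel with 20% fat: W = 80, F = 20 (arbitrary signal units).
ip = np.array([100.0])   # W + F
op = np.array([60.0])    # W - F
print(dixon_2pt_pdff(ip, op))   # [0.2]
```

    Multi-echo (3pt/6pt) variants fit the same signal model at arbitrary TEs, which is why they remain reliable when the echoes are not exactly in/opposed phase.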

  8. Toxicological Tipping Points: Learning Boolean Networks from High-Content Imaging Data. (BOSC meeting)

    EPA Science Inventory

    The objective of this work is to elucidate biological networks underlying cellular tipping points using time-course data. We discretized the high-content imaging (HCI) data and inferred Boolean networks (BNs) that could accurately predict dynamic cellular trajectories. We found t...

  9. Comparison among Reconstruction Algorithms for Quantitative Analysis of 11C-Acetate Cardiac PET Imaging.

    PubMed

    Shi, Ximin; Li, Nan; Ding, Haiyan; Dang, Yonghong; Hu, Guilan; Liu, Shuai; Cui, Jie; Zhang, Yue; Li, Fang; Zhang, Hui; Huo, Li

    2018-01-01

    Kinetic modeling of dynamic 11C-acetate PET imaging provides quantitative information for myocardium assessment. The quality and quantitation of PET images are known to depend on the PET reconstruction method. This study aims to investigate the impact of reconstruction algorithms on the quantitative analysis of dynamic 11C-acetate cardiac PET imaging. Patients with suspected alcoholic cardiomyopathy (N = 24) underwent 11C-acetate dynamic PET imaging after a low-dose CT scan. PET images were reconstructed using four algorithms: filtered backprojection (FBP), ordered subsets expectation maximization (OSEM), OSEM with time-of-flight (TOF), and OSEM with both time-of-flight and point-spread-function modeling (TPSF). Standardized uptake values (SUVs) at different time points were compared among images reconstructed using the four algorithms. Time-activity curves (TACs) in the myocardium and ventricular blood pools were generated from the dynamic image series. Kinetic parameters K1 and k2 were derived using a 1-tissue-compartment model for kinetic modeling of cardiac flow from the 11C-acetate PET images. Significant image quality improvement was found in the images reconstructed using the iterative OSEM-type algorithms (OSEM, TOF, and TPSF) compared with FBP. However, no statistical differences in SUVs were observed among the four reconstruction methods at the selected time points. The kinetic parameters K1 and k2 also exhibited no statistical difference among the four reconstruction algorithms in terms of mean value and standard deviation. In the correlation analysis, however, OSEM reconstruction presented a relatively higher residual in correlation with FBP reconstruction compared with TOF and TPSF, while TOF and TPSF reconstructions were highly correlated with each other. All the tested reconstruction algorithms performed similarly for quantitative analysis of 11C-acetate cardiac PET imaging. TOF and TPSF yielded highly consistent kinetic parameter results with superior image quality compared with FBP, while OSEM was relatively less reliable. Both TOF and TPSF are recommended for cardiac 11C-acetate kinetic analysis.
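
    The 1-tissue-compartment model underlying the K1/k2 fit can be sketched numerically; the mono-exponential input function and the parameter values here are toy assumptions, not data from the study:

```python
import numpy as np

def one_tissue_model(cp, t, K1, k2):
    """Tissue TAC from a 1-tissue-compartment model:
    C_t(t) = K1 * integral of Cp(tau) * exp(-k2 * (t - tau)) d tau,
    evaluated as a discrete convolution on a uniform time grid."""
    dt = t[1] - t[0]
    kernel = np.exp(-k2 * t)
    return K1 * np.convolve(cp, kernel)[: len(t)] * dt

t = np.arange(0, 10, 0.01)               # minutes, fine grid
cp = np.exp(-0.5 * t)                    # toy mono-exponential input function
ct = one_tissue_model(cp, t, K1=0.8, k2=0.3)

# Analytic solution for this input: K1/(0.5-k2) * (exp(-k2 t) - exp(-0.5 t))
analytic = 0.8 / (0.5 - 0.3) * (np.exp(-0.3 * t) - np.exp(-0.5 * t))
print(np.max(np.abs(ct - analytic)) < 0.02)   # True: discretization error is small
```

    In practice the fit runs the other way: K1 and k2 are adjusted until the modeled C_t matches the measured myocardial TAC, with Cp taken from the blood-pool TAC.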

  10. Observing Bridge Dynamic Deflection in Green Time by Information Technology

    NASA Astrophysics Data System (ADS)

    Yu, Chengxin; Zhang, Guojian; Zhao, Yongqian; Chen, Mingzhi

    2018-01-01

    As traditional surveying methods are limited in observing bridge dynamic deflection, information technology is adopted to observe bridge dynamic deflection during green time. Information technology here means that we use digital cameras to photograph the bridge during red time as a zero image; a series of successive images is then photographed during green time. Deformation point targets are identified and located by the Hough transform. With reference to the control points, the deformation values of these deformation points are obtained by differencing each successive image with the zero image. Results show that the average measurement accuracies of C0 are 0.46 pixels, 0.51 pixels and 0.74 pixels in the X, Z and combined directions, and those of C1 are 0.43 pixels, 0.43 pixels and 0.67 pixels. The maximum bridge deflection is 44.16 mm, which is less than the 75 mm bridge deflection tolerance. The approach can monitor bridge dynamic deflection and depict deflection trend curves of the bridge in real time, providing data support for on-site decisions on bridge structural safety.

  11. A Spaceborne Synthetic Aperture Radar Partial Fixed-Point Imaging System Using a Field-Programmable Gate Array-Application-Specific Integrated Circuit Hybrid Heterogeneous Parallel Acceleration Technique

    PubMed Central

    Li, Bingyi; Chen, Liang; Wei, Chunpeng; Xie, Yizhuang; Chen, He; Yu, Wenyue

    2017-01-01

    With the development of satellite load technology and very large scale integrated (VLSI) circuit technology, onboard real-time synthetic aperture radar (SAR) imaging systems have become a solution for allowing rapid response to disasters. A key goal of the onboard SAR imaging system design is to achieve high real-time processing performance with severe size, weight, and power consumption constraints. In this paper, we analyse the computational burden of the commonly used chirp scaling (CS) SAR imaging algorithm. To reduce the system hardware cost, we propose a partial fixed-point processing scheme. The fast Fourier transform (FFT), which is the most computation-sensitive operation in the CS algorithm, is processed with fixed-point, while other operations are processed with single precision floating-point. With the proposed fixed-point processing error propagation model, the fixed-point processing word length is determined. The fidelity and accuracy relative to conventional ground-based software processors is verified by evaluating both the point target imaging quality and the actual scene imaging quality. As a proof of concept, a field-programmable gate array-application-specific integrated circuit (FPGA-ASIC) hybrid heterogeneous parallel accelerating architecture is designed and realized. The customized fixed-point FFT is implemented using the 130 nm complementary metal oxide semiconductor (CMOS) technology as a co-processor of the Xilinx xc6vlx760t FPGA. A single processing board requires 12 s and consumes 21 W to focus a 50-km swath width, 5-m resolution stripmap SAR raw data with a granularity of 16,384 × 16,384. PMID:28672813

  12. A Spaceborne Synthetic Aperture Radar Partial Fixed-Point Imaging System Using a Field-Programmable Gate Array-Application-Specific Integrated Circuit Hybrid Heterogeneous Parallel Acceleration Technique.

    PubMed

    Yang, Chen; Li, Bingyi; Chen, Liang; Wei, Chunpeng; Xie, Yizhuang; Chen, He; Yu, Wenyue

    2017-06-24

    With the development of satellite load technology and very large scale integrated (VLSI) circuit technology, onboard real-time synthetic aperture radar (SAR) imaging systems have become a solution for allowing rapid response to disasters. A key goal of the onboard SAR imaging system design is to achieve high real-time processing performance with severe size, weight, and power consumption constraints. In this paper, we analyse the computational burden of the commonly used chirp scaling (CS) SAR imaging algorithm. To reduce the system hardware cost, we propose a partial fixed-point processing scheme. The fast Fourier transform (FFT), which is the most computation-sensitive operation in the CS algorithm, is processed with fixed-point, while other operations are processed with single precision floating-point. With the proposed fixed-point processing error propagation model, the fixed-point processing word length is determined. The fidelity and accuracy relative to conventional ground-based software processors is verified by evaluating both the point target imaging quality and the actual scene imaging quality. As a proof of concept, a field-programmable gate array-application-specific integrated circuit (FPGA-ASIC) hybrid heterogeneous parallel accelerating architecture is designed and realized. The customized fixed-point FFT is implemented using the 130 nm complementary metal oxide semiconductor (CMOS) technology as a co-processor of the Xilinx xc6vlx760t FPGA. A single processing board requires 12 s and consumes 21 W to focus a 50-km swath width, 5-m resolution stripmap SAR raw data with a granularity of 16,384 × 16,384.

  13. Event-by-event PET image reconstruction using list-mode origin ensembles algorithm

    NASA Astrophysics Data System (ADS)

    Andreyev, Andriy

    2016-03-01

    There is great demand for real-time or event-by-event (EBE) image reconstruction in emission tomography. Ideally, as soon as an event has been detected by the acquisition electronics, it should be used by the image reconstruction software. This would greatly speed up image reconstruction, since most of the data would be processed and reconstructed while the patient is still undergoing the scan. Unfortunately, the current industry standard is that reconstruction does not start until all the data for the current image frame have been acquired. Implementing EBE reconstruction for the MLEM family of algorithms is possible, but not straightforward, as multiple (computationally expensive) updates to the image estimate are required. In this work, an alternative Origin Ensembles (OE) image reconstruction algorithm for PET imaging is converted to EBE mode, and we investigate whether it is a viable alternative for real-time image reconstruction. In the OE algorithm, all acquired events are seen as points located somewhere along the corresponding lines of response (LORs), together forming a point cloud. Iteratively, through a multitude of quasi-random shifts guided by the likelihood function, the point cloud converges to a reflection of the actual radiotracer distribution with a degree of accuracy similar to MLEM. New data can be naturally added to the point cloud. Preliminary results with simulated data show little difference between regular reconstruction and EBE mode, proving the feasibility of the proposed approach.

  14. Augmented reality system using lidar point cloud data for displaying dimensional information of objects on mobile phones

    NASA Astrophysics Data System (ADS)

    Gupta, S.; Lohani, B.

    2014-05-01

    Mobile augmented reality is the next-generation technology for intelligently visualising the 3D real world, and it is expanding at a fast pace, upgrading the smart phone to an intelligent device. The research problem identified and presented in the current work is to view the actual dimensions of objects captured by a smart phone in real time. The proposed methodology first establishes correspondence between a LiDAR point cloud stored on a server and the image captured by the mobile device. This correspondence is established using the exterior and interior orientation parameters of the mobile camera and the coordinates of the LiDAR points that lie in the viewshed of the mobile camera. A pseudo-intensity image is generated from the LiDAR points and their intensities. The mobile image and the pseudo-intensity image are then registered using the SIFT image registration method, creating a pipeline that locates the point in the point cloud corresponding to a point (pixel) on the mobile image. The second part of the method uses the point cloud data to compute dimensional information for pairs of points selected on the mobile image and overlays the dimensions on top of the image. This paper describes all steps of the proposed method and uses an experimental setup to mimic the mobile phone and server system, presenting some initial but encouraging results.

  15. Concrete thawing studied by single-point ramped imaging.

    PubMed

    Prado, P J; Balcom, B J; Beyea, S D; Armstrong, R L; Bremner, T W

    1997-12-01

    A series of two-dimensional images of the proton distribution in a hardened concrete sample has been obtained during thawing (from -50 degrees C up to 11 degrees C). The SPRITE sequence is optimal for this study given the characteristically short relaxation times of water in this porous medium (T2* < 200 µs and T1 < 3.6 ms). The relaxation parameters of the sample were determined in order to optimize the time efficiency of the sequence, permitting a 4-scan 64 x 64 acquisition in under 3 min. The image acquisition is fast on the time scale of the temperature evolution of the specimen. The frozen water distribution is quantified through a position-based study of the image contrast. A multiple-point acquisition method is presented and the resulting improvement in signal sensitivity is discussed.

  16. The algorithm of fast image stitching based on multi-feature extraction

    NASA Astrophysics Data System (ADS)

    Yang, Chunde; Wu, Ge; Shi, Jing

    2018-05-01

    This paper proposes an improved image registration method combining Hu invariant moment contour information with feature point detection, aiming to solve the problems of traditional image stitching algorithms, such as time-consuming feature point extraction, redundant invalid information, and inefficiency. First, the neighborhood of each pixel is used to extract contour information, and the Hu invariant moments serve as a similarity measure for extracting SIFT feature points in similar regions. Then the Euclidean distance is replaced with the Hellinger kernel function to improve initial matching efficiency and reduce mismatched points, and the affine transformation matrix between the images is estimated. Finally, a local color mapping method is adopted to correct uneven exposure, and an improved multiresolution fusion algorithm fuses the mosaic images to realize seamless stitching. Experimental results confirm the high accuracy and efficiency of the proposed method.
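
    The Hellinger kernel comparison of two histogram-like descriptors can be sketched as below; the toy 128-bin descriptors are assumptions, standing in for real SIFT descriptors:

```python
import numpy as np

def hellinger_match(d1, d2):
    """Hellinger kernel similarity between two histogram-like descriptors.

    Descriptors are L1-normalized first; the kernel is sum(sqrt(x_i * y_i)),
    which equals the dot product of the element-wise square roots
    (the transform behind 'RootSIFT' matching).
    """
    a = d1 / d1.sum()
    b = d2 / d2.sum()
    return float(np.sum(np.sqrt(a * b)))

rng = np.random.default_rng(1)
desc = rng.uniform(0, 1, 128)             # toy 128-bin SIFT-like descriptor
noisy = desc + rng.uniform(0, 0.05, 128)  # slightly perturbed copy
other = rng.uniform(0, 1, 128)            # unrelated descriptor
print(hellinger_match(desc, desc))        # identical descriptors -> 1.0
print(hellinger_match(desc, noisy) > hellinger_match(desc, other))  # True
```

    Compared with Euclidean distance, this kernel down-weights the large bins that dominate squared differences, which tends to reduce false matches between gradient histograms.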

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hui, C; Beddar, S; Wen, Z

    Purpose: The purpose of this study is to develop a technique to obtain four-dimensional (4D) magnetic resonance (MR) images that are more representative of a patient’s typical breathing cycle by utilizing an extended acquisition time while minimizing image artifacts. Methods: The 4D MR data were acquired with balanced steady-state free precession in a two-dimensional sagittal plane of view. Each slice was acquired repeatedly for about 15 s, thereby obtaining multiple images at each of the 10 phases of the respiratory cycle. This improves the probability that at least one of the images was acquired at the desired phase during a regular breathing cycle. To create optimal 4D MR images, an iterative approach was used to identify the set of images that yielded the highest slice-to-slice similarity. To assess the effectiveness of the approach, the data set was truncated into periods of 7 s (50 time points), 11 s (75 time points) and the full 15 s (100 time points). The 4D MR images were then sorted with data of the three different acquisition periods for comparison. Results: In general, the 4D MR images sorted using data from longer acquisition periods showed fewer mismatch artifacts. In addition, the normalized cross correlation (NCC) between slices of a 4D volume increased with the acquisition period: the average NCC was 0.791 for the 7 s period, 0.794 for the 11 s period and 0.796 for the 15 s period. Conclusion: Our preliminary study showed that extending the acquisition time with the proposed sorting technique can improve image quality and reduce artifacts in 4D MR images. Data acquisition over two breathing cycles is a good trade-off between artifact reduction and scan time. This research was partially funded by the Center for Radiation Oncology Research at UT MD Anderson Cancer Center.

  18. A novel data processing technique for image reconstruction of penumbral imaging

    NASA Astrophysics Data System (ADS)

    Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin

    2011-06-01

    A CT image reconstruction technique was applied to the data processing of penumbral imaging. Compared with traditional processing techniques for penumbral coded pinhole images, such as Wiener, Lucy-Richardson and blind deconvolution, this approach is new: the coded aperture is processed, for the first time, independently of the point spread function of the imaging diagnostic system. In this way, the technical obstacle in traditional coded pinhole image processing caused by the uncertainty of the system's point spread function was overcome. Building on the theoretical study, a simulation of penumbral imaging and image reconstruction was carried out and provided fairly good results. In a visible-light experiment, a point source was used to irradiate a 5 mm × 5 mm object after diffuse scattering and volume scattering, and penumbral images were acquired with an aperture size of ~20 mm. Finally, the CT image reconstruction technique was used for image reconstruction and provided a fairly good result.

  19. The "Best Worst" Field Optimization and Focusing

    NASA Technical Reports Server (NTRS)

    Vaughnn, David; Moore, Ken; Bock, Noah; Zhou, Wei; Ming, Liang; Wilson, Mark

    2008-01-01

    A simple algorithm for optimizing and focusing lens designs is presented. The goal of the algorithm is to simultaneously create the best and most uniform image quality over the field of view. Rather than assigning relative weights to multiple field points, only the image quality of the worst field point is considered. When optimizing a lens design, iterations make this worst field point better until a different field point becomes the worst. The same technique is used to determine the focus position. The algorithm works with all the various image quality metrics, with both symmetrical and asymmetrical systems, and with both theoretical models and real hardware.
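
    The "best worst" criterion is a plain minimax rule. A toy sketch, assuming a precomputed merit value per field point for each candidate focus position (the names and data layout here are hypothetical, not from the paper):

```python
def best_worst_merit(field_merits):
    """Merit of a configuration = quality of its WORST field point.

    `field_merits` maps field points to image quality (higher is
    better); only the minimum matters, which drives the optimizer
    toward uniform quality over the field.
    """
    return min(field_merits)

def pick_focus(candidates):
    """Choose the focus position whose worst field point is best.

    `candidates` is a list of (focus_position, [merit per field point]).
    """
    return max(candidates, key=lambda c: best_worst_merit(c[1]))[0]
```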

  20. Interpolation of longitudinal shape and image data via optimal mass transport

    NASA Astrophysics Data System (ADS)

    Gao, Yi; Zhu, Liang-Jia; Bouix, Sylvain; Tannenbaum, Allen

    2014-03-01

    Longitudinal analysis of medical imaging data has become central to the study of many disorders. Unfortunately, various constraints (study design, patient availability, technological limitations) restrict the acquisition of data to only a few time points, limiting the study of continuous disease/treatment progression. Having the ability to produce a sensible time interpolation of the data can lead to improved analysis, such as intuitive visualizations of anatomical changes, or the creation of more samples to improve statistical analysis. In this work, we model interpolation of medical image data, in particular shape data, using the theory of optimal mass transport (OMT), which can construct a continuous transition between two time points while preserving "mass" (e.g., image intensity, shape volume) during the transition. The theory even allows a short extrapolation in time and may help predict short-term treatment impact or disease progression on anatomical structure. We apply the proposed method to the hippocampus-amygdala complex in schizophrenia, the heart in atrial fibrillation, and full head MR images in traumatic brain injury.
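
    In one dimension, OMT displacement interpolation reduces to blending quantile functions, which gives a feel for how mass-preserving interpolation differs from naively blending intensities. A minimal 1-D sketch (the paper works with full 2-D/3-D images and shapes; this is only illustrative):

```python
import numpy as np

def omt_interpolate_1d(x, p, q, t, n_quantiles=1000):
    """Displacement interpolation between 1-D densities p and q at time t.

    In 1-D, optimal mass transport reduces to matching quantiles: the
    interpolant's quantile function is the (1-t, t) blend of the two
    input quantile functions, so total mass is preserved throughout.
    Returns sample positions of the interpolated distribution.
    """
    p = p / p.sum()
    q = q / q.sum()
    levels = (np.arange(n_quantiles) + 0.5) / n_quantiles
    xp = np.interp(levels, np.cumsum(p), x)  # quantiles of p
    xq = np.interp(levels, np.cumsum(q), x)  # quantiles of q
    return (1 - t) * xp + t * xq
```

At t = 0.5 a density concentrated near one location moves halfway toward the other, rather than fading out in one place while fading in at the other as a linear blend of intensities would.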

  1. Implementation of real-time nonuniformity correction with multiple NUC tables using FPGA in an uncooled imaging system

    NASA Astrophysics Data System (ADS)

    Oh, Gyong Jin; Kim, Lyang-June; Sheen, Sue-Ho; Koo, Gyou-Phyo; Jin, Sang-Hun; Yeo, Bo-Yeon; Lee, Jong-Ho

    2009-05-01

    This paper presents a real-time implementation of Non-Uniformity Correction (NUC). Two-point correction and shutter-based one-point correction were carried out in an uncooled imaging system intended for a missile application. To design a small, lightweight, high-speed imaging system for a missile system, an SoPC (System on a Programmable Chip) comprising an FPGA and a soft processor core (MicroBlaze) was used. Real-time NUC and the generation of control signals are implemented in the FPGA. In addition, three different NUC tables were prepared to shorten the operating time and to reduce power consumption over a large range of environmental temperatures. The imaging system consists of optics and four electronics boards: a detector interface board, an analog-to-digital converter board, a detector signal generation board and a power supply board. To evaluate the imaging system, the NETD was measured; it was less than 160 mK at three different environmental temperatures.
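
    Two-point correction itself is a per-pixel linear calibration derived from two uniform reference scenes at known temperatures. A minimal sketch of the arithmetic (not the FPGA implementation described above):

```python
import numpy as np

def two_point_nuc(raw, low_ref, high_ref, target_low, target_high, eps=1e-9):
    """Two-point non-uniformity correction.

    `low_ref` / `high_ref` are per-pixel responses to uniform scenes at
    two known temperatures. A per-pixel gain and offset map every pixel
    onto the same straight line, so a uniform scene yields a uniform
    corrected image.
    """
    gain = (target_high - target_low) / (high_ref - low_ref + eps)
    offset = target_low - gain * low_ref
    return gain * raw + offset
```

In hardware, the gain and offset arrays are exactly what an NUC table stores; switching tables for different ambient temperatures (as in the paper) just swaps these arrays.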

  2. ERRATUM: In vivo evaluation of a neural stem cell-seeded prosthesis In vivo evaluation of a neural stem cell-seeded prosthesis

    NASA Astrophysics Data System (ADS)

    Purcell, E. K.; Seymour, J. P.; Yandamuri, S.; Kipke, D. R.

    2009-08-01

    In the published article, an error was made in figure 5. Specifically, the three-month, NSC-seeded image is a duplicate of the six-week image, and the one-day, probe alone image is a duplicate of the three-month image. The corrected figure is reproduced below. Figure 5. Glial encapsulation of each probe condition over the 3 month time course. Ox-42 labeled microglia and GFAP labeled astrocytes are shown. Images are taken from probes implanted in the same animal at each time point. NSC seeding was associated with reduced non-neuronal density at 1 day post-implantation in comparison to alginate coated probes and at the 1 week time point in comparison to untreated probes (P < 0.001). Glial activation is at its overall peak 1 week after insertion. A thin encapsulation layer surrounds probes at the 6 week and 3 month time points, with NSC-seeded probes having the greatest surrounding non-neuronal density (P < 0.001). Interestingly, microglia appeared to have a ramified, or `surveilling', morphology surrounding a neural stem cell-alginate probe initially, whereas activated cells with an amoeboid structure were found near an alginate probe in the same hemisphere of one animal (left panels).

  3. Parallel algorithm for determining motion vectors in ice floe images by matching edge features

    NASA Technical Reports Server (NTRS)

    Manohar, M.; Ramapriyan, H. K.; Strong, J. P.

    1988-01-01

    A parallel algorithm is described to determine motion vectors of ice floes using time sequences of images of the Arctic ocean obtained from the Synthetic Aperture Radar (SAR) instrument flown on-board the SEASAT spacecraft. Researchers describe a parallel algorithm which is implemented on the MPP for locating corresponding objects based on their translationally and rotationally invariant features. The algorithm first approximates the edges in the images by polygons or sets of connected straight-line segments. Each such edge structure is then reduced to a seed point. Associated with each seed point are the descriptions (lengths, orientations and sequence numbers) of the lines constituting the corresponding edge structure. A parallel matching algorithm is used to match packed arrays of such descriptions to identify corresponding seed points in the two images. The matching algorithm is designed such that fragmentation and merging of ice floes are taken into account by accepting partial matches. The technique has been demonstrated to work on synthetic test patterns and real image pairs from SEASAT in times ranging from 0.5 to 0.7 seconds for 128 x 128 images.

  4. Dynamic image reconstruction: MR movies from motion ghosts.

    PubMed

    Xiang, Q S; Henkelman, R M

    1992-01-01

    It has been previously shown that an image with motion ghost artifacts can be decomposed into a ghost mask superimposed over a ghost-free image. The present study demonstrates that the ghost components carry useful dynamic information and should not be discarded. Specifically, ghosts of different orders indicate the intensity and phase of the corresponding harmonics contained in the quasi-periodically varying spin-density distribution. A summation of the ghosts weighted by appropriate temporal phase factors can give a time-dependent dynamic image that is a movie of the object motion. This dynamic image reconstruction technique does not necessarily require monitoring of the motion and thus is easy to implement and operate. It also has a shorter imaging time than point-by-point imaging of temporal variation, because the periodic motion is more efficiently sampled with a limited number of harmonics recorded in the motion ghosts. This technique was tested in both moving phantoms and volunteers. It is believed to be useful for dynamic imaging of time-varying anatomic structures, such as in the cardiovascular system.

  5. A new template matching method based on contour information

    NASA Astrophysics Data System (ADS)

    Cai, Huiying; Zhu, Feng; Wu, Qingxiao; Li, Sicong

    2014-11-01

    Template matching is a significant approach in machine vision due to its effectiveness and robustness. However, most template matching methods are so time consuming that they cannot be used in many real-time applications. Closed contour matching is a popular kind of template matching. This paper presents a new closed contour template matching method suitable for two-dimensional objects. A coarse-to-fine searching strategy is used to improve the matching efficiency, and a partial computation elimination scheme is proposed to further speed up the searching process. The method consists of offline model construction and online matching. In model construction, triples and a distance image are obtained from the template image. A number of triples, each composed of three points, are created from the contour information extracted from the template image; the three points are chosen so that they divide the template contour into three equal parts. The distance image is obtained by distance transform: each point of the distance image holds the distance to the nearest point on the template contour. During matching, triples of the searching image are created by the same rule as the triples of the model. Through the similarity between triangles, which is invariant to rotation, translation and scaling, the triples corresponding to the triples of the model are found. From these we obtain the initial RST (rotation, translation and scaling) parameters mapping the searching contour onto the template contour. To speed up the search, the points on the searching contour are sampled to reduce the number of triples. To verify the RST parameters, the searching contour is projected into the distance image, and the mean distance can be computed rapidly by simple additions and multiplications. In the fine searching process, the initial RST parameters are discretized and refined to obtain the final accurate pose of the object. Experimental results show that the proposed method is reasonable and efficient, and can be used in many real-time applications.
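
    The distance-image verification step described above is essentially chamfer scoring. A small sketch, assuming a boolean contour mask and candidate contour points already transformed by the RST parameters (a brute-force distance transform stands in for an efficient one):

```python
import numpy as np

def build_distance_image(contour_mask):
    """Distance image by brute force: each pixel holds the Euclidean
    distance to the nearest contour pixel. Fine for small templates;
    real systems use a linear-time distance transform instead."""
    contour = np.argwhere(contour_mask)
    rr, cc = np.indices(contour_mask.shape)
    pix = np.stack([rr.ravel(), cc.ravel()], axis=1)
    d = np.sqrt(((pix[:, None, :] - contour[None, :, :]) ** 2).sum(-1)).min(1)
    return d.reshape(contour_mask.shape)

def mean_contour_distance(points, distance_image):
    """Score a candidate pose: project transformed contour points into
    the distance image and average the looked-up distances (chamfer
    score); 0 means a perfect fit."""
    rows = np.clip(np.round(points[:, 0]).astype(int), 0, distance_image.shape[0] - 1)
    cols = np.clip(np.round(points[:, 1]).astype(int), 0, distance_image.shape[1] - 1)
    return float(distance_image[rows, cols].mean())
```

Because scoring is a pure table lookup plus a mean, it costs only additions and multiplications per candidate pose, which is the speed argument made in the abstract.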

  6. Real-time implementation of camera positioning algorithm based on FPGA & SOPC

    NASA Astrophysics Data System (ADS)

    Yang, Mingcao; Qiu, Yuehong

    2014-09-01

    In recent years, with the development of positioning algorithms and FPGAs, real-time camera positioning implemented on an FPGA has become possible, offering both rapidity and accuracy. Through an in-depth study of embedded hardware and dual-camera positioning, this thesis sets up an infrared optical positioning system based on an FPGA and an SOPC system, which enables real-time positioning of marker points in space. The completed work includes: (1) a CMOS sensor, driven by FPGA hardware, extracts the pixels of three target objects; visible-light LEDs are used as the target points of the instrument. (2) Prior to extraction of the feature point coordinates, the image is filtered (median filtering is used here) to suppress noise introduced by the physical properties of the platform. (3) The marker point coordinates are extracted by an FPGA hardware circuit: a new iterative threshold selection method segments the images, the binary image is then labeled, and the coordinates of the feature points are calculated by the center-of-gravity method. (4) Direct linear transformation (DLT) with epipolar constraints is applied to the three-dimensional reconstruction of space coordinates from the planar-array CMOS system. An SOPC system-on-a-chip is used, taking advantage of its dual-core architecture to run matching and coordinate operations separately, thus increasing processing speed.
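
    The center-of-gravity step in (3) can be sketched as a thresholded, intensity-weighted centroid (illustrative only; the thesis implements this in FPGA hardware):

```python
import numpy as np

def centroid_after_threshold(image, threshold):
    """Feature point coordinate by the center-of-gravity method.

    Threshold the image, then take the intensity-weighted mean position
    (row, col) of the remaining pixels; the result is sub-pixel
    accurate, which is why the method is popular for LED spot centers.
    """
    mask = image > threshold
    weights = np.where(mask, image, 0.0)
    total = weights.sum()
    if total == 0:
        return None  # nothing above threshold
    rr, cc = np.indices(image.shape)
    return (float((rr * weights).sum() / total),
            float((cc * weights).sum() / total))
```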

  7. Sensitivity quantification of remote detection NMR and MRI

    NASA Astrophysics Data System (ADS)

    Granwehr, J.; Seeley, J. A.

    2006-04-01

    A sensitivity analysis is presented of the remote detection NMR technique, which facilitates the spatial separation of encoding and detection of spin magnetization. Three different cases are considered: remote detection of a transient signal that must be encoded point-by-point like a free induction decay, remote detection of an experiment where the transient dimension is reduced to one data point like phase encoding in an imaging experiment, and time-of-flight (TOF) flow visualization. For all cases, the sensitivity enhancement is proportional to the relative sensitivity between the remote detector and the circuit that is used for encoding. It is shown for the case of an encoded transient signal that the sensitivity does not scale unfavorably with the number of encoded points compared to direct detection. Remote enhancement scales as the square root of the ratio of corresponding relaxation times in the two detection environments. Thus, remote detection especially increases the sensitivity of imaging experiments of porous materials with large susceptibility gradients, which cause a rapid dephasing of transverse spin magnetization. Finally, TOF remote detection, in which the detection volume is smaller than the encoded fluid volume, allows partial images corresponding to different time intervals between encoding and detection to be recorded. These partial images, which contain information about the fluid displacement, can be recorded, in an ideal case, with the same sensitivity as the full image detected in a single step with a larger coil.

  8. Human body motion capture from multi-image video sequences

    NASA Astrophysics Data System (ADS)

    D'Apuzzo, Nicola

    2003-01-01

    This paper presents a method to capture the motion of the human body from multi-image video sequences without using markers. The process is composed of five steps: acquisition of video sequences, calibration of the system, surface measurement of the human body for each frame, 3-D surface tracking and tracking of key points. The image acquisition system is currently composed of three synchronized progressive scan CCD cameras and a frame grabber which acquires a sequence of triplet images. Self calibration methods are applied to gain exterior orientation of the cameras, the parameters of internal orientation and the parameters modeling the lens distortion. From the video sequences, two kinds of 3-D information are extracted: a three-dimensional surface measurement of the visible parts of the body for each triplet and 3-D trajectories of points on the body. The approach for surface measurement is based on multi-image matching, using the adaptive least squares method. A fully automatic matching process determines a dense set of corresponding points in the triplets. The 3-D coordinates of the matched points are then computed by forward ray intersection using the orientation and calibration data of the cameras. The tracking process is also based on least squares matching techniques. Its basic idea is to track triplets of corresponding points in the three images through the sequence and compute their 3-D trajectories. The spatial correspondences between the three images at the same time and the temporal correspondences between subsequent frames are determined with a least squares matching algorithm. The results of the tracking process are the coordinates of a point in the three images through the sequence, thus the 3-D trajectory is determined by computing the 3-D coordinates of the point at each time step by forward ray intersection. Velocities and accelerations are also computed. 
The advantage of this tracking process is twofold: it can track natural points, without using markers; and it can track local surfaces on the human body. In the last case, the tracking process is applied to all the points matched in the region of interest. The result can be seen as a vector field of trajectories (position, velocity and acceleration). The last step of the process is the definition of selected key points of the human body. A key point is a 3-D region defined in the vector field of trajectories, whose size can vary and whose position is defined by its center of gravity. The key points are tracked in a simple way: the position at the next time step is established by the mean value of the displacement of all the trajectories inside its region. The tracked key points lead to a final result comparable to the conventional motion capture systems: 3-D trajectories of key points which can be afterwards analyzed and used for animation or medical purposes.
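
    The forward ray intersection used above is commonly implemented as linear (DLT-style) triangulation: each camera contributes two homogeneous equations and the 3-D point is the least-squares null vector. A minimal sketch, assuming known 3x4 projection matrices (not the authors' code):

```python
import numpy as np

def triangulate(proj_mats, points_2d):
    """Forward ray intersection by linear least squares (DLT).

    `proj_mats`: list of 3x4 camera projection matrices.
    `points_2d`: the matched (x, y) observation in each camera.
    Returns the 3-D point minimizing the algebraic error, found as the
    right singular vector of the stacked constraint matrix.
    """
    rows = []
    for P, (x, y) in zip(proj_mats, points_2d):
        rows.append(x * P[2] - P[0])  # x * (row 3) - (row 1)
        rows.append(y * P[2] - P[1])  # y * (row 3) - (row 2)
    A = np.asarray(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

With three cameras per triplet, as in the paper, the system is overdetermined and the SVD solution averages out image noise.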

  9. Time-resolved contrast-enhanced MR angiography of the thorax in adults with congenital heart disease.

    PubMed

    Mohrs, Oliver K; Petersen, Steffen E; Voigtlaender, Thomas; Peters, Jutta; Nowak, Bernd; Heinemann, Markus K; Kauczor, Hans-Ulrich

    2006-10-01

    The aim of this study was to evaluate the diagnostic value of time-resolved contrast-enhanced MR angiography in adults with congenital heart disease. Twenty patients with congenital heart disease (mean age, 38 +/- 14 years; range, 16-73 years) underwent contrast-enhanced turbo fast low-angle shot MR angiography. Thirty consecutive coronal 3D slabs with a frame rate of 1-second duration were acquired. The mask defined as the first data set was subtracted from subsequent images. Image quality was evaluated using a 5-point scale (from 1, not assessable, to 5, excellent image quality). Twelve diagnostic parameters yielded 1 point each in case of correct diagnosis (binary analysis into normal or abnormal) and were summarized into three categories: anatomy of the main thoracic vessels (maximum, 5 points), sequential cardiac anatomy (maximum, 5 points), and shunt detection (maximum, 2 points). The results were compared with a combined clinical reference comprising medical or surgical reports and other imaging studies. Diagnostic accuracies were calculated for each of the parameters as well as for the three categories. The mean image quality was 3.7 +/- 1.0. Using a binary approach, 220 (92%) of the 240 single diagnostic parameters could be analyzed. The percentage of maximum diagnostic points, the sensitivity, the specificity, and the positive and the negative predictive values were all 100% for the anatomy of the main thoracic vessels; 97%, 87%, 100%, 100%, and 96% for sequential cardiac anatomy; and 93%, 93%, 92%, 88%, and 96% for shunt detection. Time-resolved contrast-enhanced MR angiography provides, in one breath-hold, anatomic and qualitative functional information in adult patients with congenital heart disease. The high diagnostic accuracy allows the investigator to tailor subsequent specific MR sequences within the same session.

  10. Limited diagnostic value of Dual-Time-Point (18)F-FDG PET/CT imaging for classifying solitary pulmonary nodules in granuloma-endemic regions both at visual and quantitative analyses.

    PubMed

    Chen, Song; Li, Xuena; Chen, Meijie; Yin, Yafu; Li, Na; Li, Yaming

    2016-10-01

    This study aimed to compare the diagnostic power of quantitative analysis and visual analysis with single time point imaging (STPI) PET/CT and dual time point imaging (DTPI) PET/CT for the classification of solitary pulmonary nodule (SPN) lesions in granuloma-endemic regions. SPN patients who received early and delayed (18)F-FDG PET/CT at 60 min and 180 min post-injection were retrospectively reviewed. Diagnoses were confirmed by pathological results or follow-up. Three quantitative metrics were measured for each lesion: early SUVmax, delayed SUVmax and the retention index (the percentage change between early SUVmax and delayed SUVmax). Three 5-point scale scores were given in blinded interpretations performed by physicians based on STPI PET/CT images, DTPI PET/CT images and CT images, respectively. ROC analysis was performed on the three quantitative metrics and the three visual interpretation scores. One hundred forty-nine patients were retrospectively included. The areas under the curve (AUC) of the ROC curves of early SUVmax, delayed SUVmax, retention index, STPI PET/CT score, DTPI PET/CT score and CT score were 0.73, 0.74, 0.61, 0.77, 0.75 and 0.76, respectively. There were no significant differences between the AUCs of the visual interpretations of STPI PET/CT and DTPI PET/CT images, nor between early SUVmax and delayed SUVmax. The sensitivity, specificity and accuracy of STPI PET/CT and DTPI PET/CT did not differ significantly in either quantitative analysis or visual interpretation. In granuloma-endemic regions, DTPI PET/CT did not offer significant improvement over STPI PET/CT in differentiating malignant SPNs in either quantitative analysis or visual interpretation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
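
    The retention index used above is a simple percentage change; for concreteness (formula as defined in the abstract):

```python
def retention_index(early_suvmax, delayed_suvmax):
    """Retention index used in dual-time-point FDG PET: the percentage
    change of SUVmax from the early to the delayed scan."""
    return 100.0 * (delayed_suvmax - early_suvmax) / early_suvmax
```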

  11. LiveWire interactive boundary extraction algorithm based on Haar wavelet transform and control point set direction search

    NASA Astrophysics Data System (ADS)

    Cheng, Jun; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Based on a deep analysis of the LiveWire interactive boundary extraction algorithm, a new algorithm focused on improving its speed is proposed in this paper. First, the Haar wavelet transform is applied to the input image, and the boundary is extracted on the resulting low-resolution image. Second, the LiveWire shortest path is calculated with a search directed by the control point set, utilizing the spatial relationship between the two control points the user provides in real time. Third, the search order of the points adjacent to the starting node is set in advance, and an ordinary queue instead of a priority queue is used as the storage pool of the points when optimizing their shortest-path values, reducing the complexity of the algorithm from O(n²) to O(n). Finally, a region-iterative backward projection method based on neighborhood pixel polling converts the dual-pixel boundary of the reconstructed image to a single-pixel boundary after the inverse Haar wavelet transform. The proposed algorithm combines the advantages of the Haar wavelet transform, whose fast image decomposition and reconstruction are consistent with the texture features of the image, and of the optimal path search directed by the control point set, which reduces the time complexity of the original algorithm. The algorithm therefore speeds up interactive boundary extraction while reflecting the boundary information of the image more comprehensively. Together, these methods substantially improve the execution efficiency and the robustness of the algorithm.
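
    The core of LiveWire is a shortest-path search over a cost grid between two control points. A minimal Dijkstra sketch (the paper's O(n) variant replaces the priority queue with ordinary queues, which this illustration does not attempt):

```python
import heapq

def livewire_path(cost, start, goal):
    """Dijkstra shortest path between two control points on a 2-D grid
    of non-negative node costs (the core of LiveWire boundary
    extraction). `start` and `goal` are (row, col) tuples."""
    h, w = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    # walk predecessors back from the goal
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

In LiveWire the node costs come from edge features (gradient magnitude inverted, Laplacian zero-crossings, etc.), so the cheapest path hugs the object boundary between the user's control points.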

  12. No scanning depth imaging system based on TOF

    NASA Astrophysics Data System (ADS)

    Sun, Rongchun; Piao, Yan; Wang, Yu; Liu, Shuo

    2016-03-01

    To quickly obtain a 3D model of real-world objects, multi-point ranging is very important. However, traditional measuring methods usually work point by point or line by line, which is slow and inefficient. In this paper, a no-scanning depth imaging system based on TOF (time of flight) is proposed. The system is composed of a light source circuit, a special infrared image sensor module, an image data processor and controller, a data cache circuit, a communication circuit, and so on. Following the working principle of TOF measurement, an image sequence was collected by the high-speed CMOS sensor, the distance information was obtained by identifying the phase difference, and the amplitude image was also calculated. Experiments were conducted, and the results show that the system achieves depth imaging without scanning, with good performance.
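
    Phase-difference ranging in a continuous-wave TOF pixel is typically computed from four samples taken 90° apart. A sketch under that standard four-bucket assumption (the paper does not give its exact demodulation scheme):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance(a0, a1, a2, a3, f_mod):
    """Distance from four phase-stepped samples of a CW-TOF pixel.

    The phase difference between the emitted and received modulation
    encodes distance; the round trip gives the factor 2 (hence 4*pi)
    in the denominator. Unambiguous range is C / (2 * f_mod).
    """
    phase = math.atan2(a3 - a1, a0 - a2) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod)

def tof_amplitude(a0, a1, a2, a3):
    """Amplitude image value from the same four samples."""
    return 0.5 * math.sqrt((a3 - a1) ** 2 + (a0 - a2) ** 2)
```

Because every pixel demodulates independently, one exposure of the sensor yields a whole depth map at once, which is what makes the system scanning-free.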

  13. Multiple Time-Point 68Ga-PSMA I&T PET/CT for Characterization of Primary Prostate Cancer: Value of Early Dynamic and Delayed Imaging.

    PubMed

    Schmuck, Sebastian; Mamach, Martin; Wilke, Florian; von Klot, Christoph A; Henkenberens, Christoph; Thackeray, James T; Sohns, Jan M; Geworski, Lilli; Ross, Tobias L; Wester, Hans-Juergen; Christiansen, Hans; Bengel, Frank M; Derlin, Thorsten

    2017-06-01

    The aims of this study were to gain mechanistic insights into prostate cancer biology using dynamic imaging and to evaluate the usefulness of multiple time-point 68Ga-prostate-specific membrane antigen (PSMA) I&T PET/CT for the assessment of primary prostate cancer before prostatectomy. Twenty patients with prostate cancer underwent 68Ga-PSMA I&T PET/CT before prostatectomy. The PET protocol consisted of early dynamic pelvic imaging, followed by static scans at 60 and 180 minutes postinjection (p.i.). SUVs, time-activity curves, quantitative analysis based on a 2-tissue compartment model, Patlak analysis, histopathology, and Gleason grading were compared between prostate cancer and benign prostate gland. Primary tumors were identified on both early dynamic and delayed imaging in 95% of patients. Tracer uptake was significantly higher in prostate cancer compared with benign prostate tissue at any time point (P ≤ 0.0003) and increased over time. Consequently, the tumor-to-nontumor ratio within the prostate gland improved over time (2.8 at 10 minutes vs 17.1 at 180 minutes p.i.). Tracer uptake at both 60 and 180 minutes p.i. was significantly higher in patients with higher Gleason scores (P < 0.01). The influx rate (Ki) was higher in prostate cancer than in reference prostate gland (0.055 [r = 0.998] vs 0.017 [r = 0.996]). Primary prostate cancer is readily identified on early dynamic and static delayed 68Ga-PSMA ligand PET images. The tumor-to-nontumor ratio in the prostate gland improves over time, supporting a role of delayed imaging for optimal visualization of prostate cancer.

  14. Advances in combined endoscopic fluorescence confocal microscopy and optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Risi, Matthew D.

    Confocal microendoscopy provides real-time high resolution cellular level images via a minimally invasive procedure. Results from an ongoing clinical study to detect ovarian cancer with a novel confocal fluorescent microendoscope are presented. As an imaging modality, confocal fluorescence microendoscopy typically requires exogenous fluorophores, has a relatively limited penetration depth (100 μm), and often employs specialized aperture configurations to achieve real-time imaging in vivo. Two primary research directions designed to overcome these limitations and improve diagnostic capability are presented. Ideal confocal imaging performance is obtained with a scanning point illumination and confocal aperture, but this approach is often unsuitable for real-time, in vivo biomedical imaging. By scanning a slit aperture in one direction, image acquisition speeds are greatly increased, but at the cost of a reduction in image quality. The design, implementation, and experimental verification of a custom multi-point-scanning modification to a slit-scanning multi-spectral confocal microendoscope is presented. This new design improves the axial resolution while maintaining real-time imaging rates. In addition, the multi-point aperture geometry greatly reduces the effects of tissue scatter on imaging performance. Optical coherence tomography (OCT) has seen wide acceptance and FDA approval as a technique for ophthalmic retinal imaging, and has been adapted for endoscopic use. As a minimally invasive imaging technique, it provides morphological characteristics of tissues at a cellular level without requiring the use of exogenous fluorophores. OCT is capable of imaging deeper into biological tissue (~1-2 mm) than confocal fluorescence microscopy. A theoretical analysis of the use of a fiber-bundle in spectral-domain OCT systems is presented. 
The fiber-bundle enables a flexible endoscopic design and provides fast, parallelized acquisition of the optical coherence tomography data. However, the multi-mode characteristic of the fibers in the fiber-bundle affects the depth sensitivity of the imaging system. A description of light interference in a multi-mode fiber is presented along with numerical simulations and experimental studies to illustrate the theoretical analysis.

  15. High speed MRI of laryngeal gestures during speech production

    NASA Astrophysics Data System (ADS)

    Nissenbaum, Jon; Hillman, Robert E.; Kobler, James B.; Curtin, Hugh D.; Halle, Morris; Kirsch, John E.

    2002-05-01

    Dynamic sequences of magnetic resonance images (MRI) of the vocal tract were obtained with a frame rate of 144 frames/second. Changes in vertical position and length of the vocal folds, both observable in the mid-sagittal plane, have been argued to play a role in consonant production in addition to their primary function in the control of vocal fundamental frequency (F0) [W. G. Ewan and R. Krones, J. Phonet. 2, 327-335 (1974); A. Lofqvist et al., Haskins Lab. Status Report Speech Res., SR-97/98, pp. 25-40, 1989], but temporal resolution of available techniques has hindered direct imaging of these articulations. A novel data acquisition sequence was used to circumvent the imaging time imposed by standard MRI (typically 100-500 ms). Images were constructed by having subjects rhythmically repeat short utterances 256 times using the same F0 contour. Sixty-four lines of MR data were sampled during each repetition, at 7 millisecond increments, yielding partial raw data sets for 64 time points. After all repetitions were completed, one frame per time point was constructed by combining raw data from the corresponding time point during every repetition. Preliminary results indicate vocal fold shortening and lowering only during voiced consonants and in production of lower F0.

  16. Study on Low Illumination Simultaneous Polarization Image Registration Based on Improved SURF Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Wanjun; Yang, Xu

    2017-12-01

    Registration of simultaneous polarization images is the premise of subsequent image fusion operations. However, during all-weather shooting the exposure time of the polarization camera must be kept unchanged, so polarization images captured under low illumination can be too dark for the SURF algorithm to extract feature points, making registration impossible. This paper therefore proposes an improved SURF algorithm. First, a luminance operator raises the overall brightness of the low-illumination image. An integral image is then created, the Hessian matrix is used to extract interest points and the main direction of each feature point, and the Haar wavelet responses in the X and Y directions are calculated to obtain the SURF descriptor. RANSAC is then used for precise matching; it eliminates wrong matches and improves the accuracy rate. Finally, the brightness of the registered polarization image is restored, so the polarization information is not affected. Results show that the improved SURF algorithm can be applied well under low illumination conditions.

  17. Fully Convolutional Networks for Ground Classification from LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Rizaldy, A.; Persello, C.; Gevaert, C. M.; Oude Elberink, S. J.

    2018-05-01

    Deep Learning has been widely used for image classification in recent years. The use of deep learning for ground classification from LIDAR point clouds has also been studied recently. However, point clouds need to be converted into an image in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image. This approach leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of a CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques: on the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method results in 5.22 % total error, 4.10 % type I error, and 15.07 % type II error. Compared to the previous CNN-based technique and LAStools software, the proposed method reduces the total error and type I error (while type II error is slightly higher). The method was also tested on a very high point density LIDAR point cloud, resulting in 4.02 % total error, 2.15 % type I error and 6.14 % type II error.
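
    The efficiency idea, rasterizing the whole cloud into one image rather than one image per point, can be sketched as follows (hypothetical grid size and feature; real pipelines typically store several channels per cell, such as lowest and highest elevation):

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic cloud: x, y in metres over a 10 m x 10 m tile, z elevation.
pts = np.column_stack([rng.uniform(0, 10, 2000),
                       rng.uniform(0, 10, 2000),
                       rng.uniform(0, 2, 2000)])

def cloud_to_image(pts, cell=1.0, shape=(10, 10)):
    """One pass over the cloud: keep the lowest elevation per grid cell."""
    img = np.full(shape, np.inf)
    ij = np.floor(pts[:, :2] / cell).astype(int)
    for (i, j), z in zip(ij, pts[:, 2]):
        if z < img[i, j]:
            img[i, j] = z
    return img

img = cloud_to_image(pts)
```

    A single raster like this can then be fed to an FCN once, instead of classifying one image per point.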

  18. Real time three dimensional sensing system

    DOEpatents

    Gordon, S.J.

    1996-12-31

    The invention is a three dimensional sensing system which utilizes two flexibly located cameras for receiving and recording visual information with respect to a sensed object illuminated by a series of light planes. Each pixel of each image is converted to a digital word and the words are grouped into stripes, each stripe comprising contiguous pixels. One pixel of each stripe in one image is selected and an epi-polar line of that point is drawn in the other image. The three dimensional coordinate of each selected point is determined by determining the point on said epi-polar line which also lies on a stripe in the second image and which is closest to a known light plane. 7 figs.

  19. Real time three dimensional sensing system

    DOEpatents

    Gordon, Steven J.

    1996-01-01

    The invention is a three dimensional sensing system which utilizes two flexibly located cameras for receiving and recording visual information with respect to a sensed object illuminated by a series of light planes. Each pixel of each image is converted to a digital word and the words are grouped into stripes, each stripe comprising contiguous pixels. One pixel of each stripe in one image is selected and an epi-polar line of that point is drawn in the other image. The three dimensional coordinate of each selected point is determined by determining the point on said epi-polar line which also lies on a stripe in the second image and which is closest to a known light plane.
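
    The geometric core of both records above, mapping a point from one image to its epipolar line in the other, can be sketched with an essential matrix (identity intrinsics assumed for simplicity; all numbers illustrative):

```python
import numpy as np

def skew(v):
    """Cross-product matrix: skew(v) @ x == np.cross(v, x)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Relative pose between the two flexibly located cameras (illustrative).
a = 0.1
R = np.array([[np.cos(a), 0.0, np.sin(a)],
              [0.0, 1.0, 0.0],
              [-np.sin(a), 0.0, np.cos(a)]])
t = np.array([1.0, 0.0, 0.0])
E = skew(t) @ R                        # essential matrix of the pair

X = np.array([0.3, -0.2, 4.0])         # a 3D point on some light stripe
x1 = X / X[2]                          # normalized image point, camera 1
p2 = R @ X + t
x2 = p2 / p2[2]                        # normalized image point, camera 2

line = E @ x1                          # epipolar line a*x + b*y + c = 0 in image 2
residual = abs(x2 @ line)              # the corresponding point lies on the line
```

    Searching along this line for the stripe pixel closest to a known light plane is what resolves the 3D coordinate in the patented system.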

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marsh, I; Otto, M; Weichert, J

    Purpose: The focus of this work is to perform Monte Carlo-based dosimetry for several pediatric cancer xenografts in mice treated with a novel radiopharmaceutical {sup 131}I-CLR1404. Methods: Four mice for each tumor cell line were injected with 8–13 µCi/g of {sup 124}I-CLR1404. PET/CT images of each individual mouse were acquired at 5–6 time points over the span of 96–170 hours post-injection. Following acquisition, the images were co-registered, resampled, rescaled, corrected for partial volume effects (PVE), and masked. For this work the pre-treatment PET images of {sup 124}I-CLR1404 were used to predict therapeutic doses from {sup 131}I-CLR1404 at each time point by assuming the same injection activity and accounting for the difference in physical decay rates. Tumors and normal tissues were manually contoured using anatomical and functional images. The CT and the PET images were used in the Geant4 (v9.6) Monte Carlo simulation to define the geometry and source distribution, respectively. The total cumulated absorbed dose was calculated by numerically integrating the dose-rate at each time point over all time on a voxel-by-voxel basis. Results: Spatial distributions of the absorbed dose rates and dose volume histograms as well as mean, minimum, maximum, and total dose values for each ROI were generated for each time point. Conclusion: This work demonstrates how mouse-specific MC-based dosimetry could potentially provide more accurate characterization of the efficacy of novel radiopharmaceuticals in radionuclide therapy. This work is partially funded by NIH grant CA198392.
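
    The voxel-by-voxel time integration can be sketched as follows (all numbers hypothetical; trapezoidal integration over the imaged interval plus an analytic tail that assumes pure physical decay of 131I, half-life ~192.5 h, after the last scan):

```python
import numpy as np

times = np.array([5.0, 24.0, 48.0, 96.0, 170.0])   # h post-injection (illustrative)
half_life = 192.5                                   # physical half-life of 131I, h
lam = np.log(2) / half_life

rng = np.random.default_rng(3)
rate0 = rng.uniform(0.5, 1.5, size=(4, 4))          # toy dose-rate map at t = 0
# Dose-rate maps at the imaging time points, here decaying purely physically.
rates = rate0[None] * np.exp(-lam * times)[:, None, None]

# Trapezoidal integration between scans, voxel by voxel ...
dt = np.diff(times)
dose = np.sum(0.5 * (rates[1:] + rates[:-1]) * dt[:, None, None], axis=0)
# ... plus the analytic tail beyond the last scan.
dose += rates[-1] / lam

# With purely physical decay the integral has a closed form, so we can check:
exact = rate0 * np.exp(-lam * times[0]) / lam
```

    In practice the biological clearance makes the curve non-exponential between scans, which is exactly why several imaging time points are needed.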

  1. Real-time UAV trajectory generation using feature points matching between video image sequences

    NASA Astrophysics Data System (ADS)

    Byun, Younggi; Song, Jeongheon; Han, Dongyeob

    2017-09-01

    Unmanned aerial vehicles (UAVs), equipped with navigation systems and video capability, are currently being deployed for intelligence, reconnaissance and surveillance missions. In this paper, we present a systematic approach for the generation of a UAV trajectory using a video image matching system based on SURF (Speeded Up Robust Features) and Preemptive RANSAC (Random Sample Consensus). Video image matching to find matching points is one of the most important steps for the accurate generation of a UAV trajectory (a sequence of poses in 3D space). We used the SURF algorithm to find the matching points between video image sequences, and removed mismatches by using Preemptive RANSAC, which divides all matching points into outliers and inliers. Only the inliers are used to determine the epipolar geometry for estimating the relative pose (rotation and translation) between image sequences. Experimental results from simulated video image sequences showed that our approach has good potential to be applied to the automatic geo-localization of UAV systems.
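
    The relative-pose step can be illustrated by the standard SVD decomposition of an essential matrix (synthetic pose; in a real pipeline E comes from the inlier matches and the true pose is disambiguated among four candidates by a cheirality check):

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

a = 0.2                                        # synthetic relative pose
R_true = np.array([[np.cos(a), 0.0, np.sin(a)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(a), 0.0, np.cos(a)]])
t_true = np.array([1.0, 0.0, 0.0])             # unit-norm translation
E = skew(t_true) @ R_true

U, S, Vt = np.linalg.svd(E)
if np.linalg.det(U) < 0:  U = -U               # enforce proper rotations
if np.linalg.det(Vt) < 0: Vt = -Vt
W = np.array([[0.0, -1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
R1, R2 = U @ W @ Vt, U @ W.T @ Vt              # the two rotation candidates
t_hat = U[:, 2]                                # translation direction (up to sign)

# Each candidate reassembles the essential matrix up to sign.
res1 = min(np.linalg.norm(skew(t_hat) @ R1 - E),
           np.linalg.norm(skew(t_hat) @ R1 + E))
```

    Chaining the relative poses recovered this way over the video sequence yields the UAV trajectory up to scale.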

  2. Shape and rotational elements of comet 67P/ Churyumov-Gerasimenko derived by stereo-photogrammetric analysis of OSIRIS NAC image data

    NASA Astrophysics Data System (ADS)

    Preusker, Frank; Scholten, Frank; Matz, Klaus-Dieter; Roatsch, Thomas; Willner, Konrad; Hviid, Stubbe; Knollenberg, Jörg; Kührt, Ekkehard; Sierks, Holger

    2015-04-01

    The European Space Agency's Rosetta spacecraft is equipped with the OSIRIS imaging system, which consists of a wide-angle and a narrow-angle camera (WAC and NAC). After the approach phase, Rosetta was inserted into a descent trajectory toward comet 67P/Churyumov-Gerasimenko (C-G) in early August 2014. Until early September, OSIRIS acquired several hundred NAC images of C-G's surface at different scales (from ~5 m/pixel during approach to ~0.9 m/pixel during descent). In that one-month observation period, the surface was imaged several times within different mapping sequences. With the comet's rotation period of ~12.4 h and the low spacecraft velocity (< 1 m/s), the entire NAC dataset provides multiple NAC stereo coverage, adequate for stereo-photogrammetric (SPG) analysis towards the derivation of 3D surface models. We constrained the OSIRIS NAC images with our stereo requirements (15° < stereo angles < 45°, incidence angles < 85°, emission angles < 45°, differences in illumination < 10°, scale better than 5 m/pixel) and extracted about 220 NAC images that provide at least triple stereo image coverage for the entire illuminated surface in about 250 independent multi-stereo image combinations. For each image combination we determined tie points by multi-image matching in order to set up a 3D control network and a dense surface point cloud for the precise reconstruction of C-G's shape. The control point network defines the input for a stereo-photogrammetric least squares adjustment. Based on the statistical analysis of the adjustments, we first refined C-G's rotational state (pole orientation and rotational period) and its behavior over time.
Based upon this description of the orientation of C-G's body-fixed reference frame, we derived corrections for the nominal navigation data (pointing and position) within a final stereo-photogrammetric block adjustment where the mean 3D point accuracy of more than 100 million surface points has been improved from ~10 m to the sub-meter range. We finally applied point filtering and interpolation techniques to these surface 3D points and show the resulting SPG-based 3D surface model with a lateral sampling rate of about 2 m.

  3. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    PubMed

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

    A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model; image degradation is thus simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted to calculating the weighted coefficients of the two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome its drawback of long computation times, either graphics-processing-unit multithreading or an increased spacing of the control points is adopted to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by the method, which also provides a useful reference for the study of three-dimensional microscopic image deconvolution.
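
    The simplification rests on the closure of Gaussians under convolution: convolving two Gaussians yields a Gaussian whose variance is the sum of the two variances. A quick numerical check:

```python
import numpy as np

def gauss(x, sigma):
    return np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

dx = 0.01
x = np.linspace(-10.0, 10.0, 2001)      # symmetric grid, spacing dx
g1 = gauss(x, 0.8)                      # image basis function
g2 = gauss(x, 0.5)                      # Gaussian point spread function

# Discrete convolution approximates the continuous one (scaled by dx).
conv = np.convolve(g1, g2, mode="same") * dx
expected = gauss(x, np.sqrt(0.8**2 + 0.5**2))
max_err = np.max(np.abs(conv - expected))
```

    Because the blurred image stays within the same Gaussian family, deconvolution reduces to solving for the control-point weights rather than performing a generic inverse filtering.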

  4. An improved ASIFT algorithm for indoor panorama image matching

    NASA Astrophysics Data System (ADS)

    Fu, Han; Xie, Donghai; Zhong, Ruofei; Wu, Yu; Wu, Qiong

    2017-07-01

    The generation of 3D models for indoor objects and scenes is an attractive tool for digital city, virtual reality and SLAM purposes. Panoramic images are becoming increasingly common in such applications due to their ability to capture the complete environment in one single image with a large field of view. The extraction and matching of image feature points are important and difficult steps in three-dimensional reconstruction, and ASIFT is a state-of-the-art algorithm to implement these functions. Compared with the SIFT algorithm, the ASIFT algorithm generates more feature points and matches them with higher accuracy, even for panoramic images with obvious distortions. However, the algorithm is time-consuming because of its complex operations, and it does not perform well for some indoor scenes under poor light or without rich textures. To solve this problem, this paper proposes an improved ASIFT algorithm for indoor panoramic images: firstly, the panoramic images are projected into multiple normal perspective images. Secondly, the original ASIFT algorithm is simplified from the affine transformation of tilt and rotation of the images to the tilt affine transformation only. Finally, the results are re-projected into the panoramic image space. Experiments in different environments show that this method can not only ensure the precision of feature point extraction and matching, but also greatly reduce the computing time.

  5. Implicit multiplane 3D camera calibration matrices for stereo image processing

    NASA Astrophysics Data System (ADS)

    McKee, James W.; Burgett, Sherrie J.

    1997-12-01

    By implicit camera calibration, we mean the process of calibrating cameras without explicitly computing their physical parameters. We introduce a new implicit model based on a generalized mapping between an image plane and multiple, parallel calibration planes (usually between four and seven planes). This paper presents a method of computing a relationship between a point on a three-dimensional (3D) object and its corresponding two-dimensional (2D) coordinate in a camera image. This relationship is expanded to form a mapping of points in 3D space to points in image (camera) space, and vice versa, that requires only matrix multiplication operations. This paper presents the rationale behind the selection of the forms of four matrices and the algorithms to calculate the parameters for the matrices. Two of the matrices are used to map 3D points in object space to 2D points on the CCD camera image plane. The other two matrices are used to map 2D points on the image plane to points on user-defined planes in 3D object space. The mappings include compensation for lens distortion and measurement errors. The number of parameters used can be increased, in a straightforward fashion, to calculate and use as many parameters as needed to obtain a user-desired accuracy. Previous methods of camera calibration use a fixed number of parameters, which can limit the obtainable accuracy, and most require the solution of nonlinear equations. The procedure presented can be used to calibrate a single camera to make 2D measurements or to calibrate stereo cameras to make 3D measurements. Positional accuracy of better than 3 parts in 10,000 has been achieved. The algorithms in this paper were developed and implemented in MATLAB (a registered trademark of The MathWorks, Inc.). We have developed a system to analyze the path of optical fiber during high-speed payout (unwinding) of optical fiber off a bobbin.
This requires recording and analyzing high-speed (5 microsecond exposure time), synchronous, stereo images of the optical fiber during payout. A 3D equation for the fiber at an instant in time is calculated from the corresponding pair of stereo images as follows. In each image, about 20 points along the 2D projection of the fiber are located. Each of these 'fiber points' in one image is mapped to its projection line in 3D space. Each projection line is mapped into another line in the second image. The intersection of each mapped projection line and a curve fitted to the fiber points of the second image (the fiber projection in the second image) is calculated. Each intersection point is mapped back to 3D space. A 3D fiber coordinate is formed from the intersection, in 3D space, of a mapped intersection point with its corresponding projection line. The 3D equation for the fiber is computed from this ordered list of 3D coordinates. This process requires a method of accurately mapping 2D (image space) to 3D (object space) and vice versa.
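
The forward half of such a mapping, from a 3D object point to a 2D image point by matrix multiplication alone, can be sketched with a standard homogeneous projection (illustrative pinhole parameters; the paper's implicit matrices are more general and additionally absorb lens distortion):

```python
import numpy as np

# Illustrative intrinsics: focal length and principal point in pixels.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
Rt = np.hstack([np.eye(3), [[0.0], [0.0], [5.0]]])   # camera 5 units from origin
P = K @ Rt                                 # 3x4 projection matrix

X = np.array([0.5, -0.25, 0.0, 1.0])       # homogeneous 3D object point
x = P @ X                                  # one matrix multiplication ...
u, v = x[:2] / x[2]                        # ... then a perspective divide
```

The reverse mapping, pixel to a user-defined 3D plane, is likewise a matrix product in homogeneous coordinates, which is what keeps the calibration procedure linear.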

  6. Actuator-Assisted Calibration of Freehand 3D Ultrasound System.

    PubMed

    Koo, Terry K; Silvia, Nathaniel

    2018-01-01

    Freehand three-dimensional (3D) ultrasound has been used independently of other technologies to analyze complex geometries or registered with other imaging modalities to aid surgical and radiotherapy planning. A fundamental requirement for all freehand 3D ultrasound systems is probe calibration. The purpose of this study was to develop an actuator-assisted approach to facilitate freehand 3D ultrasound calibration using point-based phantoms. We modified the mathematical formulation of the calibration problem to eliminate the need of imaging the point targets at different viewing angles and developed an actuator-assisted approach/setup to facilitate quick and consistent collection of point targets spanning the entire image field of view. The actuator-assisted approach was applied to a commonly used cross wire phantom as well as two custom-made point-based phantoms (original and modified), each containing 7 collinear point targets, and compared the results with the traditional freehand cross wire phantom calibration in terms of calibration reproducibility, point reconstruction precision, point reconstruction accuracy, distance reconstruction accuracy, and data acquisition time. Results demonstrated that the actuator-assisted single cross wire phantom calibration significantly improved the calibration reproducibility and offered similar point reconstruction precision, point reconstruction accuracy, distance reconstruction accuracy, and data acquisition time with respect to the freehand cross wire phantom calibration. On the other hand, the actuator-assisted modified "collinear point target" phantom calibration offered similar precision and accuracy when compared to the freehand cross wire phantom calibration, but it reduced the data acquisition time by 57%. It appears that both actuator-assisted cross wire phantom and modified collinear point target phantom calibration approaches are viable options for freehand 3D ultrasound calibration.

  7. Actuator-Assisted Calibration of Freehand 3D Ultrasound System

    PubMed Central

    2018-01-01

    Freehand three-dimensional (3D) ultrasound has been used independently of other technologies to analyze complex geometries or registered with other imaging modalities to aid surgical and radiotherapy planning. A fundamental requirement for all freehand 3D ultrasound systems is probe calibration. The purpose of this study was to develop an actuator-assisted approach to facilitate freehand 3D ultrasound calibration using point-based phantoms. We modified the mathematical formulation of the calibration problem to eliminate the need of imaging the point targets at different viewing angles and developed an actuator-assisted approach/setup to facilitate quick and consistent collection of point targets spanning the entire image field of view. The actuator-assisted approach was applied to a commonly used cross wire phantom as well as two custom-made point-based phantoms (original and modified), each containing 7 collinear point targets, and compared the results with the traditional freehand cross wire phantom calibration in terms of calibration reproducibility, point reconstruction precision, point reconstruction accuracy, distance reconstruction accuracy, and data acquisition time. Results demonstrated that the actuator-assisted single cross wire phantom calibration significantly improved the calibration reproducibility and offered similar point reconstruction precision, point reconstruction accuracy, distance reconstruction accuracy, and data acquisition time with respect to the freehand cross wire phantom calibration. On the other hand, the actuator-assisted modified “collinear point target” phantom calibration offered similar precision and accuracy when compared to the freehand cross wire phantom calibration, but it reduced the data acquisition time by 57%. It appears that both actuator-assisted cross wire phantom and modified collinear point target phantom calibration approaches are viable options for freehand 3D ultrasound calibration. PMID:29854371
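
    Point-based probe calibration ultimately solves for a rigid transform between corresponding point sets; a minimal least-squares version (the Kabsch/Procrustes algorithm on noiseless synthetic data in general position; the collinear phantom above additionally relies on the probe's tracked motion) looks like:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(7, 3))                # 7 target positions (general position)

theta = 0.4                                # ground-truth rigid transform
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
B = A @ R_true.T + t_true                  # the same targets in the other frame

def kabsch(A, B):
    """Least-squares rotation R and translation t with B ~ A @ R.T + t."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cb - R @ ca

R, t = kabsch(A, B)
```

    With noiseless correspondences the transform is recovered exactly; with real measurements the residual of this fit is one way to express calibration precision.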

  8. 18F-DCFPyL PET/CT in the Detection of Prostate Cancer at 60 and 120 Minutes: Detection Rate, Image Quality, Activity Kinetics, and Biodistribution.

    PubMed

    Wondergem, Maurits; van der Zant, Friso M; Knol, Remco J J; Lazarenko, Sergiy V; Pruim, Jan; de Jong, Igle J

    2017-11-01

    There is increasing interest in PET/CT with prostate-specific membrane antigen (PSMA) tracers for imaging of prostate cancer because of the higher detection rates of prostate cancer lesions than with PET/CT with choline. For 68Ga-PSMA-11 tracers, late imaging at 180 min after injection instead of imaging at 45-60 min after injection improves the detection of prostate cancer lesions. For 18F-DCFPyL, improved detection rates have recently been reported in a small pilot study. In this study, we report the effects of PET/CT imaging at 120 min after injection of 18F-DCFPyL in comparison to images acquired at 60 min after injection in a larger clinical cohort of 66 consecutive patients with histopathologically proven prostate cancer. Methods: Images were acquired 60 and 120 min after injection of 18F-DCFPyL. We report the positive lesions specified for anatomic locations (prostate, seminal vesicles, local lymph nodes, distant lymph nodes, bone, and others) at both time points by visual analysis, the image quality at both time points, and a semiquantitative analysis of the tracer activity in both prostate cancer lesions and normal tissues at both time points. Results: Our data showed a significantly increasing uptake of 18F-DCFPyL between 60 and 120 min after injection in 203 lesions characteristic for prostate cancer (median, 10.78 vs. 12.86, P < 0.001, Wilcoxon signed-rank test). By visual analysis, 38.5% of all patients showed more lesions on images at 120 min after injection than on images at 60 min after injection, and in 9.2% a change in TNM staging was found. All lesions seen on images 60 min after injection were also visible on images 120 min after injection. A significantly better mean signal-to-noise ratio of 11.93 was found for images acquired 120 min after injection (P < 0.001, paired t test; signal-to-noise ratio at 60 min after injection, 11.15).
Conclusion: 18F-DCFPyL PET/CT images at 120 min after injection yield a higher detection rate of prostate cancer characteristic lesions than images at 60 min after injection. Further studies are needed to elucidate the best imaging time point for 18F-DCFPyL. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.

  9. Infrared imaging results of an excited planar jet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrington, R.B.

    1991-12-01

    Planar jets are used for many applications including heating, cooling, and ventilation. Generally such a jet is designed to provide good mixing within an enclosure. In building applications, the jet provides both thermal comfort and adequate indoor air quality. Increased mixing rates may lead to lower short-circuiting of conditioned air, elimination of dead zones within the occupied zone, reduced energy costs, increased occupant comfort, and higher indoor air quality. This paper discusses using an infrared imaging system to show the effect of excitation of a jet on the spread angle and on the jet mixing efficiency. Infrared imaging captures a large number of data points in real time (over 50,000 data points per image), providing significant advantages over single-point measurements. We used a screen mesh with a time constant of approximately 0.3 seconds as a target for the infrared camera to detect temperature variations in the jet. The infrared images show increased jet spread due to excitation of the jet. Digital data reduction and analysis show changes in jet isotherms and quantify the increased mixing caused by excitation. 17 refs., 20 figs.

  10. An accelerated image matching technique for UAV orthoimage registration

    NASA Astrophysics Data System (ADS)

    Tsai, Chung-Hsien; Lin, Yu-Ching

    2017-06-01

    Using an Unmanned Aerial Vehicle (UAV) drone with an attached non-metric camera has become a popular low-cost approach for collecting geospatial data. A well-georeferenced orthoimage is a fundamental product for geomatics professionals. To achieve high positioning accuracy of orthoimages, precise sensor position and orientation data, or a number of ground control points (GCPs), are often required. Alternatively, image registration is a solution for improving the accuracy of a UAV orthoimage, as long as a historical reference image is available. This study proposes a registration scheme, including an Accelerated Binary Robust Invariant Scalable Keypoints (ABRISK) algorithm and spatial analysis of corresponding control points for image registration. To determine a match between two input images, feature descriptors from one image are compared with those from another image. A "Sorting Ring" is used to filter out incorrect feature pairs as early as possible in the stage of matching feature points, to speed up the matching process. The results demonstrate that the proposed ABRISK approach outperforms the vector-based Scale Invariant Feature Transform (SIFT) approach where radiometric variations exist. ABRISK is 19.2 times and 312 times faster than SIFT for image sizes of 1000 × 1000 pixels and 4000 × 4000 pixels, respectively. ABRISK is 4.7 times faster than Binary Robust Invariant Scalable Keypoints (BRISK). Furthermore, the positional accuracy of the UAV orthoimage after applying the proposed image registration scheme is improved by an average root mean square error (RMSE) of 2.58 m for six test orthoimages whose spatial resolutions vary from 6.7 cm to 10.7 cm.
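
    Binary descriptors such as BRISK (and ABRISK) are matched by Hamming distance, which is the source of their speed advantage over vector descriptors like SIFT; a toy nearest-neighbour matcher:

```python
import numpy as np

rng = np.random.default_rng(5)
n_kp, n_bits = 20, 256
desc_a = rng.integers(0, 2, size=(n_kp, n_bits), dtype=np.uint8)
flips = (rng.random((n_kp, n_bits)) < 0.05).astype(np.uint8)
desc_b = desc_a ^ flips                    # same keypoints, 5% of bits flipped

def match(a, b):
    """Nearest neighbour by Hamming distance (XOR, then count differing bits)."""
    d = (a[:, None, :] ^ b[None, :, :]).sum(axis=2)
    return d.argmin(axis=1)

idx = match(desc_a, desc_b)
```

    On real hardware the XOR-and-popcount inner loop runs in a few instructions per descriptor pair, which is why pruning candidates early (as the "Sorting Ring" does) compounds the speed-up.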

  11. Electronic method for autofluorography of macromolecules on two-D matrices

    DOEpatents

    Davidson, Jackson B.; Case, Arthur L.

    1983-01-01

    A method for detecting, localizing, and quantifying macromolecules contained in a two-dimensional matrix is provided which employs a television-based position sensitive detection system. A molecule-containing matrix may be produced by conventional means to produce spots of light at the molecule locations which are detected by the television system. The matrix, such as a gel matrix, is exposed to an electronic camera system including an image-intensifier and secondary electron conduction camera capable of light integrating times of many minutes. A light image stored in the form of a charge image on the camera tube target is scanned by conventional television techniques, digitized, and stored in a digital memory. Intensity of any point on the image may be determined from the number at the memory address of the point. The entire image may be displayed on a television monitor for inspection and photographing or individual spots may be analyzed through selected readout of the memory locations. Compared to conventional film exposure methods, the exposure time may be reduced 100-1000 times.

  12. The study of infrared target recognition at sea background based on visual attention computational model

    NASA Astrophysics Data System (ADS)

    Wang, Deng-wei; Zhang, Tian-xu; Shi, Wen-jun; Wei, Long-sheng; Wang, Xiao-ping; Ao, Guo-qing

    2009-07-01

    Infrared images at sea background are notorious for their low signal-to-noise ratio, which makes target recognition in such images very difficult with traditional methods. In this paper, we present a novel target recognition method based on the integration of a visual attention computational model and a conventional approach (selective filtering and segmentation). The two distinct image processing techniques are combined in a manner that utilizes the strengths of both. The visual attention algorithm searches for the salient regions automatically, represents them by a set of winner points, and displays the salient regions as circles centered at these winner points. This provides a priori knowledge for the filtering and segmentation process. Based on each winner point, we construct a rectangular region to facilitate the filtering and segmentation; a labeling operation is then added selectively as required. Using the labeled information, we obtain the positional information of the region of interest from the final segmentation result, label the centroid on the corresponding original image, and complete the localization of the target. The processing time depends not on the size of the image but on the salient regions, so the time consumed is greatly reduced. The method was applied to the recognition of several kinds of real infrared images, and the experimental results demonstrate the effectiveness of the proposed algorithm.
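
    The attention stage can be approximated by a simple center-surround operator: a winner point is a location where a small neighbourhood is much brighter than its surround (toy data; the paper's computational model is more elaborate):

```python
import numpy as np

rng = np.random.default_rng(6)
img = rng.normal(0.0, 1.0, size=(64, 64))   # low-SNR sea-background stand-in
img[40:43, 20:23] += 6.0                    # a small warm target

def box_mean(a, r):
    """Mean over a (2r+1)x(2r+1) window via a summed-area table."""
    p = np.pad(a, r, mode="edge")
    c = np.pad(np.cumsum(np.cumsum(p, 0), 1), ((1, 0), (1, 0)))
    k = 2 * r + 1
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / k**2

saliency = box_mean(img, 1) - box_mean(img, 7)   # center minus surround
winner = np.unravel_index(np.argmax(saliency), saliency.shape)
```

    The rectangular region for subsequent filtering and segmentation is then constructed around `winner`, so only the salient neighbourhood is processed rather than the whole frame.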

  13. Neurocognitive Effects of Radiotherapy

    DTIC Science & Technology

    2013-11-05

    Patients completed a 1-hour standard MRI as well as additional testing, including diffusion tensor imaging (DTI), perfusion, and diffusion imaging. The majority of patients have completed the baseline assessment and at least two additional time points.

  14. Enhancing Ground Based Telescope Performance with Image Processing

    DTIC Science & Technology

    2013-11-13

    The design was driven by the need to detect small, faint objects with relatively short integration times, thereby avoiding streaking of the satellite image across multiple CCD pixels so that the objects are suitably modeled as point sources. The orbital elements of the satellite were entered into the SST's tracking system so that the SST could track the satellite at the time right before the eclipse.

  15. Real-time reconstruction of three-dimensional brain surface MR image using new volume-surface rendering technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watanabe, T.; Momose, T.; Oku, S.

    It is essential to obtain realistic brain surface images, in which sulci and gyri are easily recognized, when examining the correlation between functional (PET or SPECT) and anatomical (MRI) brain studies. The volume rendering technique (VRT) is commonly employed to make three-dimensional (3D) brain surface images. This technique, however, takes considerable time to make even one 3D image. It has therefore not been practical to make brain surface images in arbitrary directions on a real-time basis using ordinary workstations or personal computers. The surface rendering technique (SRT), on the other hand, is much less computationally demanding, but the quality of the resulting images is not satisfactory for our purpose. A new computer algorithm has been developed to make 3D brain surface MR images very quickly using a volume-surface rendering technique (VSRT), in which the quality of the resulting images is comparable to that of VRT and the computation time to that of SRT. In VSRT the process of volume rendering is done only once, in the direction of the normal vector of each surface point, rather than each time a new view point is determined as in VRT. Subsequent reconstruction of the 3D image uses an algorithm similar to that of SRT. Thus we can obtain brain surface MR images of sufficient quality viewed from any direction on a real-time basis using an easily available personal computer (Macintosh Quadra 800). The calculation time to make a 3D image is less than 1 sec. with VSRT, while it is more than 15 sec. with the conventional VRT. The difference in resulting image quality between VSRT and VRT is almost imperceptible. In conclusion, our new technique for real-time reconstruction of 3D brain surface MR images is very useful and practical in functional and anatomical correlation studies.

  16. An Automated Blur Detection Method for Histological Whole Slide Imaging

    PubMed Central

    Moles Lopez, Xavier; D'Andrea, Etienne; Barbot, Paul; Bridoux, Anne-Sophie; Rorive, Sandrine; Salmon, Isabelle; Debeir, Olivier; Decaestecker, Christine

    2013-01-01

    Whole slide scanners are novel devices that enable high-resolution imaging of an entire histological slide. Furthermore, the imaging is achieved in only a few minutes, which enables image rendering of large-scale studies involving multiple immunohistochemistry biomarkers. Although whole slide imaging has improved considerably, locally poor focusing causes blurred regions of the image. These artifacts may strongly affect the quality of subsequent analyses, making a slide review process mandatory. This tedious and time-consuming task requires the scanner operator to carefully assess the virtual slide and to manually select new focus points. We propose a statistical learning method that provides early image quality feedback and automatically identifies regions of the image that require additional focus points. PMID:24349343
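
    A classical hand-crafted counterpart to the learned blur features is the variance-of-Laplacian focus measure, which drops sharply on defocused tiles; a toy comparison (synthetic data, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(7)
sharp = rng.uniform(0.0, 1.0, size=(128, 128))   # stand-in for an in-focus tile

def box_blur(a, k=5):
    """Separable k x k box filter (a crude stand-in for defocus blur)."""
    kern = np.ones(k) / k
    a = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="same"), 1, a)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="same"), 0, a)

def sharpness(a):
    """Variance of the discrete 4-neighbour Laplacian: a common focus metric."""
    lap = (-4.0 * a[1:-1, 1:-1] + a[:-2, 1:-1] + a[2:, 1:-1]
           + a[1:-1, :-2] + a[1:-1, 2:])
    return lap.var()

blurred = box_blur(sharp)
```

    Thresholding such a per-tile score is one simple way to flag regions that need additional focus points before rescanning.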

  17. A real-time photogrammetric algorithm for sensor and synthetic image fusion with application to aviation combined vision

    NASA Astrophysics Data System (ADS)

    Lebedev, M. A.; Stepaniants, D. G.; Komarov, D. V.; Vygolov, O. V.; Vizilter, Yu. V.; Zheltov, S. Yu.

    2014-08-01

    The paper addresses a promising visualization concept: combining sensor and synthetic images to enhance a pilot's situational awareness during aircraft landing. A real-time algorithm is proposed for fusing a sensor image, acquired by an onboard camera, with a synthetic 3D image of the external view generated in an onboard computer. The pixel correspondence between the sensor and synthetic images is obtained by exterior orientation of a "virtual" camera using runway points as a geospatial reference. The runway points are detected by the Projective Hough Transform, the idea of which is to project the edge map onto a horizontal plane in object space (the runway plane) and then to calculate intensity projections of the edge pixels onto different directions of the intensity gradient. Experiments performed on simulated images show that, on a base glide path, the algorithm provides image fusion with pixel accuracy even in the presence of significant navigation errors.
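    The plane-projection step of the Projective Hough Transform might be sketched as below, assuming the image-to-runway-plane mapping is available as a homography H (in the paper it would follow from the camera's exterior orientation); function names are hypothetical.

    ```python
    import numpy as np

    def project_edges_to_runway(edge_px, H):
        """Map edge pixels (N x 2, image coordinates) onto the runway
        plane through a homography H (image -> ground plane)."""
        pts = np.hstack([edge_px, np.ones((len(edge_px), 1))])
        g = pts @ H.T
        return g[:, :2] / g[:, 2:3]      # dehomogenize

    def directional_votes(ground_pts, angle, n_bins=64):
        """Hough-style accumulation: histogram the projected edge points
        along one candidate direction on the runway plane."""
        d = np.array([np.cos(angle), np.sin(angle)])
        hist, _ = np.histogram(ground_pts @ d, bins=n_bins)
        return hist
    ```

    Peaks in the directional histograms would then indicate the runway edge lines.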

  18. Stereo multiplexed holographic particle image velocimeter

    DOEpatents

    Adrian, Ronald J.; Barnhart, Donald H.; Papen, George A.

    1996-01-01

    A holographic particle image velocimeter employs stereoscopic recording of particle images, taken from two different perspectives and at two distinct points in time for each perspective, on a single holographic film plate. The different perspectives are provided by two optical assemblies, each including a collecting lens, a prism and a focusing lens. Collimated laser energy is pulsed through a fluid stream, with elements carried in the stream scattering light, some of which is collected by each collecting lens. The respective focusing lenses are configured to form images of the scattered light near the holographic plate. The particle images stored on the plate are reconstructed using the same optical assemblies employed in recording, by transferring the film plate and optical assemblies as a single integral unit to a reconstruction site. At the reconstruction site, reconstruction beams, phase conjugates of the reference beams used in recording the image, are directed to the plate, then selectively through either one of the optical assemblies, to form an image reflecting the chosen perspective at the two points in time.

  19. Stereo multiplexed holographic particle image velocimeter

    DOEpatents

    Adrian, R.J.; Barnhart, D.H.; Papen, G.A.

    1996-08-20

    A holographic particle image velocimeter employs stereoscopic recording of particle images, taken from two different perspectives and at two distinct points in time for each perspective, on a single holographic film plate. The different perspectives are provided by two optical assemblies, each including a collecting lens, a prism and a focusing lens. Collimated laser energy is pulsed through a fluid stream, with elements carried in the stream scattering light, some of which is collected by each collecting lens. The respective focusing lenses are configured to form images of the scattered light near the holographic plate. The particle images stored on the plate are reconstructed using the same optical assemblies employed in recording, by transferring the film plate and optical assemblies as a single integral unit to a reconstruction site. At the reconstruction site, reconstruction beams, phase conjugates of the reference beams used in recording the image, are directed to the plate, then selectively through either one of the optical assemblies, to form an image reflecting the chosen perspective at the two points in time. 13 figs.

  20. Image enhancement and color constancy for a vehicle-mounted change detection system

    NASA Astrophysics Data System (ADS)

    Tektonidis, Marco; Monnin, David

    2016-10-01

    Vehicle-mounted change detection systems make it possible to improve situational awareness on outdoor itineraries of interest. Since the visibility of acquired images is often affected by illumination effects (e.g., shadows), it is important to enhance local contrast. For the analysis and comparison of color images depicting the same scene at different time points, it is necessary to compensate for color and lightness inconsistencies caused by the different illumination conditions. We have developed an approach for image enhancement and color constancy based on the center/surround Retinex model and the Gray World hypothesis. The combination of the two methods using a color processing function improves color rendition compared to either method alone. The use of stacked integral images (SII) allows local image processing to be performed efficiently. Our combined Retinex/Gray World approach has been successfully applied to image sequences acquired on outdoor itineraries at different time points, and a comparison with previous Retinex-based approaches has been carried out.
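    A minimal sketch of the combination described above: a single-scale center/surround Retinex per channel followed by Gray World channel gains, with the surround computed as a box mean via an integral image (the constant-time trick behind stacked integral images). Parameter values and names are illustrative, not the authors'.

    ```python
    import numpy as np

    def box_mean(channel, r):
        """Local (2r+1)x(2r+1) box mean via an integral image, so the
        cost per pixel is O(1) regardless of r."""
        pad = np.pad(channel.astype(float), r, mode='edge')
        ii = np.zeros((pad.shape[0] + 1, pad.shape[1] + 1))
        ii[1:, 1:] = pad.cumsum(0).cumsum(1)
        h, w = channel.shape
        k = 2 * r + 1
        s = (ii[k:k + h, k:k + w] - ii[:h, k:k + w]
             - ii[k:k + h, :w] + ii[:h, :w])
        return s / k ** 2

    def retinex_gray_world(img, r=15):
        """Single-scale Retinex (log center minus log surround) per
        channel, then Gray World gains equalizing the channel means."""
        eps = 1e-6
        out = np.empty_like(img, dtype=float)
        for c in range(3):
            ch = img[..., c].astype(float)
            out[..., c] = np.log(ch + eps) - np.log(box_mean(ch, r) + eps)
        out -= out.min()                      # shift to non-negative range
        means = out.reshape(-1, 3).mean(0)
        gains = means.mean() / np.maximum(means, eps)
        return out * gains
    ```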

  1. Efficient generation of discontinuity-preserving adaptive triangulations from range images.

    PubMed

    Garcia, Miguel Angel; Sappa, Angel Domingo

    2004-10-01

    This paper presents an efficient technique for generating adaptive triangular meshes from range images. The algorithm consists of two stages. First, a user-defined number of points is adaptively sampled from the given range image. Those points are chosen by taking into account the surface shapes represented in the range image in such a way that points tend to group in areas of high curvature and to disperse in low-variation regions. This selection process is done through a noniterative, inherently parallel algorithm in order to gain efficiency. Once the image has been subsampled, the second stage applies a two and one half-dimensional Delaunay triangulation to obtain an initial triangular mesh. To favor the preservation of surface and orientation discontinuities (jump and crease edges) present in the original range image, the aforementioned triangular mesh is iteratively modified by applying an efficient edge flipping technique. Results with real range images show accurate triangular approximations of the given range images with low processing times.
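    Stage one (noniterative, curvature-proportional sampling) can be sketched as follows, using the magnitude of the depth Laplacian as a simple curvature proxy; the subsequent 2.5D Delaunay triangulation and edge flipping are not shown. Names and parameters are assumptions.

    ```python
    import numpy as np

    def sample_range_image(depth, n_points, rng=None):
        """Draw n_points pixel sites with probability proportional to a
        curvature proxy (|Laplacian| of the depth map), so samples cluster
        in high-curvature areas and disperse in low-variation regions.
        A 2.5D Delaunay triangulation of the returned (x, y, z) points
        (e.g. scipy.spatial.Delaunay on x-y) would follow."""
        rng = np.random.default_rng() if rng is None else rng
        d = depth.astype(float)
        lap = np.abs(
            d[:-2, 1:-1] + d[2:, 1:-1] + d[1:-1, :-2] + d[1:-1, 2:]
            - 4.0 * d[1:-1, 1:-1]
        )
        w = lap.ravel() + 1e-6            # keep flat regions sample-able
        idx = rng.choice(w.size, size=n_points, replace=False,
                         p=w / w.sum())
        ys, xs = np.unravel_index(idx, lap.shape)
        ys, xs = ys + 1, xs + 1           # undo the one-pixel border trim
        return np.column_stack([xs, ys, d[ys, xs]])
    ```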

  2. Ultrasound based mitral valve annulus tracking for off-pump beating heart mitral valve repair

    NASA Astrophysics Data System (ADS)

    Li, Feng P.; Rajchl, Martin; Moore, John; Peters, Terry M.

    2014-03-01

    Mitral regurgitation (MR) occurs when the mitral valve cannot close properly during systole. The NeoChord tool aims to repair MR by implanting artificial chordae tendineae on flail leaflets inside the beating heart, without cardiopulmonary bypass. Image guidance is crucial for such a procedure due to the lack of direct vision of the targets or instruments. While this procedure is currently guided solely by transesophageal echocardiography (TEE), our previous work has demonstrated that guidance safety and efficiency can be significantly improved by employing augmented virtuality to provide a virtual presentation of the mitral valve annulus (MVA) and tools, integrated with real-time ultrasound image data. However, real-time mitral annulus tracking remains a challenge. In this paper, we describe an image-based approach to rapidly track MVA points on 2D/biplane TEE images. This approach is composed of two components: an image-based phasing component that identifies images at optimal cardiac phases for tracking, and a registration component that updates the coordinates of the MVA points. Preliminary validation has been performed on porcine data, with an average difference between manually and automatically identified MVA points of 2.5 mm. Using a parallelized implementation, this approach is able to track the mitral valve at up to 10 images per second.

  3. Comparison of computation time and image quality between full-parallax 4G-pixels CGHs calculated by the point cloud and polygon-based method

    NASA Astrophysics Data System (ADS)

    Nakatsuji, Noriaki; Matsushima, Kyoji

    2017-03-01

    Full-parallax high-definition CGHs composed of more than a billion pixels have so far been created only by the polygon-based method, because of its high performance. Recently, however, GPUs have made it possible to generate CGHs much faster from point clouds. In this paper, we measure the computation time of object fields for full-parallax high-definition CGHs, which are composed of 4 billion pixels and reconstruct the same scene, using the point-cloud method on a GPU and the polygon-based method on a CPU. In addition, we compare the optical and simulated reconstructions of the CGHs created by these two techniques to verify the image quality.

  4. Point-of-care and point-of-procedure optical imaging technologies for primary care and global health

    PubMed Central

    Boppart, Stephen A.; Richards-Kortum, Rebecca

    2015-01-01

    Leveraging advances in consumer electronics and wireless telecommunications, low-cost, portable optical imaging devices have the potential to improve screening and detection of disease at the point of care in primary health care settings in both low- and high-resource countries. Similarly, real-time optical imaging technologies can improve diagnosis and treatment at the point of procedure by circumventing the need for biopsy and analysis by expert pathologists, who are scarce in developing countries. Although many optical imaging technologies have been translated from bench to bedside, industry support is needed to commercialize and broadly disseminate these from the patient level to the population level to transform the standard of care. This review provides an overview of promising optical imaging technologies, the infrastructure needed to integrate them into widespread clinical use, and the challenges that must be addressed to harness the potential of these technologies to improve health care systems around the world. PMID:25210062

  5. Point-of-care and point-of-procedure optical imaging technologies for primary care and global health.

    PubMed

    Boppart, Stephen A; Richards-Kortum, Rebecca

    2014-09-10

    Leveraging advances in consumer electronics and wireless telecommunications, low-cost, portable optical imaging devices have the potential to improve screening and detection of disease at the point of care in primary health care settings in both low- and high-resource countries. Similarly, real-time optical imaging technologies can improve diagnosis and treatment at the point of procedure by circumventing the need for biopsy and analysis by expert pathologists, who are scarce in developing countries. Although many optical imaging technologies have been translated from bench to bedside, industry support is needed to commercialize and broadly disseminate these from the patient level to the population level to transform the standard of care. This review provides an overview of promising optical imaging technologies, the infrastructure needed to integrate them into widespread clinical use, and the challenges that must be addressed to harness the potential of these technologies to improve health care systems around the world. Copyright © 2014, American Association for the Advancement of Science.

  6. Hockey Concussion Education Project, Part 1: Susceptibility-weighted imaging study in male and female ice hockey players over a single season

    PubMed Central

    Helmer, Karl G.; Pasternak, Ofer; Fredman, Eli; Preciado, Ronny I.; Koerte, Inga K.; Sasaki, Takeshi; Mayinger, Michael; Johnson, Andrew M.; Holmes, Jeffrey D.; Forwell, Lorie; Skopelja, Elaine N.; Shenton, Martha E.; Echlin, Paul S.

    2015-01-01

    Object: Concussion, or mild traumatic brain injury (mTBI), is a commonly occurring sports-related injury, especially in contact sports such as hockey. Cerebral microbleeds (CMBs), which are small, hypointense lesions on T2*-weighted images, can result from TBI. The authors use susceptibility-weighted imaging (SWI) to automatically detect small hypointensities that may be subtle signs of chronic and acute damage due to both subconcussive and concussive injury. The goal was to investigate how the burden of these hypointensities changes over time, both over a playing season and postconcussion, compared with subjects who did not suffer a medically observed and diagnosed concussion. Methods: Images were obtained in 45 university-level adult male and female ice hockey players before and after a single Canadian Interuniversity Sports season. In addition, 11 subjects (5 men and 6 women) underwent imaging at 72 hours, 2 weeks, and 2 months after concussion. To identify subtle changes in brain tissue and potential CMBs, nonvessel clusters of hypointensities on SWI were automatically identified, and a hypointensity burden index was calculated for all subjects at the beginning of the season (BOS) and the end of the season (EOS), in addition to postconcussion time points (where applicable). Results: A statistically significant increase in the hypointensity burden, relative to the BOS, was observed for male subjects at the 2-week postconcussion time point. A smaller, nonsignificant rise in the burden for all female subjects was also observed within the same time period. The difference in hypointensity burden was also statistically significant for men with concussions between the 2-week time point and the BOS. There were no significant changes in burden for nonconcussed subjects of either sex between the BOS and EOS time points. 
However, there was a statistically significant difference in the burden between male and female subjects in the nonconcussed group at both the BOS and EOS time points, with males having a higher burden. Conclusions: This method extends the utility of SWI from the enhancement and detection of larger (> 5 mm) CMBs, which are often observed in more severe TBI, to concussion, in which visual detection of injury is difficult. The hypointensity burden metric proposed here shows statistically significant changes over time in the male subjects. A smaller, nonsignificant increase in the burden metric was observed in the female subjects. PMID:24490839

  7. See-Through Imaging of Laser-Scanned 3d Cultural Heritage Objects Based on Stochastic Rendering of Large-Scale Point Clouds

    NASA Astrophysics Data System (ADS)

    Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.

    2016-06-01

    We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10^7 or 10^8 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.
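    The no-sorting idea can be illustrated with a toy stochastic point renderer: each ensemble keeps every point independently with probability alpha, splats the survivors with a plain z-buffer, and the ensemble average yields the transparency effect. This is a schematic reconstruction under those assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def stochastic_render(points, colors, alpha, size=128,
                          n_ensembles=16, rng=None):
        """Sketch of stochastic point rendering: per ensemble, keep each
        point with probability alpha, splat survivors front-most-wins
        into a z-buffer, then average the ensembles. No depth sorting
        of the primitives is ever needed."""
        rng = np.random.default_rng() if rng is None else rng
        acc = np.zeros((size, size))
        for _ in range(n_ensembles):
            keep = rng.random(len(points)) < alpha
            img = np.zeros((size, size))
            zbuf = np.full((size, size), np.inf)
            for (x, y, z), c in zip(points[keep], colors[keep]):
                xi, yi = int(x) % size, int(y) % size
                if z < zbuf[yi, xi]:        # nearer surviving point wins
                    zbuf[yi, xi] = z
                    img[yi, xi] = c
            acc += img
        return acc / n_ensembles
    ```

    Averaging many such ensembles makes occluded points contribute in proportion to (1 - alpha) of the time, which is where the correct depth feel comes from.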

  8. A Registration Method Based on Contour Point Cloud for 3D Whole-Body PET and CT Images

    PubMed Central

    Yang, Qiyao; Wang, Zhiguo; Zhang, Guoxu

    2017-01-01

    The PET and CT fusion image, combining anatomical and functional information, has important clinical value. An effective registration of PET and CT images is the basis of image fusion. This paper presents a multithread registration method based on contour point clouds for 3D whole-body PET and CT images. First, a geometric feature-based segmentation (GFS) method and a dynamic threshold denoising (DTD) method are proposed to preprocess the CT and PET images, respectively. Next, a new automated trunk-slice extraction method is presented for extracting the feature point clouds. Finally, multithread Iterative Closest Point (ICP) is adopted to drive an affine transform. We compare our method with a multiresolution registration method based on Mattes Mutual Information on 13 pairs (246~286 slices per pair) of 3D whole-body PET and CT data. Experimental results demonstrate the registration effectiveness of our method, with lower negative normalization correlation (NC = −0.933) on feature images and smaller Euclidean distance error (ED = 2.826) on landmark points, outperforming the source data (NC = −0.496, ED = 25.847) and the compared method (NC = −0.614, ED = 16.085). Moreover, our method is about ten times faster than the compared one. PMID:28316979
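    For illustration, a minimal rigid-body ICP in the spirit of the registration step (the paper drives an affine transform and runs multithreaded; this sketch uses brute-force nearest neighbors and the Kabsch fit, and all names are assumptions).

    ```python
    import numpy as np

    def best_rigid_transform(src, dst):
        """Kabsch: least-squares rotation R and translation t such that
        dst ~ src @ R.T + t for paired points."""
        cs, cd = src.mean(0), dst.mean(0)
        H = (src - cs).T @ (dst - cd)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, cd - R @ cs

    def icp(src, dst, n_iters=20):
        """Minimal ICP: match each source point to its nearest target
        point, refit the transform, repeat."""
        cur = src.copy()
        for _ in range(n_iters):
            d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
            matched = dst[d2.argmin(1)]   # closest target for each point
            R, t = best_rigid_transform(cur, matched)
            cur = cur @ R.T + t
        return cur
    ```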

  9. Temporal Data Set Reduction Based on D-Optimality for Quantitative FLIM-FRET Imaging.

    PubMed

    Omer, Travis; Intes, Xavier; Hahn, Juergen

    2015-01-01

    Fluorescence lifetime imaging (FLIM) when paired with Förster resonance energy transfer (FLIM-FRET) enables the monitoring of nanoscale interactions in living biological samples. FLIM-FRET model-based estimation methods allow the quantitative retrieval of parameters such as the quenched (interacting) and unquenched (non-interacting) fractional populations of the donor fluorophore and/or the distance of the interactions. The quantitative accuracy of such model-based approaches depends on multiple factors, such as the signal-to-noise ratio and the number of temporal points acquired when sampling the fluorescence decays. For high-throughput or in vivo applications of FLIM-FRET, it is desirable to acquire a limited number of temporal points for fast acquisition times. Yet it is critical to acquire temporal data sets with sufficient information content to allow accurate FLIM-FRET parameter estimation. Herein, an optimal experimental design approach based upon sensitivity analysis is presented to identify the time points that provide the best quantitative estimates of the parameters for a given number of temporal sampling points. More specifically, the D-optimality criterion is employed to identify, within a sparse temporal data set, the set of time points leading to optimal estimation of the quenched fractional population of the donor fluorophore. Overall, a reduced set of 10 time points (compared with a typical complete set of 90 time points) was found to have minimal impact on parameter estimation accuracy (≈5%), as validated by in silico and in vivo experiments. This reduction of the number of needed time points by almost an order of magnitude allows the use of FLIM-FRET for certain high-throughput applications that would be infeasible if the entire set of time sampling points were used.
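    The greedy flavor of such a selection can be sketched as below: build the sensitivity (Jacobian) matrix of a two-component decay model and grow the subset of time points that maximizes det(JᵀJ), the D-optimality criterion. The decay model and its parameter values are illustrative stand-ins, not the paper's.

    ```python
    import numpy as np

    def sensitivities(t, a=0.5, tau_q=0.5, tau_u=2.5):
        """Jacobian of f(t) = a*exp(-t/tau_q) + (1-a)*exp(-t/tau_u)
        with respect to (a, tau_q). Values are illustrative only."""
        eq, eu = np.exp(-t / tau_q), np.exp(-t / tau_u)
        return np.column_stack([eq - eu, a * t / tau_q ** 2 * eq])

    def d_optimal_subset(t_full, k):
        """Greedy D-optimal design: at each step add the time point that
        maximizes det(J_S^T J_S) over the chosen subset S."""
        J = sensitivities(t_full)
        chosen = []
        for _ in range(k):
            best, best_det = None, -1.0
            for i in range(len(t_full)):
                if i in chosen:
                    continue
                Js = J[chosen + [i]]
                det = np.linalg.det(Js.T @ Js)
                if det > best_det:
                    best, best_det = i, det
            chosen.append(best)
        return np.sort(t_full[chosen])
    ```

    Greedy selection is a common heuristic here; an exhaustive search over all subsets would be combinatorially expensive.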

  10. Angiogram, fundus, and oxygen saturation optic nerve head image fusion

    NASA Astrophysics Data System (ADS)

    Cao, Hua; Khoobehi, Bahram

    2009-02-01

    A novel multimodality optic nerve head image fusion approach has been designed. The approach has been applied to three ophthalmologic modalities: angiogram, fundus, and oxygen-saturation retinal optic nerve head images. It achieves excellent results, visualizing fundus or oxygen-saturation images with a complete angiogram overlay. This study makes two contributions in terms of novelty, efficiency, and accuracy. The first contribution is an automated control point detection algorithm for multi-sensor images. The new method employs retinal vasculature and bifurcation features, identifying an initial guess of the control points using an Adaptive Exploratory Algorithm. The second contribution is a heuristic optimization fusion algorithm. In order to maximize the objective function (Mutual-Pixel-Count), the iterative algorithm adjusts the initial guess of the control points at the sub-pixel level. A refinement of the parameter set is obtained at the end of each loop, and an optimal fused image is generated at the end of the iteration. This is the first time the Mutual-Pixel-Count concept has been introduced into the biomedical image fusion area. By locking the images in one place, the fused image allows ophthalmologists to match the same eye over time, assess disease progress, and pinpoint surgical tools. The algorithm can easily be extended to 3D eye, brain, or body image registration and fusion in humans or animals.

  11. Bioluminescent system for dynamic imaging of cell and animal behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hara-Miyauchi, Chikako; Laboratory for Cell Function Dynamics, Brain Science Institute, RIKEN, Saitama 351-0198; Department of Biophysics and Biochemistry, Graduate School of Health Care Sciences, Tokyo Medical and Dental University, Tokyo 113-8510

    2012-03-09

    Highlights: We combined a yellow variant of GFP and firefly luciferase to make ffLuc-cp156. ffLuc-cp156 showed improved photon yield in cultured cells and transgenic mice. ffLuc-cp156 enabled video-rate bioluminescence imaging of freely moving animals. ffLuc-cp156 mice enabled tracking of real-time drug delivery in conscious animals. -- Abstract: The current utility of bioluminescence imaging is constrained by a low photon yield that limits temporal sensitivity. Here, we describe an imaging method that uses a chemiluminescent/fluorescent protein, ffLuc-cp156, which consists of a yellow variant of Aequorea GFP and firefly luciferase. We report an improvement in photon yield of over three orders of magnitude over current bioluminescent systems. We imaged cellular movement at high resolution, including neuronal growth cones and microglial cell protrusions. Transgenic ffLuc-cp156 mice enabled video-rate bioluminescence imaging of freely moving animals, which may provide a reliable assay for drug distribution in behaving animals for preclinical studies.

  12. Limiting Magnitude, τ, t eff, and Image Quality in DES Year 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    H. Neilsen, Jr.; Bernstein, Gary; Gruendl, Robert

    The Dark Energy Survey (DES) is an astronomical imaging survey being completed with the DECam imager on the Blanco telescope at CTIO. After each night of observing, the DES data management (DM) group performs an initial processing of that night's data and uses the results to determine which exposures are of acceptable quality and which need to be repeated. The primary measure by which we declare an image of acceptable quality is τ, a scaling of the exposure time. This is the scale factor that must be applied to the open-shutter time to reach the same photometric signal-to-noise ratio for faint point sources under a set of canonical good conditions. These conditions are defined to be seeing resulting in a PSF full width at half maximum (FWHM) of 0.9" and a predefined sky brightness which approximates the zenith sky brightness under fully dark conditions. Point-source limiting magnitude and signal-to-noise ratio should therefore vary with τ in the same way they vary with exposure time; measurements of point sources and τ in the first year of DES data confirm that they do. In the context of DES, the symbol t_eff and the expression "effective exposure time" usually refer to the scaling factor τ rather than the actual effective exposure time; the "effective exposure time" in this case refers to the effective duration of one second, rather than the effective duration of an exposure.
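    For a sky-noise-limited faint point source, S/N scales roughly as sqrt(t)/(FWHM·sqrt(b)), which suggests the following illustrative form of the scaling factor. This mirrors the definition described above but is not guaranteed to match the exact DES formula (which may include further factors such as atmospheric transparency, included here as an optional term).

    ```python
    def tau(fwhm_arcsec, sky_brightness, transparency=1.0,
            fwhm_ref=0.9, sky_ref=1.0):
        """Illustrative exposure-time scaling factor: since, for a
        sky-limited faint point source, S/N ~ transparency * sqrt(t)
        / (FWHM * sqrt(sky)), matching the canonical conditions
        (0.9" seeing, dark reference sky) gives
        tau = transparency**2 * (fwhm_ref/fwhm)**2 * (sky_ref/sky)."""
        return (transparency ** 2 * (fwhm_ref / fwhm_arcsec) ** 2
                * (sky_ref / sky_brightness))
    ```

    Worse seeing or a brighter sky drives tau below 1, flagging the exposure as shallower than its nominal open-shutter time.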

  13. Comparison between non-invasive methods used on paintings by Goya and his contemporaries: hyperspectral imaging vs. point-by-point spectroscopic analysis.

    PubMed

    Daniel, Floréal; Mounier, Aurélie; Pérez-Arantegui, Josefina; Pardos, Carlos; Prieto-Taboada, Nagore; Fdez-Ortiz de Vallejuelo, Silvia; Castro, Kepa

    2017-06-01

    The development of non-invasive techniques for the characterization of pigments is crucial in order to preserve the integrity of the artwork. In this sense, the usefulness of hyperspectral imaging was demonstrated. It allows pigment characterization of the whole painting. However, it also sometimes requires the complementation of other point-by-point techniques. In the present article, the advantages of hyperspectral imaging over point-by-point spectroscopic analysis were evaluated. For that purpose, three paintings were analysed by hyperspectral imaging, handheld X-ray fluorescence and handheld Raman spectroscopy in order to determine the best non-invasive technique for pigment identifications. Thanks to this work, the main pigments used in Aragonese artworks, and especially in Goya's paintings, were identified and mapped by imaging reflection spectroscopy. All the analysed pigments corresponded to those used at the time of Goya. Regarding the techniques used, the information obtained by the hyperspectral imaging and point-by-point analysis has been, in general, different and complementary. Given this fact, selecting only one technique is not recommended, and the present work demonstrates the usefulness of the combination of all the techniques used as the best non-invasive methodology for the pigments' characterization. Moreover, the proposed methodology is a relatively quick procedure that allows a larger number of Goya's paintings in the museum to be surveyed, increasing the possibility of obtaining significant results and providing a chance for extensive comparisons, which are relevant from the point of view of art history issues.

  14. Analytic relations for magnifications and time delays in gravitational lenses with fold and cusp configurations

    NASA Astrophysics Data System (ADS)

    Congdon, Arthur B.; Keeton, Charles R.; Nordgren, C. Erik

    2008-09-01

    Gravitational lensing provides a unique and powerful probe of the mass distributions of distant galaxies. Four-image lens systems with fold and cusp configurations have two or three bright images near a critical point. Within the framework of singularity theory, we derive analytic relations that are satisfied for a light source that lies a small but finite distance from the astroid caustic of a four-image lens. Using a perturbative expansion of the image positions, we show that the time delay between the close pair of images in a fold lens scales with the cube of the image separation, with a constant of proportionality that depends on a particular third derivative of the lens potential. We also apply our formalism to cusp lenses, where we develop perturbative expressions for the image positions, magnifications and time delays of the images in a cusp triplet. Some of these results were derived previously for a source asymptotically close to a cusp point, but using a simplified form of the lens equation whose validity may be in doubt for sources that lie at astrophysically relevant distances from the caustic. Along with the work of Keeton, Gaudi & Petters, this paper demonstrates that perturbation theory plays an important role in theoretical lensing studies.
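    The stated cube scaling for the fold pair can be written schematically; the proportionality constant, denoted c_3 here, stands for the particular third derivative of the lens potential, and this coefficient form is illustrative rather than the paper's exact expression.

    ```latex
    % Schematic form of the fold-pair time-delay scaling stated above:
    % d is the separation of the close image pair and c_3 denotes the
    % relevant third derivative of the lens potential.
    \Delta t_{\mathrm{fold}} \;\propto\; c_3\, d^{3},
    \qquad d = \lvert \theta_1 - \theta_2 \rvert \to 0 .
    ```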

  15. New Observations of Subarcsecond Photospheric Bright Points

    NASA Technical Reports Server (NTRS)

    Berger, T. E.; Schrijver, C. J.; Shine, R. A.; Tarbell, T. D.; Title, A. M.; Scharmer, G.

    1995-01-01

    We have used an interference filter centered at 4305 A within the bandhead of the CH radical (the 'G band') and real-time image selection at the Swedish Vacuum Solar Telescope on La Palma to produce very high contrast images of subarcsecond photospheric bright points at all locations on the solar disk. During the 6-day period of 15-20 Sept. 1993 we observed active region NOAA 7581 from its appearance on the East limb to a near-disk-center position on 20 Sept. A total of 1804 bright points were selected for analysis from the disk center image using feature extraction image processing techniques. The measured FWHM distribution of the bright points in the image is lognormal with a modal value of 220 km (0.30 sec) and an average value of 250 km (0.35 sec). The smallest measured bright point diameter is 120 km (0.17 sec) and the largest is 600 km (0.69 sec). Approximately 60% of the measured bright points are circular (eccentricity approx. 1.0), the average eccentricity is 1.5, and the maximum eccentricity corresponding to filigree in the image is 6.5. The peak contrast of the measured bright points is normally distributed. The contrast distribution variance is much greater than the measurement accuracy, indicating a large spread in intrinsic bright-point contrast. When referenced to an averaged 'quiet-Sun' area in the image, the modal contrast is 29% and the maximum value is 75%; when referenced to an average intergranular lane brightness in the image, the distribution has a modal value of 61% and a maximum of 119%. The bin-averaged contrast of G-band bright points is constant across the entire measured size range. The measured area of the bright points, corrected for pixelation and selection effects, covers about 1.8% of the total image area. Large pores and micropores occupy an additional 2% of the image area, implying a total area fraction of magnetic proxy features in the image of 3.8%. 
We discuss the implications of this area fraction measurement in the context of previously published measurements which show that typical active region plage has a magnetic filling factor on the order of 10% or greater. The results suggest that in the active region analyzed here, less than 50% of the small-scale magnetic flux tubes are demarcated by visible proxies such as bright points or pores.

  16. 'EPIC' View of Africa and Europe from a Million Miles Away

    NASA Image and Video Library

    2015-07-29

    Africa is front and center in this image of Earth taken by a NASA camera on the Deep Space Climate Observatory (DSCOVR) satellite. The image, taken July 6 from a vantage point one million miles from Earth, was one of the first taken by NASA’s Earth Polychromatic Imaging Camera (EPIC). Central Europe is toward the top of the image with the Sahara Desert to the south, showing the Nile River flowing to the Mediterranean Sea through Egypt. The photographic-quality color image was generated by combining three separate images of the entire Earth taken a few minutes apart. The camera takes a series of 10 images using different narrowband filters -- from ultraviolet to near infrared -- to produce a variety of science products. The red, green and blue channel images are used in these Earth images. The DSCOVR mission is a partnership between NASA, the National Oceanic and Atmospheric Administration (NOAA) and the U.S. Air Force, with the primary objective to maintain the nation’s real-time solar wind monitoring capabilities, which are critical to the accuracy and lead time of space weather alerts and forecasts from NOAA. DSCOVR was launched in February to its planned orbit at the first Lagrange point or L1, about one million miles from Earth toward the sun. It’s from that unique vantage point that the EPIC instrument is acquiring images of the entire sunlit face of Earth. Data from EPIC will be used to measure ozone and aerosol levels in Earth’s atmosphere, cloud height, vegetation properties and a variety of other features. Image Credit: NASA

  17. Real-Time Feature Tracking Using Homography

    NASA Technical Reports Server (NTRS)

    Clouse, Daniel S.; Cheng, Yang; Ansar, Adnan I.; Trotz, David C.; Padgett, Curtis W.

    2010-01-01

    This software finds feature point correspondences in sequences of images. It is designed for feature matching in aerial imagery. Feature matching is a fundamental step in a number of important image processing operations: calibrating the cameras in a camera array, stabilizing images in aerial movies, geo-registration of images, and generating high-fidelity surface maps from aerial movies. The method uses a Shi-Tomasi corner detector and normalized cross-correlation. This process is likely to produce some mismatches. The feature set is cleaned up using the assumption that there is a large planar patch visible in both images. At high altitude, this assumption is often reasonable. A mathematical transformation, called a homography, is developed that allows us to predict the position in image 2 of any point on the plane in image 1. Any feature pair that is inconsistent with the homography is thrown out. The output of the process is a set of feature pairs and the homography. The algorithms in this innovation are well known, but the new implementation improves the process in several ways. It runs in real time at 2 Hz on 64-megapixel imagery. The new Shi-Tomasi corner detector tries to produce the requested number of features by automatically adjusting the minimum distance between found features. The homography-finding code now uses an implementation of the RANSAC algorithm that adjusts the number of iterations automatically to achieve a preset probability of missing a set of inliers. The new interface allows the caller to pass in a set of predetermined points in one of the images, which makes it possible to track the same set of points through multiple frames.
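The core of the pipeline described above (estimating a homography from noisy feature pairs and discarding pairs inconsistent with the planar assumption) can be sketched in plain NumPy. This is an illustrative reimplementation, not the flight software; the function names are ours, and a fixed-iteration RANSAC loop stands in for the adaptive-iteration variant the abstract describes:

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate a 3x3 homography H with dst ~ H @ src via the DLT algorithm.
    src, dst: (N, 2) arrays of corresponding points, N >= 4."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pts):
    """Apply homography H to (N, 2) points."""
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]

def ransac_homography(src, dst, n_iter=500, tol=2.0, seed=0):
    """Return (H, inlier_mask); pairs inconsistent with the plane are rejected."""
    rng = np.random.default_rng(seed)
    best_mask, best_H = None, None
    for _ in range(n_iter):
        idx = rng.choice(len(src), 4, replace=False)
        try:
            H = fit_homography(src[idx], dst[idx])
        except np.linalg.LinAlgError:
            continue
        err = np.linalg.norm(project(H, src) - dst, axis=1)
        mask = err < tol
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_H = mask, H
    # refit on the full inlier set for a better estimate
    return fit_homography(src[best_mask], dst[best_mask]), best_mask
```

A production version would normalize coordinates before the DLT step and adapt `n_iter` to the observed inlier ratio, as the abstract's RANSAC variant does.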

  18. Proton radiography and proton computed tomography based on time-resolved dose measurements

    NASA Astrophysics Data System (ADS)

    Testa, Mauro; Verburg, Joost M.; Rose, Mark; Min, Chul Hee; Tang, Shikui; Hassane Bentefour, El; Paganetti, Harald; Lu, Hsiao-Ming

    2013-11-01

    We present a proof of principle study of proton radiography and proton computed tomography (pCT) based on time-resolved dose measurements. We used a prototype, two-dimensional, diode-array detector capable of fast dose rate measurements to acquire proton radiographic images expressed directly in water equivalent path length (WEPL). The technique is based on the time dependence of the dose distribution delivered by a proton beam traversing a range modulator wheel in passive scattering proton therapy systems. The dose rate produced in the medium by such a system is periodic and has a unique pattern in time at each point along the beam path, and thus encodes the WEPL. By measuring the time-dose pattern at the point of interest, the WEPL to this point can be decoded. If one measures the time-dose patterns at points on a plane behind the patient for a beam with sufficient energy to penetrate the patient, the obtained 2D distribution of the WEPL forms an image. The technique requires only a 2D dosimeter array, and it uses only the clinical beam for a fraction of a second with negligible dose to the patient. We first evaluated the accuracy of the technique in determining the WEPL for static phantoms, aiming at beam range verification of the brain fields of medulloblastoma patients. Accurate beam ranges for these fields can significantly reduce the dose to the cranial skin of the patient and thus the risk of permanent alopecia. Second, we investigated the potential of the technique for real-time imaging of a moving phantom. Real-time tumor tracking by proton radiography could provide more accurate validation of tumor motion models, because the proton beam depends more sensitively on tissue density than x-rays do. Our radiographic technique is rapid (~100 ms) and simultaneous over the whole field; it can image mobile tumors without the interplay effect that is inherently challenging for methods based on pencil beams. 
Third, we present the reconstructed pCT images of a cylindrical phantom containing inserts of different materials. As with all conventional pCT systems, the method illustrated in this work produces tomographic images that are potentially more accurate than x-ray CT in providing maps of proton relative stopping power (RSP) in the patient, without the need to convert x-ray Hounsfield units to proton RSP. All phantom tests produced reasonable results, given the currently limited spatial and time resolution of the prototype detector. The dose required to produce one radiographic image, with the current settings, is ~0.7 cGy. Finally, we discuss a series of techniques to improve the resolution and accuracy of radiographic and tomographic images for the future development of a full-scale detector.
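The WEPL decoding idea above, matching a measured time-dose trace at a point against reference patterns for candidate water-equivalent depths, can be illustrated with a toy model. The pulse-train shape in `reference_pattern` is invented for illustration (the real pattern comes from the modulator wheel geometry and calibration); only the decode-by-correlation logic reflects the technique described:

```python
import numpy as np

def reference_pattern(wepl, t, period=0.1):
    """Hypothetical periodic dose-rate pattern at residual depth `wepl` (mm):
    a phase-modulated, half-rectified pulse train standing in for the
    pattern produced by the rotating range-modulator wheel."""
    phase = 2 * np.pi * t / period
    return np.maximum(0.0, np.cos(phase - 0.2 * wepl)) ** 2

def decode_wepl(measured, t, candidates):
    """Return the candidate WEPL whose reference pattern best matches the
    measured time-dose trace (maximum normalized cross-correlation)."""
    def ncc(a, b):
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return np.mean(a * b)
    scores = [ncc(measured, reference_pattern(w, t)) for w in candidates]
    return candidates[int(np.argmax(scores))]
```

Applying `decode_wepl` independently at every pixel of the 2D dosimeter array would yield the WEPL image described in the abstract.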

  19. Electronic method for autofluorography of macromolecules on two-D matrices. [Patent application

    DOEpatents

    Davidson, J.B.; Case, A.L.

    1981-12-30

    A method for detecting, localizing, and quantifying macromolecules contained in a two-dimensional matrix is provided which employs a television-based position-sensitive detection system. A molecule-containing matrix may be treated by conventional means to produce spots of light at the molecule locations, which are detected by the television system. The matrix, such as a gel matrix, is exposed to an electronic camera system including an image intensifier and a secondary electron conduction camera capable of light-integration times of many minutes. A light image stored in the form of a charge image on the camera tube target is scanned by conventional television techniques, digitized, and stored in a digital memory. The intensity of any point in the image may be determined from the number at the memory address of that point. The entire image may be displayed on a television monitor for inspection and photographing, or individual spots may be analyzed through selected readout of the memory locations. Compared to conventional film exposure methods, the exposure time may be reduced 100 to 1000 times.

  20. a Comparative Case Study of Reflection Seismic Imaging Method

    NASA Astrophysics Data System (ADS)

    Alamooti, M.; Aydin, A.

    2017-12-01

    Seismic imaging is the most common means of gathering information about subsurface structural features. The accuracy of seismic images may be highly variable, depending on the complexity of the subsurface and on how the seismic data are processed. One of the crucial steps in this process, especially in layered sequences with complicated structure, is the time and/or depth migration of the seismic data. The primary purpose of migration is to increase the spatial resolution of seismic images by repositioning the recorded seismic signal back to its original point of reflection in time/space, which enhances information about complex structure. In this study, our objective is to process a seismic data set (courtesy of the University of South Carolina) to generate an image on which the Magruder fault near Allendale, SC can be clearly distinguished and its attitude can be accurately depicted. The data were gathered by the common mid-point method with 60 geophones equally spaced along a roughly 550 m traverse over nearly flat ground. The results obtained from the application of different migration algorithms (including finite-difference and Kirchhoff) are compared in the time and depth domains to investigate the efficiency of each algorithm in reducing processing time and improving the accuracy of seismic images in reflecting the correct position of the Magruder fault.
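The Kirchhoff migration mentioned above can be sketched in its simplest, diffraction-summation form for the zero-offset, constant-velocity case: each output sample sums the recorded amplitudes along the diffraction hyperbola it would have produced. This is a textbook-style illustration, not the processing flow applied to the Allendale data:

```python
import numpy as np

def kirchhoff_migrate(data, dt, dx, v):
    """Zero-offset diffraction-summation (Kirchhoff time) migration.
    data: (n_t, n_x) seismic section; dt, dx: sample intervals;
    v: constant medium velocity (a simplifying assumption)."""
    n_t, n_x = data.shape
    x = np.arange(n_x) * dx
    image = np.zeros_like(data)
    for ix in range(n_x):          # output trace position
        for it in range(n_t):      # output two-way time sample
            t0 = it * dt
            # two-way travel time along the diffraction hyperbola
            t = np.sqrt(t0 ** 2 + 4.0 * (x - x[ix]) ** 2 / v ** 2)
            samp = np.round(t / dt).astype(int)
            ok = samp < n_t
            image[it, ix] = data[samp[ok], np.nonzero(ok)[0]].sum()
    return image
```

A production implementation would add aperture limits, obliquity and amplitude weighting, and anti-alias filtering, but the summation geometry is the essence of the method.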

  1. Lung motion estimation using dynamic point shifting: An innovative model based on a robust point matching algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yi, Jianbing, E-mail: yijianbing8@163.com; Yang, Xuan, E-mail: xyang0520@263.net; Li, Yan-Ran, E-mail: lyran@szu.edu.cn

    2015-10-15

    Purpose: Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address these issues, an innovative lung motion estimation model based on a robust point matching algorithm is proposed in this paper. Methods: An innovative robust point matching algorithm using dynamic point shifting is proposed to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images that are centered at points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, the virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and shifted target point set are used to estimate the transformation function between the source image and target image. Results: The performance of the authors’ method is evaluated on two publicly available DIR-lab and POPI-model lung datasets. For computing target registration errors on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation by the authors’ method are 1.11 and 1.11 mm, but they are 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions. 
For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset with 300 landmark points of each case, the mean and standard deviation of target registration errors on the 3000 landmark points of ten cases by the authors’ method are 1.21 and 1.04 mm. In the EMPIRE10 lung registration challenge, the authors’ method ranks 24 of 39. According to the index of the maximum shear stretch, the authors’ method also efficiently describes the discontinuous motion at the lung boundaries. Conclusions: By establishing the correspondence of the landmark points in the source phase and the other target phases, combining shape matching and image intensity matching together, the mismatching issue in the robust point matching algorithm is adequately addressed. The target registration errors are statistically reduced by shifting the virtual target points and target points. The authors’ method with consideration of sliding conditions can effectively estimate the discontinuous motion, and the estimated motion is natural. The primary limitation of the proposed method is that the temporal constraints of the trajectories of voxels are not introduced into the motion model. However, the proposed method provides satisfactory motion information, which results in precise tumor coverage by the radiation dose during radiotherapy.
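The correspondence criterion described, combining the Euclidean distance between landmark points with the similarity of the local images centered at them, can be illustrated with a toy matcher. This sketch uses a simple weighted cost and exhaustive search; the paper's virtual-target-point shifting and constrained inverse mapping are not reproduced, and the names and weighting are ours:

```python
import numpy as np

def local_patch(img, p, r=2):
    """Local image patch centered at integer point p = (row, col)."""
    return img[p[0] - r:p[0] + r + 1, p[1] - r:p[1] + r + 1]

def match_landmarks(src_pts, tgt_pts, src_img, tgt_img, alpha=0.5):
    """Toy correspondence step in the spirit of robust point matching:
    cost = alpha * Euclidean distance + (1 - alpha) * patch dissimilarity.
    Returns, for each source point, the index of the best target point."""
    matches = []
    for sp in src_pts:
        ps = local_patch(src_img, sp).astype(float)
        best, best_cost = -1, np.inf
        for j, tp in enumerate(tgt_pts):
            pt = local_patch(tgt_img, tp).astype(float)
            d_geo = np.linalg.norm(np.asarray(sp) - np.asarray(tp))
            d_int = np.abs(ps - pt).mean()   # mean absolute intensity difference
            cost = alpha * d_geo + (1 - alpha) * d_int
            if cost < best_cost:
                best, best_cost = j, cost
        matches.append(best)
    return matches
```

Including the intensity term is what resolves ambiguities that geometric distance alone cannot, which is the mismatching issue the paper addresses.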

  2. Lung motion estimation using dynamic point shifting: An innovative model based on a robust point matching algorithm.

    PubMed

    Yi, Jianbing; Yang, Xuan; Chen, Guoliang; Li, Yan-Ran

    2015-10-01

    Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address these issues, an innovative lung motion estimation model based on a robust point matching algorithm is proposed in this paper. An innovative robust point matching algorithm using dynamic point shifting is proposed to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images that are centered at points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, the virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and shifted target point set are used to estimate the transformation function between the source image and target image. The performance of the authors' method is evaluated on two publicly available DIR-lab and POPI-model lung datasets. For computing target registration errors on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation by the authors' method are 1.11 and 1.11 mm, but they are 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions. 
For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset with 300 landmark points of each case, the mean and standard deviation of target registration errors on the 3000 landmark points of ten cases by the authors' method are 1.21 and 1.04 mm. In the EMPIRE10 lung registration challenge, the authors' method ranks 24 of 39. According to the index of the maximum shear stretch, the authors' method also efficiently describes the discontinuous motion at the lung boundaries. By establishing the correspondence of the landmark points in the source phase and the other target phases, combining shape matching and image intensity matching together, the mismatching issue in the robust point matching algorithm is adequately addressed. The target registration errors are statistically reduced by shifting the virtual target points and target points. The authors' method with consideration of sliding conditions can effectively estimate the discontinuous motion, and the estimated motion is natural. The primary limitation of the proposed method is that the temporal constraints of the trajectories of voxels are not introduced into the motion model. However, the proposed method provides satisfactory motion information, which results in precise tumor coverage by the radiation dose during radiotherapy.

  3. SU-C-204-06: Surface Imaging for the Set-Up of Proton Post-Mastectomy Chestwall Irradiation: Gated Images Vs Non Gated Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Batin, E; Depauw, N; MacDonald, S

    Purpose: Historically, the set-up for proton post-mastectomy chestwall irradiation at our institution started with positioning the patient using tattoos and lasers. One or more rounds of orthogonal X-rays at gantry 0° and beamline X-ray at the treatment gantry angle were then taken to finalize the set-up position. As chestwall targets are shallow and superficial, surface imaging is a promising tool for set-up and needs to be investigated. Methods: The orthogonal imaging was entirely replaced by AlignRT™ (ART) images. The beamline X-ray image is kept as a confirmation, based primarily on three opaque markers placed on the skin surface instead of bony anatomy. In the first phase of the process, ART gated images were used to set up the patient, and the same specific point of the breathing curve was used every day. The moves (translations and rotations) computed for each point of the breathing curve during the first five fractions were analyzed for ten patients. During a second phase of the study, ART gated images were replaced by ART non-gated images combined with real-time monitoring. In both cases, ART images were acquired just before treatment to assess the patient position compared to the non-gated CT. Results: The average difference between the maximum move and the minimum move depending on the chosen breathing-curve point was less than 1.7 mm for all translations and less than 0.7° for all rotations. The average position discrepancy over the course of treatment, obtained by ART non-gated images combined with real-time monitoring taken before treatment relative to the planning CT, was smaller than the average position discrepancy obtained using ART gated images. The X-ray validation images show similar results with both ART imaging processes. Conclusion: The use of ART non-gated images combined with real-time monitoring allows positioning post-mastectomy chestwall patients to within 3 mm / 1°.

  4. 3D Point Cloud Model Colorization by Dense Registration of Digital Images

    NASA Astrophysics Data System (ADS)

    Crombez, N.; Caron, G.; Mouaddib, E.

    2015-02-01

    Architectural heritage is a historic and artistic property which has to be protected, preserved, restored and must be shown to the public. Modern tools like 3D laser scanners are more and more used in heritage documentation. Most of the time, the 3D laser scanner is completed by a digital camera which is used to enrich the accurate geometric information with the scanned objects' colors. However, the photometric quality of the acquired point clouds is generally rather low because of several problems presented below. We propose an accurate method for registering digital images acquired from any viewpoints on point clouds, which is a crucial step for good colorization by color projection. We express this image-to-geometry registration as a pose estimation problem. The camera pose is computed using the entire image intensities under a photometric visual and virtual servoing (VVS) framework. The camera extrinsic and intrinsic parameters are automatically estimated. Because we estimate the intrinsic parameters, we do not need any information about the camera that took the digital image. Finally, when the point cloud model and the digital image are correctly registered, we project the 3D model into the digital image frame and assign new colors to the visible points. The performance of the approach is demonstrated in simulated and real experiments on indoor and outdoor datasets of the cathedral of Amiens, which highlight the success of our method, leading to point clouds with better photometric quality and resolution.
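The final colorization step, projecting the 3D model into the registered image frame and assigning colors to visible points, reduces to a pinhole-camera projection once the pose is known. A minimal sketch (occlusion handling and the VVS pose estimation itself are omitted, and the function name is ours):

```python
import numpy as np

def colorize(points, image, K, R, t):
    """Project 3D world points into the registered image and pick up colors.
    K: 3x3 intrinsics; (R, t): camera rotation and translation. Points that
    project outside the image or behind the camera get color None."""
    cam = (R @ points.T).T + t          # world frame -> camera frame
    uvw = (K @ cam.T).T                 # camera frame -> homogeneous pixels
    h, w = image.shape[:2]
    colors = []
    for (u, v, z) in uvw:
        if z <= 0:                      # behind the camera
            colors.append(None)
            continue
        col, row = int(round(u / z)), int(round(v / z))
        colors.append(image[row, col] if (0 <= row < h and 0 <= col < w) else None)
    return colors
```

A real pipeline would also test visibility (e.g., with a z-buffer) so that occluded points are not painted with foreground colors.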

  5. Imaging of biophoton emission from electrostimulated skin acupuncture point jg4: effect of light enhancers.

    PubMed

    Slawinski, Janusz; Gorski, Zbigniew

    2008-05-01

    Using an ultrasensitive CCD camera, an extremely low light intensity from the acupuncture-sensitive point JG4 on the left hand was recorded. As the intensity of the light was very weak and the time of electrostimulation exceeded the recommended period, the quality of the biophoton images was poor. Chemiluminescent and fluorescent hydrophilic, hydrophobic and amphiphilic molecular probes were used to: (i) ensure penetration of the probes into the skin, (ii) enhance the intensity of BP emission, (iii) shorten the acquisition time and (iv) obtain information about the mechanisms of biophoton generation in EAP-sensitive points and channels. The results obtained partially fulfilled these expectations and indicate the need to develop special techniques for depositing the probes on the skin.

  6. MR imaging of ore for heap bioleaching studies using pure phase encode acquisition methods

    NASA Astrophysics Data System (ADS)

    Fagan, Marijke A.; Sederman, Andrew J.; Johns, Michael L.

    2012-03-01

    Various MRI techniques were considered with respect to imaging of aqueous flow fields in low-grade copper ore. Spin echo frequency-encoded techniques were shown to produce unacceptable image distortions, which led to pure phase-encoded techniques being considered. Single point imaging multiple point acquisition (SPI-MPA) and spin echo single point imaging (SESPI) techniques were applied. By direct comparison with X-ray tomographic images, both techniques were found to produce distortion-free images of the ore packings at 2 T. The signal-to-noise ratios (SNRs) of the SESPI images were found to be superior to SPI-MPA for equal total acquisition times; this was explained based on NMR relaxation measurements. SESPI was also found to produce suitable images for a range of particle sizes, whereas SPI-MPA SNR deteriorated markedly as particle size was reduced. Comparisons on a 4.7 T magnet showed significant signal loss from the SPI-MPA images, the effect of which was accentuated in the case of unsaturated flowing systems. Hence it was concluded that SESPI was the most robust imaging method for the study of copper ore heap leaching hydrology.

  7. Kinetic Analysis of Benign and Malignant Breast Lesions With Ultrafast Dynamic Contrast-Enhanced MRI: Comparison With Standard Kinetic Assessment.

    PubMed

    Abe, Hiroyuki; Mori, Naoko; Tsuchiya, Keiko; Schacht, David V; Pineda, Federico D; Jiang, Yulei; Karczmar, Gregory S

    2016-11-01

    The purposes of this study were to evaluate diagnostic parameters measured with ultrafast MRI acquisition and with standard acquisition and to compare diagnostic utility for differentiating benign from malignant lesions. Ultrafast acquisition is a high-temporal-resolution (7 seconds) imaging technique for obtaining 3D whole-breast images. The dynamic contrast-enhanced 3-T MRI protocol consists of an unenhanced standard and an ultrafast acquisition that includes eight contrast-enhanced ultrafast images and four standard images. Retrospective assessment was performed for 60 patients with 33 malignant and 29 benign lesions. A computer-aided detection system was used to obtain initial enhancement rate and signal enhancement ratio (SER) by means of identification of a voxel showing the highest signal intensity in the first phase of standard imaging. From the same voxel, the enhancement rate at each time point of the ultrafast acquisition and the AUC of the kinetic curve from zero to each time point of ultrafast imaging were obtained. There was a statistically significant difference between benign and malignant lesions in enhancement rate and kinetic AUC for ultrafast imaging and also in initial enhancement rate and SER for standard imaging. ROC analysis showed no significant differences between enhancement rate in ultrafast imaging and SER or initial enhancement rate in standard imaging. Ultrafast imaging is useful for discriminating benign from malignant lesions. The differential utility of ultrafast imaging is comparable to that of standard kinetic assessment in a shorter study time.
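The kinetic quantities discussed (initial enhancement rate, signal enhancement ratio, kinetic AUC) can all be computed from a single voxel's signal-time curve. The definitions below are common DCE-MRI conventions assumed for illustration, not necessarily the exact formulas of the CAD system used in the study:

```python
import numpy as np

def kinetic_features(signal, times):
    """Simple kinetic descriptors for one voxel's signal-time curve.
    Assumed definitions:
      initial enhancement rate = (S_first_post - S0) / (S0 * dt_first)
      SER = (S_first_post - S0) / (S_last - S0)
      kinetic AUC = trapezoidal area of (S - S0) over the acquisition."""
    signal = np.asarray(signal, float)
    times = np.asarray(times, float)
    s0, s1, slast = signal[0], signal[1], signal[-1]
    rate = (s1 - s0) / (s0 * (times[1] - times[0]))
    ser = (s1 - s0) / (slast - s0)
    d = signal - s0
    auc = np.sum((d[1:] + d[:-1]) / 2.0 * np.diff(times))  # trapezoid rule
    return rate, ser, auc
```

With the 7-second ultrafast sampling described in the abstract, `times` would be the ultrafast time points and the AUC could be truncated at each time point to reproduce the per-time-point analysis.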

  8. Grayscale image segmentation for real-time traffic sign recognition: the hardware point of view

    NASA Astrophysics Data System (ADS)

    Cao, Tam P.; Deng, Guang; Elton, Darrell

    2009-02-01

    In this paper, we study several grayscale-based image segmentation methods for real-time road sign recognition applications on an FPGA hardware platform. The performance of different image segmentation algorithms under different lighting conditions is initially compared using PC simulation. Based on these results and analysis, suitable algorithms are implemented and tested on a real-time FPGA speed sign detection system. Experimental results show that the system using segmented images uses significantly fewer hardware resources on an FPGA while maintaining comparable system performance. The system is capable of processing 60 live video frames per second.
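As one example of the kind of grayscale segmentation method such a comparison might include, Otsu's global threshold is easy to state: pick the threshold that maximizes between-class variance. The paper evaluates several algorithms; this sketch shows only one plausible candidate, not necessarily one of theirs:

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: the threshold maximizing between-class variance
    of an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()   # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * p[:t]).sum() / w0        # class means
        m1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2      # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

def segment(img):
    """Binary segmentation of an 8-bit grayscale image."""
    return (img >= otsu_threshold(img)).astype(np.uint8)
```

Thresholding of this kind maps naturally onto FPGA pipelines, since the histogram and comparison stages need only counters and comparators per pixel.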

  9. Advanced sensor-simulation capability

    NASA Astrophysics Data System (ADS)

    Cota, Stephen A.; Kalman, Linda S.; Keller, Robert A.

    1990-09-01

    This paper provides an overview of an advanced simulation capability currently in use for analyzing visible and infrared sensor systems. The software system, called VISTAS (VISIBLE/INFRARED SENSOR TRADES, ANALYSES, AND SIMULATIONS), combines classical image processing techniques with detailed sensor models to produce static and time-dependent simulations of a variety of sensor systems including imaging, tracking, and point target detection systems. Systems modelled to date include space-based scanning line-array sensors as well as staring two-dimensional array sensors which can be used for either imaging or point source detection.

  10. Plane-Based Sampling for Ray Casting Algorithm in Sequential Medical Images

    PubMed Central

    Lin, Lili; Chen, Shengyong; Shao, Yan; Gu, Zichun

    2013-01-01

    This paper proposes a plane-based sampling method to improve the traditional Ray Casting Algorithm (RCA) for fast reconstruction of a three-dimensional biomedical model from sequential images. In the novel method, the optical properties of all sampling points depend on the intersection points generated when a ray travels through an equidistant parallel plane cluster of the volume dataset. The results show that the method improves rendering speed by over three times compared with the conventional algorithm, while image quality is well preserved. PMID:23424608
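The plane-based sampling idea, taking sample points where each ray intersects the equidistant parallel plane cluster rather than at fixed arc-length steps along the ray, can be sketched as a simple intersection computation (an illustrative geometry kernel, not the paper's full renderer):

```python
import numpy as np

def plane_sample_points(origin, direction, z_planes):
    """Sample a ray at its intersections with a stack of planes z = const
    (the 'equidistant parallel plane cluster'). Returns the (N, 3)
    intersection points in front of the ray origin."""
    origin = np.asarray(origin, float)
    direction = np.asarray(direction, float)
    if abs(direction[2]) < 1e-12:
        return np.empty((0, 3))          # ray parallel to the planes
    s = (np.asarray(z_planes, float) - origin[2]) / direction[2]
    s = s[s >= 0]                        # keep forward intersections only
    return origin + s[:, None] * direction
```

Because the planes coincide with the slice positions of a sequential image stack, each sample lands exactly where voxel data exist, which is what lets the method avoid per-sample trilinear resampling.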

  11. Quantitative evaluation of multi-parametric MR imaging marker changes post-laser interstitial ablation therapy (LITT) for epilepsy

    NASA Astrophysics Data System (ADS)

    Tiwari, Pallavi; Danish, Shabbar; Wong, Stephen; Madabhushi, Anant

    2013-03-01

    Laser-induced interstitial thermal therapy (LITT) has recently emerged as a new, less invasive alternative to craniotomy for treating epilepsy, which allows for focused delivery of laser energy monitored in real time by MRI, for precise removal of the epileptogenic foci. Despite being minimally invasive, the effects of laser ablation on the epileptogenic foci (reflected by changes in MR imaging markers post-LITT) are currently unknown. In this work, we present a quantitative framework for evaluating LITT-related changes by quantifying per-voxel changes in MR imaging markers, which may be more reflective of local treatment-related changes (TRC) that occur post-LITT, as compared to the standard volumetric analysis, which involves monitoring a more global volume change across pre- and post-LITT MRI. Our framework focuses on three objectives: (a) development of temporal MRI signatures that characterize TRC corresponding to patients with seizure freedom by comparing differences in MR imaging markers and monitoring them over time, (b) identification of the optimal time point when early LITT-induced effects (such as edema and mass effect) subside by monitoring TRC at subsequent time-points post-LITT, and (c) identification of contributions of individual MRI protocols towards characterizing LITT TRC for epilepsy by identifying MR markers that change most dramatically over time, and employing individual contributions to create a more optimal weighted MP-MRI temporal profile that can better characterize TRC compared to any individual imaging marker. A cohort of patients was monitored at different time points post-LITT via MP-MRI involving T1-w, T2-w, T2-GRE, T2-FLAIR, and apparent diffusion coefficient (ADC) protocols. 
Post affine registration of individual MRI protocols to a reference MRI protocol pre-LITT, differences in individual MR markers are computed on a per-voxel basis at different time-points with respect to baseline (pre-LITT) MRI as well as across subsequent time-points. A time-dependent MRI profile corresponding to successful (seizure-free) outcomes is then created that captures changes in individual MR imaging markers over time. Our preliminary analysis of two patient studies suggests that (a) LITT-related changes (attributed to swelling and edema) appear to subside within 4 weeks post-LITT, (b) ADC may be more sensitive for evaluating early TRC (up to 3 months), and T1-w may be more sensitive in evaluating early delayed TRC (1 month, 3 months), while T2-w and T2-FLAIR appeared to be more sensitive in identifying late TRC (around 6 months post-LITT) compared to the other MRI protocols under evaluation. T2-GRE was found to be only nominally sensitive in identifying TRC at any follow-up time-point post-LITT. The framework presented in this work thus serves as an important precursor to a comprehensive treatment evaluation framework that can be used to identify sensitive MR markers corresponding to patient response (seizure freedom or seizure recurrence), with the ultimate objective of making prognostic predictions about patient outcome post-LITT.

  12. Automatic image fusion of real-time ultrasound with computed tomography images: a prospective comparison between two auto-registration methods.

    PubMed

    Cha, Dong Ik; Lee, Min Woo; Kim, Ah Yeong; Kang, Tae Wook; Oh, Young-Taek; Jeong, Ja-Yeon; Chang, Jung-Woo; Ryu, Jiwon; Lee, Kyong Joon; Kim, Jaeil; Bang, Won-Chul; Shin, Dong Kuk; Choi, Sung Jin; Koh, Dalkwon; Seo, Bong Koo; Kim, Kyunga

    2017-11-01

    Background A major drawback of conventional manual image fusion is that the process may be complex, especially for less-experienced operators. Recently, two automatic image fusion techniques called Positioning and Sweeping auto-registration have been developed. Purpose To compare the accuracy and required time for image fusion of real-time ultrasonography (US) and computed tomography (CT) images between Positioning and Sweeping auto-registration. Material and Methods Eighteen consecutive patients referred for planning US for radiofrequency ablation or biopsy for focal hepatic lesions were enrolled. Image fusion using both auto-registration methods was performed for each patient. Registration error, time required for image fusion, and number of point locks used were compared using the Wilcoxon signed rank test. Results Image fusion was successful in all patients. Positioning auto-registration was significantly faster than Sweeping auto-registration for both initial (median, 11 s [range, 3-16 s] vs. 32 s [range, 21-38 s]; P < 0.001) and complete (median, 34.0 s [range, 26-66 s] vs. 47.5 s [range, 32-90 s]; P = 0.001) image fusion. Registration error of Positioning auto-registration was significantly higher for initial image fusion (median, 38.8 mm [range, 16.0-84.6 mm] vs. 18.2 mm [range, 6.7-73.4 mm]; P = 0.029), but not for complete image fusion (median, 4.75 mm [range, 1.7-9.9 mm] vs. 5.8 mm [range, 2.0-13.0 mm]; P = 0.338). Number of point locks required to refine the initially fused images was significantly higher with Positioning auto-registration (median, 2 [range, 2-3] vs. 1 [range, 1-2]; P = 0.012). Conclusion Positioning auto-registration offers faster image fusion between real-time US and pre-procedural CT images than Sweeping auto-registration. The final registration error is similar between the two methods.

  13. An automated 3D reconstruction method of UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper a novel fully automated 3D reconstruction approach based on low-altitude unmanned aerial vehicle (UAV) images is presented, which does not require previous camera calibration or any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced based on the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction of low-altitude UAV images, and has great potential for the acquisition of spatial information for large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.
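The image-topology strategy, using flight-control positions to restrict feature matching to nearby image pairs, can be sketched as a simple spatial filter over the quadratic number of pair combinations (an illustrative reimplementation; the distance criterion is an assumption on our part):

```python
import numpy as np

def topology_pairs(positions, max_dist):
    """Build an image topology map from flight-control camera positions:
    only images taken within `max_dist` of each other become candidate
    pairs for feature matching, pruning the O(n^2) combinations."""
    positions = np.asarray(positions, float)
    pairs = []
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if np.linalg.norm(positions[i] - positions[j]) <= max_dist:
                pairs.append((i, j))
    return pairs
```

For thousands of images a k-d tree would replace the double loop, but the effect is the same: the expensive SfM feature-matching stage only runs on pairs that can plausibly overlap.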

  14. Sonar Imaging of Elastic Fluid-Filled Cylindrical Shells.

    NASA Astrophysics Data System (ADS)

    Dodd, Stirling Scott

    1995-01-01

    Previously a method of describing spherical acoustic waves in cylindrical coordinates was applied to the problem of point source scattering by an elastic infinite fluid-filled cylindrical shell (S. Dodd and C. Loeffler, J. Acoust. Soc. Am. 97, 3284(A) (1995)). This method is applied to numerically model monostatic oblique incidence scattering from a truncated cylinder by a narrow-beam high-frequency imaging sonar. The narrow beam solution results from integrating the point source solution over the spatial extent of a line source and line receiver. The cylinder truncation is treated by the method of images, and assumes that the reflection coefficient at the truncation is unity. The scattering form functions, calculated using this method, are applied as filters to a narrow bandwidth, high ka pulse to find the time domain scattering response. The time domain pulses are further processed and displayed in the form of a sonar image. These images compare favorably to experimentally obtained images (G. Kaduchak and C. Loeffler, J. Acoust. Soc. Am. 97, 3289(A) (1995)). The impact of the s₀ and a₀ Lamb waves is vividly apparent in the images.

  15. Performance Characteristics of a New LSO PET/CT Scanner With Extended Axial Field-of-View and PSF Reconstruction

    NASA Astrophysics Data System (ADS)

    Jakoby, Bjoern W.; Bercier, Yanic; Watson, Charles C.; Bendriem, Bernard; Townsend, David W.

    2009-06-01

    A new combined lutetium oxyorthosilicate (LSO) PET/CT scanner with an extended axial field-of-view (FOV) of 21.8 cm has been developed (Biograph TruePoint PET/CT with TrueV; Siemens Molecular Imaging) and introduced into clinical practice. The scanner includes the recently announced point spread function (PSF) reconstruction algorithm. The PET components incorporate four rings of 48 detector blocks, 5.4 cm × 5.4 cm in cross-section. Each block comprises a 13 × 13 matrix of 4 × 4 × 20 mm³ elements. Data are acquired with a 4.5 ns coincidence time window and an energy window of 425-650 keV. The physical performance of the new scanner has been evaluated according to the recently revised National Electrical Manufacturers Association (NEMA) NU 2-2007 standard and the results have been compared with a previous PET/CT design that incorporates three rings of block detectors with an axial coverage of 16.2 cm (Biograph TruePoint PET/CT; Siemens Molecular Imaging). In addition to the phantom measurements, patient Noise Equivalent Count Rates (NECRs) have been estimated for a range of patients with different body weights (42-154 kg). The average spatial resolution is the same for both scanners: 4.4 mm (FWHM) and 5.0 mm (FWHM) at 1 cm and 10 cm respectively from the center of the transverse FOV. The scatter fractions of the Biograph TruePoint and Biograph TruePoint TrueV are comparable at 32%. Compared to the three ring design, the system sensitivity and peak NECR with smoothed randoms correction (1R) increase by 82% and 73%, respectively. The increase in sensitivity from the extended axial coverage of the Biograph TruePoint PET/CT with TrueV should allow a decrease in either scan time or injected dose without compromising diagnostic image quality. The contrast improvement with the PSF reconstruction potentially offers enhanced detectability for small lesions.

  16. Preclinical evaluation of spatial frequency domain-enabled wide-field quantitative imaging for enhanced glioma resection

    NASA Astrophysics Data System (ADS)

    Sibai, Mira; Fisher, Carl; Veilleux, Israel; Elliott, Jonathan T.; Leblond, Frederic; Roberts, David W.; Wilson, Brian C.

    2017-07-01

    5-Aminolevulinic acid-induced protoporphyrin IX (PpIX) fluorescence-guided resection (FGR) enables maximum safe resection of glioma by providing real-time tumor contrast. However, the subjective visual assessment and the variable intrinsic optical attenuation of tissue limit this technique to reliably delineating only high-grade tumors that display strong fluorescence. We have previously shown, using a fiber-optic probe, that quantitative assessment using noninvasive point spectroscopic measurements of the absolute PpIX concentration in tissue further improves the accuracy of FGR, extending it to surgically curable low-grade glioma. More recently, we have shown that implementing spatial frequency domain imaging with a fluorescent-light transport model enables recovery of two-dimensional images of [PpIX], alleviating the need for time-consuming point sampling of the brain surface. We present first results of this technique modified for in vivo imaging on an RG2 rat brain tumor model. Despite moderate errors of 14% and 19%, respectively, in retrieving the absorption and reduced scattering coefficients in the subdiffusive regime, the recovered [PpIX] maps agree within 10% of the point [PpIX] values measured by the fiber-optic probe, validating the technique's potential as an extension of, or an alternative to, point sampling during glioma resection.

  17. Adaptive Localization of Focus Point Regions via Random Patch Probabilistic Density from Whole-Slide, Ki-67-Stained Brain Tumor Tissue

    PubMed Central

    Alomari, Yazan M.; MdZin, Reena Rahayu

    2015-01-01

    Analysis of whole-slide tissue for digital pathology images has been clinically approved to provide a second opinion to pathologists. Localization of focus points from Ki-67-stained histopathology whole-slide tissue microscopic images is considered the first step in the process of proliferation rate estimation. Pathologists use eye-pooling or eagle-view techniques to localize the highly stained, cell-concentrated regions of the whole slide under the microscope; these are called focus-point regions. This procedure leads to high interobserver variability, is time-consuming and tedious, and can cause inaccurate findings. The localization of focus-point regions can be addressed as a clustering problem. This paper aims to automate the localization of focus-point regions from whole-slide images using the random patch probabilistic density (RPPD) method. Unlike other clustering methods, the RPPD method can adaptively localize focus-point regions without predetermining the number of clusters. The proposed method was compared with the k-means and fuzzy c-means clustering methods. The proposed method achieved good performance when the results were evaluated by three expert pathologists, with an average false-positive rate of 0.84% for the focus-point region localization error. Moreover, when RPPD was used to localize tissue from whole-slide images, 228 whole-slide images were tested and 97.3% localization accuracy was achieved. PMID:25793010

  18. GCaMP expression in retinal ganglion cells characterized using a low-cost fundus imaging system

    NASA Astrophysics Data System (ADS)

    Chang, Yao-Chuan; Walston, Steven T.; Chow, Robert H.; Weiland, James D.

    2017-10-01

    Objective. Virus-transduced, intracellular-calcium indicators are effective reporters of neural activity, offering the advantage of cell-specific labeling. Because there is an optimal time window for the expression of calcium indicators, a suitable tool for tracking genetically encoded calcium indicator (GECI) expression in vivo following transduction is highly desirable. Approach. We developed a noninvasive imaging approach based on a custom-modified, low-cost fundus viewing system that allowed us to monitor and characterize in vivo bright-field and fluorescence images of the mouse retina. AAV2-CAG-GCaMP6f was injected into a mouse eye. The fundus imaging system was used to measure fluorescence at several time points post injection. At defined time points, we prepared wholemount retina mounted on a transparent multielectrode array and used calcium imaging to evaluate the responsiveness of retinal ganglion cells (RGCs) to external electrical stimulation. Main results. The noninvasive fundus imaging system clearly resolves individual RGCs and axons. RGC fluorescence intensity and the number of observable fluorescent cells show a similar rising trend from week 1 to week 3 after viral injection, indicating a consistent increase in GCaMP6f expression. Analysis of the in vivo fluorescence intensity trend and in vitro neurophysiological responsiveness shows that the slope of intensity versus days post injection can be used to estimate the optimal time for calcium imaging of RGCs in response to external electrical stimulation. Significance. The proposed fundus imaging system enables high-resolution digital fundus imaging in the mouse eye, based on off-the-shelf components. The long-term tracking experiment with in vitro calcium imaging validation demonstrates that the system can serve as a powerful tool for monitoring the level of GECI expression and for determining the optimal time window for subsequent experiments.

  19. Image monitoring of pharmaceutical blending processes and the determination of an end point by using a portable near-infrared imaging device based on a polychromator-type near-infrared spectrometer with a high-speed and high-resolution photo diode array detector.

    PubMed

    Murayama, Kodai; Ishikawa, Daitaro; Genkawa, Takuma; Sugino, Hiroyuki; Komiyama, Makoto; Ozaki, Yukihiro

    2015-03-03

    In the present study we have developed a new version (ND-NIRs) of a polychromator-type near-infrared (NIR) spectrometer with a high-resolution photodiode array detector, which we developed previously (D-NIRs). The new version has four 5 W halogen lamps, compared with three for the older version, and a condenser lens with a shorter focal length. The increase in the number of lamps and the shortening of the focal length of the condenser lens yield a high signal-to-noise ratio and high-speed NIR imaging measurement. Using the ND-NIRs, we carried out in-line monitoring of pharmaceutical blending and determined the end point of the blending process. Moreover, to determine a more accurate end point, an NIR image of the blending sample was acquired by means of a portable NIR imaging device based on the ND-NIRs. The imaging results demonstrated that a mixing time of 8 min is sufficient for homogeneous mixing. In this way, the present study has demonstrated that the ND-NIRs and the imaging system based on it hold considerable promise for process analysis.

  20. Landscape Change Detected Over A 60 Year Period In The Arctic National Wildlife Refuge, Alaska, Using High Resolution Aerial Photographs And Satellite Images

    NASA Astrophysics Data System (ADS)

    Jorgenson, J. C.; Jorgenson, M. T.; Boldenow, M.; Orndahl, K. M.

    2016-12-01

    We documented landscape change over a 60 year period in the Arctic National Wildlife Refuge in northeastern Alaska using aerial photographs and satellite images. We used a stratified random sample to allow inference to the whole refuge (78,050 km²), with five random sites in each of seven ecoregions. Each site (2 km²) had a systematic grid of 100 points, for a total of 3500 points. We chose study sites in the overlap area covered by acceptable imagery in three time periods: aerial photographs from 1947-1955 and 1978-1988, and QuickBird and IKONOS satellite images from 2000-2007. At each point, a 10 meter radius circle was visually evaluated in ArcMap for each time period for vegetation type, disturbance, presence of ice-wedge polygon microtopography, and surface water. A landscape change category was assigned to each point based on differences detected between the three periods. Change types were assigned for time interval 1, interval 2, and overall. Additional explanatory variables included elevation, slope, aspect, geology, physiography, and temperature. Overall, 23% of points changed over the study period. Fire was the most common change agent, affecting 28% of the Boreal Forest points. The next most common change was degradation of soil ice wedges (thermokarst), detected at 12% of the points on the North Slope Tundra. Other common changes included an increase in cover of trees or shrubs (7% of Boreal Forest and Brooks Range points) and erosion or deposition on river floodplains and at the Beaufort Sea coast. Changes on the North Slope Tundra tended to be related to landscape wetting, mainly thermokarst. Changes in the Boreal Forest tended to involve landscape drying, including fire, reduced area of lakes, and tree increase on wet sites. The second time interval coincided with a shift towards a warmer climate and had greater change in several categories, including thermokarst, lake changes, and tree and shrub increase.

  1. Imaging systems and algorithms to analyze biological samples in real-time using mobile phone microscopy.

    PubMed

    Shanmugam, Akshaya; Usmani, Mohammad; Mayberry, Addison; Perkins, David L; Holcomb, Daniel E

    2018-01-01

    Miniaturized imaging devices have pushed the boundaries of point-of-care imaging, but existing mobile-phone-based imaging systems do not exploit the full potential of smart phones. This work demonstrates the use of simple imaging configurations to deliver superior image quality and the ability to handle a wide range of biological samples. Results presented in this work are from analysis of fluorescent beads under fluorescence imaging, as well as helminth eggs and freshwater mussel larvae under white-light imaging. To demonstrate the versatility of the systems, real-time analysis and post-processing results for sample count and sample size are presented in both still images and videos of flowing samples.

  2. Reproducibility of dynamically represented acoustic lung images from healthy individuals

    PubMed Central

    Maher, T M; Gat, M; Allen, D; Devaraj, A; Wells, A U; Geddes, D M

    2008-01-01

    Background and aim: Acoustic lung imaging offers a unique method for visualising the lung. This study was designed to demonstrate reproducibility of acoustic lung images recorded from healthy individuals at different time points and to assess intra- and inter-rater agreement in the assessment of dynamically represented acoustic lung images. Methods: Recordings from 29 healthy volunteers were made on three separate occasions using vibration response imaging. Reproducibility was measured using quantitative, computerised assessment of vibration energy. Dynamically represented acoustic lung images were scored by six blinded raters. Results: Quantitative measurement of acoustic recordings was highly reproducible with an intraclass correlation score of 0.86 (very good agreement). Intraclass correlations for inter-rater agreement and reproducibility were 0.61 (good agreement) and 0.86 (very good agreement), respectively. There was no significant difference found between the six raters at any time point. Raters ranged from 88% to 95% in their ability to identically evaluate the different features of the same image presented to them blinded on two separate occasions. Conclusion: Acoustic lung imaging is reproducible in healthy individuals. Graphic representation of lung images can be interpreted with a high degree of accuracy by the same and by different reviewers. PMID:18024534

  3. Incremental Multi-view 3D Reconstruction Starting from Two Images Taken by a Stereo Pair of Cameras

    NASA Astrophysics Data System (ADS)

    El hazzat, Soulaiman; Saaidi, Abderrahim; Karam, Antoine; Satori, Khalid

    2015-03-01

    In this paper, we present a new method for multi-view 3D reconstruction based on the use of a binocular stereo vision system, constituted of two unattached cameras, to initialize the reconstruction process. Afterwards, the second camera of the stereo vision system (characterized by varying parameters) moves to capture more images at different times, which are used to obtain an almost complete 3D reconstruction. The first two projection matrices are estimated by using a 3D pattern with known properties. After that, 3D scene points are recovered by triangulation of the matched interest points between these two images. The proposed approach is incremental. At each insertion of a new image, the camera projection matrix is estimated using the 3D information already calculated, and new 3D points are recovered by triangulation from the matching of interest points between the inserted image and the previous one. For the refinement of the new projection matrix and the new 3D points, a local bundle adjustment is performed. First, all projection matrices are estimated, matches between consecutive images are detected, and a sparse Euclidean 3D reconstruction is obtained. Then, to increase the number of matches and obtain a denser reconstruction, the match propagation algorithm, well suited to this type of camera movement, is applied to the pairs of consecutive images. The experimental results show the power and robustness of the proposed approach.
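    The triangulation step — recovering a 3D point from its matched projections in two views with known projection matrices — can be sketched with a linear (DLT) method; the cameras and the point below are synthetic, not the paper's setup:

```python
# Linear (DLT) triangulation of one 3D point from two views with known
# projection matrices, on a noiseless synthetic example.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Solve A X = 0, where each image point contributes two rows."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null vector = homogeneous 3D point
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two synthetic cameras: identity pose, and a 1-unit baseline along x.
K = np.diag([800.0, 800.0, 1.0])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 4.0])
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.round(X_hat, 6))
```

    With noisy matches, the same linear solve is typically followed by the nonlinear refinement (bundle adjustment) the abstract describes.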

  4. A Method of Time-Series Change Detection Using Full PolSAR Images from Different Sensors

    NASA Astrophysics Data System (ADS)

    Liu, W.; Yang, J.; Zhao, J.; Shi, H.; Yang, L.

    2018-04-01

    Most of the existing change detection methods using full polarimetric synthetic aperture radar (PolSAR) are limited to detecting change between two points in time. In this paper, a novel method is proposed to detect change based on time-series data from different sensors. Firstly, the overall difference image of the time-series PolSAR data is calculated by an omnibus statistic test. Secondly, difference images between any two acquisition times are obtained by the Rj statistic test. In the last step, a generalized Gaussian mixture model (GGMM) is used to obtain the time-series change detection maps. To verify the effectiveness of the proposed method, we carried out a change detection experiment using time-series PolSAR images acquired by Radarsat-2 and Gaofen-3 over the city of Wuhan, China. Results show that the proposed method can detect time-series change from different sensors.
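    The final thresholding step can be illustrated with a standard two-component Gaussian mixture standing in for the paper's generalized Gaussian mixture model; the difference values below are synthetic, not SAR data:

```python
# Stand-in for the GGMM step: cluster difference-image values into
# "change" / "no change" with a two-component Gaussian mixture.
# (A standard GMM is used here instead of a generalized GMM.)
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
no_change = rng.normal(0.2, 0.05, 900)    # synthetic low difference values
change = rng.normal(0.8, 0.10, 100)       # synthetic high difference values
diff = np.concatenate([no_change, change]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(diff)
labels = gmm.predict(diff)
# The component with the larger mean corresponds to "change".
change_label = int(np.argmax(gmm.means_.ravel()))
change_mask = labels == change_label
print(f"{int(change_mask.sum())} pixels flagged as change")
```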

  5. Nonuniform multiview color texture mapping of image sequence and three-dimensional model for faded cultural relics with SIFT feature points

    NASA Astrophysics Data System (ADS)

    Li, Na; Gong, Xingyu; Li, Hongan; Jia, Pengtao

    2018-01-01

    For faded relics such as the Terracotta Army, 2D-3D registration between an optical camera and a point cloud model is an important part of color texture reconstruction and further applications. This paper proposes a nonuniform multiview color texture mapping for the image sequence and the three-dimensional (3D) point cloud model collected by Handyscan3D. We first introduce nonuniform multiview calibration, including an explanation of its algorithm principle and an analysis of its advantages. We then establish transformation equations based on SIFT feature points for the multiview image sequence. At the same time, the selection of nonuniform multiview SIFT feature points is introduced in detail. Finally, the solving process of the collinear equations based on multiview perspective projection is given in three steps with a flowchart. In the experiment, this method is applied to the color reconstruction of the kneeling figurine, Tangsancai lady, and general figurine. These results demonstrate that the proposed method provides effective support for the color reconstruction of faded cultural relics and is able to improve the accuracy of 2D-3D registration between the image sequence and the point cloud model.

  6. Circular Data Images for Directional Data

    NASA Technical Reports Server (NTRS)

    Morpet, William J.

    2004-01-01

    Directional data include vectors, points on a unit sphere, axis orientations, angular directions, and circular or periodic data. The theoretical statistics for circular data (random points on a unit circle) or spherical data (random points on a unit sphere) are a recent development. An overview of existing graphical methods for the display of directional data is given. Cross-over occurs when periodic data are measured on a scale intended for linear variables. For example, if angle is represented by a linear color gradient changing uniformly from dark blue at -180 degrees to bright red at +180 degrees, the color image will be discontinuous at +180 degrees and -180 degrees, which are the same location. The resultant color would depend on the direction of approach to the cross-over point. A new graphical method for imaging directional data is described which affords high resolution without color discontinuity from cross-over. It is called the circular data image. The circular data image uses a circular color scale in which colors repeat periodically. Examples of the circular data image include the direction of earth winds on a global scale, rocket motor internal flow, the earth's global magnetic field direction, and rocket motor nozzle vector direction vs. time.
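    The circular color scale can be sketched by mapping angle to a periodic hue, so that -180° and +180° receive the same color and the cross-over discontinuity disappears; a minimal example using the standard-library HSV conversion:

```python
# Circular colour scale sketch: map an angle (degrees) to a hue that
# wraps with period 360, so -180 deg and +180 deg get identical colours.
import colorsys

def angle_to_rgb(angle_deg):
    """Periodic hue mapping; full saturation and value for clarity."""
    hue = (angle_deg % 360.0) / 360.0   # wraps: -180 and +180 coincide
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)

# The seam is continuous: both ends of the angular range map to one colour.
print(angle_to_rgb(-180.0) == angle_to_rgb(180.0))
```

    A linear colormap applied to the same angles would jump abruptly at the ±180° seam, which is exactly the cross-over artifact the abstract describes.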

  7. Image formation of volume holographic microscopy using point spread functions

    NASA Astrophysics Data System (ADS)

    Luo, Yuan; Oh, Se Baek; Kou, Shan Shan; Lee, Justin; Sheppard, Colin J. R.; Barbastathis, George

    2010-04-01

    We present a theoretical formulation to quantify the imaging properties of volume holographic microscopy (VHM). Volume holograms are formed by exposure of a photosensitive recording material to the interference of two mutually coherent optical fields. Recently, it has been shown that a volume holographic pupil has spatial and spectral sectioning capability for fluorescent samples. Here, we analyze the point spread function (PSF) to assess the imaging behavior of the VHM with a point source and detector. The coherent PSF of the VHM is derived, and the results are compared with those from conventional microscopy, and confocal microscopy with point and slit apertures. According to our analysis, the PSF of the VHM can be controlled in the lateral direction by adjusting the parameters of the VH. Compared with confocal microscopes, the performance of the VHM is comparable or even potentially better, and the VHM is also able to achieve real-time and three-dimensional (3D) imaging due to its multiplexing ability.

  8. Retinal biometrics based on Iterative Closest Point algorithm.

    PubMed

    Hatanaka, Yuji; Tajima, Mikiya; Kawasaki, Ryo; Saito, Koko; Ogohara, Kazunori; Muramatsu, Chisako; Sunayama, Wataru; Fujita, Hiroshi

    2017-07-01

    The pattern of blood vessels in the eye is unique to each person and rarely changes over time. Therefore, it is well known that retinal blood vessels are useful for biometrics. This paper describes a biometrics method using the Jaccard similarity coefficient (JSC) based on blood vessel regions in retinal image pairs. The retinal image pairs were roughly matched by the centers of their optic discs. The image pairs were then aligned using the Iterative Closest Point algorithm based on detailed blood vessel skeletons. For registration, a perspective transform was applied to the retinal images. Finally, the pairs were classified as either correct or incorrect using the JSC of the blood vessel regions in the image pairs. The proposed method was applied to temporal retinal images, which were obtained in 2009 (695 images) and 2013 (87 images). The 87 images acquired in 2013 were all from persons already examined in 2009. The accuracy of the proposed method reached 100%.
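    The final scoring step — the Jaccard similarity coefficient of the aligned vessel regions — is simple to state; a toy sketch on small boolean masks (real inputs would be the aligned 2-D vessel segmentations):

```python
# Jaccard similarity coefficient (JSC) of two binary vessel masks,
# the matching score used after ICP alignment. Toy 1-D masks here.
import numpy as np

def jaccard(mask_a, mask_b):
    """|A intersect B| / |A union B| for boolean arrays."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 1.0

a = np.array([1, 1, 0, 1, 0, 0], dtype=bool)
b = np.array([1, 0, 0, 1, 1, 0], dtype=bool)
print(jaccard(a, b))   # 2 overlapping pixels of 4 in the union
```

    A pair would be accepted as the same person when the JSC exceeds a decision threshold.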

  9. Tissue feature-based intra-fractional motion tracking for stereoscopic x-ray image guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Xie, Yaoqin; Xing, Lei; Gu, Jia; Liu, Wu

    2013-06-01

    Real-time knowledge of tumor position during radiation therapy is essential to overcome the adverse effect of intra-fractional organ motion. The goal of this work is to develop a tumor tracking strategy by effectively utilizing the inherent image features of stereoscopic x-ray images acquired during dose delivery. In stereoscopic x-ray image guided radiation delivery, two orthogonal x-ray images are acquired either simultaneously or sequentially. The essence of markerless tumor tracking is the reliable identification of inherent points with distinct tissue features on each projection image and their association between the two images. The identification of the feature points on a planar x-ray image is realized by searching for points with a high intensity gradient. The feature points are associated by using the scale-invariant feature transform (SIFT) descriptor. The performance of the proposed technique is evaluated by using images of a motion phantom and four archived clinical cases acquired using either a CyberKnife equipped with a stereoscopic x-ray imaging system, or a LINAC equipped with an onboard kV imager and an electronic portal imaging device. In the phantom study, the results obtained using the proposed method agree with the measurements to within 2 mm in all three directions. In the clinical study, the mean error is 0.48 ± 0.46 mm for four patient datasets with 144 sequential images. In this work, a tissue feature-based tracking method for stereoscopic x-ray image guided radiation therapy is developed. The technique avoids the invasive procedure of fiducial implantation and may greatly facilitate the clinical workflow.
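    The first step — selecting candidate feature points where the intensity gradient is high — can be sketched on a synthetic image; the quantile threshold is an illustrative choice, not the authors' value:

```python
# Sketch of high-gradient feature-point selection on a synthetic image:
# a bright disc on a dark background, whose boundary pixels should be
# the points with the largest intensity gradient.
import numpy as np

yy, xx = np.mgrid[0:64, 0:64]
img = ((xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2).astype(float)

gy, gx = np.gradient(img)            # central-difference image gradients
grad_mag = np.hypot(gx, gy)

# Keep the strongest 1% of gradient responses as candidate feature points.
thresh = np.quantile(grad_mag, 0.99)
points = np.argwhere(grad_mag >= thresh)   # (row, col) candidates
print(len(points), "candidate feature points near the disc boundary")
```

    In the paper's pipeline, such candidates on each projection would then be associated across the two stereoscopic views via their SIFT descriptors.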

  10. A Possible Approach to Inclusion of Space and Time in Frame Fields of Quantum Representations of Real and Complex Numbers

    DOE PAGES

    Benioff, Paul

    2009-01-01

    This work is based on the field of reference frames based on quantum representations of real and complex numbers described in other work. Here, frame domains are expanded to include space and time lattices. Strings of qukits are described as hybrid systems, as they are both mathematical and physical systems. As mathematical systems they represent numbers. As physical systems, in each frame the strings have a discrete Schrodinger dynamics on the lattices. The frame field has an iterative structure such that the contents of a stage j frame have images in a stage j - 1 (parent) frame. A discussion of parent frame images includes the proposal that points of stage j frame lattices have images as hybrid systems in parent frames. The resulting association of energy with images of lattice point locations, as hybrid system states, is discussed. Representations and images of other physical systems in the different frames are also described.

  11. Welding deviation detection algorithm based on extremum of molten pool image contour

    NASA Astrophysics Data System (ADS)

    Zou, Yong; Jiang, Lipei; Li, Yunhua; Xue, Long; Huang, Junfen; Huang, Jiqiang

    2016-01-01

    Welding deviation detection is the basis of robotic tracking welding, but on-line real-time measurement of welding deviation is still not well solved by existing methods. Gas metal arc welding (GMAW) molten pool images contain plenty of information that is very important for the control of welding seam tracking. The physical meaning of the curvature extrema of the molten pool contour is revealed by studying the molten pool images: the centers of the welding wire and of the molten tip correspond to the maximum and a local maximum of the contour curvature, and the horizontal welding deviation is the difference in position between these two extremum points. A new method of weld deviation detection is presented, comprising the preprocessing of molten pool images, extraction and segmentation of the contours, location of the contour extremum points, and calculation of the welding deviation. Extracting the contours is the premise, segmenting the contour lines is the foundation, and locating the contour extremum points is the key. The contour images can be extracted with the discrete dyadic wavelet transform and divided into two sub-contours, for the welding wire and the molten tip separately. The curvature at each point of the two sub-contour lines is calculated with an approximate multi-point curvature formula for plane curves, and the two curvature extremum points are the features needed for the welding deviation calculation. The results of the tests and analyses show that the maximum error of the obtained on-line welding deviation is 2 pixels (0.16 mm), and the algorithm is stable enough to meet the real-time control requirements of the pipeline at speeds below 500 mm/min. The method can be applied to on-line automatic welding deviation detection.
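    The central step — estimating curvature along a contour and taking its extremum as the feature location — can be sketched with a three-point curvature estimate; the V-shaped contour below is synthetic, standing in for a molten pool sub-contour:

```python
# Three-point discrete curvature along a 2-D polyline; the curvature
# extremum marks the feature point (here, the tip of a synthetic V shape).
import numpy as np

def discrete_curvature(pts):
    """Curvature estimate at the interior points of a 2-D polyline."""
    k = np.zeros(len(pts))
    for i in range(1, len(pts) - 1):
        a, b, c = pts[i - 1], pts[i], pts[i + 1]
        # curvature of the circumscribed circle: 4*area / product of sides
        area2 = abs((b[0] - a[0]) * (c[1] - a[1])
                    - (b[1] - a[1]) * (c[0] - a[0]))   # twice triangle area
        d = (np.linalg.norm(b - a) * np.linalg.norm(c - b)
             * np.linalg.norm(c - a))
        k[i] = 2.0 * area2 / d if d else 0.0
    return k

xs = np.linspace(-5, 5, 11)
contour = np.stack([xs, np.abs(xs)], axis=1)   # V shape, tip at index 5
k = discrete_curvature(contour)
print("extremum at index", int(np.argmax(k)))
```

    On real molten pool contours, this extremum search would be run separately on the welding-wire and molten-tip sub-contours, and the horizontal offset between the two extremum points gives the deviation.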

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, H; Chen, Z; Nath, R

    Purpose: kV fluoroscopic imaging combined with MV treatment beam imaging has been investigated for intrafractional motion monitoring and correction. It is, however, subject to additional kV imaging dose to normal tissue. To balance tracking accuracy and imaging dose, we previously proposed an adaptive imaging strategy to dynamically decide future imaging type and moments based on motion tracking uncertainty. kV imaging may be used continuously for maximal accuracy, or only when the position uncertainty (probability of exceeding a threshold) is high if a preset imaging dose limit is considered. In this work, we propose more accurate methods to estimate tracking uncertainty by analyzing acquired data in real time. Methods: We simulated the motion tracking process based on a previously developed imaging framework (MV + initial seconds of kV imaging) using real-time breathing data from 42 patients. Motion tracking errors for each time point were collected together with the time point's corresponding features, such as tumor motion speed and the 2D tracking error of previous time points. We tested three methods for error uncertainty estimation based on the features: conditional probability distribution, logistic regression modeling, and support vector machine (SVM) classification to detect errors exceeding a threshold. Results: For the conditional probability distribution, polynomial regressions on three features (previous tracking error, prediction quality, and cosine of the angle between the trajectory and the treatment beam) showed strong correlation with the variation (uncertainty) of the mean 3D tracking error and its standard deviation: R-square = 0.94 and 0.90, respectively. The logistic regression and SVM classification successfully identified about 95% of tracking errors exceeding a 2.5 mm threshold.
Conclusion: The proposed methods can reliably estimate the motion tracking uncertainty in real time, which can be used to guide adaptive additional imaging to confirm the tumor is within the margin, or to initiate motion compensation if it is out of the margin.
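    The classification approach can be illustrated with a logistic regression that flags time points whose tracking error likely exceeds the 2.5 mm threshold; this is an illustrative sketch on synthetic features, not the authors' model or data:

```python
# Illustrative sketch: logistic regression on two tracking features
# (previous error and tumor speed) predicting whether the current
# tracking error exceeds a threshold. All data below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 400
prev_error = rng.uniform(0, 4, n)     # mm, synthetic feature
speed = rng.uniform(0, 20, n)         # mm/s, synthetic feature
# Synthetic ground truth: exceedance driven by a noisy linear score.
exceeds = (0.6 * prev_error + 0.05 * speed
           + rng.normal(0, 0.2, n)) > 2.0

X = np.stack([prev_error, speed], axis=1)
clf = LogisticRegression().fit(X, exceeds)
print(f"training accuracy: {clf.score(X, exceeds):.2f}")
```

    In the adaptive strategy, a high predicted exceedance probability would trigger an additional kV acquisition at the next time point.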

  13. Correcting spacecraft jitter in HiRISE images

    USGS Publications Warehouse

    Sutton, S. S.; Boyd, A.K.; Kirk, Randolph L.; Cook, Debbie; Backer, Jean; Fennema, A.; Heyd, R.; McEwen, A.S.; Mirchandani, S.D.; Wu, B.; Di, K.; Oberst, J.; Karachevtseva, I.

    2017-01-01

    Mechanical oscillations or vibrations on spacecraft, also called pointing jitter, cause geometric distortions and/or smear in high resolution digital images acquired from orbit. Geometric distortion is especially a problem with pushbroom type sensors, such as the High Resolution Imaging Science Experiment (HiRISE) instrument on board the Mars Reconnaissance Orbiter (MRO). Geometric distortions occur at a range of frequencies that may not be obvious in the image products, but can cause problems with stereo image correlation in the production of digital elevation models, and in measuring surface changes over time in orthorectified images. The HiRISE focal plane comprises a staggered array of fourteen charge-coupled devices (CCDs) with pixel IFOV of 1 microradian. The high spatial resolution of HiRISE makes it both sensitive to, and an excellent recorder of jitter. We present an algorithm using Fourier analysis to resolve the jitter function for a HiRISE image that is then used to update instrument pointing information to remove geometric distortions from the image. Implementation of the jitter analysis and image correction is performed on selected HiRISE images. Resulting corrected images and updated pointing information are made available to the public. Results show marked reduction of geometric distortions. This work has applications to similar cameras operating now, and to the design of future instruments (such as the Europa Imaging System).

  14. SU-D-BRA-04: Computerized Framework for Marker-Less Localization of Anatomical Feature Points in Range Images Based On Differential Geometry Features for Image-Guided Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soufi, M; Arimura, H; Toyofuku, F

    Purpose: To propose a computerized framework for localization of anatomical feature points on the patient surface in infrared-ray-based range images by using differential geometry (curvature) features. Methods: The general concept was to reconstruct the patient surface by using a mathematical modeling technique for the computation of differential geometry features that characterize the local shapes of the patient surfaces. A region of interest (ROI) was first extracted based on a template matching technique applied to amplitude (grayscale) images. The extracted ROI was preprocessed to reduce temporal and spatial noise by using Kalman and bilateral filters, respectively. Next, a smooth patient surface was reconstructed by using a non-uniform rational basis spline (NURBS) model. Finally, differential geometry features, i.e., the shape index and curvedness, were computed for localizing the anatomical feature points. The proposed framework was trained to optimize the shape index and curvedness thresholds and tested on range images of an anthropomorphic head phantom. The range images were acquired by an infrared-ray-based time-of-flight (TOF) camera. The localization accuracy was evaluated by measuring the mean of minimum Euclidean distances (MMED) between reference (ground truth) points and the feature points localized by the proposed framework. The evaluation was performed for points localized on convex regions (e.g., apex of the nose) and concave regions (e.g., nasofacial sulcus). Results: The proposed framework localized anatomical feature points on convex and concave anatomical landmarks with MMEDs of 1.91±0.50 mm and 3.70±0.92 mm, respectively. A statistically significant difference was obtained between the feature points on the convex and concave regions (P<0.001). Conclusion: Our study has shown the feasibility of differential geometry features for localization of anatomical feature points on the patient surface in range images. 
The proposed framework might be useful for tasks involving feature-based image registration in range-image-guided radiation therapy.
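    The two curvature descriptors named above have standard closed forms in terms of the principal curvatures κ1 ≥ κ2 (Koenderink's shape index and curvedness). A minimal sketch, with the caveat that sign conventions vary between papers:

```python
import math

def shape_index(k1, k2):
    """Shape index in [-1, 1] from principal curvatures k1 >= k2.
    Under this sign convention a convex dome -> +1, a cup -> -1,
    a cylinder-like ridge -> +0.5."""
    if k1 == k2:                       # umbilical point: limit of atan
        return 1.0 if k1 > 0 else (-1.0 if k1 < 0 else 0.0)
    return (2.0 / math.pi) * math.atan((k1 + k2) / (k1 - k2))

def curvedness(k1, k2):
    """Curvedness: overall magnitude of surface bending."""
    return math.sqrt((k1 * k1 + k2 * k2) / 2.0)

s = shape_index(1.0, 0.0)    # ridge -> 0.5
c = curvedness(1.0, 0.0)     # sqrt(0.5)
```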

  15. Book Review: Reiner Salzer and Heinz W. Siesler (Eds.): Infrared and Raman spectroscopic imaging, 2nd ed.

    DOE PAGES

    Moore, David Steven

    2015-05-10

    This second edition of "Infrared and Raman Spectroscopic Imaging" propels practitioners in that wide-ranging field, as well as other readers, to the current state of the art in a well-produced, full-color, completely revised and updated volume. The new edition chronicles the expanded application of vibrational spectroscopic imaging, from yesterday's time-consuming point-by-point buildup of a hyperspectral image cube, through the improvements afforded by focal plane arrays and line-scan imaging, to methods applicable beyond the diffraction limit. It instructs the reader on the improved instrumentation and the image and data analysis methods, and expounds on their application to fundamental biomedical knowledge, food and agricultural surveys, materials science, process and quality control, and many other areas.

  16. Characterizing Feature Matching Performance Over Long Time Periods (Author’s Manuscript)

    DTIC Science & Technology

    2015-01-05

    older imagery. These applications, including approaches to geo-location, geo-orientation [13], geo-tagging [16], landmark recognition [23], image... orientation between features is less than 10 degrees. We calculate the percent of features from the reference image that fit into each of these three... always because the key point detection algorithm did not find feature points at the same locations and orientation. 5. Conclusions In this paper, we offer

  17. A pseudoinverse deformation vector field generator and its applications

    PubMed Central

    Yan, C.; Zhong, H.; Murphy, M.; Weiss, E.; Siebers, J. V.

    2010-01-01

    Purpose: To present, implement, and test a self-consistent pseudoinverse displacement vector field (PIDVF) generator, which preserves the location of information mapped back-and-forth between image sets. Methods: The algorithm is an iterative scheme based on nearest neighbor interpolation and a subsequent iterative search. Performance of the algorithm is benchmarked using a lung 4DCT data set with six CT images from different breathing phases and eight CT images for a single prostate patient acquired on different days. A diffeomorphic deformable image registration is used to validate our PIDVFs. Additionally, the PIDVF is used to measure the self-consistency of two nondiffeomorphic algorithms which do not use a self-consistency constraint: the ITK Demons algorithm for the lung patient images and an in-house B-Spline algorithm for the prostate patient images. Both Demons and B-Spline have been QAed through contour comparison. Self-consistency is determined by using a DIR to generate a displacement vector field (DVF) between reference image R and study image S (DVFR–S). The same DIR is used to generate DVFS–R. Additionally, our PIDVF generator is used to create PIDVFS–R. Back-and-forth mapping of a set of points (used as surrogates of contours) using DVFR–S and DVFS–R is compared to back-and-forth mapping performed with DVFR–S and PIDVFS–R. The Euclidean distances between the original unmapped points and the mapped points are used as a self-consistency measure. Results: Test results demonstrate that the consistency error observed in back-and-forth mappings can be reduced two to nine times in point mapping and 1.5 to three times in dose mapping when the PIDVF is used in place of the B-Spline algorithm. These self-consistency improvements are not affected by exchanging R and S. It is also demonstrated that differences between DVFS–R and PIDVFS–R can be used as a criterion to check the quality of the DVF. 
Conclusions: Use of a DVF and its PIDVF will improve the self-consistency of point, contour, and dose mappings in image-guided adaptive therapy. PMID:20384247
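    The self-consistency measure can be illustrated in one dimension. The sketch below builds a pseudo-inverse of a smooth forward displacement field by fixed-point iteration (an assumption for illustration; the paper's generator uses nearest-neighbor interpolation plus an iterative search) and then checks the back-and-forth mapping error:

```python
import numpy as np

# Forward DVF u on a periodic 1-D domain: x -> x + u(x).
x = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
u = 0.1 * np.sin(x)

# Pseudo-inverse v via fixed-point iteration: v(y) = -u(y + v(y)).
v = np.zeros_like(x)
for _ in range(50):
    v = -np.interp(x + v, x, u, period=2.0 * np.pi)

# Self-consistency: map sample points forward with u, back with v;
# the residual plays the role of the paper's Euclidean distance measure.
fwd = x + np.interp(x, x, u, period=2.0 * np.pi)
back = fwd + np.interp(fwd, x, v, period=2.0 * np.pi)
consistency_error = float(np.max(np.abs(back - x)))
```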

  18. Study on super-resolution three-dimensional range-gated imaging technology

    NASA Astrophysics Data System (ADS)

    Guo, Huichao; Sun, Huayan; Wang, Shuai; Fan, Youchen; Li, Yuanmiao

    2018-04-01

    Range-gated three-dimensional imaging has become a research hotspot in recent years because of its high spatial resolution, high range accuracy, long range, and ability to simultaneously capture target reflectivity information. Based on the principle of the intensity-correlation method, this paper carries out theoretical analysis and experimental research. The experimental system uses a high-power pulsed semiconductor laser as the light source and a gated ICCD as the imaging device, and allows flexible adjustment of imaging depth and distance to realize different working modes. A small-imaging-depth experiment was carried out on a building 500 m away, and 26 groups of images were obtained with a distance step of 1.5 m. The calculation of the 3D point cloud based on the triangle method is analyzed, and a 15 m depth slice of the target 3D point cloud is obtained from two frames, with a range precision better than 0.5 m. The influences of signal-to-noise ratio, illumination uniformity, and image brightness on range accuracy are analyzed. Based on a comparison with the time-slicing method, a method for improving the linearity of the point cloud is proposed.
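    The triangle (intensity-ratio) calculation can be sketched per pixel: with two overlapping gates covering the same depth slice, a linear gate model puts the target's range at the point dividing the slice in the ratio of the two gate intensities. The gate start `z_near` and slice `depth` below are made-up calibration values, not the paper's:

```python
import numpy as np

def range_from_gates(i1, i2, z_near, depth, eps=1e-12):
    """Per-pixel range under a linear two-gate intensity model:
    the fraction i2/(i1+i2) locates the target inside the slice."""
    frac = i2 / (i1 + i2 + eps)
    return z_near + frac * depth

# A target 40% of the way into a 15 m slice that starts at 500 m.
i1 = np.array([0.6])   # intensity in the first gate image
i2 = np.array([0.4])   # intensity in the second gate image
z = range_from_gates(i1, i2, z_near=500.0, depth=15.0)   # ~506 m
```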

  19. A 4DCT imaging-based breathing lung model with relative hysteresis

    PubMed Central

    Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.; Lin, Ching-Long

    2016-01-01

    To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry. PMID:28260811

  20. A 4DCT imaging-based breathing lung model with relative hysteresis

    NASA Astrophysics Data System (ADS)

    Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.; Lin, Ching-Long

    2016-12-01

    To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry.

  1. Comparison Between CT and MR Images as More Favorable Reference Data Sets for Fusion Imaging-Guided Radiofrequency Ablation or Biopsy of Hepatic Lesions: A Prospective Study with Focus on Patient's Respiration.

    PubMed

    Cha, Dong Ik; Lee, Min Woo; Kang, Tae Wook; Oh, Young-Taek; Jeong, Ja-Yeon; Chang, Jung-Woo; Ryu, Jiwon; Lee, Kyong Joon; Kim, Jaeil; Bang, Won-Chul; Shin, Dong Kuk; Choi, Sung Jin; Koh, Dalkwon; Kim, Kyunga

    2017-10-01

    To identify the more accurate reference data set for fusion imaging-guided radiofrequency ablation or biopsy of hepatic lesions between computed tomography (CT) and magnetic resonance (MR) images. This study was approved by the institutional review board, and written informed consent was received from all patients. Twelve consecutive patients who were referred to assess the feasibility of radiofrequency ablation or biopsy were enrolled. Automatic registration using CT and MR images was performed in each patient. Registration errors during optimal and opposite respiratory phases, the time required for image fusion, and the number of point locks used were compared using the Wilcoxon signed-rank test. The registration errors during the optimal respiratory phase were not significantly different between image fusion using CT and MR images as reference data sets (p = 0.969). During the opposite respiratory phase, the registration error was smaller with MR images than with CT (p = 0.028). The time and the number of point locks needed for complete image fusion were not significantly different between CT and MR images (p = 0.328 and p = 0.317, respectively). MR images would be more suitable than CT images as the reference data set for fusion imaging-guided procedures of focal hepatic lesions.

  2. a Robust Method for Stereo Visual Odometry Based on Multiple Euclidean Distance Constraint and Ransac Algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Q.; Tong, X.; Liu, S.; Lu, X.; Liu, S.; Chen, P.; Jin, Y.; Xie, H.

    2017-07-01

    Visual Odometry (VO) is a critical component for planetary robot navigation and safety. It estimates the ego-motion using stereo images frame by frame. Feature point extraction and matching is one of the key steps for robotic motion estimation, and it largely influences precision and robustness. In this work, we choose Oriented FAST and Rotated BRIEF (ORB) features by considering both accuracy and speed. For more robustness in challenging environments, e.g., rough terrain or planetary surfaces, this paper presents a robust outlier elimination method based on a Euclidean Distance Constraint (EDC) and the Random Sample Consensus (RANSAC) algorithm. In the matching process, a set of ORB feature points is extracted from the current left and right synchronous images, and a Brute Force (BF) matcher is used to find the correspondences between the two images for space intersection. Then the EDC and RANSAC algorithms are applied to eliminate mismatches whose distances are beyond a predefined threshold. Similarly, when the left image at the next time step is matched against the current left image, EDC and RANSAC are performed iteratively. In some cases a few mismatched points still remain, so RANSAC is applied a third time to eliminate the effect of those outliers on the estimation of the ego-motion parameters (interior orientation and exterior orientation). The proposed approach has been tested on a real-world vehicle dataset, and the results demonstrate its high robustness.
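    The EDC step can be sketched directly: a rigid motion preserves inter-point distances, so a correspondence whose distances to the other matched points disagree between the two frames is likely a mismatch. The median test and tolerance below are illustrative choices, not the paper's exact thresholding:

```python
import numpy as np

def edc_inliers(p, q, tol=0.05):
    """Flag match i as an inlier if the median discrepancy between
    its inter-point distances in p and in q stays below tol."""
    dp = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=2)
    dq = np.linalg.norm(q[:, None, :] - q[None, :, :], axis=2)
    return np.median(np.abs(dp - dq), axis=1) < tol

# Six matches under a rigid motion, with one corrupted correspondence.
rng = np.random.default_rng(1)
p = rng.uniform(0.0, 10.0, size=(6, 3))
th = 0.3
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0,         0.0,        1.0]])
q = p @ R.T + np.array([1.0, -2.0, 0.5])
q[2] += 4.0                        # corrupt the third match
mask = edc_inliers(p, q)           # mask[2] is False, others True
```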

  3. Centric scan SPRITE for spin density imaging of short relaxation time porous materials.

    PubMed

    Chen, Quan; Halse, Meghan; Balcom, Bruce J

    2005-02-01

    The single-point ramped imaging with T1 enhancement (SPRITE) imaging technique has proven to be a very robust and flexible method for the study of a wide range of systems with short signal lifetimes. As a pure phase encoding technique, SPRITE is largely immune to image distortions generated by susceptibility variations, chemical shift and paramagnetic impurities. In addition, it avoids the line width restrictions on resolution common to time-based sampling, frequency encoding methods. The standard SPRITE technique is however a longitudinal steady-state imaging method; the image intensity is related to the longitudinal steady state, which not only decreases the signal-to-noise ratio, but also introduces many parameters into the image signal equation. A centric scan strategy for SPRITE removes the longitudinal steady state from the image intensity equation and increases the inherent image intensity. Two centric scan SPRITE methods, that is, Spiral-SPRITE and Conical-SPRITE, with fast acquisition and greatly reduced gradient duty cycle, are outlined. Multiple free induction decay (FID) points may be acquired during SPRITE sampling for signal averaging to increase signal-to-noise ratio or for T2* and spin density mapping without an increase in acquisition time. Experimental results show that most porous sedimentary rock and concrete samples have a single exponential T2* decay due to susceptibility difference-induced field distortion. Inhomogeneous broadening thus dominates, which suggests that spin density imaging can be easily obtained by SPRITE.

  4. Small Animal Retinal Imaging

    NASA Astrophysics Data System (ADS)

    Choi, WooJhon; Drexler, Wolfgang; Fujimoto, James G.

    Developing and validating new techniques and methods for small animal imaging is an important research area because there are many small animal models of retinal diseases such as diabetic retinopathy, age-related macular degeneration, and glaucoma [1-6]. Because the retina is a multilayered structure with distinct abnormalities occurring in different intraretinal layers at different stages of disease progression, there is a need for imaging techniques that enable visualization of these layers individually at different time points. Although postmortem histology and ultrastructural analysis can be performed for investigating microscopic changes in the retina in small animal models, this requires sacrificing animals, which makes repeated assessment of the same animal at different time points impossible and increases the number of animals required. Furthermore, some retinal processes such as neurovascular coupling cannot be fully characterized postmortem.

  5. New Observations of Subarcsecond Photospheric Bright Points

    NASA Technical Reports Server (NTRS)

    Berger, T. E.; Schrijver, C. J.; Shine, R. A.; Tarbell, T. D.; Title, A. M.; Scharmer, G.

    1995-01-01

    We have used an interference filter centered at 4305 Å within the bandhead of the CH radical (the 'G band') and real-time image selection at the Swedish Vacuum Solar Telescope on La Palma to produce very high contrast images of subarcsecond photospheric bright points at all locations on the solar disk. During the 6 day period of 1993 September 15-20 we observed active region NOAA 7581 from its appearance on the East limb to a near-disk-center position on September 20. A total of 1804 bright points were selected for analysis from the disk center image using feature extraction image processing techniques. The measured Full Width at Half Maximum (FWHM) distribution of the bright points in the image is lognormal with a modal value of 220 km (0.30″) and an average value of 250 km (0.35″). The smallest measured bright point diameter is 120 km (0.17″) and the largest is 600 km (0.69″). Approximately 60% of the measured bright points are circular (eccentricity approx. 1.0), the average eccentricity is 1.5, and the maximum eccentricity, corresponding to filigree in the image, is 6.5. The peak contrast of the measured bright points is normally distributed. The contrast distribution variance is much greater than the measurement accuracy, indicating a large spread in intrinsic bright-point contrast. When referenced to an averaged 'quiet-Sun' area in the image, the modal contrast is 29% and the maximum value is 75%; when referenced to an average intergranular lane brightness in the image, the distribution has a modal value of 61% and a maximum of 119%. The bin-averaged contrast of G-band bright points is constant across the entire measured size range. The measured area of the bright points, corrected for pixelation and selection effects, covers about 1.8% of the total image area. Large pores and micropores occupy an additional 2% of the image area, implying a total area fraction of magnetic proxy features in the image of 3.8%. 
We discuss the implications of this area fraction measurement in the context of previously published measurements which show that typical active region plage has a magnetic filling factor on the order of 10% or greater. The results suggest that in the active region analyzed here, less than 50% of the small-scale magnetic flux tubes are demarcated by visible proxies such as bright points or pores.

  6. Widefield quantitative multiplex surface enhanced Raman scattering imaging in vivo

    NASA Astrophysics Data System (ADS)

    McVeigh, Patrick Z.; Mallia, Rupananda J.; Veilleux, Israel; Wilson, Brian C.

    2013-04-01

    In recent years numerous studies have shown the potential advantages of molecular imaging in vitro and in vivo using contrast agents based on surface enhanced Raman scattering (SERS); however, the low throughput of traditional point-scanned imaging methodologies has limited their use in biological imaging. In this work we demonstrate that direct widefield Raman imaging based on a tunable filter is capable of quantitative multiplex SERS imaging in vivo, and that this imaging is possible with acquisition times that are orders of magnitude lower than achievable with comparable point-scanned methodologies. The system, designed for small animal imaging, has a linear response from 0.01 to 100 pM, acquires typical in vivo images in <10 s, and with suitable SERS reporter molecules is capable of multiplex imaging without compensation for spectral overlap. To demonstrate the utility of widefield Raman imaging in biological applications, we show quantitative imaging of four simultaneous SERS reporter molecules in vivo, with resulting probe quantification in excellent agreement with known quantities (R² > 0.98).

  7. Towards Guided Underwater Survey Using Light Visual Odometry

    NASA Astrophysics Data System (ADS)

    Nawaf, M. M.; Drap, P.; Royer, J. P.; Merad, D.; Saccone, M.

    2017-02-01

    A light distributed visual odometry method adapted to an embedded hardware platform is proposed. The aim is to guide underwater surveys in real time. We rely on an image stream captured using a portable stereo rig attached to the embedded system. Captured images are analyzed on the fly to assess image quality in terms of sharpness and lightness, so that immediate actions can be taken accordingly. Images are then transferred over the network to another processing unit to compute the odometry. Relying on a standard ego-motion estimation approach, we speed up point matching between image quadruplets using a low-level point matching scheme based on the fast Harris operator and template matching that is invariant to illumination changes. We benefit from having the light source attached to the hardware platform to estimate an a priori rough depth belief following the law of light divergence over distance. The rough depth is used to limit the point correspondence search zone, as it linearly depends on disparity. A stochastic relative bundle adjustment is applied to minimize re-projection errors. The evaluation of the proposed method demonstrates the gain in computation time w.r.t. other approaches that use more sophisticated feature descriptors. The built system opens promising areas for further development and integration of embedded computer vision techniques.
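    The depth-prior idea can be sketched numerically: with the light source mounted on the platform, received intensity falls off roughly as 1/Z², so a rough depth follows from the inverse-square law, and the expected disparity d = f·B/Z bounds the correspondence search. All constants below (`k_cal`, focal length, baseline, margin) are hypothetical:

```python
import math

def rough_depth(intensity, k_cal=100.0):
    """Inverse-square model I ~ k / Z**2  =>  Z ~ sqrt(k / I)."""
    return math.sqrt(k_cal / intensity)

def disparity_window(depth, f_px=800.0, baseline=0.12, margin=0.3):
    """Centre the search on d = f*B/Z, +/- a relative margin."""
    d = f_px * baseline / depth
    return (1.0 - margin) * d, (1.0 + margin) * d

z = rough_depth(4.0)            # sqrt(100/4) = 5.0 m
lo, hi = disparity_window(z)    # window around 19.2 px
```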

  8. Comparison and assessment of semi-automatic image segmentation in computed tomography scans for image-guided kidney surgery.

    PubMed

    Glisson, Courtenay L; Altamar, Hernan O; Herrell, S Duke; Clark, Peter; Galloway, Robert L

    2011-11-01

    Image segmentation is integral to implementing intraoperative guidance for kidney tumor resection. Results seen in computed tomography (CT) data are affected by target organ physiology as well as by the segmentation algorithm used. This work studies variables involved in using level set methods found in the Insight Toolkit to segment kidneys from CT scans and applies the results to an image guidance setting. A composite algorithm drawing on the strengths of multiple level set approaches was built using the Insight Toolkit. This algorithm requires image contrast state and seed points to be identified as input, and functions independently thereafter, selecting and altering method and variable choice as needed. Semi-automatic results were compared to expert hand segmentation results directly and by the use of the resultant surfaces for registration of intraoperative data. Direct comparison using the Dice metric showed average agreement of 0.93 between semi-automatic and hand segmentation results. Use of the segmented surfaces in closest point registration of intraoperative laser range scan data yielded average closest point distances of approximately 1 mm. Application of both inverse registration transforms from the previous step to all hand segmented image space points revealed that the distance variability introduced by registering to the semi-automatically segmented surface versus the hand segmented surface was typically less than 3 mm both near the tumor target and at distal points, including subsurface points. Use of the algorithm shortened user interaction time and provided results which were comparable to the gold standard of hand segmentation. Further, the use of the algorithm's resultant surfaces in image registration provided comparable transformations to surfaces produced by hand segmentation. These data support the applicability and utility of such an algorithm as part of an image guidance workflow.
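    The Dice metric used for the direct comparison has a one-line form: twice the overlap of the two segmentations divided by the sum of their sizes. A minimal version for binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two toy masks: a 36-pixel square vs a 24-pixel overlapping strip.
m1 = np.zeros((10, 10), dtype=bool); m1[2:8, 2:8] = True
m2 = np.zeros((10, 10), dtype=bool); m2[4:8, 2:8] = True
d = dice(m1, m2)   # 2*24 / (36+24) = 0.8
```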

  9. Imaging systems and algorithms to analyze biological samples in real-time using mobile phone microscopy

    PubMed Central

    Mayberry, Addison; Perkins, David L.; Holcomb, Daniel E.

    2018-01-01

    Miniaturized imaging devices have pushed the boundaries of point-of-care imaging, but existing mobile-phone-based imaging systems do not exploit the full potential of smart phones. This work demonstrates the use of simple imaging configurations to deliver superior image quality and the ability to handle a wide range of biological samples. Results presented in this work are from analysis of fluorescent beads under fluorescence imaging, as well as helminth eggs and freshwater mussel larvae under white light imaging. To demonstrate versatility of the systems, real time analysis and post-processing results of the sample count and sample size are presented in both still images and videos of flowing samples. PMID:29509786

  10. Correlative imaging across microscopy platforms using the fast and accurate relocation of microscopic experimental regions (FARMER) method

    NASA Astrophysics Data System (ADS)

    Huynh, Toan; Daddysman, Matthew K.; Bao, Ying; Selewa, Alan; Kuznetsov, Andrey; Philipson, Louis H.; Scherer, Norbert F.

    2017-05-01

    Imaging specific regions of interest (ROIs) of nanomaterials or biological samples with different imaging modalities (e.g., light and electron microscopy) or at subsequent time points (e.g., before and after off-microscope procedures) requires relocating the ROIs. Unfortunately, relocation is typically difficult and very time consuming to achieve. Previously developed techniques involve the fabrication of arrays of features, the procedures for which are complex, and the added features can interfere with imaging the ROIs. We report the Fast and Accurate Relocation of Microscopic Experimental Regions (FARMER) method, which only requires determining the coordinates of 3 (or more) conspicuous reference points (REFs) and employs an algorithm based on geometric operators to relocate ROIs in subsequent imaging sessions. The 3 REFs can be quickly added to various regions of a sample using simple tools (e.g., permanent markers or conductive pens) and do not interfere with the ROIs. The coordinates of the REFs and the ROIs are obtained in the first imaging session (on a particular microscope platform) using an accurate and precise encoded motorized stage. In subsequent imaging sessions, the FARMER algorithm finds the new coordinates of the ROIs (on the same or different platforms), using the coordinates of the manually located REFs and the previously recorded coordinates. FARMER is convenient, fast (3-15 min/session, at least 10-fold faster than manual searches), accurate (4.4 μm average error on a microscope with a 100x objective), and precise (almost all errors are <8 μm), even with deliberate rotating and tilting of the sample well beyond normal repositioning accuracy. We demonstrate this versatility by imaging and re-imaging a diverse set of samples and imaging methods: live mammalian cells at different time points; fixed bacterial cells on two microscopes with different imaging modalities; and nanostructures on optical and electron microscopes. 
FARMER can be readily adapted to any imaging system with an encoded motorized stage and can facilitate multi-session and multi-platform imaging experiments in biology, materials science, photonics, and nanoscience.
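    The geometric core of the relocation can be sketched as solving a 2-D affine transform from the three REF correspondences and applying it to stored ROI coordinates (an illustrative reduction; the published FARMER algorithm's geometric operators may differ in detail):

```python
import numpy as np

def relocation_transform(refs_old, refs_new):
    """3x2 affine matrix M mapping session-1 coordinates onto
    session-2 coordinates, solved from three reference points."""
    A = np.hstack([refs_old, np.ones((3, 1))])   # rows [x, y, 1]
    return np.linalg.solve(A, refs_new)

def relocate(points, M):
    """Apply the transform: p -> [p, 1] @ M."""
    P = np.hstack([points, np.ones((len(points), 1))])
    return P @ M

# Session 2 saw the sample rotated 10 degrees and translated.
refs1 = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
th = np.deg2rad(10.0)
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
t = np.array([5.0, -3.0])
refs2 = refs1 @ R.T + t
roi1 = np.array([[4.0, 7.0]])                # ROI stored in session 1
roi2 = relocate(roi1, relocation_transform(refs1, refs2))
```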

  11. Hyperspectral imaging with laser-scanning sum-frequency generation microscopy

    PubMed Central

    Hanninen, Adam; Shu, Ming Wai; Potma, Eric O.

    2017-01-01

    Vibrationally sensitive sum-frequency generation (SFG) microscopy is a chemically selective imaging technique sensitive to non-centrosymmetric molecular arrangements in biological samples. The routine use of SFG microscopy has been hampered by the difficulty of integrating the required mid-infrared excitation light into a conventional, laser-scanning nonlinear optical (NLO) microscope. In this work, we describe minor modifications to a regular laser-scanning microscope to accommodate SFG microscopy as an imaging modality. We achieve vibrationally sensitive SFG imaging of biological samples with sub-μm resolution at image acquisition rates of 1 frame/s, almost two orders of magnitude faster than attained with previous point-scanning SFG microscopes. Using the fast scanning capability, we demonstrate hyperspectral SFG imaging in the CH-stretching vibrational range and point out its use in the study of molecular orientation and arrangement in biologically relevant samples. We also show multimodal imaging by combining SFG microscopy with second-harmonic generation (SHG) and coherent anti-Stokes Raman scattering (CARS) on the same imaging platform. This development underlines that SFG microscopy is a unique modality with a spatial resolution and image acquisition time comparable to those of other NLO imaging techniques, making point-scanning SFG microscopy a valuable member of the NLO imaging family. PMID:28966861

  12. High resolution depth reconstruction from monocular images and sparse point clouds using deep convolutional neural network

    NASA Astrophysics Data System (ADS)

    Dimitrievski, Martin; Goossens, Bart; Veelaert, Peter; Philips, Wilfried

    2017-09-01

    Understanding the 3D structure of the environment is advantageous for many tasks in the field of robotics and autonomous vehicles. From the robot's point of view, 3D perception is often formulated as a depth image reconstruction problem. In the literature, dense depth images are often recovered deterministically from stereo image disparities. Other systems use an expensive LiDAR sensor to produce accurate, but semi-sparse depth images. With the advent of deep learning there have also been attempts to estimate depth by only using monocular images. In this paper we combine the best of the two worlds, focusing on a combination of monocular images and low cost LiDAR point clouds. We explore the idea that very sparse depth information accurately captures the global scene structure while variations in image patches can be used to reconstruct local depth to a high resolution. The main contribution of this paper is a supervised learning depth reconstruction system based on a deep convolutional neural network. The network is trained on RGB image patches reinforced with sparse depth information and the output is a depth estimate for each pixel. Using image and point cloud data from the KITTI vision dataset we are able to learn a correspondence between local RGB information and local depth, while at the same time preserving the global scene structure. Our results are evaluated on sequences from the KITTI dataset and our own recordings using a low cost camera and LiDAR setup.

  13. Simultaneous multiple view high resolution surface geometry acquisition using structured light and mirrors.

    PubMed

    Basevi, Hector R A; Guggenheim, James A; Dehghani, Hamid; Styles, Iain B

    2013-03-25

    Knowledge of the surface geometry of an imaging subject is important in many applications. This information can be obtained via a number of different techniques, including time-of-flight imaging, photogrammetry, and fringe projection profilometry. Existing systems may have restrictions on instrument geometry, require expensive optics, or require moving parts in order to image the full surface of the subject. An inexpensive generalised fringe projection profilometry system is proposed that can account for arbitrarily placed components and use mirrors to expand the field of view. It simultaneously acquires multiple views of an imaging subject, producing a cloud of points that lie on its surface, which can then be processed to form a three-dimensional model. A prototype of this system was integrated into an existing Diffuse Optical Tomography and Bioluminescence Tomography small animal imaging system and used to image objects including a mouse-shaped plastic phantom, a mouse cadaver, and a coin. A surface mesh generated from surface capture data of the mouse-shaped plastic phantom was compared with ideal surface points provided by the phantom manufacturer; 50% of points were found to lie within 0.1 mm of the surface mesh, 82% within 0.2 mm, and 96% within 0.4 mm.
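    The reported accuracy figures are cumulative fractions of nearest-point distances. A brute-force sketch of that metric (fine for point counts in the thousands; a KD-tree would be used for larger clouds):

```python
import numpy as np

def fraction_within(points, ref, thresholds):
    """Fraction of `points` whose nearest neighbour in `ref`
    lies within each distance threshold (brute force)."""
    d = np.linalg.norm(points[:, None, :] - ref[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return [float((nearest <= t).mean()) for t in thresholds]

# Toy point cloud vs reference surface samples (units: mm).
pts = np.array([[0.0, 0.0, 0.05], [1.0, 0.0, 0.15], [0.0, 1.0, 0.45]])
ref = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
fracs = fraction_within(pts, ref, [0.1, 0.2, 0.4])   # [1/3, 2/3, 2/3]
```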

  14. Using Microsoft PowerPoint as an Astronomical Image Analysis Tool

    NASA Astrophysics Data System (ADS)

    Beck-Winchatz, Bernhard

    2006-12-01

    Engaging students in the analysis of authentic scientific data is an effective way to teach them about the scientific process and to develop their problem solving, teamwork and communication skills. In astronomy several image processing and analysis software tools have been developed for use in school environments. However, the practical implementation in the classroom is often difficult because the teachers may not have the comfort level with computers necessary to install and use these tools, they may not have adequate computer privileges and/or support, and they may not have the time to learn how to use specialized astronomy software. To address this problem, we have developed a set of activities in which students analyze astronomical images using basic tools provided in PowerPoint. These include measuring sizes, distances, and angles, and blinking images. In contrast to specialized software, PowerPoint is broadly available on school computers. Many teachers are already familiar with PowerPoint, and the skills developed while learning how to analyze astronomical images are highly transferable. We will discuss several practical examples of measurements, including the following:
    - Variations in the distances to the sun and moon from their angular sizes
    - Magnetic declination from images of shadows
    - Diameter of the moon from lunar eclipse images
    - Sizes of lunar craters
    - Orbital radii of the Jovian moons and mass of Jupiter
    - Supernova and comet searches
    - Expansion rate of the universe from images of distant galaxies
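    The first measurement listed above reduces to small-angle arithmetic: for an object of fixed size, distance is inversely proportional to apparent angular size. A hedged example; the lunar numbers are textbook perigee/apogee values, not from the article.

```python
# Distance ratio from apparent angular sizes, assuming the small-angle
# relation distance ~ 1 / angular_size for an object of fixed physical size.

def distance_ratio(ang_a_deg, ang_b_deg):
    """d_A / d_B for the same object seen at angular sizes A and B."""
    return ang_b_deg / ang_a_deg

# Moon at perigee (~0.558 deg) vs apogee (~0.491 deg): the perigee distance
# comes out roughly 12% smaller than the apogee distance.
ratio = distance_ratio(0.558, 0.491)
```

    Students would measure the two angular sizes directly on PowerPoint images and apply the same ratio.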

  15. Real-time terrain storage generation from multiple sensors towards mobile robot operation interface.

    PubMed

    Song, Wei; Cho, Seoungjae; Xi, Yulong; Cho, Kyungeun; Um, Kyhyun

    2014-01-01

    A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into 3D grids of the flag map as a comparative table in order to remove the redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as a terrain reconstruction result. Our proposed methods were tested in an outdoor environment. Our results show that the proposed system was able to rapidly generate terrain storage and provide high resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots.
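    The voxel-based flag map described above can be sketched in a few lines: incoming points are quantised to a 3D grid, and a point is kept only if its voxel has not been seen before. A minimal illustration, assuming an arbitrary voxel size; the paper's CPU/GPU pipeline is far more elaborate.

```python
# Minimal sketch of the voxel-based flag map: quantise each point to a 3D
# grid cell and keep it only if that cell's flag is not yet set, which
# removes redundant points as clouds stream in.

def register_points(points, voxel_size=0.1, seen=None):
    seen = set() if seen is None else seen      # the "flag map"
    kept = []
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        if key not in seen:                     # flag not yet set for this voxel
            seen.add(key)
            kept.append((x, y, z))
    return kept

pts = [(0.01, 0.02, 0.0), (0.03, 0.04, 0.0), (0.5, 0.0, 0.0)]
kept = register_points(pts)  # second point shares a voxel with the first
```

    Passing the same `seen` set across successive scans gives the incremental, real-time registration behaviour the abstract describes.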

  16. Real-Time Terrain Storage Generation from Multiple Sensors towards Mobile Robot Operation Interface

    PubMed Central

    Cho, Seoungjae; Xi, Yulong; Cho, Kyungeun

    2014-01-01

    A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into 3D grids of the flag map as a comparative table in order to remove the redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as a terrain reconstruction result. Our proposed methods were tested in an outdoor environment. Our results show that the proposed system was able to rapidly generate terrain storage and provide high resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots. PMID:25101321

  17. Imaging atomic-level random walk of a point defect in graphene

    NASA Astrophysics Data System (ADS)

    Kotakoski, Jani; Mangler, Clemens; Meyer, Jannik C.

    2014-05-01

    Deviations from the perfect atomic arrangements in crystals play an important role in affecting their properties. Similarly, diffusion of such deviations is behind many microstructural changes in solids. However, observation of point defect diffusion is hindered both by the difficulties related to direct imaging of non-periodic structures and by the timescales involved in the diffusion process. Here, instead of imaging thermal diffusion, we stimulate and follow the migration of a divacancy through graphene lattice using a scanning transmission electron microscope operated at 60 kV. The beam-activated process happens on a timescale that allows us to capture a significant part of the structural transformations and trajectory of the defect. The low voltage combined with ultra-high vacuum conditions ensure that the defect remains stable over long image sequences, which allows us for the first time to directly follow the diffusion of a point defect in a crystalline material.

  18. Design of point-of-care (POC) microfluidic medical diagnostic devices

    NASA Astrophysics Data System (ADS)

    Leary, James F.

    2018-02-01

    Design of inexpensive and portable hand-held microfluidic flow/image cytometry devices for initial medical diagnostics at the point of initial patient contact by emergency medical personnel in the field requires careful design in terms of power/weight requirements to allow for realistic portability as a hand-held, point-of-care medical diagnostics device. True portability also requires small micro-pumps for high-throughput capability. Weight/power requirements dictate use of super-bright LEDs and very small silicon photodiodes or nanophotonic sensors that can be powered by batteries. Signal-to-noise characteristics can be greatly improved by appropriately pulsing the LED excitation sources and sampling and subtracting noise in between excitation pulses. The requirements for basic computing, imaging, GPS and basic telecommunications can be simultaneously met by use of smartphone technologies, which become part of the overall device. Software for a user-interface system, limited real-time computing, real-time imaging, and offline data analysis can be accomplished through multi-platform software development systems that are well-suited to a variety of currently available cellphone technologies which already contain all of these capabilities. Microfluidic cytometry requires judicious use of small sample volumes and appropriate statistical sampling by microfluidic cytometry or imaging for adequate statistical significance to permit real-time (typically < 15 minutes) medical decisions for patients at the physician's office or real-time decision making in the field. One or two drops of blood obtained by pin-prick should be able to provide statistically meaningful results for use in making real-time medical decisions without the need for blood fractionation, which is not realistic in the field.
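    The pulsed-LED noise-subtraction scheme mentioned above amounts to sampling the detector during and between excitation pulses and differencing the means. A hypothetical sketch with illustrative names; real instruments would use lock-in-style demodulation and calibrated gains.

```python
# Hypothetical sketch of the pulsed-excitation scheme described above: the
# photodiode is sampled during LED-on pulses and again in the dark gaps
# between pulses, and the dark level is subtracted to suppress background.

def pulsed_signal(on_samples, off_samples):
    """Background-corrected signal from interleaved on/off photodiode reads."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(on_samples) - mean(off_samples)

sig = pulsed_signal([10.0, 11.0, 12.0], [1.0, 2.0, 3.0])
```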

  19. Mirrored pyramidal wells for simultaneous multiple vantage point microscopy.

    PubMed

    Seale, K T; Reiserer, R S; Markov, D A; Ges, I A; Wright, C; Janetopoulos, C; Wikswo, J P

    2008-10-01

    We report a novel method for obtaining simultaneous images from multiple vantage points of a microscopic specimen using size-matched microscopic mirrors created from anisotropically etched silicon. The resulting pyramidal wells enable bright-field and fluorescent side-view images, and when combined with z-sectioning, provide additional information for 3D reconstructions of the specimen. We have demonstrated the 3D localization and tracking over time of the centrosome of a live Dictyostelium discoideum. The simultaneous acquisition of images from multiple perspectives also provides a five-fold increase in the theoretical collection efficiency of emitted photons, a property which may be useful for low-light imaging modalities such as bioluminescence, or low abundance surface-marker labelling.

  20. Unified Ultrasonic/Eddy-Current Data Acquisition

    NASA Technical Reports Server (NTRS)

    Chern, E. James; Butler, David W.

    1993-01-01

    Imaging station for detecting cracks and flaws in solid materials developed combining both ultrasonic C-scan and eddy-current imaging. Incorporation of both techniques into one system eliminates duplication of computers and of mechanical scanners; unifies acquisition, processing, and storage of data; reduces setup time for repetitious ultrasonic and eddy-current scans; and increases efficiency of system. Same mechanical scanner used to maneuver either ultrasonic or eddy-current probe over specimen and acquire point-by-point data. For ultrasonic scanning, probe linked to ultrasonic pulser/receiver circuit card, while, for eddy-current imaging, probe linked to impedance-analyzer circuit card. Both ultrasonic and eddy-current imaging subsystems share same desktop-computer controller, containing dedicated plug-in circuit boards for each.

  1. Photoacoustic image reconstruction from ultrasound post-beamformed B-mode image

    NASA Astrophysics Data System (ADS)

    Zhang, Haichong K.; Guo, Xiaoyu; Kang, Hyun Jae; Boctor, Emad M.

    2016-03-01

    A requirement to reconstruct a photoacoustic (PA) image is to have channel data acquisition synchronized with the laser firing. Unfortunately, most clinical ultrasound (US) systems do not offer an interface to obtain synchronized channel data. To broaden the impact of clinical PA imaging, we propose a PA image reconstruction algorithm utilizing the US B-mode image, which is readily available from clinical scanners. A US B-mode image involves a series of signal processing steps: beamforming, followed by envelope detection, and ending with log compression. Yet it will be defocused when PA signals are the input, owing to the incorrect delay function. Our approach is to reverse the order of the image processing steps and recover the original US post-beamformed radio-frequency (RF) data, to which a synthetic aperture based PA rebeamforming algorithm can then be applied. Taking the B-mode image as the input, we first recovered US post-beamformed RF data by applying log decompression and convolving with an acoustic impulse response to restore carrier frequency information. Then, the US post-beamformed RF data are treated as pre-beamformed RF data for the adaptive PA beamforming algorithm, and the new delay function is applied, taking into account that the focus depth in US beamforming is at half the depth of the PA case. The feasibility of the proposed method was validated through simulation, and was experimentally demonstrated using an acoustic point source. The point source was successfully beamformed from a US B-mode image, and the full width at half maximum (FWHM) of the point improved by a factor of 3.97. Compared to the ground-truth reconstruction using channel data, the FWHM was slightly degraded, by a factor of 1.28, owing to information loss during envelope detection and convolution of the RF information.
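    The first recovery step, undoing the scanner's log compression, can be sketched as below. This is an assumed compression model with an assumed 60 dB dynamic range; real scanners apply proprietary compression curves, so the mapping here is only illustrative.

```python
import math

# Sketch of log compression (as a scanner might apply it, normalised to
# [0, 1] over an assumed 60 dB dynamic range) and its inverse, the "log
# decompression" step used above to approximate post-beamformed data.

def log_compress(envelope, dyn_range_db=60.0):
    return [20.0 * math.log10(max(e, 1e-12)) / dyn_range_db + 1.0
            for e in envelope]

def log_decompress(bmode, dyn_range_db=60.0):
    return [10.0 ** ((b - 1.0) * dyn_range_db / 20.0) for b in bmode]

env = [0.5, 0.01]
recovered = log_decompress(log_compress(env))  # round-trips the envelope
```

    In the paper's pipeline the decompressed envelope is then convolved with an acoustic impulse response before re-beamforming; only the inverse-compression idea is shown here.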

  2. Time delay of critical images in the vicinity of cusp point of gravitational-lens systems

    NASA Astrophysics Data System (ADS)

    Alexandrov, A.; Zhdanov, V.

    2016-12-01

    We consider approximate analytical formulas for time delays of critical images of a point source in the neighborhood of a cusp caustic. We discuss the zeroth-, first- and second-order approximations in powers of a parameter that defines the proximity of the source to the cusp. These formulas link the time delay with characteristics of the lens potential. The zeroth-order formula was obtained by Congdon, Keeton & Nordgren (MNRAS, 2008). In the case of a general lens potential, we derived the first-order correction to it. If the potential is symmetric with respect to the cusp axis, then this correction is identically equal to zero. For this case, we obtained the second-order correction. The relations found are illustrated by a simple model example.

  3. Magnetic resonance imaging (MRI) and relaxation time mapping of concrete

    NASA Astrophysics Data System (ADS)

    Beyea, Steven Donald

    2001-07-01

    The use of Magnetic Resonance Imaging (MRI) of water in concrete is presented. This thesis will approach the problem of MR imaging of concrete by attempting to design new methods, suited to concrete materials, rather than attempting to force the material to suit the method. A number of techniques were developed which allow the spatial observation of water in concrete in up to three dimensions, and permit the determination of space-resolved moisture content as well as local NMR relaxation times. These methods are all based on the Single-Point Imaging (SPI) method. The development of these new methods will be described, and the techniques validated using phantom studies. The study of one-dimensional moisture transport in drying concrete was performed using SPI. This work examined the effect of initial mixture proportions and hydration time on the drying behaviour of concrete, over a period of three months. Studies of drying concrete were also performed using spatial mapping of the spin-lattice (T1) and effective spin-spin (T2*) relaxation times, thereby permitting the observation of changes in the water-occupied pore surface-to-volume ratio (S/V) as a function of drying. Results of this work demonstrated changes in the S/V due to drying, hydration and drying-induced microcracking. Three-dimensional MRI of concrete was performed using SPRITE (Single-Point Ramped Imaging with T1 Enhancement) and turboSPI (turbo Single Point Imaging). While SPRITE allows for weighting of MR images using T1 and T2*, turboSPI allows T2 weighting of the resulting images. Using relaxation weighting it was shown to be possible to discriminate between water contained within a hydrated cement matrix and water in highly porous aggregates used to produce low-density concrete.
    Three-dimensional experiments performed using SPRITE and turboSPI examined the role of self-desiccation, drying, initial aggregate saturation and initial mixture conditions on the transport of moisture between porous aggregates and the hydrated matrix. The results demonstrate that water is both added and removed from the aggregates, depending upon the physical conditions. The images also appear to show an influx of cement products into cracks in the solid aggregate. (Abstract shortened by UMI.)
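    The relaxation-time maps described above rest on a mono-exponential signal model, S(t) = S0 exp(-t / T2*). A hedged, synthetic illustration of the per-voxel estimate; a real fit would use many encoding times per voxel rather than two.

```python
import math

# Two-point estimate of an effective relaxation time from the
# mono-exponential model S(t) = S0 * exp(-t / T2*). Synthetic numbers only.

def t2_star_two_point(t1, s1, t2, s2):
    """Effective T2* from two (time, signal) samples of a mono-exponential."""
    return (t2 - t1) / math.log(s1 / s2)

signal = lambda t: math.exp(-t / 5.0)          # synthetic voxel with T2* = 5 ms
est = t2_star_two_point(1.0, signal(1.0), 6.0, signal(6.0))
```

    Repeating such an estimate voxel by voxel yields the kind of spatial T2* map used in the drying studies.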

  4. Optimization of super-resolution processing using incomplete image sets in PET imaging.

    PubMed

    Chang, Guoping; Pan, Tinsu; Clark, John W; Mawlawi, Osama R

    2008-12-01

    Super-resolution (SR) techniques are used in PET imaging to generate a high-resolution image by combining multiple low-resolution images that have been acquired from different points of view (POVs). The number of low-resolution images used defines the processing time and memory storage necessary to generate the SR image. In this paper, the authors propose two optimized SR implementations (ISR-1 and ISR-2) that require only a subset of the low-resolution images (two sides and diagonal of the image matrix, respectively), thereby reducing the overall processing time and memory storage. In an N x N matrix of low-resolution images, ISR-1 would be generated using images from the two sides of the N x N matrix, while ISR-2 would be generated from images across the diagonal of the image matrix. The objective of this paper is to investigate whether the two proposed SR methods can achieve similar performance in contrast and signal-to-noise ratio (SNR) as the SR image generated from a complete set of low-resolution images (CSR) using simulation and experimental studies. A simulation, a point source, and a NEMA/IEC phantom study were conducted for this investigation. In each study, 4 (2 x 2) or 16 (4 x 4) low-resolution images were reconstructed from the same acquired data set while shifting the reconstruction grid to generate images from different POVs. SR processing was then applied in each study to combine all as well as two different subsets of the low-resolution images to generate the CSR, ISR-1, and ISR-2 images, respectively. For reference purpose, a native reconstruction (NR) image using the same matrix size as the three SR images was also generated. The resultant images (CSR, ISR-1, ISR-2, and NR) were then analyzed using visual inspection, line profiles, SNR plots, and background noise spectra. The simulation study showed that the contrast and the SNR difference between the two ISR images and the CSR image were on average 0.4% and 0.3%, respectively. 
Line profiles of the point source study showed that the three SR images exhibited similar signal amplitudes and FWHM. The NEMA/IEC study showed that the average difference in SNR among the three SR images was 2.1% with respect to one another and they contained similar noise structure. ISR-1 and ISR-2 can be used to replace CSR, thereby reducing the total SR processing time and memory storage while maintaining similar contrast, resolution, SNR, and noise structure.
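    The SR combination step described above can be sketched as interleaving low-resolution images reconstructed on half-pixel-shifted grids onto a grid of twice the sampling density. A minimal 2 x 2 illustration; the ISR variants would simply feed a subset of the shifted images into the SR estimator.

```python
# Interleave four low-resolution images, reconstructed on grids shifted by
# half a pixel in x and/or y, onto a 2x-denser high-resolution grid.

def interleave_2x2(img00, img10, img01, img11):
    """imgXY is the low-res image shifted by X/2 px in x and Y/2 px in y."""
    h, w = len(img00), len(img00[0])
    hi = [[0.0] * (2 * w) for _ in range(2 * h)]
    for r in range(h):
        for c in range(w):
            hi[2 * r][2 * c] = img00[r][c]
            hi[2 * r][2 * c + 1] = img10[r][c]
            hi[2 * r + 1][2 * c] = img01[r][c]
            hi[2 * r + 1][2 * c + 1] = img11[r][c]
    return hi

hi = interleave_2x2([[1]], [[2]], [[3]], [[4]])
```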

  5. Preclinical evaluation and intraoperative human retinal imaging with a high-resolution microscope-integrated spectral domain optical coherence tomography device.

    PubMed

    Hahn, Paul; Migacz, Justin; O'Donnell, Rachelle; Day, Shelley; Lee, Annie; Lin, Phoebe; Vann, Robin; Kuo, Anthony; Fekrat, Sharon; Mruthyunjaya, Prithvi; Postel, Eric A; Izatt, Joseph A; Toth, Cynthia A

    2013-01-01

    The authors have recently developed a high-resolution microscope-integrated spectral domain optical coherence tomography (MIOCT) device designed to enable OCT acquisition simultaneous with surgical maneuvers. The purpose of this report is to describe translation of this device from preclinical testing into human intraoperative imaging. Before human imaging, surgical conditions were fully simulated for extensive preclinical MIOCT evaluation in a custom model eye system. Microscope-integrated spectral domain OCT images were then acquired in normal human volunteers and during vitreoretinal surgery in patients who consented to participate in a prospective institutional review board-approved study. Microscope-integrated spectral domain OCT images were obtained before and at pauses in surgical maneuvers and were compared based on predetermined diagnostic criteria to images obtained with a high-resolution spectral domain research handheld OCT system (HHOCT; Bioptigen, Inc) at the same time point. Cohorts of five consecutive patients were imaged. Successful end points were predefined, including ≥80% correlation in identification of pathology between MIOCT and HHOCT in ≥80% of the patients. Microscope-integrated spectral domain OCT was favorably evaluated by study surgeons and scrub nurses, all of whom responded that they would consider participating in human intraoperative imaging trials. The preclinical evaluation identified significant improvements that were made before MIOCT use during human surgery. The MIOCT transition into clinical human research was smooth. Microscope-integrated spectral domain OCT imaging in normal human volunteers demonstrated high resolution comparable to tabletop scanners. In the operating room, after an initial learning curve, surgeons successfully acquired human macular MIOCT images before and after surgical maneuvers. 
Microscope-integrated spectral domain OCT imaging confirmed preoperative diagnoses, such as full-thickness macular hole and vitreomacular traction, and demonstrated postsurgical changes in retinal morphology. Two cohorts of five patients were imaged. In the second cohort, the predefined end points were exceeded with ≥80% correlation between microscope-mounted OCT and HHOCT imaging in 100% of the patients. This report describes high-resolution MIOCT imaging using the prototype device in human eyes during vitreoretinal surgery, with successful achievement of predefined end points for imaging. Further refinements and investigations will be directed toward fully integrating MIOCT with vitreoretinal and other ocular surgery to image surgical maneuvers in real time.

  6. Real-Time Spaceborne Synthetic Aperture Radar Float-Point Imaging System Using Optimized Mapping Methodology and a Multi-Node Parallel Accelerating Technique

    PubMed Central

    Li, Bingyi; Chen, Liang; Yu, Wenyue; Xie, Yizhuang; Bian, Mingming; Zhang, Qingjun; Pang, Long

    2018-01-01

    With the development of satellite load technology and very large-scale integrated (VLSI) circuit technology, on-board real-time synthetic aperture radar (SAR) imaging systems have facilitated rapid response to disasters. A key goal of the on-board SAR imaging system design is to achieve high real-time processing performance under severe size, weight, and power consumption constraints. This paper presents a multi-node prototype system for real-time SAR imaging processing. We decompose the commonly used chirp scaling (CS) SAR imaging algorithm into two parts according to the computing features. The linearization and logic-memory optimum allocation methods are adopted to realize the nonlinear part in a reconfigurable structure, and the two-part bandwidth balance method is used to realize the linear part. Thus, float-point SAR imaging processing can be integrated into a single Field Programmable Gate Array (FPGA) chip instead of relying on distributed technologies. A single processing node requires 10.6 s and consumes 17 W to focus 25-km-swath-width, 5-m-resolution stripmap SAR raw data with a granularity of 16,384 × 16,384. The design methodology of the multi-FPGA parallel accelerating system under the real-time principle is introduced. As a proof of concept, a prototype with four processing nodes and one master node is implemented using a Xilinx xc6vlx315t FPGA. The weight and volume of one single machine are 10 kg and 32 cm × 24 cm × 20 cm, respectively, and the power consumption is under 100 W. The real-time performance of the proposed design is demonstrated on Chinese Gaofen-3 stripmap continuous imaging. PMID:29495637

  7. Modifications in SIFT-based 3D reconstruction from image sequence

    NASA Astrophysics Data System (ADS)

    Wei, Zhenzhong; Ding, Boshen; Wang, Wei

    2014-11-01

    In this paper, we aim to reconstruct 3D points of a scene from related images. The Scale Invariant Feature Transform (SIFT), a feature extraction and matching algorithm, has been refined for years and is widely used in image alignment and stitching, image recognition, and 3D reconstruction. Because of the robustness and reliability of SIFT's feature extraction and matching, we use it to find correspondences between images, and we describe a SIFT-based method to reconstruct sparse 3D points from ordered images. In the matching process, we modify how correct correspondences are found and obtain a satisfying matching result: rejecting "questionable" points before initial matching makes the final matching more reliable. Given that SIFT features are invariant to image scale, rotation, and changes in the environment, we propose a way to delete the duplicate reconstructed points that occur during the sequential reconstruction procedure, which improves the accuracy of the reconstruction. By removing the duplicated points, we avoid the possible collapse caused by inexact initialization or error accumulation, and our approach does not require that all reprojected points be visible at all times. Small per-point errors can compound as the number of images increases, and the paper contrasts results with and without the proposed modifications. Moreover, we present an approach to evaluate the reconstruction by comparing reconstructed angles and length ratios against their actual values using a calibration target in the scene. The proposed evaluation method is easy to carry out and of practical value: even without Internet image datasets, we can evaluate our own results. The whole algorithm has been tested on several image sequences, both from the Internet and from our own captures.
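    The duplicate-point removal step described above can be sketched as a radius test: a newly triangulated 3D point is discarded when it lies within a small radius of an already reconstructed point. The radius is an assumed tuning parameter, and the brute-force scan is for clarity only.

```python
import math

# Sketch of duplicate 3D point removal: keep a point only if it is farther
# than `radius` from every point kept so far (radius is an assumed parameter).

def merge_points(points, radius=0.01):
    kept = []
    for p in points:
        if all(math.dist(p, q) > radius for q in kept):
            kept.append(p)
    return kept

merged = merge_points([(0.0, 0.0, 0.0), (0.001, 0.0, 0.0), (1.0, 0.0, 0.0)])
```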

  8. Terrestrial laser scanning point clouds time series for the monitoring of slope movements: displacement measurement using image correlation and 3D feature tracking

    NASA Astrophysics Data System (ADS)

    Bornemann, Pierrick; Malet, Jean-Philippe; Stumpf, André; Puissant, Anne; Travelletti, Julien

    2016-04-01

    Dense multi-temporal point clouds acquired with terrestrial laser scanning (TLS) have proved useful for the study of the structure and kinematics of slope movements. Most of the existing deformation analysis methods rely on the use of interpolated data. Approaches that use multiscale image correlation provide a precise and robust estimation of the observed movements; however, for non-rigid motion patterns, these methods tend to underestimate all the components of the movement. Further, for rugged surface topography, interpolated data introduce a bias and a loss of information in local places where the point cloud is not sufficiently dense. Those limits can be overcome by deformation analyses that operate directly on the original 3D point clouds, under some hypotheses on the deformation (e.g. the classic ICP algorithm requires an initial user guess of the expected displacement patterns). The objective of this work is therefore to propose a deformation analysis method applied to a series of 20 3D point clouds covering the period October 2007 - October 2015 at the Super-Sauze landslide (South East French Alps). The dense point clouds have been acquired with a terrestrial long-range Optech ILRIS-3D laser scanning device from the same base station. The time series are analyzed using two approaches: 1) a method of correlation of gradient images, and 2) a method of feature tracking in the raw 3D point clouds. The estimated surface displacements are then compared with GNSS surveys on reference targets. Preliminary results tend to show that the image correlation method provides a good first-order estimation of the displacement fields, but has limitations such as the inability to track some deformation patterns, and the use of a perspective projection that does not preserve the original angles and distances in the correlated images.
Results obtained with 3D point clouds comparison algorithms (C2C, ICP, M3C2) bring additional information on the displacement fields. Displacement fields derived from both approaches are then combined and provide a better understanding of the landslide kinematics.
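    The simplest of the cloud-comparison measures cited above (C2C) assigns each point of the later epoch the distance to its nearest neighbour in the earlier epoch. A brute-force sketch for clarity; a real implementation would use a k-d tree, and M3C2/ICP are considerably more involved.

```python
import math

# Cloud-to-cloud (C2C) distances: for each point of the later epoch, the
# distance to its nearest neighbour in the earlier epoch. Brute force.

def c2c_distances(cloud_t0, cloud_t1):
    return [min(math.dist(p, q) for q in cloud_t0) for p in cloud_t1]

d = c2c_distances([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)], [(0.0, 0.0, 0.5)])
```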

  9. SU-E-T-362: Automatic Catheter Reconstruction of Flap Applicators in HDR Surface Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buzurovic, I; Devlin, P; Hansen, J

    2014-06-01

    Purpose: Catheter reconstruction is crucial for the accurate delivery of radiation dose in HDR brachytherapy. The process becomes complicated and time-consuming for large superficial clinical targets with a complex topology. A novel method for the automatic catheter reconstruction of flap applicators is proposed in this study. Methods: We have developed a program package capable of image manipulation, using C++ class libraries of the Visualization Toolkit (VTK) software system. The workflow for automatic catheter reconstruction is: a) an anchor point is placed in 3D or in the axial view of the first slice at the tip of the first, last and middle points for the curved surface; b) similar points are placed on the last slice of the image set; c) the surface detection algorithm automatically registers the points to the images and applies the surface reconstruction filter; d) a structured grid surface is then generated through the center of the treatment catheters, placed at a distance of 5 mm from the patient's skin. As a result, a mesh-style plane is generated with the reconstructed catheters placed 10 mm apart. To demonstrate automatic catheter reconstruction, we used CT images of patients diagnosed with cutaneous T-cell lymphoma and imaged with Freiburg Flap Applicators (Nucletron-Elekta, Netherlands). The coordinates for each catheter were generated and compared to the control points selected during the manual reconstruction for 16 catheters and 368 control points. Results: The variation of the catheter tip positions between the automatically and manually reconstructed catheters was 0.17 mm (SD = 0.23 mm). The position difference between the manually selected catheter control points and the corresponding points obtained automatically was 0.17 mm in the x-direction (SD = 0.23 mm), 0.13 mm in the y-direction (SD = 0.22 mm), and 0.14 mm in the z-direction (SD = 0.24 mm).
    Conclusion: This study shows the feasibility of the automatic catheter reconstruction of flap applicators with a high level of positioning accuracy. Implementation of this technique has potential to decrease the planning time and may improve overall quality in superficial brachytherapy.

  10. Terrain modeling for real-time simulation

    NASA Astrophysics Data System (ADS)

    Devarajan, Venkat; McArthur, Donald E.

    1993-10-01

    There are many applications, such as pilot training, mission rehearsal, and hardware-in-the-loop simulation, which require the generation of realistic images of terrain and man-made objects in real time. One approach to meeting this requirement is to drape photo-texture over a planar-polygon model of the terrain. The real-time system then computes, for each pixel of the output image, the address in a texture map based on the intersection of the line-of-sight vector with the terrain model. High-quality image generation requires that the terrain be modeled with a fine mesh of polygons, while hardware costs limit the number of polygons which may be displayed for each scene. The trade-off between these conflicting requirements must be made in real time because it depends on the changing position and orientation of the pilot's eye point or simulated sensor. The traditional approach is to develop a database consisting of multiple levels of detail (LOD) and then select LODs for display as a function of range. This approach can lead to both anomalies in the displayed scene and inefficient use of resources. An approach has been developed in which the terrain is modeled with a set of nested polygons and organized as a tree with each node corresponding to a polygon. This tree is pruned to select the optimum set of nodes for each eye-point position. As the point of view moves, nodes whose visibility drops below the limit of perception may be deleted, while new points must be added in regions near the eye point. An analytical model has been developed to determine the number of polygons required for display. This model leads to quantitative performance measures of the triangulation algorithm, which is useful for optimizing system performance with a limited display capability.
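    The range-based pruning of the nested-polygon tree described above can be sketched as a recursive visibility test: a node is refined into its children only while its projected detail (node size over eye distance) exceeds a perception threshold. The node layout and threshold below are illustrative assumptions, not the paper's data structures.

```python
import math

# Hypothetical sketch of range-based LOD tree pruning: refine a node into
# its children only while its apparent size at the current eye point exceeds
# a perception threshold; otherwise draw the node itself.

def select_nodes(node, eye, threshold):
    """node: dict with 'center', 'size', optional 'children'; returns leaves to draw."""
    dist = math.dist(node["center"], eye)
    if node.get("children") and node["size"] / max(dist, 1e-9) > threshold:
        out = []
        for child in node["children"]:
            out.extend(select_nodes(child, eye, threshold))
        return out
    return [node]

root = {"center": (0.0, 0.0, 0.0), "size": 10.0,
        "children": [{"center": (-2.0, 0.0, 0.0), "size": 5.0},
                     {"center": (2.0, 0.0, 0.0), "size": 5.0}]}
near = select_nodes(root, (0.0, 0.0, 20.0), 0.3)    # close: refined into children
far = select_nodes(root, (0.0, 0.0, 200.0), 0.3)    # distant: one coarse node
```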

  11. Open-loop measurement of data sampling point for SPM

    NASA Astrophysics Data System (ADS)

    Wang, Yueyu; Zhao, Xuezeng

    2006-03-01

    SPM (Scanning Probe Microscope) provides "three-dimensional images" with nanometer level resolution, and some of them can be used as metrology tools. However, SPM's images are commonly distorted by non-ideal properties of SPM's piezoelectric scanner, which reduces metrological accuracy and data repeatability. In order to eliminate this limit, an "open-loop sampling" method is presented. In this method, the positional values of sampling points in all three directions on the surface of the sample are measured by the position sensor and recorded in SPM's image file, which is used to replace the image file from a conventional SPM. Because the positions in X and Y directions are measured at the same time of sampling height information in Z direction, the image distortion caused by scanner locating error can be reduced by proper image processing algorithm.

  12. High-speed 3D imaging of cellular activity in the brain using axially-extended beams and light sheets.

    PubMed

    Hillman, Elizabeth Mc; Voleti, Venkatakaushik; Patel, Kripa; Li, Wenze; Yu, Hang; Perez-Campos, Citlali; Benezra, Sam E; Bruno, Randy M; Galwaduge, Pubudu T

    2018-06-01

    As optical reporters and modulators of cellular activity have become increasingly sophisticated, the amount that can be learned about the brain via high-speed cellular imaging has increased dramatically. However, despite fervent innovation, point-scanning microscopy is facing a fundamental limit in achievable 3D imaging speeds and fields of view. A range of alternative approaches are emerging, some of which are moving away from point-scanning to use axially-extended beams or sheets of light, for example swept confocally aligned planar excitation (SCAPE) microscopy. These methods are proving effective for high-speed volumetric imaging of the nervous system of small organisms such as Drosophila (fruit fly) and D. rerio (zebrafish), and are showing promise for imaging activity in the living mammalian brain using both single and two-photon excitation. This article describes these approaches and presents a simple model that demonstrates key advantages of axially-extended illumination over point-scanning strategies for high-speed volumetric imaging, including longer integration times per voxel, improved photon efficiency and reduced photodamage. Copyright © 2018 Elsevier Ltd. All rights reserved.
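
    The integration-time advantage can be illustrated with a toy timing model (an assumption for illustration, not the article's model): a point scanner divides each volume period among all voxels, while a swept sheet divides it only among sheet positions, so every voxel in a plane integrates for the full plane period.

```python
def dwell_time_per_voxel(volume_rate_hz, nx, ny, nz, mode):
    """Per-voxel integration time for one volume under a simple model:
    a point scanner visits each of the nx*ny*nz voxels once per volume,
    while a sheet / axially-extended beam exposes an entire plane at
    once and only steps through nx sheet positions per volume."""
    t_volume = 1.0 / volume_rate_hz
    if mode == "point":
        return t_volume / (nx * ny * nz)
    if mode == "sheet":
        return t_volume / nx  # nx sheet positions swept per volume
    raise ValueError(mode)
```

    Under this model the sheet gains a factor of ny*nz in per-voxel dwell time at the same volume rate, which is the origin of the photon-efficiency argument.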

  13. Contour-based image warping

    NASA Astrophysics Data System (ADS)

    Chan, Kwai H.; Lau, Rynson W.

    1996-09-01

    Image warping concerns transforming an image from one spatial coordinate system to another. It is widely used for the visual effect of deforming and morphing images in the film industry. A number of warping techniques have been introduced, mainly based on the corresponding-pair mapping of feature points, feature vectors, or feature patches (mostly triangular or quadrilateral). However, warping of an image object with an arbitrary shape is often required. This calls for a warping technique based on the boundary contour instead of feature points or feature line-vectors. In addition, when feature-point- or feature-vector-based techniques are used, the object boundary must be approximated with points or vectors, and the matching of corresponding pairs becomes very time consuming if a fine approximation is required. In this paper, we propose a contour-based warping technique for warping image objects with arbitrary shapes. The novel idea of the new method is the introduction of mathematical morphology to allow more flexible control of image warping. Two morphological operators are used as contour determinators. The erosion operator is used to warp image contents inside a user-specified contour, while the dilation operator is used to warp image contents outside the contour. This new method is proposed to assist further development of a semi-automatic motion morphing system when accompanied by robust feature extractors such as deformable templates or active contour models.
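
    The two contour determinators are standard morphology. A minimal sketch, assuming binary (boolean) region masks and a 3x3 structuring element; the paper's actual warping machinery built on top of them is not reproduced here.

```python
import numpy as np

def erode(mask):
    """3x3 binary erosion: a pixel survives only if its whole
    neighbourhood lies inside the region (pulls the contour inward)."""
    p = np.pad(mask, 1, constant_values=False)
    out = np.ones_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + mask.shape[0],
                     1 + dx:1 + dx + mask.shape[1]]
    return out

def dilate(mask):
    """3x3 binary dilation: grows the region outward past the contour."""
    p = np.pad(mask, 1, constant_values=False)
    out = np.zeros_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + mask.shape[0],
                     1 + dx:1 + dx + mask.shape[1]]
    return out
```

    Iterating erosion inside the user-specified contour and dilation outside it produces the nested contour families along which image content is displaced.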

  14. Infrared image background modeling based on improved Susan filtering

    NASA Astrophysics Data System (ADS)

    Yuehua, Xia

    2018-02-01

    When the SUSAN filter is used to model the background of an infrared image, its Gaussian kernel lacks directional selectivity: edge information in the image is not preserved after filtering, so the difference image contains many edge singular points, which increases the difficulty of target detection. To solve this problem, anisotropy is introduced in this paper, and the isotropic Gaussian in the SUSAN filter operator is replaced with an anisotropic Gaussian filter. First, an anisotropic gradient operator computes the horizontal and vertical gradients at each image point to determine the long-axis direction of the filter. Second, the smoothness of the point's local region and neighborhood is used to set the long- and short-axis variances of the filter. Then, the first-order norm of the difference between the gray levels in the point's local region and their mean determines the threshold of the SUSAN filter. Finally, the constructed SUSAN filter is convolved with the image to obtain the background image, and the difference between the background image and the original image is computed. Background modeling quality is evaluated by mean squared error (MSE), structural similarity (SSIM), and local signal-to-noise ratio gain (GSNR). Compared with the traditional filtering algorithm, the improved SUSAN filter achieves a better background model: edge information in the image is effectively preserved, dim small targets are enhanced in the difference image, and the false alarm rate is greatly reduced.
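
    The oriented kernel that replaces the isotropic Gaussian can be sketched directly; the kernel size and parameter names below are illustrative assumptions, and the gradient-based estimation of theta and the variances is omitted.

```python
import numpy as np

def aniso_gauss_kernel(size, sigma_long, sigma_short, theta):
    """Anisotropic (oriented) Gaussian kernel: sigma_long acts along the
    local edge direction theta and sigma_short across it, so smoothing
    follows edges instead of blurring across them."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    u = x * np.cos(theta) + y * np.sin(theta)    # along-edge coordinate
    v = -x * np.sin(theta) + y * np.cos(theta)   # cross-edge coordinate
    k = np.exp(-0.5 * ((u / sigma_long) ** 2 + (v / sigma_short) ** 2))
    return k / k.sum()  # normalize so background brightness is preserved
```

    Convolving with kernels whose theta tracks the local gradient direction is what preserves edge structure in the background estimate.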

  15. Iterative Minimum Variance Beamformer with Low Complexity for Medical Ultrasound Imaging.

    PubMed

    Deylami, Ali Mohades; Asl, Babak Mohammadzadeh

    2018-06-04

    The minimum variance beamformer (MVB) improves the resolution and contrast of medical ultrasound images compared with the delay-and-sum (DAS) beamformer. The weight vector of this beamformer must be calculated for each imaging point independently, at the cost of increased computational complexity. The large number of necessary calculations limits this beamformer's application in real-time systems. A beamformer is proposed based on the MVB with lower computational complexity while preserving its advantages. This beamformer avoids matrix inversion, which is the most complex part of the MVB, by solving the optimization problem iteratively. The signals received from two imaging points close together do not vary much in medical ultrasound imaging; therefore, using the previously optimized weight vector of one point as the initial weight vector for the neighboring point improves the convergence speed and decreases the computational complexity. The proposed method was applied to several data sets, and it has been shown that it reproduces the results obtained by the MVB while the order of complexity is decreased from O(L^3) to O(L^2). Copyright © 2018 World Federation for Ultrasound in Medicine and Biology. Published by Elsevier Inc. All rights reserved.
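
    One way to realize an inversion-free iteration is projected gradient descent on the MV cost min w^H R w subject to w^H a = 1, warm-started from the neighbouring point's weights. This is a sketch under assumed details (step size, iteration count), not necessarily the authors' exact update rule.

```python
import numpy as np

def iterative_mv_weights(R, a, w0, mu=0.1, n_iter=200):
    """Iterative minimum-variance weights without inverting R.

    R  : (L, L) sample covariance at the imaging point
    a  : (L,) steering vector
    w0 : warm start, e.g. the optimized weights of a neighbouring point
    Each iteration costs one matrix-vector product, i.e. O(L^2)."""
    w = w0.astype(complex)
    for _ in range(n_iter):
        w = w - mu * (R @ w)                              # descend w^H R w
        w = w + a * (1 - np.vdot(a, w)) / np.vdot(a, a)   # re-impose a^H w = 1
    return w
```

    The fixed point of this iteration is the classical MV solution R^{-1}a / (a^H R^{-1} a), and a good warm start from the adjacent imaging point means far fewer iterations are needed in practice.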

  16. Robust curb detection with fusion of 3D-Lidar and camera data.

    PubMed

    Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen

    2014-05-21

    Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method through exploiting 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, by using the multi-scale normal patterns based on the curb's geometric property, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov Chain to model the consistency of curb points which utilizes the continuous property of the curb, and thus the optimal curb path which links the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and give the confidence scores on the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes.
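
    The Markov-chain link-up solved by dynamic programming can be sketched as a Viterbi pass over per-row curb candidates. The additive score and the absolute-difference smoothness penalty below are illustrative assumptions standing in for the paper's consistency model.

```python
def best_curb_path(rows, smooth=1.0):
    """Link per-row curb candidates into the optimal curb path.

    rows: list of image rows; each row is a list of (column, score)
    candidates. Maximises sum(score) - smooth * sum(|col_t - col_{t-1}|),
    enforcing the curb's continuity, via dynamic programming."""
    cost = [s for _, s in rows[0]]   # best total ending at each candidate
    back = []
    for r in range(1, len(rows)):
        prev = rows[r - 1]
        new_cost, ptr = [], []
        for c, s in rows[r]:
            j = max(range(len(prev)),
                    key=lambda k: cost[k] - smooth * abs(c - prev[k][0]))
            ptr.append(j)
            new_cost.append(cost[j] - smooth * abs(c - prev[j][0]) + s)
        cost, back = new_cost, back + [ptr]
    # trace back the optimal candidate indices, then report columns
    i = max(range(len(cost)), key=lambda k: cost[k])
    path = [i]
    for ptr in reversed(back):
        i = ptr[i]
        path.append(i)
    path.reverse()
    return [rows[r][i][0] for r, i in enumerate(path)]
```

    The smoothness weight trades off detector confidence against curb continuity: a large value suppresses isolated high-scoring outliers that would force a jump in the path.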

  17. Motion estimation accuracy for visible-light/gamma-ray imaging fusion for portable portal monitoring

    NASA Astrophysics Data System (ADS)

    Karnowski, Thomas P.; Cunningham, Mark F.; Goddard, James S.; Cheriyadat, Anil M.; Hornback, Donald E.; Fabris, Lorenzo; Kerekes, Ryan A.; Ziock, Klaus-Peter; Gee, Timothy F.

    2010-01-01

    The use of radiation sensors as portal monitors is increasing due to heightened concerns over the smuggling of fissile material. Portable systems that can detect significant quantities of fissile material in vehicular traffic are of particular interest. We have constructed a prototype rapid-deployment gamma-ray imaging portal monitor that uses machine vision and gamma-ray imaging to monitor multiple lanes of traffic. Vehicles are detected and tracked by using point detection and optical flow methods as implemented in the OpenCV software library. Points are clustered together, but imperfections in the detected points and tracks cause errors in the accuracy of the vehicle position estimates. The resulting errors cause a "blurring" effect in the gamma image of the vehicle. To minimize these errors, we have compared a variety of motion estimation techniques, including an estimate using the median of the clustered points, a "best-track" filtering algorithm, and a constant-velocity motion estimation model. The accuracy of these methods is contrasted by quantifying the root-mean-square differences in the times at which vehicles cross the gamma-ray image pixel boundaries, compared with a manually verified ground-truth measurement.
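
    The constant-velocity model can be sketched as a least-squares line fit to the per-frame track positions; this is a generic fit, not the authors' exact implementation.

```python
import numpy as np

def constant_velocity_fit(t, x):
    """Fit the constant-velocity model x(t) = x0 + v * t to noisy
    per-frame vehicle positions; evaluating the fitted model instead
    of the raw track smooths the position estimates and reduces the
    blurring of the accumulated gamma-ray image."""
    A = np.stack([np.ones_like(t), t], axis=1)
    (x0, v), *_ = np.linalg.lstsq(A, x, rcond=None)
    return x0, v
```

    Pixel-boundary crossing times then follow by inverting the model, t = (x_boundary - x0) / v, which is what the root-mean-square comparison measures.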

  18. Automatic detection of anatomical regions in frontal x-ray images: comparing convolutional neural networks to random forest

    NASA Astrophysics Data System (ADS)

    Olory Agomma, R.; Vázquez, C.; Cresson, T.; De Guise, J.

    2018-02-01

    Most algorithms to detect and identify anatomical structures in medical images require either to be initialized close to the target structure, or to know that the structure is present in the image, or to be trained on a homogeneous database (e.g. all full body or all lower limbs). Detecting these structures when there is no guarantee that the structure is present in the image, or when the image database is heterogeneous (mixed configurations), is a challenge for automatic algorithms. In this work we compared two state-of-the-art machine learning techniques in order to determine which one is the most appropriate for predicting target locations based on image patches. Knowing the positions of thirteen landmark points labelled by an expert in EOS frontal radiographs, we learn the displacement between salient points detected in the image and these thirteen landmarks. The learning step is carried out by exploring two machine learning methods: a Convolutional Neural Network (CNN) and a Random Forest (RF). The automatic detection of the thirteen landmark points in a new image is then obtained by averaging the positions of each landmark as estimated from all the salient points in the new image. We obtain, for CNN and RF respectively, an average prediction error (mean ± standard deviation, in mm) of 29 ± 18 and 30 ± 21 for the thirteen landmark points, indicating the approximate location of anatomical regions. On the other hand, the learning time is 9 days for CNN versus 80 minutes for RF. We provide a comparison of the results between the two machine learning approaches.

  19. Detection of nuclei in 4D Nomarski DIC microscope images of early Caenorhabditis elegans embryos using local image entropy and object tracking

    PubMed Central

    Hamahashi, Shugo; Onami, Shuichi; Kitano, Hiroaki

    2005-01-01

    Background The ability to detect nuclei in embryos is essential for studying the development of multicellular organisms. A system of automated nuclear detection has already been tested on a set of four-dimensional (4D) Nomarski differential interference contrast (DIC) microscope images of Caenorhabditis elegans embryos. However, the system needed laborious hand-tuning of its parameters every time a new image set was used. It could not detect nuclei in the process of cell division, and could detect nuclei only from the two- to eight-cell stages. Results We developed a system that automates the detection of nuclei in a set of 4D DIC microscope images of C. elegans embryos. Local image entropy is used to produce regions of the images that have the image texture of the nucleus. From these regions, those that actually detect nuclei are manually selected at the first and last time points of the image set, and an object-tracking algorithm then selects regions that detect nuclei in between the first and last time points. The use of local image entropy makes the system applicable to multiple image sets without the need to change its parameter values. The use of an object-tracking algorithm enables the system to detect nuclei in the process of cell division. The system detected nuclei with high sensitivity and specificity from the one- to 24-cell stages. Conclusion A combination of local image entropy and an object-tracking algorithm enabled highly objective and productive detection of nuclei in a set of 4D DIC microscope images of C. elegans embryos. The system will facilitate genomic and computational analyses of C. elegans embryos. PMID:15910690
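
    Local image entropy itself is a standard texture measure. A windowed-histogram sketch follows; the window half-size and bin count are assumed parameters, not values from the paper.

```python
import numpy as np

def local_entropy(img, half=2, bins=16):
    """Shannon entropy of the grey-level histogram in a (2*half+1)^2
    window around each pixel. Textured regions such as nuclei score
    higher than featureless regions, giving the detection cue."""
    img = np.asarray(img, float)
    q = np.minimum((img / (img.max() + 1e-12) * bins).astype(int), bins - 1)
    h, w = q.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            win = q[max(0, y - half):y + half + 1,
                    max(0, x - half):x + half + 1]
            p = np.bincount(win.ravel(), minlength=bins) / win.size
            p = p[p > 0]
            out[y, x] = -(p * np.log2(p)).sum()
    return out
```

    Because the entropy of a window depends only on the histogram shape, not absolute intensities, the same threshold transfers across image sets, which is the property the authors exploit to avoid per-set parameter tuning.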

  20. A Scanning Quantum Cryogenic Atom Microscope

    NASA Astrophysics Data System (ADS)

    Lev, Benjamin

    Microscopic imaging of local magnetic fields provides a window into the organizing principles of complex and technologically relevant condensed matter materials. However, a wide variety of intriguing strongly correlated and topologically nontrivial materials exhibit poorly understood phenomena outside the detection capability of state-of-the-art high-sensitivity, high-resolution scanning probe magnetometers. We introduce a quantum-noise-limited scanning probe magnetometer that can operate from room to cryogenic temperatures with unprecedented DC-field sensitivity and micron-scale resolution. The Scanning Quantum Cryogenic Atom Microscope (SQCRAMscope) employs a magnetically levitated atomic Bose-Einstein condensate (BEC), thereby providing immunity to conductive and blackbody radiative heating. The SQCRAMscope has a field sensitivity of 1.4 nT per resolution-limited point (2 µm), or 6 nT/√Hz per point at its duty cycle. Compared to point-by-point sensors, the long length of the BEC provides a naturally parallel measurement, allowing one to measure nearly one hundred points with an effective field sensitivity of 600 pT/√Hz per point in the same time as a point-by-point scanner would measure these points sequentially. Moreover, it has a noise floor of 300 pT and provides nearly two orders of magnitude improvement in magnetic flux sensitivity (down to 10^-6 Φ0/√Hz) over previous atomic probe magnetometers capable of scanning near samples. These capabilities are benchmarked for the first time by imaging magnetic fields arising from microfabricated wire patterns, using samples that may be scanned, cryogenically cooled, and easily exchanged. We anticipate the SQCRAMscope will provide charge transport images at temperatures from room temperature to 4 K in unconventional superconductors and topologically nontrivial materials.

  1. Pulse-Echo Ultrasonic Imaging Method for Eliminating Sample Thickness Variation Effects

    NASA Technical Reports Server (NTRS)

    Roth, Don J. (Inventor)

    1997-01-01

    A pulse-echo, immersion method for ultrasonic evaluation of a material which accounts for and eliminates nonlevelness in the equipment set-up and sample thickness variation effects employs a single transducer with automatic scanning and digital imaging to obtain an image of a property of the material, such as pore fraction. The nonlevelness and thickness variation effects are accounted for by pre-scan adjustments of the time window to ensure that the echoes received at each scan point are gated in the center of the window. This information is input into the scan file so that, during the automatic scanning for the material evaluation, each received echo is centered in its time window. A cross-correlation function calculates the velocity at each scan point, which is then mapped to a color or grey scale and displayed on a video screen.
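
    The per-point velocity computation can be sketched with a generic cross-correlation delay estimate; the signal layout, sampling rate, and thickness parameters below are assumptions for illustration, not the patented system's exact processing chain.

```python
import numpy as np

def echo_velocity(front_echo, back_echo, fs, thickness):
    """Ultrasonic velocity at one scan point: cross-correlate the
    front-surface and back-surface echoes to find the round-trip
    delay, then v = 2 * thickness / delay (pulse-echo geometry).

    fs        : sampling rate in Hz
    thickness : sample thickness in metres"""
    xc = np.correlate(back_echo, front_echo, mode="full")
    lag = xc.argmax() - (len(front_echo) - 1)   # delay in samples
    delay = lag / fs
    return 2.0 * thickness / delay
```

    Because the correlation peak locates the delay regardless of where the echo sits in its gate, the measurement tolerates the residual timing offsets that the pre-scan window centering does not remove.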

  2. Studies on design of 351  nm focal plane diagnostic system prototype and focusing characteristic of SGII-upgraded facility at half achievable energy performance.

    PubMed

    Liu, Chong; Ji, Lailin; Yang, Lin; Zhao, Dongfeng; Zhang, Yanfeng; Liu, Dong; Zhu, Baoqiang; Lin, Zunqi

    2016-04-01

    In order to obtain the intensity distribution of a 351 nm focal spot and the smoothing by spectral dispersion (SSD) focal plane profile of the SGII-upgraded facility, an off-axis imaging system with three spherical mirrors, suitable for imaging a finite-distance source point near the diffraction limit, has been designed. The quality factor of the image system is 1.6 times the diffraction limit, tested with a 1053 nm point source. Because no 351 nm point source is available, a Collins diffraction imaging integral is used for λ = 351 nm, corresponding to a quality factor of 3.8 times the diffraction limit at 351 nm. The calibration results show that at least a ±10 mrad range of view field angle and ±50 mm along the axial direction around the optimum object distance achieve a near-diffraction-limited image consistent with the design value. Using this image system, the No. 2 beam of the SGII-upgraded facility has been tested. The test result of the focal spot of the final optics assembly (FOA) at 351 nm indicates that about 80% of the energy is encompassed within 14.1 times the diffraction limit, while the output energy of the No. 2 beam is 908 J at 1053 nm. According to the convolution theorem, the true size of the 351 nm focal spot of the FOA is about 12 times the diffraction limit once the influence of the quality factor is removed. Further experimental studies indicate that the RMS value along the smoothing direction is less than 15.98% in the SSD spot test experiment. Computer simulations show that the quality factor of the image system used in the experiment has almost no effect on the SSD focal spot test. The image system can remarkably distort the SSD focal spot distribution when its quality factor is 15 times worse than the diffraction limit: the distorted image shows a steep slope in the contour of the SSD focal spot along the smoothing direction, where the true spot has a relatively flat top around its center.

  3. Innovative Camera and Image Processing System to Characterize Cryospheric Changes

    NASA Astrophysics Data System (ADS)

    Schenk, A.; Csatho, B. M.; Nagarajan, S.

    2010-12-01

    The polar regions play an important role in Earth’s climatic and geodynamic systems. Digital photogrammetric mapping provides a means for monitoring the dramatic changes observed in the polar regions during the past decades. High-resolution, photogrammetrically processed digital aerial imagery provides complementary information to surface measurements obtained by laser altimetry systems. While laser points accurately sample the ice surface, stereo images allow for the mapping of features, such as crevasses, flow bands, shear margins, moraines, leads, and different types of sea ice. Tracking features in repeat images produces a dense velocity vector field that can either serve as validation for interferometrically derived surface velocities or constitute a stand-alone product. A multi-modal photogrammetric platform consists of one or more high-resolution commercial color cameras, a GPS/inertial navigation system, and an optional laser scanner. Such a system, using a Canon EOS-1DS Mark II camera, was first flown on the IceBridge missions in Fall 2009 and Spring 2010, capturing hundreds of thousands of images at a rate of about one frame per second. While digital images and videos have long been used for visual inspection, precise 3D measurements with low-cost commercial cameras require special photogrammetric treatment that only became available recently. Calibrating the multi-camera imaging system and geo-referencing the images are absolute prerequisites for all subsequent applications. Commercial cameras are inherently non-metric, that is, their sensor model is only approximately known. Since these cameras are not as rugged as photogrammetric cameras, the interior orientation also changes due to temperature and pressure changes and aircraft vibration, resulting in large errors in 3D measurements. It is therefore necessary to calibrate the cameras frequently, at least whenever the system is newly installed.
Geo-referencing the images is performed by the Applanix navigation system. Our new method enables a 3D reconstruction of the ice sheet surface with high accuracy and unprecedented detail, as demonstrated by examples from the Antarctic Peninsula acquired by the IceBridge mission. Repeat digital imaging also provides data for determining surface elevation changes and velocities, which are critical parameters for ice sheet models. Although these methods work well, there are known problems with satellite images and traditional area-based matching, especially over rapidly changing outlet glaciers. To take full advantage of high-resolution, repeat stereo imaging we have developed a new method. The processing starts with the generation of a DEM from geo-referenced stereo images of the first time epoch. The next step extracts and matches interest points in object space. Since an interest point moves its spatial position between two time epochs, such points are only radiometrically conjugate, not geometrically. In fact, the geometric displacement between two identical points, together with the time difference, yields velocities. We computed the evolution of the velocity field and surface topography on the floating tongue of the Jakobshavn glacier from historical stereo aerial photographs to illustrate the approach.
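
    The displacement-to-velocity step described above can be sketched directly; coordinate units are assumed to be metres and the epoch separation years, which are assumptions, not values from the text.

```python
import numpy as np

def surface_velocities(pts_t0, pts_t1, dt_years):
    """Glacier surface velocity from matched interest points.

    pts_t0, pts_t1 : (N, 2) object-space coordinates of the same
    physical points at the two epochs (radiometrically conjugate,
    geometrically displaced). Returns per-point velocity vectors
    and speeds (m/yr for metres and years)."""
    d = np.asarray(pts_t1, float) - np.asarray(pts_t0, float)
    speed = np.linalg.norm(d, axis=1) / dt_years
    return d / dt_years, speed
```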

  4. Breaking the acoustic diffraction barrier with localization optoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Deán-Ben, X. Luís.; Razansky, Daniel

    2018-02-01

    Diffraction causes blurring of high-resolution features in images and has traditionally been associated with the resolution limit in light microscopy and other imaging modalities. The resolution of an imaging system can generally be assessed via its point spread function, corresponding to the image acquired from a point source. However, the precision in determining the position of an isolated source can greatly exceed the diffraction limit. By combining the estimated positions of multiple sources, localization-based imaging has resulted in groundbreaking methods such as super-resolution fluorescence optical microscopy and has also enabled ultrasound imaging of microvascular structures with unprecedented spatial resolution in deep tissues. Herein, we introduce localization optoacoustic tomography (LOT) and discuss the prospects of using localization imaging principles in optoacoustic imaging. LOT was experimentally implemented by real-time imaging of flowing particles in 3D with a recently developed volumetric optoacoustic tomography system. Provided the particles were separated by a distance larger than the diffraction-limited resolution, their individual locations could be accurately determined in each frame of the acquired image sequence, and the localization image was formed by superimposing a set of points corresponding to the localized positions of the absorbers. The presented results demonstrate that LOT can significantly enhance the well-established advantages of optoacoustic imaging by breaking the acoustic diffraction barrier in deep tissues and mitigating artifacts due to limited-view tomographic acquisitions.
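
    A minimal localization-and-superposition sketch in 2D, assuming well-separated absorbers and an assumed intensity threshold; the actual LOT pipeline works on 3D frames.

```python
import numpy as np

def localize_centroid(frame, threshold):
    """Sub-resolution position of one isolated absorber: the
    intensity-weighted centroid of above-threshold pixels (valid when
    sources are farther apart than the diffraction-limited PSF width)."""
    ys, xs = np.nonzero(frame > threshold)
    w = frame[ys, xs]
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()

def accumulate(locations, shape, upsample=8):
    """Superimpose localized positions on a grid finer than the
    diffraction-limited image to form the localization image."""
    img = np.zeros((shape[0] * upsample, shape[1] * upsample))
    for x, y in locations:
        img[int(round(y * upsample)), int(round(x * upsample))] += 1
    return img
```

    The super-resolution gain comes from the centroid precision, which scales with photon (here acoustic signal) statistics rather than with the PSF width.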

  5. Contrast-based sensorless adaptive optics for retinal imaging.

    PubMed

    Zhou, Xiaolin; Bedggood, Phillip; Bui, Bang; Nguyen, Christine T O; He, Zheng; Metha, Andrew

    2015-09-01

    Conventional adaptive optics ophthalmoscopes use wavefront sensing methods to characterize ocular aberrations for real-time correction. However, there are important situations in which the wavefront sensing step is susceptible to difficulties that affect the accuracy of the correction. To circumvent these, wavefront sensorless adaptive optics (or non-wavefront sensing AO; NS-AO) imaging has recently been developed and has been applied to point-scanning based retinal imaging modalities. In this study we show, for the first time, contrast-based NS-AO ophthalmoscopy for full-frame in vivo imaging of human and animal eyes. We suggest a robust image quality metric that could be used for any imaging modality, and test its performance against other metrics using (physical) model eyes.
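
    As an illustration of the class of contrast metrics such a sensorless loop can maximise, here is the common normalized-variance metric; this is an assumed example, not the robust metric the authors propose.

```python
import numpy as np

def normalized_variance(img):
    """Image-contrast metric for sensorless AO: intensity variance
    normalised by the mean, so it rewards sharpening of structure
    without rewarding overall gain. The AO loop perturbs the
    deformable mirror and keeps shapes that increase this value."""
    img = np.asarray(img, float)
    return img.var() / img.mean()
```

    Any full-frame metric of this kind must be monotonic in aberration strength near the optimum for hill-climbing to converge, which is the property the paper tests across metrics.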

  6. High-frequency spectral ultrasound imaging (SUSI) visualizes early post-traumatic heterotopic ossification (HO) in a mouse model.

    PubMed

    Ranganathan, Kavitha; Hong, Xiaowei; Cholok, David; Habbouche, Joe; Priest, Caitlin; Breuler, Christopher; Chung, Michael; Li, John; Kaura, Arminder; Hsieh, Hsiao Hsin Sung; Butts, Jonathan; Ucer, Serra; Schwartz, Ean; Buchman, Steven R; Stegemann, Jan P; Deng, Cheri X; Levi, Benjamin

    2018-04-01

    Early treatment of heterotopic ossification (HO) is currently limited by delayed diagnosis due to limited visualization at early time points. In this study, we validate the use of spectral ultrasound imaging (SUSI) in an animal model to detect HO as early as one week after burn tenotomy. Concurrent SUSI, micro-CT, and histology at 1, 2, 4, and 9 weeks post-injury were used to follow the progression of HO after an Achilles tenotomy and 30% total body surface area burn (n = 3-5 limbs per time point). To compare the use of SUSI in different types of injury models, mice (n = 5 per group) underwent either burn/tenotomy or skin incision injury and were imaged using a 55 MHz probe on a VisualSonics VEVO 770 system at one week post-injury to evaluate the ability of SUSI to distinguish between edema and HO. Average acoustic concentration (AAC) and average scatterer diameter (ASD) were calculated for each ultrasound image frame. Micro-CT was used to calculate the total volume of HO. Histology was used to confirm bone formation. Using SUSI, HO was visualized as early as 1 week after injury. By micro-CT, HO was visualized at the earliest 4 weeks after injury. The average acoustic concentration of HO was 33% more than that of the control limb (n = 5). Spectroscopic foci of HO present at 1 week that persisted throughout all time points correlated with the HO present at 9 weeks on micro-CT imaging. SUSI visualizes HO as early as one week after injury in an animal model. SUSI represents a new imaging modality with promise for early diagnosis of HO. Copyright © 2018 Elsevier Inc. All rights reserved.

  7. Changes in Regional Ventilation During Treatment and Dosimetric Advantages of CT Ventilation Image Guided Radiation Therapy for Locally Advanced Lung Cancer.

    PubMed

    Yamamoto, Tokihiro; Kabus, Sven; Bal, Matthieu; Bzdusek, Karl; Keall, Paul J; Wright, Cari; Benedict, Stanley H; Daly, Megan E

    2018-05-04

    Lung functional image guided radiation therapy (RT) that avoids irradiating highly functional regions has potential to reduce pulmonary toxicity following RT. Tumor regression during RT is common, leading to recovery of lung function. We hypothesized that computed tomography (CT) ventilation image-guided treatment planning reduces the functional lung dose compared to standard anatomic image-guided planning in 2 different scenarios with or without plan adaptation. CT scans were acquired before RT and during RT at 2 time points (16-20 Gy and 30-34 Gy) for 14 patients with locally advanced lung cancer. Ventilation images were calculated by deformable image registration of four-dimensional CT image data sets and image analysis. We created 4 treatment plans at each time point for each patient: functional adapted, anatomic adapted, functional unadapted, and anatomic unadapted plans. Adaptation was performed at 2 time points. Deformable image registration was used for accumulating dose and calculating a composite of dose-weighted ventilation used to quantify the lung accumulated dose-function metrics. The functional plans were compared with the anatomic plans for each scenario separately to investigate the hypothesis at a significance level of 0.05. Tumor volume was significantly reduced by 20% after 16 to 20 Gy (P = .02) and by 32% after 30 to 34 Gy (P < .01) on average. In both scenarios, the lung accumulated dose-function metrics were significantly lower in the functional plans than in the anatomic plans without compromising target volume coverage and adherence to constraints to critical structures. For example, functional planning significantly reduced the functional mean lung dose by 5.0% (P < .01) compared to anatomic planning in the adapted scenario and by 3.6% (P = .03) in the unadapted scenario. 
This study demonstrated significant reductions in the accumulated dose to the functional lung with CT ventilation image-guided planning compared to anatomic image-guided planning for patients showing tumor regression and changes in regional ventilation during RT. Copyright © 2018 Elsevier Inc. All rights reserved.
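
    The accumulated dose-function metrics can be illustrated with a generic ventilation-weighted mean dose over lung voxels; this is a simplified sketch of such metrics, not necessarily the exact definition used in the study.

```python
import numpy as np

def functional_mean_lung_dose(dose, ventilation):
    """Ventilation-weighted mean lung dose over lung voxels: weighting
    the accumulated dose by the CT-derived ventilation value makes the
    metric penalise dose to highly functional regions, which is what
    functional planning aims to reduce relative to anatomic planning."""
    dose = np.asarray(dose, float)
    v = np.asarray(ventilation, float)
    return (dose * v).sum() / v.sum()
```

    With uniform ventilation this reduces to the ordinary mean lung dose; the functional plans in the study improve the weighted version without degrading target coverage.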

  8. Registration of 4D cardiac CT sequences under trajectory constraints with multichannel diffeomorphic demons.

    PubMed

    Peyrat, Jean-Marc; Delingette, Hervé; Sermesant, Maxime; Xu, Chenyang; Ayache, Nicholas

    2010-07-01

    We propose a framework for the nonlinear spatiotemporal registration of 4D time-series of images based on the Diffeomorphic Demons (DD) algorithm. In this framework, the 4D spatiotemporal registration is decoupled into a 4D temporal registration, defined as mapping physiological states, and a 4D spatial registration, defined as mapping trajectories of physical points. Our contribution focuses more specifically on the 4D spatial registration that should be consistent over time as opposed to 3D registration that solely aims at mapping homologous points at a given time-point. First, we estimate in each sequence the motion displacement field, which is a dense representation of the point trajectories we want to register. Then, we perform simultaneously 3D registrations of corresponding time-points with the constraints to map the same physical points over time called the trajectory constraints. Under these constraints, we show that the 4D spatial registration can be formulated as a multichannel registration of 3D images. To solve it, we propose a novel version of the Diffeomorphic Demons (DD) algorithm extended to vector-valued 3D images, the Multichannel Diffeomorphic Demons (MDD). For evaluation, this framework is applied to the registration of 4D cardiac computed tomography (CT) sequences and compared to other standard methods with real patient data and synthetic data simulated from a physiologically realistic electromechanical cardiac model. Results show that the trajectory constraints act as a temporal regularization consistent with motion whereas the multichannel registration acts as a spatial regularization. Finally, using these trajectory constraints with multichannel registration yields the best compromise between registration accuracy, temporal and spatial smoothness, and computation times. 
A prospective example of application is also presented with the spatiotemporal registration of 4D cardiac CT sequences of the same patient before and after radiofrequency ablation (RFA) in a case of atrial fibrillation (AF). The intersequence spatial transformations over a cardiac cycle allow analysis and quantification of the regression of left ventricular hypertrophy and its impact on cardiac function.

  9. A mitral annulus tracking approach for navigation of off-pump beating heart mitral valve repair.

    PubMed

    Li, Feng P; Rajchl, Martin; Moore, John; Peters, Terry M

    2015-01-01

To develop and validate a real-time mitral valve annulus (MVA) tracking approach based on biplane transesophageal echocardiogram (TEE) data and a magnetic tracking system (MTS), for use in minimally invasive off-pump beating heart mitral valve repair (MVR). The authors' guidance system consists of three major components: TEE, a magnetic tracking system, and an image guidance software platform. TEE provides real-time intraoperative images showing cardiac motion and intracardiac surgical tools. The magnetic tracking system tracks the TEE probe and the surgical tools. The software platform integrates the TEE image planes with virtual models of the tools and the MVA on screen. The authors' MVA tracking approach, which aims to update the MVA model in near real time, comprises three steps: image-based gating, predictive reinitialization, and registration-based MVA tracking. The image-based gating step uses a small patch centered at each MVA point in the TEE images to identify images at optimal cardiac phases for updating the position of the MVA. The predictive reinitialization step uses the position and orientation of the TEE probe provided by the magnetic tracking system to predict the positions of the MVA points in the TEE images and uses them to initialize the registration component. The registration-based MVA tracking step locates the MVA points in the images selected by the gating component by performing image-based registration. The MVA tracking approach was validated in a phantom study and a retrospective study on porcine data. In the phantom study, controlled translations were applied to the phantom and the tracked MVA was compared to its "true" position estimated from a magnetic sensor attached to the phantom. The MVA tracking accuracy was 1.29 ± 0.58 mm when the translation distance was about 1 cm, and increased to 2.85 ± 1.19 mm when the translation distance was about 3 cm.
In the study on porcine data, the authors compared the tracked MVA to a manually segmented MVA. The overall accuracy was 2.37 ± 1.67 mm for single-plane images and 2.35 ± 1.55 mm for biplane images. The interoperator variation in manual segmentation was 2.32 ± 1.24 mm for single-plane images and 1.73 ± 1.18 mm for biplane images. On a desktop computer with an Intel Xeon CPU at 3.47 GHz and an NVIDIA GeForce 690 graphics card, the time required to register four MVA points was about 60 ms. The authors developed a rapid MVA tracking algorithm for use in the guidance of off-pump beating heart transapical mitral valve repair. This approach uses 2D biplane TEE images and was tested on a dynamic heart phantom and interventional porcine image data. Results regarding the accuracy and efficiency of the authors' MVA tracking algorithm are promising and fulfill the requirements for surgical navigation.
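The image-based gating step relies on a patch-similarity measure around each MVA point. As a hedged illustration only (the paper does not specify this exact criterion), normalized cross-correlation between a stored template patch and the current patch is one such measure:

```python
import numpy as np

def patch_ncc(a, b, eps=1e-8):
    """Normalized cross-correlation between two equally sized image patches.

    Returns a value in roughly [-1, 1]; 1 means the patches match up to an
    affine intensity change. A frame whose patches around the MVA points all
    score high against their templates would be selected by the gating step.
    """
    a = a - a.mean()                                # remove mean brightness
    b = b - b.mean()
    return float((a * b).sum() /
                 (np.sqrt((a ** 2).sum() * (b ** 2).sum()) + eps))
```

Because the measure is invariant to gain and offset, it tolerates the slow intensity drift typical of ultrasound sequences.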

  10. The Magellan Telescopes

    NASA Astrophysics Data System (ADS)

    Shectman, Stephen A.; Johns, Matthew

    2003-02-01

    Commissioning of the two 6.5-meter Magellan telescopes is nearing completion at the Las Campanas Observatory in Chile. The Magellan 1 primary mirror was successfully aluminized at Las Campanas in August 2000. Science operations at Magellan 1 began in February 2001. The second Nasmyth focus on Magellan 1 went into operation in September 2001. Science operations on Magellan 2 are scheduled to begin shortly. The ability to deliver high-quality images is maintained at all times by the simultaneous operation of the primary mirror support system, the primary mirror thermal control system, and a real-time active optics system, based on a Shack-Hartmann image analyzer. Residual aberrations in the delivered image (including focus) are typically 0.10-0.15" fwhm, and real images as good as 0.25" fwhm have been obtained at optical wavelengths. The mount points reliably to 2" rms over the entire sky, using a pointing model which is stable from year to year. The tracking error under typical wind conditions is better than 0.03" rms, although some degradation is observed under high wind conditions when the dome is pointed in an unfavorable direction. Instruments used at Magellan 1 during the first year of operation include two spectrographs previously used at other telescopes (B&C, LDSS-2), a mid-infrared imager (MIRAC) and an optical imager (MAGIC, the first Magellan-specific facility instrument). Two facility spectrographs are scheduled to be installed shortly: IMACS, a wide-field spectrograph, and MIKE, a double echelle spectrograph.

  11. Accuracy assessment of building point clouds automatically generated from iphone images

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R.

    2014-06-01

Low-cost sensor generated 3D models can be useful for quick 3D urban model updating, yet the quality of such models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method using multi-view iPhone images or an iPhone video file as input. We register such an automatically generated point cloud to a terrestrial laser scanning (TLS) point cloud of the same object to discuss the accuracy, advantages, and limitations of iPhone-generated point clouds. For the chosen showcase, we classified 1.23% of the iPhone point cloud points as outliers and calculated the mean of the point-to-point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. The mean (μ) and standard deviation (σ) of the roughness histograms are (μ1 = 0.44 m, σ1 = 0.071 m) and (μ2 = 0.025 m, σ2 = 0.037 m) for the iPhone and TLS point clouds, respectively. Our experimental results indicate the possible use of the proposed automatic 3D model generation framework for 3D urban map updating, fusion and detail enhancement, and quick, real-time change detection. However, further insight should first be obtained into the circumstances needed to guarantee successful point cloud generation from smartphone images.
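The point-to-point comparison against the TLS reference can be sketched as a nearest-neighbour distance computation. The brute-force search and the outlier threshold below are our assumptions, not values from the paper; for clouds of realistic size a KD-tree would replace the pairwise distance matrix.

```python
import numpy as np

def cloud_to_cloud_stats(query_pts, ref_pts, outlier_thresh=0.5):
    """Mean nearest-neighbour distance and outlier fraction between clouds.

    query_pts: (N, 3) evaluated cloud (e.g. iPhone-derived points).
    ref_pts:   (M, 3) reference cloud (e.g. TLS scan).
    outlier_thresh is an assumed cut-off in metres.
    """
    # pairwise distances, then distance to the closest reference point
    d = np.linalg.norm(query_pts[:, None, :] - ref_pts[None, :, :], axis=2)
    nn = d.min(axis=1)
    inlier = nn <= outlier_thresh
    return nn[inlier].mean(), 1.0 - inlier.mean()
```

A cloud compared against itself yields a zero mean distance and no outliers, which is a quick sanity check for the registration pipeline.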

  12. Adaptive marker-free registration using a multiple point strategy for real-time and robust endoscope electromagnetic navigation.

    PubMed

    Luo, Xiongbiao; Wan, Ying; He, Xiangjian; Mori, Kensaku

    2015-02-01

Registration of pre-clinical images to physical space is indispensable for computer-assisted endoscopic interventions in operating rooms. Electromagnetically navigated endoscopic interventions are increasingly performed in current diagnosis and treatment. Such interventions use an electromagnetic tracker with a miniature sensor, usually attached to the endoscope's distal tip, to track endoscope movements in real time in a pre-clinical image space. Spatial alignment between the electromagnetic tracker (or sensor) and the pre-clinical images must be performed to navigate the endoscope to target regions. This paper proposes an adaptive marker-free registration method that uses a multiple point selection strategy. This method seeks to address the assumption that the endoscope is operated along the centerline of an intraluminal organ, which is easily violated during interventions. We introduce an adaptive strategy that generates multiple points from sensor measurements and endoscope tip center calibration. From these generated points, we adaptively choose the optimal point, i.e., the one closest to the centerline of the hollow organ, to perform registration. The experimental results demonstrate that our proposed adaptive strategy significantly reduced the target registration error from 5.32 to 2.59 mm in static phantom validation, and from at least 7.58 mm to 4.71 mm in dynamic phantom validation, compared to currently available methods. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
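The point-selection rule described above (keep the candidate nearest the organ centerline) can be sketched as follows. This is a simplification under our own assumptions: the centerline is taken as a set of sampled points rather than line segments, and the function name is hypothetical.

```python
import numpy as np

def closest_to_centerline(candidates, centerline):
    """Pick the candidate point nearest the sampled organ centerline.

    candidates: (N, 3) points generated from sensor measurements and
                tip-center calibration.
    centerline: (M, 3) sampled points along the hollow organ's centerline.
    Returns the candidate with the smallest distance to any centerline sample.
    """
    # pairwise distances: candidates x centerline samples
    d = np.linalg.norm(candidates[:, None, :] - centerline[None, :, :], axis=2)
    best = d.min(axis=1).argmin()   # candidate with smallest centerline distance
    return candidates[best]
```

In practice one would project onto the centerline's segments rather than its samples, but the selection logic is the same.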

  13. A novel point cloud registration using 2D image features

    NASA Astrophysics Data System (ADS)

    Lin, Chien-Chou; Tai, Yen-Chou; Lee, Jhong-Jin; Chen, Yong-Sheng

    2017-01-01

Since a 3D scanner captures only one view of a 3D object at a time, registration of multiple scans is the key issue in 3D modeling. This paper presents a novel and efficient 3D registration method based on 2D local feature matching. The proposed method transforms the point clouds into 2D bearing angle images and then uses a 2D feature-based matching method, SURF, to find matching pixel pairs between two images. The corresponding points of the 3D point clouds can be obtained from those pixel pairs. Since the corresponding pairs are sorted by the distance between matching features, only the top half of the corresponding pairs are used to find the optimal rotation matrix by least squares approximation. In this paper, the optimal rotation matrix is derived by the orthogonal Procrustes method (an SVD-based approach). The 3D model of an object can therefore be reconstructed by aligning the point clouds with the optimal transformation matrix. Experimental results show that the accuracy of the proposed method is close to that of ICP, but the computation cost is reduced significantly; the method is six times faster than the generalized-ICP algorithm. Furthermore, while ICP requires high alignment similarity between the two scenes, the proposed method is robust to larger differences in viewing angle.
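The SVD-based orthogonal Procrustes step named above is a standard construction (often called the Kabsch algorithm); given matched 3D point pairs from the SURF correspondences, it recovers the least-squares rigid transform. A minimal sketch:

```python
import numpy as np

def procrustes_rigid(P, Q):
    """Least-squares rigid transform (R, t) mapping P onto Q via SVD.

    P, Q: (N, 3) corresponding points, e.g. 3D points looked up from
    matched SURF pixel pairs in two bearing-angle images.
    Returns R (3x3 rotation) and t (3,) so that R @ p + t ~= q.
    """
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)                       # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # correct an improper (reflection) solution if one arises
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```

With noiseless correspondences the transform is recovered exactly; with noisy pairs it is the least-squares optimum, which is why keeping only the best-matching half of the pairs improves the estimate.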

  14. Multi-modality PET-CT imaging of breast cancer in an animal model using nanoparticle x-ray contrast agent and 18F-FDG

    NASA Astrophysics Data System (ADS)

    Badea, C. T.; Ghaghada, K.; Espinosa, G.; Strong, L.; Annapragada, A.

    2011-03-01

Multi-modality PET-CT imaging is playing an important role in the field of oncology. While PET imaging facilitates functional interrogation of tumor status, the use of CT imaging is primarily limited to anatomical reference. In an attempt to extract comprehensive information about tumor cells and their microenvironment, we used a nanoparticle x-ray contrast agent to image tumor vasculature and vessel 'leakiness', and 18F-FDG to investigate the metabolic status of tumor cells. In vivo PET/CT studies were performed in mice implanted with 4T1 mammary breast cancer cells. Early-phase micro-CT imaging enabled visualization of the 3D vascular architecture of the tumors, whereas delayed-phase micro-CT demonstrated highly permeable vessels, as evident from nanoparticle accumulation within the tumor. Both imaging modalities demonstrated the presence of a necrotic core, indicated by a hypo-enhanced region in the center of the tumor. At early time-points, the CT-derived fractional blood volume did not correlate with 18F-FDG uptake. At delayed time-points, the tumor enhancement in 18F-FDG micro-PET images correlated with the delayed signal enhancement due to nanoparticle extravasation seen in CT images. The proposed hybrid imaging approach could be used to better understand tumor angiogenesis and could serve as a basis for monitoring and evaluating anti-angiogenic and nano-chemotherapies.

  15. Mirrored pyramidal wells for simultaneous multiple vantage point microscopy

    PubMed Central

    Seale, K.T.; Reiserer, R.S.; Markov, D.A.; Ges, I.A.; Wright, C.; Janetopoulos, C.; Wikswo, J.P.

    2013-01-01

We report a novel method for obtaining simultaneous images from multiple vantage points of a microscopic specimen using size-matched microscopic mirrors created from anisotropically etched silicon. The resulting pyramidal wells enable bright-field and fluorescent side-view images and, when combined with z-sectioning, provide additional information for 3D reconstructions of the specimen. We have demonstrated 3D localization and tracking over time of the centrosome of a live Dictyostelium discoideum cell. The simultaneous acquisition of images from multiple perspectives also provides a five-fold increase in the theoretical collection efficiency of emitted photons, a property which may be useful for low-light imaging modalities such as bioluminescence, or for low-abundance surface-marker labelling. PMID:19017196

  16. Automatic computation of 2D cardiac measurements from B-mode echocardiography

    NASA Astrophysics Data System (ADS)

    Park, JinHyeong; Feng, Shaolei; Zhou, S. Kevin

    2012-03-01

We propose a robust and fully automatic algorithm that computes the 2D echocardiography measurements recommended by the American Society of Echocardiography. The algorithm employs knowledge-based imaging technologies that learn expert knowledge from training images and expert annotations. Based on the models constructed in the learning stage, the algorithm searches for the initial locations of the landmark points for the measurements by utilizing the structure of the left ventricle, including the mitral valve and aortic valve. It employs a pseudo anatomic M-mode image, generated by accumulating the line images in the 2D parasternal long-axis view over time, to refine the measurement landmark points. Experimental results with a large volume of data show that the algorithm runs fast and achieves robustness comparable to an expert.
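The pseudo anatomic M-mode construction described above (sampling the same line in every frame and stacking the profiles over time) can be sketched as follows. The nearest-neighbour line sampling and the function name are our simplifications, not the paper's exact procedure.

```python
import numpy as np

def pseudo_m_mode(frames, line_start, line_end, n_samples=128):
    """Build a pseudo M-mode image from a B-mode sequence.

    frames: (T, H, W) image sequence; line_start/line_end: (row, col)
    endpoints of the sampling line. Each column of the result is the line
    profile at one time point, so periodic motion along the line appears
    as bands in the output, which helps refine landmark positions.
    """
    rs = np.linspace(line_start[0], line_end[0], n_samples).round().astype(int)
    cs = np.linspace(line_start[1], line_end[1], n_samples).round().astype(int)
    return np.stack([f[rs, cs] for f in frames], axis=1)   # (n_samples, T)
```

Bilinear interpolation along the line would give smoother profiles, at the cost of a little extra computation.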

  17. Intervention Planning Using a Laser Navigation System for CT-Guided Interventions: A Phantom and Patient Study

    PubMed Central

    Lee, Clara; Bolck, Jan; Naguib, Nagy N.N.; Schulz, Boris; Eichler, Katrin; Aschenbach, Rene; Wichmann, Julian L.; Vogl, Thomas. J.; Zangos, Stephan

    2015-01-01

Objective To investigate the accuracy, efficiency and radiation dose of a novel laser navigation system (LNS) compared to those of free-handed punctures under computed tomography (CT) guidance. Materials and Methods Sixty punctures were performed on a phantom body to compare the accuracy, time required, and radiation dose of the conventional free-handed procedure to those of the LNS-guided method. An additional 20 LNS-guided interventions were performed on another phantom to confirm accuracy. Ten patients subsequently underwent LNS-guided punctures. Results The phantom 1-LNS group showed a target point accuracy of 4.0 ± 2.7 mm (freehand, 6.3 ± 3.6 mm; p = 0.008), entrance point accuracy of 0.8 ± 0.6 mm (freehand, 6.1 ± 4.7 mm), needle angulation accuracy of 1.3 ± 0.9° (freehand, 3.4 ± 3.1°; p < 0.001), intervention time of 7.03 ± 5.18 minutes (freehand, 8.38 ± 4.09 minutes; p = 0.006), and 4.2 ± 3.6 CT images (freehand, 7.9 ± 5.1; p < 0.001). These results represent significant improvements over the freehand approach across 60 punctures. The phantom 2-LNS group showed a target point accuracy of 3.6 ± 2.5 mm, entrance point accuracy of 1.4 ± 2.0 mm, needle angulation accuracy of 1.0 ± 1.2°, intervention time of 1.44 ± 0.22 minutes, and 3.4 ± 1.7 CT images. In the first experience with patients, the LNS group achieved a target point accuracy of 5.0 ± 1.2 mm, entrance point accuracy of 2.0 ± 1.5 mm, needle angulation accuracy of 1.5 ± 0.3°, intervention time of 12.08 ± 3.07 minutes, and 5.7 ± 1.6 CT images. Conclusion The laser navigation system improved the accuracy, intervention duration, and radiation dose of CT-guided interventions. PMID:26175571

  18. Image Capture and Display Based on Embedded Linux

    NASA Astrophysics Data System (ADS)

    Weigong, Zhang; Suran, Di; Yongxiang, Zhang; Liming, Li

To meet the requirement of building a highly reliable communication system, SpaceWire was selected for the integrated electronic system, and its performance needed to be tested. As part of this testing work, the goal of this paper is to transmit image data from a CMOS camera over SpaceWire and display real-time images on a graphical user interface built with Qt on an embedded Linux & ARM development platform. A point-to-point transmission mode was chosen; test runs showed that the two communication endpoints displayed consistent images in succession, suggesting that SpaceWire can transmit the data reliably.

  19. New theory on the reverberation of rooms. [considering sound wave travel time

    NASA Technical Reports Server (NTRS)

    Pujolle, J.

    1974-01-01

    The inadequacy of the various theories which have been proposed for finding the reverberation time of rooms can be explained by an attempt to examine what might occur at a listening point when image sources of determined acoustic power are added to the actual source. The number and locations of the image sources are stipulated. The intensity of sound at the listening point can be calculated by means of approximations whose conditions for validity are given. This leads to the proposal of a new expression for the reverberation time, yielding results which fall between those obtained through use of the Eyring and Millington formulae; these results are made to depend on the shape of the room by means of a new definition of the mean free path.
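The Eyring and Millington formulae that the proposed expression falls between are standard room-acoustics results; for reference, a direct implementation (V in m³, each surface given as an (area in m², absorption coefficient) pair):

```python
import math

def rt60_eyring(V, surfaces):
    """Eyring reverberation time: T = 0.161 V / (-S ln(1 - a_mean))."""
    S = sum(s for s, _ in surfaces)                        # total surface area
    a_mean = sum(s * a for s, a in surfaces) / S           # area-weighted mean absorption
    return 0.161 * V / (-S * math.log(1.0 - a_mean))

def rt60_millington(V, surfaces):
    """Millington reverberation time: T = 0.161 V / (-sum S_i ln(1 - a_i))."""
    return 0.161 * V / (-sum(s * math.log(1.0 - a) for s, a in surfaces))
```

The two formulas coincide when all surfaces share the same absorption coefficient and diverge as absorption becomes uneven, which is exactly the regime where the paper's shape-dependent mean free path matters.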

  20. 'EPIC' View of Africa and Europe from a Million Miles Away

    NASA Image and Video Library

    2015-07-29

Africa is front and center in this image of Earth taken by a NASA camera on the Deep Space Climate Observatory (DSCOVR) satellite. The image, taken July 6 from a vantage point one million miles from Earth, was one of the first taken by NASA’s Earth Polychromatic Imaging Camera (EPIC). Central Europe is toward the top of the image with the Sahara Desert to the south, showing the Nile River flowing to the Mediterranean Sea through Egypt. The photographic-quality color image was generated by combining three separate images of the entire Earth taken a few minutes apart. The camera takes a series of 10 images using different narrowband filters -- from ultraviolet to near infrared -- to produce a variety of science products. The red, green and blue channel images are used in these Earth images. The DSCOVR mission is a partnership between NASA, the National Oceanic and Atmospheric Administration (NOAA) and the U.S. Air Force, with the primary objective to maintain the nation’s real-time solar wind monitoring capabilities, which are critical to the accuracy and lead time of space weather alerts and forecasts from NOAA. DSCOVR was launched in February to its planned orbit at the first Lagrange point or L1, about one million miles from Earth toward the sun. It’s from that unique vantage point that the EPIC instrument is acquiring images of the entire sunlit face of Earth. Data from EPIC will be used to measure ozone and aerosol levels in Earth’s atmosphere, cloud height, vegetation properties and a variety of other features. Image Credit: NASA

  1. MPCM: a hardware coder for super slow motion video sequences

    NASA Astrophysics Data System (ADS)

    Alcocer, Estefanía; López-Granado, Otoniel; Gutierrez, Roberto; Malumbres, Manuel P.

    2013-12-01

In the last decade, improvements in VLSI levels and image sensor technologies have led to a frenetic rush to provide image sensors with higher resolutions and faster frame rates. As a result, video devices were designed to capture real-time video at high-resolution formats with frame rates reaching 1,000 fps and beyond. These ultrahigh-speed video cameras are widely used in scientific and industrial applications, such as car crash tests, combustion research, materials research and testing, fluid dynamics, and flow visualization, which demand real-time video capture at extremely high frame rates in high-definition formats. Therefore, data storage capability, communication bandwidth, processing time, and power consumption are critical parameters that should be carefully considered in their design. In this paper, we propose a fast FPGA implementation of a simple codec called modulo-pulse code modulation (MPCM), which is able to reduce the bandwidth requirements by up to a factor of 1.7 at the same image quality when compared with PCM coding. This allows current high-speed cameras to capture continuously through a 40-Gbit Ethernet point-to-point link.

  2. Multifocus confocal Raman microspectroscopy for fast multimode vibrational imaging of living cells.

    PubMed

    Okuno, Masanari; Hamaguchi, Hiro-o

    2010-12-15

    We have developed a multifocus confocal Raman microspectroscopic system for the fast multimode vibrational imaging of living cells. It consists of an inverted microscope equipped with a microlens array, a pinhole array, a fiber bundle, and a multichannel Raman spectrometer. Forty-eight Raman spectra from 48 foci under the microscope are simultaneously obtained by using multifocus excitation and image-compression techniques. The multifocus confocal configuration suppresses the background generated from the cover glass and the cell culturing medium so that high-contrast images are obtainable with a short accumulation time. The system enables us to obtain multimode (10 different vibrational modes) vibrational images of living cells in tens of seconds with only 1 mW laser power at one focal point. This image acquisition time is more than 10 times faster than that in conventional single-focus Raman microspectroscopy.

  3. Imaging performance of a LaBr3-based PET scanner

    PubMed Central

    Daube-Witherspoon, M E; Surti, S; Perkins, A; Kyba, C C M; Wiener, R; Werner, M E; Kulp, R; Karp, J S

    2010-01-01

    A prototype time-of-flight (TOF) PET scanner based on cerium-doped lanthanum bromide [LaBr3 (5% Ce)] has been developed. LaBr3 has high light output, excellent energy resolution, and fast timing properties that have been predicted to lead to good image quality. Intrinsic performance measurements of spatial resolution, sensitivity, and scatter fraction demonstrate good conventional PET performance; the results agree with previous simulation studies. Phantom measurements show the excellent image quality achievable with the prototype system. Phantom measurements and corresponding simulations show a faster and more uniform convergence rate, as well as more uniform quantification, for TOF reconstruction of the data, which have 375-ps intrinsic timing resolution, compared to non-TOF images. Measurements and simulations of a hot and cold sphere phantom show that the 7% energy resolution helps to mitigate residual errors in the scatter estimate because a high energy threshold (>480 keV) can be used to restrict the amount of scatter accepted without a loss of true events. Preliminary results with incorporation of a model of detector blurring in the iterative reconstruction algorithm show improved contrast recovery but also point out the importance of an accurate resolution model of the tails of LaBr3’s point spread function. The LaBr3 TOF-PET scanner has demonstrated the impact of superior timing and energy resolutions on image quality. PMID:19949259

  4. Time-of-Flight Microwave Camera.

    PubMed

    Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh

    2015-10-05

Microwaves can penetrate many obstructions that are opaque at visible wavelengths; however, microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable "stealth" regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with an angular resolution of 1.5 degrees, multispectral images across the X band (8-12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows "camera-like" behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum.
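The depth resolution of an FMCW receiver comes from the beat frequency between the transmitted chirp and its echo; the standard range relation is R = c · f_b · T / (2B), where B is the sweep bandwidth and T the sweep time. A one-line reference implementation (parameter names are ours):

```python
def fmcw_range(f_beat_hz, sweep_bw_hz, sweep_time_s, c=3.0e8):
    """Target range from the FMCW beat frequency: R = c * f_b * T / (2 B).

    f_beat_hz:    measured beat (de-chirped) frequency in Hz
    sweep_bw_hz:  chirp bandwidth B in Hz (e.g. 4 GHz for an 8-12 GHz sweep)
    sweep_time_s: chirp duration T in seconds
    """
    return c * f_beat_hz * sweep_time_s / (2.0 * sweep_bw_hz)
```

With the full 4 GHz X-band sweep, the theoretical range resolution c/(2B) is about 3.75 cm, consistent with the centimetre-scale time resolution quoted above.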

  5. Obtaining Approximate Values of Exterior Orientation Elements of Multi-Intersection Images Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Li, X.; Li, S. W.

    2012-07-01

In this paper, an efficient global optimization algorithm from the field of artificial intelligence, Particle Swarm Optimization (PSO), is introduced into close range photogrammetric data processing. PSO can be applied to obtain the approximate values of the exterior orientation elements under the condition that multi-intersection photography and a small portable plane control frame are used. PSO, put forward by the American social psychologist J. Kennedy and the electrical engineer R. C. Eberhart, is a stochastic global optimization method based on swarm intelligence, inspired by the social behavior of bird flocking and fish schooling. The strategy for obtaining the approximate values of the exterior orientation elements using PSO is as follows. Given the image coordinate observations and the space coordinates of a few control points, equations for the image coordinate residual errors can be written, where a residual error is defined as the difference between an observed image coordinate and the image coordinate computed through the collinearity equations. The sum of the absolute values of these residuals is minimized as the objective function. First, a gross search region for the exterior orientation elements is given, and the other parameters are adjusted so that the particles fly within this region. After a certain number of iterations, satisfactory approximate values of the exterior orientation elements are obtained. By doing so, procedures such as positioning and measuring space control points in close range photogrammetry can be avoided. This method can thus improve surveying efficiency greatly while decreasing the surveying cost. During the whole process, only one small portable control frame with a couple of control points is employed, and there are no strict requirements for the spatial distribution of control points.
In order to verify the effectiveness of this algorithm, two experiments were carried out. In the first experiment, images of a standard grid board were taken by multi-intersection photography using a digital camera. Three or six points located at the lower-left corner of the standard grid were used as control points, and the exterior orientation elements of each image were computed through PSO and compared with the elements computed through bundle adjustment. In the second experiment, the exterior orientation elements obtained from the first experiment were used as approximate values in bundle adjustment, and the space coordinates of the other grid points on the board were then computed. The differences between these computed space coordinates and the known coordinates of the grid points were used to assess accuracy. The point accuracies computed in the above experiments are ±0.76 mm and ±0.43 mm, respectively. These experiments prove the effectiveness of PSO for computing approximate values of the exterior orientation elements in close range photogrammetry, and the algorithm can meet higher accuracy requirements. In short, PSO can obtain better results in a faster, cheaper way compared with other surveying methods in close range photogrammetry.
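The strategy above can be sketched as a minimal canonical PSO loop. This is a generic sketch, not the authors' implementation: `objective` would map a candidate vector of exterior orientation elements to the summed absolute image-coordinate residuals from the collinearity equations, and the hyperparameter values are common textbook defaults.

```python
import numpy as np

def pso_minimize(objective, bounds, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer over a box-constrained search region.

    objective: maps a (D,) parameter vector to a scalar cost.
    bounds:    (D, 2) array of [low, high] per dimension (the 'gross area').
    Returns (best parameters, best cost).
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, (n_particles, lo.size))     # particle positions
    v = np.zeros_like(x)                                # particle velocities
    pbest = x.copy()                                    # personal bests
    pcost = np.array([objective(p) for p in x])
    g = pbest[pcost.argmin()].copy()                    # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                      # stay inside the region
        cost = np.array([objective(p) for p in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        g = pbest[pcost.argmin()].copy()
    return g, pcost.min()
```

On a smooth low-dimensional objective such as the residual sum over six orientation elements, this converges without needing derivatives, which is the appeal of PSO in this setting.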

  6. Two-dimensional T2 distribution mapping in rock core plugs with optimal k-space sampling.

    PubMed

    Xiao, Dan; Balcom, Bruce J

    2012-07-01

Spin-echo single point imaging has been employed for 1D T(2) distribution mapping, but a simple extension to 2D is challenging since the acquisition time increases n-fold, where n is the number of pixels in the second dimension. Nevertheless, 2D T(2) mapping in fluid-saturated rock core plugs is highly desirable because the bedding plane structure in rocks often results in different pore properties within the sample. The acquisition time can be improved by undersampling k-space. The cylindrical shape of rock core plugs yields well-defined intensity distributions in k-space that may be efficiently determined by the new k-space sampling patterns developed in this work. These patterns acquire 22.2% and 11.7% of the k-space data points. Companion density images may be employed, in a keyhole imaging sense, to improve image quality. T(2)-weighted images are fit to extract T(2) distributions, pixel by pixel, employing an inverse Laplace transform. Images reconstructed with compressed sensing, with similar acceleration factors, are also presented. The results show that restricted k-space sampling, in this application, provides high quality results. Copyright © 2012 Elsevier Inc. All rights reserved.
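The pixel-by-pixel fit in the paper recovers full T(2) distributions via an inverse Laplace transform; as a much simpler stand-in that conveys the per-pixel fitting idea, a mono-exponential log-linear fit across echo times looks like this (our simplification, not the paper's method):

```python
import numpy as np

def t2_map(echo_images, echo_times):
    """Mono-exponential T2 estimate per pixel from T2-weighted images.

    echo_images: (E, H, W) magnitudes acquired at echo times TE (seconds).
    Fits log S = log S0 - TE / T2 by linear least squares in every pixel
    at once, returning an (H, W) map of T2 values.
    """
    E, H, W = echo_images.shape
    y = np.log(np.clip(echo_images, 1e-12, None)).reshape(E, -1)
    # design matrix: columns for log S0 and 1/T2
    A = np.column_stack([np.ones(E), -np.asarray(echo_times, float)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # rows: log S0, 1/T2
    return (1.0 / coef[1]).reshape(H, W)
```

A single T2 per pixel is adequate for unimodal decays; rocks with multimodal pore-size distributions are exactly why the paper uses the inverse Laplace transform instead.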

  7. Using street view imagery for 3-D survey of rock slope failures

    NASA Astrophysics Data System (ADS)

    Voumard, Jérémie; Abellán, Antonio; Nicolet, Pierrick; Penna, Ivanna; Chanut, Marie-Aurélie; Derron, Marc-Henri; Jaboyedoff, Michel

    2017-12-01

We discuss here different challenges and limitations of surveying rock slope failures using 3-D reconstruction from image sets acquired from street view imagery (SVI). We show how rock slope surveying can be performed using two or more image sets from online imagery, with photographs of the same site acquired at different times. Three sites in the French Alps were selected as pilot study areas: (1) a cliff beside a road where a protective wall collapsed, with two image sets (60 and 50 images) captured within a 6-year time frame; (2) a large-scale active landslide on a slope 250 m from the road, with seven image sets (50 to 80 images per set) from five different time periods, including three image sets for one period; and (3) a collapsed cliff over a tunnel, with two image sets captured within a 4-year time frame. The analysis includes the use of different structure from motion (SfM) programs and a comparison between the extracted photogrammetric point clouds and a lidar-derived mesh used as ground truth. Results show that both landslide deformation and the volumes of fallen material were clearly identified in the different point clouds. Results are site- and software-dependent, as a function of the image set and number of images, with model accuracies ranging between 0.2 and 3.8 m in the best and worst scenarios, respectively. Although some limitations of generating 3-D models from SVI were observed, this approach allowed us to obtain preliminary 3-D models of an area without field-acquired images, allowing extraction of the pre-failure topography that would not otherwise be available.

  8. Investigation of the influence of sampling schemes on quantitative dynamic fluorescence imaging

    PubMed Central

    Dai, Yunpeng; Chen, Xueli; Yin, Jipeng; Wang, Guodong; Wang, Bo; Zhan, Yonghua; Nie, Yongzhan; Wu, Kaichun; Liang, Jimin

    2018-01-01

Dynamic optical data from a series of sampling intervals can be used for quantitative analysis to obtain meaningful kinetic parameters of a probe in vivo. The sampling scheme may affect the quantification results of dynamic fluorescence imaging. Here, we investigate the influence of different sampling schemes on the quantification of binding potential (BP) with theoretically simulated and experimentally measured data. Three groups of sampling schemes are investigated: the sampling starting point, sampling sparsity, and sampling uniformity. In investigating the influence of the sampling starting point, we further distinguish two cases according to whether the missing timing sequence between probe injection and the sampling starting time is considered. Results show that the mean value of BP exhibits an obvious growth trend with increasing delay of the sampling starting point and has a strong correlation with sampling sparsity; the growth trend is much more obvious if the missing timing sequence is discarded. The standard deviation of BP is inversely related to sampling sparsity, and independent of the sampling uniformity and the delay of the sampling starting time. Moreover, the mean value of BP obtained by uniform sampling is significantly higher than that obtained by non-uniform sampling. Our results collectively suggest that a suitable sampling scheme can help compartmental modeling of dynamic fluorescence imaging provide more accurate results with simpler operations. PMID:29675325

  9. The Nature and Extent of Body Image Concerns Among Surgically Treated Patients with Head and Neck Cancer

    PubMed Central

    Fingeret, Michelle Cororve; Yuan, Ying; Urbauer, Diana; Weston, June; Nipomnick, Summer; Weber, Randal

    2016-01-01

    Objective The purpose of this study was to describe body image concerns for surgically treated patients with head and neck cancer and to evaluate the relationship between body image concerns and quality-of-life outcomes. Methods Data were obtained from 280 patients undergoing surgical treatment for head and neck cancer. We used a cross-sectional design and obtained data from individuals at different time points relative to initiation of surgical treatment. Participants completed the Body Image Scale, the Functional Assessment of Cancer Therapy scale – Head and Neck version, and a survey designed for this study to evaluate disease-specific body image issues, satisfaction with care regarding body image issues, and interest in psychosocial intervention. Results Body image concerns were prevalent, with 75% of participants acknowledging concerns or embarrassment about one or more types of bodily changes at some point during treatment. Significant associations were found between body image concerns and all major domains of quality of life. Age, gender, cancer type, time since surgery, and body image variables were significantly associated with psychosocial outcomes. A clear subset of participants expressed dissatisfaction with the care received about body image issues and/or indicated they would have liked additional resources to help them cope with body image changes. Conclusions These data document wide-ranging body image difficulties for this population and provide important targets for the development of relevant psychosocial interventions. PMID:21706673

  10. Implementation of total focusing method for phased array ultrasonic imaging on FPGA

    NASA Astrophysics Data System (ADS)

    Guo, JianQiang; Li, Xi; Gao, Xiaorong; Wang, Zeyong; Zhao, Quanke

    2015-02-01

    This paper describes a multi-FPGA imaging system dedicated to real-time imaging using the Total Focusing Method (TFM) and Full Matrix Capture (FMC). The system was entirely described in the Verilog HDL and implemented on an Altera Stratix IV GX FPGA development board. The algorithm proceeds as follows: establish an image coordinate system and divide it into grids; calculate the complete acoustic path from each transmitting array element through the focus point to each receiving array element, and transform it into an index value; index the sound pressure values from ROM and superimpose them to obtain the pixel value of one focus point; and repeat for all focus points to obtain the final image. The imaging results show that this algorithm yields defect images with a high SNR, and the parallel processing capability of the FPGA provides high-speed performance, so the system offers a complete and well-performing imaging interface.
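
    The delay-and-sum steps above can be sketched in software; a minimal NumPy illustration of TFM over FMC data, not the paper's Verilog/FPGA implementation (the element pitch, sampling rate, and impulse-like echoes are made-up toy values):

```python
import numpy as np

def tfm_image(fmc, elem_x, grid_x, grid_z, c, fs):
    """Delay-and-sum TFM over FMC data fmc[tx, rx, t]; linear array at
    z = 0 with element x-positions elem_x, image grid in metres."""
    n_el, _, n_t = fmc.shape
    img = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            r = np.hypot(elem_x - x, z)           # element-to-pixel distances
            # total time-of-flight tx -> pixel -> rx, as a sample index
            idx = np.rint((r[:, None] + r[None, :]) / c * fs).astype(int)
            valid = idx < n_t
            tx, rx = np.nonzero(valid)
            img[iz, ix] = abs(fmc[tx, rx, idx[valid]].sum())
    return img

# Toy FMC set: one point scatterer at (0, 10 mm) under an 8-element array,
# with idealized impulse echoes.
c, fs, n_t = 5900.0, 50e6, 1024
elem_x = (np.arange(8) - 3.5) * 0.6e-3
d = np.hypot(elem_x - 0.0, 10e-3)
fmc = np.zeros((8, 8, n_t))
for tx in range(8):
    for rx in range(8):
        fmc[tx, rx, int(np.rint((d[tx] + d[rx]) / c * fs))] = 1.0

gx = np.linspace(-2e-3, 2e-3, 21)
gz = np.linspace(8e-3, 12e-3, 21)
img = tfm_image(fmc, elem_x, gx, gz, c, fs)
iz, ix = np.unravel_index(img.argmax(), img.shape)
print(gx[ix], gz[iz])  # the peak focus point lands near the scatterer
```

    The FPGA version parallelizes exactly this per-pixel loop: the time-of-flight index lookup and superposition are independent across focus points.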

  11. Tracking features in retinal images of adaptive optics confocal scanning laser ophthalmoscope using KLT-SIFT algorithm

    PubMed Central

    Li, Hao; Lu, Jing; Shi, Guohua; Zhang, Yudong

    2010-01-01

    With the use of adaptive optics (AO), high-resolution microscopic imaging of the living human retina at the single-cell level has been achieved. In an adaptive optics confocal scanning laser ophthalmoscope (AOSLO) system with a small field size (about 1 degree, 280 μm), the motion of the eye severely affects the stabilization of the real-time video images and results in significant distortions of the retina images. In this paper, the Scale-Invariant Feature Transform (SIFT) is used to extract stable point features from the retina images. The Kanade-Lucas-Tomasi (KLT) algorithm is applied to track the features. With the tracked features, the image distortion in each frame is removed by a second-order polynomial transformation, and 10 successive frames are co-added to enhance the image quality. Features of special interest in an image can also be selected manually and tracked by KLT. A point on a cone is selected manually, and the cone is tracked from frame to frame. PMID:21258443
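
    The core of KLT tracking is a least-squares translation estimate over a small window; a single-step NumPy sketch on a synthetic blob (a stand-in for a cone), omitting the pyramids, iteration, and SIFT feature selection the paper relies on:

```python
import numpy as np

def lk_translation(img1, img2, center, win=7):
    """One Lucas-Kanade step: least-squares (dx, dy) translation of a
    (2*win+1)^2 window around `center` between img1 and img2; valid for
    sub-pixel shifts (real trackers iterate and use pyramids)."""
    y, x = center
    s = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
    Iy, Ix = np.gradient(img1)        # np.gradient returns the axis-0 (y) gradient first
    It = img2 - img1
    ix, iy, it = Ix[s].ravel(), Iy[s].ravel(), It[s].ravel()
    A = np.array([[ix @ ix, ix @ iy], [ix @ iy, iy @ iy]])
    b = -np.array([ix @ it, iy @ it])
    return np.linalg.solve(A, b)      # (dx, dy)

# Synthetic frame pair: a smooth blob shifted by 0.5 px in x between frames.
yy, xx = np.mgrid[0:64, 0:64]

def blob(cx):
    return np.exp(-((xx - cx) ** 2 + (yy - 32.0) ** 2) / 50.0)

d = lk_translation(blob(32.0), blob(32.5), center=(32, 32))
print(d)  # close to [0.5, 0.0]
```

    Repeating this estimate frame to frame, seeded at SIFT keypoints, is the essence of the KLT-SIFT combination described above.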

  12. HORN-6 special-purpose clustered computing system for electroholography.

    PubMed

    Ichihashi, Yasuyuki; Nakayama, Hirotaka; Ito, Tomoyoshi; Masuda, Nobuyuki; Shimobaba, Tomoyoshi; Shiraki, Atsushi; Sugie, Takashige

    2009-08-03

    We developed the HORN-6 special-purpose computer for holography. We designed and constructed the HORN-6 board to handle an object image composed of one million points and constructed a cluster system composed of 16 HORN-6 boards. Using this HORN-6 cluster system, we succeeded in creating a computer-generated hologram of a three-dimensional image composed of 1,000,000 points at a rate of 1 frame per second, and a computer-generated hologram of an image composed of 100,000 points at a rate of 10 frames per second, which is near video rate, when the size of a computer-generated hologram is 1,920 x 1,080. The calculation speed is approximately 4,600 times faster than that of a personal computer with an Intel 3.4-GHz Pentium 4 CPU.
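
    The computation HORN-6 accelerates, a hologram synthesized from object points, can be illustrated in software; a minimal NumPy sketch that sums spherical waves from each point (the pixel pitch, wavelength, and point coordinates are arbitrary toy values), orders of magnitude slower than the dedicated hardware:

```python
import numpy as np

def point_cloud_hologram(points, amps, res=(128, 128), pitch=8e-6, wl=633e-9):
    """Amplitude hologram on the plane z = 0 from object points (N, 3)
    in metres with z > 0: sum of spherical waves from each point."""
    h, w = res
    X, Y = np.meshgrid((np.arange(w) - w / 2) * pitch,
                       (np.arange(h) - h / 2) * pitch)
    k = 2 * np.pi / wl
    field = np.zeros(res, dtype=complex)
    for (px, py, pz), a in zip(points, amps):
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += a * np.exp(1j * k * r) / r   # spherical wave from the point
    return field.real                         # interference fringe pattern

# Two object points 10-12 cm behind a 128 x 128 hologram.
pts = np.array([[0.0, 0.0, 0.10], [1e-4, 0.0, 0.12]])
holo = point_cloud_hologram(pts, amps=[1.0, 1.0])
print(holo.shape)  # (128, 128)
```

    The cost grows as (object points) × (hologram pixels), which is why a million points at 1,920 x 1,080 motivates special-purpose clustered hardware.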

  13. Registration of Vehicle-Borne Point Clouds and Panoramic Images Based on Sensor Constellations.

    PubMed

    Yao, Lianbi; Wu, Hangbin; Li, Yayun; Meng, Bin; Qian, Jinfei; Liu, Chun; Fan, Hongchao

    2017-04-11

    A mobile mapping system (MMS) is usually utilized to collect environmental data on and around urban roads. Laser scanners and panoramic cameras are the main sensors of an MMS. This paper presents a new method for the registration of point clouds and panoramic images based on the sensor constellation. After the sensor constellation was analyzed, a feature point, the intersection of a horizontal plane with the line connecting the global positioning system (GPS) antenna and the panoramic camera, was utilized to separate the point clouds into blocks. The blocks for the central and sideward laser scanners were extracted with the segmentation feature points. Then, the point clouds located in the blocks were separated from the original point clouds. Each point in the blocks was used to find the accurate corresponding pixel in the relevant panoramic images via a collinearity function and the position and orientation relationships amongst the different sensors. A search strategy is proposed for the correspondence between laser scanners and lenses of the panoramic cameras to reduce calculation complexity and improve efficiency. Four cases of different urban road types were selected to verify the efficiency and accuracy of the proposed method. Results indicate that most of the point clouds (on average 99.7%) were successfully registered with the panoramic images with great efficiency. Geometric evaluation results indicate that horizontal accuracy was approximately 0.10-0.20 m, and vertical accuracy was approximately 0.01-0.02 m for all cases. Finally, the main factors that affect registration accuracy, including time synchronization amongst the different sensors, system positioning, and vehicle speed, are discussed.
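
    The point-to-pixel correspondence step can be illustrated with a minimal sketch, assuming an ideal equirectangular panorama and a point already expressed in the camera frame (one common sign convention); the paper's method additionally models the sensor constellation and GPS-antenna geometry, which is omitted here:

```python
import math

def panorama_pixel(x, y, z, width, height):
    """Project a 3D point (camera frame: x forward, y left, z up) onto an
    equirectangular panorama of size width x height."""
    lon = math.atan2(y, x)                            # azimuth in (-pi, pi]
    lat = math.asin(z / math.sqrt(x*x + y*y + z*z))   # elevation
    u = (0.5 - lon / (2 * math.pi)) * width
    v = (0.5 - lat / math.pi) * height
    return u, v

# A point straight ahead on the horizon maps to the image centre.
u, v = panorama_pixel(10.0, 0.0, 0.0, width=8192, height=4096)
print(u, v)  # 4096.0 2048.0
```

    In the real system, each laser point is first transformed into the camera frame using the calibrated lever arms and boresight angles before a projection of this kind is applied.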

  14. The utility of polarized heliospheric imaging for space weather monitoring.

    PubMed

    DeForest, C E; Howard, T A; Webb, D F; Davies, J A

    2016-01-01

    A polarizing heliospheric imager is a critical next-generation tool for space weather monitoring and prediction. Heliospheric imagers can track coronal mass ejections (CMEs) as they cross the solar system, using sunlight scattered by electrons in the CME. This tracking has been demonstrated to improve the forecasting of impact probability and arrival time for Earth-directed CMEs. Polarized imaging allows locating CMEs in three dimensions from a single vantage point. Recent advances in heliospheric imaging have demonstrated that a polarized imager is feasible with current component technology. Developing this technology to a high technology readiness level is critical for space-weather-relevant imaging from either a near-Earth or deep-space mission. In this primarily technical review, we develop preliminary hardware requirements for a space weather polarizing heliospheric imager system and outline possible ways to flight-qualify and ultimately deploy the technology operationally on upcoming specific missions. We consider deployment as an instrument on the follow-on to NOAA's Deep Space Climate Observatory near the Sun-Earth L1 Lagrange point, as a stand-alone constellation of smallsats in low Earth orbit, or as an instrument located at the Sun-Earth L5 Lagrange point. The critical first step is the demonstration of the technology, in either a science or prototype operational mission context.

  15. NDE of hybrid armor structures using acoustography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandhu, Jaswinder S.; Pergantis, Charles G.

    2011-06-23

    The US Army is investigating the use of composite materials to deliver lightweight and more effective armor protection systems to soldiers and other army assets. However, widespread use of such hybrid armor will require a reliable but fast NDE methodology to ensure the integrity of these components during manufacturing and while in service. Traditional ultrasonic inspection of such hybrid armor structures may prove to be very effective, but point-by-point ultrasonic scanning is inherently time-consuming, and manufacturing slowdowns could develop in high-volume production of such armor systems. In this paper, we report on the application of acoustography for the NDE of hybrid armor structures. Acoustography differs from conventional ultrasonic testing in that test objects are inspected in full field, analogously to real-time x-ray imaging. The approach uses a novel, super-high-resolution, large-area acousto-optic (AO) sensor, which allows image formation through simple ultrasound shadow casting, analogous to x-ray image formation. This NDE approach offers a significant inspection-speed advantage over conventional point-by-point ultrasonic scanning procedures and is well suited for high-volume production. We report initial results on a number of hybrid armor plate specimens employing composite materials that are being investigated by the US Army. The acoustography NDE results are also verified using other complementary NDE methods.

  16. Detecting multiple moving objects in crowded environments with coherent motion regions

    DOEpatents

    Cheriyadat, Anil M.; Radke, Richard J.

    2013-06-11

    Coherent motion regions extend in time as well as space, enforcing consistency in detected objects over long time periods and making the algorithm robust to noisy or short point tracks. This robustness results from enforcing the constraint that selected coherent motion regions contain disjoint sets of tracks defined in a three-dimensional space that includes a time dimension. The algorithm operates directly on raw, unconditioned low-level feature point tracks and minimizes a global measure of the coherent motion regions. At least one discrete moving object is identified in a time series of video images based on trajectory similarity factors, where such a factor is a measure of the maximum distance between a pair of feature point tracks.
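
    A trajectory similarity factor of the kind described (maximum distance between a pair of feature point tracks over their common frames) might be computed as follows; a hedged illustration, not the patent's exact definition:

```python
import numpy as np

def track_similarity(track_a, track_b):
    """Maximum spatial distance between two feature-point tracks over
    their common frames (lower = more similar); tracks are dicts
    mapping frame index -> (x, y)."""
    common = sorted(set(track_a) & set(track_b))
    if not common:
        return float("inf")
    d = [np.hypot(track_a[f][0] - track_b[f][0],
                  track_a[f][1] - track_b[f][1]) for f in common]
    return max(d)

# Two points moving together (same object) vs. one drifting apart.
a = {f: (f, 0.0) for f in range(10)}
b = {f: (f + 2.0, 0.0) for f in range(10)}   # rigid 2-px offset
c = {f: (2.0 * f, 0.0) for f in range(10)}   # diverging track
print(track_similarity(a, b), track_similarity(a, c))  # 2.0 9.0
```

    Tracks whose pairwise maxima stay small over long time spans are candidates for grouping into the same coherent motion region.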

  17. Pulse-echo ultrasonic imaging method for eliminating sample thickness variation effects

    NASA Technical Reports Server (NTRS)

    Roth, Don J. (Inventor)

    1995-01-01

    A pulse-echo, immersion method for ultrasonic evaluation of a material is discussed. It accounts for and eliminates nonlevelness in the equipment set-up and sample-thickness variation effects, and employs a single transducer, automatic scanning, and digital imaging to obtain an image of a property of the material, such as pore fraction. The nonlevelness and thickness-variation effects are accounted for by pre-scan adjustments of the time window to ensure that the echoes received at each scan point are gated in the center of the window. This information is input into the scan file so that, during the automatic scanning for the material evaluation, each received echo is centered in its time window. A cross-correlation function calculates the velocity at each scan point, which is then mapped to a color or grey scale and displayed on a video screen.
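
    The cross-correlation velocity measurement can be sketched as follows: estimate the sample lag between two gated echoes and convert it to a velocity over the round-trip path. A simplified single-point illustration with a synthetic pulse, not the patented scanning system (sampling rate, pulse shape, and thickness are made-up values):

```python
import numpy as np

def echo_velocity(echo1, echo2, fs, thickness):
    """Estimate ultrasonic velocity from the cross-correlation lag between
    two gated echoes; the round-trip path is 2 * thickness."""
    xcorr = np.correlate(echo2, echo1, mode="full")
    lag = xcorr.argmax() - (len(echo1) - 1)   # samples by which echo2 trails echo1
    return 2.0 * thickness * fs / lag

# Toy A-scan pair: identical windowed pulse delayed by 100 samples.
fs = 100e6                                    # 100 MHz digitizer
t = np.arange(64)
pulse = np.sin(2 * np.pi * 5e6 * t / fs) * np.hanning(64)
echo1 = np.zeros(1024); echo1[100:164] = pulse
echo2 = np.zeros(1024); echo2[200:264] = pulse
v = echo_velocity(echo1, echo2, fs, thickness=0.003)   # 3 mm sample
print(v)  # 2 * 0.003 * 100e6 / 100 = 6000.0 m/s
```

    Repeating this at every scan point, with the gating pre-adjusted per point as described above, yields the velocity image that is mapped to a color or grey scale.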

  18. Effect of Local TOF Kernel Miscalibrations on Contrast-Noise in TOF PET

    NASA Astrophysics Data System (ADS)

    Clementel, Enrico; Mollet, Pieter; Vandenberghe, Stefaan

    2013-06-01

    TOF PET imaging requires specific calibrations: accurate characterization of the system timing resolution and timing offset is required to achieve the full potential image quality. Current system models used in image reconstruction assume a spatially uniform timing resolution kernel. Furthermore, although the timing offset errors are often pre-corrected, this correction becomes less accurate over time because, especially in older scanners, the timing offsets are often calibrated only at installation, as the procedure is time-consuming. In this study, we investigate and compare the effects of local mismatch of timing resolution, when a uniform kernel is applied to systems with local variations in timing resolution, and the effects of uncorrected timing offset errors on image quality. A ring-like phantom was acquired on a Philips Gemini TF scanner, and timing histograms were obtained from coincidence events to measure the timing resolution along all sets of LORs crossing the scanner center. In addition, multiple acquisitions of a cylindrical phantom, 20 cm in diameter with spherical inserts, and a point source were simulated. A location-dependent timing resolution was simulated, with a median value of 500 ps and increasingly large local variations; timing offset errors ranging from 0 to 350 ps were also simulated. Images were reconstructed with TOF MLEM with a uniform kernel corresponding to the effective timing resolution of the data, as well as with purposefully mismatched kernels. CRC vs. noise curves were measured over the simulated cylinder realizations, while the simulated point source was processed to generate timing histograms of the data. Results show that the timing resolution is not uniform over the FOV of the considered scanner. The simulated phantom data indicate that CRC is moderately reduced in data sets with locally varying timing resolution reconstructed with a uniform kernel, while still performing better than non-TOF reconstruction. On the other hand, uncorrected offset errors in our setup have a larger potential for decreasing image quality and can lead to a reduction of CRC of up to 15% and an increase in the measured timing resolution kernel of up to 40%. However, in realistic conditions in frequently calibrated systems, using a larger effective timing kernel in image reconstruction can compensate for uncorrected offset errors.

  19. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    PubMed

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-03-25

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue "Lamassu". Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task, where a dense ordered imaging network of 45 high-resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network, and the aim of our study was to apply our method to reduce the number of images for the 3D modeling while preserving a pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured using a total station for external validation and scaling purposes. Two network filtering methods are implemented, and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and a final accuracy of 1 mm.

  1. Contrast-based sensorless adaptive optics for retinal imaging

    PubMed Central

    Zhou, Xiaolin; Bedggood, Phillip; Bui, Bang; Nguyen, Christine T.O.; He, Zheng; Metha, Andrew

    2015-01-01

    Conventional adaptive optics ophthalmoscopes use wavefront sensing methods to characterize ocular aberrations for real-time correction. However, there are important situations in which the wavefront sensing step is susceptible to difficulties that affect the accuracy of the correction. To circumvent these, wavefront sensorless adaptive optics (or non-wavefront sensing AO; NS-AO) imaging has recently been developed and has been applied to point-scanning based retinal imaging modalities. In this study we show, for the first time, contrast-based NS-AO ophthalmoscopy for full-frame in vivo imaging of human and animal eyes. We suggest a robust image quality metric that could be used for any imaging modality, and test its performance against other metrics using (physical) model eyes. PMID:26417525

  2. Time-frequency analysis of backscattered signals from diffuse radar targets

    NASA Astrophysics Data System (ADS)

    Kenny, O. P.; Boashash, B.

    1993-06-01

    The need for analysis of time-varying signals has led to the formulation of a class of joint time-frequency distributions (TFDs). One of these TFDs, the Wigner-Ville distribution (WVD), has useful properties which can be applied to radar imaging. The authors discuss the time-frequency representation of the backscattered signal from a diffuse radar target. It is then shown that, for point scatterers which are statistically dependent or for which the reflectivity coefficient has a nonzero mean value, tomographic reconstruction applied to time-frequency images is effective for estimating the scattering function of the target.
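
    A discrete Wigner-Ville distribution can be computed directly from its lag-kernel definition; a minimal NumPy sketch on a synthetic tone (note the well-known factor-of-two frequency scaling of the WVD lag kernel), not the authors' radar processing chain:

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution from its lag-kernel definition:
    W[n, k] = FFT over lag m of x[n+m] * conj(x[n-m]), per time index n."""
    n_pts = len(x)
    W = np.zeros((n_pts, n_pts))
    for n in range(n_pts):
        max_m = min(n, n_pts - 1 - n)
        kernel = np.zeros(n_pts, dtype=complex)
        for m in range(-max_m, max_m + 1):
            kernel[m % n_pts] = x[n + m] * np.conj(x[n - m])
        W[n] = np.fft.fft(kernel).real   # kernel is Hermitian -> real spectrum
    return W

# Analytic tone at normalized frequency 0.125: the lag kernel doubles the
# frequency, so the energy concentrates at bin 2 * 0.125 * N = 16.
n = 64
x = np.exp(2j * np.pi * 0.125 * np.arange(n))
W = wigner_ville(x)
print(W[n // 2].argmax())  # 16
```

    For a tone the WVD is sharply concentrated at one frequency for every time instant, which is the property that makes it attractive for imaging point scatterers.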

  3. Visualization of time-varying MRI data for MS lesion analysis

    NASA Astrophysics Data System (ADS)

    Tory, Melanie K.; Moeller, Torsten; Atkins, M. Stella

    2001-05-01

    Conventional methods to diagnose and follow treatment of multiple sclerosis require radiologists and technicians to compare current images with older images of a particular patient, on a slice-by-slice basis. Although there has been progress in creating 3D displays of medical images, little attempt has been made to design visual tools that emphasize change over time. We implemented several ideas that attempt to address this deficiency. In one approach, isosurfaces of segmented lesions at each time step were displayed either on the same image (each time step in a different color) or consecutively in an animation. In a second approach, voxel-wise differences between time steps were calculated and displayed statically using ray casting. Animation was used to show cumulative changes over time. Finally, in a method borrowed from computational fluid dynamics (CFD), glyphs (small arrow-like objects) were rendered with a surface model of the lesions to indicate changes at localized points.

  4. Hydrodynamic interaction of two particles in confined linear shear flow at finite Reynolds number

    NASA Astrophysics Data System (ADS)

    Yan, Yiguang; Morris, Jeffrey F.; Koplik, Joel

    2007-11-01

    We discuss the hydrodynamic interactions of two solid bodies placed in linear shear flow between parallel plane walls in a periodic geometry at finite Reynolds number. The computations are based on the lattice Boltzmann method for particulate flow, validated here by comparison to previous results for a single particle. Most of our results pertain to cylinders in two dimensions but some examples are given for spheres in three dimensions. Either one mobile and one fixed particle or else two mobile particles are studied. The motion of a mobile particle is qualitatively similar in both cases at early times, exhibiting either trajectory reversal or bypass, depending upon the initial vector separation of the pair. At longer times, if a mobile particle does not approach a periodic image of the second, its trajectory tends to a stable limit point on the symmetry axis. The effect of interactions with periodic images is to produce nonconstant asymptotic long-time trajectories. For one free particle interacting with a fixed second particle within the unit cell, the free particle may either move to a fixed point or take up a limit cycle. Pairs of mobile particles starting from symmetric initial conditions are shown to asymptotically reach either fixed points, or mirror image limit cycles within the unit cell, or to bypass one another (and periodic images) indefinitely on a streamwise periodic trajectory. The limit cycle possibility requires finite Reynolds number and arises as a consequence of streamwise periodicity when the system length is sufficiently short.

  5. The influence of biological and technical factors on quantitative analysis of amyloid PET: Points to consider and recommendations for controlling variability in longitudinal data.

    PubMed

    Schmidt, Mark E; Chiao, Ping; Klein, Gregory; Matthews, Dawn; Thurfjell, Lennart; Cole, Patricia E; Margolin, Richard; Landau, Susan; Foster, Norman L; Mason, N Scott; De Santi, Susan; Suhy, Joyce; Koeppe, Robert A; Jagust, William

    2015-09-01

    In vivo imaging of amyloid burden with positron emission tomography (PET) provides a means for studying the pathophysiology of Alzheimer's and related diseases. Measurement of subtle changes in amyloid burden requires quantitative analysis of image data. Reliable quantitative analysis of amyloid PET scans acquired at multiple sites and over time requires rigorous standardization of acquisition protocols, subject management, tracer administration, image quality control, and image processing and analysis methods. We review critical points in the acquisition and analysis of amyloid PET, identify ways in which technical factors can contribute to measurement variability, and suggest methods for mitigating these sources of noise. Improved quantitative accuracy could reduce the sample size necessary to detect intervention effects when amyloid PET is used as a treatment end point and allow more reliable interpretation of change in amyloid burden and its relationship to clinical course. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  6. Automatic Georeferencing of Astronaut Auroral Photography: Providing a New Dataset for Space Physics

    NASA Astrophysics Data System (ADS)

    Riechert, Maik; Walsh, Andrew P.; Taylor, Matt

    2014-05-01

    Astronauts aboard the International Space Station (ISS) have taken tens of thousands of photographs showing the aurora in high temporal and spatial resolution. The use of these images in research, though, is limited, as they often lack accurate pointing and scale information. In this work we develop techniques and software libraries to automatically georeference such images, and provide a time- and location-searchable database and website of those images. Aurora photographs very often include a visible starfield due to the necessarily long camera exposure times. We extend the proof-of-concept of Walsh et al. (2012), who used the starfield recognition software Astrometry.net to reconstruct the pointing and scale information. Previously a manual pre-processing step, the starfield can now in most cases be separated from Earth and spacecraft structures successfully using image recognition. Once the pointing and scale of an image are known, latitudes and longitudes can be calculated for each pixel corner for an assumed auroral emission height. As part of this work, an open-source Python library was developed which automates the georeferencing process and aids in visualization tasks. The library facilitates the resampling of the resulting data from an irregular to a regular coordinate grid at a given pixel-per-degree density, supports the export of data in CDF and NetCDF formats, and generates polygons for drawing graphs and stereographic maps. In addition, the THEMIS all-sky imager web archive has been included as a first transparently accessible imaging source, which is useful when drawing maps of ISS passes over North America. The database and website are in development and will use the Python library as their base. Through this work, georeferenced auroral ISS photography is made available as a continuously extended and easily accessible dataset. This provides potential not only for new studies of the aurora australis, as there are few all-sky imagers in the southern hemisphere, but also for multi-point observations of the aurora borealis in combination with THEMIS and other imager arrays.

  7. A design of real time image capturing and processing system using Texas Instrument's processor

    NASA Astrophysics Data System (ADS)

    Wee, Toon-Joo; Chaisorn, Lekha; Rahardja, Susanto; Gan, Woon-Seng

    2007-09-01

    In this work, we developed and implemented an image capturing and processing system equipped with the capability of capturing images from an input video in real time. The input video can come from a PC, a video camcorder, or a DVD player. We developed two modes of operation for the system. In the first mode, an input image from the PC is processed on the processing board (a development platform with a digital signal processor) and displayed on the PC. In the second mode, the current captured image from the video camcorder (or DVD player) is processed on the board but displayed on an LCD monitor. The major difference between our system and existing conventional systems is that the image-processing functions are performed on the board instead of on the PC (so that the functions can be used for further developments on the board). The user can control the operations of the board through the Graphical User Interface (GUI) provided on the PC. In order to have a smooth image data transfer between the PC and the board, we employed Real Time Data Transfer (RTDX TM) technology to create a link between them. For image processing, we developed three main groups of functions: (1) Point Processing; (2) Filtering; and (3) 'Others'. Point Processing includes rotation, negation and mirroring. The Filtering category provides median, adaptive, smooth and sharpen filtering in the time domain. The 'Others' category provides auto-contrast adjustment, edge detection, segmentation and sepia color; these functions either add an effect to the image or enhance it. We developed and implemented our system using the C/C# programming languages on a TMS320DM642 (DM642) board from Texas Instruments (TI). The system was showcased at the College of Engineering (CoE) exhibition 2006 at Nanyang Technological University (NTU), where more than 40 users tried it. It was demonstrated that our system is adequate for real-time image capturing, and it can be applied to applications such as medical imaging and video surveillance.
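
    The Point Processing group (rotation, negation, mirroring) maps directly onto simple array operations; a NumPy sketch of the same three operations, not the authors' DSP implementation:

```python
import numpy as np

def negate(img):
    """Photographic negative of an 8-bit image."""
    return 255 - img

def mirror(img):
    """Horizontal mirror (flip left-right)."""
    return img[:, ::-1]

def rotate90(img):
    """Rotate 90 degrees counter-clockwise (transpose, then flip rows)."""
    return img.T[::-1]

img = np.arange(6, dtype=np.uint8).reshape(2, 3)
print(negate(img)[0, 0], mirror(img)[0, 0], rotate90(img).shape)  # 255 2 (3, 2)
```

    On the DM642 the same pixel-wise operations are expressed as loops over the frame buffer; the array formulation above is their direct high-level equivalent.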

  8. 3D Building Reconstruction by Multiview Images and the Integrated Application with Augmented Reality

    NASA Astrophysics Data System (ADS)

    Hwang, Jin-Tsong; Chu, Ting-Chen

    2016-10-01

    This study presents an approach wherein photographs with a high degree of overlap are captured using a digital camera and used to generate three-dimensional (3D) point clouds via feature point extraction and matching. To reconstruct a building model, an unmanned aerial vehicle (UAV) is used to capture photographs from vertical shooting angles above the building. Multiview images are taken from the ground to eliminate the shielding effect on UAV images caused by trees. Point clouds from the UAV and multiview images are generated via Pix4Dmapper. By merging the two sets of point clouds via tie points, the complete building model is reconstructed. The 3D models are reconstructed using AutoCAD 2016 to generate vectors from the point clouds; SketchUp Make 2016 is used to rebuild a complete building model with textures. To apply 3D building models in urban planning and design, a modern approach is to rebuild the digital models; however, updating the landscape design and building distribution in real time is difficult as the frequency of building replacement increases. One potential solution to these problems is augmented reality (AR). Using Unity3D and Vuforia to design and implement a smartphone application service, a markerless AR view of the building model can be built. This study is aimed at providing technical and design skills related to urban planning, urban design, and building information retrieval using AR.

  9. Magnetic resonance imaging retinal oximetry: a quantitative physiological biomarker for early diabetic retinopathy?

    PubMed

    Yang, Y; Zhu, X R; Xu, Q G; Metcalfe, H; Wang, Z C; Yang, J K

    2012-04-01

    To assess the efficacy of using magnetic resonance imaging measurements of the retinal oxygenation response to detect early diabetic retinopathy in patients with Type 2 diabetes. Magnetic resonance imaging was conducted during 100% oxygen inhalation in patients with Type 2 diabetes with either no diabetic retinopathy (n = 12) or mild to moderate background diabetic retinopathy (n = 12), as well as in healthy control subjects (n = 12), while changes in the retinal oxygenation response were measured. In the healthy control group, levels of retinal oxygenation response increased slowly during 100% oxygen inhalation. In contrast, they increased more quickly and attained homeostasis much earlier in the groups with background diabetic retinopathy (at the 20-min time point) and with no diabetic retinopathy (at the 25-min time point) than in the healthy control group (at the 42-min time point). Furthermore, levels of retinal oxygenation response in the group with background diabetic retinopathy increased more than those of the group with no diabetic retinopathy, which in turn increased more than those of the healthy control group. There were statistically significant differences between the group with background diabetic retinopathy and the healthy control group at the 6-, 8-, 10-, 15-, 20- and 25-min time points (P < 0.05). Using the normal range derived from the healthy control group, with fundus photography results as the 'gold standard', the sensitivity, specificity, positive predictive value, negative predictive value and receiver operating characteristic area for detecting early diabetic retinopathy were 83.33%, 58.33%, 50%, 87.5% and 0.774, respectively. The results indicate that magnetic resonance imaging is a potential screening method and probably a quantitative physiological biomarker for finding early diabetic retinopathy in patients with Type 2 diabetes. © 2011 The Authors. Diabetic Medicine © 2011 Diabetes UK.

  10. Large Scale Textured Mesh Reconstruction from Mobile Mapping Images and LIDAR Scans

    NASA Astrophysics Data System (ADS)

    Boussaha, M.; Vallet, B.; Rives, P.

    2018-05-01

    The representation of 3D geometric and photometric information of the real world is one of the most challenging and extensively studied research topics in the photogrammetry and robotics communities. In this paper, we present a fully automatic framework for 3D high quality large scale urban texture mapping using oriented images and LiDAR scans acquired by a terrestrial Mobile Mapping System (MMS). First, the acquired points and images are sliced into temporal chunks ensuring a reasonable size and time consistency between geometry (points) and photometry (images). Then, a simple, fast and scalable 3D surface reconstruction relying on the sensor space topology is performed on each chunk after an isotropic sampling of the point cloud obtained from the raw LiDAR scans. Finally, the algorithm proposed in (Waechter et al., 2014) is adapted to texture the reconstructed surface with the images acquired simultaneously, ensuring a high quality texture with no seams and global color adjustment. We evaluate our full pipeline on a dataset of 17 km of acquisition in Rouen, France resulting in nearly 2 billion points and 40000 full HD images. We are able to reconstruct and texture the whole acquisition in less than 30 computing hours, the entire process being highly parallel as each chunk can be processed independently in a separate thread or computer.

  11. BRMS1 Suppresses Breast Cancer Metastasis to Bone via Its Regulation of microRNA-125b and Downstream Attenuation of TNF-Alpha and HER2 Signaling Pathways

    DTIC Science & Technology

    2014-04-01

    cytoskeleton genes and genes regulating focal adhesion assembly, such as α5 integrin, Tenascin C, Talin-1, Profilin 1, and Actinin [35]. Intravital ...and allowed to adhere for time indicated, at which point cells were fixed and stained with crystal violet. Representative images for times 5, 10, 15...matrix milieu and imaged by time-lapse microscopy for 1 h or fixed and stained with crystal violet at times indicated. As shown in Figure 1C and

  12. Details on Silica-Rich Elk Target near Marias Pass

    NASA Image and Video Library

    2015-12-17

    This image from the Chemistry and Camera (ChemCam) instrument on NASA's Curiosity Mars rover shows detailed texture of a rock target called "Elk" on Mars' Mount Sharp, revealing laminations that are present in much of the Murray Formation geological unit of lower Mount Sharp. Researchers also used ChemCam's laser and spectrometers to assess Elk's composition and found it to be rich in silica. The image covers a patch of rock surface about 2.8 inches (7 centimeters) across. It was taken on May 22, 2015, during the mission's 992nd Martian day, or sol. ChemCam's Remote Micro-Imager camera, on top of Curiosity's mast, captured the image from a distance of about 9 feet (2.75 meters). Annotations in red identify five points on Elk that were hit with ChemCam's laser. Each of the highlighted points is a location where ChemCam fired its laser 30 times to ablate a tiny amount of target material. By analyzing the light emitted from this laser-ablation, researchers can deduce the composition of that point. For some purposes, composition is presented as a combination of the information from multiple points on the same rock. However, using the points individually can track fine-scale variations in targets. http://photojournal.jpl.nasa.gov/catalog/PIA20267

  13. Automatic registration of panoramic image sequence and mobile laser scanning data using semantic features

    NASA Astrophysics Data System (ADS)

    Li, Jianping; Yang, Bisheng; Chen, Chi; Huang, Ronggang; Dong, Zhen; Xiao, Wen

    2018-02-01

    Inaccurate exterior orientation parameters (EoPs) between sensors obtained by pre-calibration lead to failure of registration between a panoramic image sequence and mobile laser scanning data. To address this challenge, this paper proposes an automatic registration method based on semantic features extracted from panoramic images and point clouds. First, accurate rotation parameters between the panoramic camera and the laser scanner are estimated using GPS- and IMU-aided structure from motion (SfM); the initial EoPs of the panoramic images are obtained at the same time. Second, vehicles in the panoramic images are extracted by Faster R-CNN as candidate primitives to be matched with potential corresponding primitives in the point clouds according to the initial EoPs. Finally, the translation between the panoramic camera and the laser scanner is refined by maximizing the overlapping area of corresponding primitive pairs using Particle Swarm Optimization (PSO), resulting in a finer registration between the panoramic image sequence and the point clouds. The proposed method was assessed on two challenging urban scenes; the final registration errors in both scenes were less than three pixels, demonstrating a high level of automation, robustness, and accuracy.

  14. Robust Curb Detection with Fusion of 3D-Lidar and Camera Data

    PubMed Central

    Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen

    2014-01-01

    Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method through exploiting 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, by using the multi-scale normal patterns based on the curb's geometric property, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov Chain to model the consistency of curb points which utilizes the continuous property of the curb, and thus the optimal curb path which links the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and give the confidence scores on the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes. PMID:24854364
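The consistency step described above (a Markov chain over per-row curb candidates, solved by dynamic programming) is structurally a Viterbi-style shortest-path problem. A minimal sketch with hypothetical unary scores and a simple column-jump smoothness penalty (the paper's actual potentials are not given in the abstract):

```python
import numpy as np

def best_curb_path(scores, smooth=1.0):
    """Pick one candidate column per image row, maximizing the summed curb
    scores minus a penalty on column jumps between consecutive rows."""
    n_rows, n_cols = scores.shape
    cols = np.arange(n_cols)
    jump = smooth * np.abs(cols[:, None] - cols[None, :])  # transition cost
    dp = scores[0].astype(float).copy()
    back = np.zeros((n_rows, n_cols), dtype=int)
    for r in range(1, n_rows):
        cand = dp[None, :] - jump      # cand[i, j]: arrive at col i from col j
        back[r] = cand.argmax(1)
        dp = scores[r] + cand.max(1)
    path = [int(dp.argmax())]          # backtrack from the best final column
    for r in range(n_rows - 1, 0, -1):
        path.append(int(back[r, path[-1]]))
    return path[::-1]

# three rows of hypothetical curb-point scores, one column chosen per row
scores = np.array([[0, 10, 0],
                   [0,  9, 0],
                   [10, 0, 0]], dtype=float)
path = best_curb_path(scores)
```

The penalty lets a strong detection pull the path sideways only when its score outweighs the jump cost, which is exactly the continuity prior the abstract appeals to.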

  15. 3D time-lapse analysis of Rab11/FIP5 complex: spatiotemporal dynamics during apical lumen formation.

    PubMed

    Mangan, Anthony; Prekeris, Rytis

    2015-01-01

    Fluorescent imaging of fixed cells grown in two-dimensional (2D) cultures is one of the most widely used techniques for observing protein localization and distribution within cells. Although this technique can also be applied to polarized epithelial cells that form three-dimensional (3D) cysts when grown in a Matrigel matrix suspension, there are still significant limitations in imaging cells fixed at a particular point in time. Here, we describe the use of 3D time-lapse imaging of live cells to observe the dynamics of apical membrane initiation site (AMIS) formation and lumen expansion in polarized epithelial cells.

  16. Quantitative assessment of dynamic PET imaging data in cancer imaging.

    PubMed

    Muzi, Mark; O'Sullivan, Finbarr; Mankoff, David A; Doot, Robert K; Pierce, Larry A; Kurland, Brenda F; Linden, Hannah M; Kinahan, Paul E

    2012-11-01

    Clinical imaging in positron emission tomography (PET) is often performed using single-time-point estimates of tracer uptake or static imaging that provides a spatial map of regional tracer concentration. However, dynamic tracer imaging can provide considerably more information about in vivo biology by delineating both the temporal and spatial pattern of tracer uptake. In addition, several potential sources of error that occur in static imaging can be mitigated. This review focuses on the application of dynamic PET imaging to measuring regional cancer biologic features and especially in using dynamic PET imaging for quantitative therapeutic response monitoring for cancer clinical trials. Dynamic PET imaging output parameters, particularly transport (flow) and overall metabolic rate, have provided imaging end points for clinical trials at single-center institutions for years. However, dynamic imaging poses many challenges for multicenter clinical trial implementations from cross-center calibration to the inadequacy of a common informatics infrastructure. Underlying principles and methodology of PET dynamic imaging are first reviewed, followed by an examination of current approaches to dynamic PET image analysis with a specific case example of dynamic fluorothymidine imaging to illustrate the approach. Copyright © 2012 Elsevier Inc. All rights reserved.

  17. Comparison of a multimedia simulator to a human model for teaching FAST exam image interpretation and image acquisition.

    PubMed

    Damewood, Sara; Jeanmonod, Donald; Cadigan, Beth

    2011-04-01

    This study compared the effectiveness of a multimedia ultrasound (US) simulator to normal human models during the practical portion of a course designed to teach the skills of both image acquisition and image interpretation for the Focused Assessment with Sonography for Trauma (FAST) exam. This was a prospective, blinded, controlled education study using medical students as an US-naïve population. After a standardized didactic lecture on the FAST exam, trainees were separated into two groups to practice image acquisition on either a multimedia simulator or a normal human model. Four outcome measures were then assessed: image interpretation of prerecorded FAST exams, adequacy of image acquisition on a standardized normal patient, perceived confidence of image adequacy, and time to image acquisition. Ninety-two students were enrolled and separated into two groups, a multimedia simulator group (n = 44), and a human model group (n = 48). Bonferroni adjustment factor determined the level of significance to be p = 0.0125. There was no difference between those trained on the multimedia simulator and those trained on a human model in image interpretation (median 80 of 100 points, interquartile range [IQR] 71-87, vs. median 78, IQR 62-86; p = 0.16), image acquisition (median 18 of 24 points, IQR 12-18 points, vs. median 16, IQR 14-20; p = 0.95), trainee's confidence in obtaining images on a 1-10 visual analog scale (median 5, IQR 4.1-6.5, vs. median 5, IQR 3.7-6.0; p = 0.36), or time to acquire images (median 3.8 minutes, IQR 2.7-5.4 minutes, vs. median = 4.5 minutes, IQR = 3.4-5.9 minutes; p = 0.044). There was no difference in teaching the skills of image acquisition and interpretation to novice FAST examiners using the multimedia simulator or normal human models. These data suggest that practical image acquisition skills learned during simulated training can be directly applied to human models. © 2011 by the Society for Academic Emergency Medicine.

  18. Lidar-Incorporated Traffic Sign Detection from Video Log Images of Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Li, Y.; Fan, J.; Huang, Y.; Chen, Z.

    2016-06-01

    A Mobile Mapping System (MMS) simultaneously collects Lidar points and video log images of a scene with a laser profiler and a digital camera. Besides the textural detail of the video log images, it also captures the 3D geometric shape of the scene as a point cloud. MMS is widely used by transportation agencies to survey street views and roadside transportation infrastructure such as traffic signs and guardrails. Although much literature on traffic sign detection is available, it focuses on either the Lidar or the imagery data alone. Based on the well-calibrated extrinsic parameters of the MMS, 3D Lidar points are, for the first time, incorporated into 2D video log images to enhance the detection of traffic signs both physically and visually. Based on the local elevation, the 3D pavement area is first located. Within a certain distance and height of the pavement, points of overhead and roadside traffic signs can be obtained according to the traffic sign setup specifications of different transportation agencies. The 3D candidate planes of the traffic signs are then fitted to those points using RANSAC plane fitting. By projecting the candidate planes onto the image, Regions of Interest (ROIs) of traffic signs are found physically from the geometric constraints between laser profiling and camera imaging. Random forest learning of the visual color and shape features of traffic signs is adopted to validate the sign ROIs in the video log images. The sequential occurrence of a traffic sign across consecutive video log images is defined by the geometric constraint of the imaging geometry and GPS movement; candidate ROIs are predicted in this temporal context to double-check salient traffic signs across video log images. The proposed algorithm is tested on a diverse set of scenarios on the interstate highway G-4 near Beijing, China, under varying lighting conditions and occlusions. Experimental results show that the proposed algorithm improves the traffic sign detection rate by incorporating the 3D planar constraint from the Lidar points. It is promising for robust, large-scale surveys of most transportation infrastructure using MMS.
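The RANSAC plane-fitting step used to find sign candidates can be sketched as follows; this is a generic implementation with hypothetical iteration and threshold parameters, not the authors' settings:

```python
import numpy as np

def ransac_plane(pts, n_iters=200, thresh=0.05, seed=0):
    """RANSAC: repeatedly fit a plane to 3 random points and keep the
    hypothesis with the most points within `thresh` of the plane."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iters):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)         # plane normal from the sample
        norm = np.linalg.norm(n)
        if norm < 1e-12:                        # degenerate (collinear) sample
            continue
        n /= norm
        inliers = np.abs((pts - p0) @ n) < thresh   # point-to-plane distances
        if inliers.sum() > best.sum():
            best = inliers
    return best

# 20 points on the plane z = 1 plus two gross outliers
xx, yy = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 4))
plane_pts = np.column_stack([xx.ravel(), yy.ravel(), np.ones(20)])
pts = np.vstack([plane_pts, [[0.5, 0.5, 5.0], [0.2, 0.8, -3.0]]])
mask = ransac_plane(pts)
```

The returned inlier mask identifies the planar sign face while ignoring stray points from poles or background clutter.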

  19. Single scan parameterization of space-variant point spread functions in image space via a printed array: the impact for two PET/CT scanners.

    PubMed

    Kotasidis, F A; Matthews, J C; Angelis, G I; Noonan, P J; Jackson, A; Price, P; Lionheart, W R; Reader, A J

    2011-05-21

    Incorporation of a resolution model during statistical image reconstruction often produces images of improved resolution and signal-to-noise ratio. A novel and practical methodology to rapidly and accurately determine the overall emission and detection blurring component of the system matrix using a printed point source array within a custom-made Perspex phantom is presented. The array was scanned at different positions and orientations within the field of view (FOV) to examine the feasibility of extrapolating the measured point source blurring to other locations in the FOV and the robustness of measurements from a single point source array scan. We measured the spatially-variant image-based blurring on two PET/CT scanners, the B-Hi-Rez and the TruePoint TrueV. These measured spatially-variant kernels and the spatially-invariant kernel at the FOV centre were then incorporated within an ordinary Poisson ordered subset expectation maximization (OP-OSEM) algorithm and compared to the manufacturer's implementation using projection space resolution modelling (RM). Comparisons were based on a point source array, the NEMA IEC image quality phantom, the Cologne resolution phantom and two clinical studies (carbon-11 labelled anti-sense oligonucleotide [(11)C]-ASO and fluorine-18 labelled fluoro-l-thymidine [(18)F]-FLT). Robust and accurate measurements of spatially-variant image blurring were successfully obtained from a single scan. Spatially-variant resolution modelling resulted in notable resolution improvements away from the centre of the FOV. Comparison between spatially-variant image-space methods and the projection-space approach (the first such report, using a range of studies) demonstrated very similar performance with our image-based implementation producing slightly better contrast recovery (CR) for the same level of image roughness (IR). 
These results demonstrate that image-based resolution modelling within reconstruction is a valid alternative to projection-based modelling, and that, when using the proposed practical methodology, the necessary resolution measurements can be obtained from a single scan. This approach avoids the relatively time-consuming and involved procedures previously proposed in the literature.

  20. Analysis of short single rest/activation epoch fMRI by self-organizing map neural network

    NASA Astrophysics Data System (ADS)

    Erberich, Stephan G.; Dietrich, Thomas; Kemeny, Stefan; Krings, Timo; Willmes, Klaus; Thron, Armin; Oberschelp, Walter

    2000-04-01

    Functional magnetic resonance imaging (fMRI) has become a standard noninvasive brain imaging technique delivering high spatial resolution. Brain activation is determined via the magnetic susceptibility of the blood oxygen level (the BOLD effect) during an activation task, e.g., motor, auditory, or visual tasks. Box-car paradigms usually have 2-4 rest/activation epochs with at least 50 volumes per scan in the time domain. Analysis methods based on statistical tests, such as Student's t-test, need a large number of repetitively acquired brain volumes to gain statistical power. The technique introduced here, based on a self-organizing map (SOM) neural network, exploits the intrinsic features of the condition change between rest and activation epochs, and is demonstrated to differentiate between the conditions with fewer time points, using only one rest and one activation epoch. The method reduces scan and analysis time as well as the likelihood of motion artifacts caused by relaxation of the patient's head. fMRI data from patients undergoing pre-surgical evaluation and from volunteers were acquired with motor (hand clenching and finger tapping), sensory (ice application), auditory (phonological and semantic word recognition), and visual (mental rotation) paradigms. For imaging we used different BOLD-sensitive gradient-echo echo-planar imaging (GE-EPI) single-shot pulse sequences (TR 2000 and 4000, 64 x 64 and 128 x 128 matrices, 15-40 slices) on a Philips Gyroscan NT 1.5 Tesla MR imager. All paradigms were RARARA (R = rest, A = activation) with an epoch width of 11 time points each. We used the self-organizing map implementation described by T. Kohonen with a 4 x 2 2D neuron map. The presented time-course vectors were clustered by similar features in the 2D neuron map. Three neural networks were trained and used for labeling with the time-course vectors of one, two, and all three on/off epochs. The results were also compared against a Kolmogorov-Smirnov statistical test over all 66 time points. To remove non-periodic time courses from training, an auto-correlation function and bandwidth-limiting Fourier filtering combined with Gaussian temporal smoothing were used. None of the maps trained with one, two, or three epochs were significantly different, which indicates that the feature space of only one on/off epoch is sufficient to differentiate between the rest and task conditions. We found that without pre-processing no meaningful results can be achieved, because the huge number of non-activated and background voxels represents the majority of the features and is therefore what the SOM learns. It is thus crucial to reduce unnecessary load on the network's capacity by selecting the training input using an auto-correlation function and/or Fourier spectrum analysis. However, by reducing the time points to one rest and one activation epoch, either the strong auto-correlation or a precise periodic frequency vanishes. Self-organizing maps can thus separate rest and activation epochs with only 1/3 of the usually acquired time points. Because the SOM technique relies on pattern or feature separation, only the presence of a state change between the conditions is necessary for differentiation. Moreover, the variance of the individual hemodynamic response function (HRF) and of the spatially varying regional cerebral blood flow (rCBF) is learned from the subject rather than compared against a fixed model, as in statistical evaluation. We found that reducing the information to only a few time points around the BOLD response was not successful, due to delays in rCBF and the insufficient extent of the BOLD feature in time. A reduced scan time is of particular interest for routine patient observation and pre-surgical planning.
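The Kohonen training loop at the heart of the method can be sketched in a few lines. This is a generic 1D SOM (the paper uses a 4 x 2 2D map), and the toy rest/activation time courses below are hypothetical stand-ins for real fMRI voxel data:

```python
import numpy as np

def train_som(data, n_units=8, n_epochs=50, lr0=0.5, seed=0):
    """1D Kohonen map: for each input, the best-matching unit and its
    neighbours move toward the input; rate and radius shrink over epochs."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(n_units, data.shape[1]))
    units = np.arange(n_units)
    for epoch in range(n_epochs):
        lr = lr0 * (1.0 - epoch / n_epochs)
        radius = max(1.0, (n_units / 2.0) * (1.0 - epoch / n_epochs))
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(((w - x) ** 2).sum(1))   # best-matching unit
            h = np.exp(-((units - bmu) ** 2) / (2.0 * radius ** 2))
            w += lr * h[:, None] * (x - w)
    return w

# toy time courses: one rest epoch then one activation epoch, 11 points each
rng = np.random.default_rng(1)
on = np.r_[np.zeros(11), np.ones(11)]       # activated voxel pattern
off = np.r_[np.ones(11), np.zeros(11)]      # opposite pattern
data = np.vstack([on + 0.1 * rng.normal(size=(20, 22)),
                  off + 0.1 * rng.normal(size=(20, 22))])
w = train_som(data)
bmu_on = int(np.argmin(((w - on) ** 2).sum(1)))
bmu_off = int(np.argmin(((w - off) ** 2).sum(1)))
```

After training, the two condition patterns land on different map units, which is the feature-separation property the paper relies on: no fixed HRF model is needed, only a detectable state change.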

  1. Geometric registration of images by similarity transformation using two reference points

    NASA Technical Reports Server (NTRS)

    Kang, Yong Q. (Inventor); Jo, Young-Heon (Inventor); Yan, Xiao-Hai (Inventor)

    2011-01-01

    A method for registering a first image to a second image using a similarity transformation. Each image includes a plurality of pixels. The first image pixels are mapped to a set of first image coordinates and the second image pixels are mapped to a set of second image coordinates. The first image coordinates of two reference points in the first image are determined, and the second image coordinates of these reference points in the second image are determined. A Cartesian translation of the set of second image coordinates is performed such that the second image coordinates of the first reference point match its first image coordinates. A similarity transformation of the translated set of second image coordinates is then performed: it scales and rotates the second image coordinates about the first reference point such that the second image coordinates of the second reference point match its first image coordinates.
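The two-reference-point construction above has a compact closed form in the complex plane: a single complex factor k encodes both the scale (|k|) and the rotation (arg k) about the first reference point. A minimal sketch with hypothetical coordinates (the patent's actual computation may differ in detail):

```python
def two_point_similarity(p1_src, p2_src, p1_dst, p2_dst):
    """Similarity transform from two reference-point correspondences:
    translate so the first reference point matches, then scale and rotate
    about it so the second reference point matches as well."""
    a1, a2 = complex(*p1_src), complex(*p2_src)
    b1, b2 = complex(*p1_dst), complex(*p2_dst)
    k = (b2 - b1) / (a2 - a1)             # |k| = scale, arg(k) = rotation
    def apply(p):
        w = b1 + k * (complex(*p) - a1)   # rotate/scale about reference 1
        return (w.real, w.imag)
    return apply

# hypothetical reference points implying scale 2 and a 90-degree rotation
f = two_point_similarity((0, 0), (1, 0), (2, 1), (2, 3))
```

By construction the transform maps both reference points exactly; every other pixel coordinate follows the same scale and rotation about the first reference point.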

  2. SEARCHING FOR COMETS ON THE WORLD WIDE WEB: THE ORBIT OF 17P/HOLMES FROM THE BEHAVIOR OF PHOTOGRAPHERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lang, Dustin; Hogg, David W., E-mail: dstn@astro.princeton.edu

    We performed an image search for 'Comet Holmes', using the Yahoo! Web search engine, on 2010 April 1. Thousands of images were returned. We astrometrically calibrated (and therefore vetted) the images using the Astrometry.net system. The calibrated image pointings form a set of data points to which we can fit a test-particle orbit in the solar system, marginalizing over image dates and detecting outliers. The approach is Bayesian and the model is, in essence, a model of how comet astrophotographers point their instruments. In this work, we do not measure the position of the comet within each image, but rather use the celestial position of the whole image to infer the orbit. We find very strong probabilistic constraints on the orbit, although slightly off the Jet Propulsion Lab ephemeris, probably due to limitations of our model. Hyperparameters of the model constrain the reliability of date meta-data and where in the image astrophotographers place the comet; we find that ~70% of the meta-data are correct and that the comet typically appears in the central third of the image footprint. This project demonstrates that discoveries and measurements can be made using data of extreme heterogeneity and unknown provenance. As the size and diversity of astronomical data sets continues to grow, approaches like ours will become more essential. This project also demonstrates that the Web is an enormous repository of astronomical information, and that if an object has been given a name and photographed thousands of times by observers who post their images on the Web, we can (re-)discover it and infer its dynamical properties.

  3. First NAC Image Obtained in Mercury Orbit

    NASA Image and Video Library

    2017-12-08

    NASA image acquired: March 29, 2011 This is the first image of Mercury taken from orbit with MESSENGER’s Narrow Angle Camera (NAC). MESSENGER’s camera system, the Mercury Dual Imaging System (MDIS), has two cameras: the Narrow Angle Camera and the Wide Angle Camera (WAC). Comparison of this image with MESSENGER’s first WAC image of the same region shows the substantial difference between the fields of view of the two cameras. At 1.5°, the field of view of the NAC is seven times smaller than the 10.5° field of view of the WAC. This image was taken using MDIS’s pivot. MDIS is mounted on a pivoting platform and is the only instrument in MESSENGER’s payload capable of movement independent of the spacecraft. The other instruments are fixed in place, and most point down the spacecraft’s boresight at all times, relying solely on the guidance and control system for pointing. The 90° range of motion of the pivot gives MDIS a much-needed extra degree of freedom, allowing MDIS to image the planet’s surface at times when spacecraft geometry would normally prevent it from doing so. The pivot also gives MDIS additional imaging opportunities by allowing it to view more of the surface than that at which the boresight-aligned instruments are pointed at any given time. On March 17, 2011 (March 18, 2011, UTC), MESSENGER became the first spacecraft ever to orbit the planet Mercury. The mission is currently in the commissioning phase, during which spacecraft and instrument performance are verified through a series of specially designed checkout activities. In the course of the one-year primary mission, the spacecraft's seven scientific instruments and radio science investigation will unravel the history and evolution of the Solar System's innermost planet. Visit the Why Mercury? section of this website to learn more about the science questions that the MESSENGER mission has set out to answer. 
Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Carnegie Institution of Washington

  4. Multiscale study on stochastic reconstructions of shale samples

    NASA Astrophysics Data System (ADS)

    Lili, J.; Lin, M.; Jiang, W. B.

    2016-12-01

    Shales are known to have multiscale pore systems composed of macroscale fractures, micropores, and nanoscale pores within the gas- or oil-producing organic material. Shales are also fissile and laminated, and their horizontal heterogeneity is quite different from the vertical. Stochastic reconstructions are extremely useful in situations where obtaining three-dimensional information is costly and time consuming. The purpose of this paper is therefore to stochastically reconstruct equiprobable 3D models containing information from several scales. In this paper, macroscale and microscale images of shale structure in the Lower Silurian Longmaxi are obtained by X-ray microtomography, and nanoscale images are obtained by scanning electron microscopy. Each image is representative for its scale and phases. In particular, the macroscale is four times coarser than the microscale, which in turn is four times lower in resolution than the nanoscale image. Secondly, the cross correlation-based simulation method (CCSIM) and the three-step sampling method are combined to generate stochastic reconstructions for each scale. It is important to point out that the boundary points of pore and matrix are selected based on a multiple-point connectivity function in the sampling process, so the characteristics of the reconstructed image can be controlled indirectly. Thirdly, all images are brought to the same resolution through downscaling and upscaling by interpolation, and the multiscale categorical spatial data are then merged into a single 3D image with a predefined resolution (that of the microscale image). Thirty realizations were generated using the given images and the proposed method. The results reveal that the proposed method is capable of preserving the multiscale pore structure, both vertically and horizontally, which is necessary for accurate permeability prediction.
The variogram curves and pore-size distribution for both original 3D sample and the generated 3D realizations are compared. The result indicates that the agreement between the original 3D sample and the generated stochastic realizations is excellent. This work is supported by "973" Program (2014CB239004), the Key Instrument Developing Project of the CAS (ZDYZ2012-1-08-02) and the National Natural Science Foundation of China (Grant No. 41574129).

  5. Modelling of microcracks image treated with fluorescent dye

    NASA Astrophysics Data System (ADS)

    Glebov, Victor; Lashmanov, Oleg U.

    2015-06-01

    The main reasons of catastrophes and accidents are high level of wear of equipment and violation of the production technology. The methods of nondestructive testing are designed to find out defects timely and to prevent break down of aggregates. These methods allow determining compliance of object parameters with technical requirements without destroying it. This work will discuss dye penetrant inspection or liquid penetrant inspection (DPI or LPI) methods and computer model of microcracks image treated with fluorescent dye. Usually cracks on image look like broken extended lines with small width (about 1 to 10 pixels) and ragged edges. The used method of inspection allows to detect microcracks with depth about 10 or more micrometers. During the work the mathematical model of image of randomly located microcracks treated with fluorescent dye was created in MATLAB environment. Background noises and distortions introduced by the optical systems are considered in the model. The factors that have influence on the image are listed below: 1. Background noise. Background noise is caused by the bright light from external sources and it reduces contrast on the objects edges. 2. Noises on the image sensor. Digital noise manifests itself in the form of randomly located points that are differing in their brightness and color. 3. Distortions caused by aberrations of optical system. After passing through the real optical system the homocentricity of the bundle of rays is violated or homocentricity remains but rays intersect at the point that doesn't coincide with the point of the ideal image. The stronger the influence of the above-listed factors, the worse the image quality and therefore the analysis of the image for control of the item finds difficulty. The mathematical model is created using the following algorithm: at the beginning the number of cracks that will be modeled is entered from keyboard. 
Then a point with a random position is chosen on a 1024x1024-pixel matrix (the size of the resulting image). This random pixel and the two adjacent points are painted with random brightness; the points at the edges are dimmer than the central pixel. The width of the paintbrush is 3 pixels. Next, one of eight possible directions is chosen and painting continues in that direction. The number of `steps' is also entered at the beginning of the program. This method of crack simulation is based on the theory of A.N. Galybin and A.V. Dyskin, which describes crack propagation as a random-walk process. These operations are repeated once for each crack to be simulated. Finally, background noise and a Gaussian blur (simulating poor focusing of the optical system) are applied.
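The crack-drawing algorithm described above can be sketched as follows. This is a hypothetical NumPy re-implementation (the paper's model is in MATLAB); the brush-brightness ranges and the direction-change probability are illustrative assumptions, not values from the paper.

```python
import numpy as np

def simulate_cracks(n_cracks, n_steps, size=1024, rng=None):
    """Random-walk crack image model: each crack starts at a random pixel,
    is drawn with a 3-pixel-wide brush whose edge pixels are dimmer than
    the centre, and propagates step by step in one of 8 directions."""
    rng = np.random.default_rng(rng)
    img = np.zeros((size, size))
    # the 8 possible step directions (N, NE, E, SE, S, SW, W, NW)
    dirs = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]
    for _ in range(n_cracks):
        y, x = rng.integers(1, size - 1, size=2)
        dy, dx = dirs[rng.integers(8)]
        for _ in range(n_steps):
            centre = rng.uniform(0.7, 1.0)          # random brightness
            img[y, x] = max(img[y, x], centre)
            # brush edges: the two pixels perpendicular to the walk, dimmer
            for oy, ox in ((-dx, dy), (dx, -dy)):
                yy, xx = y + oy, x + ox
                if 0 <= yy < size and 0 <= xx < size:
                    img[yy, xx] = max(img[yy, xx], centre * rng.uniform(0.3, 0.7))
            if rng.random() < 0.2:                  # random-walk direction change
                dy, dx = dirs[rng.integers(8)]
            y = np.clip(y + dy, 1, size - 2)
            x = np.clip(x + dx, 1, size - 2)
    # background noise; a Gaussian blur would follow in the full model
    img += rng.normal(0, 0.01, img.shape)
    return img
```

A Gaussian filter (e.g. `scipy.ndimage.gaussian_filter`) applied to `img` would complete the defocus simulation mentioned in the abstract.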

  6. Statistical image quantification toward optimal scan fusion and change quantification

    NASA Astrophysics Data System (ADS)

    Potesil, Vaclav; Zhou, Xiang Sean

    2007-03-01

Recent advances in imaging technology have brought new challenges and opportunities for automatic and quantitative analysis of medical images. With broader accessibility of more imaging modalities for more patients, fusion of modalities/scans from one time point and longitudinal analysis of changes across time points have become the two most critical differentiators to support more informed, more reliable and more reproducible diagnosis and therapy decisions. Unfortunately, scan fusion and longitudinal analysis are both inherently plagued with increased levels of statistical errors. A lack of comprehensive analysis by imaging scientists and a lack of full awareness by physicians pose potential risks in clinical practice. In this paper, we discuss several key error factors affecting imaging quantification, study their interactions, and introduce a simulation strategy to establish general error bounds for change quantification across time. We quantitatively show that image resolution, voxel anisotropy, lesion size, eccentricity, and orientation are all contributing factors to quantification error, and there is an intricate relationship between voxel anisotropy and lesion shape in affecting quantification error. Specifically, when two or more scans are to be fused at feature level, optimal linear fusion analysis reveals that scans with voxel anisotropy aligned with lesion elongation should receive a higher weight than other scans. As a result of such optimal linear fusion, we achieve a lower variance than naïve averaging. Simulated experiments are used to validate theoretical predictions. Future work based on the proposed simulation methods may lead to general guidelines and error lower bounds for quantitative image analysis and change detection.
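The "optimal linear fusion achieves lower variance than naïve averaging" claim has a simple scalar form: inverse-variance weighting, where scans measured more precisely (e.g. with anisotropy aligned to lesion elongation) receive higher weight. The sketch below shows this generic minimum-variance estimator, not the paper's exact multi-scan model.

```python
import numpy as np

def fuse_measurements(values, variances):
    """Minimum-variance (inverse-variance weighted) linear fusion of the
    same quantity measured on several scans. Returns the fused estimate
    and its variance, which never exceeds that of the plain average."""
    var = np.asarray(variances, dtype=float)
    w = 1.0 / var
    w /= w.sum()                         # weights proportional to 1/sigma_i^2
    fused = float(np.dot(w, values))
    fused_var = float(1.0 / np.sum(1.0 / var))
    return fused, fused_var
```

For two scans with variances 1.0 and 4.0, the weights are 0.8 and 0.2, and the fused variance is 0.8, below the 1.25 variance of naïve averaging.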

  7. Research on the range side lobe suppression method for modulated stepped frequency radar signals

    NASA Astrophysics Data System (ADS)

    Liu, Yinkai; Shan, Tao; Feng, Yuan

    2018-05-01

The magnitude of the time-domain range sidelobe of modulated stepped frequency radar affects the imaging quality of inverse synthetic aperture radar (ISAR). In this paper, the cause of high sidelobes in modulated stepped frequency radar imaging is first analyzed in a real environment. Then, chaos particle swarm optimization (CPSO) is used to select the amplitude and phase compensation factors according to the minimum-sidelobe criterion. Finally, the compensated one-dimensional range images are obtained. Experimental results show that the amplitude-phase compensation method based on the CPSO algorithm can effectively reduce the sidelobe peak value of one-dimensional range images, outperforming common sidelobe suppression methods and preventing weak scattering points from being masked by the high sidelobes of strong scattering points.
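The compensate-then-measure loop can be illustrated with a plain particle swarm in place of the paper's chaos PSO (the chaotic initialisation/perturbation is omitted, and only phase factors are optimised here, both simplifications of the method described): each particle is a vector of per-frequency phase corrections scored by the peak sidelobe of the resulting IFFT range profile.

```python
import numpy as np

def peak_sidelobe(profile):
    """Peak sidelobe level (dB): largest sample outside the mainlobe
    (taken here as the peak and its two neighbours) over the peak."""
    p = np.abs(np.asarray(profile))
    k = int(np.argmax(p))
    side = np.delete(p, [max(k - 1, 0), k, min(k + 1, p.size - 1)])
    return 20 * np.log10(max(side.max(), 1e-12) / p[k])

def pso_phase_compensation(freq_samples, n_particles=20, n_iter=60, seed=0):
    """Toy PSO search for per-frequency phase-compensation factors that
    minimise the peak sidelobe of the IFFT range profile."""
    rng = np.random.default_rng(seed)
    n = freq_samples.size
    cost = lambda ph: peak_sidelobe(np.fft.ifft(freq_samples * np.exp(1j * ph)))
    x = rng.uniform(-np.pi, np.pi, (n_particles, n))  # particle positions
    x[0] = 0.0                    # seed one particle with "no compensation"
    v = np.zeros_like(x)
    pbest = x.copy()
    pcost = np.array([cost(p) for p in x])
    g = pbest[pcost.argmin()].copy()                  # global best
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[pcost.argmin()].copy()
    return g, float(pcost.min())
```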

  8. Mass Spectrometry Using Nanomechanical Systems: Beyond the Point-Mass Approximation.

    PubMed

    Sader, John E; Hanay, M Selim; Neumann, Adam P; Roukes, Michael L

    2018-03-14

The mass measurement of single molecules, in real time, is performed routinely using resonant nanomechanical devices. This approach models the molecules as point particles. A recent development now allows the spatial extent (and, indeed, image) of the adsorbate to be characterized using multimode measurements (Hanay, M. S., Nature Nanotechnol., 10, 2015, pp 339-344). This "inertial imaging" capability is achieved through virtual re-engineering of the resonator's vibrating modes, by linear superposition of their measured frequency shifts. Here, we present a complementary and simplified methodology for the analysis of these inertial imaging measurements that exhibits similar performance while streamlining implementation. This development, together with the software that we provide, enables the broad implementation of inertial imaging and opens the door to a range of novel characterization studies of nanoscale adsorbates.

  9. An Approach of Registration between Remote Sensing Image and Electronic Chart Based on Coastal Line

    NASA Astrophysics Data System (ADS)

    Li, Ying; Yu, Shuiming; Li, Chuanlong

Remote sensing plays an important role in marine oil spill emergency response. To implement timely and effective countermeasures, it is important to provide the exact position of oil spills, so the remote sensing image and the electronic chart must be matched properly. Discrepancies ordinarily exist between the oil spill image and the electronic chart even when geometric correction has been applied to the remote sensing image, and it is difficult to find stable control points at sea for an exact rectification. Since oil spills generally occur near the coast, an improved relaxation algorithm was developed for finding control points along the coastline. A conversion function is created by the least-squares method, and the remote sensing image can be registered to the vector map based on this function. A SAR image was used as the remote sensing data and a shapefile map as the electronic chart data. The results show that this approach can guarantee the precision of the registration, which is essential for oil spill monitoring.

  10. Wide-Field Raman Imaging of Dental Lesions

    PubMed Central

    Yang, Shan; Li, Bolan; Akkus, Anna; Akkus, Ozan; Lang, Lisa

    2014-01-01

Detection of dental caries at onset remains a great challenge in dentistry. Raman spectroscopy can be successfully applied to caries detection since it is sensitive to the amount of Raman-active mineral crystals, the most abundant component of enamel. Effective diagnosis requires full examination of a tooth surface via Raman mapping. Point-scan Raman mapping is not clinically feasible due to its lengthy data acquisition time. In this work, a wide-field Raman imaging system was assembled based on a high-sensitivity 2D CCD camera for imaging the mineralization status of teeth with lesions. Wide-field images indicated some lesions to be hypomineralized and others to be hypermineralized. The observations of wide-field Raman imaging were in agreement with point-scan Raman mapping. Therefore, sound enamel and lesions can be discriminated by Raman imaging of the mineral content. In conclusion, wide-field Raman imaging is a potentially useful tool for visualization of dental lesions in the clinic. PMID:24781363

  11. Development of Dynamic Spatial Video Camera (DSVC) for 4D observation, analysis and modeling of human body locomotion.

    PubMed

    Suzuki, Naoki; Hattori, Asaki; Hayashibe, Mitsuhiro; Suzuki, Shigeyuki; Otake, Yoshito

    2003-01-01

We have developed an imaging system for free and quantitative observation of human locomotion in the time-spatial domain by way of real-time imaging. The system is equipped with 60 computer-controlled video cameras to film human locomotion from all angles simultaneously. The images are loaded into the main graphics workstation and arranged into a 2D image matrix. The subject can be observed from any chosen direction by selecting the viewpoint from the appropriate image sequence in this matrix. The system can also reconstruct 4D models of the subject's moving body by using the 60 images taken from all directions at one particular time, and it can visualize inner structures such as the skeletal or muscular systems of the subject by compositing computer graphics reconstructed from the MRI data set. We are planning to apply this imaging system to clinical observation in the areas of orthopedics, rehabilitation, and sports science.

  12. Fusion of magnetic resonance angiography and magnetic resonance imaging for surgical planning for meningioma--technical note.

    PubMed

    Kashimura, Hiroshi; Ogasawara, Kuniaki; Arai, Hiroshi; Beppu, Takaaki; Inoue, Takashi; Takahashi, Tsutomu; Matsuda, Koichi; Takahashi, Yujiro; Fujiwara, Shunrou; Ogawa, Akira

    2008-09-01

    A fusion technique for magnetic resonance (MR) angiography and MR imaging was developed to help assess the peritumoral angioarchitecture during surgical planning for meningioma. Three-dimensional time-of-flight (3D-TOF) and 3D-spoiled gradient recalled (SPGR) datasets were obtained from 10 patients with intracranial meningioma, and fused using newly developed volume registration and visualization software. Maximum intensity projection (MIP) images from 3D-TOF MR angiography and axial SPGR MR imaging were displayed at the same time on the monitor. Selecting a vessel on the real-time MIP image indicated the corresponding points on the axial image automatically. Fusion images showed displacement of the anterior cerebral or middle cerebral artery in 7 patients and encasement of the anterior cerebral arteries in 1 patient, with no relationship between the main arterial trunk and tumor in 2 patients. Fusion of MR angiography and MR imaging can clarify relationships between the intracranial vasculature and meningioma, and may be helpful for surgical planning for meningioma.
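The link between the MIP display and the axial image described above comes from how a maximum intensity projection is computed: each projected pixel keeps the brightest voxel along its ray, so recording the argmax index lets a point picked on the MIP be mapped back to its axial slice. A minimal sketch of that mechanism (illustrative only, not the authors' software):

```python
import numpy as np

def mip_with_index(volume, axis=0):
    """Maximum intensity projection of a 3-D volume along one axis,
    plus the slice index each projected pixel came from. The index map
    is what allows a vessel selected on the MIP to indicate the
    corresponding point on the axial image automatically."""
    vol = np.asarray(volume)
    return vol.max(axis=axis), vol.argmax(axis=axis)
```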

  13. Phantom Study Investigating the Accuracy of Manual and Automatic Image Fusion with the GE Logiq E9: Implications for use in Percutaneous Liver Interventions.

    PubMed

    Burgmans, Mark Christiaan; den Harder, J Michiel; Meershoek, Philippa; van den Berg, Nynke S; Chan, Shaun Xavier Ju Min; van Leeuwen, Fijs W B; van Erkel, Arian R

    2017-06-01

    To determine the accuracy of automatic and manual co-registration methods for image fusion of three-dimensional computed tomography (CT) with real-time ultrasonography (US) for image-guided liver interventions. CT images of a skills phantom with liver lesions were acquired and co-registered to US using GE Logiq E9 navigation software. Manual co-registration was compared to automatic and semiautomatic co-registration using an active tracker. Also, manual point registration was compared to plane registration with and without an additional translation point. Finally, comparison was made between manual and automatic selection of reference points. In each experiment, accuracy of the co-registration method was determined by measurement of the residual displacement in phantom lesions by two independent observers. Mean displacements for a superficial and deep liver lesion were comparable after manual and semiautomatic co-registration: 2.4 and 2.0 mm versus 2.0 and 2.5 mm, respectively. Both methods were significantly better than automatic co-registration: 5.9 and 5.2 mm residual displacement (p < 0.001; p < 0.01). The accuracy of manual point registration was higher than that of plane registration, the latter being heavily dependent on accurate matching of axial CT and US images by the operator. Automatic reference point selection resulted in significantly lower registration accuracy compared to manual point selection despite lower root-mean-square deviation (RMSD) values. The accuracy of manual and semiautomatic co-registration is better than that of automatic co-registration. For manual co-registration using a plane, choosing the correct plane orientation is an essential first step in the registration process. Automatic reference point selection based on RMSD values is error-prone.
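Manual point co-registration of the kind compared in this study is conventionally a least-squares rigid fit of matched reference points; the residual displacement and the RMSD the abstract warns about fall out of the same computation. A sketch using the standard Kabsch algorithm (an assumed, generic formulation, not the Logiq E9's internal method):

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid (rotation + translation) alignment of matched
    point sets via the Kabsch algorithm; also returns the RMSD of the
    residuals, the fit metric that can be misleadingly low."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    residual = dst - (src @ R.T + t)
    rmsd = float(np.sqrt((residual ** 2).sum(1).mean()))
    return R, t, rmsd
```

A near-zero RMSD only says the chosen points fit each other well; as the study shows, it does not guarantee low displacement at a target lesion elsewhere in the liver.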

  14. Mild cognitive impairment: baseline and longitudinal structural MR imaging measures improve predictive prognosis.

    PubMed

    McEvoy, Linda K; Holland, Dominic; Hagler, Donald J; Fennema-Notestine, Christine; Brewer, James B; Dale, Anders M

    2011-06-01

To assess whether single-time-point and longitudinal volumetric magnetic resonance (MR) imaging measures provide predictive prognostic information in patients with amnestic mild cognitive impairment (MCI). This study was conducted with institutional review board approval and in compliance with HIPAA regulations. Written informed consent was obtained from all participants or the participants' legal guardians. Cross-validated discriminant analyses of MR imaging measures were performed to differentiate 164 Alzheimer disease (AD) cases from 203 healthy control cases. Separate analyses were performed by using data from MR images obtained at one time point or by combining single-time-point measures with 1-year change measures. Resulting discriminant functions were applied to 317 MCI cases to derive individual patient risk scores. Risk of conversion to AD was estimated as a continuous function of risk score percentile. Kaplan-Meier survival curves were computed for risk score quartiles. Odds ratios (ORs) for the conversion to AD were computed between the highest and lowest quartile scores. Individualized risk estimates from baseline MR examinations indicated that the 1-year risk of conversion to AD ranged from 3% to 40% (average group risk, 17%; OR, 7.2 for highest vs lowest score quartiles). Including measures of 1-year change in global and regional volumes significantly improved risk estimates (P = .001), with the risk of conversion to AD in the subsequent year ranging from 3% to 69% (average group risk, 27%; OR, 12.0 for highest vs lowest score quartiles). Relative to the risk of conversion to AD conferred by the clinical diagnosis of MCI alone, MR imaging measures yield substantially more informative patient-specific risk estimates. Such predictive prognostic information will be critical if disease-modifying therapies become available. http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.11101975/-/DC1. RSNA, 2011

  15. Effective image differencing with convolutional neural networks for real-time transient hunting

    NASA Astrophysics Data System (ADS)

    Sedaghat, Nima; Mahabal, Ashish

    2018-06-01

Large sky surveys are increasingly relying on image subtraction pipelines for real-time (and archival) transient detection. In this process one has to contend with a varying point-spread function (PSF) and small brightness variations in many sources, as well as artefacts resulting from saturated stars and, in general, matching errors. Very often the differencing is done with a reference image that is deeper than the individual images, and the attendant difference in noise characteristics can also lead to artefacts. We present here a deep-learning approach to transient detection that encapsulates all the steps of a traditional image-subtraction pipeline - image registration, background subtraction, noise removal, PSF matching and subtraction - in a single real-time convolutional network. Once trained, the method works lightning-fast and, given that it performs multiple steps in one go, the time saved and false positives eliminated for multi-CCD surveys like the Zwicky Transient Facility and the Large Synoptic Survey Telescope will be immense, as millions of subtractions will be needed per night.

  16. 3D Surface Reconstruction of Rills in a Spanish Olive Grove

    NASA Astrophysics Data System (ADS)

    Brings, Christine; Gronz, Oliver; Seeger, Manuel; Wirtz, Stefan; Taguas, Encarnación; Ries, Johannes B.

    2016-04-01

The low-cost, user-friendly photogrammetric Structure from Motion (SfM) technique is used for 3D surface reconstruction and difference calculation of an 18-meter-long rill in southern Spain (Andalusia, Puente Genil). The images were taken with a Canon HD video camera before and after a rill experiment in an olive grove. Compared to a photo camera, recording with a video camera offers a huge time advantage, and the method also guarantees more than adequately overlapping sharp images. For each model, approximately 20 minutes of video were taken. As SfM needs single images, the sharpest image was automatically selected from each interval of 8 frames; sharpness was estimated using a derivative-based metric. VisualSfM then detects feature points in each image, searches for matching feature points in all image pairs, and recovers the camera and feature positions. Finally, by triangulation of the camera positions and feature points, the software reconstructs a point cloud of the rill surface. From the point cloud, 3D surface models (meshes) are created, and difference calculations between the pre and post models allow visualization of the changes (erosion and accumulation areas) and quantification of erosion volumes. The calculated volumes are expressed in the spatial units of the models, so real-world values must be obtained via references. The results show that rills in olive groves are highly dynamic due to the lack of vegetation cover under the trees, so that the rill can incise down to the bedrock. Another reason for the high activity is the intensive use of machinery.
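The sharpest-frame selection step can be sketched as below. The abstract only says the metric is "derivative-based", so the mean squared gradient magnitude used here is an illustrative choice, not necessarily the study's exact metric.

```python
import numpy as np

def sharpest_frame(frames):
    """Pick the sharpest image from a group of video frames using a
    derivative-based sharpness score: the mean squared gradient
    magnitude. Blur attenuates high spatial frequencies, so sharper
    frames score higher. Returns the winning index and all scores."""
    def sharpness(img):
        gy, gx = np.gradient(np.asarray(img, dtype=float))
        return float((gx ** 2 + gy ** 2).mean())
    scores = [sharpness(f) for f in frames]
    return int(np.argmax(scores)), scores
```

In the workflow above this would be applied once per 8-frame interval of the video.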

  17. Streaming Multiframe Deconvolutions on GPUs

    NASA Astrophysics Data System (ADS)

    Lee, M. A.; Budavári, T.

    2015-09-01

Atmospheric turbulence distorts all ground-based observations, which is especially detrimental to faint detections. The point spread function (PSF) defining this blur is unknown for each exposure and varies significantly over time, making image analysis difficult. Lucky imaging and traditional co-adding throw away much of the information. We developed blind deconvolution algorithms that can simultaneously obtain robust solutions for the background image and all the PSFs. This is done in a streaming setting, which makes it practical for a large number of big images. We implemented a new tool that runs on GPUs and achieves exceptional running times that can scale to the new time-domain surveys. Our code can quickly and effectively recover high-resolution images exceeding the quality of traditional co-adds. We demonstrate the power of the method on the repeated exposures in the Sloan Digital Sky Survey's Stripe 82.

  18. Image-guided automatic triggering of a fractional CO2 laser in aesthetic procedures.

    PubMed

    Wilczyński, Sławomir; Koprowski, Robert; Wiernek, Barbara K; Błońska-Fajfrowska, Barbara

    2016-09-01

Laser procedures in dermatology and aesthetic medicine are associated with the need for manual laser triggering, which leads to pulse overlapping and side effects. Automatic laser triggering based on image analysis can ensure that each successive dose of radiation is applied precisely. A fractional CO2 laser was used in the study, and 500 images of the human skin of healthy subjects were acquired. Automatic triggering was initiated by an application together with a camera which tracks and analyses the skin in visible light. The tracking algorithm uses image-analysis methods to overlap images. After locating the characteristic points in the analysed adjacent areas, the correspondence of graphs is found: the point coordinates derived from the images are the vertices of graphs with respect to which isomorphism is sought. When the correspondence of graphs is found, it is possible to overlap the neighbouring parts of the image. Thanks to this automatic image-fitting method, the proposed approach to laser triggering achieves 100% repeatability. To meet this requirement, there must be at least 13 graph vertices obtained from the image; for this number of vertices, the time to analyse a single image is less than 0.5 s. The proposed method, applied in practice, may help reduce the number of side effects during dermatological laser procedures that result from laser pulse overlapping. In addition, it reduces treatment time and enables new treatment techniques based on controlled, precise laser pulse overlapping. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Optical resonance imaging: An optical analog to MRI with sub-diffraction-limited capabilities.

    PubMed

    Allodi, Marco A; Dahlberg, Peter D; Mazuski, Richard J; Davis, Hunter C; Otto, John P; Engel, Gregory S

    2016-12-21

We propose here optical resonance imaging (ORI), a direct optical analog to magnetic resonance imaging (MRI). The proposed pulse sequence for ORI maps space to time and recovers an image from a heterodyne-detected third-order nonlinear photon echo measurement. As opposed to traditional photon echo measurements, the third pulse in the ORI pulse sequence has significant pulse-front tilt that acts as a temporal gradient. This gradient couples space to time by stimulating the emission of a photon echo signal from different lateral spatial locations of a sample at different times, providing a widefield ultrafast microscopy. We circumvent the diffraction limit of the optics by mapping the lateral spatial coordinate of the sample to the emission time of the signal, which can be measured to high precision using interferometric heterodyne detection. This technique is thus an optical analog of MRI, where magnetic-field gradients are used to localize the spin-echo emission to a point below the diffraction limit of the radio-frequency wave used. We calculate the expected ORI signal using 15 fs pulses and 87° of pulse-front tilt, collected using f/2 optics, and find a two-point resolution of 275 nm using 800 nm light that satisfies the Rayleigh criterion. We also derive a general equation for resolution in optical resonance imaging that indicates the possibility of superresolution imaging using this technique. The photon echo sequence also enables spectroscopic determination of the input and output energy. The technique thus correlates the input energy with the final position and energy of the exciton.

  20. Acoustic holography: Problems associated with construction and reconstruction techniques

    NASA Technical Reports Server (NTRS)

    Singh, J. J.

    1978-01-01

    The implications of the difference between the inspecting and interrogating radiations are discussed. For real-time, distortionless, sound viewing, it is recommended that infrared radiation of wavelength comparable to the inspecting sound waves be used. The infrared images can be viewed with (IR visible) converter phosphors. The real-time display of the visible image of the acoustically-inspected object at low sound levels such as are used in medical diagnosis is evaluated. In this connection attention is drawn to the need for a phosphor screen which is such that its optical transmission at any point is directly related to the incident electron beam intensity at that point. Such a screen, coupled with an acoustical camera, can enable instantaneous sound wave reconstruction.

  1. Computational photoacoustic imaging with sparsity-based optimization of the initial pressure distribution

    NASA Astrophysics Data System (ADS)

    Shang, Ruibo; Archibald, Richard; Gelb, Anne; Luke, Geoffrey P.

    2018-02-01

In photoacoustic (PA) imaging, the optical absorption can be acquired from the initial pressure distribution (IPD), so an accurate reconstruction of the IPD is very helpful for reconstructing the optical absorption. However, the image quality of PA imaging in scattering media is deteriorated by acoustic diffraction, imaging artifacts, and weak PA signals. In this paper, we propose a sparsity-based optimization approach that improves the reconstruction of the IPD in PA imaging. A linear imaging forward model was set up based on a time-and-delay method under the assumption that the point spread function (PSF) is spatially invariant. An optimization problem was then formulated, with a regularization term enforcing the sparsity of the IPD in a certain domain, to solve this inverse problem. As a proof of principle, the approach was applied to reconstructing point objects and blood vessel phantoms. The resolution and signal-to-noise ratio (SNR) were compared between conventional back-projection and our proposed approach. Overall, these results show that computational imaging can leverage the sparsity of PA images to improve the estimation of the IPD.
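The sparsity-regularised inverse problem described above has the generic form min_x 0.5*||Ax - y||^2 + lam*||x||_1, with A the linear forward model and x the IPD. A minimal iterative soft-thresholding (ISTA) solver illustrates this form; the paper's actual solver and forward model are not specified, so this is a textbook sketch with A as an arbitrary matrix.

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    """Iterative soft-thresholding for the L1-regularised least-squares
    problem min_x 0.5*||Ax - y||^2 + lam*||x||_1: a gradient step on the
    data term followed by soft-thresholding, which promotes sparsity."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)              # gradient of the data-fidelity term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```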

  2. Vectorial point spread function and optical transfer function in oblique plane imaging.

    PubMed

    Kim, Jeongmin; Li, Tongcang; Wang, Yuan; Zhang, Xiang

    2014-05-05

    Oblique plane imaging, using remote focusing with a tilted mirror, enables direct two-dimensional (2D) imaging of any inclined plane of interest in three-dimensional (3D) specimens. It can image real-time dynamics of a living sample that changes rapidly or evolves its structure along arbitrary orientations. It also allows direct observations of any tilted target plane in an object of which orientational information is inaccessible during sample preparation. In this work, we study the optical resolution of this innovative wide-field imaging method. Using the vectorial diffraction theory, we formulate the vectorial point spread function (PSF) of direct oblique plane imaging. The anisotropic lateral resolving power caused by light clipping from the tilted mirror is theoretically analyzed for all oblique angles. We show that the 2D PSF in oblique plane imaging is conceptually different from the inclined 2D slice of the 3D PSF in conventional lateral imaging. Vectorial optical transfer function (OTF) of oblique plane imaging is also calculated by the fast Fourier transform (FFT) method to study effects of oblique angles on frequency responses.

  3. Digital Correction of Motion Artifacts in Microscopy Image Sequences Collected from Living Animals Using Rigid and Non-Rigid Registration

    PubMed Central

    Lorenz, Kevin S.; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.

    2013-01-01

    Digital image analysis is a fundamental component of quantitative microscopy. However, intravital microscopy presents many challenges for digital image analysis. In general, microscopy volumes are inherently anisotropic, suffer from decreasing contrast with tissue depth, lack object edge detail, and characteristically have low signal levels. Intravital microscopy introduces the additional problem of motion artifacts, resulting from respiratory motion and heartbeat from specimens imaged in vivo. This paper describes an image registration technique for use with sequences of intravital microscopy images collected in time-series or in 3D volumes. Our registration method involves both rigid and non-rigid components. The rigid registration component corrects global image translations, while the non-rigid component manipulates a uniform grid of control points defined by B-splines. Each control point is optimized by minimizing a cost function consisting of two parts: a term to define image similarity, and a term to ensure deformation grid smoothness. Experimental results indicate that this approach is promising based on the analysis of several image volumes collected from the kidney, lung, and salivary gland of living rodents. PMID:22092443
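The two-part cost function described above can be sketched schematically. The similarity term here is sum-of-squared-differences and the smoothness term penalises squared differences of neighbouring control-point displacements; both are common choices assumed for illustration, and the paper's exact metrics may differ.

```python
import numpy as np

def registration_cost(moving_warped, fixed, grid, alpha=0.1):
    """Cost = image similarity + alpha * deformation-grid smoothness.
    `grid` is an (H, W, 2) array of B-spline control-point displacements;
    the penalty discourages abrupt changes between neighbouring control
    points, keeping the non-rigid deformation smooth."""
    similarity = float(((np.asarray(moving_warped, float)
                         - np.asarray(fixed, float)) ** 2).mean())
    dy = np.diff(grid, axis=0)   # displacement change between row neighbours
    dx = np.diff(grid, axis=1)   # ... and column neighbours
    smoothness = float((dy ** 2).sum() + (dx ** 2).sum())
    return similarity + alpha * smoothness
```

In the optimisation, each control point is adjusted to reduce this combined cost, trading exact intensity agreement against grid regularity.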

  4. Extracting Information about the Rotator Cuff from Magnetic Resonance Images Using Deterministic and Random Techniques

    PubMed Central

    De Los Ríos, F. A.; Paluszny, M.

    2015-01-01

We consider some methods to extract information about the rotator cuff based on magnetic resonance images; the study aims to define an alternative method of display that might facilitate the detection of partial tears in the supraspinatus tendon. Specifically, we use families of ellipsoidal triangular patches to cover the humerus head near the affected area. These patches are textured and displayed with the information of the magnetic resonance images using the trilinear interpolation technique. For the generation of points to texture each patch, we propose a new method that guarantees the uniform distribution of its points using a random statistical method. Its computational cost, defined as the average computing time to generate a fixed number of points, is significantly lower than that of deterministic and other standard statistical techniques. PMID:25650281
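The abstract does not spell out its random method, but a standard way to get a uniform point distribution on a triangular patch is the square-root warp of barycentric coordinates, shown below as a common stand-in.

```python
import numpy as np

def sample_triangle(v0, v1, v2, n, rng=None):
    """Uniformly sample n points on the triangle with vertices v0, v1, v2.
    Taking the square root of the first random number removes the area
    bias toward v0 that plain barycentric sampling would introduce."""
    rng = np.random.default_rng(rng)
    r1 = np.sqrt(rng.random(n))
    r2 = rng.random(n)
    v0, v1, v2 = (np.asarray(v, float) for v in (v0, v1, v2))
    return ((1 - r1)[:, None] * v0
            + (r1 * (1 - r2))[:, None] * v1
            + (r1 * r2)[:, None] * v2)
```

Because the three coefficients are non-negative and sum to one, every sample lies inside the triangle, and the warp makes the density uniform over its area.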

  5. Visualization and imaging methods for flames in microgravity

    NASA Technical Reports Server (NTRS)

    Weiland, Karen J.

    1993-01-01

The visualization and imaging of flames has long been acknowledged as the starting point for learning about and understanding combustion phenomena. It provides an essential overall picture of the time and length scales of processes and guides the application of other diagnostics. It is perhaps even more important in microgravity combustion studies, where it is often the only non-intrusive diagnostic measurement easily implemented. Imaging also aids in the interpretation of single-point measurements, such as temperature, provided by thermocouples, and velocity, by hot-wire anemometers. This paper outlines the efforts of the Microgravity Combustion Diagnostics staff at NASA Lewis Research Center in the area of visualization and imaging of flames, concentrating on methods applicable for reduced-gravity experimentation. Several techniques are under development: intensified array camera imaging, and two-dimensional temperature and species concentration measurements. A brief summary of results in these areas is presented and future plans mentioned.

  6. First Point-Spread Function and X-Ray Phase Contrast Imaging Results with an 88-mm Diameter Single Crystal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lumpkin, A. H.; Garson, A. B.; Anastasio, M. A.

In this study, we report initial demonstrations of the use of single crystals in indirect x-ray imaging with a benchtop implementation of propagation-based (PB) x-ray phase contrast imaging. Based on single Gaussian peak fits to the x-ray images, we observed a four times smaller system point-spread function (PSF) with the 50-μm thick single crystal scintillators than with the reference polycrystalline phosphor/scintillator. Fiber-optic plate depth-of-focus and Al reflective-coating aspects are also elucidated. Guided by the results from the 25-mm diameter crystal samples, we additionally report the first results with a unique 88-mm diameter single crystal bonded to a fiber optic plate and coupled to the large-format CCD. Both PSF and x-ray phase contrast imaging data are quantified and presented.

  7. Alcohol consumption during adolescence is associated with reduced grey matter volumes.

    PubMed

    Heikkinen, Noora; Niskanen, Eini; Könönen, Mervi; Tolmunen, Tommi; Kekkonen, Virve; Kivimäki, Petri; Tanila, Heikki; Laukkanen, Eila; Vanninen, Ritva

    2017-04-01

Cognitive impairment has been associated with excessive alcohol use, but its neural basis is poorly understood. Chronic excessive alcohol use in adolescence may lead to neuronal loss and volumetric changes in the brain. Our objective was to compare the grey matter volumes of heavy- and light-drinking adolescents. This was a longitudinal study: heavy-drinking adolescents without an alcohol use disorder and their light-drinking controls were followed up for 10 years using questionnaires at three time-points. Magnetic resonance imaging was conducted at the last time-point. The area near Kuopio University Hospital, Finland. The 62 participants were aged 22-28 years and included 35 alcohol users and 27 controls who had been followed up for approximately 10 years. Alcohol use was measured by the Alcohol Use Disorders Identification Test (AUDIT)-C at three time-points during 10 years. Participants were selected based on their AUDIT-C score. Magnetic resonance imaging was conducted at the last time-point. Grey matter volume was determined and compared between heavy- and light-drinking groups using voxel-based morphometry on three-dimensional T1-weighted magnetic resonance images using predefined regions of interest and a threshold of P < 0.05, with small volume correction applied on cluster level. Grey matter volumes were significantly smaller among heavy-drinking participants in the bilateral anterior cingulate cortex, right orbitofrontal and frontopolar cortex, right superior temporal gyrus and right insular cortex compared to the control group (P < 0.05, family-wise error-corrected cluster level). Excessive alcohol use during adolescence appears to be associated with an abnormal development of the brain grey matter. Moreover, the structural changes detected in the insula of alcohol users may reflect a reduced sensitivity to alcohol's negative subjective effects. © 2016 Society for the Study of Addiction.

  8. Metabolic effects of pulmonary obstruction on myocardial functioning: a pilot study using multiple time-point 18F-FDG-PET imaging.

    PubMed

    Choi, Grace G; Han, Yuchi; Weston, Brian; Ciftci, Esra; Werner, Thomas J; Torigian, Drew; Salavati, Ali; Alavi, Abass

    2015-01-01

    The aim of this study was to evaluate fluorine-18 fluorodeoxyglucose (18F-FDG) uptake in the right ventricle (RV) of patients with chronic obstructive pulmonary disease (COPD) and to characterize the variability of 18F-FDG uptake in the RV at different time points following radiotracer administration using PET/computerized tomography (CT). Impaired RV systolic function, RV hypertrophy, and RV dilation are associated with increases in mean pulmonary arterial pressure in patients with COPD. Metabolic changes in the RV using 18F-FDG-PET images 2 and 3 h after tracer injection have not yet been investigated. Twenty-five patients with clinical suspicion of lung cancer underwent 18F-FDG-PET/CT imaging at 1, 2, and 3 h after tracer injection. Standardized uptake values (SUVs) and volumes of the RV were recorded from transaxial sections to quantify the metabolic activity. The SUV of the RV was higher in patients with COPD stages 1-3 as compared with that in patients with COPD stage 0. RV SUV was inversely correlated with FEV1/FVC and pack-years of smoking at 1 h after 18F-FDG injection. In the majority of patients, 18F-FDG activity in the RV decreased over time. There was no significant difference in the RV myocardial free wall and chamber volume on the basis of COPD status. The severity of lung obstruction and pack-years of smoking correlate with the level of 18F-FDG uptake in the RV myocardium, suggesting that there may be metabolic changes in the RV associated with lung obstruction that can be detected noninvasively using 18F-FDG-PET/CT. Multiple time-point images of the RV did not yield any additional value in this study.
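    The SUV metric reported above is a simple normalized ratio. As a hedged illustration (not the study's software; function and variable names are ours), a body-weight-normalized SUV with decay correction of the injected 18F dose can be sketched as:

```python
import math

def suv_bw(activity_kbq_per_ml, injected_dose_mbq, body_weight_kg,
           minutes_post_injection, half_life_min=109.8):
    """Body-weight-normalized SUV for 18F-FDG (illustrative convention).

    The injected dose is decay-corrected to the scan time so that the
    tissue activity and the dose refer to the same moment; 18F has a
    half-life of about 109.8 minutes.
    """
    decayed_dose_kbq = injected_dose_mbq * 1000.0 * math.exp(
        -math.log(2.0) * minutes_post_injection / half_life_min)
    # SUV = tissue concentration / (dose / body weight); ~1 g tissue per ml
    return activity_kbq_per_ml / (decayed_dose_kbq / (body_weight_kg * 1000.0))

# The same tissue concentration yields a higher SUV at a later time point,
# because the decay-corrected dose in the denominator shrinks.
suv_1h = suv_bw(5.0, 370.0, 70.0, 60.0)
suv_3h = suv_bw(5.0, 370.0, 70.0, 180.0)
```

    Multi-time-point protocols such as the 1, 2, and 3 h scans in this study therefore require a consistent decay-correction convention before uptake values at different times can be compared.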

  9. Imaging inflammatory acne: lesion detection and tracking

    NASA Astrophysics Data System (ADS)

    Cula, Gabriela O.; Bargo, Paulo R.; Kollias, Nikiforos

    2010-02-01

    It is known that the effectiveness of acne treatment increases when the lesions are detected earlier, before they can progress into mature wound-like lesions, which lead to scarring and discoloration. However, little is known about the evolution of acne from its early signs until after the lesion heals. In this work we computationally characterize the evolution of inflammatory acne lesions, based on analyzing cross-polarized images that document acne-prone facial skin over time. Taking skin images over time, and being able to follow skin features in these images, present serious challenges due to changes in the appearance of the skin, the difficulty of repositioning the subject, and involuntary movements such as breathing. A computational technique for automatic detection of lesions by separating the background normal skin from the acne lesions, based on fitting Gaussian distributions to the intensity histograms, is presented. In order to track and quantify the evolution of lesions, in terms of the degree of progress or regress, we designed a study to capture facial skin images from an acne-prone young individual, followed over three different time points. Based on the behavior of the lesions between two consecutive time points, the automatically detected lesions are classified into four categories: new lesions, resolved lesions (i.e. lesions that disappear completely), lesions that are progressing, and lesions that are regressing (i.e. lesions in the process of healing). The classification our method achieves correlates well with visual inspection by a trained human grader.
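    The background/lesion separation step lends itself to a compact sketch. The following is a simplified stand-in for the authors' histogram-based method, not their implementation: it fits a single Gaussian to the dominant background mode (robustly, via the median and median absolute deviation) and flags pixels deviating by more than k sigma as candidate lesion pixels; all names and thresholds are illustrative.

```python
import numpy as np

def detect_lesions(image, k=3.0):
    """Flag pixels whose intensity deviates from the background Gaussian.

    A Gaussian is fitted robustly to the background mode using the median
    and the median absolute deviation (MAD), so that lesion pixels do not
    bias the fit; pixels more than k sigma away are marked as lesions.
    """
    pixels = image.ravel().astype(float)
    mu = np.median(pixels)
    sigma = 1.4826 * np.median(np.abs(pixels - mu))  # MAD -> Gaussian sigma
    return np.abs(image - mu) > k * sigma

# Synthetic example: uniform "skin" with one darker (inflamed) patch.
rng = np.random.default_rng(0)
skin = rng.normal(150.0, 5.0, size=(64, 64))
skin[20:28, 20:28] -= 60.0           # lesion: markedly darker region
mask = detect_lesions(skin)
```

    Tracking between consecutive time points can then compare such masks to label lesions as new, resolved, progressing, or regressing.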

  10. Multi-Image Registration for an Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn

    2002-01-01

    An Enhanced Vision System (EVS) utilizing multi-sensor image fusion is currently under development at the NASA Langley Research Center. The EVS will provide enhanced images of the flight environment to assist pilots in poor visibility conditions. Multi-spectral images obtained from a short wave infrared (SWIR), a long wave infrared (LWIR), and a color visible band CCD camera, are enhanced and fused using the Retinex algorithm. The images from the different sensors do not have a uniform data structure: the three sensors not only operate at different wavelengths, but they also have different spatial resolutions, optical fields of view (FOV), and bore-sighting inaccuracies. Thus, in order to perform image fusion, the images must first be co-registered. Image registration is the task of aligning images taken at different times, from different sensors, or from different viewpoints, so that all corresponding points in the images match. In this paper, we present two methods for registering multiple multi-spectral images. The first method performs registration using sensor specifications to match the FOVs and resolutions directly through image resampling. In the second method, registration is obtained through geometric correction based on a spatial transformation defined by user selected control points and regression analysis.
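    The second registration method, user-selected control points plus regression analysis, can be illustrated with a least-squares fit of an affine spatial transformation; a minimal sketch (not the EVS implementation):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src control points to dst.

    src, dst: (N, 2) arrays of matched control points, N >= 3.
    Returns a 2x3 matrix A such that dst ~= [x, y, 1] @ A.T.
    """
    ones = np.ones((len(src), 1))
    X = np.hstack([src, ones])                 # (N, 3) design matrix
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A.T                                 # (2, 3)

def apply_affine(A, pts):
    ones = np.ones((len(pts), 1))
    return np.hstack([pts, ones]) @ A.T

# Recover a known scale + shift from four matched control points.
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
A_true = np.array([[2.0, 0.0, 5.0], [0.0, 2.0, -3.0]])
dst = apply_affine(A_true, src)
A_est = fit_affine(src, dst)
```

    With the transform estimated, each sensor's image can be resampled into a common frame before Retinex enhancement and fusion.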

  11. Registration of Vehicle-Borne Point Clouds and Panoramic Images Based on Sensor Constellations

    PubMed Central

    Yao, Lianbi; Wu, Hangbin; Li, Yayun; Meng, Bin; Qian, Jinfei; Liu, Chun; Fan, Hongchao

    2017-01-01

    A mobile mapping system (MMS) is usually utilized to collect environmental data on and around urban roads. Laser scanners and panoramic cameras are the main sensors of an MMS. This paper presents a new method for the registration of the point clouds and panoramic images based on sensor constellation. After the sensor constellation was analyzed, a feature point, the intersection of the connecting line between the global positioning system (GPS) antenna and the panoramic camera with a horizontal plane, was utilized to separate the point clouds into blocks. The blocks for the central and sideward laser scanners were extracted with the segmentation feature points. Then, the point clouds located in the blocks were separated from the original point clouds. Each point in the blocks was used to find the accurate corresponding pixel in the relative panoramic images via a collinear function, and the position and orientation relationship amongst different sensors. A search strategy is proposed for the correspondence of laser scanners and lenses of panoramic cameras to reduce calculation complexity and improve efficiency. Four cases of different urban road types were selected to verify the efficiency and accuracy of the proposed method. Results indicate that most of the point clouds (with an average of 99.7%) were successfully registered with the panoramic images with great efficiency. Geometric evaluation results indicate that horizontal accuracy was approximately 0.10–0.20 m, and vertical accuracy was approximately 0.01–0.02 m for all cases. Finally, the main factors that affect registration accuracy, including time synchronization amongst different sensors, system positioning and vehicle speed, are discussed. PMID:28398256
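    The per-point correspondence with a panoramic pixel can be sketched for the idealized case of an equirectangular panorama centered at the camera; the actual method also applies the position and orientation relationship amongst the sensors, and all names here are ours:

```python
import math

def point_to_panorama_pixel(x, y, z, width, height):
    """Map a 3D point in the panoramic camera's frame to a pixel.

    Assumes an ideal equirectangular panorama: longitude (azimuth) maps
    linearly to the horizontal axis, latitude (elevation) to the vertical.
    """
    azimuth = math.atan2(y, x)                      # [-pi, pi]
    elevation = math.atan2(z, math.hypot(x, y))     # [-pi/2, pi/2]
    u = (azimuth + math.pi) / (2.0 * math.pi) * width
    v = (math.pi / 2.0 - elevation) / math.pi * height
    return u, v

# A point straight ahead (+x), level with the camera, lands on the
# horizontal center line of the panorama.
u, v = point_to_panorama_pixel(10.0, 0.0, 0.0, 4096, 2048)
```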

  12. A novel mesh processing based technique for 3D plant analysis

    PubMed Central

    2012-01-01

    Background In recent years, imaging based, automated, non-invasive, and non-destructive high-throughput plant phenotyping platforms have become popular tools for plant biology, underpinning the field of plant phenomics. Such platforms acquire and record large amounts of raw data that must be accurately and robustly calibrated, reconstructed, and analysed, requiring the development of sophisticated image understanding and quantification algorithms. The raw data can be processed in different ways, and the past few years have seen the emergence of two main approaches: 2D image processing and 3D mesh processing algorithms. Direct image quantification methods (usually 2D) dominate the current literature due to their comparative simplicity. However, 3D mesh analysis offers tremendous potential for accurately estimating specific morphological features cross-sectionally and monitoring them over time. Results In this paper, we present a novel 3D mesh based technique developed for temporal high-throughput plant phenomics and perform initial tests for the analysis of Gossypium hirsutum vegetative growth. Based on plant meshes previously reconstructed from multi-view images, the methodology involves several stages, including morphological mesh segmentation, phenotypic parameter estimation, and the tracking of plant organs over time. The initial study focuses on presenting and validating the accuracy of the methodology on dicotyledons such as cotton, but we believe the approach will be more broadly applicable. This study involved applying our technique to a set of six Gossypium hirsutum (cotton) plants studied over four time-points. Manual measurements, performed for each plant at every time-point, were used to assess the accuracy of our pipeline and quantify the error on the estimated morphological parameters. 
Conclusion By directly comparing our automated mesh based quantitative data with manual measurements of individual stem height, leaf width and leaf length, we obtained mean absolute errors of 9.34%, 5.75%, and 8.78%, and correlation coefficients of 0.88, 0.96, and 0.95, respectively. The temporal matching of leaves was accurate in 95% of the cases, and the average execution time required to analyse a plant over four time-points was 4.9 minutes. The mesh processing based methodology is thus considered suitable for quantitative 4D monitoring of plant phenotypic features. PMID:22553969

  13. The adaptive statistical iterative reconstruction-V technique for radiation dose reduction in abdominal CT: comparison with the adaptive statistical iterative reconstruction technique.

    PubMed

    Kwon, Heejin; Cho, Jinhan; Oh, Jongyeong; Kim, Dongwon; Cho, Junghyun; Kim, Sanghyun; Lee, Sangyun; Lee, Jihyun

    2015-10-01

    To investigate whether reduced radiation dose abdominal CT images reconstructed with adaptive statistical iterative reconstruction V (ASIR-V) compromise the depiction of clinically competent features when compared with the currently used routine radiation dose CT images reconstructed with ASIR. 27 consecutive patients (mean body mass index: 23.55 kg m(-2)) underwent CT of the abdomen at two time points. At the first time point, abdominal CT was scanned at a noise index level of 21.45 with automatic current modulation at 120 kV. Images were reconstructed with 40% ASIR, the routine protocol of Dong-A University Hospital. At the second time point, follow-up scans were performed at a noise index level of 30. Images were reconstructed with filtered back projection (FBP), 40% ASIR, 30% ASIR-V, 50% ASIR-V and 70% ASIR-V for the reduced radiation dose. Both quantitative and qualitative analyses of image quality were conducted. The CT dose index was also recorded. At the follow-up study, the mean dose reduction relative to the currently used common radiation dose was 35.37% (range: 19-49%). The overall subjective image quality and diagnostic acceptability scores of 50% ASIR-V at the reduced radiation dose were nearly identical to those recorded when using the initial routine-dose CT with 40% ASIR. Subjective ratings of the qualitative analysis revealed that, of all the reduced radiation dose CT series reconstructed, 30% ASIR-V and 50% ASIR-V were associated with higher image quality, with lower noise and artefacts as well as good sharpness, when compared with 40% ASIR and FBP. However, the sharpness score at 70% ASIR-V was considered to be worse than that at 40% ASIR. Objective image noise for 50% ASIR-V was 34.24% and 46.34% lower than that for 40% ASIR and FBP, respectively. Abdominal CT images reconstructed with ASIR-V facilitate radiation dose reductions of up to 35% when compared with ASIR. 
This study represents the first clinical research experiment to use ASIR-V, the newest version of iterative reconstruction. Use of the ASIR-V algorithm decreased image noise and increased image quality when compared with the ASIR and FBP methods. These results suggest that high-quality low-dose CT may represent a new clinical option.

  14. The adaptive statistical iterative reconstruction-V technique for radiation dose reduction in abdominal CT: comparison with the adaptive statistical iterative reconstruction technique

    PubMed Central

    Cho, Jinhan; Oh, Jongyeong; Kim, Dongwon; Cho, Junghyun; Kim, Sanghyun; Lee, Sangyun; Lee, Jihyun

    2015-01-01

    Objective: To investigate whether reduced radiation dose abdominal CT images reconstructed with adaptive statistical iterative reconstruction V (ASIR-V) compromise the depiction of clinically competent features when compared with the currently used routine radiation dose CT images reconstructed with ASIR. Methods: 27 consecutive patients (mean body mass index: 23.55 kg m−2) underwent CT of the abdomen at two time points. At the first time point, abdominal CT was scanned at a noise index level of 21.45 with automatic current modulation at 120 kV. Images were reconstructed with 40% ASIR, the routine protocol of Dong-A University Hospital. At the second time point, follow-up scans were performed at a noise index level of 30. Images were reconstructed with filtered back projection (FBP), 40% ASIR, 30% ASIR-V, 50% ASIR-V and 70% ASIR-V for the reduced radiation dose. Both quantitative and qualitative analyses of image quality were conducted. The CT dose index was also recorded. Results: At the follow-up study, the mean dose reduction relative to the currently used common radiation dose was 35.37% (range: 19–49%). The overall subjective image quality and diagnostic acceptability scores of 50% ASIR-V at the reduced radiation dose were nearly identical to those recorded when using the initial routine-dose CT with 40% ASIR. Subjective ratings of the qualitative analysis revealed that, of all the reduced radiation dose CT series reconstructed, 30% ASIR-V and 50% ASIR-V were associated with higher image quality, with lower noise and artefacts as well as good sharpness, when compared with 40% ASIR and FBP. However, the sharpness score at 70% ASIR-V was considered to be worse than that at 40% ASIR. Objective image noise for 50% ASIR-V was 34.24% and 46.34% lower than that for 40% ASIR and FBP, respectively. Conclusion: Abdominal CT images reconstructed with ASIR-V facilitate radiation dose reductions of up to 35% when compared with ASIR. 
Advances in knowledge: This study represents the first clinical research experiment to use ASIR-V, the newest version of iterative reconstruction. Use of the ASIR-V algorithm decreased image noise and increased image quality when compared with the ASIR and FBP methods. These results suggest that high-quality low-dose CT may represent a new clinical option. PMID:26234823

  15. Parallax barrier engineering for image quality improvement in an autostereoscopic 3D display.

    PubMed

    Kim, Sung-Kyu; Yoon, Ki-Hyuk; Yoon, Seon Kyu; Ju, Heongkyu

    2015-05-18

    We present an image-quality improvement in a parallax barrier (PB)-based multiview autostereoscopic 3D display system under real-time tracking of the positions of a viewer's eyes. The system exploits a parallax barrier engineered to offer significantly improved quality of three-dimensional images for a moving viewer without eyewear under dynamic eye tracking. The improved image quality includes enhanced uniformity of image brightness, reduced point crosstalk, and no pseudoscopic effects. We control the relative ratio between two parameters, i.e., the pixel size and the aperture of a parallax barrier slit, to improve the uniformity of image brightness at a viewing zone. The eye tracking that monitors the positions of a viewer's eyes enables the pixel data control software to turn on only the pixels for view images near the viewer's eyes (with the other pixels turned off), thus reducing point crosstalk. The software, combined with eye tracking, provides the right images for the respective eyes and therefore produces no pseudoscopic effects at zone boundaries. The viewing zone can span an area larger than the central viewing zone offered by a conventional PB-based multiview autostereoscopic 3D display (without eye tracking). Our 3D display system also provides multiple views for motion parallax under eye tracking. More importantly, we demonstrate a substantial reduction of the point crosstalk of images at the viewing zone, to a level comparable to that of a commercialized eyewear-assisted 3D display system. The multiview autostereoscopic 3D display presented can thus largely resolve the point crosstalk problem, which is one of the critical factors that have made it difficult for previous multiview autostereoscopic 3D display technologies to replace their eyewear-assisted counterparts.
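    The pixel-size/slit-aperture ratio discussed above rests on classical parallax-barrier geometry. The following sketch gives the standard two-view design equations derived from similar triangles; the paper's multiview, eye-tracked system generalizes this, and the symbols and default values here are ours:

```python
def barrier_design(pixel_pitch_mm, viewing_distance_mm, eye_separation_mm=65.0):
    """Two-view parallax-barrier gap and slit pitch from similar triangles.

    gap: barrier-to-pixel-plane distance such that two adjacent pixel
    columns seen through one slit separate into the two eyes.
    pitch: slit spacing such that neighbouring slits repeat the same
    eye's view (slightly less than two pixel pitches).
    """
    gap = pixel_pitch_mm * viewing_distance_mm / eye_separation_mm
    pitch = (2.0 * pixel_pitch_mm * viewing_distance_mm
             / (viewing_distance_mm + gap))
    return gap, pitch

# Example: 0.1 mm pixel pitch viewed from 600 mm.
gap, pitch = barrier_design(pixel_pitch_mm=0.1, viewing_distance_mm=600.0)
```

    The slit aperture within each pitch is then a free parameter, which is the quantity the authors tune against the pixel size to even out brightness across the viewing zone.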

  16. Image subsampling and point scoring approaches for large-scale marine benthic monitoring programs

    NASA Astrophysics Data System (ADS)

    Perkins, Nicholas R.; Foster, Scott D.; Hill, Nicole A.; Barrett, Neville S.

    2016-07-01

    Benthic imagery is an effective tool for quantitative description of ecologically and economically important benthic habitats and biota. The recent development of autonomous underwater vehicles (AUVs) allows surveying of spatial scales that were previously unfeasible. However, an AUV collects a large number of images, the scoring of which is time- and labour-intensive. There is a need to optimise the way that subsamples of imagery are chosen and scored to gain meaningful inferences for ecological monitoring studies. We examine the trade-off between the number of images selected within transects and the number of random points scored within images in estimating the percent cover of target biota, the typical output of such monitoring programs. We also investigate the efficacy of various image selection approaches, such as systematic or random, on the bias and precision of cover estimates. We use simulated biotas that have varying size, abundance and distributional patterns. We find that a relatively small sampling effort is required to minimise bias. An increased precision for groups that are likely to be the focus of monitoring programs is best gained through increasing the number of images sampled rather than the number of points scored within images. For rare species, sampling using point count approaches is unlikely to provide sufficient precision, and alternative sampling approaches may need to be employed. The approach by which images are selected (simple random sampling, regularly spaced, etc.) had no discernible effect on mean and variance estimates, regardless of the distributional pattern of biota. Field validation of our findings is provided through Monte Carlo resampling analysis of a previously scored benthic survey from temperate waters. We show that point count sampling approaches are capable of providing relatively precise cover estimates for candidate groups that are not overly rare. 
The amount of sampling required, in terms of both the number of images and number of points, varies with the abundance, size and distributional pattern of target biota. Therefore, we advocate either the incorporation of prior knowledge or the use of baseline surveys to establish key properties of intended target biota in the initial stages of monitoring programs.
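    The image/point trade-off described above can be explored with a small two-stage simulation; the following is a hedged sketch on synthetic binary imagery, not the authors' simulation framework:

```python
import numpy as np

def estimate_cover(transect, n_images, n_points, rng):
    """Two-stage estimate of percent cover from a stack of binary images.

    Stage 1: choose n_images images at random from the transect.
    Stage 2: score n_points random pixels in each image as biota/not,
    then average the per-image proportions.
    """
    idx = rng.choice(len(transect), size=n_images, replace=False)
    estimates = []
    for i in idx:
        img = transect[i]
        rows = rng.integers(0, img.shape[0], size=n_points)
        cols = rng.integers(0, img.shape[1], size=n_points)
        estimates.append(img[rows, cols].mean())
    return float(np.mean(estimates))

rng = np.random.default_rng(1)
# Synthetic transect: 200 binary images with true cover ~20%; patchy or
# rare biota would need a spatially correlated generator instead.
transect = rng.random((200, 64, 64)) < 0.2
est = estimate_cover(transect, n_images=30, n_points=25, rng=rng)
```

    Repeating the estimate across many draws, while varying n_images against n_points at fixed total effort, reproduces the kind of precision comparison the study reports.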

  17. Blind deconvolution of astronomical images with band limitation determined by optical system parameters

    NASA Astrophysics Data System (ADS)

    Luo, L.; Fan, M.; Shen, M. Z.

    2007-07-01

    Atmospheric turbulence greatly limits the spatial resolution of astronomical images acquired by large ground-based telescopes. The recorded image can be modelled as the convolution of the object function with the point spread function. The statistical relationship between the measured image data, the estimated object, and the point spread function follows the Bayes conditional probability distribution, from which a maximum-likelihood formulation is obtained. A blind deconvolution approach based on maximum-likelihood estimation, with a band-limitation constraint determined by the real optical system, is presented for removing the effect of atmospheric turbulence on this class of images; the convolution error function is minimized using a conjugate-gradient optimization algorithm. As a result, the object function and the point spread function can be estimated simultaneously from a few recorded images. Following the principles of Fourier optics, the relationship between the telescope optical-system parameters and the image band constraint in the frequency domain is formulated for the transformations between the spatial and frequency domains. Convergence of the algorithm is improved by constraining the estimated functions (the object function and the point spread function) to be nonnegative and the point spread function to be band limited. To avoid losing Fourier components beyond the cutoff frequency during these transformations, when the sampled image data, the spatial domain, and the frequency domain have matching sizes, the detector element (e.g., a pixel of the CCD) should be smaller than one quarter of the diffraction speckle diameter of the telescope when acquiring images at the focal plane. 
The proposed method can easily be applied to the restoration of wide-field turbulence-degraded images because no object-support constraint is used in the algorithm. The validity of the method is examined by computer simulation and by the restoration of real astronomical image data of Alpha Psc. The results suggest that blind deconvolution with the real optical band constraint can remove the effect of atmospheric turbulence on the observed images, and that the spatial resolution of the object image can reach or exceed the diffraction-limited level.
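    The band-limitation constraint amounts to projecting the PSF estimate onto the frequencies passed by the optics. A minimal sketch for a circular aperture follows, with the cutoff expressed as a fraction of the sampling grid's Nyquist frequency (in practice it is fixed by the aperture diameter, wavelength, and focal length); function and variable names are ours:

```python
import numpy as np

def band_limit(psf, cutoff_fraction):
    """Zero all Fourier components of a PSF beyond a circular cutoff.

    cutoff_fraction: optical cutoff expressed as a fraction of the grid's
    Nyquist frequency. The result is clipped to be nonnegative and
    renormalized to unit energy, mirroring the algorithm's constraints.
    """
    n = psf.shape[0]
    f = np.fft.fftfreq(n)                              # cycles per sample
    fx, fy = np.meshgrid(f, f)
    mask = np.hypot(fx, fy) <= cutoff_fraction * 0.5   # Nyquist = 0.5
    out = np.real(np.fft.ifft2(np.fft.fft2(psf) * mask))
    out = np.clip(out, 0.0, None)                      # nonnegativity
    return out / out.sum()

# A delta-function PSF (flat spectrum) becomes a diffraction-limited blob.
psf = np.zeros((64, 64)); psf[32, 32] = 1.0
limited = band_limit(psf, cutoff_fraction=0.5)
```

    Applying such a projection after each conjugate-gradient update keeps the PSF estimate consistent with what the telescope optics can physically deliver.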

  18. Corrosion/erosion detection of boiler tubes utilizing pulsed infrared imaging

    NASA Astrophysics Data System (ADS)

    Bales, Maurice J.; Bishop, Chip C.

    1995-05-01

    This paper discusses a new technique for locating and detecting wall thickness reduction in boiler tubes caused by erosion/corrosion. Traditional means for this type of defect detection utilizes ultrasonics (UT) to perform a point by point measurement at given intervals of the tube length, which requires extensive and costly shutdown or `outage' time to complete the inspection, and has led to thin areas going undetected simply because they were located in between the sampling points. Pulsed infrared imaging (PII) can provide nearly 100% inspection of the tubes in a fraction of the time needed for UT. The IR system and heat source used in this study do not require any special access or fixed scaffolding, and can be remotely operated from a distance of up to 100 feet. This technique has been tried experimentally in a laboratory environment and verified in an actual field application. Since PII is a non-contact technique, considerable time and cost savings should be realized as well as the ability to predict failures rather than repairing them once they have occurred.

  19. Optical aberration correction for simple lenses via sparse representation

    NASA Astrophysics Data System (ADS)

    Cui, Jinlin; Huang, Wei

    2018-04-01

    Simple lenses with spherical surfaces are lightweight, inexpensive, highly flexible, and easily processed. However, they suffer from optical aberrations that limit high-quality photography. In this study, we propose a set of computational photography techniques based on sparse signal representation to remove optical aberrations, thereby allowing the recovery of images captured through a single-lens camera. The primary advantage of the proposed method is that many prior point spread functions calibrated at different depths can be used to restore images in a short time; the scheme applies generally to nonblind deconvolution methods, addressing the excessive processing time caused by the large number of point spread functions. The optical software CODE V is applied to examine the reliability of the proposed method by simulation. The simulation results reveal that the suggested method outperforms traditional methods, and the performance of a single-lens camera is significantly enhanced both qualitatively and perceptually. In particular, the prior information obtained with CODE V can be used to process real images from a single-lens camera, which provides an alternative approach for conveniently and accurately obtaining the point spread functions of single-lens cameras.
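    Selecting a depth-calibrated PSF and then running nonblind deconvolution can be sketched as follows. Note that this sketch substitutes a plain Wiener filter for the paper's sparse-representation prior, so it illustrates only the PSF-bank workflow, not the actual restoration method; all names are ours:

```python
import numpy as np

def gaussian_psf(n, sigma):
    """Normalized isotropic Gaussian kernel (stand-in for a calibrated PSF)."""
    ax = np.arange(n) - n // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def pick_psf(psf_bank, depth):
    """Choose the calibrated PSF whose depth is closest to the scene depth."""
    nearest = min(psf_bank, key=lambda d: abs(d - depth))
    return psf_bank[nearest]

def wiener_deconvolve(blurred, psf, nsr=1e-4):
    """Nonblind Wiener deconvolution with a known PSF (nsr regularizes)."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H) ** 2 + nsr) * G))

# Hypothetical PSF bank calibrated at two depths (in metres).
psf_bank = {1.0: gaussian_psf(63, 1.5), 3.0: gaussian_psf(63, 3.0)}
psf = pick_psf(psf_bank, depth=1.2)          # closest calibration: 1.0 m

# Round trip: blur a synthetic scene with the selected PSF, then restore.
scene = np.zeros((63, 63)); scene[20:40, 30] = 1.0
H = np.fft.fft2(np.fft.ifftshift(psf), s=scene.shape)
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * H))
restored = wiener_deconvolve(blurred, psf)
```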

  20. The Pointing Self-calibration Algorithm for Aperture Synthesis Radio Telescopes

    NASA Astrophysics Data System (ADS)

    Bhatnagar, S.; Cornwell, T. J.

    2017-11-01

    This paper is concerned with algorithms for calibration of direction-dependent effects (DDE) in aperture synthesis radio telescopes (ASRT). After correction of direction-independent effects (DIE) using self-calibration, imaging performance can be limited by the imprecise knowledge of the forward gain of the elements in the array. In general, the forward gain pattern is directionally dependent and varies with time due to a number of reasons. Some factors, such as rotation of the primary beam with Parallactic Angle for Azimuth-Elevation mount antennas, are known a priori. Others, such as antenna pointing errors and structural deformation/projection effects for aperture-array elements, cannot be measured a priori. Thus, in addition to algorithms to correct for DD effects known a priori, algorithms to solve for DD gains are required for high dynamic range imaging. Here, we discuss a mathematical framework for antenna-based DDE calibration algorithms and show that this framework leads to computationally efficient optimal algorithms that scale well in a parallel computing environment. As an example of an antenna-based DD calibration algorithm, we demonstrate the Pointing SelfCal (PSC) algorithm to solve for the antenna pointing errors. Our analysis shows that the sensitivity of modern ASRT is sufficient to solve for antenna pointing errors and other DD effects. We also discuss the use of the PSC algorithm in real-time calibration systems and extensions to an antenna Shape SelfCal algorithm for real-time tracking and corrections for pointing offsets and changes in antenna shape.

  1. The Pointing Self-calibration Algorithm for Aperture Synthesis Radio Telescopes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhatnagar, S.; Cornwell, T. J., E-mail: sbhatnag@nrao.edu

    This paper is concerned with algorithms for calibration of direction-dependent effects (DDE) in aperture synthesis radio telescopes (ASRT). After correction of direction-independent effects (DIE) using self-calibration, imaging performance can be limited by the imprecise knowledge of the forward gain of the elements in the array. In general, the forward gain pattern is directionally dependent and varies with time due to a number of reasons. Some factors, such as rotation of the primary beam with Parallactic Angle for Azimuth–Elevation mount antennas, are known a priori. Others, such as antenna pointing errors and structural deformation/projection effects for aperture-array elements, cannot be measured a priori. Thus, in addition to algorithms to correct for DD effects known a priori, algorithms to solve for DD gains are required for high dynamic range imaging. Here, we discuss a mathematical framework for antenna-based DDE calibration algorithms and show that this framework leads to computationally efficient optimal algorithms that scale well in a parallel computing environment. As an example of an antenna-based DD calibration algorithm, we demonstrate the Pointing SelfCal (PSC) algorithm to solve for the antenna pointing errors. Our analysis shows that the sensitivity of modern ASRT is sufficient to solve for antenna pointing errors and other DD effects. We also discuss the use of the PSC algorithm in real-time calibration systems and extensions to an antenna Shape SelfCal algorithm for real-time tracking and corrections for pointing offsets and changes in antenna shape.

  2. Using high-content imaging data from ToxCast to analyze toxicological tipping points (TDS)

    EPA Science Inventory

    Translating results obtained from high-throughput screening to risk assessment is vital for reducing dependence on animal testing. We studied the effects of 976 chemicals (ToxCast Phase I and II) in HepG2 cells using high-content imaging (HCI) to measure dose and time-depende...

  3. The Benefits and Limitations of Using Ultrasonography to Supplement Anatomical Understanding

    ERIC Educational Resources Information Center

    Sweetman, Greg M.; Crawford, Gail; Hird, Kathryn; Fear, Mark W.

    2013-01-01

    Anatomical understanding is critical to medical education. With reduced teaching time and limited cadaver availability, it is important to investigate how best to utilize in vivo imaging to supplement anatomical understanding and better prepare medical graduates for the proliferation of point-of-care imaging in the future. To investigate whether…

  4. Monitoring Bridge Dynamic Deformation in Vibration by Digital Photography

    NASA Astrophysics Data System (ADS)

    Yu, Chengxin; Zhang, Guojian; Liu, Xiaodong; Fan, Li; Hai, Hua

    2018-01-01

    This study adopts digital photography to monitor bridge dynamic deformation in vibration. Digital photography in this study is based on PST-TBPM (photographing scale transformation-time baseline parallax method). First, we image the bridge at rest to obtain a zero (reference) image; we then continuously image the bridge in vibration to obtain the successive images. Based on the reference points in each image, PST-TBPM is used to calculate the dynamic deformation values of the deformation points. Results show that the average measurement accuracies are 0.685 pixels (0.51 mm) and 0.635 pixels (0.47 mm) in the X and Z directions, respectively. The maximal deformations of the bridge in the X and Z directions are 4.53 pixels and 5.21 pixels, respectively. PST-TBPM remains valid when the photographing direction is not perpendicular to the bridge. Digital photography as described here can be used to assess bridge health by monitoring the dynamic deformation of a bridge in vibration, and the deformation trend curves can also warn of possible dangers over time.

  5. Real-time advanced spinal surgery via visible patient model and augmented reality system.

    PubMed

    Wu, Jing-Ren; Wang, Min-Liang; Liu, Kai-Che; Hu, Ming-Hsien; Lee, Pei-Yuan

    2014-03-01

    This paper presents an advanced augmented reality system for spinal surgery assistance, and develops entry-point guidance prior to vertebroplasty spinal surgery. Based on image-based marker detection and tracking, the proposed camera-projector system superimposes pre-operative 3-D images onto patients. The patient's preoperative 3-D image model is registered by projecting it onto the patient such that the synthetic 3-D model merges with the real patient image, enabling the surgeon to see through the patient's anatomy. The proposed method is much simpler than heavy and computationally challenging navigation systems, and also reduces radiation exposure. The system is experimentally tested on a preoperative 3-D model, a dummy patient model, and an animal cadaver model. The feasibility and accuracy of the proposed system are verified on three patients undergoing spinal surgery in the operating theater. The results of these clinical trials are extremely promising, with surgeons reporting favorably on the reduced time needed to find a suitable entry point and the reduced radiation dose to patients. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  6. Estimating vehicle height using homographic projections

    DOEpatents

    Cunningham, Mark F; Fabris, Lorenzo; Gee, Timothy F; Ghebretati, Jr., Frezghi H; Goddard, James S; Karnowski, Thomas P; Ziock, Klaus-peter

    2013-07-16

    Multiple homography transformations corresponding to different heights are generated in the field of view. A group of salient points within a common estimated height range is identified in a time series of video images of a moving object. Inter-salient point distances are measured for the group of salient points under the multiple homography transformations corresponding to the different heights. Variations in the inter-salient point distances under the multiple homography transformations are compared. The height of the group of salient points is estimated to be the height corresponding to the homography transformation that minimizes the variations.
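
    The height-selection step can be sketched as follows; the point tracks, candidate homographies, and summed-variance criterion below are illustrative assumptions, since the patent text does not specify the implementation:

```python
import numpy as np

def apply_h(H, pts):
    """Apply a 3x3 homography to an (N, 2) array of image points."""
    ph = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = ph @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

def estimate_height(tracks, homographies, heights):
    """tracks: list over time of (N, 2) salient-point arrays (hypothetical
    input). homographies: one 3x3 matrix per candidate height. Returns the
    height whose homography keeps inter-point distances most stable over
    the time series, per the criterion described in the abstract."""
    best_h, best_var = None, np.inf
    for H, h in zip(homographies, heights):
        dists = []
        for pts in tracks:
            world = apply_h(H, pts)
            # pairwise inter-salient-point distances at this time step
            d = np.linalg.norm(world[:, None] - world[None, :], axis=-1)
            dists.append(d[np.triu_indices(len(pts), k=1)])
        # total variation of each distance across the time series
        var = np.var(np.stack(dists), axis=0).sum()
        if var < best_var:
            best_h, best_var = h, var
    return best_h
```

    For a rigid group of points moving on the plane, only the homography matching their true height maps them to constant inter-point distances; any other mapping distorts the group differently as it moves.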

  7. Topological photonic crystal with ideal Weyl points

    NASA Astrophysics Data System (ADS)

    Wang, Luyang; Jian, Shao-Kai; Yao, Hong

    Weyl points in three-dimensional photonic crystals behave as monopoles of Berry flux in momentum space. Here, based on symmetry analysis, we show that a minimal number of symmetry-related Weyl points can be realized in time-reversal invariant photonic crystals. We propose to realize these ``ideal'' Weyl points in modified double-gyroid photonic crystals, which is confirmed by our first-principle photonic band-structure calculations. Photonic crystals with ideal Weyl points are qualitatively advantageous in applications such as angular and frequency selectivity, broadband invisibility cloaking, and broadband 3D-imaging.

  8. Within-subject template estimation for unbiased longitudinal image analysis.

    PubMed

    Reuter, Martin; Schmansky, Nicholas J; Rosas, H Diana; Fischl, Bruce

    2012-07-16

    Longitudinal image analysis has become increasingly important in clinical studies of normal aging and neurodegenerative disorders. Furthermore, there is a growing appreciation of the potential utility of longitudinally acquired structural images and reliable image processing to evaluate disease modifying therapies. Challenges have been related to the variability that is inherent in the available cross-sectional processing tools, to the introduction of bias in longitudinal processing and to potential over-regularization. In this paper we introduce a novel longitudinal image processing framework, based on unbiased, robust, within-subject template creation, for automatic surface reconstruction and segmentation of brain MRI of arbitrarily many time points. We demonstrate that it is essential to treat all input images exactly the same, since removing only interpolation asymmetries is not sufficient to remove processing bias. We successfully reduce variability and avoid over-regularization by initializing the processing at each time point with common information from the subject template. The presented results show a significant increase in precision and discrimination power while preserving the ability to detect large anatomical deviations; as such, they hold great potential in clinical applications, e.g. allowing for smaller sample sizes or shorter trials to establish disease specific biomarkers or to quantify drug effects. Copyright © 2012 Elsevier Inc. All rights reserved.

  9. Laser vision seam tracking system based on image processing and continuous convolution operator tracker

    NASA Astrophysics Data System (ADS)

    Zou, Yanbiao; Chen, Tao

    2018-06-01

    To address the problem of low welding precision caused by the poor real-time tracking performance of common welding robots, a novel seam tracking system with excellent real-time tracking performance and high accuracy is designed based on the morphological image processing method and the continuous convolution operator tracker (CCOT) object tracking algorithm. The system consists of a six-axis welding robot, a line laser sensor, and an industrial computer. This work also studies the measurement principle involved in the designed system. Through the CCOT algorithm, the weld feature points are determined in real time from the noisy image during the welding process, and the 3D coordinate values of these points are obtained according to the measurement principle to control the movement of the robot and the torch in real time. Experimental results show that the sensor operates at a frequency of 50 Hz. The welding torch runs smoothly even under strong arc light and splash interference. Tracking error is within ±0.2 mm, and the minimal distance between the laser stripe and the welding molten pool can reach 15 mm, which fully satisfies actual welding requirements.

  10. Time-of-Flight Microwave Camera

    PubMed Central

    Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh

    2015-01-01

    Microwaves can penetrate many obstructions that are opaque at visible wavelengths, however microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable “stealth” regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz–12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows “camera-like” behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum. PMID:26434598
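
    The depth resolution of such an FMCW receiver follows directly from its swept bandwidth. A minimal sketch of the textbook FMCW range relations (illustrative constants, not the authors' hardware):

```python
# Range from FMCW beat frequency: a linear chirp of bandwidth B swept over
# T seconds returns an echo whose beat frequency f_b is proportional to the
# round-trip delay. Numbers below are illustrative, not the paper's.
C = 299_792_458.0  # speed of light, m/s

def fmcw_range(f_beat_hz, bandwidth_hz, sweep_s):
    """Target range implied by a beat frequency for a linear FMCW chirp:
    R = c * f_b * T / (2 * B)."""
    return C * f_beat_hz * sweep_s / (2.0 * bandwidth_hz)

def range_resolution(bandwidth_hz):
    """Depth resolution bound c / (2B) set by the swept bandwidth."""
    return C / (2.0 * bandwidth_hz)
```

    `range_resolution(4e9)` gives about 3.7 cm for the 4 GHz swept across X band, consistent with the quoted 200 ps time resolution (200 ps of light travel is about 6 cm of optical path).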

  11. Time-of-Flight Microwave Camera

    NASA Astrophysics Data System (ADS)

    Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh

    2015-10-01

    Microwaves can penetrate many obstructions that are opaque at visible wavelengths, however microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable “stealth” regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz-12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows “camera-like” behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum.

  12. Automatic Tie Pointer for In-Situ Pointing Correction

    NASA Technical Reports Server (NTRS)

    Deen, Robert G.

    2011-01-01

    The MARSAUTOTIE program generates tie points for use with the Mars pointing correction software "In-Situ Pointing Correction and Rover Microlocalization," (NPO-46696) Software Tech Briefs, Vol. 34, No. 9 (September 2010), page 18, in a completely automated manner, with no operator intervention. It takes the place of MARSTIE, although MARSTIE can be used to interactively edit the tie points afterwards. These tie points are used to create a mosaic whose seams (boundaries of input images) have been geometrically corrected to reduce or eliminate errors and mis-registrations. The methods used to find appropriate tie points for use in creating a mosaic are unique, having been designed to work in concert with the "MARSNAV" program to be most effective in reducing or eliminating geometric seams in a mosaic. The program takes the input images and finds overlaps according to the nominal pointing. It then finds the most interesting areas using a scene activity metric. Points with higher scene activity are more likely to correlate successfully in the next step. It then uses correlation techniques to find matching points in the overlapped image. Finally, it performs a series of steps to reduce the number of tie points to a manageable level. These steps incorporate a number of heuristics that have been devised using experience gathered by tie pointing mosaics manually during MER operations. The software makes use of the PIG library as described in "Planetary Image Geometry Library" (NPO-46658), NASA Tech Briefs, Vol. 34, No. 12 (December 2010), page 30, so it is multi-mission, applicable without change to any in-situ mission supported by PIG. The MARSAUTOTIE algorithm is automated, so it requires no user intervention. Although at the time of this reporting it has not been done, this program should be suitable for integration into a fully automated mosaic production pipeline.
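
    The activity-then-correlate selection described above can be sketched as below; the local-variance metric, window size, and ranking are illustrative stand-ins for MARSAUTOTIE's unpublished heuristics:

```python
import numpy as np

def scene_activity(img, win=8):
    """Local intensity variance as a simple 'scene activity' proxy
    (an assumption; the actual MARSAUTOTIE metric is not published here)."""
    h, w = img.shape
    scores = np.zeros((h // win, w // win))
    for i in range(h // win):
        for j in range(w // win):
            scores[i, j] = img[i*win:(i+1)*win, j*win:(j+1)*win].var()
    return scores

def candidate_tiepoints(img, win=8, n=20):
    """Return centers of the n highest-activity windows: the points most
    likely to correlate successfully in the overlapping image."""
    scores = scene_activity(img, win)
    order = np.argsort(scores, axis=None)[::-1]          # best first
    idx = np.dstack(np.unravel_index(order, scores.shape))[0][:n]
    return [(int(i*win + win//2), int(j*win + win//2)) for i, j in idx]
```

    Flat, featureless windows score near zero and are skipped, which is exactly why they would fail in the correlation step that follows.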

  13. Neuroprotective effect of agmatine in rats with transient cerebral ischemia using MR imaging and histopathologic evaluation.

    PubMed

    Huang, Y C; Tzeng, W S; Wang, C C; Cheng, B C; Chang, Y K; Chen, H H; Lin, P C; Huang, T Y; Chuang, T J; Lin, J W; Chang, C P

    2013-09-01

    This study aimed to further investigate the effects of agmatine on brain edema in rats with middle cerebral artery occlusion (MCAO) injury using magnetic resonance imaging (MRI) monitoring and biochemical and histopathologic evaluation. Following surgical induction of MCAO for 90 min, agmatine was injected 5 min after the beginning of reperfusion and again once daily for the next 3 post-operative days. The events during ischemia and reperfusion were investigated by T2-weighted images (T2WI), serial diffusion-weighted images (DWI), calculated apparent diffusion coefficient (ADC) maps and contrast-enhanced T1-weighted images (CE-T1WI) during 3-72 h in a 1.5 T Siemens MAGNETOM Avanto scanner. Lesion volumes were analyzed in a blinded and randomized manner. Triphenyltetrazolium chloride (TTC), Nissl, and Evans Blue stainings were performed on the corresponding sections. Increased lesion volumes derived from T2WI, DWI, ADC, CE-T1WI, and TTC were all noted at 3 h and peaked at 24-48 h after MCAO injury. TTC-derived infarct volumes were not significantly different from the T2WI-, DWI-, and CE-T1WI-derived lesion volumes at the last imaging time point (72 h), except for significantly smaller ADC lesions in the MCAO model (P<0.05). Volumetric calculation based on the TTC-derived infarct also correlated significantly more strongly with volumetric calculation based on the last imaging time point derived from T2WI, DWI or CE-T1WI than from ADC (P<0.05). At the last imaging time point, a significant increase in Evans Blue extravasation and a significant decrease in the number of Nissl-positive cells were noted in the vehicle-treated MCAO-injured animals. The lesion volumes derived from T2WI, DWI, CE-T1WI, and Evans Blue extravasation, as well as the reduced numbers of Nissl-positive cells, were all significantly attenuated in the agmatine-treated rats compared with the control ischemia rats (P<0.05). Our results suggest that agmatine has neuroprotective effects against brain edema in a reperfusion model after transient cerebral ischemia. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. Use of routine clinical multimodality imaging in a rabbit model of osteoarthritis--part I.

    PubMed

    Bouchgua, M; Alexander, K; d'Anjou, M André; Girard, C A; Carmel, E Norman; Beauchamp, G; Richard, H; Laverty, S

    2009-02-01

    To evaluate in vivo the evolution of osteoarthritis (OA) lesions temporally in a rabbit model of OA with clinically available imaging modalities: computed radiography (CR), helical single-slice computed tomography (CT), and 1.5 tesla (T) magnetic resonance imaging (MRI). Imaging was performed on knees of anesthetized rabbits [10 anterior cruciate ligament transection (ACLT) and contralateral sham joints and six control rabbits] at baseline and at intervals up to 12 weeks post-surgery. Osteophytosis, subchondral bone sclerosis, bone marrow lesions (BMLs), femoropatellar effusion and articular cartilage were assessed. CT had the highest sensitivity (90%) and specificity (91%) to detect osteophytes. A significant increase in total joint osteophyte score occurred at all time-points post-operatively in the ACLT group alone. BMLs were identified and occurred most commonly in the lateral femoral condyle of the ACLT joints and were not identified in the tibia. A significant increase in joint effusion was present in the ACLT joints until 8 weeks after surgery. Bone sclerosis or cartilage defects were not reliably assessed with the selected imaging modalities. Combined, clinically available CT and 1.5 T MRI allowed the assessment of most of the characteristic lesions of OA and at early time-points in the development of the disease. However, the selected 1.5 T MRI sequences and acquisition times did not permit the detection of cartilage lesions in this rabbit OA model.

  15. Automated Estimation of the Orbital Parameters of Jupiter's Moons

    NASA Astrophysics Data System (ADS)

    Western, Emma; Ruch, Gerald T.

    2016-01-01

    Every semester the Physics Department at the University of St. Thomas has the Physics 104 class complete a Jupiter lab. This involves taking around twenty images of Jupiter and its moons with the telescope at the University of St. Thomas Observatory over the course of a few nights. The students then take each image and find the distance from each moon to Jupiter and plot the distances versus the elapsed time for the corresponding image. Students use the plot to fit four sinusoidal curves of the moons of Jupiter. I created a script that automates this process for the professor. It takes the list of images and creates a region file used by the students to measure the distance from the moons to Jupiter, a png image that is the graph of all the data points and the fitted curves of the four moons, and a csv file that contains the list of images, the date and time each image was taken, the elapsed time since the first image, and the distances to Jupiter for Io, Europa, Ganymede, and Callisto. This is important because it lets the professor spend more time working with the students and answering questions as opposed to spending time fitting the curves of the moons on the graph, which can be time consuming.
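
    The curve fitting the script automates can be sketched with SciPy; a circular orbit seen nearly edge-on projects to a sinusoid in apparent distance, and the function names and initial period guess here are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def moon_orbit(t, amplitude, period, phase):
    """Projected distance of a moon from Jupiter: a circular orbit viewed
    nearly edge-on projects to a sinusoid in time."""
    return amplitude * np.sin(2 * np.pi * t / period + phase)

def fit_moon(t_hours, dist, period_guess):
    """Fit one moon's distance-vs-time samples (hypothetical inputs); a
    rough period guess keeps the nonlinear fit from locking onto an alias."""
    p0 = [dist.max(), period_guess, 0.0]
    popt, _ = curve_fit(moon_orbit, t_hours, dist, p0=p0)
    return popt
```

    Running this once per moon, with period guesses near the known values for Io, Europa, Ganymede, and Callisto, reproduces the four fitted curves the students previously drew by hand.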

  16. From Relativistic Electrons to X-ray Phase Contrast Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lumpkin, A. H.; Garson, A. B.; Anastasio, M. A.

    2017-10-09

    We report the initial demonstrations of the use of single crystals in indirect x-ray imaging for x-ray phase contrast imaging at the Washington University in St. Louis Computational Bioimaging Laboratory (CBL). Based on single Gaussian peak fits to the x-ray images, we observed a four times smaller system point spread function (21 μm (FWHM)) with the 25-mm diameter single crystals than the reference polycrystalline phosphor’s 80-μm value. Potential fiber-optic plate depth-of-focus aspects and 33-μm diameter carbon fiber imaging are also addressed.

  17. [Color Doppler ultrasonography--a new imaging procedure in maxillofacial surgery].

    PubMed

    Reinert, S; Lentrodt, J

    1991-01-01

    Colour Doppler ultrasonography displays blood flow in real time and in colour by combining the features of real-time B-mode ultrasound and Doppler. At each point in the image, the returning signal is interrogated for both amplitude and frequency information. The resulting image shows all non-moving structures in shades of grey and moving structures in shades of red or blue, depending on direction and velocity. The technique of colour Doppler ultrasonography and our experience in 63 examinations are described. The clinical application of this new, simple, non-invasive method in maxillofacial surgery is discussed.

  18. ARC-1994-AC94-0353-2B

    NASA Image and Video Library

    1994-07-01

    Photo Artwork composite by JPL This depiction of comet Shoemaker-Levy 9 impacting Jupiter is shown from several perspectives. IMAGE B shows the perspective from the Galileo spacecraft, which can observe the impact point directly. For visual appeal, most of the large cometary fragments are shown close to one another in this image. At the time of Jupiter impact, the fragments will be separated from one another by several times the distances shown. This image was created by D.A. Seal of JPL's Mission Design Section using orbital computations provided by P.W. Chodas and D.K. Yeomans of JPL's Navigation Section.

  19. Characterization of lens based photoacoustic imaging system.

    PubMed

    Francis, Kalloor Joseph; Chinni, Bhargava; Channappayya, Sumohana S; Pachamuthu, Rajalakshmi; Dogra, Vikram S; Rao, Navalgund

    2017-12-01

    Some of the challenges in translating photoacoustic (PA) imaging to clinical applications include the limited view of the target tissue, low signal-to-noise ratio and the high cost of developing real-time systems. Acoustic lens based PA imaging systems, also known as PA cameras, are a potential alternative to conventional imaging systems in these scenarios. The 3D focusing action of the lens enables real-time C-scan imaging with a 2D transducer array. In this paper, we model the underlying physics in a PA camera in the mathematical framework of an imaging system and derive a closed form expression for the point spread function (PSF). Experimental verification follows, including details on how to design and fabricate the lens inexpensively. The system PSF is evaluated over a 3D volume that can be imaged by this PA camera. Its utility is demonstrated by imaging a phantom and an ex vivo human prostate tissue sample.

  20. An Improved Method for Real-Time 3D Construction of DTM

    NASA Astrophysics Data System (ADS)

    Wei, Yi

    This paper discusses the real-time optimal construction of DTM through two measures. The first improves the coordinate transformation of the discrete points acquired from lidar: after processing a total of 10000 data points, formula-based transformation took 0.810 s while table look-up transformation took 0.188 s, indicating that the latter is superior to the former. The second adjusts the density of the point cloud acquired from lidar: a suitable proportion of the data points is used for 3D construction to meet different 3D-imaging needs, which ultimately increases the efficiency of DTM construction while saving system resources.
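
    The table look-up speedup can be sketched as a precomputed sine/cosine grid; the 0.01-degree resolution and the rotation-only transform are simplifying assumptions, not the paper's actual lidar transformation:

```python
import math
import numpy as np

# Precompute sin/cos on a fixed angular grid so each point's rotation
# costs two table reads instead of two trig evaluations.
STEPS = 36000                       # 0.01-degree resolution (assumed)
_ANG = np.arange(STEPS) * (2 * math.pi / STEPS)
SIN_TAB, COS_TAB = np.sin(_ANG), np.cos(_ANG)

def rotate_lookup(x, y, theta):
    """Rotate (x, y) by theta using the precomputed tables."""
    i = int(round(theta / (2 * math.pi) * STEPS)) % STEPS
    s, c = SIN_TAB[i], COS_TAB[i]
    return x * c - y * s, x * s + y * c

def rotate_direct(x, y, theta):
    """Reference formula-based rotation, for comparison."""
    s, c = math.sin(theta), math.cos(theta)
    return x * c - y * s, x * s + y * c
```

    The look-up result differs from the direct computation by at most the table's angular quantization step; that bounded error is the accuracy traded for the roughly fourfold speedup reported above.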

  1. Nonrigid motion compensation in B-mode and contrast enhanced ultrasound image sequences of the carotid artery

    NASA Astrophysics Data System (ADS)

    Carvalho, Diego D. B.; Akkus, Zeynettin; Bosch, Johan G.; van den Oord, Stijn C. H.; Niessen, Wiro J.; Klein, Stefan

    2014-03-01

    In this work, we investigate nonrigid motion compensation in simultaneously acquired (side-by-side) B-mode ultrasound (BMUS) and contrast enhanced ultrasound (CEUS) image sequences of the carotid artery. These images are acquired to study the presence of intraplaque neovascularization (IPN), which is a marker of plaque vulnerability. IPN quantification is visualized by performing the maximum intensity projection (MIP) on the CEUS image sequence over time. As carotid images contain considerable motion, accurate global nonrigid motion compensation (GNMC) is required prior to the MIP. Moreover, we demonstrate that an improved lumen and plaque differentiation can be obtained by averaging the motion compensated BMUS images over time. We propose to use a previously published 2D+t nonrigid registration method, which is based on minimization of pixel intensity variance over time, using a spatially and temporally smooth B-spline deformation model. The validation compares displacements of plaque points with manual trackings by 3 experts in 11 carotids. The average (± standard deviation) root mean square error (RMSE) was 99 ± 74 μm for longitudinal and 47 ± 18 μm for radial displacements. These results were comparable with the interobserver variability, and with results of a local rigid registration technique based on speckle tracking, which estimates motion in a single point, whereas our approach applies motion compensation to the entire image. In conclusion, our evaluation shows that the GNMC technique produces reliable results. Since this technique tracks global deformations, it can aid in the quantification of IPN and the delineation of lumen and plaque contours.
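
    The registration criterion named above, minimizing pixel-intensity variance over time, can be written compactly; this sketch is only the cost term, not the B-spline deformation optimizer:

```python
import numpy as np

def groupwise_variance_cost(frames):
    """Registration criterion from the abstract: the sum over pixels of
    intensity variance across the (already warped) time frames. Perfectly
    compensated motion drives this cost toward zero."""
    stack = np.stack(frames)        # shape (T, H, W)
    return float(stack.var(axis=0).sum())
```

    In the full method this cost is evaluated on frames warped by a spatially and temporally smooth B-spline deformation, and the deformation parameters are adjusted to minimize it.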

  2. Rotation and scale change invariant point pattern relaxation matching by the Hopfield neural network

    NASA Astrophysics Data System (ADS)

    Sang, Nong; Zhang, Tianxu

    1997-12-01

    Relaxation matching is one of the most relevant methods for image matching. The original relaxation matching technique using point patterns is sensitive to rotations and scale changes. We improve the original point pattern relaxation matching technique to be invariant to rotations and scale changes. A method that makes the Hopfield neural network perform this matching process is discussed. An advantage of this is that the relaxation matching process can be performed in real time with the neural network's massively parallel capability to process information. Experimental results with large simulated images demonstrate the effectiveness and feasibility of performing point pattern relaxation matching invariant to rotations and scale changes, and of performing this matching with the Hopfield neural network. In addition, we show that the method presented can tolerate small random errors.

  3. Pc-Based Floating Point Imaging Workstation

    NASA Astrophysics Data System (ADS)

    Guzak, Chris J.; Pier, Richard M.; Chinn, Patty; Kim, Yongmin

    1989-07-01

    The medical, military, scientific and industrial communities have come to rely on imaging and computer graphics for solutions to many types of problems. Systems based on imaging technology are used to acquire and process images, and analyze and extract data from images that would otherwise be of little use. Images can be transformed and enhanced to reveal detail and meaning that would go undetected without imaging techniques. The success of imaging has increased the demand for faster and less expensive imaging systems and as these systems become available, more and more applications are discovered and more demands are made. From the designer's perspective the challenge to meet these demands forces him to attack the problem of imaging from a different perspective. The computing demands of imaging algorithms must be balanced against the desire for affordability and flexibility. Systems must be flexible and easy to use, ready for current applications but at the same time anticipating new, unthought of uses. Here at the University of Washington Image Processing Systems Lab (IPSL) we are focusing our attention on imaging and graphics systems that implement imaging algorithms for use in an interactive environment. We have developed a PC-based imaging workstation with the goal to provide powerful and flexible, floating point processing capabilities, along with graphics functions in an affordable package suitable for diverse environments and many applications.

  5. Models of the strongly lensed quasar DES J0408−5354

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agnello, A.; et al.

    We present gravitational lens models of the multiply imaged quasar DES J0408-5354, recently discovered in the Dark Energy Survey (DES) footprint, with the aim of interpreting its remarkable quad-like configuration. We first model the DES single-epoch $grizY$ images as a superposition of a lens galaxy and four point-like objects, obtaining spectral energy distributions (SEDs) and relative positions for the objects. Three of the point sources (A, B, D) have SEDs compatible with the discovery quasar spectra, while the faintest point-like image (G2/C) shows significant reddening and a `grey' dimming of $\approx 0.8$ mag. In order to understand the lens configuration, we fit different models to the relative positions of A, B, D. Models with just a single deflector predict a fourth image at the location of G2/C but considerably brighter and bluer. The addition of a small satellite galaxy ($R_{\rm E}\approx 0.2''$) in the lens plane near the position of G2/C suppresses the flux of the fourth image and can explain both the reddening and grey dimming. All models predict a main deflector with Einstein radius between $1.7''$ and $2.0''$, velocity dispersion 267-280 km/s and enclosed mass $\approx 6\times 10^{11} M_{\odot}$, even though higher resolution imaging data are needed to break residual degeneracies in model parameters. The longest time-delay (B-A) is estimated as $\approx 85$ (resp. $\approx 125$) days by models with (resp. without) a perturber near G2/C. The configuration and predicted time-delays of J0408-5354 make it an excellent target for follow-up aimed at understanding the source quasar host galaxy and substructure in the lens, and measuring cosmological parameters. We also discuss some lessons learnt from J0408-5354 on lensed quasar finding strategies, due to its chromaticity and morphology.

  6. A 4DCT imaging-based breathing lung model with relative hysteresis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.

    To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce a smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry. - Highlights: • We developed a breathing human lung CFD model based on 4D-dynamic CT images. • The 4DCT-based breathing lung model is able to capture lung relative hysteresis. • A new boundary condition for lung model based on one static CT image was proposed. • The difference between lung models based on 4D and static CT images was quantified.

  7. Automatic Matching of Large Scale Images and Terrestrial LIDAR Based on App Synergy of Mobile Phone

    NASA Astrophysics Data System (ADS)

    Xia, G.; Hu, C.

    2018-04-01

    The digitalization of Cultural Heritage based on ground laser scanning technology has been widely applied; high-precision scanning and high-resolution photography of cultural relics are the main methods of data acquisition. Reconstruction from a complete point cloud and high-resolution images requires matching the images to the point cloud, acquiring homonymous feature points, registering the data, etc. However, the one-to-one correspondence between an image and its corresponding point cloud currently depends on inefficient manual search, so the effective classification and management of large numbers of images, and the matching of large images to their corresponding point clouds, are the focus of this research. In this paper, we propose automatic matching of large scale images and terrestrial LiDAR based on the app synergy of a mobile phone. Firstly, we develop an Android-based app that takes pictures and records the related classification information. Secondly, all images are automatically grouped using the recorded information. Thirdly, a matching algorithm matches the global and local images; through the one-to-one correspondence between the global image and the point-cloud reflection-intensity image, automatic matching of each image to its corresponding lidar point cloud is realized. Finally, the mapping relationships between the global image, the local images and the intensity image are established from homonymous feature points, enabling visual management and query of the images.
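
    A simple way to score a photo patch against the point-cloud reflection-intensity image is normalized cross-correlation; the paper does not specify its matcher, so this is an illustrative stand-in:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-size patches: a
    brightness- and contrast-invariant similarity score in [-1, 1], usable
    for matching a photo patch against a reflection-intensity image."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom else 0.0
```

    Because the score ignores affine intensity changes, a camera image and a lidar intensity image of the same surface can still correlate strongly despite their different radiometry.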

  8. Soot Volume Fraction Imaging

    NASA Technical Reports Server (NTRS)

    Greenberg, Paul S.; Ku, Jerry C.

    1994-01-01

    A new technique is described for the full-field determination of soot volume fractions via laser extinction measurements. This technique differs from previously reported point-wise methods in that a two-dimensional array (i.e., image) of data is acquired simultaneously. In this fashion, the net data rate is increased, allowing the study of time-dependent phenomena and the investigation of spatial and temporal correlations. A telecentric imaging configuration is employed to provide depth-invariant magnification and to permit the specification of the collection angle for scattered light. To improve the threshold measurement sensitivity, a method is employed to suppress undesirable coherent imaging effects. A discussion of the tomographic inversion process is provided, including the results obtained from numerical simulation. Results obtained with this method from an ethylene diffusion flame are shown to be in close agreement with those previously obtained by sequential point-wise interrogation.
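
    The extinction relation underlying the measurement can be sketched in the Rayleigh absorption limit; the wavelength, E(m) value, and uniform-path assumption below are illustrative (the paper instead recovers local extinction by tomographic inversion):

```python
import numpy as np

def soot_volume_fraction(trans, path_m, wavelength_m=632.8e-9, e_m=0.26):
    """Soot volume fraction from a transmission image (I/I0) via
    Beer-Lambert and the Rayleigh absorption limit:
        f_v = -ln(I/I0) * lambda / (6 * pi * E(m) * L)
    e_m is the soot refractive-index function E(m); 0.26 is a commonly
    assumed value, not the paper's. Assumes a uniform path length L."""
    return -np.log(trans) * wavelength_m / (6 * np.pi * e_m * path_m)
```

    In the imaging configuration described above, each pixel supplies one line-of-sight transmission, and the tomographic inversion replaces the uniform-path assumption with a reconstruction of the local extinction field.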

  9. BRMS1 Suppresses Breast Cancer Metastasis to Bone via Its Regulation of MicroRNA-125b and Downstream Attenuation of TNF-alpha and HER2 Signaling Pathways

    DTIC Science & Technology

    2013-10-01

    genes regulating focal adhesion assembly, such as a5 integrin, Tenascin C, Talin-1, Profilin 1, and Actinin [35]. Intravital microscopy had shown... adhere for time indicated, at which point cells were fixed and stained with crystal violet. Representative images for times 5, 10, 15, 30, and 60 min... imaged by time-lapse microscopy for 1 h or fixed and stained with crystal violet at times indicated. As shown in Figure 1C and quantified in Figure 1D

  10. Space-time measurements of oceanic sea states

    NASA Astrophysics Data System (ADS)

    Fedele, Francesco; Benetazzo, Alvise; Gallego, Guillermo; Shih, Ping-Chang; Yezzi, Anthony; Barbariol, Francesco; Ardhuin, Fabrice

    2013-10-01

    Stereo video techniques are effective for estimating the space-time wave dynamics over an area of the ocean. Indeed, a stereo camera view allows retrieval of both spatial and temporal data whose statistical content is richer than that of time series retrieved from point wave probes. We present an application of the Wave Acquisition Stereo System (WASS) for the analysis of offshore video measurements of gravity waves in the Northern Adriatic Sea and near the southern seashore of the Crimean peninsula in the Black Sea. We use classical epipolar techniques to reconstruct the sea surface from the stereo pairs sequentially in time, viz. as a sequence of spatial snapshots. We also present a variational approach that exploits the entire image data set to provide global space-time imaging of the sea surface, viz. simultaneous reconstruction of several spatial snapshots of the surface that guarantees continuity of the sea surface in both space and time. Analysis of the WASS measurements shows that the sea surface can be accurately estimated in space and time together, yielding directional spectra and wave statistics that agree well with probabilistic models. In particular, WASS stereo imaging captures typical features of the wave surface, especially the crest-to-trough asymmetry due to second-order nonlinearities, and the observed shapes of large waves are well described by theoretical models based on the theory of quasi-determinism (Boccotti, 2000). Further, we investigate space-time extremes of the observed stationary sea states, viz. the largest surface wave heights expected over a given area during the sea state duration. The WASS analysis provides the first experimental proof that a space-time extreme is generally larger than that observed in time via point measurements, in agreement with predictions based on stochastic theories for global maxima of Gaussian fields.

  11. Damage imaging in a laminated composite plate using an air-coupled time reversal mirror

    DOE PAGES

    Le Bas, P. -Y.; Remillieux, M. C.; Pieczonka, L.; ...

    2015-11-03

    We demonstrate the possibility of selectively imaging the features of barely visible impact damage in a laminated composite plate by using an air-coupled time reversal mirror. The mirror consists of a number of piezoelectric transducers affixed to wedges of power-law profile, which act as unconventional matching layers. The transducers are enclosed in a hollow reverberant cavity with an opening to allow progressive emission of the ultrasonic wave field towards the composite plate. The principle of time reversal is used to focus elastic waves at each point of a scanning grid spanning the surface of the plate, thus allowing localized inspection at each of these points. The proposed device and signal processing remove the need for direct contact with the plate and reveal the same features as vibrothermography and more features than a C-scan. More importantly, this device can decouple the features of the defect according to their orientation by selectively focusing vector components of motion into the object, through air. For instance, a delamination can be imaged in one experiment using out-of-plane focusing, whereas a crack can be imaged in a separate experiment using in-plane focusing. As a result, this capability, inherited from the principle of time reversal, cannot be found in conventional air-coupled transducers.

  12. Automated planning of ablation targets in atrial fibrillation treatment

    NASA Astrophysics Data System (ADS)

    Keustermans, Johannes; De Buck, Stijn; Heidbüchel, Hein; Suetens, Paul

    2011-03-01

    Catheter-based radio-frequency ablation is used as an invasive treatment of atrial fibrillation. This procedure is often guided by 3D anatomical models obtained from CT, MRI or rotational angiography. During the intervention the operator accurately guides the catheter to prespecified target ablation lines. The planning stage, however, can be time-consuming and operator-dependent, which is suboptimal from both a cost and a health perspective. Therefore, we present a novel statistical model-based algorithm for locating ablation targets in 3D rotational angiography images. Based on a training data set of 20 patients, consisting of 3D rotational angiography images with 30 manually indicated ablation points, a statistical local appearance and shape model is built. The local appearance model is based on local image descriptors that capture the intensity patterns around each ablation point. The local shape model is constructed by embedding the ablation points in an undirected graph and imposing that each ablation point interacts only with its neighbors. Ablation points are identified on a new 3D rotational angiography image by proposing a set of candidate locations for each ablation point, thereby converting the problem into a labeling problem. The algorithm is validated using a leave-one-out approach on the training data set, by computing the distance between the ablation lines obtained by the algorithm and the manually identified ablation points. The distance error is 3.8 +/- 2.9 mm. As ablation lesion size is around 5-7 mm, automated planning of ablation targets by the presented approach is sufficiently accurate.

  13. Diagnostics of underwater electrical wire explosion through a time- and space-resolved hard x-ray source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheftman, D.; Shafer, D.; Efimov, S.

    2012-10-15

    A time- and space-resolved hard x-ray source was developed as a diagnostic tool for imaging underwater exploding wires. A ~4 ns width pulse of hard x-rays with energies of up to 100 keV was obtained from the discharge in a vacuum diode consisting of point-shaped tungsten electrodes. To improve contrast and image quality, an external pulsed magnetic field produced by Helmholtz coils was used. High resolution x-ray images of an underwater exploding wire were obtained using a sensitive x-ray CCD detector, and were compared to optical fast framing images. Future developments and application of this diagnostic technique are discussed.

  14. Diagnostics of underwater electrical wire explosion through a time- and space-resolved hard x-ray source.

    PubMed

    Sheftman, D; Shafer, D; Efimov, S; Gruzinsky, K; Gleizer, S; Krasik, Ya E

    2012-10-01

    A time- and space-resolved hard x-ray source was developed as a diagnostic tool for imaging underwater exploding wires. A ~4 ns width pulse of hard x-rays with energies of up to 100 keV was obtained from the discharge in a vacuum diode consisting of point-shaped tungsten electrodes. To improve contrast and image quality, an external pulsed magnetic field produced by Helmholtz coils was used. High resolution x-ray images of an underwater exploding wire were obtained using a sensitive x-ray CCD detector, and were compared to optical fast framing images. Future developments and application of this diagnostic technique are discussed.

  15. Three-point Dixon method enables whole-body water and fat imaging of obese subjects.

    PubMed

    Berglund, Johan; Johansson, Lars; Ahlström, Håkan; Kullberg, Joel

    2010-06-01

    Dixon imaging techniques derive chemical shift-separated water and fat images, enabling the quantification of fat content and forming an alternative to fat suppression. Whole-body Dixon imaging is of interest in studies of obesity and the metabolic syndrome, and possibly in oncology. A three-point Dixon method is proposed where two solutions are found analytically in each voxel. The true solution is identified by a multiseed three-dimensional region-growing scheme with a dynamic path, allowing confident regions to be solved before unconfident regions, such as background noise. 2π phase unwrapping is not required. Whole-body datasets (256 x 184 x 252 voxels) were collected from 39 subjects (body mass index 19.8-45.4 kg/m^2), in a mean scan time of 5 min 15 sec. Water and fat images were reconstructed offline, using the proposed method and two reference methods. The resulting images were subjectively graded on a four-grade scale by two radiologists, blinded to the method used. The proposed method was found superior to the reference methods. It exclusively received the two highest grades, implying that only mild reconstruction failures were found. The computation time for a whole-body dataset was 1 min 51.5 sec +/- 3.0 sec. It was concluded that whole-body water and fat imaging is feasible even for obese subjects, using the proposed method. © 2010 Wiley-Liss, Inc.
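    The analytic core of Dixon separation can be illustrated with the simpler two-point case, where the in-phase and opposed-phase signals are summed and differenced. This is a minimal sketch for orientation only, with illustrative function names; the paper's three-point method additionally handles per-voxel phase errors and picks the true analytic solution by 3D region growing.

    ```python
    # Two-point Dixon water/fat separation (simplified illustration): with water
    # signal W and fat signal F, the in-phase acquisition gives IP = W + F and
    # the opposed-phase acquisition gives OP = W - F, so W and F follow from a
    # sum and a difference per voxel.

    def dixon_two_point(ip, op):
        """Return (water, fat) from ideal in-phase and opposed-phase signals."""
        water = 0.5 * (ip + op)
        fat = 0.5 * (ip - op)
        return water, fat
    ```

    For a voxel with `ip = 1.0` and `op = 0.2`, this yields water ≈ 0.6 and fat ≈ 0.4; the three-point method adds a third echo precisely because real acquisitions violate the ideal-phase assumption made here.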

  16. Astatine-211 imaging by a Compton camera for targeted radiotherapy.

    PubMed

    Nagao, Yuto; Yamaguchi, Mitsutaka; Watanabe, Shigeki; Ishioka, Noriko S; Kawachi, Naoki; Watabe, Hiroshi

    2018-05-24

    Astatine-211 is a promising radionuclide for targeted radiotherapy. Imaging the distribution of targeted radiotherapeutic agents in a patient's body is required for optimization of treatment strategies. We proposed to image 211At via its high-energy photons to overcome some problems of conventional planar or single-photon emission computed tomography imaging. We performed an imaging experiment on a point-like 211At source using a Compton camera, and demonstrated the capability of imaging 211At with high-energy photons for the first time. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Multi-modal imaging, model-based tracking, and mixed reality visualisation for orthopaedic surgery

    PubMed Central

    Fuerst, Bernhard; Tateno, Keisuke; Johnson, Alex; Fotouhi, Javad; Osgood, Greg; Tombari, Federico; Navab, Nassir

    2017-01-01

    Orthopaedic surgeons are still following the decades old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures, e.g. pelvis. This Letter presents a mixed reality support system, which incorporates multi-modal data fusion and model-based surgical tool tracking for creating a mixed reality environment supporting screw placement in orthopaedic surgery. A red–green–blue–depth camera is rigidly attached to a mobile C-arm and is calibrated to the cone-beam computed tomography (CBCT) imaging space via iterative closest point algorithm. This allows real-time automatic fusion of reconstructed surface and/or 3D point clouds and synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows for tracking of the surgical tools occluded by hand. This proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring target registration error and also evaluate the tracking accuracy in the presence of partial occlusion. PMID:29184659

  18. Real-time blind image deconvolution based on coordinated framework of FPGA and DSP

    NASA Astrophysics Data System (ADS)

    Wang, Ze; Li, Hang; Zhou, Hua; Liu, Hongjun

    2015-10-01

    Image restoration plays a crucial role in several important application domains. As algorithms grow more complex and their computational requirements increase, there has been a significant rise in the need for accelerated implementations. In this paper, we focus on an efficient real-time image processing system for blind iterative deconvolution by means of the Richardson-Lucy (R-L) algorithm. We study the characteristics of the algorithm, and an image restoration processing system based on the coordinated framework of FPGA and DSP (CoFD) is presented. Single-precision floating-point processing units with small-scale cascades and dedicated FFT/IFFT processing modules are adopted to guarantee the accuracy of the processing. Finally, comparative experiments are performed. The system can process a blurred image of 128×128 pixels within 32 milliseconds, and is up to three or four times faster than traditional multi-DSP systems.
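    The Richardson-Lucy iteration that such a system accelerates can be sketched in a few lines. This is a generic 1-D, pure-Python illustration of the multiplicative update, not the paper's FPGA/DSP implementation; the circular boundary handling, iteration count, and symmetric PSF are assumptions made for the sketch.

    ```python
    # Richardson-Lucy deconvolution, 1-D illustrative sketch.
    # Update rule: estimate *= ((observed / (estimate * psf)) * psf_mirror),
    # where * denotes (circular) convolution.

    def convolve(signal, kernel):
        """Circular convolution with an odd-length kernel centred at its middle."""
        n, m = len(signal), len(kernel)
        c = m // 2
        return [sum(signal[(i - j + c) % n] * kernel[j] for j in range(m))
                for i in range(n)]

    def richardson_lucy(observed, psf, iterations=20):
        """Multiplicative R-L updates; psf assumed normalized (and symmetric here)."""
        psf_mirror = psf[::-1]
        estimate = [1.0] * len(observed)
        for _ in range(iterations):
            blurred = convolve(estimate, psf)
            # Guard against division by zero in dark regions.
            ratio = [o / max(b, 1e-12) for o, b in zip(observed, blurred)]
            correction = convolve(ratio, psf_mirror)
            estimate = [e * corr for e, corr in zip(estimate, correction)]
        return estimate
    ```

    Deconvolving a spike blurred by the kernel `[0.25, 0.5, 0.25]` progressively re-concentrates the energy at the spike position; the hardware system performs the same multiplicative loop with FFT-based convolutions on 128×128 images.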

  19. Imaging through atmospheric turbulence for laser based C-RAM systems: an analytical approach

    NASA Astrophysics Data System (ADS)

    Buske, Ivo; Riede, Wolfgang; Zoz, Jürgen

    2013-10-01

    High-energy laser (HEL) weapons have unique attributes that distinguish them from the limitations of kinetic energy weapons. The engagement process of an HEL weapon typically starts with identifying the target and selecting the aim point on the target through a high-magnification telescope. One scenario for such an HEL system is the countermeasure against rocket, artillery or mortar (RAM) objects to protect ships, camps or other infrastructure from terrorist attacks. For target identification, and especially for resolving the aim point, high-resolution imaging of RAM objects is essential. Throughout the ballistic flight phase, knowledge of the expected imaging quality is important for estimating and evaluating the performance of the countermeasure system. Image quality is mainly influenced by unavoidable atmospheric turbulence. Analytical calculations were carried out to analyze and evaluate image quality parameters during the approach of a RAM object. Kolmogorov turbulence theory was applied to determine the atmospheric coherence length and isoplanatic angle. The image acquisition distinguishes between long and short exposure times to characterize tip/tilt image shift and the impact of higher-order turbulence fluctuations. Two different observer positions are considered to show the influence of the selected sensor site. Furthermore, two different turbulence strengths are investigated to point out the effect of climate and weather conditions. It is well known that atmospheric turbulence degrades image sharpness and creates blurred images. Investigations are carried out to estimate the effectiveness of simple tip/tilt systems or low-order adaptive optics for laser-based C-RAM systems.

  20. A simple method for correcting spatially resolved solar intensity oscillation observations for variations in scattered light

    NASA Technical Reports Server (NTRS)

    Jefferies, S. M.; Duvall, T. L., Jr.

    1991-01-01

    A measurement of the intensity distribution in an image of the solar disk will be corrupted by a spatial redistribution of the light that is caused by the earth's atmosphere and the observing instrument. A simple correction method is introduced here that is applicable for solar p-mode intensity observations obtained over a period of time in which there is a significant change in the scattering component of the point spread function. The method circumvents the problems incurred with an accurate determination of the spatial point spread function and its subsequent deconvolution from the observations. The method only corrects the spherical harmonic coefficients that represent the spatial frequencies present in the image and does not correct the image itself.

  1. Solution for the nonuniformity correction of infrared focal plane arrays.

    PubMed

    Zhou, Huixin; Liu, Shangqian; Lai, Rui; Wang, Dabao; Cheng, Yubao

    2005-05-20

    Based on the S-curve model of the detector response of infrared focal plane arrays (IRFPAs), an improved two-point correction algorithm is presented. The algorithm first transforms the nonlinear image data into linear data and then uses the normal two-point algorithm to correct the linear data. The algorithm effectively overcomes the influence of the nonlinearity of the detector's response, improving the correction precision and enlarging the dynamic range of the response. A real-time imaging-signal-processing system for IRFPAs based on a digital signal processor and field-programmable gate arrays is also presented. The nonuniformity correction capability of the presented solution is validated by experimental imaging with a 128 x 128 pixel IRFPA camera prototype.
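    The "normal two-point algorithm" referenced above can be sketched as follows: per-pixel gain and offset are derived from flat-field frames at two calibration temperatures so that every pixel maps the two calibration levels to common targets. Names here are illustrative, and in the paper's solution the S-curve linearization step would precede this correction.

    ```python
    # Classic two-point nonuniformity correction for an IRFPA, on flattened
    # pixel lists. resp_low/resp_high are each pixel's raw responses to the
    # low- and high-temperature flat-field sources.

    def two_point_nuc(resp_low, resp_high, target_low, target_high):
        """Per-pixel (gain, offset) mapping resp_low->target_low, resp_high->target_high."""
        coeffs = []
        for rl, rh in zip(resp_low, resp_high):
            gain = (target_high - target_low) / (rh - rl)
            offset = target_low - gain * rl
            coeffs.append((gain, offset))
        return coeffs

    def apply_nuc(frame, coeffs):
        """Apply the per-pixel linear correction to one (already linearized) frame."""
        return [g * x + o for x, (g, o) in zip(frame, coeffs)]
    ```

    Calibrating two pixels that respond `[10, 20]` and `[110, 220]` to the low/high sources maps both onto the common target levels, flattening the fixed-pattern nonuniformity; the paper's contribution is that linearizing the S-curve response first makes this linear correction valid over a wider dynamic range.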

  2. Ion photon emission microscope

    DOEpatents

    Doyle, Barney L.

    2003-04-22

    An ion beam analysis system that creates microscopic multidimensional image maps of the effects of high energy ions from an unfocussed source upon a sample by correlating the exact entry point of an ion into a sample by projection imaging of the ion-induced photons emitted at that point with a signal from a detector that measures the interaction of that ion within the sample. The emitted photons are collected in the lens system of a conventional optical microscope, and projected on the image plane of a high resolution single photon position sensitive detector. Position signals from this photon detector are then correlated in time with electrical effects, including the malfunction of digital circuits, detected within the sample that were caused by the individual ion that created these photons initially.

  3. High-throughput ultraviolet photoacoustic microscopy with multifocal excitation

    NASA Astrophysics Data System (ADS)

    Imai, Toru; Shi, Junhui; Wong, Terence T. W.; Li, Lei; Zhu, Liren; Wang, Lihong V.

    2018-03-01

    Ultraviolet photoacoustic microscopy (UV-PAM) is a promising intraoperative tool for surgical margin assessment (SMA), one that can provide label-free histology-like images with high resolution. In this study, using a microlens array and a one-dimensional (1-D) array ultrasonic transducer, we developed a high-throughput multifocal UV-PAM (MF-UV-PAM). Our new system achieved a 1.6 ± 0.2 μm lateral resolution and produced images 40 times faster than the previously developed point-by-point scanning UV-PAM. MF-UV-PAM provided a readily comprehensible photoacoustic image of a mouse brain slice with specific absorption contrast in ~16 min, highlighting cell nuclei. Individual cell nuclei could be clearly resolved, showing its practical potential for intraoperative SMA.

  4. Segmenting lung fields in serial chest radiographs using both population-based and patient-specific shape statistics.

    PubMed

    Shi, Y; Qi, F; Xue, Z; Chen, L; Ito, K; Matsuo, H; Shen, D

    2008-04-01

    This paper presents a new deformable model using both population-based and patient-specific shape statistics to segment lung fields from serial chest radiographs. There are two novelties in the proposed deformable model. First, a modified scale invariant feature transform (SIFT) local descriptor, which is more distinctive than general intensity and gradient features, is used to characterize the image features in the vicinity of each pixel. Second, the deformable contour is constrained by both population-based and patient-specific shape statistics, yielding more robust and accurate segmentation of lung fields in serial chest radiographs. In particular, for segmenting the initial time-point images, the population-based shape statistics is used to constrain the deformable contour; as more subsequent images of the same patient are acquired, the patient-specific shape statistics, collected online from the previous segmentation results, gradually takes on a greater role. This patient-specific shape statistics is updated each time a new segmentation result is obtained, and it is further used to refine the segmentation results of all the available time-point images. Experimental results show that the proposed method is more robust and accurate than other active shape models in segmenting the lung fields from serial chest radiographs.

  5. Validation of early image acquisitions following Tc-99m sestamibi injection using a semiconductor cadmium-zinc-telluride camera.

    PubMed

    Meyer, Celine; Weinmann, Pierre

    2017-08-01

    Cadmium-zinc-telluride (CZT) cameras significantly decrease the acquisition time of myocardial perfusion imaging (MPI), but the examination as a whole remains long. This study was therefore performed to test the feasibility of early imaging following injection of Tc-99m sestamibi using a CZT camera. Seventy patients underwent both an early and a delayed image acquisition after an exercise stress test (n = 30), after a dipyridamole stress test (n = 20), or at rest (n = 20). After injection of Tc-99m sestamibi, the early image acquisition started on average within 5 minutes for the exercise and rest groups, and within 3 minutes 30 seconds for the dipyridamole group. Two independent observers evaluated image quality and extracardiac uptake on four-point scales. The difference between the early and delayed images for each patient was scored on a five-point scale. The image quality and extracardiac uptake of the early and delayed acquisitions did not differ for the three groups (P > .05). There was no significant difference between early and delayed image acquisitions in the exercise, dipyridamole, and rest groups in 63%, 40%, and 80% of cases, respectively. In the exercise and rest groups, a defect was present only in early MPI in 13% and 20% of cases, respectively. A defect was present only in delayed images in 10% of cases in the exercise group and in 45% of cases in the dipyridamole group. There was no difference between early and delayed acquisitions in terms of quality. This protocol shortens the procedure for the patient, and beginning with early image acquisitions may help to overcome artifacts observed at the delayed time point.

  6. Simulating living organisms with populations of point vortices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmieder, R.W.

    1995-07-01

    The author has found that time-averaged images of small populations of point vortices can exhibit motions suggestive of the behavior of individual organisms. As an example, the author shows that collections of point vortices confined in a box and subjected to heating can generate patterns that are broadly similar to interspecies defense in certain sea anemones. It is speculated that other simple dynamical systems can be found to produce similar complex organism-like behavior.
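    For orientation, the dynamics underlying such simulations is the classical point-vortex system, in which each vortex is advected by the velocity field induced by all the others. Below is a minimal forward-Euler sketch under stated assumptions: the confinement box and heating used in the cited work are omitted, and the step size and names are illustrative.

    ```python
    # Point-vortex dynamics on the plane, positions as complex numbers z_k and
    # circulations Gamma_k. The induced velocity satisfies
    #   conj(dz_k/dt) = (1 / (2*pi*i)) * sum_{j != k} Gamma_j / (z_k - z_j).
    import math

    def step(positions, strengths, dt):
        """Advance all vortex positions by one forward-Euler step."""
        new = []
        for k, zk in enumerate(positions):
            w = sum(g / (zk - zj)
                    for j, (zj, g) in enumerate(zip(positions, strengths))
                    if j != k)
            # Conjugate converts conj(dz/dt) into dz/dt.
            vel = (w / (2j * math.pi)).conjugate()
            new.append(zk + dt * vel)
        return new
    ```

    A pair of equal positive vortices, for example, co-rotates about its centroid, which the symmetric Euler step preserves; time-averaging many such trajectories is what produces the organism-like patterns described in the record.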

  7. Distributed decision making in action: diagnostic imaging investigations within the bigger picture.

    PubMed

    Makanjee, Chandra R; Bergh, Anne-Marie; Hoffmann, Willem A

    2018-03-01

    Decision making in the health care system - specifically with regard to diagnostic imaging investigations - occurs at multiple levels. Professional role players from various backgrounds are involved in making these decisions, from the point of referral to the outcomes of the imaging investigation. The aim of this study was to map the decision-making processes and pathways involved when patients are referred for diagnostic imaging investigations and to explore distributed decision-making events at the points of contact with patients within a health care system. A two-phased qualitative study was conducted in an academic public health complex with the district hospital as entry point. The first phase included case studies of 24 conveniently selected patients, and the second phase involved 12 focus group interviews with health care providers. Data analysis was based on Rapley's interpretation of decision making as being distributed across time, situations and actions, and including different role players and technologies. Clinical decisions incorporating imaging investigations are distributed across the three vital points of contact or decision-making events, namely the initial patient consultation, the diagnostic imaging investigation and the post-investigation consultation. Each of these decision-making events is made up of a sequence of discrete decision-making moments based on the transfer of retrospective, current and prospective information and its transformation into knowledge. This paper contributes to the understanding of the microstructural processes (the 'when' and 'where') involved in the distribution of decisions related to imaging investigations. It also highlights the interdependency in decision-making events of medical and non-medical providers within a single medical encounter. © 2017 The Authors. 
Journal of Medical Radiation Sciences published by John Wiley & Sons Australia, Ltd on behalf of Australian Society of Medical Imaging and Radiation Therapy and New Zealand Institute of Medical Radiation Technology.

  8. Effect of multiple circular holes Fraunhofer diffraction for the infrared optical imaging

    NASA Astrophysics Data System (ADS)

    Lu, Chunlian; Lv, He; Cao, Yang; Cai, Zhisong; Tan, Xiaojun

    2014-11-01

    With the development of infrared optics, infrared optical imaging systems play an increasingly important role in modern optical imaging, with applications in industry, agriculture, medicine, the military and transportation. However, for infrared optical imaging systems that are exposed for long periods, contamination can degrade the imaging. When contaminant particles settle on a lens surface of the optical system, they affect diffraction: the contaminated lens can be treated as the complement of a screen with multiple circular holes undergoing Fraunhofer diffraction, so by Babinet's principle the diffraction of the imaging system can be obtained. Therefore, by studying multiple-circular-hole Fraunhofer diffraction, conclusions can be drawn about its effect on infrared imaging. This paper mainly studies the effect of multiple-circular-hole Fraunhofer diffraction on optical imaging. First, we introduce the theory of Fraunhofer diffraction and the point spread function. The point spread function is a basic tool for evaluating the image quality of an optical system, and Fraunhofer diffraction affects it. Then, the results of multiple-circular-hole Fraunhofer diffraction are given for different hole sizes and hole spacings: hole sizes from 0.1 mm to 1 mm and hole spacings from 0.3 mm to 0.8 mm, for infrared wavebands from 1 μm to 5 μm. We use MATLAB to simulate the light intensity distribution of the multiple-circular-hole Fraunhofer diffraction. Finally, three-dimensional diffraction maps of the light intensity are given for comparison.

  9. Comparison of DSMs acquired by terrestrial laser scanning, UAV-based aerial images and ground-based optical images at the Super-Sauze landslide

    NASA Astrophysics Data System (ADS)

    Rothmund, Sabrina; Niethammer, Uwe; Walter, Marco; Joswig, Manfred

    2013-04-01

    In recent years, the high-resolution and multi-temporal 3D mapping of the Earth's surface using terrestrial laser scanning (TLS), ground-based optical images and especially low-cost UAV-based aerial images (Unmanned Aerial Vehicle) has grown in importance. This development resulted from the progressive technical improvement of the imaging systems and the freely available multi-view stereo (MVS) software packages. These different methods of data acquisition for the generation of accurate, high-resolution digital surface models (DSMs) were applied as part of an eight-week field campaign at the Super-Sauze landslide (South French Alps). An area of approximately 10,000 m² with long-term average displacement rates greater than 0.01 m/day has been investigated. The TLS-based point clouds were acquired at different viewpoints with an average point spacing between 10 to 40 mm and at different dates. On these days, more than 50 optical images were taken on points along a predefined line on the side part of the landslide by a low-cost digital compact camera. Additionally, aerial images were taken by a radio-controlled mini quad-rotor UAV equipped with another low-cost digital compact camera. The flight altitude ranged between 20 m and 250 m and produced a corresponding ground resolution between 0.6 cm and 7 cm. DGPS measurements were carried out as well in order to geo-reference and validate the point cloud data. To generate unscaled photogrammetric 3D point clouds from a disordered and tilted image set, we use the widespread open-source software package Bundler and PMVS2 (University of Washington). These multi-temporal DSMs are required on the one hand to determine the three-dimensional surface deformations and on the other hand it will be required for differential correction for orthophoto production. Drawing on the example of the acquired data at the Super-Sauze landslide, we demonstrate the potential but also the limitations of the photogrammetric point clouds. 
    To determine the quality of the photogrammetric point clouds, they are compared with the TLS-based DSMs. The comparison shows that the accuracy of the photogrammetric points is in the range of centimetres to decimetres and therefore does not reach the quality of the high-resolution TLS-based DSMs. Furthermore, validation of the photogrammetric point clouds reveals that some of them exhibit internal curvature effects. The advantages of photogrammetric 3D data acquisition are the use of low-cost equipment and less time-consuming data collection in the field. While the accuracy of the photogrammetric point clouds is not as high as that of the TLS-based DSMs, the former method is advantageous in areas where dm-range accuracy is sufficient.

  10. Muscle segmentation in time series images of Drosophila metamorphosis.

    PubMed

    Yadav, Kuleesha; Lin, Feng; Wasser, Martin

    2015-01-01

    In order to study genes associated with muscular disorders, we characterize the phenotypic changes in Drosophila muscle cells during metamorphosis caused by genetic perturbations. We collect in vivo images of muscle fibers during the remodeling of larval to adult muscles. In this paper, we focus on a new image processing pipeline designed to quantify changes in the shape and size of muscles. We propose a new two-step approach to muscle segmentation in time series images. First, we apply a watershed algorithm to divide the image into edge-preserving regions; then, we classify these regions into muscle and non-muscle classes on the basis of shape and intensity. The advantage of our method is twofold: first, better results are obtained because the classification of regions is constrained by the shape of the muscle cell at the previous time point; second, minimal user intervention results in faster processing. The segmentation results are used to compare the changes in cell size between controls and reduction of the autophagy-related gene Atg9 during Drosophila metamorphosis.

  11. A novel in vitro image-based assay identifies new drug leads for giardiasis.

    PubMed

    Hart, Christopher J S; Munro, Taylah; Andrews, Katherine T; Ryan, John H; Riches, Andrew G; Skinner-Adams, Tina S

    2017-04-01

    Giardia duodenalis is an intestinal parasite that causes giardiasis, a widespread human gastrointestinal disease. Treatment of giardiasis relies on a small arsenal of compounds that can suffer from limitations including side-effects, variable treatment efficacy and parasite drug resistance. Thus new anti-Giardia drug leads are required. The search for new compounds with anti-Giardia activity currently depends on assays that can be labour-intensive, expensive and restricted to measuring activity at a single time-point. Here we describe a new in vitro assay to assess anti-Giardia activity. This image-based assay utilizes the Perkin-Elmer Operetta® and permits automated assessment of parasite growth at multiple time points without cell-staining. Using this new approach, we assessed the "Malaria Box" compound set for anti-Giardia activity. Three compounds with sub-μM activity (IC50 0.6-0.9 μM) were identified as potential starting points for giardiasis drug discovery. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  12. The influence of shrinkage-cracking on the drying behaviour of White Portland cement using Single-Point Imaging (SPI).

    PubMed

    Beyea, S D; Balcom, B J; Bremner, T W; Prado, P J; Cross, A R; Armstrong, R L; Grattan-Bellew, P E

    1998-11-01

    The removal of water from pores smaller than 50 nm in hardened cement paste results in cracking of the cement matrix due to the tensile stresses induced by drying shrinkage. Cracks in the matrix fundamentally alter the permeability of the material, and therefore directly affect the drying behaviour. Using Single-Point Imaging (SPI), we obtain one-dimensional moisture profiles of hydrated White Portland cement cylinders as a function of drying time. The drying behaviour of White Portland cement is distinctly different from that of related concrete materials containing aggregates.

  13. Impact of dual-time-point F-18 FDG PET/CT in the assessment of pleural effusion in patients with non-small-cell lung cancer.

    PubMed

    Alkhawaldeh, Khaled; Biersack, Hans-J; Henke, Anna; Ezziddin, Samer

    2011-06-01

    The aim of this study was to assess the utility of dual-time-point F-18 fluorodeoxyglucose positron emission tomography (F-18 FDG PET) in differentiating benign from malignant pleural disease in patients with non-small-cell lung cancer. A total of 61 patients with non-small-cell lung cancer and pleural effusion were included in this retrospective study. All patients had whole-body FDG PET/CT imaging at 60 ± 10 minutes post-FDG injection, whereas 31 patients had delayed second-time-point imaging of the chest at 90 ± 10 minutes. Maximum standardized uptake values (SUV(max)) and the average percent change in SUV(max) (%SUV) between time point 1 and time point 2 were calculated. Malignancy was defined using the following criteria: (1) visual assessment using a 3-point grading scale; (2) SUV(max) ≥2.4; (3) %SUV ≥ +9; and (4) SUV(max) ≥2.4 and/or %SUV ≥ +9. An analysis of variance test and receiver operating characteristic analysis were used for statistical analysis. P < 0.05 was considered significant. Follow-up revealed 29 patients with malignant pleural disease and 31 patients with benign pleural effusion. The average SUV(max) in malignant effusions was 6.5 ± 4 versus 2.2 ± 0.9 in benign effusions (P < 0.0001). The average %SUV in malignant effusions was +13 ± 10 versus -8 ± 11 in benign effusions (P < 0.0004). Sensitivity, specificity, and accuracy for the 4 criteria were as follows: (1) 86%, 72%, and 79%; (2) 93%, 72%, and 82%; (3) 67%, 94%, and 81%; (4) 100%, 94%, and 97%. Dual-time-point F-18 FDG PET can improve the diagnostic accuracy in differentiating benign from malignant pleural disease, with high sensitivity and good specificity.
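
The combined criterion (4) is a simple decision rule and can be expressed directly in code. The thresholds below come from the abstract; applying the SUV cut-off to the early-time-point value is an assumption of this sketch, since the abstract does not state which time point it used:

```python
def pct_suv_change(suv1, suv2):
    """Percent change in SUVmax between early (1) and delayed (2) imaging."""
    return (suv2 - suv1) / suv1 * 100.0

def suspect_malignant(suv1, suv2, suv_cut=2.4, pct_cut=9.0):
    """Criterion (4) from the abstract: SUVmax >= 2.4 and/or %SUV >= +9.
    Applying the SUV cut-off to the early time point is an assumption here."""
    return suv1 >= suv_cut or pct_suv_change(suv1, suv2) >= pct_cut
```

For example, an effusion with SUVmax rising from 2.0 to 2.3 is flagged by the %SUV arm (+15%) even though the absolute SUV stays below 2.4.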

  14. Image restoration using aberration taken by a Hartmann wavefront sensor on extended object, towards real-time deconvolution

    NASA Astrophysics Data System (ADS)

    Darudi, Ahmad; Bakhshi, Hadi; Asgari, Reza

    2015-05-01

    In this paper we present the results of image restoration using data taken by a Hartmann sensor. The aberration is measured by a Hartmann sensor in which the object itself is used as the reference. The Point Spread Function (PSF) is then simulated and used for image restoration with the Lucy-Richardson technique. A method is also presented for quantitative evaluation of the Lucy-Richardson deconvolution.
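
The Lucy-Richardson update is a short multiplicative loop once the PSF is known. A minimal sketch for a known, shift-invariant PSF; the flat initialization, iteration count and small stabilizing constant are choices of this example, not the paper's implementation:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iterations=30):
    """Plain Richardson-Lucy deconvolution for a known PSF."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(image, image.mean())  # flat initial guess
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = image / np.maximum(blurred, 1e-12)  # avoid division by zero
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode='same')
    return estimate
```

The multiplicative form keeps the estimate non-negative and approximately flux-preserving, which is why the method is popular for astronomical and wavefront-corrected restoration.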

  15. Probabilistic model for quick detection of dissimilar binary images

    NASA Astrophysics Data System (ADS)

    Mustafa, Adnan A. Y.

    2015-09-01

    We present a quick method to detect dissimilar binary images. The method is based on a "probabilistic matching model" for image matching. The matching model is used to predict the probability of occurrence of distinct-dissimilar image pairs (completely different images) when matching one image to another. Based on this model, distinct-dissimilar images can be detected by matching only a few points between two images with high confidence, namely 11 points for a 99.9% successful detection rate. For image pairs that are dissimilar but not distinct-dissimilar, more points need to be mapped. The number of points required to attain a certain successful detection rate or confidence depends on the amount of similarity between the compared images: as this similarity increases, more points are required. For example, images that differ by 1% can be detected by mapping fewer than 70 points on average. More importantly, the model is image-size invariant, so images of any size will produce high confidence levels with a limited number of matched points. As a result, this method does not suffer from the image-size handicap that impedes current methods. We report on extensive tests conducted on real images of different sizes.
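
The flavour of this calculation can be sketched as follows, assuming each sampled point of two distinct-dissimilar images agrees independently with some probability p. This generic independence model and the p values used below are assumptions of the sketch, not the paper's exact derivation:

```python
import math

def points_needed(confidence, p_agree):
    """Number of randomly sampled points to map so that two distinct-dissimilar
    images agree on all of them with probability at most 1 - confidence,
    i.e. the detection rate is at least `confidence`.

    Assumes independent per-point agreement with probability `p_agree`."""
    return math.ceil(math.log(1.0 - confidence) / math.log(p_agree))
```

Under this toy model, a per-point agreement probability of 0.5 gives 10 points for a 99.9% detection rate; slightly higher agreement probabilities push the count toward the 11 points quoted in the abstract.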

  16. Diffraction and geometrical optical transfer functions: calculation time comparison

    NASA Astrophysics Data System (ADS)

    Díaz, José Antonio; Mahajan, Virendra N.

    2017-08-01

    In a recent paper, we compared the diffraction and geometrical optical transfer functions (DOTF and GOTF) of an optical imaging system, and showed that the GOTF approximates the DOTF within 10% when a primary aberration is about two waves or larger [Appl. Opt. 55, 3241-3250 (2016)]. In this paper, we determine and compare the times to calculate the DOTF by autocorrelation or digital autocorrelation of the pupil function, and by a Fourier transform (FT) of the point-spread function (PSF); and the GOTF by an FT of the geometrical PSF and its approximation, the spot diagram. Our starting point for calculating the DOTF is the wave aberrations of the system in its pupil plane, and for the GOTF the ray aberrations in the image plane. The numerical results for primary aberrations and a typical imaging system show that the direct integrations are slow, but calculation of the DOTF by an FT of the PSF is generally faster than calculation of the GOTF by an FT of the spot diagram.
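
The equivalence the timing comparison relies on (the DOTF as the FT of the PSF, which in turn equals the autocorrelation of the pupil function) is easy to check numerically. A sketch for an aberration-free circular pupil on a discrete grid; the grid size and aperture radius are arbitrary choices of the example:

```python
import numpy as np

# circular pupil on a discrete grid (size and radius are arbitrary choices)
N = 64
y, x = np.mgrid[0:N, 0:N]
pupil = (((x - N // 2) ** 2 + (y - N // 2) ** 2) <= (N // 8) ** 2).astype(float)

# PSF as the squared modulus of the pupil's Fourier transform
psf = np.abs(np.fft.fft2(pupil)) ** 2

# DOTF route 1: Fourier transform of the PSF, normalized to unity at zero frequency
otf = np.fft.ifft2(psf)
otf = otf / otf[0, 0]

# DOTF route 2: direct (circular) autocorrelation of the pupil function
def pupil_autocorr(dy, dx):
    return np.sum(pupil * np.roll(pupil, (-dy, -dx), axis=(0, 1)))

area = pupil_autocorr(0, 0)  # normalization: pupil area
```

Both routes give the same OTF samples; the FFT route computes every frequency at once, which is the source of the speed advantage discussed in the abstract.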

  17. Clinical Impact of Time-of-Flight and Point Response Modeling in PET Reconstructions: A Lesion Detection Study

    PubMed Central

    Schaefferkoetter, Joshua; Casey, Michael; Townsend, David; Fakhri, Georges El

    2013-01-01

    Time-of-flight (TOF) and point spread function (PSF) modeling have been shown to improve PET reconstructions, but the impact on physicians in the clinical setting has not been thoroughly investigated. A lesion detection and localization study was performed using simulated lesions in real patient images. Four reconstruction schemes were considered: ordinary Poisson OSEM (OP) alone and combined with TOF, PSF, and TOF+PSF. The images were presented to physicians experienced in reading PET images, and the performance of each was quantified using localization receiver operating characteristic (LROC). Numerical observers (non-prewhitening and Hotelling) were used to identify optimal reconstruction parameters, and observer SNR was compared to the performance of the physicians. The numerical models showed good agreement with human performance, and best performance was achieved by both when using TOF+PSF. These findings suggest a large potential benefit of TOF+PSF for oncology PET studies, especially in the detection of small, low-intensity, focal disease in larger patients. PMID:23403399

  18. Clinical impact of time-of-flight and point response modeling in PET reconstructions: a lesion detection study

    NASA Astrophysics Data System (ADS)

    Schaefferkoetter, Joshua; Casey, Michael; Townsend, David; El Fakhri, Georges

    2013-03-01

    Time-of-flight (TOF) and point spread function (PSF) modeling have been shown to improve PET reconstructions, but the impact on physicians in the clinical setting has not been thoroughly investigated. A lesion detection and localization study was performed using simulated lesions in real patient images. Four reconstruction schemes were considered: ordinary Poisson OSEM (OP) alone and combined with TOF, PSF, and TOF + PSF. The images were presented to physicians experienced in reading PET images, and the performance of each was quantified using localization receiver operating characteristic. Numerical observers (non-prewhitening and Hotelling) were used to identify optimal reconstruction parameters, and observer SNR was compared to the performance of the physicians. The numerical models showed good agreement with human performance, and best performance was achieved by both when using TOF + PSF. These findings suggest a large potential benefit of TOF + PSF for oncology PET studies, especially in the detection of small, low-intensity, focal disease in larger patients.

  19. Accessing the exceptional points of parity-time symmetric acoustics

    PubMed Central

    Shi, Chengzhi; Dubois, Marc; Chen, Yun; Cheng, Lei; Ramezani, Hamidreza; Wang, Yuan; Zhang, Xiang

    2016-01-01

    Parity-time (PT) symmetric systems experience a phase transition between the PT-exact and PT-broken phases at exceptional points. These PT phase transitions contribute significantly to the design of single-mode lasers, coherent perfect absorbers, isolators, and diodes. However, such exceptional points are extremely difficult to access in practice because of the dispersive behaviour of most loss and gain materials required in PT symmetric systems. Here we introduce a method to systematically tame these exceptional points and control PT phases. Our experimental demonstration hinges on an active acoustic element that realizes a complex-valued potential and simultaneously controls the multiple interference in the structure. The manipulation of exceptional points offers new routes to broaden applications for PT symmetric physics in acoustics, optics, microwaves and electronics, which are essential for sensing, communication and imaging. PMID:27025443

  20. Inferring Toxicological Responses of HepG2 Cells from ToxCast High Content Imaging Data (SOT)

    EPA Science Inventory

    Understanding the dynamic perturbation of cell states by chemicals can aid in predicting their adverse effects. High-content imaging (HCI) was used to measure the state of HepG2 cells over three time points (1, 24, and 72 h) in response to 976 ToxCast chemicals for 10 differe...

  1. Learning Boolean Networks in HepG2 cells using ToxCast High-Content Imaging Data (SOT annual meeting)

    EPA Science Inventory

    Cells adapt to their environment via homeostatic processes that are regulated by complex molecular networks. Our objective was to learn key elements of these networks in HepG2 cells using ToxCast High-content imaging (HCI) measurements taken over three time points (1, 24, and 72h...

  2. Three-dimension imaging lidar

    NASA Technical Reports Server (NTRS)

    Degnan, John J. (Inventor)

    2007-01-01

    This invention is directed to a 3-dimensional imaging lidar, which utilizes modest power kHz rate lasers, array detectors, photon-counting multi-channel timing receivers, and dual wedge optical scanners with transmitter point-ahead correction to provide contiguous high spatial resolution mapping of surface features including ground, water, man-made objects, vegetation and submerged surfaces from an aircraft or a spacecraft.

  3. Serial MRI evaluation following arthroscopic rotator cuff repair in double-row technique.

    PubMed

    Stahnke, Katharina; Nikulka, Constanze; Diederichs, Gerd; Haneveld, Hendrik; Scheibel, Markus; Gerhardt, Christian

    2016-05-01

    Recurrent rotator cuff defects have so far been described as occurring in the early postoperative period after arthroscopic repair. The aim of this study was to evaluate the musculotendinous structure of the supraspinatus, as well as bone marrow edema and osteolysis, after arthroscopic double-row repair. Magnetic resonance (MR) imaging was therefore performed at defined intervals up to 2 years postoperatively. Case series; Level of evidence, 3. MR imaging was performed within 7 days and at 3, 6, 12, 26, 52 and 108 weeks after surgery. All patients were operated on using an arthroscopic modified suture bridge technique. Tendon integrity, tendon retraction ["footprint coverage" (FPC)], muscular atrophy and fatty infiltration (signal intensity analysis) were measured at all time points. Furthermore, postoperative bone marrow edema and signs of osteolysis were assessed. MR images of 13 non-consecutive patients (6 female/7 male, mean age 61.05 ± 7.7 years) could be evaluated at all time points up to a mean of 108 weeks postoperatively. Five of the 6 patients with a recurrent defect at final follow-up displayed a time of failure between 12 and 24 months after surgery. The predominant mode of failure was medial cuff failure (4/6 cases). The initial FPC increased significantly up to the 2-year follow-up (p = 0.004). Evaluations of muscular atrophy and fatty infiltration were not significantly different across time points (p > 0.05). Postoperative bone marrow edema disappeared completely by 6 months after surgery, whereas signs of osteolysis appeared at the 3-month follow-up and increased until final follow-up. Recurrent defects after arthroscopic reconstruction of supraspinatus tears in modified suture bridge technique seem to occur between 12 and 24 months after surgery. Serial MRI evaluation shows good muscle structure at all time points. Postoperative bone marrow edema disappears completely several months after surgery. Signs of osteolysis seem to be caused by bio-absorbable anchor implants.

  4. 3D reconstruction based on light field images

    NASA Astrophysics Data System (ADS)

    Zhu, Dong; Wu, Chunhong; Liu, Yunluo; Fu, Dongmei

    2018-04-01

    This paper proposes a method for reconstructing a three-dimensional (3D) scene from two light field images captured by a Lytro Illum camera. The work is carried out by first extracting the sub-aperture images from the light field images and using the scale-invariant feature transform (SIFT) for feature registration on the selected sub-aperture images. A structure-from-motion (SfM) algorithm is then applied to the registered sub-aperture images to reconstruct the three-dimensional scene, yielding a sparse 3D point cloud. The method shows that 3D reconstruction can be achieved from only two light field captures, rather than the dozen or more exposures required by traditional cameras. This avoids the time-consuming, laborious acquisition needed for 3D reconstruction with traditional digital cameras, achieving a more rapid, convenient and accurate reconstruction.

  5. A rapid and robust gradient measurement technique using dynamic single-point imaging.

    PubMed

    Jang, Hyungseok; McMillan, Alan B

    2017-09-01

    We propose a new gradient measurement technique based on dynamic single-point imaging (SPI), which allows simple, rapid, and robust measurement of k-space trajectory. To enable gradient measurement, we utilize the variable field-of-view (FOV) property of dynamic SPI, which is dependent on gradient shape. First, one-dimensional (1D) dynamic SPI data are acquired from a targeted gradient axis, and then relative FOV scaling factors between 1D images or k-spaces at varying encoding times are found. These relative scaling factors are the relative k-space position that can be used for image reconstruction. The gradient measurement technique also can be used to estimate the gradient impulse response function for reproducible gradient estimation as a linear time invariant system. The proposed measurement technique was used to improve reconstructed image quality in 3D ultrashort echo, 2D spiral, and multi-echo bipolar gradient-echo imaging. In multi-echo bipolar gradient-echo imaging, measurement of the k-space trajectory allowed the use of a ramp-sampled trajectory for improved acquisition speed (approximately 30%) and more accurate quantitative fat and water separation in a phantom. The proposed dynamic SPI-based method allows fast k-space trajectory measurement with a simple implementation and no additional hardware for improved image quality. Magn Reson Med 78:950-962, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  6. In Vivo Imaging of the Human Retinal Pigment Epithelial Mosaic Using Adaptive Optics Enhanced Indocyanine Green Ophthalmoscopy

    PubMed Central

    Tam, Johnny; Liu, Jianfei; Dubra, Alfredo; Fariss, Robert

    2016-01-01

    Purpose The purpose of this study was to establish that retinal pigment epithelial (RPE) cells take up indocyanine green (ICG) dye following systemic injection and that adaptive optics enhanced indocyanine green ophthalmoscopy (AO-ICG) enables direct visualization of the RPE mosaic in the living human eye. Methods A customized adaptive optics scanning light ophthalmoscope (AOSLO) was used to acquire high-resolution retinal fluorescence images of residual ICG dye in human subjects after intravenous injection at the standard clinical dose. Simultaneously, multimodal AOSLO images were also acquired, which included confocal reflectance, nonconfocal split detection, and darkfield. Imaging was performed in 6 eyes of three healthy subjects with no history of ocular or systemic diseases. In addition, histologic studies in mice were carried out. Results The AO-ICG channel successfully resolved individual RPE cells in human subjects at various time points, including 20 minutes and 2 hours after dye administration. Adaptive optics-ICG images of RPE revealed detail which could be correlated with AO dark-field images of the same cells. Interestingly, there was a marked heterogeneity in the fluorescence of individual RPE cells. Confirmatory histologic studies in mice corroborated the specific uptake of ICG by the RPE layer at a late time point after systemic ICG injection. Conclusions Adaptive optics-enhanced imaging of ICG dye provides a novel way to visualize and assess the RPE mosaic in the living human eye alongside images of the overlying photoreceptors and other cells. PMID:27564519

  7. In Vivo Imaging of the Human Retinal Pigment Epithelial Mosaic Using Adaptive Optics Enhanced Indocyanine Green Ophthalmoscopy.

    PubMed

    Tam, Johnny; Liu, Jianfei; Dubra, Alfredo; Fariss, Robert

    2016-08-01

    The purpose of this study was to establish that retinal pigment epithelial (RPE) cells take up indocyanine green (ICG) dye following systemic injection and that adaptive optics enhanced indocyanine green ophthalmoscopy (AO-ICG) enables direct visualization of the RPE mosaic in the living human eye. A customized adaptive optics scanning light ophthalmoscope (AOSLO) was used to acquire high-resolution retinal fluorescence images of residual ICG dye in human subjects after intravenous injection at the standard clinical dose. Simultaneously, multimodal AOSLO images were also acquired, which included confocal reflectance, nonconfocal split detection, and darkfield. Imaging was performed in 6 eyes of three healthy subjects with no history of ocular or systemic diseases. In addition, histologic studies in mice were carried out. The AO-ICG channel successfully resolved individual RPE cells in human subjects at various time points, including 20 minutes and 2 hours after dye administration. Adaptive optics-ICG images of RPE revealed detail which could be correlated with AO dark-field images of the same cells. Interestingly, there was a marked heterogeneity in the fluorescence of individual RPE cells. Confirmatory histologic studies in mice corroborated the specific uptake of ICG by the RPE layer at a late time point after systemic ICG injection. Adaptive optics-enhanced imaging of ICG dye provides a novel way to visualize and assess the RPE mosaic in the living human eye alongside images of the overlying photoreceptors and other cells.

  8. Real-time three-dimensional optical coherence tomography image-guided core-needle biopsy system.

    PubMed

    Kuo, Wei-Cheng; Kim, Jongsik; Shemonski, Nathan D; Chaney, Eric J; Spillman, Darold R; Boppart, Stephen A

    2012-06-01

    Advances in optical imaging modalities, such as optical coherence tomography (OCT), enable us to observe tissue microstructure at high resolution and in real time. Currently, core-needle biopsies are guided by external imaging modalities such as ultrasound imaging and x-ray computed tomography (CT) for breast and lung masses, respectively. These image-guided procedures are frequently limited by spatial resolution when using ultrasound imaging, or by temporal resolution (rapid real-time feedback capabilities) when using x-ray CT. One feasible approach is to perform OCT within small gauge needles to optically image tissue microstructure. However, to date, no system or core-needle device has been developed that incorporates both three-dimensional OCT imaging and tissue biopsy within the same needle for true OCT-guided core-needle biopsy. We have developed and demonstrate an integrated core-needle biopsy system that utilizes catheter-based 3-D OCT for real-time image-guidance for target tissue localization, imaging of tissue immediately prior to physical biopsy, and subsequent OCT imaging of the biopsied specimen for immediate assessment at the point-of-care. OCT images of biopsied ex vivo tumor specimens acquired during core-needle placement are correlated with corresponding histology, and computational visualization of arbitrary planes within the 3-D OCT volumes enables feedback on specimen tissue type and biopsy quality. These results demonstrate the potential for using real-time 3-D OCT for needle biopsy guidance by imaging within the needle and tissue during biopsy procedures.

  9. Comparative analysis of respiratory motion tracking using Microsoft Kinect v2 sensor.

    PubMed

    Silverstein, Evan; Snyder, Michael

    2018-05-01

    To present and evaluate a straightforward implementation of a marker-less, respiratory motion-tracking process utilizing the Kinect v2 camera as a gating tool during 4DCT or during radiotherapy treatments. Utilizing the depth sensor on the Kinect as well as author-written C# code, respiratory motion of a subject was tracked by recording depth values obtained at user-selected points on the subject, with each point representing one pixel on the depth image. As a patient breathes, specific anatomical points on the chest/abdomen will move slightly within the depth image across pixels. By tracking how depth values change for a specific pixel, instead of how the anatomical point moves throughout the image, a respiratory trace can be obtained based on the changing depth values of the selected pixel. Tracking these values was implemented via a marker-less setup. Varian's RPM system and the Anzai belt system were used in tandem with the Kinect to compare the respiratory traces obtained by each, using two different subjects. Analysis of the depth information from the Kinect for purposes of phase- and amplitude-based binning correlated well with the RPM and Anzai systems. Interquartile range (IQR) values were obtained comparing the times correlated with specific amplitude and phase percentages across the systems. The IQR time spans indicated the Kinect would measure specific percentage values within 0.077 s for Subject 1 and 0.164 s for Subject 2 when compared to values obtained with RPM or Anzai. For 4DCT scans, these times correspond to less than 1 mm of couch movement and would create an offset of half an acquired slice. By tracking depth values of user-selected pixels within the depth image, rather than tracking specific anatomical locations, respiratory motion can be tracked and visualized utilizing the Kinect, with results comparable to those of the Varian RPM and Anzai belt. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
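
The pixel-depth tracking idea can be sketched on synthetic frames. The frame rate, geometry and breathing waveform below are made-up illustration values, not the study's data:

```python
import numpy as np

fps = 30.0                                   # assumed camera frame rate
t = np.arange(0, 20.0, 1.0 / fps)            # 20 s of depth frames
f_breath = 0.25                              # synthetic rate: 15 breaths/min

# synthetic depth frames: flat background at 1500 mm with a chest region
# whose depth oscillates by +/- 8 mm as the subject breathes
frames = np.full((t.size, 48, 64), 1500.0)
frames[:, 20:30, 30:40] += 8.0 * np.sin(2 * np.pi * f_breath * t)[:, None, None]

# respiratory trace: depth values of one user-selected pixel over time
trace = frames[:, 25, 35]

# dominant frequency of the mean-removed trace
spec = np.abs(np.fft.rfft(trace - trace.mean()))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fps)
f_est = freqs[spec.argmax()]
```

The same trace can then be fed to amplitude- or phase-based binning exactly as a belt or marker-block signal would be.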

  10. Transcranial photoacoustic tomography of the monkey brain

    NASA Astrophysics Data System (ADS)

    Nie, Liming; Huang, Chao; Guo, Zijian; Anastasio, Mark; Wang, Lihong V.

    2012-02-01

    A photoacoustic tomography (PAT) system using a virtual point ultrasonic transducer was developed for transcranial imaging of monkey brains. The virtual point transducer provided a 10 times greater field-of-view (FOV) than finite-aperture unfocused transducers, which enables large-primate imaging. The cerebral cortex of a monkey brain was accurately mapped transcranially, through up to two skulls ranging from 4 to 8 mm in thickness. The mass density and speed-of-sound distributions of the skull were estimated from adjunct X-ray CT image data and utilized with a time-reversal algorithm to mitigate artifacts in the reconstructed image due to acoustic aberration. The oxygen saturation (sO2) in blood phantoms through a monkey skull was also imaged and quantified, with results consistent with measurements by a gas analyzer. Our experimental results demonstrate that PAT can overcome the optical and ultrasound attenuation of a relatively thick skull, and that the imaging aberration caused by the skull can be corrected to a great extent.

  11. Cloud-based processing of multi-spectral imaging data

    NASA Astrophysics Data System (ADS)

    Bernat, Amir S.; Bolton, Frank J.; Weiser, Reuven; Levitz, David

    2017-03-01

    Multispectral imaging holds great promise as a non-contact tool for the assessment of tissue composition. Performing multispectral imaging on a hand-held mobile device would allow this technology, and with it state-of-the-art classification of tissue health, to be brought to low-resource settings. This modality, however, produces considerably larger data sets than white-light imaging and requires preliminary image analysis before it can be used. The data must then be analyzed and logged without consuming too much of the end-point device's system resources, computation time and battery. Cloud environments were designed to address these problems by allowing end-point devices (smartphones) to offload computationally demanding tasks. To this end, we present a method in which a hand-held device built around a smartphone captures a multispectral dataset in a movie file format (mp4), and we compare this format to other image formats in size, noise and correctness. We also present the cloud configuration used for segmenting the video into frames that can later be used for further analysis.

  12. The positive impact of simultaneous implementation of the BD FocalPoint GS Imaging System and lean principles on the operation of gynecologic cytology.

    PubMed

    Wong, Rebecca; Levi, Angelique W; Harigopal, Malini; Schofield, Kevin; Chhieng, David C

    2012-02-01

    Our cytology laboratory, like many others, is under pressure to improve quality and provide test results faster while decreasing costs. We sought to address these issues by introducing new technology and lean principles. To determine the combined impact of the FocalPoint Guided Screener (GS) Imaging System (BD Diagnostics-TriPath, Burlington, North Carolina) and lean manufacturing principles on the turnaround time (TAT) and productivity of the gynecologic cytology operation. We established a baseline measure of the TAT for Papanicolaou tests. We then compared that to the performance after implementing the FocalPoint GS Imaging System and lean principles. The latter included value-stream mapping, workflow modification, and a first-in, first-out policy. The mean (SD) TAT for Papanicolaou tests before and after the implementation of the FocalPoint GS Imaging System and lean principles was 4.38 (1.28) days and 3.20 (1.32) days, respectively. This represented a 27% improvement in the average TAT, which was statistically significant (P < .001). In addition, the productivity of staff improved 17%, as evidenced by the increase in slides screened from 8.85/h to 10.38/h. The false-negative fraction decreased from 1.4% to 0.9%, representing a 36% improvement. In our laboratory, the implementation of the FocalPoint GS Imaging System in conjunction with lean principles resulted in a significant decrease in the average TAT for Papanicolaou tests and a substantial increase in the productivity of cytotechnologists while maintaining the diagnostic quality of gynecologic cytology.

  13. Grand Tour outer planet missions definition phase. Part 2: Minutes of meetings and official correspondence

    NASA Technical Reports Server (NTRS)

    Belton, M. J. S.; Aksnes, K.; Davies, M. E.; Hartmann, W. K.; Millis, R. L.; Owen, T. C.; Reilly, T. H.; Sagan, C.; Suomi, V. E.; Collins, S. A., Jr.

    1972-01-01

    A variety of imaging systems proposed for use aboard the Outer Planet Grand Tour Explorer are discussed and evaluated in terms of optimal resolution capability and efficient time utilization. It is pointed out that the planetary and satellite alignments at the time of encounter dictate a high degree of adaptability and versatility in order to provide sufficient image enhancement over earth-based techniques. Data compression methods are also evaluated according to the same criteria.

  14. Spin echo SPI methods for quantitative analysis of fluids in porous media.

    PubMed

    Li, Linqing; Han, Hui; Balcom, Bruce J

    2009-06-01

    Fluid density imaging is highly desirable in a wide variety of porous media measurements. The SPRITE class of MRI methods has proven to be robust and general in its ability to generate density images in porous media; however, the short encoding times required, with correspondingly high magnetic field gradient strengths and filter widths, and low flip angle RF pulses, yield sub-optimal S/N images, especially at low static field strength. This paper explores two implementations of pure phase-encode spin-echo 1D imaging, with application to a proposed new petroleum reservoir core analysis measurement. In the first implementation of the pulse sequence, we modify the spin echo single point imaging (SE-SPI) technique to acquire the k-space origin data point, with a near-zero evolution time, from the free induction decay (FID) following a 90 degree excitation pulse. Subsequent k-space data points are acquired by separately phase encoding individual echoes in a multi-echo acquisition. T(2) attenuation of the echo train yields an image convolution which causes blurring. The T(2) blur effect is moderate for porous media with T(2) lifetime distributions longer than 5 ms. As a robust, high S/N, and fast 1D imaging method, this method will be highly complementary to SPRITE techniques for the quantitative analysis of fluid content in porous media. In the second implementation of the SE-SPI pulse sequence, modification of the basic measurement permits fast determination of spatially resolved T(2) distributions in porous media through separately phase encoding each echo in a multi-echo CPMG pulse train. An individual T(2)-weighted image may be acquired from each echo. The echo time (TE) of each T(2)-weighted image may be reduced to 500 μs or less. These profiles can be fit to extract a T(2) distribution from each pixel employing a variety of standard inverse Laplace transform methods. Fluid content 1D images are produced as an essential by-product of determining the spatially resolved T(2) distribution. These 1D images do not suffer from T(2)-related blurring. The above SE-SPI measurements are combined to generate 1D images of the local saturation and the T(2) distribution as a function of saturation, upon centrifugation of petroleum reservoir core samples. The logarithmic mean T(2) is observed to shift linearly with water saturation. This new reservoir core analysis measurement may provide a valuable calibration of the Coates equation for irreducible water saturation, which has been widely implemented in NMR well-logging measurements.
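
The paper extracts full T(2) distributions with inverse Laplace methods. As the simplest per-pixel illustration, a single-component T(2) can be recovered from a noiseless multi-echo decay with a log-linear fit; the echo spacing matches the 500 μs quoted above, while the T(2) value and echo count are arbitrary examples:

```python
import numpy as np

TE = 0.5e-3                      # echo spacing: 500 us, as in the abstract
echo_times = np.arange(1, 65) * TE
T2_true = 20e-3                  # arbitrary single-component T2 (20 ms)
signal = np.exp(-echo_times / T2_true)

# log-linear fit: ln S(t) = -t / T2, so T2 = -1 / slope
slope, intercept = np.polyfit(echo_times, np.log(signal), 1)
T2_est = -1.0 / slope
```

Real porous-media data contain a distribution of T(2) values and noise, which is why the paper fits each pixel's decay with regularized inverse Laplace transform methods rather than a single exponential.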

  15. Evaluation of Methods for Coregistration and Fusion of Rpas-Based 3d Point Clouds and Thermal Infrared Images

    NASA Astrophysics Data System (ADS)

    Hoegner, L.; Tuttas, S.; Xu, Y.; Eder, K.; Stilla, U.

    2016-06-01

    This paper discusses the automatic coregistration and fusion of 3D point clouds generated from aerial image sequences and corresponding thermal infrared (TIR) images. Both RGB and TIR images were taken from an RPAS platform on a predefined flight path, where every RGB image has a corresponding TIR image taken from the same position and with the same orientation, to within the accuracy of the RPAS platform and its inertial measurement unit. To remove the remaining differences in exterior orientation, different strategies for coregistering the RGB and TIR images are discussed: (i) coregistration based on 2D line segments for every single TIR image and the corresponding RGB image, which assumes a mainly planar scene to avoid mismatches; (ii) coregistration of the dense 3D point clouds from the RGB and TIR images by coregistering 2D image projections of both point clouds; (iii) coregistration based on 2D line segments in every single TIR image and 3D line segments extracted from intersections of planes fitted to the segmented dense 3D point cloud; (iv) coregistration of the dense 3D point clouds from the RGB and TIR images using both ICP and an adapted version based on corresponding segmented planes; (v) coregistration of both image sets based on point features. The quality is measured by comparing the differences of the back projection of homologous points in the corrected RGB and TIR images.
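    Strategy (iv) relies on ICP. As a hedged illustration of the point-to-point ICP core (the adapted plane-based variant used in the paper is not shown), the sketch below aligns a slightly misaligned copy of a synthetic cloud; the nearest-neighbour search is brute force for clarity.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares R, t with R @ p_i + t ~= q_i (Kabsch algorithm)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(P.shape[1])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

def icp(P, Q, iters=30):
    """Iteratively align source cloud P (rows are points) to target Q."""
    P_cur = P.copy()
    for _ in range(iters):
        d2 = ((P_cur[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
        matches = Q[d2.argmin(axis=1)]        # nearest-neighbour pairing
        R, t = best_rigid_transform(P_cur, matches)
        P_cur = P_cur @ R.T + t
    return P_cur

# Toy check: a slightly rotated and shifted copy of the target cloud
rng = np.random.default_rng(0)
Q = rng.normal(size=(60, 3))
a = 0.05
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
P = (Q - [0.05, -0.02, 0.03]) @ Rz
aligned = icp(P, Q)
err = np.abs(aligned - Q).max()
```

    Real point clouds call for a k-d tree for the correspondence search and an outlier rejection step; this sketch omits both.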

  16. Topological photonic crystal with equifrequency Weyl points

    NASA Astrophysics Data System (ADS)

    Wang, Luyang; Jian, Shao-Kai; Yao, Hong

    2016-06-01

    Weyl points in three-dimensional photonic crystals behave as monopoles of Berry flux in momentum space. Here, based on a general symmetry analysis, we show that a minimal number of four symmetry-related (and consequently equifrequency) Weyl points can be realized in time-reversal-invariant photonic crystals. We further propose an experimentally feasible way to modify double-gyroid photonic crystals to realize four equifrequency Weyl points, which is explicitly confirmed by our first-principles photonic band-structure calculations. Remarkably, photonic crystals with equifrequency Weyl points are qualitatively advantageous in applications including angular selectivity, frequency selectivity, invisibility cloaking, and three-dimensional imaging.

  17. unWISE: Unblurred Coadds of the WISE Imaging

    NASA Astrophysics Data System (ADS)

    Lang, Dustin

    2014-05-01

    The Wide-field Infrared Survey Explorer (WISE) satellite observed the full sky in four mid-infrared bands in the 2.8-28 μm range. The primary mission was completed in 2010. The WISE team has done a superb job of producing a series of high-quality, well-documented, complete data releases in a timely manner. However, the "Atlas Image" coadds that are part of the recent AllWISE and previous data releases were intentionally blurred. Convolving the images by the point-spread function while coadding results in "matched-filtered" images that are close to optimal for detecting isolated point sources. But these matched-filtered images are sub-optimal or inappropriate for other purposes. For example, we are photometering the WISE images at the locations of sources detected in the Sloan Digital Sky Survey through forward modeling, and this blurring decreases the available signal-to-noise by effectively broadening the point-spread function. This paper presents a new set of coadds of the WISE images that have not been blurred. These images retain the intrinsic resolution of the data and are appropriate for photometry preserving the available signal-to-noise. Users should be cautioned, however, that the W3- and W4-band coadds contain artifacts around large, bright structures (large galaxies, dusty nebulae, etc.); eliminating these artifacts is the subject of ongoing work. These new coadds, and the code used to produce them, are publicly available at http://unwise.me.
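    The signal-to-noise argument can be checked numerically: convolving a Gaussian PSF with itself (what matched-filtered coadding does) broadens the effective PSF by a factor of sqrt(2). A minimal sketch, assuming an illustrative 1D Gaussian PSF:

```python
import numpy as np

x = np.linspace(-20, 20, 4001)
sigma = 2.0
psf = np.exp(-x**2 / (2 * sigma**2))

# The PSF after matched filtering: the image's PSF convolved with itself
matched = np.convolve(psf, psf, mode="same")

def fwhm(y):
    """Full width at half maximum on the shared grid x."""
    above = x[y >= y.max() / 2]
    return above[-1] - above[0]

broadening = fwhm(matched) / fwhm(psf)   # ~ sqrt(2) for a Gaussian PSF
```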

  18. Photogrammetric Analysis of Historical Image Repositories for Virtual Reconstruction in the Field of Digital Humanities

    NASA Astrophysics Data System (ADS)

    Maiwald, F.; Vietze, T.; Schneider, D.; Henze, F.; Münster, S.; Niebling, F.

    2017-02-01

    Historical photographs contain a high density of information and are of great importance as sources in humanities research. In addition to the semantic indexing of historical images based on metadata, it is also possible to reconstruct geometric information about the depicted objects, or about the camera position at the time of the recording, by employing photogrammetric methods. The approach presented here investigates (semi-)automated photogrammetric reconstruction methods for heterogeneous collections of historical (city) photographs and photographic documentation, for use in the humanities, urban research and the historical sciences. From a photogrammetric point of view, these images are mostly digitized photographs. For a photogrammetric evaluation, therefore, the characteristics of scanned analog images with mostly unknown camera geometry, missing or minimal object information and low radiometric and geometric resolution have to be considered. In addition, these photographs were not created specifically for documentation purposes, so the focus of these images is often not on the object to be evaluated. The image repositories must therefore be subjected to a preprocessing analysis of their photogrammetric usability. Investigations were carried out on a repository containing historical images of the Kronentor ("crown gate") of the Dresden Zwinger. The initial step was to assess the quality and condition of the available images, determining their appropriateness for generating three-dimensional point clouds from historical photos using a structure-from-motion (SfM) evaluation. The generated point clouds were then assessed by comparing them with current measurement data of the same object.

  19. Optimizing the Attitude Control of Small Satellite Constellations for Rapid Response Imaging

    NASA Astrophysics Data System (ADS)

    Nag, S.; Li, A.

    2016-12-01

    Distributed Space Missions (DSMs), such as formation flight and constellations, are being recognized as important solutions for increasing measurement samples over space and time. Given the increasingly accurate attitude control systems emerging in the commercial market, small spacecraft now have the ability to slew and point within a few minutes of notice. In spite of hardware development for CubeSats at the payload (e.g. NASA InVEST) and subsystem (e.g. Blue Canyon Technologies) levels, software development for tradespace analysis in constellation design (e.g. Goddard's TAT-C), planning and scheduling development for single spacecraft (e.g. GEO-CAPE), and aerial flight path optimization for UAVs (e.g. NASA Sensor Web), there is a gap in open-source, open-access software tools for planning and scheduling distributed satellite operations in terms of pointing and observing targets. This paper demonstrates results from a tool being developed for scheduling the pointing operations of narrow field-of-view (FOV) sensors over a mission lifetime to maximize metrics such as global coverage and revisit statistics. Past research has shown that at least fourteen satellites are needed to cover the Earth globally every day using a Landsat-like sensor. Tripling the FOV reduces the number required to four satellites, but adds image distortion and BRDF complexities to the observed reflectance. If the narrow-FOV sensors on a small satellite constellation were commanded by robust algorithms to slew dynamically, they could coordinate to cover the global landmass much faster without compromising spatial resolution or incurring BRDF effects. Our algorithm for optimizing constellation satellite pointing is based on a dynamic programming approach under the constraints of orbital mechanics and existing attitude control systems for small satellites.
As a case study for the algorithm, we minimize the time required to cover the approximately 17,000 Landsat images, with maximum signal-to-noise-ratio fall-off and minimum image distortion among the satellites, using Landsat's specifications. Attitude-specific constraints such as power consumption, response time, and stability are factored into the optimality computations. The algorithm can integrate cloud cover predictions, specific ground and air assets, and angular constraints.
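    The scheduling idea can be sketched in miniature. The toy dynamic program below is an illustration, not the paper's optimizer: it picks the largest subset of candidate observation times such that consecutive observations are separated by at least the slew-plus-settle time. Orbital mechanics, power, and stability constraints are abstracted into the single `slew_time` parameter.

```python
def max_observations(obs_times, slew_time):
    """obs_times: sorted times at which candidate targets can be imaged.
    best[i] = most targets observable in a schedule ending with target i."""
    n = len(obs_times)
    best = [1] * n
    for i in range(n):
        for j in range(i):
            if obs_times[i] - obs_times[j] >= slew_time:
                best[i] = max(best[i], best[j] + 1)
    return max(best, default=0)

# Targets reachable at these times (minutes); slewing + settling takes 3 min
times = [0, 1, 2, 4, 5, 8, 9, 12]
covered = max_observations(times, slew_time=3)   # best schedule: 0, 4, 8, 12
```

    The full problem replaces the scalar slew-time test with target-pair-dependent slew maneuvers and runs per satellite, but the recurrence structure is the same.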

  20. Improved dose-volume histogram estimates for radiopharmaceutical therapy by optimizing quantitative SPECT reconstruction parameters

    NASA Astrophysics Data System (ADS)

    Cheng, Lishui; Hobbs, Robert F.; Segars, Paul W.; Sgouros, George; Frey, Eric C.

    2013-06-01

    In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose-volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM-based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVH estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and the collimator-detector response (CDR). Dose-rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradation due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results.
Smaller objects with fine details, such as the kidneys, required more iterations and less smoothing at early time points post-radiopharmaceutical administration but more smoothing and fewer iterations at later time points when the total organ activity was lower. The results of this study demonstrate the importance of using optimal reconstruction and regularization parameters. Optimal results were obtained with different parameters at each time point, but using a single set of parameters for all time points produced near-optimal dose-volume histograms.
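    For readers unfamiliar with the summary statistic being degraded here: a cumulative DVH reports, for each dose level, the fraction of an organ's volume receiving at least that dose. A minimal numpy sketch on a toy phantom (all shapes and dose values are illustrative, not from the study):

```python
import numpy as np

def cumulative_dvh(dose, mask, bins=100):
    """Cumulative DVH: fraction of the masked volume receiving >= each level."""
    d = dose[mask]
    levels = np.linspace(0.0, d.max(), bins)
    volume_fraction = np.array([(d >= lv).mean() for lv in levels])
    return levels, volume_fraction

dose = np.full((20, 20, 20), 1.0)      # uniform 1.0 background dose
dose[8:12, 8:12, 8:12] = 2.0           # 64 hotter voxels inside the organ
mask = np.zeros(dose.shape, dtype=bool)
mask[5:15, 5:15, 5:15] = True          # 1000-voxel "organ"

levels, vf = cumulative_dvh(dose, mask)
```

    By construction vf starts at 1.0 (every voxel receives at least zero dose) and decreases monotonically; here the tail value is 64/1000, the hot-core fraction of the organ.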

  1. Method for mapping a natural gas leak

    DOEpatents

    Reichardt, Thomas A [Livermore, CA; Luong, Amy Khai [Dublin, CA; Kulp, Thomas J [Livermore, CA; Devdas, Sanjay [Albany, CA

    2009-02-03

    A system is described that is suitable for use in determining the location of leaks of gases having a background concentration. The system is a point-wise backscatter absorption gas measurement system that measures absorption and distance to each point of an image. The absorption measurement provides an indication of the total amount of a gas of interest, and the distance provides an estimate of the background concentration of gas. The distance is measured from the time-of-flight of a laser pulse that is generated along with the absorption measurement light. The measurements are formatted into an image of the presence of gas in excess of the background. Alternatively, an image of the scene is superimposed on the image of the gas to aid in locating leaks. By further modeling the excess gas as a plume having a known concentration profile, the present system provides an estimate of the maximum concentration of the gas of interest.
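    The background-subtraction idea can be written down compactly: the absorption channel yields a path-integrated concentration (a column, e.g. in ppm·m), and the measured range fixes the column expected from ambient background alone; their difference is the excess attributable to a leak. The numbers below are illustrative assumptions, not values from the patent.

```python
# Assumed ambient methane background, taken as uniform along the path
BACKGROUND_PPM = 1.9

def excess_column(measured_column_ppm_m, range_m):
    """Excess gas column (ppm*m) above ambient along one line of sight."""
    return measured_column_ppm_m - BACKGROUND_PPM * range_m

# A 30 m line of sight through ambient air alone: 1.9 * 30 = 57 ppm*m expected
no_leak = excess_column(57.0, 30.0)
# The same range, but with a plume adding 40 ppm*m along the path
leak = excess_column(97.0, 30.0)
```

    Applying this per pixel turns the raw column map into the "gas in excess of the background" image the patent describes.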

  2. Natural gas leak mapper

    DOEpatents

    Reichardt, Thomas A [Livermore, CA; Luong, Amy Khai [Dublin, CA; Kulp, Thomas J [Livermore, CA; Devdas, Sanjay [Albany, CA

    2008-05-20

    A system is described that is suitable for use in determining the location of leaks of gases having a background concentration. The system is a point-wise backscatter absorption gas measurement system that measures absorption and distance to each point of an image. The absorption measurement provides an indication of the total amount of a gas of interest, and the distance provides an estimate of the background concentration of gas. The distance is measured from the time-of-flight of a laser pulse that is generated along with the absorption measurement light. The measurements are formatted into an image of the presence of gas in excess of the background. Alternatively, an image of the scene is superimposed on the image of the gas to aid in locating leaks. By further modeling the excess gas as a plume having a known concentration profile, the present system provides an estimate of the maximum concentration of the gas of interest.

  3. First imagery generated by near-field real-time aperture synthesis passive millimetre wave imagers at 94 GHz and 183 GHz

    NASA Astrophysics Data System (ADS)

    Salmon, Neil A.; Mason, Ian; Wilkinson, Peter; Taylor, Chris; Scicluna, Peter

    2010-10-01

    The first passive millimetre wave (PMMW) imagery is presented from two proof-of-concept aperture synthesis demonstrators, developed to investigate the use of aperture synthesis for personnel security screening and all-weather flying at 94 GHz, and satellite-based earth observation at 183 GHz [1]. Emission from point noise sources and discharge tubes is used to examine the coherence on system baselines and to measure the point spread functions, making comparisons with theory. Image quality is examined using near-field aperture synthesis and G-matrix calibration imaging algorithms. The radiometric sensitivity is measured using the emission from absorbers at elevated temperatures acting as extended sources and is compared with theory. The capabilities of the latest Field Programmable Gate Array (FPGA) technologies for aperture-synthesis PMMW imaging in all-weather and security screening applications are examined.

  4. SIFT optimization and automation for matching images from multiple temporal sources

    NASA Astrophysics Data System (ADS)

    Castillo-Carrión, Sebastián; Guerrero-Ginel, José-Emilio

    2017-05-01

    Scale Invariant Feature Transformation (SIFT) was applied to extract tie-points from multiple-source images. Although SIFT is reported to perform reliably under widely different radiometric and geometric conditions, using the default input parameters resulted in too few points being found. We found that the best solution was to focus on large features, as these are more robust and less prone to scene changes over time, which constitutes a first approach to the automation of processes in mapping applications such as geometric correction, orthophoto creation and 3D model generation. The optimization of five key SIFT parameters is proposed as a way of increasing the number of correct matches; the performance of SIFT is explored across different images and parameter values, and the resulting optimized values are corroborated using separate validation imagery. The results show that the optimization model improves the performance of SIFT in correlating multitemporal images captured from different sources.
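    One stage whose threshold is commonly tuned in such matching pipelines is the nearest-neighbour distance-ratio test on SIFT descriptors. A minimal numpy sketch, using random vectors as stand-ins for real 128-dimensional SIFT descriptors (the ratio threshold is one of the tunable parameters; the five parameters optimized in the paper are not reproduced here):

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Keep (i, j) pairs whose best match in desc_b is clearly better than
    the second best (Lowe's nearest-neighbour distance-ratio test)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:
            matches.append((i, int(j)))
    return matches

rng = np.random.default_rng(1)
desc_b = rng.normal(size=(50, 128))
desc_a = desc_b[:10] + 0.05 * rng.normal(size=(10, 128))  # noisy copies
pairs = ratio_test_matches(desc_a, desc_b)
```

    Lowering the ratio rejects more ambiguous matches (fewer but cleaner tie-points); raising it trades precision for match count, which is exactly the kind of trade-off a parameter optimization explores.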

  5. Super-resolution photon-efficient imaging by nanometric double-helix point spread function localization of emitters (SPINDLE)

    PubMed Central

    Grover, Ginni; DeLuca, Keith; Quirin, Sean; DeLuca, Jennifer; Piestun, Rafael

    2012-01-01

    Super-resolution imaging with photo-activatable or photo-switchable probes is a promising tool in biological applications to reveal previously unresolved intra-cellular details with visible light. This field benefits from developments in the areas of molecular probes, optical systems, and computational post-processing of the data. The joint design of optics and reconstruction processes using double-helix point spread functions (DH-PSF) provides high resolution three-dimensional (3D) imaging over a long depth-of-field. We demonstrate for the first time a method integrating a Fisher information efficient DH-PSF design, a surface relief optical phase mask, and an optimal 3D localization estimator. 3D super-resolution imaging using photo-switchable dyes reveals the 3D microtubule network in mammalian cells with localization precision approaching the information theoretical limit over a depth of 1.2 µm. PMID:23187521

  6. Registration of ophthalmic images using control points

    NASA Astrophysics Data System (ADS)

    Heneghan, Conor; Maguire, Paul

    2003-03-01

    A method for registering pairs of digital ophthalmic images of the retina is presented, using anatomical features present in both images as control points. The anatomical features chosen are blood vessel crossings and bifurcations. These control points are identified by a combination of local contrast enhancement and morphological processing. In general, however, the matching between control points is unknown, so an automated algorithm is used to determine the matching pairs of control points in the two images as follows. Using two control points from each image, rigid global transform (RGT) coefficients are calculated for all possible combinations of control point pairs, and the set of RGT coefficients that brings the most control points into correspondence is identified. Once control point pairs are established, registration of the two images can be achieved by using linear regression to optimize an RGT, bilinear or second-order polynomial global transform. An example of cross-modal image registration using an optical image and a fluorescein angiogram of an eye is presented to illustrate the technique.
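    The pair-voting search can be sketched as follows, assuming a rigid (rotation plus translation) transform in 2D; all coordinates are made up for the demonstration, and a practical implementation would also handle scale and outlier control points more carefully.

```python
import itertools
import numpy as np

def rgt_from_two_pairs(p1, p2, q1, q2):
    """2D rotation + translation taking (p1, p2) onto (q1, q2)."""
    dp, dq = p2 - p1, q2 - q1
    th = np.arctan2(dq[1], dq[0]) - np.arctan2(dp[1], dp[0])
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    return R, q1 - R @ p1

def best_rgt(P, Q, tol=1.0):
    """Score each pair-derived hypothesis by how many points of P land
    within tol (pixels) of some point of Q; keep the best hypothesis."""
    best_score, best_R, best_t = 0, None, None
    for i, j in itertools.permutations(range(len(P)), 2):
        for k, l in itertools.permutations(range(len(Q)), 2):
            R, t = rgt_from_two_pairs(P[i], P[j], Q[k], Q[l])
            mapped = P @ R.T + t
            d = np.linalg.norm(mapped[:, None] - Q[None, :], axis=2)
            score = int((d.min(axis=1) < tol).sum())
            if score > best_score:
                best_score, best_R, best_t = score, R, t
    return best_score, best_R, best_t

# Toy control points: Q is P rotated by 30 degrees and shifted
P = np.array([[0.0, 0.0], [10, 0], [0, 10], [7, 7], [3, 9]])
th = np.pi / 6
Rtrue = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
Q = P @ Rtrue.T + [5.0, -2.0]
score, R, t = best_rgt(P, Q)
```

    Once the winning hypothesis fixes the correspondences, the abstract's final step fits the RGT, bilinear or polynomial transform to all matched pairs by linear regression.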

  7. Real-time simulation of thermal shadows with EMIT

    NASA Astrophysics Data System (ADS)

    Klein, Andreas; Oberhofer, Stefan; Schätz, Peter; Nischwitz, Alfred; Obermeier, Paul

    2016-05-01

    Modern missile systems use infrared imaging for tracking or target detection algorithms. The development and validation processes for these missile systems need high-fidelity simulations capable of stimulating the sensors in real time with infrared image sequences from a synthetic 3D environment. The Extensible Multispectral Image Generation Toolset (EMIT) is a modular software library developed at MBDA Germany for the generation of physics-based infrared images in real time. EMIT is able to render radiance images in full 32-bit floating point precision using state-of-the-art computer graphics cards and advanced shader programs. An important capability for an infrared image generation toolset is the simulation of thermal shadows, as these may cause matching errors in tracking algorithms. However, in real-time simulations, such as hardware-in-the-loop (HWIL) simulations of infrared seekers, thermal shadows are often neglected or precomputed, as they require a thermal balance calculation in four dimensions (3D geometry plus time, extending up to several hours into the past). In this paper we present the novel real-time thermal simulation of EMIT. Our thermal simulation is capable of simulating thermal effects in real-time environments, such as thermal shadows resulting from the occlusion of direct and indirect irradiance. We conclude the paper with the practical use of EMIT in a missile HWIL simulation.

  8. TDC-based readout electronics for real-time acquisition of high resolution PET bio-images

    NASA Astrophysics Data System (ADS)

    Marino, N.; Saponara, S.; Ambrosi, G.; Baronti, F.; Bisogni, M. G.; Cerello, P.,; Ciciriello, F.; Corsi, F.; Fanucci, L.; Ionica, M.; Licciulli, F.; Marzocca, C.; Morrocchi, M.; Pennazio, F.; Roncella, R.; Santoni, C.; Wheadon, R.; Del Guerra, A.

    2013-02-01

    Positron emission tomography (PET) is a clinical and research tool for in vivo metabolic imaging. The demand for better image quality entails continuous research to improve PET instrumentation. In clinical applications, PET image quality benefits from the time-of-flight (TOF) feature. Indeed, by measuring the photons' arrival times at the detectors with a resolution better than 100 ps, the annihilation point can be estimated with centimeter resolution. This leads to better noise levels, contrast and clarity of detail in the images, using either analytical or iterative reconstruction algorithms. This work discusses a silicon photomultiplier (SiPM)-based, magnetic-field-compatible TOF-PET module with depth-of-interaction (DOI) correction. The detector features a 3D architecture with two tiles of SiPMs coupled to both faces of a single LYSO scintillator. The real-time front-end electronics are based on a current-mode ASIC in which a low-input-impedance, fast current buffer allows the required time resolution to be achieved. A pipelined time-to-digital converter (TDC) measures and digitizes the arrival time and the energy of the events with timestamps of 100 ps and 400 ps, respectively. An FPGA clusters the data and evaluates the DOI, with a simulated z resolution of the PET image of 1.4 mm FWHM.
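    The centimeter-resolution claim follows directly from the arithmetic: the annihilation point is offset from the midpoint of the line of response by c(t1 - t2)/2, so timing resolution maps directly to position uncertainty. A back-of-envelope sketch:

```python
C_MM_PER_PS = 0.2998  # speed of light in mm per picosecond

def tof_offset_mm(t1_ps, t2_ps):
    """Offset of the annihilation point from the midpoint of the
    line of response, given the two photon arrival times."""
    return C_MM_PER_PS * (t1_ps - t2_ps) / 2.0

# A 100 ps timing difference localizes the event ~15 mm from the midpoint,
# so 100 ps timing resolution corresponds to ~1.5 cm position resolution.
offset = tof_offset_mm(100.0, 0.0)
position_error = C_MM_PER_PS * 100.0 / 2.0
```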

  9. Soaring Over Jupiter

    NASA Image and Video Library

    2017-09-21

    This striking image of Jupiter was captured by NASA's Juno spacecraft as it performed its eighth flyby of the gas giant planet. The image was taken on Sept. 1, 2017 at 2:58 p.m. PDT (5:58 p.m. EDT). At the time the image was taken, the spacecraft was 4,707 miles (7,576 kilometers) from the tops of the clouds of the planet at a latitude of about -17.4 degrees. Citizen scientist Gerald Eichstädt processed this image using data from the JunoCam imager. Points of interest are "Whale's Tail" and "Dan's Spot." https://photojournal.jpl.nasa.gov/catalog/PIA21966

  10. Towards real-time diffuse optical tomography for imaging brain functions cooperated with Kalman estimator

    NASA Astrophysics Data System (ADS)

    Wang, Bingyuan; Zhang, Yao; Liu, Dongyuan; Ding, Xuemei; Dan, Mai; Pan, Tiantian; Wang, Yihan; Li, Jiao; Zhou, Zhongxing; Zhang, Limin; Zhao, Huijuan; Gao, Feng

    2018-02-01

    Functional near-infrared spectroscopy (fNIRS) is a non-invasive neuroimaging method for monitoring cerebral hemodynamics through optical changes measured at the scalp surface. It plays an increasingly important role in the psychology and medical imaging communities. Real-time imaging of brain function using NIRS makes it possible to explore sophisticated human brain functions that could not be explored before. The Kalman estimator has frequently been used in combination with modified Beer-Lambert law (MBLL) based optical topography (OT) for real-time brain function imaging. However, the spatial resolution of OT is low, hampering its application to more complicated brain functions. In this paper, we develop a real-time imaging method combining diffuse optical tomography (DOT) with a Kalman estimator, much improving the spatial resolution. Instead of presenting a sequence of spatially distributed images of the changes in the absorption coefficients, one at each time point of the recording process, the method provides a single image that the Kalman estimator updates in real time. Each voxel of this image represents the amplitude of the hemodynamic response function (HRF) associated with that voxel. We evaluate this method using simulation experiments, demonstrating that it can obtain images with more reliable spatial resolution. Furthermore, a statistical analysis is also conducted to help decide whether a voxel in the field of view is activated or not.
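    The per-voxel recursion can be illustrated with a scalar Kalman filter: treat the HRF amplitude of one voxel as a random-walk state observed through a known regressor, and refine the estimate as each new frame arrives. The model, noise levels, and regressor below are assumptions for the demonstration, not the paper's actual formulation.

```python
import numpy as np

def kalman_track(y, h, q=1e-4, r=0.01):
    """Scalar Kalman filter for y[k] = h[k] * x[k] + noise, where the
    amplitude x follows a random walk; returns filtered estimates."""
    x, p = 0.0, 1.0                        # initial state and variance
    est = []
    for yk, hk in zip(y, h):
        p += q                             # predict (random-walk state)
        K = p * hk / (hk * hk * p + r)     # Kalman gain
        x += K * (yk - hk * x)             # measurement update
        p *= (1.0 - K * hk)
        est.append(x)
    return np.array(est)

# Toy data: true amplitude 2.0 seen through a known regressor h, plus noise
rng = np.random.default_rng(2)
h = 0.5 + 0.4 * np.sin(np.linspace(0, 8 * np.pi, 400))
y = 2.0 * h + 0.1 * rng.normal(size=400)
amp = kalman_track(y, h)
```

    Running one such recursion per voxel is what lets the image be refreshed at every time point without re-solving the whole time series.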

  11. Comparison of Uas-Based Photogrammetry Software for 3d Point Cloud Generation: a Survey Over a Historical Site

    NASA Astrophysics Data System (ADS)

    Alidoost, F.; Arefi, H.

    2017-11-01

    Nowadays, Unmanned Aerial System (UAS)-based photogrammetry offers an affordable, fast and effective approach to real-time acquisition of high-resolution geospatial information and automatic 3D modelling of objects, for numerous applications such as topographic mapping, 3D city modelling, orthophoto generation, and cultural heritage preservation. In this paper, the capability of four different state-of-the-art software packages, namely 3DSurvey, Agisoft Photoscan, Pix4Dmapper Pro and SURE, is examined to generate high-density point clouds as well as a Digital Surface Model (DSM) over a historical site. The main steps of this study are image acquisition, point cloud generation, and accuracy assessment. The overlapping images are first captured using a quadcopter and are then processed by the different software packages to generate point clouds and DSMs. In order to evaluate the accuracy and quality of the point clouds and DSMs, both visual and geometric assessments are carried out, and the comparison results are reported.

  12. Automated corresponding point candidate selection for image registration using wavelet transformation neural network with rotation invariant inputs and context information about neighboring candidates

    NASA Astrophysics Data System (ADS)

    Okumura, Hiroshi; Suezaki, Masashi; Sueyasu, Hideki; Arai, Kohei

    2003-03-01

    An automated method that can select corresponding point candidates is developed. This method has the following three features: 1) employment of the RIN-net for corresponding point candidate selection; 2) employment of multi-resolution analysis with the Haar wavelet transformation to improve selection accuracy and noise tolerance; 3) employment of context information about corresponding point candidates for screening of the selected candidates. Here, 'RIN-net' denotes a back-propagation-trained, feed-forward, 3-layer artificial neural network that takes rotation invariants as input data; in our system, pseudo-Zernike moments are employed as the rotation invariants. The RIN-net has an N x N pixel field of view (FOV). Experiments conducted to evaluate the corresponding point candidate selection capability of the proposed method on various kinds of remotely sensed images show that it achieves fewer training patterns, less training time, and higher selection accuracy than the conventional method.
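    The multi-resolution analysis in feature 2) can be illustrated with a single level of the 2D Haar transform, which splits an image into an approximation band and three detail bands. The averaging normalization used here is one common convention, chosen for readability rather than taken from the paper.

```python
import numpy as np

def haar2d(img):
    """One level of the 2D Haar transform: returns (LL, LH, HL, HH)."""
    a = img[0::2, 0::2].astype(float)   # top-left of each 2x2 block
    b = img[0::2, 1::2].astype(float)   # top-right
    c = img[1::2, 0::2].astype(float)   # bottom-left
    d = img[1::2, 1::2].astype(float)   # bottom-right
    LL = (a + b + c + d) / 4.0          # approximation (half-resolution image)
    LH = (a - b + c - d) / 4.0          # horizontal detail
    HL = (a + b - c - d) / 4.0          # vertical detail
    HH = (a - b - c + d) / 4.0          # diagonal detail
    return LL, LH, HL, HH

img = np.arange(16.0).reshape(4, 4)
LL, LH, HL, HH = haar2d(img)
```

    Recursing on the LL band yields the coarser levels; feeding invariants computed at several such levels is what gives the selection step its noise tolerance.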

  13. Real time thermal imaging for analysis and control of crystal growth by the Czochralski technique

    NASA Technical Reports Server (NTRS)

    Wargo, M. J.; Witt, A. F.

    1992-01-01

    A real-time thermal imaging system with temperature resolution better than ±0.5 °C and spatial resolution better than 0.5 mm has been developed. It has been applied to the analysis of melt surface thermal field distributions in both Czochralski and liquid-encapsulated Czochralski growth configurations. The sensor can provide single- or multiple-point thermal information; a multi-pixel averaging algorithm has been developed that permits localized, low-noise sensing and display of optical intensity variations at any location in the hot zone as a function of time. Temperature distributions are measured by extraction of data along a user-selectable linear pixel array and are simultaneously displayed, as a graphic overlay, on the thermal image.

  14. Speckle noise suppression method in holographic display using time multiplexing

    NASA Astrophysics Data System (ADS)

    Liu, Su-Juan; Wang, Di; Li, Song-Jie; Wang, Qiong-Hua

    2017-06-01

    We propose a method to suppress the speckle noise in holographic display using time multiplexing. Diffractive optical elements (DOEs) and sub-computer-generated holograms (sub-CGHs) are generated, respectively. The final image is reconstructed using time multiplexing of the subimages and the final subimages. Meanwhile, the speckle noise of the final image is suppressed by reducing the coherence of the reconstructed light and separating the adjacent image points in space. Compared with the pixel separation method, the experiments demonstrate that the proposed method suppresses speckle noise effectively, with a smaller calculation burden and a lower demand on the frame rate of the spatial light modulator. In addition, as the number of DOEs and sub-CGHs increases, the speckle noise is further suppressed.
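    The underlying statistics can be checked numerically: averaging M mutually incoherent speckle frames reduces the speckle contrast (intensity standard deviation over mean) by roughly 1/sqrt(M), which is the benefit time multiplexing buys. A minimal sketch with synthetic fully developed speckle:

```python
import numpy as np

rng = np.random.default_rng(3)

def speckle_frame(n=256):
    """Fully developed speckle: intensity of a complex random-phase field."""
    field = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return np.abs(field) ** 2

def contrast(I):
    """Speckle contrast: standard deviation over mean of intensity."""
    return I.std() / I.mean()

single = contrast(speckle_frame())      # ~1.0 for a single coherent frame
# Averaging 16 independent frames should cut the contrast to ~1/4
averaged = contrast(np.mean([speckle_frame() for _ in range(16)], axis=0))
```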

  15. The Advanced Gamma-ray Imaging System (AGIS): A Nanosecond Time Scale Stereoscopic Array Trigger System.

    NASA Astrophysics Data System (ADS)

    Krennrich, Frank; Buckley, J.; Byrum, K.; Dawson, J.; Drake, G.; Horan, D.; Krawzcynski, H.; Schroedter, M.

    2008-04-01

    Imaging atmospheric Cherenkov telescope arrays (VERITAS, HESS) have shown unprecedented capabilities for suppressing backgrounds from cosmic-ray-induced air showers, muons and night-sky-background fluctuations. Next-generation arrays with on the order of 100 telescopes offer larger collection areas, provide the possibility of seeing an air shower from more viewpoints on the ground, and have the potential to improve sensitivity and provide additional background suppression. Here we discuss the design of a fast array trigger system that has the potential to perform real-time image analysis, allowing substantially improved background suppression at the trigger level.

  16. Real time SAR processing

    NASA Technical Reports Server (NTRS)

    Premkumar, A. B.; Purviance, J. E.

    1990-01-01

    A simplified model for the SAR imaging problem is presented. The model is based on the geometry of the SAR system. Using this model, an expression for the entire phase history of the received SAR signal is formulated. From the phase history, it is shown that the range and azimuth coordinates of a point target image can be obtained by processing the phase information during the intrapulse and interpulse periods, respectively. A VLSI architecture for the SAR signal processor is presented that generates images in real time. The architecture uses a small number of chips, a new correlation processor, and an efficient azimuth correlation process.

  17. Robotics and dynamic image analysis for studies of gene expression in plant tissues.

    PubMed

    Hernandez-Garcia, Carlos M; Chiera, Joseph M; Finer, John J

    2010-05-05

    Gene expression in plant tissues is typically studied by destructive extraction of compounds from plant tissues for in vitro analyses. The methods presented here utilize the green fluorescent protein (gfp) gene for continual monitoring of gene expression in the same pieces of tissue over time. The gfp gene was placed under the regulatory control of different promoters and introduced into lima bean cotyledonary tissues via particle bombardment. Cotyledons were then placed on a robotic image collection system, which consisted of a fluorescence dissecting microscope with a digital camera and a 2-dimensional robotics platform custom-designed to allow secure attachment of culture dishes. Images were collected from cotyledonary tissues every hour for 100 hours to generate expression profiles for each promoter. Each collected series of 100 images was first subjected to manual image alignment using ImageReady to make certain that GFP-expressing foci were consistently retained within the selected fields of analysis. Specific regions of the series, measuring 300 x 400 pixels, were then selected for further analysis to provide GFP Intensity measurements using ImageJ software. Batch images were separated into the red, green and blue channels, and GFP-expressing areas were identified using the threshold feature of ImageJ. After subtracting the background fluorescence (subtraction of the gray values of non-expressing pixels from every pixel) in the respective red and green channels, GFP Intensity was calculated by multiplying the mean grayscale value per pixel by the total number of GFP-expressing pixels in each channel, and then adding those values for the red and green channels. GFP Intensity values were collected for all 100 time points to yield expression profiles. Variations in GFP expression profiles resulted from differences in factors such as promoter strength, the presence of a silencing suppressor, or the nature of the promoter.
    In addition to quantification of GFP intensity, the image series were also used to generate time-lapse animations using ImageReady. Time-lapse animations revealed that a clear majority of cells displayed a relatively rapid increase in GFP expression, followed by a slow decline. Some cells occasionally displayed a sudden loss of fluorescence, which may be associated with rapid cell death. Apparent transport of GFP across the membrane and cell wall to adjacent cells was also observed. Time-lapse animations provided additional information that could not otherwise be obtained from GFP Intensity profiles or single-time-point image collections.
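The per-frame intensity metric described above (threshold, background subtraction, mean gray value times expressing-pixel count, summed over the red and green channels) can be sketched in Python/NumPy. This is a minimal illustration, not the authors' ImageJ macro; the green-channel threshold value is a hypothetical stand-in for ImageJ's interactive threshold feature:

```python
import numpy as np

def gfp_intensity(frame, background, threshold=40):
    """Approximate the per-frame GFP Intensity metric described above.

    frame: HxWx3 uint8 RGB image of the selected analysis region.
    background: (bg_red, bg_green) mean gray values of non-expressing pixels.
    threshold: hypothetical green-channel cutoff marking GFP-expressing pixels.
    """
    red = frame[..., 0].astype(float)
    green = frame[..., 1].astype(float)

    # Identify GFP-expressing pixels with a simple global threshold.
    mask = green > threshold
    if not mask.any():
        return 0.0

    intensity = 0.0
    for channel, bg in zip((red, green), background):
        # Subtract background fluorescence from every expressing pixel,
        # then multiply the mean gray value per pixel by the pixel count.
        vals = np.clip(channel[mask] - bg, 0, None)
        intensity += vals.mean() * mask.sum()
    return intensity
```

Running this on each of the 100 hourly frames of a region would yield an expression profile of the kind analyzed in the study.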

  18. IIPImage: Large-image visualization

    NASA Astrophysics Data System (ADS)

    Pillay, Ruven

    2014-08-01

    IIPImage is an advanced, high-performance, feature-rich image server system that enables online access to full-resolution floating-point (as well as other bit depth) images at terabyte scales. Paired with the VisiOmatic (ascl:1408.010) celestial image viewer, the system can comfortably handle gigapixel-size images as well as advanced image features, including 8-, 16- and 32-bit depths, CIELAB colorimetric images, and scientific imagery such as multispectral images. Streaming is tile-based, which enables viewing, navigating and zooming in real time around gigapixel-size images. Source images can be in either TIFF or JPEG2000 format. Whole images or regions within images can also be rapidly and dynamically resized and exported by the server from a single source image without the need to store multiple files in various sizes.

  19. Evaluation of Body Image and Sexual Satisfaction in Women Undergoing Female Genital Plastic/Cosmetic Surgery.

    PubMed

    Goodman, Michael P; Placik, Otto J; Matlock, David L; Simopoulos, Alex F; Dalton, Teresa A; Veale, David; Hardwick-Smith, Susan

    2016-10-01

    Little prospective data exist regarding the procedures constituting female genital plastic/cosmetic surgery (FGPS). To evaluate whether the procedures of labiaplasty and vaginoperineoplasty improve genital self-image, and to evaluate effects on sexual satisfaction. Prospective cohort case-controlled study of 120 subjects evaluated at baseline and at 6, 12, and 24 months postoperatively, paired with a demographically similar control group. Interventions included labiaplasty, clitoral hood reduction, and/or aesthetic vaginal tightening, defined as perineoplasty + "vaginoplasty" (aka "vaginal rejuvenation"). Outcome measures included body image, genital self-image, sexual satisfaction, and body esteem. As a group, study patients tested at baseline showed body dissatisfaction, negative genital self-image, and poorer indices of sexual satisfaction. Preoperative body image of study patients was in a range considered mildly to moderately dysmorphic, but matched controls at one and two years; genital self-image scores at entry were considerably lower than controls', but by the 2-year follow-up had surpassed the controls' entry values. Similarly, sexual satisfaction values, significantly lower at entry, equaled control values at one year and surpassed them at 2 years. Postoperatively, at all time points, these differences in body image and genital self-image disappeared, and sexual satisfaction markedly improved. Overall body esteem did not differ between study and control groups, with the exception of the genital esteem quotient, which improved after surgery. Women requesting and completing FGPS, when tested by validated instruments, at entry report sexual dissatisfaction and negative genital self-image. When tested at several time points after surgery, up to two years, these findings were no longer present. When performed by an experienced surgeon, FGPS appears to provide improvement in sexual satisfaction and genital self-image. Level of Evidence: 2 (Therapeutic).
© 2016 The American Society for Aesthetic Plastic Surgery, Inc. Reprints and permission: journals.permissions@oup.com.

  20. Three-dimensional fluorescence-enhanced optical tomography using a hand-held probe based imaging system

    PubMed Central

    Ge, Jiajia; Zhu, Banghe; Regalado, Steven; Godavarty, Anuradha

    2008-01-01

    Hand-held based optical imaging systems are a recent development towards diagnostic imaging of breast cancer. To date, all the hand-held based optical imagers have been used to perform only surface mapping and target localization, and are not capable of demonstrating tomographic imaging. Herein, a novel hand-held probe based optical imager is developed towards three-dimensional (3-D) optical tomography studies. The unique features of this optical imager, which primarily consists of a hand-held probe and an intensified charge-coupled device detector, are its ability to: (i) image large tissue areas (5×10 sq. cm) in a single scan; (ii) perform simultaneous multiple-point illumination and collection, thus reducing the overall imaging time; and (iii) adapt to varying tissue curvatures, owing to a flexible probe head design. Experimental studies are performed in the frequency domain on large slab phantoms (∼650 ml) using fluorescence target(s) under perfect uptake (1:0) contrast ratios, and varying target depths (1–2 cm) and X-Y locations. The effect of implementing simultaneous over sequential multiple-point illumination towards 3-D tomography is experimentally demonstrated. The feasibility of 3-D optical tomography studies has been demonstrated for the first time using a hand-held based optical imager. Preliminary fluorescence-enhanced optical tomography studies are able to reconstruct 0.45 ml target(s) located at different target depths (1–2 cm). However, the depth recovery was limited as the actual target depth increased, since only reflectance measurements were acquired. Extensive tomography studies are currently being carried out to determine the resolution and performance limits of the imager on flat and curved phantoms. PMID:18697559

  2. Image-Based Reconstruction and Analysis of Dynamic Scenes in a Landslide Simulation Facility

    NASA Astrophysics Data System (ADS)

    Scaioni, M.; Crippa, J.; Longoni, L.; Papini, M.; Zanzi, L.

    2017-12-01

    The application of image processing and photogrammetric techniques to the dynamic reconstruction of landslide simulations in a scaled-down facility is described. Simulations are also used here for active-learning purposes: students are helped to understand how physical processes happen and which kinds of observations may be obtained from a sensor network. In particular, the use of digital images to obtain multi-temporal information is presented. On one side, using a multi-view sensor setup based on four synchronized GoPro 4 Black® cameras, a 4D (3D spatial position and time) reconstruction of the dynamic scene is obtained through the composition of several 3D models obtained from dense image matching. The final textured 4D model allows one to revisit a completed experiment at any time in a dynamic, interactive mode. On the other side, a digital image correlation (DIC) technique has been used to track surface point displacements in the image sequence obtained from the camera in front of the simulation facility. While the 4D model may provide a qualitative description and documentation of the running experiment, DIC analysis outputs quantitative information such as local point displacements and velocities, to be related to physical processes and to other observations. All the hardware and software equipment adopted for the photogrammetric reconstruction is based on low-cost and open-source solutions.
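The DIC step described above amounts to template matching between successive frames. A minimal sketch, assuming grayscale frames held as NumPy arrays and an exhaustive normalized cross-correlation search; the function name and window sizes are illustrative, not the authors' implementation:

```python
import numpy as np

def track_point(frame_a, frame_b, pt, tpl=10, search=20):
    """Track one surface point between two grayscale frames by normalized
    cross-correlation (a minimal DIC sketch).

    pt: (row, col) of the point in frame_a.
    tpl, search: template and search half-sizes in pixels.
    Returns the (d_row, d_col) displacement in pixels."""
    r, c = pt
    template = frame_a[r - tpl:r + tpl + 1, c - tpl:c + tpl + 1].astype(float)
    t = template - template.mean()

    best, best_rc = -np.inf, pt
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            patch = frame_b[rr - tpl:rr + tpl + 1, cc - tpl:cc + tpl + 1].astype(float)
            if patch.shape != template.shape:
                continue  # candidate window fell outside the image
            p = patch - patch.mean()
            denom = np.sqrt((t * t).sum() * (p * p).sum())
            if denom == 0:
                continue  # textureless patch, correlation undefined
            score = (t * p).sum() / denom
            if score > best:
                best, best_rc = score, (rr, cc)
    # Displacement in pixels; dividing by the frame interval gives velocity.
    return best_rc[0] - r, best_rc[1] - c
```

Applied to every tracked point in every frame pair, this yields the displacement and velocity fields the abstract relates to the physical process.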

  3. Noninvasive measurement of pharmacokinetics by near-infrared fluorescence imaging in the eye of mice

    NASA Astrophysics Data System (ADS)

    Dobosz, Michael; Strobel, Steffen; Stubenrauch, Kay-Gunnar; Osl, Franz; Scheuer, Werner

    2014-01-01

    Purpose: For generating preclinical pharmacokinetics (PKs) of compounds, blood is drawn at different time points and levels are quantified by different analytical methods. In order to obtain statistically meaningful data, 3 to 5 animals are used for each time point to get the serum peak level and half-life of the compound. Both characteristics are determined by data interpolation, which may influence the accuracy of these values. We provide a method that allows continuous monitoring of blood levels noninvasively by measuring the fluorescence intensity of labeled compounds in the eye and other body regions of anesthetized mice. Procedures: The method evaluation was performed with four different fluorescent compounds: (i) indocyanine green, a nontargeting dye; (ii) OsteoSense750, a bone-targeting agent; (iii) tumor-targeting Trastuzumab-Alexa750; and (iv) its F(ab')2-Alexa750 fragment. The latter was used for a direct comparison between fluorescence imaging and classical blood analysis using enzyme-linked immunosorbent assay (ELISA). Results: We found an excellent correlation between blood levels measured by noninvasive eye imaging and the results generated by classical methods. A strong correlation between eye imaging and ELISA was demonstrated for the F(ab')2 fragment. Whole-body imaging revealed compound accumulation in the expected regions (e.g., liver, bone). Conclusions: The combination of eye and whole-body fluorescence imaging enables the simultaneous measurement of blood PKs and biodistribution of fluorescently labeled compounds.

  4. Space infrared telescope pointing control system. Infrared telescope tracking in the presence of target motion

    NASA Technical Reports Server (NTRS)

    Powell, J. D.; Schneider, J. B.

    1986-01-01

    The use of charge-coupled devices, or CCDs, has been documented by a number of sources as an effective means of providing a measurement of spacecraft attitude with respect to the stars. A method exists of defocussing a star image over a small subsection of a large CCD array and interpolating the resulting shape. This yields an increase in the accuracy of the device by better than an order of magnitude over the case when the star image is focussed upon a single CCD pixel. This research examines the effect that image motion has upon the overall precision of this star sensor when applied to an orbiting infrared observatory. While CCDs collect energy within the visible spectrum of light, the targets of scientific interest may well have no appreciable visible emissions. Image motion has the effect of smearing the image of the star in the direction of motion during a particular sampling interval. The presence of image motion is incorporated into a Kalman filter for the system, and it is shown that the addition of a gyro command term is adequate to compensate for the effect of image motion in the measurement. The updated gyro model is included in this analysis, but has natural frequencies faster than the projected star tracker sample rate for dim stars. The system state equations are therefore reduced by modelling gyro drift as a white-noise process. There exists a tradeoff in the selected star tracker sample time between the CCD, which has improved noise characteristics as sample time increases, and the gyro, which will potentially drift further between long attitude updates. A sample time which minimizes pointing estimation error exists for the random-drift gyro model as well as for a random-walk gyro model.
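The closing tradeoff can be made concrete with a toy error budget. In the sketch below, the star-tracker measurement variance is assumed to fall as 1/T with integration time T, while white-noise gyro drift accumulates variance proportionally to T between attitude updates; the constants and function names are purely illustrative, not values from the study:

```python
import math

def pointing_error(T, ccd_var=4.0, drift_psd=0.25):
    """Toy 1-sigma pointing error for sample time T (arbitrary units).

    Assumed model: star-tracker noise variance ccd_var / T (longer
    integration collects more photons), plus gyro drift variance
    drift_psd * T accumulated between updates."""
    return math.sqrt(ccd_var / T + drift_psd * T)

def best_sample_time(ccd_var=4.0, drift_psd=0.25):
    """Minimizer of a/T + q*T: setting the derivative to zero gives
    T* = sqrt(a / q)."""
    return math.sqrt(ccd_var / drift_psd)
```

The minimum at T* = sqrt(a/q) mirrors the abstract's conclusion that a pointing-error-minimizing sample time exists for both gyro drift models.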

  5. Combined semantic and similarity search in medical image databases

    NASA Astrophysics Data System (ADS)

    Seifert, Sascha; Thoma, Marisa; Stegmaier, Florian; Hammon, Matthias; Kramer, Martin; Huber, Martin; Kriegel, Hans-Peter; Cavallaro, Alexander; Comaniciu, Dorin

    2011-03-01

    The current diagnostic process at hospitals is mainly based on reviewing and comparing images coming from multiple time points and modalities in order to monitor disease progression over a period of time. However, for ambiguous cases the radiologist relies heavily on reference literature or a second opinion. Although there is a vast amount of acquired images stored in PACS systems which could be reused for decision support, these data sets suffer from weak search capabilities. Thus, we present a search methodology which enables the physician to carry out intelligent search scenarios on medical image databases, combining ontology-based semantic search with appearance-based similarity search. It enabled the elimination of 12% of the top-ten hits which would arise without taking the semantic context into account.

  6. STS-56 ESC Earth observation of New York City at night

    NASA Technical Reports Server (NTRS)

    1993-01-01

    STS-56 electronic still camera (ESC) Earth observation image shows New York City at night as recorded on the 64th orbit of Discovery, Orbiter Vehicle (OV) 103. The image was recorded with an image intensifier on the Hand-held, Earth-oriented, Real-time, Cooperative, User-friendly, Location-targeting and Environmental System (HERCULES). HERCULES is a device that makes it simple for shuttle crewmembers to take pictures of Earth as they merely point a modified 35mm camera and shoot any interesting feature, whose latitude and longitude are automatically determined in real-time. Center coordinates on this image are 40.665 degrees north latitude and 74.048 degrees west longitude. (1/60 second exposure). Digital file name is ESC04034.IMG.

  7. Effect of intravenous gadolinium-DTPA on diffusion-weighted imaging of brain tumors: a short temporal interval assessment.

    PubMed

    Li, Xiang; Qu, Jin-Rong; Luo, Jun-Peng; Li, Jing; Zhang, Hong-Kai; Shao, Nan-Nan; Kwok, Keith; Zhang, Shou-Ning; Li, Yan-le; Liu, Cui-Cui; Zee, Chi-Shing; Li, Hai-Liang

    2014-09-01

    To determine the effect of intravenous administration of gadolinium (Gd) contrast medium (Gd-DTPA) on diffusion-weighted imaging (DWI) for the evaluation of normal brain parenchyma vs. brain tumor over a short temporal interval. Forty-four DWI studies using b values of 0 and 1000 s/mm^2 were performed before, immediately after, 1 min after, 3 min after, and 5 min after the administration of Gd-DTPA on 62 separate lesions, including 15 meningiomas, 17 gliomas and 30 metastatic lesions. The signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and apparent diffusion coefficient (ADC) values of the brain tumor lesions and normal brain tissues were measured on pre- and postcontrast images. Statistical analysis using paired t-tests between precontrast and postcontrast data was performed for the three brain tumor types and normal brain tissue. The SNR and CNR of brain tumors and the SNR of normal brain tissue showed no statistical differences between pre- and postcontrast images (P > 0.05). The ADC values of the three brain tumor types demonstrated a significant initial increase at the immediate time point (P < 0.01) and a decrease at the following 1-min time point (P < 0.01) after contrast. A significant decrease of the ADC value was still found at the 3-min and 5-min time points in the meningioma group (P < 0.01), with gradual normalization over time. The ADC values of normal brain tissues demonstrated a significant initial elevation on the immediately postcontrast DWI sequence (P < 0.01). Contrast medium can cause a slight but statistically significant change in the ADC value within a short temporal interval after contrast administration. The effect is both time and lesion-type dependent. © 2013 Wiley Periodicals, Inc.
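For reference, with a two-b-value acquisition like the one above (b = 0 and 1000 s/mm^2), the ADC follows directly from the monoexponential signal model S(b) = S0 · exp(-b · ADC). A minimal sketch of that standard computation (not code from the study):

```python
import math

def adc(s_b0, s_b1000, b0=0.0, b1=1000.0):
    """Apparent diffusion coefficient from two DWI signal intensities.

    From S(b) = S0 * exp(-b * ADC):
        ADC = ln(S(b0) / S(b1)) / (b1 - b0)
    With b in s/mm^2, the result is in mm^2/s (typical brain values
    are on the order of 1e-3 mm^2/s)."""
    return math.log(s_b0 / s_b1000) / (b1 - b0)
```

Per-voxel application of this formula to the pre- and postcontrast image pairs is what produces the ADC maps whose changes the study tests.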

  8. Three-dimensional reproducibility of natural head position.

    PubMed

    Weber, Diana W; Fallis, Drew W; Packer, Mark D

    2013-05-01

    Although natural head position has proven to be reliable in the sagittal plane, with an increasing interest in 3-dimensional craniofacial analysis, a determination of its reproducibility in the coronal and axial planes is essential. This study was designed to evaluate the reproducibility of natural head position over time in the sagittal, coronal, and axial planes of space with 3-dimensional imaging. Three-dimensional photographs were taken of 28 adult volunteers (ages, 18-40 years) in natural head position at 5 time points: baseline, 4 hours, 8 hours, 24 hours, and 1 week. Using the true vertical and horizontal laser lines projected in an iCAT cone-beam computed tomography machine (Imaging Sciences International, Hatfield, Pa) for orientation, we recorded references for natural head position on the patient's face with semipermanent markers. By using a 3-dimensional camera system, photographs were taken at each time point to capture the orientation of the reference points. By superimposing each of the 5 photographs on stable anatomic surfaces, changes in the position of the markers were recorded and assessed for parallelism using 3dMDvultus (3dMD, Atlanta, Ga) and Dolphin (Dolphin Imaging & Management Solutions, Chatsworth, Calif) software. No statistically significant differences were observed between the 5 time points in any of the 3 planes of space. However, a statistically significant difference was observed between the mean angular deviations of the 3 reference planes, with a hierarchy of natural head position reproducibility established as coronal > axial > sagittal. Within the parameters of this study, natural head position was found to be reproducible in the sagittal, coronal, and axial planes of space. The coronal plane had the least variation over time, followed by the axial and sagittal planes. Copyright © 2013 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.

  9. Macroscopic in vivo imaging of facial nerve regeneration in Thy1-GFP rats.

    PubMed

    Placheta, Eva; Wood, Matthew D; Lafontaine, Christine; Frey, Manfred; Gordon, Tessa; Borschel, Gregory H

    2015-01-01

    Facial nerve injury leads to severe functional and aesthetic deficits. The transgenic Thy1-GFP rat is a new model for facial nerve injury and reconstruction research that will help improve clinical outcomes through translational facial nerve injury research. To determine whether serial in vivo imaging of nerve regeneration in the transgenic rat model is possible, facial nerve regeneration was imaged under the main paradigms of facial nerve injury and reconstruction. Fifteen male Thy1-GFP rats, which express green fluorescent protein (GFP) in their neural structures, were divided into 3 groups in the laboratory: crush injury, direct repair, and cross-face nerve grafting (30-mm graft length). The distal nerve stump or nerve graft was predegenerated for 2 weeks. The facial nerve of the transgenic rats was serially imaged at the time of operation and after 2, 4, and 8 weeks of regeneration. The imaging was performed under a GFP-MDS-96/BN excitation stand (BLS Ltd). The exposure was facial nerve injury; the main outcome measure was optical fluorescence of regenerating facial nerve axons. Serial in vivo imaging of the regeneration of GFP-positive axons in the Thy1-GFP rat model is possible. All animals survived the short imaging procedures well, and nerve regeneration was followed over clinically relevant distances. Predegeneration of the distal nerve stump or the cross-face nerve graft was, however, necessary to image the regeneration front at early time points. Crush injury was not suitable to sufficiently predegenerate the nerve (and to allow for degradation of the GFP through Wallerian degeneration). After direct repair, axons regenerated over the coaptation site between 2 and 4 weeks. The GFP-positive nerve fibers reached the distal end of the 30-mm-long cross-face nerve grafts after 4 to 8 weeks of regeneration. The time course of facial nerve regeneration was studied by serial in vivo imaging in the transgenic rat model.
    Nerve regeneration was followed over clinically relevant distances in a small number of experimental animals, as they were subsequently imaged at multiple time points. The Thy1-GFP rat model will help improve clinical outcomes of facial reanimation surgery through improving knowledge of facial nerve regeneration after surgical procedures.

  10. A single FPGA-based portable ultrasound imaging system for point-of-care applications.

    PubMed

    Kim, Gi-Duck; Yoon, Changhan; Kye, Sang-Bum; Lee, Youngbae; Kang, Jeeun; Yoo, Yangmo; Song, Tai-kyong

    2012-07-01

    We present a cost-effective portable ultrasound system based on a single field-programmable gate array (FPGA) for point-of-care (POC) applications. In the portable ultrasound system developed, all the ultrasound signal and image processing modules, including an effective 32-channel receive beamformer with pseudo-dynamic focusing, are embedded in an FPGA chip. For overall system control, a mobile processor running Linux at 667 MHz is used. The scan-converted ultrasound image data from the FPGA are directly transferred to the system controller via external direct memory access without a video processing unit. The portable ultrasound system developed can provide real-time B-mode imaging at a maximum frame rate of 30 frames/s, and it has a battery life of approximately 1.5 h. These results indicate that the single-FPGA-based portable ultrasound system developed is able to meet the processing requirements of medical ultrasound imaging while providing improved flexibility for adapting to emerging POC applications.
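The receive beamformer embedded in the FPGA performs, at its core, delay-and-sum focusing. A minimal single-point sketch of that generic operation is shown below; all parameters (speed of sound, sampling rate, array geometry) are illustrative assumptions, not the paper's hardware design, and real beamformers interpolate delays rather than rounding to the nearest sample:

```python
import numpy as np

def delay_and_sum(rf, elem_x, focus, c=1540.0, fs=40e6):
    """Beamform one image point from channel RF data (minimal sketch).

    rf:      (n_channels, n_samples) received data; sample i is at time i/fs.
    elem_x:  x-position of each array element in metres (array along z = 0).
    focus:   (x, z) image point in metres; plane-wave transmit assumed,
             so the transmit path is simply the depth z."""
    x, z = focus
    acc = 0.0
    for ch in range(rf.shape[0]):
        # Round-trip path: transmit to the point plus return to this element.
        d = z + np.hypot(x - elem_x[ch], z)
        idx = int(round(d / c * fs))  # delay expressed in samples
        if 0 <= idx < rf.shape[1]:
            acc += rf[ch, idx]  # coherent sum across channels
    return acc
```

Repeating this for every pixel (and envelope-detecting the result) yields a B-mode image; the FPGA implements the same arithmetic in fixed-point parallel hardware.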

  11. In-flight calibration of the Hitomi Soft X-ray Spectrometer. (2) Point spread function

    NASA Astrophysics Data System (ADS)

    Maeda, Yoshitomo; Sato, Toshiki; Hayashi, Takayuki; Iizuka, Ryo; Angelini, Lorella; Asai, Ryota; Furuzawa, Akihiro; Kelley, Richard; Koyama, Shu; Kurashima, Sho; Ishida, Manabu; Mori, Hideyuki; Nakaniwa, Nozomi; Okajima, Takashi; Serlemitsos, Peter J.; Tsujimoto, Masahiro; Yaqoob, Tahir

    2018-03-01

    We present results of in-flight calibration of the point spread function of the Soft X-ray Telescope that focuses X-rays onto the pixel array of the Soft X-ray Spectrometer system. We make a full-array image of a point-like source by extracting the pulsed component of the Crab nebula emission. Within the limited statistics afforded by an exposure time of only 6.9 ks and limited knowledge of the systematic uncertainties, we find that the raytracing model of 1.2 arcmin half-power diameter is consistent with an image of the observed event distributions across pixels. The ratio between the Crab pulsar image and the raytracing shows scatter from pixel to pixel that is 40% or less in all except one pixel. The pixel-to-pixel ratio has a spread of 20%, on average, for the 15 edge pixels, with an averaged statistical error of 17% (1σ). In the central 16 pixels, the corresponding ratio is 15% with an error of 6%.

  12. Phantom Study Investigating the Accuracy of Manual and Automatic Image Fusion with the GE Logiq E9: Implications for use in Percutaneous Liver Interventions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burgmans, Mark Christiaan, E-mail: m.c.burgmans@lumc.nl; Harder, J. Michiel den, E-mail: chiel.den.harder@gmail.com; Meershoek, Philippa, E-mail: P.Meershoek@lumc.nl

    Purpose: To determine the accuracy of automatic and manual co-registration methods for image fusion of three-dimensional computed tomography (CT) with real-time ultrasonography (US) for image-guided liver interventions. Materials and Methods: CT images of a skills phantom with liver lesions were acquired and co-registered to US using GE Logiq E9 navigation software. Manual co-registration was compared to automatic and semiautomatic co-registration using an active tracker. Also, manual point registration was compared to plane registration with and without an additional translation point. Finally, comparison was made between manual and automatic selection of reference points. In each experiment, accuracy of the co-registration method was determined by measurement of the residual displacement in phantom lesions by two independent observers. Results: Mean displacements for a superficial and a deep liver lesion were comparable after manual and semiautomatic co-registration: 2.4 and 2.0 mm versus 2.0 and 2.5 mm, respectively. Both methods were significantly better than automatic co-registration: 5.9 and 5.2 mm residual displacement (p < 0.001; p < 0.01). The accuracy of manual point registration was higher than that of plane registration, the latter being heavily dependent on accurate matching of axial CT and US images by the operator. Automatic reference point selection resulted in significantly lower registration accuracy compared to manual point selection despite lower root-mean-square deviation (RMSD) values. Conclusion: The accuracy of manual and semiautomatic co-registration is better than that of automatic co-registration. For manual co-registration using a plane, choosing the correct plane orientation is an essential first step in the registration process. Automatic reference point selection based on RMSD values is error-prone.
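The study's central caveat is that the two accuracy measures can disagree: RMSD is computed over the reference points used for registration, while the clinically relevant quantity is the residual displacement at the target lesion. A minimal sketch of both, assuming matched 3-D coordinates in millimetres (function names are illustrative):

```python
import numpy as np

def rmsd(points_ct, points_us):
    """Root-mean-square deviation between matched reference points after
    registration - the fit metric a navigation system typically reports."""
    d = np.asarray(points_ct, float) - np.asarray(points_us, float)
    return float(np.sqrt((d ** 2).sum(axis=1).mean()))

def residual_displacement(lesion_ct, lesion_us):
    """Residual target displacement: distance between the lesion centre in
    the co-registered CT volume and in the live US image (mm)."""
    d = np.asarray(lesion_ct, float) - np.asarray(lesion_us, float)
    return float(np.linalg.norm(d))
```

A registration can minimize the first quantity over its reference points yet still leave a large value of the second at a lesion away from them, which is why the abstract warns that selection based on RMSD alone is error-prone.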

  13. Sequential imaging of asymptomatic carotid atheroma using ultrasmall superparamagnetic iron oxide-enhanced magnetic resonance imaging: a feasibility study.

    PubMed

    Sadat, Umar; Howarth, Simon P S; Usman, Ammara; Tang, Tjun Y; Graves, Martin J; Gillard, Jonathan H

    2013-11-01

    Inflammation within atheromatous plaques is a known risk factor for plaque vulnerability. This can be detected in vivo on high-resolution magnetic resonance imaging (MRI) using ultrasmall superparamagnetic iron oxide (USPIO) contrast medium. The purpose of this study was to assess the feasibility of performing sequential USPIO studies over a 1-year period. Ten patients with moderate asymptomatic carotid stenosis underwent carotid MR imaging both before and 36 hours after USPIO infusion at 0, 6, and 12 months. Images were manually segmented into quadrants, and the signal change per quadrant was calculated at these time points. A mixed repeated-measures statistical model was used to determine signal change attributable to USPIO uptake over time. All patients remained asymptomatic during the study. The mixed model revealed no statistical difference in USPIO uptake between the 3 time points. Intraclass correlation coefficients revealed a good agreement of quadrant signal pre-USPIO infusion between 0 and 6 months (0.70) and 0 and 12 months (0.70). Good agreement of quadrant signal after USPIO infusion was shown between 0 and 6 months (0.68), and moderate agreement was shown between 0 and 12 months (0.33). USPIO-enhanced sequential MRI of atheromatous carotid plaques is clinically feasible. This may have important implications for future longitudinal studies involving pharmacologic intervention in large patient cohorts. Copyright © 2013 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  14. Water and fat separation in real-time MRI of joint movement with phase-sensitive bSSFP.

    PubMed

    Mazzoli, Valentina; Nederveen, Aart J; Oudeman, Jos; Sprengers, Andre; Nicolay, Klaas; Strijkers, Gustav J; Verdonschot, Nico

    2017-07-01

    To introduce a method for obtaining fat-suppressed images in real-time MRI of moving joints at 3 Tesla (T) using a bSSFP sequence with phase detection to enhance visualization of soft tissue structures during motion. The wrist and knee of nine volunteers were imaged with a real-time bSSFP sequence while performing dynamic tasks. For appropriate choice of sequence timing parameters, water and fat pixels showed an out-of-phase behavior, which was exploited to reconstruct water and fat images. Additionally, a 2-point Dixon sequence was used for dynamic imaging of the joints, and resulting water and fat images were compared with our proposed method. The joints could be visualized with good water-fat separation and signal-to-noise ratio (SNR), while maintaining a relatively high temporal resolution (5 fps in knee imaging and 10 fps in wrist imaging). The proposed method produced images of moving joints with higher SNR and higher image quality when compared with the Dixon method. Water-fat separation is feasible in real-time MRI of moving knee and wrist at 3 T. PS-bSSFP offers movies with higher SNR and higher diagnostic quality when compared with Dixon scans. Magn Reson Med 78:58-68, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
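The 2-point Dixon reconstruction used as the comparison method rests on a simple identity: in the in-phase echo the water and fat signals add, and in the opposed-phase echo they subtract. A minimal sketch of that standard reconstruction, assuming ideal phase behavior and magnitude-consistent echo images (real scanners must also correct B0-induced phase errors):

```python
import numpy as np

def dixon_two_point(in_phase, opposed_phase):
    """Standard 2-point Dixon water-fat separation.

    With water (W) and fat (F) signal magnitudes per pixel:
        IP = W + F,  OP = W - F
        =>  W = (IP + OP) / 2,  F = (IP - OP) / 2
    Returns (water, fat) images as float arrays."""
    ip = np.asarray(in_phase, float)
    op = np.asarray(opposed_phase, float)
    water = (ip + op) / 2.0
    fat = (ip - op) / 2.0
    return water, fat
```

The PS-bSSFP method proposed in the paper instead exploits the out-of-phase behavior of water and fat within a single bSSFP acquisition, avoiding the second echo that Dixon requires.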

  15. Keratocyte Apoptosis and Not Myofibroblast Differentiation Mark the Graft/Host Interface at Early Time-Points Post-DSAEK in a Cat Model

    PubMed Central

    Weis, Adam J.; Huxlin, Krystel R.; Callan, Christine L.; DeMagistris, Margaret A.; Hindman, Holly B.

    2013-01-01

    Purpose: To evaluate myofibroblast differentiation as an etiology of haze at the graft-host interface in a cat model of Descemet’s Stripping Automated Endothelial Keratoplasty (DSAEK). Methods: DSAEK was performed on 10 eyes of 5 adult domestic short-hair cats. In vivo corneal imaging with slit lamp, confocal microscopy, and optical coherence tomography (OCT) was performed twice weekly. Cats were sacrificed and corneas harvested 4 hours, and 2, 4, 6, and 9 days post-DSAEK. Corneal sections were stained with the TUNEL method, and immunohistochemistry was performed for α-smooth muscle actin (α-SMA) and fibronectin with DAPI counterstain. Results: At all in vivo imaging time-points, corneal OCT revealed an increase in backscatter of light and confocal imaging revealed an acellular zone at the graft-host interface. At all post-mortem time-points, immunohistochemistry revealed a complete absence of α-SMA staining at the graft-host interface. At 4 hours, extracellular fibronectin staining was identified along the graft-host interface, and both fibronectin and TUNEL assay were positive within adjacent cells extending into the host stroma. By day 2, fibronectin and TUNEL staining diminished and a distinct acellular zone was present in the region of previously TUNEL-positive cells. Conclusions: OCT imaging consistently showed increased reflectivity at the graft-host interface in cat corneas in the days post-DSAEK. This was not associated with myofibroblast differentiation at the graft-host interface, but rather with apoptosis and the development of a subsequent acellular zone. The roles of extracellular matrix changes and keratocyte cell death and repopulation should be investigated further as potential contributors to the interface optical changes. PMID:24098706

  16. Proof of Concept: Design and Initial Evaluation of a Device to Measure Gastrointestinal Transit Time.

    PubMed

    Wagner, Robert H; Savir-Baruch, Bital; Halama, James R; Venu, Mukund; Gabriel, Medhat S; Bova, Davide

    2017-09-01

    Chronic constipation and gastrointestinal motility disorders constitute a large part of a gastroenterology practice and have a significant impact on a patient's quality of life and lifestyle. In most cases, medications are prescribed to alleviate symptoms without an objective measurement of response. Commonly used investigations of gastrointestinal transit times are currently limited to radiopaque markers or electronic capsules. Repeated use of these techniques is limited because of the radiation exposure and the significant cost of the devices. We present the proof of concept for a new device to measure gastrointestinal transit time using commonly available and inexpensive materials with only a small amount of radiotracer. Methods: We assembled gelatin capsules containing a 67Ga-citrate-radiolabeled grain of rice embedded in paraffin for use as a point-source transit device. It was tested for stability in vitro and subsequently was given orally to 4 healthy volunteers and 10 patients with constipation or diarrhea. Imaging was performed at regular intervals until the device was excreted. Results: The device remained intact and visible as a point source in all subjects until excretion. When used along with a diary of bowel movement times and dates, the device could determine the total transit time. The device could be visualized either alone or in combination with a barium small-bowel follow-through study or a gastric emptying study. Conclusion: The use of a point-source transit device for the determination of gastrointestinal transit time is a feasible alternative to other methods. The device is inexpensive and easy to assemble, requires only a small amount of radiotracer, and remains inert throughout the gastrointestinal tract, allowing for accurate determination of gastrointestinal transit time. Further investigation of the device is required to establish optimum imaging parameters and reference values. Measurements of gastrointestinal transit time may be useful in managing patients with dysmotility and in selecting the appropriate pharmaceutical treatment. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.

  17. Measurement of radioactivity concentration in blood by using newly developed ToT LuAG-APD based small animal PET tomograph.

    PubMed

    Malik, Azhar H; Shimazoe, Kenji; Takahashi, Hiroyuki

    2013-01-01

    In order to obtain the plasma time activity curve (PTAC), the input function for almost all quantitative PET studies, patient blood is sampled manually from an artery or vein, which has various drawbacks. Recently, a novel compact Time-over-Threshold (ToT) based Pr:LuAG-APD animal PET tomograph was developed in our laboratory, with 10% energy resolution, 4.2 ns time resolution and 1.76 mm spatial resolution. The measured spatial resolution shows much promise for imaging the blood vasculature, i.e., arteries of 2.3-2.4 mm diameter, and hence for measuring the PTAC for quantitative PET studies. To find the measurement time required to obtain reasonable counts for image reconstruction, the most important parameter is the sensitivity of the system. Usually, small animal PET systems are characterized by using a point source in air. We used the Electron Gamma Shower 5 (EGS5) code to simulate a point source at different positions inside the sensitive volume of the tomograph, and the axial and radial variations in sensitivity were studied in air and in a phantom-equivalent water cylinder. An average sensitivity difference of 34% in the axial direction and 24.6% in the radial direction is observed when the point source is placed inside the water cylinder instead of air.
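
The axial fall-off in sensitivity that the EGS5 simulation quantifies has a simple geometric intuition: for a cylindrical scanner operated in 3D mode, the number of detector pairs that can see a point source drops roughly linearly from the axial centre to the edges. A minimal sketch of that idealized triangular profile (the half-length and sample positions below are hypothetical, not this tomograph's dimensions):

```python
import numpy as np

def axial_sensitivity(z_mm, half_length_mm):
    # Idealized triangular axial sensitivity of a 3D cylindrical PET
    # scanner: maximal at the axial centre, falling linearly to zero
    # at the scanner edges.
    return np.clip(1.0 - np.abs(z_mm) / half_length_mm, 0.0, None)

z = np.linspace(-40.0, 40.0, 9)       # axial positions (mm), hypothetical
s = axial_sensitivity(z, 40.0)        # relative sensitivity per position
```

A realistic characterization also has to account for attenuation in the water cylinder, which is why the simulated air and water profiles in the paper differ by tens of percent.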

  18. Registration of angiographic image on real-time fluoroscopic image for image-guided percutaneous coronary intervention.

    PubMed

    Kim, Dongkue; Park, Sangsoo; Jeong, Myung Ho; Ryu, Jeha

    2018-02-01

    In percutaneous coronary intervention (PCI), cardiologists must study two different X-ray image sources: a fluoroscopic image and an angiogram. Manipulating a guidewire while alternately monitoring the two separate images on separate screens requires a deep understanding of the anatomy of coronary vessels and substantial training. We propose 2D/2D spatiotemporal image registration of the two images into a single image in order to provide cardiologists with enhanced visual guidance in PCI. The proposed 2D/2D spatiotemporal registration method uses the cross-correlation of the two ECG series in each image to temporally synchronize the two separate images and register an angiographic image onto the fluoroscopic image. A guidewire centerline is then extracted from the fluoroscopic image in real time, and the alignment of the centerline with the vessel outlines of the chosen angiographic image is optimized using the iterative closest point algorithm for spatial registration. A proof-of-concept evaluation with a phantom coronary vessel model and engineering students showed an error reduction rate greater than 74% for wrong insertions into nontarget branches compared to the non-registration method, and a more than 47% reduction in task completion time for guidewire manipulation in very difficult tasks. Evaluation with a small number of experienced doctors shows a potentially significant reduction in both task completion time and error rate for difficult tasks. The total registration time with real procedure X-ray (angiographic and fluoroscopic) images takes ~60 ms, which is within the fluoroscopic image acquisition rate of 15 Hz. By providing cardiologists with better visual guidance in PCI, the proposed spatiotemporal image registration method is shown to be useful in advancing the guidewire into the coronary vessel branches, especially those difficult to insert into.
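
The temporal-synchronization step pairs each live fluoroscopic frame with the angiographic frame at the matching cardiac phase by cross-correlating the two ECG series. A minimal sketch of that lag estimation with NumPy (the signals are synthetic; the paper's actual ECG processing is not detailed here):

```python
import numpy as np

def ecg_lag(ecg_fluoro, ecg_angio):
    # Estimate the temporal offset (in samples) between two ECG series
    # from the argmax of their full cross-correlation; this lag aligns
    # the recorded angiographic sequence with the live fluoroscopic
    # stream.
    a = ecg_fluoro - np.mean(ecg_fluoro)
    b = ecg_angio - np.mean(ecg_angio)
    corr = np.correlate(a, b, mode="full")
    return np.argmax(corr) - (len(b) - 1)
```

With the lag known, the angiographic frame whose cardiac phase matches the current fluoroscopic frame can be selected before the spatial (ICP-based) registration runs.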

  19. Pole Photogrammetry with AN Action Camera for Fast and Accurate Surface Mapping

    NASA Astrophysics Data System (ADS)

    Gonçalves, J. A.; Moutinho, O. F.; Rodrigues, A. C.

    2016-06-01

    High-resolution and high-accuracy terrain mapping can provide height change detection for studies of erosion, subsidence or land slip. A UAV flying at a low altitude above the ground, with a compact camera, acquires images with resolution appropriate for these change detections. However, there may be situations where different approaches are needed, either because higher resolution is required or because the operation of a drone is not possible. Pole photogrammetry, where a camera is mounted on a pole, pointing to the ground, is an alternative. This paper describes a very simple system of this kind, created for topographic change detection, based on an action camera. These cameras provide high quality and very flexible image capture. Although radial distortion is normally high, it can be treated in an auto-calibration process. The system is composed of a light aluminium pole, 4 meters long, with a 12-megapixel GoPro camera. Average ground sampling distance at the image centre is 2.3 mm. The user moves along a path, taking successive photos with a time lapse of 0.5 or 1 second, adjusting the speed in order to have an appropriate overlap, with enough redundancy for 3D coordinate extraction. Marked ground control points are surveyed with GNSS for precise georeferencing of the DSM and orthoimage that are created by structure-from-motion processing software. An average vertical accuracy of 1 cm could be achieved, which is enough for many applications, for example soil erosion monitoring. The GNSS survey in RTK mode with permanent stations is now very fast (5 seconds per point), which results, together with the image collection, in very fast field work. If improved accuracy is needed, since image resolution is 1/4 cm, it can be achieved using a total station for the control point survey, although the field work time increases.
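
The reported 2.3 mm ground sampling distance at the image centre follows from the usual pinhole relation GSD = pixel pitch × camera height / focal length. A quick sketch (the focal length and pixel pitch below are assumed GoPro-like values, not the paper's calibration):

```python
def ground_sampling_distance(height_m, focal_mm, pixel_um):
    # GSD in mm per pixel = pixel pitch * camera height / focal length,
    # with units converted so everything is in millimetres.
    return (pixel_um * 1e-3) * (height_m * 1e3) / focal_mm

# 4 m pole, ~3 mm focal length, ~1.55 um pixels (assumed values)
gsd = ground_sampling_distance(4.0, 3.0, 1.55)
```

With these assumed parameters the formula gives roughly 2.1 mm/pixel, in the same range as the 2.3 mm the paper reports at the image centre.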

  20. Near-ultraviolet imaging of Jupiter's satellite Io with the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Paresce, F.; Sartoretti, P.; Albrecht, R.; Barbieri, C.; Blades, J. C.; Boksenberg, A.; Crane, P.; Deharveng, J. M.; Disney, M. J.; Jakobsen, P.

    1992-01-01

    The surface of Jupiter's Galilean satellite Io has been resolved for the first time in the near ultraviolet at 2850 A by the Faint Object Camera (FOC) on the Hubble Space Telescope (HST). The restored images reveal significant surface structure down to the resolution limit of the optical system corresponding to approximately 250 km at the sub-earth point.

  1. Interactive-cut: Real-time feedback segmentation for translational research.

    PubMed

    Egger, Jan; Lüddemann, Tobias; Schwarzenberg, Robert; Freisleben, Bernd; Nimsky, Christopher

    2014-06-01

    In this contribution, a scale-invariant image segmentation algorithm is introduced that "wraps" the algorithm's parameters for the user through its interactive behavior, avoiding the definition of "arbitrary" numbers that the user cannot really understand. To that end, we designed a specific graph-based segmentation method that requires only a single seed point inside the target structure from the user and is thus particularly suitable for immediate processing and interactive, real-time adjustments by the user. In addition, the color or gray value information that is needed for the approach can be automatically extracted around the user-defined seed point. Furthermore, the graph is constructed in such a way that a polynomial-time mincut computation can provide the segmentation result within a second on an up-to-date computer. The algorithm presented here has been evaluated with fixed seed points on 2D and 3D medical image data, such as brain tumors, cerebral aneurysms and vertebral bodies. Direct comparison of the obtained automatic segmentation results with costlier, manual slice-by-slice segmentations performed by trained physicians suggests a strong medical relevance of this interactive approach. Copyright © 2014 Elsevier Ltd. All rights reserved.
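
As a toy stand-in for the single-seed-point interaction (the paper's actual method is a graph construction with a polynomial-time min-cut, not region growing), the gray-value information around a user-defined seed can be exploited by flood-filling all connected pixels whose intensity lies within a tolerance of the seed's value:

```python
import numpy as np
from collections import deque

def grow_from_seed(image, seed, tol):
    # Breadth-first flood fill from the seed point: a pixel joins the
    # segmentation if it is 4-connected to the region and its gray
    # value is within `tol` of the value at the seed.
    h, w = image.shape
    ref = float(image[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(float(image[ny, nx]) - ref) <= tol):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask
```

The appeal of the interaction model is visible even in this sketch: the only inputs are the seed location and an intensity statistic derived from it, with no abstract tuning parameters exposed to the user.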

  2. Evaluation of a high framerate multi-exposure laser speckle contrast imaging setup

    NASA Astrophysics Data System (ADS)

    Hultman, Martin; Fredriksson, Ingemar; Strömberg, Tomas; Larsson, Marcus

    2018-02-01

    We present a first evaluation of a new multi-exposure laser speckle contrast imaging (MELSCI) system for assessing spatial variations in microcirculatory perfusion. The MELSCI system is based on a 1000-frames-per-second 1-megapixel camera connected to a field-programmable gate array (FPGA) capable of producing MELSCI data in real time. The imaging system is evaluated against a single-point laser Doppler flowmetry (LDF) system during occlusion-release provocations of the arm in five subjects. Perfusion is calculated from MELSCI data using current state-of-the-art inverse models. The analysis displayed a good agreement between measured and modeled data, with an average error below 6%. This strongly indicates that the applied model is capable of accurately describing the MELSCI data and that the acquired data are of high quality. Comparing readings from the occlusion-release provocation showed that the MELSCI perfusion was significantly correlated (R=0.83) with the single-point LDF perfusion, clearly outperforming perfusion estimations based on a single exposure time. We conclude that the MELSCI system provides blood flow images of enhanced quality, taking us one step closer to a system that can accurately monitor dynamic changes in skin perfusion over a large area in real time.
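
For a single exposure, the speckle contrast that MELSCI builds on is the local ratio K = σ/μ of intensity in a sliding window; the multi-exposure system computes this for several exposure times and inverts a flow model to obtain perfusion. A sketch of the per-exposure contrast map (the window size here is an arbitrary choice, not the system's):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(frame, win=7):
    # Local speckle contrast K = sigma / mean over a sliding window.
    # Faster-moving scatterers blur the speckle pattern during the
    # exposure, so lower K corresponds to higher perfusion.
    frame = frame.astype(np.float64)
    mean = uniform_filter(frame, win)
    sq_mean = uniform_filter(frame * frame, win)
    var = np.clip(sq_mean - mean * mean, 0.0, None)
    return np.sqrt(var) / np.maximum(mean, 1e-12)
```

A fully blurred (constant) region gives K near 0, while a static, fully developed speckle pattern approaches K = 1, which is why the contrast-versus-exposure-time curve carries the flow information.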

  3. An MR-based Model for Cardio-Respiratory Motion Compensation of Overlays in X-Ray Fluoroscopy

    PubMed Central

    Fischer, Peter; Faranesh, Anthony; Pohl, Thomas; Maier, Andreas; Rogers, Toby; Ratnayaka, Kanishka; Lederman, Robert; Hornegger, Joachim

    2017-01-01

    In X-ray fluoroscopy, static overlays are used to visualize soft tissue. We propose a system for cardiac and respiratory motion compensation of these overlays. It consists of a 3-D motion model created from real-time MR imaging. Multiple sagittal slices are acquired and retrospectively stacked to consistent 3-D volumes. Slice stacking considers cardiac information derived from the ECG and respiratory information extracted from the images. Additionally, temporal smoothness of the stacking is enhanced. Motion is estimated from the MR volumes using deformable 3-D/3-D registration. The motion model itself is a linear direct correspondence model using the same surrogate signals as slice stacking. In X-ray fluoroscopy, only the surrogate signals need to be extracted to apply the motion model and animate the overlay in real time. For evaluation, points are manually annotated in oblique MR slices and in contrast-enhanced X-ray images. The 2-D Euclidean distance of these points is reduced from 3.85 mm to 2.75 mm in MR and from 3.0 mm to 1.8 mm in X-ray compared to the static baseline. Furthermore, the motion-compensated overlays are shown qualitatively as images and videos. PMID:28692969
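
A linear direct correspondence model of the kind described maps the surrogate signals (e.g. cardiac and respiratory phase) to motion parameters through a matrix fitted by least squares on the MR training data; at X-ray time only the surrogates are needed to animate the overlay. A minimal sketch with synthetic data (the real model is trained on deformable 3-D/3-D registration outputs):

```python
import numpy as np

def fit_linear_model(S, D):
    # Fit D ~ [S, 1] @ W by least squares.
    # S: (n_samples, n_surrogates) surrogate signals,
    # D: (n_samples, n_motion_params) motion parameters from training.
    X = np.hstack([S, np.ones((S.shape[0], 1))])   # append bias column
    W, *_ = np.linalg.lstsq(X, D, rcond=None)
    return W

def apply_model(W, s):
    # Predict motion parameters for a new surrogate sample s.
    return np.hstack([s, 1.0]) @ W
```

Because applying the model is a single matrix product, the overlay update easily meets fluoroscopic frame rates once the surrogates are extracted.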

  4. Teleneurosonology: a novel application of transcranial and carotid ultrasound.

    PubMed

    Rubin, Mark N; Barrett, Kevin M; Freeman, W David; Lee Iannotti, Joyce K; Channer, Dwight D; Rabinstein, Alejandro A; Demaerschalk, Bart M

    2015-03-01

    To demonstrate the technical feasibility of interfacing transcranial Doppler (TCD) and carotid "duplex" ultrasonography (CUS) peripherals with telemedicine end points to provide real-time spectral waveform and duplex imaging data for remote review and interpretation. We performed remote TCD and CUS examinations on a healthy, volunteer employee from our institution without known cerebrovascular disease. The telemedicine end point was stationed in our institution's hospital where the neurosonology examinations took place and the control station was in a dedicated telemedicine room in a separate building. The examinations were performed by a postgraduate level neurohospitalist trainee (M.N.R.) and interpreted by an attending vascular neurologist, both with experience in the performance and interpretation of TCD and CUS. Spectral waveform and duplex ultrasound data were successfully transmitted from TCD and CUS instruments through a telemedicine end point to a remote reviewer at a control station. Image quality was preserved in all cases, and technical failures were not encountered. This proof-of-concept study demonstrates the technical feasibility of interfacing TCD and CUS peripherals with a telemedicine end point to provide real-time spectral waveform and duplex imaging data for remote review and interpretation. Medical diagnostic and telemedicine devices should be equipped with interfaces that allow simple transmission of high-quality audio and video information from the medical devices to the telemedicine technology. Further study is encouraged to determine the clinical impact of teleneurosonology. Copyright © 2015 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  5. Simplified Night Sky Display System

    NASA Technical Reports Server (NTRS)

    Castellano, Timothy P.

    2010-01-01

    A document describes a simple night sky display system that is portable, lightweight, and includes, at most, four components in its simplest configuration. The total volume of this system is no more than 10⁶ cm³ in a disassembled state, and weighs no more than 20 kilograms. The four basic components are a computer, a projector, a spherical light-reflecting first surface and mount, and a spherical second surface for display. The computer has temporary or permanent memory that contains at least one signal representing one or more images of a portion of the sky when viewed from an arbitrary position, and at a selected time. The first surface reflector is spherical and receives and reflects the image from the projector onto the second surface, which is shaped like a hemisphere. This system may be used to simulate selected portions of the night sky, preserving the appearance and kinesthetic sense of the celestial sphere surrounding the Earth or any other point in space. These points will then show motions of planets, stars, galaxies, nebulae, and comets that are visible from that position. The images may be motionless, or move with the passage of time. The array of images presented, and vantage points in space, are limited only by the computer software that is available, or can be developed. An optional approach is to have the screen (second surface) self-inflate by means of gas within the enclosed volume, and then self-regulate that gas in order to support itself without any other mechanical support.

  6. Method of orthogonally splitting imaging pose measurement

    NASA Astrophysics Data System (ADS)

    Zhao, Na; Sun, Changku; Wang, Peng; Yang, Qian; Liu, Xintong

    2018-01-01

    In order to meet the needs of aviation and machinery manufacturing for pose measurement with high precision, fast speed and a wide measurement range, and to resolve the contradiction between the measurement range and resolution of a vision sensor, this paper proposes an orthogonally splitting imaging pose measurement method. This paper designs and realizes an orthogonally splitting imaging vision sensor and establishes a pose measurement system. The vision sensor consists of one imaging lens, a beam-splitter prism, cylindrical lenses and dual linear CCDs. The dual linear CCDs respectively acquire one-dimensional image coordinate data of the target point, and the two data sets can restore the two-dimensional image coordinates of the target point. According to the characteristics of the imaging system, this paper establishes a nonlinear distortion model to correct distortion. Based on cross-ratio invariance, a polynomial equation is established and solved by the least-squares fitting method. After completing distortion correction, this paper establishes the measurement mathematical model of the vision sensor and determines the intrinsic parameters for calibration. An array of feature points for calibration is built by placing a planar target in different positions several times. An iterative optimization method is presented to solve the parameters of the model. The experimental results show that the field angle is 52°, the focal length is 27.40 mm, the image resolution is 5185×5117 pixels, the displacement measurement error is less than 0.1 mm, and the rotation angle measurement error is less than 0.15°. The method of orthogonally splitting imaging pose measurement can satisfy pose measurement requirements of high precision, fast speed and wide measurement range.
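
The distortion-correction step (a polynomial solved by least-squares fitting) can be illustrated with a one-dimensional radial model: fit a polynomial that maps measured radii on the CCD to the ideal radii predicted from the target geometry. The calibration pairs below are synthetic, not the sensor's data:

```python
import numpy as np

# Hypothetical calibration pairs: distorted radii measured on the CCD
# versus ideal radii predicted from the planar-target geometry.
r_dist = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
r_ideal = r_dist + 0.01 * r_dist**3        # synthetic cubic distortion

# Least-squares polynomial fit of the correction mapping.
coeffs = np.polyfit(r_dist, r_ideal, 3)
correct = np.poly1d(coeffs)
```

Once `correct` is known, every measured radius is remapped before the pose computation, which is the role the nonlinear distortion model plays in the sensor's calibration pipeline.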

  7. Confocal multispot microscope for fast and deep imaging in semicleared tissues

    NASA Astrophysics Data System (ADS)

    Adam, Marie-Pierre; Müllenbroich, Marie Caroline; Di Giovanna, Antonino Paolo; Alfieri, Domenico; Silvestri, Ludovico; Sacconi, Leonardo; Pavone, Francesco Saverio

    2018-02-01

    Although perfectly transparent specimens are imaged faster with light-sheet microscopy, less transparent samples are often imaged with two-photon microscopy, leveraging its robustness to scattering, at the price of increased acquisition times. Clearing methods capable of rendering strongly scattering samples, such as brain tissue, perfectly transparent are often complex, costly, and time-intensive, even though for many applications a slightly lower level of tissue transparency is sufficient and easily achieved with simpler and faster methods. Here, we present a microscope type that has been geared toward the imaging of semicleared tissue by combining multispot two-photon excitation with rolling-shutter wide-field detection to image deep and fast inside semicleared mouse brain. We present a theoretical and experimental evaluation of the point spread function and contrast as a function of shutter size. Finally, we demonstrate microscope performance in fixed brain slices by imaging dendritic spines up to 400 μm deep.

  8. ARC-1994-AC94-0353-2

    NASA Image and Video Library

    1994-07-01

    Photo artwork composite by JPL. This depiction of comet Shoemaker-Levy 9 impacting Jupiter is shown from several perspectives. IMAGE A is shown from the perspective of Earth-based observers. IMAGE B shows the perspective from the Galileo spacecraft, which can observe the impact point directly. IMAGE C is shown from the Voyager 2 spacecraft, which may observe the event from its unique position at the outer reaches of the solar system. IMAGE D depicts a generic view from Jupiter's south pole. For visual appeal, most of the large cometary fragments are shown close to one another in this image. At the time of Jupiter impact, the fragments will be separated from one another by several times the distances shown. This image was created by D.A. Seal of JPL's Mission Design Section using orbital computations provided by P.W. Chodas and D.K. Yeomans of JPL's Navigation Section.

  9. Prolonged in vivo imaging of Xenopus laevis.

    PubMed

    Hamilton, Paul W; Henry, Jonathan J

    2014-08-01

    While live imaging of embryonic development over long periods of time is a well established method for embryos of the frog Xenopus laevis, once development has progressed to the swimming stages, continuous live imaging becomes more challenging because the tadpoles must be immobilized. Current imaging techniques for these advanced stages generally require bringing the tadpoles in and out of anesthesia for short imaging sessions at selected time points, severely limiting the resolution of the data. Here we demonstrate that creating a constant flow of diluted tricaine methanesulfonate (MS-222) over a tadpole greatly improves their survival under anesthesia. Based on this result, we describe a new method for imaging stage 48 to 65 X. laevis, by circulating the anesthetic using a peristaltic pump. This supports the animal during continuous live imaging sessions for at least 48 hr. The addition of a stable optical window allows for high quality imaging through the anesthetic solution. This automated imaging system provides for the first time a method for continuous observations of developmental and regenerative processes in advanced stages of Xenopus over 2 days. Developmental Dynamics 243:1011-1019, 2014. © 2014 Wiley Periodicals, Inc.

  10. Cortical surface shift estimation using stereovision and optical flow motion tracking via projection image registration

    PubMed Central

    Ji, Songbai; Fan, Xiaoyao; Roberts, David W.; Hartov, Alex; Paulsen, Keith D.

    2014-01-01

    Stereovision is an important intraoperative imaging technique that captures the exposed parenchymal surface noninvasively during open cranial surgery. Estimating cortical surface shift efficiently and accurately is critical to compensate for brain deformation in the operating room (OR). In this study, we present an automatic and robust registration technique based on optical flow (OF) motion tracking to compensate for cortical surface displacement throughout surgery. Stereo images of the cortical surface were acquired at multiple time points after dural opening to reconstruct three-dimensional (3D) texture intensity-encoded cortical surfaces. A local coordinate system was established with its z-axis parallel to the average surface normal direction of the reconstructed cortical surface immediately after dural opening in order to produce two-dimensional (2D) projection images. A dense displacement field between the two projection images was determined directly from OF motion tracking without the need for feature identification or tracking. The starting and end points of the displacement vectors on the two cortical surfaces were then obtained following spatial mapping inversion to produce the full 3D displacement of the exposed cortical surface. We evaluated the technique with images obtained from digital phantoms and 18 surgical cases: 10 involved independent measurements of feature locations acquired with a tracked stylus for accuracy comparisons, and 4 of the remaining 8 involved stereo image acquisitions at three or more time points during surgery to illustrate utility throughout a procedure. Results from the digital phantom images were very accurate (0.05 pixels). In the 10 surgical cases with independently digitized point locations, the average agreement between feature coordinates derived from the cortical surface reconstructions was 1.7–2.1 mm relative to those determined with the tracked stylus probe. The agreement in feature displacement tracking was also comparable to tracked probe data (difference in displacement magnitude was <1 mm on average). The average magnitude of cortical surface displacement was 7.9 ± 5.7 mm (range 0.3–24.4 mm) in all patient cases, with the displacement component along gravity being 5.2 ± 6.0 mm relative to the lateral movement of 2.4 ± 1.6 mm. Thus, our technique appears to be sufficiently accurate and computationally efficient (typically ~15 s) for applications in the OR. PMID:25077845
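
The paper estimates a dense displacement field with optical flow; as a simpler illustration of recovering motion between two projection images without any feature identification, a single global shift can be found by phase correlation (the peak of the inverse FFT of the normalized cross-power spectrum):

```python
import numpy as np

def phase_correlation_shift(a, b):
    # Recover the integer translation (dy, dx) such that
    # np.roll(b, (dy, dx), axis=(0, 1)) best matches `a`, from the
    # peak of the normalized cross-power spectrum.
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.maximum(np.abs(F), 1e-12)          # keep only the phase
    corr = np.abs(np.fft.ifft2(F))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = a.shape
    if dy > h // 2:
        dy -= h                                # wrap to signed shifts
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

Unlike a dense optical-flow field, this only yields one rigid shift per image pair, but it shares the key property exploited in the paper: the motion estimate comes directly from image intensities, with no landmarks to identify or track.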

  11. Algorithm design for a gun simulator based on image processing

    NASA Astrophysics Data System (ADS)

    Liu, Yu; Wei, Ping; Ke, Jun

    2015-08-01

    In this paper, an algorithm is designed for shooting games under strong background light. Six LEDs are uniformly distributed on the edge of a game machine screen. They are located at the four corners and in the middle of the top and the bottom edges. Three LEDs are lit in the odd frames, and the other three are lit in the even frames. A simulator is furnished with one camera, which is used to obtain the image of the LEDs by applying an inter-frame difference between the even and odd frames. In the resulting images, the six LEDs appear as six bright spots. To obtain the LEDs' coordinates rapidly, we propose a method based on the area of the bright spots. After calibrating the camera based on a pinhole model, four equations can be found using the relationship between the image coordinate system and the world coordinate system under perspective transformation. The center point of the image of the LEDs is assumed to be at the virtual shooting point. The perspective transformation matrix is applied to the coordinates of the center point. Then we can obtain the virtual shooting point's coordinates in the world coordinate system. When a game player shoots a target about two meters away, using the method discussed in this paper, the calculated coordinate error is less than 10 mm. We can obtain 65 coordinate results per second, which meets the requirement of a real-time system. This proves the algorithm is reliable and effective.
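
The inter-frame difference and spot-coordinate extraction can be sketched as follows: subtracting consecutive odd/even frames isolates the alternately lit LEDs, and each bright blob's centroid gives an image coordinate. (The paper locates spots via their area; the threshold and connected-component labeling here are illustrative stand-ins.)

```python
import numpy as np
from scipy import ndimage

def led_centroids(odd_frame, even_frame, thresh=50):
    # The inter-frame difference cancels the (static) strong background
    # light and leaves only the LEDs that changed state between frames.
    diff = np.abs(odd_frame.astype(np.int32) - even_frame.astype(np.int32))
    # Label the bright blobs and return the centroid of each one as the
    # LED's image coordinate.
    blobs, n = ndimage.label(diff > thresh)
    return ndimage.center_of_mass(diff, blobs, list(range(1, n + 1)))
```

Robustness to ambient light comes entirely from the differencing step: anything constant across odd and even frames, however bright, cancels out before thresholding.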

  12. Cross Validation on the Equality of Uav-Based and Contour-Based Dems

    NASA Astrophysics Data System (ADS)

    Ma, R.; Xu, Z.; Wu, L.; Liu, S.

    2018-04-01

    Unmanned Aerial Vehicles (UAVs) have been widely used for Digital Elevation Model (DEM) generation in geographic applications. This paper proposes a novel framework for generating DEMs from UAV images. It starts with the generation of point clouds by image matching, where the flight control data are used as a reference for searching for the corresponding images, leading to a significant time saving. Besides, a set of ground control points (GCPs) obtained from field surveying are used to transform the point clouds to the user's coordinate system. Following that, we use a multi-feature-based supervised classification method for discriminating non-ground points from ground ones. In the end, we generate the DEM by constructing triangular irregular networks and rasterization. The experiments are conducted in the east of Jilin province in China, which has suffered from soil erosion for several years. The quality of the UAV-based DEM (UAV-DEM) is compared with that generated from contour interpolation (Contour-DEM). The comparison shows a higher resolution, as well as higher accuracy, of the UAV-DEMs, which contain more geographic information. In addition, the RMSEs of the UAV-DEMs generated from point clouds with and without GCPs are ±0.5 m and ±20 m, respectively.
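
The final TIN-and-rasterization step can be sketched with SciPy: `griddata` with `method="linear"` interpolates over the Delaunay triangulation of the classified ground points, which is exactly a TIN evaluated on a regular grid. The points and elevations below are synthetic:

```python
import numpy as np
from scipy.interpolate import griddata

# Classified ground points: (x, y) positions with elevations z.
# Values are synthetic, for illustration only.
pts = np.array([[0, 0], [0, 10], [10, 0], [10, 10], [5, 5]], dtype=float)
z = np.array([100.0, 102.0, 101.0, 103.0, 101.5])

# Rasterize: linear interpolation over the Delaunay triangulation
# (i.e. a TIN) sampled on a regular grid.
gx, gy = np.meshgrid(np.linspace(0, 10, 11), np.linspace(0, 10, 11))
dem = griddata(pts, z, (gx, gy), method="linear")
```

Grid cells outside the convex hull of the ground points come back as NaN, which is one reason production pipelines buffer the survey area beyond the DEM extent.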

  13. Real-time changes in corticospinal excitability related to motor imagery of a force control task.

    PubMed

    Tatemoto, Tsuyoshi; Tsuchiya, Junko; Numata, Atsuki; Osawa, Ryuji; Yamaguchi, Tomofumi; Tanabe, Shigeo; Kondo, Kunitsugu; Otaka, Yohei; Sugawara, Kenichi

    2017-09-29

    To investigate real-time excitability changes in corticospinal pathways related to motor imagery in a changing force control task, using transcranial magnetic stimulation (TMS). Ten healthy volunteers learnt to control the contractile force of isometric right wrist dorsiflexion in order to track an on-screen sine wave form. Participants performed the trained task 40 times with actual muscle contraction in order to construct the motor image. They were then instructed to execute the task without actual muscle contraction, but by imagining contraction of the right wrist in dorsiflexion. Motor evoked potentials (MEPs), induced by TMS in the right extensor carpi radialis (ECR) and flexor carpi radialis (FCR) muscles, were measured during motor imagery. MEPs were induced at five time points: prior to imagery, during the gradual generation of the imagined wrist dorsiflexion (Increasing phase), at the peak value of the sine wave, during the gradual reduction (Decreasing phase), and after completion of the task. The MEP ratio, defined as the ratio of MEPs during imagery to those at rest, was compared between pre- and post-training at each time point. In the ECR muscle, the MEP ratio significantly increased during the Increasing phase and at the peak force of dorsiflexion imagery after training. Moreover, the MEP ratio was significantly greater in the Increasing phase than in the Decreasing phase. In the FCR, there were no significant consistent changes. Corticospinal excitability during motor imagery in an isometric contraction task was modulated in relation to the phase of force control after image construction. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Time-elapsed screw insertion with microCT imaging.

    PubMed

    Ryan, M K; Mohtar, A A; Cleek, T M; Reynolds, K J

    2016-01-25

    Time-elapsed analysis of bone is an innovative technique that uses sequential image data to analyze bone mechanics under a given loading regime. This paper presents the development of a novel device capable of performing step-wise screw insertion into excised bone specimens, within the microCT environment, whilst simultaneously recording insertion torque, compression under the screw head and rotation angle. The system is computer controlled and screw insertion is performed in incremental steps of insertion torque. A series of screw insertion tests to failure were performed (n=21) to establish a relationship between the torque at head contact and stripping torque (R²=0.89). The test device was then used to perform step-wise screw insertion, stopping at intervals of 20%, 40%, 60% and 80% between screw head contact and screw stripping. Image data-sets were acquired at each of these time-points as well as at head contact and post-failure. Examination of the image data revealed the trabecular deformation as a result of increased insertion torque was restricted to within 1 mm of the outer diameter of the screw thread. Minimal deformation occurred prior to the step between the 80% time-point and post-failure. The device presented has allowed, for the first time, visualization of the micro-mechanical response in the peri-implant bone with increased tightening torque. Further testing on more samples is expected to increase our understanding of the effects of increased tightening torque at the micro-structural level, and the failure mechanisms of trabeculae. Copyright © 2015 Elsevier Ltd. All rights reserved.
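
The step-wise protocol needs the stripping torque before the screw is actually stripped, hence the regression from head-contact torque established over the 21 tests to failure. A sketch of fitting that relationship and deriving the 20-80% target torques (all torque values below are invented for illustration; the paper only reports the R² of the relationship):

```python
import numpy as np

# Hypothetical head-contact vs. stripping torque pairs from tests to
# failure (invented values; the paper reports R^2 = 0.89 over n = 21).
t_head = np.array([120.0, 150.0, 180.0, 210.0, 240.0])
t_strip = np.array([310.0, 380.0, 450.0, 520.0, 590.0])

# Linear least-squares fit: predicted stripping torque from the torque
# measured at head contact.
slope, intercept = np.polyfit(t_head, t_strip, 1)

def target_torques(head, strip, fractions=(0.2, 0.4, 0.6, 0.8)):
    # Step-wise insertion stops at 20%, 40%, 60% and 80% of the
    # interval between head-contact and (predicted) stripping torque.
    return [head + f * (strip - head) for f in fractions]
```

In use, `slope * head + intercept` supplies the predicted stripping torque for a fresh specimen, and `target_torques` gives the torque at which to pause for each microCT acquisition.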

  15. Automated designation of tie-points for image-to-image coregistration.

    Treesearch

    R.E. Kennedy; W.B. Cohen

    2003-01-01

    Image-to-image registration requires identification of common points in both images (image tie-points: ITPs). Here we describe software implementing an automated, area-based technique for identifying ITPs. The ITP software was designed to follow two strategies: (1) capitalize on human knowledge and pattern recognition strengths, and (2) favour robustness in many...

  16. Bending the Rules: Widefield Microscopy and the Abbe Limit of Resolution

    PubMed Central

    Verdaasdonk, Jolien S.; Stephens, Andrew D.; Haase, Julian; Bloom, Kerry

    2014-01-01

    One of the most fundamental concepts of microscopy is that of resolution: the ability to clearly distinguish two objects as separate. Recent advances such as structured illumination microscopy (SIM) and point localization techniques including photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM) strive to overcome the inherent limits of resolution of the modern light microscope. These techniques, however, are not always feasible or optimal for live cell imaging. Thus, in this review, we explore three techniques for extracting high resolution data from images acquired on a widefield microscope: deconvolution, model convolution, and Gaussian fitting. Deconvolution is a powerful tool for restoring a blurred image using knowledge of the point spread function (PSF) describing the blurring of light by the microscope, although care must be taken to ensure accuracy of subsequent quantitative analysis. The process of model convolution also requires knowledge of the PSF to blur a simulated image, which can then be compared to the experimentally acquired data to reach conclusions regarding its geometry and fluorophore distribution. Gaussian fitting is the basis for point localization microscopy, and can also be applied to tracking spot motion over time or measuring spot shape and size. Altogether, these three methods serve as powerful tools for high-resolution imaging using widefield microscopy. PMID:23893718
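    As an illustration of the Gaussian-fitting approach described above, here is a minimal sketch (not code from the review) that localizes a simulated fluorescent spot with sub-pixel precision by least-squares fitting of a 2D Gaussian:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative sketch: fit a 2D Gaussian to a simulated spot image to recover
# its centre with sub-pixel precision, the basis of point-localization
# microscopy. All parameters below are synthetic.

def gauss2d(coords, amp, x0, y0, sigma, offset):
    x, y = coords
    return (amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
            + offset).ravel()

# Synthetic 15x15 image of a blurred point emitter (noise-free for clarity).
y, x = np.mgrid[0:15, 0:15]
true = dict(amp=100.0, x0=7.3, y0=6.8, sigma=1.5, offset=10.0)
img = gauss2d((x, y), **true).reshape(15, 15)

popt, _ = curve_fit(gauss2d, (x, y), img.ravel(),
                    p0=(80, 7, 7, 2, 5))  # rough initial guess
amp, x0, y0, sigma, offset = popt
# (x0, y0) is the sub-pixel centre estimate; it should recover (7.3, 6.8).
```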

  17. Morphological Feature Extraction for Automatic Registration of Multispectral Images

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2007-01-01

    The task of image registration can be divided into two major components, i.e., the extraction of control points or features from images, and the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual extraction of control features can be subjective and extremely time consuming, and often results in few usable points. On the other hand, automated feature extraction allows using invariant target features such as edges, corners, and line intersections as relevant landmarks for registration purposes. In this paper, we present an extension of a recently developed morphological approach for automatic extraction of landmark chips and corresponding windows in a fully unsupervised manner for the registration of multispectral images. Once a set of chip-window pairs is obtained, a (hierarchical) robust feature matching procedure, based on a multiresolution overcomplete wavelet decomposition scheme, is used for registration purposes. The proposed method is validated on a pair of remotely sensed scenes acquired by the Advanced Land Imager (ALI) multispectral instrument and the Hyperion hyperspectral instrument aboard NASA's Earth Observing-1 satellite.

  18. Multi-acoustic lens design methodology for a low cost C-scan photoacoustic imaging camera

    NASA Astrophysics Data System (ADS)

    Chinni, Bhargava; Han, Zichao; Brown, Nicholas; Vallejo, Pedro; Jacobs, Tess; Knox, Wayne; Dogra, Vikram; Rao, Navalgund

    2016-03-01

    We have designed and implemented a novel acoustic-lens-based focusing technology in a prototype photoacoustic imaging camera. All photoacoustically generated waves from laser-exposed absorbers within a small volume are focused simultaneously by the lens onto an image plane. We use a multi-element ultrasound transducer array to capture the focused photoacoustic signals. The acoustic lens eliminates the need for expensive data acquisition hardware, is faster than electronic focusing, and enables real-time image reconstruction. Using this photoacoustic imaging camera, we have imaged more than 150 ex vivo human prostate, kidney and thyroid specimens, each several centimeters in size, at millimeter resolution for cancer detection. In this paper, we share our lens design strategy and how we evaluate the resulting quality metrics (on- and off-axis point spread function, depth of field and modulation transfer function) through simulation. An advanced toolbox in MATLAB was adapted and used for simulating a two-dimensional gridded model that incorporates realistic photoacoustic signal generation and acoustic wave propagation through the lens, with medium properties defined on each grid point. Two-dimensional point spread functions have been generated and compared with experiments to demonstrate the utility of our design strategy. Finally, we present results from work in progress on the use of a two-lens system aimed at further improving some of the quality metrics of our system.

  19. MO-FG-202-01: A Fast Yet Sensitive EPID-Based Real-Time Treatment Verification System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmad, M; Nourzadeh, H; Neal, B

    2016-06-15

    Purpose: To create a real-time EPID-based treatment verification system which robustly detects treatment delivery and patient attenuation variations. Methods: Treatment plan DICOM files sent to the record-and-verify system are captured and utilized to predict EPID images for each planned control point using a modified GPU-based digitally reconstructed radiograph algorithm which accounts for the patient attenuation, source energy fluence, source size effects, and MLC attenuation. The DICOM and predicted images are utilized by our C++ treatment verification software, which compares the predictions with 1024×768 resolution EPID frames acquired at ~8.5 Hz from a Varian TrueBeam™ system. To maximize detection sensitivity, image comparisons determine (1) if radiation exists outside of the desired treatment field; (2) if radiation is lacking inside the treatment field; (3) if translations, rotations, and magnifications of the image are within tolerance. Acquisition was tested with known test fields and prior patient fields. Error detection was tested in real time and using images acquired during treatment with another system. Results: The computational time of the prediction algorithms, for a patient plan with 350 control points and a 60×60×42 cm³ CT volume, is 2–3 minutes on CPU and <27 seconds on GPU for 1024×768 images. The verification software requires a maximum of ~9 ms and ~19 ms for 512×384 and 1024×768 resolution images, respectively, to perform image analysis and dosimetric validations. Typical variations in geometric parameters between reference and measured images are 0.32° for gantry rotation, 1.006 for scaling factor, and 0.67 mm for translation. For excess out-of-field/missing in-field fluence, with masks extending 1 mm (at isocenter) from the detected aperture edge, the average total in-field area missing EPID fluence was 1.5 mm² and the out-of-field excess EPID fluence was 8 mm², both below error tolerances.
Conclusion: Real-time verification software, with an EPID image prediction algorithm, was developed. The system is capable of performing verifications between frame acquisitions and identifying the source(s) of any out-of-tolerance variations. This work was supported in part by Varian Medical Systems.
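    The third comparison, checking that geometric alignment parameters stay within tolerance, reduces to a simple per-parameter test. A hypothetical sketch; the tolerance values below are illustrative, as the abstract does not state the actual tolerances:

```python
# Hypothetical sketch of the geometric-tolerance check described above:
# comparing measured-vs-reference alignment parameters of an EPID frame
# against per-parameter tolerances (tolerance values are illustrative).

def within_tolerance(rotation_deg, scale, shift_mm,
                     tol_rot=1.0, tol_scale=0.02, tol_shift=2.0):
    """True if rotation, magnification and translation of the measured
    EPID frame relative to the predicted image are all within tolerance."""
    return (abs(rotation_deg) <= tol_rot
            and abs(scale - 1.0) <= tol_scale
            and abs(shift_mm) <= tol_shift)

# Typical variations reported in the abstract pass such a check:
ok = within_tolerance(rotation_deg=0.32, scale=1.006, shift_mm=0.67)
```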

  20. Concrete/mortar water phase transition studied by single-point MRI methods.

    PubMed

    Prado, P J; Balcom, B J; Beyea, S D; Armstrong, R L; Bremner, T W; Grattan-Bellew, P E

    1998-01-01

    A series of magnetic resonance imaging (MRI) water density and T2* profiles in hardened concrete and mortar samples has been obtained under freezing conditions (-50 °C < T < 11 °C). The single-point ramped imaging with T1 enhancement (SPRITE) sequence is optimal for this study given the characteristically short relaxation times of water in this porous medium (T2* < 200 microseconds and T1 < 3.6 ms). The frozen and evaporable water distribution was quantified through a position-based study of the profile magnitude. Submillimetric resolution of proton-density and T2*-relaxation parameters as a function of temperature has been achieved.

  1. Improving arrival time identification in transient elastography

    NASA Astrophysics Data System (ADS)

    Klein, Jens; McLaughlin, Joyce; Renzi, Daniel

    2012-04-01

    In this paper, we improve the first step in the arrival time algorithm used for shear wave speed recovery in transient elastography. In transient elastography, a shear wave is initiated at the boundary and the interior displacement of the propagating shear wave is imaged with an ultrasound ultra-fast imaging system. The first step in the arrival time algorithm finds the arrival times of the shear wave by cross correlating displacement time traces (the time history of the displacement at a single point) with a reference time trace located near the shear wave source. The second step finds the shear wave speed from the arrival times. In performing the first step, we observe that the wave pulse decorrelates as it travels through the medium, which leads to inaccurate estimates of the arrival times and ultimately to blurring and artifacts in the shear wave speed image. In particular, wave ‘spreading’ accounts for much of this decorrelation. Here we remove most of the decorrelation by allowing the reference wave pulse to spread during the cross correlation. This dramatically improves the images obtained from arrival time identification. We illustrate the improvement of this method on phantom and in vivo data obtained from the laboratory of Mathias Fink at ESPCI, Paris.
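    The improved first step can be sketched as follows, assuming a Gaussian pulse model and uniform sampling. This is an illustration of the idea, not the authors' implementation: the reference pulse is allowed to spread (dilate) during cross-correlation to track the decorrelation of the travelling wave:

```python
import numpy as np

# Illustrative sketch (assumptions: Gaussian wave pulse, uniform sampling):
# estimate an arrival time by cross-correlating a displacement time trace
# with a reference pulse, letting the reference spread by several candidate
# factors and keeping the best normalized correlation.

def best_arrival(trace, t, ref_width, spread_factors=(1.0, 1.5, 2.0, 3.0)):
    """Return (arrival_time, spread) maximizing the normalized correlation
    with a Gaussian reference of width ref_width * spread."""
    best = (None, None, -np.inf)
    dt = t[1] - t[0]
    for s in spread_factors:
        w = ref_width * s
        half = int(4 * w / dt)
        tr = np.arange(-half, half + 1) * dt
        ref = np.exp(-tr ** 2 / (2 * w ** 2))
        c = np.correlate(trace, ref, mode="same")
        norm = np.sqrt(np.sum(ref ** 2) * np.sum(trace ** 2)) + 1e-12
        k = int(np.argmax(c))
        if c[k] / norm > best[2]:
            best = (t[k], s, c[k] / norm)
    return best[0], best[1]

# A pulse that has spread to twice the reference width while travelling:
t = np.linspace(0, 10, 2001)
trace = np.exp(-(t - 6.0) ** 2 / (2 * (2 * 0.2) ** 2))
arrival, spread = best_arrival(trace, t, ref_width=0.2)
# The spread-aware search should locate the arrival near t = 6.0.
```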

  2. Identification of the ideal clutter metric to predict time dependence of human visual search

    NASA Astrophysics Data System (ADS)

    Cartier, Joan F.; Hsu, David H.

    1995-05-01

    The Army Night Vision and Electronic Sensors Directorate (NVESD) has recently performed a human perception experiment in which eye-tracker measurements were made on trained military observers searching for targets in infrared images. These data offered an important opportunity to evaluate a new technique for search modeling. Following the approach taken by Jeff Nicoll, this model treats search as a random walk in which the observers are in one of two states until they quit: they are either examining a point of interest, or wandering around looking for one. When wandering they skip rapidly from point to point. When examining they move more slowly, reflecting the fact that target discrimination requires additional thought processes. In this paper we simulate the random walk, using a clutter metric to assign relative attractiveness to points of interest within the image which are competing for the observer's attention. The NVESD data indicate that a number of standard clutter metrics are good estimators of the apportionment of the observer's time between wandering and examining. Conversely, the apportionment of observer time spent wandering and examining could be used to reverse engineer the ideal clutter metric which would most perfectly describe the behavior of the group of observers. It may be possible to use this technique to design the optimal clutter metric to predict performance of visual search.
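    A minimal simulation of this two-state model might look as follows. The dwell times, state probabilities and attractiveness scores are all illustrative, not values from the NVESD experiment:

```python
import random

# Hypothetical sketch of the two-state search model described above: an
# observer alternates between 'wandering' (fast skips between points of
# interest) and 'examining' (slow scrutiny), with points of interest drawn
# in proportion to a clutter-metric attractiveness score.

def simulate_search(attractiveness, n_fixations=1000,
                    wander_dwell=0.1, examine_dwell=1.0,
                    examine_prob=0.3, seed=0):
    """Return the fractions of total time spent wandering and examining."""
    rng = random.Random(seed)
    total = sum(attractiveness)
    weights = [a / total for a in attractiveness]
    t_wander = t_examine = 0.0
    for _ in range(n_fixations):
        # Pick a point of interest with probability proportional to its
        # clutter-metric attractiveness.
        poi = rng.choices(range(len(weights)), weights=weights)[0]
        # Decide whether this fixation turns into an examination.
        if rng.random() < examine_prob:
            t_examine += examine_dwell
        else:
            t_wander += wander_dwell
    t = t_wander + t_examine
    return t_wander / t, t_examine / t

frac_wander, frac_examine = simulate_search([3.0, 1.0, 0.5, 0.5])
```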

  3. A customizable system for real-time image processing using the Blackfin DSProcessor and the MicroC/OS-II real-time kernel

    NASA Astrophysics Data System (ADS)

    Coffey, Stephen; Connell, Joseph

    2005-06-01

    This paper presents a development platform for real-time image processing based on the ADSP-BF533 Blackfin processor and the MicroC/OS-II real-time operating system (RTOS). MicroC/OS-II is a completely portable, ROMable, pre-emptive, real-time kernel. The Blackfin Digital Signal Processors (DSPs), incorporating the Analog Devices/Intel Micro Signal Architecture (MSA), are a broad family of 16-bit fixed-point products with a dual Multiply Accumulate (MAC) core. In addition, they have a rich instruction set with variable instruction length and both DSP and MCU functionality thus making them ideal for media based applications. Using the MicroC/OS-II for task scheduling and management, the proposed system can capture and process raw RGB data from any standard 8-bit greyscale image sensor in soft real-time and then display the processed result using a simple PC graphical user interface (GUI). Additionally, the GUI allows configuration of the image capture rate and the system and core DSP clock rates thereby allowing connectivity to a selection of image sensors and memory devices. The GUI also allows selection from a set of image processing algorithms based in the embedded operating system.

  4. Real-time non-rigid target tracking for ultrasound-guided clinical interventions

    NASA Astrophysics Data System (ADS)

    Zachiu, C.; Ries, M.; Ramaekers, P.; Guey, J.-L.; Moonen, C. T. W.; de Senneville, B. Denis

    2017-10-01

    Biological motion is a problem for non- or mini-invasive interventions when conducted in mobile/deformable organs due to the targeted pathology moving/deforming with the organ. This may lead to high miss rates and/or incomplete treatment of the pathology. Therefore, real-time tracking of the target anatomy during the intervention would be beneficial for such applications. Since the aforementioned interventions are often conducted under B-mode ultrasound (US) guidance, target tracking can be achieved via image registration, by comparing the acquired US images to a separate image established as positional reference. However, such US images are intrinsically altered by speckle noise, introducing incoherent gray-level intensity variations. This may prove problematic for existing intensity-based registration methods. In the current study we address US-based target tracking by employing the recently proposed EVolution registration algorithm. The method is, by construction, robust to transient gray-level intensities. Instead of directly matching image intensities, EVolution aligns similar contrast patterns in the images. Moreover, the displacement is computed by evaluating a matching criterion for image sub-regions rather than on a point-by-point basis, which typically provides more robust motion estimates. However, unlike similar previously published approaches, which assume rigid displacements in the image sub-regions, the EVolution algorithm integrates the matching criterion in a global functional, allowing the estimation of an elastic dense deformation. The approach was validated for soft tissue tracking under free-breathing conditions on the abdomen of seven healthy volunteers. Contact echography was performed on all volunteers, while three of the volunteers also underwent standoff echography. Each of the two modalities is predominantly specific to a particular type of non- or mini-invasive clinical intervention. 
The method demonstrated an average accuracy of ~1.5 mm and submillimeter precision. This, together with a computational performance of 20 images per second, makes the proposed method an attractive solution for real-time target tracking during US-guided clinical interventions.
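    The sub-region matching idea can be illustrated with a rigid block-matching baseline. Note that this is not the EVolution algorithm itself, which embeds the matching criterion in a global variational functional to recover an elastic dense deformation; the sketch below only shows the simpler per-block displacement estimate that the abstract contrasts with:

```python
import numpy as np

# Simplified sketch of sub-region (block) matching: estimate a block's
# displacement by maximizing a normalized correlation criterion over integer
# shifts. This is NOT the EVolution algorithm, which integrates the criterion
# in a global functional for an elastic deformation estimate.

def block_shift(ref, cur, cy, cx, half=8, search=4):
    """Integer-pixel displacement (dy, dx) of the block centred at (cy, cx)."""
    blk = ref[cy - half:cy + half, cx - half:cx + half].astype(float)
    blk = blk - blk.mean()
    best, best_dydx = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = cur[cy - half + dy:cy + half + dy,
                      cx - half + dx:cx + half + dx].astype(float)
            win = win - win.mean()
            score = (blk * win).sum() / (np.linalg.norm(blk)
                                         * np.linalg.norm(win) + 1e-9)
            if score > best:
                best, best_dydx = score, (dy, dx)
    return best_dydx

# Synthetic test: a random texture shifted by (+2, -1) pixels.
rng = np.random.default_rng(1)
ref = rng.random((64, 64))
cur = np.roll(np.roll(ref, 2, axis=0), -1, axis=1)
```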

  5. Real-time non-rigid target tracking for ultrasound-guided clinical interventions.

    PubMed

    Zachiu, C; Ries, M; Ramaekers, P; Guey, J-L; Moonen, C T W; de Senneville, B Denis

    2017-10-04

    Biological motion is a problem for non- or mini-invasive interventions when conducted in mobile/deformable organs due to the targeted pathology moving/deforming with the organ. This may lead to high miss rates and/or incomplete treatment of the pathology. Therefore, real-time tracking of the target anatomy during the intervention would be beneficial for such applications. Since the aforementioned interventions are often conducted under B-mode ultrasound (US) guidance, target tracking can be achieved via image registration, by comparing the acquired US images to a separate image established as positional reference. However, such US images are intrinsically altered by speckle noise, introducing incoherent gray-level intensity variations. This may prove problematic for existing intensity-based registration methods. In the current study we address US-based target tracking by employing the recently proposed EVolution registration algorithm. The method is, by construction, robust to transient gray-level intensities. Instead of directly matching image intensities, EVolution aligns similar contrast patterns in the images. Moreover, the displacement is computed by evaluating a matching criterion for image sub-regions rather than on a point-by-point basis, which typically provides more robust motion estimates. However, unlike similar previously published approaches, which assume rigid displacements in the image sub-regions, the EVolution algorithm integrates the matching criterion in a global functional, allowing the estimation of an elastic dense deformation. The approach was validated for soft tissue tracking under free-breathing conditions on the abdomen of seven healthy volunteers. Contact echography was performed on all volunteers, while three of the volunteers also underwent standoff echography. Each of the two modalities is predominantly specific to a particular type of non- or mini-invasive clinical intervention. 
The method demonstrated an average accuracy of ~1.5 mm and submillimeter precision. This, together with a computational performance of 20 images per second, makes the proposed method an attractive solution for real-time target tracking during US-guided clinical interventions.

  6. Apparent diffusion coefficient histogram analysis can evaluate radiation-induced parotid damage and predict late xerostomia degree in nasopharyngeal carcinoma

    PubMed Central

    Zhou, Nan; Guo, Tingting; Zheng, Huanhuan; Pan, Xia; Chu, Chen; Dou, Xin; Li, Ming; Liu, Song; Zhu, Lijing; Liu, Baorui; Chen, Weibo; He, Jian; Yan, Jing; Zhou, Zhengyang; Yang, Xiaofeng

    2017-01-01

    We investigated apparent diffusion coefficient (ADC) histogram analysis to evaluate radiation-induced parotid damage and predict xerostomia degrees in nasopharyngeal carcinoma (NPC) patients receiving radiotherapy. The imaging of bilateral parotid glands in NPC patients was conducted 2 weeks before radiotherapy (time point 1), one month after radiotherapy (time point 2), and four months after radiotherapy (time point 3). From time point 1 to 2, parotid volume, skewness, and kurtosis decreased (P < 0.001, = 0.001, and < 0.001, respectively), but all other ADC histogram parameters increased (all P < 0.001, except P = 0.006 for standard deviation [SD]). From time point 2 to 3, parotid volume continued to decrease (P = 0.022), and SD, 75th and 90th percentiles continued to increase (P = 0.024, 0.010, and 0.006, respectively). Early change rates of parotid ADCmean, ADCmin, kurtosis, and 25th, 50th, 75th, 90th percentiles (from time point 1 to 2) correlated with late parotid atrophy rate (from time point 1 to 3) (all P < 0.05). Multiple linear regression analysis revealed correlations among parotid volume, time point, and ADC histogram parameters. Early mean change rates for bilateral parotid SD and ADCmax could predict late xerostomia degrees at seven months after radiotherapy (three months after time point 3) with AUC of 0.781 and 0.818 (P = 0.014, 0.005, respectively). ADC histogram parameters were reproducible (intraclass correlation coefficient, 0.830 - 0.999). ADC histogram analysis could be used to evaluate radiation-induced parotid damage noninvasively, and predict late xerostomia degrees of NPC patients treated with radiotherapy. PMID:29050274
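    The histogram parameters analysed above can be computed directly from the voxel values of a parotid ROI. A minimal sketch, not the study's code, using synthetic ADC values:

```python
import numpy as np
from scipy import stats

# Illustrative sketch: whole-ROI ADC histogram parameters (mean, min, max,
# SD, skewness, kurtosis and percentiles) of the kind analysed in the study,
# computed from a flattened array of ADC values. The values are synthetic.

def adc_histogram_params(adc_values):
    v = np.asarray(adc_values, dtype=float)
    return {
        "mean": v.mean(),
        "min": v.min(),
        "max": v.max(),
        "sd": v.std(ddof=1),
        "skewness": stats.skew(v),
        "kurtosis": stats.kurtosis(v),
        "p25": np.percentile(v, 25),
        "p50": np.percentile(v, 50),
        "p75": np.percentile(v, 75),
        "p90": np.percentile(v, 90),
    }

# Synthetic parotid ROI: ADC ~ 1.0e-3 mm^2/s with some spread.
rng = np.random.default_rng(0)
params = adc_histogram_params(rng.normal(1.0e-3, 2.0e-4, size=5000))
```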

  7. Apparent diffusion coefficient histogram analysis can evaluate radiation-induced parotid damage and predict late xerostomia degree in nasopharyngeal carcinoma.

    PubMed

    Zhou, Nan; Guo, Tingting; Zheng, Huanhuan; Pan, Xia; Chu, Chen; Dou, Xin; Li, Ming; Liu, Song; Zhu, Lijing; Liu, Baorui; Chen, Weibo; He, Jian; Yan, Jing; Zhou, Zhengyang; Yang, Xiaofeng

    2017-09-19

    We investigated apparent diffusion coefficient (ADC) histogram analysis to evaluate radiation-induced parotid damage and predict xerostomia degrees in nasopharyngeal carcinoma (NPC) patients receiving radiotherapy. The imaging of bilateral parotid glands in NPC patients was conducted 2 weeks before radiotherapy (time point 1), one month after radiotherapy (time point 2), and four months after radiotherapy (time point 3). From time point 1 to 2, parotid volume, skewness, and kurtosis decreased (P < 0.001, = 0.001, and < 0.001, respectively), but all other ADC histogram parameters increased (all P < 0.001, except P = 0.006 for standard deviation [SD]). From time point 2 to 3, parotid volume continued to decrease (P = 0.022), and SD, 75th and 90th percentiles continued to increase (P = 0.024, 0.010, and 0.006, respectively). Early change rates of parotid ADCmean, ADCmin, kurtosis, and 25th, 50th, 75th, 90th percentiles (from time point 1 to 2) correlated with late parotid atrophy rate (from time point 1 to 3) (all P < 0.05). Multiple linear regression analysis revealed correlations among parotid volume, time point, and ADC histogram parameters. Early mean change rates for bilateral parotid SD and ADCmax could predict late xerostomia degrees at seven months after radiotherapy (three months after time point 3) with AUC of 0.781 and 0.818 (P = 0.014, 0.005, respectively). ADC histogram parameters were reproducible (intraclass correlation coefficient, 0.830-0.999). ADC histogram analysis could be used to evaluate radiation-induced parotid damage noninvasively, and predict late xerostomia degrees of NPC patients treated with radiotherapy.

  8. The use of multiple time point dynamic positron emission tomography/computed tomography in patients with oral/head and neck cancer does not predictably identify metastatic cervical lymph nodes.

    PubMed

    Carlson, Eric R; Schaefferkoetter, Josh; Townsend, David; McCoy, J Michael; Campbell, Paul D; Long, Misty

    2013-01-01

    To determine whether the time course of 18-fluorine fluorodeoxyglucose (18F-FDG) activity in multiple consecutively obtained 18F-FDG positron emission tomography (PET)/computed tomography (CT) scans predictably identifies metastatic cervical adenopathy in patients with oral/head and neck cancer. It is hypothesized that the activity will increase significantly over time only in those lymph nodes harboring metastatic cancer. A prospective cohort study was performed whereby patients with oral/head and neck cancer underwent consecutive imaging at 9 time points with PET/CT from 60 to 115 minutes after injection with 18F-FDG. The primary predictor variable was the status of the lymph nodes based on dynamic PET/CT imaging. Metastatic lymph nodes were defined as those that showed an increase greater than or equal to 10% over the baseline standard uptake values. The primary outcome variable was the pathologic status of the lymph node. A total of 2,237 lymph nodes were evaluated histopathologically in the 83 neck dissections performed in 74 patients. A total of 119 lymph nodes were noted to have hypermetabolic activity on the 90-minute (static) portion of the study and could be assessed across the time points. When we compared the PET/CT time-point (dynamic) data with the histopathologic analysis of the lymph nodes, the sensitivity, specificity, positive predictive value, negative predictive value, and accuracy were 60.3%, 70.5%, 66.0%, 65.2%, and 65.5%, respectively. The use of dynamic PET/CT imaging does not permit the ablative surgeon to depend only on the results of the PET/CT study to determine which patients will benefit from neck dissection. As such, we maintain that surgeons should continue to rely on clinical judgment and maintain a low threshold for executing neck dissection in patients with oral/head and neck cancer, including those patients with N0 neck designations. Copyright © 2013 American Association of Oral and Maxillofacial Surgeons. 
Published by Elsevier Inc. All rights reserved.
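    The diagnostic metrics reported above follow directly from a 2×2 confusion matrix. A brief sketch; the counts below are one set consistent with the 119 assessable nodes and the reported percentages, not figures stated in the abstract:

```python
# Sketch of the standard diagnostic-performance metrics. The confusion-matrix
# counts are a hypothetical reconstruction consistent with the abstract's
# 119 assessable nodes and reported percentages, not published figures.

def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# 35 + 18 + 23 + 43 = 119 assessable hypermetabolic nodes.
m = diagnostic_metrics(tp=35, fp=18, fn=23, tn=43)
# Rounded, these reproduce the reported 60.3% / 70.5% / 66.0% / 65.2% / 65.5%.
```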

  9. Real-time viability and apoptosis kinetic detection method of 3D multicellular tumor spheroids using the Celigo Image Cytometer.

    PubMed

    Kessel, Sarah; Cribbes, Scott; Bonasu, Surekha; Rice, William; Qiu, Jean; Chan, Leo Li-Ying

    2017-09-01

    The development of three-dimensional (3D) multicellular tumor spheroid models for cancer drug discovery research has increased in the recent years. The use of 3D tumor spheroid models may be more representative of the complex in vivo tumor microenvironments in comparison to two-dimensional (2D) assays. Currently, viability of 3D multicellular tumor spheroids has been commonly measured on standard plate-readers using metabolic reagents such as CellTiter-Glo® for end point analysis. Alternatively, high content image cytometers have been used to measure drug effects on spheroid size and viability. Previously, we have demonstrated a novel end point drug screening method for 3D multicellular tumor spheroids using the Celigo Image Cytometer. To better characterize the cancer drug effects, it is important to also measure the kinetic cytotoxic and apoptotic effects on 3D multicellular tumor spheroids. In this work, we demonstrate the use of PI and caspase 3/7 stains to measure viability and apoptosis for 3D multicellular tumor spheroids in real-time. The method was first validated by staining different types of tumor spheroids with PI and caspase 3/7 and monitoring the fluorescent intensities for 16 and 21 days. Next, PI-stained and nonstained control tumor spheroids were digested into single cell suspension to directly measure viability in a 2D assay to determine the potential toxicity of PI. Finally, extensive data analysis was performed on correlating the time-dependent PI and caspase 3/7 fluorescent intensities to the spheroid size and necrotic core formation to determine an optimal starting time point for cancer drug testing. The ability to measure real-time viability and apoptosis is highly important for developing a proper 3D model for screening tumor spheroids, which can allow researchers to determine time-dependent drug effects that usually are not captured by end point assays. 
This would improve the current tumor spheroid analysis method and potentially help identify better-qualified cancer drug candidates for drug discovery research. © 2017 International Society for Advancement of Cytometry.

  10. PSFGAN: a generative adversarial network system for separating quasar point sources and host galaxy light

    NASA Astrophysics Data System (ADS)

    Stark, Dominic; Launet, Barthelemy; Schawinski, Kevin; Zhang, Ce; Koss, Michael; Turp, M. Dennis; Sartori, Lia F.; Zhang, Hantian; Chen, Yiru; Weigel, Anna K.

    2018-06-01

    The study of unobscured active galactic nuclei (AGN) and quasars depends on the reliable decomposition of the light from the AGN point source and the extended host galaxy light. The problem is typically approached using parametric fitting routines using separate models for the host galaxy and the point spread function (PSF). We present a new approach using a Generative Adversarial Network (GAN) trained on galaxy images. We test the method using Sloan Digital Sky Survey r-band images with artificial AGN point sources added that are then removed using the GAN and with parametric methods using GALFIT. When the AGN point source is more than twice as bright as the host galaxy, we find that our method, PSFGAN, can recover point source and host galaxy magnitudes with smaller systematic error and a lower average scatter (49 per cent). PSFGAN is more tolerant to poor knowledge of the PSF than parametric methods. Our tests show that PSFGAN is robust against a broadening in the PSF width of ± 50 per cent if it is trained on multiple PSFs. We demonstrate that while a matched training set does improve performance, we can still subtract point sources using a PSFGAN trained on non-astronomical images. While initial training is computationally expensive, evaluating PSFGAN on data is more than 40 times faster than GALFIT fitting two components. Finally, PSFGAN is more robust and easy to use than parametric methods as it requires no input parameters.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Volker, Arno; Hunter, Alan

    Anisotropic materials are being used increasingly in high performance industrial applications, particularly in the aeronautical and nuclear industries. Some important examples of these materials are composites, single-crystal and heavy-grained metals. Ultrasonic array imaging in these materials requires exact knowledge of the anisotropic material properties. Without this information, the images can be adversely affected, causing a reduction in defect detection and characterization performance. The imaging operation can be formulated in two consecutive and reciprocal focusing steps, i.e., focusing the sources and then focusing the receivers. Applying just one of these focusing steps yields an interesting intermediate domain. The resulting common focus point gather (CFP-gather) can be interpreted to determine the propagation operator. After focusing the sources, the observed travel-time in the CFP-gather describes the propagation from the focus point to the receivers. If the correct propagation operator is used, the measured travel-times should be the same as the time-reversed focusing operator due to reciprocity. This makes it possible to iteratively update the focusing operator using the data only and allows the material to be imaged without explicit knowledge of the anisotropic material parameters. Furthermore, the determined propagation operator can also be used to invert for the anisotropic medium parameters. This paper details the proposed technique and demonstrates its use on simulated array data from a specimen of Inconel single-crystal alloy commonly used in the aeronautical and nuclear industries.

  12. A new integrated dual time-point amyloid PET/MRI data analysis method.

    PubMed

    Cecchin, Diego; Barthel, Henryk; Poggiali, Davide; Cagnin, Annachiara; Tiepolt, Solveig; Zucchetta, Pietro; Turco, Paolo; Gallo, Paolo; Frigo, Anna Chiara; Sabri, Osama; Bui, Franco

    2017-11-01

    In the initial evaluation of patients with suspected dementia and Alzheimer's disease, there is no consensus on how to perform semiquantification of amyloid in such a way that it: (1) facilitates visual qualitative interpretation, (2) takes the kinetic behaviour of the tracer into consideration particularly with regard to at least partially correcting for blood flow dependence, (3) analyses the amyloid load based on accurate parcellation of cortical and subcortical areas, (4) includes partial volume effect correction (PVEC), (5) includes MRI-derived topographical indexes, (6) enables application to PET/MRI images and PET/CT images with separately acquired MR images, and (7) allows automation. A method with all of these characteristics was retrospectively tested in 86 subjects who underwent amyloid (18F-florbetaben) PET/MRI in a clinical setting (using images acquired 90-110 min after injection, 53 were classified visually as amyloid-negative and 33 as amyloid-positive). Early images after tracer administration were acquired between 0 and 10 min after injection, and later images were acquired between 90 and 110 min after injection. PVEC of the PET data was carried out using the geometric transfer matrix method. Parametric images and some regional output parameters, including two innovative "dual time-point" indexes, were obtained. Subjects classified visually as amyloid-positive showed a sparse tracer uptake in the primary sensory, motor and visual areas in accordance with the isocortical stage of the topographic distribution of the amyloid plaque (Braak stages V/VI). In patients classified visually as amyloid-negative, the method revealed detectable levels of tracer uptake in the basal portions of the frontal and temporal lobes, areas that are known to be sites of early deposition of amyloid plaques that probably represented early accumulation (Braak stage A) that is typical of normal ageing. 
There was a strong correlation between age and the indexes of the new dual time-point amyloid imaging method in amyloid-negative patients. The method can be considered a valuable tool in both routine clinical practice and in the research setting as it will standardize data regarding amyloid deposition. It could potentially also be used to identify early amyloid plaque deposition in younger subjects in whom treatment could theoretically be more effective.
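    The abstract does not define the two "dual time-point" indexes. As a purely illustrative sketch (region names and values below are hypothetical, and the paper's actual indexes may differ), a generic index contrasting early, flow-weighted uptake with late, amyloid-weighted uptake could be computed as:

```python
import numpy as np

# Hypothetical regional SUVR values; names and numbers are illustrative only.
regions = ["frontal", "temporal", "occipital"]
early_suvr = np.array([1.10, 1.05, 1.00])  # 0-10 min: flow-weighted
late_suvr = np.array([1.65, 1.40, 1.02])   # 90-110 min: amyloid-weighted

# Generic dual time-point index: relative late-vs-early change, which
# partially normalizes away the blood-flow component of late uptake.
dtp_index = (late_suvr - early_suvr) / early_suvr
```

Regions with genuine amyloid load show a large relative late-phase increase, while flow-dominated regions stay near zero.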

  13. Efficient Time-Domain Imaging Processing for One-Stationary Bistatic Forward-Looking SAR Including Motion Errors

    PubMed Central

    Xie, Hongtu; Shi, Shaoying; Xiao, Hui; Xie, Chao; Wang, Feng; Fang, Qunle

    2016-01-01

    With the rapid development of one-stationary bistatic forward-looking synthetic aperture radar (OS-BFSAR) technology, the huge amount of remote sensing data presents challenges for real-time imaging processing. In this paper, an efficient time-domain algorithm (ETDA) that accounts for motion errors in OS-BFSAR imaging processing is presented. This method not only precisely handles the large spatial variances, serious range-azimuth coupling and motion errors, but also greatly improves imaging efficiency compared with the direct time-domain algorithm (DTDA). Besides, it represents the subimages on polar grids in the ground plane instead of the slant-range plane, and derives the sampling requirements, considering motion errors, for the polar grids to offer a near-optimum tradeoff between imaging precision and efficiency. First, the OS-BFSAR imaging geometry is built, and the DTDA for OS-BFSAR imaging is provided. Second, the polar grids of the subimages are defined, and the subaperture imaging in the ETDA is derived. The sampling requirements for the polar grids are derived from a bandwidth point of view. Finally, the implementation and computational load of the proposed ETDA are analyzed. Experimental results based on simulated and measured data validate that the proposed ETDA outperforms the DTDA in terms of efficiency. PMID:27845757
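    The DTDA baseline that the ETDA accelerates is, in essence, bistatic backprojection: each ground-plane pixel coherently accumulates echo energy at its hypothesised transmitter-target-receiver delay. A minimal phase-only sketch under an assumed geometry and carrier frequency (none of these numbers come from the paper, and motion errors are omitted):

```python
import numpy as np

c = 3e8          # propagation speed
fc = 1e9         # assumed carrier frequency (not from the paper)

# one-stationary bistatic geometry: fixed transmitter, moving receiver
tx = np.array([0.0, -500.0, 100.0])
rx_path = np.stack([np.linspace(-50.0, 50.0, 101),
                    np.full(101, -400.0),
                    np.full(101, 100.0)], axis=1)
target = np.array([10.0, 0.0, 0.0])

def bistatic_range(p):
    # transmitter-to-point plus point-to-receiver distance, per aperture position
    return np.linalg.norm(p - tx) + np.linalg.norm(rx_path - p, axis=1)

# ideal range-compressed echo of a single point target: pure carrier phase
echo = np.exp(-2j * np.pi * fc * bistatic_range(target) / c)

# direct time-domain backprojection over a small ground-plane grid
xs = np.linspace(0.0, 20.0, 41)
ys = np.linspace(-10.0, 10.0, 41)
img = np.zeros((ys.size, xs.size), dtype=complex)
for gy, y in enumerate(ys):
    for gx, x in enumerate(xs):
        r = bistatic_range(np.array([x, y, 0.0]))
        # coherently undo the phase hypothesised for this pixel
        img[gy, gx] = np.sum(echo * np.exp(2j * np.pi * fc * r / c))

peak_iy, peak_ix = np.unravel_index(np.argmax(np.abs(img)), img.shape)
```

The image peaks at the simulated target position; the ETDA's contribution is doing this recursively on polar subimage grids rather than pixel by pixel.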

  14. Interest point detection for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Dorado-Muñoz, Leidy P.; Vélez-Reyes, Miguel; Roysam, Badrinath; Mukherjee, Amit

    2009-05-01

    This paper presents an algorithm for automated extraction of interest points (IPs) in multispectral and hyperspectral images. Interest points are features of the image that capture information from their neighbourhoods and that are distinctive and stable under transformations such as translation and rotation. Interest-point operators for monochromatic images were proposed more than a decade ago and have since been studied extensively. IPs have been applied to diverse problems in computer vision, including image matching, recognition, registration, 3D reconstruction, change detection, and content-based image retrieval. Interest points are helpful in data reduction, and reduce the computational burden of various algorithms (such as registration, object detection, and 3D reconstruction) by replacing an exhaustive search over the entire image domain with a probe into a concise set of highly informative points. An interest operator seeks out points in an image that are structurally distinct, invariant to imaging conditions, stable under geometric transformation, and interpretable, and which are therefore good candidates for interest points. Our approach extends ideas from Lowe's keypoint operator, which uses local extrema of the Difference of Gaussian (DoG) operator at multiple scales to detect interest points in gray-level images. The proposed approach extends Lowe's method by direct conversion of scalar operations, such as scale-space generation and extreme point detection, into operations that take the vector nature of the image into consideration. Experimental results with RGB and hyperspectral images demonstrate the potential of the method for this application and the potential improvements of a fully vectorial approach over the band-by-band approaches described in the literature.
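    The scalar baseline being extended here is Lowe's DoG detector. A self-contained sketch for a single gray-level band (the paper's contribution, not reproduced here, is replacing these scalar operations with vector-valued ones):

```python
import numpy as np

def gaussian_blur(img, sigma):
    # separable Gaussian filter built from scratch (no external dependencies)
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, radius, mode='reflect')
    tmp = np.apply_along_axis(np.convolve, 1, pad, k, mode='valid')
    return np.apply_along_axis(np.convolve, 0, tmp, k, mode='valid')

def dog_keypoints(img, sigmas=(1.0, 1.6, 2.56, 4.1), thresh=0.01):
    # Lowe-style detector: local extrema of Difference-of-Gaussian in both
    # space and scale, thresholded on contrast
    blurred = [gaussian_blur(img, s) for s in sigmas]
    dogs = np.stack([b - a for a, b in zip(blurred, blurred[1:])])
    pts = []
    for k in range(1, dogs.shape[0] - 1):
        for i in range(1, img.shape[0] - 1):
            for j in range(1, img.shape[1] - 1):
                patch = dogs[k-1:k+2, i-1:i+2, j-1:j+2]
                v = dogs[k, i, j]
                if abs(v) > thresh and (v == patch.max() or v == patch.min()):
                    pts.append((i, j))
    return pts

# a single Gaussian blob should yield a keypoint at its centre
ii, jj = np.mgrid[0:40, 0:40]
blob = np.exp(-((ii - 20.0)**2 + (jj - 20.0)**2) / (2.0 * 2.0**2))
pts = dog_keypoints(blob)
```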

  15. Comparison of breast DCE-MRI contrast time points for predicting response to neoadjuvant chemotherapy using deep convolutional neural network features with transfer learning

    NASA Astrophysics Data System (ADS)

    Huynh, Benjamin Q.; Antropova, Natasha; Giger, Maryellen L.

    2017-03-01

    DCE-MRI datasets have a temporal aspect to them, resulting in multiple regions of interest (ROIs) per subject, based on contrast time points. It is unclear how the different contrast time points vary in terms of usefulness for computer-aided diagnosis tasks in conjunction with deep learning methods. We thus sought to compare the different DCE-MRI contrast time points with regard to how well their extracted features predict response to neoadjuvant chemotherapy within a deep convolutional neural network. Our dataset consisted of 561 ROIs from 64 subjects. Each subject was categorized as a non-responder or responder, determined by recurrence-free survival. First, features were extracted from each ROI using a convolutional neural network (CNN) pre-trained on non-medical images. Linear discriminant analysis classifiers were then trained on varying subsets of these features, based on their contrast time points of origin. Leave-one-out cross validation (by subject) was used to assess performance in the task of estimating probability of response to therapy, with area under the ROC curve (AUC) as the metric. The classifier trained on features from strictly the pre-contrast time point performed the best, with an AUC of 0.85 (SD = 0.033). The remaining classifiers resulted in AUCs ranging from 0.71 (SD = 0.028) to 0.82 (SD = 0.027). Overall, we found the pre-contrast time point to be the most effective at predicting response to therapy and that including additional contrast time points moderately reduces variance.
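    The evaluation protocol described here, features per subject, a linear classifier, and leave-one-out cross-validation by subject, can be sketched with synthetic stand-in features (the study used pre-trained CNN activations and LDA; below, random features and a two-class Fisher discriminant illustrate the protocol only):

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic stand-in for CNN-derived features: 30 subjects, 8 features,
# "responders" shifted by 1.5 in every dimension (illustrative, not real data)
n, d = 30, 8
X = np.vstack([rng.normal(0.0, 1.0, (15, d)),
               rng.normal(1.5, 1.0, (15, d))])
y = np.array([0] * 15 + [1] * 15)

def lda_score(X_train, y_train, x_test):
    # two-class Fisher discriminant: w = Sw^{-1} (mu1 - mu0)
    mu0 = X_train[y_train == 0].mean(axis=0)
    mu1 = X_train[y_train == 1].mean(axis=0)
    Sw = np.cov(X_train[y_train == 0].T) + np.cov(X_train[y_train == 1].T)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(d), mu1 - mu0)
    return float(x_test @ w)

# leave-one-out by subject: score each held-out subject with a model
# trained on the remaining subjects
scores = np.array([lda_score(np.delete(X, i, 0), np.delete(y, i), X[i])
                   for i in range(n)])

def auc(scores, labels):
    # probability that a random positive outscores a random negative
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

a = auc(scores, y)
```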

  16. Image registration with uncertainty analysis

    DOEpatents

    Simonson, Katherine M [Cedar Crest, NM

    2011-03-22

    In an image registration method, edges are detected in a first image and a second image. A percentage of edge pixels in a subset of the second image that are also edges in the first image shifted by a translation is calculated. A best registration point is calculated based on a maximum percentage of edges matched. In a predefined search region, all registration points other than the best registration point are identified that are not significantly worse than the best registration point according to a predetermined statistical criterion.
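    The matching criterion can be sketched directly: shift one binary edge map over the other and score the fraction of edge pixels that coincide (edge detection itself, and the statistical test for near-ties, are omitted):

```python
import numpy as np

def edge_match_fraction(edges_a, edges_b, dy, dx):
    # fraction of edge pixels in the second image that are also edge
    # pixels in the first image after shifting by (dy, dx)
    H, W = edges_b.shape
    ys, xs = np.nonzero(edges_b)
    ys2, xs2 = ys + dy, xs + dx
    ok = (ys2 >= 0) & (ys2 < H) & (xs2 >= 0) & (xs2 < W)
    if ok.sum() == 0:
        return 0.0
    return edges_a[ys2[ok], xs2[ok]].mean()

def best_registration(edges_a, edges_b, search=5):
    # exhaustive search over the predefined translation window
    best, best_pct = (0, 0), -1.0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            pct = edge_match_fraction(edges_a, edges_b, dy, dx)
            if pct > best_pct:
                best, best_pct = (dy, dx), pct
    return best, best_pct

# synthetic check: a square outline shifted by a known amount
edges_a = np.zeros((50, 50), dtype=bool)
edges_a[10:31, 10] = edges_a[10:31, 30] = True
edges_a[10, 10:31] = edges_a[30, 10:31] = True
edges_b = np.roll(edges_a, (2, 3), axis=(0, 1))  # second image shifted by (+2, +3)
(dy, dx), pct = best_registration(edges_a, edges_b)
```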

  17. Real-time automatic registration in optical surgical navigation

    NASA Astrophysics Data System (ADS)

    Lin, Qinyong; Yang, Rongqian; Cai, Ken; Si, Xuan; Chen, Xiuwen; Wu, Xiaoming

    2016-05-01

    An image-guided surgical navigation system requires the improvement of the patient-to-image registration time to enhance the convenience of the registration procedure. A critical step in achieving this aim is performing a fully automatic patient-to-image registration. This study reports on the design of custom fiducial markers and the performance of a real-time automatic patient-to-image registration method using these markers, on the basis of an optical tracking system, for rigid anatomy. The custom fiducial markers are designed to be automatically localized in both patient and image spaces. An automatic localization method is performed by registering a point cloud sampled from the three-dimensional (3D) pedestal model surface of a fiducial marker to each pedestal of the fiducial markers detected in image space. A head phantom is constructed to estimate the performance of the real-time automatic registration method under four fiducial configurations. The head phantom experimental results demonstrate that the real-time automatic registration method is more convenient, rapid, and accurate than the manual method. The time required for each registration is approximately 0.1 s. The automatic localization method precisely localizes the fiducial markers in image space. The averaged target registration error for the four configurations is approximately 0.7 mm. The automatic registration performance is independent of the positions relative to the tracking system and of the movement of the patient during the operation.
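    The automatic localization step registers a point cloud sampled from the 3D pedestal model to pedestals found in image space. Assuming point correspondences are already established, the core rigid fit can be sketched with the Kabsch (SVD) algorithm; the paper does not state which solver it uses:

```python
import numpy as np

def rigid_fit(P, Q):
    # least-squares rotation R and translation t with R @ p + t ≈ q,
    # for corresponding point sets P (model) and Q (image space)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    # reflection guard keeps R a proper rotation
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

# synthetic check: recover a known rotation/translation exactly
rng = np.random.default_rng(1)
P = rng.normal(size=(20, 3))
th = np.pi / 6
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
Q = P @ R_true.T + t_true
R, t = rigid_fit(P, Q)
```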

  18. Mobile, real-time, and point-of-care augmented reality is robust, accurate, and feasible: a prospective pilot study.

    PubMed

    Kenngott, Hannes Götz; Preukschas, Anas Amin; Wagner, Martin; Nickel, Felix; Müller, Michael; Bellemann, Nadine; Stock, Christian; Fangerau, Markus; Radeleff, Boris; Kauczor, Hans-Ulrich; Meinzer, Hans-Peter; Maier-Hein, Lena; Müller-Stich, Beat Peter

    2018-06-01

    Augmented reality (AR) systems are currently being explored by a broad spectrum of industries, mainly for improving point-of-care access to data and images. Especially in surgery, and particularly for timely decisions in emergency cases, fast and comprehensive access to images at the patient bedside is mandatory. Currently, imaging data are accessed at a distance from the patient both in time and space, i.e., at a specific workstation. Mobile technology and 3-dimensional (3D) visualization of radiological imaging data promise to overcome these restrictions by making bedside AR feasible. In this project, AR was realized in a surgical setting by fusing a 3D representation of structures of interest with live camera images on a tablet computer using marker-based registration. The intent of this study was to focus on a thorough evaluation of AR. Feasibility, robustness, and accuracy were thus evaluated consecutively in a phantom model and a porcine model. Additionally, feasibility was evaluated in one male volunteer. In the phantom model (n = 10), AR visualization was feasible in 84% of the visualization space with high accuracy (mean reprojection error ± standard deviation (SD): 2.8 ± 2.7 mm; 95th percentile = 6.7 mm). In the porcine model (n = 5), AR visualization was feasible in 79% with high accuracy (mean reprojection error ± SD: 3.5 ± 3.0 mm; 95th percentile = 9.5 mm). Furthermore, AR was successfully used and proved feasible in the male volunteer. Mobile, real-time, and point-of-care AR for clinical purposes proved feasible, robust, and accurate in the phantom, animal, and single-trial human models in this study. Consequently, AR implemented along similar lines proved robust and accurate enough to be evaluated in clinical trials assessing accuracy and robustness in clinical reality, as well as integration into the clinical workflow. If these further studies prove successful, AR might revolutionize data access at the patient bedside.

  19. Point Cloud Generation from Aerial Image Data Acquired by a Quadrocopter Type Micro Unmanned Aerial Vehicle and a Digital Still Camera

    PubMed Central

    Rosnell, Tomi; Honkavaara, Eija

    2012-01-01

    The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one of the methods is based on BAE Systems’ SOCET SET classical commercial photogrammetric software and another is built using Microsoft®’s Photosynth™ service available on the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but also some artifacts were detected. The point clouds from the Photosynth processing were sparser and noisier, which is to a large extent due to the fact that the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for properties of imaging sensor, data collection and processing of UAV image data to ensure accurate point cloud generation. PMID:22368479

  20. Point cloud generation from aerial image data acquired by a quadrocopter type micro unmanned aerial vehicle and a digital still camera.

    PubMed

    Rosnell, Tomi; Honkavaara, Eija

    2012-01-01

    The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one of the methods is based on BAE Systems' SOCET SET classical commercial photogrammetric software and another is built using Microsoft(®)'s Photosynth™ service available on the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but also some artifacts were detected. The point clouds from the Photosynth processing were sparser and noisier, which is to a large extent due to the fact that the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for properties of imaging sensor, data collection and processing of UAV image data to ensure accurate point cloud generation.

  1. Self-localization for an autonomous mobile robot based on an omni-directional vision system

    NASA Astrophysics Data System (ADS)

    Chiang, Shu-Yin; Lin, Kuang-Yu; Chia, Tsorng-Lin

    2013-12-01

    In this study, we designed an autonomous mobile robot based on the rules of the Federation of International Robot-soccer Association (FIRA) RoboSot category, integrating the techniques of computer vision, real-time image processing, dynamic target tracking, wireless communication, self-localization, motion control, path planning, and control strategy to achieve the contest goal. The self-localization scheme of the mobile robot is based on algorithms applied to the images from its omni-directional vision system. In previous works, we used the image colors of the field goals as reference points, combining either dual-circle or trilateration positioning of the reference points to achieve self-localization of the autonomous mobile robot. However, because the image of the game field is easily affected by ambient light, positioning systems exclusively based on color model algorithms cause errors. To reduce environmental effects and achieve self-localization of the robot, the proposed algorithm assesses the corners of field lines by using an omni-directional vision system. Particularly in the mid-size league of the RoboCup soccer competition, self-localization algorithms based on extracting white lines from the soccer field have become increasingly popular. Moreover, white lines are less influenced by light than are the color models of the goals. Therefore, we propose an algorithm that transforms the omni-directional image into an unwrapped transformed image, enhancing feature extraction. The process is described as follows: First, radial scan-lines were used to process the omni-directional images, reducing the computational load and improving system efficiency. The lines were radially arranged around the center of the omni-directional camera image, resulting in a shorter computational time compared with the traditional Cartesian coordinate system. However, the omni-directional image is distorted, which makes it difficult to recognize the position of the robot, so image transformation was required to implement self-localization. Second, we transformed the omni-directional images into panoramic images, so that the distortion of the white lines could be corrected through the transformation. The interest points that form the corners of the landmark were then located using the features from accelerated segment test (FAST) algorithm, in which a circle of sixteen pixels surrounding the corner candidate is examined; FAST is a high-speed feature detector suited to real-time frame-rate applications. Finally, the dual-circle, trilateration, and cross-ratio projection algorithms were implemented to choose among the corners obtained from the FAST algorithm and localize the position of the robot. The results demonstrate that the proposed algorithm is accurate, exhibiting a 2-cm position error on the soccer field measuring 600 cm x 400 cm.
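    The unwrapping step described above, turning the donut-shaped omni-directional image into a panoramic strip indexed by azimuth and radius, can be sketched with a nearest-neighbour remap (the centre and radius limits below are assumed values, not the paper's):

```python
import numpy as np

def unwrap_omni(img, center, r_min, r_max, out_w=360, out_h=None):
    # map an omnidirectional (donut) image to a panoramic strip:
    # output column = azimuth angle, output row = radius from the centre
    if out_h is None:
        out_h = r_max - r_min
    cy, cx = center
    theta = 2.0 * np.pi * np.arange(out_w) / out_w
    r = r_min + (r_max - r_min) * np.arange(out_h) / out_h
    yy = (cy + r[:, None] * np.sin(theta)[None, :]).round().astype(int)
    xx = (cx + r[:, None] * np.cos(theta)[None, :]).round().astype(int)
    yy = np.clip(yy, 0, img.shape[0] - 1)
    xx = np.clip(xx, 0, img.shape[1] - 1)
    return img[yy, xx]   # nearest-neighbour lookup

# synthetic check: a feature at radius 30, azimuth 0 from centre (50, 50)
omni = np.zeros((101, 101))
omni[50, 80] = 1.0
pano = unwrap_omni(omni, center=(50, 50), r_min=20, r_max=40,
                   out_w=360, out_h=20)
```

After the remap, straight field lines become (approximately) straight in the panoramic image, which is what makes FAST corner detection practical.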

  2. Radiometric Normalization of Large Airborne Image Data Sets Acquired by Different Sensor Types

    NASA Astrophysics Data System (ADS)

    Gehrke, S.; Beshah, B. T.

    2016-06-01

    Generating seamless mosaics of aerial images is a particularly challenging task when the mosaic comprises a large number of images, collected over longer periods of time and with different sensors under varying imaging conditions. Such large mosaics typically consist of very heterogeneous image data, both spatially (different terrain types and atmosphere) and temporally (unstable atmospheric properties and even changes in land coverage). We present a new radiometric normalization or, respectively, radiometric aerial triangulation approach that takes advantage of our knowledge about each sensor's properties. The current implementation supports medium and large format airborne imaging sensors of the Leica Geosystems family, namely the ADS line-scanner as well as DMC and RCD frame sensors. A hierarchical modelling - with parameters for the overall mosaic, the sensor type, different flight sessions, strips and individual images - allows for adaptation to each sensor's geometric and radiometric properties. Additional parameters at different hierarchy levels compensate for radiometric differences of various origins, addressing shortcomings of the preceding radiometric sensor calibration as well as of the BRDF and atmospheric corrections. The final, relative normalization is based on radiometric tie points in overlapping images, absolute radiometric control points, and image statistics. It is computed in a global least squares adjustment for the entire mosaic by altering each image's histogram using a location-dependent mathematical model. This model involves contrast and brightness corrections at radiometric fix points, with bilinear interpolation for corrections in-between. The distribution of the radiometric fix points is adaptive to each image and generally increases with image size, enabling optimal local adaptation even for very long image strips as typically captured by a line-scanner sensor. The normalization approach is implemented in HxMap software. It has been successfully applied to large sets of heterogeneous imagery, including the adjustment of original sensor images prior to quality control and further processing, as well as radiometric adjustment for ortho-image mosaic generation.
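    The tie-point adjustment idea can be sketched in a deliberately flattened form: one gain/offset pair per image (no hierarchy and no location dependence, unlike the paper's model), solved as a single least-squares system with one image fixed as the datum. All gains, offsets, and radiances below are synthetic:

```python
import numpy as np

# three overlapping images; image 0 anchors the radiometric datum.
# synthetic per-image corrections: corrected = gain * observed + offset
true_corr = {1: (1.2, -5.0), 2: (0.9, 3.0)}

def observe(img, L):
    # invert the correction model: what image `img` records for true radiance L
    if img == 0:
        return L
    g, o = true_corr[img]
    return (L - o) / g

# radiometric tie points: the same ground radiance seen in two images
L_vals = [10.0, 50.0, 100.0, 30.0, 70.0, 120.0]
pairs = [(0, 1), (0, 2), (1, 2)] * 2
rows, rhs = [], []
for L, (i, j) in zip(L_vals, pairs):
    vi, vj = observe(i, L), observe(j, L)
    row = np.zeros(4)        # unknowns: [g1, o1, g2, o2]
    b = 0.0
    for img, v, s in ((i, vi, 1.0), (j, vj, -1.0)):
        if img == 0:
            b -= s * v       # fixed datum image moves to the right-hand side
        else:
            row[2 * (img - 1)] += s * v
            row[2 * (img - 1) + 1] += s
    rows.append(row)
    rhs.append(b)

# global least-squares adjustment: corrected values must agree at tie points
x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
```

With exact synthetic observations the adjustment recovers the gains and offsets exactly; real data would leave residuals that the hierarchical model and control points constrain.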

  3. Searching for Comets on the World Wide Web: The Orbit of 17P/Holmes from the Behavior of Photographers

    NASA Astrophysics Data System (ADS)

    Lang, Dustin; Hogg, David W.

    2012-08-01

    We performed an image search for "Comet Holmes," using the Yahoo! Web search engine, on 2010 April 1. Thousands of images were returned. We astrometrically calibrated—and therefore vetted—the images using the Astrometry.net system. The calibrated image pointings form a set of data points to which we can fit a test-particle orbit in the solar system, marginalizing over image dates and detecting outliers. The approach is Bayesian and the model is, in essence, a model of how comet astrophotographers point their instruments. In this work, we do not measure the position of the comet within each image, but rather use the celestial position of the whole image to infer the orbit. We find very strong probabilistic constraints on the orbit, although slightly off the Jet Propulsion Lab ephemeris, probably due to limitations of our model. Hyperparameters of the model constrain the reliability of date meta-data and where in the image astrophotographers place the comet; we find that ~70% of the meta-data are correct and that the comet typically appears in the central third of the image footprint. This project demonstrates that discoveries and measurements can be made using data of extreme heterogeneity and unknown provenance. As the size and diversity of astronomical data sets continues to grow, approaches like ours will become more essential. This project also demonstrates that the Web is an enormous repository of astronomical information, and that if an object has been given a name and photographed thousands of times by observers who post their images on the Web, we can (re-)discover it and infer its dynamical properties.

  4. THE SLOAN DIGITAL SKY SURVEY STRIPE 82 IMAGING DATA: DEPTH-OPTIMIZED CO-ADDS OVER 300 deg² IN FIVE FILTERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Linhua; Fan, Xiaohui; McGreer, Ian D.

    We present and release co-added images of the Sloan Digital Sky Survey (SDSS) Stripe 82. Stripe 82 covers an area of ∼300 deg² on the celestial equator, and has been repeatedly scanned 70-90 times in the ugriz bands by the SDSS imaging survey. By making use of all available data in the SDSS archive, our co-added images are optimized for depth. Input single-epoch frames were properly processed and weighted based on seeing, sky transparency, and background noise before co-addition. The resultant products are co-added science images and their associated weight images that record relative weights at individual pixels. The depths of the co-adds, measured as the 5σ detection limits of the aperture (3.2'' diameter) magnitudes for point sources, are roughly 23.9, 25.1, 24.6, 24.1, and 22.8 AB magnitudes in the five bands, respectively. They are 1.9-2.2 mag deeper than the best SDSS single-epoch data. The co-added images have good image quality, with an average point-spread function FWHM of ∼1'' in the r, i, and z bands. We also release object catalogs that were made with SExtractor. These co-added products have many potential uses for studies of galaxies, quasars, and Galactic structure. We further present and release near-IR J-band images that cover ∼90 deg² of Stripe 82. These images were obtained using the NEWFIRM camera on the NOAO 4 m Mayall telescope, and have a depth of about 20.0-20.5 Vega magnitudes (also 5σ detection limits for point sources).
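    Per pixel, the weighted co-addition reduces to a weighted mean plus an accumulated weight map. A minimal inverse-variance sketch (the actual SDSS weighting also folds in seeing and transparency; here a single per-frame variance stands in for all three):

```python
import numpy as np

def coadd(frames, variances):
    # inverse-variance weighting: frames with better sky transparency and
    # lower background noise contribute more to the co-added science image
    stack = np.asarray(frames, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    wcube = w[:, None, None] * np.ones_like(stack)
    science = (stack * wcube).sum(axis=0) / wcube.sum(axis=0)
    weight_map = wcube.sum(axis=0)   # per-pixel relative weights, released
    return science, weight_map       # alongside the science image

# two single-epoch frames of the same 1x1 patch, the second 4x noisier
sci, wmap = coadd([[[0.0]], [[3.0]]], [1.0, 4.0])
```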

  5. The Sloan Digital Sky Survey Stripe 82 Imaging Data: Depth-Optimized Co-adds Over 300 deg² in Five Filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Linhua; Fan, Xiaohui; Bian, Fuyan

    We present and release co-added images of the Sloan Digital Sky Survey (SDSS) Stripe 82. Stripe 82 covers an area of ~300 deg² on the celestial equator, and has been repeatedly scanned 70-90 times in the ugriz bands by the SDSS imaging survey. By making use of all available data in the SDSS archive, our co-added images are optimized for depth. Input single-epoch frames were properly processed and weighted based on seeing, sky transparency, and background noise before co-addition. The resultant products are co-added science images and their associated weight images that record relative weights at individual pixels. The depths of the co-adds, measured as the 5σ detection limits of the aperture (3.2'' diameter) magnitudes for point sources, are roughly 23.9, 25.1, 24.6, 24.1, and 22.8 AB magnitudes in the five bands, respectively. They are 1.9-2.2 mag deeper than the best SDSS single-epoch data. The co-added images have good image quality, with an average point-spread function FWHM of ~1'' in the r, i, and z bands. We also release object catalogs that were made with SExtractor. These co-added products have many potential uses for studies of galaxies, quasars, and Galactic structure. We further present and release near-IR J-band images that cover ~90 deg² of Stripe 82. These images were obtained using the NEWFIRM camera on the NOAO 4 m Mayall telescope, and have a depth of about 20.0-20.5 Vega magnitudes (also 5σ detection limits for point sources).

  6. Integral imaging with multiple image planes using a uniaxial crystal plate.

    PubMed

    Park, Jae-Hyeung; Jung, Sungyong; Choi, Heejin; Lee, Byoungho

    2003-08-11

    Integral imaging has been attracting much attention recently because of its several advantages, such as full parallax, continuous view-points, and real-time full-color operation. However, the thickness of the displayed three-dimensional image is limited to a relatively small value owing to the degradation of image resolution. In this paper, we propose a method to provide observers with enhanced depth perception, without severe resolution degradation, through the use of the birefringence of a uniaxial crystal plate. The proposed integral imaging system can display images integrated around three central depth planes by dynamically altering the polarization and controlling both the elemental images and the dynamic slit array mask accordingly. We explain the principle of the proposed method and verify it experimentally.

  7. Evaluation of 89Zr-rituximab tracer by Cerenkov luminescence imaging and correlation with PET in a humanized transgenic mouse model to image NHL.

    PubMed

    Natarajan, Arutselvan; Habte, Frezghi; Liu, Hongguang; Sathirachinda, Ataya; Hu, Xiang; Cheng, Zhen; Nagamine, Claude M; Gambhir, Sanjiv Sam

    2013-08-01

    This research aimed to study the use of Cerenkov luminescence imaging (CLI) for non-Hodgkin's lymphoma (NHL) using the 89Zr-rituximab positron emission tomography (PET) tracer in a humanized transgenic mouse model that expresses human CD20, and to assess the correlation of CLI with PET. 89Zr-rituximab (2.6 MBq) was injected via the tail vein into transgenic mice that express human CD20 on their B cells (huCD20TM). One group (n=3) received a 2 mg/kg pre-dose (blocking) of cold rituximab 2 h prior to tracer; a second group (n=3) had no pre-dose (non-blocking). CLI was performed using a cooled charge-coupled device optical imager. We also performed PET imaging and ex vivo studies in order to confirm the in vivo CLI results. At each time point (4, 24, 48, 72, and 96 h), two groups of mice were imaged in vivo and ex vivo with CLI and PET, and at 96 h, organs were measured by gamma counter. huCD20 transgenic mice injected with 89Zr-rituximab demonstrated a high-contrast CLI image compared with mice blocked with a cold dose. At various time points of 4-96 h post-radiotracer injection, the in vivo CLI signal intensity showed specific uptake in the spleen, where B cells reside and, hence, the huCD20 biomarker is present at very high levels. The time-activity curve of dose decay-corrected CLI intensity and the percent injected dose per gram of tissue of PET uptake in the spleen both increased over the time period (4-96 h). At 96 h, the 89Zr-rituximab uptake ratio (non-blocking vs blocking; mean ± standard deviation) for the spleen was 1.5 ± 0.6 for CLI and 1.9 ± 0.3 for PET. Furthermore, spleen uptake measurements (non-blocking and blocking, at all time points) of CLI vs PET showed good correlation (R2=0.85 and slope=0.576), which also confirmed the corresponding correlation parameter values (R2=0.834 and slope=0.47) obtained for ex vivo measurements. CLI and PET of huCD20 transgenic mice injected with 89Zr-rituximab demonstrated that the tracer was able to target huCD20-expressing B cells.
The in vivo and ex vivo tracer uptake corresponding to the CLI radiance intensity from the spleen is in good agreement with PET. In this report, we have validated the use of CLI with PET for NHL imaging in huCD20TM.

  8. Estimating occupancy and abundance using aerial images with imperfect detection

    USGS Publications Warehouse

    Williams, Perry J.; Hooten, Mevin B.; Womble, Jamie N.; Bower, Michael R.

    2017-01-01

    Species distribution and abundance are critical population characteristics for efficient management, conservation, and ecological insight. Point process models are a powerful tool for modelling distribution and abundance, and can incorporate many data types, including count data, presence-absence data, and presence-only data. Aerial photographic images are a natural tool for collecting data to fit point process models, but aerial images do not always capture all animals that are present at a site. Methods for estimating detection probability for aerial surveys usually include collecting auxiliary data to estimate the proportion of time animals are available to be detected. We developed an approach for fitting point process models using an N-mixture model framework to estimate detection probability for aerial occupancy and abundance surveys. Our method uses multiple aerial images taken of animals at the same spatial location to provide temporal replication of sample sites. The intersection of the images provides multiple counts of individuals at different times. We examined this approach using both simulated and real data on sea otters (Enhydra lutris kenyoni) in Glacier Bay National Park, southeastern Alaska. Using our proposed methods, we estimated the detection probability of sea otters to be 0.76, the same as for the visual aerial surveys that have been used in the past. Further, simulations demonstrated that our approach is a promising tool for estimating occupancy, abundance, and detection probability from aerial photographic surveys. Our methods can be readily extended to data collected using unmanned aerial vehicles, as technology and regulations permit. The generality of our methods for other aerial surveys depends on how well surveys can be designed to meet the assumptions of N-mixture models.
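    The N-mixture likelihood underlying this framework marginalizes the latent site abundance N over a Poisson prior, with each replicate image contributing a binomial detection term. A direct sketch (small counts only; a real implementation would work in log space):

```python
from math import comb, exp, factorial, log

def nmix_loglik(counts_by_site, lam, p, n_max=100):
    # N-mixture log-likelihood: latent abundance N ~ Poisson(lam) per site,
    # each replicate image detects y ~ Binomial(N, p) animals.
    ll = 0.0
    for counts in counts_by_site:
        site_like = 0.0
        for N in range(max(counts), n_max + 1):
            pois = exp(-lam) * lam**N / factorial(N)
            binom = 1.0
            for y in counts:
                binom *= comb(N, y) * p**y * (1.0 - p)**(N - y)
            site_like += pois * binom
        ll += log(site_like)
    return ll
```

A useful sanity check: with a single replicate, the marginal count is Poisson-thinned, i.e. y ~ Poisson(lam * p), so the likelihood has a closed form to compare against.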

  9. Wide-field synovial fluid imaging using polarized lens-free on-chip microscopy for point-of-care diagnostics of gout (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Zhang, Yibo; Lee, Seung Yoon; Zhang, Yun; Furst, Daniel; Fitzgerald, John; Ozcan, Aydogan

    2016-03-01

    Gout and pseudogout are forms of crystal arthropathy caused by monosodium urate (MSU) and calcium pyrophosphate dihydrate (CPPD) crystals in the joint, respectively, that can result in painful joints. Detecting the uniquely shaped, birefringent MSU/CPPD crystals in a synovial fluid sample using a compensated polarizing microscope has been the gold standard for diagnosis since the 1960s. However, this can be time-consuming and inaccurate, especially if there are only a few crystals in the fluid. The high cost and bulkiness of conventional microscopes can also be limiting for point-of-care diagnosis. Lens-free on-chip microscopy based on digital holography routinely achieves high-throughput and high-resolution imaging in a cost-effective and field-portable design. Here we demonstrate, for the first time, polarized lens-free on-chip imaging of MSU and CPPD crystals over a wide field-of-view (FOV ~ 20.5 mm2, i.e., ~20-fold larger than a typical 20X objective-lens FOV) for point-of-care diagnostics of gout and pseudogout. Circularly polarized partially coherent light is used to illuminate the synovial fluid sample on a glass slide, after which a quarter-wave plate and an angle-mismatched linear polarizer are used to analyze the transmitted light. Two lens-free holograms of the MSU/CPPD sample are taken, with the sample rotated by 90°, to rule out any non-birefringent objects within the specimen. A phase-recovery algorithm is also used to improve the reconstruction quality, and digital pseudo-coloring is utilized to match the color and contrast of the lens-free image to that of a gold-standard microscope image, both to ease the examination by a rheumatologist or a laboratory technician and to facilitate computerized analysis.

  10. Analytical reverse time migration: An innovation in imaging of infrastructures using ultrasonic shear waves.

    PubMed

    Asadollahi, Aziz; Khazanovich, Lev

    2018-04-11

    The emergence of ultrasonic dry point contact (DPC) transducers that emit horizontal shear waves has enabled efficient collection of high-quality data in the context of a nondestructive evaluation of concrete structures. This offers an opportunity to improve the quality of evaluation by adapting advanced imaging techniques. Reverse time migration (RTM) is a simulation-based reconstruction technique that offers advantages over conventional methods, such as the synthetic aperture focusing technique. RTM is capable of imaging boundaries and interfaces with steep slopes and the bottom boundaries of inclusions and defects. However, this imaging technique requires a massive amount of memory and its computation cost is high. In this study, both bottlenecks of the RTM are resolved when shear transducers are used for data acquisition. An analytical approach was developed to obtain the source and receiver wavefields needed for imaging using reverse time migration. It is shown that the proposed analytical approach not only eliminates the high memory demand, but also drastically reduces the computation time from days to minutes. Copyright © 2018 Elsevier B.V. All rights reserved.

  11. Gallium 68 PSMA-11 PET/MR Imaging in Patients with Intermediate- or High-Risk Prostate Cancer.

    PubMed

    Park, Sonya Youngju; Zacharias, Claudia; Harrison, Caitlyn; Fan, Richard E; Kunder, Christian; Hatami, Negin; Giesel, Frederik; Ghanouni, Pejman; Daniel, Bruce; Loening, Andreas M; Sonn, Geoffrey A; Iagaru, Andrei

    2018-05-16

    Purpose To report the results of dual-time-point gallium 68 (68Ga) prostate-specific membrane antigen (PSMA)-11 positron emission tomography (PET)/magnetic resonance (MR) imaging prior to prostatectomy in patients with intermediate- or high-risk cancer. Materials and Methods Thirty-three men who underwent conventional imaging as clinically indicated and who were scheduled for radical prostatectomy with pelvic lymph node dissection were recruited for this study. A mean dose of 4.1 mCi ± 0.7 (151.7 MBq ± 25.9) of 68Ga-PSMA-11 was administered. Whole-body images were acquired starting 41-61 minutes after injection by using a GE SIGNA PET/MR imaging unit, followed by an additional pelvic PET/MR imaging acquisition at 87-125 minutes after injection. PET/MR imaging findings were compared with findings at multiparametric MR imaging (including diffusion-weighted imaging, T2-weighted imaging, and dynamic contrast material-enhanced imaging) and were correlated with results of final whole-mount pathologic examination and pelvic nodal dissection to yield sensitivity and specificity. Dual-time-point metabolic parameters (eg, maximum standardized uptake value [SUVmax]) were compared by using a paired t test and were correlated with clinical and histopathologic variables including prostate-specific antigen level, Gleason score, and tumor volume. Results Prostate cancer was seen at 68Ga-PSMA-11 PET in all 33 patients, whereas multiparametric MR imaging depicted Prostate Imaging Reporting and Data System (PI-RADS) 4 or 5 lesions in 26 patients and PI-RADS 3 lesions in four patients. Focal uptake was seen in the pelvic lymph nodes in five patients. Pathologic examination confirmed prostate cancer in all patients, as well as nodal metastasis in three. All patients with normal pelvic nodes at PET/MR imaging had no metastases at pathologic examination. The accumulation of 68Ga-PSMA-11 increased at later acquisition times, with a higher mean SUVmax (15.3 vs 12.3, P < .001). One additional prostate cancer was identified only at delayed imaging. Conclusion This study found that 68Ga-PSMA-11 PET can be used to identify prostate cancer, while MR imaging provides detailed anatomic guidance. Hence, 68Ga-PSMA-11 PET/MR imaging provides valuable diagnostic information and may inform the need for and extent of pelvic node dissection. © RSNA, 2018 Online supplemental material is available for this article.

  12. Implementation of a Real-Time Stacking Algorithm in a Photogrammetric Digital Camera for Uavs

    NASA Astrophysics Data System (ADS)

    Audi, A.; Pierrot-Deseilligny, M.; Meynard, C.; Thom, C.

    2017-08-01

    In recent years, unmanned aerial vehicles (UAVs) have become an interesting tool in aerial photography and photogrammetry activities. In this context, some applications (such as cloudy-sky surveys, narrow-spectral imagery and night-vision imagery) need a long exposure time, where one of the main problems is the motion blur caused by erratic camera movements during image acquisition. This paper describes an automatic real-time stacking algorithm which produces a final composite image of high photogrammetric quality with an equivalent long exposure time, using several images acquired with short exposure times. Our method is inspired by feature-based image registration techniques. The algorithm is implemented on the light-weight IGN camera, which has an IMU sensor and a SoC/FPGA. To obtain the correct parameters for the resampling of images, the presented method accurately estimates the geometrical relation between the first and the Nth image, taking into account the internal parameters and the distortion of the camera. Features are detected in the first image by the FAST detector, then homologous points in the other images are obtained by template matching aided by the IMU sensors. The SoC/FPGA in the camera is used to speed up time-consuming parts of the algorithm, such as feature detection and image resampling, in order to achieve real-time performance, as we want to write only the resulting final image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, a resource usage summary, the resulting processing time, resulting images, as well as block diagrams of the described architecture. The stacked images obtained on real surveys show no visible impairment. Timing results demonstrate that our algorithm can be used in real time, since its processing time is less than the time needed to write an image to the storage device. 
    An interesting by-product of this algorithm is the 3D rotation between poses, estimated by a photogrammetric method, which can be used to recalibrate the gyrometers of the IMU in real time.
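    The stacking step can be illustrated with a minimal sketch, under the simplifying assumption that frame-to-frame motion reduces to integer pixel translations (the paper estimates a full geometric relation from FAST features and IMU data; the function name and test frames here are illustrative):

```python
def stack_aligned(images, shifts):
    """Stack short-exposure frames into one long-exposure equivalent:
    undo each frame's estimated (dy, dx) shift and average the overlap.
    (The real algorithm estimates a full homography from FAST features
    and IMU data; pure translations keep this sketch short.)"""
    h, w = len(images[0]), len(images[0][0])
    out = [[0.0] * w for _ in range(h)]
    count = [[0] * w for _ in range(h)]
    for img, (dy, dx) in zip(images, shifts):
        for y in range(h):
            for x in range(w):
                yy, xx = y - dy, x - dx  # map back to reference frame
                if 0 <= yy < h and 0 <= xx < w:
                    out[yy][xx] += img[y][x]
                    count[yy][xx] += 1
    return [[out[y][x] / count[y][x] if count[y][x] else 0.0
             for x in range(w)] for y in range(h)]

# A bright column that drifted one pixel to the left in the second frame:
frame1 = [[0, 10, 0], [0, 10, 0], [0, 10, 0]]
frame2 = [[10, 0, 0], [10, 0, 0], [10, 0, 0]]
print(stack_aligned([frame1, frame2], [(0, 0), (0, -1)])[1])  # [0.0, 10.0, 0.0]
```

    After registration the two frames reinforce the same column, which is the motion-blur-free long-exposure equivalent the abstract describes.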

  13. A Fast Implementation of the ISODATA Clustering Algorithm

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.; Netanyahu, Nathan S.; LeMoigne, Jacqueline

    2005-01-01

    Clustering is central to many image processing and remote sensing applications. ISODATA is one of the most popular and widely used clustering methods in geoscience applications, but it can run slowly, particularly with large data sets. We present a more efficient approach to ISODATA clustering, which achieves better running times by storing the points in a kd-tree and through a modification of the way in which the algorithm estimates the dispersion of each cluster. We also present an approximate version of the algorithm which allows the user to further improve the running time, at the expense of lower fidelity in computing the nearest cluster center to each point. We provide both theoretical and empirical justification that our modified approach produces clusterings that are very similar to those produced by the standard ISODATA approach. We also provide empirical studies on both synthetic data and remotely sensed Landsat and MODIS images that show that our approach has significantly lower running times.
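    The core ISODATA loop the abstract refers to can be sketched as follows; this is a hedged, minimal illustration (assignment, center update, merging of close centers) that omits the kd-tree acceleration, the dispersion-based split step, and the approximation scheme that are the paper's actual contributions:

```python
import math

def isodata_sketch(points, centers, merge_dist=1.0, iters=10):
    """Minimal ISODATA-style clustering loop: assign points to the
    nearest center, move centers to cluster means, merge close centers.
    Brute-force version; the paper accelerates this with a kd-tree."""
    centers = [tuple(c) for c in centers]
    for _ in range(iters):
        # Assignment: nearest center for each point.
        clusters = {i: [] for i in range(len(centers))}
        for p in points:
            i = min(range(len(centers)),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
            clusters[i].append(p)
        # Update: move each center to the mean of its cluster.
        new_centers = []
        for i in range(len(centers)):
            pts = clusters[i]
            if pts:
                new_centers.append(tuple(sum(c) / len(pts) for c in zip(*pts)))
            else:
                new_centers.append(centers[i])
        # Merge: collapse pairs of centers closer than merge_dist.
        merged = []
        for c in new_centers:
            if all(math.dist(c, m) >= merge_dist for m in merged):
                merged.append(c)
        centers = merged
    return centers

pts = [(0, 0), (0.2, 0.1), (5, 5), (5.1, 4.9)]
print(isodata_sketch(pts, [(0, 0), (0.1, 0.1), (5, 5)]))  # two clusters remain
```

    Starting from three centers, the two near-duplicate centers merge and the loop settles on one center per point cloud.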

  14. A Fast Implementation of the Isodata Clustering Algorithm

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Le Moigne, Jacqueline; Mount, David M.; Netanyahu, Nathan S.

    2007-01-01

    Clustering is central to many image processing and remote sensing applications. ISODATA is one of the most popular and widely used clustering methods in geoscience applications, but it can run slowly, particularly with large data sets. We present a more efficient approach to ISODATA clustering, which achieves better running times by storing the points in a kd-tree and through a modification of the way in which the algorithm estimates the dispersion of each cluster. We also present an approximate version of the algorithm which allows the user to further improve the running time, at the expense of lower fidelity in computing the nearest cluster center to each point. We provide both theoretical and empirical justification that our modified approach produces clusterings that are very similar to those produced by the standard ISODATA approach. We also provide empirical studies on both synthetic data and remotely sensed Landsat and MODIS images that show that our approach has significantly lower running times.

  15. The use of short and wide x-ray pulses for time-of-flight x-ray Compton Scatter Imaging in cargo security

    NASA Astrophysics Data System (ADS)

    Calvert, Nick; Betcke, Marta M.; Cresswell, John R.; Deacon, Alick N.; Gleeson, Anthony J.; Judson, Daniel S.; Mason, Peter; McIntosh, Peter A.; Morton, Edward J.; Nolan, Paul J.; Ollier, James; Procter, Mark G.; Speller, Robert D.

    2015-05-01

    Using a short pulse width x-ray source and measuring the time-of-flight of photons that scatter from an object under inspection allows for the point of interaction to be determined, and a profile of the object to be sampled along the path of the beam. A three dimensional image can be formed by interrogating the entire object. Using high energy x rays enables the inspection of cargo containers with steel walls, in the search for concealed items. A longer pulse width x-ray source can also be used with deconvolution techniques to determine the points of interaction. We present time-of-flight results from both short (picosecond) width and long (hundreds of nanoseconds) width x-ray sources, and show that the position of scatter can be localised with a resolution of 2 ns, equivalent to 30 cm, for a 3 cm thick plastic test object.
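    The quoted 2 ns / 30 cm equivalence follows from the round trip: the photon travels to the scatter point and back, so the depth along the beam is z = c·Δt/2. A minimal check (function name is illustrative):

```python
C = 0.299792458  # speed of light, metres per nanosecond

def scatter_depth_m(round_trip_ns):
    """Depth of the scatter point along the beam from the round-trip
    time of flight: the photon travels to the point and back, hence /2."""
    return C * round_trip_ns / 2.0

# A 2 ns timing resolution therefore maps to ~30 cm in position.
print(round(scatter_depth_m(2.0), 3))  # 0.3
```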

  16. On Gamma Ray Instrument On-Board Data Processing Real-Time Computational Algorithm for Cosmic Ray Rejection

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Hunter, Stanley D.; Hanu, Andrei R.; Sheets, Teresa B.

    2016-01-01

    Richard O. Duda and Peter E. Hart of Stanford Research Institute described in [1] a recurring problem in computer image processing: the detection of straight lines in digitized images. The problem is to detect the presence of groups of collinear or almost collinear figure points. It is clear that the problem can be solved to any desired degree of accuracy by testing the lines formed by all pairs of points. However, the computation required for an image of n = N×M points is approximately proportional to n², i.e., O(n²), becoming prohibitive for large images or when the data processing cadence time is in milliseconds. Rosenfeld in [2] described an ingenious method due to Hough [3] for replacing the original problem of finding collinear points by a mathematically equivalent problem of finding concurrent lines. This method involves transforming each of the figure points into a straight line in a parameter space. Hough chose to use the familiar slope-intercept parameters, and thus his parameter space was the two-dimensional slope-intercept plane. A parallel Hough transform running on multi-core processors was elaborated in [4]. There are many other proposed methods for solving similar problems, such as the sampling-up-the-ramp algorithm (SUTR) [5] and algorithms involving artificial swarm intelligence techniques [6]. However, all state-of-the-art algorithms lack real-time performance: they are slow for large images that require a performance cadence of a few dozen milliseconds (50 ms). This problem arises in spaceflight applications such as near real-time analysis of gamma ray measurements contaminated by an overwhelming number of cosmic ray (CR) traces. Future spaceflight instruments such as the Advanced Energetic Pair Telescope (AdEPT) [7-9] for cosmic gamma ray surveys employ large detector readout planes that register multitudes of cosmic ray interference events alongside sparse projections of science gamma ray event traces. 
    The AdEPT science of interest is in the gamma ray events, and the problem is to detect and reject the much more voluminous cosmic ray projections, so that the remaining science data can be telemetered to the ground over the constrained communication link. The state of the art in cosmic ray detection and rejection does not provide an adequate computational solution. This paper presents a novel approach to AdEPT on-board data processing, which is burdened by the cosmic ray detection bottleneck. It introduces the data processing object, demonstrates object segmentation and distribution for processing among many processing elements (PEs), and presents a solution algorithm for the processing bottleneck, the CR-Algorithm. The algorithm is based on the a priori knowledge that a CR pierces the entire instrument pressure vessel. This phenomenon is also the basis for a straightforward CR simulator, allowing performance testing of the CR-Algorithm. Parallel processing of the readout image's 2(N+M) − 4 peripheral voxels detects all CRs, resulting in O(n) computational complexity. The algorithm's near real-time performance makes AdEPT-class spaceflight instruments feasible.
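    The slope-intercept Hough transform described above maps each figure point (x, y) to the line b = y − m·x in parameter space, so collinear points all vote for the same (m, b) cell. A toy sketch (the discretization of the slope axis and the rounding of intercepts are my choices, not the paper's):

```python
from collections import Counter

def hough_slope_intercept(points, slopes):
    """Each point (x, y) votes, for every candidate slope m, for the
    intercept b = y - m*x. Collinear points pile up in one (m, b) cell."""
    acc = Counter()
    for x, y in points:
        for m in slopes:
            b = round(y - m * x, 3)  # coarse accumulator cell
            acc[(m, b)] += 1
    return acc

pts = [(0, 1), (1, 3), (2, 5), (4, 0)]    # first three lie on y = 2x + 1
slopes = [s / 2 for s in range(-4, 5)]    # candidate slopes -2.0 .. 2.0
(m, b), votes = max(hough_slope_intercept(pts, slopes).items(),
                    key=lambda kv: kv[1])
print(m, b, votes)  # 2.0 1.0 3
```

    The peak cell recovers the line through the three collinear points; the outlier (4, 0) never accumulates more than one vote per cell. (Hough's slope-intercept parameterization is unbounded for vertical lines, which is why later work moved to the (rho, theta) form.)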

  17. [Development of the automatic dental X-ray film processor].

    PubMed

    Bai, J; Chen, H

    1999-07-01

    This paper introduces a multiple-point technique for detecting the density of dental X-ray films. Using this infrared multiple-point detection technique, a single-chip microcomputer control system analyzes the effectiveness of film developing in real time in order to achieve a good image. Based on the new technology, we designed an intelligent automatic dental X-ray film processor.

  18. High-accuracy and real-time 3D positioning, tracking system for medical imaging applications based on 3D digital image correlation

    NASA Astrophysics Data System (ADS)

    Xue, Yuan; Cheng, Teng; Xu, Xiaohai; Gao, Zeren; Li, Qianqian; Liu, Xiaojing; Wang, Xing; Song, Rui; Ju, Xiangyang; Zhang, Qingchuan

    2017-01-01

    This paper presents a system for positioning markers and tracking the pose of a rigid object with 6 degrees of freedom in real time using 3D digital image correlation (DIC), with two examples of medical imaging applications. The traditional DIC method was improved to meet real-time requirements by simplifying the computations of the integer-pixel search. Experiments were carried out, and the results indicated that the new method improved the computational efficiency by about 4-10 times in comparison with the traditional DIC method. The system was aimed at orthognathic surgery navigation, in order to track the maxilla segment after a LeFort I osteotomy. Experiments showed that the noise for a static point was at the level of 10⁻³ mm and the measurement accuracy was 0.009 mm. The system was also demonstrated on skin surface shape evaluation of a hand during finger stretching exercises, which indicated great potential for tracking muscle and skin movements.
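    The integer-pixel search stage of DIC can be sketched as a brute-force patch match; the paper's specific simplification is not given in the abstract, so this is a generic baseline using the sum of squared differences (all names and the tiny test images are illustrative):

```python
def best_integer_shift(ref, img, search=2):
    """Integer-pixel search: slide the reference patch over a small
    window in the target image and keep the shift with the smallest
    sum of squared differences (SSD). Subpixel refinement would follow."""
    h, w = len(ref), len(ref[0])
    best, best_shift = float("inf"), (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ssd, ok = 0, True
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    if not (0 <= yy < len(img) and 0 <= xx < len(img[0])):
                        ok = False
                        break
                    ssd += (ref[y][x] - img[yy][xx]) ** 2
                if not ok:
                    break
            if ok and ssd < best:
                best, best_shift = ssd, (dy, dx)
    return best_shift

ref = [[1, 2], [3, 4]]
img = [[0, 0, 0, 0],
       [0, 1, 2, 0],
       [0, 3, 4, 0],
       [0, 0, 0, 0]]
print(best_integer_shift(ref, img))  # (1, 1)
```

    Speeding up exactly this exhaustive search (e.g. by restricting the window or reusing results between frames) is the kind of simplification that buys the reported 4-10x gain.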

  19. Movies of Finite Deformation within Western North American Plate Boundary Zone

    NASA Astrophysics Data System (ADS)

    Holt, W. E.; Birkes, B.; Richard, G. A.

    2004-12-01

    Animations of finite strain within deforming continental zones can be an important tool for both education and research. We present finite strain models for western North America. We have found that these moving images, which portray plate motions, landform uplift, and subsidence, are highly useful for enabling students to conceptualize the dramatic changes that can occur within plate boundary zones over geologic time. These models use instantaneous rates of strain inferred from both space geodetic observations and Quaternary fault slip rates. Geodetic velocities and Quaternary strain rates are interpolated to define a continuous, instantaneous velocity field for western North America. This velocity field is then used to track topography points and fault locations through time (both backward and forward in time), using small time steps, to produce a 6-million-year animation. The strain rate solution is updated at each time step, accounting for changes in the boundary conditions of plate motion and changes in fault orientation. Assuming zero volume change, Airy isostasy, and a fixed ratio of erosion rate to tectonic uplift rate, the topography is also calculated as a function of time. The animations provide interesting moving images of the transform boundary, highlighting ongoing extension and subsidence, convergence and uplift, and large translations taking place within the strike-slip regime. Moving images of the strain components, uplift volume through time, and inferred erosion volume through time have also been produced. These animations are an excellent demonstration for education purposes and also hold potential as an important research tool, enabling the quantification of finite rotations of fault blocks, potential erosion volume, uplift volume, and the influence of climate on these parameters. The models, however, point to numerous shortcomings of using constraints from instantaneous calculations to provide insight into time evolution and reconstruction models. 
    More rigorous calculations are needed to account for changes in dynamics (body forces) through time and the resultant changes in fault behavior and crustal rheology.
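    The point-tracking step (advecting material points through a velocity field with small time steps) can be sketched with a forward Euler integrator; this is only an illustration of the integration scheme, with a toy shear field in arbitrary units, not the paper's interpolated geodetic field:

```python
def advect(points, velocity, dt, steps):
    """Track material points through a velocity field with small time
    steps (forward Euler). The finite-strain models additionally update
    the velocity (strain-rate) solution at each step."""
    pts = [tuple(p) for p in points]
    for _ in range(steps):
        new = []
        for x, y in pts:
            vx, vy = velocity(x, y)
            new.append((x + vx * dt, y + vy * dt))
        pts = new
    return pts

# Toy field: simple shear, vx proportional to y (units arbitrary).
shear = lambda x, y: (0.1 * y, 0.0)
out = advect([(0.0, 1.0), (0.0, 2.0)], shear, dt=1.0, steps=10)
print([(round(x, 3), round(y, 3)) for x, y in out])  # [(1.0, 1.0), (2.0, 2.0)]
```

    Running the same loop with a negated velocity field gives the backward-in-time tracking the abstract mentions.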

  20. Pointing History Engine for the Spitzer Space Telescope

    NASA Technical Reports Server (NTRS)

    Bayard, David; Ahmed, Asif; Brugarolas, Paul

    2007-01-01

    The Pointing History Engine (PHE) is a computer program that provides mathematical transformations needed to reconstruct, from downlinked telemetry data, the attitude of the Spitzer Space Telescope (formerly known as the Space Infrared Telescope Facility) as a function of time. The PHE also serves as an example for the development of similar pointing reconstruction software for future space telescopes. The transformations implemented in the PHE take account of the unique geometry of the Spitzer telescope-pointing chain, including all data on the relative alignments of components, and all information available from attitude-determination instruments. The PHE makes it possible to coordinate attitude data with observational data acquired at the same time, so that any observed astronomical object can be located for future reference and re-observation. The PHE is implemented as a subroutine used in conjunction with telemetry-formatting services of the Mission Image Processing Laboratory of NASA's Jet Propulsion Laboratory to generate the Boresight Pointing History File (BPHF). The BPHF is an archival database designed to serve as Spitzer's primary astronomical reference documenting where the telescope was pointed at any time during its mission.

  1. A new method for mapping multidimensional data to lower dimensions

    NASA Technical Reports Server (NTRS)

    Gowda, K. C.

    1983-01-01

    A multispectral mapping method is proposed which is based on the new concept of BEND (Bidimensional Effective Normalised Difference). The method, which involves taking one sample point at a time and finding the interrelationships between its features, is found to be very economical in terms of storage and processing time. It has good dimensionality reduction and clustering properties, and is highly suitable for computer analysis of large amounts of data. The transformed values obtained by this procedure are suitable either for a planar 2-space mapping of geological sample points or for making grayscale and color images of geo-terrains. A few examples are given to demonstrate the efficacy of the proposed procedure.

  2. Streak camera based SLR receiver for two color atmospheric measurements

    NASA Technical Reports Server (NTRS)

    Varghese, Thomas K.; Clarke, Christopher; Oldham, Thomas; Selden, Michael

    1993-01-01

    To realize accurate two-color differential measurements, an image digitizing system with variable spatial resolution was designed, built, and integrated with a photon-counting picosecond streak camera, yielding a temporal scan resolution better than 300 femtoseconds/pixel. The streak camera is configured to operate with 3 spatial channels; two of these support green (532 nm) and UV (355 nm), while the third accommodates reference pulses (764 nm) for real-time calibration. Critical parameters affecting differential timing accuracy, such as pulse width and shape, number of received photons, streak camera/imaging system nonlinearities, dynamic range, and noise characteristics, were investigated to optimize the system for accurate differential delay measurements. The streak camera output image consists of three image fields; each field is 1024 pixels along the time axis and 16 pixels across the spatial axis. Each of the image fields may be independently positioned along the spatial axis. Two of the image fields are used for the two wavelengths used in the experiment; the third window measures the temporal separation of a pair of diode laser pulses which verify the streak camera sweep speed for each data frame. The sum of the 16 pixel intensities across each of the 1024 temporal positions for the three data windows is used to extract the three waveforms. The waveform data is processed using an iterative three-point running average filter (10 to 30 iterations are used) to remove high-frequency structure. The pulse pair separations are determined using half-maximum and centroid-type analyses. Rigorous experimental verification has demonstrated that this simplified process provides the best measurement accuracy. To calibrate the receiver system sweep, two laser pulses with precisely known temporal separation are scanned along the full length of the sweep axis. The experimental measurements are then modeled using polynomial regression to obtain a best fit to the data. 
    Data aggregation using the normal-point approach has provided accurate data fitting and is found to be much more convenient than using the full-rate single-shot data. The systematic errors from this model have been found to be less than 3 ps for normal points.
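    The iterative three-point running average and the centroid analysis mentioned above are simple to state; a minimal sketch on one synthetic pulse (in the real system the pulse-pair separation is the difference of two such centroids, one per window):

```python
def smooth3(signal, iterations=10):
    """Iterative three-point running average (endpoints kept fixed),
    as used to remove high-frequency structure from the waveforms."""
    s = list(signal)
    for _ in range(iterations):
        s = [s[0]] + [(s[i - 1] + s[i] + s[i + 1]) / 3.0
                      for i in range(1, len(s) - 1)] + [s[-1]]
    return s

def centroid(signal):
    """Centroid (intensity-weighted mean position) of a pulse."""
    total = sum(signal)
    return sum(i * v for i, v in enumerate(signal)) / total

pulse = [0, 0, 1, 4, 9, 4, 1, 0, 0]  # synthetic pulse centred on pixel 4
print(round(centroid(smooth3(pulse, 10)), 2))  # 4.0
```

    Because the filter is symmetric, repeated smoothing broadens the pulse but leaves the centroid of a symmetric pulse unmoved, which is why the centroid estimate is robust to the number of iterations.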

  3. Use of a Fluorometric Imaging Plate Reader in high-throughput screening

    NASA Astrophysics Data System (ADS)

    Groebe, Duncan R.; Gopalakrishnan, Sujatha; Hahn, Holly; Warrior, Usha; Traphagen, Linda; Burns, David J.

    1999-04-01

    High-throughput screening (HTS) efforts at Abbott Laboratories have been greatly facilitated by the use of a Fluorometric Imaging Plate Reader (FLIPR). The FLIPR consists of an incubated cabinet with an integrated 96-channel pipettor and fluorometer. An argon laser is used to excite fluorophores in a 96-well microtiter plate, and the emitted fluorescence is imaged by a cooled CCD camera. The image data is downloaded from the camera and processed to average the signal from each well of the microtiter plate for each time point. The data is presented in real time on the computer screen, facilitating interpretation and trouble-shooting. In addition to fluorescence, the camera can also detect luminescence from firefly luciferase.

  4. Enhanced fluorescence microscope and its application

    NASA Astrophysics Data System (ADS)

    Wang, Susheng; Li, Qin; Yu, Xin

    1997-12-01

    A high-gain fluorescence microscope is developed to meet the needs of medical and biological research. With the help of an image intensifier with a luminance gain of 4×10⁴, the sensitivity of the system can reach the 10⁻⁶ lx level, 10⁴ times higher than an ordinary fluorescence microscope. Ultra-weak fluorescence images can be detected by it. The concentration of fluorescent label and the emitted light intensity of the system are decreased as much as possible; therefore, the natural environment of the detected cell can be kept. The CCD image acquisition set-up, controlled by computer, obtains quantitative data for each point according to the gray scale. The relation between luminous intensity and the output of the CCD is obtained by using wide-range weak-light photometry. So the system not only shows the image of the ultra-weak fluorescence distribution but also gives the fluorescence intensity at each point. Using this system, we obtained images of the distribution of hypocrellin A (HA) in HeLa cells, and images of HeLa cells being protected by the antioxidant reagents Vit. E, SF and BHT. The images show that the digitized ultra-sensitive fluorescence microscope is a useful tool for medical and biological research.

  5. [Image processing applying in analysis of motion features of cultured cardiac myocyte in rat].

    PubMed

    Teng, Qizhi; He, Xiaohai; Luo, Daisheng; Wang, Zhengrong; Zhou, Beiyi; Yuan, Zhirun; Tao, Dachang

    2007-02-01

    The study of the mechanisms of medicine actions, by quantitative analysis of cultured cardiac myocytes, is one of the cutting-edge research areas in myocyte dynamics and molecular biology. The ability of cardiac myocytes to beat autonomously, without external stimulation, makes this research meaningful. Research on the morphology and motion of cardiac myocytes using image analysis can reveal the fundamental mechanisms of medicine actions, increase the accuracy of medicine screening, and help design the optimal formula of a medicine for the best medical treatment. A system of hardware and software has been built with a complete set of functions including living cardiac myocyte image acquisition, image processing, motion image analysis, and image recognition. In this paper, theories and approaches are introduced for analyzing images of living cardiac myocyte motion and implementing quantitative analysis of cardiac myocyte features. A motion estimation algorithm is used for motion vector detection at particular points and for amplitude and frequency detection of a cardiac myocyte. The beating of cardiac myocytes is sometimes very small; in such cases, it is difficult to detect the motion vectors from the particular points in a time sequence of images. For this reason, image correlation is employed to detect the beating frequencies. An active contour algorithm based on an energy function is proposed to approximate the boundary and detect changes in the edge of the myocyte.
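    The correlation-based frequency detection can be illustrated with a toy sketch: once each frame is reduced to a single similarity value against a reference frame, the beating frequency follows from the spacing of the peaks in that trace. The trace below is synthetic, and the peak-spacing estimator is a generic choice, not the paper's exact method:

```python
import math

def beat_frequency(similarity, frame_rate):
    """Estimate beating frequency from a per-frame similarity trace
    (e.g. correlation of each frame with a reference frame), using
    the mean spacing between local maxima of the trace."""
    peaks = [i for i in range(1, len(similarity) - 1)
             if similarity[i - 1] < similarity[i] > similarity[i + 1]]
    if len(peaks) < 2:
        return 0.0
    period_frames = (peaks[-1] - peaks[0]) / (len(peaks) - 1)
    return frame_rate / period_frames

# Synthetic trace: a cell beating at 2 Hz sampled at 30 frames/s for 3 s.
trace = [math.cos(2 * math.pi * 2 * t / 30) for t in range(90)]
print(beat_frequency(trace, 30))  # 2.0
```

    This is exactly why correlation helps when displacements are sub-pixel: the similarity trace still oscillates at the beating frequency even when individual motion vectors are too small to measure.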

  6. Imaging the eye fundus with real-time en-face spectral domain optical coherence tomography

    PubMed Central

    Bradu, Adrian; Podoleanu, Adrian Gh.

    2014-01-01

    Real-time display of processed en-face spectral domain optical coherence tomography (SD-OCT) images is important for diagnosis. However, due to the many data processing steps required, such as fast Fourier transformation (FFT), data re-sampling, spectral shaping, apodization and zero padding, followed by a software cut of the acquired 3D volume to produce an en-face slice, conventional high-speed SD-OCT cannot render an en-face OCT image in real time. Recently we demonstrated a Master/Slave (MS)-OCT method that is highly parallelizable, as it provides reflectivity values of points at depth within an A-scan in parallel. This allows direct production of en-face images. In addition, the MS-OCT method does not require data linearization, which further simplifies the processing. The computation in our previous paper was, however, time consuming. In this paper we present an optimized algorithm that can be used to produce en-face MS-OCT images much more quickly. Using such an algorithm we demonstrate around 10 times faster production of sets of en-face OCT images than previously obtained, as well as simultaneous real-time display of up to 4 en-face OCT images of 200 × 200 pixels from the fovea and the optic nerve of a volunteer. We also demonstrate 3D and B-scan OCT images obtained from sets of MS-OCT C-scans, i.e. with no FFT and no intermediate step of generating A-scans. PMID:24761303

  7. Three-dimensional imaging for large LArTPCs

    NASA Astrophysics Data System (ADS)

    Qian, X.; Zhang, C.; Viren, B.; Diwan, M.

    2018-05-01

    High-performance event reconstruction is critical for current and future massive liquid argon time projection chambers (LArTPCs) to realize their full scientific potential. LArTPCs with readout using wire planes provide a limited number of 2D projections. In general, without a pixel-type readout it is challenging to achieve unambiguous 3D event reconstruction. As a remedy, we present a novel 3D imaging method, Wire-Cell, which incorporates the charge and sparsity information in addition to the time and geometry through simple and robust mathematics. The resulting 3D image of ionization density provides an excellent starting point for further reconstruction and enables the true power of 3D tracking calorimetry in LArTPCs.

  8. Cartographic analyses of geographic information available on Google Earth Images

    NASA Astrophysics Data System (ADS)

    Oliveira, J. C.; Ramos, J. R.; Epiphanio, J. C.

    2011-12-01

    The purpose was to evaluate the planimetric accuracy of satellite images available in the Google Earth database. These images cover the vicinity of the Federal University of Viçosa, Minas Gerais, Brazil. The methodology evaluated the geographical information of three groups of images, defined according to the level of detail presented on screen (zoom). These groups of images were labeled Zoom 1000 (a single image for the entire study area), Zoom 100 (a mosaic of 73 images), and Zoom 100 with geometric correction (the same mosaic, to which a geometric correction was applied through control points). For each group of images, cartographic accuracy was measured based on statistical analyses and the parameters of Brazilian law for planimetric mapping. For this evaluation, 22 points were identified in each group of images, and the coordinates of each point were compared with the field coordinates obtained by GPS (Global Positioning System). Table 1 shows results related to accuracy (based on a threshold equal to 0.5 mm × mapping scale) and tendency (abscissa and ordinate) between image and field coordinates. The geometric correction applied to the Zoom 100 group reduced the trends identified earlier, and the statistical tests indicated the usefulness of the data for mapping at a scale of 1/5000 with error less than 0.5 mm × scale. The analyses demonstrated the quality of the cartographic data provided by Google, as well as the possibility of reducing the positioning divergences present in the data. It can be concluded that it is possible to obtain geographic information from the database available on Google Earth; however, the level of detail (zoom) used at the time of viewing and capturing information on the screen influences the cartographic quality of the mapping. 
    Despite the cartographic and thematic potential of the database, it is important to note that both the software and the data distributed by Google Earth are subject to use and distribution policies.
    Table 1 - Planimetric analysis (table contents not preserved in this record)

  9. Analysis of plasmaspheric plumes: CLUSTER and IMAGE observations and numerical simulations

    NASA Technical Reports Server (NTRS)

    Darouzet, Fabien; DeKeyser, Johan; Decreau, Pierrette; Gallagher, Dennis; Pierrard, Viviane; Lemaire, Joseph; Dandouras, Iannis; Matsui, Hiroshi; Dunlop, Malcolm; Andre, Mats

    2005-01-01

    Plasmaspheric plumes have been routinely observed by CLUSTER and IMAGE. The CLUSTER mission provides high time resolution four-point measurements of the plasmasphere near perigee. Total electron density profiles can be derived from the plasma frequency and/or from the spacecraft potential (note that the electron spectrometer is usually not operating inside the plasmasphere); ion velocity is also measured onboard these satellites (but ion density is not reliable because of instrumental limitations). The EUV imager onboard the IMAGE spacecraft provides global images of the plasmasphere with a spatial resolution of 0.1 RE every 10 minutes; such images, acquired near apogee from high above the pole, show the geometry of plasmaspheric plumes, their evolution and motion. We present coordinated observations for 3 plume events and compare CLUSTER in-situ data (panel A) with global images of the plasmasphere obtained from IMAGE (panel B), and with numerical simulations of plume formation based on a model that includes the interchange instability mechanism (panel C). In particular, we study the geometry and orientation of plasmaspheric plumes by using a four-point analysis method, the spatial gradient. We also compare several aspects of their motion as determined by different methods: (i) inner and outer plume boundary velocity calculated from the time delays of this boundary observed by the wave experiment WHISPER on the four spacecraft, (ii) ion velocity derived from the ion spectrometer CIS onboard CLUSTER, (iii) drift velocity measured by the electron drift instrument EDI onboard CLUSTER and (iv) global velocity determined from successive EUV images. These different techniques consistently indicate that plasmaspheric plumes rotate around the Earth, with their foot fully co-rotating, but with their tip rotating more slowly and moving farther out.
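    The boundary-velocity-from-time-delays idea in method (i) reduces, in one dimension, to fitting position against crossing time. A hedged sketch (the four-spacecraft WHISPER analysis works with 3D separations and boundary normals; the 200 km / 10 s numbers are invented for illustration):

```python
def boundary_speed(positions_km, crossing_times_s):
    """Boundary speed along one direction, from timing the boundary's
    passage over several spacecraft: least-squares slope of position
    versus crossing time (a 1D simplification of the 3D timing method)."""
    n = len(positions_km)
    mt = sum(crossing_times_s) / n
    mx = sum(positions_km) / n
    num = sum((t - mt) * (x - mx)
              for t, x in zip(crossing_times_s, positions_km))
    den = sum((t - mt) ** 2 for t in crossing_times_s)
    return num / den  # km/s

# Four spacecraft 200 km apart along the boundary normal, 10 s delays:
print(boundary_speed([0.0, 200.0, 400.0, 600.0], [0.0, 10.0, 20.0, 30.0]))  # 20.0
```

    Using all four crossings in a least-squares fit, rather than a single pair, is what makes the multi-spacecraft timing estimate robust to timing noise on any one spacecraft.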

  10. Surface regions of illusory images are detected with a slower processing speed than those of luminance-defined images.

    PubMed

    Mihaylova, Milena; Manahilov, Velitchko

    2010-11-24

    Research has shown that the processing time for discriminating illusory contours is longer than for real contours. Little is known, however, about whether the visual processes associated with detecting regions of illusory surfaces are also slower than those responsible for detecting luminance-defined images. Using a speed-accuracy trade-off (SAT) procedure, we measured accuracy as a function of processing time for detecting illusory Kanizsa-type and luminance-defined squares embedded in 2D static luminance noise. The data revealed that the illusory images were detected at a slower processing speed than the real images, whereas the points in time at which accuracy departed from chance did not differ significantly between the two stimuli. The classification images for detecting illusory and real squares showed that observers employed similar detection strategies, using surface regions of the real and illusory squares. The lack of significant differences between the x-intercepts of the SAT functions for illusory and luminance-modulated stimuli suggests that the detection of surface regions of both images could be based on activation of a single mechanism (the dorsal magnocellular visual pathway). The slower speed for detecting illusory images as compared to luminance-defined images could be attributed to slower processes of filling-in of regions of illusory images within the dorsal pathway.
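
    SAT data of this kind are conventionally summarized by fitting a shifted exponential, d'(t) = λ(1 − exp(−β(t − δ))) for t > δ, where δ is the x-intercept (the point where accuracy departs from chance) and β the rate, i.e. processing speed. The sketch below fits this model to hypothetical, noiseless data with SciPy; the parameter values are assumptions for illustration, not the study's estimates.

```python
import numpy as np
from scipy.optimize import curve_fit

def sat_curve(t, lam, beta, delta):
    """Shifted-exponential SAT model: d' is at chance (0) until the
    intercept delta, then rises at rate beta toward asymptote lam."""
    return np.where(t > delta, lam * (1.0 - np.exp(-beta * (t - delta))), 0.0)

# Hypothetical processing-time lags (s) and noiseless d' values.
t = np.linspace(0.1, 2.0, 12)
d_prime = sat_curve(t, 2.5, 4.0, 0.3)

params, _ = curve_fit(sat_curve, t, d_prime, p0=[2.0, 3.0, 0.2])
lam, beta, delta = params
```

    Comparing conditions then amounts to comparing fitted parameters: the abstract's finding corresponds to a smaller β for illusory images with no significant difference in δ.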

  11. High Resolution Imaging of the Sun with CORONAS-1

    NASA Technical Reports Server (NTRS)

    Karovska, Margarita

    1998-01-01

    We applied several image restoration and enhancement techniques to CORONAS-I images. We characterized the Point Spread Function (PSF) using the unique capability of the Blind Iterative Deconvolution (BID) technique, which recovers the actual PSF at a given location and time of observation when only limited a priori information is available on its characteristics. We also applied an image enhancement technique to extract the small-scale structure embedded in bright large-scale structures on the disk and on the limb. The results demonstrate the capability of image post-processing to substantially increase the yield of space observations by improving the resolution and reducing noise in the images.
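
    One common way to implement blind iterative deconvolution is to alternate multiplicative Richardson-Lucy-style updates between the object and PSF estimates. The sketch below is a minimal illustration of that idea under simplifying assumptions (periodic boundaries, PSF on the image grid), not the restoration code actually used for CORONAS-I.

```python
import numpy as np

def fft_conv(a, b):
    """Circular 2D convolution via FFT (periodic boundaries)."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def fft_corr(a, b):
    """Circular 2D correlation of a with b."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))

def blind_deconvolve(image, n_iter=20, eps=1e-12):
    """Blind iterative deconvolution sketch: alternate multiplicative
    Richardson-Lucy-style updates of the object and PSF estimates.
    Clipping guards against tiny negative values from FFT round-off."""
    obj = np.full_like(image, image.mean(), dtype=float)
    psf = np.full_like(image, 1.0 / image.size, dtype=float)
    for _ in range(n_iter):
        ratio = image / (fft_conv(obj, psf) + eps)   # data / model
        obj = np.clip(obj * fft_corr(ratio, psf), 0.0, None)
        ratio = image / (fft_conv(obj, psf) + eps)
        psf = np.clip(psf * fft_corr(ratio, obj), 0.0, None)
        psf /= psf.sum() + eps                       # keep PSF normalized
    return obj, psf
```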

  12. Noise removal in extended depth of field microscope images through nonlinear signal processing.

    PubMed

    Zahreddine, Ramzi N; Cormack, Robert H; Cogswell, Carol J

    2013-04-01

    Extended depth of field (EDF) microscopy, achieved through computational optics, allows for real-time 3D imaging of live cell dynamics. EDF is achieved through a combination of point spread function engineering and digital image processing. A linear Wiener filter has been conventionally used to deconvolve the image, but it suffers from high frequency noise amplification and processing artifacts. A nonlinear processing scheme is proposed which extends the depth of field while minimizing background noise. The nonlinear filter is generated via a training algorithm and an iterative optimizer. Biological microscope images processed with the nonlinear filter show a significant improvement in image quality and signal-to-noise ratio over the conventional linear filter.
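
    The conventional linear Wiener filter that this work improves upon can be sketched in a few lines: in the frequency domain, the restoration gain is conj(H)/(|H|² + NSR), where H is the PSF's transfer function and the noise-to-signal ratio term damps the high-frequency gain responsible for noise amplification. A minimal sketch (the trained nonlinear filter of the paper is not reproduced here; the `nsr` value is an assumption):

```python
import numpy as np

def wiener_deconvolve(image, psf, nsr=0.01):
    """Linear Wiener deconvolution in the frequency domain.
    `psf` must have the same shape as `image` and be centered;
    `nsr` is the assumed noise-to-signal power ratio."""
    H = np.fft.fft2(np.fft.ifftshift(psf))        # PSF transfer function
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)       # Wiener gain
    return np.real(np.fft.ifft2(np.fft.fft2(image) * G))
```

    Raising `nsr` trades residual blur for noise suppression, which is exactly the compromise the nonlinear scheme in the paper aims to escape.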

  13. A method of fast mosaic for massive UAV images

    NASA Astrophysics Data System (ADS)

    Xiang, Ren; Sun, Min; Jiang, Cheng; Liu, Lei; Zheng, Hui; Li, Xiaodong

    2014-11-01

    With the development of UAV technology, UAVs are widely used in fields such as agriculture, forest protection, mineral exploration, natural disaster management and surveillance of public security events. In contrast to traditional manned aerial remote sensing platforms, UAVs are cheaper and more flexible to use, so users can obtain massive image data with them. Processing this image data, however, takes a long time: for example, Pix4UAV needs approximately 10 hours to process 1000 images on a high-performance PC. Disaster management and many other fields require a quick response, which is hard to achieve with massive image data. Aiming at the disadvantages of high time consumption and manual interaction, this article proposes a solution for fast UAV image stitching. GPS and POS data are used to pre-process the original UAV images; belts, and the relations between belts and images, are recognized automatically by the program, while useless images are discarded at the same time. This speeds up the search for match points between images. The Levenberg-Marquardt algorithm is improved so that parallel computing can be applied to shorten the time of global optimization notably. Besides the traditional mosaic result, the system can also generate a superoverlay result for Google Earth, which provides a fast and easy way to display the result data. To verify the feasibility of this method, a fast mosaic system for massive UAV images was developed; it is fully automated, and no manual interaction is needed once the original images and GPS data are provided. A test using 800 images of the Kelan River in Xinjiang Province shows that this system reduces time consumption by 35%-50% compared with traditional methods and greatly increases the response speed of UAV image processing.
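
    The belt-recognition step can be sketched as splitting the ordered GPS track wherever the heading between consecutive image positions turns sharply, so that only images in the same or adjacent belts need to be considered for match-point search. This is an illustrative sketch, not the paper's algorithm; the turn threshold and the planar small-area approximation are assumptions.

```python
import math

def group_into_belts(gps_track, turn_threshold_deg=60.0):
    """Split an ordered list of (lat, lon) image positions into flight
    belts (strips) by cutting wherever the heading between consecutive
    fixes turns more than `turn_threshold_deg`."""
    def heading(p, q):
        dy = q[0] - p[0]
        dx = (q[1] - p[1]) * math.cos(math.radians(p[0]))
        return math.degrees(math.atan2(dy, dx))

    belts = [[0]]
    prev_h = None
    for i in range(1, len(gps_track)):
        h = heading(gps_track[i - 1], gps_track[i])
        if prev_h is not None:
            turn = abs((h - prev_h + 180) % 360 - 180)  # wrapped angle
            if turn > turn_threshold_deg:
                belts.append([])                        # start a new belt
        belts[-1].append(i)
        prev_h = h
    return belts
```

    Restricting candidate image pairs to the same belt and neighbouring belts is what cuts the match-point search from quadratic in the number of images to roughly linear.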

  14. Magnetic resonance separation imaging using a divided inversion recovery technique (DIRT).

    PubMed

    Goldfarb, James W

    2010-04-01

    The divided inversion recovery technique (DIRT) is an MRI separation method based on tissue T1 relaxation differences. When tissue T1 relaxation times are longer than the time between inversion pulses in a segmented inversion recovery pulse sequence, the longitudinal magnetization does not pass through the null point, so prior to additional inversion pulses it may have an opposite polarity. Spatial displacement of tissues in inversion recovery balanced steady-state free-precession imaging has been shown to be due to this magnetization phase change resulting from incomplete magnetization recovery. In this paper, it is shown how this phase change can be used to provide image separation. A pulse sequence parameter, the time between inversion pulses (T180), can be adjusted to provide water-fat or fluid separation. Example water-fat and fluid separation images of the head, heart, and abdomen are presented. The water-fat separation performance was investigated by comparing image intensities in short-axis DIRT images of the heart. Fat, blood, and fluid signals were suppressed to the background noise level. Additionally, the separation performance was not affected by main magnetic field inhomogeneities.
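
    The polarity mechanism can be illustrated with a short simulation of the longitudinal magnetization Mz under repeated inversion pulses. This is a simplified sketch: readout perturbations are ignored, and the T1 and T180 values (in ms) are illustrative assumptions, not the paper's protocol.

```python
import math

def mz_before_next_inversion(t1, t180, n_inversions=1, m0=1.0):
    """Longitudinal magnetization just before the next inversion pulse,
    after `n_inversions` inversions spaced t180 apart, starting from
    thermal equilibrium (readout pulses not modelled)."""
    mz = m0
    for _ in range(n_inversions):
        mz = -mz                                     # inversion pulse
        mz = m0 + (mz - m0) * math.exp(-t180 / t1)   # T1 recovery over t180
    return mz

# A short-T1 tissue (e.g. fat) recovers past the null point within T180;
# a long-T1 tissue (e.g. fluid) does not, so its magnetization still has
# the opposite polarity when the next inversion arrives.
print(mz_before_next_inversion(t1=260, t180=600) > 0)    # True
print(mz_before_next_inversion(t1=2000, t180=600) > 0)   # False
```

    The sign difference is what DIRT exploits: choosing T180 places the polarity boundary between the tissue classes to be separated.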

  15. Strategies of statistical windows in PET image reconstruction to improve the user’s real time experience

    NASA Astrophysics Data System (ADS)

    Moliner, L.; Correcher, C.; Gimenez-Alventosa, V.; Ilisie, V.; Alvarez, J.; Sanchez, S.; Rodríguez-Alvarez, M. J.

    2017-11-01

    Nowadays, the computational power of modern computers together with state-of-the-art reconstruction algorithms makes it possible to obtain Positron Emission Tomography (PET) images in practically real time. These facts open the door to new applications such as radiopharmaceutical tracking inside the body or the use of PET for image-guided procedures, such as biopsy interventions, among others. This work is a proof of concept that aims to improve the user experience with real-time PET images. Fixed, incremental, overlapping, sliding and hybrid windows are the different statistical combinations of data blocks used to generate intermediate images in order to follow the path of the activity in the Field Of View (FOV). To evaluate these different combinations, a point source is placed in a dedicated breast PET device and moved along the FOV. These acquisitions are reconstructed according to the different statistical windows, with the sliding and hybrid windows yielding the smoothest transition between positions in the reconstructed images.
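
    The window strategies can be made concrete as index sets over the stream of acquired data blocks: each intermediate image is reconstructed from one such set. The definitions below are a sketch under assumed parameters (the paper's exact block sizes, and its hybrid scheme, are not specified here).

```python
def window_blocks(blocks, mode, size=4, step=2):
    """Return, for each intermediate image, the list of data-block
    indices combined under a given statistical-window strategy.
    `blocks` is the number of blocks acquired so far."""
    frames = []
    if mode == "fixed":          # disjoint groups of `size` blocks
        for start in range(0, blocks - size + 1, size):
            frames.append(list(range(start, start + size)))
    elif mode == "incremental":  # all data acquired so far
        for end in range(1, blocks + 1):
            frames.append(list(range(end)))
    elif mode == "sliding":      # last `size` blocks, advancing by one
        for end in range(size, blocks + 1):
            frames.append(list(range(end - size, end)))
    elif mode == "overlapping":  # groups of `size`, advancing by `step`
        for start in range(0, blocks - size + 1, step):
            frames.append(list(range(start, start + size)))
    return frames
```

    A sliding window reuses most of the previous frame's data, which is why consecutive reconstructions of a moving source transition smoothly rather than jumping between disjoint statistics.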

  16. Contrast in Terahertz Images of Archival Documents—Part II: Influence of Topographic Features

    NASA Astrophysics Data System (ADS)

    Bardon, Tiphaine; May, Robert K.; Taday, Philip F.; Strlič, Matija

    2017-04-01

    We investigate the potential of terahertz time-domain imaging in reflection mode to reveal archival information in documents in a non-invasive way. In particular, this study explores the parameters and signal processing tools that can be used to produce well-contrasted terahertz images of topographic features commonly found in archival documents, such as indentations left by a writing tool, as well as sieve lines. While the amplitude of the waveforms at a specific time delay can provide the most contrasted and legible images of topographic features on flat paper or parchment sheets, this parameter may not be suitable for documents that have a highly irregular surface, such as water- or fire-damaged documents. For analysis of such documents, cross-correlation of the time-domain signals can instead yield images with good contrast. Analysis of the frequency-domain representation of terahertz waveforms can also provide well-contrasted images of topographic features, with improved spatial resolution when utilising high-frequency content. Finally, we point out some of the limitations of these means of analysis for extracting information relating to topographic features of interest from documents.

  17. Determination of sub-daily glacier uplift and horizontal flow velocity with time-lapse images using ImGRAFT

    NASA Astrophysics Data System (ADS)

    Egli, Pascal; Mankoff, Ken; Mettra, François; Lane, Stuart

    2017-04-01

    This study investigates the application of feature tracking algorithms to the monitoring of glacier uplift. Several publications have confirmed the occurrence of an uplift of the glacier surface in the late morning hours of the mid to late ablation season. This uplift is thought to be caused by high subglacial water pressures at the onset of melt, when sediment deposited overnight blocks the subglacial channels. We use time-lapse images from a camera mounted in front of the glacier tongue of Haut Glacier d'Arolla during August 2016, in combination with a Digital Elevation Model and GPS measurements, to investigate the phenomenon of glacier uplift with the feature tracking toolbox ImGRAFT. The camera position is corrected for all images, and the images are geo-rectified using Ground Control Points visible in every image. Changing lighting conditions due to different sun angles create substantial noise and complicate the image analysis. A small glacier uplift on the order of 5 cm over a time span of 3 hours may be observed on certain days, confirming previous research.
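
    The core of template-based feature tracking of the kind ImGRAFT performs (ImGRAFT itself is a MATLAB toolbox; the Python below is a minimal stand-in, with assumed template and search-window sizes) is normalized cross-correlation of a small template from one frame against a search window in the next frame:

```python
import numpy as np

def track_template(frame_a, frame_b, center, tsize=8, search=5):
    """Track one template from frame_a to frame_b by exhaustive
    normalized cross-correlation over a small search window.
    Returns the (dy, dx) displacement of the best match."""
    cy, cx = center
    t = frame_a[cy - tsize:cy + tsize, cx - tsize:cx + tsize].astype(float)
    t = t - t.mean()                 # mean removal: lighting robustness
    tn = np.linalg.norm(t) + 1e-12
    best, best_d = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            p = frame_b[cy + dy - tsize:cy + dy + tsize,
                        cx + dx - tsize:cx + dx + tsize].astype(float)
            p = p - p.mean()
            score = (t * p).sum() / (tn * (np.linalg.norm(p) + 1e-12))
            if score > best:
                best, best_d = score, (dy, dx)
    return best_d
```

    Mean removal and normalization make the score contrast-invariant, which helps with the changing sun angles noted above, though shadows that move between frames still corrupt the match.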

  18. Fully automatic registration and segmentation of first-pass myocardial perfusion MR image sequences.

    PubMed

    Gupta, Vikas; Hendriks, Emile A; Milles, Julien; van der Geest, Rob J; Jerosch-Herold, Michael; Reiber, Johan H C; Lelieveldt, Boudewijn P F

    2010-11-01

    Derivation of diagnostically relevant parameters from first-pass myocardial perfusion magnetic resonance images involves the tedious and time-consuming manual segmentation of the myocardium in a large number of images. To reduce the manual interaction and expedite the perfusion analysis, we propose an automatic registration and segmentation method for the derivation of perfusion-linked parameters. Complete automation was accomplished by first registering misaligned images using a method based on independent component analysis, and then using the registered data to automatically segment the myocardium with active appearance models. We used 18 perfusion studies (100 images per study) for validation, in which the automatically obtained (AO) contours were compared with expert-drawn contours on the basis of point-to-curve error, Dice index, and relative perfusion upslope in the myocardium. Visual inspection revealed successful segmentation in 15 out of 18 studies. Comparison of the AO contours with expert-drawn contours yielded 2.23 ± 0.53 mm and 0.91 ± 0.02 as point-to-curve error and Dice index, respectively. The average difference between manually and automatically obtained relative upslope parameters was found to be statistically insignificant (P = .37). Moreover, the analysis time per slice was reduced from 20 minutes (manual) to 1.5 minutes (automatic). The robustness and accuracy of the proposed method were demonstrated by the high spatial correspondence and the statistically insignificant difference in perfusion parameters when AO contours were compared with expert-drawn contours.
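
    The two agreement metrics used in the validation have standard definitions and can be sketched directly (a minimal sketch; the study's exact contour sampling is not reproduced):

```python
import numpy as np

def dice_index(mask_a, mask_b):
    """Dice similarity of two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def point_to_curve_error(points, curve):
    """Mean distance from each point on one contour to the nearest
    point on the other contour; both are (n, 2) coordinate arrays."""
    d = np.linalg.norm(points[:, None, :] - curve[None, :, :], axis=2)
    return d.min(axis=1).mean()
```

    A Dice index of 0.91 with a point-to-curve error near the pixel size, as reported above, indicates close agreement between automatic and expert contours.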

  19. Marker Registration Technique for Handwritten Text Marker in Augmented Reality Applications

    NASA Astrophysics Data System (ADS)

    Thanaborvornwiwat, N.; Patanukhom, K.

    2018-04-01

    Marker registration is a fundamental process for estimating camera poses in marker-based Augmented Reality (AR) systems. We developed an AR system that renders corresponding virtual objects on handwritten text markers. This paper presents a new registration method that is robust to low-content text markers, variation of camera poses, and variation of handwriting styles. The proposed method uses Maximally Stable Extremal Regions (MSER) and polygon simplification for feature point extraction. Experiments show that extracting only five feature points per image provides the best registration results. An exhaustive search is used to find the best matching pattern between the feature points in two images. We also compared the performance of the proposed method with that of existing registration methods and found that the proposed method provides better accuracy and time efficiency.
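
    With only five feature points per image, exhaustive search over all 5! = 120 correspondences is cheap. The sketch below scores each candidate permutation by comparing scale-normalized inter-point distance matrices; this cost function is a stand-in assumption (the paper's exact matching cost is not given here), chosen because it is invariant to rotation, translation, and uniform scale.

```python
import itertools
import numpy as np

def match_points(pts_a, pts_b):
    """Exhaustively match two small sets of feature points by comparing
    scale-normalized inter-point distance matrices. Returns the index
    permutation of pts_b that best matches the order of pts_a."""
    def dist_matrix(p):
        d = np.linalg.norm(p[:, None] - p[None, :], axis=2)
        return d / (d.sum() + 1e-12)   # normalize out uniform scale
    da = dist_matrix(np.asarray(pts_a, float))
    pb = np.asarray(pts_b, float)
    best, best_perm = np.inf, None
    for perm in itertools.permutations(range(len(pb))):
        err = np.abs(da - dist_matrix(pb[list(perm)])).sum()
        if err < best:
            best, best_perm = err, perm
    return best_perm
```

    Exhaustive search is only feasible because the extraction step is capped at five points; with even ten points per image, 10! permutations would already be prohibitive.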

  20. The One to Multiple Automatic High Accuracy Registration of Terrestrial LIDAR and Optical Images

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Hu, C.; Xia, G.; Xue, H.

    2018-04-01

    The registration of terrestrial laser point clouds with close-range images is key to high-precision 3D reconstruction of cultural relics. Given the high texture resolution now required in this field, registering point cloud and image data during object reconstruction leads to a one-point-cloud-to-multiple-images problem. In current commercial software, this registration is carried out by manually partitioning the point cloud data, manually matching point cloud and image data, and manually selecting corresponding two-dimensional points in the images and the point cloud. This process not only greatly reduces working efficiency but also limits the registration accuracy, causing seams in the textured color point cloud. To solve these problems, this paper takes a whole-object image as intermediate data and uses matching techniques to establish the one-to-multiple correspondence between the point cloud and the images automatically. Matching the reflectance-intensity image obtained by central projection of the point cloud against the optical images automatically identifies corresponding feature points, and a Rodrigues-matrix spatial similarity transformation model with weight-selection iteration achieves automatic, high-accuracy registration of the two kinds of data. This method is expected to serve high-precision, high-efficiency automatic 3D reconstruction of cultural relic objects, and it has both scientific research value and practical significance.
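
    The spatial similarity transformation at the heart of the registration (scale, rotation, translation) can be estimated in closed form from matched 3D points. The sketch below uses the SVD-based Umeyama method as a stand-in for the paper's Rodrigues-matrix parameterization (the two yield the same rotation), and omits the weight-selection iteration that would down-weight outlier matches.

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform (scale s, rotation R,
    translation t) with dst_i ≈ s * R @ src_i + t, estimated via the
    SVD-based Umeyama method. src, dst: (n, 3) matched points."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d              # centered point sets
    U, S, Vt = np.linalg.svd(xd.T @ xs / len(src))
    d = np.sign(np.linalg.det(U @ Vt))           # guard against reflection
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    s = (S * np.diag(D)).sum() / (xs ** 2).sum() * len(src)
    t = mu_d - s * R @ mu_s
    return s, R, t
```

    In a weight-selection iteration, one would re-estimate this transform repeatedly, reducing the weight of correspondences with large residuals until the solution stabilizes.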
