MEMS-based system and image processing strategy for epiretinal prosthesis.
Xia, Peng; Hu, Jie; Qi, Jin; Gu, Chaochen; Peng, Yinghong
2015-01-01
Retinal prostheses have the potential to restore some level of visual function to patients suffering from retinal degeneration. In this paper, an epiretinal approach with active stimulation devices is presented. The MEMS-based processing system consists of an external micro-camera, an information processor, an implanted electrical stimulator and a microelectrode array. An image processing strategy combining image clustering and enhancement techniques was proposed and evaluated in psychophysical experiments. The results indicated that the strategy improved visual performance compared with directly merging pixels to low resolution. Such image processing methods can assist epiretinal prostheses in vision restoration.
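The "directly merging pixels" baseline that the proposed strategy is compared against can be sketched as block-averaging onto a low-resolution phosphene grid. This is an illustrative reconstruction, not the authors' code; the grid size and grayscale handling are assumptions.

```python
import numpy as np

def direct_pixelization(img, grid=(32, 32)):
    """Downsample a grayscale image to a low-resolution phosphene grid
    by block-averaging each region onto one simulated phosphene."""
    gh, gw = grid
    h, w = img.shape
    img = img[:h - h % gh, :w - w % gw]          # crop to a multiple of the grid
    bh, bw = img.shape[0] // gh, img.shape[1] // gw
    return img.reshape(gh, bh, gw, bw).mean(axis=(1, 3))

demo = np.arange(64 * 64, dtype=float).reshape(64, 64)
low = direct_pixelization(demo, grid=(16, 16))
```

Each output value is the mean intensity of one block, so the overall brightness of the scene is preserved while spatial detail is discarded.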
Wang, Jing; Li, Heng; Fu, Weizhen; Chen, Yao; Li, Liming; Lyu, Qing; Han, Tingting; Chai, Xinyu
2016-01-01
Retinal prostheses have the potential to restore partial vision. Object recognition in scenes of daily life is one of the essential tasks for implant wearers. Because wearers are still limited by the low-resolution visual percepts that retinal prostheses provide, it is important to investigate and apply image processing methods that convey more useful visual information. We proposed two image processing strategies based on Itti's visual saliency map, region of interest (ROI) extraction, and image segmentation. Itti's saliency model generated a saliency map from the original image, in which salient regions were grouped into an ROI by fuzzy c-means clustering. GrabCut then generated a proto-object from the ROI-labeled image, which was recombined with the background and enhanced in two ways: 8-4 separated pixelization (8-4 SP) and background edge extraction (BEE). Results showed that both 8-4 SP and BEE yielded significantly higher recognition accuracy than direct pixelization (DP). Each saliency-based strategy was subject to the performance of image segmentation: under good and perfect segmentation conditions, BEE and 8-4 SP obtained noticeably higher recognition accuracy than DP, while under bad segmentation conditions only BEE boosted performance. The application of saliency-based image processing strategies was verified to be beneficial to object recognition in daily scenes under simulated prosthetic vision. These strategies may inform the development of the image processing module of future retinal prostheses and thus provide more benefit to patients. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
Color image lossy compression based on blind evaluation and prediction of noise characteristics
NASA Astrophysics Data System (ADS)
Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Lepisto, Leena
2011-03-01
The paper deals with JPEG adaptive lossy compression of color images formed by digital cameras. Adaptation to the noise characteristics and blur estimated for each given image is carried out. The dominant factor degrading image quality is determined in a blind manner, the characteristics of this dominant factor are estimated, and finally a scaling factor that determines the quantization steps for the default JPEG table is adaptively selected. Within this general framework, two possible strategies are considered. The first presumes blind estimation on the image after all operations in the digital image processing chain, just before compressing the given raster image. The second is based on predicting noise and blur parameters from analysis of the RAW image, under quite general assumptions concerning the parameters of the transformations the image will undergo at further processing stages. The advantages of both strategies are discussed. The first strategy provides more accurate estimation and a larger benefit in image compression ratio (CR) compared to the super-high quality (SHQ) mode, but it is more complicated and requires more resources. The second strategy is simpler but less beneficial. The proposed approaches are tested on many real-life color images acquired by digital cameras and are shown to provide more than a two-fold increase in average CR compared to the SHQ mode, without introducing visible distortions with respect to SHQ-compressed images.
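The blind noise estimation step can be illustrated with a standard robust estimator, the median absolute deviation of the image's Laplacian response. The mapping from estimated noise to a quantization scaling factor shown here (`quant_scale`, with its `base` and `gain` parameters) is a hypothetical stand-in for the paper's adaptive selection, not the published method.

```python
import numpy as np

def estimate_noise_sigma(img):
    """Blind noise estimate: robust (MAD-based) standard deviation of
    the image's Laplacian response, normalized by the kernel's gain."""
    lap = (4 * img[1:-1, 1:-1]
           - img[:-2, 1:-1] - img[2:, 1:-1]
           - img[1:-1, :-2] - img[1:-1, 2:])
    # For i.i.d. noise the Laplacian response has variance 20 * sigma^2.
    return np.median(np.abs(lap)) / 0.6745 / np.sqrt(20.0)

def quant_scale(sigma, base=1.0, gain=0.5):
    """Hypothetical mapping from estimated noise to a scaling factor for
    the default JPEG quantization table: noisier images tolerate coarser
    quantization."""
    return base + gain * sigma

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 10.0, (128, 128))   # pure-noise "image", true sigma = 10
sigma_hat = estimate_noise_sigma(noise)
```

The MAD-based estimator is largely insensitive to edges, which is why Laplacian-style operators are a common choice for blind noise estimation.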
Art Therapy and Its Contemplative Nature: Unifying Aspects of Image Making
ERIC Educational Resources Information Center
Salom, Andrée
2013-01-01
This article describes an art-based inquiry that explored two contemplative strategies--the conceptual strategy and the awareness strategy--through observation of art images and processes of creation, conceptual understanding, assessment, and the inner movements of self-awareness. Art media and directives were used to subjectively test key…
NASA Astrophysics Data System (ADS)
Wang, Jiaoyang; Wang, Lin; Yang, Ying; Gong, Rui; Shao, Xiaopeng; Liang, Chao; Xu, Jun
2016-05-01
In this paper, an integral design that combines the optical system with image processing is introduced to obtain high-resolution images, and its performance is evaluated and demonstrated. Traditional imaging methods often separate the two technical procedures of optical system design and image processing, preventing efficient cooperation between the optical and digital elements. Therefore, an innovative approach is presented that combines the merit function of optical design with the constraint conditions of the image processing algorithms. Specifically, an optical imaging system with low resolution is designed to collect the image signals that are indispensable for image processing, while the ultimate goal is to obtain high-resolution images from the final system. To optimize global performance, the optimization function of the ZEMAX software is utilized and the number of optimization cycles is controlled. A Wiener filter is then adopted in the image simulation, with mean squared error (MSE) as the evaluation criterion. The results show that, although the optical figures of merit for such optical imaging systems are not the best, they provide image signals that are better suited to image processing. In conclusion, the integral design of the optical system and image processing can find an overall optimal solution that is missed by traditional design methods. Especially when designing complex optical systems, this integral design strategy has obvious advantages: it simplifies structure and reduces cost while simultaneously yielding high-resolution images, giving it promising prospects for industrial application.
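The Wiener filtering step in such a joint design can be sketched as a frequency-domain filter applied to a blurred scene. The constant noise-to-signal ratio `k`, the Gaussian PSF, and the synthetic test scene below are assumptions for illustration, not the paper's actual optical model.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-3):
    """Frequency-domain Wiener filter W = H* / (|H|^2 + K), where K is
    an assumed constant noise-to-signal power ratio."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

# Synthetic test: a square scene blurred by a Gaussian PSF.
x = np.zeros((64, 64))
x[24:40, 24:40] = 1.0
yy, xx = np.mgrid[-32:32, -32:32]
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
psf /= psf.sum()
H = np.fft.fft2(np.fft.ifftshift(psf))
blurred = np.real(np.fft.ifft2(np.fft.fft2(x) * H))

restored = wiener_deconvolve(blurred, psf)
mse_blurred = ((blurred - x) ** 2).mean()
mse_restored = ((restored - x) ** 2).mean()
```

In a joint design, MSE after this restoration step (rather than the raw optical merit function) is what the lens optimization would be evaluated against.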
Active learning methods for interactive image retrieval.
Gosselin, Philippe Henri; Cord, Matthieu
2008-07-01
Active learning methods have been considered with increased interest in the statistical learning community. Initially developed within a classification framework, many extensions are now being proposed to handle multimedia applications. This paper provides algorithms within a statistical framework to extend active learning to online content-based image retrieval (CBIR). The classification framework is presented with experiments comparing several powerful classification techniques in this information retrieval context. Focusing on interactive methods, the active learning strategy is then described. The limitations of this approach for CBIR are emphasized before presenting our new active selection process, RETIN. First, as any active method is sensitive to the estimation of the boundary between classes, the RETIN strategy carries out a boundary correction to make the retrieval process more robust. Second, the generalization-error criterion used to optimize active learning selection is modified to better represent the CBIR objective of database ranking. Third, batch processing of images is proposed. Our strategy leads to a fast and efficient active learning scheme for online retrieval of image sets (query concepts). Experiments on large databases show that the RETIN method performs well in comparison to several other active strategies.
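The boundary-sensitive selection at the heart of such active learning schemes can be sketched as picking the unlabeled images whose classifier scores lie closest to the decision boundary. This is a generic uncertainty-sampling sketch, not the RETIN implementation; the scores stand in for any relevance classifier's signed outputs.

```python
import numpy as np

def select_batch(scores, labeled_mask, batch_size=5):
    """Pick the unlabeled items whose classifier scores lie closest to
    the decision boundary (score 0), i.e. the most uncertain ones."""
    scores = np.asarray(scores, dtype=float)
    candidates = np.flatnonzero(~labeled_mask)        # indices not yet annotated
    order = np.argsort(np.abs(scores[candidates]))    # nearest to the boundary first
    return candidates[order[:batch_size]]

scores = np.array([2.0, -0.1, 0.5, -3.0, 0.05])  # signed distances to the boundary
labeled = np.zeros(5, dtype=bool)                 # nothing annotated yet
picked = select_batch(scores, labeled, batch_size=2)
```

The user then labels the selected batch, the classifier is retrained, and the loop repeats; batch selection like this is what makes interactive retrieval feasible.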
Chen, Yili; Fu, Jixiang; Chu, Dawei; Li, Rongmao; Xie, Yaoqin
2017-11-27
A retinal prosthesis is designed to help the blind obtain some sight. It consists of an external part and an internal part. The external part is made up of a camera, an image processor and an RF transmitter; the internal part is made up of an RF receiver, an implant chip and a microelectrode array. Currently, the number of microelectrodes is in the hundreds, and the mechanism by which an electrode stimulates the optic nerve is not fully understood. A simple hypothesis is that the pixels in an image correspond to the electrodes, so the images captured by the camera should be processed by suitable strategies into corresponding electrode stimulation patterns. The question is therefore how to extract the important information from the captured image. Here, we use region of interest (ROI) extraction to retain the important information and remove the redundant information. This paper explains the principles and functions of the ROI in detail. Because we are building a real-time system, a fast ROI extraction algorithm is needed; we therefore simplified the ROI algorithm and applied it in the external digital signal processing (DSP) image-processing system of the retinal prosthesis. The results show that our image processing strategies are suitable for a real-time retinal prosthesis: they eliminate redundant information and preserve useful information in a low-resolution image.
Reasoning strategies modulate gender differences in emotion processing.
Markovits, Henry; Trémolière, Bastien; Blanchette, Isabelle
2018-01-01
The dual strategy model of reasoning has proposed that people's reasoning can be understood as a combination of two different ways of processing information related to problem premises: a counterexample strategy that examines information for explicit potential counterexamples, and a statistical strategy that uses associative access to generate a likelihood estimate of putative conclusions. Previous studies have examined this model in the context of basic conditional reasoning tasks. However, the information processing distinction that underlies the dual strategy model can be seen as a basic description of differences in reasoning (similar to that described by many general dual-process models of reasoning). In two studies, we examine how these differences in reasoning strategy relate to processing very different information; specifically, we focus on previously observed gender differences in processing negative emotions. Study 1 examined the intensity of emotional reactions to a film clip inducing primarily negative emotions. Study 2 examined the speed at which participants determine the emotional valence of sequences of negative images. In both studies, no gender differences were observed among participants using a counterexample strategy. Among participants using a statistical strategy, females produced significantly stronger emotional reactions than males (Study 1) and were faster to recognize the valence of negative images (Study 2). The results show that the processing distinction underlying the dual strategy model of reasoning generalizes to the processing of emotions. Copyright © 2017 Elsevier B.V. All rights reserved.
2011-01-01
Novel molecular imaging techniques are at the forefront of both preclinical and clinical imaging strategies. They have significant potential to offer visualisation and quantification of molecular and cellular changes in health and disease. This will help to shed light on pathobiology and underlying disease processes and provide further information about the mechanisms of action of novel therapeutic strategies. This review explores currently available molecular imaging techniques that are available for preclinical studies with a focus on optical imaging techniques and discusses how current and future advances will enable translation into the clinic for patients with arthritis. PMID:21345267
Li, Heng; Su, Xiaofan; Wang, Jing; Kan, Han; Han, Tingting; Zeng, Yajie; Chai, Xinyu
2018-01-01
Current retinal prostheses can only generate low-resolution visual percepts, constituted of a limited number of phosphenes elicited by an electrode array, with uncontrollable color and restricted grayscale. With this visual perception, prosthetic recipients can complete some simple visual tasks, but more complex tasks such as face identification and object recognition are extremely difficult. It is therefore necessary to investigate and apply image processing strategies for optimizing the visual perception of recipients. This study focuses on recognition of the object of interest under simulated prosthetic vision. We used a saliency segmentation method, based on a biologically plausible graph-based visual saliency model and a GrabCut-based self-adaptive iterative optimization framework, to automatically extract foreground objects. On this basis, two image processing strategies, Addition of Separate Pixelization and Background Pixel Shrink, were further utilized to enhance the extracted foreground objects. i) Psychophysical experiments verified that, under simulated prosthetic vision, both strategies had marked advantages over Direct Pixelization in recognition accuracy and efficiency. ii) We also found that recognition performance under the two strategies was tied to the segmentation results and was positively affected by paired, interrelated objects in the scene. The saliency segmentation method and image processing strategies can automatically extract and enhance foreground objects, and significantly improve object recognition performance for recipients implanted with a high-density array. Copyright © 2017 Elsevier B.V. All rights reserved.
Space-based optical image encryption.
Chen, Wen; Chen, Xudong
2010-12-20
In this paper, we propose a new method based on a three-dimensional (3D) space-based strategy for optical image encryption. The two-dimensional (2D) processing of a plaintext in conventional optical encryption methods is extended to 3D space-based processing. Each pixel of the plaintext is considered as one particle in the proposed space-based optical image encryption, and the diffraction of all particles forms an object wave in phase-shifting digital holography. The effectiveness and advantages of the proposed method are demonstrated by numerical results. The proposed method provides a new optical encryption strategy beyond conventional 2D processing, and may open up a new research perspective for optical image encryption.
Image recognition on raw and processed potato detection: a review
NASA Astrophysics Data System (ADS)
Qi, Yan-nan; Lü, Cheng-xu; Zhang, Jun-ning; Li, Ya-shuo; Zeng, Zhen; Mao, Wen-hua; Jiang, Han-lu; Yang, Bing-nan
2018-02-01
Objective: China's potato staple food strategy has highlighted the need to improve potato processing, but a bottleneck of this strategy is the technology and equipment for selecting appropriate raw and processed potatoes. The purpose of this paper is to summarize advanced detection methods for raw and processed potatoes. Method: Research literature in the field of image-recognition-based potato quality detection was reviewed, covering shape, weight, mechanical damage, germination, greening, black heart, scab, etc., and the development and direction of the field were summarized. Result: To capture the whole potato surface, hardware synchronizing an image sensor with a conveyor belt has been built to acquire multi-angle images of a single potato. Research on image recognition of potato shape is popular and mature, including qualitative discrimination between abnormal and sound potatoes, and even between round and oval potatoes, with recognition accuracy above 83%. Weight is an important indicator for potato grading, and image-based classification accuracy exceeds 93%. Image recognition of potato mechanical damage focuses on qualitative identification, the main influencing factors being damage shape and damage time. Image recognition of potato germination usually uses the potato surface image and edge germination points. Both qualitative and quantitative detection of green potatoes have been researched; currently, scab and black heart image recognition must be operated in a stable detection environment or with a specific device. Image recognition of processed potato mainly focuses on potato chips, slices, fries, etc.
Conclusion: Image recognition as a rapid food detection tool has been widely researched for raw and processed potato quality analyses. Its techniques and equipment have the potential for commercialization in the short term, meeting the demands of China's strategy of developing the potato as a staple food.
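A typical shape feature behind round-versus-oval discrimination can be sketched from the second-order moments of a segmented binary mask. The feature and the synthetic masks below are illustrative assumptions, not taken from any of the reviewed papers.

```python
import numpy as np

def elongation(mask):
    """Minor/major axis ratio from second-order moments of a binary
    mask: ~1.0 for a round potato, smaller for an oval one."""
    ys, xs = np.nonzero(mask)
    cov = np.cov(np.stack([ys, xs]).astype(float))
    evals = np.linalg.eigvalsh(cov)          # ascending order
    return float(np.sqrt(evals[0] / evals[1]))

# Synthetic masks standing in for segmented potatoes.
yy, xx = np.mgrid[-40:41, -40:41]
disc = (xx**2 + yy**2) <= 30**2
ellipse = (xx**2 / 36.0**2 + yy**2 / 15.0**2) <= 1.0
```

A grading system would threshold such a feature (possibly alongside area or weight estimates) to separate round from oval produce.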
Automation of Cassini Support Imaging Uplink Command Development
NASA Technical Reports Server (NTRS)
Ly-Hollins, Lisa; Breneman, Herbert H.; Brooks, Robert
2010-01-01
"Support imaging" is imagery requested by other Cassini science teams to aid in the interpretation of their data. The generation of the spacecraft command sequences for these images is performed by the Cassini Instrument Operations Team. The process initially established for doing this was very labor-intensive, tedious and prone to human error. Team management recognized this process as one that could easily benefit from automation. Team members were tasked to document the existing manual process, develop a plan and strategy to automate the process, implement the plan and strategy, test and validate the new automated process, and deliver the new software tools and documentation to Flight Operations for use during the Cassini extended mission. In addition to the goals of higher efficiency and lower risk in the processing of support imaging requests, an effort was made to maximize adaptability of the process to accommodate uplink procedure changes and the potential addition of new capabilities outside the scope of the initial effort.
Cao, Lu; Graauw, Marjo de; Yan, Kuan; Winkel, Leah; Verbeek, Fons J
2016-05-03
Endocytosis is regarded as a mechanism of attenuating epidermal growth factor receptor (EGFR) signaling and of receptor degradation. There is increasing evidence that breast cancer progression is associated with a defect in EGFR endocytosis. To find related ribonucleic acid (RNA) regulators in this process, high-throughput imaging with fluorescent markers is used to visualize the complex EGFR endocytosis process. A dedicated automatic image and data analysis system is then developed and applied to extract phenotype measurements and distinguish different developmental episodes from the large volume of images acquired through high-throughput imaging. In the image analysis, a phenotype measurement quantifies the important image information into distinct features or measurements; the manner in which prominent measurements are chosen to represent the dynamics of the EGFR process is therefore a crucial step in identifying the phenotype. In the subsequent data analysis, classification is used to categorize each observation using all prominent measurements obtained from the image analysis, so a better classification strategy raises the performance of the whole image and data analysis system. In this paper, we illustrate an integrated analysis method for EGFR signalling through image analysis of microscopy images. Sophisticated wavelet-based texture measurements are used to obtain a good description of the characteristic stages of EGFR signalling. A hierarchical classification strategy is designed to improve the recognition of phenotypic episodes of EGFR during endocytosis. Different strategies for normalization, feature selection and classification are evaluated.
The results of the performance assessment clearly demonstrate that our hierarchical classification scheme, combined with a selected set of features, provides a notable improvement in the temporal analysis of EGFR endocytosis. Moreover, the addition of the wavelet-based texture features is shown to contribute to this improvement. Our workflow can be applied in drug discovery to analyze defective EGFR endocytosis processes.
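A hierarchical scheme of this kind can be sketched with nearest-centroid classifiers: first choose a coarse group of classes, then the closest class within that group. The grouping, the synthetic 2D data, and the use of raw coordinates (rather than the study's wavelet texture features) are all assumptions for illustration.

```python
import numpy as np

class HierarchicalNearestCentroid:
    """Two-stage nearest-centroid classifier: first choose a coarse
    group of classes, then the closest class centroid within it."""
    def __init__(self, groups):
        self.groups = groups              # e.g. {"groupA": [0, 1], "groupB": [2, 3]}
        self.class_centroids = {}
        self.group_centroids = {}

    def fit(self, X, y):
        for label in set(y):
            self.class_centroids[label] = X[y == label].mean(axis=0)
        for g, labels in self.groups.items():
            self.group_centroids[g] = np.mean(
                [self.class_centroids[l] for l in labels], axis=0)
        return self

    def predict_one(self, x):
        g = min(self.group_centroids,
                key=lambda k: np.linalg.norm(x - self.group_centroids[k]))
        return min(self.groups[g],
                   key=lambda l: np.linalg.norm(x - self.class_centroids[l]))

rng = np.random.default_rng(1)
centers = np.array([[0.0, 0.0], [0.0, 3.0], [10.0, 0.0], [10.0, 3.0]])
labels = np.repeat(np.arange(4), 20)
X = centers[labels] + rng.normal(0.0, 0.3, (80, 2))
clf = HierarchicalNearestCentroid({"groupA": [0, 1], "groupB": [2, 3]}).fit(X, labels)
```

Splitting the decision into a coarse stage and a fine stage lets each stage use the features (and normalization) best suited to its distinctions, which is the motivation behind hierarchical classification here.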
[Individual differences in strategy use in the Japanese reading span test].
Endo, Kaori; Osaka, Mariko
2012-02-01
Working memory is a system for processing and storing information. The Reading Span Test (RST), developed by Daneman and Carpenter (1980), is well-known for assessing individual difference in working memory. In the present investigation, we used the Japanese version of the RST (Osaka, 2002) and analyzed individual differences in strategy use from the viewpoint of strategy type (rehearsal, chaining, word-image, scene-image, and initial letter) and frequency of use (used in almost all trials, in half the trials, or not used). Data from the participants (N = 132) were assigned to groups according to the scores, for the total number of words correctly recalled and the proportion correct. The results showed that the frequency of word-image strategy use differed significantly between high-scoring subjects (HSS) and low-scoring subjects (LSS). HSS mainly used word-image and chaining strategies, while LSS used rehearsal and chaining strategies. This indicates that HSS used both verbal and visual strategies, whereas LSS relied only on verbal strategies. The use of the word-image is important for effective retention of words in memory.
[Imaging center - optimization of the imaging process].
Busch, H-P
2013-04-01
Hospitals around the world are under increasing pressure to optimize the economic efficiency of treatment processes. Imaging is responsible for a great part of the success but also of the costs of treatment. In routine work an excessive supply of imaging methods leads to an "as well as" strategy up to the limit of the capacity without critical reflection. Exams that have no predictable influence on the clinical outcome are an unjustified burden for the patient. They are useless and threaten the financial situation and existence of the hospital. In recent years the focus of process optimization was exclusively on the quality and efficiency of performed single examinations. In the future critical discussion of the effectiveness of single exams in relation to the clinical outcome will be more important. Unnecessary exams can be avoided, only if in addition to the optimization of single exams (efficiency) there is an optimization strategy for the total imaging process (efficiency and effectiveness). This requires a new definition of processes (Imaging Pathway), new structures for organization (Imaging Center) and a new kind of thinking on the part of the medical staff. Motivation has to be changed from gratification of performed exams to gratification of process quality (medical quality, service quality, economics), including the avoidance of additional (unnecessary) exams. © Georg Thieme Verlag KG Stuttgart · New York.
Low-level processing for real-time image analysis
NASA Technical Reports Server (NTRS)
Eskenazi, R.; Wilf, J. M.
1979-01-01
A system that detects object outlines in television images in real time is described. A high-speed pipeline processor transforms the raw image into an edge map and a microprocessor, which is integrated into the system, clusters the edges, and represents them as chain codes. Image statistics, useful for higher level tasks such as pattern recognition, are computed by the microprocessor. Peak intensity and peak gradient values are extracted within a programmable window and are used for iris and focus control. The algorithms implemented in hardware and the pipeline processor architecture are described. The strategy for partitioning functions in the pipeline was chosen to make the implementation modular. The microprocessor interface allows flexible and adaptive control of the feature extraction process. The software algorithms for clustering edge segments, creating chain codes, and computing image statistics are also discussed. A strategy for real time image analysis that uses this system is given.
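The chain-code representation produced by the microprocessor can be sketched as the classic Freeman 8-direction code over an ordered boundary path. The direction numbering below (0 = east, counter-clockwise) is one common convention and may differ from the system described.

```python
# Freeman 8-direction codes: 0 = east, numbered counter-clockwise,
# applied to (row, col) image coordinates.
DIRS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
        (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code(points):
    """Chain code of an ordered 8-connected boundary path."""
    return [DIRS[(r2 - r1, c2 - c1)]
            for (r1, c1), (r2, c2) in zip(points, points[1:])]

# A 3x3 square boundary traced clockwise (in image coordinates).
square = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2),
          (2, 1), (2, 0), (1, 0), (0, 0)]
codes = chain_code(square)
```

Storing one 3-bit direction per boundary step instead of full coordinates is what makes chain codes attractive for a memory-constrained real-time microprocessor.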
Designing Image Analysis Pipelines in Light Microscopy: A Rational Approach.
Arganda-Carreras, Ignacio; Andrey, Philippe
2017-01-01
With the progress of microscopy techniques and the rapidly growing amounts of acquired imaging data, there is an increased need for automated image processing and analysis solutions in biological studies. Each new application requires the design of a specific image analysis pipeline, by assembling a series of image processing operations. Many commercial or free bioimage analysis software are now available and several textbooks and reviews have presented the mathematical and computational fundamentals of image processing and analysis. Tens, if not hundreds, of algorithms and methods have been developed and integrated into image analysis software, resulting in a combinatorial explosion of possible image processing sequences. This paper presents a general guideline methodology to rationally address the design of image processing and analysis pipelines. The originality of the proposed approach is to follow an iterative, backwards procedure from the target objectives of analysis. The proposed goal-oriented strategy should help biologists to better apprehend image analysis in the context of their research and should allow them to efficiently interact with image processing specialists.
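The idea of assembling a pipeline from a series of image processing operations can be sketched as function composition. The smoothing and thresholding steps below are placeholder operations chosen for illustration, not recommendations from the paper.

```python
import numpy as np

def make_pipeline(*steps):
    """Compose named image-processing operations into one callable,
    applied left to right (e.g. denoise -> threshold -> label)."""
    def run(image):
        for name, op in steps:
            image = op(image)
        return image
    return run

# Placeholder operations: a 4-neighbour mean filter and a global threshold.
def smooth(im):
    return (im + np.roll(im, 1, 0) + np.roll(im, -1, 0)
            + np.roll(im, 1, 1) + np.roll(im, -1, 1)) / 5.0

def threshold(im):
    return (im > im.mean()).astype(np.uint8)

pipeline = make_pipeline(("smooth", smooth), ("threshold", threshold))
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
out = pipeline(img)
```

Keeping each operation as an independent, named step makes it easy to work backwards from the target measurement and swap individual stages, which is the goal-oriented design the paper advocates.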
Concept Learning through Image Processing.
ERIC Educational Resources Information Center
Cifuentes, Lauren; Yi-Chuan, Jane Hsieh
This study explored computer-based image processing as a study strategy for middle school students' science concept learning. Specifically, the research examined the effects of computer graphics generation on science concept learning and the impact of using computer graphics to show interrelationships among concepts during study time. The 87…
Reasoning strategies with rational numbers revealed by eye tracking.
Plummer, Patrick; DeWolf, Melissa; Bassok, Miriam; Gordon, Peter C; Holyoak, Keith J
2017-07-01
Recent research has begun to investigate the impact of different formats for rational numbers on the processes by which people make relational judgments about quantitative relations. DeWolf, Bassok, and Holyoak (Journal of Experimental Psychology: General, 144(1), 127-150, 2015) found that accuracy on a relation identification task was highest when fractions were presented with countable sets, whereas accuracy was relatively low for all conditions where decimals were presented. However, it is unclear what processing strategies underlie these disparities in accuracy. We report an experiment that used eye-tracking methods to externalize the strategies that are evoked by different types of rational numbers for different types of quantities (discrete vs. continuous). Results showed that eye-movement behavior during the task was jointly determined by image and number format. Discrete images elicited a counting strategy for both fractions and decimals, but this strategy led to higher accuracy only for fractions. Continuous images encouraged magnitude estimation and comparison, but to a greater degree for decimals than fractions. This strategy led to decreased accuracy for both number formats. By analyzing participants' eye movements when they viewed a relational context and made decisions, we were able to obtain an externalized representation of the strategic choices evoked by different ontological types of entities and different types of rational numbers. Our findings using eye-tracking measures enable us to go beyond previous studies based on accuracy data alone, demonstrating that quantitative properties of images and the different formats for rational numbers jointly influence strategies that generate eye-movement behavior.
Manger, Ryan P; Paxton, Adam B; Pawlicki, Todd; Kim, Gwe-Ya
2015-05-01
Surface image guided, Linac-based radiosurgery (SIG-RS) is a modern approach for delivering radiosurgery that utilizes optical stereoscopic imaging to monitor the surface of the patient during treatment in lieu of using a head frame for patient immobilization. Considering the novelty of the SIG-RS approach and the severity of errors associated with delivery of large doses per fraction, a risk assessment should be conducted to identify potential hazards, determine their causes, and formulate mitigation strategies. The purpose of this work is to investigate SIG-RS using the combined application of failure modes and effects analysis (FMEA) and fault tree analysis (FTA), report on the effort required to complete the analysis, and evaluate the use of FTA in conjunction with FMEA. A multidisciplinary team was assembled to conduct the FMEA on the SIG-RS process. A process map detailing the steps of the SIG-RS was created to guide the FMEA. Failure modes were determined for each step in the SIG-RS process, and risk priority numbers (RPNs) were estimated for each failure mode to facilitate risk stratification. The failure modes were ranked by RPN, and FTA was used to determine the root factors contributing to the riskiest failure modes. Using the FTA, mitigation strategies were formulated to address the root factors and reduce the risk of the process. The RPNs were re-estimated based on the mitigation strategies to determine the margin of risk reduction. The FMEA and FTAs for the top two failure modes required an effort of 36 person-hours (30 person-hours for the FMEA and 6 person-hours for two FTAs). The SIG-RS process consisted of 13 major subprocesses and 91 steps, which amounted to 167 failure modes. Of the 91 steps, 16 were directly related to surface imaging. Twenty-five failure modes resulted in a RPN of 100 or greater. Only one of these top 25 failure modes was specific to surface imaging. The riskiest surface imaging failure mode had an overall RPN-rank of eighth. 
Mitigation strategies for the top failure mode decreased the RPN from 288 to 72. Based on the FMEA performed in this work, the use of surface imaging for monitoring intrafraction position in Linac-based stereotactic radiosurgery (SRS) did not greatly increase the risk of the Linac-based SRS process. In some cases, SIG helped to reduce the risk of Linac-based RS. The FMEA was augmented by the use of FTA since it divided the failure modes into their fundamental components, which simplified the task of developing mitigation strategies.
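The risk priority numbers used for stratification are the product of three ratings, each scored on a 1-10 scale. The sketch below shows the arithmetic; the 8x6x6 and 8x3x3 decompositions of the reported RPNs (288 before mitigation, 72 after) are illustrative assumptions, since the abstract gives only the products.

```python
def rpn(severity, occurrence, detectability):
    """FMEA risk priority number: the product of three ratings, each
    scored on a 1-10 scale (higher = worse)."""
    for score in (severity, occurrence, detectability):
        if not 1 <= score <= 10:
            raise ValueError("FMEA ratings use a 1-10 scale")
    return severity * occurrence * detectability

# Hypothetical decomposition of the reported top failure mode.
before = rpn(8, 6, 6)   # 288 before mitigation
after = rpn(8, 3, 3)    # 72 after mitigation
```

Because severity is usually fixed by the nature of the failure, mitigation strategies typically target occurrence and detectability, which is consistent with the four-fold reduction reported.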
Coding Strategies and Implementations of Compressive Sensing
NASA Astrophysics Data System (ADS)
Tsai, Tsung-Han
This dissertation studies coding strategies for computational imaging that overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager, and increasing sensitivity in any one dimension can significantly compromise the others. This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to extract greater bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level, while maintaining or gaining temporal resolution. The experimental results show that appropriate coding strategies can increase sensing capacity by a factor of hundreds.
The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information from a noisy environment. Accomplishing the same task through engineering usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate the abilities of sound localization and selective attention. This research investigates and optimizes the sensing capacity and the spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor allows localizing multiple speakers in both stationary and dynamic auditory scenes, and distinguishing mixed conversations from independent sources with a high audio recognition rate.
Point target detection utilizing super-resolution strategy for infrared scanning oversampling system
NASA Astrophysics Data System (ADS)
Wang, Longguang; Lin, Zaiping; Deng, Xinpu; An, Wei
2017-11-01
To improve the resolution of remote sensing infrared images, an infrared scanning oversampling system is employed, which quadruples the amount of information and thereby aids target detection. Generally, the image data from the double-line detector of an infrared scanning oversampling system are shuffled into a whole oversampled image for post-processing, but aliasing between neighboring pixels degrades the image and has a great impact on target detection. This paper formulates a point target detection method utilizing a super-resolution (SR) strategy for infrared scanning oversampling systems, with an accelerated SR strategy proposed to realize fast de-aliasing of the oversampled image and an adaptive MRF-based regularization designed to preserve and aggregate target energy. Extensive experiments demonstrate the superior detection performance, robustness and efficiency of the proposed method compared with other state-of-the-art approaches.
Performance evaluation of canny edge detection on a tiled multicore architecture
NASA Astrophysics Data System (ADS)
Brethorst, Andrew Z.; Desai, Nehal; Enright, Douglas P.; Scrofano, Ronald
2011-01-01
In the last few years, a variety of multicore architectures have been used to parallelize image processing applications. In this paper, we focus on assessing the parallel speed-ups of different Canny edge detection parallelization strategies on the Tile64, a tiled multicore architecture developed by the Tilera Corporation. These strategies cover both different ways Canny edge detection can be parallelized and differences in data management. The two parallelization strategies examined were loop-level parallelism and domain decomposition. Loop-level parallelism is achieved through the use of OpenMP, which can parallelize across the range of values over which a loop iterates. Domain decomposition is the process of breaking down an image into subimages, where each subimage is processed independently, in parallel. The results of the two strategies show that, for the same number of threads, the programmer-implemented domain decomposition exhibits higher speed-ups than the compiler-managed loop-level parallelism implemented with OpenMP.
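The domain-decomposition idea can be sketched independently of the Tile64 and the authors' code: split the image into horizontal strips, filter each strip in parallel with a one-pixel halo so strip boundaries match whole-image filtering, then reassemble. A Sobel gradient magnitude stands in for the full Canny pipeline here; strip count and image size are illustrative.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def sobel_mag(img):
    """Gradient magnitude with 3x3 Sobel kernels (edge-replicated borders)."""
    p = np.pad(img.astype(float), 1, mode="edge")
    gx = (p[:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[1:-1, :-2] - p[2:, :-2])
    gy = (p[2:, :-2] + 2 * p[2:, 1:-1] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[:-2, 1:-1] - p[:-2, 2:])
    return np.hypot(gx, gy)

def sobel_decomposed(img, n_strips=4):
    """Filter horizontal strips independently; a one-pixel halo makes each
    strip's result identical to whole-image filtering."""
    h = img.shape[0]
    bounds = np.linspace(0, h, n_strips + 1, dtype=int)

    def work(i):
        a, b = bounds[i], bounds[i + 1]
        lo, hi = max(a - 1, 0), min(b + 1, h)   # strip plus halo rows
        mag = sobel_mag(img[lo:hi])
        return mag[a - lo: (a - lo) + (b - a)]  # crop the halo rows back off

    with ThreadPoolExecutor() as ex:
        return np.vstack(list(ex.map(work, range(n_strips))))

img = np.random.default_rng(3).integers(0, 255, (65, 80))
assert np.allclose(sobel_decomposed(img), sobel_mag(img))
```

Because the subimages share no state except the read-only halo, this pattern scales to any worker count without synchronization inside the filter.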
Adult stem cell lineage tracing and deep tissue imaging
Fink, Juergen; Andersson-Rolf, Amanda; Koo, Bon-Kyoung
2015-01-01
Lineage tracing is a widely used method for understanding cellular dynamics in multicellular organisms during processes such as development, adult tissue maintenance, injury repair and tumorigenesis. Advances in tracing or tracking methods, from light microscopy-based live cell tracking to fluorescent label-tracing with two-photon microscopy, together with emerging tissue clearing strategies and intravital imaging approaches have enabled scientists to decipher adult stem and progenitor cell properties in various tissues and in a wide variety of biological processes. Although technical advances have enabled time-controlled genetic labeling and simultaneous live imaging, a number of obstacles still need to be overcome. In this review, we aim to provide an in-depth description of the traditional use of lineage tracing as well as current strategies and upcoming new methods of labeling and imaging. [BMB Reports 2015; 48(12): 655-667] PMID:26634741
Acoustic noise and functional magnetic resonance imaging: current strategies and future prospects.
Amaro, Edson; Williams, Steve C R; Shergill, Sukhi S; Fu, Cynthia H Y; MacSweeney, Mairead; Picchioni, Marco M; Brammer, Michael J; McGuire, Philip K
2002-11-01
Functional magnetic resonance imaging (fMRI) has become the method of choice for studying the neural correlates of cognitive tasks. Nevertheless, the scanner produces acoustic noise during the image acquisition process, which is a problem in the study of auditory pathway and language generally. The scanner acoustic noise not only produces activation in brain regions involved in auditory processing, but also interferes with the stimulus presentation. Several strategies can be used to address this problem, including modifications of hardware and software. Although reduction of the source of the acoustic noise would be ideal, substantial hardware modifications to the current base of installed MRI systems would be required. Therefore, the most common strategy employed to minimize the problem involves software modifications. In this work we consider three main types of acquisitions: compressed, partially silent, and silent. For each implementation, paradigms using block and event-related designs are assessed. We also provide new data, using a silent event-related (SER) design, which demonstrate higher blood oxygen level-dependent (BOLD) response to a simple auditory cue when compared to a conventional image acquisition. Copyright 2002 Wiley-Liss, Inc.
Image counter-forensics based on feature injection
NASA Astrophysics Data System (ADS)
Iuliani, M.; Rossetto, S.; Bianchi, T.; De Rosa, Alessia; Piva, A.; Barni, M.
2014-02-01
Starting from the concept that many image forensic tools are based on the detection of some features revealing a particular aspect of the history of an image, in this work we model the counter-forensic attack as the injection of a specific fake feature pointing to the same history of an authentic reference image. We propose a general attack strategy that does not rely on a specific detector structure. Given a source image x and a target image y, the adversary processes x in the pixel domain producing an attacked image ~x, perceptually similar to x, whose feature f(~x) is as close as possible to f(y) computed on y. Our proposed counter-forensic attack consists in the constrained minimization of the feature distance Φ(z) = ‖f(z) − f(y)‖ through iterative methods based on gradient descent. To solve the intrinsic limit due to the numerical estimation of the gradient on large images, we propose the application of a feature decomposition process, that allows the problem to be reduced into many subproblems on the blocks the image is partitioned into. The proposed strategy has been tested by attacking three different features and its performance has been compared to state-of-the-art counter-forensic methods.
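The block-wise numerical-gradient descent can be sketched as follows. The block feature used here (mean and standard deviation) is a toy stand-in for a real forensic feature, and the step size, block size, and iteration count are illustrative, not the paper's settings.

```python
import numpy as np

def feature(b):
    """Toy 2-D feature of an image block (hypothetical stand-in for a real
    forensic feature such as a histogram or noise statistic)."""
    return np.array([b.mean(), b.std()])

def phi(b, f_target):
    """Squared feature distance Phi(z) = ||f(z) - f_target||^2."""
    return float(np.sum((feature(b) - f_target) ** 2))

def attack_block(bx, f_target, step=2.0, iters=150, eps=1e-3):
    """Minimise phi by gradient descent with a numerically estimated gradient;
    tractable because the feature is evaluated on a small block."""
    z = bx.astype(float).copy()
    for _ in range(iters):
        grad = np.empty_like(z)
        base = phi(z, f_target)
        for idx in np.ndindex(z.shape):   # forward-difference per pixel
            z[idx] += eps
            grad[idx] = (phi(z, f_target) - base) / eps
            z[idx] -= eps
        z -= step * grad
    return z

rng = np.random.default_rng(0)
x = rng.uniform(0, 255, (8, 8))           # source block
y = rng.uniform(0, 255, (8, 8))           # authentic reference block
x_atk = attack_block(x, feature(y))
print(phi(x, feature(y)), "->", phi(x_atk, feature(y)))
```

Partitioning the image into blocks keeps the per-step cost of the numerical gradient proportional to the block size rather than the whole image, which is the point of the feature decomposition.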
Image processing for cryogenic transmission electron microscopy of symmetry-mismatched complexes.
Huiskonen, Juha T
2018-02-08
Cryogenic transmission electron microscopy (cryo-TEM) is a high-resolution biological imaging method, whereby biological samples, such as purified proteins, macromolecular complexes, viral particles, organelles and cells, are embedded in vitreous ice preserving their native structures. Due to the sensitivity of biological materials to the electron beam of the microscope, only relatively low electron doses can be applied during imaging. As a result, the signal arising from the structure of interest is overpowered by noise in the images. To increase the signal-to-noise ratio, different image processing-based strategies that aim at coherent averaging of signal have been devised. In such strategies, images are generally assumed to arise from multiple identical copies of the structure. Prior to averaging, the images must be grouped according to the view of the structure they represent, and images representing the same view must be simultaneously aligned relative to each other. For computational reconstruction of the three-dimensional structure, images must contain different views of the original structure. Structures with multiple symmetry-related substructures are advantageous in averaging approaches because each image provides multiple views of the substructures. However, the symmetry assumption may be valid for only parts of the structure, leading to incoherent averaging of the other parts. Several image processing approaches have been adapted to tackle symmetry-mismatched substructures with increasing success. Such structures are ubiquitous in nature, and further computational method development is needed to understand their biological functions. ©2018 The Author(s).
Neurocognitive inefficacy of the strategy process.
Klein, Harold E; D'Esposito, Mark
2007-11-01
The most widely used (and taught) protocols for strategic analysis-Strengths, Weaknesses, Opportunities, and Threats (SWOT) and Porter's (1980) Five Force Framework for industry analysis-have been found to be insufficient as stimuli for strategy creation or even as a basis for further strategy development. We approach this problem from a neurocognitive perspective. We see profound incompatibilities between the cognitive process-deductive reasoning-channeled into the collective mind of strategists within the formal planning process through its tools of strategic analysis (i.e., rational technologies) and the essentially inductive reasoning process actually needed to address ill-defined, complex strategic situations. Thus, strategic analysis protocols that may appear to be and, indeed, are entirely rational and logical are not interpretable as such at the neuronal substrate level where thinking takes place. The analytical structure (or propositional representation) of these tools results in a mental dead end, the phenomenon known in cognitive psychology as functional fixedness. The difficulty lies with the inability of the brain to make out meaningful (i.e., strategy-provoking) stimuli from the mental images (or depictive representations) generated by strategic analysis tools. We propose decreasing dependence on these tools and conducting further research employing brain imaging technology to explore complex data handling protocols with richer mental representation and greater potential for strategy creation.
NASA Astrophysics Data System (ADS)
Shrivastava, Sajal; Sohn, Il-Yung; Son, Young-Min; Lee, Won-Il; Lee, Nae-Eung
2015-11-01
Although real-time label-free fluorescent aptasensors based on nanomaterials are increasingly recognized as a useful strategy for the detection of target biomolecules with high fidelity, the lack of an imaging-based quantitative measurement platform limits their implementation with biological samples. Here we introduce an ensemble strategy for a real-time label-free fluorescent graphene (Gr) aptasensor platform. This platform employs aptamer length-dependent tunability, thus enabling the reagentless quantitative detection of biomolecules through computational processing coupled with real-time fluorescence imaging data. We demonstrate that this strategy effectively delivers dose-dependent quantitative readouts of adenosine triphosphate (ATP) concentration on chemical vapor deposited (CVD) Gr and reduced graphene oxide (rGO) surfaces, thereby providing cytotoxicity assessment. Compared with conventional fluorescence spectrometry methods, our highly efficient, universally applicable, and rational approach will facilitate broader implementation of imaging-based biosensing platforms for the quantitative evaluation of a range of target molecules. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr05839b
A Region-Based Multi-Scale Approach for Object-Based Image Analysis
NASA Astrophysics Data System (ADS)
Kavzoglu, T.; Yildiz Erdemir, M.; Tonbul, H.
2016-06-01
Within the last two decades, object-based image analysis (OBIA), which considers objects (i.e. groups of pixels) instead of individual pixels, has gained popularity and attracted increasing interest. The most important stage of OBIA is image segmentation, which groups spectrally similar adjacent pixels considering not only spectral features but also spatial and textural features. Although there are several parameters (scale, shape, compactness and band weights) to be set by the analyst, the scale parameter stands out as the most important one in the segmentation process. Estimating the optimal scale parameter, which depends on image resolution, image object size and the characteristics of the study area, is crucial for increasing classification accuracy. In this study, two scale-selection strategies were implemented in the image segmentation process using a pan-sharpened Quickbird-2 image. The first strategy estimates optimal scale parameters for eight sub-regions. For this purpose, the local variance/rate of change (LV-RoC) graphs produced by the ESP-2 tool were analysed to determine fine, moderate and coarse scales for each region. In the second strategy, the image was segmented using the three candidate scale values (fine, moderate, coarse) determined from the LV-RoC graph calculated for the whole image. The nearest neighbour classifier was applied in all segmentation experiments, and equal numbers of pixels were randomly selected to calculate accuracy metrics (overall accuracy and kappa coefficient). Comparison of region-based and image-based segmentation on the classified images showed that region-based multi-scale OBIA produced significantly more accurate results than image-based single-scale OBIA. The difference in classification accuracy reached 10% in terms of overall accuracy.
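The LV-RoC idea can be sketched in a few lines: compute the mean local variance (LV) at increasing scales and the percentage rate of change (RoC) between consecutive scales, then read candidate fine/moderate/coarse scales off the RoC curve. Since no segmenter is bundled here, "objects" are approximated by square windows; the ESP-2 tool computes LV over the objects of a real segmentation.

```python
import numpy as np

def lv_roc(img, scales):
    """Mean local variance (LV) per scale and its rate of change (RoC).
    'Scale' is approximated by a square window size (a simplification of
    the object-based LV used by ESP-2)."""
    lvs = []
    for s in scales:
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        lvs.append(blocks.std(axis=(1, 3)).mean())   # mean per-window std dev
    lvs = np.asarray(lvs)
    roc = 100.0 * (lvs[1:] - lvs[:-1]) / lvs[:-1]    # percent change per step
    return lvs, roc

ramp = np.add.outer(np.arange(64.0), np.arange(64.0))  # toy image
lvs, roc = lv_roc(ramp, [2, 4, 8, 16])
# candidate fine/moderate/coarse scales are read off the peaks of the RoC curve
```

On real imagery, local maxima of the RoC curve mark scales at which the segmentation captures a distinct level of landscape structure.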
Bidgood, W D; Bray, B; Brown, N; Mori, A R; Spackman, K A; Golichowski, A; Jones, R H; Korman, L; Dove, B; Hildebrand, L; Berg, M
1999-01-01
To support clinically relevant indexing of biomedical images and image-related information based on the attributes of image acquisition procedures and the judgments (observations) expressed by observers in the process of image interpretation. The authors introduce the notion of "image acquisition context," the set of attributes that describe image acquisition procedures, and present a standards-based strategy for utilizing the attributes of image acquisition context as indexing and retrieval keys for digital image libraries. The authors' indexing strategy is based on an interdependent message/terminology architecture that combines the Digital Imaging and Communication in Medicine (DICOM) standard, the SNOMED (Systematized Nomenclature of Human and Veterinary Medicine) vocabulary, and the SNOMED DICOM microglossary. The SNOMED DICOM microglossary provides context-dependent mapping of terminology to DICOM data elements. The capability of embedding standard coded descriptors in DICOM image headers and image-interpretation reports improves the potential for selective retrieval of image-related information. This favorably affects information management in digital libraries.
An Optimal Partial Differential Equations-based Stopping Criterion for Medical Image Denoising.
Khanian, Maryam; Feizi, Awat; Davari, Ali
2014-01-01
Improving the quality of medical images before and after surgical operations is necessary for beginning and speeding up the recovery process. Partial differential equations-based models have become a powerful and well-known tool in different areas of image processing, such as denoising, multiscale image analysis, edge detection and other fields of image processing and computer vision. In this paper, an algorithm for medical image denoising using an anisotropic diffusion filter with a convenient stopping criterion is presented. In this regard, the current paper introduces two strategies: an efficient explicit scheme, together with a practical software technique, to solve the anisotropic diffusion filter despite its mathematical instability; and an automatic stopping criterion that, unlike other stopping criteria, takes only the input image into consideration, while preserving the quality of the denoised image, ease of use and running time. Various medical images are examined to confirm the claim.
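A minimal explicit Perona-Malik anisotropic diffusion sketch with a simple stopping rule is shown below. The stopping rule (stop when the mean update becomes small relative to the image magnitude) is a generic stand-in, not the paper's input-only criterion, and the conductance parameter and time step are illustrative.

```python
import numpy as np

def perona_malik(img, kappa=15.0, dt=0.2, max_iters=200, tol=1e-3):
    """Explicit Perona-Malik anisotropic diffusion (dt <= 0.25 keeps the
    4-neighbour explicit scheme stable).  Stops when the mean update falls
    below `tol` times the mean image magnitude -- a generic stand-in for
    the paper's automatic stopping criterion."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)          # edge-stopping function
    for it in range(max_iters):
        n = np.roll(u, -1, axis=0) - u; n[-1] = 0    # differences to the four
        s = np.roll(u, 1, axis=0) - u;  s[0] = 0     # neighbours, with zero
        e = np.roll(u, -1, axis=1) - u; e[:, -1] = 0 # flux across the borders
        w = np.roll(u, 1, axis=1) - u;  w[:, 0] = 0
        upd = dt * (g(n) * n + g(s) * s + g(e) * e + g(w) * w)
        u += upd
        if np.abs(upd).mean() < tol * (np.abs(u).mean() + 1e-12):
            break
    return u, it + 1

rng = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[:, 16:] = 100.0    # step edge
noisy = clean + rng.normal(0, 10, clean.shape)
denoised, n_iters = perona_malik(noisy)
```

Because the edge-stopping function g suppresses diffusion across large intensity differences, the step edge survives while noise in the flat regions is smoothed away.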
Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing
Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin
2016-01-01
With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially GPU-based methods. In the classical GPU-based imaging algorithm, the GPU is employed to accelerate image processing by massive parallel computing, while the CPU is only used to perform auxiliary work such as data input/output (IO). The computing capability of the CPU is thus ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated with deep collaborative multi-CPU/GPU computing. For CPU parallel imaging, the advanced vector extension (AVX) method is first introduced into the multi-core CPU parallel method for higher efficiency. For GPU parallel imaging, not only are the bottlenecks of memory limitation and frequent data transfers broken, but various optimization strategies, such as streaming and parallel pipelining, are also applied. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method improves SAR imaging efficiency 270-fold over a single-core CPU and realizes real-time imaging, in that the imaging rate outperforms the raw data generation rate. PMID:27070606
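The task-partitioning idea can be sketched without GPUs: give each worker a share of the azimuth lines proportional to its measured throughput, process the shares concurrently, and stitch the results. The "focus" step below is a toy chirp-filter stand-in for range compression, the speed ratio is invented, and threads simulate heterogeneous workers.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def partition(n_rows, speeds):
    """Split rows proportionally to per-worker throughput (e.g. GPU >> CPU)."""
    shares = np.floor(n_rows * np.asarray(speeds, float) / sum(speeds)).astype(int)
    shares[-1] += n_rows - shares.sum()            # give the remainder to the last worker
    bounds = np.concatenate(([0], np.cumsum(shares)))
    return [(bounds[i], bounds[i + 1]) for i in range(len(speeds))]

def focus(rows):
    """Toy stand-in for range compression: FFT, matched-filter multiply, inverse FFT."""
    spec = np.fft.fft(rows, axis=1)
    chirp = np.exp(-1j * np.pi * np.linspace(-1, 1, rows.shape[1]) ** 2)
    return np.fft.ifft(spec * chirp, axis=1)

raw = (np.random.default_rng(1).standard_normal((64, 128))
       + 1j * np.random.default_rng(2).standard_normal((64, 128)))
chunks = partition(raw.shape[0], speeds=[1, 8])    # e.g. one slow and one fast worker
with ThreadPoolExecutor() as ex:
    parts = list(ex.map(lambda ab: focus(raw[ab[0]:ab[1]]), chunks))
img = np.vstack(parts)
assert np.allclose(img, focus(raw))
```

Because range compression is row-independent, the stitched result is identical to sequential processing regardless of how the rows are divided; only throughput changes.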
Automated Analysis of CT Images for the Inspection of Hardwood Logs
Harbin Li; A. Lynn Abbott; Daniel L. Schmoldt
1996-01-01
This paper investigates several classifiers for labeling internal features of hardwood logs using computed tomography (CT) images. A primary motivation is to locate and classify internal defects so that an optimal cutting strategy can be chosen. Previous work has relied on combinations of low-level processing, image segmentation, autoregressive texture modeling, and...
Phage display and molecular imaging: expanding fields of vision in living subjects.
Cochran, R; Cochran, Frank
2010-01-01
In vivo molecular imaging enables non-invasive visualization of biological processes within living subjects, and holds great promise for diagnosis and monitoring of disease. The ability to create new agents that bind to molecular targets and deliver imaging probes to desired locations in the body is critically important to further advance this field. To address this need, phage display, an established technology for the discovery and development of novel binding agents, is increasingly becoming a key component of many molecular imaging research programs. This review discusses the expanding role played by phage display in the field of molecular imaging with a focus on in vivo applications. Furthermore, new methodological advances in phage display that can be directly applied to the discovery and development of molecular imaging agents are described. Various phage library selection strategies are summarized and compared, including selections against purified target, intact cells, and ex vivo tissue, plus in vivo homing strategies. An outline of the process for converting polypeptides obtained from phage display library selections into successful in vivo imaging agents is provided, including strategies to optimize in vivo performance. Additionally, the use of phage particles as imaging agents is also described. In the latter part of the review, a survey of phage-derived in vivo imaging agents is presented, and important recent examples are highlighted. Other imaging applications are also discussed, such as the development of peptide tags for site-specific protein labeling and the use of phage as delivery agents for reporter genes. The review concludes with a discussion of how phage display technology will continue to impact both basic science and clinical applications in the field of molecular imaging.
Image quality prediction - An aid to the Viking lander imaging investigation on Mars
NASA Technical Reports Server (NTRS)
Huck, F. O.; Wall, S. D.
1976-01-01
Image quality criteria and image quality predictions are formulated for the multispectral panoramic cameras carried by the Viking Mars landers. Image quality predictions are based on expected camera performance, Mars surface radiance, and lighting and viewing geometry (fields of view, Mars lander shadows, solar day-night alternation), and are needed for diagnosing camera performance, arriving at a preflight imaging strategy, and revising that strategy should the need arise. Landing considerations, camera control instructions, camera control logic, aspects of the imaging process (spectral response, spatial response, sensitivity), and likely problems are discussed. Major concerns include: degradation of camera response by isotope radiation, uncertainties in lighting and viewing geometry and in landing site local topography, contamination of the camera window by dust abrasion, and initial errors in assigning camera dynamic ranges (gains and offsets).
Caldas, Victor E A; Punter, Christiaan M; Ghodke, Harshad; Robinson, Andrew; van Oijen, Antoine M
2015-10-01
Recent technical advances have made it possible to visualize single molecules inside live cells. Microscopes with single-molecule sensitivity enable the imaging of low-abundance proteins, allowing for a quantitative characterization of molecular properties. Such data sets contain information on a wide spectrum of important molecular properties, with different aspects highlighted in different imaging strategies. The time-lapsed acquisition of images provides information on protein dynamics over long time scales, giving insight into expression dynamics and localization properties. Rapid burst imaging reveals properties of individual molecules in real-time, informing on their diffusion characteristics, binding dynamics and stoichiometries within complexes. This richness of information, however, adds significant complexity to analysis protocols. In general, large datasets of images must be collected and processed in order to produce statistically robust results and identify rare events. More importantly, as live-cell single-molecule measurements remain on the cutting edge of imaging, few protocols for analysis have been established and thus analysis strategies often need to be explored for each individual scenario. Existing analysis packages are geared towards either single-cell imaging data or in vitro single-molecule data and typically operate with highly specific algorithms developed for particular situations. Our tool, iSBatch, instead allows users to exploit the inherent flexibility of the popular open-source package ImageJ, providing a hierarchical framework in which existing plugins or custom macros may be executed over entire datasets or portions thereof. This strategy affords users freedom to explore new analysis protocols within large imaging datasets, while maintaining hierarchical relationships between experiments, samples, fields of view, cells, and individual molecules.
An evolution of image source camera attribution approaches.
Jahanirad, Mehdi; Wahab, Ainuddin Wahid Abdul; Anuar, Nor Badrul
2016-05-01
Camera attribution plays an important role in digital image forensics by providing the evidence and distinguishing characteristics of the origin of the digital image. It allows the forensic analyser to find the possible source camera which captured the image under investigation. However, in real-world applications, these approaches have faced many challenges due to the large set of multimedia data publicly available through photo sharing and social network sites, captured under uncontrolled conditions and subjected to a variety of hardware and software post-processing operations. Moreover, the legal system only accepts the forensic analysis of the digital image evidence if the applied camera attribution techniques are unbiased, reliable, nondestructive and widely accepted by the experts in the field. The aim of this paper is to investigate the evolutionary trend of image source camera attribution approaches from fundamentals to practice, in particular, with the application of image processing and data mining techniques. Extracting implicit knowledge from images using intrinsic image artifacts for source camera attribution requires a structured image mining process. In this paper, we attempt to provide an introductory tutorial on the image processing pipeline, to determine the general classification of the features corresponding to different components for source camera attribution. The article also reviews techniques of source camera attribution more comprehensively in the domain of image forensics, in conjunction with the presentation of a classification of ongoing developments within the specified area. The classification of the existing source camera attribution approaches is presented based on specific parameters, such as the colour image processing pipeline, hardware- and software-related artifacts and the methods to extract such artifacts.
The more recent source camera attribution approaches, which have not yet gained sufficient attention among image forensics researchers, are also critically analysed and further categorised into four different classes, namely, optical aberrations based, sensor camera fingerprints based, processing statistics based and processing regularities based, to present a classification. Furthermore, this paper aims to investigate the challenging problems, and the proposed strategies of such schemes based on the suggested taxonomy to plot an evolution of the source camera attribution approaches with respect to the subjective optimisation criteria over the last decade. The optimisation criteria were determined based on the strategies proposed to increase the detection accuracy, robustness and computational efficiency of source camera brand, model or device attribution. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Ray, Pritha
2011-04-01
Development and marketing of new drugs require stringent validation, which is expensive and time-consuming. Non-invasive multimodality molecular imaging using reporter genes holds great potential to expedite these processes at reduced cost. New generations of smarter molecular imaging strategies, such as split reporter, bioluminescence resonance energy transfer, and multimodality fusion reporter technologies, will further assist in streamlining and shortening the drug discovery and development process. This review illustrates the importance and potential of molecular imaging using multimodality reporter genes in drug development at the preclinical phase.
Xia, Meng-lei; Wang, Lan; Yang, Zhi-xia; Chen, Hong-zhang
2016-04-01
This work proposed a new method that applies image processing and a support vector machine (SVM) to the screening of mold strains. Taking Monascus as an example, morphological characteristics of Monascus colonies were quantified by image processing, and the association between these characteristics and pigment production capability was determined by the SVM. On this basis, a highly automated screening strategy was achieved. The accuracy of the proposed strategy is 80.6%, which is comparable with existing methods (81.1% for microplate and 85.4% for flask screening). Meanwhile, screening 500 colonies takes only 20-30 min, the highest rate among all published results. By applying this automated method, 13 strains with high predicted production were obtained, and the best one produced 2.8-fold the pigment (226 U/mL) and 1.9-fold the lovastatin (51 mg/L) of the parent strain. The current study provides an effective and promising method for strain improvement.
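The quantify-then-classify pipeline can be sketched on synthetic data: extract simple morphological features (area and a circularity index) from binary colony masks, then classify in z-scored feature space. A nearest-centroid classifier is used here as a lightweight stand-in for the paper's SVM, and the round-vs-square "colonies" are invented for illustration.

```python
import numpy as np

def colony_features(mask):
    """Area and a circularity index 4*pi*A/P^2, with the perimeter P taken as
    the count of foreground/background transitions along rows and columns."""
    m = mask.astype(int)
    area = m.sum()
    per = np.abs(np.diff(m, axis=0)).sum() + np.abs(np.diff(m, axis=1)).sum()
    return np.array([area, 4 * np.pi * area / max(per, 1) ** 2], float)

def disk(r, size=41):
    yy, xx = np.mgrid[:size, :size] - size // 2
    return yy ** 2 + xx ** 2 <= r ** 2

def square(r, size=41):
    m = np.zeros((size, size), bool)
    m[size // 2 - r: size // 2 + r + 1, size // 2 - r: size // 2 + r + 1] = True
    return m

# Nearest-centroid classifier on z-scored features (stand-in for the SVM).
labels = ["round"] * 3 + ["square"] * 3
X = np.array([colony_features(m) for m in
              [disk(8), disk(10), disk(12), square(8), square(10), square(12)]])
mu, sd = X.mean(0), X.std(0)
cent = {c: ((X - mu) / sd)[[i for i, l in enumerate(labels) if l == c]].mean(0)
        for c in ("round", "square")}

def classify(mask):
    z = (colony_features(mask) - mu) / sd
    return min(cent, key=lambda c: np.linalg.norm(z - cent[c]))

print(classify(disk(9)), classify(square(9)))
```

Standardizing the features matters: raw area dwarfs the circularity index numerically, so an unscaled distance would ignore shape entirely.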
Finite grade pheromone ant colony optimization for image segmentation
NASA Astrophysics Data System (ADS)
Yuanjing, F.; Li, Y.; Liangjun, K.
2008-06-01
By combining the decision process of ant colony optimization (ACO) with the multistage decision process of image segmentation based on the active contour model (ACM), an algorithm called finite-grade ACO (FACO) for image segmentation is proposed. This algorithm classifies pheromone into finite grades; pheromone is updated by changing grades, so the updated quantity of pheromone is independent of the objective function. The algorithm, which provides a new approach to obtaining precise contours, is proved by means of finite Markov chains to converge linearly to the global optimal solutions. Segmentation experiments with ultrasound heart images show the effectiveness of the algorithm. A comparison of results for segmentation of left-ventricle images shows that the ACO approach to image segmentation is more effective than the GA approach, and that the new pheromone-updating strategy exhibits good time performance in the optimization process.
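The finite-grade pheromone idea, updating by moving between a fixed set of grades so that the deposited quantity is independent of the objective value, can be illustrated schematically. The grade table and one-step update rule below are illustrative assumptions, not the paper's exact scheme:

```python
# Illustrative sketch (not the paper's exact algorithm): pheromone is kept in a
# small number of discrete grades; reinforcement or evaporation moves a trail up
# or down one grade, so the update amount never depends on the objective value.
N_GRADES = 5
pheromone_amount = [1.0, 1.5, 2.25, 3.4, 5.0]  # assumed grade-to-amount table

def update_grade(grade: int, reinforced: bool) -> int:
    """Promote trails chosen by good ants; demote (evaporate) the rest."""
    if reinforced:
        return min(grade + 1, N_GRADES - 1)
    return max(grade - 1, 0)

# A trail that is repeatedly reinforced saturates at the top grade:
g = 0
for _ in range(10):
    g = update_grade(g, reinforced=True)
print(g)  # 4
```

Because the set of grades is finite, the pheromone state space is finite, which is what makes the finite-Markov-chain convergence argument mentioned in the abstract possible.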
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mengel, S.K.; Morrison, D.B.
1985-01-01
Consideration is given to global biogeochemical issues, image processing, remote sensing of tropical environments, global processes, geology, landcover hydrology, and ecosystems modeling. Topics discussed include multisensor remote sensing strategies, geographic information systems, radars, and agricultural remote sensing. Papers are presented on fast feature extraction; a computational approach for adjusting TM imagery terrain distortions; the segmentation of a textured image by a maximum likelihood classifier; analysis of MSS Landsat data; sun angle and background effects on spectral response of simulated forest canopies; an integrated approach for vegetation/landcover mapping with digital Landsat images; geological and geomorphological studies using an image processing technique; and wavelength intensity indices in relation to tree conditions and leaf-nutrient content.
Rudmik, Luke; Smith, Kristine A; Soler, Zachary M; Schlosser, Rodney J; Smith, Timothy L
2014-10-01
Idiopathic olfactory loss is a common clinical scenario encountered by otolaryngologists. While trying to allocate limited health care resources appropriately, the decision to obtain a magnetic resonance imaging (MRI) scan to investigate for a rare intracranial abnormality can be difficult. To evaluate the cost-effectiveness of ordering routine MRI in patients with idiopathic olfactory loss. We performed a modeling-based economic evaluation with a time horizon of less than 1 year. Patients included in the analysis had idiopathic olfactory loss defined by no preceding viral illness or head trauma and negative findings of a physical examination and nasal endoscopy. Routine MRI vs no-imaging strategies. We developed a decision tree economic model from the societal perspective. Effectiveness, probability, and cost data were obtained from the published literature. Litigation rates and costs related to a missed diagnosis were obtained from the Physicians Insurers Association of America. A univariate threshold analysis and multivariate probabilistic sensitivity analysis were performed to quantify the degree of certainty in the economic conclusion of the reference case. The comparative groups included those who underwent routine MRI of the brain with contrast alone and those who underwent no brain imaging. The primary outcome was the cost per correct diagnosis of idiopathic olfactory loss. The mean (SD) cost for the MRI strategy totaled $2400.00 ($1717.54) and was effective 100% of the time, whereas the mean (SD) cost for the no-imaging strategy totaled $86.61 ($107.40) and was effective 98% of the time. The incremental cost-effectiveness ratio for the MRI strategy compared with the no-imaging strategy was $115 669.50, which is higher than most acceptable willingness-to-pay thresholds. 
The threshold analysis demonstrated that when the probability of having a treatable intracranial disease process reached 7.9%, the incremental cost-effectiveness ratio for MRI vs no imaging was $24 654.38. The probabilistic sensitivity analysis demonstrated that the no-imaging strategy was the cost-effective decision with 81% certainty at a willingness-to-pay threshold of $50 000. This economic evaluation suggests that the most cost-effective decision is to not obtain a routine MRI scan of the brain in patients with idiopathic olfactory loss. Outcomes from this study may be used to counsel patients and aid in the decision-making process.
Diken, Mustafa; Pektor, Stefanie; Miederer, Matthias
2016-10-01
Preclinical imaging has become a powerful method for investigation of in vivo processes such as pharmacokinetics of therapeutic substances and visualization of physiologic and pathophysiological mechanisms. These are important aspects to understand diseases and develop strategies to modify their progression with pharmacologic interventions. One promising intervention is the application of specifically tailored nanoscale particles that modulate the immune system to generate a tumor targeting immune response. In this complex interaction between immunomodulatory therapies, the immune system and malignant disease, imaging methods are expected to play a key role on the way to generate new therapeutic strategies. Here, we summarize examples which demonstrate the current potential of imaging methods and develop a perspective on the future value of preclinical imaging of the immune system.
Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier
2009-01-01
The increasing technology of high-resolution airborne image sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The current trend in classification is toward the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and Fuzzy Clustering. DSA is an optimization approach that minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms the simple classifiers used in the combination and several combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989
Sereshti, Hassan; Poursorkh, Zahra; Aliakbarzadeh, Ghazaleh; Zarre, Shahin; Ataolahi, Sahar
2018-01-15
The quality of saffron, a valuable food additive, can considerably affect consumers' health. In this work, a novel pre-processing strategy for image analysis of saffron thin-layer chromatography (TLC) patterns was introduced. This includes performing a series of image pre-processing techniques on TLC images, such as compression, inversion, elimination of the general baseline (using asymmetric least squares (AsLS)), removal of spot shifts and concavity (by correlation optimized warping (COW)), and finally conversion to RGB chromatograms. Subsequently, unsupervised multivariate data analysis, including principal component analysis (PCA) and k-means clustering, was used to investigate the effect of soil salinity, as a cultivation parameter, on saffron TLC patterns. This method was used as a rapid and simple technique to obtain chemical fingerprints of saffron TLC images. Finally, the separated TLC spots were chemically identified using high-performance liquid chromatography-diode array detection (HPLC-DAD). Accordingly, saffron quality from different areas of Iran was evaluated and classified. Copyright © 2017 Elsevier Ltd. All rights reserved.
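The baseline-elimination step above uses asymmetric least squares. A minimal dense-matrix sketch of an Eilers-style AsLS smoother, with illustrative parameter values rather than the paper's settings, is:

```python
# Sketch of asymmetric least squares (AsLS) baseline estimation: a smooth
# baseline z is fit under a second-difference penalty, with asymmetric weights
# so points above the baseline (peaks) are nearly ignored. Parameters lam and p
# are illustrative assumptions.
import numpy as np

def asls_baseline(y, lam=1e4, p=0.01, n_iter=10):
    L = len(y)
    D = np.diff(np.eye(L), 2, axis=0)  # (L-2, L) second-difference operator
    w = np.ones(L)
    for _ in range(n_iter):
        W = np.diag(w)
        z = np.linalg.solve(W + lam * D.T @ D, w * y)
        # Asymmetric reweighting: small weight p above the fit, 1-p below it.
        w = p * (y > z) + (1 - p) * (y < z)
    return z

# Synthetic chromatogram: two Gaussian peaks on a slow linear drift.
x = np.linspace(0, 1, 200)
signal = np.exp(-((x - 0.3) / 0.02) ** 2) + np.exp(-((x - 0.7) / 0.02) ** 2)
drift = 0.5 * x
baseline = asls_baseline(signal + drift)
corrected = signal + drift - baseline  # peaks remain, drift is removed
```

After correction the flat regions sit near zero while the peaks are preserved, which is the behavior the TLC pre-processing relies on before PCA and clustering.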
NASA Astrophysics Data System (ADS)
Radun, Jenni; Leisti, Tuomas; Virtanen, Toni; Nyman, Göte; Häkkinen, Jukka
2014-11-01
To understand the viewing strategies employed in a quality estimation task, we compared two visual tasks--quality estimation and difference estimation. The estimation was done for pairs of natural images having small global changes in quality. Two groups of observers estimated the same set of images, but with different instructions: one group estimated the difference in quality, and the other the difference between the image pairs. The results demonstrated the use of different visual strategies in the two tasks. The quality estimation included more visual planning during the first fixation than the difference estimation, but afterward needed only a few long fixations on the semantically important areas of the image. The difference estimation used many short fixations. Salient image areas were mainly attended to when these areas were also semantically important. The results support the hypothesis that these tasks' general characteristics (evaluation time, number of fixations, area fixated on) show differences in processing, but also suggest that examining only single fixations when comparing tasks is too narrow a view. When planning a subjective experiment, one must remember that a small change in the instructions might lead to a noticeable change in viewing strategy.
Bebko, Genna M; Franconeri, Steven L; Ochsner, Kevin N; Chiao, Joan Y
2014-06-01
According to appraisal theories of emotion, cognitive reappraisal is a successful emotion regulation strategy because it involves cognitively changing our thoughts, which, in turn, change our emotions. However, recent evidence has challenged the importance of cognitive change and, instead, has suggested that attentional deployment may at least partly explain the emotion regulation success of cognitive reappraisal. The purpose of the current study was to examine the causal relationship between attentional deployment and emotion regulation success. We examined 2 commonly used emotion regulation strategies--cognitive reappraisal and expressive suppression--because both depend on attention but have divergent behavioral, experiential, and physiological outcomes. Participants were either instructed to regulate emotions during free-viewing (unrestricted image viewing) or gaze-controlled (restricted image viewing) conditions and to self-report negative emotional experience. For both emotion regulation strategies, emotion regulation success was not altered by changes in participant control over the (a) direction of attention (free-viewing vs. gaze-controlled) during image viewing and (b) valence (negative vs. neutral) of visual stimuli viewed when gaze was controlled. Taken together, these findings provide convergent evidence that attentional deployment does not alter subjective negative emotional experience during either cognitive reappraisal or expressive suppression, suggesting that strategy-specific processes, such as cognitive appraisal and response modulation, respectively, may have a greater impact on emotional regulation success than processes common to both strategies, such as attention.
Mihaylova, Milena; Manahilov, Velitchko
2010-11-24
Research has shown that the processing time for discriminating illusory contours is longer than for real contours. Little is known, however, about whether the visual processes associated with detecting regions of illusory surfaces are also slower than those responsible for detecting luminance-defined images. Using a speed-accuracy trade-off (SAT) procedure, we measured accuracy as a function of processing time for detecting illusory Kanizsa-type and luminance-defined squares embedded in 2D static luminance noise. The data revealed that the illusory images were detected at a slower processing speed than the real images, while the points in time when accuracy departed from chance were not significantly different for the two stimuli. The classification images for detecting illusory and real squares showed that observers employed similar detection strategies, using surface regions of the real and illusory squares. The lack of significant differences between the x-intercepts of the SAT functions for illusory and luminance-modulated stimuli suggests that the detection of surface regions of both images could be based on activation of a single mechanism (the dorsal magnocellular visual pathway). The slower speed of detecting illusory images compared with luminance-defined images could be attributed to slower filling-in processes for regions of illusory images within the dorsal pathway.
Image Tracing: An Analysis of Its Effectiveness in Children's Pictorial Discrimination Learning
ERIC Educational Resources Information Center
Levin, Joel R.; And Others
1977-01-01
A total of 45 fifth grade students were the subjects of an experiment offering support for a component of learning strategy (memory imagery). Various theoretical explanations of the image-tracing phenomenon are considered, including depth of processing, dual coding and frequency. (MS)
Parallel Guessing: A Strategy for High-Speed Computation
1984-09-19
for using additional hardware to obtain higher processing speed). In this paper we argue that parallel guessing for image analysis is a useful...from a true solution, or the correctness of a guess, can be readily checked. We review image-analysis algorithms having a parallel guessing or
Central Executive Dysfunction and Deferred Prefrontal Processing in Veterans with Gulf War Illness.
Hubbard, Nicholas A; Hutchison, Joanna L; Motes, Michael A; Shokri-Kojori, Ehsan; Bennett, Ilana J; Brigante, Ryan M; Haley, Robert W; Rypma, Bart
2014-05-01
Gulf War Illness is associated with toxic exposure to cholinergic disruptive chemicals. The cholinergic system has been shown to mediate the central executive of working memory (WM). The current work proposes that impairment of the cholinergic system in Gulf War Illness patients (GWIPs) leads to behavioral and neural deficits of the central executive of WM. A large sample of GWIPs and matched controls (MCs) underwent functional magnetic resonance imaging during a varied-load working memory task. Compared to MCs, GWIPs showed a greater decline in performance as WM-demand increased. Functional imaging suggested that GWIPs evinced separate processing strategies, deferring prefrontal cortex activity from encoding to retrieval for high demand conditions. Greater activity during high-demand encoding predicted greater WM performance. Behavioral data suggest that WM executive strategies are impaired in GWIPs. Functional data further support this hypothesis and suggest that GWIPs utilize less effective strategies during high-demand WM.
Knowledge-based low-level image analysis for computer vision systems
NASA Technical Reports Server (NTRS)
Dhawan, Atam P.; Baxi, Himanshu; Ranganath, M. V.
1988-01-01
Two algorithms for entry-level image analysis and preliminary segmentation are proposed which are flexible enough to incorporate local properties of the image. The first algorithm involves pyramid-based multiresolution processing and a strategy to define and use interlevel and intralevel link strengths. The second algorithm, which is designed for selected window processing, extracts regions adaptively using local histograms. The preliminary segmentation and a set of features are employed as the input to an efficient rule-based low-level analysis system, resulting in suboptimal meaningful segmentation.
Semi-automated Image Processing for Preclinical Bioluminescent Imaging.
Slavine, Nikolai V; McColl, Roderick W
Bioluminescent imaging is a valuable noninvasive technique for investigating tumor dynamics and specific biological molecular events in living animals, to better understand the effects of human disease in animal models. The purpose of this study was to develop and test a strategy for automated bioluminescence image processing, from data acquisition to obtaining 3D images. To optimize this procedure, a semi-automated image processing approach with a multi-modality image handling environment was developed. To identify a bioluminescent source location and strength, we used the light flux detected on the surface of the imaged object by CCD cameras. For phantom calibration tests and object surface reconstruction we used the MLEM algorithm. For internal bioluminescent sources we used the diffusion approximation, balancing the internal and external intensities on the boundary of the medium to determine an initial approximation of the photon fluence; we subsequently applied a novel iterative deconvolution method to obtain the final reconstruction result. We find that the reconstruction techniques successfully used the depth-dependent light transport approach and semi-automated image processing to provide a realistic 3D model of the lung tumor. Our image processing software can optimize and decrease the time of volumetric imaging and quantitative assessment. The data obtained from light-phantom and mouse lung tumor images demonstrate the utility of the image reconstruction algorithms and the semi-automated approach for the bioluminescent image processing procedure. We suggest that the developed image processing approach can be applied to preclinical imaging studies: characterizing tumor growth, identifying metastases, and potentially determining the effectiveness of cancer treatment.
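The MLEM reconstruction step mentioned above can be sketched for a generic linear imaging model y = A x. The system below is a synthetic toy, not the phantom data, and the iteration is the standard multiplicative MLEM update rather than the authors' full pipeline:

```python
# Schematic MLEM (maximum-likelihood expectation maximization) for a linear
# imaging model y = A @ x with nonnegative x. A and y here are synthetic.
import numpy as np

def mlem(A, y, n_iter=200):
    x = np.ones(A.shape[1])            # nonnegative initial estimate
    sens = A.T @ np.ones(A.shape[0])   # sensitivity term (column sums of A)
    for _ in range(n_iter):
        x = x / sens * (A.T @ (y / (A @ x)))  # multiplicative MLEM update
    return x

rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, size=(30, 10))   # toy system matrix
x_true = rng.uniform(0.5, 2.0, size=10)    # toy source distribution
y = A @ x_true                             # noiseless measurements
x_hat = mlem(A, y)
```

The multiplicative form keeps every iterate nonnegative, which is why MLEM is a natural fit for light-flux (photon-count-like) data.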
Mixed Membership Distributions with Applications to Modeling Multiple Strategy Usage
ERIC Educational Resources Information Center
Galyardt, April
2012-01-01
This dissertation examines two related questions. "How do mixed membership models work?" and "Can mixed membership be used to model how students use multiple strategies to solve problems?". Mixed membership models have been used in thousands of applications from text and image processing to genetic microarray analysis. Yet…
Technologies for imaging neural activity in large volumes
Ji, Na; Freeman, Jeremy; Smith, Spencer L.
2017-01-01
Neural circuitry has evolved to form distributed networks that act dynamically across large volumes. Because it collects data from individual planes, conventional microscopy cannot sample circuitry across large volumes at the temporal resolution relevant to neural circuit function and behaviors. Here, we review emerging technologies for rapid volume imaging of neural circuitry. We focus on two critical challenges: the inertia of optical systems, which limits imaging speed, and aberrations, which restrict the imaging volume. Optical sampling time must be long enough to ensure high-fidelity measurements, but optimized sampling strategies and point-spread-function engineering can facilitate rapid volume imaging of neural activity within this constraint. We also discuss new computational strategies for the processing and analysis of volume imaging data of increasing size and complexity. Together, optical and computational advances are providing a broader view of neural circuit dynamics and help elucidate how brain regions work in concert to support behavior. PMID:27571194
A depth enhancement strategy for kinect depth image
NASA Astrophysics Data System (ADS)
Quan, Wei; Li, Hua; Han, Cheng; Xue, Yaohong; Zhang, Chao; Hu, Hanping; Jiang, Zhengang
2018-03-01
Kinect is a motion sensing input device widely used in computer vision and related fields. However, there are many inaccurate depth data in Kinect depth images, even with Kinect v2. In this paper, an algorithm is proposed to enhance Kinect v2 depth images. According to the principle of its depth measurement, the foreground and the background are considered separately. For the background, holes are filled according to the depth data in the neighborhood. For the foreground, a filling algorithm based on the color image, which considers both spatial and color information, is proposed. An adaptive joint bilateral filtering method is used to reduce noise. Experimental results show that the processed depth images have clean backgrounds and clear edges, and the results are better than those of traditional strategies. The method can be applied in 3D reconstruction to pre-process depth images in real time and obtain accurate results.
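The foreground-filling idea, weighting valid depth neighbours by spatial distance and by colour similarity in the guide image, can be sketched as a toy joint-bilateral hole fill. The parameters and the single-channel guide image are simplifying assumptions, not the paper's method:

```python
# Toy joint/guided hole fill: invalid depth pixels (value 0) are replaced by a
# weighted average of valid neighbours, with weights combining spatial distance
# and colour similarity to a guide image. Parameters are illustrative.
import numpy as np

def guided_fill(depth, color, radius=2, sigma_s=1.5, sigma_c=20.0):
    out = depth.astype(float).copy()
    H, W = depth.shape
    for i, j in zip(*np.where(depth == 0)):       # iterate over hole pixels
        num = den = 0.0
        for di in range(-radius, radius + 1):
            for dj in range(-radius, radius + 1):
                ni, nj = i + di, j + dj
                if 0 <= ni < H and 0 <= nj < W and depth[ni, nj] > 0:
                    w = np.exp(-(di * di + dj * dj) / (2 * sigma_s ** 2)
                               - (float(color[ni, nj]) - float(color[i, j])) ** 2
                               / (2 * sigma_c ** 2))
                    num += w * depth[ni, nj]
                    den += w
        if den > 0:
            out[i, j] = num / den
    return out

# Small example: a flat depth plane with one missing measurement.
depth = np.full((5, 5), 100, dtype=np.uint16)
depth[2, 2] = 0
color = np.full((5, 5), 128, dtype=np.uint8)
filled = guided_fill(depth, color)
print(round(filled[2, 2], 3))  # 100.0
```

The colour term is what keeps the fill from bleeding depth across object boundaries: a neighbour with a very different guide colour contributes almost no weight.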
Image-guidance enables new methods for customizing cochlear implant stimulation strategies
Noble, Jack H.; Labadie, Robert F.; Gifford, René H.; Dawant, Benoit M.
2013-01-01
Over the last 20 years, cochlear implants (CIs) have become what is arguably the most successful neural prosthesis to date. Despite this success, a significant number of CI recipients experience marginal hearing restoration, and, even among the best performers, restoration to normal fidelity is rare. In this article, we present image processing techniques that can be used to detect, for the first time, the positions of implanted CI electrodes and the nerves they stimulate for individual CI users. These techniques permit development of new, customized CI stimulation strategies. We present one such strategy and show that it leads to significant hearing improvement in an experiment conducted with 11 CI recipients. These results indicate that image-guidance can be used to improve hearing outcomes for many existing CI recipients without requiring additional surgical procedures. PMID:23529109
ERIC Educational Resources Information Center
Canelos, James
An internal cognitive variable--mental imagery representation--was studied using a set of three information-processing strategies under external stimulus visual display conditions for various learning levels. The copy strategy provided verbal and visual dual-coding and required formation of a vivid mental image. The relational strategy combined…
Paradigms of perception in clinical practice.
Jacobson, Francine L; Berlanstein, Bruce P; Andriole, Katherine P
2006-06-01
Display strategies for medical images in radiology have evolved in tandem with the technology by which images are made. The close of the 20th century, nearly coincident with the 100th anniversary of the discovery of x-rays, brought radiologists to a new crossroad in the evolution of image display. The increasing availability, speed, and flexibility of computer technology can now revolutionize how images are viewed and interpreted. Radiologists are not yet in agreement regarding the next paradigm for image display. The possibilities are being explored systematically through the Society for Computer Applications in Radiology's Transforming the Radiological Interpretation Process initiative. The varied input of radiologists who work in a large variety of settings will enable new display strategies to best serve radiologists in the detection and quantification of disease. Considerations and possibilities for the future are presented in this paper.
Real-time high-velocity resolution color Doppler OCT
NASA Astrophysics Data System (ADS)
Westphal, Volker; Yazdanfar, Siavash; Rollins, Andrew M.; Izatt, Joseph A.
2001-05-01
Color Doppler optical coherence tomography (CDOCT), also called optical Doppler tomography, is a noninvasive optical imaging technique that allows micron-scale physiological flow mapping simultaneous with morphological OCT imaging. Current systems for real-time endoscopic optical coherence tomography (EOCT) would be enhanced by the capability to visualize sub-surface blood flow for applications in early cancer diagnosis and the management of bleeding ulcers. Unfortunately, previous implementations of CDOCT have either been too computationally expensive (employing Fourier or Hilbert transform techniques) to permit real-time imaging of flow, or have been restricted to imaging excessively high flow velocities when used in real time. We have developed a novel Doppler OCT signal-processing strategy capable of imaging physiological flow rates in real time. This strategy employs cross-correlation processing of sequential A-scans in an EOCT image, as opposed to the autocorrelation processing described previously. To measure Doppler shifts in the kHz range using this technique, it was necessary to stabilize the EOCT interferometer center frequency, eliminate parasitic phase noise, and construct a digital cross-correlation unit able to correlate signals of megahertz bandwidth at a fixed lag of up to a few ms. The performance of the color Doppler OCT system was demonstrated in a flow phantom, with a minimum detectable flow velocity of ~0.8 mm/s at a data acquisition rate of 8 images/second (480 A-scans/image) using a handheld probe. Dynamic flow, as well as freehand use, was demonstrated. Flow was also detectable in a phantom in combination with a clinically usable endoscopic probe.
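Cross-correlation of sequential A-scans amounts to estimating the phase shift between successive scans, from which the Doppler frequency follows. A Kasai-style sketch on synthetic complex A-scans (the Doppler frequency and repetition interval below are illustrative, not the system's parameters):

```python
# Sketch: estimate a Doppler shift from the phase of the complex
# cross-correlation between two sequential A-scans. Synthetic signals only;
# this is not the authors' hardware implementation.
import numpy as np

f_doppler = 2000.0        # Hz, simulated Doppler shift (assumption)
T = 1 / 32000.0           # s, assumed A-scan repetition interval
n_samples = 256

rng = np.random.default_rng(2)
phase0 = rng.uniform(0, 2 * np.pi, n_samples)
a1 = np.exp(1j * phase0)                                  # A-scan k
a2 = np.exp(1j * (phase0 + 2 * np.pi * f_doppler * T))    # A-scan k+1

# Kasai-style estimator: mean phase of the conjugate product between scans.
dphi = np.angle(np.mean(a2 * np.conj(a1)))
f_est = dphi / (2 * np.pi * T)
print(round(f_est))  # 2000
```

Because only one complex multiply-accumulate per depth sample is needed, this estimator is cheap enough for real-time processing, unlike full Fourier or Hilbert transform approaches.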
The correlation study of parallel feature extractor and noise reduction approaches
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dewi, Deshinta Arrova; Sundararajan, Elankovan; Prabuwono, Anton Satria
2015-05-15
This paper presents a literature review of techniques for developing a parallel feature extractor, and of its correlation with noise-reduction approaches for low-light-intensity images. Low-light-intensity images are typically dark and low in contrast. Without proper handling, such images regularly lead to misperception of objects and textures and to the inability to segment them. The resulting visual illusions often cause disorientation, user fatigue, and poor detection and classification performance in both humans and computer algorithms. Noise reduction (NR) is therefore an essential pre-processing step for subsequent image processing steps such as edge detection, image segmentation, and image compression. A parallel feature extractor (PFE), meant to capture the visual content of images, involves partitioning images into segments, detecting image overlaps if any, and controlling distributed and redistributed segments to extract features. Working on low-light-intensity images poses challenges for the PFE, which depends closely on the quality of its pre-processing steps. Many papers have suggested well-established NR and PFE strategies, but few have suggested or examined the correlation between them. This paper reviews the best NR and PFE approaches, with a detailed explanation of the suggested correlation, which may suggest relevant strategies for PFE development. With the help of knowledge-based reasoning, computational approaches, and algorithms, we present a correlation study between NR and PFE that can be useful for the development and enhancement of existing PFEs.
NASA Astrophysics Data System (ADS)
Kwee, Edward; Peterson, Alexander; Stinson, Jeffrey; Halter, Michael; Yu, Liya; Majurski, Michael; Chalfoun, Joe; Bajcsy, Peter; Elliott, John
2018-02-01
Induced pluripotent stem cells (iPSCs) are reprogrammed cells that can have heterogeneous biological potential. Quality assurance metrics for reprogrammed iPSCs will be critical to ensure reliable use in cell therapies and personalized diagnostic tests. We present a quantitative phase imaging (QPI) workflow that includes acquiring, processing, and stitching multiple adjacent image tiles across a large field of view (LFOV) of a culture vessel. Low-magnification image tiles (10x) were acquired with a Phasics SID4BIO camera on a Zeiss microscope. iPSC cultures were maintained using a custom stage incubator on an automated stage. We implement an image acquisition strategy that compensates for non-flat illumination wavefronts to enable imaging of an entire well plate, including the meniscus region normally obscured in Zernike phase contrast imaging. Polynomial fitting and background mode correction were implemented to enable comparability and stitching between multiple tiles. LFOV imaging of reference materials indicated that the image acquisition and processing strategies did not affect quantitative phase measurements across the LFOV. Analysis of iPSC colony images demonstrated that the mass doubling time was significantly different from the area doubling time. These measurements were benchmarked with prototype microsphere beads and etched-glass gratings with specified spatial dimensions, designed to be QPI reference materials with optical path-length shifts suitable for cell microscopy. This QPI workflow and the use of reference materials can provide a non-destructive, traceable imaging method for novel iPSC heterogeneity characterization.
High-order statistics of Weber local descriptors for image representation.
Han, Xian-Hua; Chen, Yen-Wei; Xu, Gang
2015-06-01
Highly discriminant visual features play a key role in many image classification applications. This study aims to extract highly discriminant features from images by exploring a robust local descriptor inspired by Weber's law. The investigated local descriptor is based on the fact that human perception of a pattern depends not only on the absolute intensity of the stimulus but also on its relative variance. Therefore, we first transform the original stimulus (the images in our study) into a differential-excitation domain according to Weber's law, and then explore a local patch, called a micro-Texton, in the transformed domain as the Weber local descriptor (WLD). Furthermore, we propose to model the Weber local descriptors with a parametric probability process and to extract higher-order statistics of the model parameters for image representation. The proposed strategy can adaptively characterize the WLD space using a generative probability model and then learn parameters that better fit the training space, leading to a more discriminant representation of images. To validate its efficiency, we apply the proposed strategy to three image classification applications, texture, food image and HEp-2 cell pattern recognition, which confirm its advantages over state-of-the-art approaches.
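The differential-excitation transform mentioned above is commonly formulated as xi = arctan(sum_i (x_i - x_c) / x_c) over a 3x3 neighbourhood. A minimal sketch of just that component (not the paper's full micro-Texton or probabilistic modeling pipeline):

```python
# Differential excitation of a Weber local descriptor (WLD), common 3x3
# formulation: xi = arctan(sum of (neighbour - center) / center).
# A minimal illustration, not the paper's complete descriptor.
import numpy as np

def differential_excitation(img):
    img = img.astype(float)
    pad = np.pad(img, 1, mode="edge")   # replicate borders for edge pixels
    H, W = img.shape
    xi = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            center = pad[i + 1, j + 1]
            neigh = pad[i:i + 3, j:j + 3]
            diff = neigh.sum() - 9 * center   # sum of (x_i - x_c) over 8 neighbours
            xi[i, j] = np.arctan(diff / (center + 1e-6))  # epsilon avoids /0
    return xi

img = np.array([[10, 10, 10],
                [10, 50, 10],
                [10, 10, 10]], dtype=np.uint8)
xi = differential_excitation(img)   # center pixel gets a negative excitation
```

The arctan bounds the response, mirroring Weber's law: what matters is the intensity change relative to the local intensity, not the absolute change.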
Calipso's Mission Design: Sun-Glint Avoidance Strategies
NASA Technical Reports Server (NTRS)
Mailhe, Laurie M.; Schiff, Conrad; Stadler, John H.
2004-01-01
CALIPSO will fly in formation with the Aqua spacecraft to obtain coincident images of a portion of the Aqua/MODIS swath. Since MODIS pixels suffering sun-glint degradation are not processed, it is essential that CALIPSO only co-image the glint-free portion of the MODIS instrument swath. This paper presents sun-glint avoidance strategies for the CALIPSO mission. First, we introduce the Aqua sun-glint geometry and its relation to the CALIPSO-Aqua formation flying parameters. Then, we detail our implementation of the computation and perform a cross-track trade-space analysis. Finally, we analyze the impact of the sun-glint avoidance strategy on the spacecraft power and delta-V budgets over the mission lifetime.
NASA Astrophysics Data System (ADS)
Preusker, F.; Oberst, J.; Stark, A.; Burmeister, S.
2018-04-01
We produce high-resolution (222 m/grid element) Digital Terrain Models (DTMs) for Mercury using stereo images from the MESSENGER orbital mission. We have developed a scheme to process large numbers of images, typically more than 6000, by photogrammetric techniques, including multiple image matching, a pyramid strategy, and bundle block adjustments. In this paper, we present models for map quadrangles H11, H12, H13, and H14 of the southern hemisphere.
Schwegmann, Alexander; Lindemann, Jens Peter; Egelhaaf, Martin
2014-01-01
Many flying insects, such as flies, wasps and bees, pursue a saccadic flight and gaze strategy. This behavioral strategy is thought to separate the translational and rotational components of self-motion and, thereby, to reduce the computational efforts to extract information about the environment from the retinal image flow. Because of the distinguishing dynamic features of this active flight and gaze strategy of insects, the present study analyzes systematically the spatiotemporal statistics of image sequences generated during saccades and intersaccadic intervals in cluttered natural environments. We show that, in general, rotational movements with saccade-like dynamics elicit fluctuations and overall changes in brightness, contrast and spatial frequency of up to two orders of magnitude larger than translational movements at velocities that are characteristic of insects. Distinct changes in image parameters during translations are only caused by nearby objects. Image analysis based on larger patches in the visual field reveals smaller fluctuations in brightness and spatial frequency composition compared to small patches. The temporal structure and extent of these changes in image parameters define the temporal constraints imposed on signal processing performed by the insect visual system under behavioral conditions in natural environments. PMID:25340761
Cellular image segmentation using n-agent cooperative game theory
NASA Astrophysics Data System (ADS)
Dimock, Ian B.; Wan, Justin W. L.
2016-03-01
Image segmentation is an important problem in computer vision and has significant applications in the segmentation of cellular images. Many different imaging techniques exist and produce a variety of image properties which pose difficulties to image segmentation routines. Bright-field images are particularly challenging because of the non-uniform shape of the cells, the low contrast between cells and background, and imaging artifacts such as halos and broken edges. Classical segmentation techniques often produce poor results on these challenging images. Previous attempts at bright-field imaging are often limited in scope to the images that they segment. In this paper, we introduce a new algorithm for automatically segmenting cellular images. The algorithm incorporates two game theoretic models which allow each pixel to act as an independent agent with the goal of selecting their best labelling strategy. In the non-cooperative model, the pixels choose strategies greedily based only on local information. In the cooperative model, the pixels can form coalitions, which select labelling strategies that benefit the entire group. Combining these two models produces a method which allows the pixels to balance both local and global information when selecting their label. With the addition of k-means and active contour techniques for initialization and post-processing purposes, we achieve a robust segmentation routine. The algorithm is applied to several cell image datasets including bright-field images, fluorescent images and simulated images. Experiments show that the algorithm produces good segmentation results across the variety of datasets which differ in cell density, cell shape, contrast, and noise levels.
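The non-cooperative step described above, each pixel acting as a greedy agent, can be sketched as follows. This is a toy illustration under stated assumptions: the quadratic intensity-fit payoff, the 4-neighbour agreement bonus, the fixed class means (which k-means initialization would supply in the actual routine), and all names are invented here, not taken from the paper.

```python
def greedy_segment(img, w=0.8, iters=10):
    """Toy non-cooperative labelling: each pixel-agent greedily picks the
    label (0 = background, 1 = cell) maximising a local payoff that trades
    intensity fit against agreement with its 4-neighbours.

    img: 2D list of floats in [0, 1]. Returns a 2D list of labels.
    """
    h, cols = len(img), len(img[0])
    labels = [[1 if v > 0.5 else 0 for v in row] for row in img]  # initial guess
    means = [0.2, 0.8]  # assumed class means; k-means would provide these
    for _ in range(iters):
        changed = False
        for r in range(h):
            for c in range(cols):
                nbrs = [labels[rr][cc]
                        for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                        if 0 <= rr < h and 0 <= cc < cols]
                def payoff(lab):
                    fit = -(img[r][c] - means[lab]) ** 2   # intensity fit
                    agree = sum(1 for n in nbrs if n == lab)  # local agreement
                    return fit + w * agree
                best = max((0, 1), key=payoff)
                if best != labels[r][c]:
                    labels[r][c] = best
                    changed = True
        if not changed:  # stable labelling reached
            break
    return labels
```

The agreement term is what removes isolated noise pixels: a lone bright pixel in a dark background earns more payoff by joining its neighbours' label than by keeping its own, which mimics the smoothing effect the cooperative coalitions provide at a larger scale.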
Lin, Jyh-Miin; Patterson, Andrew J; Chang, Hing-Chiu; Gillard, Jonathan H; Graves, Martin J
2015-10-01
To propose a new reduced field-of-view (rFOV) strategy for iterative reconstructions in a clinical environment. Iterative reconstructions can incorporate regularization terms to improve the image quality of periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI. However, the large amount of calculations required for full FOV iterative reconstructions has posed a huge computational challenge for clinical usage. By subdividing the entire problem into smaller rFOVs, the iterative reconstruction can be accelerated on a desktop with a single graphic processing unit (GPU). This rFOV strategy divides the iterative reconstruction into blocks, based on the block-diagonal dominant structure. A near real-time reconstruction system was developed for the clinical MR unit, and parallel computing was implemented using the object-oriented model. In addition, the Toeplitz method was implemented on the GPU to reduce the time required for full interpolation. Using the data acquired from the PROPELLER MRI, the reconstructed images were then saved in the digital imaging and communications in medicine format. The proposed rFOV reconstruction reduced the gridding time by 97%, as the total iteration time was 3 s even with multiple processes running. A phantom study showed that the structure similarity index for rFOV reconstruction was statistically superior to conventional density compensation (p < 0.001). In vivo study validated the increased signal-to-noise ratio, which is over four times higher than with density compensation. Image sharpness index was improved using the regularized reconstruction implemented. The rFOV strategy permits near real-time iterative reconstruction to improve the image quality of PROPELLER images. Substantial improvements in image quality metrics were validated in the experiments. The concept of rFOV reconstruction may potentially be applied to other kinds of iterative reconstructions for shortened reconstruction duration.
NASA Astrophysics Data System (ADS)
Wang, Xiaohui; Couwenhoven, Mary E.; Foos, David H.; Doran, James; Yankelevitz, David F.; Henschke, Claudia I.
2008-03-01
An image-processing method has been developed to improve the visibility of tube and catheter features in portable chest x-ray (CXR) images captured in the intensive care unit (ICU). The image-processing method is based on a multi-frequency approach, wherein the input image is decomposed into different spatial frequency bands, and those bands that contain the tube and catheter signals are individually enhanced by nonlinear boosting functions. Using a random sampling strategy, 50 cases were retrospectively selected for the study from a large database of portable CXR images that had been collected from multiple institutions over a two-year period. All images used in the study were captured using photo-stimulable, storage phosphor computed radiography (CR) systems. Each image was processed two ways. The images were processed with default image processing parameters such as those used in clinical settings (control). The 50 images were then separately processed using the new tube and catheter enhancement algorithm (test). Three board-certified radiologists participated in a reader study to assess differences in both detection-confidence performance and diagnostic efficiency between the control and test images. Images were evaluated on a diagnostic-quality, 3-megapixel monochrome monitor. Two scenarios were studied: the baseline scenario, representative of today's workflow (a single-control image presented with the window/level adjustments enabled) vs. the test scenario (a control/test image pair presented with a toggle enabled and the window/level settings disabled). The radiologists were asked to read the images in each scenario as they normally would for clinical diagnosis. Trend analysis indicates that the test scenario offers improved reading efficiency while providing as good or better detection capability compared to the baseline scenario.
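The multi-frequency idea, decompose the image, boost the bands that carry thin tube and catheter features, and recombine, can be sketched on a 1D scan line. This is a simplified stand-in: a moving-average split into one low and one high band, and a plain linear gain where the actual method uses multiple bands and nonlinear boosting functions.

```python
def box_blur(signal, radius=2):
    """Simple moving-average low-pass filter (edge-clamped)."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def enhance(signal, gain=2.0, radius=2):
    """Split into a low band (blur) and a high band (residual), boost the
    high band, where thin edge features live, and recombine.
    A linear gain stands in for the paper's nonlinear boosting functions.
    """
    low = box_blur(signal, radius)
    high = [s - l for s, l in zip(signal, low)]
    return [l + gain * h for l, h in zip(low, high)]
```

A flat region passes through unchanged (its high band is zero), while a step edge gains overshoot, which is exactly the increased local contrast that makes a faint catheter easier to see.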
Markl, Michael; Harloff, Andreas; Bley, Thorsten A; Zaitsev, Maxim; Jung, Bernd; Weigang, Ernst; Langer, Mathias; Hennig, Jürgen; Frydrychowicz, Alex
2007-04-01
To evaluate an improved image acquisition and data-processing strategy for assessing aortic vascular geometry and 3D blood flow at 3T. In a study with five normal volunteers and seven patients with known aortic pathology, prospectively ECG-gated cine three-dimensional (3D) MR velocity mapping with improved navigator gating, real-time adaptive k-space ordering and dynamic adjustment of the navigator acceptance criteria was performed. In addition to morphological information and three-directional blood flow velocities, phase-contrast (PC)-MRA images were derived from the same data set, which permitted 3D isosurface rendering of vascular boundaries in combination with visualization of blood-flow patterns. Analysis of navigator performance and image quality revealed improved scan efficiencies of 63.6%+/-10.5% and temporal resolution (<50 msec) compared to previous implementations. Semiquantitative evaluation of image quality by three independent observers demonstrated excellent general image appearance with moderate blurring and minor ghosting artifacts. Results from volunteer and patient examinations illustrate the potential of the improved image acquisition and data-processing strategy for identifying normal and pathological blood-flow characteristics. Navigator-gated time-resolved 3D MR velocity mapping at 3T in combination with advanced data processing is a powerful tool for performing detailed assessments of global and local blood-flow characteristics in the aorta to describe or exclude vascular alterations. Copyright (c) 2007 Wiley-Liss, Inc.
Concrete Crack Identification Using a UAV Incorporating Hybrid Image Processing.
Kim, Hyunjun; Lee, Junhwa; Ahn, Eunjong; Cho, Soojin; Shin, Myoungsu; Sim, Sung-Han
2017-09-07
Crack assessment is an essential process in the maintenance of concrete structures. In general, concrete cracks are inspected by manual visual observation of the surface, which is intrinsically subjective as it depends on the experience of inspectors. Further, it is time-consuming, expensive, and often unsafe when inaccessible structural members are to be assessed. Unmanned aerial vehicle (UAV) technologies combined with digital image processing have recently been applied to crack assessment to overcome the drawbacks of manual visual inspection. However, identification of crack information in terms of width and length has not been fully explored in the UAV-based applications, because of the absence of distance measurement and tailored image processing. This paper presents a crack identification strategy that combines hybrid image processing with UAV technology. Equipped with a camera, an ultrasonic displacement sensor, and a WiFi module, the system provides the image of cracks and the associated working distance from a target structure on demand. The obtained information is subsequently processed by hybrid image binarization to estimate the crack width accurately while minimizing the loss of the crack length information. The proposed system has shown to successfully measure cracks thicker than 0.1 mm with the maximum length estimation error of 7.3%.
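The final conversion step, binarized pixels plus the ultrasonic working distance into a physical crack width, can be sketched under a pinhole-camera assumption: pixel footprint on the target scales linearly with distance. The focal-length value and function names below are hypothetical, not the paper's calibration.

```python
def _runs(row):
    """Yield maximal runs of consecutive crack (truthy) pixels."""
    run = []
    for v in row:
        if v:
            run.append(v)
        elif run:
            yield run
            run = []
    if run:
        yield run

def crack_width_mm(binary_row, distance_mm, focal_px=1000.0):
    """Estimate crack width from one binarized scan line.

    binary_row : list of 0/1 values, where 1 marks crack pixels.
    distance_mm: working distance reported by the ultrasonic sensor.
    focal_px   : assumed camera focal length in pixels (hypothetical value).
    Pinhole model: one pixel covers distance_mm / focal_px millimetres.
    """
    mm_per_px = distance_mm / focal_px
    width_px = max((len(run) for run in _runs(binary_row)), default=0)
    return width_px * mm_per_px
```

For example, a 3-pixel run at a 500 mm working distance maps to 1.5 mm under these assumed parameters; the real system's hybrid binarization is what makes that pixel count trustworthy for sub-millimetre cracks.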
Shrivastava, Sajal; Sohn, Il-Yung; Son, Young-Min; Lee, Won-Il; Lee, Nae-Eung
2015-12-14
Although real-time label-free fluorescent aptasensors based on nanomaterials are increasingly recognized as a useful strategy for the detection of target biomolecules with high fidelity, the lack of an imaging-based quantitative measurement platform limits their implementation with biological samples. Here we introduce an ensemble strategy for a real-time label-free fluorescent graphene (Gr) aptasensor platform. This platform employs aptamer length-dependent tunability, thus enabling the reagentless quantitative detection of biomolecules through computational processing coupled with real-time fluorescence imaging data. We demonstrate that this strategy effectively delivers dose-dependent quantitative readouts of adenosine triphosphate (ATP) concentration on chemical vapor deposited (CVD) Gr and reduced graphene oxide (rGO) surfaces, thereby providing cytotoxicity assessment. Compared with conventional fluorescence spectrometry methods, our highly efficient, universally applicable, and rational approach will facilitate broader implementation of imaging-based biosensing platforms for the quantitative evaluation of a range of target molecules.
TDat: An Efficient Platform for Processing Petabyte-Scale Whole-Brain Volumetric Images.
Li, Yuxin; Gong, Hui; Yang, Xiaoquan; Yuan, Jing; Jiang, Tao; Li, Xiangning; Sun, Qingtao; Zhu, Dan; Wang, Zhenyu; Luo, Qingming; Li, Anan
2017-01-01
Three-dimensional imaging of whole mammalian brains at single-neuron resolution has generated terabyte (TB)- and even petabyte (PB)-sized datasets. Due to their size, processing these massive image datasets can be hindered by the computer hardware and software typically found in biological laboratories. To fill this gap, we have developed an efficient platform named TDat, which adopts a novel data reformatting strategy by reading cuboid data and employing parallel computing. In data reformatting, TDat is more efficient than any other software. In data accessing, we adopted parallelization to fully explore the capability for data transmission in computers. We applied TDat in large-volume data rigid registration and neuron tracing in whole-brain data with single-neuron resolution, which has never been demonstrated in other studies. We also showed its compatibility with various computing platforms, image processing software and imaging systems.
Illuminating magma shearing processes via synchrotron imaging
NASA Astrophysics Data System (ADS)
Lavallée, Yan; Cai, Biao; Coats, Rebecca; Kendrick, Jackie E.; von Aulock, Felix W.; Wallace, Paul A.; Le Gall, Nolwenn; Godinho, Jose; Dobson, Katherine; Atwood, Robert; Holness, Marian; Lee, Peter D.
2017-04-01
Our understanding of geomaterial behaviour and processes has long been limited by our inability to see inside a material as "something" happens. In volcanology, research strategies have increasingly sought to illuminate the subsurface of materials at all scales, from muon tomography to image the inside of volcanoes, to seismic tomography to image magmatic bodies in the crust, and most recently synchrotron-based X-ray tomography to image the inside of material as we test it under controlled conditions. Here, we explore some of the novel findings made on the evolution of magma during shearing, including observations and discussions of magma flow and failure as well as petrological reaction kinetics.
Marchewka, Artur; Kherif, Ferath; Krueger, Gunnar; Grabowska, Anna; Frackowiak, Richard; Draganski, Bogdan
2014-05-01
Multi-centre data repositories like the Alzheimer's Disease Neuroimaging Initiative (ADNI) offer a unique research platform, but pose questions concerning comparability of results when using a range of imaging protocols and data processing algorithms. The variability is mainly due to the non-quantitative character of the widely used structural T1-weighted magnetic resonance (MR) images. Although the stability of the main effect of Alzheimer's disease (AD) on brain structure across platforms and field strength has been addressed in previous studies using multi-site MR images, there are only sparse empirically-based recommendations for processing and analysis of pooled multi-centre structural MR data acquired at different magnetic field strengths (MFS). Aiming to minimise potential systematic bias when using ADNI data we investigate the specific contributions of spatial registration strategies and the impact of MFS on voxel-based morphometry in AD. We perform a whole-brain analysis within the framework of Statistical Parametric Mapping, testing for main effects of various diffeomorphic spatial registration strategies, of MFS and their interaction with disease status. Beyond the confirmation of medial temporal lobe volume loss in AD, we detect a significant impact of spatial registration strategy on estimation of AD related atrophy. Additionally, we report a significant effect of MFS on the assessment of brain anatomy (i) in the cerebellum, (ii) the precentral gyrus and (iii) the thalamus bilaterally, showing no interaction with the disease status. We provide empirical evidence in support of pooling data in multi-centre VBM studies irrespective of disease status or MFS. Copyright © 2013 Wiley Periodicals, Inc.
Automatic anatomy recognition in whole-body PET/CT images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Huiqian; Udupa, Jayaram K., E-mail: jay@mail.med.upenn.edu; Odhner, Dewey
Purpose: Whole-body positron emission tomography/computed tomography (PET/CT) has become a standard method of imaging patients with various disease conditions, especially cancer. Body-wide accurate quantification of disease burden in PET/CT images is important for characterizing lesions, staging disease, prognosticating patient outcome, planning treatment, and evaluating disease response to therapeutic interventions. However, body-wide anatomy recognition in PET/CT is a critical first step for accurately and automatically quantifying disease body-wide, body-region-wise, and organ-wise. This latter process has remained a challenge due to the lower quality of the anatomic information portrayed in the CT component of this imaging modality and the paucity of anatomic details in the PET component. In this paper, the authors demonstrate the adaptation of a recently developed automatic anatomy recognition (AAR) methodology [Udupa et al., “Body-wide hierarchical fuzzy modeling, recognition, and delineation of anatomy in medical images,” Med. Image Anal. 18, 752–771 (2014)] to PET/CT images. Their goal was to test what level of object localization accuracy can be achieved on PET/CT compared to that achieved on diagnostic CT images. Methods: The authors advance the AAR approach in this work on three fronts: (i) from body-region-wise treatment in the work of Udupa et al. to whole body; (ii) from the use of image intensity in optimal object recognition in the work of Udupa et al. to intensity plus object-specific texture properties; and (iii) from the intramodality model-building-recognition strategy to the intermodality approach. The whole-body approach allows consideration of relationships among objects in different body regions, which was previously not possible.
Consideration of object texture allows generalizing the previous optimal threshold-based fuzzy model recognition method from intensity images to any derived fuzzy membership image and, in the process, brings performance to the level achieved on diagnostic CT and MR images in body-region-wise approaches. The intermodality approach fosters the use of already existing fuzzy models, previously created from diagnostic CT images, on PET/CT and other derived images, thus truly separating the modality-independent object assembly anatomy from the modality-specific tissue property portrayal in the image. Results: Key ways of combining the above three basic ideas lead to 15 different strategies for recognizing objects in PET/CT images. Utilizing 50 diagnostic CT image data sets from the thoracic and abdominal body regions and 16 whole-body PET/CT image data sets, the authors compare the recognition performance among these 15 strategies on 18 objects from the thorax, abdomen, and pelvis in terms of object localization error and size estimation error. Particularly on texture membership images, object localization is within three voxels of known true locations on whole-body low-dose CT images and two voxels on body-region-wise low-dose images. Surprisingly, even on direct body-region-wise PET images, localization error within three voxels seems possible. Conclusions: The previous body-region-wise approach can be extended to the whole-body torso with similar object localization performance. Combined use of image texture and intensity properties yields the best object localization accuracy. In both body-region-wise and whole-body approaches, recognition performance on low-dose CT images reaches levels previously achieved on diagnostic CT images. The best object recognition strategy varies among objects; the proposed framework, however, allows employing a strategy that is optimal for each object.
Verbalization and imagery in the process of formation of operator labor skills
NASA Technical Reports Server (NTRS)
Mistyuk, V. V.
1975-01-01
Sensorimotor control tests show that operational skills are mastered under conditions that stimulate the operator to independent, active analysis and summarization of current information, with the goal of clarifying the signs and the integral images that form a model of the situation. Goal-directed formation of such an image requires inner and external speech, activates and improves the operator's thinking, accelerates the training process, increases its effectiveness, and enables the formation of strategies for anticipating the course of events.
Fajardo-Ortiz, David; Duran, Luis; Moreno, Laura; Ochoa, Héctor; Castaño, Víctor M
2014-01-01
This research maps the knowledge translation process for two different types of nanotechnologies applied to cancer: liposomes and metallic nanostructures (MNs). We performed a structural analysis of citation networks and text mining supported in controlled vocabularies. In the case of liposomes, our results identify subnetworks (invisible colleges) associated with different therapeutic strategies: nanopharmacology, hyperthermia, and gene therapy. Only in the pharmacological strategy was an organized knowledge translation process identified, which, however, is monopolized by the liposomal doxorubicins. In the case of MNs, subnetworks are not differentiated by the type of therapeutic strategy, and the content of the documents is still basic research. Research on MNs is highly focused on developing a combination of molecular imaging and photothermal therapy. PMID:24920900
CORSE-81: The 1981 Conference on Remote Sensing Education
NASA Technical Reports Server (NTRS)
Davis, S. M. (Compiler)
1981-01-01
Summaries of the presentations and tutorial workshops addressing various strategies in remote sensing education are presented. Course design from different discipline perspectives, equipment requirements for image interpretation and processing, and the role of universities, private industry, and government agencies in the education process are covered.
Combined multi-spectrum and orthogonal Laplacianfaces for fast CB-XLCT imaging with single-view data
NASA Astrophysics Data System (ADS)
Zhang, Haibo; Geng, Guohua; Chen, Yanrong; Qu, Xuan; Zhao, Fengjun; Hou, Yuqing; Yi, Huangjian; He, Xiaowei
2017-12-01
Cone-beam X-ray luminescence computed tomography (CB-XLCT) is an attractive hybrid imaging modality, which has the potential of monitoring the metabolic processes of nanophosphors-based drugs in vivo. Single-view data reconstruction as a key issue of CB-XLCT imaging promotes the effective study of dynamic XLCT imaging. However, it suffers from serious ill-posedness in the inverse problem. In this paper, a multi-spectrum strategy is adopted to relieve the ill-posedness of reconstruction. The strategy is based on the third-order simplified spherical harmonic approximation model. Then, an orthogonal Laplacianfaces-based method is proposed to reduce the large computational burden without degrading the imaging quality. Both simulated data and in vivo experimental data were used to evaluate the efficiency and robustness of the proposed method. The results are satisfactory in terms of both location and quantitative recovering with computational efficiency, indicating that the proposed method is practical and promising for single-view CB-XLCT imaging.
A color-corrected strategy for information multiplexed Fourier ptychographic imaging
NASA Astrophysics Data System (ADS)
Wang, Mingqun; Zhang, Yuzhen; Chen, Qian; Sun, Jiasong; Fan, Yao; Zuo, Chao
2017-12-01
Fourier ptychography (FP) is a novel computational imaging technique that provides both wide field-of-view (FoV) and high-resolution (HR) imaging capacity for biomedical imaging. Combined with information multiplexing technology, wavelength-multiplexed (or color-multiplexed) FP imaging can be implemented by lighting up R/G/B LED units simultaneously, and an HR image can then be recovered at each wavelength from the multiplexed dataset. This enhances the efficiency of data acquisition. However, since the same set of intensity measurements is used to recover the HR image at each wavelength, the mean values of the channels converge to the same value. In this paper, a color-correction strategy embedded in the multiplexing FP scheme is demonstrated, termed color-corrected wavelength-multiplexed Fourier ptychography (CWMFP). Three images captured by turning on the LED array in R/G/B are required as a priori knowledge to improve the accuracy of reconstruction in the recovery process. Using the reported technique, the redundancy requirement of information-multiplexed FP is reduced. Moreover, the accuracy of reconstruction in each channel is improved with correct color reproduction of the specimen.
Oximetry using multispectral imaging: theory and application
NASA Astrophysics Data System (ADS)
MacKenzie, Lewis E.; Harvey, Andrew R.
2018-06-01
Multispectral imaging (MSI) is a technique for measurement of blood oxygen saturation in vivo that can be applied using various imaging modalities to provide new insights into physiology and disease development. This tutorial aims to provide a thorough introduction to the theory and application of MSI oximetry for researchers new to the field, whilst also providing detailed information for more experienced researchers. The optical theory underlying two-wavelength oximetry, three-wavelength oximetry, pulse oximetry, and multispectral oximetry algorithms are described in detail. The varied challenges of applying MSI oximetry to in vivo applications are outlined and discussed, covering: the optical properties of blood and tissue, optical paths in blood vessels, tissue auto-fluorescence, oxygen diffusion, and common oximetry artefacts. Essential image processing techniques for MSI are discussed, in particular, image acquisition, image registration strategies, and blood vessel line profile fitting. Calibration and validation strategies for MSI are discussed, including comparison techniques, physiological interventions, and phantoms. The optical principles and unique imaging capabilities of various cutting-edge MSI oximetry techniques are discussed, including photoacoustic imaging, spectroscopic optical coherence tomography, and snapshot MSI.
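The two-wavelength oximetry mentioned in the tutorial reduces, under the Beer-Lambert model, to a ratiometric formula: OD(λ) is proportional to S·ε_HbO2(λ) + (1 − S)·ε_Hb(λ), and the unknown concentration-path product cancels in the ratio of the two optical densities. The sketch below solves that ratio for S; the extinction-coefficient values in the demo are synthetic, chosen only to exercise the algebra, and real applications face the confounds (path length, auto-fluorescence, scattering) the tutorial discusses.

```python
def so2_two_wavelength(od1, od2, eps_hb1, eps_hbo1, eps_hb2, eps_hbo2):
    """Ratiometric two-wavelength oxygen saturation estimate.

    od1, od2            : optical densities measured at the two wavelengths.
    eps_hb*, eps_hbo*   : extinction coefficients of deoxy- and oxyhaemoglobin.
    Solving R = OD1/OD2 = [S*eps_hbo1 + (1-S)*eps_hb1] /
                          [S*eps_hbo2 + (1-S)*eps_hb2] for S gives:
    """
    r = od1 / od2
    return (eps_hb1 - r * eps_hb2) / ((eps_hb1 - eps_hbo1) - r * (eps_hb2 - eps_hbo2))

# Synthetic check: make the second wavelength isosbestic (eps_hb2 == eps_hbo2),
# build OD values for a known S = 0.7, and recover it.
od1 = 0.7 * 1.0 + 0.3 * 4.0   # S*eps_hbo1 + (1-S)*eps_hb1
od2 = 2.0                     # isosbestic: independent of S
print(so2_two_wavelength(od1, od2, 4.0, 1.0, 2.0, 2.0))  # ~0.7
```

Choosing the second wavelength at an isosbestic point is the classic simplification: the denominator then depends only on the oxygen-sensitive wavelength's coefficients.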
7 CFR 1219.15 - Industry information.
Code of Federal Regulations, 2012 CFR
2012-01-01
... efficiency in processing, enhance the development of new markets and marketing strategies, increase marketing efficiency, and enhance the image of Hass avocados and the Hass avocado industry in the United States. ...
7 CFR 1219.15 - Industry information.
Code of Federal Regulations, 2013 CFR
2013-01-01
... efficiency in processing, enhance the development of new markets and marketing strategies, increase marketing efficiency, and enhance the image of Hass avocados and the Hass avocado industry in the United States. ...
7 CFR 1219.15 - Industry information.
Code of Federal Regulations, 2011 CFR
2011-01-01
... efficiency in processing, enhance the development of new markets and marketing strategies, increase marketing efficiency, and enhance the image of Hass avocados and the Hass avocado industry in the United States. ...
7 CFR 1219.15 - Industry information.
Code of Federal Regulations, 2014 CFR
2014-01-01
... efficiency in processing, enhance the development of new markets and marketing strategies, increase marketing efficiency, and enhance the image of Hass avocados and the Hass avocado industry in the United States. ...
Conference on Space and Military Applications of Automation and Robotics
NASA Technical Reports Server (NTRS)
1988-01-01
Topics addressed include: robotics; deployment strategies; artificial intelligence; expert systems; sensors and image processing; robotic systems; guidance, navigation, and control; aerospace and missile system manufacturing; and telerobotics.
ERIC Educational Resources Information Center
Sundaram, Senthil K.; Chugani, Harry T.; Chugani, Diane C.
2005-01-01
Positron emission tomography (PET) is a technique that enables imaging of the distribution of radiolabeled tracers designed to track biochemical and molecular processes in the body after intravenous injection or inhalation. New strategies for the use of radiolabeled tracers hold potential for imaging gene expression in the brain during development…
Laser Illuminated Imaging: Multiframe Beam Tilt Tracking and Deconvolution Algorithm
2013-03-01
same way with atmospheric turbulence, resulting in tilt, blur and other higher-order distortions on the returned image. Using the Fourier shift...of the target image with distortions such as speckle, blurring and defocus mitigated via a multiframe processing strategy. Atmospheric turbulence ...propagating a beam in a turbulent atmosphere with a beam width at the target that is smaller than the field of view (FOV) of the receiver optics.
NASA Astrophysics Data System (ADS)
Guan, Wen; Li, Li; Jin, Weiqi; Qiu, Su; Zou, Yan
2015-10-01
Extreme-low-light CMOS, a new type of solid-state image sensor, has been widely applied in the field of night vision. However, when the illumination in a scene changes drastically or is too strong, the sensor cannot clearly render both the high-light and low-light regions. To address this partial saturation problem in night vision, an HDR image fusion algorithm based on the Laplacian pyramid was investigated. The overall gray value and contrast of a low-light image are very low. For the top layer of the long-exposure and short-exposure images, which carries rich brightness and textural features, we choose a fusion strategy based on the regional average gradient. The remaining layers, which represent the edge features of the target, are fused using a strategy based on regional energy. In reconstructing the source image from the Laplacian pyramid, we compare the fusion results obtained with four kinds of base images. The algorithm is tested in Matlab and compared with different fusion strategies, using three objective evaluation parameters, information entropy, average gradient, and standard deviation, for further analysis of the fusion results. Experiments in different low-illumination environments show that the algorithm can rapidly achieve a wide dynamic range while keeping high entropy. Verification of these properties suggests a further application prospect for the optimized algorithm. Keywords: high dynamic range imaging, image fusion, multi-exposure image, weight coefficient, information fusion, Laplacian pyramid transform.
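The pyramid-fusion rule described in this abstract can be sketched on 1D scan lines with a two-scale decomposition. This is a simplification under stated assumptions: one blur scale instead of a full Laplacian pyramid, energy-based selection for the detail coefficients, and a plain average for the base layers where the paper weights by regional average gradient.

```python
def fuse_rows(long_exp, short_exp, radius=1):
    """Two-scale sketch of exposure fusion on 1D rows.

    Each row is split into a base (moving-average blur) and a detail
    (residual) layer. Detail coefficients are selected by local energy,
    the larger squared value wins, while base layers are averaged
    (a stand-in for the regional-average-gradient weighting).
    """
    def blur(s):
        return [sum(s[max(0, i - radius):i + radius + 1]) /
                len(s[max(0, i - radius):i + radius + 1]) for i in range(len(s))]
    base_a, base_b = blur(long_exp), blur(short_exp)
    det_a = [s - b for s, b in zip(long_exp, base_a)]
    det_b = [s - b for s, b in zip(short_exp, base_b)]
    det = [a if a * a >= b * b else b for a, b in zip(det_a, det_b)]   # energy rule
    base = [(a + b) / 2 for a, b in zip(base_a, base_b)]               # averaged base
    return [b + d for b, d in zip(base, det)]
```

Feeding the same row in twice reconstructs it exactly, a useful sanity check that the decompose-select-recombine pipeline is lossless when there is nothing to choose between.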
NASA Technical Reports Server (NTRS)
1983-01-01
The approach pictures taken by the Viking 1 and Viking 2 spacecraft two days before their Mars orbital insertion maneuvers were analyzed in order to search for new satellites within the orbit of Phobos. To accomplish this task, a search procedure and an analysis strategy were formulated, developed and executed using the substantial image processing capabilities of the Image Processing Laboratory at the Jet Propulsion Laboratory. The development of these new search capabilities should prove valuable to NASA in processing image data obtained from other spacecraft missions. The result of applying the search procedures to the Viking approach pictures was as follows: no new satellites of size (approx. 20 km) and brightness comparable to Phobos or Deimos were detected within the orbit of Phobos.
Wein, Lawrence M.; Baveja, Manas
2005-01-01
Motivated by the difficulty of biometric systems to correctly match fingerprints with poor image quality, we formulate and solve a game-theoretic formulation of the identification problem in two settings: U.S. visa applicants are checked against a list of visa holders to detect visa fraud, and visitors entering the U.S. are checked against a watchlist of criminals and suspected terrorists. For three types of biometric strategies, we solve the game in which the U.S. Government chooses the strategy's optimal parameter values to maximize the detection probability subject to a constraint on the mean biometric processing time per legal visitor, and then the terrorist chooses the image quality to minimize the detection probability. At current inspector staffing levels at ports of entry, our model predicts that a quality-dependent two-finger strategy achieves a detection probability of 0.733, compared to 0.526 under the quality-independent two-finger strategy that is currently implemented at the U.S. border. Increasing the staffing level of inspectors offers only minor increases in the detection probability for these two strategies. Using more than two fingers to match visitors with poor image quality allows a detection probability of 0.949 under current staffing levels, but may require major changes to the current U.S. biometric program. The detection probabilities during visa application are ≈11–22% smaller than at ports of entry for all three strategies, but the same qualitative conclusions hold. PMID:15894628
Contrast-Enhanced Magnetic Resonance Imaging of Gastric Emptying and Motility in Rats.
Lu, Kun-Han; Cao, Jiayue; Oleson, Steven Thomas; Powley, Terry L; Liu, Zhongming
2017-11-01
The assessment of gastric emptying and motility in humans and animals typically requires radioactive imaging or invasive measurements. Here, we developed a robust strategy to image and characterize gastric emptying and motility in rats based on contrast-enhanced magnetic resonance imaging (MRI) and computer-assisted image processing. The animals were trained to naturally consume a gadolinium-labeled dietgel while bypassing any need for oral gavage. Following this test meal, the animals were scanned under low-dose anesthesia for high-resolution T1-weighted MRI at 7 Tesla, visualizing the time-varying distribution of the meal with greatly enhanced contrast against non-gastrointestinal (GI) tissues. Such contrast-enhanced images not only depicted the gastric anatomy, but also captured and quantified stomach emptying, intestinal filling, antral contraction, and intestinal absorption with fully automated image processing. Over four postingestion hours, the stomach emptied by 27%, largely attributed to the emptying of the forestomach rather than the corpus and the antrum, and most notably during the first 30 min. Stomach emptying was accompanied by intestinal filling for the first 2 h, whereas afterward intestinal absorption was observable as cumulative contrast enhancement in the renal medulla. The antral contraction was captured as a peristaltic wave propagating from the proximal to the distal antrum. The frequency, velocity, and amplitude of the antral contraction were on average 6.34 ± 0.07 contractions per minute, 0.67 ± 0.01 mm/s, and 30.58 ± 1.03%, respectively. These results demonstrate an optimized MRI-based strategy to assess gastric emptying and motility in healthy rats, paving the way for using this technique to understand GI diseases, or test new therapeutics in rat models.
Merabet, Youssef El; Meurie, Cyril; Ruichek, Yassine; Sbihi, Abderrahmane; Touahni, Raja
2015-01-01
In this paper, we present a novel strategy for roof segmentation from aerial images (orthophotoplans) based on the cooperation of edge- and region-based segmentation methods. The proposed strategy is composed of three major steps. The first one, called the pre-processing step, consists of simplifying the acquired image with an appropriate couple of invariant and gradient, optimized for the application, in order to limit the illumination changes (shadows, brightness, etc.) affecting the images. The second step is composed of two main parallel treatments: on the one hand, the simplified image is segmented by watershed regions. Even if this first segmentation generally provides good results, the image is often over-segmented. To alleviate this problem, an efficient region merging strategy adapted to the particularities of the orthophotoplan, using a 2D roof-ridge modeling technique, is applied. On the other hand, the simplified image is segmented by watershed lines. The third step consists of integrating both watershed segmentation strategies into a single cooperative segmentation scheme in order to achieve satisfactory segmentation results. Tests have been performed on orthophotoplans containing 100 roofs of varying complexity, and the results are evaluated with the VINET criterion using ground-truth image segmentation. A comparison with five popular segmentation techniques from the literature demonstrates the effectiveness and reliability of the proposed approach. Indeed, we obtain a good segmentation rate of 96% with the proposed method, compared to 87.5% with statistical region merging (SRM), 84% with mean shift, 82% with color structure code (CSC), 80% with the efficient graph-based segmentation algorithm (EGBIS) and 71% with JSEG. PMID:25648706
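As a minimal illustration of the over-segmentation cleanup idea, the toy sketch below repeatedly merges labeled regions whose mean intensities are closest. It ignores spatial adjacency and the 2D roof-ridge model entirely, so it is only a stand-in for the merging strategy described in the abstract; the names and the tolerance parameter are ours.

```python
# Toy region-merging pass for an over-segmented label image: merge the
# two regions with the closest mean intensities while their gap < tol.
# (Real merging would also require spatial adjacency and a roof model.)
import numpy as np

def merge_similar_regions(labels, image, tol=0.1):
    labels = labels.copy()
    while True:
        ids = np.unique(labels)
        if len(ids) < 2:
            return labels
        means = np.array([image[labels == i].mean() for i in ids])
        order = np.argsort(means)
        gaps = np.diff(means[order])         # gaps between sorted region means
        k = int(np.argmin(gaps))
        if gaps[k] >= tol:
            return labels                    # no pair similar enough to merge
        a, b = ids[order[k]], ids[order[k + 1]]
        labels[labels == b] = a              # absorb b into a, then iterate
```

Each merge changes the absorbed region's mean, so the loop recomputes statistics before the next decision, mirroring the iterative nature of region-merging schemes.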
Concrete Crack Identification Using a UAV Incorporating Hybrid Image Processing
Lee, Junhwa; Ahn, Eunjong; Cho, Soojin; Shin, Myoungsu
2017-01-01
Crack assessment is an essential process in the maintenance of concrete structures. In general, concrete cracks are inspected by manual visual observation of the surface, which is intrinsically subjective as it depends on the experience of inspectors. Further, it is time-consuming, expensive, and often unsafe when inaccessible structural members are to be assessed. Unmanned aerial vehicle (UAV) technologies combined with digital image processing have recently been applied to crack assessment to overcome the drawbacks of manual visual inspection. However, identification of crack information in terms of width and length has not been fully explored in UAV-based applications, because of the absence of distance measurement and tailored image processing. This paper presents a crack identification strategy that combines hybrid image processing with UAV technology. Equipped with a camera, an ultrasonic displacement sensor, and a WiFi module, the system provides an image of the cracks and the associated working distance from a target structure on demand. The obtained information is subsequently processed by hybrid image binarization to estimate the crack width accurately while minimizing the loss of crack length information. The proposed system has been shown to successfully measure cracks thicker than 0.1 mm with a maximum length estimation error of 7.3%. PMID:28880254
Palmieri, Roberta; Bonifazi, Giuseppe; Serranti, Silvia
2014-11-01
This study characterizes the composition of plastic frames and printed circuit boards from end-of-life mobile phones. This knowledge may help define an optimal processing strategy for using these items as potential raw materials. Correct handling of such waste is essential for its further "sustainable" recovery, especially to maximize the extraction of base, rare and precious metals while minimizing the environmental impact of the entire process chain. A combination of electronic and chemical imaging techniques was thus examined, applied and critically evaluated in order to optimize the processing, through the identification and topological assessment of the materials of interest and their quantitative distribution. To reach this goal, end-of-life mobile phone derived wastes were systematically characterized adopting both "traditional" (e.g. scanning electron microscopy combined with microanalysis and Raman spectroscopy) and innovative (e.g. hyperspectral imaging in the short-wave infrared field) techniques, with reference to frames and printed circuit boards. Results showed that the combination of both approaches (i.e. traditional and innovative) could dramatically improve the set-up of recycling strategies, as well as final product recovery. Copyright © 2014 Elsevier Ltd. All rights reserved.
Yip, Hon Ming; Li, John C. S.; Cui, Xin; Gao, Qiannan; Leung, Chi Chiu
2014-01-01
As microfluidics has been applied extensively in many cell and biochemical applications, monitoring the related processes is an important requirement. In this work, we design and fabricate a high-throughput microfluidic device which contains 32 microchambers to perform automated parallel microfluidic operations and monitoring on an automated stage of a microscope. Images are captured at multiple spots on the device during the operations for monitoring samples in microchambers in parallel; yet the device positions may vary at different time points throughout operations as the device moves back and forth on a motorized microscopic stage. Here, we report an image-based positioning strategy to realign the chamber position before every recording of microscopic image. We fabricate alignment marks at defined locations next to the chambers in the microfluidic device as reference positions. We also develop image processing algorithms to recognize the chamber positions in real-time, followed by realigning the chambers to their preset positions in the captured images. We perform experiments to validate and characterize the device functionality and the automated realignment operation. Together, this microfluidic realignment strategy can be a platform technology to achieve precise positioning of multiple chambers for general microfluidic applications requiring long-term parallel monitoring of cell and biochemical activities. PMID:25133248
Autocorrelation techniques for soft photogrammetry
NASA Astrophysics Data System (ADS)
Yao, Wu
In this thesis, research is carried out on image processing, image-matching search strategies, feature types in image matching, and the optimal window size in image matching. To make comparisons, the soft photogrammetry package SoftPlotter is used. Two aerial photographs from the Iowa State University campus high flight 94 are scanned into digital format. In order to create a stereo model from them, interior orientation, single-photograph rectification and stereo rectification are performed. Two new image matching methods, multi-method image matching (MMIM) and unsquare-window image matching, are developed and compared. MMIM is used to determine the optimal window size in image matching. Twenty-four check points from four different types of ground features are used for checking the results of image matching. Comparison between these four types of ground feature shows that the methods developed here improve both the speed and the precision of image matching. A process called direct transformation is described and compared with the multiple-step image processing workflow. The results from image processing are consistent with those from SoftPlotter. A modified LAN image header is developed and used to store information about the stereo model and image matching. A comparison is also made between cross-correlation image matching (CCIM), least-difference image matching (LDIM) and least-squares image matching (LSIM). The quality of image matching in relation to ground features is compared using two methods developed in this study: the coefficient surface for CCIM and the difference surface for LDIM. To reduce the amount of computation in image matching, the best-track searching algorithm developed in this research is used instead of the whole-range searching algorithm.
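The cross-correlation image matching (CCIM) baseline compared in the thesis can be illustrated with a brute-force normalized cross-correlation search. This sketch is ours, not the thesis code; it scans every window position, which is exactly the whole-range searching the best-track algorithm is designed to avoid.

```python
# Brute-force normalized cross-correlation (NCC) template matching: slide
# the template over every window position and keep the best score.
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-shape windows, in [-1, 1]."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match(image, template):
    """Return ((row, col), score) of the best-matching window."""
    th, tw = template.shape
    best_score, best_rc = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            s = ncc(image[r:r + th, c:c + tw], template)
            if s > best_score:
                best_score, best_rc = s, (r, c)
    return best_rc, best_score
```

The "coefficient surface" the thesis uses to judge match quality is simply the 2D array of these scores over all (r, c); an LDIM variant would minimize a sum of absolute differences instead of maximizing correlation.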
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmieri, Roberta; Bonifazi, Giuseppe; Serranti, Silvia, E-mail: silvia.serranti@uniroma1.it
Highlights: • A recycling-oriented characterization of end-of-life mobile phones was carried out. • Characterization was developed in a zero-waste perspective, aiming to recover all the mobile phone materials. • Plastic frames and printed circuit boards were analyzed by electronic and chemical imaging. • Suitable milling/classification strategies were set up to define specialized pre-concentrated streams. • The proposed approach can improve the recovery of polymers, base/precious metals, rare earths and critical raw materials. - Abstract: This study characterizes the composition of plastic frames and printed circuit boards from end-of-life mobile phones. This knowledge may help define an optimal processing strategy for using these items as potential raw materials. Correct handling of such waste is essential for its further "sustainable" recovery, especially to maximize the extraction of base, rare and precious metals while minimizing the environmental impact of the entire process chain. A combination of electronic and chemical imaging techniques was thus examined, applied and critically evaluated in order to optimize the processing, through the identification and topological assessment of the materials of interest and their quantitative distribution. To reach this goal, end-of-life mobile phone derived wastes were systematically characterized adopting both "traditional" (e.g. scanning electron microscopy combined with microanalysis and Raman spectroscopy) and innovative (e.g. hyperspectral imaging in the short-wave infrared field) techniques, with reference to frames and printed circuit boards. Results showed that the combination of both approaches (i.e. traditional and innovative) could dramatically improve the set-up of recycling strategies, as well as final product recovery.
Makela, Ashley V; Murrell, Donna H; Parkins, Katie M; Kara, Jenna; Gaudet, Jeffrey M; Foster, Paula J
2016-10-01
Cellular magnetic resonance imaging (MRI) is an evolving field of imaging with strong translational and research potential. The ability to detect, track, and quantify cells in vivo and over time allows for studying cellular events related to disease processes and may be used as a biomarker for decisions about treatments and for monitoring responses to treatments. In this review, we discuss methods for labeling cells, various applications for cellular MRI, the existing limitations, strategies to address these shortcomings, and clinical cellular MRI.
White blood cell counting analysis of blood smear images using various segmentation strategies
NASA Astrophysics Data System (ADS)
Safuan, Syadia Nabilah Mohd; Tomari, Razali; Zakaria, Wan Nurshazwani Wan; Othman, Nurmiza
2017-09-01
In white blood cell (WBC) diagnosis, the most crucial measurement parameter is the WBC count. This information is widely used to evaluate the effectiveness of cancer therapy and to diagnose hidden infections within the human body. The current practice of manual WBC counting is laborious and highly subjective, which has led to the development of computer-aided systems (CAS) with rigorous image processing solutions. In CAS counting, segmentation is the crucial step to ensure the accuracy of the counted cells, and an optimal segmentation strategy that works under various blood-smear image acquisition conditions remains a great challenge. In this paper, a comparison between different segmentation methods based on color space analysis is elaborated to find the best counting outcome. Initially, color space correction is applied to the original blood smear image to standardize the image color intensity level. Next, white blood cells are segmented using combinations of several color-channel subtractions, drawn from the RGB, CMYK and HSV spaces, together with Otsu thresholding. Noise and unwanted regions remaining after segmentation are eliminated by a combination of morphological and Connected Component Labelling (CCL) filters. Finally, the Circle Hough Transform (CHT) is applied to the segmented image to estimate the number of WBCs, including those in clumped regions. The experiments show that the G-S combination yields the best performance.
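The thresholding step of such a pipeline can be sketched directly in NumPy. Otsu's method below is standard; the `gs_mask` helper encodes our own hypothetical reading of a "G-S" channel combination as green channel minus HSV saturation, and the polarity of the final comparison (nuclei darker in that channel) is likewise an assumption, not taken from the paper.

```python
# Sketch of channel-subtraction segmentation with Otsu thresholding.
# The G-S interpretation (green minus HSV saturation) is an assumption.
import numpy as np

def otsu_threshold(gray, bins=256):
    """Otsu's threshold for a float image scaled to [0, 1]."""
    hist, edges = np.histogram(gray, bins=bins, range=(0.0, 1.0))
    p = hist.astype(float) / max(hist.sum(), 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)               # class-0 probability up to each bin
    mu = np.cumsum(p * centers)     # class-0 cumulative mean mass
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    sigma_b = np.zeros_like(w0)     # between-class variance per candidate cut
    sigma_b[valid] = (mu[-1] * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[int(np.argmax(sigma_b))]

def gs_mask(rgb):
    """Binary foreground mask from an RGB float image in [0, 1]."""
    g = rgb[..., 1]
    mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
    sat = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-12), 0.0)  # HSV S
    gs = np.clip(g - sat, 0.0, 1.0)
    # stained nuclei have low green and high saturation, hence low G-S
    return gs < otsu_threshold(gs)
```

Morphological cleanup, CCL filtering and the Circle Hough Transform counting stage would follow this mask in the full pipeline.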
The Application of Nanoparticles in Gene Therapy and Magnetic Resonance Imaging
HERRANZ, FERNANDO; ALMARZA, ELENA; RODRÍGUEZ, IGNACIO; SALINAS, BEATRIZ; ROSELL, YAMILKA; DESCO, MANUEL; BULTE, JEFF W.; RUIZ-CABELLO, JESÚS
2012-01-01
The combination of nanoparticles, gene therapy, and medical imaging has given rise to a new field known as gene theranostics, in which a nanobioconjugate is used to diagnose and treat the disease. The process generally involves binding between a vector carrying the genetic information and a nanoparticle, which provides the signal for imaging. The synthesis of this probe generates a synergic effect, enhancing the efficiency of gene transduction and imaging contrast. We discuss the latest approaches in the synthesis of nanoparticles for magnetic resonance imaging, gene therapy strategies, and their conjugation and in vivo application. PMID:21484943
Impact of negative cognitions about body image on inflammatory status in relation to health.
Černelič-Bizjak, Maša; Jenko-Pražnikar, Zala
2014-01-01
Evidence suggests that body dissatisfaction may relate to biological processes and that negative cognitions can influence physical health through the complex pathways linking psychological and biological factors. The present study investigates the relationships between body image satisfaction, inflammation (cytokine levels), aerobic fitness level and obesity in 96 middle-aged men and women (48 normal-weight and 48 overweight). All participants underwent measurements of body satisfaction and body composition, serological measurements of inflammation, and assessment of aerobic capacity. Body image dissatisfaction uniquely predicted the inflammation biomarkers C-reactive protein and tumour necrosis factor-α, even when controlling for obesity indicators. Thus, body image dissatisfaction is strongly linked to inflammatory processes and may promote an increase in cytokines, representing a relative metabolic risk independent of most traditional risk factors, such as gender, body mass index and intra-abdominal (waist-to-hip ratio) adiposity. The results highlight the fact that a person's negative cognitions need to be considered in psychologically based interventions and strategies for the treatment of obesity, including strategies for health promotion. The results contribute to the knowledge base on the complex pathways in the association between psychological factors and physical illness, and some important attempts were made to explain the psychological pathways linking cognitions with inflammation.
Barreiros, Willian; Teodoro, George; Kurc, Tahsin; Kong, Jun; Melo, Alba C. M. A.; Saltz, Joel
2017-01-01
We investigate efficient sensitivity analysis (SA) of algorithms that segment and classify image features in a large dataset of high-resolution images. Algorithm SA is the process of evaluating variations of methods and parameter values to quantify differences in the output. An SA can be very compute-demanding because it requires re-processing the input dataset several times with different parameters to assess variations in output. In this work, we introduce strategies to efficiently speed up SA via runtime optimizations targeting distributed hybrid systems and reuse of computations from runs with different parameters. We evaluate our approach using a cancer image analysis workflow on a hybrid cluster with 256 nodes, each with an Intel Phi and a dual-socket CPU. The SA attained a parallel efficiency of over 90% on 256 nodes. Cooperative execution using the CPUs and the Phi available in each node, with smart task assignment strategies, resulted in an additional speedup of about 2×. Finally, multi-level computation reuse led to an additional speedup of up to 2.46× on the parallel version. The level of performance attained with the proposed optimizations will allow the use of SA in large-scale studies. PMID:29081725
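The computation-reuse idea, re-running a parameter sweep without redoing upstream stages whose inputs have not changed, can be shown with a toy two-stage pipeline. The stage names and parameters below are illustrative, not from the paper's cancer-image workflow.

```python
# Toy multi-level computation reuse in a parameter sweep: when only a
# downstream parameter varies, upstream stage results are served from cache.
import functools

calls = {"normalize": 0, "segment": 0}   # count actual stage executions

@functools.lru_cache(maxsize=None)
def normalize(image_id):
    calls["normalize"] += 1
    return ("norm", image_id)            # stand-in for an expensive stage

@functools.lru_cache(maxsize=None)
def segment(image_id, threshold):
    calls["segment"] += 1
    return ("seg", normalize(image_id), threshold)

def sensitivity_sweep(image_ids, thresholds):
    # vary only the downstream parameter; normalization runs once per image
    return [segment(i, t) for i in image_ids for t in thresholds]

results = sensitivity_sweep((0, 1), (0.3, 0.5, 0.7))
```

Here two images swept over three thresholds trigger six segmentations but only two normalizations; a production system would key the cache on content hashes and spill it to distributed storage rather than rely on in-process memoization.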
Image reconstruction by domain-transform manifold learning.
Zhu, Bo; Liu, Jeremiah Z; Cauley, Stephen F; Rosen, Bruce R; Rosen, Matthew S
2018-03-21
Image reconstruction is essential for imaging applications across the physical and life sciences, including optical and radar systems, magnetic resonance imaging, X-ray computed tomography, positron emission tomography, ultrasound imaging and radio astronomy. During image acquisition, the sensor encodes an intermediate representation of an object in the sensor domain, which is subsequently reconstructed into an image by an inversion of the encoding function. Image reconstruction is challenging because analytic knowledge of the exact inverse transform may not exist a priori, especially in the presence of sensor non-idealities and noise. Thus, the standard reconstruction approach involves approximating the inverse function with multiple ad hoc stages in a signal processing chain, the composition of which depends on the details of each acquisition strategy, and often requires expert parameter tuning to optimize reconstruction performance. Here we present a unified framework for image reconstruction, automated transform by manifold approximation (AUTOMAP), which recasts image reconstruction as a data-driven supervised learning task that allows a mapping between the sensor and the image domain to emerge from an appropriate corpus of training data. We implement AUTOMAP with a deep neural network and exhibit its flexibility in learning reconstruction transforms for various magnetic resonance imaging acquisition strategies, using the same network architecture and hyperparameters. We further demonstrate that manifold learning during training results in sparse representations of domain transforms along low-dimensional data manifolds, and observe superior immunity to noise and a reduction in reconstruction artefacts compared with conventional handcrafted reconstruction methods. 
In addition to improving the reconstruction performance of existing acquisition methodologies, we anticipate that AUTOMAP and other learned reconstruction approaches will accelerate the development of new acquisition strategies across imaging modalities.
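The core idea of a learned, data-driven inverse mapping can be demonstrated with a deliberately tiny stand-in: a single linear layer fitted by ridge regression in place of AUTOMAP's deep network. The function and variable names are ours, and the sketch only covers the linear-encoding case.

```python
# Minimal learned reconstruction: fit one linear layer W so that
# sensor_data @ W approximates the image-domain targets. AUTOMAP uses a
# deep network for the same role; this is only the linear special case.
import numpy as np

def learn_reconstruction(sensor, target, lam=1e-8):
    """Ridge-regression fit of the sensor-to-image mapping."""
    S = sensor
    return np.linalg.solve(S.T @ S + lam * np.eye(S.shape[1]), S.T @ target)
```

When training pairs are generated by an unknown invertible linear encoding, the fitted W approximates its inverse on held-out data; a deep network plays the analogous role when the encoding, or the useful inverse in the presence of noise, is not linear.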
Jiang, Lu; Greenwood, Tiffany R.; Amstalden van Hove, Erika R.; Chughtai, Kamila; Raman, Venu; Winnard, Paul T.; Heeren, Ron; Artemov, Dmitri; Glunde, Kristine
2014-01-01
Applications of molecular imaging in cancer and other diseases frequently require combining in vivo imaging modalities, such as magnetic resonance and optical imaging, with ex vivo optical, fluorescence, histology, and immunohistochemical (IHC) imaging, to investigate and relate molecular and biological processes to imaging parameters within the same region of interest. We have developed a multimodal image reconstruction and fusion framework that accurately combines in vivo magnetic resonance imaging (MRI) and magnetic resonance spectroscopic imaging (MRSI), ex vivo brightfield and fluorescence microscopic imaging, and ex vivo histology imaging. Ex vivo brightfield microscopic imaging was used as an intermediate modality to facilitate the ultimate link between ex vivo histology and in vivo MRI/MRSI. Tissue sectioning necessary for optical and histology imaging required generation of a three-dimensional (3D) reconstruction module for 2D ex vivo optical and histology imaging data. We developed an external fiducial marker based 3D reconstruction method, which was able to fuse optical brightfield and fluorescence with histology imaging data. Registration of 3D tumor shape was pursued to combine in vivo MRI/MRSI and ex vivo optical brightfield and fluorescence imaging data. This registration strategy was applied to in vivo MRI/MRSI, ex vivo optical brightfield/fluorescence, as well as histology imaging data sets obtained from human breast tumor models. 3D human breast tumor data sets were successfully reconstructed and fused with this platform. PMID:22945331
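The fiducial-marker-based step of such a framework typically reduces to estimating a rigid transform from matched marker coordinates. A standard least-squares solution is the Kabsch algorithm, sketched below; this generic sketch is ours and omits the paper's full 3D reconstruction and multimodal fusion pipeline.

```python
# Least-squares rigid alignment (Kabsch algorithm) of matched fiducial
# marker coordinates: find R, t minimizing sum ||R @ src_i + t - dst_i||^2.
import numpy as np

def rigid_align(src, dst):
    """src, dst: (n, d) arrays of corresponding points. Returns (R, t)."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)            # cross-covariance of fiducials
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0] * (H.shape[0] - 1) + [float(d)])
    R = Vt.T @ D @ U.T
    return R, dc - R @ sc
```

With at least d non-collinear markers visible in both modalities, applying the recovered (R, t) to one dataset brings its coordinate frame into the other's, after which intensity-based or shape-based refinement can take over.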
PCA-based spatially adaptive denoising of CFA images for single-sensor digital cameras.
Zheng, Lei; Lukac, Rastislav; Wu, Xiaolin; Zhang, David
2009-04-01
Single-sensor digital color cameras use a process called color demosaicking to produce full-color images from the data captured by a color filter array (CFA). The quality of demosaicked images is degraded by the sensor noise introduced during image acquisition. The conventional solution to combating CFA sensor noise is demosaicking first, followed by separate denoising processing. This strategy generates many noise-caused color artifacts in the demosaicking process, which are hard to remove in the denoising process. Few denoising schemes that work directly on CFA images have been presented because of the difficulties arising from the red, green and blue interlaced mosaic pattern, yet a well-designed "denoising first and demosaicking later" scheme can have advantages such as fewer noise-caused color artifacts and cost-effective implementation. This paper presents a principal component analysis (PCA)-based spatially adaptive denoising algorithm, which works directly on the CFA data using a supporting window to analyze the local image statistics. By exploiting the spatial and spectral correlations present in the CFA image, the proposed method can effectively suppress noise while preserving color edges and details. Experiments using both simulated and real CFA images indicate that the proposed scheme outperforms many existing approaches, including sophisticated demosaicking and denoising schemes, in terms of both objective measurement and visual evaluation.
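The PCA-domain shrinkage at the heart of such methods can be sketched on a matrix of vectorized local windows: estimate the covariance, keep only the principal components whose variance exceeds the assumed noise floor, and project back. This sketch is ours and ignores the CFA-specific supporting-window construction and spectral grouping the paper describes.

```python
# PCA-subspace denoising sketch: project patch vectors onto the principal
# components whose eigenvalues exceed the (assumed known) noise variance.
import numpy as np

def pca_denoise_patches(patches, noise_var):
    """patches: (n, d) array, one vectorized local window per row."""
    mean = patches.mean(axis=0)
    X = patches - mean
    cov = X.T @ X / len(X)                # sample covariance of patches
    evals, evecs = np.linalg.eigh(cov)
    P = evecs[:, evals > noise_var]       # signal subspace above noise floor
    return (X @ P) @ P.T + mean           # project, reconstruct, restore mean
```

A spatially adaptive scheme recomputes this decomposition per local window, so the retained subspace tracks local structure such as edges; a Wiener-style per-component attenuation could replace the hard keep/drop rule used here.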
Masking Strategies for Image Manifolds.
Dadkhahi, Hamid; Duarte, Marco F
2016-07-07
We consider the problem of selecting an optimal mask for an image manifold, i.e., choosing a subset of the pixels of the image that preserves the manifold's geometric structure present in the original data. Such masking implements a form of compressive sensing through emerging imaging sensor platforms for which the power expense grows with the number of pixels acquired. Our goal is for the manifold learned from masked images to resemble its full image counterpart as closely as possible. More precisely, we show that one can indeed accurately learn an image manifold without having to consider a large majority of the image pixels. In doing so, we consider two masking methods that preserve the local and global geometric structure of the manifold, respectively. In each case, the process of finding the optimal masking pattern can be cast as a binary integer program, which is computationally expensive but can be approximated by a fast greedy algorithm. Numerical experiments show that the relevant manifold structure is preserved through the data-dependent masking process, even for modest mask sizes.
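The greedy stand-in for the binary integer program can be illustrated with a toy objective: add one pixel at a time so that the masked pairwise squared distances, after the best least-squares rescaling, match the full-image distances. This is only a sketch of the greedy idea under an assumed distance-preservation objective; the paper's formulations for local and global manifold structure differ in detail.

```python
import numpy as np
from itertools import combinations

def greedy_mask(images, k):
    """Greedily choose k pixel indices so that squared distances between
    masked images track full-image distances up to a global rescaling.
    Toy objective standing in for the paper's binary integer programs."""
    X = images.reshape(len(images), -1).astype(float)
    pairs = list(combinations(range(len(X)), 2))
    diffs = np.array([(X[i] - X[j]) ** 2 for i, j in pairs])  # (pairs, pixels)
    target = diffs.sum(axis=1)  # full-image squared distances
    chosen, acc = [], np.zeros(len(pairs))
    for _ in range(k):
        best_p, best_err = None, np.inf
        for p in range(X.shape[1]):
            if p in chosen:
                continue
            cand = acc + diffs[:, p]
            denom = cand @ cand
            # Residual after optimal least-squares rescaling of cand onto target
            err = (target @ target - (cand @ target) ** 2 / denom) if denom > 0 else target @ target
            if err < best_err:
                best_p, best_err = p, err
        chosen.append(best_p)
        acc += diffs[:, best_p]
    return chosen
```

On images whose variation is confined to a few pixels, the greedy pass recovers exactly those pixels.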
TeraStitcher - A tool for fast automatic 3D-stitching of teravoxel-sized microscopy images
2012-01-01
Background Further advances in modern microscopy are leading to teravoxel-sized tiled 3D images at high resolution, thus increasing the dimension of the stitching problem by at least two orders of magnitude. The existing software solutions do not seem adequate to address the additional requirements arising from these datasets, such as the minimization of memory usage and the need to process just a small portion of data. Results We propose a free and fully automated 3D stitching tool designed to match the special requirements of teravoxel-sized tiled microscopy images, able to stitch them in a reasonable time even on workstations with limited resources. The tool was tested on teravoxel-sized whole mouse brain images with micrometer resolution, and it was also compared with the state-of-the-art stitching tools on megavoxel-sized publicly available datasets. This comparison confirmed that the solutions we adopted are suited for stitching very large images and also perform well on datasets with different characteristics. Indeed, some of the algorithms embedded in other stitching tools could be easily integrated in our framework if they turned out to be more effective on other classes of images. To this purpose, we designed a software architecture which separates the strategies that use memory resources efficiently from the algorithms which may depend on the characteristics of the acquired images. Conclusions TeraStitcher is a free tool that enables the stitching of teravoxel-sized tiled microscopy images even on workstations with relatively limited resources of memory (<8 GB) and processing power. It exploits the knowledge of approximate tile positions and uses ad-hoc strategies and algorithms designed for such very large datasets. The produced images can be saved into a multiresolution representation to be efficiently retrieved and processed. We provide TeraStitcher both as a standalone application and as a plugin of the free software Vaa3D. PMID:23181553
UltraColor: a new gamut-mapping strategy
NASA Astrophysics Data System (ADS)
Spaulding, Kevin E.; Ellson, Richard N.; Sullivan, James R.
1995-04-01
Many color calibration and enhancement strategies exist for digital systems. Typically, these approaches are optimized to work well with one class of images, but may produce unsatisfactory results for other types of images. For example, a colorimetric strategy may work well when printing photographic scenes, but may give inferior results for business graphics images because of device color gamut limitations. On the other hand, a color enhancement strategy that works well for business graphics images may distort the color reproduction of skintones and other important photographic colors. This paper describes a method for specifying different color mapping strategies in various regions of color space, while providing a mechanism for smooth transitions between the different regions. The method involves a two-step process: (1) constraints are applied to some subset of the points in the input color space, explicitly specifying the color mapping function; (2) the color mapping for the remainder of the color values is then determined using an interpolation algorithm that preserves continuity and smoothness. The interpolation algorithm that was developed is based on a computer graphics morphing technique. This method was used to develop the UltraColor gamut mapping strategy, which combines a colorimetric mapping for colors with low saturation levels with a color enhancement technique for colors with high saturation levels. The result is a single color transformation that produces superior quality for all classes of imagery. UltraColor has been incorporated in several models of Kodak printers, including the Kodak ColorEase PS and the Kodak XLS 8600 PS thermal dye sublimation printers.
Using deep learning in hyperspectral image segmentation, classification, and detection
NASA Astrophysics Data System (ADS)
Zhao, Xiuying; Su, Zhenyu
2018-02-01
Recent years have shown that deep learning neural networks are a valuable tool in the field of computer vision. Deep learning methods can be used in remote sensing applications such as land cover classification, detection of vehicles in satellite images, and hyperspectral image classification. This paper addresses the use of deep learning artificial neural networks in satellite image segmentation. Image segmentation plays an important role in image processing. The hue of a remote sensing image often varies widely, which results in poor display of the images in a VR environment. Image segmentation is a preprocessing technique applied to the original images that splits an image into many parts of different hue so that the color can be unified. Several computational models based on supervised, unsupervised, parametric, and probabilistic region-based image segmentation techniques have been proposed. Recently, deep learning with convolutional neural networks has been widely used to develop efficient and automatic image segmentation models. In this paper, we focus on the study of deep convolutional neural networks and their variants for automatic image segmentation rather than traditional image segmentation strategies.
Lerner, Thomas R.; Burden, Jemima J.; Nkwe, David O.; Pelchen-Matthews, Annegret; Domart, Marie-Charlotte; Durgan, Joanne; Weston, Anne; Jones, Martin L.; Peddie, Christopher J.; Carzaniga, Raffaella; Florey, Oliver; Marsh, Mark; Gutierrez, Maximiliano G.
2017-01-01
ABSTRACT The processes of life take place in multiple dimensions, but imaging these processes in even three dimensions is challenging. Here, we describe a workflow for 3D correlative light and electron microscopy (CLEM) of cell monolayers using fluorescence microscopy to identify and follow biological events, combined with serial blockface scanning electron microscopy to analyse the underlying ultrastructure. The workflow encompasses all steps from cell culture to sample processing, imaging strategy, and 3D image processing and analysis. We demonstrate successful application of the workflow to three studies, each aiming to better understand complex and dynamic biological processes, including bacterial and viral infections of cultured cells and formation of entotic cell-in-cell structures commonly observed in tumours. Our workflow revealed new insight into the replicative niche of Mycobacterium tuberculosis in primary human lymphatic endothelial cells, HIV-1 in human monocyte-derived macrophages, and the composition of the entotic vacuole. The broad application of this 3D CLEM technique will make it a useful addition to the correlative imaging toolbox for biomedical research. PMID:27445312
Automatic insertion of simulated microcalcification clusters in a software breast phantom
NASA Astrophysics Data System (ADS)
Shankla, Varsha; Pokrajac, David D.; Weinstein, Susan P.; DeLeo, Michael; Tuite, Catherine; Roth, Robyn; Conant, Emily F.; Maidment, Andrew D.; Bakic, Predrag R.
2014-03-01
An automated method has been developed to insert realistic clusters of simulated microcalcifications (MCs) into computer models of breast anatomy. This algorithm has been developed as part of a virtual clinical trial (VCT) software pipeline, which includes the simulation of breast anatomy, mechanical compression, image acquisition, image processing, display and interpretation. An automated insertion method has value in VCTs involving large numbers of images. The insertion method was designed to support various insertion placement strategies, governed by probability distribution functions (pdf). The pdf can be predicated on histological or biological models of tumor growth, or estimated from the locations of actual calcification clusters. To validate the automated insertion method, a 2-AFC observer study was designed to compare two placement strategies, undirected and directed. The undirected strategy could place a MC cluster anywhere within the phantom volume. The directed strategy placed MC clusters within fibroglandular tissue on the assumption that calcifications originate from epithelial breast tissue. Three radiologists were asked to select between two simulated phantom images, one from each placement strategy. Furthermore, questions were posed to probe the rationale behind the observer's selection. The radiologists found the resulting cluster placement to be realistic in 92% of cases, validating the automated insertion method. There was a significant preference for the cluster to be positioned on a background of adipose or mixed adipose/fibroglandular tissues. Based upon these results, this automated lesion placement method will be included in our VCT simulation pipeline.
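A minimal sketch of pdf-governed placement: the undirected strategy samples any voxel of the phantom, while the directed strategy restricts sampling to fibroglandular voxels. The function, the label convention, and the uniform sampling are illustrative assumptions, not the VCT pipeline's actual API.

```python
import numpy as np

def sample_cluster_center(tissue_labels, rng, directed=True, glandular_label=1):
    """Pick a voxel at which to insert a simulated microcalcification cluster.
    directed=True restricts candidates to fibroglandular tissue, mirroring
    the assumption that calcifications originate from epithelial breast
    tissue; directed=False samples uniformly over the whole phantom volume.
    Labels and the uniform pdf here are illustrative stand-ins."""
    if directed:
        candidates = np.argwhere(tissue_labels == glandular_label)
    else:
        candidates = np.argwhere(np.ones_like(tissue_labels, dtype=bool))
    return tuple(candidates[rng.integers(len(candidates))])
```

A richer placement pdf (e.g. estimated from locations of actual clusters) would replace the uniform draw over candidate voxels.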
Visualization of children's mathematics solving process using near infrared spectroscopic approach
NASA Astrophysics Data System (ADS)
Kuroda, Yasufumi; Okamoto, Naoko; Chance, Britton; Nioka, Shoko; Eda, Hideo; Maesako, Takanori
2009-02-01
Over the past decade, the application of results from brain science research to education research has been a controversial topic. A NIRS imaging system shows images of Hb parameters in the brain. Measurements using NIRS are safe and easy, and the equipment is portable, allowing subjects to tolerate longer research periods. The purpose of this research is to examine the characteristics of Hb, measured using NIRS, at the moment of understanding. We measured Hb in the prefrontal cortex of children while they were solving mathematical problems (tangram puzzles). As a result of the experiment, we were able to classify the children into three groups based on their solution methods. Hb continually increased in the group which could not develop a problem-solving strategy for the tangram puzzles. Hb declined steadily in the group which was able to develop a strategy for the tangram puzzles. Hb was steady in the group that had already developed a strategy before solving the problems. Our experiments showed that the brain data from NIRS enable the visualization of children's mathematical solution processes.
Tracking serum antibody response to viral antigens with arrayed imaging reflectometry
NASA Astrophysics Data System (ADS)
Mace, Charles R.; Rose, Robert C.; Miller, Benjamin L.
2009-02-01
Arrayed Imaging Reflectometry, or "AIR", is a new label-free technique for detecting proteins that relies on binding-induced changes in the response of an antireflective coating on the surface of a silicon chip. Because the technique provides high sensitivity, excellent dynamic range, and readily integrates with standard silicon wafer processing technology, it is an exceptionally attractive platform on which to build systems for detecting proteins in complex solutions. In our early research, we used AIR chips bearing secreted receptor proteins from enteropathogenic E. coli to develop sensors for this pathogen. Recently, we have been exploring an alternative strategy: rather than detecting the pathogen directly, can one immobilize antigens from a pathogen and employ AIR to detect antibody responses to those antigens? Such a strategy would provide enhanced sensitivity for pathogen detection (as the immune system essentially amplifies the "signal" caused by the presence of an organism to which it responds), and would also potentially prove useful in the process of vaccine development. We describe herein preliminary results in the application of such a strategy to the detection of antibodies to human papillomavirus (HPV).
Wright, Cameron H G; Barrett, Steven F; Pack, Daniel J
2005-01-01
We describe a new approach to attacking the problem of robust computer vision for mobile robots. The overall strategy is to mimic the biological evolution of animal vision systems. Our basic imaging sensor is based upon the eye of the common house fly, Musca domestica. The computational algorithms are a mix of traditional image processing, subspace techniques, and multilayer neural networks.
SPECT System Optimization Against A Discrete Parameter Space
Meng, L. J.; Li, N.
2013-01-01
In this paper, we present an analytical approach for optimizing the design of a static SPECT system, or optimizing the sampling strategy of variable/adaptive SPECT imaging hardware, against an arbitrarily given set of system parameters. This approach has three key aspects. First, it is designed to operate over a discretized system parameter space. Second, we have introduced the artificial concept of a virtual detector as the basic building block of an imaging system. With a SPECT system described as a collection of virtual detectors, one can convert the task of system optimization into a process of finding the optimum imaging time distribution (ITD) across all virtual detectors. Third, the optimization problem (finding the optimum ITD) can be solved with a block-iterative approach or other non-linear optimization algorithms. In essence, the resultant optimum ITD provides a quantitative measure of the relative importance (or effectiveness) of the virtual detectors and helps to identify the system configuration or sampling strategy that leads to optimum imaging performance. Although we use SPECT imaging as a platform to demonstrate the system optimization strategy, this development also provides a useful framework for system optimization problems in other modalities, such as positron emission tomography (PET) and X-ray computed tomography (CT) [1, 2]. PMID:23587609
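To make the "optimum ITD" idea concrete, here is a closed-form toy version: allocate a total imaging time across virtual detectors to maximize a concave information proxy sum(log(1 + t_i * s_i)), where s_i is an assumed per-detector sensitivity. The water-filling solution t_i = max(1/mu - 1/s_i, 0) is found by bisection on the multiplier mu. The paper's figure of merit and block-iterative solver are more general; the objective and names here are illustrative.

```python
import numpy as np

def optimize_itd(sensitivity, total_time):
    """Water-filling allocation: maximize sum(log(1 + t_i * s_i))
    subject to sum(t) = total_time and t >= 0, by bisection on the
    Lagrange multiplier mu, where t_i = max(1/mu - 1/s_i, 0).
    A toy stand-in for the paper's ITD optimization."""
    s = np.asarray(sensitivity, dtype=float)
    lo, hi = 1e-12, s.max()  # allocated time is monotone decreasing in mu
    for _ in range(200):
        mu = 0.5 * (lo + hi)
        t = np.maximum(1.0 / mu - 1.0 / s, 0.0)
        if t.sum() > total_time:
            lo = mu  # over-allocated: raise mu to shrink t
        else:
            hi = mu
    return t
```

More sensitive virtual detectors receive more imaging time, which is the qualitative behavior one expects of an optimum ITD.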
Takashima, Kenta; Hoshino, Masato; Uesugi, Kentaro; Yagi, Naoto; Matsuda, Shojiro; Nakahira, Atsushi; Osumi, Noriko; Kohzuki, Masahiro; Onodera, Hiroshi
2015-01-01
Tissue engineering strategies for spinal cord repair are a primary focus of translational medicine after spinal cord injury (SCI). Many tissue engineering strategies employ three-dimensional scaffolds, which are made of biodegradable materials and have microstructure incorporated with viable cells and bioactive molecules to promote new tissue generation and functional recovery after SCI. It is therefore important to develop an imaging system that visualizes both the microstructure of three-dimensional scaffolds and their degradation process after SCI. Here, X-ray phase-contrast computed tomography imaging based on the Talbot grating interferometer is described and it is shown how it can visualize the polyglycolic acid scaffold, including its microfibres, after implantation into the injured spinal cord. Furthermore, X-ray phase-contrast computed tomography images revealed that degradation occurred from the end to the centre of the braided scaffold in the 28 days after implantation into the injured spinal cord. The present report provides the first demonstration of an imaging technique that visualizes both the microstructure and degradation of biodegradable scaffolds in SCI research. X-ray phase-contrast imaging based on the Talbot grating interferometer is a versatile technique that can be used for a broad range of preclinical applications in tissue engineering strategies. PMID:25537600
Han, Guanghui; Liu, Xiabi; Zheng, Guangyuan; Wang, Murong; Huang, Shan
2018-06-06
Ground-glass opacity (GGO) is a common CT imaging sign on high-resolution CT, which means the lesion is more likely to be malignant compared to common solid lung nodules. The automatic recognition of GGO CT imaging signs is of great importance for early diagnosis and possible cure of lung cancers. Present GGO recognition methods employ traditional low-level features, and system performance improves only slowly. Considering the high performance of CNN models in the computer vision field, we propose an automatic recognition method for 3D GGO CT imaging signs through the fusion of hybrid resampling and layer-wise fine-tuning CNN models. Our hybrid resampling is performed on multi-views and multi-receptive fields, which reduces the risk of missing small or large GGOs by adopting representative sampling panels and processing GGOs at multiple scales simultaneously. The layer-wise fine-tuning strategy has the ability to obtain the optimal fine-tuned model. The multi-CNN model fusion strategy obtains better performance than any single trained model. We evaluated our method on the GGO nodule samples in the publicly available LIDC-IDRI dataset of chest CT scans. The experimental results show that our method yields excellent results with 96.64% sensitivity, 71.43% specificity, and 0.83 F1 score. Our method is a promising approach to applying deep learning methods to computer-aided analysis of specific CT imaging signs with insufficient labeled images. Graphical abstract: We propose an automatic recognition method for 3D GGO CT imaging signs through the fusion of hybrid resampling and layer-wise fine-tuning CNN models. Our hybrid resampling reduces the risk of missing small or large GGOs by adopting representative sampling panels and processing GGOs at multiple scales simultaneously. The layer-wise fine-tuning strategy has the ability to obtain the optimal fine-tuned model.
Our method is a promising approach to applying deep learning methods to computer-aided analysis of specific CT imaging signs with insufficient labeled images.
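The multi-model fusion step admits a simple late-fusion sketch: average the per-model probabilities (optionally weighted) and threshold. The actual fusion in the paper is tied to its hybrid-resampling CNNs; the equal-weight averaging and 0.5 threshold below are assumptions.

```python
import numpy as np

def fuse_predictions(prob_list, weights=None, threshold=0.5):
    """Late fusion of per-model GGO probabilities by weighted averaging,
    followed by thresholding. A generic stand-in for the paper's
    multi-CNN model fusion strategy."""
    probs = np.asarray(prob_list, dtype=float)  # (n_models, n_samples)
    if weights is None:
        weights = np.full(len(probs), 1.0 / len(probs))
    fused = np.average(probs, axis=0, weights=weights)
    return (fused >= threshold).astype(int), fused
```

Averaging tends to outperform any single model when the individual errors are not perfectly correlated, which is the rationale the abstract reports.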
NASA Astrophysics Data System (ADS)
Vuori, Tero; Olkkonen, Maria
2006-01-01
The aim of the study is to test both customer image quality ratings (subjective image quality) and physical measurement of user behavior (eye movement tracking) to find customer satisfaction differences between imaging technologies. The methodological aim is to find out whether eye movements can be used quantitatively in image quality preference studies. In general, we want to map objective or physically measurable image quality to subjective evaluations and eye movement data. We conducted a series of image quality tests, in which the test subjects evaluated image quality while we recorded their eye movements. Results show that eye movement parameters consistently change according to the instructions given to the user and according to physical image quality, e.g. saccade duration increased with increasing blur. Results indicate that eye movement tracking could be used to differentiate the image quality evaluation strategies that users have. Results also show that eye movements would help in mapping between technological and subjective image quality. Furthermore, these results give some empirical emphasis to top-down perception processes in image quality perception and evaluation, by showing differences between perceptual processes in situations where the cognitive task varies.
A review of GPU-based medical image reconstruction.
Després, Philippe; Jia, Xun
2017-10-01
Tomographic image reconstruction is a computationally demanding task, even more so when advanced models are used to describe a more complete and accurate picture of the image formation process. Such advanced modeling and reconstruction algorithms can lead to better images, often with less dose, but at the price of long calculation times that are hardly compatible with clinical workflows. Fortunately, reconstruction tasks can often be executed advantageously on Graphics Processing Units (GPUs), which are exploited as massively parallel computational engines. This review paper focuses on recent developments made in GPU-based medical image reconstruction, from a CT, PET, SPECT, MRI and US perspective. Strategies and approaches to get the most out of GPUs in image reconstruction are presented as well as innovative applications arising from an increased computing capacity. The future of GPU-based image reconstruction is also envisioned, based on current trends in high-performance computing. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Automobile Marketing Strategies, Pricing, and Product Planning
DOT National Transportation Integrated Search
1978-04-01
The objective of the study was to determine the decision-making processes concerning major model year product introductions and alterations in the automotive industry as well as to investigate techniques of price positioning, product and image positi...
Automobile Marketing Strategies, Pricing, and Product Planning
DOT National Transportation Integrated Search
1978-01-01
The objective of the study was to determine the decision-making processes concerning major model year product introductions and alterations in the automotive industry as well as to investigate techniques of price positioning, product and image positi...
Aging, culture, and memory for categorically processed information.
Yang, Lixia; Chen, Wenfeng; Ng, Andy H; Fu, Xiaolan
2013-11-01
Literature on cross-cultural differences in cognition suggests that categorization, as an information processing and organization strategy, was more often used by Westerners than by East Asians, particularly for older adults. This study examines East-West cultural differences in memory for categorically processed items and sources in young and older Canadians and native Chinese with a conceptual source memory task (Experiment 1) and a reality monitoring task (Experiment 2). In Experiment 1, participants encoded photographic faces of their own ethnicity that were artificially categorized into GOOD or EVIL characters and then completed a source memory task in which they identified faces as old-GOOD, old-EVIL, or new. In Experiment 2, participants viewed a series of words, each followed either by a corresponding image (i.e., SEEN) or by a blank square within which they imagined an image for the word (i.e., IMAGINED). At test, they decided whether the test words were old-SEEN, old-IMAGINED, or new. In general, Canadians outperformed Chinese in memory for categorically processed information, an effect more pronounced for older than for young adults. Extensive exercise of culturally preferred categorization strategy differentially benefits Canadians and reduces their age group differences in memory for categorically processed information.
An Active Learning Framework for Hyperspectral Image Classification Using Hierarchical Segmentation
NASA Technical Reports Server (NTRS)
Zhang, Zhou; Pasolli, Edoardo; Crawford, Melba M.; Tilton, James C.
2015-01-01
Augmenting spectral data with spatial information for image classification has recently gained significant attention, as classification accuracy can often be improved by extracting spatial information from neighboring pixels. In this paper, we propose a new framework in which active learning (AL) and hierarchical segmentation (HSeg) are combined for spectral-spatial classification of hyperspectral images. The spatial information is extracted from a best segmentation obtained by pruning the HSeg tree using a new supervised strategy. The best segmentation is updated at each iteration of the AL process, thus taking advantage of informative labeled samples provided by the user. The proposed strategy incorporates spatial information in two ways: 1) concatenating the extracted spatial features and the original spectral features into a stacked vector and 2) extending the training set using a self-learning-based semi-supervised learning (SSL) approach. Finally, the two strategies are combined within an AL framework. The proposed framework is validated with two benchmark hyperspectral datasets. Higher classification accuracies are obtained by the proposed framework with respect to five other state-of-the-art spectral-spatial classification approaches. Moreover, the effectiveness of the proposed pruning strategy is also demonstrated relative to the approaches based on a fixed segmentation.
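The first spatial-information route, concatenating segment-derived features with the original spectrum into a stacked vector, can be sketched as follows; using the segment's mean spectrum as the spatial feature is an illustrative choice, not necessarily the features extracted from the pruned HSeg tree.

```python
import numpy as np

def stack_spectral_spatial(spectra, segment_ids):
    """For each pixel, append the mean spectrum of its segment to its own
    spectrum, forming a 'stacked vector' for spectral-spatial
    classification. spectra: (n_pixels, n_bands); segment_ids: (n_pixels,).
    The mean-spectrum feature is an assumed example of a segment-level
    spatial feature."""
    out = np.empty((len(spectra), spectra.shape[1] * 2))
    for seg in np.unique(segment_ids):
        idx = segment_ids == seg
        seg_mean = spectra[idx].mean(axis=0)
        out[idx] = np.hstack([spectra[idx], np.tile(seg_mean, (idx.sum(), 1))])
    return out
```

The stacked vectors then feed the classifier inside the active-learning loop, with the segmentation (and hence the spatial features) updated as new labels arrive.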
Local residue coupling strategies by neural network for InSAR phase unwrapping
NASA Astrophysics Data System (ADS)
Refice, Alberto; Satalino, Giuseppe; Chiaradia, Maria T.
1997-12-01
Phase unwrapping is one of the toughest problems in interferometric SAR processing. The main difficulties arise from the presence of point-like error sources, called residues, which occur mainly in close couples due to phase noise. We present an assessment of a local approach to the resolution of these problems by means of a neural network. Using a multi-layer perceptron, trained with the back-propagation scheme on a series of simulated phase images, we learn the best pairing strategies for close residue couples. Results show that good efficiencies and accuracies can be obtained, provided a sufficient number of training examples is supplied. The technique is also tested on real SAR ERS-1/2 tandem interferometric images of the Matera test site, showing a good reduction of the residue density. The better results obtained by the neural network when local criteria are adopted appear justified given the probabilistic nature of the noise process on SAR interferometric phase fields, and suggest a specifically tailored implementation of the neural network approach as a very fast pre-processing step intended to decrease the residue density and deliver sufficiently clean images for further processing by more conventional techniques.
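A minimal one-hidden-layer perceptron trained by back-propagation, of the kind described above, can be written in a few lines. It is demonstrated here on generic binary features rather than residue-pattern inputs, and the architecture, loss, and learning rate are illustrative assumptions.

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=1.0, epochs=2000, seed=0):
    """One-hidden-layer perceptron with sigmoid units, trained by plain
    back-propagation on a cross-entropy loss. In the residue-pairing
    setting the inputs would be local residue-pattern features; here
    everything is generic. Returns a prediction function."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 1.0, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 1.0, (hidden, 1)); b2 = np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sig(X @ W1 + b1)   # forward pass, hidden layer
        p = sig(h @ W2 + b2)   # output probability
        d2 = p - y[:, None]    # output delta (sigmoid + cross-entropy)
        d1 = (d2 @ W2.T) * h * (1.0 - h)
        W2 -= lr * h.T @ d2 / len(X); b2 -= lr * d2.mean(axis=0)
        W1 -= lr * X.T @ d1 / len(X); b1 -= lr * d1.mean(axis=0)
    return lambda Z: sig(sig(Z @ W1 + b1) @ W2 + b2).ravel()
```

As the abstract notes, performance of such a classifier hinges on supplying a sufficient number of training examples.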
Positron Emission Tomography Molecular Imaging in Late-Life Depression
Hirao, Kentaro; Smith, Gwenn S.
2017-01-01
Molecular imaging represents a bridge between basic and clinical neuroscience observations and provides many opportunities for translation and identifying mechanisms that may inform prevention and intervention strategies in late-life depression (LLD). Substantial advances in instrumentation and radiotracer chemistry have resulted in improved sensitivity and spatial resolution and the ability to study in vivo an increasing number of neurotransmitters, neuromodulators, and, importantly, neuropathological processes. Molecular brain imaging studies in LLD will be reviewed, with a primary focus on positron emission tomography. Future directions for the field of molecular imaging in LLD will be discussed, including integrating molecular imaging with genetic, neuropsychiatric, and cognitive outcomes and multimodality neuroimaging. PMID:24394152
Momeni, Saba; Pourghassem, Hossein
2014-08-01
Recently, image fusion has taken a prominent role in medical image processing and is useful for diagnosing and treating many diseases. Digital subtraction angiography is one of the most widely applied imaging techniques for diagnosing brain vascular diseases and for radiosurgery of the brain. This paper proposes an automatic fuzzy-based multi-temporal fusion algorithm for 2-D digital subtraction angiography images. In this algorithm, for blood vessel map extraction, the valuable frames of the brain angiography video are automatically determined to form the digital subtraction angiography images, based on a novel definition of vessel dispersion generated by the injected contrast material. The proposed fusion scheme contains different fusion methods for high- and low-frequency content, based on the coefficient characteristics of the wrapping-based second-generation curvelet transform and a novel content selection strategy. The content selection strategy is defined based on the sample correlation of the curvelet transform coefficients. In the proposed fuzzy-based fusion scheme, the selection of curvelet coefficients is optimized by applying weighted averaging and maximum selection rules for the high-frequency coefficients. For the low-frequency coefficients, the maximum selection rule based on a local energy criterion is applied for better visual perception. The fusion algorithm is evaluated on a brain angiography image dataset consisting of one hundred 2-D internal carotid rotational angiography videos. The obtained results demonstrate the effectiveness and efficiency of the proposed fusion algorithm in comparison with common and basic fusion algorithms.
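The coefficient-selection rules can be sketched independently of the curvelet transform itself: given two coefficient maps, either keep the coefficient with larger energy at each position, or blend with energy-derived weights. The pointwise energy (squared coefficient) below is a simplification of the paper's local-energy and sample-correlation criteria.

```python
import numpy as np

def fuse_coefficients(a, b, rule="max"):
    """Combine two transform-coefficient arrays. 'max' keeps the
    coefficient with larger energy at each position (maximum selection
    rule); 'weighted' blends them with energy-proportional weights
    (weighted averaging rule). Pointwise squared-coefficient energy is
    an illustrative simplification of a local energy criterion."""
    ea, eb = a * a, b * b
    if rule == "max":
        return np.where(ea >= eb, a, b)
    w = ea / (ea + eb + 1e-12)
    return w * a + (1.0 - w) * b
```

In a full pipeline these rules would be applied band by band to curvelet coefficients of the registered source images, followed by the inverse transform.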
NASA Astrophysics Data System (ADS)
Choi, Sunghoon; Lee, Haenghwa; Lee, Donghoon; Choi, Seungyeon; Shin, Jungwook; Jang, Woojin; Seo, Chang-Woo; Kim, Hee-Joung
2017-03-01
A compressed-sensing (CS) technique has been rapidly applied in the medical imaging field for retrieving volumetric data from highly under-sampled projections. Among many variant forms, the CS technique based on a total-variation (TV) regularization strategy shows fairly reasonable results in cone-beam geometry. In this study, we implemented the TV-based CS image reconstruction strategy in our prototype chest digital tomosynthesis (CDT) R/F system. Owing to the iterative nature of the time-consuming process of solving the cost function, we took advantage of parallel computing on graphics processing units (GPU) through compute unified device architecture (CUDA) programming to accelerate our algorithm. To compare the algorithmic performance of our proposed CS algorithm, conventional filtered back-projection (FBP) and simultaneous algebraic reconstruction technique (SART) reconstruction schemes were also studied. The results indicated that CS produced better contrast-to-noise ratios (CNRs) in the physical phantom images (Teflon region-of-interest), by factors of 3.91 and 1.93 relative to FBP and SART images, respectively. The resulting human chest phantom images, including lung nodules of different diameters, also showed better visual appearance in the CS images. Our proposed GPU-accelerated CS reconstruction scheme produced volumetric data up to 80 times faster than CPU programming. The total elapsed time for producing 50 coronal planes with a 1024×1024 image matrix from 41 projection views was 216.74 seconds for the proposed CS algorithm with our GPU implementation, which could match the clinically feasible time (3 min). Consequently, our results demonstrated that the proposed CS method shows a potential of additional dose reduction in digital tomosynthesis with reasonable image quality in a fast time.
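The alternation at the core of TV-regularized reconstruction, a gradient step on the data-fidelity term followed by a TV-descent step, can be sketched with a small dense system matrix. This is a stand-in: the actual CDT implementation uses SART-style updates over cone-beam projectors on the GPU, and the step sizes and smoothed anisotropic TV below are illustrative choices.

```python
import numpy as np

def tv_grad(x, eps=1e-8):
    """Gradient of a smoothed anisotropic total variation of a 2-D image,
    using forward differences with replicated boundaries."""
    gx = np.diff(x, axis=0, append=x[-1:, :])
    gy = np.diff(x, axis=1, append=x[:, -1:])
    nx = gx / np.sqrt(gx * gx + eps)
    ny = gy / np.sqrt(gy * gy + eps)
    return -((nx - np.roll(nx, 1, axis=0)) + (ny - np.roll(ny, 1, axis=1)))

def cs_reconstruct(A, y, shape, iters=300, tv_weight=0.001):
    """Alternate a gradient step on ||Ax - y||^2 with a TV-descent step.
    A is a toy dense projection matrix over the flattened image; the
    paper pairs SART-style updates with TV minimization instead."""
    x = np.zeros(shape)
    lr = 1.0 / np.linalg.norm(A, 2) ** 2  # stable step for the quadratic term
    for _ in range(iters):
        r = y - A @ x.ravel()
        x = x + lr * (A.T @ r).reshape(shape)  # data-fidelity step
        x = x - tv_weight * tv_grad(x)         # TV regularization step
    return x
```

With enough well-conditioned measurements the data residual shrinks steadily; the TV term then matters most in the genuinely under-sampled regime.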
NASA Astrophysics Data System (ADS)
Alderliesten, Tanja; Bosman, Peter A. N.; Sonke, Jan-Jakob; Bel, Arjan
2014-03-01
Currently, two major challenges dominate the field of deformable image registration. The first challenge is related to the tuning of the developed methods to specific problems (i.e. how to best combine different objectives such as similarity measure and transformation effort). This is one of the reasons why, despite significant progress, clinical implementation of such techniques has proven to be difficult. The second challenge is to account for large anatomical differences (e.g. large deformations, (dis)appearing structures) that occurred between image acquisitions. In this paper, we study a framework based on multi-objective optimization to improve registration robustness and to simplify tuning for specific applications. Within this framework we specifically consider the use of an advanced model-based evolutionary algorithm for optimization and a dual-dynamic transformation model (i.e. two "non-fixed" grids: one for the source- and one for the target image) to accommodate for large anatomical differences. The framework computes and presents multiple outcomes that represent efficient trade-offs between the different objectives (a so-called Pareto front). In image processing it is common practice, for reasons of robustness and accuracy, to use a multi-resolution strategy. This is, however, only well-established for single-objective registration methods. Here we describe how such a strategy can be realized for our multi-objective approach and compare its results with a single-resolution strategy. For this study we selected the case of prone-supine breast MRI registration. Results show that the well-known advantages of a multi-resolution strategy are successfully transferred to our multi-objective approach, resulting in superior (i.e. Pareto-dominating) outcomes.
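The Pareto front of efficient trade-offs mentioned above is simply the set of non-dominated outcomes. A minimal sketch over objective tuples to be minimized (e.g. a hypothetical pair of similarity error and transformation effort):

```python
def pareto_front(points):
    """Return the non-dominated points, all objectives minimized:
    q dominates p if q is no worse in every objective and strictly
    better in at least one."""
    def dominates(q, p):
        return (all(qi <= pi for qi, pi in zip(q, p))
                and any(qi < pi for qi, pi in zip(q, p)))
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

In the registration framework, each point would be one candidate deformation evaluated on the competing objectives, and the user picks from the computed front.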
Volume Segmentation and Ghost Particles
NASA Astrophysics Data System (ADS)
Ziskin, Isaac; Adrian, Ronald
2011-11-01
Volume Segmentation Tomographic PIV (VS-TPIV) is a type of tomographic PIV in which images of particles in a relatively thick volume are segmented into images on a set of much thinner volumes that may be approximated as planes, as in 2D planar PIV. The planes of images can be analysed by standard mono-PIV, and the volume of flow vectors can be recreated by assembling the planes of vectors. The interrogation process is similar to a Holographic PIV analysis, except that the planes of image data are extracted from two-dimensional camera images of the volume of particles instead of three-dimensional holographic images. Like the tomographic PIV method using the MART algorithm, Volume Segmentation requires at least two cameras and works best with three or four. Unlike the MART method, Volume Segmentation does not require reconstruction of individual particle images one pixel at a time and it does not require an iterative process, so it operates much faster. As in all tomographic reconstruction strategies, ambiguities known as ghost particles are produced in the segmentation process. The effect of these ghost particles on the PIV measurement is discussed. This research was supported by Contract 79419-001-09, Los Alamos National Laboratory.
Nuts and Bolts of CEST MR imaging
Liu, Guanshu; Song, Xiaolei; Chan, Kannie W.Y.
2013-01-01
Chemical Exchange Saturation Transfer (CEST) has emerged as a novel MRI contrast mechanism that is well suited for molecular imaging studies. This new mechanism can be used to detect small amounts of contrast agent through saturation of rapidly exchanging protons on these agents, allowing a wide range of applications. CEST technology has a number of indispensable features, such as the possibility of simultaneously detecting multiple "colors" of agents and detecting changes in their environment (e.g. pH, metabolites) through MR contrast. A large number of new imaging schemes and techniques have been developed to improve the temporal resolution and specificity and to correct for the influence of B0 and B1 inhomogeneities. In this review, the techniques developed over the last decade are summarized, and the different imaging strategies and post-processing methods are discussed from a practical point of view, including their relative merits for detecting CEST agents. The goal of the present work is to provide the reader with a fundamental understanding of the techniques developed, and to provide guidance to help refine future applications of this technology. This review is organized into three main sections: Basics of CEST Contrast, Implementation, and Post-Processing; it also includes a brief Introduction and Summary. The Basics of CEST Contrast section contains a description of the relevant background theory for saturation transfer and frequency labeled transfer, and a brief discussion of methods to determine exchange rates. The Implementation section contains a description of the practical considerations in conducting CEST MRI studies, including choice of magnetic field, pulse sequence, saturation pulse, imaging scheme, and strategies to separate MT and CEST. 
The Post-Processing section contains a description of the typical image processing employed for B0/B1 correction, Z-spectral interpolation, frequency selective detection, and improving CEST contrast maps. PMID:23303716
Robust image matching via ORB feature and VFC for mismatch removal
NASA Astrophysics Data System (ADS)
Ma, Tao; Fu, Wenxing; Fang, Bin; Hu, Fangyu; Quan, Siwen; Ma, Jie
2018-03-01
Image matching underlies many image processing and computer vision problems, such as object recognition and structure from motion. Current methods rely on good feature descriptors and mismatch-removal strategies for detection and matching. In this paper, we propose a robust image matching approach based on the ORB feature and VFC for mismatch removal. ORB (Oriented FAST and Rotated BRIEF) is an outstanding feature descriptor, offering performance comparable to SIFT at a lower computational cost. VFC (Vector Field Consensus) is a state-of-the-art mismatch-removal method. The experimental results demonstrate that our method is efficient and robust.
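ORB descriptors are binary strings compared by Hamming distance, which is what makes the matching stage cheap. A toy sketch of brute-force matching with a mutual cross-check, the usual step before an outlier filter such as VFC (the descriptors here are illustrative small integers, not 256-bit ORB outputs, and VFC itself is not reproduced):

```python
def hamming(a, b):
    """Hamming distance between two descriptors stored as integers."""
    return bin(a ^ b).count("1")

def match_descriptors(desc1, desc2, max_dist=64):
    """Brute-force nearest neighbours with a mutual cross-check; the surviving
    (i, j, distance) pairs would then be passed to mismatch removal."""
    matches = []
    for i, d1 in enumerate(desc1):
        dists = [hamming(d1, d2) for d2 in desc2]
        j = min(range(len(desc2)), key=dists.__getitem__)
        # keep the pair only if i is also the best match for j
        back = min(range(len(desc1)), key=lambda k: hamming(desc1[k], desc2[j]))
        if back == i and dists[j] <= max_dist:
            matches.append((i, j, dists[j]))
    return matches

matches = match_descriptors([0b00001111, 0b11110000], [0b11110000, 0b00001111])
```

Even with the cross-check, repeated textures produce mismatches, which is why a consensus stage over the match vector field is applied afterwards.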
NASA Astrophysics Data System (ADS)
Zhang, Dongqing; Liu, Yuan; Noble, Jack H.; Dawant, Benoit M.
2016-03-01
Cochlear Implants (CIs) are electrode arrays that are surgically inserted into the cochlea. Individual contacts stimulate frequency-mapped nerve endings thus replacing the natural electro-mechanical transduction mechanism. CIs are programmed post-operatively by audiologists but this is currently done using behavioral tests without imaging information that permits relating electrode position to inner ear anatomy. We have recently developed a series of image processing steps that permit the segmentation of the inner ear anatomy and the localization of individual contacts. We have proposed a new programming strategy that uses this information and we have shown in a study with 68 participants that 78% of long term recipients preferred the programming parameters determined with this new strategy. A limiting factor to the large scale evaluation and deployment of our technique is the amount of user interaction still required in some of the steps used in our sequence of image processing algorithms. One such step is the rough registration of an atlas to target volumes prior to the use of automated intensity-based algorithms when the target volumes have very different fields of view and orientations. In this paper we propose a solution to this problem. It relies on a random forest-based approach to automatically localize a series of landmarks. Our results obtained from 83 images with 132 registration tasks show that automatic initialization of an intensity-based algorithm proves to be a reliable technique to replace the manual step.
NASA Astrophysics Data System (ADS)
Qin, Xinqiang; Hu, Gang; Hu, Kai
2018-01-01
The decomposition of multiple source images using bidimensional empirical mode decomposition (BEMD) often produces mismatched bidimensional intrinsic mode functions, either by their number or their frequency, making image fusion difficult. A solution to this problem is proposed using a fixed number of iterations and a union operation in the sifting process. By combining the local regional features of the images, an image fusion method has been developed. First, the source images are decomposed using the proposed BEMD to produce the first intrinsic mode function (IMF) and residue component. Second, for the IMF component, a selection and weighted average strategy based on local area energy is used to obtain a high-frequency fusion component. Third, for the residue component, a selection and weighted average strategy based on local average gray difference is used to obtain a low-frequency fusion component. Finally, the fused image is obtained by applying the inverse BEMD transform. Experimental results show that the proposed algorithm provides superior performance over methods based on wavelet transform, line and column-based EMD, and complex empirical mode decomposition, both in terms of visual quality and objective evaluation criteria.
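The high-frequency fusion rule described above can be sketched directly: for each pixel, keep the IMF coefficient whose neighbourhood carries more energy. A minimal NumPy version (the window size is illustrative, and only the pure selection branch is shown, not the weighted-average branch):

```python
import numpy as np

def local_energy(x, win=3):
    """Sum of squared coefficients over a win x win neighbourhood (zero padding)."""
    p = win // 2
    xp = np.pad(x.astype(float)**2, p)
    e = np.zeros(x.shape)
    for dy in range(win):
        for dx in range(win):
            e += xp[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return e

def fuse_highfreq(a, b, win=3):
    """Pixel-wise selection: take the IMF coefficient with larger local energy."""
    return np.where(local_energy(a, win) >= local_energy(b, win), a, b)

a = np.zeros((8, 8)); a[2, 2] = 1.0    # detail present only in source A
b = np.zeros((8, 8)); b[5, 5] = 1.0    # detail present only in source B
fused = fuse_highfreq(a, b)            # keeps both details
```

The residue (low-frequency) component would be fused analogously, with local average gray difference replacing local energy as the activity measure.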
Low-count PET image restoration using sparse representation
NASA Astrophysics Data System (ADS)
Li, Tao; Jiang, Changhui; Gao, Juan; Yang, Yongfeng; Liang, Dong; Liu, Xin; Zheng, Hairong; Hu, Zhanli
2018-04-01
In the field of positron emission tomography (PET), reconstructed images are often blurry and contain noise. These problems are primarily caused by the low resolution of the projection data. Solving this problem by improving hardware is expensive; therefore, we attempted to develop a solution based on optimizing several related algorithms in both the reconstruction and image post-processing domains. As sparse representation techniques are now widely used, they are increasingly applied to this problem. In this paper, we propose a new sparse method to process low-resolution PET images. Two dictionaries (D1 for low-resolution PET images and D2 for high-resolution PET images) are learned from a group of real PET image data sets. D1 is used to obtain a sparse representation for each patch of the input PET image; a high-resolution PET image is then generated from this sparse representation using D2. Experimental results indicate that the proposed method exhibits a stable and superior ability to enhance image resolution and recover image details. Quantitatively, this method achieves better performance than traditional methods. The proposed strategy is a new and efficient approach for improving the quality of PET images.
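The coupled-dictionary idea is that a low-resolution patch and its high-resolution counterpart share one sparse code. A compact sketch using orthogonal matching pursuit for the sparse-coding step (random dictionaries stand in for learned ones; the paper's dictionary-learning stage is not reproduced):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: k-sparse code of y in dictionary D
    (columns of D assumed unit-norm)."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    alpha = np.zeros(D.shape[1])
    alpha[support] = coef
    return alpha

# toy coupled dictionaries: column i of D_lo pairs with column i of D_hi
rng = np.random.default_rng(1)
D_lo = rng.standard_normal((16, 32)); D_lo /= np.linalg.norm(D_lo, axis=0)
D_hi = rng.standard_normal((64, 32))
alpha_true = np.zeros(32); alpha_true[[3, 17]] = [1.0, -0.5]
y_lo = D_lo @ alpha_true            # observed low-resolution patch
alpha = omp(D_lo, y_lo, k=2)        # sparse code with respect to D_lo
x_hi = D_hi @ alpha                 # high-resolution estimate via shared code
```

In a full pipeline this runs over overlapping patches, whose high-resolution estimates are then averaged back into the output image.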
USDA-ARS?s Scientific Manuscript database
Rapid detection and identification of pathogenic microorganisms naturally occurring during food processing are important in developing intervention and verification strategies. In the poultry industry, contamination of poultry meat with foodborne pathogens (especially, Salmonella and Campylobacter) ...
Selka, F; Nicolau, S; Agnus, V; Bessaid, A; Marescaux, J; Soler, L
2015-03-01
In minimally invasive surgery, the tracking of deformable tissue is a critical component for image-guided applications. Deformation of the tissue can be recovered by tracking features using tissue surface information (texture, color,...). Recent work in this field has shown success in acquiring tissue motion. However, the performance evaluation of detection and tracking algorithms on such images are still difficult and are not standardized. This is mainly due to the lack of ground truth data on real data. Moreover, in order to avoid supplementary techniques to remove outliers, no quantitative work has been undertaken to evaluate the benefit of a pre-process based on image filtering, which can improve feature tracking robustness. In this paper, we propose a methodology to validate detection and feature tracking algorithms, using a trick based on forward-backward tracking that provides an artificial ground truth data. We describe a clear and complete methodology to evaluate and compare different detection and tracking algorithms. In addition, we extend our framework to propose a strategy to identify the best combinations from a set of detector, tracker and pre-process algorithms, according to the live intra-operative data. Experimental results have been performed on in vivo datasets and show that pre-process can have a strong influence on tracking performance and that our strategy to find the best combinations is relevant for a reasonable computation cost. Copyright © 2014 Elsevier Ltd. All rights reserved.
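The forward-backward idea generates its own ground truth: track points forward through the sequence, then backward, and score each trajectory by the distance between its start point and the round-trip estimate. A sketch with a pluggable tracker function (the toy trackers below are illustrative, not the paper's detectors or trackers):

```python
def forward_backward_error(track, points, frames):
    """Round-trip consistency: track forward through frames, then backward;
    returns per-point Euclidean distance between start and round-trip result."""
    fwd = points
    for a, b in zip(frames, frames[1:]):
        fwd = track(a, b, fwd)
    rev = frames[::-1]
    back = fwd
    for a, b in zip(rev, rev[1:]):
        back = track(a, b, back)
    return [((x0 - x1) ** 2 + (y0 - y1) ** 2) ** 0.5
            for (x0, y0), (x1, y1) in zip(points, back)]

# toy "frames" are just time stamps; a perfect tracker translates with time
perfect = lambda a, b, pts: [(x + (b - a), y) for x, y in pts]
drifty  = lambda a, b, pts: [(x + (b - a) + 0.1, y) for x, y in pts]
errs_ok  = forward_backward_error(perfect, [(0.0, 0.0)], [0, 1, 2])
errs_bad = forward_backward_error(drifty,  [(0.0, 0.0)], [0, 1, 2])
```

A systematic drift accumulates over the round trip, so unreliable tracker configurations score visibly worse without any manual annotation.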
Post-processing images from the WFIRST-AFTA coronagraph testbed
NASA Astrophysics Data System (ADS)
Zimmerman, Neil T.; Ygouf, Marie; Pueyo, Laurent; Soummer, Remi; Perrin, Marshall D.; Mennesson, Bertrand; Cady, Eric; Mejia Prada, Camilo
2016-01-01
The concept for the exoplanet imaging instrument on WFIRST-AFTA relies on the development of mission-specific data processing tools to reduce the speckle noise floor. No instruments have yet functioned on the sky in the planet-to-star contrast regime of the proposed coronagraph (1E-8). Therefore, starlight subtraction algorithms must be tested on a combination of simulated and laboratory data sets to give confidence that the scientific goals can be reached. The High Contrast Imaging Testbed (HCIT) at the Jet Propulsion Laboratory has carried out several technology demonstrations for the instrument concept, demonstrating 1E-8 raw (absolute) contrast. Here, we have applied a mock reference differential imaging strategy to HCIT data sets, treating one subset of images as a reference star observation and another subset as a science target observation. We show that algorithms like KLIP (Karhunen-Loève Image Projection), by suppressing residual speckles, enable the recovery of exoplanet signals at contrasts of order 2E-9.
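At its core, KLIP builds a principal-component basis from the reference frames and subtracts the science frame's projection onto it, removing correlated speckle while largely sparing a planet signal absent from the references. A reduced sketch on synthetic data (not the HCIT pipeline; the speckle pattern and planet are fabricated):

```python
import numpy as np

def klip_subtract(science, references, k):
    """Subtract the projection of the science frame onto the top-k principal
    components of the (mean-subtracted) reference frames."""
    R = np.asarray([r.ravel() for r in references], dtype=float)
    R -= R.mean(axis=1, keepdims=True)
    s = science.ravel() - science.mean()
    _, _, Vt = np.linalg.svd(R, full_matrices=False)
    Z = Vt[:k]                          # k orthonormal basis images
    return (s - Z.T @ (Z @ s)).reshape(science.shape)

rng = np.random.default_rng(2)
speckle = rng.standard_normal((16, 16))
refs = [c * speckle for c in (0.9, 1.0, 1.1)]   # speckles correlated across frames
planet = np.zeros((16, 16)); planet[8, 8] = 3.0
science = speckle + planet
residual = klip_subtract(science, refs, k=1)    # planet survives, speckle does not
```

The truncation order k controls the trade-off between speckle suppression and self-subtraction of the astrophysical signal.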
Image processing in biodosimetry: A proposal of a generic free software platform.
Dumpelmann, Matthias; Cadena da Matta, Mariel; Pereira de Lemos Pinto, Marcela Maria; de Salazar E Fernandes, Thiago; Borges da Silva, Edvane; Amaral, Ademir
2015-08-01
The scoring of chromosome aberrations is the most reliable biological method for evaluating individual exposure to ionizing radiation. However, the microscopic analysis of human chromosome metaphases, generally employed to identify aberrations, mainly dicentrics (chromosomes with two centromeres), is a laborious task. The method is time-consuming, and its application in biological dosimetry would be almost impossible in the case of large-scale radiation incidents. In this project, generic software was enhanced for automatic chromosome image processing, building on a framework originally developed for the European Union Framework V project Simbio for applications in source localization from electroencephalographic signals. The platform's capability is demonstrated by a study comparing automatic segmentation strategies for chromosomes in microscopic images.
Low-power coprocessor for Haar-like feature extraction with pixel-based pipelined architecture
NASA Astrophysics Data System (ADS)
Luo, Aiwen; An, Fengwei; Fujita, Yuki; Zhang, Xiangyu; Chen, Lei; Jürgen Mattausch, Hans
2017-04-01
Intelligent analysis of image and video data requires image-feature extraction as an important processing capability for machine-vision realization. A coprocessor with a pixel-based pipeline (CFEPP) architecture is developed for real-time Haar-like cell-based feature extraction. Synchronization with the image sensor’s pixel frequency and immediate use of each input pixel in the feature-construction process avoid dependence on memory-intensive conventional strategies like integral-image construction or frame buffers. A 180 nm CMOS prototype can extract the 1680-dimensional Haar-like feature vectors used in the speeded-up robust features (SURF) scheme with an on-chip memory of only 96 kb (kilobits). Additionally, a low power dissipation of only 43.45 mW at a 1.8 V supply voltage is achieved during VGA video processing at 120 MHz, at more than 325 fps. The Haar-like feature-extraction coprocessor is further evaluated in the practical application of vehicle recognition, achieving high accuracy comparable to previous work.
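For contrast, the memory-intensive conventional strategy the coprocessor avoids is the integral image, which turns any rectangle sum, and hence any Haar-like feature, into a constant-time lookup at the cost of storing a full summed-area table. A brief software sketch of that baseline (illustrative only, not the CFEPP hardware pipeline):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[y, x] = sum of img[:y, :x]."""
    return np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, y, x, h, w):
    """Sum of the h x w rectangle with top-left corner (y, x), in O(1)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(ii, y, x, h, w):
    """Two-rectangle Haar-like feature: left half minus right half."""
    return rect_sum(ii, y, x, h, w // 2) - rect_sum(ii, y, x + w // 2, h, w // 2)

img = np.zeros((4, 8)); img[:, :4] = 1.0   # bright left half, dark right half
ii = integral_image(img)
```

The pixel-pipelined design instead consumes each pixel once as it arrives, trading this random-access table for a small amount of streaming state.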
Martin, Elizabeth A.; Karcher, Nicole R.; Bartholow, Bruce D.; Siegle, Greg J.; Kerns, John G.
2017-01-01
Both extreme levels of social anhedonia (SocAnh) and perceptual aberration/magical ideation (PerMag) are associated with risk for schizophrenia-spectrum disorders and with emotional abnormalities. Yet the nature of any psychophysiologically measured affective abnormality, including the role of automatic/controlled processes, is unclear. We examined the late positive potential (LPP) during passive viewing (to assess automatic processing) and during cognitive reappraisal (to assess controlled processing) in three groups: SocAnh, PerMag, and controls. The SocAnh group exhibited an increased LPP when viewing negative images, and greater reductions in the LPP for negative images when told to use strategies to alter negative emotion. Similar to SocAnh, the PerMag group exhibited an increased LPP when viewing negative images; however, it also exhibited an increased LPP when viewing positive images as well as an atypical decreased LPP when increasing positive emotion. Overall, these results suggest that these at-risk groups show both shared and unique automatic and controlled abnormalities. PMID:28174121
Imaging energy landscapes with concentrated diffusing colloidal probes
NASA Astrophysics Data System (ADS)
Bahukudumbi, Pradipkumar; Bevan, Michael A.
2007-06-01
The ability to locally interrogate interactions between particles and energetically patterned surfaces provides essential information to design, control, and optimize template directed self-assembly processes. Although numerous techniques are capable of characterizing local physicochemical surface properties, no current method resolves interactions between colloids and patterned surfaces on the order of the thermal energy kT, which is the inherent energy scale of equilibrium self-assembly processes. Here, the authors describe video microscopy measurements and an inverse Monte Carlo analysis of diffusing colloidal probes as a means to image three dimensional free energy and potential energy landscapes due to physically patterned surfaces. In addition, they also develop a consistent analysis of self-diffusion in inhomogeneous fluids of concentrated diffusing probes on energy landscapes, which is important to the temporal imaging process and to self-assembly kinetics. Extension of the concepts developed in this work suggests a general strategy to image multidimensional and multiscale physical, chemical, and biological surfaces using a variety of diffusing probes (i.e., molecules, macromolecules, nanoparticles, and colloids).
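The inverse analysis rests on Boltzmann statistics: at equilibrium, a probe's position histogram p(x) maps to an energy landscape via U(x)/kT = -ln p(x), which is why kT-scale features become resolvable from video-microscopy trajectories. A one-dimensional sketch of the inversion (the paper's inverse Monte Carlo treatment of concentrated, interacting probes goes well beyond this toy):

```python
import numpy as np

def energy_landscape(positions, bins=20):
    """Potential in units of kT from an equilibrium position histogram,
    shifted so the landscape minimum is zero (U/kT = -ln p + const)."""
    hist, edges = np.histogram(positions, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mask = hist > 0                      # ignore unsampled bins
    U = -np.log(hist[mask])
    return centers[mask], U - U.min()

# probes diffusing in a harmonic trap sample a Gaussian position distribution,
# so the recovered landscape should be parabolic with its minimum at the trap
rng = np.random.default_rng(3)
x = rng.standard_normal(100_000)
centers, U = energy_landscape(x)
```

For interacting probes at finite concentration, this direct inversion is biased, which is what motivates the inverse Monte Carlo correction described by the authors.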
Adaptive windowing in contrast-enhanced intravascular ultrasound imaging
Lindsey, Brooks D.; Martin, K. Heath; Jiang, Xiaoning; Dayton, Paul A.
2016-01-01
Intravascular ultrasound (IVUS) is one of the most commonly used interventional imaging techniques and has seen recent innovations that attempt to characterize the risk posed by atherosclerotic plaques. One such development is the use of microbubble contrast agents to image the vasa vasorum, the fine vessels that supply oxygen and nutrients to the walls of coronary arteries and typically have diameters less than 200 µm. The degree of vasa vasorum neovascularization within plaques is positively correlated with plaque vulnerability. Having recently presented a prototype dual-frequency transducer for contrast-agent-specific intravascular imaging, here we describe signal processing approaches based on minimum variance (MV) beamforming and the phase coherence factor (PCF) for improving the spatial resolution and contrast-to-tissue ratio (CTR) in IVUS imaging. These approaches are examined through simulations, phantom studies, ex vivo studies in porcine arteries, and in vivo studies in chicken embryos. In phantom studies, PCF processing improved CTR by a mean of 4.2 dB, while combined MV and PCF processing improved spatial resolution by 41.7%. Improvements of 2.2 dB in CTR and 37.2% in resolution were observed in vivo. Applying these processing strategies can enhance image quality in conventional B-mode IVUS or in contrast-enhanced IVUS, where the signal-to-noise ratio is relatively low and resolution is at a premium. PMID:27161022
Optimal processing for gel electrophoresis images: Applying Monte Carlo Tree Search in GelApp.
Nguyen, Phi-Vu; Ghezal, Ali; Hsueh, Ya-Chih; Boudier, Thomas; Gan, Samuel Ken-En; Lee, Hwee Kuan
2016-08-01
In biomedical research, gel band size estimation in electrophoresis analysis is a routine process. To facilitate and automate this process, numerous software tools have been released, notably the GelApp mobile app. However, its band detection accuracy is limited by a band detection algorithm that cannot adapt to variations in the input images. To address this, we used the Monte Carlo Tree Search with Upper Confidence Bound (MCTS-UCB) method to efficiently search for optimal image processing pipelines for the band detection task, thereby improving the segmentation algorithm. Incorporating this into GelApp, we report a significant enhancement of gel band detection accuracy: 55.9 ± 2.0% for protein polyacrylamide gels and 35.9 ± 2.5% for DNA SYBR green agarose gels. This implementation is a proof of concept demonstrating MCTS-UCB as a strategy for optimizing general image segmentation. The improved version of GelApp, GelApp 2.0, is freely available on both the Google Play Store (Android) and the Apple App Store (iOS). © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
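The UCB rule that steers the tree search balances a pipeline's average score against how rarely it has been tried. A minimal sketch of the selection step only (the counts, values, and exploration constant are illustrative; the full MCTS expansion/rollout loop is not shown):

```python
import math

def ucb_select(counts, values, c=math.sqrt(2)):
    """UCB1: pick the child maximising mean value plus exploration bonus."""
    total = sum(counts)
    def score(i):
        if counts[i] == 0:
            return float("inf")          # always try unvisited children first
        mean = values[i] / counts[i]
        return mean + c * math.sqrt(math.log(total) / counts[i])
    return max(range(len(counts)), key=score)
```

Here a "child" would be a candidate image-processing operation appended to the current pipeline, and the value would be a segmentation-quality score from evaluating the completed pipeline.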
Technical aspects of CT imaging of the spine.
Tins, Bernhard
2010-11-01
This review article discusses technical aspects of computed tomography (CT) imaging of the spine. Patient positioning, and its influence on image quality and movement artefact, is discussed. Particular emphasis is placed on the choice of scan parameters and their relation to image quality and the radiation burden to the patient. Strategies to reduce the radiation burden and artefact from metal implants are outlined. Data acquisition, processing, image display, and steps to reduce artefact are reviewed. CT imaging of the spine is put into context with other imaging modalities for specific clinical indications or problems. This article reviews the underlying principles of image acquisition and provides a rough guide for clinical problems without being prescriptive. Individual practice will always vary, reflecting differences in local experience, technical provisions and clinical requirements.
Image Guided Biodistribution and Pharmacokinetic Studies of Theranostics
Ding, Hong; Wu, Fang
2012-01-01
Image-guided techniques are playing an increasingly important role in the investigation of the biodistribution and pharmacokinetics of drugs and drug delivery systems in various diseases, especially cancers. Besides anatomical imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI), molecular imaging strategies including optical imaging, positron emission tomography (PET) and single-photon emission computed tomography (SPECT) facilitate the localization and quantification of radioisotope- or optical-probe-labeled nanoparticle delivery systems in the category of theranostics. This review summarizes the quantitative measurement of the biodistribution and pharmacokinetics of theranostics in the fields of new drug/probe development, diagnosis and treatment monitoring, and the tracking of blood-brain barrier (BBB) penetration by highly sensitive imaging methods, along with the applications of the representative imaging modalities. PMID:23227121
Quantitative image quality evaluation of MR images using perceptual difference models
Miao, Jun; Huo, Donglai; Wilson, David L.
2008-01-01
The authors are using a perceptual difference model (Case-PDM) to quantitatively evaluate the image quality of the thousands of test images that can be created when optimizing fast magnetic resonance (MR) imaging strategies and reconstruction techniques. In this validation study, they compared human evaluation of MR images from multiple organs and from multiple image reconstruction algorithms to Case-PDM and similar models. The authors found that Case-PDM compared very favorably to human observers in double-stimulus continuous-quality scale and functional measurement theory studies over a large range of image quality. The Case-PDM threshold for nonperceptible differences in a 2-alternative forced choice study varied with the type of image under study, but was ≈1.1 for diffuse image effects, providing a rule of thumb. Ordering the image quality evaluation models, the authors found overall Case-PDM ≈ IDM (Sarnoff Corporation) ≈ SSIM [Wang et al., IEEE Trans. Image Process. 13, 600–612 (2004)] > mean squared error ≈ NR [Wang et al. (2004) (unpublished)] > DCTune (NASA) > IQM (MITRE Corporation). The authors conclude that Case-PDM is very useful in MR image evaluation but that one should probably restrict studies to similar images and similar processing, normally not a limitation in image reconstruction studies. PMID:18649487
Strategies for the Segmentation of Subcutaneous Vascular Patterns in Thermographic Images
NASA Astrophysics Data System (ADS)
Chan, Eric K. Y.; Pearce, John A.
1989-05-01
Computer-assisted segmentation of vascular patterns in thermographic images provides the clinician with graphic outlines of thermally significant subcutaneous blood vessels. The segmentation strategies compared here consist of image smoothing protocols followed by thresholding and zero-crossing edge detectors. Median prefiltering followed by the Frei-Chen algorithm gave the most reproducible results, with an execution time of 143 seconds for 256 × 256 images. The Laplacian of Gaussian operator was not suitable due to streak artifacts in the thermographic imaging system. This computerized process may be adopted in a fast-paced clinical environment to aid in the diagnosis and assessment of peripheral circulatory diseases, Raynaud's disease, phlebitis, varicose veins, as well as diseases of the autonomic nervous system. The same methodology may be applied to enhance the appearance of abnormal breast vascular patterns, and hence serve as an adjunct to mammography in the diagnosis of breast cancer. The automatically segmented vascular patterns, which have a hand-drawn appearance, may also be used as a data reduction precursor to higher-level pattern analysis and classification tasks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saleh, H Al; Erickson, B; Paulson, E
Purpose: MRI-based adaptive brachytherapy (ABT) is an emerging treatment modality for patients with gynecological tumors. However, MR image intensity non-uniformities (IINU) can vary from fraction to fraction, complicating image interpretation and auto-contouring accuracy. We demonstrate here an automated MR image standardization and auto-contouring strategy for MRI-based ABT of cervix cancer. Methods: MR image standardization consisted of: 1) IINU correction using the MNI N3 algorithm, 2) noise filtering using anisotropic diffusion, and 3) signal intensity normalization using the volumetric median. This post-processing chain was implemented as a series of custom Matlab and Java extensions in MIM (v6.4.5, MIM Software) and was applied to 3D T2 SPACE images of six patients undergoing MRI-based ABT at 3T. Coefficients of variation (CV=σ/µ) were calculated for both original and standardized images and compared using Mann-Whitney tests. Patient-specific cumulative MR atlases of bladder, rectum, and sigmoid contours were constructed throughout ABT, using original and standardized MR images from all previous ABT fractions. Auto-contouring was performed in MIM two ways: 1) best-match of one atlas image to the daily MR image, 2) multi-match of all previous fraction atlas images to the daily MR image. Dice’s Similarity Coefficients (DSCs) were calculated for auto-generated contours relative to reference contours for both original and standardized MR images and compared using Mann-Whitney tests. Results: Significant improvements in CV were detected following MR image standardization (p=0.0043), demonstrating an improvement in MR image uniformity. DSCs consistently increased for auto-contoured bladder, rectum, and sigmoid following MR image standardization, with the highest DSCs detected when the combination of MR image standardization and multi-match cumulative atlas-based auto-contouring was utilized.
Conclusion: MR image standardization significantly improves MR image uniformity. The combination of MR image standardization and multi-match cumulative atlas-based auto-contouring produced the highest DSCs and is a promising strategy for MRI-based ABT for cervix cancer.
Godinez, William J; Rohr, Karl
2015-02-01
Tracking subcellular structures as well as viral structures displayed as 'particles' in fluorescence microscopy images yields quantitative information on the underlying dynamical processes. We have developed an approach for tracking multiple fluorescent particles based on probabilistic data association. The approach combines a localization scheme that uses a bottom-up strategy based on the spot-enhancing filter as well as a top-down strategy based on an ellipsoidal sampling scheme that uses the Gaussian probability distributions computed by a Kalman filter. The localization scheme yields multiple measurements that are incorporated into the Kalman filter via a combined innovation, where the association probabilities are interpreted as weights calculated using an image likelihood. To track objects in close proximity, we compute the support of each image position relative to the neighboring objects of a tracked object and use this support to recalculate the weights. To cope with multiple motion models, we integrated the interacting multiple model algorithm. The approach has been successfully applied to synthetic 2-D and 3-D images as well as to real 2-D and 3-D microscopy images, and the performance has been quantified. In addition, the approach was successfully applied to the 2-D and 3-D image data of the recent Particle Tracking Challenge at the IEEE International Symposium on Biomedical Imaging (ISBI) 2012.
Single-Scale Fusion: An Effective Approach to Merging Images.
Ancuti, Codruta O; Ancuti, Cosmin; De Vleeschouwer, Christophe; Bovik, Alan C
2017-01-01
Due to its robustness and effectiveness, multi-scale fusion (MSF) based on the Laplacian pyramid decomposition has emerged as a popular technique that has shown utility in many applications. Guided by several intuitive measures (weight maps), the MSF process is versatile and straightforward to implement. However, the number of pyramid levels increases with the image size, which implies sophisticated data management and memory accesses, as well as additional computations. Here, we introduce a simplified formulation that reduces MSF to a single-level process. Starting from the MSF decomposition, we explain both mathematically and intuitively (visually) how to simplify the classical MSF approach with minimal loss of information. The resulting single-scale fusion (SSF) solution is a close approximation of the MSF process that eliminates a significant amount of redundant computation. It also provides insights into why MSF is so effective. While our simplified expression is derived in the context of high dynamic range imaging, we show its generality on several well-known fusion-based applications, such as image compositing, extended depth of field, medical imaging, and blending thermal (infrared) images with visible light. Besides visual validation, quantitative evaluations demonstrate that our SSF strategy is able to yield results that are highly competitive with traditional MSF approaches.
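The operation both MSF and SSF build on is a per-pixel blend under normalised weight maps; classical MSF repeats it at every Laplacian pyramid level, while SSF collapses it to one level. A minimal sketch of the blend itself (the paper's specific single-scale filtering of weights and details is not reproduced):

```python
import numpy as np

def fuse(images, weights):
    """Per-pixel weighted average with weight maps normalised across inputs,
    the core step applied at each level of a multi-scale fusion scheme."""
    W = np.asarray(weights, dtype=float)
    W /= W.sum(axis=0, keepdims=True) + 1e-12      # normalise per pixel
    return np.sum(W * np.asarray(images, dtype=float), axis=0)

dark  = np.zeros((4, 4))                            # under-exposed input
light = np.ones((4, 4))                             # well-exposed input
# weight maps favouring the dark input 3:1 at every pixel
fused = fuse([dark, light], [3 * np.ones((4, 4)), np.ones((4, 4))])
```

Applying this blend naively at full resolution causes halos at weight-map discontinuities, which is precisely what the pyramid, or its single-scale approximation, is designed to avoid.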
A Simple Chamber for Long-term Confocal Imaging of Root and Hypocotyl Development.
Kirchhelle, Charlotte; Moore, Ian
2017-05-17
Several aspects of plant development, such as lateral root morphogenesis, occur on time spans of several days. To study underlying cellular and subcellular processes, high resolution time-lapse microscopy strategies that preserve physiological conditions are required. Plant tissues must have adequate nutrient and water supply with sustained gaseous exchange but, when submerged and immobilized under a coverslip, they are particularly susceptible to anoxia. One strategy that has been successfully employed is the use of a perfusion system to maintain a constant supply of oxygen and nutrients. However, such arrangements can be complicated, cumbersome, and require specialized equipment. Presented here is an alternative strategy for a simple imaging system using perfluorodecalin as an immersion medium. This system is easy to set up, requires minimal equipment, and is easily mounted on a microscope stage, allowing several imaging chambers to be set up and imaged in parallel. In this system, lateral root growth rates are indistinguishable from growth rates under standard conditions on agar plates for the first two days, and lateral root growth continues at reduced rates for at least another day. Plant tissues are supplied with nutrients via an agar slab that can be used also to administer a range of pharmacological compounds. The system was established to monitor lateral root development but is readily adaptable to image other plant organs such as hypocotyls and primary roots.
Self-rated imagery and encoding strategies in visual memory.
Berger, G H; Gaunitz, S C
1979-02-01
The value of self-rated vividness of imagery in predicting performance was investigated, taking into account the mnemonic strategies utilized among subjects performing a visual-memory task. Subjects classified as 'good' or 'poor' imagers, according to their scores in the Vividness of Visual Imagery Questionnaire (VVIQ; Marks, 1972), were to detect as rapidly as possible differences between pairs of similar pictures presented consecutively. No coding instructions were given and the mnemonic strategies used were analysed by studying subjective reports and objective performance measurements. The results indicated that the subjects utilized two main strategies--a detail or an image strategy. The detail strategy was the more efficient. In accordance with a previous study (Berger & Gaunitz, 1977), it was found that the VVIQ did not discriminate between performance by 'good' and 'poor' imagers. However, among subjects who used the image strategy, 'good' imagers performed more rapidly than 'poor' imagers. Self-rated imagery may then have some value in predicting performance among individuals shown to have utilized an image strategy.
WE-G-BRF-09: Force- and Image-Adaptive Strategies for Robotised Placement of 4D Ultrasound Probes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhlemann, I; Graduate School for Computing in Life Science, University of Luebeck, Luebeck; Bruder, R
2014-06-15
Purpose: To allow continuous acquisition of high quality 4D ultrasound images for non-invasive live tracking of tumours for IGRT, image- and force-adaptive strategies for robotised placement of 4D ultrasound probes are developed and evaluated. Methods: The developed robotised ultrasound system is based on a 6-axes industrial robot (adept Viper s850) carrying a 4D ultrasound transducer with a mounted force-torque sensor. The force-adaptive placement strategies include probe position control using artificial potential fields and contact pressure regulation by a PD controller strategy. The basis for live target tracking is a continuous minimum contact pressure to ensure good image quality and high patient comfort. This contact pressure can be significantly disturbed by respiratory movements and has to be compensated. All measurements were performed on human subjects under realistic conditions. When performing cardiac ultrasound, rib- and lung shadows are a common source of interference and can disrupt the tracking. To ensure continuous tracking, these artefacts had to be detected to automatically realign the probe. The detection is realised by multiple algorithms based on entropy calculations as well as a determination of the image quality. Results: Through active contact pressure regulation it was possible to reduce the variance of the contact pressure by 89.79% despite respiratory motion of the chest. The results regarding the image processing clearly demonstrate the feasibility to detect image artefacts like rib shadows in real-time. Conclusion: In all cases, it was possible to stabilise the image quality by active contact pressure control and automatically detected image artefacts. This fact enables the possibility to compensate for such interferences by realigning the probe and thus continuously optimising the ultrasound images.
This is a huge step towards fully automated transducer positioning and opens the possibility for stable target tracking in ultrasound-guided radiation therapy requiring contact pressure of 5–10 N. This work was supported by the Graduate School for Computing in Medicine and Life Sciences funded by Germany's Excellence Initiative [DFG GSC 235/1].
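The abstract above reports a PD controller that holds a constant probe contact pressure against respiratory chest motion. A minimal sketch of such a controller, computing a probe displacement command from a simulated force signal, is shown below; the gains, target force, and breathing model are hypothetical illustrations, not values from the paper.

```python
import numpy as np

def pd_contact_control(force_meas, force_target, dt, kp=0.8, kd=0.05):
    """PD control law returning probe displacement commands.

    Illustrative sketch: kp/kd are hypothetical gains, not the paper's.
    A negative command retracts the probe (force too high), a positive
    command advances it (force too low)."""
    error = force_target - np.asarray(force_meas, dtype=float)
    d_error = np.gradient(error, dt)            # finite-difference derivative
    return kp * error + kd * d_error

# Simulated respiratory disturbance on the contact force (assumed model):
# breathing at 0.25 Hz around a 7.5 N setpoint
t = np.linspace(0, 10, 500)
dt = t[1] - t[0]
force = 7.5 + 2.0 * np.sin(2 * np.pi * 0.25 * t)
command = pd_contact_control(force, force_target=7.5, dt=dt)
```

In a closed loop the command would feed the robot's probe axis each control cycle, counteracting the breathing-induced force swing.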
A proposed intracortical visual prosthesis image processing system.
Srivastava, N R; Troyk, P
2005-01-01
It has been a goal of neuroprosthesis researchers to develop a system which could provide artificial vision to a large population of individuals with blindness. Earlier research has demonstrated that electrically stimulating the visual cortex can evoke spatial visual percepts, i.e. phosphenes. The goal of a visual cortex prosthesis is to stimulate the visual cortex and generate visual perception in real time to restore vision. Even though the normal working of the visual system has not been completely understood, the existing knowledge has inspired research groups to develop strategies for visual cortex prostheses which can help blind patients in their daily activities. A major challenge in this work is the development of an image processing system for converting an electronic image, as captured by a camera, into a real-time data stream for stimulation of the implanted electrodes. This paper proposes a system which will capture the image using a camera and use a dedicated hardware real-time image processor to deliver electrical pulses to intracortical electrodes. This system has to be flexible enough to adapt to individual patients and to various strategies of image reconstruction. Here we consider a preliminary architecture for this system.
Fuzzy C-means classification for corrosion evolution of steel images
NASA Astrophysics Data System (ADS)
Trujillo, Maite; Sadki, Mustapha
2004-05-01
An unavoidable problem of metal structures is their exposure to rust degradation during their operational life. Thus, the surfaces need to be assessed in order to avoid potential catastrophes. There is considerable interest in the use of patch repair strategies which minimize the project costs. However, to operate such strategies with confidence in the long useful life of the repair, it is essential that the condition of the existing coatings and the steel substrate can be accurately quantified and classified. This paper describes the application of fuzzy set theory for classifying steel surfaces according to rust time. We propose a semi-automatic technique to obtain image clustering using the Fuzzy C-means (FCM) algorithm and we analyze two kinds of data to study the classification performance. Firstly, we investigate the use of raw image pixels, without any pre-processing, together with neighborhood pixels. Secondly, we apply Gaussian noise with different standard deviations to the images to study the tolerance of the FCM method to Gaussian noise. The noisy images simulate possible perturbations of the images due to the weather or rust deposits on the steel surfaces during typical on-site acquisition procedures.
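The Fuzzy C-means algorithm used in the abstract above alternates between updating cluster centers and soft membership degrees. A minimal, self-contained sketch on synthetic one-dimensional "pixel intensity" data follows; the fuzzifier m = 2, iteration count, and toy data are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means. X: (n_samples, n_features).

    Returns (centers, U) where U[i, k] is the degree to which sample i
    belongs to cluster k (rows sum to 1). The fuzzifier m and iteration
    budget are illustrative defaults."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                    # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))            # standard FCM update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Toy data: two intensity modes, loosely mimicking rusted vs. clean pixels
X = np.concatenate([np.full(50, 0.2), np.full(50, 0.8)])[:, None]
X = X + 0.01 * np.random.default_rng(1).standard_normal(X.shape)
centers, U = fuzzy_cmeans(X, c=2)
labels = U.argmax(axis=1)                        # hard labels for inspection
```

The soft memberships in U, rather than the hard labels, are what makes FCM attractive for noisy surface images: ambiguous pixels receive intermediate degrees instead of a forced assignment.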
Iterative Nonlinear Tikhonov Algorithm with Constraints for Electromagnetic Tomography
NASA Technical Reports Server (NTRS)
Xu, Feng; Deshpande, Manohar
2012-01-01
Low frequency electromagnetic tomography, such as electrical capacitance tomography (ECT), has been proposed for monitoring and mass-gauging of gas-liquid two-phase systems under microgravity conditions in NASA's future long-term space missions. Due to the ill-posed inverse problem of ECT, images reconstructed using conventional linear algorithms often suffer from limitations such as low resolution and blurred edges. Hence, new efficient high resolution nonlinear imaging algorithms are needed for accurate two-phase imaging. The proposed Iterative Nonlinear Tikhonov Regularized Algorithm with Constraints (INTAC) is based on an efficient finite element method (FEM) forward model of the quasi-static electromagnetic problem. It iteratively minimizes the discrepancy between FEM-simulated and actually measured capacitances by adjusting the reconstructed image using the Tikhonov regularized method. More importantly, in each iteration it enforces the known permittivity of the two phases on the unknown pixels which exceed the reasonable range of permittivity. This strategy not only stabilizes the converging process, but also produces sharper images. Simulations show that a resolution improvement of over 2 times can be achieved by INTAC with respect to conventional approaches. Strategies to further improve spatial imaging resolution are suggested, as well as techniques to accelerate the nonlinear forward model and thus increase the temporal resolution.
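The iteration described above (minimize the data discrepancy with a Tikhonov-regularized update, then clamp pixels to the known two-phase permittivity range) can be sketched on a toy linear forward model. The random sensitivity matrix, regularization weight, and permittivity bounds below are hypothetical stand-ins; the paper uses a nonlinear FEM forward model of ECT.

```python
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_pix = 40, 25
J = rng.standard_normal((n_meas, n_pix))      # toy sensitivity (Jacobian)
eps_lo, eps_hi = 1.0, 3.0                     # assumed two-phase permittivities
x_true = np.where(rng.random(n_pix) > 0.5, eps_hi, eps_lo)
y = J @ x_true                                # synthetic "measured" capacitances

alpha = 1e-2                                  # Tikhonov regularization weight
x = np.full(n_pix, 0.5 * (eps_lo + eps_hi))   # start from the mid-range value
for _ in range(50):
    r = y - J @ x                             # discrepancy: measured - simulated
    # Tikhonov-regularized correction step
    dx = np.linalg.solve(J.T @ J + alpha * np.eye(n_pix), J.T @ r)
    x = x + dx
    # constraint step: force pixels back into the physical permittivity range
    x = np.clip(x, eps_lo, eps_hi)
```

The clipping step is the "constraints" part of the algorithm: it both stabilizes convergence and sharpens the reconstructed two-phase boundary, since out-of-range excursions are projected back onto physically meaningful values.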
A Specific Screening Strategy to Reduce Prostate Cancer Mortality
2013-09-01
Integrated analysis of remote sensing products from basic geological surveys. [Brazil
NASA Technical Reports Server (NTRS)
Dasilvafagundesfilho, E. (Principal Investigator)
1984-01-01
Recent advances in remote sensing have led to the development of several techniques to obtain image information. These techniques are analyzed as effective tools in geological mapping. A strategy for optimizing the use of images in basic geological surveying is presented. It embraces an integrated analysis of spatial, spectral, and temporal data through photoptic (color additive viewer) and computer processing at different scales, allowing large areas to be surveyed in a fast, precise, and low cost manner.
Strategies GeoCape Intelligent Observation Studies @ GSFC
NASA Technical Reports Server (NTRS)
Cappelaere, Pat; Frye, Stu; Moe, Karen; Mandl, Dan; LeMoigne, Jacqueline; Flatley, Tom; Geist, Alessandro
2015-01-01
This presentation provides a summary of the tradeoff studies conducted for GeoCape by the GSFC team on how to optimize GeoCape observation efficiency. Tradeoffs include total ground scheduling with simple priorities, ground scheduling with cloud forecast, ground scheduling with sub-area forecast, onboard scheduling with onboard cloud detection, and smart onboard scheduling and onboard image processing. The tradeoffs considered optimizing cost, downlink bandwidth and total number of images acquired.
Fuzzy-based propagation of prior knowledge to improve large-scale image analysis pipelines
Mikut, Ralf
2017-01-01
Many automatically analyzable scientific questions are well-posed and a variety of information about expected outcomes is available a priori. Although often neglected, this prior knowledge can be systematically exploited to make automated analysis operations sensitive to a desired phenomenon or to evaluate extracted content with respect to this prior knowledge. For instance, the performance of processing operators can be greatly enhanced by a more focused detection strategy and by direct information about the ambiguity inherent in the extracted data. We present a new concept that increases the result quality awareness of image analysis operators by estimating and distributing the degree of uncertainty involved in their output based on prior knowledge. This allows the use of simple processing operators that are suitable for analyzing large-scale spatiotemporal (3D+t) microscopy images without compromising result quality. On the foundation of fuzzy set theory, we transform available prior knowledge into a mathematical representation and extensively use it to enhance the result quality of various processing operators. These concepts are illustrated on a typical bioimage analysis pipeline comprised of seed point detection, segmentation, multiview fusion and tracking. The functionality of the proposed approach is further validated on a comprehensive simulated 3D+t benchmark data set that mimics embryonic development and on large-scale light-sheet microscopy data of a zebrafish embryo. The general concept introduced in this contribution represents a new approach to efficiently exploit prior knowledge to improve the result quality of image analysis pipelines. The generality of the concept makes it applicable to practically any field with processing strategies that are arranged as linear pipelines. The automated analysis of terabyte-scale microscopy data will especially benefit from sophisticated and efficient algorithms that enable a quantitative and fast readout. 
PMID:29095927
Modern Techniques in Acoustical Signal and Image Processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candy, J V
2002-04-04
Acoustical signal processing problems can lead to some complex and intricate techniques to extract the desired information from noisy, sometimes inadequate, measurements. The challenge is to formulate a meaningful strategy that is aimed at performing the processing required even in the face of uncertainties. This strategy can be as simple as a transformation of the measured data to another domain for analysis or as complex as embedding a full-scale propagation model into the processor. The aims of both approaches are the same: to extract the desired information and reject the extraneous, that is, develop a signal processing scheme to achieve this goal. In this paper, we briefly discuss this underlying philosophy from a "bottom-up" approach enabling the problem to dictate the solution rather than vice versa.
Spine metastasis imaging: review of the literature.
Guillevin, R; Vallee, J-N; Lafitte, F; Menuel, C; Duverneuil, N-M; Chiras, J
2007-12-01
Any malignant neoplasm possesses the capacity to metastasize to the musculoskeletal system. Because the spine is the most frequent site of bone metastasis, imaging must be discussed in cases of cancer. Bone marrow is the main interest in imaging the metastatic process by magnetic resonance, while X-rays allow the study of cortical involvement. This article presents our experience, and a review of the literature, in an overview of the different imaging techniques, X-rays and magnetic resonance, with emphasis on the many difficulties that can be encountered in the diagnosis and monitoring of spinal metastases, allowing a management strategy for diagnosis and follow-up.
The elimination of zero-order diffraction of 10.6 μm infrared digital holography
NASA Astrophysics Data System (ADS)
Liu, Ning; Yang, Chao
2017-05-01
A new method for eliminating the zero-order diffraction in infrared digital holography is proposed in this paper. In the reconstruction of digital holograms, the spatial frequency of an infrared thermal imager, such as a microbolometer, cannot be compared to that of common visible CCD or CMOS devices. The infrared imager suffers from large pixel size and low spatial resolution, which cause the zero-order diffraction to severely affect the reconstruction process of digital holograms. The zero-order diffraction has very large energy and occupies the central region of the spectrum domain. In this paper, we design a new filtering strategy to overcome this problem. This strategy combines two filtering processes: a Gaussian low-frequency filter and a high-pass phase averaging filter. With a correct setting of the calculation parameters, these filters work effectively on the holograms and fully eliminate the zero-order diffraction, as well as the two crossover bars shown in the spectrum domain. A detailed explanation and discussion of the new method are given, and experimental results are demonstrated to prove its performance.
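The core idea above, suppressing the high-energy zero-order term that sits at the center of the hologram's spectrum, can be illustrated with a single spectrum-domain filter. The sketch below uses one inverted-Gaussian high-pass mask on a synthetic fringe pattern; it is a simplified stand-in for the paper's two-filter strategy, and the sigma value and test hologram are assumptions.

```python
import numpy as np

def suppress_zero_order(holo, sigma=5.0):
    """Attenuate the zero-order (DC-centered) term of a hologram.

    Simplified sketch: a single inverted-Gaussian high-pass mask in the
    centered spectrum domain; sigma is a hypothetical width parameter."""
    F = np.fft.fftshift(np.fft.fft2(holo))       # spectrum, DC at center
    ny, nx = holo.shape
    y, x = np.mgrid[0:ny, 0:nx]
    r2 = (y - ny // 2) ** 2 + (x - nx // 2) ** 2
    hp = 1.0 - np.exp(-r2 / (2.0 * sigma ** 2))  # 0 at center, ~1 far away
    return np.fft.ifft2(np.fft.ifftshift(F * hp))

# Synthetic off-axis hologram: strong DC background plus a tilted fringe
# pattern (16 and 8 cycles across a 64x64 frame)
ny, nx = 64, 64
y, x = np.mgrid[0:ny, 0:nx]
holo = 10.0 + np.cos(2 * np.pi * (0.25 * x + 0.125 * y))
filtered = suppress_zero_order(holo)
```

Because the fringe carrier sits well away from the spectrum center, the mask removes the DC term while leaving the interference fringes, the part carrying the object information, nearly untouched.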
Hyperspectral imaging applied to complex particulate solids systems
NASA Astrophysics Data System (ADS)
Bonifazi, Giuseppe; Serranti, Silvia
2008-04-01
HyperSpectral Imaging (HSI) is based on the utilization of an integrated hardware and software (HW&SW) platform embedding conventional imaging and spectroscopy to attain both spatial and spectral information from an object. Although HSI was originally developed for remote sensing, it has recently emerged as a powerful process analytical tool, for non-destructive analysis, in many research and industrial sectors. The possibility to apply on-line HSI based techniques in order to identify and quantify specific particulate solid systems characteristics is presented and critically evaluated. The originally developed HSI based logics can be profitably applied in order to develop fast, reliable and low-cost strategies for: i) quality control of particulate products that must comply with specific chemical, physical and biological constraints, ii) performance evaluation of manufacturing strategies related to processing chains and/or real-time tuning of operative variables and iii) classification-sorting actions addressed to recognize and separate different particulate solid products. Case studies, related to recent advances in the application of HSI to different industrial sectors, such as agriculture, food, pharmaceuticals, solid waste handling and recycling, etc., and addressed to specific goals such as contaminant detection, defect identification, constituent analysis and quality evaluation are described, according to the authors' originally developed application.
Serial and semantic encoding of lists of words in schizophrenia patients with visual hallucinations.
Brébion, Gildas; Ohlsen, Ruth I; Pilowsky, Lyn S; David, Anthony S
2011-03-30
Previous research has suggested that visual hallucinations in schizophrenia are associated with abnormal salience of visual mental images. Since visual imagery is used as a mnemonic strategy to learn lists of words, increased visual imagery might impede the other commonly used strategies of serial and semantic encoding. We had previously published data on the serial and semantic strategies implemented by patients when learning lists of concrete words with different levels of semantic organisation (Brébion et al., 2004). In this paper we present a re-analysis of these data, aiming at investigating the associations between learning strategies and visual hallucinations. Results show that the patients with visual hallucinations presented less serial clustering in the non-organisable list than the other patients. In the semantically organisable list with typical instances, they presented both less serial and less semantic clustering than the other patients. Thus, patients with visual hallucinations demonstrate reduced use of serial and semantic encoding in the lists made up of fairly familiar concrete words, which enable the formation of mental images. Although these results are preliminary, we propose that this different processing of the lists stems from the abnormal salience of the mental images such patients experience from the word stimuli. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Infrared Imaging of Boundary Layer Transition Flight Experiments
NASA Technical Reports Server (NTRS)
Berry, Scott A.; Horvath, Thomas J., Jr.; Schwartz, Richard; Ross, Martin; Anderson, Brian; Campbell, Charles H.
2008-01-01
The Hypersonic Thermodynamic Infrared Measurement (HYTHIRM) project is presently focused on near term support to the Shuttle program through the development of an infrared imaging capability of sufficient spatial and temporal resolution to augment existing on-board Orbiter instrumentation. Significant progress has been made with the identification and inventory of relevant existing optical imaging assets and the development, maturation, and validation of simulation and modeling tools for assessment and mission planning purposes, which were intended to lead to the best strategies and assets for successful acquisition of quantitative global surface temperature data on the Shuttle during entry. However, there are longer-term goals of providing global infrared imaging support to other flight projects as well. A status of HYTHIRM from the perspective of how two NASA-sponsored boundary layer transition flight experiments could benefit by infrared measurements is provided. Those two flight projects are the Hypersonic Boundary layer Transition (HyBoLT) flight experiment and the Shuttle Boundary Layer Transition Flight Experiment (BLT FE), which are both intended for reducing uncertainties associated with the extrapolation of wind tunnel derived transition correlations for flight application. Thus, the criticality of obtaining high quality flight data along with the impact it would provide to the Shuttle program damage assessment process are discussed. Two recent wind tunnel efforts that were intended as risk mitigation in terms of quantifying the transition process and resulting turbulent wedge locations are briefly reviewed. Progress is being made towards finalizing an imaging strategy in support of the Shuttle BLT FE, however there are no plans currently to image HyBoLT.
Identifying attentional bias and emotional response after appearance-related stimuli exposure.
Cho, Ara; Kwak, Soo-Min; Lee, Jang-Han
2013-01-01
The effect of media images has been regarded as a significant variable in the construction or in the activation of body images. Individuals who have a negative body image use avoidance coping strategies to minimize damage to their body image. We identified attentional biases and negative emotional responses following exposure to body stimuli. Female university students were divided into two groups based on their use of avoidance coping strategies (high-level group: high avoidance [HA]; low-level group: low avoidance [LA]), and were assigned to two different conditions (exposure to thin body pictures, ET, and exposure to oversized body pictures, EO). Results showed that the HA group paid more attention to slim bodies and reported more negative emotions than the LA group, and that the EO had more negative effects than the ET. We suggest that HAs may attend more to slim bodies as a way of avoiding overweight bodies, influenced by social pressure, and in the search for a compensation of a positive emotional balance. However, attentional bias toward slim bodies can cause an upward comparison process, leading to increased body dissatisfaction, which is the main factor in the development of eating disorders (EDs). Therefore, altering avoidance coping strategies should be considered for people at risk of EDs.
An embedded multi-core parallel model for real-time stereo imaging
NASA Astrophysics Data System (ADS)
He, Wenjing; Hu, Jian; Niu, Jingyu; Li, Chuanrong; Liu, Guangyu
2018-04-01
Real-time processing based on embedded systems will enhance the application capability of stereo imaging for LiDAR and hyperspectral sensors. Research on task partitioning and scheduling strategies for embedded multiprocessor systems started relatively late, compared with that for PC computers. In this paper, aimed at an embedded multi-core processing platform, a parallel model for stereo imaging is studied and verified. After analyzing the computing load, throughput capacity and buffering requirements, a two-stage pipeline parallel model based on message transmission is established. This model can be applied to fast stereo imaging for airborne sensors with various characteristics. To demonstrate the feasibility and effectiveness of the parallel model, parallel software was designed using test flight data, based on the 8-core DSP processor TMS320C6678. The results indicate that the design performed well in workload distribution and achieved a speed-up ratio of up to 6.4.
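A two-stage pipeline based on message passing, as described above, overlaps the stages so that frame N is in stage 2 while frame N+1 is in stage 1. The sketch below demonstrates the structure with Python threads and bounded queues; the paper targets an 8-core TMS320C6678 DSP, and the per-stage arithmetic here is a placeholder for the real imaging work.

```python
import queue
import threading

def stage1(inbox, outbox):
    """First pipeline stage (stand-in for, e.g., geometric computation)."""
    while (item := inbox.get()) is not None:
        outbox.put(item * 2)
    outbox.put(None)                      # propagate end-of-stream marker

def stage2(inbox, results):
    """Second pipeline stage (stand-in for, e.g., resampling/output)."""
    while (item := inbox.get()) is not None:
        results.append(item + 1)

# Bounded queues model the limited inter-stage buffering analyzed in the paper
q01, q12, results = queue.Queue(maxsize=4), queue.Queue(maxsize=4), []
t1 = threading.Thread(target=stage1, args=(q01, q12))
t2 = threading.Thread(target=stage2, args=(q12, results))
t1.start(); t2.start()
for frame in range(5):
    q01.put(frame)                        # frames stream in; stages overlap
q01.put(None)                             # signal end of input
t1.join(); t2.join()
```

On a real multi-core DSP the queues would be replaced by inter-core message channels, but the balancing concern is the same: throughput is set by the slower stage, so the work split between the stages determines the speed-up.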
Image processing and machine learning in the morphological analysis of blood cells.
Rodellar, J; Alférez, S; Acevedo, A; Molina, A; Merino, A
2018-05-01
This review focuses on how image processing and machine learning can be useful for the morphological characterization and automatic recognition of cell images captured from peripheral blood smears. The basics of the 3 core elements (segmentation, quantitative features, and classification) are outlined, and recent literature is discussed. Although red blood cells are a significant part of this context, this study focuses on malignant lymphoid cells and blast cells. There is no doubt that these technologies may help the cytologist to perform efficient, objective, and fast morphological analysis of blood cells. They may also help in the interpretation of some morphological features and may serve as learning and survey tools. Although research is still needed, it is important to define screening strategies to exploit the potential of image-based automatic recognition systems integrated in the daily routine of laboratories along with other analysis methodologies. © 2018 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Lu, Xiaodong; Wu, Tianze; Zhou, Jun; Zhao, Bin; Ma, Xiaoyuan; Tang, Xiucheng
2016-03-01
An electronic image stabilization method compounded with inertial information, which can compensate the coupling interference caused by the pitch-yaw movement of an optically stabilized platform system, is proposed in this paper. Firstly, the mechanisms of coning rotation and lever-arm translation of the line of sight (LOS) during the stabilization process under moving carriers are analyzed, and a mathematical model describing the relationship between the LOS rotation angle and the platform attitude angle is derived. Then the image spin angle caused by coning rotation is estimated using inertial information. Furthermore, an adaptive block matching method based on image edges and corner points is proposed to smooth the jitter created by the lever-arm translation. This method optimizes the matching process and strategies. Finally, the results of hardware-in-the-loop simulation verified the effectiveness and real-time performance of the proposed method.
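The block matching step above estimates frame-to-frame translation by finding where a reference block reappears in the next frame. A minimal exhaustive search with a sum-of-absolute-differences (SAD) criterion is sketched below; the paper's adaptive selection of edge and corner blocks is omitted, and the block size, search range, and synthetic frames are illustrative assumptions.

```python
import numpy as np

def match_block(prev, curr, top, left, bsize=8, search=4):
    """Exhaustive SAD block matching over a +/-search window.

    Returns the (dy, dx) displacement of the block taken from `prev`
    at (top, left) that best matches `curr`. Simplified sketch: no
    edge/corner-based block selection, no sub-pixel refinement."""
    block = prev[top:top + bsize, left:left + bsize]
    best, best_dv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bsize > curr.shape[0] or x + bsize > curr.shape[1]:
                continue                  # candidate window out of bounds
            sad = np.abs(curr[y:y + bsize, x:x + bsize] - block).sum()
            if sad < best:
                best, best_dv = sad, (dy, dx)
    return best_dv

# Synthetic frame pair: the second frame is the first shifted by (2, 3)
rng = np.random.default_rng(0)
frame0 = rng.random((32, 32))
frame1 = np.roll(np.roll(frame0, 2, axis=0), 3, axis=1)
dy, dx = match_block(frame0, frame1, top=12, left=12)
```

Averaging such per-block displacements over many edge/corner blocks, as the paper does, yields a robust global jitter estimate that can then be compensated in the output image.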
Preliminary study of rib articulated model based on dynamic fluoroscopy images
NASA Astrophysics Data System (ADS)
Villard, Pierre-Frederic; Escamilla, Pierre; Kerrien, Erwan; Gorges, Sebastien; Trousset, Yves; Berger, Marie-Odile
2014-03-01
We present in this paper a preliminary study of rib motion tracking during Interventional Radiology (IR) fluoroscopy-guided procedures. It consists in providing a physician with moving three-dimensional (3D) rib models projected onto the fluoroscopy plane during a treatment. The aim is to help quickly recognize the target and the no-go areas, i.e. the tumor and the organs to avoid. The method consists in i) elaborating a kinematic model of each rib from a preoperative computerized tomography (CT) scan, ii) processing the on-line fluoroscopy image and iii) optimizing the parameters of the kinematic law such that the transformed 3D rib projected onto the medical image plane fits well with the previously processed image. The results show visually good rib tracking that has been quantitatively validated by showing a periodic motion as well as good synchronism between ribs.
Imaging and the completion of the omics paradigm in breast cancer.
Leithner, D; Horvat, J V; Ochoa-Albiztegui, R E; Thakur, S; Wengert, G; Morris, E A; Helbich, T H; Pinker, K
2018-06-08
Within the field of oncology, "omics" strategies (genomics, transcriptomics, proteomics, metabolomics) have many potential applications and may significantly improve our understanding of the underlying processes of cancer development and progression. Omics strategies aim to develop meaningful imaging biomarkers for breast cancer (BC) by rapid assessment of large datasets with different biological information. In BC the paradigm of omics technologies has always favored the integration of multiple layers of omics data to achieve a complete portrait of BC. Advances in medical imaging technologies, image analysis, and the development of high-throughput methods that can extract and correlate multiple imaging parameters with "omics" data have ushered in a new direction in medical research. Radiogenomics is a novel omics strategy that aims to correlate imaging characteristics (i.e., the imaging phenotype) with underlying gene expression patterns, gene mutations, and other genome-related characteristics. Radiogenomics not only represents the evolution in the radiology-pathology correlation from the anatomical-histological level to the molecular level, but it is also a pivotal step in the omics paradigm in BC in order to fully characterize BC. Armed with modern analytical software tools, radiogenomics leads to new discoveries of quantitative and qualitative imaging biomarkers that offer hitherto unprecedented insights into the complex tumor biology and facilitate a deeper understanding of cancer development and progression. The field of radiogenomics in breast cancer is rapidly evolving, and results from previous studies are encouraging. It can be expected that radiogenomics will play an important role in the future and has the potential to revolutionize the diagnosis, treatment, and prognosis of BC patients. This article aims to give an overview of breast radiogenomics, its current role, future applications, and challenges.
NASA Astrophysics Data System (ADS)
Eguizabal, Alma; Real, Eusebio; Pontón, Alejandro; Calvo Diez, Marta; Val-Bernal, J. Fernando; Mayorga, Marta; Revuelta, José M.; López-Higuera, José M.; Conde, Olga M.
2014-05-01
Optical Coherence Tomography is a natural candidate for imaging biological structures just under the tissue surface. Human thoracic aortas from aneurysms reveal elastin disorders and smooth muscle cell alterations when visualizing the media layer of the aortic wall, which is only some tens of microns deep from the surface. The resulting images require suitable processing to enhance the disorder features of interest and to use them as indicators of wall degradation, converting OCT into a hallmark for diagnosis of aneurysm risk under intraoperative conditions. This work proposes gradient-based digital image processing approaches to assess this risk. These techniques are believed to be useful in these applications as aortic wall disorders directly affect the refractive index of the tissue, having an effect on the gradient of the tissue reflectivity that makes up the OCT image. Preliminary results show that the direction of the gradient contains information to estimate the tissue abnormality score. The detection of the edges of the OCT image is performed using the Canny algorithm. The edges delineate tissue disorders in the region of interest and isolate the abnormalities. These edges can be quantified to estimate a degradation score. Furthermore, the direction of the gradient seems to be a promising enhancement technique, as it detects areas of homogeneity in the region of interest. Automatic results from gradient-based strategies are finally compared to the histopathological global aortic score, which accounts for the presence and seriousness of each risk factor.
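The gradient magnitude and direction that the abstract above builds on are the first stage of the Canny detector it mentions. A self-contained Sobel sketch on a synthetic reflectivity step is shown below; the full Canny pipeline (Gaussian smoothing, non-maximum suppression, hysteresis thresholding) and the real OCT data are omitted, and the toy image is an assumption.

```python
import numpy as np

def sobel_gradient(img):
    """Sobel gradient magnitude and direction via explicit 3x3 correlation.

    This is only the first building block of Canny edge detection; the
    smoothing, non-maximum suppression and hysteresis stages are omitted."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros(img.shape, dtype=float)
    gy = np.zeros(img.shape, dtype=float)
    for i in range(3):                     # accumulate the 3x3 correlation
        for j in range(3):
            win = pad[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy), np.arctan2(gy, gx)

# Synthetic "tissue" image: a vertical reflectivity step (edge along columns)
img = np.zeros((16, 16))
img[:, 8:] = 1.0
mag, direction = sobel_gradient(img)
```

On this step image the gradient direction is 0 rad (pointing along +x) at the edge, which is exactly the kind of directional consistency the abstract exploits: homogeneous tissue regions show coherent gradient directions, while disordered regions do not.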
NASA Astrophysics Data System (ADS)
Yu, Le; Zhang, Dengrong; Holden, Eun-Jung
2008-07-01
Automatic registration of multi-source remote-sensing images is a difficult task as it must deal with the varying illuminations and resolutions of the images, different perspectives and the local deformations within the images. This paper proposes a fully automatic and fast non-rigid image registration technique that addresses those issues. The proposed technique performs a pre-registration process that coarsely aligns the input image to the reference image by automatically detecting their matching points by using the scale invariant feature transform (SIFT) method and an affine transformation model. Once the coarse registration is completed, it performs a fine-scale registration process based on a piecewise linear transformation technique using feature points that are detected by the Harris corner detector. The registration process firstly finds in succession, tie point pairs between the input and the reference image by detecting Harris corners and applying a cross-matching strategy based on a wavelet pyramid for a fast search speed. Tie point pairs with large errors are pruned by an error-checking step. The input image is then rectified by using triangulated irregular networks (TINs) to deal with irregular local deformations caused by the fluctuation of the terrain. For each triangular facet of the TIN, affine transformations are estimated and applied for rectification. Experiments with Quickbird, SPOT5, SPOT4, TM remote-sensing images of the Hangzhou area in China demonstrate the efficiency and the accuracy of the proposed technique for multi-source remote-sensing image registration.
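The coarse pre-registration step above fits an affine transformation model to matched point pairs. A minimal least-squares affine fit on synthetic matches is sketched below; in the paper the point pairs come from SIFT matching, whereas here they are generated from a known transform so the fit can be checked.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform from matched point pairs.

    src, dst: (n, 2) arrays of corresponding points. Returns a (3, 2)
    matrix M so that [x, y, 1] @ M approximates (x', y')."""
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])     # homogeneous coordinates
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

# Synthetic matches generated by a known affine (rotation + scale + shift);
# real pipelines would add outlier rejection (e.g. RANSAC) before fitting
rng = np.random.default_rng(0)
src = rng.random((20, 2)) * 100
theta, s, t = np.deg2rad(10), 1.2, np.array([5.0, -3.0])
R = s * np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
dst = src @ R.T + t
M = fit_affine(src, dst)
pred = np.hstack([src, np.ones((20, 1))]) @ M
```

After this coarse alignment, the fine-scale stage in the paper switches to piecewise linear (TIN-based) transforms, precisely because one global affine cannot capture terrain-induced local deformations.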
Xia, Yong; Eberl, Stefan; Wen, Lingfeng; Fulham, Michael; Feng, David Dagan
2012-01-01
Dual medical imaging modalities, such as PET-CT, are now a routine component of clinical practice. Medical image segmentation methods, however, have generally only been applied to single modality images. In this paper, we propose the dual-modality image segmentation model to segment brain PET-CT images into gray matter, white matter and cerebrospinal fluid. This model converts PET-CT image segmentation into an optimization process controlled simultaneously by PET and CT voxel values and spatial constraints. It is innovative in the creation and application of the modality discriminatory power (MDP) coefficient as a weighting scheme to adaptively combine the functional (PET) and anatomical (CT) information on a voxel-by-voxel basis. Our approach relies upon allowing the modality with higher discriminatory power to play a more important role in the segmentation process. We compared the proposed approach to three other image segmentation strategies, including PET-only based segmentation, combination of the results of independent PET image segmentation and CT image segmentation, and simultaneous segmentation of joint PET and CT images without an adaptive weighting scheme. Our results in 21 clinical studies showed that our approach provides the most accurate and reliable segmentation for brain PET-CT images. Copyright © 2011 Elsevier Ltd. All rights reserved.
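The key idea above is a voxel-wise weight that lets the more discriminative modality dominate the segmentation. The sketch below illustrates one such weighted combination on synthetic 1D voxel values; the paper's actual MDP coefficient is defined there, and the contrast-based weight, cluster centers, and data used here are hypothetical stand-ins.

```python
import numpy as np

# Synthetic normalized voxel values for the two modalities
rng = np.random.default_rng(0)
pet = rng.random(100)                 # functional (PET) values
ct = rng.random(100)                  # anatomical (CT) values

# Assumed class centers for three tissue classes in each modality
centers_pet = np.array([0.2, 0.5, 0.8])
centers_ct = np.array([0.3, 0.5, 0.7])

# Per-voxel distance of each modality's value to each class center
d_pet = np.abs(pet[:, None] - centers_pet[None, :])
d_ct = np.abs(ct[:, None] - centers_ct[None, :])

# Weight the modality that separates the classes better at this voxel,
# proxied here by the spread of its distances to the class centers
# (a stand-in for the paper's MDP coefficient)
w = d_pet.std(axis=1) / (d_pet.std(axis=1) + d_ct.std(axis=1) + 1e-12)
d = w[:, None] * d_pet + (1.0 - w)[:, None] * d_ct
labels = d.argmin(axis=1)             # assign each voxel to the nearest class
```

The point of the adaptive weight w is that a voxel where PET contrast is ambiguous falls back on CT (and vice versa), instead of mixing both modalities with a fixed global ratio.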
Hybrid cardiac imaging with MR-CAT scan: a feasibility study.
Hillenbrand, C; Sandstede, J; Pabst, T; Hahn, D; Haase, A; Jakob, P M
2000-06-01
We demonstrate the feasibility of a new versatile hybrid imaging concept, the combined acquisition technique (CAT), for cardiac imaging. The cardiac CAT approach, which combines new methodology with existing technology, essentially integrates fast low-angle shot (FLASH) and echoplanar imaging (EPI) modules in a sequential fashion, whereby each acquisition module is employed with independently optimized imaging parameters. One important CAT sequence optimization feature is the ability to use different bandwidths for different acquisition modules. Twelve healthy subjects were imaged using three cardiac CAT acquisition strategies: a) CAT was used to reduce breath-hold duration times while maintaining constant spatial resolution; b) CAT was used to increase spatial resolution in a given breath-hold time; and c) single-heart beat CAT imaging was performed. The results obtained demonstrate the feasibility of cardiac imaging using the CAT approach and the potential of this technique to accelerate the imaging process with almost conserved image quality. Copyright 2000 Wiley-Liss, Inc.
iMOSFLM: a new graphical interface for diffraction-image processing with MOSFLM
Battye, T. Geoff G.; Kontogiannis, Luke; Johnson, Owen; Powell, Harold R.; Leslie, Andrew G. W.
2011-01-01
iMOSFLM is a graphical user interface to the diffraction data-integration program MOSFLM. It is designed to simplify data processing by dividing the process into a series of steps, which are normally carried out sequentially. Each step has its own display pane, allowing control over parameters that influence that step and providing graphical feedback to the user. Suitable values for integration parameters are set automatically, but additional menus provide a detailed level of control for experienced users. The image display and the interfaces to the different tasks (indexing, strategy calculation, cell refinement, integration and history) are described. The most important parameters for each step and the best way of assessing success or failure are discussed. PMID:21460445
How musical expertise shapes speech perception: evidence from auditory classification images.
Varnet, Léo; Wang, Tianyun; Peter, Chloe; Meunier, Fanny; Hoen, Michel
2015-09-24
It is now well established that extensive musical training percolates to higher levels of cognition, such as speech processing. However, the lack of a precise technique to investigate the specific listening strategy involved in speech comprehension has made it difficult to determine how musicians' higher performance in non-speech tasks contributes to their enhanced speech comprehension. The recently developed Auditory Classification Image approach reveals the precise time-frequency regions used by participants when performing phonemic categorizations in noise. Here we used this technique on 19 non-musicians and 19 professional musicians. We found that both groups used very similar listening strategies, but the musicians relied more heavily on the two main acoustic cues: the first formant onset and the onsets of the second and third formants. Additionally, they responded more consistently to stimuli. These observations provide a direct visualization of auditory plasticity resulting from extensive musical training and shed light on the level of functional transfer between auditory processing and speech perception.
Cocrystals Strategy towards Materials for Near-Infrared Photothermal Conversion and Imaging.
Wang, Yu; Zhu, Weigang; Du, Wenna; Liu, Xinfeng; Zhang, Xiaotao; Dong, Huanli; Hu, Wenping
2018-04-03
A cocrystal strategy with a simple preparation process is developed to prepare novel materials for near-infrared (NIR) photothermal (PT) conversion and imaging. DBTTF and TCNB are selected as electron donor (D) and electron acceptor (A) to self-assemble into new cocrystals through non-covalent interactions. The strong D-A interaction leads to a narrow band gap with NIR absorption, and both the ground state and the lowest-lying excited state are charge-transfer states. Under NIR laser illumination, the temperature of the cocrystal increases sharply in a short time with high PT conversion efficiency (η=18.8 %), which is due to active non-radiative pathways and inhibition of the radiative transition process, as revealed by femtosecond transient absorption spectroscopy. This is the first PT conversion cocrystal, which not only provides insights for the development of novel PT materials, but also paves the way to designing functional materials with appealing applications. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Graves, William W.; Binder, Jeffrey R.; Desai, Rutvik H.; Humphries, Colin; Stengel, Benjamin C.; Seidenberg, Mark S.
2014-01-01
Are there multiple ways to be a skilled reader? To address this longstanding, unresolved question, we hypothesized that individual variability in using semantic information in reading aloud would be associated with neuroanatomical variation in pathways linking semantics and phonology. Left-hemisphere regions of interest for diffusion tensor imaging analysis were defined based on fMRI results, including two regions linked with semantic processing – angular gyrus (AG) and inferior temporal sulcus (ITS) – and two linked with phonological processing – posterior superior temporal gyrus (pSTG) and posterior middle temporal gyrus (pMTG). Effects of imageability (a semantic measure) on response times varied widely among individuals and covaried with the volume of pathways through the ITS and pMTG, and through AG and pSTG, partially overlapping the inferior longitudinal fasciculus and the posterior branch of the arcuate fasciculus. These results suggest strategy differences among skilled readers associated with structural variation in the neural reading network. PMID:24735993
NASA Astrophysics Data System (ADS)
Utomo, Edy Setiyo; Juniati, Dwi; Siswono, Tatag Yuli Eko
2017-08-01
The aim of this research was to describe the mathematical visualization process of junior high school students in solving contextual problems based on cognitive style. The mathematical visualization process was examined through the aspects of image generation, image inspection, image scanning, and image transformation. The subjects were eighth-grade students categorized by the GEFT (Group Embedded Figures Test), adapted from Witkin, as having a field-independent or field-dependent cognitive style, and who were communicative. Data were collected through a visualization test on a contextual problem and interviews, with validity established through time triangulation. The data analysis addressed the aspects of mathematical visualization through the steps of categorization, reduction, discussion, and conclusion. The results showed that the field-independent and field-dependent subjects differed in responding to contextual problems. The field-independent subject presented representations in both 2D and 3D, while the field-dependent subject presented only 3D. The two subjects also perceived the swimming pool differently: the field-independent subject viewed it from the top, the field-dependent subject from the side. The field-independent subject chose a partition-object strategy, while the field-dependent subject chose a general-object strategy. Both subjects performed a transformation, rotating the object to obtain the solution. This research serves as a reference for mathematical curriculum developers of junior high schools in Indonesia. In addition, teachers could develop students' mathematical visualization by using technology media or software, such as GeoGebra or portable Cabri, in learning.
Molecular imaging promotes progress in orthopedic research.
Mayer-Kuckuk, Philipp; Boskey, Adele L
2006-11-01
Modern orthopedic research is directed towards the understanding of molecular mechanisms that determine development, maintenance and health of musculoskeletal tissues. In recent years, many genetic and proteomic discoveries have been made which necessitate investigation under physiological conditions in intact, living tissues. Molecular imaging can meet this demand and is, in fact, the only strategy currently available for noninvasive, quantitative, real-time biology studies in living subjects. In this review, techniques of molecular imaging are summarized, and applications to bone and joint biology are presented. The imaging modality most frequently used in the past was optical imaging, particularly bioluminescence and near-infrared fluorescence imaging. Alternate technologies including nuclear and magnetic resonance imaging were also employed. Orthopedic researchers have applied molecular imaging to murine models including transgenic mice to monitor gene expression, protein degradation, cell migration and cell death. Within the bone compartment, osteoblasts and their stem cells have been investigated, and the organic and mineral bone phases have been assessed. These studies addressed malignancy and injury as well as repair, including fracture healing and cell/gene therapy for skeletal defects. In the joints, molecular imaging has focused on the inflammatory and tissue destructive processes that cause arthritis. As described in this review, the feasibility of applying molecular imaging to numerous areas of orthopedic research has been demonstrated and will likely result in an increase in research dedicated to this powerful strategy. Molecular imaging holds great promise in the future for preclinical orthopedic research as well as next-generation clinical musculoskeletal diagnostics.
Industrial applications of automated X-ray inspection
NASA Astrophysics Data System (ADS)
Shashishekhar, N.
2015-03-01
Many industries require that 100% of manufactured parts be X-ray inspected. Factors such as high production rates, focus on inspection quality, operator fatigue and inspection cost reduction translate to an increasing need for automating the inspection process. Automated X-ray inspection involves the use of image processing algorithms and computer software for analysis and interpretation of X-ray images. This paper presents industrial applications and illustrative case studies of automated X-ray inspection in areas such as automotive castings, fuel plates, air-bag inflators and tires. It is usually necessary to employ application-specific automated inspection strategies and techniques, since each application has unique characteristics and interpretation requirements.
Strategic questions for consumer-based health communications.
Sutton, S M; Balch, G I; Lefebvre, R C
1995-01-01
Using the consumer-oriented approach of social and commercial marketers, this article presents a process for crafting messages designed to improve people's health behaviors. The process, termed consumer-based health communications (CHC), transforms scientific recommendations into message strategies that are relevant to the consumer. The core of CHC is consumer research conducted to understand the consumer's reality, and thereby allowing six strategic questions to be answered. The immediate result of the CHC process is a strategy statement--a few pages that lay out who the target consumer is, what action should be taken, what to promise and how to make the promise credible, how and when to reach him or her, and what image to convey. The strategy statement then guides the execution of all communication efforts, be they public relations, mass media, direct marketing, media advocacy, or interpersonal influence. It identifies the most important "levers" for contact with the consumer. Everyone from creative specialists through management and program personnel can use the strategy statement as a touchstone to guide and judge the effectiveness of their efforts. The article provides a step by step illustration of the CHC process using the 5 A Day campaign as an example. PMID:8570827
Evaluation of an Area-Based matching algorithm with advanced shape models
NASA Astrophysics Data System (ADS)
Re, C.; Roncella, R.; Forlani, G.; Cremonese, G.; Naletto, G.
2014-04-01
Nowadays, the scientific institutions involved in planetary mapping are working on new strategies to produce accurate high-resolution DTMs from space images at planetary scale, usually dealing with extremely large data volumes. From a methodological point of view, despite the introduction of a series of new image matching algorithms (e.g. Semi-Global Matching) that yield superior results (especially because they usually produce smooth and continuous surfaces) with lower processing times, the preference in this field still goes to well-established area-based matching techniques. Many efforts are consequently directed at improving each phase of the photogrammetric process, from image pre-processing to DTM interpolation. In this context, the Dense Matcher software (DM) developed at the University of Parma has recently been optimized to cope with the very high resolution images provided by the most recent missions (LROC NAC and HiRISE), focusing mainly on improving the correlation phase and automating the process. Important changes have been made to the correlation algorithm, while maintaining its high performance in terms of precision and accuracy, by implementing an advanced version of the Least Squares Matching (LSM) algorithm. In particular, an iterative algorithm has been developed to adapt the geometric transformation in image resampling, using different shape functions as originally proposed by other authors in different applications.
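The core of Least Squares Matching — iteratively refining a geometric transformation so a resampled template matches a target patch — can be shown in one dimension. This toy (our illustration, not DM code) estimates only a shift by Gauss-Newton iteration; the full method also estimates affine shape and radiometric parameters:

```python
import numpy as np

def lsm_shift(f, g, n_iter=10):
    """Estimate the shift s such that g(x) ~= f(x + s), a 1-D toy of
    Least Squares Matching solved by Gauss-Newton iteration."""
    x = np.arange(len(f), dtype=float)
    s = 0.0
    for _ in range(n_iter):
        fs = np.interp(x + s, x, f)          # template resampled at x + s
        dfs = np.gradient(fs)                # sensitivity of fs to s
        r = g - fs                           # residual image
        s += (dfs @ r) / (dfs @ dfs)         # Gauss-Newton update
    return s

# Synthetic check: a Gaussian blob displaced by a known subpixel amount.
x = np.arange(100, dtype=float)
f = np.exp(-0.5 * ((x - 50) / 6.0) ** 2)
true_shift = 1.7
g = np.exp(-0.5 * ((x + true_shift - 50) / 6.0) ** 2)   # g(x) = f(x + 1.7)
s_est = lsm_shift(f, g)
```

Subpixel accuracy falls out of the linearized update, which is why area-based LSM remains attractive for precise DTM generation despite newer matchers.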
NASA Astrophysics Data System (ADS)
Akil, Mohamed
2017-05-01
Real-time processing is becoming more and more important in many image processing applications. Image segmentation is one of the most fundamental tasks in image analysis; as a consequence, many different approaches for image segmentation have been proposed. The watershed transform is a well-known image segmentation tool, but it is also a very data-intensive task. To accelerate watershed algorithms and achieve real-time processing, parallel architectures and programming models for multicore computing have been developed. This paper surveys approaches for the parallel implementation of sequential watershed algorithms on multicore general-purpose CPUs: homogeneous multicore processors with shared memory. To achieve an efficient parallel implementation, it is necessary to explore different strategies (parallelization, distribution, distributed scheduling) combined with different acceleration and optimization techniques to enhance parallelism. In this paper, we compare various parallelizations of sequential watershed algorithms on shared-memory multicore architectures. We analyze the performance measurements of each parallel implementation and the impact of the different sources of overhead on their performance. In this comparison study, we also discuss the advantages and disadvantages of the parallel programming models, comparing OpenMP (an application programming interface for multiprocessing) with Pthreads (POSIX Threads) to illustrate the impact of each parallel programming model on the performance of the parallel implementations.
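The implementations surveyed here are written in C with OpenMP or Pthreads, but the underlying domain-decomposition strategy can be sketched language-neutrally. This Python sketch (ours, not code from the survey) splits an image into horizontal bands with one-row ghost halos — so a stencil kernel reproduces the serial result at band borders — and runs the bands on a shared-memory thread pool; the per-band kernel is a gradient magnitude standing in for a watershed step:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def grad_mag(tile):
    # Stand-in stencil kernel (gradient magnitude), not an actual watershed step.
    gy, gx = np.gradient(tile.astype(float))
    return np.hypot(gx, gy)

def process_in_bands(image, n_workers=2):
    """Shared-memory domain decomposition: horizontal bands + one-row halos."""
    h = image.shape[0]
    edges = np.linspace(0, h, n_workers + 1).astype(int)
    out = np.empty(image.shape, dtype=float)
    def work(i):
        lo, hi = edges[i], edges[i + 1]
        lo_h, hi_h = max(lo - 1, 0), min(hi + 1, h)   # ghost rows
        res = grad_mag(image[lo_h:hi_h])
        # Keep only the rows this band owns; discard the halo rows.
        out[lo:hi] = res[lo - lo_h : lo - lo_h + (hi - lo)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        list(pool.map(work, range(n_workers)))
    return out

img = (np.arange(256, dtype=float).reshape(16, 16)) ** 1.5
parallel = process_in_bands(img, n_workers=4)
serial = grad_mag(img)
```

The halo exchange shown here is exactly the border-handling overhead the survey's implementations must manage; a true parallel watershed additionally needs cross-band label merging, which is where the distributed-scheduling strategies differ.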
Adaptive windowing in contrast-enhanced intravascular ultrasound imaging.
Lindsey, Brooks D; Martin, K Heath; Jiang, Xiaoning; Dayton, Paul A
2016-08-01
Intravascular ultrasound (IVUS) is one of the most commonly used interventional imaging techniques and has seen recent innovations that attempt to characterize the risk posed by atherosclerotic plaques. One such development is the use of microbubble contrast agents to image the vasa vasorum, fine vessels that supply oxygen and nutrients to the walls of coronary arteries and typically have diameters of less than 200 μm. The degree of vasa vasorum neovascularization within plaques is positively correlated with plaque vulnerability. Having recently presented a prototype dual-frequency transducer for contrast agent-specific intravascular imaging, here we describe signal processing approaches based on minimum variance (MV) beamforming and the phase coherence factor (PCF) for improving the spatial resolution and contrast-to-tissue ratio (CTR) in IVUS imaging. These approaches are examined through simulations, phantom studies, ex vivo studies in porcine arteries, and in vivo studies in chicken embryos. In phantom studies, PCF processing improved CTR by a mean of 4.2 dB, while combined MV and PCF processing improved spatial resolution by 41.7%. Improvements of 2.2 dB in CTR and 37.2% in resolution were observed in vivo. Applying these processing strategies can enhance image quality in conventional B-mode IVUS or in contrast-enhanced IVUS, where the signal-to-noise ratio is relatively low and resolution is at a premium. Copyright © 2016 Elsevier B.V. All rights reserved.
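The phase coherence factor can be sketched compactly. The following is a minimal illustration in the spirit of commonly published PCF definitions (the exact weighting used in this paper may differ): the spread of delay-aligned channel phases is compared with the spread of uniformly random phases, and the result weights the beamsum toward coherent echoes:

```python
import numpy as np

def phase_coherence_factor(channel_data, gamma=1.0):
    """Per-sample phase coherence factor across array channels.

    channel_data : complex array, shape (n_channels, n_samples), already
        delay-aligned for the focal point. Coherent echoes have tightly
        clustered phases (PCF near 1); incoherent clutter has widely
        spread phases (PCF driven toward 0).
    """
    phases = np.angle(channel_data)
    sigma = np.std(phases, axis=0)          # phase spread per sample
    sigma0 = np.pi / np.sqrt(3.0)           # std of uniformly random phases
    return np.clip(1.0 - gamma * sigma / sigma0, 0.0, 1.0)

# Toy comparison: a coherent echo vs. diffuse clutter across 32 channels.
rng = np.random.default_rng(1)
n_ch = 32
coherent = np.exp(1j * rng.normal(0.0, 0.05, size=(n_ch, 1)))
incoherent = np.exp(1j * rng.uniform(-np.pi, np.pi, size=(n_ch, 1)))
pcf_c = phase_coherence_factor(coherent)[0]
pcf_i = phase_coherence_factor(incoherent)[0]
```

Multiplying the delay-and-sum output by this factor suppresses samples dominated by clutter, which is how PCF processing raises CTR. A production implementation would also handle phase wrap-around, which this sketch ignores.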
Accelerated GPU based SPECT Monte Carlo simulations.
Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris
2016-06-07
Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: (99m)Tc, (111)In and (131)I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Moreover, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor of up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving the computational efficiency of SPECT imaging simulations.
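At the heart of any such MC code — GATE and GGEMS included — is sampling photon interaction depths from the exponential attenuation law, which is also the kind of embarrassingly parallel kernel that maps well to GPUs. A minimal illustration (not GATE/GGEMS code; the attenuation coefficient is an illustrative round number for water near 140 keV):

```python
import numpy as np

def sample_interaction_depths(mu, n, rng):
    """Sample photon free path lengths (cm) for linear attenuation
    coefficient mu (1/cm) by inverse-transform sampling of the
    exponential law p(d) = mu * exp(-mu * d)."""
    return -np.log(rng.random(n)) / mu

rng = np.random.default_rng(3)
mu = 0.15                              # illustrative value, ~water at 140 keV
depths = sample_interaction_depths(mu, 200_000, rng)
mean_free_path = depths.mean()         # expect 1/mu ~ 6.67 cm
frac_through_10cm = (depths > 10.0).mean()   # expect exp(-1.5) ~ 0.223
```

Real codes follow each sampled interaction with cross-section-weighted physics (photoelectric absorption, Compton scatter), but the inverse-transform step above is the statistical engine underneath.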
Strategy of restraining ripple error on surface for optical fabrication.
Wang, Tan; Cheng, Haobo; Feng, Yunpeng; Tam, Honyuen
2014-09-10
The influence of ripple error on imaging quality can be effectively reduced by restraining the ripple height. In this paper, a method based on the process parameters and the surface error distribution is designed to suppress the ripple height. The generating mechanism of the ripple error is analyzed using polishing theory with a uniform removal character. The relation between the processing parameters (removal function, pitch of the path, and dwell time) and the ripple error is examined through simulations. On this basis, a strategy for diminishing the error is presented. A final process is designed and demonstrated on K9 workpieces using the optimized strategy with magnetorheological jet polishing. The form error on the surface is decreased from 0.216λ PV (λ=632.8 nm) and 0.039λ RMS to 0.03λ PV and 0.004λ RMS, and the ripple error is restrained well at the same time, with a ripple height of less than 6 nm on the final surface. The results indicate that these strategies are suitable for high-precision optical manufacturing.
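The pitch-ripple relation at the core of this strategy can be illustrated by superposing a removal footprint along parallel passes. Assuming a Gaussian removal function and uniform dwell time (our simplifications, not the paper's exact model), the residual peak-to-valley ripple collapses rapidly as the pitch shrinks relative to the footprint width:

```python
import numpy as np

def ripple_height(pitch, sigma=1.0, half_width=20.0, dx=0.01):
    """Peak-to-valley ripple left by parallel polishing passes.

    A Gaussian removal footprint of width sigma is repeated every `pitch`
    units across the surface; the ripple is the PV variation of the summed
    removal, measured away from the edges of the raster.
    """
    x = np.arange(-half_width, half_width, dx)
    centers = np.arange(-half_width, half_width + pitch, pitch)
    removal = np.zeros_like(x)
    for c in centers:
        removal += np.exp(-0.5 * ((x - c) / sigma) ** 2)
    mid = np.abs(x) < half_width / 2       # ignore edge roll-off
    return removal[mid].max() - removal[mid].min()

coarse = ripple_height(pitch=2.0)   # pitch = 2 sigma: visible ripple
fine = ripple_height(pitch=0.5)     # pitch = sigma / 2: ripple vanishes
```

Halving the pitch costs processing time, which is why the paper balances pitch against removal-function size and dwell time rather than simply making the raster as dense as possible.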
Dim target detection method based on salient graph fusion
NASA Astrophysics Data System (ADS)
Hu, Ruo-lan; Shen, Yi-yan; Jiang, Jun
2018-02-01
Dim target detection is a key problem in the digital image processing field. With the development of multi-spectrum imaging sensors, it has become a trend to improve the performance of dim target detection by fusing information from different spectral images. In this paper, a dim target detection method based on salient graph fusion is proposed. In this method, multi-direction Gabor filters and multi-scale contrast filters are combined to construct a salient graph from a digital image. A maximum-salience fusion strategy is then designed to fuse the salient graphs from the different spectral images, and a top-hat filter is used to detect dim targets in the fused salient graph. Experimental results show that the proposed method improves the probability of target detection and reduces the probability of false alarm on clutter background images.
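The fusion and detection steps (leaving aside the Gabor and contrast filter banks) can be sketched as follows. This is an illustrative NumPy implementation, not the paper's code: per-pixel maximum fusion of two saliency maps, then a white top-hat (image minus its grey-scale opening) to isolate small bright targets, with the morphology done by plain sliding-window minima and maxima:

```python
import numpy as np

def grey_erode(img, k):
    """Grey-scale erosion with a k x k flat structuring element (edge-padded)."""
    p = k // 2
    padded = np.pad(img, p, mode='edge')
    out = np.full(img.shape, np.inf)
    for dy in range(k):
        for dx in range(k):
            out = np.minimum(out, padded[dy:dy + img.shape[0],
                                         dx:dx + img.shape[1]])
    return out

def grey_dilate(img, k):
    """Grey-scale dilation, the dual of grey_erode."""
    p = k // 2
    padded = np.pad(img, p, mode='edge')
    out = np.full(img.shape, -np.inf)
    for dy in range(k):
        for dx in range(k):
            out = np.maximum(out, padded[dy:dy + img.shape[0],
                                         dx:dx + img.shape[1]])
    return out

def detect_dim_targets(saliency_a, saliency_b, k=5, thresh=0.5):
    fused = np.maximum(saliency_a, saliency_b)              # max-salience fusion
    tophat = fused - grey_dilate(grey_erode(fused, k), k)   # white top-hat
    return tophat > thresh * tophat.max()

# Toy scene: smooth background plus one dim point target per spectral band.
y, x = np.mgrid[0:32, 0:32]
background = 0.3 + 0.2 * np.sin(x / 8.0)
band1 = background.copy(); band1[10, 10] += 0.4   # target visible in band 1
band2 = background.copy(); band2[20, 25] += 0.4   # target visible in band 2
mask = detect_dim_targets(band1, band2)
```

The top-hat removes the smooth clutter that survives fusion, so both targets are recovered even though each appears in only one band.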
Geometric registration of remotely sensed data with SAMIR
NASA Astrophysics Data System (ADS)
Gianinetto, Marco; Barazzetti, Luigi; Dini, Luigi; Fusiello, Andrea; Toldo, Roberto
2015-06-01
The commercial market offers several software packages for the registration of remotely sensed data through standard one-to-one image matching. Although very rapid and simple, this strategy does not take into consideration all the interconnections among the images of a multi-temporal data set. This paper presents a new scientific software package, called Satellite Automatic Multi-Image Registration (SAMIR), able to extend the traditional registration approach towards multi-image global processing. Tests carried out with high-resolution optical (IKONOS) and high-resolution radar (COSMO-SkyMed) data showed that SAMIR can improve the registration phase with a more rigorous and robust workflow, without initial approximations, user interaction, or limitations in spatial/spectral data size. The validation highlighted sub-pixel accuracy in image co-registration for the considered imaging technologies, including optical and radar imagery.
NASA Astrophysics Data System (ADS)
Zamora Ramos, Ernesto
Artificial intelligence is a big part of automation and, with today's technological advances, has taken great strides towards positioning itself as the technology of the future to control, enhance and perfect automation. Computer vision includes pattern recognition, classification and machine learning, and is at the core of decision making; it is a vast and fruitful branch of artificial intelligence. In this work, we expose novel algorithms and techniques built upon existing technologies to improve pattern recognition and neural network training, initially motivated by a multidisciplinary effort to build a robot that helps maintain and optimize solar panel energy production. Our contributions detail an improved non-linear pre-processing technique to enhance poorly illuminated images, based on modifications to standard histogram equalization. While the original motivation was to improve nocturnal navigation, the results have applications in surveillance, search and rescue, medical image enhancement, and many others. We created a vision system for precise camera distance positioning, motivated by the need to correctly locate the robot when capturing solar panel images for classification. The classification algorithm marks solar panels as clean or dirty for later processing. Our algorithm extends beyond image classification and, based on historical and experimental data, identifies the optimal moment at which to perform maintenance on marked solar panels so as to minimize energy and profit loss. To improve upon the classification algorithm, we delved into feedforward neural networks because of their recent advancements, proven universal approximation and classification capabilities, and excellent recognition rates. We explore state-of-the-art neural network training techniques, offering pointers and insights, culminating in the implementation of a complete library with support for modern deep learning architectures, multilayer perceptrons and convolutional neural networks. Our research with neural networks encountered considerable difficulty regarding hyperparameter estimation for good training convergence rate and accuracy. Most hyperparameters, including architecture, learning rate, regularization, trainable parameter (weight) initialization, and so on, are chosen via a trial-and-error process with some educated guesses. However, we developed the first quantitative method to compare weight initialization strategies, a critical hyperparameter choice during training, to estimate which of a group of candidate strategies would make the network converge to the highest classification accuracy fastest with high probability. Our method provides a quick, objective measure to compare initialization strategies and select the best among them beforehand, without having to complete multiple training sessions for each candidate strategy to compare final results.
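This is not the dissertation's quantitative method, but a common diagnostic in the same spirit: push random data through a deep ReLU network and track per-layer activation statistics under candidate initializations. A well-scaled initialization (e.g. He) keeps activations alive through depth, while a badly scaled one drives them toward zero, predicting poor convergence before any training is run:

```python
import numpy as np

def activation_stats(init_std_fn, n_layers=20, width=256, seed=0):
    """Push random data through a ReLU MLP; record per-layer activation std.

    init_std_fn(fan_in) returns the standard deviation used to draw that
    layer's weights.
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(128, width))
    stds = []
    for _ in range(n_layers):
        W = rng.normal(0.0, init_std_fn(width), size=(width, width))
        x = np.maximum(x @ W, 0.0)      # ReLU activation
        stds.append(x.std())
    return np.array(stds)

he = activation_stats(lambda fan_in: np.sqrt(2.0 / fan_in))   # He init
naive = activation_stats(lambda fan_in: 0.01)                 # fixed small std
```

Here `he[-1]` remains of order one while `naive[-1]` has collapsed by many orders of magnitude; a comparison like this gives an objective pre-training signal without full training runs, which is the kind of saving the dissertation's method targets.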
Liechty, Janet M; Clarke, Samantha; Birky, Julie P; Harrison, Kristen
2016-12-01
This study sought to explore parental perceptions of body image in preschoolers. We conducted semi-structured interviews with 30 primary caregivers of preschoolers to examine knowledge, beliefs, and strategies regarding early body image socialization in families. Thematic analysis yielded three themes highlighting knowledge gaps, belief discrepancies, and limited awareness of strategies. Findings regarding knowledge: most participants (53%) defined body image as objective attractiveness rather than subjective self-assessment, and focused on negative body image. Beliefs: although 97% of participants believed weight and shape impact children's self-esteem, 63% believed preschoolers were too young to have a body image. Strategies: most participants (53%) said family was a primary influence on body image, but identified few effective strategies, and 63% said they did not do anything to influence children's body image. The findings suggest that family body image socialization of preschoolers is occurring outside the awareness of parents and that the concept of positive body image is underdeveloped. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Gianoli, Chiara; Kurz, Christopher; Riboldi, Marco; Bauer, Julia; Fontana, Giulia; Baroni, Guido; Debus, Jürgen; Parodi, Katia
2016-06-01
A clinical trial named PROMETHEUS is currently ongoing for inoperable hepatocellular carcinoma (HCC) at the Heidelberg Ion Beam Therapy Center (HIT, Germany). In this framework, 4D PET-CT datasets are acquired shortly after the therapeutic treatment to compare the irradiation-induced PET image with a Monte Carlo PET prediction resulting from the simulation of treatment delivery. The extremely low count statistics of this measured PET image represents a major limitation of the technique, especially in the presence of target motion. The purpose of this study is to investigate two different 4D PET motion compensation strategies towards the recovery of the whole count statistics for improved image quality of the 4D PET-CT datasets for PET-based treatment verification. The well-known 4D-MLEM reconstruction algorithm, which embeds the motion compensation in the reconstruction process of 4D PET sinograms, was compared to a recently proposed pre-reconstruction motion compensation strategy, which operates in the sinogram domain by applying the motion compensation to the 4D PET sinograms. With reference to phantom and patient datasets, advantages and drawbacks of the two 4D PET motion compensation strategies were identified. The 4D-MLEM algorithm was strongly affected by inverse inconsistency of the motion model but demonstrated the capability to mitigate noise-break-up effects. Conversely, the pre-reconstruction warping showed less sensitivity to inverse inconsistency but also more noise in the reconstructed images. The comparison was performed by relying on quantification of PET activity and ion range difference, typically yielding similar results. The study demonstrated that treatment verification of moving targets can be accomplished by relying on the whole-count-statistics image quality obtained from the application of 4D PET motion compensation strategies. In particular, the pre-reconstruction warping was shown to represent a promising choice when combined with intra-reconstruction smoothing.
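The pre-reconstruction idea can be illustrated with a toy in the image (rather than sinogram) domain: each low-count gate is warped back to a reference phase by its known displacement before summation, recovering the full count statistics without motion blur. Integer-pixel rigid shifts are assumed purely for illustration; the real strategy warps sinograms with a full motion model:

```python
import numpy as np

def motion_compensated_sum(gates, shifts):
    """Shift each low-count gate back to the reference position and sum.

    gates  : list of 2-D arrays, one image per motion phase.
    shifts : list of (dy, dx) integer displacements of each gate relative
             to the reference phase.
    """
    ref = np.zeros_like(gates[0], dtype=float)
    for g, (dy, dx) in zip(gates, shifts):
        ref += np.roll(g, shift=(-dy, -dx), axis=(0, 1))
    return ref

# Toy phantom: a bright spot that moves 2 pixels per gate.
base = np.zeros((16, 16)); base[8, 8] = 100.0
shifts = [(0, 0), (2, 0), (4, 0)]
gates = [np.roll(base, s, axis=(0, 1)) for s in shifts]
naive = sum(gates)                       # uncompensated: smeared over 3 spots
comp = motion_compensated_sum(gates, shifts)
```

The naive sum spreads the counts across three positions, while the compensated sum concentrates all counts in one spot — the "whole count statistics" the paper aims to recover. The inverse-consistency issue discussed above arises when the forward and backward warps of a non-rigid motion model do not cancel as cleanly as these integer shifts do.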
Yin, X-X; Zhang, Y; Cao, J; Wu, J-L; Hadjiloucas, S
2016-12-01
We provide a comprehensive account of recent advances in biomedical image analysis and classification from two complementary imaging modalities: terahertz (THz) pulse imaging and dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The work aims to highlight underlying commonalities in both data structures so that a common multi-channel data fusion framework can be developed. Signal pre-processing in both datasets is discussed briefly, taking into consideration advances in multi-resolution analysis and model-based fractional-order calculus system identification. Developments in statistical signal processing using principal component and independent component analysis are also considered. These algorithms have been developed independently by the THz-pulse imaging and DCE-MRI communities, and there is scope to place them in a common multi-channel framework to provide better software standardization at the pre-processing de-noising stage. A comprehensive discussion of feature selection strategies is also provided, and the importance of preserving textural information is highlighted. Feature extraction and classification methods are presented, taking into consideration recent advances in support vector machine (SVM) and extreme learning machine (ELM) classifiers and their complex extensions. An outlook on Clifford algebra classifiers and deep learning techniques suitable for both types of datasets is also provided. The work points toward the development of a new unified multi-channel signal processing framework for biomedical image analysis that will explore synergies from both sensing modalities for inferring disease proliferation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
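The principal component step shared by both communities can be sketched as truncated-SVD denoising of multichannel data (an illustrative sketch, not either community's production pipeline):

```python
import numpy as np

def pca_denoise(X, n_components):
    """Project multichannel data onto its leading principal components.

    X : (n_samples, n_channels). Returns the rank-`n_components`
    reconstruction, discarding low-variance directions assumed to be noise.
    """
    mu = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    Xr = U[:, :n_components] * s[:n_components] @ Vt[:n_components]
    return Xr + mu

# Toy data: 8 channels sharing one sinusoidal component, plus noise.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 200)
clean = np.outer(np.sin(2 * np.pi * 5 * t), np.ones(8))
noisy = clean + 0.3 * rng.normal(size=clean.shape)
denoised = pca_denoise(noisy, n_components=1)
err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
```

Because the signal is shared across channels while the noise is not, keeping only the leading component removes most of the noise energy — the same rationale whether the channels are THz time samples or DCE-MRI temporal frames.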
Development of imaging biomarkers and generation of big data.
Alberich-Bayarri, Ángel; Hernández-Navarro, Rafael; Ruiz-Martínez, Enrique; García-Castro, Fabio; García-Juan, David; Martí-Bonmatí, Luis
2017-06-01
Several image processing algorithms have emerged to cover unmet clinical needs, but their application in routine radiology with a clear clinical impact is still not straightforward. Moving from local to large infrastructures, such as Medical Imaging Biobanks (millions of studies) or, even more, Federations of Medical Imaging Biobanks (in some cases totaling hundreds of millions of studies), requires the integration of automated pipelines for fast analysis of pooled data to extract clinically relevant conclusions, not uniquely linked to medical imaging but in combination with other information such as genetic profiling. A general strategy for the development of imaging biomarkers and their integration in the cloud for quantitative management and exploitation in large databases is presented herein. The proposed platform has been successfully launched and is currently being validated among the early adopters' community of radiologists, clinicians, and medical imaging researchers.
Imaging windows for long-term intravital imaging
Alieva, Maria; Ritsma, Laila; Giedt, Randy J; Weissleder, Ralph; van Rheenen, Jacco
2014-01-01
Intravital microscopy is increasingly used to visualize and quantitate dynamic biological processes at the (sub)cellular level in live animals. By visualizing tissues through imaging windows, individual cells (e.g., cancer, host, or stem cells) can be tracked and studied over a time-span of days to months. Several imaging windows have been developed to access tissues including the brain, superficial fascia, mammary glands, liver, kidney, pancreas, and small intestine among others. Here, we review the development of imaging windows and compare the most commonly used long-term imaging windows for cancer biology: the cranial imaging window, the dorsal skin fold chamber, the mammary imaging window, and the abdominal imaging window. Moreover, we provide technical details, considerations, and trouble-shooting tips on the surgical procedures and microscopy setups for each imaging window and explain different strategies to assure imaging of the same area over multiple imaging sessions. This review aims to be a useful resource for establishing the long-term intravital imaging procedure. PMID:28243510
Fernández-Berni, Jorge; Carmona-Galán, Ricardo; del Río, Rocío; Kleihorst, Richard; Philips, Wilfried; Rodríguez-Vázquez, Ángel
2014-01-01
The capture, processing and distribution of visual information is one of the major challenges for the paradigm of the Internet of Things. Privacy emerges as a fundamental barrier to overcome. The idea of networked image sensors pervasively collecting data generates social rejection in the face of sensitive information being tampered by hackers or misused by legitimate users. Power consumption also constitutes a crucial aspect. Images contain a massive amount of data to be processed under strict timing requirements, demanding high-performance vision systems. In this paper, we describe a hardware-based strategy to concurrently address these two key issues. By conveying processing capabilities to the focal plane in addition to sensing, we can implement privacy protection measures just at the point where sensitive data are generated. Furthermore, such measures can be tailored for efficiently reducing the computational load of subsequent processing stages. As a proof of concept, a full-custom QVGA vision sensor chip is presented. It incorporates a mixed-signal focal-plane sensing-processing array providing programmable pixelation of multiple image regions in parallel. In addition to this functionality, the sensor exploits reconfigurability to implement other processing primitives, namely block-wise dynamic range adaptation, integral image computation and multi-resolution filtering. The proposed circuitry is also suitable to build a granular space, becoming the raw material for subsequent feature extraction and recognition of categorized objects. PMID:25195849
Martin, Elizabeth A; Karcher, Nicole R; Bartholow, Bruce D; Siegle, Greg J; Kerns, John G
2017-03-01
Both extreme levels of social anhedonia (SocAnh) and perceptual aberration/magical ideation (PerMag) are associated with risk for schizophrenia-spectrum disorders and with emotional abnormalities. Yet, the nature of any psychophysiologically measured affective abnormality, including the role of automatic/controlled processes, is unclear. We examined the late positive potential (LPP) during passive viewing (to assess automatic processing) and during cognitive reappraisal (to assess controlled processing) in three groups: SocAnh, PerMag, and controls. The SocAnh group exhibited an increased LPP when viewing negative images. Further, SocAnh exhibited greater reductions in the LPP for negative images when told to use strategies to alter negative emotion. Similar to SocAnh, PerMag exhibited an increased LPP when viewing negative images. However, PerMag also exhibited an increased LPP when viewing positive images as well as an atypical decreased LPP when increasing positive emotion. Overall, these results suggest that at-risk groups are associated with shared and unique automatic and controlled abnormalities. Copyright © 2017 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larner, J.
In this interactive session, lung SBRT patient cases will be presented to highlight real-world considerations for ensuring safe and accurate treatment delivery. An expert panel of speakers will discuss challenges specific to lung SBRT including patient selection, patient immobilization techniques, 4D CT simulation and respiratory motion management, target delineation for treatment planning, online treatment alignment, and established prescription regimens and OAR dose limits. Practical examples of cases, including patient flow through the clinical process, are presented, and audience participation will be encouraged. This panel session is designed to provide case demonstration and review for lung SBRT in terms of (1) clinical appropriateness in patient selection, (2) strategies for simulation, including 4D and respiratory motion management, (3) applying multiple imaging modalities (4D CT imaging, MRI, PET) for tumor volume delineation and motion extent, and (4) image guidance in treatment delivery. Learning Objectives: Understand the established requirements for patient selection in lung SBRT. Become familiar with the various immobilization strategies for lung SBRT, including technology for respiratory motion management. Understand the benefits and pitfalls of applying multiple imaging modalities (4D CT imaging, MRI, PET) for tumor volume delineation and motion extent determination for lung SBRT. Understand established prescription regimens and OAR dose limits.
Guyader, Jean-Marie; Bernardin, Livia; Douglas, Naomi H M; Poot, Dirk H J; Niessen, Wiro J; Klein, Stefan
2015-08-01
To evaluate the influence of image registration on apparent diffusion coefficient (ADC) images obtained from abdominal free-breathing diffusion-weighted MR images (DW-MRIs). A comprehensive pipeline based on automatic three-dimensional nonrigid image registrations is developed to compensate for misalignments in DW-MRI datasets obtained from five healthy subjects scanned twice. Motion is corrected both within each image and between images in a time series. ADC distributions are compared with and without registration in two abdominal volumes of interest (VOIs). The effects of interpolations and Gaussian blurring as alternative strategies to reduce motion artifacts are also investigated. Among the four considered scenarios (no processing, interpolation, blurring and registration), registration yields the best alignment scores. Median ADCs vary according to the chosen scenario: for the considered datasets, ADCs obtained without processing are 30% higher than with registration. Registration improves voxelwise reproducibility at least by a factor of 2 and decreases uncertainty (Fréchet-Cramér-Rao lower bound). Registration provides similar improvements in reproducibility and uncertainty as acquiring four times more data. Patient motion during image acquisition leads to misaligned DW-MRIs and inaccurate ADCs, which can be addressed using automatic registration. © 2014 Wiley Periodicals, Inc.
Discriminative dictionary learning for abdominal multi-organ segmentation.
Tong, Tong; Wolz, Robin; Wang, Zehan; Gao, Qinquan; Misawa, Kazunari; Fujiwara, Michitaka; Mori, Kensaku; Hajnal, Joseph V; Rueckert, Daniel
2015-07-01
An automated segmentation method is presented for multi-organ segmentation in abdominal CT images. Dictionary learning and sparse coding techniques are used in the proposed method to generate target-specific priors for segmentation. The method simultaneously learns dictionaries, which have reconstructive power, and classifiers, which have discriminative ability, from a set of selected atlases. Based on the learnt dictionaries and classifiers, probabilistic atlases are then generated to provide priors for the segmentation of unseen target images. The final segmentation is obtained by applying a post-processing step based on a graph-cuts method. In addition, this paper proposes a voxel-wise local atlas selection strategy to deal with high inter-subject variation in abdominal CT images. The segmentation performance of the proposed method with different atlas selection strategies is also compared. Our proposed method has been evaluated on a database of 150 abdominal CT images and achieves a promising segmentation performance with Dice overlap values of 94.9%, 93.6%, 71.1%, and 92.5% for liver, kidneys, pancreas, and spleen, respectively. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
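The sparse coding step used to express a target patch over a learnt dictionary can be sketched with orthogonal matching pursuit. This is a generic OMP illustration under an assumed random dictionary, not the paper's discriminative dictionary learning procedure.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal matching pursuit: greedy sparse coding of signal y
    over dictionary D (columns are unit-norm atoms)."""
    residual = y.copy()
    support = []
    for _ in range(n_nonzero):
        # pick the atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares fit of y on the selected atoms
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
D = rng.standard_normal((20, 30))
D /= np.linalg.norm(D, axis=0)       # unit-norm atoms
x_true = np.zeros(30)
x_true[[3, 17]] = [1.5, -2.0]        # 2-sparse ground truth
y = D @ x_true
x_hat = omp(D, y, n_nonzero=2)
```

In the paper's setting, the sparse code of a target patch over the learnt (reconstructive plus discriminative) dictionary feeds the probabilistic atlas used as a segmentation prior.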
Painter and scribe: From model of mind to cognitive strategy.
MacKisack, Matthew
2017-12-06
Since antiquity, the mind has been conceived to operate via images and words. Pre-scientific thinkers (and some scientific) who presented the mind as operating in such a way tended to i) bias one representational mode over the other, and ii) claim the dominance of the mode to be the case universally. The rise of empirical psychological science in the late 19th century rehearses the word/image division of thought but makes universal statements - e.g., that recollection is a verbal process for everyone - untenable. Since then, the investigation of individual differences and case studies of imagery loss have shown rather that words and images present alternative cognitive "strategies" that individuals will be predisposed to employing - but which, should the necessity arise, can be relearned using the other representational mode. The following sketches out this historical shift in understanding, and concludes by inviting consideration of the wider context in which discussion of the relationships between 'images' and 'words' (as both internal and external forms of representation) must take place. Copyright © 2017 Elsevier Ltd. All rights reserved.
Imaging strategies for the study of gas turbine spark ignition
NASA Astrophysics Data System (ADS)
Gord, James R.; Tyler, Charles; Grinstead, Keith D., Jr.; Fiechtner, Gregory J.; Cochran, Michael J.; Frus, John R.
1999-10-01
Spark-ignition systems play a critical role in the performance of essentially all gas turbine engines. These devices are responsible for initiating the combustion process that sustains engine operation. Demanding applications such as cold start and high-altitude relight require continued enhancement of ignition systems. To characterize advanced ignition systems, we have developed a number of laser-based diagnostic techniques configured for ultrafast imaging of spark parameters including emission, density, temperature, and species concentration. These diagnostics have been designed to exploit an ultrafast-framing charge-coupled-device (CCD) camera and high-repetition-rate laser sources including mode-locked Ti:sapphire oscillators and regenerative amplifiers. Spontaneous-emission and laser-schlieren measurements have been accomplished with this instrumentation and the results applied to the study of a novel Unison Industries spark igniter that shows great promise for improved cold-start and high-altitude-relight capability as compared to that of igniters currently in use throughout military and commercial fleets. Phase-locked and ultrafast real-time imaging strategies are explored, and details of the imaging instrumentation, particularly the CCD camera and laser sources, are discussed.
Noise models for low counting rate coherent diffraction imaging.
Godard, Pierre; Allain, Marc; Chamard, Virginie; Rodenburg, John
2012-11-05
Coherent diffraction imaging (CDI) is a lens-less microscopy method that extracts the complex-valued exit field from intensity measurements alone. It is of particular importance for microscopy imaging with diffraction set-ups where high quality lenses are not available. The inversion scheme allowing the phase retrieval is based on the use of an iterative algorithm. In this work, we address the question of the choice of the iterative process in the case of data corrupted by photon or electron shot noise. Several noise models are presented and further used within two inversion strategies, the ordered subset and the scaled gradient. Based on analytical and numerical analysis together with Monte-Carlo studies, we show that any physical interpretations drawn from a CDI iterative technique require a detailed understanding of the relationship between the noise model and the used inversion method. We observe that iterative algorithms often assume implicitly a noise model. For low counting rates, each noise model behaves differently. Moreover, the used optimization strategy introduces its own artefacts. Based on this analysis, we develop a hybrid strategy which works efficiently in the absence of an informed initial guess. Our work emphasises issues which should be considered carefully when inverting experimental data.
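The dependence of the inversion on the assumed noise model can be made concrete by writing out the negative log-likelihoods that an iterative CDI algorithm implicitly minimizes. This is a minimal sketch of two common choices (Poisson counts vs. constant-variance Gaussian), not the paper's ordered-subset or scaled-gradient machinery; all names are illustrative.

```python
import numpy as np

def neg_log_likelihood(counts, model, noise="poisson", eps=1e-12):
    """Negative log-likelihood of measured diffraction counts under a
    candidate intensity model, for two common CDI noise assumptions."""
    model = np.maximum(model, eps)
    if noise == "poisson":
        # Poisson NLL, up to a constant depending only on `counts`
        return float(np.sum(model - counts * np.log(model)))
    if noise == "gaussian":
        # constant-variance Gaussian: ordinary least squares
        return float(0.5 * np.sum((counts - model) ** 2))
    raise ValueError(noise)

# At low counting rates the two criteria weight pixels very differently:
# Poisson penalizes relative error, Gaussian penalizes absolute error.
true_intensity = np.array([0.5, 2.0, 10.0, 50.0])
rng = np.random.default_rng(2)
counts = rng.poisson(true_intensity).astype(float)
ll_poisson = neg_log_likelihood(counts, true_intensity, "poisson")
ll_gauss = neg_log_likelihood(counts, true_intensity, "gaussian")
```

An iterative algorithm whose update decreases one of these objectives has implicitly committed to that noise model, which is the relationship the paper argues must be understood before interpreting reconstructions.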
Hales, J. B.; Brewer, J. B.
2018-01-01
Given the diversity of stimuli encountered in daily life, a variety of strategies must be used for learning new information. Relating and encoding visual and verbal stimuli into memory has been probed using various tasks and stimulus types. Engagement of specific subsequent memory and cortical processing regions depends on the stimulus modality of studied material; however, it remains unclear whether different encoding strategies similarly influence regional activity when stimulus type is held constant. In this study, subjects encoded object pairs using a visual or verbal associative strategy during functional magnetic resonance imaging (fMRI), and subsequent memory was assessed for pairs encoded under each strategy. Each strategy elicited distinct regional processing and subsequent memory effects: middle/superior frontal, lateral parietal, and lateral occipital for visually associated pairs, and inferior frontal, medial frontal, and medial occipital for verbally associated pairs. This regional selectivity mimics the effects of stimulus modality, suggesting that cortical involvement in associative encoding is driven by strategy, and not simply by stimulus type. The clinical relevance of these findings, probed in two patients with recent aphasic strokes, suggests that training with strategies utilizing unaffected cortical regions might improve memory ability in patients with brain damage. PMID:22390467
Sebastian, Alexandra; Rössler, Kora; Wibral, Michael; Mobascher, Arian; Lieb, Klaus; Jung, Patrick; Tüscher, Oliver
2017-10-04
In stimulus-selective stop-signal tasks, the salient stop signal needs attentional processing before genuine response inhibition is completed. Differential prefrontal involvement in attentional capture and response inhibition has been linked to the right inferior frontal junction (IFJ) and ventrolateral prefrontal cortex (VLPFC), respectively. Recently, it has been suggested that stimulus-selective stopping may be accomplished by the following different strategies: individuals may selectively inhibit their response only upon detecting a stop signal (independent discriminate then stop strategy) or unselectively whenever detecting a stop or attentional capture signal (stop then discriminate strategy). Alternatively, the discrimination process of the critical signal (stop vs attentional capture signal) may interact with the go process (dependent discriminate then stop strategy). Those different strategies might differentially involve attention- and stopping-related processes that might be implemented by divergent neural networks. This should lead to divergent activation patterns and, if disregarded, interfere with analyses in neuroimaging studies. To clarify this crucial issue, we studied 87 human participants of both sexes during a stimulus-selective stop-signal task and performed strategy-dependent functional magnetic resonance imaging analyses. We found that, regardless of the strategy applied, outright stopping displayed indistinguishable brain activation patterns. However, during attentional capture different strategies resulted in divergent neural activation patterns with variable activation of right IFJ and bilateral VLPFC. In conclusion, the neural network involved in outright stopping is ubiquitous and independent of strategy, while different strategies impact on attention-related processes and underlying neural network usage. 
Strategic differences should therefore be taken into account particularly when studying attention-related processes in stimulus-selective stopping. SIGNIFICANCE STATEMENT Dissociating inhibition from attention has been a major challenge for the cognitive neuroscience of executive functions. Selective stopping tasks have been instrumental in addressing this question. However, recent theoretical, cognitive and behavioral research suggests that different strategies are applied in successful execution of the task. The underlying strategy-dependent neural networks might differ substantially. Here, we show evidence that, regardless of the strategy used, the neural network involved in outright stopping is ubiquitous. However, significant differences can only be found in the attention-related processes underlying those different strategies. Thus, when studying attentional processing of salient stop signals, strategic differences should be considered. In contrast, the neural networks implementing outright stopping seem less or not at all affected by strategic differences. Copyright © 2017 the authors 0270-6474/17/379786-10$15.00/0.
Cloud Engineering Principles and Technology Enablers for Medical Image Processing-as-a-Service.
Bao, Shunxing; Plassard, Andrew J; Landman, Bennett A; Gokhale, Aniruddha
2017-04-01
Traditional in-house, laboratory-based medical imaging studies use hierarchical data structures (e.g., NFS file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance from these approaches is, however, impeded by standard network switches since they can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. To that end, a cloud-based "medical image processing-as-a-service" offers promise in utilizing the ecosystem of Apache Hadoop, which is a flexible framework providing distributed, scalable, fault tolerant storage and parallel computational modules, and HBase, which is a NoSQL database built atop Hadoop's distributed file system. Despite this promise, HBase's load distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). This paper makes two contributions to address these concerns by describing key cloud engineering principles and technology enhancements we made to the Apache Hadoop ecosystem for medical imaging applications. First, we propose a row-key design for HBase, which is a necessary step that is driven by the hierarchical organization of imaging data. Second, we propose a novel data allocation policy within HBase to strongly enforce collocation of hierarchically related imaging data. The proposed enhancements accelerate data processing by minimizing network usage and localizing processing to machines where the data already exist. Moreover, our approach is amenable to the traditional scan, subject, and project-level analysis procedures, and is compatible with standard command line/scriptable image processing software. 
Experimental results for an illustrative sample of imaging data reveal that our new HBase policy results in a three-fold time improvement in conversion of classic DICOM to NiFTI file formats when compared with the default HBase region split policy, and nearly a six-fold improvement over a commonly available network file system (NFS) approach, even for relatively small file sets. Moreover, file access latency is lower than network-attached storage.
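The hierarchical row-key idea can be sketched as a key-construction helper: because HBase sorts rows lexicographically, composing the key from project through slice keeps related imaging data adjacent on disk. The field names and delimiter below are illustrative, not the paper's exact schema.

```python
def imaging_row_key(project, subject, session, scan, slice_idx):
    """Build a hierarchical HBase row key (project -> subject -> session
    -> scan -> slice) so lexicographic row ordering collocates related
    imaging data. Zero-padding the numeric slice field keeps the
    lexicographic order identical to numeric order."""
    return "{}#{}#{}#{}#{:05d}".format(project, subject, session, scan, slice_idx)

k1 = imaging_row_key("projA", "sub01", "sess1", "scanT1", 2)
k2 = imaging_row_key("projA", "sub01", "sess1", "scanT1", 10)
```

A scan over the prefix `projA#sub01#sess1#` then touches a contiguous key range, which is what lets the proposed allocation policy localize processing to machines already holding the data.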
Lin, Jenny B.; Phillips, Evan H.; Riggins, Ti’Air E.; Sangha, Gurneet S.; Chakraborty, Sreyashi; Lee, Janice Y.; Lycke, Roy J.; Hernandez, Clarissa L.; Soepriatna, Arvin H.; Thorne, Bradford R. H.; Yrineo, Alexa A.; Goergen, Craig J.
2015-01-01
Peripheral artery disease (PAD) is a broad disorder encompassing multiple forms of arterial disease outside of the heart. As such, PAD development is a multifactorial process with a variety of manifestations. For example, aneurysms are pathological expansions of an artery that can lead to rupture, while ischemic atherosclerosis reduces blood flow, increasing the risk of claudication, poor wound healing, limb amputation, and stroke. Current PAD treatment is often ineffective or associated with serious risks, largely because these disorders are commonly undiagnosed or misdiagnosed. Active areas of research are focused on detecting and characterizing deleterious arterial changes at early stages using non-invasive imaging strategies, such as ultrasound, as well as emerging technologies like photoacoustic imaging. Earlier disease detection and characterization could improve interventional strategies, leading to better prognosis in PAD patients. While rodents are being used to investigate PAD pathophysiology, imaging of these animal models has been underutilized. This review focuses on structural and molecular information and disease progression revealed by recent imaging efforts of aortic, cerebral, and peripheral vascular disease models in mice, rats, and rabbits. Effective translation to humans involves better understanding of underlying PAD pathophysiology to develop novel therapeutics and apply non-invasive imaging techniques in the clinic. PMID:25993289
Standing on the shoulders of giants: improving medical image segmentation via bias correction.
Wang, Hongzhi; Das, Sandhitsu; Pluta, John; Craige, Caryne; Altinay, Murat; Avants, Brian; Weiner, Michael; Mueller, Susanne; Yushkevich, Paul
2010-01-01
We propose a simple strategy to improve automatic medical image segmentation. The key idea is that without deep understanding of a segmentation method, we can still improve its performance by directly calibrating its results with respect to manual segmentation. We formulate the calibration process as a bias correction problem, which is addressed by machine learning using training data. We apply this methodology on three segmentation problems/methods and show significant improvements for all of them.
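The calibration idea can be sketched as learning a correction that maps an automatic segmentation (plus simple image features) toward the manual reference. The least-squares model, the single intensity feature, and the synthetic over-segmenting baseline below are all illustrative stand-ins for the paper's machine-learning step.

```python
import numpy as np

def fit_bias_correction(features, auto_labels, manual_labels):
    """Learn a linear correction from (feature, automatic label) pairs to
    the manual labels; a least-squares stand-in for the learning step."""
    X = np.column_stack([features, auto_labels, np.ones(len(auto_labels))])
    w, *_ = np.linalg.lstsq(X, manual_labels.astype(float), rcond=None)
    return w

def apply_bias_correction(w, features, auto_labels):
    """Apply the learnt correction and re-threshold to binary labels."""
    X = np.column_stack([features, auto_labels, np.ones(len(auto_labels))])
    return (X @ w) > 0.5

# Synthetic example: the automatic method systematically over-segments
rng = np.random.default_rng(3)
intensity = rng.uniform(0, 1, 500)
manual = (intensity > 0.5).astype(int)       # "manual" reference
auto = (intensity > 0.3).astype(int)         # biased automatic result
w = fit_bias_correction(intensity, auto, manual)
corrected = apply_bias_correction(w, intensity, auto)
```

Even this crude linear calibration repairs most of the systematic over-segmentation, which is the paper's point: the correction needs training data, not a deep understanding of the underlying segmentation method.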
NASA Astrophysics Data System (ADS)
Manoharan, Kodeeswari; Daniel, Philemon
2017-11-01
This paper presents a robust lane detection technique for roads on hilly terrain. The aim is to use image processing strategies to recognize lane lines on structured mountain roads with the help of an improved Hough transform. A vision-based approach is used because it performs well in a wide variety of circumstances by abstracting useful information compared with other sensors. The proposed method processes the live video stream, a series of images, and extracts the position of lane markings after passing the frames through various filters and proper thresholding. The algorithm is tuned for Indian mountainous curved and paved roads. A computational technique is used to discard distracting lines other than the credible lane lines and display only the required dominant lane lines. The method automatically finds the two lane lines closest to the vehicle in an image as early as possible. Various video sequences on hilly terrain were tested to verify the effectiveness of the method, which showed good performance with a detection accuracy of 91.89%.
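The Hough transform at the heart of such lane detectors can be sketched as a voting accumulator over (rho, theta) line parameters. This is the standard transform on a synthetic edge image, not the paper's improved variant or its tuning for mountain roads.

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    """Minimal Hough transform: each edge pixel votes for all (rho, theta)
    line parameterizations rho = x*cos(theta) + y*sin(theta) passing
    through it; peaks in the accumulator are detected lines."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta))
    rhos = np.arange(-diag, diag + 1)
    acc = np.zeros((len(rhos), n_theta), dtype=int)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        r = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[r + diag, np.arange(n_theta)] += 1
    return acc, rhos, thetas

# Synthetic edge image containing a vertical line at x = 10
img = np.zeros((50, 50), dtype=bool)
img[:, 10] = True
acc, rhos, thetas = hough_lines(img)
peak = np.unravel_index(np.argmax(acc), acc.shape)
rho_peak, theta_peak = rhos[peak[0]], np.rad2deg(thetas[peak[1]])
```

A lane detector keeps only the strongest accumulator peaks near the expected lane orientations, which corresponds to the paper's step of discarding distracting lines and retaining the two credible lane lines.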
Neural substrates of similarity and rule-based strategies in judgment
von Helversen, Bettina; Karlsson, Linnea; Rasch, Björn; Rieskamp, Jörg
2014-01-01
Making accurate judgments is a core human competence and a prerequisite for success in many areas of life. Plenty of evidence exists that people can employ different judgment strategies to solve identical judgment problems. In categorization, it has been demonstrated that similarity-based and rule-based strategies are associated with activity in different brain regions. Building on this research, the present work tests whether solving two identical judgment problems recruits different neural substrates depending on people's judgment strategies. Combining cognitive modeling of judgment strategies at the behavioral level with functional magnetic resonance imaging (fMRI), we compare brain activity when using two archetypal judgment strategies: a similarity-based exemplar strategy and a rule-based heuristic strategy. Using an exemplar-based strategy should recruit areas involved in long-term memory processes to a larger extent than a heuristic strategy. In contrast, using a heuristic strategy should recruit areas involved in the application of rules to a larger extent than an exemplar-based strategy. Largely consistent with our hypotheses, we found that using an exemplar-based strategy led to relatively higher BOLD activity in the anterior prefrontal and inferior parietal cortex, presumably related to retrieval and selective attention processes. In contrast, using a heuristic strategy led to relatively higher activity in areas in the dorsolateral prefrontal and the temporal-parietal cortex associated with cognitive control and information integration. Thus, even when people solve identical judgment problems, different neural substrates can be recruited depending on the judgment strategy involved. PMID:25360099
NASA Technical Reports Server (NTRS)
1987-01-01
The high-resolution imaging spectrometer (HIRIS) is an Earth Observing System (EOS) sensor developed for high spatial and spectral resolution. It can acquire more information in the 0.4 to 2.5 micrometer spectral region than any other sensor yet envisioned. Its capability for critical sampling at high spatial resolution makes it an ideal complement to the MODIS (moderate-resolution imaging spectrometer) and HMMR (high-resolution multifrequency microwave radiometer), lower resolution sensors designed for repetitive coverage. With HIRIS it is possible to observe transient processes in a multistage remote sensing strategy for Earth observations on a global scale. The objectives, science requirements, and current sensor design of the HIRIS are discussed along with the synergism of the sensor with other EOS instruments and data handling and processing requirements.
Estimation of the Horizon in Photographed Outdoor Scenes by Human and Machine
Herdtweck, Christian; Wallraven, Christian
2013-01-01
We present three experiments on horizon estimation. In Experiment 1 we verify the human ability to estimate the horizon in static images from visual input alone. Estimates are given without time constraints, with emphasis on precision. The resulting estimates are used as a baseline to evaluate horizon estimates from early visual processes. Stimuli are then presented only briefly and masked to purge visual short-term memory, forcing estimates to rely on early processes only. The high agreement between estimates and the lack of a training effect show that enough information about viewpoint is extracted in the first few hundred milliseconds to make accurate horizon estimation possible. In Experiment 3 we investigate several strategies to estimate the horizon computationally and compare human with machine “behavior” for different image manipulations and image scene types. PMID:24349073
Developing a Self-Help Brochure Series: Costs and Benefits.
ERIC Educational Resources Information Center
Allen, Deborah R.; Sipich, James F.
1987-01-01
Describes the process by which one counseling center developed a series of self-help brochures. In addition to the intended service benefits to students, the brochures present a positive image of the counseling center to numerous campus constituencies. Discusses costs, benefits, and strategies for marketing the brochures. (Author)
Neural differences in the processing of semantic relationships across cultures.
Gutchess, Angela H; Hedden, Trey; Ketay, Sarah; Aron, Arthur; Gabrieli, John D E
2010-06-01
The current study employed functional MRI to investigate the contribution of domain-general (e.g. executive functions) and domain-specific (e.g. semantic knowledge) processes to differences in semantic judgments across cultures. Previous behavioral experiments have identified cross-cultural differences in categorization, with East Asians preferring strategies involving thematic or functional relationships (e.g. cow-grass) and Americans preferring categorical relationships (e.g. cow-chicken). East Asians and American participants underwent functional imaging while alternating between categorical or thematic strategies to sort triads of words, as well as matching words on control trials. Many similarities were observed. However, across both category and relationship trials compared to match (control) trials, East Asians activated a frontal-parietal network implicated in controlled executive processes, whereas Americans engaged regions of the temporal lobes and the cingulate, possibly in response to conflict in the semantic content of information. The results suggest that cultures differ in the strategies employed to resolve conflict between competing semantic judgments.
NASA Astrophysics Data System (ADS)
Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng
2018-02-01
De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, making it an effective tool for 3D high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, computational efficiency remains the main obstacle to producing 3D high-resolution images from real large-scale seismic data. In this paper, we propose a division method for large-scale 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPUs). We then design an imaging-point parallel strategy to achieve optimal parallel computing performance, and adopt an asynchronous double-buffering scheme for multi-stream GPU/CPU parallel computing. Moreover, several key optimization strategies for computation and storage based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared memory optimization, register optimization and special function units (SFU), greatly improved the efficiency. A numerical example employing real large-scale 3D seismic data showed that our scheme is nearly 80 times faster than the CPU QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D high-resolution imaging of large-scale seismic data.
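The paper's division method is not detailed in the abstract. A minimal sketch of the general idea, splitting a large 3D survey grid into sub-volumes small enough for one GPU's memory, might look as follows; tile sizes and names are assumptions for illustration.

```python
import numpy as np

def divide_volume(shape, tile):
    """Split a 3D grid into non-overlapping sub-volumes (slice triples),
    each at most `tile` in size, covering the whole grid."""
    blocks = []
    for x in range(0, shape[0], tile[0]):
        for y in range(0, shape[1], tile[1]):
            for z in range(0, shape[2], tile[2]):
                blocks.append((slice(x, min(x + tile[0], shape[0])),
                               slice(y, min(y + tile[1], shape[1])),
                               slice(z, min(z + tile[2], shape[2]))))
    return blocks

vol = np.zeros((100, 80, 60))
blocks = divide_volume(vol.shape, (50, 40, 60))
print(len(blocks))  # 4 sub-volumes
```

Each sub-volume could then be dispatched to a GPU worker, with double buffering overlapping transfers and computation.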
Training working memory in older adults: Is there an advantage of using strategies?
Borella, Erika; Carretti, Barbara; Sciore, Roberta; Capotosto, Emanuela; Taconnat, Laurence; Cornoldi, Cesare; De Beni, Rossana
2017-03-01
The purpose of the present study was to test the efficacy of a working memory (WM) training in elderly people, and to compare the effects of a WM training based on an adaptive procedure with one combining the same procedure with the use of a strategy, based on the construction of visual mental images. Eighteen older adults received training with a WM task (the WM group), another 18 received the same WM training and were also taught to use a visual imagery strategy (the WM + Strategy group), and another 18 served as active controls. Training-related gains in the WM (criterion) task and transfer effects on measures of verbal and visuospatial WM, short-term memory (STM), processing speed, and reasoning were considered. Training gains and transfer effects were also assessed after 6 months. After the training, both the trained groups performed better than the control group in the WM criterion task, and maintained these gains 6 months later; they also showed immediate transfer effects on processing speed. The two trained groups also outperformed the control group in the long term in the WM tasks, in one of the STM tasks (backward span task), and in the processing speed measure. Long-term large effect sizes were found for all the tasks involving memory processes in the WM + Strategy group, but only for the processing speed task in the WM group. Findings are discussed in terms of the benefits and limits of teaching older people a strategy in combination with an adaptive WM training. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Segmentation of 830- and 1310-nm LASIK corneal optical coherence tomography images
NASA Astrophysics Data System (ADS)
Li, Yan; Shekhar, Raj; Huang, David
2002-05-01
Optical coherence tomography (OCT) provides a non-contact and non-invasive means to visualize the corneal anatomy at micron-scale resolution. We obtained corneal images from an arc-scanning (converging) OCT system operating at a wavelength of 830 nm and a fan-shaped-scanning high-speed OCT system operating at 1310 nm. Different scan protocols (arc/fan) and data acquisition rates, as well as wavelength-dependent bio-tissue backscatter contrast and optical absorption, make the images acquired by the two systems different. We developed image-processing algorithms to automatically detect the air-tear interface, the epithelium-Bowman's layer interface, the laser in-situ keratomileusis (LASIK) flap interface, and the cornea-aqueous interface in both kinds of images. The overall segmentation scheme for the 830 nm and 1310 nm OCT images was similar, although different strategies were adopted for specific processing steps. Ultrasound pachymetry measurements of corneal thickness and Placido-ring-based corneal topography measurements of corneal curvature were made on the same day as the OCT examination. Anterior/posterior corneal surface curvature measurement with OCT was also investigated. Results showed that automated segmentation of OCT images can evaluate the anatomic outcome of LASIK surgery.
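The abstract does not specify the detection algorithm. One common building block for layer detection in OCT is a per-A-scan search for the strongest intensity rise along depth, sketched here on synthetic data; all names and the toy image are illustrative, not the authors' implementation.

```python
import numpy as np

def detect_interface(ascan_image):
    """For each A-scan (column), locate the depth of the strongest
    intensity rise, a simple proxy for a tissue interface."""
    grad = np.diff(ascan_image.astype(float), axis=0)
    return grad.argmax(axis=0) + 1  # depth index per column

# synthetic cornea: dark "air" above depth 30, bright tissue below
img = np.zeros((100, 8))
img[30:, :] = 150
print(detect_interface(img))  # [30 30 30 30 30 30 30 30]
```

Real corneal images need speckle suppression and continuity constraints across columns before such a per-column rule is reliable.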
Automatic digital surface model (DSM) generation from aerial imagery data
NASA Astrophysics Data System (ADS)
Zhou, Nan; Cao, Shixiang; He, Hongyan; Xing, Kun; Yue, Chunyu
2018-04-01
Aerial sensors are widely used to acquire imagery for photogrammetric and remote sensing applications. In general, the images have large overlapping regions, which provide substantial redundant geometric and radiometric information for matching. This paper presents a POS-supported dense matching procedure for automatic DSM generation from aerial imagery data. The method uses a coarse-to-fine hierarchical strategy with an effective combination of several image matching algorithms: image radiation pre-processing, image pyramid generation, feature point extraction and grid point generation, multi-image geometrically constrained cross-correlation (MIG3C), global relaxation optimization, multi-image geometrically constrained least squares matching (MIGCLSM), TIN generation and point cloud filtering. The image radiation pre-processing reduces the effects of inherent radiometric problems and optimizes the images. The presented approach essentially consists of three components: a feature point extraction and matching procedure, a grid point matching procedure and a relational matching procedure. The MIGCLSM method is used to achieve potentially sub-pixel-accuracy matches and to identify inaccurate and possibly false matches. The feasibility of the method has been tested on aerial images of different scales covering different land-cover types. The accuracy evaluation is based on the comparison between the automatically extracted DSMs derived from the precise exterior orientation parameters (EOPs) and from the POS.
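Two generic ingredients of such a coarse-to-fine pipeline, pyramid generation and a normalized cross-correlation matching score, can be sketched as below. The interfaces are assumptions; the paper's MIG3C/MIGCLSM specifics are not reproduced.

```python
import numpy as np

def pyramid(img, levels):
    """Coarse-to-fine image pyramid by 2x2 block averaging."""
    out = [img]
    for _ in range(levels - 1):
        h, w = out[-1].shape
        out.append(out[-1][:h // 2 * 2, :w // 2 * 2]
                   .reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return out

def ncc(a, b):
    """Normalized cross-correlation between two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

img = np.random.rand(64, 64)
levels = pyramid(img, 3)
print([l.shape for l in levels])  # [(64, 64), (32, 32), (16, 16)]
print(round(ncc(img, img), 3))   # 1.0
```

Matches found at the coarsest level seed the search windows one level down, which is what makes the hierarchical strategy cheap.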
Live cell imaging of in vitro human trophoblast syncytialization.
Wang, Rui; Dang, Yan-Li; Zheng, Ru; Li, Yue; Li, Weiwei; Lu, Xiaoyin; Wang, Li-Juan; Zhu, Cheng; Lin, Hai-Yan; Wang, Hongmei
2014-06-01
Human trophoblast syncytialization, a process of cell-cell fusion, is one of the most important yet least understood events in placental development. Investigating the fusion process in the placenta in vivo is very challenging given its complexity. The application of primary cytotrophoblast cells isolated from term placentas and of BeWo cells derived from human choriocarcinoma constitutes a biphasic strategy for studying the mechanism of trophoblast cell fusion, as the former spontaneously fuse to form the multinucleated syncytium and the latter fuse upon treatment with forskolin (FSK). Live-cell imaging is a powerful tool widely used to investigate physiological and pathological processes in various animal models and in humans; to our knowledge, however, the mechanism of trophoblast cell fusion has not previously been examined by live-cell imaging. In this study, a live-cell imaging system was used to delineate the fusion process of primary term cytotrophoblast cells and BeWo cells. By live staining with Hoechst 33342 or cytoplasmic dyes, or by stably transfecting enhanced green fluorescent protein (EGFP) and DsRed2-Nuc reporter plasmids, we observed finger-like protrusions on the cell membranes of fusion partners before fusion and the exchange of cytoplasmic contents during fusion. In summary, this study provides the first video recording of the process of trophoblast syncytialization. Furthermore, the live-cell imaging systems used in this study should help yield molecular insights into the syncytialization process during placental development. © 2014 by the Society for the Study of Reproduction, Inc.
Overlay metrology for double patterning processes
NASA Astrophysics Data System (ADS)
Leray, Philippe; Cheng, Shaunee; Laidler, David; Kandel, Daniel; Adel, Mike; Dinu, Berta; Polli, Marco; Vasconi, Mauro; Salski, Bartlomiej
2009-03-01
The double patterning (DPT) process is foreseen by the industry as the main solution for the 32 nm technology node and beyond. Meanwhile, process compatibility has to be maintained and the performance of overlay metrology has to improve. To achieve this for image-based overlay (IBO), the optics of overlay tools are usually improved. It has also been demonstrated that these requirements are achievable with a diffraction-based overlay (DBO) technique named SCOLTM [1]. In addition, we believe that overlay measurements with respect to a reference grid are required to achieve the required overlay control [2]. This implies at least a three-fold increase in the number of measurements (two from the double-patterned layers to the reference grid and one between the double-patterned layers). The requirements of process compatibility, enhanced performance and a large number of measurements make the choice of overlay metrology for DPT very challenging. In this work we use different flavors of the standard overlay metrology technique (IBO) as well as the new technique (SCOL) to address these three requirements. The compatibility of the corresponding overlay targets with double patterning processes (litho-etch-litho-etch (LELE); litho-freeze-litho-etch (LFLE); spacer-defined) is tested. The process impact on different target types is discussed (CD bias for LELE, contrast for LFLE). We compare standard imaging overlay metrology with non-standard imaging techniques dedicated to double patterning processes (multilayer imaging targets allowing one overlay target instead of three, and very small imaging targets). In addition to standard designs already discussed [1], we investigate SCOL target designs specific to double patterning processes. The feedback to the scanner is determined using the different techniques, and the final overlay results are compared accordingly. We conclude with the pros and cons of each technique and suggest the optimal metrology strategy for overlay control in double patterning processes.
Multi-PSF fusion in image restoration of range-gated systems
NASA Astrophysics Data System (ADS)
Wang, Canjin; Sun, Tao; Wang, Tingfeng; Miao, Xikui; Wang, Rui
2018-07-01
For image restoration, accurate estimation of the degrading PSF/kernel is the premise of recovering a visually superior image. The imaging process of a range-gated system in the atmosphere involves many factors, such as back-scattering, background radiation, the diffraction limit and platform vibration. On one hand, because it is difficult to model all of these factors, kernels from physical-model-based methods are not strictly accurate or practical. On the other hand, such images contain few strong edges, which introduces significant errors into most image-feature-based methods. Since different methods focus on different formation factors of the kernel, their results often complement each other. We therefore propose an approach that combines the physical model with image features. Using a fusion strategy based on the GCRF (Gaussian conditional random fields) framework, we obtain a final kernel that is closer to the actual one. To address the difficulty of obtaining ground-truth images, we then propose a semi-data-driven fusion method in which different data sets are used to train the fusion parameters. Finally, a semi-blind restoration strategy based on the EM (expectation maximization) and RL (Richardson-Lucy) algorithms is proposed. Our method not only models how the laser propagates through the atmosphere and forms an image on the ICCD (intensified CCD) plane, but also quantifies other unknown degradation factors using image-based methods, revealing how multiple kernel elements interact with each other. The experimental results demonstrate that our method achieves better performance than state-of-the-art restoration approaches.
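The RL step inside such a semi-blind loop can be sketched generically. Below is a plain 1-D Richardson-Lucy iteration on synthetic data; this is a textbook sketch, not the authors' implementation, and all names are illustrative.

```python
import numpy as np

def richardson_lucy(blurred, psf, iters=30):
    """Plain 1-D Richardson-Lucy deconvolution: iteratively scale the
    estimate by the back-projected ratio of observed to re-blurred data."""
    psf = psf / psf.sum()
    estimate = np.full_like(blurred, blurred.mean())
    psf_flip = psf[::-1]
    for _ in range(iters):
        reblurred = np.convolve(estimate, psf, mode='same')
        ratio = blurred / (reblurred + 1e-12)
        estimate = estimate * np.convolve(ratio, psf_flip, mode='same')
    return estimate

signal = np.zeros(32)
signal[16] = 1.0                          # a point source
psf = np.array([0.25, 0.5, 0.25])         # known symmetric blur
blurred = np.convolve(signal, psf, mode='same')
restored = richardson_lucy(blurred, psf, iters=50)
print(int(restored.argmax()))  # 16
```

In the semi-blind setting the PSF itself is re-estimated (here via the fused GCRF kernel and EM) rather than assumed known.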
2015-01-01
We report a dual illumination, single-molecule imaging strategy to dissect directly and in real-time the correlation between nanometer-scale domain motion of a DNA repair protein and its interaction with individual DNA substrates. The strategy was applied to XPD, an FeS cluster-containing DNA repair helicase. Conformational dynamics was assessed via FeS-mediated quenching of a fluorophore site-specifically incorporated into XPD. Simultaneously, binding of DNA molecules labeled with a spectrally distinct fluorophore was detected by colocalization of the DNA- and protein-derived signals. We show that XPD undergoes thermally driven conformational transitions that manifest in spatial separation of its two auxiliary domains. DNA binding does not strictly enforce a specific conformation. Interaction with a cognate DNA damage, however, stabilizes the compact conformation of XPD by increasing the weighted average lifetime of this state by 140% relative to an undamaged DNA. Our imaging strategy will be a valuable tool to study other FeS-containing nucleic acid processing enzymes. PMID:25204359
Khan, Muhammad Burhan; Nisar, Humaira; Ng, Choon Aun; Yeap, Kim Ho; Lai, Koon Chun
2017-12-01
Image processing and analysis is an effective tool for monitoring and fault diagnosis of activated sludge (AS) wastewater treatment plants. AS images comprise flocs (microbial aggregates) and filamentous bacteria. In this paper, nine different approaches are proposed for image segmentation of phase-contrast microscopy (PCM) images of AS samples. The proposed strategies are assessed for their effectiveness with respect to the microscopic artifacts associated with PCM. The first approach uses an algorithm based on the idea that color-space representations other than red-green-blue may offer better contrast. The second uses edge detection. The third employs a clustering algorithm for the segmentation, and the fourth applies local adaptive thresholding. The fifth is based on texture-based segmentation and the sixth uses the watershed algorithm. The seventh adopts a split-and-merge approach, the eighth employs Kittler's thresholding, and the ninth uses a top-hat and bottom-hat filtering-based technique. The approaches are assessed and analyzed critically with reference to the artifacts of PCM. Gold approximations of ground-truth images were prepared to assess the segmentations. Overall, the edge-detection-based approach exhibits the best results in terms of accuracy, and the texture-based algorithm in terms of false negative ratio. The scenarios in which the edge detection and texture-based algorithms are most suitable are explained.
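As an illustration of the clustering idea behind the third strategy, a 1-D two-means (isodata-style) intensity threshold can be sketched as follows. This is a deliberate simplification; the paper's actual clustering algorithm is not specified in the abstract.

```python
import numpy as np

def two_means_threshold(pixels, iters=20):
    """Split pixel intensities into two clusters (background/foreground)
    by alternating cluster-mean updates, then return the midpoint."""
    c0, c1 = pixels.min(), pixels.max()
    for _ in range(iters):
        t = (c0 + c1) / 2
        lo, hi = pixels[pixels <= t], pixels[pixels > t]
        if lo.size:
            c0 = lo.mean()
        if hi.size:
            c1 = hi.mean()
    return (c0 + c1) / 2

rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(50, 5, 500),    # dark background
                         rng.normal(180, 5, 500)])  # bright flocs
t = two_means_threshold(pixels)
print(80 < t < 150)  # True: threshold falls between the two modes
```

PCM halo artifacts violate the two-mode assumption, which is why the paper compares nine strategies rather than relying on one.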
Aldridge, R Benjamin; Glodzik, Dominik; Ballerini, Lucia; Fisher, Robert B; Rees, Jonathan L
2011-05-01
Non-analytical reasoning is thought to play a key role in dermatology diagnosis. Considering its potential importance, surprisingly little work has been done to research whether similar identification processes can be supported in non-experts. We describe here a prototype diagnostic support software, which we have used to examine the ability of medical students (at the beginning and end of a dermatology attachment) and lay volunteers, to diagnose 12 images of common skin lesions. Overall, the non-experts using the software had a diagnostic accuracy of 98% (923/936) compared with 33% for the control group (215/648) (Wilcoxon p < 0.0001). We have demonstrated, within the constraints of a simplified clinical model, that novices' diagnostic scores are significantly increased by the use of a structured image database coupled with matching of index and referent images. The novices achieve this high degree of accuracy without any use of explicit definitions of likeness or rule-based strategies.
Exemplar-Based Image Inpainting Using a Modified Priority Definition.
Deng, Liang-Jian; Huang, Ting-Zhu; Zhao, Xi-Le
2015-01-01
Exemplar-based algorithms are a popular technique for image inpainting. They have two important phases: deciding the filling-in order and selecting good exemplars. Traditional exemplar-based algorithms search source regions for suitable patches to fill in the missing parts, but they face the problem of improper exemplar selection. To address this problem, we introduce an independent strategy derived from investigating the patch propagation process. We first define a new separated priority definition that propagates geometry first and then synthesizes image textures, aiming to recover both image geometry and textures well. In addition, an automatic algorithm is designed to estimate the steps for the new separated priority definition. Compared with several competitive approaches, the new priority definition recovers image geometry and textures well. PMID:26492491
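The separated priority is the paper's contribution and is not reproduced here. For orientation, the classic Criminisi-style priority that such schemes modify, the product of a confidence term and a data term, can be sketched as below; the function names and the 3x3 patch size are assumptions.

```python
import numpy as np

def fill_priorities(known, grad_mag):
    """Criminisi-style priority P = C * D on the fill front:
    C is the fraction of known pixels in the 3x3 patch (confidence),
    D the local gradient magnitude (data term)."""
    h, w = known.shape
    P = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if not known[i, j] and known[i - 1:i + 2, j - 1:j + 2].any():
                C = known[i - 1:i + 2, j - 1:j + 2].mean()
                P[i, j] = C * grad_mag[i, j]
    return P

known = np.ones((8, 8), bool)
known[3:6, 3:6] = False          # 3x3 hole to inpaint
grad = np.ones((8, 8))           # uniform data term for the toy example
P = fill_priorities(known, grad)
print(P[3, 3] > P[4, 4])  # True: the hole's border outranks its center
```

The paper's point is that coupling C and D in one product is fragile; separating geometry propagation from texture synthesis changes this ordering.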
NASA Astrophysics Data System (ADS)
Chen, Huaiguang; Fu, Shujun; Wang, Hong; Lv, Hongli; Zhang, Caiming
2018-03-01
As a high-resolution imaging modality for biological tissues and materials, optical coherence tomography (OCT) is widely used in medical diagnosis and analysis. However, OCT images are often degraded by speckle noise inherent in the imaging process. Employing a bilateral sparse representation, an adaptive singular value shrinking method is proposed for its highly sparse approximation of image data. Adopting the generalized likelihood ratio as the similarity criterion for block matching, together with an adaptive feature-oriented backward projection strategy, the proposed algorithm restores the underlying layered structures and details of the OCT image with effective speckle attenuation. The experimental results demonstrate that the proposed algorithm achieves state-of-the-art despeckling performance in terms of both quantitative measurement and visual interpretation.
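The core operation named in the abstract, singular value shrinking on a matrix of similar patches, can be sketched minimally as follows. The fixed threshold and toy data are assumptions for illustration; the paper's shrinkage rule is adaptive.

```python
import numpy as np

def svd_shrink(patch_matrix, tau):
    """Soft-threshold the singular values of a matrix of similar patches;
    small singular values carry mostly speckle and are suppressed."""
    U, s, Vt = np.linalg.svd(patch_matrix, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return U @ np.diag(s) @ Vt

rng = np.random.default_rng(0)
clean = np.outer(np.ones(20), np.linspace(0, 1, 15))   # rank-1 structure
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = svd_shrink(noisy, tau=0.9)
err_noisy = np.linalg.norm(noisy - clean)
err_den = np.linalg.norm(denoised - clean)
print(err_den < err_noisy)
```

Because stacked similar patches form a near-low-rank matrix, shrinking the small singular values removes noise while keeping the shared layered structure.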
Evaluation of search strategies for microcalcifications and masses in 3D images
NASA Astrophysics Data System (ADS)
Eckstein, Miguel P.; Lago, Miguel A.; Abbey, Craig K.
2018-03-01
Medical imaging is quickly evolving towards 3D modalities such as computed tomography (CT), magnetic resonance imaging (MRI) and digital breast tomosynthesis (DBT). These 3D modalities add volumetric information but further increase the need for radiologists to search through the image data set. Although much is known about search strategies in 2D images, less is known about the functional consequences of different 3D search strategies. We instructed readers to use two different search strategies: drillers had their eye movements restricted to a few regions while they quickly scrolled through the image stack, whereas scanners explored the 2D slices through eye movements. We used real-time eye position monitoring to ensure observers followed the drilling or the scanning strategy while approximately preserving the percentage of the volumetric data covered by the useful field of view. We investigated search for two signals: a simulated microcalcification and a larger simulated mass. Results show an interaction between search strategy and lesion type. In particular, scanning provided significantly better detectability for microcalcifications at the cost of five times more search time, whereas there was little change in detectability for the larger simulated masses. Analyses of eye movements support the hypothesis that the effectiveness of a search strategy in 3D imaging arises from the interaction between the fixational sampling of visual information and the signals' visibility in the visual periphery.
A Novel Defect Inspection Method for Semiconductor Wafer Based on Magneto-Optic Imaging
NASA Astrophysics Data System (ADS)
Pan, Z.; Chen, L.; Li, W.; Zhang, G.; Wu, P.
2013-03-01
Defects in semiconductor wafers may be generated during manufacturing. A novel defect inspection method for semiconductor wafers is presented in this paper. The method is based on magneto-optic imaging: an eddy current is induced in the wafer under test, and the magnetic flux associated with the eddy current distribution in the wafer is detected by exploiting the Faraday rotation effect. The magneto-optic image may contain noise that degrades overall image quality. Therefore, to remove this unwanted noise, an image enhancement approach using multi-scale wavelets is presented, together with an image segmentation approach based on the integration of the watershed algorithm and a clustering strategy. The experimental results show that many types of wafer defects, such as holes and scratches, can be detected by the proposed method.
NASA Astrophysics Data System (ADS)
Xue, Bo; Mao, Bingjing; Chen, Xiaomei; Ni, Guoqiang
2010-11-01
This paper presents a configurable distributed high-performance computing (HPC) framework for TDI-CCD imaging simulation. It uses the strategy pattern to accommodate multiple algorithms, helping to decrease simulation time at low cost. Imaging simulation for a satellite-mounted TDI-CCD involves four processes: 1) degradation due to the atmosphere, 2) degradation due to the optical system, 3) degradation and re-sampling due to the TDI-CCD electronics, and 4) data integration. Processes 1) to 3) use diverse data-intensive algorithms such as FFT, convolution and Lagrange interpolation, which require a powerful CPU. Even on an Intel Xeon X5550 processor, the conventional serial method takes more than 30 hours for a simulation whose resulting image size is 1500 × 1462. A literature study found no mature distributed HPC framework in this field. We therefore developed a distributed computing framework for TDI-CCD imaging simulation based on WCF [1], which uses a client/server (C/S) architecture and invokes the free CPU resources in the LAN: the server pushes the tasks of processes 1) to 3) to the free computing capacity, achieving HPC at low cost. In a computing experiment with 4 symmetric nodes and 1 server, this framework reduced simulation time by about 74%; adding more asymmetric nodes to the computing network decreased the time further. In conclusion, this framework can provide effectively unlimited computation capacity provided that the network and a task management server are affordable, and it constitutes a new HPC solution for TDI-CCD imaging simulation and similar applications.
Time-lapse Inversion of Electrical Resistivity Data
NASA Astrophysics Data System (ADS)
Nguyen, F.; Kemna, A.
2005-12-01
Time-lapse geophysical measurements (also known as monitoring, repeat or multi-frame surveys) now play a critical role in non-destructively monitoring changes induced by human activity, such as reservoir compaction, and in studying natural processes, such as flow and transport in porous media. To invert such data sets into time-varying subsurface properties, several strategies are found in different engineering and scientific fields (e.g., biomedical, process tomography, or geophysical applications). In time-lapse surveys, the data sets and models at each time frame are typically closely related to their "neighbors", provided the process does not induce chaotic or very large variations; the information contained in the different frames can therefore be used to constrain the inversion of the others. A first strategy imposes constraints on the model based on a prior estimate, on a priori spatiotemporal or temporal behavior (arbitrary, or based on a law describing the monitored process), on restricting changes to certain areas, or on data-change reproducibility. A second strategy inverts the model changes directly, with an objective function that penalizes models whose spatial, temporal, or spatiotemporal behavior differs from a prior assumption or from a computed a priori. The incorporation of time-lapse a priori information, determined from the data sets or assumed, has been shown to improve the resolving capability significantly, mainly by removing artifacts. However, these methods have not been systematically compared. In this paper, we focus on Tikhonov-like inversion approaches for electrical tomography imaging to evaluate the capability of the existing strategies and to propose new ones. To evaluate the bias inevitably introduced by time-lapse regularization, we quantified the relative contribution of the different approaches to the resolving power of the method.
Furthermore, we incorporated different noise levels and types (random and/or systematic) to determine the strategies' ability to cope with real data. Introducing additional regularization terms also yields more regularization parameters to compute; since this is a difficult and computationally costly task, we propose that the time-lapse regularization weight be set in proportion to the velocity of the monitored process. To achieve these objectives, we tested the different methods using synthetic models and experimental data, taking noise and error propagation into account. Our study shows that the choice of inversion strategy depends strongly on the nature and magnitude of the noise, whereas the choice of regularization term strongly influences the resulting image according to the a priori assumption. This study was developed under the scope of the European project ALERT (GOCE-CT-2004-505329).
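The second strategy, penalizing deviation from the previous frame, can be illustrated with a linear Tikhonov-style step. This is a toy linear problem under assumed names; the paper's electrical-tomography operators and regularization choices are not reproduced.

```python
import numpy as np

def timelapse_inversion(G, d, m_prev, alpha=0.1, beta=1.0):
    """Tikhonov-style time-lapse step: minimize
    ||G m - d||^2 + alpha ||m||^2 + beta ||m - m_prev||^2,
    i.e. fit the new data while staying close to the previous frame."""
    n = G.shape[1]
    A = G.T @ G + (alpha + beta) * np.eye(n)
    b = G.T @ d + beta * m_prev
    return np.linalg.solve(A, b)

rng = np.random.default_rng(1)
G = rng.standard_normal((30, 10))         # toy forward operator
m_true = np.ones(10)                      # current-frame model
d = G @ m_true + 0.01 * rng.standard_normal(30)
m_prev = 0.9 * np.ones(10)                # previous-frame model
m = timelapse_inversion(G, d, m_prev)
print(np.linalg.norm(m - m_true) < 0.5)   # True
```

Raising beta trades data fit for temporal smoothness, which is exactly the bias the abstract says must be quantified.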
The NAIMS cooperative pilot project: Design, implementation and future directions.
Oh, Jiwon; Bakshi, Rohit; Calabresi, Peter A; Crainiceanu, Ciprian; Henry, Roland G; Nair, Govind; Papinutto, Nico; Constable, R Todd; Reich, Daniel S; Pelletier, Daniel; Rooney, William; Schwartz, Daniel; Tagge, Ian; Shinohara, Russell T; Simon, Jack H; Sicotte, Nancy L
2017-10-01
The North American Imaging in Multiple Sclerosis (NAIMS) Cooperative represents a network of 27 academic centers focused on accelerating the pace of magnetic resonance imaging (MRI) research in multiple sclerosis (MS) through idea exchange and collaboration. Recently, NAIMS completed its first project evaluating the feasibility of implementation and reproducibility of quantitative MRI measures derived from scanning a single MS patient using a high-resolution 3T protocol at seven sites. The results showed the feasibility of utilizing advanced quantitative MRI measures in multicenter studies and demonstrated the importance of careful standardization of scanning protocols, central image processing, and strategies to account for inter-site variability.
Dai, Qiong; Cheng, Jun-Hu; Sun, Da-Wen; Zeng, Xin-An
2015-01-01
There is increased interest in the application of hyperspectral imaging (HSI) to assessing food quality, safety, and authenticity. HSI provides an abundance of spatial and spectral information from foods by combining spectroscopy and imaging, resulting in hundreds of contiguous wavebands for each spatial position of a food sample, a situation known as the curse of dimensionality. It is therefore desirable to employ feature selection algorithms to decrease the computational burden and increase prediction accuracy, which is especially relevant for the development of online applications. Recently, a variety of feature selection algorithms have been proposed; they can be categorized into three groups based on the search strategy, namely complete search, heuristic search and random search. This review introduces the fundamentals of each algorithm, illustrates its applications in hyperspectral data analysis in the food field, and discusses the advantages and disadvantages of these algorithms. It is hoped that this review will provide a guideline for feature selection and data processing in the future development of hyperspectral imaging techniques for foods.
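As one example of the heuristic-search group, greedy sequential forward selection of wavebands can be sketched as follows. Scoring candidate bands by least-squares fit quality is an assumption made for illustration, not a method from the review.

```python
import numpy as np

def forward_select(X, y, k):
    """Greedy sequential forward selection: repeatedly add the band that
    most improves a least-squares fit of y from the chosen bands."""
    chosen = []
    remaining = list(range(X.shape[1]))
    for _ in range(k):
        def score(j):
            cols = X[:, chosen + [j]]
            resid = y - cols @ np.linalg.lstsq(cols, y, rcond=None)[0]
            return -np.sum(resid ** 2)   # higher is better
        best = max(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 8))                  # 8 candidate wavebands
y = 3 * X[:, 2] + 0.01 * rng.standard_normal(50)  # only band 2 matters
print(forward_select(X, y, 1))  # [2]
```

Greedy schemes like this are cheap but can miss band combinations that only help jointly, which is the usual argument for complete or random search.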
NASA Astrophysics Data System (ADS)
Ströhl, Florian; Wong, Hovy H. W.; Holt, Christine E.; Kaminski, Clemens F.
2018-01-01
Fluorescence anisotropy imaging microscopy (FAIM) measures the depolarization properties of fluorophores to deduce molecular changes in their environment. For successful FAIM, several design principles have to be considered, and a thorough system-specific calibration protocol is paramount. One important calibration parameter is the G factor, which describes the system-induced errors for different polarization states of light. The determination and calibration of the G factor are discussed in detail in this article. We present a novel measurement strategy that is particularly suitable for FAIM with high-numerical-aperture objectives operating in TIRF illumination mode. The method makes use of evanescent fields that excite the sample with a polarization direction perpendicular to the image plane. Furthermore, we have developed an ImageJ/Fiji plugin, AniCalc, for FAIM data processing. We demonstrate the capabilities of our TIRF-FAIM system by measuring β-actin polymerization in human embryonic kidney cells and in retinal neurons.
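The calibrated G factor feeds into the standard steady-state anisotropy formula, r = (I_par - G*I_perp) / (I_par + 2*G*I_perp); a minimal per-pixel sketch (the function name and sample intensities are illustrative):

```python
def anisotropy(I_par, I_perp, G=1.0):
    """Steady-state fluorescence anisotropy with G-factor correction:
    r = (I_par - G*I_perp) / (I_par + 2*G*I_perp), where G compensates
    the detection system's polarization-dependent sensitivity."""
    return (I_par - G * I_perp) / (I_par + 2 * G * I_perp)

# with G properly calibrated, equal channel intensities give r = 0
print(anisotropy(100.0, 100.0, G=1.0))          # 0.0
print(round(anisotropy(120.0, 80.0, G=1.1), 4))  # 0.1081
```

An error in G shifts every anisotropy value in the image, which is why the article devotes a dedicated measurement strategy to it.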
Bayer Demosaicking with Polynomial Interpolation.
Wu, Jiaji; Anisetti, Marco; Wu, Wei; Damiani, Ernesto; Jeon, Gwanggil
2016-08-30
Demosaicking is a digital image processing step that reconstructs full-color images from the incomplete color samples delivered by an image sensor. It is an unavoidable process for many devices incorporating a camera sensor (e.g., mobile phones, tablets). In this paper, we introduce a new demosaicking algorithm, polynomial interpolation-based demosaicking (PID). Our method makes three contributions: calculation of error predictors, edge classification based on color differences, and a refinement stage using a weighted-sum strategy. Our new predictors are generated on the basis of polynomial interpolation and can be used as a sound alternative to predictors obtained by bilinear or Laplacian interpolation. In this paper we show how our predictors can be combined according to the proposed edge classifier. After populating the three color channels, a refinement stage is applied to enhance image quality and reduce demosaicking artifacts. Our experimental results show that the proposed method substantially improves over existing demosaicking methods in terms of objective performance (CPSNR, S-CIELAB ΔE, and FSIM) and visual performance.
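PID itself is not spelled out in this abstract, but the bilinear baseline it is compared against can be sketched in a few lines; the RGGB layout, helper names, and the flat test scene are assumptions of this sketch:

```python
import numpy as np

def conv3_same(a, k):
    """'Same'-size 3x3 convolution with zero padding."""
    p = np.pad(a, 1)
    out = np.zeros_like(a)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + a.shape[0], j:j + a.shape[1]]
    return out

def bilinear_demosaic(raw):
    """Bilinear demosaicking of an RGGB Bayer mosaic: each missing sample is
    the distance-weighted average of its neighbours of the same colour."""
    h, w = raw.shape
    out = np.zeros((h, w, 3))
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True                          # red sites
    masks[0::2, 1::2, 1] = masks[1::2, 0::2, 1] = True   # green sites
    masks[1::2, 1::2, 2] = True                          # blue sites
    kern = np.array([[0.25, 0.5, 0.25], [0.5, 1.0, 0.5], [0.25, 0.5, 0.25]])
    for c in range(3):
        num = conv3_same(np.where(masks[..., c], raw, 0.0), kern)
        den = conv3_same(masks[..., c].astype(float), kern)
        out[..., c] = num / np.maximum(den, 1e-9)
    return out

flat = np.full((8, 8), 0.5)             # a uniform grey scene
rgb = bilinear_demosaic(flat)
print(rgb.shape, float(rgb.min()), float(rgb.max()))   # uniform grey recovered
```

PID replaces the fixed bilinear weights above with predictors derived from polynomial interpolation plus an edge classifier, so this baseline is the natural reference point for its reported CPSNR gains.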
Image-guided endobronchial ultrasound
NASA Astrophysics Data System (ADS)
Higgins, William E.; Zang, Xiaonan; Cheirsilp, Ronnarit; Byrnes, Patrick; Kuhlengel, Trevor; Bascom, Rebecca; Toth, Jennifer
2016-03-01
Endobronchial ultrasound (EBUS) is now recommended as a standard procedure for in vivo verification of extraluminal diagnostic sites during cancer-staging bronchoscopy. Yet, physicians vary considerably in their skills at using EBUS effectively. Regarding existing bronchoscopy guidance systems, studies have shown their effectiveness in the lung-cancer management process. With such a system, a patient's X-ray computed tomography (CT) scan is used to plan a procedure to regions of interest (ROIs). This plan is then used during follow-on guided bronchoscopy. Recent clinical guidelines for lung cancer, however, also dictate using positron emission tomography (PET) imaging for identifying suspicious ROIs and aiding in the cancer-staging process. While researchers have attempted to use guided bronchoscopy systems in tandem with PET imaging and EBUS, no true EBUS-centric guidance system exists. We now propose a full multimodal image-based methodology for guiding EBUS. The complete methodology involves two components: 1) a procedure planning protocol that gives bronchoscope movements appropriate for live EBUS positioning; and 2) a guidance strategy and associated system graphical user interface (GUI) designed for image-guided EBUS. We present results demonstrating the operation of the system.
NASA Astrophysics Data System (ADS)
Fischer, Peter; Schuegraf, Philipp; Merkle, Nina; Storch, Tobias
2018-04-01
This paper presents a hybrid evolutionary algorithm for fast intensity-based matching between satellite imagery from SAR and very high-resolution (VHR) optical sensor systems. The precise and accurate co-registration of image time series and images of different sensors is a key task in multi-sensor image processing scenarios. The necessary preprocessing step of image matching and tie-point detection is divided into a search problem and a similarity measurement. Within this paper we evaluate the use of an evolutionary search strategy for establishing the spatial correspondence between satellite imagery of optical and radar sensors. The aim of the proposed algorithm is to decrease the computational costs during the search process by formulating the search as an optimization problem. Based upon the canonical evolutionary algorithm, the proposed algorithm is adapted for intensity-based matching of SAR/optical imagery. Extensions such as hybridization (e.g., local search) are drawn upon to lower the number of objective-function calls and to refine the result. The algorithm significantly decreases the computational costs whilst finding the optimal solution in a reliable way.
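The search-as-optimization idea can be illustrated with a toy elitist evolutionary search for an integer shift maximizing normalized cross-correlation; the population size, mutation range, and synthetic scene are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def ncc(a, b):
    """Normalized cross-correlation, the intensity-based similarity measure."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def evolve_shift(ref, mov, pop=60, gens=40, span=8):
    """Elitist evolutionary search for the integer shift (dy, dx) maximizing
    NCC over the image overlap, instead of exhausting all (2*span+1)**2 cells."""
    h, w = ref.shape

    def fitness(s):
        dy, dx = s
        ra = ref[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
        mb = mov[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
        return ncc(ra, mb)

    clamp = lambda v: int(max(-span, min(span, v)))
    P = [(int(dy), int(dx))
         for dy, dx in rng.integers(-span, span + 1, size=(pop, 2))]
    for _ in range(gens):
        elite = sorted(P, key=fitness, reverse=True)[:pop // 2]   # selection
        kids = [(clamp(dy + rng.integers(-2, 3)), clamp(dx + rng.integers(-2, 3)))
                for dy, dx in elite]                              # mutation
        P = elite + kids
    best = max(P, key=fitness)
    return best, fitness(best)

# Synthetic pair: two overlapping crops of one smooth scene, true shift (3, 2).
raw = rng.normal(size=(70, 70))
scene = sum(np.roll(np.roll(raw, i, 0), j, 1)      # cheap 5x5 box blur
            for i in range(-2, 3) for j in range(-2, 3))
ref, mov = scene[10:40, 10:40], scene[13:43, 12:42]
best, score = evolve_shift(ref, mov)
print(best, round(score, 3))
```

A hybrid variant in the spirit of the paper would add a local hill-climbing step around the elite members to cut the number of fitness calls further.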
NASA Astrophysics Data System (ADS)
Shao, Feng; Evanschitzky, Peter; Fühner, Tim; Erdmann, Andreas
2009-10-01
This paper employs the Waveguide decomposition method as an efficient rigorous electromagnetic field (EMF) solver to investigate three dimensional mask-induced imaging artifacts in EUV lithography. The major mask diffraction induced imaging artifacts are first identified by applying the Zernike analysis of the mask nearfield spectrum of 2D lines/spaces. Three dimensional mask features like 22nm semidense/dense contacts/posts, isolated elbows and line-ends are then investigated in terms of lithographic results. After that, the 3D mask-induced imaging artifacts such as feature orientation dependent best focus shift, process window asymmetries, and other aberration-like phenomena are explored for the studied mask features. The simulation results can help lithographers to understand the causes of EUV-specific imaging artifacts and to devise illumination- and feature-dependent strategies for their compensation in the optical proximity correction (OPC) for EUV masks. Finally, an efficient approach using the Zernike analysis together with the Waveguide decomposition technique is proposed to characterize the impact of mask properties for the future OPC process.
Hainsworth, A. H.; Lee, S.; Patel, A.; Poon, W. W.; Knight, A. E.
2018-01-01
Aims The spatial resolution of light microscopy is limited by the wavelength of visible light (the ‘diffraction limit’, approximately 250 nm). Resolution of sub-cellular structures, smaller than this limit, is possible with super resolution methods such as stochastic optical reconstruction microscopy (STORM) and super-resolution optical fluctuation imaging (SOFI). We aimed to resolve subcellular structures (axons, myelin sheaths and astrocytic processes) within intact white matter, using STORM and SOFI. Methods Standard cryostat-cut sections of subcortical white matter from donated human brain tissue and from adult rat and mouse brain were labelled, using standard immunohistochemical markers (neurofilament-H, myelin-associated glycoprotein, glial fibrillary acidic protein, GFAP). Image sequences were processed for STORM (effective pixel size 8–32 nm) and for SOFI (effective pixel size 80 nm). Results In human, rat and mouse subcortical white matter, high-quality images of axonal neurofilaments, myelin sheaths and filamentous astrocytic processes were obtained. In quantitative measurements, STORM consistently underestimated the width of axons and astrocyte processes (compared with electron microscopy measurements). SOFI provided more accurate width measurements, though with somewhat lower spatial resolution than STORM. Conclusions Super resolution imaging of intact cryo-cut human brain tissue is feasible. For quantitation, STORM can under-estimate the diameters of thin fluorescent objects; SOFI is more robust. The greatest limitation for super-resolution imaging in brain sections is imposed by sample preparation. We anticipate that improved strategies to reduce autofluorescence and to enhance fluorophore performance will enable rapid expansion of this approach. PMID:28696566
Automatic detection of the inner ears in head CT images using deep convolutional neural networks
NASA Astrophysics Data System (ADS)
Zhang, Dongqing; Noble, Jack H.; Dawant, Benoit M.
2018-03-01
Cochlear implants (CIs) use electrode arrays that are surgically inserted into the cochlea to stimulate nerve endings to replace the natural electro-mechanical transduction mechanism and restore hearing for patients with profound hearing loss. Post-operatively, the CI needs to be programmed. Traditionally, this is done by an audiologist who is blind to the positions of the electrodes relative to the cochlea and relies on the patient's subjective response to stimuli. This is a trial-and-error process that can be frustratingly long (dozens of programming sessions are not unusual). To assist audiologists, we have proposed what we call IGCIP for image-guided cochlear implant programming. In IGCIP, we use image processing algorithms to segment the intra-cochlear anatomy in pre-operative CT images and to localize the electrode arrays in post-operative CTs. We have shown that programming strategies informed by image-derived information significantly improve hearing outcomes for both adults and pediatric populations. We are now aiming at deploying these techniques clinically, which requires full automation. One challenge we face is the lack of standard image acquisition protocols. The content of the image volumes we need to process thus varies greatly and visual inspection and labelling is currently required to initialize processing pipelines. In this work we propose a deep learning-based approach to automatically detect if a head CT volume contains two ears, one ear, or no ear. Our approach has been tested on a data set that contains over 2,000 CT volumes from 153 patients and we achieve an overall 95.97% classification accuracy.
Matrix-Assisted Laser Desorption Ionization Imaging Mass Spectrometry: In Situ Molecular Mapping
Angel, Peggi M.; Caprioli, Richard M.
2013-01-01
Matrix-assisted laser desorption ionization imaging mass spectrometry (IMS) is a relatively new imaging modality that allows mapping of a wide range of biomolecules within a thin tissue section. The technology uses a laser beam to directly desorb and ionize molecules from discrete locations on the tissue that are subsequently recorded in a mass spectrometer. IMS is distinguished by the ability to directly measure molecules in situ ranging from small metabolites to proteins, reporting hundreds to thousands of expression patterns from a single imaging experiment. This article reviews recent advances in IMS technology, applications, and experimental strategies that allow it to significantly aid in the discovery and understanding of molecular processes in biological and clinical samples. PMID:23259809
Sinkó, József; Kákonyi, Róbert; Rees, Eric; Metcalf, Daniel; Knight, Alex E.; Kaminski, Clemens F.; Szabó, Gábor; Erdélyi, Miklós
2014-01-01
Localization-based super-resolution microscopy image quality depends on several factors, such as dye choice and labelling strategy, microscope quality, user-defined parameters such as frame rate and frame number, and the image processing algorithm. Experimental optimization of these parameters can be time-consuming and expensive, so we present TestSTORM, a simulator that can be used to optimize these steps. TestSTORM users can select from among four different structures with specific pattern, dye and acquisition parameters. Example results are shown, and the results of the vesicle pattern are compared with experimental data. Moreover, image stacks can be generated for further evaluation using localization algorithms, offering a tool for further software development. PMID:24688813
Atlas-based segmentation of brainstem regions in neuromelanin-sensitive magnetic resonance images
NASA Astrophysics Data System (ADS)
Puigvert, Marc; Castellanos, Gabriel; Uranga, Javier; Abad, Ricardo; Fernández-Seara, María. A.; Pastor, Pau; Pastor, María. A.; Muñoz-Barrutia, Arrate; Ortiz de Solórzano, Carlos
2015-03-01
We present a method for the automatic delineation of two neuromelanin rich brainstem structures -substantia nigra pars compacta (SN) and locus coeruleus (LC)- in neuromelanin sensitive magnetic resonance images of the brain. The segmentation method uses a dynamic multi-image reference atlas and a pre-registration atlas selection strategy. To create the atlas, a pool of 35 images of healthy subjects was pair-wise pre-registered and clustered in groups using an affinity propagation approach. Each group of the atlas is represented by a single exemplar image. Each new target image to be segmented is registered to the exemplars of each cluster. Then all the images of the highest performing clusters are enrolled into the final atlas, and the results of the registration with the target image are propagated using a majority voting approach. All registration processes combined a two-stage affine algorithm and an elastic B-spline algorithm to account for global positioning, region selection and local anatomical differences. In this paper, we present the algorithm, with emphasis on the atlas selection method and the registration scheme. We evaluate the performance of the atlas selection strategy using 35 healthy subjects and 5 Parkinson's disease patients. Then, we quantified the volume and contrast ratio of the neuromelanin signal of these structures in 47 normal subjects and 40 Parkinson's disease patients to confirm that this method can detect the loss of neuromelanin-containing neurons in Parkinson's disease patients and could eventually be used for the early detection of SN and LC damage.
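The majority-voting label fusion step can be sketched compactly; the array shapes and names are illustrative, not taken from the paper:

```python
import numpy as np

def majority_vote(labelmaps):
    """Fuse propagated atlas segmentations voxel-wise: each atlas casts one
    vote per voxel and the most frequent label wins (ties go to the lower label)."""
    stack = np.stack(labelmaps)                 # (n_atlases, ...) integer labels
    n_labels = int(stack.max()) + 1
    votes = np.stack([(stack == lab).sum(axis=0) for lab in range(n_labels)])
    return votes.argmax(axis=0)

# Three toy 2x2 "propagated atlas labelings" of the same target image:
a = np.array([[0, 1], [2, 2]])
b = np.array([[0, 1], [1, 2]])
c = np.array([[0, 0], [1, 2]])
print(majority_vote([a, b, c]))    # [[0 1] [1 2]]
```

In the paper, the votes come only from atlases in the highest-performing clusters after registration to the target image.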
On an image reconstruction method for ECT
NASA Astrophysics Data System (ADS)
Sasamoto, Akira; Suzuki, Takayuki; Nishimura, Yoshihiro
2007-04-01
An image obtained by eddy current testing (ECT) is a blurred version of the original flaw shape. In order to reconstruct a fine flaw image, a new image reconstruction method has been proposed. This method is based on the assumption that a very simple relationship between the measured data and the source can be described by a convolution of a response function with the flaw shape. This assumption leads to a simple inverse analysis method based on deconvolution. In this method, the point spread function (PSF) and line spread function (LSF) play a key role in the deconvolution processing. This study proposes a simple data processing procedure to determine the PSF and LSF from ECT data of a machined hole and a line flaw. In order to verify its validity, ECT data for a SUS316 plate (200x200x10 mm) with an artificial machined hole and a notch flaw were acquired by differential coil-type sensors (produced by ZETEC Inc.). Those data were analyzed by the proposed method. The proposed method restored a sharp image of discrete multiple holes from data in which the holes interfered with one another. The estimated width of the line flaw was also much improved compared with the original experimental data. Although the proposed inverse analysis strategy is simple and easy to implement, its validity for hole and line flaws has been demonstrated by many results in which much finer images than the originals were reconstructed.
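The deconvolution step can be illustrated with a standard frequency-domain Wiener filter; the abstract does not specify the exact inversion used, so the SNR-based regularization and the synthetic Gaussian spread function below are illustrative assumptions:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, snr=100.0):
    """Frequency-domain Wiener deconvolution: given the measured signal and
    a (line/point) spread function, estimate the underlying flaw profile."""
    n = len(blurred)
    H = np.fft.fft(psf, n)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft(np.fft.fft(blurred) * G))

n = 128
x = np.arange(n)
dist = np.minimum(x, n - x)                    # circular distance from index 0
psf = np.exp(-0.5 * (dist / 4.0) ** 2)
psf /= psf.sum()                               # assumed Gaussian spread function
flaw = np.zeros(n); flaw[60:64] = 1.0          # the "true" narrow flaw
blurred = np.real(np.fft.ifft(np.fft.fft(flaw) * np.fft.fft(psf)))
restored = wiener_deconvolve(blurred, psf)
# Deconvolution pulls the profile back toward the true flaw shape:
print(np.linalg.norm(restored - flaw) < np.linalg.norm(blurred - flaw))  # True
```

In the paper's setting, the spread functions are estimated from ECT data of a machined hole and a line flaw rather than assumed Gaussian.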
Munn, Zachary; Jordan, Zoe
When presenting to an imaging department, the person who is to be imaged is often in a vulnerable state, and out of their comfort zone. It is the role of the medical imaging technician to produce a high quality image and to facilitate patient care throughout the imaging process. Qualitative research is necessary to better inform the medical imaging technician and to help them understand the experience of the person being imaged. Issues that have been identified in the literature include fear, claustrophobia, dehumanisation, and an uncomfortable or unusual experience. There is now a small but worthwhile qualitative literature base focusing on the patient experience in high technology imaging, but no current qualitative synthesis of this literature. It is therefore timely and worthwhile to produce a systematic review to identify and summarise the existing literature exploring the patient experience of high technology imaging. The objective was to identify the patient experience of high technology medical imaging. Studies of a qualitative design that explored the phenomenon of interest, the patient experience of high technology medical imaging, were considered for inclusion; participants included anyone who had undergone one of these procedures. The search strategy aimed to find both published and unpublished studies and was conducted from June to September 2010. No time limits were imposed on the search. A three-step search strategy was utilised in this review. All studies that met the criteria were selected for retrieval. They were then assessed by two independent reviewers for methodological validity prior to inclusion in the review, using standardised critical appraisal instruments from the Joanna Briggs Institute Qualitative Assessment and Review Instrument.
Data were extracted from papers included in the review using the standardised data extraction tool from the Joanna Briggs Institute Qualitative Assessment and Review Instrument. Research findings were pooled using the Qualitative Assessment and Review Instrument. Following the search and critical appraisal processes, 15 studies were identified that were deemed of suitable quality to be included in the review. From these 15 studies, 127 findings were extracted, forming 33 categories and 11 synthesised findings. These synthesised findings related to the patient experience, the emotions patients felt (whether negative or positive), and the need for support and information, and highlighted the importance of imaging to the patient. The synthesised findings in this review highlight the diverse, unique and challenging ways in which people experience imaging with MRI and CT scanners. All health professionals involved in imaging need to be aware of the different ways each patient may experience imaging, and provide them with ongoing support and information. The implications for practice are derived directly from the results of the meta-synthesis and each of the 11 synthesised findings. There is still scope for further methodologically sound qualitative studies in this field, particularly in nuclear medicine imaging and positron emission tomography. Further studies may be conducted in certain patient groups and in certain age ranges; no studies were found assessing the experience of children undergoing high technology imaging.
A 3D Reconstruction Strategy of Vehicle Outline Based on Single-Pass Single-Polarization CSAR Data.
Leping Chen; Daoxiang An; Xiaotao Huang; Zhimin Zhou
2017-11-01
In the last few years, interest in circular synthetic aperture radar (CSAR) acquisitions has arisen as a consequence of the potential for 3D reconstructions over a 360° azimuth angle variation. In real-world scenarios, full 3D reconstruction of arbitrary targets needs multi-pass data, which makes the processing complex, costly, and time-consuming. In this paper, we propose a processing strategy for the 3D reconstruction of vehicles which avoids multi-pass data by introducing a priori information about the vehicle's shape. Moreover, the proposed strategy needs only single-pass single-polarization CSAR data to perform the vehicle's 3D reconstruction, which makes the processing much more economical and efficient. First, an analysis of the distribution of attributed scattering centers from a vehicle facet model is presented; the analysis shows that a smooth and continuous basic outline of the vehicle can be extracted from the peak curve of a noncoherent processing image. Second, the 3D location of the vehicle roofline is inferred from layover with empirical insets of the basic outline. Finally, the basic outline and roofline of the vehicle are used to estimate the vehicle's 3D information and constitute the vehicle's 3D outline. Processing results on simulated and measured data prove the correctness and effectiveness of the proposed strategy.
Processes in arithmetic strategy selection: a fMRI study.
Taillan, Julien; Ardiale, Eléonore; Anton, Jean-Luc; Nazarian, Bruno; Félician, Olivier; Lemaire, Patrick
2015-01-01
This neuroimaging (functional magnetic resonance imaging) study investigated neural correlates of strategy selection. Young adults performed an arithmetic task in two different conditions. In both conditions, participants had to provide estimates of two-digit multiplication problems like 54 × 78. In the choice condition, participants had to select the better of two available rounding strategies, rounding-up (RU) strategy (i.e., doing 60 × 80 = 4,800) or rounding-down (RD) strategy (i.e., doing 50 × 70 = 3,500 to estimate product of 54 × 78). In the no-choice condition, participants did not have to select strategy on each problem but were told which strategy to use; they executed RU and RD strategies each on a series of problems. Participants also had a control task (i.e., providing correct products of multiplication problems like 40 × 50). Brain activations and performance were analyzed as a function of these conditions. Participants were able to frequently choose the better strategy in the choice condition; they were also slower when they executed the difficult RU than the easier RD. Neuroimaging data showed greater brain activations in right anterior cingulate cortex (ACC), dorso-lateral prefrontal cortex (DLPFC), and angular gyrus (ANG), when selecting (relative to executing) the better strategy on each problem. Moreover, RU was associated with more parietal cortex activation than RD. These results suggest an important role of the fronto-parietal network in strategy selection and have important implications for our further understanding and modeling of the cognitive processes underlying strategy selection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Y; Medin, P; Yordy, J
2014-06-01
Purpose: To present a strategy to integrate the imaging database of a VERO unit with a treatment management system (TMS) to improve clinical workflow and consolidate image data to facilitate clinical quality control and documentation. Methods: A VERO unit is equipped with both kV and MV imaging capabilities for IGRT treatments. It has its own imaging database behind a firewall, and it has been a challenge to transfer images from this unit to a TMS in a radiation therapy clinic so that registered images can be reviewed remotely with an approval or rejection record. In this study, a software system, iPump-VERO, was developed to connect the VERO unit and the TMS in our clinic. The patient database folder on the VERO unit was mapped to a read-only folder on a file server outside the VERO firewall. The application runs on a regular computer with read access to the patient database folder. It finds the latest registered images and fuses them in one of six predefined patterns before sending them via a DICOM connection to the TMS. The residual image registration errors are overlaid on the fused image to facilitate image review. Results: The fused images of either registered kV planar images or CBCT images are fully DICOM compatible. A sentinel module is built to sense newly registered images using negligible computing resources of the VERO ExacTrac imaging computer. It takes a few seconds to fuse registered images and send them to the TMS. The whole process is automated without any human intervention. Conclusion: Transferring images over a DICOM connection is the easiest way to consolidate images of various sources in a TMS. The attending physician does not have to go to the VERO treatment console to review image registration prior to delivery. It is a useful tool for a busy clinic with a VERO unit.
ERIC Educational Resources Information Center
Woodruff, Allison; Rosenholtz, Ruth; Morrison, Julie B.; Faulring, Andrew; Pirolli, Peter
2002-01-01
Discussion of Web search strategies focuses on a comparative study of textual and graphical summarization mechanisms applied to search engine results. Suggests that thumbnail images (graphical summaries) can increase efficiency in processing results, and that enhanced thumbnails (augmented with readable textual elements) had more consistent…
Mental Images and the Modification of Learning Defects.
ERIC Educational Resources Information Center
Patten, Bernard M.
Because human memory and thought involve extremely complex processes, it is possible to employ unusual modalities and specific visual strategies for remembering and problem-solving to assist patients with memory defects. This three-part paper discusses some of the research in the field of human memory and describes practical applications of these…
Optimization of Visual Information Presentation for Visual Prosthesis.
Guo, Fei; Yang, Yuan; Gao, Yong
2018-01-01
Visual prostheses applying electrical stimulation to restore visual function for the blind have promising prospects. However, due to the low resolution, limited visual field, and low dynamic range of the visual percepts, substantial information loss occurs when presenting daily scenes. The ability to recognize objects in real-life scenarios is severely restricted for prosthetic users. To overcome these limitations, optimizing the visual information in simulated prosthetic vision has been a focus of research. This paper proposes two image processing strategies based on a salient object detection technique. The two strategies enable prosthetic implants to focus on the object of interest and suppress background clutter. Psychophysical experiments show that techniques such as foreground zooming with background clutter removal and foreground edge detection with background reduction have positive impacts on the task of object recognition in simulated prosthetic vision. By using the edge detection and zooming techniques, the two processing strategies significantly improve object recognition accuracy. We conclude that a visual prosthesis using the proposed strategies can assist the blind in improving their ability to recognize objects. The results provide effective solutions for the further development of visual prostheses.
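The foreground-zooming idea can be sketched as a crop-and-downsample operation on a saliency mask; the grid size, nearest-neighbour sampling, and toy scene are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np

def zoom_foreground(img, mask, grid=(32, 32)):
    """Foreground zooming with background removal: blank the background,
    crop the salient object's bounding box, and resample it to the
    low-resolution 'phosphene' grid by nearest-neighbour indexing."""
    ys, xs = np.nonzero(mask)
    fg = np.where(mask, img, 0.0)                       # suppress background
    crop = fg[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    gy, gx = grid
    rows = np.arange(gy) * crop.shape[0] // gy
    cols = np.arange(gx) * crop.shape[1] // gx
    return crop[np.ix_(rows, cols)]

img = np.zeros((100, 100))
img[40:60, 30:70] = 1.0        # a bright object in a dark scene
mask = img > 0.5               # stand-in for a saliency-detection mask
lowres = zoom_foreground(img, mask, grid=(8, 8))
print(lowres.shape, float(lowres.mean()))   # object now fills the whole grid
```

The second strategy, foreground edge detection with background reduction, would instead keep a thinned edge map rather than blanking the background entirely.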
Noninvasive Molecular Imaging of Disease Activity in Atherosclerosis
Aikawa, Elena; Newby, David E.; Tarkin, Jason M.; Rudd, James H.F.; Narula, Jagat; Fayad, Zahi A.
2016-01-01
Major focus has been placed on the identification of vulnerable plaques as a means of improving the prediction of myocardial infarction. However, this strategy has recently been questioned on the basis that the majority of these individual coronary lesions do not in fact go on to cause clinical events. Attention is, therefore, shifting to alternative imaging modalities that might provide a more complete pan-coronary assessment of the atherosclerotic disease process. These include markers of disease activity with the potential to discriminate between patients with stable burnt-out disease that is no longer metabolically active and those with active atheroma, faster disease progression, and increased risk of infarction. This review will examine how novel molecular imaging approaches can provide such assessments, focusing on inflammation and microcalcification activity, the importance of these processes to coronary atherosclerosis, and the advantages and challenges posed by these techniques. PMID:27390335
NASA Astrophysics Data System (ADS)
Day-Lewis, F. D.
2014-12-01
Geophysical imaging (e.g., electrical, radar, seismic) can provide valuable information for the characterization of hydrologic properties and monitoring of hydrologic processes, as evidenced in the rapid growth of literature on the subject. Geophysical imaging has been used for monitoring tracer migration and infiltration, mapping zones of focused groundwater/surface-water exchange, and verifying emplacement of amendments for bioremediation. Despite the enormous potential for extraction of hydrologic information from geophysical images, there also is potential for misinterpretation and over-interpretation. These concerns are particularly relevant when geophysical results are used within quantitative frameworks, e.g., conversion to hydrologic properties through petrophysical relations, geostatistical estimation and simulation conditioned to geophysical inversions, and joint inversion. We review pitfalls to interpretation associated with limited image resolution, spatially variable image resolution, incorrect data weighting, errors in the timing of measurements, temporal smearing resulting from changes during data acquisition, support-volume/scale effects, and incorrect assumptions or approximations involved in modeling geophysical or other jointly inverted data. A series of numerical and field-based examples illustrate these potential problems. Our goal in this talk is to raise awareness of common pitfalls and present strategies for recognizing and avoiding them.
Celler, Katherine; Fujita, Miki; Kawamura, Eiko; Ambrose, Chris; Herburger, Klaus; Wasteneys, Geoffrey O.
2016-01-01
Microtubules are required throughout plant development for a wide variety of processes, and different strategies have evolved to visualize and analyze them. This chapter provides specific methods that can be used to analyze microtubule organization and dynamic properties in plant systems and summarizes the advantages and limitations for each technique. We outline basic methods for preparing samples for immunofluorescence labelling, including an enzyme-based permeabilization method, and a freeze-shattering method, which generates microfractures in the cell wall to provide antibodies access to cells in cuticle-laden aerial organs such as leaves. We discuss current options for live cell imaging of MTs with fluorescently tagged proteins (FPs), and provide chemical fixation, high pressure freezing/freeze substitution, and post-fixation staining protocols for preserving MTs for transmission electron microscopy and tomography. PMID:26498784
Implementation of Steiner point of fuzzy set.
Liang, Jiuzhen; Wang, Dejiang
2014-01-01
This paper deals with the implementation of the Steiner point of a fuzzy set. Some definitions and properties of the Steiner point are investigated and extended to fuzzy sets, and efficient methods for computing it are established. Two strategies are proposed. The first forms a linear combination of the Steiner points computed from a series of crisp α-cut sets of the fuzzy set. The second is an approximate method that seeks the single α-cut set that best approximates the fuzzy set. The stability of the Steiner point of a fuzzy set is also analyzed. Experiments on image processing are given in which the two methods are applied to compute the Steiner point of a fuzzy image; each strategy shows its own advantages.
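The first strategy (a linear combination over α-cuts) can be sketched as follows. This is an illustrative implementation under assumptions, not the paper's algorithm: the Steiner point of each crisp convex α-cut is approximated by discretizing the support-function integral s(K) = (1/π) ∫ h_K(u) u du over the unit circle, and the nested-square fuzzy set and membership weights are invented for the example.

```python
import numpy as np

def steiner_point(vertices, n=20000):
    """Approximate the Steiner point of a planar convex body given its vertices,
    by discretizing s(K) = (1/pi) * integral over the unit circle of h_K(u)*u."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    U = np.column_stack([np.cos(theta), np.sin(theta)])    # unit directions
    h = (U @ np.asarray(vertices, float).T).max(axis=1)    # support function h_K(u)
    return (h[:, None] * U).sum(axis=0) * (2.0 * np.pi / n) / np.pi

def fuzzy_steiner_point(alpha_cuts, memberships):
    """Linear combination of the Steiner points of crisp alpha-cut polygons,
    weighted by (normalized) membership levels."""
    w = np.asarray(memberships, float)
    w = w / w.sum()
    pts = np.array([steiner_point(cut) for cut in alpha_cuts])
    return w @ pts

# a toy fuzzy set whose alpha-cuts are nested squares centred at (2, 3)
centre = np.array([2.0, 3.0])
unit_square = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]], float)
cuts = [centre + s * unit_square for s in (1.0, 0.6, 0.3)]
print(fuzzy_steiner_point(cuts, [0.3, 0.6, 0.9]))  # ~ [2. 3.]
```

Because the Steiner point is translation-equivariant and each cut is symmetric about (2, 3), every α-cut contributes (2, 3), so the combination returns the centre regardless of the weights.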
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salter, B.
2016-06-15
In this interactive session, lung SBRT patient cases will be presented to highlight real-world considerations for ensuring safe and accurate treatment delivery. An expert panel of speakers will discuss challenges specific to lung SBRT, including patient selection, patient immobilization techniques, 4D CT simulation and respiratory motion management, target delineation for treatment planning, online treatment alignment, and established prescription regimens and OAR dose limits. Practical examples of cases, including the patient flow through the clinical process, are presented, and audience participation will be encouraged. This panel session is designed to provide case demonstration and review for lung SBRT in terms of (1) clinical appropriateness in patient selection, (2) strategies for simulation, including 4D and respiratory motion management, (3) applying multiple imaging modalities (4D CT, MRI, PET) for tumor volume delineation and motion extent, and (4) image guidance in treatment delivery. Learning Objectives: Understand the established requirements for patient selection in lung SBRT. Become familiar with the various immobilization strategies for lung SBRT, including technology for respiratory motion management. Understand the benefits and pitfalls of applying multiple imaging modalities (4D CT, MRI, PET) for tumor volume delineation and motion extent determination in lung SBRT. Understand established prescription regimens and OAR dose limits.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benedict, S.
2016-06-15
In this interactive session, lung SBRT patient cases will be presented to highlight real-world considerations for ensuring safe and accurate treatment delivery. An expert panel of speakers will discuss challenges specific to lung SBRT, including patient selection, patient immobilization techniques, 4D CT simulation and respiratory motion management, target delineation for treatment planning, online treatment alignment, and established prescription regimens and OAR dose limits. Practical examples of cases, including the patient flow through the clinical process, are presented, and audience participation will be encouraged. This panel session is designed to provide case demonstration and review for lung SBRT in terms of (1) clinical appropriateness in patient selection, (2) strategies for simulation, including 4D and respiratory motion management, (3) applying multiple imaging modalities (4D CT, MRI, PET) for tumor volume delineation and motion extent, and (4) image guidance in treatment delivery. Learning Objectives: Understand the established requirements for patient selection in lung SBRT. Become familiar with the various immobilization strategies for lung SBRT, including technology for respiratory motion management. Understand the benefits and pitfalls of applying multiple imaging modalities (4D CT, MRI, PET) for tumor volume delineation and motion extent determination in lung SBRT. Understand established prescription regimens and OAR dose limits.
MO-E-BRB-00: PANEL DISCUSSION: SBRT/SRS Case Studies - Lung
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
2016-06-15
In this interactive session, lung SBRT patient cases will be presented to highlight real-world considerations for ensuring safe and accurate treatment delivery. An expert panel of speakers will discuss challenges specific to lung SBRT, including patient selection, patient immobilization techniques, 4D CT simulation and respiratory motion management, target delineation for treatment planning, online treatment alignment, and established prescription regimens and OAR dose limits. Practical examples of cases, including the patient flow through the clinical process, are presented, and audience participation will be encouraged. This panel session is designed to provide case demonstration and review for lung SBRT in terms of (1) clinical appropriateness in patient selection, (2) strategies for simulation, including 4D and respiratory motion management, (3) applying multiple imaging modalities (4D CT, MRI, PET) for tumor volume delineation and motion extent, and (4) image guidance in treatment delivery. Learning Objectives: Understand the established requirements for patient selection in lung SBRT. Become familiar with the various immobilization strategies for lung SBRT, including technology for respiratory motion management. Understand the benefits and pitfalls of applying multiple imaging modalities (4D CT, MRI, PET) for tumor volume delineation and motion extent determination in lung SBRT. Understand established prescription regimens and OAR dose limits.
WE-H-207B-04: Strategies for Adaptive RT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, O.
2016-06-15
In recent years, steady progress has been made towards the implementation of MRI in external beam radiation therapy for processes ranging from treatment simulation to in-room guidance. Novel procedures relying mostly on MR data are currently implemented in the clinic. This session will cover topics such as (a) commissioning and quality control of in-room MR imagers and simulators specific to RT, (b) treatment planning requirements, constraints and challenges when dealing with various MR data, (c) quantification of organ motion with an emphasis on treatment delivery guidance, and (d) MR-driven strategies for adaptive RT workflows. The content of the session was chosen to address both educational and practical key aspects of MR guidance. Learning Objectives: Gain a good understanding of the MR testing recommended for in-room MR imaging, as well as image data validation for the RT chain (e.g., image transfer, filtering for consistency, spatial accuracy, task-specific manipulation). Become familiar with MR-based planning procedures: motivation, core workflow requirements, current status, challenges. Gain an overview of current methods for the quantification of organ motion. Discuss approaches for adaptive treatment planning and delivery. Disclosure: T. Stanescu - license agreement with Modus Medical Devices to develop a phantom for the quantification of MR image system-related distortions.
Recent advances in photoluminescence detection of fingerprints.
Menzel, E R
2001-10-02
Photoluminescence detection of latent fingerprints has over the last quarter century brought about a new level of fingerprint detection sensitivity. The current state of the art is briefly reviewed to set the stage for upcoming new fingerprint processing strategies. These are designed for suppression of background fluorescence from articles holding latent prints, an often serious problem. The suppression of the background involves time-resolved imaging, which is dealt with from the perspective of instrumentation as well as the design of fingerprint treatment strategies. These focus on lanthanide chelates, nanocrystals, and nanocomposites functionalized to label fingerprints.
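The time-resolved imaging idea above exploits the fact that ordinary background fluorescence decays in nanoseconds while lanthanide-chelate labels luminesce for microseconds to milliseconds, so opening the detector after a short delay rejects the background. The sketch below illustrates the principle with assumed lifetimes, amplitudes, and gate timings; the numbers are illustrative, not measured values from this review.

```python
import numpy as np

# Illustrative (assumed) parameters: background fluorescence decays in ~20 ns,
# a lanthanide-chelate fingerprint label decays in ~500 us, and the background
# starts a million times brighter than the print.
tau_bg, tau_label = 20e-9, 500e-6
A_bg, A_label = 1e6, 1.0

def gated_counts(amplitude, tau, delay, gate):
    """Signal collected by integrating A*exp(-t/tau) from delay to delay+gate."""
    return amplitude * tau * (np.exp(-delay / tau) - np.exp(-(delay + gate) / tau))

delay, gate = 1e-6, 1e-3    # open the detector 1 us after the excitation pulse
contrast_cw = (A_label * tau_label) / (A_bg * tau_bg)   # ungated: integrate all t
contrast_gated = gated_counts(A_label, tau_label, delay, gate) / \
                 gated_counts(A_bg, tau_bg, delay, gate)
print(contrast_cw, contrast_gated)
```

Without gating the background dominates (print-to-background contrast well below 1); with a 1 µs delay the fast-decaying background has vanished while the lanthanide signal is nearly untouched, and the contrast improves by many orders of magnitude.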
Image Theory: Policies, Goals, Strategies and Tactics in Decision Making.
1986-03-01
Harvard Business Review, July/August, 49-61. Mintzberg, H., Raisinghani, D., & Théorêt, A. (1976). The structure of unstructured decision processes. Technical Report 86-3. Keywords: decision making, doubt, policies, uncertainty, subjective probability, tactics.
Herrera, Pedro Javier; Pajares, Gonzalo; Guijarro, Maria; Ruz, José J.; Cruz, Jesús M.; Montes, Fernando
2009-01-01
This paper describes a novel feature-based stereovision matching process based on a pair of omnidirectional images of forest stands acquired with a stereovision sensor equipped with fish-eye lenses. The stereo analysis problem consists of the following steps: image acquisition, camera modelling, feature extraction, image matching and depth determination. Once the depths of significant points on the trees are obtained, the growing stock volume can be estimated by considering the geometrical camera modelling, which is the final goal. The key steps are feature extraction and image matching, and this paper is devoted solely to these two. At the first stage, a segmentation process extracts the trunks, which are the regions used as features, where each feature is identified through a set of attributes and properties useful for matching. In the second step the features are matched by applying four well-known matching constraints: epipolar, similarity, ordering and uniqueness. The combination of the segmentation and matching processes for this specific kind of sensor makes the main contribution of the paper. The method is tested with satisfactory results and compared against the human expert criterion. PMID:22303134
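The similarity, ordering and uniqueness constraints from the abstract above can be sketched with a toy alignment-based matcher. This is a minimal illustration under assumptions, not the paper's method: features are reduced to small descriptor vectors, similarity is plain Euclidean distance, ordering and uniqueness fall out of a monotone sequence alignment, and epipolar pre-filtering is assumed to have happened already. The skip penalty is an invented parameter.

```python
import numpy as np

def match_trunks(left, right, skip_penalty=0.5):
    """Order-preserving, one-to-one matching of feature descriptors.
    Similarity = descriptor distance; ordering and uniqueness are enforced
    by the monotone alignment DP itself."""
    left, right = np.atleast_2d(left), np.atleast_2d(right)
    L, R = len(left), len(right)
    cost = np.linalg.norm(left[:, None, :] - right[None, :, :], axis=2)
    dp = np.zeros((L + 1, R + 1))
    dp[1:, 0] = skip_penalty * np.arange(1, L + 1)
    dp[0, 1:] = skip_penalty * np.arange(1, R + 1)
    for i in range(1, L + 1):
        for j in range(1, R + 1):
            dp[i, j] = min(dp[i - 1, j] + skip_penalty,      # leave left unmatched
                           dp[i, j - 1] + skip_penalty,      # leave right unmatched
                           dp[i - 1, j - 1] + cost[i - 1, j - 1])  # match pair
    pairs, i, j = [], L, R                                    # backtrack
    while i > 0 and j > 0:
        if dp[i, j] == dp[i - 1, j - 1] + cost[i - 1, j - 1]:
            pairs.append((i - 1, j - 1)); i, j = i - 1, j - 1
        elif dp[i, j] == dp[i - 1, j] + skip_penalty:
            i -= 1
        else:
            j -= 1
    return pairs[::-1]

print(match_trunks([[0.0], [1.0], [2.0]], [[0.1], [1.1], [2.1]]))
# -> [(0, 0), (1, 1), (2, 2)]
```

Because matches may only advance monotonically through both feature sequences, the ordering constraint is satisfied by construction, and each feature is consumed at most once, giving uniqueness.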
Neural bases of a specific strategy for visuospatial processing in rugby players.
Sekiguchi, Atsushi; Yokoyama, Satoru; Kasahara, Satoshi; Yomogida, Yukihito; Takeuchi, Hikaru; Ogawa, Takeshi; Taki, Yasuyuki; Niwa, Shin-Ichi; Kawashima, Ryuta
2011-10-01
Rugby is one of the most tactically complex sports. Rugby coaching theory suggests that rugby players need to possess various cognitive abilities. A previous study claimed that rugby players have high visuospatial awareness, which is induced by a strategy described as taking a "bird's eye view." To examine if there were differential cortical networks related to visuospatial processing tasks among top-level rugby players and control novices, we compared brain activities during a visuospatial processing task between 20 male top-level rugby players (Top) and 20 control novice males (Novice) using functional magnetic resonance imaging (fMRI). To avoid the effect of differential behavioral performances on brain activation, we recruited novices whose visuospatial ability was expected to match that of the rugby players. We adopted a 3-D mental rotation task during fMRI scanning as a visuospatial processing task. Significantly greater activations from baseline were observed for the Top group than for the Novice group in the right superior parietal lobe and lateral occipital cortex. Significantly greater deactivations from baseline were observed for the Top group than for the Novice group in the right medial prefrontal cortex. The discrepancy between psychobehavioral outputs and the fMRI results suggested the existence of a cognitive strategy among top-level rugby players that differs from that among control novices. The greater activation of the right superior parietal lobe and lateral occipital cortex in top-level rugby players suggested a strategy involving visuospatial cognitive processing with respect to the bird's eye view. In addition, the right medial prefrontal cortex is known to be a part of the default mode networks, suggesting an additional cognitive load for the Top group when using the bird's-eye-view strategy. This further supported the existence of a specific cognitive strategy among top-level rugby players.
Monitoring eye movements to investigate the picture superiority effect in spatial memory.
Cattaneo, Zaira; Rosen, Mitchell; Vecchi, Tomaso; Pelz, Jeff B
2008-01-01
Spatial memory is usually better for iconic than for verbal material. Our aim was to assess whether this effect is related to the way iconic and verbal targets are viewed when people have to memorize their locations. Eye movements were recorded while participants memorized the locations of images or words. Images received fewer, but longer, gazes than words. Longer gazes on images might reflect greater attention devoted to them, due to their higher sensorial distinctiveness and/or the generation, for images, of an additional phonological code beyond the immediately available visual code. We found that words were scanned mainly from left to right, while a more heterogeneous scanning strategy characterized the encoding of images. This suggests that iconic configurations tend to be maintained as global integrated representations, in which all the item/location pairs are simultaneously present, whilst verbal configurations are maintained through more sequential processes.
Camera sensor arrangement for crop/weed detection accuracy in agronomic images.
Romeo, Juan; Guerrero, José Miguel; Montalvo, Martín; Emmi, Luis; Guijarro, María; Gonzalez-de-Santos, Pablo; Pajares, Gonzalo
2013-04-02
In Precision Agriculture, images coming from camera-based sensors are commonly used for weed identification and crop line detection, either to apply specific treatments or for vehicle guidance purposes. Accuracy of identification and detection is an important issue to be addressed in image processing. There are two main types of parameters affecting the accuracy of the images, namely: (a) extrinsic, related to the sensor's positioning in the tractor; (b) intrinsic, related to the sensor specifications, such as CCD resolution, focal length or iris aperture, among others. Moreover, in agricultural applications, the uncontrolled illumination, existing in outdoor environments, is also an important factor affecting the image accuracy. This paper is exclusively focused on two main issues, always with the goal to achieve the highest image accuracy in Precision Agriculture applications, making the following two main contributions: (a) camera sensor arrangement, to adjust extrinsic parameters and (b) design of strategies for controlling the adverse illumination effects.
Nesterets, Yakov I; Gureyev, Timur E; Mayo, Sheridan C; Stevenson, Andrew W; Thompson, Darren; Brown, Jeremy M C; Kitchen, Marcus J; Pavlov, Konstantin M; Lockie, Darren; Brun, Francesco; Tromba, Giuliana
2015-11-01
Results are presented of a recent experiment at the Imaging and Medical beamline of the Australian Synchrotron intended to contribute to the implementation of low-dose high-sensitivity three-dimensional mammographic phase-contrast imaging, initially at synchrotrons and subsequently in hospitals and medical imaging clinics. The effect of such imaging parameters as X-ray energy, source size, detector resolution, sample-to-detector distance, scanning and data processing strategies in the case of propagation-based phase-contrast computed tomography (CT) have been tested, quantified, evaluated and optimized using a plastic phantom simulating relevant breast-tissue characteristics. Analysis of the data collected using a Hamamatsu CMOS Flat Panel Sensor, with a pixel size of 100 µm, revealed the presence of propagation-based phase contrast and demonstrated significant improvement of the quality of phase-contrast CT imaging compared with conventional (absorption-based) CT, at medically acceptable radiation doses.
Massa, Sam; Vikani, Niravkumar; Betti, Cecilia; Ballet, Steven; Vanderhaegen, Saskia; Steyaert, Jan; Descamps, Benedicte; Vanhove, Christian; Bunschoten, Anton; van Leeuwen, Fijs W B; Hernot, Sophie; Caveliers, Vicky; Lahoutte, Tony; Muyldermans, Serge; Xavier, Catarina; Devoogdt, Nick
2016-09-01
A generic site-specific conjugation method that generates a homogeneous product is of utmost importance in tracer development for molecular imaging and therapy. We explored the protein-ligation capacity of the enzyme Sortase A to label camelid single-domain antibody-fragments, also known as nanobodies. The versatility of the approach was demonstrated by conjugating independently three different imaging probes: the chelating agents CHX-A"-DTPA and NOTA for single-photon emission computed tomography (SPECT) with indium-111 and positron emission tomography (PET) with gallium-68, respectively, and the fluorescent dye Cy5 for fluorescence reflectance imaging (FRI). After a straightforward purification process, homogeneous single-conjugated tracer populations were obtained in high yield (30-50%). The enzymatic conjugation did not affect the affinity of the tracers, nor the radiolabeling efficiency or spectral characteristics. In vivo, the tracers enabled the visualization of human epidermal growth factor receptor 2 (HER2) expressing BT474M1-tumors with high contrast and specificity as soon as 1 h post injection in all three imaging modalities. These data demonstrate Sortase A-mediated conjugation as a valuable strategy for the development of site-specifically labeled camelid single-domain antibody-fragments for use in multiple molecular imaging modalities. Copyright © 2016 John Wiley & Sons, Ltd.
Rezakova, M V; Mazhirina, K G; Pokrovskiy, M A; Savelov, A A; Savelova, O A; Shtark, M B
2013-04-01
Using functional magnetic resonance imaging, we performed online brain mapping of gamers trained to voluntarily (cognitively) control their heart rate, the parameter that operated a competitive virtual gameplay in the adaptive feedback loop. With the default start picture, the regions of interest during the formation of the optimal cognitive strategy were Brodmann areas 19, 37, 39 and 40, as well as cerebellar structures (vermis, pyramid, clivus) and the amygdala. The "localization" concept of the contribution of the cerebellum to cognitive processes is discussed.
Commercial Eyes in Space: Implications for U.S. Military Operations in 2030
2008-03-01
Figure 2: Notional Satellite Remote Sensing Flow (actors: adversary user, commercial imagery company, subject). The Government will most likely continue to rely on commercial sensors to supplement national intelligence. Potential counter-ISR strategies for 2030 target the imagery chain: image request, tasking received, satellite tasked, image processed and stored.
Data-processing strategies for nano-tomography with elemental specification
NASA Astrophysics Data System (ADS)
Liu, Yijin; Cats, Korneel H.; Nelson Weker, Johanna; Andrews, Joy C.; Weckhuysen, Bert M.; Pianetta, Piero
2013-10-01
Combining the energy tunability of synchrotron X-ray sources with transmission X-ray microscopy, the morphology of materials can be resolved in 3D with elemental/chemical specification at spatial resolutions down to 30 nm. In order to study the energy dependence of the absorption coefficient over the investigated volume, the tomographic reconstruction and the image registration (before and/or after reconstruction) are critical. In this paper we compare two data-processing strategies and conclude that the signal-to-noise ratio (S/N) of the final result can be improved by performing the tomographic reconstruction prior to evaluating the energy dependence. This result echoes the dose fractionation theorem and is particularly helpful when the element of interest is present at low concentration.
Xu, Jeff S; Huang, Jiwei; Qin, Ruogu; Hinkle, George H; Povoski, Stephen P; Martin, Edward W; Xu, Ronald X
2010-03-01
Accurate assessment of tumor boundaries and recognition of occult disease are important oncologic principles in cancer surgeries. However, existing imaging modalities are not optimized for intraoperative cancer imaging applications. We developed a nanobubble (NB) contrast agent for cancer targeting and dual-mode imaging using optical and ultrasound (US) modalities. The contrast agent was fabricated by encapsulating the Texas Red dye in poly (lactic-co-glycolic acid) (PLGA) NBs and conjugating NBs with cancer-targeting ligands. Both one-step and three-step cancer-targeting strategies were tested on the LS174T human colon cancer cell line. For the one-step process, NBs were conjugated with the humanized HuCC49 Delta C(H)2 antibody to target the over-expressed TAG-72 antigen. For the three-step process, cancer cells were targeted by successive application of the biotinylated HuCC49 Delta C(H)2 antibody, streptavidin, and the biotinylated NBs. Both one-step and three-step processes successfully targeted the cancer cells with high binding affinity. NB-assisted dual-mode imaging was demonstrated on a gelatin phantom that embedded multiple tumor simulators at different NB concentrations. Simultaneous fluorescence and US images were acquired for these tumor simulators and linear correlations were observed between the fluorescence/US intensities and the NB concentrations. Our research demonstrated the technical feasibility of using the dual-mode NB contrast agent for cancer targeting and simultaneous fluorescence/US imaging. (c) 2009 Elsevier Ltd. All rights reserved.
Chen, Zhenwei; Zhang, Lei; Zhang, Guo
2016-01-01
Co-registration is one of the most important steps in interferometric synthetic aperture radar (InSAR) data processing. The standard offset-measurement method, based on cross-correlating uniformly distributed patches, takes no account of the specific geometric transformation between images or of the characteristics of ground scatterers. Hence, it is inefficient and difficult to obtain satisfactory co-registration results for image pairs with relatively large distortion or large incoherent areas. Given this, an improved co-registration strategy is proposed in this paper which takes both the geometric features and the image content into consideration. Firstly, geometric transformations between the images, including scale, flip, rotation, and shear, are eliminated based on the geometrical information, and an initial co-registration polynomial is obtained. Then, registration points are automatically detected by integrating signal-to-clutter-ratio (SCR) thresholds with the amplitude information, and a further co-registration process is performed to refine the polynomial. Several comparison experiments were carried out using two TerraSAR-X images of Hong Kong airport and 21 PALSAR images of the Donghai Bridge. The experimental results demonstrate that the proposed method improves co-registration accuracy and efficiency, and handles cases with large distortion between images or large incoherent areas. For most co-registrations, the proposed method can enhance the reliability and applicability of co-registration and thus raise the level of automation. PMID:27649207
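The polynomial refinement step described above reduces, in its simplest form, to a least-squares fit of a warp from matched registration points. The sketch below assumes a first-order (affine) polynomial and synthetic point pairs; the paper's actual pipeline (SCR-based point selection, possibly higher polynomial orders) is not reproduced here.

```python
import numpy as np

def fit_coregistration_polynomial(master_pts, slave_pts):
    """Fit a first-order co-registration polynomial
    x' = a0 + a1*x + a2*y,  y' = b0 + b1*x + b2*y
    by least squares from matched registration points."""
    x, y = master_pts[:, 0], master_pts[:, 1]
    A = np.column_stack([np.ones_like(x), x, y])        # design matrix
    coef_x, *_ = np.linalg.lstsq(A, slave_pts[:, 0], rcond=None)
    coef_y, *_ = np.linalg.lstsq(A, slave_pts[:, 1], rcond=None)
    return coef_x, coef_y

# synthetic example: warp 50 random master points by a known polynomial
rng = np.random.default_rng(0)
master = rng.uniform(0, 1000, size=(50, 2))
true = np.array([[5.0, 1.001, 0.002], [-3.0, -0.001, 0.999]])  # assumed warp
slave = np.column_stack([
    true[0, 0] + true[0, 1] * master[:, 0] + true[0, 2] * master[:, 1],
    true[1, 0] + true[1, 1] * master[:, 0] + true[1, 2] * master[:, 1],
])
cx, cy = fit_coregistration_polynomial(master, slave)
print(np.round(cx, 3), np.round(cy, 3))  # recovers the assumed coefficients
```

With noise-free synthetic offsets the fit recovers the warp exactly; in practice the quality of the polynomial depends on how reliably the registration points were detected, which is exactly what the SCR-plus-amplitude selection targets.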
The optimal imaging strategy for patients with stable chest pain: a cost-effectiveness analysis.
Genders, Tessa S S; Petersen, Steffen E; Pugliese, Francesca; Dastidar, Amardeep G; Fleischmann, Kirsten E; Nieman, Koen; Hunink, M G Myriam
2015-04-07
The optimal imaging strategy for patients with stable chest pain is uncertain. To determine the cost-effectiveness of different imaging strategies for patients with stable chest pain. Microsimulation state-transition model. Published literature. 60-year-old patients with a low to intermediate probability of coronary artery disease (CAD). Lifetime. The United States, the United Kingdom, and the Netherlands. Coronary computed tomography (CT) angiography, cardiac stress magnetic resonance imaging, stress single-photon emission CT, and stress echocardiography. Lifetime costs, quality-adjusted life-years (QALYs), and incremental cost-effectiveness ratios. The strategy that maximized QALYs and was cost-effective in the United States and the Netherlands began with coronary CT angiography, continued with cardiac stress imaging if angiography found at least 50% stenosis in at least 1 coronary artery, and ended with catheter-based coronary angiography if stress imaging induced ischemia of any severity. For U.K. men, the preferred strategy was optimal medical therapy without catheter-based coronary angiography if coronary CT angiography found only moderate CAD or stress imaging induced only mild ischemia. In these strategies, stress echocardiography was consistently more effective and less expensive than other stress imaging tests. For U.K. women, the optimal strategy was stress echocardiography followed by catheter-based coronary angiography if echocardiography induced mild or moderate ischemia. Results were sensitive to changes in the probability of CAD and assumptions about false-positive results. All cardiac stress imaging tests were assumed to be available. Exercise electrocardiography was included only in a sensitivity analysis. Differences in QALYs among strategies were small. Coronary CT angiography is a cost-effective triage test for 60-year-old patients who have nonacute chest pain and a low to intermediate probability of CAD. Erasmus University Medical Center.
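The cost-effectiveness comparison above rests on incremental cost-effectiveness ratios (ICERs) computed along the efficient frontier of strategies. A minimal sketch of that bookkeeping is shown below; the strategy names, costs, and QALY values are invented for illustration and are not the paper's estimates.

```python
def icers(strategies):
    """Frontier + incremental cost-effectiveness ratios.
    strategies: dict name -> (cost, qalys). Strictly dominated options
    (costlier, no more effective) and extendedly dominated options
    (non-increasing ICER sequence) are removed first."""
    items = sorted(strategies.items(), key=lambda kv: kv[1][0])   # by cost
    frontier = []
    for name, (c, q) in items:
        if frontier and q <= frontier[-1][2]:
            continue                                   # strictly dominated
        frontier.append((name, c, q))
        while len(frontier) >= 3:                      # extended dominance
            (n1, c1, q1), (n2, c2, q2), (n3, c3, q3) = frontier[-3:]
            if (c2 - c1) / (q2 - q1) >= (c3 - c2) / (q3 - q2):
                frontier.pop(-2)
            else:
                break
    ratios = {}
    for (na, ca, qa), (nb, cb, qb) in zip(frontier, frontier[1:]):
        ratios[nb] = (cb - ca) / (qb - qa)             # cost per extra QALY
    return frontier, ratios

strategies = {  # illustrative numbers only
    "no imaging": (0, 10.00),
    "stress echo": (2000, 10.05),
    "SPECT": (5000, 10.06),
    "CCTA + stress imaging": (4000, 10.09),
}
frontier, ratios = icers(strategies)
print([n for n, _, _ in frontier], ratios)
```

In this toy example SPECT is dominated (more costly and less effective than the CCTA-based strategy), mirroring the abstract's finding that stress echocardiography was consistently more effective and less expensive than other stress imaging tests.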
Fixational Eye Movements in the Earliest Stage of Metazoan Evolution
Bielecki, Jan; Høeg, Jens T.; Garm, Anders
2013-01-01
All known photoreceptor cells adapt to constant light stimuli, fading the retinal image when exposed to an immobile visual scene. Counter strategies are therefore necessary to prevent blindness, and in mammals this is accomplished by fixational eye movements. Cubomedusae occupy a key position for understanding the evolution of complex visual systems and their eyes are assumedly subject to the same adaptive problems as the vertebrate eye, but lack motor control of their visual system. The morphology of the visual system of cubomedusae ensures a constant orientation of the eyes and a clear division of the visual field, but thereby also a constant retinal image when exposed to stationary visual scenes. Here we show that bell contractions used for swimming in the medusae refresh the retinal image in the upper lens eye of Tripedalia cystophora. This strongly suggests that strategies comparable to fixational eye movements have evolved at the earliest metazoan stage to compensate for the intrinsic property of the photoreceptors. Since the timing and amplitude of the rhopalial movements concur with the spatial and temporal resolution of the eye it circumvents the need for post processing in the central nervous system to remove image blur. PMID:23776673
Boccia, M; Piccardi, L; Palermo, L; Nemmi, F; Sulpizio, V; Galati, G; Guariglia, C
2014-09-05
Visual mental imagery is a process that draws on different cognitive abilities and is affected by the contents of mental images. Several studies have demonstrated that different brain areas subtend the mental imagery of navigational and non-navigational contents. Here, we set out to determine whether there are distinct representations for navigational and geographical images. Specifically, we used a Spatial Compatibility Task (SCT) to assess the mental representation of a familiar navigational space (the campus), a familiar geographical space (the map of Italy) and familiar objects (the clock). Twenty-one participants judged whether the vertical or the horizontal arrangement of items was correct. We found that distinct representational strategies were preferred to solve different categories on the SCT, namely, the horizontal perspective for the campus and the vertical perspective for the clock and the map of Italy. Furthermore, we found significant effects due to individual differences in the vividness of mental images and in preferences for verbal versus visual strategies, which selectively affect the contents of mental images. Our results suggest that imagining a familiar navigational space is somewhat different from imagining a familiar geographical space. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Liu, Haiyun; Tian, Tian; Ji, Dandan; Ren, Na; Ge, Shenguang; Yan, Mei; Yu, Jinghua
2016-11-15
In situ imaging of miRNA in living cells could help us monitor miRNA expression in real time and obtain accurate information for studying miRNA-related bioprocesses and disease. Given the low-level expression of miRNA, amplification strategies for intracellular miRNA are imperative. Here, we propose a non-destructive, enzyme-free amplification strategy for cellular miRNA imaging in living cells, using catalyzed hairpin assembly (CHA) based on graphene oxide (GO). The enzyme-free CHA exhibits stringent recognition and excellent signal amplification of miRNA in living cells, and GO is a good candidate as both a fluorescence quencher and a cellular carrier. Taking advantage of CHA and GO, we can monitor low-level miRNA in living cells in a simple, sensitive and real-time manner. Finally, imaging of miRNAs in cells with different expression levels is realized. This method could supply an effective tool to visualize intracellular low-level miRNAs and help us further understand the role of miRNAs in cellular processes. Copyright © 2016 Elsevier B.V. All rights reserved.
Fixational eye movements in the earliest stage of metazoan evolution.
Bielecki, Jan; Høeg, Jens T; Garm, Anders
2013-01-01
All known photoreceptor cells adapt to constant light stimuli, fading the retinal image when exposed to an immobile visual scene. Counter-strategies are therefore necessary to prevent blindness, and in mammals this is accomplished by fixational eye movements. Cubomedusae occupy a key position for understanding the evolution of complex visual systems, and their eyes are presumably subject to the same adaptive problems as the vertebrate eye, but they lack motor control of their visual system. The morphology of the visual system of cubomedusae ensures a constant orientation of the eyes and a clear division of the visual field, but thereby also a constant retinal image when exposed to stationary visual scenes. Here we show that the bell contractions used for swimming refresh the retinal image in the upper lens eye of Tripedalia cystophora. This strongly suggests that strategies comparable to fixational eye movements evolved at the earliest metazoan stage to compensate for this intrinsic property of the photoreceptors. Since the timing and amplitude of the rhopalial movements concur with the spatial and temporal resolution of the eye, this circumvents the need for post-processing in the central nervous system to remove image blur.
A novel 3D Cartesian random sampling strategy for Compressive Sensing Magnetic Resonance Imaging.
Valvano, Giuseppe; Martini, Nicola; Santarelli, Maria Filomena; Chiappino, Dante; Landini, Luigi
2015-01-01
In this work we propose a novel acquisition strategy for accelerated 3D Compressive Sensing Magnetic Resonance Imaging (CS-MRI). This strategy is based on 3D Cartesian sampling with random switching of the frequency-encoding direction with other K-space directions. Two 3D sampling strategies are presented. In the first, the frequency-encoding direction is randomly switched with one of the two phase-encoding directions. In the second, the frequency-encoding direction is randomly chosen among all the directions of K-space. These strategies lower the coherence of the acquisition, reducing aliasing artifacts and yielding better image quality after Compressive Sensing (CS) reconstruction. Furthermore, the proposed strategies can reduce the typical smoothing of CS caused by limited sampling of high-frequency locations. We demonstrate by means of simulations that the proposed acquisition strategies outperform the standard Compressive Sensing acquisition, resulting in better quality of the reconstructed images and greater achievable acceleration.
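The second strategy described above (readout direction chosen at random among all three K-space axes) can be illustrated with a minimal NumPy sketch. The function name, grid size and line count are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def random_axis_mask(n=32, n_lines=200, seed=0):
    """Toy 3D Cartesian CS sampling mask: each sampled line is fully
    read out along a randomly chosen frequency-encoding axis, with the
    two remaining phase-encode coordinates drawn at random."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((n, n, n), dtype=bool)
    for _ in range(n_lines):
        axis = int(rng.integers(3))        # randomly switch the readout direction
        a, b = rng.integers(n, size=2)     # phase-encode coordinates
        idx = [a, b]
        idx.insert(axis, slice(None))      # full line along the chosen axis
        mask[tuple(idx)] = True
    return mask

mask = random_axis_mask()
undersampling = mask.mean()                # fraction of K-space actually sampled
```

Because successive lines point along different axes, the aggregate sampling pattern is less coherent than a fixed-readout Cartesian scheme, which is the property the paper exploits.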
3D Modeling of Industrial Heritage Building Using COTSs System: Test, Limits and Performances
NASA Astrophysics Data System (ADS)
Piras, M.; Di Pietra, V.; Visintini, D.
2017-08-01
The role of UAV systems in applied geomatics is continuously increasing in applications such as inspection, surveying and geospatial data acquisition. This evolution is mainly due to two factors: new technologies and new algorithms for data processing. On the technology side, commercial UAVs, including COTS (Commercial Off-The-Shelf) systems, have come into very wide use in recent years. Moreover, these UAVs make it easy to acquire oblique images, offering a way to overcome the limitations of the nadir approach related to field of view and occlusions. In order to test the potential and issues of COTS systems, the Italian Society of Photogrammetry and Topography (SIFET) organised SBM2017, a benchmark in which everyone can participate as a shared experience. This benchmark, called "Photogrammetry with oblique images from UAV: potentialities and challenges", collects considerations from users, highlights the potential of these systems, defines the critical aspects and technological challenges, and compares distinct approaches and software. The case study is the "Fornace Penna" in Scicli (Ragusa, Italy), an inaccessible monument of industrial architecture from the early 1900s. The datasets (images and video) were acquired from three different UAV systems: Parrot Bebop 2, DJI Phantom 4 and Flytop Flynovex. The aim of this benchmark is to generate a 3D model of the "Fornace Penna", with an analysis considering different software, imaging geometries and processing strategies. This paper describes the surveying strategies, the methodologies and five different photogrammetric results (sensor calibration, exterior orientation, dense point cloud and two orthophotos), obtained separately from the single images and from frames extracted from the video acquired with the DJI system.
High Density or Urban Sprawl: What Works Best in Biology?
Oreopoulos, John; Gray-Owen, Scott D; Yip, Christopher M
2017-02-28
With new approaches in imaging, from new tools or reagents to processing algorithms, come unique opportunities and challenges for our understanding of biological processes, structures, and dynamics. Although innovations in super-resolution imaging are affording novel perspectives into how molecules structurally associate and localize in response to, or in order to initiate, specific signaling events in the cell, questions arise as to how to interpret these observations in the context of biological function. Just as each neighborhood in a city has its own unique vibe, culture, and indeed density, recent work has shown that membrane receptor behavior and action are governed by localization and association state. There is tremendous potential in developing strategies for tracking how the populations of these molecular neighborhoods change dynamically.
Automated data processing architecture for the Gemini Planet Imager Exoplanet Survey
NASA Astrophysics Data System (ADS)
Wang, Jason J.; Perrin, Marshall D.; Savransky, Dmitry; Arriaga, Pauline; Chilcote, Jeffrey K.; De Rosa, Robert J.; Millar-Blanchaer, Maxwell A.; Marois, Christian; Rameau, Julien; Wolff, Schuyler G.; Shapiro, Jacob; Ruffio, Jean-Baptiste; Maire, Jérôme; Marchis, Franck; Graham, James R.; Macintosh, Bruce; Ammons, S. Mark; Bailey, Vanessa P.; Barman, Travis S.; Bruzzone, Sebastian; Bulger, Joanna; Cotten, Tara; Doyon, René; Duchêne, Gaspard; Fitzgerald, Michael P.; Follette, Katherine B.; Goodsell, Stephen; Greenbaum, Alexandra Z.; Hibon, Pascale; Hung, Li-Wei; Ingraham, Patrick; Kalas, Paul; Konopacky, Quinn M.; Larkin, James E.; Marley, Mark S.; Metchev, Stanimir; Nielsen, Eric L.; Oppenheimer, Rebecca; Palmer, David W.; Patience, Jennifer; Poyneer, Lisa A.; Pueyo, Laurent; Rajan, Abhijith; Rantakyrö, Fredrik T.; Schneider, Adam C.; Sivaramakrishnan, Anand; Song, Inseok; Soummer, Remi; Thomas, Sandrine; Wallace, J. Kent; Ward-Duong, Kimberly; Wiktorowicz, Sloane J.
2018-01-01
The Gemini Planet Imager Exoplanet Survey (GPIES) is a multiyear direct imaging survey of 600 stars to discover and characterize young Jovian exoplanets and their environments. We have developed an automated data architecture to process and index all data related to the survey uniformly. An automated and flexible data processing framework, which we term the Data Cruncher, combines multiple data reduction pipelines (DRPs) together to process all spectroscopic, polarimetric, and calibration data taken with GPIES. With no human intervention, fully reduced and calibrated data products are available less than an hour after the data are taken to expedite follow up on potential objects of interest. The Data Cruncher can run on a supercomputer to reprocess all GPIES data in a single day as improvements are made to our DRPs. A backend MySQL database indexes all files, which are synced to the cloud, and a front-end web server allows for easy browsing of all files associated with GPIES. To help observers, quicklook displays show reduced data as they are processed in real time, and chatbots on Slack post observing information as well as reduced data products. Together, the GPIES automated data processing architecture reduces our workload, provides real-time data reduction, optimizes our observing strategy, and maintains a homogeneously reduced dataset to study planet occurrence and instrument performance.
Virtual environments from panoramic images
NASA Astrophysics Data System (ADS)
Chapman, David P.; Deacon, Andrew
1998-12-01
A number of recent projects have demonstrated the utility of Internet-enabled image databases for the documentation of complex, inaccessible and potentially hazardous environments typically encountered in the petrochemical and nuclear industries. Unfortunately, machine vision and image processing techniques have not, to date, enabled the automatic extraction of geometrical data from such images, and thus 3D CAD modeling remains an expensive and laborious manual activity. Recent developments in panoramic image capture and presentation offer an alternative intermediate deliverable which, in turn, offers some of the benefits of a 3D model at a fraction of the cost. Panoramic image display tools such as Apple's QuickTime VR (QTVR) and Live Spaces RealVR provide compelling and accessible digital representations of the real world and justifiably claim to 'put the reality in Virtual Reality.' This paper demonstrates how such technologies can be customized, extended and linked to facility management systems delivered over a corporate intranet, enabling end users to become familiar with remote sites and extract simple dimensional data. In addition, strategies for integrating such images with documents gathered from 2D or 3D CAD and Process and Instrumentation Diagrams (P&IDs) are described, as are techniques for precise 'as-built' modeling using the calibrated images from which panoramas have been derived, and the use of textures from these images to increase the realism of rendered scenes. A number of case studies relating to both nuclear and process engineering demonstrate the extent to which such solutions are scalable to deal with the very large volumes of image data required to fully document the large, complex facilities typical of these industry sectors.
Guo, Lifang; Tian, Minggang; Feng, Ruiqing; Zhang, Ge; Zhang, Ruoyao; Li, Xuechen; Liu, Zhiqiang; He, Xiuquan; Sun, Jing Zhi; Yu, Xiaoqiang
2018-04-04
Lipid droplets (LDs), with their unique interfacial architecture, not only play crucial roles in protecting a cell from lipotoxicity and lipoapoptosis but are also closely related to many diseases such as fatty liver and diabetes. Thus, as important applied biomaterials, fluorescent probes with ultrahigh selectivity for in situ, high-fidelity imaging of LDs in living cells and tissues are critical for elucidating relevant physiological and pathological events and for detecting related diseases. However, available probes exploit only the waterless neutral cores of LDs while ignoring their unique phospholipid monolayer interfaces, and therefore exhibit low selectivity: they cannot differentiate the neutral cores of LDs from other intracellular lipophilic microenvironments, which results in extensive cloud-like background noise and severely limits their bioapplications. Herein, to design LD probes with ultrahigh selectivity, the exceptional interfacial architecture of LDs is fully considered and an interface-targeting strategy is proposed for the first time. Following this novel strategy, we have developed two amphipathic fluorescent probes (N-Cy and N-Py) by introducing different cations into a lipophilic fluorophore (nitrobenzoxadiazole, NBD). Consequently, the cationic moiety precisely localizes to the interface through electrostatic interaction while NBD embeds entirely into the waterless core via hydrophobic interaction. High-fidelity, background-free fluorescence imaging of LDs in situ is thus realized in living cells. Moreover, LDs in turbid tissues such as skeletal muscle slices have been clearly imaged (up to 82 μm depth) with a two-photon microscope. Importantly, using N-Cy, we not only intuitively monitored variations in LD number, size, and morphology but also clearly revealed their abnormality in hepatic tissues resulting from fatty liver. Therefore, these unique probes provide excellent imaging tools for elucidating LD-related physiological and pathological processes, and the interface-targeting strategy has universal significance for designing probes with ultrahigh selectivity.
Budak, Umit; Şengür, Abdulkadir; Guo, Yanhui; Akbulut, Yaman
2017-12-01
Microaneurysms (MAs) are known as early signs of diabetic-retinopathy which are called red lesions in color fundus images. Detection of MAs in fundus images needs highly skilled physicians or eye angiography. Eye angiography is an invasive and expensive procedure. Therefore, an automatic detection system to identify the MAs locations in fundus images is in demand. In this paper, we proposed a system to detect the MAs in colored fundus images. The proposed method composed of three stages. In the first stage, a series of pre-processing steps are used to make the input images more convenient for MAs detection. To this end, green channel decomposition, Gaussian filtering, median filtering, back ground determination, and subtraction operations are applied to input colored fundus images. After pre-processing, a candidate MAs extraction procedure is applied to detect potential regions. A five-stepped procedure is adopted to get the potential MA locations. Finally, deep convolutional neural network (DCNN) with reinforcement sample learning strategy is used to train the proposed system. The DCNN is trained with color image patches which are collected from ground-truth MA locations and non-MA locations. We conducted extensive experiments on ROC dataset to evaluate of our proposal. The results are encouraging.
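The pre-processing stage (green-channel decomposition followed by background estimation and subtraction) can be sketched with NumPy alone. The paper's exact filter choices and sizes are not reproduced here; this stand-in uses a box mean filter, computed via an integral image, as the background estimate, so that small dark lesions become bright peaks:

```python
import numpy as np

def preprocess(rgb, bg_size=15):
    """Illustrative pre-processing sketch: take the green channel (best
    lesion contrast), estimate the slowly varying background with a large
    mean filter, and subtract so dark-on-bright lesions become positive."""
    green = rgb[..., 1].astype(float)
    pad = bg_size // 2
    padded = np.pad(green, pad, mode='edge')
    # integral image with a leading zero row/column
    S = np.zeros((padded.shape[0] + 1, padded.shape[1] + 1))
    S[1:, 1:] = padded.cumsum(0).cumsum(1)
    # windowed sums -> local mean = smooth background estimate
    win = (S[bg_size:, bg_size:] - S[:-bg_size, bg_size:]
           - S[bg_size:, :-bg_size] + S[:-bg_size, :-bg_size])
    background = win / bg_size ** 2
    return background - green
```

Candidate MA locations would then be thresholded peaks in the returned map; the subsequent five-step candidate extraction and the DCNN classifier are beyond this sketch.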
The Tonya Harding Controversy: An Analysis of Image Restoration Strategies.
ERIC Educational Resources Information Center
Benoit, William L.; Hanczor, Robert S.
1994-01-01
Analyzes Tonya Harding's defense of her image in "Eye to Eye with Connie Chung," applying the theory of image restoration discourse. Finds that the principal strategies employed in her behalf were bolstering, denial, and attacking her accuser, but that these strategies were not developed very effectively in this instance. (SR)
3D automatic anatomy recognition based on iterative graph-cut-ASM
NASA Astrophysics Data System (ADS)
Chen, Xinjian; Udupa, Jayaram K.; Bagci, Ulas; Alavi, Abass; Torigian, Drew A.
2010-02-01
We call the computerized assistive process of recognizing, delineating, and quantifying organs and tissue regions in medical imaging, occurring automatically during clinical image interpretation, automatic anatomy recognition (AAR). The AAR system we are developing includes five main parts: model building, object recognition, object delineation, pathology detection, and organ system quantification. In this paper, we focus on the delineation part. For modeling, we employ the active shape model (ASM) strategy. For recognition and delineation, we integrate several hybrid strategies combining purely image-based methods with ASM. In this paper, an iterative Graph-Cut ASM (IGCASM) method is proposed for object delineation. An algorithm called GC-ASM, presented at this symposium last year for object delineation in 2D images, attempted to synergistically combine ASM and GC. Here, we extend this method to 3D medical image delineation. The IGCASM method effectively combines the rich statistical shape information embodied in ASM with the globally optimal delineation capability of the GC method. We propose a new GC cost function which effectively integrates the specific image information with the ASM shape model information. The proposed methods are tested on a clinical abdominal CT data set. The preliminary results show that: (a) it is feasible to explicitly bring prior 3D statistical shape information into the GC framework; (b) the 3D IGCASM delineation method improves on ASM and GC and can provide practical operational times on clinical images.
NASA Astrophysics Data System (ADS)
Linares, Rodrigo; Vergara, German; Gutiérrez, Raúl; Fernández, Carlos; Villamayor, Víctor; Gómez, Luis; González-Camino, Maria; Baldasano, Arturo; Castro, G.; Arias, R.; Lapido, Y.; Rodríguez, J.; Romero, Pablo
2015-05-01
The combination of flexibility, productivity, precision and zero-defect manufacturing in future laser-based equipment is a major challenge facing this enabling technology. New sensors for online monitoring and real-time control of laser-based processes are necessary for improving product quality and increasing manufacturing yields. New approaches to fully automated, zero-defect manufacturing demand smarter heads in which lasers, optics, actuators, sensors and electronics are integrated in a single compact and affordable device. Many defects arising in laser-based manufacturing processes come from instabilities in the dynamics of the laser process, so temperature and heat dynamics are key parameters to be monitored. Low-cost infrared imagers with a high speed of response will constitute the next generation of sensors to be implemented in future monitoring and control systems for laser-based processes, capable of providing simultaneous information about heat dynamics and its spatial distribution. This work describes the results of using an innovative low-cost, high-speed infrared imager based on the market's first quantum infrared imager monolithically integrated with a Si-CMOS ROIC. The sensor provides low-resolution images at frame rates up to 10 kHz in uncooled operation, at the same cost as traditional infrared spot detectors. To demonstrate the capabilities of the new sensor technology, a low-cost camera was assembled on a standard production laser welding head, allowing melt-pool images to be registered at frame rates of 10 kHz. In addition, dedicated software was developed for defect detection and classification. Multiple laser welding processes were recorded with the aim of studying the performance of the system and its application to the real-time monitoring of laser welding. During the experiments, different types of defects were produced and monitored, and the classifier was fed with the experimental images obtained. Self-learning strategies were implemented with very promising results, demonstrating the feasibility of using low-cost, high-speed infrared imagers in advancing towards real-time, in-line zero-defect production systems.
Cloud Engineering Principles and Technology Enablers for Medical Image Processing-as-a-Service
Bao, Shunxing; Plassard, Andrew J.; Landman, Bennett A.; Gokhale, Aniruddha
2017-01-01
Traditional in-house, laboratory-based medical imaging studies use hierarchical data structures (e.g., NFS file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance from these approaches is, however, impeded by standard network switches since they can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. To that end, a cloud-based “medical image processing-as-a-service” offers promise in utilizing the ecosystem of Apache Hadoop, which is a flexible framework providing distributed, scalable, fault tolerant storage and parallel computational modules, and HBase, which is a NoSQL database built atop Hadoop’s distributed file system. Despite this promise, HBase’s load distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). This paper makes two contributions to address these concerns by describing key cloud engineering principles and technology enhancements we made to the Apache Hadoop ecosystem for medical imaging applications. First, we propose a row-key design for HBase, which is a necessary step that is driven by the hierarchical organization of imaging data. Second, we propose a novel data allocation policy within HBase to strongly enforce collocation of hierarchically related imaging data. The proposed enhancements accelerate data processing by minimizing network usage and localizing processing to machines where the data already exist. Moreover, our approach is amenable to the traditional scan, subject, and project-level analysis procedures, and is compatible with standard command line/scriptable image processing software. 
Experimental results for an illustrative sample of imaging data reveal that our new HBase policy results in a three-fold time improvement in conversion of classic DICOM to NIfTI file formats when compared with the default HBase region split policy, and nearly a six-fold improvement over a commonly available network file system (NFS) approach, even for relatively small file sets. Moreover, file access latency is lower than that of network-attached storage. PMID:28884169
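The row-key idea above (keys ordered by the imaging hierarchy so that related data collocate within HBase regions) can be illustrated with a short sketch. The exact key layout from the paper is not reproduced here; the field order and fixed-width padding below are hypothetical, chosen so that lexicographic key order follows project, subject, session, scan, slice order:

```python
def row_key(project, subject, session, scan, slice_idx):
    """Hypothetical hierarchy-ordered row key: fixed-width numeric fields
    keep lexicographic order identical to hierarchical order, so slices
    of one scan (and scans of one subject) sort contiguously."""
    return "{}:{}:{}:{:04d}:{:05d}".format(project, subject, session,
                                           scan, slice_idx)

key = row_key("projA", "subj01", "sess1", 2, 37)
# -> "projA:subj01:sess1:0002:00037"
```

Because HBase stores rows sorted by key, such a layout means a scan-, subject-, or project-level analysis reads one contiguous key range rather than scattering requests across regions.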
The AOLI low-order non-linear curvature wavefront sensor: laboratory and on-sky results
NASA Astrophysics Data System (ADS)
Crass, Jonathan; King, David; MacKay, Craig
2014-08-01
Many adaptive optics (AO) systems in use today require bright reference objects to determine the effects of atmospheric distortions. Typically these systems use Shack-Hartmann wavefront sensors (SHWFS) to distribute incoming light from a reference object between a large number of sub-apertures. Guyon et al. evaluated the sensitivity of several different wavefront sensing techniques and proposed the non-linear curvature wavefront sensor (nlCWFS), which offers improved sensitivity across a range of orders of distortion. On large ground-based telescopes this can provide nearly 100% sky coverage using natural guide stars. We present work being undertaken on nlCWFS development for the Adaptive Optics Lucky Imager (AOLI) project. The wavefront sensor is being developed as part of a low-order adaptive optics system for use in a dedicated instrument providing an AO-corrected beam to a Lucky Imaging based science detector. The nlCWFS provides a total of four reference images on two photon-counting EMCCDs for use in the wavefront reconstruction process. We present results from both laboratory work using a calibration system and the first on-sky data obtained with the nlCWFS at the 4.2 metre William Herschel Telescope, La Palma. In addition, we describe the updated optical design of the wavefront sensor, strategies for minimising intrinsic effects, and methods to maximise sensitivity using photon-counting detectors. We discuss on-going work to develop the high-speed reconstruction algorithm required for the nlCWFS technique, including strategies to implement the technique on graphics processing units (GPUs) and to minimise computing overheads by obtaining a prior for rapid convergence of the wavefront reconstruction. Finally, we evaluate the sensitivity of the wavefront sensor based upon both measured data and low-photon-count strategies.
Automated microscopy for high-content RNAi screening
2010-01-01
Fluorescence microscopy is one of the most powerful tools to investigate complex cellular processes such as cell division, cell motility, or intracellular trafficking. The availability of RNA interference (RNAi) technology and automated microscopy has opened the possibility to perform cellular imaging in functional genomics and other large-scale applications. Although imaging often dramatically increases the content of a screening assay, it poses new challenges to achieve accurate quantitative annotation and therefore needs to be carefully adjusted to the specific needs of individual screening applications. In this review, we discuss principles of assay design, large-scale RNAi, microscope automation, and computational data analysis. We highlight strategies for imaging-based RNAi screening adapted to different library and assay designs. PMID:20176920
Siaki, Leilani A; Loescher, Lois J; Trego, Lori L
2013-03-01
This article presents a discussion of the development of a mid-range theory of risk perception. Unhealthy behaviours contribute to the development of health inequalities worldwide. The link between perceived risk and successful health behaviour change is inconclusive, particularly in vulnerable populations, which may be attributed to inattention to culture. The synthesis strategy of theory building guided the process using three methods: (1) a systematic review of literature published between 2000-2011 targeting perceived risk in vulnerable populations; (2) qualitative and (3) quantitative data from a study of Samoan Pacific Islanders at high risk of cardiovascular disease and diabetes. The main concepts of this theory include risk attention, appraisal processes, cognition, and affect. Overarching these concepts is health-world view: cultural ways of knowing, beliefs, values, images, and ideas. This theory proposes the following: (1) risk attention varies based on knowledge of the health risk in the context of health-world views; (2) risk appraisals are influenced by affect, health-world views, cultural customs, and protocols that intersect with the health risk; (3) the strength of cultural beliefs, values, and images (cultural identity) mediates risk attention and risk appraisal, influencing the likelihood that persons will engage in health-promoting behaviours that may contradict cultural customs/protocols. Interventions guided by a culturally sensitive mid-range theory may improve behaviour-related health inequalities in vulnerable populations. The synthesis strategy is an intensive process for developing a culturally sensitive mid-range theory. Testing of the theory will ascertain its usefulness for reducing health inequalities in vulnerable groups. © 2012 Blackwell Publishing Ltd.
Challenges for data storage in medical imaging research.
Langer, Steve G
2011-04-01
Researchers in medical imaging face multiple challenges in storing, indexing, maintaining the viability of, and sharing their data. Addressing all these concerns requires a constellation of tools, but not all of them need to be local to the site. In particular, the data storage challenges faced by researchers can begin to require professional information technology skills. With limited human resources and funds, the medical imaging researcher may be better served by an outsourcing strategy for some management aspects. This paper outlines an approach to managing the main objectives faced by medical imaging scientists whose work includes processing and data mining on non-standard file formats, and relating those files to their DICOM-standard descendants. The capacity of the approach scales as the researcher's needs grow by leveraging the on-demand provisioning ability of cloud computing.
Mulshine, James L; Avila, Rick; Yankelevitz, David; Baer, Thomas M; Estépar, Raul San Jose; Ambrose, Laurie Fenton; Aldigé, Carolyn R
2015-05-01
The Prevent Cancer Foundation Lung Cancer Workshop XI: Tobacco-Induced Disease: Advances in Policy, Early Detection and Management was held in New York, NY on May 16 and 17, 2014. The two goals of the Workshop were to define strategies to drive innovation in precompetitive quantitative research on the use of imaging to assess new therapies for management of early lung cancer and to discuss a process to implement a national program to provide high quality computed tomography imaging for lung cancer and other tobacco-induced disease. With the central importance of computed tomography imaging for both early detection and volumetric lung cancer assessment, strategic issues around the development of imaging and ensuring its quality are critical to ensure continued progress against this most lethal cancer.
Probing the brain with molecular fMRI.
Ghosh, Souparno; Harvey, Peter; Simon, Jacob C; Jasanoff, Alan
2018-06-01
One of the greatest challenges of modern neuroscience is to incorporate our growing knowledge of molecular and cellular-scale physiology into integrated, organismic-scale models of brain function in behavior and cognition. Molecular-level functional magnetic resonance imaging (molecular fMRI) is a new technology that can help bridge these scales by mapping defined microscopic phenomena over large, optically inaccessible regions of the living brain. In this review, we explain how MRI-detectable imaging probes can be used to sensitize noninvasive imaging to mechanistically significant components of neural processing. We discuss how a combination of innovative probe design, advanced imaging methods, and strategies for brain delivery can make molecular fMRI an increasingly successful approach for spatiotemporally resolved studies of diverse neural phenomena, perhaps eventually in people. Copyright © 2018 Elsevier Ltd. All rights reserved.
Light activated microbubbles for imaging and microsurgery
NASA Astrophysics Data System (ADS)
Cavigli, Lucia; Micheletti, Filippo; Tortoli, Paolo; Centi, Sonia; Lai, Sarah; Borri, Claudia; Rossi, Francesca; Ratto, Fulvio; Pini, Roberto
2017-03-01
Imaging and microsurgery procedures based on the photoacoustic effect have recently attracted much attention for cancer treatment. Light absorption in the nanosecond regime triggers thermoelastic processes that induce ultrasound emission and even cavitation. The ultrasound waves may be detected to reconstruct images, while cavitation may be exploited to kill malignant cells. The potential of gold nanorods as contrast agents for photoacoustic imaging has been extensively investigated, but still little is known about their use to trigger cavitation. Here, we investigated the influence of the thermal properties of the environment on the ability of gold nanorods to trigger cavitation, by probing the photoacoustic emission as a function of the excitation fluence. We are confident that these results will provide useful directions for the development of new strategies for therapies based on the photoacoustic effect.
NeuronMetrics: Software for Semi-Automated Processing of Cultured-Neuron Images
Narro, Martha L.; Yang, Fan; Kraft, Robert; Wenk, Carola; Efrat, Alon; Restifo, Linda L.
2007-01-01
Using primary cell culture to screen for changes in neuronal morphology requires specialized analysis software. We developed NeuronMetrics™ for semi-automated, quantitative analysis of two-dimensional (2D) images of fluorescently labeled cultured neurons. It skeletonizes the neuron image using two complementary image-processing techniques, capturing fine terminal neurites with high fidelity. An algorithm was devised to span wide gaps in the skeleton. NeuronMetrics uses a novel strategy based on geometric features called faces to extract a branch-number estimate from complex arbors with numerous neurite-to-neurite contacts, without creating a precise, contact-free representation of the neurite arbor. It estimates total neurite length, branch number, primary neurite number, territory (the area of the convex polygon bounding the skeleton and cell body), and Polarity Index (a measure of neuronal polarity). These parameters provide fundamental information about the size and shape of neurite arbors, which are critical factors for neuronal function. NeuronMetrics streamlines optional manual tasks such as removing noise, isolating the largest primary neurite, and correcting length for self-fasciculating neurites. Numeric data are output in a single text file, readily imported into other applications for further analysis. Written as modules for ImageJ, NeuronMetrics provides practical analysis tools that are easy to use and support batch processing. Depending on the need for manual intervention, processing time for a batch of ~60 2D images is 1.0–2.5 hours, from a folder of images to a table of numeric data. NeuronMetrics’ output accelerates the quantitative detection of mutations and chemical compounds that alter neurite morphology in vitro, and will contribute to the use of cultured neurons for drug discovery. PMID:17270152
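The territory parameter, defined above as the area of the convex polygon bounding the skeleton and cell body, can be computed from the skeleton's pixel coordinates with Andrew's monotone-chain convex hull and the shoelace formula. This is an independent illustration in plain Python, not the NeuronMetrics ImageJ module's code:

```python
def territory(points):
    """Convex-hull area of a set of 2D points (x, y), mirroring the
    'territory' measure: hull via Andrew's monotone chain, area via
    the shoelace formula. Returns 0.0 for degenerate inputs."""
    pts = sorted(set(points))
    if len(pts) < 3:
        return 0.0

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def chain(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]

    hull = chain(pts) + chain(pts[::-1])   # lower + upper hull, CCW order
    return 0.5 * abs(sum(x0 * y1 - x1 * y0
                         for (x0, y0), (x1, y1) in zip(hull, hull[1:] + hull[:1])))
```

For example, a skeleton occupying the corners of a unit square has a territory of 1.0 regardless of how many interior pixels it contains.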
The Principal's Role in Marketing the School: Subjective Interpretations and Personal Influences
ERIC Educational Resources Information Center
Oplatka, Izhar
2007-01-01
The literature on educational marketing to date has been concerned with the ways by which schools market and promote themselves in the community, their strategies to maintain and enhance their image, and the factors affecting parents and children and the processes they undergo when choosing their junior high and high school. Yet, there remains a…
ERIC Educational Resources Information Center
Perrachione, Tyler K.; Wong, Patrick C. M.
2007-01-01
Brain imaging studies of voice perception often contrast activation from vocal and verbal tasks to identify regions uniquely involved in processing voice. However, such a strategy precludes detection of the functional relationship between speech and voice perception. In a pair of experiments involving identifying voices from native and foreign…
Designing Instruction for the Web: Incorporating New Conceptions of the Learning Process.
ERIC Educational Resources Information Center
Hunt, Nancy P.
New technologies such as Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) have led to recent discoveries about how the brain works and how people learn. The interactive capabilities of World Wide Web-based instructional strategies can be employed to better match how we teach with how we know students learn. This paper…
Improved Space Object Observation Techniques Using CMOS Detectors
NASA Astrophysics Data System (ADS)
Schildknecht, T.; Hinze, A.; Schlatter, P.; Silha, J.; Peltonen, J.; Santti, T.; Flohrer, T.
2013-08-01
CMOS sensors, or more generally active pixel sensors (APS), are rapidly replacing CCDs in the consumer camera market. Thanks to significant technological advances in recent years, these devices are starting to compete with CCDs in demanding scientific imaging applications as well, in particular in the astronomy community. CMOS detectors offer a series of inherent advantages over CCDs owing to the structure of their basic pixel cells, each of which contains its own amplifier and readout electronics. The most prominent advantages for space object observations are the extremely fast and flexible readout capabilities, the feasibility of electronic shuttering with precise epoch registration, and the potential to perform image processing operations on-chip and in real time. We analyzed presently applied and proposed optical observation strategies for space debris surveys and space surveillance applications, identified the major design drivers, and assessed the potential benefits of available and future CMOS sensors. The major challenges and design drivers for ground-based and space-based optical observation strategies were analyzed. CMOS detector characteristics were critically evaluated and compared with the established CCD technology, especially with respect to the above-mentioned observations. Similarly, we identified the desirable on-chip processing functionalities that would further enhance object detection and image segmentation. Finally, the characteristics of a particular CMOS sensor available at the Zimmerwald observatory were analyzed through laboratory test measurements.
ERIC Educational Resources Information Center
Khalil, Mohammed K.; Paas, Fred; Johnson, Tristan E.; Su, Yung K.; Payer, Andrew F.
2008-01-01
This research is an effort to best utilize the interactive anatomical images for instructional purposes based on cognitive load theory. Three studies explored the differential effects of three computer-based instructional strategies that use anatomical cross-sections to enhance the interpretation of radiological images. These strategies include:…
The influence of encoding strategy on episodic memory and cortical activity in schizophrenia.
Bonner-Jackson, Aaron; Haut, Kristen; Csernansky, John G; Barch, Deanna M
2005-07-01
Recent work suggests that episodic memory deficits in schizophrenia may be related to disturbances of encoding or retrieval. Schizophrenia patients appear to benefit from instruction in episodic memory strategies. We tested the hypothesis that providing effective encoding strategies to schizophrenia patients enhances encoding-related brain activity and recognition performance. Seventeen schizophrenia patients and 26 healthy comparison subjects underwent functional magnetic resonance imaging scans while performing incidental encoding tasks of words and faces. Subjects were required to make either deep (abstract/concrete) or shallow (alphabetization) judgments for words and deep (gender) judgments for faces, followed by subsequent recognition tests. Schizophrenia and comparison subjects recognized significantly more words encoded deeply than shallowly, activated regions in inferior frontal cortex (Brodmann area 45/47) typically associated with deep and successful encoding of words, and showed greater left frontal activation for the processing of words compared with faces. However, during deep encoding and material-specific processing (words vs. faces), participants with schizophrenia activated regions not activated by control subjects, including several in prefrontal cortex. Our findings suggest that a deficit in use of effective strategies influences episodic memory performance in schizophrenia and that abnormalities in functional brain activation persist even when such strategies are applied.
A memory-efficient staining algorithm in 3D seismic modelling and imaging
NASA Astrophysics Data System (ADS)
Jia, Xiaofeng; Yang, Lu
2017-08-01
The staining algorithm has been proven to generate high signal-to-noise ratio (S/N) images in poorly illuminated areas in two-dimensional cases. In the staining algorithm, the stained wavefield relevant to the target area and the regular source wavefield forward propagate synchronously. Cross-correlating these two wavefields with the backward propagated receiver wavefield separately, we obtain two images: the local image of the target area and the conventional reverse time migration (RTM) image. This imaging process costs massive computer memory for wavefield storage, especially in large scale three-dimensional cases. To make the staining algorithm applicable to three-dimensional RTM, we develop a method to implement the staining algorithm in three-dimensional acoustic modelling in a standard staggered grid finite difference (FD) scheme. The implementation is adaptive to the order of spatial accuracy of the FD operator. The method can be applied to elastic, electromagnetic, and other wave equations. Taking the memory requirement into account, we adopt a random boundary condition (RBC) to backward extrapolate the receiver wavefield and reconstruct it by reverse propagation using the final wavefield snapshot only. Meanwhile, we forward simulate the stained wavefield and source wavefield simultaneously using the nearly perfectly matched layer (NPML) boundary condition. Experiments on a complex geologic model indicate that the RBC-NPML collaborative strategy not only minimizes the memory consumption but also guarantees high quality imaging results. We apply the staining algorithm to three-dimensional RTM via the proposed strategy. Numerical results show that our staining algorithm can produce high S/N images in the target areas with other structures effectively muted.
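Both images described above come from the same zero-lag cross-correlation imaging condition, applied once with the regular source wavefield (conventional RTM image) and once with the stained wavefield (local target image). A toy 1-D sketch of that condition, with illustrative arrays rather than the paper's staggered-grid FD wavefields:

```python
# Zero-lag cross-correlation imaging condition: I(x) = sum over t of
# forward wavefield times backward-propagated receiver wavefield.
# Wavefields are stored as field[t][x]; real implementations use 3-D grids.
def crosscorr_image(fwd, rcv):
    nt, nx = len(fwd), len(fwd[0])
    return [sum(fwd[t][x] * rcv[t][x] for t in range(nt)) for x in range(nx)]

fwd = [[0, 1, 0], [1, 0, 1]]  # forward source (or stained) wavefield snapshots
rcv = [[0, 2, 0], [3, 0, 1]]  # backward-propagated receiver wavefield snapshots
print(crosscorr_image(fwd, rcv))  # → [3, 2, 1]
```

Running the same routine with the stained wavefield in place of the source wavefield yields the local image of the target area.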
Competing in the brave new (deregulated) world: Service innovation and brand strategy for utilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foster, D.; Lathrop, S.; Wolf, A.
This paper will address ways utilities can gain a competitive advantage despite industry turbulence. It details how to create successful brand and innovation strategies and how to link these in order to create successful new products and services for customers. After gaining a solid understanding of industry trends, the first step is to develop an ideal brand image and create a brand strategy around it. Next, companies must determine how to roll out and leverage brands to targeted customer segments over time through a brand architecture. Then, they must build an innovation strategy, defining where and how to apply new technologies and develop new products and services. This strategy is linked to the brand architecture through a product architecture. Finally, companies must be able to successfully develop new products and services through a well-planned innovation process.
Flies and humans share a motion estimation strategy that exploits natural scene statistics
Clark, Damon A.; Fitzgerald, James E.; Ales, Justin M.; Gohl, Daryl M.; Silies, Marion A.; Norcia, Anthony M.; Clandinin, Thomas R.
2014-01-01
Sighted animals extract motion information from visual scenes by processing spatiotemporal patterns of light falling on the retina. The dominant models for motion estimation exploit intensity correlations only between pairs of points in space and time. Moving natural scenes, however, contain more complex correlations. Here we show that fly and human visual systems encode the combined direction and contrast polarity of moving edges using triple correlations that enhance motion estimation in natural environments. Both species extract triple correlations with neural substrates tuned for light or dark edges, and sensitivity to specific triple correlations is retained even as light and dark edge motion signals are combined. Thus, both species separately process light and dark image contrasts to capture motion signatures that can improve estimation accuracy. This striking convergence argues that statistical structures in natural scenes have profoundly affected visual processing, driving a common computational strategy over 500 million years of evolution. PMID:24390225
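The triple correlations in question are three-point spatiotemporal averages of image contrast. A sketch of such a correlator on a tiny 1-D "movie" (the offsets and data here are illustrative, not the specific correlators measured in the paper):

```python
def triple_corr(movie, dx1, dt1, dx2, dt2):
    # Average of the three-point product C(x,t) * C(x+dx1, t+dt1) * C(x+dx2, t+dt2)
    # over all valid (x, t); movie[t][x] holds contrast values.
    nt, nx = len(movie), len(movie[0])
    total, count = 0.0, 0
    for t in range(nt):
        for x in range(nx):
            t1, x1, t2, x2 = t + dt1, x + dx1, t + dt2, x + dx2
            if 0 <= t1 < nt and 0 <= x1 < nx and 0 <= t2 < nt and 0 <= x2 < nx:
                total += movie[t][x] * movie[t1][x1] * movie[t2][x2]
                count += 1
    return total / count if count else 0.0

# A moving edge (light-dark pattern shifting right by one pixel per frame)
# produces a nonzero odd-order correlation, unlike a pairwise correlator,
# which cannot distinguish light-edge from dark-edge motion.
movie = [[1, -1, 0], [0, 1, -1]]
print(triple_corr(movie, 1, 1, 0, 1))  # → 0.5
```

Because the product is odd-order, flipping the contrast polarity of the edge flips the sign of the correlation, which is why light and dark edges carry distinct motion signatures.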
On the Performance Evaluation of 3D Reconstruction Techniques from a Sequence of Images
NASA Astrophysics Data System (ADS)
Eid, Ahmed; Farag, Aly
2005-12-01
The performance evaluation of 3D reconstruction techniques is not a simple problem to solve. This is not only due to the increased dimensionality of the problem but also due to the lack of standardized and widely accepted testing methodologies. This paper presents a unified framework for the performance evaluation of different 3D reconstruction techniques. This framework includes a general problem formalization, different measuring criteria, and a classification method as a first step in standardizing the evaluation process. Performance characterization of two standard 3D reconstruction techniques, stereo and space carving, is also presented. The evaluation is performed on the same data set using an image reprojection testing methodology to reduce the dimensionality of the evaluation domain. Also, different measuring strategies are presented and applied to the stereo and space carving techniques. These measuring strategies have shown consistent results in quantifying the performance of these techniques. Additional experiments are performed on the space carving technique to study the effect of the number of input images and the camera pose on its performance.
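The image-reprojection methodology reduces evaluation to comparing an image reprojected from the reconstructed model against the observed input image; a common scalar summary of that comparison is the per-pixel RMSE. A minimal sketch (the paper's exact measuring criteria may differ):

```python
import math

def reprojection_rmse(observed, reprojected):
    # Root-mean-square pixel error between an observed input image and the
    # image reprojected from the reconstructed 3D model under the same camera pose.
    n, err = 0, 0.0
    for row_o, row_r in zip(observed, reprojected):
        for o, r in zip(row_o, row_r):
            err += (o - r) ** 2
            n += 1
    return math.sqrt(err / n)

observed    = [[10, 20], [30, 40]]   # toy 2x2 intensity images
reprojected = [[12, 22], [28, 42]]
print(reprojection_rmse(observed, reprojected))  # → 2.0
```

Evaluating in image space this way sidesteps the dimensionality problem of comparing 3-D models directly.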
NASA Astrophysics Data System (ADS)
Liu, Shaoyong; Gu, Hanming; Tang, Yongjie; Bingkai, Han; Wang, Huazhong; Liu, Dingjin
2018-04-01
Angle-domain common image-point gathers (ADCIGs) can alleviate the limitations of offset-domain common image-point gathers, and have been widely used for velocity inversion and amplitude variation with angle (AVA) analysis. We propose an effective algorithm for generating ADCIGs in transversely isotropic (TI) media based on the gradient of traveltime in Kirchhoff pre-stack depth migration (KPSDM); the dynamic programming method used to compute traveltimes in TI media does not suffer from shadow zones or the limitations of traveltime interpolation. We also present a specific implementation strategy for ADCIG extraction via KPSDM, comprising three major steps: (1) traveltime computation using a dynamic programming approach in TI media; (2) slowness vector calculation from the gradient of the previously computed traveltime table; (3) construction of illumination vectors and subsurface angles in the migration process. Numerical examples demonstrate the effectiveness of our approach and its potential application to subsequent tomographic velocity inversion and AVA analysis.
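At each image point, steps (2) and (3) reduce to forming source-side and receiver-side slowness vectors from the traveltime gradients and measuring the angle between them; half of that opening angle is the reflection angle binned into the ADCIG. A minimal sketch of that final step (the vectors are illustrative, not output of a TI traveltime solver):

```python
import math

def opening_angle(ps, pr):
    # Angle (degrees) between source-side and receiver-side slowness vectors,
    # each obtained in practice as the spatial gradient of a traveltime table.
    dot = sum(a * b for a, b in zip(ps, pr))
    norm = math.hypot(*ps) * math.hypot(*pr)
    return math.degrees(math.acos(dot / norm))

# Slowness vectors 90 degrees apart -> 45-degree reflection angle.
ps, pr = (1.0, 0.0), (0.0, 1.0)
print(opening_angle(ps, pr) / 2)  # → 45.0
```

In a full implementation the gradient would be computed by finite differences over the KPSDM traveltime table, and the resulting angles accumulated into gathers per image point.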
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newman, G.A.; Commer, M.
Three-dimensional (3D) geophysical imaging is now receiving considerable attention for electrical conductivity mapping of potential offshore oil and gas reservoirs. The imaging technology employs controlled source electromagnetic (CSEM) and magnetotelluric (MT) fields and treats geological media exhibiting transverse anisotropy. Moreover when combined with established seismic methods, direct imaging of reservoir fluids is possible. Because of the size of the 3D conductivity imaging problem, strategies are required exploiting computational parallelism and optimal meshing. The algorithm thus developed has been shown to scale to tens of thousands of processors. In one imaging experiment, 32,768 tasks/processors on the IBM Watson Research Blue Gene/L supercomputer were successfully utilized. Over a 24 hour period we were able to image a large scale field data set that previously required over four months of processing time on distributed clusters based on Intel or AMD processors utilizing 1024 tasks on an InfiniBand fabric. Electrical conductivity imaging using massively parallel computational resources produces results that cannot be obtained otherwise and are consistent with timeframes required for practical exploration problems.
Enterprise-wide PACS: beyond radiology, an architecture to manage all medical images.
Bandon, David; Lovis, Christian; Geissbühler, Antoine; Vallée, Jean-Paul
2005-08-01
Picture archiving and communication systems (PACS) have the vocation to manage all medical images acquired within the hospital. To address the various situations encountered in the imaging specialties, the traditional architecture used for the radiology department has to evolve. We present our preliminary results toward an enterprise-wide PACS intended to support all kinds of image production in medicine, from biomolecular images to whole-body pictures. Our solution is based on an existing radiologic PACS system from which images are distributed through an electronic patient record to all care facilities. This platform is enriched with a flexible integration framework supporting Digital Imaging and Communications in Medicine (DICOM) and DICOM-XML formats. In addition, a highly customizable generic workflow engine is used to drive work processes. Echocardiography; hematology; ear, nose, and throat; and dermatology (including wound follow-up) are the first extensions implemented outside of radiology. We also propose a global strategy for further developments based on three possible architectures for an enterprise-wide PACS.
An image overall complexity evaluation method based on LSD line detection
NASA Astrophysics Data System (ADS)
Li, Jianan; Duan, Jin; Yang, Xu; Xiao, Bo
2017-04-01
In the artificial world, the city's traffic roads and engineered buildings alike contain many linear features. Research on image complexity based on linear information has therefore become an important direction in digital image processing. This paper detects the straight-line information in an image and uses the lines as parameter indices to establish a quantitative, accurate mathematical relationship. We use the LSD line detection algorithm, which has good straight-line detection performance, to detect the lines, and classify the detected lines following an expert consultation strategy. We then use a neural network to train the weights and obtain the weight coefficient of each index. The image complexity is calculated by a complexity calculation model. The experimental results show that the proposed method is effective: the number of straight lines in the image, their degree of dispersion, their uniformity, and other factors all affect the complexity of the image.
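The complexity model described is, at its core, a weighted sum of line-based indices with network-trained weights. A schematic sketch (both the feature values and the weights below are invented for illustration; the paper's trained coefficients are not given):

```python
def complexity_score(features, weights):
    # Linear complexity model: weighted sum of line-based indices such as
    # line count, dispersion, and uniformity. In the paper the weights come
    # from neural-network training; these numbers are purely illustrative.
    return sum(w * f for w, f in zip(weights, features))

features = [120, 0.6, 0.8]    # hypothetical: line count, dispersion, uniformity
weights  = [0.005, 0.3, 0.2]  # hypothetical trained weight coefficients
print(round(complexity_score(features, weights), 2))  # → 0.94
```

Higher line counts and more dispersed, less uniform line layouts push the score up, matching the factors the abstract identifies.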
Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method
Pereira, N F; Sitek, A
2011-01-01
Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated. PMID:20736496
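The maximum likelihood expectation maximization (MLEM) algorithm used for both grid types iterates the multiplicative update x_j ← (x_j / Σ_i A_ij) · Σ_i A_ij · y_i / (Ax)_i. A minimal sketch on a tiny system matrix (a toy example, not the tetrahedral-basis system matrix of the paper):

```python
def mlem(A, y, n_iter=50):
    # MLEM update: x_j <- x_j / sum_i A_ij * sum_i A_ij * y_i / (A x)_i
    # A: system matrix (projection bins x basis functions), y: measured data.
    m, n = len(A), len(A[0])
    x = [1.0] * n
    sens = [sum(A[i][j] for i in range(m)) for j in range(n)]  # sensitivity image
    for _ in range(n_iter):
        proj = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        back = [sum(A[i][j] * y[i] / proj[i] for i in range(m)) for j in range(n)]
        x = [x[j] * back[j] / sens[j] for j in range(n)]
    return x

# Orthogonal toy system with exact (noiseless) data: MLEM recovers the
# true activity, here in a single update.
A = [[1.0, 0.0], [0.0, 1.0]]
y = [2.0, 5.0]
print([round(v, 3) for v in mlem(A, y)])  # → [2.0, 5.0]
```

The update is multiplicative, so estimates stay non-negative, regardless of whether the basis functions are voxels or tetrahedral mesh elements.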
A New Strategy to Land Precisely on the Northern Plains of Mars
NASA Technical Reports Server (NTRS)
Cheng, Yang; Huertas, Andres
2010-01-01
During the Phoenix mission landing site selection process, Mars Reconnaissance Orbiter (MRO) High Resolution Imaging Science Experiment (HiRISE) images revealed widespread, dense rock fields in the northern plains. Automatic rock mapping and subsequent statistical analyses showed 30-90% CFA (cumulative fractional area) covered by rocks larger than 1 meter in dense rock fields around craters. Less dense rock fields had 5-30% rock coverage in terrain away from craters. Detectable meter-scale boulders were found nearly everywhere. These rocks present a risk to spacecraft safety during landing. However, they are the most salient topographic features in this region and can serve as good landmarks for spacecraft localization during landing. In this paper we present a novel strategy that uses the abundance of rocks in the northern plains for spacecraft localization. The paper discusses this approach in three sections: a rock-based landmark terrain relative navigation (TRN) algorithm; the feasibility of the TRN algorithm; and conclusions.
Chen, Wen-Yuan; Wang, Mei; Fu, Zhou-Xing
2014-06-16
Most railway accidents happen at railway crossings. Therefore, how to detect humans or objects present in the risk area of a railway crossing and thus prevent accidents are important tasks. In this paper, three strategies are used to detect the risk area of a railway crossing: (1) we use a terrain drop compensation (TDC) technique to solve the problem of the concavity of railway crossings; (2) we use a linear regression technique to predict the position and length of an object from image processing; (3) we have developed a novel strategy called calculating local maximum Y-coordinate object points (CLMYOP) to obtain the ground points of the object. In addition, image preprocessing is also applied to filter out the noise and successfully improve the object detection. From the experimental results, it is demonstrated that our scheme is an effective and corrective method for the detection of railway crossing risk areas.
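Strategy (2) above is ordinary least-squares line fitting applied to measurements extracted from the processed image. A minimal sketch with toy measurements (not the paper's data):

```python
def fit_line(xs, ys):
    # Ordinary least squares fit y = a*x + b, as used to predict an object's
    # position and length from quantities measured in the processed image.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Toy measurements: image feature -> object extent, roughly y = 2x.
a, b = fit_line([1, 2, 3, 4], [2.1, 4.0, 6.1, 8.0])
print(round(a, 2), round(b, 2))  # → 1.98 0.1
```

Once the slope and intercept are fitted, the same line predicts position and length for new detections.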
Einstein, Andrew J.; Wolff, Steven D.; Manheimer, Eric D.; Thompson, James; Terry, Sylvia; Uretsky, Seth; Pilip, Adalbert; Peters, M. Robert
2009-01-01
Radiation dose from coronary computed tomography angiography may be reduced using a sequential scanning protocol rather than a conventional helical scanning protocol. Here we compare radiation dose and image quality from coronary computed tomography angiography in a single center between an initial period during which helical scanning with electrocardiographically-controlled tube current modulation was used for all patients (n=138) and after adoption of a strategy incorporating sequential scanning whenever appropriate (n=261). Using the sequential-if-appropriate strategy, sequential scanning was employed in 86.2% of patients. Compared to the helical-only strategy, this strategy was associated with a 65.1% dose reduction (mean dose-length product of 305.2 vs. 875.1 mGy·cm and mean effective dose of 5.2 mSv vs. 14.9 mSv, respectively), with no significant change in overall image quality, step artifacts, motion artifacts, or perceived image noise. For the 225 patients undergoing sequential scanning, the dose-length product was 201.9 ± 90.0 mGy·cm, while for patients undergoing helical scanning under either strategy, the dose-length product was 890.9 ± 293.3 mGy·cm (p<0.0001), corresponding to mean effective doses of 3.4 mSv and 15.1 mSv, respectively, a 77.5% reduction. Image quality was significantly greater for the sequential studies, reflecting the poorer image quality in patients undergoing helical scanning in the sequential-if-appropriate strategy. In conclusion, a sequential-if-appropriate diagnostic strategy reduces dose markedly compared to a helical-only strategy, with no significant difference in image quality. PMID:19892048
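The reported effective doses are consistent with converting dose-length product (DLP) using the standard chest conversion coefficient k ≈ 0.017 mSv·mGy⁻¹·cm⁻¹. This is an assumption (the coefficient is not quoted in the abstract), but it reproduces the reported numbers:

```python
def effective_dose(dlp, k=0.017):
    # Effective dose (mSv) from dose-length product (mGy*cm) via the
    # conventional chest conversion coefficient; k is an assumed value.
    return dlp * k

print(round(effective_dose(875.1), 1))  # → 14.9 (mean helical DLP)
print(round(effective_dose(305.2), 1))  # → 5.2  (mean sequential-if-appropriate DLP)
```

The same coefficient also maps the per-protocol DLPs (201.9 and 890.9 mGy·cm) to the quoted 3.4 and 15.1 mSv.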
Cost-effectiveness of angiographic imaging in isolated perimesencephalic subarachnoid hemorrhage.
Kalra, Vivek B; Wu, Xiao; Forman, Howard P; Malhotra, Ajay
2014-12-01
The purpose of this study is to perform a comprehensive cost-effectiveness analysis of all possible permutations of computed tomographic angiography (CTA) and digital subtraction angiography imaging strategies for both initial diagnosis and follow-up imaging in patients with perimesencephalic subarachnoid hemorrhage on noncontrast CT. Each possible imaging strategy was evaluated in a decision tree created with TreeAge Pro Suite 2014, with parameters derived from a meta-analysis of 40 studies and literature values. Base case and sensitivity analyses were performed to assess the cost-effectiveness of each strategy. A Monte Carlo simulation was conducted with distributional variables to evaluate the robustness of the optimal strategy. The base case scenario showed performing initial CTA with no follow-up angiographic studies in patients with perimesencephalic subarachnoid hemorrhage to be the most cost-effective strategy ($5422/quality adjusted life year). Using a willingness-to-pay threshold of $50 000/quality adjusted life year, the most cost-effective strategy based on net monetary benefit is CTA with no follow-up when the sensitivity of initial CTA is >97.9%, and CTA with CTA follow-up otherwise. The Monte Carlo simulation reported CTA with no follow-up to be the optimal strategy at willingness-to-pay of $50 000 in 99.99% of the iterations. Digital subtraction angiography, whether at initial diagnosis or as part of follow-up imaging, is never the optimal strategy in our model. CTA without follow-up imaging is the optimal strategy for evaluation of patients with perimesencephalic subarachnoid hemorrhage when modern CT scanners and a strict definition of perimesencephalic subarachnoid hemorrhage are used. Digital subtraction angiography and follow-up imaging are not optimal as they carry complications and associated costs. © 2014 American Heart Association, Inc.
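Choosing a strategy by net monetary benefit at a given willingness-to-pay threshold means maximizing NMB = WTP × QALYs − cost. A schematic sketch with invented numbers (not the study's decision-tree parameters):

```python
def net_monetary_benefit(qalys, cost, wtp=50_000):
    # NMB = willingness-to-pay * quality-adjusted life years - cost.
    # The strategy with the highest NMB is the optimal one.
    return wtp * qalys - cost

# Hypothetical illustration: two strategies with equal effectiveness but
# different imaging costs; the cheaper one wins on NMB.
print(net_monetary_benefit(20.0, 1_000) > net_monetary_benefit(20.0, 5_000))  # → True
```

When effectiveness differs as well, a higher WTP threshold shifts the comparison toward the more effective (and typically more expensive) strategy, which is why the optimal choice in the study depends on the sensitivity of initial CTA.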
Re-engineering the process of medical imaging physics and technology education and training.
Sprawls, Perry
2005-09-01
The extensive availability of digital technology provides an opportunity for enhancing both the effectiveness and efficiency of virtually all functions in the process of medical imaging physics and technology education and training. This includes degree granting academic programs within institutions and a wide spectrum of continuing education lifelong learning activities. Full achievement of the advantages of technology-enhanced education (e-learning, etc.) requires an analysis of specific educational activities with respect to desired outcomes and learning objectives. This is followed by the development of strategies and resources that are based on established educational principles. The impact of contemporary technology comes from its ability to place learners into enriched learning environments. The full advantage of a re-engineered and implemented educational process involves changing attitudes and functions of learning facilitators (teachers) and resource allocation and sharing both within and among institutions.
Multiparametric Imaging of Organ System Interfaces
Vandoorne, Katrien; Nahrendorf, Matthias
2017-01-01
Cardiovascular diseases are a consequence of genetic and environmental risk factors that together generate arterial wall and cardiac pathologies. Blood vessels connect multiple systems throughout the entire body and allow organs to interact via circulating messengers. These same interactions facilitate nervous and metabolic system influence on cardiovascular health. Multiparametric imaging offers the opportunity to study these interfacing systems’ distinct processes, to quantify their interactions and to explore how these contribute to cardiovascular disease. Noninvasive multiparametric imaging techniques are emerging tools that can further our understanding of this complex and dynamic interplay. PET/MRI and multichannel optical imaging are particularly promising because they can simultaneously sample multiple biomarkers. Preclinical multiparametric diagnostics could help discover clinically relevant biomarker combinations pivotal for understanding cardiovascular disease. Interfacing systems important to cardiovascular disease include the immune, nervous and hematopoietic systems. These systems connect with ‘classical’ cardiovascular organs, like the heart and vasculature, and with the brain. The dynamic interplay between these systems and organs enables processes such as hemostasis, inflammation, angiogenesis, matrix remodeling, metabolism and fibrosis. As the opportunities provided by imaging expand, mapping interconnected systems will help us decipher the complexity of cardiovascular disease and monitor novel therapeutic strategies. PMID:28360260
Wang, Lei; Tian, Wei; Shi, Yongmin
2017-08-07
The morphology and structure of plumbing systems can provide key information on the eruption rate and style of basalt lava fields. The most powerful way to study subsurface geo-bodies is industrial 3D reflection seismic imaging. However, strategies to image subsurface volcanoes are very different from those used for oil and gas reservoirs. In this study, we process seismic data cubes from the Northern Tarim Basin, China, to illustrate how to visualize sills through opacity rendering techniques and how to image conduits by time-slicing. In the first case, we isolated probes by the seismic horizons marking the contacts between sills and encasing strata, applying opacity rendering techniques to extract sills from the seismic cube. The resulting detailed sill morphology shows that the flow direction is from the dome center to the rim. In the second seismic cube, we use time-slices to image the conduits, which correspond to marked discontinuities within the encasing rocks. A set of time-slices obtained at different depths shows that the Tarim flood basalts erupted from central volcanoes, fed by separate pipe-like conduits.
Advances in Small Animal Imaging Systems
NASA Astrophysics Data System (ADS)
Loudos, George K.
2007-11-01
The rapid growth in genetics and molecular biology combined with the development of techniques for genetically engineering small animals has led to increased interest in in vivo laboratory animal imaging during the past few years. For this purpose, new instrumentation, data acquisition strategies, and image processing and reconstruction techniques are being developed, researched and evaluated. The aim of this article is to give a short overview of state-of-the-art technologies for high-resolution and high-sensitivity molecular imaging, primarily positron emission tomography (PET) and single photon emission computed tomography (SPECT). The basic needs of small animal imaging will be described. The evolution of instrumentation over the past two decades, as well as the commercially available systems, will be reviewed. Finally, new trends in detector technology and preliminary results from challenging applications will be presented. For more details a number of references are provided.
Interstitial ablation and imaging of soft tissue using miniaturized ultrasound arrays
NASA Astrophysics Data System (ADS)
Makin, Inder R. S.; Gallagher, Laura A.; Mast, T. Douglas; Runk, Megan M.; Faidi, Waseem; Barthe, Peter G.; Slayton, Michael H.
2004-05-01
A potential alternative to extracorporeal, noninvasive HIFU therapy is minimally invasive, interstitial ultrasound ablation that can be performed laparoscopically or percutaneously. Research in this area at Guided Therapy Systems and Ethicon Endo-Surgery has included development of miniaturized (~3 mm diameter) linear ultrasound arrays capable of high power for bulk tissue ablation as well as broad bandwidth for imaging. An integrated control system allows therapy planning and automated treatment guided by real-time interstitial B-scan imaging. Image quality, challenging because of limited probe dimensions and channel count, is aided by signal processing techniques that improve image definition and contrast. Simulations of ultrasonic heat deposition, bio-heat transfer, and tissue modification provide understanding and guidance for development of treatment strategies. Results from in vitro and in vivo ablation experiments, together with corresponding simulations, will be described. Using methods of rotational scanning, this approach is shown to be capable of clinically relevant ablation rates and volumes.
Efficient robust reconstruction of dynamic PET activity maps with radioisotope decay constraints.
Gao, Fei; Liu, Huafeng; Shi, Pengcheng
2010-01-01
Dynamic PET imaging performs a sequence of data acquisitions to provide visualization and quantification of physiological changes in specific tissues and organs. The reconstruction of activity maps is generally the first step in dynamic PET. State-space H∞ approaches have proven to be robust methods for PET image reconstruction; however, they do not consider temporal constraints during the reconstruction process. In addition, state-space strategies for PET image reconstruction have been computationally prohibitive for practical use because of the need for matrix inversion. In this paper, we present a minimax formulation of the dynamic PET imaging problem in which a radioisotope decay model is employed as a physics-based temporal constraint on the photon counts. Furthermore, a robust steady-state H∞ filter is developed to significantly improve the computational efficiency with minimal loss of accuracy. Experiments are conducted on Monte Carlo-simulated image sequences for quantitative analysis and validation.
Quantitative assessment of Cerenkov luminescence for radioguided brain tumor resection surgery
NASA Astrophysics Data System (ADS)
Klein, Justin S.; Mitchell, Gregory S.; Cherry, Simon R.
2017-05-01
Cerenkov luminescence imaging (CLI) is a developing imaging modality that detects radiolabeled molecules via the visible light emitted during the radioactive decay process. We used a Monte Carlo-based computer simulation to quantitatively compare CLI with direct detection of the ionizing radiation itself as an intraoperative imaging tool for assessing brain tumor margins. Our brain tumor model consisted of a 1 mm spherical tumor remnant embedded up to 5 mm deep below the surface of normal brain tissue. Tumor-to-background contrasts ranging from 2:1 to 10:1 were considered. We quantified all decay signals (e±, gamma photons, Cerenkov photons) reaching the surface of the brain volume. CLI proved to be the most sensitive method for detecting the tumor volume in both imaging and non-imaging strategies, as assessed by contrast-to-noise ratio and by the receiver operating characteristic output of a channelized Hotelling observer.
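The abstract scores detectability by contrast-to-noise ratio. As a rough illustration of that metric, a minimal sketch using the common definition CNR = (mean signal − mean background) / background standard deviation; this definition is an assumption, since the abstract does not spell out its formula, and all array shapes and numbers below are invented:

```python
import numpy as np

def contrast_to_noise_ratio(image, signal_mask, background_mask):
    # Common CNR definition: (mean signal - mean background) / background std.
    signal = image[signal_mask]
    background = image[background_mask]
    return (signal.mean() - background.mean()) / background.std()

# Synthetic example: a bright "tumor remnant" patch on a noisy background.
rng = np.random.default_rng(0)
img = rng.normal(loc=10.0, scale=2.0, size=(64, 64))
img[28:36, 28:36] += 20.0                       # embedded high-uptake region
sig = np.zeros_like(img, dtype=bool)
sig[28:36, 28:36] = True
cnr = contrast_to_noise_ratio(img, sig, ~sig)   # roughly 20 / 2 = 10 here
```

With a +20 signal on noise of standard deviation 2, the computed CNR lands near 10, which is the kind of summary number a detectability comparison like the one above would rank.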
An improved monomeric infrared fluorescent protein for neuronal and tumour brain imaging.
Yu, Dan; Gustafson, William Clay; Han, Chun; Lafaye, Céline; Noirclerc-Savoye, Marjolaine; Ge, Woo-Ping; Thayer, Desiree A; Huang, Hai; Kornberg, Thomas B; Royant, Antoine; Jan, Lily Yeh; Jan, Yuh Nung; Weiss, William A; Shu, Xiaokun
2014-05-15
Infrared fluorescent proteins (IFPs) are ideal for in vivo imaging, and monomeric versions of these proteins can be advantageous as protein tags or for sensor development. In contrast to GFP, which requires only molecular oxygen for chromophore maturation, phytochrome-derived IFPs incorporate biliverdin (BV) as the chromophore. However, BV varies in concentration across different cells and organisms. Here we engineered cells to express the haem oxygenase responsible for BV biosynthesis and a brighter monomeric IFP mutant (IFP2.0). Together, these tools improve the imaging capabilities of IFP2.0 compared with monomeric IFP1.4 and dimeric iRFP. By targeting IFP2.0 to the plasma membrane, we demonstrate robust labelling of neuronal processes in Drosophila larvae. We also show that this strategy improves sensitivity when imaging brain tumours in whole mice. Our work shows promise for the application of IFPs in protein labelling and in vivo imaging.
NASA Astrophysics Data System (ADS)
Feng, Di; Fang, Qimeng; Huang, Huaibo; Zhao, Zhengqi; Song, Ningfang
2017-12-01
The development and implementation of a practical instrument based on an embedded technique for autofocus and polarization alignment of polarization-maintaining fiber is presented. For focusing efficiency and stability, an image-based focusing algorithm that fully considers both image-definition evaluation and the focusing search strategy was used to accomplish autofocus. To improve alignment accuracy, various image-based alignment-detection algorithms were developed with high calculation speed and strong robustness. The instrument can be operated as a standalone device with real-time processing and convenient operation. The hardware construction, software interface, and image-based algorithms of the main modules are described. Additionally, several image simulation experiments were carried out to analyze the accuracy of the above alignment-detection algorithms. Both the simulation and experiment results indicate that the instrument can achieve a polarization alignment accuracy better than ±0.1 deg.
Image Decoding of Photonic Crystal Beads Array in the Microfluidic Chip for Multiplex Assays
Yuan, Junjie; Zhao, Xiangwei; Wang, Xiaoxia; Gu, Zhongze
2014-01-01
Along with the miniaturization and intellectualization of biomedical instruments, the increasing demand for health monitoring anywhere and anytime elevates the need for the development of point-of-care testing (POCT). Photonic crystal beads (PCBs), as a promising kind of encoded microcarrier, can be integrated with microfluidic chips to realize cost-effective and highly sensitive multiplex bioassays. However, the characteristics of the PCBs and the unique detection manner make automated analysis difficult. In this paper, we propose a strategy that takes advantage of automated image processing for color decoding of a PCBs array in a microfluidic chip for multiplex assays. By processing and aligning two modal images, epi-fluorescence and epi-white light, every intact bead in the image is accurately extracted and decoded by its PC color, which stands for the target species. This method, which shows high robustness and accuracy under various configurations, eliminates the high hardware requirement of spectroscopic analysis and user-interaction software, and provides adequate support for the general automated analysis of POCT based on PCBs arrays. PMID:25341876
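The core decoding step the abstract describes, extracting each bead and reading its structural color, can be sketched as follows. This is an illustrative reconstruction, not the authors' pipeline: it assumes the two modal images are already co-registered, and all thresholds and array layouts are invented. Supra-threshold fluorescence pixels are grouped into candidate beads, and the mean RGB of each region in the white-light image serves as its color code:

```python
import numpy as np
from scipy import ndimage

def decode_beads(fluorescence, white_light, threshold):
    # Label connected supra-threshold regions as candidate beads, then read
    # each bead's structural color (code identity) from the co-registered
    # white-light image and its assay readout from the fluorescence image.
    mask = fluorescence > threshold
    labels, n = ndimage.label(mask)
    results = []
    for i in range(1, n + 1):
        region = labels == i
        color = white_light[region].mean(axis=0)   # mean RGB -> bead code
        intensity = fluorescence[region].mean()    # assay signal
        results.append((color, intensity))
    return results

# Tiny synthetic chip image: one "red-coded" and one "green-coded" bead.
fluor = np.zeros((20, 20))
fluor[2:6, 2:6] = 5.0
fluor[10:15, 10:15] = 8.0
wl = np.zeros((20, 20, 3))
wl[2:6, 2:6] = [1.0, 0.0, 0.0]
wl[10:15, 10:15] = [0.0, 1.0, 0.0]
beads = decode_beads(fluor, wl, 1.0)
```

Mapping each decoded color back to a known PCB code, and each intensity to the corresponding analyte concentration, would be the multiplex-assay step that follows.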
3D characterization of trans- and inter-lamellar fatigue crack in (α + β) Ti alloy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Babout, Laurent, E-mail: Laurent.babout@p.lodz.pl; Jopek, Łukasz; Preuss, Michael
2014-12-15
This paper presents a three-dimensional image processing strategy developed to quantitatively analyze and correlate the path of a fatigue crack with the lamellar microstructure found in Ti-6246. The analysis is carried out on X-ray microtomography images acquired in situ during uniaxial fatigue testing. The crack, the primary β-grain boundaries and the α lamellae have been segmented separately and merged for the first time to allow a better characterization and understanding of their mutual interaction. This has particularly emphasized the role of translamellar crack growth at a very high propagation angle with regard to the lamellar orientation, supporting the central role of colonies favorably oriented for basal 〈a〉 slip in guiding the crack in the fully lamellar microstructure of the Ti alloy. - Highlights: • 3D tomography images reveal strong short fatigue crack interaction with α lamellae. • Proposed 3D image processing methodology makes their segmentation possible. • Crack-lamellae orientation maps show prevalence of translamellar cracking. • Angle study confirms the influence of basal/prismatic slip on crack path.
A Scalable Distributed Approach to Mobile Robot Vision
NASA Technical Reports Server (NTRS)
Kuipers, Benjamin; Browning, Robert L.; Gribble, William S.
1997-01-01
This paper documents our progress during the first year of work on our original proposal entitled 'A Scalable Distributed Approach to Mobile Robot Vision'. We are pursuing a strategy for real-time visual identification and tracking of complex objects which does not rely on specialized image-processing hardware. In this system perceptual schemas represent objects as a graph of primitive features. Distributed software agents identify and track these features, using variable-geometry image subwindows of limited size. Active control of imaging parameters and selective processing makes simultaneous real-time tracking of many primitive features tractable. Perceptual schemas operate independently from the tracking of primitive features, so that real-time tracking of a set of image features is not hurt by latency in recognition of the object that those features make up. The architecture allows semantically significant features to be tracked with limited expenditure of computational resources, and allows the visual computation to be distributed across a network of processors. Early experiments are described which demonstrate the usefulness of this formulation, followed by a brief overview of our more recent progress (after the first year).
Arsenovic, Paul T; Bathula, Kranthidhar; Conway, Daniel E
2017-04-11
The LINC complex has been hypothesized to be the critical structure that mediates the transfer of mechanical forces from the cytoskeleton to the nucleus. Nesprin-2G is a key component of the LINC complex that connects the actin cytoskeleton to membrane proteins (SUN domain proteins) in the perinuclear space. These membrane proteins connect to lamins inside the nucleus. Recently, a Förster Resonance Energy Transfer (FRET)-force probe was cloned into mini-Nesprin-2G (Nesprin-TS (tension sensor)) and used to measure tension across Nesprin-2G in live NIH3T3 fibroblasts. This paper describes the process of using Nesprin-TS to measure LINC complex forces in NIH3T3 fibroblasts. To extract FRET information from Nesprin-TS, an outline of how to spectrally unmix raw spectral images into acceptor and donor fluorescent channels is also presented. Using open-source software (ImageJ), images are pre-processed and transformed into ratiometric images. Finally, FRET data of Nesprin-TS is presented, along with strategies for how to compare data across different experimental groups.
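The protocol's final step turns the unmixed acceptor and donor channels into a ratiometric image. A minimal sketch of that per-pixel ratio follows; the function name and the NaN-masking convention for near-zero donor pixels are illustrative choices, not taken from the paper:

```python
import numpy as np

def fret_ratio_image(acceptor, donor, donor_floor=1e-6):
    # Pixel-wise FRET index: acceptor emission / donor emission.
    # Pixels with negligible donor signal are masked as NaN so they do not
    # produce meaningless (divide-by-zero) ratios downstream.
    acceptor = np.asarray(acceptor, dtype=float)
    donor = np.asarray(donor, dtype=float)
    ratio = np.full(donor.shape, np.nan)
    valid = donor > donor_floor
    ratio[valid] = acceptor[valid] / donor[valid]
    return ratio

# Tiny example: the dark pixel (no donor signal) gets masked out.
acceptor = np.array([[2.0, 0.0], [4.0, 1.0]])
donor = np.array([[1.0, 0.0], [2.0, 2.0]])
ratio = fret_ratio_image(acceptor, donor)
```

Comparing mean ratios over matched regions of interest is then how different experimental groups (e.g. tension-sensor vs. headless control constructs) would typically be contrasted.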
Soares, Fabiano Araujo; Carvalho, João Luiz Azevedo; Miosso, Cristiano Jacques; de Andrade, Marcelino Monteiro; da Rocha, Adson Ferreira
2015-09-17
In surface electromyography (surface EMG, or S-EMG), conduction velocity (CV) refers to the velocity at which the motor unit action potentials (MUAPs) propagate along the muscle fibers during contractions. The CV is related to the type and diameter of the muscle fibers, ion concentration, pH, and the firing rate of the motor units (MUs). The CV can be used in the evaluation of the contractile properties of MUs and of muscle fatigue. The most popular methods for CV estimation are those based on maximum likelihood estimation (MLE). This work proposes an algorithm for estimating CV from S-EMG signals using digital image processing techniques. The proposed approach is demonstrated and evaluated using both simulated and experimentally acquired multichannel S-EMG signals. We show that the proposed algorithm is as precise and accurate as the MLE method in typical conditions of noise and CV. The proposed method is not susceptible to errors associated with MUAP propagation direction or inadequate initialization parameters, which are common with the MLE algorithm. Image processing-based approaches may be useful in S-EMG analysis to extract different physiological parameters from multichannel S-EMG signals. Other new methods based on image processing could also be developed to help solve other tasks in EMG analysis, such as estimation of the CV of individual MUs, localization and tracking of innervation zones, and study of MU recruitment strategies.
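To make the quantity being estimated concrete: the MUAP delay between two adjacent channels can be read off a cross-correlation peak, and CV = inter-electrode distance / delay. This plain delay-based sketch is neither the paper's image-processing algorithm nor the MLE method; the sampling rate, electrode spacing, and waveform below are all made up:

```python
import numpy as np

def conduction_velocity(ch1, ch2, fs, electrode_distance):
    # Lag (in samples) maximizing the cross-correlation of the two channels,
    # converted to seconds, gives the propagation delay between electrodes.
    corr = np.correlate(ch2, ch1, mode="full")
    lag = np.argmax(corr) - (len(ch1) - 1)   # samples by which ch2 lags ch1
    delay = lag / fs
    return electrode_distance / delay

# Synthetic MUAP-like pulse propagating past electrodes 10 mm apart at 4 m/s.
fs = 10_000.0                        # sampling rate, Hz
d = 0.010                            # inter-electrode distance, m
t = np.arange(0, 0.05, 1 / fs)
pulse = lambda t0: np.exp(-((t - t0) ** 2) / (2 * 0.001 ** 2))
ch1 = pulse(0.020)
ch2 = pulse(0.020 + d / 4.0)         # arrives 2.5 ms later
cv = conduction_velocity(ch1, ch2, fs, d)   # close to 4.0 m/s
```

Real S-EMG is far noisier than this clean pulse, which is precisely why multichannel estimators such as MLE or the image-processing approach above are used in practice.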
Graph-based Data Modeling and Analysis for Data Fusion in Remote Sensing
NASA Astrophysics Data System (ADS)
Fan, Lei
Hyperspectral imaging provides the capability of increased sensitivity and discrimination over traditional imaging methods by combining standard digital imaging with spectroscopic methods. For each individual pixel in a hyperspectral image (HSI), a continuous spectrum is sampled as the spectral reflectance/radiance signature to facilitate identification of ground cover and surface material. The abundant spectrum knowledge allows all available information from the data to be mined. The superior qualities of hyperspectral imaging allow wide applications such as mineral exploration, agricultural monitoring, and ecological surveillance. The processing of massive high-dimensional HSI datasets is a challenge since many data processing techniques have a computational complexity that grows exponentially with the dimension. Besides, an HSI dataset may contain a limited number of degrees of freedom due to the high correlations between data points and among the spectra. On the other hand, merely taking advantage of the sampled spectrum of an individual HSI data point may produce inaccurate results due to the mixed nature of raw HSI data, such as mixed pixels, optical interference, etc. Fusion strategies are widely adopted in data processing to achieve better performance, especially in the field of classification and clustering. There are mainly three types of fusion strategies, namely low-level data fusion, intermediate-level feature fusion, and high-level decision fusion. Low-level data fusion combines multi-source data that is expected to be complementary or cooperative. Intermediate-level feature fusion aims at selection and combination of features to remove redundant information. Decision-level fusion exploits a set of classifiers to provide more accurate results. The fusion strategies have wide applications including HSI data processing. With the fast development of multiple remote sensing modalities, e.g. Very High Resolution (VHR) optical sensors, LiDAR, etc., fusion of multi-source data can in principle produce more detailed information than each single source. On the other hand, besides the abundant spectral information contained in HSI data, features such as texture and shape may be employed to represent data points from a spatial perspective. Furthermore, feature fusion also includes the strategy of removing redundant and noisy features in the dataset. One of the major problems in machine learning and pattern recognition is to develop appropriate representations for complex nonlinear data. In HSI processing, a particular data point is usually described as a vector with coordinates corresponding to the intensities measured in the spectral bands. This vector representation permits the application of linear and nonlinear transformations with linear algebra to find an alternative representation of the data. More generally, HSI is multi-dimensional in nature and the vector representation may lose the contextual correlations. Tensor representation provides a more sophisticated modeling technique and a higher-order generalization to linear subspace analysis. In graph theory, data points can be generalized as nodes with connectivities measured from the proximity of a local neighborhood. The graph-based framework efficiently characterizes the relationships among the data and allows for convenient mathematical manipulation in many applications, such as data clustering, feature extraction, feature selection and data alignment. In this thesis, graph-based approaches to multi-source feature and data fusion in remote sensing are explored. We mainly investigate the fusion of spatial, spectral and LiDAR information with linear and multilinear algebra under a graph-based framework for data clustering and classification problems.
Designing a stable feedback control system for blind image deconvolution.
Cheng, Shichao; Liu, Risheng; Fan, Xin; Luo, Zhongxuan
2018-05-01
Blind image deconvolution is one of the main low-level vision problems, with wide applications. Many previous works manually design regularization to simultaneously estimate the latent sharp image and the blur kernel under a maximum a posteriori framework. However, it has been demonstrated that such joint estimation strategies may lead to undesired trivial solutions. In this paper, we present a novel perspective, using a stable feedback control system, to simulate the latent sharp image propagation. The controller of our system consists of regularization and guidance, which decide the sparsity and sharp features of the latent image, respectively. Furthermore, the formation model of the blurred image is introduced into the feedback process to keep the image restoration from deviating from the stable point. The stability analysis of the system indicates that the latent image propagation in the blind deconvolution task can be efficiently estimated and controlled by cues and priors, so the kernel estimation used for image restoration becomes more precise. Experimental results show that our system is effective for image propagation and performs favorably against state-of-the-art blind image deconvolution methods on different benchmark image sets and special blurred images. Copyright © 2018 Elsevier Ltd. All rights reserved.
Warped document image correction method based on heterogeneous registration strategies
NASA Astrophysics Data System (ADS)
Tong, Lijing; Zhan, Guoliang; Peng, Quanyao; Li, Yang; Li, Yifan
2013-03-01
With the popularity of digital cameras and the application requirements of digitalized document images, using digital cameras to digitalize documents has become an irresistible trend. However, warping of the document surface seriously degrades the performance of Optical Character Recognition (OCR) systems. To improve the visual quality and the OCR rate of warped document images, this paper proposes a correction method based on heterogeneous registration strategies. The method mosaics two warped images of the same document taken from different viewpoints. Firstly, two feature points are selected from one image. Then the two feature points are registered in the other image based on heterogeneous registration strategies. Finally, the two images are mosaicked, and the best mosaicked image is selected according to the OCR recognition results. As a result, the distortions in the best mosaicked image are mostly removed and the OCR results are improved markedly. Experimental results show that the proposed method resolves the issue of warped document image correction more effectively.
Implementation of Enterprise Imaging Strategy at a Chinese Tertiary Hospital.
Li, Shanshan; Liu, Yao; Yuan, Yifang; Li, Jia; Wei, Lan; Wang, Yuelong; Fei, Xiaolu
2018-01-04
Medical images have become increasingly important in clinical practice and medical research, and the need to manage images at the hospital level has become urgent in China. To unify patient identification across examinations from different medical specialties, increase convenient, authenticated access to medical images, and make medical images suitable for further artificial-intelligence investigations, we implemented an enterprise imaging strategy at Xuanwu Hospital, adopting an image integration platform as the main tool. Workflow re-engineering and business system transformation were also performed to ensure the quality and content of the imaging data. By implementing the enterprise imaging strategy, more than 54 million medical images and approximately 1 million medical reports were integrated, and uniform patient identification, images, and report integration were made available to the medical staff and accessible via a mobile application. However, to integrate all medical images from different specialties across a hospital and ensure that the images and reports are qualified for data mining, further policy and management measures are still needed.
Guo, Kun; Soornack, Yoshi; Settle, Rebecca
2018-03-05
Our capability to recognize facial expressions of emotion under different viewing conditions implies the existence of an invariant expression representation. As natural visual signals are often distorted and our perceptual strategy changes with external noise level, it is essential to understand how expression perception is susceptible to face distortion and whether the same facial cues are used to process high- and low-quality face images. We systematically manipulated face image resolution (experiment 1) and blur (experiment 2), and measured participants' expression categorization accuracy, perceived expression intensity and associated gaze patterns. Our analysis revealed a reasonable tolerance to face distortion in expression perception. Reducing image resolution to as low as 48 × 64 pixels or increasing image blur to 15 cycles/image had little impact on expression assessment and associated gaze behaviour. Further distortion led to decreased expression categorization accuracy and intensity ratings, increased reaction time and fixation duration, and a stronger central fixation bias that was not driven by distortion-induced changes in local image saliency. Interestingly, the observed distortion effects were expression-dependent, with less of an impact on happy and surprise expressions, suggesting this distortion-invariant facial expression perception might be achieved through a categorical model involving a non-linear configural combination of local facial features. Copyright © 2018 Elsevier Ltd. All rights reserved.
Improving waveform inversion using modified interferometric imaging condition
NASA Astrophysics Data System (ADS)
Guo, Xuebao; Liu, Hong; Shi, Ying; Wang, Weihong; Zhang, Zhen
2017-12-01
Similar to reverse-time migration, full waveform inversion in the time domain is a memory-intensive processing method. The computational storage size for waveform inversion mainly depends on the model size and the time recording length. In general, 3D and 4D data volumes need to be saved for 2D and 3D waveform inversion gradient calculations, respectively. Even the boundary-region wavefield-saving strategy creates a huge storage demand. Using the last two slices of the wavefield to reconstruct wavefields at other moments through the random boundary avoids the need to store a large number of wavefields; however, the traditional random boundary method is less effective at low frequencies. In this study, we follow a new random boundary designed to regenerate random velocity anomalies in the boundary region for each shot of each iteration. With the random boundary condition, results in less-illuminated areas are more seriously affected by random scattering than other areas due to the lack of coverage. In this paper, we have replaced direct correlation for computing the waveform inversion gradient with modified interferometric imaging, which enhances the continuity of the imaging path and reduces noise interference. The new imaging condition is a weighted average of extended imaging gathers and can be directly used in the gradient computation. In this process, we have not changed the objective function, and the role of the imaging condition is similar to regularization. The window size for the modified interferometric imaging condition-based waveform inversion plays an important role in this process. The numerical examples show that the proposed method significantly enhances waveform inversion performance.
Data Processing for the Space-Based Desis Hyperspectral Sensor
NASA Astrophysics Data System (ADS)
Carmona, E.; Avbelj, J.; Alonso, K.; Bachmann, M.; Cerra, D.; Eckardt, A.; Gerasch, B.; Graham, L.; Günther, B.; Heiden, U.; Kerr, G.; Knodt, U.; Krutz, D.; Krawcyk, H.; Makarau, A.; Miller, R.; Müller, R.; Perkins, R.; Walter, I.
2017-05-01
The German Aerospace Center (DLR) and Teledyne Brown Engineering (TBE) have established a collaboration to develop and operate a new space-based hyperspectral sensor, the DLR Earth Sensing Imaging Spectrometer (DESIS). DESIS will provide space-based hyperspectral data in the VNIR with high spectral resolution and near-global coverage. While TBE provides the platform and infrastructure for operation of the DESIS instrument on the International Space Station, DLR is responsible for providing the instrument and the processing software. The DESIS instrument is equipped with novel characteristics for an imaging spectrometer, such as high spectral resolution (2.55 nm), a mirror pointing unit, and a CMOS sensor operated in rolling shutter mode. We present here an overview of the DESIS instrument and its processing chain, emphasizing the effect of these novel characteristics on the data processing and the final data products. Furthermore, we analyse in more detail the effect of the rolling shutter on the DESIS data and possible mitigation/correction strategies.
Laule, Cornelia; Vavasour, Irene M; Shahinfard, Elham; Mädler, Burkhard; Zhang, Jing; Li, David K B; MacKay, Alex L; Sirrs, Sandra M
2018-05-01
Late-onset adult Krabbe disease is a very rare demyelinating leukodystrophy, affecting less than 1 in a million people. Hematopoietic stem cell transplantation (HSCT) strategies can stop the accumulation of toxic metabolites that damage myelin-producing cells. We used quantitative advanced imaging metrics to longitudinally assess the impact of HSCT on brain abnormalities in adult-onset Krabbe disease. A 42-year-old female with late-onset Krabbe disease and an age/sex-matched healthy control underwent annual 3T MRI (baseline was immediately prior to HSCT for the Krabbe subject). Imaging included conventional scans, myelin water imaging, diffusion tensor imaging, and magnetic resonance spectroscopy. Brain abnormalities far beyond those visible on conventional imaging were detected, suggesting a global pathological process occurs in Krabbe disease with adult-onset etiology, with myelin being more affected than axons, and evidence of wide-spread gliosis. After HSCT, our patient showed clinical stability in all measures, as well as improvement in gait, dysarthria, and pseudobulbar affect at 7.5 years post-transplant. No MRI evidence of worsening demyelination and axonal loss was observed up to 4 years post-allograft. Clinical evidence and stability of advanced MR measures related to myelin and axons supports HSCT as an effective treatment strategy for stopping progression associated with late-onset Krabbe disease. Copyright © 2018 by the American Society of Neuroimaging.
Gender-related differences in lateralization of hippocampal activation and cognitive strategy.
Frings, Lars; Wagner, Kathrin; Unterrainer, Josef; Spreer, Joachim; Halsband, Ulrike; Schulze-Bonhage, Andreas
2006-03-20
Gender-related differences in brain activation patterns and their lateralization associated with cognitive functions have been reported in the field of language, emotion, and working memory. Differences have been hypothesized to be due to different cognitive strategies. The aim of the present study was to test whether lateralization of brain activation in the hippocampi during memory processing differs between the sexes. We acquired functional magnetic resonance imaging data from healthy female and male study participants performing a spatial memory task and quantitatively assessed the lateralization of hippocampal activation in each participant. Hippocampal activation was significantly more left lateralized in women, and more right lateralized in men. Correspondingly, women rated their strategy as being more verbal than men did.
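The record above says hippocampal lateralization was "quantitatively assessed" per participant but does not give the measure. The conventional choice is the lateralization index LI = (L - R) / (L + R); treat this sketch (and the voxel counts) as an illustrative assumption, not the paper's actual method:

```python
def lateralization_index(left: float, right: float) -> float:
    """LI = (L - R) / (L + R): +1 is fully left-lateralized, -1 fully right."""
    total = left + right
    if total == 0:
        raise ValueError("no activation in either hemisphere")
    return (left - right) / total

# Hypothetical supra-threshold voxel counts: 120 in the left hippocampus, 80 in the right
print(lateralization_index(120, 80))  # -> 0.2 (left-lateralized)
```

Positive values would correspond to the more left-lateralized pattern reported for women, negative values to the right-lateralized pattern reported for men.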
Justification of automated decision-making: medical explanations as medical arguments.
Shankar, R. D.; Musen, M. A.
1999-01-01
People use arguments to justify their claims. Computer systems use explanations to justify their conclusions. We are developing WOZ, an explanation framework that justifies the conclusions of a clinical decision-support system. WOZ's central component is the explanation strategy that decides what information justifies a claim. The strategy uses Toulmin's argument structure to define pieces of information and to orchestrate their presentation. WOZ uses explicit models that abstract the core aspects of the framework, such as the explanation strategy. In this paper, we present the use of arguments, the modeling of explanations, and the explanation process used in WOZ. WOZ exploits the wealth of naturally occurring arguments, and thus can generate convincing medical explanations. PMID:10566388
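Toulmin's argument structure, which WOZ uses to organize its justifications, decomposes an argument into six standard components (claim, data, warrant, backing, qualifier, rebuttal). A minimal sketch under that standard decomposition; the class and the clinical example are illustrative, not WOZ's actual model:

```python
from dataclasses import dataclass

@dataclass
class ToulminArgument:
    claim: str          # the conclusion being justified
    data: str           # evidence grounding the claim
    warrant: str        # rule licensing the step from data to claim
    backing: str = ""   # support for the warrant itself
    qualifier: str = "" # strength of the claim, e.g. "presumably"
    rebuttal: str = ""  # conditions under which the claim fails

    def render(self) -> str:
        parts = [("Claim", self.claim), ("Data", self.data),
                 ("Warrant", self.warrant), ("Backing", self.backing),
                 ("Qualifier", self.qualifier), ("Rebuttal", self.rebuttal)]
        return "\n".join(f"{k}: {v}" for k, v in parts if v)

# Hypothetical clinical example
arg = ToulminArgument(
    claim="Recommend drug X",
    data="Patient has condition Y and no contraindications",
    warrant="Guideline Z indicates drug X for condition Y",
    qualifier="presumably")
print(arg.render())
```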
Multifunctional quantum dots and liposome complexes in drug delivery
Wang, Qi; Chao, Yimin
2018-01-01
Incorporating both diagnostic and therapeutic functions into a single nanoscale system is an effective modern drug delivery strategy. Combining liposomes with semiconductor quantum dots (QDs) has great potential to achieve such dual functions, referred to in this review as a liposomal QD hybrid system (L-QD). Here we review the recent literature dealing with the design and application of L-QD for advances in bio-imaging and drug delivery. After a summary of L-QD synthesis processes and evaluation of their properties, we will focus on their multifunctional applications, ranging from in vitro cell imaging to theranostic drug delivery approaches. PMID:28866655
Multifunctional quantum dots and liposome complexes in drug delivery.
Wang, Qi; Chao, Yi-Min
2017-09-03
Incorporating both diagnostic and therapeutic functions into a single nanoscale system is an effective modern drug delivery strategy. Combining liposomes with semiconductor quantum dots (QDs) has great potential to achieve such dual functions, referred to in this review as a liposomal QD hybrid system (L-QD). Here we review the recent literature dealing with the design and application of L-QD for advances in bio-imaging and drug delivery. After a summary of L-QD synthesis processes and evaluation of their properties, we will focus on their multifunctional applications, ranging from in vitro cell imaging to theranostic drug delivery approaches.
Workshop on imaging science development for cancer prevention and preemption.
Kelloff, Gary J; Sullivan, Daniel C; Baker, Houston; Clarke, Lawrence P; Nordstrom, Robert; Tatum, James L; Dorfman, Gary S; Jacobs, Paula; Berg, Christine D; Pomper, Martin G; Birrer, Michael J; Tempero, Margaret; Higley, Howard R; Petty, Brenda Gumbs; Sigman, Caroline C; Maley, Carlo; Sharma, Prateek; Wax, Adam; Ginsberg, Gregory G; Dannenberg, Andrew J; Hawk, Ernest T; Messing, Edward M; Grossman, H Barton; Harisinghani, Mukesh; Bigio, Irving J; Griebel, Donna; Henson, Donald E; Fabian, Carol J; Ferrara, Katherine; Fantini, Sergio; Schnall, Mitchell D; Zujewski, Jo Anne; Hayes, Wendy; Klein, Eric A; DeMarzo, Angelo; Ocak, Iclal; Ketterling, Jeffrey A; Tempany, Clare; Shtern, Faina; Parnes, Howard L; Gomez, Jorge; Srivastava, Sudhir; Szabo, Eva; Lam, Stephen; Seibel, Eric J; Massion, Pierre; McLennan, Geoffrey; Cleary, Kevin; Suh, Robert; Burt, Randall W; Pfeiffer, Ruth M; Hoffman, John M; Roy, Hemant K; Wang, Thomas; Limburg, Paul J; El-Deiry, Wafik S; Papadimitrakopoulou, Vali; Hittelman, Walter N; MacAulay, Calum; Veltri, Robert W; Solomon, Diane; Jeronimo, Jose; Richards-Kortum, Rebecca; Johnson, Karen A; Viner, Jaye L; Stratton, Steven P; Rajadhyaksha, Milind; Dhawan, Atam
2007-01-01
The concept of intraepithelial neoplasm (IEN) as a near-obligate precursor of cancers has generated opportunities to examine drug or device intervention strategies that may reverse or retard the sometimes lengthy process of carcinogenesis. Chemopreventive agents with high therapeutic indices, well-monitored for efficacy and safety, are greatly needed, as is development of less invasive or minimally disruptive visualization and assessment methods to safely screen nominally healthy but at-risk patients, often for extended periods of time and at repeated intervals. Imaging devices, alone or in combination with anticancer drugs, may also provide novel interventions to treat or prevent precancer.
Heterobimetallic Complexes for Theranostic Applications.
Fernández-Moreira, Vanesa; Gimeno, M Concepción
2018-03-07
The design of more efficient anticancer drugs requires a deeper understanding of their biodistribution and mechanism of action. Cell imaging agents could help to gain insight into biological processes and, consequently, the best strategy for attaining suitable scaffolds in which both biological and imaging properties are maximized. A new concept arises in this field that is the combination of two metal fragments as collaborative partners to provide the precise emissive properties to visualize the cell as well as the optimum cytotoxic activity to build more potent and selective chemotherapeutic agents. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Navigation Strategies for Primitive Solar System Body Rendezvous and Proximity Operations
NASA Technical Reports Server (NTRS)
Getzandanner, Kenneth M.
2011-01-01
A wealth of scientific knowledge regarding the composition and evolution of the solar system can be gained through reconnaissance missions to primitive solar system bodies. This paper presents analysis of a baseline navigation strategy designed to address the unique challenges of primitive body navigation. Linear covariance and Monte Carlo error analysis was performed on a baseline navigation strategy using simulated data from a design reference mission (DRM). The objective of the DRM is to approach, rendezvous, and maintain a stable orbit about the near-Earth asteroid 4660 Nereus. The outlined navigation strategy and resulting analyses, however, are not necessarily limited to this specific target asteroid, as they may be applicable to a diverse range of mission scenarios. The baseline navigation strategy included simulated data from Deep Space Network (DSN) radiometric tracking and optical image processing (OpNav). Results from the linear covariance and Monte Carlo analyses suggest the DRM navigation strategy is sufficient to approach and perform proximity operations in the vicinity of the target asteroid with meter-level accuracy.
Li, Jinhui; Ji, Yifei; Zhang, Yongsheng; Zhang, Qilei; Huang, Haifeng; Dong, Zhen
2018-04-10
Spaceborne synthetic aperture radar (SAR) missions operating at low frequencies, such as L-band or P-band, are significantly influenced by the ionosphere. Faraday rotation (FR), one of the most serious ionospheric effects, is a remarkable distortion source for polarimetric SAR (PolSAR) applications. Various published FR estimators, along with an improved one, have been introduced to address this issue, all of which are implemented by processing a set of PolSAR real data. The improved estimator exhibits optimal robustness in performance analysis, especially with respect to system noise. However, all published estimators, including the improved one, suffer from a potential FR angle (FRA) ambiguity. A novel strategy for correcting this ambiguity in those FR estimators is proposed and presented as a processing flow, divided into pixel-level and image-level correction. The former has not previously been recognized and is therefore considered in particular. Finally, validation experiments show the prominent performance of the proposed strategy.
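The abstract does not reproduce any estimator, but a classic first-order FR estimator, and the pi/2 ambiguity (the FRA ambiguity discussed above), can be sketched from the measurement model M = R(omega) S R(omega) for a reciprocal scatterer. Sign conventions differ between authors, so treat this as an illustrative sketch rather than the paper's estimator:

```python
import math

def matmul2(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def measured_matrix(S, omega):
    """Faraday rotation acts with the same sense on transmit and receive:
    M = R(omega) S R(omega)."""
    c, s = math.cos(omega), math.sin(omega)
    R = [[c, s], [-s, c]]
    return matmul2(matmul2(R, S), R)

def estimate_fr(M):
    """First-order estimator: tan(2w) = (M12 - M21) / (M11 + M22).
    The angle is ambiguous by pi/2 when the sign of Shh + Svv is
    unknown -- the FRA ambiguity the paper's strategy corrects."""
    return 0.5 * math.atan2(M[0][1] - M[1][0], M[0][0] + M[1][1])

S = [[1.0, 0.2], [0.2, 0.7]]   # reciprocal scattering matrix (Shv == Svh), real for simplicity
M = measured_matrix(S, 0.3)
print(estimate_fr(M))          # recovers 0.3 rad
```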
Robust sliding-window reconstruction for Accelerating the acquisition of MR fingerprinting.
Cao, Xiaozhi; Liao, Congyu; Wang, Zhixing; Chen, Ying; Ye, Huihui; He, Hongjian; Zhong, Jianhui
2017-10-01
To develop a method for accelerated and robust MR fingerprinting (MRF) with improved image reconstruction and parameter matching processes. A sliding-window (SW) strategy was applied to MRF, in which signal and dictionary matching was conducted between fingerprints consisting of mixed-contrast image series reconstructed from consecutive data frames segmented by a sliding window, and a precalculated mixed-contrast dictionary. The effectiveness and performance of this new method, dubbed SW-MRF, were evaluated in both phantom and in vivo experiments. Error quantifications were conducted on results obtained with various settings of SW reconstruction parameters. Compared with the original MRF strategy, the results of both phantom and in vivo experiments demonstrate that the proposed SW-MRF strategy either provided similar accuracy with reduced acquisition time, or improved accuracy with equal acquisition time. Parametric maps of T1, T2, and proton density of comparable quality could be achieved with a two-fold or more reduction in acquisition time. The effect of sliding-window width on dictionary sensitivity was also estimated. The novel SW-MRF recovers high-quality image frames from highly undersampled MRF data, which enables more robust dictionary matching with reduced numbers of data frames. This time efficiency may facilitate MRF applications in time-critical clinical settings. Magn Reson Med 78:1579-1588, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
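The sliding-window idea above can be sketched in a few lines: consecutive frames are grouped into overlapping windows, and each windowed fingerprint is matched to the dictionary atom with the largest normalized inner product. The toy one-pixel dictionary below is hypothetical; real MRF dictionaries are Bloch-simulated:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def sliding_window(frames, width, step=1):
    """Group consecutive data frames into overlapping windows (the SW fingerprints)."""
    return [frames[i:i + width] for i in range(0, len(frames) - width + 1, step)]

def match(fingerprint, dictionary):
    """Index of the dictionary atom with the largest normalized inner product."""
    f = normalize(fingerprint)
    score = lambda atom: sum(a * b for a, b in zip(f, normalize(atom)))
    return max(range(len(dictionary)), key=lambda i: score(dictionary[i]))

# Toy 1-pixel dictionary: two simulated signal evolutions
dictionary = [[1.0, 0.8, 0.6, 0.5], [1.0, 0.4, 0.2, 0.1]]
signal = [1.0, 0.79, 0.61, 0.52]    # noisy measurement of atom 0
print(match(signal, dictionary))    # -> 0
```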
Using NLP to identify cancer cases in imaging reports drawn from radiology information systems.
Patrick, Jon; Asgari, Pooyan; Li, Min; Nguyen, Dung
2013-01-01
A natural language processing (NLP) classifier has been developed for the Victorian and NSW Cancer Registries with the purpose of automatically identifying cancer reports from imaging services, transmitting them to the Registries, and then extracting pertinent cancer information. Large-scale trials conducted on over 40,000 reports show that the sensitivity for identifying reportable cancer reports is above 98%, with a specificity above 96%. Detection of tumour stream, report purpose, and a variety of extracted content is generally above 90% specificity. The differences in report layout and authoring strategies across imaging services appear to require different classifiers to retain this high level of accuracy. Linkage of the imaging data with existing registry records (hospital and pathology reports) to derive stage and recurrence of cancer has commenced and shown very promising results.
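For reference, the sensitivity and specificity figures quoted above are simple ratios over confusion-matrix counts; a sketch with hypothetical counts chosen to land near the reported levels (the actual trial counts are not given in the record):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion counts for a reportable-cancer classifier
sens, spec = sensitivity_specificity(tp=985, fn=15, tn=960, fp=40)
print(sens, spec)  # 0.985 0.96
```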
Synthetic-Focusing Strategies for Real-Time Annular-Array Imaging
Ketterling, Jeffrey A.; Filoux, Erwan
2012-01-01
Annular arrays provide a means to achieve enhanced image quality with a limited number of elements. Synthetic-focusing (SF) strategies that rely on beamforming data from individual transmit-to-receive (TR) element pairs provide a means to improve image quality without specialized TR delay electronics. Here, SF strategies are examined in the context of high-frequency ultrasound (>15 MHz) annular arrays composed of five elements, operating at 18 and 38 MHz. Acoustic field simulations are compared with experimental data acquired from wire and anechoic-sphere phantoms, and the values of lateral beamwidth, SNR, contrast-to-noise ratio (CNR), and depth of field (DOF) are compared as a function of depth. In each case, data were acquired for all TR combinations (25 in total) and processed with SF using all 25 TR pairs and SF with the outer receive channels removed one by one. The results show that removing the outer receive channels led to an overall degradation of lateral resolution and an overall decrease in SNR, but did not reduce the DOF, although the DOF profile decreased in amplitude. The CNR was >1 and remained fairly constant as a function of depth, with a slight decrease in CNR for the case with just the central element receiving. The relative changes between the calculated and measured quantities were nearly identical for the 18- and 38-MHz arrays. B-mode images of the anechoic phantom and an in vivo mouse embryo using full SF with 25 TR pairs or reduced TR-pair approaches showed minimal qualitative difference. PMID:22899130
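Synthetic focusing over TR element pairs amounts to delaying each pair's data by its two-way path difference to the focus before summation. A geometric sketch for an on-axis focus; the element radii, depth, and sound speed are hypothetical, not the paper's array:

```python
import math

C = 1540.0  # assumed speed of sound in tissue, m/s

def pair_delay(r_tx, r_rx, z, c=C):
    """Geometric delay (s) of a transmit-receive annulus pair for an on-axis
    focus at depth z, referenced to the round trip of the central element."""
    d_tx = math.sqrt(r_tx ** 2 + z ** 2)
    d_rx = math.sqrt(r_rx ** 2 + z ** 2)
    return (d_tx + d_rx - 2.0 * z) / c

# Hypothetical mean radii (m) of a 5-element annular array
radii = [0.0, 1.0e-3, 1.5e-3, 2.0e-3, 2.5e-3]
delays = [[pair_delay(rt, rr, z=8.0e-3) for rr in radii] for rt in radii]
# The matrix is symmetric in transmit/receive, so only 15 of the 25 TR pairs are distinct
```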
An efficient, scalable, and adaptable framework for solving generic systems of level-set PDEs
Mosaliganti, Kishore R.; Gelas, Arnaud; Megason, Sean G.
2013-01-01
In the last decade, level-set methods have been actively developed for applications in image registration, segmentation, tracking, and reconstruction. However, the development of a wide variety of level-set PDEs and their numerical discretization schemes, coupled with hybrid combinations of PDE terms, stopping criteria, and reinitialization strategies, has created a software logistics problem. In the absence of an integrative design, current toolkits support only specific types of level-set implementations which restrict future algorithm development since extensions require significant code duplication and effort. In the new NIH/NLM Insight Toolkit (ITK) v4 architecture, we implemented a level-set software design that is flexible to different numerical (continuous, discrete, and sparse) and grid representations (point, mesh, and image-based). Given that a generic PDE is a summation of different terms, we used a set of linked containers to which level-set terms can be added or deleted at any point in the evolution process. This container-based approach allows the user to explore and customize terms in the level-set equation at compile-time in a flexible manner. The framework is optimized so that repeated computations of common intensity functions (e.g., gradient and Hessians) across multiple terms is eliminated. The framework further enables the evolution of multiple level-sets for multi-object segmentation and processing of large datasets. For doing so, we restrict level-set domains to subsets of the image domain and use multithreading strategies to process groups of subdomains or level-set functions. Users can also select from a variety of reinitialization policies and stopping criteria. Finally, we developed a visualization framework that shows the evolution of a level-set in real-time to help guide algorithm development and parameter optimization. 
We demonstrate the power of our new framework using confocal microscopy images of cells in a developing zebrafish embryo. PMID:24501592
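The container-based design described above, in which the level-set PDE right-hand side is simply the sum of independently registered term objects, can be sketched language-agnostically (the toolkit itself is C++/ITK; this toy uses scalar callables as stand-ins for curvature/propagation terms):

```python
class LevelSetEvolution:
    """Container-based design: the PDE right-hand side is the weighted sum of
    independently added term objects, each a callable term(phi, x)."""

    def __init__(self):
        self.terms = []               # container of (weight, term) pairs

    def add_term(self, weight, term):
        self.terms.append((weight, term))

    def rhs(self, phi, x):
        return sum(w * t(phi, x) for w, t in self.terms)

# Hypothetical scalar terms standing in for curvature/propagation contributions
ev = LevelSetEvolution()
ev.add_term(1.0, lambda phi, x: -phi(x))   # decay toward the zero level
ev.add_term(0.5, lambda phi, x: 1.0)       # constant propagation speed
print(ev.rhs(lambda x: 2.0, 0.0))          # -2.0 + 0.5 = -1.5
```

Terms can be added or removed at any point in the evolution, which is the flexibility the framework's linked-container approach provides.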
Image contrast mechanisms in dynamic friction force microscopy: Antimony particles on graphite
NASA Astrophysics Data System (ADS)
Mertens, Felix; Göddenhenrich, Thomas; Dietzel, Dirk; Schirmeisen, Andre
2017-01-01
Dynamic Friction Force Microscopy (DFFM) is a technique based on Atomic Force Microscopy (AFM) in which resonance oscillations of the cantilever are excited by lateral actuation of the sample. During this process, the AFM tip in contact with the sample undergoes a complex movement consisting of alternating periods of sticking and sliding. Therefore, DFFM can give access to dynamic transition effects in friction that are not accessible by alternative techniques. Using antimony nanoparticles on graphite as a model system, we analyzed how combined influences of friction and topography can affect different experimental configurations of DFFM. Based on the experimental results, for example the contrast inversion between fractional-resonance and band-excitation imaging, strategies to extract reliable tribological information from DFFM images are devised.
NASA Technical Reports Server (NTRS)
Ong, Cindy; Mueller, Andreas; Thome, Kurtis; Pierce, Leland E.; Malthus, Timothy
2016-01-01
Calibration is the process of quantitatively defining a system's responses to known, controlled signal inputs, and validation is the process of assessing, by independent means, the quality of the data products derived from those system outputs [1]. Similar to other Earth observation (EO) sensors, the calibration and validation of spaceborne imaging spectroscopy sensors is a fundamental underpinning activity. Calibration and validation determine the quality and integrity of the data provided by spaceborne imaging spectroscopy sensors and have enormous downstream impacts on the accuracy and reliability of products generated from these sensors. At least five imaging spectroscopy satellites are planned to be launched within the next five years, with the two most advanced scheduled to be launched in the next two years [2]. The launch of these sensors requires the establishment of suitable, standardized, and harmonized calibration and validation strategies to ensure that high-quality data are acquired and comparable between these sensor systems. Such activities are extremely important for the community of imaging spectroscopy users. Recognizing the need to focus on this underpinning topic, the Geoscience Spaceborne Imaging Spectroscopy (previously, the International Spaceborne Imaging Spectroscopy) Technical Committee launched a calibration and validation initiative at the 2013 International Geoscience and Remote Sensing Symposium (IGARSS) in Melbourne, Australia, and a post-conference activity of a vicarious calibration field trip at Lake Lefroy in Western Australia.
Deep learning for tumor classification in imaging mass spectrometry.
Behrmann, Jens; Etmann, Christian; Boskamp, Tobias; Casadonte, Rita; Kriegsmann, Jörg; Maaß, Peter
2018-04-01
Tumor classification using imaging mass spectrometry (IMS) data has a high potential for future applications in pathology. Due to the complexity and size of the data, automated feature extraction and classification steps are required to fully process the data. Since mass spectra exhibit certain structural similarities to image data, deep learning may offer a promising strategy for classification of IMS data as it has been successfully applied to image classification. Methodologically, we propose an adapted architecture based on deep convolutional networks to handle the characteristics of mass spectrometry data, as well as a strategy to interpret the learned model in the spectral domain based on a sensitivity analysis. The proposed methods are evaluated on two algorithmically challenging tumor classification tasks and compared to a baseline approach. Competitiveness of the proposed methods is shown on both tasks by studying the performance via cross-validation. Moreover, the learned models are analyzed by the proposed sensitivity analysis revealing biologically plausible effects as well as confounding factors of the considered tasks. Thus, this study may serve as a starting point for further development of deep learning approaches in IMS classification tasks. https://gitlab.informatik.uni-bremen.de/digipath/Deep_Learning_for_Tumor_Classification_in_IMS. jbehrmann@uni-bremen.de or christianetmann@uni-bremen.de. Supplementary data are available at Bioinformatics online.
Pu, Yuan-Yuan; Sun, Da-Wen
2015-12-01
Mango slices were dried by microwave-vacuum drying using a domestic microwave oven equipped with a vacuum desiccator inside. Two lab-scale hyperspectral imaging (HSI) systems were employed for moisture prediction. The Page and the two-term thin-layer drying models were suitable to describe the current drying process with a fitting goodness of R^2 = 0.978. Partial least squares (PLS) was applied to correlate the mean spectrum of each slice and the reference moisture content. With three waveband selection strategies, optimal wavebands corresponding to moisture prediction were identified. The best model, RC-PLS-2 (Rp^2 = 0.972 and RMSEP = 4.611%), was implemented into the moisture visualization procedure. The moisture distribution map clearly showed that the moisture content in the central part of the mango slices was lower than that of other parts. The present study demonstrated that hyperspectral imaging is a useful tool for non-destructively and rapidly measuring and visualizing the moisture content during the drying process. Copyright © 2015 Elsevier Ltd. All rights reserved.
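The Page model named above has the standard thin-layer form MR(t) = exp(-k * t^n); a minimal sketch with hypothetical parameters, since the record reports only the fit quality, not the fitted k and n:

```python
import math

def page_moisture_ratio(t, k, n):
    """Page thin-layer drying model: MR(t) = exp(-k * t**n)."""
    return math.exp(-k * t ** n)

# Hypothetical parameters for illustration (the paper fits these to R^2 = 0.978)
k, n = 0.05, 1.3
curve = [(t, page_moisture_ratio(t, k, n)) for t in (0, 10, 30, 60)]  # t in minutes
```

MR starts at 1 (fully wet relative to initial moisture) and decays monotonically as drying proceeds.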
Parallel ptychographic reconstruction
Nashed, Youssef S. G.; Vine, David J.; Peterka, Tom; ...
2014-12-19
Ptychography is an imaging method whereby a coherent beam is scanned across an object, and an image is obtained by iterative phasing of the set of diffraction patterns. It can be used to image extended objects at a resolution limited by the scattering strength of the object and detector geometry, rather than at an optics-imposed limit. As technical advances allow larger fields to be imaged, computational challenges arise for reconstructing the correspondingly larger data volumes, yet at the same time there is also a need to deliver reconstructed images immediately so that one can evaluate the next steps to take in an experiment. Here we present a parallel method for real-time ptychographic phase retrieval. It uses a hybrid parallel strategy to divide the computation between multiple graphics processing units (GPUs) and then employs novel techniques to merge sub-datasets into a single complex phase and amplitude image. Results are shown on a simulated specimen and a real dataset from an X-ray experiment conducted at a synchrotron light source.
NASA Astrophysics Data System (ADS)
Märk, Julia; Ruschke, Karen; Dortay, Hakan; Schreiber, Isabelle; Sass, Andrea; Qazi, Taimoor; Pumberger, Matthias; Laufer, Jan
2014-03-01
The capability to image stem cells in vivo in small animal models over extended periods of time is important to furthering our understanding of the processes involved in tissue regeneration. Photoacoustic imaging is suited to this application as it can provide high resolution (tens of microns) absorption-based images of superficial tissues (cm depths). However, stem cells are rare, highly migratory, and can divide into more specialised cells. Genetic labelling strategies are therefore advantageous for their visualisation. In this study, methods were developed for the transfection and viral transduction of mesenchymal stem cells with reporter genes for the co-expression of tyrosinase and a fluorescent protein (mCherry). Initial photoacoustic imaging experiments of tyrosinase-expressing cells in small animal models of tissue regeneration were also conducted. Lentiviral transduction methods were shown to result in stable expression of tyrosinase and mCherry in mesenchymal stem cells. The results suggest that photoacoustic imaging using reporter genes is suitable for the study of stem cell driven tissue regeneration in small animals.
Verma, Nishant; Cowperthwaite, Matthew C.; Burnett, Mark G.; Markey, Mia K.
2013-01-01
Differentiating treatment-induced necrosis from tumor recurrence is a central challenge in neuro-oncology. These 2 very different outcomes after brain tumor treatment often appear similarly on routine follow-up imaging studies. They may even manifest with similar clinical symptoms, further confounding an already difficult process for physicians attempting to characterize a new contrast-enhancing lesion appearing on a patient's follow-up imaging. Distinguishing treatment necrosis from tumor recurrence is crucial for diagnosis and treatment planning, and therefore, much effort has been put forth to develop noninvasive methods to differentiate between these disparate outcomes. In this article, we review the latest developments and key findings from research studies exploring the efficacy of structural and functional imaging modalities for differentiating treatment necrosis from tumor recurrence. We discuss the possibility of computational approaches to investigate the usefulness of fine-grained imaging characteristics that are difficult to observe through visual inspection of images. We also propose a flexible treatment-planning algorithm that incorporates advanced functional imaging techniques when indicated by the patient's routine follow-up images and clinical condition. PMID:23325863
Real-time chirp-coded imaging with a programmable ultrasound biomicroscope.
Bosisio, Mattéo R; Hasquenoph, Jean-Michel; Sandrin, Laurent; Laugier, Pascal; Bridal, S Lori; Yon, Sylvain
2010-03-01
Ultrasound biomicroscopy (UBM) of mice can provide a testing ground for new imaging strategies. The UBM system presented in this paper facilitates the development of imaging and measurement methods with programmable design, arbitrary waveform coding, broad bandwidth (2-80 MHz), digital filtering, programmable processing, RF data acquisition, multithread/multicore real-time display, and rapid mechanical scanning (
NASA Astrophysics Data System (ADS)
Newman, Gregory A.; Commer, Michael
2009-07-01
Three-dimensional (3D) geophysical imaging is now receiving considerable attention for electrical conductivity mapping of potential offshore oil and gas reservoirs. The imaging technology employs controlled source electromagnetic (CSEM) and magnetotelluric (MT) fields and treats geological media exhibiting transverse anisotropy. Moreover when combined with established seismic methods, direct imaging of reservoir fluids is possible. Because of the size of the 3D conductivity imaging problem, strategies are required exploiting computational parallelism and optimal meshing. The algorithm thus developed has been shown to scale to tens of thousands of processors. In one imaging experiment, 32,768 tasks/processors on the IBM Watson Research Blue Gene/L supercomputer were successfully utilized. Over a 24 hour period we were able to image a large scale field data set that previously required over four months of processing time on distributed clusters based on Intel or AMD processors utilizing 1024 tasks on an InfiniBand fabric. Electrical conductivity imaging using massively parallel computational resources produces results that cannot be obtained otherwise and are consistent with timeframes required for practical exploration problems.
[Research on fast implementation method of image Gaussian RBF interpolation based on CUDA].
Chen, Hao; Yu, Haizhong
2014-04-01
Image interpolation is often required during medical image processing and analysis. Although interpolation based on the Gaussian radial basis function (GRBF) has high precision, its long calculation time still limits its application in the field of image interpolation. To overcome this problem, a method for two-dimensional and three-dimensional medical image GRBF interpolation based on the compute unified device architecture (CUDA) is proposed in this paper. According to the single instruction multiple threads (SIMT) execution model of CUDA, various optimizing measures such as coalesced access and shared memory are adopted in this study. To eliminate the edge distortion of image interpolation, a natural suture algorithm is utilized in overlapping regions while adopting a data space strategy of separating 2D images into blocks or dividing 3D images into sub-volumes. While keeping a high interpolation precision, the 2D and 3D medical image GRBF interpolation achieved great acceleration in each basic computing step. The experiments showed that the efficiency of image GRBF interpolation on the CUDA platform was markedly improved compared with CPU calculation. The present method is of considerable reference value for applications of image interpolation.
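Independent of the CUDA acceleration, the GRBF interpolation scheme itself solves a dense linear system for the basis-function weights. A small pure-Python sketch of 1-D GRBF interpolation (the shape parameter and sample nodes are illustrative; real image interpolation works in 2-D/3-D):

```python
import math

def solve(A, b):
    """Plain Gaussian elimination with partial pivoting (fine for tiny systems)."""
    n = len(b)
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def grbf_interpolate(xs, ys, eps=1.0):
    """Fit Gaussian RBF weights by solving A w = y with
    A_ij = exp(-(eps * (x_i - x_j))**2), then return the interpolant."""
    n = len(xs)
    A = [[math.exp(-(eps * (xs[i] - xs[j])) ** 2) for j in range(n)] for i in range(n)]
    w = solve(A, list(ys))
    def f(x):
        return sum(w[j] * math.exp(-(eps * (x - xs[j])) ** 2) for j in range(n))
    return f

# 1-D toy example: the interpolant reproduces the samples exactly at the nodes
f = grbf_interpolate([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 0.0, -1.0])
print(round(f(1.0), 6))  # 1.0
```

The per-pixel evaluation of `f` is the embarrassingly parallel step that maps naturally onto CUDA threads.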
NASA Astrophysics Data System (ADS)
Pueyo, Laurent
2016-01-01
A new class of high-contrast image analysis algorithms that empirically fit and subtract systematic noise has led to recent discoveries of faint exoplanet/substellar companions and scattered-light images of circumstellar disks. The consensus emerging in the community is that these methods are extremely efficient at enhancing the detectability of faint astrophysical signals, but generally create systematic biases in their observed properties. This poster provides a solution to this outstanding problem. We present an analytical derivation of a linear expansion that captures the impact of astrophysical over-/self-subtraction in current image analysis techniques. We examine the general case for which the reference images of the astrophysical scene move azimuthally and/or radially across the field of view as a result of the observation strategy. Our new method is based on perturbing the covariance matrix underlying any least-squares speckle problem and propagating this perturbation through the data analysis algorithm. This work is presented in the framework of Karhunen-Loeve Image Processing (KLIP), but it can be easily generalized to methods relying on linear combinations of images (instead of eigen-modes). Based on this linear expansion, obtained in the most general case, we then demonstrate practical applications of this new algorithm. We first consider the case of the spectral extraction of faint point sources in IFS data and illustrate, using public Gemini Planet Imager commissioning data, that our novel perturbation-based Forward Modeling (which we named KLIP-FM) can indeed alleviate algorithmic biases. We then apply KLIP-FM to the detection of point sources and show how it decreases the rate of false negatives while keeping the rate of false positives unchanged when compared to classical KLIP. This can potentially have important consequences for the design of follow-up strategies of ongoing direct imaging surveys.
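The over-/self-subtraction bias that KLIP-FM forward-models is easy to reproduce in miniature: project the science frame onto modes built from reference frames and subtract, and the recovered companion flux comes out biased low. A toy sketch in which Gram-Schmidt stands in for the Karhunen-Loeve modes (the frames are hypothetical 4-pixel vectors):

```python
import math

def orthonormalize(refs):
    """Gram-Schmidt on the reference frames (stand-in for Karhunen-Loeve modes)."""
    basis = []
    for v in refs:
        u = list(v)
        for b in basis:
            d = sum(x * y for x, y in zip(u, b))
            u = [x - d * y for x, y in zip(u, b)]
        norm = math.sqrt(sum(x * x for x in u))
        if norm > 1e-12:
            basis.append([x / norm for x in u])
    return basis

def subtract_speckles(science, refs):
    """Subtract the projection of the science frame onto the reference subspace."""
    basis = orthonormalize(refs)
    model = [0.0] * len(science)
    for b in basis:
        d = sum(x * y for x, y in zip(science, b))
        model = [m + d * y for m, y in zip(model, b)]
    return [s - m for s, m in zip(science, model)]

# Toy frames: a speckle pattern plus a faint planet of flux 1.0 injected in pixel 2
speckles = [5.0, 4.0, 3.0, 2.0]
science = [s + p for s, p in zip(speckles, [0.0, 0.0, 1.0, 0.0])]
residual = subtract_speckles(science, [speckles])
# residual[2] < 1.0: the injected flux is recovered low (the over-subtraction bias)
```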
Precise Aperture-Dependent Motion Compensation with Frequency Domain Fast Back-Projection Algorithm.
Zhang, Man; Wang, Guanyong; Zhang, Lei
2017-10-26
Precise azimuth-variant motion compensation (MOCO) is an essential and difficult task for high-resolution synthetic aperture radar (SAR) imagery. In conventional post-filtering approaches, residual azimuth-variant motion errors are generally compensated through a set of spatial post-filters, where the coarse-focused image is segmented into overlapped blocks according to the azimuth-dependent residual errors. However, image-domain post-filtering approaches, such as the precise topography- and aperture-dependent motion compensation algorithm (PTA), suffer a loss of robustness when strong motion errors are involved in the coarse-focused image. In this case, in order to capture the complete motion blurring function within each image block, both the block size and the overlapped part must be extended, inevitably degrading efficiency and robustness. Herein, a frequency-domain fast back-projection algorithm (FDFBPA) is introduced to deal with strong azimuth-variant motion errors. FDFBPA disposes of the azimuth-variant motion errors based on a precise azimuth spectrum expression in the azimuth wavenumber domain. First, a wavenumber-domain sub-aperture processing strategy is introduced to accelerate computation. After that, the azimuth wavenumber spectrum is partitioned into a set of wavenumber blocks, and each block is formed into a sub-aperture coarse-resolution image via the back-projection integral. Then, the sub-aperture images are fused together in the azimuth wavenumber domain to obtain a full-resolution image. Moreover, the chirp-Z transform (CZT) is introduced to implement the sub-aperture back-projection integral, increasing the efficiency of the algorithm. By discarding the image-domain post-filtering strategy, the robustness of the proposed algorithm is improved. Both simulation and real-measured data experiments demonstrate the effectiveness and superiority of the proposed algorithm.
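The chirp-Z transform mentioned above evaluates a spectrum along a zoomed or spiral contour with FFT efficiency. Below is a minimal NumPy sketch via the standard three-FFT Bluestein formulation; the function name and interface are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def czt(x, m, w, a=1.0):
    """Chirp-Z transform of x at the m points z_k = a * w**(-k),
    i.e. X_k = sum_n x_n * a**(-n) * w**(n*k), computed with
    Bluestein's algorithm (three FFTs of length L >= n + m - 1)."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    k = np.arange(max(m, n))
    chirp = w ** (k ** 2 / 2.0)             # chirp factors W^(k^2/2)
    L = 1 << int(np.ceil(np.log2(n + m - 1)))  # FFT length for the conv
    # Pre-multiplied input sequence
    xp = np.zeros(L, dtype=complex)
    xp[:n] = x * a ** (-np.arange(n)) * chirp[:n]
    # Convolution kernel W^(-j^2/2) for j = -(n-1) .. m-1, wrapped
    h = np.zeros(L, dtype=complex)
    h[:m] = 1.0 / chirp[:m]
    h[L - n + 1:] = 1.0 / chirp[n - 1:0:-1]
    # Fast linear convolution, then post-multiply by the chirp
    y = np.fft.ifft(np.fft.fft(xp) * np.fft.fft(h))
    return y[:m] * chirp[:m]
```

With `m = n`, `w = exp(-2j*pi/n)` and `a = 1` this reduces to the ordinary DFT; in a back-projection context the freedom to choose `w` and `a` is what allows efficient zoomed evaluation of sub-aperture spectra.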
Low bandwidth eye tracker for scanning laser ophthalmoscopy
NASA Astrophysics Data System (ADS)
Harvey, Zachary G.; Dubra, Alfredo; Cahill, Nathan D.; Lopez Alarcon, Sonia
2012-02-01
The incorporation of adaptive optics into scanning ophthalmoscopes (AOSOs) has allowed in vivo, noninvasive imaging of the human rod and cone photoreceptor mosaics. Light safety restrictions and power limitations of the current low-coherence light sources available for imaging result in each individual raw image having a low signal-to-noise ratio (SNR). To date, the only approach used to increase the SNR has been to collect a large number of raw images (N > 50), to register them to remove the distortions due to involuntary eye motion, and then to average them. The large amplitude of involuntary eye motion with respect to the AOSO field of view (FOV) dictates that an even larger number of images be collected at each retinal location to ensure adequate SNR over the feature of interest. Compensating for eye motion during image acquisition to keep the feature of interest within the FOV could reduce the number of raw frames required per retinal feature, and therefore significantly reduce the imaging time, storage requirements, post-processing times and, more importantly, the subject's exposure to light. In this paper, we present a particular implementation of an AOSO, termed the adaptive optics scanning light ophthalmoscope (AOSLO), equipped with a simple eye tracking system capable of compensating for eye drift by estimating the eye motion from the raw frames and using a tip-tilt mirror to compensate for it in a closed loop. Multiple control strategies were evaluated to minimize the image distortion introduced by the tracker itself. In addition, linear, quadratic and Kalman-filter motion prediction algorithms were implemented and tested using both simulated motion (sinusoidal motion with varying frequencies) and human subjects. The residual displacement of the retinal features was used to compare the performance of the different correction strategies and prediction methods.
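Of the motion prediction algorithms listed, the constant-velocity Kalman filter is the easiest to sketch. The fragment below is a hypothetical one-dimensional illustration (the state model, noise covariances and interface are assumptions, not the paper's tracker); it returns the one-step-ahead position predictions that would be sent to the tip-tilt mirror before each measurement arrives.

```python
import numpy as np

def kalman_predict_track(measurements, dt=1.0, q=1e-3, r=1e-2):
    """1-D constant-velocity Kalman filter over a sequence of
    position measurements; returns the one-step-ahead predictions."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.zeros((2, 1))                    # initial state estimate
    P = np.eye(2)                           # initial state covariance
    predictions = []
    for z in measurements:
        # Predict one frame ahead: this drives the mirror
        x = F @ x
        P = F @ P @ F.T + Q
        predictions.append(float(x[0, 0]))
        # Update with the displacement measured from the raw frame
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
    return predictions
```

For slow, drift-like motion the constant-velocity model converges quickly; sinusoidal motion at higher frequencies would violate the model and is where the quadratic or tuned-Kalman predictors become relevant.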
Giampietro, Vincent; Van den Eynde, Frederique; Davies, Helen; Lounes, Naima; Andrew, Christopher; Dalton, Jeffrey; Simmons, Andrew; Williams, Steven C.R.; Baron-Cohen, Simon; Tchanturia, Kate
2013-01-01
The behavioural literature in anorexia nervosa and autism spectrum disorders has indicated an overlap in cognitive profiles. One such domain is the enhancement of local processing over global processing. While functional imaging studies of autism spectrum disorder have revealed differential neural patterns compared to controls in response to tests of local versus global processing, no studies have explored such effects in anorexia nervosa. This study uses functional magnetic resonance imaging in conjunction with the embedded figures test, to explore the neural correlates of this enhanced attention to detail in the largest anorexia nervosa cohort to date. On the embedded figures tests participants are required to indicate which of two complex figures contains a simple geometrical shape. The findings indicate that whilst healthy controls showed greater accuracy on the task than people with anorexia nervosa, different brain regions were recruited. Healthy controls showed greater activation in the precuneus whilst people with anorexia nervosa showed greater activation in the fusiform gyrus. This suggests that different cognitive strategies were used to perform the task, i.e. healthy controls demonstrated greater emphasis on visuospatial searching and people with anorexia nervosa employed a more object recognition-based approach. This is in accordance with previous findings in autism spectrum disorder using a similar methodology and has implications for therapies addressing the appropriate adjustment of cognitive strategies in anorexia nervosa. PMID:23691129
Bioresponsive probes for molecular imaging: concepts and in vivo applications.
van Duijnhoven, Sander M J; Robillard, Marc S; Langereis, Sander; Grüll, Holger
2015-01-01
Molecular imaging is a powerful tool to visualize and characterize biological processes at the cellular and molecular level in vivo. In most molecular imaging approaches, probes are used to bind to disease-specific biomarkers, highlighting disease target sites. In recent years, a new subset of molecular imaging probes, known as bioresponsive molecular probes, has been developed. These probes generally benefit from signal enhancement at the site of interaction with their target. There are two main classes of bioresponsive imaging probes. The first class consists of probes that show direct activation of the imaging label (from an "off" to an "on" state) and has been applied in optical imaging and magnetic resonance imaging (MRI). The other class consists of probes that show specific retention of the imaging label at the site of target interaction; these probes have found application in all imaging modalities, including photoacoustic imaging and nuclear imaging. In this review, we present a comprehensive overview of bioresponsive imaging probes in order to discuss the various molecular imaging strategies. The focus of the present article is the rationale behind the design of bioresponsive molecular imaging probes and their potential in vivo application for the detection of endogenous molecular targets in pathologies such as cancer and cardiovascular disease. Copyright © 2015 John Wiley & Sons, Ltd.
Rosewall, Tara; Yan, Jing; Alasti, Hamideh; Cerase, Carla; Bayley, Andrew
2017-04-01
Inclusion of multiple independently moving clinical target volumes (CTVs) in the irradiated volume causes an image guidance conundrum. The purpose of this research was to use high-risk prostate cancer as a clinical example to evaluate a 'compromise' image alignment strategy. The daily pre-treatment orthogonal EPIs for 14 consecutive patients were included in this analysis. Image matching was performed by aligning to the prostate only, to the bony pelvis only, and using the 'compromise' strategy. Residual CTV surrogate displacements were quantified for each of the alignment strategies. Analysis of the 388 daily fractions indicated that the surrogate displacements were well correlated in all directions (r² = 0.95 (LR), 0.67 (AP) and 0.59 (SI)). Differences between the surrogate displacements (95% range) were -0.4 to 1.8 mm (LR), -1.2 to 5.2 mm (SI) and -1.2 to 5.2 mm (AP). The distribution of the residual displacements was significantly smaller using the 'compromise' strategy compared with the other strategies (p < 0.005). The 'compromise' strategy ensured the CTV was encompassed by the PTV in all fractions, compared with 47 PTV violations when aligned to the prostate only. This study demonstrated the feasibility of a compromise-position image guidance strategy to accommodate simultaneous displacements of two independently moving CTVs. Application of this strategy was facilitated by correlation between the CTV displacements and resulted in no geometric excursions of the CTVs beyond standard-sized PTVs. This simple image guidance strategy may also be applicable to other disease sites where multiple CTVs are irradiated concurrently, such as head and neck, lung and cervix cancer. © 2016 The Royal Australian and New Zealand College of Radiologists.
Hsu, Shu-Hui; Cao, Yue; Lawrence, Theodore S.; Tsien, Christina; Feng, Mary; Grodzki, David M.; Balter, James M.
2015-01-01
Accurate separation of air and bone is critical for creating synthetic CT from MRI to support the Radiation Oncology workflow. This study compares two different ultrashort echo-time sequences in the separation of air from bone, and evaluates post-processing methods that correct intensity nonuniformity of images and account for intensity gradients at tissue boundaries to improve this discriminatory power. CT and MRI scans were acquired on 12 patients under an institutional review board-approved prospective protocol. The two MRI sequences tested were ultrashort-TE imaging using 3D radial acquisition (UTE) and pointwise encoding time reduction with radial acquisition (PETRA). Gradient nonlinearity correction was applied to both MR image volumes after acquisition. MRI intensity nonuniformity was corrected by vendor-provided normalization methods, and then further corrected using the N4itk algorithm. To overcome the intensity gradient at air-tissue boundaries, spatial dilations, from 0 to 4 mm, were applied to threshold-defined air regions from MR images. Receiver operating characteristic (ROC) analyses, comparing predicted (defined by MR images) versus "true" regions of air and bone (defined by CT images), were performed with and without residual bias field correction and local spatial expansion. The post-processing corrections increased the areas under the ROC curves (AUC) from 0.944 ± 0.012 to 0.976 ± 0.003 for UTE images, and from 0.850 ± 0.022 to 0.887 ± 0.012 for PETRA images, compared with no corrections. When expanding the threshold-defined air volumes, as expected, the sensitivity of air identification decreased as the specificity of bone discrimination increased, but in a non-linear fashion. A 1-mm air mask expansion yielded AUC increases of 1% and 4% for UTE and PETRA images, respectively. UTE images had significantly greater discriminatory power in separating air from bone than PETRA images.
Post-processing strategies improved the discriminatory power between air and bone for both UTE and PETRA images, and reduced the difference between the two imaging sequences. Both post-processed UTE and PETRA images demonstrated sufficient power to discriminate air from bone to support synthetic CT generation from MRI data. PMID:25776205
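The ROC analysis described above can be reproduced in outline: sweep a decision threshold over a voxel-wise "air" score and integrate the true-positive rate against the false-positive rate. A minimal NumPy sketch follows; the function name and the score/label encoding (1 = air, 0 = bone) are assumptions, not the study's pipeline, and both classes are assumed to be present.

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve by threshold sweep.
    scores: per-voxel 'air' scores; labels: CT-derived truth
    (1 = air, 0 = bone). Both classes must be present."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=float)
    order = np.argsort(-scores)              # descending score
    labels = labels[order]
    tps = np.cumsum(labels)                  # true positives per cut-off
    fps = np.cumsum(1.0 - labels)            # false positives per cut-off
    tpr = np.concatenate(([0.0], tps / tps[-1]))
    fpr = np.concatenate(([0.0], fps / fps[-1]))
    # Trapezoidal integration of TPR over FPR
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0))
```

The study's spatial dilation of air masks corresponds to moving along this curve: each millimetre of expansion trades sensitivity of air detection for specificity of bone discrimination.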
NASA Astrophysics Data System (ADS)
Li, Bang-Jian; Wang, Quan-Bao; Duan, Deng-Ping; Chen, Ji-An
2018-05-01
Intensity saturation can cause decorrelation and decrease measurement accuracy in digital image correlation (DIC). In this paper, a grey intensity adjustment strategy is proposed to improve the measurement accuracy of DIC in the presence of intensity saturation. First, the grey intensity adjustment strategy is described in detail; it recovers the truncated grey intensities of the saturated pixels and reduces the decorrelation. Simulated speckle patterns are then employed to demonstrate the efficacy of the proposed strategy, indicating that the displacement accuracy can be improved by about 40%. Finally, a real experimental image is used to show the feasibility of the proposed strategy, indicating that the displacement accuracy can be increased by about 10%.
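Although the paper's intensity recovery scheme is not reproduced here, the decorrelation effect of saturation is easy to demonstrate with the zero-mean normalized cross-correlation (ZNCC) criterion commonly used in DIC. The sketch below is illustrative; the function names are assumptions.

```python
import numpy as np

def zncc(f, g):
    """Zero-mean normalized cross-correlation between a reference
    subset f and a deformed subset g. Invariant to affine (offset
    and scale) changes of grey intensity; ranges over [-1, 1]."""
    f = np.asarray(f, dtype=float)
    g = np.asarray(g, dtype=float)
    f = f - f.mean()
    g = g - g.mean()
    return float(np.sum(f * g) / np.sqrt(np.sum(f * f) * np.sum(g * g)))
```

Because ZNCC is invariant to affine intensity changes, a uniformly brighter or dimmer subset still correlates perfectly; clipping at the sensor's saturation level breaks that affinity, which is the decorrelation the grey intensity adjustment is designed to undo.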
The Effect of Compliance-Gaining Strategy Choice and Communicator Style on Sales Success.
ERIC Educational Resources Information Center
Parrish-Sprowl, John; And Others
1994-01-01
Explores the relationship among compliance-gaining strategy choice, communicator image, and sales person effectiveness. Finds no statistically significant relationship between the use of compliance-gaining strategies and sales success, but indicates a link between communicator image and sales success. (SR)
An approach to develop an algorithm to detect the climbing height in radial-axial ring rolling
NASA Astrophysics Data System (ADS)
Husmann, Simon; Hohmann, Magnus; Kuhlenkötter, Bernd
2017-10-01
Radial-axial ring rolling is the most widely used forming process to produce seamless rings, which are applied in various industries such as the energy sector, aerospace technology and the automotive industry. Due to the simultaneous forming in two opposite rolling gaps, and the fact that ring rolling is a bulk forming process, different errors can occur during the rolling process. Ring climbing is one of the most frequent process errors, leading to a distortion of the ring's cross section and a deformation of the ring's geometry. The conventional sensors of a radial-axial rolling machine cannot detect this error. Therefore, it is a common strategy to roll a slightly bigger ring, so that randomly occurring process errors can be reduced afterwards by removing the additional material. The LPS installed an image processing system at the radial rolling gap of their ring rolling machine to enable the recognition and measurement of climbing rings and thereby reduce the additional material. This paper presents the algorithm that enables the image processing system to detect the error of a climbing ring and ensures comparably reliable results for the measurement of the climbing height of the rings.
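As a toy illustration of the kind of measurement such an image processing system performs (the LPS algorithm itself is not described in enough detail here to reproduce), one can binarize a camera frame and measure how far the ring's upper edge rises above a calibrated baseline row. The threshold, scale factor and interface below are assumptions.

```python
import numpy as np

def climbing_height(image, baseline_row, thresh=0.5, mm_per_px=0.1):
    """Estimate ring climbing from a binarized camera frame: find
    the topmost bright row in each column (row 0 = top of image)
    and report the largest excursion above the calibrated baseline
    row, converted to millimetres."""
    mask = np.asarray(image) > thresh
    cols = np.flatnonzero(mask.any(axis=0))
    if cols.size == 0:
        return 0.0                                # no ring visible
    top_rows = np.argmax(mask[:, cols], axis=0)   # first True per column
    climb_px = np.clip(baseline_row - top_rows, 0, None).max()
    return float(climb_px) * mm_per_px
```

In practice the baseline row and the millimetre-per-pixel scale would come from a camera calibration against the rolling gap geometry.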
The Influence of Encoding Strategy on Episodic Memory and Cortical Activity in Schizophrenia
Haut, Kristen; Csernansky, John G.; Barch, Deanna M.
2005-01-01
Background: Recent work suggests that episodic memory deficits in schizophrenia may be related to disturbances of encoding or retrieval. Schizophrenia patients appear to benefit from instruction in episodic memory strategies. We tested the hypothesis that providing effective encoding strategies to schizophrenia patients enhances encoding-related brain activity and recognition performance. Methods: Seventeen schizophrenia patients and 26 healthy comparison subjects underwent functional magnetic resonance imaging scans while performing incidental encoding tasks of words and faces. Subjects were required to make either deep (abstract/concrete) or shallow (alphabetization) judgments for words and deep (gender) judgments for faces, followed by subsequent recognition tests. Results: Schizophrenia and comparison subjects recognized significantly more words encoded deeply than shallowly, activated regions in inferior frontal cortex (Brodmann area 45/47) typically associated with deep and successful encoding of words, and showed greater left frontal activation for the processing of words compared with faces. However, during deep encoding and material-specific processing (words vs. faces), participants with schizophrenia activated regions not activated by control subjects, including several in prefrontal cortex. Conclusions: Our findings suggest that a deficit in use of effective strategies influences episodic memory performance in schizophrenia and that abnormalities in functional brain activation persist even when such strategies are applied. PMID:15992522
NASA Astrophysics Data System (ADS)
Hui, Jie; Cao, Yingchun; Zhang, Yi; Kole, Ayeeshik; Wang, Pu; Yu, Guangli; Eakins, Gregory; Sturek, Michael; Chen, Weibiao; Cheng, Ji-Xin
2017-03-01
Intravascular photoacoustic-ultrasound (IVPA-US) imaging is an emerging hybrid modality for the detection of lipid-laden plaques, providing simultaneous morphological and lipid-specific chemical information of an artery wall. The clinical utility of IVPA-US technology requires real-time imaging and display at video rate. Here, we demonstrate a compact and portable IVPA-US system capable of imaging at up to 25 frames per second in real-time display mode. This unprecedented imaging speed was achieved by concurrent innovations in the excitation laser source, rotary joint assembly, 1 mm IVPA-US catheter, differentiated A-line strategy, and real-time image processing and display algorithms. By imaging pulsatile motion at different imaging speeds, 16 frames per second was deemed adequate to suppress motion artifacts from cardiac pulsation for in vivo applications. Our lateral resolution results further verified the number of A-lines used for cross-sectional IVPA image reconstruction. The translational capability of this system for the detection of lipid-laden plaques was validated by ex vivo imaging of an atherosclerotic human coronary artery at 16 frames per second, which showed strong correlation to gold-standard histopathology.
Lectures on Advanced Technologies
1987-01-01
we are now building, such real-time information will greatly change strategies, tactics, and weapon systems; it will drive the development of a family...in real-time (approximately seven seconds), process a satellite image. The system was recently demonstrated at White Sands Missile Range. This system...time and talents by coming to Annapolis and participating in our Advanced Technologies Seminar program. Arthur E. Bock Professor Emeritus Naval Systems
An Integrated Processing Strategy for Mountain Glacier Motion Monitoring Based on SAR Images
NASA Astrophysics Data System (ADS)
Ruan, Z.; Yan, S.; Liu, G.; Lv, M.
2017-12-01
Mountain glacier dynamic variables are important parameters in studies of environmental and climate change in High Mountain Asia. Due to the increasing frequency of abnormal glacier-related hazards, monitoring glacier movement has attracted growing interest in recent years. Glacier velocities are sensitive and change rapidly under the complex conditions of high mountain regions, which implies that analysis of glacier dynamic changes requires comprehensive and frequent observations with relatively high accuracy. Synthetic aperture radar (SAR) has been successfully exploited to detect glacier motion in a number of previous studies, usually with pixel-tracking and interferometry methods. However, the traditional algorithms applied to mountain glacier regions are constrained by the complex terrain and diverse types of glacial motion. Interferometry techniques are prone to fail on mountain glaciers because of their narrow size and the steep terrain, while the pixel-tracking algorithm, which is more robust in high mountain areas, is subject to accuracy loss. In order to derive glacier velocities continuously and efficiently, we propose a modified strategy to exploit SAR data for mountain glaciers. In our approach, we integrate a set of algorithms for compensating non-glacial-motion-related signals that exist in the offset values retrieved by sub-pixel cross-correlation of SAR image pairs. We exploit a modified elastic deformation model to remove the offsets associated with orbit and sensor attitude, and for the topographic residual offset we utilize a set of operations including a DEM-assisted compensation algorithm and a wavelet-based algorithm. At the last step of the workflow, an integrated algorithm combining the phase and intensity information of SAR images is used to improve regional motion results where cross-correlation-based processing fails.
The proposed strategy is applied to the West Kunlun Mountains and the Muztagh Ata region in western China using ALOS/PALSAR data. The results show that the strategy can effectively improve the accuracy of velocity estimation, reducing the mean and standard deviation values from 0.32 m and 0.4 m to 0.16 m. It proves highly appropriate for monitoring glacier motion over a widely varying range of ice velocities with relatively high accuracy.
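The sub-pixel cross-correlation step at the core of pixel tracking builds on a basic correlation between image patches, which can be sketched with phase correlation. This hypothetical NumPy fragment recovers only integer-pixel offsets and omits the sub-pixel refinement and the orbit/topography compensation steps of the proposed strategy.

```python
import numpy as np

def phase_correlation(a, b):
    """Integer-pixel (dy, dx) such that b is (circularly) a copy of
    a shifted by (dy, dx), estimated from the normalized cross-power
    spectrum of the two image patches."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    cross = np.conj(A) * B
    cross /= np.abs(cross) + 1e-12            # keep phase information only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks past the midpoint to negative shifts
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))
```

On real SAR pairs, oversampling the correlation surface around this peak yields the sub-pixel offsets from which the non-glacial components (orbit, attitude, topography) are then removed.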
Molecular Imaging: Current Status and Emerging Strategies
Pysz, Marybeth A.; Gambhir, Sanjiv S.; Willmann, Jürgen K.
2011-01-01
In vivo molecular imaging has great potential to impact medicine by detecting diseases in early stages (screening), identifying the extent of disease, selecting disease- and patient-specific therapeutic treatment (personalized medicine), applying a directed or targeted therapy, and measuring molecular-specific effects of treatment. Current clinical molecular imaging approaches primarily use PET- or SPECT-based techniques. In ongoing preclinical research, novel molecular targets of different diseases are identified, and sophisticated and multifunctional contrast agents for imaging these molecular targets are developed along with new technologies and instrumentation for multimodality molecular imaging. Contrast-enhanced molecular ultrasound with molecularly-targeted contrast microbubbles is explored as a clinically translatable molecular imaging strategy for screening, diagnosing, and monitoring diseases at the molecular level. Optical imaging with fluorescent molecular probes and ultrasound imaging with molecularly-targeted microbubbles are attractive strategies since they provide real-time imaging, are relatively inexpensive, produce images with high spatial resolution, and do not involve exposure to ionizing irradiation. Raman spectroscopy/microscopy has emerged as a molecular optical imaging strategy for ultrasensitive detection of multiple biomolecules/biochemicals with both in vivo and ex vivo versatility. Photoacoustic imaging is a hybrid of optical and ultrasound modalities involving optically-excitable molecularly-targeted contrast agents and quantitative detection of the resulting oscillatory contrast agent movement with ultrasound. Current preclinical findings and advances in instrumentation such as endoscopes and microcatheters suggest that these molecular imaging modalities have numerous clinical applications and will be translated into clinical use in the near future. PMID:20541650
Wu, Jinglong; Chen, Kewei; Imajyo, Satoshi; Ohno, Seiichiro; Kanazawa, Susumu
2013-01-01
In the human visual cortex, the primary visual cortex (V1) is considered essential for visual information processing; the fusiform face area (FFA) and parahippocampal place area (PPA) are considered face-selective and place-selective regions, respectively. Recently, a functional magnetic resonance imaging (fMRI) study showed that the ratios of neural activity between V1 and FFA remained constant as eccentricity increased in the central visual field. However, in the wide visual field, the neural activity relationships between V1 and FFA or V1 and PPA are still unclear. In this work, using fMRI and a wide-view presentation system, we addressed this issue by measuring neural activities in V1, FFA and PPA for images of faces and houses presented at 4 eccentricities and 4 meridians. We then calculated the ratio relative to V1 (RRV1) by comparing the neural response amplitudes in FFA or PPA with those in V1. We found that V1, FFA, and PPA showed significantly different neural activities to faces and houses across the 3 dimensions of eccentricity, meridian, and region. Most importantly, the RRV1s in FFA and PPA also exhibited significant differences in these 3 dimensions. In the eccentricity dimension, both FFA and PPA showed smaller RRV1s at the central position than at peripheral positions. In the meridian dimension, both FFA and PPA showed larger RRV1s at upper vertical positions than at lower vertical positions. In the region dimension, FFA had larger RRV1s than PPA. We proposed that these differential RRV1s indicate that FFA and PPA might have different processing strategies for encoding wide-field visual information from V1. These different processing strategies might depend on the retinal position at which faces or houses are typically observed in daily life. We posited a role of experience in shaping the information processing strategies in the ventral visual cortex. PMID:23991147
Robles, Francisco E.; Fischer, Martin C.; Warren, Warren S.
2016-01-01
Stimulated Raman scattering (SRS) enables fast, high resolution imaging of chemical constituents important to biological structures and functional processes, both in a label-free manner and using exogenous biomarkers. While this technology has shown remarkable potential, it is currently limited to point scanning and can only probe a few Raman bands at a time (most often, only one). In this work we take a fundamentally different approach to detecting the small nonlinear signals based on dispersion effects that accompany the loss/gain processes in SRS. In this proof of concept, we demonstrate that the dispersive measurements are more robust to noise compared to amplitude-based measurements, which then permit spectral or spatial multiplexing (potentially both, simultaneously). Finally, we illustrate how this method may enable different strategies for biochemical imaging using phase microscopy and optical coherence tomography. PMID:26832279
Systemic localization of seven major types of carbohydrates on cell membranes by dSTORM imaging.
Chen, Junling; Gao, Jing; Zhang, Min; Cai, Mingjun; Xu, Haijiao; Jiang, Junguang; Tian, Zhiyuan; Wang, Hongda
2016-07-25
Carbohydrates on the cell surface control intercellular interactions and play a vital role in various physiological processes. However, their systemic distribution patterns are poorly understood. Through the direct stochastic optical reconstruction microscopy (dSTORM) strategy, we systematically revealed that several types of representative carbohydrates are found in clustered states. Interestingly, the results from dual-color dSTORM imaging indicate that these carbohydrate clusters are prone to connect with one another and eventually form conjoined platforms where different functional glycoproteins aggregate (e.g., epidermal growth factor receptor (EGFR) and band 3 protein). A thorough understanding of the ensemble distribution of carbohydrates on the cell surface paves the way for elucidating the structure-function relationship of cell membranes and the critical roles of carbohydrates in various physiological and pathological cell processes.
Applying modern imaging techniques to old HST data: a summary of the ALICE program.
NASA Astrophysics Data System (ADS)
Choquet, Elodie; Soummer, Remi; Perrin, Marshall; Pueyo, Laurent; Hagan, James Brendan; Zimmerman, Neil; Debes, John Henry; Schneider, Glenn; Ren, Bin; Milli, Julien; Wolff, Schuyler; Stark, Chris; Mawet, Dimitri; Golimowski, David A.; Hines, Dean C.; Roberge, Aki; Serabyn, Eugene
2018-01-01
Direct imaging of extrasolar systems is a powerful technique to study the physical properties of exoplanetary systems and understand their formation and evolution mechanisms. The detection and characterization of these objects are challenged by their high contrast with their host star. Several observing strategies and post-processing algorithms have been developed for ground-based high-contrast imaging instruments, enabling the discovery of directly imaged and spectrally characterized exoplanets. The Hubble Space Telescope (HST), a pioneer in directly imaging extrasolar systems, has nevertheless often been limited to the detection of bright debris disk systems, with sensitivity limited by the difficulty of implementing an optimal PSF subtraction strategy, which is readily offered on ground-based telescopes in pupil-tracking mode. The Archival Legacy Investigations of Circumstellar Environments (ALICE) program is a consistent re-analysis of the 10-year-old coronagraphic archive of HST's NICMOS infrared imager. Using post-processing methods developed for ground-based observations, we used the whole archive to calibrate PSF temporal variations and improve NICMOS's detection limits. We have now delivered ALICE-reprocessed science products for the whole NICMOS archive back to the community. These science products, as well as the ALICE pipeline, were used to prototype the JWST coronagraphic data and reduction pipeline. The ALICE program has enabled the detection of 10 faint debris disk systems never before imaged in the near-infrared, and of several substellar companion candidates, all of which we are in the process of characterizing through follow-up observations with both ground-based facilities and HST-STIS coronagraphy. In this publication, we provide a summary of the results of the ALICE program, advertise its science products and discuss the prospects of the program.
Optimization of a fast optical CT scanner for nPAG gel dosimetry
NASA Astrophysics Data System (ADS)
Vandecasteele, Jan; DeDeene, Yves
2009-05-01
A fast laser scanning optical CT scanner was constructed and optimized at Ghent University. The first images acquired were contaminated with several imaging artifacts, whose origins were investigated. Performance characteristics of different components were measured, such as the laser spot size, light attenuation by the lenses, and the dynamic range of the photo-detector. The need for a differential measurement using a second photo-detector was investigated. Post-processing strategies to compensate for hardware-related errors were developed. Drift of the laser and of the detector was negligible. Incorrect refractive-index matching was dealt with by developing an automated matching process. When scratches are present on the water bath and phantom container, eliminating the resulting artifacts from the reconstructed images poses a post-processing challenge. Secondary laser spots due to multiple reflections need to be investigated further. The time delay in the control of the galvanometer and detector was dealt with using black strips that serve as markers of the projection position; some residual ringing artifacts are still present. Several small volumetric test phantoms were constructed to obtain an overall picture of the accuracy.
NASA Astrophysics Data System (ADS)
Newman, Gregory A.
2014-01-01
Many geoscientific applications exploit electrostatic and electromagnetic fields to interrogate and map subsurface electrical resistivity—an important geophysical attribute for characterizing mineral, energy, and water resources. In complex three-dimensional geologies, where many of these resources remain to be found, resistivity mapping requires large-scale modeling and imaging capabilities, as well as the ability to treat significant data volumes, which can easily overwhelm single-core and modest multicore computing hardware. Treating such problems requires large-scale parallel computational resources, necessary for reducing the time to solution to a time frame acceptable to the exploration process. The recognition that significant parallel computing processes must be brought to bear on these problems gives rise to choices that must be made in parallel computing hardware and software. In this review, some of these choices are presented, along with the resulting trade-offs. We also discuss future trends in high-performance computing and the anticipated impact on electromagnetic (EM) geophysics. Topics discussed in this review article include a survey of parallel computing platforms, ranging from graphics processing units to multicore CPUs with a fast interconnect, along with parallel solvers and associated solver libraries effective for inductive EM modeling and imaging.
Hernández-Martin, Estefania; Marcano, Francisco; Casanova, Oscar; Modroño, Cristian; Plata-Bello, Julio; González-Mora, Jose Luis
2017-01-01
Abstract. Diffuse optical tomography (DOT) measures concentration changes in both oxy- and deoxyhemoglobin, providing three-dimensional images of local brain activations. A pilot study is presented that compares DOT and functional magnetic resonance imaging (fMRI) volumes through t-maps given by canonical statistical parametric mapping (SPM) processing of both data modalities. The DOT series were processed using a method that applies a Bayesian filter to raw DOT data to remove physiological changes and a minimum-description-length index to select the number of singular values, which reduces the data dimensionality during image reconstruction and adapts the DOT volume series to normalized standard space. Statistical analysis is then performed with canonical SPM software in the same way as fMRI analysis, treating DOT volumes as if they were fMRI volumes. The results show the reproducibility and robustness of the method for processing DOT series in group analyses using cognitive paradigms on the prefrontal cortex. Difficulties are also considered, such as the fact that scalp–brain distances vary between subjects and that cerebral activations are difficult to reproduce because of the strategies subjects use to solve arithmetic problems. T-images from fMRI and DOT volume series analyzed in SPM show that, at the functional level, both DOT and fMRI detect the same areas, although DOT provides complementary information to fMRI signals about cerebral activity. PMID:28386575
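The dimensionality-reduction step (selecting a number of singular values before reconstruction) can be sketched as follows; the energy-threshold rule for choosing k is an illustrative stand-in for the paper's minimum-description-length index, and the matrix sizes are hypothetical.

```python
import numpy as np

def truncated_svd_reduce(data, k):
    """Keep only the k largest singular values of a (channels x samples)
    measurement matrix, reducing its dimensionality before reconstruction."""
    U, s, Vt = np.linalg.svd(data, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

def choose_k_by_energy(s, frac=0.99):
    """Smallest k whose singular values capture `frac` of the total energy;
    an illustrative stand-in for the minimum-description-length criterion."""
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(energy, frac)) + 1

# synthetic low-rank "raw DOT" matrix plus a small amount of measurement noise
rng = np.random.default_rng(0)
data = rng.standard_normal((32, 5)) @ rng.standard_normal((5, 200))
data += 0.01 * rng.standard_normal(data.shape)
_, s, _ = np.linalg.svd(data, full_matrices=False)
k = choose_k_by_energy(s)
approx = truncated_svd_reduce(data, k)
```

With nearly low-rank data, only a handful of components are retained and the truncated reconstruction stays close to the measurements.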
Kozhevnikov, Maria; Dhond, Rupali P.
2012-01-01
Most research on three-dimensional (3D) visual-spatial processing has been conducted using traditional non-immersive 2D displays. Here we investigated how individuals generate and transform mental images within 3D immersive (3DI) virtual environments, in which the viewers perceive themselves as being surrounded by a 3D world. In Experiment 1, we compared participants’ performance on the Shepard and Metzler (1971) mental rotation (MR) task across three types of visual presentation environments: traditional 2D non-immersive (2DNI), 3D non-immersive (3DNI; anaglyphic glasses), and 3DI (head-mounted display with position and head-orientation tracking). In Experiment 2, we examined how the use of different backgrounds affected MR processes within the 3DI environment. In Experiment 3, we compared electroencephalogram data recorded while participants were mentally rotating visual-spatial images presented in 3DI vs. 2DNI environments. Overall, the findings of the three experiments suggest that visual-spatial processing is different in immersive and non-immersive environments, and that immersive environments may require different image encoding and transformation strategies than the two non-immersive environments. Specifically, in a non-immersive environment, participants may utilize a scene-based frame of reference and allocentric encoding, whereas immersive environments may encourage the use of a viewer-centered frame of reference and egocentric encoding. These findings also suggest that MR performed in laboratory conditions using a traditional 2D computer screen may not reflect spatial processing as it would occur in the real world. PMID:22908003
What is the role of imaging in the clinical diagnosis of osteoarthritis and disease management?
Wang, Xia; Oo, Win Min; Linklater, James M
2018-05-01
While OA is predominantly diagnosed on the basis of clinical criteria, imaging may aid with differential diagnosis in clinically suspected cases. Although plain radiographs are traditionally the first choice of imaging modality, MRI and US also have a valuable role in assessing multiple pathologic features of OA, although each has particular advantages and disadvantages. Although modern imaging modalities provide the capability to detect a wide range of osseous and soft tissue (cartilage, menisci, ligaments, synovitis, effusion) OA-related structural damage, this extra information has not yet favourably influenced the clinical decision-making and management process. Imaging is recommended if there are unexpected rapid changes in clinical outcomes, to determine whether they relate to disease severity or an additional diagnosis. In the development of specific treatments, imaging serves as a sensitive tool for measuring treatment response. This narrative review aims to describe the role of imaging modalities in OA diagnosis, the assessment of disease progression, and management. It also provides insight into the use of these modalities in finding targeted treatment strategies in clinical research.
Multidirectional Image Sensing for Microscopy Based on a Rotatable Robot.
Shen, Yajing; Wan, Wenfeng; Zhang, Lijun; Yong, Li; Lu, Haojian; Ding, Weili
2015-12-15
Image sensing at a small scale is essential in many fields, including micro-sample observation, defect inspection, material characterization, and so on. However, multi-directional imaging of micro objects is still very challenging due to the limited field of view (FOV) of microscopes. This paper reports a novel approach to multi-directional image sensing in microscopes based on a rotatable robot. First, a robot with endless rotation ability is designed and integrated with the microscope. Then, the micro object is aligned to the rotation axis of the robot automatically using the proposed forward-backward alignment strategy. After that, multi-directional images of the sample can be obtained by rotating the robot through one revolution under the microscope. To demonstrate the versatility of this approach, we view various types of micro samples from multiple directions in both optical microscopy and scanning electron microscopy, and panoramic images of the samples are produced as well. The proposed method paves a new way for microscopy image sensing, and we believe it could have significant impact in many fields, especially for sample detection, manipulation, and characterization at a small scale.
Clark, Kait; Fleck, Mathias S; Mitroff, Stephen R
2011-01-01
Recent research has shown that avid action video game players (VGPs) outperform non-video game players (NVGPs) on a variety of attentional and perceptual tasks. However, it remains unknown exactly why and how such differences arise; while some prior research has demonstrated that VGPs' improvements stem from enhanced basic perceptual processes, other work indicates that they can stem from enhanced attentional control. The current experiment used a change-detection task to explore whether top-down strategies can contribute to VGPs' improved abilities. Participants viewed alternating presentations of an image and a modified version of the image and were tasked with detecting and localizing the changed element. Consistent with prior claims of enhanced perceptual abilities, VGPs were able to detect the changes while requiring less exposure to the change than NVGPs. Further analyses revealed this improved change detection performance may result from altered strategy use; VGPs employed broader search patterns when scanning scenes for potential changes. These results complement prior demonstrations of VGPs' enhanced bottom-up perceptual benefits by providing new evidence of VGPs' potentially enhanced top-down strategic benefits. Copyright © 2010 Elsevier B.V. All rights reserved.
Design and implementation of GRID-based PACS in a hospital with multiple imaging departments
NASA Astrophysics Data System (ADS)
Yang, Yuanyuan; Jin, Jin; Sun, Jianyong; Zhang, Jianguo
2008-03-01
In an enterprise healthcare environment, multiple clinical departments typically provide imaging-enabled healthcare services, such as radiology, oncology, pathology, and cardiology. The picture archiving and communication system (PACS) is therefore required to support not only radiology-based image display, workflow, and data-flow management, but also more specialized image processing and management tools for other departments providing imaging-guided diagnosis and therapy, and there is urgent demand to integrate multiple PACSs to provide patient-oriented imaging services for enterprise collaborative healthcare. In this paper, we give the design method and implementation strategy for developing a grid-based PACS (Grid-PACS) for a hospital with multiple imaging departments or centers. The Grid-PACS functions as middleware between traditional PACS archiving servers and workstations or image-viewing clients, and provides DICOM image communication and WADO services to end users. Images can be stored on multiple distributed archiving servers but managed centrally. The grid-based PACS has automatic image backup and disaster recovery services and can provide the best image retrieval path to image requesters based on optimal algorithms. The designed grid-based PACS has been implemented at Shanghai Huadong Hospital and has been running smoothly for two years.
NASA Astrophysics Data System (ADS)
Parisi, P.; Mani, A.; Perry-Sullivan, C.; Kopp, J.; Simpson, G.; Renis, M.; Padovani, M.; Severgnini, C.; Piacentini, P.; Piazza, P.; Beccalli, A.
2009-12-01
After-develop inspection (ADI) and photo-cell monitoring (PM) are part of a comprehensive lithography process monitoring strategy. Capturing defects of interest (DOI) in the lithography cell rather than at later process steps shortens the cycle time and allows for wafer re-work, reducing overall cost and improving yield. Low contrast DOI and multiple noise sources make litho inspection challenging. Broadband brightfield inspectors provide the highest sensitivity to litho DOI and are traditionally used for ADI and PM. However, a darkfield imaging inspector has shown sufficient sensitivity to litho DOI, providing a high-throughput option for litho defect monitoring. On the darkfield imaging inspector, a very high sensitivity inspection is used in conjunction with advanced defect binning to detect pattern issues and other DOI and minimize nuisance defects. For ADI, this darkfield inspection methodology enables the separation and tracking of 'color variation' defects that correlate directly to CD variations allowing a high-sampling monitor for focus excursions, thereby reducing scanner re-qualification time. For PM, the darkfield imaging inspector provides sensitivity to critical immersion litho defects at a lower cost-of-ownership. This paper describes litho monitoring methodologies developed and implemented for flash devices for 65nm production and 45nm development using the darkfield imaging inspector.
Statistical wiring of thalamic receptive fields optimizes spatial sampling of the retinal image
Wang, Xin; Sommer, Friedrich T.; Hirsch, Judith A.
2014-01-01
Summary It is widely assumed that mosaics of retinal ganglion cells establish the optimal representation of visual space. However, relay cells in the visual thalamus often receive convergent input from several retinal afferents and, in cat, outnumber ganglion cells. To explore how the thalamus transforms the retinal image, we built a model of the retinothalamic circuit using experimental data and simple wiring rules. The model shows how the thalamus might form a resampled map of visual space with the potential to facilitate detection of stimulus position in the presence of sensor noise. Bayesian decoding conducted with the model provides support for this scenario. Despite its benefits, however, resampling introduces image blur, thus impairing edge perception. Whole-cell recordings obtained in vivo suggest that this problem is mitigated by arrangements of excitation and inhibition within the receptive field that effectively boost contrast borders, much like strategies used in digital image processing. PMID:24559681
Variational stereo imaging of oceanic waves with statistical constraints.
Gallego, Guillermo; Yezzi, Anthony; Fedele, Francesco; Benetazzo, Alvise
2013-11-01
An image processing observational technique for the stereoscopic reconstruction of the waveform of oceanic sea states is developed. The technique incorporates the enforcement of any given statistical wave law modeling the quasi-Gaussianity of oceanic waves observed in nature. The problem is posed in a variational optimization framework, where the desired waveform is obtained as the minimizer of a cost functional that combines image observations, smoothness priors and a weak statistical constraint. The minimizer is obtained by combining gradient descent and multigrid methods on the necessary optimality equations of the cost functional. Robust photometric error criteria and a spatial intensity compensation model are also developed to improve the performance of the presented image matching strategy. The weak statistical constraint is thoroughly evaluated in combination with other elements presented to reconstruct and enforce constraints on experimental stereo data, demonstrating the improvement in the estimation of the observed ocean surface.
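In miniature, minimizing a cost functional that combines an observation term with a smoothness prior by gradient descent looks like the sketch below; it omits the stereo photometric terms and the statistical wave-law constraint described above, and the toy surface, lam, step size, and iteration count are hypothetical.

```python
import numpy as np

def denoise_surface(obs, lam=2.0, step=0.02, n_iter=500):
    """Toy variational reconstruction: minimize
        E(h) = sum (h - obs)^2 + lam * sum |grad h|^2
    by gradient descent. The gradient of the smoothness term is -2*lam*Laplacian(h),
    computed here with periodic shifts."""
    h = obs.copy()
    for _ in range(n_iter):
        lap = (np.roll(h, 1, 0) + np.roll(h, -1, 0) +
               np.roll(h, 1, 1) + np.roll(h, -1, 1) - 4.0 * h)
        h -= step * (2.0 * (h - obs) - 2.0 * lam * lap)  # dE/dh
    return h

# smooth "wave" surface corrupted by noise, then reconstructed
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 64)
clean = np.sin(x)[None, :] * np.cos(x)[:, None]
obs = clean + 0.3 * rng.standard_normal(clean.shape)
rec = denoise_surface(obs)
```

The smoothness prior suppresses high-frequency noise while the data term keeps the low-frequency waveform, so the reconstruction lands much closer to the clean surface than the raw observation.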
Design of a rear anamorphic attachment for digital cinematography
NASA Astrophysics Data System (ADS)
Cifuentes, A.; Valles, A.
2008-09-01
Digital taking systems for HDTV and now for the film industry present a particularly challenging design problem for rear adapters in general. The thick 3-channel prism block in the camera provides an important challenge in the design. In this paper the design of a 1.33x rear anamorphic attachment is presented. The new design departs significantly from the traditional Bravais condition due to the thick dichroic prism block. Design strategies for non-rotationally symmetric systems and fields of view are discussed. Anamorphic images intrinsically have a lower contrast and less resolution than their rotationally symmetric counterparts, therefore proper image evaluation must be considered. The interpretation of the traditional image quality methods applied to anamorphic images is also discussed in relation to the design process. The final design has a total track less than 50 mm, maintaining the telecentricity of the digital prime lens and taking full advantage of the f/1.4 prism block.
Sparse representation-based image restoration via nonlocal supervised coding
NASA Astrophysics Data System (ADS)
Li, Ao; Chen, Deyun; Sun, Guanglu; Lin, Kezheng
2016-10-01
Sparse representation (SR) and the nonlocal technique (NLT) have shown great potential in low-level image processing. However, due to the degradation of the observed image, SR and NLT may not be accurate enough to obtain faithful restoration results when used independently. To improve performance, this paper proposes an NLT for image restoration based on a nonlocal supervised coding strategy. The method makes three main contributions. First, to exploit useful nonlocal patches, a nonnegative sparse representation is introduced, whose coefficients can be utilized as supervised weights among patches. Second, a novel objective function is proposed that integrates supervised weight learning and nonlocal sparse coding to guarantee a more promising solution. Finally, to make the minimization tractable and convergent, a numerical scheme based on iterative shrinkage thresholding is developed to solve the underdetermined inverse problem. Extensive experiments validate the effectiveness of the proposed method.
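The iterative shrinkage-thresholding step is, in its generic form, the classic ISTA update. The sketch below solves a plain l1-regularized least-squares problem rather than the paper's supervised nonlocal objective, and the dictionary, sparsity pattern, and parameters are hypothetical.

```python
import numpy as np

def ista(D, y, lam=0.1, step=None, n_iter=200):
    """Iterative shrinkage-thresholding for min_x 0.5*||y - D x||^2 + lam*||x||_1."""
    if step is None:
        step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1/L, L = Lipschitz constant of the data term
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)                 # gradient of the quadratic data term
        z = x - step * grad                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-thresholding
    return x

# recover a sparse code from an overcomplete dictionary
rng = np.random.default_rng(1)
D = rng.standard_normal((50, 100))
D /= np.linalg.norm(D, axis=0)                   # unit-norm atoms
x0 = np.zeros(100)
x0[[3, 40, 77]] = [1.5, -2.0, 1.0]
y = D @ x0
x_hat = ista(D, y, lam=0.05, n_iter=500)
```

The soft-thresholding operator drives most coefficients exactly to zero, so the recovered code is sparse while the residual stays small.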
NASA Astrophysics Data System (ADS)
Liu, Changjiang; Cheng, Irene; Zhang, Yi; Basu, Anup
2017-06-01
This paper presents an improved multi-scale Retinex (MSR)-based enhancement for aerial images under low visibility. Traditional multi-scale Retinex commonly employs three scales, which limits its application scenarios. We extend our research to a general-purpose enhancement method and design an MSR with more than three scales. Based on mathematical analysis and deduction, an explicit multi-scale representation is proposed that balances image contrast and color consistency. In addition, a histogram truncation technique is introduced as a post-processing strategy to remap the multi-scale Retinex output to the dynamic range of the display. Analysis of experimental results and comparisons with existing algorithms demonstrate the effectiveness and generality of the proposed method. Results on image quality assessment prove the accuracy of the proposed method with respect to both objective and subjective criteria.
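A minimal sketch of the classic MSR pipeline with histogram truncation, assuming common default scales and percentile cutoffs rather than the authors' explicit multi-scale representation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(img, sigmas=(15, 80, 250)):
    """Classic MSR: average over scales of log(image) - log(Gaussian surround).
    The paper generalizes to more than three scales; these sigmas are common
    defaults, not the authors' choices."""
    img = img.astype(np.float64) + 1.0           # offset to avoid log(0)
    out = np.zeros_like(img)
    for s in sigmas:
        out += np.log(img) - np.log(gaussian_filter(img, s) + 1.0)
    return out / len(sigmas)

def histogram_truncation(x, low=1.0, high=99.0):
    """Clip at the given percentiles and remap to [0, 255] -- a simple
    stand-in for the paper's histogram-truncation post-processing."""
    lo, hi = np.percentile(x, [low, high])
    y = np.clip(x, lo, hi)
    scale = 255.0 / max(hi - lo, 1e-12)
    return np.rint(scale * (y - lo)).astype(np.uint8)

rng = np.random.default_rng(0)
frame = rng.uniform(0.0, 255.0, size=(64, 64))   # stand-in for a low-visibility frame
enhanced = histogram_truncation(multi_scale_retinex(frame, sigmas=(2, 8, 32)))
```

Truncating the tails before rescaling spreads the bulk of the Retinex output across the full 8-bit display range instead of letting a few outliers compress it.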
Segmenting human from photo images based on a coarse-to-fine scheme.
Lu, Huchuan; Fang, Guoliang; Shao, Xinqing; Li, Xuelong
2012-06-01
Human segmentation in photo images is a challenging and important problem that finds numerous applications ranging from album making and photo classification to image retrieval. Previous works on human segmentation usually demand a time-consuming training phase for complex shape-matching processes. In this paper, we propose a straightforward framework to automatically recover human bodies from color photos. Employing a coarse-to-fine strategy, we first detect a coarse torso (CT) using the multicue CT detection algorithm and then extract the accurate region of the upper body. Then, an iterative multiple oblique histogram algorithm is presented to accurately recover the lower body based on human kinematics. The performance of our algorithm is evaluated on our own data set (containing 197 images with human-body-region ground-truth data), VOC 2006, and the 2010 data set. Experimental results demonstrate the merits of the proposed method in segmenting a person in various poses.
Method for characterizing mask defects using image reconstruction from X-ray diffraction patterns
Hau-Riege, Stefan Peter [Fremont, CA]
2007-05-01
The invention applies techniques for image reconstruction from X-ray diffraction patterns to the three-dimensional imaging of defects in EUVL multilayer films. The reconstructed image gives information about the out-of-plane position and the diffraction strength of the defect. The positional information can be used to select the correct defect repair technique. This invention enables the fabrication of defect-free (since repaired) X-ray Mo-Si multilayer mirrors. Repairing Mo-Si multilayer-film defects on mask blanks is key to the commercial success of EUVL. It is known that particles are added to the Mo-Si multilayer film during the fabrication process. There is a large effort to reduce this contamination, but results are not sufficient, and defects continue to be a major mask yield limiter. All suggested repair strategies need to know the out-of-plane position of the defects in the multilayer.
Experimental Demonstration of Adaptive Infrared Multispectral Imaging using Plasmonic Filter Array.
Jang, Woo-Yong; Ku, Zahyun; Jeon, Jiyeon; Kim, Jun Oh; Lee, Sang Jun; Park, James; Noyola, Michael J; Urbas, Augustine
2016-10-10
In our previous theoretical study, we performed target detection using a plasmonic sensor array incorporating the data-processing technique termed "algorithmic spectrometry". We achieved the reconstruction of a target spectrum by extracting intensity at multiple wavelengths with high resolution from the image data obtained from the plasmonic array. The ultimate goal is to develop a full-scale focal plane array with a plasmonic opto-coupler in order to move towards the next generation of versatile infrared cameras. To this end, and as an intermediate step, this paper reports the experimental demonstration of adaptive multispectral imagery using fabricated plasmonic spectral filter arrays and proposed target detection scenarios. Each plasmonic filter was designed using periodic circular holes perforated through a gold layer, and an enhanced target detection strategy was proposed to refine the original spectrometry concept for spatial and spectral computation of the data measured from the plasmonic array. Both the spectrum of blackbody radiation and a metal ring object at multiple wavelengths were successfully reconstructed using the weighted superposition of plasmonic output images as specified in the proposed detection strategy. In addition, plasmonic filter arrays were theoretically tested on a target at extremely high temperature as a challenging scenario for the detection scheme.
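The weighted-superposition reconstruction described above can be illustrated with a least-squares sketch: choose weights so the combined filter response approximates a delta at the target wavelength, then apply those weights to the per-filter measurements. The Gaussian passbands, band centers, and target spectrum below are hypothetical stand-ins, not the fabricated plasmonic responses.

```python
import numpy as np

def spectral_weights(R, k):
    """Least-squares weights w such that sum_i w_i * R[i, :] approximates a
    delta at wavelength index k; applying w to the per-filter readings then
    estimates the scene intensity at that wavelength.
    R is the (n_filters x n_wavelengths) response matrix."""
    e = np.zeros(R.shape[1])
    e[k] = 1.0
    w, *_ = np.linalg.lstsq(R.T, e, rcond=None)
    return w

# toy filter bank: Gaussian passbands at different centers (microns)
wl = np.linspace(3.0, 5.0, 81)
centers = np.linspace(3.2, 4.8, 12)
R = np.exp(-0.5 * ((wl[None, :] - centers[:, None]) / 0.15) ** 2)

spectrum = np.exp(-0.5 * ((wl - 4.0) / 0.3) ** 2)   # toy target spectrum
m = R @ spectrum                                    # simulated filter-array readings
k = int(np.argmin(np.abs(wl - 4.0)))
w = spectral_weights(R, k)
est = w @ m                                         # estimated intensity at 4.0 um
```

Because the target spectrum lies close to the span of the filter passbands, the weighted superposition of readings recovers the intensity at the chosen wavelength to within a modest error.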
Zhou, Lili; Clifford Chao, K S; Chang, Jenghwa
2012-11-01
Simulated projection images of digital phantoms constructed from CT scans have been widely used for clinical and research applications, but their quality and computation speed are not optimal for real-time comparison with radiographs acquired with x-ray sources of different energies. In this paper, the authors performed polyenergetic forward projections using open computing language (OpenCL) in a parallel computing ecosystem consisting of a CPU and a general-purpose graphics processing unit (GPGPU) for fast and realistic image formation. The proposed polyenergetic forward projection uses a lookup table containing the NIST-published mass attenuation coefficients (μ/ρ) for different tissue types and photon energies ranging from 1 keV to 20 MeV. The CT images of the sites of interest are first segmented into different tissue types based on the CT numbers and converted to a three-dimensional attenuation phantom by linking each voxel to the corresponding tissue type in the lookup table. The x-ray source can be a radioisotope or an x-ray generator with a known spectrum described by weights w(n) for energy bins E(n). The Siddon method is used to compute the x-ray transmission line integral for each E(n), and the x-ray fluence is the weighted sum of the exponentials of the line integrals over all energy bins, with added Poisson noise. To validate this method, a digital head and neck phantom constructed from the CT scan of a Rando head phantom was segmented into three regions (air, gray/white matter, and bone) for calculating the polyenergetic projection images for the Mohan 4 MV energy spectrum. To accelerate the calculation, the authors partitioned the workloads using task parallelism and data parallelism and scheduled them in a parallel computing ecosystem consisting of a CPU and a GPGPU (NVIDIA Tesla C2050) using OpenCL only. The authors explored the task-overlapping strategy and the sequential method for generating the first and subsequent DRRs.
A dispatcher was designed to drive the high degree of parallelism of the task-overlapping strategy. Numerical experiments were conducted to compare the performance of the OpenCL/GPGPU-based implementation with the CPU-based implementation. The projection images were similar to typical portal images obtained with a 4 or 6 MV x-ray source. For a phantom size of 512 × 512 × 223, the time for calculating the line integrals for a 512 × 512 image panel was 16.2 ms on the GPGPU for one energy bin, in comparison to 8.83 s on the CPU. The total computation time for generating one polyenergetic projection image of 512 × 512 was 0.3 s (141 s for the CPU). The relative difference between the projection images obtained with the CPU-based and OpenCL/GPGPU-based implementations was on the order of 10^-6, and the images were virtually indistinguishable. The task-overlapping strategy was 5.84 and 1.16 times faster than the sequential method for the first and subsequent digitally reconstructed radiographs (DRRs), respectively. The authors have successfully built digital phantoms using anatomic CT images and NIST μ/ρ tables for simulating realistic polyenergetic projection images and optimized the processing speed with parallel computing using a GPGPU/OpenCL-based implementation. The computation time was fast enough (0.3 s per projection image) for real-time IGRT (image-guided radiotherapy) applications.
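The fluence model stated in this abstract (a spectrum-weighted sum of exponentiated line integrals, with Poisson noise) can be sketched directly; the bin count, line-integral values, and photon budget n0 below are hypothetical.

```python
import numpy as np

def polyenergetic_fluence(line_integrals, weights, n0=1e4, rng=None):
    """Fluence = n0 * sum_n w(n) * exp(-integral of mu dl for bin E(n)),
    with Poisson noise added on top, as described in the abstract.

    line_integrals : (n_bins, H, W) mu-weighted path lengths per energy bin
    weights        : (n_bins,) spectrum weights w(n), summing to 1
    n0             : incident photons per pixel (hypothetical scale factor)
    """
    if rng is None:
        rng = np.random.default_rng()
    transmission = np.exp(-line_integrals)                    # Beer-Lambert per bin
    mean = n0 * np.tensordot(weights, transmission, axes=1)   # spectrum-weighted sum
    return rng.poisson(mean).astype(np.float64)

# toy 3-bin spectrum and a tiny 2x2 detector
spectrum_w = np.array([0.5, 0.3, 0.2])
li = np.stack([0.1 * n * np.ones((2, 2)) for n in range(3)])  # toy line integrals
proj = polyenergetic_fluence(li, spectrum_w, n0=1e4, rng=np.random.default_rng(0))
```

With these inputs the noiseless mean per pixel is 1e4 * (0.5 + 0.3*e^-0.1 + 0.2*e^-0.2) ≈ 9352 photons, and each sample scatters around that value with Poisson statistics.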
Niedzwiecki, Megan M; Austin, Christine; Remark, Romain; Merad, Miriam; Gnjatic, Sacha; Estrada-Gutierrez, Guadalupe; Espejel-Nuñez, Aurora; Borboa-Olivares, Hector; Guzman-Huerta, Mario; Wright, Rosalind J; Wright, Robert O; Arora, Manish
2016-04-01
Fetal exposure to essential and toxic metals can influence life-long health trajectories. The placenta regulates chemical transmission from maternal circulation to the fetus and itself exhibits a complex response to environmental stressors. The placenta can thus be a useful matrix to monitor metal exposures and stress responses in utero, but strategies to explore the biologic effects of metal mixtures in this organ are not well-developed. In this proof-of-concept study, we used laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) to measure the distributions of multiple metals in placental tissue from a low-birth-weight pregnancy, and we developed an approach to identify the components of metal mixtures that colocalized with biological response markers. Our novel workflow, which includes custom-developed software tools and algorithms for spatial outlier identification and background subtraction in multidimensional elemental image stacks, enables rapid image processing and seamless integration of data from elemental imaging and immunohistochemistry. Using quantitative spatial statistics, we identified distinct patterns of metal accumulation at sites of inflammation. Broadly, our multiplexed approach can be used to explore the mechanisms mediating complex metal exposures and biologic responses within placentae and other tissue types. Our LA-ICP-MS image processing workflow can be accessed through our interactive R Shiny application 'shinyImaging', which is available at or through our laboratory's website, .
Dynamic updating atlas for heart segmentation with a nonlinear field-based model.
Cai, Ken; Yang, Rongqian; Yue, Hongwei; Li, Lihua; Ou, Shanxing; Liu, Feng
2017-09-01
Segmentation of cardiac computed tomography (CT) images is an effective method for assessing the dynamic function of the heart and lungs. In the atlas-based heart segmentation approach, the quality of segmentation usually relies on the atlas images, and the selection of those reference images is a key step. The goal of this selection process is to have the reference images as close to the target image as possible. This study proposes an atlas dynamic-update algorithm using a nonlinear deformation field scheme. The proposed method is based on features among double-source CT (DSCT) slices; extraction of these features forms the basis for constructing an average model, and the reference atlas image is updated during the registration process. A nonlinear field-based model was used to effectively implement 4D cardiac segmentation. The proposed segmentation framework was validated with 14 4D cardiac CT sequences. The algorithm achieved an acceptable accuracy (1.0-2.8 mm). Our proposed method, which combines a nonlinear field-based model with dynamic atlas-updating strategies, can provide an effective and accurate way to perform whole-heart segmentation. The success of the proposed method largely relies on the effective use of the prior knowledge in the atlas and the similarity explored among the to-be-segmented DSCT sequences. Copyright © 2016 John Wiley & Sons, Ltd.
Dissociated neural processing for decisions in managers and non-managers.
Caspers, Svenja; Heim, Stefan; Lucas, Marc G; Stephan, Egon; Fischer, Lorenz; Amunts, Katrin; Zilles, Karl
2012-01-01
Functional neuroimaging studies of decision-making so far mainly focused on decisions under uncertainty or negotiation with other persons. Dual process theory assumes that, in such situations, decision making relies on either a rapid intuitive, automated or a slower rational processing system. However, it still remains elusive how personality factors or professional requirements might modulate the decision process and the underlying neural mechanisms. Since decision making is a key task of managers, we hypothesized that managers, facing higher pressure for frequent and rapid decisions than non-managers, prefer the heuristic, automated decision strategy in contrast to non-managers. Such different strategies may, in turn, rely on different neural systems. We tested managers and non-managers in a functional magnetic resonance imaging study using a forced-choice paradigm on word-pairs. Managers showed subcortical activation in the head of the caudate nucleus, and reduced hemodynamic response within the cortex. In contrast, non-managers revealed the opposite pattern. With the head of the caudate nucleus being an initiating component for process automation, these results supported the initial hypothesis, hinting at automation during decisions in managers. More generally, the findings reveal how different professional requirements might modulate cognitive decision processing.
A concept for non-invasive temperature measurement during injection moulding processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hopmann, Christian; Spekowius, Marcel, E-mail: spekowius@ikv.rwth-aachen.de; Wipperfürth, Jens
2016-03-09
Current models of the injection moulding process insufficiently consider the thermal interactions between melt, solidified material, and the mould. A detailed description requires a deep understanding of the underlying processes and a precise observation of the temperature. Because today's measurement concepts do not allow a non-invasive analysis, it is necessary to find new techniques for temperature measurements during the manufacturing process. In this work we present the idea of a set-up for tomographic ultrasound measurement of the temperature field inside a plastics melt. The goal is to identify a concept that can be installed on a specialized mould for the injection moulding process. The challenges are discussed and the design of a prototype is shown. Special attention is given to the spatial arrangement of the sensors. Besides the design of the measurement set-up, a reconstruction strategy for the ultrasound signals is required. We present an approach in which an image processing algorithm can be used to calculate a temperature distribution from the ultrasound scans. We discuss a reconstruction strategy in which the ultrasound signals are converted into a spatial temperature distribution by using pvT curves obtained from dilatometer measurements.
Dissociated Neural Processing for Decisions in Managers and Non-Managers
Caspers, Svenja; Heim, Stefan; Lucas, Marc G.; Stephan, Egon; Fischer, Lorenz; Amunts, Katrin; Zilles, Karl
2012-01-01
Functional neuroimaging studies of decision-making so far mainly focused on decisions under uncertainty or negotiation with other persons. Dual process theory assumes that, in such situations, decision making relies on either a rapid intuitive, automated or a slower rational processing system. However, it still remains elusive how personality factors or professional requirements might modulate the decision process and the underlying neural mechanisms. Since decision making is a key task of managers, we hypothesized that managers, facing higher pressure for frequent and rapid decisions than non-managers, prefer the heuristic, automated decision strategy in contrast to non-managers. Such different strategies may, in turn, rely on different neural systems. We tested managers and non-managers in a functional magnetic resonance imaging study using a forced-choice paradigm on word-pairs. Managers showed subcortical activation in the head of the caudate nucleus, and reduced hemodynamic response within the cortex. In contrast, non-managers revealed the opposite pattern. With the head of the caudate nucleus being an initiating component for process automation, these results supported the initial hypothesis, hinting at automation during decisions in managers. More generally, the findings reveal how different professional requirements might modulate cognitive decision processing. PMID:22927984
Dendritic tree extraction from noisy maximum intensity projection images in C. elegans.
Greenblum, Ayala; Sznitman, Raphael; Fua, Pascal; Arratia, Paulo E; Oren, Meital; Podbilewicz, Benjamin; Sznitman, Josué
2014-06-12
Maximum Intensity Projections (MIP) of neuronal dendritic trees obtained from confocal microscopy are frequently used to study the relationship between tree morphology and mechanosensory function in the model organism C. elegans. Extracting dendritic trees from noisy images remains, however, a strenuous process that has traditionally relied on manual approaches. Here, we focus on automated and reliable 2D segmentation of dendritic trees following a statistical learning framework. Our dendritic tree extraction (DTE) method uses small amounts of labelled training data on MIPs to learn noise models of texture-based features from the responses of tree structures and image background. Our strategy lies in evaluating statistical models of noise that account for both the variability generated by the imaging process and the aggregation of information in the MIP images. These noise models are then used within a probabilistic, or Bayesian, framework to provide a coarse 2D dendritic tree segmentation. Finally, post-processing is applied to refine the segmentations and provide skeletonized trees using a morphological thinning process. Following a Leave-One-Out Cross Validation (LOOCV) method on an MIP database with available "ground truth" images, we demonstrate that our approach provides significant improvements in tree-structure segmentations over traditional intensity-based methods. Improvements for MIPs under various imaging conditions are both qualitative and quantitative, as measured from Receiver Operating Characteristic (ROC) curves and the yield and error rates in the final segmentations. In a final step, we demonstrate our DTE approach on previously unseen MIP samples, including the extraction of skeletonized structures, and compare our method to a state-of-the-art dendritic tree tracing software. Overall, our DTE method allows for robust dendritic tree segmentations in noisy MIPs, outperforming traditional intensity-based methods.
This approach provides a usable segmentation framework, ultimately delivering a speed-up for dendritic tree identification on the user end and a reliable first step towards further morphological characterization of tree arborization.
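The core idea of the abstract above, learned noise models used inside a Bayesian framework for a coarse segmentation, can be reduced to a toy maximum-posterior pixel classifier. All parameters below are invented stand-ins; in the described method they would be fit from labelled MIP training data and the features would be texture responses, not raw intensities.

```python
import numpy as np

# Invented Gaussian "noise models" for tree-structure and background
# feature responses, plus class priors.
fg_mean, fg_std, fg_prior = 0.8, 0.15, 0.1   # tree structure
bg_mean, bg_std, bg_prior = 0.2, 0.15, 0.9   # background

def log_gauss(x, mu, sigma):
    # log of an unnormalised Gaussian density (constant terms cancel)
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)

def segment(features):
    """Coarse Bayesian segmentation: True where P(tree|x) > P(background|x)."""
    log_fg = log_gauss(features, fg_mean, fg_std) + np.log(fg_prior)
    log_bg = log_gauss(features, bg_mean, bg_std) + np.log(bg_prior)
    return log_fg > log_bg

img = np.array([[0.10, 0.20, 0.90],
                [0.15, 0.85, 0.30],
                [0.80, 0.25, 0.10]])
print(segment(img).astype(int))  # bright responses label as tree pixels
```

In the full method this binary map would then be refined and skeletonized by morphological thinning.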
A data colocation grid framework for big data medical image processing: backend design
NASA Astrophysics Data System (ADS)
Bao, Shunxing; Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J.; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A.
2018-03-01
When processing large medical imaging studies, adopting high-performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated for the variety of multi-level analyses in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population-based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL queries. In this paper, we present a heuristic backend application program interface (API) design for Hadoop and HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous clusters) and MapReduce templates. A dataset summary statistic model is discussed and implemented in the MapReduce paradigm. We introduce an HBase table scheme for fast data queries to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a secure, shared university web database and used to empirically assess an in-house grid with 224 heterogeneous CPU cores.
Results of three empirical experiments are presented and discussed: (1) the load balancer yields a 1.5-fold wall-time improvement over a framework with a built-in data allocation strategy; (2) the summary statistic model is empirically verified on the grid framework and compared with a cluster deployed with a standard Sun Grid Engine (SGE), achieving an 8-fold reduction in wall-clock time and a 14-fold reduction in resource time; and (3) the proposed HBase table scheme improves MapReduce computation with a 7-fold reduction in wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available.
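The population-level summary statistic described above fits the MapReduce shape naturally: mappers emit partial sufficient statistics per data shard, and a reducer merges them. This is a minimal local sketch of that pattern (plain lists stand in for image chunks colocated across the grid; it is not the HadoopBase-MIP API).

```python
from functools import reduce

def mapper(shard):
    # emit partial (count, sum, sum of squares) for one shard of voxel values
    return (len(shard), sum(shard), sum(v * v for v in shard))

def reducer(a, b):
    # merge two partials; associative and commutative, so order doesn't matter
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

shards = [[1.0, 2.0, 3.0], [4.0, 5.0], [6.0]]
n, s, ss = reduce(reducer, map(mapper, shards))
mean = s / n
var = ss / n - mean ** 2
print(mean, var)  # mean 3.5, variance ≈ 2.917
```

Because the merge is associative, the same reducer works whether partials arrive per node, per rack, or in a combiner stage.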
Images of god in relation to coping strategies of palliative cancer patients.
van Laarhoven, Hanneke W M; Schilderman, Johannes; Vissers, Kris C; Verhagen, Constans A H H V M; Prins, Judith
2010-10-01
Religious coping is important for end-of-life treatment preferences, advance care planning, adjustment to stress, and quality of life. The currently available religious coping instruments draw on a religious and spiritual background that presupposes a very specific image of God, namely God as someone who personally interacts with people. However, according to empirical research, people may have various images of God that may or may not exist simultaneously. It is unknown whether one's belief in a specific image of God is related to the way one copes with a life-threatening disease. To examine the relation between adherence to a personal, a nonpersonal, and/or an unknowable image of God and coping strategies in a group of Dutch palliative cancer patients who were no longer receiving antitumor treatments. In total, 68 palliative care patients completed and returned the questionnaires on Images of God and the COPE-Easy. In the regression analysis, a nonpersonal image of God was a significant positive predictor for the coping strategies seeking advice and information (β=0.339, P<0.01), seeking moral support (β=0.262, P<0.05), and denial (β=0.26, P<0.05), and a negative predictor for the coping strategy humor (β=-0.483, P<0.01). A personal image of God was a significant positive predictor for the coping strategy turning to religion (β=0.608, P<0.01). Age was the most important sociodemographic predictor for coping and had negative predictive value for seeking advice and information (β=-0.268, P<0.05) and seeking moral support (β=-0.247, P<0.05). A nonpersonal image of God is a more relevant predictor for different coping strategies in Dutch palliative cancer patients than a personal or an unknowable image of God. Copyright © 2010 U.S. Cancer Pain Relief Committee. Published by Elsevier Inc. All rights reserved.
Convolution kernels for multi-wavelength imaging
NASA Astrophysics Data System (ADS)
Boucaud, A.; Bocchio, M.; Abergel, A.; Orieux, F.; Dole, H.; Hadj-Youcef, M. A.
2016-12-01
Astrophysical images issued from different instruments and/or spectral bands often need to be processed together, either for fitting or comparison purposes. However, each image is affected by an instrumental response, also known as the point-spread function (PSF), that depends on the characteristics of the instrument as well as the wavelength and the observing strategy. Given the knowledge of the PSF in each band, a straightforward way of processing images is to homogenise them all to a target PSF using convolution kernels, so that they appear as if they had been acquired by the same instrument. We propose an algorithm that generates such PSF-matching kernels, based on Wiener filtering with a tunable regularisation parameter. This method ensures that all anisotropic features in the PSFs are taken into account. We compare our method to existing procedures using measured Herschel/PACS and SPIRE PSFs and simulated JWST/MIRI PSFs. Significant gains up to two orders of magnitude are obtained with respect to the use of kernels computed assuming Gaussian or circularised PSFs. Software to compute these kernels is available at
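A Wiener-regularised PSF-matching kernel of the kind described above can be sketched in a few lines of Fourier algebra: the kernel's transform is F(target)·conj(F(source)) / (|F(source)|² + μ), i.e. a regularised deconvolution of the source PSF followed by convolution with the target. The Gaussian PSFs and the value of μ below are placeholders, not the paper's data or tuning.

```python
import numpy as np

def gaussian_psf(n, sigma):
    # placeholder isotropic PSF, unit sum, centred on an n x n grid (n odd)
    y, x = np.mgrid[:n, :n] - (n - 1) / 2.0
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return g / g.sum()

def matching_kernel(psf_source, psf_target, mu=1e-4):
    """Kernel k such that psf_source * k ≈ psf_target (convolution),
    with Wiener regularisation mu damping noise-amplifying frequencies."""
    Fs = np.fft.fft2(np.fft.ifftshift(psf_source))
    Ft = np.fft.fft2(np.fft.ifftshift(psf_target))
    K = Ft * np.conj(Fs) / (np.abs(Fs) ** 2 + mu)
    return np.real(np.fft.fftshift(np.fft.ifft2(K)))

src, tgt = gaussian_psf(65, 2.0), gaussian_psf(65, 4.0)
k = matching_kernel(src, tgt)
# Convolving src with k approximately reproduces tgt.
```

Larger μ suppresses high-frequency noise amplification at the cost of a slightly broader homogenised PSF; the anisotropy of the real PSFs is carried through because the full 2D transforms are used.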
Santiesteban, Daniela Y; Kubelick, Kelsey; Dhada, Kabir S; Dumani, Diego; Suggs, Laura; Emelianov, Stanislav
2016-03-01
The past three decades have seen numerous advances in tissue engineering and regenerative medicine (TERM) therapies. However, despite these successes, there is still much to be done before TERM therapies become commonplace in the clinic. One of the main obstacles is the lack of knowledge regarding complex tissue engineering processes. Imaging strategies, in conjunction with exogenous contrast agents, can aid in this endeavor by assessing in vivo therapeutic progress. The ability to uncover real-time treatment progress will help shed light on complex tissue engineering processes and lead to the development of improved, adaptive treatments. More importantly, the utilized exogenous contrast agents can double as therapeutic agents. Proper use of these Monitoring/Imaging and Regenerative Agents (MIRAs) can help increase TERM therapy successes and allow for clinical translation. While other fields have exploited similar particles for combining diagnostics and therapy, MIRA research is still in its beginning stages, with much of the current research focused on imaging or therapeutic applications separately. Advancing MIRA research will have numerous impacts on achieving clinical translation of TERM therapies. Therefore, it is our goal to highlight current MIRA progress and suggest future research that can lead to effective TERM treatments.
A ganglion-cell-based primary image representation method and its contribution to object recognition
NASA Astrophysics Data System (ADS)
Wei, Hui; Dai, Zhi-Long; Zuo, Qing-Song
2016-10-01
A visual stimulus is represented by the biological visual system at several levels; from low to high, these are: photoreceptor cells, ganglion cells (GCs), lateral geniculate nucleus cells and visual cortical neurons. Retinal GCs at the early level need to represent raw data only once, but meet a wide number of diverse requests from different vision-based tasks. This means the information representation at this level is general and not task-specific. Neurobiological findings have attributed this universal adaptation to GCs' receptive field (RF) mechanisms. For the purposes of developing a highly efficient image representation method that can facilitate information processing and interpretation at later stages, here we design a computational model to simulate the GC's non-classical RF. This new image representation method can extract major structural features from raw data, and is consistent with other statistical measures of the image. Based on the new representation, the performance of other state-of-the-art algorithms in contour detection and segmentation can be upgraded remarkably. This work concludes that applying a sophisticated representation scheme at an early stage is an efficient and promising strategy in visual information processing.
Revisit, revamp and revitalize your business plan: part 1.
Waldron, David
2011-01-01
The diagnostic imaging department has a pivotal role within the hospital and its pillar services. Understanding this role and also understanding the population served helps to further define and justify the "what" and "why" of the business plan. Understand the market capacity and how market needs can be satisfied. Develop a "go-to-market" strategy, which is the part of the business plan where it is decided how to share that message with the market. In the aftermath of healthcare reform and the economic recession, investing in new imaging technology has never been under greater scrutiny. A three step process for developing support is provided.
Enzyme-Activated Fluorogenic Probes for Live-Cell and in Vivo Imaging.
Chyan, Wen; Raines, Ronald T
2018-06-20
Fluorogenic probes, small-molecule sensors that unmask brilliant fluorescence upon exposure to specific stimuli, are powerful tools for chemical biology. Those probes that respond to enzymatic activity illuminate the complex dynamics of biological processes at a level of spatiotemporal detail and sensitivity unmatched by other techniques. Here, we review recent advances in enzyme-activated fluorogenic probes for biological imaging. We organize our survey by enzyme classification, with emphasis on fluorophore masking strategies, modes of enzymatic activation, and the breadth of current and future applications. Key challenges such as probe selectivity and spectroscopic requirements are described alongside therapeutic, diagnostic, and theranostic opportunities.
Visually representing reality: aesthetics and accessibility aspects
NASA Astrophysics Data System (ADS)
van Nes, Floris L.
2009-02-01
This paper gives an overview of the visual representation of reality with three imaging technologies: painting, photography and electronic imaging. The contribution of the important image aspects, called dimensions hereafter, such as color, fine detail and total image size, to the degree of reality and aesthetic value of the rendered image is described for each of these technologies. Whereas quite a few of these dimensions - or approximations, or even only suggestions thereof - were already present in prehistoric paintings, apparent motion and true stereoscopic vision were only recently added - unfortunately also introducing accessibility and image safety issues. Efforts are made to reduce the incidence of undesirable biomedical effects such as photosensitive seizures (PSS), visually induced motion sickness (VIMS), and visual fatigue from stereoscopic images (VFSI) by international standardization of the image parameters to be avoided by image providers and display manufacturers. The history of this type of standardization, from an International Workshop Agreement to a strategy for accomplishing effective international standardization by ISO, is treated at some length. One of the difficulties to be mastered in this process is the reconciliation of the sometimes opposing interests of vulnerable persons, thrill-seeking viewers, creative video designers and the game industry.
Exogenous Molecular Probes for Targeted Imaging in Cancer: Focus on Multi-modal Imaging
Joshi, Bishnu P.; Wang, Thomas D.
2010-01-01
Cancer is one of the major causes of mortality and morbidity in our healthcare system. Molecular imaging is an emerging methodology for the early detection of cancer, guidance of therapy, and monitoring of response. The development of new instruments and exogenous molecular probes that can be labeled for multi-modality imaging is critical to this process. Today, molecular imaging is at a crossroad, and new targeted imaging agents are expected to broadly expand our ability to detect and manage cancer. This integrated imaging strategy will permit clinicians to not only localize lesions within the body but also to manage their therapy by visualizing the expression and activity of specific molecules. This information is expected to have a major impact on drug development and understanding of basic cancer biology. At this time, a number of molecular probes have been developed by conjugating various labels to affinity ligands for targeting in different imaging modalities. This review will describe the current status of exogenous molecular probes for optical, scintigraphic, MRI and ultrasound imaging platforms. Furthermore, we will also shed light on how these techniques can be used synergistically in multi-modal platforms and how these techniques are being employed in current research. PMID:22180839
SU-E-J-88: Deformable Registration Using Multi-Resolution Demons Algorithm for 4DCT.
Li, Dengwang; Yin, Yong
2012-06-01
In order to register 4DCT efficiently, we propose a deformable registration algorithm based on an improved multi-resolution demons strategy. 4DCT images of lung cancer patients were collected from a General Electric Discovery ST CT scanner at our cancer hospital. All of the images were sorted into groups and reconstructed according to their phases, and each respiratory cycle was divided into 10 phases with a time interval of 10%. Firstly, in our improved demons algorithm we use the gradients of both the reference and floating images as deformation forces and redistribute the forces according to the proportion of the two forces. Furthermore, we introduce an intermediate variable into the cost function to decrease noise in the registration process. At the same time, a Gaussian multi-resolution strategy and the BFGS method for optimization are used to improve the speed and accuracy of the registration. To validate the performance of the algorithm, we registered the 10 phase-images and compared the difference between the floating and reference images before and after registration, where two landmarks were chosen by an experienced clinician. We registered the 10 phase-images of 4D-CT from a lung cancer patient, chose the images at exhalation as the reference, and registered all other images to the reference images. The method has good accuracy, demonstrated by a higher similarity measure for registration of 4D-CT, and it can register large deformations precisely. Finally, we obtained the tumor target from the deformation fields using the proposed method, which is more accurate than the internal margin (IM) expanded from the Gross Tumor Volume (GTV). Furthermore, we achieved tumor and normal tissue tracking and dose accumulation using the 4DCT data. An efficient deformable registration algorithm was proposed using a multi-resolution demons algorithm for 4DCT. © 2012 American Association of Physicists in Medicine.
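The force term at the heart of any demons scheme can be shown compactly. This is the classic Thirion force, u = (m − f)·∇f / (|∇f|² + (m − f)²); the abstract's variant additionally mixes in the floating image's gradient and redistributes the two forces, which is omitted here for brevity.

```python
import numpy as np

def demons_force(fixed, moving, eps=1e-12):
    """One demons force field: displacement per voxel that pulls the
    moving image toward the fixed image (eps guards the denominator)."""
    gy, gx = np.gradient(fixed)          # gradients along rows, then columns
    diff = moving - fixed
    denom = gx**2 + gy**2 + diff**2 + eps
    return diff * gx / denom, diff * gy / denom

f = np.outer(np.linspace(0.0, 1.0, 8), np.ones(8))  # vertical intensity ramp
m = np.roll(f, 1, axis=0)                           # "breathing": one-row shift
ux, uy = demons_force(f, m)
# For this ramp, interior voxels get uy ≈ -0.5, pointing against the shift.
```

In the multi-resolution strategy of the abstract, this force would be evaluated on coarse grids first (after Gaussian smoothing and downsampling) and the resulting field upsampled to initialise finer levels.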
Vision function testing for a suprachoroidal retinal prosthesis: effects of image filtering
NASA Astrophysics Data System (ADS)
Barnes, Nick; Scott, Adele F.; Lieby, Paulette; Petoe, Matthew A.; McCarthy, Chris; Stacey, Ashley; Ayton, Lauren N.; Sinclair, Nicholas C.; Shivdasani, Mohit N.; Lovell, Nigel H.; McDermott, Hugh J.; Walker, Janine G.; BVA Consortium,the
2016-06-01
Objective. One strategy to improve the effectiveness of prosthetic vision devices is to process incoming images to ensure that key information can be perceived by the user. This paper presents the first comprehensive results of vision function testing for a suprachoroidal retinal prosthetic device utilizing 20 stimulating electrodes. Further, we investigate whether using image filtering can improve results on a light localization task for implanted participants compared to minimal vision processing. No controlled implanted-participant studies have yet investigated whether vision processing methods that are not task-specific can lead to improved results. Approach. Three participants with profound vision loss from retinitis pigmentosa were implanted with a suprachoroidal retinal prosthesis. All three completed multiple trials of a light localization test, and one participant completed multiple trials of acuity tests. The visual representations used were: Lanczos2 (a high-quality Nyquist bandlimited downsampling filter); minimal vision processing (MVP); wide-view regional averaging filtering (WV); scrambled; and system off. Main results. Using Lanczos2, all three participants successfully completed the light localization task and obtained a significantly higher percentage of correct responses than using MVP (p ≤ 0.025) or with the system off (p < 0.0001). Further, in a preliminary result using Lanczos2, one participant successfully completed grating acuity and Landolt C tasks, and showed significantly better performance (p = 0.004) compared to WV, scrambled and system off on the grating acuity task. Significance. Participants successfully completed vision tasks using a 20-electrode suprachoroidal retinal prosthesis. Vision processing with a Nyquist bandlimited image filter has shown an advantage for a light localization task.
This result suggests that this filter, and targeted, more advanced vision processing schemes, may become important components of retinal prostheses to enhance performance. ClinicalTrials.gov Identifier: NCT01603576.
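The Lanczos2 filter named above is a standard, well-defined kernel, L(x) = sinc(x)·sinc(x/2) for |x| < 2 and 0 otherwise, which is what makes the downsampling approximately Nyquist bandlimited for the electrode grid's sampling rate. A minimal sketch of the kernel itself (not the device's processing chain):

```python
import numpy as np

def lanczos2(x):
    """Lanczos2 kernel: sinc(x) * sinc(x/2) on |x| < 2, else 0.
    np.sinc is the normalised sinc, sin(pi x)/(pi x)."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < 2, np.sinc(x) * np.sinc(x / 2), 0.0)

taps = lanczos2(np.arange(-2, 3))  # integer taps ≈ [0, 0, 1, 0, 0]
```

Resampling an image with these taps (applied separably at the fractional offsets of each output pixel) interpolates exactly at integer positions while rolling off energy above the output Nyquist frequency.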
Inverse Source Data-Processing Strategies for Radio-Frequency Localization in Indoor Environments.
Gennarelli, Gianluca; Al Khatib, Obada; Soldovieri, Francesco
2017-10-27
Indoor positioning of mobile devices plays a key role in many aspects of our daily life. These include real-time people tracking and monitoring, activity recognition, emergency detection, navigation, and numerous location-based services. Although many wireless technologies and data-processing algorithms have been developed in recent years, indoor positioning is still the subject of intensive research. This paper deals with active radio-frequency (RF) source localization in indoor scenarios. The localization task is carried out at the physical layer thanks to receiving sensor arrays that are deployed on the border of the surveillance region to record the signal emitted by the source. The localization problem is formulated as an imaging one by taking advantage of the inverse source approach. Different measurement configurations and data-processing/fusion strategies are examined to investigate their effectiveness in terms of localization accuracy under both line-of-sight (LOS) and non-line-of-sight (NLOS) conditions. Numerical results based on full-wave synthetic data are reported to support the analysis.
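The idea of imaging a source from border sensors can be illustrated with a toy grid search: model the arrival delay from every candidate cell to each sensor and pick the cell that best matches the measurements. This stands in for, and is far simpler than, the full-wave inverse source formulation in the abstract; the geometry and noise-free delays below are invented.

```python
import numpy as np

# Four sensors on the border of a 10 m x 10 m room, one source inside.
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
true_src = np.array([3.0, 7.0])
c = 3e8  # propagation speed (m/s)
measured = np.linalg.norm(sensors - true_src, axis=1) / c  # arrival times

# "Image" the room: misfit between modeled and measured delays per cell.
xs = np.linspace(0.0, 10.0, 101)
ys = np.linspace(0.0, 10.0, 101)
err = np.empty((ys.size, xs.size))
for iy, y in enumerate(ys):
    for jx, x in enumerate(xs):
        model = np.linalg.norm(sensors - np.array([x, y]), axis=1) / c
        err[iy, jx] = np.sum((model - measured) ** 2)

i, j = np.unravel_index(np.argmin(err), err.shape)
print(ys[i], xs[j])  # ≈ (7.0, 3.0) on this noise-free grid
```

With noisy or NLOS data the minimum broadens or shifts, which is exactly where the fusion strategies compared in the paper matter.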
Araki, Tadashi; Kumar, P Krishna; Suri, Harman S; Ikeda, Nobutaka; Gupta, Ajay; Saba, Luca; Rajan, Jeny; Lavra, Francesco; Sharma, Aditya M; Shafique, Shoaib; Nicolaides, Andrew; Laird, John R; Suri, Jasjit S
2016-07-01
The degree of stenosis in the carotid artery can be predicted using the automated carotid lumen diameter (LD) measured from B-mode ultrasound images. Systolic velocity-based methods for measurement of LD are subjective. With the advancement of high-resolution imaging, image-based methods have started to emerge. However, they require robust image analysis for accurate LD measurement. This paper presents two different algorithms for automated segmentation of the lumen borders in carotid ultrasound images. Both algorithms are modeled as a two-stage process. Stage one consists of a global-based model using a scale-space framework for the extraction of the region of interest. This stage is common to both algorithms. Stage two is modeled using a local-based strategy that extracts the lumen interfaces. At this stage, algorithm-1 is modeled as a region-based strategy using a classification framework, whereas algorithm-2 is modeled as a boundary-based approach that uses the level set framework. Two databases (DB), a Japan DB (JDB) (202 patients, 404 images) and a Hong Kong DB (HKDB) (50 patients, 300 images), were used in this study. Two trained neuroradiologists performed manual LD tracings. The mean automated LD measured was 6.35 ± 0.95 mm for JDB and 6.20 ± 1.35 mm for HKDB. The precision-of-merit was 97.4% and 98.0% with respect to the two manual tracings for JDB, and 99.7% and 97.9% with respect to the two manual tracings for HKDB. Statistical tests such as ANOVA, Chi-squared, t-test, and Mann-Whitney test were conducted to show the stability and reliability of the automated techniques.
ERIC Educational Resources Information Center
Price, Norman T.
2013-01-01
The availability and sophistication of visual display images, such as simulations, for use in science classrooms has increased exponentially; however, it can be difficult for teachers to use these images to encourage and engage active student thinking. There is a need to describe flexible discussion strategies that use visual media to engage active…
Small-window parametric imaging based on information entropy for ultrasound tissue characterization
Tsui, Po-Hsiang; Chen, Chin-Kuo; Kuo, Wen-Hung; Chang, King-Jen; Fang, Jui; Ma, Hsiang-Yang; Chou, Dean
2017-01-01
Constructing ultrasound statistical parametric images by using a sliding window is a widely adopted strategy for characterizing tissues. Deficiency in spatial resolution, the appearance of boundary artifacts, and the prerequisite data distribution limit the practicability of statistical parametric imaging. In this study, small-window entropy parametric imaging was proposed to overcome the above problems. Simulations and measurements of phantoms were executed to acquire backscattered radiofrequency (RF) signals, which were processed to explore the feasibility of small-window entropy imaging in detecting scatterer properties. To validate the ability of entropy imaging in tissue characterization, measurements of benign and malignant breast tumors were conducted (n = 63) to compare performances of conventional statistical parametric (based on Nakagami distribution) and entropy imaging by the receiver operating characteristic (ROC) curve analysis. The simulation and phantom results revealed that entropy images constructed using a small sliding window (side length = 1 pulse length) adequately describe changes in scatterer properties. The area under the ROC for using small-window entropy imaging to classify tumors was 0.89, which was higher than 0.79 obtained using statistical parametric imaging. In particular, boundary artifacts were largely suppressed in the proposed imaging technique. Entropy enables using a small window for implementing ultrasound parametric imaging. PMID:28106118
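The sliding-window entropy map described above can be sketched directly: slide a small window over the envelope image, histogram the amplitudes inside it, and write the Shannon entropy at the window centre. The window size and bin count below are illustrative choices, not the paper's pulse-length-derived settings.

```python
import numpy as np

def entropy_image(img, win=5, bins=16):
    """Shannon entropy (bits) of the amplitude distribution in each
    win x win window; the border (where the window overruns) stays 0."""
    half = win // 2
    out = np.zeros_like(img, dtype=float)
    lo, hi = img.min(), img.max()
    for i in range(half, img.shape[0] - half):
        for j in range(half, img.shape[1] - half):
            patch = img[i - half:i + half + 1, j - half:j + half + 1]
            p, _ = np.histogram(patch, bins=bins, range=(lo, hi))
            p = p / p.sum()
            p = p[p > 0]
            out[i, j] = -np.sum(p * np.log2(p))
    return out

rng = np.random.default_rng(0)
img = rng.rayleigh(1.0, (32, 32))  # speckle-like backscatter amplitudes
ent = entropy_image(img)
```

A uniform region yields zero entropy while richly scattering tissue yields values approaching log2(bins), which is what makes the map usable for tissue characterization even with a very small window.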
Multi-texture local ternary pattern for face recognition
NASA Astrophysics Data System (ADS)
Essa, Almabrok; Asari, Vijayan
2017-05-01
In the imagery and pattern analysis domain, a variety of descriptors have been proposed and employed for computer vision applications such as face detection and recognition. Many of them are affected by conditions during image acquisition, such as variations in illumination and the presence of noise, because they rely entirely on image intensity values to encode the image information. To overcome these problems, a novel technique named Multi-Texture Local Ternary Pattern (MTLTP) is proposed in this paper. MTLTP combines edges and corners based on the local ternary pattern strategy to extract the local texture features of the input image, and then returns a spatial histogram feature vector as the descriptor for each image, which is used for recognition. Experimental results using a k-nearest neighbors (k-NN) classifier on two publicly available datasets demonstrate that the algorithm achieves efficient face recognition under extreme variations in illumination/lighting environments and slight variations in pose.
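The local ternary pattern underlying MTLTP can be sketched compactly. Each of a pixel's 8 neighbors is coded +1, 0, or -1 against a tolerance band around the center, and the ternary code is split into the usual "upper" and "lower" binary patterns. This is the standard LTP building block only, not the paper's full multi-texture edge/corner combination; the function name and bit ordering are my own choices.

```python
def local_ternary_pattern(img, i, j, t=5):
    """Ternary code for pixel (i, j): each neighbor is compared with
    the center using tolerance t, then split into two binary codes."""
    c = img[i][j]
    # 8 neighbors, clockwise from top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = lower = 0
    for k, (di, dj) in enumerate(offs):
        v = img[i + di][j + dj]
        if v >= c + t:
            upper |= 1 << k        # neighbor clearly brighter
        elif v <= c - t:
            lower |= 1 << k        # neighbor clearly darker
        # otherwise within the tolerance band: ternary code 0
    return upper, lower
```

The tolerance band is what makes LTP less noise-sensitive than a plain local binary pattern: small intensity fluctuations around the center fall into the 0 code instead of flipping bits.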
Gregg, Chelsea L; Recknagel, Andrew K; Butcher, Jonathan T
2015-01-01
Tissue morphogenesis and embryonic development are dynamic events challenging to quantify, especially considering the intricate events that happen simultaneously in different locations and time. Micro- and more recently nano-computed tomography (micro/nanoCT) has been used for the past 15 years to characterize large 3D fields of tortuous geometries at high spatial resolution. We and others have advanced micro/nanoCT imaging strategies for quantifying tissue- and organ-level fate changes throughout morphogenesis. Exogenous soft tissue contrast media enables visualization of vascular lumens and tissues via extravasation. Furthermore, the emergence of antigen-specific tissue contrast enables direct quantitative visualization of protein and mRNA expression. Micro-CT X-ray doses appear to be non-embryotoxic, enabling longitudinal imaging studies in live embryos. In this chapter we present established soft tissue contrast protocols for obtaining high-quality micro/nanoCT images and the image processing techniques useful for quantifying anatomical and physiological information from the data sets.
Gyftopoulos, Soterios; Guja, Kip E; Subhas, Naveen; Virk, Mandeep S; Gold, Heather T
2017-12-01
The purpose of this study was to determine the value of magnetic resonance imaging (MRI) and ultrasound-based imaging strategies in the evaluation of a hypothetical population with a symptomatic full-thickness supraspinatus tendon (FTST) tear using formal cost-effectiveness analysis. A decision analytic model from the health care system perspective for 60-year-old patients with symptoms secondary to a suspected FTST tear was used to evaluate the incremental cost-effectiveness of 3 imaging strategies during a 2-year time horizon: MRI, ultrasound, and ultrasound followed by MRI. Comprehensive literature search and expert opinion provided data on cost, probability, and quality of life estimates. The primary effectiveness outcome was quality-adjusted life-years (QALYs) through 2 years, with a willingness-to-pay threshold set to $100,000/QALY gained (2016 U.S. dollars). Costs and health benefits were discounted at 3%. Ultrasound was the least costly strategy ($1385). MRI was the most effective (1.332 QALYs). Ultrasound was the most cost-effective strategy but was not dominant. The incremental cost-effectiveness ratio for MRI was $22,756/QALY gained, below the willingness-to-pay threshold. Two-way sensitivity analysis demonstrated that MRI was favored over the other imaging strategies over a wide range of reasonable costs. In probabilistic sensitivity analysis, MRI was the preferred imaging strategy in 78% of the simulations. MRI and ultrasound represent cost-effective imaging options for evaluation of the patient thought to have a symptomatic FTST tear. The results indicate that MRI is the preferred strategy based on cost-effectiveness criteria, although the decision between MRI and ultrasound for an imaging center is likely to be dependent on additional factors, such as available resources and workflow. Copyright © 2017 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
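The decision-analytic comparison above rests on the incremental cost-effectiveness ratio (ICER), which is simple to compute. In this sketch only the $1385 ultrasound cost, the 1.332 MRI QALYs, and the $100,000/QALY threshold come from the abstract; the MRI cost and ultrasound QALYs below are invented for illustration, so the resulting ratio is not the study's $22,756/QALY figure.

```python
def icer(cost_a, qaly_a, cost_b, qaly_b):
    """Incremental cost-effectiveness ratio of strategy B over
    strategy A: extra dollars paid per extra QALY gained."""
    return (cost_b - cost_a) / (qaly_b - qaly_a)

# Hypothetical inputs: ultrasound ($1385, 1.300 QALYs assumed)
# vs. MRI ($3000 assumed, 1.332 QALYs from the abstract).
ratio = icer(1385, 1.300, 3000, 1.332)
within_threshold = ratio <= 100_000   # willingness-to-pay test
```

A strategy is "cost-effective but not dominant" exactly when, as here, it is cheaper yet the costlier alternative's ICER still falls below the willingness-to-pay threshold.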
The Role of Motion Concepts in Understanding Non-Motion Concepts
Khatin-Zadeh, Omid; Banaruee, Hassan; Khoshsima, Hooshang; Marmolejo-Ramos, Fernando
2017-01-01
This article discusses a specific type of metaphor in which an abstract non-motion domain is described in terms of a motion event. Abstract non-motion domains are inherently different from concrete motion domains. However, motion domains are used to describe abstract non-motion domains in many metaphors. Three main reasons are suggested for the suitability of motion events in such metaphorical descriptions. Firstly, motion events usually have high degrees of concreteness. Secondly, motion events are highly imageable. Thirdly, components of any motion event can be imagined almost simultaneously within a three-dimensional space. These three characteristics make motion events suitable domains for describing abstract non-motion domains, and facilitate the process of online comprehension throughout language processing. Extending the main point into the field of mathematics, this article discusses the process of transforming abstract mathematical problems into imageable geometric representations within the three-dimensional space. This strategy is widely used by mathematicians to solve highly abstract and complex problems. PMID:29240715
Cognitive and Neural Effects of Semantic Encoding Strategy Training in Older Adults
Anderson, B. A.; Barch, D. M.; Jacoby, L. L.
2012-01-01
Prior research suggests that older adults are less likely than young adults to use effective learning strategies during intentional encoding. This functional magnetic resonance imaging (fMRI) study investigated whether training older adults to use semantic encoding strategies can increase their self-initiated use of these strategies and improve their recognition memory. The effects of training on older adults' brain activity during intentional encoding were also examined. Training increased older adults' self-initiated semantic encoding strategy use and eliminated pretraining age differences in recognition memory following intentional encoding. Training also increased older adults' brain activity in the medial superior frontal gyrus, right precentral gyrus, and left caudate during intentional encoding. In addition, older adults' training-related changes in recognition memory were strongly correlated with training-related changes in brain activity in prefrontal and left lateral temporal regions associated with semantic processing and self-initiated verbal encoding strategy use in young adults. These neuroimaging results demonstrate that semantic encoding strategy training can alter older adults' brain activity patterns during intentional encoding and suggest that young and older adults may use the same network of brain regions to support self-initiated use of verbal encoding strategies. PMID:21709173
Single-Drop Raman Imaging Exposes the Trace Contaminants in Milk.
Tan, Zong; Lou, Ting-Ting; Huang, Zhi-Xuan; Zong, Jing; Xu, Ke-Xin; Li, Qi-Feng; Chen, Da
2017-08-02
Better milk safety control can offer important means to promote public health. However, few technologies can detect different types of contaminants in milk simultaneously. In this regard, the present work proposes a single-drop Raman imaging (SDRI) strategy for semiquantitation of multiple hazardous factors in milk solutions. By developing an SDRI strategy that incorporates the coffee-ring effect (a natural phenomenon in which solutes often settle into a condensed ring pattern after a drop evaporates) for sample pretreatment and the discrete wavelet transform for spectra processing, the method serves well to expose typical hazardous molecular species in milk products, such as melamine, sodium thiocyanate, and lincomycin hydrochloride, with little sample preparation. The detection sensitivities for melamine, sodium thiocyanate, and lincomycin hydrochloride are 0.1 mg/kg, 1 mg/kg, and 0.1 mg/kg, respectively. We establish that SDRI represents a novel and environment-friendly method that screens milk efficiently for safety and could be readily extended to the inspection of other foods.
NASA Astrophysics Data System (ADS)
Álvarez, Charlens; Martínez, Fabio; Romero, Eduardo
2015-01-01
Pelvic magnetic resonance (MR) images are used in prostate cancer radiotherapy (RT) as part of radiation planning. Modern protocols require a manual delineation, a tedious and variable activity that may take about 20 minutes per patient, even for trained experts. That considerable time is an important workflow burden in most radiological services. Automatic or semi-automatic methods might improve efficiency by decreasing measurement times while conserving the required accuracy. This work presents a fully automatic atlas-based segmentation strategy that selects the most similar templates for a new MRI using a robust multi-scale SURF analysis. A new segmentation is then achieved by a linear combination of the selected templates, which are first non-rigidly registered toward the new image. The proposed method shows reliable segmentations, obtaining an average Dice coefficient of 79% when compared with the expert manual segmentation, under a leave-one-out scheme on the training database.
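The two stages of this atlas strategy, ranking templates by similarity and then combining their label maps, can be sketched abstractly. The paper ranks with multi-scale SURF matching and fuses with a linear combination after non-rigid registration; here a plug-in similarity function and a majority vote stand in for both, so everything below (names, the default negative-SSD similarity, the vote) is a simplification, with images and labels as flat lists already in a common space.

```python
def select_templates(target, atlas, k=3, sim=None):
    """Rank atlas entries by similarity to the target image and keep
    the top k. Any similarity function can be supplied; the default
    is negative sum-of-squared-differences."""
    sim = sim or (lambda a, b: -sum((x - y) ** 2 for x, y in zip(a, b)))
    ranked = sorted(atlas, key=lambda e: sim(target, e["image"]),
                    reverse=True)
    return ranked[:k]

def fuse_labels(templates):
    """Majority-vote fusion of the selected (already registered)
    binary label maps -- a simple stand-in for a weighted linear
    combination of templates."""
    n = len(templates[0]["label"])
    votes = [sum(t["label"][i] for t in templates) for i in range(n)]
    return [1 if 2 * v > len(templates) else 0 for v in votes]
```

Selecting only the most similar templates before fusing is what keeps a single atypical atlas case from degrading the final contour.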
Medical Students' Understanding of Directed Questioning by Their Clinical Preceptors.
Lo, Lawrence; Regehr, Glenn
2017-01-01
Phenomenon: Throughout clerkship, preceptors ask medical students questions for both assessment and teaching purposes. However, the cognitive and strategic aspects of students' approaches to managing this situation have not been explored. Without an understanding of how students approach the question and answer activity, medical educators are unable to appreciate how effectively this activity fulfills their purposes of assessment or determine the activity's associated educational effects. A convenience sample of nine 4th-year medical students participated in semistructured one-on-one interviews exploring their approaches to managing situations in which they have been challenged with questions from preceptors to which they do not know the answer. Through an iterative and recursive analytic reading of the interview transcripts, data were coded and organized to identify themes relevant to the students' considerations in answering such questions. Students articulated deliberate strategies for managing the directed questioning activity, which at times focused on the optimization of their learning but always included considerations of image management. Managing image involved projecting not only being knowledgeable but also being teachable. The students indicated that their considerations in selecting an appropriate strategy in a given situation involved their perceptions of their preceptors' intentions and preferences as well as several contextual factors. Insights: The medical students we interviewed were quite sophisticated in their understanding of the social nuances of the directed questioning process and described a variety of contextually invoked strategies to manage the situation and maintain a positive image.
NASA Astrophysics Data System (ADS)
Gloger, Oliver; Tönnies, Klaus; Mensel, Birger; Völzke, Henry
2015-11-01
In epidemiological studies as well as in clinical practice, the amount of medical image data produced has strongly increased in the last decade. In this context, organ segmentation in MR volume data has gained increasing attention for medical applications. Especially in large-scale population-based studies, organ volumetry is highly relevant and requires exact organ segmentation. Since manual segmentation is time-consuming and prone to reader variability, large-scale studies need automated methods to perform organ segmentation. Fully automatic organ segmentation in native MR image data has proven to be a very challenging task. Imaging artifacts as well as inter- and intrasubject MR-intensity differences complicate the application of supervised learning strategies. Thus, we propose a modularized framework with a two-step probabilistic approach that generates subject-specific probability maps for renal parenchyma tissue, which are subsequently refined using several extended segmentation strategies. We present a three-class support vector machine recognition system that incorporates Fourier descriptors as shape features to recognize and segment characteristic parenchyma parts. Probabilistic methods use the segmented characteristic parenchyma parts to generate high-quality subject-specific parenchyma probability maps. Several refinement strategies, including a final shape-based 3D level set segmentation technique, are used in subsequent processing modules to segment the renal parenchyma. Furthermore, our framework recognizes and excludes renal cysts from the parenchymal volume, which is important for analyzing renal function. Volume errors and Dice coefficients show that our presented framework outperforms existing approaches.
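The Fourier descriptors used here as shape features can be sketched directly: treat a closed contour's (x, y) points as complex numbers, take the discrete Fourier coefficients, drop the DC term for translation invariance, and normalize to the first harmonic for scale invariance. This is the generic construction, not the paper's exact feature vector; the function name and default harmonic count are my own.

```python
import cmath

def fourier_descriptors(contour, n=4):
    """Magnitudes of the first n Fourier coefficients of a closed
    (x, y) contour, DC term dropped and the rest normalized to the
    first harmonic."""
    pts = [complex(x, y) for x, y in contour]
    N = len(pts)
    coeffs = [sum(p * cmath.exp(-2j * cmath.pi * k * t / N)
                  for t, p in enumerate(pts)) / N
              for k in range(n)]
    norm = abs(coeffs[1]) or 1.0          # guard against zero harmonic
    return [abs(c) / norm for c in coeffs[1:]]
```

Because the descriptor is translation- and scale-invariant, the same parenchyma shape yields the same feature vector regardless of where it sits in the image or how large it appears, which is what makes it a useful SVM input.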
[Basic concept in computer assisted surgery].
Merloz, Philippe; Wu, Hao
2006-03-01
To investigate the application of medical digital imaging systems and computer technologies in orthopedics. The main computer-assisted surgery systems comprise the four following subcategories. (1) A collection and recording process for digital data on each patient, including preoperative images (CT scans, MRI, standard X-rays), intraoperative visualization (fluoroscopy, ultrasound), and intraoperative position and orientation of surgical instruments or bone sections (using 3D localisers). Data merging is based on the matching of preoperative imaging (CT scans, MRI, standard X-rays) and intraoperative visualization (anatomical landmarks or bone surfaces digitized intraoperatively via a 3D localiser; intraoperative ultrasound images processed for delineation of bone contours). (2) In cases where only intraoperative images are used for computer-assisted surgical navigation, the calibration of the intraoperative imaging system replaces the merged data system, which is then no longer necessary. (3) A system that provides aid in decision-making, so that the surgical approach is planned on the basis of multimodal information: the interactive positioning of surgical instruments or bone sections transmitted via pre- or intraoperative images, and display of elements to guide surgical navigation (direction, axis, orientation, length and diameter of a surgical instrument, impingement, etc.). (4) A system that monitors the surgical procedure, thereby ensuring that the optimal strategy defined at the preoperative stage is taken into account. It is possible that computer-assisted orthopedic surgery systems will enable surgeons to better assess the accuracy and reliability of the various operative techniques, an indispensable stage in the optimization of surgery.
WE-E-304-01: SBRT Credentialing: Understanding the Process From Inquiry to Approval
DOE Office of Scientific and Technical Information (OSTI.GOV)
Followill, D.
SBRT is having a dramatic impact on radiation therapy of early-stage, locally advanced cancers. A number of national protocols have been and are being developed to assess the clinical efficacy of SBRT for various anatomical sites, such as lung and spine. Physics credentialing for participating in and implementing trial protocols involves a broad spectrum of requirements, from image guidance and motion management to planning technology and dosimetric constraints. For radiation facilities that do not have extensive experience in SBRT treatment and protocol credentialing, these complex processes of credentialing and implementation can be very challenging and may sometimes lead to ineffective, even unsuccessful, execution. In this proposal, we provide a comprehensive review of some current SBRT protocols, explain the requirements and their underlying rationales, illustrate representative failed and successful experiences related to SBRT credentialing, and discuss strategies for effective SBRT credentialing and implementation. Learning Objectives: Understand requirements and challenges of SBRT credentialing and implementation. Discuss processes and strategies of effective SBRT credentialing. Discuss practical considerations, potential pitfalls, and solutions of SBRT implementation.
Advances in targeting strategies for nanoparticles in cancer imaging and therapy.
Yhee, Ji Young; Lee, Sangmin; Kim, Kwangmeyung
2014-11-21
In the last decade, nanoparticles have offered great advances in diagnostic imaging and targeted drug delivery. In particular, nanoparticles have provided remarkable progress in cancer imaging and therapy based on materials science and biochemical engineering technology. Researchers constantly attempted to develop the nanoparticles which can deliver drugs more specifically to cancer cells, and these efforts brought the advances in the targeting strategy of nanoparticles. This minireview will discuss the progress in targeting strategies for nanoparticles focused on the recent innovative work for nanomedicine.
Madsen, Sarah K.; Bohon, Cara; Feusner, Jamie D.
2013-01-01
Anorexia nervosa (AN) and body dysmorphic disorder (BDD) are psychiatric disorders that involve distortion of the experience of one’s physical appearance. In AN, individuals believe that they are overweight, perceive their body as “fat,” and are preoccupied with maintaining a low body weight. In BDD, individuals are preoccupied with misperceived defects in physical appearance, most often of the face. Distorted visual perception may contribute to these cardinal symptoms, and may be a common underlying phenotype. This review surveys the current literature on visual processing in AN and BDD, addressing lower- to higher-order stages of visual information processing and perception. We focus on peer-reviewed studies of AN and BDD that address ophthalmologic abnormalities, basic neural processing of visual input, integration of visual input with other systems, neuropsychological tests of visual processing, and representations of whole percepts (such as images of faces, bodies, and other objects). The literature suggests a pattern in both groups of over-attention to detail, reduced processing of global features, and a tendency to focus on symptom-specific details in their own images (body parts in AN, facial features in BDD), with cognitive strategy at least partially mediating the abnormalities. Visuospatial abnormalities were also evident when viewing images of others and for non-appearance related stimuli. Unfortunately no study has directly compared AN and BDD, and most studies were not designed to disentangle disease-related emotional responses from lower-order visual processing. We make recommendations for future studies to improve the understanding of visual processing abnormalities in AN and BDD. PMID:23810196
Khalil, Mohammed K; Paas, Fred; Johnson, Tristan E; Su, Yung K; Payer, Andrew F
2008-01-01
This research is an effort to best utilize interactive anatomical images for instructional purposes based on cognitive load theory. Three studies explored the differential effects of three computer-based instructional strategies that use anatomical cross-sections to enhance the interpretation of radiological images. These strategies include: (1) cross-sectional images of the head that can be superimposed on radiological images, (2) transparent highlighting of anatomical structures in radiological images, and (3) cross-sectional images of the head with radiological images presented side by side. Data collected included: (1) time spent on instruction and on solving test questions, (2) mental effort during instruction and testing, and (3) students' performance in identifying anatomical structures in radiological images. Participants were 28 freshman medical students (15 males and 13 females) and 208 biology students (190 females and 18 males). All studies used a posttest-only control-group design, and the collected data were analyzed by either t test or ANOVA. In self-directed computer-based environments, the strategies that used cross sections to improve students' ability to recognize anatomical structures in radiological images showed no significant positive effects. However, when the complexity of the instructional materials increased, cross-sectional images imposed a higher cognitive load, as indicated by higher investment of mental effort. There is not enough evidence to claim that the simultaneous combination of cross sections and radiological images has no effect on the identification of anatomical structures in radiological images for novices. Further research that controls for students' learning and cognitive styles is needed to reach an informative conclusion.
How to handle 6GBytes a night and not get swamped
NASA Technical Reports Server (NTRS)
Allsman, R.; Alcock, C.; Axelrod, T.; Bennett, D.; Cook, K.; Park, H.-S.; Griest, K.; Marshall, S.; Perlmutter, S.; Stubbs, C.
1992-01-01
The Macho Project has undertaken a 5-year effort to search for dark matter in the halo of the Galaxy by scanning the Magellanic Clouds for micro-lensing events. Each evening's raw image data will be reduced in real time into the observed stars' photometric measurements. The actual search for micro-lensing events will be a post-processing operation. The theoretical prediction of the rate of such events necessitates the collection of a large number of repeated exposures. The project-designed camera subsystem delivers 64 Mbytes per exposure, with exposures typically occurring every 500 seconds. An ideal evening's observing will provide 6 Gbytes of raw image data and 40 Mbytes of reduced photometric measurements. Recognizing the difficulty of digging out from a snowballing cascade of raw data, the project requires the real-time reduction of each evening's data. The software team's implementation strategy centered on this non-negotiable mandate. Accepting the reality that two full-time people needed to implement the core real-time control and data management system within 6 months, off-the-shelf vendor components were explored to provide quick solutions to the classic needs for file management, data management, and process control. Where vendor solutions were lacking, state-of-the-art models were used for hand-tailored subsystems. In particular, Petri nets manage process control, memory-mapped bulletin boards provide interprocess communication between the multi-tasked processes, and C++ class libraries provide memory-mapped, disk-resident databases. The differences between the implementation strategy and the final implementation reality are presented. The necessity of validating vendor product claims is explored. Both the successful and hindsight decisions enabling the collection and processing of the nightly data barrage are reviewed.
NASA Astrophysics Data System (ADS)
Price, Norman T.
The availability and sophistication of visual display images, such as simulations, for use in science classrooms has increased exponentially; however, it can be difficult for teachers to use these images to encourage and engage active student thinking. There is a need to describe flexible discussion strategies that use visual media to engage active thinking. This mixed-methods study analyzes teacher behavior in lessons using visual media about the particulate model of matter that were taught by three experienced middle school teachers. Each teacher taught one half of their students with lessons using static overheads and the other half with lessons using a projected dynamic simulation. The quantitative analysis of pre-post data found significant gain differences between the two image-mode conditions, suggesting that students assigned to the simulation condition learned more than students assigned to the overhead condition. Open coding was used to identify a set of eight image-based teaching strategies that teachers were using with visual displays. Fixed codes for this set of image-based discussion strategies were then developed and used to analyze video and transcripts of whole-class discussions from 12 lessons. The image-based discussion strategies were refined over time in a set of three in-depth 2x2 comparative case studies of two teachers teaching one lesson topic with two image display modes. The comparative case study data suggest that the simulation mode may have offered greater affordances than the overhead mode for planning and enacting discussions. The 12 discussions were also coded for overall teacher-student interaction patterns, such as presentation, IRE, and IRF. When teachers moved during a lesson from using no image to using either image mode, some teachers asked more questions when the image was displayed while others asked many fewer questions.
The changes in teacher-student interaction patterns suggest that teachers vary on whether they treat the displayed image as a "tool-for-telling" or a "tool-for-asking." The study attempts to provide new descriptions of strategies teachers use to orchestrate image-based discussions designed to promote student engagement and reasoning in lessons with conceptual goals.
Children's Memory for Words Under Self-Reported and Induced Imagery Strategies.
ERIC Educational Resources Information Center
Filan, Gary L.; Sullivan, Howard J.
The effectiveness of the use of self-reported imagery strategies on children's subsequent memory performance was studied, and the coding redundancy hypothesis that memory is facilitated by using an encoding procedure in both words and images was tested. The two levels of reported memory strategy (imagize, verbalize) were crossed with "think…
ERIC Educational Resources Information Center
McCabe, Marita P.; Ricciardelli, Lina A.
2001-01-01
Investigated the nature of body image and body change strategies, as well as sociocultural influences on these variables, among a group of 1,266 adolescents. Findings indicated females were less satisfied with their bodies and were more likely to adopt strategies to lose weight, whereas males were likely to adopt strategies to increase weight and…
Autonomous spacecraft rendezvous and docking
NASA Technical Reports Server (NTRS)
Tietz, J. C.; Almand, B. J.
1985-01-01
A storyboard display is presented which summarizes work done recently in design and simulation of autonomous video rendezvous and docking systems for spacecraft. This display includes: photographs of the simulation hardware, plots of chase vehicle trajectories from simulations, pictures of the docking aid including image processing interpretations, and drawings of the control system strategy. Viewgraph-style sheets on the display bulletin board summarize the simulation objectives, benefits, special considerations, approach, and results.
Nie, Haitao; Long, Kehui; Ma, Jun; Yue, Dan; Liu, Jinguo
2015-01-01
Partial occlusions, large pose variations, and extreme ambient illumination conditions generally cause performance degradation in object recognition systems. Therefore, this paper presents a novel approach for fast and robust object recognition in cluttered scenes based on an improved scale-invariant feature transform (SIFT) algorithm and a fuzzy closed-loop control method. First, a fast SIFT algorithm is proposed by classifying SIFT features into several clusters based on attributes computed from the sub-orientation histogram (SOH); in the feature matching phase, only features that share nearly the same corresponding attributes are compared. Second, a feature matching step is performed following a prioritized order based on the scale factor calculated between the object image and the target object image, guaranteeing robust feature matching. Finally, a fuzzy closed-loop control strategy is applied to increase the accuracy of the object recognition, which is essential for the autonomous object manipulation process. Compared to the original SIFT algorithm for object recognition, the proposed method shows a significant increase in the number of SIFT features extracted from an object, and the computing speed of the object recognition process increases by more than 40%. The experimental results confirmed that the proposed method performs effectively and accurately in cluttered scenes. PMID:25714094
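The speed-up described above, comparing a feature only against candidates whose cluster attributes agree, can be sketched generically. This is the bucketing idea only, not the paper's SOH attribute computation or fuzzy control loop; the dictionary layout, field names, and pluggable distance function are my own assumptions.

```python
from collections import defaultdict

def bucket_match(feats_a, feats_b, dist):
    """Match each feature in A only against B-features sharing the
    same cluster attribute, skipping all cross-bucket comparisons."""
    buckets = defaultdict(list)
    for f in feats_b:
        buckets[f["attr"]].append(f)      # index B by attribute
    matches = []
    for f in feats_a:
        cands = buckets.get(f["attr"], [])
        if cands:
            best = min(cands,
                       key=lambda g: dist(f["desc"], g["desc"]))
            matches.append((f["id"], best["id"]))
    return matches
```

With b buckets of roughly equal size, each query compares against about 1/b of the features instead of all of them, which is where the reported speed gain comes from.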
Low-dose CT in clinical diagnostics.
Fuentes-Orrego, Jorge M; Sahani, Dushyant V
2013-09-01
Computed tomography (CT) has become key for patient management due to its outstanding capabilities for detecting disease processes and assessing treatment response, which has led to an expansion of CT imaging for diagnostic and image-guided therapeutic interventions. Despite these benefits, the growing use of CT has raised concerns about the risks associated with radiation exposure. The purpose of this article is to familiarize the reader with fundamental concepts of dose metrics for assessing radiation exposure and weighing radiation-associated risks. The article also discusses general approaches for reducing radiation dose while preserving diagnostic quality. The authors provide additional insight into undertaking protocol optimization, customizing scanning techniques based on the patient's clinical scenario and demographics. Supplemental strategies using more advanced post-processing techniques are postulated for achieving further dose improvements. The technologic offerings of CT are integral to modern medicine, and its role will continue to evolve. Although the estimated risks from the low levels of radiation of a single CT exam are uncertain, it is prudent to minimize the dose from CT by applying common-sense solutions and other simple strategies, as well as by exploiting technologic innovations. These efforts will enable us to take advantage of all the clinical benefits of CT while minimizing the likelihood of harm to patients.
Efficient Irregular Wavefront Propagation Algorithms on Hybrid CPU-GPU Machines
Teodoro, George; Pan, Tony; Kurc, Tahsin; Kong, Jun; Cooper, Lee; Saltz, Joel
2013-01-01
We address the problem of efficient execution of a computation pattern, referred to here as the irregular wavefront propagation pattern (IWPP), on hybrid systems with multiple CPUs and GPUs. The IWPP is common in several image processing operations. In the IWPP, data elements in the wavefront propagate waves to their neighboring elements on a grid if a propagation condition is satisfied. Elements receiving the propagated waves become part of the wavefront. This pattern results in irregular data accesses and computations. We develop and evaluate strategies for efficient computation and propagation of wavefronts using a multi-level queue structure. This queue structure improves the utilization of fast memories in a GPU and reduces synchronization overheads. We also develop a tile-based parallelization strategy to support execution on multiple CPUs and GPUs. We evaluate our approaches on a state-of-the-art GPU-accelerated machine (equipped with 3 GPUs and 2 multicore CPUs) using IWPP implementations of two widely used image processing operations: morphological reconstruction and Euclidean distance transform. Our results show significant performance improvements on GPUs. The use of multiple CPUs and GPUs cooperatively attains speedups of 50× and 85× with respect to single-core CPU executions for morphological reconstruction and Euclidean distance transform, respectively. PMID:23908562
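The propagation pattern itself is easy to sketch for grayscale morphological reconstruction, one of the two operations benchmarked above. This single-threaded Python version is an illustrative sketch (not the paper's multi-level GPU queue): active pixels push values to their 4-neighbours when the propagation condition holds, and a raised neighbour joins the wavefront.

```python
from collections import deque
import numpy as np

def morph_reconstruct(marker, mask):
    """Grayscale morphological reconstruction by dilation, written as an
    irregular wavefront propagation over a FIFO queue."""
    out = np.minimum(marker, mask).astype(float)
    h, w = out.shape
    q = deque((y, x) for y in range(h) for x in range(w))
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                # Propagation condition: the neighbour can be raised
                # without exceeding the mask at that position.
                v = min(out[y, x], mask[ny, nx])
                if v > out[ny, nx]:
                    out[ny, nx] = v
                    q.append((ny, nx))  # neighbour joins the wavefront
    return out
```

The irregularity is visible here: which pixels are re-queued depends entirely on the data, which is what makes the pattern hard to parallelize without the queue hierarchy the paper proposes.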
Computer image generation: Reconfigurability as a strategy in high fidelity space applications
NASA Technical Reports Server (NTRS)
Bartholomew, Michael J.
1989-01-01
The demand for realistic, high fidelity, computer image generation systems to support space simulation is well established. However, as the number and diversity of space applications increase, the complexity and cost of computer image generation systems also increase. One strategy used to harmonize cost with varied requirements is the establishment of a reconfigurable image generation system that can be adapted rapidly and easily to meet new and changing requirements. The reconfigurability strategy through the life cycle of system conception, specification, design, implementation, operation, and support for high fidelity computer image generation systems is discussed. The discussion is limited to those issues directly associated with reconfigurability and adaptability of a specialized scene generation system in a multi-faceted space applications environment. Examples and insights gained through the recent development and installation of the Improved Multi-function Scene Generation System at the Johnson Space Center Systems Engineering Simulator are reviewed and compared with current simulator industry practices. The results are clear: the strategy of reconfigurability applied to space simulation requirements provides a viable path to supporting diverse applications with an adaptable computer image generation system.
Super-Resolution Imaging Strategies for Cell Biologists Using a Spinning Disk Microscope
Hosny, Neveen A.; Song, Mingying; Connelly, John T.; Ameer-Beg, Simon; Knight, Martin M.; Wheeler, Ann P.
2013-01-01
In this study we use a spinning disk confocal microscope (SD) to generate super-resolution images of multiple cellular features from any plane in the cell. We obtain super-resolution images by using stochastic intensity fluctuations of biological probes, combining Photoactivated Localization Microscopy (PALM)/Stochastic Optical Reconstruction Microscopy (STORM) methodologies. We compared different image analysis algorithms for processing super-resolution data to identify the most suitable for analysis of particular cell structures. SOFI was chosen for the X and Y dimensions and achieved a resolution of ca. 80 nm; higher resolutions, approaching 30 nm, were possible depending on the super-resolution image analysis algorithm used. Our method uses low laser power and fluorescent probes which are available either commercially or through the scientific community, and is therefore gentle enough for biological imaging. Through comparative studies with structured illumination microscopy (SIM) and widefield epifluorescence imaging we found that our methodology is advantageous for imaging cellular structures that are not immediately at the cell-substrate interface, including the nuclear architecture and mitochondria. We have shown that it is possible to obtain two-colour images, which highlights the potential this technique has for high-content screening, imaging of multiple epitopes and live-cell imaging. PMID:24130668
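For independently blinking emitters, the second-order form of SOFI mentioned above reduces to a per-pixel temporal cumulant of the intensity fluctuations. A minimal numpy sketch of that textbook reduction (not the authors' full pipeline):

```python
import numpy as np

def sofi2(stack):
    """Second-order SOFI image: the per-pixel second cumulant (temporal
    variance) of the intensity across a (t, y, x) frame stack.
    Blinking emitters produce large fluctuations and light up; steady
    background, whose fluctuations are zero, is suppressed."""
    return np.var(stack, axis=0)
```

A pixel that alternates between on and off states gets a high value, while a constant-background pixel maps to zero, which is the source of the contrast and resolution gain.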
A new template matching method based on contour information
NASA Astrophysics Data System (ADS)
Cai, Huiying; Zhu, Feng; Wu, Qingxiao; Li, Sicong
2014-11-01
Template matching is a significant approach in machine vision due to its effectiveness and robustness. However, most template matching methods are so time consuming that they cannot be used in many real-time applications. Closed-contour matching is a popular kind of template matching. This paper presents a new closed-contour template matching method suitable for two-dimensional objects. A coarse-to-fine searching strategy is used to improve matching efficiency, and a partial computation elimination scheme is proposed to further speed up the searching process. The method consists of offline model construction and online matching. During model construction, triples and a distance image are obtained from the template image. A number of triples, each composed of three points, are created from the contour information extracted from the template image; the three points are selected so that they divide the template contour into three equal parts. The distance image is obtained by a distance transform: each point of the distance image stores the distance to the nearest point on the template contour. During matching, triples of the searching image are created by the same rule as the triples of the model. Using triangle similarity, which is invariant to rotation, translation and scaling, the triples corresponding to the triples of the model are found, which yields the initial RST (rotation, translation and scaling) parameters mapping the searching contour to the template contour. To speed up the search, the points on the searching contour are sampled to reduce the number of triples. To verify the RST parameters, the searching contour is projected onto the distance image, and the mean distance can be computed rapidly by simple additions and multiplications.
In the fine searching process, the initial RST parameters are refined to obtain the final accurate pose of the object. Experimental results show that the proposed method is reasonable and efficient, and can be used in many real-time applications.
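The distance-image verification step can be sketched directly: precompute, for every pixel, the distance to the nearest template-contour point, then score a candidate contour by averaging cheap lookups. This is a brute-force Python illustration; the function names are hypothetical, and a real implementation would use a fast distance transform rather than an all-pairs computation.

```python
import numpy as np

def distance_image(shape, contour_pts):
    """Each pixel stores the distance to the nearest template-contour
    point (brute force, for illustration only)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    pts = np.asarray(contour_pts, dtype=float)
    d = np.linalg.norm(pix[:, None, :] - pts[None, :, :], axis=2)
    return d.min(axis=1).reshape(h, w)

def mean_contour_distance(dist_img, candidate_pts):
    """Score a candidate contour by sampling the distance image at its
    (rounded) points; 0 means a perfect overlap with the template."""
    pts = np.rint(np.asarray(candidate_pts)).astype(int)
    return float(dist_img[pts[:, 0], pts[:, 1]].mean())
```

Because the distance image is built offline, online verification costs only one table lookup and one addition per contour point, which is exactly what makes the RST check fast.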
WE-B-BRC-03: Risk in the Context of Medical Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samei, E.
Prospective quality management techniques, long used by engineering and industry, have become a growing aspect of efforts to improve quality management and safety in healthcare. These techniques are of particular interest to medical physics as the scope and complexity of clinical practice continue to grow, making the prescriptive methods we have used harder to apply and potentially less effective for our interconnected and highly complex healthcare enterprise, especially in imaging and radiation oncology. An essential part of most prospective methods is the need to assess the various risks associated with problems, failures, errors, and design flaws in our systems. We therefore begin with an overview of risk assessment methodologies used in healthcare and industry and discuss their strengths and weaknesses. The rationale for the use of process mapping, failure modes and effects analysis (FMEA) and fault tree analysis (FTA) by TG-100 will be described, as well as suggestions for the way forward. This is followed by discussion of radiation oncology specific risk assessment strategies and issues, including the TG-100 effort to evaluate IMRT and other ways to think about risk in the context of radiotherapy. Incident learning systems, local as well as the ASTRO/AAPM ROILS system, can also be useful in the risk assessment process. Finally, risk in the context of medical imaging will be discussed. Radiation (and other) safety considerations, as well as lack of quality and certainty, all contribute to the potential risks associated with suboptimal imaging. The goal of this session is to summarize a wide variety of risk analysis methods and issues to give the medical physicist access to tools which can better define risks (and their importance) which we work to mitigate with both prescriptive and prospective risk-based quality management methods.
Learning Objectives: description of risk assessment methodologies used in healthcare and industry; discussion of radiation oncology-specific risk assessment strategies and issues; evaluation of risk in the context of medical imaging and image quality. E. Samei: Research grants from Siemens and GE.
Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising.
Zhang, Kai; Zuo, Wangmeng; Chen, Yunjin; Meng, Deyu; Zhang, Lei
2017-07-01
Discriminative model learning for image denoising has recently attracted considerable attention due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) that embrace progress in very deep architectures, learning algorithms, and regularization methods for image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost denoising performance. Unlike existing discriminative denoising models, which usually train a specific model for additive white Gaussian noise at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle several general image denoising tasks, such as Gaussian denoising, single-image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model not only exhibits high effectiveness in several general image denoising tasks, but can also be efficiently implemented by benefiting from GPU computing.
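The residual learning formulation above, predicting the noise map rather than the clean image, is a one-liner at inference time. In this sketch the CNN is replaced by stand-in predictors; the box-filter residual is purely illustrative and is not the paper's network.

```python
import numpy as np

def residual_denoise(noisy, residual_predictor):
    """DnCNN-style residual learning at inference time: the model R
    predicts the noise map v ≈ y − x, so the clean estimate is y − R(y)."""
    return noisy - residual_predictor(noisy)

def box_residual(y):
    """Purely illustrative stand-in predictor (NOT the paper's CNN): the
    high-pass residual of a 3x3 box filter, a crude zero-mean noise guess."""
    pad = np.pad(y, 1, mode='edge')
    h, w = y.shape
    smooth = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return y - smooth
```

Training would fit R so that R(y) matches the known noise y − x on pairs of clean and corrupted images; the subtraction at the end is the whole of the residual trick.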
A Secure and Efficient Scalable Secret Image Sharing Scheme with Flexible Shadow Sizes.
Xie, Dong; Li, Lixiang; Peng, Haipeng; Yang, Yixian
2017-01-01
In a general (k, n) scalable secret image sharing (SSIS) scheme, the secret image is shared by n participants, and any k or more participants can reconstruct it. Scalability means that the amount of information in the reconstructed image scales in proportion to the number of participants. In most existing SSIS schemes, the size of each image shadow is relatively large, and the dealer does not have a flexible control strategy to adjust it to meet the demands of different applications. Besides, almost all existing SSIS schemes are not applicable under noisy circumstances. To address these deficiencies, in this paper we present a novel SSIS scheme based on compressed sensing, a technique that has been widely used in many fields such as image processing, wireless communication and medical imaging. Our scheme has the property of flexibility, meaning that the dealer can achieve a compromise between the size of each shadow and the quality of the reconstructed image. In addition, our scheme has many other advantages, including smooth scalability, noise-resilient capability, and high security. The experimental results and the comparison with similar works demonstrate the feasibility and superiority of our scheme.
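The paper's construction is based on compressed sensing; the underlying (k, n) threshold property it builds on can nevertheless be illustrated with classic Shamir secret sharing over a prime field. This is a standard textbook sketch, not the authors' scheme: the secret is the constant term of a random polynomial, and any k points on the polynomial recover it by Lagrange interpolation.

```python
import random

P = 2**31 - 1  # a Mersenne prime; all arithmetic is in GF(P)

def make_shares(secret, k, n, seed=0):
    """Shamir (k, n) sharing: the secret is the constant term of a random
    degree-(k-1) polynomial; each share is one point on that polynomial."""
    rng = random.Random(seed)
    coeffs = [secret % P] + [rng.randrange(P) for _ in range(k - 1)]
    def poly(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * (-xj)) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

In an image setting, each pixel (or coefficient) value is shared this way; the paper replaces the polynomial machinery with compressed-sensing measurements to gain scalability and noise resilience.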
A multiresolution processing method for contrast enhancement in portal imaging.
Gonzalez-Lopez, Antonio
2018-06-18
Portal images have a unique feature among the imaging modalities used in radiotherapy: they provide direct visualization of the irradiated volumes. However, contrast and spatial resolution are strongly limited by the high energy of the radiation sources. Because of this, imaging modalities using x-ray energy beams have gained importance in the verification of patient positioning, replacing portal imaging. The purpose of this work was to develop a method for the enhancement of local contrast in portal images. The method operates on the subbands of a wavelet decomposition of the image, re-scaling them in such a way that coefficients in the high- and medium-resolution subbands are amplified, an approach totally different from the histogram-based methods widely used nowadays. Portal images of an anthropomorphic phantom were acquired with an electronic portal imaging device (EPID). Different re-scaling strategies were then investigated, studying the effects of the scaling parameters on the enhanced images, and the effect of using different types of transforms was studied. Finally, the implemented methods were combined with histogram equalization methods such as contrast-limited adaptive histogram equalization (CLAHE), and these combinations were compared. Uniform amplification of the detail subbands shows the best results in contrast enhancement. On the other hand, linear re-scaling of the high-resolution subbands increases the visibility of fine detail in the images, at the expense of an increase in noise levels. Also, since processing is applied only to the detail subbands, not to the approximation, the mean gray level of the image is minimally modified and no further display adjustments are required. It is shown that re-scaling the detail subbands of portal images is an efficient method for enhancing both the local contrast and the resolution of these images. © 2018 Institute of Physics and Engineering in Medicine.
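The subband re-scaling strategy can be sketched with a one-level Haar transform: amplify only the detail subbands and leave the approximation, and hence the mean gray level, untouched. This is a minimal illustration assuming even image dimensions; the paper explores other wavelet bases and scaling laws.

```python
import numpy as np

def haar2d(img):
    """One level of a 2D Haar transform: approximation LL plus the three
    detail subbands LH, HL, HH (orthogonal up to a constant factor)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0  # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0  # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

def enhance(img, gain=2.0):
    """Amplify only the detail subbands; the approximation (and thus the
    mean gray level) is left untouched, as in the paper's strategy."""
    ll, lh, hl, hh = haar2d(img)
    return ihaar2d(ll, gain * lh, gain * hl, gain * hh)
```

Because the detail contributions cancel pairwise on reconstruction, the image mean depends only on LL, which is why no display re-windowing is needed after enhancement.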
Real-space processing of helical filaments in SPARX
Behrmann, Elmar; Tao, Guozhi; Stokes, David L.; Egelman, Edward H.; Raunser, Stefan; Penczek, Pawel A.
2012-01-01
We present a major revision of the iterative helical real-space refinement (IHRSR) procedure and its implementation in the SPARX single-particle image processing environment. We built on over a decade of experience with IHRSR helical structure determination and took advantage of the flexible SPARX infrastructure to arrive at an implementation that offers ease of use, flexibility in designing a helical structure determination strategy, and high computational efficiency. We introduced 3D projection matching code that can work with non-cubic volumes, a geometry better suited to long helical filaments; we enhanced procedures for establishing helical symmetry parameters; and we parallelized the code using a distributed-memory paradigm. Additional features include a graphical user interface that facilitates entering and editing the parameters controlling the structure determination strategy of the program. In addition, we present a novel approach to detect and evaluate structural heterogeneity due to conformer mixtures that takes advantage of helical structure redundancy. PMID:22248449
SU-G-BRA-01: A Real-Time Tumor Localization and Guidance Platform for Radiotherapy Using US and MRI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bednarz, B; Culberson, W; Bassetti, M
Purpose: To develop and validate a real-time motion management platform for radiotherapy that directly tracks tumor motion using ultrasound and MRI. This will be a cost-effective and non-invasive real-time platform combining the excellent temporal resolution of ultrasound with the excellent soft-tissue contrast of MRI. Methods: A 4D planar ultrasound acquisition during the treatment is coupled to a pre-treatment calibration training image set consisting of a simultaneous 4D ultrasound and 4D MRI acquisition. The image sets will be rapidly matched using advanced image and signal processing algorithms, allowing the display of virtual MR images of the tumor/organ motion in real time from an ultrasound acquisition. Results: The completion of this work will result in several innovations, including: a (2D) patch-like, MR- and LINAC-compatible 4D planar ultrasound transducer that is electronically steerable for hands-free operation, providing real-time virtual MR and ultrasound imaging for motion management during radiation therapy; a multi-modal tumor localization strategy that uses ultrasound and MRI; and fast and accurate image processing algorithms that provide real-time information about the motion and location of the tumor or related soft-tissue structures within the patient. Conclusion: If successful, the proposed approach will provide real-time guidance for radiation therapy without degrading image or treatment plan quality. The approach would be equally suitable for image-guided proton-beam or heavy ion-beam therapy. This work is partially funded by NIH grant R01CA190298.
NASA Astrophysics Data System (ADS)
Fiole, Daniel; Deman, Pierre; Trescos, Yannick; Douady, Julien; Tournier, Jean-Nicolas
2013-02-01
Lung tissue motion arising from breathing and heart beating has been described as the largest annoyance of in vivo imaging. Consequently, infected lung tissue has never been imaged in vivo thus far, and little is known about the kinetics of the mucosal immune system at the cellular level. We have developed an optimized post-processing strategy to overcome tissue motion, based upon two-photon and second harmonic generation (SHG) microscopy. In contrast to previously published approaches, we freed the lung parenchyma from any strain and depression in order to maintain the lungs under optimal physiological parameters. Excitation beams swept the sample throughout normal breathing and heart movements, allowing the collection of many images. Given that tissue motion is unpredictable, it was essential to sort images of interest. This step was enhanced by using the SHG signal from collagen as a reference for the sampling and realignment phases. A normalized cross-correlation criterion was used between a manually chosen reference image and rigid transformations of all others. Using CX3CR1+/gfp mice, this process allowed the collection of high-resolution images of pulmonary dendritic cells (DCs) interacting with Bacillus anthracis spores, a Gram-positive bacterium responsible for anthrax disease. We imaged lung tissue for up to one hour without interrupting normal lung physiology. Interestingly, our data revealed unexpected interactions between DCs and macrophages, two specialized phagocytes. These contacts may participate in a better coordinated immune response. Our results not only demonstrate the phagocytizing task of lung DCs but also infer a cooperative role of alveolar macrophages and DCs.
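The frame-sorting criterion described above can be sketched as plain normalized cross-correlation against a chosen reference frame. This is a minimal numpy illustration; the acceptance threshold is an assumption, not a value taken from the paper.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized frames;
    1.0 means identical up to brightness and contrast."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def select_frames(reference, frames, threshold=0.9):
    """Keep only frames sufficiently similar to the reference, mimicking
    the sorting step that discards motion-corrupted images."""
    return [f for f in frames if ncc(reference, f) >= threshold]
```

In the paper the same criterion is also evaluated over rigid transformations of each frame, so slightly shifted but otherwise good frames can be realigned rather than discarded.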
Zhou, Ruixi; Huang, Wei; Yang, Yang; Chen, Xiao; Weller, Daniel S; Kramer, Christopher M; Kozerke, Sebastian; Salerno, Michael
2018-02-01
Cardiovascular magnetic resonance (CMR) stress perfusion imaging provides important diagnostic and prognostic information in coronary artery disease (CAD). Current clinical sequences have limited temporal and/or spatial resolution, and incomplete heart coverage. Techniques such as k-t principal component analysis (PCA) or k-t sparsity and low-rank structure (SLR), which rely on the high degree of spatiotemporal correlation in first-pass perfusion data, can significantly accelerate image acquisition, mitigating these problems. However, in the presence of respiratory motion, these techniques can suffer from significant degradation of image quality. A number of techniques based on non-rigid registration have been developed. However, to first approximation, breathing motion predominantly results in rigid motion of the heart. To this end, a simple robust motion correction strategy is proposed for k-t accelerated and compressed sensing (CS) perfusion imaging. A simple respiratory motion compensation (MC) strategy for k-t accelerated and compressed-sensing CMR perfusion imaging to selectively correct respiratory motion of the heart was implemented based on linear k-space phase shifts derived from rigid motion registration of a region of interest (ROI) encompassing the heart. A variable-density Poisson disk acquisition strategy was used to minimize coherent aliasing in the presence of respiratory motion, and images were reconstructed using k-t PCA and k-t SLR with or without motion correction. The strategy was evaluated in a CMR-extended cardiac torso digital (XCAT) phantom and in prospectively acquired first-pass perfusion studies in 12 subjects undergoing clinically ordered CMR studies. Phantom studies were assessed using the Structural Similarity Index (SSIM) and Root Mean Square Error (RMSE). In patient studies, image quality was scored in a blinded fashion by two experienced cardiologists.
In the phantom experiments, images reconstructed with the MC strategy had higher SSIM (p < 0.01) and lower RMSE (p < 0.01) in the presence of respiratory motion. For patient studies, the MC strategy improved k-t PCA and k-t SLR reconstruction image quality (p < 0.01). k-t SLR without motion correction showed improved image quality compared with k-t PCA in the setting of respiratory motion (p < 0.01), while with motion correction there was a trend toward better performance of k-t SLR compared with motion-corrected k-t PCA. Our simple and robust rigid motion compensation strategy greatly reduces motion artifacts and improves image quality for standard k-t PCA and k-t SLR techniques in the setting of respiratory motion due to imperfect breath-holding.
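The linear k-space phase-shift correction described above is the Fourier shift theorem: a rigid in-plane translation of the image corresponds to multiplying k-space by a linear phase ramp. A minimal numpy sketch (integer shifts only, applied to the full k-space rather than an ROI as in the paper):

```python
import numpy as np

def kspace_shift(kdata, dy, dx):
    """Translate the reconstructed image by (dy, dx) pixels (circularly)
    by multiplying k-space with exp(-2*pi*i*(fy*dy + fx*dx))."""
    ny, nx = kdata.shape
    fy = np.fft.fftfreq(ny)[:, None]  # cycles per sample along y
    fx = np.fft.fftfreq(nx)[None, :]  # cycles per sample along x
    return kdata * np.exp(-2j * np.pi * (fy * dy + fx * dx))
```

Because the correction is a pointwise multiplication, it composes directly with undersampled k-t reconstructions; sub-pixel shifts work the same way, with non-integer dy, dx.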
Wan, Yong; Otsuna, Hideo; Holman, Holly A; Bagley, Brig; Ito, Masayoshi; Lewis, A Kelsey; Colasanto, Mary; Kardon, Gabrielle; Ito, Kei; Hansen, Charles
2017-05-26
Image segmentation and registration techniques have enabled biologists to place large amounts of volume data from fluorescence microscopy, morphed three-dimensionally, onto a common spatial frame. Existing tools built on volume visualization pipelines for single-channel or red-green-blue (RGB) channels have become inadequate for the new challenges of fluorescence microscopy. For a three-dimensional atlas of the insect nervous system, hundreds of volume channels are rendered simultaneously, whereas fluorescence intensity values from each channel need to be preserved for versatile adjustment and analysis. Although several existing tools have incorporated support for multichannel data using various strategies, the lack of a flexible design has made true many-channel visualization and analysis unavailable. The most common practice for many-channel volume data presentation is still converting and rendering pseudosurfaces, which are inaccurate for both qualitative and quantitative evaluations. Here, we present an alternative design strategy that accommodates the visualization and analysis of about 100 volume channels, each of which can be interactively adjusted, selected, and segmented using freehand tools. Our multichannel visualization includes a multilevel streaming pipeline plus a triple-buffer compositing technique. Our method also preserves original fluorescence intensity values on graphics hardware, a crucial feature that allows graphics-processing-unit (GPU)-based processing for interactive data analysis, such as freehand segmentation. We have implemented these design strategies as a thorough restructuring of our original tool, FluoRender. The redesign of FluoRender not only maintains the existing multichannel capabilities for a greatly extended number of volume channels, but also enables new analysis functions for many-channel data from emerging biomedical-imaging techniques.
Development of a pseudo phased array technique using EMATs for DM weld testing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cobb, Adam C., E-mail: adam.cobb@swri.org; Fisher, Jay L., E-mail: adam.cobb@swri.org; Shiokawa, Nobuyuki
2015-03-31
Ultrasonic inspection of dissimilar metal (DM) welds in piping with cast austenitic stainless steel (CASS) has been an area of ongoing research for many years, given its prevalence in the petrochemical and nuclear industries. A typical inspection strategy for pipe welds is to use an ultrasonic phased array system to scan the weld from a sensor located on the outer surface of the pipe. These inspection systems generally refract either longitudinal or shear vertical (SV) waves at varying angles to inspect the weld radially. In DM welds, however, the welding process can produce a columnar grain structure in the CASS material in a specific orientation. This columnar grain structure can skew ultrasonic waves away from their intended path, especially for the SV and longitudinal wave modes. Studies have shown that inspection using the shear horizontal (SH) wave mode significantly reduces the effect of skewing. Electromagnetic acoustic transducers (EMATs) are known to be effective for producing SH waves in field settings. This paper presents an inspection strategy that seeks to reproduce the scanning and imaging capabilities of a commercial phased array system using EMATs. A custom-built EMAT was used to collect data at multiple propagation angles, and a processing strategy known as the synthetic aperture focusing technique (SAFT) was used to combine the data to produce an image. Results are shown using this pseudo phased array technique to inspect samples with a DM weld and artificial defects, demonstrating the potential of this approach in a laboratory setting. Recommendations for future work to transition the technique to the field are also provided.
Sparse radar imaging using 2D compressed sensing
NASA Astrophysics Data System (ADS)
Hou, Qingkai; Liu, Yang; Chen, Zengping; Su, Shaoying
2014-10-01
Radar imaging is an ill-posed linear inverse problem, and compressed sensing (CS) has been proved to have tremendous potential in this field. This paper surveys the theory of radar imaging and concludes that ISAR imaging can be formulated mathematically as a 2D sparse decomposition problem. Based on CS, we propose a novel measuring strategy for ISAR imaging radar that uses random sub-sampling in both the range and azimuth dimensions, which reduces the amount of sampled data tremendously. The ordinary way to handle the 2D reconstruction problem is to convert it into 1D via the Kronecker product, which sharply increases the size of the dictionary and the computational cost. In this paper, we instead introduce the 2D-SL0 algorithm into the image reconstruction. It is proved that 2D-SL0 achieves results equivalent to other 1D reconstruction methods, while the computational complexity and memory usage are reduced significantly. Moreover, simulation results demonstrate the effectiveness and feasibility of our method.
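The Kronecker conversion the paper avoids can be stated precisely: a 2D measurement model Y = A X Bᵀ is equivalent to vec(Y) = (B ⊗ A) vec(X) under column-major vectorization, at the cost of materializing the large Kronecker matrix. A small numpy check of the identity:

```python
import numpy as np

def kron_equiv(A, B, X):
    """Verify vec(A X B^T) == (B kron A) vec(X) with column-major vec.
    For A of shape (m, n) and B of shape (p, q), the Kronecker matrix
    has shape (m*p, n*q) -- the memory blow-up that motivates 2D-SL0."""
    lhs = (A @ X @ B.T).ravel(order='F')
    rhs = np.kron(B, A) @ X.ravel(order='F')
    return bool(np.allclose(lhs, rhs))
```

2D-SL0 exploits exactly this equivalence: it keeps A and B separate and works with matrix products, so the (m·p) × (n·q) dictionary is never formed.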
Liu, Dong; Wang, Shengsheng; Huang, Dezhi; Deng, Gang; Zeng, Fantao; Chen, Huiling
2016-05-01
Medical image recognition is an important task in both computer vision and computational biology. In the field of medical image classification, representing an image with a local binary patterns (LBP) descriptor has become popular. However, most existing LBP-based methods encode the binary patterns in a fixed neighborhood radius and ignore the spatial relationships among local patterns. Ignoring these spatial relationships causes poor performance when capturing discriminative features for complex samples, such as medical images obtained by microscope. To address this problem, in this paper we propose a novel method that improves local binary patterns by assigning an adaptive neighborhood radius to each pixel. Based on these adaptive local binary patterns, we further propose a spatial adjacent histogram strategy to encode the micro-structures for image representation. An extensive set of evaluations performed on four medical datasets shows that the proposed method significantly improves on standard LBP and compares favorably with several other prevailing approaches. Copyright © 2016 Elsevier Ltd. All rights reserved.
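Standard radius-1 LBP, the baseline the paper improves on, can be sketched for interior pixels in a few lines. This is a minimal illustration; the paper's adaptive radius and spatial adjacent histograms are not reproduced here.

```python
import numpy as np

def lbp_codes(img):
    """Radius-1, 8-neighbour LBP for interior pixels: each neighbour
    contributes one bit, set when its value is >= the centre pixel."""
    h, w = img.shape
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (nb >= c).astype(int) << bit
    return code
```

The image is then usually summarized by a histogram of these 8-bit codes; the paper's contribution is to vary the neighbourhood radius per pixel and to histogram codes jointly with their spatial neighbours.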
Smart Contrast Agents for Magnetic Resonance Imaging.
Bonnet, Célia S; Tóth, Éva
2016-01-01
By visualizing bioactive molecules or biological parameters in vivo, molecular imaging is searching for information at the molecular level in living organisms. In addition to contributing to earlier and more personalized diagnosis in medicine, it also helps understand and rationalize the molecular factors underlying physiological and pathological processes. In magnetic resonance imaging (MRI), complexes of paramagnetic metal ions, mostly lanthanides, are commonly used to enhance the intrinsic image contrast. They rely either on the relaxation effect of these metal chelates (T(1) agents), or on the phenomenon of paramagnetic chemical exchange saturation transfer (PARACEST agents). In both cases, responsive molecular magnetic resonance imaging probes can be designed to report on various biomarkers of biological interest. In this context, we review recent work in the literature and from our group on responsive T(1) and PARACEST MRI agents for the detection of biogenic metal ions (such as calcium or zinc), enzymatic activities, or neurotransmitter release. These examples illustrate the general strategies that can be applied to create molecular imaging agents with an MRI detectable response to biologically relevant parameters.
Multispectral Image Compression Based on DSC Combined with CCSDS-IDC
Li, Jin; Xing, Fei; Sun, Ting; You, Zheng
2014-01-01
A remote sensing multispectral image compression encoder requires low complexity, high robustness, and high performance because it usually works on a satellite where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (like the 3D DWT and 3D DCT) are too complex to be implemented in a space mission. In this paper, we propose a compression algorithm for multispectral images based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged with a Slepian-Wolf (SW) DSC strategy based on QC-LDPC codes in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS) algorithm has better compression performance than traditional compression approaches. PMID:25110741
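The bit-plane encoding stage can be sketched in isolation: split the coefficients into planes from most to least significant and reassemble them losslessly. This illustrates only the BPE idea, not the CCSDS-IDC bitstream format.

```python
import numpy as np

def bit_planes(coeffs, nbits=8):
    """Split non-negative integer coefficients into bit planes, most
    significant plane first, as a bit plane encoder (BPE) would emit them."""
    c = np.asarray(coeffs, dtype=np.uint32)
    return [((c >> b) & 1).astype(np.uint8) for b in range(nbits - 1, -1, -1)]

def from_planes(planes):
    """Reassemble the coefficients losslessly from the emitted planes."""
    out = np.zeros(planes[0].shape, dtype=np.uint32)
    for p in planes:
        out = (out << 1) | p
    return out
```

Emitting planes in this order gives the embedded, truncatable stream the encoder needs: cutting the stream after any plane still yields the best approximation available at that bit budget.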
A symmetrical image encryption scheme in wavelet and time domain
NASA Astrophysics Data System (ADS)
Luo, Yuling; Du, Minghui; Liu, Junxiu
2015-02-01
There is increasing concern over effective storage and secure transmission of multimedia information on the Internet. A great variety of encryption schemes have been proposed to ensure information security during transmission, but most current approaches diffuse the data only in the spatial domain, which reduces storage efficiency. In this paper, a lightweight image encryption strategy based on chaos is proposed, with the encryption process designed in the transform domain. The original image is decomposed into approximation and detail components using the integer wavelet transform (IWT). The approximation coefficients, as the more important component of the image, are then diffused by secret keys generated from a spatiotemporal chaotic system, followed by an inverse IWT to construct the diffused image. Finally, a plain permutation driven by the Logistic map is performed on the diffused image to further reduce the correlation between adjacent pixels. Experimental results and performance analysis demonstrate that the proposed scheme is an efficient, secure, and robust encryption mechanism that achieves effective coding compression to satisfy storage requirements.
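A minimal sketch of the two chaos-driven steps (keystream diffusion, then a Logistic-map permutation), with plain byte values standing in for the IWT approximation coefficients. The key values and the byte quantization are illustrative assumptions, not taken from the paper, which uses a spatiotemporal chaotic system for the diffusion keys.

```python
def logistic_stream(x0, r, n):
    # Logistic map x_{k+1} = r * x_k * (1 - x_k), quantized to one byte per step.
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(int(x * 256) % 256)
    return xs

def diffuse(data, key):
    # XOR diffusion with a chaotic keystream (XOR is its own inverse).
    return [d ^ k for d, k in zip(data, logistic_stream(key, 3.99, len(data)))]

def permute(data, key):
    # Key-dependent permutation: sort positions by chaotic keystream values.
    stream = logistic_stream(key, 3.99, len(data))
    order = sorted(range(len(data)), key=lambda i: (stream[i], i))
    return [data[i] for i in order]

def unpermute(data, key):
    # Invert the permutation by scattering values back to their sorted positions.
    stream = logistic_stream(key, 3.99, len(data))
    order = sorted(range(len(data)), key=lambda i: (stream[i], i))
    out = [0] * len(data)
    for j, i in enumerate(order):
        out[i] = data[j]
    return out

coeffs = [52, 55, 61, 66, 70, 61, 64, 73]  # stand-ins for approximation coefficients
cipher = permute(diffuse(coeffs, 0.3601), 0.7159)
plain = diffuse(unpermute(cipher, 0.7159), 0.3601)
```

Decryption simply applies the inverse permutation followed by the same XOR diffusion, recovering the original coefficients exactly.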
Wang, Guizhou; Liu, Jianbo; He, Guojin
2013-01-01
This paper presents a new classification method for high-spatial-resolution remote sensing images based on a strategic mechanism of spatial mapping and reclassification. The proposed method includes four steps. First, the multispectral image is classified by a traditional pixel-based classification method (support vector machine). Second, the panchromatic image is subdivided by watershed segmentation. Third, the pixel-based multispectral image classification result is mapped to the panchromatic segmentation result based on a spatial mapping mechanism and the area dominant principle. During the mapping process, an area proportion threshold is set, and the regional property is defined as unclassified if the maximum area proportion does not surpass the threshold. Finally, unclassified regions are reclassified based on spectral information using the minimum distance to mean algorithm. Experimental results show that the classification method for high-spatial-resolution remote sensing images based on the spatial mapping mechanism and reclassification strategy can make use of both panchromatic and multispectral information, integrate the pixel- and object-based classification methods, and improve classification accuracy. PMID:24453808
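The spatial-mapping step above (majority class per segment, gated by an area-proportion threshold, with minimum-distance reclassification as the fallback) can be sketched as follows; the class labels, threshold value, and one-dimensional "spectra" are illustrative.

```python
from collections import Counter

def map_segment_class(pixel_classes, threshold=0.5):
    # Area-dominant mapping: the majority label wins only if its share of the
    # segment exceeds the area-proportion threshold; otherwise "unclassified".
    label, n = Counter(pixel_classes).most_common(1)[0]
    return label if n / len(pixel_classes) > threshold else "unclassified"

def reclassify(class_means, segment_mean):
    # Minimum distance to class mean (scalar "spectra" for brevity).
    return min(class_means, key=lambda c: abs(class_means[c] - segment_mean))

segment = ["water"] * 7 + ["soil"] * 3     # pixel-based labels inside one segment
label = map_segment_class(segment)          # dominant enough -> "water"
fallback = reclassify({"water": 0.2, "soil": 0.6}, 0.55)
```

An evenly split segment (e.g. 5 water, 5 soil at threshold 0.5) is left unclassified by the first step and handed to `reclassify`.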
Polarimetric Calibration and Assessment of GF-3 Images in Steppe
NASA Astrophysics Data System (ADS)
Chang, Y.; Yang, J.; Li, P.; Shi, L.; Zhao, L.
2018-04-01
The GaoFen-3 (GF-3) satellite is the first fully polarimetric synthetic aperture radar (PolSAR) satellite in China. It has three fully polarimetric imaging modes and is available for many applications. The system underwent several calibration experiments in Inner Mongolia after launch, conducted by the Institute of Electronics, Chinese Academy of Sciences (IECAS), and the polarimetric calibration (PolCAL) strategy of GF-3 has also been improved. It is therefore necessary to assess image quality before any further applications. In this paper, we evaluate the polarimetric residual errors of GF-3 images acquired in July 2017 at a steppe site. The results show that the crosstalk of these images varies from -36 dB to -46 dB, and the channel imbalance varies from -0.43 dB to 0.55 dB with the angle varying from -1.6 to 3.6 degrees. We also conducted a subsequent PolCAL experiment to suppress the polarimetric distortion, and the polarimetric quality of the images improved after the PolCAL processing.
Application of NIR hyperspectral imaging for post-consumer polyolefins recycling
NASA Astrophysics Data System (ADS)
Serranti, Silvia; Gargiulo, Aldo; Bonifazi, Giuseppe
2012-06-01
An efficient large-scale recycling approach for particulate solid wastes depends on the quality of the materials fed to the recycling plant and on continuous, reliable control of the different streams inside the processing plant. Technologies for recovering plastics need to be extremely powerful: they must be relatively simple to be cost-effective, yet accurate enough to create high-purity products, and able to valorize a substantial fraction of the plastic waste into useful products of consistent quality in order to be economical. On the other hand, the potential market for such technologies is large, and the push of environmental regulations, together with rising oil prices, has made many industries interested both in "general purpose" waste sorting technologies and in developing more specialized sensing devices and inspection logics for better quality assessment of plastic products. From this perspective, recycling strategies have to be developed taking into account some specific aspects: i) mixture complexity: the valuable material has to be extracted from the residue; ii) overall production: the profitability of plastics can be achieved only with mass production; and iii) costs: low-cost sorting processes are required. In this paper, new analytical strategies based on hyperspectral imaging in the near infrared range (1000-1700 nm) have been investigated and set up in order to define sorting and/or quality control logics that could be profitably applied at the industrial plant level for polyolefin recycling.
Smith, Cartney E; Shkumatov, Artem; Withers, Sarah G; Yang, Binxia; Glockner, James F; Misra, Sanjay; Roy, Edward J; Wong, Chun-Ho; Zimmerman, Steven C; Kong, Hyunjoon
2013-11-26
Common methods of loading magnetic resonance imaging (MRI) contrast agents into nanoparticles often suffer from challenges related to particle formation, complex chemical modification/purification steps, and reduced contrast efficiency. This study presents a simple, yet advanced process to address these issues by loading gadolinium, an MRI contrast agent, exclusively on a liposome surface using a polymeric fastener. The fastener, so named for its ability to physically link the two functional components together, consisted of chitosan substituted with diethylenetriaminepentaacetic acid (DTPA) to chelate gadolinium, as well as octadecyl chains to stabilize the modified chitosan on the liposome surface. The assembly strategy, mimicking the mechanisms by which viruses and proteins naturally anchor to a cell, provided greater T1 relaxivity than liposomes loaded with gadolinium in both the interior and outer leaflet. Gadolinium-coated liposomes were ultimately evaluated in vivo using murine ischemia models to highlight the diagnostic capability of the system. Taken together, this process decouples particle assembly and functionalization and, therefore, has considerable potential to enhance imaging quality while alleviating many of the difficulties associated with multifunctional particle fabrication.
Gopakumar, Gopalakrishna Pillai; Swetha, Murali; Sai Siva, Gorthi; Sai Subrahmanyam, Gorthi R K
2018-03-01
The present paper introduces a focus stacking-based approach for automated quantitative detection of Plasmodium falciparum malaria from blood smears. For the detection, a custom-designed convolutional neural network (CNN) operating on a focus stack of images is used. The cell counting problem is addressed as a segmentation problem, and we propose a two-level segmentation strategy. The use of a CNN operating on a focus stack for the detection of malaria is the first of its kind; it not only improved detection accuracy (both sensitivity [97.06%] and specificity [98.50%]) but also favored processing on cell patches and avoided the need for hand-engineered features. The slide images are acquired with a custom-built portable slide scanner made from low-cost, off-the-shelf components that is suitable for point-of-care diagnostics. The proposed approach of employing sophisticated algorithmic processing together with inexpensive instrumentation can potentially help clinicians diagnose malaria. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
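A common focus measure usable when assembling such a focus stack is the variance of the Laplacian: the z-slice with the highest value has the strongest high-frequency content and is in best focus. The sketch below is a generic stand-in (the paper's scanner and CNN stages are not shown), with images as plain lists of lists.

```python
def laplacian_variance(img):
    # Variance of a 4-neighbour Laplacian over interior pixels: higher = sharper.
    h, w = len(img), len(img[0])
    vals = [img[y-1][x] + img[y+1][x] + img[y][x-1] + img[y][x+1] - 4 * img[y][x]
            for y in range(1, h - 1) for x in range(1, w - 1)]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def sharpest_slice(stack):
    # Index of the focus-stack slice with the strongest edge response.
    return max(range(len(stack)), key=lambda i: laplacian_variance(stack[i]))

blurred = [[5] * 4 for _ in range(4)]           # defocused: flat intensity
sharp = [[0, 9, 0, 9], [9, 0, 9, 0],
         [0, 9, 0, 9], [9, 0, 9, 0]]            # in focus: strong edges
best = sharpest_slice([blurred, sharp])
```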
Unsupervised Feature Learning With Winner-Takes-All Based STDP
Ferré, Paul; Mamalet, Franck; Thorpe, Simon J.
2018-01-01
We present a novel strategy for unsupervised feature learning in image applications, inspired by the Spike-Timing-Dependent Plasticity (STDP) biological learning rule. We show the equivalence between rank-order-coding Leaky-Integrate-and-Fire neurons and ReLU artificial neurons when applied to non-temporal data. We apply this to images using rank-order coding, which allows us to perform a full network simulation in a single feed-forward pass on GPU hardware. Next, we introduce a binary STDP learning rule compatible with training on batches of images. Two mechanisms to stabilize the training are also presented: a Winner-Takes-All (WTA) framework that selects the most relevant patches to learn from along the spatial dimensions, and a simple feature-wise normalization as a homeostatic process. This learning process allows us to train multi-layer architectures of convolutional sparse features. We apply our method to extract features from the MNIST, ETH80, CIFAR-10, and STL-10 datasets and show that these features are relevant for classification. We finally compare these results with several other state-of-the-art unsupervised learning methods. PMID:29674961
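A toy sketch of a binary STDP update gated by a Winner-Takes-All choice among candidate patches: only the patch driving the strongest response updates the feature weights (potentiate where the input spiked, depress the rest). The learning rates, the clamping to [0, 1], and the patch values are illustrative assumptions rather than the paper's exact rule.

```python
def response(w, patch):
    # Dot product stands in for the neuron's activation on a binary input patch.
    return sum(wi * pi for wi, pi in zip(w, patch))

def stdp_update(w, patch, lr_plus=0.1, lr_minus=0.05):
    # Binary STDP: strengthen weights at spiking inputs, weaken the others,
    # clamping weights to [0, 1].
    return [min(1.0, wi + lr_plus) if pi else max(0.0, wi - lr_minus)
            for wi, pi in zip(w, patch)]

def wta_step(w, patches):
    # Winner-Takes-All: only the most strongly responding patch learns.
    winner = max(patches, key=lambda p: response(w, p))
    return stdp_update(w, winner)

w = [0.5, 0.5, 0.5, 0.5]
patches = [[1, 1, 0, 0], [0, 0, 1, 1], [1, 0, 1, 0]]
w = wta_step(w, patches)
```

Repeating `wta_step` over many patch batches drives each feature's weights toward a recurring binary pattern in the input, which is the essence of the unsupervised learning described above.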
True Ortho Generation of Urban Area Using High Resolution Aerial Photos
NASA Astrophysics Data System (ADS)
Hu, Yong; Stanley, David; Xin, Yubin
2016-06-01
The pros and cons of existing methods for true ortho generation are analyzed through a critical literature review of its two major processing stages: visibility analysis and occlusion compensation. Existing methods process frame and pushbroom images with different visibility-analysis algorithms because z-buffer (and similar) techniques require the perspective centers. For occlusion compensation, the pixel-based approach tends to produce excessive seamlines in the ortho-rectified images because it rates quality pixel by pixel. In this paper, we propose innovative solutions to these problems. For visibility analysis, an elevation-buffer technique is introduced that uses plain elevations instead of the distances from perspective centers used by the z-buffer, with the advantage of sensor independence. For occlusion compensation, a segment-oriented strategy evaluates a plain cost measure per segment instead of the tedious per-pixel quality rating. The cost measure directly evaluates the imaging geometry in ground space and is also sensor independent. Experimental results are demonstrated using aerial photos acquired with an UltraCam camera.
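The elevation-buffer idea can be sketched as a z-buffer that stores ground elevation instead of distance: each image pixel keeps the highest-elevation DSM cell projected into it, and losing cells are flagged occluded. The projection mapping below is a placeholder dictionary, not a real sensor model.

```python
def visibility(dsm_cells, project):
    # dsm_cells: list of (cell_id, elevation); project: cell_id -> image pixel.
    buffer = {}
    for cell, z in dsm_cells:
        px = project(cell)
        # Elevation buffer: the highest cell projected into a pixel wins.
        if px not in buffer or z > buffer[px][1]:
            buffer[px] = (cell, z)
    winners = {entry[0] for entry in buffer.values()}
    return {cell: (cell in winners) for cell, _ in dsm_cells}

cells = [("ground_a", 100.0), ("roof", 130.0), ("ground_b", 100.0)]
# Oblique-ish viewing: the roof projects onto the same pixel as ground_a,
# so the lower ground cell is occluded.
project = {"ground_a": (5, 5), "roof": (5, 5), "ground_b": (6, 5)}.get
vis = visibility(cells, project)
```

Because only elevations are compared, the same routine works for any sensor whose projection function is supplied, which is the sensor-independence advantage claimed above.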
Imaging of Homeostatic, Neoplastic, and Injured Tissues by HA-Based Probes
Veiseh, Mandana; Breadner, Daniel; Ma, Jenny; Akentieva, Natalia; Savani, Rashmin C; Harrison, Rene; Mikilus, David; Collis, Lisa; Gustafson, Stefan; Lee, Ting-Yim; Koropatnick, James; Luyt, Leonard G.; Bissell, Mina J.; Turley, Eva A.
2013-01-01
An increase in hyaluronan (HA) synthesis, cellular uptake, and metabolism occurs during the remodeling of tissue microenvironments following injury and during disease processes such as cancer. We hypothesized that multimodality HA-based probes selectively target and detectably accumulate at sites of high HA metabolism, thus providing a flexible imaging strategy for monitoring disease and repair processes. Kinetic analyses confirmed favorable available serum levels of the probe following intravenous (i.v.) or subcutaneous (s.c.) injection. Nuclear (technetium-HA, 99mTc-HA, and iodine-HA, 125I-HA), optical (fluorescent Texas Red-HA, TR-HA), and magnetic resonance (gadolinium-HA, Gd-HA) probes imaged liver (99mTc-HA), breast cancer cells/xenografts (TR-HA, Gd-HA), and vascular injury (125I-HA, TR-HA). Targeting of HA probes to these sites appeared to result from selective HA receptor-dependent localization. Our results suggest that HA-based probes, which do not require polysaccharide backbone modification to achieve favorable half-life and distribution, can detect elevated HA metabolism in homeostatic, injured, and diseased tissues. PMID:22066590
Multi-Modal Curriculum Learning for Semi-Supervised Image Classification.
Gong, Chen; Tao, Dacheng; Maybank, Stephen J; Liu, Wei; Kang, Guoliang; Yang, Jie
2016-07-01
Semi-supervised image classification aims to classify a large quantity of unlabeled images by typically harnessing scarce labeled images. Existing semi-supervised methods often suffer from inadequate classification accuracy when encountering difficult yet critical images, such as outliers, because they treat all unlabeled images equally and conduct classifications in an imperfectly ordered sequence. In this paper, we employ the curriculum learning methodology by investigating the difficulty of classifying every unlabeled image. The reliability and the discriminability of these unlabeled images are particularly investigated for evaluating their difficulty. As a result, an optimized image sequence is generated during the iterative propagations, and the unlabeled images are logically classified from simple to difficult. Furthermore, since images are usually characterized by multiple visual feature descriptors, we associate each kind of features with a teacher, and design a multi-modal curriculum learning (MMCL) strategy to integrate the information from different feature modalities. In each propagation, each teacher analyzes the difficulties of the currently unlabeled images from its own modality viewpoint. A consensus is subsequently reached among all the teachers, determining the currently simplest images (i.e., a curriculum), which are to be reliably classified by the multi-modal learner. This well-organized propagation process leveraging multiple teachers and one learner enables our MMCL to outperform five state-of-the-art methods on eight popular image data sets.
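The consensus step can be sketched as rank aggregation: each teacher (feature modality) scores the difficulty of the unlabeled images, per-teacher ranks are averaged, and the k images with the lowest mean rank form the current curriculum. The difficulty scores and the simple mean-rank consensus below are illustrative; the paper's actual consensus scheme may differ.

```python
def ranks(scores):
    # Lower difficulty score -> lower (better) rank for that image.
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    r = [0] * len(scores)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def curriculum(teacher_scores, k):
    # Average each image's rank across all teachers and pick the k simplest.
    all_ranks = [ranks(s) for s in teacher_scores]
    n = len(teacher_scores[0])
    mean_rank = [sum(r[i] for r in all_ranks) / len(all_ranks) for i in range(n)]
    return sorted(range(n), key=lambda i: mean_rank[i])[:k]

# Two modalities scoring four unlabeled images (lower = easier to classify).
color = [0.2, 0.9, 0.4, 0.8]
texture = [0.1, 0.7, 0.5, 0.9]
easy = curriculum([color, texture], k=2)
```

The selected indices would be labeled by the learner in this propagation, after which the teachers re-score the remaining images and the next, harder curriculum is chosen.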