MO-DE-207-04: Imaging educational program on solutions to common pediatric imaging challenges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnamurthy, R.
This imaging educational program will focus on solutions to common pediatric imaging challenges. The speakers will present collective knowledge on best practices in pediatric imaging from their experience at dedicated children’s hospitals. The educational program will begin with a detailed discussion of the optimal configuration of fluoroscopes for general pediatric procedures. Following this introduction will be a focused discussion on the utility of Dual Energy CT for imaging children. The third lecture will address the substantial challenge of obtaining consistent image post-processing in pediatric digital radiography. The fourth and final lecture will address best practices in pediatric MRI, including a discussion of ancillary methods to reduce sedation and anesthesia rates. Learning Objectives: To learn techniques for optimizing radiation dose and image quality in pediatric fluoroscopy; to become familiar with the unique challenges and applications of Dual Energy CT in pediatric imaging; to learn solutions for consistent post-processing quality in pediatric digital radiography; to understand the key components of an effective MRI safety and quality program for the pediatric practice.
Optical analysis of crystal growth
NASA Technical Reports Server (NTRS)
Workman, Gary L.; Passeur, Andrea; Harper, Sabrina
1994-01-01
Processing and data reduction of holographic images from Spacelab presents some interesting challenges in determining the effects of microgravity on crystal growth processes. Evaluation of several processing techniques, including the Computerized Holographic Image Processing System and the image processing software ITEX150, will provide fundamental information for holographic analysis of the space flight data.
Computers in Public Schools: Changing the Image with Image Processing.
ERIC Educational Resources Information Center
Raphael, Jacqueline; Greenberg, Richard
1995-01-01
The kinds of educational technologies selected can make the difference between uninspired, rote computer use and challenging learning experiences. University of Arizona's Image Processing for Teaching Project has worked with over 1,000 teachers to develop image-processing techniques that provide students with exciting, open-ended opportunities for…
Image-Guided Abdominal Surgery and Therapy Delivery
Galloway, Robert L.; Herrell, S. Duke; Miga, Michael I.
2013-01-01
Image-Guided Surgery has become the standard of care in intracranial neurosurgery, providing more exact resections while minimizing damage to healthy tissue. Moving that process to abdominal organs presents additional challenges in the form of image segmentation, image-to-physical-space registration, and organ motion and deformation. In this paper, we present methodologies and results for addressing these challenges in two specific organs: the liver and the kidney. PMID:25077012
Parallel Processing of Images in Mobile Devices using BOINC
NASA Astrophysics Data System (ADS)
Curiel, Mariela; Calle, David F.; Santamaría, Alfredo S.; Suarez, David F.; Flórez, Leonardo
2018-04-01
Medical image processing helps health professionals make decisions for the diagnosis and treatment of patients. Since some algorithms for processing images require substantial amounts of resources, one could take advantage of distributed or parallel computing. A mobile grid can be an adequate computing infrastructure for this problem. A mobile grid is a grid that includes mobile devices as resource providers. In a previous step of this research, we selected BOINC as the infrastructure to build our mobile grid. However, parallel processing of images in mobile devices poses at least two important challenges: running standard image-processing libraries and achieving performance comparable to that of desktop computer grids. By the time we started our research, the use of BOINC in mobile devices also involved two issues: a) executing programs on mobile devices required modifying the code to insert calls to the BOINC API, and b) dividing the image among the mobile devices and merging the results required additional code in some BOINC components. This article presents answers to these four challenges.
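The image-division and result-merging pattern described above can be illustrated outside BOINC. In the following minimal sketch, Python's multiprocessing stands in for the grid and a Gaussian blur stands in for the per-device work unit; it is not the authors' BOINC integration, and the halo pixels a real filter would need at strip borders are ignored.

```python
# Illustrative stand-in for the grid work distribution: split an image into
# horizontal strips, process the strips in parallel, merge the results.
import numpy as np
from multiprocessing import Pool
from scipy.ndimage import gaussian_filter

def process_strip(strip):
    """Per-device work unit: a Gaussian blur used here as a placeholder filter."""
    return gaussian_filter(strip, sigma=2)

def split_process_merge(image, n_workers=4):
    strips = np.array_split(image, n_workers, axis=0)   # divide the image
    with Pool(n_workers) as pool:
        results = pool.map(process_strip, strips)        # process strips in parallel
    return np.vstack(results)                            # merge the partial results

if __name__ == "__main__":
    img = np.random.rand(512, 512)
    out = split_process_merge(img)
    print(out.shape)
```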
Latent Image Processing Can Bolster the Value of Quizzes.
ERIC Educational Resources Information Center
Singer, David
1985-01-01
Latent image processing is a method which reveals hidden ink when marked with a special pen. Using multiple-choice items with commercially available latent image transfers can provide immediate feedback on take-home quizzes. Students benefitted from formative evaluation and were challenged to search for alternative solutions and explain unexpected…
Relative Harmony: Achieving Balance in Your Brand Family
ERIC Educational Resources Information Center
Collins, Mary Ellen
2011-01-01
Educational institutions understand the importance of having a positive image among their target audiences, but the process of creating, enhancing, and managing that image remains challenging to many. Confusion over what branding is only adds to the challenge. Consultants define "brand" as promising an experience and delivering on that…
Biomedical image analysis and processing in clouds
NASA Astrophysics Data System (ADS)
Bednarz, Tomasz; Szul, Piotr; Arzhaeva, Yulia; Wang, Dadong; Burdett, Neil; Khassapov, Alex; Chen, Shiping; Vallotton, Pascal; Lagerstrom, Ryan; Gureyev, Tim; Taylor, John
2013-10-01
The Cloud-based Image Analysis and Processing Toolbox project runs on the Australian National eResearch Collaboration Tools and Resources (NeCTAR) cloud infrastructure and gives researchers access to biomedical image processing and analysis services through remotely accessible user interfaces. By providing user-friendly access to cloud computing resources and new workflow-based interfaces, our solution enables researchers to carry out various challenging image analysis and reconstruction tasks. Several case studies will be presented during the conference.
NASA Astrophysics Data System (ADS)
Zelazny, A. L.; Walsh, K. F.; Deegan, J. P.; Bundschuh, B.; Patton, E. K.
2015-05-01
The demand for infrared optical elements, particularly those made of chalcogenide materials, is rapidly increasing as thermal imaging becomes affordable to the consumer. The use of these materials in conjunction with established lens manufacturing techniques presents unique challenges given the cost-sensitive nature of this new market. We explore the process from design to manufacture, and discuss the technical challenges involved. Additionally, facets of the development process including manufacturing logistics, packaging, supply chain management, and qualification are discussed.
Using Storyboarding to Model Gene Expression
ERIC Educational Resources Information Center
Korb, Michele; Colton, Shannon; Vogt, Gina
2015-01-01
Students often find it challenging to create images of complex, abstract biological processes. Using modified storyboards, which contain predrawn images, students can visualize the process and anchor ideas from activities, labs, and lectures. Storyboards are useful in assessing students' understanding of content in larger contexts. They enable…
Quantum Image Processing and Its Application to Edge Detection: Theory and Experiment
NASA Astrophysics Data System (ADS)
Yao, Xi-Wei; Wang, Hengyan; Liao, Zeyang; Chen, Ming-Cheng; Pan, Jian; Li, Jun; Zhang, Kechao; Lin, Xingcheng; Wang, Zhehui; Luo, Zhihuang; Zheng, Wenqiang; Li, Jianzhong; Zhao, Meisheng; Peng, Xinhua; Suter, Dieter
2017-07-01
Processing of digital images is continuously gaining in volume and relevance, with concomitant demands on data storage, transmission, and processing power. Encoding the image information in quantum-mechanical systems instead of classical ones and replacing classical with quantum information processing may alleviate some of these challenges. By encoding and processing the image information in quantum-mechanical systems, we here demonstrate the framework of quantum image processing, where a pure quantum state encodes the image information: we encode the pixel values in the probability amplitudes and the pixel positions in the computational basis states. Our quantum image representation reduces the required number of qubits compared to existing implementations, and we present image processing algorithms that provide exponential speed-up over their classical counterparts. For the commonly used task of detecting the edge of an image, we propose and implement a quantum algorithm that completes the task with only one single-qubit operation, independent of the size of the image. This demonstrates the potential of quantum image processing for highly efficient image and video processing in the big data era.
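The amplitude encoding and the edge-detecting single-qubit operation described above can be simulated classically in a few lines. The numpy sketch below follows the general description (pixel values stored in probability amplitudes, a Hadamard on the least significant qubit turning adjacent pairs into sums and differences); it is a simulation of the idea, not the reported experimental implementation.

```python
import numpy as np

def amplitude_encode(pixels):
    """Encode pixel values as the amplitudes of a normalized state vector."""
    v = np.asarray(pixels, dtype=float).ravel()
    return v / np.linalg.norm(v)

def hadamard_edge_signal(state):
    """Apply H to the least significant qubit: adjacent amplitude pairs (a, b)
    become ((a+b)/sqrt(2), (a-b)/sqrt(2)). The odd-indexed outputs are
    proportional to differences of neighbouring pixels, i.e. an edge signal.
    (The full algorithm also processes a shifted copy to catch edges that
    fall between pairs.)"""
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    n = len(state) // 2
    U = np.kron(np.eye(n), H)          # I (x) H acts on the last qubit
    return (U @ state)[1::2]           # keep the "difference" amplitudes

pixels = [10, 10, 10, 80, 80, 80, 10, 10]   # a 1-D toy "image" with two edges
state = amplitude_encode(pixels)
print(np.round(hadamard_edge_signal(state), 3))
```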
A Review on Potential Issues and Challenges in MR Imaging
Kanakaraj, Jagannathan
2013-01-01
Magnetic resonance imaging is a noninvasive technique that has been developed for its excellent depiction of soft tissue contrasts. Instruments capable of ultra-high field strengths, ≥7 Tesla, were recently engineered and have resulted in higher signal-to-noise and higher resolution images. This paper presents the various subsystems of MR imaging systems, such as the magnet and gradient subsystems, and the issues that arise due to the magnet. Further, it describes the RF coils and transceiver in finer detail, along with their limitations. Moreover, the concept behind the data processing system and the challenges related to it are depicted. Finally, the various artifacts associated with MR imaging are pointed out. It also presents a brief overview of all the challenges related to MR imaging systems. PMID:24381523
Reduce Fluid Experiment System: Flight data from the IML-1 Mission
NASA Technical Reports Server (NTRS)
Workman, Gary L.; Harper, Sabrina
1995-01-01
Processing and data reduction of holographic images from the International Microgravity Laboratory 1 (IML-1) presents some interesting challenges in determining the effects of microgravity on crystal growth processes. Use of several processing techniques, including the Computerized Holographic Image Processing System and the Software Development Package (SDP-151) will provide fundamental information for holographic and schlieren analysis of the space flight data.
BioImageXD: an open, general-purpose and high-throughput image-processing platform.
Kankaanpää, Pasi; Paavolainen, Lassi; Tiitta, Silja; Karjalainen, Mikko; Päivärinne, Joacim; Nieminen, Jonna; Marjomäki, Varpu; Heino, Jyrki; White, Daniel J
2012-06-28
BioImageXD puts open-source computer science tools for three-dimensional visualization and analysis into the hands of all researchers, through a user-friendly graphical interface tuned to the needs of biologists. BioImageXD has no restrictive licenses or undisclosed algorithms and enables publication of precise, reproducible and modifiable workflows. It allows simple construction of processing pipelines and should enable biologists to perform challenging analyses of complex processes. We demonstrate its performance in a study of integrin clustering in response to selected inhibitors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Islam, Md. Shafiqul, E-mail: shafique@eng.ukm.my; Hannan, M.A., E-mail: hannan@eng.ukm.my; Basri, Hassan
Highlights: • Solid waste bin level detection using Dynamic Time Warping (DTW). • Gabor wavelet filter is used to extract the solid waste image features. • Multi-Layer Perceptron classifier network is used for bin image classification. • The classification performance is evaluated by ROC curve analysis. - Abstract: The increasing requirement for Solid Waste Management (SWM) has become a significant challenge for municipal authorities. A number of integrated systems and methods have been introduced to overcome this challenge. Many researchers have aimed to develop an ideal SWM system, including approaches involving software-based routing, Geographic Information Systems (GIS), Radio-frequency Identification (RFID), or sensor-based intelligent bins. Image processing solutions for Solid Waste (SW) collection have also been developed; however, while capturing the bin image, it is challenging to position the camera so that the bin area is centered in the image. As yet, there is no ideal system which can correctly estimate the amount of SW. This paper briefly discusses an efficient image processing solution to overcome these problems. Dynamic Time Warping (DTW) was used for detecting and cropping the bin area and Gabor wavelet (GW) was introduced for feature extraction of the waste bin image. Image features were used to train the classifier. A Multi-Layer Perceptron (MLP) classifier was used to classify the waste bin level and estimate the amount of waste inside the bin. The area under the Receiver Operating Characteristic (ROC) curves was used to statistically evaluate classifier performance. The results of this developed system are comparable to previous image-processing-based systems. The system demonstration using DTW with GW for feature extraction and an MLP classifier led to promising results with respect to the accuracy of waste level estimation (98.50%). The application can be used to optimize the routing of waste collection based on the estimated bin level.
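The Gabor-feature and MLP stages of such a pipeline can be sketched with common Python libraries. The code below is an illustrative outline only, not the authors' implementation: it assumes the DTW-based bin detection and cropping has already been done, and the images and labels are placeholders.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def gabor_features(image, frequencies=(0.1, 0.2, 0.3)):
    """Mean and std of Gabor filter response magnitudes at several frequencies."""
    feats = []
    for f in frequencies:
        real, imag = gabor(image, frequency=f)
        mag = np.hypot(real, imag)
        feats.extend([mag.mean(), mag.std()])
    return np.array(feats)

# Placeholder data: cropped bin images and binary "bin full" labels.
rng = np.random.default_rng(0)
images = rng.random((40, 64, 64))
labels = rng.integers(0, 2, size=40)

X = np.array([gabor_features(im) for im in images])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0).fit(X_tr, y_tr)
print("ROC AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```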
Leppanen, Jenni; Cardi, Valentina; Ng, Kah Wee; Paloyelis, Yannis; Stein, Daniel; Tchanturia, Kate; Treasure, Janet
2017-05-01
Anorexia nervosa (AN) is characterised by severe malnutrition as well as intense fear and anxiety around food and eating, with associated anomalies in information processing. Previous studies have found that the neuropeptide oxytocin can influence eating behaviour, lower the neurobiological stress response and anxiety among clinical populations, and alter attentional processing of food and eating related images in AN. Thirty adult women with AN and twenty-nine healthy comparison (HC) women took part in the current study. The study used a double-blind, placebo-controlled, crossover design to investigate the effects of a single dose of intranasal oxytocin (40 IU) on a standard laboratory smoothie challenge, and on salivary cortisol, anxiety, and attentional bias towards food images before and after the smoothie challenge in AN and HC participants. Attentional bias was assessed using a visual probe task. Relative to placebo, intranasal oxytocin reduced salivary cortisol and altered anomalies in attentional bias towards food images in the AN group only. The oxytocin-induced reduction in attentional avoidance of food images correlated with the oxytocin-induced reduction in salivary cortisol in the AN group before the smoothie challenge. Intranasal oxytocin did not significantly alter subjective feelings of anxiety or intake during the smoothie challenge in the AN or HC groups. Intranasal oxytocin may moderate the automated information processing biases in AN and reduce neurobiological stress. Further investigation of the effects of repeated administration of oxytocin on these processes as well as on eating behaviour and subjective anxiety would be of interest. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Astronomical Image Processing with Hadoop
NASA Astrophysics Data System (ADS)
Wiley, K.; Connolly, A.; Krughoff, S.; Gardner, J.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.
2011-07-01
In the coming decade astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. With a requirement that these images be analyzed in real time to identify moving sources such as potentially hazardous asteroids or transient objects such as supernovae, these data streams present many computational challenges. In the commercial world, new techniques that utilize cloud computing have been developed to handle massive data streams. In this paper we describe how cloud computing, and in particular the map-reduce paradigm, can be used in astronomical data processing. We will focus on our experience implementing a scalable image-processing pipeline for the SDSS database using Hadoop (http://hadoop.apache.org). This multi-terabyte imaging dataset approximates future surveys such as those which will be conducted with the LSST. Our pipeline performs image coaddition in which multiple partially overlapping images are registered, integrated and stitched into a single overarching image. We will first present our initial implementation, then describe several critical optimizations that have enabled us to achieve high performance, and finally describe how we are incorporating a large in-house existing image processing library into our Hadoop system. The optimizations involve prefiltering of the input to remove irrelevant images from consideration, grouping individual FITS files into larger, more efficient indexed files, and a hybrid system in which a relational database is used to determine the input images relevant to the task. The incorporation of an existing image processing library, written in C++, presented difficult challenges since Hadoop is programmed primarily in Java. We will describe how we achieved this integration and the sophisticated image processing routines that were made feasible as a result. We will end by briefly describing the longer term goals of our work, namely detection and classification of transient objects and automated object classification.
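The map-reduce structure of coaddition can be shown without Hadoop itself. In the hedged, single-process Python sketch below, the map step emits (sky-tile, registered image) pairs, a grouping dictionary plays the role of Hadoop's shuffle/sort, and the reduce step averages the images sharing a tile; astrometric registration and FITS handling are assumed to happen upstream.

```python
from collections import defaultdict
import numpy as np

def map_image(image, tile_id):
    """Map step: emit (key, value) = (sky tile, image registered to that tile)."""
    yield tile_id, image

def reduce_coadd(values):
    """Reduce step: average all images that share a tile key."""
    return np.stack(list(values)).mean(axis=0)

# Toy "exposures": three noisy copies of the same 4x4 patch, all on tile "t0".
rng = np.random.default_rng(1)
truth = rng.random((4, 4))
exposures = [truth + 0.1 * rng.standard_normal((4, 4)) for _ in range(3)]

grouped = defaultdict(list)                 # stands in for Hadoop's shuffle/sort
for exp in exposures:
    for key, value in map_image(exp, "t0"):
        grouped[key].append(value)

coadds = {key: reduce_coadd(vals) for key, vals in grouped.items()}
print(np.abs(coadds["t0"] - truth).mean())  # noise is reduced by the coadd
```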
Medical image processing on the GPU - past, present and future.
Eklund, Anders; Dufort, Paul; Forsberg, Daniel; LaConte, Stephen M
2013-12-01
Graphics processing units (GPUs) are used today in a wide range of applications, mainly because they can dramatically accelerate parallel computing, are affordable and energy efficient. In the field of medical imaging, GPUs are in some cases crucial for enabling practical use of computationally demanding algorithms. This review presents the past and present work on GPU accelerated medical image processing, and is meant to serve as an overview and introduction to existing GPU implementations. The review covers GPU acceleration of basic image processing operations (filtering, interpolation, histogram estimation and distance transforms), the most commonly used algorithms in medical imaging (image registration, image segmentation and image denoising) and algorithms that are specific to individual modalities (CT, PET, SPECT, MRI, fMRI, DTI, ultrasound, optical imaging and microscopy). The review ends by highlighting some future possibilities and challenges. Copyright © 2013 Elsevier B.V. All rights reserved.
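As a small illustration of the kind of basic operation the review covers, the sketch below runs a 3-D Gaussian filter on the GPU with CuPy and checks it against the CPU result. This is an assumed example (CuPy and a CUDA-capable GPU are required), not code from the review.

```python
import numpy as np
import cupy as cp                                  # requires a CUDA-capable GPU
from scipy.ndimage import gaussian_filter as cpu_gaussian
from cupyx.scipy.ndimage import gaussian_filter as gpu_gaussian

volume = np.random.rand(128, 128, 128).astype(np.float32)   # toy 3-D "scan"

gpu_volume = cp.asarray(volume)                    # host -> device transfer
gpu_result = gpu_gaussian(gpu_volume, sigma=2.0)   # filtering runs on the GPU
result = cp.asnumpy(gpu_result)                    # device -> host transfer

reference = cpu_gaussian(volume, sigma=2.0)
print("max difference vs CPU:", float(np.abs(result - reference).max()))
```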
Challenges for data storage in medical imaging research.
Langer, Steve G
2011-04-01
Researchers in medical imaging have multiple challenges for storing, indexing, maintaining viability, and sharing their data. Addressing all these concerns requires a constellation of tools, but not all of them need to be local to the site. In particular, the data storage challenges faced by researchers can begin to require professional information technology skills. With limited human resources and funds, the medical imaging researcher may be better served with an outsourcing strategy for some management aspects. This paper outlines an approach to manage the main objectives faced by medical imaging scientists whose work includes processing and data mining on non-standard file formats, and relating those files to their DICOM-standard descendants. The capacity of the approach scales as the researcher's need grows by leveraging the on-demand provisioning ability of cloud computing.
Characterizing challenged Minnesota ballots
NASA Astrophysics Data System (ADS)
Nagy, George; Lopresti, Daniel; Barney Smith, Elisa H.; Wu, Ziyan
2011-01-01
Photocopies of the ballots challenged in the 2008 Minnesota elections, which constitute a public record, were scanned on a high-speed scanner and made available on a public radio website. The PDF files were downloaded, converted to TIF images, and posted on the PERFECT website. Based on a review of relevant image-processing aspects of paper-based election machinery and on additional statistics and observations on the posted sample data, robust tools were developed for determining the underlying grid of the targets on these ballots regardless of skew, clipping, and other degradations caused by high-speed copying and digitization. The accuracy and robustness of a method based on both index-marks and oval targets are demonstrated on 13,435 challenged ballot page images.
Image Processing Algorithms in the Secondary School Programming Education
ERIC Educational Resources Information Center
Gerják, István
2017-01-01
Learning computer programming for students of the age of 14-18 is difficult and requires endurance and engagement. Being familiar with the syntax of a computer language and writing programs in it are challenges for youngsters, not to mention that understanding algorithms is also a big challenge. To help students in the learning process, teachers…
Image segmentation via foreground and background semantic descriptors
NASA Astrophysics Data System (ADS)
Yuan, Ding; Qiang, Jingjing; Yin, Jihao
2017-09-01
In the field of image processing, it has been a challenging task to obtain a complete foreground that is not uniform in color or texture. Unlike other methods, which segment the image by only using low-level features, we present a segmentation framework, in which high-level visual features, such as semantic information, are used. First, the initial semantic labels were obtained by using the nonparametric method. Then, a subset of the training images, with a similar foreground to the input image, was selected. Consequently, the semantic labels could be further refined according to the subset. Finally, the input image was segmented by integrating the object affinity and refined semantic labels. State-of-the-art performance was achieved in experiments with the challenging MSRC 21 dataset.
“Pretty Pictures” with the HDI
NASA Astrophysics Data System (ADS)
Buckner, Spencer L.
2017-01-01
The Half-Degree Imager (HDI) has been in use on the 0.9-m WIYN telescope since October 2013. The instrument has served the consortium well, as evidenced by the posters in this session and presentations at the concurrent special session held at this meeting. One thing that has been missing from the mix is aesthetically pleasing images for use in publicity and public outreach. Making “pretty pictures” with a scientific instrument such as HDI presents a number of challenges and opportunities. The chief challenge is finding the time to do the basic imaging given the limited telescope time available to users. Most users are understandably reluctant to take time away from imaging for their scientific research to take images whose primary purpose is to make a pretty picture. Fortunately, imaging of some objects to make pretty pictures can be done under sky conditions that are less than ideal, when photometric studies would have limited usefulness. Another challenge is that the raw HDI images must be converted from an extended FITS format into a normal FITS file, with a filter line added to the header, to make the images usable by most commercially available image processing software. On the plus side, pretty picture images can serve to inspire prospective students into astronomy. Austin Peay State University has a popular astrophotography class that makes use of images taken with the HDI camera to introduce students to basic image processing techniques. The course is taken by both physics majors on the astrophysics track and non-science majors completing the astronomy minor. Pretty pictures can also be used as a recruitment tool to bring students into astronomy. APSU houses physics, biology, chemistry, agriculture and medical technology in the same building, and displaying astronomical pictures at strategic locations around the building serves to recruit non-science majors to take more astronomy courses. Finally, the images can be used in publicity and outreach efforts by the university. This poster presents some of the techniques used in processing the images for aesthetic value and how those images are used in recruitment, publicity and outreach. Several of the finished images in poster-sized prints will be available for viewing.
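The extended-FITS conversion step mentioned above can be done with astropy. The sketch below is a hypothetical example: it flattens the first image extension of a multi-extension file into a plain single-HDU FITS file and records a FILTER keyword; the file names and the filter value are placeholders.

```python
from astropy.io import fits

def flatten_hdi(in_path, out_path, filter_name):
    """Copy the first image extension of an extended FITS file into a plain
    single-HDU FITS file and record the filter in the header."""
    with fits.open(in_path) as hdul:
        data = hdul[1].data                 # image lives in the first extension
        header = hdul[1].header.copy()
    header["FILTER"] = filter_name          # keyword expected by most software
    fits.PrimaryHDU(data=data, header=header).writeto(out_path, overwrite=True)

# Hypothetical usage:
# flatten_hdi("hdi_raw.fits", "hdi_flat.fits", "V")
```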
Analysis of the IJCNN 2011 UTL Challenge
2012-01-13
We made available large datasets from various application domains: handwriting recognition, image recognition, video processing, text processing, and ecology (http://clopinet.com/ul). The evaluation sets consist of 4096 examples each. Partial dataset summary recoverable from the listing (Dataset, Domain, Features, Sparsity, Development examples, Transfer examples): AVICENNA, handwriting, 120, 0%, 150205, 50000; HARRY, video, 5000, 98.1% [truncated].
NASA Astrophysics Data System (ADS)
Robbins, William L.; Conklin, James J.
1995-10-01
Medical images (angiography, CT, MRI, nuclear medicine, ultrasound, x ray) play an increasingly important role in the clinical development and regulatory review process for pharmaceuticals and medical devices. Since medical images are increasingly acquired and archived digitally, or are readily digitized from film, they can be visualized, processed and analyzed in a variety of ways using digital image processing and display technology. Moreover, with image-based data management and data visualization tools, medical images can be electronically organized and submitted to the U.S. Food and Drug Administration (FDA) for review. The collection, processing, analysis, archival, and submission of medical images in a digital format versus an analog (film-based) format presents both challenges and opportunities for the clinical and regulatory information management specialist. The medical imaging 'core laboratory' is an important resource for clinical trials and regulatory submissions involving medical imaging data. Use of digital imaging technology within a core laboratory can increase efficiency and decrease overall costs in the image data management and regulatory review process.
From spoken narratives to domain knowledge: mining linguistic data for medical image understanding.
Guo, Xuan; Yu, Qi; Alm, Cecilia Ovesdotter; Calvelli, Cara; Pelz, Jeff B; Shi, Pengcheng; Haake, Anne R
2014-10-01
Extracting useful visual clues from medical images allowing accurate diagnoses requires physicians' domain knowledge acquired through years of systematic study and clinical training. This is especially true in the dermatology domain, a medical specialty that requires physicians to have image inspection experience. Automating or at least aiding such efforts requires understanding physicians' reasoning processes and their use of domain knowledge. Mining physicians' references to medical concepts in narratives during image-based diagnosis of a disease is an interesting research topic that can help reveal experts' reasoning processes. It can also be a useful resource to assist with design of information technologies for image use and for image case-based medical education systems. We collected data for analyzing physicians' diagnostic reasoning processes by conducting an experiment that recorded their spoken descriptions during inspection of dermatology images. In this paper we focus on the benefit of physicians' spoken descriptions and provide a general workflow for mining medical domain knowledge based on linguistic data from these narratives. The challenge of a medical image case can influence the accuracy of the diagnosis as well as how physicians pursue the diagnostic process. Accordingly, we define two lexical metrics for physicians' narratives--lexical consensus score and top N relatedness score--and evaluate their usefulness by assessing the diagnostic challenge levels of corresponding medical images. We also report on clustering medical images based on anchor concepts obtained from physicians' medical term usage. These analyses are based on physicians' spoken narratives that have been preprocessed by incorporating the Unified Medical Language System for detecting medical concepts. The image rankings based on lexical consensus score and on top 1 relatedness score are well correlated with those based on challenge levels (Spearman correlation>0.5 and Kendall correlation>0.4). Clustering results are largely improved based on our anchor concept method (accuracy>70% and mutual information>80%). Physicians' spoken narratives are valuable for the purpose of mining the domain knowledge that physicians use in medical image inspections. We also show that the semantic metrics introduced in the paper can be successfully applied to medical image understanding and allow discussion of additional uses of these metrics. Copyright © 2014 Elsevier B.V. All rights reserved.
Radiomics in radiooncology - Challenging the medical physicist.
Peeken, Jan C; Bernhofer, Michael; Wiestler, Benedikt; Goldberg, Tatyana; Cremers, Daniel; Rost, Burkhard; Wilkens, Jan J; Combs, Stephanie E; Nüsslin, Fridtjof
2018-04-01
Noting the fast-growing translation of artificial intelligence (AI) technologies to medical image analysis, this paper emphasizes the future role of the medical physicist in this evolving field. Specific challenges are addressed when implementing big data concepts with high-throughput image data processing like radiomics and machine learning in a radiooncology environment to support clinical decisions. Based on the experience of our interdisciplinary radiomics working group, techniques for processing minable data, extracting radiomics features and associating this information with clinical, physical and biological data for the development of prediction models are described. A special emphasis was placed on the potential clinical significance of such an approach. Clinical studies demonstrate the role of radiomics analysis as an additional independent source of information with the potential to influence radiooncology practice, i.e. to predict patient prognosis, treatment response and underlying genetic changes. Extending the radiomics approach to integrate imaging, clinical, genetic and dosimetric data ('panomics') challenges the medical physicist as a member of the radiooncology team. The new field of big data processing in radiooncology offers opportunities to support clinical decisions, to improve the prediction of treatment outcome and to stimulate fundamental research on the radiation response of both tumor and normal tissue. The integration of physical data (e.g. treatment planning, dosimetric, image guidance data) demands an involvement of the medical physicist in the radiomics approach of radiooncology. To cope with this challenge, national and international organizations for medical physics should organize more training opportunities in artificial intelligence technologies in radiooncology. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
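As a toy stand-in for the radiomics-plus-machine-learning chain described above (not the working group's pipeline), the sketch below computes a few first-order features from an ROI and fits a simple outcome model on placeholder data.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression

def first_order_features(image, mask):
    """A few first-order radiomics features of the voxels inside the ROI."""
    roi = image[mask > 0]
    hist, _ = np.histogram(roi, bins=32)
    p = hist / hist.sum()
    p = p[p > 0]
    return np.array([
        roi.mean(),                  # mean intensity
        roi.std(),                   # spread
        stats.skew(roi),             # asymmetry of the distribution
        -np.sum(p * np.log2(p)),     # histogram entropy
    ])

# Placeholder cohort: 30 random "images" with square ROIs and binary outcomes.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=30)
X = []
for label in y:
    img = rng.normal(loc=100 + 5 * label, scale=10, size=(64, 64))
    mask = np.zeros((64, 64), int)
    mask[20:40, 20:40] = 1
    X.append(first_order_features(img, mask))

model = LogisticRegression().fit(np.array(X), y)     # simple prediction model
print("training accuracy:", model.score(np.array(X), y))
```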
Traumatic Brain Injury Diffusion Magnetic Resonance Imaging Research Roadmap Development Project
2011-10-01
promising technology on the horizon is Diffusion Tensor Imaging (DTI), a magnetic resonance imaging (MRI)-based ... in the brain. The potential for DTI to improve our understanding of TBI has not been fully explored, and challenges associated with non-existent ... processing tools, quality control standards, and a shared image repository. The recommendations will be disseminated and pilot tested. A DTI of TBI
Downie, H F; Adu, M O; Schmidt, S; Otten, W; Dupuy, L X; White, P J; Valentine, T A
2015-07-01
The morphology of roots and root systems influences the efficiency by which plants acquire nutrients and water, anchor themselves and provide stability to the surrounding soil. Plant genotype and the biotic and abiotic environment significantly influence root morphology, growth and ultimately crop yield. The challenge for researchers interested in phenotyping root systems is, therefore, not just to measure roots and link their phenotype to the plant genotype, but also to understand how the growth of roots is influenced by their environment. This review discusses progress in quantifying root system parameters (e.g. in terms of size, shape and dynamics) using imaging and image analysis technologies and also discusses their potential for providing a better understanding of root:soil interactions. Significant progress has been made in image acquisition techniques; however, trade-offs exist between sample throughput, sample size, image resolution and information gained. All of these factors impact on downstream image analysis processes. While there have been significant advances in computation power, limitations still exist in the statistical processes involved in image analysis. Utilizing and combining different imaging systems, integrating measurements and image analysis where possible, and amalgamating data will allow researchers to gain a better understanding of root:soil interactions. © 2014 John Wiley & Sons Ltd.
Identifying regions of interest in medical images using self-organizing maps.
Teng, Wei-Guang; Chang, Ping-Lin
2012-10-01
Advances in data acquisition, processing and visualization techniques have had a tremendous impact on medical imaging in recent years. However, the interpretation of medical images is still almost always performed by radiologists. Developments in artificial intelligence and image processing have shown the increasingly great potential of computer-aided diagnosis (CAD). Nevertheless, it has remained challenging to develop a general approach to process various commonly used types of medical images (e.g., X-ray, MRI, and ultrasound images). To facilitate diagnosis, we recommend the use of image segmentation to discover regions of interest (ROI) using self-organizing maps (SOM). We devise a two-stage SOM approach that can be used to precisely identify the dominant colors of a medical image and then segment it into several small regions. In addition, by appropriately conducting the recursive merging steps to merge smaller regions into larger ones, radiologists can usually identify one or more ROIs within a medical image.
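A simplified rendering of the two-stage idea is sketched below: a small SOM learns the dominant colours, each pixel is assigned to its best-matching unit, and connected pixels sharing a unit become candidate regions. It uses the third-party MiniSom package and omits the recursive merging stage, so it is an assumption-laden illustration rather than the authors' method.

```python
import numpy as np
from minisom import MiniSom                 # assumed third-party SOM package
from scipy import ndimage

def som_segment(image, som_shape=(2, 2), iterations=500):
    """Stage 1: a small SOM learns the dominant colours of the image.
    Stage 2: each pixel is mapped to its best-matching unit and connected
    pixels with the same unit form candidate regions."""
    h, w, c = image.shape
    pixels = image.reshape(-1, c).astype(float)

    som = MiniSom(som_shape[0], som_shape[1], c, sigma=0.5, learning_rate=0.5,
                  random_seed=0)
    som.train_random(pixels, iterations)

    units = np.array([np.ravel_multi_index(som.winner(p), som_shape)
                      for p in pixels]).reshape(h, w)

    regions = np.zeros_like(units)
    next_label = 1
    for u in np.unique(units):
        labelled, n = ndimage.label(units == u)       # connected components
        regions[labelled > 0] = labelled[labelled > 0] + next_label - 1
        next_label += n
    return units, regions

# Toy image: two colour blocks plus noise.
img = np.zeros((40, 40, 3))
img[:, :20] = [0.9, 0.1, 0.1]
img[:, 20:] = [0.1, 0.1, 0.9]
img += 0.05 * np.random.default_rng(0).standard_normal(img.shape)

units, regions = som_segment(img)
print("regions found:", int(regions.max()))
```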
NASA Astrophysics Data System (ADS)
Wright, Adam A.; Momin, Orko; Shin, Young Ho; Shakya, Rahul; Nepal, Kumud; Ahlgren, David J.
2010-01-01
This paper presents the application of a distributed systems architecture to an autonomous ground vehicle, Q, that participates in both the autonomous and navigation challenges of the Intelligent Ground Vehicle Competition. In the autonomous challenge the vehicle is required to follow a course, while avoiding obstacles and staying within the course boundaries, which are marked by white lines. For the navigation challenge, the vehicle is required to reach a set of target destinations, known as way points, with given GPS coordinates and avoid obstacles that it encounters in the process. Previously the vehicle utilized a single laptop to execute all processing activities, including image processing, sensor interfacing and data processing, path planning and navigation algorithms, and motor control. National Instruments' (NI) LabVIEW served as the programming language for software implementation. As an upgrade to last year's design, an NI compact Reconfigurable Input/Output system (cRIO) was incorporated into the system architecture. The cRIO is NI's solution for rapid prototyping and is equipped with a real-time processor, an FPGA and modular input/output. Under the current system, the real-time processor handles the path planning and navigation algorithms, and the FPGA gathers and processes sensor data. This setup leaves the laptop to focus on running the image processing algorithm. Image processing, as previously presented by Nepal et al., is a multi-step line extraction algorithm and constitutes the largest processor load. This distributed approach results in a faster image processing algorithm, which was previously Q's bottleneck. Additionally, the path planning and navigation algorithms are executed more reliably on the real-time processor due to the deterministic nature of operation. The implementation of this architecture required exploration of various inter-system communication techniques. Data transfer between the laptop and the real-time processor using UDP packets was established as the most reliable protocol after testing various options. Improvement can be made to the system by migrating more algorithms to the hardware-based FPGA to further speed up the operations of the vehicle.
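The laptop-to-cRIO UDP link is the piece that is easiest to sketch in isolation. The snippet below is a minimal Python stand-in for the LabVIEW implementation; the address, port and JSON message format are placeholders, not the competition code.

```python
# Minimal sketch of the laptop -> real-time-processor UDP link described above.
# The real system uses LabVIEW on both ends; host, port and message format here
# are placeholders.
import json
import socket

CRIO_ADDR = ("127.0.0.1", 5005)           # placeholder for the cRIO address

def send_lane_estimate(sock, left_offset_m, right_offset_m):
    """Laptop side: send the lane-line offsets produced by image processing."""
    msg = json.dumps({"left": left_offset_m, "right": right_offset_m}).encode()
    sock.sendto(msg, CRIO_ADDR)

def receive_lane_estimate(sock):
    """Real-time side: block until one estimate arrives and decode it."""
    data, _ = sock.recvfrom(1024)
    return json.loads(data.decode())

if __name__ == "__main__":
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_lane_estimate(tx, 0.42, -0.37)   # fire-and-forget, as UDP is lossy
```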
Improvement of passive THz camera images
NASA Astrophysics Data System (ADS)
Kowalski, Marcin; Piszczek, Marek; Palka, Norbert; Szustakowski, Mieczyslaw
2012-10-01
Terahertz technology is one of the emerging technologies that has the potential to change our lives. There are a lot of attractive applications in fields like security, astronomy, biology and medicine. Until recent years, terahertz (THz) waves were an undiscovered, or most importantly, an unexploited area of the electromagnetic spectrum. The reasons for this were difficulties in the generation and detection of THz waves. Recent advances in hardware technology have started to open up the field to new applications such as THz imaging. THz waves can penetrate through various materials. However, automated processing of THz images can be challenging. The THz frequency band is especially suited for clothes penetration because this radiation does not have any harmful ionizing effects and is thus safe for human beings. Strong technology development in this band has produced a few interesting devices. Even though the development of THz cameras is an emerging topic, commercially available passive cameras still offer images of poor quality, mainly because of their low resolution and low detector sensitivity. Therefore, THz image processing is a very challenging and urgent topic. Digital THz image processing is a really promising and cost-effective way to meet demanding security and defense applications. In the article we demonstrate the results of image quality enhancement and image fusion of images captured by a commercially available passive THz camera by means of various combined methods. Our research is focused on the detection of dangerous objects - guns, knives and bombs - hidden under some popular types of clothing.
Lu, Guolan; Wang, Dongsheng; Qin, Xulei; Halig, Luma; Muller, Susan; Zhang, Hongzheng; Chen, Amy; Pogue, Brian W; Chen, Zhuo Georgia; Fei, Baowei
2015-01-01
Hyperspectral imaging (HSI) is an imaging modality that holds strong potential for rapid cancer detection during image-guided surgery. But the data from HSI often needs to be processed appropriately in order to extract the maximum useful information that differentiates cancer from normal tissue. We proposed a framework for hyperspectral image processing and quantification, which includes image preprocessing, glare removal, feature extraction, and ultimately image classification. The framework has been tested on images from mice with head and neck cancer, using spectra from 450- to 900-nm wavelength. The image analysis computed Fourier coefficients, normalized reflectance, mean, and spectral derivatives for improved accuracy. The experimental results demonstrated the feasibility of the hyperspectral image processing and quantification framework for cancer detection during animal tumor surgery, in a challenging setting where sensitivity can be low due to a modest number of features present, but the potential for fast image classification can be high. This HSI approach may have potential application in tumor margin assessment during image-guided surgery, where speed of assessment may be the dominant factor.
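The per-pixel feature computation named above (normalized reflectance, mean, spectral derivatives, Fourier coefficients) can be sketched in a few lines of numpy. The sketch below is illustrative only: it assumes white and dark reference cubes for normalization and leaves out glare removal and the classifier.

```python
import numpy as np

def hsi_features(cube, white_ref, dark_ref, n_fourier=8):
    """Per-pixel spectral features for an HSI cube of shape (rows, cols, bands):
    normalized reflectance, mean, first spectral derivative, and the magnitudes
    of the first few Fourier coefficients of each spectrum."""
    reflectance = (cube - dark_ref) / (white_ref - dark_ref + 1e-9)
    mean = reflectance.mean(axis=2, keepdims=True)
    deriv = np.diff(reflectance, axis=2)                    # band-to-band derivative
    fourier = np.abs(np.fft.rfft(reflectance, axis=2))[..., :n_fourier]
    return np.concatenate([reflectance, mean, deriv, fourier], axis=2)

# Toy cube: 16 x 16 pixels, 50 spectral bands spanning 450-900 nm.
rng = np.random.default_rng(0)
cube = rng.random((16, 16, 50))
white = np.full((16, 16, 50), 1.0)
dark = np.zeros((16, 16, 50))

features = hsi_features(cube, white, dark)
print(features.shape)    # (16, 16, 50 + 1 + 49 + 8) feature planes per pixel
```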
NASA Astrophysics Data System (ADS)
Lu, Guolan; Wang, Dongsheng; Qin, Xulei; Halig, Luma; Muller, Susan; Zhang, Hongzheng; Chen, Amy; Pogue, Brian W.; Chen, Zhuo Georgia; Fei, Baowei
2015-12-01
Hyperspectral imaging (HSI) is an imaging modality that holds strong potential for rapid cancer detection during image-guided surgery. But the data from HSI often needs to be processed appropriately in order to extract the maximum useful information that differentiates cancer from normal tissue. We proposed a framework for hyperspectral image processing and quantification, which includes image preprocessing, glare removal, feature extraction, and ultimately image classification. The framework has been tested on images from mice with head and neck cancer, using spectra from 450- to 900-nm wavelength. The image analysis computed Fourier coefficients, normalized reflectance, mean, and spectral derivatives for improved accuracy. The experimental results demonstrated the feasibility of the hyperspectral image processing and quantification framework for cancer detection during animal tumor surgery, in a challenging setting where sensitivity can be low due to a modest number of features present, but the potential for fast image classification can be high. This HSI approach may have potential application in tumor margin assessment during image-guided surgery, where speed of assessment may be the dominant factor.
Image reconstruction for PET/CT scanners: past achievements and future challenges
Tong, Shan; Alessio, Adam M; Kinahan, Paul E
2011-01-01
PET is a medical imaging modality with proven clinical value for disease diagnosis and treatment monitoring. The integration of PET and CT on modern scanners provides a synergy of the two imaging modalities. Through different mathematical algorithms, PET data can be reconstructed into the spatial distribution of the injected radiotracer. With dynamic imaging, kinetic parameters of specific biological processes can also be determined. Numerous efforts have been devoted to the development of PET image reconstruction methods over the last four decades, encompassing analytic and iterative reconstruction methods. This article provides an overview of the commonly used methods. Current challenges in PET image reconstruction include more accurate quantitation, TOF imaging, system modeling, motion correction and dynamic reconstruction. Advances in these aspects could enhance the use of PET/CT imaging in patient care and in clinical research studies of pathophysiology and therapeutic interventions. PMID:21339831
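Of the iterative methods referred to above, the MLEM update is the canonical example. The following toy numpy sketch uses a small random system matrix rather than a realistic scanner model, and is meant only to show the multiplicative form of the update.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Maximum-likelihood EM reconstruction for emission tomography.
    A: system matrix (detector bins x image voxels), y: measured counts.
    Update: x <- x / (A^T 1) * A^T (y / (A x))."""
    x = np.ones(A.shape[1])                    # start from a flat image
    sens = A.T @ np.ones(A.shape[0])           # sensitivity image, A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)   # measured / expected counts
        x = x / np.maximum(sens, 1e-12) * (A.T @ ratio)
    return x

# Toy problem: 9-voxel "image", 30 detector bins, Poisson-like data.
rng = np.random.default_rng(0)
A = rng.random((30, 9))
x_true = rng.random(9) * 10
y = rng.poisson(A @ x_true).astype(float)

x_hat = mlem(A, y)
print(np.round(x_hat, 2))
print(np.round(x_true, 2))
```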
Reference software implementation for GIFTS ground data processing
NASA Astrophysics Data System (ADS)
Garcia, R. K.; Howell, H. B.; Knuteson, R. O.; Martin, G. D.; Olson, E. R.; Smuga-Otto, M. J.
2006-08-01
Future satellite weather instruments such as high spectral resolution imaging interferometers pose a challenge to the atmospheric science and software development communities due to the immense data volumes they will generate. An open-source, scalable reference software implementation demonstrating the calibration of radiance products from an imaging interferometer, the Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS), is presented. This paper covers essential design principles laid out in summary system diagrams, lessons learned during implementation and preliminary test results from the GIFTS Information Processing System (GIPS) prototype.
Registration of adaptive optics corrected retinal nerve fiber layer (RNFL) images
Ramaswamy, Gomathy; Lombardo, Marco; Devaney, Nicholas
2014-01-01
Glaucoma is the leading cause of preventable blindness in the western world. Investigation of high-resolution retinal nerve fiber layer (RNFL) images in patients may lead to new indicators of its onset. Adaptive optics (AO) can provide diffraction-limited images of the retina, providing new opportunities for earlier detection of neuroretinal pathologies. However, precise processing is required to correct for three effects in sequences of AO-assisted, flood-illumination images: uneven illumination, residual image motion and image rotation. This processing can be challenging for images of the RNFL due to their low contrast and lack of clearly noticeable features. Here we develop specific processing techniques and show that their application leads to improved image quality on the nerve fiber bundles. This in turn improves the reliability of measures of fiber texture such as the correlation of Gray-Level Co-occurrence Matrix (GLCM). PMID:24940551
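The three corrections and the texture measure mentioned above map onto standard scikit-image operations. The sketch below is illustrative rather than the authors' pipeline: it flattens illumination by dividing out a heavily blurred background, removes residual translation by phase correlation, and reports the GLCM correlation of the result.

```python
import numpy as np
from scipy import ndimage
from skimage.registration import phase_cross_correlation
from skimage.feature import graycomatrix, graycoprops   # newer scikit-image names

def correct_illumination(frame, sigma=30):
    """Divide out a heavily blurred copy of the frame to flatten illumination."""
    background = ndimage.gaussian_filter(frame, sigma) + 1e-9
    flat = frame / background
    return flat / flat.max()

def register_to_reference(reference, frame):
    """Estimate residual translation by phase correlation and undo it."""
    shift, _, _ = phase_cross_correlation(reference, frame)
    return ndimage.shift(frame, shift)

def glcm_correlation(frame, levels=64):
    """GLCM correlation, used here as a simple fiber-texture measure."""
    q = (np.clip(frame, 0, 1) * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                        symmetric=True, normed=True)
    return graycoprops(glcm, "correlation")[0, 0]

# Toy frame: a shifted copy of a reference with an illumination gradient.
rng = np.random.default_rng(0)
reference = rng.random((128, 128))
frame = ndimage.shift(reference, (2.0, -3.0)) * np.linspace(0.5, 1.0, 128)

aligned = register_to_reference(correct_illumination(reference),
                                correct_illumination(frame))
print("GLCM correlation:", round(glcm_correlation(aligned), 3))
```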
Registration of adaptive optics corrected retinal nerve fiber layer (RNFL) images.
Ramaswamy, Gomathy; Lombardo, Marco; Devaney, Nicholas
2014-06-01
Glaucoma is the leading cause of preventable blindness in the western world. Investigation of high-resolution retinal nerve fiber layer (RNFL) images in patients may lead to new indicators of its onset. Adaptive optics (AO) can provide diffraction-limited images of the retina, providing new opportunities for earlier detection of neuroretinal pathologies. However, precise processing is required to correct for three effects in sequences of AO-assisted, flood-illumination images: uneven illumination, residual image motion and image rotation. This processing can be challenging for images of the RNFL due to their low contrast and lack of clearly noticeable features. Here we develop specific processing techniques and show that their application leads to improved image quality on the nerve fiber bundles. This in turn improves the reliability of measures of fiber texture such as the correlation of Gray-Level Co-occurrence Matrix (GLCM).
Content standards for medical image metadata
NASA Astrophysics Data System (ADS)
d'Ornellas, Marcos C.; da Rocha, Rafael P.
2003-12-01
Medical images are at the heart of healthcare diagnostic procedures. They have provided not only a noninvasive means to view anatomical cross-sections of internal organs but also a means for physicians to evaluate the patient's diagnosis and monitor the effects of the treatment. For a medical center, the emphasis may shift from image generation to post-processing and data management, since the medical staff may generate even more processed images and other data from the original image after various analyses and post-processing. A medical image data repository for the health care information system is becoming a critical need. This data repository would contain comprehensive patient records, including information such as clinical data and related diagnostic images, and post-processed images. Due to the large volume and complexity of the data as well as the diversified user access requirements, the implementation of the medical image archive system will be a complex and challenging task. This paper discusses content standards for medical image metadata. In addition, it also focuses on the evaluation of image metadata content and metadata quality management.
An evolution of image source camera attribution approaches.
Jahanirad, Mehdi; Wahab, Ainuddin Wahid Abdul; Anuar, Nor Badrul
2016-05-01
Camera attribution plays an important role in digital image forensics by providing the evidence and distinguishing characteristics of the origin of the digital image. It allows the forensic analyser to find the possible source camera which captured the image under investigation. However, in real-world applications, these approaches have faced many challenges due to the large set of multimedia data publicly available through photo sharing and social network sites, captured under uncontrolled conditions and subjected to a variety of hardware and software post-processing operations. Moreover, the legal system only accepts the forensic analysis of the digital image evidence if the applied camera attribution techniques are unbiased, reliable, nondestructive and widely accepted by the experts in the field. The aim of this paper is to investigate the evolutionary trend of image source camera attribution approaches from fundamentals to practice, in particular, with the application of image processing and data mining techniques. Extracting implicit knowledge from images using intrinsic image artifacts for source camera attribution requires a structured image mining process. In this paper, we attempt to provide an introductory tutorial on the image processing pipeline, to determine the general classification of the features corresponding to different components for source camera attribution. The article also reviews techniques of source camera attribution more comprehensively in the domain of image forensics, in conjunction with the presentation of classifying ongoing developments within the specified area. The classification of the existing source camera attribution approaches is presented based on specific parameters, such as the colour image processing pipeline, hardware- and software-related artifacts and the methods to extract such artifacts. The more recent source camera attribution approaches, which have not yet gained sufficient attention among image forensics researchers, are also critically analysed and further categorised into four classes, namely, optical aberrations based, sensor camera fingerprints based, processing statistics based and processing regularities based. Furthermore, this paper aims to investigate the challenging problems, and the proposed strategies of such schemes based on the suggested taxonomy, to plot an evolution of the source camera attribution approaches with respect to the subjective optimisation criteria over the last decade. The optimisation criteria were determined based on the strategies proposed to increase the detection accuracy, robustness and computational efficiency of source camera brand, model or device attribution. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
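Of the four classes listed, the sensor-fingerprint (PRNU-style) approach lends itself to a compact sketch: a camera fingerprint is estimated by averaging noise residuals from several of its images, and a test image is attributed by correlating its residual with each candidate fingerprint. The code below is a deliberately simplified illustration (wavelet denoising as the residual extractor, plain normalized correlation as the detector), not a forensic-grade implementation.

```python
import numpy as np
from skimage.restoration import denoise_wavelet

def noise_residual(image):
    """Residual = image minus its denoised version; carries the sensor pattern."""
    return image - denoise_wavelet(image)

def camera_fingerprint(images):
    """Average the residuals of several images from the same camera."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def correlation(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Toy data: two "cameras" with different fixed multiplicative noise patterns.
rng = np.random.default_rng(0)
prnu_a, prnu_b = 0.02 * rng.standard_normal((2, 64, 64))
scene = lambda: rng.random((64, 64))
cam_a = [np.clip(scene() * (1 + prnu_a), 0, 1) for _ in range(8)]
cam_b = [np.clip(scene() * (1 + prnu_b), 0, 1) for _ in range(8)]

fp_a, fp_b = camera_fingerprint(cam_a), camera_fingerprint(cam_b)
test = np.clip(scene() * (1 + prnu_a), 0, 1)               # taken with camera A
res = noise_residual(test)
print("corr with A:", correlation(res, fp_a), "corr with B:", correlation(res, fp_b))
```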
a Hadoop-Based Distributed Framework for Efficient Managing and Processing Big Remote Sensing Images
NASA Astrophysics Data System (ADS)
Wang, C.; Hu, F.; Hu, X.; Zhao, S.; Wen, W.; Yang, C.
2015-07-01
Various sensors from airborne and satellite platforms are producing large volumes of remote sensing images for mapping, environmental monitoring, disaster management, military intelligence, and other uses. However, it is challenging to efficiently store, query and process such big data due to data- and computing-intensive issues. In this paper, a Hadoop-based framework is proposed to manage and process the big remote sensing data in a distributed and parallel manner. In particular, remote sensing data can be directly fetched from other data platforms into the Hadoop Distributed File System (HDFS). The Orfeo toolbox, a ready-to-use tool for large image processing, is integrated into MapReduce to provide a rich set of image processing operations. With the integration of HDFS, the Orfeo toolbox and MapReduce, these remote sensing images can be directly processed in parallel in a scalable computing environment. The experiment results show that the proposed framework can efficiently manage and process such big remote sensing data.
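One lightweight way to see the MapReduce part of such a framework is Hadoop Streaming, where the map and reduce steps are ordinary stdin/stdout programs. The sketch below is an illustration rather than the paper's HDFS/Orfeo integration: it assumes each input line carries a tile identifier and red/near-infrared pixel samples, and produces a mean NDVI per tile.

```python
#!/usr/bin/env python3
# Hadoop Streaming-style mapper/reducer sketch (illustration only).
# Input lines: "tile_id red nir" pixel samples; output: mean NDVI per tile.
# With Hadoop this would be launched roughly as (paths are placeholders):
#   hadoop jar hadoop-streaming.jar -mapper "ndvi.py map" -reducer "ndvi.py reduce" ...
import sys

def mapper(lines):
    for line in lines:
        tile, red, nir = line.split()
        red, nir = float(red), float(nir)
        ndvi = (nir - red) / (nir + red + 1e-9)
        print(f"{tile}\t{ndvi:.6f}")

def reducer(lines):
    current, total, count = None, 0.0, 0
    for line in lines:                      # Hadoop delivers keys already sorted
        tile, ndvi = line.split("\t")
        if tile != current and current is not None:
            print(f"{current}\t{total / count:.6f}")
            total, count = 0.0, 0
        current = tile
        total += float(ndvi)
        count += 1
    if current is not None:
        print(f"{current}\t{total / count:.6f}")

if __name__ == "__main__":
    stage = sys.argv[1] if len(sys.argv) > 1 else "map"
    (mapper if stage == "map" else reducer)(sys.stdin)
```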
A Robust Actin Filaments Image Analysis Framework
Alioscha-Perez, Mitchel; Benadiba, Carine; Goossens, Katty; Kasas, Sandor; Dietler, Giovanni; Willaert, Ronnie; Sahli, Hichem
2016-01-01
The cytoskeleton is a highly dynamical protein network that plays a central role in numerous cellular physiological processes, and is traditionally divided into three components according to its chemical composition, i.e. actin, tubulin and intermediate filament cytoskeletons. Understanding the cytoskeleton dynamics is of prime importance to unveil mechanisms involved in cell adaptation to any stress type. Fluorescence imaging of cytoskeleton structures allows analyzing the impact of mechanical stimulation in the cytoskeleton, but it also imposes additional challenges in the image processing stage, such as the presence of imaging-related artifacts and heavy blurring introduced by (high-throughput) automated scans. However, although there exists a considerable number of image-based analytical tools to address the image processing and analysis, most of them are unfit to cope with the aforementioned challenges. Filamentous structures in images can be considered as a piecewise composition of quasi-straight segments (at least in some finer or coarser scale). Based on this observation, we propose a three-steps actin filaments extraction methodology: (i) first the input image is decomposed into a ‘cartoon’ part corresponding to the filament structures in the image, and a noise/texture part, (ii) on the ‘cartoon’ image, we apply a multi-scale line detector coupled with a (iii) quasi-straight filaments merging algorithm for fiber extraction. The proposed robust actin filaments image analysis framework allows extracting individual filaments in the presence of noise, artifacts and heavy blurring. Moreover, it provides numerous parameters such as filaments orientation, position and length, useful for further analysis. Cell image decomposition is relatively under-exploited in biological images processing, and our study shows the benefits it provides when addressing such tasks. Experimental validation was conducted using publicly available datasets, and in osteoblasts grown in two different conditions: static (control) and fluid shear stress. The proposed methodology exhibited higher sensitivity values and similar accuracy compared to state-of-the-art methods. PMID:27551746
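A rough open-source approximation of steps (i) and (ii) is possible with scikit-image: a total-variation filter as a stand-in for the cartoon/texture decomposition, and a multi-scale ridge filter followed by skeletonization as the line detector. The sketch below is not the authors' method and omits step (iii), the quasi-straight segment merging.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle
from skimage.filters import sato, threshold_otsu
from skimage.morphology import skeletonize

def extract_filaments(image):
    """(i) TV filtering as a stand-in for the cartoon/texture decomposition,
    (ii) multi-scale ridge (Sato) filter + Otsu threshold + skeletonization.
    The quasi-straight segment merging of step (iii) is not reproduced here."""
    cartoon = denoise_tv_chambolle(image, weight=0.1)
    ridges = sato(cartoon, sigmas=range(1, 4), black_ridges=False)
    mask = ridges > threshold_otsu(ridges)
    return skeletonize(mask)

# Toy image: two bright line segments on a noisy background.
rng = np.random.default_rng(0)
img = 0.1 * rng.random((128, 128))
img[40, 20:100] = 1.0                               # horizontal filament
img[np.arange(30, 90), np.arange(30, 90)] = 1.0     # diagonal filament
skeleton = extract_filaments(img)
print("skeleton pixels:", int(skeleton.sum()))
```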
High-performance image processing on the desktop
NASA Astrophysics Data System (ADS)
Jordan, Stephen D.
1996-04-01
The suitability of computers to the task of medical image visualization for the purposes of primary diagnosis and treatment planning depends on three factors: speed, image quality, and price. To be widely accepted the technology must increase the efficiency of the diagnostic and planning processes. This requires processing and displaying medical images of various modalities in real-time, with accuracy and clarity, on an affordable system. Our approach to meeting this challenge began with market research to understand customer image processing needs. These needs were translated into system-level requirements, which in turn were used to determine which image processing functions should be implemented in hardware. The result is a computer architecture for 2D image processing that is both high-speed and cost-effective. The architectural solution is based on the high-performance PA-RISC workstation with an HCRX graphics accelerator. The image processing enhancements are incorporated into the image visualization accelerator (IVX) which attaches to the HCRX graphics subsystem. The IVX includes a custom VLSI chip which has a programmable convolver, a window/level mapper, and an interpolator supporting nearest-neighbor, bi-linear, and bi-cubic modes. This combination of features can be used to enable simultaneous convolution, pan, zoom, rotate, and window/level control into 1 k by 1 k by 16-bit medical images at 40 frames/second.
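The three IVX operations named above (convolution, window/level mapping, interpolation) are easy to express in software. The sketch below is a plain numpy/scipy rendering of that pixel pipeline, not the VLSI implementation; the kernel, window and level values are arbitrary.

```python
import numpy as np
from scipy import ndimage

def window_level(image, window, level):
    """Map a 16-bit image into the 8-bit display range around (level, window)."""
    lo, hi = level - window / 2.0, level + window / 2.0
    out = np.clip((image.astype(float) - lo) / (hi - lo), 0.0, 1.0)
    return (out * 255).astype(np.uint8)

def sharpen(image):
    """A small programmable-convolver stand-in: 3x3 sharpening kernel."""
    kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], float)
    return ndimage.convolve(image.astype(float), kernel, mode="nearest")

def zoom_bicubic(image, factor):
    """Interpolator stand-in: bicubic (order-3 spline) zoom."""
    return ndimage.zoom(image, factor, order=3)

img16 = np.random.default_rng(0).integers(0, 4096, size=(1024, 1024)).astype(np.uint16)
display = window_level(zoom_bicubic(sharpen(img16), 0.5), window=400, level=1000)
print(display.dtype, display.shape)
```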
Astronomy In The Cloud: Using Mapreduce For Image Coaddition
NASA Astrophysics Data System (ADS)
Wiley, Keith; Connolly, A.; Gardner, J.; Krughoff, S.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.
2011-01-01
In the coming decade, astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. The study of these sources will involve computational challenges such as anomaly detection, classification, and moving object tracking. Since such studies require the highest quality data, methods such as image coaddition, i.e., registration, stacking, and mosaicing, will be critical to scientific investigation. With a requirement that these images be analyzed on a nightly basis to identify moving sources, e.g., asteroids, or transient objects, e.g., supernovae, these datastreams present many computational challenges. Given the quantity of data involved, the computational load of these problems can only be addressed by distributing the workload over a large number of nodes. However, the high data throughput demanded by these applications may present scalability challenges for certain storage architectures. One scalable data-processing method that has emerged in recent years is MapReduce, and in this paper we focus on its popular open-source implementation called Hadoop. In the Hadoop framework, the data is partitioned among storage attached directly to worker nodes, and the processing workload is scheduled in parallel on the nodes that contain the required input data. A further motivation for using Hadoop is that it allows us to exploit cloud computing resources, i.e., platforms where Hadoop is offered as a service. We report on our experience implementing a scalable image-processing pipeline for the SDSS imaging database using Hadoop. This multi-terabyte imaging dataset provides a good testbed for algorithm development since its scope and structure approximate future surveys. First, we describe MapReduce and how we adapted image coaddition to the MapReduce framework. Then we describe a number of optimizations to our basic approach and report experimental results comparing their performance. This work is funded by the NSF and by NASA.
Astronomy in the Cloud: Using MapReduce for Image Co-Addition
NASA Astrophysics Data System (ADS)
Wiley, K.; Connolly, A.; Gardner, J.; Krughoff, S.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.
2011-03-01
In the coming decade, astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. The study of these sources will involve computation challenges such as anomaly detection and classification and moving-object tracking. Since such studies benefit from the highest-quality data, methods such as image co-addition, i.e., astrometric registration followed by per-pixel summation, will be a critical preprocessing step prior to scientific investigation. With a requirement that these images be analyzed on a nightly basis to identify moving sources such as potentially hazardous asteroids or transient objects such as supernovae, these data streams present many computational challenges. Given the quantity of data involved, the computational load of these problems can only be addressed by distributing the workload over a large number of nodes. However, the high data throughput demanded by these applications may present scalability challenges for certain storage architectures. One scalable data-processing method that has emerged in recent years is MapReduce, and in this article we focus on its popular open-source implementation called Hadoop. In the Hadoop framework, the data are partitioned among storage attached directly to worker nodes, and the processing workload is scheduled in parallel on the nodes that contain the required input data. A further motivation for using Hadoop is that it allows us to exploit cloud computing resources: i.e., platforms where Hadoop is offered as a service. We report on our experience of implementing a scalable image-processing pipeline for the SDSS imaging database using Hadoop. This multiterabyte imaging data set provides a good testbed for algorithm development, since its scope and structure approximate future surveys. First, we describe MapReduce and how we adapted image co-addition to the MapReduce framework. Then we describe a number of optimizations to our basic approach and report experimental results comparing their performance.
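Image co-addition maps naturally onto the MapReduce pattern the two abstracts above describe: the map stage emits registered pixel contributions keyed by sky-grid position, and the reduce stage sums and averages them. The pure-Python sketch below illustrates the idea only; it is not the authors' Hadoop pipeline, and it assumes the astrometric registration offsets are already known.

```python
from collections import defaultdict
import numpy as np

def map_phase(image, offset):
    """Emit (sky-pixel, value) pairs for one registered exposure; `offset` is its
    integer shift onto the common sky grid."""
    dy, dx = offset
    for (y, x), value in np.ndenumerate(image):
        yield (y + dy, x + dx), value

def reduce_phase(pairs, shape):
    """Sum contributions per sky pixel and average over the exposures covering it."""
    total, count = defaultdict(float), defaultdict(int)
    for key, value in pairs:
        total[key] += value
        count[key] += 1
    coadd = np.zeros(shape)
    for (y, x), s in total.items():
        if 0 <= y < shape[0] and 0 <= x < shape[1]:
            coadd[y, x] = s / count[(y, x)]
    return coadd

# Two small synthetic exposures of the same field, shifted by known offsets
exposures = [(np.random.rand(64, 64), (0, 0)), (np.random.rand(64, 64), (1, 2))]
pairs = (p for img, off in exposures for p in map_phase(img, off))
stacked = reduce_phase(pairs, shape=(66, 66))
```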
Rubin, Geoffrey D.; Leipsic, Jonathon; Schoepf, U. Joseph; Fleischmann, Dominik; Napel, Sandy
2015-01-01
Through a marriage of spiral computed tomography (CT) and graphical volumetric image processing, CT angiography was born 20 years ago. Fueled by a series of technical innovations in CT and image processing, over the next 5–15 years, CT angiography toppled conventional angiography, the undisputed diagnostic reference standard for vascular disease for the prior 70 years, as the preferred modality for the diagnosis and characterization of most cardiovascular abnormalities. This review recounts the evolution of CT angiography from its development and early challenges to a maturing modality that has provided unique insights into cardiovascular disease characterization and management. Selected clinical challenges, which include acute aortic syndromes, peripheral vascular disease, aortic stent-graft and transcatheter aortic valve assessment, and coronary artery disease, are presented as contrasting examples of how CT angiography is changing our approach to cardiovascular disease diagnosis and management. Finally, the recently introduced capabilities for multispectral imaging, tissue perfusion imaging, and radiation dose reduction through iterative reconstruction are explored with consideration toward the continued refinement and advancement of CT angiography. PMID:24848958
Stable image acquisition for mobile image processing applications
NASA Astrophysics Data System (ADS)
Henning, Kai-Fabian; Fritze, Alexander; Gillich, Eugen; Mönks, Uwe; Lohweg, Volker
2015-02-01
Today, mobile devices (smartphones, tablets, etc.) are widespread and of high importance to their users, and their performance and versatility increase over time. This creates the opportunity to use such devices for more specific tasks, such as image processing in an industrial context. For the analysis of images, requirements such as image quality (blur, illumination, etc.) and a defined relative position of the object to be inspected are crucial. Since mobile devices are handheld and used in constantly changing environments, the challenge is to fulfill these requirements. We present an approach to overcome these obstacles and stabilize the image capturing process so that image analysis on mobile devices is significantly improved. To this end, image processing methods are combined with sensor fusion concepts. The approach consists of three main parts. First, pose estimation methods are used to guide the user in moving the device to a defined position. Second, the sensor data and the pose information are combined for relative motion estimation. Finally, the image capturing process is automated: it is triggered depending on the alignment of the device with the object as well as the image quality that can be achieved under consideration of motion and environmental effects.
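A minimal version of the automated capture trigger described above can be built from a sharpness measure and a pose tolerance; the sketch below uses the variance of the Laplacian as a blur metric via OpenCV. The thresholds and the pose-error input are illustrative assumptions and do not reproduce the authors' sensor-fusion pipeline.

```python
import cv2
import numpy as np

def sharpness(gray):
    """Variance of the Laplacian: higher values indicate a sharper (less blurred) frame."""
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def should_capture(frame_bgr, pose_error_deg, blur_threshold=100.0, pose_tolerance_deg=2.0):
    """Trigger capture only when the device is aligned and the frame is sharp enough."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return pose_error_deg < pose_tolerance_deg and sharpness(gray) > blur_threshold

# Example with a synthetic frame and a hypothetical pose error from a pose estimator
frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
if should_capture(frame, pose_error_deg=1.2):
    cv2.imwrite("inspection_shot.png", frame)
```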
NASA Astrophysics Data System (ADS)
Jackson, Edward F.
2016-04-01
Over the past decade, there has been an increasing focus on quantitative imaging biomarkers (QIBs), which are defined as "objectively measured characteristics derived from in vivo images as indicators of normal biological processes, pathogenic processes, or response to a therapeutic intervention"1. To evolve qualitative imaging assessments to the use of QIBs requires the development and standardization of data acquisition, data analysis, and data display techniques, as well as appropriate reporting structures. As such, successful implementation of QIB applications relies heavily on expertise from the fields of medical physics, radiology, statistics, and informatics as well as collaboration from vendors of imaging acquisition, analysis, and reporting systems. When successfully implemented, QIBs will provide image-derived metrics with known bias and variance that can be validated with anatomically and physiologically relevant measures, including treatment response (and the heterogeneity of that response) and outcome. Such non-invasive quantitative measures can then be used effectively in clinical and translational research and will contribute significantly to the goals of precision medicine. This presentation will focus on 1) outlining the opportunities for QIB applications, with examples to demonstrate applications in both research and patient care, 2) discussing key challenges in the implementation of QIB applications, and 3) providing overviews of efforts to address such challenges from federal, scientific, and professional organizations, including, but not limited to, the RSNA, NCI, FDA, and NIST. 1Sullivan, Obuchowski, Kessler, et al. Radiology, epub August 2015.
A survey of GPU-based medical image computing techniques
Shi, Lin; Liu, Wen; Zhang, Heye; Xie, Yongming
2012-01-01
Medical imaging currently plays a crucial role throughout clinical applications, from medical scientific research to diagnostics and treatment planning. However, medical imaging procedures are often computationally demanding due to the large three-dimensional (3D) medical datasets that must be processed in practical clinical applications. With the rapidly improving performance of graphics processors, better programming support, and an excellent price-to-performance ratio, the graphics processing unit (GPU) has emerged as a competitive parallel computing platform for computationally expensive and demanding tasks in a wide range of medical imaging applications. The major purpose of this survey is to provide a comprehensive reference source for newcomers and researchers involved in GPU-based medical image processing. Within this survey, the continuous advancement of GPU computing is reviewed and the existing traditional applications in three areas of medical image processing, namely segmentation, registration and visualization, are surveyed. The potential advantages and associated challenges of current GPU-based medical imaging are also discussed to inspire future applications in medicine. PMID:23256080
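As a small illustration of the GPU-as-array-processor model that such surveys discuss, the sketch below offloads a simple intensity-range segmentation of a 3D volume to the GPU with CuPy. CuPy is used here only as one convenient example of this model; the thresholds and array sizes are arbitrary.

```python
import numpy as np
import cupy as cp  # assumes a CUDA-capable GPU and the CuPy package are available

def gpu_threshold_segment(volume, lower, upper):
    """Binary segmentation of a 3D volume by intensity range, computed on the GPU."""
    v = cp.asarray(volume)                # host -> device transfer
    mask = (v >= lower) & (v <= upper)    # elementwise comparisons run as GPU kernels
    return cp.asnumpy(mask)               # device -> host transfer

# Synthetic CT-like volume; the threshold pair is a hypothetical intensity range
volume = np.random.randint(0, 4096, (256, 256, 256), dtype=np.uint16)
mask = gpu_threshold_segment(volume, lower=1200, upper=4095)
```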
3D Texture Features Mining for MRI Brain Tumor Identification
NASA Astrophysics Data System (ADS)
Rahim, Mohd Shafry Mohd; Saba, Tanzila; Nayer, Fatima; Syed, Afraz Zahra
2014-03-01
Medical image segmentation is a process to extract regions of interest and to divide an image into its individual, meaningful, homogeneous components. These components have a strong relationship with the objects of interest in an image. For computer-aided diagnosis and therapy, medical image segmentation is a mandatory initial step. It is a challenging task because of the complex nature of medical images, and successful medical image analysis depends heavily on segmentation accuracy. Texture is one of the major features used to identify regions of interest in an image or to classify an object, but 2D texture features yield poor classification results. Hence, this paper presents 3D feature extraction using texture analysis, with an SVM used as the segmentation technique in the testing methodology.
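The SVM classification step can be sketched with scikit-learn once texture feature vectors have been extracted; the features and labels below are random stand-ins, not the 3D texture features computed in the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical 3D texture feature vectors (e.g. per-voxel-neighbourhood statistics)
# and binary labels (1 = region of interest, 0 = background), stand-ins for real data.
features = np.random.rand(2000, 12)
labels = np.random.randint(0, 2, 2000)

X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```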
Advances in medical image computing.
Tolxdorff, T; Deserno, T M; Handels, H; Meinzer, H-P
2009-01-01
Medical image computing has become a key technology in high-tech applications in medicine and a ubiquitous part of modern imaging systems and the related processes of clinical diagnosis and intervention. Over the past years significant progress has been made in the field, both at the methodological and at the application level. Despite this progress, there are still major challenges to meet in order to establish image processing routinely in health care. In this issue, selected contributions of the German Conference on Medical Image Processing (BVM) are assembled to present the latest advances in the field of medical image computing. The winners of scientific awards of the German Conference on Medical Image Processing (BVM) 2008 were invited to submit a manuscript on their latest developments and results for possible publication in Methods of Information in Medicine. Finally, seven excellent papers were selected to describe important aspects of recent advances in the field of medical image processing. The selected papers give an impression of the breadth and heterogeneity of new developments. New methods for improved image segmentation, non-linear image registration and modeling of organs are presented together with applications of image analysis methods in different medical disciplines. Furthermore, state-of-the-art tools and techniques to support the development and evaluation of medical image processing systems in practice are described. The selected articles describe different aspects of the intense development in medical image computing. The image processing methods presented enable new insights into the patient's image data and have the future potential to improve medical diagnostics and patient treatment.
Object segmentation controls image reconstruction from natural scenes
2017-01-01
The structure of the physical world projects images onto our eyes. However, those images are often poorly representative of environmental structure: well-defined boundaries within the eye may correspond to irrelevant features of the physical world, while critical features of the physical world may be nearly invisible at the retinal projection. The challenge for the visual cortex is to sort these two types of features according to their utility in ultimately reconstructing percepts and interpreting the constituents of the scene. We describe a novel paradigm that enabled us to selectively evaluate the relative role played by these two feature classes in signal reconstruction from corrupted images. Our measurements demonstrate that this process is quickly dominated by the inferred structure of the environment, and only minimally controlled by variations of raw image content. The inferential mechanism is spatially global and its impact on early visual cortex is fast. Furthermore, it retunes local visual processing for more efficient feature extraction without altering the intrinsic transduction noise. The basic properties of this process can be partially captured by a combination of small-scale circuit models and large-scale network architectures. Taken together, our results challenge compartmentalized notions of bottom-up/top-down perception and suggest instead that these two modes are best viewed as an integrated perceptual mechanism. PMID:28827801
Wide Field Spectroscopy of Diffusing and Interacting DNA Using Tunable Nanoscale Geometries
NASA Astrophysics Data System (ADS)
Scott, Shane; Leith, Jason; Brandao, Hugo; Sehayek, Simon; Hofkirchner, Alexander; Laurin, Jill; Berard, Daniel; Verge, Alexander; Wiseman, Paul; Leslie, Sabrina
2015-03-01
It remains an outstanding challenge to directly image interacting and diffusing biomolecules under physiological conditions. Many biochemical questions can be posed in the form: Does A interact with B? What are the energetics, kinetics, stoichiometry, and cooperativity of this interaction? To tackle this challenge, we use tunable nanoscale confinement to perform wide-field imaging of interacting DNA molecules in free solution, under an extended range of reagent concentrations and interaction rates. We present the integration of ``Convex Lens-induced Confinement (CLiC)'' microscopy with image correlation analysis, simultaneously suppressing background fluorescence and extending imaging times. The measured DNA-DNA interactions would be inaccessible to standard techniques but are important for developing a mechanistic understanding of life-preserving processes such as DNA transcription. NSERC.
SET: a pupil detection method using sinusoidal approximation
Javadi, Amir-Homayoun; Hakimi, Zahra; Barati, Morteza; Walsh, Vincent; Tcheang, Lili
2015-01-01
Mobile eye-tracking in external environments remains challenging, despite recent advances in eye-tracking software and hardware engineering. Many current methods fail to deal with the vast range of outdoor lighting conditions and the speed at which these can change. This confines experiments to artificial environments where conditions must be tightly controlled. Additionally, the emergence of low-cost eye-tracking devices calls for the development of analysis tools that enable non-technical researchers to process their image output. We have developed a fast and accurate method (known as “SET”) that is suitable even for natural environments with uncontrolled, dynamic and even extreme lighting conditions. We compared the performance of SET with that of two open-source alternatives by processing two collections of eye images: images of natural outdoor scenes with extreme lighting variations (“Natural”); and images of less challenging indoor scenes (“CASIA-Iris-Thousand”). We show that SET excelled in outdoor conditions and was faster, without significant loss of accuracy, indoors. SET offers a low-cost eye-tracking solution, delivering high performance even in challenging outdoor environments. It is offered through an open-source MATLAB toolkit as well as a dynamic-link library (“DLL”), which can be imported into many programming languages including C# and Visual Basic in Windows OS (www.eyegoeyetracker.co.uk). PMID:25914641
NASA Astrophysics Data System (ADS)
Dlesk, A.; Raeva, P.; Vach, K.
2018-05-01
Processing analog photogrammetric negatives with current methods brings new challenges and possibilities, for example the creation of a 3D model from archival images, which enables a comparison between the historical and current state of cultural heritage objects. The main purpose of this paper is to present possibilities for processing archival analog images captured by the photogrammetric camera Rollei 6006 metric. In 1994, the Czech company EuroGV s.r.o. carried out photogrammetric measurements of the former limestone quarry Great America, located in the Central Bohemian Region of the Czech Republic. All the negatives of the photogrammetric images, complete documentation, coordinates of geodetically measured ground control points, calibration reports, and the external orientation of images calculated in the Combined Adjustment Program are preserved and were available for the current processing. The negatives were scanned and processed using the structure from motion (SfM) method. The result of the research is a statement of the accuracy that can be expected when Rollei metric images, originally obtained for terrestrial intersection photogrammetry, are processed with the proposed methodology.
A state-of-the-art review on segmentation algorithms in intravascular ultrasound (IVUS) images.
Katouzian, Amin; Angelini, Elsa D; Carlier, Stéphane G; Suri, Jasjit S; Navab, Nassir; Laine, Andrew F
2012-09-01
Over the past two decades, intravascular ultrasound (IVUS) image segmentation has remained a challenge for researchers while the use of this imaging modality is rapidly growing in catheterization procedures and in research studies. IVUS provides cross-sectional grayscale images of the arterial wall and the extent of atherosclerotic plaques with high spatial resolution in real time. In this paper, we review recently developed image processing methods for the detection of media-adventitia and luminal borders in IVUS images acquired with different transducers operating at frequencies ranging from 20 to 45 MHz. We discuss methodological challenges, lack of diversity in reported datasets, and weaknesses of quantification metrics that make IVUS segmentation still an open problem despite all efforts. In conclusion, we call for a common reference database, validation metrics, and ground-truth definition with which new and existing algorithms could be benchmarked.
Imaging Heterogeneity in Lung Cancer: Techniques, Applications, and Challenges.
Bashir, Usman; Siddique, Muhammad Musib; Mclean, Emma; Goh, Vicky; Cook, Gary J
2016-09-01
Texture analysis involves the mathematical processing of medical images to derive sets of numeric quantities that measure heterogeneity. Studies of lung cancer have shown that texture analysis may have a role in characterizing tumors and predicting patient outcome. This article outlines the mathematical basis of, and the most recent literature on, texture analysis in lung cancer imaging. We also describe the challenges facing the clinical implementation of texture analysis. Texture analysis of lung cancer images has been applied successfully to FDG PET and CT scans. Different texture parameters have been shown to be predictive of the nature of disease and of patient outcome. In general, it appears that tumors that are more heterogeneous on imaging tend to be more aggressive and associated with poorer outcomes, and that tumor heterogeneity on imaging decreases with treatment. Despite these promising results, there is large variation in the reported data and strengths of association.
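Many of the heterogeneity measures used in such studies derive from a gray-level co-occurrence matrix (GLCM); the sketch below builds a GLCM for a single pixel offset and computes contrast and homogeneity from it. The quantization level, offset, and region of interest are illustrative and do not correspond to any specific study cited in the review.

```python
import numpy as np

def glcm(img, levels=16, offset=(0, 1)):
    """Normalized gray-level co-occurrence matrix for a single pixel offset."""
    q = (img.astype(np.float64) / img.max() * (levels - 1)).astype(int)  # quantize intensities
    dy, dx = offset
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def texture_features(p):
    """Contrast (high for heterogeneous texture) and homogeneity from a GLCM."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return contrast, homogeneity

roi = np.random.randint(0, 255, (64, 64))  # stand-in for a tumour region of interest
print(texture_features(glcm(roi)))
```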
Population-based imaging biobanks as source of big data.
Gatidis, Sergios; Heber, Sophia D; Storz, Corinna; Bamberg, Fabian
2017-06-01
Advances of computational sciences over the last decades have enabled the introduction of novel methodological approaches in biomedical research. Acquiring extensive and comprehensive data about a research subject and subsequently extracting significant information has opened new possibilities in gaining insight into biological and medical processes. This so-called big data approach has recently found entrance into medical imaging and numerous epidemiological studies have been implementing advanced imaging to identify imaging biomarkers that provide information about physiological processes, including normal development and aging but also on the development of pathological disease states. The purpose of this article is to present existing epidemiological imaging studies and to discuss opportunities, methodological and organizational aspects, and challenges that population imaging poses to the field of big data research.
Enhancing hyperspectral spatial resolution using multispectral image fusion: A wavelet approach
NASA Astrophysics Data System (ADS)
Jazaeri, Amin
High spectral and spatial resolution images have a significant impact in remote sensing applications. Because both the spatial and spectral resolutions of spaceborne sensors are fixed by design and cannot be increased after launch, techniques such as image fusion must be applied to achieve such goals. This dissertation introduces the concept of wavelet fusion between hyperspectral and multispectral sensors in order to enhance the spectral and spatial resolution of a hyperspectral image. To test the robustness of this concept, images from Hyperion (a hyperspectral sensor) and the Advanced Land Imager (a multispectral sensor) were first co-registered and then fused using different wavelet algorithms. A regression-based fusion algorithm was also implemented for comparison purposes. The results show that images fused using a combined bi-linear wavelet-regression algorithm have less error than other methods when compared to the ground truth. In addition, a combined regression-wavelet algorithm shows more immunity to misalignment of pixels due to the lack of proper registration. Quantitative measures of average mean square error show that the performance of wavelet-based methods degrades when the spatial resolution of hyperspectral images becomes eight times less than that of the corresponding multispectral image. Regardless of the fusion method used, the main challenge in image fusion is image registration, which is also a very time-intensive process. Because the combined regression-wavelet technique is computationally expensive, a hybrid technique based on regression and wavelet methods was also implemented to decrease the computational overhead. However, the gain in computation speed was offset by the introduction of more error in the outcome. The secondary objective of this dissertation is to examine the feasibility and sensor requirements for image fusion for future NASA missions in order to perform onboard image fusion. In this process, the main challenge of image registration was resolved by registering the input images using transformation matrices of previously acquired data. The composite image resulting from the fusion process matched the ground truth remarkably well, indicating the possibility of real-time onboard fusion processing.
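The wavelet-fusion idea can be sketched for a single band: decompose both the resampled hyperspectral band and the co-registered multispectral band, keep the low-frequency approximation of the former, and inject the detail coefficients of the latter. The one-level PyWavelets sketch below is a simplified illustration and does not reproduce the dissertation's combined regression-wavelet algorithm.

```python
import numpy as np
import pywt

def wavelet_fuse(hyper_band, multi_band, wavelet="db2"):
    """Fuse one resampled hyperspectral band with a co-registered multispectral band."""
    cA_h, _details_h = pywt.dwt2(hyper_band, wavelet)   # keep spectral (low-frequency) content
    _cA_m, details_m = pywt.dwt2(multi_band, wavelet)   # take spatial detail from the sharper image
    fused = pywt.idwt2((cA_h, details_m), wavelet)
    return fused[:hyper_band.shape[0], :hyper_band.shape[1]]

# Synthetic stand-ins: both bands already co-registered and resampled to the same grid
hyper = np.random.rand(128, 128)
multi = np.random.rand(128, 128)
sharpened = wavelet_fuse(hyper, multi)
```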
Su, Hang; Yin, Zhaozheng; Huh, Seungil; Kanade, Takeo
2013-10-01
Phase-contrast microscopy is one of the most common and convenient imaging modalities for observing long-term multi-cellular processes; it generates images through the interference of light passing through transparent specimens and the background medium with different retarded phases. Despite many years of study, computer-aided analysis of cell behavior in phase contrast microscopy is challenged by image quality issues and artifacts caused by phase contrast optics. Addressing these unsolved challenges, the authors propose (1) a phase contrast microscopy image restoration method that produces phase retardation features, which are intrinsic features of phase contrast microscopy, and (2) a semi-supervised learning based algorithm for cell segmentation, a fundamental task for various cell behavior analyses. Specifically, the image formation process of phase contrast microscopy images is first computationally modeled with a dictionary of diffraction patterns; as a result, each pixel of a phase contrast microscopy image is represented by a linear combination of the bases, which we call phase retardation features. Images are then partitioned into phase-homogeneous atoms by clustering neighboring pixels with similar phase retardation features. Consequently, cell segmentation is performed via a semi-supervised classification technique over the phase-homogeneous atoms. Experiments demonstrate that the proposed approach produces quality segmentation of individual cells and outperforms previous approaches. Copyright © 2013 Elsevier B.V. All rights reserved.
Automated processing of zebrafish imaging data: a survey.
Mikut, Ralf; Dickmeis, Thomas; Driever, Wolfgang; Geurts, Pierre; Hamprecht, Fred A; Kausler, Bernhard X; Ledesma-Carbayo, María J; Marée, Raphaël; Mikula, Karol; Pantazis, Periklis; Ronneberger, Olaf; Santos, Andres; Stotzka, Rainer; Strähle, Uwe; Peyriéras, Nadine
2013-09-01
Due to the relative transparency of its embryos and larvae, the zebrafish is an ideal model organism for bioimaging approaches in vertebrates. Novel microscope technologies allow the imaging of developmental processes in unprecedented detail, and they enable the use of complex image-based read-outs for high-throughput/high-content screening. Such applications can easily generate Terabytes of image data, the handling and analysis of which becomes a major bottleneck in extracting the targeted information. Here, we describe the current state of the art in computational image analysis in the zebrafish system. We discuss the challenges encountered when handling high-content image data, especially with regard to data quality, annotation, and storage. We survey methods for preprocessing image data for further analysis, and describe selected examples of automated image analysis, including the tracking of cells during embryogenesis, heartbeat detection, identification of dead embryos, recognition of tissues and anatomical landmarks, and quantification of behavioral patterns of adult fish. We review recent examples for applications using such methods, such as the comprehensive analysis of cell lineages during early development, the generation of a three-dimensional brain atlas of zebrafish larvae, and high-throughput drug screens based on movement patterns. Finally, we identify future challenges for the zebrafish image analysis community, notably those concerning the compatibility of algorithms and data formats for the assembly of modular analysis pipelines.
Automated Processing of Zebrafish Imaging Data: A Survey
Dickmeis, Thomas; Driever, Wolfgang; Geurts, Pierre; Hamprecht, Fred A.; Kausler, Bernhard X.; Ledesma-Carbayo, María J.; Marée, Raphaël; Mikula, Karol; Pantazis, Periklis; Ronneberger, Olaf; Santos, Andres; Stotzka, Rainer; Strähle, Uwe; Peyriéras, Nadine
2013-01-01
Due to the relative transparency of its embryos and larvae, the zebrafish is an ideal model organism for bioimaging approaches in vertebrates. Novel microscope technologies allow the imaging of developmental processes in unprecedented detail, and they enable the use of complex image-based read-outs for high-throughput/high-content screening. Such applications can easily generate Terabytes of image data, the handling and analysis of which becomes a major bottleneck in extracting the targeted information. Here, we describe the current state of the art in computational image analysis in the zebrafish system. We discuss the challenges encountered when handling high-content image data, especially with regard to data quality, annotation, and storage. We survey methods for preprocessing image data for further analysis, and describe selected examples of automated image analysis, including the tracking of cells during embryogenesis, heartbeat detection, identification of dead embryos, recognition of tissues and anatomical landmarks, and quantification of behavioral patterns of adult fish. We review recent examples for applications using such methods, such as the comprehensive analysis of cell lineages during early development, the generation of a three-dimensional brain atlas of zebrafish larvae, and high-throughput drug screens based on movement patterns. Finally, we identify future challenges for the zebrafish image analysis community, notably those concerning the compatibility of algorithms and data formats for the assembly of modular analysis pipelines. PMID:23758125
A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines.
Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus
2016-01-01
The parametrization of automatic image processing routines is time-consuming if a lot of image processing parameters are involved. An expert can tune parameters sequentially to get desired results. This may not be productive for applications with difficult image analysis tasks, e.g. when high noise and shading levels in an image are present or images vary in their characteristics due to different acquisition conditions. Parameters are required to be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is proposed to be evaluated by a benchmark data set that contains challenging image distortions in an increasing fashion. This promptly enables us to compare different standard image segmentation algorithms in a feedback vs. feedforward implementation by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present. Such a framework evaluates robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts.
A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines
Mikut, Ralf; Reischl, Markus
2016-01-01
The parametrization of automatic image processing routines is time-consuming if a lot of image processing parameters are involved. An expert can tune parameters sequentially to get desired results. This may not be productive for applications with difficult image analysis tasks, e.g. when high noise and shading levels in an image are present or images vary in their characteristics due to different acquisition conditions. Parameters are required to be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is proposed to be evaluated by a benchmark data set that contains challenging image distortions in an increasing fashion. This promptly enables us to compare different standard image segmentation algorithms in a feedback vs. feedforward implementation by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present. Such a framework evaluates robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts. PMID:27764213
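The feedback idea can be illustrated with a single segmentation parameter: scan a binarization threshold and keep the value whose result best satisfies a quality criterion derived from abstract ground truth (here, an expected segmented-area fraction). This is a toy sketch of the general principle, not the authors' framework.

```python
import numpy as np

def segment(img, threshold):
    """Simple binarization as a stand-in for a parametrized segmentation routine."""
    return img > threshold

def quality(mask, expected_fraction=0.25):
    """Feedback signal: distance of the segmented-area fraction from the expected value."""
    return abs(mask.mean() - expected_fraction)

def adapt_threshold(img, lo=0.0, hi=1.0, candidates=20):
    """Feedback loop: evaluate candidate thresholds and keep the best-scoring one."""
    best_t, best_q = None, np.inf
    for t in np.linspace(lo, hi, candidates):
        q = quality(segment(img, t))
        if q < best_q:
            best_t, best_q = t, q
    return best_t

noisy = np.clip(np.random.rand(256, 256) + 0.1 * np.random.randn(256, 256), 0, 1)
t_star = adapt_threshold(noisy)
mask = segment(noisy, t_star)
```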
Spinning Disk Confocal Imaging of Neutrophil Migration in Zebrafish
Lam, Pui-ying; Fischer, Robert S; Shin, William D.; Waterman, Clare M; Huttenlocher, Anna
2014-01-01
Live-cell imaging techniques have been substantially improved due to advances in confocal microscopy instrumentation coupled with ultrasensitive detectors. The spinning disk confocal system is capable of generating images of fluorescent live samples with broad dynamic range and high temporal and spatial resolution. The ability to acquire fluorescent images of living cells in vivo on a millisecond timescale allows the dissection of biological processes that have not previously been visualized in a physiologically relevant context. In vivo imaging of rapidly moving cells such as neutrophils can be technically challenging. In this chapter, we describe the practical aspects of imaging neutrophils in zebrafish embryos using spinning disk confocal microscopy. Similar setups can also be applied to image other motile cell types and signaling processes in translucent animals or tissues. PMID:24504955
Advanced Imaging Optics Utilizing Wavefront Coding.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scrymgeour, David; Boye, Robert; Adelsberger, Kathleen
2015-06-01
Image processing offers a potential to simplify an optical system by shifting some of the imaging burden from lenses to the more cost effective electronics. Wavefront coding using a cubic phase plate combined with image processing can extend the system's depth of focus, reducing many of the focus-related aberrations as well as material related chromatic aberrations. However, the optimal design process and physical limitations of wavefront coding systems with respect to first-order optical parameters and noise are not well documented. We examined image quality of simulated and experimental wavefront coded images before and after reconstruction in the presence of noise. Challenges in the implementation of cubic phase in an optical system are discussed. In particular, we found that limitations must be placed on system noise, aperture, field of view and bandwidth to develop a robust wavefront coded system.
Higher resolution satellite remote sensing and the impact on image mapping
Watkins, Allen H.; Thormodsgard, June M.
1987-01-01
Recent advances in spatial, spectral, and temporal resolution of civil land remote sensing satellite data are presenting new opportunities for image mapping applications. The U.S. Geological Survey's experimental satellite image mapping program is evolving toward larger scale image map products with increased information content as a result of improved image processing techniques and increased resolution. Thematic mapper data are being used to produce experimental image maps at 1:100,000 scale that meet established U.S. and European map accuracy standards. The availability of high quality, cloud-free, 30-meter ground resolution multispectral data from the Landsat thematic mapper sensor, along with 10-meter ground resolution panchromatic and 20-meter ground resolution multispectral data from the recently launched French SPOT satellite, presents new cartographic and image processing challenges. The need to fully exploit these higher resolution data increases the complexity of processing the images into large-scale image maps. The removal of radiometric artifacts and noise prior to geometric correction can be accomplished by using a variety of image processing filters and transforms. Sensor modeling and image restoration techniques allow maximum retention of spatial and radiometric information. An optimum combination of spectral information and spatial resolution can be obtained by merging different sensor types. These processing techniques are discussed and examples are presented.
Teleretinal Imaging to Screen for Diabetic Retinopathy in the Veterans Health Administration
Cavallerano, Anthony A.; Conlin, Paul R.
2008-01-01
Diabetes is the leading cause of adult vision loss in the United States and other industrialized countries. While the goal of preserving vision in patients with diabetes appears to be attainable, the process of achieving this goal poses a formidable challenge to health care systems. The large increase in the prevalence of diabetes presents practical and logistical challenges to providing quality care to all patients with diabetes. Given this challenge, the Veterans Health Administration (VHA) is increasingly using information technology as a means of improving the efficiency of its clinicians. The VHA has taken advantage of a mature computerized patient medical record system by integrating a program of digital retinal imaging with remote image interpretation (teleretinal imaging) to assist in providing eye care to the nearly 20% of VHA patients with diabetes. We describe this clinical pathway for accessing patients with diabetes in ambulatory care settings, evaluating their retinas for level of diabetic retinopathy with a teleretinal imaging system, and prioritizing their access into an eye and health care program in a timely and appropriate manner. PMID:19885175
Science, Technical Innovation and Applications in Bioacoustics: Summary of a Workshop
2004-07-01
[Abstract fragmentary in source; recoverable content concerns signal-processing lessons from animal binaural hearing, transducer and array design, and high-frequency ultrasound imaging of the mouse heart (about 7 mm diameter, 8 beats per second).]
Hajihosseini, Payman; Anzehaee, Mohammad Mousavi; Behnam, Behzad
2018-05-22
Early fault detection and isolation in industrial systems is a critical factor in preventing equipment damage. In the proposed method, instead of using the time signals of the sensors directly, the 2D image obtained by placing these signals next to each other in a matrix is used, and a novel fault detection and isolation procedure is then carried out based on image processing techniques. Different features, including texture, wavelet transform, and the mean and standard deviation of the image, combined with MLP and RBF neural network based classifiers, are used for this purpose. The obtained results indicate the notable efficacy and success of the proposed method in detecting and isolating faults of the Tennessee Eastman benchmark process and its superiority over previous techniques. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
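The core idea of stacking time-aligned sensor signals into a 2D matrix and classifying image-level features can be sketched as below; the feature set, network size, and synthetic fault are illustrative assumptions and do not reproduce the paper's exact features or the Tennessee Eastman data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def signals_to_image(signals):
    """Stack time-aligned sensor signals row-wise into a 2D 'image' and normalize it."""
    img = np.vstack(signals)
    return (img - img.min()) / (img.ptp() + 1e-9)

def image_features(img):
    """Simple image-level features: global mean/std plus per-row (per-sensor) means."""
    return np.concatenate([[img.mean(), img.std()], img.mean(axis=1)])

def make_window(faulty):
    """Synthetic operating window: 20 sensors x 200 time steps, with an optional bias fault."""
    base = np.random.randn(20, 200)
    if faulty:
        base[5] += 2.0  # hypothetical bias fault injected on sensor 5
    return base

X = np.array([image_features(signals_to_image(make_window(i % 2))) for i in range(400)])
y = np.array([i % 2 for i in range(400)])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y)
```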
New "Gifted" Media Provide Springboards for Discussion
ERIC Educational Resources Information Center
Hyatt, Charles
2018-01-01
Parents of gifted children are often faced with challenges as to how to process images, labels, and stereotypes of youth with special abilities. Just as books can provide healing for the troubled soul by reflecting on the stories of people experiencing similar challenges, cinema and video can help examine one's strengths and weaknesses while…
ERIC Educational Resources Information Center
Langrehr, Don
2003-01-01
Outlines a study in which television advertising supplied the text that college students were challenged to interpret. Explains that the language and images of this advertising posed a complex, cognitive challenge--even to these students at advanced levels of education. Concludes that information processing of television advertising presents a…
(abstract) A High Throughput 3-D Inner Product Processor
NASA Technical Reports Server (NTRS)
Daud, Tuan
1996-01-01
A particularly challenging image processing application is real-time scene acquisition and object discrimination. It requires spatio-temporal recognition of point and resolved objects at high speeds with parallel processing algorithms. Neural network paradigms provide fine-grain parallelism and, when implemented in hardware, offer orders of magnitude speed-up. However, neural networks implemented on a VLSI chip are planar architectures capable of efficient processing of linear vector signals rather than 2-D images. Therefore, for processing of images, a 3-D stack of neural-net ICs receiving planar inputs and consuming minimal power is required. Details of the circuits and chip architectures will be described, along with the need to develop ultralow-power electronics. Further, the use of the architecture in a system for high-speed processing will be illustrated.
The Open Microscopy Environment: open image informatics for the biological sciences
NASA Astrophysics Data System (ADS)
Blackburn, Colin; Allan, Chris; Besson, Sébastien; Burel, Jean-Marie; Carroll, Mark; Ferguson, Richard K.; Flynn, Helen; Gault, David; Gillen, Kenneth; Leigh, Roger; Leo, Simone; Li, Simon; Lindner, Dominik; Linkert, Melissa; Moore, Josh; Moore, William J.; Ramalingam, Balaji; Rozbicki, Emil; Rustici, Gabriella; Tarkowska, Aleksandra; Walczysko, Petr; Williams, Eleanor; Swedlow, Jason R.
2016-07-01
Despite significant advances in biological imaging and analysis, major informatics challenges remain unsolved: file formats are proprietary, storage and analysis facilities are lacking, as are standards for sharing image data and results. While the open FITS file format is ubiquitous in astronomy, astronomical imaging shares many challenges with biological imaging, including the need to share large image sets using secure, cross-platform APIs, and the need for scalable applications for processing and visualization. The Open Microscopy Environment (OME) is an open-source software framework developed to address these challenges. OME tools include: an open data model for multidimensional imaging (OME Data Model); an open file format (OME-TIFF) and library (Bio-Formats) enabling free access to images (5D+) written in more than 145 formats from many imaging domains, including FITS; and a data management server (OMERO). The Java-based OMERO client-server platform comprises an image metadata store, an image repository, visualization and analysis by remote access, allowing sharing and publishing of image data. OMERO provides a means to manage the data through a multi-platform API. OMERO's model-based architecture has enabled its extension into a range of imaging domains, including light and electron microscopy, high content screening, digital pathology and recently into applications using non-image data from clinical and genomic studies. This is made possible using the Bio-Formats library. The current release includes a single mechanism for accessing image data of all types, regardless of original file format, via Java, C/C++ and Python and a variety of applications and environments (e.g. ImageJ, Matlab and R).
USDA-ARS?s Scientific Manuscript database
With the rapid development of small imaging sensors and unmanned aerial vehicles (UAVs), remote sensing is undergoing a revolution with greatly increased spatial and temporal resolutions. While more relevant detail becomes available, it is a challenge to analyze the large number of images to extract...
USDA-ARS?s Scientific Manuscript database
A challenge in ecological studies is defining scales of observation that correspond to relevant ecological scales for organisms or processes. Image segmentation has been proposed as an alternative to pixel-based methods for scaling remotely-sensed data into ecologically-meaningful units. However, to...
Statistical Techniques for Efficient Indexing and Retrieval of Document Images
ERIC Educational Resources Information Center
Bhardwaj, Anurag
2010-01-01
We have developed statistical techniques to improve the performance of document image search systems where the intermediate step of OCR based transcription is not used. Previous research in this area has largely focused on challenges pertaining to generation of small lexicons for processing handwritten documents and enhancement of poor quality…
Automated Analysis of Composition and Style of Photographs and Paintings
ERIC Educational Resources Information Center
Yao, Lei
2013-01-01
Computational aesthetics is a newly emerging cross-disciplinary field with its core situated in traditional research areas such as image processing and computer vision. Using a computer to interpret aesthetic terms for images is very challenging. In this dissertation, I focus on solving specific problems about analyzing the composition and style…
Quantitative real-time analysis of collective cancer invasion and dissemination
NASA Astrophysics Data System (ADS)
Ewald, Andrew J.
2015-05-01
A grand challenge in biology is to understand the cellular and molecular basis of tissue and organ level function in mammals. The ultimate goals of such efforts are to explain how organs arise in development from the coordinated actions of their constituent cells and to determine how molecularly regulated changes in cell behavior alter the structure and function of organs during disease processes. Two major barriers stand in the way of achieving these goals: the relative inaccessibility of cellular processes in mammals and the daunting complexity of the signaling environment inside an intact organ in vivo. To overcome these barriers, we have developed a suite of tissue isolation, three dimensional (3D) culture, genetic manipulation, nanobiomaterials, imaging, and molecular analysis techniques to enable the real-time study of cell biology within intact tissues in physiologically relevant 3D environments. This manuscript introduces the rationale for 3D culture, reviews challenges to optical imaging in these cultures, and identifies current limitations in the analysis of complex experimental designs that could be overcome with improved imaging, imaging analysis, and automated classification of the results of experimental interventions.
Hielscher, Andreas H; Bartel, Sebastian
2004-02-01
Optical tomography (OT) is a fast developing novel imaging modality that uses near-infrared (NIR) light to obtain cross-sectional views of optical properties inside the human body. A major challenge remains the time-consuming, computational-intensive image reconstruction problem that converts NIR transmission measurements into cross-sectional images. To increase the speed of iterative image reconstruction schemes that are commonly applied for OT, we have developed and implemented several parallel algorithms on a cluster of workstations. Static process distribution as well as dynamic load balancing schemes suitable for heterogeneous clusters and varying machine performances are introduced and tested. The resulting algorithms are shown to accelerate the reconstruction process to various degrees, substantially reducing the computation times for clinically relevant problems.
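The static workload distribution described above can be sketched with Python's multiprocessing: split the measurements (or image rows) into equal chunks and assign one chunk per worker; a dynamic load-balancing variant would instead hand out smaller chunks on demand. The per-chunk function is a placeholder for the expensive part of one reconstruction iteration, not the authors' optical tomography code.

```python
import numpy as np
from multiprocessing import Pool

def partial_update(chunk):
    """Placeholder for the expensive per-measurement work of one reconstruction iteration."""
    return np.sum(np.sqrt(np.abs(chunk)), axis=0)  # stand-in computation

def parallel_iteration(measurements, n_workers=4):
    chunks = np.array_split(measurements, n_workers)   # static distribution across workers
    with Pool(n_workers) as pool:
        partials = pool.map(partial_update, chunks)
    return np.sum(partials, axis=0)                    # combine partial results

if __name__ == "__main__":
    measurements = np.random.rand(10_000, 256)         # synthetic measurement matrix
    update = parallel_iteration(measurements)
```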
Navab, Nassir; Hennersperger, Christoph; Frisch, Benjamin; Fürst, Bernhard
2016-10-01
In the last decade, many researchers in medical image computing and computer assisted interventions across the world have focused on the development of the Virtual Physiological Human (VPH), aiming at changing the practice of medicine from the classification and treatment of diseases to the modeling and treatment of patients. These projects resulted in major advancements in segmentation, registration, and morphological, physiological and biomechanical modeling based on state-of-the-art medical imaging as well as other sensory data. However, a major issue which has not yet come into focus is personalizing intra-operative imaging, allowing for optimal treatment. In this paper, we discuss the personalization of the imaging and visualization process with particular focus on satisfying the challenging requirements of computer assisted interventions. We discuss such requirements and review a series of scientific contributions made by our research team to tackle some of these major challenges. Copyright © 2016. Published by Elsevier B.V.
Light microscopy applications in systems biology: opportunities and challenges
2013-01-01
Biological systems present multiple scales of complexity, ranging from molecules to entire populations. Light microscopy is one of the least invasive techniques used to access information from various biological scales in living cells. The combination of molecular biology and imaging provides a bottom-up tool for direct insight into how molecular processes work on a cellular scale. However, imaging can also be used as a top-down approach to study the behavior of a system without detailed prior knowledge about its underlying molecular mechanisms. In this review, we highlight the recent developments on microscopy-based systems analyses and discuss the complementary opportunities and different challenges with high-content screening and high-throughput imaging. Furthermore, we provide a comprehensive overview of the available platforms that can be used for image analysis, which enable community-driven efforts in the development of image-based systems biology. PMID:23578051
What difference reveals about similarity.
Sagi, Eyal; Gentner, Dedre; Lovett, Andrew
2012-08-01
Detecting that two images are different is faster for highly dissimilar images than for highly similar images. Paradoxically, we showed that the reverse occurs when people are asked to describe how two images differ--that is, to state a difference between two images. Following structure-mapping theory, we propose that this dissociation arises from the multistage nature of the comparison process. Detecting that two images are different can be done in the initial (local-matching) stage, but only for pairs with low overlap; thus, "different" responses are faster for low-similarity than for high-similarity pairs. In contrast, identifying a specific difference generally requires a full structural alignment of the two images, and this alignment process is faster for high-similarity pairs. We described four experiments that demonstrate this dissociation and show that the results can be simulated using the Structure-Mapping Engine. These results pose a significant challenge for nonstructural accounts of similarity comparison and suggest that structural alignment processes play a significant role in visual comparison. Copyright © 2012 Cognitive Science Society, Inc.
Photogrammetric Processing of Planetary Linear Pushbroom Images Based on Approximate Orthophotos
NASA Astrophysics Data System (ADS)
Geng, X.; Xu, Q.; Xing, S.; Hou, Y. F.; Lan, C. Z.; Zhang, J. J.
2018-04-01
Efficiently producing planetary mapping products from orbital remote sensing images remains a great challenge. Photogrammetric processing of planetary stereo images suffers from several disadvantages, such as the lack of ground control information and of informative features. Among these, image matching is the most difficult task in planetary photogrammetry. This paper designs a photogrammetric processing framework for planetary remote sensing images based on approximate orthophotos. Both tie point extraction for bundle adjustment and dense image matching for generating a digital terrain model (DTM) are performed on approximate orthophotos. Since most planetary remote sensing images are acquired by linear scanner cameras, we mainly deal with linear pushbroom images. In order to improve the computational efficiency of orthophoto generation and coordinate transformation, a fast back-projection algorithm for linear pushbroom images is introduced. Moreover, an iteratively refined DTM-and-orthophoto scheme was adopted in the DTM generation process, which helps to reduce the search space of image matching and improve the matching accuracy of conjugate points. With the advantages of approximate orthophotos, the matching results for planetary remote sensing images can be greatly improved. We tested the proposed approach with Mars Express (MEX) High Resolution Stereo Camera (HRSC) and Lunar Reconnaissance Orbiter (LRO) Narrow Angle Camera (NAC) images. The preliminary experimental results demonstrate the feasibility of the proposed approach.
Real-time imaging through strongly scattering media: seeing through turbid media, instantly
Sudarsanam, Sriram; Mathew, James; Panigrahi, Swapnesh; Fade, Julien; Alouini, Mehdi; Ramachandran, Hema
2016-01-01
Numerous everyday situations like navigation, medical imaging and rescue operations require viewing through optically inhomogeneous media. This is a challenging task, as photons propagate predominantly diffusively (rather than ballistically) due to random multiple scattering off the inhomogeneities. Real-time imaging with ballistic light under continuous-wave illumination is even more challenging due to the extremely weak signal, necessitating voluminous data processing. Here we report imaging through strongly scattering media in real time and at rates several times the critical flicker frequency of the eye, so that motion is perceived as continuous. Two factors contributed to the speedup of more than three orders of magnitude over conventional techniques: the use of a simplified algorithm enabling processing of data on the fly, and the utilisation of the task and data parallelization capabilities of typical desktop computers. The extreme simplicity of the technique, and its implementation with present-day low-cost technology, promises its utility in a variety of devices in maritime, aerospace, rail and road transport, in medical imaging and defence. It is of equal interest to the common person and to adventure sportspeople like hikers, divers and mountaineers, who frequently encounter situations requiring real-time imaging through obscuring media. As a specific example, navigation under poor visibility is examined. PMID:27114106
Prescott, Jeffrey William
2013-02-01
The importance of medical imaging for clinical decision making has been steadily increasing over the last four decades. Recently, there has also been an emphasis on medical imaging for preclinical decision making, i.e., for use in pharmaceutical and medical device development. There is also a drive towards quantification of imaging findings by using quantitative imaging biomarkers, which can improve the sensitivity, specificity, accuracy and reproducibility of imaged characteristics used for diagnostic and therapeutic decisions. An important component of the discovery, characterization, validation and application of quantitative imaging biomarkers is the extraction of information and meaning from images through image processing and subsequent analysis. However, many advanced image processing and analysis methods are not applied directly to questions of clinical interest, i.e., for diagnostic and therapeutic decision making, which is a consideration that should be closely linked to the development of such algorithms. This article is meant to address these concerns. First, quantitative imaging biomarkers are introduced by providing definitions and concepts. Then, potential applications of advanced image processing and analysis to areas of quantitative imaging biomarker research are described; specifically, research into osteoarthritis (OA), Alzheimer's disease (AD) and cancer is presented. Then, challenges in quantitative imaging biomarker research are discussed. Finally, a conceptual framework for integrating clinical and preclinical considerations into the development of quantitative imaging biomarkers and their computer-assisted methods of extraction is presented.
MR imaging of the pelvis: a guide to incidental musculoskeletal findings for abdominal radiologists.
Gaetke-Udager, Kara; Girish, Gandikota; Kaza, Ravi K; Jacobson, Jon; Fessell, David; Morag, Yoav; Jamadar, David
2014-08-01
Occasionally patients who undergo magnetic resonance imaging for presumed pelvic disease demonstrate unexpected musculoskeletal imaging findings in the imaged field. Such incidental findings can be challenging to the abdominal radiologist, who may not be familiar with their appearance or know the appropriate diagnostic considerations. Findings can include both normal and abnormal bone marrow, osseous abnormalities such as Paget's disease, avascular necrosis, osteomyelitis, stress and insufficiency fractures, and athletic pubalgia, benign neoplasms such as enchondroma and bone island, malignant processes such as metastasis and chondrosarcoma, soft tissue processes such as abscess, nerve-related tumors, and chordoma, joint- and bursal-related processes such as sacroiliitis, iliopsoas bursitis, greater trochanteric pain syndrome, and labral tears, and iatrogenic processes such as bone graft or bone biopsy. Though not all-encompassing, this essay will help abdominal radiologists to identify and describe this variety of pelvic musculoskeletal conditions, understand key radiologic findings, and synthesize a differential diagnosis when appropriate.
Fractional domain varying-order differential denoising method
NASA Astrophysics Data System (ADS)
Zhang, Yan-Shan; Zhang, Feng; Li, Bing-Zhao; Tao, Ran
2014-10-01
Removal of noise is an important step in the image restoration process, and it remains a challenging problem in image processing. Denoising is a process used to remove noise from a corrupted image while retaining edges and other detailed features as much as possible. Recently, denoising in the fractional domain has become a hot research topic. The fractional-order anisotropic diffusion method produces less of a blocky effect and preserves edges in image denoising, and has therefore received much interest in the literature. Based on this method, we propose a new method for image denoising in which a fractional varying-order differential, rather than a constant-order differential, is used. The theoretical analysis and experimental results show that, compared with the state-of-the-art fractional-order anisotropic diffusion method, the proposed fractional varying-order differential denoising model preserves structure and texture well while quickly removing noise, and yields good visual effects and a better peak signal-to-noise ratio.
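For reference, the classic integer-order anisotropic (Perona-Malik) diffusion that fractional-order schemes generalize can be written in a few lines of NumPy; the fractional varying-order model proposed in the paper is not reproduced here, and the conductance parameter and step size below are illustrative.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Integer-order anisotropic diffusion (Perona-Malik) with an exponential conductance."""
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # finite differences to the four neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # edge-stopping conductance: small where gradients are large, so edges are preserved
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

noisy = np.clip(np.random.rand(128, 128) + 0.05 * np.random.randn(128, 128), 0, 1)
denoised = perona_malik(noisy)
```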
NASA Astrophysics Data System (ADS)
Amalia, A.; Rachmawati, D.; Lestari, I. A.; Mourisa, C.
2018-03-01
Colposcopy has been used primarily to diagnose pre-cancerous and cancerous lesions because the procedure gives a magnified view of the tissues of the vagina and the cervix. However, the poor quality of colposcopy images sometimes makes it challenging for physicians to recognize and analyze them. Implementations of image processing to identify cervical cancer generally rely on complex classification or clustering methods. In this study, we aimed to show that cervical cancer can be identified by applying edge detection alone to the colposcopy image. We implement and compare two edge detection operators: the isotropic and the Canny operator. The research methodology consists of image processing, training, and testing stages. In the image processing stage, the colposcopy image is transformed by an nth-root power transformation to improve the detection result, followed by the edge detection process. Training is the process of labelling every dataset image with its cervical cancer stage; this process involved a pathologist as the expert diagnosing the colposcopy images and providing the reference labels. Testing decides the cancer stage by comparing each test colposcopy image with the labelled images from the training process. We used 30 images as a dataset. Both the Canny and the isotropic operator achieve the same accuracy of 80%. The average running time is 0.3619206 ms for the Canny operator and 1.49136262 ms for the isotropic operator. The results show that the Canny operator is better than the isotropic operator because it generates more precise edges in less time.
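A minimal sketch of the two pre-processing steps described above, an nth-root power transformation followed by Canny edge detection, using scikit-image; the root order and the Canny smoothing parameter are illustrative assumptions, not values from the paper.

```python
import numpy as np
from skimage import io, color, exposure, feature

def nth_root_then_canny(path, n=3, sigma=2.0):
    """Brighten dark regions with an nth-root power-law transform,
    then extract edges with the Canny operator."""
    img = io.imread(path)
    gray = color.rgb2gray(img)                                 # values in [0, 1]
    transformed = exposure.adjust_gamma(gray, gamma=1.0 / n)   # I' = I ** (1/n)
    edges = feature.canny(transformed, sigma=sigma)
    return edges.astype(np.uint8) * 255
```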
Imaging and the new biology: What's wrong with this picture?
NASA Astrophysics Data System (ADS)
Vannier, Michael W.
2004-05-01
The Human Genome has been defined, giving us one part of the equation that stems from the central dogma of molecular biology. Despite this awesome scientific achievement, the correspondence between genomics and imaging is weak, since we cannot predict an organism's phenotype from even perfect knowledge of its genetic complement. Biological knowledge comes in several forms, and the genome is perhaps the best known and most completely understood type. Imaging creates another form of biological information, providing the ability to study morphology, growth and development, metabolic processes, and diseases in vitro and in vivo at many levels of scale. The principal challenge in biomedical imaging for the future lies in the need to reconcile the data provided by one or multiple modalities with other forms of biological knowledge, most importantly the genome, proteome, physiome, and other "-ome's." To date, the imaging science community has not set a high priority on the unification of their results with genomics, proteomics, and physiological functions in most published work. Images are relatively isolated from other forms of biological data, impairing our ability to conceive and address many fundamental questions in research and clinical practice. This presentation will explain the challenge of biological knowledge integration in basic research and clinical applications from the standpoint of imaging and image processing. The impediments to progress, the isolation of the imaging community, and its distance from the mainstream of new and future biological science will be identified, so that the critical and immediate need for change can be highlighted.
NASA Technical Reports Server (NTRS)
Tilley, David G.
1988-01-01
The surface wave field produced by Hurricane Josephine was imaged by the L-band SAR aboard the Challenger on October 12, 1984. Exponential trends found in the two-dimensional autocorrelations of speckled image data support an equilibrium theory model of sea surface hydrodynamics. The notions of correlated specular reflection, surface coherence, optimal Doppler parameterization and spatial resolution are discussed within the context of a Poisson-Rayleigh statistical model of the SAR imaging process.
An image processing pipeline to detect and segment nuclei in muscle fiber microscopic images.
Guo, Yanen; Xu, Xiaoyin; Wang, Yuanyuan; Wang, Yaming; Xia, Shunren; Yang, Zhong
2014-08-01
Muscle fiber images play an important role in the medical diagnosis and treatment of many muscular diseases. The number of nuclei in skeletal muscle fiber images is a key biomarker in the diagnosis of muscular dystrophy. In nuclei segmentation, one primary challenge is to correctly separate clustered nuclei. In this article, we developed an image processing pipeline to automatically detect, segment, and analyze nuclei in microscopic images of muscle fibers. The pipeline consists of image pre-processing, identification of isolated nuclei, identification and segmentation of clustered nuclei, and quantitative analysis. Nuclei are initially extracted from the background using a local Otsu threshold. Based on analysis of morphological features of the isolated nuclei, including their areas, compactness, and major axis lengths, a Bayesian network is trained and applied to distinguish isolated nuclei from clustered nuclei and artifacts in all the images. Then a two-step refined watershed algorithm is applied to segment the clustered nuclei. After segmentation, the nuclei can be quantified for statistical analysis. Comparing the segmented results with those of manual analysis and an existing technique, we find that our proposed image processing pipeline achieves good performance with high accuracy and precision. The presented image processing pipeline can therefore help biologists increase their throughput and objectivity in analyzing large numbers of nuclei in muscle fiber images. © 2014 Wiley Periodicals, Inc.
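A simplified sketch of the thresholding and clustered-nuclei splitting steps described above (Otsu thresholding plus a distance-transform watershed) using scikit-image and SciPy; it omits the Bayesian-network classification and the two-step refinement of the actual pipeline, and the peak distance is an illustrative assumption.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed
from skimage.measure import label, regionprops

def segment_nuclei(gray):
    """Threshold nuclei, then split touching nuclei with a watershed
    seeded at distance-transform maxima."""
    mask = gray > threshold_otsu(gray)
    distance = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(distance, min_distance=7, labels=label(mask))
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    regions = watershed(-distance, markers, mask=mask)
    # simple per-nucleus measurements for downstream statistics
    stats = [(r.label, r.area, r.eccentricity) for r in regionprops(regions)]
    return regions, stats
```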
Sentinel-2 ArcGIS Tool for Environmental Monitoring
NASA Astrophysics Data System (ADS)
Plesoianu, Alin; Cosmin Sandric, Ionut; Anca, Paula; Vasile, Alexandru; Calugaru, Andreea; Vasile, Cristian; Zavate, Lucian
2017-04-01
This paper addresses one of the biggest challenges regarding Sentinel-2 data: the need for an efficient tool to access and process the large collection of images that are available. Consequently, developing a tool for the automation of Sentinel-2 data analysis is the most immediate need. We developed a series of tools for the automation of Sentinel-2 data download and processing for vegetation health monitoring. The tools automatically perform the following operations: downloading image tiles from ESA's Scientific Hub or other vendors (Amazon), pre-processing of the images to extract the 10-m bands, creating image composites, applying a series of vegetation indices (NDVI, OSAVI, etc.), and performing change detection analyses on different temporal data sets. All of these tools run in a dynamic way in the ArcGIS Platform, without the need to create intermediate datasets (rasters, layers), as the images are processed on-the-fly in order to avoid data duplication. Finally, they allow complete integration with the ArcGIS environment and workflows.
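The vegetation indices mentioned above are simple band arithmetic on the 10-m Sentinel-2 bands. A minimal NumPy sketch follows, assuming the red (B4) and near-infrared (B8) bands have already been read into arrays; the array names and the OSAVI soil factor shown are the usual conventions, not values taken from the paper.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + eps)

def osavi(nir, red, soil_factor=0.16, eps=1e-6):
    """Optimized Soil-Adjusted Vegetation Index with the common 0.16 soil term."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + soil_factor + eps)

# change detection as a simple difference between two acquisition dates:
# delta = ndvi(nir_t2, red_t2) - ndvi(nir_t1, red_t1)
```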
Development of a fusion approach selection tool
NASA Astrophysics Data System (ADS)
Pohl, C.; Zeng, Y.
2015-06-01
During the last decades, the number and quality of available remote sensing satellite sensors for Earth observation have grown significantly. The amount of available multi-sensor images, along with their increased spatial and spectral resolution, provides new challenges to Earth scientists. With a Fusion Approach Selection Tool (FAST), the remote sensing community would obtain access to an optimized and improved image processing technology. Remote sensing image fusion is a means to produce images containing information that is not inherent in any single image alone. In the meantime, the user has access to sophisticated commercial image fusion techniques plus the option to tune the parameters of each individual technique to match the anticipated application. This leaves the operator with an uncountable number of options for combining remote sensing images, not to mention the selection of the appropriate images, resolutions and bands. Image fusion can be a machine- and time-consuming endeavour. In addition, it requires knowledge about remote sensing, image fusion, digital image processing and the application. FAST shall provide the user with a quick overview of processing flows to choose from to reach the target. FAST will ask for the available images, application parameters and desired information, and process this input to produce a workflow that quickly obtains the best results. It will optimize data and image fusion techniques. It provides an overview of the possible results from which the user can choose the best. FAST will enable even inexperienced users to use advanced processing methods to maximize the benefit of multi-sensor image exploitation.
Texture Feature Extraction and Classification for Iris Diagnosis
NASA Astrophysics Data System (ADS)
Ma, Lin; Li, Naimin
Applying computer-aided techniques in iris image processing, and combining occidental iridology with traditional Chinese medicine, is a challenging research area in digital image processing and artificial intelligence. This paper proposes an iridology model that consists of iris image pre-processing, texture feature analysis and disease classification. For pre-processing, a two-step iris localization approach is proposed; a 2-D Gabor filter based texture analysis and a texture fractal dimension estimation method are proposed for pathological feature extraction; and finally, support vector machines are constructed to recognize two typical diseases: alimentary canal disease and nervous system disease. Experimental results show that the proposed iridology diagnosis model is quite effective and promising for medical diagnosis and health surveillance for both hospital and public use.
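A minimal sketch of 2-D Gabor texture features feeding a support vector machine, mirroring the feature-analysis and classification stages described above; the filter frequencies, orientations and the two-class setup are illustrative assumptions, and the fractal-dimension feature is omitted.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

def gabor_features(gray, frequencies=(0.1, 0.2, 0.4), n_orient=4):
    """Mean and variance of Gabor response magnitudes over several
    frequencies and orientations."""
    feats = []
    for f in frequencies:
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            real, imag = gabor(gray, frequency=f, theta=theta)
            mag = np.hypot(real, imag)
            feats.extend([mag.mean(), mag.var()])
    return np.array(feats)

# train an SVM on features from pre-localized iris regions (names assumed):
# X = np.stack([gabor_features(img) for img in training_images])
# clf = SVC(kernel="rbf").fit(X, training_labels)
```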
Intelligence algorithms for autonomous navigation in a ground vehicle
NASA Astrophysics Data System (ADS)
Petkovsek, Steve; Shakya, Rahul; Shin, Young Ho; Gautam, Prasanna; Norton, Adam; Ahlgren, David J.
2012-01-01
This paper will discuss the approach to autonomous navigation used by "Q," an unmanned ground vehicle designed by the Trinity College Robot Study Team to participate in the Intelligent Ground Vehicle Competition (IGVC). For the 2011 competition, Q's intelligence was upgraded in several different areas, resulting in a more robust decision-making process and a more reliable system. In 2010-2011, the software of Q was modified to operate in a modular parallel manner, with all subtasks (including motor control, data acquisition from sensors, image processing, and intelligence) running simultaneously in separate software processes using the National Instruments (NI) LabVIEW programming language. This eliminated processor bottlenecks and increased flexibility in the software architecture. Though overall throughput was increased, the long runtime of the image processing process (150 ms) reduced the precision of Q's realtime decisions. Q had slow reaction times to obstacles detected only by its cameras, such as white lines, and was limited to slow speeds on the course. To address this issue, the image processing software was simplified and also pipelined to increase the image processing throughput and minimize the robot's reaction times. The vision software was also modified to detect differences in the texture of the ground, so that specific surfaces (such as ramps and sand pits) could be identified. While previous iterations of Q failed to detect white lines that were not on a grassy surface, this new software allowed Q to dynamically alter its image processing state so that appropriate thresholds could be applied to detect white lines in changing conditions. In order to maintain an acceptable target heading, a path history algorithm was used to deal with local obstacle fields and GPS waypoints were added to provide a global target heading. These modifications resulted in Q placing 5th in the autonomous challenge and 4th in the navigation challenge at IGVC.
Rahim, Sarni Suhaila; Palade, Vasile; Shuttleworth, James; Jayne, Chrisina
2016-12-01
Digital retinal imaging is a challenging screening method for which effective, robust and cost-effective approaches are still to be developed. Regular screening for diabetic retinopathy and diabetic maculopathy is necessary in order to identify the group at risk of visual impairment. This paper presents a novel automatic detection of diabetic retinopathy and maculopathy in eye fundus images by employing fuzzy image processing techniques. The paper first introduces the existing systems for diabetic retinopathy screening, with an emphasis on maculopathy detection methods. The proposed medical decision support system consists of four parts, namely: image acquisition, image preprocessing (including localisation of four retinal structures), feature extraction, and the classification of diabetic retinopathy and maculopathy. A combination of fuzzy image processing techniques, the Circular Hough Transform and several feature extraction methods are implemented in the proposed system. The paper also presents a novel technique for localising the macula region in order to detect maculopathy. In addition to the proposed detection system, the paper highlights a novel online dataset and presents the dataset collection, the expert diagnosis process and the advantages of our online database compared to other public eye fundus image databases for diabetic retinopathy purposes.
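A minimal sketch of the Circular Hough Transform step mentioned above, here used to localise a roughly circular retinal structure (for example the optic disc) in a fundus image with scikit-image; the radius range and smoothing are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

def locate_circular_structure(gray, radii=np.arange(30, 80, 2)):
    """Detect the strongest circle over a range of candidate radii."""
    edges = canny(gray, sigma=2.0)
    hough_res = hough_circle(edges, radii)
    accums, cx, cy, r = hough_circle_peaks(hough_res, radii, total_num_peaks=1)
    return cx[0], cy[0], r[0]    # centre column, centre row, radius in pixels
```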
NASA Astrophysics Data System (ADS)
Ahn, Y.; Box, J. E.; Balog, J.; Lewinter, A.
2008-12-01
Monitoring Greenland outlet glaciers using remotely sensed data has drawn great attention in the earth science community for decades, and time series analysis of sensor data has provided important information on the variability of glacier flow by detecting speed and thickness changes, tracking features and acquiring model input. Thanks to advances in commercial digital camera technology and increased solid state storage, we activated automatic ground-based time-lapse camera stations with high spatial/temporal resolution at west Greenland outlet glaciers and collected data at one-hour intervals, continuously for more than one year at some but not all sites. We believe that important information on ice dynamics is contained in these data and that terrestrial mono-/stereo-photogrammetry can provide theoretical and practical fundamentals for data processing, along with digital image processing techniques. Time-lapse images over these periods in west Greenland capture various phenomena. Problematic are rain, snow, fog, shadows, freezing of water on the camera enclosure window, image over-exposure, camera motion, sensor platform drift, fox chewing of instrument cables, and the pecking of the plastic window by ravens. Other problems include feature identification, camera orientation, image registration, feature matching in image pairs, and feature tracking. Another obstacle is that non-metric digital cameras contain large distortions that must be compensated for precise photogrammetric use. Further, a massive number of images needs to be processed in a way that is sufficiently computationally efficient. We meet these challenges by 1) identifying problems in possible photogrammetric processes, 2) categorizing them based on feasibility, and 3) clarifying limitations and alternatives, while emphasizing displacement computation and analyzing regional/temporal variability. We experiment with mono and stereo photogrammetric techniques with the aid of automatic correlation matching to efficiently handle the enormous data volumes.
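A minimal sketch of the correlation-matching step mentioned above: a template around a glacier-surface feature in one time-lapse frame is located in the next frame by normalized cross-correlation with scikit-image. The window size is an illustrative assumption, and corrections for camera motion and lens distortion are omitted.

```python
import numpy as np
from skimage.feature import match_template

def track_feature(frame_t0, frame_t1, row, col, half=32):
    """Return the pixel displacement of the feature centred at (row, col)
    between two consecutive grayscale frames."""
    template = frame_t0[row - half:row + half, col - half:col + half]
    corr = match_template(frame_t1, template, pad_input=True)
    new_row, new_col = np.unravel_index(np.argmax(corr), corr.shape)
    return new_row - row, new_col - col
```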
Early melanoma diagnosis with mobile imaging.
Do, Thanh-Toan; Zhou, Yiren; Zheng, Haitian; Cheung, Ngai-Man; Koh, Dawn
2014-01-01
We research a mobile imaging system for early diagnosis of melanoma. Different from previous work, we focus on smartphone-captured images, and propose a detection system that runs entirely on the smartphone. Smartphone-captured images taken under loosely-controlled conditions introduce new challenges for melanoma detection, while processing performed on the smartphone is subject to computation and memory constraints. To address these challenges, we propose to localize the skin lesion by combining fast skin detection and fusion of two fast segmentation results. We propose new features to capture color variation and border irregularity which are useful for smartphone-captured images. We also propose a new feature selection criterion to select a small set of good features used in the final lightweight system. Our evaluation confirms the effectiveness of proposed algorithms and features. In addition, we present our system prototype which computes selected visual features from a user-captured skin lesion image, and analyzes them to estimate the likelihood of malignance, all on an off-the-shelf smartphone.
Samsi, Siddharth; Krishnamurthy, Ashok K.; Gurcan, Metin N.
2012-01-01
Follicular Lymphoma (FL) is one of the most common non-Hodgkin lymphomas in the United States. Diagnosis and grading of FL is based on the review of histopathological tissue sections under a microscope and is influenced by human factors such as fatigue and reader bias. Computer-aided image analysis tools can help improve the accuracy of diagnosis and grading and act as another tool at the pathologist's disposal. Our group has been developing algorithms for identifying follicles in immunohistochemical images. These algorithms have been tested and validated on small images extracted from whole slide images. However, the use of these algorithms for analyzing the entire whole slide image requires significant changes to the processing methodology, since the images are relatively large (on the order of 100k × 100k pixels). In this paper we discuss the challenges involved in analyzing whole slide images and propose potential computational methodologies for addressing these challenges. We discuss the use of parallel computing tools on commodity clusters and compare the performance of the serial and parallel implementations of our approach. PMID:22962572
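A minimal sketch of the tile-and-process strategy that whole slide images of this size require: the slide is split into overlapping tiles that are analysed in parallel with Python's multiprocessing. The tile size, overlap and the helpers read_region and count_follicles are hypothetical stand-ins, not the authors' implementation.

```python
from multiprocessing import Pool

def tile_coords(width, height, tile=4096, overlap=256):
    """Yield (x, y, w, h) of overlapping tiles covering the whole slide."""
    step = tile - overlap
    for y in range(0, height, step):
        for x in range(0, width, step):
            yield x, y, min(tile, width - x), min(tile, height - y)

def process_tile(args):
    x, y, w, h = args
    # read_region and count_follicles stand in for the slide reader and the
    # per-tile analysis algorithm (hypothetical helpers, not a real API)
    region = read_region(x, y, w, h)
    return x, y, count_follicles(region)

def process_slide(width, height, workers=8):
    """Map the per-tile analysis over all tiles using a process pool."""
    with Pool(workers) as pool:
        return pool.map(process_tile, list(tile_coords(width, height)))
```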
Adaptive image inversion of contrast 3D echocardiography for enabling automated analysis.
Shaheen, Anjuman; Rajpoot, Kashif
2015-08-01
Contrast 3D echocardiography (C3DE) is commonly used to enhance the visual quality of ultrasound images in comparison with non-contrast 3D echocardiography (3DE). Although the image quality in C3DE is perceived to be improved for visual analysis, it actually deteriorates for the purposes of automatic or semi-automatic analysis due to higher speckle noise and intensity inhomogeneity. Therefore, LV endocardial feature extraction and segmentation from C3DE images remain a challenging problem. To address this challenge, this work proposes an adaptive pre-processing method to invert the appearance of the C3DE image. The image inversion is based on an image intensity threshold value which is automatically estimated through image histogram analysis. In the inverted appearance, the LV cavity appears dark while the myocardium appears bright, making it similar in appearance to a 3DE image. Moreover, the resulting inverted image has high contrast and a low noise appearance, yielding a strong LV endocardium boundary and facilitating feature extraction for segmentation. Our results demonstrate that the inverted appearance of the contrast image enables the subsequent LV segmentation. Copyright © 2015 Elsevier Ltd. All rights reserved.
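A rough sketch of the idea of inverting a contrast echo volume about a histogram-derived threshold, with Otsu's method standing in for the paper's histogram analysis; the percentile-based contrast stretch is an illustrative assumption and the authors' actual estimation procedure is not reproduced.

```python
import numpy as np
from skimage import exposure
from skimage.filters import threshold_otsu

def invert_contrast_echo(volume):
    """Invert a contrast echo volume so the LV cavity appears dark and the
    myocardium bright, then stretch contrast around the boundary."""
    v = volume.astype(float)
    thr = threshold_otsu(v)              # stand-in for histogram-based threshold
    inverted = v.max() - v               # dark cavity, bright myocardium
    lo, hi = np.percentile(inverted, (2, 98))
    stretched = exposure.rescale_intensity(inverted, in_range=(lo, hi))
    return stretched, thr
```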
ERIC Educational Resources Information Center
Muhammad, Gholnecsar E.; McArthur, Sherell A.
2015-01-01
Identity formation is a critical process shaping the lives of adolescents and can present distinct challenges for Black adolescent girls who are positioned in society to negotiate ideals of self when presented with false and incomplete images representing Black girlhood. Researchers have found distorted images of Black femininity derived from…
USDA-ARS?s Scientific Manuscript database
It is challenging to achieve rapid and accurate processing of large amounts of hyperspectral image data. This research was aimed to develop a novel classification method by employing deep feature representation with the stacked sparse auto-encoder (SSAE) and the SSAE combined with convolutional neur...
Teaching Anatomy and Physiology Using Computer-Based, Stereoscopic Images
ERIC Educational Resources Information Center
Perry, Jamie; Kuehn, David; Langlois, Rick
2007-01-01
Learning real three-dimensional (3D) anatomy for the first time can be challenging. Two-dimensional drawings and plastic models tend to over-simplify the complexity of anatomy. The approach described uses stereoscopy to create 3D images of the process of cadaver dissection and to demonstrate the underlying anatomy related to the speech mechanisms.…
Wang, Li; Shi, Feng; Li, Gang; Lin, Weili; Gilmore, John H.; Shen, Dinggang
2014-01-01
Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination process. During the first year of life, the signal contrast between white matter (WM) and gray matter (GM) in MR images undergoes inverse changes. In particular, the inversion of WM/GM signal contrast appears around 6–8 months of age, where brain tissues appear isointense and hence exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. In this paper, we propose a novel segmentation method to address the above-mentioned challenge based on the sparse representation of the complementary tissue distribution information from T1, T2 and diffusion-weighted images. Specifically, we first derive an initial segmentation from a library of aligned multi-modality images with ground-truth segmentations by using sparse representation in a patch-based fashion. The segmentation is further refined by the integration of the geometrical constraint information. The proposed method was evaluated on 22 6-month-old training subjects using leave-one-out cross-validation, as well as 10 additional infant testing subjects, showing superior results in comparison to other state-of-the-art methods. PMID:24505729
Automated segmentation of three-dimensional MR brain images
NASA Astrophysics Data System (ADS)
Park, Jonggeun; Baek, Byungjun; Ahn, Choong-Il; Ku, Kyo Bum; Jeong, Dong Kyun; Lee, Chulhee
2006-03-01
Brain segmentation is a challenging problem due to the complexity of the brain. In this paper, we propose an automated brain segmentation method for 3D magnetic resonance (MR) brain images, which are represented as a sequence of 2D brain images. The proposed method consists of three steps: pre-processing, removal of non-brain regions (e.g., the skull, meninges, and other organs), and spinal cord restoration. In pre-processing, we perform adaptive thresholding, which takes into account the variable intensities of MR brain images corresponding to various image acquisition conditions. In the segmentation process, we iteratively apply 2D morphological operations and masking to the sequences of 2D sagittal, coronal, and axial planes in order to remove non-brain tissues. Next, the final 3D brain regions are obtained by applying an OR operation to the segmentation results of the three planes. Finally, we restore the spinal cord truncated during the previous steps. Experiments are performed with fifteen 3D MR brain image sets with 8-bit gray scale. Experimental results show that the proposed algorithm is fast and provides robust and satisfactory results.
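A minimal 2-D sketch of per-slice thresholding and morphological clean-up of the kind described above, using scikit-image and SciPy; the structuring-element size is an illustrative assumption, and the cross-plane OR combination and spinal cord restoration of the full method are omitted.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.morphology import binary_opening, disk

def brain_mask_2d(slice_img):
    """Rough per-slice brain mask: threshold, opening to detach the skull
    and meninges, then keep the largest connected component."""
    mask = slice_img > threshold_otsu(slice_img)
    mask = binary_opening(mask, disk(5))
    labels, n = ndi.label(mask)
    if n == 0:
        return mask
    sizes = ndi.sum(mask, labels, index=range(1, n + 1))
    largest = 1 + int(np.argmax(sizes))
    return ndi.binary_fill_holes(labels == largest)
```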
Advances in dual-tone development for pitch frequency doubling
NASA Astrophysics Data System (ADS)
Fonseca, Carlos; Somervell, Mark; Scheer, Steven; Kuwahara, Yuhei; Nafus, Kathleen; Gronheid, Roel; Tarutani, Shinji; Enomoto, Yuuichiro
2010-04-01
Dual-tone development (DTD) has been previously proposed as a potential cost-effective double patterning technique1. DTD was reported as early as the late 1990s2. The basic principle of dual-tone imaging involves processing exposed resist latent images in both positive tone (aqueous base) and negative tone (organic solvent) developers. Conceptually, DTD has attractive cost benefits since it enables pitch doubling without the need for multiple etch steps of patterned resist layers. While the concept of the DTD technique is simple to understand, there are many challenges that must be overcome and understood in order to make it a manufacturing solution. Previous work by the authors demonstrated the feasibility of DTD imaging for 50nm half-pitch features at 0.80NA (k1 = 0.21) and discussed the challenges lying ahead for printing sub-40nm half-pitch features with DTD. While previous experimental results suggested that clever processing on the wafer track can be used to enable DTD beyond 50nm half-pitch, they also suggest that identifying suitable resist materials or chemistries is essential for achieving successful imaging results with novel resist processing methods on the wafer track. In this work, we present recent advances in the search for resist materials that work in conjunction with novel resist processing methods on the wafer track to enable DTD. Recent experimental results with new resist chemistries, specifically designed for DTD, are presented. We also present simulation studies that help identify resist properties that could enable DTD imaging, ultimately leading to viable DTD resist materials.
High-contrast imaging in the cloud with klipReduce and Findr
NASA Astrophysics Data System (ADS)
Haug-Baltzell, Asher; Males, Jared R.; Morzinski, Katie M.; Wu, Ya-Lin; Merchant, Nirav; Lyons, Eric; Close, Laird M.
2016-08-01
Astronomical data sets are growing ever larger, and the area of high contrast imaging of exoplanets is no exception. With the advent of fast, low-noise detectors operating at 10 to 1000 Hz, huge numbers of images can be taken during a single hours-long observation. High frame rates offer several advantages, such as improved registration, frame selection, and improved speckle calibration. However, advanced image processing algorithms are computationally challenging to apply. Here we describe a parallelized, cloud-based data reduction system developed for the Magellan Adaptive Optics VisAO camera, which is capable of rapidly exploring tens of thousands of parameter sets affecting the Karhunen-Loève image processing (KLIP) algorithm to produce high-quality direct images of exoplanets. We demonstrate these capabilities with a visible wavelength high contrast data set of a hydrogen-accreting brown dwarf companion.
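A minimal single-frame illustration of the projection-and-subtraction at the heart of the KLIP algorithm named above: Karhunen-Loève modes are built from a stack of reference PSF frames via SVD and their projection is subtracted from the science frame. The number of modes is one of the many parameters the described pipeline explores; the parallel, cloud-based parameter search itself is not shown.

```python
import numpy as np

def klip_subtract(science, references, n_modes=10):
    """Subtract a KL-mode reconstruction of the stellar PSF.

    science: flattened image of shape (n_pix,)
    references: reference PSF stack of shape (n_ref, n_pix)
    """
    ref = references - references.mean(axis=1, keepdims=True)
    sci = science - science.mean()
    # KL modes are the right singular vectors of the mean-subtracted stack
    _, _, vt = np.linalg.svd(ref, full_matrices=False)
    modes = vt[:n_modes]                   # (n_modes, n_pix), orthonormal rows
    psf_model = modes.T @ (modes @ sci)    # projection onto the KL subspace
    return sci - psf_model                 # residual: companion signal + noise
```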
Li, Xin; Liu, Shaomin; Xiao, Qin; Ma, Mingguo; Jin, Rui; Che, Tao; Wang, Weizhen; Hu, Xiaoli; Xu, Ziwei; Wen, Jianguang; Wang, Liangxu
2017-01-01
We introduce a multiscale dataset obtained from the Heihe Watershed Allied Telemetry Experimental Research (HiWATER) in an oasis-desert area in 2012. Upscaling of eco-hydrological processes on a heterogeneous surface is a grand challenge. Progress in this field is hindered by the poor availability of multiscale observations. HiWATER is an experiment designed to address this challenge through instrumentation on hierarchically nested scales to obtain multiscale and multidisciplinary data. The HiWATER observation system consists of a flux observation matrix of eddy covariance towers, large aperture scintillometers, and automatic meteorological stations; an eco-hydrological sensor network of soil moisture and leaf area index; hyper-resolution airborne remote sensing using LiDAR, an imaging spectrometer, a multi-angle thermal imager, and an L-band microwave radiometer; and synchronous ground measurements of vegetation dynamics and photosynthesis processes. All observational data were carefully quality controlled through sensor calibration, data collection, data processing, and dataset generation. The data are freely available at figshare and the Cold and Arid Regions Science Data Centre. The data should be useful for elucidating multiscale eco-hydrological processes and developing upscaling methods. PMID:28654086
An earth imaging camera simulation using wide-scale construction of reflectance surfaces
NASA Astrophysics Data System (ADS)
Murthy, Kiran; Chau, Alexandra H.; Amin, Minesh B.; Robinson, M. Dirk
2013-10-01
Developing and testing advanced ground-based image processing systems for earth-observing remote sensing applications presents a unique challenge that requires advanced imagery simulation capabilities. This paper presents an earth-imaging multispectral framing camera simulation system called PayloadSim (PaySim) capable of generating terabytes of photorealistic simulated imagery. PaySim leverages previous work in 3-D scene-based image simulation, adding a novel method for automatically and efficiently constructing 3-D reflectance scenes by draping tiled orthorectified imagery over a geo-registered Digital Elevation Map (DEM). PaySim's modeling chain is presented in detail, with emphasis given to the techniques used to achieve computational efficiency. These techniques as well as cluster deployment of the simulator have enabled tuning and robust testing of image processing algorithms, and production of realistic sample data for customer-driven image product development. Examples of simulated imagery of Skybox's first imaging satellite are shown.
Towards real-time remote processing of laparoscopic video
NASA Astrophysics Data System (ADS)
Ronaghi, Zahra; Duffy, Edward B.; Kwartowitz, David M.
2015-03-01
Laparoscopic surgery is a minimally invasive surgical technique where surgeons insert a small video camera into the patient's body to visualize internal organs and small tools to perform surgical procedures. However, the benefit of small incisions has a drawback of limited visualization of subsurface tissues, which can lead to navigational challenges in the delivering of therapy. Image-guided surgery (IGS) uses images to map subsurface structures and can reduce the limitations of laparoscopic surgery. One particular laparoscopic camera system of interest is the vision system of the daVinci-Si robotic surgical system (Intuitive Surgical, Sunnyvale, CA, USA). The video streams generate approximately 360 megabytes of data per second, demonstrating a trend towards increased data sizes in medicine, primarily due to higher-resolution video cameras and imaging equipment. Processing this data on a bedside PC has become challenging and a high-performance computing (HPC) environment may not always be available at the point of care. To process this data on remote HPC clusters at the typical 30 frames per second (fps) rate, it is required that each 11.9 MB video frame be processed by a server and returned within 1/30th of a second. The ability to acquire, process and visualize data in real-time is essential for performance of complex tasks as well as minimizing risk to the patient. As a result, utilizing high-speed networks to access computing clusters will lead to real-time medical image processing and improve surgical experiences by providing real-time augmented laparoscopic data. We aim to develop a medical video processing system using an OpenFlow software defined network that is capable of connecting to multiple remote medical facilities and HPC servers.
Pc-Based Floating Point Imaging Workstation
NASA Astrophysics Data System (ADS)
Guzak, Chris J.; Pier, Richard M.; Chinn, Patty; Kim, Yongmin
1989-07-01
The medical, military, scientific and industrial communities have come to rely on imaging and computer graphics for solutions to many types of problems. Systems based on imaging technology are used to acquire and process images, and to analyze and extract data from images that would otherwise be of little use. Images can be transformed and enhanced to reveal detail and meaning that would go undetected without imaging techniques. The success of imaging has increased the demand for faster and less expensive imaging systems, and as these systems become available, more and more applications are discovered and more demands are made. The challenge of meeting these demands forces the designer to attack the problem of imaging from a different perspective. The computing demands of imaging algorithms must be balanced against the desire for affordability and flexibility. Systems must be flexible and easy to use, ready for current applications but at the same time anticipating new, unthought-of uses. Here at the University of Washington Image Processing Systems Lab (IPSL) we are focusing our attention on imaging and graphics systems that implement imaging algorithms for use in an interactive environment. We have developed a PC-based imaging workstation with the goal of providing powerful, flexible, floating point processing capabilities, along with graphics functions, in an affordable package suitable for diverse environments and many applications.
Quantitative imaging of mammalian transcriptional dynamics: from single cells to whole embryos.
Zhao, Ziqing W; White, Melanie D; Bissiere, Stephanie; Levi, Valeria; Plachta, Nicolas
2016-12-23
Probing dynamic processes occurring within the cell nucleus at the quantitative level has long been a challenge in mammalian biology. Advances in bio-imaging techniques over the past decade have enabled us to directly visualize nuclear processes in situ with unprecedented spatial and temporal resolution and single-molecule sensitivity. Here, using transcription as our primary focus, we survey recent imaging studies that specifically emphasize the quantitative understanding of nuclear dynamics in both time and space. These analyses not only inform on previously hidden physical parameters and mechanistic details, but also reveal a hierarchical organizational landscape for coordinating a wide range of transcriptional processes shared by mammalian systems of varying complexity, from single cells to whole embryos.
Onboard Classification of Hyperspectral Data on the Earth Observing One Mission
NASA Technical Reports Server (NTRS)
Chien, Steve; Tran, Daniel; Schaffer, Steve; Rabideau, Gregg; Davies, Ashley Gerard; Doggett, Thomas; Greeley, Ronald; Ip, Felipe; Baker, Victor; Doubleday, Joshua;
2009-01-01
Remote-sensed hyperspectral data represents significant challenges in downlink due to its large data volumes. This paper describes a research program designed to process hyperspectral data products onboard spacecraft to (a) reduce data downlink volumes and (b) decrease latency to provide key data products (often by enabling use of lower data rate communications systems). We describe efforts to develop onboard processing to study volcanoes, floods, and cryosphere, using the Hyperion hyperspectral imager and onboard processing for the Earth Observing One (EO-1) mission as well as preliminary work targeting the Hyperspectral Infrared Imager (HyspIRI) mission.
Research of real-time video processing system based on 6678 multi-core DSP
NASA Astrophysics Data System (ADS)
Li, Xiangzhen; Xie, Xiaodan; Yin, Xiaoqiang
2017-10-01
In the information age, video processing is developing rapidly in the direction of intelligent processing, and complex algorithms pose a powerful challenge to processor performance. In this article, an FPGA + TMS320C6678 architecture integrates image defogging, image fusion, image stabilization and image enhancement into an organic whole, with good real-time behaviour and superior performance. It overcomes the defects of traditional video processing systems, such as simple and single-purpose functionality, and addresses video applications in security monitoring and related fields. This gives full play to the effectiveness of video monitoring and improves enterprise economic benefits.
Vision-aided Monitoring and Control of Thermal Spray, Spray Forming, and Welding Processes
NASA Technical Reports Server (NTRS)
Agapakis, John E.; Bolstad, Jon
1993-01-01
Vision is one of the most powerful forms of non-contact sensing for monitoring and control of manufacturing processes. However, processes involving an arc plasma or flame such as welding or thermal spraying pose particularly challenging problems to conventional vision sensing and processing techniques. The arc or plasma is not typically limited to a single spectral region and thus cannot be easily filtered out optically. This paper presents an innovative vision sensing system that uses intense stroboscopic illumination to overpower the arc light and produce a video image that is free of arc light or glare and dedicated image processing and analysis schemes that can enhance the video images or extract features of interest and produce quantitative process measures which can be used for process monitoring and control. Results of two SBIR programs sponsored by NASA and DOE and focusing on the application of this innovative vision sensing and processing technology to thermal spraying and welding process monitoring and control are discussed.
Generative Adversarial Networks: An Overview
NASA Astrophysics Data System (ADS)
Creswell, Antonia; White, Tom; Dumoulin, Vincent; Arulkumaran, Kai; Sengupta, Biswa; Bharath, Anil A.
2018-01-01
Generative adversarial networks (GANs) provide a way to learn deep representations without extensively annotated training data. They achieve this through deriving backpropagation signals through a competitive process involving a pair of networks. The representations that can be learned by GANs may be used in a variety of applications, including image synthesis, semantic image editing, style transfer, image super-resolution and classification. The aim of this review paper is to provide an overview of GANs for the signal processing community, drawing on familiar analogies and concepts where possible. In addition to identifying different methods for training and constructing GANs, we also point to remaining challenges in their theory and application.
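A minimal PyTorch sketch of the competitive training process described above, with a small fully connected generator and discriminator; the architectures, the 64-dimensional latent code, the 784-pixel image size and the hyperparameters are illustrative assumptions rather than any method surveyed in the paper.

```python
import torch
from torch import nn, optim

G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = optim.Adam(G.parameters(), lr=2e-4)
opt_d = optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):                      # real: (batch, 784) scaled to [-1, 1]
    batch = real.size(0)
    z = torch.randn(batch, 64)
    fake = G(z)
    # discriminator step: push D(real) toward 1 and D(fake) toward 0
    loss_d = bce(D(real), torch.ones(batch, 1)) + \
             bce(D(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # generator step: backpropagate through D so fakes are scored as real
    loss_g = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```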
The Commercial Challenges Of Pacs
NASA Astrophysics Data System (ADS)
Vanden Brink, John A.
1984-08-01
The increasing use of digital imaging techniques creates a need for improved methods of digital processing, communication and archiving. However, the commercial opportunity depends on the resolution of a number of issues. These issues include proof that digital processes are more cost effective than present techniques, implementation of information system support in the imaging activity, implementation of industry standards, conversion of analog images to digital formats, definition of clinical needs, the implications of the purchase decision, and technology requirements. In spite of these obstacles, a market is emerging, served by new and existing companies, that may reach $500 million (U.S.) by 1990 for equipment and supplies.
Deep learning methods to guide CT image reconstruction and reduce metal artifacts
NASA Astrophysics Data System (ADS)
Gjesteby, Lars; Yang, Qingsong; Xi, Yan; Zhou, Ye; Zhang, Junping; Wang, Ge
2017-03-01
The rapidly-rising field of machine learning, including deep learning, has inspired applications across many disciplines. In medical imaging, deep learning has been primarily used for image processing and analysis. In this paper, we integrate a convolutional neural network (CNN) into the computed tomography (CT) image reconstruction process. Our first task is to monitor the quality of CT images during iterative reconstruction and decide when to stop the process according to an intelligent numerical observer instead of using a traditional stopping rule, such as a fixed error threshold or a maximum number of iterations. After training on ground truth images, the CNN was successful in guiding an iterative reconstruction process to yield high-quality images. Our second task is to improve a sinogram to correct for artifacts caused by metal objects. A large number of interpolation and normalization-based schemes were introduced for metal artifact reduction (MAR) over the past four decades. The NMAR algorithm is considered a state-of-the-art method, although residual errors often remain in the reconstructed images, especially in cases of multiple metal objects. Here we merge NMAR with deep learning in the projection domain to achieve additional correction in critical image regions. Our results indicate that deep learning can be a viable tool to address CT reconstruction challenges.
Image wavelet decomposition and applications
NASA Technical Reports Server (NTRS)
Treil, N.; Mallat, S.; Bajcsy, R.
1989-01-01
The general problem of computer vision has been investigated for more than 20 years and is still one of the most challenging fields in artificial intelligence. Indeed, a look at the human visual system gives an idea of the complexity of any solution to the problem of visual recognition. This general task can be decomposed into a whole hierarchy of problems ranging from pixel processing to high level segmentation and complex object recognition. Contrasting an image at different resolutions provides useful information such as edges. An example of low level signal and image processing using the theory of wavelets is introduced, which provides the basis for multiresolution representation. Like the human brain, we use a multiorientation process which detects features independently in different orientation sectors. Images of the same orientation but of different resolutions are then contrasted to gather information about an image. An interesting image representation using energy zero crossings is developed. This representation is shown to be experimentally complete and leads to some higher level applications such as edge and corner finding, which in turn provide two basic steps toward image segmentation. The possibilities of feedback between different levels of processing are also discussed.
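A minimal sketch of a two-level 2-D wavelet decomposition with PyWavelets, giving the kind of multiresolution representation discussed above; the wavelet family, the number of levels and the way the detail bands are combined into an edge map are illustrative assumptions.

```python
import numpy as np
import pywt

def wavelet_pyramid(gray, wavelet="haar", levels=2):
    """Return the coarse approximation and, per level, a combined map of the
    (horizontal, vertical, diagonal) detail magnitudes."""
    coeffs = pywt.wavedec2(gray, wavelet=wavelet, level=levels)
    approx, detail_levels = coeffs[0], coeffs[1:]
    # detail magnitudes highlight edges at each scale and orientation
    edges = [np.abs(np.stack(d)).max(axis=0) for d in detail_levels]
    return approx, edges
```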
Hyperspectral imaging for simultaneous measurements of two FRET biosensors in pancreatic β-cells.
Elliott, Amicia D; Bedard, Noah; Ustione, Alessandro; Baird, Michelle A; Davidson, Michael W; Tkaczyk, Tomasz; Piston, David W
2017-01-01
Fluorescent protein (FP) biosensors based on Förster resonance energy transfer (FRET) are commonly used to study molecular processes in living cells. There are FP-FRET biosensors for many cellular molecules, but it remains difficult to perform simultaneous measurements of multiple biosensors. The overlapping emission spectra of the commonly used FPs, including CFP/YFP and GFP/RFP make dual FRET measurements challenging. In addition, a snapshot imaging modality is required for simultaneous imaging. The Image Mapping Spectrometer (IMS) is a snapshot hyperspectral imaging system that collects high resolution spectral data and can be used to overcome these challenges. We have previously demonstrated the IMS's capabilities for simultaneously imaging GFP and CFP/YFP-based biosensors in pancreatic β-cells. Here, we demonstrate a further capability of the IMS to image simultaneously two FRET biosensors with a single excitation band, one for cAMP and the other for Caspase-3. We use these measurements to measure simultaneously cAMP signaling and Caspase-3 activation in pancreatic β-cells during oxidative stress and hyperglycemia, which are essential components in the pathology of diabetes.
NASA Astrophysics Data System (ADS)
Daudin, Gabrielle; Oburger, Eva; Schmidt, Hannes; Borisov, Sergey; Pradier, Céline; Jourdan, Christophe; Marsden, Claire; Obermaier, Daniela; Woebken, Dagmar; Richter, Andreas; Wenzel, Walter; Hinsinger, Philippe
2017-04-01
Roots not only take up water and nutrients from the surrounding soil, but also release a wide range of exudates, such as low molecular weight organic compounds, CO2 or protons. Root-soil interactions trigger heterogeneous rhizosphere processes based on differences in root activity along the root axis and with distance from the root surface. Elucidating their temporal and spatial dynamics is of crucial importance for a better understanding of these interrelated biogeochemical processes in the rhizosphere. Therefore, monitoring key parameters at a fine scale and in a non-invasive way at the root-soil interface is essential. Planar optodes are an emerging technology that allows in situ and non-destructive imaging of mainly pH, CO2 and O2. Originating in limnology, planar optodes have recently been applied to soil-root systems in laboratory conditions. This presentation will highlight advantages and challenges of using planar optodes to image pH and O2 dynamics in the rhizosphere, focusing on two RGB (red-green-blue) approaches: a commercially available system (PreSens) and a custom-made one. Important insights into the robustness, accuracy, potential and limitations of the two systems applied to different laboratory/greenhouse-based experimental conditions (flooded and aerobic rhizobox systems, plant species) will be addressed. Furthermore, challenges of optode measurements in the field, including a first case study with Eucalyptus grandis in Brazil, will be discussed.
Functional Imaging Biomarkers: Potential to Guide an Individualised Approach to Radiotherapy.
Prestwich, R J D; Vaidyanathan, S; Scarsbrook, A F
2015-10-01
The identification of robust prognostic and predictive biomarkers would transform the ability to implement an individualised approach to radiotherapy. In this regard, there has been a surge of interest in the use of functional imaging to assess key underlying biological processes within tumours and their response to therapy. Importantly, functional imaging biomarkers hold the potential to evaluate tumour heterogeneity/biology both spatially and temporally. An ever-increasing range of functional imaging techniques is now available primarily involving positron emission tomography and magnetic resonance imaging. Small-scale studies across multiple tumour types have consistently been able to correlate changes in functional imaging parameters during radiotherapy with disease outcomes. Considerable challenges remain before the implementation of functional imaging biomarkers into routine clinical practice, including the inherent temporal variability of biological processes within tumours, reproducibility of imaging, determination of optimal imaging technique/combinations, timing during treatment and design of appropriate validation studies. Copyright © 2015 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Fan, Yang-Tung; Peng, Chiou-Shian; Chu, Cheng-Yu
2000-12-01
New markets are emerging for digital electronic image devices, especially in visual communications, PC cameras, mobile/cell phones, security systems, toys, vehicle imaging systems and computer peripherals for document capture. Enabling a one-chip image system, in which the image sensor has a fully digital interface, can bring image capture devices into our daily lives. Adding a color filter to such an image sensor, in a pattern of mosaic pixels or wide stripes, makes images more real and colorful. We can say that the color filter makes life more colorful. What is a color filter? A color filter blocks all of the image light source except the color with the specific wavelength and transmittance of the filter itself. The color filter process consists of coating and patterning green, red and blue (or cyan, magenta and yellow) mosaic resists onto the matched pixels of the image sensing array. From the signal captured by each pixel, the image of the scene can be reconstructed. The wide use of digital electronic cameras and multimedia applications today makes the prospects of color filters bright. Although it is challenging, developing the color filter process is very worthwhile. We provide the best service with shorter cycle times, excellent color quality, and high and stable yield. The key issues of the advanced color process that must be solved and implemented are planarization and micro-lens technology. Many key points of color filter process technology that must be considered are also described in this paper.
Design of a rear anamorphic attachment for digital cinematography
NASA Astrophysics Data System (ADS)
Cifuentes, A.; Valles, A.
2008-09-01
Digital taking systems for HDTV and now for the film industry present a particularly challenging design problem for rear adapters in general. The thick 3-channel prism block in the camera provides an important challenge in the design. In this paper the design of a 1.33x rear anamorphic attachment is presented. The new design departs significantly from the traditional Bravais condition due to the thick dichroic prism block. Design strategies for non-rotationally symmetric systems and fields of view are discussed. Anamorphic images intrinsically have a lower contrast and less resolution than their rotationally symmetric counterparts, therefore proper image evaluation must be considered. The interpretation of the traditional image quality methods applied to anamorphic images is also discussed in relation to the design process. The final design has a total track less than 50 mm, maintaining the telecentricity of the digital prime lens and taking full advantage of the f/1.4 prism block.
Quantitative imaging of heterogeneous dynamics in drying and aging paints
van der Kooij, Hanne M.; Fokkink, Remco; van der Gucht, Jasper; Sprakel, Joris
2016-01-01
Drying and aging paint dispersions display a wealth of complex phenomena that make their study fascinating yet challenging. To meet the growing demand for sustainable, high-quality paints, it is essential to unravel the microscopic mechanisms underlying these phenomena. Visualising the governing dynamics is, however, intrinsically difficult because the dynamics are typically heterogeneous and span a wide range of time scales. Moreover, the high turbidity of paints precludes conventional imaging techniques from reaching deep inside the paint. To address these challenges, we apply a scattering technique, Laser Speckle Imaging, as a versatile and quantitative tool to elucidate the internal dynamics, with microscopic resolution and spanning seven decades of time. We present a toolbox of data analysis and image processing methods that allows a tailored investigation of virtually any turbid dispersion, regardless of the geometry and substrate. Using these tools we watch a variety of paints dry and age with unprecedented detail. PMID:27682840
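As an illustration of the kind of speckle statistic underlying Laser Speckle Imaging, a minimal sketch of spatial speckle contrast (local standard deviation over local mean) is given below; this is the standard textbook quantity, not the authors' full multi-timescale analysis toolbox, and the window size is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(frame, window=7):
    """Spatial speckle contrast K = sigma / mean over a sliding window.
    Fast-decorrelating (mobile) regions blur the speckle and show low K."""
    f = frame.astype(float)
    mean = uniform_filter(f, window)
    mean_sq = uniform_filter(f * f, window)
    var = np.clip(mean_sq - mean * mean, 0, None)
    return np.sqrt(var) / (mean + 1e-9)
```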
Nair, Madhu K; Pettigrew, James C; Loomis, Jeffrey S; Bates, Robert E; Kostewicz, Stephen; Robinson, Boyd; Sweitzer, Jean; Dolan, Teresa A
2009-06-01
The implementation of digital radiography in dentistry in a large healthcare enterprise setting is discussed. A distinct need for a dedicated dental picture archiving and communication system (PACS) exists for seamless integration of different vendor products across the system. Complex issues were contended with as each clinical department migrated to a digital environment with unique needs and workflow patterns. The University of Florida installed a dental PACS over two years ago. This paper describes the process of conversion from film-based imaging, from the planning stages through clinical implementation. Dentistry poses many unique challenges as it strives to achieve better integration with systems primarily designed for medical imaging; the technical requirements for high-resolution image capture in dentistry far exceed those in medicine, as most routine dental diagnostic tasks are demanding. The significance of specification, evaluation, vendor selection, installation, trial runs, training, and phased clinical implementation is emphasized.
MS lesion segmentation using a multi-channel patch-based approach with spatial consistency
NASA Astrophysics Data System (ADS)
Mechrez, Roey; Goldberger, Jacob; Greenspan, Hayit
2015-03-01
This paper presents an automatic method for segmentation of Multiple Sclerosis (MS) in Magnetic Resonance Images (MRI) of the brain. The approach is based on similarities between multi-channel patches (T1, T2 and FLAIR). An MS lesion patch database is built using training images for which the label maps are known. For each patch in the testing image, k similar patches are retrieved from the database. The matching labels for these k patches are then combined to produce an initial segmentation map for the test case. Finally a novel iterative patch-based label refinement process based on the initial segmentation map is performed to ensure spatial consistency of the detected lesions. A leave-one-out evaluation is done for each testing image in the MS lesion segmentation challenge of MICCAI 2008. Results are shown to compete with the state-of-the-art methods on the MICCAI 2008 challenge.
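A minimal sketch of the patch retrieval and label fusion described above: each multi-channel test patch retrieves its k nearest patches from the training library and averages their central labels. The patch representation, k and the nearest-neighbour search are illustrative assumptions, and the iterative spatial-consistency refinement is omitted.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def fuse_labels(train_patches, train_labels, test_patches, k=5):
    """train_patches: (n, d) flattened multi-channel patches,
    train_labels: (n,) lesion label at each training patch centre,
    test_patches: (m, d). Returns a lesion probability per test patch."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_patches)
    dist, idx = nn.kneighbors(test_patches)
    weights = np.exp(-dist)                       # closer patches count more
    weights /= weights.sum(axis=1, keepdims=True)
    return (weights * train_labels[idx]).sum(axis=1)
```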
Electro-optical imaging systems integration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wight, R.
1987-01-01
Since the advent of high resolution, high data rate electronic sensors for military aircraft, the demands on their counterpart, the image generator hard copy output system, have increased dramatically. This has included support of direct overflight and standoff reconnaissance systems and often has required operation within a military shelter or van. The Tactical Laser Beam Recorder (TLBR) design has met the challenge each time. A third generation TLBR was designed and two units delivered to rapidly produce high quality wet process imagery on 5-inch film from a 5-sensor digital image signal input. A modular, in-line wet film processor is included in the total TLBR (W) system. The system features a rugged optical and transport package that requires virtually no alignment or maintenance. It has a "Scan FIX" capability which corrects for scanner fault errors and a "Scan LOC" system which provides for complete phase synchronism isolation between scanner and digital image data input via strobed, 2-line digital buffers. Electronic gamma adjustment automatically compensates for variable film processing time as the film speed changes to track the sensor. This paper describes the fourth meeting of that challenge, the High Resolution Laser Beam Recorder (HRLBR) for Reconnaissance/Tactical applications.
Multigeneration data migration from legacy systems
NASA Astrophysics Data System (ADS)
Ratib, Osman M.; Liu, Brent J.; Kho, Hwa T.; Tao, Wenchao; Wang, Cun; McCoy, J. Michael
2003-05-01
The migration of image data from different generations of legacy archive systems represents a technical challenge and an incremental cost in transitions to newer generations of PACS. UCLA medical center has elected to completely replace the existing PACS infrastructure, encompassing several generations of legacy systems, with a new commercial system providing enterprise-wide image management and communication. One of the most challenging parts of the project was the migration of large volumes of legacy images into the new system. Planning of the migration required the development of specialized software and hardware, and included different phases of data mediation from the existing databases to the new PACS database prior to the migration of the image data. The project plan included a detailed analysis of the resources and cost of data migration to optimize the process and minimize the delay of a hybrid operation in which the legacy systems need to remain operational. Our analysis and project planning showed that the data migration represents the most critical path in the process of PACS renewal. Careful planning and optimization of the project timeline and allocated resources are critical to minimize the financial impact and the time delays that such migrations can impose on the implementation plan.
Within-subject template estimation for unbiased longitudinal image analysis.
Reuter, Martin; Schmansky, Nicholas J; Rosas, H Diana; Fischl, Bruce
2012-07-16
Longitudinal image analysis has become increasingly important in clinical studies of normal aging and neurodegenerative disorders. Furthermore, there is a growing appreciation of the potential utility of longitudinally acquired structural images and reliable image processing to evaluate disease modifying therapies. Challenges have been related to the variability that is inherent in the available cross-sectional processing tools, to the introduction of bias in longitudinal processing and to potential over-regularization. In this paper we introduce a novel longitudinal image processing framework, based on unbiased, robust, within-subject template creation, for automatic surface reconstruction and segmentation of brain MRI of arbitrarily many time points. We demonstrate that it is essential to treat all input images exactly the same as removing only interpolation asymmetries is not sufficient to remove processing bias. We successfully reduce variability and avoid over-regularization by initializing the processing in each time point with common information from the subject template. The presented results show a significant increase in precision and discrimination power while preserving the ability to detect large anatomical deviations; as such they hold great potential in clinical applications, e.g. allowing for smaller sample sizes or shorter trials to establish disease specific biomarkers or to quantify drug effects. Copyright © 2012 Elsevier Inc. All rights reserved.
Bogot, Naama R; Quint, Leslie E
2005-01-01
Evaluation of the thymus poses a challenge to the radiologist. In addition to age-related changes in thymic size, shape, and tissue composition, there is considerable variability in the normal adult thymic appearance within any age group. Many different types of disorders may affect the thymus, including hyperplasia, cysts, and benign and malignant neoplasms, both primary and secondary; clinical and imaging findings typical for each disease process are described in this article. Whereas computed tomography is the mainstay for imaging the thymus, other imaging modalities may occasionally provide additional structural or functional information. PMID:16361143
Coherent diffractive imaging methods for semiconductor manufacturing
NASA Astrophysics Data System (ADS)
Helfenstein, Patrick; Mochi, Iacopo; Rajeev, Rajendran; Fernandez, Sara; Ekinci, Yasin
2017-12-01
The paradigm shift of the semiconductor industry moving from deep ultraviolet to extreme ultraviolet lithography (EUVL) brought about new challenges in the fabrication of illumination and projection optics, which constitute one of the core sources of cost of ownership for many of the metrology tools needed in the lithography process. For this reason, lensless imaging techniques based on coherent diffractive imaging started to raise interest in the EUVL community. This paper presents an overview of currently on-going research endeavors that use a number of methods based on lensless imaging with coherent light.
Detection of Pigment Networks in Dermoscopy Images
NASA Astrophysics Data System (ADS)
Eltayef, Khalid; Li, Yongmin; Liu, Xiaohui
2017-02-01
One of the most important structures in dermoscopy images is the pigment network, whose detection is one of the most challenging and fundamental tasks for dermatologists in the early detection of melanoma. This paper presents an automatic system to detect pigment networks in dermoscopy images. The design of the proposed algorithm consists of four stages. First, a pre-processing algorithm is carried out in order to remove noise and improve the quality of the image. Second, a bank of directional filters and morphological connected component analysis are applied to detect the pigment networks. Third, features are extracted from the detected image, which can be used in the subsequent stage. Fourth, the classification process is performed by applying a feed-forward neural network in order to classify the region as either normal or abnormal skin. The method was tested on a dataset of 200 dermoscopy images from Hospital Pedro Hispano (Matosinhos), and better results were produced compared to previous studies.
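The second stage above, a bank of directional filters followed by morphological connected component analysis, can be sketched generically with a Gabor filter bank and connected-component labeling. The frequency, number of orientations, Otsu thresholding, and minimum region area below are illustrative assumptions, not the parameters used in the paper.

```python
# Hedged sketch of a directional filter bank plus connected-component analysis,
# in the spirit of the second stage above. The Gabor parameters, Otsu thresholding,
# and minimum area are illustrative assumptions, not the paper's settings.
import numpy as np
from skimage import filters, measure

def directional_response(gray, frequency=0.15, n_orientations=8):
    """Maximum Gabor response over several orientations highlights the line-like
    mesh of a pigment network in a grayscale dermoscopy image."""
    responses = []
    for i in range(n_orientations):
        theta = i * np.pi / n_orientations
        real, _ = filters.gabor(gray, frequency=frequency, theta=theta)
        responses.append(real)
    return np.max(responses, axis=0)

def candidate_network_regions(gray, min_area=50):
    resp = directional_response(gray)
    mask = resp > filters.threshold_otsu(resp)       # binarize the filter response
    labels = measure.label(mask)                     # morphological connected components
    return [r for r in measure.regionprops(labels) if r.area >= min_area]
```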
Kamlet, Adam S.; Neumann, Constanze N.; Lee, Eunsung; Carlin, Stephen M.; Moseley, Christian K.; Stephenson, Nickeisha; Hooker, Jacob M.; Ritter, Tobias
2013-01-01
New chemistry methods for the synthesis of radiolabeled small molecules have the potential to impact clinical positron emission tomography (PET) imaging, if they can be successfully translated. However, progression of modern reactions from the stage of synthetic chemistry development to the preparation of radiotracer doses ready for use in human PET imaging is challenging and rare. Here we describe the process and the successful translation of a modern palladium-mediated fluorination reaction to non-human primate (NHP) baboon PET imaging, an important milestone on the path to human PET imaging. The method, which transforms [18F]fluoride into an electrophilic fluorination reagent, provides access to aryl-18F bonds that would be challenging to synthesize via conventional radiochemistry methods. PMID:23554994
Object recognition based on Google's reverse image search and image similarity
NASA Astrophysics Data System (ADS)
Horváth, András.
2015-12-01
Image classification is one of the most challenging tasks in computer vision, and a general multiclass classifier could solve many different tasks in image processing. Classification is usually done by shallow learning for predefined objects, a difficult approach that is very different from human vision, which relies on continuous learning of object classes: humans need years to learn a large taxonomy of objects, and these classes are neither disjoint nor independent. In this paper I present a system based on Google's image similarity algorithm and image database, which can classify a large set of different objects in a human-like manner, identifying related classes and taxonomies.
Discriminative feature representation: an effective postprocessing solution to low dose CT imaging
NASA Astrophysics Data System (ADS)
Chen, Yang; Liu, Jin; Hu, Yining; Yang, Jian; Shi, Luyao; Shu, Huazhong; Gui, Zhiguo; Coatrieux, Gouenou; Luo, Limin
2017-03-01
This paper proposes a concise and effective approach termed discriminative feature representation (DFR) for low dose computerized tomography (LDCT) image processing, which is currently a challenging problem in medical imaging field. This DFR method assumes LDCT images as the superposition of desirable high dose CT (HDCT) 3D features and undesirable noise-artifact 3D features (the combined term of noise and artifact features induced by low dose scan protocols), and the decomposed HDCT features are used to provide the processed LDCT images with higher quality. The target HDCT features are solved via the DFR algorithm using a featured dictionary composed by atoms representing HDCT features and noise-artifact features. In this study, the featured dictionary is efficiently built using physical phantom images collected from the same CT scanner as the target clinical LDCT images to process. The proposed DFR method also has good robustness in parameter setting for different CT scanner types. This DFR method can be directly applied to process DICOM formatted LDCT images, and has good applicability to current CT systems. Comparative experiments with abdomen LDCT data validate the good performance of the proposed approach. This research was supported by National Natural Science Foundation under grants (81370040, 81530060), the Fundamental Research Funds for the Central Universities, and the Qing Lan Project in Jiangsu Province.
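The decomposition idea behind DFR can be sketched with a generic sparse-coding step: each low-dose patch is coded over a dictionary whose atoms are labelled either "HDCT feature" or "noise/artifact", and only the HDCT contribution is kept. The dictionaries below are plain arrays standing in for the phantom-derived featured dictionary, and the OMP solver and sparsity level are assumptions, not the paper's algorithm.

```python
# Hedged sketch of the decomposition idea behind DFR: each LDCT patch is sparsely coded
# over a dictionary whose atoms are labelled either "HDCT feature" or "noise/artifact",
# and only the HDCT part is kept. Dictionary contents, sizes, and the OMP sparsity are
# illustrative assumptions.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def dfr_like_restore(patches, D_hd, D_na, n_nonzero=8):
    """patches: (M, D) flattened LDCT patches.
    D_hd: (K1, D) HDCT-feature atoms; D_na: (K2, D) noise/artifact atoms."""
    D = np.vstack([D_hd, D_na])                              # combined featured dictionary
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    restored = np.empty(patches.shape, dtype=float)
    for i, p in enumerate(patches):
        omp.fit(D.T, p)                                      # sparse code over all atoms
        coef = omp.coef_
        restored[i] = coef[: len(D_hd)] @ D_hd               # keep only the HDCT contribution
    return restored
```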
Vision-based obstacle recognition system for automated lawn mower robot development
NASA Astrophysics Data System (ADS)
Mohd Zin, Zalhan; Ibrahim, Ratnawati
2011-06-01
Digital image processing (DIP) techniques have recently been widely used in various types of applications. Classification and recognition of a specific object using a vision system involve challenging tasks in the fields of image processing and artificial intelligence. The ability and efficiency of a vision system to capture and process images are very important for any intelligent system, such as an autonomous robot. This paper focuses on the development of a vision system that could contribute to an automated vision-based lawn mower robot. The work involves the implementation of DIP techniques to detect and recognize three different types of obstacles that usually exist on a football field. The focus was on the study of different types and sizes of obstacles, the development of a vision-based obstacle recognition system, and the evaluation of the system's performance. Image processing techniques such as image filtering, segmentation, enhancement, and edge detection have been applied in the system. The results show that the developed system is able to detect and recognize various types of obstacles on a football field with a recognition rate of more than 80%.
Recent Advances in Techniques for Hyperspectral Image Processing
NASA Technical Reports Server (NTRS)
Plaza, Antonio; Benediktsson, Jon Atli; Boardman, Joseph W.; Brazile, Jason; Bruzzone, Lorenzo; Camps-Valls, Gustavo; Chanussot, Jocelyn; Fauvel, Mathieu; Gamba, Paolo; Gualtieri, Anthony;
2009-01-01
Imaging spectroscopy, also known as hyperspectral imaging, has been transformed in less than 30 years from being a sparse research tool into a commodity product available to a broad user community. Currently, there is a need for standardized data processing techniques able to take into account the special properties of hyperspectral data. In this paper, we provide a seminal view on recent advances in techniques for hyperspectral image processing. Our main focus is on the design of techniques able to deal with the high-dimensional nature of the data and to integrate the spatial and spectral information. Performance of the discussed techniques is evaluated in different analysis scenarios. To satisfy time-critical constraints in specific applications, we also develop efficient parallel implementations of some of the discussed algorithms. Combined, these parts provide an excellent snapshot of the state-of-the-art in those areas, and offer a thoughtful perspective on future potentials and emerging challenges in the design of robust hyperspectral imaging algorithms.
Recent development of nanoparticles for molecular imaging
NASA Astrophysics Data System (ADS)
Kim, Jonghoon; Lee, Nohyun; Hyeon, Taeghwan
2017-10-01
Molecular imaging enables us to non-invasively visualize cellular functions and biological processes in living subjects, allowing accurate diagnosis of diseases at early stages. For successful molecular imaging, a suitable contrast agent with high sensitivity is required. To date, various nanoparticles have been developed as contrast agents for medical imaging modalities. In comparison with conventional probes, nanoparticles offer several advantages, including controllable physical properties, facile surface modification and long circulation time. In addition, they can be integrated with various combinations for multimodal imaging and therapy. In this opinion piece, we highlight recent advances and future perspectives of nanomaterials for molecular imaging. This article is part of the themed issue 'Challenges for chemistry in molecular imaging'.
Ganz, J; Baker, R P; Hamilton, M K; Melancon, E; Diba, P; Eisen, J S; Parthasarathy, R
2018-05-02
Normal gut function requires rhythmic and coordinated movements that are affected by developmental processes, physical and chemical stimuli, and many debilitating diseases. The imaging and characterization of gut motility, especially regarding periodic, propagative contractions driving material transport, are therefore critical goals. Previous image analysis approaches have successfully extracted properties related to the temporal frequency of motility modes, but robust measures of contraction magnitude, especially from in vivo image data, remain challenging to obtain. We developed a new image analysis method based on image velocimetry and spectral analysis that reveals temporal characteristics such as frequency and wave propagation speed, while also providing quantitative measures of the amplitude of gut motion. We validate this approach using several challenges to larval zebrafish, imaged with differential interference contrast microscopy. Both acetylcholine exposure and feeding increase frequency and amplitude of motility. Larvae lacking enteric nervous system gut innervation show the same average motility frequency, but reduced and less variable amplitude compared to wild types. Our image analysis approach enables insights into gut dynamics in a wide variety of developmental and physiological contexts and can also be extended to analyze other types of cell movements. © 2018 John Wiley & Sons Ltd.
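The spectral-analysis step described above can be illustrated with a simple Fourier treatment of a one-dimensional motility signal, for example the mean velocity magnitude in a gut region obtained from image velocimetry. The way the signal is formed and the sampling rate are assumptions for illustration, not the authors' pipeline.

```python
# Hedged sketch of the spectral-analysis step: given a 1D time series summarizing gut
# motion (e.g. mean velocity magnitude in a region, from image velocimetry), the dominant
# motility frequency and its amplitude are read off the Fourier spectrum. The sampling
# rate and the way the signal is formed are illustrative assumptions.
import numpy as np

def dominant_motility_mode(signal, fps):
    """signal: 1D array sampled at fps frames per second. Returns (frequency_hz, amplitude)."""
    signal = np.asarray(signal, dtype=float) - np.mean(signal)
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    k = np.argmax(np.abs(spectrum[1:])) + 1          # skip the DC bin
    amplitude = 2.0 * np.abs(spectrum[k]) / len(signal)
    return freqs[k], amplitude
```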
Cascaded deep decision networks for classification of endoscopic images
NASA Astrophysics Data System (ADS)
Murthy, Venkatesh N.; Singh, Vivek; Sun, Shanhui; Bhattacharya, Subhabrata; Chen, Terrence; Comaniciu, Dorin
2017-02-01
Both traditional and wireless capsule endoscopes can generate tens of thousands of images for each patient. It is desirable to have the majority of irrelevant images filtered out by automatic algorithms during an offline review process or to have automatic indication for highly suspicious areas during an online guidance. This also applies to the newly invented endomicroscopy, where online indication of tumor classification plays a significant role. Image classification is a standard pattern recognition problem and is well studied in the literature. However, performance on the challenging endoscopic images still has room for improvement. In this paper, we present a novel Cascaded Deep Decision Network (CDDN) to improve image classification performance over standard Deep neural network based methods. During the learning phase, CDDN automatically builds a network which discards samples that are classified with high confidence scores by a previously trained network and concentrates only on the challenging samples which would be handled by the subsequent expert shallow networks. We validate CDDN using two different types of endoscopic imaging, which includes a polyp classification dataset and a tumor classification dataset. From both datasets we show that CDDN can outperform other methods by about 10%. In addition, CDDN can also be applied to other image classification problems.
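The cascade principle described above can be sketched generically: a first-stage model keeps the samples it classifies confidently, and low-confidence samples are deferred to a subsequent expert model trained only on such hard cases. Simple scikit-learn classifiers stand in for the deep networks of the paper; the confidence threshold and model choices are assumptions.

```python
# Hedged sketch of the cascade principle, with simple classifiers standing in for the
# deep networks of the paper. Threshold and models are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

def cascade_fit(X, y, confidence=0.9):
    stage1 = LogisticRegression(max_iter=1000).fit(X, y)
    hard = stage1.predict_proba(X).max(axis=1) < confidence   # samples stage1 is unsure about
    if not hard.any():
        hard[:] = True                                         # degenerate case: reuse all data
    stage2 = RandomForestClassifier(n_estimators=100).fit(X[hard], y[hard])
    return stage1, stage2

def cascade_predict(stage1, stage2, X, confidence=0.9):
    proba = stage1.predict_proba(X)
    pred = stage1.classes_[proba.argmax(axis=1)]
    unsure = proba.max(axis=1) < confidence
    if unsure.any():
        pred[unsure] = stage2.predict(X[unsure])               # defer hard samples to the expert
    return pred
```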
IMAGESEER - IMAGEs for Education and Research
NASA Technical Reports Server (NTRS)
Le Moigne, Jacqueline; Grubb, Thomas; Milner, Barbara
2012-01-01
IMAGESEER is a new Web portal that brings easy access to NASA image data for non-NASA researchers, educators, and students. The IMAGESEER Web site and database are specifically designed to be utilized by the university community, to enable teaching image processing (IP) techniques on NASA data, as well as to provide reference benchmark data to validate new IP algorithms. Along with the data and a Web user interface front-end, basic knowledge of the application domains, benchmark information, and specific NASA IP challenges (or case studies) are provided.
Mansoor, Awais; Foster, Brent; Xu, Ziyue; Papadakis, Georgios Z.; Folio, Les R.; Udupa, Jayaram K.; Mollura, Daniel J.
2015-01-01
The computer-based process of identifying the boundaries of lung from surrounding thoracic tissue on computed tomographic (CT) images, which is called segmentation, is a vital first step in radiologic pulmonary image analysis. Many algorithms and software platforms provide image segmentation routines for quantification of lung abnormalities; however, nearly all of the current image segmentation approaches apply well only if the lungs exhibit minimal or no pathologic conditions. When moderate to high amounts of disease or abnormalities with a challenging shape or appearance exist in the lungs, computer-aided detection systems may be highly likely to fail to depict those abnormal regions because of inaccurate segmentation methods. In particular, abnormalities such as pleural effusions, consolidations, and masses often cause inaccurate lung segmentation, which greatly limits the use of image processing methods in clinical and research contexts. In this review, a critical summary of the current methods for lung segmentation on CT images is provided, with special emphasis on the accuracy and performance of the methods in cases with abnormalities and cases with exemplary pathologic findings. The currently available segmentation methods can be divided into five major classes: (a) thresholding-based, (b) region-based, (c) shape-based, (d) neighboring anatomy–guided, and (e) machine learning–based methods. The feasibility of each class and its shortcomings are explained and illustrated with the most common lung abnormalities observed on CT images. In an overview, practical applications and evolving technologies combining the presented approaches for the practicing radiologist are detailed. ©RSNA, 2015 PMID:26172351
LINKS: learning-based multi-source IntegratioN frameworK for Segmentation of infant brain images.
Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang; Gilmore, John H; Lin, Weili; Shen, Dinggang
2015-03-01
Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination processes. In the first year of life, the image contrast between white and gray matters of the infant brain undergoes dramatic changes. In particular, the image contrast is inverted around 6-8 months of age, and the white and gray matter tissues are isointense in both T1- and T2-weighted MR images and thus exhibit the extremely low tissue contrast, which poses significant challenges for automated segmentation. Most previous studies used multi-atlas label fusion strategy, which has the limitation of equally treating the different available image modalities and is often computationally expensive. To cope with these limitations, in this paper, we propose a novel learning-based multi-source integration framework for segmentation of infant brain images. Specifically, we employ the random forest technique to effectively integrate features from multi-source images together for tissue segmentation. Here, the multi-source images include initially only the multi-modality (T1, T2 and FA) images and later also the iteratively estimated and refined tissue probability maps of gray matter, white matter, and cerebrospinal fluid. Experimental results on 119 infants show that the proposed method achieves better performance than other state-of-the-art automated segmentation methods. Further validation was performed on the MICCAI grand challenge and the proposed method was ranked top among all competing methods. Moreover, to alleviate the possible anatomical errors, our method can also be combined with an anatomically-constrained multi-atlas labeling approach for further improving the segmentation accuracy. Copyright © 2014 Elsevier Inc. All rights reserved.
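A minimal sketch of the multi-source integration idea, assuming voxel-wise features: intensities from the available modalities are concatenated with the current tissue probability maps and fed to a random forest, whose output probabilities become extra inputs for the next iteration. Feature extraction is reduced to raw voxel values for illustration; the real framework uses richer patch features and separate training and testing images.

```python
# Hedged sketch of iterative random-forest integration of multi-source voxel features.
# Names and the use of raw voxel intensities are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def iterative_multisource_training(modalities, labels, mask, n_iters=3):
    """modalities: list of 3D arrays (e.g. T1, T2, FA); labels: 3D int array of tissue
    classes; mask: boolean 3D array selecting brain voxels."""
    X = np.stack([m[mask] for m in modalities], axis=1)    # (n_voxels, n_modalities)
    y = labels[mask]
    prob_maps = None
    forest = None
    for _ in range(n_iters):
        feats = X if prob_maps is None else np.hstack([X, prob_maps])
        forest = RandomForestClassifier(n_estimators=50).fit(feats, y)
        prob_maps = forest.predict_proba(feats)             # refined GM/WM/CSF probabilities
    return forest, prob_maps
```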
LINKS: Learning-based multi-source IntegratioN frameworK for Segmentation of infant brain images
Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang; Gilmore, John H.; Lin, Weili; Shen, Dinggang
2014-01-01
Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination processes. In the first year of life, the image contrast between white and gray matters of the infant brain undergoes dramatic changes. In particular, the image contrast is inverted around 6-8 months of age, and the white and gray matter tissues are isointense in both T1- and T2-weighted MR images and thus exhibit the extremely low tissue contrast, which poses significant challenges for automated segmentation. Most previous studies used multi-atlas label fusion strategy, which has the limitation of equally treating the different available image modalities and is often computationally expensive. To cope with these limitations, in this paper, we propose a novel learning-based multi-source integration framework for segmentation of infant brain images. Specifically, we employ the random forest technique to effectively integrate features from multi-source images together for tissue segmentation. Here, the multi-source images include initially only the multi-modality (T1, T2 and FA) images and later also the iteratively estimated and refined tissue probability maps of gray matter, white matter, and cerebrospinal fluid. Experimental results on 119 infants show that the proposed method achieves better performance than other state-of-the-art automated segmentation methods. Further validation was performed on the MICCAI grand challenge and the proposed method was ranked top among all competing methods. Moreover, to alleviate the possible anatomical errors, our method can also be combined with an anatomically-constrained multi-atlas labeling approach for further improving the segmentation accuracy. PMID:25541188
Efficient Workflows for Curation of Heterogeneous Data Supporting Modeling of U-Nb Alloy Aging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, Logan Timothy; Hackenberg, Robert Errol
These are slides from a presentation summarizing a graduate research associate's summer project. The following topics are covered in these slides: data challenges in materials, aging in U-Nb Alloys, Building an Aging Model, Different Phase Trans. in U-Nb, the Challenge, Storing Materials Data, Example Data Source, Organizing Data: What is a Schema?, What does a "XML Schema" look like?, Our Data Schema: Nice and Simple, Storing Data: Materials Data Curation System (MDCS), Problem with MDCS: Slow Data Entry, Getting Literature into MDCS, Staging Data in Excel Document, Final Result: MDCS Records, Analyzing Image Data, Process for Making TTT Diagram, Bottleneck Number 1: Image Analysis, Fitting a TTP Boundary, Fitting a TTP Curve: Comparable Results, How Does it Compare to Our Data?, Image Analysis Workflow, Curating Hardness Records, Hardness Data: Two Key Decisions, Before Peak Age? - Automation, Interactive Viz, Which Transformation?, Microstructure-Informed Model, Tracking the Entire Process, General Problem with Property Models, Pinyon: Toolkit for Managing Model Creation, Tracking Individual Decisions, Jupyter: Docs and Code in One File, Hardness Analysis Workflow, Workflow for Aging Models, and conclusions.
Edge-illumination x-ray phase contrast imaging with Pt-based metallic glass masks
NASA Astrophysics Data System (ADS)
Saghamanesh, Somayeh; Aghamiri, Seyed Mahmoud-Reza; Olivo, Alessandro; Sadeghilarijani, Maryam; Kato, Hidemi; Kamali-Asl, Alireza; Yashiro, Wataru
2017-06-01
Edge-illumination x-ray phase contrast imaging (EI XPCI) is a non-interferometric phase-sensitive method where two absorption masks are employed. These masks are fabricated through a photolithography process followed by electroplating which is challenging in terms of yield as well as time- and cost-effectiveness. We report on the first implementation of EI XPCI with Pt-based metallic glass masks fabricated by an imprinting method. The new tested alloy exhibits good characteristics including high workability beside high x-ray attenuation. The fabrication process is easy and cheap, and can produce large-size masks for high x-ray energies within minutes. Imaging experiments show a good quality phase image, which confirms the potential of these masks to make the EI XPCI technique widely available and affordable.
An incompressible fluid flow model with mutual information for MR image registration
NASA Astrophysics Data System (ADS)
Tsai, Leo; Chang, Herng-Hua
2013-03-01
Image registration is one of the fundamental and essential tasks within image processing. It is a process of determining the correspondence between structures in two images, which are called the template image and the reference image, respectively. The challenge of registration is to find an optimal geometric transformation between corresponding image data. This paper develops a new MR image registration algorithm that uses a closed incompressible viscous fluid model associated with mutual information. In our approach, we treat the image pixels as the fluid elements of a viscous fluid flow governed by the nonlinear Navier-Stokes partial differential equation (PDE). We replace the pressure term with the body force mainly used to guide the transformation with a weighting coefficient, which is expressed by the mutual information between the template and reference images. To solve this modified Navier-Stokes PDE, we adopted the fast numerical techniques proposed by Seibold. The registration process of updating the body force, the velocity and deformation fields is repeated until the mutual information weight reaches a prescribed threshold. We applied our approach to the BrainWeb and real MR images. Consistent with the theory of the proposed fluid model, we found that our method accurately transformed the template images into the reference images based on the intensity flow. Experimental results indicate that our method has potential in a wide variety of medical image registration applications.
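The mutual-information weight that drives the body force can be estimated from the joint intensity histogram of the template and reference images, as in the small sketch below. The bin count is an illustrative choice, and the fluid solver itself is not reproduced.

```python
# Small sketch of the mutual-information weight used in the registration loop,
# estimated from the joint intensity histogram. Bin count is an assumption.
import numpy as np

def mutual_information(template, reference, bins=64):
    joint, _, _ = np.histogram2d(template.ravel(), reference.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                   # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```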
Measurement of smaller colon polyp in CT colonography images using morphological image processing.
Manjunath, K N; Siddalingaswamy, P C; Prabhu, G K
2017-11-01
Automated measurement of the size and shape of colon polyps is one of the challenges in computed tomography colonography (CTC). The objective of this retrospective study was to improve the sensitivity and specificity of smaller polyp measurement in CTC using image processing techniques. A domain knowledge-based method has been implemented with a hybrid method of colon segmentation, morphological image processing operators for detecting the colonic structures, and a decision-making system for delineating the smaller polyps based on a priori knowledge. The method was applied to 45 CTC datasets. The key finding was that the smaller polyps were accurately measured. In addition to the 6-9 mm range, polyps of even <5 mm were also detected. The results were validated qualitatively and quantitatively using both 2D MPR and 3D view. Implementation was done on a high-performance computer with parallel processing. It takes [Formula: see text] min for measuring the smaller polyp in a dataset of 500 CTC images. With this method, [Formula: see text] and [Formula: see text] were achieved. The domain-based approach with morphological image processing has given good results. The smaller polyps were measured accurately, which helps in making the right clinical decisions. Qualitatively and quantitatively the results were acceptable when compared to the ground truth at [Formula: see text].
Colour application on mammography image segmentation
NASA Astrophysics Data System (ADS)
Embong, R.; Aziz, N. M. Nik Ab.; Karim, A. H. Abd; Ibrahim, M. R.
2017-09-01
The segmentation process is one of the most important steps in image processing and computer vision since it is vital in the initial stage of image analysis. Segmentation of medical images involves complex structures and requires precise segmentation results, which are necessary for clinical diagnosis such as the detection of tumour, oedema, and necrotic tissues. Since mammography images are grayscale, researchers are looking at the effect of colour in the segmentation process of medical images. Colour is known to play a significant role in the perception of object boundaries in non-medical colour images. Processing colour images requires handling more data, hence providing a richer description of objects in the scene. Colour images contain ten percent (10%) additional edge information as compared to their grayscale counterparts. Nevertheless, edge detection in colour images is more challenging than in grayscale images, as colour space is considered a vector space. In this study, we implemented red, green, yellow, and blue colour maps on grayscale mammography images with the purpose of testing the effect of colours on the segmentation of abnormality regions in the mammography images. We applied the segmentation process using the Fuzzy C-means algorithm and evaluated the percentage of average relative error of area for each colour type. The results showed that segmentation with all the colour maps can be done successfully, even for blurred and noisy images. Also, the size of the area of the abnormality region is reduced when compared to the segmentation area without the colour map. The green colour map segmentation produced the smallest percentage of average relative error (10.009%), while the yellow colour map segmentation gave the largest percentage of relative error (11.367%).
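The Fuzzy C-means step used above can be sketched compactly on pixel intensities. The cluster count, fuzziness exponent m, and iteration count are illustrative assumptions; the colour-mapping step and the error-of-area evaluation are omitted.

```python
# Minimal Fuzzy C-means sketch on pixel intensities; parameters are assumptions.
import numpy as np

def fuzzy_cmeans(values, n_clusters=3, m=2.0, n_iters=100, seed=0):
    """values: 1D array of pixel intensities. Returns (cluster centres, memberships)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(values), n_clusters))
    u /= u.sum(axis=1, keepdims=True)                        # fuzzy memberships sum to 1
    for _ in range(n_iters):
        um = u ** m
        centres = (um * values[:, None]).sum(axis=0) / um.sum(axis=0)
        dist = np.abs(values[:, None] - centres[None, :]) + 1e-9
        u = dist ** (-2.0 / (m - 1.0))                       # standard FCM membership update
        u /= u.sum(axis=1, keepdims=True)
    return centres, u

# Each pixel is then assigned to its highest-membership cluster to form the segmentation.
```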
Nolden, Marco; Zelzer, Sascha; Seitel, Alexander; Wald, Diana; Müller, Michael; Franz, Alfred M; Maleike, Daniel; Fangerau, Markus; Baumhauer, Matthias; Maier-Hein, Lena; Maier-Hein, Klaus H; Meinzer, Hans-Peter; Wolf, Ivo
2013-07-01
The Medical Imaging Interaction Toolkit (MITK) has been available as open-source software for almost 10 years now. In this period the requirements of software systems in the medical image processing domain have become increasingly complex. The aim of this paper is to show how MITK evolved into a software system that is able to cover all steps of a clinical workflow including data retrieval, image analysis, diagnosis, treatment planning, intervention support, and treatment control. MITK provides modularization and extensibility on different levels. In addition to the original toolkit, a module system, micro services for small, system-wide features, a service-oriented architecture based on the Open Services Gateway initiative (OSGi) standard, and an extensible and configurable application framework allow MITK to be used, extended and deployed as needed. A refined software process was implemented to deliver high-quality software, ease the fulfillment of regulatory requirements, and enable teamwork in mixed-competence teams. MITK has been applied by a worldwide community and integrated into a variety of solutions, either at the toolkit level or as an application framework with custom extensions. The MITK Workbench has been released as a highly extensible and customizable end-user application. Optional support for tool tracking, image-guided therapy, diffusion imaging as well as various external packages (e.g. CTK, DCMTK, OpenCV, SOFA, Python) is available. MITK has also been used in several FDA/CE-certified applications, which demonstrates the high-quality software and rigorous development process. MITK provides a versatile platform with a high degree of modularization and interoperability and is well suited to meet the challenging tasks of today's and tomorrow's clinically motivated research.
The challenges of studying visual expertise in medical image diagnosis.
Gegenfurtner, Andreas; Kok, Ellen; van Geel, Koos; de Bruin, Anique; Jarodzka, Halszka; Szulewski, Adam; van Merriënboer, Jeroen Jg
2017-01-01
Visual expertise is the superior visual skill shown when executing domain-specific visual tasks. Understanding visual expertise is important in order to understand how the interpretation of medical images may be best learned and taught. In the context of this article, we focus on the visual skill of medical image diagnosis and, more specifically, on the methodological set-ups routinely used in visual expertise research. We offer a critique of commonly used methods and propose three challenges for future research to open up new avenues for studying characteristics of visual expertise in medical image diagnosis. The first challenge addresses theory development. Novel prospects in modelling visual expertise can emerge when we reflect on cognitive and socio-cultural epistemologies in visual expertise research, when we engage in statistical validations of existing theoretical assumptions and when we include social and socio-cultural processes in expertise development. The second challenge addresses the recording and analysis of longitudinal data. If we assume that the development of expertise is a long-term phenomenon, then it follows that future research can engage in advanced statistical modelling of longitudinal expertise data that extends the routine use of cross-sectional material through, for example, animations and dynamic visualisations of developmental data. The third challenge addresses the combination of methods. Alternatives to current practices can integrate qualitative and quantitative approaches in mixed-method designs, embrace relevant yet underused data sources and understand the need for multidisciplinary research teams. Embracing alternative epistemological and methodological approaches for studying visual expertise can lead to a more balanced and robust future for understanding superior visual skills in medical image diagnosis as well as other medical fields. © 2016 John Wiley & Sons Ltd and The Association for the Study of Medical Education.
Lerner, Thomas R.; Burden, Jemima J.; Nkwe, David O.; Pelchen-Matthews, Annegret; Domart, Marie-Charlotte; Durgan, Joanne; Weston, Anne; Jones, Martin L.; Peddie, Christopher J.; Carzaniga, Raffaella; Florey, Oliver; Marsh, Mark; Gutierrez, Maximiliano G.
2017-01-01
The processes of life take place in multiple dimensions, but imaging these processes in even three dimensions is challenging. Here, we describe a workflow for 3D correlative light and electron microscopy (CLEM) of cell monolayers using fluorescence microscopy to identify and follow biological events, combined with serial blockface scanning electron microscopy to analyse the underlying ultrastructure. The workflow encompasses all steps from cell culture to sample processing, imaging strategy, and 3D image processing and analysis. We demonstrate successful application of the workflow to three studies, each aiming to better understand complex and dynamic biological processes, including bacterial and viral infections of cultured cells and formation of entotic cell-in-cell structures commonly observed in tumours. Our workflow revealed new insight into the replicative niche of Mycobacterium tuberculosis in primary human lymphatic endothelial cells, HIV-1 in human monocyte-derived macrophages, and the composition of the entotic vacuole. The broad application of this 3D CLEM technique will make it a useful addition to the correlative imaging toolbox for biomedical research. PMID:27445312
Advanced plasma etch technologies for nanopatterning
NASA Astrophysics Data System (ADS)
Wise, Rich
2013-10-01
Advances in patterning techniques have enabled the extension of immersion lithography from 65/45 nm through 14/10 nm device technologies. A key to this increase in patterning capability has been innovation in the subsequent dry plasma etch processing steps. Multiple exposure techniques, such as litho-etch-litho-etch, sidewall image transfer, line/cut mask, and self-aligned structures, have been implemented to achieve the required device scaling. Advances in dry plasma etch process control, across-wafer uniformity, and etch selectivity to masking materials have enabled adoption of vertical devices and thin film scaling for increased device performance at a given pitch. Plasma etch processes, such as trilayer etches, aggressive critical dimension shrink techniques, and the extension of resist trim processes, have increased the attainable device dimensions at a given imaging capability. Precise control of the plasma etch parameters affecting across-design variation, defectivity, profile stability within wafer, within lot, and across tools has been successfully implemented to provide manufacturable patterning technology solutions. IBM has addressed these patterning challenges through an integrated total patterning solutions team to provide seamless and synergistic patterning processes to device and integration internal customers. We will discuss these challenges and the innovative plasma etch solutions pioneered by IBM and our alliance partners.
Advanced plasma etch technologies for nanopatterning
NASA Astrophysics Data System (ADS)
Wise, Rich
2012-03-01
Advances in patterning techniques have enabled the extension of immersion lithography from 65/45 nm through 14/10 nm device technologies. A key to this increase in patterning capability has been innovation in the subsequent dry plasma etch processing steps. Multiple exposure techniques such as litho-etch-litho-etch, sidewall image transfer, line/cut mask and self-aligned structures have been implemented to achieve the required device scaling. Advances in dry plasma etch process control, across-wafer uniformity, and etch selectivity to masking materials have enabled adoption of vertical devices and thin film scaling for increased device performance at a given pitch. Plasma etch processes such as trilayer etches, aggressive CD shrink techniques, and the extension of resist trim processes have increased the attainable device dimensions at a given imaging capability. Precise control of the plasma etch parameters affecting across-design variation, defectivity, profile stability within wafer, within lot, and across tools has been successfully implemented to provide manufacturable patterning technology solutions. IBM has addressed these patterning challenges through an integrated Total Patterning Solutions team to provide seamless and synergistic patterning processes to device and integration internal customers. This paper will discuss these challenges and the innovative plasma etch solutions pioneered by IBM and our alliance partners.
IEEE International Symposium on Biomedical Imaging.
2017-01-01
The IEEE International Symposium on Biomedical Imaging (ISBI) is a scientific conference dedicated to mathematical, algorithmic, and computational aspects of biological and biomedical imaging, across all scales of observation. It fosters knowledge transfer among different imaging communities and contributes to an integrative approach to biomedical imaging. ISBI is a joint initiative from the IEEE Signal Processing Society (SPS) and the IEEE Engineering in Medicine and Biology Society (EMBS). The 2018 meeting will include tutorials, and a scientific program composed of plenary talks, invited special sessions, challenges, as well as oral and poster presentations of peer-reviewed papers. High-quality papers are requested containing original contributions to the topics of interest including image formation and reconstruction, computational and statistical image processing and analysis, dynamic imaging, visualization, image quality assessment, and physical, biological, and statistical modeling. Accepted 4-page regular papers will be published in the symposium proceedings published by IEEE and included in IEEE Xplore. To encourage attendance by a broader audience of imaging scientists and offer additional presentation opportunities, ISBI 2018 will continue to have a second track featuring posters selected from 1-page abstract submissions without subsequent archival publication.
Learning normalized inputs for iterative estimation in medical image segmentation.
Drozdzal, Michal; Chartrand, Gabriel; Vorontsov, Eugene; Shakeri, Mahsa; Di Jorio, Lisa; Tang, An; Romero, Adriana; Bengio, Yoshua; Pal, Chris; Kadoury, Samuel
2018-02-01
In this paper, we introduce a simple, yet powerful pipeline for medical image segmentation that combines Fully Convolutional Networks (FCNs) with Fully Convolutional Residual Networks (FC-ResNets). We propose and examine a design that takes particular advantage of recent advances in the understanding of both Convolutional Neural Networks as well as ResNets. Our approach focuses upon the importance of a trainable pre-processing when using FC-ResNets and we show that a low-capacity FCN model can serve as a pre-processor to normalize medical input data. In our image segmentation pipeline, we use FCNs to obtain normalized images, which are then iteratively refined by means of a FC-ResNet to generate a segmentation prediction. As in other fully convolutional approaches, our pipeline can be used off-the-shelf on different image modalities. We show that using this pipeline, we exhibit state-of-the-art performance on the challenging Electron Microscopy benchmark, when compared to other 2D methods. We improve segmentation results on CT images of liver lesions, when contrasting with standard FCN methods. Moreover, when applying our 2D pipeline on a challenging 3D MRI prostate segmentation challenge we reach results that are competitive even when compared to 3D methods. The obtained results illustrate the strong potential and versatility of the pipeline by achieving accurate segmentations on a variety of image modalities and different anatomical regions. Copyright © 2017 Elsevier B.V. All rights reserved.
Automated measurement of pressure injury through image processing.
Li, Dan; Mathews, Carol
2017-11-01
To develop an image processing algorithm to automatically measure pressure injuries using electronic pressure injury images stored in nursing documentation. Photographing pressure injuries and storing the images in the electronic health record is standard practice in many hospitals. However, the manual measurement of pressure injury is time-consuming, challenging, and subject to intra/inter-reader variability given the complexities of the pressure injury and the clinical environment. A cross-sectional algorithm development study. A set of 32 pressure injury images were obtained from a western Pennsylvania hospital. First, we transformed the images from an RGB (i.e. red, green and blue) colour space to a YCbCr colour space to eliminate interferences from varying light conditions and skin colours. Second, a probability map, generated by a skin colour Gaussian model, guided the pressure injury segmentation process using the Support Vector Machine classifier. Third, after segmentation, the reference ruler - included in each of the images - enabled perspective transformation and determination of pressure injury size. Finally, two nurses independently measured those 32 pressure injury images, and the intraclass correlation coefficient was calculated. An image processing algorithm was developed to automatically measure the size of pressure injuries. Both inter- and intra-rater analyses achieved a good level of reliability. Validation of the size measurement of the pressure injury (1) demonstrates that our image processing algorithm is a reliable approach to monitoring pressure injury progress through clinical pressure injury images and (2) offers new insight to pressure injury evaluation and documentation. Once our algorithm is further developed, clinicians can be provided with an objective, reliable and efficient computational tool for segmentation and measurement of pressure injuries. With this, clinicians will be able to more effectively monitor the healing process of pressure injuries. © 2017 John Wiley & Sons Ltd.
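The first two steps described above, RGB to YCbCr conversion and a Gaussian skin-colour probability map, can be sketched as follows. The conversion follows ITU-R BT.601; the Gaussian mean and covariance are placeholders, since in practice they would be estimated from labelled skin pixels, and the SVM segmentation and ruler-based calibration are not reproduced.

```python
# Hedged sketch: RGB to YCbCr (ITU-R BT.601) and a Gaussian skin-colour score in the
# (Cb, Cr) plane. The Gaussian parameters are placeholders, not the study's model.
import numpy as np

def rgb_to_ycbcr(img):
    """img: (H, W, 3) uint8 RGB image."""
    r, g, b = [img[..., i].astype(float) for i in range(3)]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_probability(img, mean=(110.0, 150.0), cov=((80.0, 0.0), (0.0, 120.0))):
    """Unnormalized Gaussian skin-likelihood map used to guide wound segmentation."""
    x = rgb_to_ycbcr(img)[..., 1:] - np.asarray(mean)        # (Cb, Cr) deviation per pixel
    inv = np.linalg.inv(np.asarray(cov))
    md2 = np.einsum('...i,ij,...j->...', x, inv, x)          # squared Mahalanobis distance
    return np.exp(-0.5 * md2)
```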
NASA Astrophysics Data System (ADS)
Jermyn, Michael; Ghadyani, Hamid; Mastanduno, Michael A.; Turner, Wes; Davis, Scott C.; Dehghani, Hamid; Pogue, Brian W.
2013-08-01
Multimodal approaches that combine near-infrared (NIR) and conventional imaging modalities have been shown to improve optical parameter estimation dramatically and thus represent a prevailing trend in NIR imaging. These approaches typically involve applying anatomical templates from magnetic resonance imaging/computed tomography/ultrasound images to guide the recovery of optical parameters. However, merging these data sets using current technology requires multiple software packages, substantial expertise, significant time commitment, and often results in unacceptably poor mesh quality for optical image reconstruction, a reality that represents a significant roadblock for translational research of multimodal NIR imaging. This work addresses these challenges directly by introducing automated Digital Imaging and Communications in Medicine (DICOM) image stack segmentation and a new one-click three-dimensional mesh generator optimized for multimodal NIR imaging, and combining these capabilities into a single software package (available for free download) with a streamlined workflow. Image processing time and mesh quality benchmarks were examined for four common multimodal NIR use-cases (breast, brain, pancreas, and small animal) and were compared to a commercial image processing package. Applying these tools resulted in a fivefold decrease in image processing time and a 62% improvement in minimum mesh quality, in the absence of extra mesh postprocessing. These capabilities represent a significant step toward enabling translational multimodal NIR research for both expert and nonexpert users in an open-source platform.
Fahmi, Fahmi; Nasution, Tigor H; Anggreiny, Anggreiny
2017-01-01
The use of medical imaging in diagnosing brain disease is growing. The challenges relate to the large size of the data and the complexity of the image processing. A high standard of hardware and software is demanded, which can only be provided in big hospitals. Our purpose was to provide a smart cloud system to help diagnose brain diseases for hospitals with limited infrastructure. The expertise of neurologists was first embedded in the cloud server to conduct an automatic diagnosis in real time using an image processing technique developed based on the ITK library and a web service. Users upload images through a website, and the result, in this case the size of the tumor, is sent back immediately. A specific image compression technique was developed for this purpose. The smart cloud system was able to measure the area and location of tumors, with an average size of 19.91 ± 2.38 cm² and an average response time of 7.0 ± 0.3 s. The capability of the server decreased when multiple clients accessed the system simultaneously: 14 ± 0 s (5 parallel clients) and 27 ± 0.2 s (10 parallel clients). The cloud system was successfully developed to process and analyze medical images for diagnosing brain diseases, in this case tumors.
Standoff midwave infrared hyperspectral imaging of ship plumes
NASA Astrophysics Data System (ADS)
Gagnon, Marc-André; Gagnon, Jean-Philippe; Tremblay, Pierre; Savary, Simon; Farley, Vincent; Guyot, Éric; Lagueux, Philippe; Chamberland, Martin; Marcotte, Frédérick
2016-05-01
Characterization of ship plumes is very challenging due to the great variety of ships, fuel, and fuel grades, as well as the extent of a gas plume. In this work, imaging of ship plumes from an operating ferry boat was carried out using standoff midwave (3-5 μm) infrared hyperspectral imaging. Quantitative chemical imaging of combustion gases was achieved by fitting a radiative transfer model. Combustion efficiency maps and mass flow rates are presented for carbon monoxide (CO) and carbon dioxide (CO2). The results illustrate how valuable information about the combustion process of a ship engine can be successfully obtained using passive hyperspectral remote sensing imaging.
Standoff midwave infrared hyperspectral imaging of ship plumes
NASA Astrophysics Data System (ADS)
Gagnon, Marc-André; Gagnon, Jean-Philippe; Tremblay, Pierre; Savary, Simon; Farley, Vincent; Guyot, Éric; Lagueux, Philippe; Chamberland, Martin
2016-10-01
Characterization of ship plumes is very challenging due to the great variety of ships, fuel, and fuel grades, as well as the extent of a gas plume. In this work, imaging of ship plumes from an operating ferry boat was carried out using standoff midwave (3-5 μm) infrared hyperspectral imaging. Quantitative chemical imaging of combustion gases was achieved by fitting a radiative transfer model. Combustion efficiency maps and mass flow rates are presented for carbon monoxide (CO) and carbon dioxide (CO2). The results illustrate how valuable information about the combustion process of a ship engine can be successfully obtained using passive hyperspectral remote sensing imaging.
Instrumentation in molecular imaging.
Wells, R Glenn
2016-12-01
In vivo molecular imaging is a challenging task and no single type of imaging system provides an ideal solution. Nuclear medicine techniques like SPECT and PET provide excellent sensitivity but have poor spatial resolution. Optical imaging has excellent sensitivity and spatial resolution, but light photons interact strongly with tissues and so only small animals and targets near the surface can be accurately visualized. CT and MRI have exquisite spatial resolution, but greatly reduced sensitivity. To overcome the limitations of individual modalities, molecular imaging systems often combine individual cameras together, for example, merging nuclear medicine cameras with CT or MRI to allow the visualization of molecular processes with both high sensitivity and high spatial resolution.
Retrieval and classification of food images.
Farinella, Giovanni Maria; Allegra, Dario; Moltisanti, Marco; Stanco, Filippo; Battiato, Sebastiano
2016-10-01
Automatic food understanding from images is an interesting challenge with applications in different domains. In particular, food intake monitoring is becoming more and more important because of the key role it plays in health and market economies. In this paper, we address the study of food image processing from the perspective of Computer Vision. As a first contribution we present a survey of the studies in the context of food image processing, from the early attempts to the current state-of-the-art methods. Since retrieval and classification engines able to work on food images are required to build automatic systems for diet monitoring (e.g., to be embedded in wearable cameras), we focus our attention on the representation of food images because it plays a fundamental role in the understanding engines. Food retrieval and classification is a challenging task since food presents high variability and intrinsic deformability. To properly study the peculiarities of different image representations we propose the UNICT-FD1200 dataset. It is composed of 4754 food images of 1200 distinct dishes acquired during real meals. Each food plate is acquired multiple times and the overall dataset presents both geometric and photometric variability. The images of the dataset have been manually labeled considering 8 categories: Appetizer, Main Course, Second Course, Single Course, Side Dish, Dessert, Breakfast, Fruit. We have performed tests employing different state-of-the-art representations to assess their performance on the UNICT-FD1200 dataset. Finally, we propose a new representation based on the perceptual concept of Anti-Textons, which is able to encode spatial information between Textons and outperforms other representations in the context of food retrieval and classification. Copyright © 2016 Elsevier Ltd. All rights reserved.
Analysis Of The IJCNN 2011 UTL Challenge
2012-01-13
[...] large datasets from various application domains: handwriting recognition, image recognition, video processing, text processing, and ecology. [...] The validation and final evaluation sets consist of 4096 examples each. [Fragment of a dataset table listing Dataset, Domain, Features, Sparsity and development/transfer set sizes; e.g. AVICENNA, Handwriting, 120 features, 0% sparsity, 150205 examples; remainder not recoverable.] Transfer learning methods could accelerate the application of handwriting recognizers to historical manuscripts by reducing the need for [...]
BgCut: automatic ship detection from UAV images.
Xu, Chao; Zhang, Dongping; Zhang, Zhengning; Feng, Zhiyong
2014-01-01
Ship detection in static UAV aerial images is a fundamental challenge in sea target detection and precise positioning. In this paper, an improved universal background model based on the GrabCut algorithm is proposed to segment foreground objects from the sea automatically. First, a sea template library including images in different natural conditions is built to provide an initial template to the model. Then the background trimap is obtained by combining template matching with a region growing algorithm. The output trimap initializes the GrabCut background without manual intervention, and the segmentation process requires no iteration. The effectiveness of our proposed model is demonstrated by extensive experiments on a certain area of real UAV aerial images taken by an airborne Canon 5D Mark. The proposed algorithm is not only adaptive but also achieves good segmentation. Furthermore, the model in this paper can be well applied in the automated processing of industrial images for related research.
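The trimap-initialized GrabCut step can be illustrated with OpenCV's mask-initialized GrabCut call, as in the sketch below. Only the segmentation call is shown; the trimap construction (template matching plus region growing) is not reproduced, and the function name and trimap encoding are assumptions.

```python
# Hedged sketch of initializing GrabCut from a precomputed trimap rather than a
# user-drawn rectangle, in the spirit of the method above.
import cv2
import numpy as np

def segment_with_trimap(image_bgr, trimap):
    """trimap: uint8 array, 0 = sure background, 1 = sure foreground, 2 = unknown."""
    mask = np.full(trimap.shape, cv2.GC_PR_BGD, np.uint8)
    mask[trimap == 0] = cv2.GC_BGD
    mask[trimap == 1] = cv2.GC_FGD
    mask[trimap == 2] = cv2.GC_PR_FGD                 # unknown treated as probable foreground
    bgd = np.zeros((1, 65), np.float64)               # internal GMM buffers required by OpenCV
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
    fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return (fg * 255).astype(np.uint8)                # binary ship/foreground mask
```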
FibrilJ: ImageJ plugin for fibrils' diameter and persistence length determination
NASA Astrophysics Data System (ADS)
Sokolov, P. A.; Belousov, M. V.; Bondarev, S. A.; Zhouravleva, G. A.; Kasyanenko, N. A.
2017-05-01
Application of microscopy to evaluate the morphology and size of filamentous proteins and amyloids requires new and creative approaches to simplify and automate the image processing. The estimation of mean values of fibril diameter, length, and bending stiffness on micrographs is a major challenge. For this purpose we developed an open-source FibrilJ plugin for the ImageJ/Fiji program. It automatically recognizes the fibrils on the surface of a mica, silicon, gold, or formvar film and further analyzes them to calculate the distribution of fibrils by diameter, length, and persistence length. The plugin has been validated by processing TEM images of fibrils formed by the Sup35NM yeast protein and artificially created images of rod-shaped objects with predefined parameters. Novel data obtained by SEM for Sup35NM protein fibrils immobilized on silicon and gold substrates are also presented and analyzed.
Imaging of human differentiated 3D neural aggregates using light sheet fluorescence microscopy.
Gualda, Emilio J; Simão, Daniel; Pinto, Catarina; Alves, Paula M; Brito, Catarina
2014-01-01
The development of three dimensional (3D) cell cultures represents a big step toward a better understanding of cell behavior and disease in a more natural-like environment, providing not only single but multiple cell type interactions in a complex 3D matrix that highly resembles physiological conditions. Light sheet fluorescence microscopy (LSFM) is becoming an excellent tool for fast imaging of such 3D biological structures. We demonstrate the potential of this technique for the imaging of human differentiated 3D neural aggregates in fixed and live samples, namely calcium imaging and cell death processes, showing the power of this imaging modality compared with traditional microscopy. The combination of light sheet microscopy and 3D neural cultures will open the door to more challenging experiments involving drug testing at large scale as well as a better understanding of relevant biological processes in a more realistic environment.
York, Timothy; Powell, Samuel B.; Gao, Shengkui; Kahan, Lindsey; Charanya, Tauseef; Saha, Debajit; Roberts, Nicholas W.; Cronin, Thomas W.; Marshall, Justin; Achilefu, Samuel; Lake, Spencer P.; Raman, Baranidharan; Gruev, Viktor
2015-01-01
In this paper, we present recent work on bioinspired polarization imaging sensors and their applications in biomedicine. In particular, we focus on three different aspects of these sensors. First, we describe the electro–optical challenges in realizing a bioinspired polarization imager, and in particular, we provide a detailed description of a recent low-power complementary metal–oxide–semiconductor (CMOS) polarization imager. Second, we focus on signal processing algorithms tailored for this new class of bioinspired polarization imaging sensors, such as calibration and interpolation. Third, the emergence of these sensors has enabled rapid progress in characterizing polarization signals and environmental parameters in nature, as well as several biomedical areas, such as label-free optical neural recording, dynamic tissue strength analysis, and early diagnosis of flat cancerous lesions in a murine colorectal tumor model. We highlight results obtained from these three areas and discuss future applications for these sensors. PMID:26538682
NASA Astrophysics Data System (ADS)
Tokareva, Victoria
2018-04-01
New-generation medicine demands better quality of analysis, increasing the amount of data collected during checkups while simultaneously decreasing the invasiveness of procedures. Thus it becomes urgent not only to develop advanced modern hardware, but also to implement the special software infrastructure needed to use it in everyday clinical practice, so-called Picture Archiving and Communication Systems (PACS). Developing a distributed PACS is a challenging task in today's medical informatics. The paper discusses the architecture of a distributed PACS server for processing large high-quality medical images, with respect to the technical specifications of modern medical imaging hardware as well as international standards in medical imaging software. The MapReduce paradigm is proposed for image reconstruction on the server, and the details of utilizing the Hadoop framework for this task are discussed in order to make the design of the distributed PACS as ergonomic and adapted to the needs of end users as possible.
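As a framework-agnostic illustration of how slice-level work could be expressed in the MapReduce paradigm mentioned above, the sketch below keys each slice by its series identifier in the map step and assembles the ordered slices into a volume in the reduce step. A real deployment would run these as Hadoop jobs; the record fields and the simple stacking that stands in for reconstruction are placeholders, not the paper's implementation.

```python
# Hedged, framework-agnostic MapReduce-style sketch; field names and the stacking
# "reconstruction" are illustrative assumptions.
import numpy as np

def map_slice(slice_record):
    """slice_record: dict with 'series_uid', 'instance_number' and 'pixels' (2D array)."""
    return slice_record['series_uid'], (slice_record['instance_number'], slice_record['pixels'])

def reduce_series(series_uid, keyed_slices):
    """keyed_slices: iterable of (instance_number, pixels) emitted by the map step."""
    ordered = [px for _, px in sorted(keyed_slices, key=lambda kv: kv[0])]
    volume = np.stack(ordered, axis=0)            # simple stacking stands in for reconstruction
    return series_uid, volume
```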
BgCut: Automatic Ship Detection from UAV Images
Zhang, Zhengning; Feng, Zhiyong
2014-01-01
Ship detection in static UAV aerial images is a fundamental challenge in sea target detection and precise positioning. In this paper, an improved universal background model based on the GrabCut algorithm is proposed to segment foreground objects from the sea automatically. First, a sea template library including images captured under different natural conditions is built to provide an initial template to the model. Then the background trimap is obtained by combining template matching with a region-growing algorithm. The resulting trimap initializes the GrabCut background without manual intervention, and segmentation proceeds without iteration. The effectiveness of the proposed model is demonstrated by extensive experiments on real UAV aerial images of a particular sea area acquired with an airborne Canon 5D Mark camera. The proposed algorithm is not only adaptive but also yields good segmentation. Furthermore, the model can be readily applied to the automated processing of industrial images in related research. PMID:24977182
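A minimal OpenCV sketch of the key idea, GrabCut initialized from a precomputed trimap instead of a manual rectangle, is shown below; the trimap encoding and file names are illustrative, not the authors' code.

    import cv2
    import numpy as np

    img = cv2.imread("uav_sea.jpg")
    trimap = cv2.imread("trimap.png", cv2.IMREAD_GRAYSCALE)  # 0 bg, 128 unknown, 255 fg

    mask = np.full(trimap.shape, cv2.GC_PR_BGD, dtype=np.uint8)
    mask[trimap == 0] = cv2.GC_BGD
    mask[trimap == 255] = cv2.GC_FGD

    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(img, mask, None, bgd, fgd, 1, cv2.GC_INIT_WITH_MASK)

    ships = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype("uint8")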
Imaging of human differentiated 3D neural aggregates using light sheet fluorescence microscopy
Gualda, Emilio J.; Simão, Daniel; Pinto, Catarina; Alves, Paula M.; Brito, Catarina
2014-01-01
The development of three-dimensional (3D) cell cultures represents a big step toward a better understanding of cell behavior and disease in a more natural environment, providing not only single but multiple cell type interactions in a complex 3D matrix that highly resembles physiological conditions. Light sheet fluorescence microscopy (LSFM) is becoming an excellent tool for fast imaging of such 3D biological structures. We demonstrate the potential of this technique for the imaging of human differentiated 3D neural aggregates in fixed and live samples, namely calcium imaging and cell death processes, showing the power of this imaging modality compared with traditional microscopy. The combination of light sheet microscopy and 3D neural cultures will open the door to more challenging experiments involving drug testing at large scale, as well as a better understanding of relevant biological processes in a more realistic environment. PMID:25161607
High-performance computing in image registration
NASA Astrophysics Data System (ADS)
Zanin, Michele; Remondino, Fabio; Dalla Mura, Mauro
2012-10-01
Thanks to recent technological advances, a large variety of image data is at our disposal with variable geometric, radiometric and temporal resolution. In many applications the processing of such images requires high-performance computing techniques in order to deliver timely responses, e.g. for rapid decisions or real-time actions. Thus, parallel or distributed computing methods, Digital Signal Processor (DSP) architectures, Graphical Processing Unit (GPU) programming and Field-Programmable Gate Array (FPGA) devices have become essential tools for the challenging task of processing large amounts of geo-data. The article focuses on the processing and registration of large datasets of terrestrial and aerial images for 3D reconstruction, diagnostic purposes and monitoring of the environment. For the image alignment procedure, sets of corresponding feature points need to be automatically extracted in order to subsequently compute the geometric transformation that aligns the data. Feature extraction and matching are among the most computationally demanding operations in the processing chain; thus, a great degree of automation and speed is mandatory. The details of the implemented operations (named LARES) exploiting parallel architectures and GPUs are thus presented. The innovative aspects of the implementation are (i) its effectiveness on a large variety of unorganized and complex datasets, (ii) its capability to work with high-resolution images and (iii) the speed of the computations. Examples and comparisons with standard CPU processing are also reported and commented.
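For reference, a CPU-only baseline of the alignment step described above (feature detection, matching and a RANSAC homography) can be written in a few lines with OpenCV; the LARES implementation itself targets parallel and GPU architectures, so this is only a starting point, and the file names are assumptions.

    import cv2
    import numpy as np

    img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=4000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)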
Moutsatsos, Ioannis K.; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J.; Jenkins, Jeremy L.; Holway, Nicholas; Tallarico, John; Parker, Christian N.
2016-01-01
High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an “off-the-shelf,” open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community. PMID:27899692
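A Jenkins-CI job of the kind described above typically just shells out to CellProfiler in headless batch mode. The sketch below shows such a step in Python; the CLI flags and paths are assumptions about a typical CellProfiler installation rather than details taken from the paper, and should be adapted to the local setup.

    # Hypothetical headless CellProfiler invocation wrapped by a CI job step.
    import subprocess

    cmd = [
        "cellprofiler",
        "-c", "-r",                       # assumed flags: headless, run pipeline
        "-p", "analysis.cppipe",          # hypothetical pipeline file
        "-i", "/data/plate_001/images",   # hypothetical input image folder
        "-o", "/data/plate_001/results",  # hypothetical output folder
    ]
    subprocess.run(cmd, check=True)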
Segmentation and learning in the quantitative analysis of microscopy images
NASA Astrophysics Data System (ADS)
Ruggiero, Christy; Ross, Amy; Porter, Reid
2015-02-01
In material science and bio-medical domains the quantity and quality of microscopy images is rapidly increasing and there is a great need to automatically detect, delineate and quantify particles, grains, cells, neurons and other functional "objects" within these images. These are challenging problems for image processing because of the variability in object appearance that inevitably arises in real world image acquisition and analysis. One of the most promising (and practical) ways to address these challenges is interactive image segmentation. These algorithms are designed to incorporate input from a human operator to tailor the segmentation method to the image at hand. Interactive image segmentation is now a key tool in a wide range of applications in microscopy and elsewhere. Historically, interactive image segmentation algorithms have tailored segmentation on an image-by-image basis, and information derived from operator input is not transferred between images. But recently there has been increasing interest to use machine learning in segmentation to provide interactive tools that accumulate and learn from the operator input over longer periods of time. These new learning algorithms reduce the need for operator input over time, and can potentially provide a more dynamic balance between customization and automation for different applications. This paper reviews the state of the art in this area, provides a unified view of these algorithms, and compares the segmentation performance of various design choices.
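A minimal sketch of the learning-based interactive segmentation idea discussed above: train a pixel classifier from operator scribbles and reuse it on later images, so that accumulated annotations reduce the input needed over time. The feature choice, label encoding and file names are illustrative only.

    import numpy as np
    from skimage import io, filters
    from sklearn.ensemble import RandomForestClassifier

    def pixel_features(img):
        return np.stack([img,
                         filters.gaussian(img, 2),
                         filters.sobel(img)], axis=-1).reshape(-1, 3)

    img = io.imread("micrograph.tif", as_gray=True)
    scribbles = io.imread("scribbles.png")        # 0 unlabeled, 1 object, 2 background

    X = pixel_features(img)
    y = scribbles.reshape(-1)
    clf = RandomForestClassifier(n_estimators=100).fit(X[y > 0], y[y > 0])

    labels = clf.predict(pixel_features(img)).reshape(img.shape)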
Parallel asynchronous systems and image processing algorithms
NASA Technical Reports Server (NTRS)
Coon, D. D.; Perera, A. G. U.
1989-01-01
A new hardware approach to the implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse-coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The research is aimed at the implementation of algorithms, such as the intensity-dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision-system-type architecture. Besides providing a neural network framework for the implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after the raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.
Performance of InGaAs short wave infrared avalanche photodetector for low flux imaging
NASA Astrophysics Data System (ADS)
Singh, Anand; Pal, Ravinder
2017-11-01
The opto-electronic performance of an InGaAs/i-InGaAs/InP short-wavelength infrared focal plane array suitable for high-resolution imaging under low-flux conditions and for ranging is presented. More than 85% quantum efficiency is achieved in the optimized detector structure. The isotropic nature of the wet etching process poses a challenge in maintaining the required control in small-pitch, high-density detector arrays. An etching process is developed to achieve a low dark current density of 1 nA/cm2 in the detector array with 25 µm pitch at 298 K. A noise-equivalent photon count of less than one is achievable, demonstrating single-photon detection capability. The reported photodiode is suitable for active and passive imaging, optical information processing and quantum computing applications at low photon flux.
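As a back-of-envelope consistency check of the quoted figures (assuming the full 25 µm x 25 µm pixel area collects the dark current), the reported 1 nA/cm2 corresponds to only a few femtoamperes per pixel:

    J_dark = 1e-9                  # A/cm^2, reported dark current density
    pixel_area = (25e-4) ** 2      # cm^2, assuming the full 25 um pitch
    I_dark = J_dark * pixel_area   # ~6.3e-15 A per pixel
    electrons_per_s = I_dark / 1.602e-19
    print(I_dark, electrons_per_s)  # ~6.25 fA, ~3.9e4 electrons/s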
MRO's HiRISE Education and Public Outreach during the Primary Science Phase
NASA Astrophysics Data System (ADS)
Gulick, V. C.; Davatzes, A. K.; Deardorff, G.; Kanefsky, B.; Conrad, L. B.; HiRISE Team
2008-12-01
Looking back over one Mars year, we report on the accomplishments of the HiRISE EPO program during the primary science phase of MRO. A highlight has been our student image suggestion program, conducted in association with NASA Quest as HiRISE Image Challenges (http://quest.arc.nasa.gov/challenges/hirise/). During challenges, students, either individually or as part of a collaborative classroom or group, learn about Mars through our webcasts, web chats and our educational material. They use HiWeb, HiRISE's image suggestion facility, to submit image suggestions and include a short rationale for why their target is scientifically interesting. The HiRISE team gives priority to obtaining a sampling of these suggestions as quickly as possible so that the acquired images can be examined by the students. During the challenge, a special password-protected web site allows participants to view their returned images before they are released to the public (http://marsoweb.nas.nasa.gov/hirise/quest/). Students are encouraged to write captions for the returned images. Finished captions are then posted and highlighted on the HiRISE web site (http://hirise.lpl.arizona.edu) along with their class, teacher's name and the name of their school. Through these HiRISE challenges, students and teachers become virtual science team members, participating in the same process (selecting and justifying targets, analyzing and writing captions for acquired images), and using the same software tools as the HiRISE team. Such an experience is unique among planetary exploration EPO programs. To date, we have completed three HiRISE challenges and a fourth is currently ongoing. More than 200 image suggestions were submitted during the previous challenges and over 85 of these image requests have been acquired so far. Over 675 participants from 45 states and 42 countries have registered for the previous challenges. These participants represent over 8000 students in grades 2 through 14 and consist primarily of teachers, parents of homeschoolers and student clubs, college students, and life-long learners. HiRISE Clickworkers (http://clickworkers.arc.nasa.gov/hirise), a citizen science effort is also part of our EPO where volunteers identify geologic features (e.g., dunes, craters, wind streaks, gullies, etc.) in the HiRISE images and help generate searchable image databases. We've also developed the HiRISE online image viewer (http://marsoweb.nas.nasa.gov/HiRISE/hirise_images/) where users can browse, pan and zoom through the very large HiRISE images from within their web browser. Educational materials include an assortment of K through college level, standards-based activity books, a K through 3 coloring/story book, a middle school level comic book, and several interactive educational games, including Mars jigsaw puzzles, crosswords, word searches and flash cards (http://hirise.seti.org/epo). HiRISE team members have given numerous classroom presentations and participated in many other informal educational and public events (e.g., Sally Ride Science Festivals, CA Science teachers conference workshops, NASA's Yuri's Night, Xprize events, University of Arizona's Mars Mania and Phoenix public events). The HiRISE operations team maintains a blog (HiBlog) (http://hirise.lpl.arizona.edu/HiBlog/) providing insights to the pulse of daily activities within the operations center as well as useful information about HiRISE.
Segmentation of the spinous process and its acoustic shadow in vertebral ultrasound images.
Berton, Florian; Cheriet, Farida; Miron, Marie-Claude; Laporte, Catherine
2016-05-01
Spinal ultrasound imaging is emerging as a low-cost, radiation-free alternative to conventional X-ray imaging for the clinical follow-up of patients with scoliosis. Currently, deformity measurement relies almost entirely on manual identification of key vertebral landmarks. However, the interpretation of vertebral ultrasound images is challenging, primarily because acoustic waves are entirely reflected by bone. To alleviate this problem, we propose an algorithm to segment these images into three regions: the spinous process, its acoustic shadow and other tissues. This method consists, first, of the extraction of several image features and the selection of the most relevant ones for discriminating the three regions. Then, using this set of features and linear discriminant analysis, each pixel of the image is classified as belonging to one of the three regions. Finally, the image is segmented by regularizing the pixel-wise classification results to account for some geometrical properties of vertebrae. The feature set was first validated by analyzing the classification results across a learning database. The database contained 107 vertebral ultrasound images acquired with convex and linear probes. Classification rates of 84%, 92% and 91% were achieved for the spinous process, the acoustic shadow and other tissues, respectively. Dice similarity coefficients of 0.72 and 0.88 were obtained for the spinous process and acoustic shadow, respectively, confirming that the proposed method accurately segments the spinous process and its acoustic shadow in vertebral ultrasound images. Furthermore, the centroid of the automatically segmented spinous process was located at an average distance of 0.38 mm from that of the manually labeled spinous process, which is on the order of the image resolution. This suggests that the proposed method is a promising tool for the measurement of the Spinous Process Angle and, more generally, for assisting ultrasound-based assessment of scoliosis progression. Copyright © 2016 Elsevier Ltd. All rights reserved.
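A simplified version of the pixel-wise classification stage described above (linear discriminant analysis on per-pixel features followed by a trivial spatial regularization) is sketched below; the feature set, label encoding and file names are illustrative only and much cruder than the paper's method.

    import numpy as np
    from skimage import io, filters
    from scipy.ndimage import median_filter
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def features(img):
        return np.stack([img,
                         filters.gaussian(img, 3),
                         filters.sobel(img)], axis=-1).reshape(-1, 3)

    img = io.imread("vertebra_us.png", as_gray=True)
    labels = io.imread("training_labels.png")   # 1 spinous process, 2 shadow, 3 other

    lda = LinearDiscriminantAnalysis()
    X, y = features(img), labels.reshape(-1)
    lda.fit(X[y > 0], y[y > 0])

    pred = lda.predict(features(img)).reshape(img.shape)
    pred = median_filter(pred, size=7)           # crude spatial regularization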
Computational characterization of ordered nanostructured surfaces
NASA Astrophysics Data System (ADS)
Mohieddin Abukhdeir, Nasser
2016-08-01
A vital and challenging task for materials researchers is to determine relationships between material characteristics and desired properties. While the measurement and assessment of material properties can be complex, quantitatively characterizing their structure is frequently a more challenging task. This issue is magnified for materials researchers in the areas of nanoscience and nanotechnology, where material structure is further complicated by phenomena such as self-assembly, collective behavior, and measurement uncertainty. Recent progress has been made in this area for both self-assembled and nanostructured surfaces due to increasing accessibility of imaging techniques at the nanoscale. In this context, recent advances in nanomaterial surface structure characterization are reviewed including the development of new theory and image processing methods.
Wangerin, Kristen A; Baratto, Lucia; Khalighi, Mohammad Mehdi; Hope, Thomas A; Gulaka, Praveen K; Deller, Timothy W; Iagaru, Andrei H
2018-06-06
Gallium-68-labeled radiopharmaceuticals pose a challenge for scatter estimation because their targeted nature can produce high contrast in regions such as the kidneys and bladder. Even small errors in the scatter estimate can result in washout artifacts. Administration of diuretics can reduce these artifacts, but they may result in adverse events. Here, we investigated the ability of algorithmic modifications to mitigate washout artifacts and eliminate the need for diuretics or other interventions. The model-based scatter algorithm was modified to account for PET/MRI scanner geometry and the challenges of non-FDG tracers. Fifty-three clinical 68Ga-RM2 and 68Ga-PSMA-11 whole-body images were reconstructed using the baseline scatter algorithm. For comparison, reconstruction was also performed with modified sampling in the single-scatter estimation and with an offset in the scatter tail-scaling process. None of the patients received furosemide to attempt to decrease the accumulation of radiopharmaceuticals in the bladder. The images were scored independently by three blinded reviewers using a 5-point Likert scale. The scatter algorithm improvements significantly decreased or completely eliminated the washout artifacts. When comparing the baseline and the most improved algorithm, image quality increased and image artifacts were reduced for both 68Ga-RM2 and 68Ga-PSMA-11 in the kidney and bladder regions. Image reconstruction with the improved scatter correction algorithm mitigated washout artifacts and recovered diagnostic image quality in 68Ga PET, indicating that the use of diuretics may be avoided.
Methodological challenges and solutions in auditory functional magnetic resonance imaging
Peelle, Jonathan E.
2014-01-01
Functional magnetic resonance imaging (fMRI) studies involve substantial acoustic noise. This review covers the difficulties posed by such noise for auditory neuroscience, as well as a number of possible solutions that have emerged. Acoustic noise can affect the processing of auditory stimuli by making them inaudible or unintelligible, and can result in reduced sensitivity to auditory activation in auditory cortex. Equally importantly, acoustic noise may also lead to increased listening effort, meaning that even when auditory stimuli are perceived, neural processing may differ from when the same stimuli are presented in quiet. These and other challenges have motivated a number of approaches for collecting auditory fMRI data. Although using a continuous echoplanar imaging (EPI) sequence provides high quality imaging data, these data may also be contaminated by background acoustic noise. Traditional sparse imaging has the advantage of avoiding acoustic noise during stimulus presentation, but at a cost of reduced temporal resolution. Recently, three classes of techniques have been developed to circumvent these limitations. The first is Interleaved Silent Steady State (ISSS) imaging, a variation of sparse imaging that involves collecting multiple volumes following a silent period while maintaining steady-state longitudinal magnetization. The second involves active noise control to limit the impact of acoustic scanner noise. Finally, novel MRI sequences that reduce the amount of acoustic noise produced during fMRI make the use of continuous scanning a more practical option. Together these advances provide unprecedented opportunities for researchers to collect high-quality data of hemodynamic responses to auditory stimuli using fMRI. PMID:25191218
Alexander C. Vibrans; Ronald E. McRoberts; Paolo Moser; Adilson L. Nicoletti
2013-01-01
Estimation of large area forest attributes, such as area of forest cover, from remote sensing-based maps is challenging because of image processing, logistical, and data acquisition constraints. In addition, techniques for estimating and compensating for misclassification and estimating uncertainty are often unfamiliar. Forest area for the state of Santa Catarina in...
Advanced metrology by offline SEM data processing
NASA Astrophysics Data System (ADS)
Lakcher, Amine; Schneider, Loïc.; Le-Gratiet, Bertrand; Ducoté, Julien; Farys, Vincent; Besacier, Maxime
2017-06-01
Today's technology nodes contain more and more complex designs, bringing increasing challenges to chip manufacturing process steps. It is necessary to have an efficient metrology to assess the process variability of these complex patterns and thus extract relevant data to generate process-aware design rules and to improve OPC models. Today, process variability is mostly addressed through the analysis of in-line monitoring features, which are often designed to support robust measurements and as a consequence are not always very representative of critical design rules. CD-SEM is the main CD metrology technique used in the chip manufacturing process, but it is challenged when it comes to measuring metrics like tip-to-tip, tip-to-line, areas or necking in high quantity and with robustness. CD-SEM images contain a lot of information that is not always used in metrology. Suppliers have provided tools that allow engineers to extract the SEM contours of their features and to convert them into a GDS. Contours can be seen as the signature of the shape, as they contain all the dimensional data. The methodology is therefore to use the CD-SEM to take high-quality images, then generate SEM contours and create a database from them. The contours are used to feed an offline metrology tool that processes them to extract different metrics. It was shown in two previous papers that it is possible to perform complex measurements on hotspots at different process steps (lithography, etch, copper CMP) by using SEM contours with an in-house offline metrology tool. In the current paper, the methodology presented previously is expanded to improve its robustness and combined with the use of phylogeny to classify the SEM images according to their geometrical proximity.
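In the spirit of the offline contour metrology described above (but not the in-house tool itself), the following sketch extracts contours from a segmented CD-SEM image with scikit-image and measures a simple tip-to-tip-like gap between the two largest features; threshold choice and the gap definition are illustrative assumptions.

    import numpy as np
    from skimage import io, filters, measure

    img = io.imread("sem_image.png", as_gray=True)
    mask = img > filters.threshold_otsu(img)
    contours = measure.find_contours(mask.astype(float), 0.5)   # list of (N, 2) arrays

    # Hypothetical tip-to-tip metric: minimum distance between the two largest contours.
    c1, c2 = sorted(contours, key=len, reverse=True)[:2]
    d = np.min(np.linalg.norm(c1[:, None, :] - c2[None, :, :], axis=-1))
    print("tip-to-tip (pixels):", d)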
3D deeply supervised network for automated segmentation of volumetric medical images.
Dou, Qi; Yu, Lequan; Chen, Hao; Jin, Yueming; Yang, Xin; Qin, Jing; Heng, Pheng-Ann
2017-10-01
While deep convolutional neural networks (CNNs) have achieved remarkable success in 2D medical image segmentation, it is still a difficult task for CNNs to segment important organs or structures from 3D medical images owing to several interrelated challenges, including the complicated anatomical environments in volumetric images, the optimization difficulties of 3D networks and the inadequacy of training samples. In this paper, we present a novel and efficient 3D fully convolutional network equipped with a 3D deep supervision mechanism to comprehensively address these challenges; we call it 3D DSN. Our proposed 3D DSN is capable of conducting volume-to-volume learning and inference, which can eliminate redundant computations and alleviate the risk of over-fitting on limited training data. More importantly, the 3D deep supervision mechanism can effectively cope with the optimization problem of vanishing or exploding gradients when training a 3D deep model, accelerating the convergence speed and simultaneously improving the discrimination capability. Such a mechanism is developed by deriving an objective function that directly guides the training of both lower and upper layers in the network, so that the adverse effects of unstable gradient changes can be counteracted during the training procedure. We also employ a fully connected conditional random field model as a post-processing step to refine the segmentation results. We have extensively validated the proposed 3D DSN on two typical yet challenging volumetric medical image segmentation tasks: (i) liver segmentation from 3D CT scans and (ii) whole heart and great vessels segmentation from 3D MR images, by participating in two grand challenges held in conjunction with MICCAI. We have achieved segmentation results competitive with state-of-the-art approaches in both challenges at a much faster speed, corroborating the effectiveness of our proposed 3D DSN. Copyright © 2017 Elsevier B.V. All rights reserved.
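The core of the deep supervision mechanism is that an auxiliary prediction is produced from an intermediate layer and its loss is added to the main loss, so gradients reach lower layers directly. The minimal PyTorch sketch below illustrates that idea only; the two-layer toy architecture, sizes and loss weight are placeholders, not the 3D DSN network.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Tiny3DDSN(nn.Module):
        def __init__(self, n_classes=2):
            super().__init__()
            self.enc1 = nn.Conv3d(1, 8, 3, padding=1)
            self.enc2 = nn.Conv3d(8, 16, 3, padding=1)
            self.head = nn.Conv3d(16, n_classes, 1)     # main output
            self.aux = nn.Conv3d(8, n_classes, 1)        # deep-supervision output

        def forward(self, x):
            f1 = F.relu(self.enc1(x))
            f2 = F.relu(self.enc2(f1))
            return self.head(f2), self.aux(f1)

    model = Tiny3DDSN()
    x = torch.randn(1, 1, 32, 64, 64)                    # toy 3D volume
    y = torch.randint(0, 2, (1, 32, 64, 64))             # toy voxel labels
    main, aux = model(x)
    loss = F.cross_entropy(main, y) + 0.3 * F.cross_entropy(aux, y)
    loss.backward()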
Accelerating image recognition on mobile devices using GPGPU
NASA Astrophysics Data System (ADS)
Bordallo López, Miguel; Nykänen, Henri; Hannuksela, Jari; Silvén, Olli; Vehviläinen, Markku
2011-01-01
The future multi-modal user interfaces of battery-powered mobile devices are expected to require computationally costly image analysis techniques. Graphics Processing Units are very well suited to parallel processing, and the addition of programmable stages and high-precision arithmetic provides opportunities to implement complete, energy-efficient algorithms. The first mobile graphics accelerators with programmable pipelines are now available, enabling the GPGPU implementation of several image processing algorithms. In this context, we consider a face tracking approach that uses efficient gray-scale invariant texture features and boosting. The solution is based on Local Binary Pattern (LBP) features and makes use of the GPU in the pre-processing and feature extraction phase. We have implemented a series of image processing techniques in the shader language of OpenGL ES 2.0, compiled them for a mobile graphics processing unit and performed tests on a mobile application processor platform (OMAP3530). In our contribution, we describe the challenges of designing on a mobile platform, present the performance achieved and provide measurement results for the actual power consumption in comparison to using the CPU (ARM) on the same platform.
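For readers unfamiliar with the texture features mentioned above, the LBP computation itself is a few lines on the CPU with scikit-image (the paper's contribution is moving the pre-processing and feature extraction into OpenGL ES shaders on the GPU); the file name and histogram pooling are illustrative.

    import numpy as np
    from skimage import io
    from skimage.feature import local_binary_pattern

    img = io.imread("frame.png", as_gray=True)
    lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=np.arange(11), density=True)  # 10 uniform-LBP bins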
Classification of melanoma lesions using sparse coded features and random forests
NASA Astrophysics Data System (ADS)
Rastgoo, Mojdeh; Lemaître, Guillaume; Morel, Olivier; Massich, Joan; Garcia, Rafael; Meriaudeau, Fabrice; Marzani, Franck; Sidibé, Désiré
2016-03-01
Malignant melanoma is the most dangerous type of skin cancer, yet it is also among the most treatable, provided it is diagnosed early, which remains a challenging task for clinicians and dermatologists. In this regard, CAD systems based on machine learning and image processing techniques have been developed to differentiate melanoma lesions from benign and dysplastic nevi using dermoscopic images. Generally, these frameworks are composed of sequential processes: pre-processing, segmentation, and classification. This architecture faces two main challenges: (i) each process is complex, needs a set of parameters to be tuned, and is specific to a given dataset; (ii) the performance of each process depends on the previous one, and errors accumulate throughout the framework. In this paper, we propose a framework for melanoma classification based on sparse coding which does not rely on any pre-processing or lesion segmentation. Our framework uses a Random Forests classifier and sparse representations of three features: SIFT, Hue and Opponent angle histograms, and RGB intensities. The experiments are carried out on the public PH2 dataset using 10-fold cross-validation. The results show that the sparse-coded SIFT feature achieves the highest performance, with sensitivity and specificity of 100% and 90.3% respectively, with a dictionary size of 800 atoms and a sparsity level of 2. Furthermore, the descriptor based on RGB intensities achieves similar results, with sensitivity and specificity of 100% and 71.3% respectively, for a smaller dictionary size of 100 atoms. In conclusion, dictionary learning techniques encode strong structures of dermoscopic images and provide discriminant descriptors.
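A conceptual scikit-learn sketch of such a pipeline follows: learn a dictionary, encode descriptors with a sparsity level of 2 via OMP, pool the codes per image, and classify with a random forest. The random arrays stand in for real SIFT descriptors and labels, and the pooling choice is an assumption rather than the paper's exact scheme.

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.ensemble import RandomForestClassifier

    descriptors = np.random.rand(5000, 128)           # stand-in for SIFT descriptors
    dico = MiniBatchDictionaryLearning(n_components=800,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=2)
    dico.fit(descriptors)

    def encode_image(desc):
        codes = dico.transform(desc)                   # sparse code per descriptor
        return np.abs(codes).max(axis=0)               # max pooling over the image

    X = np.stack([encode_image(np.random.rand(200, 128)) for _ in range(40)])
    y = np.random.randint(0, 2, 40)                    # melanoma vs benign labels
    clf = RandomForestClassifier(n_estimators=200).fit(X, y)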
Dutta, Sayon; Long, William J; Brown, David F M; Reisner, Andrew T
2013-08-01
As use of radiology studies increases, there is a concurrent increase in incidental findings (eg, lung nodules) for which the radiologist issues recommendations for additional imaging for follow-up. Busy emergency physicians may be challenged to carefully communicate recommendations for additional imaging not relevant to the patient's primary evaluation. The emergence of electronic health records and natural language processing algorithms may help address this quality gap. We seek to describe recommendations for additional imaging from our institution and develop and validate an automated natural language processing algorithm to reliably identify recommendations for additional imaging. We developed a natural language processing algorithm to detect recommendations for additional imaging, using 3 iterative cycles of training and validation. The third cycle used 3,235 radiology reports (1,600 for algorithm training and 1,635 for validation) of discharged emergency department (ED) patients from which we determined the incidence of discharge-relevant recommendations for additional imaging and the frequency of appropriate discharge documentation. The test characteristics of the 3 natural language processing algorithm iterations were compared, using blinded chart review as the criterion standard. Discharge-relevant recommendations for additional imaging were found in 4.5% (95% confidence interval [CI] 3.5% to 5.5%) of ED radiology reports, but 51% (95% CI 43% to 59%) of discharge instructions failed to note those findings. The final natural language processing algorithm had 89% (95% CI 82% to 94%) sensitivity and 98% (95% CI 97% to 98%) specificity for detecting recommendations for additional imaging. For discharge-relevant recommendations for additional imaging, sensitivity improved to 97% (95% CI 89% to 100%). Recommendations for additional imaging are common, and failure to document relevant recommendations for additional imaging in ED discharge instructions occurs frequently. The natural language processing algorithm's performance improved with each iteration and offers a promising error-prevention tool. Copyright © 2013 American College of Emergency Physicians. Published by Mosby, Inc. All rights reserved.
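For illustration of the detection task only, a toy rule-based detector is sketched below. The study's algorithm was a natural language processing system refined over iterative training and validation cycles; the regular expressions here are purely hypothetical and far simpler than that system.

    import re

    PATTERNS = [
        r"recommend(ed)?\s+(a\s+)?(follow[- ]up|repeat|dedicated|further)?\s*(ct|mri|ultrasound|imaging)",
        r"follow[- ]up\s+imaging\s+is\s+(recommended|advised|suggested)",
    ]

    def flags_additional_imaging(report_text: str) -> bool:
        text = report_text.lower()
        return any(re.search(p, text) for p in PATTERNS)

    print(flags_additional_imaging(
        "Incidental 6 mm lung nodule. Follow-up imaging is recommended in 6 months."))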
Deep learning methods for CT image-domain metal artifact reduction
NASA Astrophysics Data System (ADS)
Gjesteby, Lars; Yang, Qingsong; Xi, Yan; Shan, Hongming; Claus, Bernhard; Jin, Yannan; De Man, Bruno; Wang, Ge
2017-09-01
Artifacts resulting from metal objects have been a persistent problem in CT images over the last four decades. A common approach to overcome their effects is to replace corrupt projection data with values synthesized from an interpolation scheme or by reprojection of a prior image. State-of-the-art correction methods, such as the interpolation- and normalization-based algorithm NMAR, often do not produce clinically satisfactory results. Residual image artifacts remain in challenging cases and even new artifacts can be introduced by the interpolation scheme. Metal artifacts continue to be a major impediment, particularly in radiation and proton therapy planning as well as orthopedic imaging. A new solution to the long-standing metal artifact reduction (MAR) problem is deep learning, which has been successfully applied to medical image processing and analysis tasks. In this study, we combine a convolutional neural network (CNN) with the state-of-the-art NMAR algorithm to reduce metal streaks in critical image regions. Training data was synthesized from CT simulation scans of a phantom derived from real patient images. The CNN is able to map metal-corrupted images to artifact-free monoenergetic images to achieve additional correction on top of NMAR for improved image quality. Our results indicate that deep learning is a novel tool to address CT reconstruction challenges, and may enable more accurate tumor volume estimation for radiation therapy planning.
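A minimal PyTorch sketch of the image-domain correction idea follows: a small CNN maps a still-streaky (e.g. NMAR-corrected) patch toward an artifact-free target by predicting a residual. The architecture, sizes and random tensors are placeholders, not the network used in the paper.

    import torch
    import torch.nn as nn

    class StreakReducer(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 3, padding=1))

        def forward(self, x):
            return x + self.net(x)           # predict a residual correction

    model = StreakReducer()
    corrupted = torch.randn(8, 1, 64, 64)    # stand-in for NMAR-corrected patches
    clean = torch.randn(8, 1, 64, 64)        # stand-in for artifact-free targets
    loss = nn.functional.mse_loss(model(corrupted), clean)
    loss.backward()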
NASA Astrophysics Data System (ADS)
Patel, M. N.; Looney, P.; Young, K.; Halling-Brown, M. D.
2014-03-01
Radiological imaging is fundamental within the healthcare industry and has become routinely adopted for diagnosis, disease monitoring and treatment planning. Over the past two decades both diagnostic and therapeutic imaging have undergone rapid growth; the ability to harness this large influx of medical images can provide an essential resource for research and training. Traditionally, the systematic collection of medical images for research from heterogeneous sites has not been commonplace within the NHS and is fraught with challenges including data acquisition, storage, secure transfer and correct anonymisation. Here, we describe a semi-automated system which comprehensively oversees the collection of both unprocessed and processed medical images from acquisition to a centralised database. The provision of unprocessed images within our repository enables a multitude of potential research possibilities that utilise the images. Furthermore, we have developed systems and software to integrate these data with their associated clinical data and annotations, providing a centralised dataset for research. Currently we regularly collect digital mammography images from two sites and partially collect from a further three, with efforts to expand into other modalities and sites ongoing. At present we have collected 34,014 2D images from 2,623 individuals. In this paper we describe our medical image collection system for research and discuss the wide spectrum of challenges faced during the design and implementation of such systems.
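An anonymisation step of the kind such a collection system needs before central storage can be written with pydicom as below; the tag selection is a minimal illustration and not the project's actual de-identification profile, and the file names are assumptions.

    import pydicom

    ds = pydicom.dcmread("mammo_raw.dcm")
    for tag in ("PatientName", "PatientID", "PatientBirthDate", "AccessionNumber"):
        if tag in ds:
            setattr(ds, tag, "")          # blank out direct identifiers
    ds.remove_private_tags()
    ds.save_as("mammo_anon.dcm")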
A Two-Stream Deep Fusion Framework for High-Resolution Aerial Scene Classification
Yu, Yunlong; Liu, Fuxian
2018-01-01
One of the challenging problems in understanding high-resolution remote sensing images is aerial scene classification. A well-designed feature representation method and classifier can improve classification accuracy. In this paper, we construct a new two-stream deep architecture for aerial scene classification. First, we use two pretrained convolutional neural networks (CNNs) as feature extractor to learn deep features from the original aerial image and the processed aerial image through saliency detection, respectively. Second, two feature fusion strategies are adopted to fuse the two different types of deep convolutional features extracted by the original RGB stream and the saliency stream. Finally, we use the extreme learning machine (ELM) classifier for final classification with the fused features. The effectiveness of the proposed architecture is tested on four challenging datasets: UC-Merced dataset with 21 scene categories, WHU-RS dataset with 19 scene categories, AID dataset with 30 scene categories, and NWPU-RESISC45 dataset with 45 challenging scene categories. The experimental results demonstrate that our architecture gets a significant classification accuracy improvement over all state-of-the-art references. PMID:29581722
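The final fusion-and-classification stage described above can be sketched in a few lines of numpy: concatenate the two streams' deep features and train an extreme learning machine (random hidden layer plus closed-form least-squares output weights). The feature arrays, hidden size and label counts below are stand-ins, not the paper's settings.

    import numpy as np

    rgb_feat = np.random.rand(200, 4096)        # features from the RGB stream
    sal_feat = np.random.rand(200, 4096)        # features from the saliency stream
    X = np.hstack([rgb_feat, sal_feat])         # simple concatenation fusion
    y = np.random.randint(0, 21, 200)           # scene labels (e.g., 21 UC-Merced classes)

    n_hidden = 1000
    W = np.random.randn(X.shape[1], n_hidden)   # random, untrained hidden weights
    H = np.tanh(X @ W)
    T = np.eye(21)[y]                           # one-hot targets
    beta = np.linalg.pinv(H) @ T                # closed-form ELM output weights

    pred = np.argmax(np.tanh(X @ W) @ beta, axis=1)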
A GPU accelerated PDF transparency engine
NASA Astrophysics Data System (ADS)
Recker, John; Lin, I.-Jong; Tastl, Ingeborg
2011-01-01
As commercial printing presses become faster, cheaper and more efficient, so too must the Raster Image Processors (RIPs) that prepare data for them to print. Digital press RIPs, however, are challenged on the one hand to meet the ever-increasing print performance of the latest digital presses, and on the other hand to process increasingly complex documents with transparent layers and embedded ICC profiles. This paper explores the challenges encountered when implementing a GPU-accelerated driver for the open-source Ghostscript Adobe PostScript and PDF language interpreter, targeted at accelerating PDF transparency for high-speed commercial presses. It further describes our solution, including an image memory manager for tiling input and output images and documents, a PDF-compatible multiple-image-layer blending engine, and a GPU-accelerated ICC v4 compatible color transformation engine. The result, we believe, is the foundation for a scalable, efficient, distributed RIP system that can meet current and future RIP requirements for a wide range of commercial digital presses.
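To make the transparency compositing concrete, a CPU reference of the Porter-Duff "over" operator (the basic operation a blending engine applies per tile) is shown below with premultiplied alpha; this is a generic numpy illustration, not the GPU engine described in the paper.

    import numpy as np

    def over(src_rgb, src_a, dst_rgb, dst_a):
        src_p = src_rgb * src_a[..., None]                # premultiply source
        dst_p = dst_rgb * dst_a[..., None]                # premultiply destination
        out_a = src_a + dst_a * (1.0 - src_a)
        out_p = src_p + dst_p * (1.0 - src_a)[..., None]
        out_rgb = out_p / np.maximum(out_a[..., None], 1e-9)
        return out_rgb, out_a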
Visual Attention and Applications in Multimedia Technologies
Le Callet, Patrick; Niebur, Ernst
2013-01-01
Making technological advances in the field of human-machine interactions requires that the capabilities and limitations of the human perceptual system are taken into account. The focus of this report is an important mechanism of perception, visual selective attention, which is becoming more and more important for multimedia applications. We introduce the concept of visual attention and describe its underlying mechanisms. In particular, we introduce the concepts of overt and covert visual attention, and of bottom-up and top-down processing. Challenges related to modeling visual attention and their validation using ad hoc ground truth are also discussed. Examples of the usage of visual attention models in image and video processing are presented. We emphasize multimedia delivery, retargeting and quality assessment of image and video, medical imaging, and the field of stereoscopic 3D images applications. PMID:24489403
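As a compact example of a bottom-up saliency model of the kind used in the image and video applications mentioned above, the spectral residual approach of Hou and Zhang can be written in a few lines; the filter sizes are typical defaults rather than tuned values, and this is not tied to any specific model from the report.

    import numpy as np
    from scipy.ndimage import uniform_filter, gaussian_filter

    def spectral_residual_saliency(gray):
        f = np.fft.fft2(gray)
        log_amp = np.log(np.abs(f) + 1e-9)
        residual = log_amp - uniform_filter(log_amp, size=3)   # spectral residual
        sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * np.angle(f)))) ** 2
        return gaussian_filter(sal, sigma=3)                    # smoothed saliency map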
MO-B-BRB-01: Optimize Treatment Planning Process in Clinical Environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, W.
The radiotherapy treatment planning process has evolved over the years with innovations in treatment planning, treatment delivery and imaging systems. Treatment modality and simulation technologies are also rapidly improving and affecting the planning process. For example, image-guided radiation therapy has been widely adopted for patient setup, leading to margin reduction and isocenter repositioning after simulation. Stereotactic Body Radiation Therapy (SBRT) and Radiosurgery (SRS) have gradually become the standard of care for many treatment sites, which demand a higher throughput for treatment plans even if the number of treatments per day remains the same. Finally, simulation, planning and treatment are traditionally sequential events. However, with emerging adaptive radiotherapy, they are becoming more tightly intertwined, leading to iterative processes. Enhanced planning efficiency is therefore becoming more critical and poses a serious challenge to the treatment planning process; Lean Six Sigma approaches are being utilized increasingly to balance the competing needs for speed and quality. In this symposium we will discuss the treatment planning process and illustrate effective techniques for managing workflow. Topics will include: Planning techniques: (a) beam placement, (b) dose optimization, (c) plan evaluation, (d) export to RVS. Planning workflow: (a) import images, (b) image fusion, (c) contouring, (d) plan approval, (e) plan check, (f) chart check, (g) sequential and iterative processes. Influence of upstream and downstream operations: (a) simulation, (b) immobilization, (c) motion management, (d) QA, (e) IGRT, (f) treatment delivery, (g) SBRT/SRS, (h) adaptive planning. Reduction of delay between planning steps with Lean systems due to (a) communication, (b) limited resources, (c) contouring, (d) plan approval, (e) treatment. Optimizing planning processes: (a) contour validation, (b) consistent planning protocols, (c) protocol/template sharing, (d) semi-automatic plan evaluation, (e) quality checklists for error prevention, (f) iterative processes, (g) balance of speed and quality. Learning Objectives: Gain familiarity with the workflow of the modern treatment planning process. Understand the scope and challenges of managing modern treatment planning processes. Gain familiarity with Lean Six Sigma approaches and their implementation in the treatment planning workflow.
MO-B-BRB-00: Optimizing the Treatment Planning Process
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
The radiotherapy treatment planning process has evolved over the years with innovations in treatment planning, treatment delivery and imaging systems. Treatment modality and simulation technologies are also rapidly improving and affecting the planning process. For example, image-guided radiation therapy has been widely adopted for patient setup, leading to margin reduction and isocenter repositioning after simulation. Stereotactic Body Radiation Therapy (SBRT) and Radiosurgery (SRS) have gradually become the standard of care for many treatment sites, which demand a higher throughput for treatment plans even if the number of treatments per day remains the same. Finally, simulation, planning and treatment are traditionally sequential events. However, with emerging adaptive radiotherapy, they are becoming more tightly intertwined, leading to iterative processes. Enhanced planning efficiency is therefore becoming more critical and poses a serious challenge to the treatment planning process; Lean Six Sigma approaches are being utilized increasingly to balance the competing needs for speed and quality. In this symposium we will discuss the treatment planning process and illustrate effective techniques for managing workflow. Topics will include: Planning techniques: (a) beam placement, (b) dose optimization, (c) plan evaluation, (d) export to RVS. Planning workflow: (a) import images, (b) image fusion, (c) contouring, (d) plan approval, (e) plan check, (f) chart check, (g) sequential and iterative processes. Influence of upstream and downstream operations: (a) simulation, (b) immobilization, (c) motion management, (d) QA, (e) IGRT, (f) treatment delivery, (g) SBRT/SRS, (h) adaptive planning. Reduction of delay between planning steps with Lean systems due to (a) communication, (b) limited resources, (c) contouring, (d) plan approval, (e) treatment. Optimizing planning processes: (a) contour validation, (b) consistent planning protocols, (c) protocol/template sharing, (d) semi-automatic plan evaluation, (e) quality checklists for error prevention, (f) iterative processes, (g) balance of speed and quality. Learning Objectives: Gain familiarity with the workflow of the modern treatment planning process. Understand the scope and challenges of managing modern treatment planning processes. Gain familiarity with Lean Six Sigma approaches and their implementation in the treatment planning workflow.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kapur, A.
The radiotherapy treatment planning process has evolved over the years with innovations in treatment planning, treatment delivery and imaging systems. Treatment modality and simulation technologies are also rapidly improving and affecting the planning process. For example, image-guided radiation therapy has been widely adopted for patient setup, leading to margin reduction and isocenter repositioning after simulation. Stereotactic Body Radiation Therapy (SBRT) and Radiosurgery (SRS) have gradually become the standard of care for many treatment sites, which demand a higher throughput for treatment plans even if the number of treatments per day remains the same. Finally, simulation, planning and treatment are traditionally sequential events. However, with emerging adaptive radiotherapy, they are becoming more tightly intertwined, leading to iterative processes. Enhanced planning efficiency is therefore becoming more critical and poses a serious challenge to the treatment planning process; Lean Six Sigma approaches are being utilized increasingly to balance the competing needs for speed and quality. In this symposium we will discuss the treatment planning process and illustrate effective techniques for managing workflow. Topics will include: Planning techniques: (a) beam placement, (b) dose optimization, (c) plan evaluation, (d) export to RVS. Planning workflow: (a) import images, (b) image fusion, (c) contouring, (d) plan approval, (e) plan check, (f) chart check, (g) sequential and iterative processes. Influence of upstream and downstream operations: (a) simulation, (b) immobilization, (c) motion management, (d) QA, (e) IGRT, (f) treatment delivery, (g) SBRT/SRS, (h) adaptive planning. Reduction of delay between planning steps with Lean systems due to (a) communication, (b) limited resources, (c) contouring, (d) plan approval, (e) treatment. Optimizing planning processes: (a) contour validation, (b) consistent planning protocols, (c) protocol/template sharing, (d) semi-automatic plan evaluation, (e) quality checklists for error prevention, (f) iterative processes, (g) balance of speed and quality. Learning Objectives: Gain familiarity with the workflow of the modern treatment planning process. Understand the scope and challenges of managing modern treatment planning processes. Gain familiarity with Lean Six Sigma approaches and their implementation in the treatment planning workflow.
Fabrication of absorption gratings with X-ray lithography for X-ray phase contrast imaging
NASA Astrophysics Data System (ADS)
Wang, Bo; Wang, Yu-Ting; Yi, Fu-Ting; Zhang, Tian-Chong; Liu, Jing; Zhou, Yue
2018-05-01
Grating-based X-ray phase contrast imaging is promising, especially for medical applications. Two or three gratings are involved in grating-based X-ray phase contrast imaging, among which the high-aspect-ratio absorption grating is the most important device and its fabrication process is a great challenge. A material with a large atomic number Z is needed for the absorption grating to absorb X-rays efficiently, and Au is usually used. The fabrication process, which involves X-ray lithography, development and gold electroplating, is described in this paper. Absorption gratings with a 4 μm period and about 100 μm height are fabricated, corresponding to an aspect ratio of about 50 (roughly 100 μm structure height over a 2 μm gold line width).
Chang, Catie; Raven, Erika P.; Duyn, Jeff H.
2016-01-01
Magnetic resonance imaging (MRI) at ultra-high field (UHF) strengths (7 T and above) offers unique opportunities for studying the human brain with increased spatial resolution, contrast and sensitivity. However, its reliability can be compromised by factors such as head motion, image distortion and non-neural fluctuations of the functional MRI signal. The objective of this review is to provide a critical discussion of the advantages and trade-offs associated with UHF imaging, focusing on the application to studying brain–heart interactions. We describe how UHF MRI may provide contrast and resolution benefits for measuring neural activity of regions involved in the control and mediation of autonomic processes, and in delineating such regions based on anatomical MRI contrast. Limitations arising from confounding signals are discussed, including challenges with distinguishing non-neural physiological effects from the neural signals of interest that reflect cardiorespiratory function. We also consider how recently developed data analysis techniques may be applied to high-field imaging data to uncover novel information about brain–heart interactions. PMID:27044994
Advances in Monitoring Cell-Based Therapies with Magnetic Resonance Imaging: Future Perspectives
Ngen, Ethel J.; Artemov, Dmitri
2017-01-01
Cell-based therapies are currently being developed for applications in both regenerative medicine and in oncology. Preclinical, translational, and clinical research on cell-based therapies will benefit tremendously from novel imaging approaches that enable the effective monitoring of the delivery, survival, migration, biodistribution, and integration of transplanted cells. Magnetic resonance imaging (MRI) offers several advantages over other imaging modalities for elucidating the fate of transplanted cells both preclinically and clinically. These advantages include the ability to image transplanted cells longitudinally at high spatial resolution without exposure to ionizing radiation, and the possibility to co-register anatomical structures with molecular processes and functional changes. However, since cellular MRI is still in its infancy, it currently faces a number of challenges, which provide avenues for future research and development. In this review, we describe the basic principle of cell-tracking with MRI; explain the different approaches currently used to monitor cell-based therapies; describe currently available MRI contrast generation mechanisms and strategies for monitoring transplanted cells; discuss some of the challenges in tracking transplanted cells; and suggest future research directions. PMID:28106829
Combining endoscopic ultrasound with Time-Of-Flight PET: The EndoTOFPET-US Project
NASA Astrophysics Data System (ADS)
Frisch, Benjamin
2013-12-01
The EndoTOFPET-US collaboration develops a multimodal imaging technique for endoscopic exams of the pancreas or the prostate. It combines the benefits of high resolution metabolic imaging with Time-Of-Flight Positron Emission Tomography (TOF PET) and anatomical imaging with ultrasound (US). EndoTOFPET-US consists of a PET head extension for a commercial US endoscope and a PET plate outside the body in coincidence with the head. The high level of miniaturization and integration creates challenges in fields such as scintillating crystals, ultra-fast photo-detection, highly integrated electronics, system integration and image reconstruction. Amongst the developments, fast scintillators as well as fast and compact digital SiPMs with single SPAD readout are used to obtain the best coincidence time resolution (CTR). Highly integrated ASICs and DAQ electronics contribute to the timing performances of EndoTOFPET. In view of the targeted resolution of around 1 mm in the reconstructed image, we present a prototype detector system with a CTR better than 240 ps FWHM. We discuss the challenges in simulating such a system and introduce reconstruction algorithms based on graphics processing units (GPU).
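A back-of-envelope relation helps connect the reported coincidence time resolution to image reconstruction: the uncertainty of the annihilation point along the line of response is roughly c·Δt/2, so a 240 ps FWHM CTR localizes events to about 3.6 cm FWHM, which is why further CTR improvements matter for the targeted ~1 mm reconstruction. A small check (assumed values only):

    c = 3.0e8                        # speed of light, m/s
    ctr = 240e-12                    # coincidence time resolution, s (FWHM)
    dx = c * ctr / 2.0
    print("localization FWHM along LOR: %.1f cm" % (dx * 100))   # ~3.6 cm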
UAV-Based Hyperspectral Remote Sensing for Precision Agriculture: Challenges and Opportunities
NASA Astrophysics Data System (ADS)
Angel, Y.; Parkes, S. D.; Turner, D.; Houborg, R.; Lucieer, A.; McCabe, M.
2017-12-01
Modern agricultural production relies on monitoring crop status by observing and measuring variables such as soil condition, plant health, fertilizer and pesticide effects, irrigation and crop yield. Managing all of these factors is a considerable challenge for crop producers. As such, providing integrated technological solutions that enable improved diagnostics of field condition, to maximize profits while minimizing environmental impacts, would be of much interest. Such challenges can be addressed by implementing remote sensing systems such as hyperspectral imaging to produce precise biophysical indicator maps across the various cycles of crop development. Recent progress in unmanned aerial vehicles (UAVs) has advanced traditional satellite-based capabilities, providing a capacity for high spatial, spectral and temporal response. However, while some hyperspectral sensors have been developed for use onboard UAVs, significant investment is required to develop a system and a data processing workflow that retrieves accurately georeferenced mosaics. Here we explore the use of a pushbroom hyperspectral camera integrated on board a multi-rotor UAV system to measure surface reflectance in 272 distinct spectral bands across a wavelength range spanning 400-1000 nm, and outline the requirements for sensor calibration, integration onto a stable UAV platform providing accurate positional data, flight planning, and the development of data post-processing workflows for georeferenced mosaics. The provision of high-quality, geo-corrected imagery facilitates the development of vegetation-health metrics that can be used to identify potential problems such as production inefficiencies, diseases and nutrient deficiencies, and enables other data streams for improved crop management. Immense opportunities remain to be exploited in the implementation of UAV-based hyperspectral sensing (and its combination with other imaging systems) to provide a transferable and scalable integrated framework for crop growth monitoring and yield prediction. Here we explore some of the challenges and issues in translating the available technological capacity into a useful and usable image collection and processing flow-path that enables these potential applications to be better realized.
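As a simple example of deriving a vegetation-health metric from such a georeferenced reflectance cube, NDVI can be computed from the nearest red and near-infrared bands; the cube shape, wavelength grid and band choices below are illustrative for a 400-1000 nm, 272-band pushbroom sensor rather than taken from the abstract.

    import numpy as np

    cube = np.random.rand(272, 500, 600)                  # (bands, rows, cols) reflectance
    wavelengths = np.linspace(400, 1000, 272)             # nm, assumed band centers

    red = cube[np.argmin(np.abs(wavelengths - 670))]
    nir = cube[np.argmin(np.abs(wavelengths - 800))]
    ndvi = (nir - red) / np.maximum(nir + red, 1e-6)      # per-pixel NDVI map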
Paul, Anna-Lisa; Bamsey, Matthew; Berinstain, Alain; Braham, Stephen; Neron, Philip; Murdoch, Trevor; Graham, Thomas; Ferl, Robert J
2008-04-18
The use of engineered plants as biosensors has made elegant strides in the past decades, providing keen insights into the health of plants in general and particularly in the nature and cellular location of stress responses. However, most of the analytical procedures involve laboratory examination of the biosensor plants. With the advent of the green fluorescent protein (GFP) as a biosensor molecule, it became at least theoretically possible for analyses of gene expression to occur telemetrically, with the gene expression information of the plant delivered to the investigator over large distances simply as properly processed fluorescence images. Spaceflight and other extraterrestrial environments provide unique challenges to plant life, challenges that often require changes at the gene expression level to accommodate adaptation and survival. Having previously deployed transgenic plant biosensors to evaluate responses to orbital spaceflight, we wished to develop the plants and especially the imaging devices required to conduct such experiments robotically, without operator intervention, within extraterrestrial environments. This requires the development of an autonomous and remotely operated plant GFP imaging system and concomitant development of the communications infrastructure to manage dataflow from the imaging device. Here we report the results of deploying a prototype GFP imaging system within the Arthur Clarke Mars Greenhouse (ACMG), an autonomously operated greenhouse located within the Haughton Mars Project in the Canadian High Arctic. Results both demonstrate the applicability of the fundamental GFP biosensor technology and highlight the difficulties in collecting and managing telemetric data from challenging deployment environments.
High performance computing environment for multidimensional image analysis
Rao, A Ravishankar; Cecchi, Guillermo A; Magnasco, Marcelo
2007-01-01
Background: The processing of images acquired through microscopy is a challenging task due to the large size of datasets (several gigabytes) and the fast turnaround time required. If the throughput of the image processing stage is significantly increased, it can have a major impact in microscopy applications. Results: We present a high performance computing (HPC) solution to this problem. This involves decomposing the spatial 3D image into segments that are assigned to unique processors, and matched to the 3D torus architecture of the IBM Blue Gene/L machine. Communication between segments is restricted to the nearest neighbors. When running on a 2 GHz Intel CPU, the task of 3D median filtering on a typical 256 megabyte dataset takes two and a half hours, whereas by using 1024 nodes of Blue Gene, this task can be performed in 18.8 seconds, a 478× speedup. Conclusion: Our parallel solution dramatically improves the performance of image processing, feature extraction and 3D reconstruction tasks. This increased throughput permits biologists to conduct unprecedented large scale experiments with massive datasets. PMID:17634099
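The abstract describes the key idea, block decomposition with nearest-neighbor (halo) communication, without giving implementation details. The sketch below mimics that decomposition serially on one machine: each block is filtered with a halo wide enough to make the result identical to a global median filter. It is a stand-in for intuition, not the authors' Blue Gene/L MPI code; the block counts and filter size are illustrative.

```python
# Minimal sketch of the decomposition idea: split a 3D volume into blocks,
# median-filter each block with a halo of width size//2, then reassemble.
# Serial stand-in for the nearest-neighbor message passing used on Blue Gene/L.
import numpy as np
from scipy.ndimage import median_filter

def blocked_median_filter(volume, size=3, blocks=(2, 2, 2)):
    halo = size // 2
    out = np.empty_like(volume)
    z_edges = np.linspace(0, volume.shape[0], blocks[0] + 1, dtype=int)
    y_edges = np.linspace(0, volume.shape[1], blocks[1] + 1, dtype=int)
    x_edges = np.linspace(0, volume.shape[2], blocks[2] + 1, dtype=int)
    for zi in range(blocks[0]):
        for yi in range(blocks[1]):
            for xi in range(blocks[2]):
                z0, z1 = z_edges[zi], z_edges[zi + 1]
                y0, y1 = y_edges[yi], y_edges[yi + 1]
                x0, x1 = x_edges[xi], x_edges[xi + 1]
                # Expand the block by the halo (clipped at the volume borders);
                # this plays the role of the neighbor-to-neighbor exchange.
                zs, ys, xs = max(z0 - halo, 0), max(y0 - halo, 0), max(x0 - halo, 0)
                ze = min(z1 + halo, volume.shape[0])
                ye = min(y1 + halo, volume.shape[1])
                xe = min(x1 + halo, volume.shape[2])
                filtered = median_filter(volume[zs:ze, ys:ye, xs:xe], size=size)
                out[z0:z1, y0:y1, x0:x1] = filtered[z0 - zs:z0 - zs + (z1 - z0),
                                                    y0 - ys:y0 - ys + (y1 - y0),
                                                    x0 - xs:x0 - xs + (x1 - x0)]
    return out

if __name__ == "__main__":
    vol = np.random.rand(64, 64, 64).astype(np.float32)
    assert np.allclose(blocked_median_filter(vol), median_filter(vol, size=3))
```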
Development of Neuromorphic Sift Operator with Application to High Speed Image Matching
NASA Astrophysics Data System (ADS)
Shankayi, M.; Saadatseresht, M.; Bitetto, M. A. V.
2015-12-01
There has always been a speed/accuracy trade-off in the photogrammetric mapping process, including feature detection and matching. Most research has improved algorithm speed through simplifications or software modifications that affect the accuracy of the image matching process. This research instead tries to improve speed without changing the accuracy of the same algorithm, using neuromorphic techniques. We have developed a general design of a neuromorphic ASIC to handle algorithms such as SIFT, and we have investigated the neural assignment in each step of the SIFT algorithm. With a rough estimation based on the delays of the elements used, including MACs and comparators, we have estimated the resulting chip's performance for three scenarios: Full HD video (videogrammetry), 24 MP (UAV photogrammetry), and an 88 MP image sequence. Our estimations led to approximately 3000 fps for Full HD video, 250 fps for the 24 MP image sequence and 68 fps for the 88 MP UltraCam image sequence, which would be a huge improvement for current photogrammetric processing systems. We also estimated a power consumption of less than 10 watts, far below that of current workflows.
Xie, Hongtu; Shi, Shaoying; Xiao, Hui; Xie, Chao; Wang, Feng; Fang, Qunle
2016-01-01
With the rapid development of one-stationary bistatic forward-looking synthetic aperture radar (OS-BFSAR) technology, the huge amount of remote sensing data presents challenges for real-time imaging processing. In this paper, an efficient time-domain algorithm (ETDA) that accounts for motion errors in OS-BFSAR imaging processing is presented. This method can not only precisely handle the large spatial variances, serious range-azimuth coupling and motion errors, but can also greatly improve the imaging efficiency compared with the direct time-domain algorithm (DTDA). Besides, it represents the subimages on polar grids in the ground plane instead of the slant-range plane, and derives the sampling requirements considering motion errors for the polar grids to offer a near-optimum tradeoff between imaging precision and efficiency. First, the OS-BFSAR imaging geometry is built, and the DTDA for OS-BFSAR imaging is provided. Second, the polar grids of the subimages are defined, and the subaperture imaging in the ETDA is derived. The sampling requirements for the polar grids are derived from the point of view of the bandwidth. Finally, the implementation and computational load of the proposed ETDA are analyzed. Experimental results based on simulated and measured data validate that the proposed ETDA outperforms the DTDA in terms of efficiency. PMID:27845757
Automatic Near-Real-Time Image Processing Chain for Very High Resolution Optical Satellite Data
NASA Astrophysics Data System (ADS)
Ostir, K.; Cotar, K.; Marsetic, A.; Pehani, P.; Perse, M.; Zaksek, K.; Zaletelj, J.; Rodic, T.
2015-04-01
In response to the increasing need for automatic and fast satellite image processing, SPACE-SI has developed and implemented a fully automatic image processing chain, STORM, that performs all processing steps from sensor-corrected optical images (level 1) to web-delivered map-ready images and products without operator intervention. Initial development was tailored to high resolution RapidEye images, and all crucial and most challenging parts of the planned full processing chain were developed: a module for automatic image orthorectification based on a physical sensor model and supported by an algorithm for automatic detection of ground control points (GCPs); an atmospheric correction module; a topographic correction module that combines a physical approach with the Minnaert method and utilizes an anisotropic illumination model; and modules for the generation of high-level products. Various parts of the chain were also implemented for WorldView-2, THEOS, Pleiades, SPOT 6, Landsat 5-8, and PROBA-V. Support for the full-frame sensor currently under development by SPACE-SI is planned. The proposed paper focuses on the adaptation of the STORM processing chain to very high resolution multispectral images. The development concentrated on the sub-module for automatic detection of GCPs. The initially implemented two-step algorithm, which worked only with rasterized vector roads and delivered GCPs with sub-pixel accuracy for the RapidEye images, was improved with the introduction of a third step: super-fine positioning of each GCP based on a reference raster chip. The added step exploits the high spatial resolution of the reference raster to improve the final matching results and to achieve pixel accuracy also on very high resolution optical satellite data.
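The "super-fine positioning of each GCP based on a reference raster chip" step is, at its core, template matching. The sketch below shows one common way to do such refinement with normalized cross-correlation in OpenCV; it is a generic illustration, not the STORM implementation, and the search-window size and function names are assumptions. A parabolic fit around the correlation peak would be needed on top of this to reach sub-pixel accuracy.

```python
# Minimal sketch of refining a ground control point (GCP) by matching a small
# reference raster chip against a search window in the target image with
# normalized cross-correlation. Illustrative only; not the STORM chain itself.
import cv2
import numpy as np

def refine_gcp(image, chip, approx_xy, search_radius=32):
    """Return the refined (x, y) of the chip center inside `image`,
    starting from the approximate position `approx_xy`."""
    x0, y0 = approx_xy
    h, w = chip.shape
    xs = max(x0 - search_radius - w // 2, 0)
    ys = max(y0 - search_radius - h // 2, 0)
    window = image[ys:ys + 2 * search_radius + h, xs:xs + 2 * search_radius + w]
    score = cv2.matchTemplate(window.astype(np.float32),
                              chip.astype(np.float32), cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(score)
    # max_loc is the top-left corner of the best match inside the search window.
    return xs + max_loc[0] + w // 2, ys + max_loc[1] + h // 2
```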
Improved obstacle avoidance and navigation for an autonomous ground vehicle
NASA Astrophysics Data System (ADS)
Giri, Binod; Cho, Hyunsu; Williams, Benjamin C.; Tann, Hokchhay; Shakya, Bicky; Bharam, Vishal; Ahlgren, David J.
2015-01-01
This paper presents improvements made to the intelligence algorithms employed on Q, an autonomous ground vehicle, for the 2014 Intelligent Ground Vehicle Competition (IGVC). In 2012, the IGVC committee combined the formerly separate autonomous and navigation challenges into a single AUT-NAV challenge. In this new challenge, the vehicle is required to navigate through a grassy obstacle course and stay within the course boundaries (a lane of two white painted lines) that guide it toward a given GPS waypoint. Once the vehicle reaches this waypoint, it enters an open course where it is required to navigate to another GPS waypoint while avoiding obstacles. After reaching the final waypoint, the vehicle is required to traverse another obstacle course before completing the run. Q uses a modular parallel software architecture in which image processing, navigation, and sensor control algorithms run concurrently. A tuned navigation algorithm allows Q to smoothly maneuver through obstacle fields. For the 2014 competition, most revisions occurred in the vision system, which detects white lines and informs the navigation component. Barrel obstacles of various colors presented a new challenge for image processing: the previous color plane extraction algorithm would not suffice. To overcome this difficulty, laser range sensor data were overlaid on visual data. Q also participates in the Joint Architecture for Unmanned Systems (JAUS) challenge at IGVC. For 2014, significant updates were implemented: the JAUS component accepted a greater variety of messages and showed better compliance with the JAUS technical standard. With these improvements, Q secured second place in the JAUS competition.
Phase in Optical Image Processing
NASA Astrophysics Data System (ADS)
Naughton, Thomas J.
2010-04-01
The use of phase has a long standing history in optical image processing, with early milestones being in the field of pattern recognition, such as VanderLugt's practical construction technique for matched filters, and (implicitly) Goodman's joint Fourier transform correlator. In recent years, the flexibility afforded by phase-only spatial light modulators and digital holography, for example, has enabled many processing techniques based on the explicit encoding and decoding of phase. One application area concerns efficient numerical computations. Pushing phase measurement to its physical limits, designs employing the physical properties of phase have ranged from the sensible to the wonderful, in some cases making computationally easy problems easier to solve and in other cases addressing mathematics' most challenging computationally hard problems. Another application area is optical image encryption, in which, typically, a phase mask modulates the fractional Fourier transformed coefficients of a perturbed input image, and the phase of the inverse transform is then sensed as the encrypted image. The inherent linearity that makes the system so elegant works against its use as an effective encryption technique, but we show how a combination of optical and digital techniques can restore confidence in that security. We conclude with the concept of digital hologram image processing, and applications of it that are uniquely suited to optical implementation, where the processing, recognition, or encryption step operates on full field information, such as that emanating from a coherently illuminated real-world three-dimensional object.
MO-B-BRB-02: Maintain the Quality of Treatment Planning for Time-Constraint Cases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, J.
The radiotherapy treatment planning process has evolved over the years with innovations in treatment planning, treatment delivery and imaging systems. Treatment modality and simulation technologies are also rapidly improving and affecting the planning process. For example, image-guided radiation therapy has been widely adopted for patient setup, leading to margin reduction and isocenter repositioning after simulation. Stereotactic body radiation therapy (SBRT) and radiosurgery (SRS) have gradually become the standard of care for many treatment sites, which demands a higher throughput for the treatment plans even if the number of treatments per day remains the same. Finally, simulation, planning and treatment are traditionally sequential events. However, with emerging adaptive radiotherapy, they are becoming more tightly intertwined, leading to iterative processes. Enhanced efficiency of planning is therefore becoming more critical and poses a serious challenge to the treatment planning process; Lean Six Sigma approaches are being utilized increasingly to balance the competing needs for speed and quality. In this symposium we will discuss the treatment planning process and illustrate effective techniques for managing workflow. Topics will include: Planning techniques: (a) beam placement, (b) dose optimization, (c) plan evaluation, (d) export to RVS. Planning workflow: (a) import images, (b) image fusion, (c) contouring, (d) plan approval, (e) plan check, (f) chart check, (g) sequential and iterative process. Influence of upstream and downstream operations: (a) simulation, (b) immobilization, (c) motion management, (d) QA, (e) IGRT, (f) treatment delivery, (g) SBRT/SRS, (h) adaptive planning. Reduction of delay between planning steps with Lean systems due to (a) communication, (b) limited resources, (c) contour, (d) plan approval, (e) treatment. Optimizing planning processes: (a) contour validation, (b) consistent planning protocol, (c) protocol/template sharing, (d) semi-automatic plan evaluation, (e) quality checklist for error prevention, (f) iterative process, (g) balance of speed and quality. Learning Objectives: Gain familiarity with the workflow of the modern treatment planning process. Understand the scope and challenges of managing modern treatment planning processes. Gain familiarity with Lean Six Sigma approaches and their implementation in the treatment planning workflow.
Image Display in Local Database Networks
NASA Astrophysics Data System (ADS)
List, James S.; Olson, Frederick R.
1989-05-01
Dearchival of image data in the form of x-ray film poses a major challenge for radiology departments. In highly active referral environments such as tertiary care hospitals, patients may be referred to multiple clinical subspecialists within a very short time. Each clinical subspecialist frequently requires diagnostic image data to complete the diagnosis. This need for image access often interferes with the normal process of film handling and interpretation, subsequently reducing the efficiency of the department. The concept of creating a local image database on individual nursing stations utilizing the AT&T CommView Results Viewing Station (RVS) is being evaluated. Initial physician acceptance has been favorable. Objective measurements of operational productivity enhancements are in progress.
Memory-effect based deconvolution microscopy for super-resolution imaging through scattering media
NASA Astrophysics Data System (ADS)
Edrei, Eitan; Scarcelli, Giuliano
2016-09-01
High-resolution imaging through turbid media is a fundamental challenge of optical sciences that has attracted a lot of attention in recent years for its wide range of potential applications. Here, we demonstrate that the resolution of imaging systems looking behind a highly scattering medium can be improved below the diffraction-limit. To achieve this, we demonstrate a novel microscopy technique enabled by the optical memory effect that uses a deconvolution image processing and thus it does not require iterative focusing, scanning or phase retrieval procedures. We show that this newly established ability of direct imaging through turbid media provides fundamental and practical advantages such as three-dimensional refocusing and unambiguous object reconstruction.
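The abstract specifies only that a deconvolution step is applied to the image formed through the scattering medium; the exact algorithm is not given. As a generic stand-in, the sketch below shows Wiener deconvolution under the memory-effect assumption of a shift-invariant point spread function. The noise-to-signal constant is an illustrative tuning parameter.

```python
# Minimal sketch: Wiener deconvolution of a blurred image given an estimate of
# the point spread function (PSF). A generic stand-in for the deconvolution
# step; the authors' exact algorithm is not specified in the abstract.
import numpy as np

def wiener_deconvolve(blurred, psf, noise_to_signal=1e-3):
    """Deconvolve `blurred` with `psf` (same shape, centered) in the Fourier domain."""
    H = np.fft.fft2(np.fft.ifftshift(psf))          # move PSF center to the origin
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)  # Wiener filter
    return np.real(np.fft.ifft2(W * G))
```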
Multiscale hidden Markov models for photon-limited imaging
NASA Astrophysics Data System (ADS)
Nowak, Robert D.
1999-06-01
Photon-limited image analysis is often hindered by low signal-to-noise ratios. A novel Bayesian multiscale modeling and analysis method is developed in this paper to assist in these challenging situations. In addition to providing a very natural and useful framework for modeling and processing images, Bayesian multiscale analysis is often much less computationally demanding compared to classical Markov random field models. This paper focuses on a probabilistic graph model called the multiscale hidden Markov model (MHMM), which captures the key inter-scale dependencies present in natural image intensities. The MHMM framework presented here is specifically designed for photon-limited imaging applications involving Poisson statistics, and applications to image intensity analysis are examined.
Hyperspectral image classification based on local binary patterns and PCANet
NASA Astrophysics Data System (ADS)
Yang, Huizhen; Gao, Feng; Dong, Junyu; Yang, Yang
2018-04-01
Hyperspectral image classification has been well acknowledged as one of the challenging tasks of hyperspectral data processing. In this paper, we propose a novel hyperspectral image classification framework based on local binary pattern (LBP) features and PCANet. In the proposed method, linear prediction error (LPE) is first employed to select a subset of informative bands, and LBP is utilized to extract texture features. Then, the spectral and texture features are stacked into a high-dimensional vector. Next, the extracted features of a specified position are transformed to a 2-D image. The obtained images of all pixels are fed into PCANet for classification. Experimental results on a real hyperspectral dataset demonstrate the effectiveness of the proposed method.
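A minimal sketch of the texture-feature step is given below: local binary pattern (LBP) code maps are computed on a few selected bands and stacked with the spectral values to form per-pixel feature vectors. Band selection is stubbed with a simple variance criterion rather than the paper's linear prediction error, and the LBP parameters are illustrative.

```python
# Minimal sketch of the texture-feature step: compute LBP code maps on a few
# selected bands of a hyperspectral cube and stack them with the spectra.
# Variance-based band selection stands in for the paper's linear prediction error.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_texture_stack(cube, n_bands=5, radius=1, n_points=8):
    """cube: (rows, cols, bands). Returns (rows, cols, bands + n_bands) features."""
    variances = cube.reshape(-1, cube.shape[2]).var(axis=0)
    selected = np.argsort(variances)[-n_bands:]          # highest-variance bands
    lbp_maps = [local_binary_pattern(cube[..., b], n_points, radius, method="uniform")
                for b in selected]
    texture = np.stack(lbp_maps, axis=-1)
    return np.concatenate([cube, texture], axis=-1)

if __name__ == "__main__":
    cube = np.random.rand(50, 50, 100)
    print(lbp_texture_stack(cube).shape)  # (50, 50, 105)
```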
NASA Technical Reports Server (NTRS)
Fisher, Kevin; Chang, Chein-I
2009-01-01
Progressive band selection (PBS) reduces spectral redundancy without significant loss of information, thereby reducing hyperspectral image data volume and processing time. Used onboard a spacecraft, it can also reduce image downlink time. PBS prioritizes an image's spectral bands according to priority scores that measure their significance to a specific application. Then it uses one of three methods to select an appropriate number of the most useful bands. Key challenges for PBS include selecting an appropriate criterion to generate band priority scores, and determining how many bands should be retained in the reduced image. The image's Virtual Dimensionality (VD), once computed, is a reasonable estimate of the latter. We describe the major design details of PBS and test PBS in a land classification experiment.
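The core mechanic, ranking bands by a priority score and keeping the top k, can be sketched generically as below. Variance stands in for an application-specific priority criterion, and k would in practice come from the virtual dimensionality estimate mentioned above; neither choice is taken from the paper.

```python
# Minimal sketch of progressive band selection: rank bands by a priority score
# and keep the k highest-priority bands. Variance is an illustrative score; the
# actual criterion and the VD-based choice of k are application-specific.
import numpy as np

def select_bands(cube, k):
    """cube: (rows, cols, bands). Returns (reduced_cube, kept_band_indices)."""
    flat = cube.reshape(-1, cube.shape[2])
    scores = flat.var(axis=0)              # stand-in priority score per band
    ranking = np.argsort(scores)[::-1]     # bands in decreasing priority
    kept = np.sort(ranking[:k])            # keep the original spectral ordering
    return cube[..., kept], kept
```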
Color engineering in the age of digital convergence
NASA Astrophysics Data System (ADS)
MacDonald, Lindsay W.
1998-09-01
Digital color imaging has developed over the past twenty years from specialized scientific applications into the mainstream of computing. In addition to the phenomenal growth of computer processing power and storage capacity, great advances have been made in the capabilities and cost-effectiveness of color imaging peripherals. The majority of imaging applications, including the graphic arts, video and film have made the transition from analogue to digital production methods. Digital convergence of computing, communications and television now heralds new possibilities for multimedia publishing and mobile lifestyles. Color engineering, the application of color science to the design of imaging products, is an emerging discipline that poses exciting challenges to the international color imaging community for training, research and standards.
PACS archive upgrade and data migration: clinical experiences
NASA Astrophysics Data System (ADS)
Liu, Brent J.; Documet, Luis; Sarti, Dennis A.; Huang, H. K.; Donnelly, John
2002-05-01
Saint John's Health Center PACS data volumes have increased dramatically since the hospital became filmless in April of 1999. This is due in part to continuous image accumulation and the integration of a new multi-slice detector CT scanner into PACS. The original PACS archive would not be able to handle the distribution and archiving load and capacity in the near future. Furthermore, there was no secondary copy backup of all the archived PACS image data for disaster recovery purposes. The purpose of this paper is to present a clinical and technical process template to upgrade and expand the PACS archive, migrate existing PACS image data to the new archive, and provide a backup and disaster recovery function not previously available. The technical and clinical pitfalls and challenges involved in this process are discussed as well. The server hardware configuration was upgraded and a secondary backup implemented for disaster recovery. The upgrade includes new software versions, database reconfiguration, and installation of a new tape jukebox to replace the current MOD jukebox. Upon completion, all PACS image data from the original MOD jukebox were migrated to the new tape jukebox and verified. The migration was performed continuously in the background during clinical operation. Once the data migration was completed, the MOD jukebox was removed. All newly acquired PACS exams are now archived to the new tape jukebox. All PACS image data residing on the original MOD jukebox have been successfully migrated into the new archive. In addition, a secondary backup of all PACS image data has been implemented for disaster recovery and has been verified using disaster scenario testing. No PACS image data were lost during the entire process and there was very little clinical impact during the upgrade and data migration. Some of the pitfalls and challenges during this upgrade process included hardware reconfiguration for the original archive server, clinical downtime involved with the upgrade, and data migration planning to minimize impact on clinical workflow. The impact was minimized with a downtime contingency plan.
In utero imaging of mouse embryonic development with optical coherence tomography
NASA Astrophysics Data System (ADS)
Syed, Saba H.; Dickinson, Mary E.; Larin, Kirill V.; Larina, Irina V.
2011-03-01
Studying the progression of congenital diseases in animal models can greatly benefit from live embryonic imaging. Mice have long served as a model of mammalian embryonic developmental processes; however, due to the intrauterine nature of mammalian development, live imaging is challenging. In this report we present results on live mouse embryonic imaging in utero with Optical Coherence Tomography (OCT). Embryos from 12.5 through 17.5 days post-coitus (dpc) were studied through the uterine wall. In longitudinal studies, the same embryos were imaged at developmental stages 13.5, 15.5 and 17.5 dpc. This study suggests that OCT can serve as a powerful tool for live mouse embryo imaging. Potentially, this technique can contribute to our understanding of developmental abnormalities associated with mutations and toxic drugs.
A FPGA-based architecture for real-time image matching
NASA Astrophysics Data System (ADS)
Wang, Jianhui; Zhong, Sheng; Xu, Wenhui; Zhang, Weijun; Cao, Zhiguo
2013-10-01
Image matching is a fundamental task in computer vision. It is used to establish correspondence between two images of the same scene taken from different viewpoints or at different times. However, its large computational complexity has been a challenge to most embedded systems. This paper proposes a single-FPGA image matching system, which consists of SIFT feature detection, BRIEF descriptor extraction and BRIEF matching. It optimizes the FPGA architecture for the SIFT feature detection to reduce FPGA resource utilization. We also implement BRIEF description and matching on the FPGA. The proposed system can perform image matching at 30 fps (frames per second) for 1280×720 images. Its processing speed can meet the demand of most real-life computer vision applications.
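The hardware pipeline itself cannot be reproduced here, but its matching stage, binary descriptors compared by Hamming distance, has a direct software analogue. The sketch below uses OpenCV's ORB (a BRIEF-based binary descriptor) together with brute-force Hamming matching purely as an illustration; it is not the paper's SIFT-plus-BRIEF FPGA design.

```python
# Minimal software analogue of the matching stage: binary descriptors compared
# with Hamming distance. ORB stands in for the paper's SIFT-detection +
# BRIEF-description FPGA pipeline; parameters are illustrative.
import cv2

def match_images(img1, img2, max_features=500):
    """img1, img2: grayscale images. Returns keypoints and sorted matches."""
    orb = cv2.ORB_create(nfeatures=max_features)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return kp1, kp2, matches
```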
Karunakaran, Chithra; Lahlali, Rachid; Zhu, Ning; Webb, Adam M.; Schmidt, Marina; Fransishyn, Kyle; Belev, George; Wysokinski, Tomasz; Olson, Jeremy; Cooper, David M. L.; Hallin, Emil
2015-01-01
Minimally invasive investigation of plant parts (root, stem, leaves, and flower) has good potential to elucidate the dynamics of plant growth, morphology, physiology, and root-rhizosphere interactions. Laboratory based absorption X-ray imaging and computed tomography (CT) systems are extensively used for in situ feasibility studies of plants grown in natural and artificial soil. These techniques have challenges such as low contrast between soil pore space and roots, long X-ray imaging time, and low spatial resolution. In this study, the use of synchrotron (SR) based phase contrast X-ray imaging (PCI) has been demonstrated as a minimally invasive technique for imaging plants. Above ground plant parts and roots of 10 day old canola and wheat seedlings grown in sandy clay loam soil were successfully scanned and reconstructed. Results confirmed that SR-PCI can deliver good quality images to study dynamic and real time processes such as cavitation and water-refilling in plants. The advantages of SR-PCI, effect of X-ray energy, and effective pixel size to study plant samples have been demonstrated. The use of contrast agents to monitor physiological processes in plants was also investigated and discussed. PMID:26183486
He, Xinzi; Yu, Zhen; Wang, Tianfu; Lei, Baiying; Shi, Yiyan
2018-01-01
Dermoscopy imaging has been a routine examination approach for skin lesion diagnosis. Accurate segmentation is the first step for automatic dermoscopy image assessment. The main challenges for skin lesion segmentation are the numerous variations in viewpoint and scale of the skin lesion region. To handle these challenges, we propose a novel skin lesion segmentation network via a very deep dense deconvolution network based on dermoscopic images. Specifically, the deep dense layer and the generic multi-path Deep RefineNet are combined to improve the segmentation performance. The deep representation of all available layers is aggregated to form the global feature maps using skip connections. Also, the dense deconvolution layer is leveraged to capture diverse appearance features via the contextual information. Finally, we apply the dense deconvolution layer to smooth segmentation maps and obtain the final high-resolution output. Our proposed method shows superiority over the state-of-the-art approaches on the publicly available 2016 and 2017 skin lesion challenge datasets, achieving accuracies of 96.0% and 93.9%, a 6.0% and 1.2% increase over the traditional method, respectively. Using the dense deconvolution network, the average time for processing one test image with our proposed framework was 0.253 s.
ESARR: enhanced situational awareness via road sign recognition
NASA Astrophysics Data System (ADS)
Perlin, V. E.; Johnson, D. B.; Rohde, M. M.; Lupa, R. M.; Fiorani, G.; Mohammad, S.
2010-04-01
The enhanced situational awareness via road sign recognition (ESARR) system provides vehicle position estimates in the absence of GPS signal via automated processing of roadway fiducials (primarily directional road signs). Sign images are detected and extracted from vehicle-mounted camera system, and preprocessed and read via a custom optical character recognition (OCR) system specifically designed to cope with low quality input imagery. Vehicle motion and 3D scene geometry estimation enables efficient and robust sign detection with low false alarm rates. Multi-level text processing coupled with GIS database validation enables effective interpretation even of extremely low resolution low contrast sign images. In this paper, ESARR development progress will be reported on, including the design and architecture, image processing framework, localization methodologies, and results to date. Highlights of the real-time vehicle-based directional road-sign detection and interpretation system will be described along with the challenges and progress in overcoming them.
The algorithm for automatic detection of the calibration object
NASA Astrophysics Data System (ADS)
Artem, Kruglov; Irina, Ugfeld
2017-06-01
The problem of automatic image calibration is considered in this paper. The most challenging task of automatic calibration is the proper detection of the calibration object. Solving this problem required applying digital image processing methods and algorithms such as morphology, filtering, edge detection and shape approximation. The step-by-step development of the algorithm and its adaptation to the specific conditions of log cuts in the image background is presented. Testing of the automatic calibration module was carried out under the production conditions of a logging enterprise. In these tests, the calibration object was automatically isolated in 86.1% of cases on average, with no type I errors. The algorithm was implemented in the automatic calibration module within the mobile software for log deck volume measurement.
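The pipeline named in the abstract (filtering, morphology, contour extraction and shape approximation) can be sketched generically with OpenCV as below. The thresholding choice, morphological kernel and circularity test are illustrative assumptions, not the authors' actual parameters, and assume a roughly circular calibration object that contrasts with the background.

```python
# Minimal sketch of a calibration-object detector: smooth, threshold, clean up
# with morphology, extract contours, and keep the largest sufficiently circular
# blob. Illustrative parameters only; not the published algorithm.
import cv2
import numpy as np

def find_calibration_object(gray, min_area=500, min_circularity=0.7):
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    _, binary = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(cleaned, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    best = None
    for c in contours:
        area = cv2.contourArea(c)
        perimeter = cv2.arcLength(c, True)
        if area < min_area or perimeter == 0:
            continue
        circularity = 4.0 * np.pi * area / (perimeter ** 2)  # 1.0 for a perfect circle
        if circularity >= min_circularity and (best is None or area > best[0]):
            best = (area, c)
    return None if best is None else best[1]
```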
Multiplicative noise removal via a learned dictionary.
Huang, Yu-Mei; Moisan, Lionel; Ng, Michael K; Zeng, Tieyong
2012-11-01
Multiplicative noise removal is a challenging image processing problem, and most existing methods are based on the maximum a posteriori formulation and the logarithmic transformation of multiplicative denoising problems into additive denoising problems. Sparse representations of images have been shown to be efficient approaches for image recovery. Following this idea, in this paper we propose to learn a dictionary from the logarithmically transformed image, and then to use it in a variational model built for noise removal. Extensive experimental results suggest that in terms of visual quality, peak signal-to-noise ratio, and mean absolute deviation error, the proposed algorithm outperforms state-of-the-art methods.
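The log-transform idea the paper builds on can be shown in isolation: taking the logarithm turns multiplicative noise into additive noise, a denoiser is applied in the log domain, and the result is mapped back by exponentiation. In the sketch below a total-variation denoiser stands in for the paper's learned-dictionary variational model, so it illustrates only the transformation, not the proposed method.

```python
# Minimal sketch of the log-transform idea: multiplicative noise becomes
# additive after a logarithm, so denoise in the log domain and map back.
# A TV denoiser is a stand-in for the paper's learned-dictionary model.
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def remove_multiplicative_noise(noisy, weight=0.1, eps=1e-6):
    log_img = np.log(noisy + eps)                 # multiplicative -> additive noise
    log_denoised = denoise_tv_chambolle(log_img, weight=weight)
    return np.exp(log_denoised) - eps
```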
Homographic Patch Feature Transform: A Robustness Registration for Gastroscopic Surgery.
Hu, Weiling; Zhang, Xu; Wang, Bin; Liu, Jiquan; Duan, Huilong; Dai, Ning; Si, Jianmin
2016-01-01
Image registration is a key component of computer assistance in image guided surgery, and it is a challenging topic in endoscopic environments. In this study, we present a method for image registration named Homographic Patch Feature Transform (HPFT) to match gastroscopic images. HPFT can be used for tracking lesions and for augmented reality applications during gastroscopy. Furthermore, an overall evaluation scheme is proposed to validate the precision, robustness and uniformity of the registration results, which provides a standard for rejecting false matching pairs from the correspondence results. Finally, HPFT is applied to processing in vivo gastroscopic data. The experimental results show that HPFT has stable performance in gastroscopic applications.
Magnetic resonance imaging of the pediatric neck: an overview.
Shekdar, Karuna V; Mirsky, David M; Kazahaya, Ken; Bilaniuk, Larissa T
2012-08-01
Evaluation of neck lesions in the pediatric population can be a diagnostic challenge, for which magnetic resonance (MR) imaging is extremely valuable. This article provides an overview of the value and utility of MR imaging in the evaluation of pediatric neck lesions, addressing what the referring clinician requires from the radiologist. Concise descriptions and illustrations of MR imaging findings of commonly encountered pathologic entities in the pediatric neck, including abnormalities of the branchial apparatus, thyroglossal duct anomalies, and neoplastic processes, are given. An approach to establishing a differential diagnosis is provided, and critical points of information are summarized.
On techniques for angle compensation in nonideal iris recognition.
Schuckers, Stephanie A C; Schmid, Natalia A; Abhyankar, Aditya; Dorairaj, Vivekanand; Boyce, Christopher K; Hornak, Lawrence A
2007-10-01
The popularity of the iris biometric has grown considerably over the past two to three years. Most research has been focused on the development of new iris processing and recognition algorithms for frontal view iris images. However, a few challenging directions in iris research have been identified, including processing of a nonideal iris and iris at a distance. In this paper, we describe two nonideal iris recognition systems and analyze their performance. The word "nonideal" is used in the sense of compensating for off-angle occluded iris images. The system is designed to process nonideal iris images in two steps: 1) compensation for off-angle gaze direction and 2) processing and encoding of the rotated iris image. Two approaches are presented to account for angular variations in the iris images. In the first approach, we use Daugman's integrodifferential operator as an objective function to estimate the gaze direction. After the angle is estimated, the off-angle iris image undergoes geometric transformations involving the estimated angle and is further processed as if it were a frontal view image. The encoding technique developed for a frontal image is based on the application of the global independent component analysis. The second approach uses an angular deformation calibration model. The angular deformations are modeled, and calibration parameters are calculated. The proposed method consists of a closed-form solution, followed by an iterative optimization procedure. The images are projected on the plane closest to the base calibrated plane. Biorthogonal wavelets are used for encoding to perform iris recognition. We use a special dataset of the off-angle iris images to quantify the performance of the designed systems. A series of receiver operating characteristics demonstrate various effects on the performance of the nonideal-iris-based recognition system.
Optical imaging through dynamic turbid media using the Fourier-domain shower-curtain effect
Edrei, Eitan; Scarcelli, Giuliano
2016-01-01
Several phenomena have been recently exploited to circumvent scattering and have succeeded in imaging or focusing light through turbid layers. However, the requirement for the turbid medium to be steady during the imaging process remains a fundamental limitation of these methods. Here we introduce an optical imaging modality that overcomes this challenge by taking advantage of the so-called shower-curtain effect, adapted to the spatial-frequency domain via speckle correlography. We present high resolution imaging of objects hidden behind millimeter-thick tissue or dense lens cataracts. We demonstrate our imaging technique to be insensitive to rapid medium movements (> 5 m/s) beyond any biologically-relevant motion. Furthermore, we show this method can be extended to several contrast mechanisms and imaging configurations. PMID:27347498
Cellular image segmentation using n-agent cooperative game theory
NASA Astrophysics Data System (ADS)
Dimock, Ian B.; Wan, Justin W. L.
2016-03-01
Image segmentation is an important problem in computer vision and has significant applications in the segmentation of cellular images. Many different imaging techniques exist and produce a variety of image properties which pose difficulties to image segmentation routines. Bright-field images are particularly challenging because of the non-uniform shape of the cells, the low contrast between cells and background, and imaging artifacts such as halos and broken edges. Classical segmentation techniques often produce poor results on these challenging images. Previous attempts at bright-field imaging are often limited in scope to the images that they segment. In this paper, we introduce a new algorithm for automatically segmenting cellular images. The algorithm incorporates two game theoretic models which allow each pixel to act as an independent agent with the goal of selecting their best labelling strategy. In the non-cooperative model, the pixels choose strategies greedily based only on local information. In the cooperative model, the pixels can form coalitions, which select labelling strategies that benefit the entire group. Combining these two models produces a method which allows the pixels to balance both local and global information when selecting their label. With the addition of k-means and active contour techniques for initialization and post-processing purposes, we achieve a robust segmentation routine. The algorithm is applied to several cell image datasets including bright-field images, fluorescent images and simulated images. Experiments show that the algorithm produces good segmentation results across the variety of datasets which differ in cell density, cell shape, contrast, and noise levels.
An algorithm for automated ROI definition in water or epoxy-filled NEMA NU-2 image quality phantoms.
Pierce, Larry A; Byrd, Darrin W; Elston, Brian F; Karp, Joel S; Sunderland, John J; Kinahan, Paul E
2016-01-08
Drawing regions of interest (ROIs) in positron emission tomography/computed tomography (PET/CT) scans of the National Electrical Manufacturers Association (NEMA) NU-2 Image Quality (IQ) phantom is a time-consuming process that allows for interuser variability in the measurements. In order to reduce operator effort and allow batch processing of IQ phantom images, we propose a fast, robust, automated algorithm for performing IQ phantom sphere localization and analysis. The algorithm is easily altered to accommodate different configurations of the IQ phantom. The proposed algorithm uses information from both the PET and CT image volumes in order to overcome the challenges of detecting the smallest spheres in the PET volume. This algorithm has been released as an open-source plug-in to the Osirix medical image viewing software package. We test the algorithm under various noise conditions, positions within the scanner, air bubbles in the phantom spheres, and scanner misalignment conditions. The proposed algorithm shows run-times between 3 and 4 min and has proven to be robust under all tested conditions, with expected sphere localization deviations of less than 0.2 mm and variations of PET ROI mean and maximum values on the order of 0.5% and 2%, respectively, over multiple PET acquisitions. We conclude that the proposed algorithm is stable when challenged with a variety of physical and imaging anomalies, and that the algorithm can be a valuable tool for those who use the NEMA NU-2 IQ phantom for PET/CT scanner acceptance testing and QA/QC.
Medical imaging and registration in computer assisted surgery.
Simon, D A; Lavallée, S
1998-09-01
Imaging, sensing, and computing technologies that are being introduced to aid in the planning and execution of surgical procedures are providing orthopaedic surgeons with a powerful new set of tools for improving clinical accuracy, reliability, and patient outcomes while reducing costs and operating times. Current computer assisted surgery systems typically include a measurement process for collecting patient specific medical data, a decision making process for generating a surgical plan, a registration process for aligning the surgical plan to the patient, and an action process for accurately achieving the goals specified in the plan. Some of the key concepts of computer assisted surgery as applied to orthopaedics are outlined, with a focus on the basic framework and underlying technologies. In addition, technical challenges and future trends in the field are discussed.
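The registration process mentioned above is often realized as paired-point rigid registration between fiducials in the image (plan) space and the same fiducials digitized on the patient. The sketch below shows the standard SVD-based (Kabsch) solution to that problem as a generic illustration; it is not the algorithm of any specific system discussed in the article.

```python
# Minimal sketch of paired-point rigid registration (Kabsch/SVD method), the
# kind of registration step used to align a surgical plan to the patient.
# Generic illustration, not a specific commercial system's algorithm.
import numpy as np

def rigid_register(source, target):
    """Find R, t minimizing ||R @ source_i + t - target_i||^2 over paired points.
    source, target: (N, 3) arrays of corresponding fiducial positions."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)     # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # avoid a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return R, t
```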
Coltelli, Primo; Barsanti, Laura; Evangelista, Valter; Frassanito, Anna Maria; Gualtieri, Paolo
2016-12-01
A novel procedure for deriving the absorption spectrum of an object spot from the colour values of the corresponding pixel(s) in its image is presented. Any digital image acquired by a microscope can be used; typical applications are the analysis of cellular/subcellular metabolic processes under physiological conditions and in response to environmental stressors (e.g. heavy metals), and the measurement of chromophore composition, distribution and concentration in cells. In this paper, we challenged the procedure with images of algae, acquired by means of a CCD camera mounted onto a microscope. The many colours algae display result from combinations of chromophores whose spectroscopic information is limited to organic solvent extracts, which suffer from displacements, amplifications, and contraction/dilatation with respect to spectra recorded inside the cell. Hence, preliminary processing is necessary, which consists of in vivo measurement of the absorption spectra of the photosynthetic compartments of algal cells and determination of the spectra of the single chromophores inside the cell. The final step of the procedure is the reconstruction of the absorption spectrum of the cell spot from the colour values of the corresponding pixel(s) in its digital image by minimization of a system of transcendental equations based on the absorption spectra of the chromophores under physiological conditions.
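The underlying unmixing idea, estimating chromophore contributions from per-pixel measurements given reference in vivo spectra, can be sketched in a simplified linear form. The paper minimizes a system of transcendental equations; the non-negative least-squares fit below is only an illustrative simplification with hypothetical inputs, not the published procedure.

```python
# Minimal sketch of the unmixing idea: estimate chromophore contributions from
# per-pixel values given reference in-vivo spectra. A non-negative linear
# least-squares fit stands in for the paper's transcendental-equation system.
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(pixel_values, chromophore_spectra):
    """pixel_values: (m,) measured channel values for one pixel.
    chromophore_spectra: (m, n) response of each channel to each chromophore.
    Returns (n,) non-negative chromophore contributions."""
    contributions, _ = nnls(chromophore_spectra, pixel_values)
    return contributions
```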
Image Re-Ranking Based on Topic Diversity.
Qian, Xueming; Lu, Dan; Wang, Yaxiong; Zhu, Li; Tang, Yuan Yan; Wang, Meng
2017-08-01
Social media sharing websites allow users to annotate images with free tags, which significantly contribute to the development of web image retrieval. Tag-based image search is an important method to find images shared by users in social networks. However, making the top-ranked results both relevant and diverse is challenging. In this paper, we propose a topic-diverse ranking approach for tag-based image retrieval with the consideration of promoting the topic coverage performance. First, we construct a tag graph based on the similarity between each pair of tags. Then, a community detection method is conducted to mine the topic community of each tag. After that, inter-community and intra-community ranking are introduced to obtain the final retrieved results. In the inter-community ranking process, an adaptive random walk model is employed to rank the communities based on the multi-information of each topic community. Besides, we build an inverted index structure for images to accelerate the searching process. Experimental results on the Flickr and NUS-WIDE datasets show the effectiveness of the proposed approach.
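Ranking by a random walk over a graph, the mechanism used for the inter-community step, boils down to computing the stationary distribution of a transition matrix. The sketch below does this with damped power iteration (essentially PageRank) as a generic illustration; the adaptive, multi-information weighting described in the paper is not reproduced, and the transition matrix is assumed to be given.

```python
# Minimal sketch of ranking nodes (e.g., topic communities) by the stationary
# distribution of a random walk, computed with damped power iteration.
import numpy as np

def random_walk_rank(transition, damping=0.85, tol=1e-10, max_iter=1000):
    """transition: (n, n) column-stochastic matrix (each column sums to 1)."""
    n = transition.shape[0]
    rank = np.full(n, 1.0 / n)
    teleport = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new_rank = damping * transition @ rank + (1.0 - damping) * teleport
        if np.abs(new_rank - rank).sum() < tol:   # L1 convergence check
            rank = new_rank
            break
        rank = new_rank
    return rank
```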
High resolution imaging of objects located within a wall
NASA Astrophysics Data System (ADS)
Greneker, Eugene F.; Showman, Gregory A.; Trostel, John M.; Sylvester, Vincent
2006-05-01
Researchers at Georgia Tech Research Institute have developed a high resolution imaging radar technique that allows large sections of a test wall to be scanned in the X and Y dimensions. The resulting images provide information on what, if anything, is inside the wall. The scanning homodyne radar operates at a frequency of 24.1 GHz with an output power of approximately 10 milliwatts. The imaging technique is currently being used to study the detection of toxic mold on the back surface of wallboard using radar as a sensor. The moisture associated with the mold can easily be detected. In addition to mold, the technique will image objects as small as a 4 millimeter sphere on the front or rear of the wallboard and will penetrate both sides of a wall made of studs and wallboard. Signal processing is performed on the resulting data to further sharpen the image. Photos of the scanner and images produced by the scanner are presented, and the signal processing and technical challenges are also discussed.
High-Content Screening for Quantitative Cell Biology.
Mattiazzi Usaj, Mojca; Styles, Erin B; Verster, Adrian J; Friesen, Helena; Boone, Charles; Andrews, Brenda J
2016-08-01
High-content screening (HCS), which combines automated fluorescence microscopy with quantitative image analysis, allows the acquisition of unbiased multiparametric data at the single cell level. This approach has been used to address diverse biological questions and identify a plethora of quantitative phenotypes of varying complexity in numerous different model systems. Here, we describe some recent applications of HCS, ranging from the identification of genes required for specific biological processes to the characterization of genetic interactions. We review the steps involved in the design of useful biological assays and automated image analysis, and describe major challenges associated with each. Additionally, we highlight emerging technologies and future challenges, and discuss how the field of HCS might be enhanced in the future.
NASA Astrophysics Data System (ADS)
Deep, Prakash; Paninjath, Sankaranarayanan; Pereira, Mark; Buck, Peter
2016-05-01
At advanced technology nodes, mask complexity has increased because of the large-scale use of resolution enhancement technologies (RET), which include Optical Proximity Correction (OPC), Inverse Lithography Technology (ILT) and Source Mask Optimization (SMO). The number of defects detected during inspection of such masks has increased drastically, and differentiation of critical and non-critical defects is more challenging, complex and time consuming. Because of the significant defectivity of EUVL masks and the non-availability of actinic inspection, it is important, and also challenging, to predict the criticality of defects for printability on the wafer. This is one of the significant barriers for the adoption of EUVL for semiconductor manufacturing. Techniques to decide the criticality of defects from images captured using non-actinic inspection are desired until actinic inspection becomes available. High resolution inspection of photomask images detects many defects which are used for process and mask qualification. Repairing all defects is not practical and probably not required; however, it is imperative to know which defects are severe enough to impact the wafer before repair. Additionally, a wafer printability check is always desired after repairing a defect. AIMS™ review is the industry standard for this; however, doing AIMS™ review for all defects is expensive and very time consuming. A fast, accurate and economical mechanism is desired which can predict defect printability on the wafer accurately and quickly from images captured using a high resolution inspection machine. Predicting defect printability from such images is challenging due to the fact that the high resolution images do not correlate with actual mask contours. The challenge is increased by the use of different optical conditions during inspection than the actual scanner conditions, and defects found in such images do not correlate with the actual impact on the wafer. Our automated defect simulation tool predicts printability of defects at the wafer level and automates the process of defect dispositioning from images captured using a high resolution inspection machine. It first eliminates false defects due to registration, focus errors, image capture errors and random noise caused during inspection. For the remaining real defects, actual mask-like contours are generated using the Calibre® ILT solution [1][2], which is enhanced to predict the actual mask contours from high resolution defect images. It enables accurate prediction of defect contours, which is not possible from images captured using an inspection machine because some information is already lost due to optical effects. Calibre's simulation engine is used to generate images at the wafer level using scanner optical conditions and mask-like contours as input. The tool then analyses the simulated images and predicts defect printability. It automatically calculates the maximum CD variation and decides which defects are severe enough to affect patterns on the wafer. In this paper, we assess the printability of defects for masks of advanced technology nodes. In particular, we compare the recovered mask contours with contours extracted from a SEM image of the mask and compare simulation results with AIMS™ for a variety of defects and patterns. The results of the printability assessment and the accuracy of the comparison are presented in this paper. We also suggest how this method can be extended to predict the printability of defects identified on EUV photomasks.
JPEG2000 Image Compression on Solar EUV Images
NASA Astrophysics Data System (ADS)
Fischer, Catherine E.; Müller, Daniel; De Moortel, Ineke
2017-01-01
For future solar missions as well as ground-based telescopes, efficient ways to return and process data have become increasingly important. Solar Orbiter, which is the next ESA/NASA mission to explore the Sun and the heliosphere, is a deep-space mission, which implies a limited telemetry rate that makes efficient onboard data compression a necessity to achieve the mission science goals. Missions like the Solar Dynamics Observatory (SDO) and future ground-based telescopes such as the Daniel K. Inouye Solar Telescope, on the other hand, face the challenge of making petabyte-sized solar data archives accessible to the solar community. New image compression standards address these challenges by implementing efficient and flexible compression algorithms that can be tailored to user requirements. We analyse solar images from the Atmospheric Imaging Assembly (AIA) instrument onboard SDO to study the effect of lossy JPEG2000 (from the Joint Photographic Experts Group 2000) image compression at different bitrates. To assess the quality of compressed images, we use the mean structural similarity (MSSIM) index as well as the widely used peak signal-to-noise ratio (PSNR) as metrics and compare the two in the context of solar EUV images. In addition, we perform tests to validate the scientific use of the lossily compressed images by analysing examples of an on-disc and off-limb coronal-loop oscillation time-series observed by AIA/SDO.
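The two quality metrics used in the study, PSNR and the mean structural similarity index, are available in scikit-image and can be computed as in the sketch below. This is a generic illustration of the metrics, not the authors' analysis pipeline; note that structural_similarity returns the mean SSIM (MSSIM) over the image and that data_range must match the images' dynamic range.

```python
# Minimal sketch: compute PSNR and mean SSIM between an original image and its
# lossily compressed version using scikit-image. Illustrative only.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def compression_quality(original, compressed, data_range=None):
    """original, compressed: 2-D arrays of the same shape. Returns (psnr, mssim)."""
    if data_range is None:
        data_range = float(original.max() - original.min())
    psnr = peak_signal_noise_ratio(original, compressed, data_range=data_range)
    mssim = structural_similarity(original, compressed, data_range=data_range)
    return psnr, mssim
```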
Total body photography for skin cancer screening.
Dengel, Lynn T; Petroni, Gina R; Judge, Joshua; Chen, David; Acton, Scott T; Schroen, Anneke T; Slingluff, Craig L
2015-11-01
Total body photography may aid in melanoma screening but is not widely applied due to time and cost. We hypothesized that a near-simultaneous automated skin photo-acquisition system would be acceptable to patients and could rapidly obtain total body images that enable visualization of pigmented skin lesions. From February to May 2009, a study of 20 volunteers was performed at the University of Virginia to test a prototype 16-camera imaging booth built by the research team and to guide development of special purpose software. For each participant, images were obtained before and after marking 10 lesions (five "easy" and five "difficult"), and images were evaluated to estimate visualization rates. Imaging logistical challenges were scored by the operator, and participant opinion was assessed by questionnaire. Average time for image capture was three minutes (range 2-5). All 55 "easy" lesions were visualized (sensitivity 100%, 90% CI 95-100%), and 54/55 "difficult" lesions were visualized (sensitivity 98%, 90% CI 92-100%). Operators and patients graded the imaging process favorably, with challenges identified regarding lighting and positioning. Rapid-acquisition automated skin photography is feasible with a low-cost system, with excellent lesion visualization and participant acceptance. These data provide a basis for employing this method in clinical melanoma screening.
NASA Astrophysics Data System (ADS)
Denker, Carsten; Kuckein, Christoph; Verma, Meetu; González Manrique, Sergio J.; Diercke, Andrea; Enke, Harry; Klar, Jochen; Balthasar, Horst; Louis, Rohan E.; Dineva, Ekaterina
2018-05-01
In high-resolution solar physics, the volume and complexity of photometric, spectroscopic, and polarimetric ground-based data significantly increased in the last decade, reaching data acquisition rates of terabytes per hour. This is driven by the desire to capture fast processes on the Sun and the necessity for short exposure times “freezing” the atmospheric seeing, thus enabling ex post facto image restoration. Consequently, large-format and high-cadence detectors are nowadays used in solar observations to facilitate image restoration. Based on our experience during the “early science” phase with the 1.5 m GREGOR solar telescope (2014–2015) and the subsequent transition to routine observations in 2016, we describe data collection and data management tailored toward image restoration and imaging spectroscopy. We outline our approaches regarding data processing, analysis, and archiving for two of GREGOR’s post-focus instruments (see http://gregor.aip.de), i.e., the GREGOR Fabry–Pérot Interferometer (GFPI) and the newly installed High-Resolution Fast Imager (HiFI). The heterogeneous and complex nature of multidimensional data arising from high-resolution solar observations provides an intriguing but also a challenging example for “big data” in astronomy. The big data challenge has two aspects: (1) establishing a workflow for publishing the data for the whole community and beyond and (2) creating a collaborative research environment (CRE), where computationally intense data and postprocessing tools are colocated and collaborative work is enabled for scientists of multiple institutes. This requires either collaboration with a data center or frameworks and databases capable of dealing with huge data sets based on virtual observatory (VO) and other community standards and procedures.
Workflow Challenges of Enterprise Imaging: HIMSS-SIIM Collaborative White Paper.
Towbin, Alexander J; Roth, Christopher J; Bronkalla, Mark; Cram, Dawn
2016-10-01
With the advent of digital cameras, there has been an explosion in the number of medical specialties using images to diagnose or document disease and guide interventions. In many specialties, these images are not added to the patient's electronic medical record and are not distributed so that other providers caring for the patient can view them. As hospitals begin to develop enterprise imaging strategies, they have found that there are multiple challenges preventing the implementation of systems to manage image capture, image upload, and image management. This HIMSS-SIIM white paper will describe the key workflow challenges related to enterprise imaging and offer suggestions for potential solutions to these challenges.
NASA Astrophysics Data System (ADS)
Kvitle, Anne Kristin
2018-05-01
Color is one of the visual variables in maps, serving both an aesthetic purpose and as a guide for attention. Impaired color vision affects the ability to distinguish colors, which makes the task of decoding map colors difficult. Map reading is reported as a challenging task for these observers, especially when the stimuli are small. The aim of this study is to review existing methods of map design for color vision deficient (CVD) users. A systematic review of the research literature and case studies of map design for CVD observers has been conducted in order to give an overview of current knowledge and future research challenges. In addition, relevant research on simulations of CVD and color image enhancement for these observers from other industries is included. The study identified two main approaches: pre-processing by using accessible colors and post-processing by using enhancement methods. Some of the methods may be applied to maps, but they require tailoring of test images according to map type.
Soleilhac, Emmanuelle; Nadon, Robert; Lafanechere, Laurence
2010-02-01
Screening compounds with cell-based assays and microscopy image-based analysis is an approach currently favored for drug discovery. Because of its high information yield, the strategy is called high-content screening (HCS). This review covers the application of HCS in drug discovery and also in basic research of potential new pathways that can be targeted for treatment of pathophysiological diseases. HCS faces several challenges, however, including the extraction of pertinent information from the massive amount of data generated from images. Several proposed approaches to HCS data acquisition and analysis are reviewed. Different solutions from the fields of mathematics, bioinformatics and biotechnology are presented. Potential applications and limits of these recent technical developments are also discussed. HCS is a multidisciplinary and multistep approach for understanding the effects of compounds on biological processes at the cellular level. Reliable results depend on the quality of the overall process and require strong interdisciplinary collaborations.
Molecular brain imaging in the multimodality era
Price, Julie C
2012-01-01
Multimodality molecular brain imaging encompasses in vivo visualization, evaluation, and measurement of cellular/molecular processes. Instrumentation and software developments over the past 30 years have fueled advancements in multimodality imaging platforms that enable acquisition of multiple complementary imaging outcomes by either combined sequential or simultaneous acquisition. This article provides a general overview of multimodality neuroimaging in the context of positron emission tomography as a molecular imaging tool and magnetic resonance imaging as a structural and functional imaging tool. Several image examples are provided and general challenges are discussed to exemplify complementary features of the modalities, as well as important strengths and weaknesses of combined assessments. Alzheimer's disease is highlighted, as this clinical area has been strongly impacted by multimodality neuroimaging findings that have improved understanding of the natural history of disease progression, early disease detection, and informed therapy evaluation. PMID:22434068
NASA Technical Reports Server (NTRS)
Udomkesmalee, Suraphol; Padgett, Curtis; Zhu, David; Lung, Gerald; Howard, Ayanna
2000-01-01
A three-dimensional microelectronic device (3DANN-R) capable of performing general image convolution at a speed of 10^12 operations/second (ops) in a volume of less than 1.5 cubic centimeters has been successfully built under the BMDO/JPL VIGILANTE program. 3DANN-R was developed in partnership with Irvine Sensors Corp., Costa Mesa, California. 3DANN-R is a sugar-cube-sized, low power image convolution engine that in its core computation circuitry is capable of performing 64 image convolutions with large (64x64) windows at video frame rates. This paper explores potential applications of 3DANN-R, such as target recognition, SAR and hyperspectral data processing, and general machine vision using real data, and discusses technical challenges for providing deployable systems for BMDO surveillance and interceptor programs.
X-ray coherent diffraction imaging of cellulose fibrils in situ.
Lal, Jyotsana; Harder, Ross; Makowski, Lee
2011-01-01
Cellulose is the most abundant renewable source of organic molecules on Earth [1]. As fossil fuel reserves become depleted, the use of cellulose as a feedstock for fuels and chemicals is being aggressively explored. Cellulose is a linear polymer of glucose that packs tightly into crystalline fibrils that make up a substantial proportion of plant cell walls. Extraction of the cellulose chains from these fibrils in a chemically benign process has proven to be a substantial challenge [2]. Monitoring the deconstruction of the fibrils in response to physical and chemical treatments would expedite the development of efficient processing methods. As a step towards achieving that goal, here we describe Bragg coherent diffraction imaging (CDI) as an approach to producing images of cellulose fibrils in situ within vascular bundles from maize.
Three-dimensional rendering of segmented object using matlab - biomed 2010.
Anderson, Jeffrey R; Barrett, Steven F
2010-01-01
The three-dimensional rendering of microscopic objects is a difficult and challenging task that often requires specialized image processing techniques. Previous work described a semi-automatic segmentation process for fluorescently stained neurons collected as a sequence of slice images with a confocal laser scanning microscope. Once properly segmented, each individual object can be rendered and studied as a three-dimensional virtual object. This paper describes the design and development of Matlab files to create three-dimensional images from the segmented object data previously mentioned. Part of the motivation for this work is to integrate both the segmentation and rendering processes into one software application, providing a seamless transition from the segmentation tasks to the rendering and visualization tasks. Previously these tasks were accomplished on two different computer systems, Windows and Linux, which limits the usefulness of the segmentation and rendering applications to those who have both systems readily available. The focus of this work is to create custom Matlab image processing algorithms for object rendering and visualization, and to merge these capabilities with the Matlab files that were developed especially for the image segmentation task. The completed Matlab application will contain both the segmentation and rendering processes in a single graphical user interface, or GUI. Rendering three-dimensional images in Matlab requires that a sequence of two-dimensional binary images, each representing a cross-sectional slice of the object, be reassembled in a 3D space and covered with a surface. Additional segmented objects can be rendered in the same 3D space. The surface properties of each object can be varied by the user to aid in the study and analysis of the objects. This interactive process becomes a powerful visual tool for studying and understanding microscopic objects.
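The core reassembly-and-surfacing step described above translates readily to a short script. The following Python sketch is an illustration of the general technique, not the authors' Matlab code; the function name and demo data are hypothetical. It stacks binary slice masks into a volume and extracts a renderable isosurface with marching cubes.

```python
# Minimal sketch: reassemble binary slice images into a 3D volume and render
# an isosurface extracted with marching cubes (illustrative, not the authors' code).
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
from skimage import measure

def render_segmented_object(slices, spacing=(1.0, 1.0, 1.0)):
    """slices: sequence of 2D binary masks, one per confocal section."""
    volume = np.stack(slices, axis=0).astype(float)        # (z, y, x) volume
    # Triangulated surface at the 0.5 iso-level of the binary volume.
    verts, faces, _, _ = measure.marching_cubes(volume, level=0.5, spacing=spacing)
    fig = plt.figure()
    ax = fig.add_subplot(111, projection="3d")
    mesh = Poly3DCollection(verts[faces], alpha=0.6)       # surface properties can be varied
    ax.add_collection3d(mesh)
    ax.set_xlim(0, volume.shape[0] * spacing[0])
    ax.set_ylim(0, volume.shape[1] * spacing[1])
    ax.set_zlim(0, volume.shape[2] * spacing[2])
    plt.show()

# Hypothetical usage: a synthetic object built from circular binary slices.
yy, xx = np.mgrid[:64, :64]
demo = [((xx - 32) ** 2 + (yy - 32) ** 2 < r ** 2) for r in (10, 14, 16, 14, 10)]
render_segmented_object(demo, spacing=(5.0, 1.0, 1.0))
```

Additional objects could be added to the same axes before calling show(), mirroring the multi-object rendering the abstract describes.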
MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, G; Pan, X; Stayman, J
2014-06-15
Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical applications. Learning Objectives: Learn the general methodologies associated with model-based 3D image reconstruction. Learn the potential advantages in image quality and dose associated with model-based image reconstruction. Learn the challenges associated with computational load and image quality assessment for such reconstruction methods. Learn how imaging task can be incorporated as a means to drive optimal image acquisition and reconstruction techniques. Learn how model-based reconstruction methods can incorporate prior information to improve image quality, ease sampling requirements, and reduce dose.
Hyperspectral Fluorescence and Reflectance Imaging Instrument
NASA Technical Reports Server (NTRS)
Ryan, Robert E.; O'Neal, S. Duane; Lanoue, Mark; Russell, Jeffrey
2008-01-01
The system is a single hyperspectral imaging instrument that has the unique capability to acquire both fluorescence and reflectance high-spatial-resolution data that is inherently spatially and spectrally registered. Potential uses of this instrument include plant stress monitoring, counterfeit document detection, biomedical imaging, forensic imaging, and general materials identification. Until now, reflectance and fluorescence spectral imaging have been performed by separate instruments. Neither a reflectance spectral image nor a fluorescence spectral image alone yields as much information about a target surface as does a combination of the two modalities. Before this system was developed, to benefit from this combination, analysts needed to perform time-consuming post-processing efforts to co-register the reflective and fluorescence information. With this instrument, the inherent spatial and spectral registration of the reflectance and fluorescence images minimizes the need for this post-processing step. The main challenge for this technology is to detect the fluorescence signal in the presence of a much stronger reflectance signal. To meet this challenge, the instrument modulates artificial light sources from ultraviolet through the visible to the near-infrared part of the spectrum; in this way, both the reflective and fluorescence signals can be measured through differencing processes to optimize fluorescence and reflectance spectra as needed. The main functional components of the instrument are a hyperspectral imager, an illumination system, and an image-plane scanner. The hyperspectral imager is a one-dimensional (line) imaging spectrometer that includes a spectrally dispersive element and a two-dimensional focal plane detector array. The spectral range of the current imaging spectrometer is from 400 to 1,000 nm, and the wavelength resolution is approximately 3 nm. The illumination system consists of narrowband blue, ultraviolet, and other discrete wavelength light-emitting-diode (LED) sources and white-light LED sources designed to produce consistently spatially stable light. White LEDs provide illumination for the measurement of reflectance spectra, while narrowband blue and UV LEDs are used to excite fluorescence. Each spectral type of LED can be turned on or off depending on the specific remote-sensing process being performed. Uniformity of illumination is achieved by using an array of LEDs and/or an integrating sphere or other diffusing surface. The image plane scanner uses a fore optic with a field of view large enough to provide an entire scan line on the image plane. It builds up a two-dimensional image in pushbroom fashion as the target is scanned across the image plane either by moving the object or moving the fore optic. For fluorescence detection, spectral filtering of a narrowband light illumination source is sometimes necessary to minimize the interference of the source spectrum wings with the fluorescence signal. Spectral filtering is achieved with optical interference filters and absorption glasses. This dual spectral imaging capability will enable the optimization of reflective, fluorescence, and fused datasets as well as a cost-effective design for multispectral imaging solutions. This system has been used in plant stress detection studies and in currency analysis.
Advances in biologically inspired on/near sensor processing
NASA Astrophysics Data System (ADS)
McCarley, Paul L.
1999-07-01
As electro-optic sensors increase in size and frame rate, the data transfer and digital processing resource requirements also increase. In many missions, the spatial area of interest is but a small fraction of the available field of view. Choosing the right region of interest, however, is a challenge and still requires an enormous amount of downstream digital processing resources. In order to filter this ever-increasing amount of data, we look at how nature solves the problem. The Advanced Guidance Division of the Munitions Directorate, Air Force Research Laboratory at Eglin AFB, Florida, has been pursuing research in the area of advanced sensor and image processing concepts based on biologically inspired sensory information processing. A summary of two 'neuromorphic' processing efforts will be presented along with a seeker system concept utilizing this innovative technology. The Neuroseek program is developing a 256 x 256 2-color dual band IRFPA coupled to an optimized silicon CMOS read-out and processing integrated circuit that provides simultaneous full-frame imaging in MWIR/LWIR wavebands along with built-in biologically inspired sensor image processing functions. Concepts and requirements for future such efforts will also be discussed.
Automatic laser welding and milling with in situ inline coherent imaging.
Webster, P J L; Wright, L G; Ji, Y; Galbraith, C M; Kinross, A W; Van Vlack, C; Fraser, J M
2014-11-01
Although new affordable high-power laser technologies enable many processing applications in science and industry, depth control remains a serious technical challenge. In this Letter we show that inline coherent imaging (ICI), with line rates up to 312 kHz and microsecond-duration capture times, is capable of directly measuring laser penetration depth, in a process as violent as kW-class keyhole welding. We exploit ICI's high speed, high dynamic range, and robustness to interference from other optical sources to achieve automatic, adaptive control of laser welding, as well as ablation, achieving 3D micron-scale sculpting in vastly different heterogeneous biological materials.
Faure, Emmanuel; Savy, Thierry; Rizzi, Barbara; Melani, Camilo; Stašová, Olga; Fabrèges, Dimitri; Špir, Róbert; Hammons, Mark; Čúnderlík, Róbert; Recher, Gaëlle; Lombardot, Benoît; Duloquin, Louise; Colin, Ingrid; Kollár, Jozef; Desnoulez, Sophie; Affaticati, Pierre; Maury, Benoît; Boyreau, Adeline; Nief, Jean-Yves; Calvat, Pascal; Vernier, Philippe; Frain, Monique; Lutfalla, Georges; Kergosien, Yannick; Suret, Pierre; Remešíková, Mariana; Doursat, René; Sarti, Alessandro; Mikula, Karol; Peyriéras, Nadine; Bourgine, Paul
2016-01-01
The quantitative and systematic analysis of embryonic cell dynamics from in vivo 3D+time image data sets is a major challenge at the forefront of developmental biology. Despite recent breakthroughs in the microscopy imaging of living systems, producing an accurate cell lineage tree for any developing organism remains a difficult task. We present here the BioEmergences workflow integrating all reconstruction steps from image acquisition and processing to the interactive visualization of reconstructed data. Original mathematical methods and algorithms underlie image filtering, nucleus centre detection, nucleus and membrane segmentation, and cell tracking. They are demonstrated on zebrafish, ascidian and sea urchin embryos with stained nuclei and membranes. Subsequent validation and annotations are carried out using Mov-IT, a custom-made graphical interface. Compared with eight other software tools, our workflow achieved the best lineage score. Delivered in standalone or web service mode, BioEmergences and Mov-IT offer a unique set of tools for in silico experimental embryology. PMID:26912388
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nugraha, Andri Dian; Adisatrio, Philipus Ronnie
2013-09-09
Seismic refraction surveying is a geophysical method useful for imaging the Earth's interior, particularly the near surface. One of the common problems in seismic refraction surveys is weak amplitude due to attenuation at far offsets. This makes it difficult to pick the first refraction arrival and hence challenging to produce the near-surface image. Seismic interferometry is a new technique that manipulates seismic traces to obtain the Green's function between a pair of receivers. One of its uses is improving the quality of the first refraction arrival at far offsets. This research shows that physical properties such as seismic velocity and layer thickness can be estimated from virtual refraction processing. Virtual refraction can also enhance the far-offset signal amplitude because a stacking procedure is involved. Our results show that super-virtual refraction processing produces a seismic image with a higher signal-to-noise ratio than the raw seismic image. In the end, the number of reliable first-arrival picks is also increased.
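The cross-correlation-and-stack operation at the heart of this virtual-refraction idea can be illustrated compactly. The Python sketch below is a simplified illustration, not the authors' processing code; the function name and synthetic data are hypothetical. It correlates the records of two receivers shot by shot and stacks the results, so a coherent refracted arrival emerges as a peak at the inter-receiver delay while incoherent noise averages down.

```python
# Illustrative sketch of the cross-correlation-and-stack step used in seismic
# interferometry: correlate the records at two receivers and stack over shots,
# so coherent refracted energy builds up a "virtual" trace with higher S/N.
import numpy as np

def virtual_trace(gather_a, gather_b):
    """gather_a, gather_b: arrays of shape (n_shots, n_samples) recorded at
    receivers A and B for the same shots. Returns the stacked cross-correlation."""
    n_shots, n_samples = gather_a.shape
    stack = np.zeros(2 * n_samples - 1)
    for s in range(n_shots):
        # Full cross-correlation of the two traces for this shot.
        stack += np.correlate(gather_b[s], gather_a[s], mode="full")
    return stack / n_shots           # stacking raises S/N of the virtual arrival

# Hypothetical usage with synthetic data: a refracted arrival with a fixed delay
# between the two receivers, buried in noise at far offset.
rng = np.random.default_rng(0)
n_shots, n_samples, delay = 50, 400, 12
a = rng.normal(0, 0.5, (n_shots, n_samples))    # receiver A records
b = rng.normal(0, 0.5, (n_shots, n_samples))    # receiver B records
t0 = rng.integers(50, 300, n_shots)             # shot-dependent arrival time at A
for s in range(n_shots):
    a[s, t0[s]] += 5.0                          # refracted arrival at A
    b[s, t0[s] + delay] += 5.0                  # same arrival reaches B `delay` samples later
lags = np.arange(-(n_samples - 1), n_samples)
print("estimated inter-receiver delay:", lags[np.argmax(virtual_trace(a, b))])
```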
NASA Astrophysics Data System (ADS)
Shani, Uri; Kol, Tomer; Shachor, Gal
2004-04-01
Managing medical digital information objects, and in particular medical images, is an enterprise-grade problem. Firstly, there is the sheer amount of digital data that is generated in the proliferation of digital (and film-free) medical imaging. Secondly, the managing software ought to enjoy high availability, recoverability and manageability that are found only in the most business-critical systems. Indeed, such requirements are borrowed from the business enterprise world. Moreover, the solution for the medical information management problem should also employ the same software tools, middleware and architectures. It is safe to say that all first-line medical PACS products strive to provide a solution for all these challenging requirements. The DICOM standard has been a prime enabler of such solutions. DICOM created the interconnectivity, which made it possible for a PACS service to manage millions of exams consisting of trillions of images. With the more comprehensive IHE architecture, the enterprise is expanded into a multi-facility regional conglomerate, which places extreme demands on the data management system. HIPAA legislation adds considerable challenges regarding security, privacy and other legal issues, which aggravate the situation. In this paper, we firstly present what in our view should be the general requirements for a first-line medical PACS, taken from an enterprise medical imaging storage and management solution perspective. While these requirements can be met by homegrown implementations, we suggest looking at the existing technologies, which have emerged in recent years to meet exactly these challenges in the business world. We present an evolutionary process, which led to the design and implementation of a medical object management subsystem. This is indeed an enterprise medical imaging solution that is built upon respective technological components. The system answers all these challenges simply by not reinventing wheels, but rather reusing the best "wheels" for the job. Relying on such middleware components allowed us to concentrate on added value for this specific problem domain.
Manyscale Computing for Sensor Processing in Support of Space Situational Awareness
NASA Astrophysics Data System (ADS)
Schmalz, M.; Chapman, W.; Hayden, E.; Sahni, S.; Ranka, S.
2014-09-01
Increasing image and signal data burden associated with sensor data processing in support of space situational awareness implies continuing computational throughput growth beyond the petascale regime. In addition to growing applications data burden and diversity, the breadth, diversity and scalability of high performance computing architectures and their various organizations challenge the development of a single, unifying, practicable model of parallel computation. Therefore, models for scalable parallel processing have exploited architectural and structural idiosyncrasies, yielding potential misapplications when legacy programs are ported among such architectures. In response to this challenge, we have developed a concise, efficient computational paradigm and software called Manyscale Computing to facilitate efficient mapping of annotated application codes to heterogeneous parallel architectures. Our theory, algorithms, software, and experimental results support partitioning and scheduling of application codes for envisioned parallel architectures, in terms of work atoms that are mapped (for example) to threads or thread blocks on computational hardware. Because of the rigor, completeness, conciseness, and layered design of our manyscale approach, application-to-architecture mapping is feasible and scalable for architectures at petascales, exascales, and above. Further, our methodology is simple, relying primarily on a small set of primitive mapping operations and support routines that are readily implemented on modern parallel processors such as graphics processing units (GPUs) and hybrid multi-processors (HMPs). In this paper, we overview the opportunities and challenges of manyscale computing for image and signal processing in support of space situational awareness applications. We discuss applications in terms of a layered hardware architecture (laboratory > supercomputer > rack > processor > component hierarchy). Demonstration applications include performance analysis and results in terms of execution time as well as storage, power, and energy consumption for bus-connected and/or networked architectures. The feasibility of the manyscale paradigm is demonstrated by addressing four principal challenges: (1) architectural/structural diversity, parallelism, and locality, (2) masking of I/O and memory latencies, (3) scalability of design as well as implementation, and (4) efficient representation/expression of parallel applications. Examples will demonstrate how manyscale computing helps solve these challenges efficiently on real-world computing systems.
Use of letter writing as a means of integrating an altered body image: a case study.
Rancour, Patricia; Brauer, Kathryn
2003-01-01
To describe the use of letter writing as a technique to assist patients in adjusting to an altered body image after dramatic cancer treatment. Published articles and books. Gestalt therapy, psychosynthesis, and journaling techniques evolve into a technique that can assist patients who are challenged to accept altered body parts. Described in a case study presentation, letter writing was found to assist female patients with recurrent breast cancer in adjusting to reconstruction of lost breasts. Nurses can use letter writing as a means of assisting patients through the grief process associated with body image alterations.
2-D traveltime and waveform inversion for improved seismic imaging: Naga Thrust and Fold Belt, India
NASA Astrophysics Data System (ADS)
Jaiswal, Priyank; Zelt, Colin A.; Bally, Albert W.; Dasgupta, Rahul
2008-05-01
Exploration along the Naga Thrust and Fold Belt in the Assam province of Northeast India encounters geological as well as logistic challenges. Drilling for hydrocarbons, traditionally guided by surface manifestations of the Naga thrust fault, faces additional challenges in the northeast where the thrust fault gradually deepens leaving subtle surface expressions. In such an area, multichannel 2-D seismic data were collected along a line perpendicular to the trend of the thrust belt. The data have a moderate signal-to-noise ratio and suffer from ground roll and other acquisition-related noise. In addition to data quality, the complex geology of the thrust belt limits the ability of conventional seismic processing to yield a reliable velocity model, which in turn leads to a poor subsurface image. In this paper, we demonstrate the application of traveltime and waveform inversion as supplements to conventional seismic imaging and interpretation processes. Both traveltime and waveform inversion utilize the first arrivals that are typically discarded during conventional seismic processing. As a first step, a smooth velocity model with long wavelength characteristics of the subsurface is estimated through inversion of the first-arrival traveltimes. This velocity model is then used to obtain a Kirchhoff pre-stack depth-migrated image which in turn is used for the interpretation of the fault. Waveform inversion is applied to the central part of the seismic line to a depth of ~1 km where the quality of the migrated image is poor. Waveform inversion is performed in the frequency domain over a series of iterations, proceeding from low to high frequency (11-19 Hz) using the velocity model from traveltime inversion as the starting model. In the end, the pre-stack depth-migrated image and the waveform inversion model are jointly interpreted. This study demonstrates that a combination of traveltime and waveform inversion with Kirchhoff pre-stack depth migration is a promising approach for the interpretation of geological structures in a thrust belt.
Hu, Xin; Wen, Long; Yu, Yan; Cumming, David R. S.
2016-01-01
The increasing miniaturization and resolution of image sensors bring challenges to conventional optical elements such as spectral filters and polarizers, the properties of which are determined mainly by the materials used, including dye polymers. Recent developments in spectral filtering and optical manipulating techniques based on nanophotonics have opened up the possibility of an alternative method to control light spectrally and spatially. By integrating these technologies into image sensors, it will become possible to achieve high compactness, improved process compatibility, robust stability and tunable functionality. In this Review, recent representative achievements on nanophotonic image sensors are presented and analyzed including image sensors with nanophotonic color filters and polarizers, metamaterial‐based THz image sensors, filter‐free nanowire image sensors and nanostructured‐based multispectral image sensors. This novel combination of cutting edge photonics research and well‐developed commercial products may not only lead to an important application of nanophotonics but also offer great potential for next generation image sensors beyond Moore's Law expectations. PMID:27239941
Culture, Communication, and the Challenge of Globalization.
ERIC Educational Resources Information Center
Shome, Raka; Hegde, Radha S.
2002-01-01
Deals with the problematics that globalization poses for critical communication scholarship. Addresses how uneven patterns of global processes are enacted through cultural practices produced by the transnational flows of images and capital. Explores several areas of contemporary global growth with the overall objective of demonstrating the urgency…
Diagnostic report acquisition unit for the Mayo/IBM PACS project
NASA Astrophysics Data System (ADS)
Brooks, Everett G.; Rothman, Melvyn L.
1991-07-01
The Mayo Clinic and IBM Rochester have jointly developed a picture archive and control system (PACS) for use with Mayo's MRI and Neuro-CT imaging modalities. One of the challenges of developing a useful PACS involves integrating the diagnostic reports with the electronic images so they can be displayed simultaneously. By the time a diagnostic report is generated for a particular case, its images have already been captured and archived by the PACS. To integrate the report with the images, the authors have developed an IBM Personal System/2 computer (PS/2) based diagnostic report acquisition unit (RAU). A typed copy of the report is transmitted via facsimile to the RAU where it is stacked electronically with other reports that have been sent previously but not yet processed. By processing these reports at the RAU, the information they contain is integrated with the image database and a copy of the report is archived electronically on an IBM Application System/400 computer (AS/400). When a user requests a set of images for viewing, the report is automatically integrated with the image data. By using a hot key, the user can toggle on/off the report on the display screen. This report describes process, hardware, and software employed to integrate the diagnostic report information into the PACS, including how the report images are captured, transmitted, and entered into the AS/400 database. Also described is how the archived reports and their associated medical images are located and merged for retrieval and display. The methods used to detect and process error conditions are also discussed.
Siri, Sangeeta K; Latte, Mrityunjaya V
2017-11-01
Many different diseases can affect the liver, including infections such as hepatitis, as well as cirrhosis, cancer, and the adverse effects of medication or toxins. The foremost stage in computer-aided diagnosis of the liver is the identification of the liver region. Liver segmentation algorithms extract the liver from scan images, which helps in virtual surgery simulation, speeds up diagnosis, and supports accurate investigation and surgery planning. Existing liver segmentation algorithms try to extract the exact liver region from abdominal Computed Tomography (CT) scan images. It is an open problem because of ambiguous boundaries, large variation in intensity distribution, variability of liver geometry from patient to patient, and the presence of noise. A novel approach is proposed to meet the challenges of extracting the exact liver region from abdominal CT scan images. The proposed approach consists of three phases: (1) pre-processing, (2) transformation of the CT scan image to a Neutrosophic Set (NS), and (3) post-processing. In pre-processing, noise is removed by a median filter. A "new structure" is designed to transform a CT scan image into the neutrosophic domain, which is expressed using three membership subsets: the True subset (T), the False subset (F) and the Indeterminacy subset (I). This transform approximately extracts the liver structure. In the post-processing phase, a morphological operation is performed on the indeterminacy subset (I) and the Chan-Vese (C-V) model is applied, with detection of an initial contour within the liver without user intervention. This results in liver boundary identification with high accuracy. Experiments show that the proposed method is effective, robust and comparable with existing algorithms for liver segmentation of CT scan images. Copyright © 2017 Elsevier B.V. All rights reserved.
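To make the pipeline concrete, here is a heavily simplified Python sketch of the pre-processing and active-contour stages (median filtering followed by a Chan-Vese segmentation and a morphological clean-up). It deliberately omits the paper's neutrosophic-set transform and automatic contour initialization, and the function name, parameters, and demo image are illustrative assumptions only.

```python
# Simplified sketch of the pre-processing + Chan-Vese stages (not the paper's
# neutrosophic-set method): median filter, active contour, morphological clean-up.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import chan_vese
from skimage.morphology import binary_closing, remove_small_objects

def segment_organ_like_region(ct_slice):
    """ct_slice: 2D float array (one abdominal CT slice, already windowed)."""
    denoised = ndi.median_filter(ct_slice, size=3)                 # pre-processing
    denoised = (denoised - denoised.min()) / (np.ptp(denoised) + 1e-9)
    mask = chan_vese(denoised, mu=0.1, lambda1=1, lambda2=1,
                     init_level_set="checkerboard")                # C-V active contour
    if mask.mean() > 0.5:                                          # contour polarity may flip
        mask = ~mask
    mask = binary_closing(mask)                                    # post-processing clean-up
    return remove_small_objects(mask, min_size=500)

# Hypothetical usage on a synthetic bright "organ" embedded in noise.
rng = np.random.default_rng(1)
img = rng.normal(0.2, 0.05, (128, 128))
yy, xx = np.mgrid[:128, :128]
img[((yy - 70) / 35.0) ** 2 + ((xx - 60) / 45.0) ** 2 < 1] += 0.5
print(segment_organ_like_region(img).sum(), "pixels labelled as the organ")
```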
Design of an automated imaging system for use in a space experiment
NASA Technical Reports Server (NTRS)
Hartz, William G.; Bozzolo, Nora G.; Lewis, Catherine C.; Pestak, Christopher J.
1991-01-01
An experiment, conducted on an orbiting platform, examines mass transfer across gas-liquid and liquid-liquid interfaces. It employs an imaging system with real-time image analysis. The design includes optical design, imager selection and integration, positioner control, image recording, software development for processing, and interfaces to telemetry. It addresses the constraints of weight, volume, and electric power associated with placing the experiment in the Space Shuttle cargo bay. Challenging elements of the design are: imaging and recording a 200-micron-diameter bubble at a resolution of 2 microns to serve as a primary source of data; varying frame rates from 500 frames per second to 1 frame per second, depending on the experiment phase; and providing three-dimensional information to determine the shape of the bubble.
A review of automated image understanding within 3D baggage computed tomography security screening.
Mouton, Andre; Breckon, Toby P
2015-01-01
Baggage inspection is the principal safeguard against the transportation of prohibited and potentially dangerous materials at airport security checkpoints. Although traditionally performed by 2D X-ray based scanning, increasingly stringent security regulations have led to a growing demand for more advanced imaging technologies. The role of X-ray Computed Tomography is thus rapidly expanding beyond the traditional materials-based detection of explosives. The development of computer vision and image processing techniques for the automated understanding of 3D baggage-CT imagery is, however, complicated by poor image resolution, image clutter and high levels of noise and artefacts. We discuss the recent and most pertinent advancements and identify topics for future research within the challenging domain of automated image understanding for baggage security screening CT.
Translational research of optical molecular imaging for personalized medicine.
Qin, C; Ma, X; Tian, J
2013-12-01
In the medical imaging field, molecular imaging is a rapidly developing discipline comprising many imaging modalities, providing us with effective tools to visualize, characterize, and measure molecular and cellular mechanisms in complex biological processes of living organisms, which can deepen our understanding of biology and accelerate preclinical research, including cancer studies and drug discovery. Among the many molecular imaging modalities, optical imaging has evolved considerably and seen spectacular advances in basic biomedical research and new drug development, although its penetration depth and the optical probes approved for clinical use are limited. With the completion of human genome sequencing and the emergence of personalized medicine, a drug should be matched not only to the right disease but also to the right person, and optical molecular imaging should serve as a strong adjunct to the development of personalized medicine by finding the optimal drug based on an individual's proteome and genome. In this process, the computational methodology and imaging systems, as well as the biomedical applications of optical molecular imaging, will play a crucial role. This review focuses on recent representative translational studies of optical molecular imaging for personalized medicine, following a concise introduction. Finally, the current challenges and future development of optical molecular imaging are discussed from the authors' perspective, and the review is concluded.
HALO: a reconfigurable image enhancement and multisensor fusion system
NASA Astrophysics Data System (ADS)
Wu, F.; Hickman, D. L.; Parker, Steve J.
2014-06-01
Contemporary high definition (HD) cameras and affordable infrared (IR) imagers are set to dramatically improve the effectiveness of security, surveillance and military vision systems. However, the quality of imagery is often compromised by camera shake, or poor scene visibility due to inadequate illumination or bad atmospheric conditions. A versatile vision processing system called HALO™ is presented that can address these issues, by providing flexible image processing functionality on a low size, weight and power (SWaP) platform. Example processing functions include video distortion correction, stabilisation, multi-sensor fusion and image contrast enhancement (ICE). The system is based around an all-programmable system-on-a-chip (SoC), which combines the computational power of a field-programmable gate array (FPGA) with the flexibility of a CPU. The FPGA accelerates computationally intensive real-time processes, whereas the CPU provides management and decision making functions that can automatically reconfigure the platform based on user input and scene content. These capabilities enable a HALO™ equipped reconnaissance or surveillance system to operate in poor visibility, providing potentially critical operational advantages in visually complex and challenging usage scenarios. The choice of an FPGA based SoC is discussed, and the HALO™ architecture and its implementation are described. The capabilities of image distortion correction, stabilisation, fusion and ICE are illustrated using laboratory and trials data.
A robust real-time abnormal region detection framework from capsule endoscopy images
NASA Astrophysics Data System (ADS)
Cheng, Yanfen; Liu, Xu; Li, Huiping
2009-02-01
In this paper we present a novel method to detect abnormal regions in capsule endoscopy images. Wireless Capsule Endoscopy (WCE) is a recent technology in which a capsule with an embedded camera is swallowed by the patient to visualize the gastrointestinal tract. One challenge is that a single diagnostic procedure produces over 50,000 images, making the physicians' reviewing process expensive. The reviewing process involves identifying images containing abnormal regions (tumor, bleeding, etc.) in this long image sequence. We construct a framework for robust and real-time abnormal region detection from large numbers of capsule endoscopy images. Detected potential abnormal regions can be labeled automatically for further review by physicians, thereby reducing the overall reviewing effort. The framework has the following advantages: 1) Trainable. Users can define and label any type of abnormal region they want to find; abnormal regions, such as tumor and bleeding, can be pre-defined and labeled using the graphical user interface tool we provide. 2) Efficient. Because of the large amount of image data, detection speed is very important; our system detects efficiently at different scales thanks to the integral image features we use. 3) Robust. After feature selection we use a cascade of classifiers to further improve detection accuracy.
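The integral image features behind the efficiency claim rest on a simple pre-computation. The Python sketch below is a generic illustration of the technique, not the authors' implementation; the names and the demo frame are hypothetical. Once the summed-area table is built, the sum of any rectangular window is obtained with four lookups, independent of window size.

```python
# Sketch of the integral (summed-area) image: constant-time rectangle sums at
# any scale, the basis of fast multi-scale region features.
import numpy as np

def integral_image(img):
    """Summed-area table with a zero padding row/column."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, top, left, height, width):
    """Sum of pixels in the rectangle [top:top+height, left:left+width]."""
    return (ii[top + height, left + width] - ii[top, left + width]
            - ii[top + height, left] + ii[top, left])

# Hypothetical usage: mean intensity of a candidate region at any scale in O(1).
frame = np.random.default_rng(2).integers(0, 256, (256, 256)).astype(float)
ii = integral_image(frame)
print(rect_sum(ii, 10, 20, 64, 64) / (64 * 64),
      frame[10:74, 20:84].mean())        # the two values agree
```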
Machine Learning Based Single-Frame Super-Resolution Processing for Lensless Blood Cell Counting
Huang, Xiwei; Jiang, Yu; Liu, Xu; Xu, Hang; Han, Zhi; Rong, Hailong; Yang, Haiping; Yan, Mei; Yu, Hao
2016-01-01
A lensless blood cell counting system integrating microfluidic channel and a complementary metal oxide semiconductor (CMOS) image sensor is a promising technique to miniaturize the conventional optical lens based imaging system for point-of-care testing (POCT). However, such a system has limited resolution, making it imperative to improve resolution from the system-level using super-resolution (SR) processing. Yet, how to improve resolution towards better cell detection and recognition with low cost of processing resources and without degrading system throughput is still a challenge. In this article, two machine learning based single-frame SR processing types are proposed and compared for lensless blood cell counting, namely the Extreme Learning Machine based SR (ELMSR) and Convolutional Neural Network based SR (CNNSR). Moreover, lensless blood cell counting prototypes using commercial CMOS image sensors and custom designed backside-illuminated CMOS image sensors are demonstrated with ELMSR and CNNSR. When one captured low-resolution lensless cell image is input, an improved high-resolution cell image will be output. The experimental results show that the cell resolution is improved by 4×, and CNNSR has 9.5% improvement over the ELMSR on resolution enhancing performance. The cell counting results also match well with a commercial flow cytometer. Such ELMSR and CNNSR therefore have the potential for efficient resolution improvement in lensless blood cell counting systems towards POCT applications. PMID:27827837
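For readers unfamiliar with ELM-based super-resolution, the following Python sketch conveys the basic idea behind an extreme-learning-machine mapping as commonly described in the literature: a fixed random hidden layer transforms low-resolution patches into features, and only the linear output weights that reconstruct the high-resolution patches are solved, by regularized least squares. It is a rough sketch, not the authors' ELMSR or CNNSR implementation; the dimensions and stand-in data are hypothetical.

```python
# Rough ELM-style super-resolution sketch: random hidden layer + least-squares
# readout mapping low-resolution patches to high-resolution patches.
import numpy as np

rng = np.random.default_rng(3)

def elm_train(lr_patches, hr_patches, n_hidden=256, reg=1e-3):
    """lr_patches: (n, d_lr), hr_patches: (n, d_hr); returns (W_in, b, W_out)."""
    d_lr = lr_patches.shape[1]
    W_in = rng.normal(0, 1.0 / np.sqrt(d_lr), (d_lr, n_hidden))   # random, never trained
    b = rng.normal(0, 0.1, n_hidden)
    H = np.tanh(lr_patches @ W_in + b)                            # hidden activations
    # Ridge-regularised least squares for the output weights only.
    W_out = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ hr_patches)
    return W_in, b, W_out

def elm_upscale(lr_patches, model):
    W_in, b, W_out = model
    return np.tanh(lr_patches @ W_in + b) @ W_out

# Hypothetical usage with random stand-in patch pairs (8x8 LR -> 16x16 HR).
lr = rng.random((500, 64))
hr = rng.random((500, 256))
model = elm_train(lr, hr)
print(elm_upscale(lr[:1], model).shape)   # (1, 256): one reconstructed HR patch
```

Because only the readout is solved, training reduces to one linear solve, which is one reason ELM-style mappings suit resource-constrained point-of-care hardware.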
MMX-I: A data-processing software for multi-modal X-ray imaging and tomography
NASA Astrophysics Data System (ADS)
Bergamaschi, A.; Medjoubi, K.; Messaoudi, C.; Marco, S.; Somogyi, A.
2017-06-01
Scanning hard X-ray imaging allows simultaneous acquisition of multimodal information, including X-ray fluorescence, absorption, phase and dark-field contrasts, providing structural and chemical details of the samples. Combining these scanning techniques with the infrastructure developed for fast data acquisition at Synchrotron Soleil makes it possible to perform multimodal imaging and tomography during routine user experiments at the Nanoscopium beamline. A main challenge of such imaging techniques is the online processing and analysis of the very large (several hundred gigabytes) multimodal data sets generated. This is especially important for the wide user community foreseen at the user-oriented Nanoscopium beamline (e.g. from the fields of Biology, Life Sciences, Geology, Geobiology), many of whom have no experience in such data handling. MMX-I is a new multi-platform open-source freeware for the processing and reconstruction of scanning multi-technique X-ray imaging and tomographic datasets. The MMX-I project aims to offer both expert users and beginners the possibility of processing and analysing raw data, either on-site or off-site. Therefore we have developed a multi-platform (Mac, Windows and Linux 64bit) data processing tool, which is easy to install, comprehensive, intuitive, extendable and user-friendly. MMX-I is now routinely used by the Nanoscopium user community and has demonstrated its performance in handling big data.
NASA Astrophysics Data System (ADS)
Abdulbaqi, Hayder Saad; Jafri, Mohd Zubir Mat; Omar, Ahmad Fairuz; Mustafa, Iskandar Shahrim Bin; Abood, Loay Kadom
2015-04-01
Brain tumors are an abnormal growth of tissue in the brain. They may arise in people of any age. They must be detected early, diagnosed accurately, monitored carefully, and treated effectively in order to optimize patient outcomes regarding both survival and quality of life. Manual segmentation of brain tumors from CT scan images is a challenging and time-consuming task. Accurate detection of brain tumor size and location plays a vital role in the successful diagnosis and treatment of tumors, and tumor detection is considered a challenging problem in medical image processing. The aim of this paper is to introduce a scheme for tumor detection in CT scan images using two different techniques: Hidden Markov Random Fields (HMRF) and Fuzzy C-Means (FCM). The proposed method constructs a hybrid of HMRF and thresholding. These methods have been applied to 4 different patient data sets. The comparison among these methods shows that the proposed method gives good results for brain tissue detection and is more robust and effective than the FCM technique.
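As a point of reference for the FCM baseline used in the comparison, here is a minimal Python sketch of fuzzy C-means on pixel intensities. This is a generic textbook formulation, not the paper's hybrid HMRF-plus-threshold method; the cluster count, fuzzifier, and synthetic slice are illustrative assumptions. Pixels receive soft memberships to intensity classes, and the brightest class can be thresholded as a tumor candidate.

```python
# Minimal fuzzy C-means on pixel intensities (generic formulation, for illustration).
import numpy as np

def fuzzy_c_means(values, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """values: 1D array of pixel intensities; returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(values), n_clusters))
    u /= u.sum(axis=1, keepdims=True)                   # fuzzy memberships
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ values) / um.sum(axis=0)      # weighted cluster centers
        d = np.abs(values[:, None] - centers[None, :]) + 1e-9
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)               # renormalise memberships
    return centers, u

# Hypothetical usage on a synthetic CT slice with a bright "tumor" blob.
rng = np.random.default_rng(4)
img = rng.normal(0.3, 0.05, (64, 64))
img[20:30, 25:35] += 0.5
centers, u = fuzzy_c_means(img.ravel(), n_clusters=3)
tumor_class = np.argmax(centers)                        # brightest cluster
mask = (u[:, tumor_class] > 0.8).reshape(img.shape)
print(mask.sum(), "candidate tumor pixels")
```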
NASA Technical Reports Server (NTRS)
Partridge, James D.
2002-01-01
'NASA is preparing to launch the Next Generation Space Telescope (NGST). This telescope will be larger than the Hubble Space Telescope, be launched on an Atlas missile rather than the Space Shuttle, have a segmented primary mirror, and be placed in a higher orbit. All these differences pose significant challenges.' This effort addresses the challenge of implementing an algorithm, designed by Philip Olivier and members of SOMTC (Space Optics Manufacturing Technology Center), for aligning the segments of the primary mirror during initial deployment. The implementation was to be performed on the SIBOA (Systematic Image Based Optical Alignment) test bed. Unfortunately, hardware/software issues concerning SIBOA and an extended algorithm development period prevented testing before the end of the study period. Properties of the digital camera were studied and understood, resulting in the ability to select optimal settings with respect to saturation. The study was successful in manually capturing several images of two stacked segments with various relative phases. These images can be used to calibrate the algorithm for future implementation. Currently the system is ready for testing.
Microscopic time-resolved imaging of singlet oxygen by delayed fluorescence in living cells.
Scholz, Marek; Dědic, Roman; Hála, Jan
2017-11-08
Singlet oxygen is a highly reactive species which is involved in a number of processes, including photodynamic therapy of cancer. Its very weak near-infrared emission makes imaging of singlet oxygen in biological systems a long-term challenge. We address this challenge by introducing Singlet Oxygen Feedback Delayed Fluorescence (SOFDF) as a novel modality for semi-direct microscopic time-resolved wide-field imaging of singlet oxygen in biological systems. SOFDF has been investigated in individual fibroblast cells incubated with a well-known photosensitizer aluminium phthalocyanine tetrasulfonate. The SOFDF emission from the cells is several orders of magnitude stronger and much more readily detectable than the very weak near-infrared phosphorescence of singlet oxygen. Moreover, the analysis of SOFDF kinetics enables us to estimate the lifetimes of the involved excited states. Real-time SOFDF images with micrometer spatial resolution and submicrosecond temporal-resolution have been recorded. Interestingly, a steep decrease in the SOFDF intensity after the photodynamically induced release of a photosensitizer from lysosomes has been demonstrated. This effect could be potentially employed as a valuable diagnostic tool for monitoring and dosimetry in photodynamic therapy.
An effective approach for iris recognition using phase-based image matching.
Miyazawa, Kazuyuki; Ito, Koichi; Aoki, Takafumi; Kobayashi, Koji; Nakajima, Hiroshi
2008-10-01
This paper presents an efficient algorithm for iris recognition using phase-based image matching, an image matching technique using phase components in 2D Discrete Fourier Transforms (DFTs) of given images. Experimental evaluation using the CASIA iris image databases (versions 1.0 and 2.0) and the Iris Challenge Evaluation (ICE) 2005 database clearly demonstrates that the use of phase components of iris images makes it possible to achieve highly accurate iris recognition with a simple matching algorithm. This paper also discusses major implementation issues of our algorithm. In order to reduce the size of iris data and to prevent the visibility of iris images, we introduce the idea of a 2D Fourier Phase Code (FPC) for representing iris information. The 2D FPC is particularly useful for implementing compact iris recognition devices using state-of-the-art Digital Signal Processing (DSP) technology.
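The phase-based matching at the core of the algorithm can be summarized in a few lines: keep only the phase of the 2D DFTs, form the normalized cross-phase spectrum, and inverse-transform it; a sharp correlation peak indicates a genuine match. The Python sketch below is a generic phase-only correlation illustration, not the authors' implementation, and the stand-in "iris band" data are hypothetical.

```python
# Generic phase-only correlation (POC): discard DFT magnitudes, keep phases,
# and look for a sharp peak in the inverse transform of the cross-phase spectrum.
import numpy as np

def phase_only_correlation(img_a, img_b):
    """Both images must have the same shape; returns the POC surface."""
    fa = np.fft.fft2(img_a)
    fb = np.fft.fft2(img_b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12           # keep phase only
    return np.real(np.fft.ifft2(cross))

# Hypothetical usage: a shifted copy of the same texture gives a strong peak,
# while an unrelated texture gives a flat, low surface.
rng = np.random.default_rng(5)
a = rng.random((64, 256))                     # stand-in for an unwrapped iris band
b = np.roll(a, (0, 7), axis=(0, 1))
poc = phase_only_correlation(a, b)
print("matching score:", poc.max())           # close to 1.0 for a genuine match
print("impostor score:", phase_only_correlation(a, rng.random((64, 256))).max())
```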
Radiomics: Images Are More than Pictures, They Are Data
Kinahan, Paul E.; Hricak, Hedvig
2016-01-01
In the past decade, the field of medical image analysis has grown exponentially, with an increased number of pattern recognition tools and an increase in data set sizes. These advances have facilitated the development of processes for high-throughput extraction of quantitative features that result in the conversion of images into mineable data and the subsequent analysis of these data for decision support; this practice is termed radiomics. This is in contrast to the traditional practice of treating medical images as pictures intended solely for visual interpretation. Radiomic data contain first-, second-, and higher-order statistics. These data are combined with other patient data and are mined with sophisticated bioinformatics tools to develop models that may potentially improve diagnostic, prognostic, and predictive accuracy. Because radiomics analyses are intended to be conducted with standard of care images, it is conceivable that conversion of digital images to mineable data will eventually become routine practice. This report describes the process of radiomics, its challenges, and its potential power to facilitate better clinical decision making, particularly in the care of patients with cancer. PMID:26579733
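As a small illustration of what converting images into mineable data looks like in practice, the Python sketch below computes a handful of first-order radiomic features (intensity statistics within a lesion mask). The feature definitions follow common usage rather than any specific radiomics toolkit, and the mask and image are hypothetical.

```python
# First-order radiomic features from the intensities inside a lesion mask
# (illustrative definitions; not tied to a particular radiomics toolkit).
import numpy as np

def first_order_features(image, mask, n_bins=32):
    x = image[mask].astype(float)
    hist, _ = np.histogram(x, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return {
        "mean": x.mean(),
        "variance": x.var(),
        "skewness": ((x - x.mean()) ** 3).mean() / (x.std() ** 3 + 1e-12),
        "kurtosis": ((x - x.mean()) ** 4).mean() / (x.var() ** 2 + 1e-12),
        "entropy": float(-(p * np.log2(p)).sum()),
    }

# Hypothetical usage on a synthetic lesion region of interest.
rng = np.random.default_rng(6)
img = rng.normal(100, 10, (128, 128))
yy, xx = np.mgrid[:128, :128]
roi = (yy - 64) ** 2 + (xx - 64) ** 2 < 20 ** 2
img[roi] += 40
print(first_order_features(img, roi))
```

Tables of such features, computed per lesion across a cohort, are what downstream bioinformatics models mine for diagnostic and prognostic signal.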
NASA Astrophysics Data System (ADS)
Guerrero Prado, Patricio; Nguyen, Mai K.; Dumas, Laurent; Cohen, Serge X.
2017-01-01
Characterization and interpretation of flat ancient material objects, such as those found in archaeology, paleoenvironments, paleontology, and cultural heritage, have remained a challenging task to perform by means of conventional x-ray tomography methods due to their anisotropic morphology and flattened geometry. To overcome the limitations of the mentioned methodologies for such samples, an imaging modality based on Compton scattering is proposed in this work. Classical x-ray tomography treats Compton scattering data as noise in the image formation process, while in Compton scattering tomography the conditions are set such that Compton data become the principal image contrasting agent. Under these conditions, we are able, first, to avoid relative rotations between the sample and the imaging setup, and second, to obtain three-dimensional data even when the object is supported by a dense material by exploiting backscattered photons. Mathematically this problem is addressed by means of a conical Radon transform and its inversion. The image formation process and object reconstruction model are presented. The feasibility of this methodology is supported by numerical simulations.
Super-Resolution for “Jilin-1” Satellite Video Imagery via a Convolutional Network
Wang, Zhongyuan; Wang, Lei; Ren, Yexian
2018-01-01
Super-resolution for satellite video attaches much significance to earth observation accuracy, and the special imaging and transmission conditions on the video satellite pose great challenges to this task. The existing deep convolutional neural-network-based methods require pre-processing or post-processing to be adapted to a high-resolution size or pixel format, leading to reduced performance and extra complexity. To this end, this paper proposes a five-layer end-to-end network structure without any pre-processing and post-processing, but imposes a reshape or deconvolution layer at the end of the network to retain the distribution of ground objects within the image. Meanwhile, we formulate a joint loss function by combining the output and high-dimensional features of a non-linear mapping network to precisely learn the desirable mapping relationship between low-resolution images and their high-resolution counterparts. Also, we use satellite video data itself as a training set, which favors consistency between training and testing images and promotes the method’s practicality. Experimental results on “Jilin-1” satellite video imagery show that this method demonstrates a superior performance in terms of both visual effects and measure metrics over competing methods. PMID:29652838
Super-Resolution for "Jilin-1" Satellite Video Imagery via a Convolutional Network.
Xiao, Aoran; Wang, Zhongyuan; Wang, Lei; Ren, Yexian
2018-04-13
Super-resolution for satellite video attaches much significance to earth observation accuracy, and the special imaging and transmission conditions on the video satellite pose great challenges to this task. The existing deep convolutional neural-network-based methods require pre-processing or post-processing to be adapted to a high-resolution size or pixel format, leading to reduced performance and extra complexity. To this end, this paper proposes a five-layer end-to-end network structure without any pre-processing and post-processing, but imposes a reshape or deconvolution layer at the end of the network to retain the distribution of ground objects within the image. Meanwhile, we formulate a joint loss function by combining the output and high-dimensional features of a non-linear mapping network to precisely learn the desirable mapping relationship between low-resolution images and their high-resolution counterparts. Also, we use satellite video data itself as a training set, which favors consistency between training and testing images and promotes the method's practicality. Experimental results on "Jilin-1" satellite video imagery show that this method demonstrates a superior performance in terms of both visual effects and measure metrics over competing methods.
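The architecture described above (five layers, end to end, with the upscaling done by a final reshape or deconvolution layer rather than by bicubic pre-processing) can be sketched in PyTorch as follows. The layer widths, kernel sizes, and the feature-consistency term in the joint loss are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal five-layer end-to-end SR sketch: four conv layers plus a final
# transposed convolution that performs the upscaling directly.
import torch
import torch.nn as nn

class VideoSRNet(nn.Module):
    def __init__(self, scale=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Final deconvolution layer restores the high-resolution grid directly.
        self.upsample = nn.ConvTranspose2d(16, 1, kernel_size=scale * 2,
                                           stride=scale, padding=scale // 2)

    def forward(self, x):
        feats = self.features(x)
        return self.upsample(feats), feats   # features can feed a joint loss

# Hypothetical joint loss: pixel loss on the output plus a feature-consistency
# term, echoing the combination of output and high-dimensional features.
def joint_loss(sr, hr, feats, feats_ref, alpha=0.1):
    return (nn.functional.mse_loss(sr, hr)
            + alpha * nn.functional.mse_loss(feats, feats_ref))

lr = torch.randn(2, 1, 32, 32)
sr, feats = VideoSRNet(scale=4)(lr)
print(sr.shape)   # torch.Size([2, 1, 128, 128])
```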
Raben, Jaime S; Hariharan, Prasanna; Robinson, Ronald; Malinauskas, Richard; Vlachos, Pavlos P
2016-03-01
We present advanced particle image velocimetry (PIV) processing, post-processing, and uncertainty estimation techniques to support the validation of computational fluid dynamics analyses of medical devices. This work is an extension of a previous FDA-sponsored multi-laboratory study, which used a medical device mimicking geometry referred to as the FDA benchmark nozzle model. Experimental measurements were performed using time-resolved PIV at five overlapping regions of the model for Reynolds numbers in the nozzle throat of 500, 2000, 5000, and 8000. Images included a twofold increase in spatial resolution in comparison to the previous study. Data was processed using ensemble correlation, dynamic range enhancement, and phase correlations to increase signal-to-noise ratios and measurement accuracy, and to resolve flow regions with large velocity ranges and gradients, which is typical of many blood-contacting medical devices. Parameters relevant to device safety, including shear stress at the wall and in bulk flow, were computed using radial basis functions. In addition, in-field spatially resolved pressure distributions, Reynolds stresses, and energy dissipation rates were computed from PIV measurements. Velocity measurement uncertainty was estimated directly from the PIV correlation plane, and uncertainty analysis for wall shear stress at each measurement location was performed using a Monte Carlo model. Local velocity uncertainty varied greatly and depended largely on local conditions such as particle seeding, velocity gradients, and particle displacements. Uncertainty in low velocity regions in the sudden expansion section of the nozzle was greatly reduced by over an order of magnitude when dynamic range enhancement was applied. Wall shear stress uncertainty was dominated by uncertainty contributions from velocity estimations, which were shown to account for 90-99% of the total uncertainty. This study provides advancements in the PIV processing methodologies over the previous work through increased PIV image resolution, use of robust image processing algorithms for near-wall velocity measurements and wall shear stress calculations, and uncertainty analyses for both velocity and wall shear stress measurements. The velocity and shear stress analysis, with spatially distributed uncertainty estimates, highlights the challenges of flow quantification in medical devices and provides potential methods to overcome such challenges.
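One of the uncertainty steps described above, propagating velocity uncertainty into wall shear stress with a Monte Carlo model, can be illustrated briefly. The Python sketch below is a simplified stand-in: it uses a linear near-wall fit rather than the radial-basis-function approach of the study, and the viscosity, positions, and uncertainties are hypothetical.

```python
# Monte Carlo propagation of velocity uncertainty into wall shear stress:
# perturb near-wall velocities, refit the wall gradient, read the WSS spread.
import numpy as np

def wss_uncertainty(y, u, u_sigma, mu=3.5e-3, n_draws=5000, seed=0):
    """y: wall-normal positions [m]; u: mean velocities [m/s]; u_sigma: their
    uncertainties; mu: dynamic viscosity [Pa*s]. Returns (wss_mean, wss_std)."""
    rng = np.random.default_rng(seed)
    wss = np.empty(n_draws)
    for i in range(n_draws):
        u_draw = u + rng.normal(0.0, u_sigma)        # perturbed velocity profile
        dudy = np.polyfit(y, u_draw, 1)[0]           # near-wall velocity gradient
        wss[i] = mu * dudy                           # tau_w = mu * du/dy
    return wss.mean(), wss.std()

# Hypothetical usage: four near-wall PIV points with 3% velocity uncertainty.
y = np.array([0.1e-3, 0.2e-3, 0.3e-3, 0.4e-3])
u = np.array([0.05, 0.10, 0.15, 0.20])
print(wss_uncertainty(y, u, 0.03 * u))
```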
High quality image-pair-based deblurring method using edge mask and improved residual deconvolution
NASA Astrophysics Data System (ADS)
Cui, Guangmang; Zhao, Jufeng; Gao, Xiumin; Feng, Huajun; Chen, Yueting
2017-04-01
Image deconvolution is a challenging task in the field of image processing. Using an image pair can provide a better restored image than deblurring from a single blurred image. In this paper, a high-quality image-pair-based deblurring method is presented using an improved Richardson-Lucy (RL) algorithm and a gain-controlled residual deconvolution technique. The input image pair consists of a non-blurred noisy image and a blurred image captured of the same scene. With the estimated blur kernel, an improved RL deblurring method based on an edge mask is introduced to obtain the preliminary deblurring result with effective ringing suppression and detail preservation. The preliminary deblurring result then serves as the basic latent image, and gain-controlled residual deconvolution is used to recover the residual image. A saliency weight map is computed as the gain map to further control ringing effects around edge areas in the residual deconvolution process. The final deblurring result is obtained by adding the recovered residual image to the preliminary deblurring result. An optical experimental vibration platform is set up to verify the applicability and performance of the proposed algorithm. Experimental results demonstrate that the proposed deblurring framework achieves superior performance in both subjective and objective assessments and has wide applicability in image deblurring.
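A stripped-down version of the image-pair pipeline can be sketched with off-the-shelf Richardson-Lucy deconvolution standing in for the paper's improved, edge-masked RL step. Everything here is an illustrative approximation: the gain map is a simple blurred edge response rather than the paper's saliency weight map, and the function names, parameters, and synthetic data are hypothetical.

```python
# Simplified image-pair deblurring sketch: RL on the blurred frame, residual
# deconvolution, and an edge-derived gain map from the noisy-but-sharp frame.
import numpy as np
from scipy.signal import fftconvolve
from skimage.restoration import richardson_lucy
from skimage.filters import sobel, gaussian

def pair_deblur(blurred, noisy_sharp, psf, gain=0.8):
    base = richardson_lucy(blurred, psf, 30)                    # preliminary result
    residual = blurred - fftconvolve(base, psf, mode="same")    # unexplained detail
    detail = richardson_lucy(residual - residual.min() + 1e-3, psf, 10)
    edge_weight = gaussian(sobel(noisy_sharp), sigma=2)         # crude gain map from edges
    edge_weight /= edge_weight.max() + 1e-9
    # Attenuate recovered detail near edges to limit ringing, then add it back.
    return base + gain * (1.0 - edge_weight) * (detail - detail.mean())

# Hypothetical usage with a synthetic scene, a Gaussian blur PSF and a noisy copy.
rng = np.random.default_rng(7)
scene = np.zeros((96, 96)); scene[30:60, 30:60] = 1.0
g = np.exp(-np.linspace(-2, 2, 9) ** 2)
psf = np.outer(g, g); psf /= psf.sum()
blurred = fftconvolve(scene, psf, mode="same")
noisy_sharp = scene + rng.normal(0, 0.1, scene.shape)
print(pair_deblur(blurred, noisy_sharp, psf).shape)
```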
Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing
Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin
2016-01-01
With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time image processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially GPU based methods. In the classical GPU based imaging algorithm, the GPU is employed to accelerate image processing by massive parallel computing, and the CPU is only used to perform auxiliary work such as data input/output (IO). However, the computing capability of the CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated with deep collaborative multiple CPU/GPU computing. For the CPU parallel imaging part, advanced vector extensions (AVX) are first introduced into the multi-core CPU parallel method for higher efficiency. For the GPU parallel imaging part, the bottlenecks of limited memory and frequent data transfers are removed, and several optimization strategies, such as streaming and parallel pipelining, are applied. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method improves SAR imaging efficiency by a factor of 270 over a single-core CPU and achieves real-time imaging, with the imaging rate exceeding the raw data generation rate. PMID:27070606
Imaging in Central Nervous System Drug Discovery.
Gunn, Roger N; Rabiner, Eugenii A
2017-01-01
The discovery and development of central nervous system (CNS) drugs is an extremely challenging process requiring substantial resources, long timelines, and high associated costs. The rate of failure is high, leading to high levels of risk. Over the past couple of decades PET imaging has become a central component of the CNS drug-development process, enabling decision-making in phase I studies, where early discharge of risk provides increased confidence to progress a candidate to more costly later phase testing at the right dose level, or alternatively to kill a compound through failure to meet key criteria. The so-called "3 pillars" of drug survival, namely tissue exposure, target engagement, and pharmacologic activity, are particularly well suited for evaluation by PET imaging. This review introduces the process of CNS drug development before considering how PET imaging of the "3 pillars" has advanced to provide valuable tools for decision-making on the critical path of CNS drug development. Finally, we review the advances in PET science of biomarker development and analysis that enable sophisticated drug-development studies in man. Copyright © 2017 Elsevier Inc. All rights reserved.
High speed imaging of dynamic processes with a switched source x-ray CT system
NASA Astrophysics Data System (ADS)
Thompson, William M.; Lionheart, William R. B.; Morton, Edward J.; Cunningham, Mike; Luggar, Russell D.
2015-05-01
Conventional x-ray computed tomography (CT) scanners are limited in their scanning speed by the mechanical constraints of their rotating gantries and as such do not provide the necessary temporal resolution for imaging of fast-moving dynamic processes, such as moving fluid flows. The Real Time Tomography (RTT) system is a family of fast cone beam CT scanners which instead use multiple fixed discrete sources and complete rings of detectors in an offset geometry. We demonstrate the potential of this system for use in the imaging of such high speed dynamic processes and give results using simulated and real experimental data. The unusual scanning geometry results in some challenges in image reconstruction, which are overcome using algebraic iterative reconstruction techniques and explicit regularisation. Through the use of a simple temporal regularisation term and by optimising the source firing pattern, we show that temporal resolution of the system may be increased at the expense of spatial resolution, which may be advantageous in some situations. Results are given showing temporal resolution of approximately 500 µs with simulated data and 3 ms with real experimental data.
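The abstract names algebraic iterative reconstruction with an explicit temporal regularisation term but does not detail the algorithm. The toy sketch below shows the general idea on a dense random system matrix: each frame is fit to its data by gradient descent while being pulled toward the previous frame. The matrix, step size, and penalty weight are placeholders, not the RTT reconstruction.

```python
import numpy as np

def reconstruct_sequence(A, sinograms, lam=0.1, step=None, n_iter=200):
    """Toy algebraic reconstruction of a time series of frames x_t, minimizing a
    least-squares data term plus a quadratic temporal smoothness penalty.

    A        : (n_meas, n_pix) system matrix (dense here for simplicity)
    sinograms: (n_frames, n_meas) measured data per time frame
    """
    n_frames, _ = sinograms.shape
    n_pix = A.shape[1]
    if step is None:
        # conservative step size from the spectral norm of A
        step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)
    x = np.zeros((n_frames, n_pix))
    for _ in range(n_iter):
        for t in range(n_frames):
            grad = A.T @ (A @ x[t] - sinograms[t])
            if t > 0:                      # temporal coupling to the previous frame
                grad += lam * (x[t] - x[t - 1])
            x[t] -= step * grad
    return x

# Hypothetical toy problem: 40 pixels, 30 measurements per frame, 5 frames.
rng = np.random.default_rng(1)
A = rng.random((30, 40))
truth = rng.random((5, 40))
data = truth @ A.T
recon = reconstruct_sequence(A, data)
print(np.linalg.norm(recon - truth) / np.linalg.norm(truth))
```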
Error image aware content restoration
NASA Astrophysics Data System (ADS)
Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee
2015-12-01
As the resolution of TV has significantly increased, content consumers have become increasingly sensitive to the subtlest defects in TV content. This rising standard of quality demanded by consumers has posed a new challenge as the tape-based process has transitioned to the file-based process: the transition necessitated digitizing old archives, a process which inevitably produces errors such as disordered pixel blocks, scattered white noise, or totally missing pixels. Unsurprisingly, detecting and fixing such errors requires a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error restoration algorithm which can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method on error images detected by our quality check system in the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin for a well-known NLE (non-linear editing) system, a familiar tool for quality control agents.
Benedek, C; Descombes, X; Zerubia, J
2012-01-01
In this paper, we introduce a new probabilistic method which integrates building extraction with change detection in remotely sensed image pairs. A global optimization process attempts to find the optimal configuration of buildings, considering the observed data, prior knowledge, and interactions between the neighboring building parts. We present methodological contributions in three key issues: 1) We implement a novel object-change modeling approach based on Multitemporal Marked Point Processes, which simultaneously exploits low-level change information between the time layers and object-level building description to recognize and separate changed and unaltered buildings. 2) To address the challenges of data heterogeneity in aerial and satellite image repositories, we construct a flexible hierarchical framework which can create various building appearance models from different elementary feature-based modules. 3) To simultaneously ensure the convergence, optimality, and computational complexity constraints raised by the increased data quantity, we adopt the quick Multiple Birth and Death optimization technique for change detection purposes, and propose a novel nonuniform stochastic object birth process which generates relevant objects with higher probability based on low-level image features.
NASA Astrophysics Data System (ADS)
Cho, Yong Ku; Zheng, Guoan; Augustine, George J.; Hochbaum, Daniel; Cohen, Adam; Knöpfel, Thomas; Pisanello, Ferruccio; Pavone, Francesco S.; Vellekoop, Ivo M.; Booth, Martin J.; Hu, Song; Zhu, Jiang; Chen, Zhongping; Hoshi, Yoko
2016-09-01
Mechanistic understanding of how the brain gives rise to complex behavioral and cognitive functions is one of science’s grand challenges. The technical challenges that we face as we attempt to gain a systems-level understanding of the brain are manifold. The brain’s structural complexity requires us to push the limit of imaging resolution and depth, while being able to cover large areas, resulting in enormous data acquisition and processing needs. Furthermore, it is necessary to detect functional activities and ‘map’ them onto the structural features. The functional activity occurs at multiple levels, using electrical and chemical signals. Certain electrical signals are only decipherable with sub-millisecond timescale resolution, while other modes of signals occur in minutes to hours. For these reasons, there is a wide consensus that new tools are necessary to undertake this daunting task. Optical techniques, due to their versatile and scalable nature, have great potential to answer these challenges. Optical microscopy can now image beyond the diffraction limit, record multiple types of brain activity, and trace structural features across large areas of tissue. Genetically encoded molecular tools opened doors to controlling and detecting neural activity using light in specific cell types within the intact brain. Novel sample preparation methods that reduce light scattering have been developed, allowing whole brain imaging in rodent models. Adaptive optical methods have the potential to resolve images from deep brain regions. In this roadmap article, we showcase a few major advances in this area, survey the current challenges, and identify potential future needs that may be used as a guideline for the next steps to be taken.
High-throughput electrical characterization for robust overlay lithography control
NASA Astrophysics Data System (ADS)
Devender, Devender; Shen, Xumin; Duggan, Mark; Singh, Sunil; Rullan, Jonathan; Choo, Jae; Mehta, Sohan; Tang, Teck Jung; Reidy, Sean; Holt, Jonathan; Kim, Hyung Woo; Fox, Robert; Sohn, D. K.
2017-03-01
Realizing sensitive, high throughput and robust overlay measurement is a challenge in current 14nm and upcoming advanced nodes with the transition to 300mm and upcoming 450mm semiconductor manufacturing, where slight deviations in overlay have a significant impact on reliability and yield [1]. The exponentially increasing number of critical masks in multi-patterning litho-etch, litho-etch (LELE) and subsequent LELELE semiconductor processes requires even tighter overlay specifications [2]. Here, we discuss limitations of current image- and diffraction-based overlay measurement techniques to meet these stringent processing requirements due to sensitivity, throughput and low contrast [3]. We demonstrate a new electrical measurement based technique where resistance is measured for a macro with intentional misalignment between two layers. Overlay is quantified by a parabolic fitting model to resistance, where the minimum and inflection points are extracted to characterize overlay control and process window, respectively. Analyses using transmission electron microscopy show good correlation between actual overlay performance and overlay obtained from fitting. Additionally, excellent correlation of overlay from electrical measurements to existing image- and diffraction-based techniques is found. We also discuss challenges of integrating the electrical measurement based approach in semiconductor manufacturing from a Back End of Line (BEOL) perspective. Our findings open up a new pathway for accessing overlay as well as process window and margins simultaneously through a robust, high throughput electrical measurement approach.
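The core fitting step described above, a parabola fit to resistance measured across intentionally misaligned macros, with the vertex giving the overlay error, can be sketched as follows. The offset and resistance values are hypothetical.

```python
import numpy as np

# Hypothetical macro data: programmed misalignment (nm) between the two layers
# and the measured resistance (ohm) of the test structure at each offset.
offset_nm = np.array([-30.0, -20.0, -10.0, 0.0, 10.0, 20.0, 30.0])
resistance = np.array([152.1, 148.3, 146.0, 145.2, 146.4, 149.0, 153.5])

# Fit R(d) = a*d^2 + b*d + c; the vertex d* = -b / (2a) estimates the overlay
# error (the extra shift needed to reach minimum resistance).
a, b, c = np.polyfit(offset_nm, resistance, 2)
overlay_nm = -b / (2.0 * a)
curvature = 2.0 * a     # larger curvature -> higher sensitivity of the macro

print(f"estimated overlay = {overlay_nm:.2f} nm, curvature = {curvature:.4f} ohm/nm^2")
```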
Dynamic Positron Emission Tomography Imaging of Renal Clearable Gold Nanoparticles
Chen, Feng; Goel, Shreya; Hernandez, Reinier; Graves, Stephen A.; Shi, Sixiang; Nickles, Robert J.; Cai, Weibo
2016-01-01
Optical imaging has been the primary imaging modality for nearly all of the renal clearable nanoparticles since 2007. Due to the tissue depth penetration limitation, providing accurate organ kinetics non-invasively has long been a huge challenge. Although a more quantitative imaging technique has been developed by labeling nanoparticles with single-photon emission computed tomography (SPECT) isotopes, the low temporal resolution of SPECT still limits its potential for visualizing the rapid dynamic process of renal clearable nanoparticles in vivo. Here, we report the dynamic positron emission tomography (PET) imaging of renal clearable gold (Au) nanoparticles by labeling them with copper-64 (64Cu) to form 64Cu-NOTA-Au-GSH. Systematic nanoparticle synthesis and characterizations were performed to demonstrate the efficient renal clearance of as-prepared nanoparticles. A rapid renal clearance of 64Cu-NOTA-Au-GSH was observed (>75 %ID at 24 h post-injection), with its elimination half-life calculated to be less than 6 min, over 130 times shorter than that of previously reported similar nanoparticles. Dynamic PET imaging not only addresses the current challenges in accurately and non-invasively acquiring the organ kinetics, but also potentially provides a highly useful tool for studying the renal clearance mechanism of other ultra-small nanoparticles, as well as the diagnosis of kidney diseases in the near future. PMID:27062146
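The reported elimination half-life comes from fitting the measured clearance kinetics; the abstract does not give the fitting details. A minimal sketch, assuming a mono-exponential clearance model and hypothetical dynamic-PET time-activity values, is shown below.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, a0, k):
    """Simple mono-exponential clearance model: activity(t) = a0 * exp(-k t)."""
    return a0 * np.exp(-k * t)

# Hypothetical dynamic-PET time points (min) and blood-pool activity (%ID/g).
t = np.array([1.0, 2.0, 4.0, 8.0, 15.0, 30.0, 60.0])
activity = np.array([18.0, 15.5, 11.2, 6.1, 2.5, 0.6, 0.1])

(a0, k), _ = curve_fit(mono_exp, t, activity, p0=(20.0, 0.1))
half_life = np.log(2.0) / k
print(f"elimination rate k = {k:.3f} 1/min, half-life = {half_life:.1f} min")
```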
Fernández-Berni, Jorge; Carmona-Galán, Ricardo; del Río, Rocío; Kleihorst, Richard; Philips, Wilfried; Rodríguez-Vázquez, Ángel
2014-01-01
The capture, processing and distribution of visual information is one of the major challenges for the paradigm of the Internet of Things. Privacy emerges as a fundamental barrier to overcome. The idea of networked image sensors pervasively collecting data generates social rejection in the face of sensitive information being tampered with by hackers or misused by legitimate users. Power consumption also constitutes a crucial aspect. Images contain a massive amount of data to be processed under strict timing requirements, demanding high-performance vision systems. In this paper, we describe a hardware-based strategy to concurrently address these two key issues. By conveying processing capabilities to the focal plane in addition to sensing, we can implement privacy protection measures just at the point where sensitive data are generated. Furthermore, such measures can be tailored for efficiently reducing the computational load of subsequent processing stages. As a proof of concept, a full-custom QVGA vision sensor chip is presented. It incorporates a mixed-signal focal-plane sensing-processing array providing programmable pixelation of multiple image regions in parallel. In addition to this functionality, the sensor exploits reconfigurability to implement other processing primitives, namely block-wise dynamic range adaptation, integral image computation and multi-resolution filtering. The proposed circuitry is also suitable to build a granular space, becoming the raw material for subsequent feature extraction and recognition of categorized objects. PMID:25195849
High Fidelity Raman Chemical Imaging of Materials
NASA Astrophysics Data System (ADS)
Bobba, Venkata Nagamalli Koteswara Rao
The development of high fidelity Raman imaging systems is important for a number of application areas including material science, bio-imaging, bioscience and healthcare, pharmaceutical analysis, and semiconductor characterization. The use of Raman imaging as a characterization tool for detecting the amorphous and crystalline regions in the biopolymer poly-L-lactic acid (PLLA) is the précis of my thesis. In the first chapter, a brief introduction to the basics of Raman spectroscopy, Raman chemical imaging, Raman mapping, and Raman imaging techniques is provided. The second chapter contains details about the successful development of a tailored sample of PLLA. Biodegradable polymers are used in areas of tissue engineering, agriculture, packaging, and in the medical field for drug delivery, implant devices, and surgical sutures. Detailed information about the sample preparation and characterization of these cold-drawn PLLA polymer substrates is provided. Wide-field Raman hyperspectral imaging using an acousto-optic tunable filter (AOTF) was demonstrated in the early 1990s. The AOTF introduced challenges such as image walk, distortion, and image blur. A wide-field AOTF Raman imaging system has been developed as part of my research, and methods to overcome some of the challenges in performing AOTF wide-field Raman imaging are discussed in the third chapter. This imaging system has been used for studying the crystalline and amorphous regions on the cold-drawn sample of PLLA. Of all the different modalities that are available for performing Raman imaging, Raman point-mapping is the most extensively used method. The ease of obtaining the Raman hyperspectral cube dataset with high spectral and spatial resolution is the main motivation for using this technique. As part of my research, I have constructed a Raman point-mapping system and used it to obtain Raman hyperspectral image data of various minerals, pharmaceuticals, and polymers. Chapter four offers information about the techniques used for characterization of pharmaceutical drugs and mapping of the crystalline domains in polymers. In addition, image processing algorithms that yield chemical-based image contrast have been designed to better enable quantitative estimates of chemical heterogeneity. Some of the image processing problems that still need to be solved, along with the need for developing a volumetric imaging system, are discussed in chapter five.
Efficient image acquisition design for a cancer detection system
NASA Astrophysics Data System (ADS)
Nguyen, Dung; Roehrig, Hans; Borders, Marisa H.; Fitzpatrick, Kimberly A.; Roveda, Janet
2013-09-01
Modern imaging modalities, such as Computed Tomography (CT), Digital Breast Tomosynthesis (DBT) or Magnetic Resonance Tomography (MRT), are able to acquire volumetric images with an isotropic resolution in the micrometer (um) or millimeter (mm) range. When used in interactive telemedicine applications, these raw images require large amounts of storage, thereby necessitating the use of a high-bandwidth data communication link. To reduce the cost of transmission and enable archiving, especially for medical applications, image compression is performed. Recent advances in compression algorithms have resulted in a vast array of data compression techniques, but because of the characteristics of these images, there are challenges to overcome to transmit these images efficiently. In addition, recent studies raise concerns about low-dose mammography risk in high-risk patients. Our preliminary studies indicate that performing compression before the analog-to-digital conversion (ADC) stage is more efficient than applying compression techniques after the ADC. The linearity of compressed sensing and the ability to perform digital signal processing (DSP) during data conversion open up a new area of research regarding the roles of sparsity in medical image registration, medical image analysis (for example, automatic image processing algorithms to efficiently extract the relevant information for the clinician), further X-ray dose reduction for mammography, and contrast enhancement.
Software for visualization, analysis, and manipulation of laser scan images
NASA Astrophysics Data System (ADS)
Burnsides, Dennis B.
1997-03-01
The recent introduction of laser surface scanning to scientific applications presents a challenge to computer scientists and engineers. Full utilization of this two-dimensional (2-D) and three-dimensional (3-D) data requires advances in techniques and methods for data processing and visualization. This paper explores the development of software to support the visualization, analysis and manipulation of laser scan images. Specific examples presented are from on-going efforts at the Air Force Computerized Anthropometric Research and Design (CARD) Laboratory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dinwiddie, Ralph Barton; Dehoff, Ryan R; Lloyd, Peter D
2013-01-01
Oak Ridge National Laboratory (ORNL) has been utilizing the ARCAM electron beam melting technology to additively manufacture complex geometric structures directly from powder. Although the technology has demonstrated the ability to decrease costs, decrease manufacturing lead-time and fabricate complex structures that are impossible to fabricate through conventional processing techniques, certification of the component quality can be challenging. Because the process involves the continuous deposition of successive layers of material, each layer can be examined without destructively testing the component. However, in-situ process monitoring is difficult due to metallization on inside surfaces caused by evaporation and condensation of metal from the melt pool. This work describes a solution to one of the challenges to continuously imaging inside of the chamber during the EBM process. Here, the utilization of a continuously moving Mylar film canister is described. Results will be presented related to in-situ process monitoring and how this technique results in improved mechanical properties and reliability of the process.
Optimizing MR imaging-guided navigation for focused ultrasound interventions in the brain
NASA Astrophysics Data System (ADS)
Werner, B.; Martin, E.; Bauer, R.; O'Gorman, R.
2017-03-01
MR imaging during transcranial MR imaging-guided Focused Ultrasound surgery (tcMRIgFUS) is challenging due to the complex ultrasound transducer setup and the water bolus used for acoustic coupling. Achievable image quality in the tcMRIgFUS setup using the standard body coil is significantly inferior to current neuroradiologic standards. As a consequence, MR image guidance for precise navigation in functional neurosurgical interventions using tcMRIgFUS is basically limited to the acquisition of MR coordinates of salient landmarks such as the anterior and posterior commissure for aligning a stereotactic atlas. Here, we show how improved MR image quality provided by a custom-built MR coil and optimized MR imaging sequences can support imaging-guided navigation for functional tcMRIgFUS neurosurgery by visualizing anatomical landmarks that can be integrated into the navigation process to accommodate patient-specific anatomy.
Phase retrieval by coherent modulation imaging.
Zhang, Fucai; Chen, Bo; Morrison, Graeme R; Vila-Comamala, Joan; Guizar-Sicairos, Manuel; Robinson, Ian K
2016-11-18
Phase retrieval is a long-standing problem in imaging when only the intensity of the wavefield can be recorded. Coherent diffraction imaging is a lensless technique that uses iterative algorithms to recover amplitude and phase contrast images from diffraction intensity data. For general samples, phase retrieval from a single-diffraction pattern has been an algorithmic and experimental challenge. Here we report a method of phase retrieval that uses a known modulation of the sample exit wave. This coherent modulation imaging method removes inherent ambiguities of coherent diffraction imaging and uses a reliable, rapidly converging iterative algorithm involving three planes. It works for extended samples, does not require tight support for convergence and relaxes dynamic range requirements on the detector. Coherent modulation imaging provides a robust method for imaging in materials and biological science, while its single-shot capability will benefit the investigation of dynamical processes with pulsed sources, such as X-ray free-electron lasers.
NASA Astrophysics Data System (ADS)
Tatar, Nurollah; Saadatseresht, Mohammad; Arefi, Hossein; Hadavand, Ahmad
2018-06-01
Unwanted contrast in high resolution satellite images, such as that caused by shadow areas, directly affects the results of further processing of urban remote sensing images. Detecting and finding the precise position of shadows is critical in different remote sensing processing chains such as change detection, image classification and digital elevation model generation from stereo images. The spectral similarity between shadow areas, water bodies, and some dark asphalt roads makes the development of robust shadow detection algorithms challenging. In addition, most of the existing methods work at the pixel level and neglect the contextual information contained in neighboring pixels. In this paper, a new object-based shadow detection framework is introduced. In the proposed method a pixel-level shadow mask is built by extending established thresholding methods with a new C4 index, which makes it possible to resolve the ambiguity between shadows and water bodies. The pixel-based results are then further processed in an object-based majority analysis to detect the final shadow objects. Four different high resolution satellite images are used to validate this new approach. The results show the superiority of the proposed method over several state-of-the-art shadow detection methods, with an average F-measure of 96%.
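The paper's pixel-level step relies on its new C4 index, which is not reproduced here. As a rough sketch of the two-stage idea (pixel-level thresholding followed by an object-based majority vote), the example below substitutes a plain brightness index and Otsu thresholding for the paper's index and uses SLIC superpixels as the objects; all of these substitutions are assumptions.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.segmentation import slic
from skimage.measure import regionprops

def object_based_shadow_mask(image, n_segments=400):
    """Pixel-level shadow mask refined by an object-based majority vote.

    image: float RGB array in [0, 1]. A simple brightness index stands in for
    the paper's C4 index; Otsu thresholding stands in for its thresholding step.
    """
    brightness = image.mean(axis=2)
    pixel_mask = brightness < threshold_otsu(brightness)   # dark pixels = candidate shadow

    # Object level: group pixels into segments and keep a segment as shadow
    # only if the majority of its pixels are flagged at the pixel level.
    segments = slic(image, n_segments=n_segments, compactness=10, start_label=1)
    object_mask = np.zeros_like(pixel_mask)
    for region in regionprops(segments):
        coords = tuple(region.coords.T)
        if pixel_mask[coords].mean() > 0.5:
            object_mask[coords] = True
    return object_mask

rng = np.random.default_rng(0)
demo = rng.random((128, 128, 3))      # stand-in for a pan-sharpened satellite image
mask = object_based_shadow_mask(demo)
print(mask.sum(), "pixels flagged as shadow")
```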
Henze Bancroft, Leah C; Strigel, Roberta M; Hernando, Diego; Johnson, Kevin M; Kelcz, Frederick; Kijowski, Richard; Block, Walter F
2016-03-01
Chemical shift based fat/water decomposition methods such as IDEAL are frequently used in challenging imaging environments with large B0 inhomogeneity. However, they do not account for the signal modulations introduced by a balanced steady state free precession (bSSFP) acquisition. Here we demonstrate improved performance when the bSSFP frequency response is properly incorporated into the multipeak spectral fat model used in the decomposition process. Balanced SSFP allows for rapid imaging but also introduces a characteristic frequency response featuring periodic nulls and pass bands. Fat spectral components in adjacent pass bands will experience bulk phase offsets and magnitude modulations that change the expected constructive and destructive interference between the fat spectral components. A bSSFP signal model was incorporated into the fat/water decomposition process and used to generate images of a fat phantom, and bilateral breast and knee images in four normal volunteers at 1.5 Tesla. Incorporation of the bSSFP signal model into the decomposition process improved the performance of the fat/water decomposition. Incorporation of this model allows rapid bSSFP imaging sequences to use robust fat/water decomposition methods such as IDEAL. While only one set of imaging parameters was presented, the method is compatible with any field strength or repetition time. © 2015 Wiley Periodicals, Inc.
Chatterjee, S; Mott, J H; Smyth, G; Dickson, S; Dobrowsky, W; Kelly, C G
2011-04-01
Intensity-modulated radiotherapy (IMRT) is increasingly being used to treat head and neck cancer cases. In this article, we discuss the clinical challenges associated with setting up an image-guided intensity-modulated radiotherapy service for a subset of head and neck cancer patients, using a recently commissioned helical tomotherapy (HT) Hi Art (Tomotherapy Inc, WI) machine. We also discuss the clinical aspects of the tomotherapy planning process, treatment and image guidance experiences for the first 10 head and neck cancer cases. The concepts of geographical miss along with tomotherapy-specific effects, including that of field width and megavoltage CT (MVCT) imaging strategy, have been highlighted using the first 10 head and neck cases treated. There is a need for effective streamlining of all aspects of the service to ensure compliance with cancer waiting time targets. We discuss how patient toxicity audits are crucial to guide refinement of the newly set-up planning dose constraints. This article highlights the important clinical issues one must consider when setting up a head and neck IMRT, image-guided radiotherapy service. It shares some of the clinical challenges we have faced during the setting up of a tomotherapy service. Implementation of a clinical tomotherapy service requires a multidisciplinary team approach and relies heavily on good team working and effective communication between different staff groups.
[Non-medical applications for brain MRI: Ethical considerations].
Sarrazin, S; Fagot-Largeault, A; Leboyer, M; Houenou, J
2015-04-01
Recent neuroimaging techniques offer the possibility of better understanding the complex cognitive processes that are involved in mental disorders and thus have become cornerstone tools for research in psychiatry. The performances of functional magnetic resonance imaging are not limited to medical research and are used in non-medical fields. These recent applications represent new challenges for bioethics. In this article we aim at discussing the new ethical issues raised by the applications of the latest neuroimaging technologies to non-medical fields. We included a selection of peer-reviewed English medical articles after a search of the NCBI PubMed database and Google Scholar from 2000 to 2013. We screened bibliographical tables for supplementary references. Websites of French governmental institutions involved in ethical questions were also screened for governmental reports. Findings of brain areas supporting emotional responses and regulation have been used for marketing research, also called neuromarketing. The discovery of different brain activation patterns in antisocial disorder has led to changes in forensic psychiatry with the use of imaging techniques with unproven validity. Automated classification algorithms and multivariate statistical analyses of brain images have been applied to brain-reading techniques, aiming at predicting unconscious neural processes in humans. We finally report the current position of French legislation, recently revised, and discuss the technical limits of such techniques. In the near future, brain imaging could find clinical applications in psychiatry as a diagnostic or predictive tool. However, the latest advances in brain imaging are also used in non-scientific fields, raising key ethical questions. Involvement of neuroscientists, psychiatrists, physicians but also of citizens in neuroethics discussions is crucial to challenge the risk of unregulated uses of brain imaging. Copyright © 2014 L’Encéphale, Paris. Published by Elsevier Masson SAS. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willingham, David G.; Naes, Benjamin E.; Heasler, Patrick G.
A novel approach to particle identification and particle isotope ratio determination has been developed for nuclear safeguard applications. This particle search approach combines an adaptive thresholding algorithm and marker-controlled watershed segmentation (MCWS) transform, which improves the secondary ion mass spectrometry (SIMS) isotopic analysis of uranium containing particle populations for nuclear safeguards applications. The Niblack assisted MCWS approach (a.k.a. SEEKER) developed for this work has improved the identification of isotopically unique uranium particles under conditions that have historically presented significant challenges for SIMS image data processing techniques. Particles obtained from five NIST uranium certified reference materials (CRM U129A, U015, U150, U500 and U850) were successfully identified in regions of SIMS image data 1) where a high variability in image intensity existed, 2) where particles were touching or were in close proximity to one another and/or 3) where the magnitude of ion signal for a given region was count limited. Analysis of the isotopic distributions of uranium containing particles identified by SEEKER showed four distinct, accurately identified 235U enrichment distributions, corresponding to the NIST certified 235U/238U isotope ratios for CRM U129A/U015 (not statistically differentiated), U150, U500 and U850. Additionally, comparison of the minor uranium isotope (234U, 235U and 236U) atom percent values verified that, even in the absence of high precision isotope ratio measurements, SEEKER could be used to segment isotopically unique uranium particles from SIMS image data. Although demonstrated specifically for SIMS analysis of uranium containing particles for nuclear safeguards, SEEKER has application in addressing a broad set of image processing challenges.
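The SEEKER record above combines Niblack adaptive thresholding with a marker-controlled watershed. The sketch below shows that generic combination with scikit-image on a synthetic two-particle image; the window size, k value, marker strategy, and the global floor used to keep empty background out of the mask are illustrative choices, not the published parameters.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_niblack, gaussian
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def niblack_watershed(ion_image, window=25, k=0.2, min_distance=3):
    """Niblack adaptive thresholding followed by marker-controlled watershed.

    A small global floor keeps empty background out of the mask where the
    local statistics are degenerate; all parameter values are illustrative.
    """
    smoothed = gaussian(ion_image, sigma=1.0)
    local_thr = threshold_niblack(smoothed, window_size=window, k=k)
    mask = (smoothed > local_thr) & (smoothed > 0.1 * smoothed.max())

    # Markers from local maxima of the distance transform; the watershed then
    # splits touching particles along ridge lines between the markers.
    distance = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(distance, min_distance=min_distance)
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=mask)

# Synthetic stand-in for a SIMS ion image containing two touching particles.
yy, xx = np.mgrid[:64, :64]
img = np.exp(-((yy - 30) ** 2 + (xx - 28) ** 2) / 30.0)
img += np.exp(-((yy - 30) ** 2 + (xx - 38) ** 2) / 30.0)
labels = niblack_watershed(img)
print("particles found:", labels.max())
```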
Magrans de Abril, Ildefons; Yoshimoto, Junichiro; Doya, Kenji
2018-06-01
This article presents a review of computational methods for connectivity inference from neural activity data derived from multi-electrode recordings or fluorescence imaging. We first identify biophysical and technical challenges in connectivity inference along the data processing pipeline. We then review connectivity inference methods based on two major mathematical foundations, namely, descriptive model-free approaches and generative model-based approaches. We investigate representative studies in both categories and clarify which challenges have been addressed by which method. We further identify critical open issues and possible research directions. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Shaw, John M.
2013-06-01
While the production, transport and refining of oils from the oilsands of Alberta, and comparable resources elsewhere, are performed at industrial scales, numerous technical and technological challenges and opportunities persist due to the ill-defined nature of the resource. For example, bitumen and heavy oil comprise multiple bulk phases and self-organizing constituents at the microscale (liquid crystals) and the nanoscale. There are no quantitative measures available at the molecular level. Non-intrusive telemetry is providing promising paths toward solutions, be they enabling technologies targeting process design, development or optimization, or more prosaic process control or process monitoring applications. Operational examples include automated detection of large objects and poor-quality ore during mining, and monitoring the thickness and location of oil-water interfacial zones within separation vessels. These applications involve real-time video image processing. X-ray transmission video imaging is used to enumerate organic phases present within a vessel, and to detect individual phase volumes, densities and elemental compositions. This is an enabling technology that provides phase equilibrium and phase composition data for production and refining process development, and fluid property myth debunking. A high-resolution two-dimensional acoustic mapping technique now at the proof of concept stage is expected to provide simultaneous fluid flow and fluid composition data within porous inorganic media. Again this is an enabling technology targeting visualization of diverse oil production process fundamentals at the pore scale. Far infrared spectroscopy coupled with detailed quantum mechanical calculations may provide characteristic molecular motifs and intermolecular association data required for fluid characterization and process modeling. X-ray scattering (SAXS/WAXS/USAXS) provides characteristic supramolecular structure information that impacts fluid rheology and process fouling. The intent of this contribution is to present some of the challenges and to provide an introduction grounded in current work on non-intrusive telemetry applications - from a mine or reservoir to a refinery!
Wang, Li; Shi, Feng; Gao, Yaozong; Li, Gang; Gilmore, John H.; Lin, Weili; Shen, Dinggang
2014-01-01
Segmentation of infant brain MR images is challenging due to poor spatial resolution, severe partial volume effect, and the ongoing maturation and myelination process. During the first year of life, the brain image contrast between white and gray matters undergoes dramatic changes. In particular, the image contrast inverts around 6–8 months of age, where the white and gray matter tissues are isointense in T1 and T2 weighted images and hence exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. In this paper, we propose a general framework that adopts sparse representation to fuse the multi-modality image information and further incorporate the anatomical constraints for brain tissue segmentation. Specifically, we first derive an initial segmentation from a library of aligned images with ground-truth segmentations by using sparse representation in a patch-based fashion for the multi-modality T1, T2 and FA images. The segmentation result is further iteratively refined by integration of the anatomical constraint. The proposed method was evaluated on 22 infant brain MR images acquired at around 6 months of age by using a leave-one-out cross-validation, as well as other 10 unseen testing subjects. Our method achieved a high accuracy for the Dice ratios that measure the volume overlap between automated and manual segmentations, i.e., 0.889±0.008 for white matter and 0.870±0.006 for gray matter. PMID:24291615
Lin, Dongyun; Sun, Lei; Toh, Kar-Ann; Zhang, Jing Bo; Lin, Zhiping
2018-05-01
In practice, automated biomedical image classification must confront the challenges of high levels of noise, image blur, illumination variation and complicated geometric correspondence among various categorical biomedical patterns. To handle these challenges, we propose a cascade method consisting of two stages for biomedical image classification. At stage 1, we propose a confidence score based classification rule with a reject option for a preliminary decision using the support vector machine (SVM). The testing images going through stage 1 are separated into two groups based on their confidence scores. Those testing images with sufficiently high confidence scores are classified at stage 1 while the others with low confidence scores are rejected and fed to stage 2. At stage 2, the rejected images from stage 1 are first processed by a subspace analysis technique called eigenfeature regularization and extraction (ERE), and then classified by another SVM trained in the transformed subspace learned by ERE. At both stages, images are represented based on two types of local features, i.e., SIFT and SURF, respectively. They are encoded using various bag-of-words (BoW) models to handle biomedical patterns with and without geometric correspondence, respectively. Extensive experiments are conducted to evaluate the proposed method on three benchmark real-world biomedical image datasets. The proposed method significantly outperforms several competing state-of-the-art methods in terms of classification accuracy. Copyright © 2018 Elsevier Ltd. All rights reserved.
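A minimal sketch of the two-stage cascade described above: a stage-1 SVM with a confidence-based reject option, and a stage-2 SVM trained in a reduced subspace for the rejected samples. PCA stands in for the ERE subspace step and synthetic feature vectors stand in for the SIFT/SURF bag-of-words encodings, so this illustrates the control flow rather than the authors' pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.decomposition import PCA
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Stand-in features for BoW-encoded biomedical images.
X, y = make_classification(n_samples=600, n_features=50, n_informative=20,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Stage 1: SVM with probability-like confidence scores and a reject option.
stage1 = SVC(kernel="rbf", probability=True, random_state=0).fit(X_tr, y_tr)
conf = stage1.predict_proba(X_te).max(axis=1)
threshold = 0.8                      # confidence below this is rejected to stage 2
accept = conf >= threshold

# Stage 2: subspace projection (PCA here) + a second SVM for rejected samples.
pca = PCA(n_components=20, random_state=0).fit(X_tr)
stage2 = SVC(kernel="rbf", random_state=0).fit(pca.transform(X_tr), y_tr)

pred = np.empty_like(y_te)
pred[accept] = stage1.predict(X_te[accept])
if (~accept).any():
    pred[~accept] = stage2.predict(pca.transform(X_te[~accept]))

print(f"rejected to stage 2: {(~accept).sum()} / {len(y_te)}, "
      f"cascade accuracy: {(pred == y_te).mean():.3f}")
```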
Evaluation of segmentation algorithms for optical coherence tomography images of ovarian tissue
NASA Astrophysics Data System (ADS)
Sawyer, Travis W.; Rice, Photini F. S.; Sawyer, David M.; Koevary, Jennifer W.; Barton, Jennifer K.
2018-02-01
Ovarian cancer has the lowest survival rate among all gynecologic cancers due to predominantly late diagnosis. Early detection of ovarian cancer can increase 5-year survival rates from 40% up to 92%, yet no reliable early detection techniques exist. Optical coherence tomography (OCT) is an emerging technique that provides depth-resolved, high-resolution images of biological tissue in real time and demonstrates great potential for imaging of ovarian tissue. Mouse models are crucial to quantitatively assess the diagnostic potential of OCT for ovarian cancer imaging; however, due to small organ size, the ovaries must first be separated from the image background using the process of segmentation. Manual segmentation is time-intensive, as OCT yields three-dimensional data. Furthermore, speckle noise complicates OCT images, frustrating many processing techniques. While much work has investigated noise-reduction and automated segmentation for retinal OCT imaging, little has considered the application to the ovaries, which exhibit higher variance and inhomogeneity than the retina. To address these challenges, we evaluated a set of algorithms to segment OCT images of mouse ovaries. We examined five preprocessing techniques and six segmentation algorithms. While all pre-processing methods improve segmentation, Gaussian filtering is most effective, showing an improvement of 32% +/- 1.2%. Of the segmentation algorithms, active contours performs best, segmenting with an accuracy of 0.948 +/- 0.012 compared with manual segmentation (1.0 being identical). Nonetheless, further optimization could lead to maximizing the performance for segmenting OCT images of the ovaries.
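The best-performing combination reported above, Gaussian pre-filtering followed by active contours, can be sketched with scikit-image as below. Morphological Chan-Vese is used as the active-contour step and a synthetic B-scan replaces real OCT data, so the specific variant and parameters are assumptions.

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import morphological_chan_vese

def segment_ovary(oct_bscan, sigma=2.0, n_iter=100):
    """Gaussian pre-filtering (speckle suppression) followed by an active contour.

    Morphological Chan-Vese is used as the active-contour step here; the paper
    compared several variants, so this particular choice is illustrative.
    """
    smoothed = gaussian(oct_bscan, sigma=sigma)
    mask = morphological_chan_vese(smoothed, n_iter,
                                   init_level_set="checkerboard",
                                   smoothing=2).astype(bool)
    # The two Chan-Vese labels are arbitrary; keep the brighter region as foreground.
    if smoothed[mask].mean() < smoothed[~mask].mean():
        mask = ~mask
    return mask

# Synthetic B-scan stand-in: a bright elliptical "organ" over a speckled background.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[:128, :128]
organ = (((yy - 64) / 30.0) ** 2 + ((xx - 64) / 45.0) ** 2) < 1.0
bscan = 0.2 * rng.random((128, 128)) + 0.7 * organ

mask = segment_ovary(bscan)
dice = 2 * np.logical_and(mask, organ).sum() / (mask.sum() + organ.sum())
print(f"Dice vs. synthetic ground truth: {dice:.3f}")
```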
Semi-Global Matching with Self-Adjusting Penalties
NASA Astrophysics Data System (ADS)
Karkalou, E.; Stentoumis, C.; Karras, G.
2017-02-01
The demand for 3D models of various scales and precisions is strong for a wide range of applications, among which cultural heritage recording is particularly important and challenging. In this context, dense image matching is a fundamental task for processes which involve image-based reconstruction of 3D models. Despite the existence of commercial software, the need for complete and accurate results under different conditions, as well as for computational efficiency under a variety of hardware, has kept image-matching algorithms as one of the most active research topics. Semi-global matching (SGM) is among the most popular optimization algorithms due to its accuracy, computational efficiency, and simplicity. A challenging aspect in SGM implementation is the determination of smoothness constraints, i.e. penalties P1, P2 for disparity changes and discontinuities. In fact, penalty adjustment is needed for every particular stereo-pair and cost computation. In this work, a novel formulation of self-adjusting penalties is proposed: SGM penalties can be estimated solely from the statistical properties of the initial disparity space image. The proposed method of self-adjusting penalties (SGM-SAP) is evaluated using typical cost functions on stereo-pairs from the recent Middlebury dataset of interior scenes, as well as from the EPFL Herz-Jesu architectural scenes. Results are competitive against the original SGM estimates. The significant aspects of self-adjusting penalties are: (i) the time-consuming tuning process is avoided; (ii) SGM can be used in image collections with a limited number of stereo-pairs; and (iii) no heuristic user intervention is needed.
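The abstract states that the penalties are estimated solely from the statistics of the initial disparity space image but does not give the formulas. The sketch below is therefore only illustrative: it derives P1 and P2 from a simple dispersion statistic of the per-pixel matching costs, with arbitrary scale factors.

```python
import numpy as np

def self_adjusting_penalties(dsi):
    """Illustrative estimate of SGM penalties P1 < P2 from the initial disparity
    space image (DSI) of matching costs with shape (H, W, D).

    The published SGM-SAP formulas are not reproduced here; a simple dispersion
    statistic of the per-pixel costs stands in, with arbitrary scale factors.
    """
    best = dsi.min(axis=2)                        # best (lowest) cost per pixel
    spread = np.median(dsi.mean(axis=2) - best)   # typical mean-to-best cost gap
    p1 = 0.25 * spread                            # small penalty for +/-1 disparity steps
    p2 = 3.0 * spread                             # larger penalty for discontinuities
    return p1, p2

# Toy DSI: 64 x 64 pixels, 32 disparity hypotheses with census-like cost values.
rng = np.random.default_rng(0)
dsi = rng.integers(0, 40, size=(64, 64, 32)).astype(float)
P1, P2 = self_adjusting_penalties(dsi)
print(f"P1 = {P1:.2f}, P2 = {P2:.2f}")
```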
Data Processing Factory for the Sloan Digital Sky Survey
NASA Astrophysics Data System (ADS)
Stoughton, Christopher; Adelman, Jennifer; Annis, James T.; Hendry, John; Inkmann, John; Jester, Sebastian; Kent, Steven M.; Kuropatkin, Nickolai; Lee, Brian; Lin, Huan; Peoples, John, Jr.; Sparks, Robert; Tucker, Douglas; Vanden Berk, Dan; Yanny, Brian; Yocum, Dan
2002-12-01
The Sloan Digital Sky Survey (SDSS) data handling presents two challenges: large data volume and timely production of spectroscopic plates from imaging data. A data processing factory, using technologies both old and new, handles this flow. Distribution to end users is via disk farms, to serve corrected images and calibrated spectra, and a database, to efficiently process catalog queries. For distribution of modest amounts of data from Apache Point Observatory to Fermilab, scripts use rsync to update files, while larger data transfers are accomplished by shipping magnetic tapes commercially. All data processing pipelines are wrapped in scripts to address consecutive phases: preparation, submission, checking, and quality control. We constructed the factory by chaining these pipelines together while using an operational database to hold processed imaging catalogs. The science database catalogs all imaging and spectroscopic objects, with pointers to the various external files associated with them. Diverse computing systems address particular processing phases. UNIX computers handle tape reading and writing, as well as calibration steps that require access to a large amount of data with relatively modest computational demands. Commodity CPUs process steps that require access to a limited amount of data with more demanding computational requirements. Disk servers optimized for cost per Gbyte serve terabytes of processed data, while servers optimized for disk read speed run SQLServer software to process queries on the catalogs. This factory produced data for the SDSS Early Data Release in June 2001, and it is currently producing Data Release One, scheduled for January 2003.
Sensor, signal, and image informatics - state of the art and current topics.
Lehmann, T M; Aach, T; Witte, H
2006-01-01
The number of articles published annually in the fields of biomedical signal and image acquisition and processing is increasing. Based on selected examples, this survey aims at comprehensively demonstrating the recent trends and developments. Four articles are selected for biomedical data acquisition, covering topics such as dose saving in CT, C-arm X-ray imaging systems for volume imaging, and the replacement of dose-intensive CT-based diagnostics with harmonic ultrasound imaging. Regarding biomedical signal analysis (BSA), the four selected articles discuss the equivalence of different time-frequency approaches for signal analysis, an application to cochlear implants, where time-frequency analysis is applied for controlling the replacement system, recent trends for fusion of different modalities, and the role of BSA as part of brain-machine interfaces. To cover the broad spectrum of publications in the field of biomedical image processing, six papers are highlighted. Important topics are content-based image retrieval in medical applications, automatic classification of tongue photographs from traditional Chinese medicine, brain perfusion analysis in single photon emission computed tomography (SPECT), model-based visualization of vascular trees, and virtual surgery, where enhanced visualization and haptic feedback techniques are combined with a sphere-filled model of the organ. The selected papers emphasize the five fields forming the chain of biomedical data processing: (1) data acquisition, (2) data reconstruction and pre-processing, (3) data handling, (4) data analysis, and (5) data visualization. Fields 1 and 2 form sensor informatics, while fields 2 to 5 form signal or image informatics with respect to the nature of the data considered. Biomedical data acquisition and pre-processing, as well as data handling, analysis and visualization, aim at providing reliable tools for decision support that improve the quality of health care. Comprehensive evaluation of the processing methods and their reliable integration in routine applications are future challenges in the field of sensor, signal and image informatics.
NASA Astrophysics Data System (ADS)
Santagati, C.; Inzerillo, L.; Di Paola, F.
2013-07-01
3D reconstruction from images has undergone a revolution in the last few years. Computer vision techniques use collections of photographs to rapidly build detailed 3D models. The simultaneous application of different algorithms (MVS), together with different techniques for image matching, feature extraction, and mesh optimization, is an active field of research in computer vision. The results are promising: the obtained models are beginning to challenge the precision of laser-based reconstructions. Among all the possibilities we can mainly distinguish desktop and web-based packages. The latter offer the opportunity to exploit the power of cloud computing to carry out semi-automatic data processing, allowing users to perform other tasks on their computers, whereas desktop systems require long processing times and heavyweight workflows. Computer vision researchers have explored many applications to verify the visual accuracy of 3D models, but approaches for verifying metric accuracy are few, and none address Autodesk 123D Catch applied to architectural heritage documentation. Our approach to this challenging problem is to compare 3D models from Autodesk 123D Catch with 3D models from terrestrial LIDAR, considering different object sizes, from details (capitals, moldings, bases) to large-scale buildings, for practitioner purposes.
Real-Time Detection and Reading of LED/LCD Displays for Visually Impaired Persons
Tekin, Ender; Coughlan, James M.; Shen, Huiying
2011-01-01
Modern household appliances, such as microwave ovens and DVD players, increasingly require users to read an LED or LCD display to operate them, posing a severe obstacle for persons with blindness or visual impairment. While OCR-enabled devices are emerging to address the related problem of reading text in printed documents, they are not designed to tackle the challenge of finding and reading characters in appliance displays. Any system for reading these characters must address the challenge of first locating the characters among substantial amounts of background clutter; moreover, poor contrast and the abundance of specular highlights on the display surface – which degrade the image in an unpredictable way as the camera is moved – motivate the need for a system that processes images at a few frames per second, rather than forcing the user to take several photos, each of which can take seconds to acquire and process, until one is readable. We describe a novel system that acquires video, detects and reads LED/LCD characters in real time, reading them aloud to the user with synthesized speech. The system has been implemented on both a desktop and a cell phone. Experimental results are reported on videos of display images, demonstrating the feasibility of the system. PMID:21804957
MassImager: A software for interactive and in-depth analysis of mass spectrometry imaging data.
He, Jiuming; Huang, Luojiao; Tian, Runtao; Li, Tiegang; Sun, Chenglong; Song, Xiaowei; Lv, Yiwei; Luo, Zhigang; Li, Xin; Abliz, Zeper
2018-07-26
Mass spectrometry imaging (MSI) has become a powerful tool to probe molecular events in biological tissue. However, it is widely held that one of the biggest challenges is the lack of easy-to-use data processing software for discovering the underlying biological information in complicated and huge MSI datasets. Here, a user-friendly and full-featured MSI software package named MassImager, comprising three subsystems (Solution, Visualization and Intelligence), is developed, focusing on interactive visualization, in-situ biomarker discovery and artificial intelligent pathological diagnosis. Simplified data preprocessing together with high-throughput MSI data exchange and serialization jointly guarantees quick reconstruction of ion images and rapid analysis of datasets of dozens of gigabytes. It also offers diverse self-defined operations for visual processing, including multiple ion visualization, multiple channel superposition, image normalization, visual resolution enhancement and image filtering. Region-of-interest analysis can be performed precisely through interactive visualization of the ion images and mass spectra, guided by an overlaid optical image, to directly find region-specific biomarkers. Moreover, automatic pattern recognition can be achieved immediately upon supervised or unsupervised multivariate statistical modeling. Clear discrimination between cancer tissue and adjacent tissue within an MSI dataset can be seen in the generated pattern image, which shows great potential for visual in-situ biomarker discovery and artificial intelligent pathological diagnosis of cancer. All the features are integrated together in MassImager to provide a deep MSI processing solution at the in-situ metabolomics level for biomarker discovery and future clinical pathological diagnosis. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kergosien, Yannick L.; Racoceanu, Daniel
2017-11-01
This article presents our vision of the next generation of challenges in computational/digital pathology. The key role of the domain ontology, developed in a sustainable manner (i.e. using reference checklists and protocols as the living semantic repositories), opens the way to effective and sustainable traceability and relevance feedback concerning the use of existing machine learning algorithms, proven to be highly effective in the latest digital pathology challenges (i.e. convolutional neural networks). Being able to work in an accessible web-service environment, with strictly controlled issues regarding intellectual property (image and data processing/analysis algorithms) and medical data/image confidentiality, is essential for the future. Among the web services involved in the proposed approach, living yellow pages in the area of computational pathology seem to be very important for reaching operational awareness, validation, and feasibility. This represents a very promising route to the next generation of tools, able to bring more guidance to computer scientists and confidence to pathologists, towards effective and efficient daily use. In addition, consistent feedback and insights are likely to emerge in the near future from these sophisticated machine learning tools back to the pathologists, thereby strengthening the interaction between the different actors of a sustainable biomedical ecosystem (patients, clinicians, biologists, engineers, scientists, etc.). Besides going digital/computational, with virtual slide technology demanding new workflows, pathology must prepare for another coming revolution: semantic web technologies now enable the knowledge of experts to be stored in databases, shared through the Internet, and accessed by machines. Traceability, disambiguation of reports, quality monitoring, and interoperability between health centers are some of the associated benefits that pathologists have been seeking. However, major changes are also to be expected for the relation of human diagnosis to machine-based procedures. Improving on a former imaging platform which used a local knowledge base and a reasoning engine to combine image processing modules into higher level tasks, we propose a framework where different actors of the histopathology imaging world can cooperate using web services, exchanging knowledge as well as imaging services, and where the results of such collaborations on diagnosis-related tasks can be evaluated in international challenges such as those recently organized for mitosis detection, nuclear atypia, or tissue architecture in the context of cancer grading. This framework is likely to offer effective context guidance and traceability to deep learning approaches, with a promising perspective given by the multi-task learning (MTL) paradigm, distinguished by its applicability to several different learning algorithms, its non-reliance on specialized architectures, and the promising results demonstrated, in particular for the problem of weak supervision, an issue found when direct links from pathology terms in reports to corresponding regions within images are missing.
Gandomkar, Ziba; Brennan, Patrick C.; Mello-Thoms, Claudia
2017-01-01
Context: Previous studies showed that the agreement among pathologists in recognition of mitoses in breast slides is fairly modest. Aims: To determine the significantly different quantitative features among easily identifiable mitoses, challenging mitoses, and miscounted nonmitoses within breast slides, and to identify which color spaces capture the differences among groups better than others. Materials and Methods: The dataset contained 453 mitoses and 265 miscounted objects in breast slides. The mitoses were grouped into three categories based on the confidence degree of three pathologists who annotated them. The mitoses annotated as "probably a mitosis" by the majority of pathologists were considered the challenging category. The miscounted objects were recognized as a mitosis or probably a mitosis by only one of the pathologists. The mitoses were segmented using k-means clustering, followed by morphological operations. Morphological, intensity-based, and textural features were extracted from the segmented area and also from an image patch of 63 × 63 pixels in different channels of eight color spaces. Holistic features describing the mitoses' surrounding cells in each image were also extracted. Statistical Analysis Used: The Kruskal–Wallis H-test followed by the Tukey–Kramer test was used to identify significantly different features. Results: The results indicated that challenging mitoses were smaller and rounder compared to other mitoses. Among different features, the Gabor textural features differed more than others between challenging mitoses and the easily identifiable ones. Sizes of the nonmitoses were similar to those of easily identifiable mitoses, but the nonmitoses were rounder. The intensity-based features from chromatin channels were the most discriminative features between the easily identifiable mitoses and the miscounted objects. Conclusions: Quantitative features can be used to describe the characteristics of challenging mitoses and miscounted nonmitotic objects. PMID:28966834
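The statistical comparison described above can be sketched with SciPy. The snippet below runs a Kruskal-Wallis H-test followed by a Tukey HSD post-hoc comparison (scipy.stats.tukey_hsd, available in SciPy >= 1.8) on one hypothetical feature; the three sample arrays are synthetic placeholders, not the study's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical object areas (pixels) for the three groups described in the study
easy_mitoses = rng.normal(420.0, 60.0, size=200)   # easily identifiable mitoses
hard_mitoses = rng.normal(350.0, 55.0, size=120)   # challenging mitoses (reported smaller)
non_mitoses = rng.normal(430.0, 70.0, size=150)    # miscounted nonmitotic objects

# Omnibus test: does at least one group differ on this feature?
h_stat, p_value = stats.kruskal(easy_mitoses, hard_mitoses, non_mitoses)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.3g}")

# Post-hoc pairwise comparison of the group means
print(stats.tukey_hsd(easy_mitoses, hard_mitoses, non_mitoses))
```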
Rueda, Sylvia; Fathima, Sana; Knight, Caroline L; Yaqub, Mohammad; Papageorghiou, Aris T; Rahmatullah, Bahbibi; Foi, Alessandro; Maggioni, Matteo; Pepe, Antonietta; Tohka, Jussi; Stebbing, Richard V; McManigle, John E; Ciurte, Anca; Bresson, Xavier; Cuadra, Meritxell Bach; Sun, Changming; Ponomarev, Gennady V; Gelfand, Mikhail S; Kazanov, Marat D; Wang, Ching-Wei; Chen, Hsiang-Chou; Peng, Chun-Wei; Hung, Chu-Mei; Noble, J Alison
2014-04-01
This paper presents the evaluation results of the methods submitted to Challenge US: Biometric Measurements from Fetal Ultrasound Images, a segmentation challenge held at the IEEE International Symposium on Biomedical Imaging 2012. The challenge was set to compare and evaluate current fetal ultrasound image segmentation methods. It consisted of automatically segmenting fetal anatomical structures to measure standard obstetric biometric parameters, from 2D fetal ultrasound images taken on fetuses at different gestational ages (21 weeks, 28 weeks, and 33 weeks) and with varying image quality to reflect data encountered in real clinical environments. Four independent sub-challenges were proposed, according to the objects of interest measured in clinical practice: abdomen, head, femur, and whole fetus. Five teams participated in the head sub-challenge and two teams in the femur sub-challenge, including one team who tackled both. Nobody attempted the abdomen and whole fetus sub-challenges. The challenge goals were two-fold and the participants were asked to submit the segmentation results as well as the measurements derived from the segmented objects. Extensive quantitative (region-based, distance-based, and Bland-Altman measurements) and qualitative evaluation was performed to compare the results from a representative selection of current methods submitted to the challenge. Several experts (three for the head sub-challenge and two for the femur sub-challenge), with different degrees of expertise, manually delineated the objects of interest to define the ground truth used within the evaluation framework. For the head sub-challenge, several groups produced results that could be potentially used in clinical settings, with comparable performance to manual delineations. The femur sub-challenge had inferior performance to the head sub-challenge due to the fact that it is a harder segmentation problem and that the techniques presented relied more on the femur's appearance.
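For readers unfamiliar with how such submissions are scored, the sketch below implements two of the evaluation ingredients named above, a region-based overlap (Dice coefficient) and a distance-based measure (mean absolute contour distance), for binary masks. It is a generic illustration under the assumption of 2D boolean arrays, not the challenge's official evaluation code.

```python
import numpy as np
from scipy import ndimage

def dice_coefficient(seg, gt):
    """Region-based overlap between a segmentation and a ground-truth mask."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    inter = np.logical_and(seg, gt).sum()
    return 2.0 * inter / (seg.sum() + gt.sum())

def mean_contour_distance(seg, gt, spacing=1.0):
    """Mean absolute distance (in mm, given pixel spacing) between the two contours."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    seg_edge = seg ^ ndimage.binary_erosion(seg)   # one-pixel-wide contours
    gt_edge = gt ^ ndimage.binary_erosion(gt)
    dist_to_gt = ndimage.distance_transform_edt(~gt_edge) * spacing
    dist_to_seg = ndimage.distance_transform_edt(~seg_edge) * spacing
    # Symmetric average of contour-to-contour distances
    return 0.5 * (dist_to_gt[seg_edge].mean() + dist_to_seg[gt_edge].mean())
```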
Minker, Katharine R; Biedrzycki, Meredith L; Kolagunda, Abhishek; Rhein, Stephen; Perina, Fabiano J; Jacobs, Samuel S; Moore, Michael; Jamann, Tiffany M; Yang, Qin; Nelson, Rebecca; Balint-Kurti, Peter; Kambhamettu, Chandra; Wisser, Randall J; Caplan, Jeffrey L
2018-02-01
The study of phenotypic variation in plant pathogenesis provides fundamental information about the nature of disease resistance. Cellular mechanisms that alter pathogenesis can be elucidated with confocal microscopy; however, systematic phenotyping platforms, from sample processing to image analysis, to investigate this do not exist. We have developed a platform for 3D phenotyping of cellular features underlying variation in disease development by fluorescence-specific resolution of host and pathogen interactions across time (4D). A confocal microscopy phenotyping platform compatible with different maize-fungal pathosystems (fungi: Setosphaeria turcica, Cochliobolus heterostrophus, and Cercospora zeae-maydis) was developed. Protocols and techniques were standardized for sample fixation, optical clearing, species-specific combinatorial fluorescence staining, multisample imaging, and image processing for investigation at the macroscale. The sample preparation methods presented here overcome challenges to fluorescence imaging such as specimen thickness and topography, as well as physiological characteristics of the samples such as tissue autofluorescence and the presence of cuticle. The resulting imaging techniques provide interesting qualitative and quantitative information not possible with conventional 2D light or electron imaging. Microsc. Res. Tech., 81:141-152, 2018. © 2016 Wiley Periodicals, Inc.
Salt-and-pepper noise removal using modified mean filter and total variation minimization
NASA Astrophysics Data System (ADS)
Aghajarian, Mickael; McInroy, John E.; Wright, Cameron H. G.
2018-01-01
The search for effective noise removal algorithms is still a real challenge in the field of image processing. An efficient image denoising method is proposed for images that are corrupted by salt-and-pepper noise. Salt-and-pepper noise takes either the minimum or maximum intensity, so the proposed method restores the image by processing only the pixels whose values are either 0 or 255 (assuming an 8-bit/pixel image). For low levels of noise corruption (less than or equal to 50% noise density), the method employs the modified mean filter (MMF), while for heavy noise corruption, noisy pixel values are replaced by a weighted average of the MMF output and the total variation of the corrupted pixels, which is minimized using convex optimization. Two fuzzy systems are used to determine the weights for the average. To evaluate the performance of the algorithm, several test images with different noise levels are restored, and the results are quantitatively measured by peak signal-to-noise ratio and mean absolute error. The results show that the proposed scheme gives considerable noise suppression up to a noise density of 90%, while almost completely maintaining the edges and fine details of the original image.
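A much-simplified sketch of the low-density branch is given below: pixels at the extreme values are treated as noise candidates and replaced by the mean of their noise-free neighbors. It omits the fuzzy weighting and the total-variation minimization used for heavy corruption, so it should be read as an illustration of the MMF idea rather than the full method.

```python
import numpy as np

def modified_mean_filter(img, window=3):
    """Replace suspected salt-and-pepper pixels (0 or 255) with the mean of
    noise-free neighbors inside a window; other pixels are left untouched."""
    img = img.astype(float)
    noisy = (img == 0) | (img == 255)
    out = img.copy()
    pad = window // 2
    padded = np.pad(img, pad, mode='reflect')
    padded_noisy = np.pad(noisy, pad, mode='reflect')
    for r, c in zip(*np.nonzero(noisy)):
        patch = padded[r:r + window, c:c + window]
        clean = ~padded_noisy[r:r + window, c:c + window]
        if clean.any():                 # average only over noise-free neighbors
            out[r, c] = patch[clean].mean()
        else:                           # fall back to the raw window mean
            out[r, c] = patch.mean()
    return out.astype(np.uint8)
```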
NASA Astrophysics Data System (ADS)
Liu, Chunlei; Ding, Wenrui; Li, Hongguang; Li, Jiankun
2017-09-01
Haze removal is a nontrivial task for medium-altitude unmanned aerial vehicle (UAV) image processing because of the effects of light absorption and scattering. The challenges are attributed mainly to image distortion and detail blur during the long-distance and large-scale imaging process. In our work, a metadata-assisted nonuniform atmospheric scattering model is proposed to deal with the aforementioned problems of medium-altitude UAVs. First, to better describe the real atmosphere, we propose a nonuniform atmospheric scattering model according to the aerosol distribution, which directly benefits the correction of image distortion. Second, considering the characteristics of long-distance imaging, we calculate the depth map, an essential clue for the model, on the basis of UAV metadata information. An accurate depth map reduces the color distortion compared with the depth maps obtained by other existing methods based on priors or assumptions. Furthermore, we use an adaptive median filter to address the problem of fuzzy details caused by the global airlight value. Experimental results on both real flight and synthetic images demonstrate that our proposed method outperforms four other existing haze removal methods.
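The underlying physics can be illustrated with the standard (uniform) atmospheric scattering model I = J·t + A·(1 − t), where the transmission t is derived from a depth map. The sketch below inverts that model given a metadata-derived depth map and a global airlight estimate; the paper's nonuniform aerosol model and adaptive median filtering are not reproduced here, and all parameter values are placeholders.

```python
import numpy as np

def dehaze_with_depth(hazy, depth, airlight, beta=0.01, t_min=0.1):
    """Invert the scattering model I = J*t + A*(1-t), with t = exp(-beta*depth).

    hazy     : HxWx3 image scaled to [0, 1]
    depth    : HxW scene depth in meters (e.g., derived from UAV altitude/attitude metadata)
    airlight : length-3 global airlight estimate
    beta     : scattering coefficient (uniform here; the paper models it as nonuniform)
    """
    t = np.exp(-beta * depth)[..., None]          # transmission map
    t = np.clip(t, t_min, 1.0)                    # avoid amplifying noise at large depths
    restored = (hazy - airlight) / t + airlight   # recover scene radiance J
    return np.clip(restored, 0.0, 1.0)
```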
Prostate segmentation in MRI using fused T2-weighted and elastography images
NASA Astrophysics Data System (ADS)
Nir, Guy; Sahebjavaher, Ramin S.; Baghani, Ali; Sinkus, Ralph; Salcudean, Septimiu E.
2014-03-01
Segmentation of the prostate in medical imaging is a challenging and important task for surgical planning and delivery of prostate cancer treatment. Automatic prostate segmentation can improve speed, reproducibility and consistency of the process. In this work, we propose a method for automatic segmentation of the prostate in magnetic resonance elastography (MRE) images. The method utilizes the complementary property of the elastogram and the corresponding T2-weighted image, which are obtained from the phase and magnitude components of the imaging signal, respectively. It follows a variational approach to propagate an active contour model based on the combination of region statistics in the elastogram and the edge map of the T2-weighted image. The method is fast and does not require prior shape information. The proposed algorithm is tested on 35 clinical image pairs from five MRE data sets, and is evaluated in comparison with manual contouring. The mean absolute distance between the automatic and manual contours is 1.8 mm, with a maximum distance of 5.6 mm. The relative area error is 7.6%, and the duration of the segmentation process is 2 s per slice.
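As a rough illustration of contour propagation on the T2-weighted edge map, the sketch below uses scikit-image's morphological geodesic active contour started from a circular seed. It stands in for, and does not reproduce, the combined region/edge variational model of the paper; the seed location, radius and iteration count are assumptions.

```python
from skimage.segmentation import (morphological_geodesic_active_contour,
                                  inverse_gaussian_gradient, disk_level_set)

def segment_prostate_slice(t2_slice, center, radius, iterations=200):
    """Propagate a contour outward from a circular seed, halted by the T2 edge map."""
    gimage = inverse_gaussian_gradient(t2_slice.astype(float))  # edge-stopping function
    init = disk_level_set(t2_slice.shape, center=center, radius=radius)
    return morphological_geodesic_active_contour(
        gimage, iterations, init_level_set=init, smoothing=2, balloon=1)
```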
NASA Astrophysics Data System (ADS)
Bredfeldt, Jeremy S.; Liu, Yuming; Pehlke, Carolyn A.; Conklin, Matthew W.; Szulczewski, Joseph M.; Inman, David R.; Keely, Patricia J.; Nowak, Robert D.; Mackie, Thomas R.; Eliceiri, Kevin W.
2014-01-01
Second-harmonic generation (SHG) imaging can help reveal interactions between collagen fibers and cancer cells. Quantitative analysis of SHG images of collagen fibers is challenged by the heterogeneity of collagen structures and low signal-to-noise ratio often found while imaging collagen in tissue. The role of collagen in breast cancer progression can be assessed post acquisition via enhanced computation. To facilitate this, we have implemented and evaluated four algorithms for extracting fiber information, such as number, length, and curvature, from a variety of SHG images of collagen in breast tissue. The image-processing algorithms included a Gaussian filter, SPIRAL-TV filter, Tubeness filter, and curvelet-denoising filter. Fibers are then extracted using an automated tracking algorithm called fiber extraction (FIRE). We evaluated the algorithm performance by comparing length, angle and position of the automatically extracted fibers with those of manually extracted fibers in twenty-five SHG images of breast cancer. We found that the curvelet-denoising filter followed by FIRE, a process we call CT-FIRE, outperforms the other algorithms under investigation. CT-FIRE was then successfully applied to track collagen fiber shape changes over time in an in vivo mouse model for breast cancer.
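The filter-then-trace idea can be sketched with scikit-image: a tubeness-style (Frangi) filter enhances curvilinear collagen structures, and skeletonization reduces the thresholded result to one-pixel-wide centerlines from which length and angle statistics could be computed. This is a generic stand-in, not the CT-FIRE implementation, and the parameters are placeholders.

```python
import numpy as np
from skimage.filters import frangi, threshold_otsu
from skimage.morphology import skeletonize, remove_small_objects

def extract_fiber_skeleton(shg_image, min_size=30):
    """Enhance curvilinear collagen structures, then reduce them to 1-pixel skeletons."""
    img = shg_image.astype(float)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)   # normalize to [0, 1]
    ridges = frangi(img, black_ridges=False)    # bright fibers on a dark background
    mask = ridges > threshold_otsu(ridges)      # keep the strongest ridge responses
    mask = remove_small_objects(mask, min_size=min_size)
    return skeletonize(mask)                    # centerlines for subsequent fiber tracking
```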
Decomposition and extraction: a new framework for visual classification.
Fang, Yuqiang; Chen, Qiang; Sun, Lin; Dai, Bin; Yan, Shuicheng
2014-08-01
In this paper, we present a novel framework for visual classification based on hierarchical image decomposition and hybrid midlevel feature extraction. Unlike most midlevel feature learning methods, which focus on the process of coding or pooling, we emphasize that the mechanism of image composition also strongly influences feature extraction. To effectively explore the image content for feature extraction, we model a multiplicity feature representation mechanism through meaningful hierarchical image decomposition followed by a fusion step. In particular, we first propose a new hierarchical image decomposition approach in which each image is decomposed into a series of hierarchical semantic components, i.e., the structure and texture images. Then, different feature extraction schemes can be adopted to match the decomposed structure and texture components in a dissociative manner. Here, two schemes are explored to produce property-related feature representations. One is based on a single-stage network over hand-crafted features and the other is based on a multistage network, which can learn features from raw pixels automatically. Finally, those multiple midlevel features are incorporated by solving a multiple kernel learning task. Extensive experiments are conducted on several challenging data sets for visual classification, and the experimental results demonstrate the effectiveness of the proposed method.
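One plausible realization of the structure/texture split, shown purely for illustration, is a total-variation decomposition: the TV-smoothed image serves as the structure layer and the residual as the texture layer. The paper's hierarchical decomposition is more elaborate; the weight below is an arbitrary placeholder.

```python
from skimage.restoration import denoise_tv_chambolle

def structure_texture_split(image, weight=0.1):
    """Split an image into a piecewise-smooth structure layer and a residual
    texture layer using total-variation smoothing."""
    img = image.astype(float)
    structure = denoise_tv_chambolle(img, weight=weight)
    texture = img - structure        # the residual carries fine texture and detail
    return structure, texture

# Separate feature extractors can then be applied to each layer and the resulting
# descriptors fused, e.g., via multiple kernel learning.
```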
NASA Astrophysics Data System (ADS)
Zhang, Xiaoxiao; Snow, Patrick W.; Vaid, Alok; Solecky, Eric; Zhou, Hua; Ge, Zhenhua; Yasharzade, Shay; Shoval, Ori; Adan, Ofer; Schwarzband, Ishai; Bar-Zvi, Maayan
2015-03-01
Traditional metrology solutions are facing a range of challenges at the 1X node, such as three-dimensional (3D) measurement capabilities, shrinking overlay and critical dimension (CD) error budgets driven by multi-patterning, and via-in-trench CD measurements. Hybrid metrology offers promising new capabilities to address some of these challenges, but it will take some time before it is fully realized. This paper explores new capabilities currently offered on the in-line Critical Dimension Scanning Electron Microscope (CD-SEM) to address these challenges and enable the CD-SEM to move beyond measuring bottom CD using top-down imaging. Device performance is strongly correlated with fin geometry, creating an urgent need for 3D measurements. New beam tilting capabilities enhance the ability to make 3D measurements in the front-end-of-line (FEOL) of the metal-gate FinFET process in manufacturing. We explore these new capabilities for measuring fin height and build upon the work communicated last year at SPIE [1]. Furthermore, we extend the application of the tilt beam to the back-end-of-line (BEOL) trench depth measurement and demonstrate its capability in production, targeting replacement of the existing Atomic Force Microscope (AFM) measurements by including the height measurement in the existing CD-SEM recipe to reduce fab cycle time. In the BEOL, another increasingly challenging measurement for the traditional CD-SEM is the bottom CD of the self-aligned via (SAV) in a trench-first via-last (TFVL) process. Due to the extremely high aspect ratio of the structure, secondary electron (SE) collection from the via bottom is significantly reduced, requiring the use of backscattered electrons (BSE) to increase the relevant image quality. Even with this solution, the resulting images are difficult to measure at advanced technology nodes. We explore new methods to increase measurement robustness and combine this with a novel segmentation-based measurement algorithm generated specifically for BSE images. The results will be contrasted with data from previously used methods to quantify the improvement. We also compare the results to electrical test data to evaluate and quantify the measurement performance improvements. Lastly, according to the International Technology Roadmap for Semiconductors (ITRS) from 2013, the overlay 3-sigma requirement will be 3.3 nm in 2015 and 2.9 nm in 2016. Advanced lithography requires in-die overlay measurement on features resembling the device geometry. However, current optical overlay measurement is performed in the scribe line on large targets due to the optical diffraction limit. In some cases, this limits the usefulness of the measurement since it does not represent the true behavior of the device. We explore using high-voltage imaging to help address this urgent need. Novel CD-SEM based overlay targets that respect the restrictions of process geometry and SEM technique were designed and spread out across the die. Measurements are made on these new targets both after photolithography and after etch, and correlation is drawn between the two measurements. These results will also be compared to conventional optical overlay measurement approaches, and we will discuss the possibility of using this capability in high-volume manufacturing.
Practical vision based degraded text recognition system
NASA Astrophysics Data System (ADS)
Mohammad, Khader; Agaian, Sos; Saleh, Hani
2011-02-01
Rapid growth and progress in the medical, industrial, security and technology fields mean more and more consideration for the use of camera-based optical character recognition (OCR). Applying OCR to scanned documents is quite mature, and there are many commercial and research products available on this topic. These products achieve acceptable recognition accuracy and reasonable processing times, especially with trained software and constrained text characteristics. Even though the application space for OCR is huge, it is quite challenging to design a single system that is capable of performing automatic OCR for text embedded in an image irrespective of the application. Challenges for OCR systems include images taken under natural real-world conditions, surface curvature, text orientation, font, size, lighting conditions, and noise. These and many other conditions make it extremely difficult to achieve reasonable character recognition. Performance of conventional OCR systems drops dramatically as the degradation level of the text image quality increases. In this paper, a new recognition method is proposed to recognize solid or dotted-line degraded characters. The degraded text string is localized and segmented using a new algorithm. The new method was implemented and tested using a development framework system that is capable of performing OCR on camera-captured images. The framework allows parameter tuning of the image-processing algorithm based on a training set of camera-captured text images. Novel methods were used for enhancement, text localization and the segmentation algorithm, which enables building a custom system that is capable of performing automatic OCR and can be used for different applications. The developed framework system includes new image enhancement, filtering, and segmentation techniques which enabled higher recognition accuracies, faster processing times, and lower energy consumption compared with the best state-of-the-art published techniques. The system successfully produced impressive OCR accuracies (90% to 93%) using customized systems generated by our development framework in two industrial OCR applications: water bottle label text recognition and concrete slab plate text recognition. The system was also trained for the Arabic language alphabet, and demonstrated extremely high recognition accuracy (99%) for Arabic license name plate text recognition with processing times of 10 seconds. The accuracy and run times of the system were compared to conventional and many state-of-the-art methods, and the proposed system shows excellent results.
Hennig, Simon; van de Linde, Sebastian; Lummer, Martina; Simonis, Matthias; Huser, Thomas; Sauer, Markus
2015-02-11
Labeling internal structures within living cells with standard fluorescent probes is a challenging problem. Here, we introduce a novel intracellular staining method that enables us to carefully control the labeling process and provides instant access to the inner structures of living cells. Using a hollow glass capillary with a diameter of <100 nm, we deliver functionalized fluorescent probes directly into the cells by (di)electrophoretic forces. The label density can be adjusted and traced directly during the staining process by fluorescence microscopy. We demonstrate the potential of this technique by delivering and imaging a range of commercially available cell-permeable and nonpermeable fluorescent probes to cells.
Si, Dong; He, Jing
2014-01-01
The electron cryo-microscopy (cryo-EM) technique produces 3-dimensional (3D) density images of proteins. When the resolution of the images is not high enough to resolve the molecular details, it is challenging for image processing methods to enhance the molecular features. A β-barrel is a particular structural feature that is formed by multiple β-strands in a barrel shape. There is no existing method to derive β-strands from the 3D image of a β-barrel at medium resolutions. We propose a new method, StrandRoller, to generate a small set of possible β-traces from density images at medium resolutions of 5-10 Å. StrandRoller has been tested using eleven β-barrel images simulated to 10 Å resolution and one image isolated from an experimentally derived cryo-EM density image at 6.7 Å resolution. StrandRoller was able to detect 81.84% of the β-strands with an overall 1.5 Å two-way distance between the detected and the observed β-traces, if the best of fifteen detections is considered. Our results suggest that it is possible to derive a small set of possible β-traces from a β-barrel cryo-EM image at medium resolutions even when no separation of the β-strands is visible in the images.
SoFAST: Automated Flare Detection with the PROBA2/SWAP EUV Imager
NASA Astrophysics Data System (ADS)
Bonte, K.; Berghmans, D.; De Groof, A.; Steed, K.; Poedts, S.
2013-08-01
The Sun Watcher with Active Pixels and Image Processing (SWAP) EUV imager onboard PROBA2 provides a non-stop stream of coronal extreme-ultraviolet (EUV) images at a cadence of typically 130 seconds. These images show the solar drivers of space weather, such as flares and erupting filaments. We have developed a software tool that automatically processes the images and localises and identifies flares. On one hand, the output of this software tool is intended as a service to the Space Weather Segment of ESA's Space Situational Awareness (SSA) program. On the other hand, we consider the PROBA2/SWAP images as a model for the data from the Extreme Ultraviolet Imager (EUI) instrument prepared for the future Solar Orbiter mission, where onboard intelligence is required for prioritising data within the challenging telemetry quota. In this article we present the concept of the software, the first statistics on its effectiveness, and the real-time online display of its results. Our results indicate that it is not only possible to detect EUV flares automatically in an acquired dataset, but that quantifying a range of EUV dynamics is also possible. The method is based on thresholding of macropixelled image sequences. The robustness and simplicity of the algorithm is a clear advantage for future onboard use.
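A minimal sketch of the macropixel-thresholding idea is shown below: each frame is block-averaged, and macropixels whose intensity jumps by a large factor relative to the previous frame are flagged as flare candidates. The block size and the relative-increase factor are placeholders, not SoFAST's tuned parameters.

```python
import numpy as np

def macropixel(frame, block=32):
    """Average an EUV frame over block x block macropixels."""
    h, w = frame.shape
    h, w = h - h % block, w - w % block              # crop to a multiple of the block size
    f = frame[:h, :w].reshape(h // block, block, w // block, block)
    return f.mean(axis=(1, 3))

def detect_flare(prev_frame, curr_frame, block=32, rel_increase=2.0):
    """Flag macropixels whose intensity jumps by a relative factor between frames."""
    prev_mp = macropixel(prev_frame, block)
    curr_mp = macropixel(curr_frame, block)
    hot = curr_mp > rel_increase * np.maximum(prev_mp, 1e-12)
    return hot, curr_mp
```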
Bright field segmentation tomography (BFST) for use as surface identification in stereomicroscopy
NASA Astrophysics Data System (ADS)
Thiesse, Jacqueline R.; Namati, Eman; de Ryk, Jessica; Hoffman, Eric A.; McLennan, Geoffrey
2004-07-01
Stereomicroscopy is an important method for image acquisition because it provides a 3D image of an object where other microscopic techniques can only provide a 2D image. One challenge faced with this type of imaging is determining the top surface of a sample that has otherwise indistinguishable surface and planar characteristics. We have developed a system that creates oblique illumination and, in conjunction with image processing, allows the top surface to be viewed. The BFST consists of the Leica MZ12 stereomicroscope with a unique attached lighting source. The lighting source consists of eight light emitting diodes (LEDs) that are separated by 45-degree angles. Each LED in this system illuminates with a 20-degree viewing angle once per cycle, casting a shadow over the rest of the sample. Subsequently, eight segmented images are taken per cycle. After the images are captured, they are stacked through image addition to achieve the full field of view, and the surface is then easily identified. Image processing techniques, such as skeletonization, can be used for further enhancement and measurement. With the use of BFST, advances can be made in detecting surface features from metals to tissue samples, such as in the analytical assessment of pulmonary emphysema using the technique of mean linear intercept.
Coastline detection with time series of SAR images
NASA Astrophysics Data System (ADS)
Ao, Dongyang; Dumitru, Octavian; Schwarz, Gottfried; Datcu, Mihai
2017-10-01
For maritime remote sensing, coastline detection is a vital task. With continuous coastline detection results from satellite image time series, the actual shoreline, the sea level, and environmental parameters can be observed to support coastal management and disaster warning. Established coastline detection methods are often based on SAR images and well-known image processing approaches. These methods involve a lot of complicated data processing, which is a big challenge for remote sensing time series. Additionally, a number of SAR satellites operating with polarimetric capabilities have been launched in recent years, and many investigations of target characteristics in radar polarization have been performed. In this paper, a fast and efficient coastline detection method is proposed which comprises three steps. First, we calculate a modified correlation coefficient of two SAR images of different polarization. This coefficient differs from the traditional computation, where normalization is needed. Through this modified approach, the separation between sea and land becomes more prominent. Second, we set a histogram-based threshold to distinguish between sea and land within the given image. The histogram is derived from the statistical distribution of the polarized SAR image pixel amplitudes. Third, we extract continuous coastlines using a Canny image edge detector that is rather immune to speckle noise. Finally, the individual coastlines derived from time series of SAR images can be checked for changes.
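The three steps can be sketched compactly, substituting an un-normalized local cross-channel product for the modified correlation coefficient, Otsu's method for the histogram-based threshold, and scikit-image's Canny detector for the edge extraction. The window size and smoothing are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.filters import threshold_otsu
from skimage.feature import canny

def coastline(sar_pol1, sar_pol2, window=9, sigma=1.0):
    """Sketch of the three-step scheme: un-normalized local cross-channel
    correlation, histogram (Otsu) thresholding, and Canny edge extraction."""
    a = sar_pol1.astype(float)
    b = sar_pol2.astype(float)
    cross = uniform_filter(a * b, size=window)       # local mean of the channel product
    land = cross > threshold_otsu(cross)             # sea/land separation
    edges = canny(land.astype(float), sigma=sigma)   # coastline pixels along the boundary
    return land, edges
```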
NASA Astrophysics Data System (ADS)
Bednar, Earl; Drager, Steven L.
2007-04-01
The objective of quantum information processing is to utilize revolutionary computing capability, based on harnessing the paradigm shift offered by quantum computing, to solve classically hard and computationally challenging problems. Some of our computationally challenging problems of interest include: the capability for rapid image processing, rapid optimization of logistics, protecting information, secure distributed simulation, and massively parallel computation. Currently, one important problem with quantum information processing is that the implementation of quantum computers is difficult to realize due to poor scalability and a high prevalence of errors. Therefore, we have supported the development of Quantum eXpress and QuIDD Pro, two quantum computer simulators running on classical computers for the development and testing of new quantum algorithms and processes. This paper examines the different methods used by these two quantum computing simulators. It reviews both simulators, highlighting each simulator's background, interface, and special features. It also demonstrates the implementation of current quantum algorithms on each simulator. It concludes with summary comments on both simulators.
Intraoperative cerebral blood flow imaging of rodents
NASA Astrophysics Data System (ADS)
Li, Hangdao; Li, Yao; Yuan, Lu; Wu, Caihong; Lu, Hongyang; Tong, Shanbao
2014-09-01
Intraoperative monitoring of cerebral blood flow (CBF) is of interest to neuroscience researchers, as it offers assessment of hemodynamic responses throughout the course of neurosurgery and provides an early biomarker for surgical guidance. However, intraoperative CBF imaging has been challenging due to the animal's motion and position changes during surgery. In this paper, we present the design of an operation bench integrated with a laser speckle contrast imager which enables intraoperative monitoring of CBF. With a specially designed stereotaxic frame and imager, we were able to monitor the CBF changes in both hemispheres during rodent surgery. The rotatable design of the operation plate and the implementation of online image registration allow the technician to move the animal without disturbing the CBF imaging during surgery. The performance of the system was tested on a middle cerebral artery occlusion model in rats.
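For reference, the core computation of laser speckle contrast imaging is the local ratio of standard deviation to mean of the raw speckle frame; regions with faster flow decorrelate the speckle and show lower contrast. A minimal sliding-window sketch, with an assumed 7x7 window, is given below.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(raw_speckle, window=7):
    """Spatial speckle contrast K = sigma/mean in a sliding window; lower K
    indicates faster flow (higher CBF)."""
    img = raw_speckle.astype(float)
    mean = uniform_filter(img, size=window)
    mean_sq = uniform_filter(img * img, size=window)
    var = np.clip(mean_sq - mean * mean, 0.0, None)   # guard against round-off
    return np.sqrt(var) / np.maximum(mean, 1e-12)
```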
NASA Astrophysics Data System (ADS)
Gorpas, D.; Yova, D.
2009-07-01
One of the major challenges in biomedical imaging is the extraction of quantified information from the acquired images. Light and tissue interaction leads to the acquisition of images that present inconsistent intensity profiles, and thus the accurate identification of the regions of interest is a rather complicated process. On the other hand, the complex geometries and the tangent objects that are very often present in the acquired images lead either to false detections or to the merging, shrinkage or expansion of the regions of interest. In this paper an algorithm, based on alternating sequential filtering and the watershed transformation, is proposed for the segmentation of biomedical images. This algorithm has been tested on two applications, each based on a different acquisition system, and the results illustrate its accuracy in segmenting the regions of interest.
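A compact sketch of the proposed pipeline, as it might look with scikit-image building blocks, is given below: an alternating sequential filter (openings and closings with structuring elements of increasing radius) smooths the inconsistent intensity profiles, and a marker-based watershed on the gradient image separates touching objects. The marker quantiles and maximum radius are assumptions for illustration.

```python
import numpy as np
from skimage.morphology import opening, closing, disk
from skimage.filters import sobel
from skimage.segmentation import watershed

def asf_watershed(image, max_radius=3, marker_quantiles=(0.2, 0.8)):
    """Alternating sequential filtering followed by marker-based watershed."""
    smoothed = image.astype(float)
    for r in range(1, max_radius + 1):          # ASF: opening then closing, growing radius
        se = disk(r)
        smoothed = closing(opening(smoothed, se), se)
    gradient = sobel(smoothed)                  # watershed relief
    lo, hi = np.quantile(smoothed, marker_quantiles)
    markers = np.zeros(image.shape, dtype=int)
    markers[smoothed < lo] = 1                  # background seeds
    markers[smoothed > hi] = 2                  # object seeds
    return watershed(gradient, markers)
```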
NASA Technical Reports Server (NTRS)
Saunders, R. S.; Spear, A. J.; Allin, P. C.; Austin, R. S.; Berman, A. L.; Chandlee, R. C.; Clark, J.; Decharon, A. V.; De Jong, E. M.; Griffith, D. G.
1992-01-01
Magellan started mapping the planet Venus on September 15, 1990, and after one cycle (one Venus day or 243 earth days) had mapped 84 percent of the planet's surface. This returned an image data volume greater than all past planetary missions combined. Spacecraft problems were experienced in flight. Changes in operational procedures and reprogramming of onboard computers minimized the amount of mapping data lost. Magellan data processing is the largest planetary image-processing challenge to date. Compilation of global maps of tectonic and volcanic features, as well as impact craters and related phenomena and surface processes related to wind, weathering, and mass wasting, has begun. The Magellan project is now in an extended mission phase, with plans for additional cycles out to 1995. The Magellan project will fill in mapping gaps, obtain a global gravity data set between mid-September 1992 and May 1993, acquire images at different view angles, and look for changes on the surface from one cycle to another caused by surface activity such as volcanism, faulting, or wind activity.
Live cell imaging of in vitro human trophoblast syncytialization.
Wang, Rui; Dang, Yan-Li; Zheng, Ru; Li, Yue; Li, Weiwei; Lu, Xiaoyin; Wang, Li-Juan; Zhu, Cheng; Lin, Hai-Yan; Wang, Hongmei
2014-06-01
Human trophoblast syncytialization, a process of cell-cell fusion, is one of the most important yet least understood events during placental development. Investigating the fusion process in a placenta in vivo is very challenging given the complexity of this process. Application of primary cultured cytotrophoblast cells isolated from term placentas and BeWo cells derived from human choriocarcinoma constitutes a biphasic strategy for dissecting the mechanism of trophoblast cell fusion, as the former can spontaneously fuse to form the multinucleated syncytium and the latter is capable of fusing under treatment with forskolin (FSK). Live-cell imaging is a powerful tool that is widely used to investigate many physiological or pathological processes in various animal models or humans; however, to our knowledge, the mechanism of trophoblast cell fusion has not been reported using a live-cell imaging approach. In this study, a live-cell imaging system was used to delineate the fusion process of primary term cytotrophoblast cells and BeWo cells. By using live staining with Hoechst 33342 or cytoplasmic dyes or by stably transfecting enhanced green fluorescent protein (EGFP) and DsRed2-Nuc reporter plasmids, we observed finger-like protrusions on the cell membranes of fusion partners before fusion and the exchange of cytoplasmic contents during fusion. In summary, this study provides the first video recording of the process of trophoblast syncytialization. Furthermore, the various live-cell imaging systems used in this study will help to yield molecular insights into the syncytialization process during placental development. © 2014 by the Society for the Study of Reproduction, Inc.
Diken, Mustafa; Pektor, Stefanie; Miederer, Matthias
2016-10-01
Preclinical imaging has become a powerful method for investigation of in vivo processes such as pharmacokinetics of therapeutic substances and visualization of physiologic and pathophysiological mechanisms. These are important aspects to understand diseases and develop strategies to modify their progression with pharmacologic interventions. One promising intervention is the application of specifically tailored nanoscale particles that modulate the immune system to generate a tumor targeting immune response. In this complex interaction between immunomodulatory therapies, the immune system and malignant disease, imaging methods are expected to play a key role on the way to generate new therapeutic strategies. Here, we summarize examples which demonstrate the current potential of imaging methods and develop a perspective on the future value of preclinical imaging of the immune system.
Striping artifact reduction in lunar orbiter mosaic images
Mlsna, P.A.; Becker, T.
2006-01-01
Photographic images of the moon from the 1960s Lunar Orbiter missions are being processed into maps for visual use. The analog nature of the images has produced numerous artifacts, the chief of which causes a vertical striping pattern in mosaic images formed from a series of filmstrips. Previous methods of stripe removal tended to introduce ringing and aliasing problems in the image data. This paper describes a recently developed alternative approach that succeeds at greatly reducing the striping artifacts while avoiding the creation of ringing and aliasing artifacts. The algorithm uses a one-dimensional frequency-domain step to deal with the periodic component of the striping artifact and a spatial-domain step to handle the aperiodic residue. Several variations of the algorithm have been explored. Results, strengths, and remaining challenges are presented. © 2006 IEEE.
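A simplified version of the two-step idea is sketched below: the per-column mean profile carries the stripe signature, its dominant periodic component is removed with a one-dimensional FFT notch, and the remaining high-frequency aperiodic offsets are estimated against a median-filtered baseline and subtracted. The number of notched bins and the baseline window are placeholders, and the published algorithm is considerably more involved.

```python
import numpy as np
from scipy.ndimage import median_filter

def destripe(image, n_notch=3, residue_window=31):
    """Reduce vertical striping via a 1-D frequency-domain notch on the
    column-mean profile plus a spatial-domain correction of the aperiodic residue."""
    img = image.astype(float)
    profile = img.mean(axis=0)                         # per-column mean (stripe signature)
    centered = profile - profile.mean()
    spectrum = np.fft.rfft(centered)
    # Notch the n_notch strongest frequency bins (DC already removed by centering)
    strongest = np.argsort(np.abs(spectrum))[-n_notch:]
    periodic_spectrum = np.zeros_like(spectrum)
    periodic_spectrum[strongest] = spectrum[strongest]
    periodic = np.fft.irfft(periodic_spectrum, n=profile.size)
    # Aperiodic residue: fast column-to-column offsets left after the notch,
    # separated from slow scene trends by a median-filtered baseline
    residue = centered - periodic
    aperiodic = residue - median_filter(residue, size=residue_window)
    return img - periodic[None, :] - aperiodic[None, :]
```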
Overlay metrology for double patterning processes
NASA Astrophysics Data System (ADS)
Leray, Philippe; Cheng, Shaunee; Laidler, David; Kandel, Daniel; Adel, Mike; Dinu, Berta; Polli, Marco; Vasconi, Mauro; Salski, Bartlomiej
2009-03-01
The double patterning (DPT) process is foreseen by the industry to be the main solution for the 32 nm technology node and even beyond. Meanwhile, process compatibility has to be maintained and the performance of overlay metrology has to improve. To achieve this for Image Based Overlay (IBO), usually the optics of overlay tools are improved. It was also demonstrated that these requirements are achievable with a Diffraction Based Overlay (DBO) technique named SCOL [1]. In addition, we believe that overlay measurements with respect to a reference grid are required to achieve the required overlay control [2]. This induces at least a three-fold increase in the number of measurements (2 for the double patterned layers to the reference grid and 1 between the double patterned layers). The requirements of process compatibility, enhanced performance and a large number of measurements make the choice of overlay metrology for DPT very challenging. In this work we use different flavors of the standard overlay metrology technique (IBO) as well as the new technique (SCOL) to address these three requirements. The compatibility of the corresponding overlay targets with double patterning processes (Litho-Etch-Litho-Etch (LELE); Litho-Freeze-Litho-Etch (LFLE); spacer defined) is tested. The process impact on different target types is discussed (CD bias for LELE, contrast for LFLE). We compare the standard imaging overlay metrology with non-standard imaging techniques dedicated to double patterning processes (multilayer imaging targets allowing one overlay target instead of three, very small imaging targets). In addition to the standard designs already discussed [1], we investigate SCOL target designs specific to double patterning processes. The feedback to the scanner is determined using the different techniques. The final overlay results obtained are compared accordingly. We conclude with the pros and cons of each technique and suggest the optimal metrology strategy for overlay control in double patterning processes.
High-energy proton imaging for biomedical applications
Prall, Matthias; Durante, Marco; Berger, Thomas; ...
2016-06-10
The charged particle community is looking for techniques exploiting proton interactions instead of X-ray absorption for creating images of human tissue. Due to multiple Coulomb scattering inside the measured object, it has proven to be highly non-trivial to achieve sufficient spatial resolution. We present imaging of biological tissue with a proton microscope. This device relies on magnetic optics, distinguishing it from most published proton imaging methods, for which reducing the data acquisition time to a clinically acceptable level has turned out to be challenging. In a proton microscope, data acquisition and processing are much simpler. This device even allows imaging in real time. The primary medical application will be image guidance in proton radiosurgery. Proton images demonstrating the potential for this application are presented. Tomographic reconstructions are included to raise awareness of the possibility of high-resolution proton tomography using magneto-optics.
High-energy proton imaging for biomedical applications
NASA Astrophysics Data System (ADS)
Prall, M.; Durante, M.; Berger, T.; Przybyla, B.; Graeff, C.; Lang, P. M.; Latessa, C.; Shestov, L.; Simoniello, P.; Danly, C.; Mariam, F.; Merrill, F.; Nedrow, P.; Wilde, C.; Varentsov, D.
2016-06-01
The charged particle community is looking for techniques exploiting proton interactions instead of X-ray absorption for creating images of human tissue. Due to multiple Coulomb scattering inside the measured object it has shown to be highly non-trivial to achieve sufficient spatial resolution. We present imaging of biological tissue with a proton microscope. This device relies on magnetic optics, distinguishing it from most published proton imaging methods. For these methods reducing the data acquisition time to a clinically acceptable level has turned out to be challenging. In a proton microscope, data acquisition and processing are much simpler. This device even allows imaging in real time. The primary medical application will be image guidance in proton radiosurgery. Proton images demonstrating the potential for this application are presented. Tomographic reconstructions are included to raise awareness of the possibility of high-resolution proton tomography using magneto-optics.
Efficient geometric rectification techniques for spectral analysis algorithm
NASA Technical Reports Server (NTRS)
Chang, C. Y.; Pang, S. S.; Curlander, J. C.
1992-01-01
The spectral analysis algorithm is a viable technique for processing synthetic aperture radar (SAR) data at near-real-time throughput rates by trading off image resolution. One major challenge of the spectral analysis algorithm is that the output image, often referred to as the range-Doppler image, is represented on iso-range and iso-Doppler lines, a curved grid format. This phenomenon is known as the fan-shape effect. Therefore, resampling is required to convert the range-Doppler image into a rectangular grid format before the individual images can be overlaid together to form seamless multi-look strip imagery. An efficient algorithm for geometric rectification of the range-Doppler image is presented. The proposed algorithm, realized in two one-dimensional resampling steps, takes into consideration the fan-shape phenomenon of the range-Doppler image as well as the high squint angle and updates of the cross-track and along-track Doppler parameters. No ground reference points are required.
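The two-pass resampling can be sketched as follows, assuming that the ground coordinates of every range-Doppler sample (hypothetical col_positions and row_positions arrays, monotonically increasing along the interpolated axis) have been precomputed elsewhere from the fan-shape geometry, squint angle and Doppler parameters.

```python
import numpy as np

def rectify_range_doppler(rd_image, col_positions, row_positions):
    """Two-pass 1-D resampling of a range-Doppler image onto a rectangular grid.

    col_positions : (n_rows, n_cols) cross-track ground position of every input sample
    row_positions : (n_rows, n_cols) along-track ground position after the first pass
    Both arrays must be monotonically increasing along the axis being resampled.
    """
    n_rows, n_cols = rd_image.shape
    uniform_cols = np.linspace(col_positions.min(), col_positions.max(), n_cols)
    uniform_rows = np.linspace(row_positions.min(), row_positions.max(), n_rows)

    # Pass 1: resample every row onto a uniform cross-track grid
    pass1 = np.empty_like(rd_image, dtype=float)
    for i in range(n_rows):
        pass1[i] = np.interp(uniform_cols, col_positions[i], rd_image[i])

    # Pass 2: resample every column onto a uniform along-track grid
    rectified = np.empty_like(pass1)
    for j in range(n_cols):
        rectified[:, j] = np.interp(uniform_rows, row_positions[:, j], pass1[:, j])
    return rectified
```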
On-road anomaly detection by multimodal sensor analysis and multimedia processing
NASA Astrophysics Data System (ADS)
Orhan, Fatih; Eren, P. E.
2014-03-01
The use of smartphones in Intelligent Transportation Systems is gaining popularity, yet many challenges exist in developing functional applications. Due to the dynamic nature of transportation, vehicular social applications face complexities such as developing robust sensor management, performing signal and image processing tasks, and sharing information among users. This study utilizes a multimodal sensor analysis framework which enables the joint analysis of multiple sensor modalities. It also provides plugin-based analysis interfaces to develop sensor- and image-processing-based applications, and connects its users via a centralized application as well as to social networks to facilitate communication and socialization. Using this framework, an on-road anomaly detector has been developed and tested. The detector utilizes the sensors of a mobile device and is able to identify anomalies such as hard braking, pothole crossing, and speed bump crossing. Upon such detection, the video portion containing the anomaly is automatically extracted in order to enable further image processing analysis. The detection results are shared on a central portal application for online traffic condition monitoring.
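A toy version of the detector's sensor-side logic is sketched below: vertical-axis acceleration spikes indicate pothole or speed bump crossings, while sustained longitudinal deceleration indicates hard braking. The axis conventions and thresholds are assumptions for illustration, not the framework's plugin implementation.

```python
import numpy as np

def detect_anomalies(accel_z, accel_y, t, z_thresh=6.0, brake_thresh=-4.0):
    """Flag pothole/speed-bump crossings (vertical spikes) and hard braking
    (strong longitudinal deceleration) in accelerometer traces given in m/s^2."""
    vertical = np.abs(accel_z - np.median(accel_z))    # remove gravity/bias offset
    bumps = np.flatnonzero(vertical > z_thresh)        # vertical shock events
    brakes = np.flatnonzero(accel_y < brake_thresh)    # strong deceleration samples
    return t[bumps], t[brakes]                         # timestamps for video extraction
```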
Current Status of Single Particle Imaging with X-ray Lasers
Sun, Zhibin; Fan, Jiadong; Li, Haoyuan; ...
2018-01-22
The advent of ultrafast X-ray free-electron lasers (XFELs) opens the tantalizing possibility of the atomic-resolution imaging of reproducible objects such as viruses, nanoparticles, single molecules, clusters, and perhaps biological cells, achieving a resolution for single particle imaging better than a few tens of nanometers. Improving upon this is a significant challenge which has been the focus of a global single particle imaging (SPI) initiative launched in December 2014 at the Linac Coherent Light Source (LCLS), SLAC National Accelerator Laboratory, USA. A roadmap was outlined, and significant multi-disciplinary effort has since been devoted to work on the technical challenges of SPI such as radiation damage, beam characterization, beamline instrumentation and optics, sample preparation and delivery, and algorithm development at multiple institutions involved in the SPI initiative. Currently, the SPI initiative has achieved 3D imaging of rice dwarf virus (RDV) and coliphage PR772 viruses at ~10 nm resolution by using soft X-ray FEL pulses at the Atomic Molecular and Optical (AMO) instrument of LCLS. Meanwhile, diffraction patterns with signal above noise up to the corner of the detector, corresponding to a resolution of ~6 Ångström (Å), were also recorded with hard X-rays at the Coherent X-ray Imaging (CXI) instrument, also at LCLS. Achieving atomic resolution is truly a grand challenge and there is still a long way to go in light of recent developments in electron microscopy. However, the potential for studying dynamics at physiological conditions and capturing ultrafast biological, chemical and physical processes represents a tremendous potential application, attracting continued interest in pursuing further method development. In this paper, we give a brief introduction to SPI developments and look ahead to further method development.
Applications and challenges of digital pathology and whole slide imaging.
Higgins, C
2015-07-01
Virtual microscopy is a method for digitizing images of tissue on glass slides and using a computer to view, navigate, change magnification, focus and mark areas of interest. Virtual microscope systems (also called digital pathology or whole slide imaging systems) offer several advantages for biological scientists who use slides as part of their general, pharmaceutical, biotechnology or clinical research. The systems usually are based on one of two methodologies: area scanning or line scanning. Virtual microscope systems enable automatic sample detection, virtual-Z acquisition and creation of focal maps. Virtual slides are layered with multiple resolutions at each location, including the highest resolution needed to allow more detailed review of specific regions of interest. Scans may be acquired at 2, 10, 20, 40, 60 and 100 × or a combination of magnifications to highlight important detail. Digital microscopy starts when a slide collection is put into an automated or manual scanning system. The original slides are archived, then a server allows users to review multilayer digital images of the captured slides either by a closed network or by the internet. One challenge for adopting the technology is the lack of a universally accepted file format for virtual slides. Additional challenges include maintaining focus in an uneven sample, detecting specimens accurately, maximizing color fidelity with optimal brightness and contrast, optimizing resolution and keeping the images artifact-free. There are several manufacturers in the field and each has not only its own approach to these issues, but also its own image analysis software, which provides many options for users to enhance the speed, quality and accuracy of their process through virtual microscopy. Virtual microscope systems are widely used and are trusted to provide high quality solutions for teleconsultation, education, quality control, archiving, veterinary medicine, research and other fields.
Quantitative subsurface analysis using frequency modulated thermal wave imaging
NASA Astrophysics Data System (ADS)
Subhani, S. K.; Suresh, B.; Ghali, V. S.
2018-01-01
Quantitative depth analysis of subsurface anomalies with enhanced depth resolution is a challenging task in thermographic inspection. Frequency modulated thermal wave imaging, introduced earlier, provides complete depth scanning of the object by stimulating it with a suitable band of frequencies and then analyzing the subsequent thermal response using a suitable post-processing approach to resolve subsurface details. However, the conventional Fourier-transform-based methods used for post-processing unscramble the frequencies with a limited frequency resolution and therefore yield only a finite depth resolution. Spectral zooming provided by the chirp z-transform offers enhanced frequency resolution, which further improves the depth resolution so that the finest subsurface features can be explored axially. Quantitative depth analysis with this augmented depth resolution is proposed to provide a closer estimate of the actual depth of the subsurface anomaly. This manuscript experimentally validates the enhanced depth resolution using non-stationary thermal wave imaging and offers a first solution for quantitative depth estimation in frequency modulated thermal wave imaging.
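The spectral zooming step can be illustrated with SciPy's chirp-z-transform-based zoom FFT. The synthetic thermal response below contains two closely spaced frequency components; a plain FFT over the full band gives a 10 mHz bin spacing, whereas zooming the same number of bins onto a narrow band reduces the spacing by orders of magnitude. The sampling rate, band and signal are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import zoom_fft   # available in SciPy >= 1.8

fs = 25.0                        # assumed frame rate of the thermal sequence (Hz)
t = np.arange(0, 100, 1 / fs)    # 100 s frequency-modulated excitation window
# Synthetic thermal response with two closely spaced frequency components
resp = np.cos(2 * np.pi * 0.110 * t) + 0.5 * np.cos(2 * np.pi * 0.118 * t)

# Plain FFT: bin spacing is fs / N over the whole 0..fs/2 band
plain_resolution = fs / resp.size

# Chirp z-transform zoom onto the 0.05-0.20 Hz band with the same number of bins
f1, f2, m = 0.05, 0.20, resp.size
zoomed = zoom_fft(resp, [f1, f2], m=m, fs=fs)
zoom_resolution = (f2 - f1) / m

print(f"FFT bin spacing : {plain_resolution * 1e3:.3f} mHz")
print(f"Zoom bin spacing: {zoom_resolution * 1e3:.3f} mHz")
```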
A Wireless Capsule Endoscope System With Low-Power Controlling and Processing ASIC.
Xinkai Chen; Xiaoyu Zhang; Linwei Zhang; Xiaowen Li; Nan Qi; Hanjun Jiang; Zhihua Wang
2009-02-01
This paper presents the design of a wireless capsule endoscope system. The proposed system is mainly composed of a CMOS image sensor, an RF transceiver and a low-power controlling and processing application-specific integrated circuit (ASIC). Several design challenges involving system power reduction, system miniaturization and the wireless wake-up method are resolved by employing an optimized system architecture, integration of an area- and power-efficient image compression module, a power management unit (PMU) and a novel wireless wake-up subsystem with zero standby current in the ASIC design. The ASIC has been fabricated in 0.18-μm CMOS technology with a die area of 3.4 mm × 3.3 mm. The digital baseband can work under a power supply down to 0.95 V with a power dissipation of 1.3 mW. A prototype capsule based on the ASIC and a data recorder has been developed. Test results show that the proposed system architecture with local image compression leads to an average 45% energy reduction for transmitting an image frame.
A new iterative triclass thresholding technique in image segmentation.
Cai, Hongmin; Yang, Zhong; Cao, Xinhua; Xia, Weiming; Xu, Xiaoyin
2014-03-01
We present a new method in image segmentation that is based on Otsu's method but iteratively searches for subregions of the image for segmentation, instead of treating the full image as a whole region for processing. The iterative method starts with Otsu's threshold and computes the mean values of the two classes as separated by the threshold. Based on the Otsu threshold and the two mean values, the method separates the image into three classes instead of two as the standard Otsu's method does. The first two classes are determined as the foreground and background and they will not be processed further. The third class is denoted as a to-be-determined (TBD) region that is processed at the next iteration. At the succeeding iteration, Otsu's method is applied to the TBD region to calculate a new threshold and two class means, and the TBD region is again separated into three classes, namely, foreground, background, and a new TBD region, which by definition is smaller than the previous TBD regions. Then, the new TBD region is processed in a similar manner. The process stops when the difference between the Otsu thresholds calculated in two successive iterations is less than a preset threshold. Then, all the intermediate foreground and background regions are, respectively, combined to create the final segmentation result. Tests on synthetic and real images showed that the new iterative method can achieve better performance than the standard Otsu's method in many challenging cases, such as identifying weak objects and revealing fine structures of complex objects, while the added computational cost is minimal.
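A minimal implementation following the described procedure is sketched below, using scikit-image's Otsu threshold on the shrinking TBD region; the stopping tolerance and iteration cap are placeholders.

```python
import numpy as np
from skimage.filters import threshold_otsu

def iterative_triclass_otsu(image, eps=1e-3, max_iter=50):
    """Iterative triclass thresholding: at each pass, Otsu's threshold and the two
    class means split the current region into foreground, background and a smaller
    to-be-determined (TBD) region, which is processed at the next pass."""
    img = image.astype(float)
    foreground = np.zeros(img.shape, dtype=bool)
    tbd = np.ones(img.shape, dtype=bool)
    prev_t = None
    for _ in range(max_iter):
        values = img[tbd]
        if values.size < 2 or np.all(values == values[0]):
            break                                 # nothing left to split
        t = threshold_otsu(values)
        if prev_t is not None and abs(t - prev_t) < eps:
            foreground |= tbd & (img > t)         # final TBD split by the last threshold
            break
        prev_t = t
        mu_low = values[values <= t].mean()       # mean of the lower class
        mu_high = values[values > t].mean()       # mean of the upper class
        foreground |= tbd & (img > mu_high)       # clearly foreground pixels
        tbd = tbd & (img >= mu_low) & (img <= mu_high)   # shrink the TBD region
    return foreground
```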
ERIC Educational Resources Information Center
Lawrence, Julian; Lin, Ching-Chiu; Irwin, Rita
2017-01-01
The ways in which teachers adjust to challenges in the process of becoming professionals are complicated. Teacher mentorship, however, is an important step to creating and sustaining a strong professional career. This article discusses new understandings from a Canadian research project: "Pedagogical Assemblage: Building and Sustaining…
Abnormal Functional MRI BOLD Contrast in the Vegetative State after Severe Traumatic Brain Injury
ERIC Educational Resources Information Center
Heelmann, Volker
2010-01-01
For the rehabilitation process, the treatment of patients surviving brain injury in a vegetative state is still a serious challenge. The aim of this study was to investigate patients exhibiting severely disturbed consciousness using functional magnetic resonance imaging. Five cases of posttraumatic vegetative state and one with minimal…
USDA-ARS's Scientific Manuscript database
A significant challenge in ecological studies has been defining scales of observation that correspond to the relevant ecological scales for organisms or processes of interest. Remote sensing has become commonplace in ecological studies and management, but the default resolution of imagery often used...
Choice and Challenge for the American Woman. Revised Edition.
ERIC Educational Resources Information Center
Harbeson, Gladys Evans
The second edition, as the previous edition, deals with evolutionary processes contributing to changing life patterns of American women; however, new portions relate to the acceleration of the trend. The new self-image of women cannot be understood if viewed as an isolated development but must be interpreted with a perspective view. Two…
Neuroscience and Education: Issues and Challenges for Curriculum
ERIC Educational Resources Information Center
Clement, Neville D.; Lovat, Terence
2012-01-01
The burgeoning knowledge of the human brain generated by the proliferation of new brain imaging technology in recent decades has posed questions about the potential for this new knowledge of neural processing to be translated into "usable knowledge" that teachers can employ in their practical curriculum work. The application of the findings…
Validation of Clay Modeling as a Learning Tool for the Periventricular Structures of the Human Brain
ERIC Educational Resources Information Center
Akle, Veronica; Peña-Silva, Ricardo A.; Valencia, Diego M.; Rincón-Perez, Carlos W.
2018-01-01
Visualizing anatomical structures and functional processes in three dimensions (3D) is an important skill for medical students. However, contemplating 3D structures mentally and interpreting biomedical images can be challenging. This study examines the impact of a new pedagogical approach to teaching neuroanatomy, specifically how building a…
Automatic DNA Diagnosis for 1D Gel Electrophoresis Images using Bio-image Processing Technique.
Intarapanich, Apichart; Kaewkamnerd, Saowaluck; Shaw, Philip J; Ukosakit, Kittipat; Tragoonrung, Somvong; Tongsima, Sissades
2015-01-01
DNA gel electrophoresis is a molecular biology technique for separating different sizes of DNA fragments. Applications of DNA gel electrophoresis include DNA fingerprinting (genetic diagnosis), size estimation of DNA, and DNA separation for Southern blotting. Accurate interpretation of DNA banding patterns from electrophoretic images can be laborious and error-prone when a large number of bands are interrogated manually. Although many bio-imaging techniques have been proposed, none of them can fully automate the typing of DNA owing to the complexities of migration patterns typically obtained. We developed an image-processing tool that automatically calls genotypes from DNA gel electrophoresis images. The image processing workflow comprises three main steps: 1) lane segmentation, 2) extraction of DNA bands and 3) band genotype classification. The tool was originally intended to facilitate large-scale genotyping analysis of sugarcane cultivars. We tested the proposed tool on 10 gel images (433 cultivars) obtained from polyacrylamide gel electrophoresis (PAGE) of PCR amplicons for detecting intron length polymorphisms (ILP) at a single sugarcane locus. These gel images presented many challenges for automated lane/band segmentation, including lane distortion, band deformity, a high degree of background noise, and bands that are very close together (doublets). Using the proposed bio-imaging workflow, lanes and the DNA bands contained within them are properly segmented, even for adjacent bands with aberrant migration that cannot be separated by conventional techniques. The software, called GELect, automatically performs genotype calling on each lane by comparing it with an all-banding reference, which was created by clustering the existing bands into a non-redundant set of reference bands. The automated genotype calling results were verified by independent manual typing by molecular biologists. This work presents an automated genotyping tool for DNA gel electrophoresis images, called GELect, which was written in Java and made available through the ImageJ framework. With a novel automated image processing workflow, the tool can accurately segment lanes from a gel matrix and intelligently extract distorted and even doublet bands that are difficult to identify with existing image processing tools. Consequently, genotyping from DNA gel electrophoresis can be performed automatically, allowing users to efficiently conduct large-scale DNA fingerprinting via DNA gel electrophoresis. The software is freely available from http://www.biotec.or.th/gi/tools/gelect.
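A very rough sketch of the first two workflow steps (lane segmentation and band extraction) on an idealised, bright-bands-on-dark-background gel image, using 1-D intensity projections with SciPy; GELect's actual handling of lane distortion and doublet bands is far more sophisticated, and all function and parameter names below are illustrative only:

import numpy as np
from scipy.signal import find_peaks

def segment_lanes_and_bands(gel, lane_min_distance=20, band_prominence=10):
    # Step 1: lane segmentation -- project intensity onto the horizontal axis;
    # lanes appear as broad peaks in the column-wise mean intensity.
    column_profile = gel.mean(axis=0)
    lane_centres, _ = find_peaks(column_profile, distance=lane_min_distance)

    # Step 2: band extraction -- within a strip around each lane centre, bands
    # appear as peaks in the row-wise mean intensity.
    bands = {}
    half_width = lane_min_distance // 2
    for x in lane_centres:
        strip = gel[:, max(0, x - half_width): x + half_width]
        row_profile = strip.mean(axis=1)
        band_rows, _ = find_peaks(row_profile, prominence=band_prominence)
        bands[int(x)] = band_rows        # vertical positions of the bands in this lane
    return lane_centres, bands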
Kamran, Mudassar; Fowler, Kathryn J; Mellnick, Vincent M; Sicard, Gregorio A; Narra, Vamsi R
2016-06-01
Primary aortic neoplasms are rare. Aortic sarcoma arising after endovascular aneurysm repair (EVAR) is a scarce subset of primary aortic malignancies, reports of which are infrequent in the published literature. The diagnosis of aortic sarcoma is challenging due to its non-specific clinical presentation, and the prognosis is poor due to delayed diagnosis, rapid proliferation, and propensity for metastasis. Post-EVAR, aortic sarcomas may mimic other, more common aortic processes on surveillance imaging. Radiologists are often unfamiliar with this rare entity, for which awareness and multimodality imaging are invaluable in early diagnosis. A series of three pathologically confirmed cases is presented to illustrate the multimodality imaging features and clinical presentations of aortic sarcoma arising after EVAR.
Pigment network-based skin cancer detection.
Alfed, Naser; Khelifi, Fouad; Bouridane, Ahmed; Seker, Huseyin
2015-08-01
Diagnosing skin cancer in its early stages is a challenging task for dermatologists; because early detection greatly improves a patient's chance of survival, the process of analyzing skin images and making decisions should be time efficient. Therefore, diagnosing the disease using automated and computerized systems has become essential. This paper proposes an efficient system for skin cancer detection on dermoscopic images. It has been shown that the statistical characteristics of the pigment network, extracted from the dermoscopic image, can be used as efficient discriminating features for cancer detection. The proposed system has been assessed on a dataset of 200 dermoscopic images from the `Hospital Pedro Hispano' [1], and the results of cross-validation have shown high detection accuracy.
Investigation of autofocus algorithms for brightfield microscopy of unstained cells
NASA Astrophysics Data System (ADS)
Wu, Shu Yu; Dugan, Nazim; Hennelly, Bryan M.
2014-05-01
In the past decade there has been significant interest in image processing for brightfield cell microscopy. Much of the previous research on image processing for microscopy has focused on fluorescence microscopy, including cell counting, cell tracking, cell segmentation and autofocusing. Fluorescence microscopy provides functional image information that involves the use of labels in the form of chemical stains or dyes. For some applications, where the biochemical integrity of the cell is required to remain unchanged so that sensitive chemical testing can later be applied, it is necessary to avoid staining. For this reason the challenge of processing images of unstained cells has become a topic of increasing attention. These cells are often effectively transparent and appear to have a homogenous intensity profile when they are in focus. Bright field microscopy is the most universally available and most widely used form of optical microscopy and for this reason we are interested in investigating image processing of unstained cells recorded using a standard bright field microscope. In this paper we investigate the application of a range of different autofocus metrics applied to unstained bladder cancer cell lines using a standard inverted bright field microscope with microscope objectives that have high magnification and numerical aperture. We present a number of conclusions on the optimum metrics and the manner in which they should be applied for this application.
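Typical focus metrics of the kind compared in such studies can be computed in a few lines; the three below (variance of the Laplacian, Tenengrad and normalised variance) are standard examples from the autofocus literature, not necessarily the exact set evaluated in the paper:

import numpy as np
from scipy import ndimage

def variance_of_laplacian(image):
    # In-focus images contain more high-frequency content, so the Laplacian response varies more.
    return ndimage.laplace(image.astype(np.float64)).var()

def tenengrad(image):
    # Mean squared Sobel gradient magnitude.
    img = image.astype(np.float64)
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    return np.mean(gx ** 2 + gy ** 2)

def normalized_variance(image):
    # Intensity variance divided by the mean, reducing sensitivity to illumination changes.
    img = image.astype(np.float64)
    return img.var() / (img.mean() + 1e-12)

# Autofocus then amounts to evaluating a metric over a z-stack and keeping the sharpest slice:
# best_index = max(range(len(stack)), key=lambda i: variance_of_laplacian(stack[i]))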
Real-time access of large volume imagery through low-bandwidth links
NASA Astrophysics Data System (ADS)
Phillips, James; Grohs, Karl; Brower, Bernard; Kelly, Lawrence; Carlisle, Lewis; Pellechia, Matthew
2010-04-01
Providing current, time-sensitive imagery and geospatial information to deployed tactical military forces or first responders continues to be a challenge. This challenge is compounded through rapid increases in sensor collection volumes, both with larger arrays and higher temporal capture rates. Focusing on the needs of these military forces and first responders, ITT developed a system called AGILE (Advanced Geospatial Imagery Library Enterprise) Access as an innovative approach based on standard off-the-shelf techniques to solving this problem. The AGILE Access system is based on commercial software called Image Access Solutions (IAS) and incorporates standard JPEG 2000 processing. Our solution system is implemented in an accredited, deployable form, incorporating a suite of components, including an image database, a web-based search and discovery tool, and several software tools that act in concert to process, store, and disseminate imagery from airborne systems and commercial satellites. Currently, this solution is operational within the U.S. Government tactical infrastructure and supports disadvantaged imagery users in the field. This paper presents the features and benefits of this system to disadvantaged users as demonstrated in real-world operational environments.
NASA Astrophysics Data System (ADS)
Lazcano, R.; Madroñal, D.; Fabelo, H.; Ortega, S.; Salvador, R.; Callicó, G. M.; Juárez, E.; Sanz, C.
2017-10-01
Hyperspectral Imaging (HI) assembles high resolution spectral information from hundreds of narrow bands across the electromagnetic spectrum, thus generating 3D data cubes in which each spatial pixel gathers the spectral reflectance information across all bands. As a result, each image is composed of large volumes of data, which turns its processing into a challenge, as performance requirements have been continuously tightened. For instance, new HI applications demand real-time responses. Hence, parallel processing becomes a necessity to achieve this requirement, so the intrinsic parallelism of the algorithms must be exploited. In this paper, a spatial-spectral classification approach has been implemented using a dataflow language known as RVC-CAL. This language represents a system as a set of functional units, and its main advantage is that it simplifies the parallelization process by mapping the different blocks over different processing units. The spatial-spectral classification approach aims at refining previously obtained classification results by using a K-Nearest Neighbors (KNN) filtering process, in which both the pixel spectral value and the spatial coordinates are considered. To do so, KNN needs two inputs: a one-band representation of the hyperspectral image and the classification results provided by a pixel-wise classifier. Thus, the spatial-spectral classification algorithm is divided into three stages: a Principal Component Analysis (PCA) algorithm for computing the one-band representation of the image, a Support Vector Machine (SVM) classifier, and the KNN-based filtering algorithm. The parallelization of these algorithms shows promising results in terms of computational time, as mapping them over different cores yields a speedup of 2.69x when using 3 cores. Consequently, experimental results demonstrate that real-time processing of hyperspectral images is achievable.
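The three-stage chain (PCA one-band representation, pixel-wise SVM classification, KNN-based spatial-spectral refinement) can be approximated serially with scikit-learn; the sketch below ignores the RVC-CAL dataflow parallelisation, the exact form of the paper's KNN filter, and the relative scaling of spatial and spectral features, all of which are our assumptions:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

def spatial_spectral_classify(cube, train_mask, train_labels, k=5):
    # cube: (H, W, B) hyperspectral cube; train_mask: boolean (H, W) of labelled pixels;
    # train_labels: labels of those pixels in raster order.
    H, W, B = cube.shape
    pixels = cube.reshape(-1, B)

    # Stage 1: one-band representation via the first principal component.
    one_band = PCA(n_components=1).fit_transform(pixels).ravel()

    # Stage 2: pixel-wise SVM classification from the labelled training pixels.
    svm = SVC().fit(pixels[train_mask.ravel()], train_labels)
    pixel_labels = svm.predict(pixels)

    # Stage 3: KNN refinement -- neighbours are sought jointly in the spatial coordinates
    # and the one-band value, and each pixel takes the majority label among its k nearest
    # neighbours (in practice the spatial and spectral features would need consistent scaling).
    rows, cols = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    features = np.column_stack([rows.ravel(), cols.ravel(), one_band])
    knn = KNeighborsClassifier(n_neighbors=k).fit(features, pixel_labels)
    return knn.predict(features).reshape(H, W)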
Gbadebo, Adenowo A; Turitsyna, Elena G; Williams, John A R
2018-01-22
We demonstrate the design and fabrication of multichannel fibre Bragg gratings (FBGs) with aperiodic channel spacings. These will be suitable for the suppression of specific spectral lines, such as OH emission lines in the near infrared (NIR), which degrade ground-based astronomical imaging. We discuss the design process used to meet a given specification and the fabrication challenges that can give rise to errors in the final manufactured device. We propose and demonstrate solutions to meet these challenges.
New Processing of Spaceborne Imaging Radar-C (SIR-C) Data
NASA Astrophysics Data System (ADS)
Meyer, F. J.; Gracheva, V.; Arko, S. A.; Labelle-Hamer, A. L.
2017-12-01
The Spaceborne Imaging Radar-C (SIR-C) was a radar system which successfully operated on two separate shuttle missions in April and October 1994. During these two missions, a total of 143 hours of radar data were recorded. SIR-C was the first multifrequency and polarimetric spaceborne radar system, operating at dual frequency (L- and C-band) and with quad-polarization. SIR-C had a variety of different operating modes, which are innovative even from today's point of view. Depending on the mode, it was possible to acquire data with different polarizations and carrier frequency combinations. Additionally, different swaths and bandwidths could be used during data collection, and it was possible to receive data with two antennas in the along-track direction. The United States Geological Survey (USGS) distributes the synthetic aperture radar (SAR) images as single-look complex (SLC) and multi-look complex (MLC) products. Unfortunately, since June 2005 the SIR-C processor has been inoperable and not repairable. All acquired SLC and MLC images were processed at a coarse resolution of 100 m with the goal of generating a quick look. These images are, however, not well suited for scientific analysis. Only a small percentage of the acquired data has been processed as full resolution SAR images, and the unprocessed high resolution data cannot currently be processed. At the Alaska Satellite Facility (ASF) a new processor was developed to process binary SIR-C data to full resolution SAR images. ASF is planning to process the entire recoverable SIR-C archive to full resolution SLCs, MLCs and high resolution geocoded image products. ASF will make these products available to the science community through their existing data archiving and distribution system. The final paper will describe the new processor and analyze the challenges of reprocessing the SIR-C data.
An Imaging System for Automated Characteristic Length Measurement of Debrisat Fragments
NASA Technical Reports Server (NTRS)
Moraguez, Mathew; Patankar, Kunal; Fitz-Coy, Norman; Liou, J.-C.; Sorge, Marlon; Cowardin, Heather; Opiela, John; Krisko, Paula H.
2015-01-01
The debris fragments generated by DebriSat's hypervelocity impact test are currently being processed and characterized through an effort of NASA and USAF. The debris characteristics will be used to update satellite breakup models. In particular, the physical dimensions of the debris fragments must be measured to provide characteristic lengths for use in these models. Calipers and commercial 3D scanners were considered as measurement options, but an automated imaging system was ultimately developed to measure debris fragments. By automating the entire process, the measurement results are made repeatable and the human factor associated with calipers and 3D scanning is eliminated. Unlike using calipers to measure, the imaging system obtains non-contact measurements to avoid damaging delicate fragments. Furthermore, this fully automated measurement system minimizes fragment handling, which reduces the potential for fragment damage during the characterization process. In addition, the imaging system reduces the time required to determine the characteristic length of the debris fragment. In this way, the imaging system can measure the tens of thousands of DebriSat fragments at a rate of about six minutes per fragment, compared to hours per fragment in NASA's current 3D scanning measurement approach. The imaging system utilizes a space carving algorithm to generate a 3D point cloud of the article being measured and a custom developed algorithm then extracts the characteristic length from the point cloud. This paper describes the measurement process, results, challenges, and future work of the imaging system used for automated characteristic length measurement of DebriSat fragments.
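For reference, the characteristic length itself can be approximated in a few lines once a 3-D point cloud of the fragment is available; the sketch below averages the fragment's extents along its principal axes, a simplification of the breakup-model definition (longest dimension, then the largest dimensions orthogonal to it), and is not the project's space-carving pipeline:

import numpy as np

def characteristic_length(points):
    # points: (N, 3) array of fragment surface points, e.g. from space carving.
    pts = np.asarray(points, dtype=np.float64)
    centred = pts - pts.mean(axis=0)
    # Principal axes of the cloud from an SVD of the centred coordinates.
    _, _, axes = np.linalg.svd(centred, full_matrices=False)
    extents = [(centred @ axis).ptp() for axis in axes]   # extent along each principal axis
    return float(np.mean(extents))                        # average of the three dimensions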
Automatic detection of the inner ears in head CT images using deep convolutional neural networks
NASA Astrophysics Data System (ADS)
Zhang, Dongqing; Noble, Jack H.; Dawant, Benoit M.
2018-03-01
Cochlear implants (CIs) use electrode arrays that are surgically inserted into the cochlea to stimulate nerve endings to replace the natural electro-mechanical transduction mechanism and restore hearing for patients with profound hearing loss. Post-operatively, the CI needs to be programmed. Traditionally, this is done by an audiologist who is blind to the positions of the electrodes relative to the cochlea and relies on the patient's subjective response to stimuli. This is a trial-and-error process that can be frustratingly long (dozens of programming sessions are not unusual). To assist audiologists, we have proposed what we call IGCIP for image-guided cochlear implant programming. In IGCIP, we use image processing algorithms to segment the intra-cochlear anatomy in pre-operative CT images and to localize the electrode arrays in post-operative CTs. We have shown that programming strategies informed by image-derived information significantly improve hearing outcomes for both adults and pediatric populations. We are now aiming at deploying these techniques clinically, which requires full automation. One challenge we face is the lack of standard image acquisition protocols. The content of the image volumes we need to process thus varies greatly and visual inspection and labelling is currently required to initialize processing pipelines. In this work we propose a deep learning-based approach to automatically detect if a head CT volume contains two ears, one ear, or no ear. Our approach has been tested on a data set that contains over 2,000 CT volumes from 153 patients and we achieve an overall 95.97% classification accuracy.
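A minimal PyTorch sketch of such a volume-level classifier (two ears, one ear, or no ear); the architecture below is generic and illustrative, not the network described in the paper:

import torch
import torch.nn as nn

class EarPresenceNet(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),      # global pooling copes with variable volume sizes
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                 # x: (batch, 1, D, H, W) CT volume
        return self.classifier(self.features(x).flatten(1))

# model = EarPresenceNet()
# logits = model(torch.randn(2, 1, 64, 64, 64))   # two dummy volumes -> (2, 3) class scores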
Metsälä, Eija; Richli Meystre, Nicole; Pires Jorge, José; Henner, Anja; Kukkes, Tiina; Sá Dos Reis, Cláudia
2017-06-01
This study aims to identify European radiographers' challenges in clinical performance in mammography and the main areas of mammography that require more and better training. An extensive search was performed to identify relevant studies focused on clinical practice, education and training in mammography published between January 2010 and December 2015 in the English language. The data were analysed by using deductive thematic analysis. A total of 27 full text articles were read, evaluating their quality. Sixteen articles out of 27 were finally selected for this integrative review. The main challenges of radiographers' mammography education/training can be divided into three groups: training needs, challenges related to radiographers, and challenges related to the organization of education. The most common challenges of clinical performance in mammography among European radiographers involved technical performance, the quality of practices, and patient-centeredness. The introduction of harmonized mammography guidelines across Europe may serve as an evidence-based tool to be implemented in practice and education. However, the variability in human and material resources as well as the different cultural contexts should be considered during this process. • Radiographers' awareness of their professional identity and enhancing multiprofessional cooperation in mammography. • Radiographers' responsibilities regarding image quality (IQ) and optimal breast imaging performance. • Patient-centred mammography services focusing on the psychosocial needs of the patient. • Challenges: positioning, QC-testing, IQ-assessment, optimization of breast compression, communication, teamwork, and patient-centred care. • Introduction of evidence-based guidelines in Europe to harmonize mammography practice and education.
Multispectral photoacoustic imaging of nerves with a clinical ultrasound system
NASA Astrophysics Data System (ADS)
Mari, Jean Martial; West, Simeon; Beard, Paul C.; Desjardins, Adrien E.
2014-03-01
Accurate and efficient identification of nerves is of great importance during many ultrasound-guided clinical procedures, including nerve blocks and prostate biopsies. It can be challenging to visualise nerves with conventional ultrasound imaging, however. One of the challenges is that nerves can have very similar appearances to nearby structures such as tendons. Several recent studies have highlighted the potential of near-infrared optical spectroscopy for differentiating nerves and adjacent tissues, as this modality can be sensitive to optical absorption of lipids that are present in intra- and extra-neural adipose tissue and in the myelin sheaths. These studies were limited to point measurements, however. In this pilot study, a custom photoacoustic system with a clinical ultrasound imaging probe was used to acquire multi-spectral photoacoustic images of nerves and tendons from swine ex vivo, across the wavelength range of 1100 to 1300 nm. Photoacoustic images were processed and overlaid in colour onto co-registered conventional ultrasound images that were acquired with the same imaging probe. A pronounced optical absorption peak centred at 1210 nm was observed in the photoacoustic signals obtained from nerves, and it was absent in those obtained from tendons. This absorption peak, which is consistent with the presence of lipids, provides a novel image contrast mechanism to significantly enhance the visualization of nerves. In particular, image contrast for nerves was up to 5.5 times greater with photoacoustic imaging (0.82 +/- 0.15) than with conventional ultrasound imaging (0.148 +/- 0.002), with a maximum contrast of 0.95 +/- 0.02 obtained in photoacoustic mode. This pilot study demonstrates the potential of photoacoustic imaging to improve clinical outcomes in ultrasound-guided interventions in regional anaesthesia and interventional oncology.
Li, Jinhui; Wan, Haitong; Zhang, Hong; Tian, Mei
2011-09-01
Traditional Chinese medicine (TCM), which is fundamentally different from Western medicine, has been widely investigated using various approaches. Cellular- and molecular-based imaging has been used to investigate the challenges identified, and the progress made, with therapeutic methods in TCM. Insight into the cellular and molecular changes underlying TCM processes, and the ability to image these processes, will enhance our understanding of the diseases treated by TCM and will provide new tools to diagnose and treat patients. Various TCM therapies including herbs and formulations, acupuncture and moxibustion, massage, Gua Sha, and diet therapy have been analyzed using positron emission tomography, single photon emission computed tomography, functional magnetic resonance imaging, and ultrasound and optical imaging. These imaging tools have kept pace with developments in molecular biology, nuclear medicine, and computer technology. We provide an overview of recent developments in demystifying ancient knowledge - such as the power of energy flow, blood flow meridians, and serial naturopathies - using modern technology to recognize these processes visually and vividly in the body. In TCM, treatment can be individualized in a holistic or systematic view that is consistent with molecular imaging technologies. Future studies might include using molecular imaging in conjunction with TCM to easily diagnose or monitor patients naturally and noninvasively. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Basha, Dudekula Althaf; Rosalie, Julian M; Somekawa, Hidetoshi; Miyawaki, Takashi; Singh, Alok; Tsuchiya, Koichi
2016-01-01
Microstructural investigation of extremely strained samples, such as severely plastically deformed (SPD) materials, by using conventional transmission electron microscopy techniques is very challenging due to strong image contrast resulting from the high defect density. In this study, low angle annular dark field (LAADF) imaging mode of scanning transmission electron microscope (STEM) has been applied to study the microstructure of a Mg-3Zn-0.5Y (at%) alloy processed by high pressure torsion (HPT). LAADF imaging advantages for observation of twinning, grain fragmentation, nucleation of recrystallized grains and precipitation on second phase particles in the alloy processed by HPT are highlighted. By using STEM-LAADF imaging with a range of incident angles, various microstructural features have been imaged, such as nanoscale subgrain structure and recrystallization nucleation even from the thicker region of the highly strained matrix. It is shown that nucleation of recrystallized grains starts at a strain level of revolution [Formula: see text] (earlier than detected by conventional bright field imaging). Occurrence of recrystallization of grains by nucleating heterogeneously on quasicrystalline particles is also confirmed. Minimizing all strain effects by LAADF imaging facilitated grain size measurement of [Formula: see text] nm in fully recrystallized HPT specimen after [Formula: see text].
Jia, Yuanyuan; He, Zhongshi; Gholipour, Ali; Warfield, Simon K
2016-11-01
In magnetic resonance (MR), hardware limitations, scanning time, and patient comfort often result in the acquisition of anisotropic 3-D MR images. Enhancing image resolution is desired but has been very challenging in medical image processing. Super-resolution reconstruction based on sparse representation and an overcomplete dictionary has lately been employed to address this problem; however, these methods require extra training sets, which may not always be available. This paper proposes a novel single anisotropic 3-D MR image upsampling method via sparse representation and an overcomplete dictionary that is trained from in-plane high resolution slices to upsample in the out-of-plane dimensions. The proposed method, therefore, does not require extra training sets. Extensive experiments, conducted on simulated and clinical brain MR images, show that the proposed method is more accurate than classical interpolation. When compared to a recent upsampling method based on the nonlocal means approach, the proposed method did not show improved results at low upsampling factors with simulated images, but generated comparable results with much better computational efficiency in clinical cases. Therefore, the proposed approach can be efficiently implemented and routinely used to upsample MR images in the out-of-plane views for radiologic assessment and post-acquisition processing.
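The sparse-representation building block (learning an overcomplete patch dictionary from the in-plane high-resolution slices) might look roughly like the following with scikit-learn; the paper's coupled coding along the out-of-plane directions is not reproduced here, and the patch size, number of atoms and sparsity level are illustrative assumptions:

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

def learn_patch_dictionary(hr_slices, patch_size=(8, 8), n_atoms=256):
    # Collect patches from the in-plane high-resolution slices.
    patches = np.concatenate(
        [extract_patches_2d(s.astype(np.float64), patch_size, max_patches=2000)
         for s in hr_slices])
    X = patches.reshape(patches.shape[0], -1)
    X -= X.mean(axis=1, keepdims=True)            # remove each patch's DC component
    # Learn an overcomplete dictionary; sparse codes are later obtained with OMP.
    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=5)
    dico.fit(X)
    return dico          # dico.transform(new_patches) yields their sparse codes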
Design of area array CCD image acquisition and display system based on FPGA
NASA Astrophysics Data System (ADS)
Li, Lei; Zhang, Ning; Li, Tianting; Pan, Yue; Dai, Yuming
2014-09-01
With the development of science and technology, the CCD (Charge-Coupled Device) has been widely applied in various fields and plays an important role in modern sensing systems, so researching a real-time image acquisition and display scheme based on a CCD device has great significance. This paper introduces an image data acquisition and display system for an area array CCD based on an FPGA. Several key technical challenges of the system are analyzed and solutions are put forward. The FPGA serves as the core processing unit of the system and controls the overall timing sequence. The ICX285AL area array CCD image sensor produced by SONY Corporation has been used in the system. The FPGA drives the area array CCD; an analog front end (AFE) then processes the CCD signal, including amplification, filtering, noise elimination and correlated double sampling (CDS). An AD9945 from ADI Corporation converts the analog signal to digital form. A Camera Link high-speed data transmission circuit was developed, the PC-side image acquisition software was designed, and real-time display of images was realized. Practical testing indicates that image acquisition and control are stable and reliable and that the system meets the project requirements.
PANDA: a pipeline toolbox for analyzing brain diffusion images.
Cui, Zaixu; Zhong, Suyu; Xu, Pengfei; He, Yong; Gong, Gaolang
2013-01-01
Diffusion magnetic resonance imaging (dMRI) is widely used in both scientific research and clinical practice in in-vivo studies of the human brain. While a number of post-processing packages have been developed, fully automated processing of dMRI datasets remains challenging. Here, we developed a MATLAB toolbox named "Pipeline for Analyzing braiN Diffusion imAges" (PANDA) for fully automated processing of brain diffusion images. The processing modules of a few established packages, including FMRIB Software Library (FSL), Pipeline System for Octave and Matlab (PSOM), Diffusion Toolkit and MRIcron, were employed in PANDA. Using any number of raw dMRI datasets from different subjects, in either DICOM or NIfTI format, PANDA can automatically perform a series of steps to process DICOM/NIfTI to diffusion metrics [e.g., fractional anisotropy (FA) and mean diffusivity (MD)] that are ready for statistical analysis at the voxel-level, the atlas-level and the Tract-Based Spatial Statistics (TBSS)-level and can finish the construction of anatomical brain networks for all subjects. In particular, PANDA can process different subjects in parallel, using multiple cores either in a single computer or in a distributed computing environment, thus greatly reducing the time cost when dealing with a large number of datasets. In addition, PANDA has a friendly graphical user interface (GUI), allowing the user to be interactive and to adjust the input/output settings, as well as the processing parameters. As an open-source package, PANDA is freely available at http://www.nitrc.org/projects/panda/. This novel toolbox is expected to substantially simplify the image processing of dMRI datasets and facilitate human structural connectome studies.
Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images.
Pereira, Sergio; Pinto, Adriano; Alves, Victor; Silva, Carlos A
2016-05-01
Among brain tumors, gliomas are the most common and aggressive, leading to a very short life expectancy in their highest grade. Thus, treatment planning is a key stage to improve the quality of life of oncological patients. Magnetic resonance imaging (MRI) is a widely used imaging technique to assess these tumors, but the large amount of data produced by MRI prevents manual segmentation in a reasonable time, limiting the use of precise quantitative measurements in clinical practice. So, automatic and reliable segmentation methods are required; however, the large spatial and structural variability among brain tumors makes automatic segmentation a challenging problem. In this paper, we propose an automatic segmentation method based on Convolutional Neural Networks (CNN), exploring small 3×3 kernels. The use of small kernels allows designing a deeper architecture and has a positive effect against overfitting, given the smaller number of weights in the network. We also investigated the use of intensity normalization as a pre-processing step, which, though not common in CNN-based segmentation methods, proved together with data augmentation to be very effective for brain tumor segmentation in MRI images. Our proposal was validated on the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013), obtaining simultaneously the first position for the complete, core, and enhancing regions in the Dice Similarity Coefficient metric (0.88, 0.83, 0.77) for the Challenge data set. It also obtained the overall first position on the online evaluation platform. We also participated in the on-site BRATS 2015 Challenge using the same model, obtaining second place, with Dice Similarity Coefficient metrics of 0.78, 0.65, and 0.75 for the complete, core, and enhancing regions, respectively.
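The Dice Similarity Coefficient quoted in these rankings compares a predicted binary mask against the ground-truth mask; a straightforward NumPy version for reference:

import numpy as np

def dice_coefficient(pred, truth):
    # Dice = 2 * |A intersect B| / (|A| + |B|) for binary masks A (prediction) and B (ground truth).
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0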
Autonomous target tracking of UAVs based on low-power neural network hardware
NASA Astrophysics Data System (ADS)
Yang, Wei; Jin, Zhanpeng; Thiem, Clare; Wysocki, Bryant; Shen, Dan; Chen, Genshe
2014-05-01
Detecting and identifying targets in unmanned aerial vehicle (UAV) images and videos have been challenging problems due to various types of image distortion. Moreover, the significantly high processing overhead of existing image/video processing techniques and the limited computing resources available on UAVs force most of the processing tasks to be performed by the ground control station (GCS) in an off-line manner. In order to achieve fast and autonomous target identification on UAVs, it is thus imperative to investigate novel processing paradigms that can fulfill the real-time processing requirements while fitting the size, weight, and power (SWaP) constrained environment. In this paper, we present a new autonomous target identification approach on UAVs, leveraging emerging neuromorphic hardware which is capable of massively parallel pattern recognition processing and demands only a limited level of power consumption. A proof-of-concept prototype was developed based on a micro-UAV platform (Parrot AR Drone) and the CogniMem neural network chip, for processing the video data acquired from a UAV camera on the fly. The aim of this study was to demonstrate the feasibility and potential of incorporating emerging neuromorphic hardware into next-generation UAVs and their superior performance and power advantages toward real-time, autonomous target tracking.
NASA Astrophysics Data System (ADS)
Merkel, Ronny; Breuhan, Andy; Hildebrandt, Mario; Vielhauer, Claus; Bräutigam, Anja
2012-06-01
In the field of crime scene forensics, current methods of evidence collection, such as the acquisition of shoe marks, tire impressions, palm prints or fingerprints, are in most cases still performed in an analogue way. For example, fingerprints are captured by powdering and sticky-tape lifting, ninhydrin bathing, or cyanoacrylate fuming and subsequent photographing. Images of the evidence are then further processed by forensic experts. With the upcoming use of new multimedia systems for the digital capturing and processing of crime scene traces in forensics, higher resolutions can be achieved, leading to a much better quality of forensic images. Furthermore, the fast and mostly automated preprocessing of such data using digital signal processing techniques is an emerging field. Also, by the optical and non-destructive lifting of forensic evidence, traces are not destroyed and therefore can be re-captured, e.g. by creating time series of a trace, to extract its aging behavior and perhaps determine the time the trace was left. However, such new methods and tools face different challenges, which need to be addressed before practical application in the field. Based on the example of fingerprint age determination, which has been an unresolved research challenge for forensic experts for decades, we evaluate the influence of different environmental conditions as well as different types of sweat and their implications for the capture sensor, preprocessing methods and feature extraction. We use a Chromatic White Light (CWL) sensor as an example of such a new optical and contactless measurement device and investigate the influence of 16 different environmental conditions, 8 different sweat types and 11 different preprocessing methods on the aging behavior of 48 fingerprint time series (2592 fingerprint scans in total). We show the challenges that arise for such new multimedia systems capturing and processing forensic evidence.
Chen, Qin; Hu, Xin; Wen, Long; Yu, Yan; Cumming, David R S
2016-09-01
The increasing miniaturization and resolution of image sensors bring challenges to conventional optical elements such as spectral filters and polarizers, the properties of which are determined mainly by the materials used, including dye polymers. Recent developments in spectral filtering and optical manipulating techniques based on nanophotonics have opened up the possibility of an alternative method to control light spectrally and spatially. By integrating these technologies into image sensors, it will become possible to achieve high compactness, improved process compatibility, robust stability and tunable functionality. In this Review, recent representative achievements on nanophotonic image sensors are presented and analyzed including image sensors with nanophotonic color filters and polarizers, metamaterial-based THz image sensors, filter-free nanowire image sensors and nanostructured-based multispectral image sensors. This novel combination of cutting edge photonics research and well-developed commercial products may not only lead to an important application of nanophotonics but also offer great potential for next generation image sensors beyond Moore's Law expectations. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Noise properties and task-based evaluation of diffraction-enhanced imaging
Brankov, Jovan G.; Saiz-Herranz, Alejandro; Wernick, Miles N.
2014-01-01
Diffraction-enhanced imaging (DEI) is an emerging x-ray imaging method that simultaneously yields x-ray attenuation and refraction images and holds great promise for soft-tissue imaging. DEI has been studied mainly using synchrotron sources, but efforts have been made to transition the technology to more practical implementations using conventional x-ray sources. The main technical challenge of this transition lies in the relatively lower x-ray flux obtained from conventional sources, leading to photon-limited data contaminated by Poisson noise. Several issues that must be understood in order to design and optimize DEI imaging systems with respect to noise performance are addressed. Specifically, we: (a) develop equations describing the noise properties of DEI images, (b) derive the conditions under which the DEI algorithm is statistically optimal, (c) characterize the imaging performance that can be obtained as measured by task-based metrics, and (d) consider image-processing steps that may be employed to mitigate noise effects. PMID:26158056
Noninvasive Molecular Imaging of Disease Activity in Atherosclerosis
Aikawa, Elena; Newby, David E.; Tarkin, Jason M.; Rudd, James H.F.; Narula, Jagat; Fayad, Zahi A.
2016-01-01
Major focus has been placed on the identification of vulnerable plaques as a means of improving the prediction of myocardial infarction. However, this strategy has recently been questioned on the basis that the majority of these individual coronary lesions do not in fact go on to cause clinical events. Attention is, therefore, shifting to alternative imaging modalities that might provide a more complete pan-coronary assessment of the atherosclerotic disease process. These include markers of disease activity with the potential to discriminate between patients with stable burnt-out disease that is no longer metabolically active and those with active atheroma, faster disease progression, and increased risk of infarction. This review will examine how novel molecular imaging approaches can provide such assessments, focusing on inflammation and microcalcification activity, the importance of these processes to coronary atherosclerosis, and the advantages and challenges posed by these techniques. PMID:27390335
Johnston-Peck, Aaron C; Winterstein, Jonathan P; Roberts, Alan D; DuChene, Joseph S; Qian, Kun; Sweeny, Brendan C; Wei, Wei David; Sharma, Renu; Stach, Eric A; Herzing, Andrew A
2016-03-01
Low-angle annular dark field (LAADF) scanning transmission electron microscopy (STEM) imaging is presented as a method that is sensitive to the oxidation state of cerium ions in CeO2 nanoparticles. This relationship was validated through electron energy loss spectroscopy (EELS), in situ measurements, as well as multislice image simulations. Static displacements caused by the increased ionic radius of Ce(3+) influence the electron channeling process and increase electron scattering to low angles while reducing scatter to high angles. This process manifests itself by reducing the high-angle annular dark field (HAADF) signal intensity while increasing the LAADF signal intensity in close proximity to Ce(3+) ions. This technique can supplement STEM-EELS and in so doing, relax the experimental challenges associated with acquiring oxidation state information at high spatial resolutions. Published by Elsevier B.V.
Real-time catheter localization and visualization using three-dimensional echocardiography
NASA Astrophysics Data System (ADS)
Kozlowski, Pawel; Bandaru, Raja Sekhar; D'hooge, Jan; Samset, Eigil
2017-03-01
Real-time three-dimensional transesophageal echocardiography (RT3D-TEE) is increasingly used during minimally invasive cardiac surgeries (MICS). In many cath labs, RT3D-TEE is already one of the requisite tools for image guidance during MICS. However, the visualization of the catheter is not always satisfactory, making 3D-TEE challenging to use as the only modality for guidance. We propose a novel technique for better visualization of the catheter along with the cardiac anatomy using TEE alone, exploiting both beamforming and post-processing methods. We extended our earlier method called Delay and Standard Deviation (DASD) beamforming to 3D in order to enhance specular reflections. The beamformed image was further post-processed by the Frangi filter to segment the catheter. Multi-variate visualization techniques enabled us to render both the standard tissue and the DASD beamformed image on a clinical ultrasound scanner simultaneously. A frame rate of 15 FPS was achieved.
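The Frangi-based catheter segmentation step can be prototyped with scikit-image; the sketch below applies the 2-D vesselness filter slice by slice to a beamformed volume and is only an approximation of the post-processing described above (the DASD beamforming and the multi-variate rendering are not shown):

import numpy as np
from skimage.filters import frangi

def enhance_catheter(volume):
    # volume: (n_slices, H, W) beamformed ultrasound volume.
    vol = volume.astype(np.float64)
    enhanced = np.stack([frangi(sl) for sl in vol])   # emphasise thin, tube-like structures
    return enhanced / (enhanced.max() + 1e-12)        # normalise for overlay rendering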
Segmentation-free image processing and analysis of precipitate shapes in 2D and 3D
NASA Astrophysics Data System (ADS)
Bales, Ben; Pollock, Tresa; Petzold, Linda
2017-06-01
Segmentation-based image analysis techniques are routinely employed for quantitative analysis of complex microstructures containing two or more phases. The primary advantage of these approaches is that spatial information on the distribution of phases is retained, enabling subjective judgements of the quality of the segmentation and subsequent analysis process. The downside is that computing micrograph segmentations with data from morphologically complex microstructures gathered with error-prone detectors is challenging and, if no special care is taken, the artifacts of the segmentation will make any subsequent analysis and conclusions uncertain. In this paper we demonstrate, using a two-phase nickel-base superalloy microstructure as a model system, a new methodology for analysis of precipitate shapes using a segmentation-free approach based on the histogram of oriented gradients feature descriptor, a classic tool in image analysis. The benefits of this methodology for analysis of microstructure in two and three dimensions are demonstrated.
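The histogram-of-oriented-gradients descriptor at the core of this segmentation-free approach is readily available in scikit-image; the cell and block sizes below are illustrative rather than the values used in the paper:

import numpy as np
from skimage.feature import hog

def micrograph_hog_descriptor(micrograph, orientations=9):
    # Describe a micrograph (or a region of one) by its distribution of gradient
    # orientations instead of by a segmented precipitate mask.
    return hog(micrograph,
               orientations=orientations,
               pixels_per_cell=(16, 16),
               cells_per_block=(2, 2),
               feature_vector=True)

# Descriptors from different micrographs can then be compared directly,
# e.g. via distances between their pooled orientation histograms.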
Sadeghi-Tehran, Pouria; Virlet, Nicolas; Sabermanesh, Kasra; Hawkesford, Malcolm J
2017-01-01
Accurately segmenting vegetation from the background within digital images is both a fundamental and a challenging task in phenotyping. The performance of traditional methods is satisfactory in homogeneous environments; however, performance decreases when applied to images acquired in dynamic field environments. In this paper, a multi-feature learning method is proposed to quantify vegetation growth in outdoor field conditions. The introduced technique is compared with the state-of-the-art and other learning methods on digital images. All methods are compared and evaluated with different environmental conditions and the following criteria: (1) comparison with ground-truth images, (2) variation along a day with changes in ambient illumination, (3) comparison with manual measurements and (4) an estimation of performance along the full life cycle of a wheat canopy. The method described is capable of coping with the environmental challenges faced in field conditions, with high levels of adaptiveness and without the need for adjusting a threshold for each digital image. The proposed method is also an ideal candidate to process a time series of phenotypic information throughout the crop growth acquired in the field. Moreover, the introduced method has the advantage that it is not limited to growth measurements only but can be applied to other applications such as identifying weeds, diseases, stress, etc.
Fetal MRI: A Technical Update with Educational Aspirations
Gholipour, Ali; Estroff, Judith A.; Barnewolt, Carol E.; Robertson, Richard L.; Grant, P. Ellen; Gagoski, Borjan; Warfield, Simon K.; Afacan, Onur; Connolly, Susan A.; Neil, Jeffrey J.; Wolfberg, Adam; Mulkern, Robert V.
2015-01-01
Fetal magnetic resonance imaging (MRI) examinations have become well-established procedures at many institutions and can serve as useful adjuncts to ultrasound (US) exams when diagnostic doubts remain after US. Due to fetal motion, however, fetal MRI exams are challenging and require the MR scanner to be used in a somewhat different mode than that employed for more routine clinical studies. Herein we review the techniques most commonly used, and those that are available, for fetal MRI with an emphasis on the physics of the techniques and how to deploy them to improve success rates for fetal MRI exams. By far the most common technique employed is single-shot T2-weighted imaging due to its excellent tissue contrast and relative immunity to fetal motion. Despite the significant challenges involved, however, many of the other techniques commonly employed in conventional neuro- and body MRI such as T1 and T2*-weighted imaging, diffusion and perfusion weighted imaging, as well as spectroscopic methods remain of interest for fetal MR applications. An effort to understand the strengths and limitations of these basic methods within the context of fetal MRI is made in order to optimize their use and facilitate implementation of technical improvements for the further development of fetal MR imaging, both in acquisition and post-processing strategies. PMID:26225129
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
2016-06-15
In recent years, steady progress has been made towards the implementation of MRI in external beam radiation therapy for processes ranging from treatment simulation to in-room guidance. Novel procedures relying mostly on MR data are currently implemented in the clinic. This session will cover topics such as (a) commissioning and quality control of the MR in-room imagers and simulators specific to RT, (b) treatment planning requirements, constraints and challenges when dealing with various MR data, (c) quantification of organ motion with an emphasis on treatment delivery guidance, and (d) MR-driven strategies for adaptive RT workflows. The content of the session was chosen to address both educational and practical key aspects of MR guidance. Learning Objectives: Good understanding of MR testing recommended for in-room MR imaging as well as image data validation for RT chain (e.g. image transfer, filtering for consistency, spatial accuracy, manipulation for task specific); Familiarity with MR-based planning procedures: motivation, core workflow requirements, current status, challenges; Overview of the current methods for the quantification of organ motion; Discussion on approaches for adaptive treatment planning and delivery. T. Stanescu - License agreement with Modus Medical Devices to develop a phantom for the quantification of MR image system-related distortions; T. Stanescu, N/A.
WE-H-207B-03: MRI Guidance in the Radiation Therapy Clinic: Site-Specific Discussions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shang, C.
2016-06-15
In recent years, steady progress has been made towards the implementation of MRI in external beam radiation therapy for processes ranging from treatment simulation to in-room guidance. Novel procedures relying mostly on MR data are currently implemented in the clinic. This session will cover topics such as (a) commissioning and quality control of the MR in-room imagers and simulators specific to RT, (b) treatment planning requirements, constraints and challenges when dealing with various MR data, (c) quantification of organ motion with an emphasis on treatment delivery guidance, and (d) MR-driven strategies for adaptive RT workflows. The content of the session was chosen to address both educational and practical key aspects of MR guidance. Learning Objectives: Good understanding of MR testing recommended for in-room MR imaging as well as image data validation for RT chain (e.g. image transfer, filtering for consistency, spatial accuracy, manipulation for task specific); Familiarity with MR-based planning procedures: motivation, core workflow requirements, current status, challenges; Overview of the current methods for the quantification of organ motion; Discussion on approaches for adaptive treatment planning and delivery. T. Stanescu - License agreement with Modus Medical Devices to develop a phantom for the quantification of MR image system-related distortions; T. Stanescu, N/A.
Deep Learning for Lowtextured Image Matching
NASA Astrophysics Data System (ADS)
Kniaz, V. V.; Fedorenko, V. V.; Fomin, N. A.
2018-05-01
Low-textured objects pose challenges for automatic 3D model reconstruction. Such objects are common in archeological applications of photogrammetry. Most of the common feature point descriptors fail to match local patches in featureless regions of an object. Hence, automatic documentation of the archeological process using Structure from Motion (SfM) methods is challenging. Nevertheless, such documentation is possible with the aid of a human operator. Deep learning-based descriptors have recently outperformed most of the common feature point descriptors. This paper is focused on the development of a new Wide Image Zone Adaptive Robust feature Descriptor (WIZARD) based on deep learning. We use a convolutional auto-encoder to compress discriminative features of a local patch into a descriptor code. We build a codebook to perform point matching on multiple images. The matching is performed using nearest neighbor search and a modified voting algorithm. We present a new "Multi-view Amphora" (Amphora) dataset for the evaluation of point matching algorithms. The dataset includes images of an Ancient Greek vase found on the Taman Peninsula in Southern Russia. The dataset provides color images, a ground truth 3D model, and a ground truth optical flow. We evaluated the WIZARD descriptor on the "Amphora" dataset to show that it outperforms the SIFT and SURF descriptors on complex patch pairs.
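A generic PyTorch sketch of the underlying idea, compressing a local patch into a descriptor code with a convolutional auto-encoder and matching codes by nearest-neighbour search; this is not the WIZARD architecture or its modified voting algorithm, and the patch size and code dimension are assumptions:

import torch
import torch.nn as nn

class PatchAutoencoder(nn.Module):
    # Auto-encoder for 32x32 grayscale patches; the bottleneck vector is the descriptor code.
    def __init__(self, code_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, code_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 32 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (32, 8, 8)),
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # 8 -> 16
            nn.ConvTranspose2d(16, 1, 2, stride=2),                # 16 -> 32
        )

    def forward(self, x):                    # x: (batch, 1, 32, 32) patches
        code = self.encoder(x)
        return self.decoder(code), code

# Matching sketch: descriptor codes from one image are compared against a codebook of
# codes from another image by nearest-neighbour search on the code vectors, e.g.
# dists = torch.cdist(codes_a, codes_b); matches = dists.argmin(dim=1)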
Koyuncu, Hasan; Ceylan, Rahime
2018-04-01
Dynamic Contrast-Enhanced Computed Tomography (DCE-CT) is applied to observe adrenal tumours in detail by making use of the contrast agent, which generally brings the tumour to the forefront. However, DCE-CT images are generally affected by noise that arises from the trade-off between radiation dose and image quality. This makes accurate tumour segmentation challenging. In CT images, most of the noise is similar to Gaussian noise. In this study, arterial phase CT images containing adrenal tumours are utilised, and Gaussian noise is removed with fourteen different techniques reported in the literature in order to identify the best denoising process. The Block Matching and 3D Filtering (BM3D) algorithm achieves reliable Peak Signal-to-Noise Ratios (PSNR) and resolves the difficulties of similar techniques when addressing different levels of noise. Furthermore, BM3D obtains the best mean PSNR values among the first five techniques. BM3D also outperforms the other techniques in Total Statistical Success (TSS), CPU time and computational cost. Consequently, it prepares clearer arterial phase CT images for the next step, the segmentation of adrenal tumours. Copyright © 2017 Elsevier Ltd. All rights reserved.
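The PSNR criterion used to rank the denoising techniques is straightforward to compute; a NumPy version for reference:

import numpy as np

def psnr(reference, denoised, data_range=None):
    # Peak Signal-to-Noise Ratio in dB between a reference image and its denoised version.
    reference = reference.astype(np.float64)
    denoised = denoised.astype(np.float64)
    if data_range is None:
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - denoised) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10((data_range ** 2) / mse)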
WE-H-207B-04: Strategies for Adaptive RT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, O.
2016-06-15
In recent years, steady progress has been made towards the implementation of MRI in external beam radiation therapy for processes ranging from treatment simulation to in-room guidance. Novel procedures relying mostly on MR data are currently implemented in the clinic. This session will cover topics such as (a) commissioning and quality control of the MR in-room imagers and simulators specific to RT, (b) treatment planning requirements, constraints and challenges when dealing with various MR data, (c) quantification of organ motion with an emphasis on treatment delivery guidance, and (d) MR-driven strategies for adaptive RT workflows. The content of the session was chosen to address both educational and practical key aspects of MR guidance. Learning Objectives: Good understanding of MR testing recommended for in-room MR imaging as well as image data validation for RT chain (e.g. image transfer, filtering for consistency, spatial accuracy, manipulation for task specific); Familiarity with MR-based planning procedures: motivation, core workflow requirements, current status, challenges; Overview of the current methods for the quantification of organ motion; Discussion on approaches for adaptive treatment planning and delivery. T. Stanescu - License agreement with Modus Medical Devices to develop a phantom for the quantification of MR image system-related distortions; T. Stanescu, N/A.
In vivo molecular and genomic imaging: new challenges for imaging physics.
Cherry, Simon R
2004-02-07
The emerging and rapidly growing field of molecular and genomic imaging is providing new opportunities to directly visualize the biology of living organisms. By combining our growing knowledge regarding the role of specific genes and proteins in human health and disease, with novel ways to target these entities in a manner that produces an externally detectable signal, it is becoming increasingly possible to visualize and quantify specific biological processes in a non-invasive manner. All the major imaging modalities are contributing to this new field, each with its unique mechanisms for generating contrast and trade-offs in spatial resolution, temporal resolution and sensitivity with respect to the biological process of interest. Much of the development in molecular imaging is currently being carried out in animal models of disease, but as the field matures and with the development of more individualized medicine and the molecular targeting of new therapeutics, clinical translation is inevitable and will likely forever change our approach to diagnostic imaging. This review provides an introduction to the field of molecular imaging for readers who are not experts in the biological sciences and discusses the opportunities to apply a broad range of imaging technologies to better understand the biology of human health and disease. It also provides a brief review of the imaging technology (particularly for x-ray, nuclear and optical imaging) that is being developed to support this new field.
TOPICAL REVIEW: In vivo molecular and genomic imaging: new challenges for imaging physics
NASA Astrophysics Data System (ADS)
Cherry, Simon R.
2004-02-01
The emerging and rapidly growing field of molecular and genomic imaging is providing new opportunities to directly visualize the biology of living organisms. By combining our growing knowledge regarding the role of specific genes and proteins in human health and disease, with novel ways to target these entities in a manner that produces an externally detectable signal, it is becoming increasingly possible to visualize and quantify specific biological processes in a non-invasive manner. All the major imaging modalities are contributing to this new field, each with its unique mechanisms for generating contrast and trade-offs in spatial resolution, temporal resolution and sensitivity with respect to the biological process of interest. Much of the development in molecular imaging is currently being carried out in animal models of disease, but as the field matures and with the development of more individualized medicine and the molecular targeting of new therapeutics, clinical translation is inevitable and will likely forever change our approach to diagnostic imaging. This review provides an introduction to the field of molecular imaging for readers who are not experts in the biological sciences and discusses the opportunities to apply a broad range of imaging technologies to better understand the biology of human health and disease. It also provides a brief review of the imaging technology (particularly for x-ray, nuclear and optical imaging) that is being developed to support this new field.
Liang, Yicheng; Peng, Hao
2015-02-07
Depth-of-interaction (DOI) poses a major challenge for a PET system to achieve uniform spatial resolution across the field-of-view, particularly for small animal and organ-dedicated PET systems. In this work, we implemented an analytical method to model system matrix for resolution recovery, which was then incorporated in PET image reconstruction on a graphical processing unit platform, due to its parallel processing capacity. The method utilizes the concepts of virtual DOI layers and multi-ray tracing to calculate the coincidence detection response function for a given line-of-response. The accuracy of the proposed method was validated for a small-bore PET insert to be used for simultaneous PET/MR breast imaging. In addition, the performance comparisons were studied among the following three cases: 1) no physical DOI and no resolution modeling; 2) two physical DOI layers and no resolution modeling; and 3) no physical DOI design but with a different number of virtual DOI layers. The image quality was quantitatively evaluated in terms of spatial resolution (full-width-half-maximum and position offset), contrast recovery coefficient and noise. The results indicate that the proposed method has the potential to be used as an alternative to other physical DOI designs and achieve comparable imaging performances, while reducing detector/system design cost and complexity.
Ober, Christopher P
Second-year veterinary students are often challenged by concepts in veterinary radiology, including the fundamentals of image quality and generation of differential lists. Four card games were developed to provide veterinary students with a supplemental means of learning about radiographic image quality and differential diagnoses in urogenital imaging. Students played these games and completed assessments of their subject knowledge before and after playing. The hypothesis was that playing each game would improve students' understanding of the topic area. For each game, students who played the game performed better on the post-test than students who did not play that game (all p<.01). For three of the four games, students who played each respective game demonstrated significant improvement in scores between the pre-test and the post-test (p<.002). The majority of students expressed that the games were both helpful and enjoyable. Educationally focused games can help students learn classroom and laboratory material. However, game design is important, as the game using the most passive learning process also demonstrated the weakest results. In addition, based on participants' comments, the games were very useful in improving student engagement in the learning process. Thus, use of games in the classroom and laboratory setting seems to benefit the learning process.
Computational intelligence for target assessment in Parkinson's disease
NASA Astrophysics Data System (ADS)
Micheli-Tzanakou, Evangelia; Hamilton, J. L.; Zheng, J.; Lehman, Richard M.
2001-11-01
Recent advances in image and signal processing have created a new, challenging environment for biomedical engineers. Methods that were developed for other fields are now finding fertile ground in biomedicine, especially in the analysis of bio-signals and in the understanding of images. More and more, these methods are used in the operating room, helping surgeons, and in the physician's office as aids for diagnostic purposes. Neural network (NN) research, on the other hand, has come a long way in the past decade. NNs now consist of many thousands of highly interconnected processing elements that can encode, store and recall relationships between different patterns by altering the weighting coefficients of inputs in a systematic way. Although they can generate reasonable outputs from unknown input patterns and can tolerate a great deal of noise, they are very slow when run on a serial machine. We have used advanced signal processing and innovative image processing methods along with computational intelligence for diagnostic purposes and as visualization aids inside and outside the operating room. Applications discussed include EEGs and field potentials in Parkinson's disease, along with 3D reconstructions of MR or fMR brain images of Parkinson's patients that are currently used in the operating room for pallidotomies and deep brain stimulation (DBS).
NASA Astrophysics Data System (ADS)
Bélanger, Erik; Crépeau, Joël; Laffray, Sophie; Vallée, Réal; De Koninck, Yves; Côté, Daniel
2012-02-01
In vivo imaging of cellular dynamics can be dramatically enabling to understand the pathophysiology of nervous system diseases. To fully exploit the power of this approach, the main challenges have been to minimize invasiveness and maximize the number of concurrent optical signals that can be combined to probe the interplay between multiple cellular processes. Label-free coherent anti-Stokes Raman scattering (CARS) microscopy, for example, can be used to follow demyelination in neurodegenerative diseases or after trauma, but myelin imaging alone is not sufficient to understand the complex sequence of events that leads to the appearance of lesions in the white matter. A commercially available microendoscope is used here to achieve minimally invasive, video-rate multimodal nonlinear imaging of cellular processes in live mouse spinal cord. The system allows for simultaneous CARS imaging of myelin sheaths and two-photon excitation fluorescence microendoscopy of microglial cells and axons. Morphometric data extraction at high spatial resolution is also described, with a technique for reducing motion-related imaging artifacts. Despite its small diameter, the microendoscope enables high speed multimodal imaging over wide areas of tissue, yet at resolution sufficient to quantify subtle differences in myelin thickness and microglial motility.
Unveiling molecular events in the brain by noninvasive imaging.
Klohs, Jan; Rudin, Markus
2011-10-01
Neuroimaging allows researchers and clinicians to noninvasively assess structure and function of the brain. With the advances of imaging modalities such as magnetic resonance, nuclear, and optical imaging; the design of target-specific probes; and/or the introduction of reporter gene assays, these technologies are now capable of visualizing cellular and molecular processes in vivo. Undoubtedly, the system biological character of molecular neuroimaging, which allows for the study of molecular events in the intact organism, will enhance our understanding of physiology and pathophysiology of the brain and improve our ability to diagnose and treat diseases more specifically. Technical/scientific challenges to be faced are the development of highly sensitive imaging modalities, the design of specific imaging probe molecules capable of penetrating the CNS and reporting on endogenous cellular and molecular processes, and the development of tools for extracting quantitative, biologically relevant information from imaging data. Today, molecular neuroimaging is still an experimental approach with limited clinical impact; this is expected to change within the next decade. This article provides an overview of molecular neuroimaging approaches with a focus on rodent studies documenting the exploratory state of the field. Concepts are illustrated by discussing applications related to the pathophysiology of Alzheimer's disease.
Development of the Science Data System for the International Space Station Cold Atom Lab
NASA Technical Reports Server (NTRS)
van Harmelen, Chris; Soriano, Melissa A.
2015-01-01
Cold Atom Laboratory (CAL) is a facility that will enable scientists to study ultra-cold quantum gases in a microgravity environment on the International Space Station (ISS) beginning in 2016. The primary science data for each experiment consists of two images taken in quick succession. The first image is of the trapped cold atoms and the second image is of the background. The two images are subtracted to obtain optical density. These raw Level 0 atom and background images are processed into the Level 1 optical density data product, and then into the Level 2 data products: atom number, Magneto-Optical Trap (MOT) lifetime, magnetic chip-trap atom lifetime, and condensate fraction. These products can also be used as diagnostics of the instrument health. With experiments being conducted for 8 hours every day, the amount of data being generated poses many technical challenges, such as downlinking and managing the required data volume. A parallel processing design is described, implemented, and benchmarked. In addition to optimizing the data pipeline, accuracy and speed in producing the Level 1 and 2 data products is key. Algorithms for feature recognition are explored, facilitating image cropping and accurate atom number calculations.
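To make the Level 0 to Level 2 chain concrete, the sketch below converts an atom/background image pair into optical density and integrates it into an atom number. The abstract only states that the two images are subtracted; the logarithmic absorption-imaging relation and the cross-section value used here are assumptions for illustration, not details of the CAL pipeline.

```python
import numpy as np

def optical_density(atom_img, background_img, eps=1e-6):
    """Level 0 -> Level 1: convert an atom/background image pair to optical density.

    The standard absorption-imaging relation OD = -ln(I_atom / I_background)
    is assumed here; the abstract only says the images are "subtracted".
    """
    atom = np.asarray(atom_img, dtype=float)
    bg = np.asarray(background_img, dtype=float)
    ratio = np.clip(atom, eps, None) / np.clip(bg, eps, None)
    return -np.log(ratio)

def atom_number(od, pixel_area_m2, sigma0=2.9e-13):
    """Level 1 -> Level 2: integrate optical density to a total atom number.

    sigma0 is a hypothetical absorption cross-section in m^2; the real value
    depends on the atomic species and the imaging light.
    """
    return float(od.sum() * pixel_area_m2 / sigma0)
```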
Multi-scale Morphological Image Enhancement of Chest Radiographs by a Hybrid Scheme.
Alavijeh, Fatemeh Shahsavari; Mahdavi-Nasab, Homayoun
2015-01-01
Chest radiography is a common diagnostic imaging test, which contains an enormous amount of information about a patient. However, its interpretation is highly challenging. The accuracy of the diagnostic process is greatly influenced by image processing algorithms; hence enhancement of the images is indispensable in order to improve visibility of the details. This paper aims at improving radiograph parameters such as contrast, sharpness, noise level, and brightness to enhance chest radiographs, making use of a triangulation method. Here, contrast limited adaptive histogram equalization technique and noise suppression are simultaneously performed in wavelet domain in a new scheme, followed by morphological top-hat and bottom-hat filtering. A unique implementation of morphological filters allows for adjustment of the image brightness and significant enhancement of the contrast. The proposed method is tested on chest radiographs from Japanese Society of Radiological Technology database. The results are compared with conventional enhancement techniques such as histogram equalization, contrast limited adaptive histogram equalization, Retinex, and some recently proposed methods to show its strengths. The experimental results reveal that the proposed method can remarkably improve the image contrast while keeping the sensitive chest tissue information so that radiologists might have a more precise interpretation.
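A minimal sketch of the individual ingredients named above (CLAHE, wavelet-domain noise suppression, and morphological top-hat/bottom-hat filtering) is given below using OpenCV and scikit-image. It is a generic pipeline with illustrative parameter values, not the authors' hybrid triangulation scheme.

```python
import cv2
import numpy as np
from skimage.restoration import denoise_wavelet

def enhance_radiograph(img_u8, clip_limit=2.0, tile=(8, 8), se_size=15):
    """Generic enhancement sketch: CLAHE -> wavelet denoising -> top/bottom-hat."""
    # Contrast limited adaptive histogram equalization.
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    eq = clahe.apply(img_u8)

    # Wavelet-domain noise suppression (BayesShrink soft thresholding).
    den = denoise_wavelet(eq / 255.0, method='BayesShrink', mode='soft',
                          rescale_sigma=True)
    den = (den * 255).astype(np.uint8)

    # Morphological top-hat (bright details) and bottom-hat (dark details).
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (se_size, se_size))
    tophat = cv2.morphologyEx(den, cv2.MORPH_TOPHAT, se)
    bottomhat = cv2.morphologyEx(den, cv2.MORPH_BLACKHAT, se)

    # Add bright details and subtract dark details to sharpen local contrast.
    return cv2.add(cv2.subtract(den, bottomhat), tophat)
```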
Noise Estimation and Quality Assessment of Gaussian Noise Corrupted Images
NASA Astrophysics Data System (ADS)
Kamble, V. M.; Bhurchandi, K.
2018-03-01
Estimating the exact quantity of noise present in an image, and the quality of an image in the absence of a reference image, is a challenging task. We propose a near-perfect noise estimation method and a no-reference image quality assessment method for images corrupted by Gaussian noise. The proposed methods obtain an initial estimate of the noise standard deviation present in an image using the median of wavelet transform coefficients and then obtain a near-exact estimate using curve fitting. The proposed noise estimation method provides the estimate of noise within an average error of ±4%. For quality assessment, this noise estimate is mapped to fit the Differential Mean Opinion Score (DMOS) using a nonlinear function. The proposed methods require minimal training and yield the noise estimate and image quality score. Images from the Laboratory for Image and Video Processing (LIVE) database and the Computational Perception and Image Quality (CSIQ) database are used for validation of the proposed quality assessment method. Experimental results show that the performance of the proposed quality assessment method is on par with existing no-reference image quality assessment metrics for Gaussian noise corrupted images.
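The initial noise estimate described above, based on the median of wavelet detail coefficients, corresponds to the classic MAD estimator sketched below; the curve-fitting refinement and the nonlinear mapping to DMOS are not reproduced.

```python
import numpy as np
import pywt

def estimate_gaussian_noise_sigma(img):
    """Initial sigma estimate from the diagonal detail coefficients of a
    single-level 2-D DWT (MAD estimator); only the first step of the method."""
    img = np.asarray(img, dtype=float)
    _, (_, _, cD) = pywt.dwt2(img, 'db1')      # (cA, (cH, cV, cD))
    return np.median(np.abs(cD)) / 0.6745       # MAD -> sigma for Gaussian noise
```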
Kimori, Yoshitaka; Baba, Norio; Morone, Nobuhiro
2010-07-08
A reliable extraction technique for resolving multiple spots in light or electron microscopic images is essential in investigations of the spatial distribution and dynamics of specific proteins inside cells and tissues. Currently, automatic spot extraction and characterization in complex microscopic images poses many challenges to conventional image processing methods. A new method to extract closely located, small target spots from biological images is proposed. This method starts with a simple but practical operation based on the extended morphological top-hat transformation to subtract an uneven background. The core of our novel approach is the following: first, the original image is rotated in an arbitrary direction and each rotated image is opened with a single straight line-segment structuring element. Second, the opened images are unified and then subtracted from the original image. To evaluate these procedures, model images of simulated spots with closely located targets were created and the efficacy of our method was compared to that of conventional morphological filtering methods. The results showed that our method performed better. Spots in real microscope images can also be quantified, confirming that the method is applicable in practice. Our method achieved effective spot extraction under various image conditions, including aggregated target spots, poor signal-to-noise ratio, and large variations in the background intensity. Furthermore, it has no restrictions with respect to the shape of the extracted spots. The features of our method allow its broad application in biological and biomedical image information analysis.
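The rotated line-opening idea can be sketched as follows. Rotating the image rather than the structuring element, the simple background-removal step, and all parameter values are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy import ndimage as ndi

def extract_spots(img, line_len=9, n_angles=12, bg_size=25):
    """Union of line openings over orientations, subtracted from the image,
    so that small compact spots (removed by every opening) are what remain."""
    img = np.asarray(img, dtype=float)

    # Rough uneven-background removal (stand-in for the extended top-hat step).
    background = ndi.grey_opening(img, size=(bg_size, bg_size))
    flat = img - background

    line = np.ones((1, line_len))               # horizontal line structuring element
    union = np.zeros_like(flat)
    for angle in np.linspace(0.0, 180.0, n_angles, endpoint=False):
        rot = ndi.rotate(flat, angle, reshape=False, order=1, mode='nearest')
        opened = ndi.grey_opening(rot, footprint=line)
        back = ndi.rotate(opened, -angle, reshape=False, order=1, mode='nearest')
        union = np.maximum(union, back)         # unify the openings

    return np.clip(flat - union, 0.0, None)     # small spots survive
```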
Integrated circuits for volumetric ultrasound imaging with 2-D CMUT arrays.
Bhuyan, Anshuman; Choe, Jung Woo; Lee, Byung Chul; Wygant, Ira O; Nikoozadeh, Amin; Oralkan, Ömer; Khuri-Yakub, Butrus T
2013-12-01
Real-time volumetric ultrasound imaging systems require transmit and receive circuitry to generate ultrasound beams and process received echo signals. The complexity of building such a system is high because the front-end electronics must be very close to the transducer. A large number of elements also need to be interfaced to the back-end system, and image processing of a large dataset could affect the imaging volume rate. In this work, we present a 3-D imaging system using capacitive micromachined ultrasonic transducer (CMUT) technology that addresses many of the challenges in building such a system. We demonstrate two approaches to integrating the transducer and the front-end electronics. The transducer is a 5-MHz CMUT array with an 8 mm × 8 mm aperture size. The aperture consists of 1024 elements (32 × 32) with an element pitch of 250 μm. An integrated circuit (IC) consists of a transmit beamformer and receive circuitry to improve the noise performance of the overall system. The assembly was interfaced with an FPGA and a back-end system (comprising a data acquisition system and a PC). The FPGA provided the digital I/O signals for the IC, and the back-end system was used to process the received RF echo data (from the IC) and reconstruct the volume image using a phased array imaging approach. Imaging experiments were performed using wire and spring targets, a ventricle model and a human prostate. Real-time volumetric images were captured at 5 volumes per second and are presented in this paper.
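For orientation, a minimal software sketch of receive delay-and-sum beamforming for one focal point is shown below. The actual system performs beamforming in the IC/FPGA hardware; array geometry, apodization and transmit delays are omitted here and the function is only an illustration of the phased-array principle.

```python
import numpy as np

def delay_and_sum(rf, elem_xyz, focus_xyz, fs, c=1540.0):
    """Receive-only delay-and-sum for a single focal point.

    rf        : (n_elements, n_samples) received RF data
    elem_xyz  : (n_elements, 3) element positions in metres
    focus_xyz : (3,) focal point in metres
    fs        : sampling frequency in Hz
    c         : speed of sound in m/s
    """
    dist = np.linalg.norm(elem_xyz - focus_xyz[None, :], axis=1)  # receive path lengths
    delays = dist / c                                             # seconds
    idx = np.clip(np.round(delays * fs).astype(int), 0, rf.shape[1] - 1)
    return float(rf[np.arange(rf.shape[0]), idx].sum())           # coherent sum
```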
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdulbaqi, Hayder Saad; Department of Physics, College of Education, University of Al-Qadisiya, Al-Qadisiya; Jafri, Mohd Zubir Mat
Brain tumors are an abnormal growth of tissues in the brain. They may arise in people of any age. They must be detected early, diagnosed accurately, monitored carefully, and treated effectively in order to optimize patient outcomes regarding both survival and quality of life. Manual segmentation of brain tumors from CT scan images is a challenging and time-consuming task. Accurate detection of brain tumor size and location plays a vital role in the successful diagnosis and treatment of tumors. Brain tumor detection is considered a challenging mission in medical image processing. The aim of this paper is to introduce a scheme for tumor detection in CT scan images using two different techniques: Hidden Markov Random Fields (HMRF) and Fuzzy C-means (FCM). The proposed method developed in this research constructs a hybrid method combining HMRF and thresholding. These methods have been applied to 4 different patient data sets. The comparison among these methods shows that the proposed method gives good results for brain tissue detection, and is more robust and effective compared with the FCM technique.
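The FCM baseline mentioned above can be sketched with the standard fuzzy c-means updates on voxel intensities. This illustrates only FCM, not the HMRF/threshold hybrid proposed in the paper, and the cluster count and iteration budget are illustrative.

```python
import numpy as np

def fuzzy_c_means(values, n_clusters=3, m=2.0, n_iter=50, seed=0):
    """Plain fuzzy c-means on 1-D intensities (e.g. flattened CT voxels)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(values, dtype=float).ravel()
    centers = rng.choice(x, size=n_clusters, replace=False)

    for _ in range(n_iter):
        # Distances of every sample to every cluster centre.
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        # Membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
        u = 1.0 / ratio.sum(axis=2)
        # Centre update with fuzzified memberships.
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)

    return centers, u   # hard labels via u.argmax(axis=1)
```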
NASA Astrophysics Data System (ADS)
Koch, Holger; Kägeler, Christian; Otto, Andreas; Schmidt, Michael
Welding of zinc-coated sheets in zero-gap configuration is of eminent interest for the automotive industry. This laser welding process would enable the automotive industry to build auto bodies with high durability in a simple manufacturing process. Today, good welding results can only be achieved by expensive constructive measures, such as clamping devices, to ensure a defined gap. Welding in zero-gap configuration is a major challenge because of the vaporised zinc expelled from the interface between the two sheets. To find appropriate welding parameters for influencing the keyhole and melt pool dynamics, a three-dimensional simulation and a high-speed imaging system for laser keyhole welding have been developed. The obtained results help to understand the melt pool perturbation caused by vaporised zinc.
CT imaging spectrum of infiltrative renal diseases.
Ballard, David H; De Alba, Luis; Migliaro, Matias; Previgliano, Carlos H; Sangster, Guillermo P
2017-11-01
Most renal lesions replace the renal parenchyma as a focal space-occupying mass with borders distinguishing the mass from normal parenchyma. However, some renal lesions exhibit interstitial infiltration-a process that permeates the renal parenchyma by using the normal renal architecture for growth. These infiltrative lesions frequently show nonspecific patterns that lead to little or no contour deformity and have ill-defined borders on CT, making detection and diagnosis challenging. The purpose of this pictorial essay is to describe the CT imaging findings of various conditions that may manifest as infiltrative renal lesions.
Experimental investigation of orbitally shaken bioreactor hydrodynamics
NASA Astrophysics Data System (ADS)
Reclari, Martino; Dreyer, Matthieu; Farhat, Mohamed
2010-11-01
The growing interest in the use of orbitally shaken bioreactors for mammalian cells cultivation raises challenging hydrodynamic issues. Optimizations of mixing and oxygenation, as well as similarity relations between different culture scales are still lacking. In the present study, we investigated the relation between the shape of the free surface, the mixing process and the velocity fields, using specific image processing of high speed visualization and Laser Doppler velocimetry. Moreover, similarity parameters were identified for scale-up purposes.
Phase retrieval by coherent modulation imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Fucai; Chen, Bo; Morrison, Graeme R.
Phase retrieval is a long-standing problem in imaging when only the intensity of the wavefield can be recorded. Coherent diffraction imaging (CDI) is a lensless technique that uses iterative algorithms to recover amplitude and phase contrast images from diffraction intensity data. For general samples, phase retrieval from a single diffraction pattern has been an algorithmic and experimental challenge. Here we report a method of phase retrieval that uses a known modulation of the sample exit-wave. This coherent modulation imaging (CMI) method removes inherent ambiguities of CDI and uses a reliable, rapidly converging iterative algorithm involving three planes. It works for extended samples, does not require tight support for convergence, and relaxes dynamic range requirements on the detector. CMI provides a robust method for imaging in materials and biological science, while its single-shot capability will benefit the investigation of dynamical processes with pulsed sources, such as X-ray free electron laser.
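For readers unfamiliar with iterative phase retrieval, the sketch below shows a generic two-plane modulus-projection (Gerchberg-Saxton style) loop. It illustrates only the class of algorithm; it is not the three-plane CMI algorithm with a known modulator described above, which converges more reliably and does not need tight support.

```python
import numpy as np

def gerchberg_saxton(measured_amplitude, support, n_iter=200, seed=0):
    """Generic two-plane iterative phase retrieval (illustration only).

    measured_amplitude : square root of the recorded far-field intensity
    support            : boolean array, True where the object may be non-zero
    """
    rng = np.random.default_rng(seed)
    phase = np.exp(1j * rng.uniform(0, 2 * np.pi, measured_amplitude.shape))
    field = measured_amplitude * phase

    for _ in range(n_iter):
        obj = np.fft.ifft2(field)                  # back to the object plane
        obj = np.where(support, obj, 0.0)          # support constraint
        field = np.fft.fft2(obj)                   # forward to the detector plane
        field = measured_amplitude * np.exp(1j * np.angle(field))  # modulus constraint

    return obj
```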
Brain Imaging in Alzheimer Disease
Johnson, Keith A.; Fox, Nick C.; Sperling, Reisa A.; Klunk, William E.
2012-01-01
Imaging has played a variety of roles in the study of Alzheimer disease (AD) over the past four decades. Initially, computed tomography (CT) and then magnetic resonance imaging (MRI) were used diagnostically to rule out other causes of dementia. More recently, a variety of imaging modalities including structural and functional MRI and positron emission tomography (PET) studies of cerebral metabolism with fluoro-deoxy-d-glucose (FDG) and amyloid tracers such as Pittsburgh Compound-B (PiB) have shown characteristic changes in the brains of patients with AD, and in prodromal and even presymptomatic states, that can help rule in the AD pathophysiological process. No one imaging modality can serve all purposes, as each has unique strengths and weaknesses. These modalities and their particular utilities are discussed in this article. The challenge for the future will be to combine imaging biomarkers to most efficiently facilitate diagnosis, disease staging, and, most importantly, development of effective disease-modifying therapies. PMID:22474610
Miniature Compressive Ultra-spectral Imaging System Utilizing a Single Liquid Crystal Phase Retarder
NASA Astrophysics Data System (ADS)
August, Isaac; Oiknine, Yaniv; Abuleil, Marwan; Abdulhalim, Ibrahim; Stern, Adrian
2016-03-01
Spectroscopic imaging has been proved to be an effective tool for many applications in a variety of fields, such as biology, medicine, agriculture, remote sensing and industrial process inspection. However, due to the demand for high spectral and spatial resolution it became extremely challenging to design and implement such systems in a miniaturized and cost effective manner. Using a Compressive Sensing (CS) setup based on a single variable Liquid Crystal (LC) retarder and a sensor array, we present an innovative Miniature Ultra-Spectral Imaging (MUSI) system. The LC retarder acts as a compact wide band spectral modulator. Within the framework of CS, a sequence of spectrally modulated images is used to recover ultra-spectral image cubes. Using the presented compressive MUSI system, we demonstrate the reconstruction of gigapixel spatio-spectral image cubes from spectral scanning shots numbering an order of magnitude less than would be required using conventional systems.
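The compressive recovery step can be illustrated with a generic l1-regularized reconstruction. In the toy sketch below the sensing matrix is random rather than the LC retarder's actual spectral transmission curves, and ISTA stands in for whatever solver the authors used; sizes and values are placeholders.

```python
import numpy as np

def ista(Phi, y, lam=0.05, n_iter=500):
    """Iterative shrinkage-thresholding for min_x 0.5*||Phi x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

# Toy usage: 30 modulated measurements of a 120-bin sparse spectrum.
rng = np.random.default_rng(1)
spectrum = np.zeros(120); spectrum[[10, 55, 90]] = [1.0, 0.7, 0.4]
Phi = rng.standard_normal((30, 120)) / np.sqrt(30)   # stand-in for LC modulations
y = Phi @ spectrum
recovered = ista(Phi, y)
```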
Practical considerations of image analysis and quantification of signal transduction IHC staining.
Grunkin, Michael; Raundahl, Jakob; Foged, Niels T
2011-01-01
The dramatic increase in computer processing power, in combination with the availability of high-quality digital cameras during the last 10 years, has prepared the ground for quantitative microscopy based on digital image analysis. With the present introduction of robust scanners for whole slide imaging in both research and routine, the benefits of automation and objectivity in the analysis of tissue sections will be even more obvious. For in situ studies of signal transduction, the combination of tissue microarrays, immunohistochemistry, digital imaging, and quantitative image analysis will be central operations. However, immunohistochemistry is a multistep procedure with many technical pitfalls that lead to intra- and interlaboratory variability of its outcome. The resulting variations in staining intensity and disruption of the original morphology are an extra challenge for the image analysis software, which should therefore preferably be dedicated to the detection and quantification of histomorphometrical end points.
Bacterial cell identification in differential interference contrast microscopy images.
Obara, Boguslaw; Roberts, Mark A J; Armitage, Judith P; Grau, Vicente
2013-04-23
Microscopy image segmentation lays the foundation for shape analysis, motion tracking, and classification of biological objects. Despite its importance, automated segmentation remains challenging for several widely used non-fluorescence, interference-based microscopy imaging modalities, for example differential interference contrast microscopy, which plays an important role in modern bacterial cell biology. Advances in the field therefore require the development of tools, technologies and workflows to extract and exploit information from interference-based imaging data so as to achieve new fundamental biological insights and understanding. We have developed and evaluated a high-throughput image analysis and processing approach to detect and characterize bacterial cells and chemotaxis proteins. Its performance was evaluated using differential interference contrast and fluorescence microscopy images of Rhodobacter sphaeroides. The results demonstrate that the proposed approach provides a fast and robust method for detection and analysis of the spatial relationship between bacterial cells and their chemotaxis proteins.
Technologies for imaging neural activity in large volumes
Ji, Na; Freeman, Jeremy; Smith, Spencer L.
2017-01-01
Neural circuitry has evolved to form distributed networks that act dynamically across large volumes. Collecting data from individual planes, conventional microscopy cannot sample circuitry across large volumes at the temporal resolution relevant to neural circuit function and behaviors. Here, we review emerging technologies for rapid volume imaging of neural circuitry. We focus on two critical challenges: the inertia of optical systems, which limits image speed, and aberrations, which restrict the image volume. Optical sampling time must be long enough to ensure high-fidelity measurements, but optimized sampling strategies and point spread function engineering can facilitate rapid volume imaging of neural activity within this constraint. We also discuss new computational strategies for the processing and analysis of volume imaging data of increasing size and complexity. Together, optical and computational advances are providing a broader view of neural circuit dynamics, and help elucidate how brain regions work in concert to support behavior. PMID:27571194
A computer vision for animal ecology.
Weinstein, Ben G
2018-05-01
A central goal of animal ecology is to observe species in the natural world. The cost and challenge of data collection often limit the breadth and scope of ecological study. Ecologists often use image capture to bolster data collection in time and space. However, the ability to process these images remains a bottleneck. Computer vision can greatly increase the efficiency, repeatability and accuracy of image review. Computer vision uses image features, such as colour, shape and texture, to infer image content. I provide a brief primer on ecological computer vision to outline its goals, tools and applications to animal ecology. I reviewed 187 existing applications of computer vision and divided articles into ecological description, counting and identity tasks. I discuss recommendations for enhancing the collaboration between ecologists and computer scientists and highlight areas for future growth of automated image analysis.
Masedunskas, Andrius; Milberg, Oleg; Porat-Shliom, Natalie; Sramkova, Monika; Wigand, Tim; Amornphimoltham, Panomwat; Weigert, Roberto
2012-01-01
Intravital microscopy is an extremely powerful tool that enables imaging several biological processes in live animals. Recently, the ability to image subcellular structures in several organs combined with the development of sophisticated genetic tools has made possible extending this approach to investigate several aspects of cell biology. Here we provide a general overview of intravital microscopy with the goal of highlighting its potential and challenges. Specifically, this review is geared toward researchers that are new to intravital microscopy and focuses on practical aspects of carrying out imaging in live animals. Here we share the know-how that comes from first-hand experience, including topics such as choosing the right imaging platform and modality, surgery and stabilization techniques, anesthesia and temperature control. Moreover, we highlight some of the approaches that facilitate subcellular imaging in live animals by providing numerous examples of imaging selected organelles and the actin cytoskeleton in multiple organs. PMID:22992750
Characterization of PET/CT images using texture analysis: the past, the present… any future?
Hatt, Mathieu; Tixier, Florent; Pierce, Larry; Kinahan, Paul E; Le Rest, Catherine Cheze; Visvikis, Dimitris
2017-01-01
After seminal papers over the period 2009 - 2011, the use of texture analysis of PET/CT images for quantification of intratumour uptake heterogeneity has received increasing attention in the last 4 years. Results are difficult to compare due to the heterogeneity of studies and lack of standardization. There are also numerous challenges to address. In this review we provide critical insights into the recent development of texture analysis for quantifying the heterogeneity in PET/CT images, identify issues and challenges, and offer recommendations for the use of texture analysis in clinical research. Numerous potentially confounding issues have been identified, related to the complex workflow for the calculation of textural features, and the dependency of features on various factors such as acquisition, image reconstruction, preprocessing, functional volume segmentation, and methods of establishing and quantifying correspondences with genomic and clinical metrics of interest. A lack of understanding of what the features may represent in terms of the underlying pathophysiological processes and the variability of technical implementation practices makes comparing results in the literature challenging, if not impossible. Since progress as a field requires pooling results, there is an urgent need for standardization and recommendations/guidelines to enable the field to move forward. We provide a list of correct formulae for usual features and recommendations regarding implementation. Studies on larger cohorts with robust statistical analysis and machine learning approaches are promising directions to evaluate the potential of this approach.
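As an example of the kind of textural features and preprocessing choices discussed above, the sketch below computes a few grey-level co-occurrence matrix (GLCM) features with scikit-image. The quantization level, distances and angles are illustrative defaults, not recommendations from the review, and the function assumes a 2-D resampled uptake image.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(uptake_img, levels=32, distances=(1,), angles=(0, np.pi / 2)):
    """Example GLCM features; results depend heavily on the preprocessing choices."""
    img = np.asarray(uptake_img, dtype=float)
    # Quantize to a fixed number of grey levels (a key standardization step).
    q = np.floor((img - img.min()) / (np.ptp(img) + 1e-12) * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=list(distances), angles=list(angles),
                        levels=levels, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ('contrast', 'homogeneity', 'energy', 'correlation')}
```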
Can we match ultraviolet face images against their visible counterparts?
NASA Astrophysics Data System (ADS)
Narang, Neeru; Bourlai, Thirimachos; Hornak, Lawrence A.
2015-05-01
In law enforcement and security applications, the acquisition of face images is critical in producing key trace evidence for the successful identification of potential threats. However, face recognition (FR) for face images captured using different camera sensors, under variable illumination conditions and expressions, is very challenging. In this paper, we investigate the advantages and limitations of the heterogeneous problem of matching ultraviolet (UV, 100 nm to 400 nm in wavelength) face images against their visible (VIS) counterparts, when all face images are captured under controlled conditions. The contributions of our work are three-fold: (i) we used a camera sensor designed with the capability to acquire UV images at short ranges, and generated a dual-band (VIS and UV) database that is composed of multiple, full frontal, face images of 50 subjects. Two sessions were collected spanning a period of 2 months. (ii) For each dataset, we determined which set of face image pre-processing algorithms is more suitable for face matching, and, finally, (iii) we determined which FR algorithm better matches cross-band face images, resulting in high rank-1 identification rates. Experimental results show that our cross-spectral matching algorithms (the heterogeneous problem, where gallery and probe sets consist of face images acquired in different spectral bands) achieve sufficient identification performance. However, we also conclude that the problem under study is very challenging and requires further investigation to address real-world law enforcement or military applications. To the best of our knowledge, this is the first time in the open literature that the problem of cross-spectral matching of UV against VIS band face images has been investigated.
3D Modeling of Industrial Heritage Building Using COTSs System: Test, Limits and Performances
NASA Astrophysics Data System (ADS)
Piras, M.; Di Pietra, V.; Visintini, D.
2017-08-01
The role of UAV systems in applied geomatics is continuously increasing in applications such as inspection, surveying and geospatial data acquisition. This evolution is mainly due to two factors: new technologies and new algorithms for data processing. On the technology side, commercial UAVs, including COTS (Commercial Off-The-Shelf) systems, have been in wide use for several years. Moreover, these UAVs make it easy to acquire oblique images, offering a way to overcome the limitations of the nadir approach related to field of view and occlusions. In order to test the potential and issues of COTS systems, the Italian Society of Photogrammetry and Topography (SIFET) has organised SBM2017, a benchmark in which anyone can participate as a shared experience. This benchmark, called "Photogrammetry with oblique images from UAV: potentialities and challenges", collects feedback from users, highlights the potential of these systems, defines the critical aspects and technological challenges, and compares distinct approaches and software. The case study is the "Fornace Penna" in Scicli (Ragusa, Italy), an inaccessible monument of industrial architecture from the early 1900s. The datasets (images and video) were acquired with three different UAV systems: Parrot Bebop 2, DJI Phantom 4 and Flytop Flynovex. The aim of the benchmark is to generate a 3D model of the "Fornace Penna", analysing different software, imaging geometries and processing strategies. This paper describes the surveying strategies, the methodologies and five different photogrammetric results (sensor calibration, external orientation, dense point cloud and two orthophotos), obtained separately from the single images and from the frames extracted from the video acquired with the DJI system.
Development of AN All-Purpose Free Photogrammetric Tool
NASA Astrophysics Data System (ADS)
González-Aguilera, D.; López-Fernández, L.; Rodriguez-Gonzalvez, P.; Guerrero, D.; Hernandez-Lopez, D.; Remondino, F.; Menna, F.; Nocerino, E.; Toschi, I.; Ballabeni, A.; Gaiani, M.
2016-06-01
Photogrammetry is currently facing some challenges and changes mainly related to automation, ubiquitous processing and variety of applications. Within an ISPRS Scientific Initiative, a team of researchers from USAL, UCLM, FBK and UNIBO has developed an open photogrammetric tool, called GRAPHOS (inteGRAted PHOtogrammetric Suite). GRAPHOS allows dense and metric 3D point clouds to be obtained from terrestrial and UAV images. It incorporates robust photogrammetric and computer vision algorithms with the following aims: (i) increase automation, allowing dense 3D point clouds to be obtained through a friendly and easy-to-use interface; (ii) increase flexibility, working with any type of images, scenarios and cameras; (iii) improve quality, guaranteeing high accuracy and resolution; (iv) preserve photogrammetric reliability and repeatability. Last but not least, GRAPHOS also has an educational component, reinforced with didactic explanations of the algorithms and their performance. The developments were carried out at different levels: GUI realization, image pre-processing, photogrammetric processing with weight parameters, dataset creation and system evaluation. The paper will present in detail the developments of GRAPHOS with all its photogrammetric components and the evaluation analyses based on various image datasets. GRAPHOS is distributed for free for research and educational needs.
NASA Astrophysics Data System (ADS)
Verrelst, Jochem; Malenovský, Zbyněk; Van der Tol, Christiaan; Camps-Valls, Gustau; Gastellu-Etchegorry, Jean-Philippe; Lewis, Philip; North, Peter; Moreno, Jose
2018-06-01
An unprecedented spectroscopic data stream will soon become available with forthcoming Earth-observing satellite missions equipped with imaging spectroradiometers. This data stream will open up a vast array of opportunities to quantify a diversity of biochemical and structural vegetation properties. The processing requirements for such large data streams require reliable retrieval techniques enabling the spatiotemporally explicit quantification of biophysical variables. With the aim of preparing for this new era of Earth observation, this review summarizes the state-of-the-art retrieval methods that have been applied in experimental imaging spectroscopy studies inferring all kinds of vegetation biophysical variables. Identified retrieval methods are categorized into: (1) parametric regression, including vegetation indices, shape indices and spectral transformations; (2) nonparametric regression, including linear and nonlinear machine learning regression algorithms; (3) physically based, including inversion of radiative transfer models (RTMs) using numerical optimization and look-up table approaches; and (4) hybrid regression methods, which combine RTM simulations with machine learning regression methods. For each of these categories, an overview of widely applied methods with application to mapping vegetation properties is given. In view of processing imaging spectroscopy data, a critical aspect involves the challenge of dealing with spectral multicollinearity. The ability to provide robust estimates, retrieval uncertainties and acceptable retrieval processing speed are other important aspects in view of operational processing. Recommendations towards new-generation spectroscopy-based processing chains for operational production of biophysical variables are given.
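Two of the four method families above can be illustrated on a toy band-last reflectance cube: a parametric vegetation index and a nearest-neighbour look-up-table inversion. The band indices, LUT spectra and variable values in the sketch are placeholders, not outputs of any radiative transfer model.

```python
import numpy as np

def ndvi(cube, red_band, nir_band):
    """Parametric regression example: normalized difference vegetation index."""
    red = cube[..., red_band].astype(float)
    nir = cube[..., nir_band].astype(float)
    return (nir - red) / (nir + red + 1e-12)

def lut_inversion(cube, lut_spectra, lut_variables):
    """Physically based example: nearest-neighbour look-up-table inversion.

    lut_spectra   : (n_entries, n_bands) simulated reflectances (stand-ins here)
    lut_variables : (n_entries,) biophysical variable (e.g. LAI) per entry
    """
    pixels = cube.reshape(-1, cube.shape[-1]).astype(float)
    # RMSE between every pixel spectrum and every LUT entry.
    d = np.sqrt(((pixels[:, None, :] - lut_spectra[None, :, :]) ** 2).mean(axis=2))
    best = d.argmin(axis=1)
    return lut_variables[best].reshape(cube.shape[:-1])
```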
Advanced magnetic resonance imaging of the physical processes in human glioblastoma.
Kalpathy-Cramer, Jayashree; Gerstner, Elizabeth R; Emblem, Kyrre E; Andronesi, Ovidiu; Rosen, Bruce
2014-09-01
The most common malignant primary brain tumor, glioblastoma multiforme (GBM) is a devastating disease with a grim prognosis. Patient survival is typically less than two years and fewer than 10% of patients survive more than five years. Magnetic resonance imaging (MRI) can have great utility in the diagnosis, grading, and management of patients with GBM as many of the physical manifestations of the pathologic processes in GBM can be visualized and quantified using MRI. Newer MRI techniques such as dynamic contrast enhanced and dynamic susceptibility contrast MRI provide functional information about the tumor hemodynamic status. Diffusion MRI can shed light on tumor cellularity and the disruption of white matter tracts in the proximity of tumors. MR spectroscopy can be used to study new tumor tissue markers such as IDH mutations. MRI is helping to noninvasively explore the link between the molecular basis of gliomas and the imaging characteristics of their physical processes. Here, we review several approaches to MR-based imaging and discuss the potential for these techniques to quantify the physical processes in glioblastoma, including tumor cellularity and vascularity, metabolite expression, and patterns of tumor growth and recurrence. We conclude with challenges and opportunities for further research in applying physical principles to better understand the biologic process in this deadly disease.
Milewski, Robert J; Kumagai, Yutaro; Fujita, Katsumasa; Standley, Daron M; Smith, Nicholas I
2010-11-19
Macrophages represent the front lines of our immune system; they recognize and engulf pathogens or foreign particles, thus initiating the immune response. Imaging macrophages presents unique challenges, as most optical techniques require labeling or staining of the cellular compartments in order to resolve organelles, and such stains or labels have the potential to perturb the cell, particularly in cases where incomplete information exists regarding the precise cellular reaction under observation. Label-free imaging techniques such as Raman microscopy are thus valuable tools for studying the transformations that occur in immune cells upon activation, both on the molecular and organelle levels. Due to extremely low signal levels, however, Raman microscopy requires sophisticated image processing techniques for noise reduction and signal extraction. To date, efficient, automated algorithms for resolving sub-cellular features in noisy, multi-dimensional image sets have not been explored extensively. We show that hybrid z-score normalization and standard regression (Z-LSR) can highlight the spectral differences within the cell and provide image contrast dependent on spectral content. In contrast to typical Raman image processing methods using multivariate analysis, such as singular value decomposition (SVD), our implementation of the Z-LSR method can operate nearly in real time. In spite of its computational simplicity, Z-LSR can automatically remove background and bias in the signal, improve the resolution of spatially distributed spectral differences and enable sub-cellular features to be resolved in Raman microscopy images of mouse macrophage cells. Significantly, the Z-LSR processed images automatically exhibited subcellular architectures whereas SVD, in general, requires human assistance in selecting the components of interest. The computational efficiency of Z-LSR enables automated resolution of sub-cellular features in large Raman microscopy data sets without compromise in image quality or information loss in associated spectra. These results motivate further use of label-free microscopy techniques in real-time imaging of live immune cells.
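A rough reading of the two ingredients named above, z-score normalization followed by a per-pixel least-squares regression against a reference spectrum, is sketched below. This is an assumption-laden illustration of that general idea, not the published Z-LSR formulation; the choice of reference spectrum and the use of the regression slope as contrast are both assumptions.

```python
import numpy as np

def zscore_regression_contrast(spectra, reference=None):
    """spectra : (n_pixels, n_channels) Raman spectra, one row per pixel.
    Returns one regression coefficient per pixel, usable as image contrast."""
    X = np.asarray(spectra, dtype=float)
    # Z-score normalize each spectral channel across pixels.
    Z = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    # Reference spectrum: mean z-scored spectrum unless one is supplied.
    r = Z.mean(axis=0) if reference is None else np.asarray(reference, dtype=float)
    # Per-pixel least-squares slope of the pixel spectrum against the reference.
    return (Z @ r) / (r @ r + 1e-12)
```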
Mining biomedical images towards valuable information retrieval in biomedical and life sciences
Ahmed, Zeeshan; Zeeshan, Saman; Dandekar, Thomas
2016-01-01
Biomedical images are helpful sources for scientists and practitioners in drawing significant hypotheses, exemplifying approaches and describing experimental results in the published biomedical literature. In recent decades, there has been an enormous increase in the production and publication of heterogeneous biomedical images, which creates a need for bioimaging platforms that extract and analyse the text and content of biomedical images and can be exploited to implement effective information retrieval systems. In this review, we summarize technologies related to data mining of figures. We describe and compare the potential of different approaches in terms of their developmental aspects, methodologies used, results produced, accuracies achieved and limitations. Our comparative conclusions include current challenges for bioimaging software regarding selective image mining, embedded text extraction and processing of complex natural language queries. PMID:27538578
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reu, Phillip L.; Toussaint, E.; Jones, Elizabeth M. C.
With the rapid spread in use of Digital Image Correlation (DIC) globally, it is important there be some standard methods of verifying and validating DIC codes. To this end, the DIC Challenge board was formed and is maintained under the auspices of the Society for Experimental Mechanics (SEM) and the international DIC society (iDICs). The goal of the DIC Board and the 2D–DIC Challenge is to supply a set of well-vetted sample images and a set of analysis guidelines for standardized reporting of 2D–DIC results from these sample images, as well as for comparing the inherent accuracy of different approaches and for providing users with a means of assessing their proper implementation. This document will outline the goals of the challenge, describe the image sets that are available, and give a comparison between 12 commercial and academic 2D–DIC codes using two of the challenge image sets.
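The core subset-matching idea behind 2D-DIC can be illustrated with integer-pixel zero-normalized cross-correlation, as sketched below. Real DIC codes add subpixel interpolation and subset shape functions; this sketch is unrelated to the challenge's reference implementations, and the subset and search sizes are arbitrary.

```python
import numpy as np

def zncc_displacement(ref, deformed, center, half=15, search=10):
    """Integer-pixel displacement of one subset via zero-normalized cross-correlation."""
    r, c = center
    sub = ref[r - half:r + half + 1, c - half:c + half + 1].astype(float)
    sub = (sub - sub.mean()) / (sub.std() + 1e-12)

    best, best_uv = -np.inf, (0, 0)
    for du in range(-search, search + 1):
        for dv in range(-search, search + 1):
            cand = deformed[r + du - half:r + du + half + 1,
                            c + dv - half:c + dv + half + 1].astype(float)
            cand = (cand - cand.mean()) / (cand.std() + 1e-12)
            score = (sub * cand).mean()        # ZNCC similarity
            if score > best:
                best, best_uv = score, (du, dv)
    return best_uv, best
```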
High Density or Urban Sprawl: What Works Best in Biology?
Oreopoulos, John; Gray-Owen, Scott D; Yip, Christopher M
2017-02-28
With new approaches in imaging, from new tools or reagents to processing algorithms, come unique opportunities and challenges to our understanding of biological processes, structures, and dynamics. Although innovations in super-resolution imaging are affording novel perspectives into how molecules structurally associate and localize in response to, or in order to initiate, specific signaling events in the cell, questions arise as to how to interpret these observations in the context of biological function. Just as each neighborhood in a city has its own unique vibe, culture, and indeed density, recent work has shown that membrane receptor behavior and action is governed by their localization and association state. There is tremendous potential in developing strategies for tracking how the populations of these molecular neighborhoods change dynamically.
Building a print on demand web service
NASA Astrophysics Data System (ADS)
Reddy, Prakash; Rozario, Benedict; Dudekula, Shariff; V, Anil Dev
2011-03-01
There is considerable effort underway to digitize all books that have ever been printed. There is a need for a service that can take raw book scans and convert them into Print on Demand (POD) books. Such a service definitely augments the digitization effort and enables broader access to a wider audience. To make this service practical we have identified three key challenges that needed to be addressed. These are: a) produce high-quality images by eliminating artifacts that exist due to the age of the document or that are introduced during the scanning process; b) develop an efficient automated system to process book scans with minimum human intervention; and c) build an ecosystem which allows the target audience to discover these books.
Challenges of microtome‐based serial block‐face scanning electron microscopy in neuroscience
WANNER, A. A.; KIRSCHMANN, M. A.
2015-01-01
Summary Serial block‐face scanning electron microscopy (SBEM) is becoming increasingly popular for a wide range of applications in many disciplines from biology to material sciences. This review focuses on applications for circuit reconstruction in neuroscience, which is one of the major driving forces advancing SBEM. Neuronal circuit reconstruction poses exceptional challenges to volume EM in terms of resolution, field of view, acquisition time and sample preparation. Mapping the connections between neurons in the brain is crucial for understanding information flow and information processing in the brain. However, information on the connectivity between hundreds or even thousands of neurons densely packed in neuronal microcircuits is still largely missing. Volume EM techniques such as serial section TEM, automated tape‐collecting ultramicrotome, focused ion‐beam scanning electron microscopy and SBEM (microtome serial block‐face scanning electron microscopy) are the techniques that provide sufficient resolution to resolve ultrastructural details such as synapses and provides sufficient field of view for dense reconstruction of neuronal circuits. While volume EM techniques are advancing, they are generating large data sets on the terabyte scale that require new image processing workflows and analysis tools. In this review, we present the recent advances in SBEM for circuit reconstruction in neuroscience and an overview of existing image processing and analysis pipelines. PMID:25907464
CZT sensors for Computed Tomography: from crystal growth to image quality
NASA Astrophysics Data System (ADS)
Iniewski, K.
2016-12-01
Recent advances in Traveling Heater Method (THM) growth and device fabrication that require additional processing steps have enabled to dramatically improve hole transport properties and reduce polarization effects in Cadmium Zinc Telluride (CZT) material. As a result high flux operation of CZT sensors at rates in excess of 200 Mcps/mm2 is now possible and has enabled multiple medical imaging companies to start building prototype Computed Tomography (CT) scanners. CZT sensors are also finding new commercial applications in non-destructive testing (NDT) and baggage scanning. In order to prepare for high volume commercial production we are moving from individual tile processing to whole wafer processing using silicon methodologies, such as waxless processing, cassette based/touchless wafer handling. We have been developing parametric level screening at the wafer stage to ensure high wafer quality before detector fabrication in order to maximize production yields. These process improvements enable us, and other CZT manufacturers who pursue similar developments, to provide high volume production for photon counting applications in an economically feasible manner. CZT sensors are capable of delivering both high count rates and high-resolution spectroscopic performance, although it is challenging to achieve both of these attributes simultaneously. The paper discusses material challenges, detector design trade-offs and ASIC architectures required to build cost-effective CZT based detection systems. Photon counting ASICs are essential part of the integrated module platforms as charge-sensitive electronics needs to deal with charge-sharing and pile-up effects.
The Role of Sexual Images in Online and Offline Sexual Behaviour With Minors.
Quayle, Ethel; Newman, Emily
2015-06-01
Sexual images have long been associated with sexual interest and behaviour with minors. The Internet has impacted access to existing content and the ability to create content which can be uploaded and distributed. These images can be used forensically to determine the legality of the behaviour, but importantly for psychiatry, they offer insight into motivation, sexual interest and deviance, the relationship between image content and offline sexual behaviour, and how they might be used in online solicitation and grooming with children and adolescents. Practitioners will need to consider the function that these images may serve, the motivation for their use and the challenges of assessment. This article provides an overview of the literature on the use of illegal images and the parallels with existing paraphilias, such as exhibitionism and voyeurism. The focus is on recent research on the Internet and sexual images of children, including the role that self-taken images by youth may play in the offending process.
Abnormal GABAergic function and negative affect in schizophrenia.
Taylor, Stephan F; Demeter, Elise; Phan, K Luan; Tso, Ivy F; Welsh, Robert C
2014-03-01
Deficits in the γ-aminobutyric acid (GABA) system have been reported in postmortem studies of schizophrenia, and therapeutic interventions in schizophrenia often involve potentiation of GABA receptors (GABAR) to augment antipsychotic therapy and treat negative affect such as anxiety. To map GABAergic mechanisms associated with processing affect, we used a benzodiazepine challenge while subjects viewed salient visual stimuli. Fourteen stable, medicated schizophrenia/schizoaffective patients and 13 healthy comparison subjects underwent functional magnetic resonance imaging using the blood oxygenation level-dependent (BOLD) technique while they viewed salient emotional images. Subjects received intravenous lorazepam (LRZ; 0.01 mg/kg) or saline in a single-blinded, cross-over design (two sessions separated by 1-3 weeks). A predicted group by drug interaction was noted in the dorsal medial prefrontal cortex (dmPFC) as well as right superior frontal gyrus and left and right occipital regions, such that psychosis patients showed an increased BOLD signal to LRZ challenge, rather than the decreased signal exhibited by the comparison group. A main effect of reduced BOLD signal in bilateral occipital areas was noted across groups. Consistent with the role of the dmPFC in processing emotion, state negative affect positively correlated with the response to the LRZ challenge in the dmPFC for the patients and comparison subjects. The altered response to LRZ challenge is consistent with altered inhibition predicted by postmortem findings of altered GABAR in schizophrenia. These results also suggest that negative affect in schizophrenia/schizoaffective disorder is associated-directly or indirectly-with GABAergic function on a continuum with normal behavior.
Automated Segmentation of Nuclei in Breast Cancer Histopathology Images.
Paramanandam, Maqlin; O'Byrne, Michael; Ghosh, Bidisha; Mammen, Joy John; Manipadam, Marie Therese; Thamburaj, Robinson; Pakrashi, Vikram
2016-01-01
The process of nuclei detection in high-grade breast cancer images is quite challenging for image processing techniques due to certain heterogeneous characteristics of cancer nuclei such as enlarged and irregularly shaped nuclei, highly coarse chromatin marginalized to the nuclei periphery and visible nucleoli. Recent reviews state that existing techniques show appreciable segmentation accuracy on breast histopathology images whose nuclei are dispersed and regular in texture and shape; however, typical cancer nuclei are often clustered and have irregular texture and shape properties. This paper proposes a novel segmentation algorithm for detecting individual nuclei from Hematoxylin and Eosin (H&E) stained breast histopathology images. This detection framework estimates a nuclei saliency map using tensor voting followed by boundary extraction of the nuclei on the saliency map using a Loopy Back Propagation (LBP) algorithm on a Markov Random Field (MRF). The method was tested on both whole-slide images and frames of breast cancer histopathology images. Experimental results demonstrate high segmentation performance with efficient precision, recall and dice-coefficient rates, upon testing high-grade breast cancer images containing several thousand nuclei. In addition to the optimal performance on the highly complex images presented in this paper, this method also gave appreciable results in comparison with two recently published methods, Wienert et al. (2012) and Veta et al. (2013), which were tested using their own datasets.
Picturing pathogen infection in plants.
Barón, Matilde; Pineda, Mónica; Pérez-Bueno, María Luisa
2016-09-01
Several imaging techniques have provided valuable tools to evaluate the impact of biotic stress on host plants. These techniques enable the study of plant-pathogen interactions by analysing the spatial and temporal heterogeneity of foliar metabolism during pathogenesis. In this work we review the use of imaging techniques based on chlorophyll fluorescence, multicolour fluorescence and thermography for the study of virus-, bacteria- and fungus-infected plants. These studies have revealed the impact of pathogen challenge on photosynthetic performance, secondary metabolism and leaf transpiration, and they constitute a promising tool for field and greenhouse disease management. Images of standard chlorophyll fluorescence (Chl-F) parameters, obtained during Chl-F induction kinetics and related to photochemical processes and energy dissipation, could be good stress indicators for monitoring pathogenesis. Changes in UV-induced blue (F440) and green (F520) fluorescence measured by multicolour fluorescence imaging in pathogen-challenged plants appear to be related to the up-regulation of plant secondary metabolism and to an increase in phenolic compounds involved in plant defence, such as scopoletin, chlorogenic acid or ferulic acid. Thermal imaging visualizes the leaf transpiration map during pathogenesis and emphasizes the key role of stomata in innate plant immunity. Using several imaging techniques in parallel could allow disease signatures to be obtained for a specific pathogen. These techniques have also turned out to be very useful for presymptomatic pathogen detection, and they are powerful non-destructive tools for precision agriculture. Their applicability at lab scale, in the field by remote sensing, and in high-throughput plant phenotyping makes them particularly useful. Thermal sensors are widely used in crop fields to detect early changes in leaf transpiration induced by both air-borne and soil-borne pathogens. The limitations of measuring photosynthesis by Chl-F at the canopy level are being solved, while the use of multispectral fluorescence imaging remains very challenging due to the type of light excitation required.
Filho, Mercedes; Ma, Zhen; Tavares, João Manuel R S
2015-11-01
In recent years, the incidence of skin cancer has risen worldwide, mainly due to prolonged exposure to harmful ultraviolet radiation. Concurrently, the computer-assisted medical diagnosis of skin cancer has undergone major advances, through improvements in instrumentation and detection technology and the development of algorithms to process the information. Moreover, because there has been an increased need to store medical data for monitoring, comparative and assisted-learning purposes, algorithms for data processing and storage have also become more efficient in handling the growth in data. In addition, the potential use of common mobile devices to register high-resolution images of skin lesions has fueled the need for real-time processing algorithms that can estimate the likelihood of malignancy. This possibility allows even non-specialists to monitor and follow up suspected skin cancer cases. In this review, we present the major steps in the pre-processing, processing and post-processing of skin lesion images, with particular emphasis on the quantification and classification of pigmented skin lesions. We further review and outline the future challenges for the creation of minimum-feature, automated, real-time algorithms for the detection of skin cancer from images acquired via common mobile devices.
A novel double patterning approach for 30nm dense holes
NASA Astrophysics Data System (ADS)
Hsu, Dennis Shu-Hao; Wang, Walter; Hsieh, Wei-Hsien; Huang, Chun-Yen; Wu, Wen-Bin; Shih, Chiang-Lin; Shih, Steven
2011-04-01
Double Patterning Technology (DPT) has been widely accepted as the major workhorse beyond water immersion lithography for sub-38nm half-pitch line patterning until EUV reaches production. For dense hole patterning, classical DPT employs self-aligned spacer deposition and uses the intersection of horizontal and vertical lines to define the desired hole patterns. However, the increase in manufacturing cost and process complexity is substantial. Several innovative approaches have been proposed and tested to address the manufacturing and technical challenges. Here, a novel process of double-patterned pillars combined with image reversal is proposed for the realization of low-cost dense holes in 30nm-node DRAM. Pillar-formation lithography provides much better optical contrast than the counterpart hole patterning with similar CD requirements. With a reliable freezing process, double-patterned pillars can be readily implemented, and a novel image reversal process at the last stage defines the hole patterns with high fidelity. In this paper, several freezing processes for the construction of the double-patterned pillars were tested and compared, and 30nm double-patterned pillars were demonstrated successfully. A variety of image reversal processes are investigated and their pros and cons discussed. An economic approach with optimized lithography performance is proposed for the 30nm DRAM node.
Microscopy Images as Interactive Tools in Cell Modeling and Cell Biology Education
ERIC Educational Resources Information Center
Araujo-Jorge, Tania C.; Cardona, Tania S.; Mendes, Claudia L. S.; Henriques-Pons, Andrea; Meirelles, Rosane M. S.; Coutinho, Claudia M. L. M.; Aguiar, Luiz Edmundo V.; Meirelles, Maria de Nazareth L.; de Castro, Solange L.; Barbosa, Helene S.; Luz, Mauricio R. M. P.
2004-01-01
The advent of genomics, proteomics, and microarray technology has brought much excitement to science, both in teaching and in learning. The public is eager to know about the processes of life. In the present context of the explosive growth of scientific information, a major challenge of modern cell biology is to popularize basic concepts of…
"Needle and Stick" Save the World: Sustainable Development and the Universal Child
ERIC Educational Resources Information Center
Dahlbeck, Johan; De Lucia Dahlbeck, Moa
2012-01-01
This text deals with a problem concerning processes of the productive power of knowledge. We draw on the so-called poststructural theories challenging the classical image of thought--as hinged upon a representational logic identifying entities in a rigid sense--when formulating a problem concerning the gap between knowledge and the object of…
Unsupervised tattoo segmentation combining bottom-up and top-down cues
NASA Astrophysics Data System (ADS)
Allen, Josef D.; Zhao, Nan; Yuan, Jiangbo; Liu, Xiuwen
2011-06-01
Tattoo segmentation is challenging due to the complexity and large variance of tattoo structures. We have developed a segmentation algorithm for finding tattoos in an image. Our basic idea is split-merge: split each tattoo image into clusters through a bottom-up process, learn to merge the clusters containing skin, and then distinguish tattoo from the remaining skin via a top-down prior derived from the image itself. Tattoo segmentation with an unknown number of clusters is thus transformed into a figure-ground segmentation. We have applied our segmentation algorithm to a tattoo dataset, and the results show that our tattoo segmentation system is efficient and suitable for further tattoo classification and retrieval purposes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamran, Mudassar, E-mail: kamranm@mir.wustl.edu; Fowler, Kathryn J., E-mail: fowlerk@mir.wustl.edu; Mellnick, Vincent M., E-mail: mellnickv@mir.wustl.edu
Primary aortic neoplasms are rare. Aortic sarcoma arising after endovascular aneurysm repair (EVAR) is a scarce subset of primary aortic malignancies, reports of which are infrequent in the published literature. The diagnosis of aortic sarcoma is challenging due to its non-specific clinical presentation, and the prognosis is poor due to delayed diagnosis, rapid proliferation, and propensity for metastasis. Post-EVAR, aortic sarcomas may mimic other more common aortic processes on surveillance imaging. Radiologists are rarely familiar with this entity, for which multimodality imaging and awareness are invaluable in early diagnosis. A series of three pathologically confirmed cases is presented to display the multimodality imaging features and clinical presentations of aortic sarcoma arising after EVAR.
Automated microscopy for high-content RNAi screening
2010-01-01
Fluorescence microscopy is one of the most powerful tools to investigate complex cellular processes such as cell division, cell motility, or intracellular trafficking. The availability of RNA interference (RNAi) technology and automated microscopy has opened the possibility to perform cellular imaging in functional genomics and other large-scale applications. Although imaging often dramatically increases the content of a screening assay, it poses new challenges to achieve accurate quantitative annotation and therefore needs to be carefully adjusted to the specific needs of individual screening applications. In this review, we discuss principles of assay design, large-scale RNAi, microscope automation, and computational data analysis. We highlight strategies for imaging-based RNAi screening adapted to different library and assay designs. PMID:20176920
Tonti, Simone; Di Cataldo, Santa; Bottino, Andrea; Ficarra, Elisa
2015-03-01
The automation of the analysis of Indirect Immunofluorescence (IIF) images is of paramount importance for the diagnosis of autoimmune diseases. This paper proposes a solution to one of the most challenging steps of this process, the segmentation of HEp-2 cells, through an adaptive marker-controlled watershed approach. Our algorithm automatically adapts the marker selection pipeline to the peculiar characteristics of the input image, hence it is able to cope with different fluorescence intensities and staining patterns without any a priori knowledge. Furthermore, it shows reduced sensitivity to over-segmentation errors and uneven illumination, which are typical issues in IIF imaging. Copyright © 2015 Elsevier Ltd. All rights reserved.
Processing challenges in the XMM-Newton slew survey
NASA Astrophysics Data System (ADS)
Saxton, Richard D.; Altieri, Bruno; Read, Andrew M.; Freyberg, Michael J.; Esquej, M. P.; Bermejo, Diego
2005-08-01
The great collecting area of the mirrors, coupled with the high quantum efficiency of the EPIC detectors, has made XMM-Newton the most sensitive X-ray observatory flown to date. This is particularly evident during slew exposures which, while giving only 15 seconds of on-source time, actually constitute a 2-10 keV survey ten times deeper than current "all-sky" catalogues. Here we report on progress towards making a catalogue of slew detections constructed from the full 0.2-12 keV energy band and discuss the challenges associated with processing the slew data. The fast (90 degrees per hour) slew speed results in images that are smeared by different amounts depending on the readout mode, effectively changing the form of the point spread function. The extremely low background in slew images changes the optimum source-searching criteria, such that searching a single image using the full energy band is more sensitive than splitting the data into discrete energy bands. False detections due to optical loading by bright stars, the wings of the PSF in very bright sources, and single-frame detector flashes are considered, and techniques for identifying and removing these spurious sources from the final catalogue are outlined. Finally, the attitude reconstruction of the satellite during the slewing maneuver is complex. We discuss the implications of this for the positional accuracy of the catalogue.
Covert photo classification by fusing image features and visual attributes.
Lang, Haitao; Ling, Haibin
2015-10-01
In this paper, we study a novel problem of classifying covert photos, whose acquisition processes are intentionally concealed from the subjects being photographed. Covert photos are often privacy invasive and, if distributed over the Internet, can cause serious consequences. Automatic identification of such photos therefore serves as an important initial step toward further privacy protection operations. The problem is, however, very challenging due to the large semantic similarity between covert and noncovert photos, the enormous diversity in the photographing process and environment of covert photos, and the difficulty of collecting an effective data set for the study. Attacking these challenges, we make three consecutive contributions. First, we collect a large data set containing 2500 covert photos, each of which is verified rigorously and carefully. Second, we conduct a user study on how humans distinguish covert photos from noncovert ones. The user study not only provides an important evaluation baseline, but also suggests fusing heterogeneous information for an automatic solution. Our third contribution is a covert photo classification algorithm that fuses various image features and visual attributes in the multiple kernel learning framework. We evaluate the proposed approach on the collected data set in comparison with other modern image classifiers. The results show that our approach achieves an average classification rate (1-EER) of 0.8940, which significantly outperforms the other competitors as well as human performance.
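As an illustration of fusing heterogeneous feature sets through kernels, a minimal Python sketch follows. It is not the cited algorithm: the paper learns kernel weights via multiple kernel learning, whereas here a fixed equal-weight combination of two RBF kernels is used with a precomputed-kernel SVM, and the feature matrices are synthetic placeholders.

```python
# Minimal sketch: fuse an image-feature kernel and a visual-attribute kernel
# with fixed weights (a stand-in for learned multiple-kernel weights).
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def fused_kernel(img_a, attr_a, img_b, attr_b, w=0.5):
    # Weighted sum of per-modality RBF kernels.
    return w * rbf_kernel(img_a, img_b) + (1 - w) * rbf_kernel(attr_a, attr_b)

rng = np.random.default_rng(0)
X_img, X_attr = rng.normal(size=(60, 128)), rng.normal(size=(60, 10))  # placeholders
y = rng.integers(0, 2, size=60)                                        # covert / noncovert labels

K_train = fused_kernel(X_img, X_attr, X_img, X_attr)
clf = SVC(kernel="precomputed").fit(K_train, y)
print("training accuracy:", clf.score(K_train, y))
```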
Co-Registration of Terrestrial and Uav-Based Images - Experimental Results
NASA Astrophysics Data System (ADS)
Gerke, M.; Nex, F.; Jende, P.
2016-03-01
For many applications within urban environments the combined use of images taken from the ground and from unmanned aerial platforms is attractive: while the airborne perspective captures the upper parts of objects, including roofs, the ground images complement the data with lateral views to retrieve a complete visualisation or 3D reconstruction of areas of interest. The automatic co-registration of air- and ground-based images is still a challenge and cannot be considered solved. The main obstacle is that objects are photographed from quite different angles, and hence state-of-the-art tie-point measurement approaches cannot cope with the induced perspective transformation. One important first step towards a solution is to use airborne images taken under slant directions. Such oblique views not only help to connect vertical images and horizontal views but also provide image information from 3D structures not visible from the other two directions. According to our experience, however, careful planning and many images taken under different viewing angles are still needed to support automatic matching across all images and a complete bundle block adjustment. Nevertheless, the entire process remains quite sensitive: the removal of a single image might lead to a completely different or wrong solution, or to separation of image blocks. In this paper we analyse the impact that different parameters and strategies have on the solution, namely (a) the tie-point matcher used and (b) the software used for bundle adjustment. Using the data provided in the context of the ISPRS benchmark on multi-platform photogrammetry, we systematically address these influences. Concerning tie-point matching, we test the standard SIFT point extractor and descriptor, as well as the SURF and ASIFT approaches, the ORB technique, and (A)KAZE, which are based on a nonlinear scale space. In terms of pre-processing we analyse the Wallis filter. Results show that in more challenging situations, in this case for data captured from different platforms on different days, most approaches do not perform well. Wallis filtering emerged as most helpful, especially for the SIFT approach. The commercial software pix4dmapper succeeds in overall bundle adjustment only for some configurations, and in particular not for the entire image block provided.
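To make the tie-point step concrete, a minimal OpenCV sketch follows. It is not the benchmark pipeline: file names are placeholders, ORB stands in for the several detectors compared in the paper, and a CLAHE contrast normalization stands in for the Wallis filter.

```python
# Minimal sketch: tie-point matching between an airborne oblique and a ground image.
import cv2

img_air = cv2.imread("airborne_oblique.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
img_ground = cv2.imread("ground_view.png", cv2.IMREAD_GRAYSCALE)     # placeholder path

# Local contrast normalization (CLAHE as a rough stand-in for Wallis filtering).
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
img_air, img_ground = clahe.apply(img_air), clahe.apply(img_ground)

orb = cv2.ORB_create(nfeatures=5000)
kp1, des1 = orb.detectAndCompute(img_air, None)
kp2, des2 = orb.detectAndCompute(img_ground, None)

# Cross-checked Hamming matching keeps only mutually consistent tie points.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} candidate tie points")
```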
Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy
NASA Astrophysics Data System (ADS)
Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli
2014-03-01
One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
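As a reminder of the Amdahl's-law estimate used in such scalability analyses, a minimal Python sketch follows; the parallel fraction p is an assumed, illustrative value, not a number from the cited study.

```python
# Minimal sketch: Amdahl's-law speedup prediction for a given parallel fraction.
def amdahl_speedup(p, n_cores):
    """Predicted speedup when a fraction p of the work parallelizes over n_cores."""
    return 1.0 / ((1.0 - p) + p / n_cores)

# A 12-fold speedup on 12 cores would imply p close to 1; p = 0.95 is illustrative only.
for n in (4, 12, 48):
    print(n, "cores:", round(amdahl_speedup(0.95, n), 2), "x")
```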
Lee, Woogul; Kim, Sung-il
2014-01-01
We conducted behavioral and functional magnetic resonance imaging (fMRI) research to investigate the effects of two types of achievement goals—mastery goals and performance-approach goals— on challenge seeking and feedback processing. The results of the behavioral experiment indicated that mastery goals were associated with a tendency to seek challenge, both before and after experiencing difficulty during task performance, whereas performance-approach goals were related to a tendency to avoid challenge after encountering difficulty during task performance. The fMRI experiment uncovered a significant decrease in ventral striatal activity when participants received negative feedback for any task type and both forms of achievement goals. During the processing of negative feedback for the rule-finding task, performance-approach-oriented participants showed a substantial reduction in activity in the dorsolateral prefrontal cortex (DLPFC) and the frontopolar cortex, whereas mastery-oriented participants showed little change. These results suggest that performance-approach-oriented participants are less likely to either recruit control processes in response to negative feedback or focus on task-relevant information provided alongside the negative feedback. In contrast, mastery-oriented participants are more likely to modulate aversive valuations to negative feedback and focus on the constructive elements of feedback in order to attain their task goals. We conclude that performance-approach goals lead to a reluctant stance towards difficulty, while mastery goals encourage a proactive stance. PMID:25251396
Integrated segmentation of cellular structures
NASA Astrophysics Data System (ADS)
Ajemba, Peter; Al-Kofahi, Yousef; Scott, Richard; Donovan, Michael; Fernandez, Gerardo
2011-03-01
Automatic segmentation of cellular structures is an essential step in image cytology and histology. Despite substantial progress, better automation and improvements in accuracy and adaptability to novel applications are needed. In applications utilizing multi-channel immunofluorescence images, challenges include misclassification of epithelial and stromal nuclei, irregular nuclei and cytoplasm boundaries, and over- and under-segmentation of clustered nuclei. Variations in image acquisition conditions and artifacts from nuclei and cytoplasm images often confound existing algorithms in practice. In this paper, we present a robust and accurate algorithm for jointly segmenting cell nuclei and cytoplasm using a combination of ideas to reduce the aforementioned problems. First, an adaptive process that includes top-hat filtering, eigenvalues-of-Hessian blob detection and distance transforms is used to estimate the inverse illumination field and correct for intensity non-uniformity in the nuclei channel. Next, a minimum-error-thresholding-based binarization process and seed detection combining Laplacian-of-Gaussian filtering constrained by a distance-map-based scale selection are used to identify candidate seeds for nuclei segmentation. The initial segmentation using a local maximum clustering algorithm is refined using a minimum-error-thresholding technique. Final refinements include an artifact removal process specifically targeted at lumens and other problematic structures and a systematic decision process to reclassify nuclei objects near the cytoplasm boundary as epithelial or stromal. Segmentation results were evaluated using 48 realistic phantom images with known ground truth. The overall segmentation accuracy exceeds 94%. The algorithm was further tested on 981 images of actual prostate cancer tissue. The artifact removal process worked in 90% of cases. The algorithm has now been deployed in a high-volume histology analysis application.
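For orientation, a minimal Python sketch of the seed-detection idea follows: Laplacian-of-Gaussian blob detection with its scale range bounded by a distance-transform estimate, then watershed splitting. It is only a simplified illustration of that one step, not the authors' pipeline, and the input and parameter values are placeholders.

```python
# Minimal sketch: distance-map-bounded LoG seeds followed by watershed splitting.
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, feature, segmentation, morphology

nuclei = np.random.rand(256, 256)                 # placeholder nuclei-channel image
binary = nuclei > filters.threshold_otsu(nuclei)  # rough foreground mask
binary = morphology.remove_small_objects(binary, min_size=30)

# The distance map suggests a typical object radius, which bounds the LoG scales.
distance = ndi.distance_transform_edt(binary)
typical_radius = max(distance.max(), 1.0)
blobs = feature.blob_log(nuclei, min_sigma=typical_radius / 3,
                         max_sigma=typical_radius, threshold=0.05)

markers = np.zeros_like(nuclei, dtype=int)
for i, (r, c, s) in enumerate(blobs, start=1):
    markers[int(r), int(c)] = i                   # one seed per detected blob
labels = segmentation.watershed(-distance, markers, mask=binary)
print(labels.max(), "candidate nuclei")
```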
RVC-CAL library for endmember and abundance estimation in hyperspectral image analysis
NASA Astrophysics Data System (ADS)
Lazcano López, R.; Madroñal Quintín, D.; Juárez Martínez, E.; Sanz Álvaro, C.
2015-10-01
Hyperspectral imaging (HI) collects information from across the electromagnetic spectrum, covering a wide range of wavelengths. Although this technology was initially developed for remote sensing and earth observation, its multiple advantages, such as high spectral resolution, have led to its application in other fields, such as cancer detection. However, this new field has specific requirements; for instance, it must meet strict timing constraints, since all the potential applications, such as surgical guidance or in vivo tumor detection, imply real-time requirements. Achieving these time requirements is a great challenge, as hyperspectral images generate extremely high volumes of data to process. Thus, new research lines are studying new processing techniques, and the most relevant ones are related to system parallelization. Along that line, this paper describes the construction of a new hyperspectral processing library for the RVC-CAL language, which is specifically designed for multimedia applications and allows multithreaded compilation and system parallelization. This paper presents the development of the library functions required to implement two of the four stages of the hyperspectral imaging processing chain: endmember and abundance estimation. The results obtained show that the library achieves speedups of approximately 30% compared to an existing hyperspectral image analysis software package; specifically, the endmember estimation step reaches an average speedup of 27.6%, which saves almost 8 seconds of execution time. The results also reveal some bottlenecks, such as the communication interfaces among the different actors, due to the volume of data to transfer. Finally, it is shown that the library considerably simplifies the implementation process. Thus, experimental results show the potential of an RVC-CAL library for analyzing hyperspectral images in real time, as it provides enough resources to study the system performance.
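As an illustration of the abundance-estimation stage only, a minimal Python sketch follows. It assumes the endmember spectra are already known and uses per-pixel non-negative least squares; it does not reproduce the RVC-CAL actors or the endmember-extraction stage of the cited work, and all data are synthetic.

```python
# Minimal sketch: per-pixel abundance estimation by non-negative least squares.
import numpy as np
from scipy.optimize import nnls

bands, n_endmembers = 100, 4
E = np.abs(np.random.rand(bands, n_endmembers))   # known endmember signatures (columns)
pixels = np.abs(np.random.rand(bands, 50))        # 50 synthetic pixel spectra

abundances = np.stack([nnls(E, p)[0] for p in pixels.T])
# Optional sum-to-one normalization of each abundance vector.
abundances /= abundances.sum(axis=1, keepdims=True) + 1e-12
print(abundances.shape)  # (50, 4)
```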
Probing the brain with molecular fMRI.
Ghosh, Souparno; Harvey, Peter; Simon, Jacob C; Jasanoff, Alan
2018-06-01
One of the greatest challenges of modern neuroscience is to incorporate our growing knowledge of molecular and cellular-scale physiology into integrated, organismic-scale models of brain function in behavior and cognition. Molecular-level functional magnetic resonance imaging (molecular fMRI) is a new technology that can help bridge these scales by mapping defined microscopic phenomena over large, optically inaccessible regions of the living brain. In this review, we explain how MRI-detectable imaging probes can be used to sensitize noninvasive imaging to mechanistically significant components of neural processing. We discuss how a combination of innovative probe design, advanced imaging methods, and strategies for brain delivery can make molecular fMRI an increasingly successful approach for spatiotemporally resolved studies of diverse neural phenomena, perhaps eventually in people. Copyright © 2018 Elsevier Ltd. All rights reserved.
Guo, Lu; Wang, Gang; Feng, Yuanming; Yu, Tonggang; Guo, Yu; Bai, Xu; Ye, Zhaoxiang
2016-09-21
Accurate target volume delineation is crucial for the radiotherapy of tumors. Diffusion and perfusion magnetic resonance imaging (MRI) can provide functional information about brain tumors, and they are able to detect tumor volume and physiological changes beyond the lesions shown on conventional MRI. This review examines recent studies that utilized diffusion and perfusion MRI for tumor volume definition in radiotherapy of brain tumors, and it presents the opportunities and challenges in the integration of multimodal functional MRI into clinical practice. The results indicate that specialized and robust post-processing algorithms and tools are needed for the precise alignment of targets on the images, and comprehensive validations with more clinical data are important for the improvement of the correlation between histopathologic results and MRI parameter images.
Flexible medical image management using service-oriented architecture.
Shaham, Oded; Melament, Alex; Barak-Corren, Yuval; Kostirev, Igor; Shmueli, Noam; Peres, Yardena
2012-01-01
Management of medical images increasingly involves the need for integration with a variety of information systems. To address this need, we developed Content Management Offering (CMO), a platform for medical image management supporting interoperability through compliance with standards. CMO is based on the principles of service-oriented architecture, implemented with emphasis on three areas: clarity of business process definition, consolidation of service configuration management, and system scalability. Owing to the flexibility of this platform, a small team is able to accommodate requirements of customers varying in scale and in business needs. We describe two deployments of CMO, highlighting the platform's value to customers. CMO represents a flexible approach to medical image management, which can be applied to a variety of information technology challenges in healthcare and life sciences organizations.
The challenges for quantitative photoacoustic imaging
NASA Astrophysics Data System (ADS)
Cox, B. T.; Laufer, J. G.; Beard, P. C.
2009-02-01
In recent years, some of the promised potential of biomedical photoacoustic imaging has begun to be realised. It has been used to produce good, three-dimensional, images of blood vasculature in mice and other small animals, and in human skin in vivo, to depths of several mm, while maintaining a spatial resolution of <100 μm. Furthermore, photoacoustic imaging depends for contrast on the optical absorption distribution of the tissue under study, so, in the same way that the measurement of optical spectra has traditionally provided a means of determining the molecular constituents of an object, there is hope that multiwavelength photoacoustic imaging will provide a way to distinguish and quantify the component molecules of optically-scattering biological tissue (which may include exogeneous, targeted, chromophores). In simple situations with only a few significant absorbers and some prior knowledge of the geometry of the arrangement, this has been shown to be possible, but significant hurdles remain before the general problem can be solved. The general problem may be stated as follows: is it possible, in general, to take a set of photoacoustic images obtained at multiple optical wavelengths, and process them in a way that results in a set of quantitatively accurate images of the concentration distributions of the constituent chromophores of the imaged tissue? If such an 'inversion' procedure - not specific to any particular situation and free of restrictive suppositions - were designed, then photoacoustic imaging would offer the possibility of high resolution 'molecular' imaging of optically scattering tissue: a very powerful technique that would find uses in many areas of the life sciences and in clinical practice. This paper describes the principal challenges that must be overcome for such a general procedure to be successful.
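To make the simplest version of this inversion concrete, a minimal Python sketch follows. It solves the purely linear unmixing problem, i.e. it assumes known chromophore absorption spectra and ignores the wavelength-dependent light fluence that the text identifies as the real difficulty; all values are synthetic and illustrative.

```python
# Minimal sketch: linear spectral unmixing of multiwavelength photoacoustic images.
import numpy as np

wavelengths = 8
# Columns: absorption spectra of two hypothetical chromophores (e.g. HbO2 and Hb).
A = np.abs(np.random.rand(wavelengths, 2))
images = np.abs(np.random.rand(wavelengths, 64, 64))   # multiwavelength PA images

pixels = images.reshape(wavelengths, -1)                # one column per pixel
conc, *_ = np.linalg.lstsq(A, pixels, rcond=None)       # least-squares concentrations
conc_maps = conc.reshape(2, 64, 64)
print(conc_maps.shape)                                  # (2, 64, 64) concentration maps
```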
ARCOCT: Automatic detection of lumen border in intravascular OCT images.
Cheimariotis, Grigorios-Aris; Chatzizisis, Yiannis S; Koutkias, Vassilis G; Toutouzas, Konstantinos; Giannopoulos, Andreas; Riga, Maria; Chouvarda, Ioanna; Antoniadis, Antonios P; Doulaverakis, Charalambos; Tsamboulatidis, Ioannis; Kompatsiaris, Ioannis; Giannoglou, George D; Maglaveras, Nicos
2017-11-01
Intravascular optical coherence tomography (OCT) is an invaluable tool for the detection of pathological features on the arterial wall and the investigation of post-stenting complications. Computational lumen border detection in OCT images is highly advantageous, since it may support rapid morphometric analysis. However, automatic detection is very challenging, since OCT images typically include various artifacts that impact image clarity, as well as features such as side branches and intraluminal blood. This paper presents ARCOCT, a segmentation method for fully automatic detection of the lumen border in OCT images. ARCOCT relies on multiple, consecutive processing steps, accounting for image preparation, contour extraction and refinement. In particular, for contour extraction ARCOCT employs a transformation of OCT images based on physical characteristics such as reflectivity and absorption of the tissue, and for contour refinement local regression using weighted linear least squares and a 2nd-degree polynomial model is employed to achieve artifact and small-branch correction as well as smoothness of the artery mesh. Our major focus was to achieve accurate contour delineation in the various types of OCT images, i.e., even in challenging cases with branches and artifacts. ARCOCT has been assessed on a dataset of 1812 images (308 from stented and 1504 from native segments) obtained from 20 patients. ARCOCT was compared against ground-truth manual segmentation performed by experts on the basis of various geometric features (e.g. area, perimeter, radius, diameter, centroid) and closed contour matching indicators (the Dice index, the Hausdorff distance and the undirected average distance), using standard statistical analysis methods. The proposed method proved very efficient and close to the ground truth, exhibiting no statistically significant differences for most of the examined metrics. ARCOCT allows accurate and fully automated lumen border detection in OCT images. Copyright © 2017 Elsevier B.V. All rights reserved.
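For intuition about the contour-refinement step, a minimal Python sketch follows. It approximates the weighted local quadratic regression described above with an (unweighted) quadratic Savitzky-Golay filter applied to the lumen contour expressed as radius versus angle; the contour values are synthetic and the window length is illustrative.

```python
# Minimal sketch: smooth a closed lumen contour with local quadratic regression.
import numpy as np
from scipy.signal import savgol_filter

angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
radius = 2.0 + 0.1 * np.sin(3 * angles) + 0.05 * np.random.randn(360)  # noisy border

# mode="wrap" keeps the smoothing periodic around the closed contour.
radius_smooth = savgol_filter(radius, window_length=31, polyorder=2, mode="wrap")

x = radius_smooth * np.cos(angles)   # back to Cartesian border coordinates
y = radius_smooth * np.sin(angles)
print(x.shape, y.shape)
```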
Small blob identification in medical images using regional features from optimum scale.
Zhang, Min; Wu, Teresa; Bennett, Kevin M
2015-04-01
Recent advances in medical imaging technology have greatly enhanced imaging-based diagnosis, which requires computationally effective and accurate algorithms to process the images (e.g., measure the objects) for quantitative assessment. In this research, we are interested in one type of imaging object: small blobs. Examples of small blob objects are cells in histopathology images, glomeruli in MR images, etc. This problem is particularly challenging because the small blobs often have inhomogeneous intensity distributions and an indistinct boundary against the background. Yet, in general, these blobs have similar sizes. Motivated by this finding, we propose a novel detector termed Hessian-based Laplacian of Gaussian (HLoG), using scale-space theory as the foundation. Like most imaging detectors, an image is first smoothed via LoG. Hessian analysis is then launched to identify the single optimal scale on which a presegmentation is conducted. The advantage of the Hessian process is that it is capable of delineating the blobs. As a result, regional features can be retrieved. These features enable an unsupervised clustering algorithm for post-pruning, which should be more robust and sensitive than the traditional threshold-based post-pruning commonly used in most imaging detectors. To test the performance of the proposed HLoG, two sets of 2-D grey medical images are studied. HLoG is compared against three state-of-the-art detectors, generalized LoG, Radial-Symmetry and LoG, using precision, recall, and F-score metrics. We observe that HLoG statistically outperforms the compared detectors.
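A minimal Python sketch in the spirit of this approach follows: LoG smoothing at a single scale followed by a Hessian-eigenvalue blobness test. It is only an illustration of the idea, not the authors' HLoG implementation; the image, scale and threshold are placeholders.

```python
# Minimal sketch: single-scale LoG response gated by a Hessian blobness test.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import hessian_matrix, hessian_matrix_eigvals

image = np.random.rand(128, 128)                 # placeholder grey medical image
sigma = 3.0                                      # assumed single "optimal" scale
log = -ndi.gaussian_laplace(image, sigma=sigma)  # bright blobs give a positive response

# Blob candidates: both Hessian eigenvalues negative (locally dome-shaped intensity).
H = hessian_matrix(image, sigma=sigma, order="rc")
ev1, ev2 = hessian_matrix_eigvals(H)
candidates = (ev1 < 0) & (ev2 < 0) & (log > log.mean())

labels, n = ndi.label(candidates)
print(n, "blob candidates before regional-feature pruning")
```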
The correlation study of parallel feature extractor and noise reduction approaches
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dewi, Deshinta Arrova; Sundararajan, Elankovan; Prabuwono, Anton Satria
2015-05-15
This paper presents a literature review of techniques for developing a parallel feature extractor and of its correlation with noise reduction approaches for low-light-intensity images. Low-light-intensity images are normally darker and have low contrast. Without proper handling techniques, such images regularly lead to misperception of objects and textures and to an inability to segment them. The resulting visual illusions often lead to disorientation, user fatigue, and poor detection and classification performance by humans and computer algorithms. Noise reduction (NR) is therefore an essential step preceding other image processing steps such as edge detection, image segmentation, image compression, etc. A Parallel Feature Extractor (PFE), meant to capture the visual content of images, involves partitioning images into segments, detecting image overlaps if any, and controlling distributed and redistributed segments to extract the features. Working on low-light-intensity images makes the PFE face challenges and depend closely on the quality of its pre-processing steps. Some papers have suggested many well-established NR as well as PFE strategies; however, only a few resources have suggested or mentioned the correlation between them. This paper reviews the best approaches to NR and the PFE, with a detailed explanation of the suggested correlation. This finding may suggest relevant strategies for PFE development. With the help of knowledge-based reasoning, computational approaches and algorithms, we present a correlation study between the NR and the PFE that can be useful for the development and enhancement of existing PFEs.
NASA Astrophysics Data System (ADS)
Stumpf, André; Malet, Jean-Philippe
2016-04-01
For more than 20 years, "Earth Observation" (EO) satellites developed or operated by ESA have provided a wealth of data. In the coming years, the Sentinel missions, along with the Copernicus Contributing Missions as well as Earth Explorers and other, third-party missions, will provide routine monitoring of our environment at the global scale, thereby delivering an unprecedented amount of data. While the availability of this growing volume of environmental data from space represents a unique opportunity for science, general R&D, and applications, it also poses major challenges for fully exploiting the potential of archived and daily incoming datasets. Those challenges comprise not only the discovery, access, processing, and visualization of large data volumes but also an increasing diversity of data sources and end users from different fields (e.g. EO, in-situ monitoring, and modeling). In this context, the GTEP (Geohazards Thematic Exploitation Platform) initiative aims to build an operational distributed processing platform to maximize the exploitation of EO data from past and future satellite missions for the detection and monitoring of natural hazards. This presentation focuses on the "Optical Image Correlation" Pilot Project (funded by ESA within the GTEP platform), whose objectives are to develop an easy-to-use, flexible and distributed processing chain for: 1) the automated reconstruction of surface Digital Elevation Models from stereo (and tristereo) pairs of Spot 6/7 and Pléiades satellite imagery, 2) the creation of ortho-images (panchromatic and multi-spectral) of Landsat 8, Sentinel-2, Spot 6/7 and Pléiades scenes, and 3) the calculation of horizontal (E-N) displacement vectors based on sub-pixel image correlation. The processing chain is being implemented in the GEP cloud-based (Hadoop, MapReduce) environment and designed for analysis of surface displacements at local to regional scales (10-1000 km2), targeting in particular co-seismic displacement and slow-moving landslides. The processing targets both the analysis of time series of archived data (Pléiades, Landsat 8) and the current satellite missions Spot 6/7 and Sentinel-2. The possibility of rapid calculation in near-real time is an important aspect of the design of the processing chain. Archived datasets will be processed for several demonstrator test sites in order to develop and test the implemented workflows.
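To illustrate the sub-pixel image correlation step in isolation, a minimal Python sketch follows. Phase cross-correlation is used here as a generic stand-in for the correlator in such chains, the images are synthetic, and the 1/100-pixel upsampling factor is illustrative.

```python
# Minimal sketch: sub-pixel offset estimation between two co-registered image patches.
import numpy as np
from scipy.ndimage import shift
from skimage.registration import phase_cross_correlation

reference = np.random.rand(256, 256)
moving = shift(reference, (0.37, -1.24))          # simulate a small ground displacement

# The returned offset registers `moving` back onto `reference`; its negation is the displacement.
offset, error, _ = phase_cross_correlation(reference, moving, upsample_factor=100)
print("estimated displacement (pixels):", -offset)
```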
NASA Astrophysics Data System (ADS)
Linares, Rodrigo; Vergara, German; Gutiérrez, Raúl; Fernández, Carlos; Villamayor, Víctor; Gómez, Luis; González-Camino, Maria; Baldasano, Arturo; Castro, G.; Arias, R.; Lapido, Y.; Rodríguez, J.; Romero, Pablo
2015-05-01
The combination of flexibility, productivity, precision and zero-defect manufacturing in future laser-based equipment is a major challenge facing this enabling technology. New sensors for online monitoring and real-time control of laser-based processes are necessary for improving product quality and increasing manufacturing yields. New approaches to fully automated processing towards zero-defect manufacturing demand smarter heads in which lasers, optics, actuators, sensors and electronics are integrated in a single compact and affordable device. Many defects arising in laser-based manufacturing processes come from instabilities in the dynamics of the laser process. Temperature and heat dynamics are key parameters to be monitored. Low-cost infrared imagers with a high speed of response will constitute the next generation of sensors to be implemented in future monitoring and control systems for laser-based processes, capable of providing simultaneous information about heat dynamics and spatial distribution. This work describes the results of using an innovative low-cost high-speed infrared imager based on the first quantum infrared imager monolithically integrated with a Si-CMOS ROIC on the market. The sensor is able to provide low-resolution images at frame rates of up to 10 kHz in uncooled operation at the same cost as traditional infrared spot detectors. In order to demonstrate the capabilities of the new sensor technology, a low-cost camera was assembled on a standard production laser welding head, allowing melt-pool images to be recorded at frame rates of 10 kHz. In addition, dedicated software was developed for defect detection and classification. Multiple laser welding processes were recorded with the aim of studying the performance of the system and its application to the real-time monitoring of laser welding processes. During the experiments, different types of defects were produced and monitored. The classifier was fed with the experimental images obtained. Self-learning strategies were implemented with very promising results, demonstrating the feasibility of using low-cost high-speed infrared imagers in advancing towards real-time, in-line, zero-defect production systems.
NASA Astrophysics Data System (ADS)
Owen, S. E.; Hua, H.; Rosen, P. A.; Agram, P. S.; Webb, F.; Simons, M.; Yun, S. H.; Sacco, G. F.; Liu, Z.; Fielding, E. J.; Lundgren, P.; Moore, A. W.
2017-12-01
A new era of geodetic imaging arrived with the launch of the ESA Sentinel-1A/B satellites in 2014 and 2016, and with the 2016 confirmation of the NISAR mission, planned for launch in 2021. These missions assure high-quality, freely and openly distributed, regularly sampled SAR data into the indefinite future. These unprecedented data sets are a watershed for the solid earth sciences as we progress towards the goal of ubiquitous InSAR measurements. We now face the challenge of how best to address the massive volumes of data and intensive processing requirements. Should scientists individually process the same data independently themselves? Should a centralized service provider create standard products that all can use? Are there other approaches to accelerate science that are cost effective and efficient? The Advanced Rapid Imaging and Analysis (ARIA) project, a joint venture co-sponsored by the California Institute of Technology (Caltech) and by NASA through the Jet Propulsion Laboratory (JPL), is focused on rapidly generating higher-level geodetic imaging products and placing them in the hands of the solid earth science and local, national, and international natural hazard communities by providing science product generation, exploration, and delivery capabilities at an operational level. However, there are challenges in defining the optimal InSAR data products for the solid earth science community. In this presentation, we will present our experience with InSAR users, our lessons learned, the advantages of on-demand and standard products, and our proposal for the most effective path forward.
NASA Astrophysics Data System (ADS)
Alderliesten, Tanja; Bosman, Peter A. N.; Sonke, Jan-Jakob; Bel, Arjan
2014-03-01
Currently, two major challenges dominate the field of deformable image registration. The first challenge is related to the tuning of the developed methods to specific problems (i.e. how to best combine different objectives such as similarity measure and transformation effort). This is one of the reasons why, despite significant progress, clinical implementation of such techniques has proven to be difficult. The second challenge is to account for large anatomical differences (e.g. large deformations, (dis)appearing structures) that occur between image acquisitions. In this paper, we study a framework based on multi-objective optimization to improve registration robustness and to simplify tuning for specific applications. Within this framework we specifically consider the use of an advanced model-based evolutionary algorithm for optimization and a dual-dynamic transformation model (i.e. two "non-fixed" grids: one for the source and one for the target image) to accommodate large anatomical differences. The framework computes and presents multiple outcomes that represent efficient trade-offs between the different objectives (a so-called Pareto front). In image processing it is common practice, for reasons of robustness and accuracy, to use a multi-resolution strategy. This is, however, only well established for single-objective registration methods. Here we describe how such a strategy can be realized for our multi-objective approach and compare its results with a single-resolution strategy. For this study we selected the case of prone-supine breast MRI registration. Results show that the well-known advantages of a multi-resolution strategy are successfully transferred to our multi-objective approach, resulting in superior (i.e. Pareto-dominating) outcomes.
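As a reminder of what Pareto dominance and a Pareto front mean in this setting, a minimal Python sketch follows; it assumes both objectives (e.g. image dissimilarity and transformation effort) are minimized, and the candidate values are illustrative, not results from the cited work.

```python
# Minimal sketch: Pareto-dominance test and front extraction for two minimized objectives.
def dominates(a, b):
    """True if solution a is at least as good as b in all objectives and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other is not s)]

# (dissimilarity, transformation effort) pairs; the last one is dominated by the first.
candidates = [(0.12, 3.4), (0.10, 5.1), (0.15, 2.0), (0.13, 3.6)]
print(pareto_front(candidates))
```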
Using color management in color document processing
NASA Astrophysics Data System (ADS)
Nehab, Smadar
1995-04-01
Color Management Systems have been used for several years in Desktop Publishing (DTP) environments. While this development has not yet matured, we are already experiencing the next generation of the color imaging revolution: device-independent color for the small office/home office (SOHO) environment. Though there are still open technical issues with device-independent color matching, they are not the focal point of this paper. This paper discusses two new and crucial aspects of using color management in color document processing: the management of color objects and their associated color rendering methods, and a proposal for a precedence order and handshaking protocol among the various software components involved in color document processing. As color peripherals become affordable to the SOHO market, color management also becomes a prerequisite for common document authoring applications such as word processors. The first color management solutions were oriented towards DTP environments, whose requirements were largely different. For example, DTP documents are image-centric, as opposed to SOHO documents, which are text- and chart-centric. To achieve optimal reproduction on low-cost SOHO peripherals, it is critical that different color rendering methods are used for the different document object types. The first challenge in using color management in color document processing is therefore the association of rendering methods with object types. As a result of an evolutionary process, color matching solutions are now available as application software, as driver-embedded software and as operating system extensions. Consequently, document processing faces a new challenge: the correct selection of the color matching solution while avoiding duplicate color corrections.
LDFT-based watermarking resilient to local desynchronization attacks.
Tian, Huawei; Zhao, Yao; Ni, Rongrong; Qin, Lunming; Li, Xuelong
2013-12-01
A watermarking scheme that is robust against desynchronization attacks (DAs) remains a grand challenge. Most image watermarking resynchronization schemes in the literature can survive individual global DAs (e.g., rotation, scaling, translation, and other affine transforms), but few are resilient to challenging cropping and local DAs. The main reason is that robust features for watermark synchronization are only globally invariant rather than locally invariant. In this paper, we present a blind image watermarking resynchronization scheme against local transform attacks. First, we propose a new feature transform named the local daisy feature transform (LDFT), which is not only globally but also locally invariant. Then, a binary space partitioning (BSP) tree is used to partition the geometrically invariant LDFT space. In the BSP tree, the location of each pixel is fixed under global transforms, local transforms, and cropping. Lastly, the watermark sequence is embedded bit by bit into each leaf node of the BSP tree using the logarithmic quantization index modulation embedding method. Simulation results show that the proposed watermarking scheme can survive numerous kinds of distortions, including common image-processing attacks, local and global DAs, and noninvertible cropping.
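To illustrate the embedding primitive named above, a minimal Python sketch of logarithmic quantization index modulation (QIM) for a single feature value follows. The log-domain mapping and step size are illustrative assumptions, not the authors' exact construction.

```python
# Minimal sketch: embed and extract one bit via logarithmic QIM on a scalar feature value.
import numpy as np

def qim_embed(value, bit, step=0.1):
    """Embed one bit by moving log(1 + value) into a lattice cell whose parity encodes the bit."""
    v = np.log1p(value)
    q = np.floor(v / step)
    if int(q) % 2 != bit:          # shift to the neighbouring cell with the right parity
        q += 1
    return np.expm1((q + 0.5) * step)  # return to the original (linear) domain, cell centre

def qim_extract(value, step=0.1):
    return int(np.floor(np.log1p(value) / step)) % 2

x = 12.7
for b in (0, 1):
    y = qim_embed(x, b)
    print("embedded bit", b, "-> extracted", qim_extract(y), "value", round(y, 3))
```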
'Endurance': A Daunting Challenge
NASA Technical Reports Server (NTRS)
2004-01-01
This image shows the approximate size of the Mars Exploration Rover Opportunity in comparison to the impressive impact crater dubbed 'Endurance,' which is roughly 130 meters (430 feet) across. A model of Opportunity has been superimposed on top of an approximate true-color image taken by the rover's panoramic camera. Scientists are eager to explore Endurance for clues to the red planet's history. The crater's exposed walls provide a window to what lies beneath the surface of Mars and thus what geologic processes occurred there in the past. While recent studies of the smaller crater nicknamed 'Eagle' revealed an evaporating body of salty water, that crater was not deep enough to indicate what came before the water. Endurance may be able to help answer this question, but the challenge is getting to the scientific targets: most of the crater's rocks are embedded in vertical cliffs. Rover planners are developing strategies to overcome this obstacle. This image is a portion of a larger mosaic taken with the panoramic camera's 480-, 530- and 750-nanometer filters on sols 97 and 98.
Saroha, Kartik; Pandey, Anil Kumar; Sharma, Param Dev; Behera, Abhishek; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh
2017-01-01
The detection of abdomino-pelvic tumors embedded in or near radioactive urine containing 18F-FDG activity is a challenging task on PET/CT scans. In this study, we propose and validate a suprathreshold stochastic resonance-based image processing method for the detection of these tumors. The method consists of adding noise to the input image and then thresholding it, which creates one frame of an intermediate image. One hundred such frames were generated and averaged to obtain the final image. The method was implemented using MATLAB R2013b on a personal computer. The noisy image was generated using random Poisson variates corresponding to each pixel of the input image. In order to verify the method, 30 sets of pre-diuretic and corresponding post-diuretic PET/CT scan images (25 tumor images and 5 control images with no tumor) were included. For each pre-diuretic image (the input image), 26 images were created (at threshold values equal to the mean counts multiplied by a constant factor ranging from 1.0 to 2.6 in increments of 0.1) and visually inspected, and the image that most closely matched the gold standard (the corresponding post-diuretic image) was selected as the final output image. These images were further evaluated by two nuclear medicine physicians. In 22 of the 25 images, the tumor was successfully detected. In the five control images, no false positives were reported. Thus, the empirical probability of detecting abdomino-pelvic tumors evaluates to 0.88. The proposed method was able to detect abdomino-pelvic tumors on pre-diuretic PET/CT scans with a high probability of success and no false positives.
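The described procedure (Poisson noise, thresholding, averaging of 100 frames) is simple enough to sketch directly; a minimal Python rendering follows. The abstract states the work was done in MATLAB, so this is only an equivalent illustration, with the threshold factor fixed at one of the 26 values the authors scanned and a synthetic input slice.

```python
# Minimal sketch: suprathreshold stochastic resonance on a PET count image.
import numpy as np

def ssr_image(counts, threshold_factor=1.5, n_frames=100, rng=None):
    """Average n_frames binary frames obtained by Poisson-perturbing and thresholding the input."""
    rng = rng or np.random.default_rng(0)
    threshold = threshold_factor * counts.mean()
    frames = [(rng.poisson(counts) > threshold).astype(float) for _ in range(n_frames)]
    return np.mean(frames, axis=0)

pet_slice = np.random.poisson(20, size=(128, 128)).astype(float)  # placeholder input image
output = ssr_image(pet_slice, threshold_factor=1.5)
print(output.min(), output.max())
```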
Challenges of image placement and overlay at the 90-nm and 65-nm nodes
NASA Astrophysics Data System (ADS)
Trybula, Walter J.
2003-05-01
The technology acceleration of the ITRS Roadmap has many implications for both the semiconductor supplier community and the manufacturers. International SEMATECH has been leading and supporting efforts to investigate the impact of the technology introduction. This paper examines the issue of manufacturing tolerances available for image placement on adjacent critical levels (overlay) at the 90nm and 65nm technology nodes. The allowable values from the 2001 release of the ITRS Roadmap are 32nm for the 90nm node and 23nm for the 65nm node. Even the 130nm node has overlay requirements of only 46nm. Using tolerances that can be predicted, the accumulation of existing production and processing tolerances provides an indication of the challenges facing the manufacturer in the production of 90nm and 65nm node devices.
Topical Review: Unique Contributions of Magnetic Resonance Imaging to Pediatric Psychology Research
Duraccio, Kara M.; Carbine, Kaylie M.; Kirwan, C. Brock
2016-01-01
Objective This review aims to provide a brief introduction of the utility of magnetic resonance imaging (MRI) methods in pediatric psychology research, describe several exemplar studies that highlight the unique benefits of MRI techniques for pediatric psychology research, and detail methods for addressing several challenges inherent to pediatric MRI research. Methods Literature review. Results Numerous useful applications of MRI research in pediatric psychology have been illustrated in published research. MRI methods yield information that cannot be obtained using neuropsychological or behavioral measures. Conclusions Using MRI in pediatric psychology research may facilitate examination of neural structures and processes that underlie health behaviors. Challenges inherent to conducting MRI research with pediatric research participants (e.g., head movement) may be addressed using evidence-based strategies. We encourage pediatric psychology researchers to consider adopting MRI techniques to answer research questions relevant to pediatric health and illness. PMID:26141118
Credibility assessments: operational issues and technology impact for law enforcement applications
NASA Astrophysics Data System (ADS)
Ryan, Andrew H., Jr.; Pavlidis, Ioannis; Rohrbaugh, J. W.; Marchak, Frank; Kozel, F. Andrew
2003-09-01
Law enforcement personnel are faced with new challenges to rapidly assess the credibility of statements made by individuals in airports, at border crossings, and in a variety of environments not conducive to interviews. New technologies may offer assistance to law enforcement personnel in the interview and interrogation process. Additionally, homeland defense against terrorism challenges scientists to develop new methods of assessing truthfulness and credibility in humans. Current findings of four advanced research projects looking at emerging technologies in credibility assessment are presented for discussion. This paper discusses research efforts on four emerging technologies now underway at DoDPI and other institutions: (1) Thermal Image Analysis (TIA); (2) Laser Doppler Vibrometry (LDV); (3) Eye Movement based Memory Assessment (EMMA); and (4) functional Magnetic Resonance Imaging (fMRI). A description of each technique, the current state of these research efforts, and an overview of the potential of each of these emerging technologies are provided.
Spinal cord grey matter segmentation challenge.
Prados, Ferran; Ashburner, John; Blaiotta, Claudia; Brosch, Tom; Carballido-Gamio, Julio; Cardoso, Manuel Jorge; Conrad, Benjamin N; Datta, Esha; Dávid, Gergely; Leener, Benjamin De; Dupont, Sara M; Freund, Patrick; Wheeler-Kingshott, Claudia A M Gandini; Grussu, Francesco; Henry, Roland; Landman, Bennett A; Ljungberg, Emil; Lyttle, Bailey; Ourselin, Sebastien; Papinutto, Nico; Saporito, Salvatore; Schlaeger, Regina; Smith, Seth A; Summers, Paul; Tam, Roger; Yiannakas, Marios C; Zhu, Alyssa; Cohen-Adad, Julien
2017-05-15
An important image processing step in spinal cord magnetic resonance imaging is the ability to reliably and accurately segment grey and white matter for tissue-specific analysis. There are several semi- or fully-automated segmentation methods for cervical cord cross-sectional area measurement with excellent performance, close or equal to that of manual segmentation. However, grey matter segmentation is still challenging due to its small cross-sectional size and shape, and active research is being conducted by several groups around the world in this field. Therefore, a grey matter spinal cord segmentation challenge was organised to test the capabilities of various methods using the same multi-centre, multi-vendor dataset acquired with distinct 3D gradient-echo sequences. This challenge aimed to characterize the state of the art in the field as well as to identify new opportunities for future improvements. Six different spinal cord grey matter segmentation methods, developed independently by research groups across the world, were compared to manual segmentation outcomes, the present gold standard. All algorithms provided good overall results for detecting the grey matter butterfly, albeit with variable performance in certain quality-of-segmentation metrics. The data have been made publicly available and the challenge web site remains open to new submissions. No modifications were introduced to any of the presented methods as a result of this challenge for the purposes of this publication. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
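For reference, a minimal Python sketch of the Dice overlap, the kind of quality-of-segmentation metric used to score submissions against the manual grey-matter masks, follows; the masks here are synthetic placeholders.

```python
# Minimal sketch: Dice overlap between an automated mask and a manual (gold-standard) mask.
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

gt = np.zeros((64, 64), dtype=bool); gt[20:40, 20:40] = True     # manual mask (placeholder)
pred = np.zeros_like(gt); pred[22:42, 22:42] = True              # automated mask (placeholder)
print("Dice:", round(dice(gt, pred), 3))
```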
Design and manufacture of imaging time-of-propagation optics
NASA Astrophysics Data System (ADS)
Albrecht, Mike; Fast, James; Schwartz, Alan
2016-09-01
There are several challenges associated with the design and manufacture of the optics required for the imaging time-of-propagation (iTOP) detector constructed for the Belle II particle physics experiment. This detector uses Cherenkov light radiated in quartz bars to identify subatomic particles: pions, kaons, and protons. The optics are physically large (125 cm x 45 cm x 2 cm bars and 45 cm x 10 cm x 5 cm prisms), all surfaces are optically polished, and there is very little allowance for chamfers or surface defects. In addition to the optical challenges, there are several logistical and handling challenges associated with measuring, assembling, cleaning, packaging, and shipping these delicate precision optics. This paper describes a collaborative effort between Pacific Northwest National Laboratory, the University of Cincinnati, and ZYGO Corporation for the design and manufacture of 48 fused silica optics (30 bars and 18 prisms) for the iTOP detector. Details of the iTOP detector design that drove the challenging optical requirements are provided, along with material selection considerations. Since the optics are so large, precise, and delicate, special care had to be given to the selection of a manufacturing process capable of achieving the challenging optical and surface-defect requirements on such large, high-aspect-ratio (66:1) components. A brief update on the current status and performance of these optics is also provided.
Ravi, Daniele; Fabelo, Himar; Callic, Gustavo Marrero; Yang, Guang-Zhong
2017-09-01
Recent advances in hyperspectral imaging have made it a promising solution for intra-operative tissue characterization, with the advantages of being non-contact, non-ionizing, and non-invasive. Working with hyperspectral images in vivo, however, is not straightforward, as the high dimensionality of the data makes real-time processing challenging, and existing approaches to dimensionality reduction based on manifold embedding can be time consuming and may not guarantee a consistent result, thus hindering final tissue classification. In this paper, a novel dimensionality reduction scheme and a new processing pipeline are introduced to obtain a detailed tumor classification map for intra-operative margin definition during brain surgery. The proposed framework overcomes these problems through a process divided into two steps: dimensionality reduction based on an extension of the T-distributed stochastic neighbor approach is performed first, and then a semantic segmentation technique is applied to the embedded results using a Semantic Texton Forest for tissue classification. Detailed in vivo validation of the proposed method has been performed to demonstrate the potential clinical value of the system.
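A minimal sketch of the two-step idea follows, using off-the-shelf stand-ins: standard t-SNE in place of the paper's extended embedding, and a random forest in place of the Semantic Texton Forest. The input names (spectra, labelled_idx, labels) are hypothetical.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.ensemble import RandomForestClassifier

def tissue_map(spectra: np.ndarray, labelled_idx: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Embed per-pixel spectra into a low-dimensional space, then classify the
    embedded pixels; a sketch only (t-SNE over all pixels is slow in practice)."""
    embedded = TSNE(n_components=2, init="pca", random_state=0).fit_transform(spectra)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(embedded[labelled_idx], labels)   # train on the labelled subset
    return clf.predict(embedded)              # tissue class for every pixel
```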
Thermal imaging as a smartphone application: exploring and implementing a new concept
NASA Astrophysics Data System (ADS)
Yanai, Omer
2014-06-01
Today's world is going mobile. Smartphone devices have become an important part of everyday life for billions of people around the globe. Thermal imaging cameras have been around for half a century and are now making their way into our daily lives. Originally built for military applications, thermal cameras are starting to be considered for personal use, enabling enhanced vision and temperature mapping for different groups of professional individuals. Through a revolutionary concept that turns smartphones into fully functional thermal cameras, we have explored how these two worlds can converge by utilizing the best of each technology. We will present the thought process, design considerations and outcome of our development process, resulting in a low-power, high-resolution, lightweight USB thermal imaging device that turns Android smartphones into thermal cameras. We will discuss the technological challenges we faced during development of the product and the system design decisions taken during implementation, and share some insights we came across along the way. Finally, we will discuss the opportunities that this innovative technology brings to the market.
Quantitative Aspects of Single Molecule Microscopy
Ober, Raimund J.; Tahmasbi, Amir; Ram, Sripad; Lin, Zhiping; Ward, E. Sally
2015-01-01
Single molecule microscopy is a relatively new optical microscopy technique that allows the detection of individual molecules such as proteins in a cellular context. This technique has generated significant interest among biologists, biophysicists and biochemists, as it holds the promise to provide novel insights into subcellular processes and structures that otherwise cannot be gained through traditional experimental approaches. Single molecule experiments place stringent demands on experimental and algorithmic tools due to the low signal levels and the presence of significant extraneous noise sources. Consequently, this has necessitated the use of advanced statistical signal and image processing techniques for the design and analysis of single molecule experiments. In this tutorial paper, we provide an overview of single molecule microscopy from early works to current applications and challenges. Specific emphasis will be on the quantitative aspects of this imaging modality, in particular single molecule localization and resolvability, which will be discussed from an information theoretic perspective. We review the stochastic framework for image formation, different types of estimation techniques and expressions for the Fisher information matrix. We also discuss several open problems in the field that demand highly non-trivial signal processing algorithms. PMID:26167102
Review methods for image segmentation from computed tomography images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mamat, Nurwahidah; Rahman, Wan Eny Zarina Wan Abdul; Soh, Shaharuddin Cik
Image segmentation is a challenging process when accuracy, automation and robustness are required, especially for medical images. There exist many segmentation methods that can be applied to medical images, but not all methods are suitable. For medical purposes, the aims of image segmentation are to study the anatomical structure, identify the region of interest, measure tissue volume to monitor tumor growth, and help in treatment planning prior to radiation therapy. In this paper, we present a review of segmentation methods for Computed Tomography (CT) images. CT images have their own characteristics that affect the ability to visualize anatomic structures and pathologic features, such as blurring of the image and visual noise. The details of the methods, their strengths and the problems incurred in each method are defined and explained. It is necessary to know the suitable segmentation method in order to get accurate segmentation. This paper can serve as a guide for researchers choosing a suitable segmentation method, especially for segmenting images from CT scans.
Cameras and settings for optimal image capture from UAVs
NASA Astrophysics Data System (ADS)
Smith, Mike; O'Connor, James; James, Mike R.
2017-04-01
Aerial image capture has become very common within the geosciences due to the increasing affordability of low payload (<20 kg) Unmanned Aerial Vehicles (UAVs) for consumer markets. Their application to surveying has led to many studies being undertaken using UAV imagery captured from consumer grade cameras as primary data sources. However, image quality and the principles of image capture are seldom given rigorous discussion, which can make experiments difficult to reproduce accurately. In this contribution we revisit the underpinning concepts behind image capture, from which the requirements for acquiring sharp, well exposed and suitable imagery are derived. This leads to a discussion of how to optimise the platform, camera, lens and imaging settings relevant to image quality planning, presenting some worked examples as a guide. Finally, we challenge the community to make their image data open for review in order to ensure confidence in the outputs/error estimates, allow reproducibility of the results and make these comparable with future studies. We recommend providing open access imagery where possible, a range of example images, and detailed metadata to rigorously describe the image capture process.
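In the spirit of the worked examples mentioned above, the sketch below relates ground sample distance to the slowest shutter speed that keeps forward motion blur below a fraction of a pixel; all numeric values are illustrative assumptions, not figures from the paper.

```python
def ground_sample_distance(pixel_pitch_m: float, focal_length_m: float, altitude_m: float) -> float:
    """Nadir ground sample distance (m/pixel) from similar triangles."""
    return pixel_pitch_m * altitude_m / focal_length_m

def max_shutter_time(gsd_m: float, ground_speed_ms: float, max_blur_pixels: float = 0.5) -> float:
    """Slowest shutter time (s) keeping forward motion blur below max_blur_pixels."""
    return max_blur_pixels * gsd_m / ground_speed_ms

# Assumed example: 2.4 um pixel pitch, 16 mm lens, 100 m altitude, 12 m/s ground speed.
gsd = ground_sample_distance(2.4e-6, 16e-3, 100.0)   # 0.015 m/pixel
t_max = max_shutter_time(gsd, 12.0)                  # 6.25e-4 s, i.e. about 1/1600 s
```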
NASA Astrophysics Data System (ADS)
Ying, Changsheng; Zhao, Peng; Li, Ye
2018-01-01
The intensified charge-coupled device (ICCD) is widely used in the field of low-light-level (LLL) imaging. The LLL images captured by ICCD suffer from low spatial resolution and contrast, and the target details can hardly be recognized. Super-resolution (SR) reconstruction of LLL images captured by ICCDs is a challenging issue. The dispersion in the double-proximity-focused image intensifier is the main factor that leads to a reduction in image resolution and contrast. We divide the integration time into subintervals that are short enough to get photon images, so the overlapping effect and overstacking effect of dispersion can be eliminated. We propose an SR reconstruction algorithm based on iterative projection photon localization. In the iterative process, the photon image is sliced by projection planes, and photons are screened under the constraints of regularity. Accurate position information for the incident photons in the reconstructed SR image is obtained by weighted centroid calculation. The experimental results show that the spatial resolution and contrast of our SR image are significantly improved.
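A minimal sketch of the weighted-centroid localization step is given below: photon blobs are detected by thresholding a short-exposure frame and each blob's sub-pixel position is taken as its intensity-weighted centroid. The threshold and function names are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def photon_centroids(photon_frame: np.ndarray, threshold: float) -> np.ndarray:
    """Sub-pixel photon positions as intensity-weighted centroids of blobs above a threshold."""
    mask = photon_frame > threshold
    labels, n_blobs = ndimage.label(mask)
    if n_blobs == 0:
        return np.empty((0, 2))
    # center_of_mass weights each labelled blob by its pixel intensities.
    centers = ndimage.center_of_mass(photon_frame, labels, index=range(1, n_blobs + 1))
    return np.asarray(centers)   # (row, col) coordinate per detected photon
```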
Guided filter-based fusion method for multiexposure images
NASA Astrophysics Data System (ADS)
Hou, Xinglin; Luo, Haibo; Qi, Feng; Zhou, Peipei
2016-11-01
It is challenging to capture a high-dynamic range (HDR) scene using a low-dynamic range camera. A weighted sum-based image fusion (IF) algorithm is proposed so as to express an HDR scene with a high-quality image. This method mainly includes three parts. First, two image features, i.e., gradients and well-exposedness, are measured to estimate the initial weight maps. Second, the initial weight maps are refined by a guided filter, in which the source image is considered as the guidance image. This process reduces the noise in the initial weight maps and preserves more texture consistent with the original images. Finally, the fused image is constructed by a weighted sum of source images in the spatial domain. The main contributions of this method are the estimation of the initial weight maps and the appropriate use of guided filter-based weight map refinement, which provides accurate weight maps for IF. Compared to traditional IF methods, this algorithm avoids image segmentation, combination, and camera response curve calibration. Furthermore, experimental results demonstrate the superiority of the proposed method in both subjective and objective evaluations.
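A minimal single-channel sketch of the pipeline described above follows: initial weights from local contrast and well-exposedness, refinement with a box-filter guided filter using each source image as guidance, and a weighted sum in the spatial domain. The Laplacian contrast measure and the radius and eps values are assumptions rather than the paper's exact choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter, laplace

def guided_filter(guide: np.ndarray, src: np.ndarray, radius: int = 8, eps: float = 1e-3) -> np.ndarray:
    """Edge-preserving smoothing of src steered by guide (box-filter form of the guided filter)."""
    box = lambda x: uniform_filter(x, size=2 * radius + 1)
    mean_i, mean_p = box(guide), box(src)
    cov_ip = box(guide * src) - mean_i * mean_p
    var_i = box(guide * guide) - mean_i ** 2
    a = cov_ip / (var_i + eps)
    b = mean_p - a * mean_i
    return box(a) * guide + box(b)

def fuse_exposures(images: list) -> np.ndarray:
    """Fuse grayscale exposures in [0, 1] by guided-filter-refined weight maps."""
    images = [img.astype(np.float64) for img in images]
    weights = []
    for img in images:
        contrast = np.abs(laplace(img)) + 1e-6                      # local gradient/contrast cue
        exposedness = np.exp(-((img - 0.5) ** 2) / (2 * 0.2 ** 2))  # well-exposedness cue
        weights.append(contrast * exposedness)
    # Refine each weight map with its own source image as guidance, then normalize.
    weights = [np.clip(guided_filter(img, w), 0.0, None) + 1e-12
               for img, w in zip(images, weights)]
    total = np.sum(weights, axis=0)
    weights = [w / total for w in weights]
    # Weighted sum of the source images in the spatial domain.
    return np.sum([w * img for w, img in zip(weights, images)], axis=0)
```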
Large-scale retrieval for medical image analytics: A comprehensive review.
Li, Zhongyu; Zhang, Xiaofan; Müller, Henning; Zhang, Shaoting
2018-01-01
Over the past decades, medical image analytics has been greatly facilitated by the explosion of digital imaging techniques, with huge amounts of medical images produced at ever-increasing quality and diversity. However, conventional methods for analyzing medical images have achieved limited success, as they are not capable of handling such huge amounts of image data. In this paper, we review state-of-the-art approaches for large-scale medical image analysis, which are mainly based on recent advances in computer vision, machine learning and information retrieval. Specifically, we first present the general pipeline of large-scale retrieval and summarize the challenges and opportunities of medical image analytics at a large scale. Then, we provide a comprehensive review of algorithms and techniques relevant to the major processes in the pipeline, including feature representation, feature indexing and searching. On the basis of existing work, we introduce the evaluation protocols and multiple applications of large-scale medical image retrieval, covering a variety of exploratory and diagnostic scenarios. Finally, we discuss future directions of large-scale retrieval, which can further improve the performance of medical image analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
The National Library of Medicine Pill Image Recognition Challenge: An Initial Report.
Yaniv, Ziv; Faruque, Jessica; Howe, Sally; Dunn, Kathel; Sharlip, David; Bond, Andrew; Perillan, Pablo; Bodenreider, Olivier; Ackerman, Michael J; Yoo, Terry S
2016-10-01
In January 2016 the U.S. National Library of Medicine announced a challenge competition calling for the development and discovery of high-quality algorithms and software that rank how well consumer images of prescription pills match reference images of pills in its authoritative RxIMAGE collection. This challenge was motivated by the need to easily identify unknown prescription pills, both by healthcare personnel and the general public. Potential benefits of this capability include confirmation of a pill in settings where the documentation and medication have been separated, such as in a disaster or emergency, and confirmation of a pill when the prescribed medication changes from brand to generic, or when for any other reason the shape and color of the pill change. The data for the competition consisted of two types of images: high-quality macro photographs (reference images) and consumer-quality photographs of the quality we expect users of a proposed application to acquire. A training dataset consisting of 2000 reference images and 5000 corresponding consumer-quality images acquired from 1000 pills was provided to challenge participants. A second dataset acquired from 1000 pills with similar distributions of shape and color was reserved as a segregated testing set. Challenge submissions were required to produce a ranking of the reference images, given a consumer-quality image as input. Determination of the winning teams was done using the mean average precision quality metric, with the three winners obtaining mean average precision scores of 0.27, 0.09, and 0.08. In the retrieval results, the correct image was amongst the top five ranked images 43%, 12%, and 11% of the time, out of 5000 query/consumer images. This is an initial promising step towards development of an NLM software system and application-programming interface facilitating pill identification. The training dataset will continue to be freely available online at: http://pir.nlm.nih.gov/challenge/submission.html.
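For readers unfamiliar with the scoring, the sketch below shows one standard way to compute mean average precision over ranked retrieval lists; the exact relevance definition used by the challenge (e.g., how many reference images count as correct per query) is not assumed here.

```python
def average_precision(ranked_ids, relevant_ids):
    """Average precision of one ranked list against a set of relevant reference images."""
    relevant = set(relevant_ids)
    hits, precision_sum = 0, 0.0
    for rank, item in enumerate(ranked_ids, start=1):
        if item in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant) if relevant else 0.0

def mean_average_precision(all_rankings, all_relevant):
    """Mean of per-query average precision over all consumer-image queries."""
    scores = [average_precision(r, rel) for r, rel in zip(all_rankings, all_relevant)]
    return sum(scores) / len(scores) if scores else 0.0
```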
Object detection from images obtained through underwater turbulence medium
NASA Astrophysics Data System (ADS)
Furhad, Md. Hasan; Tahtali, Murat; Lambert, Andrew
2017-09-01
Underwater imaging suffers severe distortions due to random fluctuations of temperature and salinity in the water, which produce turbulence and diffraction-limited blur. Light reflected from objects is perturbed and attenuated, reducing contrast and making the recognition of objects of interest difficult. Detecting underwater objects of interest is therefore a challenging task, as the background, foreground and other image properties are easily confused. In this paper, a saliency-based approach is proposed to detect objects acquired through an underwater turbulent medium. Saliency has drawn attention across a wide range of computer vision applications, such as image retrieval, artificial intelligence, neuro-imaging and object detection. The image is first processed through a deblurring filter. Next, a saliency technique is used on the image for object detection. In this step, a saliency map that highlights the target regions is generated, and a graph-based model is then proposed to extract these target regions for object detection.
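As a generic stand-in for the saliency-map step (not the graph-based model proposed in the paper), the sketch below computes a spectral-residual saliency map on a deblurred grayscale frame; the smoothing sizes and threshold are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(gray: np.ndarray) -> np.ndarray:
    """Spectral-residual saliency map (Hou & Zhang style), normalized to [0, 1]."""
    f = np.fft.fft2(gray.astype(np.float64))
    log_amp = np.log(np.abs(f) + 1e-12)
    phase = np.angle(f)
    residual = log_amp - uniform_filter(log_amp, size=3)   # deviation from smoothed spectrum
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    saliency = gaussian_filter(saliency, sigma=3)
    return (saliency - saliency.min()) / (np.ptp(saliency) + 1e-12)

# A rough target mask could then be obtained by thresholding, e.g.:
# mask = spectral_residual_saliency(deblurred_frame) > 0.5
```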
Precision injection molding of freeform optics
NASA Astrophysics Data System (ADS)
Fang, Fengzhou; Zhang, Nan; Zhang, Xiaodong
2016-08-01
Precision injection molding is the most efficient mass production technology for manufacturing plastic optics. Applications of plastic optics in field of imaging, illumination, and concentration demonstrate a variety of complex surface forms, developing from conventional plano and spherical surfaces to aspheric and freeform surfaces. It requires high optical quality with high form accuracy and lower residual stresses, which challenges both optical tool inserts machining and precision injection molding process. The present paper reviews recent progress in mold tool machining and precision injection molding, with more emphasis on precision injection molding. The challenges and future development trend are also discussed.
A Scalable Approach to Modeling Cascading Risk in the MDAP Network
2014-05-01
Populate Decision Process Model; identify challenges to data acquisition. Legend: ATIE_MOD, Automated Text & Image Extraction Module; IID_MOD. Data sources include DAES, PE docs, and SARS. Topic models built from MDAP hub data seem to be relevant to neighbors. Challenges: formatting and content inconsistencies.
In situ study on atomic mechanism of melting and freezing of single bismuth nanoparticles
Li, Yingxuan; Zang, Ling; Jacobs, Daniel L.; Zhao, Jie; Yue, Xiu; Wang, Chuanyi
2017-01-01
Experimental study of the atomic mechanism in melting and freezing processes remains a formidable challenge. We report herein on a unique material system that allows for in situ growth of bismuth nanoparticles from the precursor compound SrBi2Ta2O9 under an electron beam within a high-resolution transmission electron microscope (HRTEM). Simultaneously, the melting and freezing processes within the nanoparticles are triggered and imaged in real time by the HRTEM. The images show atomic-scale evidence for point defect induced melting, and a freezing mechanism mediated by crystallization of an intermediate ordered liquid. During the melting and freezing, the formation of nucleation precursors, nucleation and growth, and the relaxation of the system, are directly observed. Based on these observations, an interaction–relaxation model is developed towards understanding the microscopic mechanism of the phase transitions, highlighting the importance of cooperative multiscale processes. PMID:28194017
Analysis of cholesterol trafficking with fluorescent probes
Maxfield, Frederick R.; Wüstner, Daniel
2013-01-01
Cholesterol plays an important role in determining the biophysical properties of biological membranes, and its concentration is tightly controlled by homeostatic processes. The intracellular transport of cholesterol among organelles is a key part of the homeostatic mechanism, but sterol transport processes are not well understood. Fluorescence microscopy is a valuable tool for studying intracellular transport processes, but this method can be challenging for lipid molecules because addition of a fluorophore may alter the properties of the molecule greatly. We discuss the use of fluorescent molecules that can bind to cholesterol to reveal its distribution in cells. We also discuss the use of intrinsically fluorescent sterols that closely mimic cholesterol, as well as some minimally modified fluorophore-labeled sterols. Methods for imaging these sterols by conventional fluorescence microscopy and by multiphoton microscopy are described. Some label-free methods for imaging cholesterol itself are also discussed briefly. PMID:22325611
Burnett, Stephanie; Sebastian, Catherine; Kadosh, Kathrin Cohen; Blakemore, Sarah-Jayne
2015-01-01
Social cognition is the collection of cognitive processes required to understand and interact with others. The term ‘social brain’ refers to the network of brain regions that underlies these processes. Recent evidence suggests that a number of social cognitive functions continue to develop during adolescence, resulting in age differences in tasks that assess cognitive domains including face processing, mental state inference and responding to peer influence and social evaluation. Concurrently, functional and structural magnetic resonance imaging (MRI) studies show differences between adolescent and adult groups within parts of the social brain. Understanding the relationship between these neural and behavioural observations is a challenge. This review discusses current research findings on adolescent social cognitive development and its functional MRI correlates, then integrates and interprets these findings in the context of hypothesised developmental neurocognitive and neurophysiological mechanisms. PMID:21036192
Imaging Systems for Size Measurements of Debrisat Fragments
NASA Technical Reports Server (NTRS)
Shiotani, B.; Scruggs, T.; Toledo, R.; Fitz-Coy, N.; Liou, J. C.; Sorge, M.; Huynh, T.; Opiela, J.; Krisko, P.; Cowardin, H.
2017-01-01
The overall objective of the DebriSat project is to provide data to update existing standard spacecraft breakup models. One of the key sets of parameters used in these models is the physical dimensions of the fragments (i.e., length, average cross-sectional area, and volume). For the DebriSat project, only fragments with at least one dimension greater than 2 mm are collected and processed. Additionally, a significant portion of the fragments recovered from the impact test are needle-like and/or flat plate-like fragments whose heights are almost negligible in comparison to their other dimensions. As a result, two fragment size categories were defined: 2D objects and 3D objects. While measurement systems are commercially available, factors such as measurement rates, system adaptability, size characterization limitations and equipment costs presented significant challenges to the project, and a decision was made to develop our own size characterization systems. The size characterization systems consist of two automated imaging systems, one referred to as the 3D imaging system and the other as the 2D imaging system. Which imaging system to use depends on the classification of the fragment being measured. Both imaging systems utilize point-and-shoot cameras for image acquisition and create representative point clouds of the fragments. The 3D imaging system utilizes a space-carving algorithm to generate a 3D point cloud, while the 2D imaging system utilizes an edge detection algorithm to generate a 2D point cloud. From the point clouds, the three largest orthogonal dimensions are determined using a convex hull algorithm. For 3D objects, in addition to the three largest orthogonal dimensions, the volume is computed by applying an alpha-shape algorithm to the point clouds, and the average cross-sectional area is also computed. Both imaging systems automate size measurement (image acquisition and image processing), driven by the need to quickly and accurately measure tens of thousands of debris fragments. Moreover, automation reduces potential fragment damage and mishandling while improving accuracy and repeatability. As the fragment characterization progressed, it became evident that the imaging systems had to be revised; for example, an additional view was added to the 2D imaging system to capture the height of 2D objects. This paper presents the DebriSat project's imaging systems and calculation techniques in detail, from design and development to maturation, and shares the experiences and challenges encountered.
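A minimal sketch of how the three largest orthogonal dimensions could be extracted from a fragment point cloud via its convex hull follows; the exact definition used by DebriSat (ordering of the dimensions, handling of degenerate shapes) is not assumed, and the function name is illustrative.

```python
import numpy as np
from scipy.spatial import ConvexHull

def largest_orthogonal_dimensions(points: np.ndarray):
    """Approximate three mutually orthogonal extents of a 3D point cloud (N x 3)."""
    hull_pts = points[ConvexHull(points).vertices]
    # X: largest distance between any two hull vertices.
    diffs = hull_pts[:, None, :] - hull_pts[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    i, j = np.unravel_index(np.argmax(dists), dists.shape)
    x_len = dists[i, j]
    x_dir = (hull_pts[j] - hull_pts[i]) / x_len
    # Y: largest extent after projecting onto the plane orthogonal to X.
    proj = hull_pts - np.outer(hull_pts @ x_dir, x_dir)
    d2 = np.linalg.norm(proj[:, None, :] - proj[None, :, :], axis=-1)
    k, l = np.unravel_index(np.argmax(d2), d2.shape)
    y_len = d2[k, l]
    y_dir = (proj[l] - proj[k]) / y_len
    # Z: extent along the direction orthogonal to both X and Y.
    z_coords = hull_pts @ np.cross(x_dir, y_dir)
    z_len = z_coords.max() - z_coords.min()
    return x_len, y_len, z_len
```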
NASA Astrophysics Data System (ADS)
Miecznik, Grzegorz; Shafer, Jeff; Baugh, William M.; Bader, Brett; Karspeck, Milan; Pacifici, Fabio
2017-05-01
WorldView-3 (WV-3) is a DigitalGlobe commercial, high resolution, push-broom imaging satellite with three instruments: visible and near-infrared VNIR consisting of panchromatic (0.3 m nadir GSD) plus multi-spectral (1.2 m), short-wave infrared SWIR (3.7 m), and multi-spectral CAVIS (30 m). The nine VNIR bands, which are on one instrument, are nearly perfectly registered to each other, whereas the eight SWIR bands, belonging to the second instrument, are misaligned with respect to VNIR and to each other. Geometric calibration and ortho-rectification result in a VNIR/SWIR alignment accurate to approximately 0.75 SWIR pixel at 3.7 m GSD, whereas inter-SWIR band-to-band registration is accurate to 0.3 SWIR pixel. Numerous high resolution spectral applications, such as object classification and material identification, require more accurate registration, which can be achieved by utilizing image processing algorithms, for example Mutual Information (MI). Although MI-based co-registration algorithms are highly accurate, implementation details for automated processing can be challenging. One particular challenge is how to compute the bin widths of the intensity histograms, which are fundamental building blocks of MI. We solve this problem by making the bin widths proportional to instrument shot noise. Next, we show how to take advantage of multiple VNIR bands and improve registration sensitivity to image alignment. To meet this goal, we employ Canonical Correlation Analysis, which maximizes VNIR/SWIR correlation through an optimal linear combination of VNIR bands. Finally, we explore how to register images corresponding to different spatial resolutions. We show that MI computed at a low-resolution grid is more sensitive to alignment parameters than MI computed at a high-resolution grid. The proposed modifications allow us to improve VNIR/SWIR registration to better than ¼ of a SWIR pixel, as long as terrain elevation is properly accounted for, and clouds and water are masked out.
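The sketch below shows a mutual-information similarity score with explicit histogram bin widths, in the spirit of tying bin widths to an estimate of instrument shot noise; the noise-to-bin-width scaling, the shift search and all names are assumptions rather than the authors' implementation.

```python
import numpy as np

def mutual_information(a: np.ndarray, b: np.ndarray, bin_width_a: float, bin_width_b: float) -> float:
    """Mutual information of two co-registered image patches with explicit bin widths
    (e.g., each proportional to an estimate of that instrument's shot noise)."""
    bins_a = np.arange(a.min(), a.max() + bin_width_a, bin_width_a)
    bins_b = np.arange(b.min(), b.max() + bin_width_b, bin_width_b)
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=[bins_a, bins_b])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Illustrative registration loop: evaluate MI between a fixed VNIR combination and the
# SWIR band resampled over candidate sub-pixel shifts, keeping the shift with maximal MI.
```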
An Intelligent Fingerprint-Biometric Image Scrambling Scheme
NASA Astrophysics Data System (ADS)
Khan, Muhammad Khurram; Zhang, Jiashu
To obstruct attacks and to address the liveness and retransmission issues of biometric images, we have investigated challenge/response-based transmission of scrambled biometric images. We propose an intelligent biometric sensor with the computational power to receive challenges from the authentication server and to generate a response to each challenge along with the encrypted biometric image. We utilize the FRT for biometric image encryption and use its scaling factors and random phase mask as additional secret keys. In addition, we generate the random phase masks chaotically with a chaotic map to further improve the encryption security. Experimental and simulation results have shown that the presented system is secure, robust, and mitigates the risks of attacks on biometric image transmission.
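A minimal sketch of generating a chaotically driven random phase mask with a logistic map follows; the specific map, its parameters, and how the mask is combined with the FRT are assumptions, not details taken from the paper.

```python
import numpy as np

def chaotic_phase_mask(shape, x0: float = 0.3731, r: float = 3.9999, burn_in: int = 1000) -> np.ndarray:
    """Complex phase mask exp(j*2*pi*x) driven by a logistic map seeded with the
    secret initial condition x0 (illustrative; parameter values are assumptions)."""
    n = int(np.prod(shape))
    x = x0
    for _ in range(burn_in):            # discard transients before sampling the chaotic regime
        x = r * x * (1.0 - x)
    seq = np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return np.exp(1j * 2.0 * np.pi * seq.reshape(shape))
```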
NASA Astrophysics Data System (ADS)
Burton, Mike
2015-07-01
Magmatic degassing plays a key role in the dynamics of volcanic activity and also in contributing to the carbon, water and sulphur volatile cycles on Earth. Quantifying the fluxes of magmatic gas emitted from volcanoes is therefore of fundamental importance in Earth Science. This has been recognised since the beginning of modern volcanology, with initial measurements of volcanic SO2 flux being conducted with COrrelation SPECtrometer (COSPEC) instruments from the late seventies. While COSPEC measurements continue today, they have been largely superseded by compact grating spectrometers, which were first introduced soon after the start of the 21st Century. Since 2006, a new approach to measuring fluxes has appeared: quantitative imaging of the SO2 slant column amount in a volcanic plume. Quantitative imaging of volcanic plumes has created new opportunities and challenges, and in April 2013 an ESF-funded MeMoVolC workshop was held with the objectives of bringing together the main research groups, creating a vibrant, interconnected community, and examining the current state of the art of this new research frontier. This special issue of sixteen papers within the Journal of Volcanology and Geothermal Research is the direct result of the discussions, intercomparisons and results reported in that workshop. The papers report on the volcanological objectives of the plume imaging community, the state of the art of the technology used, intercomparisons, validations, novel methods and results from field applications. Quantitative imaging of volcanic plumes is achieved using both infrared and ultraviolet wavelengths, with each wavelength offering a different trade-off of strengths and weaknesses, and the papers in this issue reflect this flexibility. Gas compositions can also be imaged, and this approach offers much promise in the quantification of chemical processing within plumes. One of the key advantages of the plume imaging approach is that gas flux measurements can be achieved at 1-10 Hz, allowing direct comparisons with geophysical measurements and opening new, interdisciplinary opportunities to deepen our understanding of volcanological processes. Several challenges remain, such as dealing with light scattering and fully automating data processing. However, it is clear that quantitative plume imaging will have a lasting and profound impact on how volcano observatories operate, on our ability to forecast and manage volcanic eruptions, on our constraints of global volcanic gas fluxes, and on our understanding of magma dynamics.
NASA Astrophysics Data System (ADS)
Rose, Jake; Martin, Michael; Bourlai, Thirimachos
2014-06-01
In law enforcement and security applications, the acquisition of face images is critical in producing key trace evidence for the successful identification of potential threats. The goal of this study is to demonstrate that steroid usage significantly affects human facial appearance and hence the performance of commercial and academic face recognition (FR) algorithms. In this work, we evaluate the performance of state-of-the-art FR algorithms on two unique face image datasets of subjects before (gallery set) and after (probe set) steroid (or human growth hormone) usage. For the purpose of this study, datasets of 73 subjects were created from multiple sources found on the Internet, containing images of men and women before and after steroid usage. Next, we geometrically pre-processed all images of both face datasets. Then, we applied image restoration techniques on the same face datasets, and finally, we applied FR algorithms to match the pre-processed face images of our probe datasets against the face images of the gallery set. Experimental results demonstrate that only a specific set of FR algorithms obtains the most accurate results (in terms of the rank-1 identification rate). This is because several factors influence the efficiency of face matchers, including (i) the time lapse between the before and after face photos, even after image pre-processing and restoration, (ii) the usage of different drugs (e.g. Dianabol, Winstrol, and Decabolan), (iii) the usage of different cameras to capture face images, and finally, (iv) the variability of standoff distance, illumination and other noise factors (e.g. motion noise). All of these complicating factors make clear that cross-scenario matching is a very challenging problem and, thus, further investigation is required.
Onofrey, John A.; Staib, Lawrence H.; Papademetris, Xenophon
2015-01-01
This paper describes a framework for learning a statistical model of non-rigid deformations induced by interventional procedures. We make use of this learned model to perform constrained non-rigid registration of pre-procedural and post-procedural imaging. We demonstrate results applying this framework to non-rigidly register post-surgical computed tomography (CT) brain images to pre-surgical magnetic resonance images (MRIs) of epilepsy patients who had intra-cranial electroencephalography electrodes surgically implanted. Deformations caused by this surgical procedure, imaging artifacts caused by the electrodes, and the use of multi-modal imaging data make non-rigid registration challenging. Our results show that the use of our proposed framework to constrain the non-rigid registration process results in significantly improved and more robust registration performance compared to using standard rigid and non-rigid registration methods. PMID:26900569
Advances in Small Animal Imaging Systems
NASA Astrophysics Data System (ADS)
Loudos, George K.
2007-11-01
The rapid growth in genetics and molecular biology combined with the development of techniques for genetically engineering small animals has led to an increased interest in in vivo laboratory animal imaging during the past few years. For this purpose, new instrumentation, data acquisition strategies, and image processing and reconstruction techniques are being developed, researched and evaluated. The aim of this article is to give a short overview of the state of the art technologies for high resolution and high sensitivity molecular imaging techniques, primarily positron emission tomography (PET) and single photon emission computed tomography (SPECT). The basic needs of small animal imaging will be described. The evolution in instrumentation in the past two decades, as well as the commercially available systems will be overviewed. Finally, the new trends in detector technology and preliminary results from challenging applications will be presented. For more details a number of references are provided.
Interstitial ablation and imaging of soft tissue using miniaturized ultrasound arrays
NASA Astrophysics Data System (ADS)
Makin, Inder R. S.; Gallagher, Laura A.; Mast, T. Douglas; Runk, Megan M.; Faidi, Waseem; Barthe, Peter G.; Slayton, Michael H.
2004-05-01
A potential alternative to extracorporeal, noninvasive HIFU therapy is minimally invasive, interstitial ultrasound ablation that can be performed laparoscopically or percutaneously. Research in this area at Guided Therapy Systems and Ethicon Endo-Surgery has included development of miniaturized (~3 mm diameter) linear ultrasound arrays capable of high power for bulk tissue ablation as well as broad bandwidth for imaging. An integrated control system allows therapy planning and automated treatment guided by real-time interstitial B-scan imaging. Image quality, challenging because of limited probe dimensions and channel count, is aided by signal processing techniques that improve image definition and contrast. Simulations of ultrasonic heat deposition, bio-heat transfer, and tissue modification provide understanding and guidance for development of treatment strategies. Results from in vitro and in vivo ablation experiments, together with corresponding simulations, will be described. Using methods of rotational scanning, this approach is shown to be capable of clinically relevant ablation rates and volumes.
Mining biomedical images towards valuable information retrieval in biomedical and life sciences.
Ahmed, Zeeshan; Zeeshan, Saman; Dandekar, Thomas
2016-01-01
Biomedical images are helpful sources for scientists and practitioners in drawing significant hypotheses, exemplifying approaches and describing experimental results in published biomedical literature. In recent decades, there has been an enormous increase in the amount of heterogeneous biomedical image production and publication, creating a need for bioimaging platforms that extract and analyse features, text and content in biomedical images so that effective information retrieval systems can be implemented. In this review, we summarize technologies related to data mining of figures. We describe and compare the potential of different approaches in terms of their developmental aspects, methodologies used, results produced, accuracies achieved and limitations. Our comparative conclusions include current challenges for bioimaging software with selective image mining, embedded text extraction and processing of complex natural language queries. © The Author(s) 2016. Published by Oxford University Press.
Estimation of color filter array data from JPEG images for improved demosaicking
NASA Astrophysics Data System (ADS)
Feng, Wei; Reeves, Stanley J.
2006-02-01
On-camera demosaicking algorithms are necessarily simple and therefore do not yield the best possible images. However, off-camera demosaicking algorithms face the additional challenge that the data has been compressed and therefore corrupted by quantization noise. We propose a method to estimate the original color filter array (CFA) data from JPEG-compressed images so that more sophisticated (and better) demosaicking schemes can be applied to get higher-quality images. The JPEG image formation process, including simple demosaicking, color space transformation, chrominance channel decimation and DCT, is modeled as a series of matrix operations followed by quantization on the CFA data, which is estimated by least squares. An iterative method is used to conserve memory and speed computation. Our experiments show that the mean square error (MSE) with respect to the original CFA data is reduced significantly using our algorithm, compared to that of unprocessed JPEG and deblocked JPEG data.
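A minimal sketch of the least-squares estimation idea follows, written as a generic gradient-descent (Landweber-style) iteration on a user-supplied linear forward model; the operator names, step size and iteration count are assumptions, and the authors' exact solver and memory-conserving iteration are not reproduced here.

```python
import numpy as np

def estimate_cfa(y: np.ndarray, forward, adjoint, x0: np.ndarray,
                 step: float = 1.0, iters: int = 50) -> np.ndarray:
    """Iterative least-squares estimate of CFA samples x from decompressed JPEG data y,
    assuming a linear model y ≈ forward(x) (demosaicking, colour transform,
    chrominance decimation and DCT composed as one operator)."""
    x = x0.copy()
    for _ in range(iters):
        residual = forward(x) - y
        x -= step * adjoint(residual)   # gradient step on 0.5 * ||forward(x) - y||^2
    return x
```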
Analysis of x-ray tomography data of an extruded low density styrenic foam: an image analysis study
NASA Astrophysics Data System (ADS)
Lin, Jui-Ching; Heeschen, William
2016-10-01
Extruded styrenic foams are low density foams that are widely used for thermal insulation. It is difficult to precisely characterize the cell structure of low density foams by traditional cross-section viewing due to the frailty of the cell walls. X-ray computed tomography (CT) is a non-destructive, three-dimensional structure characterization technique with great potential for structure characterization of styrenic foams. Unfortunately, the intrinsic artifacts of the data and the artifacts generated during image reconstruction are often comparable in size and shape to the thin walls of the foam, making robust and reliable analysis of cell sizes challenging. We explored three different image processing methods to clean up artifacts in the reconstructed images, thus allowing quantitative three-dimensional determination of cell size in a low density styrenic foam. Three approaches - an intensity-based approach, an intensity-variance-based approach, and a machine-learning-based approach - were explored in this study, and the machine-learning image feature classification method was shown to be the best. Individual cells were segmented within the images after clean-up using the three different methods, and the cell sizes were measured and compared. Although the data collected with these image analysis methods did not yield enough measurements for good cell-size statistics, this problem can be resolved by measuring multiple samples or increasing the imaging field of view.