How Digital Image Processing Became Really Easy
NASA Astrophysics Data System (ADS)
Cannon, Michael
1988-02-01
In the early and mid-1970s, digital image processing was the subject of intense university and corporate research. The research lay along two lines: (1) developing mathematical techniques for improving the appearance of or analyzing the contents of images represented in digital form, and (2) creating cost-effective hardware to carry out these techniques. The research has been very effective, as evidenced by the continued decline of image processing as a research topic and the rapid growth in the number of commercial companies marketing digital image processing software and hardware.
RayPlus: a Web-Based Platform for Medical Image Processing.
Yuan, Rong; Luo, Ming; Sun, Zhi; Shi, Shuyue; Xiao, Peng; Xie, Qingguo
2017-04-01
Medical images can provide valuable information for preclinical research, clinical diagnosis, and treatment. With the widespread use of digital medical imaging, many researchers are currently developing medical image processing algorithms and systems in order to deliver better results to the clinical community, including accurate clinical parameters or processed images derived from the original images. In this paper, we propose a web-based platform to present and process medical images. By using Internet and novel database technologies, authorized users can easily access medical images and run their processing workflows on powerful server-side computing resources without any installation. We implement a series of image processing and visualization algorithms in the initial version of RayPlus. The integrated system allows much flexibility and convenience for both research and clinical communities.
Comparative performance evaluation of transform coding in image pre-processing
NASA Astrophysics Data System (ADS)
Menon, Vignesh V.; NB, Harikrishnan; Narayanan, Gayathri; CK, Niveditha
2017-07-01
We are in the midst of a communication transformation that drives the development and dissemination of pioneering communication systems with ever-increasing fidelity and resolution. Research in image processing techniques has been driven by a growing demand for faster and easier encoding, storage and transmission of visual information. In this paper, the researchers highlight techniques that can be used at the transmitter end to ease the transmission and reconstruction of images. The researchers investigate the performance of different image transform coding schemes used in pre-processing, comparing their effectiveness, necessary and sufficient conditions, properties and implementation complexity. Motivated by prior advancements in image processing techniques, the researchers compare the performance of several contemporary image pre-processing frameworks: Compressed Sensing, Singular Value Decomposition, and the Integer Wavelet Transform. The paper exposes the potential of the Integer Wavelet Transform to be an efficient pre-processing scheme.
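As a rough illustration of one of the compared schemes, the following is a minimal sketch of SVD-based image compression using NumPy; the rank parameter k and the synthetic test image are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def svd_compress(image, k):
    """Approximate a 2-D grayscale image by its rank-k SVD truncation."""
    U, s, Vt = np.linalg.svd(image.astype(float), full_matrices=False)
    # Keep only the k largest singular values; storage drops from
    # m*n values to k*(m + n + 1).
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Example with a synthetic 256x256 image (assumed for illustration).
img = np.random.rand(256, 256)
approx = svd_compress(img, k=32)
mse = np.mean((img - approx) ** 2)
print(f"rank-32 approximation MSE: {mse:.6f}")
```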
Image processing and recognition for biological images
Uchida, Seiichi
2013-01-01
This paper reviews image processing and pattern recognition techniques that are useful for analyzing bioimages. Although this paper does not provide their technical details, it makes it possible to grasp their main tasks and the typical tools used to handle those tasks. Image processing is a large research area aimed at improving the visibility of an input image and acquiring valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition is the technique of classifying an input image into one of several predefined classes, and it is also a large research area. This paper overviews its two main modules, the feature extraction module and the classification module. Throughout the paper, it is emphasized that the bioimage is a very difficult target even for state-of-the-art image processing and pattern recognition techniques, due to noise, deformations, etc. This paper is intended as a tutorial guide to bridge biology and image processing researchers for further collaboration on such a difficult target. PMID:23560739
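To make two of the reviewed tasks concrete, here is a minimal sketch of image filtering and binarization using the scikit-image library (a tooling assumption; the review itself is tool-agnostic).

```python
from skimage import data, filters

# Load a bundled grayscale test image (a stand-in for a bioimage).
image = data.camera()

# Image filtering: Gaussian smoothing to suppress noise.
smoothed = filters.gaussian(image, sigma=2)

# Binarization: Otsu's method picks a global threshold automatically.
threshold = filters.threshold_otsu(smoothed)
binary = smoothed > threshold
print(f"Otsu threshold: {threshold:.3f}, foreground fraction: {binary.mean():.3f}")
```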
A midas plugin to enable construction of reproducible web-based image processing pipelines
Grauer, Michael; Reynolds, Patrick; Hoogstoel, Marion; Budin, Francois; Styner, Martin A.; Oguz, Ipek
2013-01-01
Image processing is an important quantitative technique for neuroscience researchers, but difficult for those who lack experience in the field. In this paper we present a web-based platform that allows an expert to create a brain image processing pipeline, enabling execution of that pipeline even by those biomedical researchers with limited image processing knowledge. These tools are implemented as a plugin for Midas, an open-source toolkit for creating web based scientific data storage and processing platforms. Using this plugin, an image processing expert can construct a pipeline, create a web-based User Interface, manage jobs, and visualize intermediate results. Pipelines are executed on a grid computing platform using BatchMake and HTCondor. This represents a new capability for biomedical researchers and offers an innovative platform for scientific collaboration. Current tools work well, but can be inaccessible for those lacking image processing expertise. Using this plugin, researchers in collaboration with image processing experts can create workflows with reasonable default settings and streamlined user interfaces, and data can be processed easily from a lab environment without the need for a powerful desktop computer. This platform allows simplified troubleshooting, centralized maintenance, and easy data sharing with collaborators. These capabilities enable reproducible science by sharing datasets and processing pipelines between collaborators. In this paper, we present a description of this innovative Midas plugin, along with results obtained from building and executing several ITK based image processing workflows for diffusion weighted MRI (DW MRI) of rodent brain images, as well as recommendations for building automated image processing pipelines. Although the particular image processing pipelines developed were focused on rodent brain MRI, the presented plugin can be used to support any executable or script-based pipeline. PMID:24416016
Biomedical image analysis and processing in clouds
NASA Astrophysics Data System (ADS)
Bednarz, Tomasz; Szul, Piotr; Arzhaeva, Yulia; Wang, Dadong; Burdett, Neil; Khassapov, Alex; Chen, Shiping; Vallotton, Pascal; Lagerstrom, Ryan; Gureyev, Tim; Taylor, John
2013-10-01
The Cloud-based Image Analysis and Processing Toolbox project runs on the Australian National eResearch Collaboration Tools and Resources (NeCTAR) cloud infrastructure and gives researchers access to biomedical image processing and analysis services via remotely accessible user interfaces. By providing user-friendly access to cloud computing resources and new workflow-based interfaces, our solution enables researchers to carry out various challenging image analysis and reconstruction tasks. Several case studies will be presented during the conference.
ERIC Educational Resources Information Center
Greenberg, Richard
1998-01-01
Describes the Image Processing for Teaching (IPT) project, which uses digital image processing to excite students about science and mathematics as they use research-quality software on microcomputers. Provides information on IPT, a dissemination project whose components have included widespread teacher education and curriculum-based materials…
Advanced technology development for image gathering, coding, and processing
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.
1990-01-01
Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
1988-01-01
Two types of research issues are involved in image management systems with space station applications: image processing research and image perception research. The image processing issues are the traditional ones of digitizing, coding, compressing, storing, analyzing, and displaying, but with a new emphasis on the constraints imposed by the human perceiver. Two image coding algorithms have been developed that may increase the efficiency of image management systems (IMS). Image perception research involves a study of the theoretical and practical aspects of visual perception of electronically displayed images. Issues include how rapidly a user can search through a library of images, how to make this search more efficient, and how to present images in terms of resolution and split screens. Other issues include optimal interface to an IMS and how to code images in a way that is optimal for the human perceiver. A test-bed within which such issues can be addressed has been designed.
Internet (WWW) based system of ultrasonic image processing tools for remote image analysis.
Zeng, Hong; Fei, Ding-Yu; Fu, Cai-Ting; Kraft, Kenneth A
2003-07-01
Ultrasonic Doppler color imaging can provide anatomic information and simultaneously render flow information within blood vessels for diagnostic purposes. Many researchers are currently developing ultrasound image processing algorithms in order to provide physicians with accurate clinical parameters from the images. Because researchers use a variety of computer languages and work on different computer platforms to implement their algorithms, it is difficult for other researchers and physicians to access those programs. A system has been developed using World Wide Web (WWW) technologies and HTTP communication protocols to publish our ultrasonic Angle Independent Doppler Color Image (AIDCI) processing algorithm and several general measurement tools on the Internet, where authorized researchers and physicians can easily access the program using web browsers to carry out remote analysis of their local ultrasonic images or images provided from the database. In order to overcome potential incompatibility between programs and users' computer platforms, ActiveX technology was used in this project. The technique developed may also be used in other research fields.
Web-based platform for collaborative medical imaging research
NASA Astrophysics Data System (ADS)
Rittner, Leticia; Bento, Mariana P.; Costa, André L.; Souza, Roberto M.; Machado, Rubens C.; Lotufo, Roberto A.
2015-03-01
Medical imaging research depends fundamentally on the availability of large image collections, image processing and analysis algorithms, hardware, and a multidisciplinary research team. It has to be reproducible, free of errors, fast, accessible through a large variety of devices spread around research centers, and conducted simultaneously by a multidisciplinary team. Therefore, we propose a collaborative research environment, named Adessowiki, where tools and datasets are integrated and readily available on the Internet through a web browser. Moreover, processing history and all intermediate results are stored and displayed in automatically generated web pages for each object in the research project or clinical study. It requires no installation or configuration on the client side and offers centralized tools and specialized hardware resources, since processing takes place in the cloud.
Benos, Dale J; Vollmer, Sara H
2010-12-01
Modifying images for scientific publication is now quick and easy due to changes in technology. This has created a need for new image processing guidelines and attitudes, such as those offered to the research community by Doug Cromey (Cromey 2010). We suggest that related changes in technology have simplified the task of detecting misconduct for journal editors as well as researchers, and that this simplification has caused a shift in the responsibility for reporting misconduct. We also argue that the concept of best practices in image processing can serve as a general model for education in best practices in research.
NASA IMAGESEER: NASA IMAGEs for Science, Education, Experimentation and Research
NASA Technical Reports Server (NTRS)
Le Moigne, Jacqueline; Grubb, Thomas G.; Milner, Barbara C.
2012-01-01
A number of web-accessible databases, including medical, military or other image data, offer universities and other users the ability to teach or research new Image Processing techniques on relevant and well-documented data. However, NASA images have traditionally been difficult for researchers to find, are often only available in hard-to-use formats, and do not always provide sufficient context and background for a non-NASA Scientist user to understand their content. The new IMAGESEER (IMAGEs for Science, Education, Experimentation and Research) database seeks to address these issues. Through a graphically-rich web site for browsing and downloading all of the selected datasets, benchmarks, and tutorials, IMAGESEER provides a widely accessible database of NASA-centric, easy to read, image data for teaching or validating new Image Processing algorithms. As such, IMAGESEER fosters collaboration between NASA and research organizations while simultaneously encouraging development of new and enhanced Image Processing algorithms. The first prototype includes a representative sampling of NASA multispectral and hyperspectral images from several Earth Science instruments, along with a few small tutorials. Image processing techniques are currently represented with cloud detection, image registration, and map cover/classification. For each technique, corresponding data are selected from four different geographic regions, i.e., mountains, urban, water coastal, and agriculture areas. Satellite images have been collected from several instruments - Landsat-5 and -7 Thematic Mappers, Earth Observing-1 (EO-1) Advanced Land Imager (ALI) and Hyperion, and the Moderate Resolution Imaging Spectroradiometer (MODIS). After geo-registration, these images are available in simple common formats such as GeoTIFF and raw formats, along with associated benchmark data.
Imaging has enormous untapped potential to improve cancer research through software to extract and process morphometric and functional biomarkers. In the era of non-cytotoxic treatment agents, multi-modality image-guided ablative therapies and rapidly evolving computational resources, quantitative imaging software can be transformative in enabling minimally invasive, objective and reproducible evaluation of cancer treatment response. Post-processing algorithms are integral to high-throughput analysis and fine-grained differentiation of multiple molecular targets.
Researching on the process of remote sensing video imagery
NASA Astrophysics Data System (ADS)
Wang, He-rao; Zheng, Xin-qi; Sun, Yi-bo; Jia, Zong-ren; Wang, He-zhan
Low-altitude remotely sensed imagery from unmanned air vehicles has the advantages of high resolution, easy shooting, real-time access, etc. It has been widely used in mapping, target identification, and other fields in recent years. However, owing to the operating conditions, the video images are unstable, the targets move fast, and the shooting background is complex, which makes the video images difficult to process. In other fields, especially computer vision, research on video images is more extensive and is very helpful for processing low-altitude remotely sensed imagery. On this basis, this paper analyzes and summarizes a large body of video image processing work from different fields, including research purposes, data sources, and the pros and cons of the technology. Meanwhile, the paper explores the methods best suited to low-altitude remote sensing video image processing.
Complex Event Processing for Content-Based Text, Image, and Video Retrieval
2016-06-01
US Army Research Laboratory technical report ARL-TR-7705, June 2016. Cited reference: Feldman R, Sanger J. The Text Mining Handbook: Advanced Approaches in Analyzing Unstructured Data. New York (NY): Wiley-Interscience; 2000.
Retinal imaging analysis based on vessel detection.
Jamal, Arshad; Hazim Alkawaz, Mohammed; Rehman, Amjad; Saba, Tanzila
2017-07-01
With advances in digital imaging and computing power, computationally intelligent technologies are in high demand for use in ophthalmology care and treatment. In the current research, a Retina Image Analysis (RIA) tool is developed for optometrists at the Eye Care Center in Management and Science University. This research aims to analyze the retina through vessel detection. The RIA assists in the analysis of retinal images, and specialists are offered various options such as saving, processing and analyzing retinal images through its advanced interface layout. Additionally, RIA assists in the selection of vessel segments, processing these vessels by calculating their diameter, standard deviation and length, and displaying detected vessels on the retina. The Agile Unified Process was adopted as the methodology in developing this research. To conclude, Retina Image Analysis might help the optometrist gain a better understanding when analyzing the patient's retina. The Retina Image Analysis procedure was developed using MATLAB (R2011b). Promising results are attained that are comparable with the state of the art. © 2017 Wiley Periodicals, Inc.
Semi-automated camera trap image processing for the detection of ungulate fence crossing events.
Janzen, Michael; Visser, Kaitlyn; Visscher, Darcy; MacLeod, Ian; Vujnovic, Dragomir; Vujnovic, Ksenija
2017-09-27
Remote cameras are an increasingly important tool for ecological research. While remote camera traps collect field data with minimal human attention, the images they collect require post-processing and characterization before they can be ecologically and statistically analyzed, requiring a substantial investment of time and money from researchers. The need for post-processing is due, in part, to a high incidence of non-target images. We developed a stand-alone semi-automated computer program to aid in image processing, categorization, and data reduction by employing background subtraction and histogram rules. Unlike previous work that uses video as input, our program uses still camera trap images. The program was developed for an ungulate fence crossing project and tested against an image dataset which had been previously processed by a human operator. Our program placed images into categories representing the confidence that a particular sequence of images contained a fence crossing event. This resulted in a 54.8% reduction in images that required further human operator characterization while retaining 72.6% of the known fence crossing events. This program can give researchers using remote camera data the ability to reduce the time and cost required for image post-processing and characterization. Further, we discuss how this procedure might be generalized to situations not specifically related to animal use of linear features.
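A minimal sketch of the kind of background subtraction and histogram rule the abstract describes, written with NumPy; the median-of-sequence background model and both threshold values are illustrative assumptions, not the program's actual parameters.

```python
import numpy as np

def candidate_event(frames, diff_thresh=25, changed_frac=0.02):
    """Flag a camera-trap image sequence as a candidate crossing event.

    frames: list of 2-D grayscale arrays from one triggered sequence.
    A pixel counts as 'changed' if it differs from the per-pixel median
    background by more than diff_thresh; the sequence is a candidate if
    any frame changes more than changed_frac of its pixels.
    """
    stack = np.stack([f.astype(float) for f in frames])
    background = np.median(stack, axis=0)          # simple background model
    for frame in stack:
        changed = np.abs(frame - background) > diff_thresh
        if changed.mean() > changed_frac:          # histogram-style rule
            return True
    return False
```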
2012-10-01
A separate image processing course was attended, and this programming language will be used for the research component of this project. The training component of this research has been split into breast imaging and image processing arms.
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; Fales, Carl L.
1990-01-01
Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.
Prescott, Jeffrey William
2013-02-01
The importance of medical imaging for clinical decision making has been steadily increasing over the last four decades. Recently, there has also been an emphasis on medical imaging for preclinical decision making, i.e., for use in pharmaceutical and medical device development. There is also a drive towards quantification of imaging findings through quantitative imaging biomarkers, which can improve the sensitivity, specificity, accuracy and reproducibility of imaged characteristics used for diagnostic and therapeutic decisions. An important component of the discovery, characterization, validation and application of quantitative imaging biomarkers is the extraction of information and meaning from images through image processing and subsequent analysis. However, many advanced image processing and analysis methods are not applied directly to questions of clinical interest, i.e., for diagnostic and therapeutic decision making, a consideration that should be closely linked to the development of such algorithms. This article is meant to address these concerns. First, quantitative imaging biomarkers are introduced by providing definitions and concepts. Then, potential applications of advanced image processing and analysis to areas of quantitative imaging biomarker research are described; specifically, research into osteoarthritis (OA), Alzheimer's disease (AD) and cancer is presented. Then, challenges in quantitative imaging biomarker research are discussed. Finally, a conceptual framework for integrating clinical and preclinical considerations into the development of quantitative imaging biomarkers and their computer-assisted methods of extraction is presented.
Research on pre-processing of QR Code
NASA Astrophysics Data System (ADS)
Sun, Haixing; Xia, Haojie; Dong, Ning
2013-10-01
QR code encodes many kinds of information because of its advantages: large storage capacity, high reliability, ultra-high-speed omnidirectional reading, small printing size and high-efficiency representation of Chinese characters, etc. In order to obtain a clearer binarized image from a complex background and improve the recognition rate of QR code, this paper researches pre-processing methods for QR code (Quick Response Code) and presents algorithms and results of image pre-processing for QR code recognition. The conventional method is improved by modifying Sauvola's adaptive text recognition method. Additionally, a QR code extraction step that adapts to different image sizes and a flexible image correction approach are introduced, improving the efficiency and accuracy of QR code image processing.
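As an illustration of the adaptive binarization step, here is a minimal sketch using scikit-image's implementation of Sauvola's method; the window size and k value are illustrative defaults, not the paper's tuned parameters, and the paper's own modification of the method is not reproduced.

```python
from skimage.filters import threshold_sauvola

def binarize_qr(gray, window_size=25, k=0.2):
    """Adaptive binarization of a grayscale QR-code image.

    Sauvola's method computes a local threshold from the mean and
    standard deviation inside each window, which copes with uneven
    illumination better than a single global threshold.
    """
    thresh = threshold_sauvola(gray, window_size=window_size, k=k)
    return gray > thresh  # True = light background, False = dark modules
```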
1976-03-01
This report summarizes the results of the research program on Image Analysis and Modeling supported by the Defense Advanced Research Projects Agency. The objective is to achieve a better understanding of image structure and to use this knowledge to develop improved image models for use in image analysis and processing tasks such as information extraction, image enhancement and restoration, and coding. The ultimate objective of this research is
Kostopoulos, Spiros; Ravazoula, Panagiota; Asvestas, Pantelis; Kalatzis, Ioannis; Xenogiannopoulos, George; Cavouras, Dionisis; Glotsos, Dimitris
2017-06-01
Histopathology image processing, analysis and computer-aided diagnosis have been shown to be effective assisting tools towards reliable and intra-/inter-observer invariant decisions in traditional pathology. Especially for cancer patients, decisions need to be as accurate as possible in order to increase the probability of optimal treatment planning. In this study, we propose a new image collection library (HICL - Histology Image Collection Library) comprising 3831 histological images of three different diseases, for fostering research in histopathology image processing, analysis and computer-aided diagnosis. Raw data comprised 93, 116 and 55 cases of brain, breast and laryngeal cancer respectively, collected from the archives of the University Hospital of Patras, Greece. The 3831 images were generated from the most representative regions of the pathology, specified by an experienced histopathologist. The HICL Image Collection is free for access under an academic license at http://medisp.bme.teiath.gr/hicl/ . Potential exploitations of the proposed library may span a broad spectrum, such as in image processing to improve visualization, in segmentation for nuclei detection, in decision support systems for second opinion consultations, in statistical analysis for investigation of potential correlations between clinical annotations and imaging findings and, generally, in fostering research on histopathology image processing and analysis. To the best of our knowledge, the HICL constitutes the first attempt towards creation of a reference image collection library in the field of traditional histopathology, publicly and freely available to the scientific community.
Image processing and Quality Control for the first 10,000 brain imaging datasets from UK Biobank.
Alfaro-Almagro, Fidel; Jenkinson, Mark; Bangerter, Neal K; Andersson, Jesper L R; Griffanti, Ludovica; Douaud, Gwenaëlle; Sotiropoulos, Stamatios N; Jbabdi, Saad; Hernandez-Fernandez, Moises; Vallee, Emmanuel; Vidaurre, Diego; Webster, Matthew; McCarthy, Paul; Rorden, Christopher; Daducci, Alessandro; Alexander, Daniel C; Zhang, Hui; Dragonu, Iulius; Matthews, Paul M; Miller, Karla L; Smith, Stephen M
2018-02-01
UK Biobank is a large-scale prospective epidemiological study with all data accessible to researchers worldwide. It is currently in the process of bringing back 100,000 of the original participants for brain, heart and body MRI, carotid ultrasound and low-dose bone/fat x-ray. The brain imaging component covers 6 modalities (T1, T2 FLAIR, susceptibility weighted MRI, Resting fMRI, Task fMRI and Diffusion MRI). Raw and processed data from the first 10,000 imaged subjects has recently been released for general research access. To help convert this data into useful summary information we have developed an automated processing and QC (Quality Control) pipeline that is available for use by other researchers. In this paper we describe the pipeline in detail, following a brief overview of UK Biobank brain imaging and the acquisition protocol. We also describe several quantitative investigations carried out as part of the development of both the imaging protocol and the processing pipeline. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Electronic photography at NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Holm, Jack M.
1994-01-01
The field of photography began a metamorphosis several years ago which promises to fundamentally change how images are captured, transmitted, and output. At this time the metamorphosis is still in the early stages, but already new processes, hardware, and software are allowing many individuals and organizations to explore the entry of imaging into the information revolution. Exploration at this time is prerequisite to leading expertise in the future, and a number of branches at LaRC have ventured into electronic and digital imaging. Their progress until recently has been limited by two factors: the lack of an integrated approach and the lack of an electronic photographic capability. The purpose of the research conducted was to address these two items. In some respects, the lack of electronic photographs has prevented application of an integrated imaging approach. Since everything could not be electronic, the tendency was to work with hard copy. Over the summer, the Photographics Section has set up an Electronic Photography Laboratory. This laboratory now has the capability to scan film images, process the images, and output the images in a variety of forms. Future plans also include electronic capture capability. The current forms of image processing available include sharpening, noise reduction, dust removal, tone correction, color balancing, image editing, cropping, electronic separations, and halftoning. Output choices include customer specified electronic file formats which can be output on magnetic or optical disks or over the network, 4400 line photographic quality prints and transparencies to 8.5 by 11 inches, and 8000 line film negatives and transparencies to 4 by 5 inches. The problem of integrated imaging involves a number of branches at LaRC including Visual Imaging, Research Printing and Publishing, Data Visualization and Animation, Advanced Computing, and various research groups. These units must work together to develop common approaches to image processing and archiving. The ultimate goal is to be able to search for images using an on-line database and image catalog. These images could then be retrieved over the network as needed, along with information on the acquisition and processing prior to storage. For this goal to be realized, a number of standard processing protocols must be developed to allow the classification of images into categories. Standard series of processing algorithms can then be applied to each category (although many of these may be adaptive between images). Since the archived image files would be standardized, it should also be possible to develop standard output processing protocols for a number of output devices. If LaRC continues the research effort begun this summer, it may be one of the first organizations to develop an integrated approach to imaging. As such, it could serve as a model for other organizations in government and the private sector.
Image recognition on raw and processed potato detection: a review
NASA Astrophysics Data System (ADS)
Qi, Yan-nan; Lü, Cheng-xu; Zhang, Jun-ning; Li, Ya-shuo; Zeng, Zhen; Mao, Wen-hua; Jiang, Han-lu; Yang, Bing-nan
2018-02-01
Objective: China's potato staple food strategy clearly points out the need to improve potato processing, and the bottleneck of this strategy is the technology and equipment for sorting appropriate raw and processed potatoes. The purpose of this paper is to summarize advanced raw and processed potato detection methods. Method: Research literature in the field of image-recognition-based potato quality detection, covering shape, weight, mechanical damage, germination, greening, black heart, scab, etc., was reviewed, and the development and direction of this field were summarized. Result: In order to capture the whole potato surface, hardware was built that synchronizes an image sensor with a conveyor belt to acquire multi-angle images of a single potato. Research on image recognition of potato shape is popular and mature, including qualitative discrimination between abnormal and sound potatoes, and even between round and oval potatoes, with recognition accuracies of more than 83%. Weight is an important indicator for potato grading, and image-based classification accuracies exceed 93%. Image recognition of potato mechanical damage focuses on qualitative identification, with the main affecting factors being damage shape and damage time. Image recognition of potato germination usually uses the potato surface image and edge germination points. Both qualitative and quantitative detection of green potatoes have been researched; currently, scab and black heart image recognition must be carried out in a stable detection environment or with a specific device. Image recognition of processed potato mainly focuses on potato chips, slices, fries, etc. Conclusion: Image recognition as a rapid food inspection tool has been widely researched for raw and processed potato quality analysis; its techniques and equipment have the potential for commercialization in the short term, meeting the demands of China's strategy of developing the potato as a staple food.
Population-based imaging biobanks as source of big data.
Gatidis, Sergios; Heber, Sophia D; Storz, Corinna; Bamberg, Fabian
2017-06-01
Advances of computational sciences over the last decades have enabled the introduction of novel methodological approaches in biomedical research. Acquiring extensive and comprehensive data about a research subject and subsequently extracting significant information has opened new possibilities in gaining insight into biological and medical processes. This so-called big data approach has recently found entrance into medical imaging and numerous epidemiological studies have been implementing advanced imaging to identify imaging biomarkers that provide information about physiological processes, including normal development and aging but also on the development of pathological disease states. The purpose of this article is to present existing epidemiological imaging studies and to discuss opportunities, methodological and organizational aspects, and challenges that population imaging poses to the field of big data research.
IDAPS (Image Data Automated Processing System) System Description
1988-06-24
This document describes the physical configuration and components used in the image processing system referred to as IDAPS (Image Data Automated Processing System). This system was developed by the Environmental Research Institute of Michigan (ERIM) for Eglin Air Force Base. The system is designed
A Review on Medical Image Registration as an Optimization Problem
Song, Guoli; Han, Jianda; Zhao, Yiwen; Wang, Zheng; Du, Huibin
2017-01-01
Objective: In the course of clinical treatment, several medical imaging modalities are required by a physician in order to provide accurate and complete information about a patient. Medical image registration techniques can provide richer diagnosis and treatment information to doctors, and this review provides a comprehensive reference source for researchers who treat image registration as an optimization problem. Methods: The essence of image registration is associating two or more different images spatially and recovering the transformation that describes their spatial relationship. For medical image registration, the process is not fixed; its core purpose is finding the transformation relationship between different images. Result: The major steps of image registration include geometric transformation, image combination, image similarity measurement, iterative optimization and interpolation. Conclusion: The contribution of this review is to organize related image registration research methods and provide a brief reference for researchers on image registration. PMID:28845149
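A minimal sketch of the registration-as-optimization loop the review describes, restricted to 2-D translation; the mean-squared-error similarity measure and the Powell optimizer are illustrative choices (assuming NumPy and SciPy), not the review's prescribed method.

```python
import numpy as np
from scipy.ndimage import shift
from scipy.optimize import minimize

def register_translation(fixed, moving):
    """Estimate the 2-D translation aligning `moving` to `fixed`.

    Each candidate transform shifts the moving image (interpolation),
    a similarity measure (MSE) scores the overlap, and an iterative
    optimizer searches the parameter space -- the ingredients named
    in the review.
    """
    def cost(params):
        warped = shift(moving.astype(float), params, order=1)
        return np.mean((fixed.astype(float) - warped) ** 2)

    result = minimize(cost, x0=[0.0, 0.0], method="Powell")
    return result.x  # (row_shift, col_shift)
```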
Milchenko, Mikhail; Snyder, Abraham Z; LaMontagne, Pamela; Shimony, Joshua S; Benzinger, Tammie L; Fouke, Sarah Jost; Marcus, Daniel S
2016-07-01
Neuroimaging research often relies on clinically acquired magnetic resonance imaging (MRI) datasets that can originate from multiple institutions. Such datasets are characterized by high heterogeneity of modalities and variability of sequence parameters. This heterogeneity complicates the automation of image processing tasks such as spatial co-registration and physiological or functional image analysis. Given this heterogeneity, conventional processing workflows developed for research purposes are not optimal for clinical data. In this work, we describe an approach called Heterogeneous Optimization Framework (HOF) for developing image analysis pipelines that can handle the high degree of clinical data non-uniformity. HOF provides a set of guidelines for configuration, algorithm development, deployment, interpretation of results and quality control for such pipelines. At each step, we illustrate the HOF approach using the implementation of an automated pipeline for Multimodal Glioma Analysis (MGA) as an example. The MGA pipeline computes tissue diffusion characteristics of diffusion tensor imaging (DTI) acquisitions, hemodynamic characteristics using a perfusion model of susceptibility contrast (DSC) MRI, and spatial cross-modal co-registration of available anatomical, physiological and derived patient images. Developing MGA within HOF enabled the processing of neuro-oncology MR imaging studies to be fully automated. MGA has been successfully used to analyze over 160 clinical tumor studies to date within several research projects. Introduction of the MGA pipeline improved image processing throughput and, most importantly, effectively produced co-registered datasets that were suitable for advanced analysis despite high heterogeneity in acquisition protocols.
NASA Technical Reports Server (NTRS)
Matthews, Christine G.; Posenau, Mary-Anne; Leonard, Desiree M.; Avis, Elizabeth L.; Debure, Kelly R.; Stacy, Kathryn; Vonofenheim, Bill
1992-01-01
The intent is to provide an introduction to the image processing capabilities available at the Langley Research Center (LaRC) Central Scientific Computing Complex (CSCC). Various image processing software components are described. Information is given concerning the use of these components in the Data Visualization and Animation Laboratory at LaRC.
Experiences with digital processing of images at INPE
NASA Technical Reports Server (NTRS)
Mascarenhas, N. D. A. (Principal Investigator)
1984-01-01
Four different research experiments with digital image processing at INPE will be described: (1) edge detection by hypothesis testing; (2) image interpolation by finite impulse response filters; (3) spatial feature extraction methods in multispectral classification; and (4) translational image registration by sequential tests of hypotheses.
NASA Astrophysics Data System (ADS)
Mishra, Deependra K.; Umbaugh, Scott E.; Lama, Norsang; Dahal, Rohini; Marino, Dominic J.; Sackman, Joseph
2016-09-01
CVIPtools is a software package for the exploration of computer vision and image processing developed in the Computer Vision and Image Processing Laboratory at Southern Illinois University Edwardsville. CVIPtools is available in three variants - a) the CVIPtools Graphical User Interface, b) the CVIPtools C library and c) the CVIPtools MATLAB toolbox - which makes it accessible to a variety of different users. It offers students, faculty, researchers and any other user a free and easy way to explore computer vision and image processing techniques. Many functions have been implemented and are updated on a regular basis, and the library has reached a level of sophistication that makes it suitable for both educational and research purposes. In this paper, a detailed list of the functions available in the CVIPtools MATLAB toolbox is presented, along with how these functions can be used in image analysis and computer vision applications. The CVIPtools MATLAB toolbox allows the user to gain practical experience to better understand underlying theoretical problems in image processing and pattern recognition. As an example application, an algorithm for the automatic creation of masks for veterinary thermographic images is presented.
Yang, Xue; Li, Xue-You; Li, Jia-Guo; Ma, Jun; Zhang, Li; Yang, Jan; Du, Quan-Ye
2014-02-01
The fast Fourier transform (FFT) is a basic approach to remote sensing image processing. With the growing capacity of remote sensing image capture, featuring hyperspectral data, high spatial resolution and high temporal resolution, how to use FFT technology to efficiently process huge remote sensing images has become a critical step and research hotspot in current image processing technology. The FFT, one of the basic algorithms of image processing, can be used for stripe noise removal, image compression, image registration, etc. in remote sensing image processing. The CUFFT library is an FFT library based on the GPU, while FFTW is an FFT library developed for the CPU on the PC platform and is currently the fastest CPU-based FFT function library. However, both methods share a common problem: once the available memory is smaller than the image, an out-of-memory failure or memory overflow occurs when computing the image FFT. To address this problem, a GPU and partitioning technology based Huge Remote Fast Fourier Transform (HRFFT) algorithm is proposed in this paper. By improving the FFT algorithm in the CUFFT library, the problem of out-of-memory failures and memory overflow is solved. Moreover, the method is validated by experiments on CCD images from the HJ-1A satellite. When applied to practical image processing, it improves the quality of the processing and speeds it up, saving computation time and achieving sound results.
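To illustrate one of the listed applications (stripe noise removal) with an ordinary in-memory FFT, here is a minimal NumPy sketch; it does not reproduce the paper's partitioned GPU-based HRFFT, and the notch width and DC-sparing margin are illustrative assumptions.

```python
import numpy as np

def remove_vertical_stripes(image, notch_halfwidth=2):
    """Suppress periodic vertical stripe noise via a frequency-domain notch.

    Vertical stripes concentrate energy along the horizontal-frequency
    axis of the 2-D spectrum; zeroing a narrow band there (away from
    the DC component) attenuates the stripes.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(image.astype(float)))
    rows, cols = spectrum.shape
    cr, cc = rows // 2, cols // 2
    # Notch the horizontal axis of the spectrum, sparing low frequencies
    # within 10 samples of DC.
    spectrum[cr - notch_halfwidth:cr + notch_halfwidth + 1, :cc - 10] = 0
    spectrum[cr - notch_halfwidth:cr + notch_halfwidth + 1, cc + 10:] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))
```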
Sechopoulos, Ioannis
2013-01-01
Many important post-acquisition aspects of breast tomosynthesis imaging can impact its clinical performance. Chief among them is the reconstruction algorithm that generates the representation of the three-dimensional breast volume from the acquired projections. But even after reconstruction, additional processes, such as artifact reduction algorithms, computer aided detection and diagnosis, among others, can also impact the performance of breast tomosynthesis in the clinical realm. In this two part paper, a review of breast tomosynthesis research is performed, with an emphasis on its medical physics aspects. In the companion paper, the first part of this review, the research performed relevant to the image acquisition process is examined. This second part will review the research on the post-acquisition aspects, including reconstruction, image processing, and analysis, as well as the advanced applications being investigated for breast tomosynthesis. PMID:23298127
scikit-image: image processing in Python.
van der Walt, Stéfan; Schönberger, Johannes L; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony
2014-01-01
scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.
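A minimal usage sketch of the library's documented API, run on one of its bundled sample images; the detected object count is illustrative and depends on the thresholding.

```python
from skimage import data, filters, measure

image = data.coins()                            # bundled sample image
edges = filters.sobel(image)                    # gradient-magnitude edge map
binary = image > filters.threshold_otsu(image)  # global binarization
labels = measure.label(binary)                  # connected-component labeling
print(f"detected {labels.max()} candidate regions")
```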
Lizarraga, Gabriel; Li, Chunfei; Cabrerizo, Mercedes; Barker, Warren; Loewenstein, David A; Duara, Ranjan; Adjouadi, Malek
2018-04-26
Structural and functional brain images are essential imaging modalities for medical experts to study brain anatomy. These images are typically visually inspected by experts. To analyze images without any bias, they must be first converted to numeric values. Many software packages are available to process the images, but they are complex and difficult to use. The software packages are also hardware intensive. The results obtained after processing vary depending on the native operating system used and its associated software libraries; data processed in one system cannot typically be combined with data on another system. The aim of this study was to fulfill the neuroimaging community’s need for a common platform to store, process, explore, and visualize their neuroimaging data and results using Neuroimaging Web Services Interface: a series of processing pipelines designed as a cyber physical system for neuroimaging and clinical data in brain research. Neuroimaging Web Services Interface accepts magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and functional magnetic resonance imaging. These images are processed using existing and custom software packages. The output is then stored as image files, tabulated files, and MySQL tables. The system, made up of a series of interconnected servers, is password-protected and is securely accessible through a Web interface and allows (1) visualization of results and (2) downloading of tabulated data. All results were obtained using our processing servers in order to maintain data validity and consistency. The design is responsive and scalable. The processing pipeline started from a FreeSurfer reconstruction of Structural magnetic resonance imaging images. The FreeSurfer and regional standardized uptake value ratio calculations were validated using Alzheimer’s Disease Neuroimaging Initiative input images, and the results were posted at the Laboratory of Neuro Imaging data archive. Notable leading researchers in the field of Alzheimer’s Disease and epilepsy have used the interface to access and process the data and visualize the results. Tabulated results with unique visualization mechanisms help guide more informed diagnosis and expert rating, providing a truly unique multimodal imaging platform that combines magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and resting state functional magnetic resonance imaging. A quality control component was reinforced through expert visual rating involving at least 2 experts. To our knowledge, there is no validated Web-based system offering all the services that Neuroimaging Web Services Interface offers. The intent of Neuroimaging Web Services Interface is to create a tool for clinicians and researchers with keen interest on multimodal neuroimaging. More importantly, Neuroimaging Web Services Interface significantly augments the Alzheimer’s Disease Neuroimaging Initiative data, especially since our data contain a large cohort of Hispanic normal controls and Alzheimer’s Disease patients. The obtained results could be scrutinized visually or through the tabulated forms, informing researchers on subtle changes that characterize the different stages of the disease. ©Gabriel Lizarraga, Chunfei Li, Mercedes Cabrerizo, Warren Barker, David A Loewenstein, Ranjan Duara, Malek Adjouadi. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 26.04.2018.
Image Segmentation Using Minimum Spanning Tree
NASA Astrophysics Data System (ADS)
Dewi, M. P.; Armiati, A.; Alvini, S.
2018-04-01
This research aims to segment digital images. The segmentation process separates the object from the background so that the main object can be processed for other purposes. Along with the development of technology in digital image processing applications, the segmentation process becomes increasingly necessary. The segmented image, which is the result of the segmentation process, should be accurate, because the next processing stage needs to interpret the information in the image. This article discusses the application of the minimum spanning tree of a graph to the segmentation of digital images. This method is able to separate an object from the background, converting the image into a binary image. In this case, the object in focus is set to white, while the background is black, or vice versa.
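A minimal sketch of MST-based segmentation on a 4-connected pixel graph, assuming NumPy and SciPy; the edge weights (absolute intensity differences on a float image) and the fixed cut threshold are illustrative choices rather than the paper's exact formulation.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def mst_segment(gray, cut_thresh=0.1):
    """Segment a 2-D grayscale image via its minimum spanning tree.

    Build a 4-connected grid graph weighted by intensity differences,
    take its MST, delete edges heavier than cut_thresh, and label the
    remaining connected components as segments.
    """
    h, w = gray.shape
    idx = np.arange(h * w).reshape(h, w)
    # Edges to the right neighbor and to the neighbor below.
    rows = np.concatenate([idx[:, :-1].ravel(), idx[:-1, :].ravel()])
    cols = np.concatenate([idx[:, 1:].ravel(), idx[1:, :].ravel()])
    flat = gray.astype(float).ravel()
    weights = np.abs(flat[rows] - flat[cols]) + 1e-9  # avoid zero weights
    graph = coo_matrix((weights, (rows, cols)), shape=(h * w, h * w))

    mst = minimum_spanning_tree(graph).tocoo()
    keep = mst.data <= cut_thresh                     # cut the heavy edges
    pruned = coo_matrix((mst.data[keep], (mst.row[keep], mst.col[keep])),
                        shape=mst.shape)
    _, labels = connected_components(pruned, directed=False)
    return labels.reshape(h, w)
```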
From nociception to pain perception: imaging the spinal and supraspinal pathways
Brooks, Jonathan; Tracey, Irene
2005-01-01
Functional imaging techniques have allowed researchers to look within the brain, and revealed the cortical representation of pain. Initial experiments, performed in the early 1990s, revolutionized pain research, as they demonstrated that pain was not processed in a single cortical area, but in several distributed brain regions. Over the last decade, the roles of these pain centres have been investigated and a clearer picture has emerged of the medial and lateral pain system. In this brief article, we review the imaging literature to date that has allowed these advances to be made, and examine the new frontiers for pain imaging research: imaging the brainstem and other structures involved in the descending control of pain; functional and anatomical connectivity studies of pain processing brain regions; imaging models of neuropathic pain-like states; and going beyond the brain to image spinal function. The ultimate goal of such research is to take these new techniques into the clinic, to investigate and provide new remedies for chronic pain sufferers. PMID:16011543
Quantitative image processing in fluid mechanics
NASA Technical Reports Server (NTRS)
Hesselink, Lambertus; Helman, James; Ning, Paul
1992-01-01
The current status of digital image processing in fluid flow research is reviewed. In particular, attention is given to a comprehensive approach to the extraction of quantitative data from multivariate databases and examples of recent developments. The discussion covers numerical simulations and experiments, data processing, generation and dissemination of knowledge, traditional image processing, hybrid processing, fluid flow vector field topology, and isosurface analysis using Marching Cubes.
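Since the review above highlights isosurface extraction with Marching Cubes, here is a brief example using scikit-image's implementation on a synthetic scalar field; the field and iso-level are illustrative, not drawn from the paper.

```python
import numpy as np
from skimage import measure

# Synthetic scalar field: squared distance from the origin on a 64^3 grid.
x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
field = x**2 + y**2 + z**2

# Extract the isosurface at value 0.5 as a triangle mesh.
verts, faces, normals, values = measure.marching_cubes(field, level=0.5)
print(f"{len(verts)} vertices, {len(faces)} triangles")
```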
Neural network face recognition using wavelets
NASA Astrophysics Data System (ADS)
Karunaratne, Passant V.; Jouny, Ismail I.
1997-04-01
The recognition of human faces is a phenomenon that has been mastered by the human visual system and that has been researched extensively in the domain of computer neural networks and image processing. This research involves the study of neural networks and wavelet image processing techniques applied to human face recognition. The objective of the system is to acquire a digitized still image of a human face, carry out pre-processing on the image as required, and then, given a prior database of images of possible individuals, recognize the individual in the image. The pre-processing segment of the system includes several procedures, namely image compression, denoising, and feature extraction. The image processing is carried out using Daubechies wavelets. Once the images have been passed through the wavelet-based image processor, they can be efficiently analyzed by means of a neural network. A back-propagation neural network is used for the recognition segment of the system. The main constraints of the system concern the characteristics of the images being processed. The system should be able to carry out effective recognition of human faces irrespective of the individual's facial expression, the presence of extraneous objects such as headgear or spectacles, and face/head orientation. A potential application of this face recognition system would be as a secondary verification method in an automated teller machine.
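A sketch of the wavelet pre-processing stage described above, using Daubechies wavelets via PyWavelets; the wavelet order, decomposition level, and normalization are assumptions, and the resulting vector would feed a back-propagation network such as an MLP.

```python
import numpy as np
import pywt

def wavelet_face_features(face, wavelet="db4", level=2):
    """Multilevel Daubechies decomposition: keep the coarse approximation
    sub-band as a compressed, denoised feature vector for the classifier."""
    coeffs = pywt.wavedec2(face.astype(float), wavelet, level=level)
    approx = coeffs[0]               # low-frequency approximation sub-band
    vec = approx.ravel()
    return vec / (np.linalg.norm(vec) + 1e-12)
```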
A survey of GPU-based medical image computing techniques
Shi, Lin; Liu, Wen; Zhang, Heye; Xie, Yongming
2012-01-01
Medical imaging currently plays a crucial role throughout clinical applications, from medical scientific research to diagnostics and treatment planning. However, medical imaging procedures are often computationally demanding due to the large three-dimensional (3D) medical datasets that must be processed in practical clinical applications. With the rapidly improving performance of graphics processors, improved programming support, and an excellent price-to-performance ratio, the graphics processing unit (GPU) has emerged as a competitive parallel computing platform for computationally expensive and demanding tasks in a wide range of medical imaging applications. The major purpose of this survey is to provide a comprehensive reference source for newcomers and researchers involved in GPU-based medical image processing. Within this survey, the continuous advancement of GPU computing is reviewed and the existing traditional applications in three areas of medical image processing, namely segmentation, registration and visualization, are surveyed. The potential advantages and associated challenges of current GPU-based medical imaging are also discussed to inspire future applications in medicine. PMID:23256080
Multi-Dimensional Signal Processing Research Program
1981-09-30
applications to real-time image processing and analysis. A specific long-range application is the automated processing of aerial reconnaissance imagery... Non-supervised image segmentation is a potentially important operation in the automated processing of aerial reconnaissance photographs since it
Future directions for positive body image research.
Halliwell, Emma
2015-06-01
The emergence of positive body image research during the last 10 years represents an important shift in the body image literature. The existing evidence provides a strong empirical basis for the study of positive body image and research has begun to address issues of age, gender, ethnicity, culture, development, and intervention in relation to positive body image. This article briefly reviews the existing evidence before outlining directions for future research. Specifically, six areas for future positive body image research are outlined: (a) conceptualization, (b) models, (c) developmental factors, (d) social interactions, (e) cognitive processing style, and (f) interventions. Finally, the potential role of positive body image as a protective factor within the broader body image literature is discussed. Copyright © 2015 Elsevier Ltd. All rights reserved.
Continuous tone printing in silicone from CNC milled matrices
NASA Astrophysics Data System (ADS)
Hoskins, S.; McCallion, P.
2014-02-01
Current research at the Centre for Fine Print Research (CFPR) at the University of the West of England, Bristol, is exploring the potential of creating coloured pictorial imagery from a continuous tone relief surface. To create the printing matrices the research team have been using CNC milled images, where the height of the relief image is dictated by creating a tone curve and then milling this curve into a series of relief blocks from which the image is cast in a silicone ink. A translucent image is cast from each of the colour matrices and each colour is assembled - one on top of another - resulting in a colour continuous tone print, where colour tone is created by physical depth of colour. This process is a contemporary method of continuous tone colour printing based upon the nineteenth-century black and white printing process of Woodburytype, as developed by Walter Bentley Woodbury in 1865. Woodburytype is the only true continuous tone printing process invented, and although its delicate and subtle surfaces surpassed all other printing methods of the time, the process died out in the late nineteenth century as more expedient and cost-effective methods of printing prevailed. New research at CFPR builds upon previous research that combines nineteenth-century photomechanical techniques with digital technology to reappraise the potential of these processes.
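To make the tone-curve idea concrete, a hypothetical mapping from image tone to milling depth might look like the following; the depth range and curve exponent are invented for illustration and are not CFPR's calibration.

```python
import numpy as np

def tone_to_depth(gray, max_depth_mm=2.0, gamma=1.8):
    """Map 8-bit tone values to relief depth: darker tone -> deeper relief,
    so a thicker, less translucent layer of silicone ink is cast there.
    max_depth_mm and gamma are illustrative assumptions."""
    t = 1.0 - gray.astype(float) / 255.0   # 0 = white, 1 = black
    return max_depth_mm * t ** gamma
```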
IMAGES: An interactive image processing system
NASA Technical Reports Server (NTRS)
Jensen, J. R.
1981-01-01
The IMAGES interactive image processing system was created specifically for undergraduate remote sensing education in geography. The system is interactive, relatively inexpensive to operate, almost hardware independent, and responsive to numerous users at one time in a time-sharing mode. Most important, it provides a medium whereby theoretical remote sensing principles discussed in lecture may be reinforced in laboratory as students perform computer-assisted image processing. In addition to its use in academic and short course environments, the system has also been used extensively to conduct basic image processing research. The flow of information through the system is discussed including an overview of the programs.
An Image Retrieval and Processing Expert System for the World Wide Web
NASA Technical Reports Server (NTRS)
Rodriguez, Ricardo; Rondon, Angelica; Bruno, Maria I.; Vasquez, Ramon
1998-01-01
This paper presents a system that is being developed in the Laboratory of Applied Remote Sensing and Image Processing at the University of P.R. at Mayaguez. It describes the components that constitute its architecture. The main elements are: a Data Warehouse, an Image Processing Engine, and an Expert System. Together, they provide a complete solution for researchers from different fields who make use of images in their investigations. Also, since it is available on the World Wide Web, it provides remote access to and processing of images.
Real-time hyperspectral imaging for food safety applications
USDA-ARS?s Scientific Manuscript database
Multispectral imaging systems with selected bands can commonly be used for real-time applications in food processing. Recent research has demonstrated that several image processing methods, including binning, noise-removal filtering, and appropriate morphological analysis, running in real-time mode can remove most fa...
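A hedged sketch of the kind of real-time chain the abstract names (binning, noise-removal filtering, morphological analysis); all parameter values are assumptions.

```python
import numpy as np
from scipy import ndimage

def preprocess_frame(frame, bin_factor=2):
    """Illustrative chain: 2x2 binning, 3x3 median noise-removal filter, then
    a morphological opening to suppress small false-positive blobs."""
    h, w = frame.shape
    h, w = h - h % bin_factor, w - w % bin_factor
    binned = frame[:h, :w].reshape(h // bin_factor, bin_factor,
                                   w // bin_factor, bin_factor).mean(axis=(1, 3))
    denoised = ndimage.median_filter(binned, size=3)
    mask = denoised > denoised.mean()      # crude defect-candidate mask
    return ndimage.binary_opening(mask)
```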
scikit-image: image processing in Python
Schönberger, Johannes L.; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D.; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony
2014-01-01
scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org. PMID:25024921
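As a brief usage example (not taken from the paper itself), a typical scikit-image workflow of thresholding and connected-component labelling looks like this:

```python
from skimage import data, filters, measure

image = data.coins()                      # built-in sample image
thresh = filters.threshold_otsu(image)    # global Otsu threshold
labels = measure.label(image > thresh)    # connected-component labelling
print(f"found {labels.max()} candidate regions")
```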
Research into language concepts for the mission control center
NASA Technical Reports Server (NTRS)
Dellenback, Steven W.; Barton, Timothy J.; Ratner, Jeremiah M.
1990-01-01
A final report is given on research into language concepts for the Mission Control Center (MCC). The Specification Driven Language research is described. The state of the image processing field and how image processing techniques could be applied toward automating the generation of the language known as COmputation Development Environment (CODE or Comp Builder) are discussed. Also described is the development of a flight certified compiler for Comps.
Tracker: Image-Processing and Object-Tracking System Developed
NASA Technical Reports Server (NTRS)
Klimek, Robert B.; Wright, Theodore W.
1999-01-01
Tracker is an object-tracking and image-processing program designed and developed at the NASA Lewis Research Center to help with the analysis of images generated by microgravity combustion and fluid physics experiments. Experiments are often recorded on film or videotape for analysis later. Tracker automates the process of examining each frame of the recorded experiment, performing image-processing operations to bring out the desired detail, and recording the positions of the objects of interest. It can load sequences of images from disk files or acquire images (via a frame grabber) from film transports, videotape, laser disks, or a live camera. Tracker controls the image source to automatically advance to the next frame. It can employ a large array of image-processing operations to enhance the detail of the acquired images and can analyze an arbitrarily large number of objects simultaneously. Several different tracking algorithms are available, including conventional threshold and correlation-based techniques, and more esoteric procedures such as "snake" tracking and automated recognition of character data in the image. The Tracker software was written to be operated by researchers, thus every attempt was made to make the software as user friendly and self-explanatory as possible. Tracker is used by most of the microgravity combustion and fluid physics experiments performed by Lewis, and by visiting researchers. This includes experiments performed on the space shuttles, Mir, sounding rockets, zero-g research airplanes, drop towers, and ground-based laboratories. This software automates the analysis of the flame or liquid's physical parameters such as position, velocity, acceleration, size, shape, intensity characteristics, color, and centroid, as well as a number of other measurements. It can perform these operations on multiple objects simultaneously. Another key feature of Tracker is that it performs optical character recognition (OCR). This feature is useful in extracting numerical instrumentation data that are embedded in images. All the results are saved in files for further data reduction and graphing. There are currently three Tracking Systems (workstations) operating near the laboratories and offices of Lewis Microgravity Science Division researchers. These systems are used independently by students, scientists, and university-based principal investigators. The researchers bring their tapes or films to the workstation and perform the tracking analysis. The resultant data files generated by the tracking process can then be analyzed on the spot, although most of the time researchers prefer to transfer them via the network to their offices for further analysis or plotting. In addition, many researchers have installed Tracker on computers in their office for desktop analysis of digital image sequences, which can be digitized by the Tracking System or some other means. Tracker has not only provided a capability to efficiently and automatically analyze large volumes of data, saving many hours of tedious work, but has also provided new capabilities to extract valuable information and phenomena that were heretofore undetected and unexploited.
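As an illustration of the conventional correlation-based tracking the abstract mentions, a minimal normalized cross-correlation step with scikit-image might look like this; it is an analogous sketch, not Tracker's own code.

```python
import numpy as np
from skimage.feature import match_template

def track_step(template, frame):
    """One correlation-based tracking step: locate the template in a new
    frame by normalized cross-correlation; returns the top-left (row, col)."""
    response = match_template(frame, template)
    return np.unravel_index(np.argmax(response), response.shape)
```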
Collection of sequential imaging events for research in breast cancer screening
NASA Astrophysics Data System (ADS)
Patel, M. N.; Young, K.; Halling-Brown, M. D.
2016-03-01
Due to the huge amount of research involving medical images, there is a widely accepted need for comprehensive collections of medical images to be made available for research. This demand led to the design and implementation of a flexible image repository, which retrospectively collects images and data from multiple sites throughout the UK. The OPTIMAM Medical Image Database (OMI-DB) was created to provide a centralized, fully annotated dataset for research. The database contains both processed and unprocessed images, associated data, annotations and expert-determined ground truths. Collection has been ongoing for over three years, providing the opportunity to collect sequential imaging events. Extensive alterations to the identification, collection, processing and storage arms of the system have been undertaken to support the introduction of sequential events, including interval cancers. These updates to the collection systems allow the acquisition of many more images but, more importantly, allow one to build on the existing high-dimensional data stored in the OMI-DB. A research dataset of this scale, which includes original normal and subsequent malignant cases along with expert-derived and clinical annotations, is currently unique. These data provide a powerful resource for future research and have initiated new research projects, amongst which is the quantification of normal cases by applying a large number of quantitative imaging features, with a priori knowledge that these cases eventually develop a malignancy. This paper describes extensions to the OMI-DB collection systems and tools and discusses the prospective applications of having such a rich dataset for future research.
Data Visualization and Animation Lab (DVAL) overview
NASA Technical Reports Server (NTRS)
Stacy, Kathy; Vonofenheim, Bill
1994-01-01
The general capabilities of the Langley Research Center Data Visualization and Animation Laboratory are described. These capabilities include digital image processing, 3-D interactive computer graphics, data visualization and analysis, video-rate acquisition and processing of video images, photo-realistic modeling and animation, video report generation, and color hardcopies. A specialized video image processing system is also discussed.
Göbl, Rüdiger; Navab, Nassir; Hennersperger, Christoph
2018-06-01
Research in ultrasound imaging is limited in reproducibility by two factors: first, many existing ultrasound pipelines are protected by intellectual property, rendering the exchange of code difficult; second, most pipelines are implemented in special hardware, resulting in limited flexibility of the processing steps implemented on such platforms. With SUPRA, we propose an open-source pipeline for fully software-defined ultrasound processing for real-time applications to alleviate these problems. Covering all steps from beamforming to output of B-mode images, SUPRA can help improve the reproducibility of results and make modifications to the image acquisition mode accessible to the research community. We evaluate the pipeline qualitatively, quantitatively, and with regard to its run time. The pipeline shows image quality comparable to a clinical system and, backed by point spread function measurements, a comparable resolution. Including all processing stages of a usual ultrasound pipeline, the run-time analysis shows that it can be executed in 2D and 3D on consumer GPUs in real time. Our software ultrasound pipeline opens up research in image acquisition. Given access to ultrasound data from early stages (raw channel data, radiofrequency data), it simplifies development in imaging. Furthermore, it tackles the reproducibility of research results, as code can be shared easily and even executed without dedicated ultrasound hardware.
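To illustrate the beamforming stage at the front of such a software pipeline, here is a textbook delay-and-sum sketch for a single image pixel; it is not SUPRA's implementation, and the simple two-way path geometry is a simplifying assumption.

```python
import numpy as np

def delay_and_sum_pixel(rf, element_x, fs, c, pixel_x, pixel_z):
    """Sum raw channel data across elements at the delays corresponding to
    one pixel. rf: (n_elements, n_samples) channel data; element_x: element
    positions [m]; fs: sampling rate [Hz]; c: speed of sound [m/s]."""
    rx_dist = np.sqrt((element_x - pixel_x) ** 2 + pixel_z ** 2)
    delays = (pixel_z + rx_dist) / c          # transmit depth + receive path
    idx = np.round(delays * fs).astype(int)
    valid = idx < rf.shape[1]
    return rf[np.flatnonzero(valid), idx[valid]].sum()
```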
An architecture for a brain-image database
NASA Technical Reports Server (NTRS)
Herskovits, E. H.
2000-01-01
The widespread availability of methods for noninvasive assessment of brain structure has enabled researchers to investigate neuroimaging correlates of normal aging, cerebrovascular disease, and other processes; we designate such studies as image-based clinical trials (IBCTs). We propose an architecture for a brain-image database, which integrates image processing and statistical operators, and thus supports the implementation and analysis of IBCTs. The implementation of this architecture is described and results from the analysis of image and clinical data from two IBCTs are presented. We expect that systems such as this will play a central role in the management and analysis of complex research data sets.
Parallel Processing of Images in Mobile Devices using BOINC
NASA Astrophysics Data System (ADS)
Curiel, Mariela; Calle, David F.; Santamaría, Alfredo S.; Suarez, David F.; Flórez, Leonardo
2018-04-01
Medical image processing helps health professionals make decisions for the diagnosis and treatment of patients. Since some image-processing algorithms require substantial amounts of resources, one can take advantage of distributed or parallel computing. A mobile grid can be an adequate computing infrastructure for this problem; a mobile grid is a grid that includes mobile devices as resource providers. In a previous step of this research, we selected BOINC as the infrastructure on which to build our mobile grid. However, parallel processing of images on mobile devices poses at least two important challenges: the execution of standard image-processing libraries, and obtaining adequate performance compared to grids of desktop computers. By the time we started our research, the use of BOINC on mobile devices also involved two issues: a) executing programs on mobile devices required modifying the code to insert calls to the BOINC API, and b) dividing the image among the mobile devices, as well as merging the results, required additional code in some BOINC components. This article presents answers to these four challenges.
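As a toy illustration of the image-division and merging step described above (independent of the BOINC API, which is not shown here), splitting a frame into row strips as work units and reassembling the processed results could look like:

```python
import numpy as np

def split_tiles(image, n_workers):
    """Row-wise split of an image into work units for distributed processing."""
    return np.array_split(image, n_workers, axis=0)

def merge_tiles(tiles):
    """Reassemble processed strips in their original order."""
    return np.vstack(tiles)
```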
NASA Astrophysics Data System (ADS)
Erberich, Stephan G.; Hoppe, Martin; Jansen, Christian; Schmidt, Thomas; Thron, Armin; Oberschelp, Walter
2001-08-01
In the last few years more and more university hospitals as well as private hospitals have changed to digital information systems for patient records, diagnostic files and digital images. Not only does patient management become easier, it is also remarkable how much clinical research can profit from Picture Archiving and Communication Systems (PACS) and diagnostic databases, especially image databases. Although images are available at one's fingertips, difficulties arise when image data needs to be processed, e.g. segmented, classified or co-registered, which usually demands a lot of computational power. Today's clinical environment supports PACS very well, but real image processing is still under-developed. The purpose of this paper is to introduce a parallel cluster of standard distributed systems and its software components, and to show how such a system can be integrated into a hospital environment. To demonstrate the cluster technique we present our clinical experience with the crucial but cost-intensive motion correction of clinical routine and research functional MRI (fMRI) data, as it is processed in our lab on a daily basis.
Effect of Experience of Use on The Process of Formation of Stereotype Images on Shapes of Products
NASA Astrophysics Data System (ADS)
Kwak, Yong-Min; Yamanaka, Toshimasa
It is necessary to explain the terms used in this research to help readers better understand its contents. Originally, stereotype meant the lead plate cast from a mold in letterpress printing, but it is now used as a term indicating a simplified and fixed notion toward a certain group ("knowledge in fixed form"), or an image simplified and generalized over the members of a certain group.[1] Generally, stereotype is used in negative cases, but it has both positive and negative sides.[2, 3] I believe that research on the factors that form stereotype[4] images commonly felt by a large number of persons may suggest a new research methodology for areas which require a high level of creative thinking, such as design and research on emotions. Stereotype images appear between persons and groups, and in images of countries, enterprises and other organizations. For example, as we often hear phrases such as 'He may be oo because he is from oo', we hold strong images of characteristics commonly attributed to persons who belong to certain categories, after tying regions and persons to dividing categories.[5, 6] In this research, I define such images as stereotype images. This kind of phenomenon also appears for articles used in daily life. In this research, I established the hypothesis that stereotype images exist for products and verified it through experiments.
Image processing based detection of lung cancer on CT scan images
NASA Astrophysics Data System (ADS)
Abdillah, Bariqi; Bustamam, Alhadi; Sarwinda, Devvi
2017-10-01
In this paper, we implement and analyze an image processing method for the detection of lung cancer. Image processing techniques are widely used in several medical problems for picture enhancement in the detection phase to support early medical treatment. In this research we propose a detection method for lung cancer based on image segmentation; image segmentation is one of the intermediate levels in image processing. Marker-controlled watershed and a region-growing approach are used to segment the CT scan images. The detection phase comprises image enhancement using a Gabor filter, image segmentation, and feature extraction. The experimental results demonstrate the effectiveness of our approach and show that the best approach for main feature detection is the watershed-with-masking method, which has high accuracy and robustness.
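A sketch of the enhancement-then-segmentation pipeline described above, using a Gabor filter followed by a marker-controlled watershed; all parameter values are illustrative, not the paper's.

```python
import numpy as np
from scipy import ndimage
from skimage import feature, filters, segmentation

def segment_candidates(ct_slice):
    """Gabor enhancement, Otsu mask, then marker-controlled watershed."""
    enhanced, _ = filters.gabor(ct_slice, frequency=0.1)   # real response
    mask = enhanced > filters.threshold_otsu(enhanced)
    distance = ndimage.distance_transform_edt(mask)
    peaks = feature.peak_local_max(distance, labels=mask, min_distance=10)
    markers = np.zeros(ct_slice.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return segmentation.watershed(-distance, markers, mask=mask)
```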
Research on remote sensing image pixel attribute data acquisition method in AutoCAD
NASA Astrophysics Data System (ADS)
Liu, Xiaoyang; Sun, Guangtong; Liu, Jun; Liu, Hui
2013-07-01
Remote sensing images have been widely used in AutoCAD, but AutoCAD lacks functions for remote sensing image processing. In this paper, ObjectARX was used as the secondary development tool, combined with the Image Engine SDK, to realize remote sensing image pixel attribute data acquisition in AutoCAD, which provides critical technical support for remote sensing image processing algorithms in the AutoCAD environment.
NASA Technical Reports Server (NTRS)
1992-01-01
The GENETI-SCANNER, newest product of Perceptive Scientific Instruments, Inc. (PSI), rapidly scans slides and locates, digitizes, measures and classifies specific objects and events in research and diagnostic applications. Founded by former NASA employees, PSI bases its primary product line on NASA image processing technology. The instrument karyotypes - a process employed in the analysis and classification of chromosomes - using a video camera mounted on a microscope. Images are digitized, enabling chromosome image enhancement. The system enables karyotyping to be done significantly faster, increasing productivity and lowering costs. The product is no longer being manufactured.
High resolution image processing on low-cost microcomputers
NASA Technical Reports Server (NTRS)
Miller, R. L.
1993-01-01
Recent advances in microcomputer technology have resulted in systems that rival the speed, storage, and display capabilities of traditionally larger machines. Low-cost microcomputers can provide a powerful environment for image processing. A new software program which offers sophisticated image display and analysis on IBM-based systems is presented. Designed specifically for a microcomputer, this program provides a wide range of functions normally found only on dedicated graphics systems, and therefore can provide most students, universities and research groups with an affordable computer platform for processing digital images. The processing of AVHRR images within this environment is presented as an example.
UWGSP7: a real-time optical imaging workstation
NASA Astrophysics Data System (ADS)
Bush, John E.; Kim, Yongmin; Pennington, Stan D.; Alleman, Andrew P.
1995-04-01
With the development of UWGSP7, the University of Washington Image Computing Systems Laboratory has a real-time workstation for continuous-wave (cw) optical reflectance imaging. Recent discoveries in optical science and imaging research have suggested potential practical use of the technology as a medical imaging modality and identified the need for a machine to support these applications in real time. The UWGSP7 system was developed to provide researchers with a high-performance, versatile tool for use in optical imaging experiments with the eventual goal of bringing the technology into clinical use. One of several major applications of cw optical reflectance imaging is tumor imaging, which uses a light-absorbing dye that preferentially sequesters in tumor tissue. This property could be used to locate tumors and to identify tumor margins intraoperatively. Cw optical reflectance imaging consists of illumination of a target with a band-limited light source and monitoring the light transmitted by or reflected from the target. While continuously illuminating the target, a control image is acquired and stored. A dye is injected into a subject and a sequence of data images is acquired and processed. The data images are aligned with the control image and then subtracted to obtain a signal representing the change in optical reflectance over time. This signal can be enhanced by digital image processing and displayed in pseudo-color. This type of emerging imaging technique requires a computer system that is versatile and adaptable. The UWGSP7 utilizes a VESA local bus PC as a host computer running the Windows NT operating system and includes ICSL-developed add-on boards for image acquisition and processing. The image acquisition board is used to digitize and format the analog signal from the input device into digital frames and to average frames into images. To accommodate different input devices, the camera interface circuitry is designed as a small mezzanine board that supports the RS-170 standard. The image acquisition board is connected to the image-processing board using a direct connect port which provides a 66 Mbytes/s channel independent of the system bus. The image processing board utilizes the Texas Instruments TMS320C80 Multimedia Video Processor chip. This chip is capable of 2 billion operations per second, providing the UWGSP7 with the capability to perform real-time image processing functions like median filtering, convolution and contrast enhancement. This processing power allows interactive analysis of the experiments, as compared to the current practice of off-line processing and analysis. Due to its flexibility and programmability, the UWGSP7 can be adapted to various research needs in intraoperative optical imaging.
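A minimal sketch of the control-subtraction and pseudo-color step described above (the alignment step is omitted, and the colormap choice is an assumption):

```python
import numpy as np
from matplotlib import cm

def reflectance_change(control, data_frames):
    """Subtract the pre-injection control image from each aligned data image
    and map the change in reflectance to an RGB pseudo-color image."""
    out = []
    for frame in data_frames:
        diff = frame.astype(float) - control.astype(float)
        norm = (diff - diff.min()) / (np.ptp(diff) + 1e-9)  # scale to [0, 1]
        out.append(cm.jet(norm)[..., :3])                   # drop alpha channel
    return out
```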
Electronic Photography at the NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Holm, Jack; Judge, Nancianne
1995-01-01
An electronic photography facility has been established in the Imaging & Photographic Technology Section, Visual Imaging Branch, at the NASA Langley Research Center (LaRC). The purpose of this facility is to provide the LaRC community with access to digital imaging technology. In particular, capabilities have been established for image scanning, direct image capture, optimized image processing for storage, image enhancement, and optimized device dependent image processing for output. Unique approaches include: evaluation and extraction of the entire film information content through scanning; standardization of image file tone reproduction characteristics for optimal bit utilization and viewing; education of digital imaging personnel on the effects of sampling and quantization to minimize image processing related information loss; investigation of the use of small kernel optimal filters for image restoration; characterization of a large array of output devices and development of image processing protocols for standardized output. Currently, the laboratory has a large collection of digital image files which contain essentially all the information present on the original films. These files are stored at 8-bits per color, but the initial image processing was done at higher bit depths and/or resolutions so that the full 8-bits are used in the stored files. The tone reproduction of these files has also been optimized so the available levels are distributed according to visual perceptibility. Look up tables are available which modify these files for standardized output on various devices, although color reproduction has been allowed to float to some extent to allow for full utilization of output device gamut.
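To make the tone-reproduction idea above concrete, a hypothetical 8-bit look-up table built from a power-law model of visual perceptibility might look like the following; the gamma value is an assumption, not Langley's calibration.

```python
import numpy as np

def perceptual_lut(gamma=1 / 2.2):
    """8-bit LUT that redistributes levels according to a power-law
    (gamma) model of visual perceptibility."""
    x = np.arange(256) / 255.0
    return np.clip(255 * x ** gamma, 0, 255).astype(np.uint8)

# Apply to an 8-bit image: output = perceptual_lut()[image]
```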
Designing Image Analysis Pipelines in Light Microscopy: A Rational Approach.
Arganda-Carreras, Ignacio; Andrey, Philippe
2017-01-01
With the progress of microscopy techniques and the rapidly growing amounts of acquired imaging data, there is an increased need for automated image processing and analysis solutions in biological studies. Each new application requires the design of a specific image analysis pipeline, by assembling a series of image processing operations. Many commercial or free bioimage analysis software are now available and several textbooks and reviews have presented the mathematical and computational fundamentals of image processing and analysis. Tens, if not hundreds, of algorithms and methods have been developed and integrated into image analysis software, resulting in a combinatorial explosion of possible image processing sequences. This paper presents a general guideline methodology to rationally address the design of image processing and analysis pipelines. The originality of the proposed approach is to follow an iterative, backwards procedure from the target objectives of analysis. The proposed goal-oriented strategy should help biologists to better apprehend image analysis in the context of their research and should allow them to efficiently interact with image processing specialists.
Halftoning and Image Processing Algorithms
1999-02-01
screening techniques with the quality advantages of error diffusion in the halftoning of color maps, and on color image enhancement for halftone ... image quality. Our goals in this research were to advance the understanding in image science for our new halftone algorithm and to contribute to ... image retrieval and noise theory for such imagery. In the field of color halftone printing, research was conducted on deriving a theoretical model of our
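For reference, the classic error-diffusion technique this report builds on is Floyd-Steinberg dithering; the sketch below is the standard textbook algorithm, not the report's new halftoning method.

```python
import numpy as np

def floyd_steinberg(gray):
    """Classic Floyd-Steinberg error diffusion on an 8-bit grayscale image."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0
            out[y, x] = new
            err = old - new
            # Distribute quantization error to unvisited neighbours.
            if x + 1 < w:               img[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               img[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: img[y + 1, x + 1] += err * 1 / 16
    return out.astype(np.uint8)
```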
Saliency-aware food image segmentation for personal dietary assessment using a wearable computer
USDA-ARS?s Scientific Manuscript database
Image-based dietary assessment has recently received much attention in the community of obesity research. In this assessment, foods in digital pictures are specified, and their portion sizes (volumes) are estimated. Although manual processing is currently the most utilized method, image processing h...
The COST Action IC0604 "Telepathology Network in Europe" (EURO-TELEPATH).
García-Rojo, Marcial; Gonçalves, Luís; Blobel, Bernd
2012-01-01
The COST Action IC0604 "Telepathology Network in Europe" (EURO-TELEPATH) is a European COST Action that ran from 2007 to 2011. COST Actions are funded by the COST (European Cooperation in the field of Scientific and Technical Research) Agency, supported by the Seventh Framework Programme for Research and Technological Development (FP7) of the European Union. EURO-TELEPATH's main objectives were evaluating and validating the common technological framework and communication standards required to access, transmit and manage digital medical records by pathologists and other medical professionals in a networked environment. The project was organized in four working groups. Working Group 1 "Business modeling in pathology" designed the main pathology processes - Frozen Study, Formalin-Fixed Specimen Study, Telepathology, Cytology, and Autopsy - using Business Process Modeling Notation (BPMN). Working Group 2 "Informatics standards in pathology" was dedicated to promoting the development and application of informatics standards in pathology, collaborating with Integrating the Healthcare Enterprise (IHE), Digital Imaging and Communications in Medicine (DICOM), Health Level Seven (HL7), and other standardization bodies. Working Group 3 "Images: Analysis, Processing, Retrieval and Management" worked on the use of virtual or digital slides, which are fostering the use of image processing and analysis in pathology not only for research purposes but also in daily practice. Working Group 4 "Technology and Automation in Pathology" focused on studying the adequacy of currently existing technical solutions, including, e.g., the quality of images obtained by slide scanners and the efficiency of image analysis applications. Major outcomes of this action are the collaboration with international health informatics standardization bodies to foster the development of standards for digital pathology, and a new approach for workflow analysis based on business process modeling. Health terminology standardization research has become a topic of high interest. Future research work should focus on standardization of automatic image analysis and tissue microarray imaging.
OIPAV: an integrated software system for ophthalmic image processing, analysis and visualization
NASA Astrophysics Data System (ADS)
Zhang, Lichun; Xiang, Dehui; Jin, Chao; Shi, Fei; Yu, Kai; Chen, Xinjian
2018-03-01
OIPAV (Ophthalmic Image Processing, Analysis and Visualization) is a cross-platform software system specially oriented to ophthalmic images. It provides a wide range of functionalities including data I/O, image processing, interaction, ophthalmic disease detection, data analysis and visualization, to help researchers and clinicians deal with various ophthalmic images such as optical coherence tomography (OCT) images and color fundus photographs. It enables users to easily access ophthalmic image data produced by different imaging devices, facilitates workflows of processing ophthalmic images, and improves quantitative evaluations. In this paper, we present the system design and functional modules of the platform and demonstrate various applications. With satisfying function scalability and expandability, we believe that the software can be widely applied in the ophthalmology field.
TheHiveDB image data management and analysis framework.
Muehlboeck, J-Sebastian; Westman, Eric; Simmons, Andrew
2014-01-06
The hive database system (theHiveDB) is a web-based brain imaging database, collaboration, and activity system which has been designed as an imaging workflow management system capable of handling cross-sectional and longitudinal multi-center studies. It can be used to organize and integrate existing data from heterogeneous projects as well as data from ongoing studies. It has been conceived to guide and assist the researcher throughout the entire research process, integrating all relevant types of data across modalities (e.g., brain imaging, clinical, and genetic data). TheHiveDB is a modern activity and resource management system capable of scheduling image processing on both private compute resources and the cloud. The activity component supports common image archival and management tasks as well as established pipeline processing (e.g., Freesurfer for extraction of scalar measures from magnetic resonance images). Furthermore, via theHiveDB activity system algorithm developers may grant access to virtual machines hosting versioned releases of their tools to collaborators and the imaging community. The application of theHiveDB is illustrated with a brief use case based on organizing, processing, and analyzing data from the publicly available Alzheimer Disease Neuroimaging Initiative.
NASA Technical Reports Server (NTRS)
Bracken, P. A.; Dalton, J. T.; Quann, J. J.; Billingsley, J. B.
1978-01-01
The Atmospheric and Oceanographic Information Processing System (AOIPS) was developed to help applications investigators perform required interactive image data analysis rapidly and to eliminate the inefficiencies and problems associated with batch operation. This paper describes the configuration and processing capabilities of AOIPS and presents unique subsystems for displaying, analyzing, storing, and manipulating digital image data. Applications of AOIPS to research investigations in meteorology and earth resources are featured.
Hybrid vision activities at NASA Johnson Space Center
NASA Technical Reports Server (NTRS)
Juday, Richard D.
1990-01-01
NASA's Johnson Space Center in Houston, Texas, is active in several aspects of hybrid image processing. (The term hybrid image processing refers to a system that combines digital and photonic processing). The major thrusts are autonomous space operations such as planetary landing, servicing, and rendezvous and docking. By processing images in non-Cartesian geometries to achieve shift invariance to canonical distortions, researchers use certain aspects of the human visual system for machine vision. That technology flow is bidirectional; researchers are investigating the possible utility of video-rate coordinate transformations for human low-vision patients. Man-in-the-loop teleoperations are also supported by the use of video-rate image-coordinate transformations, as researchers plan to use bandwidth compression tailored to the varying spatial acuity of the human operator. Technological elements being developed in the program include upgraded spatial light modulators, real-time coordinate transformations in video imagery, synthetic filters that robustly allow estimation of object pose parameters, convolutionally blurred filters that have continuously selectable invariance to such image changes as magnification and rotation, and optimization of optical correlation done with spatial light modulators that have limited range and couple both phase and amplitude in their response.
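As one concrete instance of the non-Cartesian geometries mentioned above, a log-polar resampling turns rotations and scalings of the input into shifts in the output; here is a brief sketch with scikit-image, offered as an illustration rather than NASA's implementation.

```python
from skimage.transform import warp_polar

def logpolar_signature(image):
    """Log-polar resampling: rotation and scale changes in `image` become
    translations in the returned array, simplifying invariant matching."""
    return warp_polar(image, scaling="log")
```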
Budin, Francois; Hoogstoel, Marion; Reynolds, Patrick; Grauer, Michael; O'Leary-Moore, Shonagh K; Oguz, Ipek
2013-01-01
Magnetic resonance imaging (MRI) of rodent brains enables study of the development and the integrity of the brain under certain conditions (alcohol, drugs etc.). However, these images are difficult to analyze for biomedical researchers with limited image processing experience. In this paper we present an image processing pipeline running on a Midas server, a web-based data storage system. It is composed of the following steps: rigid registration, skull-stripping, average computation, average parcellation, parcellation propagation to individual subjects, and computation of region-based statistics on each image. The pipeline is easy to configure and requires very little image processing knowledge. We present results obtained by processing a data set using this pipeline and demonstrate how this pipeline can be used to find differences between populations.
Physics of fractional imaging in biomedicine.
Sohail, Ayesha; Bég, O A; Li, Zhiwu; Celik, Sebahattin
2018-03-12
The mathematics of imaging is a growing field of research and is evolving rapidly, parallel to evolution in the field of imaging. Imaging, which is a sub-field of biomedical engineering, considers novel approaches to visualize biological tissues with the general goal of improving health. Medical imaging research provides improved diagnostic tools in clinical settings and supports the development of drugs and other therapies. Data acquisition and diagnostic interpretation with minimum error are the important technical aspects of medical imaging. Image quality and resolution are very important in portraying the internal aspects of a patient's body. Although there are several user-friendly resources for processing image features, such as enhancement, colour manipulation and compression, the development of new processing methods is still worth the effort. In this article we aim to present the role of fractional calculus in imaging with the aid of practical examples. Copyright © 2018 Elsevier Ltd. All rights reserved.
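As background for the fractional-calculus theme, the Grünwald-Letnikov construction is the standard discrete fractional derivative used in fractional-order image enhancement masks; the sketch below is a textbook version with unit step, not one of the paper's own examples.

```python
import numpy as np
from scipy.special import binom

def gl_fractional_derivative(signal, alpha, n_terms=20):
    """Gruenwald-Letnikov fractional derivative of order alpha (unit step
    h = 1): D^a f(x) ~ sum_k (-1)^k C(alpha, k) f(x - k), truncated."""
    k = np.arange(n_terms)
    coeffs = (-1.0) ** k * binom(alpha, k)
    out = np.zeros(len(signal))
    for j, c in enumerate(coeffs):
        out[j:] += c * signal[:len(signal) - j]
    return out
```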
Integrated circuit layer image segmentation
NASA Astrophysics Data System (ADS)
Masalskis, Giedrius; Petrauskas, Romas
2010-09-01
In this paper we present IC layer image segmentation techniques specifically created for precise metal layer feature extraction. During our research we used many samples of real-life de-processed IC metal layer images obtained using an optical light microscope. We have created sequences of various image processing filters which provide segmentation results of sufficient precision for our application. The filter sequences were fine-tuned to provide the best possible results depending on the properties of the IC manufacturing process and imaging technology. The proposed IC image segmentation filter sequences were experimentally tested and compared with conventional direct segmentation algorithms.
NASA Astrophysics Data System (ADS)
Tomczak, Kamil; Jakubowski, Jacek; Fiołek, Przemysław
2017-06-01
Crack width measurement is an important element of research on the progress of self-healing cement composites. Due to the nature of this research, the method of measuring the width of cracks and their changes over time must meet specific requirements. The article presents a novel method of measuring crack width based on images from a scanner with an optical resolution of 6400 dpi, subjected to initial image processing in the ImageJ development environment and further processing and analysis of results. After registering a series of images of the cracks at different times using the SIFT (Scale-Invariant Feature Transform) method, a dense network of line segments is created in all images, intersecting the cracks perpendicular to their local axes. Along these line segments, brightness profiles are extracted, which are the basis for determination of crack width. The distribution and rotation of the lines of intersection in a regular layout, the automation of transformations, the management of images and brightness profiles, and the data analysis to determine the width of cracks and their changes over time are performed automatically by our own code in the ImageJ and VBA environments. The article describes the method, tests of its properties, and sources of measurement uncertainty. It also presents an example application of the method in research on autogenous self-healing of concrete, specifically the ability to reduce a sample crack width and achieve its full closure within 28 days of the self-healing process.
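A minimal sketch of the brightness-profile idea: sample the image along one scan segment crossing the crack and count dark samples. The threshold, sampling density, and calibration are assumptions, not the article's values.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def crack_width_mm(image, p0, p1, dark_threshold, px_per_mm):
    """Width along one scan segment from p0 to p1 (row, col): extract the
    brightness profile at 2x pixel density and measure the dark run."""
    n = int(np.hypot(p1[0] - p0[0], p1[1] - p0[1])) * 2
    rows = np.linspace(p0[0], p1[0], n)
    cols = np.linspace(p0[1], p1[1], n)
    profile = map_coordinates(image.astype(float), [rows, cols], order=1)
    dark = profile < dark_threshold   # crack pixels are darker than the surface
    return dark.sum() * 0.5 / px_per_mm  # samples are 0.5 px apart
```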
Electronic workflow for imaging in clinical research.
Hedges, Rebecca A; Goodman, Danielle; Sachs, Peter B
2014-08-01
In the transition from paper to electronic workflow, the University of Colorado Health System's implementation of a new electronic health record system (EHR) forced all clinical groups to reevaluate their practices, including the infrastructure surrounding clinical trials. Radiological imaging is an important piece of many clinical trials and requires a high level of consistency and standardization. With EHR implementation, paper orders were manually transcribed into the EHR, digitizing an inefficient workflow. A team of schedulers, radiologists, technologists, research personnel, and EHR analysts worked together to optimize the EHR to accommodate the needs of research imaging protocols. The transition to electronic workflow posed several problems: (1) there needed to be effective communication throughout the imaging process from scheduling to radiologist interpretation. (2) The exam ordering process needed to be automated to allow scheduling of specific research studies on specific equipment. (3) The billing process needed to be controlled to accommodate radiologists already supported by grants. (4) There needed to be functionality allowing exams to finalize automatically, skipping the PACS and interpretation process. (5) There needed to be a way to alert radiologists that a specialized research interpretation was needed on a given exam. These issues were resolved through the optimization of the "visit type," allowing high-level control of an exam at the time of scheduling. Additionally, we added columns and fields to work queues displaying grant identification numbers. The build solutions we implemented reduced mistakes and increased imaging quality and compliance.
Autocorrelation techniques for soft photogrammetry
NASA Astrophysics Data System (ADS)
Yao, Wu
In this thesis, research is carried out on image processing, image matching search strategies, feature types in image matching, and optimal window size in image matching. To make comparisons, the soft photogrammetry package SoftPlotter is used. Two aerial photographs from the Iowa State University campus high flight 94 are scanned into digital format. In order to create a stereo model from them, interior orientation, single photograph rectification and stereo rectification are done. Two new image matching methods, multi-method image matching (MMIM) and unsquare window image matching, are developed and compared. MMIM is used to determine the optimal window size in image matching. Twenty-four check points from four different types of ground features are used for checking the results from image matching. Comparison between these four types of ground feature shows that the methods developed here improve the speed and the precision of image matching. A process called direct transformation is described and compared with the multiple steps in image processing. The results from image processing are consistent with those from SoftPlotter. A modified LAN image header is developed and used to store the information about the stereo model and image matching. A comparison is also made between cross correlation image matching (CCIM), least difference image matching (LDIM) and least squares image matching (LSIM). The quality of image matching in relation to ground features is compared using two methods developed in this study, the coefficient surface for CCIM and the difference surface for LDIM. To reduce the amount of computation in image matching, the best-track searching algorithm, developed in this research, is used instead of the whole-range searching algorithm.
Study on field weed recognition in real time
NASA Astrophysics Data System (ADS)
He, Yong; Pan, Jiazhi; Zhang, Yun
2006-02-01
This research aimed to identify weeds among crops at an early stage in the field by using image-processing technology. 3CCD images offer a greater value difference between weed and crop regions than ordinary digital images taken by common cameras: the camera has three channels (green, red, and infrared) that capture the same area in a single snapshot, and the three images can be composed into one, which facilitates the segmentation of different areas. In this research, an MS3100 3CCD camera was used to obtain images of six kinds of weeds and crops. Some of these images contained more than two kinds of plants. The leaves' shapes, sizes and colors may be very similar or may differ from each other greatly: some are sword-shaped and some round; some are as large as a palm and some as small as a peanut; some are brownish while others are blue or green. Different combinations were taken into consideration. By applying the image-processing toolkit in MATLAB, the different areas in the image can be segmented clearly. The texture of the images was also analyzed. The processing methods include operations such as edge detection, erosion, dilation and other algorithms to process the edge vectors and textures. It is of great importance to segment, in real time, the different areas in digital images taken in the field. When the technique is applied in precision farming, much energy, herbicide and other material can be saved. At present, large-scale software such as MATLAB on a PC is used, but the computation can be reduced and integrated into a small embedded system. The research results have shown that the application of this technique in agricultural engineering is feasible and of great economic value.
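A hedged sketch of the kind of channel-ratio segmentation a green/red/infrared camera enables: an NDVI-style index separates vegetation from soil, and a second threshold stands in for the weed/crop split. Both threshold values are assumptions, not the paper's.

```python
import numpy as np

def vegetation_mask(red, nir, ndvi_soil=0.2, ndvi_crop=0.6):
    """Split pixels into crop and weed masks via a normalized
    difference vegetation index computed from red and infrared channels."""
    ndvi = (nir.astype(float) - red) / (nir.astype(float) + red + 1e-9)
    plants = ndvi > ndvi_soil          # vegetation vs. soil background
    crop = plants & (ndvi > ndvi_crop)
    weed = plants & ~crop
    return crop, weed
```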
Evaluation of security algorithms used for security processing on DICOM images
NASA Astrophysics Data System (ADS)
Chen, Xiaomeng; Shuai, Jie; Zhang, Jianguo; Huang, H. K.
2005-04-01
In this paper, we developed a security approach to provide security measures and features for PACS image acquisition and teleradiology image transmission. The security processing on medical images was based on public key infrastructure (PKI) and included digital signatures and data encryption to achieve the security features of confidentiality, privacy, authenticity, integrity, and non-repudiation. There are many algorithms which can be used in PKI for data encryption and digital signatures. In this research, we selected several algorithms to perform security processing on different DICOM images in a PACS environment, evaluated the security processing performance of these algorithms, and determined the relationship between performance and image type, size, and implementation method.
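A minimal PKI-style signing sketch using Python's cryptography package, signing a DICOM file's raw bytes; the file name is hypothetical, and real deployments would follow the DICOM digital-signature profile rather than sign whole files.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

with open("image.dcm", "rb") as f:     # hypothetical file name
    data = f.read()

# Sign, then verify; verify() raises InvalidSignature if data was tampered with.
signature = private_key.sign(data, pss, hashes.SHA256())
private_key.public_key().verify(signature, data, pss, hashes.SHA256())
```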
A Unified Mathematical Approach to Image Analysis.
1987-08-31
describes four instances of the paradigm in detail. Directions for ongoing and future research are also indicated. Keywords: Image processing; Algorithms; Segmentation; Boundary detection; Tomography; Global image analysis.
NASA Astrophysics Data System (ADS)
Yarovyi, Andrii A.; Timchenko, Leonid I.; Kozhemiako, Volodymyr P.; Kokriatskaia, Nataliya I.; Hamdi, Rami R.; Savchuk, Tamara O.; Kulyk, Oleksandr O.; Surtel, Wojciech; Amirgaliyev, Yedilkhan; Kashaganova, Gulzhan
2017-08-01
The paper addresses the insufficient performance of existing computing systems for large-image processing, which does not meet the modern requirements posed by the resource-intensive computing tasks of laser beam profiling. The research concentrated on one of the profiling problems, namely real-time processing of spot images of the laser beam profile. The development of a theory of parallel-hierarchical transformation made it possible to produce models of high-performance parallel-hierarchical processes, as well as algorithms and software for their implementation based on a GPU-oriented architecture using GPGPU technologies. The analyzed performance of the suggested computerized tools for processing and classification of laser beam profile images shows that they allow real-time processing of dynamic images of various sizes.
Medical image analysis with artificial neural networks.
Jiang, J; Trundle, P; Ren, J
2010-12-01
Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging. Copyright © 2010 Elsevier Ltd. All rights reserved.
Development of Neuromorphic Sift Operator with Application to High Speed Image Matching
NASA Astrophysics Data System (ADS)
Shankayi, M.; Saadatseresht, M.; Bitetto, M. A. V.
2015-12-01
There has always been a speed/accuracy challenge in the photogrammetric mapping process, including feature detection and matching. Most previous research has improved algorithm speed through simplifications or software modifications that affect the accuracy of the image matching process. This research tries to improve speed without changing the accuracy of the same algorithm, using neuromorphic techniques. In this research we have developed a general design of a neuromorphic ASIC to handle algorithms such as SIFT. We have also investigated the neural assignment in each step of the SIFT algorithm. With a rough estimate based on the delays of the elements used, including the MAC and comparator, we have estimated the resulting chip's performance for three scenarios: Full HD movie (videogrammetry), 24 MP (UAV photogrammetry), and an 88 MP image sequence. Our estimates led to approximately 3000 fps for the Full HD movie, 250 fps for the 24 MP image sequence and 68 fps for the 88 MP UltraCam image sequence, which can be a huge improvement for current photogrammetric processing systems. We also estimated a power consumption of less than 10 watts, which current workflows cannot match.
Brain's tumor image processing using shearlet transform
NASA Astrophysics Data System (ADS)
Cadena, Luis; Espinosa, Nikolai; Cadena, Franklin; Korneeva, Anna; Kruglyakov, Alexey; Legalov, Alexander; Romanenko, Alexey; Zotin, Alexander
2017-09-01
Brain tumor detection is a well-known research area for medical and computer scientists. In recent decades much research has been done on tumor detection, segmentation, and classification. Medical imaging plays a central role in the diagnosis of brain tumors and nowadays uses non-invasive, high-resolution techniques, especially magnetic resonance imaging and computed tomography scans. Edge detection is a fundamental tool in image processing, particularly in the areas of feature detection and feature extraction, which aim at identifying points in a digital image at which the image has discontinuities. Shearlets are among the most successful frameworks for the efficient representation of multidimensional data, capturing edges and other anisotropic features which frequently dominate multidimensional phenomena. The paper proposes an improved brain tumor detection method that automatically detects tumor location in MR images; its features are extracted by a new shearlet transform.
A Review of Algorithms for Segmentation of Optical Coherence Tomography from Retina
Kafieh, Raheleh; Rabbani, Hossein; Kermani, Saeed
2013-01-01
Optical coherence tomography (OCT) is a recently established imaging technique that provides information about the internal structures of an object and images various aspects of biological tissues. OCT image segmentation is mostly applied to retinal OCT to localize the intra-retinal boundaries. Here, we review some of the important image segmentation methods for processing retinal OCT images. We classify the OCT segmentation approaches into five distinct groups according to the image domain subjected to the segmentation algorithm. Current research in OCT segmentation is mostly aimed at improving accuracy and precision and at reducing the required processing time. There is no doubt that current 3-D imaging modalities are now moving research projects toward volume segmentation along with 3-D rendering and visualization. It is also important to develop robust methods capable of dealing with pathologic cases in OCT imaging. PMID:24083137
Hamit, Murat; Yun, Weikang; Yan, Chuanbo; Kutluk, Abdugheni; Fang, Yang; Alip, Elzat
2015-06-01
Image feature extraction is an important part of image processing and an important field of research and application of image processing technology. Uygur medicine is one of the traditional Chinese medicines and is receiving growing research attention, but large amounts of Uygur medicine data have not been fully utilized. In this study, we extracted image color histogram features of herbal and zooid medicines of Xinjiang Uygur. First, we performed preprocessing, including image color enhancement, size normalization, and color space transformation. Then we extracted color histogram features and analyzed them with statistical methods. Finally, we evaluated the classification ability of the features by Bayes discriminant analysis. Experimental results showed that high accuracy in Uygur medicine image classification was obtained using the color histogram feature. This study should be helpful for content-based medical image retrieval of Xinjiang Uygur medicine.
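A minimal sketch of such a pipeline, assuming OpenCV and scikit-learn, with randomly generated stand-in images and labels; scikit-learn's LinearDiscriminantAnalysis is used as a stand-in for the Bayes discriminant analysis in the study:

```python
import cv2
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def color_histogram(image_bgr, bins=8):
    """Per-channel HSV histogram, concatenated and L1-normalized."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    upper = [180, 256, 256]                  # OpenCV hue runs 0..179
    feats = [cv2.calcHist([hsv], [c], None, [bins], [0, upper[c]]).ravel()
             for c in range(3)]
    vec = np.concatenate(feats)
    return vec / vec.sum()

# Stand-in data: 20 random BGR images in two hypothetical classes.
rng = np.random.default_rng(0)
images = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(20)]
labels = [0] * 10 + [1] * 10

X = np.array([color_histogram(img) for img in images])
clf = LinearDiscriminantAnalysis().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```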
2013-05-01
AFRL-RX-WP-TR-2013-0210: Collaborative Research and Development (CR&D) III, Task Order 0090: Image Processing Framework. Air Force Research Laboratory. Approved for public release; distribution unlimited; see additional restrictions described on inside pages. STINFO copy. The recoverable portion of the abstract notes that code bases written under a contract or for a PhD dissertation typically are "proof-of-concept" implementations that can only read a single set of inputs and are not designed for reuse.
[Research applications in digital radiology. Big data and co].
Müller, H; Hanbury, A
2016-02-01
Medical imaging produces increasingly complex images (e.g. thinner slices and higher resolution) with more protocols, so that image reading has also become much more complex. More information needs to be processed and usually the number of radiologists available for these tasks has not increased to the same extent. The objective of this article is to present current research results from projects on the use of image data for clinical decision support. An infrastructure that allows large volumes of data to be accessed is presented; in this way, the best performing tools can be identified without the medical data having to leave secure servers. The text presents the results of the VISCERAL and Khresmoi EU-funded projects, which allow the analysis of previous cases from institutional archives to support decision-making and process automation. The results also represent a secure evaluation environment for medical image analysis, which allows the use of data extracted from past cases to solve information needs occurring when diagnosing new cases. The presented research prototypes allow direct extraction of knowledge from the visual data of the images and its use for decision support or process automation. Real clinical use has not been tested, but several subjective user tests showed the effectiveness and efficiency of the process. The future in radiology will clearly depend on better use of the important knowledge in clinical image archives to automate processes and aid decision-making via big data analysis. This can help concentrate the work of radiologists on the most important parts of diagnostics.
Sabbatini, Amber K; Merck, Lisa H; Froemming, Adam T; Vaughan, William; Brown, Michael D; Hess, Erik P; Applegate, Kimberly E; Comfere, Nneka I
2015-12-01
Patient-centered emergency diagnostic imaging relies on efficient communication and multispecialty care coordination to ensure optimal imaging utilization. The construct of the emergency diagnostic imaging care coordination cycle with three main phases (pretest, test, and posttest) provides a useful framework to evaluate care coordination in patient-centered emergency diagnostic imaging. This article summarizes findings reached during the patient-centered outcomes session of the 2015 Academic Emergency Medicine consensus conference "Diagnostic Imaging in the Emergency Department: A Research Agenda to Optimize Utilization." The primary objective was to develop a research agenda focused on 1) defining component parts of the emergency diagnostic imaging care coordination process, 2) identifying gaps in communication that affect emergency diagnostic imaging, and 3) defining optimal methods of communication and multidisciplinary care coordination that ensure patient-centered emergency diagnostic imaging. Prioritized research questions provided the framework to define a research agenda for multidisciplinary care coordination in emergency diagnostic imaging. © 2015 by the Society for Academic Emergency Medicine.
Applications of High-speed motion analysis system on Solid Rocket Motor (SRM)
NASA Astrophysics Data System (ADS)
Liu, Yang; He, Guo-qiang; Li, Jiang; Liu, Pei-jin; Chen, Jian
2007-01-01
The high-speed motion analysis system can record images at up to 12,000 fps and analyze them with its image processing system. The system stores data and images directly in electronic memory, which is convenient for management and analysis. Combining the high-speed motion analysis system with an X-ray radiography system produced a high-speed real-time X-ray radiography system that can diagnose and measure dynamic, high-speed processes inside opaque structures. Image processing software was developed to improve the quality of the original images and acquire more precise information. This paper introduces typical applications of the high-speed motion analysis system to solid rocket motors (SRM). Studies were carried out with the system on anomalous combustion of solid propellant grains with defects, real-time measurement of insulator erosion, the explosion incision process of a motor, the structure and wave character of the plume during ignition and flameout, end burning of solid propellant, measurement of the flame front, and compatibility between airplane and missile during missile launch, and significant results were achieved. For these applications, key problems that degraded image quality, such as motor vibration, power supply instability, geometric distortion, and noise disturbance, were solved, and the image processing software was extended to improve the measurement of image characteristics. The experimental results showed that the system is a powerful facility for studying instantaneous, high-speed processes in solid rocket motors. With the development of image processing techniques, the capability of the high-speed motion analysis system was further enhanced.
NASA Astrophysics Data System (ADS)
Lowrance, John L.; Mastrocola, V. J.; Renda, George F.; Swain, Pradyumna K.; Kabra, R.; Bhaskaran, Mahalingham; Tower, John R.; Levine, Peter A.
2004-02-01
This paper describes the architecture, process technology, and performance of a family of high burst rate CCDs. These imagers employ high-speed, low-lag photo-detectors with local storage at each photo-detector to achieve image capture at rates greater than 10^6 frames per second. One imager has a 64 x 64 pixel array with 12 frames of storage. A second imager has an 80 x 160 array with 28 frames of storage, and the third imager has a 64 x 64 pixel array with 300 frames of storage. Application areas include capture of rapid mechanical motion, optical wavefront sensing, fluid cavitation research, combustion studies, plasma research and wind-tunnel-based gas dynamics research.
A platform for European CMOS image sensors for space applications
NASA Astrophysics Data System (ADS)
Minoglou, K.; San Segundo Bello, D.; Sabuncuoglu Tezcan, D.; Haspeslagh, L.; Van Olmen, J.; Merry, B.; Cavaco, C.; Mazzamuto, F.; Toqué-Trésonne, I.; Moirin, R.; Brouwer, M.; Toccafondi, M.; Preti, G.; Rosmeulen, M.; De Moor, P.
2017-11-01
Both ESA and the EC have identified the need for a supply chain of CMOS imagers for space applications which uses solely European sources. An essential requirement on this supply chain is the platformization of the process modules, in particular when it comes to very specific processing steps, such as those required for the manufacturing of backside illuminated image sensors. This is the goal of the European (EC/FP7/SPACE) funded project EUROCIS. All EUROCIS partners have excellent know-how and track record in the expertise fields required. Imec has been leading the imager chip design and the front side and backside processing. LASSE, as a major player in the laser annealing supplier sector, has been focusing on the optimization of the process related to the backside passivation of the image sensors. TNO, known worldwide as a top developer of instruments for scientific research, including space research and sensors for satellites, has contributed in the domain of optical layers for space instruments and optimized antireflective coatings. Finally, Selex ES, as a world-wide leader for manufacturing instruments with expertise in various space missions and programs, has defined the image sensor specifications and is taking care of the final device characterization. In this paper, an overview of the process flow, the results on test structures and imagers processed using this platform will be presented.
Applications of Panoramic Images: From 720° Panorama to Interior 3D Models of Augmented Reality
NASA Astrophysics Data System (ADS)
Lee, I.-C.; Tsai, F.
2015-05-01
A series of panoramic images is usually used to generate a 720° panorama image. Although panoramic images are typically used for establishing tour guiding systems, in this research we demonstrate the potential of using panoramic images acquired from multiple sites to create not only 720° panoramas but also three-dimensional (3D) point clouds and 3D indoor models. Since 3D modeling is one of the goals of this research, the locations of the panoramic sites needed to be carefully planned in order to maintain a robust result for close-range photogrammetry. After the images are acquired, they are processed into 720° panoramas, which can be used directly in panorama guiding systems or other applications. In addition to these straightforward applications, the interior orientation parameters (focal length, principal point, and lens radial distortion) can also be estimated while generating the 720° panorama. The panoramic images can then be processed with close-range photogrammetry procedures to extract the exterior orientation parameters and generate 3D point clouds. In this research, VisualSFM, a structure-from-motion software package, is used to estimate the exterior orientation, and the CMVS toolkit is used to generate 3D point clouds. Next, the 3D point clouds are used as references to create building interior models. Trimble SketchUp was used to build the models, and the 3D point cloud supported the determination of the locations of building objects using a plane-finding procedure. In the texturing process, the panorama images are used as the data source for creating model textures. The 3D indoor model was used as an augmented reality model replacing the guide map or floor plan commonly used in an online touring guide system. The 3D indoor model generating procedure has been utilized in two research projects: a cultural heritage site at Kinmen, and the Taipei Main Station pedestrian zone guidance and navigation system. The results presented in this paper demonstrate the potential of using panoramic images to generate 3D point clouds and 3D models. However, it is currently a manual and labor-intensive process, and research is being carried out to increase the degree of automation of these procedures.
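As a minimal illustration of the first stage, generating a panorama from overlapping views, the sketch below uses OpenCV's high-level stitcher (file names are hypothetical); the paper's full pipeline additionally recovers orientation parameters and dense point clouds with VisualSFM and CMVS, which is beyond a short example:

```python
import cv2

# Overlapping views from one panoramic site (hypothetical file names).
files = ["site1_000.jpg", "site1_045.jpg", "site1_090.jpg", "site1_135.jpg"]
images = [cv2.imread(f) for f in files]

# OpenCV's stitcher handles feature matching, bundle adjustment,
# warping, and blending internally.
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("site1_panorama.jpg", panorama)
else:
    print("stitching failed with status", status)
```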
Tunable Light-Guide Image Processing Snapshot Spectrometer (TuLIPSS) for Earth and Moon Observations
NASA Astrophysics Data System (ADS)
Tkaczyk, T. S.; Alexander, D.; Luvall, J. C.; Wang, Y.; Dwight, J. G.; Pawlowsk, M. E.; Howell, B.; Tatum, P. F.; Stoian, R.-I.; Cheng, S.; Daou, A.
2018-02-01
A tunable light-guide image processing snapshot spectrometer (TuLIPSS) for Earth science research and observation is being developed through a NASA instrument incubator project with Rice University and Marshall Space Flight Center.
Automating PACS quality control with the Vanderbilt image processing enterprise resource
NASA Astrophysics Data System (ADS)
Esparza, Michael L.; Welch, E. Brian; Landman, Bennett A.
2012-02-01
Precise image acquisition is an integral part of modern patient care and medical imaging research. Periodic quality control using standardized protocols and phantoms ensures that scanners are operating according to specifications, yet such procedures do not ensure that individual datasets are free from corruption, for example due to patient motion, transient interference, or physiological variability. If unacceptable artifacts are noticed during scanning, a technologist can repeat a procedure. Yet substantial delays may be incurred if a problematic scan is not noticed until a radiologist reads the scans or an automated algorithm fails. Given scores of slices in typical three-dimensional scans and a wide variety of potential use cases, a technologist cannot practically be expected to inspect all images. In large-scale research, automated pipeline systems have had great success in achieving high throughput. However, clinical and institutional workflows are largely based on DICOM and PACS technologies; these systems are not readily compatible with research systems due to security and privacy restrictions. Hence, quantitative quality control has been relegated to individual investigators and too often neglected. Herein, we propose a scalable system, the Vanderbilt Image Processing Enterprise Resource (VIPER), to integrate modular quality control and image analysis routines with a standard PACS configuration. This server unifies image processing routines across an institutional level and provides a simple interface so that investigators can collaborate to deploy new analysis technologies. VIPER integrates with high-performance computing environments and has successfully analyzed all standard scans from our institutional research center over the course of the last 18 months.
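A toy sketch of the kind of modular, metadata-level check such a pipeline might run on incoming DICOM series; pydicom is assumed, and the thresholds and paths are invented for illustration:

```python
from pathlib import Path
import pydicom

def check_series(series_dir):
    """Flag a DICOM series whose slice count or spacing looks suspicious."""
    slices = [pydicom.dcmread(p, stop_before_pixels=True)
              for p in sorted(Path(series_dir).glob("*.dcm"))]
    issues = []
    if len(slices) < 10:                        # illustrative threshold
        issues.append(f"only {len(slices)} slices")
    # Assumes axial slices: z-position is ImagePositionPatient[2].
    z = sorted(float(s.ImagePositionPatient[2]) for s in slices)
    gaps = [b - a for a, b in zip(z, z[1:])]
    if gaps and max(gaps) > 1.5 * min(gaps):    # uneven spacing: missing slice?
        issues.append("non-uniform slice spacing")
    return issues

print(check_series("incoming/series_001"))      # hypothetical path
```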
Yang, Deshan; Brame, Scott; El Naqa, Issam; Aditya, Apte; Wu, Yu; Goddu, S Murty; Mutic, Sasa; Deasy, Joseph O; Low, Daniel A
2011-01-01
Recent years have witnessed tremendous progress in image-guided radiotherapy technology and a growing interest in the possibilities for adapting treatment planning and delivery over the course of treatment. One obstacle faced by the research community has been the lack of a comprehensive open-source software toolkit dedicated to adaptive radiotherapy (ART). To address this need, the authors have developed a software suite called the Deformable Image Registration and Adaptive Radiotherapy Toolkit (DIRART). DIRART is an open-source toolkit developed in MATLAB. It is designed in an object-oriented style with a focus on user-friendliness, features, and flexibility. It contains four classes of DIR algorithms, including the newer inverse-consistency algorithms that provide consistent displacement vector fields in both directions. It also contains common ART functions, an integrated graphical user interface, a variety of visualization and image-processing features, dose metric analysis functions, and interface routines. These interface routines make DIRART a powerful complement to the Computational Environment for Radiotherapy Research (CERR) and popular image-processing toolkits such as ITK. DIRART provides a set of image processing/registration algorithms and postprocessing functions to facilitate the development and testing of DIR algorithms. It also offers many options for DIR result visualization, evaluation, and validation. By exchanging data with treatment planning systems via DICOM-RT files and CERR, and by bringing image registration algorithms closer to radiotherapy applications, DIRART is potentially a convenient and flexible platform that may facilitate ART and DIR research. © 2011 American Association of Physicists in Medicine.
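DIRART itself is a MATLAB toolkit; as a rough Python analogue of a single deformable-registration building block, the sketch below sets up a B-spline registration with SimpleITK (file names hypothetical), not DIRART's own algorithms:

```python
import SimpleITK as sitk

fixed = sitk.ReadImage("planning_ct.nii", sitk.sitkFloat32)   # hypothetical
moving = sitk.ReadImage("daily_ct.nii", sitk.sitkFloat32)     # hypothetical

# B-spline free-form deformation driven by Mattes mutual information.
tx = sitk.BSplineTransformInitializer(fixed, [8, 8, 8])
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsLBFGSB(numberOfIterations=100)
reg.SetInitialTransform(tx, inPlace=True)
reg.SetInterpolator(sitk.sitkLinear)
deform = reg.Execute(fixed, moving)

# Resample the daily image into the planning geometry.
warped = sitk.Resample(moving, fixed, deform, sitk.sitkLinear, 0.0)
sitk.WriteImage(warped, "daily_ct_warped.nii")
```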
Image Algebra Matlab language version 2.3 for image processing and compression research
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.; Ritter, Gerhard X.; Hayden, Eric
2010-08-01
Image algebra is a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain. Image algebra was developed under DARPA and US Air Force sponsorship at University of Florida for over 15 years beginning in 1984. Image algebra has been implemented in a variety of programming languages designed specifically to support the development of image processing and computer vision algorithms and software. The University of Florida has been associated with implementations in the languages FORTRAN, Ada, Lisp, and C++. The latter implementation involved a class library, iac++, that supported image algebra programming in C++. Since image processing and computer vision are generally performed with operands that are array-based, the Matlab™ programming language is ideal for implementing the common subset of image algebra. Objects include sets and set operations, images and operations on images, as well as templates and image-template convolution operations. This implementation, called Image Algebra Matlab (IAM), has been found to be useful for research in data, image, and video compression, as described herein. Due to the widespread acceptance of the Matlab programming language in the computing community, IAM offers exciting possibilities for supporting a large group of users. The control over an object's computational resources provided to the algorithm designer by Matlab means that IAM programs can employ versatile representations for the operands and operations of the algebra, which are supported by the underlying libraries written in Matlab. In a previous publication, we showed how the functionality of iac++ could be carried forth into a Matlab implementation, and provided practical details of a prototype implementation called IAM Version 1. In this paper, we further elaborate the purpose and structure of image algebra, then present a maturing implementation of Image Algebra Matlab called IAM Version 2.3, which extends the previous implementation of IAM to include polymorphic operations over different point sets, as well as recursive convolution operations and functional composition. We also show how image algebra and IAM can be employed in image processing and compression research, as well as algorithm development and analysis.
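IAM itself is a Matlab class library; the flavor of an image algebra image-template operation can be sketched in a few lines of Python/NumPy (a stand-in, not IAM code):

```python
import numpy as np
from scipy.ndimage import convolve

# An image is a function on a rectangular point set; a template assigns a
# weighted neighborhood to each point. Image-template convolution reduces
# each neighborhood with multiply-accumulate:
image = np.random.rand(64, 64)
template = np.array([[0.,  1., 0.],
                     [1., -4., 1.],
                     [0.,  1., 0.]])          # discrete Laplacian template
result = convolve(image, template, mode="nearest")

# Replacing the sum with max gives nonlinear operators (e.g. gray-scale
# dilation), which image algebra expresses in the same unified notation.
```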
NASA Astrophysics Data System (ADS)
Gurov, I. P.; Kozlov, S. A.
2014-09-01
The first international scientific school "Methods of Digital Image Processing in Optics and Photonics" was held with a view to developing cooperation between world-class experts, young scientists, students and post-graduate students, and to exchanging information on the current status and directions of research in the field of digital image processing in optics and photonics. The school was organized by Saint Petersburg National Research University of Information Technologies, Mechanics and Optics (ITMO University), Saint Petersburg (Russia); Chernyshevsky Saratov State University, Saratov (Russia); and the National Research Nuclear University "MEPhI" (NRNU MEPhI), Moscow (Russia), with the participation of the local chapters of the Optical Society of America (OSA), the Society of Photo-Optical Instrumentation Engineers (SPIE), and the IEEE Photonics Society. Further details, including topics, committees and conference photos, are available in the PDF.
NASA Technical Reports Server (NTRS)
1997-01-01
In 1990, Lewis Research Center jointly sponsored a conference with the U.S. Air Force Wright Laboratory focused on high speed imaging. This conference, and early funding by Lewis Research Center, helped to spur work by Silicon Mountain Design, Inc. to break the performance barriers of imaging speed, resolution, and sensitivity through innovative technology. Later, under a Small Business Innovation Research contract with the Jet Propulsion Laboratory, the company designed a real-time image enhancing camera that yields superb, high quality images in 1/30th of a second while limiting distortion. The result is a rapidly available, enhanced image showing significantly greater detail compared to image processing executed on digital computers. Current applications include radiographic and pathology-based medicine, industrial imaging, x-ray inspection devices, and automated semiconductor inspection equipment.
A quality-refinement process for medical imaging applications.
Neuhaus, J; Maleike, D; Nolden, M; Kenngott, H-G; Meinzer, H-P; Wolf, I
2009-01-01
The aim was to introduce and evaluate a process for refinement of software quality that is suitable for research groups. In order to avoid constraining researchers too much, the quality improvement process has to be designed carefully. The scope of this paper is to present and evaluate a process to advance quality aspects of existing research prototypes in order to make them ready for initial clinical studies. The proposed process is tailored for research environments and therefore more lightweight than traditional quality management processes: it focuses on quality criteria that are important at the given stage of the software life cycle, and it emphasizes tools that automate aspects of the process. To evaluate the additional effort that comes with the process, it was applied, as an example, to eight prototypical software modules for medical image processing. The introduced process has been applied to improve the quality of all prototypes so that they could be successfully used in clinical studies. The quality refinement yielded an average of 13 person-days of additional effort per project. Overall, 107 bugs were found and resolved by applying the process. Careful selection of quality criteria and the usage of automated process tools lead to a lightweight quality refinement process suitable for scientific research groups that can be applied to ensure a successful transfer of technical software prototypes into clinical research workflows.
Image Understanding Architecture
1991-09-01
The goal of this effort is to define an architecture to support real-time, knowledge-based image understanding, and to develop the software support environment needed to utilize it. Only fragments of the abstract survive; the report keywords are: Image Understanding Architecture, knowledge-based vision, AI, real-time computer vision, software simulator, parallel processor. In addition to sensory and knowledge-based processing, it is useful to introduce a level of symbolic processing.
2002-09-30
Physical Modeling for Processing Geosynchronous Imaging Fourier Transform Spectrometer-Indian Ocean METOC Imager (GIFTS-IOMI) Hyperspectral Data. The objective of this DoD research effort is to develop and demonstrate a fully functional GIFTS-IOMI processing capability for use once GIFTS-IOMI is stationed over the Indian Ocean. The system will provide specialized methods for the characterization of the atmospheric environment, with applications including water quality assessment.
Advanced magnetic resonance imaging of the physical processes in human glioblastoma.
Kalpathy-Cramer, Jayashree; Gerstner, Elizabeth R; Emblem, Kyrre E; Andronesi, Ovidiu; Rosen, Bruce
2014-09-01
The most common malignant primary brain tumor, glioblastoma multiforme (GBM) is a devastating disease with a grim prognosis. Patient survival is typically less than two years and fewer than 10% of patients survive more than five years. Magnetic resonance imaging (MRI) can have great utility in the diagnosis, grading, and management of patients with GBM as many of the physical manifestations of the pathologic processes in GBM can be visualized and quantified using MRI. Newer MRI techniques such as dynamic contrast enhanced and dynamic susceptibility contrast MRI provide functional information about the tumor hemodynamic status. Diffusion MRI can shed light on tumor cellularity and the disruption of white matter tracts in the proximity of tumors. MR spectroscopy can be used to study new tumor tissue markers such as IDH mutations. MRI is helping to noninvasively explore the link between the molecular basis of gliomas and the imaging characteristics of their physical processes. Here, we review several approaches to MR-based imaging and discuss the potential for these techniques to quantify the physical processes in glioblastoma, including tumor cellularity and vascularity, metabolite expression, and patterns of tumor growth and recurrence. We conclude with challenges and opportunities for further research in applying physical principles to better understand the biologic process in this deadly disease. See all articles in this Cancer Research section, "Physics in Cancer Research." ©2014 American Association for Cancer Research.
Discriminative feature representation: an effective postprocessing solution to low dose CT imaging
NASA Astrophysics Data System (ADS)
Chen, Yang; Liu, Jin; Hu, Yining; Yang, Jian; Shi, Luyao; Shu, Huazhong; Gui, Zhiguo; Coatrieux, Gouenou; Luo, Limin
2017-03-01
This paper proposes a concise and effective approach termed discriminative feature representation (DFR) for low dose computerized tomography (LDCT) image processing, which is currently a challenging problem in the medical imaging field. The DFR method assumes LDCT images to be the superposition of desirable high dose CT (HDCT) 3D features and undesirable noise-artifact 3D features (the combined noise and artifact features induced by low dose scan protocols), and the decomposed HDCT features are used to provide processed LDCT images of higher quality. The target HDCT features are solved via the DFR algorithm using a featured dictionary composed of atoms representing HDCT features and noise-artifact features. In this study, the featured dictionary is efficiently built using physical phantom images collected from the same CT scanner as the target clinical LDCT images to process. The proposed DFR method is also robust to parameter settings across different CT scanner types. It can be directly applied to process DICOM-formatted LDCT images and has good applicability to current CT systems. Comparative experiments with abdomen LDCT data validate the good performance of the proposed approach. This research was supported by National Natural Science Foundation under grants (81370040, 81530060), the Fundamental Research Funds for the Central Universities, and the Qing Lan Project in Jiangsu Province.
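A rough 2D analogue of the dictionary-based decomposition (not the authors' DFR code) can be sketched with scikit-learn: learn a patch dictionary from a high-dose phantom image and sparse-code low-dose patches over it. Unlike the real DFR, this toy dictionary contains only the desirable atoms, and the input arrays are hypothetical:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

hdct = np.load("phantom_hdct.npy")      # hypothetical high-dose phantom scan
ldct = np.load("clinical_ldct.npy")     # hypothetical low-dose image

# Learn an 8x8 patch dictionary from the high-dose image.
patches = extract_patches_2d(hdct, (8, 8), max_patches=5000)
patches = patches.reshape(len(patches), -1)
patches -= patches.mean(axis=1, keepdims=True)
dico = MiniBatchDictionaryLearning(n_components=128,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=4)
dico.fit(patches)

# Sparse-code low-dose patches over the learned atoms and reassemble.
lp = extract_patches_2d(ldct, (8, 8)).reshape(-1, 64)
means = lp.mean(axis=1, keepdims=True)
code = dico.transform(lp - means)
denoised_patches = (code @ dico.components_ + means).reshape(-1, 8, 8)
denoised = reconstruct_from_patches_2d(denoised_patches, ldct.shape)
```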
Applied high-speed imaging for the icing research program at NASA Lewis Research Center
NASA Technical Reports Server (NTRS)
Slater, Howard; Owens, Jay; Shin, Jaiwon
1992-01-01
The Icing Research Tunnel at NASA Lewis Research Center provides scientists a scaled, controlled environment to simulate natural icing events. The closed-loop, low-speed, refrigerated wind tunnel offers the experimental capability to test for icing certification requirements, analytical model validation and calibration techniques, cloud physics instrumentation refinement, advanced ice protection systems, and rotorcraft icing methodology development. The test procedures for these objectives all require a high degree of visual documentation, both in real-time data acquisition and post-test image processing. Information about the high-speed and conventional imaging systems used in the recent ice protection technology program is provided for scientific, technical, and industrial imaging specialists as well as research personnel. Various imaging examples from some of the tests are presented; additional imaging examples are available from the NASA Lewis Research Center's Photographic and Printing Branch.
Translational research of optical molecular imaging for personalized medicine.
Qin, C; Ma, X; Tian, J
2013-12-01
In the medical imaging field, molecular imaging is a rapidly developing discipline that spans many imaging modalities, providing effective tools to visualize, characterize, and measure molecular and cellular mechanisms in the complex biological processes of living organisms, which can deepen our understanding of biology and accelerate preclinical research, including cancer studies and drug discovery. Among the molecular imaging modalities, although the penetration depth of optical imaging and the approved optical probes available for clinical use are limited, optical imaging has evolved considerably and has seen spectacular advances in basic biomedical research and new drug development. With the completion of human genome sequencing and the emergence of personalized medicine, a drug should be matched not only to the right disease but also to the right person, and optical molecular imaging should serve as a strong adjunct in developing personalized medicine by finding the optimal drug based on an individual's proteome and genome. In this process, computational methodology and imaging systems, as well as the biomedical applications of optical molecular imaging, will play a crucial role. This review focuses on recent representative translational studies of optical molecular imaging for personalized medicine, following a concise introduction. Finally, the current challenges and the future development of optical molecular imaging are discussed from the authors' perspective.
Images of Imaging: Notes on Doing Longitudinal Field Work.
ERIC Educational Resources Information Center
Barley, Stephen R.
1990-01-01
Discusses the processes involved in a field study of technological change in radiology and how researchers can design a qualitative study and then collect data in a systematic and explicit manner. Illustrates the social and human problems of gaining entry into a research site, constructing a research role, and managing relationships. (63…
Development of an inexpensive optical method for studies of dental erosion process in vitro
NASA Astrophysics Data System (ADS)
Nasution, A. M. T.; Noerjanto, B.; Triwanto, L.
2008-09-01
Teeth have important roles in the digestion of food, in supporting the facial structure, and in the articulation of speech. Abnormality in tooth structure can be initiated by an erosion process, due to diet or beverage consumption, that leads to destruction affecting tooth functionality. Research on the erosion processes that lead to such abnormalities is important for care and prevention, and accurate measurement methods are needed as research tools capable of quantifying the degree of dental destruction. In this work, an inexpensive optical method for studying the dental erosion process is developed. It is based on extracting parameters from 3D dental visual information. The 3D visual image is obtained by reconstructing multiple 2D lateral projections captured from many angles. Using a simple stepper motor and a pocket digital camera, a sequence of multi-projection 2D images of a premolar tooth is obtained. These images are then reconstructed to produce a 3D image, from which the relevant dental erosion parameters are quantified. The quantification is based on the shrinkage of dental volume as well as on surface properties affected by the erosion process. The quantification results are correlated with the amount of dissolved calcium released from the tooth, measured using atomic absorption spectrometry. The proposed method would be useful as a visualization tool in engineering, dentistry, and medical research, as well as for educational purposes.
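The volume-shrinkage quantification reduces to counting voxels in the reconstructed tooth masks; a minimal sketch, assuming hypothetical binary voxel masks and an assumed voxel size:

```python
import numpy as np

# Erosion quantified as fractional volume loss between two binary tooth
# reconstructions (before/after acid exposure); arrays are hypothetical
# voxel masks from the multi-view 3D reconstruction.
before = np.load("tooth_day0_mask.npy").astype(bool)
after = np.load("tooth_day14_mask.npy").astype(bool)

voxel_mm3 = 0.05 ** 3                      # assumed isotropic voxel size (mm)
v0 = before.sum() * voxel_mm3
v1 = after.sum() * voxel_mm3
print(f"volume: {v0:.2f} -> {v1:.2f} mm^3, loss {100 * (v0 - v1) / v0:.1f}%")
```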
NASA Technical Reports Server (NTRS)
2002-01-01
Retinex Image Processing, winner of NASA's 1999 Space Act Award, is commercially available through TruView Imaging Company. With this technology, amateur photographers use their personal computers to improve the brightness, scene contrast, detail, and overall sharpness of images with increased ease. The process was originally developed for remote sensing of the Earth by researchers at Langley Research Center and Science and Technology Corporation (STC). It automatically enhances a digital image in terms of dynamic range compression, color independence from the spectral distribution of the scene illuminant, and color/lightness rendition. As a result, the enhanced digital image is much closer to the scene perceived by the human visual system under all kinds and levels of lighting variations. TruView believes there are other applications for the software in medical imaging, forensics, security, reconnaissance, mining, assembly, and other industrial areas.
NASA Technical Reports Server (NTRS)
Roth, Don J.; Hendricks, J. Lynne; Whalen, Mike F.; Bodis, James R.; Martin, Katherine
1996-01-01
This article describes the commercial implementation of ultrasonic velocity imaging methods developed and refined at NASA Lewis Research Center on the Sonix c-scan inspection system. Two velocity imaging methods were implemented: thickness-based and non-thickness-based reflector plate methods. The article demonstrates capabilities of the commercial implementation and gives the detailed operating procedures required for Sonix customers to achieve optimum velocity imaging results. This commercial implementation of velocity imaging provides a 100x speed increase in scanning and processing over the lab-based methods developed at LeRC. The significance of this cooperative effort is that the aerospace and other materials development-intensive industries which use extensive ultrasonic inspection for process control and failure analysis will now have an alternative, highly accurate imaging method commercially available.
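The thickness-based method referenced here reduces to a pulse-echo time-of-flight relation; a minimal worked example under assumed numbers (not values from the article):

```python
# Ultrasonic velocity from thickness and pulse-echo time of flight:
# the pulse traverses the sample twice, so v = 2 * d / t.
thickness_mm = 5.00        # assumed sample thickness
tof_us = 1.60              # assumed round-trip time of flight

velocity = 2 * thickness_mm * 1e-3 / (tof_us * 1e-6)   # m/s
print(f"velocity = {velocity:.0f} m/s")                 # 6250 m/s here
```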
Social neuroscience and its potential contribution to psychiatry
Cacioppo, John T; Cacioppo, Stephanie; Dulawa, Stephanie; Palmer, Abraham A
2014-01-01
Most mental disorders involve disruptions of normal social behavior. Social neuroscience is an interdisciplinary field devoted to understanding the biological systems underlying social processes and behavior, and the influence of the social environment on biological processes, health and well-being. Research in this field has grown dramatically in recent years. Active areas of research include brain imaging studies in normal children and adults, animal models of social behavior, studies of stroke patients, imaging studies of psychiatric patients, and research on social determinants of peripheral neural, neuroendocrine and immunological processes. Although research in these areas is proceeding along largely independent trajectories, there is increasing evidence for connections across these trajectories. We focus here on the progress and potential of social neuroscience in psychiatry, including illustrative evidence for a rapid growth of neuroimaging and genetic studies of mental disorders. We also argue that neuroimaging and genetic research focused on specific component processes underlying social living is needed. PMID:24890058
Three-dimensional image signals: processing methods
NASA Astrophysics Data System (ADS)
Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru
2010-11-01
Over the years, extensive studies have been carried out to apply coherent optics methods in real-time processing, communications, and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. The recent progress in data-processing networks and communication systems has considerably increased the capacity of information exchange. We describe the results of a literature investigation of processing methods for three-dimensional image signals. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. Images captured with an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses. This requires considerable processing power and memory to create and render the complex mix of colors, textures, and virtual lighting and perspective necessary to make figures appear three-dimensional. Also, using a standard digital camera and a technique called phase-shift interferometry, we can capture "digital holograms": holograms that can be stored on a computer and transmitted over conventional networks. We present research methods for processing "digital holograms" for Internet transmission, together with results.
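Four-step phase-shift interferometry, mentioned here for capturing digital holograms, recovers the wrapped phase from four frames shifted by quarter-wave steps; a minimal sketch with hypothetical frame files:

```python
import numpy as np

# Four-step phase-shifting interferometry: with interferograms recorded at
# reference phase shifts of 0, pi/2, pi and 3*pi/2,
#   I_k = A + B * cos(phi + delta_k)
# the object phase follows as phi = atan2(I4 - I2, I1 - I3).
I1, I2, I3, I4 = (np.load(f"frame_{k}.npy") for k in range(4))  # hypothetical

phase = np.arctan2(I4 - I2, I1 - I3)           # wrapped phase in (-pi, pi]
modulation = 0.5 * np.hypot(I4 - I2, I1 - I3)  # fringe amplitude B
print(phase.shape, float(phase.min()), float(phase.max()))
```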
BioImageXD: an open, general-purpose and high-throughput image-processing platform.
Kankaanpää, Pasi; Paavolainen, Lassi; Tiitta, Silja; Karjalainen, Mikko; Päivärinne, Joacim; Nieminen, Jonna; Marjomäki, Varpu; Heino, Jyrki; White, Daniel J
2012-06-28
BioImageXD puts open-source computer science tools for three-dimensional visualization and analysis into the hands of all researchers, through a user-friendly graphical interface tuned to the needs of biologists. BioImageXD has no restrictive licenses or undisclosed algorithms and enables publication of precise, reproducible and modifiable workflows. It allows simple construction of processing pipelines and should enable biologists to perform challenging analyses of complex processes. We demonstrate its performance in a study of integrin clustering in response to selected inhibitors.
MIA - A free and open source software for gray scale medical image analysis.
Wollny, Gert; Kellman, Peter; Ledesma-Carbayo, María-Jesus; Skinner, Matthew M; Hublin, Jean-Jaques; Hierl, Thomas
2013-10-11
Gray scale images make up the bulk of data in bio-medical image analysis, and hence the main focus of many image processing tasks lies in the processing of these monochrome images. With ever-improving acquisition devices, spatial and temporal image resolution increases and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high-level programming languages or visual programming. These frameworks are also accessible to researchers who have little or no background in software development, because they take care of otherwise complex tasks; specifically, the management of working memory is handled automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation-class computers. One alternative to using these high-level processing tools is the development of new algorithms in a language like C++ that gives the developer full control over how memory is handled, but the resulting workflow for the prototyping of new algorithms is rather time-intensive, and also not appropriate for a researcher with little or no knowledge of software development. Another alternative is using command line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation through shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only few tools exist that provide this kind of processing interface; they are usually quite task-specific, and they don't provide a clear approach for shaping a new command line tool from a prototype shell script. The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that make it possible to run image processing tasks interactively in a command shell and to prototype using the corresponding shell scripting language. Since the hard disk serves as temporary storage, memory management is usually a non-issue in the prototyping phase. By using string-based descriptions for filters, optimizers, and the like, the transition from shell scripts to full-fledged programs implemented in C++ is also made easy. In addition, its design based on atomic plug-ins and single-task command line tools makes it easy to extend MIA, usually without the requirement to touch or recompile existing code. In this article, we describe the general design of MIA, a general-purpose framework for gray scale image processing. We demonstrate the applicability of the software with example applications from three different research scenarios, namely motion compensation in myocardial perfusion imaging, the processing of high-resolution image data that arises in virtual anthropology, and retrospective analysis of treatment outcome in orthognathic surgery. With MIA, prototyping algorithms by using shell scripts that combine small, single-task command line tools is a viable alternative to the use of high-level languages, an approach that is especially useful when large data sets need to be processed.
Study on polarization image methods in turbid medium
NASA Astrophysics Data System (ADS)
Fu, Qiang; Mo, Chunhe; Liu, Boyu; Duan, Jin; Zhang, Su; Zhu, Yong
2014-11-01
In addition to the traditional intensity information, polarization imaging detection technology can acquire multi-dimensional polarization information, improving the probability of target detection and recognition. In research on polarization imaging of targets in turbid media, image fusion helps to obtain high-quality images. Using laser illumination at visible wavelengths, the linear polarization intensities were obtained by rotating the angle of a polarizer, and the polarization parameters of targets in turbid media with concentrations ranging from 5% to 10% were acquired. Image fusion techniques were introduced into the processing: the study focuses on processing the acquired polarization images with different polarization image fusion methods, discusses several fusion methods with superior performance for turbid media, and gives the processing results and data tables for analysis. Pixel-level, feature-level, and decision-level fusion algorithms were then applied to fuse the DOLP (degree of linear polarization) images. The results show that as the polarization angle increases, the polarization images become increasingly blurred and their quality degrades, while the contrast of the fused image is obviously improved compared with a single image. Finally, the reasons for the increase in image contrast and the behavior of the polarized light are analyzed.
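The DOLP images being fused are computed from linear-polarizer intensity measurements via the Stokes parameters; a minimal sketch with hypothetical intensity arrays:

```python
import numpy as np

# Stokes parameters from intensity images behind a linear polarizer at
# 0, 45, 90 and 135 degrees (hypothetical arrays), and the degree of
# linear polarization (DOLP):
#   S0 = I0 + I90,  S1 = I0 - I90,  S2 = I45 - I135
#   DOLP = sqrt(S1^2 + S2^2) / S0
I0, I45, I90, I135 = (np.load(f"I{a}.npy") for a in (0, 45, 90, 135))

S0 = I0 + I90
S1 = I0 - I90
S2 = I45 - I135
dolp = np.sqrt(S1**2 + S2**2) / np.clip(S0, 1e-6, None)  # avoid divide-by-zero
print("mean DOLP:", float(dolp.mean()))
```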
In-Process Thermal Imaging of the Electron Beam Freeform Fabrication Process
NASA Technical Reports Server (NTRS)
Taminger, Karen M.; Domack, Christopher S.; Zalameda, Joseph N.; Taminger, Brian L.; Hafley, Robert A.; Burke, Eric R.
2016-01-01
Researchers at NASA Langley Research Center have been developing the Electron Beam Freeform Fabrication (EBF3) metal additive manufacturing process for the past 15 years. In this process, an electron beam is used as a heat source to create a small molten pool on a substrate into which wire is fed. The electron beam and wire feed assembly are translated with respect to the substrate to follow a predetermined tool path. This process is repeated in a layer-wise fashion to fabricate metal structural components. In-process imaging has been integrated into the EBF3 system using a near-infrared (NIR) camera. The images are processed to provide thermal and spatial measurements that have been incorporated into a closed-loop control system to maintain consistent thermal conditions throughout the build. Other information in the thermal images is being used to assess quality in real time by detecting flaws in prior layers of the deposit. NIR camera incorporation into the system has improved the consistency of the deposited material and provides the potential for real-time flaw detection which, ultimately, could lead to the manufacture of better, more reliable components using this additive manufacturing process.
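A toy sketch of the closed-loop idea: extract a melt-pool size from each NIR frame and proportionally adjust a control parameter toward a setpoint. The threshold, gain, and setpoint below are invented for illustration and are not the NASA system's values:

```python
import numpy as np

POOL_THRESHOLD = 0.8      # fraction of full-scale intensity; illustrative
TARGET_AREA_PX = 1500     # desired molten pool size in pixels; illustrative
GAIN = 1e-4               # proportional gain; illustrative

def pool_area(frame):
    """Molten pool area (pixels) from a normalized NIR frame."""
    return int((frame > POOL_THRESHOLD).sum())

def control_step(frame, beam_power):
    """One proportional-control update of beam power from pool size."""
    error = TARGET_AREA_PX - pool_area(frame)
    return beam_power + GAIN * error

frame = np.random.rand(480, 640)          # stand-in for a camera frame
print(control_step(frame, beam_power=1.0))
```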
Portable laser speckle perfusion imaging system based on digital signal processor.
Tang, Xuejun; Feng, Nengyun; Sun, Xiaoli; Li, Pengcheng; Luo, Qingming
2010-12-01
The ability to monitor blood flow in vivo is of major importance in clinical diagnosis and in basic life science research. As a noninvasive full-field technique without the need for scanning, laser speckle contrast imaging (LSCI) is widely used to study blood flow with high spatial and temporal resolution. Current LSCI systems are based on personal computers for image processing and are large, which potentially limits their widespread clinical utility. A portable laser speckle contrast imaging system that does not compromise processing efficiency is therefore crucial for clinical diagnosis. However, the processing of laser speckle contrast images is time-consuming due to the heavy calculation required for enormous high-resolution image data. To address this problem, a portable laser speckle perfusion imaging system based on a digital signal processor (DSP), together with an algorithm suitable for the DSP, is described. With a highly integrated DSP and this algorithm, we have markedly reduced the size and weight of the system as well as its energy consumption while preserving high processing speed. In vivo experiments demonstrate that our portable laser speckle perfusion imaging system can obtain blood flow images at 25 frames per second with a resolution of 640 × 480 pixels. The portable and lightweight features make it adaptable to a wide variety of application areas such as research laboratories, operating rooms, ambulances, and even disaster sites.
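The core LSCI computation is a sliding-window speckle contrast K = sigma/mean; a minimal NumPy/SciPy sketch (not the DSP implementation described in the paper):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(raw, window=7):
    """Spatial laser speckle contrast K = sigma / mean over a sliding window."""
    raw = raw.astype(np.float64)
    mean = uniform_filter(raw, window)
    sq_mean = uniform_filter(raw * raw, window)
    var = np.clip(sq_mean - mean**2, 0, None)     # guard tiny negatives
    return np.sqrt(var) / np.clip(mean, 1e-9, None)

frame = np.random.rand(480, 640)   # stand-in for a raw speckle frame
K = speckle_contrast(frame)
# Lower K indicates faster flow (more blurring of the speckle pattern).
print(float(K.mean()))
```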
1975-08-01
This research addresses image analysis and processing tasks such as information extraction, image enhancement and restoration, coding, etc. The ultimate objective of this research is to form a basis for the development of technology relevant to military applications of machine extraction of information from aircraft and satellite imagery of the earth's surface. This report discusses research activities during the three-month period February 1 - April 30, 1975.
NASA Astrophysics Data System (ADS)
Jermyn, Michael; Ghadyani, Hamid; Mastanduno, Michael A.; Turner, Wes; Davis, Scott C.; Dehghani, Hamid; Pogue, Brian W.
2013-08-01
Multimodal approaches that combine near-infrared (NIR) and conventional imaging modalities have been shown to improve optical parameter estimation dramatically and thus represent a prevailing trend in NIR imaging. These approaches typically involve applying anatomical templates from magnetic resonance imaging/computed tomography/ultrasound images to guide the recovery of optical parameters. However, merging these data sets using current technology requires multiple software packages, substantial expertise, significant time-commitment, and often results in unacceptably poor mesh quality for optical image reconstruction, a reality that represents a significant roadblock for translational research of multimodal NIR imaging. This work addresses these challenges directly by introducing automated DICOM (digital imaging and communications in medicine) image stack segmentation and a new one-click three-dimensional mesh generator optimized for multimodal NIR imaging, and combining these capabilities into a single software package (available for free download) with a streamlined workflow. Image processing time and mesh quality benchmarks were examined for four common multimodal NIR use-cases (breast, brain, pancreas, and small animal) and were compared to a commercial image processing package. Applying these tools resulted in a fivefold decrease in image processing time and 62% improvement in minimum mesh quality, in the absence of extra mesh postprocessing. These capabilities represent a significant step toward enabling translational multimodal NIR research for both expert and nonexpert users in an open-source platform.
NASA Astrophysics Data System (ADS)
Jobson, Daniel J.; Rahman, Zia-ur; Woodell, Glenn A.; Hines, Glenn D.
2006-05-01
Aerial images from the Follow-On Radar, Enhanced and Synthetic Vision Systems Integration Technology Evaluation (FORESITE) flight tests with the NASA Langley Research Center's research Boeing 757 were acquired during severe haze and haze/mixed clouds visibility conditions. These images were enhanced using the Visual Servo (VS) process that makes use of the Multiscale Retinex. The images were then quantified with visual quality metrics used internally within the VS. One of these metrics, the Visual Contrast Measure, has been computed for hundreds of FORESITE images, and for major classes of imaging: terrestrial (consumer), orbital Earth observations, orbital Mars surface imaging, NOAA aerial photographs, and underwater imaging. The metric quantifies both the degree of visual impairment of the original, un-enhanced images as well as the degree of visibility improvement achieved by the enhancement process. The large aggregate data exhibits trends relating to degree of atmospheric visibility attenuation, and its impact on the limits of enhancement performance for the various image classes. Overall results support the idea that in most cases that do not involve extreme reduction in visibility, large gains in visual contrast are routinely achieved by VS processing. Additionally, for very poor visibility imaging, lesser, but still substantial, gains in visual contrast are also routinely achieved. Further, the data suggest that these visual quality metrics can be used as external standalone metrics for establishing performance parameters.
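The Multiscale Retinex underlying the VS process has a compact form, R(x, y) = sum_i w_i [log I(x, y) - log(F_i * I)(x, y)], with Gaussian surrounds F_i; a minimal sketch with commonly used scale values (not necessarily those of the flight-test processing):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex(image, sigmas=(15, 80, 250)):
    """Multiscale Retinex: average of log(image) - log(surround) over scales.

    `image` is a positive-valued 2D array; the sigmas follow common practice
    for small/medium/large surrounds (assumed, not from the paper).
    """
    img = image.astype(np.float64) + 1.0          # avoid log(0)
    out = np.zeros_like(img)
    for sigma in sigmas:
        surround = gaussian_filter(img, sigma)    # Gaussian surround F_i * I
        out += np.log(img) - np.log(surround)
    return out / len(sigmas)                      # equal weights w_i

frame = np.random.rand(480, 640) * 255            # stand-in for an aerial frame
enhanced = multiscale_retinex(frame)
```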
Advanced Digital Forensic and Steganalysis Methods
2009-02-01
…the image under investigation is simultaneously cropped, scaled, and processed; extending the technology to digital images that are printed; and developing technology capable of handling these or other common processing operations. TECHNOLOGY APPLICATIONS: 1. Determining the origin of digital images. 2. Matching an image to a camera.
NASA Astrophysics Data System (ADS)
Yi, Juan; Du, Qingyu; Zhang, Hong jiang; Zhang, Yao lei
2017-11-01
Target recognition is a key technology in intelligent image processing and a focus of current application development; with the growth of computer processing power, autonomous target recognition algorithms have gradually become more intelligent and have shown good adaptability. Taking airport targets as the research object, this work analyzes airport layout characteristics, constructs a knowledge model, and designs an independent target recognition algorithm based on Gabor filtering and the Radon transform. The algorithm was verified through image processing and feature extraction on airport imagery and achieved good recognition results.
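A sketch of the Gabor-plus-Radon idea for detecting runway-like line structure, assuming scikit-image. The filter frequency, binarization rule, and angle grid are illustrative; the paper's knowledge model is not shown.

```python
# Gabor filtering emphasizes oriented texture; peaks in the Radon transform
# of the binarized response indicate dominant straight lines (e.g., runways).
import numpy as np
from skimage.filters import gabor
from skimage.transform import radon

def dominant_line_angle(image):
    real, _ = gabor(image, frequency=0.1)          # oriented band-pass response
    edges = (np.abs(real) > np.abs(real).mean()).astype(float)
    theta = np.linspace(0.0, 180.0, 180, endpoint=False)
    sinogram = radon(edges, theta=theta, circle=False)
    # The strongest sinogram column corresponds to the dominant line angle.
    return theta[np.argmax(sinogram.max(axis=0))]
```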
The Holistic Processing Account of Visual Expertise in Medical Image Perception: A Review
Sheridan, Heather; Reingold, Eyal M.
2017-01-01
In the field of medical image perception, the holistic processing perspective contends that experts can rapidly extract global information about the image, which can be used to guide their subsequent search of the image (Swensson, 1980; Nodine and Kundel, 1987; Kundel et al., 2007). In this review, we discuss the empirical evidence supporting three different predictions that can be derived from the holistic processing perspective: Expertise in medical image perception is domain-specific, experts use parafoveal and/or peripheral vision to process large regions of the image in parallel, and experts benefit from a rapid initial glimpse of an image. In addition, we discuss a pivotal recent study (Litchfield and Donovan, 2016) that seems to contradict the assumption that experts benefit from a rapid initial glimpse of the image. To reconcile this finding with the existing literature, we suggest that global processing may serve multiple functions that extend beyond the initial glimpse of the image. Finally, we discuss future research directions, and we highlight the connections between the holistic processing account and similar theoretical perspectives and findings from other domains of visual expertise. PMID:29033865
NASA Technical Reports Server (NTRS)
Buckner, J. D.; Council, H. W.; Edwards, T. R.
1974-01-01
This report describes the hardware and software implementing the system for time-lapse reproduction of images through interactive graphics (TRIIG). The system produces a quality hard copy of processed images quickly and inexpensively. This capability allows for optimal development of processing software through rapid viewing of many image frames in an interactive mode. Three critical optical devices are used to reproduce an image: an Optronics photo reader/writer, the Adage Graphics Terminal, and Polaroid Type 57 high speed film. Typical sources of digitized images are observation satellites, such as ERTS or Mariner, computer-coupled electron microscopes for high-magnification studies, or computer-coupled X-ray devices for medical research.
Research and Analysis of Image Processing Technologies Based on DotNet Framework
NASA Astrophysics Data System (ADS)
Ya-Lin, Song; Chen-Xi, Bai
Microsoft .NET is one of the most popular program development tools. This paper gives a detailed analysis of the advantages and disadvantages of several image processing technologies under .NET, applying the same algorithm in programming experiments with each. The results show that the two most efficient methods are unsafe pointers and Direct3D; Direct3D is suited to 3D simulation development, while the other technologies are useful in some fields but are inefficient and not suited to real-time processing. The experimental results in this paper will help projects involving .NET-based image processing and simulation, and they have strong practical applicability.
Using quantum filters to process images of diffuse axonal injury
NASA Astrophysics Data System (ADS)
Pineda Osorio, Mateo
2014-06-01
Images corresponding to diffuse axonal injury (DAI) are processed using several quantum filters, such as the Hermite, Weibull, and Morse filters. Diffuse axonal injury is a particular, common, and severe case of traumatic brain injury (TBI). DAI involves global damage to brain tissue on the microscopic scale and causes serious neurologic abnormalities. New imaging techniques provide excellent images showing cellular damage related to DAI. Such images can be processed with quantum filters, which achieve high resolution of dendritic and axonal structures in both normal and pathological states. Using the Laplacian operators derived from the new quantum filters, excellent edge detectors for neurofiber resolution are obtained. Quantum processing of DAI images is carried out using computer algebra, specifically Maple. Construction of quantum filter plugins is proposed as a future research line; these could be incorporated into the ImageJ software package, making their use simpler for medical personnel.
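A sketch of Laplacian-based edge detection of the sort described above, with scipy's Laplacian-of-Gaussian standing in for the Laplacians of the Hermite, Weibull, and Morse quantum filters, which are not reproduced here.

```python
# Zero crossings of a Laplacian-of-Gaussian response mark edges, e.g.
# neurofiber boundaries; sigma controls the scale of detected structure.
import numpy as np
from scipy.ndimage import gaussian_laplace

def edge_map(image, sigma=2.0):
    response = gaussian_laplace(image.astype(np.float64), sigma=sigma)
    sign = np.sign(response)
    changes_y = np.diff(sign, axis=0, prepend=sign[:1, :]) != 0
    changes_x = np.diff(sign, axis=1, prepend=sign[:, :1]) != 0
    return changes_y | changes_x
```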
Andriole, Katherine P; Morin, Richard L; Arenson, Ronald L; Carrino, John A; Erickson, Bradley J; Horii, Steven C; Piraino, David W; Reiner, Bruce I; Seibert, J Anthony; Siegel, Eliot
2004-12-01
The Society for Computer Applications in Radiology (SCAR) Transforming the Radiological Interpretation Process (TRIP) Initiative aims to spearhead research, education, and discovery of innovative solutions to address the problem of information and image data overload. The initiative will foster interdisciplinary research on technological, environmental and human factors to better manage and exploit the massive amounts of data. TRIP will focus on the following basic objectives: improving the efficiency of interpretation of large data sets, improving the timeliness and effectiveness of communication, and decreasing medical errors. The ultimate goal of the initiative is to improve the quality and safety of patient care. Interdisciplinary research into several broad areas will be necessary to make progress in managing the ever-increasing volume of data. The six concepts involved are human perception, image processing and computer-aided detection (CAD), visualization, navigation and usability, databases and integration, and evaluation and validation of methods and performance. The result of this transformation will affect several key processes in radiology, including image interpretation; communication of imaging results; workflow and efficiency within the health care enterprise; diagnostic accuracy and a reduction in medical errors; and, ultimately, the overall quality of care.
NASA Astrophysics Data System (ADS)
Zhang, Zhenhai; Li, Kejie; Wu, Xiaobing; Zhang, Shujiang
2008-03-01
An unwrapping and correction algorithm based on the Coordinate Rotation Digital Computer (CORDIC) method and bilinear interpolation is presented in this paper for processing dynamic panoramic annular images. An original annular panoramic image captured by a panoramic annular lens (PAL) can be unwrapped and corrected to a conventional rectangular image without distortion, which is much more consistent with human vision. The algorithm for panoramic image processing is modeled in VHDL and implemented on an FPGA. The experimental results show that the proposed unwrapping and distortion-correction algorithm has low computational complexity and that the architecture for dynamic panoramic image processing has low hardware cost and power consumption. The proposed algorithm is valid.
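A software sketch of unwrapping an annular PAL image to a rectangular one with bilinear interpolation (the paper performs this with CORDIC arithmetic on an FPGA). Center, radii, and output width are illustrative, and the annulus is assumed to lie strictly inside the input image.

```python
# Map each output (r, theta) pixel back to Cartesian input coordinates and
# interpolate bilinearly from the four surrounding input pixels.
import numpy as np

def unwrap_annulus(img, cx, cy, r_in, r_out, width=1024):
    theta = np.linspace(0.0, 2.0 * np.pi, width, endpoint=False)
    r = np.arange(r_in, r_out, dtype=np.float64)
    x = cx + np.outer(r, np.cos(theta))
    y = cy + np.outer(r, np.sin(theta))
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    fx, fy = x - x0, y - y0
    return (img[y0, x0] * (1 - fx) * (1 - fy)
            + img[y0, x0 + 1] * fx * (1 - fy)
            + img[y0 + 1, x0] * (1 - fx) * fy
            + img[y0 + 1, x0 + 1] * fx * fy)
```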
Colour Based Image Processing Method for Recognizing Ribbed Smoked Sheet Grade
NASA Astrophysics Data System (ADS)
Fibriani, Ike; Sumardi; Bayu Satriya, Alfredo; Budi Utomo, Satryo
2017-03-01
This research proposes a colour-based image processing technique to recognize the Ribbed Smoked Sheet (RSS) grade so that RSS sorting can be faster and more accurate than the traditional process. The RSS sheet image captured by the camera is transformed into a grayscale image to simplify the recognition of rust and mould on the sheet. The grayscale image is then transformed into a binary image using a threshold value obtained from the RSS 1 reference colour. The grade is determined by counting the white-pixel percentage. The results show that the system has an accuracy of 88%. Most faults occur in RSS 2 recognition, due to uneven illumination across the RSS image.
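A sketch of the grading pipeline described above, assuming OpenCV and numpy: grayscale conversion, thresholding against an RSS 1 reference, and a white-pixel percentage. The threshold and grade cut-offs are illustrative, not the paper's.

```python
# Grayscale -> binary threshold -> white-pixel percentage -> grade.
import cv2
import numpy as np

def rss_white_percentage(bgr_image, threshold=120):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    return 100.0 * np.count_nonzero(binary) / binary.size

def rss_grade(white_percentage):
    if white_percentage > 80:     # illustrative cut-offs, not the paper's
        return "RSS 1"
    if white_percentage > 60:
        return "RSS 2"
    return "RSS 3 or lower"
```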
Gunn, Martin L; Marin, Jennifer R; Mills, Angela M; Chong, Suzanne T; Froemming, Adam T; Johnson, Jamlik O; Kumaravel, Manickam; Sodickson, Aaron D
2016-08-01
In May 2015, the Academic Emergency Medicine consensus conference "Diagnostic imaging in the emergency department: a research agenda to optimize utilization" was held. The goal of the conference was to develop a high-priority research agenda regarding emergency diagnostic imaging on which to base future research. In addition to representatives from the Society of Academic Emergency Medicine, the multidisciplinary conference included members of several radiology organizations: American Society for Emergency Radiology, Radiological Society of North America, the American College of Radiology, and the American Association of Physicists in Medicine. The specific aims of the conference were to (1) understand the current state of evidence regarding emergency department (ED) diagnostic imaging utilization and identify key opportunities, limitations, and gaps in knowledge; (2) develop a consensus-driven research agenda emphasizing priorities and opportunities for research in ED diagnostic imaging; and (3) explore specific funding mechanisms available to facilitate research in ED diagnostic imaging. Through a multistep consensus process, participants developed targeted research questions for future research in six content areas within emergency diagnostic imaging: clinical decision rules; use of administrative data; patient-centered outcomes research; training, education, and competency; knowledge translation and barriers to imaging optimization; and comparative effectiveness research in alternatives to traditional computed tomography use.
An Ibm PC/AT-Based Image Acquisition And Processing System For Quantitative Image Analysis
NASA Astrophysics Data System (ADS)
Kim, Yongmin; Alexander, Thomas
1986-06-01
In recent years, a large number of applications have been developed for image processing systems in the area of biological imaging. We have already finished the development of a dedicated microcomputer-based image processing and analysis system for quantitative microscopy. The system's primary function has been to facilitate and ultimately automate quantitative image analysis tasks such as the measurement of cellular DNA contents. We have recognized from this development experience, and interaction with system users, biologists and technicians, that the increasingly widespread use of image processing systems, and the development and application of new techniques for utilizing the capabilities of such systems, would generate a need for some kind of inexpensive general purpose image acquisition and processing system specially tailored for the needs of the medical community. We are currently engaged in the development and testing of hardware and software for a fairly high-performance image processing computer system based on a popular personal computer. In this paper, we describe the design and development of this system. Biological image processing computer systems have now reached a level of hardware and software refinement where they could become convenient image analysis tools for biologists. The development of a general purpose image processing system for quantitative image analysis that is inexpensive, flexible, and easy-to-use represents a significant step towards making the microscopic digital image processing techniques more widely applicable not only in a research environment as a biologist's workstation, but also in clinical environments as a diagnostic tool.
A Study of Light Level Effect on the Accuracy of Image Processing-based Tomato Grading
NASA Astrophysics Data System (ADS)
Prijatna, D.; Muhaemin, M.; Wulandari, R. P.; Herwanto, T.; Saukat, M.; Sugandi, W. K.
2018-05-01
Image processing methods have been used in non-destructive tests of agricultural products. Compared to manual methods, image processing may produce more objective and consistent results. The image capturing box installed in the currently used tomato grading machine (TEP-4) is equipped with four fluorescent lamps to illuminate the processed tomatoes. Since the performance of any lamp decreases once its service time exceeds its lifetime, this is expected to affect tomato classification. The objective of this study was to determine the minimum light levels that affect classification accuracy. The study was conducted by varying the light level from minimum to maximum on tomatoes in image capturing boxes and investigating the effects on image characteristics. The results showed that light intensity affects two variables important for classification, namely the area and color of the captured image. The image processing program was able to determine the weight and classification of tomatoes correctly when the light level was 30 lx to 140 lx.
Single molecule image formation, reconstruction and processing: introduction.
Ashok, Amit; Piestun, Rafael; Stallinga, Sjoerd
2016-07-01
The ability to image at the single molecule scale has revolutionized research in molecular biology. This feature issue presents a collection of articles that provides new insights into the fundamental limits of single molecule imaging and reports novel techniques for image formation and analysis.
ERIC Educational Resources Information Center
Gordon, Roger L., Ed.
This guide to multi-image program production for practitioners describes the process from the beginning stages through final presentation, examines historical perspectives, theory, and research in multi-image, and provides examples of successful utilization. Ten chapters focus on the following topics: (1) definition of multi-image field and…
[Progress in Application of Measuring Skeleton by CT in Forensic Anthropology Research].
Miao, C Y; Xu, L; Wang, N; Zhang, M; Li, Y S; Lü, J X
2017-02-01
Individual identification by measuring the human skeleton is an important research topic in the field of forensic anthropology. Computed tomography (CT) technology can provide high-resolution images of the skeleton. Skeleton images can be reformatted by software at the post-processing workstation, and different anthropological measurement indexes, such as diameter, angle, area, and volume, can be measured on sectional and reformatted images, with the measurement process barely affected by human factors. This paper reviews the domestic and international literature on the application of CT skeletal measurement in forensic anthropology research for individual identification in four aspects: sex determination, stature inference, facial soft tissue thickness measurement, and age estimation. The major technologies and applications of CT in forensic anthropology research are compared and discussed.
Fuzzy Logic Enhanced Digital PIV Processing Software
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
1999-01-01
Digital Particle Image Velocimetry (DPIV) is an instantaneous, planar velocity measurement technique that is ideally suited for studying transient flow phenomena in high speed turbomachinery. DPIV is being actively used at the NASA Glenn Research Center to study both stable and unstable operating conditions in a high speed centrifugal compressor. Commercial PIV systems are readily available which provide near real time feedback of the PIV image data quality. These commercial systems are well designed to facilitate the expedient acquisition of PIV image data. However, as with any general purpose system, these commercial PIV systems do not meet all of the data processing needs required for PIV image data reduction in our compressor research program. An in-house PIV PROCessing (PIVPROC) code has been developed for reducing PIV data. The PIVPROC software incorporates fuzzy logic data validation for maximum information recovery from PIV image data. PIVPROC enables combined cross-correlation/particle tracking wherein the highest possible spatial resolution velocity measurements are obtained.
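A sketch of the FFT-based cross-correlation at the heart of DPIV processing: the displacement of the particle pattern between two interrogation windows is the offset of the correlation peak. PIVPROC's fuzzy-logic validation and combined particle tracking are not reproduced here.

```python
# Estimate the particle displacement (dx, dy) between two interrogation
# windows via the peak of their circular cross-correlation.
import numpy as np

def piv_displacement(window_a, window_b):
    a = window_a - window_a.mean()
    b = window_b - window_b.mean()
    corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
    corr = np.fft.fftshift(corr)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = np.array(corr.shape) // 2
    dy, dx = np.array(peak) - center
    return dx, dy
```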
The integrated design and archive of space-borne signal processing and compression coding
NASA Astrophysics Data System (ADS)
He, Qiang-min; Su, Hao-hang; Wu, Wen-bo
2017-10-01
With the increasing demand from users for the extraction of remote sensing image information, it is urgent to significantly enhance the whole system's imaging quality and imaging capability through integrated design, achieving a compact structure, light weight, and higher attitude-maneuver capability. At the present stage, the remote sensing camera's video signal processing unit and its image compression and coding unit are distributed across different devices; the volume, weight, and power consumption of these two units are relatively large, which cannot meet the requirements of a highly maneuverable remote sensing camera. According to the technical requirements of such a camera, this paper designs a space-borne integrated signal processing and compression circuit by drawing on a variety of technologies, such as high-speed, high-density analog-digital mixed PCB design, embedded DSP technology, and image compression based on special-purpose chips. This circuit lays a solid foundation for research on the highly maneuverable remote sensing camera.
Electrocortical consequences of image processing: The influence of working memory load and worry.
White, Evan J; Grant, DeMond M
2017-03-30
Research suggests that worry precludes emotional processing and biases attentional processes. Although there is burgeoning evidence for the relationship between executive functioning and worry, more research in this area is needed. A recent theory suggests that one mechanism for the negative effects of worry on neural indicators of attention may be working memory load; however, few studies have examined this directly. The goal of the current study was to document the influence of both visual and verbal working memory load and worry on attention allocation during processing of emotional images in a cued-image paradigm. It was hypothesized that working memory load would decrease attention allocation during processing of emotional images. This was tested among 38 participants using a modified S1-S2 paradigm. Results indicated that both the visual and verbal working memory tasks reduced attention allocation to the processing of images across stimulus types compared to the baseline task, although only for individuals low in worry. These data extend the literature by documenting decreased neural responding (i.e., LPP amplitude) to imagery under both visual and verbal working memory load, particularly among individuals low in worry.
A Novel 3D Intelligent Fuzzy Algorithm Based on Minkowski-Clustering
NASA Astrophysics Data System (ADS)
Toori, S.; Esmaeily, A.
2017-09-01
Assessing and monitoring the state of the earth's surface is a key requirement for global change research. In this paper, we propose a new consensus fuzzy clustering algorithm based on the Minkowski distance. This research concentrates on Tehran's vegetation mass and its changes over 29 years using remote sensing technology. The main purpose is to evaluate changes in vegetation mass using a new process that combines intelligent NDVI fuzzy clustering with the Minkowski distance operation. The dataset includes Landsat 8 and Landsat TM images from 1989 to 2016. For each year, three images from three consecutive days were used to identify vegetation impact and recovery, giving a 3D NDVI image with one dimension for each day's NDVI. The next step was classification, the process of categorizing pixels into a finite number of separate classes based on their data values: if a pixel satisfies a certain set of criteria, it is allocated to the class corresponding to those criteria. This method is less sensitive to noise and can integrate solutions from multiple samples of data or attributes. The result was a fuzzy one-dimensional image, which was also computed for each of the following 28 years. The classification was performed in both specified urban and natural-park areas of Tehran. Experiments showed that our method classified image pixels better than standard classification methods.
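A sketch of the two named ingredients, assuming numpy: a per-pixel NDVI from red and near-infrared bands, and the Minkowski distance used by the consensus fuzzy clustering. The exponent p is an illustrative choice.

```python
# NDVI highlights vegetation; the Minkowski distance generalizes Euclidean
# distance and drives the cluster-assignment step.
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index, in [-1, 1]."""
    return (nir - red) / (nir + red + 1e-9)

def minkowski_distance(u, v, p=3):
    """Minkowski distance between feature vectors (p=2 gives Euclidean)."""
    return np.sum(np.abs(u - v) ** p) ** (1.0 / p)
```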
Web-based document image processing
NASA Astrophysics Data System (ADS)
Walker, Frank L.; Thoma, George R.
1999-12-01
Increasing numbers of research libraries are turning to the Internet for electronic interlibrary loan and for document delivery to patrons. This has been made possible through the widespread adoption of software such as Ariel and DocView. Ariel, a product of the Research Libraries Group, converts paper-based documents to monochrome bitmapped images and delivers them over the Internet; the National Library of Medicine's DocView is primarily designed for library patrons. Although libraries and their patrons are beginning to reap the benefits of this new technology, barriers exist, e.g., differences in image file format, that lead to difficulties in the use of library document information. To research how to overcome such barriers, the Communications Engineering Branch of the Lister Hill National Center for Biomedical Communications, an R and D division of NLM, has developed a web site called the DocMorph Server. This is part of an ongoing intramural R and D program in document imaging that has spanned many aspects of electronic document conversion and preservation, Internet document transmission, and document usage. The DocMorph Server web site is designed to fill two roles. First, in a role that will benefit both libraries and their patrons, it allows Internet users to upload scanned image files for conversion to alternative formats, thereby enabling wider delivery and easier usage of library document information. Second, the DocMorph Server provides the design team an active test bed for evaluating the effectiveness and utility of new document image processing algorithms and functions, so that they may be evaluated for possible inclusion in other image processing software products being developed at NLM or elsewhere. This paper describes the design of the prototype DocMorph Server and the image processing functions being implemented on it.
AMISS - Active and passive MIcrowaves for Security and Subsurface imaging
NASA Astrophysics Data System (ADS)
Soldovieri, Francesco; Slob, Evert; Turk, Ahmet Serdar; Crocco, Lorenzo; Catapano, Ilaria; Di Matteo, Francesca
2013-04-01
The FP7-IRSES project AMISS - Active and passive MIcrowaves for Security and Subsurface imaging is built on a well-coordinated network of research institutions from EU, Associate, and Third Countries (National Research Council of Italy - Italy; Technische Universiteit Delft - The Netherlands; Yildiz Technical University - Turkey; Bauman Moscow State Technical University - Russia; Usikov Institute for Radio-physics and Electronics and State Research Centre of Superconductive Radioelectronics "Iceberg" - Ukraine; and University of Sao Paulo - Brazil), with the aim of achieving scientific advances in microwave and millimeter-wave imaging systems and techniques for security and safety issues. The partners are leaders in the scientific areas of passive and active imaging and are sharing their complementary knowledge to address two main research lines. The first concerns the design, characterization, and performance evaluation of new passive and active microwave devices, sensors, and measurement set-ups able to mitigate clutter and increase information content. The second addresses the requirements to make state-of-the-art processing tools compliant with the instrumentation developed in the first line, suitable for electromagnetically complex scenarios, and able to exploit the unexplored possibilities offered by the new instrumentation. The main goals of the project are: 1) development/improvement and characterization of new sensors and systems for active and passive microwave imaging; 2) set-up, analysis, and validation of state-of-the-art and novel data processing approaches for GPR in critical infrastructure and subsurface imaging; 3) integration of state-of-the-art and novel imaging hardware and characterization approaches to tackle realistic situations in security, safety, and subsurface prospecting applications; 4) development and feasibility study of bio-radar technology (system and data processing) for vital-sign detection and the detection/characterization of human beings in complex scenarios. These goals are planned to be reached through a program of research activities and researcher secondments covering a period of three years. ACKNOWLEDGMENTS: This research has been performed in the framework of the "Active and Passive Microwaves for Security and Subsurface imaging (AMISS)" EU 7th Framework Marie Curie Actions IRSES project (PIRSES-GA-2010-269157).
Earth Observation Services (Image Processing Software)
NASA Technical Reports Server (NTRS)
1992-01-01
San Diego State University and Environmental Systems Research Institute, with other agencies, have applied satellite imaging and image processing techniques to geographic information systems (GIS) updating. The resulting images display land use and are used by a regional planning agency for applications like mapping vegetation distribution and preserving wildlife habitats. The EOCAP program provides government co-funding to encourage private investment in, and to broaden the use of NASA-developed technology for analyzing information about Earth and ocean resources.
Basic research planning in mathematical pattern recognition and image analysis
NASA Technical Reports Server (NTRS)
Bryant, J.; Guseman, L. F., Jr.
1981-01-01
Fundamental problems encountered while attempting to develop automated techniques for applications of remote sensing are discussed under the following categories: (1) geometric and radiometric preprocessing; (2) spatial, spectral, temporal, syntactic, and ancillary digital image representation; (3) image partitioning, proportion estimation, and error models in object scene inference; (4) parallel processing and image data structures; and (5) continuing studies in polarization; computer architectures and parallel processing; and the applicability of "expert systems" to interactive analysis.
DOCLIB: a software library for document processing
NASA Astrophysics Data System (ADS)
Jaeger, Stefan; Zhu, Guangyu; Doermann, David; Chen, Kevin; Sampat, Summit
2006-01-01
Most researchers would agree that research in the field of document processing can benefit tremendously from a common software library through which institutions are able to develop and share research-related software and applications across academic, business, and government domains. However, despite several attempts in the past, the research community still lacks a widely-accepted standard software library for document processing. This paper describes a new library called DOCLIB, which tries to overcome the drawbacks of earlier approaches. Many of DOCLIB's features are unique either in themselves or in their combination with others, e.g. the factory concept for support of different image types, the juxtaposition of image data and metadata, or the add-on mechanism. We cherish the hope that DOCLIB serves the needs of researchers better than previous approaches and will readily be accepted by a larger group of scientists.
[Image processing applied in analysis of motion features of cultured cardiac myocytes in rat].
Teng, Qizhi; He, Xiaohai; Luo, Daisheng; Wang, Zhengrong; Zhou, Beiyi; Yuan, Zhirun; Tao, Dachang
2007-02-01
Study of the mechanisms of drug action through quantitative analysis of cultured cardiac myocytes is one of the cutting-edge research areas in myocyte dynamics and molecular biology. The ability of cardiac myocytes to beat spontaneously without external stimulation is what makes this research possible. Studying myocyte morphology and motion using image analysis can reveal the fundamental mechanisms of drug action, increase the accuracy of drug screening, and support the design of optimal drug formulas for treatment. A hardware and software system has been built with a complete set of functions, including living cardiac myocyte image acquisition, image processing, motion image analysis, and image recognition. In this paper, theories and approaches are introduced for analyzing living cardiac myocyte motion images and implementing quantitative analysis of myocyte features. A motion estimation algorithm is used to detect motion vectors at particular points and the amplitude and frequency of a cardiac myocyte. Beating of cardiac myocytes is sometimes very small, making it difficult to detect motion vectors from particular points in a time sequence of images; for this reason, image correlation is employed to detect the beating frequencies. An active contour algorithm based on an energy function is proposed to approximate the boundary and detect changes in the myocyte's edges.
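A sketch of the image-correlation route to a beating frequency when the motion is too small for reliable motion vectors: correlate each frame with a reference frame and take the dominant frequency of that signal. Details of the paper's correlation measure are not reproduced here.

```python
# Correlate every frame against the first frame, then locate the dominant
# peak in the Fourier spectrum of the resulting time series.
import numpy as np

def beating_frequency(frames, fps):
    """frames: (T, H, W) sequence; returns the dominant frequency in Hz."""
    ref = frames[0].astype(np.float64).ravel()
    ref -= ref.mean()
    signal = np.array([np.dot(f.astype(np.float64).ravel() - f.mean(), ref)
                       for f in frames])
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[1:][np.argmax(spectrum[1:])]   # skip the DC bin
```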
Data Processing of LAPAN-A3 Thermal Imager
NASA Astrophysics Data System (ADS)
Hartono, R.; Hakim, P. R.; Syafrudin, AH
2018-04-01
As an experimental microsatellite, the LAPAN-A3/IPB satellite carries an experimental thermal imager, a microbolometer, to observe earth surface temperature as well as for horizon observation. The imager data are transmitted from the satellite to the ground station by S-band analog video transmission and then processed by the ground station into a sequence of 8-bit enhanced, contrast-stretched images. Data processing for the LAPAN-A3/IPB thermal imager is more difficult than for a visual digital camera, especially for mosaicking and classification. This research describes a simple mosaicking and classification process for the LAPAN-A3/IPB thermal imager based on several videos produced by the imager. The results show that stitching in Adobe Photoshop produces excellent results but can only process a small area, while a manual approach in the ImageJ software produces good results but requires a lot of work and is time consuming. Mosaicking by image cross-correlation in Matlab offers an alternative solution that can process a significantly larger area in a significantly shorter time, although the quality is not as good as the mosaics of the other two methods. The simple classification experiment shows that the thermal images can distinguish three objects: clouds, sea, and land surface. However, the algorithm fails on other objects, which might be caused by distortions in the images. All of these results can be used as a reference for the development of the thermal imager on the LAPAN-A4 satellite.
SSME propellant path leak detection real-time
NASA Technical Reports Server (NTRS)
Crawford, R. A.; Smith, L. M.
1994-01-01
Included are four documents that outline the technical aspects of the research performed on NASA Grant NAG8-140: 'A System for Sequential Step Detection with Application to Video Image Processing'; 'Leak Detection from the SSME Using Sequential Image Processing'; 'Digital Image Processor Specifications for Real-Time SSME Leak Detection'; and 'A Color Change Detection System for Video Signals with Applications to Spectral Analysis of Rocket Engine Plumes'.
Different methods of image segmentation in the process of meat marbling evaluation
NASA Astrophysics Data System (ADS)
Ludwiczak, A.; Ślósarz, P.; Lisiak, D.; Przybylak, A.; Boniecki, P.; Stanisz, M.; Koszela, K.; Zaborowicz, M.; Przybył, K.; Wojcieszak, D.; Janczak, D.; Bykowska, M.
2015-07-01
Assessment of the level of marbling in meat based on digital images is increasingly popular as computer vision tools become more advanced. However, when muscle cross-sections are the data source for marbling evaluation, there are still a few problems to cope with, and there is a need for an accurate method that would facilitate the evaluation procedure and increase its accuracy. The presented research was conducted to compare the effects of different image segmentation tools with regard to their usefulness for marbling evaluation on anatomical muscle cross-sections. This study is an initial trial in this field of research and an introduction to ultrasonic image processing and analysis.
Integration of instrumentation and processing software of a laser speckle contrast imaging system
NASA Astrophysics Data System (ADS)
Carrick, Jacob J.
Laser speckle contrast imaging (LSCI) has the potential to be a powerful tool in medicine, but more research in the field is required so it can be used properly. To help in the progression of Michigan Tech's research in the field, a graphical user interface (GUI) was designed in Matlab to control the instrumentation of the experiments as well as process the raw speckle images into contrast images while they are being acquired. The design of the system was successful and is currently being used by Michigan Tech's Biomedical Engineering department. This thesis describes the development of the LSCI GUI as well as offering a full introduction into the history, theory and applications of LSCI.
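A sketch of the raw-to-contrast conversion at the core of LSCI, assuming numpy and scipy: speckle contrast is the local standard deviation divided by the local mean in a sliding window. The window size is illustrative; the thesis's Matlab GUI and instrumentation control are not shown.

```python
# Speckle contrast K = sigma/mean over a sliding window; lower K indicates
# more blurring of the speckle pattern, i.e. more motion (e.g., blood flow).
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(raw, window=7):
    img = raw.astype(np.float64)
    mean = uniform_filter(img, size=window)
    mean_sq = uniform_filter(img * img, size=window)
    variance = np.maximum(mean_sq - mean * mean, 0.0)  # guard round-off
    return np.sqrt(variance) / (mean + 1e-12)
```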
Joshi, Anuja; Gislason-Lee, Amber J; Keeble, Claire; Sivananthan, Uduvil M
2017-01-01
Objective: The aim of this research was to quantify the reduction in radiation dose facilitated by image processing alone for percutaneous coronary intervention (PCI) patient angiograms, without reducing the perceived image quality required to confidently make a diagnosis. Methods: Incremental amounts of image noise were added to five PCI angiograms, simulating the angiogram as having been acquired at corresponding lower dose levels (10–89% dose reduction). 16 observers with relevant experience scored the image quality of these angiograms in 3 states—with no image processing and with 2 different modern image processing algorithms applied. These algorithms are used on state-of-the-art and previous generation cardiac interventional X-ray systems. Ordinal regression allowing for random effects and the delta method were used to quantify the dose reduction possible by the processing algorithms, for equivalent image quality scores. Results: Observers rated the quality of the images processed with the state-of-the-art and previous generation image processing with a 24.9% and 15.6% dose reduction, respectively, as equivalent in quality to the unenhanced images. The dose reduction facilitated by the state-of-the-art image processing relative to previous generation processing was 10.3%. Conclusion: Results demonstrate that statistically significant dose reduction can be facilitated with no loss in perceived image quality using modern image enhancement; the most recent processing algorithm was more effective in preserving image quality at lower doses. Advances in knowledge: Image enhancement was shown to maintain perceived image quality in coronary angiography at a reduced level of radiation dose using computer software to produce synthetic images from real angiograms simulating a reduction in dose. PMID:28124572
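A sketch of the dose-simulation idea used in the study: X-ray quantum noise grows roughly as 1/sqrt(dose), so an acquisition at a fraction f of the original dose can be approximated by adding zero-mean noise with variance sigma^2 * (1/f - 1). Here sigma is an assumed baseline noise level, not a value from the paper.

```python
# Add incremental Gaussian noise to simulate a lower-dose acquisition.
import numpy as np

def simulate_dose_reduction(image, dose_fraction, sigma=4.0, seed=None):
    rng = np.random.default_rng(seed)
    extra_std = sigma * np.sqrt(1.0 / dose_fraction - 1.0)
    noisy = image.astype(np.float64) + rng.normal(0.0, extra_std, image.shape)
    return np.clip(noisy, 0, 255)

# Example: simulate an angiogram frame at 75% of the original dose.
# low_dose = simulate_dose_reduction(frame, dose_fraction=0.75)
```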
Image Registration Workshop Proceedings
NASA Technical Reports Server (NTRS)
LeMoigne, Jacqueline (Editor)
1997-01-01
Automatic image registration has often been considered a preliminary step for higher-level processing, such as object recognition or data fusion. But with the unprecedented amounts of data which are being and will continue to be generated by newly developed sensors, automatic image registration has itself become an important research topic. This workshop presents a collection of very high quality work which has been grouped in four main areas: (1) theoretical aspects of image registration; (2) applications to satellite imagery; (3) applications to medical imagery; and (4) image registration for computer vision research.
Artificial intelligence for geologic mapping with imaging spectrometers
NASA Technical Reports Server (NTRS)
Kruse, F. A.
1993-01-01
This project was a three year study at the Center for the Study of Earth from Space (CSES) within the Cooperative Institute for Research in Environmental Science (CIRES) at the University of Colorado, Boulder. The goal of this research was to develop an expert system to allow automated identification of geologic materials based on their spectral characteristics in imaging spectrometer data such as the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). This requirement was dictated by the volume of data produced by imaging spectrometers, which prohibits manual analysis. The research described is based on the development of automated techniques for analysis of imaging spectrometer data that emulate the analytical processes used by a human observer. The research tested the feasibility of such an approach, implemented an operational system, and tested the validity of the results for selected imaging spectrometer data sets.
Automatic detection of blurred images in UAV image sets
NASA Astrophysics Data System (ADS)
Sieberth, Till; Wackrow, Rene; Chandler, Jim H.
2016-12-01
Unmanned aerial vehicles (UAVs) have become an interesting and active research topic in photogrammetry. Current research is based on images acquired by a UAV, which have a high ground resolution and good spectral and radiometric resolution due to the low flight altitude combined with a high-resolution camera. UAV image flights are also cost effective and have become attractive for many applications, including change detection in small-scale areas. One of the main problems preventing full automation of data processing of UAV imagery is the degradation effect of blur caused by camera movement during image acquisition. This can be caused by the normal flight movement of the UAV as well as strong winds, turbulence, or sudden operator inputs. This blur disturbs the visual analysis and interpretation of the data, causes errors, and can degrade the accuracy of automatic photogrammetric processing algorithms. The detection and removal of these images is currently achieved manually, which is both time consuming and prone to error, particularly for large image sets. To increase the quality of data processing, an automated process is necessary, which must be both reliable and quick. This paper describes the development of an automatic filtering process based on quantifying the blur in an image; images with known blur are processed digitally to determine a quantifiable measure of image blur, relieving the operator of detecting blurred images manually. The newly developed method detects blur caused by linear camera displacement and is based on how humans detect blur: by comparing an image to other images in order to establish whether it is blurred or not. The algorithm simulates this procedure by creating a comparison image through image processing; creating the comparison image internally makes the method independent of additional images. However, the calculated blur value, named SIEDS (saturation image edge difference standard-deviation), does not on its own provide an absolute number for judging whether an image is blurred; to achieve a reliable judgement of image sharpness, the SIEDS value has to be compared to other SIEDS values from the same dataset. The speed and reliability of the method were tested using a range of different UAV datasets, two of which are presented in this paper to demonstrate the effectiveness of the algorithm. The algorithm proves to be fast, and the returned values are visually correct, making it applicable to UAV datasets. Additionally, a close-range dataset was processed to determine whether the method is also useful for close-range applications. The results show that the method is also reliable for close-range images, which significantly extends its field of application.
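A sketch of the comparison idea behind SIEDS: re-blur the input, compare edge responses of the two versions, and take the standard deviation of the difference. Sharp images change strongly under re-blurring; already blurred images change little. The published metric differs in detail (it operates on a saturation image).

```python
# Blur score from the edge-response difference between an image and a
# re-blurred copy of itself.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def edge_strength(img):
    return np.hypot(sobel(img, axis=0), sobel(img, axis=1))

def blur_score(image, sigma=2.0):
    img = image.astype(np.float64)
    reblurred = gaussian_filter(img, sigma)
    return np.std(edge_strength(img) - edge_strength(reblurred))
```

As the abstract stresses, such a score is only meaningful relative to scores of other images from the same dataset, e.g. for flagging the lowest-scoring frames.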
Concept Learning through Image Processing.
ERIC Educational Resources Information Center
Cifuentes, Lauren; Yi-Chuan, Jane Hsieh
This study explored computer-based image processing as a study strategy for middle school students' science concept learning. Specifically, the research examined the effects of computer graphics generation on science concept learning and the impact of using computer graphics to show interrelationships among concepts during study time. The 87…
Imaging live cells at high spatiotemporal resolution for lab-on-a-chip applications.
Chin, Lip Ket; Lee, Chau-Hwang; Chen, Bi-Chang
2016-05-24
Conventional optical imaging techniques are constrained by the diffraction limit and struggle to image biomolecular and sub-cellular processes in living specimens. Novel optical imaging techniques are constantly evolving, driven by the desire for an imaging tool capable of seeing sub-cellular processes in a biological system, especially in three dimensions (3D) over time, i.e. 4D imaging. For fluorescence imaging of live cells, the trade-offs among imaging depth, spatial resolution, temporal resolution, and photo-damage are constrained by the limited photon budget of the emitters. The fundamental solution to this dilemma is to enlarge the photon bank, for example through the development of photostable and bright fluorophores, which has led to innovations in optical imaging techniques such as super-resolution microscopy and light sheet microscopy. With the synergy of microfluidic technology, which can manipulate biological cells and control their microenvironments to mimic in vivo physiological environments, studies of sub-cellular processes in various biological systems can be simplified and investigated systematically. In this review, we provide an overview of current state-of-the-art super-resolution and 3D live cell imaging techniques and their lab-on-a-chip applications, and finally discuss future research trends in new and breakthrough research areas of live specimen 4D imaging in controlled 3D microenvironments.
AstroImageJ: Image Processing and Photometric Extraction for Ultra-precise Astronomical Light Curves
NASA Astrophysics Data System (ADS)
Collins, Karen A.; Kielkopf, John F.; Stassun, Keivan G.; Hessman, Frederic V.
2017-02-01
ImageJ is a graphical user interface (GUI) driven, public domain, Java-based, software package for general image processing traditionally used mainly in life sciences fields. The image processing capabilities of ImageJ are useful and extendable to other scientific fields. Here we present AstroImageJ (AIJ), which provides an astronomy specific image display environment and tools for astronomy specific image calibration and data reduction. Although AIJ maintains the general purpose image processing capabilities of ImageJ, AIJ is streamlined for time-series differential photometry, light curve detrending and fitting, and light curve plotting, especially for applications requiring ultra-precise light curves (e.g., exoplanet transits). AIJ reads and writes standard Flexible Image Transport System (FITS) files, as well as other common image formats, provides FITS header viewing and editing, and is World Coordinate System aware, including an automated interface to the astrometry.net web portal for plate solving images. AIJ provides research grade image calibration and analysis tools with a GUI driven approach, and easily installed cross-platform compatibility. It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data with one tightly integrated software package.
Fang, Simin; Zhou, Sheng; Wang, Xiaochun; Ye, Qingsheng; Tian, Ling; Ji, Jianjun; Wang, Yanqun
2015-01-01
This work designs and improves FPGA-based signal processing algorithms for ophthalmic ultrasonography. Three signal processing modules were implemented in Quartus II using the Verilog HDL hardware language: a fully parallel distributed dynamic filter, digital quadrature demodulation, and logarithmic compression. Compared to the original system, the hardware cost is reduced, the whole image is clearer and contains more information about the deep eyeball, and the detection depth increases from 5 cm to 6 cm. The new algorithms meet the design requirements and optimize the system, effectively improving the image quality of existing equipment.
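A software sketch of two of the named modules, assuming numpy and scipy: digital quadrature demodulation (mix with cos/sin, low-pass filter, take the envelope) followed by logarithmic compression. Carrier frequency, sample rate, filter length, and dynamic range are illustrative; the paper implements these stages in Verilog HDL on the FPGA.

```python
# RF line -> I/Q demodulation -> envelope -> log compression (in dB).
import numpy as np
from scipy.signal import firwin, lfilter

def envelope_log(rf, fs, fc, dynamic_range_db=60.0):
    t = np.arange(len(rf)) / fs
    lowpass = firwin(64, fc / (fs / 2.0))           # cutoff vs. Nyquist
    i = lfilter(lowpass, 1.0, rf * np.cos(2 * np.pi * fc * t))
    q = lfilter(lowpass, 1.0, -rf * np.sin(2 * np.pi * fc * t))
    envelope = np.sqrt(i * i + q * q)
    db = 20.0 * np.log10(envelope / envelope.max() + 1e-6)
    return np.clip(db, -dynamic_range_db, 0.0)      # log compression
```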
Advances in interpretation of subsurface processes with time-lapse electrical imaging
Singha, Kamini; Day-Lewis, Frederick D.; Johnson, Tim B.; Slater, Lee D.
2015-01-01
Electrical geophysical methods, including electrical resistivity, time-domain induced polarization, and complex resistivity, have become commonly used to image the near subsurface. Here, we outline their utility for time-lapse imaging of hydrological, geochemical, and biogeochemical processes, focusing on new instrumentation, processing, and analysis techniques specific to monitoring. We review data collection procedures, parameters measured, and petrophysical relationships and then outline the state of the science with respect to inversion methodologies, including coupled inversion. We conclude by highlighting recent research focused on innovative applications of time-lapse imaging in hydrology, biology, ecology, and geochemistry, among other areas of interest.
Differential effects of cognitive load on emotion: Emotion maintenance versus passive experience.
DeFraine, William C
2016-06-01
Two separate lines of research have examined the effects of cognitive load on emotional processing with similar tasks but seemingly contradictory results. Some research has shown that the emotions elicited by passive viewing of emotional images are reduced by subsequent cognitive load. Other research has shown that such emotions are not reduced by cognitive load if the emotions are actively maintained. The present study sought to compare and resolve these two lines of research. Participants either passively viewed negative emotional images or maintained the emotions elicited by the images, and after a delay rated the intensity of the emotion they were feeling. Half of the trials included a math task during the delay to induce cognitive load, and the other half did not. Results showed that cognitive load reduced the intensity of negative emotions during passive viewing of emotional images but not during emotion maintenance. The present study replicates the findings of both lines of research and shows that the key factor is whether or not emotions are actively maintained. In the context of previous emotion maintenance research, the present results support the theoretical idea of a separable emotion maintenance process.
On techniques for angle compensation in nonideal iris recognition.
Schuckers, Stephanie A C; Schmid, Natalia A; Abhyankar, Aditya; Dorairaj, Vivekanand; Boyce, Christopher K; Hornak, Lawrence A
2007-10-01
The popularity of the iris biometric has grown considerably over the past two to three years. Most research has been focused on the development of new iris processing and recognition algorithms for frontal view iris images. However, a few challenging directions in iris research have been identified, including processing of a nonideal iris and iris at a distance. In this paper, we describe two nonideal iris recognition systems and analyze their performance. The word "nonideal" is used in the sense of compensating for off-angle occluded iris images. The system is designed to process nonideal iris images in two steps: 1) compensation for off-angle gaze direction and 2) processing and encoding of the rotated iris image. Two approaches are presented to account for angular variations in the iris images. In the first approach, we use Daugman's integrodifferential operator as an objective function to estimate the gaze direction. After the angle is estimated, the off-angle iris image undergoes geometric transformations involving the estimated angle and is further processed as if it were a frontal view image. The encoding technique developed for a frontal image is based on the application of the global independent component analysis. The second approach uses an angular deformation calibration model. The angular deformations are modeled, and calibration parameters are calculated. The proposed method consists of a closed-form solution, followed by an iterative optimization procedure. The images are projected on the plane closest to the base calibrated plane. Biorthogonal wavelets are used for encoding to perform iris recognition. We use a special dataset of the off-angle iris images to quantify the performance of the designed systems. A series of receiver operating characteristics demonstrate various effects on the performance of the nonideal-iris-based recognition system.
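For reference, the objective function named in the first approach, Daugman's integrodifferential operator, has the standard form

\[ \max_{(r,\,x_0,\,y_0)} \left|\, G_\sigma(r) * \frac{\partial}{\partial r} \oint_{r,\,x_0,\,y_0} \frac{I(x,y)}{2\pi r}\, ds \,\right| \]

where I(x, y) is the image, the contour integral runs along the circle of radius r centered at (x_0, y_0), and G_\sigma(r) is a Gaussian smoothing kernel; the parameters maximizing this blurred radial derivative locate a circular boundary, which is what makes the operator usable as an objective function for estimating gaze direction.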
NASA Astrophysics Data System (ADS)
Zaborowicz, M.; Przybył, J.; Koszela, K.; Boniecki, P.; Mueller, W.; Raba, B.; Lewicki, A.; Przybył, K.
2014-04-01
The aim of the project was to develop software which, on the basis of an image of a greenhouse tomato, allows extraction of its characteristics. Data gathered during image analysis and processing were used to build learning sets for artificial neural networks. The program can process pictures in JPEG format, acquire statistical information about a picture, and export it to an external file. The software is intended to batch-analyze the collected research material, with the obtained information saved as a CSV file. The program analyzes 33 independent parameters to describe the tested image. The application is dedicated to the processing and image analysis of greenhouse tomatoes, but it can also be used to analyze other fruits and vegetables of spherical shape.
Automated on-line fecal detection - digital eye guards against fecal contamination
USDA-ARS?s Scientific Manuscript database
Agricultural Research Service scientists in Athens, GA., have been granted a patent on a method to detect contaminants on food surfaces with imaging systems. Using a real-time imaging system in the processing plant, researchers Bob Windham, Kurt, Lawrence, Bosoon Park, and Doug Smith in the ARS Poul...
Display nonlinearity in digital image processing for visual communications
NASA Astrophysics Data System (ADS)
Peli, Eli
1992-11-01
The luminance emitted from a cathode ray tube (CRT) display is a nonlinear function (the gamma function) of the input video signal voltage. In most analog video systems, compensation for this nonlinear transfer function is implemented in the camera amplifiers. When CRT displays are used to present psychophysical stimuli in vision research, the specific display nonlinearity usually is measured and accounted for to ensure that the luminance of each pixel in the synthetic image properly represents the intended value. However, when using digital image processing, the linear analog-to-digital converters store a digital image that is nonlinearly related to the displayed or recorded image. The effect of this nonlinear transformation on a variety of image-processing applications used in visual communications is described.
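A sketch of software gamma compensation of the kind discussed above: a CRT-style display maps video value to luminance roughly as L = V**gamma, so desired luminances are pre-corrected with the inverse power. gamma = 2.2 is a typical assumed value; in practice the measured gamma of the specific display should be used.

```python
# Pre-correct desired relative luminance for a display with power-law gamma.
import numpy as np

def linearize_for_display(luminance, gamma=2.2):
    """Map desired relative luminance in [0, 1] to an 8-bit video value."""
    video = np.power(np.clip(luminance, 0.0, 1.0), 1.0 / gamma)
    return np.round(255.0 * video).astype(np.uint8)
```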
Photogrammetry on glaciers: Old and new knowledge
NASA Astrophysics Data System (ADS)
Pfeffer, W. T.; Welty, E.; O'Neel, S.
2014-12-01
In the past few decades terrestrial photogrammetry has become a widely used tool for glaciological research, brought about in part by the proliferation of high-quality, low-cost digital cameras, dramatic increases in image-processing power of computers, and very innovative progress in image processing, much of which has come from computer vision research and from the computer gaming industry. At present, glaciologists have developed their capacity to gather images much further than their ability to process them. Many researchers have accumulated vast inventories of imagery, but have no efficient means to extract the data they desire from them. In many cases these are single-image time series where the processing limitation lies in the paucity of methods to obtain 3-dimension object space information from measurements in the 2-dimensional image space; in other cases camera pairs have been operated but no automated means is in hand for conventional stereometric analysis of many thousands of image pairs. Often the processing task is further complicated by weak camera geometry or ground control distribution, either of which will compromise the quality of 3-dimensional object space solutions. Solutions exist for many of these problems, found sometimes among the latest computer vision results, and sometimes buried in decades-old pre-digital terrestrial photogrammetric literature. Other problems, particularly those arising from poorly constrained or underdetermined camera and ground control geometry, may be unsolvable. Small-scale, ground-based photography and photogrammetry of glaciers has grown over the past few decades in an organic and disorganized fashion, with much duplication of effort and little coordination or sharing of knowledge among researchers. Given the utility of terrestrial photogrammetry, its low cost (if properly developed and implemented), and the substantial value of the information to be had from it, some further effort to share knowledge and methods would be a great benefit for the community. We consider some of the main problems to be solved, and aspects of how optimal knowledge sharing might be accomplished.
Automated Reduction of Data from Images and Holograms
NASA Technical Reports Server (NTRS)
Lee, G. (Editor); Trolinger, James D. (Editor); Yu, Y. H. (Editor)
1987-01-01
Laser techniques are widely used for the diagnostics of aerodynamic flow and particle fields. The storage capability of holograms has made this technique even more powerful. Over 60 researchers in the fields of holography, particle sizing, and image processing convened to discuss these topics. The research programs of ten government laboratories, several universities, industry, and foreign countries were presented. A number of papers on holographic interferometry with applications to fluid mechanics were given, along with several papers on combustion and particle sizing, speckle velocimetry, and speckle interferometry. A session was held on image processing and automated fringe data reduction techniques and the types of facilities used for fringe reduction.
Advanced Secure Optical Image Processing for Communications
NASA Astrophysics Data System (ADS)
Al Falou, Ayman
2018-04-01
New image processing tools and data-processing network systems have considerably increased the volume of transmitted information, such as high-resolution 2D and 3D images. As a result, more complex networks and longer processing times become necessary, while high image quality and fast transmission are demanded by a growing number of applications. To satisfy these two demands, various solutions, either numerical or optical, have been offered separately. This book explores both alternatives and describes research that is converging towards optical/numerical hybrid solutions for high-volume signal and image processing and transmission. Without being limited to hybrid approaches, the latter are particularly investigated in this book with the aim of combining the advantages of both techniques. Purely numerical or optical solutions are also considered, since they emphasize the advantages of each of the two approaches separately.
High-performance image processing on the desktop
NASA Astrophysics Data System (ADS)
Jordan, Stephen D.
1996-04-01
The suitability of computers to the task of medical image visualization for the purposes of primary diagnosis and treatment planning depends on three factors: speed, image quality, and price. To be widely accepted the technology must increase the efficiency of the diagnostic and planning processes. This requires processing and displaying medical images of various modalities in real time, with accuracy and clarity, on an affordable system. Our approach to meeting this challenge began with market research to understand customer image processing needs. These needs were translated into system-level requirements, which in turn were used to determine which image processing functions should be implemented in hardware. The result is a computer architecture for 2D image processing that is both high-speed and cost-effective. The architectural solution is based on the high-performance PA-RISC workstation with an HCRX graphics accelerator. The image processing enhancements are incorporated into the image visualization accelerator (IVX), which attaches to the HCRX graphics subsystem. The IVX includes a custom VLSI chip which has a programmable convolver, a window/level mapper, and an interpolator supporting nearest-neighbor, bi-linear, and bi-cubic modes. This combination of features can be used to enable simultaneous convolution, pan, zoom, rotate, and window/level control on 1k by 1k by 16-bit medical images at 40 frames/second.
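As a rough illustration of what a hardware window/level mapper computes per pixel, here is a software sketch in Python/NumPy; the parameter values are hypothetical, and this is not the IVX implementation:

```python
import numpy as np

def window_level(image, window, level):
    """Map raw intensities to 8-bit display values via a window/level ramp.

    Values below level - window/2 clip to 0, values above level + window/2
    clip to 255, and intensities in between scale linearly.
    """
    lo = level - window / 2.0
    hi = level + window / 2.0
    scaled = (image.astype(np.float32) - lo) / (hi - lo)
    return (np.clip(scaled, 0.0, 1.0) * 255.0).astype(np.uint8)

# Hypothetical example: a 1k x 1k 16-bit image windowed to a 400-unit band
img = np.random.randint(0, 4096, (1024, 1024)).astype(np.uint16)
display = window_level(img, window=400, level=1000)
```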
NASA Technical Reports Server (NTRS)
1993-01-01
Electronic Imagery, Inc.'s ImageScale Plus software, developed through a Small Business Innovation Research (SBIR) contract with Kennedy Space Center for use on the space shuttle Orbiter in 1991, enables astronauts to conduct image processing in orbit: preparing electronic still camera images, displaying them, and downlinking images to ground-based scientists for evaluation. Electronic Imagery, Inc.'s ImageCount, a spin-off product of ImageScale Plus, is used to count trees in Florida orange groves. Other applications include x-ray and MRI imagery, textile designs, and special effects for movies. As of 1/28/98, the company could not be located; therefore, contact/product information is no longer valid.
USDA-ARS?s Scientific Manuscript database
Organic residues on equipment surfaces in poultry processing plants can cause cross-contamination and increase the risk of unsafe food for consumers. This research aimed to investigate the potential of the LED-induced fluorescence imaging technique for rapid inspection of stainless steel proces...
Detection of fuze defects by image-processing methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chung, M.J.
1988-03-01
This paper describes experimental studies of the detection of mechanical defects by the application of computer-processing methods to real-time radiographic images of fuze assemblies. The experimental results confirm that a new algorithm developed at Materials Research Laboratory has potential for the automatic inspection of these assemblies and of others that contain discrete components. The algorithm was applied to images that contain a range of grey levels and has been found to be tolerant to image variations encountered under simulated production conditions.
1989-03-01
[OCR fragments: figure captions and diagram labels from an automated photointerpretation testbed report.] Markov random field (MRF) theory provides a powerful alternative texture model and has resulted in intensive research activity in MRF model-based texture analysis. Additional, and perhaps more powerful, features have to be incorporated into the image segmentation procedure, and object detection remains part of the interpretation process.
NASA Technical Reports Server (NTRS)
Irwin, Daniel E.
2004-01-01
The overall purpose of this training session is to familiarize Central American project cooperators with the remote sensing and image processing research being conducted by the NASA research team, and to acquaint them with the data products being produced in the areas of Land Cover and Land Use Change and carbon modeling under the NASA SERVIR project. The training session will therefore be both informative and practical in nature. Specifically, the course will focus on the physics of remote sensing, various satellite and airborne sensors (Landsat, MODIS, IKONOS, Star-3i), processing techniques, and commercial off-the-shelf image processing software.
GPU-Based High-performance Imaging for Mingantu Spectral RadioHeliograph
NASA Astrophysics Data System (ADS)
Mei, Ying; Wang, Feng; Wang, Wei; Chen, Linjie; Liu, Yingbo; Deng, Hui; Dai, Wei; Liu, Cuiyin; Yan, Yihua
2018-01-01
As a dedicated solar radio interferometer, the MingantU SpEctral RadioHeliograph (MUSER) generates massive observational data in the frequency range of 400 MHz-15 GHz. High-performance imaging is a critically important aspect of MUSER's massive data processing requirements. In this study, we implement a practical high-performance imaging pipeline for MUSER data processing. First, the specifications of MUSER are introduced and its imaging requirements are analyzed. Referring to the most commonly used radio astronomy software, such as CASA and MIRIAD, we then implement a high-performance imaging pipeline based on Graphics Processing Unit (GPU) technology with respect to the current operational status of MUSER. A series of critical algorithms and their pseudo-codes are presented in detail, i.e., detection of the solar disk and sky brightness, automatic centering of the solar disk, and estimation of the number of iterations for the CLEAN algorithm. Preliminary experimental results indicate that the proposed imaging approach significantly increases the processing performance of MUSER and generates high-quality images that meet the requirements of MUSER data processing. Supported by the National Key Research and Development Program of China (2016YFE0100300), the Joint Research Fund in Astronomy (Nos. U1531132, U1631129, U1231205) under a cooperative agreement between the National Natural Science Foundation of China (NSFC) and the Chinese Academy of Sciences (CAS), and the National Natural Science Foundation of China (Nos. 11403009 and 11463003).
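The automatic solar-disk centering step lends itself to a compact illustration. The sketch below is a minimal centroid-based version in Python, not MUSER's actual pipeline; the default threshold heuristic and the function name are assumptions:

```python
import numpy as np
from scipy import ndimage

def center_solar_disk(image, threshold=None):
    """Re-center the solar disk using a threshold mask and its centroid.

    threshold: intensity cut separating disk from sky; defaults to the
    midpoint between the image minimum and maximum.
    """
    if threshold is None:
        threshold = 0.5 * (float(image.min()) + float(image.max()))
    disk = image > threshold                       # crude disk segmentation
    cy, cx = ndimage.center_of_mass(disk)          # disk centroid (row, col)
    rows, cols = image.shape
    # Bilinear shift that moves the centroid to the image center
    return ndimage.shift(image, (rows / 2.0 - cy, cols / 2.0 - cx), order=1)
```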
A Software Platform for Post-Processing Waveform-Based NDE
NASA Technical Reports Server (NTRS)
Roth, Donald J.; Martin, Richard E.; Seebo, Jeff P.; Trinh, Long B.; Walker, James L.; Winfree, William P.
2007-01-01
Ultrasonic, microwave, and terahertz nondestructive evaluation (NDE) imaging systems generally require the acquisition of waveforms at each scan point to form an image. For such systems, signal and image processing methods are commonly needed to extract information from the waves and to improve the resolution of, and highlight, defects in the image. Since all waveform-based NDE methods share some similarity, a common software platform containing multiple signal and image processing techniques makes sense where multiple techniques, scientists, engineers, and organizations are involved. This presentation describes NASA Glenn Research Center's approach to developing a common software platform for processing waveform-based NDE signals and images. This platform is currently in use at NASA Glenn and at the Lockheed Martin Michoud Assembly Facility for processing pulsed terahertz and ultrasonic data. Highlights of the software operation will be given, along with a case study using terahertz data. The authors also invite scientists and engineers who are interested in sharing customized signal and image processing algorithms to contribute to this effort by letting the authors code up and include these algorithms in future releases.
NASA Astrophysics Data System (ADS)
Shatravin, V.; Shashev, D. V.
2018-05-01
Robots are increasingly being used in every industry, and one of the most high-tech areas is the creation of completely autonomous robotic devices, including vehicles. Research worldwide has demonstrated the effectiveness of vision systems in autonomous robotic devices. However, the use of these systems is limited by the computational and energy resources available on the robotic device. This paper describes the results of applying an original approach to image processing on reconfigurable computing environments, using morphological operations over grayscale images as an example. This approach is promising for realizing complex image processing algorithms and real-time image analysis in autonomous robotic devices.
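To make the morphological operations concrete, the sketch below computes grayscale erosion, dilation, and opening with SciPy; this is a plain software rendering of the operations, not the reconfigurable-environment implementation from the paper:

```python
import numpy as np
from scipy import ndimage

# Grayscale erosion/dilation with a 3x3 window: each output pixel is the
# local minimum/maximum of its neighborhood, a purely local computation
# that maps naturally onto cell-based reconfigurable environments.
frame = np.random.randint(0, 256, (480, 640)).astype(np.uint8)

eroded = ndimage.grey_erosion(frame, size=(3, 3))    # local minimum
dilated = ndimage.grey_dilation(frame, size=(3, 3))  # local maximum
opened = ndimage.grey_dilation(eroded, size=(3, 3))  # opening: erode, then dilate
```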
Application of near-infrared image processing in agricultural engineering
NASA Astrophysics Data System (ADS)
Chen, Ming-hong; Zhang, Guo-ping; Xia, Hongxing
2009-07-01
Recently, with the development of computer technology, the field of application of near-infrared (NIR) image processing has become much wider. This paper introduces the technical characteristics and development of modern NIR imaging and NIR spectroscopy analysis, and surveys applications and studies of NIR image processing in agricultural engineering in recent years, based on the application principles and developing characteristics of near-infrared imaging. NIR imaging can be very useful for nondestructive inspection of the external and internal quality of agricultural products, and near-infrared spectroscopy is important for detecting stored-grain insects. Computer vision detection based on NIR imaging can help manage food logistics, and the application of NIR imaging has promoted quality management of agricultural products. Finally, advice and prospects are put forward for further application research of NIR imaging in agricultural engineering.
Smartphones as image processing systems for prosthetic vision.
Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Suaning, Gregg J
2013-01-01
The feasibility of implants for prosthetic vision has been demonstrated by research and commercial organizations. In most devices, an essential forerunner to the internal stimulation circuit is an external electronics solution for capturing, processing, and relaying image information, as well as extracting useful features from the scene surrounding the patient. The capabilities and multitude of image processing algorithms that can be performed by the device in real time play a major part in the final quality of the prosthetic vision. It is therefore optimal to use powerful hardware while avoiding bulky, straining solutions. Recent publications have reported on portable single-board computers fast enough for computationally intensive image processing. Following the rapid evolution of commercial, ultra-portable ARM (Advanced RISC Machine) mobile devices, the authors investigated the feasibility of modern smartphones running complex face detection as external processing devices for vision implants. The role of dedicated graphics processors in speeding up computation was evaluated while performing a demanding noise reduction algorithm (image denoising). The time required for face detection was found to decrease by 95% from 2.5-year-old to recent devices. In denoising, graphics acceleration played a major role, speeding up denoising by a factor of 18. These results demonstrate that the technology has matured sufficiently to be considered a valid external electronics platform for visual prosthetic research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myronakis, M; Cai, W; Dhou, S
Purpose: To design a comprehensive open-source, publicly available, graphical user interface (GUI) to facilitate the configuration, generation, processing, and use of the 4D Extended Cardiac-Torso (XCAT) phantom. Methods: The XCAT phantom includes over 9000 anatomical objects as well as respiratory, cardiac, and tumor motion. It is widely used for research studies in medical imaging and radiotherapy. The phantom generation process involves the configuration of a text script to parameterize the geometry, motion, and composition of the whole body and objects within it, and to generate simulated PET or CT images. To avoid the need for manual editing or script writing, our MATLAB-based GUI uses slider controls, drop-down lists, buttons, and graphical text input to parameterize and process the phantom. Results: Our GUI can be used to: a) generate parameter files; b) generate the voxelized phantom; c) combine the phantom with a lesion; d) display the phantom; e) produce average and maximum intensity images from the phantom output files; f) incorporate irregular patient breathing patterns; and g) generate DICOM files containing phantom images. The GUI provides local help information using tool-tip strings on the currently selected phantom, minimizing the need for external documentation. The DICOM generation feature is intended to simplify the process of importing the phantom images into radiotherapy treatment planning systems or other clinical software. Conclusion: The GUI simplifies and automates the use of the XCAT phantom for imaging-based research projects in medical imaging or radiotherapy. This has the potential to accelerate research conducted with the XCAT phantom and to ease the learning curve for new users. This tool does not include the XCAT phantom software itself. We would like to acknowledge funding from MRA and Varian Medical Systems Inc.
1983-10-19
[OCR fragments from ISG Report 104, "Image Understanding Research," a final technical report covering research activity during the period October 1...] Although the system uses knowledge-based symbolic reasoning, it nonetheless remains dependent on the lower levels of iconic processing for its raw information; with no a priori knowledge of where any particular line might go, there is no information regarding the extent of memory access required for the local processing.
Coal Layer Identification using Electrical Resistivity Imaging Method in Sinjai Area South Sulawesi
NASA Astrophysics Data System (ADS)
Ilham Samanlangi, Andi
2018-03-01
The purpose of this research is to image subsurface resistivity for coal identification in Panaikang Village, Sinjai, South Sulawesi. Resistivity measurements were conducted along three lines, 400 meters and 300 meters in length, using the resistivity imaging method with a dipole-dipole configuration. The resistivity data were processed using Res2DInv software to image the resistivity variation and interpret the lithology. The results show that coal resistivity in Line 1 is about 70-200 Ωm, in Line 2 about 70-90 Ωm, and in Line 3 about 70-200 Ωm, with an average thickness of about 10 meters; the coal is distributed towards the east of the research area.
Challenges for data storage in medical imaging research.
Langer, Steve G
2011-04-01
Researchers in medical imaging face multiple challenges in storing, indexing, maintaining the viability of, and sharing their data. Addressing all of these concerns requires a constellation of tools, but not all of them need to be local to the site. In particular, the data storage challenges faced by researchers can begin to require professional information technology skills. With limited human resources and funds, the medical imaging researcher may be better served by an outsourcing strategy for some management aspects. This paper outlines an approach to managing the main objectives faced by medical imaging scientists whose work includes processing and data mining of non-standard file formats, and relating those files to their DICOM-standard descendants. The capacity of the approach scales as the researcher's needs grow by leveraging the on-demand provisioning ability of cloud computing.
Advancing Patient-centered Outcomes in Emergency Diagnostic Imaging: A Research Agenda.
Kanzaria, Hemal K; McCabe, Aileen M; Meisel, Zachary M; LeBlanc, Annie; Schaffer, Jason T; Bellolio, M Fernanda; Vaughan, William; Merck, Lisa H; Applegate, Kimberly E; Hollander, Judd E; Grudzen, Corita R; Mills, Angela M; Carpenter, Christopher R; Hess, Erik P
2015-12-01
Diagnostic imaging is integral to the evaluation of many emergency department (ED) patients. However, relatively little effort has been devoted to patient-centered outcomes research (PCOR) in emergency diagnostic imaging. This article provides background on this topic and the conclusions of the 2015 Academic Emergency Medicine consensus conference PCOR work group regarding "Diagnostic Imaging in the Emergency Department: A Research Agenda to Optimize Utilization." The goal was to determine a prioritized research agenda to establish which outcomes related to emergency diagnostic imaging are most important to patients, caregivers, and other key stakeholders and which methods will most optimally engage patients in the decision to undergo imaging. Case vignettes are used to emphasize these concepts as they relate to a patient's decision to seek care at an ED and the care received there. The authors discuss applicable research methods and approaches such as shared decision-making that could facilitate better integration of patient-centered outcomes and patient-reported outcomes into decisions regarding emergency diagnostic imaging. Finally, based on a modified Delphi process involving members of the PCOR work group, prioritized research questions are proposed to advance the science of patient-centered outcomes in ED diagnostic imaging. © 2015 by the Society for Academic Emergency Medicine.
Image Harvest: an open-source platform for high-throughput plant image processing and analysis
Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal
2016-01-01
High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable to processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917
Identification of seedling cabbages and weeds using hyperspectral imaging
USDA-ARS?s Scientific Manuscript database
Target detection is one of the research focuses for precision chemical application. This study developed a method to identify seedling cabbages and weeds using hyperspectral imaging. In processing the image data with ENVI software, after dimension reduction, noise reduction, de-correlation for h...
ERIC Educational Resources Information Center
Melchiori, Gerlinda S.
1990-01-01
A managerial process for enhancing the image and public reputation of a higher education institution is outlined. It consists of five stages: market research; data analysis and market positioning; communication of results and recommendations to the administration; development of a global image program; and impact evaluation. (MSE)
ERIC Educational Resources Information Center
Smolík, Filip; Kríž, Adam
2015-01-01
Imageability is the ability of words to elicit mental sensory images of their referents. Recent research has suggested that imageability facilitates the processing and acquisition of inflected word forms. The present study examined whether inflected word forms are acquired earlier in highly imageable words in Czech children. Parents of 317…
Supervised restoration of degraded medical images using multiple-point geostatistics.
Pham, Tuan D
2012-06-01
Reducing noise in medical images has been an important issue of research and development for medical diagnosis, patient treatment, and validation of biomedical hypotheses. Noise inherently exists in medical and biological images due to acquisition and transmission in any imaging device. Unlike image enhancement, image restoration is the process of removing noise from a degraded image in order to recover, as much as possible, its original version. This paper presents a statistically supervised approach to medical image restoration using the concept of multiple-point geostatistics. Experimental results have shown the effectiveness of the proposed technique, which has potential as a new methodology for medical and biological image processing. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
The Research on Denoising of SAR Image Based on Improved K-SVD Algorithm
NASA Astrophysics Data System (ADS)
Tan, Linglong; Li, Changkai; Wang, Yueqin
2018-04-01
SAR images often suffer noise interference during acquisition and transmission, which can greatly reduce image quality and cause great difficulties for image processing. The existing complete DCT dictionary algorithm is fast, but its denoising effect is poor. To address this poor denoising performance, this paper applies the K-SVD (K-means and singular value decomposition) algorithm to image noise suppression. First, the sparse dictionary structure is introduced in detail; the dictionary has a compact representation and can be trained effectively on the image signal. Then, the sparse dictionary is trained by the K-SVD algorithm according to the sparse representation over the dictionary. The algorithm has advantages in high-dimensional data processing. Experimental results show that the proposed algorithm removes speckle noise more effectively than the complete DCT dictionary and better retains edge details.
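For orientation, K-SVD alternates a sparse-coding step with a dictionary-update step. The sketch below shows only the sparse-coding half: a minimal orthogonal matching pursuit over an overcomplete DCT dictionary of the kind commonly used to initialize K-SVD. It is an illustration under stated assumptions, not the paper's implementation, and the helper names are hypothetical:

```python
import numpy as np

def dct_dictionary(patch=8, atoms=16):
    """Overcomplete separable 2-D DCT dictionary: (patch*patch) x (atoms*atoms)."""
    k = np.arange(patch)[:, None] * np.arange(atoms)[None, :]
    D1 = np.cos(np.pi * k / atoms)               # sampled cosines, patch x atoms
    D1 /= np.linalg.norm(D1, axis=0)             # unit-norm 1-D atoms
    return np.kron(D1, D1)                       # 2-D atoms by Kronecker product

def omp(D, y, n_nonzero):
    """Orthogonal matching pursuit: greedy sparse code of y over dictionary D."""
    residual, support = y.astype(float).copy(), []
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))  # best-correlated atom
        Ds = D[:, support]
        coef, *_ = np.linalg.lstsq(Ds, y, rcond=None)           # refit on support
        residual = y - Ds @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

D = dct_dictionary()                  # 64 x 256 dictionary for 8x8 patches
y = np.random.randn(64)               # stand-in for a vectorized noisy patch
x = omp(D, y, n_nonzero=5)            # sparse code; D @ x is the denoised patch
```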
Application of Neutron Tomography in Culture Heritage research.
Mongy, T
2014-02-01
Neutron tomography (NT) investigation of cultural heritage (CH) objects is an efficient tool for understanding the culture of ancient civilizations. Neutron imaging (NI) is a state-of-the-art non-destructive tool in the area of CH and plays an important role in modern archaeology; NI technology can also be widely utilized in the field of elemental analysis. At the Egypt Second Research Reactor (ETRR-2), a collimated neutron radiography (NR) beam is employed for neutron imaging, and a digital CCD camera records the beam attenuation in the sample. This helps in detecting hidden objects and characterizing material properties. Research activity can be extended to use computer software for quantitative neutron measurement, and the development of image processing algorithms can be used to obtain high-quality images. In this work, a full description of ETRR-2 is given, together with its up-to-date neutron imaging system. A tomographic investigation of a clay forged artifact representing a CH object was carried out by neutron imaging methods in order to obtain hidden information and highlight some attractive quantitative measurements. Computer software was used for image processing and enhancement, and the Astra Image 3.0 Pro software was employed for high-precision measurements and image enhancement using advanced algorithms. This work increased the effective utilization of the ETRR-2 neutron radiography/tomography (NR/T) technique in cultural heritage activities. © 2013 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Murray, N. D.
1985-01-01
Current technology projections indicate a lack of availability of special-purpose computing for Space Station applications. Potential functions for video image special-purpose processing are being investigated, such as smoothing, enhancement, restoration and filtering, data compression, feature extraction, object detection and identification, pixel interpolation/extrapolation, spectral estimation and factorization, and vision synthesis. Architectural approaches are also being identified and a conceptual design generated. Computationally simple algorithms will be researched and their image/vision effectiveness determined. Suitable algorithms will be implemented in an overall architectural approach that will provide image/vision processing at video rates that are flexible, selectable, and programmable. Information is given in the form of charts, diagrams, and outlines.
Comin, Cesar Henrique; Xu, Xiaoyin; Wang, Yaming; Costa, Luciano da Fontoura; Yang, Zhong
2014-12-01
We present an image processing approach to automatically analyze duo-channel microscopic images of muscular fiber nuclei and cytoplasm. Nuclei and cytoplasm play a critical role in determining the health and functioning of muscular fibers, as changes in nuclei and cytoplasm manifest in many diseases such as muscular dystrophy and hypertrophy. Quantitative evaluation of muscle fiber nuclei and cytoplasm is thus of great importance to researchers in musculoskeletal studies. The proposed computational approach consists of image processing steps to segment and delineate cytoplasm and identify nuclei in two-channel images. Morphological operations such as skeletonization are applied to extract the length of cytoplasm for quantification. We tested the approach on real images and found that it achieves high accuracy, objectivity, and robustness. Copyright © 2014 Elsevier Ltd. All rights reserved.
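A minimal sketch of the skeleton-based length measurement, assuming a scikit-image environment and a pre-segmented binary cytoplasm mask; the function name and the pixel-count length estimate are simplifications (diagonal steps are undercounted):

```python
import numpy as np
from skimage.morphology import skeletonize

def cytoplasm_length(mask, microns_per_px=1.0):
    """Estimate fiber length from a binary cytoplasm mask.

    The mask is reduced to a one-pixel-wide centerline; counting its
    pixels gives a first-order length estimate in physical units.
    """
    skeleton = skeletonize(mask.astype(bool))
    return skeleton.sum() * microns_per_px

# Example on a synthetic horizontal fiber, 200 px long and 9 px wide
mask = np.zeros((64, 256), dtype=bool)
mask[28:37, 20:220] = True
print(cytoplasm_length(mask, microns_per_px=0.5))  # roughly 100 microns
```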
Mihaylova, Milena; Manahilov, Velitchko
2010-11-24
Research has shown that the processing time for discriminating illusory contours is longer than for real contours. Little is known, however, about whether the visual processes associated with detecting regions of illusory surfaces are also slower than those responsible for detecting luminance-defined images. Using a speed-accuracy trade-off (SAT) procedure, we measured accuracy as a function of processing time for detecting illusory Kanizsa-type and luminance-defined squares embedded in 2D static luminance noise. The data revealed that the illusory images were detected at slower processing speed than the real images, while the points in time when accuracy departed from chance were not significantly different for the two stimuli. The classification images for detecting illusory and real squares showed that observers employed similar detection strategies, using surface regions of the real and illusory squares. The lack of significant differences between the x-intercepts of the SAT functions for illusory and luminance-modulated stimuli suggests that the detection of surface regions of both images could be based on activation of a single mechanism (the dorsal magnocellular visual pathway). The slower speed for detecting illusory images as compared to luminance-defined images could be attributed to slower processes of filling-in of regions of illusory images within the dorsal pathway.
Research based on the SoPC platform of feature-based image registration
NASA Astrophysics Data System (ADS)
Shi, Yue-dong; Wang, Zhi-hui
2015-12-01
This paper focuses on implementing feature-based image registration on a System on a Programmable Chip (SoPC) hardware platform. We implement the image registration algorithm on an FPGA chip, in which the embedded soft-core processor Nios II speeds up the image processing system. In this way, image registration technology can operate independently of a PC and, consequently, find more extensive use. The experimental results indicate that our system shows stable performance, particularly in the matching stage, where noise immunity is good and the feature points of the images show a reasonable distribution.
Image processing and reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chartrand, Rick
2012-06-15
This talk will examine some mathematical methods for image processing and the solution of underdetermined, linear inverse problems. The talk will have a tutorial flavor, mostly accessible to undergraduates, while still presenting research results. The primary approach is the use of optimization problems. We will find that relaxing the usual assumption of convexity will give us much better results.
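One concrete instance of the nonconvex relaxation the talk alludes to is replacing the convex l1 norm with the lp quasi-norm (p < 1) in sparse recovery. Below is a minimal iteratively-reweighted-least-squares sketch for approximately minimizing ||x||_p^p subject to Ax = b; this is a generic textbook scheme offered for illustration, not necessarily the speaker's method, and all parameter values are assumptions:

```python
import numpy as np

def irls_p(A, b, p=0.5, iters=50, eps=1.0):
    """Approximately minimize ||x||_p^p (p < 1, nonconvex) subject to Ax = b.

    Each pass solves a weighted least-squares problem; the smoothing
    term eps shrinks over the iterations to sharpen the solution.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # minimum-l2 starting point
    for _ in range(iters):
        w = (x**2 + eps) ** (1.0 - p / 2.0)       # smoothed weights |x_i|^(2-p)
        AW = A * w                                 # A @ diag(w) via broadcasting
        x = w * (A.T @ np.linalg.solve(AW @ A.T, b))
        eps = max(eps * 0.1, 1e-12)
    return x

# Recover a 5-sparse signal of length 100 from 40 random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, size=5, replace=False)] = rng.standard_normal(5)
x_hat = irls_p(A, A @ x_true)          # close to x_true for p = 0.5
```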
Spaceborne SAR Imaging Algorithm for Coherence Optimized
Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun
2016-01-01
This paper proposes a SAR imaging algorithm with maximal coherence, built on existing SAR imaging algorithms. The basic idea of conventional SAR imaging is that the output signal attains maximum signal-to-noise ratio (SNR) when the optimal imaging parameters are used. A traditional imaging algorithm can acquire the best focusing effect but introduces decoherence in the subsequent interferometric processing. The algorithm proposed here applies consistent imaging parameters when focusing the SAR echoes. Although the SNR of the output signal is slightly reduced, the coherence between images is largely preserved, and an interferogram of high quality is finally obtained. In this paper, two scenes of Envisat ASAR data over Zhangbei are employed to test the algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and application. PMID:26871446
Osteoarthritis Severity Determination using Self Organizing Map Based Gabor Kernel
NASA Astrophysics Data System (ADS)
Anifah, L.; Purnomo, M. H.; Mengko, T. L. R.; Purnama, I. K. E.
2018-02-01
The number of osteoarthritis patients in Indonesia is enormous, so early action is needed for this disease to be managed. The aim of this paper is to determine osteoarthritis severity from X-ray image templates using a Gabor-kernel-based method. This research is divided into three stages: the first is image processing using the Gabor kernel, the second is the learning stage, and the third is the testing phase. The image processing stage normalizes the image dimensions to a 50 × 200 template. The learning stage is carried out with an initial learning rate of 0.5 and a total of 1000 iterations. The testing stage is performed using the weights generated during learning. Testing has been completed and results obtained: KL-Grade 0 has an accuracy of 36.21%, KL-Grade 1 of 40.52%, and KL-Grade 2 and KL-Grade 3 of 15.52% and 25.86%, respectively. The implication is that this research is expected to serve as a decision support system for medical practitioners in determining the KL-Grade of X-ray images of knee osteoarthritis.
DICOMGrid: a middleware to integrate PACS and EELA-2 grid infrastructure
NASA Astrophysics Data System (ADS)
Moreno, Ramon A.; de Sá Rebelo, Marina; Gutierrez, Marco A.
2010-03-01
Medical images provide a wealth of information for physicians, but the huge amount of data produced by medical imaging equipment in a modern health institution is not yet explored to its full potential. Nowadays medical images are used in hospitals mostly as part of routine activities, while their intrinsic value for research is underestimated. Medical images can be used for the development of new visualization techniques, new algorithms for patient care, and new image processing techniques. These research areas usually require the use of huge volumes of data to obtain significant results, along with enormous computing capabilities. Such qualities are characteristic of grid computing systems such as the EELA-2 infrastructure. Grid technologies allow the sharing of data on a large scale in a safe and integrated environment and offer high computing capabilities. In this paper we describe DicomGrid, a middleware to store and retrieve medical images, properly anonymized, that can be used by researchers to test new processing techniques using the computational power offered by grid technology. A prototype of DicomGrid is under evaluation and permits the submission of jobs into the EELA-2 grid infrastructure while offering a simple interface that requires minimal understanding of grid operation.
Integrating research and clinical neuroimaging for the evaluation of traumatic brain injury recovery
NASA Astrophysics Data System (ADS)
Senseney, Justin; Ollinger, John; Graner, John; Lui, Wei; Oakes, Terry; Riedy, Gerard
2015-03-01
Advanced MRI research and other imaging modalities may serve as biomarkers for the evaluation of traumatic brain injury (TBI) recovery. However, these advanced modalities typically require off-line processing, which creates images that are incompatible with the radiologist viewing software sold commercially. AGFA Impax is an example of such a picture archiving and communication system (PACS) that is used by many radiology departments in the United States Military Health System. By taking advantage of Impax's use of the Digital Imaging and Communications in Medicine (DICOM) standard, we developed a system that allows advanced medical imaging to be incorporated into clinical PACS. Radiology research can now be conducted using existing clinical imaging display platforms in combination with image processing techniques that are only available outside of the clinical scanning environment. We extracted the spatial and identification elements of the DICOM standard that are necessary to allow research images to be incorporated into a clinical radiology system, and developed a tool that annotates research images with the proper tags. This allows for the evaluation of imaging representations of biological markers that may be useful in the evaluation of TBI and TBI recovery.
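A hedged sketch of the tagging step using pydicom: identification and spatial elements are copied from a clinical reference slice onto an off-line research image so the PACS can file it with the original study. The file names are hypothetical, and a production tool would also generate fresh series/SOP instance UIDs:

```python
import pydicom

# Copy identification and spatial DICOM elements from a clinical reference
# image onto an off-line research result (file names are hypothetical).
ref = pydicom.dcmread("clinical_reference.dcm")
res = pydicom.dcmread("research_map.dcm")

for keyword in ("PatientID", "PatientName", "StudyInstanceUID",
                "FrameOfReferenceUID", "ImagePositionPatient",
                "ImageOrientationPatient", "PixelSpacing"):
    if keyword in ref:
        setattr(res, keyword, getattr(ref, keyword))

res.SeriesDescription = "RESEARCH: processed map"  # flag as a research series
res.save_as("research_map_tagged.dcm")
```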
NASA Astrophysics Data System (ADS)
Zhang, L.; Hao, T.; Zhao, B.
2009-12-01
Hydrocarbon seepage effects can cause magnetic alteration zones near the surface, and the magnetic anomalies induced by these alteration zones can thus be used to locate oil-gas potential regions. In order to reduce the inaccuracy and ambiguity of hydrocarbon anomalies recognized from magnetic data alone, and to meet the requirement of integrated management and synthetic analysis of multi-source geoscientific data, it is necessary to construct a recognition system that integrates the functions of data management, real-time processing, synthetic evaluation, and geologic mapping. In this paper, research on the key techniques of the system is discussed. Image processing methods can be applied to potential-field images to make them easier for visual interpretation and geological understanding. In gravity or magnetic images, anomalies with identical frequency-domain characteristics but different spatial distributions will differ in texture and the relevant textural statistics. Texture is a description of the structural arrangement and spatial variation of a dataset or an image, and has been applied in many research fields. Textural analysis is a procedure that extracts textural features by image processing methods and thus obtains a quantitative or qualitative description of texture. When two kinds of anomalies have no distinct difference in amplitude, or overlap in frequency spectrum, they may still be distinguishable by their texture, which can be considered a textural contrast. Therefore, for the recognition system we propose a new “magnetic spots” recognition method based on image processing techniques. The method can be divided into three major steps. First, separate local anomalies caused by shallow, relatively small sources from the total magnetic field, and pre-process the local magnetic anomaly data by image processing methods such that magnetic anomalies can be expressed as points, lines, and polygons with spatial correlation; this includes histogram-equalization-based image display, and object recognition and extraction. Second, mine the spatial characteristics and correlations of the magnetic anomalies using textural statistics and analysis, and study the features of known anomalous objects (closures, hydrocarbon-bearing structures, igneous rocks, etc.) in the same research area. Finally, classify the anomalies, cluster them according to their similarity, and predict hydrocarbon-induced “magnetic spots” in combination with geologic, drilling, and rock core data. The system uses ArcGIS as the secondary development platform, inherits the basic functions of ArcGIS, and adds two main special functional modules: a module for conventional potential-field data processing methods and a module for feature extraction and enhancement based on image processing and analysis techniques. The system can be applied to realize geophysical detection and recognition of near-surface hydrocarbon seepage anomalies, provide technical support for locating oil-gas potential regions, and make geophysical data processing and interpretation more efficient.
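The textural statistics step can be illustrated with gray-level co-occurrence matrices, one standard way to quantify the "textural contrast" described above. A minimal sketch assuming scikit-image and an anomaly grid already rescaled to 8-bit; the offsets and chosen properties are assumptions, not the paper's exact feature set:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(patch, distances=(1, 2), angles=(0.0, np.pi / 2)):
    """Co-occurrence texture statistics for an 8-bit anomaly-image patch."""
    glcm = graycomatrix(patch, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy")}

# Hypothetical use: characterize the texture of a gridded magnetic-anomaly patch
patch = np.random.randint(0, 256, (64, 64)).astype(np.uint8)
print(texture_features(patch))
```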
Proceedings of the Augmented VIsual Display (AVID) Research Workshop
NASA Technical Reports Server (NTRS)
Kaiser, Mary K. (Editor); Sweet, Barbara T. (Editor)
1993-01-01
The papers, abstracts, and presentations were presented at a three day workshop focused on sensor modeling and simulation, and image enhancement, processing, and fusion. The technical sessions emphasized how sensor technology can be used to create visual imagery adequate for aircraft control and operations. Participants from industry, government, and academic laboratories contributed to panels on Sensor Systems, Sensor Modeling, Sensor Fusion, Image Processing (Computer and Human Vision), and Image Evaluation and Metrics.
Spatial imaging in color and HDR: prometheus unchained
NASA Astrophysics Data System (ADS)
McCann, John J.
2013-03-01
The Human Vision and Electronic Imaging (HVEI) conferences at the IS&T/SPIE Electronic Imaging meetings have brought together research in the fundamentals of both vision and digital technology. This conference has incorporated many color disciplines that have contributed to the theory and practice of today's imaging: color constancy, models of vision, digital output, high-dynamic-range imaging, and the understanding of perceptual mechanisms. Before digital imaging, silver halide color was a pixel-based mechanism. Color films are closely tied to colorimetry, the science of matching pixels in a black surround: the quanta catch of the sensitized silver salts determines the amount of colored dye in the final print. The rapid expansion of digital imaging over the past 25 years has eliminated the limitation of using small local regions in forming images, and spatial interactions can now generate images more like vision. Since the 1950s, neurophysiology has shown that post-receptor neural processing is based on spatial interactions, reinforcing the findings of 19th-century experimental psychology. This paper reviews the role of HVEI in color, emphasizing the interaction of research on vision and the new algorithms and processes made possible by electronic imaging.
NASA Astrophysics Data System (ADS)
Patel, M. N.; Looney, P.; Young, K.; Halling-Brown, M. D.
2014-03-01
Radiological imaging is fundamental within the healthcare industry and has become routinely adopted for diagnosis, disease monitoring, and treatment planning. Over the past two decades both diagnostic and therapeutic imaging have undergone rapid growth, and the ability to harness this large influx of medical images can provide an essential resource for research and training. Traditionally, the systematic collection of medical images for research from heterogeneous sites has not been commonplace within the NHS and is fraught with challenges, including data acquisition, storage, secure transfer, and correct anonymisation. Here, we describe a semi-automated system which comprehensively oversees the collection of both unprocessed and processed medical images from acquisition to a centralised database. The provision of unprocessed images within our repository enables a multitude of potential research possibilities that utilise the images. Furthermore, we have developed systems and software to integrate these data with their associated clinical data and annotations, providing a centralised dataset for research. Currently we regularly collect digital mammography images from two sites and partially collect from a further three, with efforts to expand into other modalities and sites ongoing. At present we have collected 34,014 2D images from 2,623 individuals. In this paper we describe our medical image collection system for research and discuss the wide spectrum of challenges faced during the design and implementation of such systems.
NASA Astrophysics Data System (ADS)
Balbin, Jessie R.; Dela Cruz, Jennifer C.; Camba, Clarisse O.; Gozo, Angelo D.; Jimenez, Sheena Mariz B.; Tribiana, Aivje C.
2017-06-01
Acne vulgaris, commonly called acne, is a skin problem that occurs when oil and dead skin cells clog a person's pores, typically because hormonal changes make the skin oilier. The problem is that people lack a real assessment of the sensitivity of their skin in terms of the fluid development on their faces that tends to produce acne vulgaris, and thus suffer more complications. This research aims to assess acne vulgaris using a luminescent visualization system through optical imaging and the integration of image processing algorithms. Specifically, this research aims to design a prototype for facial fluid analysis using a luminescent visualization system through optical imaging and the integration of a fluorescent imaging system, and to classify the different facial fluids present in each person. Throughout the process, some structures and layers of the face are excluded, leaving only a mapped facial structure with acne regions. Facial fluid regions are distinguished from the acne region as they are characterized differently.
CNTRICS Imaging Biomarkers Final Task Selection: Long-Term Memory and Reinforcement Learning
Ragland, John D.; Cohen, Neal J.; Cools, Roshan; Frank, Michael J.; Hannula, Deborah E.; Ranganath, Charan
2012-01-01
Functional imaging paradigms hold great promise as biomarkers for schizophrenia research as they can detect altered neural activity associated with the cognitive and emotional processing deficits that are so disabling to this patient population. In an attempt to identify the most promising functional imaging biomarkers for research on long-term memory (LTM), the Cognitive Neuroscience Treatment Research to Improve Cognition in Schizophrenia (CNTRICS) initiative selected “item encoding and retrieval,” “relational encoding and retrieval,” and “reinforcement learning” as key LTM constructs to guide the nomination process. This manuscript reports on the outcome of the third CNTRICS biomarkers meeting in which nominated paradigms in each of these domains were discussed by a review panel to arrive at a consensus on which of the nominated paradigms could be recommended for immediate translational development. After briefly describing this decision process, information is presented from the nominating authors describing the 4 functional imaging paradigms that were selected for immediate development. In addition to describing the tasks, information is provided on cognitive and neural construct validity, sensitivity to behavioral or pharmacological manipulations, availability of animal models, psychometric characteristics, effects of schizophrenia, and avenues for future development. PMID:22102094
Text Information Extraction System (TIES) | Informatics Technology for Cancer Research (ITCR)
TIES is a service based software system for acquiring, deidentifying, and processing clinical text reports using natural language processing, and also for querying, sharing and using this data to foster tissue and image based research, within and between institutions.
BMC Ecology image competition: the winning images.
Harold, Simon; Wong, Yan; Baguette, Michel; Bonsall, Michael B; Clobert, Jean; Royle, Nick J; Settele, Josef
2013-03-22
BMC Ecology announces the winning entries in its inaugural Ecology Image Competition, open to anyone affiliated with a research institute. The competition, which received more than 200 entries from international researchers at all career levels and a wide variety of scientific disciplines, was looking for striking visual interpretations of ecological processes. In this Editorial, our academic Section Editors and guest judge Dr Yan Wong explain what they found most appealing about their chosen winning entries, and highlight a few of the outstanding images that didn't quite make it to the top prize. PMID:23517630
Application of AIS Technology to Forest Mapping
NASA Technical Reports Server (NTRS)
Yool, S. R.; Star, J. L.
1985-01-01
Concerns about the environmental effects of large-scale deforestation have prompted efforts to map forests over large areas using various remote sensing data and image processing techniques. Basic research on the spectral characteristics of forest vegetation is required to form a basis for the development of new techniques and for image interpretation. Examination of LANDSAT data and image processing algorithms over a portion of boreal forest has demonstrated the complexity of the relations between the various expressions of forest canopies, environmental variability, and the relative capacities of different image processing algorithms to achieve high classification accuracies under these conditions. Airborne Imaging Spectrometer (AIS) data may in part provide the means to interpret the responses of standard data and techniques to the vegetation, owing to its relatively high spectral resolution.
Digital watermarking algorithm research of color images based on quaternion Fourier transform
NASA Astrophysics Data System (ADS)
An, Mali; Wang, Weijiang; Zhao, Zhen
2013-10-01
A watermarking algorithm for color images based on the quaternion Fourier transform (QFFT) and an improved quantization index modulation (QIM) algorithm is proposed in this paper. The original image is transformed by the QFFT, the watermark image is processed by compression and quantization coding, and the processed watermark is then embedded into the components of the transformed original image. The scheme achieves embedding and blind extraction of the watermark image. Experimental results show that the watermarking algorithm based on the improved QIM algorithm with distortion compensation achieves a good tradeoff between invisibility and robustness, and better robustness against Gaussian noise, salt-and-pepper noise, JPEG compression, cropping, filtering, and image enhancement attacks than the traditional QIM algorithm.
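For readers unfamiliar with QIM, the basic (non-distortion-compensated) scheme embeds each watermark bit by quantizing a host coefficient onto one of two interleaved lattices. A minimal sketch, with the step size delta as an assumed parameter:

```python
import numpy as np

def qim_embed(coeff, bit, delta=8.0):
    """Embed one bit by snapping a coefficient to one of two lattices.

    Bit 0 uses the quantizer lattice {k * delta}; bit 1 uses the lattice
    shifted by half a step, {k * delta + delta / 2}.
    """
    offset = 0.0 if bit == 0 else delta / 2.0
    return np.round((coeff - offset) / delta) * delta + offset

def qim_extract(coeff, delta=8.0):
    """Blind extraction: pick the bit whose lattice lies closest to coeff."""
    d0 = abs(coeff - qim_embed(coeff, 0, delta))
    d1 = abs(coeff - qim_embed(coeff, 1, delta))
    return int(d1 < d0)

marked = qim_embed(123.7, bit=1)   # embed a 1 into a transform coefficient
assert qim_extract(marked) == 1    # recovered without the original image
```

Larger delta improves robustness to noise at the cost of visible distortion; the improved algorithm in the paper adds distortion compensation on top of this basic quantizer.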
Visual Information Processing for Television and Telerobotics
NASA Technical Reports Server (NTRS)
Huck, Friedrich O. (Editor); Park, Stephen K. (Editor)
1989-01-01
This publication is a compilation of the papers presented at the NASA conference on Visual Information Processing for Television and Telerobotics. The conference was held at the Williamsburg Hilton, Williamsburg, Virginia on May 10 to 12, 1989. The conference was sponsored jointly by NASA Offices of Aeronautics and Space Technology (OAST) and Space Science and Applications (OSSA) and the NASA Langley Research Center. The presentations were grouped into three sessions: Image Gathering, Coding, and Advanced Concepts; Systems; and Technologies. The program was organized to provide a forum in which researchers from industry, universities, and government could be brought together to discuss the state of knowledge in image gathering, coding, and processing methods.
Hyperspectral imaging for food processing automation
NASA Astrophysics Data System (ADS)
Park, Bosoon; Lawrence, Kurt C.; Windham, William R.; Smith, Doug P.; Feldner, Peggy W.
2002-11-01
This paper presents research results demonstrating that hyperspectral imaging can be used effectively for detecting feces (from the duodenum, ceca, and colon) and ingesta on the surface of poultry carcasses, with potential application to real-time, on-line processing of poultry for automatic safety inspection. The hyperspectral imaging system included a line-scan camera with a prism-grating-prism spectrograph, fiber-optic line lighting, motorized lens control, and hyperspectral image processing software. Hyperspectral image processing algorithms, specifically a band ratio of dual-wavelength (565/517 nm) images followed by thresholding, were effective for identifying fecal and ingesta contamination of poultry carcasses. A multispectral imaging system, comprising a common-aperture camera with three optical trim filters (515.4 nm with 8.6-nm FWHM, 566.4 nm with 8.8-nm FWHM, and 631 nm with 10.2-nm FWHM) that were selected and validated with the hyperspectral imaging system, was developed for real-time, on-line application. The total image processing time required for the multispectral images captured by the common-aperture camera was approximately 251 ms, or 3.99 frames/sec. A preliminary test showed that the accuracy of the real-time multispectral imaging system in detecting feces and ingesta on corn/soybean-fed poultry carcasses was 96%. However, many false-positive spots that cause system errors were also detected.
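The band-ratio-plus-threshold step is simple enough to sketch directly. The version below assumes a calibrated reflectance cube and an illustrative threshold value; the paper tunes its cut-off empirically:

```python
import numpy as np

def contamination_mask(cube, wavelengths, thresh=1.05):
    """Dual-wavelength band-ratio detector for surface contamination.

    cube: hyperspectral image of shape (rows, cols, bands);
    wavelengths: band-center wavelengths in nm;
    thresh: ratio cut-off (an assumed value, tuned empirically in practice).
    """
    wl = np.asarray(wavelengths, dtype=float)
    b565 = int(np.argmin(np.abs(wl - 565.0)))     # band nearest 565 nm
    b517 = int(np.argmin(np.abs(wl - 517.0)))     # band nearest 517 nm
    ratio = cube[:, :, b565] / (cube[:, :, b517] + 1e-6)  # avoid divide-by-zero
    return ratio > thresh                         # True where contamination is likely
```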
NASA Astrophysics Data System (ADS)
Hou, H. S.
1985-07-01
An overview of recent progress in the digital processing of binary images in the context of document processing is presented here. The topics covered include input scanning, adaptive thresholding, halftoning, scaling and resolution conversion, data compression, character recognition, electronic mail, digital typography, and output scanning. Emphasis is placed on illustrating the basic principles rather than on describing particular systems. Recent technology advances and research in this field are also mentioned.
NASA Astrophysics Data System (ADS)
Dlesk, A.; Raeva, P.; Vach, K.
2018-05-01
Processing analog photogrammetric negatives with current methods brings new challenges and possibilities, for example, the creation of a 3D model from archival images, which enables comparison of the historical and current states of cultural heritage objects. The main purpose of this paper is to present the possibilities of processing archival analog images captured by the metric photogrammetric camera Rollei 6006. In 1994, the Czech company EuroGV s.r.o. carried out photogrammetric measurements of the former limestone quarry Great America, located in the Central Bohemian Region of the Czech Republic. All the negatives of the photogrammetric images, complete documentation, coordinates of geodetically measured ground control points, calibration reports, and the exterior orientations of the images calculated in the Combined Adjustment Program are preserved and were available for the current processing. The negatives were scanned and processed using the structure-from-motion (SfM) method. The result of the research is an assessment of the accuracy that can be expected when the proposed methodology is applied to Rollei metric images originally acquired for terrestrial intersection photogrammetry.
Booth, T C; Jackson, A; Wardlaw, J M; Taylor, S A; Waldman, A D
2010-01-01
Incidental findings found in “healthy” volunteers during research imaging are common and have important implications for study design and performance, particularly in the areas of informed consent, subjects' rights, clinical image analysis and disclosure. In this study, we aimed to determine current practice and regulations concerning information that should be given to research subjects when obtaining consent, reporting of research images, who should be informed about any incidental findings and the method of disclosure. We reviewed all UK, European and international humanitarian, legal and ethical agencies' guidance. We found that the guidance on what constitutes incidental pathology, how to recognise it and what to do about it is inconsistent between agencies, difficult to find and less complete in the UK than elsewhere. Where given, guidance states that volunteers should be informed during the consent process about how research images will be managed, whether a mechanism exists for identifying incidental findings, arrangements for their disclosure, the potential benefit or harm and therapeutic options. The effects of incidentally discovered pathology on the individual can be complex and far-reaching. Radiologist involvement in analysis of research images varies widely; many incidental findings might therefore go unrecognised. In conclusion, guidance on the management of research imaging is inconsistent, limited and does not address the interests of volunteers. Improved standards to guide management of research images and incidental findings are urgently required. PMID:20335427
Dual Language Use in Sign-Speech Bimodal Bilinguals: fNIRS Brain-Imaging Evidence
ERIC Educational Resources Information Center
Kovelman, Ioulia; Shalinsky, Mark H.; White, Katherine S.; Schmitt, Shawn N.; Berens, Melody S.; Paymer, Nora; Petitto, Laura-Ann
2009-01-01
The brain basis of bilinguals' ability to use two languages at the same time has been a hotly debated topic. On the one hand, behavioral research has suggested that bilingual dual language use involves complex and highly principled linguistic processes. On the other hand, brain-imaging research has revealed that bilingual language switching…
1976-09-30
Estimation and Detection of Images Degraded by Film Grain Noise - Firouz Naderi 200 5.3 Image Restoration by Spline Functions...given for the choice of this number: (a) Higher-order terms correspond to noise in the image and should be ignored. (b) An analytical...expansion are sufficient to characterize the signal exactly. Results of experimental evaluation of signals containing noise are presented next
Code of Federal Regulations, 2012 CFR
2012-10-01
... restaurant trade, but whose primary business function is not the processing or packaging of fish or fish... processing vessels and any person in the business of acquiring (taking title to) fish directly from harvesters. Research means any type of research designed to advance the image, desirability, usage...
Using hyperspectral imaging technology to identify diseased tomato leaves
NASA Astrophysics Data System (ADS)
Li, Cuiling; Wang, Xiu; Zhao, Xueguan; Meng, Zhijun; Zou, Wei
2016-11-01
During the growth of tomato plants, genetic factors, poor environmental conditions, or parasite damage can generate a series of unusual symptoms in the plants' physiology, tissue structure and external form; as a result, the plants cannot grow normally, which in turn reduces tomato yield and economic benefit. Hyperspectral images usually have high spectral resolution and contain both spectral and image information, so this study adopted hyperspectral imaging technology to identify diseased tomato leaves and developed a simple hyperspectral imaging system, including a halogen lamp light source unit, a hyperspectral image acquisition unit and a data processing unit. The spectrometer detection wavelength ranged from 400 nm to 1000 nm. After the hyperspectral images of tomato leaves were captured, they needed to be calibrated. This research used a spectrum angle matching method and a spectral red edge parameter discriminant method, respectively, to identify diseased tomato leaves. The spectral red edge parameter discriminant method produced higher recognition accuracy, above 90%. The results show that using hyperspectral imaging technology to identify diseased tomato leaves is feasible, and they provide a discriminant basis for subsequent disease control of tomato plants.
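As a rough illustration of the red edge parameter used above: the red edge position is commonly taken as the wavelength of maximum slope of the reflectance spectrum between about 680 and 750 nm, and a shift of this position can flag stressed or diseased tissue. A minimal sketch (the band range and the use of a healthy reference are assumptions, not the authors' exact parameters):

import numpy as np

def red_edge_position(wavelengths, reflectance):
    # Restrict to the red edge region (assumed 680-750 nm).
    mask = (wavelengths >= 680) & (wavelengths <= 750)
    wl, refl = wavelengths[mask], reflectance[mask]
    slope = np.gradient(refl, wl)   # first derivative dR/d(lambda)
    return wl[np.argmax(slope)]     # wavelength of maximum slope

A leaf pixel could then be flagged as diseased when its red edge position shifts relative to a healthy reference spectrum.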
Advanced Image Processing for NASA Applications
NASA Technical Reports Server (NTRS)
Le Moigne, Jacqueline
2007-01-01
The future of space exploration will involve cooperating fleets of spacecraft or sensor webs geared towards coordinated and optimal observation of Earth Science phenomena. The main advantage of such systems is to utilize multiple viewing angles as well as multiple spatial and spectral resolutions of sensors carried on multiple spacecraft but acting collaboratively as a single system. Within this framework, our research focuses on all areas related to sensing in collaborative environments, which means systems utilizing intracommunicating spatially distributed sensor pods or crafts being deployed to monitor or explore different environments. This talk will describe the general concept of sensing in collaborative environments, will give a brief overview of several technologies developed at NASA Goddard Space Flight Center in this area, and then will concentrate on specific image processing research related to that domain, specifically image registration and image fusion.
[Present status and trend of heart fluid mechanics research based on medical image analysis].
Gan, Jianhong; Yin, Lixue; Xie, Shenghua; Li, Wenhua; Lu, Jing; Luo, Anguo
2014-06-01
This paper introduces the main current methods for heart fluid mechanics research and examines the characteristics and weaknesses of three primary analysis methods based on magnetic resonance imaging, color Doppler ultrasound and grayscale ultrasound images, respectively. It is pointed out that particle image velocimetry (PIV), speckle tracking and block matching share the same nature: all three algorithms adopt block correlation. Further analysis shows that, with the development of information technology and sensors, research on cardiac function and fluid mechanics will focus on the energy transfer process of heart fluid, the characteristics of the chamber wall in relation to blood flow, and fluid-structure interaction.
NASA Astrophysics Data System (ADS)
Wojcieszak, D.; Przybył, J.; Lewicki, A.; Ludwiczak, A.; Przybylak, A.; Boniecki, P.; Koszela, K.; Zaborowicz, M.; Przybył, K.; Witaszek, K.
2015-07-01
The aim of this research was to investigate the possibility of using computer image analysis and artificial neural networks to assess the amount of dry matter in the tested compost samples. The research led to the conclusion that neural image analysis may be a useful tool for determining the quantity of dry matter in compost. The generated neural model may be the starting point for research into the use of neural image analysis to assess the content of dry matter and other constituents of compost. The presented RBF 19:19-2-1:1 model, characterized by a test error of 0.092189, may be the most efficient.
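The abstract does not detail the RBF 19:19-2-1:1 topology, but a radial basis function network of that general shape (19 inputs, a small Gaussian hidden layer, one output) can be sketched as follows; the k-means centers, width and data are placeholders:

import numpy as np
from scipy.cluster.vq import kmeans

def train_rbf(X, y, n_centers=2, sigma=1.0):
    # Minimal RBF network: k-means centers plus a linear output layer.
    centers, _ = kmeans(X.astype(float), n_centers)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    H = np.exp(-d2 / (2 * sigma ** 2))            # Gaussian activations
    H1 = np.hstack([H, np.ones((len(X), 1))])     # bias column
    w, *_ = np.linalg.lstsq(H1, y, rcond=None)    # least-squares weights
    return centers, w

def predict_rbf(X, centers, w, sigma=1.0):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    H1 = np.hstack([np.exp(-d2 / (2 * sigma ** 2)), np.ones((len(X), 1))])
    return H1 @ w

Here each row of X would hold 19 image-derived features of a compost sample and y its measured dry matter content.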
Image Harvest: an open-source platform for high-throughput plant image processing and analysis.
Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal
2016-05-01
High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.
High resolution imaging of objects located within a wall
NASA Astrophysics Data System (ADS)
Greneker, Eugene F.; Showman, Gregory A.; Trostel, John M.; Sylvester, Vincent
2006-05-01
Researchers at Georgia Tech Research Institute have developed a high resolution imaging radar technique that allows large sections of a test wall to be scanned in the X and Y dimensions. The resulting images provide information on what, if anything, is inside the wall. The scanning homodyne radar operates at a frequency of 24.1 GHz with an output power level of approximately 10 milliwatts. The imaging technique is currently being used to study the detection of toxic mold on the back surface of wallboard using radar as a sensor. The moisture associated with the mold can easily be detected. In addition to mold, the technique will image objects as small as a 4 millimeter sphere on the front or rear of the wallboard and will penetrate both sides of a wall made of studs and wallboard. Signal processing is performed on the resulting data to further sharpen the image. Photos of the scanner and images produced by the scanner are presented, and the signal processing and technical challenges are also discussed.
Digital enhancement of X-rays for NDT
NASA Technical Reports Server (NTRS)
Butterfield, R. L.
1980-01-01
Report is "cookbook" for digital processing of industrial X-rays. Computer techniques, previously used primarily in laboratory and developmental research, have been outlined and codified into step-by-step procedures for enhancing X-ray images. Those involved in nondestructive testing should find report valuable asset, particularly if visual inspection is method currently used to process X-ray images.
Processing of Fine-Scale Piezoelectric Ceramic/Polymer Composites for Sensors and Actuators
NASA Technical Reports Server (NTRS)
Janas, V. F.; Safari, A.
1996-01-01
The objective of the research effort at Rutgers is the development of lead zirconate titanate (PZT) ceramic/polymer composites with different designs for transducer applications including hydrophones, biomedical imaging, non-destructive testing, and air imaging. In this review, methods for processing both large-area and multifunctional ceramic/polymer composites for acoustic transducers are discussed.
ERIC Educational Resources Information Center
Barak, Moshe; Asad, Khaled
2012-01-01
Background: This research focused on the development, implementation and evaluation of a course on image-processing principles aimed at middle-school students. Purpose: The overarching purpose of the study was that of integrating the learning of subjects in science, technology, engineering and mathematics (STEM), and linking the learning of these…
Measuring the complexity of design in real-time imaging software
NASA Astrophysics Data System (ADS)
Sangwan, Raghvinder S.; Vercellone-Smith, Pamela; Laplante, Phillip A.
2007-02-01
Due to the intricacies in the algorithms involved, the design of imaging software is considered to be more complex than non-image processing software (Sangwan et al, 2005). A recent investigation (Larsson and Laplante, 2006) examined the complexity of several image processing and non-image processing software packages along a wide variety of metrics, including those postulated by McCabe (1976), Chidamber and Kemerer (1994), and Martin (2003). This work found that it was not always possible to quantitatively compare the complexity between imaging applications and non-image processing systems. Newer research and an accompanying tool (Structure 101, 2006), however, provide a greatly simplified approach to measuring software complexity. Therefore it may be possible to definitively quantify the complexity differences between imaging and non-imaging software, between imaging and real-time imaging software, and between software programs of the same application type. In this paper, we review prior results and describe the methodology for measuring complexity in imaging systems. We then apply a new complexity measurement methodology to several sets of imaging and non-imaging code in order to compare the complexity differences between the two types of applications. The benefit of such quantification is far reaching, for example, leading to more easily measured performance improvement and quality in real-time imaging code.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 10 2014-01-01 2014-01-01 false Research. 1208.24 Section 1208.24 Agriculture..., RESEARCH, AND INFORMATION ORDER Processed Raspberry Promotion, Research, and Information Order Definitions § 1208.24 Research. Research means any type of test, study, or analysis designed to advance the image...
Advances in biologically inspired on/near sensor processing
NASA Astrophysics Data System (ADS)
McCarley, Paul L.
1999-07-01
As electro-optic sensors increase in size and frame rate, the data transfer and digital processing resource requirements also increase. In many missions, the spatial area of interest is but a small fraction of the available field of view. Choosing the right region of interest, however, is a challenge and still requires an enormous amount of downstream digital processing resources. In order to filter this ever-increasing amount of data, we look at how nature solves the problem. The Advanced Guidance Division of the Munitions Directorate, Air Force Research Laboratory at Eglin AFB, Florida, has been pursuing research in the area of advanced sensor and image processing concepts based on biologically inspired sensory information processing. A summary of two 'neuromorphic' processing efforts will be presented along with a seeker system concept utilizing this innovative technology. The Neuroseek program is developing a 256 X 256 2-color dual band IRFPA coupled to an optimized silicon CMOS read-out and processing integrated circuit that provides simultaneous full-frame imaging in the MWIR/LWIR wavebands along with built-in biologically inspired sensor image processing functions. Concepts and requirements for future such efforts will also be discussed.
Web-based interactive 2D/3D medical image processing and visualization software.
Mahmoudi, Seyyed Ehsan; Akhondi-Asl, Alireza; Rahmani, Roohollah; Faghih-Roohi, Shahrooz; Taimouri, Vahid; Sabouri, Ahmad; Soltanian-Zadeh, Hamid
2010-05-01
There are many medical image processing software tools available for research and diagnosis purposes. However, most of these tools are available only as local applications. This limits the accessibility of the software to a specific machine, and thus the data and processing power of that application are not available to other workstations. Further, there are operating system and processing power limitations which prevent such applications from running on every type of workstation. By developing web-based tools, it is possible for users to access the medical image processing functionalities wherever the internet is available. In this paper, we introduce a pure web-based, interactive, extendable, 2D and 3D medical image processing and visualization application that requires no client installation. Our software uses a four-layered design consisting of an algorithm layer, web-user-interface layer, server communication layer, and wrapper layer. To compete with extendibility of the current local medical image processing software, each layer is highly independent of other layers. A wide range of medical image preprocessing, registration, and segmentation methods are implemented using open source libraries. Desktop-like user interaction is provided by using AJAX technology in the web-user-interface. For the visualization functionality of the software, the VRML standard is used to provide 3D features over the web. Integration of these technologies has allowed implementation of our purely web-based software with high functionality without requiring powerful computational resources in the client side. The user-interface is designed such that the users can select appropriate parameters for practical research and clinical studies. Copyright (c) 2009 Elsevier Ireland Ltd. All rights reserved.
Fiber pixelated image database
NASA Astrophysics Data System (ADS)
Shinde, Anant; Perinchery, Sandeep Menon; Matham, Murukeshan Vadakke
2016-08-01
Imaging of physically inaccessible parts of the body, such as the colon, at micron-level resolution is highly important in diagnostic medical imaging. Though flexible endoscopes based on imaging fiber bundles are used for such diagnostic procedures, their inherent honeycomb-like structure creates fiber pixelation effects. This impedes the observer from perceiving the information in a captured image and hinders the direct use of image processing and machine intelligence techniques on the recorded signal. Significant efforts have been made by researchers in the recent past in the development and implementation of pixelation removal techniques. However, researchers have often used their own sets of images without making the source data available, which has limited the universal use and adaptability of their methods. A database of pixelated images is a current requirement to meet the growing diagnostic needs in the healthcare arena. An innovative fiber pixelated image database is presented, which consists of pixelated images that are synthetically generated and experimentally acquired. The sample space encompasses test patterns of different scales, sizes, and shapes. It is envisaged that this proposed database will alleviate the current limitations associated with relevant research and development and will be of great help for researchers working on comb structure removal algorithms.
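One family of comb structure removal methods that such a database could benchmark is frequency-domain filtering: the quasi-periodic honeycomb pattern appears as off-center peaks in the Fourier spectrum and can be suppressed with a low-pass mask. A minimal sketch (the normalized cutoff radius is an arbitrary assumption):

import numpy as np

def remove_pixelation(img, cutoff=0.12):
    # Suppress the periodic fiber-bundle pattern with a circular
    # low-pass mask in the Fourier domain.
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)  # normalized radius
    F[r > cutoff] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

In practice the cutoff must sit below the comb's fundamental spatial frequency, which depends on the fiber core spacing.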
Dedicated computer system AOTK for image processing and analysis of horse navicular bone
NASA Astrophysics Data System (ADS)
Zaborowicz, M.; Fojud, A.; Koszela, K.; Mueller, W.; Górna, K.; Okoń, P.; Piekarska-Boniecka, H.
2017-07-01
The aim of the research was to create the dedicated application AOTK (pol. Analiza Obrazu Trzeszczki Kopytowej) for image processing and analysis of the horse navicular bone. The application was produced using specialized software, namely Visual Studio 2013 and the .NET platform, and the AForge.NET libraries were used to implement the image processing and analysis algorithms. The implemented algorithms enable accurate extraction of the characteristics of navicular bones and the saving of data to external files. The modules implemented in AOTK allow calculation of user-selected distances and a preliminary assessment of the preservation of the structure of the examined objects. The application interface is designed in a way that ensures the user the best possible view of the analyzed images.
Implementation of sobel method to detect the seed rubber plant leaves
NASA Astrophysics Data System (ADS)
Suyanto; Munte, J.
2018-03-01
This research was conducted to develop a system that can identify and recognize the type of rubber tree based on the pattern of the plant's leaves. The research steps start with image data acquisition, followed by image processing, edge detection and identification by template matching. Edge detection uses the Sobel operator. Pattern recognition takes an image as input and compares it with other images in a database of templates. Experiments were carried out in one phase, identification of the leaf edge, using 14 superior rubber plant leaf images and 5 test images for each type (clone) of the plant. The experiments yielded a recognition rate of 91.79%.
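For reference, the Sobel step described above amounts to convolving the leaf image with two 3 x 3 kernels and thresholding the gradient magnitude; a minimal sketch (the threshold is an assumption):

import numpy as np
from scipy import ndimage

def sobel_edges(gray, threshold=60):
    gx = ndimage.sobel(gray.astype(float), axis=1)  # horizontal gradient
    gy = ndimage.sobel(gray.astype(float), axis=0)  # vertical gradient
    return np.hypot(gx, gy) > threshold             # binary edge map

The resulting edge map can then be compared against each stored template, for example by normalized cross-correlation, for the template matching stage.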
Research Issues in Image Registration for Remote Sensing
NASA Technical Reports Server (NTRS)
Eastman, Roger D.; LeMoigne, Jacqueline; Netanyahu, Nathan S.
2007-01-01
Image registration is an important element in data processing for remote sensing with many applications and a wide range of solutions. Despite considerable investigation the field has not settled on a definitive solution for most applications and a number of questions remain open. This article looks at selected research issues by surveying the experience of operational satellite teams, application-specific requirements for Earth science, and our experiments in the evaluation of image registration algorithms with emphasis on the comparison of algorithms for subpixel accuracy. We conclude that remote sensing applications put particular demands on image registration algorithms to take into account domain-specific knowledge of geometric transformations and image content.
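As one concrete example of the subpixel comparisons discussed here, phase correlation with Fourier upsampling is a common baseline for translational registration. A sketch using scikit-image on synthetic data (recent versions expose this as skimage.registration.phase_cross_correlation; treat the exact API as version-dependent):

import numpy as np
from scipy import ndimage
from skimage.registration import phase_cross_correlation

reference = np.random.rand(256, 256)
moving = ndimage.shift(reference, (3.7, -1.2))   # known subpixel shift

# upsample_factor=100 resolves the shift to roughly 1/100 of a pixel.
shift, error, diffphase = phase_cross_correlation(
    reference, moving, upsample_factor=100)
print(shift)   # about [-3.7, 1.2]: the correction to apply to `moving`

Domain-specific knowledge, such as a sensor's geometric model, would normally constrain or refine such a generic estimate.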
Desktop publishing and medical imaging: paper as hardcopy medium for digital images.
Denslow, S
1994-08-01
Desktop-publishing software and hardware have progressed to the point that many widely used word-processing programs are capable of printing high-quality digital images with many shades of gray from black to white. Accordingly, it should be relatively easy to print digital medical images on paper for reports, instructional materials, and research notes. Components were assembled that were necessary for extracting image data from medical imaging devices and converting the data to a form usable by word-processing software. A system incorporating these components was implemented in a medical setting and has been operating for 18 months. The use of this system by medical staff has been monitored.
NASA Astrophysics Data System (ADS)
Amalia, A.; Rachmawati, D.; Lestari, I. A.; Mourisa, C.
2018-03-01
Colposcopy has been used primarily to diagnose pre-cancerous and cancerous lesions because this procedure gives an exaggerated view of the tissues of the vagina and the cervix. However, the poor quality of colposcopy images sometimes makes them challenging for physicians to recognize and analyze. Generally, image processing approaches to identifying cervical cancer rely on complex classification or clustering methods. In this study, we wanted to show that cervical cancer can be identified by applying edge detection alone to the colposcopy image. We implement and compare two edge detection operators, the isotropic and the Canny operator. The research methodology comprises image processing, training, and testing stages. In the image processing stage, the colposcopy images are transformed by an nth root power transformation to obtain better detection results, followed by edge detection. Training is the process of labelling all dataset images with the cervical cancer stage; a pathologist was involved as an expert to diagnose the colposcopy images as a reference. Testing decides the cancer stage classification by comparing the similarity of a colposcopy image in the testing stage with the images resulting from the training process. We used 30 images as a dataset. Both the Canny and the isotropic operator achieved the same accuracy of 80%. The average running time was 0.3619206 ms for the Canny operator and 1.49136262 ms for the isotropic operator. The results showed that the Canny operator is better than the isotropic operator because it generates more precise edges in less time.
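A minimal sketch of the operator comparison described above, pairing scikit-image's Canny detector with a simple 3 x 3 isotropic gradient operator (the isotropic kernel uses sqrt(2) center weights; the threshold and sigma values are assumptions):

import numpy as np
from scipy import ndimage
from skimage import feature

def isotropic_edges(gray, threshold=60):
    s = np.sqrt(2.0)
    kx = np.array([[-1, 0, 1], [-s, 0, s], [-1, 0, 1]])  # isotropic kernel
    gx = ndimage.convolve(gray.astype(float), kx)
    gy = ndimage.convolve(gray.astype(float), kx.T)
    return np.hypot(gx, gy) > threshold

def canny_edges(gray, sigma=1.5):
    return feature.canny(gray.astype(float), sigma=sigma)

Both functions return binary edge maps whose similarity to the labelled training images can then be scored, as in the paper's testing stage.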
Research of flaw image collecting and processing technology based on multi-baseline stereo imaging
NASA Astrophysics Data System (ADS)
Yao, Yong; Zhao, Jiguang; Pang, Xiaoyan
2008-03-01
Aiming at practical issues in gun bore flaw image collection, such as demanding optical design, complex algorithms and precise technical requirements, the design framework of a 3-D image collecting and processing system based on multi-baseline stereo imaging is presented in this paper. The system mainly comprises a computer, an electrical control box, a stepping motor and a CCD camera, and it realizes image collection, stereo matching, 3-D information reconstruction and post-processing functions. As shown by theoretical analysis and experimental results, images collected by this system are precise, and the system can efficiently resolve the matching ambiguity produced by uniform or repeated textures. At the same time, the system offers faster measurement speed and higher measurement precision.
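The stereo matching stage of such a system is typically some variant of block matching along the epipolar line; a toy sum-of-absolute-differences version for one rectified pair (the window size and disparity range are illustrative):

import numpy as np

def sad_disparity(left, right, max_disp=32, win=5):
    # Brute-force SAD block matching between a rectified image pair.
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1].astype(float)
            costs = [np.abs(patch - right[y-half:y+half+1,
                                          x-d-half:x-d+half+1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

A multi-baseline system reduces exactly the ambiguity noted above by accumulating such matching costs over several baselines before picking the minimum, since repeated textures that confuse a single pair rarely agree across all pairs.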
NASA Astrophysics Data System (ADS)
Baskoro, Ario Sunar; Kabutomori, Masashi; Suga, Yasuo
An automatic welding system using Tungsten Inert Gas (TIG) welding with a vision sensor for welding of aluminum pipe was constructed. This research studies the intelligent welding process of aluminum alloy pipe 6063S-T5 in fixed position, with a moving welding torch and an AC welding machine. The monitoring system consists of a vision sensor using a charge-coupled device (CCD) camera to monitor the backside image of the molten pool. The captured image was processed to recognize the edge of the molten pool by an image processing algorithm. A neural network model for welding speed control was constructed to perform the process automatically. The experimental results show the effectiveness of the control system, confirmed by good detection of the molten pool and sound welds.
[Research on spatially modulated Fourier transform imaging spectrometer data processing method].
Huang, Min; Xiangli, Bin; Lü, Qun-Bo; Zhou, Jin-Song; Jing, Juan-Juan; Cui, Yan
2010-03-01
The Fourier transform imaging spectrometer is a new technique that has developed very rapidly over the past ten years. The data captured by a Fourier transform imaging spectrometer are indirect data that cannot be used directly; they need to be processed by various approaches, including data pretreatment, apodization, phase correction, FFT, and spectral radiometric calibration. No paper so far has been found that comprehensively introduces this processing chain. In the present paper, the authors give an effective method to process the interferogram data into spectral data; with this method good results can be obtained.
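In simplified form, the chain named above (pretreatment, apodization, FFT) can be sketched as follows; this toy version uses a Hann window and a magnitude spectrum in place of a full Mertz-style phase correction, and omits radiometric calibration:

import numpy as np

def interferogram_to_spectrum(interferogram):
    x = np.asarray(interferogram, dtype=float)
    x = x - x.mean()                 # pretreatment: remove the DC offset
    x = x * np.hanning(len(x))       # apodization suppresses ringing
    return np.abs(np.fft.rfft(x))    # magnitude stands in for phase correction

Radiometric calibration would then scale this raw spectrum against measurements of reference sources of known radiance.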
Low-cost digital image processing at the University of Oklahoma
NASA Technical Reports Server (NTRS)
Harrington, J. A., Jr.
1981-01-01
Computer assisted instruction in remote sensing at the University of Oklahoma involves two separate approaches and is dependent upon initial preprocessing of a LANDSAT computer compatible tape using software developed for an IBM 370/158 computer. In-house generated preprocessing algorithms permit students or researchers to select a subset of a LANDSAT scene for subsequent analysis using either general purpose statistical packages or color graphic image processing software developed for Apple II microcomputers. Procedures for preprocessing the data and for image analysis using either of the two approaches to low-cost LANDSAT data processing are described.
Raspberry Pi-powered imaging for plant phenotyping.
Tovar, Jose C; Hoyer, J Steen; Lin, Andy; Tielking, Allison; Callen, Steven T; Elizabeth Castillo, S; Miller, Michael; Tessman, Monica; Fahlgren, Noah; Carrington, James C; Nusinow, Dmitri A; Gehan, Malia A
2018-03-01
Image-based phenomics is a powerful approach to capture and quantify plant diversity. However, commercial platforms that make consistent image acquisition easy are often cost-prohibitive. To make high-throughput phenotyping methods more accessible, low-cost microcomputers and cameras can be used to acquire plant image data. We used low-cost Raspberry Pi computers and cameras to manage and capture plant image data. Detailed here are three different applications of Raspberry Pi-controlled imaging platforms for seed and shoot imaging. Images obtained from each platform were suitable for extracting quantifiable plant traits (e.g., shape, area, height, color) en masse using open-source image processing software such as PlantCV. This protocol describes three low-cost platforms for image acquisition that are useful for quantifying plant diversity. When coupled with open-source image processing tools, these imaging platforms provide viable low-cost solutions for incorporating high-throughput phenomics into a wide range of research programs.
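A minimal sketch of the kind of trait extraction described, not the PlantCV pipeline itself: segment the shoot by green dominance and report area, height and color in pixel units (the segmentation rule and margins are simplifying assumptions):

import numpy as np

def shoot_traits(rgb):
    # Crude green-dominance segmentation of a shoot against its background.
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    plant = (g > r + 10) & (g > b + 10)
    area = int(plant.sum())
    rows = np.where(plant.any(axis=1))[0]
    height = int(rows.max() - rows.min() + 1) if rows.size else 0
    mean_green = float(rgb[..., 1][plant].mean()) if area else 0.0
    return {"area_px": area, "height_px": height, "mean_green": mean_green}

Converting pixel units to physical units requires a calibration target of known size in the camera's field of view.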
Research on assessment and improvement method of remote sensing image reconstruction
NASA Astrophysics Data System (ADS)
Sun, Li; Hua, Nian; Yu, Yanbo; Zhao, Zhanping
2018-01-01
Remote sensing image quality assessment and improvement is an important part of image processing. Generally, the use of compressive sampling theory in a remote sensing imaging system can compress images while sampling, which improves efficiency. In this paper, a two-dimensional principal component analysis (2DPCA) method is proposed to reconstruct the remote sensing image and improve the quality of the compressed image; the reconstruction retains the useful information of the image and suppresses noise. Then, the factors influencing remote sensing image quality are analyzed, and evaluation parameters for quantitative assessment are introduced. On this basis, the quality of the reconstructed images is evaluated and the influence of the different factors on the reconstruction is analyzed, providing meaningful reference data for enhancing the quality of remote sensing images. The experimental results show that the evaluation results fit human visual perception, and the proposed method has good application value in the field of remote sensing image processing.
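For reference, 2DPCA in its common formulation builds an image covariance matrix from the image rows and reconstructs each image from its leading components, discarding the minor components that mostly carry noise. A minimal sketch (the number of retained components is an assumption):

import numpy as np

def reconstruct_2dpca(images, n_components=10):
    A = np.asarray(images, dtype=float)           # shape (M, h, w)
    mean = A.mean(axis=0)
    C = A - mean
    # Image covariance matrix G (w x w), averaged over all images.
    G = np.einsum('mij,mik->jk', C, C) / len(A)
    vals, vecs = np.linalg.eigh(G)                # ascending eigenvalues
    X = vecs[:, -n_components:]                   # top projection axes
    return mean + C @ X @ X.T                     # low-rank reconstruction

Because G is only w x w, the eigendecomposition stays cheap even for large image sets, which suits remote sensing workloads.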
A forensic science perspective on the role of images in crime investigation and reconstruction.
Milliet, Quentin; Delémont, Olivier; Margot, Pierre
2014-12-01
This article presents a global vision of images in forensic science. The proliferation of perspectives on the use of images throughout criminal investigations and the increasing demand for research on this topic seem to demand a forensic science-based analysis. In this study, the definitions of and concepts related to material traces are revisited and applied to images, and a structured approach is used to persuade the scientific community to extend and improve the use of images as traces in criminal investigations. Current research efforts focus on technical issues and evidence assessment. This article provides a sound foundation for rationalising and explaining the processes involved in the production of clues from trace images. For example, the mechanisms through which these visual traces become clues of presence or action are described. An extensive literature review of forensic image analysis emphasises the existing guidelines and knowledge available for answering investigative questions (who, what, where, when and how). However, complementary developments are still necessary to demystify many aspects of image analysis in forensic science, including how to review and select images or use them to reconstruct an event or assist intelligence efforts. The hypothetico-deductive reasoning pathway used to discover unknown elements of an event or crime can also help scientists understand the underlying processes involved in their decision making. An analysis of a single image in an investigative or probative context is used to demonstrate the highly informative potential of images as traces and/or clues. Research efforts should be directed toward formalising the extraction and combination of clues from images. An appropriate methodology is key to expanding the use of images in forensic science. Copyright © 2014 Forensic Science Society. Published by Elsevier Ireland Ltd. All rights reserved.
Molecular imaging promotes progress in orthopedic research.
Mayer-Kuckuk, Philipp; Boskey, Adele L
2006-11-01
Modern orthopedic research is directed towards the understanding of molecular mechanisms that determine development, maintenance and health of musculoskeletal tissues. In recent years, many genetic and proteomic discoveries have been made which necessitate investigation under physiological conditions in intact, living tissues. Molecular imaging can meet this demand and is, in fact, the only strategy currently available for noninvasive, quantitative, real-time biology studies in living subjects. In this review, techniques of molecular imaging are summarized, and applications to bone and joint biology are presented. The imaging modality most frequently used in the past was optical imaging, particularly bioluminescence and near-infrared fluorescence imaging. Alternate technologies including nuclear and magnetic resonance imaging were also employed. Orthopedic researchers have applied molecular imaging to murine models including transgenic mice to monitor gene expression, protein degradation, cell migration and cell death. Within the bone compartment, osteoblasts and their stem cells have been investigated, and the organic and mineral bone phases have been assessed. These studies addressed malignancy and injury as well as repair, including fracture healing and cell/gene therapy for skeletal defects. In the joints, molecular imaging has focused on the inflammatory and tissue destructive processes that cause arthritis. As described in this review, the feasibility of applying molecular imaging to numerous areas of orthopedic research has been demonstrated and will likely result in an increase in research dedicated to this powerful strategy. Molecular imaging holds great promise in the future for preclinical orthopedic research as well as next-generation clinical musculoskeletal diagnostics.
NASA Technical Reports Server (NTRS)
1995-01-01
The Interactive Data Language (IDL), developed by Research Systems, Inc., is a tool for scientists to investigate their data without having to write a custom program for each study. IDL is based on the Mariner Mars spectral editor (MMED) developed for studies from NASA's Mars spacecraft flights. The company has also developed the Environment for Visualizing Images (ENVI), an image processing system for easily analyzing remotely sensed data, written in IDL. The Visible Human CD, another Research Systems product, is the first complete digital reference of photographic images for exploring human anatomy.
IMAGESEER - IMAGEs for Education and Research
NASA Technical Reports Server (NTRS)
Le Moigne, Jacqueline; Grubb, Thomas; Milner, Barbara
2012-01-01
IMAGESEER is a new Web portal that brings easy access to NASA image data for non-NASA researchers, educators, and students. The IMAGESEER Web site and database are specifically designed to be utilized by the university community, to enable teaching image processing (IP) techniques on NASA data, as well as to provide reference benchmark data to validate new IP algorithms. Along with the data and a Web user interface front-end, basic knowledge of the application domains, benchmark information, and specific NASA IP challenges (or case studies) are provided.
2003-09-30
Physical Modeling for Processing Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Hyperspectral Data - Dr. Allen H.-L. Huang...ssec.wisc.edu Award Number: N000140110850 Grant Number: 144KE70 http://www.ssec.wisc.edu/gifts/navy/ LONG-TERM GOALS This Office of Naval...objective of this DoD research effort is to develop and demonstrate a fully functional GIFTS hyperspectral data processing system with the potential for a
NASA Astrophysics Data System (ADS)
Kuntoro, Hadiyan Yusuf; Hudaya, Akhmad Zidni; Dinaryanto, Okto; Majid, Akmal Irfan; Deendarlianto
2016-06-01
Due to the importance of two-phase flow research for industrial safety analysis, many researchers have developed various methods and techniques to study two-phase flow phenomena in industrial cases, such as in the chemical, petroleum and nuclear industries. One of the developing methods and techniques is the image processing technique. This technique is widely used in two-phase flow research due to its non-intrusive capability to process a large amount of visualization data containing many complexities. Moreover, this technique allows capturing direct visual information about the flow that is difficult to capture by other methods and techniques. The main objective of this paper is to present an improved image processing algorithm, developed from a preceding algorithm, for stratified flow cases. The present algorithm can measure the film thickness (hL) of stratified flow as well as the geometrical properties of the interfacial waves with lower processing time and random-access memory (RAM) usage than the preceding algorithm. The measurement results are also intended to build a high quality database of stratified flow, which is currently scanty. In the present work, the measurement results were in satisfactory agreement with previous works.
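The core film thickness measurement in such an algorithm can be sketched as a per-column search for the gas-liquid interface in a thresholded backlit image; a minimal version (the threshold, the dark-liquid assumption and the calibration factor are all assumptions):

import numpy as np

def film_thickness(gray, pipe_bottom_row, mm_per_px, threshold=128):
    # Assumes a backlit image in which the liquid film appears dark;
    # the interface is the highest dark pixel in each column.
    liquid = gray < threshold
    h, w = gray.shape
    h_L = np.zeros(w)
    for x in range(w):
        rows = np.where(liquid[:, x])[0]
        if rows.size:
            h_L[x] = (pipe_bottom_row - rows.min()) * mm_per_px
    return h_L   # film thickness along the pipe axis, in mm

Wave properties such as amplitude and frequency then follow from the h_L signal over a sequence of frames.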
ClearedLeavesDB: an online database of cleared plant leaf images.
Das, Abhiram; Bucksch, Alexander; Price, Charles A; Weitz, Joshua S
2014-03-28
Leaf vein networks are critical to both the structure and function of leaves. A growing body of recent work has linked leaf vein network structure to the physiology, ecology and evolution of land plants. In the process, multiple institutions and individual researchers have assembled collections of cleared leaf specimens in which vascular bundles (veins) are rendered visible. In an effort to facilitate analysis and digitally preserve these specimens, high-resolution images are usually created, either of entire leaves or of magnified leaf subsections. In a few cases, collections of digital images of cleared leaves are available for use online. However, these collections do not share a common platform nor is there a means to digitally archive cleared leaf images held by individual researchers (in addition to those held by institutions). Hence, there is a growing need for a digital archive that enables online viewing, sharing and disseminating of cleared leaf image collections held by both institutions and individual researchers. The Cleared Leaf Image Database (ClearedLeavesDB), is an online web-based resource for a community of researchers to contribute, access and share cleared leaf images. ClearedLeavesDB leverages resources of large-scale, curated collections while enabling the aggregation of small-scale collections within the same online platform. ClearedLeavesDB is built on Drupal, an open source content management platform. It allows plant biologists to store leaf images online with corresponding meta-data, share image collections with a user community and discuss images and collections via a common forum. We provide tools to upload processed images and results to the database via a web services client application that can be downloaded from the database. We developed ClearedLeavesDB, a database focusing on cleared leaf images that combines interactions between users and data via an intuitive web interface. The web interface allows storage of large collections and integrates with leaf image analysis applications via an open application programming interface (API). The open API allows uploading of processed images and other trait data to the database, further enabling distribution and documentation of analyzed data within the community. The initial database is seeded with nearly 19,000 cleared leaf images representing over 40 GB of image data. Extensible storage and growth of the database is ensured by using the data storage resources of the iPlant Discovery Environment. ClearedLeavesDB can be accessed at http://clearedleavesdb.org.
Almeida, Jorge; Amaral, Lénia; Garcea, Frank E; Aguiar de Sousa, Diana; Xu, Shan; Mahon, Bradford Z; Martins, Isabel Pavão
2018-05-24
A major principle of organization of the visual system is between a dorsal stream that processes visuomotor information and a ventral stream that supports object recognition. Most research has focused on dissociating processing across these two streams. Here we focus on how the two streams interact. We tested neurologically-intact and impaired participants in an object categorization task over two classes of objects that depend on processing within both streams-hands and tools. We measured how unconscious processing of images from one of these categories (e.g., tools) affects the recognition of images from the other category (i.e., hands). Our findings with neurologically-intact participants demonstrated that processing an image of a hand hampers the subsequent processing of an image of a tool, and vice versa. These results were not present in apraxic patients (N = 3). These findings suggest local and global inhibitory processes working in tandem to co-register information across the two streams.
Image Reconstruction is a New Frontier of Machine Learning.
Wang, Ge; Ye, Jong Chul; Mueller, Klaus; Fessler, Jeffrey A
2018-06-01
Over the past several years, machine learning, or more generally artificial intelligence, has generated overwhelming research interest and attracted unprecedented public attention. As tomographic imaging researchers, we share the excitement from our imaging perspective [item 1) in the Appendix], and organized this special issue dedicated to the theme of "Machine learning for image reconstruction." This special issue is a sister issue of the special issue published in May 2016 of this journal with the theme "Deep learning in medical imaging" [item 2) in the Appendix]. While the previous special issue targeted medical image processing/analysis, this special issue focuses on data-driven tomographic reconstruction. These two special issues are highly complementary, since image reconstruction and image analysis are two of the main pillars of medical imaging. Together we cover the whole workflow of medical imaging: from tomographic raw data/features to reconstructed images and then to extracted diagnostic features/readings.
Analysis of objects in binary images. M.S. Thesis - Old Dominion Univ.
NASA Technical Reports Server (NTRS)
Leonard, Desiree M.
1991-01-01
Digital image processing techniques are typically used to produce improved digital images through the application of successive enhancement techniques to a given image or to generate quantitative data about the objects within that image. In support of and to assist researchers in a wide range of disciplines, e.g., interferometry, heavy rain effects on aerodynamics, and structure recognition research, it is often desirable to count objects in an image and compute their geometric properties. Therefore, an image analysis application package, focusing on a subset of image analysis techniques used for object recognition in binary images, was developed. This report describes the techniques and algorithms utilized in three main phases of the application and are categorized as: image segmentation, object recognition, and quantitative analysis. Appendices provide supplemental formulas for the algorithms employed as well as examples and results from the various image segmentation techniques and the object recognition algorithm implemented.
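The counting-and-measuring workflow described here maps directly onto connected-component labeling; a minimal sketch with scipy (the default 4-connectivity is an assumption, and 8-connectivity would need an explicit structuring element):

import numpy as np
from scipy import ndimage

def analyze_binary(binary):
    # Label connected components (default 4-connectivity in 2D).
    labels, count = ndimage.label(binary)
    index = np.arange(1, count + 1)
    areas = ndimage.sum(binary, labels, index=index)       # pixels per object
    centroids = ndimage.center_of_mass(binary, labels, index)
    return count, areas, centroids

Further geometric properties, such as perimeter or bounding box, can be accumulated per label in the same pass.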
ERIC Educational Resources Information Center
Vogelaar, Robert J.
2005-01-01
In this project a product to aid educational leaders in the process of communicating in crisis situations is presented. The product was created and received a formative evaluation using an educational research and development methodology. Ultimately, an administrative training course that utilized an Image Repair Situational Theory was developed.…
Santiesteban, Daniela Y; Kubelick, Kelsey; Dhada, Kabir S; Dumani, Diego; Suggs, Laura; Emelianov, Stanislav
2016-03-01
The past three decades have seen numerous advances in tissue engineering and regenerative medicine (TERM) therapies. However, despite the successes there is still much to be done before TERM therapies become commonplace in clinic. One of the main obstacles is the lack of knowledge regarding complex tissue engineering processes. Imaging strategies, in conjunction with exogenous contrast agents, can aid in this endeavor by assessing in vivo therapeutic progress. The ability to uncover real-time treatment progress will help shed light on the complex tissue engineering processes and lead to development of improved, adaptive treatments. More importantly, the utilized exogenous contrast agents can double as therapeutic agents. Proper use of these Monitoring/Imaging and Regenerative Agents (MIRAs) can help increase TERM therapy successes and allow for clinical translation. While other fields have exploited similar particles for combining diagnostics and therapy, MIRA research is still in its beginning stages with much of the current research being focused on imaging or therapeutic applications, separately. Advancing MIRA research will have numerous impacts on achieving clinical translations of TERM therapies. Therefore, it is our goal to highlight current MIRA progress and suggest future research that can lead to effective TERM treatments.
PACS in an intensive care unit: results from a randomized controlled trial
NASA Astrophysics Data System (ADS)
Bryan, Stirling; Weatherburn, Gwyneth C.; Watkins, Jessamy; Walker, Samantha; Wright, Carl; Waters, Brian; Evans, Jeff; Buxton, Martin J.
1998-07-01
The objective of this research was to assess the costs and benefits associated with the introduction of a small PACS system into an intensive care unit (ICU) at a district general hospital in north Wales. The research design adopted for this study was a single center randomized controlled trial (RCT). Patients were randomly allocated either to a trial arm where their x-ray imaging was solely film-based or to a trial arm where their x-ray imaging was solely PACS based. Benefit measures included examination-based process measures, such as image turn-round time, radiation dose and image unavailability; and patient-related process measures, which included adverse events and length of stay. The measurement of costs focused on additional 'radiological' costs and the costs of patient management. The study recruited 600 patients. The key findings from this study were that the installation of PACS was associated with important benefits in terms of image availability, and important costs in both monetary and radiation dose terms. PACS-related improvements in terms of more timely 'clinical actions' were not found. However, the qualitative aspect of the research found that clinicians were advocates of the technology and believed that an important benefit of PACS related to improved image availability.
Platform-independent software for medical image processing on the Internet
NASA Astrophysics Data System (ADS)
Mancuso, Michael E.; Pathak, Sayan D.; Kim, Yongmin
1997-05-01
We have developed a software tool for image processing over the Internet. The tool is a general purpose, easy to use, flexible, platform-independent image processing software package with functions most commonly used in medical image processing. It provides for processing of medical images located either remotely on the Internet or locally. The software was written in Java, the new programming language developed by Sun Microsystems. It was compiled and tested using Microsoft's Visual Java 1.0 and Microsoft's Just in Time Compiler 1.00.6211. The software is simple and easy to use. In order to use the tool, the user needs to download the software from our site before running it using any Java interpreter, such as those supplied by Sun, Symantec, Borland or Microsoft. Future versions of the operating systems supplied by Sun, Microsoft, Apple, IBM, and others will include Java interpreters. The software is then able to access and process any image on the Internet or on the local computer. Using a 512 X 512 X 8-bit image, a 3 X 3 convolution took 0.88 seconds on an Intel Pentium Pro PC running at 200 MHz with 64 Mbytes of memory. A window/level operation took 0.38 seconds while a 3 X 3 median filter took 0.71 seconds. These performance numbers demonstrate the feasibility of using this software interactively on desktop computers. Our software tool supports various image processing techniques commonly used in medical image processing and can run without the need for any specialized hardware. It can become an easily accessible resource over the Internet to promote the learning and understanding of image processing algorithms. It could also facilitate the sharing of medical image databases and collaboration amongst researchers and clinicians, regardless of location.
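Of the operations timed above, window/level is the simplest to state precisely: a linear remapping of one gray-level interval onto the display range. A sketch in NumPy rather than the paper's Java (the clip bounds follow the usual definition):

import numpy as np

def window_level(img, window, level):
    # Map [level - window/2, level + window/2] linearly onto 0..255.
    lo = level - window / 2.0
    out = (img.astype(float) - lo) * (255.0 / window)
    return np.clip(out, 0, 255).astype(np.uint8)

For example, a CT soft-tissue display might use window=400 and level=40 (Hounsfield units), though appropriate values depend on the modality.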
Karimi, Davood; Ward, Rabab K
2016-10-01
Image models are central to all image processing tasks. The great advancements in digital image processing would not have been made possible without powerful models which, themselves, have evolved over time. In the past decade, "patch-based" models have emerged as one of the most effective models for natural images. Patch-based methods have outperformed other competing methods in many image processing tasks. These developments have come at a time when greater availability of powerful computational resources and growing concerns over the health risks of the ionizing radiation encourage research on image processing algorithms for computed tomography (CT). The goal of this paper is to explain the principles of patch-based methods and to review some of their recent applications in CT. We first review the central concepts in patch-based image processing and explain some of the state-of-the-art algorithms, with a focus on aspects that are more relevant to CT. Then, we review some of the recent application of patch-based methods in CT. Patch-based methods have already transformed the field of image processing, leading to state-of-the-art results in many applications. More recently, several studies have proposed patch-based algorithms for various image processing tasks in CT, from denoising and restoration to iterative reconstruction. Although these studies have reported good results, the true potential of patch-based methods for CT has not been yet appreciated. Patch-based methods can play a central role in image reconstruction and processing for CT. They have the potential to lead to substantial improvements in the current state of the art.
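As one concrete instance of the patch-based models reviewed here, non-local means denoising replaces each pixel by a weighted average of pixels with similar surrounding patches. A sketch using scikit-image on simulated noisy data (the parameter values are assumptions, not values from the review):

import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

rng = np.random.default_rng(0)
clean = rng.random((128, 128))
noisy = np.clip(clean + 0.08 * rng.standard_normal(clean.shape), 0, 1)

sigma = float(np.mean(estimate_sigma(noisy)))   # rough noise estimate
denoised = denoise_nl_means(noisy, patch_size=5, patch_distance=6,
                            h=1.15 * sigma, sigma=sigma, fast_mode=True)

In CT the same idea appears both as a post-processing filter and as a patch-based prior inside iterative reconstruction.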
Traumatic Brain Injury Diffusion Magnetic Resonance Imaging Research Roadmap Development Project
2011-10-01
promising technology on the horizon is Diffusion Tensor Imaging (DTI). Diffusion tensor imaging (DTI) is a magnetic resonance imaging (MRI)-based...in the brain. The potential for DTI to improve our understanding of TBI has not been fully explored and challenges associated with non-existent...processing tools, quality control standards, and a shared image repository. The recommendations will be disseminated and pilot tested. A DTI of TBI
Downie, H F; Adu, M O; Schmidt, S; Otten, W; Dupuy, L X; White, P J; Valentine, T A
2015-07-01
The morphology of roots and root systems influences the efficiency by which plants acquire nutrients and water, anchor themselves and provide stability to the surrounding soil. Plant genotype and the biotic and abiotic environment significantly influence root morphology, growth and ultimately crop yield. The challenge for researchers interested in phenotyping root systems is, therefore, not just to measure roots and link their phenotype to the plant genotype, but also to understand how the growth of roots is influenced by their environment. This review discusses progress in quantifying root system parameters (e.g. in terms of size, shape and dynamics) using imaging and image analysis technologies and also discusses their potential for providing a better understanding of root:soil interactions. Significant progress has been made in image acquisition techniques, however trade-offs exist between sample throughput, sample size, image resolution and information gained. All of these factors impact on downstream image analysis processes. While there have been significant advances in computation power, limitations still exist in statistical processes involved in image analysis. Utilizing and combining different imaging systems, integrating measurements and image analysis where possible, and amalgamating data will allow researchers to gain a better understanding of root:soil interactions. © 2014 John Wiley & Sons Ltd.
Towards Portable Large-Scale Image Processing with High-Performance Computing.
Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A
2018-05-03
High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was native (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address these issues, we describe herein recent innovations using containerization techniques with XNAT/DAX to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from the system level to the application level, (2) flexible and dynamic software development and expansion, and (3) scalable spider deployment compatible with HPC clusters and local workstations.
ERIC Educational Resources Information Center
Lee, Il-Sun; Byeon, Jung-Ho; Kim, Young-shin; Kwon, Yong-Ju
2014-01-01
The purpose of this study was to develop a model for measuring experimental design ability based on functional magnetic resonance imaging (fMRI) during biological inquiry. More specifically, the researchers developed an experimental design task that measures experimental design ability. Using the developed experimental design task, they measured…
Choy, Garry; Choyke, Peter; Libutti, Steven K
2003-10-01
Recently, there has been tremendous interest in developing techniques such as MRI, micro-CT, micro-PET, and SPECT to image function and processes in small animals. These technologies offer deep tissue penetration and high spatial resolution, but compared with noninvasive small animal optical imaging, these techniques are very costly and time consuming to implement. Optical imaging is cost-effective, rapid, easy to use, and can be readily applied to studying disease processes and biology in vivo. In vivo optical imaging is the result of a coalescence of technologies from chemistry, physics, and biology. The development of highly sensitive light detection systems has allowed biologists to use imaging in studying physiological processes. Over the last few decades, biochemists have also worked to isolate and further develop optical reporters such as GFP, luciferase, and cyanine dyes. This article reviews the common types of fluorescent and bioluminescent optical imaging, the typical system platforms and configurations, and the applications in the investigation of cancer biology.
An advanced software suite for the processing and analysis of silicon luminescence images
NASA Astrophysics Data System (ADS)
Payne, D. N. R.; Vargas, C.; Hameiri, Z.; Wenham, S. R.; Bagnall, D. M.
2017-06-01
Luminescence imaging is a versatile characterisation technique used for a broad range of research and industrial applications, particularly in the field of photovoltaics, where photoluminescence and electroluminescence imaging is routinely carried out for materials analysis and quality control. Luminescence imaging can reveal a wealth of material information, as detailed in extensive literature, yet these techniques are often only used qualitatively instead of being utilised to their full potential. Part of the reason for this is the time and effort required for image processing and analysis in order to convert image data to more meaningful results. In this work, a custom-built, MATLAB-based software suite is presented which aims to dramatically simplify luminescence image processing and analysis. The suite includes four individual programs which can be used in isolation or in conjunction to achieve a broad array of functionality, including but not limited to, point spread function determination and deconvolution, automated sample extraction, image alignment and comparison, minority carrier lifetime calibration and iron impurity concentration mapping.
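For the point spread function determination and deconvolution the suite provides, a minimal sketch follows (Python with scikit-image as an assumed stand-in for the suite's MATLAB code): a Gaussian stand-in PSF and Richardson-Lucy deblurring.

```python
import numpy as np
from skimage import restoration

def gaussian_psf(size=15, sigma=2.0):
    # Simple Gaussian kernel as a stand-in for a measured PSF.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def sharpen_luminescence(image, psf, iterations=30):
    """Richardson-Lucy deconvolution to undo PSF blur in a
    luminescence image normalized to [0, 1]."""
    return restoration.richardson_lucy(image, psf, iterations)
```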
Simulations for Improved Imaging of Faint Objects at Maui Space Surveillance Site
NASA Astrophysics Data System (ADS)
Holmes, R.; Roggemann, M.; Werth, M.; Lucas, J.; Thompson, D.
A detailed wave-optics simulation is used in conjunction with advanced post-processing algorithms to explore the trade space between image post-processing and adaptive optics for improved imaging of low signal-to-noise ratio (SNR) targets. Target-based guidestars are required for imaging of most active Earth-orbiting satellites because of restrictions on using laser-backscatter-based guidestars in the direction of such objects. With such target-based guidestars and Maui conditions, it is found that significant reductions in adaptive optics actuator and subaperture density can result in improved imaging of fainter objects. Simulation indicates that elimination of adaptive optics produces sub-optimal results for all of the faint-object cases considered. This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA). The views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.
Jensen, Chad D; Duraccio, Kara M; Barnett, Kimberly A; Stevens, Kimberly S
2016-12-01
Research examining effects of visual food cues on appetite-related brain processes and eating behavior has proliferated. Recently, investigators have developed food image databases for use across experimental studies examining appetite and eating behavior. The food-pics image database represents a standardized, freely available image library originally validated in a large sample composed primarily of adults. The suitability of the images for use with adolescents has not been investigated. The aim of the present study was to evaluate the appropriateness of the food-pics image library for appetite and eating research with adolescents. Three hundred and seven adolescents (ages 12-17) provided ratings of recognizability, palatability, and desire to eat for images from the food-pics database. Moreover, participants rated the caloric content (high vs. low) and healthiness (healthy vs. unhealthy) of each image. Adolescents rated approximately 75% of the food images as recognizable. Approximately 65% of recognizable images were correctly categorized as high vs. low calorie and 63% were correctly classified as healthy vs. unhealthy in 80% or more of image ratings. These results suggest that a smaller subset of the food-pics image database is appropriate for use with adolescents. With some modifications to the included images, the food-pics image database appears to be appropriate for use in experimental appetite and eating-related research conducted with adolescents. Copyright © 2016 Elsevier Ltd. All rights reserved.
Recent Developments in Molecular Brain Imaging of Neuropsychiatric Disorders.
Slifstein, Mark; Abi-Dargham, Anissa
2017-01-01
Molecular imaging with PET or SPECT has been an important research tool in psychiatry for as long as these modalities have been available. Here, we discuss two areas of neuroimaging relevant to current psychiatry research. The first is the use of imaging to study neurotransmission. We discuss the use of pharmacologic probes to induce changes in levels of neurotransmitters that can be inferred through their effects on outcome measures of imaging experiments, from their historical origins focusing on dopamine transmission through recent developments involving serotonin, GABA, and glutamate. Next, we examine imaging of neuroinflammation in the context of psychiatry. Imaging markers of neuroinflammation have been studied extensively in other areas of brain research, but they have more recently attracted interest in psychiatry research, based on accumulating evidence that there may be an inflammatory component to some psychiatric conditions. Furthermore, new probes are under development that would allow unprecedented insights into cellular processes. In summary, molecular imaging will continue to offer great potential as a unique tool to further our understanding of brain function in health and disease. Copyright © 2017 Elsevier Inc. All rights reserved.
Visualizing Chemistry: The Progress and Promise of Advanced Chemical Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Committee on Revealing Chemistry Through Advanced Chemical Imaging
2006-09-01
The field of chemical imaging can provide detailed structural, functional, and applicable information about chemistry and chemical engineering phenomena that have enormous impacts on medicine, materials, and technology. In recognizing the potential for more research development in the field of chemical imaging, the National Academies was asked by the National Science Foundation, Department of Energy, U.S. Army, and National Cancer Institute to complete a study that would review the current state of molecular imaging technology, point to promising future developments and their applications, and suggest a research and educational agenda to enable breakthrough improvements in the ability to image molecular processes simultaneously in multiple physical dimensions as well as time. The study resulted in a consensus report that provides guidance for a focused research and development program in chemical imaging and identifies research needs and possible applications of imaging technologies that can provide the breakthrough knowledge in chemistry, materials science, biology, and engineering for which we should strive. Public release of this report is expected in early October.
NASA Technical Reports Server (NTRS)
2003-01-01
In order to rapidly and efficiently grow crystals, tools were needed to automatically identify and analyze the growing process of protein crystals. To meet this need, Diversified Scientific, Inc. (DSI), with the support of a Small Business Innovation Research (SBIR) contract from NASA's Marshall Space Flight Center, developed CrystalScore(trademark), the first automated image acquisition, analysis, and archiving system designed specifically for the macromolecular crystal growing community. It offers automated hardware control, image and data archiving, image processing, a searchable database, and surface plotting of experimental data. CrystalScore is currently being used by numerous pharmaceutical companies and academic and nonprofit research centers. DSI, located in Birmingham, Alabama, was awarded the patent "Method for acquiring, storing, and analyzing crystal images" on March 4, 2003. Another DSI product made possible by Marshall SBIR funding is VaporPro(trademark), a unique, comprehensive system that allows for the automated control of vapor diffusion for crystallization experiments.
Similarity analysis between quantum images
NASA Astrophysics Data System (ADS)
Zhou, Ri-Gui; Liu, XingAo; Zhu, Changming; Wei, Lai; Zhang, Xiafen; Ian, Hou
2018-06-01
Similarity analysis between quantum images is essential in quantum image processing, providing a foundation for other fields such as quantum image matching and quantum pattern recognition. In this paper, a quantum scheme based on a novel quantum image representation and the quantum amplitude amplification algorithm is proposed. At the end of the paper, three examples and simulation experiments show that the measurement result must be 0 when the two images are the same, and has a high probability of being 1 when the two images are different.
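The same-gives-0 behavior can be illustrated classically. The NumPy sketch below simulates the standard swap-test measurement probability for amplitude-encoded images; it is only an analogy for the underlying overlap measurement, since the paper's novel representation and its amplitude amplification step (which boosts the probability of measuring 1 for different images) are not reproduced here.

```python
import numpy as np

def swap_test_p1(img_a, img_b):
    """Amplitude-encode two images as normalized state vectors and
    return the swap-test probability of measuring 1:
    P(1) = (1 - |<a|b>|^2) / 2, which is exactly 0 for identical images."""
    a = img_a.ravel().astype(float)
    b = img_b.ravel().astype(float)
    a /= np.linalg.norm(a)
    b /= np.linalg.norm(b)
    fidelity = np.dot(a, b) ** 2
    return (1.0 - fidelity) / 2.0
```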
Visual Communications And Image Processing
NASA Astrophysics Data System (ADS)
Hsing, T. Russell; Tzou, Kou-Hu
1989-07-01
This special issue on Visual Communications and Image Processing contains 14 papers that cover a wide spectrum in this fast growing area. For the past few decades, researchers and scientists have devoted their efforts to these fields. Through this long-lasting devotion, we witness today the growing popularity of low-bit-rate video as a convenient tool for visual communication. We also see the integration of high-quality video into broadband digital networks. Today, with more sophisticated processing, clearer and sharper pictures are being restored from blurring and noise. Also, thanks to the advances in digital image processing, even a PC-based system can be built to recognize highly complicated Chinese characters at the speed of 300 characters per minute. This special issue can be viewed as a milestone of visual communications and image processing on its journey to eternity. It presents some overviews on advanced topics as well as some new development in specific subjects.
Yang, Deshan; Brame, Scott; El Naqa, Issam; Aditya, Apte; Wu, Yu; Murty Goddu, S.; Mutic, Sasa; Deasy, Joseph O.; Low, Daniel A.
2011-01-01
Purpose: Recent years have witnessed tremendous progress in image-guided radiotherapy technology and a growing interest in the possibilities for adapting treatment planning and delivery over the course of treatment. One obstacle faced by the research community has been the lack of a comprehensive open-source software toolkit dedicated to adaptive radiotherapy (ART). To address this need, the authors have developed a software suite called the Deformable Image Registration and Adaptive Radiotherapy Toolkit (DIRART). Methods: DIRART is an open-source toolkit developed in MATLAB. It is designed in an object-oriented style with a focus on user-friendliness, features, and flexibility. It contains four classes of DIR algorithms, including the newer inverse consistency algorithms that provide consistent displacement vector fields in both directions. It also contains common ART functions, an integrated graphical user interface, a variety of visualization and image-processing features, dose metric analysis functions, and interface routines. These interface routines make DIRART a powerful complement to the Computational Environment for Radiotherapy Research (CERR) and popular image-processing toolkits such as ITK. Results: DIRART provides a set of image processing/registration algorithms and postprocessing functions to facilitate the development and testing of DIR algorithms. It also offers a good range of options for DIR results visualization, evaluation, and validation. Conclusions: By exchanging data with treatment planning systems via DICOM-RT files and CERR, and by bringing image registration algorithms closer to radiotherapy applications, DIRART is potentially a convenient and flexible platform that may facilitate ART and DIR research. PMID:21361176
SIproc: an open-source biomedical data processing platform for large hyperspectral images.
Berisha, Sebastian; Chang, Shengyuan; Saki, Sam; Daeinejad, Davar; He, Ziqi; Mankar, Rupali; Mayerich, David
2017-04-10
There has recently been significant interest within the vibrational spectroscopy community to apply quantitative spectroscopic imaging techniques to histology and clinical diagnosis. However, many of the proposed methods require collecting spectroscopic images that have a similar region size and resolution to the corresponding histological images. Since spectroscopic images contain significantly more spectral samples than traditional histology, the resulting data sets can approach hundreds of gigabytes to terabytes in size. This makes them difficult to store and process, and the tools available to researchers for handling large spectroscopic data sets are limited. Fundamental mathematical tools, such as MATLAB, Octave, and SciPy, are extremely powerful but require that the data be stored in fast memory. This memory limitation becomes impractical for even modestly sized histological images, which can be hundreds of gigabytes in size. In this paper, we propose an open-source toolkit designed to perform out-of-core processing of hyperspectral images. By taking advantage of graphical processing unit (GPU) computing combined with adaptive data streaming, our software alleviates common workstation memory limitations while achieving better performance than existing applications.
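As a toy illustration of the out-of-core idea (not SIproc's actual C++/GPU API; the raw-binary file layout and function name are assumptions), the Python sketch below memory-maps a float32 hyperspectral cube and streams it in row chunks so memory use stays bounded regardless of file size.

```python
import numpy as np

def band_means_out_of_core(path, shape, chunk_rows=1024):
    """Compute per-band means of a (rows, cols, bands) float32 cube
    stored as a raw binary file, reading only chunk_rows rows at a time."""
    cube = np.memmap(path, dtype=np.float32, mode='r', shape=shape)
    rows, cols, bands = shape
    acc = np.zeros(bands, dtype=np.float64)
    for r0 in range(0, rows, chunk_rows):
        chunk = np.asarray(cube[r0:r0 + chunk_rows])  # pulls only this slice off disk
        acc += chunk.reshape(-1, bands).sum(axis=0)
    return acc / (rows * cols)
```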
Quantitative imaging features: extension of the oncology medical image database
NASA Astrophysics Data System (ADS)
Patel, M. N.; Looney, P. T.; Young, K. C.; Halling-Brown, M. D.
2015-03-01
Radiological imaging is fundamental within the healthcare industry and has become routinely adopted for diagnosis, disease monitoring and treatment planning. With the advent of digital imaging modalities and the rapid growth in both diagnostic and therapeutic imaging, the ability to harness this large influx of data is of paramount importance. The Oncology Medical Image Database (OMI-DB) was created to provide a centralized, fully annotated dataset for research. The database contains both processed and unprocessed images, associated data, and annotations and, where applicable, expert-determined ground truths describing features of interest. Medical imaging provides the ability to detect and localize many changes that are important to determine whether a disease is present or a therapy is effective by depicting alterations in anatomic, physiologic, biochemical or molecular processes. Quantitative imaging features are sensitive, specific, accurate and reproducible imaging measures of these changes. Here, we describe an extension to the OMI-DB whereby a range of imaging features and descriptors are pre-calculated using a high-throughput approach. The ability to calculate multiple imaging features and data from the acquired images would be valuable and facilitate further research applications investigating detection, prognosis, and classification. The resultant data store contains more than 10 million quantitative features as well as features derived from CAD predictions. These data can be used to build predictive models to aid image classification, treatment response assessment as well as to identify prognostic imaging biomarkers.
NASA Astrophysics Data System (ADS)
Montoya, Gustavo; Valecillos, María; Romero, Carlos; Gonzáles, Dosinda
2009-11-01
In the present research, a digital image processing-based automated algorithm was developed to determine the phase height, holdup, and statistical distribution of drop size in a two-phase water-air system using pipes with 0°, 10°, and 90° of inclination. Digital images were acquired with a high-speed camera (up to 4500 fps), using equipment that consists of three acrylic pipes with diameters of 1.905, 3.175, and 4.445 cm. Each pipe is arranged in two sections of 8 m in length. Various flow patterns were visualized for different superficial velocities of water and air. Finally, using the image processing program designed in MATLAB/Simulink, the captured images were processed to establish the parameters mentioned above. The image processing algorithm is based on frequency-domain analysis of the source pictures, which finds the phase interface as the edge between water and air using a Sobel filter that extracts the high-frequency components of the image. The drop size was found by calculating the Feret diameter. Three flow patterns were observed: annular, ST, and ST&MI.
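The interface-detection step lends itself to a short illustration. The sketch below (Python/SciPy, an assumed stand-in for the MATLAB/Simulink program) finds, for each image column, the row with the strongest vertical Sobel gradient as the water-air interface; the actual algorithm described above also works in the frequency domain.

```python
import numpy as np
from scipy import ndimage

def water_air_interface(gray):
    """Approximate the water-air interface: per column, the row of
    maximum vertical Sobel gradient magnitude."""
    gy = ndimage.sobel(gray.astype(float), axis=0)  # vertical intensity gradient
    return np.argmax(np.abs(gy), axis=0)            # one interface row per column
```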
Space-based optical image encryption.
Chen, Wen; Chen, Xudong
2010-12-20
In this paper, we propose a new method based on a three-dimensional (3D) space-based strategy for optical image encryption. The two-dimensional (2D) processing of a plaintext in conventional optical encryption methods is extended to 3D space-based processing. Each pixel of the plaintext is considered as one particle in the proposed space-based optical image encryption, and the diffraction of all particles forms an object wave in phase-shifting digital holography. The effectiveness and advantages of the proposed method are demonstrated by numerical results. The proposed method provides a new optical encryption strategy instead of the conventional 2D processing, and may open up a new research perspective for optical image encryption.
Processing Digital Imagery to Enhance Perceptions of Realism
NASA Technical Reports Server (NTRS)
Woodell, Glenn A.; Jobson, Daniel J.; Rahman, Zia-ur
2003-01-01
Multi-scale retinex with color restoration (MSRCR) is a method of processing digital image data based on Edwin Land's retinex (retina + cortex) theory of human color vision. An outgrowth of basic scientific research and its application to NASA's remote-sensing mission, MSRCR is embodied in a general-purpose algorithm that greatly improves the perception of visual realism and the quantity and quality of perceived information in a digitized image. In addition, the MSRCR algorithm includes provisions for automatic corrections to accelerate and facilitate what could otherwise be a tedious image-editing process. The MSRCR algorithm has been, and is expected to continue to be, the basis for development of commercial image-enhancement software designed to extend and refine its capabilities for diverse applications.
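A minimal sketch of the multi-scale retinex core follows (Python/SciPy; the color-restoration step, the "CR" in MSRCR, and NASA's production details are omitted): the output is the average over several Gaussian surround scales of log(image) minus log(surround), stretched back to display range.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(channel, sigmas=(15, 80, 250)):
    """Average of single-scale retinex outputs, log(I) - log(G_sigma * I),
    over several surround scales; the sigmas are illustrative."""
    x = channel.astype(float) + 1.0  # offset avoids log(0)
    msr = np.zeros_like(x)
    for s in sigmas:
        msr += np.log(x) - np.log(gaussian_filter(x, s) + 1.0)
    msr /= len(sigmas)
    msr = (msr - msr.min()) / (msr.max() - msr.min() + 1e-12)
    return (255 * msr).astype(np.uint8)
```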
MIRIADS: miniature infrared imaging applications development system description and operation
NASA Astrophysics Data System (ADS)
Baxter, Christopher R.; Massie, Mark A.; McCarley, Paul L.; Couture, Michael E.
2001-10-01
A cooperative effort between the U.S. Air Force Research Laboratory, Nova Research, Inc., the Raytheon Infrared Operations (RIO) and Optics 1, Inc. has successfully produced a miniature infrared camera system that offers significant real-time signal and image processing capabilities by virtue of its modular design. This paper will present an operational overview of the system as well as results from initial testing of the 'Modular Infrared Imaging Applications Development System' (MIRIADS) configured as a missile early-warning detection system. The MIRIADS device can operate virtually any infrared focal plane array (FPA) that currently exists. Programmable on-board logic applies user-defined processing functions to the real-time digital image data for a variety of functions. Daughterboards may be plugged onto the system to expand the digital and analog processing capabilities of the system. A unique full hemispherical infrared fisheye optical system designed and produced by Optics 1, Inc. is utilized by the MIRIADS in a missile warning application to demonstrate the flexibility of the overall system to be applied to a variety of current and future AFRL missions.
NASA Technical Reports Server (NTRS)
2001-01-01
Image of a soot (smoke) plume made for the Laminar Soot Processes (LSP) experiment during the Microgravity Sciences Lab-1 mission in 1997. LSP-2 will fly in the STS-107 Research 1 mission in 2002. The principal investigator is Dr. Gerard Faeth of the University of Michigan. LSP uses a small jet burner, similar to a classroom butane lighter, that produces flames up to 60 mm (2.3 in) long. Measurements are made with color TV cameras, a temperature sensor, and laser images whose darkness indicates the quantity of soot produced in the flame. Glenn Research Center in Cleveland, OH, manages the project.
Erberich, Stephan G; Bhandekar, Manasee; Chervenak, Ann; Kesselman, Carl; Nelson, Marvin D
2007-01-01
Functional MRI is successfully being used in clinical and research applications including preoperative planning, language mapping, and outcome monitoring. However, clinical use of fMRI is less widespread due to the complexity of imaging, image workflow, and post-processing, and a lack of algorithmic standards that hinders result comparability. As a consequence, widespread adoption of fMRI as a clinical tool is low, contributing to uncertainty among community physicians about how to integrate fMRI into practice. In addition, training of physicians with fMRI is in its infancy and requires clinical and technical understanding. Therefore, many institutions which perform fMRI have a team of basic researchers and physicians to perform fMRI as a routine imaging tool. In order to provide fMRI as an advanced diagnostic tool to the benefit of a larger patient population, image acquisition and image post-processing must be streamlined, standardized, and available at any institution that does not have these resources in-house. Here we describe a software architecture, the functional imaging laboratory (funcLAB/G), which addresses (i) standardized image processing using Statistical Parametric Mapping and (ii) its extension to secure sharing and availability for the community using standards-based Grid technology (Globus Toolkit). funcLAB/G carries the potential to overcome the limitations of fMRI in clinical use and thus makes standardized fMRI available to the broader healthcare enterprise utilizing the Internet and HealthGrid Web Services technology.
Institute for Molecular Medicine Research Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phelps, Michael E
2012-12-14
The objectives of the project are the development of new Positron Emission Tomography (PET) imaging instrumentation, chemistry technology platforms and new molecular imaging probes to examine the transformations from normal cellular and biological processes to those of disease in pre-clinical animal models. These technology platforms and imaging probes provide the means to: 1. Study the biology of disease using pre-clinical mouse models and cells. 2. Develop molecular imaging probes for imaging assays of proteins in pre-clinical models. 3. Develop imaging assays in pre-clinical models to provide to other scientists the means to guide and improve the processes for discovering new drugs. 4. Develop imaging assays in pre-clinical models for others to use in judging the impact of drugs on the biology of disease.
NASA Astrophysics Data System (ADS)
Avdelidis, N. P.; Kappatos, V.; Georgoulas, G.; Karvelis, P.; Deli, C. K.; Theodorakeas, P.; Giakas, G.; Tsiokanos, A.; Koui, M.; Jamurtas, A. Z.
2017-04-01
Exercise-induced muscle damage (EIMD) is usually experienced by (i) people who have been physically inactive for prolonged periods of time and then begin sudden training trials, and (ii) athletes who train beyond their normal limits. EIMD is not easy to detect and quantify with common measurement tools and methods. Thermography has been used successfully as a research detection tool in medicine for the last six decades, but very limited work has been reported in the EIMD area. The main purpose of this research is to assess and characterize EIMD using thermography and image processing techniques. The first step towards that goal is to develop a reliable segmentation technique to isolate the region of interest (ROI). Semi-automatic image processing software was designed to segment regions of the left and right leg based on superpixels. The image is segmented into a number of regions, and the user is able to intervene by indicating which regions belong to each of the two legs. In order to validate the image processing software, an extensive experimental investigation was carried out, acquiring thermographic images of the rectus femoris muscle before, immediately after, and 24, 48 and 72 hours after an acute bout of eccentric exercise (5 sets of 15 maximum repetitions) in males and females (20-30 years old). Results indicate that the semi-automated approach provides an excellent benchmark that can be used as a reliable clinical tool.
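A minimal sketch of the superpixel step (Python with scikit-image's SLIC as an assumed stand-in, since the abstract does not name its superpixel algorithm): the image is partitioned into superpixels, after which the user assigns regions to the left or right leg, mirroring the semi-automatic workflow described.

```python
from skimage.segmentation import slic

def leg_superpixels(thermal_rgb, n_segments=300):
    """Partition a (rows, cols, 3) thermal rendering into superpixels;
    returns an integer label per pixel for interactive ROI assignment."""
    return slic(thermal_rgb, n_segments=n_segments, compactness=10)
```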
Development of real-time extensometer based on image processing
NASA Astrophysics Data System (ADS)
Adinanta, H.; Puranto, P.; Suryadi
2017-04-01
An extensometer system was developed using a high-definition web camera as the main sensor to track object position. The system applies digital image processing techniques to measure changes in object position. The position measurement is done in real time, so the system directly shows the actual position on both the x and y axes. In this research, the relation between pixel changes and object position changes was characterized. The system was tested by moving the target over a range of 20 cm in intervals of 1 mm. To verify long-run performance, the stability and linearity of continuous measurements on both the x and y axes were assessed over 83 hours. The results show that this image processing-based extensometer has both good stability and linearity.
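The abstract does not specify the tracking algorithm; a common choice for this kind of target tracking is normalized cross-correlation template matching, sketched below with OpenCV. The millimetres-per-pixel factor is assumed to come from the pixel-to-position characterization described above.

```python
import cv2

def track_target(frame_gray, template, mm_per_px):
    """One tracking step: find the template in the frame by normalized
    cross-correlation and return its position in millimetres."""
    res = cv2.matchTemplate(frame_gray, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(res)       # (x, y) of best match
    return max_loc[0] * mm_per_px, max_loc[1] * mm_per_px
```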
"Minding the gap": imagination, creativity and human cognition.
Pelaprat, Etienne; Cole, Michael
2011-12-01
Inquiry into the nature of mental images is a major topic in psychology where research is focused on the psychological faculties of imagination and creativity. In this paper, we draw on the work of L.S. Vygotsky to develop a cultural-historical approach to the study of imagination as central to human cognitive processes. We characterize imagination as a process of image making that resolves "gaps" arising from biological and cultural-historical constraints, and that enables ongoing time-space coordination necessary for thought and action. After presenting some basic theoretical considerations, we offer a series of examples to illustrate for the reader the diversity of processes of imagination as image making. Applying our arguments to contemporary digital media, we argue that a cultural-historical approach to image formation is important for understanding how imagination and creativity are distinct, yet inter-penetrating processes.
Promise of new imaging technologies for assessing ovarian function.
Singh, Jaswant; Adams, Gregg P; Pierson, Roger A
2003-10-15
Advancements in imaging technologies over the last two decades have ushered a quiet revolution in research approaches to the study of ovarian structure and function. The most significant changes in our understanding of the ovary have resulted from the use of ultrasonography which has enabled sequential analyses in live animals. Computer-assisted image analysis and mathematical modeling of the dynamic changes within the ovary has permitted exciting new avenues of research with readily quantifiable endpoints. Spectral, color-flow and power Doppler imaging now facilitate physiologic interpretations of vascular dynamics over time. Similarly, magnetic resonance imaging (MRI) is emerging as a research tool in ovarian imaging. New technologies, such as three-dimensional ultrasonography and MRI, ultrasound-based biomicroscopy and synchrotron-based techniques each have the potential to enhance our real-time picture of ovarian function to the near-cellular level. Collectively, information available in ultrasonography, MRI, computer-assisted image analysis and mathematical modeling heralds a new era in our understanding of the basic processes of female and male reproduction.
Recognition of Roasted Coffee Bean Levels using Image Processing and Neural Network
NASA Astrophysics Data System (ADS)
Nasution, T. H.; Andayani, U.
2017-03-01
Coffee beans have characteristic appearances at different roast levels; however, some people cannot recognize the roast level. In this research, we propose a method to recognize the roast level of coffee beans from digital images by processing the images and classifying them with a backpropagation neural network. The steps consist of image acquisition, pre-processing, feature extraction using the Gray Level Co-occurrence Matrix (GLCM) method, and finally normalization of the extracted features using decimal scaling. The normalized feature values become the input to the backpropagation neural network classifier, which we use to recognize the coffee bean roast levels. The results show that the proposed method is able to identify the coffee bean roast level with an accuracy of 97.5%.
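A sketch of the GLCM feature extraction and decimal-scaling normalization steps (Python with scikit-image ≥ 0.19; the distances, angles, and property set are illustrative assumptions, not necessarily the paper's):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_bean):
    """Texture features from a gray-level co-occurrence matrix of an
    8-bit bean image, then decimal-scaling normalization (divide by
    10^k so every value has magnitude below 1)."""
    glcm = graycomatrix(gray_bean, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ['contrast', 'homogeneity', 'energy', 'correlation']
    feats = np.hstack([graycoprops(glcm, p).ravel() for p in props])
    k = np.ceil(np.log10(np.abs(feats).max() + 1e-12))
    return feats / (10 ** k)
```

The normalized vector would then be fed to the backpropagation network as described.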
An image-processing software package: UU and Fig for optical metrology applications
NASA Astrophysics Data System (ADS)
Chen, Lujie
2013-06-01
Modern optical metrology applications are largely supported by computational methods, such as phase shifting [1], Fourier transform [2], digital image correlation [3], camera calibration [4], etc., in which image processing is a critical and indispensable component. While it is not difficult to obtain a wide variety of image-processing programs from the internet, few cater to the relatively specialized area of optical metrology. This paper introduces an image-processing software package: UU (data processing) and Fig (data rendering) that incorporates many useful functions to process optical metrological data. The cross-platform programs UU and Fig are developed based on wxWidgets. At the time of writing, they have been tested on Windows, Linux and Mac OS. The user interface is designed to offer precise control of the underlying processing procedures in a scientific manner. The data input/output mechanism is designed to accommodate diverse file formats and to facilitate interaction with other independent programs. In terms of robustness, although the software was initially developed for personal use, it is comparable in stability and accuracy to most commercial software of a similar nature. In addition to functions for optical metrology, the software package has a rich collection of useful tools in the following areas: real-time image streaming from USB and GigE cameras, computational geometry, computer vision, fitting of data, 3D image processing, vector image processing, precision device control (rotary stage, PZT stage, etc.), point cloud to surface reconstruction, volume rendering, batch processing, etc. The software package is currently used in a number of universities for teaching and research.
Content Based Image Retrieval by Using Color Descriptor and Discrete Wavelet Transform.
Ashraf, Rehan; Ahmed, Mudassar; Jabbar, Sohail; Khalid, Shehzad; Ahmad, Awais; Din, Sadia; Jeon, Gwangil
2018-01-25
Due to recent developments in technology, the complexity of multimedia has significantly increased, and the retrieval of similar multimedia content is an open research problem. Content-Based Image Retrieval (CBIR) is a process that provides a framework for image search, in which low-level visual features are commonly used to retrieve images from an image database. The basic requirement in any image retrieval process is to sort the images by close similarity in terms of visual appearance. Color, shape and texture are examples of low-level image features. Features play a significant role in image processing: the representation of an image is known as a feature vector, and feature extraction techniques are applied to obtain features that are useful in classifying and recognizing images. As features define the behavior of an image, they determine storage requirements, classification efficiency and time consumption. In this paper, we discuss various types of features and feature extraction techniques, and explain in which scenarios each feature extraction technique performs better. The effectiveness of the CBIR approach is fundamentally based on feature extraction; in image processing tasks like object recognition and image retrieval, the feature descriptor is among the most essential steps. The main idea of CBIR is to search a dataset for images related to a query image using distance metrics. The proposed method performs image retrieval based on YCbCr color with a Canny edge histogram and the discrete wavelet transform. The combination of the edge histogram and the discrete wavelet transform increases the performance of the image retrieval framework for content-based search. The performance of different wavelets is also compared to determine the suitability of particular wavelet functions for image retrieval. The proposed algorithm is trained and tested on the Wang image database. For image retrieval, Artificial Neural Networks (ANN) are used and applied to a standard dataset in the CBIR domain. The performance of the proposed descriptors is assessed by computing both precision and recall values and compared with other proposed methods to demonstrate the superiority of our method. The efficiency and effectiveness of the proposed approach outperform existing research in terms of average precision and recall values.
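A sketch of the descriptor family described, in Python with OpenCV and PyWavelets (the moment choices and histogram binning are assumptions, and note that OpenCV names the color conversion YCrCb):

```python
import cv2
import numpy as np
import pywt

def cbir_features(bgr_img):
    """Concatenate YCbCr color moments, a histogram over Canny edge
    pixels, and level-1 DWT sub-band energies into one feature vector."""
    ycrcb = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2YCrCb)
    color_moments = np.hstack([ycrcb.mean(axis=(0, 1)), ycrcb.std(axis=(0, 1))])
    gray = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    edge_hist, _ = np.histogram(gray[edges > 0], bins=16, range=(0, 256))
    cA, (cH, cV, cD) = pywt.dwt2(gray.astype(float), 'db1')
    dwt_energy = [float(np.mean(band ** 2)) for band in (cA, cH, cV, cD)]
    return np.hstack([color_moments, edge_hist, dwt_energy])
```

Retrieval then ranks database images by a distance metric (e.g., Euclidean) between feature vectors.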
Rosnell, Tomi; Honkavaara, Eija
2012-01-01
The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one of the methods is based on BAE Systems' SOCET SET classical commercial photogrammetric software and another is built using Microsoft®'s Photosynth™ service available on the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but also some artifacts were detected. The point clouds from the Photosynth processing were sparser and noisier, which is to a large extent due to the fact that the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for properties of imaging sensor, data collection and processing of UAV image data to ensure accurate point cloud generation. PMID:22368479
NASA Astrophysics Data System (ADS)
QingJie, Wei; WenBin, Wang
2017-06-01
In this paper, image retrieval using a deep convolutional neural network combined with regularization and the PReLU activation function is studied, improving image retrieval accuracy. A deep convolutional neural network can not only simulate the process by which the human brain receives and transmits information, but also contains the convolution operation, which is very well suited to processing images. Using a deep convolutional neural network is better than directly extracting visual features for image retrieval. However, the structure of a deep convolutional neural network is complex and prone to over-fitting, which reduces the accuracy of image retrieval. In this paper, we combine L1 regularization and the PReLU activation function to construct a deep convolutional neural network that prevents over-fitting and improves the accuracy of image retrieval.
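A minimal PyTorch sketch of the described ingredients: a small convolutional network with PReLU activations, with an L1 penalty added to the training loss. The architecture, embedding size, and 32x32 input are illustrative assumptions, not the paper's network.

```python
import torch
import torch.nn as nn

class RetrievalCNN(nn.Module):
    """Small CNN whose output embedding is used for retrieval by
    nearest-neighbor search over a feature database."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.PReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.PReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(64 * 8 * 8, embed_dim)  # assumes 32x32 inputs

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

def l1_penalty(model, lam=1e-5):
    # L1 regularization term, added to the task loss to discourage over-fitting.
    return lam * sum(p.abs().sum() for p in model.parameters())
```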
Present status and trends of image fusion
NASA Astrophysics Data System (ADS)
Xiang, Dachao; Fu, Sheng; Cai, Yiheng
2009-10-01
Image fusion extracts information from multiple images that is more accurate and reliable than information from a single image: various images contain different aspects of information about the measured parts, and comprehensive information can be obtained by integrating them. Image fusion is a main branch of the application of data fusion technology. At present, it is widely used in computer vision, remote sensing, robot vision, medical image processing and the military field. This paper mainly presents the contents and research methods of image fusion and the state of the art in China and abroad, and analyzes development trends.
3CCD image segmentation and edge detection based on MATLAB
NASA Astrophysics Data System (ADS)
He, Yong; Pan, Jiazhi; Zhang, Yun
2006-09-01
This research aimed to identify weeds from crops at an early stage of field operation using image-processing technology. 3CCD images offer a greater binary-value difference between weed and crop sections than ordinary digital images taken by common cameras. A 3CCD camera has three channels (green, red, infrared) that capture the same area, and the three images can be composed into one image, which facilitates the segmentation of different areas. With the image-processing toolkit in MATLAB, the different areas in the image can be segmented clearly. As edge detection is the first and a very important step in image processing, the results of different processing methods were compared. In particular, using the wavelet packet transform toolkit in MATLAB, an image was preprocessed and its edges extracted, yielding a more clearly cut edge image. The segmentation methods include operations such as erosion, dilation and other algorithms to preprocess the images. Segmenting different areas in digital images in the field in real time is of great importance for precision farming, saving energy, herbicide and many other materials. At present, large-scale software such as MATLAB on a PC is used, but the computation can be reduced and integrated into a small embedded system, which means that the application of this technique in agricultural engineering is feasible and of great economic value.
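The green/red/infrared channels of a 3CCD image support a simple vegetation-index segmentation. The Python sketch below uses an NDVI-style index (an assumed stand-in; the paper does not name its index) followed by the erosion/dilation cleanup mentioned above.

```python
import numpy as np
from scipy import ndimage

def vegetation_mask(red, nir, thresh=0.2):
    """Threshold (NIR - R) / (NIR + R) to separate vegetation from soil,
    then clean the binary mask with erosion and dilation."""
    ndvi = (nir.astype(float) - red) / (nir.astype(float) + red + 1e-12)
    mask = ndvi > thresh
    mask = ndimage.binary_erosion(mask, iterations=2)
    return ndimage.binary_dilation(mask, iterations=2)
```

Separating weeds from crop within the vegetation mask would still require shape or texture cues, which is where the edge-detection comparison above comes in.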
How Many Words Is a Picture Worth? Integrating Visual Literacy in Language Learning with Photographs
ERIC Educational Resources Information Center
Baker, Lottie
2015-01-01
Cognitive research has shown that the human brain processes images quicker than it processes words, and images are more likely than text to remain in long-term memory. With the expansion of technology that allows people from all walks of life to create and share photographs with a few clicks, the world seems to value visual media more than ever…
Instructional image processing on a university mainframe: The Kansas system
NASA Technical Reports Server (NTRS)
Williams, T. H. L.; Siebert, J.; Gunn, C.
1981-01-01
An interactive digital image processing program package was developed that runs on the University of Kansas central computer, a Honeywell Level 66 multi-processor system. The modular form of the package allows easy and rapid upgrades and extensions of the system, and it is used in remote sensing courses in the Department of Geography, in regional five-day short courses for academics and professionals, and also in remote sensing projects and research. The package comprises three self-contained modules of processing functions: subimage extraction and rectification; image enhancement, preprocessing and data reduction; and classification. Its use in a typical course setting is described. Availability and costs are considered.
ERIC Educational Resources Information Center
Weiss, Ruth Palombo
2000-01-01
Discusses brain research and how new imaging technologies allow scientists to explore how human brains process memory, emotion, attention, patterning, motivation, and context. Explains how brain research is being used to revise learning theories. (JOW)
Implementation of the Pan-STARRS Image Processing Pipeline
NASA Astrophysics Data System (ADS)
Fang, Julia; Aspin, C.
2007-12-01
Pan-STARRS, or the Panoramic Survey Telescope and Rapid Response System, is a wide-field imaging facility that combines small mirrors with gigapixel cameras. It surveys the entire available sky several times a month, which ultimately requires large amounts of data to be processed and stored right away. Accordingly, the Image Processing Pipeline (IPP) is a collection of software tools that is responsible for the primary image analysis for Pan-STARRS. It includes data registration, basic image analysis such as obtaining master images and detrending the exposures, mosaic calibration when applicable, and lastly, image summing and differencing. In this paper I present my work installing IPP 2.1 and 2.2 on a Linux machine, running the Simtest (simulated data used to verify the installation), and finally applying the IPP to two different sets of UH 2.2m Tek data. This work was conducted through a Research Experience for Undergraduates (REU) position at the University of Hawaii's Institute for Astronomy and funded by the NSF.
Medical Image Analysis Facility
NASA Technical Reports Server (NTRS)
1978-01-01
To improve the quality of photos sent to Earth by unmanned spacecraft, NASA's Jet Propulsion Laboratory (JPL) developed a computerized image enhancement process that brings out detail not visible in the basic photo. JPL is now applying this technology to biomedical research in its Medical Image Analysis Facility, which employs computer enhancement techniques to analyze x-ray films of internal organs, such as the heart and lung. A major objective is the study of the effects of stress on persons with heart disease. In animal tests, computerized image processing is being used to study coronary artery lesions and the degree to which they reduce arterial blood flow when stress is applied. The photos illustrate the enhancement process. The upper picture is an x-ray photo in which the artery (dotted line) is barely discernible; in the post-enhancement photo at right, the whole artery and the lesions along its wall are clearly visible. The Medical Image Analysis Facility offers a faster means of studying the effects of complex coronary lesions in humans, and the research now being conducted on animals is expected to have important application to diagnosis and treatment of human coronary disease. Other uses of the facility's image processing capability include analysis of muscle biopsy and pap smear specimens, and study of the microscopic structure of fibroprotein in the human lung. Working with JPL on experiments are NASA's Ames Research Center, the University of Southern California School of Medicine, and Rancho Los Amigos Hospital, Downey, California.
An Adaptive Inpainting Algorithm Based on DCT Induced Wavelet Regularization
2013-01-01
research in image processing. Applications of image inpainting include old film restoration, video inpainting [4], de-interlacing of video sequences... [Figure caption residue: Fig. 1, performance of various inpainting algorithms for a cartoon image with text: (a) original test image; (b) test image with text; inpainted images by (c) SF (PSNR=37.38 dB); (d) SF-LDCT (PSNR=37.37 dB); (e) MCA (PSNR=37.04 dB); and (f) the proposed method.]
Involving Undergraduates in Solar Physics Research
NASA Astrophysics Data System (ADS)
Lopresto, James C.; Jenkins, Nancy
1996-05-01
Via a combination of local funding, Cottrell Research Corporation and a pending NSF proposal, I am actively involved in including undergraduates in solar physics research. Several undergraduates, about 2-3 per academic year over the past several years, have participated in a combination of activities. This project has been ongoing since November of 1992. Student involvement includes: (1) acquiring image and other data via the Internet, (2) reducing data via in-house programs and image processing, and (3) traveling to Kitt Peak to obtain solar spectral index data.
Computer Sciences and Data Systems, volume 1
NASA Technical Reports Server (NTRS)
1987-01-01
Topics addressed include: software engineering; university grants; institutes; concurrent processing; sparse distributed memory; distributed operating systems; intelligent data management processes; expert system for image analysis; fault tolerant software; and architecture research.
A real time mobile-based face recognition with fisherface methods
NASA Astrophysics Data System (ADS)
Arisandi, D.; Syahputra, M. F.; Putri, I. L.; Purnamawati, S.; Rahmat, R. F.; Sari, P. P.
2018-03-01
Face recognition is a research field in computer vision that studies how to learn a face and determine its identity from a picture sent to the system. By utilizing face recognition technology, learning the identities of fellow students at a university becomes simpler: a student no longer needs to browse the student directory on the university's server site looking for a person with certain facial traits. To achieve this goal, the face recognition application uses image processing methods consisting of two phases, a pre-processing phase and a recognition phase. In the pre-processing phase, the system processes the input image into the best possible image for the recognition phase; the purpose is to reduce noise and increase the signal in the image. For the recognition phase, we use the Fisherface method, chosen because it performs well with the system's limited data. Experiments show that the accuracy of face recognition using Fisherface is 90%.
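A minimal sketch of the Fisherface recognition phase in Python/scikit-learn (the dimensions are illustrative, and the pre-processing phase is assumed already applied to the flattened face vectors): PCA first, so the within-class scatter matrix is non-singular, then Fisher's linear discriminant analysis.

```python
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

def train_fisherface(face_vectors, labels, n_classes):
    """Fisherface = PCA (keeping at most n_samples - n_classes
    components) followed by LDA, fit on flattened face images."""
    n_components = min(len(face_vectors) - n_classes, 150)
    model = make_pipeline(PCA(n_components=n_components, whiten=True),
                          LinearDiscriminantAnalysis())
    model.fit(face_vectors, labels)
    return model  # model.predict(new_faces) returns identities
```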
DOT National Transportation Integrated Search
2016-10-01
This report details the research undertaken and software tools that were developed that enable digital images of gusset plates to be converted into orthophotos, establish physical dimensions, collect geometric information from them, and conduct s...
Brain imaging registry for neurologic diagnosis and research
NASA Astrophysics Data System (ADS)
Hoo, Kent S., Jr.; Wong, Stephen T. C.; Knowlton, Robert C.; Young, Geoffrey S.; Walker, John; Cao, Xinhua; Dillon, William P.; Hawkins, Randall A.; Laxer, Kenneth D.
2002-05-01
The purpose of this paper is to demonstrate the importance of building a brain imaging registry (BIR) on top of existing medical information systems, including the Picture Archiving and Communication System (PACS) environment. We describe the design framework for a cluster of data marts whose purpose is to provide clinicians and researchers efficient access to a large volume of raw and processed patient images and associated data originating from multiple operational systems over time and spread out across different hospital departments and laboratories. The framework is designed using object-oriented analysis and design methodology. The BIR data marts each contain complete image and textual data relating to patients with a particular disease.
ERIC Educational Resources Information Center
Plum, Terry; Smalley, Topsy N.
1994-01-01
Discussion of humanities research focuses on the humanist patron as author of the text. Highlights include the research process; style of expression; interpretation; multivocality; reflexivity; social validation; repatriation; the image of the library for the author; patterns of searching behavior; and reference librarian responses. (37…
Yap, Florence G. H.; Yen, Hong-Hsu
2014-01-01
Wireless Visual Sensor Networks (WVSNs), where camera-equipped sensor nodes can capture, process and transmit image/video information, have become an important new research area. As compared to traditional wireless sensor networks (WSNs) that can only transmit scalar information (e.g., temperature), the visual data in WVSNs enable much wider applications, such as visual security surveillance and visual wildlife monitoring. However, as compared to the scalar data in WSNs, visual data is much bigger and more complicated, so intelligent schemes are required to capture/process/transmit visual data under the limited resources (hardware capability and bandwidth) of WVSNs. WVSNs introduce new multi-disciplinary research opportunities in topics that include visual sensor hardware, image and multimedia capture and processing, wireless communication and networking. In this paper, we survey existing research efforts on the visual sensor hardware, visual sensor coverage/deployment, and visual data capture/processing/transmission issues in WVSNs. We conclude that WVSN research is still at an early stage and there are still many open issues that have not been fully addressed. More novel multi-disciplinary, cross-layered, distributed and collaborative solutions should be devised to tackle these challenging issues in WVSNs. PMID:24561401
NASA Astrophysics Data System (ADS)
Made, Pertiwi Jaya Ni; Miura, Fusanori; Besse Rimba, A.
2016-06-01
Large-scale earthquakes and tsunamis affect thousands of people and cause serious damage worldwide every year. Quick observation of disaster damage is extremely important for planning effective rescue operations. In the past, acquiring damage information was limited to field surveys or aerial photographs. In the last decade, space-borne images have been used in many disaster studies, such as tsunami damage detection. In this study, SAR data from ALOS/PALSAR satellite images were used to estimate tsunami damage in the form of inundation areas in Talcahuano, the area near the epicentre of the 2010 Chile earthquake. The image processing consisted of three stages, i.e. pre-processing, analysis processing, and post-processing, and was conducted using multi-temporal images acquired before and after the disaster. In the analysis processing, inundation areas were extracted through masking. This consisted of water masking using a high-resolution optical image from ALOS/AVNIR-2 and elevation masking built upon the inundation height using a DEM image from ASTER-GDEM. The resulting inundation area was 8.77 km². It showed a good result and corresponded well to the inundation map of Talcahuano. Future study in another area is needed in order to strengthen the estimation method.
A New Parallel Approach for Accelerating the GPU-Based Execution of Edge Detection Algorithms
Emrani, Zahra; Bateni, Soroosh; Rabbani, Hossein
2017-01-01
Real-time image processing is used in a wide variety of applications, such as those in medical care and industrial processes. In medical care, this technique can display important patient information graphically, which can supplement and support the treatment process. Medical decisions made based on real-time images are more accurate and reliable. According to recent research, graphics processing unit (GPU) programming is a useful method for improving the speed and quality of medical image processing and is one route to real-time image processing. Edge detection is an early stage in most image processing methods for the extraction of features and object segments from a raw image. The Canny method, Sobel and Prewitt filters, and the Roberts' Cross technique are some examples of edge detection algorithms that are widely used in image processing and machine vision. In this work, these algorithms are implemented using the Compute Unified Device Architecture (CUDA), Open Source Computer Vision (OpenCV), and Matrix Laboratory (MATLAB) platforms. An existing parallel method for the Canny approach has been modified to run in a fully parallel manner, achieved by replacing the breadth-first search procedure with a parallel method. These algorithms have been compared by testing them on a database of optical coherence tomography images. The comparison of results shows that the proposed implementation of the Canny method on GPU using the CUDA platform improves the speed of execution by 2–100× compared to the central processing unit-based implementation using the OpenCV and MATLAB platforms. PMID:28487831
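For reference, the Sobel and Canny operators named above have standard CPU implementations in OpenCV's Python bindings; a minimal sketch follows. The file name is illustrative, and the paper's CUDA kernels and parallelized hysteresis stage are not reproduced here.

```python
import cv2
import numpy as np

# Load a grayscale test image (file name is illustrative).
img = cv2.imread("oct_slice.png", cv2.IMREAD_GRAYSCALE)

# Sobel gradients in x and y, combined into an edge-magnitude image.
gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
sobel_edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))

# Canny edge detection; the hysteresis (edge-linking) stage is the part
# the paper replaces with a parallel procedure on the GPU.
canny_edges = cv2.Canny(img, 50, 150)
```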
Fluorescence lidar multi-color imaging of vegetation
NASA Technical Reports Server (NTRS)
Johansson, J.; Wallinder, E.; Edner, H.; Svanberg, S.
1992-01-01
Multi-color imaging of vegetation fluorescence following laser excitation is reported for distances of 50 m. A mobile laser radar system equipped with a Nd:YAG laser transmitter and a 40 cm diameter telescope was used. Image processing allows extraction of information related to the physiological status of the vegetation and might prove useful in forest decline research.
Planning applications in image analysis
NASA Technical Reports Server (NTRS)
Boddy, Mark; White, Jim; Goldman, Robert; Short, Nick, Jr.
1994-01-01
We describe two interim results from an ongoing effort to automate the acquisition, analysis, archiving, and distribution of satellite earth science data. Both results are applications of Artificial Intelligence planning research to the automatic generation of processing steps for image analysis tasks. First, we have constructed a linear conditional planner (CPed), used to generate conditional processing plans. Second, we have extended an existing hierarchical planning system to make use of durations, resources, and deadlines, thus supporting the automatic generation of processing steps in time and resource-constrained environments.
Automatic cloud coverage assessment of Formosat-2 image
NASA Astrophysics Data System (ADS)
Hsu, Kuo-Hsien
2011-11-01
The Formosat-2 satellite is equipped with a high-spatial-resolution (2 m ground sampling distance) remote sensing instrument. It has been operated on a daily-revisit mission orbit by the National Space Organization (NSPO) of Taiwan since May 21, 2004. NSPO also serves as one of the ground receiving stations, processing the received Formosat-2 images daily. The current cloud coverage assessment of Formosat-2 images for the NSPO Image Processing System generally consists of two major steps. First, an unsupervised K-means method is used to automatically estimate the cloud statistic of a Formosat-2 image. Second, cloud coverage is estimated by manual examination of the image. A more accurate Automatic Cloud Coverage Assessment (ACCA) method would clearly increase the efficiency of the second step by providing a good prediction of the cloud statistic. In this paper, building mainly on the research results of Chang et al., Irish, and Gotoh, we propose a modified Formosat-2 ACCA method comprising pre-processing and post-processing analysis. In the pre-processing analysis, the cloud statistic is determined using unsupervised K-means classification, Sobel's method, Otsu's method, non-cloudy pixel re-examination, and a cross-band filter method. A box-counting fractal method is used as a post-processing tool to double-check the results of the pre-processing analysis, increasing the efficiency of the manual examination.
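A minimal sketch of the first step, unsupervised K-means cloud estimation with Otsu's threshold as a cross-check, might look as follows in Python with OpenCV. The file name, the choice of three clusters, and the brightest-cluster-is-cloud heuristic are illustrative assumptions, not the NSPO implementation.

```python
import cv2
import numpy as np

img = cv2.imread("formosat2_scene.png", cv2.IMREAD_GRAYSCALE)

# Unsupervised K-means on pixel intensities; in optical imagery the
# brightest cluster is usually cloud.
pixels = img.reshape(-1, 1).astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(pixels, 3, None, criteria, 5,
                                cv2.KMEANS_PP_CENTERS)
cloud_mask = labels.reshape(img.shape) == int(np.argmax(centers))

# Otsu's threshold as an independent cloud/non-cloud split for comparison.
_, otsu_mask = cv2.threshold(img, 0, 255,
                             cv2.THRESH_BINARY + cv2.THRESH_OTSU)

print(f"K-means cloud coverage: {100.0 * cloud_mask.mean():.1f}%")
print(f"Otsu cloud coverage:    {100.0 * (otsu_mask > 0).mean():.1f}%")
```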
NASA Technical Reports Server (NTRS)
Cramer, K. Elliott; Syed, Hazari I.
1995-01-01
This user's manual describes the installation and operation of TIA, the Thermal-Imaging acquisition and processing Application, developed by the Nondestructive Evaluation Sciences Branch at NASA Langley Research Center, Hampton, Virginia. TIA is a user-friendly graphical interface application for the Macintosh II and higher series computers. The software has been developed to interface with the Perceptics/Westinghouse Pixelpipe(TM) and PixelStore(TM) NuBus cards and the GW Instruments MacADIOS(TM) input-output (I/O) card for the Macintosh for imaging thermal data. The software is also capable of performing generic image-processing functions.
NASA Technical Reports Server (NTRS)
Harrison, D. C.; Sandler, H.; Miller, H. A.
1975-01-01
The present collection of papers outlines advances in ultrasonography, scintigraphy, and commercialization of medical technology as applied to cardiovascular diagnosis in research and clinical practice. Particular attention is given to instrumentation, image processing and display. As necessary concomitants to mathematical analysis, recently improved magnetic recording methods using tape or disks and high-speed computers of large capacity are coming into use. Major topics include Doppler ultrasonic techniques, high-speed cineradiography, three-dimensional imaging of the myocardium with isotopes, sector-scanning echocardiography, and commercialization of the echocardioscope. Individual items are announced in this issue.
Image object recognition based on the Zernike moment and neural networks
NASA Astrophysics Data System (ADS)
Wan, Jianwei; Wang, Ling; Huang, Fukan; Zhou, Liangzhu
1998-03-01
This paper first gives a comprehensive discussion of the concept of the artificial neural network, its research methods, and its relation to information processing. On the basis of this discussion, we expound the mathematical similarity between artificial neural networks and information processing. The paper then presents a new method of image recognition based on invariant features and a neural network, using the image Zernike transform. The method not only is invariant to rotation, shift and scale of the image object, but also has good fault tolerance and robustness. It is also compared with a statistical classifier and the invariant-moments recognition method.
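As a rough illustration of the feature-extraction stage, the sketch below computes rotation-invariant Zernike moment magnitudes with the mahotas library; the toy disc image and the parameter choices are assumptions, since the paper does not specify its implementation.

```python
import numpy as np
from mahotas.features import zernike_moments

# Toy binary shape: a filled disc standing in for a segmented object.
yy, xx = np.mgrid[0:64, 0:64]
shape = ((yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2).astype(np.uint8)

# Magnitudes of Zernike moments are rotation-invariant; centering on the
# object and normalizing by the radius gives shift and scale tolerance.
features = zernike_moments(shape, radius=20, degree=8)

# This feature vector would then feed the neural-network classifier.
print(features.shape)
```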
Experiment research on infrared targets signature in mid and long IR spectral bands
NASA Astrophysics Data System (ADS)
Wang, Chensheng; Hong, Pu; Lei, Bo; Yue, Song; Zhang, Zhijie; Ren, Tingting
2013-09-01
Since infrared imaging systems play a significant role in military self-defense and fire control systems, the radiation signature of IR targets has become an important topic in IR imaging application technology. IR target signatures can be applied to target identification, especially for small and dim targets, as well as to target IR thermal design. To research and analyze target IR signatures systematically, a practical experimental project was conducted under different backgrounds and conditions. An infrared radiation acquisition system based on a cooled MWIR thermal imager and a cooled LWIR thermal imager was developed to capture the digital infrared images, and additional instruments were introduced to provide other parameters. From the original image data and the related parameters of a given scene, the IR signature of the target scene of interest can be calculated. Different backgrounds and targets were measured with this approach, and a comparative experimental analysis is presented in this paper as an example. The experiment validated this research approach, which is useful in detection performance evaluation and further target identification research.
A Geographic Data Gathering System for Image Geolocalization Refining
NASA Astrophysics Data System (ADS)
Semaan, B.; Servières, M.; Moreau, G.; Chebaro, B.
2017-09-01
Image geolocalization has become an important research field during the last decade. This field is divided into two main sections. The first is image geolocalization, which is used to find out which country, region or city an image belongs to. The second is refining image localization for uses that require more accuracy, such as augmented reality and three-dimensional environment reconstruction from images. In this paper we present a processing chain that gathers geographic data from several sources in order to deliver a better geolocalization than the GPS fix of an image, together with precise camera pose parameters. To do so, we use multiple types of data. Some of this information is visible in the image and is extracted using image processing; other data can be extracted from image file headers or from related information on online image-sharing platforms. Extracted information elements will not be expressive enough if they remain disconnected. We show that grouping these information elements helps find the best geolocalization of the image.
Design and deployment of a large brain-image database for clinical and nonclinical research
NASA Astrophysics Data System (ADS)
Yang, Guo Liang; Lim, Choie Cheio Tchoyoson; Banukumar, Narayanaswami; Aziz, Aamer; Hui, Francis; Nowinski, Wieslaw L.
2004-04-01
An efficient database is an essential component for organizing diverse information on image metadata and patient information for research in medical imaging. This paper describes the design, development and deployment of a large database system serving as a brain image repository that can be used across different platforms in various medical research projects. It forms the infrastructure that links hospitals and institutions together and shares data among them. The database contains patient-, pathology-, image-, research- and management-specific data. The functionalities of the database system include image uploading, storage, indexing, downloading and sharing as well as database querying and management, with security and data anonymization concerns well taken care of. The structure of the database is a multi-tier client-server architecture comprising a Relational Database Management System, a Security Layer, an Application Layer and a User Interface. An image source adapter has been developed to handle most of the popular image formats. The database has a user interface based on web browsers and is easy to use. We have used the Java programming language for its platform independence and vast function libraries. The brain image database can sort data according to clinically relevant information, which can be used effectively in research from the clinicians' point of view. The database is suitable for validation of algorithms on large populations of cases, and medical images for processing can be identified and organized based on information in the image metadata. Clinical research in various pathologies can thus be performed with greater efficiency, and large image repositories can be managed more effectively. A prototype of the system has been installed in a few hospitals and is working to the satisfaction of the clinicians.
Magnetospheric Radio Tomography: Observables, Algorithms, and Experimental Analysis
NASA Technical Reports Server (NTRS)
Cummer, Steven
2005-01-01
This grant supported research towards developing magnetospheric electron density and magnetic field remote sensing techniques via multistatic radio propagation and tomographic image reconstruction. This work was motivated by the need to better develop the basic technique of magnetospheric radio tomography, which holds substantial promise as a technology uniquely capable of imaging magnetic field and electron density in the magnetosphere on large scales with rapid cadence. Such images would provide an unprecedented and needed view into magnetospheric processes. By highlighting the systems-level interconnectedness of different regions, our understanding of space weather processes and ability to predict them would be dramatically enhanced. Three peer-reviewed publications and 5 conference presentations have resulted from this work, which supported 1 PhD student and 1 postdoctoral researcher. One more paper is in progress and will be submitted shortly. Because the main results of this research have been published or are soon to be published in refereed journal articles listed in the reference section of this document, we provide here an overview of the research and accomplishments without describing all of the details that are contained in the articles.
Semi-automated Digital Imaging and Processing System for Measuring Lake Ice Thickness
NASA Astrophysics Data System (ADS)
Singh, Preetpal
Canada is home to thousands of freshwater lakes and rivers. Apart from being sources of infinite natural beauty, rivers and lakes are an important source of water, food and transportation. The northern hemisphere of Canada experiences extreme cold temperatures in the winter, resulting in a freeze-up of regional lakes and rivers. Frozen lakes and rivers offer unique opportunities for wildlife harvesting and winter transportation. Ice roads built on frozen rivers and lakes are vital supply lines for industrial operations in the remote north. Monitoring the ice freeze-up and break-up dates annually can help predict regional climatic changes. Lake ice impacts a variety of physical, ecological and economic processes. The construction and maintenance of a winter road can cost millions of dollars annually. A good understanding of ice mechanics is required to build an ice road and deem it safe. A crucial factor in calculating the load-bearing capacity of ice sheets is the thickness of the ice. Construction costs are mainly attributed to producing and maintaining a specific thickness and density of ice that can support different loads. Climate change is leading to warmer temperatures, causing the ice to thin faster. At a certain point, a winter road may not be thick enough to support travel and transportation. There is considerable interest in monitoring winter road conditions given the high construction and maintenance costs involved. Remote sensing technologies such as Synthetic Aperture Radar have been successfully utilized to study the extent of ice covers and record freeze-up and break-up dates of ice on lakes and rivers across the north. Ice road builders have often used ultrasound equipment to measure ice thickness. However, an automated monitoring system based on machine vision and image processing technology that can measure lake ice thickness has not previously been developed. Machine vision and image processing techniques have successfully been used in manufacturing to detect equipment failure and identify defective products on the assembly line. The research work in this thesis combines machine vision and image processing technology to build a digital imaging and processing system for monitoring and measuring lake ice thickness in real time. An ultra-compact USB camera is programmed to acquire and transmit high-resolution imagery for processing with the MATLAB Image Processing Toolbox. The image acquisition and transmission process is fully automated; image analysis is semi-automated and requires limited user input. Potential design changes to the prototype and ideas for fully automating the imaging and processing procedure are presented to conclude this research work.
Addressing the potential adverse effects of school-based BMI assessments on children's wellbeing.
Gibbs, Lisa; O'Connor, Thea; Waters, Elizabeth; Booth, Michael; Walsh, Orla; Green, Julie; Bartlett, Jenny; Swinburn, Boyd
2008-01-01
INTRODUCTION. Do child obesity prevention research and intervention measures have the potential to generate adverse concerns about body image by focussing on food, physical activity and body weight? Research findings now demonstrate the emergence of body image concerns in children as young as 5 years. In the context of a large school-community-based child health promotion and obesity prevention study, we aimed to address the potential negative effects of height and weight measures on child wellbeing by developing and implementing an evidence-informed protocol to protect and prevent body image concerns. fun 'n healthy in Moreland! is a cluster randomised controlled trial of a child health promotion and obesity prevention intervention in 23 primary schools in an inner urban area of Melbourne, Australia. Body image considerations were incorporated into the study philosophies, aims, methods, staff training, language, data collection and reporting procedures of this study. This was informed by the published literature, professional body image expertise, pilot testing and implementation in the conduct of baseline data collection and the intervention. This study is the first record of a body image protection protocol being an integral part of the research processes of a child obesity prevention study. Whilst we are yet to measure its impact and outcome, we have developed and tested a protocol based on the evidence and with support from stakeholders in order to minimise the adverse impact of study processes on child body image concerns.
ERIC Educational Resources Information Center
Hafner, Mathias
2008-01-01
Cell biology and molecular imaging technologies have made enormous progress in basic research. However, the transfer of this knowledge to the pharmaceutical drug discovery process, or even therapeutic improvements for disorders such as neuronal diseases, is still in its infancy. This transfer needs scientists who can integrate basic research with…
NASA Astrophysics Data System (ADS)
Wang, Fu-Bin; Tu, Paul; Wu, Chen; Chen, Lei; Feng, Ding
2018-01-01
In femtosecond laser processing, the field of view of each image frame of the microscale structure is extremely small. In order to obtain the morphology of the whole microstructure, a multi-image mosaic with partially overlapped regions is required. In the present work, the SIFT algorithm for mosaicking images was analyzed theoretically, and using multiple images of a microgroove structure processed by femtosecond laser, a stitched image of the whole groove structure was studied experimentally and realized. The object of our research was a silicon wafer with a microgroove structure ablated by femtosecond laser. First, we obtained microgrooves with a width of 380 μm at different depths. Second, based on the gray image of the microgroove, a multi-image mosaic covering slot width and slot depth was realized. To improve the image contrast between the target and the background, and taking the slot depth image as an example, a multi-image mosaic was then realized using pseudo-color enhancement. Third, in order to measure the structural size of the microgroove from the image, a streak of known width ablated by femtosecond laser at 20 mW was used as a calibration sample. Through edge detection, corner extraction, and image correction of the streak images, we calculated the pixel width of the streak image, found the measurement ratio constant Kw in the width direction, and thus obtained the proportional relationship between pixels and micrometers. Finally, circular spot marks ablated by femtosecond laser at 2 mW and 15 mW were used as test images to verify the value of Kw; the measurement ratio constant Kh in the height direction was obtained, and image measurement of a 380 × 117 μm microgroove was realized based on the measurement ratio constants Kw and Kh. The research and experimental results show that image mosaicking, image calibration, and geometric image parameter measurement for microstructural images ablated by femtosecond laser were realized effectively.
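A minimal sketch of SIFT-based mosaicking of two overlapping tiles, using OpenCV's SIFT implementation with Lowe's ratio test and a RANSAC homography, is shown below; the file names and thresholds are illustrative, and this is not the authors' code.

```python
import cv2
import numpy as np

img1 = cv2.imread("groove_tile_1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("groove_tile_2.png", cv2.IMREAD_GRAYSCALE)

# Detect SIFT keypoints and descriptors in both overlapping tiles.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors and keep good matches via Lowe's ratio test.
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Estimate the homography that registers tile 2 onto tile 1.
src = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp tile 2 into tile 1's frame; the overlap forms the mosaic.
mosaic = cv2.warpPerspective(img2, H, (img1.shape[1] * 2, img1.shape[0]))
mosaic[:, : img1.shape[1]] = img1
```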
NASA Technical Reports Server (NTRS)
Mckee, James W.
1988-01-01
This final report describes the accomplishments of the General Purpose Intelligent Sensor Interface task of the Applications of Artificial Intelligence to Space Station grant for the period from October 1, 1987 through September 30, 1988. Portions of the First Biannual Report not revised will not be included but only referenced. The goal is to develop an intelligent sensor system that will simplify the design and development of expert systems using sensors of the physical phenomena as a source of data. This research will concentrate on the integration of image processing sensors and voice processing sensors with a computer designed for expert system development. The result of this research will be the design and documentation of a system in which the user will not need to be an expert in such areas as image processing algorithms, local area networks, image processor hardware selection or interfacing, television camera selection, voice recognition hardware selection, or analog signal processing. The user will be able to access data from video or voice sensors through standard LISP statements without any need to know about the sensor hardware or software.
GIFT-Cloud: A data sharing and collaboration platform for medical imaging research.
Doel, Tom; Shakir, Dzhoshkun I; Pratt, Rosalind; Aertsen, Michael; Moggridge, James; Bellon, Erwin; David, Anna L; Deprest, Jan; Vercauteren, Tom; Ourselin, Sébastien
2017-02-01
Clinical imaging data are essential for developing research software for computer-aided diagnosis, treatment planning and image-guided surgery, yet existing systems are poorly suited for data sharing between healthcare and academia: research systems rarely provide an integrated approach for data exchange with clinicians; hospital systems are focused towards clinical patient care with limited access for external researchers; and safe haven environments are not well suited to algorithm development. We have established GIFT-Cloud, a data and medical image sharing platform, to meet the needs of GIFT-Surg, an international research collaboration that is developing novel imaging methods for fetal surgery. GIFT-Cloud also has general applicability to other areas of imaging research. GIFT-Cloud builds upon well-established cross-platform technologies. The Server provides secure anonymised data storage, direct web-based data access and a REST API for integrating external software. The Uploader provides automated on-site anonymisation, encryption and data upload. Gateways provide a seamless process for uploading medical data from clinical systems to the research server. GIFT-Cloud has been implemented in a multi-centre study for fetal medicine research. We present a case study of placental segmentation for pre-operative surgical planning, showing how GIFT-Cloud underpins the research and integrates with the clinical workflow. GIFT-Cloud simplifies the transfer of imaging data from clinical to research institutions, facilitating the development and validation of medical research software and the sharing of results back to the clinical partners. GIFT-Cloud supports collaboration between multiple healthcare and research institutions while satisfying the demands of patient confidentiality, data security and data ownership.
Cocaine, Appetitive Memory and Neural Connectivity
Ray, Suchismita
2013-01-01
This review examines existing cognitive experimental and brain imaging research related to cocaine addiction. In section 1, previous studies that have examined cognitive processes, such as implicit and explicit memory processes in cocaine users are reported. Next, in section 2, brain imaging studies are reported that have used chronic users of cocaine as study participants. In section 3, several conclusions are drawn. They are: (a) in cognitive experimental literature, no study has examined both implicit and explicit memory processes involving cocaine related visual information in the same cocaine user, (b) neural mechanisms underlying implicit and explicit memory processes for cocaine-related visual cues have not been directly investigated in cocaine users in the imaging literature, and (c) none of the previous imaging studies has examined connectivity between the memory system and craving system in the brain of chronic users of cocaine. Finally, future directions in the field of cocaine addiction are suggested. PMID:25009766
Comparing an FPGA to a Cell for an Image Processing Application
NASA Astrophysics Data System (ADS)
Rakvic, Ryan N.; Ngo, Hau; Broussard, Randy P.; Ives, Robert W.
2010-12-01
Modern advancements in configurable hardware, most notably Field-Programmable Gate Arrays (FPGAs), have provided an exciting opportunity to exploit the parallel nature of modern image processing algorithms. On the other hand, PlayStation 3 (PS3) game consoles contain a multicore heterogeneous processor known as the Cell, which is designed to perform complex image processing algorithms with high performance. In this research project, our aim is to study the differences in performance of a modern image processing algorithm on these two hardware platforms. In particular, iris recognition systems have recently become an attractive identification method because of their extremely high accuracy. Iris matching, a repeatedly executed portion of a modern iris recognition algorithm, is parallelized on an FPGA system and a Cell processor. We demonstrate a 2.5 times speedup of the parallelized algorithm on the FPGA system when compared to a Cell processor-based version.
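The abstract does not state the matching kernel, but iris matching is commonly implemented as a masked fractional Hamming distance between binary iris codes (Daugman-style); the sketch below assumes that formulation, which is exactly the kind of bitwise inner loop that FPGA or Cell implementations would parallelize.

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes,
    counting only bits valid (unoccluded) in both masks. This is an
    assumed, Daugman-style formulation, not the paper's kernel."""
    valid = mask_a & mask_b
    disagree = (code_a ^ code_b) & valid
    return disagree.sum() / max(int(valid.sum()), 1)

# Toy codes: 2048-bit templates, a typical iris-code size.
rng = np.random.default_rng(0)
code1 = rng.integers(0, 2, 2048, dtype=np.uint8)
code2 = rng.integers(0, 2, 2048, dtype=np.uint8)
mask = np.ones(2048, dtype=np.uint8)

# Scores below roughly 0.32 are typically treated as a match.
print(hamming_distance(code1, code2, mask, mask))
```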
Chemistry of the Konica Dry Color System
NASA Astrophysics Data System (ADS)
Suda, Yoshihiko; Ohbayashi, Keiji; Onodera, Kaoru
1991-08-01
While silver halide photosensitive materials offer superiority in image quality -- both in color and black-and-white -- they require chemical solutions for processing, and this can be a drawback. To overcome this, researchers turned to the thermal development of silver halide photographic materials, and met their first success with black-and-white images. Later, with the development of the Konica Dry Color System, color images were finally obtained from a completely dry thermal development system, without the use of water or chemical solutions. The dry color system is characterized by a novel chromogenic color image-forming technology and comprises four processes. (1) With the application of heat, a color developer precursor (CDP) decomposes to generate a p-phenylenediamine color developer (CD). (2) The CD then develops silver salts. (3) Oxidized CD then reacts with couplers to generate color image dyes. (4) Finally, the dyes diffuse from the system's photosensitive sheet to its image-receiving sheet. The authors have analyzed the kinetics of each of the system's four processes. In this paper, they report the kinetics of the system's first process, color developer (CD) generation.
A Q-Ising model application for linear-time image segmentation
NASA Astrophysics Data System (ADS)
Bentrem, Frank W.
2010-10-01
A computational method is presented which efficiently segments digital grayscale images by directly applying the Q-state Ising (or Potts) model. Since the Potts model was first proposed in 1952, physicists have studied lattice models to gain deep insights into magnetism and other disordered systems. For some time, researchers have realized that digital images may be modeled in much the same way as these physical systems (i.e., as a square lattice of numerical values). A major drawback of conventional Potts-model methods for image segmentation is that they run in exponential time. Advances have been made via certain approximations to reduce the segmentation process to power-law time. However, in many applications (such as sonar imagery), real-time processing requires much greater efficiency. This article describes an energy minimization technique that applies four Potts (Q-Ising) models directly to the image and runs in linear time. The result is analogous to partitioning the system into regions of four classes of magnetism. This direct Potts segmentation technique is demonstrated on photographic, medical, and acoustic images.
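The paper's linear-time algorithm is not reproduced here, but the following sketch of iterated conditional modes (ICM), whose sweeps are likewise linear in the number of pixels, illustrates the kind of Potts energy being minimized; the data term and the beta value are illustrative assumptions.

```python
import numpy as np

def icm_potts(img, q=4, beta=0.1, sweeps=5):
    """Potts-model segmentation via iterated conditional modes (ICM).
    Each pixel takes the label minimizing a data term (distance to the
    class mean) plus beta times the number of disagreeing 4-neighbours.
    Each sweep is linear in the number of pixels."""
    h, w = img.shape
    # Initial labels: quantize intensities into q bins.
    labels = np.minimum((img.astype(float) / 256.0 * q).astype(int), q - 1)
    for _ in range(sweeps):
        # Class means act as a simple piecewise-constant image model.
        means = np.array([img[labels == k].mean() if (labels == k).any()
                          else 0.0 for k in range(q)])
        for y in range(h):
            for x in range(w):
                best_k, best_e = labels[y, x], np.inf
                for k in range(q):
                    data = ((float(img[y, x]) - means[k]) / 255.0) ** 2
                    smooth = sum(labels[ny, nx] != k
                                 for ny, nx in ((y - 1, x), (y + 1, x),
                                                (y, x - 1), (y, x + 1))
                                 if 0 <= ny < h and 0 <= nx < w)
                    e = data + beta * smooth
                    if e < best_e:
                        best_k, best_e = k, e
                labels[y, x] = best_k
    return labels
```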
Wójcicki, Tomasz; Nowicki, Michał
2016-01-01
The article presents a selected area of research and development concerning methods of material analysis based on automatic image recognition of the investigated metallographic sections. The objectives of the material analyses for gas nitriding technology are described. The methods of preparing nitrided layers, the steps of the process, and the construction and operation of devices for gas nitriding are given. We discuss the possibility of using digital image processing methods in the analysis of the materials, as well as their essential task groups: improving image quality, segmentation, morphological transformations and image recognition. The developed analysis model of nitrided layer formation, covering image processing and analysis techniques as well as selected methods of artificial intelligence, is presented. The model is divided into stages, which are formalized in order to better reproduce their actions. A validation of the presented method is performed. The advantages and limitations of the developed solution, as well as the possibilities of its practical use, are listed. PMID:28773389
NASA Technical Reports Server (NTRS)
2003-01-01
This sequence of three images in northern Colorado was taken by NASA's Airborne Synthetic Aperture Radar (AirSar) for the joint NASA-National Oceanic and Atmospheric Administration Cold Land Processes Experiment. The images were produced from data acquired on February 19, 21 and 23, 2002 (top to bottom), and demonstrate the effects of snow on the radar backscatter at different frequencies. The images are centered at 40 degrees north latitude and 106 degrees west longitude, 12 kilometers (7.5 miles) west of the town of Fraser. The colors red, green and blue indicate the relative total power of the radar backscatter at P-, L-, and C-bands, respectively.
The top image was acquired before snowfall; the middle image was acquired the morning after the snow. When the snow melted, the most prominent changes were visible and can be seen in the bottom image. In this image, melting snow allows less of the radar signal to backscatter and some features appear darker. The Cold Land Processes Experiment is a multi-year experiment to study how snow processes work and how snow-covered areas affect weather and climate. Fraser, Colo., is one of three study areas in northern Colorado and southern Wyoming providing ideal natural laboratories for snow research. AirSar flies aboard a NASA DC-8 based at NASA's Dryden Flight Research Center, Edwards, Calif. Built, operated and managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., AirSar is part of NASA's Earth Science Enterprise program. JPL is a division of the California Institute of Technology in Pasadena.
Automated Formosat Image Processing System for Rapid Response to International Disasters
NASA Astrophysics Data System (ADS)
Cheng, M. C.; Chou, S. C.; Chen, Y. C.; Chen, B.; Liu, C.; Yu, S. J.
2016-06-01
FORMOSAT-2, Taiwan's first remote sensing satellite, was successfully launched in May 2004 into a Sun-synchronous orbit at an altitude of 891 kilometers. With its daily revisit capability, the 2-m panchromatic and 8-m multi-spectral resolution images captured have been used for research and operations in various societal benefit areas. This paper details the orchestration of tasks conducted at different institutions in Taiwan in efforts to respond to international disasters. The institutions involved include the space agency, the National Space Organization (NSPO); the Center for Satellite Remote Sensing Research of National Central University; the GIS Center of Feng-Chia University; and the National Center for High-performance Computing. Since each institution has its own mandate, the coordinated tasks range from receiving emergency observation requests, scheduling and tasking of satellite operations, and downlink to ground stations, through image processing including data injection and ortho-rectification, to delivery of image products. With lessons learned from working with international partners, the FORMOSAT Image Processing System has been extensively automated and streamlined with the goal of shortening the time between request and delivery. The integrated team has developed an Application Interface to its system platform that provides functions for searching the archive catalogue, requesting data services, mission planning, inquiring about service status, and downloading images. This automated system enables timely image acquisition and substantially increases the value of the data products. An example outcome of these efforts, the recent response supporting Sentinel Asia during the Nepal earthquake, is demonstrated herein.
NASA Astrophysics Data System (ADS)
Liu, Likun
2018-01-01
In the field of remote sensing image processing, segmentation is a preliminary step for later analysis, including semi-automatic human interpretation and fully automatic machine recognition and learning. Since 2000, object-oriented remote sensing image processing methods and their underlying ideas have prevailed. The core of the approach is the Fractal Net Evolution Approach (FNEA) multi-scale segmentation algorithm. This paper focuses on the study and improvement of this algorithm: it analyzes existing segmentation algorithms and selects the watershed algorithm as the optimal initialization. The algorithm is then modified by adjusting an area parameter and further combining the area parameter with a heterogeneity parameter. Several experiments show that the modified FNEA algorithm achieves better segmentation results than both a traditional pixel-based method (an FCM algorithm based on neighborhood information) and the plain combination of FNEA and watershed.
Study of pipe thickness loss using a neutron radiography method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohamed, Abdul Aziz; Wahab, Aliff Amiru Bin; Yazid, Hafizal B.
2014-02-12
The purpose of this preliminary work is to study thickness changes in objects using neutron radiography. In carrying out the project, the radiography technique was studied. The experiment was performed at the NUR-2 facility at the TRIGA research reactor of the Malaysian Nuclear Agency, Malaysia. Test samples of varying materials were used. The samples were radiographed using the direct technique, and radiographic images were recorded on nitrocellulose film. The films obtained were digitized for processing and analysis. Digital processing was performed on the images using the Isee! software to produce better images for analysis. The thickness changes in the images were measured and compared with the real thicknesses of the objects. From the data collected, the percentage differences between measured and real thickness are below 2%, a very low variation from the original values, thereby verifying the neutron radiography technique used in this project.
Systems Biology-Driven Hypotheses Tested In Vivo: The Need to Advance Molecular Imaging Tools.
Verma, Garima; Palombo, Alessandro; Grigioni, Mauro; La Monaca, Morena; D'Avenio, Giuseppe
2018-01-01
Processing and interpretation of biological images may provide invaluable insights into complex, living systems because images capture the overall dynamics as a "whole." Therefore, "extraction" of key quantitative morphological parameters could be, at least in principle, helpful in building a reliable systems biology approach to understanding living objects. Molecular imaging tools for systems biology models have attained widespread usage in modern experimental laboratories. Here, we provide an overview of advances in computational technology and the different instrumentations focused on molecular image processing and analysis. Quantitative data analysis through various open-source software packages and algorithmic protocols provides a novel approach for modeling the experimental research program. We also highlight predictable future trends in methods for automatically analyzing biological data. Such tools will be very useful for understanding the detailed biological and mathematical expressions underlying in-silico systems biology processes and models.
Fractional domain varying-order differential denoising method
NASA Astrophysics Data System (ADS)
Zhang, Yan-Shan; Zhang, Feng; Li, Bing-Zhao; Tao, Ran
2014-10-01
Removal of noise is an important step in the image restoration process, and it remains a challenging problem in image processing. Denoising is a process used to remove noise from a corrupted image while retaining edges and other detailed features as much as possible. Recently, denoising in the fractional domain has become a hot research topic. The fractional-order anisotropic diffusion method can reduce blocky effects and preserve edges in image denoising, and it has received much interest in the literature. Based on this method, we propose a new method for image denoising in which a fractional differential of varying order, rather than constant order, is used. Theoretical analysis and experimental results show that, compared with the state-of-the-art fractional-order anisotropic diffusion method, the proposed fractional varying-order differential denoising model preserves structure and texture well while quickly removing noise, and yields good visual effects and a better peak signal-to-noise ratio.
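For orientation, the integer-order baseline that fractional-order and varying-order models generalize is classic Perona-Malik anisotropic diffusion; a minimal sketch follows, with illustrative parameter values and periodic boundaries for brevity.

```python
import numpy as np

def anisotropic_diffusion(img, iters=20, kappa=30.0, lam=0.2):
    """Classic integer-order Perona-Malik diffusion. The paper's model
    replaces the first-order differences below with fractional
    derivatives whose order varies across the image."""
    u = img.astype(float)
    for _ in range(iters):
        # Differences toward the four neighbours (periodic boundaries
        # via np.roll, for brevity).
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping conductance: small where the local gradient is
        # large, so edges diffuse slowly while flat-region noise is
        # smoothed away.
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```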
Fundamental Concepts of Digital Image Processing
DOE R&D Accomplishments Database
Twogood, R. E.
1983-03-01
The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in the volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has requirements unique from the others, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the acquisition of two-dimensional (2-D) computer-aided tomography (CAT) images: a medical decision might be made while the patient is still under observation rather than days later.
CR softcopy display presets based on optimum visualization of specific findings
NASA Astrophysics Data System (ADS)
Andriole, Katherine P.; Gould, Robert G.; Webb, W. R.
1999-07-01
The purpose of this research is to assess the utility of presets for computed radiography (CR) softcopy display based not on window/level settings but on image processing optimized for the visualization of specific findings, pathologies, etc. Clinical chest images are acquired using an Agfa ADC 70 CR scanner and transferred over the PACS network to an image processing station capable of performing multiscale contrast equalization. The optimal image processing settings per finding are developed in conjunction with a thoracic radiologist by manipulating the parameters of the multiscale image contrast amplification algorithm. Softcopy display of images processed with finding-specific settings is compared with the standard default presentation for fifty cases of each category. Comparisons are scored on a five-point scale, with positive one and two denoting that the standard presentation is preferred over the finding-specific preset, negative one and two denoting that the finding-specific preset is preferred over the standard presentation, and zero denoting no difference. Presets have been developed for pneumothorax, and clinical cases are currently being collected in preparation for formal clinical trials. Subjective assessments indicate a preference for the optimized-preset presentation of images over the standard default, particularly among inexperienced radiology residents and referring clinicians.
Image-based automatic recognition of larvae
NASA Astrophysics Data System (ADS)
Sang, Ru; Yu, Guiying; Fan, Weijun; Guo, Tiantai
2010-08-01
To date, quarantine pest recognition research has focused mainly on imagoes (adult insects). However, pests in their larval stage are latent, and larvae spread easily with the circulation of agricultural and forest products. This paper presents the recognition of larvae, as new research objects, by means of machine vision, image processing and pattern recognition. More visual information is retained and the recognition rate improved when color image segmentation is applied to images of larvae. Owing to its affine, perspective and brightness invariance, the scale-invariant feature transform (SIFT) is adopted for feature extraction. A neural network algorithm is utilized for pattern recognition, and automatic identification of larva images is successfully achieved with satisfactory results.
NASA Astrophysics Data System (ADS)
Patra Yosandha, Fiet; Adi, Kusworo; Edi Widodo, Catur
2017-06-01
In this research, the lung cancer target volume was calculated from computed tomography (CT) thorax images for the purpose of radiotherapy treatment planning. The target volume calculation covers the gross tumor volume (GTV), clinical target volume (CTV), planning target volume (PTV) and organs at risk (OAR). The target volume was calculated by summing the target area on each slice and multiplying the result by the slice thickness. Areas were obtained with digital image processing techniques, using an active contour segmentation method to delineate the target contours. The calculated volumes were 577.2 cm³ for the GTV, 769.9 cm³ for the CTV, 877.8 cm³ for the PTV, 618.7 cm³ for OAR 1, 1,162 cm³ for the right OAR 2, and 1,597 cm³ for the left OAR 2. These values indicate that the image processing techniques developed can be used to calculate lung cancer target volumes from CT thorax images. This research is expected to help doctors and medical physicists determine and contour the target volume quickly and precisely.
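The volume computation described above (sum of per-slice areas times slice thickness) reduces to a few lines once the active-contour masks are available; the pixel size, slice spacing, and toy masks below are illustrative assumptions.

```python
import numpy as np

def target_volume(masks, pixel_area_cm2, slice_thickness_cm):
    """Sum the segmented area on every CT slice, then multiply by the
    slice thickness, as described above for the GTV/CTV/PTV/OAR."""
    total_area = sum(float(mask.sum()) * pixel_area_cm2 for mask in masks)
    return total_area * slice_thickness_cm

# Toy stand-in for active-contour output: 40 slices, each with a
# 50 x 60 pixel target, 0.1 cm x 0.1 cm pixels, 0.5 cm slice spacing.
masks = []
for _ in range(40):
    m = np.zeros((512, 512), dtype=bool)
    m[200:250, 200:260] = True
    masks.append(m)

print(target_volume(masks, pixel_area_cm2=0.01, slice_thickness_cm=0.5))
# 3000 px * 0.01 cm^2 * 40 slices * 0.5 cm = 600.0 cm^3
```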
Time Series UAV Image-Based Point Clouds for Landslide Progression Evaluation Applications.
Al-Rawabdeh, Abdulla; Moussa, Adel; Foroutan, Marzieh; El-Sheimy, Naser; Habib, Ayman
2017-10-18
Landslides are major and constantly changing threats to urban landscapes and infrastructure. It is essential to detect and capture landslide changes regularly. Traditional methods for monitoring landslides are time-consuming, costly, dangerous, and the quality and quantity of the data is sometimes unable to meet the necessary requirements of geotechnical projects. This motivates the development of more automatic and efficient remote sensing approaches for landslide progression evaluation. Automatic change detection involving low-altitude unmanned aerial vehicle image-based point clouds, although proven, is relatively unexplored, and little research has been done in terms of accounting for volumetric changes. In this study, a methodology for automatically deriving change displacement rates, in a horizontal direction based on comparisons between extracted landslide scarps from multiple time periods, has been developed. Compared with the iterative closest projected point (ICPP) registration method, the developed method takes full advantage of automated geometric measuring, leading to fast processing. The proposed approach easily processes a large number of images from different epochs and enables the creation of registered image-based point clouds without the use of extensive ground control point information or further processing such as interpretation and image correlation. The produced results are promising for use in the field of landslide research.
Adaptive Filtering in the Wavelet Transform Domain Via Genetic Algorithms
2004-08-01
2. BACKGROUND. The image processing research conducted at the AFRL/IFTA Reconfigurable Computing Laboratory concerns the inverse transform process, which maps coefficients from the wavelet domain back into the original signal domain; in other words, the inverse transform produces the original signal x(t) from its wavelet coefficients. A genetic algorithm is used to evolve the coefficients for an inverse wavelet transform such that the MSE of images reconstructed by this inverse transform is significantly less than the mean squared error of the standard reconstruction.
Automatic high throughput empty ISO container verification
NASA Astrophysics Data System (ADS)
Chalmers, Alex
2007-04-01
Encouraging results are presented for the automatic analysis of radiographic images of a continuous stream of ISO containers to confirm that they are truly empty. A series of image processing algorithms is described; the algorithms process real-time data acquired during the actual inspection of each container and assign each container to one of the classes "empty", "not empty" or "suspect threat". This research is one step towards achieving fully automated analysis of cargo containers.
Reducing uncertainty in wind turbine blade health inspection with image processing techniques
NASA Astrophysics Data System (ADS)
Zhang, Huiyi
Structural health inspection has been widely applied in the operation of wind farms to find early cracks in wind turbine blades (WTBs). Increased numbers of turbines and expanded rotor diameters are driving up the workloads and safety risks for site employees. It is therefore important to automate the inspection process and to minimize the uncertainties involved in routine blade health inspection. In addition, crack documentation and trending are vital to assessing rotor blade and turbine reliability over the 20-year design life span. A new crack recognition and classification algorithm is described that can support automated structural health inspection of the surface of large composite WTBs. The first part of the study investigated the feasibility of digital image processing in WTB health inspection and defined the capability of numerically detecting cracks as small as hairline thickness. The second part identified and analyzed the uncertainty of the digital image processing method. A self-learning algorithm was proposed to recognize and classify cracks without comparing a blade image to a library of crack images. The last part of the research quantified the uncertainty in the field conditions and the image processing methods.
Color Image Processing and Object Tracking System
NASA Technical Reports Server (NTRS)
Klimek, Robert B.; Wright, Ted W.; Sielken, Robert S.
1996-01-01
This report describes a personal-computer-based system for automatic and semiautomatic tracking of objects on film or video tape, developed to meet the needs of the Microgravity Combustion and Fluids Science Research Programs at the NASA Lewis Research Center. The system consists of individual hardware components working under computer control to achieve a high degree of automation. The most important hardware components include 16-mm and 35-mm film transports, a high-resolution digital camera mounted on an x-y-z micro-positioning stage, an S-VHS tapedeck, a Hi8 tapedeck, a video laserdisk, and a framegrabber. All of the image input devices are remotely controlled by a computer. Software was developed to integrate the overall operation of the system, including device frame incrementation, grabbing of image frames, image processing of the object's neighborhood, locating the position of the object being tracked, and storing the coordinates in a file. This process is performed repeatedly until the last frame is reached. Several different tracking methods are supported. To illustrate the process, two representative applications of the system are described. These applications represent typical uses of the system and include tracking the propagation of a flame front and tracking the movement of a liquid-gas interface with extremely poor visibility.
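A minimal sketch of one such tracking loop, using normalized cross-correlation template matching in OpenCV rather than the system's actual tracking methods, is shown below; the file names and window sizes are illustrative, and bounds checks near the image edges are omitted.

```python
import cv2

cap = cv2.VideoCapture("flame_front.avi")   # digitized film or tape
ok, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
template = gray[100:140, 100:140]           # user-selected neighbourhood

with open("track.csv", "w") as log:
    while True:
        ok, frame = cap.read()
        if not ok:
            break                            # last frame reached
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Normalized cross-correlation of the template over the frame.
        score = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, (x, y) = cv2.minMaxLoc(score)
        log.write(f"{x},{y}\n")              # store coordinates in a file
        template = gray[y:y + 40, x:x + 40]  # re-grab the neighbourhood
```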
NASA Astrophysics Data System (ADS)
Zhao, Libo; Xia, Yong; Hebibul, Rahman; Wang, Jiuhong; Zhou, Xiangyang; Hu, Yingjie; Li, Zhikang; Luo, Guoxi; Zhao, Yulong; Jiang, Zhuangde
2018-03-01
This paper presents an experimental study using image processing to investigate the width and width uniformity of sub-micrometer polyethylene oxide (PEO) lines fabricated by the near-field electrospinning (NFES) technique. An adaptive thresholding method was developed to determine the optimal gray values for accurately extracting the profiles of printed lines from the original optical images, and its feasibility was demonstrated. The proposed thresholding method is believed to work by exploiting statistical properties of the image and eliminating halo-induced errors. The triangular method and the relative standard deviation (RSD) were introduced to calculate line width and width uniformity, respectively. Based on these image processing methods, the effects of process parameters, including substrate speed (v), applied voltage (U), nozzle-to-collector distance (H), and syringe pump flow rate (Q), on the width and width uniformity of printed lines were discussed. The results help promote the NFES technique for fabricating high-resolution micro- and sub-microscale lines and also inform optical image processing at the sub-micrometer level.
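Once the thresholding has produced a binary line image, width and width uniformity reduce to per-column pixel counts and the relative standard deviation (RSD = 100 x std/mean); a sketch under those assumptions follows, with an illustrative pixel size (the paper's triangular width method is not reproduced).

```python
import numpy as np

def line_width_and_rsd(binary, pixel_size_nm):
    """Measure a roughly horizontal printed line: the width at each
    column is the count of foreground pixels, and uniformity is the
    relative standard deviation (RSD) of those widths."""
    widths = binary.sum(axis=0) * pixel_size_nm   # width per column
    widths = widths[widths > 0]                   # ignore empty columns
    rsd = 100.0 * widths.std() / widths.mean()
    return widths.mean(), rsd

# Toy binary image of a line about 8 px thick with slight edge roughness.
img = np.zeros((64, 200), dtype=np.uint8)
img[28:36, :] = 1
img[27, ::7] = 1  # roughness on the upper edge
mean_w, rsd = line_width_and_rsd(img, pixel_size_nm=50)
print(f"mean width = {mean_w:.0f} nm, RSD = {rsd:.2f}%")
```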
Image recognition of clipped stigma traces in rice seeds
NASA Astrophysics Data System (ADS)
Cheng, F.; Ying, YB
2005-11-01
The objective of this research is to develop an algorithm to recognize clipped stigma traces in rice seeds using image processing. First, the micro-configuration of clipped stigma traces was observed with a scanning electron microscope. Then images of rice seeds were acquired with a color machine vision system. A digital image processing algorithm based on morphological operations and the Hough transform was developed to detect the occurrence of clipped stigma traces. Five varieties (Jinyou402, Shanyou10, Zhongyou207, Jiayou and you3207) were evaluated. The algorithm was implemented on all image sets as a Matlab 6.5 procedure. The results showed that the algorithm achieved an average accuracy of 96% and proved insensitive to the different rice seed varieties.
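A minimal OpenCV sketch of the morphology-plus-Hough pipeline named above might look as follows; the file name, kernel size, and Hough parameters are illustrative assumptions rather than the authors' Matlab procedure.

```python
import cv2
import numpy as np

img = cv2.imread("rice_seed.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Binarize and clean up with morphological opening and closing.
_, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
bw = cv2.morphologyEx(bw, cv2.MORPH_OPEN, kernel)
bw = cv2.morphologyEx(bw, cv2.MORPH_CLOSE, kernel)

# A clipped stigma leaves a short linear trace; probabilistic Hough
# looks for line segments in the edge map of the cleaned image.
edges = cv2.Canny(bw, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=20,
                        minLineLength=10, maxLineGap=3)
print("clipped trace detected" if lines is not None else "no trace")
```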
[Quantitative data analysis for live imaging of bone].
Seno, Shigeto
Because bone is a hard tissue, it has long been difficult to observe the interior of living bone tissue. With progress in microscopy and fluorescent probe technology in recent years, it has become possible to observe the activities of the various cells that form bone tissue. On the other hand, the quantitative increase in data and the diversification and complexity of the images make it difficult to perform quantitative analysis by visual inspection, and the development of methodologies for microscopic image processing and data analysis has been expected. In this article, we introduce the research field of bioimage informatics, the boundary area between biology and information science, and then outline the basic image processing technology for quantitative analysis of live imaging data of bone.
From Mars to man - Biomedical research at the Jet Propulsion Laboratory
NASA Technical Reports Server (NTRS)
Beckenbach, E. S.
1984-01-01
In the course of the unmanned exploration of the solar system, which the California Institute of Technology's Jet Propulsion Laboratory has managed for NASA, major advances in computerized image processing, materials research, and miniature electronics design have been accomplished. This presentation shows some of the imaging results from space exploration missions, as well as biomedical research tasks based on these technologies. Among other topics, the use of polymeric microspheres in cancer therapy is discussed. Also included are ceramic applications in prosthesis development, laser applications in the treatment of coronary artery disease, multispectral imaging as used in the diagnosis of thermal burn injury, and examples of telemetry systems as applied to biological systems.
Image correlation method for DNA sequence alignment.
Curilem Saldías, Millaray; Villarroel Sassarini, Felipe; Muñoz Poblete, Carlos; Vargas Vásquez, Asticio; Maureira Butler, Iván
2012-01-01
The complexity of searches and the volume of genomic data make sequence alignment one of bioinformatics' most active research areas. New alignment approaches have incorporated digital signal processing techniques; among these, correlation methods are highly sensitive. This paper proposes a novel sequence alignment method based on 2-dimensional images, where each nucleic acid base is represented as a pixel of fixed gray intensity. Query and known database sequences are coded into their pixel representations, and sequence alignment is handled as an object-recognition-in-a-scene problem: the query becomes the object and the database the scene. An image correlation process is carried out to search for the best match between them. Given that this procedure can be implemented in an optical correlator, the correlation could eventually be accomplished at light speed. This paper presents an initial research stage in which results were obtained "digitally" by simulating an optical correlation of DNA sequences represented as images. A total of 303 queries (with lengths varying from 50 to 4500 base pairs) and 100 scenes, each represented by a 100 x 100 image (in total, a one-million-base-pair database), were considered for the image correlation analysis. The correlations reached very high sensitivity (99.01%) and specificity (98.99%) and outperformed BLAST as mutation numbers increased. However, digital correlation was about a hundred times slower than BLAST. We are currently starting an initiative to evaluate the correlation speed of a real experimental optical correlator, through which we expect to fully exploit the light-speed properties of optical correlation. As the optical correlator works jointly with a computer, the digital algorithms should also be optimized. The results presented in this paper are encouraging and support the study of image correlation methods for sequence alignment.
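A minimal sketch of the encoding-and-correlation idea, with assumed gray values and image widths: each base maps to a fixed gray level, the query becomes a small "object" image, the database a larger "scene", and alignment candidates appear as peaks in the 2-D correlation surface.

```python
# Encode DNA sequences as grayscale images and locate the best match of a
# query in a database scene via 2-D cross-correlation.
import numpy as np
from scipy.signal import correlate2d

GRAY = {'A': 0.25, 'C': 0.5, 'G': 0.75, 'T': 1.0}   # assumed base-to-gray mapping

def to_image(seq, width=100):
    """Pack a sequence row-wise into a width-column grayscale image."""
    vals = np.array([GRAY[b] for b in seq])
    rows = int(np.ceil(len(vals) / width))
    img = np.zeros(rows * width)
    img[:len(vals)] = vals
    return img.reshape(rows, width)

query = to_image("ACGTACGTAC", width=10)             # small "object" image
scene = to_image("TTACGTACGTACTT" * 10, width=10)    # larger "scene" image
corr = correlate2d(scene - scene.mean(), query - query.mean(), mode='valid')
best = np.unravel_index(np.argmax(corr), corr.shape) # best-matching offset in the scene
```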
Superfast 3D shape measurement of a flapping flight process with motion based segmentation
NASA Astrophysics Data System (ADS)
Li, Beiwen
2018-02-01
Flapping flight has drawn interest from fields including biology, aerodynamics, and robotics. For such research, digital fringe projection using defocused binary image projection offers superfast (e.g., several kHz) measurement capabilities with a digital micromirror device, yet its measurement quality is still affected by the motion of flapping flight. This research proposes a novel computational framework for dynamic 3D shape measurement of a flapping flight process: the fast- and slow-motion parts are segmented and separately reconstructed with the Fourier transform and phase-shifting methods, respectively. Experiments demonstrate its success by measuring a flapping-wing robot (image acquisition rate: 5,000 Hz; flapping speed: 25 cycles/second).
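As an illustration of the Fourier-transform step used for the fast-motion part (a generic Fourier-transform profilometry sketch, not the paper's implementation), the wrapped phase can be recovered from a single fringe image by isolating the carrier band; the carrier frequency and band half-width below are assumptions.

```python
# Recover the wrapped phase of a single fringe image: FFT along the
# phase-encoding axis, keep only the +1st-order carrier band, inverse FFT,
# and take the angle of the resulting analytic signal.
import numpy as np

def ft_wrapped_phase(fringe, carrier, half_width):
    """fringe: 2-D fringe image; carrier: fringe frequency in cycles per image width."""
    F = np.fft.fft(fringe, axis=1)
    band = np.zeros_like(F)
    band[:, carrier - half_width:carrier + half_width + 1] = \
        F[:, carrier - half_width:carrier + half_width + 1]
    analytic = np.fft.ifft(band, axis=1)
    return np.angle(analytic)   # wrapped phase, to be unwrapped and converted to depth
```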
Color management in textile application
NASA Astrophysics Data System (ADS)
De Lucia, Maurizio; Vannucci, Massimiliano; Buonopane, Massimo; Fabroni, Cosimo; Fabrini, Francesco
2002-03-01
The aim of this research was to study a system for acquiring and processing images capable of comparing colored wool against a reference specimen, in order to judge conformity using objective parameters. The first step of the research was to understand and analyze the problem in depth: there are numerous technical, physical, cultural, biological, and also psychological implications arising from the attempt to give a quantitative appraisal of color. In national and international scientific and technological research, little has been done regarding the measurement of color through digital processing of images from linear CCDs. The reason is fundamentally technological: only in recent years has low-cost equipment capable of acquiring and processing images with adequate performance and quality appeared on the market. The work described permitted the creation of a first prototype system for color measurement using linear CCD devices, comprising: hardware identification for a series of laboratory tests and experiments; verification of the device in a textile facility; and statistical analysis of the collected data and of the employed models.
Overall design of imaging spectrometer on-board light aircraft
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhongqi, H.; Zhengkui, C.; Changhua, C.
1996-11-01
Aerial remote sensing is the earliest remote sensing technical system and has developed rapidly in recent years. Its development was dominated by high- to medium-altitude platforms in the past, and it is now characterized by a diversity of platforms, including aircraft of high, medium, and low flying altitude, helicopters, airships, remotely controlled airplanes, gliders, and balloons. The most widely used and rapidly developing platform recently is the light aircraft. In the late 1970s, the Beijing Research Institute of Uranium Geology began aerial photography and geophysical survey using light aircraft, and in the 1990s it put forward the overall design scheme of a light aircraft imaging spectral application system (LAISAS). LAISAS comprises four subsystems: the measuring platform, the data acquisition subsystem, the ground testing subsystem, and the data processing subsystem. The principal instruments of LAISAS include a measuring platform controlled by an inertial gyroscope, an aerial spectrometer with high spectral resolution, an imaging spectrometer, a 3-channel scanner, a 128-channel imaging spectrometer, GPS, an illuminance meter, and devices for atmospheric parameter measurement, ground testing, and data correction and processing. LAISAS features integrity, spanning from data acquisition through data processing to application; stability, which guarantees image quality and is ensured by the measuring, ground-testing, and in-door data correction systems; exemplary integration of GIS, GPS, and image processing technology; and practicality, embodied in its flexibility and high performance-to-cost ratio. It can therefore be used in fundamental remote sensing research and in large-scale mapping for resource exploration, environmental monitoring, disaster prediction, and military purposes.
Toward a perceptual image quality assessment of color quantized images
NASA Astrophysics Data System (ADS)
Frackiewicz, Mariusz; Palus, Henryk
2018-04-01
Color image quantization is an important operation in the field of color image processing. In this paper, we consider new perceptual image quality metrics for the assessment of quantized images. These metrics, e.g., DSCSI, MDSIs, MDSIm, and HPSI, achieve the highest correlation coefficients with MOS in tests on six publicly available image databases. The research was limited to images distorted by two types of compression: JPG and JPG2K. Statistical analysis of the correlation coefficients, based on the Friedman test and post-hoc procedures, showed that the differences between the four new perceptual metrics are not statistically significant.
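A minimal sketch of this kind of statistical comparison, with placeholder data in place of the actual metric scores and MOS values: per-metric rank correlations with MOS are computed, and the Friedman test checks whether the metrics' rankings differ.

```python
# Compare several quality metrics against mean opinion scores (MOS):
# Spearman rank correlation per metric, then a Friedman test across metrics.
import numpy as np
from scipy.stats import friedmanchisquare, spearmanr

rng = np.random.default_rng(0)
mos = rng.uniform(1, 5, 60)                       # placeholder MOS for 60 images
metrics = {name: mos + rng.normal(0, s, 60)       # placeholder metric scores
           for name, s in [("DSCSI", 0.3), ("MDSIs", 0.35),
                           ("MDSIm", 0.35), ("HPSI", 0.4)]}

for name, score in metrics.items():
    rho, _ = spearmanr(score, mos)                # rank correlation with MOS
    print(f"{name}: SROCC = {rho:.3f}")

stat, p = friedmanchisquare(*metrics.values())    # do the metrics differ at all?
print(f"Friedman chi^2 = {stat:.2f}, p = {p:.3g}")
```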
Zhang, Xintong; Bi, Anyao; Gao, Quansheng; Zhang, Shuai; Huang, Kunzhu; Liu, Zhiguo; Gao, Tang; Zeng, Wenbin
2016-01-20
The olfactory system serves as a genetically and anatomically tractable model for studying how sensory input is translated into behavioral output. Some neurologic diseases are considered to be related to olfactory disturbance, notably Alzheimer's disease, Parkinson's disease, and multiple sclerosis. However, it is still unclear how the olfactory system is involved in disease generation and in the delivery of olfactory signals. Molecular imaging, a modern multidisciplinary technology, provides valid tools for the early detection and characterization of diseases, evaluation of treatment, and study of biological processes in living subjects, since it applies specific molecular probes to produce data on biological processes at the cellular and subcellular levels. Recently, molecular imaging has come to play a key role in studying the activation of the olfactory system, and it could thereby help to prevent or delay some diseases. Herein, we present a comprehensive review of research progress on imaging probes for visualizing the olfactory system, classified by imaging modality, including PET, MRI, and optical imaging. Additionally, the probes' design, sensing mechanisms, and biological applications are discussed. Finally, we provide an outlook for future studies in this field.
e-phenology: monitoring leaf phenology and tracking climate changes in the tropics
NASA Astrophysics Data System (ADS)
Morellato, Patrícia; Alberton, Bruna; Almeida, Jurandy; Alex, Jefersson; Mariano, Greice; Torres, Ricardo
2014-05-01
The e-phenology project is a multidisciplinary effort combining research in computer science and phenology. Its goal is to attack theoretical and practical problems involving the use of new technologies for remote phenological observation, aiming to detect local environmental changes. It is geared towards three objectives: (a) the use of new technologies for environmental monitoring based on remote phenology monitoring systems; (b) the creation of a protocol for a Brazilian long-term phenology monitoring program and for integration across disciplines, advancing our knowledge of seasonal responses to climate change within the tropics; and (c) the provision of models, methods, and algorithms to support the management, integration, and analysis of data from remote phenology systems. The research team is composed of computer scientists and biology researchers in phenology. Our first results include: Phenology towers - We set up the first phenology tower in our core cerrado-savanna study site (site 1) at Itirapina, São Paulo, Brazil. The tower received a complete climatic station and a digital camera, set up to take a daily sequence of images (five images per hour, from 6:00 to 18:00 h). We set up similar phenology towers with climatic stations and cameras in five more sites: cerrado-savanna (site 2, Pé de Gigante, SP), cerrado grassland (site 3, Itirapina, SP), rupestrian fields (site 4, Serra do Cipó, MG), seasonal forest (site 5, Angatuba, SP), and Atlantic rainforest (site 6, Santa Virginia, SP). Phenology database - We finished modeling and validating a phenology database that stores ground phenology and near-remote phenology data, and we are carrying out the implementation with data ingestion. Remote phenology and image processing - We performed the first analyses of phenology at cerrado sites 1 to 4 derived from digital images. Analyses were conducted by extracting color information (the RGB red, green, and blue color channels) from selected parts of each image, named regions of interest (ROI), using mainly the green color channel; a daily sequence of images (6:00 to 18:00 h) was analyzed. Our results are innovative and indicate great variation in the color-change response of tropical trees. We validated the camera phenology against our on-the-ground direct observations at the core cerrado site 1. We are developing image processing software to automatically process the digital images and generate the time series for further analyses. New techniques and image features, such as machine learning and visual rhythms, have been used to extract seasonal features from the data: machine learning was successfully applied to identify similar species within an image, and visual rhythms are emerging as a new analytic tool for phenological interpretation. Next research steps include the analysis of longer data series, correlation with local climatic data, analysis and comparison of patterns among different vegetation sites, the preparation of a comprehensive protocol for digital camera phenology, and the development of new technologies to assess vegetation changes using digital cameras. Support: FAPESP-Microsoft Research, CNPq, CAPES.
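A minimal sketch of the color-channel analysis described above, assuming the commonly used green chromatic coordinate as the greenness index; the ROI bounds and file list are placeholders.

```python
# For each camera image, average R, G, B over a region of interest and
# reduce them to the green chromatic coordinate G/(R+G+B); one value per
# image yields the daily greenness time series for a site.
import numpy as np
from PIL import Image

def green_chromatic_coordinate(path, roi):
    """roi = (top, bottom, left, right) in pixels."""
    img = np.asarray(Image.open(path), dtype=float)
    t, b, l, r = roi
    R, G, B = (img[t:b, l:r, c].mean() for c in range(3))
    return G / (R + G + B)

# series = [green_chromatic_coordinate(p, (100, 400, 200, 600)) for p in image_paths]
```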
Standardization efforts of digital pathology in Europe.
Rojo, Marcial García; Daniel, Christel; Schrader, Thomas
2012-01-01
EURO-TELEPATH is a European COST Action IC0604. It started in 2007 and will end in November 2011. Its main objectives are evaluating and validating the common technological framework and communication standards required to access, transmit, and manage digital medical records by pathologists and other medical specialties in a networked environment. Working Group 1, "Business Modelling in Pathology," has designed main pathology processes - Frozen Study, Formalin Fixed Specimen Study, Telepathology, Cytology, and Autopsy - using Business Process Modelling Notation (BPMN). Working Group 2 has been dedicated to promoting the application of informatics standards in pathology, collaborating with Integrating Healthcare Enterprise (IHE), Digital Imaging and Communications in Medicine (DICOM), Health Level Seven (HL7), and other standardization bodies. Health terminology standardization research has become a topic of great interest. Future research work should focus on standardizing automatic image analysis and tissue microarrays imaging.
Fiji: an open-source platform for biological-image analysis.
Schindelin, Johannes; Arganda-Carreras, Ignacio; Frise, Erwin; Kaynig, Verena; Longair, Mark; Pietzsch, Tobias; Preibisch, Stephan; Rueden, Curtis; Saalfeld, Stephan; Schmid, Benjamin; Tinevez, Jean-Yves; White, Daniel James; Hartenstein, Volker; Eliceiri, Kevin; Tomancak, Pavel; Cardona, Albert
2012-06-28
Fiji is a distribution of the popular open-source software ImageJ focused on biological-image analysis. Fiji uses modern software engineering practices to combine powerful software libraries with a broad range of scripting languages to enable rapid prototyping of image-processing algorithms. Fiji facilitates the transformation of new algorithms into ImageJ plugins that can be shared with end users through an integrated update system. We propose Fiji as a platform for productive collaboration between computer science and biology research communities.
Electromagnetic Imaging Methods for Nondestructive Evaluation Applications
Deng, Yiming; Liu, Xin
2011-01-01
Electromagnetic nondestructive tests are important and widely used within the field of nondestructive evaluation (NDE). Recent advances in sensing technology, in hardware and software dedicated to imaging and image processing, and in material sciences have greatly expanded the application fields, made systems design more sophisticated, and made the potential of electromagnetic NDE imaging seem virtually unlimited. This review provides a comprehensive summary of research work on electromagnetic imaging methods for NDE applications, followed by a summary and discussion of future directions. PMID:22247693
NASA Technical Reports Server (NTRS)
Lawson, R. Paul
2000-01-01
SPEC Incorporated designed, built, and operated a new instrument, called a pi-Nephelometer, on the NASA DC-8 for the SUCCESS field project. The pi-Nephelometer casts an image of a particle onto a 400,000-pixel solid-state camera, freezing the particle's motion with a 25 ns pulsed, high-power (60 W) laser diode. Unique optical imaging and particle detection systems precisely detect particles and define the depth of field so that at least one particle in the image is almost always in focus. A powerful image processing engine processes frames from the solid-state camera and identifies and records regions of interest (i.e., particle images) in real time. Images of ice crystals are displayed and recorded with 5-micron pixel resolution. In addition, a scattered-light system simultaneously measures the scattering phase function of the imaged particle; it consists of twenty-eight 1-mm optical fibers connected to microlenses bonded to the surfaces of avalanche photodiodes (APDs). Data collected with the pi-Nephelometer during the SUCCESS field project were reported in a special issue of Geophysical Research Letters. The pi-Nephelometer provided the basis for development of a commercial imaging probe, called the cloud particle imager (CPI), which has been installed on several research aircraft and used in more than a dozen field programs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azevedo, S.G.; Fitch, J.P.
1987-10-21
Conventional software interfaces that use imperative computer commands or menu interactions are often restrictive environments when used for researching new algorithms or analyzing processed experimental data. We found this to be true with current signal-processing software (SIG). As an alternative, ''functional language'' interfaces provide features such as command nesting for a more natural interaction with the data. The Image and Signal LISP Environment (ISLE) is an example of an interpreted functional language interface based on Common LISP. Advantages of ISLE include multidimensional and multiple data-type independence through dispatching functions, dynamic loading of new functions, and connections to artificial intelligence (AI) software. 10 refs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azevedo, S.G.; Fitch, J.P.
1987-05-01
Conventional software interfaces which utilize imperative computer commands or menu interactions are often restrictive environments when used for researching new algorithms or analyzing processed experimental data. We found this to be true with current signal processing software (SIG). Existing ''functional language'' interfaces provide features such as command nesting for a more natural interaction with the data. The Image and Signal Lisp Environment (ISLE) will be discussed as an example of an interpreted functional language interface based on Common LISP. Additional benefits include multidimensional and multiple data-type independence through dispatching functions, dynamic loading of new functions, and connections to artificial intelligence software.
Moreno-Martínez, Francisco Javier; Montoro, Pedro R
2012-01-01
This work presents a new set of 360 high-quality colour images belonging to 23 semantic subcategories. Two hundred and thirty-six Spanish speakers named the items and also provided data on seven relevant psycholinguistic variables: age of acquisition, familiarity, manipulability, name agreement, typicality, and visual complexity. Furthermore, we also present lexical frequency data derived from Internet search hits. Apart from the high number of variables evaluated, each known to affect stimulus processing, this new set presents important advantages over other similar image corpora: (a) it offers a broad number of subcategories and images, which will permit researchers to select stimuli of appropriate difficulty as required (e.g., to deal with problems derived from ceiling effects); (b) the use of coloured stimuli provides a more realistic, ecologically valid representation of real-life objects. In sum, this set of stimuli provides a useful tool for research on visual object- and word-processing, both in neurological patients and in healthy controls.
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Amanda M.; Daly, Don S.; Willse, Alan R.
The Automated Microarray Image Analysis (AMIA) Toolbox for MATLAB is a flexible, open-source microarray image analysis tool that allows the user to customize the analysis of sets of microarray images. This tool provides several methods of identifying and quantifying spot statistics, as well as extensive diagnostic statistics and images to identify poor data quality or processing. The open nature of this software allows researchers to understand the algorithms used to provide intensity estimates and to modify them easily if desired.
Colour application on mammography image segmentation
NASA Astrophysics Data System (ADS)
Embong, R.; Aziz, N. M. Nik Ab.; Karim, A. H. Abd; Ibrahim, M. R.
2017-09-01
The segmentation process is one of the most important steps in image processing and computer vision, since it is vital in the initial stage of image analysis. Segmentation of medical images involves complex structures and requires precise results, which are necessary for clinical diagnosis such as the detection of tumour, oedema, and necrotic tissue. Since mammography images are grayscale, researchers are looking at the effect of colour on the segmentation of medical images. Colour is known to play a significant role in the perception of object boundaries in non-medical colour images, and processing colour images requires handling more data, hence providing a richer description of objects in the scene; colour images contain about ten percent (10%) more edge information than their grayscale counterparts. Nevertheless, edge detection in colour images is more challenging than in grayscale images, as a colour space is a vector space. In this study, we applied red, green, yellow, and blue colour maps to grayscale mammography images with the purpose of testing the effect of colour on the segmentation of abnormality regions. We applied the segmentation process using the Fuzzy C-means algorithm and evaluated the percentage of average relative error of area for each colour type. The results showed that segmentation with a colour map can be performed successfully even for blurred and noisy images, and the size of the segmented abnormality region is reduced compared to segmentation without a colour map. The green colour map produced the smallest percentage of average relative error (10.009%), while the yellow colour map gave the largest (11.367%).
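A minimal numpy sketch of the Fuzzy C-means algorithm named above, applied to pixel values; the cluster count and fuzzifier m are illustrative assumptions, and applying a colour map first would simply make each sample a 3-vector instead of a scalar.

```python
# Standard Fuzzy C-means: alternate between the center update (membership-
# weighted means) and the membership update based on inverse distances.
import numpy as np

def fuzzy_cmeans(x, c=3, m=2.0, iters=100, seed=0):
    """x: (n_samples, n_features); returns memberships u (c, n) and centers (c, f)."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(x)))
    u /= u.sum(axis=0)                              # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(x[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))              # standard FCM membership update
        u /= u.sum(axis=0)
    return u, centers

# pixels = image.reshape(-1, 1).astype(float)
# labels = fuzzy_cmeans(pixels)[0].argmax(axis=0).reshape(image.shape)
```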
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Geyuan
My research projects focus on the application of photonics, optics, and micro-fabrication technology in energy-related fields. Photonic crystal fabrication research has the potential to help us generate and use light more efficiently. In order to fabricate active 3D woodpile photonic structure devices, a woodpile template is needed to enable the crystal growth process. We developed a silica woodpile template fabrication process based on a two-polymer transfer molding technique. A silica woodpile template is demonstrated to withstand temperatures up to 900 C. It provides a more economical way to explore making better 3D active woodpile photonic devices, such as 3D photonic light-emitting diodes (LEDs). Optical research on solar cell testing has the potential to make our energy generation more efficient and greener. PL imaging and LBIC mapping are used to measure CdTe solar cells with different back contacts. A strong correlation between PL image defects and LBIC map defects is observed, which opens up a potential application for PL imaging in fast solar cell inspection. A 2D laser IV scan demonstrates its usefulness in 2D parameter mapping; we show its ability to generate important information about local solar cell performance around PL image defects.
NASA Astrophysics Data System (ADS)
Jackson, Christopher Robert
"Lucky-region" fusion (LRF) is a synthetic imaging technique that has proven successful in enhancing the quality of images distorted by atmospheric turbulence. The LRF algorithm selects sharp regions of an image obtained from a series of short exposure frames, and fuses the sharp regions into a final, improved image. In previous research, the LRF algorithm had been implemented on a PC using the C programming language. However, the PC did not have sufficient sequential processing power to handle real-time extraction, processing and reduction required when the LRF algorithm was applied to real-time video from fast, high-resolution image sensors. This thesis describes two hardware implementations of the LRF algorithm to achieve real-time image processing. The first was created with a VIRTEX-7 field programmable gate array (FPGA). The other developed using the graphics processing unit (GPU) of a NVIDIA GeForce GTX 690 video card. The novelty in the FPGA approach is the creation of a "black box" LRF video processing system with a general camera link input, a user controller interface, and a camera link video output. We also describe a custom hardware simulation environment we have built to test the FPGA LRF implementation. The advantage of the GPU approach is significantly improved development time, integration of image stabilization into the system, and comparable atmospheric turbulence mitigation.
Imaging of cerebrovascular pathology in animal models of Alzheimer's disease
Klohs, Jan; Rudin, Markus; Shimshek, Derya R.; Beckmann, Nicolau
2014-01-01
In Alzheimer's disease (AD), vascular pathology may interact with neurodegeneration and thus aggravate cognitive decline. As the relationship between these two processes is poorly understood, research has increasingly focused on understanding the link between cerebrovascular alterations and AD. This has been spurred not least by the engineering of transgenic animals, which display pathological features of AD and develop cerebral amyloid angiopathy to various degrees. Transgenic models are versatile for investigating the role of amyloid deposition and vascular dysfunction and for evaluating novel therapeutic concepts. In addition, research has benefited from the development of novel imaging techniques capable of characterizing vascular pathology in vivo. They provide vascular structural read-outs and can assess the functional consequences of vascular dysfunction, as well as visualize and monitor the molecular processes underlying these pathological alterations. This article focuses on recent in vivo small-animal imaging studies addressing vascular aspects related to AD. With the technical advances of imaging modalities such as magnetic resonance, nuclear, and microscopic imaging, molecular, functional, and structural information related to vascular pathology can now be visualized in vivo in small rodents. Imaging vascular and parenchymal amyloid-β (Aβ) deposition as well as Aβ transport pathways has proven useful for characterizing their dynamics and elucidating their role in the development of cerebral amyloid angiopathy and AD. Structural and functional imaging read-outs have been employed to describe the deleterious effects of Aβ on vessel morphology, hemodynamics, and vascular integrity. More recent imaging studies have also addressed how inflammatory processes partake in the pathogenesis of the disease. Moreover, imaging can be pivotal in the search for novel therapies targeting the vasculature. PMID:24659966
Lewiss, Resa E; Chan, Wilma; Sheng, Alexander Y; Soto, Jorge; Castro, Alexandra; Meltzer, Andrew C; Cherney, Alan; Kumaravel, Manickam; Cody, Dianna; Chen, Esther H
2015-12-01
The appropriate selection and accurate interpretation of diagnostic imaging is a crucial skill for emergency practitioners. To date, the majority of the published literature and research on competency assessment comes from the subspecialty of point-of-care ultrasound. A group of radiologists, physicists, and emergency physicians convened at the 2015 Academic Emergency Medicine consensus conference to discuss and prioritize a research agenda related to education, assessment, and competency in ordering and interpreting diagnostic imaging. A set of questions was delineated for the continued development of an educational curriculum on diagnostic imaging for trainees and for competency assessment using specific methods based on current best practices. The research priorities were developed through an iterative consensus-driven process using a modified nominal group technique that culminated in an in-person breakout session. The four recommendations are: 1) develop a diagnostic imaging curriculum for emergency medicine (EM) residency training; 2) develop, study, and validate tools to assess competency in diagnostic imaging interpretation; 3) evaluate the role of simulation in education, assessment, and competency measures for diagnostic imaging; and 4) study the American College of Radiology Appropriateness Criteria, an evidence-based peer-reviewed resource for determining the use of diagnostic imaging, to maximize its value in EM. In this article, the authors review the supporting reliability and validity evidence and make specific recommendations for future research on the education, competency, and assessment of learning diagnostic imaging. © 2015 by the Society for Academic Emergency Medicine.
Development of an All-Purpose Free Photogrammetric Tool
NASA Astrophysics Data System (ADS)
González-Aguilera, D.; López-Fernández, L.; Rodriguez-Gonzalvez, P.; Guerrero, D.; Hernandez-Lopez, D.; Remondino, F.; Menna, F.; Nocerino, E.; Toschi, I.; Ballabeni, A.; Gaiani, M.
2016-06-01
Photogrammetry is currently facing challenges and changes mainly related to automation, ubiquitous processing, and variety of applications. Within an ISPRS Scientific Initiative, a team of researchers from USAL, UCLM, FBK, and UNIBO has developed an open photogrammetric tool, called GRAPHOS (inteGRAted PHOtogrammetric Suite). GRAPHOS allows one to obtain dense and metric 3D point clouds from terrestrial and UAV images. It encloses robust photogrammetric and computer vision algorithms with the following aims: (i) increase automation, allowing dense 3D point clouds to be obtained through a friendly and easy-to-use interface; (ii) increase flexibility, working with any type of image, scenario, and camera; (iii) improve quality, guaranteeing high accuracy and resolution; and (iv) preserve photogrammetric reliability and repeatability. Last but not least, GRAPHOS also has an educational component, reinforced with didactic explanations of the algorithms and their performance. The developments were carried out at different levels: GUI realization, image pre-processing, photogrammetric processing with weight parameters, dataset creation, and system evaluation. The paper presents in detail the developments of GRAPHOS with all its photogrammetric components, along with evaluation analyses based on various image datasets. GRAPHOS is distributed free of charge for research and educational purposes.
NASA Astrophysics Data System (ADS)
Utomo, Edy Setiyo; Juniati, Dwi; Siswono, Tatag Yuli Eko
2017-08-01
The aim of this research was to describe the mathematical visualization process of junior high school students in solving contextual problems, based on cognitive style. The mathematical visualization process was examined in terms of image generation, image inspection, image scanning, and image transformation. The subjects were eighth-grade students, categorized by the GEFT test (Group Embedded Figures Test, adopted from Witkin) as having a field-independent or field-dependent cognitive style, and communicative. Data were collected through a visualization test on a contextual problem and through interviews; validity was established through time triangulation. The data analysis addressed the aspects of mathematical visualization through steps of categorization, reduction, discussion, and conclusion. The results showed that field-independent and field-dependent subjects differed in responding to contextual problems. The field-independent subject produced representations in both 2D and 3D, while the field-dependent subject produced only 3D representations. The two subjects also perceived the swimming pool differently: the field-independent subject viewed it from the top, the field-dependent subject from the side. The field-independent subject chose a partition-object strategy, while the field-dependent subject chose a general-object strategy. Both subjects performed an object rotation to obtain the solution. This research serves as a reference for mathematical curriculum developers for junior high schools in Indonesia. In addition, teachers could develop students' mathematical visualization by using technology media or software, such as GeoGebra or portable Cabri, in learning.
The increasing influence of medical image processing in clinical neuroimaging.
Barillot, Christian
2007-01-20
This paper reviews the evolution of the clinical neuroinformatics domain in the past and gives an outlook on how this research field will evolve in clinical neurology (e.g., epilepsy, multiple sclerosis, dementia) and neurosurgery (e.g., image-guided surgery, intra-operative imaging, the definition of the operating room of the future). These issues, as addressed by the VisAGeS research team, are discussed in detail, and the benefits of a close collaboration between clinical scientists (radiologists, neurologists, and neurosurgeons) and computer scientists are shown to give adequate answers to the series of problems which need to be solved for a more effective use of medical images in clinical neurosciences.
Teaching by research at undergraduate schools: an experience
NASA Astrophysics Data System (ADS)
Costa, Manuel F. M.
1997-12-01
In this communication I report on a pedagogical experience undertaken in the 1995 Image Processing class of the Applied Physics course at the University of Minho. The learning process always requires the active, critical participation of the student in an essentially personal experience that should and must be rewarding and fulfilling. To us scientists, virtually nothing gives more pleasure and fulfillment than the research process; furthermore, it is our main way to improve our, and I stress our, knowledge. Thus I decided to center my undergraduate students' learning of the basics of digital image processing on a simple applied research program. The proposed project was to develop an inspection process to be introduced into a generic production line: the transverse distance between an object and the edge of the conveyor belt transporting it was to be measured, using optical triangulation combined with shadow analysis. The students were given almost complete liberty and responsibility; I limited myself to assessing the development of the project, orienting them, and pointing out different or pertinent points of view only when strictly necessary.
NASA Technical Reports Server (NTRS)
Beckenbach, E. S. (Editor)
1984-01-01
It is more important than ever that engineers have an understanding of the future needs of clinical and research medicine, and that physicians know something about probable future developments in instrumentation capabilities. Only by maintaining such a dialog can the most effective application of technological advances to medicine be achieved. This workshop attempted to provide this kind of information transfer in the limited field of diagnostic imaging. Biomedical research at the Jet Propulsion Laboratory is discussed, taking into account imaging results from space exploration missions, as well as biomedical research tasks based on these technologies. Attention is also given to current and future indications for magnetic resonance in medicine, high-speed quantitative digital microscopy, computer processing of radiographic images, computed tomography and its modern applications, positron emission tomography, and developments related to medical ultrasound.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kleissl, J.; Urquhart, B.; Ghonima, M.
During the University of California, San Diego (UCSD) Sky Imager Cloud Position Study, two UCSD Sky Imagers (USI) were deployed at the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility Southern Great Plains (SGP) research facility. The sky imagers were placed 1.7 km apart to allow stereographic determination of cloud height for clouds above approximately 1.5 km. Images with a 180-degree field of view were captured from both systems during daylight hours every 30 seconds, beginning on March 11, 2013 and ending on November 4, 2013. The spatial resolution of the images was 1,748 × 1,748 pixels, and the intensity resolution was 16 bits, using a high-dynamic-range capture process. The cameras use a fisheye lens, so the images are distorted following an equisolid-angle projection.
High Resolution Near Real Time Image Processing and Support for MSSS Modernization
NASA Astrophysics Data System (ADS)
Duncan, R. B.; Sabol, C.; Borelli, K.; Spetka, S.; Addison, J.; Mallo, A.; Farnsworth, B.; Viloria, R.
2012-09-01
This paper describes image enhancement software applications engineering development work performed in support of Maui Space Surveillance System (MSSS) Modernization. It also covers R&D and transition activity performed over the past few years with the objective of providing increased space situational awareness (SSA) capabilities, including Air Force Research Laboratory (AFRL) use of an FY10 Dedicated High Performance Investment (DHPI) cluster award, and our selection and planned use of an FY12 DHPI award. We provide an introduction to image processing of electro-optical (EO) telescope sensor data and a status overview of high-resolution image enhancement and near real time processing. We then describe recent image enhancement applications development and support for MSSS Modernization, with results to date, and end with a discussion of desired future development work and conclusions. Significant improvements to image enhancement processing have been realized over the past several years, including a key application that has achieved more than a 10,000-times speedup compared to the original R&D code, and a greater than 72-times speedup over the past few years. The latest version of this code maintains software efficiency for post-mission processing while being optimized for image processing of data from a new EO sensor at MSSS. Additional work has been performed to develop low-latency, near real time processing of data collected by the ground-based sensor during overhead passes of space objects.
Keeping your eye on the process: body image, older women, and countertransference.
Altschuler, Joanne; Katz, Anne D
2010-04-01
Research on body image and older women has grown in the past decade. However, there is a gap in the literature regarding body image, older women, and countertransference. This article provides 7 case examples of racially and ethnically diverse women over 60, drawn from MSW student and agency staff supervision, and participant feedback from a national conference on aging workshop. Themes related to loss and grief, adult daughter and aging mother issues, incest, anger, disability, personality disorders, phobic reactions, and shame are discussed. Recommendations and implications for social work practice, education and research are provided.
Texture Feature Extraction and Classification for Iris Diagnosis
NASA Astrophysics Data System (ADS)
Ma, Lin; Li, Naimin
Applying computer-aided techniques to iris image processing, and combining occidental iridology with traditional Chinese medicine, is a challenging research area in digital image processing and artificial intelligence. This paper proposes an iridology model consisting of iris image pre-processing, texture feature analysis, and disease classification. For pre-processing, a 2-step iris localization approach is proposed; for pathological feature extraction, a 2-D Gabor filter based texture analysis and a texture fractal dimension estimation method are proposed; and finally, support vector machines are constructed to recognize two typical diseases, namely alimentary canal disease and nervous system disease. Experimental results show that the proposed iridology diagnosis model is effective and promising for medical diagnosis and health surveillance, for both hospital and public use.
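A hedged sketch of the feature/classifier pairing named above: mean and variance of Gabor magnitude responses at a few frequencies and orientations feed a support vector machine. The frequencies, orientations, and the already-localized iris region are illustrative assumptions, not the paper's parameters.

```python
# Gabor-filter texture features for a localized iris region, classified
# with an SVM into one of two disease classes.
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

def gabor_features(region, freqs=(0.1, 0.2), n_theta=4):
    feats = []
    for f in freqs:
        for k in range(n_theta):
            real, imag = gabor(region, frequency=f, theta=k * np.pi / n_theta)
            mag = np.hypot(real, imag)              # magnitude of the complex response
            feats += [mag.mean(), mag.var()]        # simple texture statistics
    return np.array(feats)

# X = np.stack([gabor_features(r) for r in iris_regions]); y = disease_labels
# clf = SVC(kernel='rbf').fit(X, y)                 # 2-class disease recognition
```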
1991-01-01
The development of low-cost fabrication processes for high-performance composites is of paramount importance in the economical use of composites in...
This proposal offers to evaluate the feasibility of marrying multiscale image processing techniques to multisensor image data. The product would be a...
...biotechnology to the production of 4-hydroxybenzocyclobutene will allow bulk manufacture of this polymer precursor by more economical means than is
Standardizing Quality Assessment of Fused Remotely Sensed Images
NASA Astrophysics Data System (ADS)
Pohl, C.; Moellmann, J.; Fries, K.
2017-09-01
The multitude of available operational remote sensing satellites has led to the development of many image fusion techniques to provide images of high spatial, spectral, and temporal resolution. The comparison of different techniques is necessary to obtain an optimized image for the different applications of remote sensing. There are two approaches to assessing image quality: 1. qualitatively, by visual interpretation, and 2. quantitatively, using image quality indices. However, an objective comparison is difficult because a visual assessment is always subjective, while a quantitative assessment depends on the chosen criteria and indices, so the result varies accordingly. It is therefore necessary to standardize both processes (qualitative and quantitative assessment) in order to allow an objective evaluation of image fusion quality. Various studies have been conducted at the University of Osnabrueck (UOS) to establish a standardized process for objectively comparing fused image quality. First, established image fusion quality assessment protocols, i.e., Quality with No Reference (QNR) and Khan's protocol, were compared across various fusion experiments. Second, the process of visual quality assessment was structured and standardized with the aim of providing an evaluation protocol. This manuscript reports the results of the comparison and provides recommendations for future research.
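A minimal sketch of one building block shared by such protocols: the Wang-Bovik universal image quality index Q, on which QNR is based, computed here globally rather than over sliding windows for brevity.

```python
# Universal image quality index Q = 4*cov*mx*my / ((vx+vy)*(mx^2+my^2)),
# combining correlation, luminance closeness, and contrast closeness.
import numpy as np

def q_index(x, y):
    """Q in [-1, 1]; 1 means y perfectly reproduces x."""
    x, y = x.astype(float).ravel(), y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2))
```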
Kaminski, Clemens F.; Kaminski Schierle, Gabriele S.
2016-01-01
The misfolding and self-assembly of intrinsically disordered proteins into insoluble amyloid structures are central to many neurodegenerative diseases such as Alzheimer's and Parkinson's diseases. Optical imaging of this self-assembly process in vitro and in cells is revolutionizing our understanding of the molecular mechanisms behind these devastating conditions. In contrast to conventional biophysical methods, optical imaging and, in particular, optical superresolution imaging permits the dynamic investigation of the molecular self-assembly process in vitro and in cells, at molecular-level resolution. In this article, current state-of-the-art imaging methods are reviewed and discussed in the context of research into neurodegeneration. PMID:27413767
Illuminating magma shearing processes via synchrotron imaging
NASA Astrophysics Data System (ADS)
Lavallée, Yan; Cai, Biao; Coats, Rebecca; Kendrick, Jackie E.; von Aulock, Felix W.; Wallace, Paul A.; Le Gall, Nolwenn; Godinho, Jose; Dobson, Katherine; Atwood, Robert; Holness, Marian; Lee, Peter D.
2017-04-01
Our understanding of geomaterial behaviour and processes has long fallen short owing to the inaccessibility of a material's interior as "something" happens. In volcanology, research strategies have increasingly sought to illuminate the subsurface of materials at all scales: from the use of muon tomography to image the inside of volcanoes, to seismic tomography to image magmatic bodies in the crust, and most recently, synchrotron-based x-ray tomography to image the inside of material as we test it under controlled conditions. Here, we explore some of the novel findings made on the evolution of magma during shearing, including observations and discussions of magma flow and failure as well as petrological reaction kinetics.
Better safe than sorry: simplistic fear-relevant stimuli capture attention.
Forbes, Sarah J; Purkis, Helena M; Lipp, Ottmar V
2011-08-01
It has been consistently demonstrated that fear-relevant images capture attention preferentially over fear-irrelevant images. Current theory suggests that this faster processing could be mediated by an evolved module that allows certain stimulus features to attract attention automatically, prior to detailed processing of the image. The present research investigated whether simplified images of fear-relevant stimuli would interfere with target detection in a visual search task. In Experiment 1, silhouettes and degraded silhouettes of fear-relevant animals produced more interference than fear-irrelevant images. Experiment 2 compared the effects of fear-relevant and fear-irrelevant distracters and confirmed that the interference produced by fear-relevant distracters was not a novelty effect. Experiment 3 suggested that fear-relevant stimuli produced interference regardless of whether participants were instructed as to the content of the images. Together, the three experiments indicate that even very simplistic images of fear-relevant animals can divert attention.
Review of chart recognition in document images
NASA Astrophysics Data System (ADS)
Liu, Yan; Lu, Xiaoqing; Qin, Yeyang; Tang, Zhi; Xu, Jianbo
2013-01-01
As an effective means of transmitting information, charts are widely used to represent statistical data in books, research papers, newspapers, etc. Though textual information is still the major source of data, there has been an increasing trend of introducing graphs, pictures, and figures into the information pool. Text recognition in documents has been accomplished using optical character recognition (OCR) software. Chart recognition, a necessary supplement to OCR for document images, remains an unsolved problem due to the great subjectivity and variety of chart styles. This paper reviews the development of chart recognition techniques over the past decades and presents the focuses of current research. The whole process of chart recognition is presented systematically in three main parts: chart segmentation, chart classification, and chart interpretation. For each part, the latest research work is introduced. Finally, the paper concludes with a summary and promising future research directions.
ERIC Educational Resources Information Center
Tataw, Oben Moses
2013-01-01
Interdisciplinary research in computer science requires the development of computational techniques for practical application in different domains. This usually requires careful integration of different areas of technical expertise. This dissertation presents image and time series analysis algorithms, with practical interdisciplinary applications…
Realization of a single image haze removal system based on DaVinci DM6467T processor
NASA Astrophysics Data System (ADS)
Liu, Zhuang
2014-10-01
Video monitoring systems (VMS) are extensively applied in target recognition, traffic management, remote sensing, auto navigation, and national defence. However, a VMS depends strongly on the weather: in foggy weather, for instance, the quality of the images it receives is distinctly degraded and its effective range is reduced. In short, a VMS performs poorly in bad weather, so research on the enhancement of fog-degraded images has high theoretical and practical value. A design scheme for a fog-degraded image enhancement system based on the TI DaVinci processor is presented in this paper. The main function of the system is to capture images from digital cameras and apply image enhancement processing to obtain a clear image. The processor used in this system is the dual-core TI DaVinci DM6467T (ARM@500MHz + DSP@1GHz). A MontaVista Linux operating system runs on the ARM subsystem, which handles I/O and application processing; the DSP handles signal processing, and the results are made available to the ARM subsystem in shared memory. The system benefits from the DaVinci processor in that, with lower power cost and smaller volume, it provides image processing capability equivalent to that of an x86 computer. Results show that the system can process video at 25 frames per second at D1 resolution.
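The abstract does not name the enhancement algorithm; as an illustration only, here is a compact sketch of the widely used dark channel prior approach to single-image haze removal (He et al.), with assumed window size and haze-retention factor.

```python
# Dark channel prior dehazing: estimate atmospheric light from the
# brightest dark-channel pixels, estimate transmission, invert the haze model.
import numpy as np
import cv2

def dehaze(img, patch=15, omega=0.95, t0=0.1):
    """img: float32 BGR in [0, 1]; returns a dehazed estimate."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    dark = cv2.erode(img.min(axis=2), kernel)          # dark channel
    # Atmospheric light: mean color of the brightest 0.1% dark-channel pixels.
    n = max(1, dark.size // 1000)
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0)
    t = 1.0 - omega * cv2.erode((img / A).min(axis=2), kernel)  # transmission map
    t = np.clip(t, t0, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0, 1)            # invert I = J*t + A*(1-t)
```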
Latourette, Matthew T; Siebert, James E; Barto, Robert J; Marable, Kenneth L; Muyepa, Anthony; Hammond, Colleen A; Potchen, Michael J; Kampondeni, Samuel D; Taylor, Terrie E
2011-08-01
As part of an NIH-funded study of malaria pathogenesis, a magnetic resonance (MR) imaging research facility was established in Blantyre, Malaŵi to enhance the clinical characterization of pediatric patients with cerebral malaria through application of neurological MR methods. The research program requires daily transmission of MR studies to Michigan State University (MSU) for clinical research interpretation and quantitative post-processing. An intercontinental satellite-based network was implemented for transmission of MR image data in Digital Imaging and Communications in Medicine (DICOM) format, research data collection, project communications, and remote systems administration. Satellite Internet service costs limited the bandwidth to symmetrical 384 kbit/s. DICOM routers deployed at both the Malaŵi MRI facility and MSU manage the end-to-end encrypted, compressed data transmission. Network performance between the DICOM routers was measured while transmitting both mixed clinical MR studies and synthetic studies. Effective network latency averaged 715 ms. Within a mix of clinical MR studies, the average transmission time was ~2.25 s for a 256 × 256 image and ~6.25 s for a 512 × 512 image. Using synthetic studies of 1,000 duplicate images, the interquartile range was [2.30, 2.36] s for 256 × 256 images and [5.94, 6.05] s for 512 × 512 images. Transmission of clinical MRI studies between the DICOM routers averaged 9.35 images per minute, representing an effective channel utilization of ~137% of the 384-kbit/s satellite service as computed using uncompressed image file sizes (i.e., including the effects of image compression, protocol overhead, channel latency, etc.). Power unreliability was the primary cause of interrupted operations in the first year, including an outage exceeding 10 days.
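A small check of the effective-utilization arithmetic reported above, assuming 16-bit uncompressed pixels and an even 256/512 image mix (the actual study mix is not given):

```python
# Effective utilization = uncompressed throughput / link capacity; values
# above 1.0 are possible only because the images are compressed in transit.
avg_bytes = (256 * 256 * 2 + 512 * 512 * 2) / 2      # ~328 kB per image (assumed mix)
throughput_bps = 9.35 * avg_bytes * 8 / 60           # uncompressed bits per second
print(throughput_bps / 384e3)                        # ~1.06x here; the reported ~137%
                                                     # implies a mix weighted toward
                                                     # 512 x 512 frames
```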
Neuroimaging Techniques: a Conceptual Overview of Physical Principles, Contribution and History
NASA Astrophysics Data System (ADS)
Minati, Ludovico
2006-06-01
This paper is meant to provide a brief overview of the techniques currently used to image the brain and to study non-invasively its anatomy and function. After a historical summary in the first section, general aspects are outlined in the second section. The subsequent six sections survey, in order, computed tomography (CT), morphological magnetic resonance imaging (MRI), functional magnetic resonance imaging (fMRI), diffusion-tensor magnetic resonance imaging (DWI/DTI), positron emission tomography (PET), and electro- and magneto-encephalography (EEG/MEG) based imaging. Underlying physical principles, modelling and data processing approaches, as well as clinical and research relevance are briefly outlined for each technique. Given the breadth of the scope, there has been no attempt to be comprehensive. The ninth and final section outlines some aspects of active research in neuroimaging.
Relationships between digital signal processing and control and estimation theory
NASA Technical Reports Server (NTRS)
Willsky, A. S.
1978-01-01
Research areas associated with digital signal processing and control and estimation theory are identified. Particular attention is given to image processing, system identification problems (parameter identification, linear prediction, least squares, Kalman filtering), stability analyses (the use of the Liapunov theory, frequency domain criteria, passivity), and multiparameter systems, distributed processes, and random fields.
Scalable Topic Modeling: Online Learning, Diagnostics, and Recommendation
2017-03-01
The Viking Mosaic Catalog, Volume 2
NASA Technical Reports Server (NTRS)
Evans, N.
1982-01-01
A collection of more than 500 mosaics prepared from Viking Orbiter images is given. Accompanying each mosaic is a footprint plot, which identifies each frame in the mosaic by location, picture number, and order number. Corner coordinates and pertinent imaging information are also included. A short text provides the camera characteristics, image format, and data processing information necessary for using the mosaic plates as a research aid. Procedures for ordering mosaic enlargements and individual images are also provided.
Backhausen, Lea L.; Herting, Megan M.; Buse, Judith; Roessner, Veit; Smolka, Michael N.; Vetter, Nora C.
2016-01-01
In structural magnetic resonance imaging, motion artifacts are common, especially when not scanning healthy young adults. It has been shown that motion affects analysis with automated image-processing techniques (e.g., FreeSurfer), which can bias results. Several developmental and adult studies have found reduced gray matter volume and thickness due to motion artifacts. Thus, quality control is necessary to ensure an acceptable level of quality and to define exclusion criteria for images (i.e., to identify the participants with the most severe artifacts). However, information about the quality control workflow and image exclusion procedure is largely lacking in the current literature, and the existing rating systems differ. Here, we propose a stringent workflow of quality control steps during and after acquisition of T1-weighted images, which enables researchers dealing with populations that are typically affected by motion artifacts to enhance data quality and maximize sample sizes. As an underlying aim, we established a thorough quality control rating system for T1-weighted images and applied it to the analysis of developmental clinical data using the automated processing pipeline FreeSurfer. This hands-on workflow and quality control rating system will aid researchers in minimizing motion artifacts in the final data set, and therefore enhance the quality of structural magnetic resonance imaging studies. PMID:27999528
Harrigan, Robert L; Yvernault, Benjamin C; Boyd, Brian D; Damon, Stephen M; Gibney, Kyla David; Conrad, Benjamin N; Phillips, Nicholas S; Rogers, Baxter P; Gao, Yurui; Landman, Bennett A
2016-01-01
The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has developed a database, built on XNAT, housing over a quarter of a million scans. The database provides a framework for (1) rapid prototyping, (2) large-scale batch processing of images, and (3) scalable project management. The system uses the web-based interfaces of XNAT and REDCap to allow graphical interaction. A Python middleware layer, the Distributed Automation for XNAT (DAX) package, distributes computation across the Vanderbilt Advanced Computing Center for Research and Education high-performance computing center. All software is made available in open source for use in combining portable batch scripting (PBS) grids and XNAT servers. Copyright © 2015 Elsevier Inc. All rights reserved.
Northern Everglades, Florida, satellite image map
Thomas, Jean-Claude; Jones, John W.
2002-01-01
These satellite image maps are one product of the USGS Land Characteristics from Remote Sensing project, funded through the USGS Place-Based Studies Program with support from the Everglades National Park. The objective of this project is to develop and apply innovative remote sensing and geographic information system techniques to map the distribution of vegetation, vegetation characteristics, and related hydrologic variables through space and over time. The mapping and description of vegetation characteristics and their variations are necessary to accurately simulate surface hydrology and other surface processes in South Florida and to monitor land surface changes. As part of this research, data from many airborne and satellite imaging systems have been georeferenced and processed to facilitate data fusion and analysis. These image maps were created using image fusion techniques developed as part of this project.
MATHEMATICAL METHODS IN MEDICAL IMAGE PROCESSING
ANGENENT, SIGURD; PICHON, ERIC; TANNENBAUM, ALLEN
2013-01-01
In this paper, we describe some central mathematical problems in medical imaging. The subject has been undergoing rapid changes driven by better hardware and software. Much of the software is based on novel methods utilizing geometric partial differential equations in conjunction with standard signal/image processing techniques as well as computer graphics facilitating man/machine interactions. As part of this enterprise, researchers have been trying to base biomedical engineering principles on rigorous mathematical foundations for the development of software methods to be integrated into complete therapy delivery systems. These systems support the more effective delivery of many image-guided procedures such as radiation therapy, biopsy, and minimally invasive surgery. We will show how mathematics may impact some of the main problems in this area, including image enhancement, registration, and segmentation. PMID:23645963
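The geometric PDE machinery mentioned above can be illustrated with the geodesic active contour model, a standard formulation in this literature (shown here as a representative example; the paper surveys several such methods rather than this single equation). A level-set function $\Phi$ evolves as

$$\frac{\partial \Phi}{\partial t} = \phi(x)\,\lVert \nabla \Phi \rVert \, \operatorname{div}\!\left( \frac{\nabla \Phi}{\lVert \nabla \Phi \rVert} \right) + \nabla \phi \cdot \nabla \Phi, \qquad \phi(x) = \frac{1}{1 + \lVert \nabla (G_\sigma * I)(x) \rVert^2},$$

where $I$ is the image, $G_\sigma$ is a Gaussian smoothing kernel, and the zero level set of $\Phi$ is the evolving segmentation contour; the edge-stopping function $\phi$ slows the evolution near strong image gradients.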
A software to digital image processing to be used in the voxel phantom development.
Vieira, J W; Lima, F R A
2009-11-15
Anthropomorphic models used in computational dosimetry, also called phantoms, are based on digital images recorded from scans of real people by Computed Tomography (CT) or Magnetic Resonance Imaging (MRI). Voxel phantom construction requires computational processing for transformation of image formats, stacking of two-dimensional (2-D) images into three-dimensional (3-D) matrices, image sampling and quantization, image enhancement, restoration and segmentation, among others. A researcher in computational dosimetry will rarely find all of these capabilities in a single software package, and this gap almost always slows the pace of research or forces the use, sometimes inadequate, of alternative tools. The need to integrate the several tasks mentioned above to obtain an image that can be used in a computational exposure model motivated the development of the Digital Image Processing (DIP) software, mainly to solve particular problems in dissertations and theses developed by members of the Grupo de Pesquisa em Dosimetria Numérica (GDN/CNPq). Because of this particular objective, the software uses Portuguese in its implementation and interfaces. This paper presents the second version of DIP, whose main changes are a more formal organization of menus and menu items and a new menu for digital image segmentation. Currently, DIP contains the menus Fundamentos, Visualizações, Domínio Espacial, Domínio de Frequências, Segmentações and Estudos. Each menu contains items and sub-items with functionalities that usually take an image as input and produce an image or an attribute as output. DIP reads, edits and writes binary files containing the 3-D matrix corresponding to a stack of axial images of a given geometry, which can be a human body or another volume of interest. It can also read any common computational image format and perform conversions. When a task produces only a single output image, the image is saved as a JPEG file with the Windows defaults; when it produces an image stack, the output binary file is denominated SGI (Simulações Gráficas Interativas, Interactive Graphic Simulations), an acronym already used in other publications of the GDN/CNPq.
Parmaksızoğlu, Selami; Alçı, Mustafa
2011-01-01
Cellular Neural Networks (CNNs) have been widely used recently in applications such as edge detection, noise reduction and object detection, which are among the main computer imaging processes. They can also be realized as hardware based imaging sensors. The fact that hardware CNN models produce robust and effective results has attracted the attention of researchers using these structures within image sensors. Realization of desired CNN behavior such as edge detection can be achieved by correctly setting a cloning template without changing the structure of the CNN. To achieve different behaviors effectively, designing a cloning template is one of the most important research topics in this field. In this study, the edge detecting process that is used as a preliminary process for segmentation, identification and coding applications is conducted by using CNN structures. In order to design the cloning template of goal-oriented CNN architecture, an Artificial Bee Colony (ABC) algorithm which is inspired from the foraging behavior of honeybees is used and the performance analysis of ABC for this application is examined with multiple runs. The CNN template generated by the ABC algorithm is tested by using artificial and real test images. The results are subjectively and quantitatively compared with well-known classical edge detection methods, and other CNN based edge detector cloning templates available in the imaging literature. The results show that the proposed method is more successful than other methods. PMID:22163903
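For readers unfamiliar with cloning templates, the following minimal Python sketch simulates a Chua-Yang CNN with explicit Euler integration. The template values are an illustrative, hand-chosen edge-style choice, not the ABC-optimized template reported in the paper, and the integration scheme is likewise an assumption.

    import numpy as np
    from scipy.signal import convolve2d

    def cnn_run(u, A, B, I, steps=200, dt=0.1):
        """Simulate a Chua-Yang cellular neural network on an input image u scaled to [-1, 1]."""
        x = u.copy()                                        # state initialized to the input
        for _ in range(steps):
            y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))       # piecewise-linear output function
            dx = -x + convolve2d(y, A, mode="same") + convolve2d(u, B, mode="same") + I
            x = x + dt * dx                                 # explicit Euler step of the state equation
        return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

    # Illustrative hand-designed template (NOT the ABC result from the paper):
    A = np.array([[0, 0, 0], [0, 2, 0], [0, 0, 0]], dtype=float)          # feedback template
    B = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], dtype=float)  # control (Laplacian-style)
    edges = cnn_run(np.random.uniform(-1, 1, (64, 64)), A, B, I=-0.5)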
NASA Astrophysics Data System (ADS)
Xu, Weidong; Lei, Zhu; Yuan, Zhang; Gao, Zhenqing
2018-03-01
The application of visual recognition technology to industrial robot pick-and-place operation is one of the key tasks in the field of robot research. In order to improve the efficiency and intelligence of material sorting on the production line, and especially to realize the sorting of scattered items, a robot platform for target recognition, positioning and grasping based on binocular vision was researched and developed. Images were collected by a binocular camera and pre-processed. The Harris operator was used to detect corners, the Canny operator to extract edges, and Hough-transform and chain-code recognition to identify the target in the image. From the identified target, the coordinates of each vertex are obtained, the spatial position and posture of the target item are calculated, and the information needed for grasping is determined and transmitted to the robot to control the grasping operation. Finally, the method is applied to the parcel-sorting problem in the express delivery process. The experimental results show that the platform can effectively solve the problem of sorting loose parts, achieving the goal of efficient and intelligent sorting.
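The corner- and edge-detection steps described above can be sketched with OpenCV; the filename and threshold values below are illustrative assumptions, not the parameters used in the paper.

    import cv2
    import numpy as np

    img = cv2.imread("left_view.png")                     # one image of the stereo pair (hypothetical file)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)              # pre-processing: noise suppression

    # Harris operator for corner detection
    harris = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    corners = np.argwhere(harris > 0.01 * harris.max())   # pixel coordinates of strong corners

    # Canny operator for edge extraction (thresholds are illustrative)
    edges = cv2.Canny(gray, 50, 150)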
Research-oriented image registry for multimodal image integration.
Tanaka, M; Sadato, N; Ishimori, Y; Yonekura, Y; Yamashita, Y; Komuro, H; Hayahsi, N; Ishii, Y
1998-01-01
To provide multimodal biomedical images automatically, we constructed a research-oriented image registry, the Data Delivery System (DDS). DDS was constructed on the campus local area network. Machines that generate images (imagers: DSA, ultrasound, PET, MRI, SPECT and CT) were connected to the campus LAN. Once a patient is registered, all of the patient's images are automatically picked up by DDS as they are generated, transferred through the gateway server to the intermediate server, and copied into the directory of the user who registered the patient. DDS informs the user through e-mail that new data have been generated and transferred. The data format is automatically converted into one chosen by the user. Data inactive for a certain period on the intermediate server are automatically archived to the final, permanent data server based on compact disk. As a soft link is automatically generated through this step, a user has access to all (old or new) image data of the patients of interest. As DDS runs with minimal maintenance, the cost and time of data transfer are significantly reduced. By making the complex process of data transfer and conversion invisible, DDS has made it easy for researchers who are new to computers to concentrate on their biomedical interests.
NASA Astrophysics Data System (ADS)
Kretschmer, E.; Bachner, M.; Blank, J.; Dapp, R.; Ebersoldt, A.; Friedl-Vallon, F.; Guggenmoser, T.; Gulde, T.; Hartmann, V.; Lutz, R.; Maucher, G.; Neubert, T.; Oelhaf, H.; Preusse, P.; Schardt, G.; Schmitt, C.; Schönfeld, A.; Tan, V.
2015-06-01
The Gimballed Limb Observer for Radiance Imaging of the Atmosphere (GLORIA), a Fourier-transform-spectrometer-based limb spectral imager, operates on high-altitude research aircraft to study the transition region between the troposphere and the stratosphere. It is one of the most sophisticated systems to be flown on research aircraft in Europe, requiring constant monitoring and human intervention in addition to an automation system. To ensure proper functionality and interoperability on multiple platforms, a flexible control and communication system was laid out. The architectures of the communication system as well as the protocols used are reviewed. The integration of this architecture in the automation process as well as the scientific campaign flight application context are discussed.
The Imaging and Medical Beam Line at the Australian Synchrotron
NASA Astrophysics Data System (ADS)
Hausermann, Daniel; Hall, Chris; Maksimenko, Anton; Campbell, Colin
2010-07-01
As a result of the enthusiastic support from the Australian biomedical, medical and clinical communities, the Australian Synchrotron is constructing a world-class facility for medical research, the `Imaging and Medical Beamline' (IMBL). The IMBL began phased commissioning in late 2008 and is scheduled to commence the first clinical research programs with patients in 2011. It will provide unrivalled x-ray facilities for imaging and radiotherapy for a wide range of research applications in diseases, treatments and the understanding of physiological processes. The main clinical research drivers are currently high-resolution, high-sensitivity cardiac and breast imaging, cell tracking applied to regenerative and stem cell medicine, and cancer therapies. The beamline has a maximum source-to-sample distance of 136 m and will deliver a 60 cm by 4 cm x-ray beam (monochromatic and white) to a three-storey satellite building fully equipped for pre-clinical and clinical research. Currently operating with a 1.4 Tesla multi-pole wiggler, it will upgrade to a 4.2 Tesla device, which requires the ability to handle up to 21 kW of x-ray power at any point along the beamline. The applications envisaged for this facility include imaging thick objects encompassing materials, humans and animals. Imaging can be performed in the range 15-150 keV. Radiotherapy research typically requires energies between 30 and 120 keV, for both monochromatic and broad beams.
SIP: A Web-Based Astronomical Image Processing Program
NASA Astrophysics Data System (ADS)
Simonetti, J. H.
1999-12-01
I have written an astronomical image processing and analysis program designed to run over the internet in a Java-compatible web browser. The program, Sky Image Processor (SIP), is accessible at the SIP webpage (http://www.phys.vt.edu/SIP). Since nothing is installed on the user's machine, there is no need to download upgrades; the latest version of the program is always instantly available. Furthermore, the Java programming language is designed to work on any computer platform (any machine and operating system). The program could be used with students in web-based instruction or in a computer laboratory setting; it may also be of use in some research or outreach applications. While SIP is similar to other image processing programs, it is unique in some important respects. For example, SIP can load images from the user's machine or from the Web. An instructor can put images on a web server for students to load and analyze on their own personal computers, or the instructor can direct the students to images on any other web server. Furthermore, since SIP was written with students in mind, the philosophy is to present the user with the most basic tools necessary to process and analyze astronomical images. Images can be combined (by addition, subtraction, multiplication, or division), multiplied by a constant, smoothed, cropped, flipped, rotated, and so on. Statistics can be gathered for pixels within a box drawn by the user. Basic tools are available for gathering data from an image, which can be used for performing simple differential photometry or astrometry. Therefore, students can learn how astronomical image processing works. Since SIP is not part of a commercial CCD camera package, the program is written to handle the lowest common denominator among image file formats: FITS.
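The image arithmetic SIP exposes (combination, statistics in a user-drawn box) is of the kind sketched below with NumPy and Astropy; this is a generic example under assumed filenames, not SIP's own code.

    import numpy as np
    from astropy.io import fits

    raw = fits.getdata("target.fits").astype(float)       # hypothetical FITS files
    dark = fits.getdata("dark.fits").astype(float)
    flat = fits.getdata("flat.fits").astype(float)

    # Combine images: dark subtraction, then flat-field division
    calibrated = (raw - dark) / (flat / flat.mean())

    # Statistics for pixels within a user-style box region
    box = calibrated[100:120, 200:220]
    print(box.mean(), box.std(), box.min(), box.max())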
NASA Astrophysics Data System (ADS)
Liu, W. C.; Wu, B.
2018-04-01
High-resolution 3D modelling of the lunar surface is important for lunar scientific research and exploration missions. Photogrammetry is known for 3D mapping and modelling from a pair of stereo images based on dense image matching. However, dense matching may fail in poorly textured areas and in situations where the image pair has large illumination differences. As a result, the actual achievable spatial resolution of a 3D model from photogrammetry is limited by the performance of dense image matching. On the other hand, photoclinometry (i.e., shape from shading) is characterised by its ability to recover pixel-wise surface shapes based on image intensity and imaging conditions such as illumination and viewing directions. More robust shape reconstruction through photoclinometry can be achieved by incorporating images acquired under different illumination conditions (i.e., photometric stereo). Introducing photoclinometry into photogrammetric processing can therefore effectively increase the achievable resolution of the mapping result while maintaining its overall accuracy. This research presents an integrated photogrammetric and photoclinometric approach for pixel-resolution 3D modelling of the lunar surface. First, photoclinometry interacts with stereo image matching to create robust and spatially well-distributed dense conjugate points. Then, based on the 3D point cloud derived from photogrammetric processing of the dense conjugate points, photoclinometry is further introduced to derive the 3D positions of the unmatched points and to refine the final point cloud. The approach is able to produce one 3D point for each image pixel within the overlapping area of the stereo pair, so as to obtain pixel-resolution 3D models. Experiments using Lunar Reconnaissance Orbiter Camera - Narrow Angle Camera (LROC NAC) images show the superior performance of the approach compared with traditional photogrammetric techniques. The results and findings from this research contribute to the optimal exploitation of image information for high-resolution 3D modelling of the lunar surface, which is of significance for the advancement of lunar and planetary mapping.
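The photoclinometric step rests on the image irradiance equation; for a Lambertian surface (a common simplification, not necessarily the reflectance model adopted in this work) it reads

$$I(x,y) = R(p,q) = \rho\,\frac{-p\,s_x - q\,s_y + s_z}{\sqrt{1 + p^2 + q^2}}, \qquad p = \frac{\partial z}{\partial x},\; q = \frac{\partial z}{\partial y},$$

where $\rho$ is the surface albedo and $(s_x, s_y, s_z)$ is the unit illumination vector; photometric stereo constrains the slopes $(p,q)$ by providing several such equations under different illumination directions.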
Wu, Chunyan; Wang, Xuefeng
2017-01-01
This paper presents a study of a system that uses digital image processing techniques to identify anthracnose and powdery mildew diseases of sandalwood from digital images. Our main objective is to find the most suitable identification technology for anthracnose and powdery mildew on sandalwood leaves, providing algorithmic support for real-time machine judgment of the health status and disease level of sandalwood. We conducted real-time monitoring of Hainan sandalwood leaves with varying severity levels of anthracnose and powdery mildew beginning in March 2014. We used image segmentation, feature extraction, and digital image classification and recognition technology to carry out a comparative experimental study of image analysis of powdery mildew, anthracnose and healthy leaves in the field. Tests on a large number of diseased leaves pointed to three conclusions: (1) Among the classical methods examined, the BP (Back Propagation) neural network distinguished sandalwood leaf anthracnose and powdery mildew relatively well; its estimated lesion areas were closest to the actual ones. (2) The differences between the two diseases are captured well by the shape, color and texture features of the disease image. (3) An SVM based on a radial basis kernel function gave ideal results for identifying and diagnosing diseased leaves: the identification rate for anthracnose and for healthy leaves was 92% each, and that for powdery mildew was 84%. This identification technology lays the foundation for remote disease monitoring and diagnosis, in preparation for remote transmission of disease images, and provides a useful guide and reference for further research on disease identification and diagnosis systems for sandalwood and other tree species. PMID:28749977
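The SVM classification step reported above can be sketched with scikit-learn; the feature files and split below are hypothetical stand-ins for the paper's shape, color and texture features.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # X: per-leaf shape/color/texture features; y: 0 healthy, 1 anthracnose, 2 powdery mildew
    X, y = np.load("leaf_features.npy"), np.load("leaf_labels.npy")   # hypothetical files
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    clf = SVC(kernel="rbf", gamma="scale", C=1.0)   # radial basis kernel, as in the paper
    clf.fit(X_tr, y_tr)
    print("identification rate:", clf.score(X_te, y_te))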
NASA Technical Reports Server (NTRS)
1988-01-01
Estee Lauder uses a digital image analyzer and software based on NASA lunar research in the evaluation of cosmetic products for skin care. Digital image processing brings out subtleties otherwise undetectable and allows better determination of a product's effectiveness. The technique allows Estee Lauder to quantify changes in skin surface form and structure caused by the application of cosmetic preparations.
Imaging interferometer using dual broadband quantum well infrared photodetectors
NASA Technical Reports Server (NTRS)
Reininger, F.; Gunapala, S.; Bandara, S.; Grimm, M.; Johnson, D.; Peters, D.; Leland, S.; Liu, J.; Mumolo, J.; Rafol, D.;
2002-01-01
The Jet Propulsion Laboratory is developing a new imaging interferometer that has double the efficiency of conventional interferometers and only a fraction of the mass and volume. The project is being funded as part of the Defense Advanced Research Projects Agency (DARPA) Photonic Wavelength And Spatial Signal Processing program (PWASSSP).
USDA-ARS?s Scientific Manuscript database
The U. S. Department of Agriculture, Agricultural Research Service has been developing a method and system to detect fecal contamination on processed poultry carcasses with hyperspectral and multispectral imaging systems. The patented method utilizes a three step approach to contaminant detection. S...
Research in remote sensing of agriculture, earth resources, and man's environment
NASA Technical Reports Server (NTRS)
Landgrebe, D. A.
1975-01-01
Progress is reported for several projects involving the utilization of LANDSAT remote sensing capabilities. Areas under study include crop inventory, crop identification, crop yield prediction, forest resources evaluation, land resources evaluation and soil classification. Numerical methods for image processing are discussed, particularly those for image enhancement and analysis.
NASA Astrophysics Data System (ADS)
Zacharek, M.; Delis, P.; Kedzierski, M.; Fryskowska, A.
2017-05-01
These studies were conducted using a non-metric digital camera and dense image matching algorithms as non-contact methods of creating monument documentation. In order to process the imagery, several open-source software packages and algorithms for generating a dense point cloud from images were employed. In the research, the OSM Bundler, VisualSFM software, and the web application ARC3D were used. Images obtained for each of the investigated objects were processed using those applications, and then dense point clouds and textured 3D models were created. As a result of post-processing, the obtained models were filtered and scaled. The research showed that even using open-source software it is possible to obtain accurate 3D models of structures (with an accuracy of a few centimeters), but for the purpose of documentation and conservation of cultural and historical heritage, such accuracy can be insufficient.
Impact induced damage assessment by means of Lamb wave image processing
NASA Astrophysics Data System (ADS)
Kudela, Pawel; Radzienski, Maciej; Ostachowicz, Wieslaw
2018-03-01
The aim of this research is an analysis of full-wavefield Lamb wave interaction with impact-induced damage at various impact energies in order to find the limitations of the wavenumber adaptive image filtering method. In other words, the relation between impact energy and damage detectability is shown. A numerical model based on the time-domain spectral element method is used for modeling Lamb wave propagation and interaction with barely visible impact damage in a carbon-epoxy laminate. The numerical studies are followed by experimental research on the same material, with impact damage induced at various energies and also a Teflon insert simulating delamination. Wavenumber adaptive image filtering and signal processing are used for damage visualization and assessment for both the numerical and experimental full-wavefield data. It is shown that it is possible to visualize and assess the impact damage location, size and, to some extent, severity by using the proposed technique.
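The wavenumber-domain filtering idea can be illustrated with a fixed radial band-pass applied to one full-wavefield snapshot (the paper's adaptive variant selects the band locally; this fixed-band version is a simplified sketch):

    import numpy as np

    def wavenumber_bandpass(frame, dx, k_lo, k_hi):
        """Band-pass a 2-D wavefield snapshot in the wavenumber domain."""
        F = np.fft.fft2(frame)
        kx = 2 * np.pi * np.fft.fftfreq(frame.shape[1], d=dx)   # rad/m along x
        ky = 2 * np.pi * np.fft.fftfreq(frame.shape[0], d=dx)   # rad/m along y
        KX, KY = np.meshgrid(kx, ky)
        K = np.hypot(KX, KY)                    # radial wavenumber magnitude
        mask = (K >= k_lo) & (K <= k_hi)        # keep the band where damage-scattered waves live
        return np.real(np.fft.ifft2(F * mask))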
NASA Astrophysics Data System (ADS)
Ramirez, Andres; Rahnemoonfar, Maryam
2017-04-01
A hyperspectral image provides a multidimensional, data-rich structure consisting of hundreds of spectral bands. Analyzing the spectral and spatial information of such an image with linear and non-linear algorithms results in high computational time. In order to overcome this problem, this research presents a system using a MapReduce-Graphics Processing Unit (GPU) model that can help analyze a hyperspectral image through the usage of parallel hardware and a parallel programming model, which is simpler to handle compared to other low-level parallel programming models. Additionally, Hadoop was used as an open-source implementation of the MapReduce parallel programming model. This research compared classification accuracy and timing results between the Hadoop-GPU system and the following test cases: a combined CPU and GPU test case, a CPU-only test case, and a test case where no dimensionality reduction was applied.
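The map/reduce decomposition can be illustrated in plain Python with the standard multiprocessing module standing in for Hadoop (the GPU stage is omitted); the cube and statistics below are hypothetical.

    import numpy as np
    from multiprocessing import Pool

    def map_band(band):
        """Map step: summary statistics for one spectral band."""
        return band.mean(), band.std()

    if __name__ == "__main__":
        cube = np.random.rand(200, 512, 512)          # hypothetical cube: bands x rows x cols
        with Pool() as pool:
            stats = pool.map(map_band, list(cube))    # map over spectral bands in parallel
        means = np.array([m for m, _ in stats])       # reduce step: aggregate per-band results
        print("brightest band:", int(means.argmax()))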
Segmentation of anatomical structures of the heart based on echocardiography
NASA Astrophysics Data System (ADS)
Danilov, V. V.; Skirnevskiy, I. P.; Gerget, O. M.
2017-01-01
Nowadays, many practical applications in the field of medical image processing require valid and reliable segmentation of images as input data. Commonly used imaging techniques include ultrasound, CT, and MRI. The main difference between other medical imaging equipment and EchoCG is that EchoCG is safer, low cost, non-invasive and non-traumatic. Three-dimensional EchoCG is a non-invasive imaging modality, complementary and supplementary to two-dimensional imaging, that can be used to examine cardiovascular function and anatomy in different medical settings. The challenging problems presented by EchoCG image processing, such as speckle, noise, temporal non-stationarity of processes, unsharp boundaries, and attenuation, forced us to consider and compare existing methods and then to develop an innovative approach that can tackle the problems connected with clinical applications. The present studies relate to the analysis and development of a system for automatic detection of cardiac parameters by EchoCG that will provide new data on the dynamics of changes in cardiac parameters and improve the accuracy and reliability of diagnosis. Research in image segmentation has highlighted the capabilities of image-based methods for medical applications. The focus of the research is on both theoretical and practical aspects of the application of the methods. Some of the segmentation approaches may be of interest to the imaging and medical community. Performance evaluation is carried out by comparing the borders obtained from the considered methods to those manually prescribed by a medical specialist. Promising results demonstrate the possibilities and the limitations of each technique for image segmentation problems. The developed approach makes it possible to eliminate errors in calculating the geometric parameters of the heart; to satisfy the necessary conditions of speed, accuracy and reliability; and to build a master model that will be an indispensable assistant for operations on a beating heart.
Trucco, E; Cameron, J R; Dhillon, B; Houston, J G; van Beek, E J R
2014-01-01
The black void behind the pupil was optically impenetrable before the invention of the ophthalmoscope by von Helmholtz over 150 years ago. Advances in retinal imaging and image processing, especially over the past decade, have opened a route to another unexplored landscape, the retinal neurovascular architecture and the retinal ganglion pathways linking to the central nervous system beyond. Exploiting these research opportunities requires multidisciplinary teams to explore the interface sitting at the border between ophthalmology, neurology and computing science. It is from the detail and depth of retinal phenotyping that novel metrics and candidate biomarkers are likely to emerge. Confirmation that in vivo retinal neurovascular measures are predictive of microvascular change in the brain and other organs is likely to be a major area of research activity over the next decade. Unlocking this hidden potential within the retina requires integration of structural and functional data sets, that is, multimodal mapping and longitudinal studies spanning the natural history of the disease process. And with further advances in imaging, it is likely that this area of retinal research will remain active and clinically relevant for many years to come. Accordingly, this review looks at state-of-the-art retinal imaging and its application to diagnosis, characterization and prognosis of chronic illness or long-term conditions. PMID:24936979
Image preprocessing study on KPCA-based face recognition
NASA Astrophysics Data System (ADS)
Li, Xuan; Li, Dehua
2015-12-01
Face recognition, as an important biometric identification method with friendly, natural and convenient advantages, has received more and more attention. This paper studies a face recognition system including face detection, feature extraction and face recognition, mainly by researching the related theory and key technology of various preprocessing methods in the face detection process; using the KPCA method, it focuses on the different recognition results obtained with different preprocessing methods. In this paper, we choose the YCbCr color space for skin segmentation and integral projection for face location. We use erosion and dilation (the opening and closing operations) and an illumination compensation method to preprocess face images, and then use a face recognition method based on kernel principal component analysis for analysis and research; the experiments were carried out using a typical face database. The algorithms were implemented on the MATLAB platform. Experimental results show that the kernel method based on the PCA algorithm, as a nonlinear feature extraction method, can under certain conditions make the extracted features represent the original image information better and thus obtain a higher recognition rate. In the image preprocessing stage, we found that different operations on the images may produce different results, and thus different recognition rates in the recognition stage. At the same time, in the kernel principal component analysis, the value of the power of the polynomial function can affect the recognition result.
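The KPCA step can be sketched with scikit-learn; the data files are hypothetical, the polynomial degree corresponds to the "power of the polynomial function" whose influence the abstract notes, and the nearest-neighbour classifier is a simple stand-in for the paper's recognition stage.

    import numpy as np
    from sklearn.decomposition import KernelPCA
    from sklearn.neighbors import KNeighborsClassifier

    # Preprocessed, flattened face images and their identity labels (hypothetical files)
    X_train, y_train = np.load("faces_train.npy"), np.load("labels_train.npy")

    kpca = KernelPCA(n_components=50, kernel="poly", degree=3)  # degree = polynomial power
    Z_train = kpca.fit_transform(X_train)                       # nonlinear feature extraction

    clf = KNeighborsClassifier(n_neighbors=1).fit(Z_train, y_train)  # stand-in recognizer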
NASA Astrophysics Data System (ADS)
Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting
2018-02-01
Image segmentation plays an important role in medical science. One application is multimodality imaging, especially the fusion of structural imaging with functional imaging, which includes CT, MRI and new types of imaging technology such as optical imaging to obtain functional images. The fusion process requires precisely extracted structural information in order to register the image to it. Here we used image enhancement and morphometry methods to extract accurate contours of different tissues such as skull, cerebrospinal fluid (CSF), grey matter (GM) and white matter (WM) on 5 fMRI head image datasets. We then utilized a convolutional neural network to realize automatic segmentation of the images in a deep learning way. Such an approach greatly reduced the processing time compared to manual and semi-automatic segmentation and is of great importance in improving speed and accuracy as more and more samples are learned. The contours of the borders of different tissues on all images were accurately extracted and visualized in 3D. This can be used in low-level light therapy and in optical simulation software such as MCVM. We obtained a precise three-dimensional distribution of the brain, which offers doctors and researchers quantitative volume data and detailed morphological characterization for personalized precision medicine of cerebral atrophy/expansion. We hope this technique can bring convenience to medical visualization and personalized medicine.
NASA Astrophysics Data System (ADS)
Milgram, David L.; Kahn, Philip; Conner, Gary D.; Lawton, Daryl T.
1988-12-01
The goal of this effort is to develop and demonstrate prototype processing capabilities for a knowledge-based system to automatically extract and analyze features from Synthetic Aperture Radar (SAR) imagery. This effort constitutes Phase 2 funding through the Defense Small Business Innovative Research (SBIR) Program. Previous work examined the feasibility of and technology issues involved in the development of an automated linear feature extraction system. This final report documents this examination and the technologies involved in automating this image understanding task. In particular, it reports on a major software delivery containing an image processing algorithmic base, a perceptual structures manipulation package, a preliminary hypothesis management framework and an enhanced user interface.
Yang, Fan; Paindavoine, M
2003-01-01
This paper describes a real-time vision system that allows us to localize faces in video sequences and verify their identity. These processes are image processing techniques based on the radial basis function (RBF) neural network approach. The robustness of this system has been evaluated quantitatively on eight video sequences. We have adapted our model for an application of face recognition using the Olivetti Research Laboratory (ORL), Cambridge, UK, database so as to compare its performance against other systems. We also describe three hardware implementations of our model on embedded systems based on the field programmable gate array (FPGA), zero instruction set computer (ZISC) chips, and the digital signal processor (DSP) TMS320C62, respectively. We analyze the algorithm complexity and present results of the hardware implementations in terms of the resources used and processing speed. The success rates of face tracking and identity verification are 92% (FPGA), 85% (ZISC), and 98.2% (DSP), respectively. For the three embedded systems, the processing speeds for an image size of 288 × 352 are 14 images/s, 25 images/s, and 4.8 images/s, respectively.
Special Software for Planetary Image Processing and Research
NASA Astrophysics Data System (ADS)
Zubarev, A. E.; Nadezhdina, I. E.; Kozlova, N. A.; Brusnikin, E. S.; Karachevtseva, I. P.
2016-06-01
Special modules for photogrammetric processing of remote sensing data were developed that provide the opportunity to effectively organize and optimize planetary studies. The commercial software package PHOTOMOD™ is used as the base application. Special modules were created to perform various types of data processing: calculation of preliminary navigation parameters, calculation of the shape parameters of a celestial body, orthorectification of global-view images, and estimation of Sun illumination and Earth visibility from the planetary surface. Different types of data have been used for photogrammetric processing, including images of the Moon, Mars, Mercury, Phobos, the Galilean satellites and Enceladus obtained by frame or push-broom cameras. We used modern planetary data and images taken over the years from orbital flight paths with various illumination and resolution, as well as images obtained by planetary rovers from the surface. Planetary image processing is a complex task that can take from a few months to years. We present an efficient pipeline procedure that provides the possibility to obtain different data products and supports the long path from planetary images to celestial-body maps. The obtained data - new three-dimensional control point networks, elevation models, and orthomosaics - supported accurate map production: a new Phobos atlas (Karachevtseva et al., 2015) and various thematic maps derived from studies of the planetary surface (Karachevtseva et al., 2016a).
Information Processing of Remote-Sensing Data.
ERIC Educational Resources Information Center
Berry, P. A. M.; Meadows, A. J.
1987-01-01
Reviews the current status of satellite remote sensing data, including problems with efficient storage and rapid retrieval of the data, and appropriate computer graphics to process images. Areas of research concerned with overcoming these problems are described. (16 references) (CLB)
Non-rigid ultrasound image registration using generalized relaxation labeling process
NASA Astrophysics Data System (ADS)
Lee, Jong-Ha; Seong, Yeong Kyeong; Park, MoonHo; Woo, Kyoung-Gu; Ku, Jeonghun; Park, Hee-Jun
2013-03-01
This research proposes a novel non-rigid registration method for ultrasound images. The most predominant anatomical features in medical images are tissue boundaries, which appear as edges. In ultrasound images, however, other features can be identified as well due to the specular reflections that appear as bright lines superimposed on the ideal edge location. In this work, an image's local phase information (via the frequency domain) is used to find the ideal edge location. The generalized relaxation labeling process is then formulated to align the feature points extracted from the ideal edge location. In this work, the original relaxation labeling method was generalized by taking n compatibility coefficient values to improve non-rigid registration performance. This contextual information combined with a relaxation labeling process is used to search for a correspondence. Then the transformation is calculated by the thin plate spline (TPS) model. These two processes are iterated until the optimal correspondence and transformation are found. We have tested our proposed method and the state-of-the-art algorithms with synthetic data and bladder ultrasound images of in vivo human subjects. Experiments show that the proposed method improves registration performance significantly, as compared to other state-of-the-art non-rigid registration algorithms.
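For reference, the classic relaxation labeling update (the Rosenfeld-Hummel-Zucker form; the paper generalizes the compatibility coefficients beyond this) is

$$q_i(\lambda) = \sum_j \sum_{\lambda'} r_{ij}(\lambda, \lambda')\, p_j(\lambda'), \qquad p_i^{(t+1)}(\lambda) = \frac{p_i^{(t)}(\lambda)\,\bigl(1 + q_i^{(t)}(\lambda)\bigr)}{\sum_{\mu} p_i^{(t)}(\mu)\,\bigl(1 + q_i^{(t)}(\mu)\bigr)},$$

where $p_i(\lambda)$ is the probability that feature point $i$ takes label (correspondence) $\lambda$ and $r_{ij}(\lambda,\lambda')$ are compatibility coefficients encoding the contextual information.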
De-identification of Medical Images with Retention of Scientific Research Value
Maffitt, David R.; Smith, Kirk E.; Kirby, Justin S.; Clark, Kenneth W.; Freymann, John B.; Vendt, Bruce A.; Tarbox, Lawrence R.; Prior, Fred W.
2015-01-01
Online public repositories for sharing research data allow investigators to validate existing research or perform secondary research without the expense of collecting new data. Patient data made publicly available through such repositories may constitute a breach of personally identifiable information if not properly de-identified. Imaging data are especially at risk because some intricacies of the Digital Imaging and Communications in Medicine (DICOM) format are not widely understood by researchers. If imaging data still containing protected health information (PHI) were released through a public repository, a number of different parties could be held liable, including the original researcher who collected and submitted the data, the original researcher’s institution, and the organization managing the repository. To minimize these risks through proper de-identification of image data, one must understand what PHI exists and where that PHI resides, and one must have the tools to remove PHI without compromising the scientific integrity of the data. DICOM public elements are defined by the DICOM Standard. Modality vendors use private elements to encode acquisition parameters that are not yet defined by the DICOM Standard, or the vendor may not have updated an existing software product after DICOM defined new public elements. Because private elements are not standardized, a common de-identification practice is to delete all private elements, removing scientifically useful data as well as PHI. Researchers and publishers of imaging data can use the tools and process described in this article to de-identify DICOM images according to current best practices. ©RSNA, 2015 PMID:25969931
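The core operations the article describes, deleting private elements and blanking known PHI-bearing public elements, can be sketched with pydicom. The element list below is illustrative and far from exhaustive; a production workflow should follow the DICOM PS3.15 confidentiality profiles.

    import pydicom

    ds = pydicom.dcmread("scan.dcm")            # hypothetical input file

    ds.remove_private_tags()                    # the blunt common practice discussed above

    # Blank a few well-known PHI-bearing public elements (illustrative, not exhaustive)
    for keyword in ["PatientName", "PatientBirthDate", "PatientAddress",
                    "ReferringPhysicianName"]:
        if keyword in ds:
            setattr(ds, keyword, "")

    ds.save_as("scan_deid.dcm")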
Deng, Yufeng; Rouze, Ned C.; Palmeri, Mark L.; Nightingale, Kathryn R.
2017-01-01
Ultrasound elasticity imaging has been developed over the last decade to estimate tissue stiffness. Shear wave elasticity imaging (SWEI) quantifies tissue stiffness by measuring the speed of propagating shear waves following acoustic radiation force excitation. This work presents the sequencing and data processing protocols of SWEI using a Verasonics system. The selection of the sequence parameters in a Verasonics programming script is discussed in detail. The data processing pipeline to calculate group shear wave speed (SWS), including tissue motion estimation, data filtering, and SWS estimation is demonstrated. In addition, the procedures for calibration of beam position, scanner timing, and transducer face heating are provided to avoid SWS measurement bias and transducer damage. PMID:28092508
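Group SWS is commonly estimated from shear wave arrival times across lateral positions; the sketch below uses a simple peak-displacement time-of-flight fit, one of several estimators and not necessarily the exact one in this protocol.

    import numpy as np

    def group_sws(displacement, lateral_mm, t_ms):
        """Estimate group shear wave speed from a displacement matrix (lateral positions x time)."""
        arrival = t_ms[np.argmax(displacement, axis=1)]   # arrival time at each lateral position
        slope, _ = np.polyfit(arrival, lateral_mm, 1)     # mm per ms, numerically equal to m/s
        return slope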
Russian Character Recognition using Self-Organizing Map
NASA Astrophysics Data System (ADS)
Gunawan, D.; Arisandi, D.; Ginting, F. M.; Rahmat, R. F.; Amalia, A.
2017-01-01
The World Tourism Organization (UNWTO) reported in 2014 that 28 million visitors visited Russia. Many visitors may have trouble typing Russian words when using a digital dictionary. This is because the Cyrillic letters used in Russia and the surrounding countries have different shapes than Latin letters, and visitors may not be familiar with Cyrillic. This research proposes an alternative way to input Cyrillic words. Instead of typing the words directly, a camera can be used to capture an image of the words as input. The captured image is cropped, and then several pre-processing steps are applied, such as noise filtering, binary image processing, segmentation and thinning. Next, feature extraction is applied to the image. Cyrillic letter recognition in the image is done by utilizing the Self-Organizing Map (SOM) algorithm. SOM successfully recognized 89.09% of Cyrillic letters from computer-generated images and 88.89% of Cyrillic letters from images captured by a smartphone camera. For word recognition, SOM fully recognized 292 words and partially recognized 58 words from the images captured by the smartphone camera. The accuracy of word recognition using SOM is therefore 83.42%.
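A minimal NumPy self-organizing map, sketched below under assumed grid and training parameters (the paper's network size and feature representation are not reproduced), conveys the training loop; after training, each unit is labeled with the Cyrillic letter of its nearest training samples and recognition maps a new letter image to its best-matching unit.

    import numpy as np

    def train_som(data, grid=(10, 10), iters=5000, lr0=0.5, sigma0=3.0, seed=0):
        """Train a small self-organizing map; rows of `data` are letter-image feature vectors."""
        rng = np.random.default_rng(seed)
        W = rng.random((grid[0], grid[1], data.shape[1]))      # weight vectors on the map grid
        gy, gx = np.mgrid[0:grid[0], 0:grid[1]]
        for t in range(iters):
            x = data[rng.integers(len(data))]                  # pick a random training sample
            d = np.linalg.norm(W - x, axis=2)
            by, bx = np.unravel_index(d.argmin(), d.shape)     # best-matching unit
            frac = t / iters
            lr = lr0 * (1 - frac)                              # decaying learning rate
            sigma = sigma0 * (1 - frac) + 1e-3                 # shrinking neighborhood radius
            h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
            W += lr * h[..., None] * (x - W)                   # pull neighborhood toward the sample
        return W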
NASA Technical Reports Server (NTRS)
1983-01-01
This report summarizes the results of a study conducted by Engineering and Economics Research (EER), Inc. under NASA Contract Number NAS5-27513. The study involved the development of preliminary concepts for automatic and semiautomatic quality assurance (QA) techniques for ground image processing. A distinction is made between quality assessment and the more comprehensive quality assurance which includes decision making and system feedback control in response to quality assessment.
Noninvasive imaging of protein-protein interactions in living organisms.
Haberkorn, Uwe; Altmann, Annette
2003-06-01
Genomic research is expected to generate new types of complex observational data, changing the types of experiments as well as our understanding of biological processes. The investigation and definition of relationships among proteins is essential for understanding the function of each gene and the mechanisms of biological processes that specific genes are involved in. Recently, a study by Paulmurugan et al. demonstrated a tool for in vivo noninvasive imaging of protein-protein interactions and intracellular networks.
Syntactic Processing in Bilinguals: An fNIRS Study
ERIC Educational Resources Information Center
Scherer, Lilian Cristine; Fonseca, Rochele Paz; Amiri, Mahnoush; Adrover-Roig, Daniel; Marcotte, Karine; Giroux, Francine; Senhadji, Noureddine; Benali, Habib; Lesage, Frederic; Ansaldo, Ana Ines
2012-01-01
The study of the neural basis of syntactic processing has greatly benefited from neuroimaging techniques. Research on syntactic processing in bilinguals has used a variety of techniques, including mainly functional magnetic resonance imaging (fMRI) and event-related potentials (ERP). This paper reports on a functional near-infrared spectroscopy…
Liyanage, Kishan Andre; Steward, Christopher; Moffat, Bradford Armstrong; Opie, Nicholas Lachlan; Rind, Gil Simon; John, Sam Emmanuel; Ronayne, Stephen; May, Clive Newton; O'Brien, Terence John; Milne, Marjorie Eileen; Oxley, Thomas James
2016-01-01
Segmentation is the process of partitioning an image into subdivisions and can be applied to medical images to isolate anatomical or pathological areas for further analysis. This process can be done manually or automated by the use of image processing computer packages. Atlas-based segmentation automates this process by the use of a pre-labelled template and a registration algorithm. We developed an ovine brain atlas that can be used as a model for neurological conditions such as Parkinson's disease and focal epilepsy. 17 female Corriedale ovine brains were imaged in-vivo in a 1.5T (low-resolution) MRI scanner. 13 of the low-resolution images were combined using a template construction algorithm to form a low-resolution template. The template was labelled to form an atlas and tested by comparing manual with atlas-based segmentations against the remaining four low-resolution images. The comparisons were in the form of similarity metrics used in previous segmentation research. Dice Similarity Coefficients were utilised to determine the degree of overlap between eight independent, manual and atlas-based segmentations, with values ranging from 0 (no overlap) to 1 (complete overlap). For 7 of these 8 segmented areas, we achieved a Dice Similarity Coefficient of 0.5-0.8. The amygdala was difficult to segment due to its variable location and similar intensity to surrounding tissues resulting in Dice Coefficients of 0.0-0.2. We developed a low resolution ovine brain atlas with eight clinically relevant areas labelled. This brain atlas performed comparably to prior human atlases described in the literature and to intra-observer error providing an atlas that can be used to guide further research using ovine brains as a model and is hosted online for public access.
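The Dice Similarity Coefficient used for validation above has a direct NumPy form:

    import numpy as np

    def dice(a, b):
        """Dice Similarity Coefficient of two binary masks: 0 = no overlap, 1 = complete overlap."""
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0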
A research on radiation calibration of high dynamic range based on the dual channel CMOS
NASA Astrophysics Data System (ADS)
Ma, Kai; Shi, Zhan; Pan, Xiaodong; Wang, Yongsheng; Wang, Jianghua
2017-10-01
A dual-channel complementary metal-oxide semiconductor (CMOS) sensor can produce a high dynamic range (HDR) image by extending the gray-level range through fusion of the high-gain and low-gain channel images of the same frame. In the dual-channel fusion process, the radiation response coefficients of each pixel in both channels of the same frame are used to calculate the gray level of that pixel in the HDR image. Because the radiation response coefficients play a crucial role in image fusion, an effective method of acquiring these parameters is needed. This article presents research on radiation calibration for high dynamic range imaging based on the dual-channel CMOS sensor and describes an experiment designed to calibrate the radiation response coefficients of the sensor used. Finally, the calibrated response parameters are applied to the dual-channel CMOS sensor, verifying the correctness and feasibility of the proposed method.
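Assuming a linear radiation response per channel (gain and offset, an assumption made here for illustration; the paper's calibrated response model may differ), the fusion step can be sketched as:

    import numpy as np

    def fuse_dual_gain(high, low, gain_h, off_h, gain_l, off_l, sat=4000):
        """Fuse same-frame high-gain and low-gain images into one HDR radiance image."""
        rad_h = (high.astype(float) - off_h) / gain_h   # radiance estimate, high-gain channel
        rad_l = (low.astype(float) - off_l) / gain_l    # radiance estimate, low-gain channel
        use_low = high >= sat                           # high-gain pixels saturate first
        return np.where(use_low, rad_l, rad_h)          # low-gain fills in the saturated pixels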
A portable low-cost long-term live-cell imaging platform for biomedical research and education.
Walzik, Maria P; Vollmar, Verena; Lachnit, Theresa; Dietz, Helmut; Haug, Susanne; Bachmann, Holger; Fath, Moritz; Aschenbrenner, Daniel; Abolpour Mofrad, Sepideh; Friedrich, Oliver; Gilbert, Daniel F
2015-02-15
Time-resolved visualization and analysis of slow dynamic processes in living cells has revolutionized many aspects of in vitro cellular studies. However, existing technology applied to time-resolved live-cell microscopy is often immobile, costly and requires a high level of skill to use and maintain. These factors limit its utility to field research and educational purposes. The recent availability of rapid prototyping technology makes it possible to quickly and easily engineer purpose-built alternatives to conventional research infrastructure which are low-cost and user-friendly. In this paper we describe the prototype of a fully automated low-cost, portable live-cell imaging system for time-resolved label-free visualization of dynamic processes in living cells. The device is light-weight (3.6 kg), small (22 × 22 × 22 cm) and extremely low-cost (<€1250). We demonstrate its potential for biomedical use by long-term imaging of recombinant HEK293 cells at varying culture conditions and validate its ability to generate time-resolved data of high quality allowing for analysis of time-dependent processes in living cells. While this work focuses on long-term imaging of mammalian cells, the presented technology could also be adapted for use with other biological specimen and provides a general example of rapidly prototyped low-cost biosensor technology for application in life sciences and education. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
An application of viola jones method for face recognition for absence process efficiency
NASA Astrophysics Data System (ADS)
Rizki Damanik, Rudolfo; Sitanggang, Delima; Pasaribu, Hendra; Siagian, Hendrik; Gulo, Frisman
2018-04-01
Attendance ("absence") records are the documents a company uses to record the arrival time of each employee. The most common problems with a fingerprint machine are slow sensor identification or the sensor failing to recognize a finger. Employees can be late to work because of difficulties with the fingerprint system; they may need about 3-5 minutes to register attendance when a finger is wet or otherwise in poor condition. To overcome this problem, this research utilizes facial recognition for the attendance process. The method used for facial recognition is Viola-Jones. In the processing phase, the RGB face image is converted to a histogram-equalized face image for the subsequent recognition stage. The result of this research is that the attendance process can be completed in less than 1 second, with a maximum head tilt of about ±70° and a distance of 20-200 cm. After implementing facial recognition, the attendance process is more efficient, taking less than 1 minute.
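The detection and preprocessing steps can be sketched with OpenCV's Viola-Jones cascade and histogram equalization; the filename and detector parameters are illustrative, and the identity-matching stage is not shown.

    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    frame = cv2.imread("entrance_camera.jpg")           # hypothetical camera frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)                       # histogram equalization, as in the abstract
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        face = gray[y:y + h, x:x + w]                   # face crop handed to the recognizer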
NASA Astrophysics Data System (ADS)
Muench, R.; Jones, M.; Herndon, K. E.; Bell, J. R.; Anderson, E. R.; Markert, K. N.; Molthan, A.; Adams, E. C.; Shultz, L.; Cherrington, E. A.; Flores, A.; Lucey, R.; Munroe, T.; Layne, G.; Pulla, S. T.; Weigel, A. M.; Tondapu, G.
2017-12-01
On August 25, 2017, Hurricane Harvey made landfall between Port Aransas and Port O'Connor, Texas, bringing with it unprecedented amounts of rainfall and flooding. In times of natural disasters of this nature, emergency responders require timely and accurate information about the hazard in order to assess and plan for disaster response. Due to the extreme flooding impacts associated with Hurricane Harvey, delineations of water extent were crucial to inform resource deployment. Through the USGS's Hazards Data Distribution System, government and commercial vendors were able to acquire and distribute various satellite imagery to analysts to create value-added products that could be used by these emergency responders. Rapid-response water extent maps were created through a collaborative multi-organization and multi-sensor approach. One team of researchers created Synthetic Aperture Radar (SAR) water extent maps using modified Copernicus Sentinel data (2017), processed by ESA. This group used backscatter images, pre-processed by the Alaska Satellite Facility's Hybrid Pluggable Processing Pipeline (HyP3), to identify and apply a threshold that distinguishes water in the image. Quality control was conducted by manually examining the images and correcting for potential errors. Another group of researchers and graduate student volunteers derived water masks from high-resolution DigitalGlobe and SPOT images. Through a system of standardized image processing, quality control measures, and communication channels, the team provided timely and fairly accurate water extent maps to support a larger NASA Disasters Program response. The optical imagery was processed through a combination of band thresholds using the Normalized Difference Water Index (NDWI), Modified Normalized Difference Water Index (MNDWI), Normalized Difference Vegetation Index (NDVI), and cloud masking. Several aspects of the pre-processing and image access were run on internal servers to expedite the provision of images to analysts, who could then focus on manipulating thresholds and quality control checks for maximum accuracy within the time constraints. The combined results of the radar- and optical-derived value-added products, through the coordination of multiple organizations, provided timely information for emergency response and recovery efforts.
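The index-thresholding step used on the optical imagery can be sketched as follows; the threshold value is illustrative, since the analysts tuned thresholds manually per scene.

    import numpy as np

    def ndwi_water_mask(green, nir, threshold=0.0):
        """McFeeters NDWI water mask from green and near-infrared reflectance bands."""
        ndwi = (green - nir) / (green + nir + 1e-9)   # epsilon avoids division by zero
        return ndwi > threshold                        # True where the pixel is likely water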
NASA Astrophysics Data System (ADS)
Baird, Richard
2006-03-01
The mission of the National Institute of Biomedical Imaging and Bioengineering (NIBIB) is to improve human health by promoting the development and translation of emerging technologies in biomedical imaging and bioengineering. To this end, NIBIB supports a coordinated agenda of research programs in advanced imaging technologies and engineering methods that enable fundamental biomedical discoveries across a broad spectrum of biological processes, disorders, and diseases and have significant potential for direct medical application. These research programs dramatically advance the Nation's healthcare by improving the detection, management and, ultimately, the prevention of disease. The research promoted and supported by NIBIB also is strongly synergistic with other NIH Institutes and Centers as well as across government agencies. This presentation will provide an overview of the scientific programs and funding opportunities supported by NIBIB, highlighting those that are of particular importance to the field of medical physics.
Optical design and testing: introduction.
Liang, Chao-Wen; Koshel, John; Sasian, Jose; Breault, Robert; Wang, Yongtian; Fang, Yi Chin
2014-10-10
Optical design and testing has numerous applications in industrial, military, consumer, and medical settings. Assembling a complete imaging or nonimaging optical system may require the integration of optics, mechatronics, lighting technology, optimization, ray tracing, aberration analysis, image processing, tolerance compensation, and display rendering. This issue features original research ranging from the optical design of imaging and nonimaging optical stimuli for human perception, optics applications, bio-optics applications, 3D displays, solar energy systems, and opto-mechatronics, to novel imaging and nonimaging modalities in visible and infrared spectral imaging, modulation transfer function measurement, and innovative interferometry.
A state-of-the-art review on segmentation algorithms in intravascular ultrasound (IVUS) images.
Katouzian, Amin; Angelini, Elsa D; Carlier, Stéphane G; Suri, Jasjit S; Navab, Nassir; Laine, Andrew F
2012-09-01
Over the past two decades, intravascular ultrasound (IVUS) image segmentation has remained a challenge for researchers while the use of this imaging modality is rapidly growing in catheterization procedures and in research studies. IVUS provides cross-sectional grayscale images of the arterial wall and the extent of atherosclerotic plaques with high spatial resolution in real time. In this paper, we review recently developed image processing methods for the detection of media-adventitia and luminal borders in IVUS images acquired with different transducers operating at frequencies ranging from 20 to 45 MHz. We discuss methodological challenges, lack of diversity in reported datasets, and weaknesses of quantification metrics that make IVUS segmentation still an open problem despite all efforts. In conclusion, we call for a common reference database, validation metrics, and ground-truth definition with which new and existing algorithms could be benchmarked.
NASA Technical Reports Server (NTRS)
Bremmer, David M.; Hutcheson, Florence V.; Stead, Daniel J.
2005-01-01
A methodology to eliminate model reflection and system vibration effects from post-processed particle image velocimetry data is presented. Reflection and vibration lead to loss of data and biased velocity calculations in PIV processing. A series of algorithms were developed to alleviate these problems. Reflections emanating from the model surface caused by the laser light sheet are removed from the PIV images by subtracting an image in which only the reflections are visible from all of the images within a data acquisition set. The result is a set of PIV images where only the seeded particles are apparent. Fiduciary marks painted on the surface of the test model were used as reference points in the images. By locating the centroids of these marks it was possible to shift all of the images to a common reference frame. This image alignment procedure, as well as the subtraction of model reflections, is performed by a first algorithm. Once the images have been shifted, they are compared with a background image that was recorded under no-flow conditions. The second and third algorithms find the coordinates of fiduciary marks in the acquisition-set images and the background image and calculate the displacement between these images. The final algorithm shifts all of the images so that the fiduciary mark centroids lie in the same location as the background image centroids. This methodology effectively eliminated the effects of vibration so that unbiased data could be used for PIV processing. The PIV data used for this work were generated at the NASA Langley Research Center Quiet Flow Facility. The experiment entailed flow visualization near the flap side edge region of an airfoil model. Commercial PIV software was used for data acquisition and processing. In this paper, the experiment and the PIV acquisition of the data are described. The methodology used to develop the algorithms for reflection and system vibration removal is stated, and the implementation, testing and validation of these algorithms are presented.
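As an illustration of the two operations just described, here is a minimal Python sketch (an assumption-laden stand-in, not the NASA-developed algorithms) of reflection subtraction and fiducial-centroid alignment, assuming grayscale NumPy frames, a reflections-only image, and a binary fiducial mask are available:

    import numpy as np
    from scipy import ndimage

    def remove_reflections(frames, reflection_image):
        """Subtract a reflections-only image from each PIV frame,
        ideally leaving only the seeded particles visible."""
        return [np.clip(f.astype(float) - reflection_image, 0, None) for f in frames]

    def align_to_background(frame, fiducial_mask, background_centroid):
        """Shift a frame so its fiducial-mark centroid coincides with
        the centroid measured in the no-flow background image."""
        centroid = ndimage.center_of_mass(fiducial_mask)   # (row, col)
        dy, dx = np.subtract(background_centroid, centroid)
        return ndimage.shift(frame, shift=(dy, dx), order=1)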
Imaging and the new biology: What's wrong with this picture?
NASA Astrophysics Data System (ADS)
Vannier, Michael W.
2004-05-01
The Human Genome has been defined, giving us one part of the equation that stems from the central dogma of molecular biology. Despite this awesome scientific achievement, the correspondence between genomics and imaging is weak, since we cannot predict an organism's phenotype from even perfect knowledge of its genetic complement. Biological knowledge comes in several forms, and the genome is perhaps the best known and most completely understood type. Imaging creates another form of biological information, providing the ability to study morphology, growth and development, metabolic processes, and diseases in vitro and in vivo at many levels of scale. The principal challenge in biomedical imaging for the future lies in the need to reconcile the data provided by one or multiple modalities with other forms of biological knowledge, most importantly the genome, proteome, physiome, and other "-omes." To date, the imaging science community has not set a high priority on the unification of their results with genomics, proteomics, and physiological functions in most published work. Images remain relatively isolated from other forms of biological data, impairing our ability to conceive and address many fundamental questions in research and clinical practice. This presentation will explain the challenge of biological knowledge integration in basic research and clinical applications from the standpoint of imaging and image processing. The impediments to progress, including the isolation of the imaging community from the mainstream of new and future biological science, will be identified, so that the critical and immediate need for change can be highlighted.
[Research on non-rigid registration of multi-modal medical image based on Demons algorithm].
Hao, Peibo; Chen, Zhen; Jiang, Shaofeng; Wang, Yang
2014-02-01
Non-rigid medical image registration is a popular subject in medical image research and has important clinical value. In this paper we put forward an improved Demons algorithm, combining a gray-level conservation model with a local structure tensor conservation model to construct a new energy function for the multi-modal registration problem. We then applied the L-BFGS algorithm to optimize the energy function and solve the complex three-dimensional data optimization problem. Finally, we used a multi-scale hierarchical refinement scheme to handle large-deformation registration. The experimental results showed that the proposed algorithm performed well for large-deformation and multi-modal three-dimensional medical image registration.
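The improved energy function and L-BFGS optimization described above are not reproduced here, but the classic Demons scheme they build on is available in SimpleITK. A hedged sketch of that baseline pipeline, with hypothetical file names:

    import SimpleITK as sitk

    fixed = sitk.ReadImage("fixed.nii", sitk.sitkFloat32)    # hypothetical inputs
    moving = sitk.ReadImage("moving.nii", sitk.sitkFloat32)

    # Histogram matching is a common pre-step when intensity ranges differ.
    moving = sitk.HistogramMatching(moving, fixed)

    demons = sitk.DemonsRegistrationFilter()
    demons.SetNumberOfIterations(100)
    demons.SetStandardDeviations(1.5)   # Gaussian smoothing of the displacement field
    displacement = demons.Execute(fixed, moving)

    transform = sitk.DisplacementFieldTransform(displacement)
    resampled = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
    sitk.WriteImage(resampled, "registered.nii")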
Characterization of Biogenic Gas and Mineral Formation Process by Denitrification in Porous Media
NASA Astrophysics Data System (ADS)
Hall, C. A.; Kim, D.; Mahabadi, N.; van Paassen, L. A.
2017-12-01
Biologically mediated processes have been regarded and developed as an alternative approach to traditional ground improvement techniques. Denitrification has been investigated as a potential ground improvement process for liquefaction hazard mitigation. During denitrification, microorganisms reduce nitrate to dinitrogen gas and, under adequate environmental conditions, facilitate calcium carbonate precipitation as a by-product. The formation of dinitrogen gas desaturates soils and allows for potential pore pressure dampening during earthquake events, while precipitation of calcium carbonate can improve the mechanical properties by filling the voids and cementing soil particles. As a result of small changes in gas and mineral phases, the mechanical properties of soils can be significantly affected. Prior research has primarily focused on quantitative analysis of overall residual calcium carbonate mineral and biogenic gas products in lab-scale porous media; however, the distribution of these products at the pore scale has not been well investigated. In this research, denitrification is activated in a microfluidic chip simulating a homogeneous pore structure. The denitrification process is monitored by sequential image capture, and gas and mineral phase changes are evaluated by image processing. Analysis of these images corresponds with previous findings, which demonstrate that biogenic gas behaviour at the pore scale is affected by the balance between reaction, diffusion, and convection rates.
Research on an autonomous vision-guided helicopter
NASA Technical Reports Server (NTRS)
Amidi, Omead; Mesaki, Yuji; Kanade, Takeo
1994-01-01
Integration of computer vision with on-board sensors to autonomously fly helicopters was researched. The key components developed were custom designed vision processing hardware and an indoor testbed. The custom designed hardware provided flexible integration of on-board sensors with real-time image processing resulting in a significant improvement in vision-based state estimation. The indoor testbed provided convenient calibrated experimentation in constructing real autonomous systems.
Grinvald, A
1992-01-01
Long-standing questions about the brain mechanisms underlying perception can finally be resolved by direct visualization of the architecture and function of the mammalian cortex. This advance has been accomplished with the aid of two optical imaging techniques with which one can literally see how the brain functions. The development of this technology required a multi-disciplinary approach integrating brain research with organic chemistry, spectroscopy, biophysics, computer science, optics and image processing. Beyond the technological ramifications, recent research has shed new light on cortical mechanisms underlying sensory perception. Clinical applications of this technology for precise mapping of the cortical surface of patients during neurosurgery have begun. Below is a brief summary of our own research and a description of the technical specifications of the two optical imaging techniques. Like every technique, optical imaging suffers from severe limitations. Here we mostly emphasize some of its advantages relative to the alternative imaging techniques currently in use; the limitations are critically discussed in our recent reviews. For a series of other reviews, see Cohen (1989).
Brain responses to body image stimuli but not food are altered in women with bulimia nervosa
2013-01-01
Background: Research into the neural correlates of bulimia nervosa (BN) psychopathology remains limited. Methods: In this functional magnetic resonance imaging study, 21 BN patients and 23 healthy controls (HCs) completed two paradigms: 1) processing of visual food stimuli and 2) comparing their own appearance with that of slim women. Participants also rated food craving and anxiety levels. Results: Brain activation patterns in response to food cues did not differ between women with and without BN. However, when evaluating themselves against images of slim women, BN patients engaged the insula more and the fusiform gyrus less, compared to HCs, suggesting increased self-focus among women with BN whilst comparing themselves to a 'slim ideal'. In these BN patients, exposure to food and body image stimuli increased self-reported levels of anxiety, but not craving. Conclusions: Our findings suggest that women with BN differ from HCs in the way they process body image, but not in the way they process food stimuli. PMID:24238299
Recent Advances in Techniques for Hyperspectral Image Processing
NASA Technical Reports Server (NTRS)
Plaza, Antonio; Benediktsson, Jon Atli; Boardman, Joseph W.; Brazile, Jason; Bruzzone, Lorenzo; Camps-Valls, Gustavo; Chanussot, Jocelyn; Fauvel, Mathieu; Gamba, Paolo; Gualtieri, Anthony;
2009-01-01
Imaging spectroscopy, also known as hyperspectral imaging, has been transformed in less than 30 years from being a sparse research tool into a commodity product available to a broad user community. Currently, there is a need for standardized data processing techniques able to take into account the special properties of hyperspectral data. In this paper, we provide a seminal view on recent advances in techniques for hyperspectral image processing. Our main focus is on the design of techniques able to deal with the high-dimensional nature of the data and to integrate the spatial and spectral information. Performance of the discussed techniques is evaluated in different analysis scenarios. To satisfy time-critical constraints in specific applications, we also develop efficient parallel implementations of some of the discussed algorithms. Combined, these parts provide an excellent snapshot of the state-of-the-art in those areas, and offer a thoughtful perspective on future potentials and emerging challenges in the design of robust hyperspectral imaging algorithms.
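As one concrete example of a technique that exploits the spectral dimension, the following NumPy sketch implements the classical Spectral Angle Mapper (a textbook method, not necessarily one of the parallel algorithms developed in the paper):

    import numpy as np

    def spectral_angle(cube, reference):
        """Spectral Angle Mapper: angle (radians) between each pixel
        spectrum of an (H, W, B) cube and a reference spectrum (B,).
        Small angles indicate likely matches to the reference material."""
        dots = np.tensordot(cube, reference, axes=([2], [0]))
        norms = np.linalg.norm(cube, axis=2) * np.linalg.norm(reference)
        return np.arccos(np.clip(dots / (norms + 1e-12), -1.0, 1.0))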
USC orthogonal multiprocessor for image processing with neural networks
NASA Astrophysics Data System (ADS)
Hwang, Kai; Panda, Dhabaleswar K.; Haddadi, Navid
1990-07-01
This paper presents the architectural features and imaging applications of the Orthogonal MultiProcessor (OMP) system, which is under construction at the University of Southern California with research funding from NSF and assistance from several industrial partners. The prototype OMP is being built with 16 Intel i860 RISC microprocessors and 256 parallel memory modules using custom-designed spanning buses, which are 2-D interleaved and orthogonally accessed without conflicts. The 16-processor OMP prototype is targeted to achieve 430 MIPS and 600 Mflops, which have been verified by simulation experiments based on the design parameters used. The prototype OMP machine will be initially applied for image processing, computer vision, and neural network simulation applications. We summarize important vision and imaging algorithms that can be restructured with neural network models. These algorithms can efficiently run on the OMP hardware with linear speedup. The ultimate goal is to develop a high-performance Visual Computer (Viscom) for integrated low- and high-level image processing and vision tasks.
High Tech Aids Low Vision: A Review of Image Processing for the Visually Impaired.
Moshtael, Howard; Aslam, Tariq; Underwood, Ian; Dhillon, Baljean
2015-08-01
Recent advances in digital image processing provide promising methods for maximizing the residual vision of the visually impaired. This paper seeks to introduce this field to the readership and describe its current state as found in the literature. A systematic search revealed 37 studies that measure the value of image processing techniques for subjects with low vision. The techniques used are categorized according to their effect and the principal findings are summarized. The majority of participants preferred enhanced images over the original for a wide range of enhancement types. Adapting the contrast and spatial frequency content often improved performance at object recognition and reading speed, as did techniques that attenuate the image background and a technique that induced jitter. A lack of consistency in preference and performance measures was found, as well as a lack of independent studies. Nevertheless, the promising results should encourage further research in order to allow their widespread use in low-vision aids.
Parallel asynchronous systems and image processing algorithms
NASA Technical Reports Server (NTRS)
Coon, D. D.; Perera, A. G. U.
1989-01-01
A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse-coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently, as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity-dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision-system-type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.
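The intensity-dependent summation algorithm is not specified in this abstract, but the pyramid processing structures it mentions can be illustrated with a minimal software analogue (Python with OpenCV) of what such a laminar processor would compute:

    import cv2

    def gaussian_pyramid(image, levels=4):
        """Build a Gaussian pyramid: each level is smoothed and
        downsampled by two, a classic pyramid processing structure."""
        pyramid = [image]
        for _ in range(levels - 1):
            pyramid.append(cv2.pyrDown(pyramid[-1]))
        return pyramid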
Acquisition performance of LAPAN-A3/IPB multispectral imager in real-time mode of operation
NASA Astrophysics Data System (ADS)
Hakim, P. R.; Permala, R.; Jayani, A. P. S.
2018-05-01
LAPAN-A3/IPB satellite was launched in June 2016 and its multispectral imager has been producing Indonesian coverage images. In order to improve its support for remote sensing applications, the imager should produce images of high quality and in high quantity. To improve the quantity of LAPAN-A3/IPB multispectral images captured, image acquisition can be executed in real-time mode from the LAPAN ground station in Bogor when the satellite passes over the western Indonesia region. This research analyses the performance of LAPAN-A3/IPB multispectral imager acquisition in real-time mode, in terms of image quality and quantity, under the assumption of several on-board and ground segment limitations. Results show that in real-time operation mode, the LAPAN-A3/IPB multispectral imager could produce twice as much image coverage compared to recorded mode. However, the images produced in real-time mode will have slightly degraded quality due to the image compression process involved. Based on the several analyses carried out in this research, it is recommended to use real-time acquisition mode whenever possible, except in circumstances that strictly do not allow any quality degradation of the produced images.
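One simple way to quantify the compression-induced degradation mentioned above is the peak signal-to-noise ratio between a losslessly recorded image and its real-time counterpart; a minimal NumPy sketch (the metric is an assumption for illustration, not necessarily the study's evaluation method):

    import numpy as np

    def psnr(reference, test, max_value=255.0):
        """Peak signal-to-noise ratio in dB between a reference image
        (e.g., recorded mode) and a compressed real-time image."""
        diff = reference.astype(np.float64) - test.astype(np.float64)
        mse = np.mean(diff ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(max_value ** 2 / mse)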
Stereo Image Ranging For An Autonomous Robot Vision System
NASA Astrophysics Data System (ADS)
Holten, James R.; Rogers, Steven K.; Kabrisky, Matthew; Cross, Steven
1985-12-01
The principles of stereo vision for three-dimensional data acquisition are well known and can be applied to the problem of an autonomous robot vehicle. Corresponding points in the two images are located, and the location of each point in three-dimensional space can then be calculated using the offset of the points and knowledge of the camera positions and geometry. This research investigates the application of artificial intelligence knowledge representation techniques as a means to apply heuristics and relieve the computational intensity of the low-level image processing tasks. Specifically, a new technique for image feature extraction is presented. This technique, the Queen Victoria Algorithm, uses formal language productions to process the image and characterize its features. These characterized features are then used for stereo image feature registration to obtain the required ranging information. The results can be used by an autonomous robot vision system for environmental modeling and path finding.
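The Queen Victoria Algorithm itself is not spelled out in this abstract; as a reminder of the underlying geometry, the sketch below recovers depth from a rectified stereo pair using OpenCV block matching and assumed camera parameters:

    import cv2
    import numpy as np

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical files
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Block matching locates corresponding points along epipolar lines.
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

    # For a rectified pair: depth = focal_length * baseline / disparity.
    focal_px, baseline_m = 700.0, 0.12   # assumed camera geometry
    valid = disparity > 0
    depth = np.zeros_like(disparity)
    depth[valid] = focal_px * baseline_m / disparity[valid]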
1979-11-01
a generalized cooccurrence matrix. Describing image texture is an important problem in the design of image understanding systems. Applications as … display system design optimization and video signal processing. Based on a study by Southern Research Institute, a number of options were identified … Specification for Target Acquisition Designation System (U), RFP # AMC-DP-AAH-H4020, 12 Apr 77. 4. Terminal Homing Applications of Solid State Image
Kedia, Saurabh; Sharma, Raju; Makharia, Govind K; Ahuja, Vineet; Desai, Devendra; Kandasamy, Devasenathipathy; Eapen, Anu; Ganesan, Karthik; Ghoshal, Uday C; Kalra, Naveen; Karthikeyan, D; Madhusudhan, Kumble Seetharama; Philip, Mathew; Puri, Amarender Singh; Puri, Sunil; Sinha, Saroj K; Banerjee, Rupa; Bhatia, Shobna; Bhat, Naresh; Dadhich, Sunil; Dhali, G K; Goswami, B D; Issar, S K; Jayanthi, V; Misra, S P; Nijhawan, Sandeep; Puri, Pankaj; Sarkar, Avik; Singh, S P; Srivastava, Anshu; Abraham, Philip; Ramakrishna, B S
2017-11-01
The Indian Society of Gastroenterology (ISG) Task Force on Inflammatory Bowel Disease and the Indian Radiological and Imaging Association (IRIA) developed combined ISG-IRIA evidence-based best-practice guidelines for imaging of the small intestine in patients with suspected or known Crohn's disease. These 29 position statements, developed through a modified Delphi process, are intended to serve as reference for teaching, clinical practice, and research.
Research study on stellar X-ray imaging experiment, volume 1
NASA Technical Reports Server (NTRS)
Wilson, H. H.; Vanspeybroeck, L. P.
1972-01-01
The use of microchannel plates as focal plane readout devices and the evaluation of mirrors for X-ray telescopes applied to stellar X-ray imaging is discussed. The microchannel plate outputs were either imaged on a phosphor screen which was viewed by a low light level vidicon or on a wire array which was read out by digitally processing the output of a charge division network attached to the wires. A service life test which was conducted on two image intensifiers is described.
Recognizing 3D Objects from 2D Images Using a Structural Knowledge Base of Generic Views
1988-08-31
technical report. [BIE85] I. Biederman, "Human image understanding: Recent research and a theory", Computer Vision, Graphics, and Image Processing, vol. … "model bases", Technical Report 87-85, COINS Dept, University of Massachusetts, Amherst, MA 01003, August 1987. [BUR87b] Burns, J. B. and L. J. Kitchen, "Recognition in 2D images of 3D objects from large model bases using prediction hierarchies", Proc. IJCAI-10, 1987. [BUR89] J. B. Burns, forthcoming
Stereo-Video Data Reduction of Wake Vortices and Trailing Aircrafts
NASA Technical Reports Server (NTRS)
Alter-Gartenberg, Rachel
1998-01-01
This report presents stereo image theory and the corresponding image processing software developed to analyze stereo imaging data acquired for the wake-vortex hazard flight experiment conducted at NASA Langley Research Center. In this experiment, a leading Lockheed C-130 was equipped with wing-tip smokers to visualize its wing vortices, while a trailing Boeing 737 flew into the wake vortices of the leading airplane. A Rockwell OV-10A airplane, fitted with video cameras under its wings, flew at 400 to 1000 feet above and parallel to the wakes, and photographed the wake interception process for the purpose of determining the three-dimensional location of the trailing aircraft relative to the wake. The report establishes the image-processing tools developed to analyze the video flight-test data, identifies sources of potential inaccuracies, and assesses the quality of the resultant set of stereo data reduction.
Research on tomato seed vigor based on X-ray digital image
NASA Astrophysics Data System (ADS)
Zhao, Xueguan; Gao, Yuanyuan; Wang, Xiu; Li, Cuiling; Wang, Songlin; Feng, Qinghun
2016-10-01
Seed size, internal abnormalities, and damage to tomato seeds all affect germination. The purpose of this paper was to study the relationship between the internal morphology and size of tomato seeds and their germination. A preprocessing algorithm for X-ray images of tomato seeds was studied, and the internal structural characteristics of the seeds were extracted by an image processing algorithm. Using purpose-built image processing software, the cavity area between embryo and endosperm and the whole seed zone were determined. According to differences in the areas of the embryo and endosperm and the internal structural condition, seeds were divided into six categories. Germination tests were then performed on three kinds of tomato seeds, and the relationship between seed vigor, seed size, and internal free cavity was explored. The seedling evaluation test found that X-ray image analysis provides a clear view of the inside of the seed and a sound method for seed morphology research. Among seeds of the same size, the larger the area of the endosperm and embryo, the greater the probability that healthy seedlings would sprout. Mechanical damage adversely affects seed germination, and deteriorated tissue is prone to produce weak and abnormal seedlings.
Assessing clutter reduction in parallel coordinates using image processing techniques
NASA Astrophysics Data System (ADS)
Alhamaydh, Heba; Alzoubi, Hussein; Almasaeid, Hisham
2018-01-01
Information visualization has emerged as an important research field for multidimensional data and correlation analysis in recent years. Parallel coordinates (PCs) are one of the popular techniques to visualize high-dimensional data. A problem with the PC technique is that it suffers from crowding, a clutter which hides important data and obfuscates the information. Earlier research has been conducted to reduce clutter without loss of data content. We introduce the use of image processing techniques as an approach for assessing the performance of clutter reduction techniques in PCs. We use histogram analysis as our first measure, where the mean feature of the color histograms of the possible alternative orderings of coordinates for the PC images is calculated and compared. The second measure is the contrast feature extracted from the texture of PC images based on gray-level co-occurrence matrices. The results show that the best PC image is the one that has the minimal mean value of the color histogram feature and the maximal contrast value of the texture feature. In addition to its simplicity, the proposed assessment method has the advantage of objectively assessing alternative orderings of PC visualization.
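A minimal sketch of the two measures using scikit-image, assuming a grayscale uint8 rendering of the PC plot (the paper uses color histograms; grayscale is a simplification here):

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def clutter_scores(pc_image_gray):
        """Two scores for a rendered parallel-coordinates image
        (uint8): mean of the intensity histogram, and GLCM contrast."""
        hist, _ = np.histogram(pc_image_gray, bins=256, range=(0, 256))
        hist_mean = (np.arange(256) * hist).sum() / max(hist.sum(), 1)

        glcm = graycomatrix(pc_image_gray, distances=[1], angles=[0],
                            levels=256, symmetric=True, normed=True)
        contrast = graycoprops(glcm, "contrast")[0, 0]
        return hist_mean, contrast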
Overview of Digital Forensics Algorithms in Dslr Cameras
NASA Astrophysics Data System (ADS)
Aminova, E.; Trapeznikov, I.; Priorov, A.
2017-05-01
The widespread use of mobile technologies and improvements in digital photo devices have led to more frequent cases of image falsification, including in judicial practice. Consequently, an actual task for up-to-date digital image processing tools is the development of algorithms for determining the source and model of a DSLR (Digital Single Lens Reflex) camera and improving image formation algorithms. Most research in this area is based on the observation that a unique sensor trace of a DSLR camera can be extracted at a certain stage of the imaging process inside the camera. This study focuses on the problem of determining unique features of DSLR cameras based on optical subsystem artifacts and sensor noise.
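A common way to expose such sensor traces is the noise residual left after subtracting a denoised version of an image from the original; the sketch below illustrates this generic PRNU-style approach (an assumed method for illustration, not the authors' algorithm):

    import numpy as np
    from skimage.restoration import denoise_wavelet

    def noise_residual(image):
        """Sensor-noise residual: image minus a denoised copy of itself,
        commonly used to estimate the PRNU fingerprint of a camera."""
        img = image.astype(np.float64) / 255.0
        return img - denoise_wavelet(img, rescale_sigma=True)

    def fingerprint_correlation(residual, fingerprint):
        """Normalized correlation between a residual and a camera
        fingerprint (e.g., averaged residuals from many flat images)."""
        a = residual - residual.mean()
        b = fingerprint - fingerprint.mean()
        return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))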
"Seeing is believing": perspectives of applying imaging technology in discovery toxicology.
Xu, Jinghai James; Dunn, Margaret Condon; Smith, Arthur Russell
2009-11-01
Efficiency and accuracy in addressing drug safety issues proactively are critical in minimizing late-stage drug attritions. Discovery toxicology has become a specialty subdivision of toxicology seeking to effectively provide early predictions and safety assessment in the drug discovery process. Among the many technologies utilized to select safer compounds for further development, in vitro imaging technology is one of the best characterized and validated to provide translatable biomarkers towards clinically-relevant outcomes of drug safety. By carefully applying imaging technologies in genetic, hepatic, and cardiac toxicology, and integrating them with the rest of the drug discovery processes, it was possible to demonstrate significant impact of imaging technology on drug research and development and substantial returns on investment.
Research@ARL. Imaging & Image Processing. Volume 3, Issue 1
2014-01-01
goal, the focal plane arrays (FPAs) the Army deploys must excel in all areas of performance including thermal sensitivity, image resolution, speed of...are available only in relatively small sizes. Further, the difference in thermal expansion coefficients between a CZT substrate and its silicon (Si...read-out integrated circuitry reduces the reliability of large format FPAs due to repeated thermal cycling. Some in the community believed this
ISLE (Image and Signal Processing LISP Environment) reference manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sherwood, R.J.; Searfus, R.M.
1990-01-01
ISLE is a rapid prototyping system for performing image and signal processing. It is designed to meet the needs of a person developing image and signal processing algorithms in a research environment. The image and signal processing modules in ISLE form a very capable package in themselves. They also provide a rich environment for quickly and easily integrating user-written software modules into the package. ISLE is well suited to applications in which there is a need to develop a processing algorithm in an interactive manner. It is straightforward to develop an algorithm, load it into ISLE, apply the algorithm to an image or signal, display the results, then modify the algorithm and repeat the develop-load-apply-display cycle. ISLE consists of a collection of image and signal processing modules integrated into a cohesive package through a standard command interpreter. The ISLE developers elected to concentrate their effort on developing image and signal processing software rather than developing a command interpreter. A COMMON LISP interpreter was selected for the command interpreter because it already has the features desired in a command interpreter, it supports dynamic loading of modules for customization purposes, it supports run-time parameter and argument type checking, it is very well documented, and it is a commercially supported product. This manual is intended to be a reference manual for the ISLE functions. The functions are grouped into a number of categories and briefly discussed in the Function Summary chapter. The full descriptions of the functions and all their arguments are given in the Function Descriptions chapter. 6 refs.
Finding-specific display presets for computed radiography soft-copy reading.
Andriole, K P; Gould, R G; Webb, W R
1999-05-01
Much work has been done to optimize the display of cross-sectional modality imaging examinations for soft-copy reading (i.e., window/level tissue presets, and format presentations such as tile and stack modes, four-on-one, nine-on-one, etc). Less attention has been paid to the display of digital forms of the conventional projection x-ray. The purpose of this study is to assess the utility of providing presets for computed radiography (CR) soft-copy display, based not on the window/level settings, but on processing applied to the image optimized for visualization of specific findings, pathologies, etc (i.e., pneumothorax, tumor, tube location). It is felt that digital display of CR images based on finding-specific processing presets has the potential to: speed reading of digital projection x-ray examinations on soft copy; improve diagnostic efficacy; standardize display across examination type, clinical scenario, important key findings, and significant negatives; facilitate image comparison; and improve confidence in and acceptance of soft-copy reading. Clinical chest images are acquired using an Agfa-Gevaert (Mortsel, Belgium) ADC 70 CR scanner and Fuji (Stamford, CT) 9000 and AC2 CR scanners. Those demonstrating pertinent findings are transferred over the clinical picture archiving and communications system (PACS) network to a research image processing station (Agfa PS5000), where the optimal image-processing settings per finding, pathologic category, etc, are developed in conjunction with a thoracic radiologist, by manipulating the multiscale image contrast amplification (Agfa MUSICA) algorithm parameters. Soft-copy display of images processed with finding-specific settings are compared with the standard default image presentation for 50 cases of each category. Comparison is scored using a 5-point scale with the positive scale denoting the standard presentation is preferred over the finding-specific processing, the negative scale denoting the finding-specific processing is preferred over the standard presentation, and zero denoting no difference. Processing settings have been developed for several findings including pneumothorax and lung nodules, and clinical cases are currently being collected in preparation for formal clinical trials. Preliminary results indicate a preference for the optimized-processing presentation of images over the standard default, particularly by inexperienced radiology residents and referring clinicians.
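Agfa's MUSICA algorithm is proprietary, so the sketch below only illustrates the general idea behind such processing presets: decompose the radiograph into a Laplacian pyramid and compress strong coefficients with a power law so subtle detail is boosted (all parameters hypothetical):

    import cv2
    import numpy as np

    def multiscale_enhance(image, levels=4, gamma=0.7):
        """MUSICA-style multiscale contrast amplification (generic
        illustration, not Agfa's algorithm): nonlinearly remap each
        Laplacian band, then reconstruct."""
        img = image.astype(np.float32)
        gauss = [img]
        for _ in range(levels):
            gauss.append(cv2.pyrDown(gauss[-1]))
        out = gauss[-1]
        for lo, hi in zip(reversed(gauss[:-1]), reversed(gauss[1:])):
            band = lo - cv2.pyrUp(hi, dstsize=(lo.shape[1], lo.shape[0]))
            band = np.sign(band) * np.abs(band) ** gamma  # compress strong edges
            out = cv2.pyrUp(out, dstsize=(lo.shape[1], lo.shape[0])) + band
        return out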
Research on spatial-variant property of bistatic ISAR imaging plane of space target
NASA Astrophysics Data System (ADS)
Guo, Bao-Feng; Wang, Jun-Ling; Gao, Mei-Guo
2015-04-01
The imaging plane of inverse synthetic aperture radar (ISAR) is the projection plane of the target. When an image is formed using range-Doppler theory, the imaging plane may have a spatial-variant property, which changes each scatterer's projection position and results in migration through resolution cells. In this study, we focus on the spatial-variant property of the imaging plane of a three-axis-stabilized space target. The innovative contributions are as follows. 1) The target motion model in orbit is provided based on a two-body model. 2) The instantaneous imaging plane is determined by the method of vector analysis. 3) Three Euler angles are introduced to describe the spatial-variant property of the imaging plane, and the image quality is analyzed. The simulation results confirm the analysis of the spatial-variant property. The research in this study is significant for the selection of the imaging segment, and provides the evidence for the subsequent data processing and compensation algorithms. Project supported by the National Natural Science Foundation of China (Grant No. 61401024), the Shanghai Aerospace Science and Technology Innovation Foundation, China (Grant No. SAST201240), and the Basic Research Foundation of Beijing Institute of Technology (Grant No. 20140542001).
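Under the common monostatic range-Doppler model (a simplification of the bistatic geometry analyzed in the paper), the instantaneous imaging plane follows from the radar line of sight and the relative velocity; a hedged NumPy sketch:

    import numpy as np

    def imaging_plane_normal(r, v):
        """Effective rotation vector for range-Doppler imaging of a
        three-axis-stabilized target: w = (r x v) / |r|^2, with r the
        radar-to-target vector and v the relative velocity. The imaging
        plane contains r and is normal to w (textbook model, not the
        paper's full bistatic derivation)."""
        w = np.cross(r, v) / np.dot(r, r)
        return w / np.linalg.norm(w)

    # Sampling r(t), v(t) along the orbit and tracking this normal
    # (e.g., as Euler angles) exposes the spatial variance of the
    # imaging plane over the observation interval.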
Experimental research of digital holographic microscopic measuring
NASA Astrophysics Data System (ADS)
Zhu, Xueliang; Chen, Feifei; Li, Jicheng
2013-06-01
Digital holography is a new imaging technique developed on the basis of optical holography, digital processing, and computer techniques. It uses a CCD instead of a conventional silver-halide plate to record the hologram, and then reproduces the 3D contour of the object by computer simulation. Compared with traditional optical holography, the whole process offers simpler measurement, lower production cost, faster imaging speed, and the advantage of non-contact real-time measurement. At present, it can be used in fields such as morphology detection of tiny objects, micro-deformation analysis, and biological cell shape measurement, and it is a research hot spot at home and abroad. This paper introduces the basic principles and relevant theories of optical holography and digital holography, and investigates the basic questions that influence the reconstructed images in the recording and reconstruction processes of digital holographic microscopy. In order to obtain a clear digital hologram, we analyzed the optical system structure and discussed the recording distance of the hologram. On the basis of the theoretical studies, we established a measurement system, analyzed the experimental conditions, and adjusted the system accordingly. To achieve precise three-dimensional measurement of tiny objects, we measured a MEMS micro-device as an example, obtained the reconstructed three-dimensional contour, and realized three-dimensional profile measurement of a tiny object. Based on the experimental results, we analyzed the factors affecting the measurement, including the zero-order term and the twin images, the choice of the object and reference light, the recording and reconstruction distances, and the characteristics of the reconstruction light, and the measurement errors were assessed. The results show that the device has a certain reliability.
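For reference, a standard reconstruction step in digital holographic microscopy is numerical propagation by the angular spectrum method; the sketch below is a generic implementation with assumed parameters, not the experimental system described above:

    import numpy as np

    def angular_spectrum_reconstruct(hologram, wavelength, dx, z):
        """Propagate a recorded hologram a distance z by the angular
        spectrum method and return the reconstructed amplitude."""
        ny, nx = hologram.shape
        fx = np.fft.fftfreq(nx, d=dx)
        fy = np.fft.fftfreq(ny, d=dx)
        FX, FY = np.meshgrid(fx, fy)
        # Transfer function; evanescent components are suppressed.
        arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
        H = np.exp(1j * 2 * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0.0)))
        return np.abs(np.fft.ifft2(np.fft.fft2(hologram) * H))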
Zugaj, D; Chenet, A; Petit, L; Vaglio, J; Pascual, T; Piketty, C; Bourdes, V
2018-02-04
Currently, imaging technologies that can accurately assess or provide surrogate markers of the human cutaneous microvessel network are limited. Dynamic optical coherence tomography (D-OCT) allows the detection of blood flow in vivo and visualization of the skin microvasculature. However, image processing is necessary to correct images, filter artifacts, and exclude irrelevant signals. The objective of this study was to develop a novel image processing workflow to enhance the technical capabilities of D-OCT. Single-center, vehicle-controlled study including healthy volunteers aged 18-50 years. A capsaicin solution was applied topically on the subject's forearm to induce local inflammation. Measurements of capsaicin-induced increase in dermal blood flow, within the region of interest, were performed by laser Doppler imaging (LDI) (reference method) and D-OCT. Sixteen subjects were enrolled. A good correlation was shown between D-OCT and LDI, using the image processing workflow. Therefore, D-OCT offers an easy-to-use alternative to LDI, with good repeatability, new robust morphological features (dermal-epidermal junction localization), and quantification of the distribution of vessel size and changes in this distribution induced by capsaicin. The visualization of the vessel network was improved through block filtering and artifact removal. Moreover, the assessment of vessel size distribution allows a fine analysis of the vascular patterns. The newly developed image processing workflow enhances the technical capabilities of D-OCT for the accurate detection and characterization of microcirculation in the skin. A direct clinical application of this image processing workflow is the quantification of the effect of topical treatment on skin vascularization. © 2018 The Authors. Skin Research and Technology Published by John Wiley & Sons Ltd.
Dixit, Sudeepa; Fox, Mark; Pal, Anupam
2014-01-01
Magnetic resonance imaging (MRI) has advantages for the assessment of gastrointestinal structures and functions; however, processing MRI data is time-consuming, and this has limited uptake to a few specialist centers. This study introduces a semiautomatic image processing system for rapid analysis of gastrointestinal MRI. For assessment of simpler regions of interest (ROI), such as the stomach, the system generates virtual images along arbitrary planes that intersect the ROI edges in the original images. This generates seed points that are joined automatically to form contours on each adjacent two-dimensional image and reconstructed in three dimensions (3D). An alternative thresholding approach is available for rapid assessment of complex structures like the small intestine. For assessment of dynamic gastrointestinal function, such as gastric accommodation and emptying, the initial 3D reconstruction is used as a reference to process adjacent image stacks automatically. This generates four-dimensional (4D) reconstructions of dynamic volume change over time. Compared with manual processing, this semiautomatic system reduced the user input required to analyze an MRI gastric emptying study (estimated 100 vs. 10,000 mouse clicks). This analysis was not subject to the variation in volume measurements seen between three human observers. In conclusion, the image processing platform presented processed large volumes of MRI data, such as that produced by gastric accommodation and emptying studies, with minimal user input. 3D and 4D reconstructions of the stomach and, potentially, other gastrointestinal organs are produced faster and more accurately than manual methods. This system will facilitate the application of MRI in gastrointestinal research and clinical practice. PMID:25540229
Design of polarization imaging system based on CIS and FPGA
NASA Astrophysics Data System (ADS)
Zeng, Yan-an; Liu, Li-gang; Yang, Kun-tao; Chang, Da-ding
2008-02-01
As polarization is an important characteristic of light, polarization image detection is a new image detection technology combining polarimetry and image processing. In contrast to traditional image detection based on ray radiation, polarization image detection can acquire much important information that traditional image detection cannot. Polarization image detection will be widely used in both civilian and military fields. Because it can resolve problems that traditional image detection cannot, it has been researched widely around the world. This paper first introduces the physical theory of polarization image detection, then describes image collection and polarization image processing based on a CIS (CMOS image sensor) and an FPGA. The polarization imaging system comprises hardware and software. The hardware includes the CMOS image sensor drive module, the VGA display module, the SRAM access module, and the real-time image data collection system based on the FPGA. The circuit diagram and PCB were designed. Stokes vector and polarization angle computation are analyzed in the software part. The floating-point multiplications of the Stokes vector computation are optimized into shift and addition operations only. The experimental results show that the real-time image collection system can collect and display image data from the CMOS image sensor in real time.
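For context, the Stokes arithmetic such a system implements can be written compactly from four polarizer-angle images; the textbook formulation below is a sketch (the FPGA described above reduces the corresponding floating-point multiplications to shifts and additions):

    import numpy as np

    def stokes_from_polarizers(i0, i45, i90, i135):
        """Linear Stokes parameters from images taken behind
        polarizers at 0, 45, 90, and 135 degrees."""
        s0 = 0.5 * (i0 + i45 + i90 + i135)            # total intensity
        s1 = i0 - i90
        s2 = i45 - i135
        dolp = np.sqrt(s1**2 + s2**2) / (s0 + 1e-12)  # degree of linear polarization
        aop = 0.5 * np.arctan2(s2, s1)                # angle of polarization
        return s0, s1, s2, dolp, aop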
The Wide-Field Imaging Interferometry Testbed (WIIT): Recent Progress and Results
NASA Technical Reports Server (NTRS)
Rinehart, Stephen A.; Frey, Bradley J.; Leisawitz, David T.; Lyon, Richard G.; Maher, Stephen F.; Martino, Anthony J.
2008-01-01
Continued research with the Wide-Field Imaging Interferometry Testbed (WIIT) has achieved several important milestones. We have moved WIIT into the Advanced Interferometry and Metrology (AIM) Laboratory at Goddard, and have characterized the testbed in this well-controlled environment. The system is now completely automated and we are in the process of acquiring large data sets for analysis. In this paper, we discuss these new developments and outline our future research directions. The WIIT testbed, combined with new data analysis techniques and algorithms, provides a demonstration of the technique of wide-field interferometric imaging, a powerful tool for future space-borne interferometers.
Ex-vivo imaging of excised tissue using vital dyes and confocal microscopy
Johnson, Simon; Rabinovitch, Peter
2012-01-01
Vital dyes routinely used for staining cultured cells can also be used to stain and image live tissue slices ex-vivo. Staining tissue with vital dyes allows researchers to collect structural and functional data simultaneously and can be used for qualitative or quantitative fluorescent image collection. The protocols presented here are useful for structural and functional analysis of viable properties of cells in intact tissue slices, allowing for the collection of data in a structurally relevant environment. With these protocols, vital dyes can be applied as a research tool to disease processes and properties of tissue not amenable to cell culture based studies. PMID:22752953
Applicability of common measures in multifocus image fusion comparison
NASA Astrophysics Data System (ADS)
Vajgl, Marek
2017-11-01
Image fusion is an image processing area aimed at fusing multiple input images to achieve an output image that is in some respect better than each of the input ones. In the case of "multifocus fusion", the input images capture the same scene but differ in focus distance. The aim is to obtain an image which is sharp in all its areas. There are several different approaches and methods used to solve this problem; however, it is a common question which one is the best. This work describes research covering the field of common measures, asking whether some of them can be used as a quality measure for evaluating fusion results.
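As a baseline against which such quality measures can be exercised, here is a naive sharpness-select fusion for grayscale inputs (variance-of-Laplacian selection; one common simple approach, not one of the methods evaluated in this work):

    import cv2
    import numpy as np

    def fuse_multifocus(images, ksize=9):
        """Naive multifocus fusion: at each pixel, keep the input whose
        local Laplacian energy (a sharpness proxy) is highest."""
        stack = np.stack([img.astype(np.float32) for img in images])
        sharp = []
        for img in stack:
            lap = cv2.Laplacian(img, cv2.CV_32F)
            sharp.append(cv2.blur(lap * lap, (ksize, ksize)))  # local energy
        best = np.argmax(np.stack(sharp), axis=0)
        return np.take_along_axis(stack, best[None, ...], axis=0)[0]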
NASA Astrophysics Data System (ADS)
Tatebe, Hironobu; Kato, Kunihito; Yamamoto, Kazuhiko; Katsuta, Yukio; Nonaka, Masahiko
2005-12-01
Nowadays, many evaluation methods using image processing have been proposed for the food industry. These methods are becoming a new kind of evaluation alongside the sensory tests and solid-state measurements used for quality evaluation. An advantage of image processing is that it allows objective evaluation. The goal of our research is structure evaluation of sponge cake using image processing. In this paper, we propose a feature extraction method for the bubble structure of sponge cake. Analysis of the bubble structure is one of the important properties for understanding the characteristics of the cake from an image. To capture the cake images, we first cut the cakes and scanned their surfaces with a CIS scanner. Because the depth of field of this type of scanner is very shallow, the bubble regions of the surface have low gray-scale values and appear blurred. We extracted bubble regions from the surface images based on these features. First, the input image is binarized, and the bubble features are extracted by morphological analysis. To evaluate the feature extraction results, we compared their correlation with the "size of the bubble" scores from the sensory test. The results show that bubble extraction using morphological analysis gives good correlation, indicating that our method performs as well as the subjective evaluation.
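A minimal scikit-image sketch of the binarize-then-morphology pipeline described above (the threshold choice and structuring-element size are assumptions, not the authors' settings):

    import numpy as np
    from skimage import filters, measure, morphology

    def bubble_stats(cake_gray):
        """Extract bubble regions from a scanned cake-surface image:
        bubbles appear dark and blurred, so threshold low intensities,
        clean up with morphological opening, and measure region areas."""
        mask = cake_gray < filters.threshold_otsu(cake_gray)
        mask = morphology.binary_opening(mask, morphology.disk(2))
        labels = measure.label(mask)
        areas = [r.area for r in measure.regionprops(labels)]
        return (np.mean(areas) if areas else 0.0), len(areas)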
Instant Grainification: Real-Time Grain-Size Analysis from Digital Images in the Field
NASA Astrophysics Data System (ADS)
Rubin, D. M.; Chezar, H.
2007-12-01
Over the past few years, digital cameras and underwater microscopes have been developed to collect in-situ images of sand-sized bed sediment, and software has been developed to measure grain size from those digital images (Chezar and Rubin, 2004; Rubin, 2004; Rubin et al., 2006). Until now, all image processing and grain-size analysis was done back in the office, where images were uploaded from cameras and processed on desktop computers. Computer hardware has become small and rugged enough to process images in the field, which for the first time allows real-time grain-size analysis of sand-sized bed sediment. We present such a system consisting of a weatherproof tablet computer, open-source image-processing software (autocorrelation code of Rubin, 2004, running under Octave and Cygwin), and a digital camera with macro lens. Chezar, H., and Rubin, D., 2004, Underwater microscope system: U.S. Patent and Trademark Office, patent number 6,680,795, January 20, 2004. Rubin, D.M., 2004, A simple autocorrelation algorithm for determining grain size from digital images of sediment: Journal of Sedimentary Research, v. 74, p. 160-165. Rubin, D.M., Chezar, H., Harney, J.N., Topping, D.J., Melis, T.S., and Sherwood, C.R., 2006, Underwater microscope for measuring spatial and temporal changes in bed-sediment grain size: USGS Open-File Report 2006-1360.
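In the spirit of the autocorrelation algorithm of Rubin (2004), the sketch below computes the horizontal autocorrelation curve whose decay rate, once calibrated against sieved samples, yields grain size (a simplified illustration, not the published Octave code):

    import numpy as np

    def autocorrelation_curve(image, max_lag=30):
        """Correlation of a sediment image with itself at horizontal
        offsets of 1..max_lag pixels: coarse sand decorrelates slowly,
        fine sand quickly."""
        img = image.astype(np.float64)
        img = (img - img.mean()) / img.std()
        curve = []
        for lag in range(1, max_lag + 1):
            a, b = img[:, :-lag].ravel(), img[:, lag:].ravel()
            curve.append(float(np.mean(a * b)))
        return np.array(curve)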
NeuroSeek dual-color image processing infrared focal plane array
NASA Astrophysics Data System (ADS)
McCarley, Paul L.; Massie, Mark A.; Baxter, Christopher R.; Huynh, Buu L.
1998-09-01
Several technologies have been developed in recent years to advance the state of the art of IR sensor systems, including affordable dual-color focal planes, on-focal-plane biologically inspired image and signal processing techniques, and spectral sensing techniques. Pacific Advanced Technology (PAT) and the Air Force Research Lab Munitions Directorate have developed a system which incorporates the best of these capabilities into a single device. The 'NeuroSeek' device integrates these technologies into an IR focal plane array (FPA) which combines multicolor midwave-IR/longwave-IR radiometric response with on-focal-plane 'smart' neuromorphic analog image processing. The readout and processing very-large-scale-integration chip developed under this effort will be hybridized to a dual-color detector array to produce the NeuroSeek FPA, which will have the capability to fuse multiple pixel-based sensor inputs directly on the focal plane. Great advantages are afforded by application of massively parallel processing algorithms to image data in the analog domain; the high speed and low power consumption of this device mimic operations performed in the human retina.
Lerner, Thomas R.; Burden, Jemima J.; Nkwe, David O.; Pelchen-Matthews, Annegret; Domart, Marie-Charlotte; Durgan, Joanne; Weston, Anne; Jones, Martin L.; Peddie, Christopher J.; Carzaniga, Raffaella; Florey, Oliver; Marsh, Mark; Gutierrez, Maximiliano G.
2017-01-01
The processes of life take place in multiple dimensions, but imaging these processes in even three dimensions is challenging. Here, we describe a workflow for 3D correlative light and electron microscopy (CLEM) of cell monolayers using fluorescence microscopy to identify and follow biological events, combined with serial blockface scanning electron microscopy to analyse the underlying ultrastructure. The workflow encompasses all steps from cell culture to sample processing, imaging strategy, and 3D image processing and analysis. We demonstrate successful application of the workflow to three studies, each aiming to better understand complex and dynamic biological processes, including bacterial and viral infections of cultured cells and formation of entotic cell-in-cell structures commonly observed in tumours. Our workflow revealed new insight into the replicative niche of Mycobacterium tuberculosis in primary human lymphatic endothelial cells, HIV-1 in human monocyte-derived macrophages, and the composition of the entotic vacuole. The broad application of this 3D CLEM technique will make it a useful addition to the correlative imaging toolbox for biomedical research. PMID:27445312
The Spectral Image Processing System (SIPS): Software for integrated analysis of AVIRIS data
NASA Technical Reports Server (NTRS)
Kruse, F. A.; Lefkoff, A. B.; Boardman, J. W.; Heidebrecht, K. B.; Shapiro, A. T.; Barloon, P. J.; Goetz, A. F. H.
1992-01-01
The Spectral Image Processing System (SIPS) is a software package developed by the Center for the Study of Earth from Space (CSES) at the University of Colorado, Boulder, in response to a perceived need to provide integrated tools for analysis of imaging spectrometer data both spectrally and spatially. SIPS was specifically designed to deal with data from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and the High Resolution Imaging Spectrometer (HIRIS), but was tested with other datasets including the Geophysical and Environmental Research Imaging Spectrometer (GERIS), GEOSCAN images, and Landsat TM. SIPS was developed using the 'Interactive Data Language' (IDL). It takes advantage of high speed disk access and fast processors running under the UNIX operating system to provide rapid analysis of entire imaging spectrometer datasets. SIPS allows analysis of single or multiple imaging spectrometer data segments at full spatial and spectral resolution. It also allows visualization and interactive analysis of image cubes derived from quantitative analysis procedures such as absorption band characterization and spectral unmixing. SIPS consists of three modules: SIPS Utilities, SIPS_View, and SIPS Analysis. SIPS version 1.1 is described below.
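As an example of the quantitative analysis procedures mentioned, linear spectral unmixing of one pixel can be written generically with SciPy's non-negative least squares (an illustrative formulation, not SIPS's own implementation):

    import numpy as np
    from scipy.optimize import nnls

    def unmix_pixel(spectrum, endmembers):
        """Solve E a = s for non-negative abundances a, where the
        columns of E are endmember spectra and s is the pixel spectrum."""
        abundances, _residual = nnls(endmembers, spectrum)
        return abundances / max(abundances.sum(), 1e-12)  # sum-to-one normalization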
Human Terrain: A Tactical Issue or a Strategic C4I Problem?
2008-05-20
C4I" 20-21 May 2008, George Mason University, Fairfax, Virginia Campus, The original document contains color images . 14. ABSTRACT 15. SUBJECT TERMS...can modify and control it – Terrain is many things: typography ; geology and soil type; it’s natural coverage (forests); it’s roadways, rail lines... image of “gun toting” field researchers who are doing military stuff and not research and in the process are poisoning the environment for real field