Imaging through Fog Using Polarization Imaging in the Visible/NIR/SWIR Spectrum
2017-01-11
…few haze effects as possible. One post-processing step on the image completes the image dehazing (Figure 7: basic architecture of post-processing techniques to recover a dehazed image from a raw image). This first study was limited to…
Katayama, R; Sakai, S; Sakaguchi, T; Maeda, T; Takada, K; Hayabuchi, N; Morishita, J
2008-07-20
PURPOSE/AIM OF THE EXHIBIT: The purpose of this exhibit is: 1. To explain "resampling", an image data processing step performed by digital radiographic systems based on flat-panel detectors (FPDs). 2. To show the influence of "resampling" on the basic imaging properties. 3. To present accurate measurement methods for the basic imaging properties of the FPD system. The exhibit covers: 1. The relationship between the matrix size of the output image and that of the image data acquired on the FPD, which changes automatically depending on the selected image size (FOV). 2. An explanation of the "resampling" image data processing. 3. Evaluation results of the basic imaging properties of the FPD system using two types of DICOM images to which "resampling" was applied: characteristic curves, presampled MTFs, noise power spectra, and detective quantum efficiencies. CONCLUSION/SUMMARY: The major points of the exhibit are as follows: 1. The influence of "resampling" should not be disregarded in the evaluation of the basic imaging properties of the flat-panel detector system. 2. The basic imaging properties should be measured using DICOM images to which no "resampling" has been applied.
Image processing in forensic pathology.
Oliver, W R
1998-03-01
Image processing applications in forensic pathology are becoming increasingly important. This article introduces basic concepts in image processing as applied to problems in forensic pathology in a non-mathematical context. Discussions of contrast enhancement, digital encoding, compression, deblurring, and other topics are presented.
Methods in Astronomical Image Processing
NASA Astrophysics Data System (ADS)
Jörsäter, S.
A Brief Introductory Note History of Astronomical Imaging Astronomical Image Data Images in Various Formats Digitized Image Data Digital Image Data Philosophy of Astronomical Image Processing Properties of Digital Astronomical Images Human Image Processing Astronomical vs. Computer Science Image Processing Basic Tools of Astronomical Image Processing Display Applications Calibration of Intensity Scales Calibration of Length Scales Image Re-shaping Feature Enhancement Noise Suppression Noise and Error Analysis Image Processing Packages: Design of AIPS and MIDAS AIPS MIDAS Reduction of CCD Data Bias Subtraction Clipping Preflash Subtraction Dark Subtraction Flat Fielding Sky Subtraction Extinction Correction Deconvolution Methods Rebinning/Combining Summary and Prospects for the Future
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu, J; Washington University in St Louis, St Louis, MO; Li, H. Harold
Purpose: In RT patient setup 2D images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g., Mosaiq and ARIA, require manual selection of image processing filters and parameters, which is inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance 2D RT image contrast to allow automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is to automatically select the optimal parameters by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal, 2) high-pass filtering by subtracting the Gaussian-smoothed result, and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor, and the CLAHE algorithm block size and clip limit. The goal of the optimization is to maximize the entropy of the processed result. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by basic window-level adjustment, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and significantly outperforms the basic CLAHE algorithm and the manual window-level adjustment currently used in clinical 2D image review software tools.
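The pipeline described (background removal, Gaussian high-pass, CLAHE, entropy-maximizing parameter search) is straightforward to prototype with off-the-shelf tools. Below is a minimal sketch assuming scikit-image and SciPy, with a coarse grid search standing in for the interior-point optimizer the authors used; function names and parameter grids are illustrative.

```python
# Sketch of entropy-driven contrast enhancement in the spirit of the abstract:
# Gaussian high-pass plus CLAHE, with parameters chosen to maximize entropy.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import exposure, measure

def enhance(img, sigma, clip_limit):
    """High-pass step (subtract Gaussian-smoothed image) followed by CLAHE."""
    img = img.astype(np.float64)
    img = (img - img.min()) / (np.ptp(img) + 1e-12)        # scale to [0, 1]
    highpass = np.clip(img - gaussian_filter(img, sigma) + 0.5, 0.0, 1.0)
    return exposure.equalize_adapthist(highpass, clip_limit=clip_limit)

def best_enhancement(img, sigmas=(2, 5, 10), clips=(0.01, 0.02, 0.05)):
    """Choose (sigma, clip_limit) that maximizes entropy of the result."""
    scored = [(measure.shannon_entropy(enhance(img, s, c)), s, c)
              for s in sigmas for c in clips]
    _, s, c = max(scored)
    return enhance(img, s, c)
```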
Image reconstruction: an overview for clinicians.
Hansen, Michael S; Kellman, Peter
2015-03-01
Image reconstruction plays a critical role in the clinical use of magnetic resonance imaging (MRI). The MRI raw data is not acquired in image space and the role of the image reconstruction process is to transform the acquired raw data into images that can be interpreted clinically. This process involves multiple signal processing steps that each have an impact on the image quality. This review explains the basic terminology used for describing and quantifying image quality in terms of signal-to-noise ratio and point spread function. In this context, several commonly used image reconstruction components are discussed. The image reconstruction components covered include noise prewhitening for phased array data acquisition, interpolation needed to reconstruct square pixels, raw data filtering for reducing Gibbs ringing artifacts, Fourier transforms connecting the raw data with image space, and phased array coil combination. The treatment of phased array coils includes a general explanation of parallel imaging as a coil combination technique. The review is aimed at readers with no signal processing experience and should enable them to understand what role basic image reconstruction steps play in the formation of clinical images and how the resulting image quality is described. © 2014 Wiley Periodicals, Inc.
Tse computers. [ultrahigh speed optical processing for two dimensional binary image
NASA Technical Reports Server (NTRS)
Schaefer, D. H.; Strong, J. P., III
1977-01-01
An ultra-high-speed computer that utilizes binary images as its basic computational entity is being developed. The basic logic components perform thousands of operations simultaneously. Technologies of the fiber optics, display, thin film, and semiconductor industries are being utilized in the building of the hardware.
Digital techniques for processing Landsat imagery
NASA Technical Reports Server (NTRS)
Green, W. B.
1978-01-01
An overview of the basic techniques used to process Landsat images with a digital computer, and the VICAR image processing software developed at JPL and available to users through the NASA sponsored COSMIC computer program distribution center is presented. Examples of subjective processing performed to improve the information display for the human observer, such as contrast enhancement, pseudocolor display and band rationing, and of quantitative processing using mathematical models, such as classification based on multispectral signatures of different areas within a given scene and geometric transformation of imagery into standard mapping projections are given. Examples are illustrated by Landsat scenes of the Andes mountains and Altyn-Tagh fault zone in China before and after contrast enhancement and classification of land use in Portland, Oregon. The VICAR image processing software system which consists of a language translator that simplifies execution of image processing programs and provides a general purpose format so that imagery from a variety of sources can be processed by the same basic set of general applications programs is described.
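Classification from multispectral signatures, as described above, can be illustrated with a minimum-distance classifier. This is a hedged sketch of the general technique only, not the VICAR implementation; the signatures and scene are invented for the example.

```python
# Minimal sketch of multispectral classification by minimum distance to
# class-mean "spectral signatures", a common textbook approach of the era.
import numpy as np

def classify(image, signatures):
    """image: (H, W, bands); signatures: (n_classes, bands) mean spectra.
    Returns an (H, W) map of class indices."""
    pixels = image.reshape(-1, image.shape[-1]).astype(np.float64)
    # Squared Euclidean distance from every pixel to every class mean
    d2 = ((pixels[:, None, :] - signatures[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1).reshape(image.shape[:2])

# Example: two 4-band signatures (e.g., water vs. vegetation), random scene
rng = np.random.default_rng(0)
scene = rng.uniform(0, 255, (64, 64, 4))
sigs = np.array([[30, 25, 20, 10], [40, 60, 50, 120]], dtype=float)
label_map = classify(scene, sigs)
```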
Synthetic Foveal Imaging Technology
NASA Technical Reports Server (NTRS)
Hoenk, Michael; Monacos, Steve; Nikzad, Shouleh
2009-01-01
Synthetic Foveal Imaging Technology (SyFT) is an emerging discipline of image capture and image-data processing that offers the prospect of greatly increased capabilities for real-time processing of large, high-resolution images (including mosaic images) for such purposes as automated recognition and tracking of moving objects of interest. SyFT offers a solution to the image-data processing problem arising from the proposed development of gigapixel mosaic focal-plane image-detector assemblies for very wide field-of-view imaging with high resolution for detecting and tracking sparse objects or events within narrow subfields of view. In order to identify and track the objects or events without the means of dynamic adaptation to be afforded by SyFT, it would be necessary to post-process data from an image-data space consisting of terabytes of data. Such post-processing would be time-consuming and could, as a consequence, result in missing significant events: events that could not be observed at all because of their time evolution, or that could not be observed at the required fidelity without real-time adaptations such as adjusting focal-plane operating conditions or aiming the focal plane in different directions to track them. The basic concept of foveal imaging is straightforward: In imitation of a natural eye, a foveal-vision image sensor is designed to offer higher resolution in a small region of interest (ROI) within its field of view. Foveal vision reduces the amount of unwanted information that must be transferred from the image sensor to external image-data-processing circuitry. The aforementioned basic concept is not new in itself: indeed, image sensors based on these concepts have been described in several previous NASA Tech Briefs articles. Active-pixel integrated-circuit image sensors that can be programmed in real time to effect foveal artificial vision on demand are one such example. What is new in SyFT is a synergistic combination of recent advances in foveal imaging, computing, and related fields, along with a generalization of the basic foveal-vision concept to admit a synthetic fovea that is not restricted to one contiguous region of an image.
Hangiandreou, Nicholas J
2003-01-01
Ultrasonography (US) has been used in medical imaging for over half a century. Current US scanners are based largely on the same basic principles used in the initial devices for human imaging. Modern equipment uses a pulse-echo approach with a brightness-mode (B-mode) display. Fundamental aspects of the B-mode imaging process include basic ultrasound physics, interactions of ultrasound with tissue, ultrasound pulse formation, scanning the ultrasound beam, and echo detection and signal processing. Recent technical innovations that have been developed to improve the performance of modern US equipment include the following: tissue harmonic imaging, spatial compound imaging, extended field of view imaging, coded pulse excitation, electronic section focusing, three-dimensional and four-dimensional imaging, and the general trend toward equipment miniaturization. US is a relatively inexpensive, portable, safe, and real-time modality, all of which make it one of the most widely used imaging modalities in medicine. Although B-mode US is sometimes referred to as a mature technology, this modality continues to experience a significant evolution in capability with even more exciting developments on the horizon. Copyright RSNA, 2003
Yang, Xue; Li, Xue-You; Li, Jia-Guo; Ma, Jun; Zhang, Li; Yang, Jan; Du, Quan-Ye
2014-02-01
The fast Fourier transform (FFT) is a basic approach to remote sensing image processing. With the improvement of remote sensing image capture capacity, featuring hyperspectral, high spatial resolution, and high temporal resolution data, how to use FFT technology to efficiently process huge remote sensing images has become a critical step and research hot spot in current image processing technology. The FFT algorithm, one of the basic algorithms of image processing, can be used for stripe noise removal, image compression, image registration, etc. in remote sensing image processing. The CUFFT library is a GPU-based FFT algorithm library, while FFTW is an FFT algorithm library developed for CPUs on the PC platform and is currently the fastest CPU-based FFT function library. However, both share a common problem: once the available (video) memory is smaller than the image, out-of-memory or memory-overflow errors occur when using either method to compute an image FFT. To address this problem, a GPU- and partitioning-technology-based Huge Remote sensing image Fast Fourier Transform (HRFFT) algorithm is proposed in this paper. By improving the FFT algorithm in the CUFFT library, the out-of-memory and memory-overflow problems are solved. Moreover, the method is shown to be sound by experiments with CCD images from the HJ-1A satellite. When applied to practical image processing, it improves the image processing results and speeds up the processing, saving computation time and achieving sound results.
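The partitioning idea can be illustrated on a CPU with NumPy: a 2D FFT factors into 1D FFTs along rows followed by 1D FFTs along columns, so each pass can stream through an on-disk image in chunks that fit in memory. This is a sketch of the decomposition only, not the paper's GPU-based HRFFT; file name and sizes are illustrative.

```python
# Out-of-core 2D FFT via the row-column decomposition, streaming through a
# memory-mapped file in chunks so the whole image never has to fit in RAM.
import numpy as np

def fft2_chunked(path, shape, chunk=256):
    data = np.memmap(path, dtype=np.complex64, mode="r+", shape=shape)
    for r in range(0, shape[0], chunk):                # pass 1: row FFTs
        data[r:r+chunk] = np.fft.fft(data[r:r+chunk], axis=1)
    for c in range(0, shape[1], chunk):                # pass 2: column FFTs
        data[:, c:c+chunk] = np.fft.fft(data[:, c:c+chunk], axis=0)
    data.flush()

# Usage: write the image to disk as complex64, then transform in place.
img = np.random.rand(1024, 1024).astype(np.complex64)
img.tofile("big_image.bin")
fft2_chunked("big_image.bin", (1024, 1024))
out = np.memmap("big_image.bin", dtype=np.complex64, mode="r",
                shape=(1024, 1024))
assert np.allclose(out, np.fft.fft2(img), rtol=1e-3, atol=1e-1)
```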
NASA Astrophysics Data System (ADS)
Luo, Y.; Nissen-Meyer, T.; Morency, C.; Tromp, J.
2008-12-01
Seismic imaging in the exploration industry is often based upon ray-theoretical migration techniques (e.g., Kirchhoff) or other ideas which neglect some fraction of the seismic wavefield (e.g., wavefield continuation for acoustic-wave first arrivals) in the inversion process. In a companion paper we discuss the possibility of solving the full physical forward problem (i.e., including visco- and poroelastic, anisotropic media) using the spectral-element method. With such a tool at hand, we can readily apply the adjoint method to tomographic inversions, i.e., iteratively improving an initial 3D background model to fit the data. In the context of this inversion process, we draw connections between kernels in adjoint tomography and basic imaging principles in migration. We show that the images obtained by migration are nothing but particular kinds of adjoint kernels (mainly density kernels). Migration is basically a first step in the iterative inversion process of adjoint tomography. We apply the approach to basic 2D problems involving layered structures, overthrusting faults, topography, salt domes, and poroelastic regions.
The application of digital techniques to the analysis of metallurgical experiments
NASA Technical Reports Server (NTRS)
Rathz, T. J.
1977-01-01
The application of a specific digital computer system (known as the Image Data Processing System) to the analysis of three NASA-sponsored metallurgical experiments is discussed in some detail. The basic hardware and software components of the Image Data Processing System are presented. Many figures are presented in the discussion of each experimental analysis in an attempt to show the accuracy and speed that the Image Data Processing System affords in analyzing photographic images dealing with metallurgy, and in particular with material processing.
NASA Technical Reports Server (NTRS)
Allen, Carlton; Jakes, Petr; Jaumann, Ralf; Marshall, John; Moses, Stewart; Ryder, Graham; Saunders, Stephen; Singer, Robert
1996-01-01
The field geology/process group examined the basic operations of a terrestrial field geologist and the manner in which these operations could be transferred to a planetary lander. Four basic requirements for robotic field geology were determined: geologic content; surface vision; mobility; and manipulation. Geologic content requires a combination of orbital and descent imaging. Surface vision requirements include range, resolution, stereo, and multispectral imaging. The minimum mobility for useful field geology depends on the scale of orbital imagery. Manipulation requirements include exposing unweathered surfaces, screening samples, and bringing samples in contact with analytical instruments. To support these requirements, several advanced capabilities are recommended for future development. Capabilities include near-infrared reflectance spectroscopy, hyperspectral imaging, multispectral microscopy, artificial intelligence in support of imaging, X-ray diffraction, X-ray fluorescence, and rock chipping.
IMAGES: An interactive image processing system
NASA Technical Reports Server (NTRS)
Jensen, J. R.
1981-01-01
The IMAGES interactive image processing system was created specifically for undergraduate remote sensing education in geography. The system is interactive, relatively inexpensive to operate, almost hardware independent, and responsive to numerous users at one time in a time-sharing mode. Most important, it provides a medium whereby theoretical remote sensing principles discussed in lecture may be reinforced in laboratory as students perform computer-assisted image processing. In addition to its use in academic and short course environments, the system has also been used extensively to conduct basic image processing research. The flow of information through the system is discussed including an overview of the programs.
Photogrammetry Toolbox Reference Manual
NASA Technical Reports Server (NTRS)
Liu, Tianshu; Burner, Alpheus W.
2014-01-01
Specialized photogrammetric and image processing MATLAB functions useful for wind tunnel and other ground-based testing of aerospace structures are described. These functions include single view and multi-view photogrammetric solutions, basic image processing to determine image coordinates, 2D and 3D coordinate transformations and least squares solutions, spatial and radiometric camera calibration, epipolar relations, and various supporting utility functions.
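As one example of the coordinate-transformation and least-squares machinery such a toolbox provides (sketched here in Python rather than MATLAB, with illustrative names), a 2D similarity transform can be fit to matched image coordinates by linear least squares:

```python
# Hedged sketch: fitting a 2D similarity transform (scale, rotation,
# translation) to matched point pairs by linear least squares.
import numpy as np

def fit_similarity(src, dst):
    """src, dst: (N, 2) matched points. Solves dst ~ s*R(theta)*src + t."""
    n = src.shape[0]
    A = np.zeros((2 * n, 4))
    # Model: x' = a*x - b*y + tx,  y' = b*x + a*y + ty  (a = s*cos, b = s*sin)
    A[0::2] = np.column_stack([src[:, 0], -src[:, 1], np.ones(n), np.zeros(n)])
    A[1::2] = np.column_stack([src[:, 1],  src[:, 0], np.zeros(n), np.ones(n)])
    a, b, tx, ty = np.linalg.lstsq(A, dst.ravel(), rcond=None)[0]
    return np.array([[a, -b, tx], [b, a, ty]])    # 2x3 transform matrix

src = np.array([[0, 0], [1, 0], [0, 1]], dtype=float)
R = np.array([[0.0, -1.0], [1.0, 0.0]])           # 90-degree rotation
dst = 2.0 * src @ R.T + np.array([3.0, 4.0])      # scale 2, translate (3, 4)
M = fit_similarity(src, dst)                      # recovers the transform
```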
Digital imaging technology assessment: Digital document storage project
NASA Technical Reports Server (NTRS)
1989-01-01
An ongoing technical assessment and requirements definition project is examining the potential role of digital imaging technology at NASA's STI facility. The focus is on the basic components of imaging technology in today's marketplace as well as the components anticipated in the near future. Presented is a requirement specification for a prototype project, an initial examination of current image processing at the STI facility, and an initial summary of image processing projects at other sites. Operational imaging systems incorporate scanners, optical storage, high resolution monitors, processing nodes, magnetic storage, jukeboxes, specialized boards, optical character recognition gear, pixel addressable printers, communications, and complex software processes.
Digital Radiographic Image Processing and Analysis.
Yoon, Douglas C; Mol, André; Benn, Douglas K; Benavides, Erika
2018-07-01
This article describes digital radiographic imaging and analysis from the basics of image capture to examples of some of the most advanced digital technologies currently available. The principles underlying the imaging technologies are described to provide a better understanding of their strengths and limitations. Copyright © 2018 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Hafner, Mathias
2008-01-01
Cell biology and molecular imaging technologies have made enormous progress in basic research. However, the transfer of this knowledge to the pharmaceutical drug discovery process, or even therapeutic improvements for disorders such as neuronal diseases, is still in its infancy. This transfer needs scientists who can integrate basic research with…
A Simple Encryption Algorithm for Quantum Color Image
NASA Astrophysics Data System (ADS)
Li, Panchi; Zhao, Ya
2017-06-01
In this paper, a simple encryption scheme for quantum color images is proposed. First, a color image is transformed into a quantum superposition state by employing NEQR (novel enhanced quantum representation), where the R, G, B values of every pixel in a 24-bit RGB true color image are represented by 24 single-qubit basis states, with 8 qubits per channel value. Then, these 24 qubits are each transformed from a basis state into a balanced superposition state by employing controlled rotation gates. At this point, the gray-scale values of R, G, B of every pixel are in a balanced superposition of 2^24 multi-qubit basis states. After measurement, the whole image is uniform white noise, which does not provide any information. Decryption is the reverse process of encryption. Experimental results on a classical computer show that the proposed encryption scheme has good security.
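The role of the rotation gates can be illustrated with a classical toy simulation: each bit of a pixel is treated as a qubit, an R_y rotation drives it into a balanced superposition, and measurement then yields a uniformly random bit. This sketches the principle only, not the NEQR circuit itself; all names are illustrative.

```python
# Classical toy simulation: each of the 24 bit-planes of an RGB pixel is a
# qubit |b>; R_y(pi/2) puts it in a balanced superposition, so measurement
# returns a uniform random bit regardless of the plaintext.
import numpy as np

def measure_rotated_bit(bit, theta, rng):
    """Apply R_y(theta) to |bit> and measure in the computational basis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    amp0, amp1 = (c, s) if bit == 0 else (-s, c)   # columns of R_y(theta)
    return int(rng.random() < amp1 ** 2)           # P(measure |1>) = |amp1|^2

rng = np.random.default_rng(42)
pixel = 0xA3C1F0                                   # one 24-bit RGB value
bits = [(pixel >> i) & 1 for i in range(24)]
# theta = pi/2 makes every qubit an exactly balanced superposition
cipher_bits = [measure_rotated_bit(b, np.pi / 2, rng) for b in bits]
```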
A Web simulation of medical image reconstruction and processing as an educational tool.
Papamichail, Dimitrios; Pantelis, Evaggelos; Papagiannis, Panagiotis; Karaiskos, Pantelis; Georgiou, Evangelos
2015-02-01
Web educational resources integrating interactive simulation tools provide students with an in-depth understanding of the medical imaging process. The aim of this work was the development of a purely Web-based, open access, interactive application, as an ancillary learning tool in graduate and postgraduate medical imaging education, including a systematic evaluation of learning effectiveness. The pedagogic content of the educational Web portal was designed to cover the basic concepts of medical imaging reconstruction and processing, through the use of active learning and motivation, including learning simulations that closely resemble actual tomographic imaging systems. The user can implement image reconstruction and processing algorithms under a single user interface and manipulate various factors to understand the impact on image appearance. A questionnaire for pre- and post-training self-assessment was developed and integrated in the online application. The developed Web-based educational application introduces the trainee in the basic concepts of imaging through textual and graphical information and proceeds with a learning-by-doing approach. Trainees are encouraged to participate in a pre- and post-training questionnaire to assess their knowledge gain. An initial feedback from a group of graduate medical students showed that the developed course was considered as effective and well structured. An e-learning application on medical imaging integrating interactive simulation tools was developed and assessed in our institution.
McBain, Ryan; Norton, Daniel; Chen, Yue
2010-09-01
While schizophrenia patients are impaired at facial emotion perception, the role of basic visual processing in this deficit remains relatively unclear. We examined emotion perception when spatial frequency content of facial images was manipulated via high-pass and low-pass filtering. Unlike controls (n=29), patients (n=30) perceived images with low spatial frequencies as more fearful than those without this information, across emotional salience levels. Patients also perceived images with high spatial frequencies as happier. In controls, this effect was found only at low emotional salience. These results indicate that basic visual processing has an amplified modulatory effect on emotion perception in schizophrenia. (c) 2010 Elsevier B.V. All rights reserved.
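A common way to construct such stimuli, sketched below under the assumption of simple Gaussian filtering, is to take the blurred image as the low-spatial-frequency version and the residual as the high-spatial-frequency version:

```python
# Splitting an image into low- and high-spatial-frequency versions.
import numpy as np
from scipy.ndimage import gaussian_filter

def split_spatial_frequencies(img, cutoff_sigma=4.0):
    img = img.astype(np.float64)
    low = gaussian_filter(img, cutoff_sigma)   # low spatial frequencies
    high = img - low + img.mean()              # high SF, mean luminance kept
    return low, high

face = np.random.rand(128, 128) * 255          # stand-in for a face image
low_sf, high_sf = split_spatial_frequencies(face)
```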
NASA Technical Reports Server (NTRS)
Pavel, M.
1993-01-01
The topics covered include the following: a system overview of the basic components of a system designed to improve the ability of a pilot to fly through low-visibility conditions such as fog; the role of visual sciences; fusion issues; sensor characterization; sources of information; image processing; and image fusion.
Digital image processing: a primer for JVIR authors and readers: Part 3: Digital image editing.
LaBerge, Jeanne M; Andriole, Katherine P
2003-12-01
This is the final installment of a three-part series on digital image processing intended to prepare authors for online submission of manuscripts. In the first two articles of the series, the fundamentals of digital image architecture were reviewed and methods of importing images to the computer desktop were described. In this article, techniques are presented for editing images in preparation for online submission. A step-by-step guide to basic editing with use of Adobe Photoshop is provided and the ethical implications of this activity are explored.
Integrated analysis of remote sensing products from basic geological surveys. [Brazil
NASA Technical Reports Server (NTRS)
Dasilvafagundesfilho, E. (Principal Investigator)
1984-01-01
Recent advances in remote sensing have led to the development of several techniques for obtaining image information. These techniques are analyzed as effective tools in geological mapping. A strategy for optimizing the use of images in basic geological surveying is presented. It embraces an integrated analysis of spatial, spectral, and temporal data through photoptic (color additive viewer) and computer processing at different scales, allowing large areas to be surveyed in a fast, precise, and low-cost manner.
Prototype Focal-Plane-Array Optoelectronic Image Processor
NASA Technical Reports Server (NTRS)
Fang, Wai-Chi; Shaw, Timothy; Yu, Jeffrey
1995-01-01
Prototype very-large-scale integrated (VLSI) planar array of optoelectronic processing elements combines speed of optical input and output with flexibility of reconfiguration (programmability) of electronic processing medium. Basic concept of processor described in "Optical-Input, Optical-Output Morphological Processor" (NPO-18174). Performs binary operations on binary (black and white) images. Each processing element corresponds to one picture element of image and located at that picture element. Includes input-plane photodetector in form of parasitic phototransistor part of processing circuit. Output of each processing circuit used to modulate one picture element in output-plane liquid-crystal display device. Intended to implement morphological processing algorithms that transform image into set of features suitable for high-level processing; e.g., recognition.
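A software analogue of the binary morphological operations such a processor implements in hardware can be written with SciPy's standard routines; the toy image below is illustrative.

```python
# Binary morphology over a 3x3 window: erosion and a morphological boundary.
import numpy as np
from scipy.ndimage import binary_erosion

img = np.zeros((9, 9), dtype=bool)
img[2:7, 2:7] = True                       # a white square on black
selem = np.ones((3, 3), dtype=bool)        # 3x3 structuring element

eroded = binary_erosion(img, structure=selem)   # shrinks the square
edge = img & ~eroded                            # boundary = image minus erosion
```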
Bridges, Robert L; Wiley, Chris R; Christian, John C; Strohm, Adam P
2007-06-01
Na(18)F, an early bone scintigraphy agent, is poised to reenter mainstream clinical imaging with the present generations of stand-alone PET and PET/CT hybrid scanners. (18)F PET scans promise improved imaging quality for both benign and malignant bone disease, with significantly improved sensitivity and specificity over conventional planar and SPECT bone scans. In this article, basic acquisition information will be presented along with examples of studies related to oncology, sports medicine, and general orthopedics. The use of image fusion of PET bone scans with CT and MRI will be demonstrated. The objectives of this article are to provide the reader with an understanding of the history of early bone scintigraphy in relation to Na(18)F scanning, a familiarity with basic imaging techniques for PET bone scanning, an appreciation of the extent of disease processes that can be imaged with PET bone scanning, an appreciation for the added value of multimodality image fusion with bone disease, and a recognition of the potential role PET bone scanning may play in clinical imaging.
Basic research planning in mathematical pattern recognition and image analysis
NASA Technical Reports Server (NTRS)
Bryant, J.; Guseman, L. F., Jr.
1981-01-01
Fundamental problems encountered while attempting to develop automated techniques for applications of remote sensing are discussed under the following categories: (1) geometric and radiometric preprocessing; (2) spatial, spectral, temporal, syntactic, and ancillary digital image representation; (3) image partitioning, proportion estimation, and error models in object/scene inference; (4) parallel processing and image data structures; and (5) continuing studies in polarization; computer architectures and parallel processing; and the applicability of "expert systems" to interactive analysis.
Superordinate Level Processing Has Priority Over Basic-Level Processing in Scene Gist Recognition
Sun, Qi; Zheng, Yang; Sun, Mingxia; Zheng, Yuanjie
2016-01-01
By combining a perceptual discrimination task and a visuospatial working memory task, the present study examined the effects of visuospatial working memory load on the hierarchical processing of scene gist. In the perceptual discrimination task, two scene images from the same (manmade–manmade pairing or natural–natural pairing) or different superordinate level categories (manmade–natural pairing) were presented simultaneously, and participants were asked to judge whether the two images belonged to the same basic-level category (e.g., street–street pairing) or not (e.g., street–highway pairing). In the concurrent working memory task, spatial load (position-based load in Experiment 1) and object load (figure-based load in Experiment 2) were manipulated. The results were as follows: (a) spatial load and object load have stronger effects on discrimination of same basic-level scene pairings than same superordinate level scene pairings; (b) spatial load has a larger impact on the discrimination of scene pairings at early stages than at later stages; on the contrary, object information has a larger influence at later stages than at early stages. It follows that superordinate level processing has priority over basic-level processing in scene gist recognition, and that spatial information contributes to the earlier and object information to the later stages of scene gist recognition. PMID:28382195
NASA Technical Reports Server (NTRS)
Masuoka, E.; Rose, J.; Quattromani, M.
1981-01-01
Recent developments related to microprocessor-based personal computers have made low-cost digital image processing systems a reality. Image analysis systems built around these microcomputers provide color image displays for images as large as 256 by 240 pixels in sixteen colors. Descriptive statistics can be computed for portions of an image, and supervised image classification can be obtained. The systems support Basic, Fortran, Pascal, and assembler language. A description is provided of a system which is representative of the new microprocessor-based image processing systems currently on the market. While small systems may never be truly independent of larger mainframes, because they lack 9-track tape drives, the independent processing power of the microcomputers will help alleviate some of the turn-around time problems associated with image analysis and display on the larger multiuser systems.
SIP: A Web-Based Astronomical Image Processing Program
NASA Astrophysics Data System (ADS)
Simonetti, J. H.
1999-12-01
I have written an astronomical image processing and analysis program designed to run over the internet in a Java-compatible web browser. The program, Sky Image Processor (SIP), is accessible at the SIP webpage (http://www.phys.vt.edu/SIP). Since nothing is installed on the user's machine, there is no need to download upgrades; the latest version of the program is always instantly available. Furthermore, the Java programming language is designed to work on any computer platform (any machine and operating system). The program could be used with students in web-based instruction or in a computer laboratory setting; it may also be of use in some research or outreach applications. While SIP is similar to other image processing programs, it is unique in some important respects. For example, SIP can load images from the user's machine or from the Web. An instructor can put images on a web server for students to load and analyze on their own personal computer. Or, the instructor can inform the students of images to load from any other web server. Furthermore, since SIP was written with students in mind, the philosophy is to present the user with the most basic tools necessary to process and analyze astronomical images. Images can be combined (by addition, subtraction, multiplication, or division), multiplied by a constant, smoothed, cropped, flipped, rotated, and so on. Statistics can be gathered for pixels within a box drawn by the user. Basic tools are available for gathering data from an image which can be used for performing simple differential photometry, or astrometry. Therefore, students can learn how astronomical image processing works. Since SIP is not part of a commercial CCD camera package, the program is written to handle the most common denominator image file, the FITS format.
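The kind of box statistics and simple differential photometry SIP supports can be sketched in a few lines (NumPy here rather than Java; all names, coordinates, and the synthetic frame are illustrative):

```python
# Box statistics and crude aperture photometry on a synthetic star field.
import numpy as np

img = 100.0 + np.random.rand(128, 128)         # stand-in for a FITS frame
img[50:55, 38:43] += 400.0                     # a fake target star
img[16:21, 88:93] += 250.0                     # a fake comparison star

def box_stats(img, x0, y0, x1, y1):
    box = img[y0:y1, x0:x1]
    return box.mean(), box.std(), box.min(), box.max()

def aperture_flux(img, x, y, r):
    """Background-subtracted flux inside a circular aperture of radius r."""
    yy, xx = np.indices(img.shape)
    inside = (xx - x) ** 2 + (yy - y) ** 2 <= r ** 2
    sky = np.median(img[~inside])              # crude sky estimate
    return (img[inside] - sky).sum()

# Simple differential photometry between the two "stars"
dmag = -2.5 * np.log10(aperture_flux(img, 40, 52, 6) /
                       aperture_flux(img, 90, 18, 6))
```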
Improved egg crack detection algorithm for modified pressure imaging system
USDA-ARS's Scientific Manuscript database
Shell eggs with microcracks are often undetected during egg grading processes. In the past, a modified pressure imaging system was developed to detect eggs with microcracks without adversely affecting the quality of normal intact eggs. The basic idea of the modified pressure imaging system was to ap...
A modeling analysis program for the JPL table mountain Io sodium cloud data
NASA Technical Reports Server (NTRS)
Smyth, W. H.; Goldberg, B. A.
1984-01-01
A detailed review of 110 of the 263 Region B/C images of the 1981 data set is undertaken and a preliminary assessment of 39 images of the 1976-79 data set is presented. The basic spatial characteristics of these images are discussed. Modeling analysis of these images after further data processing will provide useful information about Io and the planetary magnetosphere. Plans for data processing and modeling analysis are outlined. Results of very preliminary modeling activities are presented.
Applications of HCMM satellite data to the study of urban heating patterns
NASA Technical Reports Server (NTRS)
Carlson, T. N. (Principal Investigator)
1980-01-01
A research summary is presented and is divided into two major areas, one developmental and the other basic science. In the first, three sub-categories are discussed: image processing techniques, especially the method whereby surface temperature images are converted to images of surface energy budget, moisture availability, and thermal inertia; model development; and model verification. Basic science includes the use of a method to further the understanding of the urban heat island and anthropogenic modification of surface heating, evaporation over vegetated surfaces, and the effect of surface heat flux on plume spread.
Analysis of Variance in Statistical Image Processing
NASA Astrophysics Data System (ADS)
Kurz, Ludwik; Hafed Benteftifa, M.
1997-04-01
A key problem in practical image processing is the detection of specific features in a noisy image. Analysis of variance (ANOVA) techniques can be very effective in such situations, and this book gives a detailed account of the use of ANOVA in statistical image processing. The book begins by describing the statistical representation of images in the various ANOVA models. The authors present a number of computationally efficient algorithms and techniques to deal with such problems as line, edge, and object detection, as well as image restoration and enhancement. By describing the basic principles of these techniques, and showing their use in specific situations, the book will facilitate the design of new algorithms for particular applications. It will be of great interest to graduate students and engineers in the field of image processing and pattern recognition.
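The core idea can be shown with a toy example: treating the two halves of a window as groups, a one-way ANOVA F-test flags an edge when between-group variance dominates within-group noise. This is a minimal illustration of the principle, not an algorithm from the book.

```python
# ANOVA-style edge detection: a large F statistic between the left and
# right halves of a window signals a vertical edge between them.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
window = np.hstack([rng.normal(100, 5, (8, 4)),    # left: darker + noise
                    rng.normal(130, 5, (8, 4))])   # right: brighter + noise

f, p = stats.f_oneway(window[:, :4].ravel(), window[:, 4:].ravel())
edge_present = p < 0.01    # significant between-group variance => edge
```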
NASA Astrophysics Data System (ADS)
Hou, H. S.
1985-07-01
An overview of the recent progress in the area of digital processing of binary images in the context of document processing is presented here. The topics covered include input scan, adaptive thresholding, halftoning, scaling and resolution conversion, data compression, character recognition, electronic mail, digital typography, and output scan. Emphasis has been placed on illustrating the basic principles rather than descriptions of a particular system. Recent technology advances and research in this field are also mentioned.
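One of the basic principles mentioned, adaptive thresholding, reduces to comparing each pixel with a statistic of its neighborhood. A minimal local-mean version in Python (window size and offset are illustrative):

```python
# Local-mean adaptive threshold for binarizing unevenly lit documents.
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_threshold(img, window=15, offset=5):
    local_mean = uniform_filter(img.astype(np.float64), size=window)
    return img > local_mean - offset      # True = white, False = black

page = np.random.rand(64, 64) * 255       # stand-in for a scanned page
binary = adaptive_threshold(page)
```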
Processing Cones: A Computational Structure for Image Analysis.
1981-12-01
A computational structure for image analysis applications, referred to as a processing cone, is described and sample algorithms are presented. A fundamental characteristic of the structure is its hierarchical organization into two-dimensional arrays of decreasing resolution. In this architecture, a prototypical function is defined on a local window of data and applied uniformly to all windows in a parallel manner. Three basic modes of processing are supported in the cone: reduction operations (upward processing), horizontal operations (processing at a single level), and projection operations (downward processing).
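The reduction mode can be sketched as a pyramid in which each level halves the resolution by applying a local function, here a 2x2 mean, uniformly to all windows:

```python
# Building the levels of a processing cone by repeated 2x2 mean reduction.
import numpy as np

def build_cone(img, levels=4):
    cone = [img.astype(np.float64)]
    for _ in range(levels - 1):
        a = cone[-1]
        h, w = a.shape[0] // 2 * 2, a.shape[1] // 2 * 2   # trim odd edges
        reduced = a[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        cone.append(reduced)
    return cone   # cone[0] is full resolution, cone[-1] the coarsest level

pyramid = build_cone(np.random.rand(256, 256))
```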
Medical image processing on the GPU - past, present and future.
Eklund, Anders; Dufort, Paul; Forsberg, Daniel; LaConte, Stephen M
2013-12-01
Graphics processing units (GPUs) are used today in a wide range of applications, mainly because they can dramatically accelerate parallel computing, are affordable and energy efficient. In the field of medical imaging, GPUs are in some cases crucial for enabling practical use of computationally demanding algorithms. This review presents the past and present work on GPU accelerated medical image processing, and is meant to serve as an overview and introduction to existing GPU implementations. The review covers GPU acceleration of basic image processing operations (filtering, interpolation, histogram estimation and distance transforms), the most commonly used algorithms in medical imaging (image registration, image segmentation and image denoising) and algorithms that are specific to individual modalities (CT, PET, SPECT, MRI, fMRI, DTI, ultrasound, optical imaging and microscopy). The review ends by highlighting some future possibilities and challenges. Copyright © 2013 Elsevier B.V. All rights reserved.
Digital image processing using parallel computing based on CUDA technology
NASA Astrophysics Data System (ADS)
Skirnevskiy, I. P.; Pustovit, A. V.; Abdrashitova, M. O.
2017-01-01
This article describes the expediency of using a graphics processing unit (GPU) for big data processing in the context of digital image processing. It provides a short description of parallel computing technology and its usage in different areas, a definition of image noise, and a brief overview of some noise removal algorithms. It also describes some basic requirements that should be met by a noise removal algorithm applied to computed tomography projections. It provides a comparison of the performance with and without using a GPU, as well as with different percentages of the work split between the CPU and GPU.
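The CPU/GPU comparison the article describes can be sketched with CuPy standing in for hand-written CUDA kernels; this assumes a CUDA-capable GPU with CuPy installed, and the image size and filter are arbitrary.

```python
# CPU-vs-GPU Gaussian filtering timing sketch (CuPy assumed available).
import time
import numpy as np
from scipy import ndimage
import cupy as cp
from cupyx.scipy import ndimage as cundimage

img = np.random.rand(2048, 2048).astype(np.float32)

t0 = time.perf_counter()
cpu_out = ndimage.gaussian_filter(img, sigma=3)          # CPU baseline
t_cpu = time.perf_counter() - t0

gpu_img = cp.asarray(img)                                # host -> device
t0 = time.perf_counter()
gpu_out = cundimage.gaussian_filter(gpu_img, sigma=3)    # runs on the GPU
cp.cuda.Stream.null.synchronize()                        # wait for kernels
t_gpu = time.perf_counter() - t0

print(f"CPU {t_cpu:.3f}s  GPU {t_gpu:.3f}s")
```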
Methods of training the graduate level and professional geologist in remote sensing technology
NASA Technical Reports Server (NTRS)
Kolm, K. E.
1981-01-01
Requirements for a basic course in remote sensing to accommodate the needs of the graduate-level and professional geologist are described. The course should stress the general topics of basic remote sensing theory, the theory and data types relating to different remote sensing systems, an introduction to the basic concepts of computer image processing and analysis, the characteristics of different data types, the development of methods for geological interpretations, the integration of all scales and data types of remote sensing in a given study, the integration of other data bases (geophysical and geochemical) into a remote sensing study, and geological remote sensing applications. The laboratories should stress hands-on experience to reinforce the concepts and procedures presented in lecture. The geologist should then be encouraged to pursue a second course in computer image processing and analysis of remotely sensed data.
Processing digital images and calculation of beam emittance (pepper-pot method for the Krion source)
NASA Astrophysics Data System (ADS)
Alexandrov, V. S.; Donets, E. E.; Nyukhalova, E. V.; Kaminsky, A. K.; Sedykh, S. N.; Tuzikov, A. V.; Philippov, A. V.
2016-12-01
Programs based on Wolfram Mathematica and Origin software for the pre-processing of photographs of beam images on the mask are described. Rotation angles around the axis and in the vertical plane are taken into account in the generation of the file with image coordinates. Results of the emittance calculation in test mode by the Pep_emit program, written in Visual Basic, using the generated file are presented.
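The emittance calculation rests on the standard rms definition, eps_rms = sqrt(&lt;x^2&gt;&lt;x'^2&gt; - &lt;x x'&gt;^2). A minimal Python version is sketched below (the actual Pep_emit program is written in Visual Basic; the toy beam here is invented):

```python
# rms emittance from beamlet positions x (mask holes) and divergences x'.
import numpy as np

def rms_emittance(x, xp):
    x = x - x.mean()
    xp = xp - xp.mean()
    return np.sqrt((x * x).mean() * (xp * xp).mean() - (x * xp).mean() ** 2)

# Toy beam: 1 mm rms size, 0.5 mrad rms divergence, some x-x' correlation
rng = np.random.default_rng(7)
x = rng.normal(0, 1.0, 10000)                    # mm
xp = 0.2 * x + rng.normal(0, 0.5, 10000)         # mrad
print(f"emittance = {rms_emittance(x, xp):.3f} mm*mrad")
```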
MR Imaging of the Penis and Scrotum.
Parker, Rex A; Menias, Christine O; Quazi, Robin; Hara, Amy K; Verma, Sadhna; Shaaban, Akram; Siegel, Cary L; Radmanesh, Alireza; Sandrasegaran, Kumar
2015-01-01
Traditionally, due to its low cost, ready availability, and proved diagnostic accuracy, ultrasonography (US) has been the primary imaging modality for the evaluation of scrotal and, to a lesser extent, penile disease. However, US is limited by its relatively small useful field of view, operator dependence, and inability to provide much information on tissue characterization. Magnetic resonance (MR) imaging, with its excellent soft-tissue contrast and good spatial resolution, is increasingly being used as both a problem-solving tool in patients who have already undergone US and as a primary modality for the evaluation of suspected disease. Specifically, MR imaging can aid in differentiating between benign and malignant lesions seen at US, help define the extent of inflammatory processes or traumatic injuries, and play a vital role in locoregional staging of tumors. Consequently, it is becoming more important for radiologists to be familiar with the wide range of penile and scrotal disease entities and their MR imaging appearances. The authors review the basic anatomy of the penis and scrotum as seen at MR imaging and provide a basic protocol for penile and scrotal imaging, with emphasis on the advantages of MR imaging. Pathologic processes are organized into traumatic (including penile fracture and contusion), infectious or inflammatory (including Fournier gangrene and scrotal abscess), and neoplastic (including both benign and malignant scrotal and penile tumors) processes. ©RSNA, 2015.
The edge detection method of the infrared imagery of the laser spot
NASA Astrophysics Data System (ADS)
Che, Jinxi; Zhang, Jinchun; Li, Zhongmin
2016-01-01
In jamming effectiveness experiments in which a thermal infrared imager was interfered with by a CO2 laser, evaluating the jamming effect requires analyzing the obtained infrared imagery of the laser spot. Because the laser spot pictures obtained from the thermal infrared imager are irregular, edge detection is an important processing step. The image edge is one of the most basic characteristics of an image, and it contains most of the image's information. Generally, because of thermal equilibrium, local temperature differences across an object are small; the ability of infrared imagery to reflect local detail of an object is therefore notably weak. At the same time, the heat distribution information in a thermal image becomes much more valuable when combined with basic target information such as object size, relative position in the field of view, shape, and outline. Hence, extracting the object edges from infrared imagery is an important step in image processing; it is an important part of the image processing procedure and the premise of much subsequent processing. To extract outline information of the target from the original thermal imagery, and to overcome disadvantages such as low image contrast and serious noise interference, the edges of the thermal imagery must be detected and processed. The principles of the Roberts, Sobel, Prewitt, and Canny operators are analyzed, and the operators are then used for edge detection on thermal imagery of laser spots obtained from jamming experiments in which a CO2 laser jammed a thermal infrared imager. On the basis of the detection results, their performances are compared. Finally, the characteristics of the operators are summarized, providing a reference for the choice of edge detection operators in thermal imagery processing in the future.
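For reference, the operators compared in the study can be applied with OpenCV as follows; the file name and thresholds are illustrative, and Roberts and Prewitt, having no built-ins, are applied via their convolution kernels (one direction shown for each):

```python
# Applying the compared edge detectors to a laser-spot image with OpenCV.
import cv2
import numpy as np

img = cv2.imread("laser_spot_ir.png", cv2.IMREAD_GRAYSCALE)

sobel_x = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
sobel_y = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
sobel = cv2.convertScaleAbs(np.sqrt(sobel_x ** 2 + sobel_y ** 2))

kx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=np.float64)
prewitt = cv2.convertScaleAbs(cv2.filter2D(img, cv2.CV_64F, kx))

rx = np.array([[1, 0], [0, -1]], dtype=np.float64)     # Roberts cross
roberts = cv2.convertScaleAbs(cv2.filter2D(img, cv2.CV_64F, rx))

canny = cv2.Canny(img, 50, 150)     # hysteresis thresholds 50/150
```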
Processing Satellite Images on Tertiary Storage: A Study of the Impact of Tile Size on Performance
NASA Technical Reports Server (NTRS)
Yu, JieBing; DeWitt, David J.
1996-01-01
Before raw data from a satellite can be used by an Earth scientist, it must first undergo a number of processing steps including basic processing, cleansing, and geo-registration. Processing actually expands the volume of data collected by a factor of 2 or 3, and the original data is never deleted. Thus processing and storage requirements can exceed 2 terabytes/day. Once processed data is ready for analysis, a series of algorithms (typically developed by the Earth scientists) is applied to a large number of images in a data set. The focus of this paper is how best to handle such images stored on tape using the following assumptions: (1) all images of interest to a scientist are stored on a single tape, (2) images are accessed and processed in the order that they are stored on tape, and (3) the analysis requires access to only a portion of each image and not the entire image.
Process thresholds: Report of Working Group Number 3
NASA Technical Reports Server (NTRS)
Williams, R. S., Jr.
1985-01-01
The Process Thresholds Working Group concerned itself with whether a geomorphic process to be monitored on satellite imagery must be global, regional, or local in its effect on the landscape. It was pointed out that major changes in the types and magnitudes of processes operating in an area are needed for the changes to be detectable on a global scale. A review of geomorphic studies that used satellite images led to the conclusion that such images do record change in the landscape over time (on a time-lapse basis) as a result of one or more processes. In fact, this may be one of the most important attributes of space imagery, in that one can document landform changes in the form of a permanent historical record. The group also discussed the important subject of the acquisition of basic data sets by different satellite imaging systems. Geomorphologists already have available one near-global basic data set resulting from the early LANDSAT program, especially images acquired by LANDSATs 1 and 2. Such historic basic data sets can serve as a benchmark for comparison with landscape changes that take place in the future. They can also serve as a benchmark for comparison with landscape changes that have occurred in the past, as recorded by images, photography, and maps.
Digital document imaging systems: An overview and guide
NASA Technical Reports Server (NTRS)
1990-01-01
This is an aid to NASA managers in planning the selection of a Digital Document Imaging System (DDIS) as a possible solution for document information processing and storage. Intended to serve as a manager's guide, this document contains basic information on digital imaging systems, technology, equipment standards, issues of interoperability and interconnectivity, and issues related to selecting appropriate imaging equipment based upon well defined needs.
The Indispensable Teachers' Guide to Computer Skills. Second Edition.
ERIC Educational Resources Information Center
Johnson, Doug
This book provides a framework of technology skills that can be used for staff development. Part One presents critical components of effective staff development. Part Two describes the basic CODE 77 skills, including basic computer operation, file management, time management, word processing, network and Internet use, graphics and digital images,…
GPU computing in medical physics: a review.
Pratx, Guillem; Xing, Lei
2011-05-01
The graphics processing unit (GPU) has emerged as a competitive platform for computing massively parallel problems. Many computing applications in medical physics can be formulated as data-parallel tasks that exploit the capabilities of the GPU for reducing processing times. The authors review the basic principles of GPU computing as well as the main performance optimization techniques, and survey existing applications in three areas of medical physics, namely image reconstruction, dose calculation and treatment plan optimization, and image processing.
CCDs in the Mechanics Lab--A Competitive Alternative? (Part I).
ERIC Educational Resources Information Center
Pinto, Fabrizio
1995-01-01
Reports on the implementation of a relatively low-cost, versatile, and intuitive system to teach basic mechanics based on the use of a Charge-Coupled Device (CCD) camera and inexpensive image-processing and analysis software. Discusses strengths and limitations of CCD imaging technologies. (JRH)
EduGATE - basic examples for educative purpose using the GATE simulation platform.
Pietrzyk, Uwe; Zakhnini, Abdelhamid; Axer, Markus; Sauerzapf, Sophie; Benoit, Didier; Gaens, Michaela
2013-02-01
EduGATE is a collection of basic examples to introduce students to the fundamental physical aspects of medical imaging devices. It is based on the GATE platform, which has received wide acceptance in the field of simulating medical imaging devices including SPECT, PET, CT and also applications in radiation therapy. GATE is configured by commands which are, for the sake of simplicity, listed in a collection of one or more macro files to set up phantoms, multiple types of sources, the detection device, and acquisition parameters. The aim of EduGATE is to use all of these helpful features of GATE to provide insights into the physics of medical imaging by means of a collection of very basic and simple GATE macros, in connection with analysis programs based on ROOT, a framework for data processing. A graphical user interface to define a configuration is also included. Copyright © 2012. Published by Elsevier GmbH.
Object Categorization in Finer Levels Relies More on Higher Spatial Frequencies and Takes Longer.
Ashtiani, Matin N; Kheradpisheh, Saeed R; Masquelier, Timothée; Ganjtabesh, Mohammad
2017-01-01
The human visual system contains a hierarchical sequence of modules that take part in visual perception at different levels of abstraction, i.e., superordinate, basic, and subordinate levels. One important question is to identify the "entry" level at which the visual representation is commenced in the process of object recognition. For a long time, it was believed that the basic level had a temporal advantage over two others. This claim has been challenged recently. Here we used a series of psychophysics experiments, based on a rapid presentation paradigm, as well as two computational models, with bandpass filtered images of five object classes to study the processing order of the categorization levels. In these experiments, we investigated the type of visual information required for categorizing objects in each level by varying the spatial frequency bands of the input image. The results of our psychophysics experiments and computational models are consistent. They indicate that the different spatial frequency information had different effects on object categorization in each level. In the absence of high frequency information, subordinate and basic level categorization are performed less accurately, while the superordinate level is performed well. This means that low frequency information is sufficient for superordinate level, but not for the basic and subordinate levels. These finer levels rely more on high frequency information, which appears to take longer to be processed, leading to longer reaction times. Finally, to avoid the ceiling effect, we evaluated the robustness of the results by adding different amounts of noise to the input images and repeating the experiments. As expected, the categorization accuracy decreased and the reaction time increased significantly, but the trends were the same. This shows that our results are not due to a ceiling effect. The compatibility between our psychophysical and computational results suggests that the temporal advantage of the superordinate (resp. basic) level to basic (resp. subordinate) level is mainly due to the computational constraints (the visual system processes higher spatial frequencies more slowly, and categorization in finer levels depends more on these higher spatial frequencies).
Application of AIS Technology to Forest Mapping
NASA Technical Reports Server (NTRS)
Yool, S. R.; Star, J. L.
1985-01-01
Concerns about the environmental effects of large-scale deforestation have prompted efforts to map forests over large areas using various remote sensing data and image processing techniques. Basic research on the spectral characteristics of forest vegetation is required to form a basis for the development of new techniques and for image interpretation. Examination of LANDSAT data and image processing algorithms over a portion of boreal forest has demonstrated the complexity of the relations between the various expressions of forest canopies, environmental variability, and the relative capacities of different image processing algorithms to achieve high classification accuracies under these conditions. Airborne Imaging Spectrometer (AIS) data may in part provide the means to interpret the responses of standard data and techniques to the vegetation, based on its relatively high spectral resolution.
Visually enhanced CCTV digital surveillance utilizing Intranet and Internet.
Ozaki, Nobuyuki
2002-07-01
This paper describes a solution for integrated plant supervision utilizing closed circuit television (CCTV) digital surveillance. Three basic requirements are first addressed as the platform of the system, with discussion on the suitable video compression. The system configuration is described in blocks. The system provides surveillance functionality: real-time monitoring, and process analysis functionality: a troubleshooting tool. This paper describes the formulation of practical performance design for determining various encoder parameters. It also introduces image processing techniques for enhancing the original CCTV digital image to lessen the burden on operators. Some screenshots are listed for the surveillance functionality. For the process analysis, an image searching filter supported by image processing techniques is explained with screenshots. Multimedia surveillance, which is the merger with process data surveillance, or the SCADA system, is also explained.
Contact Angle Measurements Using a Simplified Experimental Setup
ERIC Educational Resources Information Center
Lamour, Guillaume; Hamraoui, Ahmed; Buvailo, Andrii; Xing, Yangjun; Keuleyan, Sean; Prakash, Vivek; Eftekhari-Bafrooei, Ali; Borguet, Eric
2010-01-01
A basic and affordable experimental apparatus is described that measures the static contact angle of a liquid drop in contact with a solid. The image of the drop is made with a simple digital camera by taking a picture that is magnified by an optical lens. The profile of the drop is then processed with ImageJ free software. The ImageJ contact…
Visualization of Concrete Slump Flow Using the Kinect Sensor
Kim, Jung-Hoon; Park, Minbeom
2018-01-01
Workability is regarded as one of the important parameters of high-performance concrete, and monitoring it is essential in concrete quality management at construction sites. The conventional workability test methods are based on length and time measured by a ruler and a stopwatch and, as such, inevitably involve human error. In this paper, we propose a 4D slump test method based on digital measurement and data processing as a novel concrete workability test. After acquiring the dynamically changing 3D surface of fresh concrete using a 3D depth sensor during the slump flow test, the stream images are processed with the proposed 4D slump processing algorithm and the results are compressed into a single 4D slump image. This image represents the dynamically spreading cross-section of fresh concrete along the time axis. From the 4D slump image, it is possible to determine the slump flow diameter, slump flow time, and slump height at any location simultaneously. The proposed 4D slump test should stimulate research related to concrete flow simulation and concrete rheology by providing spatiotemporal measurement data of concrete flow. PMID:29510510
A framework for farmland parcels extraction based on image classification
NASA Astrophysics Data System (ADS)
Liu, Guoying; Ge, Wenying; Song, Xu; Zhao, Hongdan
2018-03-01
It is very important for the government to build an accurate national basic cultivated land database, and farmland parcel extraction is one of the basic steps. However, in past years, people had to spend much time determining whether an area was a farmland parcel, since they were limited to understanding remote sensing images through mere visual interpretation. In order to overcome this problem, this study proposes a method to extract farmland parcels by means of image classification. In the proposed method, farmland areas and ridge areas of the classification map are semantically processed independently, and the results are fused to form the final farmland parcels. Experiments on high-spatial-resolution remote sensing images have shown the effectiveness of the proposed method.
Banno, Hayaki; Saiki, Jun
2015-03-01
Recent studies have sought to determine which levels of categories are processed first in visual scene categorization and have shown that the natural and man-made superordinate-level categories are understood faster than are basic-level categories. The current study examined the robustness of the superordinate-level advantage in a visual scene categorization task. A go/no-go categorization task was evaluated with response time distribution analysis using an ex-Gaussian template. A visual scene was categorized as either superordinate or basic level, and two basic-level categories forming a superordinate category were judged as either similar or dissimilar to each other. First, outdoor/indoor and natural/man-made groups were used as superordinate categories to investigate whether the advantage could be generalized beyond the natural/man-made boundary. Second, a set of images forming a superordinate category was manipulated. We predicted that decreasing image set similarity within the superordinate-level category would work against the speed advantage. We found that basic-level categorization was faster than outdoor/indoor categorization when the outdoor category comprised dissimilar basic-level categories. Our results indicate that the superordinate-level advantage in visual scene categorization is labile across different categories and category structures. © 2015 SAGE Publications.
TU-B-19A-01: Image Registration II: TG132-Quality Assurance for Image Registration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brock, K; Mutic, S
2014-06-15
AAPM Task Group 132 was charged with a review of the current approaches and solutions for image registration in radiotherapy and to provide recommendations for quality assurance and quality control of these clinical processes. As the results of image registration are always used as the input of another process for planning or delivery, it is important for the user to understand and document the uncertainty associated with the algorithm in general and the result of a specific registration. The recommendations of this task group, which at the time of abstract submission are being reviewed by the AAPM, include the following components. The user should understand the basic image registration techniques and methods of visualizing image fusion. The disclosure of basic components of the image registration by commercial vendors is critical in this respect. The physicists should perform end-to-end tests of imaging, registration, and planning/treatment systems if image registration is performed on a stand-alone system. A comprehensive commissioning process should be performed and documented by the physicist prior to clinical use of the system. As documentation is important to the safe implementation of this process, a request and report system should be integrated into the clinical workflow. Finally, a patient-specific QA practice should be established for efficient evaluation of image registration results. The implementation of these recommendations will be described and illustrated during this educational session. Learning Objectives: Highlight the importance of understanding the image registration techniques used in their clinic. Describe the end-to-end tests needed for stand-alone registration systems. Illustrate a comprehensive commissioning program using both phantom data and clinical images. Describe a request and report system to ensure communication and documentation. Demonstrate a clinically efficient patient QA practice for evaluation of image registration.
Spaceborne SAR Imaging Algorithm for Coherence Optimized
Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun
2016-01-01
This paper proposes a SAR imaging algorithm that maximizes coherence, based on existing SAR imaging algorithms. The basic idea of conventional SAR imaging processing is that the output signal attains the maximum signal-to-noise ratio (SNR) when optimal imaging parameters are used. A traditional imaging algorithm can achieve the best focusing effect but introduces decoherence in the subsequent interferometric processing. In the algorithm proposed here, the SAR echo is focused using consistent imaging parameters. Although the SNR of the output signal is slightly reduced, coherence is largely preserved, and an interferogram of high quality is finally obtained. Two scenes of Envisat ASAR data over Zhangbei are employed in experiments with this algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and application. PMID:26871446
Initial On-Orbit Spatial Resolution Characterization of OrbView-3 Panchromatic Images
NASA Technical Reports Server (NTRS)
Blonski, Slawomir
2006-01-01
Characterization was conducted under the Memorandum of Understanding among Orbital Sciences Corp., ORBIMAGE, Inc., and NASA Applied Sciences Directorate. Acquired five OrbView-3 panchromatic images of the permanent Stennis Space Center edge targets painted on a concrete surface. Each image is available at two processing levels: Georaw and Basic. Georaw is an intermediate image in which individual pixels are aligned by a nominal shift in the along-scan direction to adjust for the staggered layout of the panchromatic detectors along the focal plane array. Georaw images are engineering data and are not delivered to customers. The Basic product includes a cubic interpolation to align the pixels better along the focal plane and to correct for sensor artifacts, such as smile and attitude smoothing. This product retains satellite geometry - no rectification is performed. Processing of the characterized images did not include image sharpening, which is applied by default to OrbView-3 image products delivered by ORBIMAGE to customers. Edge responses were extracted from images of tilted edges in two directions: along-scan and cross-scan. Each edge response was approximated with a superposition of three sigmoidal functions through a nonlinear least-squares curve-fitting. Line Spread Functions (LSF) were derived by differentiation of the analytical approximation. Modulation Transfer Functions (MTF) were obtained after applying the discrete Fourier transform to the LSF.
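The edge-response analysis described above can be sketched compactly: fit a superposition of three sigmoids to the edge-spread data, differentiate the analytic fit to obtain the LSF, and Fourier transform the LSF for the MTF. The sketch below is an illustration under stated assumptions; the sigmoid parameterization, oversampling factor, and scipy-based fit are choices of this sketch, not details of the actual characterization.

import numpy as np
from scipy.optimize import curve_fit

def edge_model(x, *p):
    # Superposition of three sigmoids: sum_i a_i / (1 + exp(-(x - c_i) / w_i))
    y = np.zeros_like(x, dtype=float)
    for a, c, w in zip(p[0::3], p[1::3], p[2::3]):
        y += a / (1.0 + np.exp(-(x - c) / w))
    return y

def lsf_and_mtf(x, edge_response, p0):
    popt, _ = curve_fit(edge_model, x, edge_response, p0=p0)
    dx = x[1] - x[0]
    fine_x = np.arange(x[0], x[-1], dx / 10.0)  # oversample the analytic fit
    esf = edge_model(fine_x, *popt)
    lsf = np.gradient(esf, fine_x)              # LSF = d(ESF)/dx
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                               # normalize to unity at zero frequency
    freqs = np.fft.rfftfreq(len(lsf), d=dx / 10.0)
    return freqs, mtf

# Example initial guess: three steps of differing widths near the edge center
# p0 = [0.3, 0.0, 1.0, 0.3, 0.0, 2.0, 0.4, 0.0, 0.5]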
Chosen Aspects of the Production of the Basic Map Using Uav Imagery
NASA Astrophysics Data System (ADS)
Kedzierski, M.; Fryskowska, A.; Wierzbicki, D.; Nerc, P.
2016-06-01
For several years there has been increasing interest in the use of unmanned aerial vehicles for acquiring image data from low altitudes. Considering the cost-effectiveness of UAV flight time vs. that of conventional airplanes, the use of UAVs is advantageous when generating large-scale, accurate orthophotos. Through the development of UAV imagery, we can update large-scale basic maps. These maps are cartographic products used for registration and for economic and strategic planning. On the basis of these maps, other cartographic products are produced, for example maps used for building planning. The article presents an assessment of the usefulness of orthophotos based on UAV imagery for updating the basic map. In the research, a compact, non-metric camera mounted on a fixed wing powered by an electric motor was used. The tested area covered flat agricultural and woodland terrain. The processing and analysis of the orthorectification were carried out with the INPHO UASMaster programme. Due to the effect of UAV instability on low-altitude imagery, the use of non-metric digital cameras, and low-accuracy GPS-INS sensors, the geometry of the images is visibly poorer compared to conventional digital aerial photos (large values of phi and kappa angles). Therefore, low-altitude images typically require large along- and across-track overlap, usually above 70%. As a result of the research, orthoimages were obtained with a resolution of 0.06 m and a horizontal accuracy of 0.10 m. Digitized basic maps were used as the reference data. The accuracy of the orthoimages vs. the basic maps was estimated based on the study and on the available reference sources. It was found that the geometric accuracy and interpretative advantages of the final orthoimages allow the updating of basic maps. It is estimated that such an update of basic maps based on UAV imagery reduces processing time by approx. 40%.
Platform for Post-Processing Waveform-Based NDE
NASA Technical Reports Server (NTRS)
Roth, Don J.
2010-01-01
Signal- and image-processing methods are commonly needed to extract information from waveforms, improve the resolution of an image, and highlight defects in an image. Since some similarity exists among all waveform-based nondestructive evaluation (NDE) methods, a common software platform containing multiple signal- and image-processing techniques makes sense where multiple techniques, scientists, engineers, and organizations are involved. NDE Wave & Image Processor Version 2.0 software provides a single, integrated signal- and image-processing and analysis environment for total NDE data processing and analysis. It brings some of the most useful algorithms developed for NDE over the past 20 years into a commercial-grade product. The software can import signal/spectroscopic data, image data, and image series data. It offers the user hundreds of basic and advanced signal- and image-processing capabilities, including esoteric 1D and 2D wavelet-based de-noising, de-trending, and filtering. Batch processing is included for signal- and image-processing capability so that an optimized sequence of processing operations can be applied to entire folders of signals, spectra, and images. Additionally, an extensive interactive model-based curve-fitting facility has been included to allow fitting of spectroscopy data such as that from Raman spectroscopy. An extensive joint time-frequency module is included for analysis of non-stationary or transient data such as that from acoustic emission, vibration, or earthquakes.
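To illustrate one capability named above, here is a minimal 1D wavelet de-noising sketch in the common soft-threshold style. It is not the NDE Wave & Image Processor's algorithm; the wavelet choice, the universal threshold rule, and the use of the PyWavelets library are assumptions of this example.

import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(signal, wavelet="db4", level=4):
    # Decompose, soft-threshold the detail coefficients, reconstruct.
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Robust noise estimate from the finest detail band (median absolute deviation)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))  # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)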
Thread concept for automatic task parallelization in image analysis
NASA Astrophysics Data System (ADS)
Lueckenhaus, Maximilian; Eckstein, Wolfgang
1998-09-01
Parallel processing of image analysis tasks is an essential method to speed up image processing and helps to exploit the full capacity of distributed systems. However, writing parallel code is a difficult and time-consuming process and often leads to an architecture-dependent program that has to be re-implemented when the hardware changes. Therefore it is highly desirable to perform the parallelization automatically. For this we have developed a special kind of thread concept for image analysis tasks. Threads derived from one subtask may share objects and run in the same context, but may follow different threads of execution and work on different data in parallel. In this paper we describe the basics of our thread concept and show how it can be used as the basis of automatic task parallelization to speed up image processing. We further illustrate the design and implementation of an agent-based system that uses image analysis threads for generating and processing parallel programs while taking into account the available hardware. Tests made with our system prototype show that the thread concept combined with the agent paradigm is suitable for speeding up image processing by automatic parallelization of image analysis tasks.
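As a concrete (if much simplified) picture of data-parallel image analysis threads, the sketch below runs the same subtask concurrently on different bands of one image. The band split, the median-filter stand-in, and the ThreadPoolExecutor are assumptions of this illustration, not the agent-based system of the paper; NumPy/SciPy kernels typically release the GIL, so the threads genuinely overlap.

import numpy as np
from concurrent.futures import ThreadPoolExecutor
from scipy.ndimage import median_filter

def process_tile(tile):
    # Any per-tile analysis step; a median filter stands in here.
    return median_filter(tile, size=3)

def parallel_rows(image, n_workers=4):
    # Split into horizontal bands and filter them concurrently.
    # (Seam rows between bands are left unhandled for brevity.)
    bands = np.array_split(image, n_workers, axis=0)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = list(pool.map(process_tile, bands))
    return np.vstack(results)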
Web-based platform for collaborative medical imaging research
NASA Astrophysics Data System (ADS)
Rittner, Leticia; Bento, Mariana P.; Costa, André L.; Souza, Roberto M.; Machado, Rubens C.; Lotufo, Roberto A.
2015-03-01
Medical imaging research depends basically on the availability of large image collections, image processing and analysis algorithms, hardware and a multidisciplinary research team. It has to be reproducible, free of errors, fast, accessible through a large variety of devices spread around research centers and conducted simultaneously by a multidisciplinary team. Therefore, we propose a collaborative research environment, named Adessowiki, where tools and datasets are integrated and readily available in the Internet through a web browser. Moreover, processing history and all intermediate results are stored and displayed in automatic generated web pages for each object in the research project or clinical study. It requires no installation or configuration from the client side and offers centralized tools and specialized hardware resources, since processing takes place in the cloud.
Processing Digital Imagery to Enhance Perceptions of Realism
NASA Technical Reports Server (NTRS)
Woodell, Glenn A.; Jobson, Daniel J.; Rahman, Zia-ur
2003-01-01
Multi-scale retinex with color restoration (MSRCR) is a method of processing digital image data based on Edwin Land's retinex (retina + cortex) theory of human color vision. An outgrowth of basic scientific research and its application to NASA's remote-sensing mission, MSRCR is embodied in a general-purpose algorithm that greatly improves the perception of visual realism and the quantity and quality of perceived information in a digitized image. In addition, the MSRCR algorithm includes provisions for automatic corrections to accelerate and facilitate what could otherwise be a tedious image-editing process. The MSRCR algorithm has been, and is expected to continue to be, the basis for development of commercial image-enhancement software designed to extend and refine its capabilities for diverse applications.
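A minimal sketch may help fix ideas: single-scale retinex is the log-ratio of an image to a Gaussian-blurred version of itself, the multi-scale form averages this over several scales, and the color-restoration factor re-weights each channel by its share of total intensity. The constants and scales below are commonly cited defaults, not values taken from this abstract.

import numpy as np
from scipy.ndimage import gaussian_filter

def msrcr(img, sigmas=(15, 80, 250), alpha=125.0, beta=46.0, eps=1.0):
    # img: RGB array; eps keeps the logarithms finite.
    img = img.astype(float) + eps
    msr = np.zeros_like(img)
    for sigma in sigmas:  # multi-scale retinex: average of log-ratio outputs
        msr += np.log(img) - np.log(gaussian_filter(img, sigma=(sigma, sigma, 0)))
    msr /= len(sigmas)
    total = img.sum(axis=2, keepdims=True)
    crf = beta * (np.log(alpha * img) - np.log(total))  # color restoration factor
    out = msr * crf
    return (out - out.min()) / (out.max() - out.min())  # stretch to display range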
A learning tool for optical and microwave satellite image processing and analysis
NASA Astrophysics Data System (ADS)
Dashondhi, Gaurav K.; Mohanty, Jyotirmoy; Eeti, Laxmi N.; Bhattacharya, Avik; De, Shaunak; Buddhiraju, Krishna M.
2016-04-01
This paper presents a self-learning tool, which contains a number of virtual experiments for processing and analysis of Optical/Infrared and Synthetic Aperture Radar (SAR) images. The tool is named Virtual Satellite Image Processing and Analysis Lab (v-SIPLAB). Experiments included in the learning tool are related to: Optical/Infrared - image and edge enhancement, smoothing, PCT, vegetation indices, mathematical morphology, accuracy assessment, supervised/unsupervised classification, etc.; Basic SAR - parameter extraction and range spectrum estimation, range compression, Doppler centroid estimation, azimuth reference function generation and compression, multilooking, image enhancement, texture analysis, edge detection, etc.; SAR Interferometry - baseline calculation, extraction of single-look SAR images, registration, resampling, and interferogram generation; SAR Polarimetry - conversion of AirSAR or Radarsat data to S2/C3/T3 matrix, speckle filtering, power/intensity image generation, decomposition of S2/C3/T3, and classification of S2/C3/T3 using the Wishart classifier [3]. A professional-quality polarimetric SAR software package can be found at [8], a part of whose functionality can be found in our system. The learning tool also contains other modules, besides executable software experiments, such as aim, theory, procedure, interpretation, quizzes, links to additional reading material, and user feedback. Students can gain an understanding of optical and SAR remotely sensed images through discussion of basic principles, supported by structured procedures for running and interpreting the experiments. Quizzes for self-assessment and a provision for online feedback are also provided to make this learning tool self-contained. One can download results after performing experiments.
Cerdena, Ernesto A; Corigliano, Barbara A
2007-01-01
The implementation of the Deficit Reduction Act (DRA) of 2005 has had adverse impacts on freestanding imaging centers and independent diagnostic testing facilities (IDTF) throughout the nation, affecting patients' access to quality imaging as well as crippling organizations' bottom lines. Basic but effective strategic business tools should be formulated and executed to overcome the negative impact of the DRA. This should include creative and innovative process improvement initiatives while reducing operational costs and optimizing staff, thus improving profitability. Radiology administrators should act as facilitators to articulate and instill the mission, core values, and vision of the organization in the staff. Equally important, leaders in the imaging industry need to manifest a strong commitment to bringing the center into a whole new paradigm shift towards excellence and effective business operations.
Effective Moment Feature Vectors for Protein Domain Structures
Shi, Jian-Yu; Yiu, Siu-Ming; Zhang, Yan-Ning; Chin, Francis Yuk-Lun
2013-01-01
Image processing techniques have been shown to be useful in studying protein domain structures. The idea is to represent the pairwise distances of any two residues of the structure in a 2D distance matrix (DM). Features and/or submatrices are extracted from this DM to represent a domain. Existing approaches, however, may involve a large number of features (100–400) or complicated mathematical operations. Finding fewer but more effective features is always desirable. In this paper, based on some key observations on DMs, we are able to decompose a DM image into four basic binary images, each representing the structural characteristics of a fundamental secondary structure element (SSE) or a motif in the domain. Using the concept of moments in image processing, we further derive 45 structural features based on the four binary images. Together with 4 features extracted from the basic images, we represent the structure of a domain using 49 features. We show that our feature vectors can represent domain structures effectively in terms of the following. (1) We show a higher accuracy for domain classification. (2) We show a clear and consistent distribution of domains using our proposed structural vector space. (3) We are able to cluster the domains according to our moment features and demonstrate a relationship between structural variation and functional diversity. PMID:24391828
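For readers unfamiliar with moment features, the sketch below computes raw central moments of a binary image and their scale-normalized versions, the basic quantities from which such structural features are built. The order limit and normalization are conventional choices, not the paper's exact 45-feature construction.

import numpy as np

def central_moments(binary_img, max_order=3):
    ys, xs = np.nonzero(binary_img)          # coordinates of foreground pixels
    m00 = float(len(xs))                     # zeroth moment = area
    xbar, ybar = xs.mean(), ys.mean()        # centroid
    mu = {}
    for p in range(max_order + 1):
        for q in range(max_order + 1 - p):
            mu[(p, q)] = ((xs - xbar) ** p * (ys - ybar) ** q).sum()
    # Scale-normalized moments: eta_pq = mu_pq / m00^(1 + (p + q) / 2)
    eta = {k: v / m00 ** (1 + (k[0] + k[1]) / 2.0)
           for k, v in mu.items() if k[0] + k[1] >= 2}
    return mu, eta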
Fixed-Cell Imaging of Schizosaccharomyces pombe.
Hagan, Iain M; Bagley, Steven
2016-07-01
The acknowledged genetic malleability of fission yeast has been matched by impressive cytology to drive major advances in our understanding of basic molecular cell biological processes. In many of the more recent studies, traditional approaches of fixation followed by processing to accommodate classical staining procedures have been superseded by live-cell imaging approaches that monitor the distribution of fusion proteins between a molecule of interest and a fluorescent protein. Although such live-cell imaging is uniquely informative for many questions, fixed-cell imaging remains the better option for others and is an important, sometimes critical, complement to the analysis of fluorescent fusion proteins by live-cell imaging. Here, we discuss the merits of fixed- and live-cell imaging as well as specific issues for fluorescence microscopy imaging of fission yeast. © 2016 Cold Spring Harbor Laboratory Press.
Optical coherence tomography for embryonic imaging: a review
Raghunathan, Raksha; Singh, Manmohan; Dickinson, Mary E.; Larin, Kirill V.
2016-01-01
Embryogenesis is a highly complex and dynamic process, and its visualization is crucial for understanding basic physiological processes during development and for identifying and assessing possible defects, malformations, and diseases. While traditional imaging modalities, such as ultrasound biomicroscopy, micro-magnetic resonance imaging, and micro-computed tomography, have long been adapted for embryonic imaging, these techniques generally have limitations in their speed, spatial resolution, and contrast to capture processes such as cardiodynamics during embryogenesis. Optical coherence tomography (OCT) is a noninvasive imaging modality with micrometer-scale spatial resolution and imaging depth up to a few millimeters in tissue. OCT has bridged the gap between ultrahigh resolution imaging techniques with limited imaging depth like confocal microscopy and modalities, such as ultrasound sonography, which have deeper penetration but poorer spatial resolution. Moreover, the noninvasive nature of OCT has enabled live imaging of embryos without any external contrast agents. We review how OCT has been utilized to study developing embryos and also discuss advances in techniques used in conjunction with OCT to understand embryonic development. PMID:27228503
The Design of a High Performance Earth Imagery and Raster Data Management and Processing Platform
NASA Astrophysics Data System (ADS)
Xie, Qingyun
2016-06-01
This paper summarizes the general requirements and specific characteristics of both geospatial raster database management system and raster data processing platform from a domain-specific perspective as well as from a computing point of view. It also discusses the need of tight integration between the database system and the processing system. These requirements resulted in Oracle Spatial GeoRaster, a global scale and high performance earth imagery and raster data management and processing platform. The rationale, design, implementation, and benefits of Oracle Spatial GeoRaster are described. Basically, as a database management system, GeoRaster defines an integrated raster data model, supports image compression, data manipulation, general and spatial indices, content and context based queries and updates, versioning, concurrency, security, replication, standby, backup and recovery, multitenancy, and ETL. It provides high scalability using computer and storage clustering. As a raster data processing platform, GeoRaster provides basic operations, image processing, raster analytics, and data distribution featuring high performance computing (HPC). Specifically, HPC features include locality computing, concurrent processing, parallel processing, and in-memory computing. In addition, the APIs and the plug-in architecture are discussed.
"Minding the gap": imagination, creativity and human cognition.
Pelaprat, Etienne; Cole, Michael
2011-12-01
Inquiry into the nature of mental images is a major topic in psychology where research is focused on the psychological faculties of imagination and creativity. In this paper, we draw on the work of L.S. Vygotsky to develop a cultural-historical approach to the study of imagination as central to human cognitive processes. We characterize imagination as a process of image making that resolves "gaps" arising from biological and cultural-historical constraints, and that enables ongoing time-space coordination necessary for thought and action. After presenting some basic theoretical considerations, we offer a series of examples to illustrate for the reader the diversity of processes of imagination as image making. Applying our arguments to contemporary digital media, we argue that a cultural-historical approach to image formation is important for understanding how imagination and creativity are distinct, yet inter-penetrating processes.
Goede, Patricia A.; Lauman, Jason R.; Cochella, Christopher; Katzman, Gregory L.; Morton, David A.; Albertine, Kurt H.
2004-01-01
Use of digital medical images has become common over the last several years, coincident with the release of inexpensive, mega-pixel quality digital cameras and the transition to digital radiology operation by hospitals. One problem that clinicians, medical educators, and basic scientists encounter when handling images is the difficulty of using business and graphic arts commercial-off-the-shelf (COTS) software in multicontext authoring and interactive teaching environments. The authors investigated and developed software-supported methodologies to help clinicians, medical educators, and basic scientists become more efficient and effective in their digital imaging environments. The software that the authors developed provides the ability to annotate images based on a multispecialty methodology for annotation and visual knowledge representation. This annotation methodology is designed by consensus, with contributions from the authors and physicians, medical educators, and basic scientists in the Departments of Radiology, Neurobiology and Anatomy, Dermatology, and Ophthalmology at the University of Utah. The annotation methodology functions as a foundation for creating, using, reusing, and extending dynamic annotations in a context-appropriate, interactive digital environment. The annotation methodology supports the authoring process as well as output and presentation mechanisms. The annotation methodology is the foundation for a Windows implementation that allows annotated elements to be represented as structured eXtensible Markup Language and stored separately from the image(s). PMID:14527971
NASA Astrophysics Data System (ADS)
Blume, H.; Alexandru, R.; Applegate, R.; Giordano, T.; Kamiya, K.; Kresina, R.
1986-06-01
In a digital diagnostic imaging department, the majority of operations for handling and processing of images can be grouped into a small set of basic operations, such as image data buffering and storage, image processing and analysis, image display, image data transmission, and image data compression. These operations occur in almost all nodes of the diagnostic imaging communications network of the department. An image processor architecture was developed in which each of these functions has been mapped into hardware and software modules. The modular approach has advantages in terms of economics, service, expandability, and upgradeability. The architectural design is based on the principles of hierarchical functionality and distributed and parallel processing, and aims at real-time response. Parallel processing and real-time response are facilitated in part by a dual bus system: a VME control bus and a high-speed image data bus consisting of 8 independent parallel 16-bit busses, capable of a combined throughput of up to 144 MBytes/sec. The presented image processor is versatile enough to meet the video-rate processing needs of digital subtraction angiography, the large pixel matrix processing requirements of static projection radiography, or the broad range of manipulation and display needs of a multi-modality diagnostic workstation. Several hardware modules are described in detail. To illustrate the capabilities of the image processor, processed 2000 x 2000 pixel computed radiographs are shown and estimated computation times for executing the processing operations are presented.
Acoustical holographic recording with coherent optical read-out and image processing
NASA Astrophysics Data System (ADS)
Liu, H. K.
1980-10-01
New acoustic holographic wave memory devices have been designed for real-time in-situ recording applications. The basic operating principles of these devices and experimental results obtained with some prototype devices are presented. Recording media used in the devices include thermoplastic resin, Crisco vegetable oil, and Wilson corn oil. In addition, nonlinear coherent optical image processing techniques, including equidensitometry, A-D conversion, and pseudo-color, all based on the new contact screen technique, are discussed with regard to the enhancement of the normally poorly resolved acoustical holographic images.
Multi-Sensor Scene Synthesis and Analysis
1981-09-01
Table of contents excerpt (recoverable topics): Quad Trees for Image Representation and Processing; Databases: Definitions and Basic Concepts; Use of Databases in Hierarchical Scene Analysis; Use of Relational Tables; Multisensor Image Database Systems (MIDAS); Relational Database System for Pictures; Relational Pictorial Database.
New Windows based Color Morphological Operators for Biomedical Image Processing
NASA Astrophysics Data System (ADS)
Pastore, Juan; Bouchet, Agustina; Brun, Marcel; Ballarin, Virginia
2016-04-01
Morphological image processing is well known as an efficient methodology for image processing and computer vision. With the wide use of color in many areas, interest in color perception and processing has been growing rapidly. Many models have been proposed to extend morphological operators to the field of color images, dealing with new problems not present in the binary and gray-level contexts. These solutions usually deal with the lattice structure of the color space, or provide it with total orders, in order to define basic operators with the required properties. In this work we propose a new locally defined ordering, in the context of window-based morphological operators, for the definition of erosion-like and dilation-like operators, which provides the same desired properties expected from color morphology while avoiding some of the drawbacks of prior approaches. Experimental results show that the proposed color operators can be efficiently used for color image processing.
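To make the erosion-like idea concrete, the sketch below applies a window-based color erosion using a simple lexicographic order as a stand-in; the paper defines its own locally defined ordering, so the ordering rule, window size, and function name here are assumptions of this illustration.

import numpy as np

def color_erosion_lex(img, win=3):
    # Erosion-like operator for RGB images: each output pixel is the
    # lexicographic minimum (R, then G, then B) over the window.
    h, w, _ = img.shape
    r = win // 2
    padded = np.pad(img, ((r, r), (r, r), (0, 0)), mode="edge")
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            window = padded[y:y + win, x:x + win].reshape(-1, 3)
            # np.lexsort's last key is primary, so order keys as (B, G, R)
            idx = np.lexsort((window[:, 2], window[:, 1], window[:, 0]))
            out[y, x] = window[idx[0]]
    return out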
A Tentative Application Of Morphological Filters To Time-Varying Images
NASA Astrophysics Data System (ADS)
Billard, D.; Poquillon, B.
1989-03-01
In this paper, morphological filters, which are commonly used to process either 2D or multidimensional static images, are generalized to the analysis of time-varying image sequences. The introduction of the time dimension induces interesting properties when designing such spatio-temporal morphological filters. In particular, the specification of spatio-temporal structuring elements (equivalent to time-varying spatial structuring elements) can be adjusted according to the temporal variations of the image sequences to be processed: this allows specific morphological transforms to be derived for noise filtering or moving-object discrimination on dynamic images viewed by a non-stationary sensor. First, a brief introduction to the basic principles underlying morphological filters is given. Then, a straightforward generalization of these principles to time-varying images is proposed. This leads us to define spatio-temporal opening and closing and to introduce some of their possible applications to the processing of dynamic images. Finally, preliminary results obtained using a natural forward-looking infrared (FLIR) image sequence are presented.
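The spatio-temporal generalization can be sketched directly with an off-the-shelf grayscale opening applied to a frame stack, where the structuring element now spans time as well as space. The frame count, element size, and use of scipy are assumptions of this illustration, not the filters designed in the paper.

import numpy as np
from scipy.ndimage import grey_opening

# A sequence of frames stacked as a (time, rows, cols) volume
seq = np.random.rand(16, 64, 64)

# Spatio-temporal opening: the structuring element extends 3 frames in time
# and 3x3 pixels in space, so bright transients shorter than 3 frames
# (e.g. noise spikes) are suppressed while persistent structures survive.
opened = grey_opening(seq, size=(3, 3, 3))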
[Quantitative data analysis for live imaging of bone].
Seno, Shigeto
Because bone is a hard tissue, it has long been difficult to observe the interior of living bone tissue. With the progress of microscopy and fluorescent-probe technology in recent years, it has become possible to observe the various activities of the cells that form bone. On the other hand, the quantitative increase in data and the diversification and complexity of the images make quantitative analysis by visual inspection difficult. The development of methodologies for processing microscopic images and analyzing the data has therefore been expected. In this article, we introduce the research field of bioimage informatics, which lies at the boundary of biology and information science, and then outline basic image processing techniques for quantitative analysis of live imaging data of bone.
Johnson, Heath E; Haugh, Jason M
2013-12-02
This unit focuses on the use of total internal reflection fluorescence (TIRF) microscopy and image analysis methods to study the dynamics of signal transduction mediated by class I phosphoinositide 3-kinases (PI3Ks) in mammalian cells. The first four protocols cover live-cell imaging experiments, image acquisition parameters, and basic image processing and segmentation. These methods are generally applicable to live-cell TIRF experiments. The remaining protocols outline more advanced image analysis methods, which were developed in our laboratory for the purpose of characterizing the spatiotemporal dynamics of PI3K signaling. These methods may be extended to analyze other cellular processes monitored using fluorescent biosensors. Copyright © 2013 John Wiley & Sons, Inc.
Melo, E Correa
2003-08-01
The author describes the reasons why evaluation processes should be applied to the Veterinary Services of Member Countries, either for trade in animals and animal products and by-products between two countries, or for establishing essential measures to improve the Veterinary Service concerned. The author also describes the basic elements involved in conducting an evaluation process, including the instruments for doing so. These basic elements centre on the following: designing a model, or desirable image, against which a comparison can be made; establishing a list of processes to be analysed and defining the qualitative and quantitative mechanisms for this analysis; and establishing a multidisciplinary evaluation team and developing a process for standardising the evaluation criteria.
Onboard spectral imager data processor
NASA Astrophysics Data System (ADS)
Otten, Leonard J.; Meigs, Andrew D.; Franklin, Abraham J.; Sears, Robert D.; Robison, Mark W.; Rafert, J. Bruce; Fronterhouse, Donald C.; Grotbeck, Ronald L.
1999-10-01
Previous papers have described the concept behind the MightySat II.1 program, the satellite's Fourier Transform imaging spectrometer's optical design, the design for the spectral imaging payload, and its initial qualification testing. This paper discusses the on board data processing designed to reduce the amount of downloaded data by an order of magnitude and provide a demonstration of a smart spaceborne spectral imaging sensor. Two custom components, a spectral imager interface 6U VME card that moves data at over 30 MByte/sec, and four TI C-40 processors mounted to a second 6U VME and daughter card, are used to adapt the sensor to the spacecraft and provide the necessary high speed processing. A system architecture that offers both on board real time image processing and high-speed post data collection analysis of the spectral data has been developed. In addition to the on board processing of the raw data into a usable spectral data volume, one feature extraction technique has been incorporated. This algorithm operates on the basic interferometric data. The algorithm is integrated within the data compression process to search for uploadable feature descriptions.
NASA Astrophysics Data System (ADS)
Gupta, Shubhank; Panda, Aditi; Naskar, Ruchira; Mishra, Dinesh Kumar; Pal, Snehanshu
2017-11-01
Steels are alloys of iron and carbon, widely used in construction and other applications. The evolution of steel microstructure through various heat treatment processes is an important factor in controlling the properties and performance of steel. Extensive experimentation has been performed to enhance the properties of steel by customizing heat treatment processes. However, experimental analyses are always associated with high resource requirements in terms of cost and time. As an alternative solution, we propose an image processing-based technique for refinement of raw plain carbon steel microstructure images into a digital form usable in experiments related to heat treatment processes of steel in diverse applications. The proposed work follows the conventional steps practiced by materials engineers in manual refinement of steel images, and appropriately utilizes basic image processing techniques (including filtering, segmentation, opening, and clustering) to automate the whole process. The proposed refinement of steel microstructure images aims to enable computer-aided simulations of heat treatment of plain carbon steel in a timely and cost-efficient manner, and is hence beneficial for the materials and metallurgy industry. Our experimental results prove the efficiency and effectiveness of the proposed technique.
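A minimal sketch of the kind of pipeline named above (filtering, segmentation, opening) is given below; the parameters, the Otsu threshold, and the scikit-image/SciPy functions are assumptions of this illustration, and the clustering step is omitted for brevity.

import numpy as np
from scipy.ndimage import median_filter, binary_opening
from skimage.filters import threshold_otsu

def refine_micrograph(gray):
    # gray: 2D grayscale micrograph array
    smooth = median_filter(gray, size=3)            # suppress acquisition noise
    t = threshold_otsu(smooth)                      # global segmentation threshold
    phases = smooth > t                             # separate the two phase regions
    cleaned = binary_opening(phases, iterations=1)  # drop small speckle artifacts
    return cleaned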
Implementation of the Pan-STARRS Image Processing Pipeline
NASA Astrophysics Data System (ADS)
Fang, Julia; Aspin, C.
2007-12-01
Pan-STARRS, or Panoramic Survey Telescope and Rapid Response System, is a wide-field imaging facility that combines small mirrors with gigapixel cameras. It surveys the entire available sky several times a month, which requires large amounts of data to be processed and stored immediately. Accordingly, the Image Processing Pipeline (IPP) is a collection of software tools responsible for the primary image analysis for Pan-STARRS. It includes data registration, basic image analysis such as obtaining master images and detrending the exposures, mosaic calibration when applicable, and, lastly, image summation and differencing. In this paper I present my work on installing IPP versions 2.1 and 2.2 on a Linux machine, running the Simtest (simulated data used to verify the installation), and finally applying the IPP to two different sets of UH 2.2m Tek data. This work was conducted through a Research Experience for Undergraduates (REU) position at the University of Hawaii's Institute for Astronomy and funded by the NSF.
NASA Astrophysics Data System (ADS)
Kage, Andreas; Canto, Marcia; Gorospe, Emmanuel; Almario, Antonio; Münzenmayer, Christian
2010-03-01
In the near future, Computer Assisted Diagnosis (CAD), which is well known in the area of mammography, might be used to support clinical experts in the diagnosis of images derived from imaging modalities such as endoscopy. In the recent past, a few first approaches for computer-assisted endoscopy have already been presented. These systems use as input the video signal provided by the endoscope's video processor. Despite the advent of high-definition systems, most standard endoscopy systems today still provide only analog video signals. These signals consist of interlaced images that cannot be used in a CAD approach without deinterlacing. There are many different deinterlacing approaches known today, but most of them are specializations of a few basic approaches. In this paper we present four basic deinterlacing approaches. We used a database of non-interlaced images that were degraded by artificial interlacing and afterwards processed by these approaches. The database contains regions of interest (ROI) of clinical relevance for the diagnosis of abnormalities in the esophagus. We compared the classification rates on these ROIs for the original images and after deinterlacing. The results show that deinterlacing has an impact on the classification rates. The Bobbing approach and the Motion Compensation approach achieved the best classification results in most cases.
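Of the basic approaches compared, the Bobbing approach is the simplest to sketch: keep one field and rebuild the missing lines by vertical interpolation. The implementation below is a generic illustration, with the field convention and averaging rule as assumptions, not the paper's exact variant.

import numpy as np

def bob_deinterlace(frame, keep="even"):
    # Keep one field; fill the discarded field's rows by averaging the
    # neighbouring kept lines.
    out = frame.astype(float).copy()
    start = 1 if keep == "even" else 0  # rows belonging to the discarded field
    for y in range(start, frame.shape[0], 2):
        above = out[y - 1] if y > 0 else out[y + 1]
        below = out[y + 1] if y + 1 < frame.shape[0] else out[y - 1]
        out[y] = 0.5 * (above + below)
    return out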
NASA Astrophysics Data System (ADS)
Liu, Likun
2018-01-01
In the field of remote sensing image processing, segmentation is a preliminary step for later analysis, including semi-automatic human interpretation and fully automatic machine recognition and learning. Since 2000, the object-oriented remote sensing image processing method and its basic ideas have prevailed. The core of the approach is the Fractal Net Evolution Approach (FNEA) multi-scale segmentation algorithm. This paper focuses on research and improvement of that algorithm: it analyzes existing segmentation algorithms and selects the watershed algorithm as the optimal initialization. Meanwhile, the algorithm is modified by adjusting an area parameter and then further combining the area parameter with a heterogeneity parameter. Several experiments are then carried out, showing that the modified FNEA algorithm achieves a better segmentation result than a traditional pixel-based method (an FCM algorithm based on neighborhood information) and than the combination of FNEA and watershed.
Norton, Daniel; McBain, Ryan; Holt, Daphne J; Ongur, Dost; Chen, Yue
2009-06-15
Impaired emotion recognition has been reported in schizophrenia, yet the nature of this impairment is not completely understood. Recognition of facial emotion depends on processing affective and nonaffective facial signals, as well as basic visual attributes. We examined whether and how poor facial emotion recognition in schizophrenia is related to basic visual processing and nonaffective face recognition. Schizophrenia patients (n = 32) and healthy control subjects (n = 29) performed emotion discrimination, identity discrimination, and visual contrast detection tasks, where the emotionality, distinctiveness of identity, or visual contrast was systematically manipulated. Subjects determined which of two presentations in a trial contained the target: the emotional face for emotion discrimination, a specific individual for identity discrimination, and a sinusoidal grating for contrast detection. Patients had significantly higher thresholds (worse performance) than control subjects for discriminating both fearful and happy faces. Furthermore, patients' poor performance in fear discrimination was predicted by performance in visual detection and face identity discrimination. Schizophrenia patients require greater emotional signal strength to discriminate fearful or happy face images from neutral ones. Deficient emotion recognition in schizophrenia does not appear to be determined solely by affective processing but is also linked to the processing of basic visual and facial information.
Relationships between digital signal processing and control and estimation theory
NASA Technical Reports Server (NTRS)
Willsky, A. S.
1978-01-01
Research directions in the fields of digital signal processing and modern control and estimation theory are discussed. Stability theory, linear prediction and parameter identification, system synthesis and implementation, two-dimensional filtering, decentralized control and estimation, and image processing are considered in order to uncover some of the basic similarities and differences in the goals, techniques, and philosophy of the disciplines.
Furuta, Akihiro; Onishi, Hideo; Nakamoto, Kenta
This study aimed to develop a realistic striatal digital brain (SDB) phantom and to assess the effect of the ventricle on the specific binding ratio (SBR) in 123I-FP-CIT SPECT imaging. The SDB phantom was constructed from four segments (striatum, ventricle, brain parenchyma, and skull bone) using the percentile method and other image processing on T2-weighted MR images. The reference image was converted into 128×128 matrices to align the MR images with SPECT images. The process image was reconstructed from projection data sets generated from reference images with additive blurring, attenuation, scatter, and statistical noise. The SDB phantom was evaluated to determine the accuracy of the calculated SBR and the effect on the SBR of including or excluding ventricular counts in the reference and process images. We developed and investigated the utility of the SDB phantom for 123I-FP-CIT SPECT clinical studies. The true SBR value matched the SBR calculated from the reference and process images. The SBR was underestimated by 58.0% with ventricular counts in the reference image, and by 162% with ventricular counts in the process images. The SDB phantom provides an extremely convenient tool for discovering basic properties of 123I-FP-CIT SPECT clinical study images. It is suggested that the SBR is susceptible to the ventricle.
Advanced biologically plausible algorithms for low-level image processing
NASA Astrophysics Data System (ADS)
Gusakova, Valentina I.; Podladchikova, Lubov N.; Shaposhnikov, Dmitry G.; Markin, Sergey N.; Golovan, Alexander V.; Lee, Seong-Whan
1999-08-01
At present, in computer vision, the approach based on modeling biological vision mechanisms is extensively developed. However, up to now, real-world image processing has had no effective solution within the frameworks of either biologically inspired or conventional approaches. Evidently, new algorithms and system architectures based on advanced biological motivation should be developed to solve the computational problems related to this visual task. A basic problem that must be solved for the creation of an effective artificial visual system to process real-world images is the search for new algorithms of low-level image processing, which to a great extent determine system performance. In the present paper, the results of psychophysical experiments and several advanced biologically motivated algorithms for low-level processing are presented. These algorithms are based on a local space-variant filter, context encoding of visual information presented at the center of the input window, and automatic detection of perceptually important image fragments. The core of the latter algorithm is the use of local feature conjunctions, such as non-collinear oriented segments, and composite feature map formation. The developed algorithms were integrated into a foveal active vision model, the MARR. It is supposed that the proposed algorithms may significantly improve model performance in real-world image processing during memorizing, search, and recognition.
“Pretty Pictures” with the HDI
NASA Astrophysics Data System (ADS)
Buckner, Spencer L.
2017-01-01
The Half-Degree Imager (HDI) has been in use on the 0.9-m WIYN telescope since October 2013. The instrument has served the consortium well, as evidenced by the posters in this session and presentations at the concurrent special session held at this meeting. One thing that has been missing from the mix is aesthetically pleasing images for use in publicity and public outreach. Making "pretty pictures" with a scientific instrument such as HDI presents a number of challenges and opportunities. The chief challenge is finding the time to do the basic imaging given the limited telescope time available to users. Most users are understandably reluctant to take time away from imaging for their scientific research to take images whose primary purpose is to make a pretty picture. Fortunately, imaging of some objects for pretty pictures can be done under sky conditions that are less than ideal, when photometric studies would have limited usefulness. Another challenge is that raw HDI images must be converted from an extended FITS format into standard FITS, with a filter line added to the header, to make the images usable by most commercially available image processing software. On the plus side, pretty pictures can serve to inspire prospective students into astronomy. Austin Peay State University has a popular astrophotography class that makes use of images taken with the HDI camera to introduce students to basic image processing techniques. The course is taken by both physics majors on the astrophysics track and non-science majors completing the astronomy minor. Pretty pictures can also be used as a recruitment tool to bring students into astronomy. APSU houses physics, biology, chemistry, agriculture, and medical technology in the same building, and displaying astronomical pictures at strategic locations around the building serves to recruit non-science majors to take more astronomy courses. Finally, the images can be used in publicity and outreach efforts by the university. This poster presents some of the techniques used in processing the images for aesthetic value and how those images are used in recruitment, publicity, and outreach. Several of the finished images in poster-sized prints will be available for viewing.
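The FITS conversion step described above can be sketched with astropy. The assumption that the image lives in the first extension, the keyword name FILTER, and the function name are illustrative; the actual HDI headers may differ.

from astropy.io import fits

def flatten_hdi(in_path, out_path, filter_name):
    # Convert a multi-extension FITS file to a simple single-HDU FITS
    # and record the filter in the header.
    with fits.open(in_path) as hdul:
        data = hdul[1].data            # assume the image is in the first extension
        header = hdul[1].header.copy()
    header["FILTER"] = filter_name     # keyword expected by downstream software
    fits.PrimaryHDU(data=data, header=header).writeto(out_path, overwrite=True)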
Processing Infrared Images For Fire Management Applications
NASA Astrophysics Data System (ADS)
Warren, John R.; Pratt, William K.
1981-12-01
The USDA Forest Service has used airborne infrared systems for forest fire detection and mapping for many years. The transfer of the images from plane to ground and the transposition of fire spots and perimeters to maps have been performed manually. A new system has been developed which uses digital image processing, transmission, and storage. Interactive graphics, high-resolution color display, calculations, and computer model compatibility are featured in the system. Images are acquired by an IR line scanner and converted to 1024 x 1024 x 8-bit frames for transmission to the ground at a 1.544 Mbit/s rate over a 14.7 GHz carrier. Individual frames are received and stored, then transferred to a solid-state memory to refresh the display at a conventional 30 frames per second rate. Line length and area calculations, false color assignment, X-Y scaling, and image enhancement are available. Fire spread can be calculated for display and fire perimeters plotted on maps. The performance requirements, basic system, and image processing are described.
Wright, Cameron H G; Barrett, Steven F; Pack, Daniel J
2005-01-01
We describe a new approach to attacking the problem of robust computer vision for mobile robots. The overall strategy is to mimic the biological evolution of animal vision systems. Our basic imaging sensor is based upon the eye of the common house fly, Musca domestica. The computational algorithms are a mix of traditional image processing, subspace techniques, and multilayer neural networks.
IMAGESEER - IMAGEs for Education and Research
NASA Technical Reports Server (NTRS)
Le Moigne, Jacqueline; Grubb, Thomas; Milner, Barbara
2012-01-01
IMAGESEER is a new Web portal that brings easy access to NASA image data for non-NASA researchers, educators, and students. The IMAGESEER Web site and database are specifically designed to be utilized by the university community, to enable teaching image processing (IP) techniques on NASA data, as well as to provide reference benchmark data to validate new IP algorithms. Along with the data and a Web user interface front-end, basic knowledge of the application domains, benchmark information, and specific NASA IP challenges (or case studies) are provided.
A fast discrete S-transform for biomedical signal processing.
Brown, Robert A; Frayne, Richard
2008-01-01
Determining the frequency content of a signal is a basic operation in signal and image processing. The S-transform provides both the true frequency and globally referenced phase measurements characteristic of the Fourier transform and also generates local spectra, as does the wavelet transform. Due to this combination, the S-transform has been successfully demonstrated in a variety of biomedical signal and image processing tasks. However, the computational demands of the S-transform have limited its application in medicine to this point in time. This abstract introduces the fast S-transform, a more efficient discrete implementation of the classic S-transform with dramatically reduced computational requirements.
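For orientation, the classic discrete S-transform (the formulation whose cost motivates the fast variant) can be written via the frequency domain: shift the signal's spectrum by each frequency of interest, apply a frequency-dependent Gaussian window, and invert. The sketch below is this standard construction, not the fast algorithm introduced in the abstract.

import numpy as np

def s_transform(h):
    # Rows of S: frequencies 0..N/2; columns: time samples.
    N = len(h)
    H = np.fft.fft(h)
    m = np.fft.fftfreq(N) * N                 # symmetric integer frequency offsets
    S = np.zeros((N // 2 + 1, N), dtype=complex)
    S[0] = np.mean(h)                         # zero-frequency row is the signal mean
    for n in range(1, N // 2 + 1):
        gauss = np.exp(-2.0 * np.pi**2 * m**2 / n**2)  # frequency-domain Gaussian
        S[n] = np.fft.ifft(np.roll(H, -n) * gauss)     # shift, window, invert
    return S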
NASA Technical Reports Server (NTRS)
Zolotukhin, V. G.; Kolosov, B. I.; Usikov, D. A.; Borisenko, V. I.; Mosin, S. T.; Gorokhov, V. N.
1980-01-01
A batch of programs for the YeS-1040 computer, combined into an automated system for processing photographic (and video) images of the Earth's surface taken from spacecraft, is described. Individual programs are presented with detailed discussion of the algorithmic and programmatic facilities needed by the user. The basic principles for assembling the system and the control programs are included. The exchange format, within whose framework any programs recommended for the processing system will be cataloged in the future, is also described.
Quantum realization of the bilinear interpolation method for NEQR.
Zhou, Ri-Gui; Hu, Wenwen; Fan, Ping; Ian, Hou
2017-05-31
In recent years, quantum image processing has been one of the most active fields in quantum computation and quantum information. Image scaling, as a kind of geometric image transformation, has been widely studied and applied in classical image processing; however, no quantum version of it existed. This paper is concerned with the feasibility of classical bilinear interpolation based on the novel enhanced quantum image representation (NEQR). First, the feasibility of bilinear interpolation for NEQR is proven. Then the concrete quantum circuits of bilinear interpolation, including scaling up and scaling down, for NEQR are given by using the multi-controlled-NOT operation, the special add-one operation, the reverse parallel adder, the parallel subtractor, and multiplier and division operations. Finally, the complexity of the quantum network circuit is analyzed in terms of basic quantum gates. Simulation results show that an image scaled up using bilinear interpolation is clearer and less distorted than one scaled with nearest-neighbor interpolation.
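The classical operation being mapped onto quantum circuits is ordinary bilinear interpolation, which weights the four nearest source pixels of each target coordinate. A minimal classical reference sketch follows (grayscale float image; the loop-based form is chosen for clarity, not speed):

import numpy as np

def bilinear_resize(img, new_h, new_w):
    h, w = img.shape
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    out = np.empty((new_h, new_w), dtype=float)
    for i, y in enumerate(ys):
        y0 = int(np.floor(y)); y1 = min(y0 + 1, h - 1); dy = y - y0
        for j, x in enumerate(xs):
            x0 = int(np.floor(x)); x1 = min(x0 + 1, w - 1); dx = x - x0
            top = (1 - dx) * img[y0, x0] + dx * img[y0, x1]  # blend along x, upper row
            bot = (1 - dx) * img[y1, x0] + dx * img[y1, x1]  # blend along x, lower row
            out[i, j] = (1 - dy) * top + dy * bot            # blend along y
    return out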
Rotation covariant image processing for biomedical applications.
Skibbe, Henrik; Reisert, Marco
2013-01-01
With the advent of novel biomedical 3D image acquisition techniques, the efficient and reliable analysis of volumetric images has become more and more important. The amount of data is enormous and demands automated processing. The applications are manifold, ranging from image enhancement, image reconstruction, and image description to object/feature detection and high-level contextual feature extraction. In most scenarios, it is expected that geometric transformations alter the output in a mathematically well-defined manner. In this paper we focus on 3D translations and rotations. Many algorithms rely on intensity or low-order tensorial-like descriptions to fulfill this demand. This paper proposes a general mathematical framework based on concepts and theories transferred from mathematical physics and harmonic analysis into the domain of image analysis and pattern recognition. Based on two basic operations, spherical tensor differentiation and spherical tensor multiplication, we show how to design a variety of 3D image processing methods in an efficient way. The framework has already been applied to several biomedical applications ranging from feature and object detection tasks to image enhancement and image restoration techniques. In this paper, the proposed methods are applied to a variety of different 3D data modalities stemming from the medical and biological sciences.
NASA Astrophysics Data System (ADS)
Kwok, Ngaiming; Shi, Haiyan; Peng, Yeping; Wu, Hongkun; Li, Ruowei; Liu, Shilong; Rahman, Md Arifur
2018-04-01
Restoring images captured under low illumination is an essential front-end process for most image-based applications. The Center-Surround Retinex algorithm has been a popular approach to improving image brightness. However, this algorithm in its basic form is known to produce color degradations. In order to mitigate this problem, here the Single-Scale Retinex algorithm is modified into an edge extractor, while illumination is recovered through a non-linear intensity mapping stage. The derived edges are then integrated with the mapped image to produce the enhanced output. Furthermore, to reduce color distortion, the process is conducted in the magnitude-sorted domain instead of the conventional Red-Green-Blue (RGB) color channels. Experimental results show that improvements in mean brightness, colorfulness, saturation, and information content can be obtained.
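The unmodified Single-Scale Retinex the paper starts from can be sketched in a few lines: the log of the image minus the log of its Gaussian-blurred "surround". The sigma value below is an arbitrary assumption; the paper repurposes this output as an edge map rather than using it directly.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def single_scale_retinex(img, sigma=80):
        # Basic SSR: log(image) minus log of its center-surround estimate.
        img = img.astype(np.float64) + 1.0           # avoid log(0)
        return np.log(img) - np.log(gaussian_filter(img, sigma))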
[Research on Spectral Polarization Imaging System Based on Static Modulation].
Zhao, Hai-bo; Li, Huan; Lin, Xu-ling; Wang, Zheng
2015-04-01
The main disadvantages of traditional spectral polarization imaging systems are complex structure, moving parts, and low throughput. A novel spectral polarization imaging system is discussed, based on static polarization intensity modulation combined with Savart-polariscope interference imaging. The system can obtain spectral information and all four Stokes polarization parameters in real time. Compared with conventional methods, its advantages are compactness, low mass, no moving parts, no electrical control, no slit, and high throughput. The system structure and the basic theory are introduced. The experimental system, established in the laboratory, consists of reimaging optics, a polarization intensity module, an interference imaging module, and a CCD data collection and processing module. The spectral range is visible and near-infrared (480-950 nm). A white board and a toy plane were imaged with the experimental system, verifying its ability to obtain spectral polarization imaging information. A calibration system for the static polarization modulation was set up; the statistical error of degree-of-polarization detection is less than 5%. The validity and feasibility of the basic principle are confirmed by the experimental results. The spectral polarization data captured by the system can be applied to object identification, object classification, and remote sensing detection.
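For orientation, the linear Stokes parameters such a system recovers can be illustrated with a conventional four-analyzer-angle (division-of-time) measurement; the static-modulation system obtains the same quantities interferometrically, and the circular component S3 additionally requires a retarder. Function and variable names are ours.

    import numpy as np

    def stokes_from_intensities(i0, i45, i90, i135):
        # Linear Stokes parameters from intensities behind four analyzer angles.
        s0 = i0 + i90                      # total intensity
        s1 = i0 - i90                      # 0/90 degree preference
        s2 = i45 - i135                    # 45/135 degree preference
        dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)  # degree of linear polarization
        aop = 0.5 * np.arctan2(s2, s1)     # angle of polarization (radians)
        return s0, s1, s2, dolp, aop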
Original and creative stereoscopic film making
NASA Astrophysics Data System (ADS)
Criado, Enrique
2008-02-01
Stereoscopic cinema has become, once again, a hot topic in film production. For filmmakers to be successful in this field, a technical background in the principles of binocular perception, and in how our brain interprets the incoming data from our eyes, is fundamental. It is also paramount for a stereoscopic production to adhere to certain rules for comfort and safety. There is an immense variety of options in the art of standard "flat" photography, and the possibilities only multiply with stereo. Stereoscopic imaging has its own unique areas of subjective, original, and creative control that allow an incredible range of possible combinations by working inside the standards, and in some cases at the boundaries, of the basic stereo rules. Stereoscopic imaging can be approached in a "flat" manner, like channeling sound through an audio equalizer with all the bands at the same level. It can provide a realistic perception, which in many cases is sufficient thanks to the rock-solid viewing inherent to the stereoscopic image, but there are many more possibilities. This document describes some of the basic operating parameters and concepts of stereoscopic imaging, and it also offers ideas for a creative process based on the variation and combination of these basic parameters, which can lead to a truly innovative and original viewing experience.
Image wavelet decomposition and applications
NASA Technical Reports Server (NTRS)
Treil, N.; Mallat, S.; Bajcsy, R.
1989-01-01
The general problem of computer vision has been investigated for more than 20 years and is still one of the most challenging fields in artificial intelligence. Indeed, a look at the human visual system gives an idea of the complexity of any solution to the problem of visual recognition. This general task can be decomposed into a whole hierarchy of problems ranging from pixel processing to high-level segmentation and complex object recognition. Contrasting an image at different resolutions provides useful information such as edges. An example of low-level signal and image processing using the theory of wavelets is introduced, which provides the basis for multiresolution representation. Like the human brain, we use a multiorientation process which detects features independently in different orientation sectors. Images of the same orientation but of different resolutions are thus contrasted to gather information about an image. An interesting image representation using energy zero crossings is developed. This representation is shown experimentally to be complete and leads to some higher-level applications such as edge and corner finding, which in turn provide two basic steps toward image segmentation. The possibilities of feedback between different levels of processing are also discussed.
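A single level of the simplest (Haar) 2-D wavelet decomposition illustrates the multiresolution idea; it is a crude stand-in for the multiorientation transform used in the paper, and it assumes even image dimensions.

    import numpy as np

    def haar_decompose(img):
        # One level of a 2-D Haar wavelet decomposition (even-sized input).
        a = img[0::2, 0::2].astype(float)
        b = img[0::2, 1::2].astype(float)
        c = img[1::2, 0::2].astype(float)
        d = img[1::2, 1::2].astype(float)
        ll = (a + b + c + d) / 4.0   # low-pass approximation
        lh = (a - b + c - d) / 4.0   # horizontal detail (vertical edges)
        hl = (a + b - c - d) / 4.0   # vertical detail (horizontal edges)
        hh = (a - b - c + d) / 4.0   # diagonal detail
        return ll, lh, hl, hh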
Optimization of image processing algorithms on mobile platforms
NASA Astrophysics Data System (ADS)
Poudel, Pramod; Shirvaikar, Mukul
2011-03-01
This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, netbooks and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms. The system engineer has a mandate to optimize them so as to meet real-time deadlines. A methodology to take advantage of the asymmetric dual-core processor, which includes an ARM and a DSP core supported by shared memory, is presented with implementation details. The target platform chosen is the popular OMAP 3530 processor for embedded media systems. It has an asymmetric dual-core architecture with an ARM Cortex-A8 and a TMS320C64x Digital Signal Processor (DSP). The development platform was the BeagleBoard with 256 MB of NAND RAM and 256 MB SDRAM memory. The basic image correlation algorithm is chosen for benchmarking as it finds widespread application in various template matching tasks such as face recognition. The basic algorithm prototypes conform to OpenCV, a popular computer vision library. OpenCV algorithms can be easily ported to the ARM core, which runs a popular operating system such as Linux or Windows CE. However, the DSP is architecturally more efficient at handling DFT algorithms. The algorithms are tested on a variety of images and performance results are presented measuring the speedup obtained due to the dual-core implementation. A major advantage of this approach is that it allows the ARM processor to perform important real-time tasks, while the DSP addresses performance-hungry algorithms.
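The DFT-based correlation workload that favours the DSP core can be sketched as follows; this is a NumPy stand-in for the OpenCV/C64x implementation, not the paper's code. Zero-padding the template to the image size makes the cost three FFTs instead of a sliding spatial dot product.

    import numpy as np

    def fft_correlate(image, template):
        # Cross-correlation via the DFT: IFFT(F(image) * conj(F(template))).
        H, W = image.shape
        F_img = np.fft.rfft2(image)
        F_tpl = np.fft.rfft2(template, s=(H, W))   # pad template to image size
        return np.fft.irfft2(F_img * np.conj(F_tpl), s=(H, W))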
Comparison of Image Processing Techniques using Random Noise Radar
2014-03-27
Acronyms from the source: UWB, ultra-wideband; EM, electromagnetic; CW, continuous wave; RCS, radar cross section; RFI, radio frequency interference; FFT, fast Fourier transform. ... several factors including radar cross section (RCS), orientation, and material makeup. A single monostatic radar at some position collects only range and ... Chapter 2 provides the theory behind noise radar and SAR imaging; Section 2.1 presents the basic concepts in transmitting and receiving random ...
A comparative study of deep learning models for medical image classification
NASA Astrophysics Data System (ADS)
Dutta, Suvajit; Manideep, B. C. S.; Rai, Shalva; Vijayarajan, V.
2017-11-01
Deep learning (DL) techniques are overtaking traditional neural network approaches where huge datasets and applications requiring complex functions demand increased accuracy at lower time complexity. Neuroscience has already exploited DL techniques and thus serves as an inspiration for researchers exploring machine learning. DL work spans vision, speech recognition, motion planning, and NLP, moving back and forth among fields, and concerns building models that can solve a variety of tasks requiring intelligence and distributed representation. Access to faster CPUs, the introduction of GPUs performing complex vector and matrix computations, agile network connectivity, and enhanced software infrastructures for distributed computing have all strengthened the case for DL methodologies. This paper compares DL procedures with traditional, manually engineered approaches for classifying medical images. Two datasets are used: Diabetic Retinopathy (DR) images and computed tomography (CT) emphysema data; diagnosis on both is a difficult task for standard image classification methods. The initial work used basic image processing along with K-means clustering to identify image severity levels. After determining severity levels, an ANN was applied to the data to obtain a baseline classification result, which was then compared with deep neural networks (DNNs); DNNs performed more efficiently because their multiple hidden layers increase accuracy, but the vanishing-gradient problem in DNNs motivated considering convolutional neural networks (CNNs) as well. The CNNs were found to provide better outcomes than the other learning models for image classification; they are favoured as better visual processing models that successfully classify even noisy data. The work centres on the detection of diabetic retinopathy (loss of vision) and the recognition of CT emphysema, measuring severity levels in both cases, and explores how various machine learning algorithms can be implemented in a supervised approach so as to obtain accurate results with the least complexity possible.
Karimi, Davood; Ward, Rabab K
2016-10-01
Image models are central to all image processing tasks. The great advancements in digital image processing would not have been possible without powerful models which, themselves, have evolved over time. In the past decade, "patch-based" models have emerged as one of the most effective models for natural images, outperforming competing methods in many image processing tasks. These developments have come at a time when greater availability of powerful computational resources and growing concerns over the health risks of ionizing radiation encourage research on image processing algorithms for computed tomography (CT). The goal of this paper is to explain the principles of patch-based methods and to review some of their recent applications in CT. We first review the central concepts in patch-based image processing and explain some of the state-of-the-art algorithms, with a focus on aspects most relevant to CT. We then review some recent applications of patch-based methods in CT. Patch-based methods have already transformed the field of image processing, leading to state-of-the-art results in many applications. More recently, several studies have proposed patch-based algorithms for various image processing tasks in CT, from denoising and restoration to iterative reconstruction. Although these studies have reported good results, the true potential of patch-based methods for CT has not yet been fully appreciated. Patch-based methods can play a central role in image reconstruction and processing for CT and have the potential to lead to substantial improvements in the current state of the art.
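As a concrete, if toy, instance of a patch-based method, here is a brute-force non-local-means denoiser: each pixel becomes a weighted average of pixels whose surrounding patches look similar. Parameter names and the quadruple loop are ours; production patch methods (and the CT variants reviewed) are far more elaborate and far faster.

    import numpy as np

    def patch_nlm_denoise(img, patch=5, search=11, h=10.0):
        # Toy non-local means; O(pixels * search^2 * patch^2), demo only.
        pad, half = patch // 2, search // 2
        padded = np.pad(img.astype(float), pad + half, mode='reflect')
        out = np.zeros(img.shape, dtype=float)
        for y in range(img.shape[0]):
            for x in range(img.shape[1]):
                yc, xc = y + pad + half, x + pad + half
                ref = padded[yc - pad:yc + pad + 1, xc - pad:xc + pad + 1]
                weights = accum = 0.0
                for dy in range(-half, half + 1):
                    for dx in range(-half, half + 1):
                        cand = padded[yc + dy - pad:yc + dy + pad + 1,
                                      xc + dx - pad:xc + dx + pad + 1]
                        # Patch similarity controls the averaging weight.
                        w = np.exp(-np.sum((ref - cand) ** 2)
                                   / (h * h * patch * patch))
                        weights += w
                        accum += w * padded[yc + dy, xc + dx]
                out[y, x] = accum / weights
        return out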
NASA Astrophysics Data System (ADS)
Patil, Venkat P.; Gohatre, Umakant B.
2018-04-01
The technique of obtaining a wider field of view at high resolution, as normally required to develop a panorama of a photographed scene from a sequence of partial, overlapping views, has seen a variety of image stitching methods developed recently. Image stitching is commonly performed in five basic steps: feature detection and extraction, image registration, homography computation, image warping, and blending. This paper reviews some of the existing image feature detection/extraction techniques and image stitching algorithms by categorizing them into several methods. For each category, the basic concepts are first described, and the modifications made to the fundamental concepts by different researchers are then elaborated. The paper also highlights some fundamental techniques for photographic image feature detection and extraction under various illumination conditions. Image stitching is applicable in various fields such as medical imaging, astrophotography, and computer vision. To compare the performance of feature detection techniques, three methods are considered (ORB, SURF, and Hessian-based detection) and the time required for feature detection on the input images is measured. The results conclude that under daylight conditions the ORB algorithm performs better, requiring less time while extracting more features, whereas for images under night-light conditions the SURF detector performs better than the ORB and Hessian detectors.
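The five-step pipeline maps directly onto OpenCV primitives; a minimal sketch for two overlapping images follows. File names are placeholders, the match count is arbitrary, and blending is reduced to a naive overwrite.

    import cv2
    import numpy as np

    img1 = cv2.imread('left.jpg', cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread('right.jpg', cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(2000)                      # 1. feature detection/extraction
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2),
                     key=lambda m: m.distance)      # 2. registration (matching)

    src = np.float32([k2[m.trainIdx].pt for m in matches[:200]]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches[:200]]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # 3. homography

    pano = cv2.warpPerspective(img2, H,
                               (img1.shape[1] * 2, img1.shape[0]))  # 4. warping
    pano[:, :img1.shape[1]] = img1                  # 5. (naive) blending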
Positron Emission Tomography Molecular Imaging in Late-Life Depression
Hirao, Kentaro; Smith, Gwenn S.
2017-01-01
Molecular imaging represents a bridge between basic and clinical neuroscience observations and provides many opportunities for translation and identifying mechanisms that may inform prevention and intervention strategies in late-life depression (LLD). Substantial advances in instrumentation and radiotracer chemistry have resulted in improved sensitivity and spatial resolution and the ability to study in vivo an increasing number of neurotransmitters, neuromodulators, and, importantly, neuropathological processes. Molecular brain imaging studies in LLD will be reviewed, with a primary focus on positron emission tomography. Future directions for the field of molecular imaging in LLD will be discussed, including integrating molecular imaging with genetic, neuropsychiatric, and cognitive outcomes and multimodality neuroimaging. PMID:24394152
Signal and image processing for early detection of coronary artery diseases: A review
NASA Astrophysics Data System (ADS)
Mobssite, Youness; Samir, B. Belhaouari; Mohamad Hani, Ahmed Fadzil B.
2012-09-01
Today, biomedical signal- and image-based detection is a basic step in diagnosing heart diseases, in particular coronary artery disease. The goal of this work is to provide non-invasive early detection of coronary artery disease by analyzing images and ECG signals in a combined approach: features are extracted and then classified, and the severity of the disease is quantified using the B-splines method. The aim is to create a prototype of screening biomedical imaging for the coronary arteries that helps cardiologists decide on the kind of treatment needed to reduce or control the risk of heart attack.
Barrett, Harrison H; Myers, Kyle J; Caucci, Luca
2014-08-17
A fundamental way of describing a photon-limited imaging system is in terms of a Poisson random process in spatial, angular and wavelength variables. The mean of this random process is the spectral radiance. The principle of conservation of radiance then allows a full characterization of the noise in the image (conditional on viewing a specified object). To elucidate these connections, we first review the definitions and basic properties of radiance as defined in terms of geometrical optics, radiology, physical optics and quantum optics. The propagation and conservation laws for radiance in each of these domains are reviewed. Then we distinguish four categories of imaging detectors that all respond in some way to the incident radiance, including the new category of photon-processing detectors. The relation between the radiance and the statistical properties of the detector output is discussed and related to task-based measures of image quality and the information content of a single detected photon.
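The Poisson description can be made concrete in a couple of lines: simulated detector counts are Poisson draws whose mean is the radiance integrated over the exposure (a sketch; names are ours, and any detector gain is folded into `exposure`).

    import numpy as np

    rng = np.random.default_rng(0)

    def photon_limited_image(radiance, exposure):
        # Counts ~ Poisson(mean), mean proportional to radiance * exposure.
        return rng.poisson(radiance * exposure)

    # Per-pixel SNR grows as sqrt(mean counts), the basic Poisson property.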
Wáng, Yì-Xiáng J; Zhang, Qinwei; Li, Xiaojuan; Chen, Weitian; Ahuja, Anil; Yuan, Jing
2015-12-01
T1ρ relaxation time provides a new contrast mechanism that differs from T1- and T2-weighted contrast, and is useful to study low-frequency motional processes and chemical exchange in biological tissues. T1ρ imaging can be performed in the forms of T1ρ-weighted image, T1ρ mapping and T1ρ dispersion. T1ρ imaging, particularly at low spin-lock frequency, is sensitive to B0 and B1 inhomogeneity. Various composite spin-lock pulses have been proposed to alleviate the influence of field inhomogeneity so as to reduce the banding-like spin-lock artifacts. T1ρ imaging can be specific absorption rate (SAR) intensive and time consuming. Efforts to address these issues and speed up data acquisition are being explored to facilitate wider clinical applications. This paper reviews the basic physical principles of T1ρ imaging, as well as its application for cartilage imaging and intervertebral disc imaging. Compared to the more established T2 relaxation time, it has been shown that T1ρ provides more sensitive detection of proteoglycan (PG) loss at early stages of cartilage degeneration. T1ρ has also been shown to provide more sensitive evaluation of annulus fibrosis (AF) degeneration of the discs.
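T1ρ maps of the kind discussed are commonly produced by a pixel-wise mono-exponential fit over spin-lock times, S(TSL) = S0 * exp(-TSL/T1ρ); a sketch under that assumption follows (sequence details and models vary).

    import numpy as np
    from scipy.optimize import curve_fit

    def t1rho_map(images, tsl):
        # images: (n_tsl, H, W) spin-lock-prepared series; tsl: times in ms.
        model = lambda t, s0, t1r: s0 * np.exp(-t / t1r)
        n, H, W = images.shape
        out = np.zeros((H, W))
        for y in range(H):
            for x in range(W):
                try:
                    (s0, t1r), _ = curve_fit(model, tsl, images[:, y, x],
                                             p0=(images[0, y, x], 50.0),
                                             maxfev=200)
                    out[y, x] = t1r
                except RuntimeError:      # fit failed to converge
                    out[y, x] = 0.0
        return out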
Reasoning strategies modulate gender differences in emotion processing.
Markovits, Henry; Trémolière, Bastien; Blanchette, Isabelle
2018-01-01
The dual strategy model of reasoning has proposed that people's reasoning can be understood as a combination of two different ways of processing information related to problem premises: a counterexample strategy that examines information for explicit potential counterexamples, and a statistical strategy that uses associative access to generate a likelihood estimate of putative conclusions. Previous studies have examined this model in the context of basic conditional reasoning tasks. However, the information processing distinction that underlies the dual strategy model can be seen as a basic description of differences in reasoning (similar to that described by many general dual process models of reasoning). In two studies, we examine how these differences in reasoning strategy relate to processing very different information; specifically, we focus on previously observed gender differences in processing negative emotions. Study 1 examined the intensity of emotional reactions to a film clip inducing primarily negative emotions. Study 2 examined the speed at which participants determine the emotional valence of sequences of negative images. In both studies, no gender differences were observed among participants using a counterexample strategy. Among participants using a statistical strategy, females produced significantly stronger emotional reactions than males (in Study 1) and were faster to recognize the valence of negative images (in Study 2). The results show that the processing distinction underlying the dual strategy model of reasoning generalizes to the processing of emotions.
Dynamic chest radiography: flat-panel detector (FPD) based functional X-ray imaging.
Tanaka, Rie
2016-07-01
Dynamic chest radiography is a flat-panel detector (FPD)-based functional X-ray imaging technique, performed as an additional examination in chest radiography. The large field of view (FOV) of FPDs permits real-time observation of the entire lungs and simultaneous right-and-left evaluation of diaphragm kinetics. Most importantly, dynamic chest radiography provides pulmonary ventilation and circulation findings as slight changes in pixel value, even without the use of contrast media; the interpretation is challenging and crucial for a better understanding of pulmonary function. The basic concept was proposed in the 1980s; however, it was not realized until the 2010s because of technical limitations. Dynamic FPDs and advanced digital image processing played a key role in the clinical application of dynamic chest radiography. Pulmonary ventilation and circulation can be quantified and visualized for the diagnosis of pulmonary diseases. Dynamic chest radiography can be deployed as a simple and rapid means of functional imaging in both routine and emergency medicine. Here, we focus on the evaluation of pulmonary ventilation and circulation. This review article describes the basic mechanism of imaging findings according to pulmonary ventilation/circulation physiology, followed by imaging procedures, analysis methods, and the diagnostic performance of dynamic chest radiography.
Nuts and Bolts of CEST MR imaging
Liu, Guanshu; Song, Xiaolei; Chan, Kannie W.Y.
2013-01-01
Chemical Exchange Saturation Transfer (CEST) has emerged as a novel MRI contrast mechanism that is well suited for molecular imaging studies. This new mechanism can be used to detect small amounts of contrast agent through saturation of rapidly exchanging protons on these agents, allowing a wide range of applications. CEST technology has a number of indispensable features, such as the possibility of simultaneous detection of multiple “colors” of agents and detecting changes in their environment (e.g. pH, metabolites, etc) through MR contrast. Currently a large number of new imaging schemes and techniques have been developed to improve the temporal resolution and specificity and to correct the influence of B0 and B1 inhomogeneities. In this review, the techniques developed over the last decade have been summarized with the different imaging strategies and post-processing methods discussed from a practical point of view including describing their relative merits for detecting CEST agents. The goal of the present work is to provide the reader with a fundamental understanding of the techniques developed, and to provide guidance to help refine future applications of this technology. This review is organized into three main sections: Basics of CEST Contrast, Implementation, Post-Processing, and also includes a brief Introduction section and Summary. The Basics of CEST Contrast section contains a description of the relevant background theory for saturation transfer and frequency labeled transfer, and a brief discussion of methods to determine exchange rates. The Implementation section contains a description of the practical considerations in conducting CEST MRI studies, including choice of magnetic field, pulse sequence, saturation pulse, imaging scheme, and strategies to separate MT and CEST. The Post-Processing section contains a description of the typical image processing employed for B0/B1 correction, Z-spectral interpolation, frequency selective detection, and improving CEST contrast maps. PMID:23303716
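As one concrete post-processing step of the kind reviewed, CEST contrast is often quantified as the Z-spectrum asymmetry at the solute offset, MTRasym(Δω) = [S(-Δω) - S(+Δω)]/S0. The sketch below assumes a B0-corrected, S0-normalized Z-spectrum sampled at ascending offsets; names are ours.

    import numpy as np

    def mtr_asym(offsets_ppm, z_spectrum, target_ppm):
        # offsets_ppm: ascending saturation offsets (ppm, water at 0);
        # z_spectrum: S/S0 at those offsets, already B0-corrected.
        s_pos = np.interp(+target_ppm, offsets_ppm, z_spectrum)
        s_neg = np.interp(-target_ppm, offsets_ppm, z_spectrum)
        return s_neg - s_pos   # asymmetry at the solute resonance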
A comprehensive neuropsychological mapping battery for functional magnetic resonance imaging.
Karakas, Sirel; Baran, Zeynel; Ceylan, Arzu Ozkan; Tileylioglu, Emre; Tali, Turgut; Karakas, Hakki Muammer
2013-11-01
Existing batteries for FMRI do not precisely meet the criteria for comprehensive mapping of cognitive functions within minimum data acquisition times using standard scanners and head coils. The goal was to develop a battery of neuropsychological paradigms for FMRI that can also be used in other brain imaging techniques and behavioural research. Participants were 61 healthy, young adult volunteers (48 females and 13 males, mean age: 22.25 ± 3.39 years) from the university community. The battery included 8 paradigms for basic (visual, auditory, sensory-motor, emotional arousal) and complex (language, working memory, inhibition/interference control, learning) cognitive functions. Imaging was performed using standard functional imaging capabilities (1.5-T MR scanner, standard head coil). Structural and functional data series were analysed using Brain Voyager QX2.9 and Statistical Parametric Mapping-8. For basic processes, activation centres for individuals were within a distance of 3-11 mm of the group centres of the target regions and for complex cognitive processes, between 7 mm and 15 mm. Based on fixed-effect and random-effects analyses, the distance between the activation centres was 0-4 mm. There was spatial variability between individual cases; however, as shown by the distances between the centres found with fixed-effect and random-effects analyses, the coordinates for individual cases can be used to represent those of the group. The findings show that the neuropsychological brain mapping battery described here can be used in basic science studies that investigate the relationship of the brain to the mind and also as functional localiser in clinical studies for diagnosis, follow-up and pre-surgical mapping.
Infrared spectroscopic imaging for noninvasive detection of latent fingerprints.
Crane, Nicole J; Bartick, Edward G; Perlman, Rebecca Schwartz; Huffman, Scott
2007-01-01
The capability of Fourier transform infrared (FTIR) spectroscopic imaging to provide detailed images of unprocessed latent fingerprints while also preserving important trace evidence is demonstrated. Unprocessed fingerprints were developed on various porous and nonporous substrates. Data-processing methods used to extract the latent fingerprint ridge pattern from the background material included basic infrared spectroscopic band intensities, addition and subtraction of band intensity measurements, principal components analysis (PCA) and calculation of second derivative band intensities, as well as combinations of these various techniques. Additionally, trace evidence within the fingerprints was recovered and identified.
Image-based 3D reconstruction and virtual environmental walk-through
NASA Astrophysics Data System (ADS)
Sun, Jifeng; Fang, Lixiong; Luo, Ying
2001-09-01
We present a 3D reconstruction method which combines geometry-based modeling, image-based modeling, and rendering techniques. The first component is an interactive geometry modeling method that recovers the basic geometry of the photographed scene. The second component is a model-based stereo algorithm. We discuss the image processing problems and algorithms of walking through virtual space, then design and implement a high-performance multi-threaded walk-through algorithm. The applications range from architectural planning and archaeological reconstruction to virtual environments and cinematic special effects.
Energy conservation using face detection
NASA Astrophysics Data System (ADS)
Deotale, Nilesh T.; Kalbande, Dhananjay R.; Mishra, Akassh A.
2011-10-01
Computerized face detection is concerned with the difficult task of locating human faces in a video signal. It has several applications: face recognition, simultaneous multiple-face processing, biometrics, security, video surveillance, human-computer interfaces, image database management, autofocus in digital cameras, and selecting regions of interest in pan-and-scale photo slideshows. The present paper deals with energy conservation using face detection. Automating the process on a computer requires the use of various image processing techniques. There are various methods that can be used for face detection, such as contour tracking, template matching, controlled background, model-based, motion-based, and color-based methods. Basically, the video of the subject is converted into images, which are selected manually for processing. However, several factors, such as poor illumination, movement of the face, viewpoint-dependent physical appearance, acquisition geometry, imaging conditions, and compression artifacts, make face detection difficult. This paper reports an algorithm for conservation of energy using face detection for various devices: energy can be conserved by detecting the face, reducing the brightness of the complete image, and then adjusting the brightness of the particular area of the image where the face is located using histogram equalization.
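The proposed scheme can be sketched with stock OpenCV components: detect the face with the bundled Haar cascade, dim the whole frame, and restore the face region at full brightness (the paper additionally applies histogram equalization there). File names are placeholders and the dimming factor is an assumption.

    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

    frame = cv2.imread('frame.png')                  # placeholder input
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    dimmed = (frame * 0.3).astype(frame.dtype)       # reduce overall brightness
    for (x, y, w, h) in faces:
        # Keep each detected face region at full brightness.
        dimmed[y:y + h, x:x + w] = frame[y:y + h, x:x + w]
    cv2.imwrite('dimmed.png', dimmed)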
IDIMS/GEOPAK: Users manual for a geophysical data display and analysis system
NASA Technical Reports Server (NTRS)
Libert, J. M.
1982-01-01
The application of an existing image analysis system to the display and analysis of geophysical data is described, the potential for expanding the capabilities of such a system toward more advanced computer analytic and modeling functions is investigated. The major features of the IDIMS (Interactive Display and Image Manipulation System) and its applicability for image type analysis of geophysical data are described. Development of a basic geophysical data processing system to permit the image representation, coloring, interdisplay and comparison of geophysical data sets using existing IDIMS functions and to provide for the production of hard copies of processed images was described. An instruction manual and documentation for the GEOPAK subsystem was produced. A training course for personnel in the use of the IDIMS/GEOPAK was conducted. The effectiveness of the current IDIMS/GEOPAK system for geophysical data analysis was evaluated.
Suzuki, Kazuhiko; Oho, Eisaku
2013-01-01
The quality of a scanning electron microscopy (SEM) image is strongly influenced by noise, a fundamental drawback of the SEM instrument. Complex hysteresis smoothing (CHS) was previously developed for noise removal in SEM images; the noise removal is performed by monitoring and appropriately processing the amplitude of the SEM signal. As it stands, CHS is not widely used, though it has several advantages for SEM; for example, the resolution of an image processed by CHS is essentially equal to that of the original image. In order to find wide application of the CHS method in microscopy, the characteristics of CHS, which until now have not been well clarified, are evaluated properly. Applying the results of this evaluation, the cursor width (CW), the sole processing parameter of CHS, is determined more appropriately using the standard deviation of the noise Nσ. In addition, the disadvantage that CHS cannot remove noise of excessively large amplitude is remedied by a certain postprocessing step. CHS is successfully applicable to SEM images with various noise amplitudes.
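For intuition only, here is a simplified reading of hysteresis smoothing, not the authors' exact CHS algorithm: the output holds its level while the input stays inside a window of width CW and is dragged along otherwise, which is what lets CW be tied to the noise standard deviation Nσ as described above.

    import numpy as np

    def hysteresis_smooth(signal, cw):
        # One forward pass; fluctuations smaller than +/- cw are absorbed.
        out = np.empty(len(signal), dtype=float)
        level = float(signal[0])
        for i, x in enumerate(signal):
            if x > level + cw:        # input escaped above the window
                level = x - cw
            elif x < level - cw:      # input escaped below the window
                level = x + cw
            out[i] = level
        return out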
Efficiency analysis for 3D filtering of multichannel images
NASA Astrophysics Data System (ADS)
Kozhemiakin, Ruslan A.; Rubel, Oleksii; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem
2016-10-01
Modern remote sensing systems basically acquire multichannel images (dual- or multi-polarization, multi- and hyperspectral) in which noise, usually with differing characteristics, is present in all components. If the noise is intensive, it is desirable to remove (suppress) it before applying methods of image classification, interpretation, and information extraction. This can be done in one of two ways: by component-wise or by vectorial (3D) filtering. The second approach has shown higher efficiency when there is essential correlation between multichannel image components, as often happens for multichannel remote sensing data of different origins. Within the class of 3D filtering techniques, there are many possibilities and variations. In this paper, we consider filtering based on the discrete cosine transform (DCT) and pay attention to two aspects of processing. First, we study in detail what changes in DCT coefficient statistics take place for 3D denoising compared to component-wise processing. Second, we analyze how the selection of component images united into a 3D data array influences filtering efficiency, and whether the observed tendencies can be exploited in processing images with a rather large number of channels.
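The 3D-versus-componentwise comparison can be made concrete with a whole-array hard-threshold DCT filter; practical DCT filters work block-wise (e.g. 8x8xC), and the threshold factor and additive-noise assumption below are ours.

    import numpy as np
    from scipy.fft import dctn, idctn

    def dct3d_denoise(cube, sigma, k=2.7):
        # cube: (H, W, C) co-registered channels; sigma: noise std.
        # Zero coefficients below k*sigma, exploiting inter-channel correlation.
        coeffs = dctn(cube, norm='ortho')
        coeffs[np.abs(coeffs) < k * sigma] = 0.0
        return idctn(coeffs, norm='ortho')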
Shaw, S L; Salmon, E D; Quatrano, R S
1995-12-01
In this report, we describe a relatively inexpensive method for acquiring, storing and processing light microscope images that combines the advantages of video technology with the powerful medium now termed digital photography. Digital photography refers to the recording of images as digital files that are stored, manipulated and displayed using a computer. This report details the use of a gated video-rate charge-coupled device (CCD) camera and a frame grabber board for capturing 256 gray-level digital images from the light microscope. This camera gives high-resolution bright-field, phase contrast and differential interference contrast (DIC) images but, also, with gated on-chip integration, has the capability to record low-light level fluorescent images. The basic components of the digital photography system are described, and examples are presented of fluorescence and bright-field micrographs. Digital processing of images to remove noise, to enhance contrast and to prepare figures for printing is discussed.
Thin layer imaging process for microlithography using radiation at strongly attenuated wavelengths
Wheeler, David R.
2004-01-06
A method for patterning of resist surfaces which is particularly advantageous for systems having low photon flux and highly energetic, strongly attenuated radiation. A thin imaging layer is created with uniform silicon distribution in a bilayer format. An image is formed by exposing selected regions of the silylated imaging layer to radiation. The radiation incident upon the silylated resist material results in acid generation which either catalyzes cleavage of Si--O bonds to produce moieties that are volatile enough to be driven off in a post exposure bake step or produces a resist material where the exposed portions of the imaging layer are soluble in a basic solution, thereby desilylating the exposed areas of the imaging layer. The process is self limiting due to the limited quantity of silyl groups within each region of the pattern. Following the post exposure bake step, an etching step, generally an oxygen plasma etch, removes the resist material from the de-silylated areas of the imaging layer.
CHAMP (Camera, Handlens, and Microscope Probe)
NASA Technical Reports Server (NTRS)
Mungas, Greg S.; Boynton, John E.; Balzer, Mark A.; Beegle, Luther; Sobel, Harold R.; Fisher, Ted; Klein, Dan; Deans, Matthew; Lee, Pascal; Sepulveda, Cesar A.
2005-01-01
CHAMP (Camera, Handlens And Microscope Probe) is a novel field microscope capable of color imaging with continuously variable spatial resolution, from infinity imaging down to diffraction-limited microscopy (3 micron/pixel). As a robotic arm-mounted imager, CHAMP supports stereo imaging with variable baselines, can continuously image targets at increasing magnification during an arm approach, can provide precision rangefinding estimates to targets, and can accommodate microscopic imaging of rough surfaces through an image filtering process called z-stacking. CHAMP was originally developed through the Mars Instrument Development Program (MIDP) in support of robotic field investigations, but may also find application in new areas such as robotic in-orbit servicing and maintenance operations associated with spacecraft and human operations. We overview CHAMP's instrument performance and basic design considerations below.
Retrieval of land cover information under thin fog in Landsat TM image
NASA Astrophysics Data System (ADS)
Wei, Yuchun
2008-04-01
Thin fog, which often appears in remote sensing images of subtropical climate regions, results in low image quality and poor image mapping. It is therefore necessary to develop image processing methods to retrieve land cover information under thin fog. In this paper, a Landsat TM image near Taihu Lake, in the subtropical climate zone of China, is used as an example, and a workflow and method for retrieving land cover information under thin fog are built on ENVI software and a single TM image. The basic steps cover three parts: 1) isolating the thin fog area in the image according to the spectral differences between bands; 2) retrieving the visible-band information of different land cover types under thin fog from the near-infrared bands, according to the relationships between near-infrared and visible bands of those land cover types in the fog-free area; 3) image post-processing. The results show that the method is simple and suitable, and can be used to improve the quality of TM image mapping effectively.
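Step 2 amounts to a per-class regression from a NIR band to a visible band; a minimal linear version is sketched below (the paper's exact relationships may differ, and names are ours). Fit on fog-free pixels of one land-cover type, then predict that type's visible reflectance under thin fog from its fog-penetrating NIR values.

    import numpy as np

    def predict_visible_from_nir(vis_clear, nir_clear, nir_foggy):
        # Linear fit on fog-free pixels of one land-cover class ...
        slope, intercept = np.polyfit(nir_clear.ravel(), vis_clear.ravel(), 1)
        # ... applied to the same class inside the thin-fog mask.
        return slope * nir_foggy + intercept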
NASA Astrophysics Data System (ADS)
Protsyuk, Yu. I.; Andruk, V. N.; Kazantseva, L. V.
The paper discusses and illustrates the steps of basic processing of digitized images of astronomical negatives. Software for obtaining rectangular coordinates and photometric values of objects on photographic plates was created in the LINUX/MIDAS/ROMAFOT environment. The program can automatically process a specified number of files in FITS format with sizes up to 20000 x 20000 pixels. Other programs were written in FORTRAN and PASCAL, with the ability to work under LINUX or WINDOWS. They were used for: identification of stars; separation and exclusion of diffraction satellites and of double and triple exposures; elimination of image defects; and reduction to the equatorial coordinates and magnitudes of reference catalogs.
A Electro-Optical Image Algebra Processing System for Automatic Target Recognition
NASA Astrophysics Data System (ADS)
Coffield, Patrick Cyrus
The proposed electro-optical image algebra processing system is designed specifically for image processing and other related computations. The design is a hybridization of an optical correlator and a massively parallel, single-instruction multiple-data processor. The architecture of the design consists of three tightly coupled components: a spatial configuration processor (the optical analog portion), a weighting processor (digital), and an accumulation processor (digital). The systolic flow of data and image processing operations is directed by a control buffer and pipelined to each of the three processing components. The image processing operations are defined in terms of basic operations of an image algebra developed by the University of Florida. The algebra is capable of describing all common image-to-image transformations. The merit of this architectural design is how it implements the natural decomposition of algebraic functions into spatially distributed, pointwise operations. The effect of this particular decomposition allows convolution-type operations to be computed strictly as a function of the number of elements in the template (mask, filter, etc.) instead of the number of picture elements in the image. Thus, a substantial increase in throughput is realized. The implementation of the proposed design may be accomplished in many ways. While a hybrid electro-optical implementation is of primary interest, the benefits and design issues of an all-digital implementation are also discussed. The potential utility of this architectural design lies in its ability to control a large variety of the arithmetic and logic operations of the image algebra's generalized matrix product. The generalized matrix product is the most powerful fundamental operation in the algebra, thus allowing a wide range of applications.
Special Software for Planetary Image Processing and Research
NASA Astrophysics Data System (ADS)
Zubarev, A. E.; Nadezhdina, I. E.; Kozlova, N. A.; Brusnikin, E. S.; Karachevtseva, I. P.
2016-06-01
Special modules for photogrammetric processing of remote sensing data were developed that make it possible to effectively organize and optimize planetary studies. The commercial software package PHOTOMOD™ is used as the base application. Special modules were created to perform various types of data processing: calculation of preliminary navigation parameters, calculation of shape parameters of a celestial body, global-view image orthorectification, and estimation of Sun illumination and Earth visibility from the planetary surface. Different types of data were used for photogrammetric processing, including images of the Moon, Mars, Mercury, Phobos, the Galilean satellites, and Enceladus obtained by frame or push-broom cameras. We used modern planetary data and images taken over the years from orbital flight paths, with various illumination and resolution, as well as images obtained by planetary rovers from the surface. Planetary image processing is a complex task, and it can usually take from a few months to years. We present an efficient pipeline procedure that provides the possibility of obtaining different data products and supports the long way from planetary images to celestial body maps. The data obtained - new three-dimensional control point networks, elevation models, orthomosaics - supported accurate map production: a new Phobos atlas (Karachevtseva et al., 2015) and various thematic maps derived from studies of the planetary surface (Karachevtseva et al., 2016a).
Dong, J; Hayakawa, Y; Kober, C
2014-01-01
When metallic prosthetic appliances and dental fillings are present in the oral cavity, metal-induced streak artefacts are unavoidable in CT images. The aim of this study was to develop a method for artefact reduction using statistical reconstruction on multidetector-row CT images. Adjacent CT images often depict similar anatomical structures; therefore, images with weak artefacts were first reconstructed using projection data from an artefact-free image in a neighbouring thin slice. Images with moderate and strong artefacts were then processed in sequence by successive iterative restoration, where the projection data were generated from the adjacent reconstructed slice. First, the basic maximum-likelihood expectation-maximization (ML-EM) algorithm was applied. Next, the ordered-subset expectation-maximization (OS-EM) algorithm was examined. Alternatively, a small region of interest (ROI) was designated. Finally, a general-purpose graphics processing unit (GPGPU) was applied in both situations. The algorithms reduced the metal-induced streak artefacts on multidetector-row CT images when the sequential processing method was applied. OS-EM and the small ROI reduced the processing duration without apparent detriment, and the GPGPU delivered high performance. In summary, a statistical reconstruction method was applied for streak artefact reduction; the alternative algorithms were effective, and both software and hardware tools (OS-EM, small ROI, GPGPU) achieved fast artefact correction.
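The core ML-EM update used above can be sketched in dense-matrix form for clarity (real CT uses on-the-fly projectors and, in this paper, projections seeded from a neighbouring artefact-free slice; the sketch and its names are ours).

    import numpy as np

    def mlem(A, y, n_iter=50):
        # ML-EM for y ~ Poisson(A @ x).
        # A: (n_rays, n_pixels) system matrix; y: measured projections.
        x = np.ones(A.shape[1])
        sens = A.sum(axis=0)                   # sensitivity image, sum_i a_ij
        for _ in range(n_iter):
            proj = A @ x                       # forward projection
            ratio = y / np.maximum(proj, 1e-12)
            x *= (A.T @ ratio) / np.maximum(sens, 1e-12)  # multiplicative update
        return x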
Medical image registration: basic science and clinical implications.
Imran, Muhammad Babar; Meo, Sultan Ayoub; Yousuf, Mohammad; Othman, Saleh; Shahid, Abubakar
2010-01-01
Image registration is the process of aligning two or more images so that corresponding features can be related objectively. Integration of corresponding and complementary information from various images has become an important area of computation in medical imaging. Merging different images of the same patient, taken by different modalities or acquired at different times, is quite useful for interpreting lower-resolution functional images, such as those provided by nuclear medicine, and for determining the spatial relationships of structures seen in different modalities. This helps in planning surgery and longitudinal follow-up. The aim of this article is to introduce image registration to all those working in the medical sciences in general, and to medical doctors in particular, and to indicate how and where this specialty is moving to provide better health care services.
Segmentation of human brain using structural MRI.
Helms, Gunther
2016-04-01
Segmentation of human brain using structural MRI is a key step of processing in imaging neuroscience. The methods have undergone a rapid development in the past two decades and are now widely available. This non-technical review aims at providing an overview and basic understanding of the most common software. Starting with the basis of structural MRI contrast in brain and imaging protocols, the concepts of voxel-based and surface-based segmentation are discussed. Special emphasis is given to the typical contrast features and morphological constraints of cortical and sub-cortical grey matter. In addition to the use for voxel-based morphometry, basic applications in quantitative MRI, cortical thickness estimations, and atrophy measurements as well as assignment of cortical regions and deep brain nuclei are briefly discussed. Finally, some fields for clinical applications are given.
Effects of aging on perception of motion
NASA Astrophysics Data System (ADS)
Kaur, Manpreet; Wilder, Joseph; Hung, George; Julesz, Bela
1997-09-01
Driving requires two basic visual components: 'visual sensory function' and 'higher order skills.' Among the elderly, it has been observed that when attention must be divided among multiple objects, attentional skills and relational processes are markedly impaired, along with basic visual sensory function. A high-frame-rate imaging system was developed to assess elderly drivers' ability to locate and distinguish computer-generated images of vehicles and to determine their direction of motion in a simulated intersection. Preliminary experiments were performed at varying target speeds and angular displacements to study the effect of these parameters on motion perception. Results for subjects in four age groups, ranging from the mid-twenties to the mid-sixties, show significantly better performance for the younger subjects than for the older ones.
Baroux, Célia; Schubert, Veit
2018-01-01
In situ nucleus and chromatin analyses rely on microscopy imaging that benefits from versatile, efficient fluorescent probes and proteins for static or live imaging. Yet the broad choice in imaging instruments offered to the user poses orientation problems. Which imaging instrument should be used for which purpose? What are the main caveats and what are the considerations to best exploit each instrument's ability to obtain informative and high-quality images? How to infer quantitative information on chromatin or nuclear organization from microscopy images? In this review, we present an overview of common, fluorescence-based microscopy systems and discuss recently developed super-resolution microscopy systems, which are able to bridge the resolution gap between common fluorescence microscopy and electron microscopy. We briefly present their basic principles and discuss their possible applications in the field, while providing experience-based recommendations to guide the user toward best-possible imaging. In addition to raw data acquisition methods, we discuss commercial and noncommercial processing tools required for optimal image presentation and signal evaluation in two and three dimensions.
Intensity-hue-saturation-based image fusion using iterative linear regression
NASA Astrophysics Data System (ADS)
Cetin, Mufit; Tepecik, Abdulkadir
2016-10-01
The image fusion process basically produces a high-resolution image by combining the superior features of a low-spatial-resolution multispectral image and a high-resolution panchromatic image. Despite its common usage, owing to its fast computation and high sharpening ability, the intensity-hue-saturation (IHS) fusion method may cause color distortions, especially when large gray-value differences exist between the images to be combined. This paper proposes a spatially adaptive IHS (SA-IHS) technique to avoid these distortions by automatically adjusting the exact spatial information to be injected into the multispectral image during the fusion process. The SA-IHS method essentially suppresses the effects of those pixels that cause spectral distortions by assigning them weaker weights, avoiding a large number of redundancies in the fused image. The experimental database consists of IKONOS images, and the experimental results, both visual and statistical, demonstrate the improvement of the proposed algorithm compared with several other IHS-like methods such as IHS, generalized IHS, fast IHS, and generalized adaptive IHS.
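For comparison with the spatially adaptive variant, plain IHS-style injection adds the same pan-minus-intensity detail to every band, which is exactly where the colour distortions originate. A minimal sketch (ours, float arrays assumed):

    import numpy as np

    def ihs_fusion(ms, pan):
        # ms: (H, W, 3) upsampled multispectral; pan: (H, W) panchromatic.
        intensity = ms.mean(axis=2)            # I component of the IHS model
        detail = pan - intensity               # spatial detail to inject
        return ms + detail[:, :, None]         # equal injection into R, G, B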
Portable laser speckle perfusion imaging system based on digital signal processor.
Tang, Xuejun; Feng, Nengyun; Sun, Xiaoli; Li, Pengcheng; Luo, Qingming
2010-12-01
The ability to monitor blood flow in vivo is of major importance in clinical diagnosis and in basic life-science research. As a noninvasive, full-field technique with no need for scanning, laser speckle contrast imaging (LSCI) is widely used to study blood flow with high spatial and temporal resolution. Current LSCI systems are based on personal computers for image processing and are large, which potentially limits widespread clinical utility. A portable laser speckle contrast imaging system that does not compromise processing efficiency is crucial for clinical diagnosis. However, the processing of laser speckle contrast images is time-consuming due to the heavy computation required for enormous high-resolution image data. To address this problem, a portable laser speckle perfusion imaging system based on a digital signal processor (DSP), together with an algorithm suitable for the DSP, is described. With a highly integrated DSP and this algorithm, we have markedly reduced the size, weight, and energy consumption of the system while preserving high processing speed. In vivo experiments demonstrate that the portable system can obtain blood flow images at 25 frames per second with a resolution of 640 × 480 pixels. The portable and lightweight features make it adaptable to a wide variety of application areas, such as research laboratories, operating rooms, ambulances, and even disaster sites.
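The per-frame computation such a DSP implementation must sustain at video rate is the windowed speckle contrast K = σ/μ; a NumPy sketch of the standard spatial estimator follows (window size is an assumption).

    import numpy as np
    from scipy.ndimage import uniform_filter

    def speckle_contrast(raw, win=7):
        # Spatial speckle contrast: local std / local mean over a sliding window.
        # Lower K corresponds to faster flow (more blurring of the speckle).
        raw = raw.astype(np.float64)
        mean = uniform_filter(raw, win)
        mean_sq = uniform_filter(raw * raw, win)
        var = np.maximum(mean_sq - mean * mean, 0.0)
        return np.sqrt(var) / np.maximum(mean, 1e-12)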
NASA Technical Reports Server (NTRS)
Sowers, J.; Mehrotra, R.; Sethi, I. K.
1989-01-01
A method for extracting road boundaries from a monochrome image of a visual road scene is presented. The approach is based on statistical information about the intensity levels present in the image, together with geometric constraints on the road. Results are presented, and the technique is compared with others. Its major advantages are its ability to process the image in a single pass, to limit the area searched in the image using only knowledge of the road geometry and previously located boundary information, and to adjust dynamically for inconsistencies in the located boundary information, all of which increases the technique's efficiency.
Images as embedding maps and minimal surfaces: Movies, color, and volumetric medical images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kimmel, R.; Malladi, R.; Sochen, N.
A general geometrical framework for image processing is presented. The authors consider intensity images as surfaces in the (x,I) space. The image is thereby a two-dimensional surface in three-dimensional space for gray-level images. The new formulation unifies many classical schemes, algorithms, and measures via choices of parameters in a "master" geometrical measure. More important, it is a simple and efficient tool for the design of natural schemes for image enhancement, segmentation, and scale space. Here the authors give the basic motivation and apply the scheme to enhance images. They present the concept of an image as a surface in dimensions higher than the three-dimensional intuitive space. This will help them handle movies, color, and volumetric medical images.
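To make the surface view concrete: for a gray-level image seen as the graph surface (x, y, I(x, y)), the induced metric g = [[1+Ix^2, Ix*Iy], [Ix*Iy, 1+Iy^2]] gives the area element sqrt(det g) = sqrt(1 + Ix^2 + Iy^2), the kind of geometric measure such a framework builds on. A small sketch (standard differential geometry, not the paper's code):

    import numpy as np

    def area_element(I):
        """Area element of the graph surface (x, y, I(x, y)): sqrt(1 + Ix^2 + Iy^2)."""
        Iy, Ix = np.gradient(I.astype(np.float64))  # gradients along axis 0, axis 1
        return np.sqrt(1.0 + Ix**2 + Iy**2)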
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trease, Lynn L.; Trease, Harold E.; Fowler, John
2007-03-15
One of the critical steps toward performing computational biology simulations, using mesh-based integration methods, is using topologically faithful geometry derived from experimental digital image data as the basis for generating the computational meshes. Digital image data representations contain both the topology of the geometric features and experimental field data distributions. The geometric features that need to be captured from the digital image data are three-dimensional; therefore, the process and tools we have developed work with volumetric image data represented as data-cubes. This allows us to take advantage of 2D curvature information during the segmentation and feature extraction process. The process is basically: 1) segmenting to isolate and enhance the contrast of the features that we wish to extract and reconstruct, 2) extracting the geometry of the features with an isosurfacing technique, and 3) building the computational mesh using the extracted feature geometry. "Quantitative" image reconstruction and feature extraction is done for the purpose of generating computational meshes, not just for producing graphics "screen"-quality images. For example, the surface geometry that we extract must represent a closed, water-tight surface.
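A compact sketch of steps 1 and 2 of that pipeline (Gaussian smoothing as a crude stand-in for the segmentation/contrast step, and marching cubes as a generic isosurfacer; the authors' actual tools are not specified here):

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage.measure import marching_cubes

    def volume_to_surface(vol, iso_level, smooth_sigma=1.0):
        """Extract a triangle-mesh isosurface from a volumetric data-cube."""
        v = gaussian_filter(vol.astype(np.float64), smooth_sigma)   # step 1 (crude)
        verts, faces, normals, values = marching_cubes(v, level=iso_level)  # step 2
        return verts, faces   # step 3 (meshing) would consume these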
A study of image quality for radar image processing. [synthetic aperture radar imagery
NASA Technical Reports Server (NTRS)
King, R. W.; Kaupp, V. H.; Waite, W. P.; Macdonald, H. C.
1982-01-01
Methods developed for image quality metrics are reviewed with focus on basic interpretation or recognition elements including: tone or color; shape; pattern; size; shadow; texture; site; association or context; and resolution. Seven metrics are believed to show promise as a way of characterizing the quality of an image: (1) the dynamic range of intensities in the displayed image; (2) the system signal-to-noise ratio; (3) the system spatial bandwidth or bandpass; (4) the system resolution or acutance; (5) the normalized-mean-square-error as a measure of geometric fidelity; (6) the perceptual mean square error; and (7) the radar threshold quality factor. Selective levels of degradation are being applied to simulated synthetic radar images to test the validity of these metrics.
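Several of the listed metrics are directly computable from image arrays; a sketch of three of them (dynamic range, normalized mean-square error, and an SNR derived from it), with conventions chosen here purely for illustration:

    import numpy as np

    def dynamic_range(img):
        return float(img.max() - img.min())             # metric (1)

    def nmse(reference, test):
        """Normalized mean-square error as a geometric-fidelity measure (5)."""
        ref = reference.astype(np.float64)
        err = ref - test.astype(np.float64)
        return float((err**2).mean() / (ref**2).mean())

    def snr_db(reference, test):
        return -10.0 * np.log10(nmse(reference, test))  # one SNR convention (2)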
Fundamentals of nuclear medicine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alazraki, N.P.; Mishkin, F.S.
1988-01-01
The book begins with basic science and statistics relevant to nuclear medicine, and specific organ systems are addressed in separate chapters. A section of the text also covers imaging of groups of disease processes (eg, trauma, cancer). The authors present a comparison between nuclear medicine techniques and other diagnostic imaging studies. A table is given which comments on sensitivities and specificities of common nuclear medicine studies. The sensitivities and specificities are categorized as very high, high, moderate, and so forth.
Theoretical foundations of spatially-variant mathematical morphology part ii: gray-level images.
Bouaynaya, Nidhal; Schonfeld, Dan
2008-05-01
In this paper, we develop a spatially-variant (SV) mathematical morphology theory for gray-level signals and images in the Euclidean space. The proposed theory preserves the geometrical concept of the structuring function, which provides the foundation of classical morphology and is essential in signal and image processing applications. We define the basic SV gray-level morphological operators (i.e., SV gray-level erosion, dilation, opening, and closing) and investigate their properties. We demonstrate the ubiquity of SV gray-level morphological systems by deriving a kernel representation for a large class of systems, called V-systems, in terms of the basic SV gray-level morphological operators. A V-system is defined to be a gray-level operator, which is invariant under gray-level (vertical) translations. Particular attention is focused on the class of SV flat gray-level operators. The kernel representation for increasing V-systems is a generalization of Maragos' kernel representation for increasing and translation-invariant function-processing systems. A representation of V-systems in terms of their kernel elements is established for increasing and upper-semi-continuous V-systems. This representation unifies a large class of spatially-variant linear and non-linear systems under the same mathematical framework. Finally, simulation results show the potential power of the general theory of gray-level spatially-variant mathematical morphology in several image analysis and computer vision applications.
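A direct (unoptimized) sketch of the basic SV gray-level erosion, (f ⊖ b)(x) = min over y of [f(x+y) − b_x(y)], where the structuring function b_x may differ at every pixel; the callback interface and border convention are illustrative assumptions:

    import numpy as np

    def sv_erosion(f, struct_fn):
        """struct_fn(i, j) -> (offsets, weights): the local structuring function b_x."""
        h, w = f.shape
        out = np.empty((h, w), dtype=np.float64)
        for i in range(h):
            for j in range(w):
                offsets, weights = struct_fn(i, j)
                vals = [f[i + di, j + dj] - b
                        for (di, dj), b in zip(offsets, weights)
                        if 0 <= i + di < h and 0 <= j + dj < w]
                out[i, j] = min(vals) if vals else f[i, j]  # border convention
        return out

The SV dilation is the dual, a maximum of f(x−y) + b_x(y) over the local structuring function.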
The infection algorithm: an artificial epidemic approach for dense stereo correspondence.
Olague, Gustavo; Fernández, Francisco; Pérez, Cynthia B; Lutton, Evelyne
2006-01-01
We present a new bio-inspired approach to the problem of stereo image matching. This approach is based on an artificial epidemic process, which we call the infection algorithm. The problem at hand is a basic one in computer vision for 3D scene reconstruction; it has many complex aspects and is known to be extremely difficult. The aim is to match the contents of two images in order to obtain 3D information that allows the generation of simulated projections from a viewpoint different from those of the initial photographs. This process is known as view synthesis. The algorithm we propose exploits the image contents in order to produce only the necessary 3D depth information, while saving computational time. It is based on a set of distributed rules, which propagate like an artificial epidemic over the images. Experiments on a pair of real images are presented, and realistic reprojected images have been generated.
Image processing for improved eye-tracking accuracy
NASA Technical Reports Server (NTRS)
Mulligan, J. B.; Watson, A. B. (Principal Investigator)
1997-01-01
Video cameras provide a simple, noninvasive method for monitoring a subject's eye movements. An important concept is that of the resolution of the system, which is the smallest eye movement that can be reliably detected. While hardware systems are available that estimate direction of gaze in real-time from a video image of the pupil, such systems must limit image processing to attain real-time performance and are limited to a resolution of about 10 arc minutes. Two ways to improve resolution are discussed. The first is to improve the image processing algorithms that are used to derive an estimate. Off-line analysis of the data can improve resolution by at least one order of magnitude for images of the pupil. A second avenue by which to improve resolution is to increase the optical gain of the imaging setup (i.e., the amount of image motion produced by a given eye rotation). Ophthalmoscopic imaging of retinal blood vessels provides increased optical gain and improved immunity to small head movements but requires a highly sensitive camera. The large number of images involved in a typical experiment imposes great demands on the storage, handling, and processing of data. A major bottleneck had been the real-time digitization and storage of large amounts of video imagery, but recent developments in video compression hardware have made this problem tractable at a reasonable cost. Images of both the retina and the pupil can be analyzed successfully using a basic toolbox of image-processing routines (filtering, correlation, thresholding, etc.), which are, for the most part, well suited to implementation on vectorizing supercomputers.
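One elementary off-line refinement of pupil tracking is sub-pixel centroiding of the dark pupil region; a minimal sketch (the threshold choice and intensity weighting are illustrative, not the authors' algorithms):

    import numpy as np

    def pupil_center(frame, thresh):
        """Intensity-weighted centroid of pixels darker than thresh (sub-pixel)."""
        mask = frame < thresh                    # pupil: darkest image region
        ys, xs = np.nonzero(mask)
        w = (thresh - frame[mask]).astype(np.float64)
        return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()  # (cx, cy)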
Notes to Parents - When Your Child Has Undergone Amputation.
ERIC Educational Resources Information Center
Pierson, Margaret Hauser
Designed to provide parents with basic information about the physical and emotional aspects of amputation, the booklet gives information about the grief response, body image, phantom limb sensation, stump care, and the prosthesis. The section on the grief process describes normal reactions to loss: denial, anger, bargaining, depression, and…
Basic physics of ultrasound imaging.
Aldrich, John E
2007-05-01
The appearance of ultrasound images depends critically on the physical interactions of sound with the tissues in the body. The basic principles of ultrasound imaging and the physical reasons for many common artifacts are described.
Force/torque and tactile sensors for sensor-based manipulator control
NASA Technical Reports Server (NTRS)
Vanbrussel, H.; Belieen, H.; Bao, Chao-Ying
1989-01-01
The autonomy of manipulators, in space and in industrial environments, can be dramatically enhanced by the use of force/torque and tactile sensors. The development and future use of a six-component force/torque sensor for the Hermes Robot Arm (HERA) Basic End-Effector (BEE) is discussed. Then a multifunctional gripper system based on tactile sensors is described. The basic transducing element of the sensor is a sheet of pressure-sensitive polymer. Tactile image processing algorithms for slip detection, object position estimation, and object recognition are described.
PET Imaging: Basics and New Trends
NASA Astrophysics Data System (ADS)
Dahlbom, Magnus
Positron Emission Tomography or PET is a noninvasive molecular imaging method used both in research to study biology and disease, and clinically as a routine diagnostic imaging tool. In PET imaging, the subject is injected with a tracer labeled with a positron-emitting isotope and is then placed in a scanner to localize the radioactive tracer in the body. The localization of the tracer utilizes the unique decay characteristics of isotopes decaying by positron emission. In the PET scanner, a large number of scintillation detectors use coincidence detection of the annihilation radiation that is emitted as a result of the positron decay. By collecting a large number of these coincidence events, together with tomographic image reconstruction methods, the 3-D distribution of the radioactive tracer in the body can be reconstructed. Depending on the type of tracer used, the distribution will reflect a particular biological process, such as glucose metabolism when fluoro-deoxyglucose is used. PET has evolved from a relatively inefficient single-slice imaging system with relatively poor spatial resolution to an efficient, high-resolution imaging modality which can acquire a whole-body scan in a few minutes. This chapter will describe the basic physics and instrumentation used in PET. The various corrections that are necessary to apply to the acquired data in order to produce quantitative images are also described. Finally, some of the latest trends in instrumentation development are also discussed.
Medical imaging and registration in computer assisted surgery.
Simon, D A; Lavallée, S
1998-09-01
Imaging, sensing, and computing technologies that are being introduced to aid in the planning and execution of surgical procedures are providing orthopaedic surgeons with a powerful new set of tools for improving clinical accuracy, reliability, and patient outcomes while reducing costs and operating times. Current computer assisted surgery systems typically include a measurement process for collecting patient specific medical data, a decision making process for generating a surgical plan, a registration process for aligning the surgical plan to the patient, and an action process for accurately achieving the goals specified in the plan. Some of the key concepts in computer assisted surgery applied to orthopaedics, with a focus on the basic framework and underlying technologies, are outlined. In addition, technical challenges and future trends in the field are discussed.
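The registration process mentioned above is often solved, in its simplest point-based rigid form, with the SVD (Kabsch) method; a minimal sketch aligning plan-space fiducial points P to patient-space measurements Q (a generic technique, not any specific system's implementation):

    import numpy as np

    def rigid_register(P, Q):
        """Least-squares rotation R and translation t with Q ~ R @ P + t (N x 3)."""
        cP, cQ = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cP).T @ (Q - cQ)                # cross-covariance matrix
        U, S, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return R, cQ - R @ cP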
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cooper, M.D.; Beck, R.N.
1988-06-01
This document describes several years of research to improve PET imaging and diagnostic techniques in man. This program addresses the problems involving the basic science and technology underlying the physical and conceptual tools of radioactive tracer methodology as they relate to the measurement of structural and functional parameters of physiologic importance in health and disease. The principal tool is quantitative radionuclide imaging. The overall objective of this program is to further the development and transfer of radiotracer methodology from basic theory to routine clinical practice in order that individual patients and society as a whole will receive the maximum net benefit from the new knowledge gained. The focus of the research is on the development of new instruments and radiopharmaceuticals, and the evaluation of these through the phase of clinical feasibility. The reports in the study were processed separately for the data bases. (TEM)
Image retrieval and processing system version 2.0 development work
NASA Technical Reports Server (NTRS)
Slavney, Susan H.; Guinness, Edward A.
1991-01-01
The Image Retrieval and Processing System (IRPS) is a software package developed at Washington University and used by the NASA Regional Planetary Image Facilities (RPIF's). The IRPS combines data base management and image processing components to allow the user to examine catalogs of image data, locate the data of interest, and perform radiometric and geometric calibration of the data in preparation for analysis. Version 1.0 of IRPS was completed in Aug. 1989 and was installed at several RPIF's. Other RPIF's use remote logins via NASA Science Internet to access IRPS at Washington University. Work was begun on designing and populating a catalog of Magellan image products that will be part of IRPS Version 2.0, planned for release by the end of calendar year 1991. With this catalog, a user will be able to search by orbit and by location for Magellan Basic Image Data Records (BIDR's), Mosaicked Image Data Records (MIDR's), and Altimetry-Radiometry Composite Data Records (ARCDR's). The catalog will include the Magellan CD-ROM volume, directory, and file name for each data product. The image processing component of IRPS is based on the Planetary Image Cartography Software (PICS) developed by the U.S. Geological Survey, Flagstaff, Arizona. To augment PICS capabilities, a set of image processing programs were developed that are compatible with PICS-format images. This software includes general-purpose functions that PICS does not have, analysis and utility programs for specific data sets, and programs from other sources that were modified to work with PICS images. Some of the software will be integrated into the Version 2.0 release of IRPS. A table is presented that lists the programs with a brief functional description of each.
1982-02-01
[Scanned-report text largely unrecoverable. Surviving fragments: a table of contents listing Appendix D, "Basic Processing", and Appendix E, "Simulation of Data"; the abstract refers to previously developed equipment and an on-board data processing system, and to full-scale ship trials described as the first in history with the objective of directly ...]
Characterization of fission gas bubbles in irradiated U-10Mo fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casella, Andrew M.; Burkes, Douglas E.; MacFarlan, Paul J.
2017-09-01
Irradiated U-10Mo fuel samples were prepared with traditional mechanical potting and polishing methods within a hot cell. They were then removed and imaged with an SEM located outside the hot cell. The images were then processed with basic imaging techniques from three separate software packages. The results were compared, and a baseline method for characterizing fission gas bubbles in the samples is proposed. It is hoped that, through adoption of or comparison to this baseline method, sample characterization can be standardized to some degree across the field of post-irradiation examination of metal fuels.
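A baseline of the kind the abstract proposes typically reduces to threshold, label, measure; a minimal sketch (the dark-bubble assumption and the fixed threshold are illustrative, not the proposed baseline itself):

    import numpy as np
    from scipy import ndimage

    def bubble_stats(sem_img, thresh):
        """Count fission-gas bubbles and return their pixel areas."""
        mask = sem_img < thresh                          # bubbles assumed dark
        labels, n = ndimage.label(mask)                  # connected components
        areas = ndimage.sum(mask, labels, index=range(1, n + 1))
        return n, np.asarray(areas)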
Preparing images for publication: part 2.
Bengel, Wolfgang; Devigus, Alessandro
2006-08-01
The transition from conventional to digital photography presents many advantages for authors and photographers in the field of dentistry, but also many complexities and potential problems. No uniform procedures for authors and publishers exist at present for producing high-quality dental photographs. This two-part article aims to provide guidelines for preparing images for publication and improving communication between these two parties. Part 1 provided information about basic color principles, factors that can affect color perception, and digital color management. Part 2 describes the camera setup, discusses how to take a photograph suitable for publication, and outlines steps for the image editing process.
Capacitive micromachined ultrasonic transducers for medical imaging and therapy.
Khuri-Yakub, Butrus T; Oralkan, Omer
2011-05-01
Capacitive micromachined ultrasonic transducers (CMUTs) have been subject to extensive research for the last two decades. Although they were initially developed for air-coupled applications, today their main application space is medical imaging and therapy. This paper first presents a brief description of CMUTs, their basic structure, and operating principles. Our progression of developing several generations of fabrication processes is discussed with an emphasis on the advantages and disadvantages of each process. Monolithic and hybrid approaches for integrating CMUTs with supporting integrated circuits are surveyed. Several prototype transducer arrays with integrated frontend electronic circuits we developed and their use for 2-D and 3-D, anatomical and functional imaging, and ablative therapies are described. The presented results prove the CMUT as a MEMS technology for many medical diagnostic and therapeutic applications.
Image processing and machine learning in the morphological analysis of blood cells.
Rodellar, J; Alférez, S; Acevedo, A; Molina, A; Merino, A
2018-05-01
This review focuses on how image processing and machine learning can be useful for the morphological characterization and automatic recognition of cell images captured from peripheral blood smears. The basics of the 3 core elements (segmentation, quantitative features, and classification) are outlined, and recent literature is discussed. Although red blood cells are a significant part of this context, this study focuses on malignant lymphoid cells and blast cells. There is no doubt that these technologies may help the cytologist to perform efficient, objective, and fast morphological analysis of blood cells. They may also help in the interpretation of some morphological features and may serve as learning and survey tools. Although research is still needed, it is important to define screening strategies to exploit the potential of image-based automatic recognition systems integrated in the daily routine of laboratories along with other analysis methodologies. © 2018 John Wiley & Sons Ltd.
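The three core elements map onto a very short pipeline in code; a sketch of the first two (Otsu thresholding as a stand-in segmenter and a few shape features), whose output would feed any standard classifier (illustrative choices, not the reviewed systems):

    import numpy as np
    from skimage.filters import threshold_otsu
    from skimage.measure import label, regionprops

    def cell_features(gray):
        """Segment cells and extract quantitative shape features per region."""
        mask = gray < threshold_otsu(gray)   # assumes cells darker than background
        return np.asarray([[r.area, r.eccentricity, r.solidity, r.perimeter]
                           for r in regionprops(label(mask))])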
A modeling analysis program for the JPL Table Mountain Io sodium cloud data
NASA Technical Reports Server (NTRS)
Smyth, W. H.; Goldberg, B. A.
1986-01-01
Progress and achievements in the second year are discussed in three main areas: (1) data quality review of the 1981 Region B/C images; (2) data processing activities; and (3) modeling activities. The data quality review revealed that almost all 1981 Region B/C images are of sufficient quality to be valuable in the analyses of the JPL data set. In the second area, the major milestone reached was the successful development and application of the complex image-processing software required to render the original image data suitable for modeling analysis studies. In the third area, the lifetime description of sodium atoms in the planet's magnetosphere was improved in the model to include the offset-dipole nature of the magnetic field as well as an east-west electric field. These improvements are important for properly representing the basic morphology as well as the east-west asymmetries of the sodium cloud.
Optical image encryption based on real-valued coding and subtracting with the help of QR code
NASA Astrophysics Data System (ADS)
Deng, Xiaopeng
2015-08-01
A novel optical image encryption method based on real-valued coding and subtracting is proposed with the help of a quick response (QR) code. In the encryption process, the original image to be encoded is first transformed into the corresponding QR code, which is then encoded into two phase-only masks (POMs) using basic vector operations. Finally, the absolute values of the real or imaginary parts of the two POMs are chosen as the ciphertexts. In the decryption process, the QR code can be approximately restored by recording the intensity of the subtraction between the ciphertexts, and hence the original image can be retrieved without any quality loss by scanning the restored QR code with a smartphone. Simulation results and actual smartphone-collected results show that the method is feasible and has strong tolerance to noise, phase difference, and the ratio between the intensities of the two decryption light beams.
Deficient cortical face-sensitive N170 responses and basic visual processing in schizophrenia.
Maher, S; Mashhoon, Y; Ekstrom, T; Lukas, S; Chen, Y
2016-01-01
Face detection, an ability to identify a visual stimulus as a face, is impaired in patients with schizophrenia. It is unclear whether impaired face processing in this psychiatric disorder results from face-specific domains or stems from more basic visual domains. In this study, we examined cortical face-sensitive N170 response in schizophrenia, taking into account deficient basic visual contrast processing. We equalized visual contrast signals among patients (n=20) and controls (n=20) and between face and tree images, based on their individual perceptual capacities (determined using psychophysical methods). We measured N170, a putative temporal marker of face processing, during face detection and tree detection. In controls, N170 amplitudes were significantly greater for faces than trees across all three visual contrast levels tested (perceptual threshold, two times perceptual threshold and 100%). In patients, however, N170 amplitudes did not differ between faces and trees, indicating diminished face selectivity (indexed by the differential responses to face vs. tree). These results indicate a lack of face-selectivity in temporal responses of brain machinery putatively responsible for face processing in schizophrenia. This neuroimaging finding suggests that face-specific processing is compromised in this psychiatric disorder. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Willsky, A. S.
1976-01-01
A number of current research directions in the fields of digital signal processing and modern control and estimation theory were studied. Topics such as stability theory, linear prediction and parameter identification, system analysis and implementation, two-dimensional filtering, decentralized control and estimation, image processing, and nonlinear system theory were examined in order to uncover some of the basic similarities and differences in the goals, techniques, and philosophy of the two disciplines. An extensive bibliography is included.
Exploiting range imagery: techniques and applications
NASA Astrophysics Data System (ADS)
Armbruster, Walter
2009-07-01
Practically no applications exist for which automatic processing of 2D intensity imagery can equal human visual perception. This is not the case for range imagery. The paper gives examples of 3D laser radar applications, for which automatic data processing can exceed human visual cognition capabilities and describes basic processing techniques for attaining these results. The examples are drawn from the fields of helicopter obstacle avoidance, object detection in surveillance applications, object recognition at high range, multi-object-tracking, and object re-identification in range image sequences. Processing times and recognition performances are summarized. The techniques used exploit the bijective continuity of the imaging process as well as its independence of object reflectivity, emissivity and illumination. This allows precise formulations of the probability distributions involved in figure-ground segmentation, feature-based object classification and model based object recognition. The probabilistic approach guarantees optimal solutions for single images and enables Bayesian learning in range image sequences. Finally, due to recent results in 3D-surface completion, no prior model libraries are required for recognizing and re-identifying objects of quite general object categories, opening the way to unsupervised learning and fully autonomous cognitive systems.
Optical asymmetric image encryption using gyrator wavelet transform
NASA Astrophysics Data System (ADS)
Mehra, Isha; Nishchal, Naveen K.
2015-11-01
In this paper, we propose a new optical information processing tool, termed the gyrator wavelet transform, to secure a fully phase image based on an amplitude- and phase-truncation approach. The gyrator wavelet transform comprises four basic parameters: the gyrator transform order, the type and level of the mother wavelet, and the position of the different frequency bands. These parameters are used as encryption keys in addition to the random phase codes of the optical cryptosystem. This tool has also been applied for simultaneous compression and encryption of an image. The system's performance and its sensitivity to the encryption parameters, such as the gyrator transform order, as well as its robustness, have also been analyzed. It is expected that this tool will not only update current optical security systems, but may also shed some light on future developments. The computer simulation results demonstrate the abilities of the gyrator wavelet transform as an effective tool that can be used in various optical information processing applications, including image encryption and image compression. The tool can also be applied to securing color, multispectral, and three-dimensional images.
Optimality of the basic colour categories for classification
Griffin, Lewis D
2005-01-01
Categorization of colour has been widely studied as a window into human language and cognition, and quite separately has been used pragmatically in image-database retrieval systems. This suggests the hypothesis that the best category system for pragmatic purposes coincides with human categories (i.e. the basic colours). We have tested this hypothesis by assessing the performance of different category systems in a machine-vision task. The task was the identification of the odd-one-out from triples of images obtained using a web-based image-search service. In each triple, two of the images had been retrieved using the same search term, the other a different term. The terms were simple concrete nouns. The results were as follows: (i) the odd-one-out task can be performed better than chance using colour alone; (ii) basic colour categorization performs better than random systems of categories; (iii) a category system that performs better than the basic colours could not be found; and (iv) it is not just the general layout of the basic colours that is important, but also the detail. We conclude that (i) the results support the plausibility of an explanation for the basic colours as a result of a pressure-to-optimality and (ii) the basic colours are good categories for machine vision image-retrieval systems.
Object extraction method for image synthesis
NASA Astrophysics Data System (ADS)
Inoue, Seiki
1991-11-01
The extraction of component objects from images is fundamentally important for image synthesis. In TV program production, one useful method is the Video-Matte technique for specifying the necessary boundary of an object. This, however, involves intricate and tedious manual processes. A new method proposed in this paper can reduce the required level of operator skill and simplify object extraction. The object is automatically extracted from just a simple drawing of a thick boundary line. The basic principle involves thinning the thick-boundary-line binary image using the edge intensity of the original image. This method has many practical advantages, including the simplicity of specifying an object, the high accuracy of the thinned boundary line, its ease of application to moving images, and the lack of any need for adjustment.
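A simplified stand-in for the edge-guided thinning idea (keep only the strongest-gradient pixels inside the hand-drawn band, then skeletonize; the percentile cut is an illustrative assumption and ignores the connectivity issues the paper's method addresses):

    import numpy as np
    from skimage.filters import sobel
    from skimage.morphology import skeletonize

    def thin_boundary(image, band_mask):
        """Reduce a thick boundary band to a thin line near strong image edges."""
        edges = sobel(image.astype(np.float64))          # gradient magnitude
        cut = np.percentile(edges[band_mask], 60)
        return skeletonize(band_mask & (edges > cut))    # one-pixel-wide estimate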
Pedagogical Red Tape: Difficulties in Teaching the Bureaucracy to Undergraduate Students
ERIC Educational Resources Information Center
Miller, William J.; Kaltenthaler, Karl; Feuerstein, Derek
2010-01-01
Americans are often perceived as holding extremely negative views of governmental bureaucrats. Phrases like bureaucratic waste and unresponsive bureaucracy fill the mainstream media and taint the image of bureaucrats. Beginning in basic high school civics classes, students are taught to respect the lawmaking process, the executive power of the…
Design and implementation of a dual-wavelength intrinsic fluorescence camera system
NASA Astrophysics Data System (ADS)
Ortega-Martinez, Antonio; Musacchia, Joseph J.; Gutierrez-Herrera, Enoch; Wang, Ying; Franco, Walfre
2017-03-01
Intrinsic UV fluorescence imaging is a technique that permits the observation of spatial differences in emitted fluorescence. It relies on the fluorescence produced by the innate fluorophores in the sample, and thus can be used for marker-less in-vivo assessment of tissue. It has been studied as a tool for the study of the skin, specifically for the classification of lesions, the delimitation of lesion borders, and the study of wound healing, among others. In its most basic setup, a sample is excited with a narrow-band UV light source and the resulting fluorescence is imaged with a UV-sensitive camera filtered to the emission wavelength of interest. By carefully selecting the excitation/emission pair, we can observe changes in fluorescence associated with physiological processes. One of the main drawbacks of this simple setup is the inability to observe more than a single excitation/emission pair at a time, as some phenomena are better studied when two or more different pairs are observed simultaneously. In this work, we describe the design and the hardware and software implementation of a dual-wavelength portable UV fluorescence imaging system. Its main components are a UV camera, a dual-wavelength UV LED illuminator (295 and 345 nm), and two different emission filters (345 and 390 nm) that can be swapped by a mechanical filter wheel. The system is operated using a laptop computer and custom software that performs basic pre-processing to improve the image. The system was designed to image the fluorescent peaks of tryptophan and collagen cross-links in order to study wound-healing progression.
Fringe formation in dual-hologram interferometry
NASA Technical Reports Server (NTRS)
Burner, A. W.
1990-01-01
Reference-fringe formation in nondiffuse dual-hologram interferometry is described by combining a first-order geometrical hologram treatment with interference fringes generated by two point sources. The first-order imaging relationships can be used to describe reference-fringe patterns for the geometry of dual-hologram interferometry. The process can be completed without adjusting the two holograms when the reconstructing wavelength is less than the exposing wavelength, and the process is found to facilitate basic interferometer adjustments.
Integration of basic sciences and clinical sciences in oral radiology education for dental students.
Baghdady, Mariam T; Carnahan, Heather; Lam, Ernest W N; Woods, Nicole N
2013-06-01
Educational research suggests that cognitive processing in diagnostic radiology requires a solid foundation in the basic sciences and knowledge of the radiological changes associated with disease. Although it is generally assumed that dental students must acquire both sets of knowledge, little is known about the most effective way to teach them. Currently, the basic and clinical sciences are taught separately. This study was conducted to compare the diagnostic accuracy of students taught basic sciences segregated from, or integrated with, clinical features. Predoctoral dental students (n=51) were taught four confusable intrabony abnormalities using basic science descriptions either integrated with the radiographic features or segregated from them. The students were tested with diagnostic images, and memory tests were performed immediately after learning and one week later. On both immediate and delayed testing, participants in the integrated basic science group outperformed those in the segregated group. A main effect of learning condition was found to be significant (p<0.05). The results of this study support the critical role of integrating biomedical knowledge in diagnostic radiology and show that teaching basic sciences integrated with clinical features produces higher diagnostic accuracy in novices than teaching basic sciences segregated from clinical features.
Analysis of the temperature of the hot tool in the cut of woven fabric using infrared images
NASA Astrophysics Data System (ADS)
Borelli, Joao E.; Verderio, Leonardo A.; Gonzaga, Adilson; Ruffino, Rosalvo T.
2001-03-01
Textile manufacture occupies a prominent place in the national economy. Owing to its importance, research has been carried out on the development of new materials, equipment, and methods used in the production process. The cutting of fabric is a basic early stage in the making of clothes and other articles. In the hot cutting of fabric, one of the most important variables in controlling the process is the contact temperature between the tool and the fabric. This work presents a technique for measuring that temperature based on the processing of infrared images. For this purpose, a system was developed composed of an infrared camera, a PC frame-grabber board, and software that analyzes the point temperature in the cut area, enabling the operator to achieve the necessary control of the other variables involved in the process.
Experimental research of digital holographic microscopic measuring
NASA Astrophysics Data System (ADS)
Zhu, Xueliang; Chen, Feifei; Li, Jicheng
2013-06-01
Digital holography is a new imaging technique developed from optical holography, digital processing, and computer techniques. It uses a CCD instead of the conventional silver-halide plate to record the hologram and then reproduces the 3D contour of the object by computer simulation. Compared with traditional optical holography, the whole process offers simple measurement, lower production cost, and faster imaging, with the advantage of non-contact real-time measurement. At present it can be used in fields such as the morphology detection of tiny objects, micro-deformation analysis, and the measurement of biological cell shapes, and it is an active research topic at home and abroad. This paper introduces the basic principles and relevant theory of optical and digital holography and investigates the basic factors that influence the reconstructed images during the recording and reconstruction stages of digital holographic microscopy. To obtain a clear digital hologram, the recording distance of the hologram was determined by analyzing the structure of the optical system. On the basis of these theoretical studies, we established a measurement system, analyzed the experimental conditions, and adjusted the system accordingly. To achieve precise three-dimensional measurement of tiny objects, a MEMS micro-device was measured as an example; its reproduced three-dimensional contour was obtained, realizing three-dimensional profile measurement of a tiny object. The measurement errors were analyzed with respect to the factors affecting the measurement: the separation of the zero-order term and the pair of twin images, the choice of the object and reference light, the recording and reconstruction distances, and the characteristics of the reconstruction light. The results show that the device has a certain reliability.
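Numerical reconstruction in digital holographic microscopy is commonly done with a single-FFT Fresnel transform; a minimal sketch (constant phase factors dropped; the parameters are generic, not this experiment's values):

    import numpy as np

    def fresnel_reconstruct(hologram, wavelength, pitch, z):
        """Reconstructed amplitude at distance z from a hologram with pixel pitch."""
        ny, nx = hologram.shape
        k = 2.0 * np.pi / wavelength
        x = (np.arange(nx) - nx / 2) * pitch
        y = (np.arange(ny) - ny / 2) * pitch
        X, Y = np.meshgrid(x, y)
        chirp = np.exp(1j * k * (X**2 + Y**2) / (2.0 * z))  # Fresnel quadratic phase
        return np.abs(np.fft.fftshift(np.fft.fft2(hologram * chirp)))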
Quality Assurance By Laser Scanning And Imaging Techniques
NASA Astrophysics Data System (ADS)
Schmalfuß, Harald J.; Schinner, Karl Ludwig
1989-03-01
Laser scanning systems are well established in the world of fast industrial in-process quality inspection. The materials inspected by laser scanning systems are, e.g., "endless" sheets of steel, paper, textile, film, or foil. The web width varies from 50 mm up to 5000 mm or more. The web speed depends strongly on the production process and can reach several hundred meters per minute. The continuous data flow in a single channel of the optical receiving system exceeds ten megapixels per second. It is therefore clear that the electronic evaluation system has to process these data streams in real time, and no image storage is possible. But sometimes (e.g., at first installation of the system, or when the defect classification changes) it would be very helpful to be able to view the original, i.e., unprocessed, sensor data. We first show the basic setup of a standard laser scanning system. We then introduce a large image memory especially designed for the needs of high-speed inspection sensors. This image memory cooperates with the standard on-line evaluation electronics and therefore provides an easy comparison between processed and unprocessed data. We discuss the basic system structure and show the first industrial results.
Curriculum in biomedical optics and laser-tissue interactions
NASA Astrophysics Data System (ADS)
Jacques, Steven L.
2003-10-01
A graduate student level curriculum has been developed for teaching the basic principles of how lasers and light interact with biological tissues and materials. The field of Photomedicine can be divided into two topic areas: (1) where tissue affects photons, used for diagnostic sensing, imaging, and spectroscopy of tissues and biomaterials, and (2) where photons affect tissue, used for surgical and therapeutic cutting, dissecting, machining, processing, coagulating, welding, and oxidizing tissues and biomaterials. The courses teach basic principles of tissue optical properties and light transport in tissues, and interaction of lasers and conventional light sources with tissues via photochemical, photothermal and photomechanical mechanisms.
Optimizing MR imaging-guided navigation for focused ultrasound interventions in the brain
NASA Astrophysics Data System (ADS)
Werner, B.; Martin, E.; Bauer, R.; O'Gorman, R.
2017-03-01
MR imaging during transcranial MR imaging-guided Focused Ultrasound surgery (tcMRIgFUS) is challenging due to the complex ultrasound transducer setup and the water bolus used for acoustic coupling. Achievable image quality in the tcMRIgFUS setup using the standard body coil is significantly inferior to current neuroradiologic standards. As a consequence, MR image guidance for precise navigation in functional neurosurgical interventions using tcMRIgFUS is basically limited to the acquisition of MR coordinates of salient landmarks such as the anterior and posterior commissure for aligning a stereotactic atlas. Here, we show how the improved MR image quality provided by a custom-built MR coil and optimized MR imaging sequences can support imaging-guided navigation for functional tcMRIgFUS neurosurgery by visualizing anatomical landmarks that can be integrated into the navigation process to accommodate patient-specific anatomy.
Advantages and Disadvantages in Image Processing with Free Software in Radiology.
Mujika, Katrin Muradas; Méndez, Juan Antonio Juanes; de Miguel, Andrés Framiñan
2018-01-15
Currently, there are sophisticated applications that make it possible to visualize medical images and even to manipulate them. These software applications are of great interest, both from a teaching and a radiological perspective. In addition, some of these applications are known as free open source software because they are free and their source code is freely available, so they can be easily obtained even on personal computers. Two examples of free open source software are OsiriX Lite® and 3D Slicer®. However, this group of free applications has limitations in its use. In the radiological field, manipulating and post-processing images is increasingly important. Consequently, sophisticated computing tools that combine software and hardware to process medical images are needed. In radiology, graphic workstations allow their users to process, review, analyse, communicate, and exchange multidimensional digital images acquired with different image-capturing radiological devices, basically CT (Computerised Tomography), MRI (Magnetic Resonance Imaging), PET (Positron Emission Tomography), etc. Nevertheless, the programs included in these workstations have a high cost, which depends on the software provider and is subject to its norms and requirements. With this study, we aim to present the advantages and disadvantages of these radiological image visualization systems in the advanced management of radiological studies. We compare the features of the VITREA2® and AW VolumeShare 5® radiology workstations with free open source software applications like OsiriX® and 3D Slicer®, with examples from specific studies.
Combining fluorescence imaging with Hi-C to study 3D genome architecture of the same single cell.
Lando, David; Basu, Srinjan; Stevens, Tim J; Riddell, Andy; Wohlfahrt, Kai J; Cao, Yang; Boucher, Wayne; Leeb, Martin; Atkinson, Liam P; Lee, Steven F; Hendrich, Brian; Klenerman, Dave; Laue, Ernest D
2018-05-01
Fluorescence imaging and chromosome conformation capture assays such as Hi-C are key tools for studying genome organization. However, traditionally, they have been carried out independently, making integration of the two types of data difficult to perform. By trapping individual cell nuclei inside a well of a 384-well glass-bottom plate with an agarose pad, we have established a protocol that allows both fluorescence imaging and Hi-C processing to be carried out on the same single cell. The protocol identifies 30,000-100,000 chromosome contacts per single haploid genome in parallel with fluorescence images. Contacts can be used to calculate intact genome structures to better than 100-kb resolution, which can then be directly compared with the images. Preparation of 20 single-cell Hi-C libraries using this protocol takes 5 d of bench work by researchers experienced in molecular biology techniques. Image acquisition and analysis require basic understanding of fluorescence microscopy, and some bioinformatics knowledge is required to run the sequence-processing tools described here.
Anthropomorphic robot for recognition and drawing generalized object images
NASA Astrophysics Data System (ADS)
Ginzburg, Vera M.
1998-10-01
The process of recognition, for instance understanding text written in different fonts, consists in stripping away the individual attributes of the letters in a particular font. It is shown that such a process, in nature and in technology, can be achieved by narrowing the spatial-frequency content of the object's image through defocusing. In a defocused image only certain areas remain, the so-called Informative Fragments (IFs), which together form the generalized (stylized) image shared by many identical objects. It is shown that the variety of shapes of IFs is restricted and can be represented by a 'geometrical alphabet'. The 'letters' of this alphabet can be created using two basic 'genetic' figures: a stripe and a round spot. It is known from physiology that special cells of the visual cortex respond to these particular figures. A prototype of such a 'genetic' alphabet has been made using Boolean algebra (Venn diagrams). An algorithm for drawing the shape of a letter ('genlet') in this alphabet and generalized images of objects (for example, a 'sleeping cat') are given. A scheme of an anthropomorphic robot is shown, together with the results of a computer experiment modeling the robot's action of 'drawing' the generalized image.
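The defocusing step has a simple computational analogue: low-pass filter, then keep the surviving blobs as the informative fragments. A sketch (the Gaussian blur and mean threshold are illustrative choices, not the paper's method):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def generalized_image(img, sigma=8.0):
        """Defocus to strip individual detail; return the surviving fragments."""
        blurred = gaussian_filter(img.astype(np.float64), sigma)
        return blurred > blurred.mean()   # binary informative-fragment map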
The Basic Principles of FDG-PET/CT Imaging.
Basu, Sandip; Hess, Søren; Nielsen Braad, Poul-Erik; Olsen, Birgitte Brinkmann; Inglev, Signe; Høilund-Carlsen, Poul Flemming
2014-10-01
Positron emission tomography (PET) imaging with 2-[(18)F]fluoro-2-deoxy-D-glucose (FDG) forms the basis of molecular imaging. FDG-PET imaging is a multidisciplinary undertaking that requires close interdisciplinary collaboration in a broad team comprising physicians, technologists, secretaries, radio-chemists, hospital physicists, molecular biologists, engineers, and cyclotron technicians. The aim of this review is to provide a brief overview of important basic issues and considerations pivotal to successful patient examinations, including basic physics, instrumentation, radiochemistry, molecular and cell biology, patient preparation, normal distribution of tracer, and potential interpretive pitfalls. Copyright © 2014 Elsevier Inc. All rights reserved.
Translational research of optical molecular imaging for personalized medicine.
Qin, C; Ma, X; Tian, J
2013-12-01
In the medical imaging field, molecular imaging is a rapidly developing discipline encompassing many imaging modalities, providing effective tools to visualize, characterize, and measure molecular and cellular mechanisms in the complex biological processes of living organisms, which can deepen our understanding of biology and accelerate preclinical research, including cancer studies and drug discovery. Among the molecular imaging modalities, although the penetration depth of optical imaging and the optical probes approved for clinical use are limited, optical imaging has evolved considerably and has seen spectacular advances in basic biomedical research and new drug development. With the completion of human genome sequencing and the emergence of personalized medicine, a specific drug should be matched not only to the right disease but also to the right person, and optical molecular imaging should serve as a strong adjunct for developing personalized medicine by finding the optimal drug based on an individual's proteome and genome. In this process, the computational methodology and imaging systems, as well as the biomedical applications of optical molecular imaging, will play a crucial role. This review focuses on recent representative translational studies of optical molecular imaging for personalized medicine, preceded by a concise introduction. Finally, the current challenges and future development of optical molecular imaging are discussed from the authors' perspective, and the review is concluded.
NASA Astrophysics Data System (ADS)
Saini, Surender Singh; Sardana, Harish Kumar; Pattnaik, Shyam Sundar
2017-06-01
Conventional image editing software, in combination with other techniques, is not only difficult to apply to an image but also permits the user to perform only basic functions, one at a time. However, image processing algorithms and photogrammetric systems have been developed in the recent past for real-time pattern recognition applications. A graphical user interface (GUI) is developed that can perform multiple functions simultaneously for the analysis and estimation of geometric distortion in an image with reference to the corresponding distortion-free image. The GUI measures, records, and visualizes the performance metric of the X/Y coordinates of one image against the other. The various keys and icons provided in the utility extract the coordinates of the distortion-free reference image and of the image with geometric distortion. The error between corresponding points gives the measure of distortion and is also used to evaluate the correction parameters for the image distortion. As the GUI minimizes human intervention in the process of geometric correction, its execution requires only the use of the icons and keys provided in the utility; this technique gives swift and accurate results compared with other conventional methods for measuring the X/Y coordinates of an image.
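The underlying measurement is just the displacement between matched coordinates; a sketch of the error and RMS-distortion computation the utility automates (function and variable names are illustrative):

    import numpy as np

    def distortion_error(ref_pts, dist_pts):
        """Per-point displacement and RMS geometric distortion for matched X/Y points."""
        d = np.asarray(dist_pts, dtype=float) - np.asarray(ref_pts, dtype=float)
        residuals = np.hypot(d[:, 0], d[:, 1])
        return residuals, float(np.sqrt((residuals**2).mean()))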
Control Software for Advanced Video Guidance Sensor
NASA Technical Reports Server (NTRS)
Howard, Richard T.; Book, Michael L.; Bryan, Thomas C.
2006-01-01
Embedded software has been developed specifically for controlling an Advanced Video Guidance Sensor (AVGS). A Video Guidance Sensor is an optoelectronic system that provides guidance for automated docking of two vehicles. Such a system includes pulsed laser diodes and a video camera, the output of which is digitized. From the positions of digitized target images and known geometric relationships, the relative position and orientation of the vehicles are computed. The present software consists of two subprograms running in two processors that are parts of the AVGS. The subprogram in the first processor receives commands from an external source, checks the commands for correctness, performs commanded non-image-data-processing control functions, and sends image data processing parts of commands to the second processor. The subprogram in the second processor processes image data as commanded. Upon power-up, the software performs basic tests of functionality, then effects a transition to a standby mode. When a command is received, the software goes into one of several operational modes (e.g. acquisition or tracking). The software then returns, to the external source, the data appropriate to the command.
NASA Technical Reports Server (NTRS)
Hjellming, R. M.
1992-01-01
AIPS++ is an Astronomical Information Processing System being designed and implemented by an international consortium of NRAO and six other radio astronomy institutions in Australia, India, the Netherlands, the United Kingdom, Canada, and the USA. AIPS++ is intended to replace the functionality of AIPS, to be more easily programmable, and will be implemented in C++ using object-oriented techniques. Programmability in AIPS++ is planned at three levels. The first level will be that of a command-line interpreter with characteristics similar to IDL and PV-Wave, but with an extensive set of operations appropriate to telescope data handling, image formation, and image processing. The second level will allow input and output of data between external FORTRAN programs and AIPS++ telescope and image databases. The third level will be in C++ with extensive use of class libraries for both basic operations and advanced applications. In addition to summarizing the above programmability characteristics, this talk will give an overview of the classes currently being designed for telescope data calibration and editing, image formation, and the 'toolkit' of mathematical 'objects' that will perform most of the processing in AIPS++.
Trajectory-based morphological operators: a model for efficient image processing.
Jimeno-Morenilla, Antonio; Pujol, Francisco A; Molina-Carmona, Rafael; Sánchez-Romero, José L; Pujol, Mar
2014-01-01
Mathematical morphology has been an area of intensive research over the last few years. Although many remarkable advances have been achieved throughout these years, there is still great interest in accelerating morphological operations so that they can be implemented in real-time systems. In this work, we present a new model for computing mathematical morphology operations, the so-called morphological trajectory model (MTM), in which a morphological filter is divided into a sequence of basic operations. A trajectory-based morphological operation (such as dilation or erosion) is then defined as the set of points resulting from the ordered application of the instant basic operations. The MTM approach allows working with different structuring elements, such as disks, and the experiments show that our method is independent of the structuring-element size and can easily be applied to industrial systems and high-resolution images.
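The idea of a morphological filter as an ordered sequence of basic operations can be illustrated for a flat dilation: traverse the structuring element as a trajectory of offsets, taking a pointwise maximum at each step (the wrap-around borders and disk trajectory below are simplifications, not the authors' formulation):

    import numpy as np

    def trajectory_dilation(f, trajectory):
        """Flat gray-level dilation as an ordered sequence of shift-and-max steps."""
        out = f.astype(np.float64).copy()
        for di, dj in trajectory:   # each step is one 'instant' basic operation
            out = np.maximum(out, np.roll(np.roll(f, di, axis=0), dj, axis=1))
        return out

    # Example trajectory: a radius-2 disk structuring element as offsets.
    disk = [(di, dj) for di in range(-2, 3) for dj in range(-2, 3)
            if di * di + dj * dj <= 4]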
Molecular aspects of magnetic resonance imaging and spectroscopy.
Boesch, C
1999-01-01
Magnetic resonance imaging (MRI) is a well-known diagnostic tool in radiology that produces unsurpassed images of the human body, in particular of soft tissue. However, the medical community is often not aware that MRI is an important yet limited segment of magnetic resonance (MR), or nuclear magnetic resonance (NMR) as the method is called in basic science. The tremendous morphological information of MR images sometimes conceals the fact that MR signals in general contain much more information, especially on processes at the molecular level. NMR is successfully used in physics, chemistry, and biology to explore and characterize chemical reactions, molecular conformations, biochemical pathways, solid-state materials, and many other applications that elucidate invisible characteristics of matter and tissue. In medical applications, knowledge of the molecular background of MRI, and in particular of MR spectroscopy (MRS), is an indispensable basis for understanding the molecular phenomena that lead to macroscopic effects visible in diagnostic images or spectra. This review provides the necessary background to comprehend molecular aspects of magnetic resonance applications in medicine. An introduction to the physical basics aims at an understanding of some of the molecular mechanisms without extended mathematical treatment. Typical MR terminology is explained to facilitate the reading of original MR publications by non-experts. Applications in MRI and MRS are intended to illustrate the consequences of molecular effects on images and spectra.
Andriole, Katherine P; Morin, Richard L; Arenson, Ronald L; Carrino, John A; Erickson, Bradley J; Horii, Steven C; Piraino, David W; Reiner, Bruce I; Seibert, J Anthony; Siegel, Eliot
2004-12-01
The Society for Computer Applications in Radiology (SCAR) Transforming the Radiological Interpretation Process (TRIP) Initiative aims to spearhead research, education, and discovery of innovative solutions to address the problem of information and image data overload. The initiative will foster interdisciplinary research on technological, environmental and human factors to better manage and exploit the massive amounts of data. TRIP will focus on the following basic objectives: improving the efficiency of interpretation of large data sets, improving the timeliness and effectiveness of communication, and decreasing medical errors. The ultimate goal of the initiative is to improve the quality and safety of patient care. Interdisciplinary research into several broad areas will be necessary to make progress in managing the ever-increasing volume of data. The six concepts involved are human perception, image processing and computer-aided detection (CAD), visualization, navigation and usability, databases and integration, and evaluation and validation of methods and performance. The result of this transformation will affect several key processes in radiology, including image interpretation; communication of imaging results; workflow and efficiency within the health care enterprise; diagnostic accuracy and a reduction in medical errors; and, ultimately, the overall quality of care.
Three-dimensional rendering of segmented object using matlab - biomed 2010.
Anderson, Jeffrey R; Barrett, Steven F
2010-01-01
The three-dimensional rendering of microscopic objects is a difficult and challenging task that often requires specialized image processing techniques. Previous work described a semi-automatic segmentation process for fluorescently stained neurons collected as a sequence of slice images with a confocal laser scanning microscope. Once properly segmented, each individual object can be rendered and studied as a three-dimensional virtual object. This paper describes the work associated with the design and development of Matlab files to create three-dimensional images from the segmented object data previously mentioned. Part of the motivation for this work is to integrate both the segmentation and rendering processes into one software application, providing a seamless transition from the segmentation tasks to the rendering and visualization tasks. Previously these tasks were accomplished on two different computer systems, Windows and Linux. This requirement basically limits the usefulness of the segmentation and rendering applications to those who have both computer systems readily available. The focus of this work is to create custom Matlab image processing algorithms for object rendering and visualization, and to merge these capabilities into the Matlab files that were developed especially for the image segmentation task. The completed Matlab application will contain both the segmentation and rendering processes in a single graphical user interface (GUI). Rendering three-dimensional images in Matlab requires that a sequence of two-dimensional binary images, each representing a cross-sectional slice of the object, be reassembled in a 3D space and covered with a surface. Additional segmented objects can be rendered in the same 3D space. The surface properties of each object can be varied by the user to aid in the study and analysis of the objects. This interactive process becomes a powerful visual tool to study and understand microscopic objects.
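The paper's implementation is Matlab; as an analogous sketch only, reassembling binary slices into a surfaced volume can be expressed in Python with recent scikit-image (function and variable names are our own):

    import numpy as np
    from skimage import measure

    def surface_from_slices(slices):
        # Stack 2D binary masks into a (z, y, x) volume and extract a
        # triangulated isosurface at the 0.5 level (marching cubes).
        volume = np.stack(slices).astype(float)
        verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
        return verts, faces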
NASA Technical Reports Server (NTRS)
Hasler, A. F.; Rodgers, E. B.
1977-01-01
An advanced Man-Interactive image and data processing system (AOIPS) was developed to extract basic meteorological parameters from satellite data and to perform further analyses. The errors in the satellite derived cloud wind fields for tropical cyclones are investigated. The propagation of these errors through the AOIPS system and their effects on the analysis of horizontal divergence and relative vorticity are evaluated.
Microscopy Images as Interactive Tools in Cell Modeling and Cell Biology Education
ERIC Educational Resources Information Center
Araujo-Jorge, Tania C.; Cardona, Tania S.; Mendes, Claudia L. S.; Henriques-Pons, Andrea; Meirelles, Rosane M. S.; Coutinho, Claudia M. L. M.; Aguiar, Luiz Edmundo V.; Meirelles, Maria de Nazareth L.; de Castro, Solange L.; Barbosa, Helene S.; Luz, Mauricio R. M. P.
2004-01-01
The advent of genomics, proteomics, and microarray technology has brought much excitement to science, both in teaching and in learning. The public is eager to know about the processes of life. In the present context of the explosive growth of scientific information, a major challenge of modern cell biology is to popularize basic concepts of…
A microcomputer program for analysis of nucleic acid hybridization data
Green, S.; Field, J.K.; Green, C.D.; Beynon, R.J.
1982-01-01
The study of nucleic acid hybridization is facilitated by computer-mediated fitting of theoretical models to experimental data. This paper describes a non-linear curve fitting program, using the 'Patternsearch' algorithm, written in BASIC for the Apple II microcomputer. The advantages and disadvantages of using a microcomputer for local data processing are discussed. PMID:7071017
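The original Apple II BASIC listing is not available here; as a rough, modernized sketch of the 'Patternsearch' idea (a derivative-free direct search over the parameter space), the following Python fragment fits a hypothetical first-order hybridization model, with both the data and the model invented purely for illustration:

    import numpy as np

    def pattern_search(loss, x0, step=0.5, shrink=0.5, tol=1e-6):
        # Try +/- step along each parameter; keep any improvement,
        # otherwise shrink the step until it falls below tol.
        x, best = np.asarray(x0, dtype=float), loss(x0)
        while step > tol:
            improved = False
            for i in range(len(x)):
                for delta in (step, -step):
                    trial = x.copy()
                    trial[i] += delta
                    f = loss(trial)
                    if f < best:
                        x, best, improved = trial, f, True
            if not improved:
                step *= shrink
        return x, best

    t = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])     # hypothetical time points
    y = np.array([0.18, 0.33, 0.55, 0.79, 0.95, 0.99])
    loss = lambda p: np.sum((y - (1.0 - np.exp(-p[0] * t))) ** 2)
    k_fit, sse = pattern_search(loss, [0.1])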
Actual Drawing of Histological Images Improves Knowledge Retention
ERIC Educational Resources Information Center
Balemans, Monique C. M.; Kooloos, Jan G. M.; Donders, A. Rogier T.; Van der Zee, Catharina E. E. M.
2016-01-01
Medical students have to process a large amount of information during the first years of their study, which has to be retained over long periods of nonuse. Therefore, it would be beneficial when knowledge is gained in a way that promotes long-term retention. Paper-and-pencil drawings for the uptake of form-function relationships of basic tissues…
Statistical regularities of art images and natural scenes: spectra, sparseness and nonlinearities.
Graham, Daniel J; Field, David J
2007-01-01
Paintings are the product of a process that begins with ordinary vision in the natural world and ends with manipulation of pigments on canvas. Because artists must produce images that can be seen by a visual system that is thought to take advantage of statistical regularities in natural scenes, artists are likely to replicate many of these regularities in their painted art. We have tested this notion by computing basic statistical properties and modeled cell response properties for a large set of digitized paintings and natural scenes. We find that both representational and non-representational (abstract) paintings from our sample (124 images) show basic similarities to a sample of natural scenes in terms of their spatial frequency amplitude spectra, but the paintings and natural scenes show significantly different mean amplitude spectrum slopes. We also find that the intensity distributions of paintings show a lower skewness and sparseness than natural scenes. We account for this by considering the range of luminances found in the environment compared to the range available in the medium of paint. A painting's range is limited by the reflective properties of its materials. We argue that artists do not simply scale the intensity range down but use a compressive nonlinearity. In our studies, modeled retinal and cortical filter responses to the images were less sparse for the paintings than for the natural scenes. But when a compressive nonlinearity was applied to the images, both the paintings' sparseness and the modeled responses to the paintings showed the same or greater sparseness compared to the natural scenes. This suggests that artists achieve some degree of nonlinear compression in their paintings. Because paintings have captivated humans for millennia, finding basic statistical regularities in paintings' spatial structure could grant insights into the range of spatial patterns that humans find compelling.
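As a hint of how such statistics are computed, here is a minimal sketch (assuming a 2D grayscale numpy array; not the authors' code) of the rotationally averaged amplitude-spectrum slope, which for natural scenes typically falls near -1, the familiar 1/f amplitude falloff:

    import numpy as np

    def amplitude_spectrum_slope(image):
        # Amplitude spectrum, shifted so DC sits at the array centre.
        f = np.abs(np.fft.fftshift(np.fft.fft2(image - image.mean())))
        h, w = image.shape
        y, x = np.indices((h, w))
        r = np.hypot(y - h // 2, x - w // 2).astype(int)
        # Rotational average: mean amplitude in each integer-radius bin.
        radial = np.bincount(r.ravel(), weights=f.ravel()) / np.bincount(r.ravel())
        freqs = np.arange(1, min(h, w) // 2)   # skip DC, stay below Nyquist
        slope, _ = np.polyfit(np.log(freqs), np.log(radial[freqs]), 1)
        return slope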
Open source software in a practical approach for post processing of radiologic images.
Valeri, Gianluca; Mazza, Francesco Antonino; Maggi, Stefania; Aramini, Daniele; La Riccia, Luigi; Mazzoni, Giovanni; Giovagnoni, Andrea
2015-03-01
The purpose of this paper is to evaluate the use of open source software (OSS) to process DICOM images. We selected 23 programs for Windows and 20 programs for Mac from 150 possible OSS programs, including DICOM viewers and various tools (converters, DICOM header editors, etc.). The programs selected all meet basic requirements such as free availability, stand-alone operation, presence of a graphical user interface, ease of installation, and advanced features beyond simple image display. The capabilities of data import, data export, metadata handling, 2D viewing, 3D viewing, platform support, and usability of each selected program were evaluated on a scale ranging from 1 to 10 points. Twelve programs received a score of eight or higher. Among them, five obtained a score of 9 (3D Slicer, MedINRIA, MITK 3M3, VolView, VR Render), while OsiriX received 10. OsiriX appears to be the only program able to perform all the operations taken into consideration, similar to a workstation equipped with proprietary software, allowing the analysis and interpretation of images in a simple and intuitive way. OsiriX is a DICOM PACS workstation for medical imaging and software for image processing for medical research, functional imaging, 3D imaging, confocal microscopy and molecular imaging. This application is also a good tool for teaching activities because it facilitates the attainment of learning objectives among students and other specialists.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trease, Harold (PNNL)
2012-10-10
ASSA is a software application that processes binary data into summarized index tables that can be used to organize features contained within the data. ASSA's index tables can also be used to search for user specified features. ASSA is designed to organize and search for patterns in unstructured binary data streams or archives, such as video, images, audio, and network traffic. ASSA is basically a very general search engine used to search for any pattern in any binary data stream. It has uses in video analytics, image analysis, audio analysis, searching hard-drives, monitoring network traffic, etc.
dada - a web-based 2D detector analysis tool
NASA Astrophysics Data System (ADS)
Osterhoff, Markus
2017-06-01
The data daemon, dada, is a server backend providing unified access to 2D pixel detector image data recorded by different detectors, stored in different file formats, and saved with varying naming conventions and folder structures across instruments. Furthermore, dada implements basic pre-processing and analysis routines, from pixel binning and azimuthal integration to raster-scan processing. Users normally interact with dada through a web frontend, but all parameters for an analysis are encoded into a Uniform Resource Identifier (URI), which can also be written by hand or by scripts for batch processing.
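Since every dada analysis is addressed by a URI, batch jobs reduce to string construction. A sketch with hypothetical parameter names and host (dada's actual URI scheme is not documented here):

    from urllib.parse import urlencode

    # Illustrative parameters only; not dada's real keys.
    params = {"instrument": "detectorA", "scan": 42, "task": "azimuthal",
              "bins": 360, "center": "1024,1024"}
    uri = "https://dada.example.org/analyze?" + urlencode(params)
    # The same string can be bookmarked, shared, or generated in a loop
    # for batch processing.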
NASA Astrophysics Data System (ADS)
Krasilenko, Vladimir G.; Lazarev, Alexander A.; Nikitovich, Diana V.
2017-08-01
Self-learning equivalent-convolutional neural structures (SLECNS) for auto-coding-decoding and image clustering are discussed. The SLECNS architectures and their spatially invariant equivalent models (SI EMs), which use the corresponding matrix-matrix procedures with basic operations of continuous logic and non-linear processing, are proposed. These SI EMs have several advantages, such as the ability to recognize image fragments with better efficiency and strong cross-correlation. The proposed method for clustering fragments with regard to their structural features is suitable not only for binary but also for color images, and it combines self-learning with the formation of weight clustered matrix-patterns. Its model is constructed on the basis of recursive processing algorithms and the k-means method. The experimental results confirm that large images and 2D binary fragments with large numbers of elements can be clustered. For the first time, the possibility of generalizing these models to the space-invariant case is shown. An experiment with a 256x256 reference image and fragments of 7x7 and 21x21 pixels was carried out. The experiments, performed in the Mathcad software environment, showed that the proposed method is universal, converges in a small number of iterations, maps easily onto the matrix structure, and is promising. Understanding the mechanisms of self-learning equivalence-convolutional clustering, the accompanying competitive processes in neurons, and the principles of neural auto-encoding-decoding and recognition with self-learned cluster patterns is therefore very important; these mechanisms rely on algorithms and principles of non-linear processing of two-dimensional spatial functions for image comparison. The SI EMs can simply describe signal processing during all training and recognition stages, and they are suitable for unipolar-coded multilevel signals. We show that an implementation of SLECNS based on known equivalentors or traditional correlators is possible if they are built on the proposed equivalental two-dimensional functions of image similarity. The clustering efficiency of such models and their implementation depend on the discriminant properties of the neural elements of the hidden layers. The main model and architecture parameters and characteristics therefore depend on the applied types of non-linear processing and on the function used for image comparison or for adaptive-equivalental weighing of input patterns. Real model experiments in Mathcad are demonstrated, confirming that non-linear processing with equivalent functions allows the winner neurons to be determined and the weight matrix to be adjusted. Experimental results show that such models can be successfully used for auto- and hetero-associative recognition. They can also be used to explain some mechanisms known as "focus" and the "competing gain-inhibition concept". The SLECNS architecture and hardware implementations of its basic nodes, based on multi-channel convolvers and correlators with time integration, are proposed, and the parameters and performance of such architectures are estimated.
High quality image-pair-based deblurring method using edge mask and improved residual deconvolution
NASA Astrophysics Data System (ADS)
Cui, Guangmang; Zhao, Jufeng; Gao, Xiumin; Feng, Huajun; Chen, Yueting
2017-04-01
Image deconvolution is a challenging task in the field of image processing. Using an image pair can help produce a better restored image than deblurring from a single blurred image. In this paper, a high-quality image-pair-based deblurring method is presented using an improved Richardson-Lucy (RL) algorithm and a gain-controlled residual deconvolution technique. The input image pair consists of a non-blurred noisy image and a blurred image captured of the same scene. With the estimated blur kernel, an improved RL deblurring method based on an edge mask is introduced to obtain a preliminary deblurring result with effective ringing suppression and detail preservation. The preliminary result then serves as the basic latent image, and gain-controlled residual deconvolution is used to recover the residual image. A saliency weight map is computed as the gain map to further control ringing effects around edge areas in the residual deconvolution process. The final deblurring result is obtained by adding the preliminary result to the recovered residual image. An optical experimental vibration platform was set up to verify the applicability and performance of the proposed algorithm. Experimental results demonstrate that the proposed framework achieves superior performance in both subjective and objective assessments and has wide application in many image deblurring fields.
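The paper's edge mask and gain-controlled residual step are specific to its pipeline, but the Richardson-Lucy core they refine is standard. A minimal sketch (assuming a known PSF and a grayscale float image; no ringing suppression here):

    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(observed, psf, iterations=30, eps=1e-12):
        # Classic multiplicative RL update.
        estimate = np.full_like(observed, observed.mean(), dtype=float)
        psf_mirror = psf[::-1, ::-1]
        for _ in range(iterations):
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = observed / (blurred + eps)
            estimate *= fftconvolve(ratio, psf_mirror, mode="same")
        return estimate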
Ristivojević, Petar; Trifković, Jelena; Vovk, Irena; Milojković-Opsenica, Dušanka
2017-01-01
With the introduction of phytochemical fingerprint analysis as a method of screening complex natural products for the most bioactive compounds, the use of chemometric classification methods, the application of powerful scanning, image-capturing and processing devices and algorithms, and advances in the development of novel stationary phases and various separation modalities, high-performance thin-layer chromatography (HPTLC) fingerprinting is becoming an attractive and fruitful field of separation science. Multivariate image analysis is crucial for proper data acquisition. In the current study, different image processing procedures were studied and compared in detail on HPTLC chromatograms of plant resins. Variables such as the gray intensities of pixels along the solvent front, peak areas and mean peak values were used as input data and compared to obtain the best classification models. Important steps in image analysis (baseline removal, denoising, target peak alignment and normalization) are pointed out. A numerical data set based on the mean values of selected bands and the intensities of pixels along the solvent front proved to be the most convenient for planar-chromatographic profiling, although it requires at least basic knowledge of image processing methodology, and can be proposed for further investigation in HPTLC fingerprinting. Copyright © 2016 Elsevier B.V. All rights reserved.
The influence of pets on infants' processing of cat and dog images.
Hurley, Karinna B; Kovack-Lesh, Kristine A; Oakes, Lisa M
2010-12-01
We examined how experience at home with pets is related to infants' processing of animal stimuli in a standard laboratory procedure. We presented 6-month-old infants with photographs of cats or dogs and found that infants with pets at home (N=40) responded differently to the pictures than infants without pets (N=40). These results suggest that infants' experience in one context (at home) contributes to their processing of similar stimuli in a different context (the laboratory), and have implications for how infants' early experience shapes basic cognitive processing. Copyright © 2010 Elsevier Inc. All rights reserved.
Automatic brain MR image denoising based on texture feature-based artificial neural networks.
Chang, Yu-Ning; Chang, Herng-Hua
2015-01-01
Noise is one of the main sources of quality deterioration, not only for visual inspection but also for computerized processing in brain magnetic resonance (MR) image analysis such as tissue classification, segmentation and registration. Accordingly, noise removal in brain MR images is important for a wide variety of subsequent processing applications. However, most existing denoising algorithms require laborious tuning of parameters that are often sensitive to specific image features and textures. Automating these parameters through artificial intelligence techniques would be highly beneficial. In the present study, an artificial neural network associated with image texture feature analysis is proposed to establish a predictable parameter model and automate the denoising procedure. In the proposed approach, a total of 83 image attributes were extracted in four categories: 1) basic image statistics, 2) the gray-level co-occurrence matrix (GLCM), 3) the gray-level run-length matrix (GLRLM), and 4) Tamura texture features. To rank the discriminative power of these texture features, a paired-samples t-test was applied to each individual image feature computed in every image. Subsequently, the sequential forward selection (SFS) method was used to select the best texture features according to this ranking. The selected optimal features were then incorporated into a back-propagation neural network to establish a predictable parameter model. A wide variety of MR images with various scenarios were adopted to evaluate the performance of the proposed framework. Experimental results indicated that this new automated system accurately predicted the bilateral filtering parameters and effectively removed the noise in a number of MR images. Compared to the manually tuned filtering process, our approach not only produced better denoised results but also saved significant processing time.
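The selection-plus-regression idea generalizes well beyond the paper. A sketch using scikit-learn with placeholder data (the paper's exact 83 features, t-test ranking, and network are not reproduced; array shapes and constants are assumptions):

    import numpy as np
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.random((60, 83))   # placeholder: 83 texture attributes per image
    y = rng.random(60)         # placeholder: best bilateral-filter parameter

    # Forward selection of a feature subset, then a neural regressor
    # that predicts the filter parameter from the selected features.
    sfs = SequentialFeatureSelector(MLPRegressor(max_iter=500),
                                    n_features_to_select=5, direction="forward")
    X_sel = sfs.fit_transform(X, y)
    model = MLPRegressor(max_iter=500).fit(X_sel, y)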
Flightspeed Integral Image Analysis Toolkit
NASA Technical Reports Server (NTRS)
Thompson, David R.
2009-01-01
The Flightspeed Integral Image Analysis Toolkit (FIIAT) is a C library that provides image analysis functions in a single, portable package. It provides basic low-level filtering, texture analysis, and subwindow descriptors for applications dealing with image interpretation and object recognition. Designed with spaceflight in mind, it addresses: ease of integration (minimal external dependencies); fast, real-time operation using integer arithmetic where possible (useful for platforms lacking a dedicated floating-point processor); implementation entirely in C (easily modified); mostly static memory allocation; and 8-bit image data. The basic goal of the FIIAT library is to compute meaningful numerical descriptors for images or rectangular image regions. These n-vectors can then be used directly for novelty detection or pattern recognition, or as a feature space for higher-level pattern recognition tasks. The library provides routines for leveraging training data to derive descriptors that are most useful for a specific data set. Its runtime algorithms exploit a structure known as the "integral image". This is a caching method that permits fast summation of values within rectangular regions of an image, and it facilitates a wide range of fast image-processing functions. The toolkit is applicable to a wide range of autonomous image analysis tasks in the space-flight domain, including novelty detection, object and scene classification, target detection for autonomous instrument placement, and science analysis of geomorphology. It makes real-time texture and pattern recognition possible for platforms with severe computational constraints, providing an order-of-magnitude speed increase over alternative software libraries currently in use by the research community. FIIAT can commercially support intelligent video cameras used in intelligent surveillance, and it is also useful for object recognition by robots or other autonomous vehicles.
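The integral-image trick at the heart of FIIAT is easy to state in code. A sketch in Python rather than the library's C, and not FIIAT's actual API:

    import numpy as np

    def integral_image(img):
        # Entry (y, x) holds the sum of img[:y+1, :x+1].
        return img.cumsum(axis=0).cumsum(axis=1)

    def box_sum(ii, top, left, bottom, right):
        # Sum over img[top:bottom+1, left:right+1] from four corner
        # lookups, independent of the size of the rectangle.
        total = ii[bottom, right]
        if top > 0:
            total -= ii[top - 1, right]
        if left > 0:
            total -= ii[bottom, left - 1]
        if top > 0 and left > 0:
            total += ii[top - 1, left - 1]
        return total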
The AAPM/RSNA physics tutorial for residents. Basic physics of MR imaging: an introduction.
Hendrick, R E
1994-07-01
This article provides an introduction to the basic physical principles of magnetic resonance (MR) imaging. Essential basic concepts such as nuclear magnetism, tissue magnetization, precession, excitation, and tissue relaxation properties are presented. Hydrogen spin density and tissue relaxation times T1, T2, and T2* are explained. The basic elements of a planar MR pulse sequence are described: section selection during tissue excitation, phase encoding, and frequency encoding during signal measurement.
A manual for microcomputer image analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rich, P.M.; Ranken, D.M.; George, J.S.
1989-12-01
This manual is intended to serve three basic purposes: as a primer in microcomputer image analysis theory and techniques, as a guide to the use of IMAGE©, a public domain microcomputer program for image analysis, and as a stimulus to encourage programmers to develop microcomputer software suited for scientific use. Topics discussed include the principles of image processing and analysis, the use of standard video for input and display, spatial measurement techniques, and the future of microcomputer image analysis. A complete reference guide that lists the commands for IMAGE is provided. IMAGE includes capabilities for digitization, input and output of images, hardware display lookup table control, editing, edge detection, histogram calculation, measurement along lines and curves, measurement of areas, examination of intensity values, output of analytical results, conversion between raster and vector formats, and region movement and rescaling. The control structure of IMAGE emphasizes efficiency, precision of measurement, and scientific utility. 18 refs., 18 figs., 2 tabs.
NASA Astrophysics Data System (ADS)
Vijayakumar, A.; Rosen, Joseph
2017-05-01
Coded aperture correlation holography (COACH) is a recently developed incoherent digital holographic technique. In COACH, two holograms are recorded: an object hologram for the object under study and another hologram, called the PSF hologram, for a point object. The holograms are recorded by interfering two beams, both diffracted from the same object point, only one of which passes through a random-like coded phase mask (CPM). The same CPM is used for recording both the object and the PSF holograms. The image is reconstructed by correlating the object hologram with a processed version of the PSF hologram. The COACH technique exhibits the same transverse and axial resolution as regular imaging, but with the unique capability of storing 3D information. The basic COACH configuration consists of a single spatial light modulator (SLM) used for displaying the CPM. In this study, the basic configuration is advanced by employing two SLMs in the setup. The refractive lens used in the basic COACH setup for collecting and collimating the light diffracted by the object is replaced by an SLM on which an equivalent diffractive lens is displayed. Unlike a refractive lens, the diffractive lens displayed on the first SLM focuses light of different wavelengths to different axial planes, separated by distances larger than the axial correlation lengths of the CPM for any visible wavelength. This characteristic extends COACH from three-dimensional to four-dimensional imaging, with wavelength as the fourth dimension.
Feature-Motivated Simplified Adaptive PCNN-Based Medical Image Fusion Algorithm in NSST Domain.
Ganasala, Padma; Kumar, Vinod
2016-02-01
Multimodality medical image fusion plays a vital role in diagnosis, treatment planning, and follow-up studies of various diseases. It provides a composite image containing critical information from the source images required for better localization and definition of different organs and lesions. In state-of-the-art image fusion methods based on the nonsubsampled shearlet transform (NSST) and the pulse-coupled neural network (PCNN), authors have used the normalized coefficient value to motivate the PCNN processing of both the low-frequency (LF) and high-frequency (HF) sub-bands. This blurs the fused image and decreases its contrast. The main objective of this work is to design an image fusion method that gives a fused image with better contrast and more detail information, suitable for clinical use. We propose a novel image fusion method utilizing a feature-motivated adaptive PCNN in the NSST domain for the fusion of anatomical images. The basic PCNN model is simplified, and an adaptive linking strength is used. Different features are used to motivate the PCNN processing of the LF and HF sub-bands. The proposed method is extended to the fusion of a functional image with an anatomical image in the improved nonlinear intensity hue and saturation (INIHS) color model. Extensive fusion experiments have been performed on CT-MRI and SPECT-MRI datasets. Visual and quantitative analysis of the experimental results proved that the proposed method provides satisfactory fusion outcomes compared to other image fusion methods.
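The paper's feature-motivated adaptive variant is not reproduced here, but the simplified PCNN it builds on is compact. A sketch (constants are illustrative, and the input S is assumed to be a float array normalized to [0, 1]):

    import numpy as np
    from scipy.ndimage import convolve

    def simplified_pcnn(S, beta=0.2, alpha=0.7, V=20.0, steps=10):
        # Feeding input F = S; linking L = weighted neighbourhood firing;
        # internal activity U = F * (1 + beta * L); the dynamic threshold
        # decays each step and jumps by V wherever a neuron fires.
        W = np.array([[0.5, 1.0, 0.5],
                      [1.0, 0.0, 1.0],
                      [0.5, 1.0, 0.5]])
        Y = np.zeros_like(S)
        theta = np.ones_like(S)
        fire_time = np.zeros_like(S)
        for n in range(1, steps + 1):
            L = convolve(Y, W, mode="constant")
            U = S * (1.0 + beta * L)
            Y = (U > theta).astype(float)
            theta = theta * np.exp(-alpha) + V * Y
            fire_time += n * Y
        return fire_time  # firing-time map, usable as a fusion feature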
Beef assessments using functional magnetic resonance imaging and sensory evaluation.
Tapp, W N; Davis, T H; Paniukov, D; Brooks, J C; Brashears, M M; Miller, M F
2017-04-01
Functional magnetic resonance imaging (fMRI) has been used to unveil how some foods and basic rewards are processed in the human brain. This study evaluated how resting-state functional connectivity in regions of the human brain changed after beef steaks of differing quality were consumed. Functional images of participants (n=8) were collected after they ate high- or low-quality beef steaks on separate days; after consumption, a sensory ballot was administered to evaluate consumers' perceptions of tenderness, juiciness, flavor, and overall liking. Imaging data showed that high-quality steak samples resulted in greater functional connectivity to the striatum, medial orbitofrontal cortex, and insular cortex at various stages after consumption (P≤0.05). Furthermore, high-quality steaks elicited higher sensory ballot scores for each palatability trait (P≤0.01). Together, these results suggest that resting-state fMRI may be a useful tool for evaluating the neural processes that follow positive sensory experiences such as the enjoyment of high-quality beef steaks. Published by Elsevier Ltd.
Kim, Kwang Baek; Park, Hyun Jun; Song, Doo Heon; Han, Sang-suk
2015-01-01
Ultrasound examination (US) plays a key role in the diagnosis and management of patients with clinically suspected appendicitis, the most common abdominal surgical emergency. Among the various sonographic findings of appendicitis, the outer diameter of the appendix is the most important, so clear delineation of the appendix on US images is essential. In this paper, we propose a new intelligent method to extract the appendix automatically from abdominal sonographic images, as a basic building block for developing such an intelligent tool for medical practitioners. Knowing that the appendix is located in the lower organ area below the bottom fascia line, we apply a series of image processing techniques to find the fascia line correctly, and then apply the fuzzy ART learning algorithm to the organ area in order to extract the appendix accurately. Experiments verify that the proposed method is highly accurate (successful in 38 out of 40 cases) in extracting the appendix.
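The fascia-line detection and the full pipeline are specific to the paper, but the fuzzy ART learning step itself can be sketched generically (the vigilance rho, choice parameter alpha, and learning rate beta below are illustrative; inputs are feature vectors scaled to [0, 1]):

    import numpy as np

    def fuzzy_art(samples, rho=0.75, alpha=0.001, beta=1.0):
        # Complement coding, category choice, vigilance test, learning.
        weights, labels = [], []
        for x in samples:
            I = np.concatenate([x, 1.0 - x])             # complement coding
            if not weights:
                weights.append(I.copy())
                labels.append(0)
                continue
            W = np.array(weights)
            match = np.minimum(I, W)                      # fuzzy AND with each prototype
            choice = match.sum(axis=1) / (alpha + W.sum(axis=1))
            for j in np.argsort(-choice):                 # best candidate first
                if match[j].sum() / I.sum() >= rho:       # vigilance test passed
                    weights[j] = beta * match[j] + (1 - beta) * weights[j]
                    labels.append(j)
                    break
            else:                                         # no resonance: new category
                weights.append(I.copy())
                labels.append(len(weights) - 1)
        return labels, weights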
Automated Ontology Generation Using Spatial Reasoning
NASA Astrophysics Data System (ADS)
Coalter, Alton; Leopold, Jennifer L.
Recently there has been much interest in using ontologies to facilitate knowledge representation, integration, and reasoning. Correspondingly, the extent of the information embodied by an ontology is increasing beyond the conventional is_a and part_of relationships. To address these requirements, a vast amount of digitally available information may need to be considered when building ontologies, prompting a desire for software tools to automate at least part of the process. The main efforts in this direction have involved textual information retrieval and extraction methods. For some domains, extension of the basic relationships could be further enhanced by the analysis of 2D and/or 3D images; for this type of media, image processing algorithms are more appropriate than textual analysis methods. Herein we present an algorithm that, given a collection of 3D image files, utilizes Qualitative Spatial Reasoning (QSR) to automate the creation of an ontology for the objects represented by the images, relating the objects through is_a and part_of relationships and also through unambiguous Region Connection Calculus (RCC) relations.
Granular computing with multiple granular layers for brain big data processing.
Wang, Guoyin; Xu, Ji
2014-12-01
Big data is the term for a collection of datasets so huge and complex that they are difficult to process using existing theoretical models and tools. Brain big data is one of the most typical and important kinds of big data, collected using powerful equipment such as functional magnetic resonance imaging, multichannel electroencephalography, magnetoencephalography, positron emission tomography, and near-infrared spectroscopic imaging, as well as various other devices. Granular computing with multiple granular layers, referred to as multi-granular computing (MGrC) for short hereafter, is an emerging computing paradigm of information processing that simulates the multi-granular intelligent thinking model of the human brain. It concerns the processing of complex information entities called information granules, which arise in the process of data abstraction and the derivation of information and even knowledge from data. This paper analyzes three basic mechanisms of MGrC, namely granularity optimization, granularity conversion, and multi-granularity joint computation, and discusses the potential of introducing MGrC into the intelligent processing of brain big data.
NASA Astrophysics Data System (ADS)
Arinilhaq; Widita, R.
2016-03-01
Diagnosis of macular degeneration using a Stratus OCT with the fast macular thickness map (FMTM) protocol produces six B-scan images of the macula from different angles. The images are converted into a retinal thickness chart evaluated against normal-distribution percentiles, so that the macula can be classified as of normal thickness or as exhibiting an abnormality (e.g., thickening or thinning). Unfortunately, the diagnostic images represent the retinal thickness in only a few areas of the macular region. This study therefore aims to obtain the retinal thickness over the entire macular area from the Stratus OCT's output images. Basically, the volumetric image is obtained by combining the six images. Reconstruction consists of a series of processes: pre-processing, segmentation, and interpolation. Linear interpolation is used to fill the empty pixels in the reconstruction matrix. Based on the results, this method provides retinal thickness maps of the macular surface and a 3D image of the macula. The retinal thickness map can display the macular areas with abnormalities, and the 3D image can show the abnormal tissue layers in the macula. The system cannot replace an ophthalmologist in diagnostic decision making.
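The interpolation step admits a one-call sketch (scan geometry and grid size are assumed; points holds the (y, x) sample locations gathered from the six B-scans and values their measured thicknesses):

    import numpy as np
    from scipy.interpolate import griddata

    def fill_thickness_map(points, values, grid_size=128):
        # Linear interpolation inside the convex hull of the samples;
        # pixels outside it stay NaN.
        yy, xx = np.mgrid[0:grid_size, 0:grid_size]
        return griddata(points, values, (yy, xx), method="linear")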
The Real-Time Monitoring Service Platform for Land Supervision Based on Cloud Integration
NASA Astrophysics Data System (ADS)
Sun, J.; Mao, M.; Xiang, H.; Wang, G.; Liang, Y.
2018-04-01
Remote sensing monitoring has become an important means for land and resources departments to strengthen supervision. Aiming at the problems of low monitoring frequency and poor data currency in current remote sensing monitoring, this paper presents the research and development of a cloud-integrated real-time monitoring service platform for land supervision, which increases the monitoring frequency by comprehensively acquiring domestic satellite image data and accelerates remote sensing image processing by exploiting intelligent dynamic processing technology for multi-source images. A pilot application in the Jinan Bureau of State Land Supervision has shown that this real-time monitoring method for land supervision is feasible. In addition, real-time monitoring and early-warning functions are provided for illegal land use, permanent basic farmland protection, and boundary breakthrough in urban development. The application has achieved remarkable results.
Deep learning with convolutional neural network in radiology.
Yasaka, Koichiro; Akai, Hiroyuki; Kunimatsu, Akira; Kiryu, Shigeru; Abe, Osamu
2018-04-01
Deep learning with a convolutional neural network (CNN) has recently been gaining attention for its high performance in image recognition. With this technique, images themselves can be used in the learning process, and feature extraction prior to learning is not required; important features are learned automatically. Thanks to developments in hardware and software as well as in deep learning techniques, applications of this technique to radiological images for predicting clinically useful information, such as the detection and evaluation of lesions, are beginning to be investigated. This article illustrates basic technical knowledge regarding deep learning with CNNs along the actual workflow (collecting data, implementing CNNs, and the training and testing phases). Pitfalls of this technique and how to manage them are also illustrated. We also describe some advanced topics of deep learning, the results of recent clinical studies, and future directions for the clinical application of deep learning techniques.
On-line object feature extraction for multispectral scene representation
NASA Technical Reports Server (NTRS)
Ghassemian, Hassan; Landgrebe, David
1988-01-01
A new on-line unsupervised object-feature extraction method is presented that reduces the complexity and cost associated with the analysis of multispectral image data and with data transmission, storage, archival, and distribution. The ambiguity in the object detection process can be reduced if the spatial dependencies that exist among adjacent pixels are intelligently incorporated into the decision-making process. A unity relation that must exist among the pixels of an object is defined. The Automatic Multispectral Image Compaction Algorithm (AMICA) uses the within-object pixel-feature gradient vector as valuable contextual information to construct the object's features, which preserve the class-separability information within the data. For on-line object extraction, the path hypothesis and the basic mathematical tools for its realization are introduced in terms of a specific similarity measure and adjacency relation. AMICA is applied to several sets of real image data, and the performance and reliability of the features are evaluated.
Thøgersen-Ntoumani, Cecilie; Ntoumanis, Nikos; Nikitaras, Nikitas
2010-06-01
This study used self-determination theory (Deci, E.L., & Ryan, R.M. (2000). The 'what' and 'why' of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry, 11, 227-268.) to examine predictors of body image concerns and unhealthy weight control behaviours in a sample of 350 Greek adolescent girls. A process model was tested which proposed that perceptions of parental autonomy support and two life goals (health and image) would predict adolescents' degree of satisfaction of their basic psychological needs. In turn, psychological need satisfaction was hypothesised to negatively predict body image concerns (i.e. drive for thinness and body dissatisfaction) and, indirectly, unhealthy weight control behaviours. The predictions of the model were largely supported indicating that parental autonomy support and adaptive life goals can indirectly impact upon the extent to which female adolescents engage in unhealthy weight control behaviours via facilitating the latter's psychological need satisfaction.
Automatic analysis of microscopic images of red blood cell aggregates
NASA Astrophysics Data System (ADS)
Menichini, Pablo A.; Larese, Mónica G.; Riquelme, Bibiana D.
2015-06-01
Red blood cell (RBC) aggregation is one of the most important factors in blood viscosity at stasis or at very low flow rates. The basic structure of aggregates is a linear array of cells commonly termed rouleaux. Enhanced or abnormal aggregation is seen in clinical conditions such as diabetes and hypertension, producing alterations in the microcirculation, some of which can be analyzed through the characterization of aggregated cells. Image processing and analysis for the characterization of RBC aggregation have frequently been done manually or semi-automatically using interactive tools. We propose a system that processes images of RBC aggregation and automatically obtains the characterization and quantification of the different types of RBC aggregates. The present technique could be attractive as a routine in hemorheological and clinical biochemistry laboratories because this automatic method is rapid, efficient and economical, and at the same time independent of the user performing the analysis, ensuring repeatability.
Optimization of the segmented method for optical compression and multiplexing system
NASA Astrophysics Data System (ADS)
Al Falou, Ayman
2002-05-01
Because of the constantly increasing demand for image exchange, and despite the ever-increasing bandwidth of networks, compression and multiplexing of images are becoming inseparable from their generation and display. For high-resolution, real-time motion pictures, performing compression electronically requires complex and time-consuming processing units. By contrast, through its inherently two-dimensional character, coherent optics is well suited to such processes, which are basically two-dimensional data handling in the Fourier domain. Additionally, the main limiting factor, the maximum frame rate, is vanishing because of recent improvements in spatial light modulator technology. The purpose of this communication is to benefit from recent optical correlation algorithms. The segmented filtering used to store multiple references in a given space-bandwidth-product optical filter can be applied to networks to compress and multiplex images in a given bandwidth channel.
ERIC Educational Resources Information Center
Yamagata, Satoshi
2018-01-01
The present study investigated the effects of two types of core-image-based basic verb learning approaches: the learner-centered and the teacher-centered approaches. The learner-centered approach was an activity in which participants found semantic relationships among several definitions of each basic target verb through a picture-elucidated card…
Soft computing approach to 3D lung nodule segmentation in CT.
Badura, P; Pietka, E
2014-10-01
This paper presents a novel, multilevel approach to the segmentation of various types of pulmonary nodules in computed tomography studies. It is based on two branches of computational intelligence: fuzzy connectedness (FC) and evolutionary computation. First, the image and auxiliary data are prepared for the 3D FC analysis during the first stage of the algorithm, mask generation. Its main goal is to handle specific types of nodules attached to the pleura or vessels, and it consists of basic image processing operations as well as dedicated routines for specific cases of nodules. Evolutionary computation is performed on the image and seed points in order to shorten the FC analysis and improve its accuracy. After the FC step, any remaining vessels are removed during the postprocessing stage. The method has been validated using the first dataset of studies acquired and described by the Lung Image Database Consortium (LIDC) and by its latest release, the LIDC-IDRI (Image Database Resource Initiative) database. Copyright © 2014 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Thompson, Robin L.; Vinson, David P.; Vigliocco, Gabriella
2010-01-01
Signed languages exploit the visual/gestural modality to create iconic expression across a wide range of basic conceptual structures in which the phonetic resources of the language are built up into an analogue of a mental image (Taub, 2001). Previously, we demonstrated a processing advantage when iconic properties of signs were made salient in a…
ERIC Educational Resources Information Center
Villarreal, Ronald P.; Steinmetz, Joseph E.
2005-01-01
How the nervous system encodes learning and memory processes has interested researchers for 100 years. Over this span of time, a number of basic neuroscience methods has been developed to explore the relationship between learning and the brain, including brain lesion, stimulation, pharmacology, anatomy, imaging, and recording techniques. In this…
NASA Technical Reports Server (NTRS)
Metcalfe, A. G.; Bodenheimer, R. E.
1976-01-01
A parallel algorithm for counting the number of logic-1 elements in a binary array or image, developed during a preliminary investigation of the Tse concept, is described. The counting algorithm is implemented using a basic combinational structure, and modifications which improve the efficiency of the basic structure are also presented. A programmable Tse computer structure is proposed, along with a hardware control unit, a Tse instruction set, and a software program for execution of the counting algorithm. Finally, the different structures are compared in terms of their more important characteristics.
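The Tse hardware itself is combinational, but its counting strategy, a log-depth reduction in which every stage operates on all elements in parallel, can be mimicked in software. A sketch (an analogy only, not the paper's structure):

    import numpy as np

    def count_ones_parallel(img):
        # Pairwise-add halves of the array; each stage is one
        # elementwise (fully parallel) addition, log2(n) stages total.
        counts = img.astype(np.int64).ravel()
        while counts.size > 1:
            if counts.size % 2:                  # pad odd lengths with a zero
                counts = np.append(counts, 0)
            counts = counts[::2] + counts[1::2]
        return int(counts[0])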
Franek, Michal; Suchánková, Jana; Sehnalová, Petra; Krejčí, Jana; Legartová, Soňa; Kozubek, Stanislav; Večeřa, Josef; Sorokin, Dmitry V; Bártová, Eva
2016-04-01
Studies on fixed samples or genome-wide analyses of nuclear processes are useful for generating snapshots of a cell population at a particular time point. However, these experimental approaches do not provide information at the single-cell level. Genome-wide studies cannot assess variability between individual cells that are cultured in vitro or originate from different pathological stages. Immunohistochemistry and immunofluorescence are fundamental experimental approaches in clinical laboratories and are also widely used in basic research. However, the fixation procedure may generate artifacts and prevents monitoring of the dynamics of nuclear processes. Therefore, live-cell imaging is critical for studying the kinetics of basic nuclear events, such as DNA replication, transcription, splicing, and DNA repair. This review covers advanced microscopy analyses of cells, with a particular focus on live cells. We note some methodological innovations and new options for microscope systems that can also be used to study tissue sections. Cornerstone methods for the biophysical study of living cells, such as fluorescence recovery after photobleaching and fluorescence resonance energy transfer, are also discussed, as are studies of the effects of radiation at the level of individual cells.
Unsupervised tattoo segmentation combining bottom-up and top-down cues
NASA Astrophysics Data System (ADS)
Allen, Josef D.; Zhao, Nan; Yuan, Jiangbo; Liu, Xiuwen
2011-06-01
Tattoo segmentation is challenging due to the complexity and large variance of tattoo structures. We have developed a segmentation algorithm for finding tattoos in an image. Our basic idea is split-merge: split each tattoo image into clusters through a bottom-up process, learn to merge the clusters containing skin, and then distinguish tattoo from other skin via a top-down prior derived from the image itself. Tattoo segmentation with an unknown number of clusters is thereby transformed into a figure-ground segmentation. We have applied our segmentation algorithm to a tattoo dataset, and the results show that our tattoo segmentation system is efficient and suitable for further tattoo classification and retrieval purposes.
Tost, H; Meyer-Lindenberg, A; Ruf, M; Demirakça, T; Grimm, O; Henn, F A; Ende, G
2005-02-01
Modern neuroimaging techniques such as magnetic resonance imaging (MRI) and positron emission tomography (PET) have contributed tremendously to our current understanding of psychiatric disorders in the context of functional, biochemical and microstructural alterations of the brain. Since the mid-nineties, functional MRI has provided major insights into the neurobiological correlates of signs and symptoms in schizophrenia. The current paper reviews important fMRI studies of the past decade in the domains of motor, visual, auditory, attentional and working memory function. Special emphasis is given to new methodological approaches, such as the visualisation of medication effects and the functional characterisation of risk genes.
NASA Astrophysics Data System (ADS)
Gektin, Yu. M.; Egoshkin, N. A.; Eremeev, V. V.; Kuznecov, A. E.; Moskatinyev, I. V.; Smelyanskiy, M. B.
2017-12-01
A set of standardized models and algorithms for the geometric normalization and georeferencing of images from geostationary and highly elliptical Earth observation systems is considered. The algorithms can process information from modern scanning multispectral sensors with two-coordinate scanning and represent normalized images in an optimal projection. Problems of high-precision ground calibration of the imaging equipment using reference objects, as well as issues of in-flight calibration and refinement of the geometric models using absolute and relative reference points, are considered. Practical testing of the models, algorithms, and technologies was performed in the calibration of sensors for spacecraft of the Electro-L series and in simulations of the prospective Arktika system.
Basic radiological assessment of synovial diseases: a pictorial essay
Turan, Aynur; Çeltikçi, Pınar; Tufan, Abdurrahman; Öztürk, Mehmet Akif
2017-01-01
The synovium is a specialized tissue lining the synovial joints, bursae, and tendon sheaths of the body. It is affected by various localized or systemic disorders. Synovial diseases can be classified as inflammatory, infectious, degenerative, traumatic, hemorrhagic, and neoplastic. Damage in other intraarticular structures, particularly cartilages, generally occurs as a part of pathologic processes involving the synovium, leading to irreversible joint destruction. Imaging has an essential role in the early detection of synovial diseases prior to irreversible joint damage. Obtaining and understanding characteristic imaging findings of synovial diseases enables a proper diagnosis for early treatment. This article focuses on the recent literature that is related with the role of imaging in synovial disease. PMID:28638696
Use of ultrasmall superparamagnetic iron oxide particles for imaging carotid atherosclerosis.
Usman, Ammara; Sadat, Umar; Patterson, Andrew J; Tang, Tjun Y; Varty, Kevin; Boyle, Jonathan R; Armon, Mathew P; Hayes, Paul D; Graves, Martin J; Gillard, Jonathan H
2015-10-01
Based on the results of histopathological studies, inflammation within atherosclerotic tissue is now widely accepted as a key determinant of the disease process. Conventional imaging methods can highlight the location and degree of luminal stenosis but not the inflammatory activity of the plaque. Iron oxide-based MRI contrast media, particularly ultrasmall superparamagnetic particles of iron oxide, have shown potential in assessing atheromatous plaque inflammation and in determining the efficacy of antiatherosclerosis pharmacological treatments. In this paper, we review current data on the use of ultrasmall superparamagnetic iron oxides in atherosclerosis imaging, with a focus on ferumoxtran-10 and ferumoxytol. The basic chemistry, pharmacokinetics and dynamics, potential applications, limitations and future perspectives of these contrast media nanoparticles are discussed.
Fast auto-focus scheme based on optical defocus fitting model
NASA Astrophysics Data System (ADS)
Wang, Yeru; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting; Cen, Min
2018-04-01
An optical defocus fitting model-based (ODFM) auto-focus scheme is proposed. Starting from the basic principle of optical defocus, an optical defocus fitting model is derived to approximate the potential in-focus position. With this accurate modeling, the proposed auto-focus scheme can make the stepping motor approach the focal plane more accurately and rapidly. Two fitting positions are first determined for an arbitrary initial stepping-motor position. Three images (the initial image and two fitting images) are then collected at these positions to estimate the potential in-focus position using the proposed ODFM method. Around the estimated position, two reference images are recorded. The auto-focus procedure is completed by processing these two reference images and the potential in-focus image to confirm the in-focus position using a contrast-based method. Experimental results show that the proposed scheme can complete auto-focus within only 5 to 7 steps, with good performance even under low-light conditions.
A 3D Reconstruction Strategy of Vehicle Outline Based on Single-Pass Single-Polarization CSAR Data.
Leping Chen; Daoxiang An; Xiaotao Huang; Zhimin Zhou
2017-11-01
In the last few years, interest in circular synthetic aperture radar (CSAR) acquisitions has arisen as a consequence of the potential for 3D reconstructions over a 360° azimuth angle variation. In real-world scenarios, full 3D reconstruction of arbitrary targets needs multi-pass data, which makes the processing complex, costly, and time-consuming. In this paper, we propose a processing strategy for the 3D reconstruction of vehicles that avoids multi-pass data by introducing a priori information about the vehicle's shape. Moreover, the proposed strategy needs only single-pass, single-polarization CSAR data to perform the 3D reconstruction, which makes the processing much more economical and efficient. First, an analysis of the distribution of attributed scattering centers from a vehicle facet model is presented. The analysis shows that a smooth and continuous basic outline of the vehicle can be extracted from the peak curve of a noncoherently processed image. Second, the 3D location of the vehicle roofline is inferred from layover with empirical insets of the basic outline. Finally, the basic outline and roofline of the vehicle are used to estimate the vehicle's 3D information and constitute the vehicle's 3D outline. Results on simulated and measured data prove the correctness and effectiveness of the proposed strategy.
The Study of Residential Areas Extraction Based on GF-3 Texture Image Segmentation
NASA Astrophysics Data System (ADS)
Shao, G.; Luo, H.; Tao, X.; Ling, Z.; Huang, Y.
2018-04-01
The study uses standard stripmap, dual-polarization SAR images from GF-3 as the basic data. Processes and methods for extracting residential areas based on texture segmentation of GF-3 images are compared and analyzed. Processing of the GF-3 images includes radiometric calibration, complex-data conversion, multi-look processing, and image filtering; a suitability analysis of different filtering methods shows that the Kuan filter is effective for extracting residential areas. We then calculated and analyzed texture feature vectors using the gray-level co-occurrence matrix (GLCM), whose parameters include the moving window size, step size, and angle; a window size of 11x11, a step of 1, and an angle of 0° proved effective and optimal for extracting residential areas. Using the fractal net evolution approach (FNEA), we segmented the GLCM texture images and extracted the residential areas by thresholding. The extraction result was verified and assessed with a confusion matrix: the overall accuracy is 0.897 and the kappa coefficient is 0.881. We also extracted residential areas by SVM classification on the GF-3 images; its overall accuracy is 0.09 lower than that of the texture-segmentation-based extraction. We conclude that residential area extraction based on multi-scale segmentation of GF-3 SAR texture images is simple and highly accurate. Since it is difficult to obtain multispectral remote sensing images in southern China, which is cloudy and rainy throughout the year, this approach has practical reference value.
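The FNEA segmentation is not sketched here, but the GLCM texture measure at the reported settings (step 1, angle 0°) is straightforward; the statistic below is contrast, and the 16 gray levels are an assumption:

    import numpy as np

    def glcm_contrast(window, levels=16):
        # Quantize the window, accumulate co-occurrences of horizontally
        # adjacent pixels (step 1, angle 0 deg), normalize, then compute
        # the contrast statistic.
        q = np.floor(window / (window.max() + 1e-9) * levels).astype(int)
        q = np.clip(q, 0, levels - 1)
        glcm = np.zeros((levels, levels))
        np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
        glcm /= glcm.sum()
        i, j = np.indices(glcm.shape)
        return ((i - j) ** 2 * glcm).sum()

    # Sliding this over the image (window 11x11, stride 1) yields one
    # texture band per statistic, which is then segmented.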
Lithium modified zeolite synthesis for conversion of biodiesel-derived glycerol to polyglycerol
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ayoub, Muhammad, E-mail: muhammad.ayoub@petronas.com.my; Abdullah, Ahmad Zuhairi, E-mail: chzuhairi@usm.my; Inayat, Abrar, E-mail: abrar.inayat@petronas.com.my
Basic zeolites have received significant attention in the catalysis community. Zeolites modified with alkali metals are potential replacements for existing zeolite catalysts owing to their unique features and added advantages. The present paper covers the preparation of lithium-modified zeolite Y (Li-ZeY) and its activity for the solvent-free conversion of biodiesel-derived glycerol to polyglycerol via an etherification process. The modified zeolite was characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM) and nitrogen adsorption. The SEM images showed no change in the morphology of the zeolite structure after lithium modification. XRD patterns showed that the structure of the zeolite was retained after lithium modification. The surface properties of the parent and modified zeolite were also examined by the N2 adsorption-desorption technique, which revealed some changes in surface area and pore size. In addition, the basic strength of the prepared materials was measured with Hammett indicators, showing that the basic strength of Li-ZeY was greatly improved. The modified zeolite was found to be a highly thermally stable and active heterogeneous basic catalyst for the conversion of solvent-free glycerol to polyglycerol. The reaction was conducted at different temperatures, and 260 °C was found to be the most active temperature for this process over this basic catalyst for reaction times of 6 to 12 h in the absence of solvent.
Darquenne, Chantal; Fleming, John S; Katz, Ira; Martin, Andrew R; Schroeter, Jeffry; Usmani, Omar S; Venegas, Jose; Schmid, Otmar
2016-04-01
Development of a new drug for the treatment of lung disease is a complex and time consuming process involving numerous disciplines of basic and applied sciences. During the 2015 Congress of the International Society for Aerosols in Medicine, a group of experts including aerosol scientists, physiologists, modelers, imagers, and clinicians participated in a workshop aiming at bridging the gap between basic research and clinical efficacy of inhaled drugs. This publication summarizes the current consensus on the topic. It begins with a short description of basic concepts of aerosol transport and a discussion on targeting strategies of inhaled aerosols to the lungs. It is followed by a description of both computational and biological lung models, and the use of imaging techniques to determine aerosol deposition distribution (ADD) in the lung. Finally, the importance of ADD to clinical efficacy is discussed. Several gaps were identified between basic science and clinical efficacy. One gap between scientific research aimed at predicting, controlling, and measuring ADD and the clinical use of inhaled aerosols is the considerable challenge of obtaining, in a single study, accurate information describing the optimal lung regions to be targeted, the effectiveness of targeting determined from ADD, and some measure of the drug's effectiveness. Other identified gaps were the language and methodology barriers that exist among disciplines, along with the significant regulatory hurdles that need to be overcome for novel drugs and/or therapies to reach the marketplace and benefit the patient. Despite these gaps, much progress has been made in recent years to improve clinical efficacy of inhaled drugs. Also, the recent efforts by many funding agencies and industry to support multidisciplinary networks including basic science researchers, R&D scientists, and clinicians will go a long way to further reduce the gap between science and clinical efficacy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perrine, Kenneth A.; Hopkins, Derek F.; Lamarche, Brian L.
2005-09-01
Biologists and computer engineers at Pacific Northwest National Laboratory have specified, designed, and implemented a hardware/software system for performing real-time, multispectral image processing on a confocal microscope. This solution is intended to extend the capabilities of the microscope, enabling scientists to conduct advanced experiments on cell signaling and other kinds of protein interactions. FRET (fluorescence resonance energy transfer) techniques are used to locate and monitor protein activity. In FRET, it is critical that spectral images be precisely aligned with each other despite disturbances in the physical imaging path caused by imperfections in lenses and cameras, and by expansion and contraction of materials due to temperature changes. The central importance of this work is therefore automatic image registration. This runs in a framework that guarantees real-time performance (processing pairs of 1024x1024, 8-bit images at 15 frames per second) and enables the addition of other types of advanced image processing algorithms such as image feature characterization. The supporting system architecture consists of a Visual Basic front-end containing a series of on-screen interfaces for controlling various aspects of the microscope and a script engine for automation. One of the controls is an ActiveX component written in C++ for handling the control and transfer of images. This component interfaces with a pair of LVDS image capture boards and a PCI board containing a 6-million gate Xilinx Virtex-II FPGA. Several types of image processing are performed on the FPGA in a pipelined fashion, including the image registration. The FPGA offloads work that would otherwise need to be performed by the main CPU and has a guaranteed real-time throughput. Image registration is performed in the FPGA by applying a cubic warp on one image to precisely align it with the other image. Before each experiment, an automated calibration procedure is run in order to set up the cubic warp. During image acquisitions, the cubic warp is evaluated by way of forward differencing. Unwanted pixelation artifacts are minimized by bilinear sampling. The resulting system is state-of-the-art for biological imaging. Precisely registered images enable the reliable use of FRET techniques. In addition, real-time image processing performance allows computed images to be fed back and displayed to scientists immediately, and the pipelined nature of the FPGA allows additional image processing algorithms to be incorporated into the system without slowing throughput.
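The registration step described above — evaluating a polynomial warp per pixel and resampling with bilinear interpolation — can be sketched in software as below. This is a NumPy illustration of the general technique, not the FPGA implementation; the cubic-coefficient arrays are placeholders standing in for values from the calibration procedure.

```python
# Illustrative sketch (not the FPGA pipeline): warp one image onto another
# with a polynomial coordinate mapping and bilinear sampling.
import numpy as np

def bilinear_sample(img, x, y):
    """Sample img at float coordinates (x, y) with bilinear interpolation."""
    x0 = np.clip(np.floor(x).astype(int), 0, img.shape[1] - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, img.shape[0] - 2)
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * img[y0, x0] +
            fx * (1 - fy) * img[y0, x0 + 1] +
            (1 - fx) * fy * img[y0 + 1, x0] +
            fx * fy * img[y0 + 1, x0 + 1])

def cubic_warp(img, coeffs_x, coeffs_y):
    """Evaluate x' = P(x, y), y' = Q(x, y) per pixel. coeffs_* are 10-element
    arrays for the terms 1, x, y, x^2, xy, y^2, x^3, x^2 y, x y^2, y^3
    (placeholder values assumed to come from calibration)."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(float)
    terms = np.stack([np.ones_like(xs), xs, ys, xs**2, xs*ys, ys**2,
                      xs**3, xs**2*ys, xs*ys**2, ys**3])
    xp = np.tensordot(coeffs_x, terms, axes=1)
    yp = np.tensordot(coeffs_y, terms, axes=1)
    return bilinear_sample(img, np.clip(xp, 0, img.shape[1] - 1),
                           np.clip(yp, 0, img.shape[0] - 1))
```

On the FPGA, the polynomial is evaluated incrementally by forward differencing rather than recomputed per pixel as here.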
Remote sensing programs and courses in engineering and water resources
NASA Technical Reports Server (NTRS)
Kiefer, R. W.
1981-01-01
The content of typical basic and advanced remote sensing and image interpretation courses are described and typical remote sensing graduate programs of study in civil engineering and in interdisciplinary environmental remote sensing and water resources management programs are outlined. Ideally, graduate programs with an emphasis on remote sensing and image interpretation should be built around a core of five courses: (1) a basic course in fundamentals of remote sensing upon which the more specialized advanced remote sensing courses can build; (2) a course dealing with visual image interpretation; (3) a course dealing with quantitative (computer-based) image interpretation; (4) a basic photogrammetry course; and (5) a basic surveying course. These five courses comprise up to one-half of the course work required for the M.S. degree. The nature of other course work and thesis requirements vary greatly, depending on the department in which the degree is being awarded.
Study on the Classification of GAOFEN-3 Polarimetric SAR Images Using Deep Neural Network
NASA Astrophysics Data System (ADS)
Zhang, J.; Zhang, J.; Zhao, Z.
2018-04-01
Polarimetric Synthetic Aperture Radar (POLSAR) imaging is, by its imaging principle, affected by speckle noise, so the recognition accuracy of traditional image classification methods is reduced by this interference. Since their introduction, deep convolutional neural networks have transformed traditional image processing and brought the field of computer vision to a new stage, with a strong ability to learn deep features and an excellent ability to fit large datasets. Based on the basic characteristics of polarimetric SAR images, this paper studies surface-cover classification using deep learning. We fused fully polarimetric SAR features at different scales into RGB images, iteratively trained the GoogLeNet convolutional neural network model on them, and then used the trained model to test classification on a validation dataset. First of all, referring to optical imagery, we labeled the surface-cover types in a GF-3 POLSAR image with 8 m resolution and collected samples by category. To meet the GoogLeNet requirement of 256 × 256 pixel image inputs, and taking the limited SAR resolution into account, the original image was resampled during pre-processing. POLSAR image slice samples at different scales, with sampling intervals of 2 m and 1 m, were trained separately and validated on the verification dataset. The training accuracy of the GoogLeNet model trained with polarimetric SAR images resampled to 2 m is 94.89 %, and that of the model trained with SAR images resampled to 1 m is 92.65 %.
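A minimal fine-tuning sketch in the spirit of the training described above, using torchvision's GoogLeNet; the dataset path, class count, and hyperparameters are assumptions, and the paper's exact training setup is not reproduced here.

```python
# Hedged sketch: fine-tune a pretrained GoogLeNet on SAR patch images.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tf = transforms.Compose([transforms.Resize(256), transforms.CenterCrop(224),
                         transforms.ToTensor()])
# torchvision's GoogLeNet expects 224x224 inputs; patches are resized accordingly.
train_set = datasets.ImageFolder("polsar_patches/train", transform=tf)  # assumed layout
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.googlenet(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))  # new class head

opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()
model.train()
for imgs, labels in loader:
    opt.zero_grad()
    out = model(imgs)
    logits = out.logits if hasattr(out, "logits") else out  # aux heads in train mode
    loss_fn(logits, labels).backward()
    opt.step()
```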
Pandey, Shilpa; Hakky, Michael; Kwak, Ellie; Jara, Hernan; Geyer, Carl A; Erbay, Sami H
2013-05-01
Neurovascular imaging studies are routinely used for the assessment of headaches and changes in mental status, stroke workup, and evaluation of the arteriovenous structures of the head and neck. These imaging studies are being performed with greater frequency as the aging population continues to increase. Magnetic resonance (MR) angiographic imaging techniques are helpful in this setting. However, mastering these techniques requires an in-depth understanding of the basic principles of physics, complex flow patterns, and the correlation of MR angiographic findings with conventional MR imaging findings. More than one imaging technique may be used to solve difficult cases, with each technique contributing unique information. Unfortunately, incorporating findings obtained with multiple imaging modalities may add to the diagnostic challenge. To ensure diagnostic accuracy, it is essential that the radiologist carefully evaluate the details provided by these modalities in light of basic physics principles, the fundamentals of various imaging techniques, and common neurovascular imaging pitfalls. ©RSNA, 2013.
A Perspective of the future of nuclear medicine training and certification
Arevalo-Perez, Julio; Paris, Manuel; Graham, Michael M.; Osborne, Joseph R.
2016-01-01
Nuclear Medicine has evolved from a medical subspecialty using quite basic tests to one using elaborate methods to image organ physiology and has truly become “Molecular Imaging”. Concurrently, there has also been a timely debate about who has to be responsible for keeping pace with all of the components of the developmental cycle; imaging, radiopharmaceuticals and instrumentation. Since the foundation of the ABNM, the practice of Nuclear Medicine and the process toward certification have undergone major revisions. At present, the debate is focused on the inevitable future convergence of Radiology and Nuclear Medicine. The potential for further cooperation or fusion of the American Board of Radiology (ABR) and the American Board of Nuclear Medicine (ABNM) is likely to bring about a new path for Nuclear Medicine and Molecular Imaging training. If the merger is done carefully, respecting the strengths of both partners equally, there is an excellent potential to create a hybrid Nuclear Medicine – Radiology specialty that combines Physiology and Molecular Biology with detailed anatomic imaging that will sustain the innovation that has been central to nuclear medicine residency and practice. Herein, we also introduce a few basic trends in imaging utilization in the United States. These trends do not predict future utilization, but highlight the need for an appropriately credentialed practitioner to interpret these examinations and provide value to the healthcare system. PMID:26687859
Basic disturbances of information processing in psychosis prediction.
Bodatsch, Mitja; Klosterkötter, Joachim; Müller, Ralf; Ruhrmann, Stephan
2013-01-01
The basic symptoms (BS) approach provides a valid instrument in predicting psychosis onset and moreover represents a significant heuristic framework for research. The term "basic symptoms" denotes subtle changes of cognition and perception in the earliest and prodromal stages of psychosis development. BS are thought to correspond to disturbances of neural information processing. Following the heuristic implications of the BS approach, the present paper aims at exploring disturbances of information processing, revealed by functional magnetic resonance imaging (fMRI) and electroencephalography (EEG), as characteristics of the at-risk state of psychosis. Furthermore, since high-risk studies employing ultra-high-risk criteria revealed non-conversion rates commonly exceeding 50%, thus warranting approaches that increase specificity, the potential contribution of neural information processing disturbances to psychosis prediction is reviewed. In summary, the at-risk state seems to be associated with information processing disturbances. Moreover, fMRI investigations suggested that disturbances of language processing domains might be a characteristic of the prodromal state. Neurophysiological studies revealed that disturbances of sensory processing may assist psychosis prediction in allowing for a quantification of risk in terms of magnitude and time. The latter finding represents a significant advancement, since an estimation of the time to event has not yet been achieved by clinical approaches. Some evidence suggests a close relationship between self-experienced BS and neural information processing. With regard to future research, the relationship between neural information processing disturbances and different clinical risk concepts warrants further investigation. Thereby, a possible time sequence in the prodromal phase might be of particular interest.
Optical threshold secret sharing scheme based on basic vector operations and coherence superposition
NASA Astrophysics Data System (ADS)
Deng, Xiaopeng; Wen, Wei; Mi, Xianwu; Long, Xuewen
2015-04-01
We propose, to our knowledge for the first time, a simple optical algorithm for secret image sharing with the (2,n) threshold scheme based on basic vector operations and coherence superposition. The secret image to be shared is firstly divided into n shadow images by use of basic vector operations. In the reconstruction stage, the secret image can be retrieved by recording the intensity of the coherence superposition of any two shadow images. Compared with published encryption techniques which focus narrowly on information encryption, the proposed method can realize information encryption as well as secret sharing, which further ensures the safety and integrity of the secret information and prevents power from being kept centralized and abused. The feasibility and effectiveness of the proposed method are demonstrated by numerical results.
NASA Astrophysics Data System (ADS)
Hasegawa, Hideyuki
2017-07-01
The range spatial resolution is an important factor determining image quality in ultrasonic imaging. It depends on the ultrasonic pulse length, which is determined by the mechanical response of the piezoelectric element in an ultrasonic probe. To improve the range spatial resolution without replacing the transducer element, in the present study methods based on maximum likelihood (ML) estimation and multiple signal classification (MUSIC) were proposed. The proposed methods were applied to echo signals received by individual transducer elements in an ultrasonic probe. The basic experimental results showed that the axial width at half maximum of the echo from a string phantom was improved from 0.21 mm (conventional method) to 0.086 mm (ML) and 0.094 mm (MUSIC).
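To illustrate the MUSIC side of the approach in generic terms, the sketch below estimates an echo delay (i.e., range) from multi-channel frequency-domain data via the classic MUSIC pseudospectrum. The array shapes, frequencies, and single-scatterer setup are my assumptions; this is a textbook formulation, not the authors' exact processing chain.

```python
# Generic frequency-domain MUSIC sketch for echo-delay (range) estimation.
import numpy as np

def music_delay_spectrum(X, freqs, delays, n_sources=1):
    """X: (n_freq, n_snapshots) frequency-domain snapshots of the echo.
    Returns the MUSIC pseudospectrum over candidate delays."""
    R = X @ X.conj().T / X.shape[1]              # sample covariance (n_freq x n_freq)
    w, V = np.linalg.eigh(R)                      # eigenvalues in ascending order
    En = V[:, : X.shape[0] - n_sources]           # noise subspace
    spec = np.empty(len(delays))
    for i, tau in enumerate(delays):
        a = np.exp(-2j * np.pi * freqs * tau)     # delay steering vector
        spec[i] = 1.0 / np.real((a.conj() @ En) @ (En.conj().T @ a))
    return spec

# Toy usage: one scatterer at 1.0 us delay, 64 frequency bins, 32 snapshots.
rng = np.random.default_rng(0)
freqs = np.linspace(2e6, 6e6, 64)
true_tau = 1.0e-6
X = (np.exp(-2j * np.pi * freqs * true_tau)[:, None] * rng.normal(size=(1, 32))
     + 0.05 * (rng.normal(size=(64, 32)) + 1j * rng.normal(size=(64, 32))))
delays = np.linspace(0.5e-6, 1.5e-6, 500)
tau_hat = delays[np.argmax(music_delay_spectrum(X, freqs, delays))]
print(f"estimated delay: {tau_hat * 1e6:.3f} us")
```

The sharp pseudospectrum peak, rather than the pulse envelope, is what allows resolution beyond the pulse-length limit.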
NASA Astrophysics Data System (ADS)
Bianchetti, Raechel Anne
Remotely sensed images have become a ubiquitous part of our daily lives. From novice users, aiding in search and rescue missions using tools such as TomNod, to trained analysts, synthesizing disparate data to address complex problems like climate change, imagery has become central to geospatial problem solving. Expert image analysts are continually faced with rapidly developing sensor technologies and software systems. In response to these cognitively demanding environments, expert analysts develop specialized knowledge and analytic skills to address increasingly complex problems. This study identifies the knowledge, skills, and analytic goals of expert image analysts tasked with identification of land cover and land use change. Analysts participating in this research are currently working as part of a national level analysis of land use change, and are well versed with the use of TimeSync, forest science, and image analysis. The results of this study benefit current analysts as it improves their awareness of their mental processes used during the image interpretation process. The study also can be generalized to understand the types of knowledge and visual cues that analysts use when reasoning with imagery for purposes beyond land use change studies. Here a Cognitive Task Analysis framework is used to organize evidence from qualitative knowledge elicitation methods for characterizing the cognitive aspects of the TimeSync image analysis process. Using a combination of content analysis, diagramming, semi-structured interviews, and observation, the study highlights the perceptual and cognitive elements of expert remote sensing interpretation. Results show that image analysts perform several standard cognitive processes, but flexibly employ these processes in response to various contextual cues. Expert image analysts' ability to think flexibly during their analysis process was directly related to their amount of image analysis experience. Additionally, results show that the basic Image Interpretation Elements continue to be important despite technological augmentation of the interpretation process. These results are used to derive a set of design guidelines for developing geovisual analytic tools and training to support image analysis.
Informatics in radiology (infoRAD): HTML and Web site design for the radiologist: a primer.
Ryan, Anthony G; Louis, Luck J; Yee, William C
2005-01-01
A Web site has enormous potential as a medium for the radiologist to store, present, and share information in the form of text, images, and video clips. With a modest amount of tutoring and effort, designing a site can be as painless as preparing a Microsoft PowerPoint presentation. The site can then be used as a hub for the development of further offshoots (eg, Web-based tutorials, storage for a teaching library, publication of information about one's practice, and information gathering from a wide variety of sources). By learning the basics of hypertext markup language (HTML), the reader will be able to produce a simple and effective Web page that permits display of text, images, and multimedia files. The process of constructing a Web page can be divided into five steps: (a) creating a basic template with formatted text, (b) adding color, (c) importing images and multimedia files, (d) creating hyperlinks, and (e) uploading one's page to the Internet. This Web page may be used as the basis for a Web-based tutorial comprising text documents and image files already in one's possession. Finally, there are many commercially available packages for Web page design that require no knowledge of HTML.
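As a toy companion to the five steps above, the snippet below writes a minimal page with formatted text, an image, and a hyperlink. Since the code examples in this document are in Python, the HTML is emitted from a short Python script; the file and image names are placeholders.

```python
# Toy sketch: write a minimal Web page (formatted text, an image, a hyperlink).
html = """<!DOCTYPE html>
<html>
<head><title>Teaching File</title></head>
<body style="background-color:#ffffff">
  <h1>Case 1: Chest Radiograph</h1>
  <p>Frontal view demonstrating the finding of interest.</p>
  <img src="case1.jpg" alt="Chest radiograph" width="512">
  <p><a href="tutorial.html">Go to the tutorial</a></p>
</body>
</html>
"""
with open("index.html", "w") as f:
    f.write(html)
```

The resulting index.html can then be uploaded to a server as the fifth step describes.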
A new programming metaphor for image processing procedures
NASA Technical Reports Server (NTRS)
Smirnov, O. M.; Piskunov, N. E.
1992-01-01
Most image processing systems, besides an Application Program Interface (API) which lets users write their own image processing programs, also feature a higher level of programmability. Traditionally, this is a command or macro language, which can be used to build large procedures (scripts) out of simple programs or commands. This approach, a legacy of the teletypewriter, has serious drawbacks. A command language is clumsy when (and if!) it attempts to utilize the capabilities of a multitasking or multiprocessor environment, it is barely adequate for real-time data acquisition and processing, it has a fairly steep learning curve, and the user interface is very inefficient, especially when compared to a graphical user interface (GUI) that systems running under X11 or Windows should otherwise be able to provide. All these difficulties stem from one basic problem: a command language is not a natural metaphor for an image processing procedure. A more natural metaphor, an image processing factory, is described in detail. A factory is a set of programs (applications) that execute separate operations on images, connected by pipes that carry data (images and parameters) between them. The programs function concurrently, processing images as they arrive along pipes, and querying the user for whatever other input they need. From the user's point of view, programming (constructing) factories is a lot like playing with LEGO blocks - much more intuitive than writing scripts. Focus is on some of the difficulties of implementing factory support, most notably the design of an appropriate API. It is also shown that factories retain all the functionality of a command language (including loops and conditional branches), while suffering from none of the drawbacks outlined above. Other benefits of factory programming include self-tuning factories and the process of encapsulation, which lets a factory take the shape of a standard application both from the system and the user's point of view, and thus be used as a component of other factories. A bare-bones prototype of factory programming was implemented under the PcIPS image processing system, and a complete version (on a multitasking platform) is under development.
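A rough software analogue of the factory metaphor — concurrent stages connected by pipes carrying images — can be sketched with Python generators; the stage functions here are invented placeholders, not operations from PcIPS.

```python
# Sketch of the "factory" idea: processing stages connected by a data pipe.
# Each stage consumes images from the previous stage and yields results onward.
import numpy as np

def source(n_frames, shape=(64, 64)):
    rng = np.random.default_rng(0)
    for _ in range(n_frames):
        yield rng.random(shape)               # stand-in for an acquisition program

def smooth(frames):
    k = np.ones((3, 3)) / 9.0
    for img in frames:
        out = np.copy(img)
        out[1:-1, 1:-1] = sum(k[i, j] * img[i:i + img.shape[0] - 2,
                                            j:j + img.shape[1] - 2]
                              for i in range(3) for j in range(3))
        yield out                              # stand-in for a filtering program

def threshold(frames, level=0.5):
    for img in frames:
        yield img > level                      # stand-in for a segmentation program

# "Connecting the pipes": stages compose like LEGO blocks.
for mask in threshold(smooth(source(5))):
    print(mask.mean())
```

In a real factory, each stage would be a separate process and the pipes would be operating-system pipes, so the stages run truly concurrently.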
Making and Using Aesthetically Pleasing Images With HDI
NASA Astrophysics Data System (ADS)
Buckner, Spencer L.
2017-01-01
The Half-Degree Imager (HDI) was installed as the primary imager on the 0.9-m WIYN telescope in October 2013. In the three-plus years since then, it has proven to be highly effective as a scientific instrument for the 0.9-m WIYN consortium. One thing that has been missing from the mix is aesthetically pleasing images for use in publicity and public outreach. The lack of “pretty pictures” is understandable, since the HDI is designed for scientific use and observers are given limited telescope time. However, images which appeal to the general public can be an effective tool for public outreach, publicity, and recruitment of students into astronomy programs. As a counter to the loss of an observer's limited telescope time, “pretty picture” images can be taken under less-than-desirable conditions, when photometric studies would have limited usefulness. Astroimaging has become a popular pastime among both amateur and professional astronomers. At Austin Peay State University, astrophotography is a popular course with non-science majors who wish to complete an astronomy minor as well as physics majors pursuing the astrophysics track. Images of a number of Messier objects have been taken with the HDI camera and are used to teach the basics of image calibration and processing for aesthetic value to students in the astrophotography class. Using HDI images with most image processing software commercially available to the public does present some problems, though. The extended FITS format of the images is not readable by most amateur image processing software, and such software can also have problems recognizing the filter configurations of the HDI. Overcoming these issues, how the images are used in APSU courses, publicity, and public outreach, and finished pictures will be discussed in this presentation. A poster describing the processing techniques used will be displayed during the concurrent HDI poster session, along with several poster-sized prints of images.
Erberich, Stephan G; Bhandekar, Manasee; Chervenak, Ann; Kesselman, Carl; Nelson, Marvin D
2007-01-01
Functional MRI is successfully being used in clinical and research applications including preoperative planning, language mapping, and outcome monitoring. However, clinical use of fMRI is less widespread due to the complexity of its imaging, image workflow, and post-processing, and a lack of algorithmic standards that hinders result comparability. As a consequence, widespread adoption of fMRI as a clinical tool is low, contributing to the uncertainty of community physicians about how to integrate fMRI into practice. In addition, training of physicians with fMRI is in its infancy and requires clinical and technical understanding. Therefore, many institutions which perform fMRI have a team of basic researchers and physicians to perform fMRI as a routine imaging tool. In order to provide fMRI as an advanced diagnostic tool to the benefit of a larger patient population, image acquisition and image post-processing must be streamlined, standardized, and available at any institution which does not have these resources available. Here we describe a software architecture, the functional imaging laboratory (funcLAB/G), which addresses (i) standardized image processing using Statistical Parametric Mapping and (ii) its extension to secure sharing and availability for the community using standards-based Grid technology (Globus Toolkit). funcLAB/G carries the potential to overcome the limitations of fMRI in clinical use and thus make standardized fMRI available to the broader healthcare enterprise utilizing the Internet and HealthGrid Web Services technology.
Sensing Super-Position: Human Sensing Beyond the Visual Spectrum
NASA Technical Reports Server (NTRS)
Maluf, David A.; Schipper, John F.
2007-01-01
The coming decade of fast, cheap, and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This paper addresses the technical feasibility of augmenting human vision through Sensing Super-position by mixing natural human senses. The current implementation of the device translates visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position meets many human and pilot-vehicle system requirements. The system can be further developed into a cheap, portable, low-power device, taking into account the limited capabilities of the human user as well as the typical characteristics of his dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases perceived image resolution via an auditory representation alongside the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g., histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system. The human brain is superior to most existing computer systems in rapidly extracting relevant information from blurred, noisy, and redundant images. From a theoretical viewpoint, this means that the available bandwidth is not exploited in an optimal way. While image-processing techniques can manipulate, condense, and focus the information (e.g., Fourier transforms), keeping the mapping as direct and simple as possible might also reduce the risk of accidentally filtering out important clues. After all, a perfect non-redundant sound representation is especially prone to loss of relevant information in the non-perfect human hearing system. Also, a complicated non-redundant image-to-sound mapping may well be far more difficult to learn and comprehend than a straightforward mapping, while the mapping system would increase in complexity and cost. This work will demonstrate some basic information processing for optimal information capture for head-mounted systems.
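The image-to-sound mapping described — columns distributed over time, rows over frequency, brightness over amplitude — can be prototyped in a few lines. This is a generic sonification sketch under those assumptions, not the system described in the paper; the sample rate, column duration, and frequency range are arbitrary choices.

```python
# Generic image-to-sound sketch: scan columns left-to-right over time,
# map row position to sine frequency and pixel brightness to amplitude.
import numpy as np

def sonify(img, fs=22050, col_dur=0.02, f_lo=200.0, f_hi=4000.0):
    h, w = img.shape
    freqs = np.geomspace(f_hi, f_lo, h)              # top rows -> high pitch
    n = int(fs * col_dur)
    t = np.arange(n) / fs
    audio = []
    for c in range(w):
        tones = np.sin(2 * np.pi * freqs[:, None] * t)   # (h, n) sine bank
        audio.append((img[:, c:c + 1] * tones).sum(axis=0))
    out = np.concatenate(audio)
    return out / (np.abs(out).max() + 1e-12)          # normalize to [-1, 1]

# Toy usage with a random "image"; write out with e.g. scipy.io.wavfile.
wave = sonify(np.random.default_rng(0).random((32, 64)))
```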
Object segmentation controls image reconstruction from natural scenes
2017-01-01
The structure of the physical world projects images onto our eyes. However, those images are often poorly representative of environmental structure: well-defined boundaries within the eye may correspond to irrelevant features of the physical world, while critical features of the physical world may be nearly invisible at the retinal projection. The challenge for the visual cortex is to sort these two types of features according to their utility in ultimately reconstructing percepts and interpreting the constituents of the scene. We describe a novel paradigm that enabled us to selectively evaluate the relative role played by these two feature classes in signal reconstruction from corrupted images. Our measurements demonstrate that this process is quickly dominated by the inferred structure of the environment, and only minimally controlled by variations of raw image content. The inferential mechanism is spatially global and its impact on early visual cortex is fast. Furthermore, it retunes local visual processing for more efficient feature extraction without altering the intrinsic transduction noise. The basic properties of this process can be partially captured by a combination of small-scale circuit models and large-scale network architectures. Taken together, our results challenge compartmentalized notions of bottom-up/top-down perception and suggest instead that these two modes are best viewed as an integrated perceptual mechanism. PMID:28827801
Fast ray-tracing of human eye optics on Graphics Processing Units.
Wei, Qi; Patkar, Saket; Pai, Dinesh K
2014-05-01
We present a new technique for simulating retinal image formation by tracing a large number of rays from objects in three dimensions as they pass through the optic apparatus of the eye to the retina. Simulating human optics is useful for understanding basic questions of vision science and for studying vision defects and their corrections. Because of the complexity of computing such simulations accurately, most previous efforts used simplified analytical models of the normal eye. This makes them less effective in modeling vision disorders associated with abnormal shapes of the ocular structures, which are hard to represent precisely with analytical surfaces. We have developed a computer simulator that can simulate ocular structures of arbitrary shapes, for instance represented by polygon meshes. Topographic and geometric measurements of the cornea, lens, and retina from keratometer or medical imaging data can be integrated for individualized examination. We utilize parallel processing on modern Graphics Processing Units (GPUs) to efficiently compute retinal images by tracing millions of rays. A stable retinal image can be generated within minutes. We simulated depth of field, accommodation, chromatic aberrations, as well as astigmatism and its correction. We also show application of the technique in patient-specific vision correction by incorporating geometric models of the orbit reconstructed from clinical medical images. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
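The core per-surface operation in such a ray tracer is refraction by Snell's law in vector form; below is a generic sketch of that step (not the authors' GPU code), with an assumed refractive index for a cornea-like interface.

```python
# Vector Snell refraction: one step of tracing a ray through an ocular surface.
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit direction d at surface normal n (pointing toward the ray),
    going from refractive index n1 into n2. Returns None on total internal
    reflection."""
    eta = n1 / n2
    cos_i = -np.dot(n, d)
    k = 1.0 - eta**2 * (1.0 - cos_i**2)
    if k < 0.0:
        return None                      # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

# Toy usage: air (1.000) into a cornea-like medium (1.376, an assumed value).
d = np.array([0.0, -0.2, -1.0]); d /= np.linalg.norm(d)
n = np.array([0.0, 0.0, 1.0])
print(refract(d, n, 1.000, 1.376))
```

On a GPU, this function would run per ray over millions of rays in parallel, once per surface intersection.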
Basic Use of SExtractor Catalogs With TweakReg - I
NASA Astrophysics Data System (ADS)
Lucas, Ray A.; Hilbert, Bryan
2015-05-01
We describe using external SExtractor (v2.8.6) catalogs from crclean.fits images to align ACS/WFC images with DrizzlePac/TweakReg. Note that this example was originally created before a more recent update to ACS/WFC geometric distortion files. At the time of this writing, one must follow the advice on the ACS Geometric Distortion web page as the first step in the process. By late 2015, as part of OPUS 2015.3, this part will be included by default in the standard pipeline processing and will no longer need to be done manually by the user. We describe the rest of the process of preparing images for SExtractor, running SExtractor, and using the output catalogs to feed the TweakReg task for alignment, and show that reasonably good first-cut results can be obtained with mostly default parameters in SExtractor and TweakReg. Better results may be possible with more exacting methods. This describes a method for quick alignment, not the ultimate best alignment. Note also that the use of crclean.fits images may be more suited to provide better results for ACS/WFC and WFC3/UVIS than for WFC3/IR.
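A minimal call in the spirit of the procedure above might look like the following; the catalog file layout and column numbers are assumptions on my part, and the exact parameter set used in the note is not reproduced here.

```python
# Hedged sketch: feed external SExtractor catalogs to DrizzlePac's TweakReg.
# 'images.list' maps each image to its SExtractor catalog (assumed layout):
#   j8xi01xxq_flt.fits  j8xi01xxq_sex.cat
from drizzlepac import tweakreg

tweakreg.TweakReg(
    "*_flt.fits",
    catfile="images.list",   # image-to-catalog mapping file
    xcol=2, ycol=3,          # columns holding X_IMAGE / Y_IMAGE (assumed)
    fluxcol=4,               # flux column for source weighting (assumed)
    searchrad=1.0,           # search radius for source matching
    updatehdr=False,         # inspect residuals first; rerun with True to commit
)
```

Running first with updatehdr=False lets one check the fit residuals before committing the new WCS to the headers.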
VICAR image processing system guide to system use
NASA Technical Reports Server (NTRS)
Seidman, J. B.
1977-01-01
The functional characteristics and operating requirements of the VICAR (Video Image Communication and Retrieval) system are described. An introduction to the system describes the functional characteristics and the basic theory of operation. A brief description of the data flow as well as tape and disk formats is also presented. A formal presentation of the control statement formats is given along with a guide to usage of the system. The guide provides a step-by-step reference to the creation of a VICAR control card deck. Simple examples are employed to illustrate the various options and the system response thereto.
NASA Technical Reports Server (NTRS)
1975-01-01
Data acquisition using single-image and seven-image data processing is used to provide a precise and accurate geometric description of the earth's surface. Transformation parameters and network distortions are determined. Sea slope along the continental boundaries of the U.S. and earth rotation are examined, along with a close-grid geodynamic satellite system. Data are derived for a mathematical description of the earth's gravitational field; time variations are determined for the geometry of the ocean surface, the solid earth, the gravity field, and other geophysical parameters.
Mosaic of coded aperture arrays
Fenimore, Edward E.; Cannon, Thomas M.
1980-01-01
The present invention pertains to a mosaic of coded aperture arrays which is capable of imaging off-axis sources with minimum detector size. Mosaics of the basic array pattern create a circular (periodic) correlation of the object on a section of the picture plane. This section consists of elements of the central basic pattern as well as elements from neighboring patterns, and is a cyclic version of the basic pattern. Since all object points contribute a complete cyclic version of the basic pattern, a section of the picture which is the size of the basic aperture pattern contains all the information necessary to image the object with no artifacts.
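The cyclic-correlation property described above is easy to demonstrate numerically: with a mosaicked mask, decoding reduces to a circular cross-correlation between the recorded picture section and the basic pattern. The sketch below uses a random binary mask rather than a true uniformly redundant array, so reconstruction artifacts are only suppressed, not eliminated as they would be with the patented patterns.

```python
# Toy demo of coded-aperture decoding by circular correlation (FFT-based).
import numpy as np

rng = np.random.default_rng(1)
n = 64
mask = (rng.random((n, n)) < 0.5).astype(float)   # stand-in for a URA pattern

# Point-like object: the picture is a cyclic superposition of shifted masks.
obj = np.zeros((n, n)); obj[20, 30] = 1.0; obj[40, 10] = 0.5
picture = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(mask)))

# Decode: circular cross-correlation with the (mean-subtracted) basic pattern.
decoder = mask - mask.mean()
recon = np.real(np.fft.ifft2(np.fft.fft2(picture) * np.conj(np.fft.fft2(decoder))))
print(np.unravel_index(recon.argmax(), recon.shape))   # brightest source location
```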
NASA Astrophysics Data System (ADS)
Deng, Zhiwei; Li, Xicai; Shi, Junsheng; Huang, Xiaoqiao; Li, Feiyan
2018-01-01
Depth measurement is the most basic measurement in machine vision applications such as automatic driving, unmanned aerial vehicles (UAVs), and robotics, and it has a wide range of uses. With the development of image processing technology and improvements in hardware miniaturization and processing speed, real-time depth measurement using dual cameras has become a reality. In this paper, an embedded AM5728 and an ordinary low-cost dual camera are used as the hardware platform. The related algorithms for dual-camera calibration, image matching, and depth calculation have been studied and implemented on the hardware platform, and the hardware design and the soundness of the related algorithms have been tested. The experimental results show that the system can acquire binocular images simultaneously, switch between left and right video sources, and display the depth image and depth range. For images with a resolution of 640 × 480, the processing speed of the system can reach 25 fps. The experimental results show that the optimal measurement range of the system is from 0.5 to 1.5 meters, and the relative error of the distance measurement is less than 5%. Compared with PC, ARM11, and DMCU hardware platforms, the embedded AM5728 hardware is good at meeting real-time depth measurement requirements while preserving image resolution.
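The calibrate-match-triangulate chain described above can be prototyped with OpenCV's block matcher on a rectified pair; the file names, focal length, baseline, and matcher settings below are placeholders, not values from the paper.

```python
# Sketch: depth from a rectified stereo pair via block matching (OpenCV).
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # assumed rectified inputs
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disp = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> px

f_px, baseline_m = 700.0, 0.06       # placeholder focal length and baseline
valid = disp > 0
depth = np.zeros_like(disp)
depth[valid] = f_px * baseline_m / disp[valid]   # Z = f * B / d
print("median depth (m):", np.median(depth[valid]))
```

The same Z = fB/d triangulation applies on the embedded platform; only the matching implementation changes.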
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, C.
Magnetic resonance imaging (MRI) has become an essential part of clinical imaging due to its ability to render high soft-tissue contrast. Instead of ionizing radiation, MRI uses a strong magnetic field, radio-frequency waves, and field gradients to create diagnostically useful images. It can be used to image the anatomy as well as functional and physiological activities within the human body. Knowledge of the basic physical principles underlying MRI acquisition is vitally important to successful image production and proper image interpretation. This lecture will give an overview of spin physics, the imaging principles of MRI, the hardware of the MRI scanner, and various pulse sequences and their applications. It aims to provide a conceptual foundation for understanding the image formation process of a clinical MRI scanner. Learning Objectives: Understand the origin of the MR signal and contrast at the spin-physics level. Understand the main hardware components of an MRI scanner and their purposes. Understand the steps of MR image formation, including spatial encoding and image reconstruction. Understand the main kinds of MR pulse sequences and their characteristics.
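At the spin-physics level, the key relation is the Larmor equation, f = (γ/2π)·B₀, with γ/2π ≈ 42.577 MHz/T for the proton; a one-line check for common field strengths is shown below.

```python
# Larmor frequency f = (gamma / 2*pi) * B0 for the proton (1H).
GAMMA_BAR_MHZ_PER_T = 42.577  # 1H gyromagnetic ratio / 2*pi, in MHz/T

for b0 in (1.5, 3.0, 7.0):
    print(f"B0 = {b0:3.1f} T  ->  f = {GAMMA_BAR_MHZ_PER_T * b0:7.2f} MHz")
```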
Advanced Computer Image Generation Techniques Exploiting Perceptual Characteristics
1981-08-01
the capabilities/limitations of the human visual perceptual processing system and improve the training effectiveness of visual simulation systems... Myron Braunstein of the University of California at Irvine performed all the work in the perceptual area. Mr. Timothy A. Zimmerlin contributed the... work. Thus, while some areas are related, each is resolved independently in order to focus on the basic perceptual limitation. In addition, the
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seibert, J; Imbergamo, P
The expansion and integration of diagnostic imaging technologies such as On-Board Imaging (OBI) and Cone Beam Computed Tomography (CBCT) into radiation oncology has required radiation oncology physicists to be responsible for, and become familiar with, assessing image quality. Unfortunately, many radiation oncology physicists have had little or no training or experience in measuring and assessing image quality. Many physicists have turned to automated QA analysis software without having a fundamental understanding of image quality measures. This session will review the basic image quality measures of imaging technologies used in the radiation oncology clinic, such as low-contrast resolution, high-contrast resolution, uniformity, noise, and contrast scale, and how to measure and assess them in a meaningful way. Additionally, a discussion of the implementation of an image quality assurance program in compliance with Task Group recommendations will be presented, along with the advantages and disadvantages of automated analysis methods. Learning Objectives: Review and understand the fundamentals of image quality. Review and understand the basic image quality measures of imaging modalities used in the radiation oncology clinic. Understand how to implement an image quality assurance program and how to assess basic image quality measures in a meaningful way.
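As a concrete illustration of two of the listed measures, the sketch below computes noise and a uniformity index from regions of interest in a phantom image; the ROI placement and the center-vs-periphery uniformity formula are common choices I am assuming, not a specific Task Group prescription.

```python
# Sketch: simple ROI-based uniformity and noise measures on a phantom image.
import numpy as np

def roi_stats(img, cy, cx, r):
    ys, xs = np.ogrid[:img.shape[0], :img.shape[1]]
    m = (ys - cy) ** 2 + (xs - cx) ** 2 <= r ** 2
    return img[m].mean(), img[m].std()

img = np.random.default_rng(0).normal(100.0, 2.0, (256, 256))  # stand-in phantom
center_mean, center_std = roi_stats(img, 128, 128, 20)
edge_means = [roi_stats(img, cy, cx, 20)[0]
              for cy, cx in [(40, 128), (216, 128), (128, 40), (128, 216)]]

noise = center_std                                   # std. dev. in uniform region
uniformity = 100.0 * (max(edge_means) - min(edge_means)) / center_mean
print(f"noise = {noise:.2f}, integral non-uniformity = {uniformity:.2f}%")
```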
PCIPS 2.0: Powerful multiprofile image processing implemented on PCs
NASA Technical Reports Server (NTRS)
Smirnov, O. M.; Piskunov, N. E.
1992-01-01
Over the years, the processing power of personal computers has steadily increased. Now, 386- and 486-based PCs are fast enough for many image processing applications, and inexpensive enough even for amateur astronomers. PCIPS is an image processing system based on these platforms that was designed to satisfy a broad range of data analysis needs, while requiring minimum hardware and providing maximum expandability. It will run (albeit at a slow pace) even on an 80286 with 640K memory, but will take full advantage of bigger memory and faster CPUs. Because the actual image processing is performed by external modules, the system can be easily upgraded by the user for all sorts of scientific data analysis. PCIPS supports large-format 1D and 2D images in any numeric type from 8-bit integer to 64-bit floating point. The images can be displayed, overlaid, and printed, and any part of the data examined, via an intuitive graphical user interface that employs buttons, pop-up menus, and a mouse. PCIPS automatically converts images between different types and sizes to satisfy the requirements of various applications. PCIPS features an API that lets users develop custom applications in C or FORTRAN. While doing so, a programmer can concentrate on the actual data processing, because PCIPS assumes responsibility for accessing images and interacting with the user. This also ensures that all applications, even custom ones, have a consistent and user-friendly interface. The API is compatible with factory programming, a metaphor for constructing image processing procedures that will be implemented in future versions of the system. Several application packages were created under PCIPS. The basic package includes elementary arithmetic and statistics, geometric transformations, and import/export in various formats (FITS, binary, ASCII, and GIF). The CCD processing package and the spectral analysis package were successfully used to reduce spectra from the Nordic Telescope at La Palma. A photometry package is also available, and other packages are being developed. A multitasking version of PCIPS that utilizes the factory programming concept is currently under development. This version will remain compatible (on the source code level) with existing application packages and custom applications.
Ansari, Daniel; Dhital, Bibek
2006-11-01
Numerical magnitude processing is an essential everyday skill. Functional brain imaging studies with human adults have repeatedly revealed that bilateral regions of the intraparietal sulcus are correlated with various numerical and mathematical skills. Surprisingly little, however, is known about the development of these brain representations. In the present study, we used functional neuroimaging to compare the neural correlates of nonsymbolic magnitude judgments between children and adults. Although behavioral performance was similar across groups, in comparison to the group of children the adult participants exhibited greater effects of numerical distance on the left intraparietal sulcus. Our findings are the first to reveal that even the most basic aspects of numerical cognition are subject to age-related changes in functional neuroanatomy. We propose that developmental impairments of number may be associated with atypical specialization of cortical regions underlying magnitude processing.
Demiris, A M; Meinzer, H P
1997-01-01
Whether or not a computerized system enhances the conditions of work in the application domain depends very much on the user interface. Graphical user interfaces seem to attract the interest of users but mostly ignore some basic rules of visual information processing, thus leading to systems which are difficult to use, lowering productivity and increasing working stress (cognitive and work load). In this work we present some fundamental ergonomic considerations and their application to the medical image processing and archiving domain. We introduce the extensions to an existing concept needed to control and guide the development of GUIs with respect to domain-specific ergonomics. The suggested concept, called Model-View-Controller Constraints (MVCC), can be used to programmatically implement ergonomic constraints, and thus has some advantages over written style guides. We conclude with the presentation of existing norms and methods to evaluate user interfaces.
Fundamentals of image acquisition and processing in the digital era.
Farman, A G
2003-01-01
To review the historic context for digital imaging in dentistry and to outline the fundamental issues related to digital imaging modalities. Digital dental X-ray images can be achieved by scanning analog film radiographs (secondary capture), with photostimulable phosphors, or using solid-state detectors (e.g., charge-coupled device and complementary metal oxide semiconductor). There are four characteristics that are basic to all digital image detectors, namely: size of active area, signal-to-noise ratio, contrast resolution, and spatial resolution. To perceive structure in a radiographic image, there needs to be sufficient difference between contrasting densities. This primarily depends on the differences in the attenuation of the X-ray beam by adjacent tissues. It also depends on the signal received; therefore, contrast tends to increase with increased exposure. Given adequate signal and sufficient differences in radiodensity, contrast will be sufficient to differentiate between adjacent structures, irrespective of the recording modality and processing used. Where contrast is not sufficient, digital images can sometimes be post-processed to disclose details that would otherwise go undetected. For example, cephalogram isodensity mapping can improve soft-tissue detail. It is concluded that it could be a further decade or two before three-dimensional digital imaging systems entirely replace two-dimensional analog films. Such systems need not only to produce prettier images, but also to provide a demonstrable, evidence-based higher standard of care at a cost that is not economically prohibitive for the practitioner or society, and which allows efficient and effective workflow within the business of dental practice.
NASA Astrophysics Data System (ADS)
Salach, A.; Markiewicza, J. S.; Zawieska, D.
2016-06-01
An orthoimage is one of the basic photogrammetric products used for architectural documentation of historical objects; recently, it has become a standard in such work. Considering the increasing popularity of photogrammetric techniques applied in the cultural heritage domain, this research examines the two most popular measuring technologies: terrestrial laser scanning, and automatic processing of digital photographs. The basic objective of the work presented in this paper was to optimize the quality of generated high-resolution orthoimages using integration of data acquired by a Z+F 5006 terrestrial laser scanner and a Canon EOS 5D Mark II digital camera. The subject was one of the walls of the "Blue Chamber" of the Museum of King Jan III's Palace at Wilanów (Warsaw, Poland). The high-resolution images resulting from integration of the point clouds acquired by the different methods were analysed in detail with respect to geometric and radiometric correctness.
A modeling analysis program for the JPL table mountain Io sodium cloud data
NASA Technical Reports Server (NTRS)
Smyth, William H.; Goldberg, Bruce A.
1988-01-01
Research in the third and final year of this project is divided into three main areas: (1) completion of data processing and calibration for 34 of the 1981 Region B/C images, selected from the massive JPL sodium cloud data set; (2) identification and examination of the basic features and observed changes in the morphological characteristics of the sodium cloud images; and (3) successful physical interpretation of these basic features and observed changes using the highly developed numerical sodium cloud model at AER. The modeling analysis has led to a number of definite conclusions regarding the local structure of Io's atmosphere, the gas escape mechanism at Io, and the presence of an east-west electric field and a System III longitudinal asymmetry in the plasma torus. Large scale stability, as well as some smaller scale time variability for both the sodium cloud and the structure of the plasma torus over a several year time period are also discussed.
New trends in articular cartilage repair.
Cucchiarini, Magali; Henrionnet, Christel; Mainard, Didier; Pinzano, Astrid; Madry, Henning
2015-12-01
Damage to the articular cartilage is an important, prevalent, and unsolved clinical issue for the orthopaedic surgeon. This review summarizes innovative basic research approaches that may improve the current understanding of cartilage repair processes and lead to novel therapeutic options. In this regard, new aspects of cartilage tissue engineering, with a focus on the choice of the best-suited cell source, are presented. The importance of non-destructive cartilage imaging is highlighted with the recent availability of adapted experimental tools such as Second Harmonic Generation (SHG) imaging. Novel insights into cartilage pathophysiology based on the involvement of the infrapatellar fat pad in osteoarthritis are also described. Also, recombinant adeno-associated viral vectors are discussed as clinically adapted, efficient tools for potential gene-based medicines in a variety of articular cartilage disorders. Taken as a whole, such advances in basic research in diverse fields of articular cartilage repair may lead to the development of improved therapies in the clinic for an effective treatment of cartilage lesions in the near future.
Building Shadow Detection from Ghost Imagery
NASA Astrophysics Data System (ADS)
Zhou, G.; Sha, J.; Yue, T.; Wang, Q.; Liu, X.; Huang, S.; Pan, Q.; Wei, J.
2018-05-01
Shadow is one of the basic features of remote sensing images; it expresses much information about the object that is otherwise lost or degraded by interference, and shadow removal has always been a difficult problem in remote sensing image processing. This paper mainly analyzes the characteristics and properties of shadows in the ghost image (traditional orthorectification). The DBM and the interior and exterior orientation elements of the image are used to calculate the solar zenith angle. The extent of building shadows, determined from the solar zenith angle, is then combined with a region-growing method to detect building shadow areas. This method lays a solid foundation for the later repair of shadows in the ghost image. It will greatly improve the accuracy of building shadow detection and make it more conducive to solving problems in large-scale urban aerial imagery.
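The detection step — seeding in dark pixels and growing a region constrained by the predicted shadow extent — can be sketched generically as below; the threshold tolerance, seed choice, and 4-connected growth rule are my assumptions, not the paper's parameters.

```python
# Generic sketch: grow a shadow region from a dark seed pixel (4-connectivity).
import numpy as np
from collections import deque

def region_grow(img, seed, tol=15):
    h, w = img.shape
    mask = np.zeros((h, w), bool)
    base = float(img[seed])
    q = deque([seed]); mask[seed] = True
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
               and abs(float(img[ny, nx]) - base) <= tol:
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask

img = np.random.default_rng(0).integers(60, 255, (128, 128)).astype(np.uint8)
img[40:80, 50:90] = 20                 # synthetic dark "shadow" block
seed = (60, 70)                        # seed assumed inside the predicted extent
shadow = region_grow(img, seed)
print(shadow.sum(), "shadow pixels")
```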
Advances in Small Animal Imaging Systems
NASA Astrophysics Data System (ADS)
Loudos, George K.
2007-11-01
The rapid growth in genetics and molecular biology combined with the development of techniques for genetically engineering small animals has led to an increased interest in in vivo laboratory animal imaging during the past few years. For this purpose, new instrumentation, data acquisition strategies, and image processing and reconstruction techniques are being developed, researched and evaluated. The aim of this article is to give a short overview of the state of the art technologies for high resolution and high sensitivity molecular imaging techniques, primarily positron emission tomography (PET) and single photon emission computed tomography (SPECT). The basic needs of small animal imaging will be described. The evolution in instrumentation in the past two decades, as well as the commercially available systems will be overviewed. Finally, the new trends in detector technology and preliminary results from challenging applications will be presented. For more details a number of references are provided.
Combining High Spatial Resolution Optical and LIDAR Data for Object-Based Image Classification
NASA Astrophysics Data System (ADS)
Li, R.; Zhang, T.; Geng, R.; Wang, L.
2018-04-01
In order to classify high-spatial-resolution images more accurately, in this research a hierarchical rule-based object-based classification framework was developed based on a high-resolution image with airborne Light Detection and Ranging (LiDAR) data. The eCognition software is employed to conduct the whole process. In detail, first, the FBSP optimizer (Fuzzy-based Segmentation Parameter) is used to obtain the optimal scale parameters for different land cover types. Then, using the segmented regions as basic units, the classification rules for various land cover types are established according to the spectral, morphological, and texture features extracted from the optical images, and the height feature from LiDAR, respectively. Third, the object classification results are evaluated using the confusion matrix, overall accuracy, and Kappa coefficient. As a result, the method combining an aerial image with the airborne LiDAR data shows higher accuracy.
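The evaluation step named in the abstract is standard; assuming labeled reference samples, the accuracy figures can be computed as below with scikit-learn (the arrays here are toy placeholders).

```python
# Sketch: accuracy assessment with a confusion matrix, overall accuracy, kappa.
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

y_true = np.array([0, 0, 1, 1, 2, 2, 2, 1, 0, 2])   # toy reference labels
y_pred = np.array([0, 1, 1, 1, 2, 2, 1, 1, 0, 2])   # toy classified labels

cm = confusion_matrix(y_true, y_pred)
overall_accuracy = np.trace(cm) / cm.sum()
kappa = cohen_kappa_score(y_true, y_pred)
print(cm, f"OA = {overall_accuracy:.3f}", f"kappa = {kappa:.3f}", sep="\n")
```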
NASA Astrophysics Data System (ADS)
Kurczyński, Zdzisław; Różycki, Sebastian; Bylina, Paweł
2017-12-01
To produce orthophotomaps or digital elevation models, the most commonly used method is photogrammetric measurement. However, the use of aerial images is not easy in polar regions for logistical reasons. In these areas, remote sensing data acquired from satellite systems are much more useful. This paper presents the basic technical requirements of the different products which can be obtained (in particular, orthoimages and digital elevation models (DEMs)) using Very-High-Resolution Satellite (VHRS) images. The study area was situated in the vicinity of the Henryk Arctowski Polish Antarctic Station on the Western Shore of Admiralty Bay, King George Island, Western Antarctic. Image processing was applied to two triplets of images acquired by Pléiades 1A and 1B in March 2013. The results of generating orthoimages from the Pléiades systems without control points showed that the proposed method can achieve a Root Mean Squared Error (RMSE) of 3-9 m. The presented Pléiades images are useful for thematic remote sensing analysis and measurement processing. Using satellite images to produce remote sensing products for polar regions is highly beneficial and reliable, and compares well with more expensive airborne photographs or field surveys.
Comparison of dogs and humans in visual scanning of social interaction.
Törnqvist, Heini; Somppi, Sanni; Koskela, Aija; Krause, Christina M; Vainio, Outi; Kujala, Miiamaaria V
2015-09-01
Previous studies have demonstrated similarities in gazing behaviour of dogs and humans, but comparisons under similar conditions are rare, and little is known about dogs' visual attention to social scenes. Here, we recorded the eye gaze of dogs while they viewed images containing two humans or dogs either interacting socially or facing away: the results were compared with equivalent data measured from humans. Furthermore, we compared the gazing behaviour of two dog and two human populations with different social experiences: family and kennel dogs; dog experts and non-experts. Dogs' gazing behaviour was similar to humans: both species gazed longer at the actors in social interaction than in non-social images. However, humans gazed longer at the actors in dog than human social interaction images, whereas dogs gazed longer at the actors in human than dog social interaction images. Both species also made more saccades between actors in images representing non-conspecifics, which could indicate that processing social interaction of non-conspecifics may be more demanding. Dog experts and non-experts viewed the images very similarly. Kennel dogs viewed images less than family dogs, but otherwise their gazing behaviour did not differ, indicating that the basic processing of social stimuli remains similar regardless of social experiences.
Autonomous robot software development using simple software components
NASA Astrophysics Data System (ADS)
Burke, Thomas M.; Chung, Chan-Jin
2004-10-01
Developing software to control a sophisticated lane-following, obstacle-avoiding, autonomous robot can be demanding and beyond the capabilities of novice programmers - but it doesn't have to be. A creative software design utilizing only basic image processing and a little algebra was employed to control the LTU-AISSIG autonomous robot - a contestant in the 2004 Intelligent Ground Vehicle Competition (IGVC). This paper presents a software design equivalent to that used during the IGVC, but with much of the complexity removed. The result is an autonomous robot software design that is robust, reliable, and can be implemented by programmers with a limited understanding of image processing. This design provides a solid basis for further work in autonomous robot software, as well as an interesting and achievable robotics project for students.
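To give a flavour of the "basic image processing and a little algebra" approach, here is a hedged sketch of a proportional lane-following rule; the threshold and the control law are illustrative assumptions, not the LTU-AISSIG design:

```python
import numpy as np

def steer_from_row(image_row, threshold=200):
    """Steering command from one grayscale image row: find bright
    lane-marking pixels, take the midpoint between the leftmost and
    rightmost ones, and steer in proportion to its offset from the
    image centre."""
    bright = np.flatnonzero(image_row >= threshold)
    if bright.size < 2:
        return 0.0                                # no lines seen: hold course
    lane_centre = (bright[0] + bright[-1]) / 2.0
    offset = lane_centre - (len(image_row) - 1) / 2.0
    return -offset / (len(image_row) / 2.0)       # normalised to [-1, 1]

row = np.zeros(320)
row[40], row[280] = 255, 255                      # two lane edges
print(steer_from_row(row))                        # ~0: lane centred
```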
Quantification of micro-CT images of textile reinforcements
NASA Astrophysics Data System (ADS)
Straumit, Ilya; Lomov, Stepan V.; Wevers, Martine
2017-10-01
The VoxTex software (KU Leuven) employs 3D image processing that uses local directionality information, retrieved by analysis of the local structure tensor. The processing results in a 3D voxel array, with each voxel carrying information on (1) the material type (matrix; yarn/ply, with identification of the yarn/ply in the reinforcement architecture; void) and (2) the fibre direction for fibrous yarns/plies. Knowledge of the material phase volume and of the textile structure allows assigning (3) the fibre volume fraction to the voxels. This basic voxel model can then be used for different types of material analysis: internal geometry and characterisation of defects; permeability; micromechanics; mesoFE voxel models. Apart from the voxel-based analysis, approaches to reconstruction of the yarn paths are presented.
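The structure-tensor step can be sketched in 2D (VoxTex itself works on 3D volumes; this simplified slice-wise version with assumed smoothing parameters only illustrates the principle):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def fibre_orientation(slice_2d, sigma=2.0):
    """Per-pixel in-plane fibre direction from the local structure
    tensor: smooth the gradient products, then take the orientation
    of least intensity change (along the fibres)."""
    img = slice_2d.astype(float)
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)
    jxx = gaussian_filter(gx * gx, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    # Orientation of maximum change, rotated 90 degrees to follow fibres
    return 0.5 * np.arctan2(2 * jxy, jxx - jyy) + np.pi / 2
```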
Profile of science process skills of Preservice Biology Teacher in General Biology Course
NASA Astrophysics Data System (ADS)
Susanti, R.; Anwar, Y.; Ermayanti
2018-04-01
This study aims to profile the science process skills of preservice biology teachers. The research took place at Sriwijaya University and involved 41 participants. To collect the data, this study used a multiple-choice test comprising 40 items measuring the mastery of science process skills. The data were then analyzed descriptively. The results showed that the communication aspect outperformed the other skills at 81%, while the lowest scores were for identifying variables and predicting (59%). In addition, the score for basic science process skills was 72%, whereas for integrated skills it was somewhat lower at 67%. In general, proficiency in science process skills varies among preservice biology teachers.
Promise of new imaging technologies for assessing ovarian function.
Singh, Jaswant; Adams, Gregg P; Pierson, Roger A
2003-10-15
Advancements in imaging technologies over the last two decades have ushered in a quiet revolution in research approaches to the study of ovarian structure and function. The most significant changes in our understanding of the ovary have resulted from the use of ultrasonography, which has enabled sequential analyses in live animals. Computer-assisted image analysis and mathematical modeling of the dynamic changes within the ovary have permitted exciting new avenues of research with readily quantifiable endpoints. Spectral, color-flow and power Doppler imaging now facilitate physiologic interpretations of vascular dynamics over time. Similarly, magnetic resonance imaging (MRI) is emerging as a research tool in ovarian imaging. New technologies, such as three-dimensional ultrasonography and MRI, ultrasound-based biomicroscopy and synchrotron-based techniques, each have the potential to enhance our real-time picture of ovarian function to the near-cellular level. Collectively, the information available from ultrasonography, MRI, computer-assisted image analysis and mathematical modeling heralds a new era in our understanding of the basic processes of female and male reproduction.
Bit-level plane image encryption based on coupled map lattice with time-varying delay
NASA Astrophysics Data System (ADS)
Lv, Xiupin; Liao, Xiaofeng; Yang, Bo
2018-04-01
Most existing image encryption algorithms have two basic properties, confusion and diffusion, applied in the pixel-level plane on the basis of various chaotic systems. However, permutation in the pixel-level plane cannot change the statistical characteristics of an image, and many existing color image encryption schemes use the same method to encrypt the R, G and B components, meaning that the three color components of a color image are processed three times independently. Additionally, the dynamical performance of a single chaotic system degrades greatly under the finite precision of computer simulations. In this paper, a novel coupled map lattice with time-varying delay is therefore applied to bit-level plane encryption of color images to address these issues. A spatiotemporal chaotic system, which has a much longer period under digitization and excellent cryptographic properties, is adopted. The time-varying delay embedded in the coupled map lattice enhances the dynamical behavior of the system. The bit-level plane encryption algorithm greatly reduces the statistical characteristics of an image through scrambling, and the R, G and B components cross and mix with one another, which reduces the correlation among the three components. Finally, simulations are carried out, and the experimental results show that the proposed image encryption algorithm is highly secure while also demonstrating superior performance.
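The bit-level plane decomposition on which such schemes operate is simple to state; a minimal sketch covering only the decomposition, not the paper's chaotic permutation and mixing:

```python
import numpy as np

def bit_planes(channel):
    """Split an 8-bit image channel into 8 binary bit planes."""
    return [(channel >> b) & 1 for b in range(8)]

def from_bit_planes(planes):
    """Reassemble the channel from its (possibly permuted) planes."""
    return sum(p.astype(np.uint8) << b for b, p in enumerate(planes))

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
assert np.array_equal(from_bit_planes(bit_planes(img)), img)
```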
Freud, Erez; Avidan, Galia; Ganel, Tzvi
2015-02-01
Holistic processing, the decoding of a stimulus as a unified whole, is a basic characteristic of object perception. Recent research using Garner's speeded classification task has shown that this processing style is utilized even for impossible objects that contain an inherent spatial ambiguity. In particular, similar Garner interference effects were found for possible and impossible objects, indicating similar holistic processing styles for the two object categories. In the present study, we further investigated the perceptual mechanisms that mediate such holistic representation of impossible objects. We relied on the notion that, whereas information embedded in the high-spatial-frequency (HSF) content supports fine-detailed processing of object features, the information conveyed by low spatial frequencies (LSF) is more crucial for the emergence of a holistic shape representation. To test the effects of image frequency on the holistic processing of impossible objects, participants performed the Garner speeded classification task on images of possible and impossible cubes filtered for their LSF and HSF information. For images containing only LSF, similar interference effects were observed for possible and impossible objects, indicating that the two object categories were processed in a holistic manner. In contrast, for the HSF images, Garner interference was obtained only for possible, but not for impossible objects. Importantly, we provided evidence to show that this effect could not be attributed to a lack of sensitivity to object possibility in the LSF images. Particularly, even for full-spectrum images, Garner interference was still observed for both possible and impossible objects. Additionally, performance in an object classification task revealed high sensitivity to object possibility, even for LSF images. Taken together, these findings suggest that the visual system can tolerate the spatial ambiguity typical to impossible objects by relying on information embedded in LSF, whereas HSF information may underlie the visual system's susceptibility to distortions in objects' spatial layouts.
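One common way to prepare such LSF/HSF stimuli is Gaussian filtering; a sketch with an assumed cutoff (the study's exact filter parameters are not given here):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_spatial_frequencies(image, sigma=4.0):
    """Low-pass the image for the LSF version; the residual is the
    HSF version, so the two bands sum back to the original."""
    img = image.astype(float)
    lsf = gaussian_filter(img, sigma)
    return lsf, img - lsf

image = np.random.rand(128, 128)
lsf, hsf = split_spatial_frequencies(image)
assert np.allclose(lsf + hsf, image)
```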
Colour in digital pathology: a review.
Clarke, Emily L; Treanor, Darren
2017-01-01
Colour is central to the practice of pathology because of the use of coloured histochemical and immunohistochemical stains to visualize tissue features. Our reliance upon histochemical stains and light microscopy has evolved alongside a wide variation in slide colour, with little investigation into the implications of colour variation. However, the introduction of the digital microscope and whole-slide imaging has highlighted the need for further understanding and control of colour. This is because the digitization process itself introduces further colour variation which may affect diagnosis, and image analysis algorithms often use colour or intensity measures to detect or measure tissue features. The US Food and Drug Administration has recently released guidance stating the need to develop a method of controlling colour reproduction throughout the digitization process in whole-slide imaging for primary diagnostic use. This comprehensive review introduces applied basic colour physics and colour interpretation by the human visual system, before discussing the importance of colour in pathology. The process of colour calibration and its application to pathology are also included, as well as a summary of the current guidelines and recommendations regarding colour in digital pathology. © 2016 John Wiley & Sons Ltd.
Colony image acquisition and genetic segmentation algorithm and colony analyses
NASA Astrophysics Data System (ADS)
Wang, W. X.
2012-01-01
Colony analysis is used in a large number of industries such as food, dairy, beverages, hygiene, environmental monitoring, water, toxicology and sterility testing. To reduce labor and increase analysis accuracy, many researchers and developers have worked on image analysis systems. The main problems in such systems are image acquisition, image segmentation and image analysis. In this paper, to acquire colony images of good quality, an illumination box was constructed in which the distances between the lights and the dish, the camera lens and the lights, and the camera lens and the dish are adjusted optimally. Image segmentation is based on a genetic approach that allows one to treat the segmentation problem as a global optimization. After image pre-processing and image segmentation, the colony analyses are performed. The colony image analysis consists of (1) basic colony parameter measurements; (2) colony size analysis; (3) colony shape analysis; and (4) colony surface measurements. All of the above visual colony parameters can be selected and combined to form new engineering parameters, and the colony analysis can be applied to different applications.
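As a toy version of casting segmentation as a global optimization solved by a genetic approach, the sketch below evolves a single global threshold to maximize Otsu's between-class variance; the actual system is more elaborate, and all parameters here are assumptions:

```python
import numpy as np

def between_class_variance(image, t):
    """Fitness of threshold t: Otsu's between-class variance."""
    fg, bg = image[image > t], image[image <= t]
    if fg.size == 0 or bg.size == 0:
        return 0.0
    w1, w2 = fg.size / image.size, bg.size / image.size
    return w1 * w2 * (fg.mean() - bg.mean()) ** 2

def ga_threshold(image, pop=20, gens=40, seed=0):
    """Tiny genetic search over thresholds 0..255: tournament
    selection, midpoint crossover, random mutation."""
    rng = np.random.default_rng(seed)
    ts = rng.integers(0, 256, pop)
    for _ in range(gens):
        fit = np.array([between_class_variance(image, t) for t in ts])
        a, b = rng.integers(0, pop, (2, pop))          # tournaments
        parents = np.where(fit[a] > fit[b], ts[a], ts[b])
        children = (parents + rng.permutation(parents)) // 2
        mutants = rng.random(pop) < 0.1
        children[mutants] = rng.integers(0, 256, mutants.sum())
        ts = children
    fit = np.array([between_class_variance(image, t) for t in ts])
    return int(ts[np.argmax(fit)])
```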
WE-DE-206-02: MRI Hardware - Magnet, Gradient, RF Coils
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kocharian, A.
Magnetic resonance imaging (MRI) has become an essential part of clinical imaging due to its ability to render high soft-tissue contrast. Instead of ionizing radiation, MRI uses strong magnetic fields, radio-frequency waves and field gradients to create diagnostically useful images. It can be used to image the anatomy as well as functional and physiological activities within the human body. Knowledge of the basic physical principles underlying MRI acquisition is vitally important to successful image production and proper image interpretation. This lecture will give an overview of the spin physics, the imaging principle of MRI, the hardware of the MRI scanner, and various pulse sequences and their applications. It aims to provide a conceptual foundation for understanding the image formation process of a clinical MRI scanner. Learning Objectives: Understand the origin of the MR signal and contrast at the spin-physics level. Understand the main hardware components of an MRI scanner and their purposes. Understand the steps of MR image formation, including spatial encoding and image reconstruction. Understand the main kinds of MR pulse sequences and their characteristics.
WE-DE-206-04: MRI Pulse Sequences - Spin Echo, Gradient Echo, EPI, Non-Cartesia
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pooley, R.
Magnetic resonance imaging (MRI) has become an essential part of clinical imaging due to its ability to render high soft-tissue contrast. Instead of ionizing radiation, MRI uses strong magnetic fields, radio-frequency waves and field gradients to create diagnostically useful images. It can be used to image the anatomy as well as functional and physiological activities within the human body. Knowledge of the basic physical principles underlying MRI acquisition is vitally important to successful image production and proper image interpretation. This lecture will give an overview of the spin physics, the imaging principle of MRI, the hardware of the MRI scanner, and various pulse sequences and their applications. It aims to provide a conceptual foundation for understanding the image formation process of a clinical MRI scanner. Learning Objectives: Understand the origin of the MR signal and contrast at the spin-physics level. Understand the main hardware components of an MRI scanner and their purposes. Understand the steps of MR image formation, including spatial encoding and image reconstruction. Understand the main kinds of MR pulse sequences and their characteristics.
WE-DE-206-01: MRI Signal in Biological Tissues - Proton, Spin, T1, T2, T2*
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorny, K.
Magnetic resonance imaging (MRI) has become an essential part of clinical imaging due to its ability to render high soft-tissue contrast. Instead of ionizing radiation, MRI uses strong magnetic fields, radio-frequency waves and field gradients to create diagnostically useful images. It can be used to image the anatomy as well as functional and physiological activities within the human body. Knowledge of the basic physical principles underlying MRI acquisition is vitally important to successful image production and proper image interpretation. This lecture will give an overview of the spin physics, the imaging principle of MRI, the hardware of the MRI scanner, and various pulse sequences and their applications. It aims to provide a conceptual foundation for understanding the image formation process of a clinical MRI scanner. Learning Objectives: Understand the origin of the MR signal and contrast at the spin-physics level. Understand the main hardware components of an MRI scanner and their purposes. Understand the steps of MR image formation, including spatial encoding and image reconstruction. Understand the main kinds of MR pulse sequences and their characteristics.
NASA Technical Reports Server (NTRS)
1987-01-01
Used to detect eye problems in children through analysis of retinal reflexes, the system incorporates image processing techniques. VISISCREEN's photorefractor is basically a 35-millimeter camera with a telephoto lens and an electronic flash. By making a color photograph, the system can test the human eye for refractive error and obstructions in the cornea or lens. Ocular alignment problems are detected by imaging both eyes simultaneously. The electronic flash sends light into the eyes, and the light is reflected from the retina back to the camera lens. The photorefractor analyzes the retinal reflexes generated by the subject's response to the flash and produces an image of the subject's eyes in which the pupils are variously colored. The nature of a defect, where one exists, is identifiable by a trained observer's visual examination.
Multidirectional Scanning Model, MUSCLE, to Vectorize Raster Images with Straight Lines
Karas, Ismail Rakip; Bayram, Bulent; Batuk, Fatmagul; Akay, Abdullah Emin; Baz, Ibrahim
2008-01-01
This paper presents a new model, MUSCLE (Multidirectional Scanning for Line Extraction), for automatic vectorization of raster images with straight lines. The algorithm implements line thinning and simple neighborhood methods to perform vectorization. The model allows users to define criteria that are crucial to the vectorization process. Various raster images can be vectorized, such as township plans, maps, architectural drawings, and machine plans. The algorithm was implemented in an appropriate computer program and tested on a basic application. Results, verified using two well-known vectorization programs (WinTopo and Scan2CAD), indicated that the model can vectorize the specified raster data quickly and accurately. PMID:27879843
Architecture of orogenic belts and convergent zones in Western Ishtar Terra, Venus
NASA Technical Reports Server (NTRS)
Head, James W.; Vorderbruegge, R. W.; Crumpler, L. S.
1989-01-01
Linear mountain belts in Ishtar Terra were recognized from Pioneer-Venus topography, and later Arecibo images showed banded terrain interpreted to represent folds. Subsequent analyses showed that the mountains represented orogenic belts, and that each had somewhat different features and characteristics. Orogenic belts are regions of focused shortening and compressional deformation and thus provide evidence for the nature of such deformation, processes of crustal thickening (brittle, ductile), and processes of crustal loss. Such information is important in understanding the nature of convergent zones on Venus (underthrusting, imbrication, subduction), the implications for rates of crustal recycling, and the nature of environments of melting and petrogenesis. The basic elements of four convergent zones and orogenic belts in western Ishtar Terra are identified and examined; their architecture (the manner in which the elements are arrayed) and their relationships are then assessed. The basic nomenclature of the convergent zones is given.
Disorders of emotional processing in amyotrophic lateral sclerosis.
Sedda, Anna
2014-12-01
Amyotrophic lateral sclerosis (ALS) is a degenerative brain disease characterized by motor, behavioural and cognitive deficits. Only recently have emotional processing disorders been shown in this disease. The interest in affective processing in ALS is growing, given that basic emotion impairments could affect coping strategies and mood. Studies explore both basic emotion recognition and social cognition. Results are congruent on arousal and valence detection impairments, independently of the stimulus modality (verbal or visual). Further, recognition of facial expressions of anger, sadness and disgust is impaired in ALS, even when cognition is preserved. Clinical features such as type of onset and severity of the disease could be the cause of the heterogeneity in emotional deficit profiles between patients. Finally, a study employing diffusion tensor imaging showed that emotional dysfunctions in ALS are related to impairments of right-hemispheric connective bundles, involving the inferior longitudinal fasciculus and the inferior fronto-occipital fasciculus. Research on emotional processing in ALS is still in its infancy and results are mixed. Future research including more detailed clinical profiles of patients and measures of brain connectivity will provide useful information to understand the heterogeneity of results in ALS.
MULTI: a shared memory approach to cooperative molecular modeling.
Darden, T; Johnson, P; Smith, H
1991-03-01
A general purpose molecular modeling system, MULTI, based on the UNIX shared memory and semaphore facilities for interprocess communication is described. In addition to the normal querying or monitoring of geometric data, MULTI also provides processes for manipulating conformations, and for displaying peptide or nucleic acid ribbons, Connolly surfaces, close nonbonded contacts, crystal-symmetry related images, least-squares superpositions, and so forth. This paper outlines the basic techniques used in MULTI to ensure cooperation among these specialized processes, and then describes how they can work together to provide a flexible modeling environment.
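MULTI's shared-memory coupling of cooperating processes can be sketched with today's standard-library equivalents; a minimal Python analogue of the idea, not the original UNIX System V implementation:

```python
import numpy as np
from multiprocessing import Process
from multiprocessing.shared_memory import SharedMemory

N_ATOMS = 100

def monitor(shm_name):
    """A cooperating process: attach to the shared coordinate block
    and report the geometric centre (a stand-in for a query module)."""
    shm = SharedMemory(name=shm_name)
    coords = np.ndarray((N_ATOMS, 3), dtype=np.float64, buffer=shm.buf)
    print("centroid:", coords.mean(axis=0))
    shm.close()

if __name__ == "__main__":
    shm = SharedMemory(create=True, size=N_ATOMS * 3 * 8)
    coords = np.ndarray((N_ATOMS, 3), dtype=np.float64, buffer=shm.buf)
    coords[:] = np.random.rand(N_ATOMS, 3)     # the shared "model"
    p = Process(target=monitor, args=(shm.name,))
    p.start(); p.join()
    shm.close(); shm.unlink()
```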
Low Temperature Performance of High-Speed Neural Network Circuits
NASA Technical Reports Server (NTRS)
Duong, T.; Tran, M.; Daud, T.; Thakoor, A.
1995-01-01
Artificial neural networks, derived from their biological counterparts, offer a new and enabling computing paradigm especially suitable for tasks such as image and signal processing with feature classification/object recognition, global optimization, and adaptive control. When implemented in fully parallel electronic hardware, they offer an orders-of-magnitude speed advantage. The basic building blocks of the new architecture are the processing elements, called neurons, implemented as nonlinear operational amplifiers with a sigmoidal transfer function, interconnected through weighted connections, called synapses, implemented using circuitry for weight storage and multiply functions in an analog, digital, or hybrid scheme.
Geocoded data structures and their applications to Earth science investigations
NASA Technical Reports Server (NTRS)
Goldberg, M.
1984-01-01
A geocoded data structure is a means for digitally representing a geographically referenced map or image. The characteristics of representative cellular, linked, and hybrid geocoded data structures are reviewed. The data processing requirements of Earth science projects at the Goddard Space Flight Center and the basic tools of geographic data processing are described. Specific ways that new geocoded data structures can be used to adapt these tools to scientists' needs are presented. These include: expanding analysis and modeling capabilities; simplifying the merging of data sets from diverse sources; and saving computer storage space.
Pawłowska, Monika; Kalka, Dorota
2015-01-01
Obesity is a constantly escalating problem in all age groups. In the face of ubiquitous images of food and colourful advertisements of high-calorie meals and beverages, it is necessary to examine the role of memory and attention mechanisms in the processing of these stimuli. Knowledge of this subject will significantly contribute to the improvement of obesity prevention and management programs designed to prevent secondary psychological difficulties, including depression. This paper presents a cognitive-motivational model of obesity, according to which the description of the mechanisms of eating-disorder occurrence should include not only motivational factors but also cognitive ones. The paper presents theoretical perspectives on the problem of obesity irrespective of its origin, as well as the latest empirical reports in this field. The survey demonstrates the lack of clear research findings on the processing of high- and low-calorie food images by persons with excess weight. It seems that knowledge of the basic mechanisms involved in the processing of these stimuli, and further exploration of this phenomenon, will make it possible to improve programs whose objective is to prevent obesity.
Real time non invasive imaging of fatty acid uptake in vivo
Henkin, Amy H.; Cohen, Allison S.; Dubikovskaya, Elena A.; Park, Hyo Min; Nikitin, Gennady F.; Auzias, Mathieu G.; Kazantzis, Melissa; Bertozzi, Carolyn R.; Stahl, Andreas
2012-01-01
Detection and quantification of fatty acid fluxes in animal model systems following physiological, pathological, or pharmacological challenges is key to our understanding of complex metabolic networks, as these macronutrients also activate transcription factors and modulate signaling cascades, including insulin sensitivity. To enable non-invasive, real-time, spatiotemporal quantitative imaging of fatty acid fluxes in animals, we created a bioactivatable molecular imaging probe based on long-chain fatty acids conjugated to a reporter molecule (luciferin). We show that this probe faithfully recapitulates cellular fatty acid uptake and can be used in animal systems as a valuable tool to localize and quantify, in real time, lipid fluxes such as intestinal fatty acid absorption and brown adipose tissue activation. This imaging approach should further our understanding of basic metabolic processes and pathological alterations in multiple disease models. PMID:22928772
EVALUATION OF REGISTRATION, COMPRESSION AND CLASSIFICATION ALGORITHMS
NASA Technical Reports Server (NTRS)
Jayroe, R. R.
1994-01-01
Several types of algorithms are generally used to process digital imagery such as Landsat data. The most commonly used algorithms perform the task of registration, compression, and classification. Because there are different techniques available for performing registration, compression, and classification, imagery data users need a rationale for selecting a particular approach to meet their particular needs. This collection of registration, compression, and classification algorithms was developed so that different approaches could be evaluated and the best approach for a particular application determined. Routines are included for six registration algorithms, six compression algorithms, and two classification algorithms. The package also includes routines for evaluating the effects of processing on the image data. This collection of routines should be useful to anyone using or developing image processing software. Registration of image data involves the geometrical alteration of the imagery. Registration routines available in the evaluation package include image magnification, mapping functions, partitioning, map overlay, and data interpolation. The compression of image data involves reducing the volume of data needed for a given image. Compression routines available in the package include adaptive differential pulse code modulation, two-dimensional transforms, clustering, vector reduction, and picture segmentation. Classification of image data involves analyzing the uncompressed or compressed image data to produce inventories and maps of areas of similar spectral properties within a scene. The classification routines available include a sequential linear technique and a maximum likelihood technique. The choice of the appropriate evaluation criteria is quite important in evaluating the image processing functions. The user is therefore given a choice of evaluation criteria with which to investigate the available image processing functions. All of the available evaluation criteria basically compare the observed results with the expected results. For the image reconstruction processes of registration and compression, the expected results are usually the original data or some selected characteristics of the original data. For classification processes the expected result is the ground truth of the scene. Thus, the comparison process consists of determining what changes occur in processing, where the changes occur, how much change occurs, and the amplitude of the change. The package includes evaluation routines for performing such comparisons as average uncertainty, average information transfer, chi-square statistics, multidimensional histograms, and computation of contingency matrices. This collection of routines is written in FORTRAN IV for batch execution and has been implemented on an IBM 360 computer with a central memory requirement of approximately 662K of 8 bit bytes. This collection of image processing and evaluation routines was developed in 1979.
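Two of the evaluation criteria named above, contingency matrices and average uncertainty/average information transfer, reduce to a joint histogram and entropies over it; a brief modern sketch (the FORTRAN package's exact definitions may differ):

```python
import numpy as np

def contingency(expected, observed, levels):
    """Normalised joint histogram of expected vs. observed labels."""
    m = np.zeros((levels, levels))
    np.add.at(m, (expected.ravel(), observed.ravel()), 1)
    return m / m.sum()

def average_uncertainty(p):
    """Entropy H = -sum p*log2(p) of a probability distribution."""
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def information_transfer(joint):
    """Mutual information between expected and observed data."""
    return (average_uncertainty(joint.sum(axis=0))
            + average_uncertainty(joint.sum(axis=1))
            - average_uncertainty(joint))

expected = np.random.randint(0, 8, (64, 64))
observed = expected ^ (np.random.rand(64, 64) < 0.05)  # 5% corrupted
print(information_transfer(contingency(expected, observed, 8)))
```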
Preferential responses in amygdala and insula during presentation of facial contempt and disgust.
Sambataro, Fabio; Dimalta, Savino; Di Giorgio, Annabella; Taurisano, Paolo; Blasi, Giuseppe; Scarabino, Tommaso; Giannatempo, Giuseppe; Nardini, Marcello; Bertolino, Alessandro
2006-10-01
Some authors consider contempt to be a basic emotion while others consider it a variant of disgust. The neural correlates of contempt have not so far been specifically contrasted with disgust. Using functional magnetic resonance imaging (fMRI), we investigated the neural networks involved in the processing of facial contempt and disgust in 24 healthy subjects. Facial recognition of contempt was lower than that of disgust and of neutral faces. The imaging data indicated significant activity in the amygdala and in globus pallidus and putamen during processing of contemptuous faces. Bilateral insula and caudate nuclei and left as well as right inferior frontal gyrus were engaged during processing of disgusted faces. Moreover, direct comparisons of contempt vs. disgust yielded significantly different activations in the amygdala. On the other hand, disgusted faces elicited greater activation than contemptuous faces in the right insula and caudate. Our findings suggest preferential involvement of different neural substrates in the processing of facial emotional expressions of contempt and disgust.
Molecular Imaging in the Era of Personalized Medicine
Jung, Kyung-Ho; Lee, Kyung-Han
2015-01-01
Clinical imaging creates visual representations of the body interior for disease assessment. The role of clinical imaging significantly overlaps with that of pathology, and diagnostic workflows largely depend on both fields. The field of clinical imaging is presently undergoing a radical change through the emergence of a new field called molecular imaging. This new technology, which lies at the intersection between imaging and molecular biology, enables noninvasive visualization of biochemical processes at the molecular level within living bodies. Molecular imaging differs from traditional anatomical imaging in that biomarkers known as imaging probes are used to visualize target molecules-of-interest. This ability opens up exciting new possibilities for applications in oncologic, neurological and cardiovascular diseases. Molecular imaging is expected to make major contributions to personalized medicine by allowing earlier diagnosis and predicting treatment response. The technique is also making a huge impact on pharmaceutical development by optimizing preclinical and clinical tests for new drug candidates. This review will describe the basic principles of molecular imaging and will briefly touch on three examples (from an immense list of new techniques) that may contribute to personalized medicine: receptor imaging, angiogenesis imaging, and apoptosis imaging. PMID:25812652
Photo-Carrier Multi-Dynamical Imaging at the Nanometer Scale in Organic and Inorganic Solar Cells.
Fernández Garrillo, Pablo A; Borowik, Łukasz; Caffy, Florent; Demadrille, Renaud; Grévin, Benjamin
2016-11-16
Investigating the photocarrier dynamics in nanostructured and heterogeneous energy materials is of crucial importance from both fundamental and technological points of view. Here, we demonstrate how noncontact atomic force microscopy combined with Kelvin probe force microscopy under frequency-modulated illumination can be used to simultaneously image the surface photopotential dynamics at different time scales with a sub-10 nm lateral resolution. The basic principle of the method consists in the acquisition of spectroscopic curves of the surface potential as a function of the illumination frequency modulation on a two-dimensional grid. We show how this frequency-spectroscopy can be used to probe simultaneously the charging rate and several decay processes involving short-lived and long-lived carriers. With this approach, dynamical images of the trap-filling, trap-delayed recombination and nongeminate recombination processes have been acquired in nanophase segregated organic donor-acceptor bulk heterojunction thin films. Furthermore, the spatial variation of the minority carrier lifetime has been imaged in polycrystalline silicon thin films. These results establish two-dimensional multidynamical photovoltage imaging as a universal tool for local investigations of the photocarrier dynamics in photoactive materials and devices.
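The frequency-spectroscopy idea can be caricatured with a toy kinetic model: under square-wave illumination, a slowly decaying photopotential cannot follow fast modulation, so the time-averaged signal versus modulation frequency encodes the rise and decay times. All values below are illustrative assumptions, not the authors' data or analysis:

```python
import numpy as np

def mean_photopotential(freq, v_max=1.0, tau_rise=1e-6, tau_decay=1e-3,
                        n_periods=50, steps=2000):
    """Time-averaged photopotential under square-wave illumination at
    `freq`, for simple exponential rise/decay kinetics (exact
    per-step exponential update, averaged over the final period)."""
    dt = 1.0 / (freq * steps)
    v, total = 0.0, 0.0
    for period in range(n_periods):
        for i in range(steps):
            light_on = i < steps // 2
            tau = tau_rise if light_on else tau_decay
            target = v_max if light_on else 0.0
            v = target + (v - target) * np.exp(-dt / tau)
            if period == n_periods - 1:
                total += v
    return total / steps

for f in (1e1, 1e3, 1e5):                  # Hz
    print(f, mean_photopotential(f))       # average rises with frequency
```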
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krumeich, F., E-mail: krumeich@inorg.chem.ethz.ch; Mueller, E.; Wepf, R.A.
While HRTEM is the well-established method to characterize the structure of dodecagonal tantalum (vanadium) telluride quasicrystals and their periodic approximants, phase-contrast imaging performed on an aberration-corrected scanning transmission electron microscope (STEM) represents a favorable alternative. The (Ta,V)151Te74 clusters, the basic structural unit in all these phases, can be visualized with high resolution. A dependence of the image contrast on defocus and specimen thickness has been observed. In thin areas, the projected crystal potential is basically imaged with either dark or bright contrast at two defocus values close to Scherzer defocus, as confirmed by image simulations utilizing the principle of reciprocity. Models for square-triangle tilings describing the arrangement of the basic clusters can be derived from such images. Graphical abstract: PC-STEM image of a (Ta,V)151Te74 cluster. Highlights: Cs-corrected STEM is applied for the characterization of dodecagonal quasicrystals. The projected potential of the structure is mirrored in the images. Phase-contrast STEM imaging depends on defocus and thickness. For simulations of phase-contrast STEM images, the reciprocity theorem is applicable.
The Quanta Image Sensor: Every Photon Counts
Fossum, Eric R.; Ma, Jiaju; Masoodian, Saleh; Anzagira, Leo; Zizza, Rachel
2016-01-01
The Quanta Image Sensor (QIS) was conceived when contemplating shrinking pixel sizes and storage capacities, and the steady increase in digital processing power. In the single-bit QIS, the output of each field is a binary bit plane, where each bit represents the presence or absence of at least one photoelectron in a photodetector. A series of bit planes is generated through high-speed readout, and a kernel or “cubicle” of bits (x, y, t) is used to create a single output image pixel. The size of the cubicle can be adjusted post-acquisition to optimize image quality. The specialized sub-diffraction-limit photodetectors in the QIS are referred to as “jots” and a QIS may have a gigajot or more, read out at 1000 fps, for a data rate exceeding 1 Tb/s. Basically, we are trying to count photons as they arrive at the sensor. This paper reviews the QIS concept and its imaging characteristics. Recent progress towards realizing the QIS for commercial and scientific purposes is discussed. This includes implementation of a pump-gate jot device in a 65 nm CIS BSI process yielding read noise as low as 0.22 e− r.m.s. and conversion gain as high as 420 µV/e−, power efficient readout electronics, currently as low as 0.4 pJ/b in the same process, creating high dynamic range images from jot data, and understanding the imaging characteristics of single-bit and multi-bit QIS devices. The QIS represents a possible major paradigm shift in image capture. PMID:27517926
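The cubicle summation that turns jot bit planes into image pixels is a plain reshape-and-sum; a minimal sketch with an assumed 4 x 4 x 16 kernel:

```python
import numpy as np

def qis_image(bit_planes, kx=4, ky=4, kt=16):
    """Sum single-bit jot planes over (x, y, t) 'cubicles' to form
    pixel values; bit_planes has shape (T, Y, X) with 0/1 entries."""
    t, y, x = bit_planes.shape
    c = bit_planes[:t - t % kt, :y - y % ky, :x - x % kx]
    c = c.reshape(t // kt, kt, y // ky, ky, x // kx, kx)
    return c.sum(axis=(1, 3, 5))            # photon counts per pixel

planes = (np.random.rand(64, 256, 256) < 0.2).astype(np.uint8)
frames = qis_image(planes)
print(frames.shape, frames.max())           # counts up to kx*ky*kt = 256
```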
An investigation and conceptual design of a holographic starfield and landmark tracker
NASA Technical Reports Server (NTRS)
Welch, J. D.
1973-01-01
The analysis, experiments, and design effort of this study have supported the feasibility of the basic holographic tracker concept. Image intensifiers and photoplastic recording materials were examined, along with a Polaroid rapid-process silver halide material. A two-reference-beam coherent optical matched filter technique was used for multiplexing spatial frequency filters for starfields. A 1-watt HeNe laser and an electro-optical readout are also considered.
Rett syndrome: basic features of visual processing-a pilot study of eye-tracking.
Djukic, Aleksandra; Valicenti McDermott, Maria; Mavrommatis, Kathleen; Martins, Cristina L
2012-07-01
Consistently observed "strong eye gaze" has not been validated as a means of communication in girls with Rett syndrome, ubiquitously affected by apraxia, unable to reply either verbally or manually to questions during formal psychologic assessment. We examined nonverbal cognitive abilities and basic features of visual processing (visual discrimination attention/memory) by analyzing patterns of visual fixation in 44 girls with Rett syndrome, compared with typical control subjects. To determine features of visual fixation patterns, multiple pictures (with the location of the salient and presence/absence of novel stimuli as variables) were presented on the screen of a TS120 eye-tracker. Of the 44, 35 (80%) calibrated and exhibited meaningful patterns of visual fixation. They looked longer at salient stimuli (cartoon, 2.8 ± 2 seconds S.D., vs shape, 0.9 ± 1.2 seconds S.D.; P = 0.02), regardless of their position on the screen. They recognized novel stimuli, decreasing the fixation time on the central image when another image appeared on the periphery of the slide (2.7 ± 1 seconds S.D. vs 1.8 ± 1 seconds S.D., P = 0.002). Eye-tracking provides a feasible method for cognitive assessment and new insights into the "hidden" abilities of individuals with Rett syndrome. Copyright © 2012 Elsevier Inc. All rights reserved.
Using Microsoft PowerPoint as an Astronomical Image Analysis Tool
NASA Astrophysics Data System (ADS)
Beck-Winchatz, Bernhard
2006-12-01
Engaging students in the analysis of authentic scientific data is an effective way to teach them about the scientific process and to develop their problem solving, teamwork and communication skills. In astronomy several image processing and analysis software tools have been developed for use in school environments. However, the practical implementation in the classroom is often difficult because the teachers may not have the comfort level with computers necessary to install and use these tools, they may not have adequate computer privileges and/or support, and they may not have the time to learn how to use specialized astronomy software. To address this problem, we have developed a set of activities in which students analyze astronomical images using basic tools provided in PowerPoint. These include measuring sizes, distances, and angles, and blinking images. In contrast to specialized software, PowerPoint is broadly available on school computers. Many teachers are already familiar with PowerPoint, and the skills developed while learning how to analyze astronomical images are highly transferable. We will discuss several practical examples of measurements, including the following: -Variations in the distances to the sun and moon from their angular sizes -Magnetic declination from images of shadows -Diameter of the moon from lunar eclipse images -Sizes of lunar craters -Orbital radii of the Jovian moons and mass of Jupiter -Supernova and comet searches -Expansion rate of the universe from images of distant galaxies
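For instance, the Jovian-moon measurement reduces to Kepler's third law once the orbital radius and period are read off the image sequence; a worked sketch using textbook values for Europa:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def primary_mass(a_m, period_s):
    """Kepler's third law: M = 4*pi^2 * a^3 / (G * T^2)."""
    return 4 * math.pi**2 * a_m**3 / (G * period_s**2)

a = 6.71e8            # Europa's orbital radius, ~671,000 km
T = 3.55 * 86400      # Europa's period, ~3.55 days in seconds
print(f"Jupiter mass ~ {primary_mass(a, T):.2e} kg")   # ~1.9e27 kg
```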
Quantum image median filtering in the spatial domain
NASA Astrophysics Data System (ADS)
Li, Panchi; Liu, Xiande; Xiao, Hong
2018-03-01
Spatial filtering is a principal tool used in image processing for a broad spectrum of applications. Median filtering has become a prominent representative of spatial filtering because of its excellent performance in noise reduction. Although filtering of quantum images in the frequency domain has been described in the literature, and there is a one-to-one correspondence between linear spatial filters and filters in the frequency domain, median filtering is a nonlinear process that cannot be achieved in the frequency domain. We therefore investigated the spatial filtering of quantum images, focusing on the design of the quantum median filter and its application to image de-noising. To this end, we first present the quantum circuits for three basic modules (Cycle Shift, Comparator, and Swap), and then design two composite modules (Sort and Median Calculation). We next construct a complete quantum circuit that implements the median filtering task and present the results of several simulation experiments on grayscale images with different noise patterns. Although the experimental results show that the proposed scheme has almost the same noise suppression capacity as its classical counterpart, the complexity analysis shows that the proposed scheme reduces the computational complexity of the classical median filter from an exponential function of the image size n to a second-order polynomial in n, so that the classical method can be sped up.
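For reference, the classical counterpart the authors benchmark against is an ordinary sliding-window median; a minimal sketch of de-noising salt-and-pepper corruption:

```python
import numpy as np
from scipy.ndimage import median_filter

img = np.random.randint(0, 256, (64, 64)).astype(np.uint8)
noisy = img.copy()
mask = np.random.rand(*img.shape) < 0.05          # 5% salt-and-pepper
noisy[mask] = np.random.choice([0, 255], size=mask.sum()).astype(np.uint8)
denoised = median_filter(noisy, size=3)           # 3x3 median window
```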
Image based performance analysis of thermal imagers
NASA Astrophysics Data System (ADS)
Wegner, D.; Repasi, E.
2016-05-01
Due to advances in technology, modern thermal imagers resemble sophisticated image processing systems in functionality. Advanced signal and image processing tools enclosed in the camera body extend the basic image-capturing capability of thermal cameras, in order to enhance the display presentation of the captured scene or of specific scene details. Usually, the implemented methods are proprietary company expertise, distributed without extensive documentation. This makes the comparison of thermal imagers, especially ones from different companies, a difficult task (or at least a very time-consuming and expensive one, e.g. requiring a field trial and/or an observer trial). For example, a thermal camera equipped with turbulence mitigation capability is such a closed system. The Fraunhofer IOSB has started to build up a system for testing thermal imagers by image-based methods in the lab environment. This will extend our capability of measuring the classical IR-system parameters (e.g. MTF, MTDP, etc.) in the lab. The system is set up around the IR-scene projector, which is necessary for the thermal projection of an image sequence for the IR camera under test. The same set of thermal test sequences can be presented to every unit under test; for turbulence mitigation tests, this could be e.g. the same turbulence sequence. During system tests, gradual variation of input parameters (e.g. thermal contrast) can be applied. First ideas on test-scene selection and on how to assemble an imaging suite (a set of image sequences) for the analysis of imaging thermal systems containing such black boxes in the image-forming path are discussed.
Onboard data-processing architecture of the soft X-ray imager (SXI) on NeXT satellite
NASA Astrophysics Data System (ADS)
Ozaki, Masanobu; Dotani, Tadayasu; Tsunemi, Hiroshi; Hayashida, Kiyoshi; Tsuru, Takeshi G.
2004-09-01
NeXT is the X-ray satellite proposed for the next Japanese space science mission. While the satellite's total mass and launch vehicle are similar to those of the preceding Astro-E2 satellite, the sensitivity is much improved, which requires all the components to be lighter and faster than in previous architectures. This paper describes the data processing architecture of the X-ray CCD camera system SXI (Soft X-ray Imager), which is the top half of the WXI (Wide-band X-ray Imager), whose sensitivity spans 0.2-80 keV. The system is basically a variation of the Astro-E2 XIS, but the event extraction is much faster in order to fulfill the requirements arising from the large effective area and short exposure period. At the same time, the data transfer lines between components are redesigned to reduce the number and mass of the wire harnesses that limit the flexibility of the component distribution.
Technical aspects of dental CBCT: state of the art
Araki, K; Siewerdsen, J H; Thongvigitmanee, S S
2015-01-01
As CBCT is widely used in dental and maxillofacial imaging, it is important for users as well as referring practitioners to understand the basic concepts of this imaging modality. This review covers the technical aspects of each part of the CBCT imaging chain. First, an overview is given of the hardware of a CBCT device. The principles of cone beam image acquisition and image reconstruction are described. Optimization of imaging protocols in CBCT is briefly discussed. Finally, basic and advanced visualization methods are illustrated. Certain topics in this review are applicable to all types of radiographic imaging (e.g. the principle and properties of an X-ray tube), while others are specific to dental CBCT imaging (e.g. advanced visualization techniques). PMID:25263643
Spatially variant morphological restoration and skeleton representation.
Bouaynaya, Nidhal; Charif-Chefchaouni, Mohammed; Schonfeld, Dan
2006-11-01
The theory of spatially variant (SV) mathematical morphology is used to extend and analyze two important image processing applications: morphological image restoration and skeleton representation of binary images. For morphological image restoration, we propose the SV alternating sequential filters and SV median filters. We establish the relation of SV median filters to the basic SV morphological operators (i.e., SV erosions and SV dilations). For skeleton representation, we present a general framework for the SV morphological skeleton representation of binary images. We study the properties of the SV morphological skeleton representation and derive conditions for its invertibility. We also develop an algorithm for the implementation of the SV morphological skeleton representation of binary images. The latter algorithm is based on the optimal construction of the SV structuring element mapping designed to minimize the cardinality of the SV morphological skeleton representation. Experimental results show the dramatic improvement in the performance of the SV morphological restoration and SV morphological skeleton representation algorithms in comparison to their translation-invariant counterparts.
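A spatially variant erosion differs from the classical one only in that the structuring element changes from pixel to pixel; a naive sketch for binary images using a per-pixel square element (the SV mapping here is an assumption, for illustration):

```python
import numpy as np

def sv_erosion(image, radius_map):
    """Erode a binary image where each pixel uses its own square
    structuring element of half-width radius_map[y, x]; a constant
    map recovers the translation-invariant erosion."""
    h, w = image.shape
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            r = int(radius_map[y, x])
            win = image[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            out[y, x] = win.min()    # erosion: every pixel in the window must be set
    return out

img = np.ones((32, 32), dtype=np.uint8)
img[0, :] = 0
radii = np.fromfunction(lambda y, x: 1 + (y > 16), (32, 32)).astype(int)
print(sv_erosion(img, radii).sum())
```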
GPR data processing computer software for the PC
Lucius, Jeffrey E.; Powers, Michael H.
2002-01-01
The computer software described in this report is designed for processing ground penetrating radar (GPR) data on Intel-compatible personal computers running the MS-DOS operating system or MS Windows 3.x/95/98/ME/2000. The earliest versions of these programs were written starting in 1990. At that time, commercially available GPR software did not meet the processing and display requirements of the USGS. Over the years, the programs were refined and new features and programs were added. The collection of computer programs presented here can perform all basic processing of GPR data, including velocity analysis and generation of CMP stacked sections and data volumes, as well as create publication quality data images.
Language learning impairments: integrating basic science, technology, and remediation.
Tallal, P; Merzenich, M M; Miller, S; Jenkins, W
1998-11-01
One of the fundamental goals of the modern field of neuroscience is to understand how neuronal activity gives rise to higher cortical function. However, to bridge the gap between neurobiology and behavior, we must understand higher cortical functions at the behavioral level at least as well as we have come to understand neurobiological processes at the cellular and molecular levels. This is certainly the case in the study of speech processing, where critical studies of behavioral dysfunction have provided key insights into the basic neurobiological mechanisms relevant to speech perception and production. Much of this progress derives from a detailed analysis of the sensory, perceptual, cognitive, and motor abilities of children who fail to acquire speech, language, and reading skills normally within the context of otherwise normal development. Current research now shows that a dysfunction in normal phonological processing, which is critical to the development of oral and written language, may derive, at least in part, from difficulties in perceiving and producing basic sensory-motor information in rapid succession--within tens of ms (see Tallal et al. 1993a for a review). There is now substantial evidence supporting the hypothesis that basic temporal integration processes play a fundamental role in establishing neural representations for the units of speech (phonemes), which must be segmented from the (continuous) speech stream and combined to form words, in order for the normal development of oral and written language to proceed. Results from magnetic resonance imaging (MRI) and positron emission tomography (PET) studies, as well as studies of behavioral performance in normal and language impaired children and adults, will be reviewed to support the view that the integration of rapidly changing successive acoustic events plays a primary role in phonological development and disorders. Finally, remediation studies based on this research, coupled with neuroplasticity research, will be presented.
BCB Bonding Technology of Back-Side Illuminated COMS Device
NASA Astrophysics Data System (ADS)
Wu, Y.; Jiang, G. Q.; Jia, S. X.; Shi, Y. M.
2018-03-01
The back-side illuminated CMOS (BSI) sensor is a key device in spaceborne hyperspectral imaging technology. Compared with traditional devices, BSI sensors simplify the path of the incident light and flatten the spectral response, which meets the requirements of quantitative hyperspectral imaging applications. Wafer bonding is the basic technology and key process in the fabrication of BSI sensors. A 6-inch bond between a CMOS wafer and a glass wafer was fabricated, based on the low bonding temperature and high stability of BCB (benzocyclobutene). The influence of different BCB thicknesses on bonding strength was studied. Wafer bonds with high strength, high stability and no bubbles were fabricated by tuning the bonding conditions.
NASA Astrophysics Data System (ADS)
Zhang, Lijuan; Li, Yang; Wang, Junnan; Liu, Ying
2018-03-01
In this paper, we propose a point spread function (PSF) reconstruction method and a joint maximum a posteriori (JMAP) estimation method for adaptive optics (AO) image restoration. Using the JMAP method as the basic principle, we establish the joint log-likelihood function of multi-frame AO images based on Gaussian image noise models. First, combining the observing conditions and the AO system characteristics, a predicted PSF model incorporating the wavefront phase effect is developed; then we derive iterative solution formulas for the AO image based on the proposed algorithm and describe the implementation of the multi-frame joint deconvolution method. We conduct a series of experiments on simulated and real degraded AO images to evaluate the proposed algorithm. Compared with the Wiener iterative blind deconvolution (Wiener-IBD) algorithm and the Richardson-Lucy IBD algorithm, our algorithm achieves better restoration, including higher peak signal-to-noise ratio (PSNR) and Laplacian sum (LS) values. The results have practical value for AO image restoration.
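The two figures of merit mentioned are easy to compute; a sketch in which the LS definition (sum of absolute Laplacian responses as a sharpness measure) is our assumption of the paper's metric:

```python
import numpy as np
from scipy.ndimage import laplace

def psnr(reference, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((reference.astype(float) - restored.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak**2 / mse)

def laplacian_sum(image):
    """Assumed LS metric: total absolute Laplacian response,
    larger values indicating sharper restored edges."""
    return np.abs(laplace(image.astype(float))).sum()
```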
Um, Ki Sung; Kwak, Yun Sik; Cho, Hune; Kim, Il Kon
2005-11-01
A basic assumption of the Health Level Seven (HL7) protocol is 'no limitation on message length'. However, most existing commercial HL7 interface engines do limit message length because they use the string-array method, which runs the entire HL7 message parsing process in main memory. In particular, messages with image and multimedia data create very long string arrays and can cause critical, fatal failures of the computer system. Consequently, such HL7 messages cannot carry the image and multimedia data necessary in modern medical records. This study aims to solve this problem with a 'streaming algorithm' method. This new method for HL7 message parsing uses character-stream objects that process the message character by character between main memory and the hard disk, so that the processing load on main memory is alleviated. The main functions of the new engine are generating, parsing, validating, browsing, sending, and receiving HL7 messages. The engine can also parse and generate XML-formatted HL7 messages. This new HL7 engine successfully exchanged HL7 messages containing 10-megabyte images and discharge summary information between two university hospitals.
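The character-stream idea can be illustrated in a few lines; a hedged sketch (a production engine would also spool large segment payloads to disk rather than hold even one segment in memory):

```python
import io

def read_hl7_segments(stream):
    """Yield HL7 v2 segments one at a time, reading the stream
    character by character so the full message is never held in
    memory as one string."""
    buf = []
    while True:
        ch = stream.read(1)
        if not ch:                       # end of stream
            if buf:
                yield "".join(buf)
            return
        if ch == "\r":                   # HL7 segment terminator
            yield "".join(buf)
            buf = []
        else:
            buf.append(ch)

msg = "MSH|^~\\&|HIS|HOSP|LIS|LAB|202001011200||ADT^A01|1|P|2.3\rPID|1\r"
for seg in read_hl7_segments(io.StringIO(msg)):
    print(seg.split("|")[0])             # MSH, PID
```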
TU-G-303-03: Machine Learning to Improve Human Learning From Longitudinal Image Sets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veeraraghavan, H.
‘Radiomics’ refers to studies that extract a large amount of quantitative information from medical imaging studies as a basis for characterizing a specific aspect of patient health. Radiomics models can be built to address a wide range of outcome predictions, clinical decisions, basic cancer biology, etc. For example, radiomics models can be built to predict the aggressiveness of an imaged cancer, cancer gene expression characteristics (radiogenomics), radiation therapy treatment response, etc. Technically, radiomics brings together quantitative imaging, computer vision/image processing, and machine learning. In this symposium, speakers will discuss approaches to radiomics investigations, including: longitudinal radiomics, radiomics combined with other biomarkers (‘pan-omics’), radiomics for various imaging modalities (CT, MRI, and PET), and the use of registered multi-modality imaging datasets as a basis for radiomics. There are many challenges to the eventual use of radiomics-derived methods in clinical practice, including: standardization and robustness of selected metrics, accruing the data required, building and validating the resulting models, registering longitudinal data that often involve significant patient changes, reliable automated cancer segmentation tools, etc. Despite the hurdles, results achieved so far indicate the tremendous potential of this general approach to quantifying and using data from medical images. Specific applications of radiomics to be presented in this symposium will include: the longitudinal analysis of patients with low-grade gliomas; automatic detection and assessment of patients with metastatic bone lesions; image-based monitoring of patients with growing lymph nodes; predicting radiotherapy outcomes using multi-modality radiomics; and studies relating radiomics with genomics in lung cancer and glioblastoma. Learning Objectives: Understanding the basic image features that are often used in radiomic models. Understanding requirements for reliable radiomic models, including robustness of metrics, adequate predictive accuracy, and generalizability. Understanding the methodology behind radiomic-genomic (‘radiogenomics’) correlations. Research supported by NIH (US), CIHR (Canada), and NSERC (Canada)
Simultaneous optical and electrical recording of a single ion-channel.
Ide, Toru; Takeuchi, Yuko; Aoki, Takaaki; Yanagida, Toshio
2002-10-01
In recent years, the single-molecule imaging technique has proven to be a valuable tool for solving many basic problems in biophysics. The technique used to measure single-molecule functions was initially developed to study the electrophysiological properties of channel proteins. However, the technology to visualize single channels at work has not received as much attention. In this study, we have, for the first time, simultaneously measured the optical and electrical properties of single-channel proteins. The large-conductance calcium-activated potassium channel (BK channel), labeled with fluorescent dye molecules, was incorporated into a planar bilayer membrane, and the fluorescence image was captured with a total internal reflection fluorescence microscope simultaneously with single-channel current recording. This innovative technology will greatly advance the study of channel proteins as well as of signal transduction processes that involve ion permeation.
A Comparative Study : Microprogrammed Vs Risc Architectures For Symbolic Processing
NASA Astrophysics Data System (ADS)
Heudin, J. C.; Metivier, C.; Demigny, D.; Maurin, T.; Zavidovique, B.; Devos, F.
1987-05-01
It is often claimed that conventional computers are not well suited for human-like tasks: vision (image processing), intelligence (symbolic processing), etc. In the particular case of Artificial Intelligence, dynamic type-checking is one example of a basic task that must be improved. The solution implemented in most Lisp workstations consists of a microprogrammed architecture with a tagged memory. Another way to gain efficiency is to design an instruction set well suited to symbolic processing, which reduces the semantic gap between the high-level language and the machine code. In this framework, the RISC concept provides a convenient approach to studying new architectures for symbolic processing. This paper compares both approaches and describes our project of designing a compact symbolic processor for Artificial Intelligence applications.
Dynamic single photon emission computed tomography—basic principles and cardiac applications
Gullberg, Grant T; Reutter, Bryan W; Sitek, Arkadiusz; Maltz, Jonathan S; Budinger, Thomas F
2011-01-01
The very nature of nuclear medicine, the visual representation of injected radiopharmaceuticals, implies imaging of dynamic processes such as the uptake and wash-out of radiotracers from body organs. For years, nuclear medicine has been touted as the modality of choice for evaluating function in health and disease. This evaluation is greatly enhanced using single photon emission computed tomography (SPECT), which permits three-dimensional (3D) visualization of tracer distributions in the body. However, to fully realize the potential of the technique requires the imaging of in vivo dynamic processes of flow and metabolism. Tissue motion and deformation must also be addressed. Absolute quantification of these dynamic processes in the body has the potential to improve diagnosis. This paper presents a review of advancements toward the realization of the potential of dynamic SPECT imaging and a brief history of the development of the instrumentation. A major portion of the paper is devoted to the review of special data processing methods that have been developed for extracting kinetics from dynamic cardiac SPECT data acquired using rotating detector heads that move as radiopharmaceuticals exchange between biological compartments. Recent developments in multi-resolution spatiotemporal methods enable one to estimate kinetic parameters of compartment models of dynamic processes using data acquired from a single camera head with slow gantry rotation. The estimation of kinetic parameters directly from projection measurements improves bias and variance over the conventional method of first reconstructing 3D dynamic images, generating time–activity curves from selected regions of interest and then estimating the kinetic parameters from the generated time–activity curves. Although the potential applications of SPECT for imaging dynamic processes have not been fully realized in the clinic, it is hoped that this review illuminates the potential of SPECT for dynamic imaging, especially in light of new developments that enable measurement of dynamic processes directly from projection measurements. PMID:20858925
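For orientation, the 'conventional method' the review contrasts against — fitting a compartment model to a time-activity curve generated from a reconstructed region of interest — can be sketched as follows. The one-tissue model, the assumed arterial input function, and the noise level are illustrative only; the paper's direct-from-projections estimation is not shown.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 20, 60)                     # minutes
dt = t[1] - t[0]
Ca = (t / 2.0) * np.exp(-t / 2.0)              # assumed arterial input function

def one_tissue(t, K1, k2):
    """Tissue TAC for a one-tissue compartment model: C = K1 * exp(-k2 t) (x) Ca."""
    return K1 * np.convolve(Ca, np.exp(-k2 * t))[: t.size] * dt

rng = np.random.default_rng(1)
tac = one_tissue(t, 0.8, 0.3) + rng.normal(0, 0.002, t.size)   # noisy ROI curve

(K1, k2), _ = curve_fit(one_tissue, t, tac, p0=(0.5, 0.1))
print(f"K1 = {K1:.3f} /min, k2 = {k2:.3f} /min")
```

The direct methods reviewed in the paper instead embed a model like `one_tissue` inside the projection operator and estimate K1 and k2 from the raw sinogram data, which is what improves bias and variance.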
MO-F-204-00: Preparing for the ABR Diagnostic and Nuclear Medical Physics Exams
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
Adequate, efficient preparation for the ABR Diagnostic and Nuclear Medical Physics exams is key to successfully obtaining ABR certification. Each part of the ABR exam presents its own challenges: Part I: Determine the scope of basic medical physics study material, efficiently review this material, and solve related written questions/problems. Part II: Understand imaging principles, modalities, and systems, including image acquisition, processing, and display. Understand the relationship between imaging techniques, image quality, patient dose and safety, and solve related written questions/problems. Part III: Gain crucial, practical, clinical medical physics experience. Effectively communicate and explain the practice, performance, and significance of all aspects of clinical medical physics. All parts of the ABR exam require specific skill sets and preparation: mastery of basic physics and imaging principles; written problem solving often involving rapid calculation; responding clearly and succinctly to oral questions about the practice, methods, and significance of clinical medical physics. This symposium focuses on the preparation necessary for each part of the ABR exam. Although there is some overlap, the nuclear exam covers a different body of knowledge than the diagnostic exam. A separate speaker will address those unique aspects of the nuclear exam, and how preparing for a second specialty differs from the first. Medical physicists who recently completed each ABR exam portion will share their experiences, insights, and preparation methods to help attendees best prepare for the challenges of each part of the ABR exam. In accordance with ABR exam security policy, no recalls or exam questions will be discussed. Learning Objectives: How to prepare for Part 1 of the ABR exam by determining the scope of basic medical physics study material and related problem solving/calculations. How to prepare for Part 2 of the ABR exam by understanding diagnostic and/or nuclear imaging physics, systems, dosimetry, safety, and related problem solving/calculations. How to prepare for Part 3 of the ABR exam by effectively communicating the practice, methods, and significance of clinical diagnostic and/or nuclear medical physics.
MO-F-204-02: Preparing for Part 2 of the ABR Diagnostic Physics Exam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Szczykutowicz, T.
Adequate, efficient preparation for the ABR Diagnostic and Nuclear Medical Physics exams is key to successfully obtaining ABR certification. Each part of the ABR exam presents its own challenges: Part I: Determine the scope of basic medical physics study material, efficiently review this material, and solve related written questions/problems. Part II: Understand imaging principles, modalities, and systems, including image acquisition, processing, and display. Understand the relationship between imaging techniques, image quality, patient dose and safety, and solve related written questions/problems. Part III: Gain crucial, practical, clinical medical physics experience. Effectively communicate and explain the practice, performance, and significance of all aspects of clinical medical physics. All parts of the ABR exam require specific skill sets and preparation: mastery of basic physics and imaging principles; written problem solving often involving rapid calculation; responding clearly and succinctly to oral questions about the practice, methods, and significance of clinical medical physics. This symposium focuses on the preparation necessary for each part of the ABR exam. Although there is some overlap, the nuclear exam covers a different body of knowledge than the diagnostic exam. A separate speaker will address those unique aspects of the nuclear exam, and how preparing for a second specialty differs from the first. Medical physicists who recently completed each ABR exam portion will share their experiences, insights, and preparation methods to help attendees best prepare for the challenges of each part of the ABR exam. In accordance with ABR exam security policy, no recalls or exam questions will be discussed. Learning Objectives: How to prepare for Part 1 of the ABR exam by determining the scope of basic medical physics study material and related problem solving/calculations. How to prepare for Part 2 of the ABR exam by understanding diagnostic and/or nuclear imaging physics, systems, dosimetry, safety, and related problem solving/calculations. How to prepare for Part 3 of the ABR exam by effectively communicating the practice, methods, and significance of clinical diagnostic and/or nuclear medical physics.
MO-F-204-03: Preparing for Part 3 of the ABR Diagnostic Physics Exam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zambelli, J.
Adequate, efficient preparation for the ABR Diagnostic and Nuclear Medical Physics exams is key to successfully obtaining ABR certification. Each part of the ABR exam presents its own challenges: Part I: Determine the scope of basic medical physics study material, efficiently review this material, and solve related written questions/problems. Part II: Understand imaging principles, modalities, and systems, including image acquisition, processing, and display. Understand the relationship between imaging techniques, image quality, patient dose and safety, and solve related written questions/problems. Part III: Gain crucial, practical, clinical medical physics experience. Effectively communicate and explain the practice, performance, and significance of all aspects of clinical medical physics. All parts of the ABR exam require specific skill sets and preparation: mastery of basic physics and imaging principles; written problem solving often involving rapid calculation; responding clearly and succinctly to oral questions about the practice, methods, and significance of clinical medical physics. This symposium focuses on the preparation necessary for each part of the ABR exam. Although there is some overlap, the nuclear exam covers a different body of knowledge than the diagnostic exam. A separate speaker will address those unique aspects of the nuclear exam, and how preparing for a second specialty differs from the first. Medical physicists who recently completed each ABR exam portion will share their experiences, insights, and preparation methods to help attendees best prepare for the challenges of each part of the ABR exam. In accordance with ABR exam security policy, no recalls or exam questions will be discussed. Learning Objectives: How to prepare for Part 1 of the ABR exam by determining the scope of basic medical physics study material and related problem solving/calculations. How to prepare for Part 2 of the ABR exam by understanding diagnostic and/or nuclear imaging physics, systems, dosimetry, safety, and related problem solving/calculations. How to prepare for Part 3 of the ABR exam by effectively communicating the practice, methods, and significance of clinical diagnostic and/or nuclear medical physics.
MO-F-204-01: Preparing for Part 1 of the ABR Diagnostic Physics Exam
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKenney, S.
Adequate, efficient preparation for the ABR Diagnostic and Nuclear Medical Physics exams is key to successfully obtaining ABR certification. Each part of the ABR exam presents its own challenges: Part I: Determine the scope of basic medical physics study material, efficiently review this material, and solve related written questions/problems. Part II: Understand imaging principles, modalities, and systems, including image acquisition, processing, and display. Understand the relationship between imaging techniques, image quality, patient dose and safety, and solve related written questions/problems. Part III: Gain crucial, practical, clinical medical physics experience. Effectively communicate and explain the practice, performance, and significance of all aspects of clinical medical physics. All parts of the ABR exam require specific skill sets and preparation: mastery of basic physics and imaging principles; written problem solving often involving rapid calculation; responding clearly and succinctly to oral questions about the practice, methods, and significance of clinical medical physics. This symposium focuses on the preparation necessary for each part of the ABR exam. Although there is some overlap, the nuclear exam covers a different body of knowledge than the diagnostic exam. A separate speaker will address those unique aspects of the nuclear exam, and how preparing for a second specialty differs from the first. Medical physicists who recently completed each ABR exam portion will share their experiences, insights, and preparation methods to help attendees best prepare for the challenges of each part of the ABR exam. In accordance with ABR exam security policy, no recalls or exam questions will be discussed. Learning Objectives: How to prepare for Part 1 of the ABR exam by determining the scope of basic medical physics study material and related problem solving/calculations. How to prepare for Part 2 of the ABR exam by understanding diagnostic and/or nuclear imaging physics, systems, dosimetry, safety, and related problem solving/calculations. How to prepare for Part 3 of the ABR exam by effectively communicating the practice, methods, and significance of clinical diagnostic and/or nuclear medical physics.
MO-F-204-04: Preparing for Parts 2 & 3 of the ABR Nuclear Medicine Physics Exam
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacDougall, R.
Adequate, efficient preparation for the ABR Diagnostic and Nuclear Medical Physics exams is key to successfully obtaining ABR certification. Each part of the ABR exam presents its own challenges: Part I: Determine the scope of basic medical physics study material, efficiently review this material, and solve related written questions/problems. Part II: Understand imaging principles, modalities, and systems, including image acquisition, processing, and display. Understand the relationship between imaging techniques, image quality, patient dose and safety, and solve related written questions/problems. Part III: Gain crucial, practical, clinical medical physics experience. Effectively communicate and explain the practice, performance, and significance of all aspects of clinical medical physics. All parts of the ABR exam require specific skill sets and preparation: mastery of basic physics and imaging principles; written problem solving often involving rapid calculation; responding clearly and succinctly to oral questions about the practice, methods, and significance of clinical medical physics. This symposium focuses on the preparation necessary for each part of the ABR exam. Although there is some overlap, the nuclear exam covers a different body of knowledge than the diagnostic exam. A separate speaker will address those unique aspects of the nuclear exam, and how preparing for a second specialty differs from the first. Medical physicists who recently completed each ABR exam portion will share their experiences, insights, and preparation methods to help attendees best prepare for the challenges of each part of the ABR exam. In accordance with ABR exam security policy, no recalls or exam questions will be discussed. Learning Objectives: How to prepare for Part 1 of the ABR exam by determining the scope of basic medical physics study material and related problem solving/calculations. How to prepare for Part 2 of the ABR exam by understanding diagnostic and/or nuclear imaging physics, systems, dosimetry, safety, and related problem solving/calculations. How to prepare for Part 3 of the ABR exam by effectively communicating the practice, methods, and significance of clinical diagnostic and/or nuclear medical physics.
WE-D-213-04: Preparing for Parts 2 & 3 of the ABR Nuclear Medicine Physics Exam
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacDougall, R.
Adequate, efficient preparation for the ABR Diagnostic and Nuclear Medical Physics exams is key to successfully obtaining ABR professional certification. Each part of the ABR exam presents its own challenges: Part I: Determine the scope of basic medical physics study material, efficiently review this material, and solve related written questions/problems. Part II: Understand imaging principles, modalities, and systems, including image acquisition, processing, and display. Understand the relationship between imaging techniques, image quality, patient dose and safety, and solve related written questions/problems. Part III: Gain crucial, practical, clinical medical physics experience. Effectively communicate and explain the practice, performance, and significance of all aspects of clinical medical physics. All three parts of the ABR exam require specific skill sets and preparation: mastery of basic physics and imaging principles; written problem solving often involving rapid calculation; responding clearly and succinctly to oral questions about the practice, methods, and significance of clinical medical physics. This symposium focuses on the preparation and skill sets necessary for each part of the ABR exam. Although there is some overlap, the nuclear exam covers a different body of knowledge than the diagnostic exam. A separate speaker will address those aspects that are unique to the nuclear exam. Medical physicists who have recently completed each part of the ABR exam will share their experiences, insights, and preparation methods to help attendees best prepare for the challenges of each part of the ABR exam. In accordance with ABR exam security policy, no recalls or exam questions will be discussed. Learning Objectives: How to prepare for Part 1 of the ABR exam by determining the scope of basic medical physics study material and related problem solving/calculations. How to prepare for Part 2 of the ABR exam by understanding diagnostic and/or nuclear imaging physics, systems, dosimetry, safety, and related problem solving/calculations. How to prepare for Part 3 of the ABR exam by effectively communicating the practice, methods, and significance of clinical diagnostic and/or nuclear medical physics.
WE-D-213-00: Preparing for the ABR Diagnostic and Nuclear Medicine Physics Exams
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
Adequate, efficient preparation for the ABR Diagnostic and Nuclear Medical Physics exams is key to successfully obtaining ABR professional certification. Each part of the ABR exam presents its own challenges: Part I: Determine the scope of basic medical physics study material, efficiently review this material, and solve related written questions/problems. Part II: Understand imaging principles, modalities, and systems, including image acquisition, processing, and display. Understand the relationship between imaging techniques, image quality, patient dose and safety, and solve related written questions/problems. Part III: Gain crucial, practical, clinical medical physics experience. Effectively communicate and explain the practice, performance, and significance of all aspects of clinical medical physics. All three parts of the ABR exam require specific skill sets and preparation: mastery of basic physics and imaging principles; written problem solving often involving rapid calculation; responding clearly and succinctly to oral questions about the practice, methods, and significance of clinical medical physics. This symposium focuses on the preparation and skill sets necessary for each part of the ABR exam. Although there is some overlap, the nuclear exam covers a different body of knowledge than the diagnostic exam. A separate speaker will address those aspects that are unique to the nuclear exam. Medical physicists who have recently completed each part of the ABR exam will share their experiences, insights, and preparation methods to help attendees best prepare for the challenges of each part of the ABR exam. In accordance with ABR exam security policy, no recalls or exam questions will be discussed. Learning Objectives: How to prepare for Part 1 of the ABR exam by determining the scope of basic medical physics study material and related problem solving/calculations. How to prepare for Part 2 of the ABR exam by understanding diagnostic and/or nuclear imaging physics, systems, dosimetry, safety, and related problem solving/calculations. How to prepare for Part 3 of the ABR exam by effectively communicating the practice, methods, and significance of clinical diagnostic and/or nuclear medical physics.
WE-D-213-01: Preparing for Part 1 of the ABR Diagnostic Physics Exam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simiele, S.
Adequate, efficient preparation for the ABR Diagnostic and Nuclear Medical Physics exams is key to successfully obtaining ABR professional certification. Each part of the ABR exam presents its own challenges: Part I: Determine the scope of basic medical physics study material, efficiently review this material, and solve related written questions/problems. Part II: Understand imaging principles, modalities, and systems, including image acquisition, processing, and display. Understand the relationship between imaging techniques, image quality, patient dose and safety, and solve related written questions/problems. Part III: Gain crucial, practical, clinical medical physics experience. Effectively communicate and explain the practice, performance, and significance of all aspects of clinical medical physics. All three parts of the ABR exam require specific skill sets and preparation: mastery of basic physics and imaging principles; written problem solving often involving rapid calculation; responding clearly and succinctly to oral questions about the practice, methods, and significance of clinical medical physics. This symposium focuses on the preparation and skill sets necessary for each part of the ABR exam. Although there is some overlap, the nuclear exam covers a different body of knowledge than the diagnostic exam. A separate speaker will address those aspects that are unique to the nuclear exam. Medical physicists who have recently completed each part of the ABR exam will share their experiences, insights, and preparation methods to help attendees best prepare for the challenges of each part of the ABR exam. In accordance with ABR exam security policy, no recalls or exam questions will be discussed. Learning Objectives: How to prepare for Part 1 of the ABR exam by determining the scope of basic medical physics study material and related problem solving/calculations. How to prepare for Part 2 of the ABR exam by understanding diagnostic and/or nuclear imaging physics, systems, dosimetry, safety, and related problem solving/calculations. How to prepare for Part 3 of the ABR exam by effectively communicating the practice, methods, and significance of clinical diagnostic and/or nuclear medical physics.
WE-D-213-03: Preparing for Part 3 of the ABR Diagnostic Physics Exam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bevins, N.
Adequate, efficient preparation for the ABR Diagnostic and Nuclear Medical Physics exams is key to successfully obtaining ABR professional certification. Each part of the ABR exam presents its own challenges: Part I: Determine the scope of basic medical physics study material, efficiently review this material, and solve related written questions/problems. Part II: Understand imaging principles, modalities, and systems, including image acquisition, processing, and display. Understand the relationship between imaging techniques, image quality, patient dose and safety, and solve related written questions/problems. Part III: Gain crucial, practical, clinical medical physics experience. Effectively communicate and explain the practice, performance, and significance of all aspects of clinical medical physics. All three parts of the ABR exam require specific skill sets and preparation: mastery of basic physics and imaging principles; written problem solving often involving rapid calculation; responding clearly and succinctly to oral questions about the practice, methods, and significance of clinical medical physics. This symposium focuses on the preparation and skill sets necessary for each part of the ABR exam. Although there is some overlap, the nuclear exam covers a different body of knowledge than the diagnostic exam. A separate speaker will address those aspects that are unique to the nuclear exam. Medical physicists who have recently completed each part of the ABR exam will share their experiences, insights, and preparation methods to help attendees best prepare for the challenges of each part of the ABR exam. In accordance with ABR exam security policy, no recalls or exam questions will be discussed. Learning Objectives: How to prepare for Part 1 of the ABR exam by determining the scope of basic medical physics study material and related problem solving/calculations. How to prepare for Part 2 of the ABR exam by understanding diagnostic and/or nuclear imaging physics, systems, dosimetry, safety, and related problem solving/calculations. How to prepare for Part 3 of the ABR exam by effectively communicating the practice, methods, and significance of clinical diagnostic and/or nuclear medical physics.
WE-D-213-02: Preparing for Part 2 of the ABR Diagnostic Physics Exam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zambelli, J.
Adequate, efficient preparation for the ABR Diagnostic and Nuclear Medical Physics exams is key to successfully obtaining ABR professional certification. Each part of the ABR exam presents its own challenges: Part I: Determine the scope of basic medical physics study material, efficiently review this material, and solve related written questions/problems. Part II: Understand imaging principles, modalities, and systems, including image acquisition, processing, and display. Understand the relationship between imaging techniques, image quality, patient dose and safety, and solve related written questions/problems. Part III: Gain crucial, practical, clinical medical physics experience. Effectively communicate and explain the practice, performance, and significance of all aspects of clinical medical physics. All three parts of the ABR exam require specific skill sets and preparation: mastery of basic physics and imaging principles; written problem solving often involving rapid calculation; responding clearly and succinctly to oral questions about the practice, methods, and significance of clinical medical physics. This symposium focuses on the preparation and skill sets necessary for each part of the ABR exam. Although there is some overlap, the nuclear exam covers a different body of knowledge than the diagnostic exam. A separate speaker will address those aspects that are unique to the nuclear exam. Medical physicists who have recently completed each part of the ABR exam will share their experiences, insights, and preparation methods to help attendees best prepare for the challenges of each part of the ABR exam. In accordance with ABR exam security policy, no recalls or exam questions will be discussed. Learning Objectives: How to prepare for Part 1 of the ABR exam by determining the scope of basic medical physics study material and related problem solving/calculations. How to prepare for Part 2 of the ABR exam by understanding diagnostic and/or nuclear imaging physics, systems, dosimetry, safety, and related problem solving/calculations. How to prepare for Part 3 of the ABR exam by effectively communicating the practice, methods, and significance of clinical diagnostic and/or nuclear medical physics.
A Discrete Global Grid System Programming Language Using MapReduce
NASA Astrophysics Data System (ADS)
Peterson, P.; Shatz, I.
2016-12-01
A discrete global grid system (DGGS) is a powerful mechanism for storing and integrating geospatial information. As a "pixelization" of the Earth, many image processing techniques lend themselves to the transformation of data values referenced to DGGS cells. It has been shown that image algebra, as an example, and advanced algebra, like the fast Fourier transform, can be used on the DGGS tiling structure for geoprocessing and spatial analysis. MapReduce has been shown to provide advantages for processing and generating large data sets within distributed and parallel computing. The DGGS structure is ideally suited for big distributed Earth data. We proposed that basic expressions could be created to form the atoms of a generalized DGGS language using the MapReduce programming model. We created three very efficient expressions: Selectors (aka filter), a selection function that generates a set of cells, cell collections, or geometries; Calculators (aka map), a computational function (including quantization of raw measurements and data sources) that generates values in a DGGS cell; and Aggregators (aka reduce), a function that generates spatial statistics from cell values within a cell. We found that these three basic MapReduce operations, along with a fourth function, the Iterator, for horizontal and vertical traversal of any DGGS structure, provided simple building blocks resulting in very efficient operations and processes that could be used with any DGGS. We provide examples and a demonstration of their effectiveness using the ISEA3H DGGS on the PYXIS Studio.
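A minimal sketch of the three expressions on a toy cell dictionary (not the PYXIS/ISEA3H implementation; the cell keys and attributes are invented for illustration):

```python
from functools import reduce

# Toy DGGS: cells keyed by (resolution, cell_id), each holding raw attributes.
cells = {("r3", i): {"ndvi": v, "land": v > 0.2}
         for i, v in enumerate([0.10, 0.35, 0.60, 0.05, 0.80, 0.40])}

# Selector (filter): generate the set of land cells.
selected = {k: c for k, c in cells.items() if c["land"]}

# Calculator (map): quantize a raw measurement into a per-cell value.
calculated = {k: round(c["ndvi"] * 10) for k, c in selected.items()}

# Aggregator (reduce): a spatial statistic over the selected cells.
mean_level = reduce(lambda a, b: a + b, calculated.values(), 0) / len(calculated)
print(sorted(calculated.items()), mean_level)
```

The Iterator would be the analogous traversal primitive, walking neighbors at one resolution (horizontal) or parent/child cells across resolutions (vertical).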
[Research on fast implementation method of image Gaussian RBF interpolation based on CUDA].
Chen, Hao; Yu, Haizhong
2014-04-01
Image interpolation is often required during medical image processing and analysis. Although interpolation based on the Gaussian radial basis function (GRBF) has high precision, its long computation time still limits its application in image interpolation. To overcome this problem, a method of two-dimensional and three-dimensional medical image GRBF interpolation based on the compute unified device architecture (CUDA) is proposed in this paper. According to the single instruction multiple threads (SIMT) execution model of CUDA, various optimization measures such as coalesced access and shared memory are adopted in this study. To eliminate edge distortion in the interpolated images, a natural suture algorithm is utilized in the overlapping regions while adopting a data-space strategy of separating 2D images into blocks or dividing 3D images into sub-volumes. While maintaining high interpolation precision, the 2D and 3D medical image GRBF interpolation achieved substantial acceleration in each basic computing step. The experiments showed that the efficiency of image GRBF interpolation on the CUDA platform was markedly improved compared with CPU computation. The present method is a useful reference for applications of image interpolation.
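Independent of the CUDA acceleration that is the paper's contribution, the underlying GRBF interpolation can be sketched in plain NumPy; the kernel width and the 1D sample layout below are illustrative.

```python
import numpy as np

def grbf_interpolate(x_known, f_known, x_query, sigma=1.0):
    """Gaussian RBF interpolation (plain NumPy reference, no CUDA)."""
    # Fit weights w from the kernel matrix K_ij = exp(-|x_i - x_j|^2 / 2 sigma^2).
    d2 = ((x_known[:, None, :] - x_known[None, :, :]) ** 2).sum(-1)
    w = np.linalg.solve(np.exp(-d2 / (2 * sigma ** 2)), f_known)
    # Evaluate the interpolant at the query points.
    d2q = ((x_query[:, None, :] - x_known[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2q / (2 * sigma ** 2)) @ w

# Upsample a 1D scanline of 16 samples by 4x.
x = np.arange(16.0)[:, None]
f = np.sin(x[:, 0] / 3.0)
xq = np.linspace(0.0, 15.0, 61)[:, None]
print(grbf_interpolate(x, f, xq, sigma=1.5)[:5].round(3))
```

The kernel-matrix solve and the query-evaluation step are both dense matrix operations, which is why block decomposition and GPU parallelization pay off for large images.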
Content Based Image Retrieval by Using Color Descriptor and Discrete Wavelet Transform.
Ashraf, Rehan; Ahmed, Mudassar; Jabbar, Sohail; Khalid, Shehzad; Ahmad, Awais; Din, Sadia; Jeon, Gwangil
2018-01-25
Due to recent developments in technology, the complexity of multimedia has increased significantly, and the retrieval of similar multimedia content remains an open research problem. Content-Based Image Retrieval (CBIR) provides a framework for image search in which low-level visual features are commonly used to retrieve images from an image database. The basic requirement of any image retrieval process is to sort the images by close similarity in terms of visual appearance. Color, shape, and texture are examples of low-level image features. Features play a significant role in image processing: the powerful representation of an image is known as a feature vector, and feature extraction techniques are applied to obtain features useful for classifying and recognizing images. Because features define the behavior of an image, they determine its storage requirements, its classification efficiency, and the processing time consumed. In this paper, we discuss various types of features and feature extraction techniques, and explain in which scenarios each technique performs better. The effectiveness of the CBIR approach is fundamentally based on feature extraction; in image processing tasks such as object recognition and image retrieval, the feature descriptor is among the most essential steps. The main idea of CBIR is to search a dataset for images related to a query image by using distance metrics. The proposed method performs image retrieval based on YCbCr color with a Canny edge histogram and the discrete wavelet transform. The combination of the edge histogram and the discrete wavelet transform increases the performance of the image retrieval framework for content-based search. The performance of different wavelets is also compared to determine the suitability of specific wavelet functions for image retrieval. The proposed algorithm is implemented and tested on the Wang image database. For image retrieval, an Artificial Neural Network (ANN) is applied to a standard dataset in the CBIR domain. The performance of the proposed descriptors is assessed by computing both precision and recall values and compared with other proposed methods to demonstrate the superiority of our approach, which outperforms existing research in terms of average precision and recall.
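One plausible reading of the descriptor, assuming OpenCV and PyWavelets are available; the bin counts, Canny thresholds, and the crude two-bin edge histogram are guesses rather than the paper's exact construction:

```python
import cv2
import numpy as np
import pywt

def cbir_features(bgr):
    """Sketch of a descriptor: YCbCr color histograms + a Canny edge histogram
    + level-1 Haar DWT subband energies."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)        # OpenCV's YCbCr variant
    color = [cv2.calcHist([ycrcb], [c], None, [16], [0, 256]).ravel()
             for c in range(3)]
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    edge_hist = np.histogram(edges, bins=2)[0].astype(float)  # edge vs non-edge
    cA, (cH, cV, cD) = pywt.dwt2(gray.astype(float), "haar")
    dwt = np.array([np.abs(s).mean() for s in (cA, cH, cV, cD)])
    v = np.concatenate(color + [edge_hist, dwt])
    return v / (np.linalg.norm(v) + 1e-9)                 # normalize for distance

img = np.full((64, 64, 3), 128, np.uint8)  # stand-in image
print(cbir_features(img).shape)            # (16*3 + 2 + 4,) = (54,)
```

Retrieval then reduces to nearest neighbors under a distance metric on these vectors, optionally classified or re-ranked by an ANN as the abstract describes.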
Image of Turkish Basic Schools: A Reflection from the Province of Ankara
ERIC Educational Resources Information Center
Eres, Figen
2011-01-01
The purpose of this study was to investigate the organizational image of basic schools in Turkey, a rapidly developing nation that has been investing significantly in education. Participants were 730 residents of Ankara province in the Golbasi district. The participants were selected using a cluster sampling methodology. Data were collected…
Ontology-based malaria parasite stage and species identification from peripheral blood smear images.
Makkapati, Vishnu V; Rao, Raghuveer M
2011-01-01
The diagnosis and treatment of malaria infection require detecting the presence of the malaria parasite in the patient as well as identification of the parasite species. We present an image processing-based approach to detect parasites in microscope images of a blood smear and an ontology-based classification of the parasite stage for identifying the species of infection. This approach is patterned after the diagnostic approach adopted by a pathologist for visual examination and hence is expected to deliver similar results. We formulate several rules based on the morphology of the basic components of a parasite, namely chromatin dot(s) and cytoplasm, to identify the parasite stage and species. Numerical results are presented for data taken from various patients. A sensitivity of 88% and a specificity of 95% are reported from evaluation of the scheme on 55 images.
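Purely to illustrate the flavor of such morphology rules (the thresholds and mappings below are invented for illustration and are not clinically valid, nor the paper's actual ontology):

```python
def classify_stage(n_chromatin_dots, cytoplasm_area_px, cytoplasm_shape):
    """Toy rule set over parasite components; cytoplasm_shape is one of
    'ring', 'ameboid', 'compact'. All thresholds are hypothetical."""
    if cytoplasm_shape == "ring":
        # Double chromatin dots on a ring form are a classic P. falciparum hint.
        species_hint = "P. falciparum?" if n_chromatin_dots >= 2 else "unknown"
        return ("ring / early trophozoite", species_hint)
    if cytoplasm_shape == "ameboid":
        return ("trophozoite", "P. vivax?")
    if cytoplasm_shape == "compact" and cytoplasm_area_px > 400:
        return ("schizont or gametocyte", "unknown")
    return ("unclassified", "unknown")

print(classify_stage(2, 150, "ring"))
```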
Brain single-photon emission CT physics principles.
Accorsi, R
2008-08-01
The basic principles of scintigraphy are reviewed and extended to 3D imaging. Single-photon emission computed tomography (SPECT) is a sensitive and specific 3D technique to monitor in vivo functional processes in both clinical and preclinical studies. SPECT/CT systems are becoming increasingly common and can provide accurately registered anatomic information as well. In general, SPECT is affected by low photon-collection efficiency, but in brain imaging, not all of the large FOV of clinical gamma cameras is needed: The use of fan- and cone-beam collimation trades off the unused FOV for increased sensitivity and resolution. The design of dedicated cameras aims at increased angular coverage and resolution by minimizing the distance from the patient. The corrections needed for quantitative imaging are challenging but can take advantage of the relative spatial uniformity of attenuation and scatter. Preclinical systems can provide submillimeter resolution in small animal brain imaging with workable sensitivity.
PACS-Based Computer-Aided Detection and Diagnosis
NASA Astrophysics Data System (ADS)
Huang, H. K. (Bernie); Liu, Brent J.; Le, Anh HongTu; Documet, Jorge
The ultimate goal of Picture Archiving and Communication System (PACS)-based Computer-Aided Detection and Diagnosis (CAD) is to integrate CAD results into daily clinical practice so that it becomes a second reader to aid the radiologist's diagnosis. Integration of CAD and Hospital Information System (HIS), Radiology Information System (RIS) or PACS requires certain basic ingredients from Health Level 7 (HL7) standard for textual data, Digital Imaging and Communications in Medicine (DICOM) standard for images, and Integrating the Healthcare Enterprise (IHE) workflow profiles in order to comply with the Health Insurance Portability and Accountability Act (HIPAA) requirements to be a healthcare information system. Among the DICOM standards and IHE workflow profiles, DICOM Structured Reporting (DICOM-SR); and IHE Key Image Note (KIN), Simple Image and Numeric Report (SINR) and Post-processing Work Flow (PWF) are utilized in CAD-HIS/RIS/PACS integration. These topics with examples are presented in this chapter.
A new blood vessel extraction technique using edge enhancement and object classification.
Badsha, Shahriar; Reza, Ahmed Wasif; Tan, Kim Geok; Dimyati, Kaharudin
2013-12-01
Diabetic retinopathy (DR) is progressively increasing, pushing the demand for automatic extraction and classification of disease severity. Blood vessel extraction from the fundus image is a vital and challenging task. Therefore, this paper presents a new, computationally simple, and automatic method to extract the retinal blood vessels. The proposed method comprises several basic image processing techniques, namely edge enhancement by a standard template, noise removal, thresholding, morphological operations, and object classification. The proposed method has been tested on a set of retinal images collected from the DRIVE database, and we have employed robust performance analysis to evaluate the accuracy. The results obtained from this study reveal that the proposed method offers an average accuracy of about 97%, sensitivity of 99%, specificity of 86%, and predictive value of 98%, which is superior to various well-known techniques.
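A sketch of the named pipeline stages using OpenCV; the enhancement kernel, the choice of Otsu thresholding, and the component-area cutoff are assumptions, not the paper's parameters.

```python
import cv2
import numpy as np

def extract_vessels(green):
    """Illustrative retinal vessel pipeline on the green channel (uint8)."""
    # 1. Edge enhancement with a standard template (here a Laplacian-style kernel).
    k = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], np.float32)
    enhanced = cv2.filter2D(green, -1, k)
    # 2. Noise removal.
    denoised = cv2.medianBlur(enhanced, 3)
    # 3. Thresholding.
    _, binary = cv2.threshold(denoised, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # 4. Morphological opening to drop speckle.
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    # 5. Object classification: keep connected components large enough to be vessels.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    vessel_ids = [i for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] > 50]
    return np.isin(labels, vessel_ids).astype(np.uint8) * 255

# green = cv2.imread("retina.png")[:, :, 1]   # DRIVE images: green channel has best contrast
```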
NASA Astrophysics Data System (ADS)
Tan, Xiangli; Yang, Jungang; Deng, Xinpu
2018-04-01
In the process of geometric correction of remote sensing images, a large number of redundant control points can occasionally result in low correction accuracy. To solve this problem, a control-point filtering algorithm based on RANdom SAmple Consensus (RANSAC) is proposed. The basic idea of the RANSAC algorithm is to use the smallest data set possible to estimate the model parameters and then enlarge this set with consistent data points. In this paper, unlike traditional methods of geometric correction using Ground Control Points (GCPs), simulation experiments are carried out to correct remote sensing images using visible stars as control points. In addition, the accuracy of geometric correction without Star Control Point (SCP) optimization is also shown. The experimental results show that the SCP filtering method based on the RANSAC algorithm greatly improves the accuracy of remote sensing image correction.
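A generic RANSAC filter of the kind described, here under an assumed 2D affine correction model (the paper does not specify its transform):

```python
import numpy as np

def ransac_affine(src, dst, n_iter=500, tol=1.0, seed=0):
    """Keep control points consistent with a single 2D affine map dst ~ [x y 1] @ A."""
    rng = np.random.default_rng(seed)
    X = np.hstack([src, np.ones((len(src), 1))])          # (N, 3) homogeneous
    best = np.zeros(len(src), bool)
    for _ in range(n_iter):
        s = rng.choice(len(src), 3, replace=False)        # minimal sample
        A, *_ = np.linalg.lstsq(X[s], dst[s], rcond=None) # 3x2 affine parameters
        err = np.linalg.norm(X @ A - dst, axis=1)
        inliers = err < tol
        if inliers.sum() > best.sum():
            best = inliers
    A, *_ = np.linalg.lstsq(X[best], dst[best], rcond=None)  # refit on consensus set
    return A, best

# Synthetic check: 40 consistent points plus 10 gross outliers.
rng = np.random.default_rng(1)
src = rng.uniform(0, 1000, (50, 2))
A_true = np.array([[1.01, 0.02], [-0.02, 0.99], [5.0, -3.0]])
dst = np.hstack([src, np.ones((50, 1))]) @ A_true + rng.normal(0, 0.2, (50, 2))
dst[40:] += rng.uniform(50, 100, (10, 2))                 # redundant/bad points
A, inliers = ransac_affine(src, dst)
print(inliers.sum(), "inliers kept of", len(src))
```

The consensus set replaces the full redundant point set in the final fit, which is exactly how the filtering improves correction accuracy.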
Imaging and the new biology: What's wrong with this picture?
NASA Astrophysics Data System (ADS)
Vannier, Michael W.
2004-05-01
The Human Genome has been defined, giving us one part of the equation that stems from the central dogma of molecular biology. Despite this awesome scientific achievement, the correspondence between genomics and imaging is weak, since we cannot predict an organism's phenotype from even perfect knowledge of its genetic complement. Biological knowledge comes in several forms, and the genome is perhaps the best known and most completely understood type. Imaging creates another form of biological information, providing the ability to study morphology, growth and development, metabolic processes, and diseases in vitro and in vivo at many levels of scale. The principal challenge in biomedical imaging for the future lies in the need to reconcile the data provided by one or multiple modalities with other forms of biological knowledge, most importantly the genome, proteome, physiome, and other "-ome's." To date, the imaging science community has not set a high priority on the unification of their results with genomics, proteomics, and physiological functions in most published work. Images are relatively isolated from other forms of biological data, impairing our ability to conceive and address many fundamental questions in research and clinical practice. This presentation will explain the challenge of biological knowledge integration in basic research and clinical applications from the standpoint of imaging and image processing. The impediments to progress, isolation of the imaging community, and mainstream of new and future biological science will be identified, so the critical and immediate need for change can be highlighted.
Global search and rescue - A new concept. [orbital digital radar system with passive reflectors
NASA Technical Reports Server (NTRS)
Sivertson, W. E., Jr.
1976-01-01
A new terrestrial search and rescue concept is defined embodying the use of simple passive radio-frequency reflectors in conjunction with a low earth-orbiting, all-weather, synthetic aperture radar to detect, identify, and position-locate earth-bound users in distress. Users include ships, aircraft, small boats, explorers, hikers, etc. Airborne radar tests were conducted to evaluate the basic concept. Both X-band and L-band dual-polarization radars were operated simultaneously. Simple, relatively small corner-reflector targets were successfully imaged, and digital data processing approaches were investigated. Study of the basic concept and evaluation of results obtained from aircraft flight tests indicate that an all-weather, day-or-night, global search and rescue system is feasible.
A novel method of the image processing on irregular triangular meshes
NASA Astrophysics Data System (ADS)
Vishnyakov, Sergey; Pekhterev, Vitaliy; Sokolova, Elizaveta
2018-04-01
The paper describes a novel method of image processing based on irregular triangular meshes. The triangular mesh is adaptive to the image content, and least-mean-square linear approximation is proposed for the basic interpolation within each triangle. It is proposed to use triangular numbers to simplify the use of local (barycentric) coordinates in the subsequent analysis: each triangular element of the initial irregular mesh is represented through a set of four equilateral triangles. This allows fast and simple pixel indexing in local coordinates, e.g. "for" or "while" loops for pixel access. Moreover, the proposed representation allows a discrete cosine transform of simple "rectangular" symmetric form without additional pixel reordering (as is used for shape-adaptive DCT forms). Furthermore, this approach leads to a simple form of the wavelet transform on a triangular mesh. The results of applying the method are presented. It is shown that the advantage of the proposed method is its combination of the flexibility of image-adaptive irregular meshes with simple pixel indexing in local triangular coordinates and the use of common forms of discrete transforms for triangular meshes. The method is proposed for image compression, pattern recognition, image quality improvement, and image search and indexing. It may also be used as part of video coding (intra-frame or inter-frame coding, motion detection).
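The role of triangular numbers in pixel indexing can be shown in a few lines; the row-major storage layout assumed below is ours, not necessarily the paper's.

```python
def tri_index(row, col):
    """Linear index of pixel (row, col) in a triangular patch stored row by row:
    row r holds r+1 pixels, so it starts at the r-th triangular number r*(r+1)//2."""
    return row * (row + 1) // 2 + col

# Plain "for" loops walk every pixel of a 4-row triangular patch:
for r in range(4):
    print([tri_index(r, c) for c in range(r + 1)])
# [0], [1, 2], [3, 4, 5], [6, 7, 8, 9]
```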
Developing Low-Noise GaAs JFETs For Cryogenic Operation
NASA Technical Reports Server (NTRS)
Cunningham, Thomas J.
1995-01-01
Report discusses aspects of effort to develop low-noise, low-gate-leakage gallium arsenide-based junction field-effect transistors (JFETs) for operation at temperature of about 4 K as readout amplifiers and multiplexing devices for infrared-imaging devices. Transistors needed to replace silicon transistors, relatively noisy at 4 K. Report briefly discusses basic physical principles of JFETs and describes continuing process of optimization of designs of GaAs JFETs for cryogenic operation.
World-Wide Web Tools for Locating Planetary Images
NASA Technical Reports Server (NTRS)
Kanefsky, Bob; Deiss, Ron (Technical Monitor)
1995-01-01
The explosive growth of the World-Wide Web (WWW) in the past year has made it feasible to provide interactive graphical tools to assist scientists in locating planetary images. The highest available resolution images of any site of interest can be quickly found on a map or plot and, if online, displayed immediately on nearly any computer equipped with a color screen, an Internet connection, and any of the free WWW browsers. The same tools may also be of interest to educators, students, and the general public. Image-finding tools have been implemented covering most of the solar system: Earth, Mars, and the moons and planets imaged by Voyager. The Mars image-finder, which plots the footprints of all the high-resolution Viking Orbiter images and can be used to display any that are available online, also contains a complete scrollable atlas and hypertext gazetteer to help locate areas. The Earth image-finder is linked to thousands of Shuttle images stored at NASA/JSC and displays them as red dots on a globe. The Voyager image-finder plots images as dots, by longitude and apparent target size, linked to online images. The locator (URL) for the top-level page is http://ic-www.arc.nasa.gov/ic/projects/bayes-group/Atlas/. Through the efforts of the Planetary Data System and other organizations, hundreds of thousands of planetary images are now available on CD-ROM, and many of these have been made available on the WWW. However, locating images of a desired site is still problematic in practice. For example, many scientists studying Mars use digital image maps, which are one third the resolution of Viking Orbiter survey images. When they do use Viking Orbiter images, they often work with photographically printed hardcopies, which lack the flexibility of digital images: magnification, contrast stretching, and other basic image-processing techniques offered by off-the-shelf software. From the perspective of someone working on an experimental image processing technique for super-resolution, the discovery that potential users are often not using the highest resolution already available, nor conventional image processing techniques, was surprising. This motivated the present work.
NASA Astrophysics Data System (ADS)
Placko, Dominique; Bore, Thierry; Rivollet, Alain; Joubert, Pierre-Yves
2015-10-01
This paper deals with the problem of imaging defects in metallic structures through eddy current (EC) inspections and proposes an original process for a possible tomographic crack evaluation. This process is based on a semi-analytical modeling called the "distributed point source method" (DPSM), which is used to describe and equate the interactions between the implemented EC probes and the structure under test. Several steps are successively described, illustrating the feasibility of this new imaging process dedicated to the quantitative evaluation of defects. The basic principle of this imaging process consists, first, of creating a 3D grid by meshing the volume potentially inspected by the sensor. As a result, a given number of elemental volumes (called voxels) are obtained. Second, DPSM modeling is used to compute an image for each configuration in which a single voxel has a conductivity different from all the others. The assumption is that a real defect can be faithfully represented by a superposition of elemental voxels; the resulting accuracy naturally depends on the density of the space sampling. On the other hand, the excitation device of the EC imager can be oriented in several directions and driven by an excitation current of variable frequency, so the simulation is performed for several frequencies and directions of the eddy currents induced in the structure, which increases the signal entropy. All these results are merged into a so-called "observation matrix" containing all the probe/structure interaction configurations. This matrix is then used in an inversion scheme in order to evaluate the defect location and geometry. The modeled EC data provided by the DPSM are compared to experimental images provided by an eddy current imager (ECI) applied to aluminum plates containing buried defects. In order to validate the proposed inversion process, we feed it with computed images of various acquisition configurations. Additive noise was added to the images so that they are more representative of actual EC data. In the case of simple notch-type defects, for which the relative conductivity may take only two extreme values (1 or 0), a threshold was introduced on the inverted images in a post-processing step, taking advantage of a priori knowledge of the statistical properties of the restored images. This threshold enhanced the image contrast and helped eliminate both the residual noise and pixels with non-realistic values.
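The inversion step can be caricatured as a regularized linear solve followed by the 0/1 threshold the abstract describes; the random stand-in observation matrix, the regularization weight, and the 0.5 threshold below are illustrative only (the real matrix comes from DPSM modeling).

```python
import numpy as np

rng = np.random.default_rng(2)
n_meas, n_vox = 120, 60                        # probe positions/frequencies x voxels
W = rng.normal(size=(n_meas, n_vox))           # stand-in for the DPSM observation matrix
x_true = np.zeros(n_vox)
x_true[20:24] = 1.0                            # notch: a few voxels flipped to "defect"
y = W @ x_true + rng.normal(0, 0.05, n_meas)   # noisy EC measurements

# Tikhonov-regularized least squares, then a 0/1 threshold as post-processing.
lam = 1.0
x = np.linalg.solve(W.T @ W + lam * np.eye(n_vox), W.T @ y)
print("defect voxels recovered:", np.flatnonzero(x > 0.5))
```

Varying excitation frequency and direction adds rows to W, which is why the extra "signal entropy" makes the inversion better conditioned.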
Three-dimensional reconstruction from serial sections in PC-Windows platform by using 3D_Viewer.
Xu, Yi-Hua; Lahvis, Garet; Edwards, Harlene; Pitot, Henry C
2004-11-01
Three-dimensional (3D) reconstruction from serial sections allows identification of objects of interest in 3D and clarifies the relationships among these objects. 3D_Viewer, developed in our laboratory for this purpose, has four major functions: image alignment, movie frame production, movie viewing, and shift-overlay image generation. Color images captured from serial sections were aligned; then the contours of objects of interest were highlighted in a semi-automatic manner. These 2D images were then automatically stacked at different viewing angles, and their composite images on a projected plane were recorded by an image transform-shift-overlay technique. These composite images are used in the object-rotation movie display. The design considerations of the program and the procedures used for 3D reconstruction from serial sections are described. This program, with a digital image-capture system, a semi-automatic contour-highlighting method, and an automatic image transform-shift-overlay technique, greatly speeds up the reconstruction process. Since images generated by 3D_Viewer are in a general graphic format, data sharing with others is easy. 3D_Viewer is written in MS Visual Basic 6 and is obtainable from our laboratory on request.
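As a rough sketch of what a shift-overlay projection might look like (in Python rather than Visual Basic, and not the program's actual algorithm; the oblique-projection geometry and non-negative angle are assumptions):

```python
import numpy as np

def shift_overlay(slices, angle_deg):
    """Oblique projection of stacked 2D slices: shift slice k in x by
    k*tan(angle) and overlay by taking the maximum (angle_deg >= 0 assumed)."""
    h, w = slices[0].shape
    step = np.tan(np.radians(angle_deg))
    canvas = np.zeros((h, w + int(step * len(slices)) + 1))
    for k, sl in enumerate(slices):
        dx = int(round(k * step))
        canvas[:, dx:dx + w] = np.maximum(canvas[:, dx:dx + w], sl)
    return canvas

# Three toy contour slices rendered at a 30-degree viewing angle.
stack = [np.eye(8) for _ in range(3)]
print(shift_overlay(stack, 30).shape)
```

Rendering the stack at a sweep of angles and recording each projected composite yields the frames for the object-rotation movie.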
Quality initiatives: planning, setting up, and carrying out radiology process improvement projects.
Tamm, Eric P; Szklaruk, Janio; Puthooran, Leejo; Stone, Danna; Stevens, Brian L; Modaro, Cathy
2012-01-01
In the coming decades, those who provide radiologic imaging services will be increasingly challenged by the economic, demographic, and political forces affecting healthcare to improve their efficiency, enhance the value of their services, and achieve greater customer satisfaction. It is essential that radiologists master and consistently apply basic process improvement skills that have allowed professionals in many other fields to thrive in a competitive environment. The authors provide a step-by-step overview of process improvement from the perspective of a radiologic imaging practice by describing their experience in conducting a process improvement project: to increase the daily volume of body magnetic resonance imaging examinations performed at their institution. The first step in any process improvement project is to identify and prioritize opportunities for improvement in the work process. Next, an effective project team must be formed that includes representatives of all participants in the process. An achievable aim must be formulated, appropriate measures selected, and baseline data collected to determine the effects of subsequent efforts to achieve the aim. Each aspect of the process in question is then analyzed by using appropriate tools (eg, flowcharts, fishbone diagrams, Pareto diagrams) to identify opportunities for beneficial change. Plans for change are then established and implemented with regular measurements and review followed by necessary adjustments in course. These so-called PDSA (planning, doing, studying, and acting) cycles are repeated until the aim is achieved or modified and the project closed.
NASA Technical Reports Server (NTRS)
Hasler, A. F.; Desjardins, M.; Shenk, W. E.
1979-01-01
Simultaneous Geosynchronous Operational Environmental Satellite (GOES) 1 km resolution visible image pairs can provide quantitative three dimensional measurements of clouds. These data have great potential for severe storms research and as a basic parameter measurement source for other areas of meteorology (e.g. climate). These stereo cloud height measurements are not subject to the errors and ambiguities caused by unknown cloud emissivity and temperature profiles that are associated with infrared techniques. This effort describes the display and measurement of stereo data using digital processing techniques.
Stereo optical guidance system for control of industrial robots
NASA Technical Reports Server (NTRS)
Powell, Bradley W. (Inventor); Rodgers, Mike H. (Inventor)
1992-01-01
A device for the generation of basic electrical signals which are supplied to a computerized processing complex for the operation of industrial robots. The system includes a stereo mirror arrangement for the projection of views from opposite sides of a visible indicia formed on a workpiece. The views are projected onto independent halves of the retina of a single camera. The camera retina is of the CCD (charge-coupled-device) type and is therefore capable of providing signals in response to the image projected thereupon. These signals are then processed for control of industrial robots or similar devices.
Optical-Correlator Neural Network Based On Neocognitron
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin; Stoner, William W.
1994-01-01
Multichannel optical correlator implements shift-invariant, high-discrimination pattern-recognizing neural network based on paradigm of neocognitron. Selected as basic building block of this neural network because invariance under shifts is inherent advantage of Fourier optics included in optical correlators in general. Neocognitron is conceptual electronic neural-network model for recognition of visual patterns. Multilayer processing achieved by iteratively feeding back output of feature correlator to input spatial light modulator and updating Fourier filters. Neural network trained by use of characteristic features extracted from target images. Multichannel implementation enables parallel processing of large number of selected features.
TU-G-303-04: Radiomics and the Coming Pan-Omics Revolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
El Naqa, I.
‘Radiomics’ refers to studies that extract a large amount of quantitative information from medical imaging studies as a basis for characterizing a specific aspect of patient health. Radiomics models can be built to address a wide range of outcome predictions, clinical decisions, basic cancer biology, etc. For example, radiomics models can be built to predict the aggressiveness of an imaged cancer, cancer gene expression characteristics (radiogenomics), radiation therapy treatment response, etc. Technically, radiomics brings together quantitative imaging, computer vision/image processing, and machine learning. In this symposium, speakers will discuss approaches to radiomics investigations, including: longitudinal radiomics, radiomics combined with other biomarkers (‘pan-omics’), radiomics for various imaging modalities (CT, MRI, and PET), and the use of registered multi-modality imaging datasets as a basis for radiomics. There are many challenges to the eventual use of radiomics-derived methods in clinical practice, including: standardization and robustness of selected metrics, accruing the data required, building and validating the resulting models, registering longitudinal data that often involve significant patient changes, reliable automated cancer segmentation tools, etc. Despite the hurdles, results achieved so far indicate the tremendous potential of this general approach to quantifying and using data from medical images. Specific applications of radiomics to be presented in this symposium will include: the longitudinal analysis of patients with low-grade gliomas; automatic detection and assessment of patients with metastatic bone lesions; image-based monitoring of patients with growing lymph nodes; predicting radiotherapy outcomes using multi-modality radiomics; and studies relating radiomics with genomics in lung cancer and glioblastoma. Learning Objectives: Understanding the basic image features that are often used in radiomic models. Understanding requirements for reliable radiomic models, including robustness of metrics, adequate predictive accuracy, and generalizability. Understanding the methodology behind radiomic-genomic (‘radiogenomics’) correlations. Research supported by NIH (US), CIHR (Canada), and NSERC (Canada).
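As a flavour of the "quantitative imaging plus machine learning" pipeline described above, the sketch below computes a few first-order radiomic features from a segmented region. It is a minimal illustration under assumed conventions (a 64-bin histogram, voxel intensities passed as a flat array), not any speaker's actual code.

import numpy as np

def first_order_features(voxels):
    # voxels: 1D array of intensities inside a tumour segmentation.
    z = (voxels - voxels.mean()) / voxels.std()
    counts, _ = np.histogram(voxels, bins=64)
    p = counts[counts > 0] / counts.sum()
    return {
        "mean": float(voxels.mean()),
        "variance": float(voxels.var()),
        "skewness": float(np.mean(z ** 3)),
        "entropy": float(-np.sum(p * np.log2(p))),   # intensity-histogram entropy
    }

roi = np.random.gamma(shape=2.0, scale=50.0, size=5000)   # toy voxel intensities
print(first_order_features(roi))

A radiomics model would compute such features (plus shape and texture metrics) for many patients and feed them to a classifier or survival model.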
Geometry Of Discrete Sets With Applications To Pattern Recognition
NASA Astrophysics Data System (ADS)
Sinha, Divyendu
1990-03-01
In this paper we present a new framework for discrete black-and-white images that employs only integer arithmetic. This framework is shown to retain the essential characteristics of the framework for Euclidean images. We propose two norms and, based on them, define the permissible geometric operations on images. The basic invariants of our geometry are line images, the structure of an image, and the corresponding local property of strong attachment of pixels. The permissible operations also preserve 3x3 neighborhoods, area, and perpendicularity. The structure, patterns, and inter-pattern gaps in a discrete image are shown to be conserved by the magnification and contraction process. Our notions of approximate congruence, similarity, and symmetry are similar in character to the corresponding notions for Euclidean images [1]. We mention two discrete pattern recognition algorithms that work purely with integers and fit into our framework. Their performance has been shown to be on par with the performance of traditional geometric schemes. Also, all the undesired effects of finite-length registers in fixed-point arithmetic that plague traditional algorithms are non-existent in this family of algorithms.
Arousal Rather than Basic Emotions Influence Long-Term Recognition Memory in Humans
Marchewka, Artur; Wypych, Marek; Moslehi, Abnoos; Riegel, Monika; Michałowski, Jarosław M.; Jednoróg, Katarzyna
2016-01-01
Emotion can influence various cognitive processes; however, its impact on memory has traditionally been studied over relatively short retention periods and in line with dimensional models of affect. The present study aimed to investigate emotional effects on long-term recognition memory according to a combined framework of affective dimensions and basic emotions. Images selected from the Nencki Affective Picture System were rated on the scale of affective dimensions and basic emotions. After 6 months, subjects took part in a surprise recognition test during an fMRI session. The more negative the pictures the better they were remembered, but also the more false recognitions they provoked. Similar effects were found for the arousal dimension. Recognition success was greater for pictures with lower intensity of happiness and with higher intensity of surprise, sadness, fear, and disgust. Consecutive fMRI analyses showed a significant activation for remembered (recognized) vs. forgotten (not recognized) images in anterior cingulate and bilateral anterior insula as well as in bilateral caudate nuclei and right thalamus. Further, arousal was found to be the only subjective rating significantly modulating brain activation. Higher subjective arousal evoked higher activation associated with memory recognition in the right caudate and the left cingulate gyrus. Notably, no significant modulation was observed for other subjective ratings, including basic emotion intensities. These results emphasize the crucial role of arousal for long-term recognition memory and support the hypothesis that the memorized material, over time, becomes stored in a distributed cortical network including the core salience network and basal ganglia. PMID:27818626
Electron holography—basics and applications
NASA Astrophysics Data System (ADS)
Lichte, Hannes; Lehmann, Michael
2008-01-01
Despite the huge progress achieved recently by means of the corrector for aberrations, which now allows a true atomic resolution of 0.1 nm and hence makes it an unrivalled tool for nanoscience, transmission electron microscopy (TEM) suffers from a severe drawback: in a conventional electron micrograph only a poor phase contrast can be achieved, i.e. phase structures are virtually invisible. Therefore, conventional TEM is nearly blind to electric and magnetic fields, which are pure phase objects. Since such fields, provoked by the atomic structure, e.g. of semiconductors and ferroelectrics, largely determine the solid-state properties, and hence their importance for high-technology applications, substantial object information is missing. Electron holography in TEM offers the solution: by superposition with a coherent reference wave, a hologram is recorded, from which the image wave can be completely reconstructed in amplitude and phase. Now the object is displayed quantitatively in two separate images: one representing the amplitude, the other the phase. From the phase image, electric and magnetic fields can be determined quantitatively in the range from micrometre down to atomic dimensions by all wave-optical methods that one can think of, both in real space and in Fourier space. Electron holography is pure wave optics. Therefore, we discuss the basics of coherence and interference, the implementation in a TEM, the path of rays for recording holograms, and the limits in lateral and signal resolution. We outline the methods of reconstructing the wave by numerical image processing and procedures for extracting the object properties of interest. Furthermore, we present a broad spectrum of applications at both mesoscopic and atomic dimensions. This paper gives an overview of the state of the art and points out the needs for further development. It is also meant as encouragement for those who refrain from holography, thinking that it can only be performed by specialists in highly specialized laboratories. In fact, a modern TEM built for atomic resolution and equipped with a field emitter or a Schottky emitter, well aligned by a skilled operator, can deliver good holograms. Running commercially available image-processing software and mathematics programs on a laptop computer is sufficient for reconstructing the amplitude and phase images and extracting the desired object information.
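The numerical reconstruction the authors describe is, at its core, a sideband extraction in Fourier space. A minimal numpy sketch follows, under assumed conventions: a known carrier position in pixels and a crude circular mask whose radius is an illustrative choice.

import numpy as np

def reconstruct_wave(hologram, carrier, radius=8):
    # Off-axis holography: the FFT of the hologram contains a centreband and
    # two sidebands. Masking one sideband and re-centring it on the origin
    # recovers the complex image wave (amplitude and phase).
    F = np.fft.fftshift(np.fft.fft2(hologram))
    ny, nx = F.shape
    cy, cx = ny // 2 + carrier[0], nx // 2 + carrier[1]
    Y, X = np.ogrid[:ny, :nx]
    mask = (Y - cy) ** 2 + (X - cx) ** 2 < radius ** 2
    side = np.roll(F * mask, (-carrier[0], -carrier[1]), axis=(0, 1))  # re-centre
    wave = np.fft.ifft2(np.fft.ifftshift(side))
    return np.abs(wave), np.angle(wave)

In practice the mask radius is chosen relative to the carrier spacing, and the reference-wave distortions are corrected with an empty hologram; both refinements are omitted here.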
NASA Astrophysics Data System (ADS)
Hahlweg, Cornelius; Rothe, Hendrik
2016-09-01
For more than two decades, lessons in optics, digital image processing, and optronics have been compulsory elective subjects and as such integral parts of the courses in mechanical engineering at the University of the Federal Armed Forces in Hamburg. They are provided by the Chair for Measurement and Information Technology. Historically, the curricula started as typical basic lessons in optics and digital image processing and related sensors. Practical sessions originally concentrated on image processing procedures in Pascal, C, and later Matlab. They evolved into a broad portfolio of practical hands-on lessons in lab and field, including high-tech and especially military equipment, but also homemaker-style primitive experiments, of which the paper gives a methodical overview. A special topic, as always with optics in education, is the introduction to the various levels of abstraction in conjunction with the highly complex and wide-ranging matter squeezed into only two trimesters (instead of semesters at civilian universities) for an audience subject to strains from both study and duty. The talk will be accompanied by striking multimedia material, which will also be part of the multimedia attachment of the paper.
NASA Astrophysics Data System (ADS)
Latief, F. D. E.; Mohammad, I. H.; Rarasati, A. D.
2017-11-01
Digital imaging of a concrete sample using high-resolution tomographic imaging by means of X-ray micro computed tomography (μ-CT) has been conducted to assess the characteristics of the sample’s structure. A standard procedure of image acquisition, reconstruction, and image processing using a particular scanning device, i.e. the Bruker SkyScan 1173 High Energy Micro-CT, is elaborated. A qualitative and a quantitative analysis were briefly performed on the sample to deliver some basic ideas of the capability of the system and the bundled software package. Calculations of total VOI volume, object volume, percent object volume, total VOI surface, object surface, object surface/volume ratio, object surface density, structure thickness, structure separation, and total porosity were conducted and analysed. This paper should serve as a brief description of how the device can produce the preferred image quality, as well as of the ability of the bundled software packages to help in performing qualitative and quantitative analysis.
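Several of the listed quantities reduce to voxel counting once the volume is binarised. The rough numpy sketch below uses a global threshold and a face-counting surface estimate, both simplifying assumptions; the bundled SkyScan software uses more refined methods.

import numpy as np

def basic_ct_metrics(volume, threshold):
    solid = volume > threshold        # binarise: voxels above threshold are object
    obj = int(solid.sum())
    voi = solid.size
    exposed = 0                       # object faces touching empty space
    for axis in range(3):
        for shift in (1, -1):
            neighbour = np.roll(solid, shift, axis=axis)  # wraps at borders; fine for a sketch
            exposed += int(np.logical_and(solid, ~neighbour).sum())
    return {
        "percent_object_volume": 100.0 * obj / voi,
        "total_porosity": 100.0 * (1.0 - obj / voi),
        "surface_to_volume_ratio": exposed / max(obj, 1),
    }

vol = np.random.rand(64, 64, 64)      # toy grey-value volume
print(basic_ct_metrics(vol, threshold=0.55))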
Bardo, Dianna M E; Brown, Paul
2008-08-01
Cardiac MDCT is here to stay, and it is more than just imaging coronary arteries. Understanding the differences between CT scanners and the benefits of each will help you to optimize the capabilities of the scanner, but this requires a basic understanding of MDCT imaging physics. This review provides key information needed to understand the differences among the types of MDCT scanners, from 64-320 detectors, flat panels, and single- and dual-source configurations to step-and-shoot prospective and retrospective gating, and how each factor influences radiation dose, spatial and temporal resolution, and image noise.
NASA Astrophysics Data System (ADS)
Zubarev, A. E.; Nadezhdina, I. E.; Brusnikin, E. S.; Karachevtseva, I. P.; Oberst, J.
2016-09-01
A new technique for the generation of coordinate control point networks based on photogrammetric processing of heterogeneous planetary images (obtained at different times and scales, with different illumination, or with oblique views) is developed. The technique is verified by processing the heterogeneous information obtained by remote sensing of Ganymede by the spacecraft Voyager-1, -2 and Galileo. Using this technique the first 3D control point network for Ganymede is formed: the error of the altitude coordinates obtained as a result of adjustment is less than 5 km. The new control point network makes it possible to obtain basic geodesic parameters of the body (axis sizes) and to estimate forced librations. On the basis of the control point network, digital terrain models (DTMs) with different resolutions are generated and used for mapping the surface of Ganymede with different levels of detail (Zubarev et al., 2015b).
Dahlberg, Jerry; Tkacik, Peter T; Mullany, Brigid; Fleischhauer, Eric; Shahinian, Hossein; Azimi, Farzad; Navare, Jayesh; Owen, Spencer; Bisel, Tucker; Martin, Tony; Sholar, Jodie; Keanini, Russell G
2017-12-04
An analog, macroscopic method for studying molecular-scale hydrodynamic processes in dense gases and liquids is described. The technique applies a standard fluid dynamic diagnostic, particle image velocimetry (PIV), to measure: i) velocities of individual particles (grains), extant on short, grain-collision time-scales, ii) velocities of systems of particles, on both short collision-time- and long, continuum-flow-time-scales, iii) collective hydrodynamic modes known to exist in dense molecular fluids, and iv) short- and long-time-scale velocity autocorrelation functions, central to understanding particle-scale dynamics in strongly interacting, dense fluid systems. The basic system is composed of an imaging system, a light source, vibrational sensors, a vibrational system with a known medium, and PIV and analysis software. Required experimental measurements and an outline of the theoretical tools needed when using the analog technique to study molecular-scale hydrodynamic processes are highlighted. The proposed technique provides a relatively straightforward alternative to the photonic and neutron beam scattering methods traditionally used in molecular hydrodynamic studies.
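Item iv, the velocity autocorrelation function, is simple to state in code. A minimal sketch follows, assuming PIV output already reshaped to per-frame particle velocities; the array layout is my assumption.

import numpy as np

def velocity_autocorrelation(v):
    # v: shape (n_frames, n_particles, 2). Returns C(tau) averaged over
    # particles and time origins, normalised so that C(0) = 1.
    n = v.shape[0]
    c = np.empty(n)
    for tau in range(n):
        c[tau] = np.sum(v[: n - tau] * v[tau:], axis=-1).mean()   # <v(t).v(t+tau)>
    return c / c[0]

tracks = np.random.randn(200, 50, 2)      # toy uncorrelated grain velocities
vacf = velocity_autocorrelation(tracks)   # drops to ~0 at tau >= 1 here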
Advances in dual-tone development for pitch frequency doubling
NASA Astrophysics Data System (ADS)
Fonseca, Carlos; Somervell, Mark; Scheer, Steven; Kuwahara, Yuhei; Nafus, Kathleen; Gronheid, Roel; Tarutani, Shinji; Enomoto, Yuuichiro
2010-04-01
Dual-tone development (DTD) has been previously proposed as a potential cost-effective double patterning technique [1]. DTD was reported as early as the late 1990s [2]. The basic principle of dual-tone imaging involves processing exposed resist latent images in both positive-tone (aqueous base) and negative-tone (organic solvent) developers. Conceptually, DTD has attractive cost benefits since it enables pitch doubling without the need for multiple etch steps of patterned resist layers. While the concept of the DTD technique is simple to understand, there are many challenges that must be overcome and understood in order to make it a manufacturing solution. Previous work by the authors demonstrated feasibility of DTD imaging for 50 nm half-pitch features at 0.80 NA (k1 = 0.21) and discussed challenges lying ahead for printing sub-40 nm half-pitch features with DTD. While previous experimental results suggested that clever processing on the wafer track can be used to enable DTD beyond 50 nm half-pitch, they also suggest that identifying suitable resist materials or chemistries is essential for achieving successful imaging results with novel resist processing methods on the wafer track. In this work, we present recent advances in the search for resist materials that work in conjunction with novel resist processing methods on the wafer track to enable DTD. Recent experimental results with new resist chemistries, specifically designed for DTD, are presented. We also present simulation studies that help identify resist properties that could enable DTD imaging, ultimately leading to viable DTD resist materials.
Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan
2017-04-06
An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
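Stripped of the frame selection, PSF estimation model, and regularisation described above, the Poisson maximum-likelihood core behaves like a multi-frame Richardson-Lucy iteration. The sketch below is that bare simplification, not the authors' algorithm; it assumes wrap-around FFT convolution with PSFs that are normalised and already centred for the FFT (ifftshifted).

import numpy as np

def fft_conv(a, b):
    # Circular convolution via the FFT; a and b must have the same shape.
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def multiframe_richardson_lucy(frames, psfs, n_iter=30):
    x = np.full_like(frames[0], frames[0].mean())    # flat initial estimate
    for _ in range(n_iter):
        for y, h in zip(frames, psfs):
            ratio = y / (fft_conv(x, h) + 1e-12)     # observed / predicted
            x = x * fft_conv(ratio, h[::-1, ::-1])   # multiplicative ML update
    return x

Each multiplicative update increases the Poisson log-likelihood for its frame; cycling over frames pools the information from the whole selected sequence.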
Nestor, Adrian; Vettel, Jean M; Tarr, Michael J
2013-11-01
What basic visual structures underlie human face detection and how can we extract such structures directly from the amplitude of neural responses elicited by face processing? Here, we address these issues by investigating an extension of noise-based image classification to BOLD responses recorded in high-level visual areas. First, we assess the applicability of this classification method to such data and, second, we explore its results in connection with the neural processing of faces. To this end, we construct luminance templates from white noise fields based on the response of face-selective areas in the human ventral cortex. Using behaviorally and neurally-derived classification images, our results reveal a family of simple but robust image structures subserving face representation and detection. Thus, we confirm the role played by classical face selective regions in face detection and we help clarify the representational basis of this perceptual function. From a theory standpoint, our findings support the idea of simple but highly diagnostic neurally-coded features for face detection. At the same time, from a methodological perspective, our work demonstrates the ability of noise-based image classification in conjunction with fMRI to help uncover the structure of high-level perceptual representations. Copyright © 2012 Wiley Periodicals, Inc.
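The core of noise-based image classification is a response-weighted average of the noise fields. A toy sketch follows; the simulated "observer" and array shapes are mine, purely for illustration.

import numpy as np

def classification_image(noise_fields, responses):
    # Average the noise fields weighted by the mean-centred response they
    # elicited; with fMRI the response is a BOLD amplitude rather than a
    # behavioural yes/no judgement.
    r = responses - responses.mean()
    return np.tensordot(r, noise_fields, axes=1) / len(r)

noise = np.random.randn(2000, 32, 32)             # white-noise stimuli
resp = noise[:, 12:20, 10:22].mean(axis=(1, 2))   # toy observer tuned to one region
ci = classification_image(noise, resp)            # bright where the observer is sensitive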
Processing and Analysis of Multibeam Sonar Data and Images near the Yellow River Estuary
NASA Astrophysics Data System (ADS)
Tang, Q.
2017-12-01
The Yellow River Estuary is a typical high-suspended-particulate-matter estuary. Sediments from the Yellow River and other substances produced by human activity cause high-concentration suspended matter and a depositional system in the estuary and the adjacent water area. The multibeam echo sounder (MBES) was developed in the 1970s; it provides not only high-precision bathymetric data but also seabed backscatter strength data and water column data with high temporal and spatial resolution. Here, based on high-precision sonar data of the seabed and water column collected by a SeaBat7125 MBES system near the Yellow River Estuary, we use advanced data and image processing methods to generate seabed sonar images and acoustic images of suspended particulate matter in the water. By analyzing these data and images, we obtain many details of the seabed and of whole-water-column features, and we also acquire the shape, size, and basic physical characteristics of suspended particulate matter in the experiment area near the Yellow River Estuary. This study shows great potential for monitoring suspended particulate matter using MBES, and the results will contribute to a comprehensive understanding of sediment transportation and the evolution of the river trough and shoal in the Yellow River Estuary.
Live imaging of targeted cell ablation in Xenopus: a new model to study demyelination and repair
Kaya, F.; Mannioui, A.; Chesneau, A.; Sekizar, S.; Maillard, E.; Ballagny, C.; Houel-Renault, L.; Du Pasquier, D.; Bronchain, O.; Holtzmann, I.; Desmazieres, A.; Thomas, J.-L.; Demeneix, B. A.; Brophy, P. J.; Zalc, B.; Mazabraud, A.
2012-01-01
Live imaging studies of the processes of demyelination and remyelination have so far been technically limited in mammals. We have thus generated a Xenopus laevis transgenic line allowing live imaging and conditional ablation of myelinating oligodendrocytes throughout the central nervous system (CNS). In these transgenic pMBP-eGFP-NTR tadpoles the myelin basic protein (MBP) regulatory sequences, specific to mature oligodendrocytes, are used to drive expression of an eGFP (enhanced green fluorescent protein) reporter fused to the E. coli nitroreductase (NTR) selection enzyme. This enzyme converts the innocuous pro-drug metronidazole (MTZ) to a cytotoxin. Using two-photon imaging in vivo, we show that pMBP-eGFP-NTR tadpoles display a graded oligodendrocyte ablation in response to MTZ, which depends on the exposure time to MTZ. MTZ-induced cell death was restricted to oligodendrocytes, without detectable axonal damage. After cessation of MTZ treatment, remyelination proceeded spontaneously, but was strongly accelerated by retinoic acid. Altogether, these features establish the Xenopus pMBP-eGFP-NTR line as a novel in vivo model for the study of demyelination/remyelination processes and for large-scale screens of therapeutic agents promoting myelin repair. PMID:22973012
Exogenous Molecular Probes for Targeted Imaging in Cancer: Focus on Multi-modal Imaging
Joshi, Bishnu P.; Wang, Thomas D.
2010-01-01
Cancer is one of the major causes of mortality and morbidity in our healthcare system. Molecular imaging is an emerging methodology for the early detection of cancer, guidance of therapy, and monitoring of response. The development of new instruments and exogenous molecular probes that can be labeled for multi-modality imaging is critical to this process. Today, molecular imaging is at a crossroad, and new targeted imaging agents are expected to broadly expand our ability to detect and manage cancer. This integrated imaging strategy will permit clinicians to not only localize lesions within the body but also to manage their therapy by visualizing the expression and activity of specific molecules. This information is expected to have a major impact on drug development and understanding of basic cancer biology. At this time, a number of molecular probes have been developed by conjugating various labels to affinity ligands for targeting in different imaging modalities. This review will describe the current status of exogenous molecular probes for optical, scintigraphic, MRI and ultrasound imaging platforms. Furthermore, we will also shed light on how these techniques can be used synergistically in multi-modal platforms and how these techniques are being employed in current research. PMID:22180839
NASA Astrophysics Data System (ADS)
Amit, S. N. K.; Saito, S.; Sasaki, S.; Kiyoki, Y.; Aoki, Y.
2015-04-01
Google Earth, with its high-resolution imagery, basically takes months to process new images before online updates, which is a slow process, especially for post-disaster applications. The objective of this research is to develop a fast and effective method of updating maps by detecting local differences that occurred over different time series, where only regions with differences will be updated. In our system, aerial images from the Massachusetts road and building open datasets and Saitama district datasets are used as input images. Semantic segmentation, a pixel-wise classification of images implemented with a deep neural network, is then applied to the input images. The deep neural network technique is used because it is not only efficient in learning highly discriminative image features such as roads, buildings, etc., but also partially robust to incomplete and poorly registered target maps. Aerial images containing semantic information are then stored as a database in the 5D world map and set as ground truth images. This system is developed to visualise multimedia data in 5 dimensions: 3 spatial dimensions, 1 temporal dimension, and 1 degenerated dimension combining semantics and colour. Next, ground truth images chosen from the database in the 5D world map and a new aerial image with the same spatial information but a different time series are compared via a difference extraction method. The map is updated only where local changes have occurred. Hence, map updating will be cheaper, faster, and more effective, especially for post-disaster applications, by leaving unchanged regions as they are and updating only changed regions.
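The difference-extraction step can be pictured as tiling the two label maps and flagging tiles whose pixels disagree. A minimal sketch follows; the tile size and changed-pixel threshold are illustrative choices, not values from the paper.

import numpy as np

def changed_tiles(seg_old, seg_new, tile=64, min_changed=50):
    # seg_old, seg_new: integer class-label maps of the same georeferenced
    # area from two dates; returns origins of the tiles needing re-rendering.
    diff = seg_old != seg_new
    ny, nx = diff.shape
    out = []
    for y in range(0, ny, tile):
        for x in range(0, nx, tile):
            if diff[y:y + tile, x:x + tile].sum() >= min_changed:
                out.append((y, x))     # only these tiles get updated in the map
    return out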
The OSHA standard setting process: role of the occupational health nurse.
Klinger, C S; Jones, M L
1994-08-01
1. Occupational health nurses are the health professionals most often involved with the worker who suffers as a result of ineffective or non-existent safety and health standards. 2. Occupational health nurses are familiar with health and safety standards, but may not understand or participate in the rulemaking process used to develop them. 3. Knowing the eight basic steps of rulemaking and actively participating in the process empowers occupational health nurses to influence national policy decisions affecting the safety and health of millions of workers. 4. By actively participating in rulemaking activities, occupational health nurses also improve the quality of occupational health nursing practice and enhance the image of the nursing profession.
Teaching by research at undergraduate schools: an experience
NASA Astrophysics Data System (ADS)
Costa, Manuel F. M.
1997-12-01
In this communication I report on a pedagogical experience undertaken in the 1995 class of Image Processing of the course of Applied Physics of the University of Minho. The learning process always requires an active, critical participation of the student in an experience that is essentially personal and that should and must be rewarding and fulfilling. To us scientists, virtually nothing gives more pleasure and fulfillment than the research process. Furthermore, it is our main way to improve our, and I stress our, knowledge. Thus I decided to center my undergraduate students' learning of the basics of digital image processing on a simple applied research program. The proposed project was to develop an inspection process to be introduced in a generic production line. The quantity to be measured was the transverse distance between an object and the edge of the conveyor belt on which it is transported. The proposed measurement method was optical triangulation combined with shadow analysis. The students were given almost complete liberty and responsibility. I limited myself to assessing the development of the project, orienting the students and pointing out different or pertinent points of view only when strictly necessary.
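The students' measurement principle can be written down in a few lines. Below is a hedged sketch of single-camera laser triangulation; the geometry convention (a beam launched from lateral offset b, tilted by an angle toward the camera axis) is my assumption, and the shadow-analysis part of the project is omitted.

import numpy as np

def triangulated_range(u_mm, focal_mm, baseline_mm, angle_deg):
    # A beam from lateral offset -baseline, tilted by angle toward the camera
    # axis, hits the object; the spot images at sensor coordinate u. Similar
    # triangles give the object distance z = f*b / (f*tan(theta) - u).
    t = np.tan(np.radians(angle_deg))
    return focal_mm * baseline_mm / (focal_mm * t - u_mm)

# Spot imaged at u = -2 mm with f = 16 mm, b = 100 mm, theta = 20 deg:
print(triangulated_range(-2.0, 16.0, 100.0, 20.0))   # ~204 mm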
Image encryption based on nonlinear encryption system and public-key cryptography
NASA Astrophysics Data System (ADS)
Zhao, Tieyu; Ran, Qiwen; Chi, Yingying
2015-03-01
Recently, the optical asymmetric cryptosystem (OACS) has become a focus of discussion and concern among researchers. Some researchers have pointed out that OACS was not tenable because of a misunderstanding of the concept of an asymmetric cryptosystem (ACS). We propose an improved cryptosystem using the RSA public-key algorithm based on the existing OACS, and the new system conforms to the basic agreement of a public-key cryptosystem. At the beginning of the encryption process, the system produces an independent phase matrix and allocates the input image, which also conforms to a one-time-pad cryptosystem. The simulation results show the validity of the improved cryptosystem and its high robustness against an attack scheme using the phase retrieval technique.
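Setting the optical parts (phase masks and transforms) aside, the public-key component the authors lean on is standard RSA. The toy sketch below uses tiny textbook primes purely to show the asymmetric key relationship; it has no padding and is not secure, and the idea of encrypting phase-mask samples this way is my illustrative assumption.

def toy_rsa_keys():
    p, q = 61, 53                 # toy primes; real RSA uses ~1024-bit or larger
    n, phi = p * q, (p - 1) * (q - 1)
    e = 17                        # public exponent, coprime with phi
    d = pow(e, -1, phi)           # private exponent (modular inverse, Python 3.8+)
    return (e, n), (d, n)

def apply_key(values, key):
    k, n = key
    return [pow(v, k, n) for v in values]

public, private = toy_rsa_keys()
phase_bytes = [12, 200, 37]       # e.g. samples of an encryption phase mask
cipher = apply_key(phase_bytes, public)
assert apply_key(cipher, private) == phase_bytes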
Seasonal erosion and restoration of Mars' northern polar dunes.
Hansen, C J; Bourke, M; Bridges, N T; Byrne, S; Colon, C; Diniega, S; Dundas, C; Herkenhoff, K; McEwen, A; Mellon, M; Portyankina, G; Thomas, N
2011-02-04
Despite radically different environmental conditions, terrestrial and martian dunes bear a strong resemblance, indicating that the basic processes of saltation and grainfall (sand avalanching down the dune slipface) operate on both worlds. Here, we show that martian dunes are subject to an additional modification process not found on Earth: springtime sublimation of Mars' CO(2) seasonal polar caps. Numerous dunes in Mars' north polar region have experienced morphological changes within a Mars year, detected in images acquired by the High-Resolution Imaging Science Experiment on the Mars Reconnaissance Orbiter. Dunes show new alcoves, gullies, and dune apron extension. This is followed by remobilization of the fresh deposits by the wind, forming ripples and erasing gullies. The widespread nature of these rapid changes, and the pristine appearance of most dunes in the area, implicates active sand transport in the vast polar erg in Mars' current climate.
NASA Astrophysics Data System (ADS)
Zhao, Tieyu; Ran, Qiwen; Yuan, Lin; Chi, Yingying; Ma, Jing
2015-09-01
In this paper, a novel image encryption system with a fingerprint used as a secret key is proposed based on the phase retrieval algorithm and the RSA public-key algorithm. In the system, the encryption keys include the fingerprint and the public key of the RSA algorithm, while the decryption keys are the fingerprint and the private key of the RSA algorithm. If the users share the fingerprint, then the system meets the basic agreement of asymmetric cryptography. The system is also applicable to information authentication. The fingerprint is used as a secret key in both the encryption and decryption processes, so that the receiver can verify the authenticity of the ciphertext by using the fingerprint in the decryption process. Finally, the simulation results show the validity of the encryption scheme and its high robustness against attacks based on the phase retrieval technique.
From Acoustic Segmentation to Language Processing: Evidence from Optical Imaging
Obrig, Hellmuth; Rossi, Sonja; Telkemeyer, Silke; Wartenburger, Isabell
2010-01-01
During language acquisition in infancy and when learning a foreign language, the segmentation of the auditory stream into words and phrases is a complex process. Intuitively, learners use “anchors” to segment the acoustic speech stream into meaningful units like words and phrases. Regularities on a segmental (e.g., phonological) or suprasegmental (e.g., prosodic) level can provide such anchors. Regarding the neuronal processing of these two kinds of linguistic cues, a left-hemispheric dominance for segmental and a right-hemispheric bias for suprasegmental information has been reported in adults. Though lateralization is common in a number of higher cognitive functions, its prominence in language may also be a key to understanding the rapid emergence of the language network in infants and the ease with which we master our language in adulthood. One question here is whether the hemispheric lateralization is driven by linguistic input per se or whether non-linguistic, especially acoustic factors, “guide” the lateralization process. Methodologically, functional magnetic resonance imaging provides unsurpassed anatomical detail for such an enquiry. However, instrumental noise, experimental constraints and interference with EEG assessment limit its applicability, particularly in infants and also when investigating the link between auditory and linguistic processing. Optical methods have the potential to fill this gap. Here we review a number of recent studies using optical imaging to investigate hemispheric differences during segmentation and basic auditory feature analysis in language development. PMID:20725516
NASA Astrophysics Data System (ADS)
Rutigliani, Vito; Lorusso, Gian Francesco; De Simone, Danilo; Lazzarino, Frederic; Rispens, Gijsbert; Papavieros, George; Gogolides, Evangelos; Constantoudis, Vassilios; Mack, Chris A.
2018-03-01
Power spectral density (PSD) analysis is playing an increasingly critical role in the understanding of line-edge roughness (LER) and linewidth roughness (LWR) in a variety of applications across the industry. It is an essential step in obtaining an unbiased LWR estimate, as well as an extremely useful tool for process and material characterization. However, the PSD estimate can be affected by both random and systematic artifacts caused by image acquisition and measurement settings, which could irremediably alter its information content. In this paper, we report on the impact of various setting parameters (smoothing image processing filters, pixel size, and SEM noise levels) on the PSD estimate. We also discuss the use of the PSD analysis tool in a variety of cases. Looking beyond the basic roughness estimate, we use PSD and autocorrelation analysis to characterize resist blur [1], as well as low- and high-frequency roughness content, and we apply this technique to guide the EUV material stack selection. Our results clearly indicate that, if properly used, the PSD methodology is a very sensitive tool for investigating material and process variations.
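For readers new to the method: the unbiased-sigma argument rests on the fact that the LER variance equals the integral of the PSD, so SEM noise appears as a separable high-frequency floor. Below is a minimal sketch of a PSD estimate from a detected edge; windowing and normalisation conventions vary, and the ones here are assumptions.

import numpy as np

def edge_psd(edge_nm, step_nm):
    # edge_nm: edge position detected on each scan line along the feature;
    # step_nm: distance between scan lines. Returns one-sided frequencies
    # (cycles/nm) and the PSD; its integral approximates the edge variance
    # (windowing costs a known power factor, ignored in this sketch).
    e = edge_nm - edge_nm.mean()
    n = len(e)
    F = np.fft.rfft(e * np.hanning(n))          # windowed to limit leakage
    psd = (np.abs(F) ** 2) * step_nm / n
    return np.fft.rfftfreq(n, d=step_nm), psd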
1992-05-01
[Garbled fragment of a 1992 report on an image understanding environment (IUE). Recoverable content: a taxonomy of basic objects, support objects, and GUI access objects (displays, display mappings, menus, pixel snapshots, gizmos/widgets); a note that a browser field may be set interactively by a user or from a gizmo/widget, or updated when some state occurs or a process completes; and a section 4.3.2, "Simplified access to GUI objects," stating that the IUE should provide simplified access to gizmos and widgets.]
2017-04-13
[Truncated fragment of a 2017 report on the OmpSs programming model. Recoverable content: several applications were ported to OmpSs, including a basic image processing algorithm, a mini-application representative of an ocean modelling code, a parallel benchmark, and a communication-avoiding version of the QR algorithm; several improvements to the OmpSs model were made, along with a port of the dynamic load balancing library to OmpSs; and several updates to the tools infrastructure were accomplished.]
High frequency ultrasound with color Doppler in dermatology
Barcaui, Elisa de Oliveira; Carvalho, Antonio Carlos Pires; Lopes, Flavia Paiva Proença Lobo; Piñeiro-Maceira, Juan; Barcaui, Carlos Baptista
2016-01-01
Ultrasonography is an imaging method classically used in dermatology to study changes in the hypodermis, such as nodules and infectious and inflammatory processes. The introduction of high-frequency, high-resolution equipment has enabled the observation of superficial structures, allowing differentiation between skin layers and providing details for the analysis of the skin and its appendages. This paper aims to review the basic principles of high-frequency ultrasound and its applications in different areas of dermatology. PMID:27438191
Standards to support information systems integration in anatomic pathology.
Daniel, Christel; García Rojo, Marcial; Bourquard, Karima; Henin, Dominique; Schrader, Thomas; Della Mea, Vincenzo; Gilbertson, John; Beckwith, Bruce A
2009-11-01
Integrating anatomic pathology information (text and images) into electronic health care records is a key challenge for enhancing clinical information exchange between anatomic pathologists and clinicians. The aim of the Integrating the Healthcare Enterprise (IHE) international initiative is precisely to ensure interoperability of clinical information systems by using existing widespread industry standards such as Digital Imaging and Communication in Medicine (DICOM) and Health Level Seven (HL7). The objective was to define standards-based informatics transactions to integrate anatomic pathology information into the Healthcare Enterprise. We used the methodology of the IHE initiative. Working groups from IHE, HL7, and DICOM, with special interest in anatomic pathology, defined consensual technical solutions to provide end-users with improved access to consistent information across multiple information systems. The IHE anatomic pathology technical framework describes a first integration profile, "Anatomic Pathology Workflow," dedicated to the diagnostic process including basic image acquisition and reporting solutions. This integration profile relies on 10 transactions based on HL7 or DICOM standards. A common specimen model was defined to consistently identify and describe specimens in both HL7 and DICOM transactions. The IHE anatomic pathology working group has defined standards-based informatics transactions to support the basic diagnostic workflow in anatomic pathology laboratories. In further stages, the technical framework will be completed to manage whole-slide images and semantically rich structured reports in the diagnostic workflow, and to integrate systems used for patient care with those used for research activities (such as tissue bank databases or tissue microarrayers).
ERIC Educational Resources Information Center
Gerber, Andrew J.; Peterson, Bradley S.
2008-01-01
The article helps readers understand the interpretation of an image by presenting what constitutes an image. A common feature of all images is the basic physical structure, which can be described with a common set of terms.
NASA Technical Reports Server (NTRS)
Nunez, J. I.; Farmer, J. D.; Sellar, R. G.; Allen, Carlton C.
2010-01-01
To maximize the scientific return, future robotic and human missions to the Moon will need in-situ capabilities to enable the selection of the highest-value samples for return to Earth or to a lunar base for analysis. To accomplish this task efficiently, samples will need to be characterized using a suite of robotic instruments that can provide crucial information about elemental composition, mineralogy, volatiles, and ices. Such spatially correlated data sets, which place mineralogy into a microtextural context, are considered crucial for correct petrogenetic interpretations. Combining microscopic imaging with visible/near-infrared reflectance spectroscopy provides a powerful in-situ approach for obtaining mineralogy within a microtextural context. The approach is non-destructive and requires minimal mechanical sample preparation. This approach provides data sets that are comparable to what geologists routinely acquire in the field using a hand lens and in the lab using thin-section petrography, and it provides essential information for interpreting the primary formational processes in rocks and soils as well as the effects of secondary (diagenetic) alteration processes. Such observations lay a foundation for inferring geologic histories and provide "ground truth" for similar instruments on orbiting satellites; they support astronaut EVA activities, provide basic information about the physical properties of soils required for assessing associated health risks, and are basic tools in the exploration for in-situ resources to support human exploration of the Moon.
Evaluation of area strain response of dielectric elastomer actuator using image processing technique
NASA Astrophysics Data System (ADS)
Sahu, Raj K.; Sudarshan, Koyya; Patra, Karali; Bhaumik, Shovan
2014-03-01
The dielectric elastomer actuator (DEA) is a kind of soft actuator that can produce significantly large electric-field-induced actuation strain and may be a basic unit of artificial muscles and robotic elements. Understanding strain development on a pre-stretched sample at different regimes of the electric field is essential for potential applications. In this paper, we report on ongoing work on the determination of area strain using a digital camera and an image processing technique. The setup, developed in-house, consists of a low-cost digital camera, data acquisition, and an image processing algorithm. Samples were prepared from biaxially stretched acrylic tape supported between two cardboard frames. Carbon grease was pasted on both sides of the sample to serve as compliant electrodes under the large electric-field-induced deformation. Images were grabbed before and after the application of high voltage. From the incremental image area, strain was calculated as a function of the applied voltage on a pre-stretched dielectric elastomer (DE) sample. Area strain was plotted against the applied voltage for different pre-stretched samples. Our study shows that the area strain exhibits a nonlinear relationship with the applied voltage. For the same voltage, higher area strain was generated on a sample with a higher pre-stretch value. Our characterization also matches well with previously published results obtained with a costly video extensometer. The study may help designers fabricate biaxially pre-stretched planar actuators from similar kinds of materials.
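The image-processing core of this measurement is small. A minimal sketch follows, assuming the carbon-grease electrode is darker than its surroundings; the threshold value is illustrative.

import numpy as np

def area_strain(img_before, img_after, threshold=128):
    # Count pixels belonging to the dark electrode in the unactuated and
    # actuated images; area strain is the relative change in pixel count.
    a0 = np.count_nonzero(img_before < threshold)
    a1 = np.count_nonzero(img_after < threshold)
    return (a1 - a0) / a0        # e.g. 0.12 means 12 % area strain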
Medical Image Analysis Facility
NASA Technical Reports Server (NTRS)
1978-01-01
To improve the quality of photos sent to Earth by unmanned spacecraft, NASA's Jet Propulsion Laboratory (JPL) developed a computerized image enhancement process that brings out detail not visible in the basic photo. JPL is now applying this technology to biomedical research in its Medical Image Analysis Facility, which employs computer enhancement techniques to analyze x-ray films of internal organs, such as the heart and lung. A major objective is study of the effects of stress on persons with heart disease. In animal tests, computerized image processing is being used to study coronary artery lesions and the degree to which they reduce arterial blood flow when stress is applied. The photos illustrate the enhancement process. The upper picture is an x-ray photo in which the artery (dotted line) is barely discernible; in the post-enhancement photo at right, the whole artery and the lesions along its wall are clearly visible. The Medical Image Analysis Facility offers a faster means of studying the effects of complex coronary lesions in humans, and the research now being conducted on animals is expected to have important application to diagnosis and treatment of human coronary disease. Other uses of the facility's image processing capability include analysis of muscle biopsy and pap smear specimens, and study of the microscopic structure of fibroprotein in the human lung. Working with JPL on experiments are NASA's Ames Research Center, the University of Southern California School of Medicine, and Rancho Los Amigos Hospital, Downey, California.
Simulated Thin-Film Growth and Imaging
NASA Astrophysics Data System (ADS)
Schillaci, Michael
2001-06-01
Thin films have become the cornerstone of the electronics, telecommunications, and broadband markets. A list of potential products includes: computer boards and chips, satellites, cell phones, fuel cells, superconductors, flat panel displays, optical waveguides, building and automotive windows, food and beverage plastic containers, metal foils, pipe plating, vision ware, manufacturing equipment, and turbine engines. For all of these reasons a basic understanding of the physical processes involved in both growing and imaging thin films can provide a wonderful research project for advanced undergraduate and first-year graduate students. After producing rudimentary two- and three-dimensional thin-film models incorporating ballistic deposition and nearest-neighbor Coulomb-type interactions, the QM tunneling equations are used to produce simulated scanning tunneling microscope (SSTM) images of the films. A discussion of computational platforms, languages, and software packages that may be used to accomplish similar results is also given.
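A student project along these lines can start from a dozen lines of code. Below is a 1+1-dimensional ballistic deposition sketch, the simplest of the growth rules the abstract alludes to; the SSTM imaging step is not included, and all parameter values are illustrative.

import numpy as np

def ballistic_deposition(width=200, n_particles=20000, seed=0):
    # Each particle falls down a random column and sticks at the first
    # position with an occupied nearest neighbour at or below it, producing
    # a rough growing film.
    rng = np.random.default_rng(seed)
    h = np.zeros(width, dtype=int)              # current surface height
    for _ in range(n_particles):
        i = rng.integers(width)
        left = h[i - 1] if i > 0 else 0
        right = h[i + 1] if i < width - 1 else 0
        h[i] = max(h[i] + 1, left, right)       # stick to the tallest neighbour
    return h

surface = ballistic_deposition()
print(surface.std())                            # interface width (roughness) of the film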
Classification of normal and abnormal images of lung cancer
NASA Astrophysics Data System (ADS)
Bhatnagar, Divyesh; Tiwari, Amit Kumar; Vijayarajan, V.; Krishnamoorthy, A.
2017-11-01
Finding the exact symptoms of lung cancer is difficult because of the way cancerous tissue forms, with large tissue structures intersecting in different ways. This problem can be addressed with the help of digital images. In this strategy, images are examined with the basic operations of the PCA algorithm. In this paper, the GLCM method is used for pre-processing of the images and for feature extraction, and to assess a patient's disease in its premature stage and determine whether it is normal or abnormal. From the results, the stage of the cancer is evaluated. With the help of the dataset and the results, the survival rate of a cancer patient can be estimated. The result is based on the correct and incorrect classification of the tissue patterns.
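For orientation, the GLCM step mentioned here counts how often grey levels co-occur in adjacent pixels; classic texture features are then moments of that matrix. A minimal sketch follows, using horizontal adjacency only and 16 grey levels as an illustrative choice.

import numpy as np

def glcm_features(img, levels=16):
    # Quantise to `levels` grey levels (img.max() assumed nonzero).
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)   # co-occurrence counts
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    return {
        "contrast": float(np.sum(glcm * (i - j) ** 2)),
        "energy": float(np.sum(glcm ** 2)),
    }

Feature vectors like these, pooled over many images, are what a PCA projection and a downstream classifier would then consume.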
Mina Shaughnessy in the 1990s: Some Changing Answers in Basic Writing.
ERIC Educational Resources Information Center
McAlexander, Patricia J.
Although Mina Shaughnessy remains influential in the basic writing field, her answers to the vital questions of who basic writers are and why they underachieve as writers are changing. Whether she intended to or not, Shaughnessy's book "Errors and Expectations" (published in 1977) was a major force in forming an image of basic writers as…
Vanmarcke, Steven; Calders, Filip; Wagemans, Johan
2016-01-01
Although categorization can take place at different levels of abstraction, classic studies on semantic labeling identified the basic level, for example, dog, as entry point for categorization. Ultrarapid categorization tasks have contradicted these findings, indicating that participants are faster at detecting superordinate-level information, for example, animal, in a complex visual image. We argue that both seemingly contradictive findings can be reconciled within the framework of parallel distributed processing and its successor Leabra (Local, Error-driven and Associative, Biologically Realistic Algorithm). The current study aimed at verifying this prediction in an ultrarapid categorization task with a dynamically changing presentation time (PT) for each briefly presented object, followed by a perceptual mask. Furthermore, we manipulated two defining task variables: level of categorization (basic vs. superordinate categorization) and object presentation mode (object-in-isolation vs. object-in-context). In contradiction with previous ultrarapid categorization research, focusing on reaction time, we used accuracy as our main dependent variable. Results indicated a consistent superordinate processing advantage, coinciding with an overall improvement in performance with longer PT and a significantly more accurate detection of objects in isolation, compared with objects in context, at lower stimulus PT. This contextual disadvantage disappeared when PT increased, indicating that figure-ground separation with recurrent processing is vital for meaningful contextual processing to occur.
Van Decker, William A; Villafana, Theodore
2008-01-01
The teaching of basic science with regard to physics, instrumentation, and radiation safety has been part of nuclear cardiology training since its inception. Although there are clear educational and quality rationales for such training, regulations associated with Nuclear Regulatory Commission Subpart J of the old 10 CFR section 35 (Title 10, Code of Federal Regulations, Part 35) from the 1960s mandated such prescriptive instruction. Cardiovascular fellowship training programs now have a new opportunity to rethink their basic science imaging curriculums in the era of the revised 10 CFR section 35 and the growing implementation of multimodality imaging training and expertise. This review focuses on the history and the why, what, and how of such a curriculum arising in one city and suggests examples of future implementation in other locations.
How do scientists respond to anomalies? Different strategies used in basic and applied science.
Trickett, Susan Bell; Trafton, J Gregory; Schunn, Christian D
2009-10-01
We conducted two in vivo studies to explore how scientists respond to anomalies. Based on prior research, we identify three candidate strategies: mental simulation, mental manipulation of an image, and comparison between images. In Study 1, we compared experts in basic and applied domains (physics and meteorology). We found that the basic scientists used mental simulation to resolve an anomaly, whereas applied science practitioners mentally manipulated the image. In Study 2, we compared novice and expert meteorologists. We found that unlike experts, novices used comparison to address anomalies. We discuss the nature of expertise in the two kinds of science, the relationship between the type of science and the task performed, and the relationship of the strategies investigated to scientific creativity. Copyright © 2009 Cognitive Science Society, Inc.
Ultrasound Biomicroscopy in Small Animal Research: Applications in Molecular and Preclinical Imaging
Greco, A.; Mancini, M.; Gargiulo, S.; Gramanzini, M.; Claudio, P. P.; Brunetti, A.; Salvatore, M.
2012-01-01
Ultrasound biomicroscopy (UBM) is a noninvasive multimodality technique that allows high-resolution imaging in mice. It is affordable, widely available, and portable. When coupled to Doppler ultrasound with color and power Doppler, it can be used to quantify blood flow and to image the microcirculation, as well as the response of tumor blood supply to cancer therapy. Targeted contrast ultrasound combines ultrasound with novel molecularly targeted contrast agents to assess biological processes at the molecular level. UBM is useful for investigating the growth and differentiation of tumors, detecting early molecular expression of cancer-related biomarkers in vivo, and monitoring the effects of cancer therapies. It can also be used to visualize the embryological development of mice in utero or to examine their cardiovascular development. The availability of real-time imaging of mouse anatomy allows aspiration procedures to be performed under ultrasound guidance, as well as the microinjection of cells, viruses, or other agents into precise locations. This paper describes some basic principles of high-resolution imaging equipment and the most important applications in molecular and preclinical imaging in small animal research. PMID:22163379
Applied learning-based color tone mapping for face recognition in video surveillance system
NASA Astrophysics Data System (ADS)
Yew, Chuu Tian; Suandi, Shahrel Azmin
2012-04-01
In this paper, we present an applied learning-based color tone mapping technique for video surveillance systems. This technique can be applied to both color and grayscale surveillance images. The basic idea is to learn the color or intensity statistics from a training dataset of photorealistic images of the candidates appearing in the surveillance images, and to remap the color or intensity of the input image so that its statistics match those of the training dataset. It is well known that differences among commercial surveillance camera models and the signal-processing chipsets used by different manufacturers cause the color and intensity of images to differ from one another, creating additional challenges for face recognition in video surveillance systems. Using multi-class support vector machines as the classifier on a publicly available video surveillance camera database, namely the SCface database, this approach is validated and compared with the results of using a holistic approach on grayscale images. The results show that this technique is suitable for improving the color or intensity quality of video surveillance images for face recognition.
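The abstract does not spell out the remapping function itself; the sketch below shows the simplest instance of the stated idea, shifting and scaling each channel so its mean and standard deviation match statistics learned from a training set (all names are illustrative, and the actual method may use richer histogram statistics):

    import numpy as np

    def learn_statistics(training_images):
        # Per-channel mean/std over a stack of photorealistic
        # training images of the candidates, shape (T, H, W, C).
        stack = np.stack(training_images).astype(np.float64)
        return stack.mean(axis=(0, 1, 2)), stack.std(axis=(0, 1, 2))

    def remap_tone(image, ref_mean, ref_std):
        # Shift/scale each channel of the surveillance image so its
        # statistics match those learned from the training set.
        img = image.astype(np.float64)
        mu = img.mean(axis=(0, 1))
        sigma = np.maximum(img.std(axis=(0, 1)), 1e-6)
        out = (img - mu) / sigma * ref_std + ref_mean
        return np.clip(out, 0, 255).astype(np.uint8)

A grayscale image is handled the same way by treating it as a single-channel array of shape (H, W, 1).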
STS Case Study Development Support
NASA Technical Reports Server (NTRS)
Rosa de Jesus, Dan A.; Johnson, Grace K.
2013-01-01
The Shuttle Case Study Collection (SCSC) has been developed using lessons learned documented by NASA engineers, analysts, and contractors. The SCSC provides educators with a new tool to teach real-world engineering processes, with the goal of providing unique educational materials that enhance critical thinking, decision-making, and problem-solving skills. During this third phase of the project, responsibilities included the revision of the Hyper Text Markup Language (HTML) source code to ensure all pages follow World Wide Web Consortium (W3C) standards, and the addition and editing of website content, including text, documents, and images. Basic HTML knowledge was required, as was basic knowledge of photo-editing software and training in NASA's Content Management System for website design. The outcome of this project was its release to the public.
In vivo optical imaging and dynamic contrast methods for biomedical research
Hillman, Elizabeth M. C.; Amoozegar, Cyrus B.; Wang, Tracy; McCaslin, Addason F. H.; Bouchard, Matthew B.; Mansfield, James; Levenson, Richard M.
2011-01-01
This paper provides an overview of optical imaging methods commonly applied to basic research applications. Optical imaging is well suited for non-clinical use, since it can exploit an enormous range of endogenous and exogenous forms of contrast that provide information about the structure and function of tissues ranging from single cells to entire organisms. An additional benefit of optical imaging that is often under-exploited is its ability to acquire data at high speeds; a feature that enables it to not only observe static distributions of contrast, but to probe and characterize dynamic events related to physiology, disease progression and acute interventions in real time. The benefits and limitations of in vivo optical imaging for biomedical research applications are described, followed by a perspective on future applications of optical imaging for basic research centred on a recently introduced real-time imaging technique called dynamic contrast-enhanced small animal molecular imaging (DyCE). PMID:22006910
Magnetospheric Radio Tomography: Observables, Algorithms, and Experimental Analysis
NASA Technical Reports Server (NTRS)
Cummer, Steven
2005-01-01
This grant supported research towards developing magnetospheric electron density and magnetic field remote sensing techniques via multistatic radio propagation and tomographic image reconstruction. This work was motivated by the need to better develop the basic technique of magnetospheric radio tomography, which holds substantial promise as a technology uniquely capable of imaging magnetic field and electron density in the magnetosphere on large scales with rapid cadence. Such images would provide an unprecedented and needed view into magnetospheric processes. By highlighting the systems-level interconnectedness of different regions, our understanding of space weather processes and ability to predict them would be dramatically enhanced. Three peer-reviewed publications and 5 conference presentations have resulted from this work, which supported 1 PhD student and 1 postdoctoral researcher. One more paper is in progress and will be submitted shortly. Because the main results of this research have been published or are soon to be published in refereed journal articles listed in the reference section of this document, we provide here an overview of the research and accomplishments without describing all of the details that are contained in the articles.
Algorithms and programming tools for image processing on the MPP, part 2
NASA Technical Reports Server (NTRS)
Reeves, Anthony P.
1986-01-01
A number of algorithms were developed for image warping and pyramid image filtering. Techniques were investigated for the parallel processing of a large number of independent irregular shaped regions on the MPP. In addition some utilities for dealing with very long vectors and for sorting were developed. Documentation pages for the algorithms which are available for distribution are given. The performance of the MPP for a number of basic data manipulations was determined. From these results it is possible to predict the efficiency of the MPP for a number of algorithms and applications. The Parallel Pascal development system, which is a portable programming environment for the MPP, was improved and better documentation including a tutorial was written. This environment allows programs for the MPP to be developed on any conventional computer system; it consists of a set of system programs and a library of general purpose Parallel Pascal functions. The algorithms were tested on the MPP and a presentation on the development system was made to the MPP users group. The UNIX version of the Parallel Pascal System was distributed to a number of new sites.
Probing neurodegeneration and aging: A PET approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
VanBrocklin, H.F.
1995-12-31
Positron Emission Tomography (PET) imaging has received wide application to the study of the aging brain and its diseases, most notably Parkinson's Disease (PD) and Alzheimer's Disease (AD). Basic neurological processes such as blood flow and glucose metabolism have been most often measured. Radioligands developed for specific neurochemical systems have amplified the flow and metabolism studies by more precisely defining the changes associated with degenerative processes. Our present research focuses on two additional applications of radiopharmaceutical development and PET imaging: (1) investigating the fundamental mechanisms of neurodegeneration and aging, and (2) assessing novel therapeutic intervention for PD with PET imaging. We have synthesized fluorine-18 labeled analogs of rotenone, a natural product that possesses high affinity for Complex I of the mitochondrial electron transport chain, and evaluated their potential to study changes in neuronal mitochondrial density and function. A large body of evidence points to mitochondrial dysfunction as a key factor in aging and neurodegeneration. We are also currently evaluating the use of genetically transfected cells to treat PD. Primates are being imaged with [18F]fluoro-m-L-tyrosine before and after MPTP Parkinsonian-type lesioning and following implantation of genetically altered cells capable of secreting tyrosine hydroxylase into the lesioned area. The ability to develop and apply PET probes has significantly enhanced the understanding of normal, aging, and degenerative processes of the brain.
A cultural side effect: learning to read interferes with identity processing of familiar objects
Kolinsky, Régine; Fernandes, Tânia
2014-01-01
Based on the neuronal recycling hypothesis (Dehaene and Cohen, 2007), we examined whether reading acquisition has a cost for the recognition of non-linguistic visual materials. More specifically, we checked whether the ability to discriminate between mirror images, which develops through literacy acquisition, interferes with object identity judgments, and whether interference strength varies as a function of the nature of the non-linguistic material. To these ends, we presented illiterate, late-literate (who learned to read as adults), and early-literate adults with an orientation-independent, identity-based same-different comparison task in which they had to respond “same” to both physically identical and mirrored or plane-rotated images of pictures of familiar objects (Experiment 1) or of geometric shapes (Experiment 2). Interference from irrelevant orientation variations was stronger with plane rotations than with mirror images, and stronger with geometric shapes than with objects. Illiterates were the only participants almost immune to mirror variations, but only for familiar objects. Thus, the process of unlearning mirror-image generalization, necessary to acquire literacy in the Latin alphabet, has a cost for a basic function of the visual ventral object recognition stream, i.e., identification of familiar objects. This demonstrates that neural recycling is not just an adaptation to multi-use but a process of at least partial exaptation. PMID:25400605
Dual-Energy CT: New Horizon in Medical Imaging
Goo, Jin Mo
2017-01-01
Dual-energy CT has remained underutilized over the past decade probably due to a cumbersome workflow issue and current technical limitations. Clinical radiologists should be made aware of the potential clinical benefits of dual-energy CT over single-energy CT. To accomplish this aim, the basic principle, current acquisition methods with advantages and disadvantages, and various material-specific imaging methods as clinical applications of dual-energy CT should be addressed in detail. Current dual-energy CT acquisition methods include dual tubes with or without beam filtration, rapid voltage switching, dual-layer detector, split filter technique, and sequential scanning. Dual-energy material-specific imaging methods include virtual monoenergetic or monochromatic imaging, effective atomic number map, virtual non-contrast or unenhanced imaging, virtual non-calcium imaging, iodine map, inhaled xenon map, uric acid imaging, automatic bone removal, and lung vessels analysis. In this review, we focus on dual-energy CT imaging including related issues of radiation exposure to patients, scanning and post-processing options, and potential clinical benefits mainly to improve the understanding of clinical radiologists and thus, expand the clinical use of dual-energy CT; in addition, we briefly describe the current technical limitations of dual-energy CT and the current developments of photon-counting detector. PMID:28670151
Intelligent Design of Nano-Scale Molecular Imaging Agents
Kim, Sung Bae; Hattori, Mitsuru; Ozawa, Takeaki
2012-01-01
Visual representation and quantification of biological processes at the cellular and subcellular levels within living subjects are gaining great interest in life science to address frontier issues in pathology and physiology. As intact living subjects do not emit any optical signature, visual representation usually exploits nano-scale imaging agents as the source of image contrast. Many imaging agents have been developed for this purpose, some of which exert nonspecific, passive, and physical interaction with a target. Current research interest in molecular imaging has mainly shifted to fabrication of smartly integrated, specific, and versatile agents that emit fluorescence or luminescence as an optical readout. These agents include luminescent quantum dots (QDs), biofunctional antibodies, and multifunctional nanoparticles. Furthermore, genetically encoded nano-imaging agents embedding fluorescent proteins or luciferases are now gaining popularity. These agents are generated by integrative design of the components, such as luciferase, flexible linker, and receptor to exert a specific on–off switching in the complex context of living subjects. In the present review, we provide an overview of the basic concepts, smart design, and practical contribution of recent nano-scale imaging agents, especially with respect to genetically encoded imaging agents. PMID:23235326
Imaging of DNA and Protein by SFM and Combined SFM-TIRF Microscopy.
Grosbart, Małgorzata; Ristić, Dejan; Sánchez, Humberto; Wyman, Claire
2018-01-01
Direct imaging is invaluable for understanding the mechanism of complex genome transactions where proteins work together to organize, transcribe, replicate and repair DNA. Scanning (or atomic) force microscopy is an ideal tool for this, providing 3D information on molecular structure at nm resolution from defined components. This is a convenient and practical addition to in vitro studies as readily obtainable amounts of purified proteins and DNA are required. The images reveal structural details on the size and location of DNA bound proteins as well as protein-induced arrangement of the DNA, which are directly correlated in the same complexes. In addition, even from static images, the different forms observed and their relative distributions can be used to deduce the variety and stability of different complexes that are necessarily involved in dynamic processes. Recently available instruments that combine fluorescence with topographic imaging allow the identification of specific molecular components in complex assemblies, which broadens the applications and increases the information obtained from direct imaging of molecular complexes. We describe here basic methods for preparing samples of proteins, DNA and complexes of the two for topographic imaging and quantitative analysis. We also describe special considerations for combined fluorescence and topographic imaging of molecular complexes.
Cardiovascular imaging environment: will the future be cloud-based?
Kawel-Boehm, Nadine; Bluemke, David A
2017-07-01
In cardiovascular CT and MR imaging, large datasets have to be stored, post-processed, analyzed, and distributed. Besides basic assessment of volume and function in cardiac magnetic resonance imaging, for example, more sophisticated quantitative analysis is increasingly requested, requiring specific software. Many institutions cannot afford various types of software or provide the expertise to perform sophisticated analysis. Areas covered: Various cloud services exist related to data storage and analysis specifically for cardiovascular CT and MR imaging. Instead of on-site data storage, cloud providers offer flexible storage services on a pay-per-use basis. To avoid the purchase and maintenance of specialized software for cardiovascular image analysis, e.g. to assess myocardial iron overload, MR 4D flow, and fractional flow reserve, evaluation can be performed with cloud-based software by the consumer, or the complete analysis is performed by the cloud provider. However, challenges to widespread implementation of cloud services include regulatory issues regarding patient privacy and data security. Expert commentary: If patient privacy and data security are guaranteed, cloud imaging is a valuable option to cope with the storage of large image datasets and to offer sophisticated cardiovascular image analysis for institutions of all sizes.
Sample preparation for SFM imaging of DNA, proteins, and DNA-protein complexes.
Ristic, Dejan; Sanchez, Humberto; Wyman, Claire
2011-01-01
Direct imaging is invaluable for understanding the mechanism of complex genome transactions where proteins work together to organize, transcribe, replicate, and repair DNA. Scanning (or atomic) force microscopy is an ideal tool for this, providing 3D information on molecular structure at nanometer resolution from defined components. This is a convenient and practical addition to in vitro studies as readily obtainable amounts of purified proteins and DNA are required. The images reveal structural details on the size and location of DNA-bound proteins as well as protein-induced arrangement of the DNA, which are directly correlated in the same complexes. In addition, even from static images, the different forms observed and their relative distributions can be used to deduce the variety and stability of different complexes that are necessarily involved in dynamic processes. Recently available instruments that combine fluorescence with topographic imaging allow the identification of specific molecular components in complex assemblies, which broadens the applications and increases the information obtained from direct imaging of molecular complexes. We describe here basic methods for preparing samples of proteins, DNA, and complexes of the two for topographic imaging and quantitative analysis. We also describe special considerations for combined fluorescence and topographic imaging of molecular complexes.
Image Motion Detection And Estimation: The Modified Spatio-Temporal Gradient Scheme
NASA Astrophysics Data System (ADS)
Hsin, Cheng-Ho; Inigo, Rafael M.
1990-03-01
The detection and estimation of motion are generally involved in computing a velocity field of time-varying images. A completely new modified spatio-temporal gradient scheme to determine motion is proposed. This is derived by using gradient methods and properties of biological vision. A set of general constraints is proposed to derive motion constraint equations. The constraints are that the second directional derivatives of image intensity at an edge point in the smoothed image will be constant at times t and t+L. This scheme basically has two stages: spatio-temporal filtering and velocity estimation. Initially, image sequences are processed by a set of oriented spatio-temporal filters which are designed using a Gaussian derivative model. The velocity is then estimated for these filtered image sequences based on the gradient approach. From a computational standpoint, this scheme offers at least three advantages over current methods. The greatest advantage of the modified spatio-temporal gradient scheme over the traditional ones is that an infinite number of motion constraint equations are derived instead of only one. Therefore, it solves the aperture problem without requiring any additional assumptions and is simply a local process. The second advantage is that because of the spatio-temporal filtering, the direct computation of image gradients (discrete derivatives) is avoided. Therefore, the error in gradient measurement is reduced significantly. The third advantage is that during the processing of the motion detection and estimation algorithm, image features (edges) are produced concurrently with motion information. The reliable range of detected velocity is determined by parameters of the oriented spatio-temporal filters. Knowing the velocity sensitivity of a single motion detection channel, a multiple-channel mechanism for estimating image velocity, seldom addressed by other motion schemes in machine vision, can be constructed by appropriately choosing and combining different sets of parameters. By applying this mechanism, a wide range of velocities can be detected. The scheme has been tested for both synthetic and real images. The results of simulations are very satisfactory.
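For contrast, the traditional gradient scheme that this proposal modifies rests on a single motion constraint equation per pixel, Ix*u + Iy*v + It = 0, which is why it needs extra assumptions (e.g. least-squares pooling over a window) to overcome the aperture problem. A minimal sketch of that classical baseline, not of the modified scheme (frame pair, window size, and gradient operators are assumptions):

    import numpy as np

    def single_constraint_velocity(frame1, frame2, y, x, w=7):
        # Classical gradient scheme: one brightness-constancy
        # constraint Ix*u + Iy*v + It = 0 per pixel, solved in least
        # squares over a (2w+1)^2 window around interior pixel (y, x).
        I1 = frame1.astype(np.float64)
        Iy, Ix = np.gradient(I1)                   # spatial gradients
        It = frame2.astype(np.float64) - I1        # temporal gradient
        win = (slice(y - w, y + w + 1), slice(x - w, x + w + 1))
        A = np.stack([Ix[win].ravel(), Iy[win].ravel()], axis=1)
        b = -It[win].ravel()
        (u, v), _, _, _ = np.linalg.lstsq(A, b, rcond=None)
        return u, v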
Automatic Detection of Clouds and Shadows Using High Resolution Satellite Image Time Series
NASA Astrophysics Data System (ADS)
Champion, Nicolas
2016-06-01
Detecting clouds and their shadows is one of the primary steps to perform when processing satellite images, because they may alter the quality of some products such as large-area orthomosaics. The main goal of this paper is to present the automatic method developed at IGN-France for detecting clouds and shadows in a sequence of satellite images. In our work, surface reflectance orthoimages are used. They were processed from the initial satellite images using dedicated software. The cloud detection step consists of a region-growing algorithm. Seeds are first extracted: for each input ortho-image to process, we select the other ortho-images of the sequence that intersect it, and pixels of the input ortho-image are labelled as seeds if the difference of reflectance (in the blue channel) with the overlapping ortho-images is bigger than a given threshold. Clouds are then delineated using a region-growing method based on a radiometric and homogeneity criterion. Regarding shadow detection, our method is based on the idea that a shadow pixel is darker when compared to the other images of the time series. The detection is basically composed of three steps. First, we compute a synthetic ortho-image covering the whole study area; its pixels have a value corresponding to the median value of all input reflectance ortho-images intersecting at that pixel location. Second, for each input ortho-image, a pixel is labelled as shadow if the difference of reflectance (in the NIR channel) with the synthetic ortho-image is below a given threshold. Finally, an optional region-growing step may be used to refine the results. Note that pixels labelled as clouds during cloud detection are not used for computing the median value in the first step; additionally, the NIR channel is used to perform the shadow detection, because it appeared to better discriminate shadow pixels. The method was tested on time series of Landsat 8 and Pléiades-HR images, and our first experiments show the feasibility of automating the detection of shadows and clouds in satellite image sequences.
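Stripped of the optional region growing, the shadow rule reduces to a per-pixel comparison against a temporal median; a minimal sketch under an assumed array layout (T, H, W), with the threshold left symbolic:

    import numpy as np

    def detect_shadows(nir_stack, cloud_masks, threshold):
        # nir_stack, cloud_masks: (T, H, W) NIR reflectances and the
        # boolean cloud masks produced by the cloud-detection step.
        stack = nir_stack.astype(np.float64)
        stack[cloud_masks] = np.nan           # exclude cloudy pixels
        median = np.nanmedian(stack, axis=0)  # synthetic ortho-image
        # A pixel is shadow when it is darker than the temporal
        # median by more than the threshold (refinement omitted).
        return (nir_stack - median) < -threshold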
MO-G-9A-01: Imaging Refresher for Standard of Care Radiation Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Labby, Z; Sensakovic, W; Hipp, E
2014-06-15
Imaging techniques and technology which were previously the domain of diagnostic medicine are becoming increasingly integrated and utilized in radiation therapy (RT) clinical practice. As such, there are a number of specific imaging topics that are highly applicable to modern radiation therapy physics. As imaging becomes more widely integrated into standard clinical radiation oncology practice, the impetus is on RT physicists to be informed and up-to-date on those imaging modalities relevant to the design and delivery of therapeutic radiation treatments. For example, knowing that, for a given situation, a fluid attenuated inversion recovery (FLAIR) image set is most likely what the physician would like to import and contour is helpful, but may not be sufficient to provide the best quality of care. Understanding the physics of how that pulse sequence works and why it is used could help assess its utility and determine if it is the optimal sequence for aiding in that specific clinical situation. It is thus important that clinical medical physicists be able to understand and explain the physics behind the imaging techniques used in all aspects of clinical radiation oncology practice. This session will provide the basic physics for a variety of imaging modalities for applications that are highly relevant to radiation oncology practice: computed tomography (CT) (including kV, MV, cone beam CT [CBCT], and 4DCT), positron emission tomography (PET)/CT, magnetic resonance imaging (MRI), and imaging specific to brachytherapy (including ultrasound and some brachytherapy-specific topics in MR). For each unique modality, the image formation process will be reviewed, trade-offs between image quality and other factors (e.g. imaging time or radiation dose) will be clarified, and typical use cases for each modality will be introduced. The current and near-future uses of these modalities and techniques in radiation oncology clinical practice will also be discussed. Learning Objectives: To review the basic physical science principles of CT, PET, MR, and ultrasound imaging. To understand how the images are created, and present their specific role in patient management and treatment planning for therapeutic radiation (both external beam and brachytherapy). To discuss when and how each specific imaging modality is currently used in clinical practice, as well as how they may come to be used in the near future.
[Basic concept in computer assisted surgery].
Merloz, Philippe; Wu, Hao
2006-03-01
To investigate the application of medical digital imaging systems and computer technologies in orthopedics. The main computer-assisted surgery systems comprise the following four subcategories. (1) A collection and recording process for digital data on each patient, including preoperative images (CT scans, MRI, standard X-rays), intraoperative visualization (fluoroscopy, ultrasound), and intraoperative position and orientation of surgical instruments or bone sections (using 3D localisers). Data merging is based on the matching of preoperative imaging (CT scans, MRI, standard X-rays) and intraoperative visualization (anatomical landmarks or bone surfaces digitized intraoperatively via a 3D localiser; intraoperative ultrasound images processed for delineation of bone contours). (2) In cases where only intraoperative images are used for computer-assisted surgical navigation, the calibration of the intraoperative imaging system replaces the merged data system, which is then no longer necessary. (3) A system that provides aid in decision-making, so that the surgical approach is planned on the basis of multimodal information: the interactive positioning of surgical instruments or bone sections transmitted via pre- or intraoperative images, and the display of elements to guide surgical navigation (direction, axis, orientation, length and diameter of a surgical instrument, impingement, etc.). And (4) a system that monitors the surgical procedure, thereby ensuring that the optimal strategy defined at the preoperative stage is taken into account. It is possible that computer-assisted orthopedic surgery systems will enable surgeons to better assess the accuracy and reliability of the various operative techniques, an indispensable stage in the optimization of surgery.
Phage display and molecular imaging: expanding fields of vision in living subjects.
Cochran, R; Cochran, Frank
2010-01-01
In vivo molecular imaging enables non-invasive visualization of biological processes within living subjects, and holds great promise for diagnosis and monitoring of disease. The ability to create new agents that bind to molecular targets and deliver imaging probes to desired locations in the body is critically important to further advance this field. To address this need, phage display, an established technology for the discovery and development of novel binding agents, is increasingly becoming a key component of many molecular imaging research programs. This review discusses the expanding role played by phage display in the field of molecular imaging with a focus on in vivo applications. Furthermore, new methodological advances in phage display that can be directly applied to the discovery and development of molecular imaging agents are described. Various phage library selection strategies are summarized and compared, including selections against purified target, intact cells, and ex vivo tissue, plus in vivo homing strategies. An outline of the process for converting polypeptides obtained from phage display library selections into successful in vivo imaging agents is provided, including strategies to optimize in vivo performance. Additionally, the use of phage particles as imaging agents is also described. In the latter part of the review, a survey of phage-derived in vivo imaging agents is presented, and important recent examples are highlighted. Other imaging applications are also discussed, such as the development of peptide tags for site-specific protein labeling and the use of phage as delivery agents for reporter genes. The review concludes with a discussion of how phage display technology will continue to impact both basic science and clinical applications in the field of molecular imaging.
Sakurai, T; Kawamata, R; Kozai, Y; Kaku, Y; Nakamura, K; Saito, M; Wakao, H; Kashima, I
2010-05-01
The aim of the study was to clarify the change in image quality upon X-ray dose reduction and to re-analyse the possibility of X-ray dose reduction in photostimulable phosphor luminescence (PSPL) X-ray imaging systems. In addition, the study attempted to verify the usefulness of multiobjective frequency processing (MFP) and flexible noise control (FNC) for X-ray dose reduction. Three PSPL X-ray imaging systems were used in this study. Modulation transfer function (MTF), noise equivalent number of quanta (NEQ) and detective quantum efficiency (DQE) were evaluated to compare the basic physical performance of each system. Subjective visual evaluation of diagnostic ability for normal anatomical structures was performed. The NEQ, DQE and diagnostic ability were evaluated at base X-ray dose, and 1/3, 1/10 and 1/20 of the base X-ray dose. The MTF of the systems did not differ significantly. The NEQ and DQE did not necessarily depend on the pixel size of the system. The images from all three systems had a higher diagnostic utility compared with conventional film images at the base and 1/3 X-ray doses. The subjective image quality was better at the base X-ray dose than at 1/3 of the base dose in all systems. The MFP and FNC-processed images had a higher diagnostic utility than the images without MFP and FNC. The use of PSPL imaging systems may allow a reduction in the X-ray dose to one-third of that required for conventional film. It is suggested that MFP and FNC are useful for radiation dose reduction.
ERIC Educational Resources Information Center
Nielsen, Dorte Guldbrand; Gotzsche, Ole; Sonne, Ole; Eika, Berit
2012-01-01
Two major views on the relationship between basic science knowledge and clinical knowledge stand out: the two-world view, which sees basic science and clinical science as two separate knowledge bases, and the encapsulated knowledge view, which states that basic science knowledge plays an overt role, being encapsulated in clinical knowledge. However, recent…
Monteleone, Alessio Maria; Monteleone, Palmiero; Esposito, Fabrizio; Prinster, Anna; Volpe, Umberto; Cantone, Elena; Pellegrino, Francesca; Canna, Antonietta; Milano, Walter; Aiello, Marco; Di Salle, Francesco; Maj, Mario
2017-07-01
Functional magnetic resonance imaging (fMRI) studies have revealed a dysregulation in the way the brain processes pleasant taste stimuli in patients with anorexia nervosa (AN) and bulimia nervosa (BN). However, exactly how the brain processes disgusting basic taste stimuli has never been investigated, even though disgust plays a role in food intake modulation and AN and BN patients exhibit high disgust sensitivity. Therefore, we investigated the activation of brain areas following the administration of pleasant and aversive basic taste stimuli in symptomatic AN and BN patients compared to healthy subjects. Twenty underweight AN women, 20 symptomatic BN women and 20 healthy women underwent fMRI while tasting a 0.292 M sucrose solution (sweet taste), a 0.5 mM quinine hydrochloride solution (bitter taste) and water as a reference taste. In symptomatic AN and BN patients, the pleasant sweet stimulus induced a higher activation in several brain areas than that induced by the aversive bitter taste; the opposite occurred in healthy controls. Moreover, compared to healthy controls, AN patients showed a decreased response to the bitter stimulus in the right amygdala and left anterior cingulate cortex, while BN patients showed a decreased response to the bitter stimulus in the right amygdala and left insula. These results show an altered processing of rewarding and aversive taste stimuli in ED patients, which may be relevant for understanding the pathophysiology of AN and BN. Copyright © 2017 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Hughes, Stephen
2001-01-01
Explains the basic principles of ultrasound using everyday physics. Topics include the generation of ultrasound, basic interactions with material, and the measurement of blood flow using the Doppler effect. (Author/MM)
Madsen, Sarah K.; Bohon, Cara; Feusner, Jamie D.
2013-01-01
Anorexia nervosa (AN) and body dysmorphic disorder (BDD) are psychiatric disorders that involve distortion of the experience of one’s physical appearance. In AN, individuals believe that they are overweight, perceive their body as “fat,” and are preoccupied with maintaining a low body weight. In BDD, individuals are preoccupied with misperceived defects in physical appearance, most often of the face. Distorted visual perception may contribute to these cardinal symptoms, and may be a common underlying phenotype. This review surveys the current literature on visual processing in AN and BDD, addressing lower- to higher-order stages of visual information processing and perception. We focus on peer-reviewed studies of AN and BDD that address ophthalmologic abnormalities, basic neural processing of visual input, integration of visual input with other systems, neuropsychological tests of visual processing, and representations of whole percepts (such as images of faces, bodies, and other objects). The literature suggests a pattern in both groups of over-attention to detail, reduced processing of global features, and a tendency to focus on symptom-specific details in their own images (body parts in AN, facial features in BDD), with cognitive strategy at least partially mediating the abnormalities. Visuospatial abnormalities were also evident when viewing images of others and for non-appearance related stimuli. Unfortunately no study has directly compared AN and BDD, and most studies were not designed to disentangle disease-related emotional responses from lower-order visual processing. We make recommendations for future studies to improve the understanding of visual processing abnormalities in AN and BDD. PMID:23810196
Image editing with Adobe Photoshop 6.0.
Caruso, Ronald D; Postel, Gregory C
2002-01-01
The authors introduce Photoshop 6.0 for radiologists and demonstrate basic techniques of editing gray-scale cross-sectional images intended for publication and for incorporation into computerized presentations. For basic editing of gray-scale cross-sectional images, the Tools palette and the History/Actions palette pair should be displayed. The History palette may be used to undo a step or series of steps. The Actions palette is a menu of user-defined macros that save time by automating an action or series of actions. Converting an image to 8-bit gray scale is the first editing function. Cropping is the next action. Both decrease file size. Use of the smallest file size necessary for the purpose at hand is recommended. Final file size for gray-scale cross-sectional neuroradiologic images (8-bit, single-layer TIFF [tagged image file format] at 300 pixels per inch) intended for publication varies from about 700 Kbytes to 3 Mbytes. Final file size for incorporation into computerized presentations is about 10-100 Kbytes (8-bit, single-layer, gray-scale, high-quality JPEG [Joint Photographic Experts Group]), depending on source and intended use. Editing and annotating images before they are inserted into presentation software is highly recommended, both for convenience and flexibility. Radiologists should find that image editing can be carried out very rapidly once the basic steps are learned and automated. Copyright RSNA, 2002
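Since the same two size-reducing steps recur for every figure, they can also be scripted outside Photoshop; a hypothetical batch equivalent using the Pillow library (paths, crop box, and JPEG quality are placeholder values, not part of the authors' workflow):

    from PIL import Image

    def prepare_copies(src_path):
        im = Image.open(src_path).convert("L")   # 8-bit gray scale first
        im = im.crop((40, 40, 472, 472))         # cropping shrinks the file
        # Publication copy: single-layer TIFF at 300 pixels per inch.
        im.save("for_print.tif", format="TIFF", dpi=(300, 300))
        # Presentation copy: high-quality 8-bit gray-scale JPEG.
        im.save("for_slides.jpg", format="JPEG", quality=90)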
Žuravleva, G. F.
1972-01-01
This paper reports an investigation of the activity of three basic groups of oxidoreductases in lepromatous leprosy: specific dehydrogenases, flavoprotein enzymes, and cytochrome oxidase. The activity of the enzymes was studied before treatment, at various stages of treatment during exacerbations, and in the stage of regression. The data obtained are of importance for evaluating metabolic processes in the cells of the specific infiltrates and the dermal connective tissue in leprosy, for determining the nature and intensity of the inflammatory process, and for control purposes in cases of regression. PMID:4342274
Bongianni, Wayne L.
1984-01-01
A method and apparatus for electronically focusing and electronically scanning microscopic specimens are given. In the invention, visual images of even moving, living, opaque specimens can be acoustically obtained and viewed with virtually no time needed for processing (i.e., real-time processing is used), and planar samples are not required. The specimens (if planar) need not be moved during scanning, although it is desirable and possible to move or rotate nonplanar specimens (e.g., laser fusion targets) against the lens of the apparatus. No coupling fluid is needed, so specimens need not be wetted. A phase acoustic microscope is also made from the basic microscope components together with electronic mixers.
Life is three-dimensional, and it begins with molecules.
Bourne, Philip E
2017-03-01
The iconic image of the DNA double helix embodies the central role that three-dimensional structures play in understanding biological processes, which, in turn, impact health and well-being. Here, that role is explored through the eyes of one scientist, who has been lucky enough to have over 150 talented people pass through his laboratory. Each contributed to that understanding. What follows is a small fraction of their story, with an emphasis on basic research outcomes of importance to society at large.
Low-cost data analysis systems for processing multispectral scanner data
NASA Technical Reports Server (NTRS)
Whitely, S. L.
1976-01-01
The basic hardware and software requirements are described for four low-cost analysis systems for computer-generated land use maps. The data analysis systems consist of an image display system, a small digital computer, and an output recording device. Software is described together with some of the display and recording devices, and typical costs are cited. Computer requirements are given, and two approaches are described for converting black-and-white film and electrostatic printer output to inexpensive color output products. Examples of output products are shown.
Exploring pain pathophysiology in patients.
Sommer, Claudia
2016-11-04
Although animal models of pain have brought invaluable information on basic processes underlying pain pathophysiology, translation to humans is a problem. This Review will summarize what information has been gained by the direct study of patients with chronic pain. The techniques discussed range from patient phenotyping using quantitative sensory testing to specialized nociceptor neurophysiology, imaging methods of peripheral nociceptors, analyses of body fluids, genetics and epigenetics, and the generation of sensory neurons from patients via inducible pluripotent stem cells. Copyright © 2016, American Association for the Advancement of Science.
NASA Astrophysics Data System (ADS)
Fauziah; Wibowo, E. P.; Madenda, S.; Hustinawati
2018-03-01
Capturing and recording human motion is mostly done for applications in sports, health, animated films, criminal investigation, and robotics. This study combines background subtraction with a back-propagation neural network, with the aim of identifying similar hand movements. The acquisition process used an 8 MP camera (MP4 format, 48 s duration, 30 frames/s); extraction of the video produced 1444 frames for the hand motion identification process. The image processing phases performed are segmentation, feature extraction, and identification. Segmentation uses background subtraction; the extracted features are basically used to distinguish one object from another. Feature extraction is performed using motion-based morphology analysis built on the seven invariant moments, producing four different motion classes: no object, hand down, hand to the side, and hands up. The identification process recognizes the type of hand movement using seven inputs. Testing and training with a variety of parameters showed that the architecture with one hundred hidden neurons provides the highest accuracy; this architecture is used to propagate the input values of the system implementation into the user interface. The identification of the type of human movement achieved a highest accuracy of 98.5447%.
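The abstract does not give the subtraction rule itself; a minimal sketch of generic background subtraction under a static-camera assumption (the median background model and threshold value are illustrative choices, not the authors' parameters):

    import numpy as np

    def build_background(frames):
        # Static background model: per-pixel median of early frames.
        return np.median(np.stack(frames).astype(np.float64), axis=0)

    def segment_hand(frame, background, threshold=25.0):
        # Foreground (the moving hand) = pixels that differ from the
        # background model by more than the threshold.
        return np.abs(frame.astype(np.float64) - background) > threshold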
USDA-ARS?s Scientific Manuscript database
Hyperspectral imaging technology has emerged as a powerful tool for quality and safety inspection of food and agricultural products and in precision agriculture over the past decade. Image analysis is a critical step in implementing hyperspectral imaging technology; it is aimed to improve the qualit...
Aural analysis of image texture via cepstral filtering and sonification
NASA Astrophysics Data System (ADS)
Rangayyan, Rangaraj M.; Martins, Antonio C. G.; Ruschioni, Ruggero A.
1996-03-01
Texture plays an important role in image analysis and understanding, with many applications in medical imaging and computer vision. However, analysis of texture by image processing is a rather difficult issue, with most techniques being oriented towards statistical analysis which may not have readily comprehensible perceptual correlates. We propose new methods for auditory display (AD) and sonification of (quasi-) periodic texture (where a basic texture element or `texton' is repeated over the image field) and random texture (which could be modeled as filtered or `spot' noise). Although the AD designed is not intended to be speech-like or musical, we draw analogies between the two types of texture mentioned above and voiced/unvoiced speech, and design a sonification algorithm which incorporates physical and perceptual concepts of texture and speech. More specifically, we present a method for AD of texture where the projections of the image at various angles (Radon transforms or integrals) are mapped to audible signals and played in sequence. In the case of random texture, the spectral envelopes of the projections are related to the filter spot characteristics, and convey the essential information for texture discrimination. In the case of periodic texture, the AD provides timbre and pitch related to the texton and periodicity. In another procedure for sonification of periodic texture, we propose to first deconvolve the image using cepstral analysis to extract information about the texton and horizontal and vertical periodicities. The projections of individual textons at various angles are used to create a voiced-speech-like signal with each projection mapped to a basic wavelet, the horizontal period to pitch, and the vertical period to rhythm on a longer time scale. The sound pattern then consists of a serial, melody-like sonification of the patterns for each projection. We believe that our approaches provide the much-desired `natural' connection between the image data and the sounds generated. We have evaluated the sonification techniques with a number of synthetic textures. The sound patterns created have demonstrated the potential of the methods in distinguishing between different types of texture. We are investigating the application of these techniques to auditory analysis of texture in medical images such as magnetic resonance images.
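As an illustration of the projection-to-sound idea, the sketch below maps the Radon projection at each angle to a short normalized audio segment played in sequence; the angle step, segment duration, and normalization are assumptions, and the full method described above additionally uses spectral envelopes, cepstral deconvolution, and wavelets:

    import numpy as np
    from skimage.transform import radon

    def sonify_texture(image, angles=range(0, 180, 5), rate=8000):
        # Map the image projection (Radon transform) at each angle to
        # a short audible segment; segments are played in sequence.
        segments = []
        for theta in angles:
            proj = radon(image, theta=[theta], circle=False)[:, 0]
            proj = proj - proj.mean()
            peak = np.abs(proj).max()
            proj = proj / peak if peak > 0 else proj
            reps = max(1, int(0.1 * rate / len(proj)))  # ~0.1 s each
            segments.append(np.tile(proj, reps))
        return np.concatenate(segments)  # waveform, play back at `rate` Hz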
[MUC4 research progress in tumor molecular markers].
Zhu, Hua; You, Jinhui
2014-02-01
Mucin antigen 4 (MUC4) is a molecular marker for some malignant tumors, used for early tumor diagnosis, prognosis, and targeted therapy. It provides a new research direction in tumor diagnosis and treatment that will have a wide application prospect. In recent years, there has been a large number of research reports on the basic and clinical studies of MUC4, but molecular imaging studies of MUC4 are seldom reported. In this paper, the recent basic and clinical research on MUC4 is briefly reviewed, with the expectation of promoting the development of tumor molecular imaging.
On the accuracy potential of focused plenoptic camera range determination in long distance operation
NASA Astrophysics Data System (ADS)
Sardemann, Hannes; Maas, Hans-Gerd
2016-04-01
Plenoptic cameras have found increasing interest in optical 3D measurement techniques in recent years. While their basic principle is 100 years old, developments in digital photography, micro-lens fabrication technology, and computer hardware have boosted the development and led to several commercially available ready-to-use cameras. Beyond the popular option of a posteriori image focusing or total-focus image generation, their basic ability to generate 3D information from single-camera imagery is a very beneficial option for certain applications. The paper first presents some fundamentals on the design and history of plenoptic cameras and describes depth determination from plenoptic camera image data. It then presents an analysis of the depth determination accuracy potential of plenoptic cameras. While most research on plenoptic camera accuracy so far has focused on close-range applications, we focus on mid and long ranges of up to 100 m. This range is especially relevant if plenoptic cameras are discussed as potential mono-sensorial range imaging devices in (semi-)autonomous cars or in mobile robotics. The results show the expected deterioration of depth measurement accuracy with depth. At depths of 30-100 m, which may be considered typical in autonomous driving, depth errors on the order of 3% (with peaks up to 10-13 m) were obtained from processing small point clusters on an imaged target. Outliers much higher than these values were observed in single-point analysis, stressing the necessity of spatial or spatio-temporal filtering of the plenoptic camera depth measurements. Despite these obviously large errors, a plenoptic camera may nevertheless be considered a valid option for the application fields of real-time robotics like autonomous driving or unmanned aerial and underwater vehicles, where the accuracy requirements decrease with distance.
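The roughly quadratic growth of depth error with range is what standard disparity-based error propagation predicts. In stereo-like notation, with focal length f, effective baseline B, and disparity measurement uncertainty \sigma_d (none of which are given here, so this is a sketch rather than the paper's error model):

    \sigma_Z \;\approx\; \frac{Z^{2}}{f\,B}\,\sigma_d

That is, a constant disparity uncertainty on the micro-lens images translates into depth errors growing with Z^2, consistent with percent-level errors at 30-100 m.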
NASA Astrophysics Data System (ADS)
Tampubolon, Togi; Abdullah, Khiruddin bin; San, Lim Hwee
2013-09-01
The spectral characteristics of land cover are basic references in classifying satellite images for geophysical analysis. They can be obtained from measurements using a spectrometer and from satellite image processing. The aim of this study is to investigate the spectral characteristics of land cover based on measurements using a Cropscan MSR 16R spectrometer and Landsat satellite imagery. The study area of this research is Medan (Deli Serdang, North Sumatera), Indonesia. The scope of this study is a basic survey of spectral measurements of land cover, covering several land types: cultivated and managed terrestrial areas, natural and semi-natural areas, cultivated aquatic or regularly flooded areas, natural and semi-natural aquatic areas, artificial surfaces and associated areas, bare areas, artificial waterbodies, and natural waterbodies. The measurements and their verification were conducted using the spectrometer and Landsat imagery, respectively. The results show that each type of land cover has a unique spectral characteristic. The correlation between the land cover spectra from the Cropscan MSR 16R spectrometer and the Landsat satellite images is above 90%. However, the artificial waterbodies land cover has a correlation under 40%, because the spectrometer measurement and the acquisition of the Landsat satellite imagery have a time difference.
Research relative to automated multisensor image registration
NASA Technical Reports Server (NTRS)
Kanal, L. N.
1983-01-01
The basic approaches to image registration are surveyed. Three image models are presented as models of the subpixel problem. A variety of approaches to subpixel analysis are presented using these models.
NASA Astrophysics Data System (ADS)
Acton, Scott T.; Gilliam, Andrew D.; Li, Bing; Rossi, Adam
2008-02-01
Improvised explosive devices (IEDs) are common and lethal instruments of terrorism, and linking a terrorist entity to a specific device remains a difficult task. In the effort to identify persons associated with a given IED, we have implemented a specialized content-based image retrieval system to search and classify IED imagery. The system makes two contributions to the art. First, we introduce a shape-based matching technique exploiting shape, color, and texture (wavelet) information, based on novel vector field convolution active contours and a novel active contour initialization method which treats coarse segmentation as an inverse problem. Second, we introduce a unique graph-theoretic approach to match annotated printed circuit board images for which no schematic or connectivity information is available. The shape-based image retrieval method, in conjunction with the graph-theoretic tool, provides an efficacious system for matching IED images. For circuit imagery, the basic retrieval mechanism has a precision of 82.1% and the graph-based method has a precision of 98.1%. As of the fall of 2007, the working system had processed over 400,000 case images.
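As a generic stand-in for the shape-matching stage (the actual matcher uses vector field convolution active contours, which are beyond a short sketch), a rotation- and scale-invariant comparison of segmented component masks can be done with log-scaled Hu moments:

    import cv2
    import numpy as np

    def shape_signature(mask):
        # Log-scaled Hu moments of a binary component mask: a generic
        # rotation/scale-invariant shape descriptor (an illustrative
        # stand-in, not the paper's active-contour matcher).
        m = cv2.HuMoments(cv2.moments(mask.astype(np.uint8))).ravel()
        return np.sign(m) * np.log10(np.abs(m) + 1e-30)

    def shape_distance(mask_a, mask_b):
        # Smaller distance = more similar component shapes.
        return float(np.abs(shape_signature(mask_a) -
                            shape_signature(mask_b)).sum())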
Helioviewer.org: Browsing Very Large Image Archives Online Using JPEG 2000
NASA Astrophysics Data System (ADS)
Hughitt, V. K.; Ireland, J.; Mueller, D.; Dimitoglou, G.; Garcia Ortiz, J.; Schmidt, L.; Wamsler, B.; Beck, J.; Alexanderian, A.; Fleck, B.
2009-12-01
As the amount of solar data available to scientists continues to increase at faster and faster rates, it is important that there exist simple tools for navigating this data quickly with a minimal amount of effort. By combining heterogeneous solar physics datatypes such as full-disk images and coronagraphs, along with feature and event information, Helioviewer offers a simple and intuitive way to browse multiple datasets simultaneously. Images are stored in a repository using the JPEG 2000 format and tiled dynamically upon a client's request. By tiling images and serving only the portions of the image requested, it is possible for the client to work with very large images without having to fetch all of the data at once. In addition to a focus on intercommunication with other virtual observatories and browsers (VSO, HEK, etc.), Helioviewer will offer a number of externally-available application programming interfaces (APIs) to enable easy third party use, adoption and extension. Recent efforts have resulted in increased performance, dynamic movie generation, and improved support for mobile web browsers. Future functionality will include: support for additional data-sources including RHESSI, SDO, STEREO, and TRACE, a navigable timeline of recorded solar events, social annotation, and basic client-side image processing.
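The client-side arithmetic behind serving only the requested portions of an image is straightforward; a sketch of the viewport-to-tile computation (the tile size and pixel-coordinate conventions are assumptions, not Helioviewer's actual API):

    def visible_tiles(x, y, width, height, tile_size=512):
        # Indices (col, row) of the tiles a viewport intersects, so a
        # client fetches only those portions of a very large image.
        first_col, first_row = x // tile_size, y // tile_size
        last_col = (x + width - 1) // tile_size
        last_row = (y + height - 1) // tile_size
        return [(c, r)
                for r in range(first_row, last_row + 1)
                for c in range(first_col, last_col + 1)]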
InSAR imaging of volcanic deformation over cloud-prone areas - Aleutian islands
Lu, Zhong
2007-01-01
Interferometric synthetic aperture radar (InSAR) is capable of measuring ground-surface deformation with centimeter-to-subcentimeter precision and a spatial resolution of tens of meters over a relatively large region. With its global coverage and all-weather imaging capability, InSAR is an important technique for measuring ground-surface deformation of volcanoes over cloud-prone and rainy regions such as the Aleutian Islands, where less than 5 percent of optical imagery is usable due to inclement weather conditions. The spatial distribution of surface deformation data, derived from InSAR images, enables the construction of detailed mechanical models to enhance the study of magmatic processes. This paper reviews the basics of InSAR for volcanic deformation mapping and the InSAR studies of ten Aleutian volcanoes associated with both eruptive and noneruptive activity. These studies demonstrate that all-weather InSAR imaging can improve our understanding of how the Aleutian volcanoes work and enhance our capability to predict future eruptions and associated hazards.
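The quoted centimeter-to-subcentimeter precision follows from the standard repeat-pass InSAR relation between an interferometric phase change and line-of-sight displacement (the factor 4\pi reflects the two-way radar path):

    \Delta d_{\mathrm{LOS}} \;=\; -\frac{\lambda}{4\pi}\,\Delta\phi

so one full fringe (\Delta\phi = 2\pi) corresponds to \lambda/2 of line-of-sight motion, about 2.8 cm at C-band (\lambda \approx 5.7 cm).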
McDermott, Gerry; Le Gros, Mark A.; Larabell, Carolyn A.
2012-01-01
Living cells are structured to create a range of microenvironments that support specific chemical reactions and processes. Understanding how cells function therefore requires detailed knowledge of both the subcellular architecture and the location of specific molecules within this framework. Here we review the development of two correlated cellular imaging techniques that fulfill this need. Cells are first imaged using cryogenic fluorescence microscopy to determine the location of molecules of interest that have been labeled with fluorescent tags. The same specimen is then imaged using soft X-ray tomography to generate a high-contrast, 3D reconstruction of the cells. Data from the two modalities are then combined to produce a composite, information-rich view of the cell. This correlated imaging approach can be applied across the spectrum of problems encountered in cell biology, from basic research to biotechnological and biomedical applications such as the optimization of biofuels and the development of new pharmaceuticals. PMID:22242730
Fajardo-Ortiz, David; Duran, Luis; Moreno, Laura; Ochoa, Héctor; Castaño, Víctor M
2014-01-01
This research maps the knowledge translation process for two different types of nanotechnologies applied to cancer: liposomes and metallic nanostructures (MNs). We performed a structural analysis of citation networks and text mining supported in controlled vocabularies. In the case of liposomes, our results identify subnetworks (invisible colleges) associated with different therapeutic strategies: nanopharmacology, hyperthermia, and gene therapy. Only in the pharmacological strategy was an organized knowledge translation process identified, which, however, is monopolized by the liposomal doxorubicins. In the case of MNs, subnetworks are not differentiated by the type of therapeutic strategy, and the content of the documents is still basic research. Research on MNs is highly focused on developing a combination of molecular imaging and photothermal therapy. PMID:24920900
Empowering potential: a theory of wellness motivation.
Fleury, J D
1991-01-01
Data were collected from 29 individuals who were attempting to initiate and sustain programs of cardiac risk factor modification. Data were analyzed through the technique of constant comparative analysis. Empowering potential, the basic social process identified from the data, explained individual motivation to initiate and sustain cardiovascular health behavior. Empowering potential was a continuous process of individual growth and development that facilitated the emergence of new and positive health patterns. Within the process of empowering potential, individuals used a variety of strategies that guided the initiation and maintenance of health-related change. The process of empowering potential consists of three stages: appraising readiness, changing, and integrating change. Two categories occurred throughout the process: imaging and social support systems. These findings provide a better understanding of how motivated action is initiated and reinitiated over time.
GaAs Supercomputing: Architecture, Language, And Algorithms For Image Processing
NASA Astrophysics Data System (ADS)
Johl, John T.; Baker, Nick C.
1988-10-01
The application of high-speed GaAs processors in a parallel system matches the demanding computational requirements of image processing. The architecture of the McDonnell Douglas Astronautics Company (MDAC) vector processor is described along with the algorithms and language translator. Most image and signal processing algorithms can utilize parallel processing and show a significant performance improvement over sequential versions. The parallelization performed by this system is within each vector instruction. Since each vector has many elements, each requiring some computation, useful concurrent arithmetic operations can easily be performed. Balancing the memory bandwidth with the computation rate of the processors is an important design consideration for high efficiency and utilization. The architecture features a bus-based execution unit consisting of four to eight 32-bit GaAs RISC microprocessors running at a 200 MHz clock rate for a peak performance of 1.6 billion operations per second. The execution unit is connected to a vector memory with three buses capable of transferring two input words and one output word every 10 ns. The address generators inside the vector memory perform different vector addressing modes and feed the data to the execution unit. The functions discussed in this paper include basic matrix operations, 2-D spatial convolution, histogram computation, and the FFT. For each of these algorithms, assembly language programs were run on a behavioral model of the system to obtain performance figures.
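For reference, the 2-D spatial convolution benchmarked above is the kind of inner loop each vector instruction would parallelize. A minimal NumPy sketch, with an illustrative 3x3 kernel and image:

```python
import numpy as np
from scipy.ndimage import convolve

# 3x3 smoothing kernel applied to an image: each output pixel is a weighted
# sum of its neighborhood -- the per-element work a vector processor would
# distribute across its GaAs RISC cores.
image = np.random.randint(0, 256, (512, 512)).astype(np.float32)
kernel = np.full((3, 3), 1.0 / 9.0, dtype=np.float32)
smoothed = convolve(image, kernel, mode="nearest")
```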
Symmetric nonnegative matrix factorization: algorithms and applications to probabilistic clustering.
He, Zhaoshui; Xie, Shengli; Zdunek, Rafal; Zhou, Guoxu; Cichocki, Andrzej
2011-12-01
Nonnegative matrix factorization (NMF) is an unsupervised learning method useful in various applications including image processing and semantic analysis of documents. This paper focuses on symmetric NMF (SNMF), which is a special case of NMF decomposition. Three parallel multiplicative update algorithms using level-3 Basic Linear Algebra Subprograms (BLAS) directly are developed for this problem. First, by minimizing the Euclidean distance, a multiplicative update algorithm is proposed, and its convergence under mild conditions is proved. Based on it, we further propose two other fast parallel methods: the α-SNMF and β-SNMF algorithms. All of them are easy to implement. These algorithms are applied to probabilistic clustering. We demonstrate their effectiveness for facial image clustering, document categorization, and pattern clustering in gene expression.
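To make the SNMF objective concrete, here is a minimal sketch of one well-known stabilized multiplicative update for minimizing ||A - WW^T||_F^2 (the paper's α- and β-SNMF variants differ in detail); the matrix products are where level-3 BLAS (GEMM) does the work:

```python
import numpy as np

def snmf_multiplicative(A, r, n_iter=500, beta=0.5, eps=1e-9):
    """Symmetric NMF sketch: approximate symmetric nonnegative A ~ W @ W.T
    with W >= 0, via a stabilized multiplicative update (beta=0.5 damping)."""
    n = A.shape[0]
    rng = np.random.default_rng(0)
    W = rng.random((n, r))
    for _ in range(n_iter):
        AW = A @ W                      # level-3 BLAS call (GEMM)
        WWtW = W @ (W.T @ W)            # two more GEMMs
        W *= (1 - beta) + beta * AW / (WWtW + eps)
    return W
```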
Securing quality of camera-based biomedical optics
NASA Astrophysics Data System (ADS)
Guse, Frank; Kasper, Axel; Zinter, Bob
2009-02-01
As sophisticated optical imaging technologies move into clinical applications, manufacturers need to guarantee that their products meet required performance criteria over long lifetimes and in very different environmental conditions. Consistent quality management marks critical component features derived from end-user requirements in a top-down approach. Careful risk analysis in the design phase defines the sample sizes for production tests, whereas first-article inspection assures the reliability of the production processes. We demonstrate the application of these basic quality principles to camera-based biomedical optics for a variety of examples including molecular diagnostics, dental imaging, ophthalmology, and digital radiography, covering a wide range of CCD/CMOS chip sizes and resolutions. Novel concepts in fluorescence detection and structured illumination are also highlighted.
Facial Expression Recognition using Multiclass Ensemble Least-Square Support Vector Machine
NASA Astrophysics Data System (ADS)
Lawi, Armin; Sya'Rani Machrizzandi, M.
2018-03-01
Facial expression is one of the behavioral characteristics of human beings. A biometric system based on facial expression makes it possible to recognize a person's mood or emotion. The basic components of a facial expression analysis system are face detection, face image extraction, facial classification, and facial expression recognition. This paper uses the Principal Component Analysis (PCA) algorithm to extract facial features for six expression classes: happy, sad, neutral, angry, fearful, and disgusted. A Multiclass Ensemble Least-Squares Support Vector Machine (MELS-SVM) is then used for the classification of facial expressions. Evaluated on 185 expression images of 10 persons, the MELS-SVM model achieved an accuracy of 99.998% using the RBF kernel.
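A minimal sketch of the extraction-plus-classification pipeline described above, using scikit-learn; a standard RBF-kernel SVC stands in for the paper's MELS-SVM, and the random arrays are placeholders for the 185 labeled face images:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Placeholder data standing in for 185 flattened 64x64 face images
# with one of six expression labels.
rng = np.random.default_rng(0)
X = rng.random((185, 64 * 64))
y = rng.integers(0, 6, size=185)

# PCA compresses each face to its leading "eigenface" coefficients;
# an RBF-kernel SVM then classifies the expression.
model = make_pipeline(PCA(n_components=40), SVC(kernel="rbf", C=10.0))
print(cross_val_score(model, X, y, cv=5).mean())
```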
Hard X-ray Microscopic Images of the Human Hair
NASA Astrophysics Data System (ADS)
Goo, Jawoong; Jeon, Soo Young; Oh, Tak Heon; Hong, Seung Phil; Yon, Hwa Shik; Lee, Won-Soo
2007-01-01
Better visualization of human organs and internal structures is a challenge for physicists and physicians, as it can lead to a deeper understanding of morphology and pathophysiology and to improved diagnosis. Conventional methods for investigating cells and architectures show limited value due to sample processing procedures and lower resolution. In this respect, Zernike-type phase-contrast hard x-ray microscopy using 6.95 keV photon energy has advantages. We investigated hair fibers of normal healthy persons. Coherence-based phase-contrast images revealed the three distinct structures of hair: medulla, cortex, and cuticular layer. Some distinct detailed characteristics of each sample were noted. Further details will be presented, and these results can serve as basic data for morphologic study of human hair.
Smart Cameras for Remote Science Survey
NASA Technical Reports Server (NTRS)
Thompson, David R.; Abbey, William; Allwood, Abigail; Bekker, Dmitriy; Bornstein, Benjamin; Cabrol, Nathalie A.; Castano, Rebecca; Estlin, Tara; Fuchs, Thomas; Wagstaff, Kiri L.
2012-01-01
Communication with remote exploration spacecraft is often intermittent and bandwidth is highly constrained. Future missions could use onboard science data understanding to prioritize downlink of critical features [1], draft summary maps of visited terrain [2], or identify targets of opportunity for follow-up measurements [3]. We describe a generic approach to classify geologic surfaces for autonomous science operations, suitable for parallelized implementations in FPGA hardware. We map these surfaces with texture channels - distinctive numerical signatures that differentiate properties such as roughness, pavement coatings, regolith characteristics, sedimentary fabrics, and differential outcrop weathering. This work describes our basic image analysis approach and reports an initial performance evaluation using surface images from the Mars Exploration Rovers. Future work will incorporate these methods into camera hardware for real-time processing.
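As a rough illustration of what per-pixel texture channels can look like (the paper's actual signatures are not reproduced here), this sketch computes three simple channels with separable local filters of the kind that map well onto FPGA pipelines:

```python
import numpy as np
from scipy import ndimage

def texture_channels(img):
    """Three simple per-pixel texture signatures (illustrative stand-ins
    for the paper's channels): local mean, local roughness (std dev),
    and edge energy from Sobel gradients."""
    mean = ndimage.uniform_filter(img, size=9)
    var = ndimage.uniform_filter(img ** 2, size=9) - mean ** 2
    rough = np.sqrt(np.maximum(var, 0.0))
    edge = np.hypot(ndimage.sobel(img, axis=1), ndimage.sobel(img, axis=0))
    return np.stack([mean, rough, edge], axis=-1)  # (H, W, 3) feature image
```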
Detection of high molecular weight proteins by MALDI imaging mass spectrometry.
Mainini, Veronica; Bovo, Giorgio; Chinello, Clizia; Gianazza, Erica; Grasso, Marco; Cattoretti, Giorgio; Magni, Fulvio
2013-06-01
MALDI imaging mass spectrometry (IMS) is a unique technology to explore the spatial distribution of biomolecules directly on tissues. It allows the in situ investigation of a large number of small proteins and peptides. Detection of high-molecular-weight proteins through MALDI IMS still represents an important challenge, as it would allow the direct investigation of the distribution of more proteins involved in biological processes, such as cytokines, enzymes, neuropeptide precursors and receptors. In this work we compare the traditional method performed with sinapinic acid against a comparable protocol using ferulic acid as the matrix. Data show a remarkable increase in signal acquisition in the mass range of 20k to 150k Th. Moreover, we report molecular images of biomolecules above 70k Th, demonstrating the possibility of expanding the application of this technology both in clinical investigations and basic science.
Locally adaptive vector quantization: Data compression with feature preservation
NASA Technical Reports Server (NTRS)
Cheung, K. M.; Sayano, M.
1992-01-01
A study of a locally adaptive vector quantization (LAVQ) algorithm for data compression is presented. This algorithm provides high-speed one-pass compression, is fully adaptable to any data source, and does not require a priori knowledge of the source statistics; therefore, LAVQ is a universal data compression algorithm. The basic algorithm and several modifications to improve performance are discussed. These modifications are nonlinear quantization, coarse quantization of the codebook, and lossless compression of the output. Performance of LAVQ on various images using irreversible (lossy) coding is comparable to that of the Linde-Buzo-Gray algorithm, but LAVQ has a much higher speed; thus this algorithm has potential for real-time video compression. Unlike most other image compression algorithms, LAVQ preserves fine detail in images. LAVQ's performance as a lossless data compression algorithm is comparable to that of Lempel-Ziv-based algorithms, but LAVQ uses far less memory during the coding process.
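The flavor of one-pass adaptive vector quantization can be sketched as follows; this is an illustrative update rule, not the paper's exact LAVQ algorithm, and the distance threshold and adaptation rate are assumed parameters:

```python
import numpy as np

def lavq_encode(vectors, max_dist=0.5, rate=0.1):
    """One-pass, locally adaptive VQ sketch: quantize each vector to its
    nearest codeword, nudging that codeword toward the input (local
    adaptation), and grow the codebook when the best match is too far."""
    codebook, indices = [], []
    for v in vectors:
        if codebook:
            d = [np.linalg.norm(v - c) for c in codebook]
            j = int(np.argmin(d))
            if d[j] <= max_dist:
                indices.append(j)
                codebook[j] = (1 - rate) * codebook[j] + rate * v
                continue
        codebook.append(np.asarray(v, dtype=float))
        indices.append(len(codebook) - 1)
    return np.array(codebook), indices
```

Because the codebook adapts as data streams through, no training pass or source statistics are needed, which is what makes the scheme one-pass and universal.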
An evaluation of EREP (Skylab) and ERTS imagery for integrated natural resources survey
NASA Technical Reports Server (NTRS)
Vangenderen, J. L. (Principal Investigator)
1973-01-01
The author has identified the following significant results. An experimental procedure has been devised and is being tested for natural resource surveys to cope with the problems of interpreting and processing the large quantities of data provided by Skylab and ERTS. Some basic aspects of orbital imagery such as scale, the role of repetitive coverage, and types of sensors are being examined in relation to integrated surveys of natural resources and regional development planning. Extrapolation away from known ground conditions, a fundamental technique for mapping resources, becomes very effective when used on orbital imagery supported by field mapping. Meaningful boundary delimitations can be made on orbital images using various image enhancement techniques. To meet the needs of many developing countries, this investigation into the use of satellite imagery for integrated resource surveys involves the analysis of the images by means of standard visual photointerpretation methods.
THz-wave parametric sources and imaging applications
NASA Astrophysics Data System (ADS)
Kawase, Kodo
2004-12-01
We have studied the generation of terahertz (THz) waves by optical parametric processes based on laser light scattering from the polariton mode of nonlinear crystals. Using parametric oscillation of a MgO-doped LiNbO3 crystal pumped by a nanosecond Q-switched Nd:YAG laser, we have realized a widely tunable coherent THz-wave source with a simple configuration. We have also developed a novel basic technology for THz imaging, which allows detection and identification of chemicals by introducing component spatial pattern analysis. The spatial distributions of the chemicals were obtained from terahertz multispectral transillumination images, using absorption spectra previously measured with a widely tunable THz-wave parametric oscillator. Further, we have applied this technique to the detection and identification of illicit drugs concealed in envelopes. The samples we used were methamphetamine and MDMA, two of the most widely consumed illegal drugs in Japan, and aspirin as a reference.
NVSIM: UNIX-based thermal imaging system simulator
NASA Astrophysics Data System (ADS)
Horger, John D.
1993-08-01
For several years the Night Vision and Electronic Sensors Directorate (NVESD) has been using an internally developed forward-looking infrared (FLIR) simulation program. In response to interest in the simulation part of these projects by other organizations, NVESD has been working on a new version of the simulation, NVSIM, that will be made generally available to the FLIR-using community. NVSIM uses basic FLIR specification data, high-resolution thermal input imagery, and spatial-domain image processing techniques to produce simulated image outputs for a broad variety of FLIRs. It is being built around modular programming techniques to allow simpler addition of more sensor effects. The modularity also allows selective inclusion and exclusion of individual sensor effects at run time. The simulation has been written in the industry-standard ANSI C programming language under the widely used UNIX operating system to make it easily portable to a wide variety of computer platforms.
Dynamic imaging with electron microscopy
Campbell, Geoffrey; McKeown, Joe; Santala, Melissa
2018-02-13
Livermore researchers have perfected an electron microscope to study fast-evolving material processes and chemical reactions. By applying engineering, microscopy, and laser expertise to the decades-old technology of electron microscopy, the dynamic transmission electron microscope (DTEM) team has developed a technique that can capture images of phenomena that are both very small and very fast. DTEM uses a precisely timed laser pulse to achieve a short but intense electron beam for imaging. When synchronized with a dynamic event in the microscope's field of view, DTEM allows scientists to record and measure material changes in action. A new movie-mode capability, which earned a 2013 R&D 100 Award from R&D Magazine, uses up to nine laser pulses to sequentially capture fast, irreversible, even one-of-a-kind material changes at the nanometer scale. DTEM projects are advancing basic and applied materials research, including such areas as nanostructure growth, phase transformations, and chemical reactions.
NASA Astrophysics Data System (ADS)
Weber, Walter H.; Mair, H. Douglas; Jansen, Dion
2003-03-01
A suite of basic signal processors has been developed. These basic building blocks can be cascaded together to form more complex processors without the need for programming. The data structures between the processors are handled automatically. This allows a processor built for one purpose to be applied to any type of data, such as images, waveform arrays, and single values. The processors are part of the Winspect Data Acquisition software. The new processors are fast enough to work on A-scan signals live while scanning. Their primary use is to extract features, reduce noise, or calculate material properties. The cascaded processors work equally well on live A-scan displays, live gated data, or as a post-processing engine on saved data. Researchers are able to call their own MATLAB or C code from anywhere within the processor structure. A built-in formula-node processor that uses a simple algebraic editor may make external user programs unnecessary. This paper also discusses the problems associated with ad hoc software development and how graphical programming languages can tie up researchers writing software rather than designing experiments.
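A minimal sketch of the cascading idea, with each processor as a plain array-to-array function so the same chain applies to waveforms or images; the stage names are illustrative, not Winspect's actual processors:

```python
import numpy as np
from typing import Callable, List

Processor = Callable[[np.ndarray], np.ndarray]

def cascade(stages: List[Processor]) -> Processor:
    """Chain basic processors into a more complex one; the data passed
    between stages is just an ndarray, handled automatically."""
    def run(data: np.ndarray) -> np.ndarray:
        for stage in stages:
            data = stage(data)
        return data
    return run

def denoise(x):        # moving-average noise reduction
    return np.convolve(x, np.ones(5) / 5, mode="same")

def rectify(x):        # full-wave rectification of an A-scan
    return np.abs(x)

def peak_feature(x):   # extract amplitude and position of the peak
    return np.array([x.max(), x.argmax()])

pipeline = cascade([denoise, rectify, peak_feature])
```

Because every stage shares one interface, a block built for A-scans can be dropped unchanged into an image or gated-data chain, which is the design choice the paper highlights.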
Pre-Hardware Optimization and Implementation Of Fast Optics Closed Control Loop Algorithms
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Lyon, Richard G.; Herman, Jay R.; Abuhassan, Nader
2004-01-01
One of the main heritage tools used in scientific and engineering data spectrum analysis is the Fourier Integral Transform and its high-performance digital equivalent - the Fast Fourier Transform (FFT). The FFT is particularly useful in two-dimensional (2-D) image processing (FFT2) within optical systems control. However, timing constraints of a fast optics closed control loop would require a supercomputer to run the software implementation of the FFT2 and its inverse, as well as other representative image processing algorithms, such as numerical image folding and fringe feature extraction. A laboratory supercomputer is not always available even for ground operations and is not feasible for a flight project. However, the computationally intensive algorithms still warrant alternative implementation using reconfigurable computing (RC) technologies such as Digital Signal Processors (DSP) and Field Programmable Gate Arrays (FPGA), which provide low-cost compact supercomputing capabilities. We present a new RC hardware implementation and utilization architecture that significantly reduces the computational complexity of a few basic image-processing algorithms, such as FFT2, image folding, and phase diversity, for the NASA Solar Viewing Interferometer Prototype (SVIP) using a cluster of DSPs and FPGAs. The DSP cluster utilization architecture also assures avoidance of a single point of failure, while using commercially available hardware. This, combined with pre-hardware optimization of the control algorithms, for the first time allows construction of image-based 800 Hertz (Hz) optics closed control loops on board a spacecraft, based on the SVIP ground instrument. That spacecraft is the proposed Earth Atmosphere Solar Occultation Imager (EASI) to study the greenhouse gases CO2, C2H, H2O, O3, O2, and N2O from the Lagrange-2 point in space. This paper provides an advanced insight into a new type of science capability for future space exploration missions based on onboard image processing for control and for robotics missions using vision sensors. It presents a top-level description of the technologies required for the design and construction of SVIP and EASI and to advance spatial-spectral imaging and large-scale space interferometry science and engineering.
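To fix ideas about the FFT2 step at the core of such a loop, here is a NumPy sketch of a 2-D transform with a simple frequency-domain mask and inverse; NumPy only illustrates the math, while the 800 Hz loop budget (about 1.25 ms per frame) is what forces the DSP/FPGA implementation:

```python
import numpy as np

# Forward FFT2, symmetric low-pass mask, inverse FFT2. The centered,
# symmetric mask preserves conjugate symmetry, so the result is real
# up to rounding.
img = np.random.random((256, 256))
F = np.fft.fftshift(np.fft.fft2(img))
mask = np.zeros_like(F)
c = 128
mask[c - 16:c + 16, c - 16:c + 16] = 1   # keep low spatial frequencies
filtered = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```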
Unsupervised domain adaptation for early detection of drought stress in hyperspectral images
NASA Astrophysics Data System (ADS)
Schmitter, P.; Steinrücken, J.; Römer, C.; Ballvora, A.; Léon, J.; Rascher, U.; Plümer, L.
2017-09-01
Hyperspectral images can be used to uncover physiological processes in plants if interpreted properly. Machine learning methods such as Support Vector Machines (SVM) and Random Forests have been applied to estimate the development of biomass and to detect and predict plant diseases and drought stress. One basic requirement of machine learning is that training and testing are done in the same domain and with the same distribution. Different genotypes, environmental conditions, illumination, and sensors violate this requirement in most practical circumstances. Here, we present an approach that enables the detection of physiological processes by transferring the prior knowledge within an existing model into a related target domain where no label information is available. We propose a two-step transformation of the target features, which enables a direct application of an existing model. The transformation is evaluated by an objective function including additional prior knowledge about classification and physiological processes in plants. We applied the approach to three sets of hyperspectral images, which were acquired from different plant species in different environments and observed with different sensors. It is shown that a classification model derived on one of the sets delivers satisfying classification results on the transformed features of the other data sets. Furthermore, in all cases early non-invasive detection of drought stress was possible.
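The paper's two-step transformation is not reproduced here; as a flavor of feature-space transfer, the sketch below aligns the leading PCA subspaces of source and target (subspace alignment in the style of Fernando et al.), after which a source-trained classifier can be applied to the aligned target features:

```python
import numpy as np

def align_subspaces(Xs, Xt, k=10):
    """Return k-D features for source and target in a shared space.
    Xs, Xt: (samples, bands) matrices from source and target domains."""
    Us = np.linalg.svd(Xs - Xs.mean(0), full_matrices=False)[2][:k].T
    Ut = np.linalg.svd(Xt - Xt.mean(0), full_matrices=False)[2][:k].T
    Zs = (Xs - Xs.mean(0)) @ Us
    Zt = (Xt - Xt.mean(0)) @ Ut @ (Ut.T @ Us)  # map target basis onto source
    return Zs, Zt
```

Train on Zs with the source labels, predict on Zt; no target labels are needed, which mirrors the unsupervised setting of the paper.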
NASA Astrophysics Data System (ADS)
Chiu, L.; Hao, X.; Kinter, J. L.; Stearn, G.; Aliani, M.
2017-12-01
The launch of the GOES-16 series provides an opportunity to advance near-real-time applications in natural hazard detection, monitoring, and warning. This study demonstrates the capability and value of receiving real-time satellite-based Earth observations over fast terrestrial networks and processing high-resolution remote sensing data in a university environment. The demonstration system includes four components: 1) near-real-time data receiving and processing; 2) data analysis and visualization; 3) event detection and monitoring; and 4) information dissemination. Various tools are developed and integrated to receive and process GRB data in near real time, produce images and value-added data products, and detect and monitor extreme weather events such as hurricanes, fires, flooding, fog, and lightning. A web-based application system is developed to disseminate near-real-time satellite images and data products. The images are generated in a GIS-compatible format (GeoTIFF) to enable convenient use and integration in various GIS platforms. This study enhances the capacity for undergraduate and graduate education in Earth system and climate sciences and related applications, helping students understand the basic principles and technology of real-time applications with remote sensing measurements. It also provides an integrated platform for near-real-time monitoring of extreme weather events, which is helpful for various user communities.
Research on spatial-variant property of bistatic ISAR imaging plane of space target
NASA Astrophysics Data System (ADS)
Guo, Bao-Feng; Wang, Jun-Ling; Gao, Mei-Guo
2015-04-01
The imaging plane of inverse synthetic aperture radar (ISAR) is the projection plane of the target. When an image is formed using range-Doppler theory, the imaging plane may have a spatial-variant property, which changes each scatterer's projection position and results in migration through resolution cells. In this study, we focus on the spatial-variant property of the imaging plane of a three-axis-stabilized space target. The innovative contributions are as follows. 1) The target motion model in orbit is provided based on a two-body model. 2) The instantaneous imaging plane is determined by the method of vector analysis. 3) Three Euler angles are introduced to describe the spatial-variant property of the imaging plane, and the image quality is analyzed. The simulation results confirm the analysis of the spatial-variant property. The research in this study is significant for the selection of the imaging segment, and provides the evidence for the subsequent data processing and compensation algorithms. Project supported by the National Natural Science Foundation of China (Grant No. 61401024), the Shanghai Aerospace Science and Technology Innovation Foundation, China (Grant No. SAST201240), and the Basic Research Foundation of Beijing Institute of Technology (Grant No. 20140542001).
Noninvasive imaging of hepatocellular carcinoma: From diagnosis to prognosis
Jiang, Han-Yu; Chen, Jie; Xia, Chun-Chao; Cao, Li-Kun; Duan, Ting; Song, Bin
2018-01-01
Hepatocellular carcinoma (HCC) is the most common primary liver cancer and a major public health problem worldwide. Hepatocarcinogenesis is a complex multistep process at the molecular, cellular, and histologic levels, with key alterations that can be revealed by noninvasive imaging modalities. Therefore, imaging techniques play pivotal roles in the detection, characterization, staging, surveillance, and prognosis evaluation of HCC. Currently, ultrasound is the first-line imaging modality for screening and surveillance purposes. Based on conclusive enhancement patterns comprising arterial phase hyperenhancement and portal venous and/or delayed phase wash-out, contrast-enhanced dynamic computed tomography and magnetic resonance imaging (MRI) are the diagnostic tools for HCC, without requirements for histopathologic confirmation. Functional MRI techniques, including diffusion-weighted imaging, MRI with hepatobiliary contrast agents, perfusion imaging, and magnetic resonance elastography, show promise in providing further important information regarding tumor biological behaviors. In addition, evaluation of tumor imaging characteristics, including nodule size, margin, number, vascular invasion, and growth patterns, allows preoperative prediction of tumor microvascular invasion and patient prognosis. Therefore, the aim of this article is to review the current state of the art and recent advances in the comprehensive noninvasive imaging evaluation of HCC. We also provide the basic key concepts of HCC development and an overview of the current practice guidelines. PMID:29904242
Implementation of a General Real-Time Visual Anomaly Detection System Via Soft Computing
NASA Technical Reports Server (NTRS)
Dominguez, Jesus A.; Klinko, Steve; Ferrell, Bob; Steinrock, Todd (Technical Monitor)
2001-01-01
The intelligent visual system detects anomalies or defects in real time under normal lighting operating conditions. The application is basically a learning machine that integrates fuzzy logic (FL), artificial neural network (ANN), and genetic algorithm (GA) schemes to process the image, run the learning process, and finally detect the anomalies or defects. The system acquires the image, performs segmentation to separate the object being tested from the background, preprocesses the image using fuzzy reasoning, performs the final segmentation using fuzzy reasoning techniques to retrieve regions with potential anomalies or defects, and finally retrieves them using a learning model built via ANN and GA techniques. FL provides a powerful framework for knowledge representation and overcomes the uncertainty and vagueness typically found in image analysis. ANN provides learning capabilities, and GA leads to robust learning results. An application prototype currently runs on a regular PC under Windows NT, and preliminary work has been performed to build an embedded version with multiple image processors. The application prototype is being tested at the Kennedy Space Center (KSC), Florida, to visually detect anomalies along the slide basket cables used by astronauts to evacuate the NASA Shuttle launch pad in an emergency. The potential applications of this anomaly detection system in an open environment are quite wide. Another potentially viable application at NASA is in detecting anomalies on the NASA Space Shuttle Orbiter's radiator panels.
NASA Astrophysics Data System (ADS)
Wu, Yuanfeng; Gao, Lianru; Zhang, Bing; Zhao, Haina; Li, Jun
2014-01-01
We present a parallel implementation of the optimized maximum noise fraction (G-OMNF) transform algorithm for feature extraction from hyperspectral images on commodity graphics processing units (GPUs). The proposed approach exploits the algorithm's data-level concurrency and optimizes the computing flow. We first defined a three-dimensional grid in which each thread calculates a sub-block of data, to easily facilitate the spatial and spectral neighborhood data searches in noise estimation, which is one of the most important steps involved in OMNF. Then, we optimized the processing flow and computed the noise covariance matrix before computing the image covariance matrix to reduce the original hyperspectral image data transmission. These optimization strategies can greatly improve the computing efficiency and can be applied to other feature extraction algorithms. The proposed parallel feature extraction algorithm was implemented on an Nvidia Tesla GPU using the compute unified device architecture and a basic linear algebra subroutines library. Through experiments on several real hyperspectral images, our GPU parallel implementation provides a significant speedup of the algorithm compared with the CPU implementation, especially for highly data-parallelizable and arithmetically intensive algorithm parts, such as noise estimation. In order to further evaluate the effectiveness of G-OMNF, we used two different applications for evaluation: spectral unmixing and classification. Considering the sensor scanning rate and the data acquisition time, the proposed parallel implementation met the requirements of on-board real-time feature extraction.
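To fix ideas about the transform being accelerated, here is a minimal CPU sketch of a maximum-noise-fraction-style transform under a crude noise model (differences between consecutive pixels); the paper's optimized OMNF and its GPU kernelization are considerably more involved:

```python
import numpy as np

def mnf(X):
    """X: (pixels, bands) hyperspectral matrix. Returns components ordered
    by decreasing signal-to-noise ratio."""
    N = X[1:] - X[:-1]                      # rough noise estimate (assumed)
    w, V = np.linalg.eigh(np.cov(N.T))      # noise covariance, computed first
    Wn = V / np.sqrt(np.maximum(w, 1e-12))  # noise-whitening transform
    Z = (X - X.mean(0)) @ Wn
    _, U = np.linalg.eigh(np.cov(Z.T))      # PCA of the noise-whitened data
    return Z @ U[:, ::-1]
```

Note the same ordering trick as in the paper: the noise covariance is formed first, so the raw cube need only be touched once for the whitened projection.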
NASA Astrophysics Data System (ADS)
Latief, F. D. E.; Sari, D. S.; Fitri, L. A.
2017-08-01
High-resolution tomographic imaging by means of x-ray micro-computed tomography (μCT) has been widely utilized for morphological evaluations in dentistry and medicine. The use of μCT follows a standard procedure: image acquisition, reconstruction, processing, evaluation using image analysis, and reporting of results. This paper discusses methods of μCT using a specific scanning device, the Bruker SkyScan 1173 High Energy Micro-CT. We present a description of the general workflow, information on terminology for the measured parameters and corresponding units, and further analyses that can potentially be conducted with this technology. Brief qualitative and quantitative analyses, including basic image processing (VOI selection and thresholding) and measurement of several morphometric variables (total VOI volume, object volume, percentage of total volume, total VOI surface, object surface, object surface/volume ratio, object surface density, structure thickness, structure separation, and total porosity), were conducted on two samples, the mandible of a Wistar rat and a urinary tract stone, to illustrate the abilities of this device and its accompanying software package. The results of these analyses for both samples are reported, along with a discussion of the types of analyses that are possible using digital images obtained with a μCT scanning device, paying particular attention to non-diagnostic ex vivo research applications.
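A minimal sketch of the thresholding-plus-morphometry step on a reconstructed volume, using scikit-image; Otsu's global threshold and the random placeholder volume are assumptions, not the vendor software's actual pipeline:

```python
import numpy as np
from skimage import filters, measure

vol = np.random.random((128, 128, 128))   # placeholder reconstructed volume
dx = 0.01                                 # assumed isotropic voxel size (mm)

mask = vol > filters.threshold_otsu(vol)              # binary VOI segmentation
object_volume = mask.sum() * dx**3                    # "object volume" (mm^3)
percent_of_voi = 100.0 * mask.mean()                  # "% of total volume"
verts, faces, *_ = measure.marching_cubes(mask.astype(float), level=0.5)
object_surface = measure.mesh_surface_area(verts, faces) * dx**2  # (mm^2)
surface_to_volume = object_surface / object_volume    # "surface/volume ratio"
```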
Reducing Interpolation Artifacts for Mutual Information Based Image Registration
Soleimani, H.; Khosravifard, M.A.
2011-01-01
Medical image registration methods that use mutual information as a similarity measure have been improved in recent decades. Mutual information is a basic concept of information theory that indicates the dependency between two random variables (or two images). In order to evaluate the mutual information of two images, their joint probability distribution is required. Several interpolation methods, such as Partial Volume (PV) and bilinear, are used to estimate the joint probability distribution. Both of these methods yield some artifacts in the mutual information function. The Partial Volume-Hanning window (PVH) and Generalized Partial Volume (GPV) methods were introduced to remove such artifacts. In this paper we show that the acceptable performance of these methods is not due to their kernel function; it is due to the number of pixels that participate in the interpolation. Since using more pixels requires a more complex and time-consuming interpolation process, we propose a new interpolation method that uses only four pixels (the same as PV and bilinear interpolation) and removes most of the artifacts. Experimental results on the registration of Computed Tomography (CT) images show the superiority of the proposed scheme. PMID:22606673
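For reference, the quantity being maximized can be computed from a joint histogram; a minimal sketch for two already-resampled images follows. The interpolation artifacts the paper studies arise in how fractional sample positions spread mass over this histogram, a step elided here:

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """MI of two aligned images with intensities scaled to [0, 1]."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins,
                             range=[[0, 1], [0, 1]])
    p = h / h.sum()                               # joint distribution
    px = p.sum(axis=1, keepdims=True)             # marginal of a
    py = p.sum(axis=0, keepdims=True)             # marginal of b
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())
```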
Advances in Monitoring Cell-Based Therapies with Magnetic Resonance Imaging: Future Perspectives
Ngen, Ethel J.; Artemov, Dmitri
2017-01-01
Cell-based therapies are currently being developed for applications in both regenerative medicine and in oncology. Preclinical, translational, and clinical research on cell-based therapies will benefit tremendously from novel imaging approaches that enable the effective monitoring of the delivery, survival, migration, biodistribution, and integration of transplanted cells. Magnetic resonance imaging (MRI) offers several advantages over other imaging modalities for elucidating the fate of transplanted cells both preclinically and clinically. These advantages include the ability to image transplanted cells longitudinally at high spatial resolution without exposure to ionizing radiation, and the possibility to co-register anatomical structures with molecular processes and functional changes. However, since cellular MRI is still in its infancy, it currently faces a number of challenges, which provide avenues for future research and development. In this review, we describe the basic principle of cell-tracking with MRI; explain the different approaches currently used to monitor cell-based therapies; describe currently available MRI contrast generation mechanisms and strategies for monitoring transplanted cells; discuss some of the challenges in tracking transplanted cells; and suggest future research directions. PMID:28106829
Rizk, Aurélien; Paul, Grégory; Incardona, Pietro; Bugarski, Milica; Mansouri, Maysam; Niemann, Axel; Ziegler, Urs; Berger, Philipp; Sbalzarini, Ivo F
2014-03-01
Detection and quantification of fluorescently labeled molecules in subcellular compartments is a key step in the analysis of many cell biological processes. Pixel-wise colocalization analyses, however, are not always suitable, because they do not provide object-specific information, and they are vulnerable to noise and background fluorescence. Here we present a versatile protocol for a method named 'Squassh' (segmentation and quantification of subcellular shapes), which is used for detecting, delineating and quantifying subcellular structures in fluorescence microscopy images. The workflow is implemented in freely available, user-friendly software. It works on both 2D and 3D images, accounts for the microscope optics and for uneven image background, computes cell masks and provides subpixel accuracy. The Squassh software enables both colocalization and shape analyses. The protocol can be applied in batch, on desktop computers or computer clusters, and it usually requires <1 min and <5 min for 2D and 3D images, respectively. Basic computer-user skills and some experience with fluorescence microscopy are recommended to successfully use the protocol.
NASA Astrophysics Data System (ADS)
Suchwalko, Agnieszka; Buzalewicz, Igor; Podbielska, Halina
2012-01-01
In this paper, an optical system with converging spherical wave illumination for the classification of bacteria species is proposed. It allows compression of the observation space, observation of Fresnel patterns, diffraction pattern scaling, and a low level of optical aberrations, properties not offered by other optical configurations. The experimental results show that colonies of specific bacteria species generate unique diffraction signatures. Analysis of the Fresnel diffraction patterns of bacteria colonies can therefore be a fast and reliable method for the classification and recognition of bacteria species. To determine the unique features of bacteria colony diffraction patterns, an image processing analysis is proposed. Classification can be performed by analyzing the spatial structure of the diffraction patterns, which can be characterized by a set of concentric rings whose characteristics depend on the bacteria species. In the paper, the influence of basic features and the ring partitioning number on bacteria classification is analyzed. It is demonstrated that Fresnel patterns can be used for classification of the following species: Salmonella enteritidis, Staphylococcus aureus, Proteus mirabilis, and Citrobacter freundii. Image processing is performed with the free ImageJ software, for which a special macro with human interaction was written. LDA classification, cross-validation (CV), ANOVA, and PCA visualizations, preceded by image data extraction, were conducted using the free software R.
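As a rough illustration of ring partitioning (the paper's ImageJ macro and feature set are not reproduced), this sketch averages pattern intensity over concentric rings about the pattern center, yielding one feature per ring:

```python
import numpy as np

def ring_features(img, center, n_rings=16):
    """Mean intensity in n_rings concentric annuli about `center`
    (row, col); returns a length-n_rings feature vector."""
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    r = np.hypot(y - center[0], x - center[1])
    edges = np.linspace(0, r.max(), n_rings + 1)
    return np.array([img[(r >= lo) & (r < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])
```

Feature vectors of this kind would then feed the LDA classifier, with the ring count playing the role of the partitioning number studied in the paper.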
Meso-Scale Wetting of Paper Towels
NASA Astrophysics Data System (ADS)
Abedsoltan, Hossein
In this study, a new experimental approach is proposed to investigate the absorption properties of selected retail paper towels. The samples were drawn from two important manufacturing processes: conventional wet pressing (CWP), considered value products, and through-air drying (TAD), considered high or premium products. The tested liquids were water, decane, dodecane, and tetradecane, with total volumes in the microliter range. The method involves point-source injection of liquid at different volumetric flow rates in the nanoliter-per-second range. The injection site was chosen arbitrarily on the sample surface. The absorption process was monitored and recorded as the liquid advanced, with two distinct imaging methods: infrared imaging and optical imaging. The microscopic images were analyzed to calculate the wetted regions during the absorption test, and absorption diagrams were generated. These diagrams were dissected to illustrate the absorption phenomenon and the absorption properties of the samples. The local (regional) absorption rates were computed for Mardi Gras and Bounty Basic, the representative samples for CWP and TAD respectively, in order to compare them with the absorption capacity of these two samples. The absorption capacity was then chosen as an index factor to compare the absorption properties of all the tested paper towels.
SET: a pupil detection method using sinusoidal approximation
Javadi, Amir-Homayoun; Hakimi, Zahra; Barati, Morteza; Walsh, Vincent; Tcheang, Lili
2015-01-01
Mobile eye-tracking in external environments remains challenging, despite recent advances in eye-tracking software and hardware engineering. Many current methods fail to deal with the vast range of outdoor lighting conditions and the speed at which these can change. This confines experiments to artificial environments where conditions must be tightly controlled. Additionally, the emergence of low-cost eye-tracking devices calls for the development of analysis tools that enable non-technical researchers to process the output of their images. We have developed a fast and accurate method (known as "SET") that is suitable even for natural environments with uncontrolled, dynamic and even extreme lighting conditions. We compared the performance of SET with that of two open-source alternatives by processing two collections of eye images: images of natural outdoor scenes with extreme lighting variations ("Natural"); and images of less challenging indoor scenes ("CASIA-Iris-Thousand"). We show that SET excelled in outdoor conditions and was faster, without significant loss of accuracy, indoors. SET offers a low-cost eye-tracking solution, delivering high performance even in challenging outdoor environments. It is offered through an open-source MATLAB toolkit as well as a dynamic-link library ("DLL"), which can be imported into many programming languages including C# and Visual Basic in Windows OS (www.eyegoeyetracker.co.uk). PMID:25914641
NASA Astrophysics Data System (ADS)
Dijk, J.; Bijl, P.; Oppeneer, M.; ten Hove, R. J. M.; van Iersel, M.
2017-10-01
The Electro-Optical Signal Transmission and Ranging (EOSTAR) model is an image-based Tactical Decision Aid (TDA) for thermal imaging systems (MWIR/LWIR) developed for a sea environment with an extensive atmosphere model. The Triangle Orientation Discrimination (TOD) Target Acquisition model calculates the sensor and signal processing effects on a set of input triangle test pattern images, judges their orientation using humans or a Human Visual System (HVS) model, and derives the system image quality and operational field performance from the correctness of the responses. Combining the TOD model and EOSTAR basically provides the possibility of modeling Target Acquisition (TA) performance over the exact path from scene to observer. In this method, ship-representative TOD test patterns are placed at the position of the real target; the combined effects of the environment (atmosphere, background, etc.), sensor, and signal processing on the image are then calculated using EOSTAR; and finally the results are judged by humans. The thresholds are converted into Detection-Recognition-Identification (DRI) ranges of the real target. Experiments show that combining the TOD model and the EOSTAR model is indeed possible. The resulting images look natural and provide insight into the possibilities of combining the two models. The TOD observation task can be performed well by humans, and the measured TOD is consistent with analytical TOD predictions for the same camera as modeled in the ECOMOS project.
The 3D scanner prototype utilize object profile imaging using line laser and octave software
NASA Astrophysics Data System (ADS)
Nurdini, Mugi; Manunggal, Trikarsa Tirtadwipa; Samsi, Agus
2016-11-01
A three-dimensional scanner, or 3D scanner, is a device that reconstructs a real object into digital form on a computer. 3D scanning is a technology under active development, especially in developed countries, where current 3D scanner devices are advanced versions with very expensive prices. This study is basically a simple prototype of a 3D scanner with a very low investment cost. The 3D scanner prototype consists of a webcam, a rotating-desk system controlled by a stepper motor and an Arduino UNO, and a line laser. The research is limited to objects whose surface lies at the same radius from the central pivot axis. Scanning is performed by imaging the object profile illuminated by the line laser, which is captured by the camera and processed on a computer (image processing) using Octave software. At each image acquisition, the scanned object on the rotating desk is rotated by a certain angle, so for one full turn multiple images covering every side are obtained. The profile is then extracted from all of the images in order to obtain the digital object dimensions. The digital dimensions are calibrated against a length standard, called a gauge block. The overall dimensions are then digitally reconstructed into a three-dimensional object. Validation of the reconstruction against the original object dimensions is expressed as a percentage error. Based on the validation results, the horizontal dimension error is about 5% to 23% and the vertical dimension error is about +/- 3%.
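The core image processing step can be sketched as follows (Python standing in for the paper's Octave code; the stripe-finding rule, scale factor, and axis column are illustrative assumptions): find the laser stripe in each frame, convert stripe positions to radii, and place each profile at the table's rotation angle to form a point cloud.

```python
import numpy as np

def profile_from_frame(frame_red, mm_per_px, axis_col):
    """One laser-line profile: brightest red-channel pixel per image row
    marks the stripe; offsets from the rotation axis become radii (mm)."""
    cols = frame_red.argmax(axis=1)            # stripe column per image row
    return (cols - axis_col) * mm_per_px       # calibrated via gauge block

def point_cloud(frames, mm_per_px, axis_col, mm_per_row):
    """Stack profiles over one full turn into (r, theta, z) points."""
    pts = []
    d_theta = 2 * np.pi / len(frames)
    for i, f in enumerate(frames):
        r = profile_from_frame(f, mm_per_px, axis_col)
        z = np.arange(len(r)) * mm_per_row
        pts.append(np.column_stack([r, np.full_like(r, i * d_theta), z]))
    return np.vstack(pts)
```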
Investigation of Lung Structure-Function Relationships Using Hyperpolarized Noble Gases
NASA Astrophysics Data System (ADS)
Thomen, Robert P.
Magnetic Resonance Imaging (MRI) is an application of the nuclear magnetic resonance (NMR) phenomenon to non-invasively generate 3D tomographic images. MRI is an emerging modality for the lung, but it suffers from low sensitivity due to inherently low tissue density and short T2*. Hyperpolarization is a process by which the nuclear contribution to the NMR signal is greatly enhanced, to more than 100,000 times that of samples in thermal equilibrium. The noble gases 3He and 129Xe are most often hyperpolarized by transfer of angular momentum from light through the electron of a vaporized alkali metal to the noble gas nucleus (called Spin Exchange Optical Pumping). The enhancement in NMR signal is so great that the gas itself can be imaged via MRI, and because noble gases are chemically inert, they can be safely inhaled by a subject and the gas distribution within the interior of the lung can be imaged. The mechanics of respiration is an elegant physical process by which air is brought into the distal airspaces of the lungs for oxygen/carbon dioxide gas exchange with blood. Proper description of lung function is therefore intricately related to its physical structure, and the basic mechanical operation of healthy lungs -- from pressure-driven airflow, to alveolar airspace gas kinetics, to gas exchange by blood/gas concentration gradients, to elastic contraction of parenchymal tissue -- is a process decidedly governed by the laws of physics. This dissertation describes experiments investigating the relationship of lung structure and function using hyperpolarized (HP) noble gas MRI. In particular, HP gases are applied to the study of several pulmonary diseases, each of which demonstrates unique structure-function abnormalities: asthma, cystic fibrosis, and chronic obstructive pulmonary disease. Successful implementation of an HP gas acquisition protocol for pulmonary studies is an involved and stratified undertaking which requires a solid theoretical foundation in NMR and hyperpolarization theory, construction of dedicated hardware, development of dedicated software, and appropriate image analysis techniques for all acquired data. The author has been actively involved in each of these and has dedicated specific chapters of this dissertation to their description. First, a brief description of lung structure-function investigations and pulmonary imaging is given (chapter 1). Brief discussions of basic NMR, MRI, and hyperpolarization theory follow (chapters 2 and 3), together with their particular methods of implementation in this work (chapters 4 and 5). Analysis of acquired HP gas images is discussed (chapter 6), and the investigational procedures and results for each lung disease examined are detailed (chapter 7). Finally, a brief digression on the strengths and limitations of HP gas MRI is provided (chapter 8).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bourne, Roger
2013-03-15
This commentary outlines how magnetic resonance imaging (MRI) microscopy studies of prostate tissue samples and whole organs have shed light on a number of clinical imaging mysteries and may enable more effective development of new clinical imaging methods.
Colour flow and motion imaging.
Evans, D H
2010-01-01
Colour flow imaging (CFI) is an ultrasound imaging technique whereby colour-coded maps of tissue velocity are superimposed on grey-scale pulse-echo images of tissue anatomy. The most widespread use of the method is to image the movement of blood through arteries and veins, but it may also be used to image the motion of solid tissue. The production of velocity information is technically more demanding than the production of the anatomical information, partly because the target of interest is often blood, which backscatters significantly less power than solid tissues, and partly because several transmit-receive cycles are necessary for each velocity estimate. This review first describes the various components of basic CFI systems necessary to generate the velocity information and to combine it with anatomical information. It then describes a number of variations on the basic autocorrelation technique, including cross-correlation-based techniques, power Doppler, Doppler tissue imaging, and three-dimensional (3D) Doppler imaging. Finally, a number of limitations of current techniques and some potential solutions are reviewed.
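As a concrete anchor for the autocorrelation technique reviewed above, here is a minimal sketch of the classic lag-one (Kasai) velocity estimator, assuming complex baseband (I/Q) echo ensembles; sign conventions, wall filtering, and aliasing handling are omitted:

```python
import numpy as np

def kasai_velocity(iq, prf, f0, c=1540.0):
    """Lag-one autocorrelation velocity estimate per pixel.
    iq: complex baseband ensemble, shape (n_pulses, H, W);
    prf: pulse repetition frequency (Hz); f0: transmit frequency (Hz);
    c: assumed speed of sound (m/s). Returns axial velocity in m/s."""
    R1 = (iq[1:] * np.conj(iq[:-1])).mean(axis=0)  # lag-1 autocorrelation
    return c * prf * np.angle(R1) / (4 * np.pi * f0)
```

The need for several transmit-receive cycles per estimate, noted in the review, shows up directly as the ensemble dimension averaged over in R1.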
Long-range non-contact imaging photoplethysmography: cardiac pulse wave sensing at a distance
NASA Astrophysics Data System (ADS)
Blackford, Ethan B.; Estepp, Justin R.; Piasecki, Alyssa M.; Bowers, Margaret A.; Klosterman, Samantha L.
2016-03-01
Non-contact, imaging photoplethysmography uses photo-optical sensors to measure variations in light absorption, caused by blood volume pulsations, to assess cardiopulmonary parameters including pulse rate, pulse rate variability, and respiration rate. Recently, researchers have studied the applications and methodology of imaging photoplethysmography. Basic research has examined some of the variables affecting data quality and accuracy of imaging photoplethysmography, including signal processing, imager parameters (e.g., frame rate and resolution), lighting conditions, subject motion, and subject skin tone. This technology may be beneficial for long-term or continuous monitoring where contact measurements may be harmful (e.g., skin sensitivities) or where imperceptible or unobtrusive measurements are desirable. Using previously validated signal processing methods, we examined the effects of imager-to-subject distance on one-minute, windowed estimates of pulse rate. High-resolution video of 22 stationary participants was collected using an enthusiast-grade, mirrorless, digital camera equipped with a fully manual, super-telephoto lens at distances of 25, 50, and 100 meters, with simultaneous contact measurements of electrocardiography and fingertip photoplethysmography. By comparison, previous studies have usually been conducted with imager-to-subject distances of up to only a few meters. Mean absolute error for one-minute, windowed pulse rate estimates (compared to those derived from gold-standard electrocardiography) was 2.0, 4.1, and 10.9 beats per minute at distances of 25, 50, and 100 meters, respectively. Long-range imaging presents several unique challenges, including decreased observed light reflectance and smaller regions of interest. Nevertheless, these results demonstrate that accurate pulse rate measurements can be obtained over long imager-to-subject distances despite these constraints.
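A minimal sketch of one common way to turn a skin-region color trace into a windowed pulse-rate estimate (band-pass to the cardiac band, then take the spectral peak); this is an illustrative baseline, not the paper's validated processing chain:

```python
import numpy as np
from scipy import signal

def pulse_rate_bpm(green_trace, fs):
    """Estimate pulse rate (beats/min) from the mean green-channel trace
    of a skin region sampled at fs Hz over a one-minute window."""
    b, a = signal.butter(3, [0.7, 4.0], btype="bandpass", fs=fs)
    filtered = signal.filtfilt(b, a, signal.detrend(green_trace))
    f, pxx = signal.welch(filtered, fs=fs, nperseg=min(len(filtered), 512))
    band = (f >= 0.7) & (f <= 4.0)          # ~42-240 bpm cardiac band
    return 60.0 * f[band][np.argmax(pxx[band])]
```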
NASA Technical Reports Server (NTRS)
LaBonte, Barry J.
2004-01-01
A small amount of work has been done on this project; the strategy to be adopted has been better defined, though no experimental work has been started. 1) Wavefront error signals: The best choice appears to be to use a lenslet array at a pupil image to produce defocused image pairs for each subaperture, and then use the method proposed by Molodij et al. to produce subaperture curvature signals. Basically, this method samples a moderate number of locations in the image where the value of the image Laplacian is high, then takes the curvature signal from the difference of the Laplacians of the extrafocal images at those locations. The tip-tilt error is obtained from the temporal dependence of the first spatial derivatives of an in-focus image, at selected locations where these derivatives are significant. The wavefront tilt can be obtained from the full-aperture image. 2) Extrafocal image generation: The important aspect here is to generate symmetrically defocused images with dynamically adjustable defocus. The adjustment is needed because larger defocus is required before the feedback loop is closed and at times when the seeing is worse. The usual membrane mirror may be the best choice, though other options should be explored. 3) Detector: Since the proposed sensor is to work on solar granulation rather than a point source, an array detector for each subaperture is required. A fast CMOS camera such as that developed by the National Solar Observatory would be a satisfactory choice. 4) Processing: Processing requirements have not been defined in detail, though significantly fewer operations per cycle are required than for a correlation tracker.
Image, imagination, and reality: on effectiveness of introductory work with vocalists.
Gullaer, Irene; Walker, Robert; Badin, Pierre; Lamalle, Laurent
2006-01-01
Fifty-four sung tokens, each consisting of eight images, were generated with the help of the magnetic resonance imaging (MRI) technique to demonstrate the work of the intrapharyngeal muscles when singing and speaking, and to aid the educational process. The MRI images can be used as part of a visualization feedback method in vocal education and contribute to the creation of proper mental images. The use of visualization (pictures, drafts, graphs, spectra, MRI images, etc.), along with mental images, facilitates simplification and acceleration of the process of understanding and learning how to master the basics of vocal technique, especially in the initial period of study. It is shown that work on muscle development and the use of imagination should progress in close interaction with each other. For higher effectiveness and tangible results, the mental images used by a vocal pedagogue should correspond to the technical and emotional level of the student. Therefore, mental images have to undergo the same evolution as articulation technique - from simplified and comprehensible to complex and abstract. Our integrated approach suggests continuing the work on muscle development and the use of imagination in singing classes, employing the experience of voice-speech teachers. Their exercises are modified using the empirical method and other techniques developed creatively by singing teachers. In this method, sensitivity towards the state of the tissues becomes increasingly refined; students acquire conscious control over the muscle work and gain full awareness of both sensation and muscle activity. As a result, a complex of professional conditioned reflexes is developed. A case study of the New Zealand experience was conducted with groups of Maori and European students. Unique properties and trends in the voices of Maori people are discussed.
A Study of the Effects of Processing Chemistry on the Holographic Image Space.
NASA Astrophysics Data System (ADS)
Kocher, Clive Joseph
Available from UMI in association with The British Library. Processing methods for reflection and transmission holograms were evaluated with a view to minimising distortion in the images of small, metallic, near-field subjects, whilst retaining optimum quality. The study was limited to recordings made with the HeNe laser (633 nm) in conjunction with the Agfa Gevaert 8E75 HD silver halide emulsion on glass or film support (5" x 4" format). Simple ray diagrams were used to help predict angular distortion arising from emulsion shrinkage for a two-dimensional model. The main conclusions are: (a) Serious distortion of the order of several millimetres, and loss of resolution, will occur in the images of reflection holograms unless careful attention is given to processing procedures. Evidence supports the hypothesis that shrinkage due to processing causes the fringe system to collapse, with a resultant change in inclination angle and hence a distortion of the reconstructed image. Minimum distortion occurs with a laser-reconstructed hologram processed in a high-tanning developer and rehalogenating bleach, none being detected under the test conditions. (b) The same problem was not apparent for the transmission hologram, due to a different fringe orientation; within the limitations of the measuring system, no distortion was detected for any processing system. Comparative tests were made to evaluate the differences in performance for the Agfa 8E75 HD emulsion on plate and film support. Results show a significant increase in speed for film (as high as x4) and shrinkage (~3%) under all processing conditions. The advantages of using Phenidone-based developers are shown. The report also includes a comprehensive background theory section covering basic concepts, silver halide recording material, holographic processing chemistry, distortion in holograms, and pulsed laser holography. A review of previous work on phase holograms is given. Although primarily intended for measurement, this report contains useful information of benefit to display holography.
Cogbill, Thomas H; Ziegelbein, Kurt J
2011-02-01
The basic principles underlying computed tomography, magnetic resonance, and ultrasound are reviewed to promote better understanding of the properties and appropriate applications of these 3 common imaging modalities. A glossary of frequently used terms for each technique is appended for convenience. Risks to patient safety including contrast-induced nephropathy, radiation-induced malignancy, and nephrogenic systemic fibrosis are discussed. Copyright © 2011 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Munoz, Karen E.; Hyde, Luke W.; Hariri, Ahmad R.
2009-01-01
Imaging genetics is an experimental strategy that integrates molecular genetics and neuroimaging technology to examine biological mechanisms that mediate differences in behavior and the risks for psychiatric disorder. The basic principles in imaging genetics and the development of the field are discussed.
Astronomy in the Cloud: Using MapReduce for Image Coaddition
NASA Astrophysics Data System (ADS)
Wiley, Keith; Connolly, A.; Gardner, J.; Krughoff, S.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.
2011-01-01
In the coming decade, astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. The study of these sources will involve computational challenges such as anomaly detection, classification, and moving object tracking. Since such studies require the highest quality data, methods such as image coaddition, i.e., registration, stacking, and mosaicing, will be critical to scientific investigation. With a requirement that these images be analyzed on a nightly basis to identify moving sources, e.g., asteroids, or transient objects, e.g., supernovae, these data streams present many computational challenges. Given the quantity of data involved, the computational load of these problems can only be addressed by distributing the workload over a large number of nodes. However, the high data throughput demanded by these applications may present scalability challenges for certain storage architectures. One scalable data-processing method that has emerged in recent years is MapReduce, and in this paper we focus on its popular open-source implementation called Hadoop. In the Hadoop framework, the data is partitioned among storage attached directly to worker nodes, and the processing workload is scheduled in parallel on the nodes that contain the required input data. A further motivation for using Hadoop is that it allows us to exploit cloud computing resources, i.e., platforms where Hadoop is offered as a service. We report on our experience implementing a scalable image-processing pipeline for the SDSS imaging database using Hadoop. This multi-terabyte imaging dataset provides a good testbed for algorithm development since its scope and structure approximate future surveys. First, we describe MapReduce and how we adapted image coaddition to the MapReduce framework. Then we describe a number of optimizations to our basic approach and report experimental results comparing their performance. This work is funded by the NSF and by NASA.
Astronomy in the Cloud: Using MapReduce for Image Co-Addition
NASA Astrophysics Data System (ADS)
Wiley, K.; Connolly, A.; Gardner, J.; Krughoff, S.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.
2011-03-01
In the coming decade, astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. The study of these sources will involve computation challenges such as anomaly detection and classification and moving-object tracking. Since such studies benefit from the highest-quality data, methods such as image co-addition, i.e., astrometric registration followed by per-pixel summation, will be a critical preprocessing step prior to scientific investigation. With a requirement that these images be analyzed on a nightly basis to identify moving sources such as potentially hazardous asteroids or transient objects such as supernovae, these data streams present many computational challenges. Given the quantity of data involved, the computational load of these problems can only be addressed by distributing the workload over a large number of nodes. However, the high data throughput demanded by these applications may present scalability challenges for certain storage architectures. One scalable data-processing method that has emerged in recent years is MapReduce, and in this article we focus on its popular open-source implementation called Hadoop. In the Hadoop framework, the data are partitioned among storage attached directly to worker nodes, and the processing workload is scheduled in parallel on the nodes that contain the required input data. A further motivation for using Hadoop is that it allows us to exploit cloud computing resources: i.e., platforms where Hadoop is offered as a service. We report on our experience of implementing a scalable image-processing pipeline for the SDSS imaging database using Hadoop. This multiterabyte imaging data set provides a good testbed for algorithm development, since its scope and structure approximate future surveys. First, we describe MapReduce and how we adapted image co-addition to the MapReduce framework. Then we describe a number of optimizations to our basic approach and report experimental results comparing their performance.
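Co-addition, defined above as astrometric registration followed by per-pixel summation, maps naturally onto the two MapReduce phases. The sketch below shows only that structure; it is not the authors' Hadoop pipeline, and tile_of() and register() are hypothetical stand-ins for the real astrometric lookup and resampling steps:

```python
from collections import defaultdict
import numpy as np

def tile_of(wcs):
    """Hypothetical stand-in: bin each exposure by the integer degree of
    its (ra, dec) centre. A real pipeline would use proper WCS geometry."""
    ra, dec = wcs
    return (int(ra), int(dec))

def register(pixels, wcs):
    """Hypothetical stand-in for astrometric resampling: a no-op here,
    assuming all exposures already share the tile's pixel grid."""
    return pixels

def map_phase(images):
    # Map: emit (sky-tile key, registered image) pairs.
    for wcs, pixels in images:
        yield tile_of(wcs), register(pixels, wcs)

def reduce_phase(pairs):
    # Reduce: per-pixel summation of every exposure landing on a tile.
    sums, counts = defaultdict(float), defaultdict(int)
    for key, img in pairs:
        sums[key] = sums[key] + img   # numpy adds pixel-wise
        counts[key] += 1
    return {k: sums[k] / counts[k] for k in sums}  # mean co-add per tile

# toy usage: three 4x4 exposures falling on the same sky tile
exposures = [((10.2, -5.7), np.random.rand(4, 4)) for _ in range(3)]
coadds = reduce_phase(map_phase(exposures))
```

In a real Hadoop deployment the map tasks run on whichever nodes hold the input exposures, and the shuffle groups all images for one output tile onto a single reducer, which is what makes the per-pixel summation scale.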
Imaging Thermal He(+) in Geospace from the Lunar Surface
NASA Technical Reports Server (NTRS)
Gallagher, D. L.; Sandel, B. R.; Adrian, Mark L.; Goldstein, Jerry; Jahn, Joerg-Micha; Spasojevic, Maria; Griffin, Brand
2007-01-01
By mass, thermal plasma dominates near-earth space and strongly influences the transport of energy and mass into the earth's atmosphere. It is proposed to play an important role in modifying the strength of space weather storms by its presence in regions of magnetic reconnection in the dayside magnetopause and in the near to mid-magnetotail. Ionospheric-origin thermal plasma also represents the most significant potential loss of atmospheric mass from our planet over geological time. Knowledge of the loss of convected thermal plasma into the solar wind versus its recirculation across high latitudes and through the magnetospheric flanks into the magnetospheric tail will enable determination of the mass balance for this mass-dominant component of the Geospace system and of its influence on global magnetospheric processes that are critical to space weather prediction and hence to the impact of space processes on human technology in space and on Earth. Our proposed concept addresses this basic issue of Geospace dynamics by imaging thermal He(+) ions in extreme ultraviolet light with an instrument on the lunar surface. The concept is derived from the highly successful Extreme Ultraviolet imager (EUV) flown on the Imager for Magnetopause-to-Aurora Global Exploration (IMAGE) spacecraft. From the lunar surface an advanced EUV imager is anticipated to have much higher sensitivity, lower background noise, and higher communication bandwidth back to Earth. From the near-magnetic equatorial location on the lunar surface, such an imager would be ideally located to follow thermal He(+) ions to high latitudes, into the magnetospheric flanks, and into the magnetotail.
Villa, Carlo E.; Caccia, Michele; Sironi, Laura; D'Alfonso, Laura; Collini, Maddalena; Rivolta, Ilaria; Miserocchi, Giuseppe; Gorletta, Tatiana; Zanoni, Ivan; Granucci, Francesca; Chirico, Giuseppe
2010-01-01
The basic research in cell biology and in medical sciences makes large use of imaging tools mainly based on confocal fluorescence and, more recently, on non-linear excitation microscopy. Essentially, the aim is the recognition of selected targets in the image and their tracking in time. We have developed a particle tracking algorithm optimized for low signal-to-noise images, with a minimum set of requirements on the target size and with no a priori knowledge of the type of motion. The image segmentation, based on a combination of size-sensitive filters, does not rely on edge detection and is tailored for targets acquired at low resolution, as in most in-vivo studies. The particle tracking is performed by building, from a stack of Accumulative Difference Images, a single 2D image in which the motion of the whole set of particles is coded in time by a color level. This algorithm, tested here on solid-lipid nanoparticles diffusing within cells and on lymphocytes diffusing in lymph nodes, appears to be particularly useful for cellular and in-vivo microscopy image processing, in which few a priori assumptions on the type, extent and variability of particle motions can be made. PMID:20808918
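The time-color-coding idea can be sketched as follows. This is my reading of the abstract, not the authors' published code: threshold successive difference images and write each frame's moving pixels into a single 2D map whose value encodes the frame index.

```python
import numpy as np

def color_coded_motion_map(stack, thresh=0.1):
    """Collapse a time stack of frames (T, H, W) into one 2D map where
    each pixel stores the (last) frame index at which motion was seen
    there; 0 means no motion detected. A sketch of the accumulative-
    difference-image idea, not the authors' algorithm."""
    stack = np.asarray(stack, dtype=float)
    motion_map = np.zeros(stack.shape[1:], dtype=int)
    for t in range(1, stack.shape[0]):
        diff = np.abs(stack[t] - stack[t - 1])   # frame-to-frame difference
        moving = diff > thresh                   # binary motion mask
        motion_map[moving] = t                   # color level = time index
    return motion_map

# usage: a bright spot drifting right across 20 synthetic frames
frames = np.zeros((20, 64, 64))
for t in range(20):
    frames[t, 30, 10 + 2 * t] = 1.0
trace = color_coded_motion_map(frames)           # nonzero along the path
```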
25 CFR 15.11 - What are the basic steps of the probate process?
Code of Federal Regulations, 2010 CFR
2010-04-01
Title 25 (Indians), Bureau of Indian Affairs, Department of the Interior; Probate of Indian Estates, Section 15.11: What are the basic steps of the probate process? The basic steps of the probate process are: (a) We learn...
Dental Imaging - A basic guide for the radiologist.
Masthoff, Max; Gerwing, Mirjam; Masthoff, Malte; Timme, Maximilian; Kleinheinz, Johannes; Berninger, Markus; Heindel, Walter; Wildgruber, Moritz; Schülke, Christoph
2018-06-18
As dental imaging accounts for approximately 40% of all X-ray examinations in Germany, profound knowledge of this topic is essential not only for the dentist but also for the clinical radiologist. This review focuses on basic imaging findings regarding the teeth. Therefore, tooth structure, currently available imaging techniques and common findings in conserving dentistry, including endodontology, periodontology, implantology and dental trauma, are presented. A literature search on the current state of dental radiology was performed using PubMed. Currently, the most frequent imaging techniques are the orthopantomogram (OPG) and the single-tooth radiograph, as well as computed tomography (CT) and cone-beam CT, mainly for implantology (planning or postoperative control) or trauma indications. Early diagnosis and correct classification of dental trauma in particular, such as dental pulp involvement, prevent treatment delays and a worsening of therapy options and prognosis. Furthermore, teeth are commonly a hidden focus of infection. Since radiologists are frequently confronted with dental imaging, either concerning a particular question such as a trauma patient or regarding incidental findings in head and neck imaging, further training in this field is more than worthwhile to facilitate early and sufficient dental treatment. · This review focuses on dental imaging techniques and the most important pathologies. · Dental pathologies may be not only locally but also systemically relevant. · Reporting of dental findings is important for best patient care. · Masthoff M, Gerwing M, Masthoff M et al. Dental Imaging - A basic guide for the radiologist. Fortschr Röntgenstr 2018; DOI: 10.1055/a-0636-4129.
Basic Energy Sciences Program Update
DOE Office of Scientific and Technical Information (OSTI.GOV)
None, None
2016-01-04
The U.S. Department of Energy’s (DOE) Office of Basic Energy Sciences (BES) supports fundamental research to understand, predict, and ultimately control matter and energy at the electronic, atomic, and molecular levels to provide the foundations for new energy technologies and to support DOE missions in energy, environment, and national security. The research disciplines covered by BES—condensed matter and materials physics, chemistry, geosciences, and aspects of physical biosciences—are those that discover new materials and design new chemical processes. These disciplines touch virtually every aspect of energy resources, production, conversion, transmission, storage, efficiency, and waste mitigation. BES also plans, constructs, and operates world-class scientific user facilities that provide outstanding capabilities for imaging and spectroscopy, characterizing materials of all kinds ranging from hard metals to fragile biological samples, and studying the chemical transformation of matter. These facilities are used to correlate the microscopic structure of materials with their macroscopic properties and to study chemical processes. Such experiments provide critical insights to electronic, atomic, and molecular configurations, often at ultrasmall length and ultrafast time scales.
The Raptor Real-Time Processing Architecture
NASA Astrophysics Data System (ADS)
Galassi, M.; Starr, D.; Wozniak, P.; Brozdin, K.
The primary goal of Raptor is ambitious: to identify interesting optical transients from very wide field of view telescopes in real time, and then to quickly point the higher resolution Raptor "fovea" cameras and spectrometer to the location of the optical transient. The most interesting of Raptor's many applications is the real-time search for orphan optical counterparts of Gamma Ray Bursts. The sequence of steps (data acquisition, basic calibration, source extraction, astrometry, relative photometry, the smarts of transient identification and elimination of false positives, telescope pointing feedback, etc.) is implemented with a "component" approach. All basic elements of the pipeline functionality have been written from scratch or adapted (as in the case of SExtractor for source extraction) to form a consistent modern API operating on memory resident images and source lists. The result is a pipeline which meets our real-time requirements and which can easily operate as a monolithic or distributed processing system. Finally, the Raptor architecture is entirely based on free software (sometimes referred to as "open source" software). In this paper we also discuss the interplay between various free software technologies in this type of astronomical problem.
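The "component" approach lends itself to a simple structural sketch: each stage consumes and produces memory-resident data, so the same chain can run as one process or be distributed. The stage names and context dictionary below are illustrative assumptions, not Raptor's actual API:

```python
from typing import Callable, Dict, List

# A stage transforms a shared in-memory context (images, source lists, ...).
Stage = Callable[[Dict], Dict]

def run_pipeline(ctx: Dict, stages: List[Stage]) -> Dict:
    """Run the stages in order; because each operates on memory-resident
    data, the chain can run monolithically or be split across hosts."""
    for stage in stages:
        ctx = stage(ctx)
    return ctx

# Illustrative stages standing in for calibration, extraction, astrometry.
def calibrate(ctx: Dict) -> Dict:
    ctx["image"] = ctx["image"] - ctx.get("bias", 0.0)   # basic calibration
    return ctx

def extract(ctx: Dict) -> Dict:
    ctx["sources"] = [(1.0, 2.0)]                        # placeholder detector
    return ctx

def astrometry(ctx: Dict) -> Dict:
    ctx["wcs"] = "fitted"                                # placeholder solution
    return ctx

result = run_pipeline({"image": 100.0, "bias": 3.0},
                      [calibrate, extract, astrometry])
```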
A large, switchable optical clearing skull window for cerebrovascular imaging
Zhang, Chao; Feng, Wei; Zhao, Yanjie; Yu, Tingting; Li, Pengcheng; Xu, Tonghui; Luo, Qingming; Zhu, Dan
2018-01-01
Rationale: Intravital optical imaging is a significant method for investigating cerebrovascular structure and function. However, its imaging contrast and depth are limited by the turbid skull. Tissue optical clearing has a great potential for solving this problem. Our goal was to develop a transparent skull window, without performing a craniotomy, for use in assessing cerebrovascular structure and function. Methods: Skull optical clearing agents were topically applied to the skulls of mice to create a transparent window within 15 min. The clearing efficacy, repeatability, and safety of the skull window were then investigated. Results: Imaging through the optical clearing skull window enhanced both the contrast and the depth of intravital imaging. The skull window could be used on 2-8-month-old mice and could be expanded from regional to bi-hemispheric. In addition, the window could be repeatedly established without inducing observable inflammation and metabolic toxicity. Conclusion: We successfully developed an easy-to-handle, large, switchable, and safe optical clearing skull window. Combined with various optical imaging techniques, cerebrovascular structure and function can be observed through this optical clearing skull window. Thus, it has the potential for use in basic research on the physiopathologic processes of cortical vessels. PMID:29774069
Basic test framework for the evaluation of text line segmentation and text parameter extraction.
Brodić, Darko; Milivojević, Dragan R; Milivojević, Zoran
2010-01-01
Text line segmentation is an essential stage in off-line optical character recognition (OCR) systems. It is key because inaccurately segmented text lines will lead to OCR failure. Text line segmentation of handwritten documents is a complex and diverse problem, complicated by the nature of handwriting. Hence, text line segmentation is a leading challenge in handwritten document image processing. Due to inconsistencies in the measurement and evaluation of text segmentation algorithm quality, some basic set of measurement methods is required. Currently, there is no commonly accepted one, and all algorithm evaluation is custom-oriented. In this paper, a basic test framework for the evaluation of text feature extraction algorithms is proposed. This test framework consists of a few experiments primarily linked to text line segmentation, skew rate and reference text line evaluation. Although they are mutually independent, the results obtained are strongly cross-linked. In the end, its suitability for different types of letters and languages as well as its adaptability are its main advantages. Thus, the paper presents an efficient evaluation method for text analysis algorithms.
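The framework scores segmentation algorithms rather than prescribing one. For concreteness, a common baseline that such a framework might evaluate is horizontal projection-profile line segmentation; this is a standard technique assumed here, not one taken from the paper:

```python
import numpy as np

def segment_lines(binary_img, min_ink=1):
    """Split a binarized page (ink=1, background=0) into text-line bands
    using the horizontal projection profile: rows whose ink count falls
    below `min_ink` are treated as inter-line gaps."""
    profile = binary_img.sum(axis=1)           # ink pixels per row
    inked = profile >= min_ink
    lines, start = [], None
    for row, on in enumerate(inked):
        if on and start is None:
            start = row                        # a line band begins
        elif not on and start is not None:
            lines.append((start, row))         # band ends before this row
            start = None
    if start is not None:
        lines.append((start, len(inked)))
    return lines                               # list of (top, bottom) rows

# usage: two synthetic 5-row-tall "text lines"
page = np.zeros((40, 200), dtype=int)
page[5:10, :] = 1
page[20:25, :] = 1
assert segment_lines(page) == [(5, 10), (20, 25)]
```

Projection profiles fail on skewed or touching handwritten lines, which is exactly the regime the paper's skew-rate and reference-line experiments are designed to probe.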
Current and future trends in marine image annotation software
NASA Astrophysics Data System (ADS)
Gomes-Pereira, Jose Nuno; Auger, Vincent; Beisiegel, Kolja; Benjamin, Robert; Bergmann, Melanie; Bowden, David; Buhl-Mortensen, Pal; De Leo, Fabio C.; Dionísio, Gisela; Durden, Jennifer M.; Edwards, Luke; Friedman, Ariell; Greinert, Jens; Jacobsen-Stout, Nancy; Lerner, Steve; Leslie, Murray; Nattkemper, Tim W.; Sameoto, Jessica A.; Schoening, Timm; Schouten, Ronald; Seager, James; Singh, Hanumant; Soubigou, Olivier; Tojeira, Inês; van den Beld, Inge; Dias, Frederico; Tempera, Fernando; Santos, Ricardo S.
2016-12-01
Given the need to describe, analyze and index large quantities of marine imagery data for exploration and monitoring activities, a range of specialized image annotation tools have been developed worldwide. Image annotation, the process of transposing objects or events represented in a video or still image to the semantic level, may involve human interactions and computer-assisted solutions. Marine image annotation software (MIAS) has enabled over 500 publications to date. We review the functioning, application trends and developments by comparing general and advanced features of 23 different tools utilized in underwater image analysis. MIAS requiring human input are basically graphical user interfaces, with a video player or image browser that recognizes a specific time code or image code, allowing events to be logged in a time-stamped (and/or geo-referenced) manner. MIAS differ from similar software in their capability to integrate data associated with video collection, the simplest being the position coordinates of the video recording platform. MIAS have three main characteristics: annotating events in real time, annotating after acquisition, and interacting with a database. They range from simple annotation interfaces to full onboard data management systems with a variety of toolboxes. Advanced packages allow data from multiple sensors or multiple annotators to be input and displayed via intranet or internet. Post hoc human-mediated annotation often includes tools for data display and image analysis, e.g. length, area, image segmentation, point count, and in a few cases the possibility of browsing and editing previous dive logs or analyzing the annotations. The interaction with a database allows the automatic integration of annotations from different surveys, repeated annotation and collaborative annotation of shared datasets, and browsing and querying of data. Progress in the field of automated annotation is mostly in post-processing, for stable platforms or still images; integration into available MIAS is currently limited to semi-automated processes of pixel recognition through computer-vision modules that compile expert-based knowledge. Important topics aiding the choice of a specific software are outlined, the ideal software is discussed and future trends are presented.
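The shared core of these tools, time-stamped and optionally geo-referenced event logging against a queryable store, can be sketched as a generic data model. Field names here are my own assumptions, not any specific MIAS schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Annotation:
    """One logged event: what was seen, when (video time code), and
    optionally where (platform position at that moment)."""
    label: str                      # e.g. taxon or event type
    timecode_s: float               # seconds into the video
    lat: Optional[float] = None     # geo-reference, if navigation data exist
    lon: Optional[float] = None
    annotator: str = "anonymous"

@dataclass
class DiveLog:
    """In-memory stand-in for the database the tools interact with."""
    survey: str
    events: List[Annotation] = field(default_factory=list)

    def log(self, ann: Annotation) -> None:
        self.events.append(ann)

    def query(self, label: str) -> List[Annotation]:
        # database-style querying: all logged sightings of one label
        return [a for a in self.events if a.label == label]

log = DiveLog("survey-001")
log.log(Annotation("coral", timecode_s=734.2, lat=38.5, lon=-28.6))
matches = log.query("coral")
```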
NASA Technical Reports Server (NTRS)
Davis, Frank W.; Quattrochi, Dale A.; Ridd, Merrill K.; Lam, Nina S.-N.; Walsh, Stephen J.
1991-01-01
This paper discusses some basic scientific issues and research needs in the joint processing of remotely sensed and GIS data for environmental analysis. Two general topics are treated in detail: (1) scale dependence of geographic data and the analysis of multiscale remotely sensed and GIS data, and (2) data transformations and information flow during data processing. The discussion of scale dependence focuses on the theory and applications of spatial autocorrelation, geostatistics, and fractals for characterizing and modeling spatial variation. Data transformations during processing are described within the larger framework of geographical analysis, encompassing sampling, cartography, remote sensing, and GIS. Development of better user interfaces between image processing, GIS, database management, and statistical software is needed to expedite research on these and other impediments to integrated analysis of remotely sensed and GIS data.
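Spatial autocorrelation, the first of the tools named above for characterizing scale dependence, is commonly quantified with Moran's I. The sketch below uses rook-adjacency weights on a raster and is illustrative rather than drawn from the paper:

```python
import numpy as np

def morans_i(grid):
    """Moran's I for a 2D array with rook (4-neighbour) contiguity:
    I = (n / W) * sum_ij w_ij (x_i - m)(x_j - m) / sum_i (x_i - m)^2,
    where W is the sum of all weights and m the grid mean."""
    x = np.asarray(grid, dtype=float)
    d = x - x.mean()
    num, W = 0.0, 0
    rows, cols = x.shape
    for i in range(rows):
        for j in range(cols):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    num += d[i, j] * d[ni, nj]   # w_ij = 1 for rook neighbours
                    W += 1
    return (x.size / W) * num / (d ** 2).sum()

# usage: a smooth gradient is strongly positively autocorrelated
gradient = np.add.outer(np.arange(8), np.arange(8))
print(morans_i(gradient))   # close to +1
```

Values near +1 indicate clustered (scale-dependent) variation, values near 0 spatial randomness, which is why the statistic is useful for deciding at what resolution remotely sensed and GIS layers can be meaningfully combined.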
Comparative Analysis of Reconstructed Image Quality in a Simulated Chromotomographic Imager
2014-03-01
Thesis. Reconstructed image quality in a simulated chromotomographic imager is compared for a variety of scenes. Reconstructed image quality is highly dependent on the initial target hypercube, so a total of 54 initial target hypercubes were compared. One example uses five basic images of a backlit bar chart with random intensity, at 100 nm separation.
Full-field wrist pulse signal acquisition and analysis by 3D Digital Image Correlation
NASA Astrophysics Data System (ADS)
Xue, Yuan; Su, Yong; Zhang, Chi; Xu, Xiaohai; Gao, Zeren; Wu, Shangquan; Zhang, Qingchuan; Wu, Xiaoping
2017-11-01
Pulse diagnosis is an essential part of the four basic diagnostic methods (inspection, listening, inquiring and palpation) in traditional Chinese medicine, but it depends on long training and rich experience, so computerized pulse acquisition has been proposed and studied to ensure objectivity. To imitate the process in which doctors use three fingertips, at different pressures, to feel fluctuations in areas containing three acupoints, we established a five-dimensional pulse signal acquisition system adopting a non-contact optical metrology method, 3D digital image correlation, to record the full-field displacements of skin fluctuations under different pressures. The system realizes real-time full-field vibration mode observation at 10 FPS; the maximum sample frequency is 472 Hz for detailed post-processing. After acquisition, the signals are analyzed according to amplitude, pressure, and pulse wave velocity. The proposed system provides a novel optical approach to digitalizing pulse diagnosis and to massive pulse signal data acquisition for various types of patients.
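Pulse wave velocity from full-field displacement data is typically a transit-time estimate between two measurement points a known distance apart. A minimal sketch under that assumption follows; the abstract does not give the authors' estimator:

```python
import numpy as np

def pulse_wave_velocity(sig_a, sig_b, fs, distance_m):
    """Estimate PWV as distance / transit time, where the transit time is
    the lag maximizing the cross-correlation of two displacement traces
    sampled at fs Hz from skin points `distance_m` apart."""
    a = sig_a - np.mean(sig_a)
    b = sig_b - np.mean(sig_b)
    corr = np.correlate(b, a, mode="full")     # lag of b relative to a
    lag = np.argmax(corr) - (len(a) - 1)       # in samples; >0 means b later
    if lag <= 0:
        raise ValueError("expected the downstream signal to arrive later")
    return distance_m / (lag / fs)

# usage: synthetic pulse arriving 10 ms later at a point 0.02 m downstream
fs = 472.0                                     # Hz, matching the reported rate
t = np.arange(0, 1.0, 1 / fs)
pulse = np.exp(-((t - 0.30) ** 2) / 2e-4)      # upstream waveform
delayed = np.exp(-((t - 0.31) ** 2) / 2e-4)    # 10 ms transit time
print(pulse_wave_velocity(pulse, delayed, fs, 0.02))  # roughly 2 m/s
```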
Determine the Sun's Rotation Period using D.I.Y Sunspotter and Smartphone
NASA Astrophysics Data System (ADS)
Lim, JongHo; Lim, Jihey; Sohn, Jungjoo; Jo, Hoon
2016-04-01
This astronomy education program determines the rotation period of the Sun using a homemade sunspotter built from easily obtainable materials, with a generic smartphone as the detector. Students had immediate opportunities to understand the principles of the telescope and its optical system, and attempts to improve the device emerged while building it: for example, they reduced the number of reflectors to decrease the loss of light and changed its outer shape for easier storage. The D.I.Y. sunspotter adjusts freely on an altazimuth mount and is marked with azimuth and altitude to determine the viewing direction. The images taken with smartphones were processed using Pixlr/editor (a free web-based image processing program). The rotation period of the Sun was calculated using the basic formula, and its accuracy was confirmed by comparison with results from SOHO satellite data. Learning by manufacturing the sunspotter increases understanding of the principles of solar observation and helps students concentrate on the project, following the scientist's practical approach.
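The "basic formula" is not spelled out in the abstract; the standard approach converts a sunspot's longitude drift into a synodic period and then corrects for Earth's orbital motion (textbook relations assumed here):

```latex
% A sunspot drifting \Delta\lambda degrees of heliographic longitude in
% \Delta t days gives the synodic (Earth-relative) rotation period:
T_{\mathrm{syn}} = \frac{360^{\circ}\,\Delta t}{\Delta\lambda}
% Correcting for Earth's orbital motion (T_E \approx 365.25\ \mathrm{d})
% yields the sidereal period:
\frac{1}{T_{\mathrm{sid}}} = \frac{1}{T_{\mathrm{syn}}} + \frac{1}{T_E}
```

With the equatorial value T_syn of about 27.3 d this gives T_sid of about 25.4 d, a useful sanity check against the SOHO comparison mentioned above.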
Soleilhac, Emmanuelle; Nadon, Robert; Lafanechere, Laurence
2010-02-01
Screening compounds with cell-based assays and microscopy image-based analysis is an approach currently favored for drug discovery. Because of its high information yield, the strategy is called high-content screening (HCS). This review covers the application of HCS in drug discovery and in basic research on potential new pathways that can be targeted for the treatment of disease. HCS faces several challenges, however, including the extraction of pertinent information from the massive amount of data generated from images. Several proposed approaches to HCS data acquisition and analysis are reviewed, with solutions drawn from the fields of mathematics, bioinformatics and biotechnology. Potential applications and limits of these recent technical developments are also discussed. HCS is a multidisciplinary and multistep approach for understanding the effects of compounds on biological processes at the cellular level. Reliable results depend on the quality of the overall process and require strong interdisciplinary collaborations.
Large-aperture space optical system testing based on the scanning Hartmann.
Wei, Haisong; Yan, Feng; Chen, Xindong; Zhang, Hao; Cheng, Qiang; Xue, Donglin; Zeng, Xuefeng; Zhang, Xuejun
2017-03-10
Based on the Hartmann testing principle, this paper proposes a novel image quality testing technology that applies to large-aperture space optical systems. Compared with the traditional testing method using a large-aperture collimator, the scanning Hartmann testing technology has great advantages due to its simple structure, low cost, and ability to perform wavefront measurement of an optical system. The basic testing principle of the scanning Hartmann technology, the data processing method, and the simulation process are presented, and simulation results are given to verify the feasibility of the technology. Furthermore, a measuring system was developed to conduct a wavefront measurement experiment on a 200 mm aperture optical system. The small deviation (6.3%) in root mean square (RMS) between the experimental and interferometric results indicates that the testing system can measure low-order aberration correctly, which means that the scanning Hartmann testing technology is able to test the imaging quality of a large-aperture space optical system.
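The Hartmann principle reduces to converting spot displacements behind an aperture mask into local wavefront slopes and integrating them. The cumulative-sum integration below is a deliberately simple stand-in for the paper's (unspecified) data processing method:

```python
import numpy as np

def wavefront_from_spots(dx, dy, focal_len, pitch):
    """Hartmann principle: a spot displaced by (dx, dy) in the detector
    plane implies local wavefront slopes (dx/f, dy/f) at that hole.
    Slopes are integrated on the hole grid (spacing `pitch`) by a simple
    cumulative-sum path scheme, a sketch rather than a least-squares
    reconstructor."""
    sx = np.asarray(dx) / focal_len              # dW/dx per hole
    sy = np.asarray(dy) / focal_len              # dW/dy per hole
    W = np.cumsum(sy, axis=0) * pitch            # integrate down the columns
    W += (np.cumsum(sx[0, :]) * pitch)[None, :]  # plus along the first row
    return W - W.mean()                          # remove piston

def rms(W):
    """RMS wavefront error, the figure of merit quoted in the abstract."""
    return float(np.sqrt(np.mean(W ** 2)))

# usage: a uniform spot shift (pure tilt) reconstructs to a tilted plane
dx = np.full((8, 8), 5e-6)                       # 5 um spot displacements
dy = np.zeros((8, 8))
W = wavefront_from_spots(dx, dy, focal_len=2.0, pitch=0.01)
print(rms(W))
```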
Examples of current radar technology and applications, chapter 5, part B
NASA Technical Reports Server (NTRS)
1975-01-01
Basic principles and tradeoff considerations for SLAR are summarized. There are two fundamental types of SLAR sensors available to the remote sensing user: real aperture and synthetic aperture. The primary difference between the two types is that a synthetic aperture system is capable of significant improvements in target resolution but requires equally significant added complexity and cost. The advantages of real aperture SLAR include long range coverage, all-weather operation, in-flight processing and image viewing, and lower cost. The fundamental limitation of the real aperture approach is target resolution. Synthetic aperture processing is the most practical approach for remote sensing problems that require resolution higher than 30 to 40 m.
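The resolution trade-off summarized above follows from the standard azimuth-resolution relations (textbook formulas, not stated in this chapter summary):

```latex
% Real-aperture azimuth resolution: beam-limited, degrades with range R
% (wavelength \lambda, physical antenna length D):
\delta_{\mathrm{real}} \approx \frac{R\,\lambda}{D}
% Focused synthetic-aperture azimuth resolution: range-independent
\delta_{\mathrm{SAR}} \approx \frac{D}{2}
```

For example, at R = 20 km with a 3 cm wavelength and a 3 m antenna, a real aperture gives roughly 200 m azimuth resolution while a focused synthetic aperture approaches 1.5 m, which is why SAR is the practical choice whenever resolution better than the cited 30 to 40 m is required.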
Li, G; Welander, U; Yoshiura, K; Shi, X-Q; McDavid, W D
2003-11-01
Two digital image processing methods, correction for X-ray attenuation and correction for attenuation and visual response, have been developed. The aim of the present study was to compare digital radiographs before and after correction for attenuation and correction for attenuation and visual response by means of a perceptibility curve test. Radiographs were exposed of an aluminium test object containing holes ranging from 0.03 mm to 0.30 mm with increments of 0.03 mm. Fourteen radiographs were exposed with the Dixi system (Planmeca Oy, Helsinki, Finland) and twelve radiographs were exposed with the F1 iOX system (Fimet Oy, Monninkylä, Finland) from low to high exposures covering the full exposure ranges of the systems. Radiographs obtained from the Dixi and F1 iOX systems were 12 bit and 8 bit images, respectively. Original radiographs were then processed for correction for attenuation and correction for attenuation and visual response. Thus, two series of radiographs were created. Ten viewers evaluated all the radiographs in the same random order under the same viewing conditions. The object detail having the lowest perceptible contrast was recorded for each observer. Perceptibility curves were plotted according to the mean of observer data. The perceptibility curves for processed radiographs obtained with the F1 iOX system are higher than those for originals in the exposure range up to the peak, where the curves are basically the same. For radiographs exposed with the Dixi system, perceptibility curves for processed radiographs are higher than those for originals for all exposures. Perceptibility curves show that for 8 bit radiographs obtained from the F1 iOX system, the contrast threshold was increased in processed radiographs up to the peak, while for 12 bit radiographs obtained with the Dixi system, the contrast threshold was increased in processed radiographs for all exposures. When comparisons were made between radiographs corrected for attenuation and corrected for attenuation and visual response, basically no differences were found. Radiographs processed for correction for attenuation and correction for attenuation and visual response may improve perception, especially for 12 bit originals.
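How "correction for X-ray attenuation" was implemented is not described here. One common way to linearize detector gray levels against absorber thickness uses the Beer-Lambert law, so the sketch below is a hedged illustration under that assumption, with the open-beam level i0 and coefficient mu as assumed parameters rather than values from the study:

```python
import numpy as np

def attenuation_correct(pixels, i0, mu=1.0):
    """Map detector intensities to equivalent absorber thickness via
    Beer-Lambert: I = I0 * exp(-mu * t)  =>  t = -ln(I / I0) / mu.
    The output is linear in attenuating thickness rather than in
    recorded exposure."""
    i = np.clip(np.asarray(pixels, dtype=float), 1.0, None)  # avoid log(0)
    return -np.log(i / i0) / mu

# usage: an 8-bit radiograph with unattenuated (open-beam) level 255
raw = np.array([[255, 180], [120, 60]], dtype=float)
thickness_map = attenuation_correct(raw, i0=255.0)
```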