Science.gov

Sample records for additional image processing

  1. Polyimide processing additives

    NASA Technical Reports Server (NTRS)

    Pratt, J. R.; St. Clair, T. L.; Burks, H. D.; Stoakley, D. M.

    1987-01-01

    A method has been found for enhancing the melt flow of thermoplastic polyimides during processing. A high molecular weight 422 copoly(amic acid) or copolyimide was fused with approximately 0.05 to 5 pct by weight of a low molecular weight amic acid or imide additive, and this melt was studied by capillary rheometry. Excellent flow and improved composite properties on graphite resulted from the addition of a PMDA-aniline additive to LARC-TPI. Solution viscosity studies imply that amic acid additives temporarily lower molecular weight and, hence, enlarge the processing window. Thus, compositions containing the additive have a lower melt viscosity for a longer time than those unmodified.

  2. Polyimide processing additives

    NASA Technical Reports Server (NTRS)

    Fletcher, James C. (Inventor); Pratt, J. Richard (Inventor); St. Clair, Terry L. (Inventor); Stoakley, Diane M. (Inventor); Burks, Harold D. (Inventor)

    1992-01-01

    A process for preparing polyimides having enhanced melt flow properties is described. The process consists of heating a mixture of a high molecular weight poly-(amic acid) or polyimide with a low molecular weight amic acid or imide additive in the range of 0.05 to 15 percent by weight of additive. The polyimide powders so obtained show improved processability, as evidenced by lower melt viscosity by capillary rheometry. Likewise, films prepared from mixtures of polymers with additives show improved processability with earlier onset of stretching by TMA.

  3. Polyimide processing additives

    NASA Technical Reports Server (NTRS)

    Pratt, J. Richard (Inventor); St. Clair, Terry L. (Inventor); Stoakley, Diane M. (Inventor); Burks, Harold D. (Inventor)

    1993-01-01

    A process for preparing polyimides having enhanced melt flow properties is described. The process consists of heating a mixture of a high molecular weight poly-(amic acid) or polyimide with a low molecular weight amic acid or imide additive in the range of 0.05 to 15 percent by weight of the additive. The polyimide powders so obtained show improved processability, as evidenced by lower melt viscosity by capillary rheometry. Likewise, films prepared from mixtures of polymers with additives show improved processability with earlier onset of stretching by TMA.

  4. Oil additive process

    SciTech Connect

    Bishop, H.

    1988-10-18

    This patent describes a method of making an additive comprising: (a) adding 2 parts by volume of 3% sodium hypochlorite to 45 parts by volume of diesel oil fuel to form a sulphur-free fuel, (b) removing all water and foreign matter formed by the sodium hypochlorite, (c) blending 30 parts by volume of 24% lead naphthenate with 15 parts by volume of the sulphur-free fuel and 15 parts by volume of light-weight material oil to form a blended mixture, and (d) heating the blended mixture slowly and uniformly to 152 °F.

  5. Image Processing

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Electronic Imagery, Inc.'s ImageScale Plus software, developed through a Small Business Innovation Research (SBIR) contract with Kennedy Space Center for use on the Space Shuttle orbiter in 1991, enables astronauts to conduct image processing, prepare electronic still camera images in orbit, display them, and downlink images to ground-based scientists for evaluation. Electronic Imagery, Inc.'s ImageCount, a spin-off product of ImageScale Plus, is used to count trees in Florida orange groves. Other applications include x-ray and MRI imagery, textile designs and special effects for movies. As of 1/28/98, the company could not be located; therefore contact/product information is no longer valid.

  6. Subroutines For Image Processing

    NASA Technical Reports Server (NTRS)

    Faulcon, Nettie D.; Monteith, James H.; Miller, Keith W.

    1988-01-01

    Image Processing Library computer program, IPLIB, is collection of subroutines facilitating use of COMTAL image-processing system driven by HP 1000 computer. Functions include addition or subtraction of two images with or without scaling, display of color or monochrome images, digitization of image from television camera, display of test pattern, manipulation of bits, and clearing of screen. Provides capability to read or write points, lines, and pixels from image; read or write at location of cursor; and read or write array of integers into COMTAL memory. Written in FORTRAN 77.

  7. Meteorological image processing applications

    NASA Technical Reports Server (NTRS)

    Bracken, P. A.; Dalton, J. T.; Hasler, A. F.; Adler, R. F.

    1979-01-01

    Meteorologists at NASA's Goddard Space Flight Center are conducting an extensive program of research in weather and climate related phenomena. This paper focuses on meteorological image processing applications directed toward gaining a detailed understanding of severe weather phenomena. In addition, the paper discusses the ground data handling and image processing systems used at the Goddard Space Flight Center to support severe weather research activities and describes three specific meteorological studies which utilized these facilities.

  8. Image Processing

    NASA Technical Reports Server (NTRS)

    1987-01-01

    A new spinoff product was derived from Geospectra Corporation's expertise in processing LANDSAT data in a software package. Called ATOM (for Automatic Topographic Mapping), it's capable of digitally extracting elevation information from stereo photos taken by spaceborne cameras. ATOM offers a new dimension of realism in applications involving terrain simulations, producing extremely precise maps of an area's elevations at a lower cost than traditional methods. ATOM has a number of applications involving defense training simulations and offers utility in architecture, urban planning, forestry, petroleum and mineral exploration.

  9. Digital image processing.

    PubMed

    Seeram, Euclid

    2004-01-01

    Digital image processing is now commonplace in radiology, nuclear medicine and sonography. This article outlines underlying principles and concepts of digital image processing. After completing this article, readers should be able to: List the limitations of film-based imaging. Identify major components of a digital imaging system. Describe the history and application areas of digital image processing. Discuss image representation and the fundamentals of digital image processing. Outline digital image processing techniques and processing operations used in selected imaging modalities. Explain the basic concepts and visualization tools used in 3-D and virtual reality imaging. Recognize medical imaging informatics as a new area of specialization for radiologic technologists. PMID:15352557

  10. [Utility of noise addition image made by using water phantom and image addition and subtraction software].

    PubMed

    Watanabe, Ryo; Ogawa, Masato; Mituzono, Hiroki; Aoki, Takahiro; Hayano, Mizuho; Watanabe, Yuka

    2010-08-20

    In optimizing exposures, it is very important to evaluate the impact of image noise on image quality. To do this, one needs to evaluate how much image noise will make the target disease invisible. In general, however, it is very difficult to acquire images of different quality during a clinical examination. A method of creating a noise addition image by adding image noise to raw data has been reported, but that approach requires a special system and is therefore difficult to implement in many facilities. We have devised a method to easily create a noise addition image using a water phantom and the image addition and subtraction software that accompanies the device. To create a noise addition image, we first made a noise image by subtracting water phantom images with different SD. A noise addition image was then created by adding the noise image to the original image. With this method, a simulation image with an arbitrary SD can be created from the original. Moreover, the noise frequency component of the created noise addition image is the same as that of a real image, so the relationship between image quality and SD in clinical images can be evaluated. Because the noise addition image is created simply with image addition and subtraction software and a water phantom, the method can be implemented in many facilities. PMID:20953102
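The phantom-subtraction procedure described above can be sketched in a few lines of NumPy. This is an illustration of the idea only, not the authors' software: the function name, the 1/sqrt(2) variance correction for subtracting two independent noise realizations, and the synthetic data are our own assumptions.

```python
import numpy as np

def make_noise_addition_image(original, phantom_a, phantom_b, scale=1.0):
    """Sketch of the noise-addition method: phantom_a and phantom_b are two
    water-phantom scans acquired with the same settings. Subtracting them
    cancels the phantom structure and leaves pure noise (with doubled
    variance, hence the 1/sqrt(2)). The noise image is added to the
    clinical original to simulate a lower-dose acquisition."""
    noise = (phantom_a.astype(float) - phantom_b.astype(float)) / np.sqrt(2.0)
    return original.astype(float) + scale * noise

# Demo with synthetic data: flat "phantom" scans plus Gaussian noise.
rng = np.random.default_rng(0)
phantom_a = 100.0 + rng.normal(0.0, 10.0, (64, 64))
phantom_b = 100.0 + rng.normal(0.0, 10.0, (64, 64))
original = rng.normal(50.0, 5.0, (64, 64))

noisy = make_noise_addition_image(original, phantom_a, phantom_b)
# The image SD rises from about 5 toward sqrt(5^2 + 10^2).
```

The `scale` parameter stands in for the "arbitrary SD" the abstract mentions: scaling the noise image before addition simulates different dose levels from one phantom pair.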

  11. Image-Processing Educator

    NASA Technical Reports Server (NTRS)

    Gunther, F. J.

    1986-01-01

    Apple Image-Processing Educator (AIPE) explores ability of microcomputers to provide personalized computer-assisted instruction (CAI) in digital image processing of remotely sensed images. AIPE is "proof-of-concept" system, not polished production system. User-friendly prompts provide access to explanations of common features of digital image processing and of sample programs that implement these features.

  12. IMAGES: An interactive image processing system

    NASA Technical Reports Server (NTRS)

    Jensen, J. R.

    1981-01-01

    The IMAGES interactive image processing system was created specifically for undergraduate remote sensing education in geography. The system is interactive, relatively inexpensive to operate, almost hardware independent, and responsive to numerous users at one time in a time-sharing mode. Most important, it provides a medium whereby theoretical remote sensing principles discussed in lecture may be reinforced in laboratory as students perform computer-assisted image processing. In addition to its use in academic and short course environments, the system has also been used extensively to conduct basic image processing research. The flow of information through the system is discussed including an overview of the programs.

  13. Multispectral imaging and image processing

    NASA Astrophysics Data System (ADS)

    Klein, Julie

    2014-02-01

    The color accuracy of conventional RGB cameras is not sufficient for many color-critical applications. One of these applications, namely the measurement of color defects in yarns, is why Prof. Til Aach and the Institute of Image Processing and Computer Vision (RWTH Aachen University, Germany) started off with multispectral imaging. The first acquisition device was a camera using a monochrome sensor and seven bandpass color filters positioned sequentially in front of it. The camera allowed sampling the visible wavelength range more accurately and reconstructing the spectra for each acquired image position. An overview will be given of several optical and imaging aspects of the multispectral camera that have been investigated. For instance, optical aberrations caused by filters and camera lens deteriorate the quality of captured multispectral images. The different aberrations were analyzed thoroughly and compensated based on models for the optical elements and the imaging chain by utilizing image processing. With this compensation, geometrical distortions disappear and sharpness is enhanced, without reducing the color accuracy of multispectral images. Strong foundations in multispectral imaging were laid and a fruitful cooperation was initiated with Prof. Bernhard Hill. Current research topics like stereo multispectral imaging and goniometric multispectral measurements that are further explored with his expertise will also be presented in this work.

  14. Hyperspectral image processing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hyperspectral image processing refers to the use of computer algorithms to extract, store and manipulate both spatial and spectral information contained in hyperspectral images across the visible and near-infrared portion of the electromagnetic spectrum. A typical hyperspectral image processing work...

  15. Hyperspectral image processing methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hyperspectral image processing refers to the use of computer algorithms to extract, store and manipulate both spatial and spectral information contained in hyperspectral images across the visible and near-infrared portion of the electromagnetic spectrum. A typical hyperspectral image processing work...

  16. Forensic detection of noise addition in digital images

    NASA Astrophysics Data System (ADS)

    Cao, Gang; Zhao, Yao; Ni, Rongrong; Ou, Bo; Wang, Yongbin

    2014-03-01

    We proposed a technique to detect the global addition of noise to a digital image. As an anti-forensics tool, noise addition is typically used to disguise the visual traces of image tampering or to remove the statistical artifacts left behind by other operations. As such, the blind detection of noise addition has become imperative as well as beneficial to authenticate the image content and recover the image processing history, which is the goal of general forensics techniques. Specifically, the special image blocks, including constant and strip ones, are used to construct the features for identifying noise addition manipulation. The influence of noising on blockwise pixel value distribution is formulated and analyzed formally. The methodology of detectability recognition followed by binary decision is proposed to ensure the applicability and reliability of noising detection. Extensive experimental results demonstrate the efficacy of our proposed noising detector.

  17. Hybrid image processing

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1990-01-01

    Partly-digital, partly-optical 'hybrid' image processing attempts to use the properties of each domain to synergistic advantage: while Fourier optics furnishes speed, digital processing allows the use of much greater algorithmic complexity. The video-rate image-coordinate transformation used is a critical technology for real-time hybrid image-pattern recognition. Attention is given to the separation of pose variables, image registration, and both single- and multiple-frame registration.

  18. An interactive image processing system.

    PubMed

    Troxel, D E

    1981-01-01

    A multiuser multiprocessing image processing system has been developed. It is an interactive picture manipulation and enhancement facility which is capable of executing a variety of image processing operations while simultaneously controlling real-time input and output of pictures. It was designed to provide a reliable picture processing system which would be cost-effective in the commercial production environment. Additional goals met by the system include flexibility and ease of operation and modification. PMID:21868923

  19. Image Processing Software

    NASA Technical Reports Server (NTRS)

    1992-01-01

    To convert raw data into environmental products, the National Weather Service and other organizations use the Global 9000 image processing system marketed by Global Imaging, Inc. The company's GAE software package is an enhanced version of the TAE, developed by Goddard Space Flight Center to support remote sensing and image processing applications. The system can be operated in three modes and is combined with HP Apollo workstation hardware.

  20. Apple Image Processing Educator

    NASA Technical Reports Server (NTRS)

    Gunther, F. J.

    1981-01-01

    A software system design is proposed and demonstrated with pilot-project software. The system permits the Apple II microcomputer to be used for personalized computer-assisted instruction in the digital image processing of LANDSAT images. The programs provide data input, menu selection, graphic and hard-copy displays, and both general and detailed instructions. The pilot-project results are considered to be successful indicators of the capabilities and limits of microcomputers for digital image processing education.

  1. Image processing mini manual

    NASA Technical Reports Server (NTRS)

    Matthews, Christine G.; Posenau, Mary-Anne; Leonard, Desiree M.; Avis, Elizabeth L.; Debure, Kelly R.; Stacy, Kathryn; Vonofenheim, Bill

    1992-01-01

    The intent is to provide an introduction to the image processing capabilities available at the Langley Research Center (LaRC) Central Scientific Computing Complex (CSCC). Various image processing software components are described. Information is given concerning the use of these components in the Data Visualization and Animation Laboratory at LaRC.

  2. Processing Visual Images

    SciTech Connect

    Litke, Alan

    2006-03-27

    The back of the eye is lined by an extraordinary biological pixel detector, the retina. This neural network is able to extract vital information about the external visual world, and transmit this information in a timely manner to the brain. In this talk, Professor Litke will describe a system that has been implemented to study how the retina processes and encodes dynamic visual images. Based on techniques and expertise acquired in the development of silicon microstrip detectors for high energy physics experiments, this system can simultaneously record the extracellular electrical activity of hundreds of retinal output neurons. After presenting first results obtained with this system, Professor Litke will describe additional applications of this incredible technology.

  3. Image Processing System

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Mallinckrodt Institute of Radiology (MIR) is using a digital image processing system which employs NASA-developed technology. MIR's computer system is the largest radiology system in the world. It is used in diagnostic imaging. Blood vessels are injected with x-ray dye, and the images which are produced indicate whether arteries are hardened or blocked. A computer program developed by Jet Propulsion Laboratory known as Mini-VICAR/IBIS was supplied to MIR by COSMIC. The program provides the basis for developing the computer imaging routines for data processing, contrast enhancement and picture display.

  4. Energetic additive manufacturing process with feed wire

    SciTech Connect

    Harwell, Lane D.; Griffith, Michelle L.; Greene, Donald L.; Pressly, Gary A.

    2000-11-07

    A process for additive manufacture by energetic wire deposition is described. A source wire is fed into an energy-beam-generated melt pool on a growth surface as the melt pool moves over the growth surface. This process enables the rapid prototyping and manufacture of fully dense, near-net-shape components, as well as cladding and welding processes. Alloys, graded materials, and other inhomogeneous materials can be grown using this process.

  5. Visual color image processing

    NASA Astrophysics Data System (ADS)

    Qiu, Guoping; Schaefer, Gerald

    1999-12-01

    In this paper, we propose a color image processing method that combines modern signal processing techniques with knowledge about the properties of the human color vision system. Color signals are processed differently according to their visual importance. The emphasis of the technique is on preserving the total visual quality of the image while simultaneously taking into account computational efficiency. A specific color image enhancement technique, termed Hybrid Vector Median Filtering, is presented. Computer simulations have been performed to demonstrate that the new approach is technically sound and yields results comparable to or better than traditional methods.
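The paper's hybrid variant (with visual-importance weighting) is not specified in this record, but the plain vector median filter it builds on can be sketched as follows. This brute-force NumPy illustration is our own; the function name and window size are assumptions.

```python
import numpy as np

def vector_median_filter(img, radius=1):
    """Plain vector median filter: each output pixel is the color vector in
    the (2r+1)x(2r+1) window that minimizes the sum of Euclidean distances
    to all other vectors in the window. Unlike per-channel median filtering,
    the output is always one of the original color vectors."""
    h, w, c = img.shape
    out = np.empty((h, w, c), dtype=float)
    padded = np.pad(img.astype(float),
                    ((radius, radius), (radius, radius), (0, 0)), mode="edge")
    for y in range(h):
        for x in range(w):
            win = padded[y:y + 2 * radius + 1,
                         x:x + 2 * radius + 1].reshape(-1, c)
            # Pairwise distances; pick the vector with the smallest total.
            d = np.linalg.norm(win[:, None, :] - win[None, :, :], axis=-1)
            out[y, x] = win[np.argmin(d.sum(axis=1))]
    return out

# An impulse-corrupted pixel on a flat color image is removed, because the
# outlier vector is far from every other member of its window.
img = np.full((5, 5, 3), 128.0)
img[2, 2] = [255.0, 0.0, 0.0]      # one corrupted pixel
clean = vector_median_filter(img)
```

The O(k^2) pairwise-distance step per window is what "hybrid" schemes typically try to cheapen by filtering only visually important pixels this way.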

  6. Computational Process Modeling for Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2014-01-01

    Computational Process and Material Modeling of Powder Bed additive manufacturing of IN 718. Optimize material build parameters with reduced time and cost through modeling. Increase understanding of build properties. Increase reliability of builds. Decrease time to adoption of process for critical hardware. Potential to decrease post-build heat treatments. Conduct single-track and coupon builds at various build parameters. Record build parameter information and QM Meltpool data. Refine Applied Optimization powder bed AM process model using data. Report thermal modeling results. Conduct metallography of build samples. Calibrate STK models using metallography findings. Run STK models using AO thermal profiles and report STK modeling results. Validate modeling with additional build. Photodiode Intensity measurements highly linear with power input. Melt Pool Intensity highly correlated to Melt Pool Size. Melt Pool size and intensity increase with power. Applied Optimization will use data to develop powder bed additive manufacturing process model.

  7. Design of smart imagers with image processing

    NASA Astrophysics Data System (ADS)

    Serova, Evgeniya N.; Shiryaev, Yury A.; Udovichenko, Anton O.

    2005-06-01

    This paper is devoted to the creation of novel CMOS APS imagers with focal-plane parallel image preprocessing for smart technical vision and electro-optical systems based on neural implementation. Using an analysis of the main features of biological vision, the desired characteristics of artificial vision are defined, and the image processing tasks that can be implemented by smart focal-plane preprocessing CMOS imagers with neural networks are determined. The eventual results are important for medicine and aerospace ecological monitoring, and point to the complexity of, and ways toward, CMOS APS neural-net implementation. To reduce real image preprocessing time, special methods based on edge detection and neighboring-frame subtraction are considered and simulated. To select optimal methods and mathematical operators for edge detection, various medical, technical, and aerospace images are tested. An important research direction is the analog implementation of the main preprocessing operations (addition, subtraction, neighboring-frame subtraction, modulus, and edge detection of pixel signals) in the focal plane of CMOS APS imagers. We present the following results: an algorithm of edge detection for analog realization, and patented focal-plane circuits for analog image preprocessing (edge detection and motion detection).

  8. Methods in Astronomical Image Processing

    NASA Astrophysics Data System (ADS)

    Jörsäter, S.

    A Brief Introductory Note; History of Astronomical Imaging; Astronomical Image Data; Images in Various Formats; Digitized Image Data; Digital Image Data; Philosophy of Astronomical Image Processing; Properties of Digital Astronomical Images; Human Image Processing; Astronomical vs. Computer Science Image Processing; Basic Tools of Astronomical Image Processing; Display Applications; Calibration of Intensity Scales; Calibration of Length Scales; Image Re-shaping; Feature Enhancement; Noise Suppression; Noise and Error Analysis; Image Processing Packages: Design of AIPS and MIDAS; AIPS; MIDAS; Reduction of CCD Data; Bias Subtraction; Clipping; Preflash Subtraction; Dark Subtraction; Flat Fielding; Sky Subtraction; Extinction Correction; Deconvolution Methods; Rebinning/Combining; Summary and Prospects for the Future

  9. Onboard image processing

    NASA Technical Reports Server (NTRS)

    Martin, D. R.; Samulon, A. S.

    1979-01-01

    The possibility of onboard geometric correction of Thematic Mapper type imagery to make possible image registration is considered. Typically, image registration is performed by processing raw image data on the ground. The geometric distortion (e.g., due to variation in spacecraft location and viewing angle) is estimated by using a Kalman filter updated by correlating the received data with a small reference subimage, which has known location. Onboard image processing dictates minimizing the complexity of the distortion estimation while offering the advantages of a real time environment. In keeping with this, the distortion estimation can be replaced by information obtained from the Global Positioning System and from advanced star trackers. Although not as accurate as the conventional ground control point technique, this approach is capable of achieving subpixel registration. Appropriate attitude commands can be used in conjunction with image processing to achieve exact overlap of image frames. The magnitude of the various distortion contributions, the accuracy with which they can be measured in real time, and approaches to onboard correction are investigated.
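As an illustration of the conventional ground-control-point step this record describes (correlating received data with a small reference subimage of known location), here is a brute-force normalized cross-correlation search. The function and test data are hypothetical; the actual scheme wraps such measurements in a Kalman filter, which is not shown.

```python
import numpy as np

def locate_reference(image, chip):
    """Slide the reference chip over the image and return the (row, col)
    offset with the highest normalized cross-correlation score."""
    ih, iw = image.shape
    ch, cw = chip.shape
    c = (chip - chip.mean()) / chip.std()
    best, best_score = (0, 0), -np.inf
    for y in range(ih - ch + 1):
        for x in range(iw - cw + 1):
            win = image[y:y + ch, x:x + cw]
            w = (win - win.mean()) / (win.std() + 1e-12)
            score = (w * c).mean()          # 1.0 for a perfect match
            if score > best_score:
                best, best_score = (y, x), score
    return best

# Synthetic scene; the chip is cut from a known location and recovered.
rng = np.random.default_rng(1)
scene = rng.normal(size=(40, 40))
chip = scene[12:20, 7:15].copy()            # true offset: (12, 7)
offset = locate_reference(scene, chip)
```

Subpixel registration, as claimed in the abstract, would refine this integer offset, e.g. by fitting a parabola to the correlation peak.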

  10. Image sets for satellite image processing systems

    NASA Astrophysics Data System (ADS)

    Peterson, Michael R.; Horner, Toby; Temple, Asael

    2011-06-01

    The development of novel image processing algorithms requires a diverse and relevant set of training images to ensure the general applicability of such algorithms for their required tasks. Images must be appropriately chosen for the algorithm's intended applications. Image processing algorithms often employ the discrete wavelet transform (DWT) algorithm to provide efficient compression and near-perfect reconstruction of image data. Defense applications often require the transmission of images and video across noisy or low-bandwidth channels. Unfortunately, the DWT algorithm's performance deteriorates in the presence of noise. Evolutionary algorithms are often able to train image filters that outperform DWT filters in noisy environments. Here, we present and evaluate two image sets suitable for the training of such filters for satellite and unmanned aerial vehicle imagery applications. We demonstrate the use of the first image set as a training platform for evolutionary algorithms that optimize DWT-based image transform filters for satellite image compression. We evaluate the suitability of each image as a training image during optimization. Each image is ranked according to its suitability as a training image and its difficulty as a test image. The second image set provides a test-bed for holdout validation of trained image filters. These images are used to independently verify that trained filters will provide strong performance on unseen satellite images. Collectively, these image sets are suitable for the development of image processing algorithms for satellite and reconnaissance imagery applications.
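The evolved filters themselves are not given in this record, but the DWT baseline they are compared against can be illustrated with a one-level 2-D Haar transform. This is a minimal NumPy sketch under the usual Haar definition, not the authors' filter bank.

```python
import numpy as np

def haar_dwt2_level1(img):
    """One level of the 2-D Haar DWT: returns the approximation subband (LL)
    and the three detail subbands (LH, HL, HH). Dimensions must be even."""
    a = img.astype(float)
    # Transform rows: low-pass = pairwise mean, high-pass = pairwise difference.
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # Transform columns of each intermediate result.
    ll = (lo[0::2] + lo[1::2]) / 2.0
    lh = (lo[0::2] - lo[1::2]) / 2.0
    hl = (hi[0::2] + hi[1::2]) / 2.0
    hh = (hi[0::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh

img = np.arange(16.0).reshape(4, 4)         # a smooth intensity ramp
ll, lh, hl, hh = haar_dwt2_level1(img)
# Smooth content concentrates energy in LL; the detail subbands stay small,
# which is exactly what makes DWT coefficients compressible.
```

Compression then quantizes or discards small detail coefficients; noise robustness suffers because channel errors in those coefficients spread over the whole reconstruction, which motivates the evolved filters studied in the paper.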

  11. Image Processing for Teaching.

    ERIC Educational Resources Information Center

    Greenberg, R.; And Others

    1993-01-01

    The Image Processing for Teaching project provides a powerful medium to excite students about science and mathematics, especially children from minority groups and others whose needs have not been met by traditional teaching. Using professional-quality software on microcomputers, students explore a variety of scientific data sets, including…

  12. Image-Processing Program

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Hull, D. R.

    1994-01-01

    IMAGEP manipulates digital image data to effect various processing, analysis, and enhancement functions. It is keyboard-driven program organized into nine subroutines. Within subroutines are sub-subroutines also selected via keyboard. Algorithm has possible scientific, industrial, and biomedical applications in study of flows in materials, analysis of steels and ores, and pathology, respectively.

  13. Image processing and reconstruction

    SciTech Connect

    Chartrand, Rick

    2012-06-15

    This talk will examine some mathematical methods for image processing and the solution of underdetermined, linear inverse problems. The talk will have a tutorial flavor, mostly accessible to undergraduates, while still presenting research results. The primary approach is the use of optimization problems. We will find that relaxing the usual assumption of convexity will give us much better results.

  14. Computational Process Modeling for Additive Manufacturing (OSU)

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2015-01-01

    Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the Aerospace industry to "print" parts that traditionally are very complex, high cost, or long schedule lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until layer-by-layer, a very complex part can be built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost: many experiments can be run quickly in a model, which would take years and a high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.

  15. In situ process for making multifunctional fuel additives

    SciTech Connect

    Carrier, R.C.; Allen, B.R.

    1984-02-28

    Disclosed is an in situ or "one pot" process for making a fuel additive comprising reacting an excess of at least one N-primary alkylalkylene diamine with maleic anhydride in the presence of from 20 to 36 weight percent of a mineral oil reaction diluent at a temperature ranging from ambient to about 225 °F and recovering a product containing a primary aliphatic hydrocarbon amino alkylene substituted asparagine, an N-primary alkylalkylene diamine in the reaction oil with the product having a by-product succinimide content not in excess of 1.0 weight percent, based on the weight of asparagine present.

  16. Retinomorphic image processing.

    PubMed

    Ghosh, Kuntal; Bhaumik, Kamales; Sarkar, Sandip

    2008-01-01

    The present work is aimed at understanding and explaining some of the aspects of visual signal processing at the retinal level while exploiting the same towards the development of some simple techniques in the domain of digital image processing. Classical studies on retinal physiology revealed the nature of contrast sensitivity of the receptive field of bipolar or ganglion cells, which lie in the outer and inner plexiform layers of the retina. To explain these observations, a difference of Gaussian (DOG) filter was suggested, which was subsequently modified to a Laplacian of Gaussian (LOG) filter for computational ease in handling two-dimensional retinal inputs. To date, almost all image processing algorithms used in various branches of science and engineering have followed the LOG or one of its variants. Recent observations in retinal physiology, however, indicate that the retinal ganglion cells receive input from a larger area than the classical receptive fields. We have proposed an isotropic model for the non-classical receptive field of the retinal ganglion cells, corroborated by these recent observations, by introducing higher order derivatives of Gaussian expressed as linear combinations of Gaussians only. In digital image processing, this provides a new mechanism of edge detection on one hand and image half-toning on the other. It has also been found that living systems may sometimes prefer to "perceive" the external scenario by adding noise to the received signals in the pre-processing level for arriving at better information on light and shade in the edge map. The proposed model also provides explanation to many brightness-contrast illusions hitherto unexplained not only by the classical isotropic model but also by some other Gestalt and Constructivist models or by non-isotropic multi-scale models. The proposed model is easy to implement both in the analog and digital domain. A scheme for implementation in the analog domain generates a new silicon retina
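The classical DOG receptive-field model mentioned above is easy to sketch. The NumPy illustration below (function names and sigma values are our own, and the authors' higher-order Gaussian-combination model is not reproduced) shows the center-surround response on a step edge.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with a truncated kernel (zero-padded borders)."""
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"),
                              0, img.astype(float))
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"),
                               1, out)

def dog_response(img, sigma_center=1.0, sigma_surround=2.0):
    """Difference-of-Gaussians (DOG) receptive-field model: a narrow
    excitatory 'center' Gaussian minus a wider inhibitory 'surround'.
    Large responses / zero crossings mark edges."""
    return gaussian_blur(img, sigma_center) - gaussian_blur(img, sigma_surround)

# A vertical step edge: left half dark, right half bright.
img = np.zeros((32, 32))
img[:, 16:] = 100.0
resp = dog_response(img)
# The response is large near column 16 and essentially zero in flat regions,
# mirroring the retina's insensitivity to uniform illumination.
```

The noise-assisted "perception" the abstract mentions would correspond to adding small random perturbations to `img` before the DOG step, a form of stochastic resonance.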

  17. Image processing technology

    SciTech Connect

    Van Eeckhout, E.; Pope, P.; Balick, L.

    1996-07-01

    This is the final report of a two-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The primary objective of this project was to advance image processing and visualization technologies for environmental characterization. This was effected by developing and implementing analyses of remote sensing data from satellite and airborne platforms, and demonstrating their effectiveness in visualization of environmental problems. Many sources of information were integrated as appropriate using geographic information systems.

  18. Introduction to computer image processing

    NASA Technical Reports Server (NTRS)

    Moik, J. G.

    1973-01-01

    Theoretical backgrounds and digital techniques for a class of image processing problems are presented. Image formation in the context of linear system theory, image evaluation, noise characteristics, and mathematical operations on images and their implementation are discussed. Various techniques for image restoration and image enhancement are presented. Methods for object extraction and the problem of pictorial pattern recognition and classification are discussed.

  19. scikit-image: image processing in Python.

    PubMed

    van der Walt, Stéfan; Schönberger, Johannes L; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org. PMID:25024921

  20. scikit-image: image processing in Python

    PubMed Central

    Schönberger, Johannes L.; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D.; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org. PMID:25024921

  1. Combining image-processing and image compression schemes

    NASA Technical Reports Server (NTRS)

    Greenspan, H.; Lee, M.-C.

    1995-01-01

    An investigation into combining image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented on the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection schemes, can benefit from the added image resolution provided by the enhancement.
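    The pyramid coding scheme referred to above rests on the Laplacian pyramid, which can be sketched in a few lines (the blur sigma and level count here are arbitrary illustrations; a real codec would also quantize each band):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def build_laplacian_pyramid(img, levels=3):
    """Each level keeps the detail lost by blur-and-decimate;
    the final entry is the coarse residual."""
    pyramid, current = [], img.astype(float)
    for _ in range(levels):
        down = gaussian_filter(current, 1.0)[::2, ::2]
        up = zoom(down, 2, order=1)
        pyramid.append(current - up)   # band-pass detail
        current = down
    pyramid.append(current)            # low-resolution residual
    return pyramid

def reconstruct(pyramid):
    current = pyramid[-1]
    for detail in reversed(pyramid[:-1]):
        current = zoom(current, 2, order=1) + detail
    return current

img = np.random.default_rng(0).random((64, 64))
pyr = build_laplacian_pyramid(img)
```

    Because each stored detail is exactly the expansion error, the reconstruction is lossless until the bands are quantized.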

  2. Image Processing Diagnostics: Emphysema

    NASA Astrophysics Data System (ADS)

    McKenzie, Alex

    2009-10-01

    Currently the computerized tomography (CT) scan can detect emphysema sooner than traditional x-rays, but other tests are required to measure more accurately the amount of affected lung. CT scan images show clearly whether a patient has emphysema, but visual inspection alone cannot quantify the degree of the disease, which appears merely as subtle, barely distinct dark spots on the lung. Our goal is to create a software plug-in that interfaces with existing open source medical imaging software to automate the process of accurately diagnosing and determining emphysema severity levels in patients. This will be accomplished by performing a number of statistical calculations using data taken from CT scan images of several patients representing a wide range of severity of the disease. These analyses include an examination of the deviation from a normal distribution curve to determine skewness, a commonly used statistical parameter. Our preliminary results show that this method of assessment appears to be more accurate and robust than currently utilized methods, which involve looking at percentages of radiodensities in air passages of the lung.
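    The skewness statistic mentioned above is the third standardized moment of the radiodensity histogram; a minimal sketch, using invented stand-in samples rather than real CT data:

```python
import numpy as np

def skewness(values):
    """Third standardized moment of a sample."""
    v = np.asarray(values, dtype=float) - np.mean(values)
    return np.mean(v**3) / np.mean(v**2)**1.5

# Invented radiodensity samples: a healthy lung gives a roughly
# symmetric histogram; emphysema adds a low-density tail.
rng = np.random.default_rng(1)
healthy = rng.normal(-750, 50, 10_000)
diseased = np.concatenate([healthy, rng.normal(-950, 20, 3_000)])
```

    The extra low-density mass pulls the third moment negative, which is the kind of deviation from a normal curve the abstract proposes to measure.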

  3. Computer image processing and recognition

    NASA Technical Reports Server (NTRS)

    Hall, E. L.

    1979-01-01

    A systematic introduction to the concepts and techniques of computer image processing and recognition is presented. Consideration is given to such topics as image formation and perception; computer representation of images; image enhancement and restoration; reconstruction from projections; digital television, encoding, and data compression; scene understanding; scene matching and recognition; and processing techniques for linear systems.

  4. Smart Image Enhancement Process

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J. (Inventor); Rahman, Zia-ur (Inventor); Woodell, Glenn A. (Inventor)

    2012-01-01

    Contrast and lightness measures are used to first classify the image as being one of non-turbid and turbid. If turbid, the original image is enhanced to generate a first enhanced image. If non-turbid, the original image is classified in terms of a merged contrast/lightness score based on the contrast and lightness measures. The non-turbid image is enhanced to generate a second enhanced image when a poor contrast/lightness score is associated therewith. When the second enhanced image has a poor contrast/lightness score associated therewith, this image is enhanced to generate a third enhanced image. A sharpness measure is computed for one image that is selected from (i) the non-turbid image, (ii) the first enhanced image, (iii) the second enhanced image when a good contrast/lightness score is associated therewith, and (iv) the third enhanced image. If the selected image is not sharp, it is sharpened to generate a sharpened image. The final image is selected from the selected image and the sharpened image.

  5. ASPIC: STARLINK image processing package

    NASA Astrophysics Data System (ADS)

    Davenhall, A. C.; Hartley, Ken F.; Penny, Alan J.; Kelly, B. D.; King, Dave J.; Lupton, W. F.; Tudhope, D.; Pike, C. D.; Cooke, J. A.; Pence, W. D.; Wallace, Patrick T.; Brownrigg, D. R. K.; Baines, Dave W. T.; Warren-Smith, Rodney F.; McNally, B. V.; Bell, L. L.; Jones, T. A.; Terrett, Dave L.; Pearce, D. J.; Carey, J. V.; Currie, Malcolm J.; Benn, Chris; Beard, S. M.; Giddings, Jack R.; Balona, Luis A.; Harrison, B.; Wood, Roger; Sparkes, Bill; Allan, Peter M.; Berry, David S.; Shirt, J. V.

    2015-10-01

    ASPIC handled basic astronomical image processing. Early releases concentrated on image arithmetic, standard filters, expansion/contraction/selection/combination of images, and displaying and manipulating images on the ARGS and other devices. Later releases added new astronomy-specific applications to this sound framework. The ASPIC collection of about 400 image-processing programs was written using the Starlink "interim" environment in the 1980s; the software is now obsolete.

  6. Cleaning Process Development for Metallic Additively Manufactured Parts

    NASA Technical Reports Server (NTRS)

    Tramel, Terri L.; Welker, Roger; Lowery, Niki; Mitchell, Mark

    2014-01-01

    Additive Manufacturing of metallic components for aerospace applications offers many advantages over traditional manufacturing techniques. As a new technology, many aspects of its widespread utilization remain open to investigation. Among these are the cleaning processes that can be used for post finishing of parts and measurements to verify effectiveness of the cleaning processes. Many cleaning and drying processes and measurement methods that have been used for parts manufactured using conventional techniques are candidates that may be considered for cleaning and verification of additively manufactured parts. Among these are vapor degreasing, ultrasonic immersion and spray cleaning, followed by hot air drying, vacuum baking and solvent displacement drying. Differences in porosity, density, and surface finish of additively manufactured versus conventionally manufactured parts may introduce new considerations in the selection of cleaning and drying processes or the method used to verify their effectiveness. This presentation will review the relative strengths and weaknesses of different candidate cleaning and drying processes as they may apply to additively manufactured metal parts for aerospace applications. An ultrasonic cleaning technique for exploring the cleanability of parts will be presented along with an example using additively manufactured Inconel 718 test specimens to illustrate its use. The data analysis shows that this ultrasonic cleaning approach results in a well-behaved ultrasonic cleaning/extraction behavior. That is, it does not show signs of accelerated cavitation erosion of the base material, which was later confirmed by neutron imaging. In addition, the analysis indicated that complete cleaning would be achieved by ultrasonic immersion cleaning at approximately 5 minutes, which was verified by subsequent cleaning of additional parts.

  7. Deconvolution of partially compensated solar images from additional wavefront sensing.

    PubMed

    Miura, Noriaki; Oh-Ishi, Akira; Kuwamura, Susumu; Baba, Naoshi; Ueno, Satoru; Nakatani, Yoshikazu; Ichimoto, Kiyoshi

    2016-04-01

    A technique for restoring solar images partially compensated with adaptive optics is developed. An additional wavefront sensor is installed in an adaptive optics system to acquire residual wavefront information simultaneously with a solar image. A point spread function is derived from the wavefront information and used to deconvolve the solar image. Successful image restorations are demonstrated when the estimated point spread functions have relatively high Strehl ratios. PMID:27139647
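    The abstract does not specify the deconvolution algorithm; one standard way to deconvolve with a known PSF is Wiener filtering in the Fourier domain, sketched here on a synthetic point source (the regularization constant k is an assumed stand-in for the noise-to-signal ratio):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-3):
    """Fourier-domain Wiener filter: conj(H)*G / (|H|^2 + k)."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H)**2 + k)))

# Demo: blur a synthetic point source with a known Gaussian PSF.
n = 64
y, x = np.mgrid[:n, :n]
psf = np.exp(-((x - n // 2)**2 + (y - n // 2)**2) / (2 * 2.0**2))
psf /= psf.sum()
truth = np.zeros((n, n))
truth[20, 40] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) *
                               np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)
```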

  8. FORTRAN Algorithm for Image Processing

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Hull, David R.

    1987-01-01

    FORTRAN computer algorithm containing various image-processing analysis and enhancement functions developed. Algorithm developed specifically to process images of developmental heat-engine materials obtained with sophisticated nondestructive evaluation instruments. Applications of program include scientific, industrial, and biomedical imaging for studies of flaws in materials, analyses of steel and ores, and pathology.

  9. The APL image processing laboratory

    NASA Technical Reports Server (NTRS)

    Jenkins, J. O.; Randolph, J. P.; Tilley, D. G.; Waters, C. A.

    1984-01-01

    The present and proposed capabilities of the Central Image Processing Laboratory, which provides a powerful resource for the advancement of programs in missile technology, space science, oceanography, and biomedical image analysis, are discussed. The use of image digitizing, digital image processing, and digital image output permits a variety of functional capabilities, including: enhancement, pseudocolor, convolution, computer output microfilm, presentation graphics, animations, transforms, geometric corrections, and feature extractions. The hardware and software of the Image Processing Laboratory, consisting of digitizing and processing equipment, software packages, and display equipment, is described. Attention is given to applications for imaging systems, map geometric correction, raster movie display of Seasat ocean data, Seasat and Skylab scenes of Nantucket Island, Space Shuttle imaging radar, differential radiography, and a computerized tomographic scan of the brain.

  10. Multiscale Image Processing of Solar Image Data

    NASA Astrophysics Data System (ADS)

    Young, C.; Myers, D. C.

    2001-12-01

    It is often said that the blessing and curse of solar physics is too much data. Solar missions such as Yohkoh, SOHO and TRACE have shown us the Sun with amazing clarity but have also increased the amount of highly complex data. We have improved our view of the Sun, yet we have not improved our analysis techniques. The standard techniques used for analysis of solar images generally consist of observing the evolution of features in a sequence of byte-scaled images or a sequence of byte-scaled difference images. The determination of features and structures in the images is done qualitatively by the observer. There is little quantitative and objective analysis done with these images. Many advances in image processing techniques have occurred in the past decade, and many of these methods are possibly suited for solar image analysis. Multiscale/multiresolution methods are perhaps the most promising. These methods have been used to formulate the human ability to view and comprehend phenomena on different scales, so they could be used to quantify the image processing performed by the observer's eyes and brain. In this work we present several applications of multiscale techniques applied to solar image data. Specifically, we discuss uses of the wavelet, curvelet, and related transforms to define a multiresolution support for EIT, LASCO and TRACE images.

  11. Image processing photosensor for robots

    NASA Astrophysics Data System (ADS)

    Vinogradov, Sergey L.; Shubin, Vitaly E.

    1995-01-01

    Some aspects of the possible applications of a new, nontraditional generation of advanced photosensors having inherent internal image processing for multifunctional optoelectronic systems such as machine vision systems (MVS) are discussed. The optical information in these solid-state photosensors, so-called photoelectric structures with memory (PESM), is registered and stored in the form of 2D charge and potential patterns in the plane of the layers, and may then be transferred and transformed in the normal direction due to interaction of these patterns. PESM ensure high operation potential of massively parallel processing, with effective rates up to 10^14 operations/bit/s in such integral operations as addition, subtraction, contouring, correlation of images and so on. A wide variety of devices and apparatus may be developed on this basis, ranging from automatic rangefinders to MVS for furnishing robotized industries. Principal features, the physical background of the main primary operations, and complex functional algorithms for object selection, tracking, and guidance are briefly described. Examples of the possible application of the PESM as an intelligent 'supervideosensor' that combines a high-quality imager, memory media and a high-capacity special-purpose processor are presented.

  12. Cooperative processes in image segmentation

    NASA Technical Reports Server (NTRS)

    Davis, L. S.

    1982-01-01

    Research into the role of cooperative, or relaxation, processes in image segmentation is surveyed. Cooperative processes can be employed at several levels of the segmentation process as a preprocessing enhancement step, during supervised or unsupervised pixel classification and, finally, for the interpretation of image segments based on segment properties and relations.

  13. Voyager image processing at the Image Processing Laboratory

    NASA Technical Reports Server (NTRS)

    Jepsen, P. L.; Mosher, J. A.; Yagi, G. M.; Avis, C. C.; Lorre, J. J.; Garneau, G. W.

    1980-01-01

    This paper discusses new digital processing techniques as applied to the Voyager Imaging Subsystem and devised to explore atmospheric dynamics, spectral variations, and the morphology of Jupiter, Saturn and their satellites. Radiometric and geometric decalibration processes, the modulation transfer function, and processes to determine and remove photometric properties of the atmosphere and surface of Jupiter and its satellites are examined. It is exhibited that selected images can be processed into 'approach at constant longitude' time lapse movies which are useful in observing atmospheric changes of Jupiter. Photographs are included to illustrate various image processing techniques.

  14. Industrial Applications of Image Processing

    NASA Astrophysics Data System (ADS)

    Ciora, Radu Adrian; Simion, Carmen Mihaela

    2014-11-01

    The recent advances in sensor quality and processing power provide us with excellent tools for designing more complex image processing and pattern recognition tasks. In this paper we review the existing applications of image processing and pattern recognition in industrial engineering. First we define the role of vision in an industrial setting. Then an overview of some image processing techniques for feature extraction, object recognition and industrial robot guidance is presented. Moreover, examples of implementations of such techniques in industry are given, including automated visual inspection, process control, part identification and robot control. Finally, we present some conclusions regarding the investigated topics and directions for future investigation.

  15. An image processing algorithm for PPCR imaging

    NASA Astrophysics Data System (ADS)

    Cowen, Arnold R.; Giles, Anthony; Davies, Andrew G.; Workman, A.

    1993-09-01

    During 1990 the UK Department of Health installed two Photostimulable Phosphor Computed Radiography (PPCR) systems in the General Infirmary at Leeds with a view to evaluating the clinical and physical performance of the technology prior to its introduction into the NHS. An issue that came to light from the outset of the project was the radiologists' reservations about the influence of the standard PPCR computerized image processing on image quality and diagnostic performance. An investigation was set up by FAXIL to develop an algorithm to produce single-format, high-quality PPCR images that would be easy to implement and allay the concerns of radiologists.

  16. SWNT Imaging Using Multispectral Image Processing

    NASA Astrophysics Data System (ADS)

    Blades, Michael; Pirbhai, Massooma; Rotkin, Slava V.

    2012-02-01

    A flexible optical system was developed to image carbon single-wall nanotube (SWNT) photoluminescence using the multispectral capabilities of a typical CCD camcorder. The built-in Bayer filter of the CCD camera was utilized, using OpenCV C++ libraries for image processing, to decompose the image generated in a high-magnification epifluorescence microscope setup into three pseudo-color channels. By carefully calibrating the filter beforehand, it was possible to extract spectral data from these channels, and effectively isolate the SWNT signals from the background.
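    The channel decomposition described above can be sketched without OpenCV by slicing the Bayer mosaic directly; the RGGB layout below is an assumption for illustration, not necessarily the camcorder's actual pattern:

```python
import numpy as np

def split_bayer_rggb(raw):
    """Pull the three color channels out of an RGGB mosaic;
    the two green photosites are simply averaged, no demosaicing."""
    r = raw[0::2, 0::2]
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0
    b = raw[1::2, 1::2]
    return r, g, b

# Synthetic mosaic: a constant value at each photosite type.
raw = np.zeros((4, 4))
raw[0::2, 0::2] = 3.0   # red sites
raw[0::2, 1::2] = 2.0   # green sites (even rows)
raw[1::2, 0::2] = 2.0   # green sites (odd rows)
raw[1::2, 1::2] = 1.0   # blue sites
r, g, b = split_bayer_rggb(raw)
```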

  17. Stereoscopic high-speed imaging using additive colors

    NASA Astrophysics Data System (ADS)

    Sankin, Georgy N.; Piech, David; Zhong, Pei

    2012-04-01

    An experimental system for digital stereoscopic imaging produced by using a high-speed color camera is described. Two bright-field image projections of a three-dimensional object are captured utilizing additive-color backlighting (blue and red). The two images are simultaneously combined on a two-dimensional image sensor using a set of dichromatic mirrors, and stored for off-line separation of each projection. This method has been demonstrated in analyzing cavitation bubble dynamics near boundaries. This technique may be useful for flow visualization and in machine vision applications.

  18. Automatic processing, analysis, and recognition of images

    NASA Astrophysics Data System (ADS)

    Abrukov, Victor S.; Smirnov, Evgeniy V.; Ivanov, Dmitriy G.

    2004-11-01

    New approaches and computer codes (A&CC) for automatic processing, analysis and recognition of images are offered. The A&CC are based on representing an object image as a collection of pixels of various colours and on the consecutive automatic painting of distinct parts of the image. The A&CC address such technical objectives as: 1) image processing, 2) image feature extraction, and 3) image analysis, among others, in any sequence and combination. The A&CC make it possible to obtain various geometrical and statistical parameters of an object image and its parts. Additional possibilities arise from the use of artificial neural network technologies. We believe that the A&CC can be used to create systems for testing and control in various industrial and military applications (airborne imaging systems, tracking of moving objects), in medical diagnostics, in new software for CCDs, and in industrial vision and decision-making systems. The capabilities of the A&CC have been tested on image analysis of model fires and plumes of sprayed fluid and of ensembles of particles, on decoding of interferometric images, on digitization of paper diagrams of electrical signals, on text recognition, on noise removal and image filtering, on analysis of astronomical images and aerial photography, and on object detection.
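    The "consecutive automatic painting" of distinct image parts described above is, in essence, connected-component labeling; a minimal sketch on an invented binary image:

```python
import numpy as np
from scipy.ndimage import label

# Toy binary image with two separate regions to "paint".
img = np.zeros((8, 8), dtype=int)
img[1:3, 1:3] = 1          # a 2x2 blob
img[5:7, 4:7] = 1          # a 2x3 blob

labeled, count = label(img)               # each region gets its own id
sizes = np.bincount(labeled.ravel())[1:]  # pixels per region (skip background)
```

    The per-region pixel counts are exactly the kind of geometrical/statistical parameters of image parts the abstract refers to.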

  19. Image Processing: Some Challenging Problems

    NASA Astrophysics Data System (ADS)

    Huang, T. S.; Aizawa, K.

    1993-11-01

    Image processing can be broadly defined as the manipulation of signals which are inherently multidimensional. The most common such signals are photographs and video sequences. The goals of processing or manipulation can be (i) compression for storage or transmission; (ii) enhancement or restoration; (iii) analysis, recognition, and understanding; or (iv) visualization for human observers. The use of image processing techniques has become almost ubiquitous; they find applications in such diverse areas as astronomy, archaeology, medicine, video communication, and electronic games. Nonetheless, many important problems in image processing remain unsolved. It is the goal of this paper to discuss some of these challenging problems. In Section I, we mention a number of outstanding problems. Then, in the remainder of this paper, we concentrate on one of them: very-low-bit-rate video compression. This is chosen because it involves almost all aspects of image processing.

  20. Image processing of aerodynamic data

    NASA Technical Reports Server (NTRS)

    Faulcon, N. D.

    1985-01-01

    The use of digital image processing techniques in analyzing and evaluating aerodynamic data is discussed. An image processing system that converts images derived from digital data or from transparent film into black and white, full color, or false color pictures is described. Applications to black and white images of a model wing with a NACA 64-210 section in simulated rain and to computed flow properties for transonic flow past a NACA 0012 airfoil are presented. Image processing techniques are used to visualize the variations of water film thicknesses on the wing model and to illustrate the contours of computed Mach numbers for the flow past the NACA 0012 airfoil. Since the computed data for the NACA 0012 airfoil are available only at discrete spatial locations, an interpolation method is used to provide values of the Mach number over the entire field.
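    The interpolation of discretely sampled values onto a full field, as in the Mach-number contours above, can be sketched with scattered-data interpolation (the linear "Mach field" here is invented purely to make the result checkable):

```python
import numpy as np
from scipy.interpolate import griddata

# Known values on a coarse grid of sample locations...
gx, gy = np.mgrid[0:1:11j, 0:1:11j]
points = np.column_stack([gx.ravel(), gy.ravel()])
mach = 0.5 + 0.3 * points[:, 0] + 0.1 * points[:, 1]   # invented field

# ...interpolated onto a finer grid covering the interior.
xi, yi = np.mgrid[0.05:0.95:19j, 0.05:0.95:19j]
field = griddata(points, mach, (xi, yi), method='linear')
```

    Piecewise-linear interpolation reproduces a linear field exactly, which makes the sketch easy to verify.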

  1. Nonlinear Optical Image Processing with Bacteriorhodopsin Films

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Deiss, Ron (Technical Monitor)

    1994-01-01

    The transmission properties of some bacteriorhodopsin film spatial light modulators are uniquely suited to allow nonlinear optical image processing operations to be applied to images with multiplicative noise characteristics. A logarithmic amplitude transmission feature of the film permits the conversion of multiplicative noise to additive noise, which may then be linearly filtered out in the Fourier plane of the transformed image. The bacteriorhodopsin film displays the logarithmic amplitude response for write beam intensities spanning a dynamic range greater than 2.0 orders of magnitude. We present experimental results demonstrating the principle and capability for several different image and noise situations, including deterministic noise and speckle. Using the bacteriorhodopsin film, we successfully filter out image noise from the transformed image that cannot be removed from the original image.
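    The log-domain conversion of multiplicative noise to additive noise described above can be illustrated numerically; this scalar sketch stands in for the optical film and Fourier-plane filter, using a constant signal and lognormal noise chosen for illustration:

```python
import numpy as np

# A constant "signal" corrupted by multiplicative lognormal noise.
rng = np.random.default_rng(3)
signal = 4.0
observed = signal * rng.lognormal(mean=0.0, sigma=0.5, size=10_000)

# In the log domain the corruption is additive and zero-mean Gaussian,
# so a plain linear estimate (here, the mean) removes it:
estimate = np.exp(np.log(observed).mean())
```

    A linear estimate applied directly to the multiplicative data would be biased high, which is why the logarithmic transmission step matters.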

  2. Process monitoring of additive manufacturing by using optical tomography

    SciTech Connect

    Zenzinger, Guenter; Bamberg, Joachim; Ladewig, Alexander (E-mail: alexander.ladewig@mtu.de); Hess, Thomas; Henkel, Benjamin; Satzger, Wilhelm

    2015-03-31

    Parts fabricated by means of additive manufacturing are usually of complex shape and, owing to fabrication by selective laser melting (SLM), potential defects and inaccuracies are often very small in lateral size. An adequate quality inspection of such parts is therefore rather challenging: non-destructive testing (NDT) techniques are difficult to apply, yet considerable effort is necessary to ensure the quality of SLM parts, especially those used for aerospace components. Thus, MTU Aero Engines is currently focusing on the development of an Online Process Control system which monitors and documents the complete welding process during SLM fabrication. A high-resolution camera system is used to obtain images, from which tomographic data for a three-dimensional (3D) analysis of SLM parts are processed. From the analysis, structural irregularities and structural disorder resulting from any erroneous melting process become visible and can be localized anywhere within the 3D structure. Results of our optical tomography (OT) method as obtained on real defects are presented.

  3. Process monitoring of additive manufacturing by using optical tomography

    NASA Astrophysics Data System (ADS)

    Zenzinger, Guenter; Bamberg, Joachim; Ladewig, Alexander; Hess, Thomas; Henkel, Benjamin; Satzger, Wilhelm

    2015-03-01

    Parts fabricated by means of additive manufacturing are usually of complex shape and, owing to fabrication by selective laser melting (SLM), potential defects and inaccuracies are often very small in lateral size. An adequate quality inspection of such parts is therefore rather challenging: non-destructive testing (NDT) techniques are difficult to apply, yet considerable effort is necessary to ensure the quality of SLM parts, especially those used for aerospace components. Thus, MTU Aero Engines is currently focusing on the development of an Online Process Control system which monitors and documents the complete welding process during SLM fabrication. A high-resolution camera system is used to obtain images, from which tomographic data for a three-dimensional (3D) analysis of SLM parts are processed. From the analysis, structural irregularities and structural disorder resulting from any erroneous melting process become visible and can be localized anywhere within the 3D structure. Results of our optical tomography (OT) method as obtained on real defects are presented.

  4. Process perspective on image quality evaluation

    NASA Astrophysics Data System (ADS)

    Leisti, Tuomas; Halonen, Raisa; Kokkonen, Anna; Weckman, Hanna; Mettänen, Marja; Lensu, Lasse; Ritala, Risto; Oittinen, Pirkko; Nyman, Göte

    2008-01-01

    The psychological complexity of multivariate image quality evaluation makes it difficult to develop general image quality metrics. Quality evaluation includes several mental processes, and ignoring these processes and using only a few test images can lead to biased results. By using a qualitative/quantitative (Interpretation Based Quality, IBQ) methodology, we examined the process of pair-wise comparison in a setting where the quality of images printed by a laser printer on different paper grades was evaluated. The test image consisted of a picture of a table covered with several objects. Three other images were also used: photographs of a woman, a cityscape and a countryside. In addition to the pair-wise comparisons, observers (N=10) were interviewed about the subjective quality attributes they used in making their quality decisions. An examination of the individual pair-wise comparisons revealed serious inconsistencies in observers' evaluations for the test image content, but not for the other contents. The qualitative analysis showed that this inconsistency was due to the observers' focus of attention. The lack of an easily recognizable context in the test image may have contributed to this inconsistency. To obtain reliable knowledge of the effect of image context or attention on subjective image quality, a qualitative methodology is needed.

  5. ICORE: Image Co-addition with Optional Resolution Enhancement

    NASA Astrophysics Data System (ADS)

    Masci, Frank

    2013-02-01

    ICORE is a command-line driven co-addition, mosaicking, and resolution enhancement (HiRes) tool for creating science quality products from image data in FITS format and with World Coordinate System information following the FITS-WCS standard. It includes preparatory steps such as image background matching, photometric gain-matching, and pixel-outlier rejection. Co-addition and/or HiRes'ing can be performed in either the inertial WCS or in the rest frame of a moving object. Three interpolation methods are supported: overlap-area weighting, drizzle, and weighting by the detector Point Response Function (PRF). The latter enables the creation of matched-filtered products for optimal point-source detection, but most importantly allows for resolution enhancement using a spatially-dependent deconvolution method. This is a variant of the classic Richardson-Lucy algorithm with the added benefit to simultaneously register and co-add multiple images to optimize signal-to-noise and sampling of the instrumental PSF. It can assume real (or otherwise "flat") image priors, mitigate "ringing" artifacts, and assess the quality of image solutions using statistically-motivated convergence criteria. Uncertainties are also estimated and internally validated for all products. The software supports multithreading that can be configured for different architectures. Numerous example scripts are included (with test data) to co-add and/or HiRes image data from Spitzer-IRAC/MIPS, WISE, and Herschel-SPIRE.
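    The classic Richardson-Lucy iteration that ICORE's HiRes method builds on can be sketched as follows (single frame, no registration or spatially varying PRF, which are ICORE's own extensions):

```python
import numpy as np
from scipy.ndimage import convolve

def richardson_lucy(observed, psf, iterations=30):
    """Multiplicative Richardson-Lucy updates; 'wrap' boundaries keep
    the blur model consistent on this periodic toy example."""
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = convolve(estimate, psf, mode='wrap')
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * convolve(ratio, psf_mirror, mode='wrap')
    return estimate

# Demo: a point source blurred by a small Gaussian PSF.
y, x = np.mgrid[-2:3, -2:3]
psf = np.exp(-(x**2 + y**2) / 2.0)
psf /= psf.sum()
truth = np.zeros((32, 32))
truth[10, 10] = 1.0
observed = convolve(truth, psf, mode='wrap')
estimate = richardson_lucy(observed, psf)
```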

  6. Fuzzy image processing in sun sensor

    NASA Technical Reports Server (NTRS)

    Mobasser, S.; Liebe, C. C.; Howard, A.

    2003-01-01

    This paper describes how fuzzy image processing is implemented in the instrument. A comparison of the fuzzy image processing with a more conventional image processing algorithm is provided, showing that fuzzy image processing yields better accuracy than conventional image processing.

  7. Image Processing Application for Cognition (IPAC) - Traditional and Emerging Topics in Image Processing in Astronomy (Invited)

    NASA Astrophysics Data System (ADS)

    Pesenson, M.; Roby, W.; Helou, G.; McCollum, B.; Ly, L.; Wu, X.; Laine, S.; Hartley, B.

    2008-08-01

    A new application framework for advanced image processing for astronomy is presented. It implements standard two-dimensional operators, and recent developments in the field of non-astronomical image processing (IP), as well as original algorithms based on nonlinear partial differential equations (PDE). These algorithms are especially well suited for multi-scale astronomical images since they increase signal to noise ratio without smearing localized and diffuse objects. The visualization component is based on the extensive tools that we developed for Spitzer Space Telescope's observation planning tool Spot and archive retrieval tool Leopard. It contains many common features, combines images in new and unique ways and interfaces with many astronomy data archives. Both interactive and batch mode processing are incorporated. In the interactive mode, the user can set up simple processing pipelines, and monitor and visualize the resulting images from each step of the processing stream. The system is platform-independent and has an open architecture that allows extensibility by addition of plug-ins. This presentation addresses astronomical applications of traditional topics of IP (image enhancement, image segmentation) as well as emerging new topics like automated image quality assessment (QA) and feature extraction, which have potential for shaping future developments in the field. Our application framework embodies a novel synergistic approach based on integration of image processing, image visualization and image QA (iQA).
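
    A textbook example of a nonlinear PDE that raises signal-to-noise without smearing localized structure is Perona-Malik anisotropic diffusion; the sketch below is given purely for illustration (the abstract does not specify IPAC's exact algorithms):

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.5, dt=0.2):
    """Perona-Malik nonlinear diffusion: iteratively smooths noise while
    the conductance g damps diffusion across strong gradients (edges)."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
    for _ in range(n_iter):
        # Nearest-neighbour differences; np.roll gives periodic boundaries
        # (a sketch -- production code would use reflecting boundaries).
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u,  1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

    Where gradients are small, g is near 1 and the scheme behaves like linear smoothing; across a sharp edge g collapses toward 0, so the edge survives the iterations.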

  8. Signal and Image Processing Operations

    1995-05-10

    VIEW is a software system for processing arbitrary multidimensional signals. It provides facilities for numerical operations, signal displays, and signal databasing. The major emphasis of the system is on the processing of time-sequences and multidimensional images. The system is designed to be both portable and extensible. It currently runs on UNIX systems, primarily SUN workstations.

  9. Command Line Image Processing System (CLIPS)

    NASA Astrophysics Data System (ADS)

    Fleagle, S. R.; Meyers, G. L.; Kulinski, R. G.

    1985-06-01

    An interactive image processing language (CLIPS) has been developed for use in an image processing environment. CLIPS uses a simple syntax with extensive on-line help to allow even the most naive user to perform complex image processing tasks. In addition, CLIPS functions as an interpretive language complete with data structures and program control statements. CLIPS statements fall into one of three categories: command, control, and utility statements. Command statements are expressions comprised of intrinsic functions and/or arithmetic operators which act directly on image or user-defined data. Some examples of CLIPS intrinsic functions are ROTATE, FILTER, and EXPONENT. Control statements allow a structured programming style through the use of statements such as DO WHILE and IF-THEN-ELSE. Utility statements such as DEFINE, READ, and WRITE support I/O and user-defined data structures. Since CLIPS uses a table-driven parser, it is easily adapted to any environment. New commands may be added to CLIPS by writing the procedure in a high-level language such as Pascal or FORTRAN and inserting the syntax for that command into the table. However, CLIPS was designed by incorporating most imaging operations into the language as intrinsic functions. CLIPS allows the user to generate new procedures easily with these powerful functions in an interactive or off-line fashion using a text editor. The fact that CLIPS can be used to generate complex procedures quickly or perform basic image processing functions interactively makes it a valuable tool in any image processing environment.
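
    The table-driven design described above can be sketched in Python (a toy illustration with hypothetical handlers, not CLIPS source): extending the language amounts to inserting one new row into the command table.

```python
import math

# Toy command table in the spirit of CLIPS: each entry maps a command
# name to a handler; "images" are plain nested lists for illustration.
command_table = {
    "ROTATE":   lambda img: [list(row) for row in zip(*img[::-1])],   # 90 deg CW
    "EXPONENT": lambda img, p: [[v ** p for v in row] for row in img],
}

def register(name, handler):
    """Adding a command to the language is a single new table entry."""
    command_table[name] = handler

def execute(name, *args):
    """The 'parser': look the command up in the table and dispatch."""
    if name not in command_table:
        raise ValueError(f"unknown command: {name}")
    return command_table[name](*args)

# A user-added command, as CLIPS allows by inserting syntax into the table:
register("SQRT", lambda img: [[math.sqrt(v) for v in row] for row in img])
```

    Because dispatch consults only the table, neither the parser nor the existing commands change when a new operation is registered.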

  10. Differential morphology and image processing.

    PubMed

    Maragos, P

    1996-01-01

    Image processing via mathematical morphology has traditionally used geometry to intuitively understand morphological signal operators and set or lattice algebra to analyze them in the space domain. We provide a unified view and analytic tools for morphological image processing that is based on ideas from differential calculus and dynamical systems. This includes ideas on using partial differential or difference equations (PDEs) to model distance propagation or nonlinear multiscale processes in images. We briefly review some nonlinear difference equations that implement discrete distance transforms and relate them to numerical solutions of the eikonal equation of optics. We also review some nonlinear PDEs that model the evolution of multiscale morphological operators and use morphological derivatives. Among the new ideas presented, we develop some general 2-D max/min-sum difference equations that model the space dynamics of 2-D morphological systems (including the distance computations) and some nonlinear signal transforms, called slope transforms, that can analyze these systems in a transform domain in ways conceptually similar to the application of Fourier transforms to linear systems. Thus, distance transforms are shown to be bandpass slope filters. We view the analysis of the multiscale morphological PDEs and of the eikonal PDE solved via weighted distance transforms as a unified area in nonlinear image processing, which we call differential morphology, and briefly discuss its potential applications to image processing and computer vision. PMID:18285181
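
    The min-sum difference equations that implement discrete distance transforms can be illustrated with the classic two-pass sequential recursion, here for the city-block metric (an illustrative NumPy version, not the paper's notation):

```python
import numpy as np

def cityblock_distance_transform(binary):
    """Two-pass min-sum recursion computing the city-block (L1) distance
    to the nearest foreground (True) pixel -- a discrete distance
    transform of the kind analyzed in differential morphology."""
    h, w = binary.shape
    INF = h + w                      # larger than any possible L1 distance
    d = np.where(binary, 0, INF).astype(int)
    # Forward pass: propagate distances from the top-left neighbours.
    for i in range(h):
        for j in range(w):
            if i > 0: d[i, j] = min(d[i, j], d[i - 1, j] + 1)
            if j > 0: d[i, j] = min(d[i, j], d[i, j - 1] + 1)
    # Backward pass: propagate from the bottom-right neighbours.
    for i in range(h - 1, -1, -1):
        for j in range(w - 1, -1, -1):
            if i < h - 1: d[i, j] = min(d[i, j], d[i + 1, j] + 1)
            if j < w - 1: d[i, j] = min(d[i, j], d[i, j + 1] + 1)
    return d
```

    Each pass is a min-sum recurrence of exactly the 2-D max/min-sum form the paper analyzes; two sweeps suffice because the city-block metric is separable into causal and anti-causal neighbour sets.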

  11. Associative architecture for image processing

    NASA Astrophysics Data System (ADS)

    Adar, Rutie; Akerib, Avidan

    1997-09-01

    This article presents a new generation in parallel processing architecture for real-time image processing. The approach is implemented in a real-time image processor chip, called the Xium™-2, based on combining a fully associative array which provides the parallel engine with a serial RISC core on the same die. The architecture is fully programmable and can be programmed to implement a wide range of color image processing, computer vision and media processing functions in real time. The associative part of the chip is based on the patent-pending methodology of Associative Computing Ltd. (ACL), which condenses 2048 associative processors, each of 128 'intelligent' bits. Each bit can be a processing bit or a memory bit. At only 33 MHz, in a 0.6-micron manufacturing process, the chip has a computational power of 3 billion ALU operations per second and 66 billion string search operations per second. The fully programmable nature of the Xium™-2 chip enables developers to use ACL tools to write their own proprietary algorithms combined with existing image processing and analysis functions from ACL's extended set of libraries.

  12. Digital processing of radiographic images

    NASA Technical Reports Server (NTRS)

    Bond, A. D.; Ramapriyan, H. K.

    1973-01-01

    Some techniques, together with their software documentation, are presented for the digital enhancement of radiographs. Both image handling and image processing operations are considered. The image handling operations dealt with are: (1) conversion of format of data from packed to unpacked and vice versa; (2) automatic extraction of image data arrays; (3) transposition and 90 deg rotations of large data arrays; (4) translation of data arrays for registration; and (5) reduction of the dimensions of data arrays by integral factors. Both the frequency and the spatial domain approaches are presented for the design and implementation of the image processing operations. It is shown that spatial domain recursive implementation of filters is much faster than nonrecursive implementations using fast Fourier transforms (FFT) for the cases of interest in this work. The recursive implementation of a class of matched filters for enhancing image signal-to-noise ratio is described. Test patterns are used to illustrate the filtering operations. The application of the techniques to radiographic images of metallic structures is demonstrated through several examples.
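
    The speed argument for recursive implementation can be seen in a one-line recurrence: a first-order recursive smoother costs one multiply-add per sample regardless of how wide its effective impulse response is, whereas a nonrecursive (FFT or direct convolution) implementation scales with kernel size. A sketch of such a filter (illustrative only, not the report's matched filters):

```python
import numpy as np

def recursive_smooth_1d(x, alpha=0.5):
    """First-order recursive low-pass filter:
        y[n] = alpha * x[n] + (1 - alpha) * y[n-1]
    One multiply-add per sample, independent of the effective kernel
    width -- the reason recursive filters can beat FFT convolution."""
    y = np.empty(len(x), dtype=float)
    acc = x[0]                       # initialize state to the first sample
    for n, v in enumerate(x):
        acc = alpha * v + (1 - alpha) * acc
        y[n] = acc
    return y
```

    The effective impulse response is exponential with length controlled by alpha; an equivalent nonrecursive filter would need a kernel many samples wide to match it.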

  13. Seismic Imaging Processing and Migration

    2000-06-26

    Salvo is a 3D, finite difference, prestack, depth migration code for parallel computers. It is also capable of processing 2D and poststack data. The code requires as input a seismic dataset, a velocity model and a file of parameters that allows the user to select various options. The code uses this information to produce a seismic image. Some of the options available to the user include the application of various filters and imaging conditions. The code also incorporates phase encoding (patent applied for) to process multiple shots simultaneously.

  14. EXPOSURE TO CHEMICAL ADDITIVES FROM POLYVINYL CHLORIDE POLYMER EXTRUSION PROCESSING

    EPA Science Inventory

    This report presents a model to predict worker inhalation exposure due to off-gassing of additives during polyvinyl chloride (PVC) extrusion processing. Data on off-gassing of additives were reviewed in the literature, the off-gassing at normal PVC processing temperatures was stud...

  15. Roles for RNA in Telomerase Nucleotide and Repeat Addition Processivity

    PubMed Central

    Lai, Cary K.; Miller, Michael C.; Collins, Kathleen

    2010-01-01

    Telomerase is a ribonucleoprotein reverse transcriptase with two subunits critical for catalytic activity, the protein telomerase reverse transcriptase (TERT) and telomerase RNA. In this study, we establish additional roles of the telomerase RNA subunit by demonstrating that RNA motifs stimulate the processivity of nucleotide and repeat addition. These functions are both functionally and physically separable from the roles of other RNA motifs in establishing a properly defined template. Binding of Tetrahymena telomerase RNA stem IV to TERT enhances nucleotide addition processivity, while a cooperation of the RNA pseudoknot and stem IV promotes repeat addition processivity. The low processivity of DNA synthesis by telomerase ribonucleoproteins lacking the pseudoknot and/or stem IV can be rescued by addition of the deleted region in trans. These findings demonstrate RNA elements with roles in telomerase elongation processivity that are distinct from RNA elements that specify the internal template. PMID:12820978

  16. Fingerprint recognition using image processing

    NASA Astrophysics Data System (ADS)

    Dholay, Surekha; Mishra, Akassh A.

    2011-06-01

    Fingerprint recognition is concerned with the difficult task of efficiently matching the image of a person's fingerprint against the fingerprints stored in a database. It is used in forensic science to help identify criminals, and in authentication of individuals, since a fingerprint is unique to each person. The present paper describes fingerprint recognition methods using various edge detection techniques, and how to detect a fingerprint correctly from camera images. The method does not require a special device; a simple camera can be used, so the described technique can also be applied on a camera mobile phone. Factors affecting the process include poor illumination, noise, viewpoint dependence, climate factors, and imaging conditions, so various image enhancement techniques must be applied to increase image quality and remove noise. The present paper describes a technique of contour tracking on the fingerprint image, followed by edge detection on the contour, and then matching of the edges inside the contour.
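
    The edge-detection stage can be illustrated with a Sobel gradient, one common edge detector of the kind the paper compares (an illustrative sketch, not the paper's exact pipeline):

```python
import numpy as np

def sobel_edges(img, thresh=1.0):
    """Sobel gradient magnitude followed by a threshold -- the kind of
    edge map a contour-matching stage would operate on (illustrative)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    # Direct (loop-based) correlation, skipping the 1-pixel border.
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    mag = np.hypot(gx, gy)
    return mag > thresh
```

    In a full pipeline, contour tracking would first isolate the fingerprint region, and this edge map would then be computed only inside that contour.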

  17. Computer image processing: Geologic applications

    NASA Technical Reports Server (NTRS)

    Abrams, M. J.

    1978-01-01

    Computer image processing of digital data was performed to support several geological studies. The specific goals were to: (1) relate the mineral content to the spectral reflectance of certain geologic materials, (2) determine the influence of environmental factors, such as atmosphere and vegetation, and (3) improve image processing techniques. For detection of spectral differences related to mineralogy, the technique of band ratioing was found to be the most useful. The influence of atmospheric scattering and methods to correct for the scattering were also studied. Two techniques were used to correct for atmospheric effects: (1) dark object subtraction, and (2) normalization by use of ground spectral measurements. Of the two, the first technique proved to be the most successful for removing the effects of atmospheric scattering. A digital mosaic was produced from two side-lapping LANDSAT frames. The advantages were that the same enhancement algorithm could be applied to both frames, and there was no seam where the two images were joined.
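
    Both corrections are simple pointwise operations, sketched below with synthetic values (illustrative NumPy, not the study's code): dark object subtraction removes an additive haze term, after which the band ratio cancels multiplicative illumination and topography, leaving spectral (mineralogical) contrast.

```python
import numpy as np

def dark_object_subtract(band):
    """Subtract the scene's darkest value, attributed to atmospheric
    path radiance (the 'dark object' assumption)."""
    return band - band.min()

def band_ratio(band_a, band_b, eps=1e-6):
    """Ratio of two haze-corrected bands; a shared multiplicative
    illumination field largely cancels in the ratio."""
    return dark_object_subtract(band_a) / (dark_object_subtract(band_b) + eps)
```

    With a synthetic illumination field and additive haze in each band, the ratio recovers the pure reflectance ratio wherever the denominator is non-zero.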

  18. Linear Algebra and Image Processing

    ERIC Educational Resources Information Center

    Allali, Mohamed

    2010-01-01

    We use the computing technology of digital image processing (DIP) to enhance the teaching of linear algebra so as to make the course more visual and interesting. Certainly, this visual approach of using technology to link linear algebra to DIP is interesting and unexpected to students as well as to many faculty. (Contains 2 tables and 11 figures.)

  19. Concept Learning through Image Processing.

    ERIC Educational Resources Information Center

    Cifuentes, Lauren; Yi-Chuan, Jane Hsieh

    This study explored computer-based image processing as a study strategy for middle school students' science concept learning. Specifically, the research examined the effects of computer graphics generation on science concept learning and the impact of using computer graphics to show interrelationships among concepts during study time. The 87…

  20. Exposure to chemical additives from polyvinyl chloride polymer extrusion processing

    SciTech Connect

    Lamb, C.S.

    1989-12-01

    The report presents a model to predict worker inhalation exposure due to off-gassing of additives during polyvinyl chloride (PVC) extrusion processing. Data on off-gassing of additives were reviewed in the literature, the off-gassing at normal PVC processing temperatures was studied in the laboratory, process variables were estimated from an equipment manufacturer survey, and worker-activities and possible exposure sources were observed in an industrial survey. The purpose of the study was to develop a theoretical model to predict worker inhalation exposure to additives used during PVC extrusion processing. A model to estimate the generation rate of the additive from the polymer extrudate was derived from the mass transport equations governing diffusion. The mass flow rate, initial additive volatile weight fraction, off-gassing time, diffusivity, and slab thickness are required to determine the generation rate from the model.
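
    As a hedged illustration of such a diffusion-derived generation rate, the sketch below uses the standard short-time Fickian desorption approximation for a plane sheet (from Crank's classical solution), which is not necessarily the report's exact model:

```python
import math

def offgas_rate(m_total, diffusivity, half_thickness, t):
    """Short-time Fickian desorption rate from a plane sheet.
    Crank's short-time solution gives M(t)/M_inf ~ (2/l) * sqrt(D*t/pi)
    for a sheet of half-thickness l desorbing from both faces; the
    generation rate is its time derivative:
        dM/dt = (M_inf / l) * sqrt(D / (pi * t))
    Units are the caller's responsibility (e.g. mg, cm^2/s, cm, s)."""
    return m_total / half_thickness * math.sqrt(diffusivity / (math.pi * t))
```

    The characteristic 1/sqrt(t) decay means early in a run the off-gassing (and hence worker exposure) is highest, tapering as the additive near the surface depletes.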

  1. Porosity of additive manufacturing parts for process monitoring

    SciTech Connect

    Slotwinski, J. A.; Garboczi, E. J.

    2014-02-18

    Some metal additive manufacturing processes can produce parts with internal porosity, either intentionally (with careful selection of the process parameters) or unintentionally (if the process is not well-controlled). Material porosity is undesirable for aerospace parts - since porosity could lead to premature failure - and desirable for some biomedical implants, since surface-breaking pores allow for better integration with biological tissue. Changes in a part's porosity during an additive manufacturing build may also be an indication of an undesired change in the process. We are developing an ultrasonic sensor for detecting changes in porosity in metal parts during fabrication on a metal powder bed fusion system, for use as a process monitor. This paper will describe our work to develop an ultrasonic-based sensor for monitoring part porosity during an additive build, including background theory, the development and detailed characterization of reference additive porosity samples, and a potential design for in-situ implementation.

  2. Porosity of additive manufacturing parts for process monitoring

    NASA Astrophysics Data System (ADS)

    Slotwinski, J. A.; Garboczi, E. J.

    2014-02-01

    Some metal additive manufacturing processes can produce parts with internal porosity, either intentionally (with careful selection of the process parameters) or unintentionally (if the process is not well-controlled). Material porosity is undesirable for aerospace parts - since porosity could lead to premature failure - and desirable for some biomedical implants, since surface-breaking pores allow for better integration with biological tissue. Changes in a part's porosity during an additive manufacturing build may also be an indication of an undesired change in the process. We are developing an ultrasonic sensor for detecting changes in porosity in metal parts during fabrication on a metal powder bed fusion system, for use as a process monitor. This paper will describe our work to develop an ultrasonic-based sensor for monitoring part porosity during an additive build, including background theory, the development and detailed characterization of reference additive porosity samples, and a potential design for in-situ implementation.

  3. ImageJ: Image processing and analysis in Java

    NASA Astrophysics Data System (ADS)

    Rasband, W. S.

    2012-06-01

    ImageJ is a public domain Java image processing program inspired by NIH Image. It can display, edit, analyze, process, save and print 8-bit, 16-bit and 32-bit images. It can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and "raw". It supports "stacks", a series of images that share a single window. It is multithreaded, so time-consuming operations such as image file reading can be performed in parallel with other operations.

  4. Simple Additivity of Stochastic Psychological Processes: Tests and Measures.

    ERIC Educational Resources Information Center

    Balakrishnan, J. D.

    1994-01-01

    Methods of testing relatively complete (distributional) models of internal psychological processes are described. It is shown that there is a sufficient condition for additive models to imply this property of the likelihood ratio. Also discussed are the examination of hazard rate functions of component processes and change in cumulative…

  5. Vehicle positioning using image processing

    NASA Astrophysics Data System (ADS)

    Kaur, Amardeep; Watkins, Steve E.; Swift, Theresa M.

    2009-03-01

    An image-processing approach is described that detects the position of a vehicle on a bridge. A load-bearing vehicle must be carefully positioned on a bridge for quantitative bridge monitoring. The personnel required for setup and testing and the time required for bridge closure or traffic control are important management and cost considerations. Consequently, bridge monitoring and inspections are good candidates for smart embedded systems. The objectives of this work are to reduce the need for personnel time and to minimize the time for bridge closure. An approach is proposed that uses a passive target on the bridge and camera instrumentation on the load vehicle. The orientation of the vehicle-mounted camera and the target determine the position. The experiment used pre-defined concentric circles as the target, a FireWire camera for image capture, and MATLAB for computer processing. Various image-processing techniques are compared for determining the orientation of the target circles with respect to speed and accuracy in the positioning application. The techniques for determining the target orientation use algorithms based on the centroid feature, template matching, color features, and Hough transforms. Timing parameters are determined for each algorithm to determine the feasibility for real-time use in a position-triggering system. Also, the effect of variations in the size and color of the circles is examined. The development can be combined with embedded sensors and sensor nodes for a complete automated procedure. As the load vehicle moves to the proper position, the image-based system can trigger an embedded measurement, which is then transmitted back to the vehicle control computer through a wireless link.
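
    Of the compared techniques, the centroid feature is typically the cheapest to compute, which matters for the real-time timing comparison; it reduces to a few lines (a sketch with an assumed dark-target-on-light-background convention, not the paper's MATLAB code):

```python
import numpy as np

def target_centroid(img, thresh=128):
    """Centroid of the dark target pixels in a grayscale image --
    the fastest of the orientation cues compared above (illustrative).
    Returns (x, y) in pixel coordinates, or None if no target is found."""
    ys, xs = np.nonzero(img < thresh)   # pixels darker than the threshold
    if len(xs) == 0:
        return None
    return xs.mean(), ys.mean()
```

    Comparing the centroids of the concentric circles against the image center gives the camera-to-target offset used to decide when the vehicle has reached position.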

  6. Improving Synthetic Aperture Image by Image Compounding in Beamforming Process

    NASA Astrophysics Data System (ADS)

    Martínez-Graullera, Oscar; Higuti, Ricardo T.; Martín, Carlos J.; Ullate, Luis. G.; Romero, David; Parrilla, Montserrat

    2011-06-01

    In this work, signal processing techniques are used to improve the quality of images based on multi-element synthetic aperture techniques. Using several apodization functions to obtain different side-lobe distributions, a polarity function and a threshold criterion are used to develop an image compounding technique. The spatial diversity is increased using an additional array, which generates complementary information about the defects, improving the results of the proposed algorithm and producing images of high resolution and contrast. The inspection of isotropic plate-like structures using linear arrays and Lamb waves is presented. Experimental results are shown for a 1-mm-thick isotropic aluminum plate with artificial defects using linear arrays formed by 30 piezoelectric elements, with the low-dispersion symmetric mode S0 at the frequency of 330 kHz.

  7. Image processing software for imaging spectrometry

    NASA Technical Reports Server (NTRS)

    Mazer, Alan S.; Martin, Miki; Lee, Meemong; Solomon, Jerry E.

    1988-01-01

    The paper presents a software system, Spectral Analysis Manager (SPAM), which has been specifically designed and implemented to provide the exploratory analysis tools necessary for imaging spectrometer data, using only modest computational resources. The basic design objectives are described as well as the major algorithms designed or adapted for high-dimensional images. Included in a discussion of system implementation are interactive data display, statistical analysis, image segmentation and spectral matching, and mixture analysis.

  8. Control of pyrite addition in coal liquefaction process

    DOEpatents

    Schmid, Bruce K.; Junkin, James E.

    1982-12-21

    Pyrite addition to a coal liquefaction process (22, 26) is controlled (118) in inverse proportion to the calcium content of the feed coal to maximize the C5-900 °F (482 °C) liquid yield per unit weight of pyrite added (110). The pyrite addition is controlled in this manner so as to minimize the amount of pyrite used and thus reduce the pyrite contribution to the slurry pumping load and the disposal problems connected with pyrite-produced slag.

  9. Multispectral Image Processing for Plants

    NASA Technical Reports Server (NTRS)

    Miles, Gaines E.

    1991-01-01

    The development of a machine vision system to monitor plant growth and health is one of three essential steps towards establishing an intelligent system capable of accurately assessing the state of a controlled ecological life support system for long-term space travel. Besides a network of sensors, simulators are needed to predict plant features, and artificial intelligence algorithms are needed to determine the state of a plant-based life support system. Multispectral machine vision and image processing can be used to sense plant features, including health and nutritional status.

  10. Image processing technique for arbitrary image positioning in holographic stereogram

    NASA Astrophysics Data System (ADS)

    Kang, Der-Kuan; Yamaguchi, Masahiro; Honda, Toshio; Ohyama, Nagaaki

    1990-12-01

    In a one-step holographic stereogram, if the series of original images are used just as they are taken from perspective views, three-dimensional images are usually reconstructed in back of the hologram plane. In order to enhance the sense of perspective of the reconstructed images and minimize blur of the interesting portions, we introduce an image processing technique for making a one-step flat format holographic stereogram in which three-dimensional images can be observed at an arbitrary specified position. Experimental results show the effect of the image processing. Further, we show results of a medical application using this image processing.

  11. Fundamental Aspects of Selective Melting Additive Manufacturing Processes

    SciTech Connect

    van Swol, Frank B.; Miller, James E.

    2014-12-01

    Certain details of the additive manufacturing process known as selective laser melting (SLM) affect the performance of the final metal part. To unleash the full potential of SLM it is crucial that the process engineer in the field receives guidance about how to select values for a multitude of process variables employed in the building process. These include, for example, the type of powder (e.g., size distribution, shape, type of alloy), orientation of the build axis, the beam scan rate, the beam power density, the scan pattern and scan rate. The science-based selection of these settings constitutes an intrinsically challenging multi-physics problem involving heating and melting a metal alloy, reactive, dynamic wetting followed by re-solidification. In addition, inherent to the process is its considerable variability that stems from the powder packing. Each time a limited number of powder particles are placed, the stacking is intrinsically different from the previous one, possessing a different geometry and having a different set of contact areas with the surrounding particles. As a result, even if all other process parameters (scan rate, etc.) are exactly the same, the shape and contact geometry and area of the final melt pool will be unique to that particular configuration. This report identifies the most important issues facing SLM, discusses the fundamental physics associated with it and points out how modeling can support the additive manufacturing efforts.

  12. Concurrent Image Processing Executive (CIPE)

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Cooper, Gregory T.; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.

    1988-01-01

    The design and implementation of a Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high performance science analysis workstation are discussed. The target machine for this software is a JPL/Caltech Mark IIIfp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3, Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules; (1) user interface, (2) host-resident executive, (3) hypercube-resident executive, and (4) application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube a data management method which distributes, redistributes, and tracks data set information was implemented.

  13. Effects of acetylacetone additions on PZT thin film processing

    SciTech Connect

    Schwartz, R.W.; Assink, R.A.; Dimos, D.; Sinclair, M.B.; Boyle, T.J.; Buchheit, C.D.

    1995-02-01

    Sol-gel processing methods are frequently used for the fabrication of lead zirconate titanate (PZT) thin films for many electronic applications. Our standard approach for film fabrication utilizes lead acetate and acetic acid modified metal alkoxides of zirconium and titanium in the preparation of our precursor solutions. This report highlights some of our recent results on the effects of the addition of a second chelating ligand, acetylacetone, to this process. The authors discuss the changes in film drying behavior, densification and ceramic microstructure which accompany acetylacetone additions to the precursor solution and relate the observed variations in processing behavior to differences in chemical precursor structure induced by the acetylacetone ligand. Improvements in thin film microstructure, ferroelectric and optical properties are observed when acetylacetone is added to the precursor solution.

  14. Remote online processing of multispectral image data

    NASA Astrophysics Data System (ADS)

    Groh, Christine; Rothe, Hendrik

    2005-10-01

    Within the scope of this paper, a compact and economical data acquisition system for multispectral images is described. It consists of a CCD camera and a liquid crystal tunable filter, in combination with an associated concept for data processing. Despite their limited functionality (e.g., regarding calibration) in comparison with commercial systems such as AVIRIS, the use of these upcoming compact multispectral camera systems can be advantageous in many applications. Additional benefit can be derived by adding online data processing. In order to maintain the system's low weight and price, this work proposes to separate the data acquisition and processing modules, and transmit pre-processed camera data online to a stationary high-performance computer for further processing. The inevitable data transmission has to be optimised because of bandwidth limitations. All mentioned considerations hold especially for applications involving mini-unmanned-aerial-vehicles (mini-UAVs). Due to their limited internal payload, the use of a lightweight, compact camera system is of particular importance. This work emphasises the optimal software interface between pre-processed data (from the camera system), transmitted data (regarding small bandwidth) and post-processed data (based on the high-performance computer). Discussed parameters are pre-processing algorithms, channel bandwidth, and the resulting accuracy in the classification of multispectral image data. The benchmarked pre-processing algorithms include diagnostic statistics, tests of internal determination coefficients, as well as loss-free and lossy data compression methods. The resulting classification precision is computed in comparison to a classification performed with the original image dataset.
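
    The loss-free end of that compression benchmark can be sketched with zlib (illustrative only; the paper does not specify which lossless coder it benchmarked): the achievable ratio depends strongly on how redundant the spectral band is.

```python
import zlib
import numpy as np

def compression_ratio(band, level=6):
    """Lossless compression ratio achievable on one spectral band with
    zlib -- the kind of bandwidth/accuracy trade-off benchmarked above.
    Ratios near 1 mean the band is essentially incompressible."""
    raw = band.astype(np.uint8).tobytes()
    return len(raw) / len(zlib.compress(raw, level))
```

    A smooth or flat band compresses dramatically, while sensor-noise-dominated data barely compresses at all, which is exactly why lossy methods enter the trade-off when the downlink bandwidth is fixed.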

  15. Color Imaging management in film processing

    NASA Astrophysics Data System (ADS)

    Tremeau, Alain; Konik, Hubert; Colantoni, Philippe

    2003-12-01

    The latest research projects in the laboratory LIGIV concern the capture, processing, archiving and display of color images considering the trichromatic nature of the Human Visual System (HVS). Among these projects, one addresses digital cinematographic film sequences of high resolution and dynamic range. This project aims to optimize the use of content for the post-production operators and for the end user. The studies presented in this paper address the use of metadata to optimise the consumption of video content on a device of the user's choice, independent of the nature of the equipment that captured the content. Optimising consumption includes enhancing the quality of image reconstruction on a display. Another part of this project addresses the content-based adaptation of image display. The main focus is on Regions of Interest (ROI) operations, based on the ROI concepts of MPEG-7. The aim of this second part is to characterize and ensure the conditions of display even if the display device or display media changes. This requires firstly the definition of a reference color space and the definition of bi-directional color transformations for each peripheral device (camera, display, film recorder, etc.). The complicating factor is that different devices have different color gamuts, depending on the chromaticity of their primaries and the ambient illumination under which they are viewed. To match the displayed image to the intended appearance, all kinds of production metadata (camera specification, camera colour primaries, lighting conditions) should be associated with the film material. Metadata and content together build rich content. The author is assumed to specify conditions as known from the digital graphic arts. To control image pre-processing and image post-processing, these specifications should be contained in the film's metadata. The specifications are related to ICC profiles but must additionally consider mesopic viewing conditions.

  16. Optical Processing of Speckle Images with Bacteriorhodopsin for Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Tucker, Deanne (Technical Monitor)

    1994-01-01

    Logarithmic processing of images with multiplicative noise characteristics can be utilized to transform the image into one with an additive noise distribution. This simplifies subsequent image processing steps for applications such as image restoration or correlation for pattern recognition. One particularly common form of multiplicative noise is speckle, for which the logarithmic operation not only produces additive noise, but also makes it of constant variance (signal-independent). We examine the optical transmission properties of some bacteriorhodopsin films here and find them well suited to implement such a pointwise logarithmic transformation optically in a parallel fashion. We present experimental results of the optical conversion of speckle images into transformed images with additive, signal-independent noise statistics using the real-time photochromic properties of bacteriorhodopsin. We provide an example of improved correlation performance in terms of correlation peak signal-to-noise for such a transformed speckle image.
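    The pointwise transformation described above is easy to verify numerically. The sketch below is a numerical stand-in for the optical bacteriorhodopsin implementation (intensity levels are hypothetical): fully developed speckle is simulated as unit-mean exponential multiplicative noise, and taking the logarithm makes the noise additive with a variance that no longer depends on the signal level.

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean "scene" with two distinct reflectivity levels.
scene = np.where(rng.random((256, 256)) < 0.5, 50.0, 200.0)

# Fully developed speckle: multiplicative, unit-mean exponential noise.
speckle = rng.exponential(scale=1.0, size=scene.shape)
observed = scene * speckle          # noise variance grows with signal level

# Pointwise logarithm turns the product into a sum:
# log(observed) = log(scene) + log(speckle), and log(speckle) has a
# fixed distribution regardless of the local signal level.
log_obs = np.log(observed)

for level in (50.0, 200.0):
    mask = scene == level
    print(f"level {level:5.0f}:  var before {observed[mask].var():10.1f}"
          f"   var after {log_obs[mask].var():6.3f}")
```

    For unit-mean exponential speckle, the post-log variance approaches pi^2/6 at every signal level, which is exactly the constant-variance property the correlation experiments rely on.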

  17. Additive Manufacturing of High-Entropy Alloys by Laser Processing

    NASA Astrophysics Data System (ADS)

    Ocelík, V.; Janssen, N.; Smith, S. N.; De Hosson, J. Th. M.

    2016-07-01

    This contribution concentrates on the possibilities of additive manufacturing of high-entropy clad layers by laser processing. In particular, the effects of the laser surface processing parameters on the microstructure and hardness of high-entropy alloys (HEAs) were examined. AlCoCrFeNi alloys with different amounts of aluminum prepared by arc melting were investigated and compared with laser beam remelted HEAs of the same composition. Attempts to form HEA coatings by direct laser deposition from a mixture of elemental powders were made for the AlCoCrFeNi and AlCrFeNiTa compositions. A strong influence of solidification rate on the amounts of the face-centered cubic and body-centered cubic phases, their chemical composition, and their spatial distribution was detected for two-phase AlCoCrFeNi HEAs. It is concluded that a high-power laser is a versatile tool for synthesizing interesting HEAs by additive manufacturing. Critical issues are related to the rate of (re)solidification, the dilution with the substrate, the powder efficiency during cladding, and differences in the melting points of the clad powders, which make additive manufacturing from a simple mixture of elemental powders a challenging approach.

  19. The metallurgy and processing science of metal additive manufacturing

    SciTech Connect

    Sames, William J.; List, III, Frederick Alyious; Pannala, Sreekanth; Dehoff, Ryan R.; Babu, Sudarsanam Suresh

    2016-01-01

    Additive manufacturing (AM), widely known as 3D printing, is a method of manufacturing that forms parts from powder, wire, or sheets in a process that proceeds layer by layer. Many techniques (using many different names) have been developed to accomplish this via melting or solid-state joining. In this review, these techniques for producing metal parts are explored, with a focus on the science of metal AM: processing defects, heat transfer, solidification, solid-state precipitation, mechanical properties, and post-processing metallurgy. The various metal AM techniques are compared, with analysis of the strengths and limitations of each. Few alloys have been developed for commercial production, but recent development efforts are presented as a path for the ongoing development of new materials for AM processes.

  1. Image enhancement based on gamma map processing

    NASA Astrophysics Data System (ADS)

    Tseng, Chen-Yu; Wang, Sheng-Jyh; Chen, Yi-An

    2010-05-01

    This paper proposes a novel image enhancement technique based on Gamma Map Processing (GMP). In this approach, a base gamma map is generated directly from the intensity image. A sequence of gamma map processing operations is then performed to produce a channel-wise gamma map. By mapping each channel through its estimated gamma map, the detail, colorfulness, and sharpness of the original image are automatically improved. In addition, the dynamic range of the image can be virtually expanded.
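    The core mapping step can be sketched in a few lines. This is not the authors' estimation procedure (constructing the gamma map is the paper's contribution); it only illustrates, with a hypothetical intensity-driven map, what applying a per-pixel gamma map means:

```python
import numpy as np

def apply_gamma_map(img, gamma_map):
    """Per-pixel gamma correction: out(x, y) = in(x, y) ** gamma(x, y).

    img and gamma_map are float arrays; img is expected in [0, 1].
    """
    return np.clip(img, 0.0, 1.0) ** gamma_map

# Toy example: a map driven by the local intensity itself, so shadows
# get gamma < 1 (lifted) and highlights get gamma > 1 (compressed).
img = np.linspace(0.0, 1.0, 5)     # a 1-D "image" for illustration
gamma_map = 0.5 + img              # gamma ranges from 0.5 to 1.5
out = apply_gamma_map(img, gamma_map)
print(np.round(out, 3))
```

    A real channel-wise variant simply applies a separate gamma map to each color channel, which is where the colorfulness improvement comes from.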

  2. Additives

    NASA Technical Reports Server (NTRS)

    Smalheer, C. V.

    1973-01-01

    The chemistry of lubricant additives is discussed to show what the additives are chemically and what functions they perform in the lubrication of various kinds of equipment. Current theories regarding the mode of action of lubricant additives are presented. The additive groups discussed include the following: (1) detergents and dispersants, (2) corrosion inhibitors, (3) antioxidants, (4) viscosity index improvers, (5) pour point depressants, and (6) antifouling agents.

  3. Cluster-based parallel image processing toolkit

    NASA Astrophysics Data System (ADS)

    Squyres, Jeffery M.; Lumsdaine, Andrew; Stevenson, Robert L.

    1995-03-01

    Many image processing tasks exhibit a high degree of data locality and parallelism and map quite readily to specialized massively parallel computing hardware. However, as network technologies continue to mature, workstation clusters are becoming a viable and economical parallel computing resource, so it is important to understand how to use these environments for parallel image processing as well. In this paper we discuss our implementation of a parallel image processing software library (the Parallel Image Processing Toolkit). The Toolkit uses a message-passing model of parallelism designed around the Message Passing Interface (MPI) standard. Experimental results are presented to demonstrate the parallel speedup obtained with the Parallel Image Processing Toolkit in a typical workstation cluster over a wide variety of image processing tasks. We also discuss load balancing and the potential for parallelizing portions of image processing tasks that seem to be inherently sequential, such as visualization and data I/O.
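    The row-decomposition idea behind such a toolkit can be sketched without MPI. The example below is plain NumPy (the actual Toolkit distributes the blocks over MPI worker processes): an image is split into row blocks with halo ("ghost") rows so that a vertical window filter computed independently per block reassembles exactly to the serial result.

```python
import numpy as np

def split_rows(image, n_workers, halo):
    """Split an image into row blocks, each padded with `halo` ghost rows,
    so that a (2*halo+1)-tall window operator can run independently on
    every block -- the data decomposition used by message-passing
    image-processing libraries."""
    h = image.shape[0]
    bounds = np.linspace(0, h, n_workers + 1, dtype=int)
    blocks = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        a, b = max(lo - halo, 0), min(hi + halo, h)
        blocks.append((lo - a, hi - a, image[a:b]))  # (core start, core stop, data)
    return blocks

def box_blur_rows(block):
    """3x1 vertical mean filter, returning only the core rows."""
    lo, hi, data = block
    padded = np.pad(data, ((1, 1), (0, 0)), mode="edge")
    out = (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0
    return out[lo:hi]

img = np.arange(64, dtype=float).reshape(8, 8)
parallel = np.vstack([box_blur_rows(b) for b in split_rows(img, 3, halo=1)])
serial = box_blur_rows((0, 8, img))
print(np.allclose(parallel, serial))   # blocks reassemble to the serial result
```

    In the MPI setting each block would be scattered to a worker and the filtered core rows gathered back, which is also where the load-balancing questions mentioned in the abstract arise.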

  4. In-line image analysis on the effects of additives in batch cooling crystallization

    NASA Astrophysics Data System (ADS)

    Qu, Haiyan; Louhi-Kultanen, Marjatta; Kallas, Juha

    2006-03-01

    The effects of two potassium salt additives, ethylene diamine tetra-acetic acid dipotassium salt (EDTA) and potassium pyrophosphate (KPY), on the batch cooling crystallization of potassium dihydrogen phosphate (KDP) were investigated. The growth rates of certain crystal faces were determined from in-line images taken with an MTS particle image analysis (PIA) video microscope. An in-line image processing method was developed to characterize the size and shape of the crystals. The nucleation kinetics were studied by measuring the metastable zone width and the induction time. A significant promoting effect on both nucleation and growth of KDP was observed when EDTA was used as an additive. KPY, however, exhibited a strong inhibiting effect. The mechanism underlying the promoting effect of EDTA on crystal growth was further studied with a two-dimensional nucleation model. It is shown that the presence of EDTA increased the density of adsorbed molecules of the crystallizing solute on the surface of the crystal.
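    As an aside on how face growth rates are typically extracted from such image sequences: the growth rate is the slope of face size versus time over consecutive frames. A minimal sketch, with entirely hypothetical measurements (the paper's data and imaging pipeline are not reproduced here):

```python
import numpy as np

# Hypothetical face-length measurements (micrometres) extracted from a
# sequence of in-line images taken at 60 s intervals; the face growth
# rate G is the slope of size versus time.
t = np.arange(0, 6) * 60.0                           # s
L = np.array([12.0, 14.1, 15.9, 18.2, 20.0, 21.9])   # um

G, L0 = np.polyfit(t, L, 1)                          # linear fit: L(t) = G*t + L0
print(f"growth rate G = {G * 60:.2f} um/min (initial size {L0:.1f} um)")
```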

  5. A patch-based cross masking model for natural images with detail loss and additive defects

    NASA Astrophysics Data System (ADS)

    Liu, Yucheng; Allebach, Jan P.

    2015-03-01

    Visual masking is an effect whereby the content of an image reduces the detectability of a given target signal hidden in that image. The effect has found application in numerous image processing and vision tasks. In the past few decades, much research on visual masking has been based on models optimized for artificial targets placed upon unnatural masks. Over the years, there has been a tendency to apply masking models to predict natural image quality and the detection threshold of distortions presented in natural images. However, to our knowledge few studies have examined the generalizability of masking models to different types of distortion in natural images. In this work, we measure the ability of natural image patches to mask three different types of distortion, and analyze the performance of the conventional gain-control model in predicting the distortion detection threshold. We then propose a new masking model in which detail loss and additive defects are modeled in two parallel vision channels and interact with each other via a cross-masking mechanism. We show that the proposed cross-masking model adapts better to the various image structures and distortions found in natural scenes.
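    The conventional gain-control model evaluated here is, in its standard textbook form, a divisive normalization of target contrast by a masking pool. A minimal sketch, with illustrative placeholder exponents and constants rather than the paper's fitted values:

```python
import numpy as np

def gain_control_response(c_target, c_mask, p=2.4, q=2.0, b=0.035):
    """Divisive contrast gain control: the mask contrast enters the
    normalization pool and suppresses the response to the target.
    Parameter values are illustrative, not fitted."""
    return np.abs(c_target) ** p / (
        b ** q + np.abs(c_target) ** q + np.abs(c_mask) ** q)

target = 0.05
for mask in (0.0, 0.1, 0.4):
    r = gain_control_response(target, mask)
    print(f"mask contrast {mask:.1f} -> response {r:.3f}")
```

    Because the response falls as mask contrast rises, the contrast needed to reach a fixed detection criterion (the threshold) rises with the mask, which is the behavior the masking experiments measure.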

  6. Structural order in additive processed bulk heterojunction organic solar cells

    NASA Astrophysics Data System (ADS)

    Rogers, James Thomas

    Considerable academic and industrial effort has been dedicated to resolving the scientific and technological issues associated with the fabrication of efficient plastic solar cells via solution deposition techniques. The most successful strategy for generating solution-processable devices implements a two-component donor-acceptor system composed of a (p-type) narrow-bandgap conjugated polymer donor blended with an (n-type) fullerene acceptor. Due to the limited exciton diffusion lengths (~10 nm) inherent to these materials, efficient photoinduced charge generation requires heterojunction formation (i.e., donor/acceptor interfaces) in close proximity to the region of exciton generation. Maximal charge extraction therefore requires that the donor and acceptor components form nanoscale phase-separated percolating pathways to their respective electrodes. Devices exhibiting these structural characteristics are termed bulk heterojunction (BHJ) devices. Although the BHJ architecture highlights the basic characteristics of functional donor-acceptor organic solar cells, device optimization requires internal order within each phase and proper organization relative to the substrate in order to maximize charge transport efficiencies and minimize charge carrier recombination losses. The economic viability of BHJ solar cells hinges upon the minimization of processing costs; thus, commercially relevant processing techniques should generate optimal structural characteristics during film formation, eliminating the need for additional post-deposition processing steps. Empirical optimization has shown that solution deposition using high-boiling-point additives (e.g., octanedithiol (ODT)) provides a simple and widely used fabrication method for maximizing the power conversion efficiencies of BHJ solar cells. This work will show, using x-ray scattering, that a small percentage of ODT (~2%) in chlorobenzene induces the nucleation of polymeric crystallites within 2 min of deposition

  7. Applications Of Image Processing In Criminalistics

    NASA Astrophysics Data System (ADS)

    Krile, Thomas F.; Walkup, John F.; Barsallo, Adonis; Olimb, Hal; Tarng, Jaw-Horng

    1987-01-01

    A review of some basic image processing techniques for enhancement and restoration of images is given. Both digital and optical approaches are discussed. Fingerprint images are used as examples to illustrate the various processing techniques and their potential applications in criminalistics.

  8. Astronomy in the Cloud: Using MapReduce for Image Co-Addition

    NASA Astrophysics Data System (ADS)

    Wiley, K.; Connolly, A.; Gardner, J.; Krughoff, S.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.

    2011-03-01

    In the coming decade, astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. The study of these sources will involve computational challenges such as anomaly detection, classification, and moving-object tracking. Since such studies benefit from the highest-quality data, methods such as image co-addition, i.e., astrometric registration followed by per-pixel summation, will be a critical preprocessing step prior to scientific investigation. With a requirement that these images be analyzed on a nightly basis to identify moving sources, such as potentially hazardous asteroids, or transient objects, such as supernovae, these data streams present many computational challenges. Given the quantity of data involved, the computational load of these problems can only be addressed by distributing the workload over a large number of nodes. However, the high data throughput demanded by these applications may present scalability challenges for certain storage architectures. One scalable data-processing method that has emerged in recent years is MapReduce, and in this article we focus on its popular open-source implementation called Hadoop. In the Hadoop framework, the data are partitioned among storage attached directly to worker nodes, and the processing workload is scheduled in parallel on the nodes that contain the required input data. A further motivation for using Hadoop is that it allows us to exploit cloud computing resources, i.e., platforms where Hadoop is offered as a service. We report on our experience of implementing a scalable image-processing pipeline for the SDSS imaging database using Hadoop. This multiterabyte imaging data set provides a good testbed for algorithm development, since its scope and structure approximate future surveys. First, we describe MapReduce and how we adapted image co-addition to the MapReduce framework. Then we describe a number of optimizations to our basic approach.
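    The adaptation of co-addition to MapReduce follows the pattern sketched below. This is a toy pure-Python analogue, not the authors' Hadoop pipeline: the map phase emits (sky-pixel, value) pairs from each astrometrically registered exposure, and the reduce phase groups by pixel and combines the contributions.

```python
from collections import defaultdict

def map_phase(images):
    """Map: each registered exposure emits (pixel coordinate, value)
    pairs; masked pixels (None) are skipped."""
    for img in images:
        for coord, value in img.items():
            if value is not None:
                yield coord, value

def reduce_phase(pairs):
    """Reduce: group by pixel coordinate and average the contributions,
    yielding the co-added (stacked) image."""
    acc = defaultdict(list)
    for coord, value in pairs:
        acc[coord].append(value)
    return {coord: sum(vals) / len(vals) for coord, vals in acc.items()}

# Three tiny registered "exposures" covering the same 2x2 footprint.
exposures = [
    {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 3.0, (1, 1): None},
    {(0, 0): 1.2, (0, 1): 1.8, (1, 0): 2.8, (1, 1): 4.0},
    {(0, 0): 0.8, (0, 1): 2.2, (1, 0): 3.4, (1, 1): 4.2},
]
coadd = reduce_phase(map_phase(exposures))
print(coadd)
```

    Hadoop's shuffle stage performs the grouping-by-key between the two phases; the per-pixel averaging shown here is where depth (and signal-to-noise) is gained over any single exposure.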

  9. The Airborne Ocean Color Imager - System description and image processing

    NASA Technical Reports Server (NTRS)

    Wrigley, Robert C.; Slye, Robert E.; Klooster, Steven A.; Freedman, Richard S.; Carle, Mark; Mcgregor, Lloyd F.

    1992-01-01

    The Airborne Ocean Color Imager was developed as an aircraft instrument to simulate the spectral and radiometric characteristics of the next generation of satellite ocean color instrumentation. Data processing programs have been developed as extensions of the Coastal Zone Color Scanner algorithms for atmospheric correction and bio-optical output products. The latter include several bio-optical algorithms for estimating phytoplankton pigment concentration, as well as one for the diffuse attenuation coefficient of the water. Additional programs have been developed to geolocate these products and remap them into a georeferenced data base, using data from the aircraft's inertial navigation system. Examples illustrate the sequential data products generated by the processing system, using data from flightlines near the mouth of the Mississippi River: from raw data to atmospherically corrected data, to bio-optical data, to geolocated data, and, finally, to georeferenced data.

  10. Processable high temperature resistant addition type polyimide laminating resins

    NASA Technical Reports Server (NTRS)

    Serafini, T. T.; Delvigs, P.

    1973-01-01

    Basic studies that were performed using model compounds to elucidate the polymerization mechanism of the so-called addition-type (A-type) polyimides are reviewed. The fabrication and properties of polyimide/graphite fiber composites using A-type polyimide prepolymers as the matrix are also reviewed. An alternate method for preparing processable A-type polyimides by means of in situ polymerization of monomer reactants (PMR) on the fiber reinforcement is described. The elevated temperature properties of A-type PMR/graphite fiber composites are also presented.

  11. Programmable remapper for image processing

    NASA Technical Reports Server (NTRS)

    Juday, Richard D. (Inventor); Sampsell, Jeffrey B. (Inventor)

    1991-01-01

    A video-rate coordinate remapper includes a memory for storing a plurality of transformations on look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for certain defects of visually impaired people. The invention also includes means for shifting input pixels and means for scrolling the output matrix.
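    The table-driven transformation at the heart of such a remapper can be sketched with array indexing. This is a software stand-in for the hardware look-up tables; the 180-degree rotation LUT is just one example transform:

```python
import numpy as np

def remap(image, row_lut, col_lut):
    """Apply a precomputed coordinate transformation: output pixel (i, j)
    is taken from input pixel (row_lut[i, j], col_lut[i, j]).  A single
    input pixel may feed many outputs (one-to-many) and several outputs
    may draw on the same input (many-to-one), as in a table-driven
    video-rate remapper."""
    return image[row_lut, col_lut]

img = np.arange(16).reshape(4, 4)

# Look-up tables implementing a 180-degree rotation of the image.
rows, cols = np.indices(img.shape)
rot180 = remap(img, 3 - rows, 3 - cols)
print(rot180)
```

    Because the tables are precomputed, switching to a different operator-selected transform (e.g. a warp tailored to a particular visual-field defect) means only swapping look-up tables, not recomputing geometry per frame.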

  12. A model for simulation and processing of radar images

    NASA Technical Reports Server (NTRS)

    Stiles, J. A.; Frost, V. S.; Shanmugam, K. S.; Holtzman, J. C.

    1981-01-01

    A model for the recording, processing, presentation, and analysis of radar images in digital form is presented. The observed image is represented as having two random components. One component models the variation due to the coherent addition of electromagnetic energy scattered from different objects in the illuminated areas; this component is referred to as fading. The other component is a representation of the terrain variation, which can be described as the actual signal the radar is attempting to measure. The combination of these two components describes radar images as the output of a linear space-variant filter operating on the product of the fading and terrain random processes. In addition, the model is applied to a digital image processing problem through the design and implementation of an enhancement scheme. Finally, parallel approaches are being employed as possible means of solving other processing problems such as SAR image map-matching, data compression, and pattern recognition.
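    The two-component model lends itself to a quick simulation. In the sketch below the terrain values are hypothetical and single-look fading is drawn as unit-mean exponential noise (a standard assumption for intensity images); it shows the multiplicative structure and how multi-look averaging suppresses the fading component:

```python
import numpy as np

rng = np.random.default_rng(1)

# Terrain: the deterministic signal the radar attempts to measure.
terrain = np.tile(np.array([20.0, 80.0]), (64, 32))   # 64 x 64 scene

def observe(n_looks):
    """Average n_looks independent realizations of terrain * fading,
    with unit-mean exponential fading (fully developed speckle)."""
    fading = rng.exponential(1.0, size=(n_looks,) + terrain.shape)
    return (terrain * fading).mean(axis=0)

region = terrain == 20.0     # a homogeneous patch of the scene
cvs = {}
for n in (1, 16):
    img = observe(n)
    cvs[n] = img[region].std() / img[region].mean()
    print(f"{n:2d} look(s): relative fading fluctuation {cvs[n]:.2f}")
```

    Over a homogeneous region the relative fluctuation of a single-look intensity image is about 1 (the hallmark of multiplicative fading) and drops roughly as 1/sqrt(N) with N-look averaging, while the mean stays anchored to the terrain value.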

  13. Handbook on COMTAL's Image Processing System

    NASA Technical Reports Server (NTRS)

    Faulcon, N. D.

    1983-01-01

    An image processing system is the combination of an image processor with other control and display devices plus the necessary software needed to produce an interactive capability to analyze and enhance image data. Such an image processing system installed at NASA Langley Research Center, Instrument Research Division, Acoustics and Vibration Instrumentation Section (AVIS) is described. Although much of the information contained herein can be found in the other references, it is hoped that this single handbook will give the user better access, in concise form, to pertinent information and usage of the image processing system.

  14. Sequential Processes In Image Generation.

    ERIC Educational Resources Information Center

    Kosslyn, Stephen M.; And Others

    1988-01-01

    Results of three experiments are reported, which indicate that images of simple two-dimensional patterns are formed sequentially. The subjects included 48 undergraduates and 16 members of the Harvard University (Cambridge, Mass.) community. A new objective methodology indicates that images of complex letters require more time to generate. (TJH)

  15. Image processing on the IBM personal computer

    NASA Technical Reports Server (NTRS)

    Myers, H. J.; Bernstein, R.

    1985-01-01

    An experimental, personal computer image processing system has been developed which provides a variety of processing functions in an environment that connects programs by means of a 'menu' for both casual and experienced users. The system is implemented by a compiled BASIC program that is coupled to assembly language subroutines. Image processing functions encompass subimage extraction, image coloring, area classification, histogramming, contrast enhancement, filtering, and pixel extraction.

  16. Semi-automated Image Processing for Preclinical Bioluminescent Imaging

    PubMed Central

    Slavine, Nikolai V; McColl, Roderick W

    2015-01-01

    Objective: Bioluminescent imaging is a valuable noninvasive technique for investigating tumor dynamics and specific biological molecular events in living animals, to better understand the effects of human disease in animal models. The purpose of this study was to develop and test a strategy for automated bioluminescence image processing, from data acquisition to obtaining 3D images. Methods: To optimize this procedure, a semi-automated image processing approach with a multi-modality image handling environment was developed. To identify the location and strength of a bioluminescent source, we used the light flux detected on the surface of the imaged object by CCD cameras. For phantom calibration tests and object surface reconstruction we used the MLEM algorithm. For internal bioluminescent sources we used the diffusion approximation, balancing the internal and external intensities on the boundary of the medium; after determining an initial approximation for the photon fluence, we applied a novel iterative deconvolution method to obtain the final reconstruction result. Results: We find that the reconstruction techniques successfully used the depth-dependent light transport approach and semi-automated image processing to provide a realistic 3D model of the lung tumor. Our image processing software can optimize and decrease the time required for volumetric imaging and quantitative assessment. Conclusion: The data obtained from light phantom and mouse lung tumor images demonstrate the utility of the image reconstruction algorithms and the semi-automated approach to bioluminescent image processing. We suggest that the developed approach can be applied to preclinical imaging studies: characterizing tumor growth, identifying metastases, and potentially determining the effectiveness of cancer treatment. PMID:26618187

  17. Method for controlling a laser additive process using intrinsic illumination

    NASA Astrophysics Data System (ADS)

    Tait, Robert; Cai, Guoshuang; Azer, Magdi; Chen, Xiaobin; Liu, Yong; Harding, Kevin

    2015-05-01

    One form of additive manufacturing uses a laser to generate a melt pool from powdered metal sprayed from a nozzle. The laser net-shape machining system builds the part a layer at a time by following a predetermined path. However, because the path may need to take many turns, maintaining a constant melt pool may not be easy: a straight section may require one speed and power, while a sharp bend would over-melt the metal at the same settings. This paper describes a process monitoring method that uses the intrinsic IR radiation from the melt pool, along with a process model configured to establish target values for the parameters associated with the manufacture or repair. This model is based upon known properties of the metal being used as well as the properties of the laser beam. An adaptive control technique is then employed to adjust the process parameters of the machining system based upon the real-time melt pool measurement. Since the system uses the heat radiating from the melt pool, previously deposited metal does not confuse the system, as only the melted material is seen by the camera.
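    The control loop can be caricatured in a few lines. Everything here is hypothetical (a first-order toy thermal model and a simple proportional correction; the paper's process model and control law are not public): it only shows the measure-compare-correct structure against a melt-pool temperature target.

```python
def run_controller(target=1800.0, power=400.0, gain=0.05, steps=200):
    """Drive a toy melt-pool model to a target temperature by
    proportional correction of the laser power each measurement cycle.
    All constants are illustrative placeholders."""
    temp = 300.0
    for _ in range(steps):
        # Toy plant: pool temperature relaxes toward a power-dependent level.
        temp += 0.2 * (300.0 + 4.0 * power - temp)
        # Correct the laser power from the (IR-measured) pool temperature.
        power += gain * (target - temp)
    return temp, power

temp, power = run_controller()
print(f"pool temperature {temp:.0f} at laser power {power:.0f} (arbitrary units)")
```

    In the real system the "plant" is the melt pool imaged by the IR camera and the corrected parameters include speed as well as power, so a sharp turn that would otherwise over-melt the metal instead triggers a power reduction.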

  18. Image processing applied to laser cladding process

    SciTech Connect

    Meriaudeau, F.; Truchetet, F.

    1996-12-31

    The laser cladding process, which consists of adding melted powder to a substrate in order to improve or change the behavior of the material against corrosion, fatigue, and so on, involves many parameters. In order to produce good tracks, some parameters need to be controlled during the process. The authors present here a low-cost monitoring system using two CCD matrix cameras. One camera provides surface temperature measurements, while the other gives information on the powder distribution and the geometric characteristics of the tracks. The surface temperature (via Beer-Lambert's law) enables one to detect variations in the mass feed rate; using such a system the authors are able to detect fluctuations of 2 to 3 g/min in the mass flow rate. A simple algorithm applied to the data acquired from the second CCD camera allows them to see very weak fluctuations in both gas fluxes (carrier or protective gas). During the process, this camera is also used to perform geometric measurements: the height and the width of the track are obtained in real time and give the operator information related to process parameters such as the processing speed and the mass flow rate. The authors present the results provided by their system in order to enhance the efficiency of the laser cladding process. The conclusion is dedicated to a summary of the presented work and expectations for the future.

  19. Computers in Public Schools: Changing the Image with Image Processing.

    ERIC Educational Resources Information Center

    Raphael, Jacqueline; Greenberg, Richard

    1995-01-01

    The kinds of educational technologies selected can make the difference between uninspired, rote computer use and challenging learning experiences. University of Arizona's Image Processing for Teaching Project has worked with over 1,000 teachers to develop image-processing techniques that provide students with exciting, open-ended opportunities for…

  20. Direct laser additive fabrication system with image feedback control

    DOEpatents

    Griffith, Michelle L.; Hofmeister, William H.; Knorovsky, Gerald A.; MacCallum, Danny O.; Schlienger, M. Eric; Smugeresky, John E.

    2002-01-01

    A closed-loop, feedback-controlled direct laser fabrication system is disclosed. The feedback refers to the actual growth conditions obtained by real-time analysis of thermal radiation images. The resulting system can fabricate components with severalfold improvement in dimensional tolerances and surface finish.

  1. Fabrication of a Flexible Amperometric Glucose Sensor Using Additive Processes

    PubMed Central

    Du, Xiaosong; Durgan, Christopher J.; Matthews, David J.; Motley, Joshua R.; Tan, Xuebin; Pholsena, Kovit; Árnadóttir, Líney; Castle, Jessica R.; Jacobs, Peter G.; Cargill, Robert S.; Ward, W. Kenneth; Conley, John F.; Herman, Gregory S.

    2015-01-01

    This study details the use of printing and other additive processes to fabricate a novel amperometric glucose sensor. The sensor was fabricated using a Au-coated 12.7 μm thick polyimide substrate as a starting material, where micro-contact printing, electrochemical plating, chloridization, electrohydrodynamic jet (e-jet) printing, and spin coating were used to pattern, deposit, chloridize, print, and coat functional materials, respectively. We have found that e-jet printing was effective for the deposition and patterning of glucose oxidase inks with lateral feature sizes between ~5 and 1000 μm in width, and that the glucose oxidase was still active after printing. The thickness of the permselective layer was optimized to obtain a linear response for glucose concentrations up to 32 mM, and no response to acetaminophen, a common interfering compound, was observed. The use of such thin polyimide substrates allows the sensors to be wrapped around catheters with a small radius of curvature (~250 μm), where additive and microfabrication methods may allow significant cost reductions. PMID:26634186

  2. Porosity Measurements and Analysis for Metal Additive Manufacturing Process Control

    PubMed Central

    Slotwinski, John A; Garboczi, Edward J; Hebenstreit, Keith M

    2014-01-01

    Additive manufacturing techniques can produce complex, high-value metal parts, with potential applications as critical metal components such as those found in aerospace engines and as customized biomedical implants. Material porosity in these parts is undesirable for aerospace parts, since porosity could lead to premature failure, and desirable for some biomedical implants, since surface-breaking pores allow for better integration with biological tissue. Changes in a part's porosity during an additive manufacturing build may also be an indication of an undesired change in the build process. Here, we present efforts to develop an ultrasonic sensor for monitoring changes in the porosity of metal parts during fabrication on a metal powder bed fusion system. The development of well-characterized reference samples, measurements of the porosity of these samples with multiple techniques, and the correlation of ultrasonic measurements with the degree of porosity are presented. A proposed sensor design, measurement strategy, and future experimental plans for a metal powder bed fusion system are also presented. PMID:26601041

  4. Image Processing in Intravascular OCT

    NASA Astrophysics Data System (ADS)

    Wang, Zhao; Wilson, David L.; Bezerra, Hiram G.; Rollins, Andrew M.

    Coronary artery disease is the leading cause of death in the world. Intravascular optical coherence tomography (IVOCT) is rapidly becoming a promising imaging modality for characterization of atherosclerotic plaques and evaluation of coronary stenting. OCT has several unique advantages over alternative technologies, such as intravascular ultrasound (IVUS), due to its better resolution and contrast. For example, OCT is currently the only imaging modality that can measure the thickness of the fibrous cap of an atherosclerotic plaque in vivo. OCT also has the ability to accurately assess the coverage of individual stent struts by neointimal tissue over time. However, it is extremely time-consuming to analyze IVOCT images manually to derive quantitative diagnostic metrics. In this chapter, we introduce some computer-aided methods to automate the common IVOCT image analysis tasks.

  5. Multiscale simulation process and application to additives in porous composite battery electrodes

    NASA Astrophysics Data System (ADS)

    Wieser, Christian; Prill, Torben; Schladitz, Katja

    2015-03-01

    Structure-resolving simulation of porous materials in electrochemical cells such as fuel cells and lithium ion batteries allows for correlating electrical performance with material morphology. In lithium ion batteries, the characteristic length scales of active material particles and additives range over several orders of magnitude. Hence, providing a computational mesh resolving all length scales is not reasonably feasible, and alternative approaches are required. In the work presented here, a virtual process to simulate lithium ion batteries by bridging the scales is introduced. Representative lithium ion battery electrode coatings, composed of μm-scale graphite particles as active material and a nm-scale carbon/polymeric binder mixture as an additive, are imaged with synchrotron radiation computed tomography (SR-CT) and sequential focused ion beam/scanning electron microscopy (FIB/SEM), respectively. Applying novel image processing methodologies to the FIB/SEM images, the data sets are binarized to provide a computational grid for calculating the effective mass transport properties of the electrolyte phase in the nanoporous additive. Afterwards, the homogenized additive is virtually added to the micropores of the binarized SR-CT data set representing the active particle structure, and the resulting electrode structure is assembled into a virtual half-cell for electrochemical microheterogeneous simulation. Preliminary battery performance simulations indicate that including the additive has a non-negligible impact.

  6. Combining advanced imaging processing and low cost remote imaging capabilities

    NASA Astrophysics Data System (ADS)

    Rohrer, Matthew J.; McQuiddy, Brian

    2008-04-01

    Target images are very important for evaluating the situation when Unattended Ground Sensors (UGS) are deployed. These images add a significant amount of information for determining the difference between hostile and non-hostile activities, the number of targets in an area, the difference between animals and people, the movement dynamics of targets, and when specific activities of interest are taking place. The imaging capabilities of UGS systems need to capture only target activity, not images without targets in the field of view. Current UGS remote imaging systems are not optimized for target processing and are not low cost. In this paper, McQ describes an architectural and technological approach for significantly improving the processing of images to provide target information while reducing the cost of the intelligent remote imaging capability.

  7. Matching rendered and real world images by digital image processing

    NASA Astrophysics Data System (ADS)

    Mitjà, Carles; Bover, Toni; Bigas, Miquel; Escofet, Jaume

    2010-05-01

    Recent advances in computer-generated images (CGI) have been used in commercial and industrial photography, providing a broad scope in product advertising. Mixing real-world images with those rendered from virtual-space software shows a more or less visible mismatch between the corresponding image quality performance. Rendered images are produced by software whose quality is limited only by the output resolution. Real-world images are taken with cameras subject to image degradation factors such as residual lens aberrations, diffraction, sensor low-pass anti-aliasing filters, color-pattern demosaicing, etc. The effect of all these image quality degradation factors can be characterized by the system Point Spread Function (PSF). Because the image is the convolution of the object with the system PSF, its characterization shows the amount of degradation added to any picture taken. This work explores the use of image processing to degrade the rendered images following the parameters indicated by the real system PSF, attempting to match the virtual and real-world image qualities. The system MTF is determined by the slanted-edge method both in laboratory conditions and in the real picture environment in order to compare the influence of the working conditions on the device performance; an approximation to the system PSF is derived from the two measurements. The rendered images are filtered through a Gaussian filter obtained from the taking system's PSF. Results with and without filtering are shown and compared by measuring the contrast achieved in different regions of the final image.
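    The Gaussian degradation step lends itself to a short sketch. This is a minimal illustration, assuming the PSF is well approximated by a Gaussian whose width is derived from the slanted-edge MTF measurement; the sigma value here is purely illustrative:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def degrade_rendered(image, sigma):
        """Blur a rendered image with a Gaussian approximation of the
        taking system's PSF, matching its quality to the real camera's."""
        return gaussian_filter(image.astype(float), sigma=sigma)

    # Illustrative input: a sharp vertical edge, softened with a sigma that
    # would in practice come from the slanted-edge MTF measurement.
    edge = np.zeros((32, 32))
    edge[:, 16:] = 1.0
    soft = degrade_rendered(edge, sigma=1.5)
    ```

    Convolving with a measured two-dimensional PSF kernel directly, rather than a fitted Gaussian, would follow the same pattern.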

  8. Using Image Processing to Determine Emphysema Severity

    NASA Astrophysics Data System (ADS)

    McKenzie, Alexander; Sadun, Alberto

    2010-10-01

    Currently, X-rays and computerized tomography (CT) scans are used to detect emphysema, but other tests are required to accurately quantify the amount of lung that has been affected by the disease. These images clearly show whether a patient has emphysema, but cannot, by visual scan alone, quantify the degree of the disease, which presents as subtle dark spots on the lung. Our goal is to use these CT scans to accurately diagnose and determine emphysema severity levels in patients. This will be accomplished by performing several different analyses of CT scan images of several patients representing a wide range of severity of the disease. In addition to analyzing the original CT data, this process will convert the data to one- and two-bit images and will then examine the deviation from a normal distribution curve to determine skewness. Our preliminary results show that this method of assessment appears to be more accurate and robust than the currently utilized methods, which involve looking at percentages of radiodensities in the air passages of the lung.
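    The skewness analysis can be illustrated with a short sketch. The intensity values below are synthetic stand-ins for CT data, not the study's measurements; the idea is that extra dark (low-attenuation) voxels pull the distribution's third moment negative:

    ```python
    import numpy as np

    def skewness(values):
        """Sample skewness (third standardized moment) of an intensity
        distribution; emphysema adds dark, low-density voxels, making
        the distribution negatively skewed."""
        v = np.asarray(values, dtype=float)
        d = v - v.mean()
        return (d ** 3).mean() / (d ** 2).mean() ** 1.5

    # Synthetic stand-ins for lung CT values (not real data): a roughly
    # symmetric "healthy" histogram vs. one with an added dark-voxel tail.
    rng = np.random.default_rng(0)
    healthy = rng.normal(-700.0, 50.0, 10_000)
    diseased = np.concatenate([healthy, rng.normal(-950.0, 20.0, 3_000)])
    ```

    Comparing the two skewness values then gives a scalar severity indicator, in the spirit of the deviation-from-normality measure described in the abstract.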

  9. Programmable Iterative Optical Image And Data Processing

    NASA Technical Reports Server (NTRS)

    Jackson, Deborah J.

    1995-01-01

    Proposed method of iterative optical image and data processing overcomes limitations imposed by loss of optical power after repeated passes through many optical elements - especially, beam splitters. Involves selective, timed combination of optical wavefront phase conjugation and amplification to regenerate images in real time to compensate for losses in optical iteration loops; timing such that amplification turned on to regenerate desired image, then turned off so as not to regenerate other, undesired images or spurious light propagating through loops from unwanted reflections.

  10. Utilizing image processing techniques to compute herbivory.

    PubMed

    Olson, T E; Barlow, V M

    2001-01-01

    Leafy spurge (Euphorbia esula L. sensu lato) is a perennial weed species common to the north-central United States and southern Canada. The plant is a foreign species toxic to cattle. Spurge infestation can reduce cattle carrying capacity by 50 to 75 percent [1]. University of Wyoming Entomology doctoral candidate Vonny Barlow is conducting research in the area of biological control of leafy spurge via the Aphthona nigriscutis Foudras flea beetle. He is addressing the question of variability within leafy spurge and its potential impact on flea beetle herbivory. One component of Barlow's research consists of measuring the herbivory of leafy spurge plant specimens after introducing adult beetles. Herbivory is the degree of consumption of the plant's leaves and was measured in two different ways. First, Barlow assigned each consumed plant specimen a visual rank from 1 to 5. Second, image processing techniques were applied to "before" and "after" images of each plant specimen in an attempt to quantify herbivory more accurately. Standardized techniques were used to acquire images before and after beetles were allowed to feed on the plants for a period of 12 days. Matlab was used as the image processing tool. The image processing algorithm allowed the user to crop the portion of the "before" image containing only plant foliage. Matlab then cropped the "after" image with the same dimensions and converted both images from RGB to grayscale. Each grayscale image was converted to binary based on a user-defined threshold value. Finally, herbivory was computed based on the number of black pixels in the "before" and "after" images. The image processing results were mixed. Although this technique depends on user input and non-ideal images, the data are useful to Barlow's research and offer insight into better imaging systems and processing algorithms. PMID:11347423
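    The pixel-counting step can be sketched as follows. The original work used Matlab; this is a hypothetical NumPy equivalent, with made-up 8x8 "images" in place of the grayscale photographs:

    ```python
    import numpy as np

    def herbivory_percent(before, after, threshold=0.5):
        """Estimate herbivory as the fractional loss of foliage pixels.
        Images are grayscale arrays in [0, 1]; pixels darker than the
        user-chosen threshold are counted as plant material, following
        the binarization step described in the abstract."""
        plant_before = np.count_nonzero(before < threshold)
        plant_after = np.count_nonzero(after < threshold)
        return 100.0 * (plant_before - plant_after) / plant_before

    # Hypothetical 8x8 "images": 16 dark foliage pixels before feeding,
    # 12 remaining afterwards, i.e. a quarter of the foliage consumed.
    before = np.ones((8, 8)); before[:4, :4] = 0.2
    after = np.ones((8, 8)); after[:3, :4] = 0.2
    print(herbivory_percent(before, after))  # → 25.0
    ```

    The user-defined threshold is the main source of sensitivity noted in the abstract; a poorly chosen value misclassifies shadow or background as foliage.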

  11. How Digital Image Processing Became Really Easy

    NASA Astrophysics Data System (ADS)

    Cannon, Michael

    1988-02-01

    In the early and mid-1970s, digital image processing was the subject of intense university and corporate research. The research lay along two lines: (1) developing mathematical techniques for improving the appearance of, or analyzing the contents of, images represented in digital form, and (2) creating cost-effective hardware to carry out these techniques. The research has been very effective, as evidenced by the continued decline of image processing as a research topic and the rapid increase in commercial companies marketing digital image processing software and hardware.

  12. Non-linear Post Processing Image Enhancement

    NASA Technical Reports Server (NTRS)

    Hunt, Shawn; Lopez, Alex; Torres, Angel

    1997-01-01

    A non-linear filter for image post-processing based on the feedforward neural network topology is presented. This study was undertaken to investigate the usefulness of "smart" filters in image post-processing. The filter has been shown to be useful in recovering high frequencies, such as those lost during the JPEG compression-decompression process. The filtered images have a higher signal-to-noise ratio and a higher perceived image quality. Simulation studies comparing the proposed filter with the optimum mean-square non-linear filter, examples of the high-frequency recovery, and the statistical properties of the filter are given.
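    The structure of such a neighborhood-based feedforward filter can be sketched as below. The weights are untrained and random, so this shows only the topology, not the trained filter from the paper; layer sizes are illustrative assumptions:

    ```python
    import numpy as np

    def neighborhoods(img, k=3):
        """Collect the k x k patch around every position where it fits."""
        h, w = img.shape
        return np.array([img[i:i + k, j:j + k].ravel()
                         for i in range(h - k + 1) for j in range(w - k + 1)])

    def mlp_filter(img, w1, b1, w2, b2):
        """One-hidden-layer feedforward filter: each output pixel is a
        non-linear function of its 3 x 3 input neighborhood."""
        x = neighborhoods(img)
        hidden = np.tanh(x @ w1 + b1)   # non-linear hidden layer
        out = hidden @ w2 + b2          # linear output unit
        h, w = img.shape
        return out.reshape(h - 2, w - 2)

    # Untrained random weights: a structural sketch only. A real filter
    # would be trained on pairs of compressed and original images.
    rng = np.random.default_rng(0)
    w1, b1 = rng.normal(size=(9, 8)), np.zeros(8)
    w2, b2 = rng.normal(size=8), 0.0
    out = mlp_filter(np.ones((8, 8)), w1, b1, w2, b2)
    ```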

  13. Quantitative image processing in fluid mechanics

    NASA Technical Reports Server (NTRS)

    Hesselink, Lambertus; Helman, James; Ning, Paul

    1992-01-01

    The current status of digital image processing in fluid flow research is reviewed. In particular, attention is given to a comprehensive approach to the extraction of quantitative data from multivariate databases and examples of recent developments. The discussion covers numerical simulations and experiments, data processing, generation and dissemination of knowledge, traditional image processing, hybrid processing, fluid flow vector field topology, and isosurface analysis using Marching Cubes.

  14. Anthropological methods of optical image processing

    NASA Astrophysics Data System (ADS)

    Ginzburg, V. M.

    1981-12-01

    Some applications of the new method for optical image processing, based on a prior separation of informative elements (IE) with the help of a defocusing equal to the average defocusing of the eye, considered in a previous paper, are described. A diagram of a "drawing" robot that uses defocusing and other mechanisms of the human visual system (VS) is given. Methods of narrowing the TV channel bandwidth and of eliminating noise in computer image processing by prior image defocusing are described.

  15. Water surface capturing by image processing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    An alternative means of measuring the water surface interface during laboratory experiments is processing a series of sequentially captured images. Image processing can provide a continuous, non-intrusive record of the water surface profile whose accuracy is not dependent on water depth. More trad...

  16. Additive manufacturing of stretchable tactile sensors: Processes, materials, and applications

    NASA Astrophysics Data System (ADS)

    Vatani, Morteza

    3D printing technology is becoming more ubiquitous every day, especially in the area of smart structures. However, fabrication of multi-material, functional, and smart structures is problematic because of process and material limitations. This thesis sought to develop a Direct Print Photopolymerization (DPP) fabrication technique that appreciably extends the manufacturing space for 3D smart structures. The method combines robotically controlled micro-extrusion of a filament with a photopolymerization process. The ability to use polymers, and ultimately their nanocomposites, gives the proposed process an advantage over current fabrication methods for 3D structures featuring mechanical, physical, and electrical functionalities. In addition, this study developed a printable, conductive, and stretchable nanocomposite based on a photocurable, stretchable liquid resin filled with multi-walled carbon nanotubes (MWNTs). This nanocomposite exhibited piezoresistivity, meaning its resistivity changes as it deforms, a favorable property for resistance-based tactile sensors, and it remained conductive under high tensile strains. Furthermore, this study offered a low-cost method for producing a unique, highly stretchable pressure-sensitive polymer, composed of an Ionic Liquid (IL) and a stretchable photopolymer embedded between two layers of Carbon Nanotube (CNT)-based stretchable electrodes. The developed IL-polymer showed both a field-effect property and piezoresistivity, and can detect tensile strains up to 30%. In summary, this research presents feasible methods and materials for printing 3D smart structures, especially in the context of flexible tactile sensors, and provides a foundation for future efforts in the fabrication of skin-like tactile sensors in three-dimensional motifs.

  17. Protocols for Image Processing based Underwater Inspection of Infrastructure Elements

    NASA Astrophysics Data System (ADS)

    O'Byrne, Michael; Ghosh, Bidisha; Schoefs, Franck; Pakrashi, Vikram

    2015-07-01

    Image processing can be an important tool for inspecting underwater infrastructure elements like bridge piers and pile wharves. Underwater inspection often relies on visual descriptions by divers who are not necessarily trained in the specifics of structural degradation, and the information may be vague, prone to error, or open to significant variation of interpretation. Underwater vehicles, on the other hand, can be quite expensive to employ for such inspections. Additionally, there is now significant encouragement globally towards the deployment of more offshore renewable wind turbines and wave devices, and the requirement for underwater inspection can be expected to increase significantly in the coming years. While the merit of image-processing-based assessment of the condition of underwater structures is understood to a certain degree, there is no existing protocol for such image-based methods. This paper discusses and describes an image processing protocol for underwater inspection of structures. A stereo imaging method is considered in this regard, and protocols are suggested for image storage, imaging, diving, and inspection. A combined underwater imaging protocol is finally presented which can be used for a variety of situations within a range of image scenes and environmental conditions affecting the imaging. An example of detecting marine growth on a structure in Cork Harbour, Ireland, is presented.

  18. SUPRIM: easily modified image processing software.

    PubMed

    Schroeter, J P; Bretaudiere, J P

    1996-01-01

    A flexible, modular software package intended for the processing of electron microscopy images is presented. The system consists of a set of image processing tools or filters, written in the C programming language, and a command line style user interface based on the UNIX shell. The pipe and filter structure of UNIX and the availability of command files in the form of shell scripts eases the construction of complex image processing procedures from the simpler tools. Implementation of a new image processing algorithm in SUPRIM may often be performed by construction of a new shell script, using already existing tools. Currently, the package has been used for two- and three-dimensional image processing and reconstruction of macromolecules and other structures of biological interest. PMID:8742734
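    SUPRIM's pipe-and-filter idea, building complex procedures by chaining simple tools, can be mimicked outside the shell. The filters below are illustrative stand-ins written for this sketch, not actual SUPRIM tools:

    ```python
    import numpy as np
    from functools import reduce

    # Each "filter" is a small function on an image array; a pipeline is
    # their left-to-right composition, analogous to `tool1 | tool2 | tool3`.
    def normalize(img):
        return (img - img.min()) / (img.max() - img.min())

    def invert(img):
        return 1.0 - img

    def threshold(level):
        return lambda img: (img > level).astype(float)

    def pipeline(*filters):
        return lambda img: reduce(lambda x, f: f(x), filters, img)

    # A new "procedure" assembled purely from existing pieces:
    binarize = pipeline(normalize, invert, threshold(0.5))
    img = np.array([[0.0, 2.0], [4.0, 8.0]])
    result = binarize(img)  # → [[1., 1.], [0., 0.]]
    ```

    As in the shell-script case, implementing a new algorithm often amounts to writing a new composition rather than new filters.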

  19. Pyrrolidinone derivatives as processing additives for solution processed organic solar cells

    NASA Astrophysics Data System (ADS)

    Vongsaysy, Uyxing; Pavageau, Bertrand; Servant, Laurent; Aziz, Hany

    2014-10-01

    Processing additives are widely used to increase the efficiency of solution-processed organic solar cells (OSCs). We use the Hansen solubility parameters (HSPs) to investigate novel processing additives. The HSPs predict pyrrolidinone derivatives to be efficient processing additives for OSC systems based on poly(3-hexylthiophene)/[6,6]-phenyl-C61-butyric acid methyl ester (P3HT/PCBM). Two pyrrolidinone derivatives are identified: 1-methyl-2-pyrrolidinone and 1-benzyl-2-pyrrolidinone. The processing additives are introduced at various concentrations in the formulation of the P3HT and PCBM solution. Electrical characterization shows that the two processing additives significantly increase the short-circuit current and thus the power conversion efficiency of the OSCs. The results highlight HSPs as an effective and relatively straightforward tool for optimizing OSC morphology from a theoretical standpoint. Such a tool will be invaluable for identifying additives for novel high-efficiency polymer species as they are synthesized, and thus for streamlining device fabrication and optimization.

  20. Thermal processing of EVA encapsulants and effects of formulation additives

    SciTech Connect

    Pern, F.J.; Glick, S.H.

    1996-05-01

    The authors investigated the in-situ processing temperatures and the effects of various formulation additives on the formation of ultraviolet (UV)-excitable chromophores during the thermal lamination and curing of ethylene-vinyl acetate (EVA) encapsulants. A programmable, microprocessor-controlled, double-bag vacuum laminator was used to study two commercial as-formulated EVA films, A9918P and 15295P, and solution-cast films of Elvax™ (EVX) impregnated with various curing agents and antioxidants. The results show that the actual measured temperatures of the EVA lagged significantly behind the programmed profiles for the heating elements and were affected by the total thermal mass loaded inside the laminator chamber. The antioxidant Naugard P™, used in the two commercial EVA formulations, greatly enhances the formation of UV-excitable, short chromophores upon curing, whereas other tested antioxidants show little effect. A new curing agent chosen specifically for the EVA formulation modification produces little or no effect on chromophore formation, no bubbling problems in the glass/EVX/glass laminates, and a gel content of ~80% when cured at a programmed 155 °C for 4 min. Also demonstrated is the greater discoloring effect at higher concentrations of curing-generated chromophores.

  1. Image processing for cameras with fiber bundle image relay.

    PubMed

    Olivas, Stephen J; Arianpour, Ashkan; Stamenov, Igor; Morrison, Rick; Stack, Ron A; Johnson, Adam R; Agurok, Ilya P; Ford, Joseph E

    2015-02-10

    Some high-performance imaging systems generate a curved focal surface and so are incompatible with focal plane arrays fabricated by conventional silicon processing. One example is a monocentric lens, which forms a wide field-of-view high-resolution spherical image with a radius equal to the focal length. Optical fiber bundles have been used to couple between this focal surface and planar image sensors. However, such fiber-coupled imaging systems suffer from artifacts due to image sampling and incoherent light transfer by the fiber bundle as well as resampling by the focal plane, resulting in a fixed obscuration pattern. Here, we describe digital image processing techniques to improve image quality in a compact 126° field-of-view, 30 megapixel panoramic imager, where a 12 mm focal length F/1.35 lens made of concentric glass surfaces forms a spherical image surface, which is fiber-coupled to six discrete CMOS focal planes. We characterize the locally space-variant system impulse response at various stages: monocentric lens image formation onto the 2.5 μm pitch fiber bundle, image transfer by the fiber bundle, and sensing by a 1.75 μm pitch backside illuminated color focal plane. We demonstrate methods to mitigate moiré artifacts and local obscuration, correct for sphere to plane mapping distortion and vignetting, and stitch together the image data from discrete sensors into a single panorama. We compare processed images from the prototype to those taken with a 10× larger commercial camera with comparable field-of-view and light collection. PMID:25968031

  2. CT Image Processing Using Public Digital Networks

    PubMed Central

    Rhodes, Michael L.; Azzawi, Yu-Ming; Quinn, John F.; Glenn, William V.; Rothman, Stephen L.G.

    1984-01-01

    Nationwide commercial computer communication is now commonplace for those applications where digital dialogues are generally short and widely distributed, and where bandwidth does not exceed that of dial-up telephone lines. Image processing using such networks is prohibitive because of the large volume of data inherent to digital pictures. With a blend of increasing bandwidth and distributed processing, network image processing becomes possible. This paper examines characteristics of a digital image processing service for a nationwide network of CT scanner installations. Issues of image transmission, data compression, distributed processing, software maintenance, and interfacility communication are also discussed. Included are results that show the volume and type of processing experienced by a network of over 50 CT scanners for the last 32 months.

  3. Image processing for drawing recognition

    NASA Astrophysics Data System (ADS)

    Feyzkhanov, Rustem; Zhelavskaya, Irina

    2014-03-01

    The task of recognizing the edges of rectangular structures is well known. Still, almost all existing approaches work with static images and have no limit on processing time. We propose applying homography to a video stream obtained from a webcam, and present an algorithm that can be used successfully for this kind of application. One of the main use cases of such an application is the recognition of drawings made by a person on a piece of paper in front of the webcam.

  4. Parallel digital signal processing architectures for image processing

    NASA Astrophysics Data System (ADS)

    Kshirsagar, Shirish P.; Hartley, David A.; Harvey, David M.; Hobson, Clifford A.

    1994-10-01

    This paper describes research into a high-speed image processing system using parallel digital signal processors for the processing of electro-optic images. The objective of the system is to reduce the processing time of non-contact inspection problems, including industrial and medical applications. A single processor cannot deliver the processing power these applications require; hence, a MIMD system was designed and constructed to enable fast processing of electro-optic images. The Texas Instruments TMS320C40 digital signal processor is used due to its high-speed floating-point CPU and its support for the parallel processing environment. A custom-designed VISION bus is provided to transfer images between processors. The system is being applied to solder joint inspection of high-technology printed circuit boards.

  5. Stable image acquisition for mobile image processing applications

    NASA Astrophysics Data System (ADS)

    Henning, Kai-Fabian; Fritze, Alexander; Gillich, Eugen; Mönks, Uwe; Lohweg, Volker

    2015-02-01

    Today, mobile devices (smartphones, tablets, etc.) are widespread and of high importance to their users. Their performance as well as versatility increases over time. This creates the opportunity to use such devices for more specific tasks like image processing in an industrial context. For the analysis of images, requirements like image quality (blur, illumination, etc.) as well as a defined relative position of the object to be inspected are crucial. Since mobile devices are handheld and used in constantly changing environments, the challenge is to fulfill these requirements. We present an approach to overcome these obstacles and stabilize the image capturing process such that image analysis becomes significantly improved on mobile devices. To this end, image processing methods are combined with sensor fusion concepts. The approach consists of three main parts. First, pose estimation methods are used to guide the user in moving the device to a defined position. Second, the sensor data and the pose information are combined for relative motion estimation. Finally, the image capturing process is automated: it is triggered depending on the alignment of the device and the object as well as the image quality that can be achieved under consideration of motion and environmental effects.
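    One plausible ingredient of such an automated trigger is a sharpness gate. The variance-of-Laplacian metric and the threshold below are assumptions made for illustration; the paper does not name a specific quality measure:

    ```python
    import numpy as np

    def sharpness(img):
        """Variance of a discrete Laplacian over the interior pixels:
        a common focus measure (higher means crisper edges)."""
        lap = (-4.0 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
               + img[1:-1, :-2] + img[1:-1, 2:])
        return lap.var()

    def should_capture(img, pose_ok, min_sharpness):
        """Fire the shutter only when the pose-estimation step reports
        alignment and the current frame passes the quality gate."""
        return bool(pose_ok and sharpness(img) >= min_sharpness)

    # A high-contrast checkerboard frame vs. a featureless (blurred) one.
    crisp = (np.indices((16, 16)).sum(axis=0) % 2).astype(float)
    flat = np.full((16, 16), 0.5)
    ```

    In a full system the `pose_ok` flag would come from the sensor-fusion motion estimate rather than being passed in directly.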

  6. Applications of Digital Image Processing XI

    NASA Technical Reports Server (NTRS)

    Cho, Y. -C.

    1988-01-01

    A new technique, digital image velocimetry, is proposed for the measurement of instantaneous velocity fields of time-dependent flows. A time sequence of single-exposure images of seed particles is captured with a high-speed camera, and a finite number of the single-exposure images are sampled within a prescribed period in time. The sampled images are then digitized on an image processor, enhanced, and superimposed to construct an image which is equivalent to a multiple-exposure image used in both laser speckle velocimetry and particle image velocimetry. The superimposed image and a single-exposure image are digitally Fourier transformed for extraction of information on the velocity field. A great enhancement of the dynamic range of the velocity measurement is accomplished through the new technique by manipulating the Fourier transforms of both the single-exposure image and the superimposed image. Also, the direction of the velocity vector is unequivocally determined. With the use of a high-speed video camera, the whole process from image acquisition to velocity determination can be carried out electronically; thus this technique can be developed into a real-time capability.
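    The Fourier-domain extraction of displacement can be illustrated with an FFT-based cross-correlation on a synthetic seed-particle field. This is a simplification of the technique described above, which manipulates the transforms of the superimposed and single-exposure images; the particle density and shift here are invented for the sketch:

    ```python
    import numpy as np

    def displacement(frame_a, frame_b):
        """Dominant displacement of frame_a relative to frame_b, found as
        the peak of their FFT-based circular cross-correlation."""
        corr = np.fft.ifft2(np.fft.fft2(frame_a)
                            * np.conj(np.fft.fft2(frame_b))).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # Map peak coordinates into signed shifts on the periodic domain.
        if dy > corr.shape[0] // 2:
            dy -= corr.shape[0]
        if dx > corr.shape[1] // 2:
            dx -= corr.shape[1]
        return int(dy), int(dx)

    # Synthetic sparse particle image and a copy displaced by (2, 3) pixels.
    rng = np.random.default_rng(1)
    exposure1 = (rng.random((64, 64)) > 0.95).astype(float)
    exposure2 = np.roll(exposure1, shift=(2, 3), axis=(0, 1))
    print(displacement(exposure2, exposure1))  # → (2, 3)
    ```

    The correlation peak location gives the particle displacement per inter-exposure interval, i.e. the local velocity up to the known time step.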

  7. Interactive image processing in swallowing research

    NASA Astrophysics Data System (ADS)

    Dengel, Gail A.; Robbins, JoAnne; Rosenbek, John C.

    1991-06-01

    Dynamic radiographic imaging of the mouth, larynx, pharynx, and esophagus during swallowing is used commonly in clinical diagnosis, treatment and research. Images are recorded on videotape and interpreted conventionally by visual perceptual methods, limited to specific measures in the time domain and binary decisions about the presence or absence of events. An image processing system using personal computer hardware and original software has been developed to facilitate measurement of temporal, spatial and temporospatial parameters. Digitized image sequences derived from videotape are manipulated and analyzed interactively. Animation is used to preserve context and increase efficiency of measurement. Filtering and enhancement functions heighten image clarity and contrast, improving visibility of details which are not apparent on videotape. Distortion effects and extraneous head and body motions are removed prior to analysis, and spatial scales are controlled to permit comparison among subjects. Effects of image processing on intra- and interjudge reliability and research applications are discussed.

  8. Planetary rover navigation: improving visual odometry via additional images and multisensor fusion

    NASA Astrophysics Data System (ADS)

    Casalino, G.; Zereik, E.; Simetti, E.; Turetta, A.; Torelli, S.; Sperindé, A.

    2013-12-01

    Visual odometry (VO) is very important for a mobile robot, above all in a planetary scenario, to accurately estimate the motion the rover has undergone. The present work deals with the possibility of improving a previously developed VO technique by means of additional image processing, together with suitable mechanisms such as classical Extended/Iterated Kalman Filtering and also sequence estimators. The possible employment of both techniques is addressed and, consequently, a better-behaved integration scheme is proposed. Moreover, the possibility of exploiting other localization sensors is also investigated, leading to a final multisensor scheme.

  9. Networks for image acquisition, processing and display

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.

    1990-01-01

    The human visual system comprises layers of networks which sample, process, and code images. Understanding these networks is a valuable means of understanding human vision and of designing autonomous vision systems based on network processing. Ames Research Center has an ongoing program to develop computational models of such networks. The models predict human performance in detection of targets and in discrimination of displayed information. In addition, the models are artificial vision systems sharing properties with biological vision that has been tuned by evolution for high performance. Properties include variable density sampling, noise immunity, multi-resolution coding, and fault-tolerance. The research stresses analysis of noise in visual networks, including sampling, photon, and processing unit noises. Specific accomplishments include: models of sampling array growth with variable density and irregularity comparable to that of the retinal cone mosaic; noise models of networks with signal-dependent and independent noise; models of network connection development for preserving spatial registration and interpolation; multi-resolution encoding models based on hexagonal arrays (HOP transform); and mathematical procedures for simplifying analysis of large networks.

  10. Image-plane processing of visual information

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.

    1984-01-01

    Shannon's theory of information is used to optimize the optical design of sensor-array imaging systems which use neighborhood image-plane signal processing for enhancing edges and compressing dynamic range during image formation. The resultant edge-enhancement, or band-pass-filter, response is found to be very similar to that of human vision. Comparisons of traits in human vision with results from information theory suggest that: (1) Image-plane processing, like preprocessing in human vision, can improve visual information acquisition for pattern recognition when resolving power, sensitivity, and dynamic range are constrained. Improvements include reduced sensitivity to changes in light levels, reduced signal dynamic range, reduced data transmission and processing, and reduced aliasing and photosensor noise degradation. (2) Information content can be an appropriate figure of merit for optimizing the optical design of imaging systems when visual information is acquired for pattern recognition. The design trade-offs involve spatial response, sensitivity, and sampling interval.
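    The band-pass, edge-enhancing response can be approximated by a difference-of-Gaussians filter, a standard center-surround model; the scales below are illustrative choices, not parameters from the paper:

    ```python
    import numpy as np

    def gaussian_kernel(sigma):
        """Discrete, normalized 1-D Gaussian kernel (4-sigma support)."""
        radius = int(4 * sigma)
        x = np.arange(-radius, radius + 1, dtype=float)
        k = np.exp(-x ** 2 / (2 * sigma ** 2))
        return k / k.sum()

    def dog_bandpass(signal, sigma_center, sigma_surround):
        """Difference-of-Gaussians band-pass: a narrow 'center' blur minus
        a wider 'surround' blur, enhancing edges and zeroing flat regions."""
        center = np.convolve(signal, gaussian_kernel(sigma_center), mode="same")
        surround = np.convolve(signal, gaussian_kernel(sigma_surround), mode="same")
        return center - surround

    # A 1-D step edge: the response is biphasic at the edge and zero in
    # the flat regions, compressing dynamic range while keeping the edge.
    signal = np.zeros(64)
    signal[32:] = 1.0
    resp = dog_bandpass(signal, 1.0, 3.0)
    ```

    The zero response in uniform regions is exactly the dynamic-range compression described above: only the edge information is transmitted.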

  11. Earth Observation Services (Image Processing Software)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    San Diego State University and Environmental Systems Research Institute, with other agencies, have applied satellite imaging and image processing techniques to geographic information systems (GIS) updating. The resulting images display land use and are used by a regional planning agency for applications like mapping vegetation distribution and preserving wildlife habitats. The EOCAP program provides government co-funding to encourage private investment in, and to broaden the use of NASA-developed technology for analyzing information about Earth and ocean resources.

  12. Digital Image Processing Overview For Helmet Mounted Displays

    NASA Astrophysics Data System (ADS)

    Parise, Michael J.

    1989-09-01

    Digital image processing provides a means to manipulate an image and presents a user with a variety of display formats that are not available in the analog image processing environment. When performed in real time and presented on a Helmet Mounted Display, system capability and flexibility are greatly enhanced. The information content of a display can be increased by the addition of real time insets and static windows from secondary sensor sources, near real time 3-D imaging from a single sensor can be achieved, graphical information can be added, and enhancement techniques can be employed. Such increased functionality is generating a considerable amount of interest in the military and commercial markets. This paper discusses some of these image processing techniques and their applications.

  13. Rationalization of Microstructure Heterogeneity in INCONEL 718 Builds Made by the Direct Laser Additive Manufacturing Process

    NASA Astrophysics Data System (ADS)

    Tian, Yuan; McAllister, Donald; Colijn, Hendrik; Mills, Michael; Farson, Dave; Nordin, Mark; Babu, Sudarsanam

    2014-09-01

    Simulative builds, typical of the tip-repair procedure, with matching compositions were deposited on an INCONEL 718 substrate using the laser additive manufacturing process. In the as-processed condition, these builds exhibit spatial heterogeneity in microstructure. Electron backscattering diffraction analyses showed highly misoriented grains in the top region of the builds compared to those of the lower region. Hardness maps indicated a 30 pct hardness increase in build regions close to the substrate over those of the top regions. Detailed multiscale characterizations, through scanning electron microscopy, electron backscattered diffraction imaging, high-resolution transmission electron microscopy, and ChemiSTEM, also showed microstructure heterogeneities within the builds at different length scales, including interdendritic and interprecipitate regions. These multiscale heterogeneities were correlated to primary solidification, remelting, and solid-state precipitation kinetics of γ″ induced by solute segregation, as well as multiple heating and cooling cycles induced by the laser additive manufacturing process.

  14. Accelerated image processing on FPGAs.

    PubMed

    Draper, Bruce A; Beveridge, J Ross; Böhm, A P Willem; Ross, Charles; Chawathe, Monica

    2003-01-01

    The Cameron project has developed a language called single assignment C (SA-C), and a compiler for mapping image-based applications written in SA-C to field programmable gate arrays (FPGAs). The paper tests this technology by implementing several applications in SA-C and compiling them to an Annapolis Microsystems (AMS) WildStar board with a Xilinx XV2000E FPGA. The performance of these applications on the FPGA is compared to the performance of the same applications written in assembly code or C for an 800 MHz Pentium III. (Although no comparison across processors is perfect, these chips were the first of their respective classes fabricated at 0.18 microns, and are therefore of comparable ages.) We find that applications written in SA-C and compiled to FPGAs are between 8 and 800 times faster than the equivalent program run on the Pentium III. PMID:18244709

  15. Digital Image Processing in Private Industry.

    ERIC Educational Resources Information Center

    Moore, Connie

    1986-01-01

    Examines various types of private industry optical disk installations in terms of business requirements for digital image systems in five areas: records management; transaction processing; engineering/manufacturing; information distribution; and office automation. Approaches for implementing image systems are addressed as well as key success…

  16. Image processing software for imaging spectrometry data analysis

    NASA Technical Reports Server (NTRS)

    Mazer, Alan; Martin, Miki; Lee, Meemong; Solomon, Jerry E.

    1988-01-01

    Imaging spectrometers simultaneously collect image data in hundreds of spectral channels, from the near-UV to the IR, and can thereby provide direct surface materials identification by means resembling laboratory reflectance spectroscopy. Attention is presently given to a software system, the Spectral Analysis Manager (SPAM) for the analysis of imaging spectrometer data. SPAM requires only modest computational resources and is composed of one main routine and a set of subroutine libraries. Additions and modifications are relatively easy, and special-purpose algorithms have been incorporated that are tailored to geological applications.

  17. Checking Fits With Digital Image Processing

    NASA Technical Reports Server (NTRS)

    Davis, R. M.; Geaslen, W. D.

    1988-01-01

    Computer-aided video inspection of mechanical and electrical connectors feasible. Report discusses work done on digital image processing for computer-aided interface verification (CAIV). Two kinds of components examined: mechanical mating flange and electrical plug.

  18. Recent developments in digital image processing at the Image Processing Laboratory of JPL.

    NASA Technical Reports Server (NTRS)

    O'Handley, D. A.

    1973-01-01

    Review of some of the computer-aided digital image processing techniques recently developed. Special attention is given to mapping and mosaicking techniques and to preliminary developments in range determination from stereo image pairs. The discussed image processing utilization areas include space, biomedical, and robotic applications.

  19. CAD/CAM-coupled image processing systems

    NASA Astrophysics Data System (ADS)

    Ahlers, Rolf-Juergen; Rauh, W.

    1990-08-01

    Image processing systems have found wide application in industry. For most computer-integrated manufacturing facilities it is necessary to adapt these systems such that they can automate the interaction with, and the integration of, CAD and CAM systems. In this paper new approaches are described that make use of the coupling of CAD and image processing, as well as the automatic generation of programmes for the machining of products.

  20. Evaluation of Select Surface Processing Techniques for In Situ Application During the Additive Manufacturing Build Process

    NASA Astrophysics Data System (ADS)

    Book, Todd A.; Sangid, Michael D.

    2016-07-01

    Although additive manufacturing offers numerous performance advantages for different applications, it is not being used for critical applications due to uncertainties in structural integrity as a result of innate process variability and defects. To minimize uncertainty, the current approach relies on the concurrent utilization of process monitoring, post-processing, and non-destructive inspection in addition to an extensive material qualification process. This paper examines an alternative approach by evaluating the application of select surface processing techniques, including sliding severe plastic deformation (SPD) and fine particle shot peening, on direct metal laser sintering-produced AlSi10Mg materials. Each surface processing technique is compared to baseline as-built and post-processed samples as a proof of concept for surface enhancement. Initial results pairing sliding SPD with the manufacturer's recommended thermal stress relief cycle demonstrated uniform recrystallization of the microstructure, resulting in a more homogeneous distribution of strain within the microstructure than in the as-built or post-processed conditions. This result demonstrates the potential for the in situ application of various surface processing techniques during the layerwise direct metal laser sintering build process.

  2. Combining Advanced Oxidation Processes: Assessment Of Process Additivity, Synergism, And Antagonism

    SciTech Connect

    Peters, Robert W.; Sharma, M.P.; Gbadebo Adewuyi, Yusuf

    2007-07-01

    This paper addresses the process interactions from combining integrated processes (such as advanced oxidation processes (AOPs), biological operations, air stripping, etc.). AOPs considered include: Fenton's reagent, ultraviolet light, titanium dioxide, ozone (O{sub 3}), hydrogen peroxide (H{sub 2}O{sub 2}), sonication/acoustic cavitation, among others. A critical review of the technical literature has been performed, and the data has been analyzed in terms of the processes being additive, synergistic, or antagonistic. Predictions based on the individual unit operations are made and compared against the behavior of the combined unit operations. The data reported in this paper focus primarily on treatment of petroleum hydrocarbons and chlorinated solvents. (authors)

  3. Color image processing for date quality evaluation

    NASA Astrophysics Data System (ADS)

    Lee, Dah Jye; Archibald, James K.

    2010-01-01

    Many agricultural non-contact visual inspection applications use color image processing techniques because color is often a good indicator of product quality. Color evaluation is an essential step in the processing and inventory control of fruits and vegetables that directly affects profitability. Most color spaces, such as RGB and HSV, represent colors with three-dimensional data, which makes color image processing a challenging task. Since most agricultural applications only require analysis on a predefined set or range of colors, mapping these relevant colors to a small number of indexes allows simple and efficient color image processing for quality evaluation. This paper presents a simple but efficient color mapping and image processing technique that is designed specifically for real-time quality evaluation of Medjool dates. In contrast with more complex color image processing techniques, the proposed color mapping method makes it easy for a human operator to specify and adjust color-preference settings for different color groups representing distinct quality levels. Using this color mapping technique, the color image is first converted to a color map in which one color index represents the color value of each pixel. Fruit maturity level is evaluated based on these color indices. A skin lamination threshold is then determined based on the fruit surface characteristics. This adaptive threshold is used to detect delaminated fruit skin and hence determine the fruit quality. This robust color grading technique has been used for real-time Medjool date grading.
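The color-to-index mapping described above can be sketched as a nearest-reference-color classifier. The reference colors and the maturity score below are hypothetical illustrations; the paper's actual operator-tuned color-preference settings are not reproduced here.

```python
# Hypothetical RGB reference colors for a few quality-related color groups;
# NOT the settings used in the paper, just placeholders for the idea.
REFERENCE_COLORS = {
    0: (60, 30, 15),     # dark brown (e.g., fully mature skin)
    1: (140, 80, 40),    # medium brown
    2: (200, 150, 80),   # light amber (e.g., less mature)
    3: (230, 215, 170),  # pale skin (e.g., delaminated-looking)
}

def color_index(pixel):
    """Map an (R, G, B) pixel to the index of the nearest reference color."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(REFERENCE_COLORS, key=lambda k: dist2(pixel, REFERENCE_COLORS[k]))

def color_map(image):
    """Convert an image (list of rows of RGB tuples) to a map of color indices."""
    return [[color_index(px) for px in row] for row in image]

def maturity_score(index_map):
    """Illustrative metric: fraction of pixels in the darkest color group."""
    flat = [i for row in index_map for i in row]
    return flat.count(0) / len(flat)
```

Collapsing three-dimensional color data to one small index per pixel is what makes the downstream per-pixel decisions (maturity grading, lamination thresholding) cheap enough for real-time use.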

  4. Image processing technique based on image understanding architecture

    NASA Astrophysics Data System (ADS)

    Kuvychko, Igor

    2000-12-01

    The effectiveness of image applications depends directly on their ability to resolve ambiguity and uncertainty in real images. That requires tight integration of low-level image processing with high-level knowledge-based reasoning, which is the solution of the image understanding problem. This article presents a generic computational framework necessary for the solution of the image understanding problem -- the Spatial Turing Machine. Instead of a tape of symbols, it works with hierarchical networks dually represented as discrete and continuous structures. Dual representation provides natural transformation of the continuous image information into discrete structures, making it available for analysis. Such structures are data and algorithms at the same time and can perform the graph and diagrammatic operations that are the basis of intelligence. They can create derivative structures that play the role of context, or 'measurement device,' giving the ability to analyze and to run top-down algorithms. Symbols naturally emerge there, and symbolic operations work in combination with new simplified methods of computational intelligence. That makes images and scenes self-describing and provides flexible ways of resolving uncertainty. Classification of images truly invariant to any transformation can be done by matching their derivative structures. The proposed architecture does not require supercomputers, opening the way to new image technologies.

  5. Nanosecond image processing using stimulated photon echoes.

    PubMed

    Xu, E Y; Kröll, S; Huestis, D L; Kachru, R; Kim, M K

    1990-05-15

    Processing of two-dimensional images on a nanosecond time scale is demonstrated using the stimulated photon echoes in a rare-earth-doped crystal (0.1 at. % Pr(3+):LaF(3)). Two spatially encoded laser pulses (pictures) resonant with the (3)P(0)-(3)H(4) transition of Pr(3+) were stored by focusing the image pulses sequentially into the Pr(3+):LaF(3) crystal. The stored information is retrieved and processed by a third read pulse, generating the echo that is the spatial convolution or correlation of the input images. Application of this scheme to high-speed pattern recognition is discussed. PMID:19768008
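The echo retrieved in this scheme is the spatial convolution or correlation of the input images, which is why it suits pattern recognition. As a purely digital analogue of that operation (not the optical photon-echo process itself), the sketch below cross-correlates a small scene with a template; the correlation peak marks where the pattern sits.

```python
def cross_correlate(image, template):
    """Valid-mode 2-D cross-correlation of two grayscale arrays (lists of rows)."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    out = []
    for y in range(ih - th + 1):
        row = []
        for x in range(iw - tw + 1):
            acc = 0
            for j in range(th):
                for i in range(tw):
                    acc += image[y + j][x + i] * template[j][i]
            row.append(acc)
        out.append(row)
    return out

# Embed a 2x2 pattern in an empty 6x6 scene at row 3, column 2;
# the correlation peak recovers that position.
scene = [[0] * 6 for _ in range(6)]
pattern = [[1, 0], [0, 1]]
for j in range(2):
    for i in range(2):
        scene[3 + j][2 + i] = pattern[j][i]
corr = cross_correlate(scene, pattern)
peak = max((v, y, x) for y, row in enumerate(corr) for x, v in enumerate(row))
```

The optical system performs this same mathematics in parallel on a nanosecond time scale, which is the attraction for high-speed pattern recognition.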

  6. New approach for underwater imaging and processing

    NASA Astrophysics Data System (ADS)

    Wen, Yanan; Tian, Weijian; Zheng, Bing; Zhou, Guozun; Dong, Hui; Wu, Qiong

    2014-05-01

    Due to the absorptive and scattering nature of water, the characteristics of underwater images differ from those of images taken in air. Underwater images are characterized by poor visibility and noise. Obtaining a clear original image and processing that image are two important problems to be solved in underwater clear-vision research. In this paper a new approach is presented to solve these problems. Firstly, an inhomogeneous illumination method is developed to obtain a clear original image. A normal-illumination imaging system and an inhomogeneous-illumination imaging system were used to capture images at the same distance. The results show that the contrast and definition of the processed image are greatly improved by the inhomogeneous illumination method. Secondly, based on the theory of photon transmission in water and the particular requirements of underwater target detection, the characteristics of laser scattering on underwater target surfaces and the spatial and temporal characteristics of the oceanic optical channel have been studied. Using Monte Carlo simulation, we studied how water-quality and other system parameters affect light transmission through water in the spatial and temporal domains, providing theoretical support for enhancing the SNR and the operational distance.

  7. Image processing via ultrasonics - Status and promise

    NASA Technical Reports Server (NTRS)

    Kornreich, P. G.; Kowel, S. T.; Mahapatra, A.; Nouhi, A.

    1979-01-01

    Acousto-electric devices for electronic imaging of light are discussed. These devices are more versatile than line scan imaging devices in current use. They have the capability of presenting the image information in a variety of modes. The image can be read out in the conventional line scan mode. It can be read out in the form of the Fourier, Hadamard, or other transform. One can take the transform along one direction of the image and line scan in the other direction, or perform other combinations of image processing functions. This is accomplished by applying the appropriate electrical input signals to the device. Since the electrical output signal of these devices can be detected in a synchronous mode, substantial noise reduction is possible.
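The Hadamard-transform readout mode mentioned above has a simple digital counterpart. The sketch below is a minimal fast Walsh-Hadamard transform of one scan line; in the device this transform would be produced electronically, so this only illustrates the mathematics of the readout, not the acousto-electric mechanism.

```python
def fwht(signal):
    """Fast Walsh-Hadamard transform; input length must be a power of two."""
    a = list(signal)
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                # Butterfly: sum and difference of paired samples.
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a

def ifwht(coeffs):
    """Inverse transform: the same butterfly, scaled by the length."""
    a = fwht(coeffs)
    return [v / len(a) for v in a]

line = [4, 2, 2, 0, 0, 2, 2, 0]   # one scan line of image samples
coeffs = fwht(line)               # Hadamard-domain readout of the line
```

Because the transform is its own inverse up to scaling, the original scan line is recoverable exactly, which is what makes transform-mode readout interchangeable with conventional line scanning.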

  8. Image-processing with augmented reality (AR)

    NASA Astrophysics Data System (ADS)

    Babaei, Hossein R.; Mohurutshe, Pagiel L.; Habibi Lashkari, Arash

    2013-03-01

    In this project, the aim is to discuss and articulate the intent to create an image-based Android application. The basis of this study is real-time image detection and processing: a convenient means that allows users to obtain information on imagery right on the spot. Past studies have revealed attempts to create image-based applications, but these have only gone as far as creating image finders that work with images already stored in some form of database. The Android platform is rapidly spreading around the world and provides by far the most interactive and technical platform for smart-phones, which is why it was important to base the study and research on it. Augmented reality allows the user to manipulate the data and to add enhanced features (video, GPS tags) to the image taken.

  9. Effects of chemical additives on microbial enhanced oil recovery processes

    SciTech Connect

    Bryant, R.S.; Chase, K.L.; Bertus, K.M.; Stepp, A.K.

    1989-12-01

    An extensive laboratory study has been conducted to determine (1) the role of the microbial cells and products in oil displacement, (2) the relative rates of transport of microbial cells and chemical products from the metabolism of nutrient in porous media, and (3) the effects of chemical additives on the oil recovery efficiency of microbial formulations. This report describes experiments relating to the effects of additives on oil recovery efficiency of microbial formulations. The effects of additives on the oil recovery efficiency of microbial formulations were determined by conducting oil displacement experiments in 1-foot-long Berea sandstone cores. Sodium tripolyphosphate (STPP), a low-molecular-weight polyacrylamide polymer, a lignosulfonate surfactant, and sodium bicarbonate were added to a microbial formulation at a concentration of 1%. The effects of using these additives in a preflush prior to injection of the microbial formulation were also evaluated. Oil-displacement experiments with and without a sodium bicarbonate preflush were conducted in 4-foot-long Berea sandstone cores, and samples of in situ fluids were collected at various times at four intermediate points along the core. The concentrations of metabolic products and microbes in the fluid samples were determined. 9 refs., 22 figs., 8 tabs.

  10. Overview on METEOSAT geometrical image data processing

    NASA Technical Reports Server (NTRS)

    Diekmann, Frank J.

    1994-01-01

    Digital images acquired from the geostationary METEOSAT satellites are processed and disseminated at ESA's European Space Operations Centre in Darmstadt, Germany. Their scientific value depends mainly on their radiometric quality and geometric stability. This paper gives an overview of the image processing activities performed at ESOC, concentrating on geometrical restoration and quality evaluation. The performance of the rectification process for the various satellites over the past years is presented, and the impacts of external events, such as the Pinatubo eruption in 1991, are explained. Special developments in both hardware and software, necessary to cope with demanding tasks such as new image resampling or correcting for spacecraft anomalies, are presented as well. The rotating lens of MET-5, which causes severe geometrical image distortions, is an example of the latter.

  11. Image processing in a maritime environment

    NASA Astrophysics Data System (ADS)

    Pietrzak, Kenneth A.; Alberg, Matthew T.

    2015-05-01

    The performance of mast-mounted imaging sensors operating near the marine boundary layer can be severely impacted by environmental issues. Haze, atmospheric turbulence, and rough seas can all impact imaging system performance. Examples of these impacts are provided in this paper. In addition, sensor artifacts such as deinterlace artifacts can also impact imaging performance. Deinterlace artifacts caused by a rotating mast are often too severe for an operator to use the imagery for detection of contacts. An artifact edge minimization approach is presented that eliminates these global motion-based deinterlace artifacts.

  12. System Design For A Dental Image Processing System

    NASA Astrophysics Data System (ADS)

    Cady, Fredrick M.; Stover, John C.; Senecal, William J.

    1988-12-01

    An image processing system for a large clinic dental practice has been designed and tested. An analysis of spatial resolution requirements and field tests by dentists show that a system built with presently available, PC-based, image processing equipment can provide diagnostic quality images without special digital image processing. By giving the dentist a tool to digitally enhance x-ray images, increased diagnostic capabilities can be achieved. Very simple image processing procedures such as linear and non-linear contrast expansion, edge enhancement, and image zooming can be shown to be very effective. In addition to providing enhanced imagery in the dentist's treatment room, the system is designed to be a fully automated, dental records management system. It is envisioned that a patient's record, including x-rays and tooth charts, may be retrieved from optical disk storage as the patient enters the office. Dental procedures undertaken during the visit may be entered into the record via the imaging workstation by the dentist or the dental assistant. Patient billing and records keeping may be generated automatically.

  13. Computer image processing - The Viking experience. [digital enhancement techniques

    NASA Technical Reports Server (NTRS)

    Green, W. B.

    1977-01-01

    Computer processing of digital imagery from the Viking mission to Mars is discussed, with attention given to subjective enhancement and quantitative processing. Contrast stretching and high-pass filtering techniques of subjective enhancement are described; algorithms developed to determine optimal stretch and filtering parameters are also mentioned. In addition, geometric transformations to rectify the distortion of shapes in the field of view and to alter the apparent viewpoint of the image are considered. Perhaps the most difficult problem in quantitative processing of Viking imagery was the production of accurate color representations of Orbiter and Lander camera images.
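The subjective-enhancement operations named above are standard pointwise and neighborhood operations. The sketch below shows generic versions of linear contrast stretching and a 1-D high-pass (local-mean-subtraction) filter; these are textbook forms for illustration, not the algorithms or optimal-parameter procedures actually used on Viking imagery.

```python
def contrast_stretch(pixels, lo_out=0, hi_out=255):
    """Linear contrast stretch: map the input's [min, max] onto [lo_out, hi_out]."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        return [lo_out] * len(pixels)
    scale = (hi_out - lo_out) / (hi - lo)
    return [round(lo_out + (p - lo) * scale) for p in pixels]

def high_pass(pixels, window=3):
    """1-D high-pass residue: subtract a local box average from each sample."""
    r = window // 2
    out = []
    for i in range(len(pixels)):
        lo = max(0, i - r)
        hi = min(len(pixels), i + r + 1)
        mean = sum(pixels[lo:hi]) / (hi - lo)
        out.append(pixels[i] - mean)
    return out

# A low-contrast ramp expands to fill the full 0-255 display range.
flat_scene = [100, 102, 104, 106, 108, 110]
stretched = contrast_stretch(flat_scene)
```

Stretching spreads a narrow brightness range over the full display range, while the high-pass residue isolates local detail; the mission pipeline additionally had to choose the stretch and filter parameters automatically.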

  14. Microstructure-controllable Laser Additive Manufacturing Process for Metal Products

    NASA Astrophysics Data System (ADS)

    Huang, Wei-Chin; Chuang, Chuan-Sheng; Lin, Ching-Chih; Wu, Chih-Hsien; Lin, De-Yau; Liu, Sung-Ho; Tseng, Wen-Peng; Horng, Ji-Bin

    Controlling the cooling rate of alloy during solidification is the most commonly used method for varying the material microstructure. However, the cooling rate of selective laser melting (SLM) production is constrained by the optimal parameter settings for a dense product. This study proposes a method for forming metal products via the SLM process with electromagnetic vibrations. The electromagnetic vibrations change the solidification process for a given set of SLM parameters, allowing the microstructure to be varied via magnetic flux density. This proposed method can be used for creating microstructure-controllable bio-implant products with complex shapes.

  15. Real-time optical image processing techniques

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang

    1988-01-01

    Nonlinear real-time optical processing based on spatial pulse frequency modulation has been pursued through the analysis, design, and fabrication of pulse-frequency-modulated halftone screens and the modification of micro-channel spatial light modulators (MSLMs). Micro-channel spatial light modulators are modified via the Fabry-Perot method to achieve the high gamma operation required for nonlinear operation. Real-time nonlinear processing was performed using the halftone screen and MSLM. The experiments showed the effectiveness of the thresholding and also the need for a higher SBP for image processing. The Hughes LCLV has been characterized and found to yield high gamma (about 1.7) when operated in low-frequency, low-bias mode. Cascading of two LCLVs should also provide enough gamma for nonlinear processing. In this case, the SBP of the LCLV is sufficient but the uniformity of the LCLV needs improvement. Other applications pursued include image correlation, computer generation of holograms, pseudo-color image encoding for image enhancement, and associative retrieval in neural processing. The discovery of the only known optical method for dynamic range compression of an input image in real time, using GaAs photorefractive crystals, is reported. Finally, a new architecture for nonlinear multiple-sensory neural processing has been suggested.

  16. Part height control of laser metal additive manufacturing process

    NASA Astrophysics Data System (ADS)

    Pan, Yu-Herng

    Laser Metal Deposition (LMD) has been used to not only make but also repair damaged parts in a layer-by-layer fashion. Parts made in this manner may produce less waste than those made through conventional machining processes. However, a common issue of LMD involves controlling the deposition's layer thickness. Accuracy is important, and as it increases, both the time required to produce the part and the material wasted during the material removal process (e.g., milling, lathe) decrease. The deposition rate is affected by multiple parameters, such as the powder feed rate, laser input power, axis feed rate, material type, and part design, the values of each of which may change during the LMD process. Using a mathematical model to build a generic equation that predicts the deposition's layer thickness is difficult due to these complex parameters. In this thesis, we propose a simple method that utilizes a single device. This device uses a pyrometer to monitor the current build height, thereby allowing the layer thickness to be controlled during the LMD process. This method also helps the LMD system to build parts even with complex parameters and to increase material efficiency.
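The feedback idea described above (measure the current build height, then adjust the next layer's deposition) can be sketched as a simple closed-loop simulation. The proportional correction and all gains below are illustrative assumptions for the sketch, not the controller or parameters from the thesis.

```python
def run_build(target_layer, n_layers, deposition_gain=0.8, kp=0.6):
    """Simulate layer-height control of a deposition build.

    Each layer, the actual deposition falls short of the command
    (deposition_gain < 1 models process inefficiency). The measured
    cumulative height (e.g., from a pyrometer-based height sensor) is
    fed back, and the next layer's command is corrected by a
    proportional term on the accumulated height error. Gains here are
    hypothetical, chosen only to illustrate the feedback loop.
    """
    height = 0.0
    command = target_layer
    heights = []
    for layer in range(1, n_layers + 1):
        height += deposition_gain * command     # deposit this layer
        heights.append(height)
        error = layer * target_layer - height   # desired minus measured height
        command = target_layer + kp * error     # correct the next command
    return heights

open_loop = run_build(0.5, 20, kp=0.0)   # no feedback: error accumulates
closed_loop = run_build(0.5, 20)         # feedback holds height near target
target_total = 0.5 * 20
```

Without feedback the per-layer shortfall accumulates over the build; with the height measurement in the loop, the cumulative error stays bounded, which is the motivation for monitoring build height during the LMD process.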

  17. Bistatic SAR: Signal Processing and Image Formation.

    SciTech Connect

    Wahl, Daniel E.; Yocky, David A.

    2014-10-01

    This report describes the significant processing steps that were used to take the raw recorded digitized signals from the bistatic synthetic aperture RADAR (SAR) hardware built for the NCNS Bistatic SAR project to a final bistatic SAR image. In general, the process steps herein are applicable to bistatic SAR signals that include the direct-path signal and the reflected signal. The steps include preprocessing, data extraction to form a phase history, and finally, image formation. Various plots and values are shown at most steps to illustrate the processing for a bistatic COSMO SkyMed collection gathered on June 10, 2013 on Kirtland Air Force Base, New Mexico.

  18. Twofold processing for denoising ultrasound medical images.

    PubMed

    Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y

    2015-01-01

    Ultrasound (US) medical imaging non-invasively pictures the inside of a human body for disease diagnostics. Speckle noise attacks ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold uses block-based thresholding, both hard (BHT) and soft (BST), on pixels in the wavelet domain with 8, 16, 32 and 64 non-overlapping block sizes. This first-fold process reduces speckle effectively but also blurs the object of interest. The second fold then restores object boundaries and texture with adaptive wavelet fusion. Restoration of the degraded object in the block-thresholded US image is carried out through wavelet-coefficient fusion of the object in the original US image and the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with normalized differential mean (NDF) to introduce the highest level of contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are thus named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate visual quality improvement to an interesting level with the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. Peak signal to noise ratio (PSNR), normalized cross-correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. Validation of the proposed method is done by comparison with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images were provided by the AMMA hospital radiology labs at Vijayawada, India. PMID:26697285
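At the core of the first fold are the hard and soft shrinkage operators applied block-by-block to wavelet coefficients. The sketch below shows just those operators on a plain list of coefficients; it omits the wavelet transform and the paper's adaptive NDF fusion entirely.

```python
def hard_threshold(coeffs, t):
    """Hard thresholding: zero out coefficients with magnitude below t."""
    return [c if abs(c) >= t else 0.0 for c in coeffs]

def soft_threshold(coeffs, t):
    """Soft thresholding: zero small coefficients and shrink the rest toward zero by t."""
    return [(abs(c) - t) * (1 if c > 0 else -1) if abs(c) > t else 0.0
            for c in coeffs]

def block_threshold(coeffs, block_size, t, mode="hard"):
    """Apply the chosen shrinkage independently to non-overlapping blocks."""
    op = hard_threshold if mode == "hard" else soft_threshold
    out = []
    for start in range(0, len(coeffs), block_size):
        out.extend(op(coeffs[start:start + block_size], t))
    return out
```

Hard thresholding preserves the surviving coefficients exactly (sharper but noisier), while soft thresholding shrinks them uniformly (smoother but more blurring), which is why the second-fold fusion step is needed to restore object detail.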

  19. 3D seismic image processing for interpretation

    NASA Astrophysics Data System (ADS)

    Wu, Xinming

    Extracting fault, unconformity, and horizon surfaces from a seismic image is useful for interpretation of geologic structures and stratigraphic features. Although interpretation of these surfaces has been automated to some extent by others, significant manual effort is still required for extracting each type of these geologic surfaces. I propose methods to automatically extract all the fault, unconformity, and horizon surfaces from a 3D seismic image. To a large degree, these methods just involve image processing or array processing which is achieved by efficiently solving partial differential equations. For fault interpretation, I propose a linked data structure, which is simpler than triangle or quad meshes, to represent a fault surface. In this simple data structure, each sample of a fault corresponds to exactly one image sample. Using this linked data structure, I extract complete and intersecting fault surfaces without holes from 3D seismic images. I use the same structure in subsequent processing to estimate fault slip vectors. I further propose two methods, using precomputed fault surfaces and slips, to undo faulting in seismic images by simultaneously moving fault blocks and faults themselves. For unconformity interpretation, I first propose a new method to compute an unconformity likelihood image that highlights both the termination areas and the corresponding parallel unconformities and correlative conformities. I then extract unconformity surfaces from the likelihood image and use these surfaces as constraints to more accurately estimate seismic normal vectors that are discontinuous near the unconformities. Finally, I use the estimated normal vectors and use the unconformities as constraints to compute a flattened image, in which seismic reflectors are all flat and vertical gaps correspond to the unconformities. Horizon extraction is straightforward after computing a map of image flattening; we can first extract horizontal slices in the flattened space

  20. Thermal Imaging Processes of Polymer Nanocomposite Coatings

    NASA Astrophysics Data System (ADS)

    Meth, Jeffrey

    2015-03-01

    Laser induced thermal imaging (LITI) is a process whereby infrared radiation impinging on a coating on a donor film transfers that coating to a receiving film to produce a pattern. This talk describes how LITI patterning can print color filters for liquid crystal displays, and details the physical processes that are responsible for transferring the nanocomposite coating in a coherent manner that does not degrade its optical properties. Unique features of this process involve heating rates of 10⁷ K/s, and cooling rates of 10⁴ K/s, which implies that not all of the relaxation modes of the polymer are accessed during the imaging process. On the microsecond time scale, the polymer flow is forced by devolatilization of solvents, followed by deformation akin to the constrained blister test, and then fracture caused by differential thermal expansion. The unique combination of disparate physical processes demonstrates the gamut of physics that contribute to advanced material processing in an industrial setting.

  1. A Pipeline Tool for CCD Image Processing

    NASA Astrophysics Data System (ADS)

    Bell, Jon F.; Young, Peter J.; Roberts, William H.; Sebo, Kim M.

    MSSSO is part of a collaboration developing a wide field imaging CCD mosaic (WFI). As part of this project, we have developed a GUI-based pipeline tool that is an integrated part of MSSSO's CICADA data acquisition environment and processes CCD FITS images as they are acquired. The tool is also designed to run as a stand-alone program to process previously acquired data. IRAF tasks are used as the central engine, including the new NOAO mscred package for processing multi-extension FITS files. The STScI OPUS pipeline environment may be used to manage data and process scheduling. The Motif GUI was developed using SUN Visual Workshop. C++ classes were written to facilitate launching of IRAF and OPUS tasks. While this first version implements calibration processing up to and including flat field corrections, there is scope to extend it to other processing.

  2. Digital-image processing and image analysis of glacier ice

    USGS Publications Warehouse

    Fitzpatrick, Joan J.

    2013-01-01

    This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document constitute a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended, but the procedures can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.

  3. Thermographic process monitoring in powderbed based additive manufacturing

    SciTech Connect

    Krauss, Harald Zaeh, Michael F.; Zeugner, Thomas

    2015-03-31

    Selective Laser Melting is utilized to build metallic parts directly from CAD data by solidification of thin powder layers through application of a fast scanning laser beam. In this study, layerwise monitoring of the temperature distribution is used to gather information about the process stability and the resulting part quality. The heat distribution varies with different kinds of parameters, including scan vector length, laser power, layer thickness, and inter-part distance in the job layout, which in turn influence the resulting part quality. By integration of an off-axis mounted uncooled thermal detector, the solidification as well as the layer deposition are monitored and evaluated. Errors in the generation of new powder layers usually result in a locally varying layer thickness that may cause poor part quality. For effect quantification, the locally applied layer thickness is determined by evaluating the heat-up of the newly deposited powder. During the solidification process, space- and time-resolved data are used to characterize the zone of elevated temperatures and to derive locally varying heat dissipation properties. Potential quality indicators are evaluated and correlated to the resulting part quality: thermal diffusivity is derived from a simplified heat dissipation model and evaluated for every pixel and cool-down phase of a layer, which allows the quantification of expected material homogeneity properties. Maximum temperature and time above certain temperatures are measured in order to detect hot spots or delamination issues that may cause a process breakdown. Furthermore, a method for quantification of sputter activity is presented; since high sputter activity indicates unstable melt dynamics, this can be used to identify parameter drifts, improper atmospheric conditions, or material binding errors. The resulting surface structure after solidification complicates temperature determination on the one hand but enables the detection of potential surface defects.
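    As an illustration of a per-pixel cool-down indicator of the kind described above, the sketch below fits a simplified exponential cooling model T(t) = T_env + (T0 − T_env)·exp(−t/τ) to one pixel's temperature history; a shorter time constant τ suggests better local heat dissipation. The model form and all numbers are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

# Fit T(t) = T_env + (T0 - T_env) * exp(-t / tau) for one pixel by a
# linear fit in log space; tau is a proxy for local heat dissipation.
def cooling_time_constant(temps, times, t_env):
    """temps: 1D array of one pixel's cool-down temperatures (K)."""
    y = np.log(temps - t_env)        # linear in t with slope -1/tau
    slope, _ = np.polyfit(times, y, 1)
    return -1.0 / slope

times = np.linspace(0.0, 0.1, 50)               # seconds (synthetic)
temps = 300.0 + 900.0 * np.exp(-times / 0.02)   # synthetic pixel history
tau = cooling_time_constant(temps, times, 300.0)
print(round(tau, 4))  # → 0.02
```

    Evaluating such a fit for every pixel of a layer yields a map of relative heat-dissipation behavior, which is the kind of quantity the abstract correlates with material homogeneity.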

  4. Thermographic process monitoring in powderbed based additive manufacturing

    NASA Astrophysics Data System (ADS)

    Krauss, Harald; Zeugner, Thomas; Zaeh, Michael F.

    2015-03-01

    Selective Laser Melting is utilized to build metallic parts directly from CAD data by solidification of thin powder layers through application of a fast scanning laser beam. In this study, layerwise monitoring of the temperature distribution is used to gather information about the process stability and the resulting part quality. The heat distribution varies with different kinds of parameters, including scan vector length, laser power, layer thickness, and inter-part distance in the job layout, which in turn influence the resulting part quality. By integration of an off-axis mounted uncooled thermal detector, the solidification as well as the layer deposition are monitored and evaluated. Errors in the generation of new powder layers usually result in a locally varying layer thickness that may cause poor part quality. For effect quantification, the locally applied layer thickness is determined by evaluating the heat-up of the newly deposited powder. During the solidification process, space- and time-resolved data are used to characterize the zone of elevated temperatures and to derive locally varying heat dissipation properties. Potential quality indicators are evaluated and correlated to the resulting part quality: thermal diffusivity is derived from a simplified heat dissipation model and evaluated for every pixel and cool-down phase of a layer, which allows the quantification of expected material homogeneity properties. Maximum temperature and time above certain temperatures are measured in order to detect hot spots or delamination issues that may cause a process breakdown. Furthermore, a method for quantification of sputter activity is presented; since high sputter activity indicates unstable melt dynamics, this can be used to identify parameter drifts, improper atmospheric conditions, or material binding errors. The resulting surface structure after solidification complicates temperature determination on the one hand but enables the detection of potential surface defects.

  5. Fundamental Concepts of Digital Image Processing

    DOE R&D Accomplishments Database

    Twogood, R. E.

    1983-03-01

    The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in the volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has its own unique requirements, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made without loss of concentration on the task at hand. An example of this is the acquisition of two-dimensional (2-D) computer-aided tomography (CAT) images, where a medical decision might be made while the patient is still under observation rather than days later.

  6. Filamentous fungi for production of food additives and processing aids.

    PubMed

    Archer, David B; Connerton, Ian F; MacKenzie, Donald A

    2008-01-01

    Filamentous fungi are metabolically versatile organisms with a very wide distribution in nature. They exist in association with other species, e.g. as lichens or mycorrhiza, as pathogens of animals and plants, or as free-living species. Many are regarded as nature's primary degraders because they secrete a wide variety of hydrolytic enzymes that degrade waste organic materials. Many species produce secondary metabolites such as polyketides or peptides, and an increasing range of fungal species is exploited commercially as sources of enzymes and metabolites for food or pharmaceutical applications. The recent availability of fungal genome sequences has provided a major opportunity to explore and further exploit fungi as sources of enzymes and metabolites. In this review chapter we focus on the use of fungi in the production of food additives but take a largely pre-genomic, albeit mainly molecular, view of the topic. PMID:18253709

  7. Image processing of angiograms: A pilot study

    NASA Technical Reports Server (NTRS)

    Larsen, L. E.; Evans, R. A.; Roehm, J. O., Jr.

    1974-01-01

    The technology transfer application this report describes is the result of a pilot study of image-processing methods applied to the image enhancement, coding, and analysis of arteriograms. Angiography is a subspecialty of radiology that employs the introduction of media with high X-ray absorption into arteries in order to study vessel pathology as well as to infer disease of the organs supplied by the vessel in question.

  8. Future projects in pulse image processing

    NASA Astrophysics Data System (ADS)

    Kinser, Jason M.

    1999-03-01

    Pulse-Coupled Neural Networks (PCNNs) have generated quite a bit of interest as image processing tools. Past applications include image segmentation, edge extraction, texture extraction, de-noising, object isolation, foveation and fusion. These past applications do not comprise a complete list of useful applications of the PCNN. Future avenues of research will include level set analysis, binary (optical) correlators, artificial life simulations, maze running and filter jet analysis. This presentation will explore these future avenues of PCNN research.
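    For readers unfamiliar with the model, a minimal textbook-style PCNN iteration might look like the sketch below: each neuron's internal activity rises with its stimulus and neighboring pulses, and the neuron fires when activity exceeds a decaying threshold, so first-firing times act as a rough segmentation. The coefficients and the first-fire map are illustrative choices, not from any specific application named in the abstract.

```python
import numpy as np

# Simplified PCNN: feeding field F, linking field L, dynamic threshold theta.
# Brighter pixels overtake the decaying threshold earlier, so the map of
# first-firing steps segments the image by intensity/connectivity.
def pcnn_first_fire(S, steps=12, beta=0.2, v_theta=20.0, a_theta=0.5):
    F = np.zeros_like(S, dtype=float)
    L = np.zeros_like(F)
    theta = np.full_like(F, v_theta)
    Y = np.zeros_like(F)
    first = np.full(S.shape, steps, dtype=int)   # step of each pixel's first pulse
    for n in range(steps):
        link = sum(np.roll(np.roll(Y, dx, 0), dy, 1)
                   for dx in (-1, 0, 1) for dy in (-1, 0, 1) if dx or dy)
        F = 0.7 * F + S + 0.1 * link             # feeding: stimulus + neighbor pulses
        L = 0.4 * L + link                       # linking field
        U = F * (1.0 + beta * L)                 # internal activity
        Y = (U > theta).astype(float)            # pulse output
        theta = np.exp(-a_theta) * theta + v_theta * Y   # decay, recharge on firing
        first = np.where((Y > 0) & (first == steps), n, first)
    return first

img = np.full((8, 8), 0.2)
img[2:6, 2:6] = 1.0                  # bright square on a dim background
t = pcnn_first_fire(img)
assert t[3, 3] < t[0, 0]             # the brighter region pulses earlier
```

    Several of the listed applications (segmentation, object isolation, foveation) read features off such pulse-timing maps rather than off raw intensities.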

  9. CCD architecture for spacecraft SAR image processing

    NASA Technical Reports Server (NTRS)

    Arens, W. E.

    1977-01-01

    A real-time synthetic aperture radar (SAR) image processing architecture amenable to future on-board spacecraft applications is currently under development. Using state-of-the-art charge-coupled device (CCD) technology, low cost and power are inherent features. Other characteristics include the ability to reprogram correlation reference functions, correct for range migration, and compensate for antenna beam pointing errors on the spacecraft in real time. The first spaceborne demonstration is scheduled to be flown as an experiment on a 1982 Shuttle imaging radar mission (SIR-B). This paper describes the architecture and implementation characteristics of this initial spaceborne CCD SAR image processor.

  10. Infrared image processing and data analysis

    NASA Astrophysics Data System (ADS)

    Ibarra-Castanedo, C.; González, D.; Klein, M.; Pilla, M.; Vallerand, S.; Maldague, X.

    2004-12-01

    Infrared thermography in nondestructive testing provides images (thermograms) in which zones of interest (defects) sometimes appear only as subtle signatures. In this context, raw images are often not appropriate, since most defects will be missed. In other cases, what is needed is a quantitative analysis, such as for defect detection and characterization. In this paper, various methods of data analysis required at the preprocessing and/or processing stages are presented. References from the literature are provided for known methods, which are discussed briefly, while novelties are elaborated in more detail in the text, which also includes experimental results.

  11. Industrial Holography Combined With Image Processing

    NASA Astrophysics Data System (ADS)

    Schorner, J.; Rottenkolber, H.; Roid, W.; Hinsch, K.

    1988-01-01

    Holographic test methods have become a valuable tool for the engineer in research and development. In the field of non-destructive quality control, holographic test equipment is now also accepted for tests within the production line. Producers of aircraft tyres, for example, use holographic tests to back the guarantee of their tyres. Together with image processing, the whole test cycle is automated: the defects within the tyre are found automatically and listed in a printout. The power engine industry uses holographic vibration tests for the optimization of its designs. In the plastics industry, tanks, wheels, seats, and fans are tested holographically to find the optimum shape. The automotive industry makes holography a tool for noise reduction. Instant holography and image processing techniques for quantitative analysis have led to economic application of holographic test methods. New developments of holographic units in combination with image processing are presented.

  12. DSP based image processing for retinal prosthesis.

    PubMed

    Parikh, Neha J; Weiland, James D; Humayun, Mark S; Shah, Saloni S; Mohile, Gaurav S

    2004-01-01

    Real-time image processing in a retinal prosthesis consists of the implementation of various image processing algorithms such as edge detection, edge enhancement, and decimation. The algorithmic computations in real time may have a high level of computational complexity, and hence the use of digital signal processors (DSPs) for the implementation of such algorithms is proposed here. This application requires DSPs that are highly computationally efficient while operating at low power. DSPs offer computational capabilities of hundreds of millions of instructions per second (MIPS) or millions of floating-point operations per second (MFLOPS), with certain processor configurations having low power consumption. The various image processing algorithms, the DSP requirements, and the capabilities of different platforms are discussed in this paper. PMID:17271974
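    As a concrete instance of one algorithm the abstract lists (edge detection), here is a minimal Sobel gradient-magnitude sketch; its inner multiply-accumulate loops are exactly the workload DSP architectures are optimized for. This is a generic textbook kernel, not an implementation from the paper.

```python
import numpy as np

# Sobel edge magnitude: convolve with horizontal/vertical gradient kernels
# and combine. Written as explicit loops to mirror a DSP MAC-style kernel.
def sobel_magnitude(img):
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            gx = np.sum(kx * patch)      # horizontal gradient
            gy = np.sum(ky * patch)      # vertical gradient
            out[y, x] = np.hypot(gx, gy)
    return out

img = np.zeros((5, 5))
img[:, 2:] = 1.0                         # vertical step edge
edges = sobel_magnitude(img)
assert edges[2, 1] > edges[2, 3]         # response at the edge, none in flat region
```

    On a fixed-point DSP the same kernel would typically run with integer arithmetic and an approximation such as |gx| + |gy| in place of the square root.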

  13. Three-dimensional image signals: processing methods

    NASA Astrophysics Data System (ADS)

    Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru

    2010-11-01

    Over the years, extensive studies have been carried out to apply coherent-optics methods to real-time processing, communications, and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. The recent progress in data-processing networks and communication systems has considerably increased the capacity of information exchange. We describe the results of a literature investigation of processing methods for three-dimensional image signals. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. Images captured with an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses. This requires considerable processing power and memory to create and render the complex mix of colors, textures, and virtual lighting and perspective necessary to make figures appear three-dimensional. Also, using a standard digital camera and a technique called phase-shift interferometry, we can capture "digital holograms": holograms that can be stored on a computer and transmitted over conventional networks. We present some methods to process digital holograms for Internet transmission, together with results.
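    The phase-shift interferometry mentioned above can be sketched with the standard four-step formula: record four interferograms with reference-phase shifts of 0, π/2, π, and 3π/2, then recover the object phase with a quadrant-aware arctangent. This is the textbook reconstruction, not necessarily the authors' exact procedure.

```python
import numpy as np

# Four-step phase-shift interferometry: with i_k = A + B*cos(phi + k*pi/2),
# the differences isolate 2B*sin(phi) and 2B*cos(phi), so
# phi = atan2(i3 - i1, i0 - i2) recovers the wrapped object phase.
def recover_phase(i0, i1, i2, i3):
    return np.arctan2(i3 - i1, i0 - i2)

phi_true = np.array([[0.5, -1.2], [2.0, 0.0]])   # synthetic object phase (rad)
A, B = 2.0, 1.0                                  # background and fringe amplitude
frames = [A + B * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi = recover_phase(*frames)
assert np.allclose(phi, phi_true)
```

    The recovered phase is wrapped to (−π, π]; a digital-hologram pipeline would follow this with phase unwrapping before storage or transmission.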

  14. Management Of Airborne Reconnaissance Images Through Real-Time Processing

    NASA Astrophysics Data System (ADS)

    Endsley, Neil H.

    1985-12-01

    Digital reconnaissance images gathered by low-altitude over-flights with resolutions on the order of a few feet and fields of view up to 120 degrees can generate millions of pixels per second. Storing this data in-flight, transmitting it to the ground, and analyzing it presents significant problems to the tactical community. One potential solution is in-flight preview and pruning of the data, where an operator keeps or transmits only those image segments which on first view contain potential intelligence data. To do this, the images must be presented to the operator in a geometrically correct form. Wide-angle distortion, distortions induced by yaw, pitch, roll and altitude variations, and distortions due to non-ideal alignment of the focal plane array must be removed so the operator can quickly assess the scene content and make decisions on which image segments to keep. When multiple sensors are used with a common field of view, they must be mutually coregistered to permit multispectral or multimode processing to exploit these rich data dimensions. In addition, the operator should be able to alter the apparent point of view of the image, i.e., be able to zoom in and out, rotate, and roam through the displayed field of view while maintaining geometric and radiometric precision. These disparate requirements have a common feature in the ability to perform real-time image geometry manipulation. The role of image geometry manipulation, or image warping, is reviewed and a "strawman" system discussed which incorporates the Pipelined Resampling Processor (PRP). The PRP is a real-time image warping processor discussed at this conference in previous years. Actual results from the PRP prototype are presented. In addition, other image processing aids such as image enhancement and object classification are discussed as they apply to reconnaissance applications.
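    The core operation a warping processor like the PRP pipelines in hardware can be sketched in software as inverse mapping with bilinear resampling: for each output pixel, compute its source coordinates and blend the four surrounding source samples. This toy version is illustrative only and says nothing about the PRP's actual architecture.

```python
import numpy as np

# Inverse-mapped warp: mapping(yo, xo) -> (ys, xs) gives the source location
# for each output pixel; the value is bilinearly interpolated from the
# four neighboring source samples.
def warp(src, mapping, out_shape):
    out = np.zeros(out_shape)
    h, w = src.shape
    for yo in range(out_shape[0]):
        for xo in range(out_shape[1]):
            ys, xs = mapping(yo, xo)
            y0, x0 = int(np.floor(ys)), int(np.floor(xs))
            if 0 <= y0 < h - 1 and 0 <= x0 < w - 1:
                fy, fx = ys - y0, xs - x0
                out[yo, xo] = ((1-fy)*(1-fx)*src[y0, x0] + (1-fy)*fx*src[y0, x0+1]
                               + fy*(1-fx)*src[y0+1, x0] + fy*fx*src[y0+1, x0+1])
    return out

src = np.arange(16, dtype=float).reshape(4, 4)
half = warp(src, lambda yo, xo: (yo + 0.5, xo + 0.5), (3, 3))  # half-pixel shift
print(half[0, 0])  # → 2.5
```

    Zoom, rotation, roam, and the removal of yaw/pitch/roll distortions all reduce to different choices of the mapping function, which is why real-time geometry manipulation covers such disparate requirements.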

  15. Support Routines for In Situ Image Processing

    NASA Technical Reports Server (NTRS)

    Deen, Robert G.; Pariser, Oleg; Yeates, Matthew C.; Lee, Hyun H.; Lorre, Jean

    2013-01-01

    This software consists of a set of application programs that support ground-based image processing for in situ missions. These programs represent a collection of utility routines that perform miscellaneous functions in the context of the ground data system. Each one fulfills some specific need as determined via operational experience. The most distinctive aspect of these programs is that they are integrated into the large in situ image processing system via the PIG (Planetary Image Geometry) library. They work directly with in situ data, understanding the appropriate image metadata fields and updating them properly. The programs themselves are completely multimission; all mission dependencies are handled by PIG. This suite of programs consists of: (1) marscahv: Generates a linearized, epipolar-aligned image given a stereo pair of images. These images are optimized for 1-D stereo correlations. (2) marscheckcm: Compares the camera model in an image label with one derived via kinematics modeling on the ground. (3) marschkovl: Checks the overlaps between a list of images in order to determine which might be stereo pairs. This is useful for non-traditional stereo images like long-baseline pairs or those from an articulating arm camera. (4) marscoordtrans: Translates mosaic coordinates from one form into another. (5) marsdispcompare: Checks a Left-to-Right stereo disparity image against a Right-to-Left disparity image to ensure they are consistent with each other. (6) marsdispwarp: Takes one image of a stereo pair and warps it through a disparity map to create a synthetic opposite-eye image. For example, a right-eye image could be transformed to look like it was taken from the left eye via this program. (7) marsfidfinder: Finds fiducial markers in an image by projecting their approximate location and then using correlation to locate the markers to subpixel accuracy. These fiducial markers are small targets attached to the spacecraft surface. This helps verify, or improve, the

  16. Processing infrared images of aircraft lapjoints

    NASA Technical Reports Server (NTRS)

    Syed, Hazari; Winfree, William P.; Cramer, K. E.

    1992-01-01

    Techniques for processing IR images of aging aircraft lapjoint data are discussed. Attention is given to a technique for detecting disbonds in aircraft lapjoints which clearly delineates the disbonded region from the bonded regions. The technique is weak on unpainted aircraft skin surfaces, but this limitation can be overcome by using a self-adhering contact sheet. Neural network analysis on raw temperature data has been shown to be an effective tool for visualization of images. Numerical simulation results show the above processing technique to be an effective tool in delineating the disbonds.

  17. Nitrogen addition using a gas blow in an ESR process

    NASA Astrophysics Data System (ADS)

    Yamamoto, S.; Momoi, Y.; Kajikawa, K.

    2016-07-01

    A new method for nitrogen addition in an ESR process using nitrogen gas blown in through the electrode was investigated. Nitrogen gas blown through a center bore of the electrode enabled contact between the nitrogen gas and the molten steel directly underneath the electrode tip. A ϕ145 mm diameter, laboratory-sized PESR furnace was used for the study of the reaction kinetics. We also carried out a water-model experiment in order to check the injection depth of the gas blown into the slag. The water model showed that the gas did not reach the upper surface of the molten metal and flowed only along the bottom surface of the electrode. EPMA was carried out on a droplet remaining on the tip of the electrode after melting. The molten steel from the tip of the electrode shows that nitrogen gas absorption occurred at the tip of the electrode. The mass transfer coefficient was around 1.0×10⁻² cm/s in this system, which is almost the same as the coefficient at the molten steel free surface.
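    A first-order mass-transfer picture of the nitrogen pickup described above can be sketched numerically using the reported coefficient k ≈ 1.0×10⁻² cm/s: the concentration approaches equilibrium as dC/dt = k·(A/V)·(C_eq − C). The area-to-volume ratio and the concentration values below are assumed purely for illustration, not taken from the study.

```python
# First-order mass-transfer sketch with explicit Euler time stepping.
k = 1.0e-2          # cm/s, mass transfer coefficient reported in the abstract
A_over_V = 0.5      # 1/cm, reaction area per melt volume (assumed)
C_eq, C = 0.08, 0.01   # wt% N at equilibrium and initially (assumed)

dt, t_end = 0.1, 600.0
for _ in range(int(t_end / dt)):
    C += dt * k * A_over_V * (C_eq - C)   # dC/dt = k*(A/V)*(C_eq - C)
print(round(C, 4))  # → 0.0765
```

    With these assumed values the melt reaches about 95% of the equilibrium pickup within ten minutes; the point of the sketch is only how the single measured coefficient enters the rate law, not the actual furnace numbers.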

  18. Results of precision processing (scene correction) of ERTS-1 images using digital image processing techniques

    NASA Technical Reports Server (NTRS)

    Bernstein, R.

    1973-01-01

    ERTS-1 MSS and RBV data recorded on computer compatible tapes have been analyzed and processed, and preliminary results have been obtained. No degradation of intensity (radiance) information occurred in implementing the geometric correction. The quality and resolution of the digitally processed images are very good, due primarily to the fact that the number of film generations and conversions is reduced to a minimum. Processing times of digitally processed images are about equivalent to those of the NDPF electro-optical processor.

  19. FLIPS: Friendly Lisp Image Processing System

    NASA Astrophysics Data System (ADS)

    Gee, Shirley J.

    1991-08-01

    The Friendly Lisp Image Processing System (FLIPS) is the interface to Advanced Target Detection (ATD), a multi-resolutional image analysis system developed by Hughes in conjunction with the Hughes Research Laboratories. Both menu- and graphics-driven, FLIPS enhances system usability by supporting the interactive nature of research and development. Although much progress has been made, fully automated image understanding technology that is both robust and reliable is not a reality. In situations where highly accurate results are required, skilled human analysts must still verify the findings of these systems. Furthermore, the systems often require processing times several orders of magnitude greater than that needed by veteran personnel to analyze the same image. The purpose of FLIPS is to facilitate the ability of an image analyst to take statistical measurements on digital imagery in a timely fashion, a capability critical in research environments where a large percentage of time is expended in algorithm development. In many cases, this entails minor modifications or code tinkering. Without a well-developed man-machine interface, throughput is unduly constricted. FLIPS provides mechanisms which support rapid prototyping for ATD. This paper examines the ATD/FLIPS system. The philosophy of ATD in addressing image understanding problems is described, and the capabilities of FLIPS are discussed, along with a description of the interaction between ATD and FLIPS. Finally, an overview of current plans for the system is outlined.

  20. An additive and lossless watermarking method based on invariant image approximation and Haar wavelet transform.

    PubMed

    Pan, W; Coatrieux, G; Cuppens, N; Cuppens, F; Roux, Ch

    2010-01-01

    In this article, we propose a new additive lossless watermarking scheme which identifies parts of the image that can be reversibly watermarked and conducts message embedding in the conventional Haar wavelet transform coefficients. Our approach makes use of an approximation of the image signal that is invariant to the watermark addition for classifying the image in order to avoid over/underflows. The method has been tested on different sets of medical images and on some usual natural test images such as Lena. Experimental analysis, conducted with respect to several aspects including data-hiding capacity and image quality preservation, shows that our method is one of the most competitive existing lossless watermarking schemes in terms of high capacity and low distortion. PMID:21096246
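    A minimal reversible embedding in the spirit of the scheme above can be sketched with an integer Haar transform of a pixel pair plus difference expansion of the detail coefficient; the authors' invariant-approximation classification, which is what prevents over/underflow in their method, is deliberately omitted here, so this is only an illustration of losslessness, not their algorithm.

```python
# Reversible one-bit embedding per pixel pair: integer Haar transform,
# then difference expansion of the detail coefficient.
def embed_pair(x, y, bit):
    avg, diff = (x + y) // 2, x - y      # integer Haar transform of the pair
    diff = 2 * diff + bit                # expand the detail coeff to carry one bit
    xw = avg + (diff + 1) // 2           # inverse transform -> watermarked pair
    return xw, xw - diff

def extract_pair(xw, yw):
    avg, diff = (xw + yw) // 2, xw - yw
    bit, diff = diff & 1, diff >> 1      # recover the bit, restore the detail
    x = avg + (diff + 1) // 2
    return x, x - diff, bit

xw, yw = embed_pair(118, 121, 1)
assert extract_pair(xw, yw) == (118, 121, 1)   # original pixels restored exactly
```

    Because the integer Haar transform and the expansion are both exactly invertible, the cover image is recovered bit-for-bit after extraction, which is what "lossless watermarking" requires.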

  1. Cloud based toolbox for image analysis, processing and reconstruction tasks.

    PubMed

    Bednarz, Tomasz; Wang, Dadong; Arzhaeva, Yulia; Lagerstrom, Ryan; Vallotton, Pascal; Burdett, Neil; Khassapov, Alex; Szul, Piotr; Chen, Shiping; Sun, Changming; Domanski, Luke; Thompson, Darren; Gureyev, Timur; Taylor, John A

    2015-01-01

    This chapter describes a novel way of carrying out image analysis, reconstruction and processing tasks using a cloud-based service provided on the Australian National eResearch Collaboration Tools and Resources (NeCTAR) infrastructure. The toolbox allows users free access to a wide range of useful blocks of functionality (imaging functions) that can be connected together in workflows, allowing the creation of even more complex algorithms that can be re-run on different data sets, shared with others, or further adjusted. The functions provided are in the areas of cellular imaging, advanced X-ray image analysis, computed tomography, and 3D medical imaging and visualisation. The service is currently available on the website www.cloudimaging.net.au. PMID:25381109

  2. Product review: lucis image processing software.

    PubMed

    Johnson, J E

    1999-04-01

    Lucis is a software program that allows the manipulation of images through the process of selective contrast pattern emphasis. Using an image-processing algorithm called Differential Hysteresis Processing (DHP), Lucis extracts and highlights patterns based on variations in image intensity (luminance). The result is that details can be seen that would otherwise be hidden in deep shadow or excessive brightness. The software is contained on a single floppy disk, is easy to install on a PC, simple to use, and runs on Windows 95, Windows 98, and Windows NT operating systems. The cost is $8,500 for a license, but the program is estimated to save a great deal of money in photographic materials, time, and labor that would otherwise have been spent in the darkroom. Superb images are easily obtained from unstained (no lead or uranium) sections, and stored image files sent to laser printers are of publication quality. The software can be used not only for all types of microscopy, including color fluorescence light microscopy and biological and materials science electron microscopy (TEM and SEM), but will also be beneficial in medicine, such as for X-ray films (pending approval by the FDA), and in the arts. PMID:10206154

  3. Processing Images of Craters for Spacecraft Navigation

    NASA Technical Reports Server (NTRS)

    Cheng, Yang; Johnson, Andrew E.; Matthies, Larry H.

    2009-01-01

    A crater-detection algorithm has been conceived to enable automation of what, heretofore, have been manual processes for utilizing images of craters on a celestial body as landmarks for navigating a spacecraft flying near or landing on that body. The images are acquired by an electronic camera aboard the spacecraft, then digitized, then processed by the algorithm, which consists mainly of the following steps: 1. Edges in an image are detected and placed in a database. 2. Crater rim edges are selected from the edge database. 3. Edges that belong to the same crater are grouped together. 4. An ellipse is fitted to each group of crater edges. 5. Ellipses are refined directly in the image domain to reduce errors introduced in the detection of edges and fitting of ellipses. 6. The quality of each detected crater is evaluated. It is planned to utilize this algorithm as the basis of a computer program for automated, real-time, onboard processing of crater-image data. Experimental studies have led to the conclusion that this algorithm is capable of a detection rate >93 percent, a false-alarm rate <5 percent, a geometric error <0.5 pixel, and a position error <0.3 pixel.
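    The ellipse-fitting step (step 4) can be sketched as a plain algebraic least-squares fit of a conic to the grouped edge points. This simplification fixes the conic to A·x² + B·xy + C·y² + D·x + E·y = 1 and solves unconstrained normal equations, rather than the constrained eigen-formulations typically used in practice, so treat it as illustrative only.

```python
import numpy as np

# Algebraic conic fit: each edge point contributes one row of the linear
# system [x^2, xy, y^2, x, y] * theta = 1.
def fit_conic(pts):
    x, y = pts[:, 0], pts[:, 1]
    M = np.column_stack([x * x, x * y, y * y, x, y])
    coeffs, *_ = np.linalg.lstsq(M, np.ones(len(pts)), rcond=None)
    return coeffs  # (A, B, C, D, E)

t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
pts = np.column_stack([5 + 3 * np.cos(t), 2 + 1.5 * np.sin(t)])  # rim, center (5, 2)
A, B, C, D, E = fit_conic(pts)
# The conic center solves [2A, B; B, 2C] [cx, cy]^T = [-D, -E]^T
cx, cy = np.linalg.solve([[2 * A, B], [B, 2 * C]], [-D, -E])
assert np.allclose([cx, cy], [5, 2], atol=1e-6)
```

    The recovered center is what a navigation filter would match against a crater catalog; the subsequent image-domain refinement (step 5) then corrects the sub-pixel biases an algebraic fit like this leaves behind.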

  4. Onboard Image Processing System for Hyperspectral Sensor.

    PubMed

    Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun

    2015-01-01

    Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large volume and high speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed, and is implemented in the onboard correction circuitry of sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors in order to maximize the compression ratio. The employed image compression method is based on Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve FELICS's performance of image decorrelation and entropy coding, we apply a two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while still maintaining superior performance in terms of speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which leads to reduced onboard hardware by multiplexing sensor signals into a reduced number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, and fabrication cost. PMID:26404281
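    The Golomb-Rice entropy coder the abstract pairs with interpolation prediction can be sketched in its power-of-two (Rice) form: a value is split into a unary-coded quotient and k binary remainder bits. This is the generic coder family, not the adaptive variant the authors implement.

```python
# Rice coding with parameter 2**k: unary quotient, "0" terminator,
# then k binary remainder bits. Small prediction residuals yield short codes.
def rice_encode(n, k):
    """Encode a non-negative integer n."""
    q, r = n >> k, n & ((1 << k) - 1)
    rem = format(r, "b").zfill(k) if k else ""
    return "1" * q + "0" + rem

def rice_decode(bits, k):
    q = bits.index("0")                  # unary ones counted up to the terminator
    r = int(bits[q + 1:q + 1 + k] or "0", 2)
    return (q << k) | r

assert rice_encode(19, 3) == "110011"
assert rice_decode("110011", 3) == 19
```

    The "adaptive" part of adaptive Golomb-Rice coding amounts to choosing k per context from recent residual magnitudes, so that typical residuals cost close to their entropy.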

  5. Onboard Image Processing System for Hyperspectral Sensor

    PubMed Central

    Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun

    2015-01-01

    Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large volume and high speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed and implemented in the onboard circuitry that corrects the sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors, in order to maximize the compression ratio. The employed image compression method is based on the Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve the image decorrelation and entropy coding performance of FELICS, we apply two-dimensional interpolation prediction and adaptive Golomb-Rice coding. The method supports progressive decompression using resolution scaling while maintaining superior performance in terms of speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which leads to reduced onboard hardware by multiplexing sensor signals into a smaller number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, or fabrication cost. PMID:26404281

  6. Feedback regulation of microscopes by image processing.

    PubMed

    Tsukada, Yuki; Hashimoto, Koichi

    2013-05-01

    Computational microscope systems are becoming a major part of imaging biological phenomena, and the development of such systems requires the design of automated regulation of microscopes. An important aspect of automated regulation is feedback regulation, which is the focus of this review. As modern microscope systems become more complex, often with many independent components that must work together, computer control is inevitable, since the exact orchestration of parameters and timings for these multiple components is critical to acquiring proper images. A number of techniques have been developed for biological imaging to accomplish this. Here, we summarize the basics of computational microscopy for the purpose of building automatically regulated microscopes, focusing on feedback regulation by image processing. These techniques allow high-throughput data acquisition while monitoring both short- and long-term dynamic phenomena, which cannot be achieved without an automated system. PMID:23594233

  7. FITSH: Software Package for Image Processing

    NASA Astrophysics Data System (ADS)

    Pál, András

    2011-11-01

    FITSH provides a standalone environment for analysis of data acquired by imaging astronomical detectors. The package provides utilities both for the full pipeline of subsequent related data processing steps (including image calibration, astrometry, source identification, photometry, differential analysis, low-level arithmetic operations, multiple image combinations, spatial transformations and interpolations, etc.) and for aiding the interpretation of the (mainly photometric and/or astrometric) results. The package also features a consistent implementation of photometry based on image subtraction, point spread function fitting, and aperture photometry, and provides easy-to-use interfaces for comparisons and for picking the most suitable method for a particular problem. The utilities in the package are built on top of the commonly used UNIX/POSIX shells (hence the name of the package); therefore, both frequently used and well-documented tools for such environments can be exploited, and managing massive amounts of data is rather convenient.

  8. Simplified labeling process for medical image segmentation.

    PubMed

    Gao, Mingchen; Huang, Junzhou; Huang, Xiaolei; Zhang, Shaoting; Metaxas, Dimitris N

    2012-01-01

    Image segmentation plays a crucial role in many medical imaging applications by automatically locating the regions of interest. Typically, supervised learning based segmentation methods require a large set of accurately labeled training data. However, the labeling process is tedious, time-consuming, and sometimes not necessary. We propose a robust logistic regression algorithm to handle label outliers such that doctors do not need to waste time on precisely labeling images for the training set. To validate its effectiveness and efficiency, we conduct carefully designed experiments on cervigram image segmentation in the presence of label outliers. Experimental results show that the proposed robust logistic regression algorithms achieve superior performance compared to previous methods, which validates the benefits of the proposed algorithms. PMID:23286072

  9. MATHEMATICAL METHODS IN MEDICAL IMAGE PROCESSING

    PubMed Central

    ANGENENT, SIGURD; PICHON, ERIC; TANNENBAUM, ALLEN

    2013-01-01

    In this paper, we describe some central mathematical problems in medical imaging. The subject has been undergoing rapid changes driven by better hardware and software. Much of the software is based on novel methods utilizing geometric partial differential equations in conjunction with standard signal/image processing techniques as well as computer graphics facilitating man/machine interactions. As part of this enterprise, researchers have been trying to base biomedical engineering principles on rigorous mathematical foundations for the development of software methods to be integrated into complete therapy delivery systems. These systems support the more effective delivery of many image-guided procedures such as radiation therapy, biopsy, and minimally invasive surgery. We will show how mathematics may impact some of the main problems in this area, including image enhancement, registration, and segmentation. PMID:23645963

  10. Enhanced neutron imaging detector using optical processing

    SciTech Connect

    Hutchinson, D.P.; McElhaney, S.A.

    1992-08-01

    Existing neutron imaging detectors have limited count rates due to inherent physical and electronic limitations. The popular multiwire proportional counter is limited by gas recombination to a count rate of less than 10^5 n/s over the entire array, and the neutron Anger camera, even though improved with new fiber optic encoding methods, can only achieve 10^6 cps over a limited array. We present a preliminary design for a new type of neutron imaging detector with a resolution of 2--5 mm and a count rate capability of 10^6 cps per pixel element. We propose to combine optical and electronic processing to economically increase the throughput of advanced detector systems while simplifying computing requirements. By placing a scintillator screen ahead of an optical image processor followed by a detector array, a high throughput imaging detector may be constructed.

  11. Mariner 9 - Image processing and products.

    NASA Technical Reports Server (NTRS)

    Levinthal, E. C.; Green, W. B.; Cutts, J. A.; Jahelka, E. D.; Johansen, R. A.; Sander, M. J.; Seidman, J. B.; Young, A. T.; Soderblom, L. A.

    1972-01-01

    The purpose of this paper is to describe the system for the display, processing, and production of image data products created to support the Mariner 9 Television Experiment. Of necessity, the system was large in order to respond to the needs of a large team of scientists with a broad scope of experimental objectives. The desire to generate processed data products as rapidly as possible to take advantage of adaptive planning during the mission, coupled with the complexities introduced by the nature of the vidicon camera, greatly increased the scale of the ground image processing effort. This paper describes the systems that carried out the processes and delivered the products necessary for real-time and near-real-time analyses. References are made to the computer algorithms used for the different levels of decalibration and analysis.

  12. Mariner 9 - Image processing and products.

    NASA Technical Reports Server (NTRS)

    Levinthal, E. C.; Green, W. B.; Cutts, J. A.; Jahelka, E. D.; Johansen, R. A.; Sander, M. J.; Seidman, J. B.; Young, A. T.; Soderblom, L. A.

    1973-01-01

    The purpose of this paper is to describe the system for the display, processing, and production of image-data products created to support the Mariner 9 Television Experiment. Of necessity, the system was large in order to respond to the needs of a large team of scientists with a broad scope of experimental objectives. The desire to generate processed data products as rapidly as possible, coupled with the complexities introduced by the nature of the vidicon camera, greatly increased the scale of the ground-image processing effort. This paper describes the systems that carried out the processes and delivered the products necessary for real-time and near-real-time analyses. References are made to the computer algorithms used for the different levels of decalibration and analysis.

  13. Mariner 9-Image processing and products

    USGS Publications Warehouse

    Levinthal, E.C.; Green, W.B.; Cutts, J.A.; Jahelka, E.D.; Johansen, R.A.; Sander, M.J.; Seidman, J.B.; Young, A.T.; Soderblom, L.A.

    1973-01-01

    The purpose of this paper is to describe the system for the display, processing, and production of image-data products created to support the Mariner 9 Television Experiment. Of necessity, the system was large in order to respond to the needs of a large team of scientists with a broad scope of experimental objectives. The desire to generate processed data products as rapidly as possible to take advantage of adaptive planning during the mission, coupled with the complexities introduced by the nature of the vidicon camera, greatly increased the scale of the ground-image processing effort. This paper describes the systems that carried out the processes and delivered the products necessary for real-time and near-real-time analyses. References are made to the computer algorithms used for the different levels of decalibration and analysis. © 1973.

  14. Web-based document image processing

    NASA Astrophysics Data System (ADS)

    Walker, Frank L.; Thoma, George R.

    1999-12-01

    Increasing numbers of research libraries are turning to the Internet for electronic interlibrary loan and for document delivery to patrons. This has been made possible through the widespread adoption of software such as Ariel and DocView. Ariel, a product of the Research Libraries Group, converts paper-based documents to monochrome bitmapped images and delivers them over the Internet. The National Library of Medicine's DocView is primarily designed for library patrons. Although libraries and patrons are beginning to reap the benefits of this new technology, barriers exist, e.g., differences in image file format, that lead to difficulties in the use of library document information. To research how to overcome such barriers, the Communications Engineering Branch of the Lister Hill National Center for Biomedical Communications, an R and D division of NLM, has developed a web site called the DocMorph Server. This is part of an ongoing intramural R and D program in document imaging that has spanned many aspects of electronic document conversion and preservation, Internet document transmission, and document usage. The DocMorph Server Web site is designed to fill two roles. First, in a role that will benefit both libraries and their patrons, it allows Internet users to upload scanned image files for conversion to alternative formats, thereby enabling wider delivery and easier usage of library document information. Second, the DocMorph Server provides the design team an active test bed for evaluating the effectiveness and utility of new document image processing algorithms and functions, so that they may be evaluated for possible inclusion in other image processing software products being developed at NLM or elsewhere. This paper describes the design of the prototype DocMorph Server and the image processing functions being implemented on it.

  15. Digital image processing of vascular angiograms

    NASA Technical Reports Server (NTRS)

    Selzer, R. H.; Beckenbach, E. S.; Blankenhorn, D. H.; Crawford, D. W.; Brooks, S. H.

    1975-01-01

    The paper discusses the estimation of the degree of atherosclerosis in the human femoral artery through the use of a digital image processing system for vascular angiograms. The film digitizer uses an electronic image dissector camera to scan the angiogram and convert the recorded optical density information into a numerical format. Another processing step involves locating the vessel edges from the digital image. The computer has been programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements are combined into an atherosclerosis index, which is found in a post-mortem study to correlate well with both visual and chemical estimates of atherosclerotic disease.

  16. Watermarking scheme for large images using parallel processing

    NASA Astrophysics Data System (ADS)

    Debes, Eric; Dardier, Genevieve; Ebrahimi, Touradj; Herrigel, Alexander

    2001-08-01

    Large and high-resolution images usually have a high commercial value. Thus they are very good candidates for watermarking. If many images have to be signed in a Client-Server setup, memory and computational requirements could become unrealistic for current and near future solutions. In this paper, we propose to tile the image into sub-images. The watermarking scheme is then applied to each sub-image in the embedding and retrieval process. Thanks to this solution, the first possible optimization consists in creating different threads to read and write the image tile by tile. The time spent in input/output operations, which can be a bottleneck for large images, is reduced. In addition to this optimization, we show that the memory consumption of the application is also highly reduced for large images. Finally, the application can be multithreaded so that different tiles can be watermarked in parallel. Therefore the scheme can take advantage of the processing power of the different processors available in current servers. We show that the correct tile size and the right amount of threads have to be created to efficiently distribute the workload. Eventually, security, robustness and invisibility issues are addressed considering the signal redundancy.
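The tiling-plus-threading strategy described above can be sketched with a thread pool, one task per tile. The `watermark_tile` function here is a hypothetical placeholder (a weak, seeded pseudo-random pattern), not the paper's actual embedding scheme:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def watermark_tile(tile, seed):
    """Hypothetical stand-in for the real embedding step:
    add a weak, seeded pseudo-random pattern to one tile."""
    rng = np.random.default_rng(seed)
    return tile + 0.01 * rng.standard_normal(tile.shape)

def watermark_image(img, tile_size=256, workers=4):
    """Split a grayscale image into tiles and watermark them in parallel,
    mirroring the per-tile embedding threads described in the abstract."""
    h, w = img.shape
    coords = [(r, c) for r in range(0, h, tile_size)
                     for c in range(0, w, tile_size)]
    out = np.empty_like(img)

    def work(rc):
        r, c = rc
        tile = img[r:r + tile_size, c:c + tile_size]
        # Each thread writes a disjoint region, so no locking is needed.
        out[r:r + tile_size, c:c + tile_size] = watermark_tile(tile, seed=r * w + c)

    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(work, coords))
    return out

img = np.zeros((512, 512))
marked = watermark_image(img)
print(marked.shape)
```

As the paper notes, tile size and thread count must be balanced against I/O and memory; the sketch above only shows the workload-splitting structure.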

  17. Assessing the use of an infrared spectrum hyperpixel array imager to measure temperature during additive and subtractive manufacturing

    NASA Astrophysics Data System (ADS)

    Whitenton, Eric; Heigel, Jarred; Lane, Brandon; Moylan, Shawn

    2016-05-01

    Accurate non-contact temperature measurement is important to optimize manufacturing processes. This applies to both additive (3D printing) and subtractive (material removal by machining) manufacturing. Performing accurate single wavelength thermography suffers numerous challenges. A potential alternative is hyperpixel array hyperspectral imaging. Focusing on metals, this paper discusses issues involved such as unknown or changing emissivity, inaccurate greybody assumptions, motion blur, and size-of-source effects. The algorithm which converts measured thermal spectra to emissivity and temperature uses a customized multistep non-linear equation solver to determine the best-fit emission curve. Emissivity dependence on wavelength may be assumed uniform or have a relationship typical for metals. The custom software displays residuals for intensity, temperature, and emissivity to gauge the correctness of the greybody assumption. Initial results are shown from a laser powder-bed fusion additive process, as well as a machining process. In addition, the effects of motion blur are analyzed, which occur in both additive and subtractive manufacturing processes. In a laser powder-bed fusion additive process, the scanning laser causes the melt pool to move rapidly, causing a motion blur-like effect. In machining, measuring the temperature of the rapidly moving chip is a desirable goal to develop and validate simulations of the cutting process. A moving slit target is imaged to characterize how the measured temperature values are affected by motion of a measured target.
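The conversion of a measured thermal spectrum to temperature and emissivity can be illustrated with a toy greybody fit: grid-search the temperature and, for each trial value, solve for the best uniform emissivity in closed form. This is a simplification of the paper's multistep non-linear solver, assuming wavelength-independent (grey) emissivity:

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck, light speed, Boltzmann

def planck(lam, T):
    """Blackbody spectral radiance at wavelength lam (m) and temperature T (K)."""
    return (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * T))

def fit_greybody(lam, measured, T_grid):
    """For each trial T, the least-squares uniform emissivity is the
    projection of the measured spectrum onto the blackbody curve."""
    best = (np.inf, None, None)
    for T in T_grid:
        bb = planck(lam, T)
        eps = np.dot(measured, bb) / np.dot(bb, bb)
        err = np.sum((measured - eps * bb) ** 2)
        if err < best[0]:
            best = (err, T, eps)
    return best[1], best[2]

lam = np.linspace(2e-6, 5e-6, 40)        # mid-IR band, illustrative
truth = 0.35 * planck(lam, 1700.0)       # grey emitter at 1700 K
T_est, eps_est = fit_greybody(lam, truth, np.arange(1500.0, 1901.0, 1.0))
print(T_est, round(eps_est, 2))
```

The separation works because, for a grey emitter, emissivity only scales the curve while temperature changes its shape; the residual displays mentioned in the abstract diagnose cases where this assumption fails.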

  18. Progressive band processing for hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Schultz, Robert C.

    Hyperspectral imaging has emerged as an image processing technique in many applications. Hyperspectral data is so called mainly because of the massive amount of information provided by the hundreds of spectral bands that can be used for data analysis. However, due to very high band-to-band correlation, much of this information may also be redundant. Consequently, how to effectively and best utilize such rich spectral information becomes very challenging. One general approach is data dimensionality reduction, which can be performed by data compression techniques, such as data transforms, and data reduction techniques, such as band selection. This dissertation presents a new area in hyperspectral imaging, called progressive hyperspectral imaging, which has not been explored in the past. Specifically, it derives a new theory, called Progressive Band Processing (PBP), of hyperspectral data that can significantly reduce computing time and can also be realized in real time. It is particularly suited for application areas such as hyperspectral data communications and transmission, where data can be communicated and transmitted progressively through spectral or satellite channels with limited data storage. Most importantly, PBP allows users to screen preliminary results before deciding to continue with processing the complete data set. These advantages benefit users of hyperspectral data by reducing processing time and increasing the timeliness of crucial decisions made based on the data, such as identifying key intelligence information when a required response time is short.

  19. Stochastic processes, estimation theory and image enhancement

    NASA Technical Reports Server (NTRS)

    Assefi, T.

    1978-01-01

    An introductory account of stochastic processes, estimation theory, and image enhancement is presented. The book is primarily intended for first-year graduate students and practicing engineers and scientists whose work requires an acquaintance with the theory. Fundamental concepts of probability that are required to support the main topics are reviewed. The appendices discuss the remaining mathematical background.

  20. Limiting liability via high resolution image processing

    SciTech Connect

    Greenwade, L.E.; Overlin, T.K.

    1996-12-31

    The utilization of high resolution image processing allows forensic analysts and visualization scientists to assist detectives by enhancing field photographs, and by providing the tools and training to increase the quality and usability of field photos. Through the use of digitized photographs and computerized enhancement software, field evidence can be obtained and processed as 'evidence ready', even in poor lighting and shadowed conditions or darkened rooms. These images, which are most often unusable when taken with standard camera equipment, can be shot in the worst of photographic conditions and be processed as usable evidence. Visualization scientists have taken the use of digital photographic image processing and moved the processing of crime scene photos into the technology age. The use of high resolution technology will assist law enforcement in making better use of crime scene photography and positive identification of prints. Valuable courtroom and investigation time can be saved and better served by this accurate, performance-based process. Inconclusive evidence does not lead to convictions. Enhancement addresses a major problem with crime scene photos: images that, taken with standard equipment and without the benefit of enhancement software, would be inconclusive, allowing guilty parties to go free for lack of evidence.

  1. Thermographic In-Situ Process Monitoring of the Electron Beam Melting Technology used in Additive Manufacturing

    SciTech Connect

    Dinwiddie, Ralph Barton; Dehoff, Ryan R; Lloyd, Peter D; Lowe, Larry E; Ulrich, Joseph B

    2013-01-01

    Oak Ridge National Laboratory (ORNL) has been utilizing the ARCAM electron beam melting technology to additively manufacture complex geometric structures directly from powder. Although the technology has demonstrated the ability to decrease costs, decrease manufacturing lead-time and fabricate complex structures that are impossible to fabricate through conventional processing techniques, certification of the component quality can be challenging. Because the process involves the continuous deposition of successive layers of material, each layer can be examined without destructively testing the component. However, in-situ process monitoring is difficult due to metallization on inside surfaces caused by evaporation and condensation of metal from the melt pool. This work describes a solution to one of the challenges of continuously imaging the inside of the chamber during the EBM process. Here, the utilization of a continuously moving Mylar film canister is described. Results will be presented related to in-situ process monitoring and how this technique results in improved mechanical properties and reliability of the process.

  2. Processing Infrared Images For Fire Management Applications

    NASA Astrophysics Data System (ADS)

    Warren, John R.; Pratt, William K.

    1981-12-01

    The USDA Forest Service has used airborne infrared systems for forest fire detection and mapping for many years. The transfer of the images from plane to ground and the transposition of fire spots and perimeters to maps has been performed manually. A new system has been developed which uses digital image processing, transmission, and storage. Interactive graphics, high resolution color display, calculations, and computer model compatibility are featured in the system. Images are acquired by an IR line scanner and converted to 1024 x 1024 x 8 bit frames for transmission to the ground at a 1.544 Mbit/s rate over a 14.7 GHz carrier. Individual frames are received and stored, then transferred to a solid state memory to refresh the display at a conventional 30 frames per second rate. Line length and area calculations, false color assignment, X-Y scaling, and image enhancement are available. Fire spread can be calculated for display and fire perimeters plotted on maps. The performance requirements, basic system, and image processing will be described.

  3. Visual parameter optimisation for biomedical image processing

    PubMed Central

    2015-01-01

    Background Biomedical image processing methods require users to optimise input parameters to ensure high-quality output. This presents two challenges. First, it is difficult to optimise multiple input parameters for multiple input images. Second, it is difficult to achieve an understanding of underlying algorithms, in particular, relationships between input and output. Results We present a visualisation method that transforms users' ability to understand algorithm behaviour by integrating input and output, and by supporting exploration of their relationships. We discuss its application to a colour deconvolution technique for stained histology images and show how it enabled a domain expert to identify suitable parameter values for the deconvolution of two types of images, and metrics to quantify deconvolution performance. It also enabled a breakthrough in understanding by invalidating an underlying assumption about the algorithm. Conclusions The visualisation method presented here provides analysis capability for multiple inputs and outputs in biomedical image processing that is not supported by previous analysis software. The analysis supported by our method is not feasible with conventional trial-and-error approaches. PMID:26329538

  4. Subband/transform functions for image processing

    NASA Technical Reports Server (NTRS)

    Glover, Daniel

    1993-01-01

    Functions for image data processing written for use with the MATLAB(TM) software package are presented. These functions provide the capability to transform image data with block transformations (such as the Walsh Hadamard) and to produce spatial frequency subbands of the transformed data. Block transforms are equivalent to simple subband systems. The transform coefficients are reordered using a simple permutation to give subbands. The low frequency subband is a low resolution version of the original image, while the higher frequency subbands contain edge information. The transform functions can be cascaded to provide further decomposition into more subbands. If the cascade is applied to all four of the first stage subbands (in the case of a four band decomposition), then a uniform structure of sixteen bands is obtained. If the cascade is applied only to the low frequency subband, an octave structure of seven bands results. Functions for the inverse transforms are also given. These functions can be used for image data compression systems. The transforms do not in themselves produce data compression, but prepare the data for quantization and compression. Sample quantization functions for subbands are also given. A typical compression approach is to subband the image data, quantize it, then use statistical coding (e.g., run-length coding followed by Huffman coding) for compression. Contour plots of image data and subbanded data are shown.
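A single stage of the four-band decomposition described above can be sketched with the 2x2 Walsh-Hadamard transform, reordering coefficients into subbands. This is a minimal NumPy illustration, not the MATLAB functions themselves; it assumes even image dimensions:

```python
import numpy as np

def subband2x2(img):
    """One-stage four-band decomposition via the 2x2 Walsh-Hadamard
    transform, with coefficients reordered into subbands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]   # top-left, top-right of each block
    c = img[1::2, 0::2]; d = img[1::2, 1::2]   # bottom-left, bottom-right
    ll = (a + b + c + d) / 2   # low-frequency subband: low-resolution image
    lh = (a - b + c - d) / 2   # horizontal difference (vertical edges)
    hl = (a + b - c - d) / 2   # vertical difference (horizontal edges)
    hh = (a - b - c + d) / 2   # diagonal difference
    return ll, lh, hl, hh

img = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = subband2x2(img)
print(ll)
```

Cascading `subband2x2` on `ll` alone yields the octave structure mentioned in the abstract; cascading on all four bands yields the uniform sixteen-band structure.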

  5. Bitplane Image Coding With Parallel Coefficient Processing.

    PubMed

    Auli-Llinas, Francesc; Enfedaque, Pablo; Moure, Juan C; Sanchez, Victor

    2016-01-01

    Image coding systems have been traditionally tailored for multiple instruction, multiple data (MIMD) computing. In general, they partition the (transformed) image into codeblocks that can be coded in the cores of MIMD-based processors. Each core executes a sequential flow of instructions to process the coefficients in the codeblock, independently and asynchronously from the other cores. Bitplane coding is a common strategy to code such data. Most of its mechanisms require sequential processing of the coefficients. Recent years have seen the rise of processing accelerators with enhanced computational performance and power efficiency whose architecture is mainly based on the single instruction, multiple data (SIMD) principle. SIMD computing refers to the execution of the same instruction on multiple data in a lockstep, synchronous way. Unfortunately, current bitplane coding strategies cannot fully profit from such processors due to the inherently sequential coding task. This paper presents bitplane image coding with parallel coefficient (BPC-PaCo) processing, a coding method that can process many coefficients within a codeblock in parallel and synchronously. To this end, the scanning order, the context formation, the probability model, and the arithmetic coder of the coding engine have been re-formulated. The experimental results suggest that the penalization in coding performance of BPC-PaCo with respect to the traditional strategies is almost negligible. PMID:26441420
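The bitplane representation that such coders operate on can be illustrated as follows. This is a sketch of the data layout only; BPC-PaCo's re-formulated scanning order, context modeling, and arithmetic coding are beyond a short example:

```python
import numpy as np

def bitplanes(coeffs, nbits=8):
    """Decompose non-negative integer coefficients into bitplanes,
    most significant plane first; each plane is then entropy-coded."""
    planes = [(coeffs >> b) & 1 for b in range(nbits - 1, -1, -1)]
    return np.stack(planes)

def reassemble(planes):
    """Inverse: weight each plane by its bit value and sum."""
    nbits = planes.shape[0]
    weights = 1 << np.arange(nbits - 1, -1, -1)
    return np.tensordot(weights, planes, axes=1)

block = np.array([[3, 200], [17, 255]])
p = bitplanes(block)
print(p.shape)   # one 2x2 binary plane per bit position
```

Coding plane by plane, from most to least significant, is what gives bitplane coders their progressive (quality-scalable) bitstreams.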

  6. [Digital thoracic radiology: devices, image processing, limits].

    PubMed

    Frija, J; de Géry, S; Lallouet, F; Guermazi, A; Zagdanski, A M; De Kerviler, E

    2001-09-01

    In a first part, the different techniques of digital thoracic radiography are described. Since computed radiography with phosphor plates is the most widely commercialized technique, it receives the most emphasis, but the other detectors are also described: the selenium-coated drum, direct digital radiography with selenium detectors, indirect flat-panel detectors, and a system with four high-resolution CCD cameras. In a second part, the most important image processing operations are discussed: gradation curves, unsharp mask processing, the MUSICA system, dynamic range compression or reduction, and subtraction with dual energy. In the last part, the advantages and drawbacks of computed thoracic radiography are discussed. The most important advantages are the consistently good quality of the images and the possibilities for image processing. PMID:11567193
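Of the processing steps listed, unsharp masking is the most easily illustrated: add back a scaled difference between the image and a blurred copy. In this sketch a simple box blur stands in for the modality-specific kernels used in commercial radiography workstations:

```python
import numpy as np

def unsharp_mask(img, radius=1, amount=1.0):
    """Sharpen by adding amount * (image - blurred image).
    A (2*radius+1)^2 box blur is used for simplicity."""
    k = 2 * radius + 1
    pad = np.pad(img, radius, mode='edge')
    blur = np.zeros_like(img, dtype=float)
    for dy in range(k):                 # accumulate the box-blur sum
        for dx in range(k):
            blur += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blur /= k * k
    return img + amount * (img - blur)

img = np.zeros((8, 8)); img[:, 4:] = 1.0     # vertical step edge
sharp = unsharp_mask(img)
print(sharp[0, 3], sharp[0, 4])              # overshoot on both sides of the edge
```

The characteristic over/undershoot at edges is exactly what makes subtle thoracic structures more conspicuous, and why the amount must be tuned to avoid artifacts.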

  7. Image processing via VLSI: A concept paper

    NASA Technical Reports Server (NTRS)

    Nathan, R.

    1982-01-01

    Implementing specific image processing algorithms via very large scale integrated systems offers a potent solution to the problem of handling high data rates. Two algorithms stand out as particularly critical: geometric map transformation and filtering or correlation. These two functions form the basis for data calibration, registration, and mosaicking. VLSI presents itself as an inexpensive ancillary function to be added to almost any general purpose computer, and if the geometry and filter algorithms are implemented in VLSI, the processing rate bottleneck would be significantly relieved. A development approach is described that identifies the set of image processing functions that limits present systems in dealing with future throughput needs, translates these functions to algorithms, implements them via VLSI technology, and interfaces the hardware to a general purpose digital computer.

  8. Fission gas bubble identification using MATLAB's image processing toolbox

    DOE PAGESBeta

    Collette, R.; King, J.; Keiser, Jr., D.; Miller, B.; Madden, J.; Schulthess, J.

    2016-06-08

    Automated image processing routines have the potential to aid in the fuel performance evaluation process by eliminating bias in human judgment that may vary from person to person or sample to sample. This study presents several MATLAB based image analysis routines designed for fission gas void identification in post-irradiation examination of uranium molybdenum (U–Mo) monolithic-type plate fuels. Frequency domain filtration, enlisted as a pre-processing technique, can eliminate artifacts from the image without compromising the critical features of interest. This process is coupled with a bilateral filter, an edge-preserving noise removal technique aimed at preparing the image for optimal segmentation. Adaptive thresholding proved to be the most consistent gray-level feature segmentation technique for U–Mo fuel microstructures. The Sauvola adaptive threshold technique segments the image based on histogram weighting factors in stable contrast regions and local statistics in variable contrast regions. Once all processing is complete, the algorithm outputs the total fission gas void count, the mean void size, and the average porosity. The final results demonstrate an ability to extract fission gas void morphological data faster, more consistently, and at least as accurately as manual segmentation methods.
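The Sauvola adaptive threshold mentioned above computes a per-pixel threshold t = m * (1 + k * (s/R - 1)) from the local mean m and standard deviation s within a window. A naive sliding-window sketch follows (production code would use integral images; the window size, k, and R values are illustrative):

```python
import numpy as np

def sauvola_threshold(img, window=15, k=0.2, R=128.0):
    """Classify each pixel as above (True) or below (False) its
    Sauvola threshold t = m * (1 + k * (s/R - 1))."""
    r = window // 2
    pad = np.pad(img.astype(float), r, mode='edge')
    out = np.zeros(img.shape, dtype=bool)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + window, j:j + window]
            m, s = win.mean(), win.std()
            out[i, j] = img[i, j] > m * (1 + k * (s / R - 1))
    return out

# Dark voids (0) in a bright matrix (200) with a shaded right half (120).
img = np.full((40, 40), 200.0); img[:, 20:] = 120.0
img[10:14, 5:9] = 0.0; img[10:14, 25:29] = 0.0
seg = sauvola_threshold(img)
print(seg.sum())
```

Because the threshold adapts to local statistics, voids are separated correctly in both the bright and the shaded regions, which is why such methods cope with the variable contrast of fuel micrographs better than a single global threshold.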

  9. EOS image data processing system definition study

    NASA Technical Reports Server (NTRS)

    Gilbert, J.; Honikman, T.; Mcmahon, E.; Miller, E.; Pietrzak, L.; Yorsz, W.

    1973-01-01

    The Image Processing System (IPS) requirements and configuration are defined for the NASA-sponsored advanced technology Earth Observatory System (EOS). The scope included investigation and definition of IPS operational, functional, and product requirements, considering overall system constraints and interfaces (sensor, etc.). The scope also included investigation of the technical feasibility and definition of a point design reflecting system requirements. The design phase required a survey of present and projected technology related to general and special-purpose processors, high-density digital tape recorders, and image recorders.

  10. Additive manufacturing of Inconel 718 using electron beam melting: Processing, post-processing, & mechanical properties

    NASA Astrophysics Data System (ADS)

    Sames, William James, V.

    Additive Manufacturing (AM) process parameters were studied for production of the high temperature alloy Inconel 718 using Electron Beam Melting (EBM) to better understand the relationship between processing, microstructure, and mechanical properties. Processing parameters were analyzed for impact on process time, process temperature, and the amount of applied energy. The applied electron beam energy was shown to be integral to the formation of swelling defects. Standard features in the microstructure were identified, including previously unidentified solidification features such as shrinkage porosity and non-equilibrium phases. The as-solidified structure does not persist in the bulk of EBM parts due to a high process hold temperature (~1000°C), which causes in situ homogenization. The most significant variability in as-fabricated microstructure is the formation of intragranular delta-phase needles, which can form in samples produced with lower process temperatures (< 960°C). A novel approach was developed and demonstrated for controlling the temperature of cool down, thus providing a technique for in situ heat treatment of material. This technique was used to produce material with hardness of 478 ± 7 HV with no post-processing, which exceeds the hardness of peak-aged Inconel 718. Traditional post-processing methods of hot isostatic pressing (HIP) and solution treatment and aging (STA) were found to result in variability in grain growth and phase solution. Recrystallization and grain structure are identified as possible mechanisms to promote grain growth. These results led to the conclusion that the first step in thermal post-processing of EBM Inconel 718 should be an optimized solution treatment to reset phase variation in the as-fabricated microstructure without incurring significant grain growth. Such an optimized solution treatment was developed (1120°C, 2 hr) for application prior to aging or HIP. The majority of as-fabricated tensile properties met ASTM

  11. The system integration of image processing

    NASA Astrophysics Data System (ADS)

    Chen, Qi-xing; Wu, Qin-zhang; Gao, Xiao-dong; Ren, Guo-qiang

    2008-03-01

    An integration system was designed for the remote communication of optics and electronics detection systems, integrating programmable DSP and FPGA chips in addition to a few Application Specific Integrated Circuits (ASICs). It could achieve image binarization, image enhancement, data encryption, image compression encoding, channel encoding, data interleaving, etc., and the algorithms for these functions could be renewed or updated easily. With a CCD color camera as the signal source, experiments were carried out on a platform with a DSP chip and an FPGA chip. The FPGA chip mainly realized the reconstruction of the image's brightness signal and the production of various timing signals, while the DSP chip mainly accomplished the other functions. The algorithms for compressing image data were based on the discrete cosine transform (DCT) and the discrete wavelet transform (DWT), respectively. The experimental results showed that the developed platform is characterized by flexibility, programmability and reconfigurability. The integration system is well suited for the remote communication of optics and electronics detection systems.
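
    A minimal sketch of the DCT-based compression path mentioned above, in pure NumPy: transform an 8×8 block, discard the smallest coefficients, and inverse-transform. This is illustrative only (the `keep` fraction is an assumption); a real codec would quantize and entropy-code the coefficients rather than simply zeroing the smallest ones.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def dct_compress(block, keep=0.25):
    """2-D DCT of a square block, discard the smallest coefficients,
    then inverse-transform. Returns the lossy reconstruction."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T                  # forward 2-D DCT
    thresh = np.quantile(np.abs(coeffs), 1.0 - keep)
    coeffs[np.abs(coeffs) < thresh] = 0.0     # crude stand-in for quantization
    return C.T @ coeffs @ C                   # inverse 2-D DCT

rng = np.random.default_rng(0)
# A smooth vertical gradient plus mild noise compresses well under the DCT.
block = np.outer(np.linspace(0, 255, 8), np.ones(8)) + rng.normal(0, 2, (8, 8))
recon = dct_compress(block, keep=0.25)
rel_err = np.linalg.norm(recon - block) / np.linalg.norm(block)
```

Because the image energy concentrates in a few low-frequency coefficients, the reconstruction error stays small even though 75% of the coefficients are dropped.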

  12. Electronics Signal Processing for Medical Imaging

    NASA Astrophysics Data System (ADS)

    Turchetta, Renato

    This paper describes the way the signal coming from a radiation detector is conditioned and processed to produce images useful for medical applications. First of all, the small signal produced by the radiation is processed by analogue electronics specifically designed to produce a good signal-to-noise ratio. The optimised analogue signal produced at this stage can then be processed and transformed into digital information that is eventually stored in a computer, where it can be further processed as required. After an introduction to the general requirements of the processing electronics, we will review the basic building blocks that process the `tiny' analogue signal coming from a radiation detector. We will in particular analyse how the signal-to-noise ratio of the electronics can be optimised. Some exercises, developed in the tutorial, will help in understanding this fundamental part. The blocks needed to process the analogue signal and transform it into a digital code will be described. A description of electronics systems used for medical imaging systems will conclude the lecture.
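
    The central signal-to-noise point can be illustrated numerically: integrating (here, simply averaging) N independent noisy samples of a detector pulse improves the SNR by a factor of sqrt(N). The numbers below are invented for the demonstration, and real front-ends use analogue shaping filters rather than plain averaging.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_samples = 2000, 64
signal = 1.0                 # detector pulse amplitude (arbitrary units)
noise_sigma = 4.0            # rms noise on each raw sample

# Raw samples: the pulse is buried in noise (per-sample SNR = 0.25).
raw = signal + rng.normal(0, noise_sigma, (n_trials, n_samples))

# Integration: averaging n_samples independent samples shrinks the
# noise by sqrt(n_samples) while leaving the signal unchanged.
shaped = raw.mean(axis=1)

snr_raw = signal / raw.std()
snr_shaped = signal / shaped.std()   # expected improvement: sqrt(64) = 8x
```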

  13. Biomass estimator for NIR image with a few additional spectral band images taken from light UAS

    NASA Astrophysics Data System (ADS)

    Pölönen, Ilkka; Salo, Heikki; Saari, Heikki; Kaivosoja, Jere; Pesonen, Liisa; Honkavaara, Eija

    2012-05-01

    A novel way to produce biomass estimation will offer possibilities for precision farming. Fertilizer prediction maps can be made based on accurate biomass estimation generated by a novel biomass estimator. By using this knowledge, a variable rate amount of fertilizers can be applied during the growing season. The innovation consists of light UAS, a high spatial resolution camera, and VTT's novel spectral camera. A few properly selected spectral wavelengths with NIR images and point clouds extracted by automatic image matching have been used in the estimation. The spectral wavelengths were chosen from green, red, and NIR channels.

  14. Computer image processing in marine resource exploration

    NASA Technical Reports Server (NTRS)

    Paluzzi, P. R.; Normark, W. R.; Hess, G. R.; Hess, H. D.; Cruickshank, M. J.

    1976-01-01

    Pictographic data or imagery is commonly used in marine exploration. Pre-existing image processing techniques (software) similar to those used on imagery obtained from unmanned planetary exploration were used to improve marine photography and side-scan sonar imagery. Features and details not visible by conventional photo processing methods were enhanced by filtering and noise removal on selected deep-sea photographs. Information gained near the periphery of photographs allows improved interpretation and facilitates construction of bottom mosaics where overlapping frames are available. Similar processing techniques were applied to side-scan sonar imagery, including corrections for slant range distortion, and along-track scale changes. The use of digital data processing and storage techniques greatly extends the quantity of information that can be handled, stored, and processed.
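
    The slant-range correction mentioned for side-scan sonar reduces, under a flat-seafloor assumption, to simple right-triangle geometry between the towfish altitude and the echo's slant range. A sketch with illustrative values (not from the paper):

```python
import numpy as np

def slant_to_ground_range(slant_range, altitude):
    """Correct side-scan sonar slant range to horizontal ground range,
    assuming a flat seafloor: ground = sqrt(slant^2 - altitude^2)."""
    slant = np.asarray(slant_range, dtype=float)
    return np.sqrt(np.maximum(slant**2 - altitude**2, 0.0))

# Towfish flying 30 m above the bottom; echoes at 50 m and 130 m slant range.
ground = slant_to_ground_range([50.0, 130.0], altitude=30.0)
```

Resampling each scan line onto this corrected ground-range axis removes the characteristic near-range compression of raw sonar imagery.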

  15. IMAGE 100: The interactive multispectral image processing system

    NASA Technical Reports Server (NTRS)

    Schaller, E. S.; Towles, R. W.

    1975-01-01

    The need for rapid, cost-effective extraction of useful information from vast quantities of multispectral imagery available from aircraft or spacecraft has resulted in the design, implementation and application of a state-of-the-art processing system known as IMAGE 100. Operating on the general principle that all objects or materials possess unique spectral characteristics or signatures, the system uses this signature uniqueness to identify similar features in an image by simultaneously analyzing signatures in multiple frequency bands. Pseudo-colors, or themes, are assigned to features having identical spectral characteristics. These themes are displayed on a color CRT, and may be recorded on tape, film, or other media. The system was designed to incorporate key features such as interactive operation, user-oriented displays and controls, and rapid-response machine processing. Owing to these features, the user can readily control and/or modify the analysis process based on his knowledge of the input imagery. Effective use can be made of conventional photographic interpretation skills and state-of-the-art machine analysis techniques in the extraction of useful information from multispectral imagery. This approach results in highly accurate multitheme classification of imagery in seconds or minutes rather than the hours often involved in processing using other means.

  16. Analysis of physical processes via imaging vectors

    NASA Astrophysics Data System (ADS)

    Volovodenko, V.; Efremova, N.; Efremov, V.

    2016-06-01

    Practically all modeled processes are in one way or another random. The foremost theoretical foundation here is the theory of Markov processes, which can be represented in different forms. A Markov process is a random process that undergoes transitions from one state to another on a state space, where the probability distribution of the next state depends only on the current state and not on the sequence of events that preceded it. In a Markov process, the model of the future therefore does not change when additional information about preceding times becomes available. Modeling physical fields generally involves processes changing in time, i.e. non-stationary processes. In this case, applying the Laplace transformation introduces unjustified complications into the description, whereas a transition to other representations yields an explicit simplification. The method of imaging vectors provides constructive mathematical models and the necessary transitions in the modeling process and in the analysis itself. The flexibility of a model built on a polynomial basis allows rapid changes of the mathematical model and accelerates further analysis. It should be noted that the mathematical description permits an operator representation; conversely, operator representation of the structures, algorithms and data processing procedures significantly improves the flexibility of the modeling process.
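
    The Markov property described above can be made concrete with a small transition-matrix example (the matrix values are illustrative, not from the paper): the n-step behaviour is just the matrix power P^n, and as n grows the chain forgets its starting state.

```python
import numpy as np

# Two-state Markov chain: the next state depends only on the current one.
# P[i, j] = probability of moving from state i to state j.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

def n_step(P, n):
    """n-step transition probabilities: simply the matrix power P^n."""
    return np.linalg.matrix_power(P, n)

# P^n converges to a rank-one matrix whose rows are all the stationary
# distribution pi, which satisfies pi = pi @ P (here pi = (5/6, 1/6)).
pi = n_step(P, 50)[0]
```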

  17. Sorting Olive Batches for the Milling Process Using Image Processing.

    PubMed

    Aguilera Puerto, Daniel; Martínez Gila, Diego Manuel; Gámez García, Javier; Gómez Ortega, Juan

    2015-01-01

    The quality of virgin olive oil obtained in the milling process is directly bound to the characteristics of the olives. Hence, the correct classification of the different incoming olive batches is crucial to reach the maximum quality of the oil. The aim of this work is to provide an automatic inspection system, based on computer vision, and to classify automatically different batches of olives entering the milling process. The classification is based on the differentiation between ground and tree olives. For this purpose, three different species have been studied (Picudo, Picual and Hojiblanco). The samples have been obtained by picking the olives directly from the tree or from the ground. The feature vector of the samples has been obtained on the basis of the olive image histograms. Moreover, different image preprocessing has been employed, and two classification techniques have been used: these are discriminant analysis and neural networks. The proposed methodology has been validated successfully, obtaining good classification results. PMID:26147729
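
    The histogram-feature pipeline described above can be sketched schematically, using a nearest-centroid rule as a simple stand-in for the paper's discriminant analysis. The synthetic brightness levels below are invented for illustration only; they merely encode the idea that the two olive classes differ in image statistics.

```python
import numpy as np

def histogram_features(img, bins=16):
    """Normalized grey-level histogram used as the feature vector."""
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    return h / h.sum()

def nearest_centroid(x, centroids):
    """Minimum-distance classifier: pick the class whose mean feature
    vector is closest (a toy stand-in for discriminant analysis)."""
    d = [np.linalg.norm(x - c) for c in centroids]
    return int(np.argmin(d))

rng = np.random.default_rng(2)
# Synthetic training images: "tree" olives brighter than "ground" olives.
tree = [rng.normal(180, 20, (32, 32)).clip(0, 255) for _ in range(20)]
ground = [rng.normal(90, 20, (32, 32)).clip(0, 255) for _ in range(20)]

centroids = [np.mean([histogram_features(i) for i in tree], axis=0),    # class 0
             np.mean([histogram_features(i) for i in ground], axis=0)]  # class 1

# Classify two unseen samples, one from each population.
label_tree = nearest_centroid(
    histogram_features(rng.normal(175, 20, (32, 32)).clip(0, 255)), centroids)
label_ground = nearest_centroid(
    histogram_features(rng.normal(95, 20, (32, 32)).clip(0, 255)), centroids)
```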

  18. Sorting Olive Batches for the Milling Process Using Image Processing

    PubMed Central

    Puerto, Daniel Aguilera; Martínez Gila, Diego Manuel; Gámez García, Javier; Gómez Ortega, Juan

    2015-01-01

    The quality of virgin olive oil obtained in the milling process is directly bound to the characteristics of the olives. Hence, the correct classification of the different incoming olive batches is crucial to reach the maximum quality of the oil. The aim of this work is to provide an automatic inspection system, based on computer vision, and to classify automatically different batches of olives entering the milling process. The classification is based on the differentiation between ground and tree olives. For this purpose, three different species have been studied (Picudo, Picual and Hojiblanco). The samples have been obtained by picking the olives directly from the tree or from the ground. The feature vector of the samples has been obtained on the basis of the olive image histograms. Moreover, different image preprocessing has been employed, and two classification techniques have been used: these are discriminant analysis and neural networks. The proposed methodology has been validated successfully, obtaining good classification results. PMID:26147729

  19. Digital image processing of vascular angiograms

    NASA Technical Reports Server (NTRS)

    Selzer, R. H.; Blankenhorn, D. H.; Beckenbach, E. S.; Crawford, D. W.; Brooks, S. H.

    1975-01-01

    A computer image processing technique was developed to estimate the degree of atherosclerosis in the human femoral artery. With an angiographic film of the vessel as input, the computer was programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements were combined into an atherosclerosis index, which was found to correlate well with both visual and chemical estimates of atherosclerotic disease.

  20. IPLIB (Image processing library) user's manual

    NASA Technical Reports Server (NTRS)

    Faulcon, N. D.; Monteith, J. H.; Miller, K.

    1985-01-01

    IPLIB is a collection of HP FORTRAN 77 subroutines and functions that facilitate the use of a COMTAL image processing system driven by an HP-1000 computer. It is intended for programmers who want to use the HP 1000 to drive the COMTAL Vision One/20 system. It is assumed that the programmer knows HP 1000 FORTRAN 77 or at least one FORTRAN dialect. It is also assumed that the programmer has some familiarity with the COMTAL Vision One/20 system.

  1. Novel image processing approach to detect malaria

    NASA Astrophysics Data System (ADS)

    Mas, David; Ferrer, Belen; Cojoc, Dan; Finaurini, Sara; Mico, Vicente; Garcia, Javier; Zalevsky, Zeev

    2015-09-01

    In this paper we present a novel image processing algorithm providing good preliminary capabilities for in vitro detection of malaria. The proposed concept is based upon analysis of the temporal variation of each pixel. Changes in dark pixels indicate that intracellular activity has occurred, suggesting the presence of the malaria parasite inside the cell. Preliminary experimental results, involving analysis of red blood cells that were either healthy or infected with malaria parasites, validated the potential benefit of the proposed numerical approach.
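
    The per-pixel temporal analysis described above might be sketched as follows. The thresholds and synthetic frame stack are invented for illustration and are not the authors' algorithm; the sketch only captures the core idea that dark pixels with high temporal variance flag intracellular activity.

```python
import numpy as np

def active_pixels(stack, dark_level=60, var_thresh=25.0):
    """Flag dark pixels whose intensity fluctuates over time.

    stack: (T, H, W) time series of frames. A pixel whose temporal mean
    is below dark_level AND whose temporal variance exceeds var_thresh
    is taken as a sign of movement inside the cell."""
    stack = np.asarray(stack, dtype=float)
    mean = stack.mean(axis=0)
    var = stack.var(axis=0)
    return (mean < dark_level) & (var > var_thresh)

rng = np.random.default_rng(3)
T, H, W = 50, 16, 16
stack = np.full((T, H, W), 200.0) + rng.normal(0, 1, (T, H, W))   # static background
stack[:, 4:8, 4:8] = 40.0 + rng.normal(0, 10, (T, 4, 4))          # dark AND fluctuating ("infected")
stack[:, 10:14, 10:14] = 40.0 + rng.normal(0, 1, (T, 4, 4))       # dark but static ("healthy")

mask = active_pixels(stack)
```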

  2. Color Image Processing and Object Tracking System

    NASA Technical Reports Server (NTRS)

    Klimek, Robert B.; Wright, Ted W.; Sielken, Robert S.

    1996-01-01

    This report describes a personal computer based system for automatic and semiautomatic tracking of objects on film or video tape, developed to meet the needs of the Microgravity Combustion and Fluids Science Research Programs at the NASA Lewis Research Center. The system consists of individual hardware components working under computer control to achieve a high degree of automation. The most important hardware components include 16-mm and 35-mm film transports, a high resolution digital camera mounted on a x-y-z micro-positioning stage, an S-VHS tapedeck, an Hi8 tapedeck, video laserdisk, and a framegrabber. All of the image input devices are remotely controlled by a computer. Software was developed to integrate the overall operation of the system including device frame incrementation, grabbing of image frames, image processing of the object's neighborhood, locating the position of the object being tracked, and storing the coordinates in a file. This process is performed repeatedly until the last frame is reached. Several different tracking methods are supported. To illustrate the process, two representative applications of the system are described. These applications represent typical uses of the system and include tracking the propagation of a flame front and tracking the movement of a liquid-gas interface with extremely poor visibility.

  3. Optical processing of imaging spectrometer data

    NASA Technical Reports Server (NTRS)

    Liu, Shiaw-Dong; Casasent, David

    1988-01-01

    The data-processing problems associated with imaging spectrometer data are reviewed; new algorithms and optical processing solutions are advanced for this computationally intensive application. Optical decision net, directed graph, and neural net solutions are considered. Decision nets and mineral element determination of nonmixture data are emphasized here. A new Fisher/minimum-variance clustering algorithm is advanced; initialization using minimum-variance clustering is found to be preferable and fast. Tests on a 500-class problem show the excellent performance of this algorithm.
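
    Minimum-variance clustering of the kind used for initialization above is essentially the k-means iteration: assign each point to its nearest centre, then move each centre to the mean of its members, which monotonically lowers the within-cluster variance. A minimal 1-D sketch on synthetic data (not the paper's Fisher/minimum-variance algorithm):

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain minimum-variance (k-means) clustering in 1-D."""
    rng = np.random.default_rng(seed)
    centres = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assignment step: nearest centre for every point.
        labels = np.argmin(np.abs(points[:, None] - centres[None, :]), axis=1)
        # Update step: each centre becomes the mean of its members
        # (kept in place if a cluster happens to be empty).
        centres = np.array([points[labels == j].mean() if np.any(labels == j)
                            else centres[j] for j in range(k)])
    return centres, labels

rng = np.random.default_rng(6)
points = np.concatenate([rng.normal(0.0, 0.3, 100), rng.normal(5.0, 0.3, 100)])
centres, labels = kmeans(points, 2)
```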

  4. Multidirectional curved integral imaging with large depth by additional use of a large-aperture lens.

    PubMed

    Shin, Dong-Hak; Lee, Byoungho; Kim, Eun-Soo

    2006-10-01

    We propose a curved integral imaging system with large depth achieved by the additional use of a large-aperture lens in a conventional large-depth integral imaging system. The additional large-aperture lens provides a multidirectional curvature effect and improves the viewing angle. The proposed system has a simple structure due to the use of well-fabricated, unmodified flat devices. To calculate the proper elemental images for the proposed system, we explain a modified computer-generated pickup technique based on an ABCD matrix and analyze an effective viewing zone in the proposed system. From experiments, we show that the proposed system has an improved viewing angle of more than 7 degrees compared with conventional integral imaging. PMID:16983427

  5. PRONET services for distance learning in mammographic image processing.

    PubMed

    Costaridou, L; Panayiotakis, G; Efstratiou, C; Sakellaropoulos, P; Cavouras, D; Kalogeropoulou, C; Varaki, K; Giannakou, L; Dimopoulos, J

    1997-01-01

    The potential of telematics services is investigated with respect to learning needs of medical physicists and biomedical engineers. Telematics services are integrated into a system, the PRONET, which evolves around multimedia computer based courses and distance tutoring support. In addition, information database access and special interest group support are offered. System architecture is based on a component integration approach. The services are delivered in three modes: LAN, ISDN and Internet. Mammographic image processing is selected as an example content area. PMID:10179585

  6. Automated synthesis of image processing procedures using AI planning techniques

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Mortensen, Helen

    1994-01-01

    This paper describes the Multimission VICAR (Video Image Communication and Retrieval) Planner (MVP) (Chien 1994) system, which uses artificial intelligence planning techniques (Iwasaki & Friedland, 1985, Pemberthy & Weld, 1992, Stefik, 1981) to automatically construct executable complex image processing procedures (using models of the smaller constituent image processing subprograms) in response to image processing requests made to the JPL Multimission Image Processing Laboratory (MIPL). The MVP system allows the user to specify the image processing requirements in terms of the various types of correction required. Given this information, MVP derives unspecified required processing steps and determines appropriate image processing programs and parameters to achieve the specified image processing goals. This information is output as an executable image processing program which can then be executed to fill the processing request.

  7. High-speed imaging and image processing in voice disorders

    NASA Astrophysics Data System (ADS)

    Tigges, Monika; Wittenberg, Thomas; Rosanowski, Frank; Eysholdt, Ulrich

    1996-12-01

    A digital high-speed camera system for the endoscopic examination of the larynx delivers recording speeds of up to 10,000 frames/s. Recordings of up to 1 s duration can be stored and used for further evaluation. Maximum resolution is 128 × 128 pixels. The acoustic and electroglottographic signals are recorded simultaneously. An image processing program especially developed for this purpose renders time-displacement waveforms (high-speed glottograms) at several locations on the vocal cords. From these graphs, all of the known objective voice parameters can be derived. Results of examinations of normal subjects and patients are presented.

  8. Detecting jaundice by using digital image processing

    NASA Astrophysics Data System (ADS)

    Castro-Ramos, J.; Toxqui-Quitl, C.; Villa Manriquez, F.; Orozco-Guillen, E.; Padilla-Vivanco, A.; Sánchez-Escobar, JJ.

    2014-03-01

    When strong jaundice is present, newborns or adults must undergo clinical tests such as the "serum bilirubin" exam, which can be traumatic for patients. Jaundice often occurs in liver diseases such as hepatitis or liver cancer. In order to avoid additional trauma, we propose detecting jaundice (icterus) in newborns or adults with a painless method. By acquiring digital color images of the palms, soles and forehead, we analyze RGB attributes and diffuse reflectance spectra as parameters to characterize patients with or without jaundice, and we correlate those parameters with the bilirubin level. By applying a support vector machine, we distinguish between healthy and sick patients.
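
    A toy version of the RGB analysis step: the "yellowness" index and threshold below are invented for illustration (the paper's actual features include diffuse reflectance spectra, and its classifier is an SVM rather than a fixed threshold), but they show how a colour cue can separate the two groups.

```python
import numpy as np

def yellowness(rgb_img):
    """Crude jaundice cue: mean of (R+G)/2 - B, which grows as the
    skin turns yellow. Illustrative only, not the paper's metric."""
    r, g, b = rgb_img[..., 0], rgb_img[..., 1], rgb_img[..., 2]
    return float(((r.astype(float) + g) / 2.0 - b).mean())

def classify(rgb_img, thresh=70.0):
    """Threshold stand-in for the paper's SVM decision."""
    return "jaundice" if yellowness(rgb_img) > thresh else "healthy"

# Synthetic uniform skin patches (hypothetical RGB values).
healthy = np.zeros((8, 8, 3)); healthy[:] = [180, 140, 120]   # yellowness = 40
icteric = np.zeros((8, 8, 3)); icteric[:] = [200, 170, 90]    # yellowness = 95
```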

  9. Vector processing enhancements for real-time image analysis.

    SciTech Connect

    Shoaf, S.; APS Engineering Support Division

    2008-01-01

    A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.

  10. Thermal Imaging System For Material Processing

    NASA Astrophysics Data System (ADS)

    Auric, Daniel; Hanonge, Eric; Kerrand, Emmanuel; de Miscault, Jean-Claude; Cornillault, Jean

    1987-09-01

    In the field of lasers for welding and surface processing, we need to measure the map of temperatures in order to control the processing in real time by adjusting the laser power, the beam pointing and focussing, and the workpiece moving speed. For that purpose, we studied, realized and evaluated a model of thermal imaging system at 2 wavelengths in the mid-infrared. The device is connected to a 3-axis table and to a 3 kW CO2 laser. The range of measured temperatures is 800°C to 1500°C. The device includes two AGEMA infrared cameras fixed to the welding torch, each operating with a choice of filters in the 3, 4 and 5 micrometre band. The field of view of each is about 14 mm by 38 mm. The cameras are connected to an M68000-family microcomputer into which the images enter at the rate of 6.25 Hz with 64 x 128 pixels per image at both wavelengths. The microcomputer stores the pictures into memory and floppy disk, displays them in false colours and calculates for each pixel the surface temperature of the material with the grey body assumption. The results have been compared with metallurgic analysis of the samples. The precision is about 20°C in most cases and depends on the sample surface state. Simplifications of the laboratory device should lead to a cheap, convenient and reliable product.
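
    Under the grey-body assumption, the (unknown but equal) emissivity cancels in the ratio of the radiances measured at the two wavelengths, and in the Wien approximation the temperature can be recovered in closed form. A numerical sketch; the band wavelengths are plausible mid-infrared values, not the device's exact filters:

```python
import numpy as np

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_radiance(lam, T):
    """Wien-approximation spectral radiance, arbitrary overall scale:
    L ∝ lam^-5 * exp(-C2 / (lam * T)). Emissivity omitted because it
    cancels in the two-band ratio for a grey body."""
    return lam**-5 * np.exp(-C2 / (lam * T))

def two_color_temperature(ratio, lam1, lam2):
    """Invert the grey-body ratio L(lam1)/L(lam2) for temperature:
    T = C2*(1/lam1 - 1/lam2) / (5*ln(lam2/lam1) - ln(ratio))."""
    return C2 * (1 / lam1 - 1 / lam2) / (5 * np.log(lam2 / lam1) - np.log(ratio))

lam1, lam2 = 3.9e-6, 4.7e-6      # two mid-infrared bands, metres (assumed)
T_true = 1400.0                  # kelvin, roughly the weld-pool range
ratio = wien_radiance(lam1, T_true) / wien_radiance(lam2, T_true)
T_est = two_color_temperature(ratio, lam1, lam2)
```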

  11. Cancer diagnostics using neural network sorting of processed images

    NASA Astrophysics Data System (ADS)

    Wyman, Charles L.; Schreeder, Marshall; Grundy, Walt; Kinser, Jason M.

    1996-03-01

    A combination of image processing with neural network sorting was conducted to demonstrate the feasibility of automated cervical smear screening. Nuclei were isolated to generate a series of data points relating to the density and size of individual nuclei. This was followed by segmentation to isolate entire cells for subsequent generation of data points to bound the size of the cytoplasm. Data points were taken on as many as ten cells per image frame and included correlation against a series of filters providing size and density readings on nuclei. Additional point data was taken on nuclei images to refine size information and on whole cells to bound the size of the cytoplasm; in all, twenty data points per assessed cell were generated. These data point sets, designated as neural tensors, comprise the inputs for training and use of a unique neural network to sort the images and identify those indicating evidence of disease. The neural network, named the Fast Analog Associative Memory, accumulates data and establishes lookup tables for comparison against images to be assessed. Six networks were trained to differentiate normal cells from those evidencing various levels of abnormality that may lead to cancer. A blind test was conducted on 77 images to evaluate system performance. The image set included 31 positives (diseased) and 46 negatives (normal). Our system correctly identified all 31 positives and 41 of the negatives, with 5 false positives. We believe this technology can lead to more efficient automated screening of cervical smears.
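
    The principle of training a classifier on per-cell feature vectors can be shown with a minimal linear classifier. This perceptron is a toy stand-in for the paper's Fast Analog Associative Memory, and the two features and class means are synthetic inventions for illustration.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Classic perceptron: learn w, b so that sign(x @ w + b) = y,
    updating only on misclassified samples."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:   # misclassified -> update
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Two synthetic features per cell: nuclear size and nuclear density.
rng = np.random.default_rng(7)
normal = rng.normal([1.0, 1.0], 0.1, (50, 2))     # class -1
abnormal = rng.normal([2.0, 1.8], 0.1, (50, 2))   # class +1
X = np.vstack([normal, abnormal])
y = np.array([-1] * 50 + [1] * 50)

w, b = train_perceptron(X, y)
accuracy = (np.sign(X @ w + b) == y).mean()
```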

  12. Improvement of the detection rate in digital watermarked images against image degradation caused by image processing

    NASA Astrophysics Data System (ADS)

    Nishio, Masato; Ando, Yutaka; Tsukamoto, Nobuhiro; Kawashima, Hironao; Nakamura, Shinya

    2004-04-01

    In the current environment of medical information disclosure, general-purpose image formats such as JPEG/BMP, which require no special viewing software, are suitable for carrying and managing medical image information individually. These formats, however, provide no way to carry patient and study information. We have therefore developed two kinds of ID embedding methods: a bit-swapping method for embedding an alteration-detection ID, and a data-imposing method in the Fourier domain using the Discrete Cosine Transform (DCT) for embedding an original-image-source ID. We then applied these two digital watermark methods to four modality images (chest X-ray, head CT, abdomen CT, bone scintigraphy). However, there were cases where the watermarked ID could not be detected correctly due to image degradation caused by image processing. In this study, we improved the detection rate in digitally watermarked images using several techniques, namely an error-correction method, a majority-correction method, and a scramble-location method. We applied these techniques to watermarked images subjected to image processing (smoothing) and evaluated their effectiveness. As a result, the majority-correction method proved effective in improving the detection rate under image degradation.
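
    The majority-correction idea can be sketched with a redundant least-significant-bit watermark: store each ID bit in several pixels and recover it by majority vote, so the bit survives a few pixels being corrupted by later processing. This is an illustrative stand-in, not the paper's bit-swapping or DCT method.

```python
import numpy as np

def embed_bit(img, bit, repeat=9):
    """Embed one watermark bit into the LSBs of the first `repeat`
    pixels (hypothetical carrier positions, for illustration)."""
    out = img.copy()
    flat = out.ravel()
    flat[:repeat] = (flat[:repeat] & 0xFE) | bit   # clear LSB, then set it
    return out

def extract_bit(img, repeat=9):
    """Majority vote over the redundant copies of the bit."""
    lsbs = img.ravel()[:repeat] & 1
    return int(lsbs.sum() * 2 > repeat)

rng = np.random.default_rng(4)
img = rng.integers(0, 256, (16, 16), dtype=np.uint8)
marked = embed_bit(img, 1)

# Simulate mild degradation: flip two of the nine carrier pixels' LSBs.
damaged = marked.copy()
d = damaged.ravel()
d[[0, 3]] ^= 1
```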

  13. Liquid crystal thermography and true-colour digital image processing

    NASA Astrophysics Data System (ADS)

    Stasiek, J.; Stasiek, A.; Jewartowski, M.; Collins, M. W.

    2006-06-01

    In the last decade, thermochromic liquid crystals (TLC) and true-colour digital image processing have been successfully used in non-intrusive technical, industrial and biomedical studies and applications. Thin coatings of TLCs at surfaces are utilized to obtain detailed temperature distributions and heat transfer rates for steady or transient processes. Liquid crystals can also be used to make visible the temperature and velocity fields in liquids by the simple expedient of directly mixing the liquid crystal material into the liquid (water, glycerol, glycol, and silicone oils) in very small quantities, to serve as thermal and hydrodynamic tracers. In biomedical situations, e.g. skin diseases, breast cancer, blood circulation and other medical applications, TLC and image processing are successfully used as an additional non-invasive diagnostic method, especially useful for screening large groups of potential patients. The history of this technique is reviewed, principal methods and tools are described, and some examples are presented.

  14. Imaging spectrometer for process industry applications

    NASA Astrophysics Data System (ADS)

    Herrala, Esko; Okkonen, Jukka T.; Hyvarinen, Timo S.; Aikio, Mauri; Lammasniemi, Jorma

    1994-11-01

    This paper presents an imaging spectrometer principle based on a novel prism-grating-prism (PGP) element as the dispersive component and advanced camera solutions for on-line applications. The PGP element uses a volume type holographic plane transmission grating made of dichromated gelatin (DCG). Currently, spectrographs have been realized for the 400 - 1050 nm region but the applicable spectral region of the PGP is 380 - 1800 nm. Spectral resolution is typically between 1.5 and 5 nm. The on-axis optical configuration and simple rugged tubular optomechanical construction of the spectrograph provide a good image quality and resistance to harsh environmental conditions. Spectrograph optics are designed to be interfaced to any standard CCD camera. Special camera structures and operating modes can be used for applications requiring on-line data interpretation and process control.

  15. Processing Neutron Imaging Data - Quo Vadis?

    NASA Astrophysics Data System (ADS)

    Kaestner, A. P.; Schulz, M.

    Once an experiment has ended at a neutron imaging instrument, users often ask themselves how to proceed with the collected data. Large amounts of data have been obtained, but first-time users often have no plan or experience for evaluating the information they contain. The users then depend on support from the local contact, who unfortunately does not have the time to perform in-depth studies for every user. By instructing users and providing evaluation tools, either on-site or as free software, this situation can be improved. With the continuous development of new instrument features requiring increasingly complex analysis methods, the development of tools that bring these new features to the user community lags behind. We propose to start a common platform for open-source development of analysis tools dedicated to processing neutron imaging data.

  16. Development of the SOFIA Image Processing Tool

    NASA Technical Reports Server (NTRS)

    Adams, Alexander N.

    2011-01-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) is a Boeing 747SP carrying a 2.5 meter infrared telescope capable of operating at altitudes between twelve and fourteen kilometers, which is above more than 99 percent of the water vapor in the atmosphere. The ability to make observations above most of the water vapor, coupled with the ability to make observations from anywhere at any time, makes SOFIA one of the world's premier infrared observatories. SOFIA uses three visible-light CCD imagers to assist in pointing the telescope. The data from these imagers are stored in archive files, as is housekeeping data containing information such as boresight and area-of-interest locations. A tool was developed that can both extract and process data from the archive files.

  17. HYMOSS signal processing for pushbroom spectral imaging

    NASA Technical Reports Server (NTRS)

    Ludwig, David E.

    1991-01-01

    The objective of the Pushbroom Spectral Imaging Program was to develop on-focal plane electronics which compensate for detector array non-uniformities. The approach taken was to implement a simple two point calibration algorithm on focal plane which allows for offset and linear gain correction. The key on focal plane features which made this technique feasible was the use of a high quality transimpedance amplifier (TIA) and an analog-to-digital converter for each detector channel. Gain compensation is accomplished by varying the feedback capacitance of the integrate and dump TIA. Offset correction is performed by storing offsets in a special on focal plane offset register and digitally subtracting the offsets from the readout data during the multiplexing operation. A custom integrated circuit was designed, fabricated, and tested on this program which proved that nonuniformity compensated, analog-to-digital converting circuits may be used to read out infrared detectors. Irvine Sensors Corporation (ISC) successfully demonstrated the following innovative on-focal-plane functions that allow for correction of detector non-uniformities. Most of the circuit functions demonstrated on this program are finding their way onto future IC's because of their impact on reduced downstream processing, increased focal plane performance, simplified focal plane control, reduced number of dewar connections, as well as the noise immunity of a digital interface dewar. The potential commercial applications for this integrated circuit are primarily in imaging systems. These imaging systems may be used for: security monitoring systems, manufacturing process monitoring, robotics, and for spectral imaging when used in analytical instrumentation.
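
    The two-point calibration algorithm described above amounts to a per-channel offset and gain correction: two reference frames at known uniform illumination levels determine each detector's offset and responsivity, and every subsequent readout is mapped so those references land on known values. A numerical sketch with synthetic detector parameters (not the HYMOSS hardware values):

```python
import numpy as np

def two_point_calibration(raw, dark, bright, target_low=0.0, target_high=1.0):
    """Per-pixel offset and linear gain correction.

    dark, bright: frames recorded at two known uniform flux levels.
    Each pixel is mapped so those two references land exactly on
    target_low and target_high, removing fixed-pattern non-uniformity."""
    gain = (target_high - target_low) / (bright - dark)
    return (raw - dark) * gain + target_low

rng = np.random.default_rng(5)
true_scene = np.full((8, 8), 0.5)            # uniform input flux
offset = rng.normal(10, 2, (8, 8))           # per-pixel offset non-uniformity
resp = rng.normal(50, 5, (8, 8))             # per-pixel responsivity (gain)
raw = offset + resp * true_scene             # non-uniform raw readout

dark = offset + resp * 0.0                   # reference frame at flux 0
bright = offset + resp * 1.0                 # reference frame at flux 1
corrected = two_point_calibration(raw, dark, bright)   # flat field at 0.5
```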

  18. HYMOSS signal processing for pushbroom spectral imaging

    NASA Astrophysics Data System (ADS)

    Ludwig, David E.

    1991-06-01

    The objective of the Pushbroom Spectral Imaging Program was to develop on-focal-plane electronics that compensate for detector array non-uniformities. The approach taken was to implement a simple two-point calibration algorithm on the focal plane that allows for offset and linear gain correction. The key on-focal-plane features that made this technique feasible were the use of a high-quality transimpedance amplifier (TIA) and an analog-to-digital converter for each detector channel. Gain compensation is accomplished by varying the feedback capacitance of the integrate-and-dump TIA. Offset correction is performed by storing offsets in a special on-focal-plane offset register and digitally subtracting the offsets from the readout data during the multiplexing operation. A custom integrated circuit was designed, fabricated, and tested on this program, proving that non-uniformity-compensating, analog-to-digital converting circuits may be used to read out infrared detectors. Irvine Sensors Corporation (ISC) successfully demonstrated these innovative on-focal-plane functions that allow for correction of detector non-uniformities. Most of the circuit functions demonstrated on this program are finding their way onto future ICs because of their impact on reduced downstream processing, increased focal plane performance, simplified focal plane control, and a reduced number of dewar connections, as well as the noise immunity of a digital dewar interface. The potential commercial applications for this integrated circuit are primarily in imaging systems, which may be used for security monitoring, manufacturing process monitoring, robotics, and spectral imaging in analytical instrumentation.

  19. A New Image Processing and GIS Package

    NASA Technical Reports Server (NTRS)

    Rickman, D.; Luvall, J. C.; Cheng, T.

    1998-01-01

    The image processing and GIS package ELAS was developed by NASA during the 1980s. It proved to be a popular, influential, and powerful tool for the manipulation of digital imagery. Before the advent of PCs it was used by hundreds of institutions, mostly schools. It is the unquestioned, direct progenitor of two commercial GIS remote sensing packages, ERDAS and MapX, and influenced others, such as PCI. Its power was demonstrated by its use for work far beyond its original purpose: it has been applied to several types of medical imagery, photomicrographs of rock, images of turtle flippers, and numerous other esoteric imagery. Although development largely stopped in the early 1990s, the package still offers as much or more power and flexibility than any other roughly comparable package, public or commercial. It is a huge body of code, representing more than a decade of work by full-time, professional programmers. The current versions all have several deficiencies compared to current software standards and usage, notably a strictly command-line interface. In order to support their research needs, the authors are fundamentally reworking ELAS, greatly increasing its power, utility, and ease of use. The new software is called ELAS II. This paper discusses the design of ELAS II.

  20. Interaction of image noise, spatial resolution, and low contrast fine detail preservation in digital image processing

    NASA Astrophysics Data System (ADS)

    Artmann, Uwe; Wueller, Dietmar

    2009-01-01

    We present a method to improve the validity of noise and resolution measurements on digital cameras. If non-linear adaptive noise reduction is part of the signal processing in the camera, the measurement results for image noise and spatial resolution can be good while the actual image quality is low, due to the loss of fine details and a watercolor-like appearance of the image. To improve the correlation between objective measurement and subjective image quality, we propose to supplement the standard test methods with an additional measurement of the texture-preserving capabilities of the camera. The proposed method uses a test target showing white Gaussian noise. The camera under test reproduces this target and the image is analyzed. We propose to use the kurtosis of the derivative of the image as a metric for the texture preservation of the camera. Kurtosis is a statistical measure of how closely a distribution matches the Gaussian distribution. It can be shown that the distribution of digital values in the derivative of the image of the chart becomes more leptokurtic (increased kurtosis) the more strongly the noise reduction affects the image.
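
    The proposed kurtosis metric is straightforward to reproduce. A minimal sketch, assuming a white-Gaussian-noise chart and a crude soft-threshold stand-in for the camera's adaptive noise reduction (both illustrative):

```python
import numpy as np
from scipy.stats import kurtosis

# Sketch of the texture-preservation metric: compare the excess kurtosis of
# the image derivative before and after a stand-in "noise reduction" that
# flattens low-amplitude detail, as aggressive adaptive filtering tends to do.
rng = np.random.default_rng(0)
chart = rng.standard_normal((256, 256))   # reproduced white-noise test target

# Crude stand-in for adaptive noise reduction: soft-threshold small values.
denoised = np.sign(chart) * np.maximum(np.abs(chart) - 0.8, 0.0)

def texture_kurtosis(img):
    """Excess (Fisher) kurtosis of the horizontal derivative of an image."""
    return kurtosis(np.diff(img, axis=1), axis=None)

k_raw = texture_kurtosis(chart)      # near 0: derivative stays Gaussian
k_nr = texture_kurtosis(denoised)    # clearly positive: leptokurtic
```

    The stronger the detail suppression, the more mass the derivative distribution concentrates near zero relative to its tails, and the larger the kurtosis, which is exactly the effect the metric is designed to expose.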

  1. ESO-MIDAS: General tools for image processing and data reduction

    NASA Astrophysics Data System (ADS)

    European Southern Observatory

    2013-02-01

    The ESO-MIDAS system provides general tools for image processing and data reduction with emphasis on astronomical applications including imaging and special reduction packages for ESO instrumentation at La Silla and the VLT at Paranal. In addition it contains applications packages for stellar and surface photometry, image sharpening and decomposition, statistics, data fitting, data presentation in graphical form, and more.

  2. Image processing to optimize wave energy converters

    NASA Astrophysics Data System (ADS)

    Bailey, Kyle Marc-Anthony

    The world is turning to renewable energies as a means of ensuring the planet's future and well-being. There have been a few attempts in the past to utilize wave power as a means of generating electricity through the use of Wave Energy Converters (WEC), but only recently have they become a focal point in the renewable energy field. Over the past few years there has been a global drive to advance the efficiency of WEC. Wave power is produced by a mechanical device, placed either onshore or offshore, that captures the energy within ocean surface waves. This paper seeks to provide a novel and innovative way to estimate ocean wave frequency through the use of image processing. This is achieved by applying a complex modulated lapped orthogonal transform filter bank to satellite images of ocean waves. The complex modulated lapped orthogonal transform filter bank provides an equal subband decomposition of the Nyquist-bounded discrete-time Fourier transform spectrum. The maximum energy of the 2D complex modulated lapped transform subband is used to determine the horizontal and vertical frequency, which subsequently can be used to determine the wave frequency in the direction of the WEC by a simple trigonometric scaling. The robustness of the proposed method is demonstrated by applications to simulated and real satellite images where the frequency is known.
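
    The frequency-recovery idea can be illustrated with a plain 2D FFT in place of the paper's complex modulated lapped orthogonal transform; the synthetic wave image, direction angle, and all parameters below are illustrative only:

```python
import numpy as np

# Simplified sketch: locate the dominant peak of the 2D spectrum of a wave
# image to recover horizontal and vertical spatial frequencies, then project
# them onto a chosen direction (the "trigonometric scaling").
n = 256
fx_true, fy_true = 12 / n, 5 / n                 # cycles per pixel
y, x = np.mgrid[0:n, 0:n]
waves = np.cos(2 * np.pi * (fx_true * x + fy_true * y))

spectrum = np.abs(np.fft.fft2(waves))
spectrum[0, 0] = 0.0                             # ignore the DC component
iy, ix = np.unravel_index(np.argmax(spectrum), spectrum.shape)
fx = np.fft.fftfreq(n)[ix]                       # recovered horizontal freq.
fy = np.fft.fftfreq(n)[iy]                       # recovered vertical freq.

# Spatial frequency seen along a direction theta (e.g. toward the WEC).
theta = np.deg2rad(30.0)
f_along = abs(fx * np.cos(theta) + fy * np.sin(theta))
```

    The lapped transform in the paper serves the same role as the FFT peak search here, but with an equal subband decomposition better suited to localized analysis.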

  3. Image processing and analysis using neural networks for optometry area

    NASA Astrophysics Data System (ADS)

    Netto, Antonio V.; Ferreira de Oliveira, Maria C.

    2002-11-01

    In this work we describe the framework of a functional system for processing and analyzing images of the human eye acquired by the Hartmann-Shack (HS) technique, in order to extract information to formulate a diagnosis of eye refractive errors (astigmatism, hypermetropia and myopia). The analysis is to be carried out using an Artificial Intelligence system based on neural nets, fuzzy logic and classifier combination. The major goal is to establish the basis of a new technology to effectively measure ocular refractive errors, based on methods alternative to those adopted in current patented systems. Moreover, analysis of images acquired with the Hartmann-Shack technique may enable the extraction of additional information on the health of the eye under examination from the same image used to detect refraction errors.

  4. Radar image processing for rock-type discrimination

    NASA Technical Reports Server (NTRS)

    Blom, R. G.; Daily, M.

    1982-01-01

    Image processing and enhancement techniques for improving the geologic utility of digital satellite radar images are reviewed. Preprocessing techniques considered include mean and variance correction on a line-by-line basis in range or azimuth to provide uniformly illuminated swaths, median-value filtering of four-look imagery to eliminate speckle, and geometric rectification using a priori elevation data. Examples are presented of the application of preprocessing methods to Seasat and Landsat data; Seasat SAR imagery was coregistered with Landsat imagery to form composite scenes. A polynomial was developed to distort the radar picture to fit the Landsat image over a 90 x 90 km grid, combining Landsat color ratios with Seasat intensities. Subsequent linear discriminant analysis was employed to discriminate rock types from known areas. Adding Seasat data to the Landsat data improved rock identification by 7%.
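
    The speckle-suppression step can be sketched as follows; the scene, the exponential multiplicative speckle model, and the 5x5 window are illustrative choices, not those of the original Seasat processing:

```python
import numpy as np
from scipy.ndimage import median_filter

# Median-value filtering of a synthetic SAR-like scene: the median suppresses
# multiplicative speckle outliers while preserving the radiometric boundary
# better than a mean filter would.
rng = np.random.default_rng(1)
scene = np.full((64, 64), 100.0)
scene[:, 32:] = 200.0                                 # sharp boundary
speckled = scene * rng.exponential(1.0, scene.shape)  # 1-look speckle model

despeckled = median_filter(speckled, size=5)
```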

  5. Interpretation of Medical Imaging Data with a Mobile Application: A Mobile Digital Imaging Processing Environment

    PubMed Central

    Lin, Meng Kuan; Nicolini, Oliver; Waxenegger, Harald; Galloway, Graham J.; Ullmann, Jeremy F. P.; Janke, Andrew L.

    2013-01-01

    Digital Imaging Processing (DIP) requires data extraction and output from a visualization tool to be consistent. Data handling and transmission between the server and a user is a systematic process in service interpretation. The use of integrated medical services for management and viewing of imaging data in combination with a mobile visualization tool can be greatly facilitated by data analysis and interpretation. This paper presents an integrated mobile application and DIP service, called M-DIP. The objective of the system is to (1) automate direct data tiling, conversion, and pre-tiling of brain images from Medical Imaging NetCDF (MINC) and Neuroimaging Informatics Technology Initiative (NIFTI) to RAW formats; (2) speed up querying of imaging measurements; and (3) display images at multiple zoom levels in three-dimensional real-world coordinates. In addition, M-DIP provides the ability to work on a mobile or tablet device without any software installation, using web-based protocols. M-DIP implements a three-level architecture with a relational middle-layer database, a stand-alone DIP server, and mobile application logic realizing user interpretation for direct querying and communication. This imaging software has the ability to display biological imaging data at multiple zoom levels and to increase its quality to meet users' expectations. Interpretation of bioimaging data is facilitated by an interface analogous to online mapping services using real-world coordinate browsing. This allows mobile devices to display multiple datasets simultaneously from a remote site. M-DIP can be used as a measurement repository that can be accessed from any network environment, including portable mobile and tablet devices. In addition, this system in combination with mobile applications establishes a virtualization tool in the neuroinformatics field to speed interpretation services. PMID:23847587

  6. Interpretation of medical imaging data with a mobile application: a mobile digital imaging processing environment.

    PubMed

    Lin, Meng Kuan; Nicolini, Oliver; Waxenegger, Harald; Galloway, Graham J; Ullmann, Jeremy F P; Janke, Andrew L

    2013-01-01

    Digital Imaging Processing (DIP) requires data extraction and output from a visualization tool to be consistent. Data handling and transmission between the server and a user is a systematic process in service interpretation. The use of integrated medical services for management and viewing of imaging data in combination with a mobile visualization tool can be greatly facilitated by data analysis and interpretation. This paper presents an integrated mobile application and DIP service, called M-DIP. The objective of the system is to (1) automate direct data tiling, conversion, and pre-tiling of brain images from Medical Imaging NetCDF (MINC) and Neuroimaging Informatics Technology Initiative (NIFTI) to RAW formats; (2) speed up querying of imaging measurements; and (3) display images at multiple zoom levels in three-dimensional real-world coordinates. In addition, M-DIP provides the ability to work on a mobile or tablet device without any software installation, using web-based protocols. M-DIP implements a three-level architecture with a relational middle-layer database, a stand-alone DIP server, and mobile application logic realizing user interpretation for direct querying and communication. This imaging software has the ability to display biological imaging data at multiple zoom levels and to increase its quality to meet users' expectations. Interpretation of bioimaging data is facilitated by an interface analogous to online mapping services using real-world coordinate browsing. This allows mobile devices to display multiple datasets simultaneously from a remote site. M-DIP can be used as a measurement repository that can be accessed from any network environment, including portable mobile and tablet devices. In addition, this system in combination with mobile applications establishes a virtualization tool in the neuroinformatics field to speed interpretation services. PMID:23847587

  7. Low level image processing techniques using the pipeline image processing engine in the flight telerobotic servicer

    NASA Technical Reports Server (NTRS)

    Nashman, Marilyn; Chaconas, Karen J.

    1988-01-01

    The sensory processing system for the NASA/NBS Standard Reference Model (NASREM) for telerobotic control is described. This control system architecture was adopted by NASA for the Flight Telerobotic Servicer. The control system is hierarchically designed and consists of three parallel systems: task decomposition, world modeling, and sensory processing. The Sensory Processing System is examined, with particular attention to the image processing hardware and software used to extract features at low levels of sensory processing for tasks representative of those envisioned for the Space Station, such as assembly and maintenance.

  8. Multispectral image processing: the nature factor

    NASA Astrophysics Data System (ADS)

    Watkins, Wendell R.

    1998-09-01

    The images processed by our brain represent our window into the world. For some animals this window is derived from a single eye; for others, including humans, two eyes provide stereo imagery; for others like the black widow spider, several eyes (eight) are used; and some insects like the common housefly utilize thousands of eyes (ommatidia). Still other animals like the bat and dolphin have eyes for regular vision but employ acoustic sonar for seeing where their regular eyes don't work, such as in pitch-black caves or turbid water. Of course, other animals have adapted to dark environments by bringing along their own lighting, such as the firefly and several creatures from the depths of the ocean. Animal vision is truly varied and has developed over millennia in many remarkable ways. We have learned a lot about vision processes by studying these animal systems and can still learn even more.

  9. Additional Merits of Two-dimensional Single Thick-slice Magnetic Resonance Myelography in Spinal Imaging

    PubMed Central

    Aggarwal, Abhishek; Azad, Rajiv; Ahmad, Armeen; Arora, Pankaj; Gupta, Puneet

    2012-01-01

    Objective: To validate the additional merits of two-dimensional (2D) single thick-slice Magnetic Resonance Myelography (MRM) in spinal imaging. Materials and Methods: 2D single thick-slice MRM was performed using a T2 half-Fourier acquisition single-shot turbo spin-echo (HASTE) sequence in addition to routine Magnetic Resonance (MR) sequences for the spine in 220 patients. The images were evaluated for additional diagnostic information in spinal and extra-spinal regions. A three-point grading system was adopted depending upon the utility of MRM in contributing to the detection of spinal or extra-spinal findings: grade 1 represented no contribution of MRM, while grade 3 indicated that it was essential to the detection of findings. Results: The utility of MRM in the spine was categorized as grade 3 in 10.9% of cases (24/220), grade 2 in 21.8% of cases (48/220), and grade 1 in 67.3% of cases (148/220). Thus, the overall additional merit of MRM in the spine was seen in 32.7% (72/220) of cases. In addition, extra-spinal pathologies were identified in 14.1% of cases (31/220). Conclusion: 2D single thick-slice MRM can have additional merits in spinal imaging when used as an adjunct to routine MR sequences. PMID:23393640

  10. Platform for distributed image processing and image retrieval

    NASA Astrophysics Data System (ADS)

    Gueld, Mark O.; Thies, Christian J.; Fischer, Benedikt; Keysers, Daniel; Wein, Berthold B.; Lehmann, Thomas M.

    2003-06-01

    We describe a platform for the implementation of a system for content-based image retrieval in medical applications (IRMA). To cope with constantly evolving medical knowledge, the platform offers a flexible feature model to store and uniformly access all feature types required within a multi-step retrieval approach. A structured generation history for each feature allows the automatic identification and re-use of already computed features. The platform uses directed acyclic graphs composed of processing steps and control elements to model arbitrary retrieval algorithms. This visually intuitive, data-flow oriented representation vastly improves the interdisciplinary communication between computer scientists and physicians during the development of new retrieval algorithms. The execution of the graphs is fully automated within the platform. Each processing step is modeled as a feature transformation. Due to a high degree of system transparency, both the implementation and the evaluation of retrieval algorithms are accelerated significantly. The platform uses a client-server architecture consisting of a central database, a central job scheduler, instances of a daemon service, and clients which embed user-implemented feature transformations. Automatically distributed batch processing and distributed feature storage enable the cost-efficient use of an existing workstation cluster.
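
    The directed-acyclic-graph execution model can be sketched with Python's standard topological sorter; the node names and toy transformations below are hypothetical, not the IRMA platform's API:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Sketch of a retrieval algorithm modeled as a DAG of feature
# transformations, executed in dependency order.  The real platform also
# records a generation history per feature and distributes the work.
def load(_):
    return [3, 1, 2]                       # stand-in for reading raw features

def normalize(xs):
    return [x / max(xs) for x in xs]       # a simple feature transformation

def histogram(xs):
    return {x: xs.count(x) for x in xs}    # another transformation

# Each key depends on the set of nodes listed as its value.
graph = {"load": set(), "normalize": {"load"}, "histogram": {"load"}}
steps = {"load": load, "normalize": normalize, "histogram": histogram}

results = {}
for node in TopologicalSorter(graph).static_order():
    deps = graph[node]                     # single-input nodes, for brevity
    inputs = results[next(iter(deps))] if deps else None
    results[node] = steps[node](inputs)
```

    Because every step is a pure transformation keyed by its inputs, already computed features can be cached and re-used, which is the point of the structured generation history described above.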

  11. IMAGEP - A FORTRAN ALGORITHM FOR DIGITAL IMAGE PROCESSING

    NASA Technical Reports Server (NTRS)

    Roth, D. J.

    1994-01-01

    IMAGEP is a FORTRAN computer algorithm containing various image processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines. Within these subroutines are further routines, also selected via the keyboard. Some of the functions performed by IMAGEP include digitization, storage and retrieval of images; image enhancement by contrast expansion, addition and subtraction, magnification, inversion, and bit shifting; display and movement of a cursor; display of the grey level histogram of an image; and display of the variation of grey level intensity as a function of image position. This algorithm has possible scientific, industrial, and biomedical applications in material flaw studies, steel and ore analysis, and pathology, respectively. IMAGEP is written in VAX FORTRAN for DEC VAX series computers running VMS. The program requires the use of a Grinnell 274 image processor which can be obtained from Mark McCloud Associates, Campbell, CA. An object library of the required GMR series software is included on the distribution media. IMAGEP requires 1Mb of RAM for execution. The standard distribution medium for this program is a 1600 BPI 9-track magnetic tape in VAX FILES-11 format. It is also available on a TK50 tape cartridge in VAX FILES-11 format. This program was developed in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation.
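
    One of the enhancement functions listed, contrast expansion, can be sketched in NumPy; the original is VAX FORTRAN driving a Grinnell 274 image processor, so this is only an illustration of the operation itself:

```python
import numpy as np

# Contrast expansion: linearly stretch the occupied grey-level range of an
# image to the full 8-bit range 0..255.
def contrast_expand(img):
    lo, hi = int(img.min()), int(img.max())
    return ((img.astype(np.float64) - lo) * 255.0 / (hi - lo)).astype(np.uint8)

narrow = np.array([[100, 110], [120, 130]], dtype=np.uint8)
stretched = contrast_expand(narrow)   # grey levels now span 0..255
```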

  12. Imaging fault zones using 3D seismic image processing techniques

    NASA Astrophysics Data System (ADS)

    Iacopini, David; Butler, Rob; Purves, Steve

    2013-04-01

    Significant advances in the structural analysis of deep water structures, salt tectonics, and extensional rift basins have come from descriptions of fault system geometries imaged in 3D seismic data. However, even where seismic data are excellent, in most cases the trajectory of thrust faults is highly conjectural, and significant uncertainty still exists as to the patterns of deformation that develop between the main fault segments, and even as to the fault architectures themselves. Moreover, structural interpretations that conventionally define faults by breaks and apparent offsets of seismic reflectors are commonly conditioned by a narrow range of theoretical models of fault behavior. For example, almost all interpretations of thrust geometries on seismic data rely on theoretical "end-member" behaviors in which concepts such as strain localization or multilayer mechanics are simply avoided. Yet analogue outcrop studies confirm that such descriptions are commonly unsatisfactory and incomplete. In order to fill these gaps and improve the 3D visualization of deformation in the subsurface, seismic attribute methods are developed here in conjunction with conventional mapping of reflector amplitudes (Marfurt & Chopra, 2007). These recently developed signal processing techniques, applied especially by the oil industry, use variations in the amplitude and phase of the seismic wavelet. These seismic attributes improve signal interpretation and are calculated and applied to the entire 3D seismic dataset. In this contribution we show 3D seismic examples of fault structures from gravity-driven deep-water thrust structures and extensional basin systems to indicate how 3D seismic image processing methods can not only improve geometrical interpretations of the faults but also begin to map both strain and damage through the amplitude/phase properties of the seismic signal. This is done by quantifying and delineating short-range anomalies in the intensity of reflector amplitudes.

  13. Matrix decomposition graphics processing unit solver for Poisson image editing

    NASA Astrophysics Data System (ADS)

    Lei, Zhao; Wei, Li

    2012-10-01

    In recent years, gradient-domain methods have been widely discussed in the image processing field, including seamless cloning and image stitching. These algorithms are commonly carried out by solving a large sparse linear system: the Poisson equation. However, solving the Poisson equation is a computationally and memory-intensive task, which makes it unsuitable for real-time image editing. A new matrix decomposition graphics processing unit (GPU) solver (MDGS) is proposed to address the problem. A matrix decomposition method is used to distribute the work among GPU threads, so that MDGS takes full advantage of the computing power of current GPUs. Additionally, MDGS is a hybrid solver (combining both direct and iterative techniques) and has a two-level architecture. This enables MDGS to generate solutions identical to those of the common Poisson methods and to achieve a high convergence rate in most cases. The approach is advantageous in terms of parallelizability, real-time image processing capability, low memory consumption, and breadth of application.
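
    The Poisson system at the heart of such editors can be illustrated with plain Jacobi iteration; this sketch only shows the equation being solved and is far slower than the GPU solver described (grid size and iteration count are illustrative):

```python
import numpy as np

# Gradient-domain editing solves Laplacian(u) = div g inside the edited
# region with the target image fixed on the boundary.  For a smooth
# (zero-gradient) pasted patch this reduces to Laplace's equation, whose
# solution for a linear-ramp boundary is the ramp itself.
n = 32
target = np.linspace(0.0, 1.0, n)[None, :].repeat(n, axis=0)  # harmonic ramp

u = target.copy()
u[1:-1, 1:-1] = 0.0            # unknown interior, fixed boundary values

for _ in range(2000):          # Jacobi relaxation toward the solution
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                            + u[1:-1, :-2] + u[1:-1, 2:])
```

    Jacobi's per-pixel independence is what makes the system attractive for GPU parallelization, though practical solvers combine direct and iterative techniques for far faster convergence, as the abstract notes.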

  14. MISR Browse Images: Cold Land Processes Experiment (CLPX)

    Atmospheric Science Data Center

    2013-04-02

    ... MISR Browse Images: Cold Land Processes Experiment (CLPX) These MISR Browse images provide a ... over the region observed during the NASA Cold Land Processes Experiment (CLPX). CLPX involved ground, airborne, and satellite measurements ...

  15. In-process thermal imaging of the electron beam freeform fabrication process

    NASA Astrophysics Data System (ADS)

    Taminger, Karen M.; Domack, Christopher S.; Zalameda, Joseph N.; Taminger, Brian L.; Hafley, Robert A.; Burke, Eric R.

    2016-05-01

    Researchers at NASA Langley Research Center have been developing the Electron Beam Freeform Fabrication (EBF3) metal additive manufacturing process for the past 15 years. In this process, an electron beam is used as a heat source to create a small molten pool on a substrate into which wire is fed. The electron beam and wire feed assembly are translated with respect to the substrate to follow a predetermined tool path. This process is repeated in a layer-wise fashion to fabricate metal structural components. In-process imaging has been integrated into the EBF3 system using a near-infrared (NIR) camera. The images are processed to provide thermal and spatial measurements that have been incorporated into a closed-loop control system to maintain consistent thermal conditions throughout the build. Other information in the thermal images is being used to assess quality in real time by detecting flaws in prior layers of the deposit. NIR camera incorporation into the system has improved the consistency of the deposited material and provides the potential for real-time flaw detection which, ultimately, could lead to the manufacture of better, more reliable components using this additive manufacturing process.
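
    The closed-loop control idea can be sketched with a toy proportional controller; the linear plant model, gain, and setpoint below are hypothetical stand-ins for the real beam-power control law:

```python
# Per-frame proportional control: trim the beam power so the melt-pool
# temperature measured from each NIR image holds a setpoint.
setpoint = 1650.0        # desired melt-pool temperature, deg C (illustrative)
power = 2.0              # electron beam power, kW (illustrative)
gain = 0.002             # proportional gain, kW per deg C of error

def melt_pool_temp(p_kw):
    """Toy plant model: melt-pool temperature responds linearly to power."""
    return 800.0 + 500.0 * p_kw

for _ in range(50):      # one correction per acquired NIR frame
    error = setpoint - melt_pool_temp(power)
    power += gain * error
```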

  16. In-Process Thermal Imaging of the Electron Beam Freeform Fabrication Process

    NASA Technical Reports Server (NTRS)

    Taminger, Karen M.; Domack, Christopher S.; Zalameda, Joseph N.; Taminger, Brian L.; Hafley, Robert A.; Burke, Eric R.

    2016-01-01

    Researchers at NASA Langley Research Center have been developing the Electron Beam Freeform Fabrication (EBF3) metal additive manufacturing process for the past 15 years. In this process, an electron beam is used as a heat source to create a small molten pool on a substrate into which wire is fed. The electron beam and wire feed assembly are translated with respect to the substrate to follow a predetermined tool path. This process is repeated in a layer-wise fashion to fabricate metal structural components. In-process imaging has been integrated into the EBF3 system using a near-infrared (NIR) camera. The images are processed to provide thermal and spatial measurements that have been incorporated into a closed-loop control system to maintain consistent thermal conditions throughout the build. Other information in the thermal images is being used to assess quality in real time by detecting flaws in prior layers of the deposit. NIR camera incorporation into the system has improved the consistency of the deposited material and provides the potential for real-time flaw detection which, ultimately, could lead to the manufacture of better, more reliable components using this additive manufacturing process.

  17. Thermal Imaging for Assessment of Electron-Beam Free Form Fabrication (EBF(sup 3)) Additive Manufacturing Welds

    NASA Technical Reports Server (NTRS)

    Zalameda, Joseph N.; Burke, Eric R.; Hafley, Robert A.; Taminger, Karen M.; Domack, Christopher S.; Brewer, Amy R.; Martin, Richard E.

    2013-01-01

    Additive manufacturing is a rapidly growing field in which three-dimensional parts can be produced layer by layer. NASA's electron beam free-form fabrication (EBF(sup 3)) technology is being evaluated to manufacture metallic parts in a space environment. The benefits of EBF(sup 3) technology are weight savings to support space missions, rapid prototyping in a zero-gravity environment, and improved vehicle readiness. The EBF(sup 3) system is composed of three main components: an electron beam gun, a multi-axis positioning system, and a metallic wire feeder. The electron beam is used to melt the wire, and the multi-axis positioning system is used to build the part layer by layer. To ensure a quality weld, a near-infrared (NIR) camera is used to image the melt pool and solidification areas. This paper describes the calibration and application of a NIR camera for temperature measurement. In addition, image processing techniques are presented for weld assessment metrics.
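
    The counts-to-temperature calibration step might be sketched as a polynomial fit through blackbody reference points; this is one common empirical approach, not necessarily the paper's, and all reference values below are invented for illustration:

```python
import numpy as np

# Fit a cubic in log(counts) through known blackbody reference temperatures,
# then evaluate it to convert in-process camera counts to temperature.
ref_temp = np.array([900.0, 1100.0, 1400.0, 1700.0])      # deg C
ref_counts = np.array([210.0, 900.0, 3500.0, 12400.0])    # camera counts

# Four points and a cubic give an exact interpolating calibration curve.
coeffs = np.polyfit(np.log(ref_counts), ref_temp, 3)

def temp_from_counts(counts):
    return np.polyval(coeffs, np.log(counts))
```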

  18. DKIST visible broadband imager data processing pipeline

    NASA Astrophysics Data System (ADS)

    Beard, Andrew; Cowan, Bruce; Ferayorni, Andrew

    2014-07-01

    The Daniel K. Inouye Solar Telescope (DKIST) Data Handling System (DHS) provides the technical framework and building blocks for developing on-summit instrument quality assurance and data reduction pipelines. The DKIST Visible Broadband Imager (VBI) is a first light instrument that alone will create two data streams with a bandwidth of 960 MB/s each. The high data rate and data volume of the VBI require near-real time processing capability for quality assurance and data reduction, and will be performed on-summit using Graphics Processing Unit (GPU) technology. The VBI data processing pipeline (DPP) is the first designed and developed using the DKIST DHS components, and therefore provides insight into the strengths and weaknesses of the framework. In this paper we lay out the design of the VBI DPP, examine how the underlying DKIST DHS components are utilized, and discuss how integration of the DHS framework with GPUs was accomplished. We present our results of the VBI DPP alpha release implementation of the calibration, frame selection reduction, and quality assurance display processing nodes.

  19. ATM experiment S-056 image processing requirements definition

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A plan is presented for satisfying the image data processing needs of the S-056 Apollo Telescope Mount experiment. The report is based on information gathered from related technical publications, consultation with numerous image processing experts, and experience gained in working on related image processing tasks over a two-year period.

  20. Technical options for processing additional light tight oil volumes within the United States

    EIA Publications

    2015-01-01

    This report examines technical options for processing additional LTO volumes within the United States. Domestic processing of additional LTO would enable an increase in petroleum product exports from the United States, already the world’s largest net exporter of petroleum products. Unlike crude oil, products are not subject to export limitations or licensing requirements. While this is one possible approach to absorbing higher domestic LTO production in the absence of a relaxation of current limitations on crude exports, domestic LTO would have to be priced at a level required to encourage additional LTO runs at existing refinery units, debottlenecking, or possible additions of processing capacity.

  1. Effects of image processing on the detective quantum efficiency

    NASA Astrophysics Data System (ADS)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na

    2010-04-01

    Digital radiography has gained popularity in many areas of clinical practice. This transition brings interest in advancing the methodologies for image quality characterization. However, as the methodologies for such characterizations have not been standardized, the results of these studies cannot be directly compared. The primary objective of this study was to standardize methodologies for image quality characterization. The secondary objective was to evaluate how the image processing algorithm affects the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE). Image performance parameters such as MTF, NPS, and DQE were evaluated using the International Electrotechnical Commission (IEC 62220-1)-defined RQA5 radiographic technique. Computed radiography (CR) images were obtained: a posterior-anterior (PA) hand image for measuring the signal-to-noise ratio (SNR), a slit image for measuring the MTF, and a flat-field (white) image for measuring the NPS; various Multi-Scale Image Contrast Amplification (MUSICA) parameters were then applied to each of the acquired images. The modifications considerably influenced the measured SNR, MTF, NPS, and DQE. Images modified by the post-processing had higher DQE than the MUSICA=0 image. This suggests that MUSICA post-processing parameters affect the image when it is evaluated for image quality. In conclusion, image processing control parameters should be accounted for when characterizing image quality. The results of this study could serve as a baseline for evaluating imaging systems and their imaging characteristics by measuring MTF, NPS, and DQE.
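
    The DQE computation underlying such characterizations follows the standard relation DQE(f) = MTF(f)^2 / (q * NNPS(f)), with q the incident photon fluence and NNPS the normalized noise power spectrum. A worked numeric sketch with illustrative (not measured) values:

```python
import numpy as np

# DQE from MTF and normalized NPS at a few spatial frequencies.
f = np.array([0.5, 1.0, 2.0, 3.0])                 # spatial freq., cycles/mm
mtf = np.array([0.92, 0.78, 0.45, 0.22])           # presampled MTF
nnps = np.array([4.0e-5, 3.1e-5, 2.0e-5, 1.5e-5])  # normalized NPS, mm^2
q = 25000.0                                        # photons/mm^2 (illustrative)

dqe = mtf**2 / (q * nnps)                          # dimensionless, 0..1
```

    Because post-processing such as MUSICA changes the measured MTF and NPS together, DQE computed this way shifts with the processing parameters, which is exactly the effect the study reports.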

  2. INDUSTRIAL PROCESS PROFILES FOR ENVIRONMENTAL USE. CHAPTER 10B. PLASTICS ADDITIVES

    EPA Science Inventory

    The research presents an analysis of the chemicals used as additives in the production and processing of plastics, their environmental release, and occupational exposure. It describes in detail each chemical additive used in the plastics industry. The plastics additives are prese...

  3. Methods for processing and imaging marsh foraminifera

    USGS Publications Warehouse

    Dreher, Chandra A.; Flocks, James G.

    2011-01-01

    This study is part of a larger U.S. Geological Survey (USGS) project to characterize the physical conditions of wetlands in southwestern Louisiana. Within these wetlands, groups of benthic foraminifera (shelled amoeboid protists living near or on the sea floor) can be used as agents to measure land subsidence, relative sea-level rise, and storm impact. In the Mississippi River Delta region, intertidal-marsh foraminiferal assemblages and biofacies were established in studies that pre-date the 1970s, with a very limited number of more recent studies. This fact sheet outlines this project's improved methods, handling, and modified preparations for the use of Scanning Electron Microscope (SEM) imaging of these foraminifera. The objective is to identify marsh foraminifera to the taxonomic species level by using improved processing methods and SEM imaging for morphological characterization in order to evaluate changes in distribution and frequency relative to other environmental variables. The majority of benthic marsh foraminifera consists of agglutinated forms, which can be more delicate than porcelaneous forms. Agglutinated tests (shells) are made of particles such as sand grains or silt and clay material, whereas porcelaneous tests consist of calcite.

  4. A correlative imaging based methodology for accurate quantitative assessment of bone formation in additive manufactured implants.

    PubMed

    Geng, Hua; Todd, Naomi M; Devlin-Mullin, Aine; Poologasundarampillai, Gowsihan; Kim, Taek Bo; Madi, Kamel; Cartmell, Sarah; Mitchell, Christopher A; Jones, Julian R; Lee, Peter D

    2016-06-01

    A correlative imaging methodology was developed to accurately quantify bone formation in the complex lattice structure of additive manufactured implants. Micro computed tomography (μCT) and histomorphometry were combined, integrating the best features from both while demonstrating the limitations of each imaging modality. This semi-automatic methodology registered each modality using a coarse-graining technique to speed the registration of 2D histology sections to high resolution 3D μCT datasets. Once registered, histomorphometric qualitative and quantitative bone descriptors were directly correlated to 3D quantitative bone descriptors, such as bone ingrowth and bone contact. The correlative imaging allowed the significant volumetric shrinkage of histology sections to be quantified for the first time (~15%). This technique demonstrated the importance of the location of the histological section, showing that an offset of up to 30% can be introduced. The results were used to quantitatively demonstrate the effectiveness of 3D printed titanium lattice implants. PMID:27153828

  5. Corn plant locating by image processing

    NASA Astrophysics Data System (ADS)

    Jia, Jiancheng; Krutz, Gary W.; Gibson, Harry W.

    1991-02-01

    The feasibility investigation of using machine vision technology to locate corn plants is an important issue for field production automation in the agricultural industry. This paper presents an approach which was developed to locate the center of a corn plant using image processing techniques. Corn plants were first identified using a main-vein detection algorithm, which detects a local feature of corn leaves (the leaf main veins) based on the spectral difference between veins and leaves; the center of the plant could then be located using a center-locating algorithm that traces and extends each detected vein line and estimates the center of the plant from the intersection points of those lines. The experimental results show the usefulness of the algorithm for machine vision applications related to corn plant identification. Such a technique can be used for precise spraying of pesticides or biotech chemicals.
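The center-locating step — intersecting extended vein lines and averaging the intersection points — can be sketched as follows. The vein coordinates below are hypothetical stand-ins for detected vein segments, not data from the paper:

```python
import numpy as np

def line_intersection(p1, p2, p3, p4):
    """Intersection of the line through p1, p2 with the line through p3, p4."""
    p1, d1 = np.asarray(p1, float), np.subtract(p2, p1).astype(float)
    p3, d2 = np.asarray(p3, float), np.subtract(p4, p3).astype(float)
    # Solve p1 + t*d1 = p3 + s*d2 for t.
    A = np.column_stack([d1, -d2])
    t, _ = np.linalg.solve(A, p3 - p1)
    return p1 + t * d1

def plant_center(vein_lines):
    """Estimate the plant center as the mean of all pairwise
    intersections of the extended vein lines."""
    pts = [line_intersection(*vein_lines[i], *vein_lines[j])
           for i in range(len(vein_lines))
           for j in range(i + 1, len(vein_lines))]
    return np.mean(pts, axis=0)

# Three vein lines (each given by two points) that all pass through (5, 5).
veins = [((0, 0), (1, 1)), ((10, 0), (9, 1)), ((5, 0), (5, 1))]
center = plant_center(veins)
```

Averaging all pairwise intersections makes the estimate robust to a single poorly detected vein; a real system would also reject near-parallel line pairs before solving.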

  6. Intelligent elevator management system using image processing

    NASA Astrophysics Data System (ADS)

    Narayanan, H. Sai; Karunamurthy, Vignesh; Kumar, R. Barath

    2015-03-01

    In the modern era, the increase in the number of shopping malls and industrial buildings has led to an exponential increase in the usage of elevator systems, and thus an increased need for an effective control system to manage them. This paper is aimed at introducing an effective method to control the movement of elevators by considering various cases in which the location of a person is determined and the elevators are controlled based on conditions such as load and proximity. The method continuously monitors the weight limit of each elevator while also making use of image processing to determine the number of persons waiting for an elevator on each floor. The Canny edge detection technique is used to find the number of persons waiting for an elevator. Hence the algorithm takes many cases into account and locates the correct elevator to service the persons waiting on different floors.
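The person-counting step can be sketched with a simplified segmentation: threshold the frame, label connected components, and discard tiny blobs as noise. This is a stand-in for the Canny-based counting the paper describes (the threshold and minimum blob area below are illustrative):

```python
import numpy as np
from scipy import ndimage

def count_people(frame, threshold=0.5, min_area=4):
    """Count bright blobs in a floor-camera frame: threshold, label
    connected components, and discard tiny blobs as noise. A simplified
    stand-in for the Canny-based counting described in the paper."""
    mask = frame > threshold
    labels, n = ndimage.label(mask)
    if n == 0:
        return 0
    sizes = np.bincount(labels.ravel())[1:]   # area of each labeled blob
    return int(np.sum(sizes >= min_area))

# Synthetic frame with three "person" blobs and one speck of noise.
frame = np.zeros((40, 40))
for r, c in [(5, 5), (20, 25), (32, 10)]:
    frame[r:r + 4, c:c + 4] = 1.0
frame[0, 39] = 1.0                            # single-pixel noise, rejected
```

The per-floor counts would then feed the dispatch logic alongside the load sensors.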

  7. Terahertz imaging and tomography as efficient instruments for testing polymer additive manufacturing objects.

    PubMed

    Perraud, J B; Obaton, A F; Bou-Sleiman, J; Recur, B; Balacey, H; Darracq, F; Guillet, J P; Mounaix, P

    2016-05-01

    Additive manufacturing (AM) technology is used not only to make 3D objects but also for rapid prototyping. In industry and laboratories, quality controls for these objects are necessary though difficult to implement compared to classical methods of fabrication, because layer-by-layer printing allows for very complex object manufacturing that is unachievable with standard tools. Furthermore, AM can induce unknown or unexpected defects. Consequently, we demonstrate terahertz (THz) imaging as an innovative method for 2D inspection of polymer materials. Moreover, THz tomography may be considered as an alternative to X-ray tomography and as a cheaper 3D imaging method for routine control. This paper proposes an experimental study of 3D polymer objects obtained by additive manufacturing techniques. This approach allows us to characterize defects and to control dimensions by volumetric measurements on 3D data reconstructed by tomography. PMID:27140357

  8. Clinical Outcome of Magnetic Resonance Imaging-Detected Additional Lesions in Breast Cancer Patients

    PubMed Central

    Ha, Gi-Won; Yi, Mi Suk; Lee, Byoung Kil; Jung, Sung Hoo

    2011-01-01

    Purpose The aim of this study was to investigate the clinical outcome of additional breast lesions identified with breast magnetic resonance imaging (MRI) in breast cancer patients. Methods A total of 153 patients who underwent breast MRI between July 2006 and March 2008 were retrospectively reviewed. Thirty-three patients (21.6%) were recommended for second-look ultrasound (US) for further characterization of additional lesions detected on breast MRI, and these patients constituted our study population. Results Assessment of the lesions detected on breast MRI in the 33 patients was as follows: 25 benign (73.5%), two indeterminate (5.9%), and seven malignant (20.6%). Second-look US identified 12 of the 34 additional lesions (35.3%), and these lesions were confirmed by histological examination. Of the 12 lesions found in the 11 patients, six (50.0%), including one contralateral breast cancer, were malignant. The surgical plan was altered in 18.2% (six of 33) of the patients. The use of breast MRI justified a change in treatment for four patients (66.7%) and caused two patients (33.3%) to undergo unwarranted additional surgical procedures. Conclusion Breast MRI identified additional multifocal or contralateral cancer which was not detected initially on conventional imaging in breast cancer patients. Breast MRI has become an indispensable modality in conjunction with conventional modalities for preoperative evaluation of patients with operable breast cancer. PMID:22031803

  9. Image processing and products for the Magellan mission to Venus

    NASA Technical Reports Server (NTRS)

    Clark, Jerry; Alexander, Doug; Andres, Paul; Lewicki, Scott; Mcauley, Myche

    1992-01-01

    The Magellan mission to Venus is providing planetary scientists with massive amounts of new data about the surface geology of Venus. Digital image processing is an integral part of the ground data system that provides data products to the investigators. The mosaicking of synthetic aperture radar (SAR) image data from the spacecraft is being performed at JPL's Multimission Image Processing Laboratory (MIPL). MIPL hosts and supports the Image Data Processing Subsystem (IDPS), which was developed in a VAXcluster environment of hardware and software that includes optical disk jukeboxes and the TAE-VICAR (Transportable Applications Executive-Video Image Communication and Retrieval) system. The IDPS is being used by processing analysts of the Image Data Processing Team to produce the Magellan image data products. Various aspects of the image processing procedure are discussed.

  10. Determination of detergent and dispersant additives in gasoline by ring-oven and near infrared hyperspectral imaging.

    PubMed

    Rodrigues e Brito, Lívia; da Silva, Michelle P F; Rohwedder, Jarbas J R; Pasquini, Celio; Honorato, Fernanda A; Pimentel, Maria Fernanda

    2015-03-10

    A method using the ring-oven technique for pre-concentration in filter paper discs and near infrared hyperspectral imaging is proposed to identify four detergent and dispersant additives, and to determine their concentration in gasoline. Different approaches were used to select the best image data processing in order to gather the relevant spectral information. This was attained by selecting the pixels of the region of interest (ROI), using a pre-calculated threshold value of the PCA scores arranged as histograms, to select the spectra set; summing up the selected spectra to achieve representativeness; and compensating for the superimposed filter paper spectral information, also supported by scores histograms for each individual sample. The best classification model was achieved using linear discriminant analysis and genetic algorithm (LDA/GA), whose correct classification rate in the external validation set was 92%. Previous classification of the type of additive present in the gasoline is necessary to define the PLS model required for its quantitative determination. Considering that two of the additives studied present high spectral similarity, a PLS regression model was constructed to predict their content in gasoline, while two additional models were used for the remaining additives. The results for the external validation of these regression models showed a mean percentage error of prediction varying from 5 to 15%. PMID:25732308
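The PLS calibration step the abstract describes (predicting additive content from NIR spectra) is standard chemometrics. A minimal PLS1 implementation via the NIPALS algorithm, on synthetic stand-in data rather than the paper's spectra:

```python
import numpy as np

def pls1(X, y, n_comp):
    """PLS1 regression via the NIPALS algorithm -- the kind of model the
    paper uses to predict additive content from NIR spectra. Returns a
    prediction function. (Generic sketch, not the authors' code.)"""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W = np.zeros((X.shape[1], n_comp))
    P = np.zeros_like(W)
    q = np.zeros(n_comp)
    for k in range(n_comp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)                # weight vector
        t = Xc @ w                            # scores
        tt = t @ t
        P[:, k] = Xc.T @ t / tt               # X loadings
        q[k] = (yc @ t) / tt                  # y loading
        Xc = Xc - np.outer(t, P[:, k])        # deflate
        yc = yc - q[k] * t
        W[:, k] = w
    B = W @ np.linalg.solve(P.T @ W, q)       # regression vector
    return lambda Xnew: (Xnew - x_mean) @ B + y_mean

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))                   # 30 "spectra", 5 channels
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0])   # noiseless concentrations
predict = pls1(X, y, n_comp=5)
```

With noiseless linear data and the full number of components, PLS reproduces the least-squares fit exactly; in practice the component count is chosen by cross-validation, as is typical in the kind of external validation the paper reports.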

  11. ISLE (Image and Signal Lisp Environment): A functional language interface for signal and image processing

    SciTech Connect

    Azevedo, S.G.; Fitch, J.P.

    1987-05-01

    Conventional software interfaces which utilize imperative computer commands or menu interactions are often restrictive environments when used for researching new algorithms or analyzing processed experimental data. We found this to be true with current signal processing software (SIG). Existing ''functional language'' interfaces provide features such as command nesting for a more natural interaction with the data. The Image and Signal Lisp Environment (ISLE) will be discussed as an example of an interpreted functional language interface based on Common LISP. Additional benefits include multidimensional and multiple data-type independence through dispatching functions, dynamic loading of new functions, and connections to artificial intelligence software.

  12. Spot restoration for GPR image post-processing

    DOEpatents

    Paglieroni, David W; Beer, N. Reginald

    2014-05-20

    A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
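The final step — identifying peaks in the energy levels of the post-processed image frame — can be sketched as a local-maximum search with a detection threshold. The window size and threshold below are illustrative, not values from the patent:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def detect_peaks(energy, size=3, min_level=0.5):
    """Return coordinates of local maxima of the post-processed energy
    image that exceed a detection threshold -- candidate subsurface
    objects. Window size and threshold are illustrative."""
    local_max = energy == maximum_filter(energy, size=size)
    return np.argwhere(local_max & (energy > min_level))

# Synthetic energy frame with two object responses.
img = np.zeros((16, 16))
img[4, 5] = 1.0
img[11, 9] = 0.8
peaks = detect_peaks(img)
```

A real detector would additionally suppress peaks that persist across frames (surface clutter) and fuse detections across the transceiver array.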

  13. Processing Earth Observing images with Ames Stereo Pipeline

    NASA Astrophysics Data System (ADS)

    Beyer, R. A.; Moratto, Z. M.; Alexandrov, O.; Fong, T.; Shean, D. E.; Smith, B. E.

    2013-12-01

    ICESat with its GLAS instrument provided valuable elevation measurements of glaciers. The loss of this spacecraft created demand for alternative elevation sources. In response, we have improved our Ames Stereo Pipeline (ASP) software (version 2.1+) to ingest imagery from Earth satellite sources in addition to its support of planetary missions. This gives the open source community a free method to generate digital elevation models (DEMs) from Digital Globe stereo imagery, and alternatively from other cameras using RPC camera models. Here we present details of the software. ASP is a collection of utilities written in C++ and Python that implement stereogrammetry. It contains utilities to manipulate DEMs, project imagery, create KML image quad-trees, and perform simplistic 3D rendering. However, its primary application is the creation of DEMs. This is achieved by matching every pixel between the images of a stereo observation via a hierarchical coarse-to-fine template matching method. Matched pixels between images represent a single feature that is triangulated using each image's camera model. The collection of triangulated features represents a point cloud that is then grid resampled to create a DEM. In order for ASP to match pixels/features between images, it requires a search range defined in pixel units. Total processing time is proportional to the area of the first image being matched multiplied by the area of the search range. An incorrect search range causes repeated false-positive matches at each level of the image pyramid and excessive processing times with no valid DEM output. Therefore our system contains automatic methods for deducing the correct search range. In addition, we provide options for reducing the overall search range by applying affine epipolar rectification, a homography transform, or by map-projecting against a prior existing low-resolution DEM.
Depending on the size of the images, parallax, and image
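The core matching operation — finding each pixel's correspondence in the other image within a search range — can be sketched as an exhaustive normalized cross-correlation search for a single template. This is a one-pixel toy version of what ASP does hierarchically over whole images:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_patch(image, patch, row, col, search):
    """Exhaustive NCC search for `patch` in `image` over a +/-`search`
    pixel window around (row, col). Returns the best (dr, dc) offset.
    Cost grows with the search area, which is why ASP works hard to
    shrink the search range."""
    h, w = patch.shape
    best_score, best_off = -2.0, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = row + dr, col + dc
            if 0 <= r <= image.shape[0] - h and 0 <= c <= image.shape[1] - w:
                score = ncc(image[r:r + h, c:c + w], patch)
                if score > best_score:
                    best_score, best_off = score, (dr, dc)
    return best_off

rng = np.random.default_rng(2)
left = rng.normal(size=(20, 20))
right = np.roll(left, shift=(2, -1), axis=(0, 1))   # known disparity
patch = left[5:10, 6:11]
offset = match_patch(right, patch, row=5, col=6, search=3)
```

The quadratic cost in the search window is exactly the "area of the first image multiplied by the area of the search range" scaling described in the abstract.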

  14. Evaluation of OSA process and folic acid addition as excess sludge minimization alternatives applied in the activated sludge process.

    PubMed

    Martins, C L; Velho, V F; Ramos, S R A; Pires, A S C D; Duarte, E C N F A; Costa, R H R

    2016-01-01

    The aim of this study was to investigate the ability of the oxic-settling-anaerobic (OSA)-process and the folic acid addition applied in the activated sludge process to reduce the excess sludge production. The study was monitored during two distinct periods: activated sludge system with OSA-process, and activated sludge system with folic acid addition. The observed sludge yields (Yobs) were 0.30 and 0.08 kgTSS kg(-1) chemical oxygen demand (COD), control phase and OSA-process (period 1); 0.33 and 0.18 kgTSS kg(-1) COD, control phase and folic acid addition (period 2). The Yobs decreased by 73 and 45% in phases with the OSA-process and folic acid addition, respectively, compared with the control phases. The sludge minimization alternatives result in a decrease in excess sludge production, without negatively affecting the performance of the effluent treatment. PMID:26901714
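The reported yield reductions follow directly from the stated Yobs values; a quick check of the arithmetic (values reproduced from the abstract):

```python
def pct_decrease(control, alternative):
    """Percentage reduction of the observed sludge yield (Yobs, in
    kgTSS per kg COD) relative to the control phase."""
    return round(100 * (control - alternative) / control)

osa = pct_decrease(0.30, 0.08)      # OSA-process vs. its control phase
folic = pct_decrease(0.33, 0.18)    # folic acid vs. its control phase
```

Both match the 73% and 45% reductions quoted in the abstract.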

  15. Vision-sensing image analysis for GTAW process control

    SciTech Connect

    Long, D.D.

    1994-11-01

    Image analysis of a gas tungsten arc welding (GTAW) process was completed using video images from a charge coupled device (CCD) camera inside a specially designed coaxial (GTAW) electrode holder. Video data was obtained from filtered and unfiltered images, with and without the GTAW arc present, showing weld joint features and locations. Data Translation image processing boards, installed in an IBM PC AT 386 compatible computer, and Media Cybernetics image processing software were used to investigate edge flange weld joint geometry for image analysis.

  16. Statistical Signal Processing Methods in Scattering and Imaging

    NASA Astrophysics Data System (ADS)

    Zambrano Nunez, Maytee

    This Ph.D. dissertation project addresses two related topics in wave-based signal processing: 1) Cramer-Rao bound (CRB) analysis of scattering systems formed by pointlike scatterers in one-dimensional (1D) and three-dimensional (3D) spaces. 2) Compressive optical coherent imaging, based on the incorporation of sparsity priors in the reconstructions. The first topic addresses for wave scattering systems in 1D and 3D spaces the information content about scattering parameters, in particular, the targets' positions and strengths, and derived quantities, that is contained in scattering data corresponding to reflective, transmissive, and more general sensing modalities. This part of the dissertation derives the Cramer-Rao bound (CRB) for the estimation of parameters of scalar wave scattering systems formed by point scatterers. The results shed light on the fundamental difference between the approximate Born approximation model for weak scatterers and the more general multiple scattering model, and facilitate the identification of regions in parameter space where multiple scattering facilitates or obstructs the estimation of parameters from scattering data, as well as of sensing configurations giving maximal or minimal information about the parameters. The derived results are illustrated with numerical examples, with particular emphasis on the imaging resolution which we quantify via a relative resolution index borrowed from a previous paper. Additionally, this work investigates fundamental limits of estimation performance for the localization of the targets and the inverse scattering problem. The second topic of the effort describes a novel compressive-sensing-based technique for optical imaging with a coherent single-detector system. 
This hybrid opto-micro-electromechanical, coherent single-detector imaging system applies the latest developments in the nascent field of compressive sensing to the problem of computational imaging of wavefield intensity from a small number
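The CRB machinery the dissertation applies can be illustrated on a toy scalar problem: estimating the position of a Gaussian pulse in white Gaussian noise, where the Fisher information is the noise-normalized energy of the signal's sensitivity to the parameter. This is a generic single-parameter sketch, not the dissertation's multi-scatterer derivation:

```python
import numpy as np

def crb_position(amp, noise_sigma, x, width, theta):
    """CRB for the position theta of a Gaussian pulse
    s(x) = amp * exp(-(x - theta)^2 / (2 * width^2)) sampled at points x
    in white Gaussian noise: Fisher information
    I = sum((ds/dtheta)^2) / noise_sigma^2, and CRB = 1 / I."""
    ds = amp * (x - theta) / width ** 2 * np.exp(
        -(x - theta) ** 2 / (2 * width ** 2))
    fisher = (ds ** 2).sum() / noise_sigma ** 2
    return 1.0 / fisher

x = np.linspace(-5.0, 5.0, 101)
bound = crb_position(amp=2.0, noise_sigma=0.1, x=x, width=1.0, theta=0.3)
```

The same recipe — differentiate the forward model with respect to each parameter, assemble the Fisher information matrix, invert — extends to the multi-scatterer case, where the Born and multiple-scattering models give different sensitivities and hence different bounds.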

  17. The application of image processing software: Photoshop in environmental design

    NASA Astrophysics Data System (ADS)

    Dong, Baohua; Zhang, Chunmi; Zhuo, Chen

    2011-02-01

    In the process of environmental design and creation, the design sketch holds a very important position in that it not only illuminates the design's idea and concept but also shows the design's visual effects to the client. In the field of environmental design, computer aided design has made significant improvement. Many types of specialized design software for environmental performance of the drawings and post artistic processing have been implemented. Additionally, with the use of this software, working efficiency has greatly increased and drawings have become more specific and more specialized. By analyzing the application of Photoshop image processing software in environmental design and comparing and contrasting traditional hand drawing with drawing using modern technology, this essay will further explore the way for computer technology to play a bigger role in environmental design.

  18. The simple fly larval visual system can process complex images.

    PubMed

    Justice, Elizabeth Daubert; Macedonia, Nicholas James; Hamilton, Catherine; Condron, Barry

    2012-01-01

    Animals that have simple eyes are thought to only detect crude visual detail such as light level. However, predatory insect larvae using a small number of visual inputs seem to distinguish complex image targets. Here we show that Drosophila melanogaster larvae, which have 12 photoreceptor cells per hemisphere, are attracted to distinct motions of other, tethered larvae and that this recognition requires the visual system but not the olfactory system. In addition, attraction to tethered larvae still occurs across a clear plastic barrier, does not occur significantly in the dark and attraction occurs to a computer screen movie of larval motion. By altering the artificial attractant movie, we conclude that visual recognition involves both spatial and temporal components. Our results demonstrate that a simple but experimentally tractable visual system can distinguish complex images and that processing in the relatively large central brain may compensate for the simple input. PMID:23093193

  19. Tracker: Image-Processing and Object-Tracking System Developed

    NASA Technical Reports Server (NTRS)

    Klimek, Robert B.; Wright, Theodore W.

    1999-01-01

    extracting numerical instrumentation data that are embedded in images. All the results are saved in files for further data reduction and graphing. There are currently three Tracking Systems (workstations) operating near the laboratories and offices of Lewis Microgravity Science Division researchers. These systems are used independently by students, scientists, and university-based principal investigators. The researchers bring their tapes or films to the workstation and perform the tracking analysis. The resultant data files generated by the tracking process can then be analyzed on the spot, although most of the time researchers prefer to transfer them via the network to their offices for further analysis or plotting. In addition, many researchers have installed Tracker on computers in their office for desktop analysis of digital image sequences, which can be digitized by the Tracking System or some other means. Tracker has not only provided a capability to efficiently and automatically analyze large volumes of data, saving many hours of tedious work, but has also provided new capabilities to extract valuable information and phenomena that were heretofore undetected and unexploited.

  20. The effect of silane addition timing on mixing processability and properties of silica reinforced rubber compound

    NASA Astrophysics Data System (ADS)

    Jeong, Hee-Hoon; Jin, Hyun-Ho; Ha, Sung-Ho; Jang, Suk-Hee; Kang, Yong-Gu; Han, Min-Hyun

    2016-03-01

    A series of experiments was performed to determine an optimum balance between processability and performance of a highly loaded silica compound. The experiments evaluated four different silane injection times. All mixing related to silane addition was conducted with a scaled-up "Tandem" mixer line. With the exception of silane addition timing, almost all operating conditions were held constant across the experiments. It was found that when the silane was introduced earlier in the mixing cycle, the reaction was more complete and the bound rubber content was higher, but processability indicators such as sheet forming and Mooney plasticity were negatively impacted. On the other hand, as silane injection was delayed to later in the mixing process, the filler dispersion and sheet forming improved, but both the bound rubber content and the completion of the silane reaction decreased. By changing the silane addition time, the processability and properties of a silica compound can thus be controlled.

  1. Process for improving moisture resistance of epoxy resins by addition of chromium ions

    NASA Technical Reports Server (NTRS)

    St.clair, A. K.; Stoakley, D. M.; St.clair, T. L.; Singh, J. J. (Inventor)

    1985-01-01

    A process for improving the moisture resistance properties of epoxidized TGMDA and DGEBA resin system by chemically incorporating chromium ions is described. The addition of chromium ions is believed to prevent the absorption of water molecules.

  2. Viewpoints on Medical Image Processing: From Science to Application

    PubMed Central

    Deserno (né Lehmann), Thomas M.; Handels, Heinz; Maier-Hein (né Fritzsche), Klaus H.; Mersmann, Sven; Palm, Christoph; Tolxdorff, Thomas; Wagenknecht, Gudrun; Wittenberg, Thomas

    2013-01-01

    Medical image processing provides core innovation for medical imaging. This paper is focused on recent developments from science to applications analyzing the past fifteen years of history of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their personal points of views: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image analysis, (iv) registration of section images, (v) from images to information in digital endoscopy, and (vi) virtual reality and robotics. Medical imaging and medical image computing is seen as field of rapid development with clear trends to integrated applications in diagnostics, treatment planning and treatment. PMID:24078804

  3. Color reproductivity improvement with additional virtual color filters for WRGB image sensor

    NASA Astrophysics Data System (ADS)

    Kawada, Shun; Kuroda, Rihito; Sugawa, Shigetoshi

    2013-02-01

    We have developed a high-accuracy color reproduction method based on an estimated spectral reflectance of objects using additional virtual color filters for a wide dynamic range WRGB color filter CMOS image sensor. The four virtual color filters are created by multiplying the spectral sensitivity of the White pixel by Gaussian functions with different central wavelengths and standard deviations, and the virtual sensor outputs of those filters are estimated from the four real output signals of the WRGB image sensor. The accuracy of color reproduction was evaluated with a Macbeth Color Checker (MCC), and the average color difference ΔEab over the 24 colors was 1.88 with our approach.
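Constructing the virtual filters — the White pixel's spectral sensitivity multiplied by Gaussians of differing center and width — can be sketched as follows. The center wavelengths, widths, and the flat White sensitivity are hypothetical; the abstract does not list the actual parameters:

```python
import numpy as np

def virtual_filters(wavelengths, white_sensitivity, centers, sigmas):
    """Virtual color filters: the White pixel's spectral sensitivity
    multiplied by Gaussian functions with different central wavelengths
    and standard deviations."""
    return [white_sensitivity * np.exp(-(wavelengths - c) ** 2 / (2.0 * s ** 2))
            for c, s in zip(centers, sigmas)]

wl = np.arange(400.0, 701.0, 10.0)            # wavelength axis, nm
white = np.ones_like(wl)                      # flat sensitivity, for the sketch
filters = virtual_filters(wl, white, centers=[450, 520, 580, 640],
                          sigmas=[30, 30, 30, 30])
```

The virtual sensor outputs would then be inner products of these sensitivities with the estimated scene spectra, with the estimate constrained by the four real WRGB signals.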

  4. Interactive image processing for mobile devices

    NASA Astrophysics Data System (ADS)

    Shaw, Rodney

    2009-01-01

    As the number of consumer digital images escalates by tens of billions each year, an increasing proportion of these images are being acquired using the latest generations of sophisticated mobile devices. The characteristics of the cameras embedded in these devices now yield image-quality outcomes that approach those of the parallel generations of conventional digital cameras, and all aspects of the management and optimization of these vast new image-populations become of utmost importance in providing ultimate consumer satisfaction. However this satisfaction is still limited by the fact that a substantial proportion of all images are perceived to have inadequate image quality, and a lesser proportion of these to be completely unacceptable (for sharing, archiving, printing, etc.). In past years at this same conference, the author has described various aspects of a consumer digital-image interface based entirely on an intuitive image-choice-only operation. Demonstrations have been given of this facility in operation, essentially allowing critical-path navigation through approximately a million possible image-quality states within a matter of seconds. This was made possible by the definition of a set of orthogonal image vectors, and defining all excursions in terms of a fixed linear visual-pixel model, independent of the image attribute. During recent months this methodology has been extended to yield specific user-interactive image-quality solutions in the form of custom software, which at less than 100 kb is readily embedded in the latest generations of unlocked portable devices. This has also necessitated the design of new user-interfaces and controls, as well as streamlined and more intuitive versions of the user quality-choice hierarchy. The technical challenges and details will be described for these modified versions of the enhancement methodology, and initial practical experience with typical images will be described.

  5. VIP: Vortex Image Processing pipeline for high-contrast direct imaging of exoplanets

    NASA Astrophysics Data System (ADS)

    Gomez Gonzalez, Carlos Alberto; Wertz, Olivier; Christiaens, Valentin; Absil, Olivier; Mawet, Dimitri

    2016-03-01

    VIP (Vortex Image Processing pipeline) provides pre- and post-processing algorithms for high-contrast direct imaging of exoplanets. Written in Python, VIP provides a very flexible framework for data exploration and image processing and supports high-contrast imaging observational techniques, including angular, reference-star and multi-spectral differential imaging. Several post-processing algorithms for PSF subtraction based on principal component analysis are available as well as the LLSG (Local Low-rank plus Sparse plus Gaussian-noise decomposition) algorithm for angular differential imaging. VIP also implements the negative fake companion technique coupled with MCMC sampling for rigorous estimation of the flux and position of potential companions.
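The PCA-based PSF subtraction at the heart of such pipelines can be sketched in a few lines: flatten the frame cube, remove the mean frame, project onto the leading principal components, and keep the residual. This is a generic sketch of the technique, not VIP's actual API:

```python
import numpy as np

def pca_psf_subtract(cube, n_comp):
    """Subtract a low-rank PSF model from an image cube: flatten the
    frames, remove the mean frame, project onto the first n_comp
    principal components, and return the residual cube."""
    n, h, w = cube.shape
    X = cube.reshape(n, h * w).astype(float)
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    model = (U[:, :n_comp] * s[:n_comp]) @ Vt[:n_comp]   # low-rank PSF model
    return (Xc - model).reshape(n, h, w)

rng = np.random.default_rng(3)
psf = rng.normal(size=(8, 8))
cube = np.stack([a * psf for a in (1.0, 2.0, 3.0, 4.0)])  # pure PSF, varying flux
residual = pca_psf_subtract(cube, n_comp=1)
```

In angular differential imaging the sky rotates between frames while the PSF does not, so a faint companion survives this subtraction and is recovered after derotating and stacking the residuals.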

  6. Post-processing strategies in image scanning microscopy.

    PubMed

    McGregor, J E; Mitchell, C A; Hartell, N A

    2015-10-15

    Image scanning microscopy (ISM) coupled with pixel reassignment offers a resolution improvement of √2 over standard widefield imaging. By scanning point-wise across the specimen and capturing an image of the fluorescent signal generated at each scan position, additional information about specimen structure is recorded and the highest accessible spatial frequency is doubled. Pixel reassignment can be achieved optically in real time or computationally a posteriori and is frequently combined with the use of a physical or digital pinhole to reject out of focus light. Here, we simulate an ISM dataset using a test image and apply standard and non-standard processing methods to address problems typically encountered in computational pixel reassignment and pinholing. We demonstrate that the predicted improvement in resolution is achieved by applying standard pixel reassignment to a simulated dataset and explore the effect of realistic displacements between the reference and true excitation positions. By identifying the position of the detected fluorescence maximum using localisation software and centring the digital pinhole on this co-ordinate before scaling around translated excitation positions, we can recover signal that would otherwise be degraded by the use of a pinhole aligned to an inaccurate excitation reference. This strategy is demonstrated using experimental data from a multiphoton ISM instrument. Finally we investigate the effect that imaging through tissue has on the positions of excitation foci at depth and observe a global scaling with respect to the applied reference grid. Using simulated and experimental data we explore the impact of a globally scaled reference on the ISM image and, by pinholing around the detected maxima, recover the signal across the whole field of view. PMID:25962644
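Computational pixel reassignment can be illustrated in 1D: each detector element's scan signal is shifted by half its offset from the detector center and accumulated, which on a grid sampled at twice the scan rate becomes an integer shift. A toy sketch with synthetic data, not the authors' processing code:

```python
import numpy as np

def reassign_1d(g, det_offsets):
    """1D pixel reassignment sketch: accumulate each detector element's
    scan signal on an output grid sampled at twice the scan rate,
    shifted by the detector offset (a shift of d/2 scan pixels equals
    d pixels on the doubled grid)."""
    n_scan, n_det = g.shape
    pad = max(abs(o) for o in det_offsets)
    out = np.zeros(2 * n_scan + 2 * pad)
    for d, off in enumerate(det_offsets):
        for s_idx in range(n_scan):
            out[2 * s_idx + off + pad] += g[s_idx, d]
    return out

rng = np.random.default_rng(4)
g = rng.random(size=(32, 3))                 # 32 scan positions, 3 detector pixels
ism = reassign_1d(g, det_offsets=[-1, 0, 1])
```

The paper's key point is that `det_offsets` must reflect the *true* excitation positions: mis-registered references (or depth-dependent scaling of the reference grid) degrade the reassigned image unless the pinhole is re-centred on the detected maxima first.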

  7. Bessel filters applied in biomedical image processing

    NASA Astrophysics Data System (ADS)

    Mesa Lopez, Juan Pablo; Castañeda Saldarriaga, Diego Leon

    2014-06-01

    Magnetic resonance imaging is a test that uses magnets and radio waves to create images of the body; however, in some images it is difficult to recognize organs or foreign agents present in the body. The objective of these Bessel filters is to significantly increase the resolution of magnetic resonance images, making them much clearer in order to detect anomalies and diagnose the illness. Bessel functions arise in the solution of the Schrödinger equation for a particle enclosed in a cylinder, and the corresponding filters act on the image by modifying its colors and contours. Therein lies the effectiveness of these filters: with clearer outlines, abnormalities inside the body appear more defined and are easier to recognize.

  8. DTV color and image processing: past, present, and future

    NASA Astrophysics Data System (ADS)

    Kim, Chang-Yeong; Lee, SeongDeok; Park, Du-Sik; Kwak, Youngshin

    2006-01-01

    The image processor in digital TV has started to play an important role due to customers' growing desire for higher image quality. Customers want more vivid and natural images without any visual artifacts. Image processing techniques aim to meet customers' needs in spite of the physical limitations of the panel. In this paper, developments in image processing techniques for DTV, in conjunction with developments in display technologies at Samsung R&D, are reviewed. The introduced algorithms cover techniques required to solve problems caused by the characteristics of the panel itself, as well as techniques for enhancing the image quality of input signals optimized for the panel and human visual characteristics.

  9. Two satellite image sets for the training and validation of image processing systems for defense applications

    NASA Astrophysics Data System (ADS)

    Peterson, Michael R.; Aldridge, Shawn; Herzog, Britny; Moore, Frank

    2010-04-01

    Many image processing algorithms utilize the discrete wavelet transform (DWT) to provide efficient compression and near-perfect reconstruction of image data. Defense applications often require the transmission of data at high levels of compression over noisy channels. In recent years, evolutionary algorithms (EAs) have been utilized to optimize image transform filters that outperform standard wavelets for bandwidth-constrained compression of satellite images. The optimization of these filters requires the use of training images appropriately chosen for the image processing system's intended applications. This paper presents two robust sets of fifty images each, intended for the training and validation of satellite and unmanned aerial vehicle (UAV) reconnaissance image processing algorithms. Each set includes a diverse range of subjects: cities, airports, military bases, and landmarks representative of the types of images that may be captured during reconnaissance missions. Optimized algorithms may be "overtrained" for a specific problem instance and thus exhibit poor performance over a general set of data. To reduce the risk of overtraining an image filter, we evaluate the suitability of each image as a training image. After evolving filters using each image, we assess the average compression performance of each filter across the entire set of images. We thus identify a small subset of images from each set that provide strong performance as training images for the image transform optimization problem. These images will also provide a suitable platform for the development of other algorithms for defense applications. The images are available upon request from the contact author.
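
    The kind of transform being optimized can be illustrated with its simplest member, a one-level 2-D Haar DWT. This NumPy sketch is shown for orientation only; the evolved filters in the paper replace the Haar coefficients with optimized ones:

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar DWT. img must have even dimensions.
    Returns the four subbands (LL, LH, HL, HH)."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0   # row lowpass
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0   # row highpass
    LL = (a[0::2] + a[1::2]) / 2.0
    LH = (a[0::2] - a[1::2]) / 2.0
    HL = (d[0::2] + d[1::2]) / 2.0
    HH = (d[0::2] - d[1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Inverse of haar2d (perfect reconstruction)."""
    ny, nx = LL.shape
    a = np.empty((2 * ny, nx))
    d = np.empty((2 * ny, nx))
    a[0::2], a[1::2] = LL + LH, LL - LH
    d[0::2], d[1::2] = HL + HH, HL - HH
    out = np.empty((2 * ny, 2 * nx))
    out[:, 0::2], out[:, 1::2] = a + d, a - d
    return out
```

    Compression comes from quantizing or discarding small coefficients in the LH, HL, and HH subbands before inverting; evaluating that loss over a candidate training image is the fitness computation an EA would repeat per filter.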

  10. Image processing techniques for digital orthophotoquad production

    USGS Publications Warehouse

    Hood, Joy J.; Ladner, L. J.; Champion, Richard A.

    1989-01-01

    Orthophotographs have long been recognized for their value as supplements or alternatives to standard maps. Recent trends towards digital cartography have resulted in efforts by the US Geological Survey to develop a digital orthophotoquad production system. Digital image files were created by scanning color infrared photographs on a microdensitometer. Rectification techniques were applied to remove tilt and relief displacement, thereby creating digital orthophotos. Image mosaicking software was then used to join the rectified images, producing digital orthophotos in quadrangle format.

  11. An Image Processing Algorithm Based On FMAT

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Pal, Sankar K.

    1995-01-01

    Information deleted in ways minimizing adverse effects on reconstructed images. New grey-scale generalization of medial axis transformation (MAT), called FMAT (short for Fuzzy MAT), proposed. Formulated by making natural extension to fuzzy-set theory of all definitions and conditions (e.g., characteristic function of disk, subset condition of disk, and redundancy checking) used in defining MAT of crisp set. Does not need image to have any kind of a priori segmentation, and allows medial axis (and skeleton) to be fuzzy subset of input image. Resulting FMAT (consisting of maximal fuzzy disks) capable of reconstructing exactly original image.

  12. Cardiovascular Imaging and Image Processing: Theory and Practice - 1975

    NASA Technical Reports Server (NTRS)

    Harrison, Donald C. (Editor); Sandler, Harold (Editor); Miller, Harry A. (Editor); Hood, Manley J. (Editor); Purser, Paul E. (Editor); Schmidt, Gene (Editor)

    1975-01-01

    Ultrasonography was examined in regard to the developmental highlights and present applications of cardiac ultrasound. Doppler ultrasonic techniques and the technology of miniature acoustic element arrays were reported. X-ray angiography was discussed with special considerations on quantitative three-dimensional dynamic imaging of structure and function of the cardiopulmonary and circulatory systems in all regions of the body. Nuclear cardiography and scintigraphy, three-dimensional imaging of the myocardium with isotopes, and the commercialization of the echocardioscope were studied.

  13. Viking image processing. [digital stereo imagery and computer mosaicking

    NASA Technical Reports Server (NTRS)

    Green, W. B.

    1977-01-01

    The paper discusses the camera systems capable of recording black and white and color imagery developed for the Viking Lander imaging experiment. Each Viking Lander image consisted of a matrix of numbers with 512 rows and an arbitrary number of columns up to a maximum of about 9,000. Various techniques were used in the processing of the Viking Lander images, including: (1) digital geometric transformation, (2) the processing of stereo imagery to produce three-dimensional terrain maps, and (3) computer mosaicking of distinct processed images. A series of Viking Lander images is included.

  14. Infrared thermography for laser-based powder bed fusion additive manufacturing processes

    SciTech Connect

    Moylan, Shawn; Whitenton, Eric; Lane, Brandon; Slotwinski, John

    2014-02-18

    Additive manufacturing (AM) has the potential to revolutionize discrete part manufacturing, but improvements in processing of metallic materials are necessary before AM will see widespread adoption. A better understanding of AM processes, resulting from physics-based modeling as well as direct process metrology, will form the basis for these improvements. Infrared (IR) thermography of AM processes can provide direct process metrology, as well as data necessary for the verification of physics-based models. We review selected works examining how IR thermography was implemented and used in various powder-bed AM processes. This previous work, as well as significant experience at the National Institute of Standards and Technology in temperature measurement and IR thermography for machining processes, shapes our own research in AM process metrology with IR thermography. We discuss our experimental design, as well as plans for future IR measurements of a laser-based powder bed fusion AM process.

  15. Infrared thermography for laser-based powder bed fusion additive manufacturing processes

    NASA Astrophysics Data System (ADS)

    Moylan, Shawn; Whitenton, Eric; Lane, Brandon; Slotwinski, John

    2014-02-01

    Additive manufacturing (AM) has the potential to revolutionize discrete part manufacturing, but improvements in processing of metallic materials are necessary before AM will see widespread adoption. A better understanding of AM processes, resulting from physics-based modeling as well as direct process metrology, will form the basis for these improvements. Infrared (IR) thermography of AM processes can provide direct process metrology, as well as data necessary for the verification of physics-based models. We review selected works examining how IR thermography was implemented and used in various powder-bed AM processes. This previous work, as well as significant experience at the National Institute of Standards and Technology in temperature measurement and IR thermography for machining processes, shapes our own research in AM process metrology with IR thermography. We discuss our experimental design, as well as plans for future IR measurements of a laser-based powder bed fusion AM process.

  16. GStreamer as a framework for image processing applications in image fusion

    NASA Astrophysics Data System (ADS)

    Burks, Stephen D.; Doe, Joshua M.

    2011-05-01

    Multiple source band image fusion can sometimes be a multi-step process that consists of several intermediate image processing steps. Typically, each of these steps is required to be in a particular arrangement in order to produce a unique output image. GStreamer is an open source, cross platform multimedia framework, and using this framework, engineers at NVESD have produced a software package that allows for real time manipulation of processing steps for rapid prototyping in image fusion.

  17. On digital image processing technology and application in geometric measure

    NASA Astrophysics Data System (ADS)

    Yuan, Jiugen; Xing, Ruonan; Liao, Na

    2014-04-01

    Digital image processing is an emerging technique that has developed along with semiconductor integrated circuit technology and computer science since the 1960s. The article introduces the digital image processing technique and its principles in measurement, compared with traditional optical measurement methods. Taking geometric measurement as an example, it discusses the development tendency of digital image processing technology from the perspective of technology application.

  18. Image-Processing Software For A Hypercube Computer

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Mazer, Alan S.; Groom, Steven L.; Williams, Winifred I.

    1992-01-01

    Concurrent Image Processing Executive (CIPE) is software system intended to develop and use image-processing application programs on concurrent computing environment. Designed to shield programmer from complexities of concurrent-system architecture, it provides interactive image-processing environment for end user. CIPE utilizes architectural characteristics of particular concurrent system to maximize efficiency while preserving architectural independence from user and programmer. CIPE runs on Mark-IIIfp 8-node hypercube computer and associated SUN-4 host computer.

  19. Optimizing signal and image processing applications using Intel libraries

    NASA Astrophysics Data System (ADS)

    Landré, Jérôme; Truchetet, Frédéric

    2007-01-01

    This paper presents optimized signal and image processing libraries from Intel Corporation. Intel Performance Primitives (IPP) is a low-level signal and image processing library developed by Intel Corporation to optimize code on Intel processors. Open Computer Vision library (OpenCV) is a high-level library dedicated to computer vision tasks. This article describes the use of both libraries to build flexible and efficient signal and image processing applications.

  20. Image processing methods for visual prostheses based on DSP

    NASA Astrophysics Data System (ADS)

    Liu, Huwei; Zhao, Ying; Tian, Yukun; Ren, Qiushi; Chai, Xinyu

    2008-12-01

    Visual prostheses for extreme vision impairment have come closer to reality in recent years. The task of this research has been to design external devices and study image processing algorithms and methods for images of different complexity. We have developed a real-time system, based on a DSP (Digital Signal Processor), capable of image capture and processing to obtain the most available and important image features for recognition and simulation experiments. Beyond developing the hardware system, we introduce algorithms such as resolution reduction, information extraction, dilation and erosion, square (circular) pixelization and Gaussian pixelization. We also classify images into stages according to their complexity: simple images, moderately complex images, and complex images. As a result, this work obtains the signals needed for transmission to the electrode array and the images for simulation experiments.
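
    The resolution-reduction and square-pixelization steps mentioned above can be sketched as block averaging followed by flat-block upsampling. The function name and grid size are illustrative assumptions, not the authors' DSP implementation:

```python
import numpy as np

def square_pixelize(img, n):
    """Reduce img to an n x n grid of square phosphene-like blocks by
    block averaging, then upsample back so each block is rendered as a
    flat square -- a common low-resolution simulation for prosthetic
    vision. img dimensions must be divisible by n. Illustrative sketch.
    """
    ny, nx = img.shape
    by, bx = ny // n, nx // n
    blocks = img[:n * by, :n * bx].reshape(n, by, n, bx).mean(axis=(1, 3))
    return np.kron(blocks, np.ones((by, bx)))
```

    The `blocks` array alone is the kind of signal that would drive an electrode array, while the upsampled output serves the simulation experiments.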

  1. Experiments with recursive estimation in astronomical image processing

    NASA Technical Reports Server (NTRS)

    Busko, I.

    1992-01-01

    Recursive estimation concepts have been applied to image enhancement problems since the 1970s. However, very few applications in the particular area of astronomical image processing are known. These concepts were derived, for 2-dimensional images, from the well-known theory of Kalman filtering in one dimension. The historic reasons for applying these techniques to digital images are related to the images' scanned nature, in which the temporal output of a scanner device can be processed on-line by techniques borrowed directly from 1-dimensional recursive signal analysis. However, recursive estimation has particular properties that make it attractive even today, when large computer memories make the full scanned image available to the processor at any given time. One particularly important aspect is the ability of recursive techniques to deal with non-stationary phenomena, that is, phenomena whose statistical properties vary in time (or position in a 2-D image). Many image processing methods make underlying stationarity assumptions either for the stochastic field being imaged, for the imaging system properties, or both. They will underperform, or even fail, when applied to images that deviate significantly from stationarity. Recursive methods, on the contrary, make it feasible to perform adaptive processing, that is, to process the image with a processor whose properties are tuned to the image's local statistical properties. Recursive estimation can be used to build estimates of images degraded by such phenomena as noise and blur. We show examples of recursive adaptive processing of astronomical images, using several local statistical properties to drive the adaptive processor, such as average signal intensity, signal-to-noise ratio and the autocorrelation function. Software was developed under IRAF, and as such will be made available to interested users.
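
    The 1-D recursion that such methods generalize can be sketched as a scalar Kalman-style filter along a scan line. Making `q` or `r` vary with position is what allows the adaptive, non-stationary processing described above; the parameter names and interface here are illustrative assumptions, not the paper's algorithm:

```python
def recursive_denoise(scan, q, r):
    """Scalar Kalman-style recursive estimator applied along one scan
    line: each sample updates a running state estimate, with gain set by
    the process variance q and measurement-noise variance r.
    Illustrative sketch."""
    x, p = scan[0], r          # initial state and uncertainty
    out = [x]
    for z in scan[1:]:
        p = p + q              # predict: uncertainty grows
        k = p / (p + r)        # Kalman gain
        x = x + k * (z - x)    # update with the new sample
        p = (1 - k) * p
        out.append(x)
    return out
```

    A small `q` relative to `r` trusts the running estimate and smooths noise strongly; a large `q` tracks rapid signal changes, so tuning them locally (e.g. from local signal-to-noise) yields an adaptive processor.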

  2. Sliding mean edge estimation. [in digital image processing

    NASA Technical Reports Server (NTRS)

    Ford, G. E.

    1978-01-01

    A method for determining the locations of the major edges of objects in digital images is presented. The method is based on an algorithm utilizing maximum likelihood concepts. An image line-scan interval is processed to determine if an edge exists within the interval and its location. The proposed algorithm has demonstrated good results even in noisy images.
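
    The maximum-likelihood idea in the abstract can be sketched as follows: model the interval as two constant levels separated by an abrupt step, and choose the split minimizing the residual sum of squares (the ML estimate under Gaussian noise). This is a sketch of the concept, not the paper's exact algorithm:

```python
def ml_edge_location(interval):
    """Locate an edge in one line-scan interval by fitting a two-level
    step model at every candidate split and keeping the split with the
    smallest residual sum of squares. Returns the index of the first
    sample after the edge. Illustrative sketch."""
    n = len(interval)
    best_k, best_sse = 1, float("inf")
    for k in range(1, n):
        left, right = interval[:k], interval[k:]
        ml, mr = sum(left) / k, sum(right) / (n - k)
        sse = sum((v - ml) ** 2 for v in left) + \
              sum((v - mr) ** 2 for v in right)
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k
```

    Comparing `best_sse` against the single-level fit of the whole interval gives the companion decision of whether an edge exists there at all.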

  3. Experiences with digital processing of images at INPE

    NASA Technical Reports Server (NTRS)

    Mascarenhas, N. D. A. (Principal Investigator)

    1984-01-01

    Four different research experiments with digital image processing at INPE will be described: (1) edge detection by hypothesis testing; (2) image interpolation by finite impulse response filters; (3) spatial feature extraction methods in multispectral classification; and (4) translational image registration by sequential tests of hypotheses.

  4. APPLEPIPS /Apple Personal Image Processing System/ - An interactive digital image processing system for the Apple II microcomputer

    NASA Technical Reports Server (NTRS)

    Masuoka, E.; Rose, J.; Quattromani, M.

    1981-01-01

    Recent developments related to microprocessor-based personal computers have made low-cost digital image processing systems a reality. Image analysis systems built around these microcomputers provide color image displays for images as large as 256 by 240 pixels in sixteen colors. Descriptive statistics can be computed for portions of an image, and supervised image classification can be obtained. The systems support Basic, Fortran, Pascal, and assembler language. A description is provided of a system which is representative of the new microprocessor-based image processing systems currently on the market. While small systems may never be truly independent of larger mainframes, because they lack 9-track tape drives, the independent processing power of the microcomputers will help alleviate some of the turn-around time problems associated with image analysis and display on the larger multiuser systems.

  5. Airy-Kaup-Kupershmidt filters applied to digital image processing

    NASA Astrophysics Data System (ADS)

    Hoyos Yepes, Laura Cristina

    2015-09-01

    The Kaup-Kupershmidt operator is applied to the two-dimensional solution of the Airy-diffusion equation, and the resulting filter is applied via convolution to image processing. The full procedure is implemented using Maple code with the package ImageTools. Some experiments were performed using a wide category of images, including biomedical images generated by magnetic resonance, computerized axial tomography, positron emission tomography, infrared and photon diffusion. The Airy-Kaup-Kupershmidt filter can be used as a powerful edge detector and as a powerful enhancement tool in image processing. It is expected that the Airy-Kaup-Kupershmidt filter could be incorporated into standard programs for image processing such as ImageJ.

  6. Using quantum filters to process images of diffuse axonal injury

    NASA Astrophysics Data System (ADS)

    Pineda Osorio, Mateo

    2014-06-01

    Some images corresponding to a diffuse axonal injury (DAI) are processed using several quantum filters such as Hermite, Weibull and Morse. Diffuse axonal injury is a particular, common and severe case of traumatic brain injury (TBI). DAI involves global damage on the microscopic scale of brain tissue and causes serious neurologic abnormalities. New imaging techniques provide excellent images showing cellular damage related to DAI. These images can be processed with quantum filters, which achieve high resolution of dendritic and axonal structures in both normal and pathological states. Using the Laplacian operators from the new quantum filters, excellent edge detectors for neurofiber resolution are obtained. Quantum processing of DAI images is performed using computer algebra, specifically Maple. The construction of quantum filter plugins, which could be incorporated into the ImageJ software package to simplify its use by medical personnel, is proposed as a future research line.

  7. The Development of Sun-Tracking System Using Image Processing

    PubMed Central

    Lee, Cheng-Dar; Huang, Hong-Cheng; Yeh, Hong-Yih

    2013-01-01

    This article presents the development of an image-based sun position sensor and the algorithm for aiming at the Sun precisely by using image processing. Four-quadrant light sensors and bar-shadow photo sensors were used in past years to detect the Sun's position. Nevertheless, neither of them can maintain high accuracy under low irradiation conditions. Using the image-based Sun position sensor with image processing can address this drawback. To verify the performance of the Sun-tracking system, including an image-based Sun position sensor and a tracking controller with an embedded image processing algorithm, we established a Sun image tracking platform and did performance testing in the laboratory; the results show that the proposed Sun tracking system had the capability to overcome the problem of unstable tracking in cloudy weather and achieve a tracking accuracy of 0.04°. PMID:23615582
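
    The core step an image-based Sun position sensor needs is locating the Sun in the sensor frame before converting the pixel offset into a pointing error. A common approach, sketched here as an assumption rather than the authors' algorithm, is thresholding followed by an intensity-weighted centroid:

```python
import numpy as np

def sun_centroid(img, thresh=0.5):
    """Locate the Sun in a sensor image: keep pixels above a fraction of
    the peak intensity, then take the intensity-weighted centroid of the
    bright region. Returns (row, col) in pixel coordinates.
    Illustrative sketch; assumes at least one bright pixel."""
    mask = img >= thresh * img.max()
    w = img * mask
    ys, xs = np.indices(img.shape)
    total = w.sum()
    return (ys * w).sum() / total, (xs * w).sum() / total
```

    The weighted centroid degrades gracefully under the low-irradiation, cloudy conditions the paper targets, since a dim but still-present solar disk keeps a well-defined intensity peak.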

  8. Image process technique used in a large FOV compound eye imaging system

    NASA Astrophysics Data System (ADS)

    Cao, Axiu; Shi, Lifang; Shi, Ruiying; Deng, Qiling; Du, Chunlei

    2012-11-01

    Biological inspiration has produced some successful solutions for different imaging systems. Inspired by the compound eyes of insects, this paper presents image processing techniques used in a spherical compound eye imaging system. By analyzing the relationship between a system with a large field of view (FOV) and each lens, an imaging system based on compound eyes has been designed in which 37 lenses pointing in different directions are arranged on a spherical substrate. An image processing technique is proposed based on the relationship between each lens position and the geometrical shape of the corresponding image, realizing large-FOV detection. To verify the technique, experiments are carried out on the designed compound eye imaging system. The results show that an image with a FOV over 166° can be acquired while maintaining excellent image quality.

  9. Study on the improvement of overall optical image quality via digital image processing

    NASA Astrophysics Data System (ADS)

    Tsai, Cheng-Mu; Fang, Yi Chin; Lin, Yu Chin

    2008-12-01

    This paper studies the improvement of overall optical image quality via Digital Image Processing (DIP) and compares the enhanced optical image with the non-processed optical image. From the standpoint of the optical system, chromatic and monochromatic aberration have a great influence on image quality. However, complete image capture systems, such as cellphones and digital cameras, include not only the basic optical system but also many other components, such as the electronic circuit system and transducer system, whose quality can directly affect the image quality of the whole picture. Therefore, in this work Digital Image Processing technology is utilized to improve the overall image. Experiments show that the system modulation transfer function (MTF) based on the proposed DIP technology, applied to a comparatively poor optical system, can be comparable to, and possibly superior to, the system MTF derived from a good optical system.

  10. Magnetic Resonance Current Density Imaging of Chemical Processes and Reactions

    NASA Astrophysics Data System (ADS)

    Beravs, Katarina; Demšar, Alojz; Demsar, Franci

    1999-03-01

    Electric current density imaging was used to image conductivity changes that occur as a chemical process or reaction progresses. Feasibility was assessed in two models representing the dissolving of an ionic solid and the formation of an insoluble precipitate. In both models, temporal and spatial changes in ionic concentrations were obtained on current density images. As expected, the images showed significant signal enhancement along the ionization/dissociation sites.

  11. A method for predicting DCT-based denoising efficiency for grayscale images corrupted by AWGN and additive spatially correlated noise

    NASA Astrophysics Data System (ADS)

    Rubel, Aleksey S.; Lukin, Vladimir V.; Egiazarian, Karen O.

    2015-03-01

    Results of denoising based on the discrete cosine transform are obtained for a wide class of images corrupted by additive noise. Three types of noise are analyzed: additive white Gaussian noise and additive spatially correlated Gaussian noise with middle and high correlation levels. The TID2013 image database and some additional images are taken as test images. A conventional DCT filter and BM3D are used as denoising techniques. Denoising efficiency is described by the PSNR and PSNR-HVS-M metrics. Within the hard-thresholding denoising mechanism, DCT-spectrum coefficient statistics are used to characterize images and, subsequently, denoising efficiency for them. Denoising-efficiency results are fitted to these statistics and efficient approximations are obtained. It is shown that the obtained approximations provide high accuracy of prediction of denoising efficiency.
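
    The hard-thresholding DCT mechanism referred to above can be sketched on non-overlapping 8x8 blocks. The threshold factor 2.7 is a common choice for AWGN, and a full DCT filter would use overlapping blocks; the interface below is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n."""
    k = np.arange(n)[:, None]
    M = np.cos(np.pi * (2 * np.arange(n)[None, :] + 1) * k / (2 * n))
    M[0] *= np.sqrt(1.0 / n)
    M[1:] *= np.sqrt(2.0 / n)
    return M

def dct_denoise(img, sigma, beta=2.7, block=8):
    """Hard-threshold DCT denoising on non-overlapping blocks: transform
    each block, zero AC coefficients with magnitude below beta*sigma,
    then invert. Illustrative sketch."""
    D = dct_matrix(block)
    out = np.zeros_like(img, dtype=float)
    ny, nx = img.shape
    for y in range(0, ny - block + 1, block):
        for x in range(0, nx - block + 1, block):
            c = D @ img[y:y+block, x:x+block] @ D.T
            dc = c[0, 0]
            c[np.abs(c) < beta * sigma] = 0.0
            c[0, 0] = dc                      # always keep the DC term
            out[y:y+block, x:x+block] = D.T @ c @ D
    return out
```

    The block-spectrum statistics computed inside this loop (e.g. the fraction of coefficients surviving the threshold) are exactly the kind of image descriptors from which denoising efficiency can be predicted.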

  12. IPL processing of the Viking orbiter images of Mars

    NASA Technical Reports Server (NTRS)

    Ruiz, R. M.; Elliott, D. A.; Yagi, G. M.; Pomphrey, R. B.; Power, M. A.; Farrell, W., Jr.; Lorre, J. J.; Benton, W. D.; Dewar, R. E.; Cullen, L. E.

    1977-01-01

    The Viking orbiter cameras returned over 9000 images of Mars during the 6-month nominal mission. Digital image processing was required to produce products suitable for quantitative and qualitative scientific interpretation. Processing included the production of surface elevation data using computer stereophotogrammetric techniques, crater classification based on geomorphological characteristics, and the generation of color products using multiple black-and-white images recorded through spectral filters. The Image Processing Laboratory of the Jet Propulsion Laboratory was responsible for the design, development, and application of the software required to produce these 'second-order' products.

  13. High resolution image processing on low-cost microcomputers

    NASA Technical Reports Server (NTRS)

    Miller, R. L.

    1993-01-01

    Recent advances in microcomputer technology have resulted in systems that rival the speed, storage, and display capabilities of traditionally larger machines. Low-cost microcomputers can provide a powerful environment for image processing. A new software program which offers sophisticated image display and analysis on IBM-based systems is presented. Designed specifically for a microcomputer, this program provides a wide-range of functions normally found only on dedicated graphics systems, and therefore can provide most students, universities and research groups with an affordable computer platform for processing digital images. The processing of AVHRR images within this environment is presented as an example.

  14. Effect of Surface-active Additives on Physical Properties of Slurries of Vapor-process Magnesium

    NASA Technical Reports Server (NTRS)

    Pinns, Murray L

    1955-01-01

    The presence of 3 to 5 percent surface-active additive gave the lowest Brookfield apparent viscosity, plastic viscosity, and yield value that were obtained for slurry fuels containing approximately 50 percent vapor-process magnesium in JP-1 fuel. The slurries settled little and were easily remixed. A polyoxyethylene dodecyl alcohol was the most effective of 13 additives tested in reducing the Brookfield apparent viscosity and the yield value of the slurry. The seven most effective additives all had a hydroxyl group plus an ester or polyoxyethylene group in the molecule. The densities of some of the slurries were measured.

  15. Development of automatic hologram synthesizer for medical use III: image processing for original medical images

    NASA Astrophysics Data System (ADS)

    Yamamoto, Toshifumi; Misaki, Toshikazu; Kato, Tsutomu

    1992-05-01

    An image processing system for providing original images for synthesizing multiplex holograms is developed. This system reconstructs 3D surface rendering images of internal organs and/or bones of a patient from a series of tomograms such as computed tomography. Image processing includes interpolation, enhancement, extraction of diseased parts, selection of axis of projection, and compensation of distortions. This paper presents the features of this system, along with problems and resolutions encountered in actual test operation at hospitals.

  16. Resolution modification and context based image processing for retinal prosthesis

    NASA Astrophysics Data System (ADS)

    Naghdy, Golshah; Beston, Chris; Seo, Jong-Mo; Chung, Hum

    2006-08-01

    This paper focuses on simulating image processing algorithms and exploring issues related to reducing high resolution images to 25 x 25 pixels suitable for the retinal implant. Field of view (FoV) is explored, and a novel method of virtual eye movement discussed. Several issues beyond the normal model of human vision are addressed through context based processing.

  17. Image Processing In Laser-Beam-Steering Subsystem

    NASA Technical Reports Server (NTRS)

    Lesh, James R.; Ansari, Homayoon; Chen, Chien-Chung; Russell, Donald W.

    1996-01-01

    Conceptual design of image-processing circuitry developed for proposed tracking apparatus described in "Beam-Steering Subsystem For Laser Communication" (NPO-19069). In proposed system, desired frame rate achieved by "windowed" readout scheme in which only pixels containing and surrounding two spots read out and others skipped without being read. Image data processed rapidly and efficiently to achieve high frequency response.

  18. Additive controlled synthesis of gold nanorods (GNRs) for two-photon luminescence imaging of cancer cells

    NASA Astrophysics Data System (ADS)

    Zhu, Jing; Yong, Ken-Tye; Roy, Indrajit; Hu, Rui; Ding, Hong; Zhao, Lingling; Swihart, Mark T.; He, Guang S.; Cui, Yiping; Prasad, Paras N.

    2010-07-01

    Gold nanorods (GNRs) with a longitudinal surface plasmon resonance peak that is tunable from 600 to 1100 nm have been fabricated in a cetyltrimethylammonium bromide (CTAB) micellar medium using hydrochloric acid and silver nitrate as additives to control their shape and size. By manipulating the concentrations of silver nitrate and hydrochloric acid, the aspect ratio of the GNRs was reliably and reproducibly tuned from 2.5 to 8. The GNRs were first coated with polyelectrolyte multilayers and then bioconjugated to transferrin (Tf) to target pancreatic cancer cells. Two-photon imaging excited from the bioconjugated GNRs demonstrated receptor-mediated uptake of the bioconjugates into Panc-1 cells, overexpressing the transferrin receptor (TfR). The bioconjugated GNR formulation exhibited very low toxicity, suggesting that it is biocompatible and potentially suitable for targeted two-photon bioimaging.

  19. From Image to Text: Using Images in the Writing Process

    ERIC Educational Resources Information Center

    Andrzejczak, Nancy; Trainin, Guy; Poldberg, Monique

    2005-01-01

    This study looks at the benefits of integrating visual art creation and the writing process. The qualitative inquiry uses student, parent, and teacher interviews coupled with field observation, and artifact analysis. Emergent coding based on grounded theory clearly shows that visual art creation enhances the writing process. Students used more…

  20. DPABI: Data Processing & Analysis for (Resting-State) Brain Imaging.

    PubMed

    Yan, Chao-Gan; Wang, Xin-Di; Zuo, Xi-Nian; Zang, Yu-Feng

    2016-07-01

    Brain imaging efforts are being increasingly devoted to decoding the functioning of the human brain. Among neuroimaging techniques, resting-state fMRI (R-fMRI) is currently expanding exponentially. Beyond the general neuroimaging analysis packages (e.g., SPM, AFNI and FSL), REST and DPARSF were developed to meet the increasing need for user-friendly toolboxes for R-fMRI data processing. To address recently identified methodological challenges of R-fMRI, we introduce the newly developed toolbox, DPABI, which evolved from REST and DPARSF. DPABI incorporates recent research advances on head motion control and measurement standardization, thus allowing users to evaluate results using stringent control strategies. DPABI also emphasizes test-retest reliability and quality control of data processing. Furthermore, DPABI provides a user-friendly pipeline analysis toolkit for rat/monkey R-fMRI data analysis to reflect the rapid advances in animal imaging. In addition, DPABI includes preprocessing modules for task-based fMRI, voxel-based morphometry analysis, statistical analysis and results viewing. DPABI is designed to make data analysis require fewer manual operations, be less time-consuming, have a lower skill requirement, a smaller risk of inadvertent mistakes, and be more comparable across studies. We anticipate this open-source toolbox will assist novices and expert users alike and continue to support advancing R-fMRI methodology and its application to clinical translational studies. PMID:27075850

  1. Advanced technology development for image gathering, coding, and processing

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.

    1990-01-01

    Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.

  2. Review of biomedical signal and image processing

    PubMed Central

    2013-01-01

    This article is a review of the book “Biomedical Signal and Image Processing” by Kayvan Najarian and Robert Splinter, which is published by CRC Press, Taylor & Francis Group. It will evaluate the contents of the book and discuss its suitability as a textbook, while mentioning highlights of the book, and providing comparison with other textbooks.

  3. MOPEX: a software package for astronomical image processing and visualization

    NASA Astrophysics Data System (ADS)

    Makovoz, David; Roby, Trey; Khan, Iffat; Booth, Hartley

    2006-06-01

    We present MOPEX - a software package for astronomical image processing and display. The package is a combination of command-line driven image processing software written in C/C++ with a Java-based GUI. The main image processing capabilities include creating mosaic images, image registration, background matching, and point source extraction, as well as a number of minor image processing tasks. The combination of the image processing and display capabilities allows for a much more intuitive and efficient way of performing image processing. The GUI allows the control over image processing and display to be closely intertwined. Parameter settings, validation, and specific processing options are entered by the user through a set of intuitive dialog boxes. Visualization feeds back into further processing by providing prompt feedback on the processing results. The GUI also allows for further analysis by accessing and displaying data from existing image and catalog servers using a virtual observatory approach. Even though originally designed for the Spitzer Space Telescope mission, many of the functionalities are of general usefulness and can be used for working with existing astronomical data and for new missions. The software used in the package has undergone intensive testing and benefited greatly from effective software reuse. The visualization part has been used for observation planning for both the Spitzer and Herschel Space Telescopes as part of the tool Spot. The visualization capabilities of Spot have been enhanced and integrated with the image processing functionality of the command-line driven MOPEX. The image processing software is used in the Spitzer automated pipeline processing, which has been in operation for nearly 3 years. The image processing capabilities have also been tested in off-line processing by numerous astronomers at various institutions around the world. The package is multi-platform and includes automatic update capabilities. The software

  4. Assessment of vessel diameters for MR brain angiography processed images

    NASA Astrophysics Data System (ADS)

    Moraru, Luminita; Obreja, Cristian-Dragos; Moldovanu, Simona

    2015-12-01

    The motivation was to develop an assessment method that measures (in)visible differences between the original and the processed images in MR brain angiography, as a way to evaluate the status of the vessel segments (i.e. the existence of an occlusion, or intracerebral vessels damaged by aneurysms). Generally, the image quality is limited, so we improve the performance of the evaluation through digital image processing. The goal is to determine the processing method that allows the most accurate assessment of patients with cerebrovascular diseases. A total of 10 MR brain angiography images were processed by the following techniques: histogram equalization, Wiener filtering, linear contrast adjustment, contrast-limited adaptive histogram equalization, bias correction and the Marr-Hildreth filter. Each original image and its processed images were analyzed in a stacking procedure so that the same vessel and its corresponding diameter were measured. Original and processed images were evaluated by measuring the vessel diameter (in pixels) along an established direction and at a precise anatomic location. The vessel diameter was calculated using an ImageJ plugin. Mean diameter measurements differ significantly across the same segment and across processing techniques. The best results are provided by the Wiener filter and linear contrast adjustment, and the worst by the Marr-Hildreth filter.
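The enhancement techniques compared above are standard operations. The following is a minimal NumPy/SciPy sketch of four of them (histogram equalization, linear contrast adjustment, Wiener filtering, and the Marr-Hildreth operator approximated as a Laplacian of Gaussian); the toy image and all parameters are illustrative assumptions, not the authors' settings:

```python
import numpy as np
from scipy.signal import wiener
from scipy.ndimage import gaussian_laplace

def hist_equalize(img):
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size              # normalized cumulative histogram
    lut = np.round(255 * cdf).astype(np.uint8)  # intensity look-up table
    return lut[img]

def linear_contrast(img, lo=2, hi=98):
    """Linear contrast stretch between two percentiles."""
    a, b = np.percentile(img, [lo, hi])
    out = (img.astype(float) - a) / max(b - a, 1e-9)
    return np.clip(out * 255, 0, 255).astype(np.uint8)

# Toy "angiogram": a bright vessel-like stripe on a noisy background.
rng = np.random.default_rng(0)
img = rng.normal(60, 10, (64, 64)).clip(0, 255).astype(np.uint8)
img[30:34, :] = 200                                      # the "vessel"

eq = hist_equalize(img)                                  # histogram equalization
stretched = linear_contrast(img)                         # linear contrast adjustment
denoised = wiener(img.astype(float), (5, 5))             # Wiener (adaptive) filter
edges = gaussian_laplace(img.astype(float), sigma=2.0)   # Marr-Hildreth (LoG)
```

In the study's setting, each filtered image would then feed the same diameter measurement along a fixed vessel direction.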

  5. The Role of Additional Processing Time and Lexical Constraint in Spoken Word Recognition

    ERIC Educational Resources Information Center

    LoCasto, Paul C.; Connine, Cynthia M.; Patterson, David

    2007-01-01

    Three phoneme monitoring experiments examined the manner in which additional processing time influences spoken word recognition. Experiment 1a introduced a version of the phoneme monitoring paradigm in which a silent interval is inserted prior to the word-final target phoneme. Phoneme monitoring reaction time decreased as the silent interval…

  6. 15 CFR 713.4 - Advance declaration requirements for additionally planned production, processing, or consumption...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE CHEMICAL WEAPONS CONVENTION REGULATIONS ACTIVITIES INVOLVING...)(ii) of the CWCR will produce, process, or consume a Schedule 2 chemical above the applicable...)(ii) of the CWCR an additional Schedule 2 chemical above the applicable declaration threshold;...

  7. 15 CFR 713.4 - Advance declaration requirements for additionally planned production, processing, or consumption...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE CHEMICAL WEAPONS CONVENTION REGULATIONS ACTIVITIES INVOLVING...)(ii) of the CWCR will produce, process, or consume a Schedule 2 chemical above the applicable...)(ii) of the CWCR an additional Schedule 2 chemical above the applicable declaration threshold;...

  8. 15 CFR 713.4 - Advance declaration requirements for additionally planned production, processing, or consumption...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE CHEMICAL WEAPONS CONVENTION REGULATIONS ACTIVITIES INVOLVING...)(ii) of the CWCR will produce, process, or consume a Schedule 2 chemical above the applicable...)(ii) of the CWCR an additional Schedule 2 chemical above the applicable declaration threshold;...

  9. 15 CFR 713.4 - Advance declaration requirements for additionally planned production, processing, or consumption...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 15 Commerce and Foreign Trade 2 2011-01-01 2011-01-01 false Advance declaration requirements for additionally planned production, processing, or consumption of Schedule 2 chemicals. 713.4 Section 713.4 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade (Continued) BUREAU OF INDUSTRY AND SECURITY, DEPARTMENT OF...

  10. 15 CFR 713.4 - Advance declaration requirements for additionally planned production, processing, or consumption...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE CHEMICAL WEAPONS CONVENTION REGULATIONS ACTIVITIES INVOLVING...)(ii) of the CWCR will produce, process, or consume a Schedule 2 chemical above the applicable...)(ii) of the CWCR an additional Schedule 2 chemical above the applicable declaration threshold;...

  11. 25 CFR 1000.356 - May the trust evaluation process be used for additional reviews?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false May the trust evaluation process be used for additional reviews? 1000.356 Section 1000.356 Indians OFFICE OF THE ASSISTANT SECRETARY, INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR ANNUAL FUNDING AGREEMENTS UNDER THE TRIBAL SELF-GOVERNMENT ACT AMENDMENTS TO...

  12. Graphical user interface for image acquisition and processing

    DOEpatents

    Goldberg, Kenneth A.

    2002-01-01

    An event-driven GUI-based image acquisition interface for the IDL programming environment, designed for CCD camera control and image acquisition directly into the IDL environment, where image manipulation and data analysis can be performed, together with a toolbox of real-time analysis applications. Running the image acquisition hardware directly from IDL removes the necessity of first saving images in one program and then importing the data into IDL for analysis in a second step. Bringing the data directly into IDL creates an opportunity for the implementation of IDL image processing and display functions in real time. The program allows control over the available charge-coupled device (CCD) detector parameters, data acquisition, file saving and loading, and image manipulation and processing, all from within IDL. The program is built using IDL's widget libraries to control the on-screen display and user interface.

  13. Spatial correlations, additivity, and fluctuations in conserved-mass transport processes

    NASA Astrophysics Data System (ADS)

    Das, Arghya; Chatterjee, Sayani; Pradhan, Punyabrata

    2016-06-01

    We exactly calculate two-point spatial correlation functions in the steady state in a broad class of conserved-mass transport processes, which are governed by chipping, diffusion, and coalescence of masses. We find that the spatial correlations are in general short-ranged and, consequently, on a large scale, these transport processes possess a remarkable thermodynamic structure in the steady state. That is, the processes have an equilibrium-like additivity property and, consequently, a fluctuation-response relation, which help us obtain subsystem mass distributions in the limit of large subsystem size.
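As a hypothetical illustration only (not the authors' exact model), a chipping-and-diffusion process on a ring can be simulated in a few lines; total mass is conserved by construction, and the r = 0 term of the two-point correlation is the (positive) on-site mass fluctuation:

```python
import numpy as np

rng = np.random.default_rng(1)
L, steps = 64, 50_000
m = np.ones(L)                          # unit mass density on a ring of L sites

for _ in range(steps):
    i = rng.integers(L)
    chip = rng.random() * m[i]          # chip off a random fraction of the mass
    m[i] -= chip
    j = (i + rng.choice([-1, 1])) % L   # and let it diffuse to a random neighbor
    m[j] += chip

# Two-point correlation C(r) = <m_i m_{i+r}> - <m>^2 from this one configuration.
mbar = m.mean()
C = np.array([np.mean(m * np.roll(m, r)) for r in range(L // 2)]) - mbar**2
```

Averaging C over many configurations would expose the short-ranged decay the paper computes exactly.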

  14. The Khoros software development environment for image and signal processing.

    PubMed

    Konstantinides, K; Rasure, J R

    1994-01-01

    Data flow visual language systems allow users to graphically create a block diagram of their applications and interactively control input, output, and system variables. Khoros is an integrated software development environment for information processing and visualization. It is particularly attractive for image processing because of its rich collection of tools for image and digital signal processing. This paper presents a general overview of Khoros with emphasis on its image processing and DSP tools. Various examples are presented and the future direction of Khoros is discussed. PMID:18291923

  15. Electric poling-assisted additive manufacturing process for PVDF polymer-based piezoelectric device applications

    NASA Astrophysics Data System (ADS)

    Lee, ChaBum; Tarbutton, Joshua A.

    2014-09-01

    This paper presents a new additive manufacturing (AM) process to directly and continuously print piezoelectric devices from polyvinylidene fluoride (PVDF) polymeric filament rods under a strong electric field. This process, called ‘electric poling-assisted additive manufacturing’ or EPAM, combines AM and electric poling and is able to fabricate free-form piezoelectric devices continuously. In this process, the PVDF polymer dipoles remain well-aligned and uniform over a large area in a single design, production and fabrication step. During the EPAM process, molten PVDF polymer is simultaneously stressed mechanically in situ by the leading nozzle and poled electrically by applying a high electric field at high temperature. The EPAM system was constructed to directly print piezoelectric structures from PVDF polymeric filament while applying a high electric field between the nozzle tip and the printing bed in the AM machine. Piezoelectric devices were successfully fabricated using the EPAM process. The crystalline phase transitions that occurred during the process were identified using Fourier transform infrared spectroscopy. The results indicate that devices printed under a strong electric field become piezoelectric during the EPAM process and that stronger electric fields result in greater piezoelectricity, as marked by the electrical response and the formation of sharper peaks at the polar β crystalline wavenumber of the PVDF polymer. Performing the process in the absence of an electric field does not align the PVDF dipoles. The EPAM process is expected to lead to the widespread use of AM to fabricate a variety of piezoelectric PVDF polymer-based devices for sensing, actuation and energy harvesting in a simple, low-cost, single processing and fabrication step.

  16. Implementation and Optimization of Image Processing Algorithms on Embedded GPU

    NASA Astrophysics Data System (ADS)

    Singhal, Nitin; Yoo, Jin Woo; Choi, Ho Yeol; Park, In Kyu

    In this paper, we analyze the key factors underlying the implementation, evaluation, and optimization of image processing and computer vision algorithms on embedded GPU using OpenGL ES 2.0 shader model. First, we present the characteristics of the embedded GPU and its inherent advantage when compared to embedded CPU. Additionally, we propose techniques to achieve increased performance with optimized shader design. To show the effectiveness of the proposed techniques, we employ cartoon-style non-photorealistic rendering (NPR), speeded-up robust feature (SURF) detection, and stereo matching as our example algorithms. Performance is evaluated in terms of the execution time and speed-up achieved in comparison with the implementation on embedded CPU.

  17. A Design Verification of the Parallel Pipelined Image Processings

    NASA Astrophysics Data System (ADS)

    Wasaki, Katsumi; Harai, Toshiaki

    2008-11-01

    This paper presents a case study of the design and verification of a parallel, pipelined image processing unit based on an extended Petri net called a Logical Colored Petri net (LCPN). This is suitable for Flexible Manufacturing System (FMS) modeling and discussion of structural properties. LCPN is another family of colored place/transition net (CPN) with the addition of the following features: integer value assignment of marks, representation of firing conditions as formulae based on mark values, and coupling of output procedures with transition firing. Therefore, to study the behavior of a system modeled with this net, we provide a means of searching the reachability tree for markings.

  18. Temperature Profile and Imaging Analysis of Laser Additive Manufacturing of Stainless Steel

    NASA Astrophysics Data System (ADS)

    Islam, M.; Purtonen, T.; Piili, H.; Salminen, A.; Nyrhilä, O.

    Powder bed fusion is a laser additive manufacturing (LAM) technology used to manufacture parts layer-wise from powdered metallic materials. The technology has advanced vastly in recent years, and current systems can be used to manufacture functional parts for, e.g., the aerospace industry. The performance and accuracy of the systems have also improved, but certain difficulties in the powder fusion process reduce the final quality of the parts. One of these is commonly known as the balling phenomenon. The aim of this study was to define some of the process characteristics of powder bed fusion by performing comparative studies with two different test setups, comparing measured temperature profiles and on-line photography of the process. The material used in the research was EOS PH1 stainless steel. Both test systems were equipped with 200 W single-mode fiber lasers. The main result of the research was that some of the process instabilities result from the energy input during the process.

  19. Early differential processing of material images: Evidence from ERP classification.

    PubMed

    Wiebel, Christiane B; Valsecchi, Matteo; Gegenfurtner, Karl R

    2014-01-01

    Investigating the temporal dynamics of natural image processing using event-related potentials (ERPs) has a long tradition in object recognition research. In a classical Go-NoGo task, two characteristic effects have been emphasized: an early task-independent category effect and a later task-dependent target effect. Here, we set out to use this well-established Go-NoGo paradigm to study the time course of material categorization. Material perception has gained increasing interest over the years, as its importance under natural viewing conditions was long ignored. In addition to analyzing standard ERPs, we conducted a single-trial ERP pattern analysis. To validate this procedure, we also measured ERPs in two object categories (people and animals). Our linear classification procedure was able to largely capture the overall pattern of results from the canonical analysis of the ERPs and even extend it. We replicate the known target effect (differential Go-NoGo potential at frontal sites) for the material images. Furthermore, we observe task-independent differential activity between the two material categories as early as 140 ms after stimulus onset. Using our linear classification approach, we show that material categories can be differentiated consistently based on the ERP pattern in single trials around 100 ms after stimulus onset, independent of the target-related status. This strengthens the idea of early differential visual processing of material categories independent of the task, probably due to differences in low-level image properties, and suggests pattern classification of ERP topographies as a strong instrument for investigating electrophysiological brain activity. PMID:24961247
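The single-trial pattern classification can be illustrated with a stand-in linear rule (nearest class centroid) on synthetic "topographies"; the data, effect size and classifier here are assumptions for illustration, not the authors' procedure:

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_channels = 200, 32

# Synthetic single-trial "topographies": the two categories share the noise
# but differ by a small additive spatial pattern (stand-in category effect).
pattern = rng.normal(0, 1, n_channels)
X0 = rng.normal(0, 1, (n_trials, n_channels))
X1 = rng.normal(0, 1, (n_trials, n_channels)) + 0.8 * pattern
X = np.vstack([X0, X1])
y = np.r_[np.zeros(n_trials), np.ones(n_trials)]

# Random split, then classify test trials by nearest class centroid
# (a simple linear decision rule).
idx = rng.permutation(len(y))
train, test = idx[:300], idx[300:]
c0 = X[train][y[train] == 0].mean(axis=0)
c1 = X[train][y[train] == 1].mean(axis=0)
d0 = np.linalg.norm(X[test] - c0, axis=1)
d1 = np.linalg.norm(X[test] - c1, axis=1)
pred = (d1 < d0).astype(float)
acc = (pred == y[test]).mean()          # well above the 0.5 chance level
```

In the paper's setting the same idea is applied per time point, which is how an effect can be localized to ~100 ms after stimulus onset.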

  20. Image processing system to analyze droplet distributions in sprays

    NASA Technical Reports Server (NTRS)

    Bertollini, Gary P.; Oberdier, Larry M.; Lee, Yong H.

    1987-01-01

    An image processing system was developed which automatically analyzes the size distributions in fuel spray video images. Images are generated by using pulsed laser light to freeze droplet motion in the spray sample volume under study. This coherent illumination source produces images which contain droplet diffraction patterns representing the droplets' degree of focus. The analysis is performed by extracting feature data describing the droplet diffraction patterns in the images. This allows the system to distinguish droplets from image anomalies and measure only those droplets considered in focus. Unique features of the system are the totally automated analysis and droplet feature measurement from the grayscale image. The feature extraction and image restoration algorithms used in the system are described. Preliminary performance data are also given for two experiments. One experiment compares a synthesized distribution measured manually and automatically. The second compares a real spray distribution measured using current methods against the automatic system.
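A common proxy for an imaged particle's degree of focus (not necessarily the system's diffraction-pattern features, which the paper does not detail here) is the variance of the Laplacian, sketched on synthetic patches:

```python
import numpy as np
from scipy.ndimage import laplace, gaussian_filter

def focus_score(patch):
    """Variance of the Laplacian: a common proxy for degree of focus."""
    return laplace(patch.astype(float)).var()

sharp = np.zeros((21, 21))
sharp[8:13, 8:13] = 1.0                      # crisp synthetic "droplet"
blurred = gaussian_filter(sharp, sigma=3.0)  # defocused copy of the same droplet

scores = {"sharp": focus_score(sharp), "blurred": focus_score(blurred)}
```

Thresholding such a score against a defocused reference would keep only in-focus droplets for sizing.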

  1. Dynamic infrared imaging in identification of breast cancer tissue with combined image processing and frequency analysis.

    PubMed

    Joro, R; Lääperi, A-L; Soimakallio, S; Järvenpää, R; Kuukasjärvi, T; Toivonen, T; Saaristo, R; Dastidar, P

    2008-01-01

    Five combinations of image-processing algorithms were applied to dynamic infrared (IR) images of six breast cancer patients preoperatively to establish optimal enhancement of cancer tissue before frequency analysis. Mid-wave photovoltaic (PV) IR cameras with 320x254 and 640x512 pixels were used. The signal-to-noise ratio and the specificity for breast cancer were evaluated for each image-processing combination from the image series of each patient. Before image processing and frequency analysis, the effect of patient movement was minimized with a stabilization program developed and tested in the study, which stabilizes image slices using surface markers set as measurement points on the skin of the imaged breast. A mathematical equation for a superiority value was developed for comparing the key ratios of the image-processing combinations. The ability of each combination to locate the mammography finding of breast cancer in each patient was compared. Our results show that data collected with a 640x512-pixel mid-wave PV camera, processed with methods optimizing the signal-to-noise ratio, morphological image processing and linear image restoration before frequency analysis, possess the greatest superiority value, showing the cancer area most clearly, including in the match centre of the mammography estimation. PMID:18666012
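The marker-based stabilization step can be sketched as an integer-pixel translation that moves a tracked surface marker onto a reference position; this is a simplified stand-in for the study's stabilization program, with made-up coordinates:

```python
import numpy as np

def stabilize(frame, marker_yx, ref_yx):
    """Shift a frame so its surface marker lands on a reference position
    (integer-pixel translation via np.roll; subpixel registration would
    need interpolation)."""
    dy = ref_yx[0] - marker_yx[0]
    dx = ref_yx[1] - marker_yx[1]
    return np.roll(frame, shift=(dy, dx), axis=(0, 1))

frame = np.zeros((16, 16))
frame[5, 7] = 1.0                            # marker detected at (5, 7)
aligned = stabilize(frame, (5, 7), (8, 8))   # move the marker to (8, 8)
```

Applying the same shift to every slice in a series keeps the breast region fixed before frequency analysis.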

  2. Processing of polarametric SAR images. Final report

    SciTech Connect

    Warrick, A.L.; Delaney, P.A.

    1995-09-01

    The objective of this work was to develop a systematic method of combining multifrequency polarized SAR images. It is shown that the traditional methods of correlation, hard targets, and template matching fail to produce acceptable results. Hence, a new algorithm was developed and tested. The new approach combines the three traditional methods with an interpolation method. An example is shown that demonstrates the new algorithm's performance. The results are summarized, and suggestions for future research are presented.

  3. Processing ISS Images of Titan's Surface

    NASA Technical Reports Server (NTRS)

    Perry, Jason; McEwen, Alfred; Fussner, Stephanie; Turtle, Elizabeth; West, Robert; Porco, Carolyn; Knowles, Ben; Dawson, Doug

    2005-01-01

    One of the primary goals of the Cassini-Huygens mission, in orbit around Saturn since July 2004, is to understand the surface and atmosphere of Titan. Surface investigations are primarily accomplished with RADAR, the Visual and Infrared Mapping Spectrometer (VIMS), and the Imaging Science Subsystem (ISS) [1]. The latter two use methane "windows", regions in Titan's reflectance spectrum where its atmosphere is most transparent, to observe the surface. For VIMS, this produces clear views of the surface near 2 and 5 microns [2]. ISS uses a narrow continuum band filter (CB3) at 938 nanometers. While these methane windows provide our best views of the surface, the images produced are not as crisp as ISS images of satellites like Dione and Iapetus [3] due to the atmosphere. Given a reasonable estimate of contrast (approx. 30%), the apparent resolution of features is approximately 5 pixels due to the effects of the atmosphere and the Modulation Transfer Function of the camera [1,4]. The atmospheric haze also reduces contrast, especially with increasing emission angles [5].

  4. Image processing of underwater multispectral imagery

    USGS Publications Warehouse

    Zawada, D.G.

    2003-01-01

    Capturing in situ fluorescence images of marine organisms presents many technical challenges. The effects of the medium, as well as the particles and organisms within it, are intermixed with the desired signal. Methods for extracting and preparing the imagery for analysis are discussed in reference to a novel underwater imaging system called the low-light-level underwater multispectral imaging system (LUMIS). The instrument supports both uni- and multispectral collections, each of which is discussed in the context of an experimental application. In unispectral mode, LUMIS was used to investigate the spatial distribution of phytoplankton. A thin sheet of laser light (532 nm) induced chlorophyll fluorescence in the phytoplankton, which was recorded by LUMIS. Inhomogeneities in the light sheet led to the development of a beam-pattern-correction algorithm. Separating individual phytoplankton cells from a weak background fluorescence field required a two-step procedure consisting of edge detection followed by a series of binary morphological operations. In multispectral mode, LUMIS was used to investigate the bio-assay potential of fluorescent pigments in corals. Problems with the commercial optical-splitting device produced nonlinear distortions in the imagery. A tessellation algorithm, including an automated tie-point-selection procedure, was developed to correct the distortions. Only pixels corresponding to coral polyps were of interest for further analysis. Extraction of these pixels was performed by a dynamic global-thresholding algorithm.
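The two-step polyp/cell extraction (thresholding followed by binary morphology) can be sketched with scipy.ndimage; a fixed global threshold stands in for the paper's dynamic global-thresholding algorithm, and the image is synthetic:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(4)
# Weak fluorescent background plus two bright "polyps".
img = rng.normal(0.1, 0.02, (40, 40))
img[5:10, 5:10] += 1.0
img[25:32, 20:27] += 1.0

mask = img > 0.5                                                # global threshold
mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))  # drop speckle
labels, n_objects = ndimage.label(mask)                         # n_objects == 2
```

Only the labeled pixels would then enter further spectral analysis, mirroring the extraction step described above.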

  5. Distributed image processing for automatic target recognition

    NASA Astrophysics Data System (ADS)

    Cozien, Roger F.

    2001-02-01

    Our purpose is, in the medium term, to detect in aerial images characteristic shapes and objects such as airports, industrial plants, planes, tanks and trucks with great accuracy and a low error rate. However, we also want to assess whether the link between neural networks and multi-agent systems is relevant and effective. If it proves effective, we hope to use this kind of technology in other fields, as an easy and convenient way to represent and use the agents' knowledge, which is distributed and fragmented. After a first phase of preliminary tests to determine whether agents are able to give relevant information to a neural network, we verified that only a few agents running on an image are enough to inform the network and let it generalize the agents' distributed and fragmented knowledge. In a second phase, we developed a distributed architecture allowing several multi-agent systems to run at the same time on different computers with different images. All those agents send information to a "multi neural networks system" whose job is to identify the shapes detected by the agents. The name we gave to our project is Jarod.

  6. Effects of different additives on the performance of spray dryer system during incineration process.

    PubMed

    Wey, M Y; Peng, C Y; Wu, H Y; Chiang, B C; Liu, Z S

    2002-06-01

    The spray dryer system is conventionally employed to remove SOx, NOx, and HCl from flue gas. However, the removal efficiency of acid gas in practical incineration flue gas, which contains dust, heavy metals, and the acid gas itself, has seldom been addressed in the literature. Alkaline sorbents possess a large specific surface area, a main factor in the adsorption of heavy metals and acid gas. Therefore, the primary objective of this study was the effect of different additives on the removal efficiency of acid gas and heavy metals (Cr, Cd and Pb). The mass and elemental size distributions of heavy metals in fly ash under different additives were also investigated. The results indicated that the removal efficiency of HCl in the spray dryer system was higher than 97.8%; the effects of additives on HCl removal, however, were indistinguishable. In the desulfurization process, the highest removal efficiency, 71.3%, was obtained when amorphous SiO2 was added in the spray dryer system; the efficiency was 66.0% with the CaCl2 additive and 63.1% without any additive. It was also found that the spray dryer system could decrease the concentration of metal in fly ash but increase the amount of fly ash. In addition, amorphous SiO2 in the alkaline sorbent tended to increase the adsorption of heavy metals on the reactant, because it enhances the dispersion of the alkaline sorbent. PMID:12118621

  7. Optical Signal Processing: Poisson Image Restoration and Shearing Interferometry

    NASA Technical Reports Server (NTRS)

    Hong, Yie-Ming

    1973-01-01

    Optical signal processing can be performed in either digital or analog systems. Digital computers and coherent optical systems are discussed as they are used in optical signal processing. Topics include: image restoration; phase-object visualization; image contrast reversal; optical computation; image multiplexing; and fabrication of spatial filters. Digital optical data processing deals with restoration of images degraded by signal-dependent noise. When the input data of an image restoration system are the numbers of photoelectrons received from various areas of a photosensitive surface, the data are Poisson distributed with mean values proportional to the illuminance of the incoherently radiating object and background light. Optical signal processing using coherent optical systems is also discussed. Following a brief review of the pertinent details of Ronchi's diffraction grating interferometer, moire effect, carrier-frequency photography, and achromatic holography, two new shearing interferometers based on them are presented. Both interferometers can produce variable shear.

  8. Data management in pattern recognition and image processing systems

    NASA Technical Reports Server (NTRS)

    Zobrist, A. L.; Bryant, N. A.

    1976-01-01

    Data management considerations are important to any system which handles large volumes of data or where the manipulation of data is technically sophisticated. A particular problem is the introduction of image-formatted files into the mainstream of data processing application. This report describes a comprehensive system for the manipulation of image, tabular, and graphical data sets which involve conversions between the various data types. A key characteristic is the use of image processing technology to accomplish data management tasks. Because of this, the term 'image-based information system' has been adopted.

  9. [Image processing method based on prime number factor layer].

    PubMed

    Fan, Yifang; Yuan, Zhirun

    2004-10-01

    In sport games, since human body movement data are mainly drawn from sports fields, with the hues and even interruptions of a commercial environment, some difficulties must be surmounted in order to analyze the images. Grey-level image processing alone is obviously not enough. We have applied the characteristics of the prime number function to human body movement images and thus introduce a new method of image processing in this article. When dealing with such moving images, we obtain a better result. PMID:15553856

  10. Leveraging the Cloud for Robust and Efficient Lunar Image Processing

    NASA Technical Reports Server (NTRS)

    Chang, George; Malhotra, Shan; Wolgast, Paul

    2011-01-01

    The Lunar Mapping and Modeling Project (LMMP) is tasked to aggregate lunar data, from the Apollo era to the latest instruments on the LRO spacecraft, into a central repository accessible by scientists and the general public. A critical function of this task is to provide users with the best solution for browsing the vast amounts of imagery available. The image files LMMP manages range from a few gigabytes to hundreds of gigabytes in size with new data arriving every day. Despite this ever-increasing amount of data, LMMP must make the data readily available in a timely manner for users to view and analyze. This is accomplished by tiling large images into smaller images using Hadoop, a distributed computing software platform implementation of the MapReduce framework, running on a small cluster of machines locally. Additionally, the software is implemented to use Amazon's Elastic Compute Cloud (EC2) facility. We also developed a hybrid solution to serve images to users by leveraging cloud storage using Amazon's Simple Storage Service (S3) for public data while keeping private information on our own data servers. By using Cloud Computing, we improve upon our local solution by reducing the need to manage our own hardware and computing infrastructure, thereby reducing costs. Further, by using a hybrid of local and cloud storage, we are able to provide data to our users more efficiently and securely. 12 This paper examines the use of a distributed approach with Hadoop to tile images, an approach that provides significant improvements in image processing time, from hours to minutes. This paper describes the constraints imposed on the solution and the resulting techniques developed for the hybrid solution of a customized Hadoop infrastructure over local and cloud resources in managing this ever-growing data set. 
It examines the performance trade-offs of using the more plentiful resources of the cloud, such as those provided by S3, against the bandwidth limitations such use
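
    As an illustration of the tiling step described above, here is a minimal single-machine sketch in Python/NumPy. The function name and tile size are ours for illustration; LMMP's actual implementation runs the equivalent work as Hadoop map tasks over far larger files.

```python
import numpy as np

def tile_image(img, tile_size):
    """Split a 2-D image array into (row, col)-indexed tiles; tiles along
    the right and bottom edges may be smaller than tile_size."""
    tiles = {}
    h, w = img.shape[:2]
    for r in range(0, h, tile_size):
        for c in range(0, w, tile_size):
            tiles[(r // tile_size, c // tile_size)] = img[r:r + tile_size,
                                                          c:c + tile_size]
    return tiles

# A 1000x700 "image" split with 512-pixel tiles yields a 2x2 grid of tiles.
img = np.zeros((1000, 700), dtype=np.uint8)
tiles = tile_image(img, 512)
```

    In a MapReduce setting, each `(key, tile)` pair would be emitted by a map task and written out independently, which is what turns hours of serial tiling into minutes.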

  11. Image processing software for providing radiometric inputs to land surface climatology models

    NASA Technical Reports Server (NTRS)

    Newcomer, Jeffrey A.; Goetz, Scott J.; Strebel, Donald E.; Hall, Forrest G.

    1989-01-01

    During the First International Land Surface Climatology Project (ISLSCP) Field Experiment (FIFE), 80 gigabytes of image data were generated from a variety of satellite and airborne sensors in a multidisciplinary attempt to study energy and mass exchange between the land surface and the atmosphere. To make these data readily available to researchers with a range of image data handling experience and capabilities, unique image-processing software was designed to perform a variety of nonstandard image-processing manipulations and to derive a set of standard-format image products. The nonconventional features of the software include: (1) adding new layers of geographic coordinates, and solar and viewing conditions to existing data; (2) providing image polygon extraction and calibration of data to at-sensor radiances; and (3) generating standard-format derived image products that can be easily incorporated into radiometric or climatology models. The derived image products consist of easily handled ASCII descriptor files, byte image data files, and additional per-pixel integer data files (e.g., geographic coordinates, and sun and viewing conditions). Details of the solutions to the image-processing problems, the conventions adopted for handling a variety of satellite and aircraft image data, and the applicability of the output products to quantitative modeling are presented; these should be of general interest for the design of future experiments and data-handling systems.

  12. Image processing and fusion to detect navigation obstacles

    NASA Astrophysics Data System (ADS)

    Yamamoto, Kazuo; Yamada, Kimio

    1998-07-01

    Helicopters flying at low altitude under visual flight rules often crash into obstacles such as power transmission lines. This paper describes the image sensors used to detect obstacles and several image processing techniques to extract and enhance the targets in the images. Images including obstacles were collected both on the ground and in the air using an infrared (IR) camera and a color video camera, with different backgrounds, distances, and weather conditions. The collected results revealed that IR images have an advantage over color images for detecting obstacles in many environments. Several image processing techniques have been evaluated to improve the quality of the collected images: for example, fusion of IR and color images, and several filters, such as the median filter and the adaptive filter, have been tested. Information that the target is thin and long, which characterizes the shape of power lines, has been introduced to extract power lines. It has been shown that these processes can greatly reduce the noise and enhance the contrast, regardless of the background. It has also been demonstrated that there is a good prospect that these processes will help develop an algorithm for automatic obstacle detection and warning.
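
    To illustrate the kind of noise suppression mentioned above, here is a minimal pure-NumPy 3x3 median filter. It is a toy stand-in for the filters the authors evaluated; the function and the test image are illustrative only.

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter in pure NumPy (border pixels are left unchanged)."""
    out = img.copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.median(img[y - 1:y + 2, x - 1:x + 2])
    return out

# An isolated bright spike on a flat background is removed by the median.
img = np.full((5, 5), 10, dtype=np.uint8)
img[2, 2] = 255
clean = median_filter3(img)
```

    Note that a plain median also erodes thin, line-like targets, which is why shape information (thin and long) must be reintroduced when the goal is to keep power lines while removing noise.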

  13. Constraining 3D Process Sedimentological Models to Geophysical Data Using Image Quilting

    NASA Astrophysics Data System (ADS)

    Tahmasebi, P.; Da Pra, A.; Pontiggia, M.; Caers, J.

    2014-12-01

    3D process geological models, whether for carbonate or sedimentological systems, have been proposed for modeling realistic subsurface heterogeneity. The problem with such forward process models is that they are not constrained to any subsurface data, whether from wells or geophysical surveys. We propose a new method for realistic geological modeling of complex heterogeneity by hybridizing 3D process modeling of geological deposition with conditioning by means of a novel multiple-point geostatistics (MPS) technique termed image quilting (IQ). Image quilting is a pattern-based technique that stitches together patterns extracted from training images to generate stochastic realizations that look like the training image. In this paper, we illustrate how 3D process model realizations can be used as training images in image quilting. To constrain the realizations to seismic data, we first interpret each facies in the geophysical data. These interpretations, while overly smooth and not reflecting finer-scale variation, are used as auxiliary variables in the generation of the image quilting realizations. To condition to well data, we first perform a kriging of the well data to generate a kriging map and a kriging variance. The kriging map is used as an additional auxiliary variable, while the kriging variance is used as a weight given to the kriging-derived auxiliary variable. We present an application to a giant offshore reservoir. Starting from advanced seismic attribute analysis and sedimentological interpretation, we build the 3D sedimentological process-based model and use it as a non-stationary training image for conditional image quilting.
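
    The core pattern-matching step inside image quilting can be sketched as an exhaustive search for the training-image patch that best matches a target region. This is a toy version; the full method also handles patch overlaps, minimum-error boundary cuts, and the auxiliary variables described above.

```python
import numpy as np

def best_patch(training, target, size):
    """Exhaustive search for the training-image patch with the smallest
    sum of squared differences (SSD) against `target`."""
    h, w = training.shape
    best, best_err = None, np.inf
    for y in range(h - size + 1):
        for x in range(w - size + 1):
            err = np.sum((training[y:y + size, x:x + size].astype(float)
                          - target) ** 2)
            if err < best_err:
                best, best_err = (y, x), err
    return best

# A patch copied straight out of the training image is found at its origin.
training = np.arange(36).reshape(6, 6)
target = training[2:4, 3:5]
```

    Conditioning enters by extending the SSD to also compare auxiliary variables (seismic-derived facies probabilities, kriging maps) alongside the pixel values.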

  14. Image Harvest: an open-source platform for high-throughput plant image processing and analysis

    PubMed Central

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal

    2016-01-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable to processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest was developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917
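
    The grid-parallel pattern that IH relies on can be imitated on a single machine. The sketch below fans a hypothetical per-image trait-extraction job out over a thread pool; `extract_trait` is a stand-in of ours, not an actual IH function, and real grid jobs would be independent processes scheduled across machines.

```python
from concurrent.futures import ThreadPoolExecutor

def extract_trait(image_path):
    # Stand-in for a per-image job; a real IH task would load the image and
    # measure e.g. projected shoot area. Here: path length as a dummy "trait".
    return image_path, len(image_path)

paths = [f"plant_{i:03d}.png" for i in range(8)]
with ThreadPoolExecutor(max_workers=4) as pool:
    traits = dict(pool.map(extract_trait, paths))
```

    Because each image is processed independently, the workload is embarrassingly parallel, which is exactly what makes opportunistic grids like the Open Science Grid a good fit.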

  15. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    PubMed

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable to processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest was developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917

  16. The effects of Na/K additives and flyash on NO reduction in a SNCR process.

    PubMed

    Hao, Jiangtao; Yu, Wei; Lu, Ping; Zhang, Yufei; Zhu, Xiuming

    2015-03-01

    An experimental study of the effects of Na/K additives and flyash on NO reduction during the selective non-catalytic reduction (SNCR) process was carried out in an entrained flow reactor (EFR). The effects of reaction temperature (Tr), water vapor, Na/K additives (NaCl, KCl, Na2CO3) and flyash characteristics on NO reduction were analyzed. The results indicated that NO removal efficiency first increases and then decreases with increasing temperature over Tr = 850-1150°C. Water vapor can improve NO reduction: an NO reduction of 70.5% was obtained when the flue gas contained 4% water vapor at 950°C. Na/K additives have a significant promoting effect on NO reduction and widen the SNCR temperature window; the promoting effect of the tested additives is ordered as Na2CO3 > KCl > NaCl. NO removal efficiency with 125 ppm Na2CO3 and 4% water vapor can reach up to 84.9% at the optimal reaction temperature. The additive concentration has no significant effect on NO reduction once it exceeds 50 ppm. Addition of circulating fluidized bed (CFB) combustion flyash deteriorates NO reduction significantly. However, CFB flyash and Na/K additives have a coupling effect on NO reduction during the SNCR process, and the best NO reduction can reach 72.3% when feeding Na2CO3-impregnated CFB flyash at 125 ppm Na2CO3 and Tr = 950°C. PMID:25532766

  17. High performance image processing of SPRINT

    SciTech Connect

    DeGroot, T.

    1994-11-15

    This talk describes computed tomography (CT) reconstruction using filtered back-projection on SPRINT parallel computers. CT is a computationally intensive task, typically requiring several minutes to reconstruct a 512x512 image. SPRINT and other parallel computers can be applied to CT reconstruction to reduce computation time from minutes to seconds. SPRINT is a family of massively parallel computers developed at LLNL. SPRINT-2.5 is a 128-node multiprocessor whose performance can exceed twice that of a Cray Y-MP. SPRINT-3 will be 10 times faster. The parallel algorithms for filtered back-projection and their execution on SPRINT parallel computers are described.
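
    Filtered back-projection is compact enough to sketch. The toy NumPy version below ramp-filters each projection in the Fourier domain and smears it back across the image (parallel-beam geometry, nearest-neighbor interpolation); a machine like SPRINT would distribute the back-projection loop across nodes. All names and sizes here are illustrative.

```python
import numpy as np

def fbp(sinogram, angles):
    """Toy filtered back-projection: ramp-filter each projection, then
    accumulate it back into the image along its viewing angle."""
    n = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))                      # ideal ramp filter
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp,
                                   axis=1))
    recon = np.zeros((n, n))
    ys, xs = np.mgrid[0:n, 0:n] - n // 2
    for proj, theta in zip(filtered, angles):
        t = xs * np.cos(theta) + ys * np.sin(theta) + n // 2
        recon += proj[np.clip(np.round(t).astype(int), 0, n - 1)]
    return recon * np.pi / len(angles)

# A point source at the centre projects to a spike at the centre of every view.
n, angles = 32, np.linspace(0, np.pi, 36, endpoint=False)
sinogram = np.zeros((36, n))
sinogram[:, n // 2] = 1.0
recon = fbp(sinogram, angles)
```

    The per-angle back-projections are independent, which is what makes the algorithm map naturally onto a massively parallel machine.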

  18. An Image Processing Approach to Linguistic Translation

    NASA Astrophysics Data System (ADS)

    Kubatur, Shruthi; Sreehari, Suhas; Hegde, Rajeshwari

    2011-12-01

    The art of translation is as old as written literature. Developments since the Industrial Revolution have influenced the practice of translation, nurturing schools, professional associations, and standards. In this paper, we propose a method for translating typed Kannada text (taken as an image) into its equivalent English text. The National Instruments (NI) Vision Assistant (version 8.5) has been used for Optical Character Recognition (OCR). We developed a new way of transliteration (which we call NIV transliteration) to simplify the training of characters. We also built a special type of dictionary for the purpose of translation.

  19. Thermographic in-situ process monitoring of the electron-beam melting technology used in additive manufacturing

    NASA Astrophysics Data System (ADS)

    Dinwiddie, Ralph B.; Dehoff, Ryan R.; Lloyd, Peter D.; Lowe, Larry E.; Ulrich, Joe B.

    2013-05-01

    Oak Ridge National Laboratory (ORNL) has been utilizing the ARCAM electron beam melting technology to additively manufacture complex geometric structures directly from powder. Although the technology has demonstrated the ability to decrease costs, decrease manufacturing lead-time and fabricate complex structures that are impossible to fabricate through conventional processing techniques, certification of the component quality can be challenging. Because the process involves the continuous deposition of successive layers of material, each layer can be examined without destructively testing the component. However, in-situ process monitoring is difficult due to metallization on inside surfaces caused by evaporation and condensation of metal from the melt pool. This work describes a solution to one of the challenges of continuously imaging the inside of the chamber during the EBM process: the utilization of a continuously moving Mylar film canister. Results are presented related to in-situ process monitoring and how this technique results in improved mechanical properties and reliability of the process.

  20. Hydro-gel environment and solution additives modify calcite growth mechanism to an accretion process of amorphous nanospheres

    NASA Astrophysics Data System (ADS)

    Gal, A.; Kahil, K.; Habraken, W.; Gur, D.; Fratzl, P.; Addadi, L.; Weiner, S.

    2013-12-01

    Two factors may underlie many biomineralization processes in nature: the first-formed amorphous mineral phase can transform to a crystalline phase without dissolving if the solution properties of the environment are altered by an additive, and accretion-based crystal growth may become dominant when the amorphous precursor is abundant and the competing ion-based process is slowed down. SEM images: (A) a calcite crystal that grew from the transformation of ACC in DDW by an ion-by-ion growth mechanism; (B) a calcite crystal that grew from the transformation of ACC in 10 mM phosphate solution by a nanosphere accretion mechanism. Scale bars: 100 nm.

  1. Real-time image processing architecture for robot vision

    NASA Astrophysics Data System (ADS)

    Persa, Stelian; Jonker, Pieter P.

    2000-10-01

    This paper presents a study of the impact of MMX technology and PIII Streaming SIMD (Single Instruction stream, Multiple Data stream) Extensions on image processing and machine vision applications, which, because of their hard real-time constraints, pose an undoubtedly challenging task. A comparison with traditional scalar code and with another parallel SIMD architecture (the IMAP-VISION board) is discussed, with emphasis on the particular programming strategies for speed optimization. More precisely, we discuss the low-level and intermediate-level image processing algorithms which are best suited for parallel SIMD implementation. High-level image processing algorithms are more suitable for parallel implementation on MIMD architectures. While the IMAP-VISION system performs better because of its large number of processing elements, the MMX processor and the PIII (with Streaming SIMD Extensions) remain good candidates for low-level image processing.
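
    The scalar-versus-SIMD distinction for a low-level operation can be illustrated with NumPy, whose vectorized expressions play the role that MMX/SSE instructions play in native code. The thresholding example is ours, chosen because pointwise operations are exactly the kind that map well onto SIMD.

```python
import numpy as np

def threshold_scalar(img, t):
    """Scalar reference: one pixel per loop iteration, as plain C code runs."""
    out = np.empty_like(img)
    for i, v in enumerate(img.flat):
        out.flat[i] = 255 if v > t else 0
    return out

def threshold_vector(img, t):
    """Vectorized form: the comparison is applied across whole arrays at
    once, much as MMX/SSE instructions process 8-16 pixels per operation."""
    return np.where(img > t, 255, 0).astype(img.dtype)

img = np.array([[10, 200], [130, 90]], dtype=np.uint8)
```

    Both produce identical results; the vectorized form simply exposes the data parallelism that SIMD hardware exploits.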

  2. Evaluation of clinical image processing algorithms used in digital mammography.

    PubMed

    Zanca, Federica; Jacobs, Jurgen; Van Ongeval, Chantal; Claus, Filip; Celis, Valerie; Geniets, Catherine; Provost, Veerle; Pauwels, Herman; Marchal, Guy; Bosmans, Hilde

    2009-03-01

    Screening is the only proven approach to reduce the mortality of breast cancer, but significant numbers of breast cancers remain undetected even when all quality assurance guidelines are implemented. With the increasing adoption of digital mammography systems, image processing may be a key factor in the imaging chain. Although to our knowledge statistically significant effects of manufacturer-recommended image processing algorithms have not been previously demonstrated, the subjective experience of our radiologists, that the apparent image quality can vary considerably between different algorithms, motivated this study. This article addresses the impact of five such algorithms on the detection of clusters of microcalcifications. A database of unprocessed (raw) images of 200 normal digital mammograms, acquired with the Siemens Novation DR, was collected retrospectively. Realistic simulated microcalcification clusters were inserted in half of the unprocessed images. All unprocessed images were subsequently processed with five manufacturer-recommended image processing algorithms (Agfa Musica 1, IMS Raffaello Mammo 1.2, Sectra Mamea AB Sigmoid, Siemens OPVIEW v2, and Siemens OPVIEW v1). Four breast imaging radiologists were asked to locate and score the clusters in each image on a five-point rating scale. The free-response data were analyzed by the jackknife free-response receiver operating characteristic (JAFROC) method and, for comparison, also with the receiver operating characteristic (ROC) method. JAFROC analysis revealed highly significant differences between the image processing algorithms (F = 8.51, p < 0.0001), suggesting that image processing strongly impacts the detectability of clusters. Siemens OPVIEW2 and Siemens OPVIEW1 yielded the highest and lowest performances, respectively. ROC analysis of the data also revealed significant differences between the processing algorithms, but at lower significance (F = 3.47, p = 0.0305) than JAFROC.
Both statistical analysis methods revealed that the

  3. Ground control requirements for precision processing of ERTS images

    USGS Publications Warehouse

    Burger, Thomas C.

    1972-01-01

    When the first Earth Resources Technology Satellite (ERTS-A) flies in 1972, NASA expects to receive and bulk-process 9,000 images a week. From this deluge of images, a few will be selected for precision processing; that is, about 5 percent will be further treated to improve the geometry of the scene, both in the relative and absolute sense. Control points are required for this processing. This paper describes the control requirements for relating ERTS images to a reference surface of the earth. Enough background on the ERTS-A satellite is included to make the requirements meaningful to the user.

  4. Parallel image processing and image understanding. Final report, April 1985-March 1986

    SciTech Connect

    Rosenfeld, A.

    1986-03-31

    This research was conducted to obtain better methods for image processing. It focused on several aspects of this problem, including parallel algorithms for image processing, knowledge-based techniques for image understanding, and modeling images using shape and texture. Eighteen technical reports were produced, which will also appear as published papers in journals. In the paper "Holes and Genus of 3D Images," it was shown that certain geometric invariants of a digital image (number of components, number of holes, and number of cavities) do not determine the topology (in the sense of connectivity) of the image, refuting the commonly believed assumption that they do. This research lays the groundwork for research on the digital and computational geometry of 3D images. In the paper "Hough Transform Algorithms for Mesh-Connected SIMD Parallel Processors," several methods of Hough transform computation are studied in terms of their suitability for implementation on a parallel processor, providing a valuable tool for straight-line detection.
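
    The Hough transform studied in the second paper can be sketched in a few lines: each edge point votes for every (theta, rho) line that could pass through it, and collinear points pile their votes into one accumulator cell. This is a serial toy of ours; the paper's concern is mapping the accumulation onto mesh-connected SIMD processors.

```python
import numpy as np

def hough_lines(points, shape, n_theta=180):
    """Accumulate straight-line votes in (theta, rho) parameter space."""
    h, w = shape
    diag = int(np.ceil(np.hypot(h, w)))
    acc = np.zeros((n_theta, 2 * diag + 1), dtype=int)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    for y, x in points:
        # rho = x*cos(theta) + y*sin(theta), offset so indices are positive.
        rho = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[np.arange(n_theta), rho] += 1
    return acc, thetas

# Ten points on the horizontal line y = 3 vote together at theta = 90 degrees.
points = [(3, x) for x in range(10)]
acc, thetas = hough_lines(points, (10, 10))
```

    Peaks in `acc` correspond to detected lines; on a mesh-connected machine the per-point vote computation is distributed across processing elements.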

  5. Parallel perfusion imaging processing using GPGPU

    PubMed Central

    Zhu, Fan; Gonzalez, David Rodriguez; Carpenter, Trevor; Atkinson, Malcolm; Wardlaw, Joanna

    2012-01-01

    Background and purpose The objective of brain perfusion quantification is to generate parametric maps of relevant hemodynamic quantities such as cerebral blood flow (CBF), cerebral blood volume (CBV) and mean transit time (MTT) that can be used in diagnosis of acute stroke. These calculations involve deconvolution operations that can be very computationally expensive when using local Arterial Input Functions (AIF). As time is vitally important in the case of acute stroke, reducing the analysis time will reduce the number of brain cells damaged and increase the potential for recovery. Methods GPUs originated as dedicated graphics-generation co-processors, but modern GPUs have evolved into more general processors capable of executing scientific computations. They provide a highly parallel computing environment due to their large number of computing cores, and they constitute an affordable high-performance computing method. In this paper, we present the implementation of a deconvolution algorithm for brain perfusion quantification on GPGPUs (General Purpose Graphics Processor Units) using the CUDA programming model. We present the serial and parallel implementations of such algorithms and an evaluation of the performance gains using GPUs. Results Our method achieved speedups of 5.56x and 3.75x for CT and MR images, respectively. Conclusions Using GPGPU appears to be a desirable approach in perfusion imaging analysis, which does not harm the quality of cerebral hemodynamic maps but delivers results faster than the traditional computation. PMID:22824549
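
    As a sketch of the deconvolution at the heart of perfusion quantification, the toy below recovers a residue function from a tissue curve by regularized spectral division. Clinical pipelines of the kind discussed typically use SVD-based deconvolution with local AIFs, which is far costlier and is what makes GPUs attractive; the signals and the `eps` value here are illustrative.

```python
import numpy as np

def fft_deconvolve(tissue, aif, eps=1e-3):
    """Recover the residue function from tissue = aif (*) residue
    (circular convolution) by regularized spectral division."""
    A, T = np.fft.fft(aif), np.fft.fft(tissue)
    return np.real(np.fft.ifft(T * np.conj(A) / (np.abs(A) ** 2 + eps)))

t = np.arange(32)
aif = np.exp(-t / 4.0)                      # synthetic arterial input function
residue = np.exp(-t / 8.0)                  # synthetic residue function
tissue = np.real(np.fft.ifft(np.fft.fft(aif) * np.fft.fft(residue)))
recovered = fft_deconvolve(tissue, aif)
```

    Since every voxel's deconvolution is independent, the workload parallelizes naturally across GPU cores.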

  6. Utilization Of Spatial Self-Similarity In Medical Image Processing

    NASA Astrophysics Data System (ADS)

    Kuklinski, Walter S.

    1987-01-01

    Many current medical image processing algorithms utilize Fourier Transform techniques that represent images as sums of translationally invariant complex exponential basis functions. Selective removal or enhancement of these translationally invariant components can be used to effect a number of image processing operations such as edge enhancement or noise attenuation. An important characteristic of many natural phenomena, including the structures of interest in medical imaging is spatial self-similarity. In this work a filtering technique that represents images as sums of scale invariant self-similar basis functions will be presented. The decomposition of a signal or image into scale invariant components can be accomplished using the Mellin Transform, which diagonalizes changes of scale in a manner analogous to the way the Fourier Transform diagonalizes translation.
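
    The decomposition into scale-invariant components via the Mellin transform can be sketched with the standard log-resampling trick: resampling on an exponential grid turns scalings into shifts, after which the Fourier magnitude is shift (hence scale) invariant. This is a 1-D NumPy toy of ours; grid sizes and test signals are illustrative.

```python
import numpy as np

def mellin_magnitude(signal, n=256):
    """Resample a signal (sampled at x = 1..len) onto an exponential grid,
    so scalings of x become shifts, then take |FFT|: a Mellin-type,
    scale-invariant magnitude spectrum."""
    x = np.arange(1, len(signal) + 1, dtype=float)
    u = np.logspace(0.0, np.log10(len(signal)), n)   # exponential sample grid
    return np.abs(np.fft.fft(np.interp(u, x, signal)))

x = np.arange(1, 513, dtype=float)
f = np.exp(-(np.log(x) - 3.0) ** 2)                  # a bump, symmetric in log x
g = np.interp(np.minimum(2.0 * x, 512.0), x, f)      # the same bump, scaled by 2
Mf, Mg = mellin_magnitude(f), mellin_magnitude(g)
```

    The two magnitude spectra agree closely despite the factor-of-two rescaling, which is the property that makes Mellin-based filtering attractive for self-similar structures.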

  7. Land image data processing requirements for the EOS era

    NASA Technical Reports Server (NTRS)

    Wharton, Stephen W.; Newcomer, Jeffrey A.

    1989-01-01

    Requirements are proposed for a hybrid approach to image analysis that combines the functionality of a general-purpose image processing system with the knowledge representation and manipulation capabilities associated with expert systems to improve the productivity of scientists in extracting information from remotely sensed image data. The overall functional objectives of the proposed system are to: (1) reduce the level of human interaction required on a scene-by-scene basis to perform repetitive image processing tasks; (2) allow the user to experiment with ad hoc rules and procedures for the extraction, description, and identification of the features of interest; and (3) facilitate the derivation, application, and dissemination of expert knowledge for target recognition whose scope of application is not necessarily limited to the image(s) from which it was derived.

  8. Image pre-processing for optimizing automated photogrammetry performances

    NASA Astrophysics Data System (ADS)

    Guidi, G.; Gonizzi, S.; Micoli, L. L.

    2014-05-01

    The purpose of this paper is to analyze how optical pre-processing with polarizing filters and digital pre-processing with HDR imaging may improve the automated 3D modeling pipeline based on SFM and image matching, with special emphasis on optically non-cooperative surfaces of shiny or dark materials. Because of the automatic detection of homologous points, the presence of highlights due to shiny materials, or of nearly uniform dark patches produced by low-reflectance materials, may produce erroneous matching involving wrong 3D point estimations, and consequently holes and topological errors in the mesh originated from the associated dense 3D cloud. This is due to the limited dynamic range of the 8-bit digital images that are matched with each other to generate 3D data. The same 256 levels can be employed more usefully if the actual dynamic range is compressed, avoiding luminance clipping in the darker and lighter image areas. Such an approach is considered here using both optical filtering and HDR processing with tone mapping, with experimental evaluation on different Cultural Heritage objects characterized by non-cooperative optical behavior. Three test images of each object were captured from different positions, changing the shooting conditions (filter/no filter) and the image processing (no processing/HDR processing), in order to have the same three camera orientations with different optical and digital pre-processing, applying the same automated process to each photo set.
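
    The dynamic-range compression discussed above can be illustrated with a simple global log tone-mapping operator: a toy stand-in of ours, not the HDR/tone-mapping software the authors used.

```python
import numpy as np

def tonemap_log(hdr, out_levels=256):
    """Compress a high-dynamic-range radiance map into 8-bit range with a
    log curve, keeping usable detail in dark and bright regions alike."""
    hdr = np.asarray(hdr, dtype=float)
    compressed = np.log1p(hdr) / np.log1p(hdr.max())
    return (compressed * (out_levels - 1)).astype(np.uint8)

# Four radiances spanning ~5 orders of magnitude map to distinct 8-bit levels.
hdr = np.array([[0.5, 10.0], [500.0, 20000.0]])
ldr = tonemap_log(hdr)
```

    A linear 8-bit quantization of the same data would clip the darker values to a handful of indistinguishable levels, which is precisely what degrades homologous-point matching on shiny or dark surfaces.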

  9. The morphometric analysis and recognition an amyloid plaque in microscope images by computer image processing.

    PubMed

    Grams, A; Liberski, P P; Sobów, T; Napieralska, M; Zubert, M; Napieralski, A

    2000-01-01

    This paper presents an approach to applying two-dimensional image processing to the recognition of amyloid plaques in microscope images of brain tissues. The authors propose to create a universal amyloid plaque computer pattern and special multivariate image segmentation techniques based on collected images and statistical information. This image recognition procedure is divided into 3-dimensional statistical colour and morphological shape identifications. The developed computer system will collect and store image data and exchange them over a network with other collaborating systems. PMID:11693723

  10. Ultrasonic online monitoring of additive manufacturing processes based on selective laser melting

    NASA Astrophysics Data System (ADS)

    Rieder, Hans; Dillhöfer, Alexander; Spies, Martin; Bamberg, Joachim; Hess, Thomas

    2015-03-01

    Additive manufacturing processes have become commercially available and are particularly interesting for the production of free-formed parts. Selective laser melting allows components to be manufactured by localized melting of successive layers of metal powder. In order to describe and understand the complex dynamics of selective laser melting processes more accurately, online measurements using ultrasound have been performed for the first time. In this contribution, we report on the integration of the measurement technique into the manufacturing facility and on a variety of promising monitoring results.

  11. Surface Modified Particles By Multi-Step Addition And Process For The Preparation Thereof

    SciTech Connect

    Cook, Ronald Lee; Elliott, Brian John; Luebben, Silvia DeVito; Myers, Andrew William; Smith, Bryan Matthew

    2006-01-17

    The present invention relates to a new class of surface modified particles and to a multi-step surface modification process for the preparation of the same. The multi-step surface functionalization process involves two or more reactions to produce particles that are compatible with various host systems and/or to provide the particles with particular chemical reactivities. The initial step comprises the attachment of a small organic compound to the surface of the inorganic particle. The subsequent steps attach additional compounds to the previously attached organic compounds through organic linking groups.

  12. Nonlinear coherent optical image processing using logarithmic transmittance of bacteriorhodopsin films

    NASA Astrophysics Data System (ADS)

    Downie, John D.

    1995-08-01

    The transmission properties of some bacteriorhodopsin-film spatial light modulators are uniquely suited to allow nonlinear optical image-processing operations to be applied to images with multiplicative noise characteristics. A logarithmic amplitude-transmission characteristic of the film permits the conversion of multiplicative noise to additive noise, which may then be linearly filtered out in the Fourier plane of the transformed image. I present experimental results demonstrating the principle and the capability for several different image and noise situations, including deterministic noise and speckle. The bacteriorhodopsin film studied here displays the logarithmic transmission response for write intensities spanning a dynamic range greater than 2 orders of magnitude.
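
    The log-transform trick that the film implements optically can be sketched numerically: taking logarithms turns multiplicative noise into additive noise, which a linear filter can then remove before exponentiating back. This is a 1-D toy with synthetic speckle; the kernel size and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.full(1000, 100.0)                          # true signal level
noisy = clean * rng.lognormal(sigma=0.3, size=1000)   # multiplicative speckle

logged = np.log(noisy)                          # noise is now additive, so ...
smoothed = np.convolve(logged, np.ones(25) / 25, mode='same')  # ... filter it
recovered = np.exp(smoothed)                    # map back to intensity
core = recovered[25:-25]                        # discard filter edge effects
```

    In the optical system the logarithm is supplied by the film's transmission characteristic and the linear filtering happens in the Fourier plane, but the homomorphic principle is the same.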

  13. Nonlinear Coherent Optical Image Processing Using Logarithmic Transmittance of Bacteriorhodopsin Films

    NASA Technical Reports Server (NTRS)

    Downie, John D.

    1995-01-01

    The transmission properties of some bacteriorhodopsin-film spatial light modulators are uniquely suited to allow nonlinear optical image-processing operations to be applied to images with multiplicative noise characteristics. A logarithmic amplitude-transmission characteristic of the film permits the conversion of multiplicative noise to additive noise, which may then be linearly filtered out in the Fourier plane of the transformed image. I present experimental results demonstrating the principle and the capability for several different image and noise situations, including deterministic noise and speckle. The bacteriorhodopsin film studied here displays the logarithmic transmission response for write intensities spanning a dynamic range greater than 2 orders of magnitude.

  14. Hyperspectral imaging in medicine: image pre-processing problems and solutions in Matlab.

    PubMed

    Koprowski, Robert

    2015-11-01

    The paper presents problems and solutions related to hyperspectral image pre-processing. New methods of preliminary image analysis are proposed. The paper shows problems occurring in Matlab when trying to analyse this type of image. Moreover, new methods are discussed which provide source code in Matlab that can be used in practice without any licensing restrictions. A proposed application and sample results of hyperspectral image analysis are also presented. PMID:25676816

  15. Hand microscopes and image processing for measurement tasks

    NASA Astrophysics Data System (ADS)

    Ahlers, Rolf-Juergen; Knappe, Bernard

    1997-08-01

    In industrial and medical applications we often encounter tasks where small microscopic structures have to be inspected. The task is not just visualization; it is also necessary to inspect and measure features of an imaged object in an automated, image-processing-based system. A newly developed hand microscope, called MESOP, is presented that allows for high-quality optical inspection of workpieces and objects. Because the image processing and computing environment is realized on a laptop, the hand microscope can be used in a mobile manner. It is positioned in front of the object; the optical head, equipped with telecentric optics and a small camera, captures the image and transfers it to the PCMCIA-based frame grabber, where the images are stored and further processed as the operator requires.

  16. Onboard processing for future space-borne imaging systems

    NASA Technical Reports Server (NTRS)

    Wellman, J. B.; Norris, D. D.

    1978-01-01

    There is a strong rationale for increasing the rate of information return from imaging-class experiments aboard both terrestrial and planetary spacecraft. Future imaging systems will be designed with increased spatial resolution, broader spectral range and more spectral channels (or higher spectral resolution). The data rate implied by these improved performance characteristics can be expected to grow more rapidly than the projected telecommunications capability. One solution to this dilemma is the use of improved onboard data processing. The use of onboard classification processing in a multispectral imager can result in an orders-of-magnitude increase in information transfer for very specific types of imaging tasks. Several of these processing functions are included in the conceptual design of an Infrared Multispectral Imager which would map the spatial distribution of characteristic geologic features associated with deposits of economic minerals.
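
    Onboard classification of the kind proposed can be as simple as a per-pixel minimum-distance-to-mean rule: transmit class labels instead of full multispectral vectors. The sketch below labels two-band spectral vectors against hypothetical class means; the classes and values are ours for illustration.

```python
import numpy as np

def classify(pixels, class_means):
    """Minimum-distance-to-mean classification: label each spectral vector
    with the index of the nearest class mean."""
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return np.argmin(d, axis=1)

class_means = np.array([[0.1, 0.9],      # class 0: e.g. "vegetation"
                        [0.8, 0.2]])     # class 1: e.g. "bare rock"
pixels = np.array([[0.15, 0.85], [0.75, 0.30], [0.20, 0.80]])
labels = classify(pixels, class_means)
```

    Downlinking one label per pixel instead of many spectral channels is where the orders-of-magnitude gain in effective information transfer comes from.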

  17. Subband/Transform MATLAB Functions For Processing Images

    NASA Technical Reports Server (NTRS)

    Glover, D.

    1995-01-01

    SUBTRANS is a software package of routines implementing image-data-processing functions for use with MATLAB(TM) software. It provides the capability to transform image data with block transforms and to produce spatial-frequency subbands of the transformed data. Functions can be cascaded to provide further decomposition into more subbands. The package is also used in image-data-compression systems; for example, transforms are used to prepare data for lossy compression. Written for use in the MATLAB mathematical-analysis environment.
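
    The block-transform subband decomposition SUBTRANS performs can be sketched (in Python rather than MATLAB, with illustrative names, not the package's actual routines) as a single-level 2x2 Haar split into one low-pass and three detail subbands:

```python
import numpy as np

def haar_subbands(img):
    """One level of a 2x2 Haar block transform: split an even-sized
    grayscale image into a low-pass approximation (LL) and three
    spatial-frequency detail subbands (LH, HL, HH)."""
    a = img[0::2, 0::2].astype(float)   # top-left of each 2x2 block
    b = img[0::2, 1::2].astype(float)   # top-right
    c = img[1::2, 0::2].astype(float)   # bottom-left
    d = img[1::2, 1::2].astype(float)   # bottom-right
    ll = (a + b + c + d) / 4.0          # low-frequency approximation
    lh = (a + b - c - d) / 4.0          # horizontal detail
    hl = (a - b + c - d) / 4.0          # vertical detail
    hh = (a - b - c + d) / 4.0          # diagonal detail
    return ll, lh, hl, hh

img = np.arange(16).reshape(4, 4)
ll, lh, hl, hh = haar_subbands(img)
# Cascading haar_subbands on `ll` yields deeper decompositions, as the
# abstract describes; ll + lh + hl + hh recovers the top-left samples.
```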

  18. Reactive nanophase oxide additions to melt-processed high-{Tc} superconductors

    SciTech Connect

    Goretta, K.C.; Brandel, B.P.; Lanagan, M.T.; Hu, J.; Miller, D.J.; Sengupta, S.; Parker, J.C.; Ali, M.N.; Chen, Nan

    1994-10-01

    Nanophase TiO{sub 2} and Al{sub 2}O{sub 3} powders were synthesized by a vapor-phase process and mechanically mixed with stoichiometric YBa{sub 2}Cu{sub 3}O{sub x} and TlBa{sub 2}Ca{sub 2}Cu{sub 3}O{sub x} powders in 20 mole % concentrations. Pellets produced from powders with and without nanophase oxides were heated in air or O{sub 2} above the peritectic melt temperature and slow-cooled. At 4.2 K, the intragranular critical current density (J{sub c}) increased dramatically with the oxide additions. At 35--50 K, effects of the oxide additions were positive, but less pronounced. At 77 K, the additions decreased J{sub c}, probably because they induced a depression of the transition temperature.

  19. Reactive nanophase oxide additions to melt-processed high-T(sub c) superconductors

    NASA Astrophysics Data System (ADS)

    Goretta, K. C.; Brandel, B. P.; Lanagan, M. T.; Hu, J.; Miller, D. J.; Sengupta, S.; Parker, J. C.; Ali, M. N.; Chen, Nan

    1994-10-01

    Nanophase TiO2 and Al2O3 powders were synthesized by a vapor-phase process and mechanically mixed with stoichiometric YBa2Cu3O(x) and TlBa2Ca2Cu3O(x) powders in 20 mole % concentrations. Pellets produced from powders with and without nanophase oxides were heated in air or O2 above the peritectic melt temperature and slow-cooled. At 4.2 K, the intragranular critical current density (J(sub c)) increased dramatically with the oxide additions. At 35-50 K, effects of the oxide additions were positive, but less pronounced. At 77 K, the additions decreased J(sub c), probably because they induced a depression of the transition temperature.

  20. A novel data processing technique for image reconstruction of penumbral imaging

    NASA Astrophysics Data System (ADS)

    Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin

    2011-06-01

    A CT image reconstruction technique was applied to the data processing of penumbral imaging. Compared with traditional processing techniques for penumbral coded-pinhole images, such as Wiener, Lucy-Richardson, and blind deconvolution, this approach is brand new. In this method, the coded-aperture processing was, for the first time, performed independently of the point spread function of the image diagnostic system. In this way, the technical obstacles caused in traditional coded-pinhole image processing by the uncertainty of that point spread function were overcome. Based on the theoretical study, a simulation of penumbral imaging and image reconstruction was then carried out, providing fairly good results. In the visible-light experiment, a point source of light was used to irradiate a 5 mm × 5 mm object after diffuse scattering and volume scattering, and the penumbral image was made with an aperture size of ~20 mm. Finally, the CT image reconstruction technique was used for image reconstruction, again providing a fairly good result.
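
    A hedged sketch of unfiltered backprojection, the basic operation behind CT-style reconstruction of this kind; the nearest-neighbor rotation, the point-source phantom, and all function names are illustrative simplifications, not the authors' implementation:

```python
import numpy as np

def rotate_nn(img, theta):
    """Rotate a square image about its center by theta radians
    (nearest-neighbor resampling; out-of-frame samples become zero)."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    # Inverse-map each output pixel back into the input image.
    x = np.cos(theta) * (xs - c) + np.sin(theta) * (ys - c) + c
    y = -np.sin(theta) * (xs - c) + np.cos(theta) * (ys - c) + c
    inside = (x >= 0) & (x <= n - 1) & (y >= 0) & (y <= n - 1)
    xi = np.clip(np.rint(x).astype(int), 0, n - 1)
    yi = np.clip(np.rint(y).astype(int), 0, n - 1)
    return img[yi, xi] * inside

def project(img, angles):
    """Parallel-beam forward projection: column sums of the rotated image."""
    return [rotate_nn(img, -th).sum(axis=0) for th in angles]

def backproject(sino, angles, n):
    """Unfiltered backprojection: smear each 1-D projection across the
    plane at its acquisition angle and average over all angles."""
    recon = np.zeros((n, n))
    for row, th in zip(sino, angles):
        recon += rotate_nn(np.tile(row, (n, 1)), th)
    return recon / len(angles)

# A single bright point should reconstruct with its peak at the original spot.
n = 33
phantom = np.zeros((n, n))
phantom[16, 16] = 1.0
angles = np.deg2rad(np.arange(0, 180, 10))
recon = backproject(project(phantom, angles), angles, n)
```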

  1. Recent advances in imaging subcellular processes

    PubMed Central

    Myers, Kenneth A.; Janetopoulos, Christopher

    2016-01-01

    Cell biology came about with the ability to first visualize cells. As microscopy techniques advanced, the early microscopists became the first cell biologists to observe the inner workings and subcellular structures that control life. This ability to see organelles within a cell provided scientists with the first understanding of how cells function. The visualization of the dynamic architecture of subcellular structures now often drives questions as researchers seek to understand the intricacies of the cell. With the advent of fluorescent labeling techniques, better and new optical techniques, and more sensitive and faster cameras, a whole array of questions can now be asked. There has been an explosion of new light microscopic techniques, and the race is on to build better and more powerful imaging systems so that we can further our understanding of the spatial and temporal mechanisms controlling molecular cell biology. PMID:27408708

  2. Computer tomography imaging of fast plasmachemical processes

    SciTech Connect

    Denisova, N. V.; Katsnelson, S. S.; Pozdnyakov, G. A.

    2007-11-15

    Results are presented from experimental studies of the interaction of a high-enthalpy methane plasma bunch with gaseous methane in a plasmachemical reactor. The interaction of the plasma flow with the rest gas was visualized by using streak imaging and computer tomography. Tomography was applied for the first time to reconstruct the spatial structure and dynamics of the reagent zones in the microsecond range by the maximum entropy method. The reagent zones were identified from the emission of atomic hydrogen (the H{sub {alpha}} line) and molecular carbon (the Swan bands). The spatiotemporal behavior of the reagent zones was determined, and their relation to the shock-wave structure of the plasma flow was examined.

  3. Recent advances in imaging subcellular processes.

    PubMed

    Myers, Kenneth A; Janetopoulos, Christopher

    2016-01-01

    Cell biology came about with the ability to first visualize cells. As microscopy techniques advanced, the early microscopists became the first cell biologists to observe the inner workings and subcellular structures that control life. This ability to see organelles within a cell provided scientists with the first understanding of how cells function. The visualization of the dynamic architecture of subcellular structures now often drives questions as researchers seek to understand the intricacies of the cell. With the advent of fluorescent labeling techniques, better and new optical techniques, and more sensitive and faster cameras, a whole array of questions can now be asked. There has been an explosion of new light microscopic techniques, and the race is on to build better and more powerful imaging systems so that we can further our understanding of the spatial and temporal mechanisms controlling molecular cell biology. PMID:27408708

  4. Monitoring Residual Solvent Additives and Their Effects in Solution Processed Solar Cells

    NASA Astrophysics Data System (ADS)

    Fogel, Derek M.; Basham, James I.; Engmann, Sebastian; Pookpanratana, Sujitra J.; Bittle, Emily G.; Jurchescu, Oana D.; Gundlach, David J.

    2015-03-01

    High boiling point solvent additives are a widely adopted approach for increasing bulk heterojunction (BHJ) solar cell efficiency. However, experiments show that residual solvent can persist for hours after film deposition, and certain common additives are unstable or reactive. We report here on the effects of residual 1,8-diiodooctane on the electrical performance of poly(3-hexylthiophene-2,5-diyl) (P3HT): phenyl-C71-butyric acid methyl ester (PC[71]BM) BHJ photovoltaic cells. We optimized our fabrication process for efficiency at an active layer thickness of 220 nm, and all devices were processed in parallel to minimize unintentional variations between test structures. The one variable in this study is the active layer post-spin drying time. Immediately following the cathode deposition, we measured the current-voltage characteristics at one-sun equivalent illumination intensity and performed impedance spectroscopy to quantify charge density, lifetime, and recombination processes. Spectroscopic ellipsometry, FTIR, and XPS are also used to monitor residual solvent and are correlated with electrical performance. We find that residual additive degrades performance by increasing the series resistance and lowering efficiency, fill factor, and free carrier lifetime.

  5. Digital images in the map revision process

    NASA Astrophysics Data System (ADS)

    Newby, P. R. T.

    Progress towards the adoption of digital (or softcopy) photogrammetric techniques for database and map revision is reviewed. Particular attention is given to the Ordnance Survey of Great Britain, the author's former employer, where digital processes are under investigation but have not yet been introduced for routine production. Developments which may lead to increasing automation of database update processes appear promising, but because of the cost and practical problems associated with managing as well as updating large digital databases, caution is advised when considering the transition to softcopy photogrammetry for revision tasks.

  6. Results from PIXON-Processed HRC Images of Pluto

    NASA Astrophysics Data System (ADS)

    Young, E. F.; Buie, M. W.; Young, L. A.

    2005-08-01

    We examine the 384 dithered images of Pluto and Charon taken with the Hubble Space Telescope's High Resolution Camera (HRC) under program GO-9391. We have deconvolved the individual images with synthetic point spread functions (PSF) generated with TinyTim v6.3 using PIXON processing (Puetter and Yahil 1999). We reconstruct a surface albedo map of Pluto using a backprojection algorithm. At present, this algorithm does not include Hapke phase function or backscattering parameters. We compare this albedo map to earlier maps based on HST and mutual event observations (e.g., Stern et al. 1997, Young et al. 2001), looking for changes in albedo distribution and B-V color distribution. Pluto's volatile surface ices are closely tied to its atmospheric column abundance, which has doubled in the interval between 1989 and 2002 (Sicardy et al. 2003, Elliot et al. 2003). A slight rise (1.5 K) in the temperature of nitrogen ice would support the thicker atmosphere. We examine the albedo distribution in the context of Pluto's changing atmosphere. Finally, a side effect of the PIXON processing is that we are better able to search for additional satellites in the Pluto-Charon system. We find no satellites within a 12 arcsec radius of Pluto brighter than a 5-sigma upper limit of B=25.9. In between Pluto and Charon this upper limit is degraded to B=22.8 within one Rp of Pluto's surface, improving to B=25.1 at 10 Rp (Charon's semimajor axis). This research was supported by a grant from NASA's Planetary Astronomy Program (NAG5-12516) and STScI grant GO-9391. Elliot, J.L., and 28 co-authors (2003), ``The recent expansion of Pluto's atmosphere," Nature 424, 165-168. R. C. Puetter and A. Yahil (1999), ``The Pixon Method of Image Reconstruction" in Astronomical Data Analysis Software and Systems VIII, D. M. Mehringer, R. L. Plante & D. A. Roberts, eds., ASP Conference Series, 172, pp. 307-316. Sicardy, B. 
and 40 co-authors (2003), ``Large changes in Pluto's atmosphere as revealed by recent

  7. The data processing of the temporarily and spatially mixed modulated polarization interference imaging spectrometer.

    PubMed

    Jian, Xiaohua; Zhang, Chunmin; Zhang, Lin; Zhao, Baochang

    2010-03-15

    Based on the basic imaging theory of the temporally and spatially mixed modulated polarization interference imaging spectrometer (TSMPIIS), a method of obtaining and processing interferograms under polychromatic light is presented. In particular, instead of traditional Fourier transform spectroscopy, the spectrum is reconstructed wavelength by wavelength according to the unique imaging theory and OPD variation of TSMPIIS. In addition, the original experimental interferogram obtained by TSMPIIS is processed in this new way; the satisfying results for the interference data and reconstructed spectrum prove that the method is precise and feasible, which will greatly improve the performance of TSMPIIS. PMID:20389583

  8. Introduction to computers and digital processing in medical imaging

    SciTech Connect

    Kuni, C.C.

    1988-01-01

    The author provides a nontechnical, nonmathematical explanation of computers, programs, peripheral devices, and imaging applications so that radiologists can more completely control the digital devices they use. There are additional reasons for radiologists to understand computers and computing. First, knowledge of computers allows a fundamental understanding of the next generation of imaging devices and leads to more intelligent interpretation of images. Second, recognition of artifacts and system failures is facilitated. Finally, the radiologist with such knowledge will remain a central figure in imaging departments. This book is organized into three sections. The first, a series of five chapters, is devoted to the fundamentals of computers, image formation, manipulation, and display. The second section comprises five chapters, each on a specific modality such as computed tomography or magnetic resonance imaging. The final section is a chapter that discusses networks and archiving.

  9. Fingerprint pattern restoration by digital image processing techniques.

    PubMed

    Wen, Che-Yen; Yu, Chiu-Chung

    2003-09-01

    Fingerprint evidence plays an important role in solving criminal problems. However, defective (lacking information needed for completeness) or contaminated (undesirable information included) fingerprint patterns make identifying and recognizing processes difficult. Unfortunately, this is the usual case. In the recognizing process (enhancement of patterns, or elimination of "false alarms" so that a fingerprint pattern can be searched in the Automated Fingerprint Identification System (AFIS)), chemical and physical techniques have been proposed to improve pattern legibility. In the identifying process, a fingerprint examiner can enhance contaminated (but not defective) fingerprint patterns under guidelines provided by the Scientific Working Group on Friction Ridge Analysis, Study and Technology (SWGFAST), the Scientific Working Group on Imaging Technology (SWGIT), and an AFIS working group within the National Institute of Justice. Recently, image processing techniques have been successfully applied in forensic science. For example, we have applied image enhancement methods to improve the legibility of digital images such as fingerprints and vehicle plate numbers. In this paper, we propose a novel digital image restoration technique based on the AM (amplitude modulation)-FM (frequency modulation) reaction-diffusion method to restore defective or contaminated fingerprint patterns. This method shows its potential application to fingerprint pattern enhancement in the recognizing process (but not the identifying process). Synthetic and real images are used to show the capability of the proposed method. The results of enhancing fingerprint patterns by the manual process and by our method are evaluated and compared. PMID:14535661

  10. From lab to industrial: PZT nanoparticles synthesis and process control for application in additive manufacturing

    NASA Astrophysics Data System (ADS)

    Huang, Hsien-Lin

    Lead Zirconate Titanate (PZT) nanoparticles hold many promising current and future applications, such as PZT ink for 3-D printing or seeds for PZT thick films. One common method is hydrothermal growth, in which temperature, duration time, or mineralizer concentrations are optimized to produce PZT nanoparticles with desired morphology, controlled size, and narrow size distribution. A modified hydrothermal process is used to fabricate PZT nanoparticles. The novelty is to employ a high ramping rate (e.g., 20 deg C/min) to generate abrupt supersaturation and thereby promote burst nucleation of PZT nanoparticles, as well as a fast cooling rate (e.g., 5 deg C/min) for controlled termination of crystal growth. As a result, PZT nanoparticles with a size distribution ranging from 200 nm to 800 nm are obtained with cubic morphology and good crystallinity. The identification of the nanoparticles is confirmed through use of an X-ray diffractometer (XRD). XRD patterns are used to compare sample variations in microstructure, such as the lattice parameter. The cubic morphology and particle size are also examined via SEM images. The hydrothermal process is further modified with excess lead (from 20% wt. to 80% wt.) to significantly reduce the amorphous phase and agglomeration of the PZT nanoparticles. With the modified process, the particle size still remains within the 200 nm to 800 nm range, and the crystal structures (microstructures) of the samples show little variation. Finally, a semi-continuous hydrothermal manufacturing process was developed that substantially reduces the fabrication time while maintaining the same high quality as the nanoparticles prepared in the earlier stage. In this semi-continuous process, a furnace is maintained at the process temperature (200 deg C), whereas autoclaves containing PZT sol are placed in and out of the furnace to control the ramp-up and cooling rates. 
This setup eliminates an extremely time-consuming step of cooling down the furnace, thus saving tremendous amount of

  11. Imaging Implicit Morphological Processing: Evidence from Hebrew

    ERIC Educational Resources Information Center

    Bick, Atira S.; Frost, Ram; Goelman, Gadi

    2010-01-01

    Is morphology a discrete and independent element of lexical structure or does it simply reflect a fine-tuning of the system to the statistical correlation that exists among orthographic and semantic properties of words? Hebrew provides a unique opportunity to examine morphological processing in the brain because of its rich morphological system.…

  12. New Windows based Color Morphological Operators for Biomedical Image Processing

    NASA Astrophysics Data System (ADS)

    Pastore, Juan; Bouchet, Agustina; Brun, Marcel; Ballarin, Virginia

    2016-04-01

    Morphological image processing is well known as an efficient methodology for image processing and computer vision. With the wide use of color in many areas, interest in color perception and processing has been growing rapidly. Many models have been proposed to extend morphological operators to the field of color images, dealing with new problems not present in the binary and gray-level contexts. These solutions usually deal with the lattice structure of the color space, or provide it with total orders, to be able to define basic operators with the required properties. In this work we propose a new locally defined ordering, in the context of window-based morphological operators, for the definition of erosion-like and dilation-like operators, which provides the same desired properties expected from color morphology while avoiding some of the drawbacks of prior approaches. Experimental results show that the proposed color operators can be efficiently used for color image processing.
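
    One simple instance of a window-based color ordering (a luminance-key scheme for illustration, not necessarily the locally defined ordering the authors propose) is to rank the pixels of each window by a scalar key and copy the full color vector of the extremal pixel, which avoids introducing colors absent from the input:

```python
import numpy as np

def color_erode(img, k=3):
    """Window-based color erosion: within each k x k window, select the
    whole RGB vector of the pixel with the smallest luminance. Ordering
    by a scalar key means no 'false colors' are ever created."""
    h, w, _ = img.shape
    pad = k // 2
    lum = img @ np.array([0.299, 0.587, 0.114])   # scalar ordering key
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - pad), min(h, y + pad + 1)
            x0, x1 = max(0, x - pad), min(w, x + pad + 1)
            win = lum[y0:y1, x0:x1]
            iy, ix = np.unravel_index(np.argmin(win), win.shape)
            out[y, x] = img[y0 + iy, x0 + ix]      # copy the full color vector
    return out

# A dark pixel in a white field spreads under erosion, as in gray-level morphology.
img = np.ones((5, 5, 3))
img[2, 2] = 0.0
eroded = color_erode(img)
```

A dilation-like operator follows by replacing `argmin` with `argmax`.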

  13. Image processing for flight crew enhanced situation awareness

    NASA Technical Reports Server (NTRS)

    Roberts, Barry

    1993-01-01

    This presentation describes the image processing work that is being performed for the Enhanced Situational Awareness System (ESAS) application. Specifically, the presented work supports the Enhanced Vision System (EVS) component of ESAS.

  14. Application of digital image processing techniques to astronomical imagery 1977

    NASA Technical Reports Server (NTRS)

    Lorre, J. J.; Lynn, D. J.

    1978-01-01

    Nine specific techniques, or combinations of techniques, developed for applying digital image processing technology to existing astronomical imagery are described. Photoproducts are included to illustrate the results of each of these investigations.

  15. Halftoning processing on a JPEG-compressed image

    NASA Astrophysics Data System (ADS)

    Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent

    2003-12-01

    Digital image processing algorithms are usually designed for the raw format, that is, an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; the result of the processing is then re-compressed for further transfer or storage. This change of data representation is resource-consuming in terms of computation, time, and memory usage. In the wide-format printing industry, this becomes an important issue: e.g., a 1 m2 input color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation on the compressed format. This paper presents an innovative application of the halftoning operation by screening, applied to a JPEG-compressed image. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated by examples for different halftone masks. A pre-sharpening operation applied to a JPEG-compressed low-quality image is also described; it de-noises the image and enhances its contours.
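
    The screening operation itself, before any move to the DCT domain, is a comparison of each pixel against a periodically tiled threshold mask. A minimal spatial-domain sketch with a standard 4x4 Bayer screen (an assumed example mask; the paper's own masks and its compressed-domain thresholding are not reproduced here):

```python
import numpy as np

# Classical 4x4 Bayer matrix used as a periodic threshold screen.
BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]])

def screen_halftone(gray):
    """Ordered-dither screening: tile the threshold mask over the image
    and turn a dot on wherever the pixel exceeds the local threshold."""
    h, w = gray.shape
    reps = (-(-h // 4), -(-w // 4))                 # ceil-divide tiling counts
    mask = np.tile((BAYER4 + 0.5) * (255.0 / 16.0), reps)[:h, :w]
    return (gray > mask).astype(np.uint8)           # 1 = dot on

flat = np.full((8, 8), 128, dtype=np.uint8)
halftone = screen_halftone(flat)                    # mid-gray -> ~50% dot coverage
```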

  16. Design and implementation of non-linear image processing functions for CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Musa, Purnawarman; Sudiro, Sunny A.; Wibowo, Eri P.; Harmanto, Suryadi; Paindavoine, Michel

    2012-11-01

    Today, solid-state image sensors are used in many applications, such as mobile phones, video surveillance systems, embedded medical imaging, and industrial vision systems. These image sensors require the integration, in or near the focal plane, of complex image processing algorithms. Such devices must meet constraints related to the quality of acquired images, the speed and performance of embedded processing, and low power consumption. To achieve these objectives, low-level analog processing allows the useful information in the scene to be extracted directly. For example, an edge detection step followed by local maxima extraction facilitates high-level processing such as object pattern recognition in a visual scene. Our goal was to design an intelligent image sensor prototype achieving high-speed image acquisition and non-linear image processing (such as local minima and maxima calculations). For this purpose, we present in this article the design and test of a 64×64 pixel image sensor built in a standard 0.35 μm CMOS technology and including non-linear image processing. The architecture of our sensor, named nLiRIC (non-Linear Rapid Image Capture), is based on the implementation of an analog Minima/Maxima Unit (MMU). This MMU calculates the minimum and maximum values (non-linear functions), in real time, in a 2×2 pixel neighbourhood. Each MMU needs 52 transistors, and the pitch of one pixel is 40×40 μm. The total area of the 64×64 pixel array is 12.5 mm2. Our tests have shown the validity of the main functions of our new image sensor, such as fast image acquisition (10K frames per second) and minima/maxima calculations in less than 1 ms.
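
    The Minima/Maxima Unit's behavior is easy to state in software terms: for every 2x2 pixel neighbourhood, output the local minimum and maximum. A NumPy sketch (a functional model only; the actual MMU is a 52-transistor analog circuit, and the function name is illustrative):

```python
import numpy as np

def mmu(img):
    """Software analogue of the sensor's Minima/Maxima Unit: for every
    2x2 pixel neighbourhood, return the local minimum and maximum."""
    quads = np.stack([img[:-1, :-1], img[:-1, 1:],
                      img[1:, :-1], img[1:, 1:]])
    return quads.min(axis=0), quads.max(axis=0)

frame = np.array([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]])
mn, mx = mmu(frame)   # mn[0, 0] is min of {1, 2, 4, 5}; mx[0, 0] its max
```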

  17. ELAS: A powerful, general purpose image processing package

    NASA Technical Reports Server (NTRS)

    Walters, David; Rickman, Douglas

    1991-01-01

    ELAS is a software package which has been utilized as an image processing tool for more than a decade. It has been the source of several commercial packages. Now available on UNIX workstations it is a very powerful, flexible set of software. Applications at Stennis Space Center have included a very wide range of areas including medicine, forestry, geology, ecological modeling, and sonar imagery. It remains one of the most powerful image processing packages available, either commercially or in the public domain.

  18. Design and tuning of standard additive model based fuzzy PID controllers for multivariable process systems.

    PubMed

    Harinath, Eranda; Mann, George K I

    2008-06-01

    This paper describes a design and two-level tuning method for fuzzy proportional-integral-derivative (FPID) controllers for a multivariable process, where the fuzzy inference uses the standard additive model. The proposed method can be used for any n x n multi-input-multi-output process and guarantees closed-loop stability. In the two-level tuning scheme, the tuning follows two steps: low-level tuning followed by high-level tuning. The low-level tuning adjusts apparent linear gains, whereas the high-level tuning changes the nonlinearity in the normalized fuzzy output. In this paper, two types of FPID configurations are considered, and their performances are evaluated by using a real-time multizone temperature control problem having a 3 x 3 process system. PMID:18558531
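
    The standard additive model underlying such controllers computes its crisp output as an activation-weighted average of rule-consequent centroids. A minimal single-input sketch with a hypothetical three-rule base (the paper's actual rule bases, memberships, and gains are not given here):

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

def sam_infer(x, rules):
    """Standard additive model inference: the crisp output is the
    activation-weighted average of the rule-consequent centroids."""
    acts = [fit(x) for fit, _ in rules]
    total = sum(acts)
    return sum(a * c for a, (_, c) in zip(acts, rules)) / total if total else 0.0

# Hypothetical rule base mapping a normalized error to a control increment.
rules = [(lambda e: tri(e, -2.0, -1.0, 0.0), -1.0),  # negative error -> decrease
         (lambda e: tri(e, -1.0,  0.0, 1.0),  0.0),  # near-zero error -> hold
         (lambda e: tri(e,  0.0,  1.0, 2.0),  1.0)]  # positive error -> increase
u = sam_infer(0.5, rules)  # two rules fire equally; output lands midway
```

In a full FPID, the low-level tuning of the paper would scale the inputs and output of such an inference block, while the high-level tuning reshapes its nonlinearity.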

  19. Diffused Matrix Format: A New Storage and Processing Format for Airborne Hyperspectral Sensor Images

    PubMed Central

    Martínez, Pablo; Cristo, Alejandro; Koch, Magaly; Pérez, Rosa Mª.; Schmid, Thomas; Hernández, Luz M.

    2010-01-01

    At present, hyperspectral images are mainly obtained with airborne sensors that are subject to turbulence while the spectrometer is acquiring the data. Therefore, geometric corrections are required to produce spatially correct images for visual interpretation and change detection analysis. This paper analyzes the data acquisition process of airborne sensors. The main objective is to propose a new data format called Diffused Matrix Format (DMF), adapted to the sensor's characteristics, including its spectral and spatial information. The second objective is to compare the accuracy of the quantitative maps derived by using the DMF data structure with those obtained from raster images based on traditional data structures. Results show that DMF processing is more accurate and straightforward than conventional image processing of remotely sensed data, with the advantage that the DMF file structure requires less storage space than other data formats. In addition, the data processing time does not increase when DMF is used. PMID:22399919

  20. Image processing system design for microcantilever-based optical readout infrared arrays

    NASA Astrophysics Data System (ADS)

    Tong, Qiang; Dong, Liquan; Zhao, Yuejin; Gong, Cheng; Liu, Xiaohua; Yu, Xiaomei; Yang, Lei; Liu, Weiyu

    2012-12-01

    Compared with traditional infrared imaging technology, the new type of optical-readout uncooled infrared imaging technology based on MEMS has many advantages, such as low cost, small size, and simple fabrication. In addition, theory predicts high thermal detection sensitivity for this technology, so it has very broad application prospects in the field of high-performance infrared detection. The paper focuses on an image capturing and processing system for this new MEMS-based optical-readout uncooled infrared imaging technology. The system consists of software and hardware. We build our core image-processing hardware platform on TI's high-performance TMS320DM642 DSP chip, and design our image capture board around Micron's MT9P031, a high-frame-rate, low-power-consumption CMOS sensor. Finally, we use Intel's LXT971A network transceiver to design the network output board. The software system is built on the real-time operating system DSP/BIOS. We design our video capture driver based on TI's class/mini-driver model and our network output program based on the NDK kit, for image capturing, processing, and transmission. Experiments show that the system offers high capture resolution and fast processing speed; the network transmission speed is up to 100 Mbps.

  1. Foreword: Additive Manufacturing: Interrelationships of Fabrication, Constitutive Relationships Targeting Performance, and Feedback to Process Control

    DOE PAGESBeta

    Carpenter, John S.; Beese, Allison M.; Bourell, David L.; Hamilton, Reginald F.; Mishra, Rajiv; Sears, James

    2015-06-26

    Additive manufacturing (AM) offers distinct advantages over conventional manufacturing processes, including the capability to both build and repair complex part shapes; to integrate and consolidate parts and thus overcome joining concerns; and to locally tailor material compositions as well as properties. Moreover, a variety of fields such as aerospace, military, automotive, and biomedical are employing this manufacturing technique as a way to decrease costs, increase manufacturing agility, and explore novel geometry/functionalities. In order to increase acceptance of AM as a viable processing method, pathways for qualifying both the material and the process need to be developed and, perhaps, standardized. This symposium was designed to serve as a venue for the international AM community—including government, academia, and industry—to define the fundamental interrelationships between feedstock, processing, microstructure, shape, mechanical behavior/materials properties, and function/performance. Eventually, insight into the connections between processing, microstructure, property, and performance will be achieved through experimental observations, theoretical advances, and computational modeling of physical processes. Finally, once this insight matures, AM will be able to move from the realm of making parts to making qualified materials that are certified for use with minimal need for post-fabrication characterization.

  2. Foreword: Additive Manufacturing: Interrelationships of Fabrication, Constitutive Relationships Targeting Performance, and Feedback to Process Control

    SciTech Connect

    Carpenter, John S.; Beese, Allison M.; Bourell, David L.; Hamilton, Reginald F.; Mishra, Rajiv; Sears, James

    2015-06-26

    Additive manufacturing (AM) offers distinct advantages over conventional manufacturing processes including the capability to both build and repair complex part shapes; to integrate and consolidate parts and thus overcome joining concerns; and to locally tailor material compositions as well as properties. Moreover, a variety of fields such as aerospace, military, automotive, and biomedical are employing this manufacturing technique as a way to decrease costs, increase manufacturing agility, and explore novel geometry/functionalities. In order to increase acceptance of AM as a viable processing method, pathways for qualifying both the material and the process need to be developed and, perhaps, standardized. This symposium was designed to serve as a venue for the international AM community—including government, academia, and industry—to define the fundamental interrelationships between feedstock, processing, microstructure, shape, mechanical behavior/materials properties, and function/performance. Eventually, insight into the connections between processing, microstructure, property, and performance will be achieved through experimental observations, theoretical advances, and computational modeling of physical processes. Finally, once this insight matures, AM will be able to move from the realm of making parts to making qualified materials that are certified for use with minimal need for post-fabrication characterization.

  3. Digital interactive image analysis by array processing

    NASA Technical Reports Server (NTRS)

    Sabels, B. E.; Jennings, J. D.

    1973-01-01

    An attempt is made to draw a parallel between the existing geophysical data processing service industries and the emerging earth resources data support requirements. The relationship of seismic data analysis to ERTS data analysis is natural because in either case data is digitally recorded in the same format, resulting from remotely sensed energy which has been reflected, attenuated, shifted and degraded on its path from the source to the receiver. In the seismic case the energy is acoustic, ranging in frequencies from 10 to 75 cps, for which the lithosphere appears semi-transparent. In earth survey remote sensing through the atmosphere, visible and infrared frequency bands are being used. Yet the hardware and software required to process the magnetically recorded data from the two realms of inquiry are identical and similar, respectively. The resulting data products are similar.

  4. Multimission image processing and science data visualization

    NASA Technical Reports Server (NTRS)

    Green, William B.

    1993-01-01

    The Operational Science Analysis (OSA) functional area supports science instrument data display, analysis, visualization, and photo processing in support of flight operations of planetary spacecraft managed by the Jet Propulsion Laboratory (JPL). This paper describes the data products generated by the OSA functional area and the current computer system used to generate them. The objectives of a system upgrade now in process are described. The design approach to development of the new system is reviewed, including the use of the Unix operating system and X Window display standards to provide platform independence, portability, and modularity. The new system should provide a modular and scalable capability supporting a variety of future missions at JPL.

  5. The design of a distributed image processing and dissemination system

    SciTech Connect

    Rafferty, P.; Hower, L.

    1990-01-01

    The design and implementation of a distributed image processing and dissemination system was undertaken and accomplished as part of a prototype communication and intelligence (CI) system, the contingency support system (CSS), which is intended to support contingency operations of the Tactical Air Command. The system consists of six (6) Sun 3/180C workstations with integrated ITEX image processors and three (3) 3/50 diskless workstations located at four (4) system nodes (INEL, base, and mobiles). All 3/180C workstations are capable of image system server functions, whereas the 3/50s are image system clients only. Distribution is accomplished via both local and wide area networks using standard Defense Data Network (DDN) protocols (i.e., TCP/IP, et al.) and Defense Satellite Communication Systems (DSCS) compatible SHF Transportable Satellite Earth Terminals (TSET). Image applications utilize Sun's Remote Procedure Call (RPC) to facilitate the image system client and server relationships. The system provides functions to acquire, display, annotate, process, transfer, and manage images via an icon-, panel-, and menu-oriented SunView (trademark) based user interface. Image spatial resolution is 512 × 480 with 8 bits/pixel black and white and 12/24 bits/pixel color, depending on system configuration. Compression is used during various image display and transmission functions to reduce the dynamic range of image data to 12/6/3/2 bits/pixel, depending on the application. Image acquisition is accomplished in real time or near real time by special-purpose ITEX image hardware. As a result, all image displays are highly interactive, with attention given to subsecond response time. 3 refs., 7 figs.
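
    As a rough illustration of the kind of dynamic-range reduction the abstract describes (12-bit image data compressed toward 6, 3, or 2 bits/pixel for display and transmission), the sketch below requantizes an image by discarding low-order bits. It is a hypothetical software stand-in; the CSS performed this with dedicated ITEX hardware, and the function name is an assumption.

```python
import numpy as np

def requantize(image, src_bits=12, dst_bits=6):
    """Reduce bit depth by discarding low-order bits (hypothetical
    software stand-in for the hardware compression described)."""
    shift = src_bits - dst_bits
    return (image.astype(np.uint16) >> shift).astype(np.uint8)

# a full-range 12-bit ramp, 0..4095
ramp = np.arange(4096, dtype=np.uint16)
reduced = requantize(ramp, src_bits=12, dst_bits=6)  # now 0..63
```

    The same shift-based scheme covers the 3- and 2-bit cases by changing `dst_bits`.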

  6. The research on image processing technology of the star tracker

    NASA Astrophysics Data System (ADS)

    Li, Yu-ming; Li, Chun-jiang; Zheng, Ran; Li, Xiao; Yang, Jun

    2014-11-01

    As the core of visual sensitivity via imaging, image processing technology for the star tracker is mainly characterized by items such as image exposure, optimal storage, background estimation, feature correction, target extraction, and iteration compensation. This paper first summarizes recent research on those items at home and abroad and then, drawing on the star tracker's practical engineering, in-orbit environment, and lifetime information, presents an architecture for rapid fusion of multiple frame images. The fusion can be used to restrain oversaturation of the effective pixels, making the star tracker more precise, more robust, and more stable.
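
    The abstract does not detail the fusion architecture, but the idea of combining multiple frames while restraining saturated pixels can be sketched as follows. The function name and the saturation handling are assumptions for illustration, not the paper's method.

```python
import numpy as np

def fuse_frames(frames, saturation_level=255.0):
    """Average co-registered frames pixel-wise, excluding saturated
    samples so bright stars are not clipped (illustrative assumption)."""
    stack = np.stack(frames).astype(np.float64)
    valid = stack < saturation_level            # saturated pixels masked out
    counts = valid.sum(axis=0)
    sums = (stack * valid).sum(axis=0)
    # where every frame saturated, report the saturation level itself
    return np.where(counts > 0, sums / np.maximum(counts, 1), saturation_level)

# two frames of the same scene; one pixel saturates in the first frame only
f1 = np.array([[100.0, 255.0]])
f2 = np.array([[102.0, 200.0]])
fused = fuse_frames([f1, f2])   # [[101.0, 200.0]]
```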

  7. Particle sizing in rocket motor studies utilizing hologram image processing

    NASA Technical Reports Server (NTRS)

    Netzer, David; Powers, John

    1987-01-01

    A technique of obtaining particle size information from holograms of combustion products is described. The holograms are obtained with a pulsed ruby laser through windows in a combustion chamber. The reconstruction is done with a krypton laser with the real image being viewed through a microscope. The particle size information is measured with a Quantimet 720 image processing system which can discriminate various features and perform measurements of the portions of interest in the image. Various problems that arise in the technique are discussed, especially those that are a consequence of the speckle due to the diffuse illumination used in the recording process.

  8. Using image processing techniques on proximity probe signals in rotordynamics

    NASA Astrophysics Data System (ADS)

    Diamond, Dawie; Heyns, Stephan; Oberholster, Abrie

    2016-06-01

    This paper proposes a new approach to processing proximity probe signals in rotordynamic applications. It is argued that the signal can be interpreted as a one-dimensional image. Existing image processing techniques can then be used to gain information about the object being measured. Some results from one application are presented. Rotor blade tip deflections can be calculated by localizing phase information in this one-dimensional image. It is experimentally shown that the newly proposed method performs more accurately than standard techniques, especially where the sampling rate of the data acquisition system is inadequate by conventional standards.
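
    Localizing phase information in a one-dimensional signal is commonly done via the analytic signal; the sketch below computes instantaneous phase with an FFT-based Hilbert transform. This is a generic technique assumed for illustration, not necessarily the authors' exact method.

```python
import numpy as np

def analytic_phase(signal):
    """Instantaneous phase of a 1D signal via an FFT-based Hilbert
    transform (generic technique; numpy only)."""
    n = len(signal)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0        # keep positive frequencies, doubled
    if n % 2 == 0:
        h[n // 2] = 1.0            # Nyquist bin for even lengths
    analytic = np.fft.ifft(np.fft.fft(signal) * h)
    return np.angle(analytic)

# a 50-cycle carrier standing in for a proximity probe trace
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
phase = analytic_phase(np.cos(2 * np.pi * 50 * t))
```

    Shifts of the unwrapped phase between scan lines would then encode arrival-time (deflection) information.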

  9. Live 3D image overlay for arterial duct closure with Amplatzer Duct Occluder II additional size.

    PubMed

    Goreczny, Sebastian; Morgan, Gareth J; Dryzek, Pawel

    2016-03-01

    Despite several reports describing echocardiography for the guidance of ductal closure, two-dimensional angiography remains the mainstay imaging tool; three-dimensional rotational angiography has the potential to overcome some of the drawbacks of standard angiography, and reconstructed image overlay provides reliable guidance for device placement. We describe arterial duct closure solely from a venous approach guided by live three-dimensional image overlay. PMID:26358032

  10. Digital processing of side-scan sonar data with the Woods Hole image processing system software

    USGS Publications Warehouse

    Paskevich, Valerie F.

    1992-01-01

    Since 1985, the Branch of Atlantic Marine Geology has been involved in collecting, processing and digitally mosaicking high- and low-resolution side-scan sonar data. Recent development of a UNIX-based image-processing software system includes a series of task-specific programs for processing side-scan sonar data. This report describes the steps required to process the collected data and to produce an image that has equal along- and across-track resolution.

  11. A hybrid pyramid multiprocessor system for image processing. Volumes I and II

    SciTech Connect

    Chen, Inching.

    1989-01-01

    Various multiprocessor architectures have been considered by many researchers to handle the high computational requirements of image processing and analysis applications. However, many of these architectures are efficient only for a small class of image processing algorithms. In this research, a multiprocessor system has been proposed, designed and constructed taking into consideration various input-output and other characteristics of image processing applications. It is a hybrid pyramid with five 68020-68881 based processor nodes in the top two layers and sixteen DSP56001 based processor nodes in the third layer. The DSP (RISC) processor nodes at the bottom level are optimized for low-level image processing operations, and the CISC (68020) processor nodes handle high-level tasks more efficiently. Experiments using algorithms that operate on neighborhoods of different sizes have shown consistent improvement in performance when the FIFO cache is enabled; larger neighborhoods result in greater savings in time. Preliminary tests indicate that the top five processor nodes can execute five times as fast as a single node for many image processing tasks. Finally, the versatile image I/O with the MMU has created a simpler programming environment, while facilitating various I/O structures. The OSU pyramid is a general-purpose image processing system, utilizing a pyramidal architecture of hybrid processors, with additional hardware to retain the advantageous features of array processors, as well as to overcome some of the inherent deficiencies of pipeline processors and cellular arrays.

  12. Image processing using smooth ordering of its patches.

    PubMed

    Ram, Idan; Elad, Michael; Cohen, Israel

    2013-07-01

    We propose an image processing scheme based on reordering of its patches. For a given corrupted image, we extract all patches with overlaps, refer to these as coordinates in high-dimensional space, and order them such that they are chained in the "shortest possible path," essentially solving the traveling salesman problem. The obtained ordering applied to the corrupted image implies a permutation of the image pixels to what should be a regular signal. This enables us to obtain good recovery of the clean image by applying relatively simple one-dimensional smoothing operations (such as filtering or interpolation) to the reordered set of pixels. We explore the use of the proposed approach to image denoising and inpainting, and show promising results in both cases. PMID:23591494
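
    A minimal sketch of the scheme's core idea, with a greedy nearest-neighbour walk standing in for the approximate traveling-salesman solver and a moving average standing in for the paper's 1D filters; all names are illustrative assumptions:

```python
import numpy as np

def greedy_order(patches):
    """Approximate the 'shortest possible path' through patch space
    with a greedy nearest-neighbour walk (a simple TSP stand-in)."""
    n = len(patches)
    unused = set(range(n))
    order = [0]
    unused.remove(0)
    while unused:
        last = patches[order[-1]]
        nxt = min(unused, key=lambda j: np.sum((patches[j] - last) ** 2))
        order.append(nxt)
        unused.remove(nxt)
    return np.array(order)

def smooth_by_reordering(image, patch=3):
    """Reorder pixels so their patches form a smooth 1D path, apply a
    simple 1D filter, then undo the permutation (illustrative sketch)."""
    h, w = image.shape
    padded = np.pad(image, patch // 2, mode='edge')
    patches = np.array([padded[i:i + patch, j:j + patch].ravel()
                        for i in range(h) for j in range(w)])
    order = greedy_order(patches)
    pixels = image.ravel()[order]
    kernel = np.ones(3) / 3.0                     # moving-average smoothing
    smoothed = np.convolve(pixels, kernel, mode='same')
    out = np.empty_like(smoothed)
    out[order] = smoothed                         # invert the permutation
    return out.reshape(h, w)

noisy = np.array([[0.0, 0.0, 10.0], [0.0, 0.0, 10.0], [0.0, 0.0, 10.0]])
result = smooth_by_reordering(noisy)
```

    Because similar patches end up adjacent after reordering, the 1D filter mixes like pixels with like, which is what lets such simple operators denoise or inpaint effectively.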

  13. Digital image processing of bone - Problems and potentials

    NASA Technical Reports Server (NTRS)

    Morey, E. R.; Wronski, T. J.

    1980-01-01

    The development of a digital image processing system for bone histomorphometry and fluorescent marker monitoring is discussed. The system in question is capable of making measurements of UV or light microscope features on a video screen with either video or computer-generated images, and comprises a microscope, low-light-level video camera, video digitizer and display terminal, color monitor, and PDP 11/34 computer. Capabilities demonstrated in the analysis of an undecalcified rat tibia include the measurement of perimeter and total bone area, and the generation of microscope images, false color images, digitized images and contoured images for further analysis. Software development will be based on an existing software library, specifically the mini-VICAR system developed at JPL. It is noted that the potentials of the system in terms of speed and reliability far exceed any problems associated with hardware and software development.
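
    Measurements such as total bone area and perimeter on a thresholded section reduce to simple binary-mask operations; below is a sketch of that idea, not the mini-VICAR implementation, and the boundary-pixel perimeter estimate is an assumed simplification.

```python
import numpy as np

def area_and_perimeter(mask):
    """Pixel-count area and a boundary-pixel perimeter estimate for a
    binary mask (illustrative; real histomorphometry routines differ)."""
    area = int(mask.sum())
    padded = np.pad(mask, 1)
    # a foreground pixel is on the boundary if any 4-neighbour is background
    boundary = mask & ~(padded[:-2, 1:-1] & padded[2:, 1:-1] &
                        padded[1:-1, :-2] & padded[1:-1, 2:])
    return area, int(boundary.sum())

# a 4x4 solid region inside a 6x6 field
mask = np.zeros((6, 6), dtype=bool)
mask[1:5, 1:5] = True
area, perim = area_and_perimeter(mask)   # 16 pixels, 12 on the boundary
```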

  14. Color image processing and object tracking workstation

    NASA Technical Reports Server (NTRS)

    Klimek, Robert B.; Paulick, Michael J.

    1992-01-01

    A system is described for automatic and semiautomatic tracking of objects on film or video tape which was developed to meet the needs of the microgravity combustion and fluid science experiments at NASA Lewis. The system consists of individual hardware parts working under computer control to achieve a high degree of automation. The most important hardware parts include 16 mm film projector, a lens system, a video camera, an S-VHS tapedeck, a frame grabber, and some storage and output devices. Both the projector and tapedeck have a computer interface enabling remote control. Tracking software was developed to control the overall operation. In the automatic mode, the main tracking program controls the projector or the tapedeck frame incrementation, grabs a frame, processes it, locates the edge of the objects being tracked, and stores the coordinates in a file. This process is performed repeatedly until the last frame is reached. Three representative applications are described. These applications represent typical uses and include tracking the propagation of a flame front, tracking the movement of a liquid-gas interface with extremely poor visibility, and characterizing a diffusion flame according to color and shape.
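
    The automatic-mode loop (increment frame, grab, process, locate the edge, store coordinates) can be sketched on synthetic frames; the thresholded column-scan edge locator here is a hypothetical simplification of whatever processing a given experiment needs.

```python
import numpy as np

def track_front(frames, threshold=0.5):
    """For each frame: process, locate the leftmost bright column, and
    store its coordinate (toy edge locator, assumed for illustration)."""
    coords = []
    for frame in frames:
        bright = frame.mean(axis=0) > threshold
        hits = np.flatnonzero(bright)
        coords.append(int(hits[0]) if hits.size else -1)   # -1: no front found
    return coords

# synthetic flame front advancing one column per frame
frames = []
for k in range(3):
    f = np.zeros((4, 10))
    f[:, 5 - k:] = 1.0          # bright region grows leftward
    frames.append(f)
positions = track_front(frames)  # [5, 4, 3]
```

    In the real system the loop body would also command the projector or tapedeck to step to the next frame before grabbing.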

  15. Image processing for improved eye-tracking accuracy

    NASA Technical Reports Server (NTRS)

    Mulligan, J. B.; Watson, A. B. (Principal Investigator)

    1997-01-01

    Video cameras provide a simple, noninvasive method for monitoring a subject's eye movements. An important concept is that of the resolution of the system, which is the smallest eye movement that can be reliably detected. While hardware systems are available that estimate direction of gaze in real-time from a video image of the pupil, such systems must limit image processing to attain real-time performance and are limited to a resolution of about 10 arc minutes. Two ways to improve resolution are discussed. The first is to improve the image processing algorithms that are used to derive an estimate. Off-line analysis of the data can improve resolution by at least one order of magnitude for images of the pupil. A second avenue by which to improve resolution is to increase the optical gain of the imaging setup (i.e., the amount of image motion produced by a given eye rotation). Ophthalmoscopic imaging of retinal blood vessels provides increased optical gain and improved immunity to small head movements but requires a highly sensitive camera. The large number of images involved in a typical experiment imposes great demands on the storage, handling, and processing of data. A major bottleneck had been the real-time digitization and storage of large amounts of video imagery, but recent developments in video compression hardware have made this problem tractable at a reasonable cost. Images of both the retina and the pupil can be analyzed successfully using a basic toolbox of image-processing routines (filtering, correlation, thresholding, etc.), which are, for the most part, well suited to implementation on vectorizing supercomputers.
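
    One element of the basic toolbox mentioned (thresholding) applied to pupil localization: a sketch that estimates the pupil centre as the centroid of dark pixels. The function name, threshold value, and synthetic frame are illustrative assumptions.

```python
import numpy as np

def pupil_centroid(image, threshold):
    """Centre of the dark (below-threshold) region as the mean of its
    pixel coordinates -- a toolbox-style thresholding step, sketched."""
    ys, xs = np.nonzero(image < threshold)
    return ys.mean(), xs.mean()

# synthetic frame: dark pupil disc on a bright background
yy, xx = np.mgrid[0:64, 0:64]
image = np.where((yy - 20.0) ** 2 + (xx - 40.0) ** 2 < 100.0, 0.1, 0.9)
cy, cx = pupil_centroid(image, threshold=0.5)   # close to (20, 40)
```

    Because the centroid averages over many boundary pixels, it can resolve sub-pixel motion, which is one route to the order-of-magnitude resolution gain the abstract describes for off-line analysis.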

  16. Medical Image Processing Using Real-Time Optical Fourier Technique

    NASA Astrophysics Data System (ADS)

    Rao, D. V. G. L. N.; Panchangam, Appaji; Sastry, K. V. L. N.; Material Science Team

    2001-03-01

    Optical image processing techniques are inherently fast in view of parallel processing. A self-adaptive optical Fourier processing system using photoinduced dichroism in a bacteriorhodopsin film was experimentally demonstrated for medical image processing. Application of this powerful analog all-optical interactive technique to cancer diagnostics is illustrated with mammograms and Pap smears. Microcalcification clusters buried in surrounding tissue showed up clearly in the processed image. By adjusting one knob, which rotates the analyzer in the optical system, either the microcalcification clusters or the surrounding dense tissue can be selectively displayed. Bacteriorhodopsin films are stable up to 140 °C and environmentally friendly. As no interference is involved in the experiments, vibration isolation and even a coherent light source are not required. It may be possible to develop a low-cost, rugged, battery-operated, portable signal-enhancing magnifier.

  17. Applications of nuclear magnetic resonance imaging in process engineering

    NASA Astrophysics Data System (ADS)

    Gladden, Lynn F.; Alexander, Paul

    1996-03-01

    During the past decade, the application of nuclear magnetic resonance (NMR) imaging techniques to problems of relevance to the process industries has been identified. The particular strengths of NMR techniques are their ability to distinguish between different chemical species and to yield information simultaneously on the structure, concentration distribution and flow processes occurring within a given process unit. In this paper, examples of specific applications in the areas of materials and food processing, transport in reactors and two-phase flow are discussed. One specific study, that of the internal structure of a packed column, is considered in detail. This example is reported to illustrate the extent of new, quantitative information of generic importance to many processing operations that can be obtained using NMR imaging in combination with image analysis.

  18. Integration of Consonant and Pitch Processing as Revealed by the Absence of Additivity in Mismatch Negativity

    PubMed Central

    Gong, Diankun; Chen, Sifan; Kendrick, Keith M.; Yao, Dezhong

    2012-01-01

    Consonants, unlike vowels, are thought to be speech specific and therefore no interactions would be expected between consonants and pitch, a basic element for musical tones. The present study used an electrophysiological approach to investigate whether, contrary to this view, there is integrative processing of consonants and pitch by measuring additivity of changes in the mismatch negativity (MMN) of evoked potentials. The MMN is elicited by discriminable variations occurring in a sequence of repetitive, homogeneous sounds. In the experiment, event-related potentials (ERPs) were recorded while participants heard frequently sung consonant-vowel syllables and rare stimuli deviating in either consonant identity only, pitch only, or in both dimensions. Every type of deviation elicited a reliable MMN. As expected, the two single-deviant MMNs had similar amplitudes, but that of the double-deviant MMN was also not significantly different from them. This absence of additivity in the double-deviant MMN suggests that consonant and pitch variations are processed, at least at a pre-attentive level, in an integrated rather than independent way. Domain-specificity of consonants may depend on higher-level processes in the hierarchy of speech perception. PMID:22693614
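
    The additivity logic can be made concrete with a toy calculation: if consonant and pitch deviations were processed independently, the double-deviant MMN should approximate the sum of the two single-deviant MMNs. The index and the amplitude values below are hypothetical illustrations, not the paper's statistical procedure.

```python
def additivity_index(mmn_a, mmn_b, mmn_double):
    """Ratio of the double-deviant MMN to the sum of the single-deviant
    MMNs: near 1 suggests independent (additive) processing; markedly
    below 1 suggests integrated processing. Hypothetical illustration."""
    return mmn_double / (mmn_a + mmn_b)

# made-up amplitudes in microvolts (the MMN is a negativity)
idx = additivity_index(mmn_a=-2.0, mmn_b=-2.1, mmn_double=-2.2)
```

    A double-deviant MMN comparable to a single-deviant MMN, as reported, would yield an index near 0.5 rather than 1.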

  19. Digital image processing: a primer for JVIR authors and readers: part 2: digital image acquisition.

    PubMed

    LaBerge, Jeanne M; Andriole, Katherine P

    2003-11-01

    This is the second installment of a three-part series on digital image processing intended to prepare authors for online submission of manuscripts. In the first article of the series, we reviewed the fundamentals of digital image architecture. In this article, we describe the ways that an author can import digital images to the computer desktop. We explore the modern imaging network and explain how to import picture archiving and communications systems (PACS) images to the desktop. Options and techniques for producing digital hard copy film are also presented. PMID:14605101

  20. Vitrification of F006 plating waste sludge by Reactive Additive Stabilization Process (RASP)

    SciTech Connect

    Martin, H.L.; Jantzen, C.M.; Pickett, J.B.

    1994-06-01

    Solidification into glass of nickel-on-uranium plating wastewater treatment plant sludge (F006 Mixed Waste) has been demonstrated at the Savannah River Site (SRS). Vitrification using high-surface-area additives, the Reactive Additive Stabilization Process (RASP), greatly enhanced the solubility and retention of heavy metals in glass. The bench-scale tests using RASP achieved 76 wt% waste loading in both soda-lime-silica and borosilicate glasses. RASP has been independently verified by a commercial waste management company, and a contract awarded to vitrify the approximately 500,000 gallons of stored waste sludge. The waste volume reduction of 89% will greatly reduce disposal costs, and delisting of the glass waste is anticipated. This will be the world's first commercial-scale vitrification system used for environmental cleanup of Mixed Waste. Its stabilization and volume reduction abilities are expected to set standards for the future of the waste management industry.

  1. Image data-processing system for solar astronomy

    NASA Technical Reports Server (NTRS)

    Wilson, R. M.; Teuber, D. L.; Watkins, J. R.; Thomas, D. T.; Cooper, C. M.

    1977-01-01

    The paper describes an image data processing system (IDAPS), its hardware/software configuration, and interactive and batch modes of operation for the analysis of the Skylab/Apollo Telescope Mount S056 X-Ray Telescope experiment data. Interactive IDAPS is primarily designed to provide on-line interactive user control of image processing operations for image familiarization, sequence and parameter optimization, and selective feature extraction and analysis. Batch IDAPS follows the normal conventions of card control and data input and output, and is best suited where the desired parameters and sequence of operations are known and when long image-processing times are required. Particular attention is given to the way in which this system has been used in solar astronomy and other investigations. Some recent results obtained by means of IDAPS are presented.

  2. Statistical image processing in the Virtual Observatory context

    NASA Astrophysics Data System (ADS)

    Louys, M.; Bonnarel, F.; Schaaff, A.; Pestel, C.

    2009-07-01

    In an inter-disciplinary collaborative project, we have designed a framework to execute statistical image analysis techniques for multiwavelength astronomical images. This paper describes an interactive tool, AIDA_WF, which helps the astronomer design and describe image processing workflows. The tool allows designing and executing processing steps arranged in a workflow. Blocks can be either local computations or remote distributed computations via web services built according to the UWS (Universal Worker Service) pattern currently defined in the VO domain. Processing blocks are modelled with input and output parameters. Validation of input image content and parameters is performed using the VO Characterisation Data Model, allowing a first check of inputs prior to sending a job to remote computing nodes in a distributed or grid context. Workflows can be saved, documented, and collected for further re-use.

  3. Additional value of biplane transoesophageal imaging in assessment of mitral valve prostheses.

    PubMed Central

    Groundstroem, K; Rittoo, D; Hoffman, P; Bloomfield, P; Sutherland, G R

    1993-01-01

    OBJECTIVES--To determine whether biplane transoesophageal imaging offers advantages in the evaluation of mitral prostheses when compared with standard single transverse plane imaging or the precordial approach in suspected prosthetic dysfunction. DESIGN--Prospective study of patients with a mitral valve prosthesis in situ using precordial and biplane transoesophageal ultrasonography. SETTING--Tertiary cardiac referral centre. SUBJECTS--67 consecutive patients with suspected dysfunction of a mitral valve prosthesis (16 had bioprostheses and 51 mechanical prostheses) who underwent precordial, transverse plane, and biplane transoesophageal echocardiography. Correlative invasive confirmation from surgery or angiography, or both, was available in 44 patients. MAIN OUTCOME MEASURES--Number, type, and site of leak according to the three means of scanning. RESULTS--Transverse plane transoesophageal imaging alone identified all 31 medial/lateral paravalvar leaks but only 24/30 of the anterior/posterior leaks. Combining the information from both imaging planes confirmed that biplane scanning identified all paravalvar leaks. Five of the six patients with prosthetic valve endocarditis, all three with valvar thrombus or obstruction, and all three with mitral annulus rupture were diagnosed from transverse plane imaging alone. Longitudinal plane imaging alone enabled diagnosis of the remaining case of prosthetic endocarditis and a further case of subvalvar pannus formation. CONCLUSIONS--Transverse plane transoesophageal imaging was superior to longitudinal imaging in identifying medial and lateral lesions around the sewing ring of a mitral valve prosthesis. Longitudinal plane imaging was superior in identifying anterior and posterior lesions. Biplane imaging is therefore an important development in the study of mitral prosthesis function. PMID:8398497

  4. Video image processing for nuclear safeguards

    SciTech Connect

    Rodriguez, C.A.; Howell, J.A.; Menlove, H.O.; Brislawn, C.M.; Bradley, J.N.; Chare, P.; Gorten, J.

    1995-09-01

    The field of nuclear safeguards has received increasing amounts of public attention since the events of the Iraq-UN conflict over Kuwait, the dismantlement of the former Soviet Union, and more recently, the North Korean resistance to nuclear facility inspections by the International Atomic Energy Agency (IAEA). The role of nuclear safeguards in these and other events relating to the world's nuclear material inventory is to assure safekeeping of these materials and to verify the inventory and use of nuclear materials as reported by states that have signed the nuclear Nonproliferation Treaty throughout the world. Nuclear safeguards are measures prescribed by domestic and international regulatory bodies such as DOE, NRC, IAEA, and EURATOM and implemented by the nuclear facility or the regulatory body. These measures include destructive and nondestructive analysis of product materials/process by-products for materials control and accountancy purposes, physical protection for domestic safeguards, and containment and surveillance for international safeguards.

  5. Imaging of inflammatory processes with labeled cells

    SciTech Connect

    Froelich, J.W.; Swanson, D.

    1984-04-01

    Radionuclide techniques for localizing inflammatory processes had relied heavily upon ⁶⁷Ga-citrate until McAfee and Thakur described the technique for the radiolabeling of leukocytes with ¹¹¹In-oxine. Since their initial description in 1976 there has been continued development of the radiopharmaceutical, as well as clinical efficacy. At present ¹¹¹In-labeled leukocytes continue to be handled as an investigational new drug, but this has not greatly limited their clinical availability. Indium-111 leukocytes are the agent of choice for evaluation of patients with fever of unknown origin, osteomyelitis, and prosthetic graft infections; and preliminary data show great promise in the area of detecting recurrence of inflammatory bowel disease. This article attempts to review currently accepted uses of ¹¹¹In leukocytes as well as potential areas of application.

  6. Investigation of the effects of short chain processing additives on polymers

    NASA Technical Reports Server (NTRS)

    Singh, J. J.; Stclair, T. L.; Pratt, J. R.

    1986-01-01

    The effects of low-level concentrations of several short chain processing additives on the properties of the 4,4'-bis(3,4-dicarboxyphenoxy) diphenylsulfide dianhydride (BDSDA)/4,4'-diaminodiphenyl ether (ODA)/1,3-diaminobenzene (m-phenylenediamine) (MPD) (422) copolyimide were investigated. It was noted that 5 percent MPD/phthalic anhydride (PA) is more effective than 5 percent ODA/PA and BDSDA/aniline (AN) in strengthening the host material. However, the introduction of 10 percent BDSDA/AN produces disproportionately high effects on free volume and free electron density in the host copolyimide.

  7. Application of Tapping-Mode Scanning Probe Electrospray Ionization to Mass Spectrometry Imaging of Additives in Polymer Films

    PubMed Central

    Shimazu, Ryo; Yamoto, Yoshinari; Kosaka, Tomoya; Kawasaki, Hideya; Arakawa, Ryuichi

    2014-01-01

    We report the application of tapping-mode scanning probe electrospray ionization (t-SPESI) to mass spectrometry imaging of industrial materials. The t-SPESI parameters, including tapping solvent composition, solvent flow rate, number of taps at each spot, and step size, were optimized using a quadrupole mass spectrometer to improve mass spectrometry (MS) imaging of thin-layer chromatography (TLC) and of additives in polymer films. Spatial resolution of approximately 100 μm was achieved by t-SPESI imaging mass spectrometry using a fused-silica capillary (50 μm i.d., 150 μm o.d.) with the flow rate set at 0.2 μL/min. This allowed us to obtain discriminable MS imaging profiles of three dyes separated by TLC and the additive stripe pattern of a PMMA model film depleted by UV irradiation. PMID:26819894

  8. A review of processable high temperature resistant addition-type laminating resins

    NASA Technical Reports Server (NTRS)

    Serafini, T. T.; Delvigs, P.

    1973-01-01

    An important finding that resulted from research that was conducted to develop improved ablative resins was the discovery of a novel approach to synthesize processable high temperature resistant polymers. Low molecular weight polyimide prepolymers end-capped with norbornene groups were polymerized into thermo-oxidatively stable modified polyimides without the evolution of void producing volatile materials. This paper reviews basic studies that were performed using model compounds to elucidate the polymerization mechanism of the so-called addition-type polyimides. The fabrication and properties of polyimide/graphite fiber composites using A-type polyimide prepolymer as the matrix are described. An alternate method for preparing processable A-type polyimides by means of in situ polymerization of monomeric reactants on the fiber reinforcement is also described. Polyimide/graphite fiber composite performance at elevated temperatures is presented for A-type polyimides.

  9. Scanning laser ultrasound and wavenumber spectroscopy for in-process inspection of additively manufactured parts

    NASA Astrophysics Data System (ADS)

    Koskelo, EliseAnne C.; Flynn, Eric B.

    2016-04-01

    We present a new in-process laser ultrasound inspection technique for additive manufacturing. Ultrasonic energy was introduced to the part by attaching an ultrasonic transducer to the printer build plate and driving it with a single-tone harmonic excitation. The full-field response of the part was measured using a scanning laser Doppler vibrometer after each printed layer. For each scan, we analyzed both the local amplitudes and wavenumbers of the response in order to identify defects. For this study, we focused on the detection of delamination between layers in a fused deposition modeling process. Foreign object damage, localized heating damage, and the resulting delamination between layers were detected using the technique, as indicated by increased amplitude and wavenumber responses within the damaged area.
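
    Estimating a dominant spatial wavenumber from a scan line is the 1D analogue of the wavenumber analysis described; below is a sketch under the assumption of a steady-state, single-tone response. The paper's method maps local wavenumber over the full 2D wavefield, so this is a simplification for illustration.

```python
import numpy as np

def dominant_wavenumber(line_scan, dx):
    """Dominant spatial wavenumber (rad/m) of a 1D scan from its FFT
    peak -- a simplified 1D stand-in for full-field wavenumber mapping."""
    spectrum = np.abs(np.fft.rfft(line_scan - np.mean(line_scan)))
    freqs = np.fft.rfftfreq(len(line_scan), d=dx)   # cycles per metre
    return 2.0 * np.pi * freqs[np.argmax(spectrum)]

# steady-state response with a 0.02 m wavelength, sampled every 1 mm
x = np.arange(0.0, 0.2, 0.001)
scan = np.sin(2.0 * np.pi * x / 0.02)
k = dominant_wavenumber(scan, dx=0.001)   # about 2*pi/0.02 ~ 314 rad/m
```

    A delamination thins the effective plate locally, shifting the local wavenumber upward, which is why a wavenumber map highlights such defects.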

  10. 3D Machine Vision and Additive Manufacturing: Concurrent Product and Process Development

    NASA Astrophysics Data System (ADS)

    Ilyas, Ismet P.

    2013-06-01

    The manufacturing environment changes rapidly and turbulently. Digital manufacturing (DM) plays a significant role and is one of the key strategies in setting up vision and strategic planning toward knowledge-based manufacturing. An approach combining 3D machine vision (3D-MV) and additive manufacturing (AM) may finally be finding its niche in manufacturing. This paper briefly overviews the integration of 3D machine vision and AM in concurrent product and process development, the challenges and opportunities, and the implementation of 3D-MV and AM at POLMAN Bandung in accelerating product design and process development, and discusses a direct deployment of this approach on a real case from our industrial partners, who have placed it as one of the most important and strategic approaches in research as well as product/prototype development. The strategic aspects and needs of this combined approach in research, design, and development are the main concerns of the presentation.

  11. Thermoplastic starch/polyester films: effects of extrusion process and poly (lactic acid) addition.

    PubMed

    Shirai, Marianne Ayumi; Olivato, Juliana Bonametti; Garcia, Patrícia Salomão; Müller, Carmen Maria Olivera; Grossmann, Maria Victória Eiras; Yamashita, Fabio

    2013-10-01

    Biodegradable films were produced using the blown extrusion method from blends that contained cassava thermoplastic starch (TPS), poly(butylene adipate-co-terephthalate) (PBAT) and poly(lactic acid) (PLA) with two different extrusion processes. The choice of extrusion process did not have a significant effect on the mechanical properties, water vapor permeability (WVP) or viscoelasticity of the films, but the addition of PLA decreased the elongation, blow-up ratio (BUR) and opacity and increased the elastic modulus, tensile strength and viscoelastic parameters of the films. The films with 20% PLA exhibited a lower WVP due to the hydrophobic nature of this polymer. Morphological analyses revealed the incompatibility between the polymers used. PMID:23910321

  12. Evaluation of alternative chemical additives for high-level waste vitrification feed preparation processing

    SciTech Connect

    Seymour, R.G.

    1995-06-07

    During the development of the feed processing flowsheet for the Defense Waste Processing Facility (DWPF) at the Savannah River Site (SRS), research had shown that use of formic acid (HCOOH) could accomplish several processing objectives with one chemical addition. These objectives included the decomposition of tetraphenylborate, chemical reduction of mercury, production of acceptable rheological properties in the feed slurry, and control of the oxidation state of the glass melt pool. However, the DWPF research had not shown that some vitrification slurry feeds had a tendency to evolve hydrogen (H₂) and ammonia (NH₃) as the result of catalytic decomposition of HCOOH by noble metals (rhodium, ruthenium, palladium) in the feed. Testing conducted at Pacific Northwest Laboratory and later at the Savannah River Technical Center showed that the H₂ and NH₃ could evolve at appreciable rates and in appreciable quantities. The explosive nature of H₂ and NH₃ (as ammonium nitrate) warranted significant mitigation controls and redesign of both facilities. At the time the explosive gas evolution was discovered, the DWPF was already under construction and an immediate hardware fix in tandem with flowsheet changes was necessary. However, the Hanford Waste Vitrification Plant (HWVP) was still in the design phase and could afford to take time to investigate flowsheet manipulations that could solve the problem, rather than a hardware fix. Thus, the HWVP began to investigate alternatives to using HCOOH in the vitrification process. This document describes the selection, the evaluation criteria, and the strategy used to evaluate the performance of the alternative chemical additives to HCOOH. The status of the evaluation is also discussed.

  13. Remote sensing and image processing for exploration in frontier basins

    SciTech Connect

    Sabins, F.F. )

    1993-02-01

    A variety of remote sensing systems are available to explore the wide range of terrain in Central and South America and Mexico. The remote sensing data are recorded in digital form and must be computer-processed to produce images that are suitable for exploration. Landsat and SPOT images are available for most of the earth, but are restricted by cloud cover. The broad terrain coverage recorded by the Landsat thematic mapper (TM) is well suited for regional exploration. Color images are composited from various combinations of the 6 spectral bands to selectively enhance geologic features in different types of terrain. SPOT images may be acquired as stereo pairs, which are valuable for structural interpretations. Radar is an active form of remote sensing that provides its own source of energy at wavelengths of centimeters, which penetrate cloud cover. Radar images are acquired at low depression angles to create shadows and highlights that enhance subtle geologic features. Satellite radar images of earth were recorded from two U.S. space shuttle missions in the 1980s and are currently recorded by the European Remote Sensing satellite and the Japanese Earth Resources Satellite. Mosaics of radar images acquired from aircraft are widely used in oil exploration, especially in cloud-covered regions. Typical images and computer processing methods are illustrated with examples from various frontier basins.

  14. Radon-Based Image Processing In A Parallel Pipeline Architecture

    NASA Astrophysics Data System (ADS)

    Hinkle, Eric B.; Sanz, Jorge L. C.; Jain, Anil K.

    1986-04-01

    This paper deals with a novel architecture that makes real-time projection-based algorithms a reality. The design is founded on raster-mode processing, which is exploited in a powerful and flexible pipeline. This architecture, dubbed "P³E" (Parallel Pipeline Projection Engine), supports a large variety of image processing and image analysis applications. The image processing applications include: discrete approximations of the Radon and inverse Radon transforms, among other projection operators; CT reconstructions; 2-D convolutions; rotations and translations; discrete Fourier transform computations in polar coordinates; autocorrelations; etc. There is also an extensive list of key image analysis algorithms supported by P³E, making it a versatile tool for projection-based computer vision. These include: projections of gray-level images along linear patterns (the Radon transform) and other curved contours; generation of multi-color digital masks; convex hull approximations; Hough transform approximations for line and curve detection; diameter computations; calculations of moments and other principal components; etc. The effectiveness of our approach and the feasibility of the proposed architecture have been demonstrated by running some of these image analysis algorithms in conventional short pipelines to solve some important automated inspection problems. In the present paper, we concern ourselves with reconstructing images from their linear projections and performing convolutions via the Radon transform.
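
    As a rough illustration of the projection operators mentioned above, a discrete Radon-style projection can be sketched in a few lines of NumPy. This is a crude nearest-bin approximation for illustration only; it is unrelated to the paper's pipeline hardware.

```python
import numpy as np

def radon_projection(image, theta_deg):
    """Project a 2-D image onto an axis at angle theta (nearest-bin sum).

    A crude discrete approximation of one Radon-transform projection:
    each pixel's value is accumulated into the bin given by its signed
    distance from the line through the image centre at angle theta.
    """
    h, w = image.shape
    y, x = np.mgrid[0:h, 0:w]
    # coordinates relative to the image centre
    xc, yc = x - (w - 1) / 2.0, y - (h - 1) / 2.0
    t = np.deg2rad(theta_deg)
    # signed distance of each pixel from the projection axis
    s = xc * np.cos(t) + yc * np.sin(t)
    n_bins = int(np.ceil(np.hypot(h, w)))
    bins = np.clip(np.round(s + n_bins / 2).astype(int), 0, n_bins - 1)
    return np.bincount(bins.ravel(), weights=image.ravel(), minlength=n_bins)

def radon(image, angles):
    """Stack projections for several angles into a sinogram."""
    return np.vstack([radon_projection(image, a) for a in angles])
```

    Each row of the resulting sinogram integrates the image along one family of parallel lines; note that every projection conserves the image's total intensity.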

  15. Automating the Photogrammetric Bridging Based on MMS Image Sequence Processing

    NASA Astrophysics Data System (ADS)

    Silva, J. F. C.; Lemes Neto, M. C.; Blasechi, V.

    2014-11-01

    The photogrammetric bridging or traverse is a special bundle block adjustment (BBA) for connecting a sequence of stereo-pairs and determining the exterior orientation parameters (EOP). An object point must be imaged in more than one stereo-pair. In each stereo-pair the distance ratio between an object and its corresponding image point varies significantly. We propose to automate photogrammetric bridging based on fully automatic extraction of homologous points in stereo-pairs and on an arbitrary Cartesian datum to which the EOP and tie points are referred. The technique uses the SIFT algorithm, and keypoints are matched by comparing their similarity descriptors and taking the smallest distance. All the matched points are used as tie points. The technique was applied initially to two pairs. The block formed by four images was treated by BBA. The process follows up to the end of the sequence and is semiautomatic, because each block is processed independently and the transition from one block to the next depends on the operator. Besides four-image blocks (two pairs), we experimented with other arrangements with block sizes of six, eight, and up to twenty images (respectively, three, four, five, and up to ten bases). After the whole image-pair sequence had been sequentially adjusted in each experiment, a simultaneous BBA was run to estimate the EOP set of each image. The results for classical ("normal case") pairs were analyzed with the standard statistics regularly applied to phototriangulation, and they yield figures that validate the process.
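
    The descriptor-matching step described above (nearest neighbour by smallest descriptor distance) can be sketched as follows. This is a generic illustration using brute-force NumPy distances plus Lowe's ratio test, not the authors' implementation; real SIFT descriptors would come from a feature-extraction library.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Match keypoint descriptors by smallest Euclidean distance.

    For each descriptor in desc_a, find its nearest and second-nearest
    neighbours in desc_b; accept the match only if the nearest is
    sufficiently closer than the runner-up (Lowe's ratio test).
    Returns a list of (index_a, index_b) pairs.
    """
    # pairwise squared distances, shape (len_a, len_b)
    d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(axis=2)
    matches = []
    for i, row in enumerate(d2):
        j1, j2 = np.argsort(row)[:2]
        # compare squared distances, so the ratio is squared as well
        if row[j1] < (ratio ** 2) * row[j2]:
            matches.append((i, j1))
    return matches
```

    The accepted pairs would then serve as the tie points fed into the bundle block adjustment.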

  16. Standardizing PhenoCam Image Processing and Data Products

    NASA Astrophysics Data System (ADS)

    Milliman, T. E.; Richardson, A. D.; Klosterman, S.; Gray, J. M.; Hufkens, K.; Aubrecht, D.; Chen, M.; Friedl, M. A.

    2014-12-01

    The PhenoCam Network (http://phenocam.unh.edu) contains an archive of imagery from digital webcams to be used for scientific studies of phenological processes of vegetation. The image archive continues to grow and currently has over 4.8 million images representing 850 site-years of data. Time series of broadband reflectance (e.g., red, green, blue, infrared bands) and derivative vegetation indices (e.g., green chromatic coordinate or GCC) are calculated for regions of interest (ROI) within each image series. These time series form the basis for subsequent analysis, such as spring and autumn transition date extraction (using curvature analysis techniques) and modeling the climate-phenology relationship. Processing is relatively straightforward but time consuming, with some sites having more than 100,000 images available. While the PhenoCam Network distributes the original image data, it is our goal to provide higher-level vegetation phenology products, generated in a standardized way, to encourage use of the data without the need to download and analyze individual images. We describe here the details of the standard image processing procedures, and also provide a description of the products that will be available for download. Products currently in development include an "all-image" file, which contains a statistical summary of the red, green and blue bands over the pixels in predefined ROIs for each image from a site. This product is used to generate 1-day and 3-day temporal aggregates with 90th percentile values of GCC for the specified time period, with standard image selection/filtering criteria applied. Sample software (in Python, R, and MATLAB) that can be used to read in and plot these products will also be described.
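
    The GCC calculation and the 3-day 90th-percentile aggregation described above can be sketched as follows. This is a simplified illustration; the actual PhenoCam selection and filtering criteria are more involved.

```python
import numpy as np

def green_chromatic_coordinate(r, g, b):
    """GCC = G / (R + G + B), computed from ROI band means per image."""
    total = r + g + b
    return np.where(total > 0, g / np.maximum(total, 1e-12), 0.0)

def aggregate_gcc(days, gcc, window=3, pct=90):
    """Percentile aggregate of a daily GCC series over fixed windows.

    Loosely mimics the 3-day products described above: each window of
    `window` consecutive days is summarised by its `pct`-th percentile,
    which suppresses low outliers from bad weather or poor illumination.
    """
    days, gcc = np.asarray(days), np.asarray(gcc)
    out_days, out_gcc = [], []
    for start in range(int(days.min()), int(days.max()) + 1, window):
        mask = (days >= start) & (days < start + window)
        if mask.any():
            out_days.append(start)
            out_gcc.append(np.percentile(gcc[mask], pct))
    return np.array(out_days), np.array(out_gcc)
```

    Taking a high percentile rather than a mean is what makes the aggregate robust to transient dark or overcast images within each window.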

  17. Image processing system performance prediction and product quality evaluation

    NASA Technical Reports Server (NTRS)

    Stein, E. K.; Hammill, H. B. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. A new technique for image processing system performance prediction and product quality evaluation was developed. It was entirely objective, quantitative, and general, and should prove useful in system design and quality control. The technique and its application to determination of quality control procedures for the Earth Resources Technology Satellite NASA Data Processing Facility are described.

  18. Process for producing biodiesel, lubricants, and fuel and lubricant additives in a critical fluid medium

    DOEpatents

    Ginosar, Daniel M.; Fox, Robert V.

    2005-05-03

    A process for producing alkyl esters useful in biofuels and lubricants by transesterifying glyceride- or esterifying free fatty acid-containing substances in a single critical phase medium is disclosed. The critical phase medium provides increased reaction rates, decreases the loss of catalyst or catalyst activity and improves the overall yield of desired product. The process involves the steps of dissolving an input glyceride- or free fatty acid-containing substance with an alcohol or water into a critical fluid medium; reacting the glyceride- or free fatty acid-containing substance with the alcohol or water input over either a solid or liquid acidic or basic catalyst and sequentially separating the products from each other and from the critical fluid medium, which critical fluid medium can then be recycled back in the process. The process significantly reduces the cost of producing additives or alternatives to automotive fuels and lubricants utilizing inexpensive glyceride- or free fatty acid-containing substances, such as animal fats, vegetable oils, rendered fats, and restaurant grease.

  19. Additive Manufacturing of Single-Crystal Superalloy CMSX-4 Through Scanning Laser Epitaxy: Computational Modeling, Experimental Process Development, and Process Parameter Optimization

    NASA Astrophysics Data System (ADS)

    Basak, Amrita; Acharya, Ranadip; Das, Suman

    2016-06-01

    This paper focuses on additive manufacturing (AM) of single-crystal (SX) nickel-based superalloy CMSX-4 through scanning laser epitaxy (SLE). SLE, a powder bed fusion-based AM process, was explored for the purpose of producing crack-free, dense deposits of CMSX-4 on top of similar-chemistry investment-cast substrates. Optical microscopy and scanning electron microscopy (SEM) investigations revealed the presence of dendritic microstructures that consisted of fine γ' precipitates within the γ matrix in the deposit region. Computational fluid dynamics (CFD)-based process modeling, statistical design of experiments (DoE), and microstructural characterization techniques were combined to produce metallurgically bonded single-crystal deposits of more than 500 μm height in a single pass along the entire length of the substrate. A customized quantitative metallography based image analysis technique was employed for automatic extraction of various deposit quality metrics from the digital cross-sectional micrographs. The processing parameters were varied, and optimal processing windows were identified to obtain good quality deposits. The results reported here represent one of the few successes obtained in producing single-crystal epitaxial deposits through a powder bed fusion-based metal AM process and thus demonstrate the potential of SLE to repair and manufacture single-crystal hot section components of gas turbine systems from nickel-based superalloy powders.

  1. Small Interactive Image Processing System (SMIPS) system description

    NASA Technical Reports Server (NTRS)

    Moik, J. G.

    1973-01-01

    The Small Interactive Image Processing System (SMIPS) operates under control of the IBM-OS/MVT operating system and uses an IBM-2250 model 1 display unit as interactive graphic device. The input language in the form of character strings or attentions from keys and light pen is interpreted and causes processing of built-in image processing functions as well as execution of a variable number of application programs kept on a private disk file. A description of design considerations is given and characteristics, structure and logic flow of SMIPS are summarized. Data management and graphic programming techniques used for the interactive manipulation and display of digital pictures are also discussed.

  2. IDP: Image and data processing (software) in C++

    SciTech Connect

    Lehman, S.

    1994-11-15

    IDP++ (Image and Data Processing in C++) is a compiled, multidimensional, multi-data-type signal processing environment written in C++. It is being developed within the Radar Ocean Imaging group and is intended as a partial replacement for View. IDP++ takes advantage of the latest object-oriented compiler technology to provide "information hiding." Users need only know C, not C++. Signals are treated like any other variable, with a defined set of operators and functions, in an intuitive manner. IDP++ is being designed for real-time environments, where interpreted signal processing packages are less efficient.

  3. System-theoretical approach to multistage image processing

    NASA Astrophysics Data System (ADS)

    Grudin, Maxim A.; Timchenko, Leonid I.; Harvey, David M.; Gel, Vladimir P.

    1996-08-01

    We present a novel three-dimensional network and its application to pattern analysis. This is a multistage architecture which investigates partial correlations between structural image components. Mathematical description of the multistage hierarchical processing is provided together with the network architecture. Initially the image is partitioned to be processed in parallel channels. In each channel, the structural components are transformed and subsequently separated depending on their informational activity, to be mixed with the components from other channels for further processing. This procedure of temporal decomposition creates a flexible processing hierarchy, which reflects structural image complexity. An output result is represented as a pattern vector, whose components are computed one at a time to allow the quickest possible response. While several applications of the multistage network are possible, this paper represents an algorithm applied to image classification. The input gray-scale image is transformed so that each pixel contains information about the spatial structure of its neighborhood. A three-level representation of gray-scale image is used in order for each pixel to contain the maximum amount of structural information. The investigation of spatial regularities at all hierarchical levels provides a unified approach to pattern analysis. The most correlated information is extracted first, making the algorithm tolerant to minor structural changes.

  4. A comparison of polarization image processing across different platforms

    NASA Astrophysics Data System (ADS)

    York, Timothy; Powell, Samuel; Gruev, Viktor

    2011-10-01

    Division-of-focal-plane (DoFP) polarimeters for the visible spectrum hold the promise of being able to capture both the angle and degree of linear polarization in real-time and at high spatial resolution. These sensors are realized by monolithic integration of CCD imaging elements with metallic nanowire polarization filter arrays at the focal plane of the sensor. These sensors capture large amounts of raw polarization data and present unique computational challenges as they aim to provide polarimetric information at high spatial and temporal resolutions. The image processing pipeline in a typical DoFP polarimeter is: per-pixel calibration, interpolation of the four sub-sampled polarization pixels, Stokes parameter estimation, angle and degree of linear polarization estimation, and conversion from polarization domain to color space for display purposes. The entire image processing pipeline must operate at the same frame rate as the CCD polarization imaging sensor (40 frames per second) or higher in order to enable real-time extraction of the polarization properties from the imaged environment. To achieve the necessary frame rate, we have implemented and evaluated the image processing pipeline on three different platforms: general purpose CPU, graphics processing unit (GPU), and an embedded FPGA. The computational throughput, power consumption, precision and physical limitations of the implementations on each platform are described in detail and experimental data is provided.
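
    The Stokes-estimation and DoLP/AoLP stages of the pipeline described above can be sketched as follows. This is the textbook formulation for four ideal 0°/45°/90°/135° samples; the per-pixel calibration and interpolation stages of the actual pipeline are omitted.

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Estimate the linear Stokes parameters from the four DoFP samples.

    s0: total intensity; s1: 0 deg vs 90 deg preference;
    s2: 45 deg vs 135 deg preference.
    """
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    return s0, s1, s2

def dolp_aolp(s0, s1, s2):
    """Degree and angle of linear polarization from the Stokes parameters."""
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)
    aolp = 0.5 * np.arctan2(s2, s1)  # radians, in (-pi/2, pi/2]
    return dolp, aolp
```

    On a DoFP sensor these expressions are evaluated per 2×2 super-pixel at full frame rate, which is what makes the choice of CPU, GPU, or FPGA back-end consequential.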

  5. Real-time image processing of TOF range images using a reconfigurable processor system

    NASA Astrophysics Data System (ADS)

    Hussmann, S.; Knoll, F.; Edeler, T.

    2011-07-01

    In recent years, Time-of-Flight (TOF) sensors have had a significant impact on research in machine vision. Compared with stereo vision systems and laser range scanners, they combine the advantages of active sensors, providing accurate distance measurements, and of camera-based systems, recording a 2D matrix at a high frame rate. Moreover, low-cost 3D imaging has the potential to open a wide field of additional applications and solutions in markets like consumer electronics, multimedia, digital photography, robotics and medical technologies. This paper focuses on the 4-phase-shift algorithm currently implemented in this type of sensor. The most time-critical operation of the phase-shift algorithm is the arctangent function. In this paper a novel hardware implementation of the arctangent function using a reconfigurable processor system is presented and benchmarked against the state-of-the-art CORDIC arctangent algorithm. Experimental results show that the proposed algorithm is well suited for real-time processing of the range images of TOF cameras.
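
    The 4-phase-shift range computation, including the time-critical arctangent, can be sketched as follows. This uses one common sign convention; actual sensors differ, and the 20 MHz modulation frequency is only an assumed example.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_distance(a0, a1, a2, a3, f_mod=20e6):
    """Range from the four phase samples of a 4-phase-shift TOF pixel.

    The phase offset between emitted and received modulation is
    recovered from the four samples taken 90 degrees apart, then scaled
    by half the modulation wavelength to give range.
    """
    phase = np.arctan2(a3 - a1, a0 - a2)  # the costly arctangent step
    phase = np.mod(phase, 2 * np.pi)      # fold into [0, 2*pi)
    return C * phase / (4 * np.pi * f_mod)
```

    Because this arctangent runs once per pixel per frame, replacing it with a pipelined hardware approximation (as the paper proposes against CORDIC) is where the real-time gain comes from.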

  6. Processing of multi-digit additions in high math-anxious individuals: psychophysiological evidence

    PubMed Central

    Núñez-Peña, María Isabel; Suárez-Pellicioni, Macarena

    2015-01-01

    We investigated the time course of neural processing of multi-digit additions in high- (HMA) and low-math anxious (LMA) individuals. Seventeen HMA and 17 LMA individuals were presented with two-digit additions and were asked to perform a verification task. Behavioral data showed that HMA individuals were slower and more error prone than their LMA peers, and that incorrect solutions were solved more slowly and less accurately than correct ones. Moreover, HMA individuals tended to need more time and commit more errors when verifying incorrect solutions than correct ones. ERPs time-locked to the presentation of the addends (calculation phase) and to the presentation of the proposed solution (verification phase) were also analyzed. In both phases, a P2 component of larger amplitude was found for HMA individuals than for their LMA peers. Because the P2 component is considered to be a biomarker of the mobilization of attentional resources toward emotionally negative stimuli, these results suggest that HMA individuals may have invested more attentional resources both when processing the addends (calculation phase) and when they had to report whether the proposed solution was correct or not (verification phase), as compared to their LMA peers. Moreover, in the verification phase, LMA individuals showed a larger late positive component (LPC) for incorrect solutions at parietal electrodes than their HMA counterparts. The smaller LPC shown by HMA individuals when verifying incorrect solutions suggests that these solutions may have appeared more plausible to them than to their LMA counterparts. PMID:26347705

  7. EFFECT OF STARCH ADDITION ON THE PERFORMANCE AND SLUDGE CHARACTERIZATION OF UASB PROCESS TREATING METHANOLIC WASTEWATER

    NASA Astrophysics Data System (ADS)

    Yan, Feng; Kobayashi, Takuro; Takahashi, Shintaro; Li, Yu-You; Omura, Tatsuo

    A mesophilic (35°C) UASB reactor treating synthetic wastewater containing methanol with added starch was operated continuously for over 430 days, with the organic loading rate varied from 2.5 to 120 kg-COD/m³·d. The microbial community structure of the granules was analyzed with molecular tools, and its metabolic characteristics were evaluated using specific methanogenic activity tests. The process was operated successfully, with over 98% soluble COD removal efficiency at a loading rate of 30 kg-COD/m³·d for approximately 300 days, and granulation proceeded satisfactorily. The results of cloning and fluorescence in situ hybridization analysis suggest that groups related to the genus Methanomethylovorans and the genus Methanosaeta were predominant in the reactor, although only the genus Methanomethylovorans was predominant in the reactor treating methanolic wastewater in the previous study. The abundance of granules over 0.5 mm in diameter in the reactor treating methanolic wastewater with added starch was 3 times larger than that in the reactor treating methanolic wastewater alone. Specific methanogenic activity tests in this study indicate that the methanol-methane pathway and the methanol-H₂/CO₂-methane pathway were predominant; however, there was also a certain level of activity for the acetate-methane pathway, unlike in the reactor treating methanolic wastewater alone. These results suggest that the addition of starch might be responsible for diversifying the microbial community and encouraging granulation.

  8. V-Sipal - a Virtual Laboratory for Satellite Image Processing and Analysis

    NASA Astrophysics Data System (ADS)

    Buddhiraju, K. M.; Eeti, L.; Tiwari, K. K.

    2011-09-01

    In this paper a virtual laboratory for Satellite Image Processing and Analysis (v-SIPAL) being developed at the Indian Institute of Technology Bombay is described. v-SIPAL comprises a set of experiments that are normally carried out by students learning digital processing and analysis of satellite images using commercial software. Currently, the experiments that are available on the server include Image Viewer, Image Contrast Enhancement, Image Smoothing, Edge Enhancement, Principal Component Transform, Texture Analysis by the Co-occurrence Matrix method, Image Indices, Color Coordinate Transforms, Fourier Analysis, Mathematical Morphology, Unsupervised Image Classification, Supervised Image Classification and Accuracy Assessment. The virtual laboratory includes a theory module for each option of every experiment, a description of the procedure to perform each experiment, the menu to choose and perform the experiment, a module on interpretation of results when performed with a given image and pre-specified options, bibliography, links to useful internet resources and user feedback. The user can upload his/her own images for performing the experiments and can also reuse outputs of one experiment in another experiment where applicable. Some of the other experiments currently under development include georeferencing of images, data fusion, feature evaluation by divergence and J-M distance, image compression, wavelet image analysis and change detection. Additions to the theory module include self-assessment quizzes, audio-video clips on selected concepts, and a discussion of elements of visual image interpretation. v-SIPAL is at the stage of internal evaluation within IIT Bombay and will soon be open to selected educational institutions in India for evaluation.
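
    As an illustration of one of the listed experiments (Image Contrast Enhancement), global histogram equalization of a single-band image can be sketched as follows. This is a generic textbook method, not necessarily the algorithm used in v-SIPAL.

```python
import numpy as np

def equalize_histogram(image, levels=256):
    """Global histogram equalization for a single-band uint8 image.

    Maps grey levels through the normalized cumulative histogram so the
    output uses the available dynamic range more evenly, a classic
    contrast-enhancement step for satellite imagery.
    """
    hist = np.bincount(image.ravel(), minlength=levels)
    cdf = hist.cumsum().astype(float)
    cdf_min = cdf[hist > 0][0]           # first non-empty bin
    denom = max(cdf[-1] - cdf_min, 1.0)  # guard against flat images
    lut = np.round((cdf - cdf_min) / denom * (levels - 1)).astype(np.uint8)
    return lut[image]                    # apply the lookup table per pixel
```

    The same lookup-table pattern underlies several of the other listed point operations, such as linear stretches and color coordinate transforms.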

  9. On-demand server-side image processing for web-based DICOM image display

    NASA Astrophysics Data System (ADS)

    Sakusabe, Takaya; Kimura, Michio; Onogi, Yuzo

    2000-04-01

    Low-cost image delivery is needed in modern networked hospitals. If a hospital has hundreds of clients, the cost of client systems is a big problem. Naturally, a Web-based system is the most effective solution. But a Web browser could not display medical images with certain image processing applied, such as a lookup-table transformation. We developed a Web-based medical image display system using a Web browser and on-demand server-side image processing. All images displayed on a Web page are generated from DICOM files on a server and delivered on demand. User interaction on the Web page is handled by a client-side scripting technology such as JavaScript. This combination gives the look-and-feel of an imaging workstation, not only in functionality but also in speed. Real-time update of images while tracing mouse motion is achieved in the Web browser without any client-side image processing, which would otherwise require client-side plug-in technology such as Java Applets or ActiveX. We tested the performance of the system in three cases: a single client, a small number of clients on a fast network, and a large number of clients on a normal-speed network. The results show that the communication overhead is very slight and that the system scales well with the number of clients.
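
    A lookup-table transformation of the kind mentioned above, the linear VOI window commonly applied to DICOM pixel data, can be sketched server-side as follows. This is a simplified illustration; rescale slope/intercept and nonlinear LUTs are omitted.

```python
import numpy as np

def window_level(pixels, center, width):
    """Apply a linear VOI window (lookup-table transform) to raw values.

    Values below center - width/2 map to 0, values above
    center + width/2 map to 255, and values in between map linearly --
    the kind of server-side step a plain Web browser of that era could
    not perform on its own.
    """
    lo = center - width / 2.0
    scaled = (np.asarray(pixels, dtype=float) - lo) / max(width, 1e-12) * 255.0
    return np.clip(np.round(scaled), 0, 255).astype(np.uint8)
```

    In the architecture described, the server would re-render and re-deliver the windowed image each time the client-side script reports a new center/width from mouse motion.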

  10. CT discrimination and image process on damage process of unsaturated compacted loess during triaxial creep

    NASA Astrophysics Data System (ADS)

    Li, Xiaojun; Jiang, Lihua; Tang, Yichuan

    2010-08-01

    Triaxial creep compression tests of compacted loess samples were conducted with a newly modified triaxial compression apparatus. With the new apparatus, the loess sample can be scanned with a CT machine during the compression process. The different damage processes of compacted loess samples are directly observed for the first time with CT images and CT numbers. The initiation mechanisms of loess micro-cracks during different creep compression processes are analyzed with the CT images.

  11. Advanced biologically plausible algorithms for low-level image processing

    NASA Astrophysics Data System (ADS)

    Gusakova, Valentina I.; Podladchikova, Lubov N.; Shaposhnikov, Dmitry G.; Markin, Sergey N.; Golovan, Alexander V.; Lee, Seong-Whan

    1999-08-01

    At present, in computer vision, the approach based on modeling biological vision mechanisms is being extensively developed. However, up to now, real-world image processing has had no effective solution within either the biologically inspired or the conventional framework. Evidently, new algorithms and system architectures based on advanced biological motivation should be developed to solve the computational problems related to this visual task. A basic problem that must be solved to create an effective artificial visual system for real-world images is the search for new low-level image processing algorithms, which to a great extent determine system performance. In the present paper, the results of psychophysical experiments and several advanced biologically motivated algorithms for low-level processing are presented. These algorithms are based on local space-variant filters, context encoding of the visual information presented in the center of the input window, and automatic detection of perceptually important image fragments. The core of the latter algorithm is the use of local feature conjunctions, such as noncollinear oriented segments, and the formation of composite feature maps. The developed algorithms were integrated into the foveal active vision model MARR. It is expected that the proposed algorithms may significantly improve the model's performance in real-world image processing during memorizing, search, and recognition.

  12. Hydration process of cement in the presence of a cellulosic additive. A calorimetric investigation.

    PubMed

    Ridi, Francesca; Fratini, Emiliano; Mannelli, Francesca; Baglioni, Piero

    2005-08-01

    In the cement industry, the extrusion technique is used to produce flat shapes with improved resistance to compression. Extrusion is a plastic-forming process that consists of forcing a highly viscous plastic mixture through a shaped die. The material should be fluid enough to be mixed and to pass through the die; on the other hand, the extruded specimen should be stiff enough to be handled without changing shape or cracking. These characteristics are obtained industrially by adding cellulosic polymers to the mixture. The aim of this work is to understand the action mechanism of these additives on the major pure phases constituting a typical Portland cement: tricalcium silicate (C₃S), dicalcium silicate (C₂S), tricalcium aluminate (C₃A), and tetracalcium iron-aluminate (C₄AF). In particular, a methylhydroxyethyl cellulose (MHEC) was selected from the best-performing polymers for further study. The effect of this additive on the hydration kinetics (rate constants, activation energies, and diffusional constants) was evaluated by means of differential scanning calorimetry (DSC), while the hydration products were studied by using thermogravimetry-differential thermal analysis (TG-DTA), X-ray diffraction (XRD), and scanning electron microscopy (SEM). MHEC addition to calcium silicate pastes produces an increase in the induction time without affecting the nucleation-and-growth period. A less dense CSH gel was deduced from the diffusional constants in the presence of MHEC. Moreover, CSH laminar features and poorly structured hydrates were noted during the first hours of hydration. In the case of the aluminous phases, the additive inhibits the growth of the stable cubic hydrated phase (C₃AH₆), favoring the metastable hexagonal phases formed in the earliest minutes of hydration. PMID:16852857

  13. Dehydration process of fish analyzed by neutron beam imaging

    NASA Astrophysics Data System (ADS)

    Tanoi, K.; Hamada, Y.; Seyama, S.; Saito, T.; Iikura, H.; Nakanishi, T. M.

    2009-06-01

    Since the water content of dried fish must be regulated to ensure product quality, the water-loss process during drying of squid and Japanese horse mackerel was analyzed by neutron beam imaging. The neutron images showed that around the shoulder of the mackerel there was a region where the water content tended to remain high during drying. To analyze the water-loss process in more detail, spatial images were produced. These images clearly indicated that the decrease in water content was slowest around the shoulder. It was therefore suggested that preventing deterioration around the shoulder is an important factor in maintaining the quality of dried fish during storage.
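    Water maps in neutron transmission imaging are typically recovered from the attenuation law I = I0*exp(-mu_w*t_w), since water dominates the neutron attenuation of biological tissue. A minimal sketch (the attenuation coefficient and pixel values are illustrative assumptions, not values from the paper):

```python
import numpy as np

MU_WATER = 3.5  # cm^-1, approximate neutron attenuation coefficient of water (assumed)

def water_thickness(I, I0, mu=MU_WATER):
    """Water-equivalent thickness map (cm) from transmitted and open-beam images."""
    I = np.asarray(I, dtype=float)
    I0 = np.asarray(I0, dtype=float)
    return -np.log(np.clip(I / I0, 1e-6, None)) / mu

# Illustrative 2x2 transmission image: lower counts mean more water in the beam path
I0 = np.full((2, 2), 1000.0)                     # open-beam reference
I = np.array([[1000.0, 705.0], [497.0, 350.0]])  # transmitted intensity
t = water_thickness(I, I0)
print(np.round(t, 2))  # thicker water-equivalent path where transmission is low
```

Comparing such maps frame by frame over the drying period is one way the slow-drying shoulder region described above could be quantified.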

  14. Precipitation process in a Mg–Gd–Y alloy grain-refined by Al addition

    SciTech Connect

    Dai, Jichun; Zhu, Suming; Easton, Mark A.; Xu, Wenfan; Wu, Guohua; Ding, Wenjiang

    2014-02-15

    The precipitation process in Mg–10Gd–3Y (wt.%) alloy grain-refined by 0.8 wt.% Al addition has been investigated by transmission electron microscopy. The alloy was given a solution treatment at 520 °C for 6 h plus 550 °C for 7 h before ageing at 250 °C. Plate-shaped intermetallic particles with the 18R-type long-period stacking ordered structure were observed in the solution-treated state. Upon isothermal ageing at 250 °C, the following precipitation sequence was identified for the α-Mg supersaturated solution: β″ (D0{sub 19}) → β′ (bco) → β{sub 1} (fcc) → β (fcc). The observed precipitation process and age hardening response in the Al grain-refined Mg–10Gd–3Y alloy are compared with those reported in the Zr grain-refined counterpart. - Highlights: • The precipitation process in Mg–10Gd–3Y–0.8Al (wt.%) alloy has been investigated. • Particles with the 18R-type LPSO structure were observed in the solution state. • Upon ageing at 250 °C, the precipitation sequence is: β″ → β′ → β1 (fcc) → β. • The Al grain-refined alloy has a lower hardness than the Zr refined counterpart.

  15. IMPACTS OF ANTIFOAM ADDITIONS AND ARGON BUBBLING ON DEFENSE WASTE PROCESSING FACILITY REDUCTION/OXIDATION

    SciTech Connect

    Jantzen, C.; Johnson, F.

    2012-06-05

    During melting of HLW glass, the REDOX of the melt pool cannot be measured. Therefore, the Fe{sup +2}/{Sigma}Fe ratio in the glass poured from the melter must be related to melter feed organic and oxidant concentrations to ensure production of a high quality glass without impacting production rate (e.g., foaming) or melter life (e.g., metal formation and accumulation). A production facility such as the Defense Waste Processing Facility (DWPF) cannot wait until the melt or waste glass has been made to assess its acceptability, since by then no further changes to the glass composition and acceptability are possible. Therefore, the acceptability decision is made on the upstream process, rather than on the downstream melt or glass product. That is, it is based on 'feed forward' statistical process control (SPC) rather than statistical quality control (SQC). In SPC, the feed composition to the melter is controlled prior to vitrification. Use of the DWPF REDOX model has controlled the balance of feed reductants and oxidants in the Sludge Receipt and Adjustment Tank (SRAT). Once the alkali/alkaline earth salts (both reduced and oxidized) are formed during reflux in the SRAT, the REDOX can only change if (1) additional reductants or oxidants are added to the SRAT, the Slurry Mix Evaporator (SME), or the Melter Feed Tank (MFT) or (2) if the melt pool is bubbled with an oxidizing gas or sparging gas that imposes a different REDOX target than the chemical balance set during reflux in the SRAT.

  16. An ImageJ plugin for ion beam imaging and data processing at AIFIRA facility

    NASA Astrophysics Data System (ADS)

    Devès, G.; Daudin, L.; Bessy, A.; Buga, F.; Ghanty, J.; Naar, A.; Sommar, V.; Michelet, C.; Seznec, H.; Barberet, P.

    2015-04-01

    Quantification and imaging of chemical elements at the cellular level requires the use of a combination of techniques such as micro-PIXE, micro-RBS, STIM, and secondary electron imaging, associated with optical and fluorescence microscopy techniques employed prior to irradiation. Such a broad set of methods generates a large amount of data per experiment. Typically, for each acquisition the following data have to be processed: a chemical map for each element present with a concentration above the detection limit, density and backscattered maps, and mean and local spectra corresponding to relevant regions of interest such as the whole cell, intracellular compartments, or nanoparticles. These operations are time consuming and repetitive and as such could be a source of errors in data manipulation. In order to optimize data processing, we have developed a new tool for batch data processing and imaging. This tool has been developed as a plugin for ImageJ, versatile image processing software that is suitable for the treatment of basic IBA data operations. Because ImageJ is written in Java, the plugin can be used under Linux, Mac OS X and Windows in both 32-bit and 64-bit modes, which may interest developers working on open-access ion beam facilities like AIFIRA. The main features of this plugin are presented here: listfile processing, spectroscopic imaging, local information extraction, quantitative density maps and database management using OMERO.
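    The "local information extraction" step amounts to averaging each elemental map over user-defined regions of interest. A minimal, language-neutral sketch of that batch operation (NumPy stands in for the ImageJ/Java data structures; the maps and ROI below are hypothetical):

```python
import numpy as np

def roi_means(element_maps, roi_masks):
    """Mean elemental signal per ROI, for a dict of 2-D maps and boolean masks."""
    return {roi: {el: float(m[mask].mean()) for el, m in element_maps.items()}
            for roi, mask in roi_masks.items()}

# Hypothetical 4x4 PIXE count maps for two elements
maps = {"Fe": np.arange(16.0).reshape(4, 4), "Zn": np.ones((4, 4))}
cell = np.zeros((4, 4), bool)
cell[1:3, 1:3] = True                      # a "whole cell" ROI
out = roi_means(maps, {"cell": cell})
print(out["cell"]["Fe"])  # mean of pixels 5, 6, 9, 10 -> 7.5
```

Running the same loop over every acquisition in a list file is what turns this from a per-image chore into the batch processing the plugin automates.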

  17. High Performance Image Processing And Laser Beam Recording System

    NASA Astrophysics Data System (ADS)

    Fanelli, Anthony R.

    1980-09-01

    The article provides the digital image recording community with an overview of digital image processing and recording. The Digital Interactive Image Processing System (DIIPS) was assembled by ESL for Air Force Systems Command under Rome Air Development Center's guidance. The system provides the capability of mensuration and exploitation of digital imagery, with both mono and stereo digital images as inputs. The development covered system design, basic hardware, software, and operational procedures to enable Air Force Systems Command photo analysts to perform digital mensuration and exploitation of stereo digital images. The engineering model was based on state-of-the-art technology and, to the extent possible, off-the-shelf hardware and software. A laser recorder, known as the Ultra High Resolution Image Recorder (UHRIR), was also developed for the DIIPS system. The UHRIR is a prototype model that will enable Air Force Systems Command to record computer-enhanced digital image data on photographic film at high resolution with geometric and radiometric distortion minimized.

  18. Object silhouettes and surface directions through stereo matching image processing

    NASA Astrophysics Data System (ADS)

    Akiyama, Akira; Kumagai, Hideo

    2015-09-01

    We have studied object silhouettes and surface directions through stereo matching image processing in order to recognize the position, size and surface direction of an object. For this study we use the pixel-number change distribution of the HSI color component levels, binary component-level images obtained by a standard-deviation threshold, a 4-directional pixel-connectivity filter, surface-element correspondence by stereo matching, and the projection rule. We note that the HSI color component levels of the object image are more stable near the focus position than over the unfocused range, so we use the HSI color component level images near the fine focus position to extract the object silhouette, and the silhouette is extracted properly. We find the surface direction of the object from the pixel counts of the corresponding surface areas and the projection cosine rule, after stereo matching by characteristic areas and synthesized colors. Epipolar geometry is used in this study because the pair of imagers is arranged on the same epipolar plane. The surface direction detection results in the proper angle calculation, and both the construction of object silhouettes and the detection of the object's surface direction are realized.
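    The projection cosine rule mentioned above says that a tilted surface patch projects onto the image with an area reduced by cos(theta), so the tilt angle follows from the ratio of matched pixel counts. A minimal sketch (the pixel counts are illustrative, not from the paper):

```python
import math

def surface_tilt_deg(projected_pixels, frontal_pixels):
    """Tilt angle from the projection rule A_projected = A_frontal * cos(theta)."""
    ratio = min(projected_pixels / frontal_pixels, 1.0)  # clamp measurement noise
    return math.degrees(math.acos(ratio))

# A matched surface patch covering 100 px head-on but only 50 px as imaged
print(surface_tilt_deg(50, 100))  # ~60 degrees
```

The clamp guards against pixel-count noise pushing the ratio above 1, which would make acos undefined.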

  19. SENTINEL-2 Level 1 Products and Image Processing Performances

    NASA Astrophysics Data System (ADS)

    Baillarin, S. J.; Meygret, A.; Dechoz, C.; Petrucci, B.; Lacherade, S.; Tremas, T.; Isola, C.; Martimort, P.; Spoto, F.

    2012-07-01

    In partnership with the European Commission and in the frame of the Global Monitoring for Environment and Security (GMES) program, the European Space Agency (ESA) is developing the Sentinel-2 optical imaging mission devoted to the operational monitoring of land and coastal areas. The Sentinel-2 mission is based on a satellite constellation deployed in polar sun-synchronous orbit. While ensuring data continuity of the former SPOT and LANDSAT multi-spectral missions, Sentinel-2 will also offer major improvements such as a unique combination of global coverage with a wide field of view (290 km), a high revisit (5 days with two satellites), high resolution (10 m, 20 m and 60 m) and multi-spectral imagery (13 spectral bands in the visible and shortwave infra-red domains). In this context, the Centre National d'Etudes Spatiales (CNES) supports ESA to define the system image products and to prototype the relevant image processing techniques. This paper first offers an overview of the Sentinel-2 system and then introduces the image products delivered by the ground processing: the Level-0 and Level-1A are system products which correspond to raw compressed and uncompressed data, respectively (limited to internal calibration purposes); the Level-1B is the first public product, comprising radiometric corrections (dark signal, pixel response non-uniformity, crosstalk, defective pixels, restoration, and binning for the 60 m bands) and an enhanced physical geometric model appended to the product but not applied; the Level-1C provides ortho-rectified top-of-atmosphere reflectance with sub-pixel multi-spectral and multi-date registration, and a cloud and land/water mask is associated with the product. Note that the cloud mask also provides an indication of cirrus. The ground sampling distance of the Level-1C product will be 10 m, 20 m or 60 m according to the band. The final Level-1C product is tiled following a pre-defined grid of 100x100 km2, based on the UTM/WGS84 reference frame.

  20. Evaluation of a De-Identification Process for Ocular Imaging

    NASA Technical Reports Server (NTRS)

    LaPelusa, Michael B.; Mason, Sara S.; Taiym, Wafa F.; Sargsyan, Ashot; Lee, Lesley R.; Wear, Mary L.; Van Baalen, Mary

    2015-01-01

    Medical privacy of NASA astronauts requires an organized and comprehensive approach when data are made available outside NASA systems. A combination of factors, including the uniquely small patient population, the extensive medical testing done on these individuals, and the relative cultural popularity of the astronauts, puts them at far greater risk of exposure of personal information than the general public. Therefore, care must be taken to ensure that the astronauts' identities are concealed. Magnetic Resonance Imaging (MRI) medical data is a recent source of interest to researchers concerned with the development of Visual Impairment and Intracranial Pressure (VIIP) syndrome in the astronaut population. Each vision MRI scan of an astronaut includes 176 separate sagittal images that are saved as an "image series" for clinical use. In addition to the medical information these image sets provide, they also inherently contain a substantial amount of non-medical personally identifiable information (PII) such as name, date of birth, and date of exam. We have shown that an image set of this type can be rendered, using free software, to give an accurate representation of the patient's face. This currently restricts NASA from dispensing MRI data to researchers in a de-identified format. Automated software programs, such as the Brain Extraction Tool, are available to researchers who wish to de-identify sagittal MRI brain images by "erasing" identifying characteristics such as the nose and jaw. However, this software is not useful to NASA for vision research because it removes the portion of the images around the eye orbits, which is the main area of interest to researchers studying the VIIP syndrome. The Lifetime Surveillance of Astronaut Health program has resolved this issue by developing a protocol to de-identify sagittal MRI brain images using Showcase Premier, a DICOM (Digital Imaging and Communications in Medicine) software package.

  1. Temperature resolution enhancing of commercially available THz passive cameras due to computer processing of images

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.; Kuchik, Igor E.

    2014-06-01

    As is well known, the passive THz camera is a very promising tool for security applications: it allows concealed objects to be seen without physical contact and poses no danger to the person under inspection. The efficiency of a passive THz camera depends on its temperature resolution, which determines what can be detected: the minimal size of a concealed object, the maximal detection distance, and the image detail. One possible way to enhance image quality is computer processing of the images. Using computer processing of THz images of objects concealed on the human body, one may improve them many times over; consequently, the instrumental resolution of such a device may be increased without any additional engineering effort. We demonstrate new possibilities for seeing clothing details that the raw images produced by the THz cameras do not reveal. We achieve good image quality by applying various spatial filters, with the aim of demonstrating the independence of the processed images from the particular mathematical operations used. This result demonstrates the feasibility of detecting such objects. We consider images produced by passive THz cameras manufactured by Microsemi Corp., ThruVision Corp., and Capital Normal University (Beijing, China).
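    A common spatial-filtering pipeline of the kind described (which particular filters the authors used is not stated here) is denoising followed by edge enhancement. A minimal sketch using SciPy, with an unsharp-mask gain chosen arbitrarily for illustration:

```python
import numpy as np
from scipy import ndimage

def enhance(thz_image):
    """Denoise with a 3x3 median filter, then sharpen with an unsharp mask."""
    denoised = ndimage.median_filter(thz_image, size=3)
    blurred = ndimage.gaussian_filter(denoised, sigma=1.0)
    return denoised + 0.8 * (denoised - blurred)  # 0.8 gain is an assumption

rng = np.random.default_rng(0)
img = np.zeros((32, 32))
img[10:22, 10:22] = 1.0                       # synthetic concealed "object"
noisy = img + rng.normal(0, 0.3, img.shape)   # low-SNR passive THz frame stand-in
out = enhance(noisy)
print(out.shape)
```

The median filter suppresses impulsive noise without blurring the object boundary, and the unsharp mask restores edge contrast that the smoothing removes.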

  2. Image data processing system requirements study. Volume 1: Analysis. [for Earth Resources Survey Program

    NASA Technical Reports Server (NTRS)

    Honikman, T.; Mcmahon, E.; Miller, E.; Pietrzak, L.; Yorsz, W.

    1973-01-01

    Digital image processing, image recorders, high-density digital data recorders, and data system element processing for use in an Earth Resources Survey image data processing system are studied. Loading to various ERS systems is also estimated by simulation.

  3. Improvement of pattern collapse issue by additive-added D.I. water rinse process

    NASA Astrophysics Data System (ADS)

    Tanaka, Keiichi; Naito, Ryoichiro; Kitada, Tomohiro; Kiba, Yukio; Yamada, Yoshiaki; Kobayashi, Masakazu; Ichikawa, Hiroyuki

    2003-06-01

    Reduction of critical dimensions in lithography is being aggressively promoted. At the same time, further reduction of resist thickness is being pursued to increase the resolution capabilities of resists. However, thin films have their limits because of etch requirements, etc. As a result, these reductions increase the aspect ratio, which leads to pattern collapse. It is well known that at the drying step of the develop process, capillary forces act on the photoresist pattern; if the capillary force is greater than the aggregation force of the resist pattern, pattern collapse occurs. The key parameters of the capillary effect are the space width between patterns, the aspect ratio, the contact angle of the D.I. water rinse, and the surface tension of the rinse solution. Among these parameters, the surface tension of the rinse solution is the one we can control. On the other hand, we have already reported that the penetration of TMAH and D.I. water into the resist plays an important role in the lithographic latitude. For example, when we use a resist into which TMA ions can easily diffuse, the D.I. water and TMA ions that penetrate the resist decrease the aggregation force of the resist pattern and cause pattern collapse even under a weak force on the resist pattern. These results indicate that the swelling of photoresist by TMA ions and water is a very important factor in controlling pattern collapse. Currently, two methods are mainly tried to reduce the surface tension of the rinse solution: SCF (Super Critical Fluid) rinsing and the addition of an additive to the D.I. water rinse. We used the latter method this time, because this technique is retrofittable and requires no special tool. In this evaluation, we found that the degree of suppression of pattern collapse depends on the additive chemistry and formulation.
With consideration given to process factors such as above, we investigated what factors contribute to suppressing pattern collapse

  4. Brain responses strongly correlate with Weibull image statistics when processing natural images.

    PubMed

    Scholte, H Steven; Ghebreab, Sennay; Waldorp, Lourens; Smeulders, Arnold W M; Lamme, Victor A F

    2009-01-01

    The visual appearance of natural scenes is governed by a surprisingly simple hidden structure. The distributions of contrast values in natural images generally follow a Weibull distribution, with beta and gamma as free parameters. Beta and gamma seem to structure the space of natural images in an ecologically meaningful way, in particular with respect to the fragmentation and texture similarity within an image. Since it is often assumed that the brain exploits structural regularities in natural image statistics to efficiently encode and analyze visual input, we here ask ourselves whether the brain approximates the beta and gamma values underlying the contrast distributions of natural images. We present a model that shows that beta and gamma can be easily estimated from the outputs of X-cells and Y-cells. In addition, we covaried the EEG responses of subjects viewing natural images with the beta and gamma values of those images. We show that beta and gamma explain up to 71% of the variance of the early ERP signal, substantially outperforming other tested contrast measurements. This suggests that the brain is strongly tuned to the image's beta and gamma values, potentially providing the visual system with an efficient way to rapidly classify incoming images on the basis of omnipresent low-level natural image statistics. PMID:19757938
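    The beta and gamma parameters discussed above can be estimated directly by fitting a Weibull distribution to an image's contrast (gradient-magnitude) values. A minimal sketch with SciPy; mapping beta to the Weibull scale and gamma to the shape is an assumption of this sketch, and the random test image is purely illustrative:

```python
import numpy as np
from scipy import stats

def contrast_weibull_params(image):
    """Fit a Weibull to the gradient-magnitude (contrast) distribution.
    Returns (beta, gamma) as (scale, shape) -- an assumed mapping."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy).ravel()
    mag = mag[mag > 0]                                  # Weibull support is positive
    shape, loc, scale = stats.weibull_min.fit(mag, floc=0)
    return scale, shape

rng = np.random.default_rng(1)
img = rng.random((64, 64))                              # stand-in for a natural image
beta, gamma = contrast_weibull_params(img)
print(beta > 0 and gamma > 0)
```

Fitting with the location fixed at zero (`floc=0`) matches the two-parameter form of the distribution the abstract describes.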

  5. The Multimission Image Processing Laboratory's virtual frame buffer interface

    NASA Technical Reports Server (NTRS)

    Wolfe, T.

    1984-01-01

    Large image processing systems use multiple frame buffers with differing architectures and vendor-supplied interfaces. This variety of architectures and interfaces creates software development, maintenance, and portability problems for application programs. Several machine-independent graphics standards such as ANSI Core and GKS are available, but none of them are adequate for image processing. Therefore, the Multimission Image Processing Laboratory project has implemented a programmer-level virtual frame buffer interface. This interface makes all frame buffers appear as a generic frame buffer with a specified set of characteristics. This document defines the virtual frame buffer interface and provides information such as FORTRAN subroutine definitions, frame buffer characteristics, sample programs, etc. It is intended to be used by application programmers and by system programmers who are adding new frame buffers to a system.
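    The design pattern described, a fixed generic interface with one adapter per vendor device, can be sketched in a few lines. This is a modern illustration of the idea only, not the original FORTRAN interface; all class and method names here are hypothetical:

```python
from abc import ABC, abstractmethod

class VirtualFrameBuffer(ABC):
    """Generic frame buffer: applications code against this fixed interface,
    and each vendor device is wrapped in its own subclass."""

    @abstractmethod
    def write_pixels(self, x, y, values): ...

    @abstractmethod
    def read_pixels(self, x, y, n): ...

class InMemoryFrameBuffer(VirtualFrameBuffer):
    """Stand-in 'device': a 512x512 8-bit buffer backed by a plain list."""
    def __init__(self, width=512, height=512):
        self.width, self.data = width, [0] * (width * height)

    def write_pixels(self, x, y, values):
        i = y * self.width + x
        self.data[i:i + len(values)] = [v & 0xFF for v in values]

    def read_pixels(self, x, y, n):
        i = y * self.width + x
        return self.data[i:i + n]

fb = InMemoryFrameBuffer()
fb.write_pixels(10, 3, [1, 2, 300])   # 300 wraps to 8 bits -> 44
print(fb.read_pixels(10, 3, 3))       # [1, 2, 44]
```

Because applications see only `VirtualFrameBuffer`, adding a new device means writing one adapter subclass rather than porting every application program.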

  6. A spatial planetary image database in the context of processing

    NASA Astrophysics Data System (ADS)

    Willner, K.; Tasdelen, E.

    2015-10-01

    Planetary image data are collected and archived by, e.g., the European Planetary Science Archive (PSA) or its US counterpart, the Planetary Data System (PDS). These archives usually organize the data according to missions and their respective instruments. Search queries can be posted to retrieve data of interest for a specific instrument data set. In the context of processing data from a number of sensors and missions this is not practical. In the scope of the EU FP7 project PRoViDE, meta-data from imaging sensors were collected from the PSA as well as the PDS and were rearranged and restructured according to the processing needs. Exemplary image data gathered from rover and lander missions operated on the Martian surface were organized into a new unique database. The database is a core component of the PRoViDE processing and visualization system, as it enables multi-mission and multi-sensor searches to fully exploit the collected data.

  7. Automated Processing of Zebrafish Imaging Data: A Survey

    PubMed Central

    Dickmeis, Thomas; Driever, Wolfgang; Geurts, Pierre; Hamprecht, Fred A.; Kausler, Bernhard X.; Ledesma-Carbayo, María J.; Marée, Raphaël; Mikula, Karol; Pantazis, Periklis; Ronneberger, Olaf; Santos, Andres; Stotzka, Rainer; Strähle, Uwe; Peyriéras, Nadine

    2013-01-01

    Abstract Due to the relative transparency of its embryos and larvae, the zebrafish is an ideal model organism for bioimaging approaches in vertebrates. Novel microscope technologies allow the imaging of developmental processes in unprecedented detail, and they enable the use of complex image-based read-outs for high-throughput/high-content screening. Such applications can easily generate Terabytes of image data, the handling and analysis of which becomes a major bottleneck in extracting the targeted information. Here, we describe the current state of the art in computational image analysis in the zebrafish system. We discuss the challenges encountered when handling high-content image data, especially with regard to data quality, annotation, and storage. We survey methods for preprocessing image data for further analysis, and describe selected examples of automated image analysis, including the tracking of cells during embryogenesis, heartbeat detection, identification of dead embryos, recognition of tissues and anatomical landmarks, and quantification of behavioral patterns of adult fish. We review recent examples for applications using such methods, such as the comprehensive analysis of cell lineages during early development, the generation of a three-dimensional brain atlas of zebrafish larvae, and high-throughput drug screens based on movement patterns. Finally, we identify future challenges for the zebrafish image analysis community, notably those concerning the compatibility of algorithms and data formats for the assembly of modular analysis pipelines. PMID:23758125

  8. Natural language processing and visualization in the molecular imaging domain.

    PubMed

    Tulipano, P Karina; Tao, Ying; Millar, William S; Zanzonico, Pat; Kolbert, Katherine; Xu, Hua; Yu, Hong; Chen, Lifeng; Lussier, Yves A; Friedman, Carol

    2007-06-01

    Molecular imaging is at the crossroads of genomic sciences and medical imaging. Information within the molecular imaging literature could be used to link to genomic and imaging information resources and to organize and index images in a way that is potentially useful to researchers. A number of natural language processing (NLP) systems are available to automatically extract information from genomic literature. One existing NLP system, known as BioMedLEE, automatically extracts biological information consisting of biomolecular substances and phenotypic data. This paper focuses on the adaptation, evaluation, and application of BioMedLEE to the molecular imaging domain. In order to adapt BioMedLEE for this domain, we extend an existing molecular imaging terminology and incorporate it into BioMedLEE. BioMedLEE's performance is assessed with a formal evaluation study. The system's performance, measured as recall and precision, is 0.74 (95% CI: [.70-.76]) and 0.70 (95% CI [.63-.76]), respectively. We adapt a JAVA viewer known as PGviewer for the simultaneous visualization of images with NLP extracted information. PMID:17084109

  9. Discrete wavelet transform core for image processing applications

    NASA Astrophysics Data System (ADS)

    Savakis, Andreas E.; Carbone, Richard

    2005-02-01

    This paper presents a flexible hardware architecture for performing the Discrete Wavelet Transform (DWT) on a digital image. The proposed architecture uses a variation of the lifting scheme technique and provides advantages that include small memory requirements, fixed-point arithmetic implementation, and a small number of arithmetic computations. The DWT core may be used for image processing operations, such as denoising and image compression. For example, the JPEG2000 still image compression standard uses the Cohen-Daubechies-Feauveau (CDF) 5/3 and CDF 9/7 DWTs for lossless and lossy image compression, respectively. Simple wavelet image denoising techniques resulted in improved images of up to 27 dB PSNR. The DWT core is modeled using MATLAB and VHDL. The VHDL model is synthesized to a Xilinx FPGA to demonstrate hardware functionality. The CDF 5/3 and CDF 9/7 versions of the DWT are both modeled and used as comparisons. The execution time for performing both DWTs is nearly identical at approximately 14 clock cycles per image pixel for one level of DWT decomposition. The hardware area generated for the CDF 5/3 is around 15,000 gates using only 5% of the Xilinx FPGA hardware area, at 2.185 MHz max clock speed and 24 mW power consumption.
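    The lifting scheme the paper builds on factors the CDF 5/3 transform into a predict step (odd samples minus the average of their even neighbors) and an update step, using only integer additions and shifts, which is why it maps so cheaply to fixed-point hardware. A minimal single-level 1-D sketch with simple symmetric boundary handling (a software illustration of the lifting steps, not the paper's VHDL architecture):

```python
def fwd_cdf53(x):
    """One level of the integer CDF 5/3 lifting transform (even-length input)."""
    n = len(x)
    h = n // 2
    # Predict: detail = odd sample minus floor-average of neighbouring evens
    d = [x[2*i+1] - ((x[2*i] + x[2*i+2 if 2*i+2 < n else n-2]) // 2)
         for i in range(h)]
    # Update: approximation = even sample plus rounded quarter-sum of details
    s = [x[2*i] + ((d[i-1 if i > 0 else 0] + d[i] + 2) // 4)
         for i in range(h)]
    return s, d

def inv_cdf53(s, d):
    """Exactly undo the two lifting steps, giving perfect reconstruction."""
    h = len(s)
    e = [s[i] - ((d[i-1 if i > 0 else 0] + d[i] + 2) // 4) for i in range(h)]
    x = []
    for i in range(h):
        x.append(e[i])
        x.append(d[i] + ((e[i] + e[i+1 if i+1 < h else h-1]) // 2))
    return x

x = [87, 91, 80, 12, 45, 200, 33, 34]
s, d = fwd_cdf53(x)
print(inv_cdf53(s, d) == x)  # perfect integer reconstruction -> True
```

Because every lifting step is inverted exactly, the transform is lossless on integers, which is the property JPEG2000 exploits for its lossless mode.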

  10. Real-time microstructural and functional imaging and image processing in optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Westphal, Volker

    Optical Coherence Tomography (OCT) is a noninvasive optical imaging technique that allows high-resolution cross-sectional imaging of tissue microstructure, achieving a spatial resolution of about 10 μm. OCT is similar to B-mode ultrasound (US) except that it uses infrared light instead of ultrasound. In contrast to US, no coupling gel is needed, simplifying image acquisition. Furthermore, the fiber-optic implementation of OCT is compatible with endoscopes. In recent years, the transition from slow bench-top imaging systems to real-time clinical systems has been under way. This has led to a variety of applications, namely in ophthalmology, gastroenterology, dermatology and cardiology. First, this dissertation will demonstrate that OCT is capable of imaging and differentiating clinically relevant tissue structures in the gastrointestinal tract. A careful in vitro correlation study between endoscopic OCT images and corresponding histological slides was performed. Besides structural imaging, OCT systems were further developed for functional imaging, for example to visualize blood flow. Previously, imaging flow in small vessels in real time was not possible. For this research, a new processing scheme similar to real-time Doppler in US was introduced. It was implemented in dedicated hardware to allow real-time acquisition and overlaid display of blood flow in vivo. A sensitivity of 0.5 mm/s was achieved. Optical coherence microscopy (OCM) is a variation of OCT that improves the resolution even further, to a few micrometers. Advances made in the OCT scan engine for the Doppler setup enabled real-time imaging in vivo with OCM. In order to generate geometrically correct images for all the previous applications in real time, extensive image processing algorithms were developed. Algorithms for the correction of distortions due to non-telecentric scanning, nonlinear scan mirror movements, and refraction were developed and demonstrated. This has led to interesting new

  11. Digital image processing: a primer for JVIR authors and readers: Part 3: Digital image editing.

    PubMed

    LaBerge, Jeanne M; Andriole, Katherine P

    2003-12-01

    This is the final installment of a three-part series on digital image processing intended to prepare authors for online submission of manuscripts. In the first two articles of the series, the fundamentals of digital image architecture were reviewed and methods of importing images to the computer desktop were described. In this article, techniques are presented for editing images in preparation for online submission. A step-by-step guide to basic editing with use of Adobe Photoshop is provided and the ethical implications of this activity are explored. PMID:14654480

  12. Special Software for Planetary Image Processing and Research

    NASA Astrophysics Data System (ADS)

    Zubarev, A. E.; Nadezhdina, I. E.; Kozlova, N. A.; Brusnikin, E. S.; Karachevtseva, I. P.

    2016-06-01

    Special modules for the photogrammetric processing of remote sensing data were developed that make it possible to organize and optimize planetary studies effectively. The commercial software package PHOTOMOD™ is used as the base application. Special modules were created to perform various types of data processing: calculation of preliminary navigation parameters, calculation of the shape parameters of a celestial body, global-view image orthorectification, and estimation of Sun illumination and Earth visibility from the planetary surface. For photogrammetric processing, different types of data have been used, including images of the Moon, Mars, Mercury, Phobos, the Galilean satellites and Enceladus obtained by frame or push-broom cameras. We used modern planetary data and images taken over many years, acquired from orbit under various illumination conditions and at various resolutions, as well as images obtained by planetary rovers from the surface. Planetary image processing is a complex task that can take from a few months to years. We present an efficient pipeline procedure that makes it possible to obtain different data products and supports the long way from planetary images to celestial body maps. The obtained data - new three-dimensional control point networks, elevation models, orthomosaics - have supported accurate map production: a new Phobos atlas (Karachevtseva et al., 2015) and various thematic maps derived from studies of the planetary surface (Karachevtseva et al., 2016a).

  13. An Automated Image Processing System for Concrete Evaluation

    SciTech Connect

    Baumgart, C.W.; Cave, S.P.; Linder, K.E.

    1998-11-23

    AlliedSignal Federal Manufacturing & Technologies (FM&T) was asked to perform a proof-of-concept study for the Missouri Highway and Transportation Department (MHTD), Research Division, in June 1997. The goal of this proof-of-concept study was to ascertain if automated scanning and imaging techniques might be applied effectively to the problem of concrete evaluation. In the current evaluation process, a concrete sample core is manually scanned under a microscope. Voids (or air spaces) within the concrete are then detected visually by a human operator by incrementing the sample under the cross-hairs of a microscope and by counting the number of "pixels" which fall within a void. Automation of the scanning and image analysis processes is desired to improve the speed of the scanning process, to improve evaluation consistency, and to reduce operator fatigue. An initial, proof-of-concept image analysis approach was successfully developed and demonstrated using acquired black and white imagery of concrete samples. In this paper, the automated scanning and image capture system currently under development will be described and the image processing approach developed for the proof-of-concept study will be demonstrated. A development update and plans for future enhancements are also presented.
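    The automated counterpart of the manual void-counting procedure described above is, at its core, thresholding and pixel counting. A minimal sketch on a synthetic grayscale scan (the threshold value and image are illustrative assumptions, not from the study):

```python
import numpy as np

def void_fraction(gray, threshold=60):
    """Fraction of pixels darker than `threshold`, counted as air voids.
    The threshold value is an illustrative assumption."""
    gray = np.asarray(gray)
    return float((gray < threshold).sum()) / gray.size

# Synthetic 10x10 "core scan": bright paste (200) with a 2x2 dark void (30)
scan = np.full((10, 10), 200, dtype=np.uint8)
scan[4:6, 4:6] = 30
print(void_fraction(scan))  # 4 void pixels / 100 -> 0.04
```

Replacing a human counting pixels under cross-hairs with this kind of whole-image operation is what yields the speed and consistency gains the study targets.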

  14. A methodology for evaluation of an interactive multispectral image processing system

    NASA Technical Reports Server (NTRS)

    Kovalick, William M.; Newcomer, Jeffrey A.; Wharton, Stephen W.

    1987-01-01

    Because of the considerable cost of an interactive multispectral image processing system, an evaluation of a prospective system should be performed to ascertain if it will be acceptable to the anticipated users. Evaluation of a developmental system indicated that the important system elements include documentation, user friendliness, image processing capabilities, and system services. The criteria and evaluation procedures for these elements are described herein. The following factors contributed to the success of the evaluation of the developmental system: (1) careful review of documentation prior to program development, (2) construction and testing of macromodules representing typical processing scenarios, (3) availability of other image processing systems for referral and verification, and (4) use of testing personnel with an applications perspective and experience with other systems. This evaluation was done in addition to and independently of program testing by the software developers of the system.

  15. Image Algebra Matlab language version 2.3 for image processing and compression research

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.; Hayden, Eric

    2010-08-01

    Image algebra is a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain. Image algebra was developed under DARPA and US Air Force sponsorship at University of Florida for over 15 years beginning in 1984. Image algebra has been implemented in a variety of programming languages designed specifically to support the development of image processing and computer vision algorithms and software. The University of Florida has been associated with development of the languages FORTRAN, Ada, Lisp, and C++. The latter implementation involved a class library, iac++, that supported image algebra programming in C++. Since image processing and computer vision are generally performed with operands that are array-based, the Matlab™ programming language is ideal for implementing the common subset of image algebra. Objects include sets and set operations, images and operations on images, as well as templates and image-template convolution operations. This implementation, called Image Algebra Matlab (IAM), has been found to be useful for research in data, image, and video compression, as described herein. Due to the widespread acceptance of the Matlab programming language in the computing community, IAM offers exciting possibilities for supporting a large group of users. The control over an object's computational resources provided to the algorithm designer by Matlab means that IAM programs can employ versatile representations for the operands and operations of the algebra, which are supported by the underlying libraries written in Matlab. In a previous publication, we showed how the functionality of IAC++ could be carried forth into a Matlab implementation, and provided practical details of a prototype implementation called IAM Version 1. In this paper, we further elaborate the purpose and structure of image algebra, then present a maturing implementation of Image Algebra Matlab called IAM Version 2.3, which extends the previous implementation
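
    The image-template convolution mentioned above generalizes ordinary convolution: each neighbor value is combined with its template weight by one operation, and the combined values are reduced by another (multiply/sum gives linear convolution; add/max gives morphological dilation). A small Python sketch of this generalized product — illustrative only, not code from the IAM library:

```python
def image_template_product(image, template, combine, reduce_fn, identity):
    """Generalized image-template product from image algebra.

    combine(pixel, weight) pairs each in-bounds neighbor with its
    template weight; reduce_fn folds the combined values, starting
    from `identity`. The template origin is taken at its center.
    """
    rows, cols = len(image), len(image[0])
    t_rows, t_cols = len(template), len(template[0])
    oy, ox = t_rows // 2, t_cols // 2
    out = [[identity] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            acc = identity
            for ty in range(t_rows):
                for tx in range(t_cols):
                    y, x = r + ty - oy, c + tx - ox
                    if 0 <= y < rows and 0 <= x < cols:
                        acc = reduce_fn(acc, combine(image[y][x], template[ty][tx]))
            out[r][c] = acc
    return out

img = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
box = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
# multiply/sum: linear convolution with a 3x3 box filter
linear = image_template_product(img, box, lambda p, w: p * w, lambda a, b: a + b, 0)
# add/max: grayscale dilation by the same 3x3 structuring element
dilated = image_template_product(img, box, lambda p, w: p + w, max, float("-inf"))
```

    Swapping only the `combine`/`reduce_fn` pair switches between linear filtering and morphological operators, which is the unification the image algebra notation provides.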

  16. Grid Computing Application for Brain Magnetic Resonance Image Processing

    NASA Astrophysics Data System (ADS)

    Valdivia, F.; Crépeault, B.; Duchesne, S.

    2012-02-01

    This work emphasizes the use of grid computing and web technology for automatic post-processing of brain magnetic resonance images (MRI) in the context of neuropsychiatric (Alzheimer's disease) research. Post-acquisition image processing is achieved through the interconnection of several individual processes into pipelines. Each process has input and output data ports, options and execution parameters, and performs single tasks such as: a) extracting individual image attributes (e.g. dimensions, orientation, center of mass), b) performing image transformations (e.g. scaling, rotation, skewing, intensity standardization, linear and non-linear registration), c) performing image statistical analyses, and d) producing the necessary quality control images and/or files for user review. The pipelines are built to perform specific sequences of tasks on the alphanumeric data and MRIs contained in our database. The web application is coded in PHP and allows the creation of scripts to create, store and execute pipelines and their instances either on our local cluster or on high-performance computing platforms. To run an instance on an external cluster, the web application opens a communication tunnel through which it copies the necessary files, submits the execution commands and collects the results. We present results from system tests for the processing of a set of 821 brain MRIs from the Alzheimer's Disease Neuroimaging Initiative study via a nonlinear registration pipeline composed of 10 processes. Our results show successful execution on both local and external clusters, and a 4-fold increase in performance when using the external cluster. However, the latter's performance does not scale linearly, as queue waiting times and execution overhead increase with the number of tasks to be executed.
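
    The pipeline model described — individual processes with input and output ports chained in sequence — can be sketched as follows. The class and stage names are illustrative stand-ins, not the actual PHP application's API:

```python
class Process:
    """A single pipeline stage: a named task applied to the data
    flowing through its input port, emitted on its output port."""

    def __init__(self, name, task):
        self.name = name
        self.task = task  # function: data -> data

    def run(self, data):
        return self.task(data)


class Pipeline:
    """Chains processes so each stage's output feeds the next stage's input."""

    def __init__(self, processes):
        self.processes = processes

    def run(self, data):
        executed = []
        for proc in self.processes:
            data = proc.run(data)
            executed.append(proc.name)
        return data, executed


# Toy stages standing in for real MRI operations:
pipeline = Pipeline([
    Process("extract_attributes",
            lambda img: {"voxels": img, "dims": len(img)}),
    Process("intensity_standardize",
            lambda d: {**d, "voxels": [v / max(d["voxels"]) for v in d["voxels"]]}),
    Process("quality_control",
            lambda d: {**d, "qc_passed": all(0 <= v <= 1 for v in d["voxels"])}),
])
result, stages = pipeline.run([2.0, 4.0, 8.0])
```

    In the real system each stage would be a separate executable scheduled on the cluster, with file-based ports; the chaining logic, however, is the same.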

  17. Processing of New Materials by Additive Manufacturing: Iron-Based Alloys Containing Silver for Biomedical Applications

    NASA Astrophysics Data System (ADS)

    Niendorf, Thomas; Brenne, Florian; Hoyer, Peter; Schwarze, Dieter; Schaper, Mirko; Grothe, Richard; Wiesener, Markus; Grundmeier, Guido; Maier, Hans Jürgen

    2015-07-01

    In the biomedical sector, production of bioresorbable implants remains challenging due to the improper dissolution rates or deficient strength of many candidate alloys. Promising materials for overcoming these drawbacks are iron-based alloys containing silver. However, due to the immiscibility of iron and silver, these alloys cannot be manufactured by conventional processing routes. In this study, iron-manganese-silver alloys were synthesized for the first time by means of additive manufacturing. Combined mechanical, microscopic, and electrochemical studies show that silver particles well distributed in the matrix can be obtained, leading to cathodic sites in the composite material. This, in turn, results in an increased dissolution rate of the alloy. Stress-strain curves showed that the incorporation of silver barely affects the mechanical properties.

  18. Transition metal-catalyzed process for addition of amines to carbon-carbon double bonds

    DOEpatents

    Hartwig, John F.; Kawatsura, Motoi; Loeber, Oliver

    2002-01-01

    The present invention is directed to a process for addition of amines to carbon-carbon double bonds in a substrate, comprising: reacting an amine with a compound containing at least one carbon-carbon double bond in the presence of a transition metal catalyst under reaction conditions effective to form a product having a covalent bond between the amine and a carbon atom of the former carbon-carbon double bond. The transition metal catalyst comprises a Group 8 metal and a ligand containing one or more 2-electron donor atoms. The present invention is also directed to enantioselective reactions of amine compounds with compounds containing carbon-carbon double bonds, and a colorimetric assay to evaluate potential catalysts in these reactions.

  19. Reactive Additive Stabilization Process (RASP) for hazardous and mixed waste vitrification

    SciTech Connect

    Jantzen, C.M.; Pickett, J.B.; Ramsey, W.G.

    1993-07-01

    Solidification of hazardous/mixed wastes into glass is being examined at the Savannah River Site (SRS) for (1) nickel plating line (F006) sludges and (2) incinerator wastes. Vitrification of these wastes using high surface area additives, the Reactive Additive Stabilization Process (RASP), has been determined to greatly enhance the dissolution and retention of hazardous, mixed, and heavy metal species in glass. RASP lowers melt temperatures (typically 1050-1150 °C), thereby minimizing volatility concerns during vitrification. RASP maximizes waste loading (typically 50-75 wt% on a dry oxide basis) by taking advantage of the glass forming potential of the waste. RASP vitrification thereby minimizes waste disposal volume (typically 86-97 vol%), and maximizes cost savings. Solidification of the F006 plating line sludges containing depleted uranium has been achieved in both soda-lime-silica (SLS) and borosilicate glasses at 1150 °C up to waste loadings of 75 wt%. Solidification of incinerator blowdown and mixtures of incinerator blowdown and bottom kiln ash have been achieved in SLS glass at 1150 °C up to waste loadings of 50 wt% using RASP. These waste loadings correspond to volume reductions of 86 and 94 vol%, respectively, with large associated savings in storage costs.

  20. [Adaptive reactions of dehydrogenation processes in root voles under additional physical impacts].

    PubMed

    Kudiasheva, A G; Taskaev, A I

    2011-01-01

    Variations of dehydrogenase activity (succinate dehydrogenase, pyruvate dehydrogenase, lactate dehydrogenase) in the heart muscle, liver and brain of root voles (Microtus oeconomus Pall.) and their progeny under additional stress effects (chronic low-level gamma-irradiation, short-term exposure to cold) have been studied. Root voles (parents) were caught in areas with normal and with high-level natural radioactivity in the Republic of Komi. It was revealed that the direction of the shifts in dehydrogenase activity in response to physical factors is determined by the initial level of oxidation processes in the tissues of root voles and their progeny that were not subjected to these exposures. Root voles and their progeny (generations 1-3) from the radium zone showed lower functional reserves in response to the additional exposures than animals from the control zone. In some cases, chronic low-level irradiation and short-term cooling leveled the differences between groups of animals that initially differed from each other in biochemical indexes. PMID:22279768