Science.gov

Sample records for acquisition image processing

  1. Graphical user interface for image acquisition and processing

    DOEpatents

    Goldberg, Kenneth A.

    2002-01-01

An event-driven, GUI-based image acquisition interface for the IDL programming environment, designed for CCD camera control and image acquisition directly into the IDL environment, where image manipulation and data analysis can be performed, together with a toolbox of real-time analysis applications. Running the image acquisition hardware directly from IDL removes the necessity of first saving images in one program and then importing the data into IDL for analysis in a second step. Bringing the data directly into IDL creates an opportunity for the implementation of IDL image processing and display functions in real time. The program allows control over the available charge-coupled device (CCD) detector parameters, data acquisition, file saving and loading, and image manipulation and processing, all from within IDL. The program is built using IDL's widget libraries to control the on-screen display and user interface.

  2. Acquisition and Post-Processing of Immunohistochemical Images.

    PubMed

    Sedgewick, Jerry

    2017-01-01

Augmentation of digital images is almost always a necessity in order to obtain a reproduction that matches the appearance of the original. However, that augmentation can mislead if it is done incorrectly and not within reasonable limits. When procedures are in place for ensuring that originals are archived, and image manipulation steps are reported, scientists not only follow good laboratory practices, but avoid ethical issues associated with post-processing and protect their labs from any future allegations of scientific misconduct. Also, when procedures are in place for correct acquisition of images, the extent of post-processing is minimized or eliminated. These procedures include white balancing (for brightfield images), keeping tonal values within the dynamic range of the detector, frame averaging to eliminate noise (typically in fluorescence imaging), use of the highest bit depth when a choice is available, flatfield correction, and archiving of the image in a non-lossy format (not JPEG). When post-processing is necessary, the commonly used applications for correction include Photoshop and ImageJ, but a free program (GIMP) can also be used. Corrections to images include scaling the bit depth to higher and lower ranges, removing color casts from brightfield images, setting brightness and contrast, reducing color noise, reducing "grainy" noise, conversion of pure colors to grayscale, conversion of grayscale to colors typically used in fluorescence imaging, correction of uneven illumination (flatfield correction), merging color images (fluorescence), and extending the depth of focus. These corrections are explained in step-by-step procedures in the chapter that follows.
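Two of the acquisition-side procedures listed above, frame averaging and flatfield correction, are simple enough to illustrate directly. The following is a minimal NumPy sketch of both (illustrative only, not the chapter's actual step-by-step procedure; the flat/dark frames are assumed inputs):

```python
import numpy as np

def frame_average(frames):
    """Average a stack of noisy frames to suppress uncorrelated (e.g. shot) noise."""
    return np.mean(np.stack(frames), axis=0)

def flatfield_correct(image, flat, dark):
    """Classic flatfield correction: (I - D) / (F - D), rescaled by the mean gain
    so the corrected image keeps roughly the original intensity range."""
    gain = flat.astype(float) - dark
    safe_gain = np.where(gain == 0, 1.0, gain)   # avoid division by zero
    return (image.astype(float) - dark) / safe_gain * gain.mean()
```

Averaging N frames reduces uncorrelated noise by roughly the square root of N, which is why it is listed as a fluorescence-imaging staple.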

  3. PET/CT for radiotherapy: image acquisition and data processing.

    PubMed

    Bettinardi, V; Picchio, M; Di Muzio, N; Gianolli, L; Messa, C; Gilardi, M C

    2010-10-01

This paper focuses on acquisition and processing methods in positron emission tomography/computed tomography (PET/CT) for radiotherapy (RT) applications. The recent technological evolution of PET/CT systems is described. Particular emphasis is dedicated to the tools needed for patient positioning and immobilization, to be used in PET/CT studies as well as during RT treatment sessions. The effect of organ and lesion motion due to the patient's respiration on PET/CT imaging is discussed. Breathing protocols proposed to minimize PET/CT spatial mismatches in relation to respiratory movements are illustrated. The respiratory-gated (RG) 4D-PET/CT techniques, developed to measure and compensate for organ and lesion motion, are then introduced. Finally, a description is provided of different acquisition and data processing techniques, implemented with the aim of improving: i) the image quality and quantitative accuracy of PET images, and ii) target volume definition and treatment planning in RT, by using specific and personalised motion information.

  4. System of acquisition and processing of images of dynamic speckle

    NASA Astrophysics Data System (ADS)

Vega, F.; Torres, C.

    2015-01-01

In this paper we show the design and implementation of a system for the capture and analysis of dynamic speckle. The device consists of a USB camera, an isolated lighting system for imaging, a 633 nm, 10 mW laser pointer as the coherent light source, a diffuser, and a laptop for video processing. The equipment enables the acquisition and storage of video, as well as the calculation of different statistical-analysis descriptors (global activity accumulation vector, activity accumulation matrix, cross-correlation vector, autocorrelation coefficient, Fujii matrix, etc.). The equipment is designed so that it can be taken directly to the site of the biological sample under study, and it is currently being used in research projects within the group.
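Among the descriptors listed, the Fujii matrix is the most commonly cited activity measure for dynamic speckle. A minimal NumPy sketch of the standard formulation (accumulated normalized absolute difference between consecutive frames; this is the textbook definition, not necessarily the exact variant implemented in the paper's equipment):

```python
import numpy as np

def fujii_matrix(stack):
    """Fujii activity map for a temporal stack of speckle frames, shape (T, H, W).

    Accumulates |I_k - I_{k+1}| / (I_k + I_{k+1}) over consecutive frame pairs;
    bright pixels in the result indicate high temporal speckle activity.
    """
    stack = stack.astype(float)
    num = np.abs(np.diff(stack, axis=0))         # frame-to-frame differences
    den = stack[:-1] + stack[1:]
    return np.sum(num / np.where(den == 0, 1.0, den), axis=0)
```

A static region (constant intensity over time) maps to zero, so the map directly separates biologically active from inactive areas.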

  5. High resolution x-ray medical sequential image acquisition and processing system based on PCI interface

    NASA Astrophysics Data System (ADS)

    Lu, Dongming; Chen, Qian; Gu, Guohua

    2003-11-01

    In the field of medical application, it is of great importance to adopt digital image processing technique. Based on the characteristics of medical image, we introduced the digital image processing method to the X-ray imaging system, and developed a high resolution x-ray medical sequential image acquisition and processing system that employs image enhancer and CCD. This system consists of three basic modules, namely sequential image acquisition, data transfer and system control, and image processing. Under the control of FPGA (Field Programmable Gate Array), images acquired by the front-end circuit are transmitted to a PC through high speed PCI bus, and then optimized by the image processing program. The software kits, which include PCI Device Driver and Image Processing Package, are developed with Visual C++ Language based on Windows OS. In this paper, we present a general introduction to the principle and the operating procedure of X-ray Sequential Image Acquisition and Processing System, with special emphasis on the key issues of the hardware design. In addition, the context, principle, status quo and the digitizing trend of X-ray Imaging are explained succinctly. Finally, the preliminary experimental results are shown to demonstrate that the system is capable of achieving high quality X-ray sequential images.

  6. Image Acquisition Context

    PubMed Central

    Bidgood, W. Dean; Bray, Bruce; Brown, Nicolas; Mori, Angelo Rossi; Spackman, Kent A.; Golichowski, Alan; Jones, Robert H.; Korman, Louis; Dove, Brent; Hildebrand, Lloyd; Berg, Michael

    1999-01-01

    Objective: To support clinically relevant indexing of biomedical images and image-related information based on the attributes of image acquisition procedures and the judgments (observations) expressed by observers in the process of image interpretation. Design: The authors introduce the notion of “image acquisition context,” the set of attributes that describe image acquisition procedures, and present a standards-based strategy for utilizing the attributes of image acquisition context as indexing and retrieval keys for digital image libraries. Methods: The authors' indexing strategy is based on an interdependent message/terminology architecture that combines the Digital Imaging and Communication in Medicine (DICOM) standard, the SNOMED (Systematized Nomenclature of Human and Veterinary Medicine) vocabulary, and the SNOMED DICOM microglossary. The SNOMED DICOM microglossary provides context-dependent mapping of terminology to DICOM data elements. Results: The capability of embedding standard coded descriptors in DICOM image headers and image-interpretation reports improves the potential for selective retrieval of image-related information. This favorably affects information management in digital libraries. PMID:9925229

  7. Image and Sensor Data Processing for Target Acquisition and Recognition.

    DTIC Science & Technology

    1980-11-01

[Translated from French:] ...processing is performed on images provided by sensors operating at wavelengths near 4 µm and 10 µm. These images differ from... noise, due to the physical and technological limits of the sensors - on the other hand, an essential quality for military applications, which comes from... reducing the effect of certain defects that affect the original image and are due to sensor imperfections (Chapter 1) - the detection of targets

  8. A review of breast tomosynthesis. Part I. The image acquisition process

    SciTech Connect

    Sechopoulos, Ioannis

    2013-01-15

Mammography is a very well-established imaging modality for the early detection and diagnosis of breast cancer. However, since the introduction of digital imaging to the realm of radiology, more advanced, and especially tomographic, imaging methods have been made possible. One of these methods, breast tomosynthesis, has finally been introduced to the clinic for routine everyday use, with the potential to replace mammography for breast cancer screening in the future. In this two-part paper, the extensive research performed during the development of breast tomosynthesis is reviewed, with a focus on the research addressing the medical physics aspects of this imaging modality. This first paper reviews the research performed on the issues relevant to the image acquisition process, including system design, optimization of geometry and technique, x-ray scatter, and radiation dose. The companion to this paper reviews all other aspects of breast tomosynthesis imaging, including the reconstruction process.

  9. Quantitative assessment of the impact of biomedical image acquisition on the results obtained from image analysis and processing

    PubMed Central

    2014-01-01

Introduction Dedicated, automatic algorithms for image analysis and processing are becoming more and more common in medical diagnosis. When creating dedicated algorithms, many factors must be taken into consideration. They are associated with selecting the appropriate algorithm parameters and taking into account the impact of data acquisition on the results obtained. An important feature of algorithms is the possibility of their use in other medical units by other operators. This problem, namely the operator's (acquisition) impact on the results obtained from image analysis and processing, is shown here through a few examples. Material and method The analysed images were obtained from a variety of medical devices such as thermal imaging, tomography devices and those working in visible light. The objects of imaging were cellular elements, the anterior segment and fundus of the eye, postural defects and others. In total, almost 200,000 images coming from 8 different medical units were analysed. All image analysis algorithms were implemented in C and Matlab. Results For various algorithms and methods of medical imaging, the impact of image acquisition on the results obtained is different. There are different levels of algorithm sensitivity to changes in the parameters, for example: (1) for microscope settings and the brightness assessment of cellular elements there is a difference of 8%; (2) for the thyroid ultrasound images there is a difference in marking the thyroid lobe area which results in a brightness assessment difference of 2%. The method of image acquisition in image analysis and processing also affects: (3) the accuracy of determining the temperature in the characteristic areas on the patient's back for the thermal method - error of 31%; (4) the accuracy of finding characteristic points in photogrammetric images when evaluating postural defects – error of 11%; (5) the accuracy of performing ablative and non-ablative treatments in cosmetology - error of 18

  10. Automated system for acquisition and image processing for the control and monitoring boned nopal

    NASA Astrophysics Data System (ADS)

    Luevano, E.; de Posada, E.; Arronte, M.; Ponce, L.; Flores, T.

    2013-11-01

This paper describes the design and fabrication of a system for acquisition and image processing to control the removal of thorns from the nopal vegetable (Opuntia ficus indica) in an automated machine that uses pulses from an Nd:YAG laser. The areolas, the areas where thorns grow on the bark of the nopal, are located by applying segmentation algorithms to the images obtained by a CCD. Once the position of the areolas is known, their coordinates are sent to a motor system that steers the laser to interact with all areolas and remove the thorns from the nopal. The electronic system comprises a video decoder, memory for image and software storage, and a digital signal processor for system control. The firmware performs the tasks of acquisition, preprocessing, segmentation, recognition, and interpretation of the areolas. The system succeeds in identifying the areolas and generating a table of their coordinates, which is sent to the galvo motor system that controls the laser for thorn removal.
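The segment-then-list-coordinates pipeline described above can be sketched in a few lines. This is a minimal stand-in (simple intensity thresholding plus connected-component centroids via SciPy), not the paper's actual DSP firmware, and the threshold is an assumed parameter:

```python
import numpy as np
from scipy import ndimage

def areola_coordinates(gray, threshold):
    """Segment dark areola-like blobs by thresholding a grayscale image and
    return the (row, col) centroid of each connected component, i.e. the
    coordinate table that would be handed to the galvo motor system."""
    mask = gray < threshold                      # areolas assumed darker than bark
    labels, n = ndimage.label(mask)              # connected-component labeling
    return ndimage.center_of_mass(mask, labels, range(1, n + 1))
```

In a real system each centroid would then be mapped from pixel coordinates to galvo angles through a calibration transform.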

  11. Knowledge Acquisition, Validation, and Maintenance in a Planning System for Automated Image Processing

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.

    1996-01-01

A key obstacle hampering the fielding of AI planning applications is the considerable expense of developing, verifying, updating, and maintaining the planning knowledge base (KB). Planning systems must compare favorably in terms of software lifecycle costs to other means of automation such as scripts or rule-based expert systems. This paper describes a planning application for automated image processing and our overall approach to knowledge acquisition for this application.

  12. Multispectral integral imaging acquisition and processing using a monochrome camera and a liquid crystal tunable filter.

    PubMed

    Latorre-Carmona, Pedro; Sánchez-Ortiga, Emilio; Xiao, Xiao; Pla, Filiberto; Martínez-Corral, Manuel; Navarro, Héctor; Saavedra, Genaro; Javidi, Bahram

    2012-11-05

This paper presents an acquisition system and a procedure to capture 3D scenes in different spectral bands. The acquisition system is formed by a monochrome camera and a Liquid Crystal Tunable Filter (LCTF) that allows acquiring images at different spectral bands in the [480, 680] nm wavelength interval. The Synthetic Aperture Integral Imaging acquisition technique is used to obtain the elemental images for each wavelength. These elemental images are used to computationally obtain the reconstruction planes of the 3D scene at different depths. The 3D profile of the acquired scene is also obtained by minimizing the variance of the contributions of the elemental images at each image pixel. Experimental results show the viability of recovering the 3D multispectral information of the scene. Integration of 3D and multispectral information could have important benefits in different areas, including skin cancer detection, remote sensing and pattern recognition, among others.

  13. Data acquisition and processing system of the electron cyclotron emission imaging system of the KSTAR tokamak.

    PubMed

    Kim, J B; Lee, W; Yun, G S; Park, H K; Domier, C W; Luhmann, N C

    2010-10-01

A new innovative electron cyclotron emission imaging (ECEI) diagnostic system for the Korea Superconducting Tokamak Advanced Research (KSTAR) device produces a large amount of data. The data acquisition and processing system of the ECEI diagnostic should therefore be designed to cover this large data production and flow. The system design is based on a layered structure, scalable for future extension to accommodate increasing data demands. A software architecture that allows web-based monitoring of the operation status, remote experiments, and data analysis is discussed. The operating software will help machine operators and users validate the acquired data promptly, prepare the next discharge, and enhance experiment performance and data analysis in a distributed environment.

  14. Uav Photogrammetry with Oblique Images: First Analysis on Data Acquisition and Processing

    NASA Astrophysics Data System (ADS)

    Aicardi, I.; Chiabrando, F.; Grasso, N.; Lingua, A. M.; Noardo, F.; Spanò, A.

    2016-06-01

In recent years, many studies revealed the advantages of using airborne oblique images for obtaining improved 3D city models (e.g. including façades and building footprints). The data were usually acquired by expensive airborne cameras installed on traditional aerial platforms. The purpose of this paper is to evaluate the possibility of acquiring and using oblique images for the 3D reconstruction of a historical building, obtained by a UAV (Unmanned Aerial Vehicle) and traditional COTS (Commercial Off-the-Shelf) digital cameras (more compact and lighter than the devices generally used), for the realization of a high-level-of-detail architectural survey. The critical issues of acquisitions from a common UAV (flight planning strategies, ground control point and check point distribution and measurement, etc.) are described. Another important aspect considered was the evaluation of the possibility of using such systems as low-cost methods for obtaining complete information from an aerial point of view in case of emergencies or, as in the present paper, in the cultural heritage application field. The data processing was realized using an SfM-based approach for point cloud generation: different dense image-matching algorithms implemented in some commercial and open-source software packages were tested. The achieved results are analysed and the discrepancies from some reference LiDAR data are computed for a final evaluation. The system was tested on the S. Maria Chapel, a part of the Novalesa Abbey (Italy).

  15. Three-dimensional ultrasonic imaging of concrete elements using different SAFT data acquisition and processing schemes

    NASA Astrophysics Data System (ADS)

    Schickert, Martin

    2015-03-01

    Ultrasonic testing systems using transducer arrays and the SAFT (Synthetic Aperture Focusing Technique) reconstruction allow for imaging the internal structure of concrete elements. At one-sided access, three-dimensional representations of the concrete volume can be reconstructed in relatively great detail, permitting to detect and localize objects such as construction elements, built-in components, and flaws. Different SAFT data acquisition and processing schemes can be utilized which differ in terms of the measuring and computational effort and the reconstruction result. In this contribution, two methods are compared with respect to their principle of operation and their imaging characteristics. The first method is the conventional single-channel SAFT algorithm which is implemented using a virtual transducer that is moved within a transducer array by electronic switching. The second method is the Combinational SAFT algorithm (C-SAFT), also named Sampling Phased Array (SPA) or Full Matrix Capture/Total Focusing Method (TFM/FMC), which is realized using a combination of virtual transducers within a transducer array. Five variants of these two methods are compared by means of measurements obtained at test specimens containing objects typical of concrete elements. The automated SAFT imaging system FLEXUS is used for the measurements which includes a three-axis scanner with a 1.0 m × 0.8 m scan range and an electronically switched ultrasonic array consisting of 48 transducers in 16 groups. On the basis of two-dimensional and three-dimensional reconstructed images, qualitative and some quantitative results of the parameters image resolution, signal-to-noise ratio, measurement time, and computational effort are discussed in view of application characteristics of the SAFT variants.
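The single-channel SAFT algorithm described above is, in essence, a delay-and-sum: each image point accumulates the A-scan samples taken at the pulse-echo time of flight from every transducer position. The following is a minimal NumPy sketch of that idea (illustrative geometry and sampling values; not the FLEXUS system's actual implementation, which also handles apertures, apodization, and the C-SAFT/FMC combinations):

```python
import numpy as np

def saft_reconstruct(ascans, positions, grid_x, grid_z, c, fs):
    """Single-channel SAFT delay-and-sum.

    ascans    : list of 1-D recorded signals, one per transducer position
    positions : x-coordinate of the (virtual) transducer for each A-scan
    grid_x/z  : image grid coordinates; c : wave speed; fs : sampling rate
    """
    image = np.zeros((len(grid_z), len(grid_x)))
    for a, x_t in zip(ascans, positions):
        for j, x in enumerate(grid_x):
            for i, z in enumerate(grid_z):
                t = 2.0 * np.hypot(x - x_t, z) / c   # pulse-echo travel time
                n = int(round(t * fs))               # nearest recorded sample
                if n < len(a):
                    image[i, j] += a[n]
    return image
```

Echoes from a real reflector add coherently at its true position and average out elsewhere, which is what produces the focusing effect; vectorizing the inner loops (or moving them to a GPU) is what makes the method practical at the data volumes the paper discusses.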

  16. Three-dimensional ultrasonic imaging of concrete elements using different SAFT data acquisition and processing schemes

    SciTech Connect

    Schickert, Martin

    2015-03-31

    Ultrasonic testing systems using transducer arrays and the SAFT (Synthetic Aperture Focusing Technique) reconstruction allow for imaging the internal structure of concrete elements. At one-sided access, three-dimensional representations of the concrete volume can be reconstructed in relatively great detail, permitting to detect and localize objects such as construction elements, built-in components, and flaws. Different SAFT data acquisition and processing schemes can be utilized which differ in terms of the measuring and computational effort and the reconstruction result. In this contribution, two methods are compared with respect to their principle of operation and their imaging characteristics. The first method is the conventional single-channel SAFT algorithm which is implemented using a virtual transducer that is moved within a transducer array by electronic switching. The second method is the Combinational SAFT algorithm (C-SAFT), also named Sampling Phased Array (SPA) or Full Matrix Capture/Total Focusing Method (TFM/FMC), which is realized using a combination of virtual transducers within a transducer array. Five variants of these two methods are compared by means of measurements obtained at test specimens containing objects typical of concrete elements. The automated SAFT imaging system FLEXUS is used for the measurements which includes a three-axis scanner with a 1.0 m × 0.8 m scan range and an electronically switched ultrasonic array consisting of 48 transducers in 16 groups. On the basis of two-dimensional and three-dimensional reconstructed images, qualitative and some quantitative results of the parameters image resolution, signal-to-noise ratio, measurement time, and computational effort are discussed in view of application characteristics of the SAFT variants.

  17. Micro-MRI-based image acquisition and processing system for assessing the response to therapeutic intervention

    NASA Astrophysics Data System (ADS)

    Vasilić, B.; Ladinsky, G. A.; Saha, P. K.; Wehrli, F. W.

    2006-03-01

    Osteoporosis is the cause of over 1.5 million bone fractures annually. Most of these fractures occur in sites rich in trabecular bone, a complex network of bony struts and plates found throughout the skeleton. The three-dimensional structure of the trabecular bone network significantly determines mechanical strength and thus fracture resistance. Here we present a data acquisition and processing system that allows efficient noninvasive assessment of trabecular bone structure through a "virtual bone biopsy". High-resolution MR images are acquired from which the trabecular bone network is extracted by estimating the partial bone occupancy of each voxel. A heuristic voxel subdivision increases the effective resolution of the bone volume fraction map and serves a basis for subsequent analysis of topological and orientational parameters. Semi-automated registration and segmentation ensure selection of the same anatomical location in subjects imaged at different time points during treatment. It is shown with excerpts from an ongoing clinical study of early post-menopausal women, that significant reduction in network connectivity occurs in the control group while the structural integrity is maintained in the hormone replacement group. The system described should be suited for large-scale studies designed to evaluate the efficacy of therapeutic intervention in subjects with metabolic bone disease.

  18. High-throughput data acquisition and processing for real-time x-ray imaging

    NASA Astrophysics Data System (ADS)

    Vogelgesang, Matthias; Rota, Lorenzo; Ardila Perez, Luis E.; Caselle, Michele; Chilingaryan, Suren; Kopmann, Andreas

    2016-10-01

With ever-increasing data rates due to stronger light sources and better detectors, X-ray imaging experiments conducted at synchrotron beamlines face bandwidth and processing limitations that inhibit efficient workflows and prevent real-time operation. We propose an experiment platform comprised of programmable hardware and optimized software to lift these limitations and make beamline setups future-proof. The hardware consists of an FPGA-based data acquisition system with custom logic for data pre-processing and a PCIe data connection for transmission of currently up to 6.6 GB/s. Moreover, the accompanying firmware supports pushing data directly into GPU memory using AMD's DirectGMA technology without crossing system memory first. The GPUs are used to pre-process projection data and reconstruct final volumetric data with OpenCL faster than is possible with CPUs alone. Besides more efficient use of resources, this enables a real-time preview of a reconstruction for early quality assessment of both the experiment setup and the investigated sample. The entire system is designed in a modular way and allows swapping of all components, e.g. replacing our custom FPGA camera with a commercial system while keeping GPU-based reconstruction. Moreover, every component is accessible through a low-level C library or a high-level Python interface, making it possible to integrate these components into any legacy environment.

  19. Image acquisitions, processing and analysis in the process of obtaining characteristics of horse navicular bone

    NASA Astrophysics Data System (ADS)

    Zaborowicz, M.; Włodarek, J.; Przybylak, A.; Przybył, K.; Wojcieszak, D.; Czekała, W.; Ludwiczak, A.; Boniecki, P.; Koszela, K.; Przybył, J.; Skwarcz, J.

    2015-07-01

The aim of this study was to investigate the possibility of using methods of computer image analysis for the assessment and classification of the morphological variability and state of health of the horse navicular bone. The assumption was that the classification can be based on information contained in two-dimensional digital images of the navicular bone together with information on horse health. The first step in the research was to define the classes of analyzed bones, and then to use methods of computer image analysis to obtain characteristics from these images. These characteristics were correlated with data concerning the animal, such as: side of the hooves, grade of navicular syndrome (scale 0-3), type, sex, age, weight, information about lace, and information about the heel. This paper is an introduction to the study of the use of neural image analysis in the diagnosis of navicular bone syndrome. The prepared method can provide a starting point for the study of a non-invasive way to assess the condition of the horse navicular bone.

  20. Thermal Imaging of the Waccasassa Bay Preserve: Image Acquisition and Processing

    USGS Publications Warehouse

    Raabe, Ellen A.; Bialkowska-Jelinska, Elzbieta

    2010-01-01

    Thermal infrared (TIR) imagery was acquired along coastal Levy County, Florida, in March 2009 with the goal of identifying groundwater-discharge locations in Waccasassa Bay Preserve State Park (WBPSP). Groundwater discharge is thermally distinct in winter when Floridan aquifer temperature, 71-72 degrees F, contrasts with the surrounding cold surface waters. Calibrated imagery was analyzed to assess temperature anomalies and related thermal traces. The influence of warm Gulf water and image artifacts on small features was successfully constrained by image evaluation in three separate zones: Creeks, Bay, and Gulf. Four levels of significant water-temperature anomalies were identified, and 488 sites of interest were mapped. Among the sites identified, at least 80 were determined to be associated with image artifacts and human activity, such as excavation pits and the Florida Barge Canal. Sites of interest were evaluated for geographic concentration and isolation. High site densities, indicating interconnectivity and prevailing flow, were located at Corrigan Reef, No. 4 Channel, Winzy Creek, Cow Creek, Withlacoochee River, and at excavation sites. In other areas, low to moderate site density indicates the presence of independent vents and unique flow paths. A directional distribution assessment of natural seep features produced a northwest trend closely matching the strike direction of regional faults. Naturally occurring seeps were located in karst ponds and tidal creeks, and several submerged sites were detected in Waccasassa River and Bay, representing the first documentation of submarine vents in the Waccasassa region. Drought conditions throughout the region placed constraints on positive feature identification. Low discharge or displacement by landward movement of saltwater may have reduced or reversed flow during this season. Approximately two-thirds of seep locations in the overlap between 2009 and 2005 TIR night imagery were positively re-identified in 2009

  1. Image gathering, coding, and processing: End-to-end optimization for efficient and robust acquisition of visual information

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.

    1990-01-01

    Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.
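The Laplacian pyramid mentioned above as a low-level vision primitive decomposes an image into band-pass detail levels plus a coarse residual, and is exactly invertible. A minimal NumPy sketch using nearest-neighbor resampling in place of the usual Gaussian filtering (a simplification for brevity; the classic Burt-Adelson construction filters before decimating):

```python
import numpy as np

def _up(img, shape):
    """Nearest-neighbor 2x upsampling, cropped to a target shape."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    """Each level stores the detail lost by 2x decimation; the coarsest
    (Gaussian) level is appended last so the pyramid is invertible."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        small = cur[::2, ::2]                # crude 2x decimation
        pyr.append(cur - _up(small, cur.shape))   # band-pass detail
        cur = small
    pyr.append(cur)
    return pyr

def reconstruct(pyr):
    """Invert the pyramid by upsampling and adding back each detail level."""
    cur = pyr[-1]
    for detail in reversed(pyr[:-1]):
        cur = _up(cur, detail.shape) + detail
    return cur
```

Because each detail level is defined as exactly what the decimation discarded, reconstruction recovers the input, which is what makes the pyramid attractive for the combined gathering-and-coding schemes the abstract describes.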

  2. TU-D-BRB-01: Dual-Energy CT: Techniques in Acquisition and Image Processing.

    PubMed

    Pelc, N

    2016-06-01

Dual-energy CT technology is becoming increasingly available to the medical imaging community. In addition, several models of CT simulators sold for use in radiation therapy departments now feature dual-energy technology. The images provided by dual-energy CT scanners add new information to the radiation treatment planning process; multiple spectral components can be used to separate and identify material composition as well as generate virtual monoenergetic images. In turn, this information could be used to investigate pathologic processes, separate the properties of contrast agents from soft tissues, assess tissue response to therapy, and other applications of therapeutic interest. Additionally, the decomposition of materials in images could directly integrate with and impact the accuracy of dose calculation algorithms. This symposium will explore methods of generating dual-energy CT images, spectral and image analysis algorithms, current and future applications of interest in oncologic imaging, and unique considerations when using dual-energy CT images in the radiation treatment planning process.

  3. Automated ship image acquisition

    NASA Astrophysics Data System (ADS)

    Hammond, T. R.

    2008-04-01

The experimental Automated Ship Image Acquisition System (ASIA) collects high-resolution ship photographs at a shore-based laboratory, with minimal human intervention. The system uses Automatic Identification System (AIS) data to direct a high-resolution SLR digital camera to ship targets and to identify the ships in the resulting photographs. The photo database is then searchable using the rich data fields from AIS, which include the name, type, call sign and various vessel identification numbers. The high-resolution images from ASIA are intended to provide information that can corroborate AIS reports (e.g., extract identification from the name on the hull) or provide information that has been omitted from the AIS reports (e.g., missing or incorrect hull dimensions, cargo, etc.). Once assembled into a searchable image database, the images can be used for a wide variety of marine safety and security applications. This paper documents the author's experience with the practicality of composing photographs based on AIS reports alone, describing a number of ways in which this can go wrong, from errors in the AIS reports to fixed and mobile obstructions and multiple ships in the shot. The frequency with which various errors occurred in automatically composed photographs collected in Halifax harbour in wintertime was determined by manual examination of the images. 45% of the images examined were considered of a quality sufficient to read identification markings, numbers and text off the entire ship. One of the main technical challenges for ASIA lies in automatically differentiating good and bad photographs, so that few bad ones would be shown to human users. Initial attempts at automatic photo rating showed 75% agreement with manual assessments.
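Directing a camera from an AIS report ultimately comes down to converting the reported ship position into a pointing bearing from the camera site. A minimal sketch using a flat-earth (equirectangular) approximation, which is adequate over harbour-scale distances (the coordinates in the test are hypothetical, and the real ASIA pointing logic is not described at this level in the abstract):

```python
import math

def bearing_deg(cam_lat, cam_lon, ship_lat, ship_lon):
    """Approximate compass bearing (degrees clockwise from north) from the
    camera site to a ship, using a local flat-earth approximation in which
    longitude differences are scaled by cos(latitude)."""
    dlat = math.radians(ship_lat - cam_lat)
    dlon = math.radians(ship_lon - cam_lon) * math.cos(math.radians(cam_lat))
    return math.degrees(math.atan2(dlon, dlat)) % 360.0
```

In practice the AIS position would also be dead-reckoned forward by the reported speed and course to compensate for the latency between the AIS report and the shutter firing, which is one of the failure modes the paper examines.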

  4. apART: system for the acquisition, processing, archiving, and retrieval of digital images in an open, distributed imaging environment

    NASA Astrophysics Data System (ADS)

    Schneider, Uwe; Strack, Ruediger

    1992-04-01

    apART reflects the structure of an open, distributed environment. According to the general trend in the area of imaging, network-capable, general purpose workstations with capabilities of open system image communication and image input are used. Several heterogeneous components like CCD cameras, slide scanners, and image archives can be accessed. The system is driven by an object-oriented user interface where devices (image sources and destinations), operators (derived from a commercial image processing library), and images (of different data types) are managed and presented uniformly to the user. Browsing mechanisms are used to traverse devices, operators, and images. An audit trail mechanism is offered to record interactive operations on low-resolution image derivatives. These operations are processed off-line on the original image. Thus, the processing of extremely high-resolution raster images is possible, and the performance of resolution-dependent operations is enhanced significantly during interaction. An object-oriented database system (APRIL), which can be browsed, is integrated into the system. Attribute retrieval is supported by the user interface. Other essential features of the system include: implementation on top of the X Window System (X11R4) and the OSF/Motif widget set; a SUN4 general purpose workstation, including Ethernet, magneto-optical disc, etc., as the hardware platform for the user interface; complete graphical-interactive parametrization of all operators; support of different image interchange formats (GIF, TIFF, IIF, etc.); consideration of current IPI standard activities within ISO/IEC for further refinement and extensions.

  5. Colony image acquisition and segmentation

    NASA Astrophysics Data System (ADS)

    Wang, W. X.

    2007-12-01

    Counting of both colonies and plaques has a large number of applications, including food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, sterility testing, AMES testing, pharmaceuticals, paints, sterile fluids and fungal contamination. Many researchers and developers have recently worked on systems of this kind, and an investigation shows that existing systems still have problems, mainly in image acquisition and image segmentation. In order to acquire colony images of good quality, an illumination box was constructed: the box includes front lighting and back lighting, which can be selected by users based on the properties of the colony dishes. With the illumination box, lighting is uniform, and the colony dish can be placed in the same position every time, which makes image processing easier. The developed colony image segmentation algorithm consists of three sub-algorithms: (1) image classification; (2) image processing; and (3) colony delineation. The colony delineation algorithm mainly contains procedures based on grey-level similarity, boundary tracing, shape information and colony exclusion. In addition, a number of algorithms were developed for colony analysis. The system has been tested with satisfactory results.
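The abstract names grey-level similarity as the basis of colony delineation but gives no algorithm. One standard way to realise such a step is Otsu's between-class-variance threshold; the sketch below assumes dark colonies against a back-lit dish, and all names are illustrative rather than from the paper:

```python
import numpy as np

def otsu_threshold(img):
    """Pick the grey level that maximises between-class variance,
    a common stand-in for a grey-level-similarity split."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    levels = np.arange(256)
    omega = np.cumsum(p)            # class-0 probability up to level k
    mu = np.cumsum(p * levels)      # cumulative mean up to level k
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return int(np.argmax(sigma_b))

def segment_colonies(img):
    """Binary colony mask: True where the pixel is at or below the
    Otsu threshold (assumes dark colonies, bright background)."""
    return img <= otsu_threshold(img)
```

On a real dish image this mask would then feed the boundary-tracing and exclusion steps the abstract lists.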

  6. Multilevel adaptive process control of acquisition and post-processing of computed radiographic images in picture archiving and communication system environment.

    PubMed

    Zhang, J; Huang, H K

    1998-01-01

    Computed radiography (CR) has become a widely used imaging modality, replacing the conventional screen/film procedure in diagnostic radiology. After a latent image is captured on a CR imaging plate, seven key processes are required before a CR image can be reliably archived and displayed in a picture archiving and communication system (PACS) environment. Human error, computational bottlenecks, software bugs, and CR system errors often crash the CR acquisition and post-processing computers, which results in delays in transmitting CR images for proper viewing at the workstation. In this paper, we present a control theory and a fault tolerance algorithm, as well as their implementation in the PACS environment, to circumvent such problems. The software implementation of the control theory and the algorithm is based on an event-driven, multilevel adaptive processing structure. The automated software has been used to provide real-time monitoring and control of CR image acquisition and post-processing in the intensive care unit module of the PACS operation at the University of California, San Francisco. Results demonstrate that the multilevel adaptive process control structure improves CR post-processing time, increases the reliability of CR image delivery, minimizes user intervention, and speeds up the previously time-consuming quality assurance procedure.

  7. Acquisition and Analysis of Dynamic Responses of a Historic Pedestrian Bridge using Video Image Processing

    NASA Astrophysics Data System (ADS)

    O'Byrne, Michael; Ghosh, Bidisha; Schoefs, Franck; O'Donnell, Deirdre; Wright, Robert; Pakrashi, Vikram

    2015-07-01

    Video-based tracking is capable of analysing bridge vibrations that are characterised by large amplitudes and low frequencies. This paper presents the use of video images and associated image processing techniques to obtain the dynamic response of a pedestrian suspension bridge in Cork, Ireland. This historic structure is one of the four suspension bridges in Ireland and is notable for its dynamic nature. A video camera was mounted on the river bank and the dynamic responses of the bridge were measured from the video images. The dynamic response is assessed without the need for a reflector on the bridge and in the presence of various forms of luminous complexity in the video image scenes. Vertical deformations of the bridge were measured in this regard. The video image tracking for the measurement of the dynamic responses of the bridge was based on correlating patches in time-lagged scenes in video images and utilising a zero mean normalised cross correlation (ZNCC) metric. The bridge was excited by designed pedestrian movement and by individual cyclists traversing the bridge. The time series data of dynamic displacement responses of the bridge were analysed to obtain the frequency domain response. Frequencies obtained from video analysis were checked against accelerometer data from the bridge obtained while carrying out the same set of experiments used for video image based recognition.
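The ZNCC metric named above has a standard definition; a minimal sketch of it, together with an exhaustive vertical-shift search of the kind that could recover vertical deformations from time-lagged frames, follows. The interface and search strategy are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def zncc(patch_a, patch_b):
    """Zero-mean normalised cross-correlation between two equally
    sized patches; 1.0 indicates a perfect (linear) match."""
    a = patch_a.astype(float) - patch_a.mean()
    b = patch_b.astype(float) - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def vertical_shift(prev_frame, next_frame, top, left, h, w, search=5):
    """Vertical displacement (pixels) of the patch at (top, left)
    between frames, found by exhaustive ZNCC search."""
    ref = prev_frame[top:top + h, left:left + w]
    scores = {d: zncc(ref, next_frame[top + d:top + d + h, left:left + w])
              for d in range(-search, search + 1)}
    return max(scores, key=scores.get)
```

Tracking this displacement frame by frame yields the time series that the abstract converts to a frequency-domain response.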

  9. Graphics processing unit (GPU) implementation of image processing algorithms to improve system performance of the control acquisition, processing, and image display system (CAPIDS) of the micro-angiographic fluoroscope (MAF)

    NASA Astrophysics Data System (ADS)

    Swetadri Vasan, S. N.; Ionita, Ciprian N.; Titus, A. H.; Cartwright, A. N.; Bednarek, D. R.; Rudin, S.

    2012-03-01

    We present the image processing upgrades implemented on a Graphics Processing Unit (GPU) in the Control, Acquisition, Processing, and Image Display System (CAPIDS) for the custom Micro-Angiographic Fluoroscope (MAF) detector. Most of the image processing currently implemented in the CAPIDS system is pixel independent; that is, the operation on each pixel is the same, and the operation on one pixel does not depend upon the result of the operation on another, allowing the entire image to be processed in parallel. GPU hardware was developed for this kind of massively parallel processing. Thus, for an algorithm with a high degree of parallelism, a GPU implementation is much faster than a CPU implementation. The image processing algorithm upgrades implemented on the CAPIDS system include flat-field correction, temporal filtering, image subtraction, roadmap mask generation, and display windowing and leveling. A comparison between the previous and the upgraded versions of CAPIDS is presented to demonstrate how the improvement is achieved. By performing the image processing on a GPU, significant improvements in timing and frame rate have been achieved, including stable operation of the system at 30 fps during a fluoroscopy run, a DSA run, and a roadmap procedure, with automatic image windowing and leveling during each frame.

  10. Graphics Processing Unit (GPU) implementation of image processing algorithms to improve system performance of the Control, Acquisition, Processing, and Image Display System (CAPIDS) of the Micro-Angiographic Fluoroscope (MAF).

    PubMed

    Vasan, S N Swetadri; Ionita, Ciprian N; Titus, A H; Cartwright, A N; Bednarek, D R; Rudin, S

    2012-02-23

    We present the image processing upgrades implemented on a Graphics Processing Unit (GPU) in the Control, Acquisition, Processing, and Image Display System (CAPIDS) for the custom Micro-Angiographic Fluoroscope (MAF) detector. Most of the image processing currently implemented in the CAPIDS system is pixel independent; that is, the operation on each pixel is the same, and the operation on one pixel does not depend upon the result of the operation on another, allowing the entire image to be processed in parallel. GPU hardware was developed for this kind of massively parallel processing. Thus, for an algorithm with a high degree of parallelism, a GPU implementation is much faster than a CPU implementation. The image processing algorithm upgrades implemented on the CAPIDS system include flat-field correction, temporal filtering, image subtraction, roadmap mask generation, and display windowing and leveling. A comparison between the previous and the upgraded versions of CAPIDS is presented to demonstrate how the improvement is achieved. By performing the image processing on a GPU, significant improvements in timing and frame rate have been achieved, including stable operation of the system at 30 fps during a fluoroscopy run, a DSA run, and a roadmap procedure, with automatic image windowing and leveling during each frame.
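Two of the pixel-independent operations the CAPIDS abstracts name — flat-field correction and display windowing/leveling — can be sketched on the CPU with NumPy; each output pixel depends only on the corresponding input pixels, which is exactly what makes these operations map well onto one-thread-per-pixel GPU kernels. The conventions below (dark/flat frame correction, linear window) are common ones, assumed rather than taken from the paper:

```python
import numpy as np

def flat_field(raw, dark, flat):
    """Gain/offset correction: (raw - dark) / (flat - dark),
    computed independently at every pixel."""
    gain = flat.astype(float) - dark
    gain[gain == 0] = 1.0            # guard against dead pixels
    return (raw.astype(float) - dark) / gain

def window_level(img, window, level, out_max=255):
    """Linear display windowing: map [level - w/2, level + w/2]
    onto [0, out_max], clipping values outside the window."""
    lo = level - window / 2.0
    scaled = (img - lo) / float(window) * out_max
    return np.clip(scaled, 0, out_max)
```

A GPU version would launch one thread per pixel executing the same arithmetic; the NumPy form above expresses the identical data-parallel structure.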

  11. Processes Asunder: Acquisition & Planning Misfits

    DTIC Science & Technology

    2009-03-26

    USAWC Strategy Research Project: Processes Asunder: Acquisition & Planning Misfits, by Chérie A. Smith, Department of Army Civilian.

  12. Split-screen display system and standardized methods for ultrasound image acquisition and multi-frame data processing

    NASA Technical Reports Server (NTRS)

    Selzer, Robert H. (Inventor); Hodis, Howard N. (Inventor)

    2011-01-01

    A standardized acquisition methodology assists operators to accurately replicate high resolution B-mode ultrasound images obtained over several spaced-apart examinations utilizing a split-screen display in which the arterial ultrasound image from an earlier examination is displayed on one side of the screen while a real-time "live" ultrasound image from a current examination is displayed next to the earlier image on the opposite side of the screen. By viewing both images, whether simultaneously or alternately, while manually adjusting the ultrasound transducer, an operator is able to bring into view the real-time image that best matches a selected image from the earlier ultrasound examination. Utilizing this methodology, dynamic material properties of arterial structures, such as IMT and diameter, are measured in a standard region over successive image frames. Each frame of the sequence has its echo edge boundaries automatically determined by using the immediately prior frame's true echo edge coordinates as initial boundary conditions. Computerized echo edge recognition and tracking over multiple successive image frames enhances measurement of arterial diameter and IMT and allows for improved vascular dimension measurements, including vascular stiffness and IMT determinations.

  13. Information Acquisition & Processing in Scanning Probe Microscopy

    SciTech Connect

    Kalinin, Sergei V; Jesse, Stephen; Proksch, Roger

    2008-01-01

    Much of the imaging and spectroscopy capability of the existing 20,000+ scanning probe microscopes worldwide relies on specialized data processing that links the microsecond (and sometimes faster) time scale of cantilever motion to the millisecond (and sometimes slower) time scale of image acquisition and feedback. In most SPMs, the cantilever is excited to oscillate sinusoidally, and the time-averaged amplitude and/or phase are used as imaging or control signals. Traditionally, the step of converting the rapid motion of the cantilever into an amplitude or phase is performed by phase-sensitive homodyne or phase-locked loop detection. The emergence of fast configurable data processing electronics in the last several years has allowed the development of non-sinusoidal data acquisition and processing methods. Here, we briefly review the principles and limitations of phase-sensitive detectors and discuss some of the emergent technologies based on rapid spectroscopic measurements in the frequency and time domains.
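The phase-sensitive (homodyne) detection described above amounts to multiplying the cantilever signal by quadrature references at the drive frequency and time averaging. The sketch below is a simplified illustration with assumed sign conventions, not an SPM vendor implementation:

```python
import numpy as np

def homodyne(signal, fs, f_drive):
    """Time-averaged amplitude and phase of a sinusoidal signal,
    recovered by multiplication with quadrature references at the
    drive frequency followed by averaging (ideal low-pass)."""
    t = np.arange(len(signal)) / fs
    i = 2.0 * np.mean(signal * np.cos(2 * np.pi * f_drive * t))
    q = 2.0 * np.mean(signal * np.sin(2 * np.pi * f_drive * t))
    return np.hypot(i, q), np.arctan2(q, i)
```

Averaging over an integer number of drive cycles makes the rejection of the double-frequency term exact; real lock-in detectors replace the plain mean with a low-pass filter whose bandwidth sets the imaging time scale the review discusses.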

  14. Integral imaging acquisition and processing for visualization of photon counting images in the mid-wave infrared range

    NASA Astrophysics Data System (ADS)

    Latorre-Carmona, P.; Pla, F.; Javidi, B.

    2016-06-01

    In this paper, we present an overview of our previously published work on the application of the maximum likelihood (ML) reconstruction method to integral images acquired with a mid-wave infrared detector on two different types of scenes: one consisting of a road, a group of trees and a vehicle just behind one of the trees (with the car at a distance of more than 200 m from the camera), and another consisting of a view of the Wright Air Force Base airfield, with several hangars and various other types of installations (including warehouses) at distances ranging from 600 m to more than 2 km. Dark current noise is considered, taking into account the particular features of this type of sensor. Results show that this methodology improves visualization in the photon counting domain.

  15. A new and practical method to obtain grain size measurements in sandy shores based on digital image acquisition and processing

    NASA Astrophysics Data System (ADS)

    Baptista, P.; Cunha, T. R.; Gama, C.; Bernardes, C.

    2012-12-01

    Modern methods for the automated evaluation of sediment size on sandy shores rely on digital image processing algorithms as an alternative to time-consuming traditional sieving methodologies. However, the requirements necessary to guarantee that the considered image processing algorithm has a good grain identification success rate impose the need for dedicated hardware setups to capture the sand surface images. Examples are specially designed camera housings that maintain a constant distance between the camera lens and the sand surface, tripods that fix the camera angle orthogonal to the sand surface, external illumination systems that guarantee the light level necessary for the image processing algorithms, and special lenses and focusing systems for close-proximity image capture. In some cases, controlled image-capturing conditions can make the fieldwork more laborious, which incurs significant costs for monitoring campaigns covering large areas. To circumvent this problem, a new automated image-processing algorithm is proposed that identifies sand grains in digital images acquired with a standard digital camera without any extra hardware attached to it. The accuracy and robustness of the proposed algorithm are evaluated in this work by means of a laboratory test on previously controlled grain samples, field tests in which 64 samples (spread over a beach stretch of 65 km and with grain size ranging from 0.5 mm to 1.9 mm) were processed both by the proposed method and by sieving, and finally by manual point counts on all acquired images. The calculated root-mean-square (RMS) error between mean grain sizes obtained from the proposed image processing method and the sieve method (for the 64 samples) was 0.33 mm, and for the image processing method versus manual point counts, with the same images, it was 0.12 mm. The achieved correlation coefficients (r) were 0.91 and 0.96, respectively.
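The RMS errors and correlation coefficients quoted above are standard agreement metrics between two sets of grain-size estimates; for reference, a minimal sketch of how such figures are computed:

```python
import math

def rms_error(x, y):
    """Root-mean-square difference between paired estimates."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x))

def pearson_r(x, y):
    """Pearson correlation coefficient between paired estimates."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)
```

Applied to the 64 image-method/sieve-method pairs, these two functions would reproduce the 0.33 mm and r = 0.91 style of comparison the abstract reports.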

  16. Effective GPR Data Acquisition and Imaging

    NASA Astrophysics Data System (ADS)

    Sato, M.

    2014-12-01

    We have demonstrated that dense GPR data acquisition, typically with an antenna step increment of less than 1/10 wavelength, can provide clear 3-dimensional subsurface images, and we created 3DGPR images. Now we are interested in developing GPR survey methodologies which require less data acquisition time. In order to speed up the data acquisition, we are studying efficient antenna positioning for GPR surveys and 3-D imaging algorithms. For example, we have developed a dual sensor, "ALIS", which combines GPR with a metal detector (electromagnetic induction sensor) for humanitarian demining and acquires GPR data by hand scanning. ALIS is a pulse radar system with a frequency range of 0.5-3 GHz. The sensor position tracking system has an accuracy of about a few cm, and the data spacing is typically more than a few cm, but it can visualize mines with a diameter of about 8 cm. Two ALIS systems have been deployed by the Cambodian Mine Action Center (CMAC) in mine fields in Cambodia since 2009 and have detected more than 80 buried land mines. We are now developing signal processing for an array-type GPR, "Yakumo". Yakumo is a SFCW radar system and a multi-static radar, consisting of 8 transmitter antennas and 8 receiver antennas. We have demonstrated that multi-static data acquisition is not only efficient, but can at the same time increase the quality of GPR images. Archaeological surveys by Yakumo over large areas, more than 100 m by 100 m, have been conducted to promote recovery in eastern Japan after the tsunami of March 2011. With a conventional GPR system, we are developing an interpolation method for radar signals, and have demonstrated that it can increase the quality of the radar images without increasing the number of data acquisition points. When we acquire a one-dimensional GPR profile along a survey line, we can acquire relatively high density data sets. However, when we need to relocate the data sets along a "virtual" survey line, for example a

  17. Acquisition and applications of 3D images

    NASA Astrophysics Data System (ADS)

    Sterian, Paul; Mocanu, Elena

    2007-08-01

    The moiré fringe method and its analysis, up to medical and entertainment applications, are discussed in this paper. We describe the procedure of capturing 3D images with an Inspeck Camera, a real-time 3D shape acquisition system based on structured light techniques. The method offers high resolution. After processing the images on a computer, we can use the data to create fashionable objects by engraving them with a Q-switched Nd:YAG laser. In the medical field, we mention plastic surgery and the replacement of X-rays, especially in pediatric use.

  18. Acquisition hardware for digital imaging.

    PubMed

    Widmer, William R

    2008-01-01

    Use of digital radiography is growing rapidly in veterinary medicine. Two basic digital imaging systems are available, computed radiography (CR) and direct digital radiography (DDR). Computed radiographic detectors use a two-step process for image capture and processing. Image capture is by X-ray-sensitive phosphors in the image plate. The image plate reader transforms the latent phosphor image into light photons that are converted to an analog electrical signal. An analog-to-digital converter is used to digitize the electrical signal before computer analysis. Direct digital detectors provide digital data by direct readout after image capture--no separate reader is necessary. Types of DDR detectors are flat panel detectors and charge coupled device (CCD) detectors. Flat panel detectors are composed of layers of semiconductors for image capture, with transistor and microscopic circuitry embedded in a pixel array. Direct-converting flat panel detectors convert incident X-rays directly into electrical charges. Indirect detectors convert X-rays to visible light, then to electrical charges. All flat panel detectors send a digitized electrical signal to a computer using a direct link. Charge coupled device detectors have a small chip similar to those used in digital cameras. A scintillator first converts X-rays to a light signal that is minified by an optical system before reaching the chip. The chip sends a digital signal directly to a computer. Both CR and DDR provide quality diagnostic images. CR is a mature technology, while DDR is an emerging technology.

  19. Peruvian Weapon System Acquisition Process

    DTIC Science & Technology

    1990-12-01

    process for a major program. The United States DOD Directive 5000.1 defines four distinct phases of the acquisition process: concept exploration, demon...Unified or Specified Command. 1. Concept Exploration Phase The first phase for a major system is the concept exploration phase. During this phase... exploration phase progresses. Premature introduction of operating and support details may have a negative effect by closing out promising alternatives [Ref

  20. X-ray beam modulation, image acquisition and real-time processing in region-of-interest fluoroscopy

    NASA Astrophysics Data System (ADS)

    Yang, Chang-Ying Joseph

    2000-07-01

    Region of interest (ROI) fluoroscopy is a technique whereby a partially attenuating filter with an aperture in the center is placed in the x-ray beam between the source and the patient. The part of the x-ray beam going through the filter aperture un-attenuated is used to project the main features of interest in the patient to form the ROI in each fluoroscopic image. The periphery of the image is formed by the projection of the features needed only for reference, using the part of the attenuated x-ray beam passing through the filter. This technique can substantially reduce patient and staff dose and improve the image quality in the ROI of the image. By using Gd for the filter material, it is even possible to improve the x-ray attenuation contrast in the periphery. However, real-time image processing is needed to compensate for the x-ray intensity attenuation in the periphery so that the brightness in the two parts of the fluoroscopic image is matched and linearity is restored. Based on the method of binary masks, a system was developed to perform the real-time image processing with the flexibility to accommodate both the horizontal and vertical movement of the imaging chain relative to the patient. A binary mask is a binary image used to define those regions in the fluoroscopic image which should be processed and those which should not. A method of binary mask generation was proposed so that the region defined as not to be processed in the binary mask maintains as close a resemblance as possible to the ROI of the fluoroscopic image. The construction method for the look-up table used for the processing of the periphery, and its dependence on physical quantities, was described and studied. An algorithm for constantly tracking the change of the ROI in the fluoroscopic images and selecting the proper corresponding binary mask was developed. The quality of the processed ROI fluoroscopic images, such as brightness, contrast and noise, was evaluated and compared using test phantoms.
The test
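The binary-mask scheme described above can be sketched minimally: apply a brightness-restoring look-up table only where the mask marks the attenuated periphery, leaving ROI pixels untouched. The gain-of-4 LUT below is a hypothetical stand-in for the filter-dependent table the abstract describes:

```python
import numpy as np

def equalize_periphery(frame, mask, lut):
    """Apply a brightness-restoring look-up table only where the
    binary mask marks the attenuated periphery; pixels outside the
    mask (the ROI) pass through unchanged."""
    out = frame.copy()
    out[mask] = lut[frame[mask]]
    return out
```

In the described system, the LUT would be built from the measured filter attenuation, and the mask would be re-selected per frame as the ROI tracking algorithm follows the imaging chain's movement.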

  1. Accelerated dynamic EPR imaging using fast acquisition and compressive recovery

    NASA Astrophysics Data System (ADS)

    Ahmad, Rizwan; Samouilov, Alexandre; Zweier, Jay L.

    2016-12-01

    Electron paramagnetic resonance (EPR) allows quantitative imaging of tissue redox status, which provides important information about ischemic syndromes, cancer and other pathologies. For continuous wave EPR imaging, however, poor signal-to-noise ratio and low acquisition efficiency limit its ability to image dynamic processes in vivo, including tissue redox, where conditions can change rapidly. Here, we present a data acquisition and processing framework that couples fast acquisition with compressive sensing-inspired image recovery to enable EPR-based redox imaging with high spatial and temporal resolutions. The fast acquisition (FA) allows collecting more, albeit noisier, projections in a given scan time. The composite regularization based processing method, called spatio-temporal adaptive recovery (STAR), not only exploits sparsity in multiple representations of the spatio-temporal image but also adaptively adjusts the regularization strength for each representation based on its inherent level of sparsity. As a result, STAR adjusts to the disparity in the level of sparsity across multiple representations, without introducing any tuning parameter. Our simulation and phantom imaging studies indicate that a combination of fast acquisition and STAR (FASTAR) enables high-fidelity recovery of volumetric image series, with each volumetric image requiring less than 10 s of scan time. In addition to image fidelity, the time constants derived from FASTAR also match the ground truth closely even when a small number of projections is used for recovery. This development will enhance the capability of EPR to study fast dynamic processes that cannot be investigated using existing EPR imaging techniques.

  2. Accelerated dynamic EPR imaging using fast acquisition and compressive recovery.

    PubMed

    Ahmad, Rizwan; Samouilov, Alexandre; Zweier, Jay L

    2016-12-01

    Electron paramagnetic resonance (EPR) allows quantitative imaging of tissue redox status, which provides important information about ischemic syndromes, cancer and other pathologies. For continuous wave EPR imaging, however, poor signal-to-noise ratio and low acquisition efficiency limit its ability to image dynamic processes in vivo, including tissue redox, where conditions can change rapidly. Here, we present a data acquisition and processing framework that couples fast acquisition with compressive sensing-inspired image recovery to enable EPR-based redox imaging with high spatial and temporal resolutions. The fast acquisition (FA) allows collecting more, albeit noisier, projections in a given scan time. The composite regularization based processing method, called spatio-temporal adaptive recovery (STAR), not only exploits sparsity in multiple representations of the spatio-temporal image but also adaptively adjusts the regularization strength for each representation based on its inherent level of sparsity. As a result, STAR adjusts to the disparity in the level of sparsity across multiple representations, without introducing any tuning parameter. Our simulation and phantom imaging studies indicate that a combination of fast acquisition and STAR (FASTAR) enables high-fidelity recovery of volumetric image series, with each volumetric image requiring less than 10 s of scan time. In addition to image fidelity, the time constants derived from FASTAR also match the ground truth closely even when a small number of projections is used for recovery. This development will enhance the capability of EPR to study fast dynamic processes that cannot be investigated using existing EPR imaging techniques.
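STAR's composite, multi-representation regularization is beyond a short snippet, but the underlying idea — sparsity-regularized least-squares recovery from projection data — can be illustrated with plain iterative soft-thresholding (ISTA). The forward model A and every name below are illustrative assumptions, not from the paper:

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=500):
    """Minimise 0.5*||A x - y||^2 + lam*||x||_1 by iterative
    soft-thresholding: x is the (sparse) image, y the measured
    projections, A the forward model."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * A.T @ (A @ x - y)        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrink
    return x
```

STAR differs in that it applies such shrinkage across several sparsifying representations at once and scales each penalty automatically, but the gradient-plus-shrinkage structure is the same.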

  3. Multispectral imaging and image processing

    NASA Astrophysics Data System (ADS)

    Klein, Julie

    2014-02-01

    The color accuracy of conventional RGB cameras is not sufficient for many color-critical applications. One of these applications, namely the measurement of color defects in yarns, is why Prof. Til Aach and the Institute of Image Processing and Computer Vision (RWTH Aachen University, Germany) started off with multispectral imaging. The first acquisition device was a camera using a monochrome sensor and seven bandpass color filters positioned sequentially in front of it. The camera allowed sampling the visible wavelength range more accurately and reconstructing the spectra for each acquired image position. An overview will be given of several optical and imaging aspects of the multispectral camera that have been investigated. For instance, optical aberrations caused by the filters and the camera lens deteriorate the quality of captured multispectral images. The different aberrations were analyzed thoroughly and compensated based on models of the optical elements and the imaging chain by utilizing image processing. With this compensation, geometrical distortions disappear and sharpness is enhanced, without reducing the color accuracy of multispectral images. Strong foundations in multispectral imaging were laid, and a fruitful cooperation was initiated with Prof. Bernhard Hill. Current research topics, like stereo multispectral imaging and goniometric multispectral measurements, that are further explored with his expertise will also be presented in this work.
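Reconstructing a spectrum from seven filtered measurements is, in the simplest linear model, a least-squares problem. The sketch below assumes known channel sensitivities and a spectral sampling no finer than the number of channels; all names are illustrative, not the institute's actual pipeline:

```python
import numpy as np

def reconstruct_spectrum(filter_responses, measurements):
    """Least-squares estimate of a spectrum from narrow-band filter
    measurements. Rows of `filter_responses` are the (known) spectral
    sensitivities of the camera channels; `measurements` holds one
    sensor reading per channel."""
    spectrum, *_ = np.linalg.lstsq(filter_responses, measurements,
                                   rcond=None)
    return spectrum
```

With more spectral samples than channels the problem becomes underdetermined, and practical systems add smoothness priors or low-dimensional spectral bases; the plain least-squares form above shows only the core linear model.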

  4. Image Acquisition in Real Time

    NASA Technical Reports Server (NTRS)

    2003-01-01

    In 1995, Carlos Jorquera left NASA's Jet Propulsion Laboratory (JPL) to focus on erasing the growing void between high-performance cameras and the requisite software to capture and process the resulting digital images. Since his departure from NASA, Jorquera's efforts have not only satisfied private industry's cravings for faster, more flexible, and more favorable software applications, but have blossomed into a successful entrepreneurship that is making its mark with improvements in fields such as medicine, weather forecasting, and X-ray inspection. Formerly a JPL engineer who constructed imaging systems for spacecraft and ground-based astronomy projects, Jorquera is the founder and president of the three-person firm, Boulder Imaging Inc., based in Louisville, Colorado. Joining Jorquera to round out the Boulder Imaging staff are Chief Operations Engineer Susan Downey, who also gained experience at JPL working on space-bound projects including Galileo and the Hubble Space Telescope, and Vice President of Engineering and Machine Vision Specialist Jie Zhu Kulbida, who has extensive industrial and research and development experience within the private sector.

  5. Optimisation of acquisition time in bioluminescence imaging

    NASA Astrophysics Data System (ADS)

    Taylor, Shelley L.; Mason, Suzannah K. G.; Glinton, Sophie; Cobbold, Mark; Styles, Iain B.; Dehghani, Hamid

    2015-03-01

    Decreasing the acquisition time in bioluminescence imaging (BLI) and bioluminescence tomography (BLT) will enable animals to be imaged within the window of stable emission of the bioluminescent source, increase imaging throughput, and minimise the time for which an animal is anaesthetised. This work investigates, through simulation using a heterogeneous mouse model, two methods of decreasing acquisition time: 1. imaging at fewer wavelengths (a reduction from five to three); and 2. increasing the bandwidth of the filters used for imaging. The results indicate that both methods are viable ways of decreasing the acquisition time without a loss in quantitative accuracy. Importantly, when choosing imaging wavelengths, the spectral attenuation of tissue and the emission spectrum of the source must be considered, in order to choose wavelengths at which a high signal can be achieved. Additionally, when increasing the bandwidth of the filters used for imaging, the bandwidth must be accounted for in the reconstruction algorithm.

  6. Image acquisition system for traffic monitoring applications

    NASA Astrophysics Data System (ADS)

    Auty, Glen; Corke, Peter I.; Dunn, Paul; Jensen, Murray; Macintyre, Ian B.; Mills, Dennis C.; Nguyen, Hao; Simons, Ben

    1995-03-01

    An imaging system for monitoring traffic on multilane highways is discussed. The system, named Safe-T-Cam, is capable of operating 24 hours per day in all but extreme weather conditions and can capture still images of vehicles traveling at up to 160 km/h. Systems operating at different remote locations are networked to allow transmission of images and data to a control center. A remote site facility comprises a vehicle detection and classification module (VCDM), an image acquisition module (IAM) and a license plate recognition module (LPRM). The remote site is connected to the central site by an ISDN communications network. The remote site system is discussed in this paper. The VCDM consists of a video camera, a specialized exposure control unit to maintain consistent image characteristics, and a 'real-time' image processing system that processes 50 images per second. The VCDM can detect and classify vehicles (e.g. distinguish cars from trucks). The vehicle class is used to determine what data should be recorded. The VCDM uses a vehicle-tracking technique to allow optimum triggering of the high-resolution camera of the IAM. The IAM camera combines the features necessary to operate consistently in the harsh environment encountered when imaging a vehicle 'head-on' in both day and night conditions. The image clarity obtained is ideally suited to automatic location and recognition of the vehicle license plate. This paper discusses the camera geometry, sensor characteristics and the image processing methods which permit consistent vehicle segmentation from a cluttered background, allowing object-oriented pattern recognition to be used for vehicle classification. The capture of high-resolution images and the image characteristics required for the LPRM's automatic reading of vehicle license plates are also discussed. The results of field tests presented demonstrate that the vision-based Safe-T-Cam system, currently installed on open highways, is capable of producing automatic

  7. Image Processing

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Images are prepared from data acquired by the multispectral scanner aboard Landsat, which views Earth in four ranges of the electromagnetic spectrum, two visible bands and two infrared. The scanner picks up radiation from ground objects and converts the radiation signatures to digital signals, which are relayed to Earth and recorded on tape. Each tape contains "pixels" or picture elements covering a ground area; computerized equipment processes the tapes and plots each pixel, line by line, to produce the basic image. The image can be further processed to correct sensor errors, to heighten contrast for feature emphasis or to enhance the end product in other ways. A key factor in the conversion of digital data to visual form is the precision of the processing equipment. Jet Propulsion Laboratory prepared a digital mosaic that was plotted and enhanced by Optronics International, Inc. by use of the company's C-4300 Colorwrite, a high precision, high speed system which manipulates and analyzes digital data and presents it in visual form on film. Optronics manufactures a complete family of image enhancement processing systems to meet all users' needs. Enhanced imagery is useful to geologists, hydrologists, land use planners, agricultural specialists, geographers and others.

  8. Acquisition, Image and Data Compression.

    DTIC Science & Technology

    1983-04-30

    SUPPLEMENTARY NOTES: Presentations made at MILCOM, Optical Society, SPIE, and Optical Computing Conference meetings. KEY WORDS: Spread Spectrum, Optical Transforms, Acquisition, Simulation, Tracking, PN Direct Sequence, Frequency Hopping. ABSTRACT: This report discusses

  9. Image acquisition system for a hospital enterprise

    NASA Astrophysics Data System (ADS)

    Moore, Stephen M.; Beecher, David E.

    1998-07-01

    Hospital enterprises are being created through mergers and acquisitions of existing hospitals. One area of interest in the PACS literature has been the integration of information systems and imaging systems. Hospital enterprises with multiple information and imaging systems provide new challenges to the integration task. This paper describes the requirements at the BJC Health System and a testbed system that is designed to acquire images from a number of different modalities and hospitals. This testbed system is integrated with Project Spectrum at BJC which is designed to provide a centralized clinical repository and a single desktop application for physician review of the patient chart (text, lab values, images).

  10. High speed image acquisition system of absolute encoder

    NASA Astrophysics Data System (ADS)

    Liao, Jianxiang; Chen, Xin; Chen, Xindu; Zhang, Fangjian; Wang, Han

    2017-01-01

    The absolute optical encoder, as a product of optical, mechanical and electronic integration, has been widely used in displacement measuring fields. However, how to improve the measurement velocity and reduce the manufacturing cost of absolute optical encoders is the key problem to be solved. To improve the measurement speed, a novel absolute optical encoder image acquisition system is proposed. The proposed acquisition system includes a linear CCD sensor for capturing coding pattern images, an optical magnifying system for enlarging the grating stripes, an analog-to-digital conversion (ADC) module for processing the CCD analog signal, and a field-programmable gate array (FPGA) device and other peripherals that perform the driving tasks. An absolute position measurement experiment was set up to verify and evaluate the proposed image acquisition system. The experimental results indicate that the proposed absolute optical encoder image acquisition system achieves an image acquisition speed of more than 9500 frames per second with good reliability and low manufacturing cost.

  11. A design of camera simulator for photoelectric image acquisition system

    NASA Astrophysics Data System (ADS)

    Cai, Guanghui; Liu, Wen; Zhang, Xin

    2015-02-01

    In the process of developing photoelectric image acquisition equipment, its function and performance need to be verified. In order to let the photoelectric device replay previously recorded image data during debugging and testing, a design scheme for a camera simulator is presented. In this system, with an FPGA as the control core, the image data is saved in NAND flash through the USB 2.0 bus. Because the access rate of the NAND flash is too slow to meet the requirements of the system, the pipeline technique and a high-bandwidth bus technique are applied in the design to improve the storage rate. The control logic in the FPGA reads image data out from flash and outputs it separately over three different interfaces: Camera Link, LVDS and PAL, which can provide image data for debugging and algorithm validation of photoelectric image acquisition equipment. However, because the standard PAL image resolution is 720*576, the resolution differs between the PAL image and the input image, so the image is output after resolution conversion. The experimental results demonstrate that the camera simulator outputs the three image-sequence formats correctly, and they can be captured and displayed by a frame grabber. The three-format image data can meet the test requirements of most equipment, shortening debugging time and improving test efficiency.

  12. SU-C-18C-06: Radiation Dose Reduction in Body Interventional Radiology: Clinical Results Utilizing a New Imaging Acquisition and Processing Platform

    SciTech Connect

    Kohlbrenner, R; Kolli, KP; Taylor, A; Kohi, M; Fidelman, N; LaBerge, J; Kerlan, R; Gould, R

    2014-06-01

    Purpose: To quantify the patient radiation dose reduction achieved during transarterial chemoembolization (TACE) procedures performed in a body interventional radiology suite equipped with the Philips Allura Clarity imaging acquisition and processing platform, compared to TACE procedures performed in the same suite equipped with the Philips Allura Xper platform. Methods: Total fluoroscopy time, cumulative dose area product, and cumulative air kerma were recorded for the first 25 TACE procedures performed to treat hepatocellular carcinoma (HCC) in a Philips body interventional radiology suite equipped with Philips Allura Clarity. The same data were collected for the prior 85 TACE procedures performed to treat HCC in the same suite equipped with Philips Allura Xper. Mean values from these cohorts were compared using two-tailed t tests. Results: Following installation of the Philips Allura Clarity platform, a 42.8% reduction in mean cumulative dose area product (3033.2 versus 1733.6 mGy·cm², p < 0.0001) and a 31.2% reduction in mean cumulative air kerma (1445.4 versus 994.2 mGy, p < 0.001) was achieved compared to similar procedures performed in the same suite equipped with the Philips Allura Xper platform. Mean total fluoroscopy time was not significantly different between the two cohorts (1679.3 versus 1791.3 seconds, p = 0.41). Conclusion: This study demonstrates a significant patient radiation dose reduction during TACE procedures performed to treat HCC after a body interventional radiology suite was converted to the Philips Allura Clarity platform from the Philips Allura Xper platform. Future work will focus on evaluation of patient dose reduction in a larger cohort of patients across a broader range of procedures and in specific populations, including obese patients and pediatric patients, and comparison of image quality between the two platforms. Funding for this study was provided by Philips Healthcare, with 5% salary support provided to authors K. Pallav
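
The cohort comparison above uses two-tailed t tests on unequal-sized groups (25 versus 85 procedures). As an illustration only, with made-up numbers rather than the study's measurements, Welch's two-sample t statistic can be sketched as:

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic and approximate degrees of freedom."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2a, se2b = va / na, vb / nb
    t = (ma - mb) / math.sqrt(se2a + se2b)
    # Welch-Satterthwaite degrees of freedom
    df = (se2a + se2b) ** 2 / (se2a ** 2 / (na - 1) + se2b ** 2 / (nb - 1))
    return t, df

# Hypothetical dose-area-product samples (mGy*cm^2), NOT the study's data
xper = [3100.0, 2900.0, 3200.0, 2950.0]
clarity = [1700.0, 1800.0, 1650.0, 1780.0]
t, df = welch_t(xper, clarity)
```

The p value would then come from the t distribution with df degrees of freedom; the study itself does not state which t-test variant was used.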

  13. Digital data acquisition and processing.

    PubMed

    Naivar, Mark A; Galbraith, David W

    2015-01-05

    A flow cytometer is made up of many different subsystems that work together to measure the optical properties of individual cells within a sample. The data acquisition system (also called the data system) is one of these subsystems, and it is responsible for converting the electrical signals from the optical detectors into list-mode data. This unit describes the inner workings of the data system, and provides insight into how the instrument functions as a whole. Some of the information provided in this unit is applicable to everyday use of these instruments, and, at minimum, should make it easier for the reader to assemble a specific data system. With the considerable advancement of electronics technology, it becomes possible to build an entirely functional data system using inexpensive hobbyist-level electronics. This unit covers both analog and digital data systems, but the primary focus is on the more prevalent digital data systems of modern flow cytometric instrumentation.

  14. Functional MRI using regularized parallel imaging acquisition.

    PubMed

    Lin, Fa-Hsuan; Huang, Teng-Yi; Chen, Nan-Kuei; Wang, Fu-Nien; Stufflebeam, Steven M; Belliveau, John W; Wald, Lawrence L; Kwong, Kenneth K

    2005-08-01

    Parallel MRI techniques reconstruct full-FOV images from undersampled k-space data by using the uncorrelated information from RF array coil elements. One disadvantage of parallel MRI is that the image signal-to-noise ratio (SNR) is degraded because of the reduced data samples and the spatially correlated nature of multiple RF receivers. Regularization has been proposed to mitigate the SNR loss arising from the latter. Since it is necessary to utilize a static prior in the regularization, the dynamic contrast-to-noise ratio (CNR) in parallel MRI will be affected. In this paper we investigate the CNR of regularized sensitivity encoding (SENSE) acquisitions. We propose to implement regularized parallel MRI acquisitions in functional MRI (fMRI) experiments by incorporating the prior from combined segmented echo-planar imaging (EPI) acquisition into SENSE reconstructions. We investigated the impact of regularization on the CNR by performing parametric simulations at various BOLD contrasts, acceleration rates, and sizes of the active brain areas. As quantified by receiver operating characteristic (ROC) analysis, the simulations suggest that the detection power of SENSE fMRI can be improved by regularized reconstructions, compared to unregularized reconstructions. Human motor and visual fMRI data acquired at different field strengths and array coils also demonstrate that regularized SENSE improves the detection of functionally active brain regions.
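
The regularized SENSE reconstruction described above is, at its core, a prior-anchored least-squares solve. A toy numerical sketch (a synthetic 2-pixel, 2-coil fold-over; the encoding matrix E, the prior, and the weight lam are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "SENSE" problem: 2 coils, undersampling folds two pixels into one sample.
# E maps the 2 true pixel values to 2 coil measurements of the aliased sum.
x_true = np.array([1.0, 0.5])                    # true image pixels
E = np.array([[0.9, 0.4],                        # coil sensitivities at the
              [0.3, 1.1]])                       # two folded locations
y = E @ x_true + 0.05 * rng.standard_normal(2)   # noisy coil data

prior = np.array([0.9, 0.6])                     # static prior (e.g. from a
                                                 # fully sampled reference)
lam = 0.1                                        # regularization weight

# Tikhonov-regularized solution: argmin ||Ex - y||^2 + lam * ||x - prior||^2
A = E.T @ E + lam * np.eye(2)
b = E.T @ y + lam * prior
x_reg = np.linalg.solve(A, b)
```

Larger lam pulls the reconstruction toward the static prior, which is exactly why the paper examines how regularization trades SNR gains against dynamic CNR.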

  15. Functional MRI Using Regularized Parallel Imaging Acquisition

    PubMed Central

    Lin, Fa-Hsuan; Huang, Teng-Yi; Chen, Nan-Kuei; Wang, Fu-Nien; Stufflebeam, Steven M.; Belliveau, John W.; Wald, Lawrence L.; Kwong, Kenneth K.

    2013-01-01

    Parallel MRI techniques reconstruct full-FOV images from undersampled k-space data by using the uncorrelated information from RF array coil elements. One disadvantage of parallel MRI is that the image signal-to-noise ratio (SNR) is degraded because of the reduced data samples and the spatially correlated nature of multiple RF receivers. Regularization has been proposed to mitigate the SNR loss arising from the latter. Since it is necessary to utilize a static prior in the regularization, the dynamic contrast-to-noise ratio (CNR) in parallel MRI will be affected. In this paper we investigate the CNR of regularized sensitivity encoding (SENSE) acquisitions. We propose to implement regularized parallel MRI acquisitions in functional MRI (fMRI) experiments by incorporating the prior from combined segmented echo-planar imaging (EPI) acquisition into SENSE reconstructions. We investigated the impact of regularization on the CNR by performing parametric simulations at various BOLD contrasts, acceleration rates, and sizes of the active brain areas. As quantified by receiver operating characteristic (ROC) analysis, the simulations suggest that the detection power of SENSE fMRI can be improved by regularized reconstructions, compared to unregularized reconstructions. Human motor and visual fMRI data acquired at different field strengths and array coils also demonstrate that regularized SENSE improves the detection of functionally active brain regions. PMID:16032694

  16. Material appearance acquisition from a single image

    NASA Astrophysics Data System (ADS)

    Zhang, Xu; Cui, Shulin; Cui, Hanwen; Yang, Lin; Wu, Tao

    2017-01-01

    The scope of this paper is to present a method of material appearance acquisition (MAA) from a single image. In this paper, material appearance is represented by a spatially varying bidirectional reflectance distribution function (SVBRDF). Therefore, MAA can be reduced to the problem of recovering each pixel's BRDF parameters from an original input image, which include the diffuse coefficient, specular coefficient, normal and glossiness based on the Blinn-Phong model. In our method, the workflow of MAA includes five main phases: highlight removal, estimation of intrinsic images, shape from shading (SFS), initialization of glossiness, and refining the SVBRDF parameters based on IPOPT. The results indicate that the proposed technique can effectively extract the material appearance from a single image.

  17. Optimization of EFTEM image acquisition by using elastically filtered images for drift correction.

    PubMed

    Heil, Tobias; Kohl, Helmut

    2010-06-01

    Because of its high spatial resolution, energy-filtering transmission electron microscopy (EFTEM) has become widely used for the analysis of the chemical composition of nanostructures. To obtain the best spatial resolution, the precise correction of instrumental influences and the optimization of the data acquisition procedure are very important. In this publication, we discuss a modified image acquisition procedure that optimizes the acquisition process of the EFTEM images, especially for long exposure times and measurements that are affected by large spatial drift. To alleviate the blurring of the image caused by the spatial drift, we propose to take several EFTEM images with a shorter exposure time (sub-images) and merge these sub-images afterwards. To correct for the drift between these sub-images, elastically filtered images are acquired between two subsequent sub-images. These elastically filtered images are highly suitable for spatial drift correction based on the cross-correlation method. The use of the drift information between two elastically filtered images permits merging the drift-corrected sub-images automatically and with high accuracy, resulting in sharper edges and improved signal intensity in the final EFTEM image. Artefacts that are caused by prominent noise peaks in the dark reference image have been suppressed by calculating the dark reference image from three images. Furthermore, using the information given by the elastically filtered images, it is possible to drift-correct a set of EFTEM images already during the acquisition. This simplifies the post-processing for elemental mapping and offers the possibility of active drift correction using the image shift function of the microscope, leading to an increased field of view.
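
The drift correction between elastically filtered images relies on locating the cross-correlation peak. A minimal FFT-based sketch (integer-pixel shifts and circular boundaries assumed; synthetic data, not EFTEM measurements):

```python
import numpy as np

def drift_by_xcorr(ref, img):
    """Estimate the integer-pixel shift that registers img back onto ref,
    from the peak of the circular FFT cross-correlation."""
    xc = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))).real
    peak = np.unravel_index(np.argmax(xc), xc.shape)
    # fold peak indices into signed shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, xc.shape))

rng = np.random.default_rng(1)
ref = rng.random((64, 64))
img = np.roll(ref, shift=(3, -5), axis=(0, 1))   # simulated specimen drift
corr = drift_by_xcorr(ref, img)                  # shift that undoes the drift
```

Applying `np.roll(img, corr)` re-aligns the drifted frame with the reference; in practice sub-pixel refinement and windowing would be added on top of this.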

  18. Real-Time Protein Crystallization Image Acquisition and Classification System.

    PubMed

    Sigdel, Madhav; Pusey, Marc L; Aygun, Ramazan S

    2013-07-03

    In this paper, we describe the design and implementation of a stand-alone real-time system for protein crystallization image acquisition and classification, with the goal of assisting crystallographers in scoring crystallization trials. An in-house assembled fluorescence microscopy system is built for image acquisition. The images are classified into three categories: non-crystals, likely leads, and crystals. Image classification consists of two main steps - image feature extraction and application of classification based on multilayer perceptron (MLP) neural networks. Our feature extraction involves applying multiple thresholding techniques, identifying high intensity regions (blobs), and generating intensity and blob features to obtain a 45-dimensional feature vector per image. To reduce the risk of missing crystals, we introduce a max-class ensemble classifier which applies multiple classifiers and chooses the highest score (or class). We performed our experiments on 2250 images consisting of 67% non-crystal, 18% likely lead, and 15% clear crystal images, and tested our results using 10-fold cross validation. Our results demonstrate that the method is very efficient (< 3 seconds to process and classify an image) and has comparatively high accuracy. Our system only misses 1.2% of the crystals (classified as non-crystals), most likely due to low illumination or out-of-focus image capture, and has an overall accuracy of 88%.
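
The max-class ensemble idea above (take the highest class any classifier proposes, so crystals are less likely to be missed) can be sketched in a few lines; the probability vectors below are hypothetical, not the system's MLP outputs:

```python
def max_class_ensemble(prob_rows):
    """Choose the highest (most 'crystal-like') class any classifier predicts.

    prob_rows: per-classifier probability vectors over classes ordered
    [non-crystal, likely-lead, crystal]; returns the winning class index.
    """
    per_classifier = [max(range(len(p)), key=p.__getitem__) for p in prob_rows]
    return max(per_classifier)

# Two hypothetical classifiers disagree; the ensemble keeps the higher class
# to reduce the risk of discarding a real crystal.
votes = [
    [0.7, 0.2, 0.1],   # classifier A says: non-crystal
    [0.2, 0.3, 0.5],   # classifier B says: crystal
]
label = max_class_ensemble(votes)
```

This deliberately biases errors toward false positives (non-crystals flagged for review) rather than missed crystals, matching the paper's stated goal.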

  19. A Joint Acquisition-Estimation Framework for MR Phase Imaging

    PubMed Central

    Dagher, Joseph

    2015-01-01

    Measuring the phase of the MR signal is faced with fundamental challenges such as phase aliasing, noise and unknown offsets of the coil array. There is a paucity of acquisition, reconstruction and estimation methods that rigorously address these challenges. This reduces the reliability of information processing in phase domain. We propose a joint acquisition-processing framework that addresses the challenges of MR phase imaging using a rigorous theoretical treatment. Our proposed solution acquires the multi-coil complex data without any increase in acquisition time. Our corresponding estimation algorithm is applied optimally voxel-per-voxel. Results show that our framework achieves performance gains up to an order of magnitude compared to existing methods. PMID:26221666
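
Phase aliasing, the first challenge named above, is easy to demonstrate in one dimension; the sketch below is a generic illustration using NumPy's unwrap on a noise-free synthetic ramp, not the paper's joint acquisition-estimation method:

```python
import numpy as np

# Simulated true phase ramp exceeding the principal range (-pi, pi]
true_phase = np.linspace(0.0, 6.0 * np.pi, 200)
wrapped = np.angle(np.exp(1j * true_phase))   # what a phase measurement yields
unwrapped = np.unwrap(wrapped)                # 1-D unwrapping estimate
```

Unwrapping succeeds here only because adjacent samples differ by less than pi; with noise, large phase gradients, and per-coil offsets, this simple approach fails, which is precisely the regime the paper's framework is designed to handle.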

  20. Rock fracture image acquisition and analysis

    NASA Astrophysics Data System (ADS)

    Wang, W.; Zongpu, Jia; Chen, Liwan

    2007-12-01

    As a cooperation project between Sweden and China, this paper presents rock fracture image acquisition and analysis. Rock fracture images are acquired using UV illumination and visible optical illumination. To represent the fracture network reasonably, we set up models to characterize the network; based on these models, we used a best-fit Feret method to automatically determine the fracture zone, and then, by skeletonizing the fractures, obtained endpoints, junctions, holes, particles, and branches. Based on the new parameters and a set of common parameters, the fracture network density, porosity, connectivity and complexity can be obtained, and the fracture network characterized. In the following, we first present basic considerations and basic parameters for fractures (Primary study of characteristics of rock fractures), then set up a model for fracture network analysis (Fracture network analysis), use the model to analyze fracture networks in different images (Two dimensional fracture network analysis based on slices), and finally give conclusions and suggestions.

  1. Auditory Processing Disorder and Foreign Language Acquisition

    ERIC Educational Resources Information Center

    Veselovska, Ganna

    2015-01-01

    This article aims at exploring various strategies for coping with the auditory processing disorder in the light of foreign language acquisition. The techniques relevant to dealing with the auditory processing disorder can be attributed to environmental and compensatory approaches. The environmental one involves actions directed at creating a…

  2. Digital image processing.

    PubMed

    Seeram, Euclid

    2004-01-01

    Digital image processing is now commonplace in radiology, nuclear medicine and sonography. This article outlines underlying principles and concepts of digital image processing. After completing this article, readers should be able to: List the limitations of film-based imaging. Identify major components of a digital imaging system. Describe the history and application areas of digital image processing. Discuss image representation and the fundamentals of digital image processing. Outline digital image processing techniques and processing operations used in selected imaging modalities. Explain the basic concepts and visualization tools used in 3-D and virtual reality imaging. Recognize medical imaging informatics as a new area of specialization for radiologic technologists.

  3. New automated iris image acquisition method.

    PubMed

    Park, Kang Ryoung

    2005-02-10

    I propose a new iris image acquisition method based on wide- and narrow-view iris cameras. The narrow-view camera has the functionalities of automatic zooming, focusing, panning, and tilting based on the two-dimensional and three-dimensional eye positions detected from the wide- and narrow-view stereo cameras. By using the wide- and narrow-view iris cameras, I compute the user's gaze position, which is used for aligning the X-Y position of the user's eye, and I use the visible-light illuminator for fake-eye detection.

  4. Applications Of Digital Image Acquisition In Anthropometry

    NASA Astrophysics Data System (ADS)

    Woolford, Barbara; Lewis, James L.

    1981-10-01

    Anthropometric data on reach and mobility have traditionally been collected by time-consuming and relatively inaccurate manual methods. Three-dimensional digital image acquisition promises to radically increase the speed and ease of data collection and analysis. A three-camera video anthropometric system for collecting position, velocity, and force data in real time is under development for the Anthropometric Measurement Laboratory at NASA's Johnson Space Center. The use of a prototype of this system for collecting data on reach capabilities and on lateral stability is described. Two extensions of this system are planned.

  5. Coronary CTA: image acquisition and interpretation.

    PubMed

    Kerl, Josef Matthias; Hofmann, Lars K; Thilo, Christian; Vogl, Thomas J; Costello, Philip; Schoepf, U Joseph

    2007-02-01

    Computed tomography (CT) of the heart, because of ongoing technical refinement and intense scientific and clinical evaluation, has left the research realm and has matured into a clinical application that is about to fulfill its promise to replace invasive cardiac catheterization in some patient populations. By nature of its target, the continuously moving heart, CT coronary angiography is technically more challenging than other CT applications. Also, rapid technical development requires constant adaptation of acquisition protocols. Those challenges, however, are in no way insurmountable for users with knowledge of general CT technique. The intent of this communication is to provide for those interested in and involved with coronary CT angiography a step-by-step manual, introducing our approach to performing coronary CT angiography. Included are considerations regarding appropriate patient selection, patient medication, radiation protection, contrast enhancement, acquisition and reconstruction parameters, image display and analysis techniques and also the radiology report. Our recommendations are based on our experience which spans the evolution of multidetector-row CT for cardiac applications from its beginnings to the most current iterations of advanced acquisition modalities, which we believe herald the entrance of this test into routine clinical practice.

  6. Image processing in astronomy

    NASA Astrophysics Data System (ADS)

    Berry, Richard

    1994-04-01

    Today's personal computers are more powerful than the mainframes that processed images during the early days of space exploration. We have entered an age in which anyone can do image processing. Topics covering the following aspects of image processing are discussed: digital-imaging basics, image calibration, image analysis, scaling, spatial enhancements, and compositing.
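
Of the topics listed, image calibration is the most mechanical: subtract a dark frame and divide by a flat field. A minimal sketch on synthetic data (the rescaling convention used here is one common choice, not the only one):

```python
import numpy as np

def calibrate(raw, dark, flat):
    """Standard dark-subtraction and flat-field correction."""
    corrected = (raw - dark) / np.maximum(flat - dark, 1e-12)
    return corrected * np.mean(flat - dark)      # restore the original scale

# Synthetic example: a uniform scene seen through uneven pixel sensitivity
dark = np.full((4, 4), 10.0)                     # sensor offset frame
gain = np.linspace(0.5, 1.5, 16).reshape(4, 4)   # pixel-to-pixel response
scene = 100.0
raw = dark + gain * scene                        # what the camera records
flat = dark + gain * 200.0                       # exposure of a uniform target
cal = calibrate(raw, dark, flat)
```

After calibration the synthetic frame is flat again, which is the whole point: vignetting and sensitivity variations divide out.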

  7. Reducing the Effects of Background Noise during Auditory Functional Magnetic Resonance Imaging of Speech Processing: Qualitative and Quantitative Comparisons between Two Image Acquisition Schemes and Noise Cancellation

    ERIC Educational Resources Information Center

    Blackman, Graham A.; Hall, Deborah A.

    2011-01-01

    Purpose: The intense sound generated during functional magnetic resonance imaging (fMRI) complicates studies of speech and hearing. This experiment evaluated the benefits of using active noise cancellation (ANC), which attenuates the level of the scanner sound at the participant's ear by up to 35 dB around the peak at 600 Hz. Method: Speech and…

  8. Processes Involved in Acquisition of Cognitive Skills.

    ERIC Educational Resources Information Center

    Christensen, Carol A.; Bain, John

    Processes involved in the acquisition of cognitive skills were studied through an investigation of the efficacy of initially encoding knowledge of a cognitive skill in either declarative or procedural form. Subjects were 80 university students. The cognitive skill, learning the steps to program a simulated video cassette recorder (VCR), was taught…

  9. Rock fracture image acquisition with both visible and ultraviolet illuminations

    NASA Astrophysics Data System (ADS)

    Wang, Weixing; Hakami, Eva

    2006-02-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) has identified the need for a better understanding of radionuclide transport and retention processes in fractured rock since 1994. In this study, the first hard problem is to obtain rock fracture images of good quality, since the rock surface is very rough and composed of complicated, multiple fractures; as a result, image acquisition is of primary importance. As a cooperation project between Sweden and China, we sampled a number of rock specimens in the field for analyzing the rock fracture network by visible and ultraviolet imaging techniques. The samples are resin-injected, so that open fractures can be seen clearly under UV illumination, while the rock surface information can be obtained using visible optical illumination. We used different digital cameras and a microscope to take images under the two illuminations. From the same samples, we found that the UV illumination image clearly shows whether fractures are open or closed, and the visible optical illumination gives information about the rock surface (e.g. filling materials inside fractures). By applying this technique, fractures as narrow as 0.01 mm can be analyzed. This paper presents: (1) rock fracture image acquisition techniques; (2) rock fracture image acquisition using UV illumination and visible optical illumination; and (3) conclusions. The studied method can be used both in the field and in a laboratory.

  10. Image-Processing Educator

    NASA Technical Reports Server (NTRS)

    Gunther, F. J.

    1986-01-01

    Apple Image-Processing Educator (AIPE) explores ability of microcomputers to provide personalized computer-assisted instruction (CAI) in digital image processing of remotely sensed images. AIPE is "proof-of-concept" system, not polished production system. User-friendly prompts provide access to explanations of common features of digital image processing and of sample programs that implement these features.

  11. Image post-processing in dental practice.

    PubMed

    Gormez, Ozlem; Yilmaz, Hasan Huseyin

    2009-10-01

    Image post-processing of dental digital radiographs, a function commonly used in dental practice, is presented in this article. Digital radiography has been available in dentistry for more than 25 years and its use by dental practitioners is steadily increasing. Digital acquisition of radiographs enables computer-based image post-processing to enhance image quality and increase the accuracy of interpretation. Image post-processing applications can easily be used in the dental office with a computer and image processing programs. In this article, image post-processing operations such as image restoration, image enhancement, image analysis, image synthesis, and image compression, and their diagnostic efficacy, are described. In addition, this article provides general dental practitioners with a broad overview of the benefits of the different image post-processing operations to help them understand the role that the technology can play in their practices.
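
Among the enhancement operations mentioned, a window/level (brightness/contrast) adjustment is the simplest to sketch; the level and width values below are arbitrary examples, not clinical presets:

```python
import numpy as np

def window_level(img, level, width):
    """Linear contrast enhancement: map [level - width/2, level + width/2]
    onto [0, 1], clipping values outside the window."""
    lo, hi = level - width / 2.0, level + width / 2.0
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

# Hypothetical 8-bit-style pixel values
img = np.array([[50.0, 100.0],
                [150.0, 200.0]])
out = window_level(img, level=125.0, width=100.0)
```

Narrowing the width steepens the contrast around the chosen level, which is how subtle density differences in a radiograph are made visible.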

  12. Colony image acquisition and genetic segmentation algorithm and colony analyses

    NASA Astrophysics Data System (ADS)

    Wang, W. X.

    2012-01-01

    Colony analysis is used in a large number of fields such as food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, and sterility testing. In order to reduce labor and increase analysis accuracy, many researchers and developers have worked on image analysis systems. The main problems in such systems are image acquisition, image segmentation, and image analysis. In this paper, to acquire colony images of good quality, an illumination box was constructed, in which the distances between the lights and the dish, the camera lens and the lights, and the camera lens and the dish are adjusted optimally. Image segmentation is based on a genetic approach that allows one to treat the segmentation problem as a global optimization. After image pre-processing and image segmentation, the colony analyses are performed. The colony image analysis consists of (1) basic colony parameter measurements; (2) colony size analysis; (3) colony shape analysis; and (4) colony surface measurements. All the above visual colony parameters can be selected and combined to form new engineering parameters, and the colony analysis can be applied to different applications.
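
    The genetic-segmentation idea, treating threshold selection as a global optimization, can be sketched with a toy genetic algorithm that maximizes an Otsu-style between-class variance. The paper does not specify its operators, so the population size, crossover, mutation, and histogram below are all illustrative assumptions:

```python
import random

def between_class_variance(hist, t):
    """Otsu-style objective: weighted variance between the two classes split at t."""
    total = sum(hist)
    w0 = sum(hist[:t]); w1 = total - w0
    if w0 == 0 or w1 == 0:
        return 0.0
    m0 = sum(i * hist[i] for i in range(t)) / w0
    m1 = sum(i * hist[i] for i in range(t, len(hist))) / w1
    return w0 * w1 * (m0 - m1) ** 2

def ga_threshold(hist, pop_size=20, generations=30, seed=1):
    """Evolve a population of candidate thresholds toward the best split."""
    rng = random.Random(seed)
    pop = [rng.randrange(1, len(hist)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: between_class_variance(hist, t), reverse=True)
        parents = pop[: pop_size // 2]          # selection: keep the fitter half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            t = (a + b) // 2                    # crossover: midpoint of parents
            if rng.random() < 0.3:              # mutation: small random shift
                t = min(len(hist) - 1, max(1, t + rng.randrange(-3, 4)))
            children.append(t)
        pop = parents + children
    return max(pop, key=lambda t: between_class_variance(hist, t))

# Bimodal histogram: background peak near bin 2, colony peak near bin 11
hist = [0, 5, 20, 10, 2, 0, 0, 0, 1, 4, 15, 25, 10, 3, 0, 0]
print(ga_threshold(hist))  # a threshold in the empty valley between the peaks
```

    For a single threshold an exhaustive search would of course suffice; the genetic formulation pays off when the segmentation has many interacting parameters.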

  13. Major system acquisitions process (A-109)

    NASA Technical Reports Server (NTRS)

    Saric, C.

    1991-01-01

    The Major System examined is a combination of elements (hardware, software, facilities, and services) that function together to produce the capabilities required to fulfill a mission need. The system acquisition process is a sequence of activities beginning with documentation of a mission need and ending with introduction of the major system into operational use or otherwise successful achievement of program objectives. It is concluded that the A-109 process makes sense and provides a systematic, integrated management approach, along with appropriate management-level involvement and innovative 'best ideas' from the private sector, in satisfying mission needs.

  14. Research on remote sensing image pixel attribute data acquisition method in AutoCAD

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoyang; Sun, Guangtong; Liu, Jun; Liu, Hui

    2013-07-01

    Remote sensing images are widely used in AutoCAD, but AutoCAD lacks functions for remote sensing image processing. In this paper, ObjectARX is used as the secondary development tool, combined with the Image Engine SDK, to realize remote sensing image pixel attribute data acquisition in AutoCAD, which provides critical technical support for remote sensing image processing algorithms in the AutoCAD environment.

  15. Mosaic acquisition and processing for optical-resolution photoacoustic microscopy

    NASA Astrophysics Data System (ADS)

    Shao, Peng; Shi, Wei; Chee, Ryan K. W.; Zemp, Roger J.

    2012-08-01

    In optical-resolution photoacoustic microscopy (OR-PAM), data acquisition time is limited by both laser pulse repetition rate (PRR) and scanning speed. Optical scanning offers high speed but a limited field of view, determined by ultrasound transducer sensitivity. In this paper, we propose a hybrid optical- and mechanical-scanning OR-PAM system with mosaic data acquisition and processing. The system employs fast-scanning mirrors and a diode-pumped, nanosecond-pulsed, Ytterbium-doped, 532-nm fiber laser with a PRR up to 600 kHz. Data from a sequence of image mosaic patches is acquired systematically, at predetermined mechanical scanning locations, with optical scanning. After all imaging locations are covered, a large panoramic scene is generated by stitching the mosaic patches together. Our proposed system is shown to be at least 20 times faster than previously reported OR-PAM systems.
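
    In the simplest non-overlapping case, the mosaic stitching described amounts to placing each patch at its known mechanical-stage position on a panoramic canvas; the patch sizes and positions below are illustrative:

```python
import numpy as np

def stitch_mosaic(patches, positions, canvas_shape):
    """Place each patch at its known stage position (row, col) on a blank canvas."""
    canvas = np.zeros(canvas_shape, dtype=patches[0].dtype)
    for patch, (y, x) in zip(patches, positions):
        h, w = patch.shape
        canvas[y:y + h, x:x + w] = patch
    return canvas

# Four 2x2 patches acquired at the four corners of a 4x4 field of view
patches = [np.full((2, 2), v) for v in (1, 2, 3, 4)]
positions = [(0, 0), (0, 2), (2, 0), (2, 2)]
panorama = stitch_mosaic(patches, positions, (4, 4))
print(panorama)
```

    Real systems must additionally blend overlapping patch borders and correct for stage-positioning error, which this sketch omits.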

  16. Reproducible high-resolution multispectral image acquisition in dermatology

    NASA Astrophysics Data System (ADS)

    Duliu, Alexandru; Gardiazabal, José; Lasser, Tobias; Navab, Nassir

    2015-07-01

    Multispectral image acquisitions are increasingly popular in dermatology, due to their improved spectral resolution which enables better tissue discrimination. Most applications however focus on restricted regions of interest, imaging only small lesions. In this work we present and discuss an imaging framework for high-resolution multispectral imaging on large regions of interest.

  17. Age of Acquisition and Imageability: A Cross-Task Comparison

    ERIC Educational Resources Information Center

    Ploetz, Danielle M.; Yates, Mark

    2016-01-01

    Previous research has reported an imageability effect on visual word recognition. Words that are high in imageability are recognised more rapidly than are those lower in imageability. However, later researchers argued that imageability was confounded with age of acquisition. In the current research, these two factors were manipulated in a…

  18. Star sensor image acquisition and preprocessing hardware system based on CMOS image sensor and FPGA

    NASA Astrophysics Data System (ADS)

    Hao, Xuetao; Jiang, Jie; Zhang, Guangjun

    2003-09-01

    A star sensor is an avionics instrument used to provide the absolute 3-axis attitude of a spacecraft utilizing star observations. It consists of an electronic camera and associated processing electronics. As an outcome of advances in the state of the art, the new generation of star sensors features higher speed and lower cost, power dissipation, and size than the first generation. This paper describes a star sensor front-end image acquisition and pre-processing hardware system based on CMOS image sensor and FPGA technology. In practice, star images are produced by a simple simulator on a PC, acquired by the CMOS image sensor, pre-processed by the FPGA, saved in SRAM, read out over the EPP protocol, and validated by image processing software on the PC. The hardware part of the system acquires images through the CMOS image sensor controlled by the FPGA, processes the image data in an FPGA circuit module, and saves images to SRAM for test. It provides the basic image data for star recognition and attitude determination of spacecraft. As an important reference for developing a star sensor prototype, the system validates the performance advantages of the new generation of star sensors.

  19. 28. Perimeter acquisition radar building room #302, signal process and ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    28. Perimeter acquisition radar building room #302, signal process and analog receiver room - Stanley R. Mickelsen Safeguard Complex, Perimeter Acquisition Radar Building, Limited Access Area, between Limited Access Patrol Road & Service Road A, Nekoma, Cavalier County, ND

  20. Hyperspectral image processing methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hyperspectral image processing refers to the use of computer algorithms to extract, store and manipulate both spatial and spectral information contained in hyperspectral images across the visible and near-infrared portion of the electromagnetic spectrum. A typical hyperspectral image processing work...

  1. Simultaneous acquisition of differing image types

    DOEpatents

    Demos, Stavros G

    2012-10-09

    A system in one embodiment includes an image forming device for forming an image from an area of interest containing different image components; an illumination device for illuminating the area of interest with light containing multiple components; at least one light source coupled to the illumination device, the at least one light source providing light to the illumination device containing different components, each component having distinct spectral characteristics and relative intensity; an image analyzer coupled to the image forming device, the image analyzer decomposing the image formed by the image forming device into multiple component parts based on type of imaging; and multiple image capture devices, each image capture device receiving one of the component parts of the image. A method in one embodiment includes receiving an image from an image forming device; decomposing the image formed by the image forming device into multiple component parts based on type of imaging; receiving the component parts of the image; and outputting image information based on the component parts of the image. Additional systems and methods are presented.

  2. A study on the effect of CT imaging acquisition parameters on lung nodule image interpretation

    NASA Astrophysics Data System (ADS)

    Yu, Shirley J.; Wantroba, Joseph S.; Raicu, Daniela S.; Furst, Jacob D.; Channin, David S.; Armato, Samuel G., III

    2009-02-01

    Most Computer-Aided Diagnosis (CAD) research studies are performed using a single type of Computed Tomography (CT) scanner and therefore do not take into account the effect of differences in the imaging acquisition scanner parameters. In this paper, we present a study on the effect of the CT parameters on the low-level image features automatically extracted from CT images for lung nodule interpretation. The study is an extension of our previous study, where we showed that image features can be used to predict semantic characteristics of lung nodules such as margin, lobulation, spiculation, and texture. Using the Lung Image Database Consortium (LIDC) dataset, we propose to integrate the imaging acquisition parameters with the low-level image features to generate classification models for the nodules' semantic characteristics. Our preliminary results identify seven CT parameters (convolution kernel, reconstruction diameter, exposure, nodule location along the z-axis, distance source to patient, slice thickness, and kVp) as influential in producing classification rules for the LIDC semantic characteristics. Further post-processing analysis, which included running box plots and binning of values, identified four CT parameters: distance source to patient, kVp, nodule location, and rescale intercept. The identification of these parameters will create the premises to normalize the image features across different scanners and, in the long run, generate automatic rules for lung nodule interpretation independently of the CT scanner types.
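
    The "binning of values" step used in the post-processing analysis can be sketched as follows; the bin edges and kVp settings are illustrative, not taken from the LIDC study:

```python
def bin_values(values, edges):
    """Assign each value the index of the first bin whose upper edge bounds it;
    values above every edge fall in the last (open-ended) bin."""
    def bin_of(v):
        for i, e in enumerate(edges):
            if v <= e:
                return i
        return len(edges)
    return [bin_of(v) for v in values]

# Hypothetical kVp settings binned into low (<= 120) and high (> 120)
kvp = [120, 140, 120, 130, 135]
print(bin_values(kvp, edges=[120]))  # -> [0, 1, 0, 1, 1]
```

    Discretizing continuous acquisition parameters this way makes them usable as attributes in rule-based classification models.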

  3. Motion-gated acquisition for in vivo optical imaging

    PubMed Central

    Gioux, Sylvain; Ashitate, Yoshitomo; Hutteman, Merlijn; Frangioni, John V.

    2009-01-01

    Wide-field continuous wave fluorescence imaging, fluorescence lifetime imaging, frequency domain photon migration, and spatially modulated imaging have the potential to provide quantitative measurements in vivo. However, most of these techniques have not yet been successfully translated to the clinic due to challenging environmental constraints. In many circumstances, cardiac and respiratory motion greatly impair image quality and/or quantitative processing. To address this fundamental problem, we have developed a low-cost, field-programmable gate array–based, hardware-only gating device that delivers a phase-locked acquisition window of arbitrary delay and width that is derived from an unlimited number of pseudo-periodic and nonperiodic input signals. All device features can be controlled manually or via USB serial commands. The working range of the device spans the extremes of mouse electrocardiogram (1000 beats per minute) to human respiration (4 breaths per minute), with timing resolution ⩽0.06%, and jitter ⩽0.008%, of the input signal period. We demonstrate the performance of the gating device, including dramatic improvements in quantitative measurements, in vitro using a motion simulator and in vivo using near-infrared fluorescence angiography of a beating pig heart. This gating device should help to enable the clinical translation of promising new optical imaging technologies. PMID:20059276
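
    The phase-locked window logic can be illustrated in software (the actual device is hardware-only); here delay and width are expressed as fractions of the input signal period, and the ECG numbers are illustrative:

```python
def in_gate(t, period, delay_frac, width_frac):
    """True if time t (same units as period) falls inside the acquisition window
    that opens delay_frac after each trigger and stays open for width_frac of
    the period."""
    phase = (t % period) / period
    return delay_frac <= phase < delay_frac + width_frac

# Mouse ECG at 1000 beats/min -> 60 ms period; gate open from 20% to 50% of each cycle
period_ms = 60.0
print(in_gate(75.0, period_ms, 0.20, 0.30))   # 75 ms -> phase 0.25 -> True
print(in_gate(115.0, period_ms, 0.20, 0.30))  # 115 ms -> phase ~0.92 -> False
```

    Camera frames would then be kept only when their exposure falls inside the gate, suppressing motion blur from cardiac or respiratory cycles.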

  4. Biomedical image processing.

    PubMed

    Huang, H K

    1981-01-01

    Biomedical image processing is a very broad field; it covers biomedical signal gathering, image forming, picture processing, and image display to medical diagnosis based on features extracted from images. This article reviews this topic in both its fundamentals and applications. In its fundamentals, some basic image processing techniques including outlining, deblurring, noise cleaning, filtering, search, classical analysis and texture analysis have been reviewed together with examples. The state-of-the-art image processing systems have been introduced and discussed in two categories: general purpose image processing systems and image analyzers. In order for these systems to be effective for biomedical applications, special biomedical image processing languages have to be developed. The combination of both hardware and software leads to clinical imaging devices. Two different types of clinical imaging devices have been discussed. The first is radiological imaging, which includes radiography, thermography, ultrasound, nuclear medicine and CT. Among these, thermography is the most noninvasive but is limited in application due to the low energy of its source. X-ray CT is excellent for static anatomical images and is moving toward the measurement of dynamic function, whereas nuclear imaging is moving toward organ metabolism and ultrasound is toward tissue physical characteristics. Heart imaging is one of the most interesting and challenging research topics in biomedical image processing; current methods, including the invasive technique of cineangiography, and the noninvasive ultrasound, nuclear medicine, transmission, and emission CT methodologies, have been reviewed. Two current federally funded research projects in heart imaging, the dynamic spatial reconstructor and the dynamic cardiac three-dimensional densitometer, should bring some fruitful results in the near future. The microscopic imaging technique is very different from the radiological imaging technique in the sense that

  5. Acquisition by Processing Theory: A Theory of Everything?

    ERIC Educational Resources Information Center

    Carroll, Susanne E.

    2004-01-01

    Truscott and Sharwood Smith (henceforth T&SS) propose a novel theory of language acquisition, "Acquisition by Processing Theory" (APT), designed to account for both first and second language acquisition, monolingual and bilingual speech perception and parsing, and speech production. This is a tall order. Like any theoretically ambitious…

  6. A Risk Management Model for the Federal Acquisition Process.

    DTIC Science & Technology

    1999-06-01

    risk management in the acquisition process. This research explains the Federal Acquisition Process and each of the 78 tasks to be completed by the CO...and examines the concepts of risk and risk management . This research culminates in the development of a model that identifies prevalent risks in the...contracting professionals is used to gather opinions, ideas, and practical applications of risk management in the acquisition process, and refine the model

  7. Apple Image Processing Educator

    NASA Technical Reports Server (NTRS)

    Gunther, F. J.

    1981-01-01

    A software system design is proposed and demonstrated with pilot-project software. The system permits the Apple II microcomputer to be used for personalized computer-assisted instruction in the digital image processing of LANDSAT images. The programs provide data input, menu selection, graphic and hard-copy displays, and both general and detailed instructions. The pilot-project results are considered to be successful indicators of the capabilities and limits of microcomputers for digital image processing education.

  8. Image Processing Software

    NASA Technical Reports Server (NTRS)

    1992-01-01

    To convert raw data into environmental products, the National Weather Service and other organizations use the Global 9000 image processing system marketed by Global Imaging, Inc. The company's GAE software package is an enhanced version of the TAE, developed by Goddard Space Flight Center to support remote sensing and image processing applications. The system can be operated in three modes and is combined with HP Apollo workstation hardware.

  9. Image processing mini manual

    NASA Technical Reports Server (NTRS)

    Matthews, Christine G.; Posenau, Mary-Anne; Leonard, Desiree M.; Avis, Elizabeth L.; Debure, Kelly R.; Stacy, Kathryn; Vonofenheim, Bill

    1992-01-01

    The intent is to provide an introduction to the image processing capabilities available at the Langley Research Center (LaRC) Central Scientific Computing Complex (CSCC). Various image processing software components are described. Information is given concerning the use of these components in the Data Visualization and Animation Laboratory at LaRC.

  10. Acquisition by Processing: A Modular Perspective on Language Development

    ERIC Educational Resources Information Center

    Truscott, John; Smith, Mike Sharwood

    2004-01-01

    The paper offers a model of language development, first and second, within a processing perspective. We first sketch a modular view of language, in which competence is embodied in the processing mechanisms. We then propose a novel approach to language acquisition (Acquisition by Processing Theory, or APT), in which development of the module occurs…

  11. Image Processing System

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Mallinckrodt Institute of Radiology (MIR) is using a digital image processing system which employs NASA-developed technology. MIR's computer system is the largest radiology system in the world. It is used in diagnostic imaging. Blood vessels are injected with x-ray dye, and the images which are produced indicate whether arteries are hardened or blocked. A computer program developed by Jet Propulsion Laboratory known as Mini-VICAR/IBIS was supplied to MIR by COSMIC. The program provides the basis for developing the computer imaging routines for data processing, contrast enhancement and picture display.

  12. Chemical Applications of a Programmable Image Acquisition System

    NASA Astrophysics Data System (ADS)

    Ogren, Paul J.; Henry, Ian; Fletcher, Steven E. S.; Kelly, Ian

    2003-06-01

    Image analysis is widely used in chemistry, both for rapid qualitative evaluations using techniques such as thin layer chromatography (TLC) and for quantitative purposes such as well-plate measurements of analyte concentrations or fragment-size determinations in gel electrophoresis. This paper describes a programmable system for image acquisition and processing that is currently used in the laboratories of our organic and physical chemistry courses. It has also been used in student research projects in analytical chemistry and biochemistry. The potential range of applications is illustrated by brief presentations of four examples: (1) using well-plate optical transmission data to construct a standard concentration absorbance curve; (2) the quantitative analysis of acetaminophen in Tylenol and acetylsalicylic acid in aspirin using TLC with fluorescence detection; (3) the analysis of electrophoresis gels to determine DNA fragment sizes and amounts; and, (4) using color change to follow reaction kinetics. The supplemental material in JCE Online contains information on two additional examples: deconvolution of overlapping bands in protein gel electrophoresis, and the recovery of data from published images or graphs. The JCE Online material also presents additional information on each example, on the system hardware and software, and on the data analysis methodology.
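
    Example (1), converting well-plate transmission data into a standard absorbance curve, can be sketched with the Beer-Lambert law and a least-squares line; the concentrations and intensities below are invented for illustration:

```python
import math

def absorbance(i_transmitted, i_incident):
    """Beer-Lambert: A = -log10(I / I0)."""
    return -math.log10(i_transmitted / i_incident)

def linear_fit(xs, ys):
    """Least-squares slope and intercept for A = m*c + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return m, my - m * mx

# Hypothetical standards: concentration (mM) vs. transmitted intensity (I0 = 1000)
conc = [0.5, 1.0, 2.0, 4.0]
trans = [794.3, 631.0, 398.1, 158.5]  # roughly A = 0.1, 0.2, 0.4, 0.8
A = [absorbance(t, 1000.0) for t in trans]
slope, intercept = linear_fit(conc, A)
print(round(slope, 3), round(intercept, 3))  # slope ~ 0.2, intercept ~ 0.0
```

    An unknown sample's concentration then follows from its measured absorbance as (A - intercept) / slope.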

  13. 29. Perimeter acquisition radar building room #318, data processing system ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    29. Perimeter acquisition radar building room #318, data processing system area; data processor maintenance and operations center, showing data processing consoles - Stanley R. Mickelsen Safeguard Complex, Perimeter Acquisition Radar Building, Limited Access Area, between Limited Access Patrol Road & Service Road A, Nekoma, Cavalier County, ND

  14. Image Processing Software

    NASA Astrophysics Data System (ADS)

    Bosio, M. A.

    1990-11-01

    A brief description of astronomical image processing software is presented. This software was developed on a Digital MicroVAX II computer system. DATA ANALYSIS - IMAGE PROCESSING

  15. An evaluation on CT image acquisition method for medical VR applications

    NASA Astrophysics Data System (ADS)

    Jang, Seong-wook; Ko, Junho; Yoo, Yon-sik; Kim, Yoonsang

    2017-02-01

    Recent medical virtual reality (VR) applications to minimize re-operations are being studied for improvements in surgical efficiency and reduction of operation error. The CT image acquisition method considering three-dimensional (3D) modeling is important for medical VR applications, because a realistic model of the actual human organ is required. However, research for medical VR applications has focused on 3D modeling techniques and the use of 3D models, and research on a CT image acquisition method that considers 3D modeling has not been reported. The conventional CT image acquisition method involves scanning a limited area around the lesion once or twice for the doctor's diagnosis. A medical VR application, however, requires CT images that consider patients' various postures and cover a wider area than the lesion: a wider area because diagnosing dyskinesia of the shoulder, pelvis, and leg requires comparing bilateral sides, and various postures because of their different effects on the musculoskeletal system. Therefore, in this paper, we perform a comparative experiment on CT images acquired considering image area (unilateral/bilateral) and patient posture (neutral/abducted). CT images are acquired from 10 patients, and the acquired images are evaluated based on the length per pixel and the morphological deviation. Finally, by comparing the experimental results, we evaluate the CT image acquisition method for medical VR applications.

  16. Methods in Astronomical Image Processing

    NASA Astrophysics Data System (ADS)

    Jörsäter, S.

    A Brief Introductory Note History of Astronomical Imaging Astronomical Image Data Images in Various Formats Digitized Image Data Digital Image Data Philosophy of Astronomical Image Processing Properties of Digital Astronomical Images Human Image Processing Astronomical vs. Computer Science Image Processing Basic Tools of Astronomical Image Processing Display Applications Calibration of Intensity Scales Calibration of Length Scales Image Re-shaping Feature Enhancement Noise Suppression Noise and Error Analysis Image Processing Packages: Design of AIPS and MIDAS AIPS MIDAS Reduction of CCD Data Bias Subtraction Clipping Preflash Subtraction Dark Subtraction Flat Fielding Sky Subtraction Extinction Correction Deconvolution Methods Rebinning/Combining Summary and Prospects for the Future
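
    The CCD reduction steps listed (bias subtraction, dark subtraction, flat fielding) can be sketched with NumPy; the frame values and the simple mean-normalized flat below are illustrative assumptions, with master bias/dark/flat frames taken as already combined:

```python
import numpy as np

def calibrate(raw, bias, dark, flat, exptime, dark_exptime):
    """Standard CCD reduction: subtract bias and exposure-scaled dark,
    then divide by the normalized flat field."""
    dark_scaled = (dark - bias) * (exptime / dark_exptime)  # dark current scales with time
    science = raw - bias - dark_scaled
    flat_corr = flat - bias
    flat_norm = flat_corr / flat_corr.mean()  # normalize so the flat has unit mean
    return science / flat_norm

raw  = np.array([[1100., 1210.], [1090., 1205.]])
bias = np.full((2, 2), 100.0)
dark = np.full((2, 2), 110.0)                       # 10 counts of dark per dark exposure
flat = np.array([[1100., 2100.], [1100., 2100.]])   # uneven illumination across columns
out = calibrate(raw, bias, dark, flat, exptime=2.0, dark_exptime=1.0)
print(out)
```

    Clipping, preflash subtraction, sky subtraction, and extinction correction from the outline above would follow the same frame-arithmetic pattern.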

  17. Design of area array CCD image acquisition and display system based on FPGA

    NASA Astrophysics Data System (ADS)

    Li, Lei; Zhang, Ning; Li, Tianting; Pan, Yue; Dai, Yuming

    2014-09-01

    With the development of science and technology, the CCD (charge-coupled device) has been widely applied in various fields and plays an important role in modern sensing systems; researching a real-time image acquisition and display scheme based on a CCD device therefore has great significance. This paper introduces an image data acquisition and display system for an area array CCD based on an FPGA. Several key technical challenges of the system are analyzed and solutions put forward. The FPGA works as the core processing unit in the system and controls the overall timing sequence. The ICX285AL area array CCD image sensor produced by SONY Corporation is used in the system. The FPGA drives the area array CCD; an analog front end (AFE) then processes the CCD image signal, including amplification, filtering, noise elimination, and CDS (correlated double sampling); and an AD9945 produced by ADI Corporation converts the analog signal to a digital signal. A Camera Link high-speed data transmission circuit was developed, the PC-side software for image acquisition was completed, and real-time display of images was realized. Practical testing indicates that the system is stable and reliable in image acquisition and control, and its indicators meet the actual project requirements.
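
    The correlated double sampling (CDS) step performed by the analog front end can be illustrated in software; the sample values are invented:

```python
def cds(reset_samples, video_samples):
    """Correlated double sampling: subtracting each pixel's reset level from its
    video level cancels the offset and kTC noise common to both samples."""
    return [v - r for r, v in zip(reset_samples, video_samples)]

# Per-pixel reset levels drift (offset noise), but the difference recovers the signal
reset = [500, 512, 498, 505]
video = [600, 612, 598, 705]   # the last pixel holds a brighter signal
print(cds(reset, video))       # -> [100, 100, 100, 200]
```

    In the real system this subtraction happens in the analog domain inside the AFE, before the AD9945 digitizes the result.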

  18. Electro-Optic Data Acquisition and Processing.

    DTIC Science & Technology

    Methods for the analysis of electro-optic relaxation data are discussed. Emphasis is on numerical methods using high-speed computers. A data acquisition system using a minicomputer for data manipulation is described. The relationship of the results obtained here to other possible uses is given. (Author)

  19. Dual Learning Processes in Interactive Skill Acquisition

    ERIC Educational Resources Information Center

    Fu, Wai-Tat; Anderson, John R.

    2008-01-01

    Acquisition of interactive skills involves the use of internal and external cues. Experiment 1 showed that when actions were interdependent, learning was effective with and without external cues in the single-task condition but was effective only with the presence of external cues in the dual-task condition. In the dual-task condition, actions…

  20. Implicit and Explicit Cognitive Processes in Incidental Vocabulary Acquisition

    ERIC Educational Resources Information Center

    Ender, Andrea

    2016-01-01

    Studies on vocabulary acquisition in second language learning have revealed that a large amount of vocabulary is learned without an overt intention, in other words, incidentally. This article investigates the relevance of different lexical processing strategies for vocabulary acquisition when reading a text for comprehension among 24 advanced…

  1. Image processing occupancy sensor

    DOEpatents

    Brackney, Larry J.

    2016-09-27

    A system and method of detecting occupants in a building automation system environment using image based occupancy detection and position determinations. In one example, the system includes an image processing occupancy sensor that detects the number and position of occupants within a space that has controllable building elements such as lighting and ventilation diffusers. Based on the position and location of the occupants, the system can finely control the elements to optimize conditions for the occupants, optimize energy usage, among other advantages.

  2. Collaborative Research and Development (CR&D) III Task Order 0090: Image Processing Framework: From Acquisition and Analysis to Archival Storage

    DTIC Science & Technology

    2013-05-01

    …loy in three and two dimensions using the Saltykov method." Scripta Materialia, Acta Materialia Inc., 66(8), 554-557. Approved for public…images and data. SUBJECT TERMS: DREAM3D, Finite Element Method (FEM), Integrated Computational Materials Science (ICME), Graphical Interface…Finite Element Method algorithms, data acquisition sensitivity studies, and linking material properties to material structure. The software is s…

  3. Quantum image processing?

    NASA Astrophysics Data System (ADS)

    Mastriani, Mario

    2017-01-01

    This paper presents a number of problems concerning the practical (real) implementation of the techniques known as quantum image processing. The most serious problem is the recovery of the outcomes after the quantum measurement, which, as demonstrated in this work, is equivalent to a noise measurement and is not considered in the literature on the subject. This is due to several factors: (1) a classical algorithm that uses Dirac's notation and is then coded in MATLAB does not constitute a quantum algorithm; (2) the literature emphasizes the internal representation of the image but says nothing about the classical-to-quantum and quantum-to-classical interfaces and how these are affected by decoherence; (3) the literature does not mention how to implement these proposed internal representations in a practical way (in the laboratory); (4) given that quantum image processing works with generic qubits, it logically requires measurements along all axes of the Bloch sphere; and (5) others. In contrast, the technique known as quantum Boolean image processing works exclusively with computational basis states (CBS). This methodology allows us to avoid the problem of quantum measurement, which alters the measured results except in the case of CBS. What has been said so far extends to quantum algorithms outside image processing as well.

  4. Image-processing pipelines: applications in magnetic resonance histology

    NASA Astrophysics Data System (ADS)

    Johnson, G. Allan; Anderson, Robert J.; Cook, James J.; Long, Christopher; Badea, Alexandra

    2016-03-01

    Image processing has become ubiquitous in imaging research—so ubiquitous that it is easy to lose track of how diverse this processing has become. The Duke Center for In Vivo Microscopy has pioneered the development of Magnetic Resonance Histology (MRH), which generates large multidimensional data sets that can easily reach into the tens of gigabytes. A series of dedicated image-processing workstations and associated software have been assembled to optimize each step of acquisition, reconstruction, post-processing, registration, visualization, and dissemination. This talk will describe the image-processing pipelines from acquisition to dissemination that have become critical to our everyday work.

  5. Effects of Orientation and Anisometry of Magnetic Resonance Imaging Acquisitions on Diffusion Tensor Imaging and Structural Connectomes

    PubMed Central

    Muñoz-Moreno, Emma; López-Gil, Xavier; Soria, Guadalupe

    2017-01-01

    Diffusion-weighted imaging (DWI) quantifies water molecule diffusion within tissues and is becoming an increasingly used technique. However, it is very challenging as correct quantification depends on many different factors, ranging from acquisition parameters to a long pipeline of image processing. In this work, we investigated the influence of voxel geometry on diffusion analysis, comparing different acquisition orientations as well as isometric and anisometric voxels. Diffusion-weighted images of one rat brain were acquired with four different voxel geometries (one isometric and three anisometric in different directions) and three different encoding orientations (coronal, axial and sagittal). Diffusion tensor scalar measurements, tractography and the brain structural connectome were analyzed for each of the 12 acquisitions. The acquisition direction with respect to the main magnetic field orientation affected the diffusion results. When the acquisition slice-encoding direction was not aligned with the main magnetic field, there were more artifacts and a lower signal-to-noise ratio that led to less anisotropic tensors (lower fractional anisotropic values), producing poorer quality results. The use of anisometric voxels generated statistically significant differences in the values of diffusion metrics in specific regions. It also elicited differences in tract reconstruction and in different graph metric values describing the brain networks. Our results highlight the importance of taking into account the geometric aspects of acquisitions, especially when comparing diffusion data acquired using different geometries. PMID:28118397

  6. Effects of Orientation and Anisometry of Magnetic Resonance Imaging Acquisitions on Diffusion Tensor Imaging and Structural Connectomes.

    PubMed

    Tudela, Raúl; Muñoz-Moreno, Emma; López-Gil, Xavier; Soria, Guadalupe

    2017-01-01

    Diffusion-weighted imaging (DWI) quantifies water molecule diffusion within tissues and is becoming an increasingly used technique. However, it is very challenging as correct quantification depends on many different factors, ranging from acquisition parameters to a long pipeline of image processing. In this work, we investigated the influence of voxel geometry on diffusion analysis, comparing different acquisition orientations as well as isometric and anisometric voxels. Diffusion-weighted images of one rat brain were acquired with four different voxel geometries (one isometric and three anisometric in different directions) and three different encoding orientations (coronal, axial and sagittal). Diffusion tensor scalar measurements, tractography and the brain structural connectome were analyzed for each of the 12 acquisitions. The acquisition direction with respect to the main magnetic field orientation affected the diffusion results. When the acquisition slice-encoding direction was not aligned with the main magnetic field, there were more artifacts and a lower signal-to-noise ratio that led to less anisotropic tensors (lower fractional anisotropic values), producing poorer quality results. The use of anisometric voxels generated statistically significant differences in the values of diffusion metrics in specific regions. It also elicited differences in tract reconstruction and in different graph metric values describing the brain networks. Our results highlight the importance of taking into account the geometric aspects of acquisitions, especially when comparing diffusion data acquired using different geometries.
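
    Fractional anisotropy (FA), the scalar metric whose reduction the authors report for misaligned acquisitions, is computed from the three eigenvalues of the diffusion tensor. A minimal numpy sketch (the example eigenvalues are illustrative assumptions, not the paper's data):

```python
import numpy as np

def fractional_anisotropy(evals):
    """Fractional anisotropy from the three diffusion-tensor eigenvalues."""
    evals = np.asarray(evals, dtype=float)
    md = evals.mean()                          # mean diffusivity
    num = np.sqrt(((evals - md) ** 2).sum())   # deviation from isotropy
    den = np.sqrt((evals ** 2).sum())
    return np.sqrt(1.5) * num / den if den > 0 else 0.0

# An isotropic tensor gives FA = 0; a strongly anisotropic one approaches 1.
print(fractional_anisotropy([1.0, 1.0, 1.0]))   # 0.0
print(fractional_anisotropy([1.7, 0.2, 0.2]))   # close to 0.87
```

Lower FA values, as reported for acquisitions whose slice-encoding direction is not aligned with the main field, correspond to eigenvalue triples closer to the isotropic case.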

  7. Automatic image acquisition processor and method

    DOEpatents

    Stone, W.J.

    1984-01-16

    A computerized method and point location system apparatus is disclosed for ascertaining the center of a primitive or fundamental object whose shape and approximate location are known. The technique involves obtaining an image of the object, selecting a trial center, and generating a locus of points having a predetermined relationship with the center. Such a locus of points could include a circle. The number of points overlying the object in each quadrant is obtained and the counts of these points per quadrant are compared. From this comparison, error signals are provided to adjust the relative location of the trial center. This is repeated until the trial center overlies the geometric center within the predefined accuracy limits.

  8. Automatic image acquisition processor and method

    DOEpatents

    Stone, William J.

    1986-01-01

    A computerized method and point location system apparatus is disclosed for ascertaining the center of a primitive or fundamental object whose shape and approximate location are known. The technique involves obtaining an image of the object, selecting a trial center, and generating a locus of points having a predetermined relationship with the center. Such a locus of points could include a circle. The number of points overlying the object in each quadrant is obtained and the counts of these points per quadrant are compared. From this comparison, error signals are provided to adjust the relative location of the trial center. This is repeated until the trial center overlies the geometric center within the predefined accuracy limits.
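
    The patent's scheme can be sketched in software: probe points in a known locus around a trial center are tested against the object, and the per-quadrant count imbalance supplies the error signals that nudge the trial center toward the geometric center. This is a hypothetical numpy illustration; the probe-disk locus, gain, and stopping rule are assumptions, not details taken from the patent:

```python
import numpy as np

def find_center(mask, trial, radius, gain=1.5, iters=100):
    """Iteratively move a trial center until probe-point counts balance."""
    # Probe locus: all integer offsets within a disk of the object's
    # known radius around the trial center (an assumed choice of locus).
    oy, ox = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    keep = (oy ** 2 + ox ** 2) <= radius ** 2
    oy, ox = oy[keep], ox[keep]
    cy, cx = map(float, trial)
    for _ in range(iters):
        ys = np.clip((cy + oy).round().astype(int), 0, mask.shape[0] - 1)
        xs = np.clip((cx + ox).round().astype(int), 0, mask.shape[1] - 1)
        on = mask[ys, xs].astype(float)   # probe points overlying the object
        n = on.sum()
        if n == 0:
            break
        # Error signals: imbalance of object coverage between halves.
        dx = gain * radius * (on[ox > 0].sum() - on[ox < 0].sum()) / n
        dy = gain * radius * (on[oy > 0].sum() - on[oy < 0].sum()) / n
        cy, cx = cy + dy, cx + dx
        if abs(dx) < 1e-2 and abs(dy) < 1e-2:
            break
    return cy, cx

# Binary image of a disk of radius 20 centered at (50, 50); start off-center.
yy, xx = np.mgrid[0:100, 0:100]
disk = (yy - 50.0) ** 2 + (xx - 50.0) ** 2 <= 20 ** 2
print(find_center(disk, trial=(44.0, 57.0), radius=20))  # converges near (50, 50)
```

If the trial center sits to the right of the true center, more probe points on its left half overlie the object, so the error signal is negative and the trial center moves left, as in the patent's comparison of quadrant counts.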

  9. The solar-image acquisition system at Tor Vergata University.

    NASA Astrophysics Data System (ADS)

    Berrilli, F.; Cantarano, S.; Egidi, A.

    1995-06-01

    Describes an image acquisition system realized as part of an apparatus, built in collaboration with the Arcetri Astrophysical Observatory in Florence, designed to record high-spectral-resolution solar images in the visible part of the spectrum. The system is based on a 512×512 Thomson CCD type THX31159 and on a 486 CPU personal computer running under MS-DOS. The electronics for driving the sensor and for the amplification and conditioning of the video signal have been designed and built in the laboratory, while the signal A/D conversion and image presentation are performed using commercial boards.

  10. Applications of digital image acquisition in anthropometry

    NASA Technical Reports Server (NTRS)

    Woolford, B.; Lewis, J. L.

    1981-01-01

    A description is given of a video kinesimeter, a device for the automatic real-time collection of kinematic and dynamic data. Based on the detection of a single bright spot by three TV cameras, the system provides automatic real-time recording of three-dimensional position and force data. It comprises three cameras, two incandescent lights, a voltage comparator circuit, a central control unit, and a mass storage device. The control unit determines the signal threshold for each camera before testing, sequences the lights, synchronizes and analyzes the scan voltages from the three cameras, digitizes force from a dynamometer, and codes the data for transmission to a floppy disk for recording. Two of the three cameras face each other along the 'X' axis; the third camera, which faces the center of the line between the first two, defines the 'Y' axis. An image from the 'Y' camera and either 'X' camera is necessary for determining the three-dimensional coordinates of the point.
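
    For ideal, calibrated cameras in this orthogonal arrangement, combining the two views into three-dimensional coordinates reduces to reading one axis pair from each image. A toy sketch (hypothetical metric image coordinates; a real system needs the calibration and thresholding steps the article describes):

```python
def point_from_cameras(x_cam_img, y_cam_img):
    """Combine image-plane coordinates from two orthogonal cameras into a
    3-D point. An 'X' camera looks along the X axis and reports (y, z);
    the 'Y' camera looks along the Y axis and reports (x, z)."""
    y, z1 = x_cam_img
    x, z2 = y_cam_img
    return (x, y, (z1 + z2) / 2.0)  # average the redundant z estimate

# A bright spot seen at (y=2.0, z=5.0) by an X camera and (x=3.0, z=5.2)
# by the Y camera maps to roughly (3.0, 2.0, 5.1).
print(point_from_cameras((2.0, 5.0), (3.0, 5.2)))
```

This mirrors the article's statement that an image from the 'Y' camera plus either 'X' camera suffices: each view fixes two coordinates, and the shared z coordinate is redundant.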

  11. Processing Of Binary Images

    NASA Astrophysics Data System (ADS)

    Hou, H. S.

    1985-07-01

    An overview of the recent progress in the area of digital processing of binary images in the context of document processing is presented here. The topics covered include input scan, adaptive thresholding, halftoning, scaling and resolution conversion, data compression, character recognition, electronic mail, digital typography, and output scan. Emphasis has been placed on illustrating the basic principles rather than descriptions of a particular system. Recent technology advances and research in this field are also mentioned.

  12. Image Processing for Teaching.

    ERIC Educational Resources Information Center

    Greenberg, R.; And Others

    1993-01-01

    The Image Processing for Teaching project provides a powerful medium to excite students about science and mathematics, especially children from minority groups and others whose needs have not been met by traditional teaching. Using professional-quality software on microcomputers, students explore a variety of scientific data sets, including…

  13. Image processing and reconstruction

    SciTech Connect

    Chartrand, Rick

    2012-06-15

    This talk will examine some mathematical methods for image processing and the solution of underdetermined, linear inverse problems. The talk will have a tutorial flavor, mostly accessible to undergraduates, while still presenting research results. The primary approach is the use of optimization problems. We will find that relaxing the usual assumption of convexity will give us much better results.

  14. Image-Processing Program

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Hull, D. R.

    1994-01-01

    IMAGEP manipulates digital image data to effect various processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines. Within the subroutines are sub-subroutines, also selected via keyboard. The algorithms have possible scientific, industrial, and biomedical applications in the study of flows in materials, the analysis of steels and ores, and pathology, respectively.

  15. Image reconstruction for synchronous data acquisition in fluorescence molecular tomography.

    PubMed

    Zhang, Xuanxuan; Liu, Fei; Zuo, Siming; Bai, Jing; Luo, Jianwen

    2015-01-01

    The present full-angle, free-space fluorescence molecular tomography (FMT) system uses a step-by-step strategy to acquire measurements, which consumes time for both the rotation of the object and the integration of the charge-coupled device (CCD) camera. Completing the integration during the rotation is a more time-efficient strategy called synchronous data acquisition. However, the positions of sources and detectors in this strategy are not stationary, which is not taken into account in the conventional reconstruction algorithm. In this paper we propose a reconstruction algorithm based on the finite element method (FEM) to overcome this problem. Phantom experiments were carried out to validate the performance of the algorithm. The results show that, compared with the conventional reconstruction algorithm used in the step-by-step data acquisition strategy, the proposed algorithm can reconstruct images with more accurate location data and lower relative errors when used with the synchronous data acquisition strategy.

  16. Acquisition, Sharing and Processing of Large Datasets for Strain Imaging: an Example of an Indented Ni3Al/Mo Composite

    SciTech Connect

    McIntyre, Stewart; Barabash, Rozaliya; Qin, Jinhui; Kunz, Martin; Tamura, Nobumichi; Bei, Hongbin

    2013-01-01

    The local effects of stress from a mechanical indentation have been studied on a Ni3Al single crystal containing submicron inclusions of molybdenum fibers. Polychromatic X-ray Microscopy (PXM) was used to measure elastic and plastic deformations near the indents. Analysis of freshly acquired massive sets of PXM data has been carried out over the Science Studio network using the parallel processing software FOXMAS. This network and the FOXMAS software have greatly improved the efficiency of the data processing task. The analysis was successfully applied to study the lattice orientation distribution and strain tensor components for both the Ni3Al and Mo phases, particularly around 8 indents patterned on the longitudinal section of the alloy.

  17. Image Processing Research

    DTIC Science & Technology

    1975-09-30

    Technical Journal, Vol. 36, pp. 653-709, May 1957. ... Image Restoration and Enhancement Projects: image restoration and image enhancement are ... where σ is the noise energy and I is an identity matrix. Color Image Scanner Calibration: a common problem in the ... line of the image ... The statistics of the process N(k) can now be given in terms of the statistics of ...

  18. Image processing techniques for acoustic images

    NASA Astrophysics Data System (ADS)

    Murphy, Brian P.

    1991-06-01

    The primary goal of this research is to test the effectiveness of various image processing techniques applied to acoustic images generated in MATLAB. The simulated acoustic images have the same characteristics as those generated by a computer model of a high resolution imaging sonar. Edge detection and segmentation are the two image processing techniques discussed in this study. The two methods tested are modified versions of Kalman filtering and median filtering.

  19. Image-plane processing of visual information

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.

    1984-01-01

    Shannon's theory of information is used to optimize the optical design of sensor-array imaging systems which use neighborhood image-plane signal processing for enhancing edges and compressing dynamic range during image formation. The resultant edge-enhancement, or band-pass-filter, response is found to be very similar to that of human vision. Comparisons of traits in human vision with results from information theory suggest that: (1) Image-plane processing, like preprocessing in human vision, can improve visual information acquisition for pattern recognition when resolving power, sensitivity, and dynamic range are constrained. Improvements include reduced sensitivity to changes in light levels, reduced signal dynamic range, reduced data transmission and processing, and reduced aliasing and photosensor noise degradation. (2) Information content can be an appropriate figure of merit for optimizing the optical design of imaging systems when visual information is acquired for pattern recognition. The design trade-offs involve spatial response, sensitivity, and sampling interval.

  20. Retinomorphic image processing.

    PubMed

    Ghosh, Kuntal; Bhaumik, Kamales; Sarkar, Sandip

    2008-01-01

    The present work is aimed at understanding and explaining some of the aspects of visual signal processing at the retinal level while exploiting the same towards the development of some simple techniques in the domain of digital image processing. Classical studies on retinal physiology revealed the nature of contrast sensitivity of the receptive field of bipolar or ganglion cells, which lie in the outer and inner plexiform layers of the retina. To explain these observations, a difference of Gaussian (DOG) filter was suggested, which was subsequently modified to a Laplacian of Gaussian (LOG) filter for computational ease in handling two-dimensional retinal inputs. To date almost all image processing algorithms used in various branches of science and engineering have followed LOG or one of its variants. Recent observations in retinal physiology however, indicate that the retinal ganglion cells receive input from a larger area than the classical receptive fields. We have proposed an isotropic model for the non-classical receptive field of the retinal ganglion cells, corroborated from these recent observations, by introducing higher order derivatives of Gaussian expressed as linear combination of Gaussians only. In digital image processing, this provides a new mechanism of edge detection on one hand and image half-toning on the other. It has also been found that living systems may sometimes prefer to "perceive" the external scenario by adding noise to the received signals in the pre-processing level for arriving at better information on light and shade in the edge map. The proposed model also provides an explanation for many brightness-contrast illusions hitherto unexplained not only by the classical isotropic model but also by some other Gestalt and Constructivist models or by non-isotropic multi-scale models. The proposed model is easy to implement both in the analog and digital domain. A scheme for implementation in the analog domain generates a new silicon retina
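
    The classical DOG receptive-field model discussed above is easy to sketch: a narrow excitatory center Gaussian minus a wide inhibitory surround Gaussian, applied here to a 1-D step edge. The sigmas and kernel size are illustrative assumptions, not values from the paper:

```python
import numpy as np

def gaussian_kernel(sigma, size):
    """Normalized 1-D Gaussian kernel of odd length `size`."""
    ax = np.arange(size) - size // 2
    g = np.exp(-ax ** 2 / (2.0 * sigma ** 2))
    return g / g.sum()

def dog_1d(signal, sigma_c=1.0, sigma_s=3.0, size=21):
    """Difference-of-Gaussians: narrow excitatory center minus wide
    inhibitory surround, the classical receptive-field model."""
    center = np.convolve(signal, gaussian_kernel(sigma_c, size), mode="same")
    surround = np.convolve(signal, gaussian_kernel(sigma_s, size), mode="same")
    return center - surround

# On a step edge the response is ~0 in flat regions, negative just before
# the transition and positive just after it -- the center-surround
# edge signature that underlies DOG-based edge detection.
step = np.r_[np.zeros(50), np.ones(50)]
resp = dog_1d(step)
```

The paper's extension replaces this pair with higher-order Gaussian derivatives expressed as linear combinations of Gaussians, but the center-surround mechanism shown here is the starting point.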

  1. Semi-automated Image Processing for Preclinical Bioluminescent Imaging

    PubMed Central

    Slavine, Nikolai V; McColl, Roderick W

    2015-01-01

    Objective Bioluminescent imaging is a valuable noninvasive technique for investigating tumor dynamics and specific biological molecular events in living animals to better understand the effects of human disease in animal models. The purpose of this study was to develop and test automated methods for bioluminescence image processing, from data acquisition to obtaining 3D images. Methods In order to optimize this procedure, a semi-automated image processing approach within a multi-modality image handling environment was developed. To identify a bioluminescent source location and strength we used the light flux detected on the surface of the imaged object by CCD cameras. For phantom calibration tests and object surface reconstruction we used the MLEM algorithm. For internal bioluminescent sources we used the diffusion approximation, balancing the internal and external intensities on the boundary of the medium; after determining an initial-order approximation for the photon fluence, we applied a novel iterative deconvolution method to obtain the final reconstruction result. Results We find that the reconstruction techniques successfully used the depth-dependent light transport approach and semi-automated image processing to provide a realistic 3D model of the lung tumor. Our image processing software can optimize and decrease the time of volumetric imaging and quantitative assessment. Conclusion The data obtained from light phantom and lung mouse tumor images demonstrate the utility of the image reconstruction algorithms and the semi-automated approach for the bioluminescent image processing procedure. We suggest that the developed image processing approach can be applied to preclinical imaging studies: characterizing tumor growth, identifying metastases, and potentially determining the effectiveness of cancer treatment. PMID:26618187
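
    Iterative deconvolution of the kind mentioned can be illustrated with a 1-D Richardson-Lucy (MLEM-style) sketch. This is a generic textbook scheme with invented toy data, not the authors' algorithm:

```python
import numpy as np

def richardson_lucy(observed, psf, iters=200):
    """MLEM-style (Richardson-Lucy) iterative deconvolution:
    multiplicative updates that preserve non-negativity."""
    est = np.full_like(observed, observed.mean())
    psf_flip = psf[::-1]                      # adjoint (correlation) kernel
    for _ in range(iters):
        blurred = np.convolve(est, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        est = est * np.convolve(ratio, psf_flip, mode="same")
    return est

# Blur a two-spike source with a small normalized PSF, then recover it.
psf = np.array([0.05, 0.25, 0.4, 0.25, 0.05])
src = np.zeros(40)
src[12] = 1.0
src[25] = 0.5
obs = np.convolve(src, psf, mode="same")
rec = richardson_lucy(obs, psf)
```

The multiplicative form keeps the estimate non-negative, which is why MLEM-family updates are popular for photon-count data such as bioluminescence.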

  2. FY 79 Software Acquisition Process Model Task. Revision 1

    DTIC Science & Technology

    1980-07-01

    This final report on the FY 79 Project 5220 Software Acquisition Process Model Task (522F) presents the approach taken to process model definition...plan for their incorporation and application in successive process model versions. The report contains diagrams that represent the Full-Scale

  3. Input and Input Processing in Second Language Acquisition.

    ERIC Educational Resources Information Center

    Alcon, Eva

    1998-01-01

    Analyzes second-language learners' processing of linguistic data within the target language, focusing on input and intake in second-language acquisition and factors and cognitive processes that affect input processing. Input factors include input simplification, input enhancement, and interactional modifications. Individual learner differences…

  4. Image processing technology

    SciTech Connect

    Van Eeckhout, E.; Pope, P.; Balick, L.

    1996-07-01

    This is the final report of a two-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The primary objective of this project was to advance image processing and visualization technologies for environmental characterization. This was effected by developing and implementing analyses of remote sensing data from satellite and airborne platforms, and demonstrating their effectiveness in visualization of environmental problems. Many sources of information were integrated as appropriate using geographic information systems.

  5. Smartphone Image Acquisition During Postmortem Monocular Indirect Ophthalmoscopy.

    PubMed

    Lantz, Patrick E; Schoppe, Candace H; Thibault, Kirk L; Porter, William T

    2016-01-01

    The medical usefulness of smartphones continues to evolve as third-party applications exploit and expand on the smartphones' interface and capabilities. This technical report describes smartphone still-image capture techniques and video-sequence recording capabilities during postmortem monocular indirect ophthalmoscopy. Using these devices and techniques, practitioners can create photographic documentation of fundal findings, clinically and at autopsy, without the expense of a retinal camera. Smartphone image acquisition of fundal abnormalities can promote ophthalmological telemedicine--especially in regions or countries with limited resources--and facilitate prompt, accurate, and unbiased documentation of retinal hemorrhages in infants and young children.

  6. Research of aerial imaging spectrometer data acquisition technology based on USB 3.0

    NASA Astrophysics Data System (ADS)

    Huang, Junze; Wang, Yueming; He, Daogang; Yu, Yanan

    2016-11-01

    With the emergence of UAV (unmanned aerial vehicle) platforms for aerial imaging spectrometers, the design of the aerial imaging spectrometer DAS (data acquisition system) faces new challenges. Due to platform limitations and other factors, the DAS must be compact, lightweight, low-cost and universal. Traditional aerial imaging spectrometer DAS designs based on PCIe are expensive, bulky, non-universal and do not support plug-and-play, so they cannot meet the needs of promoting and applying aerial imaging spectrometers. To solve these problems, the new data acquisition scheme is based on a USB 3.0 interface, which can provide a compact, lightweight, low-cost and universal solution thanks to its forward-looking technology advantages. The USB 3.0 theoretical transfer rate is up to 5 Gbps, and the GPIF programming interface achieves an effective theoretical data bandwidth of 3.2 Gbps, which fully meets the data transmission rate needs of the aerial imaging spectrometer. The scheme uses the slave FIFO asynchronous data transmission mode between the FPGA and the USB3014 interface chip. First, the system collects spectral data from the TLK2711 high-speed serial interface chip. The FPGA then buffers the data in DDR2 after ping-pong processing. Finally, the USB3014 interface chip transmits the data via automatic DMA and uploads it to the PC over a USB 3.0 cable. During the manufacture of the aerial imaging spectrometer, the DAS can perform image acquisition, transmission, storage and display, providing the necessary test and detection functions for the aerial imaging spectrometer. Tests show that the system performs stably with no data loss; the average transmission speed and SSD write speed stabilize at 1.28 Gbps. Consequently, this data acquisition system can meet the application requirements of the aerial imaging spectrometer.
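
    The ping-pong buffering mentioned above can be sketched in software: two buffers alternate between a producer that fills them and a consumer that drains them, so acquisition never stalls on the slower side. This is a hypothetical Python illustration of the pattern only; the actual system implements it in FPGA logic, and `read_frame`/`handle_frame` are invented stand-ins for the hardware interfaces:

```python
import queue
import threading

def ping_pong_acquire(read_frame, handle_frame, n_frames, buf_size):
    """Double-buffer (ping-pong) pipeline: one buffer fills while the
    other drains, with queues handing buffers back and forth."""
    free = queue.Queue()
    full = queue.Queue()
    for _ in range(2):                    # exactly two buffers: ping and pong
        free.put(bytearray(buf_size))

    def producer():
        for _ in range(n_frames):
            buf = free.get()              # wait for a drained buffer
            read_frame(buf)               # fill it with sensor data
            full.put(buf)
        full.put(None)                    # end-of-stream marker

    t = threading.Thread(target=producer)
    t.start()
    while (buf := full.get()) is not None:
        handle_frame(bytes(buf))          # copy out, then recycle the buffer
        free.put(buf)
    t.join()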

  7. Finding Discipline in an Agile Acquisition Process

    DTIC Science & Technology

    2011-05-18

    of technology to deployment. Documentation of processes, with compliance audits ensuring that processes are followed. Financial performance...deployment. Deltas (use case deferrals, shortfalls, test deficiencies) are in the domain-relevant language of end users and decision makers, which avoids...Bottom Line: When we speak of discipline, we are advocating the creation of a more disciplined mechanism (structures + processes) to describe user

  8. Parallel asynchronous systems and image processing algorithms

    NASA Technical Reports Server (NTRS)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision system type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.

  9. Compressive Video Acquisition, Fusion and Processing

    DTIC Science & Technology

    2010-12-14

    different views of the independent motions of 2 toy koalas along individual 1-D paths, yielding a 2-D combined parameter space. This data suffers...from real-world artifacts such as fluctuations in illumination conditions and variations in the pose of the koalas; further, the koalas occlude one...Figure 38 (Cameras 1-4; raw images vs. random projections; joint manifold): (top) Sample images of 2 koalas moving along

  10. scikit-image: image processing in Python

    PubMed Central

    Schönberger, Johannes L.; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D.; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org. PMID:25024921

  11. scikit-image: image processing in Python.

    PubMed

    van der Walt, Stéfan; Schönberger, Johannes L; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.

  12. Image processing in planetology

    NASA Astrophysics Data System (ADS)

    Fulchignoni, M.; Picchiotti, A.

    The authors summarize the state of the art in the field of planetary image processing in terms of available data, required procedures and possible improvements. Rather than a technical description of the adopted algorithms, which are considered the normal background of any research activity dealing with the interpretation of planetary data, the authors outline the advances in planetology achieved as a consequence of the availability of better data and more sophisticated hardware. An overview of the available data base and of the organizational efforts to make the data accessible and updated constitutes a valuable reference for those interested in getting information. A short description of the processing sequence, illustrated by an example which shows the quality of the obtained products and the improvement at each successive step of the processing procedure, gives an idea of the possible uses of this kind of information.

  13. Patient-Adaptive Reconstruction and Acquisition in Dynamic Imaging with Sensitivity Encoding (PARADISE)

    PubMed Central

    Sharif, Behzad; Derbyshire, J. Andrew; Faranesh, Anthony Z.; Bresler, Yoram

    2010-01-01

    MR imaging of the human heart without explicit cardiac synchronization promises to extend the applicability of cardiac MR to a larger patient population and potentially expand its diagnostic capabilities. However, conventional non-gated imaging techniques typically suffer from low image quality or inadequate spatio-temporal resolution and fidelity. Patient-Adaptive Reconstruction and Acquisition in Dynamic Imaging with Sensitivity Encoding (PARADISE) is a highly-accelerated non-gated dynamic imaging method that enables artifact-free imaging with high spatio-temporal resolutions by utilizing novel computational techniques to optimize the imaging process. In addition to using parallel imaging, the method gains acceleration from a physiologically-driven spatio-temporal support model; hence, it is doubly accelerated. The support model is patient-adaptive, i.e., its geometry depends on dynamics of the imaged slice, e.g., subject’s heart-rate and heart location within the slice. The proposed method is also doubly adaptive as it adapts both the acquisition and reconstruction schemes. Based on the theory of time-sequential sampling, the proposed framework explicitly accounts for speed limitations of gradient encoding and provides performance guarantees on achievable image quality. The presented in-vivo results demonstrate the effectiveness and feasibility of the PARADISE method for high resolution non-gated cardiac MRI during a short breath-hold. PMID:20665794

  14. New developments in electron microscopy for serial image acquisition of neuronal profiles.

    PubMed

    Kubota, Yoshiyuki

    2015-02-01

    Recent developments in electron microscopy largely automate the continuous acquisition of serial electron micrographs (EMGs), previously achieved by laborious manual serial ultrathin sectioning using an ultramicrotome and ultrastructural image capture process with transmission electron microscopy. The new systems cut thin sections and capture serial EMGs automatically, allowing for acquisition of large data sets in a reasonably short time. The new methods are focused ion beam/scanning electron microscopy, ultramicrotome/serial block-face scanning electron microscopy, automated tape-collection ultramicrotome/scanning electron microscopy and transmission electron microscope camera array. In this review, their positive and negative aspects are discussed.

  15. Auditory Processing Disorders: Acquisition and Treatment

    ERIC Educational Resources Information Center

    Moore, David R.

    2007-01-01

    Auditory processing disorder (APD) describes a mixed and poorly understood listening problem characterised by poor speech perception, especially in challenging environments. APD may include an inherited component, and this may be major, but studies reviewed here of children with long-term otitis media with effusion (OME) provide strong evidence…

  16. Acquisitions. ERIC Processing Manual, Section II.

    ERIC Educational Resources Information Center

    Sundstrom, Grace, Ed.

    Rules and guidelines are provided for the process of acquiring documents to be considered for inclusion in the ERIC database. The differing responsibilities of the Government, the ERIC Clearinghouses, and the ERIC Facility are delineated. The various methods by which documentary material can be obtained are described and preferences outlined.…

  17. Image Processing Diagnostics: Emphysema

    NASA Astrophysics Data System (ADS)

    McKenzie, Alex

    2009-10-01

    Currently the computerized tomography (CT) scan can detect emphysema sooner than traditional x-rays, but other tests are required to measure more accurately the amount of affected lung. CT scan images show clearly if a patient has emphysema, but visual inspection alone cannot quantify the degree of the disease, as it appears merely as subtle, barely distinct, dark spots on the lung. Our goal is to create a software plug-in that interfaces with existing open source medical imaging software, to automate the process of accurately diagnosing and determining emphysema severity levels in patients. This will be accomplished by performing a number of statistical calculations using data taken from CT scan images of several patients representing a wide range of severity of the disease. These analyses include an examination of the deviation from a normal distribution curve to determine skewness, a commonly used statistical parameter. Our preliminary results show that this method of assessment appears to be more accurate and robust than currently utilized methods, which involve looking at percentages of radiodensities in air passages of the lung.
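
    The skewness statistic described—the deviation of the radiodensity distribution from a symmetric normal curve—is the third standardized moment. A stdlib-only sketch on toy values, not CT data:

```python
def skewness(values):
    """Sample skewness: third standardized moment of a distribution.
    Symmetric data gives ~0; a long right tail gives a positive value."""
    n = len(values)
    mean = sum(values) / n
    m2 = sum((v - mean) ** 2 for v in values) / n   # variance
    m3 = sum((v - mean) ** 3 for v in values) / n   # third central moment
    return m3 / m2 ** 1.5

print(skewness([1, 2, 3, 4, 5]))        # 0.0 (symmetric)
print(skewness([1, 1, 1, 2, 10]) > 0)   # True (right-tailed)
```

Applied per-voxel to Hounsfield-unit histograms of the lung, a shift in this statistic is the kind of deviation-from-normality measure the abstract proposes for grading severity.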

  18. An Auditable Performance Based Software Acquisition Process

    DTIC Science & Technology

    2010-04-28

    All Rights Reserved SPG SSTC 2010. Inspections - Peer Reviews • Over time, each term has become ambiguous • Many times the two terms are used...interchangeably. Stewart-Priven believe: • Inspections are a rigorous form of Peer Reviews • Peer Reviews are not necessarily Inspections – Peer ...Inspection tool reports for process conformance and Computerized Inspection Tools. 2. Perform gap analysis and map the project's Inspection (or Peer-Review) capabilities

  19. Budgeting and Acquisition Business Process Reform

    DTIC Science & Technology

    2007-11-07

    reform issues . He has authored more than one hundred journal articles and book chapters on topics including national defense budgeting and policy...programming and budgeting cycles, while still preserving the decisions made in the on-year cycle through the off-year by limiting reconsideration of...POM were eliminated and replaced by a process of longer-term budgeting. In traditional budgeting, budget submitting offices (BSOs) have to answer

  20. Computer image processing and recognition

    NASA Technical Reports Server (NTRS)

    Hall, E. L.

    1979-01-01

    A systematic introduction to the concepts and techniques of computer image processing and recognition is presented. Consideration is given to such topics as image formation and perception; computer representation of images; image enhancement and restoration; reconstruction from projections; digital television, encoding, and data compression; scene understanding; scene matching and recognition; and processing techniques for linear systems.

  1. Image processing and recognition for biological images

    PubMed Central

    Uchida, Seiichi

    2013-01-01

    This paper reviews image processing and pattern recognition techniques that are useful for analyzing bioimages. Although the paper does not provide their technical details, it makes it possible to grasp their main tasks and the typical tools for handling them. Image processing is a large research area concerned with improving the visibility of an input image and acquiring valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition is the technique of classifying an input image into one of a set of predefined classes, and it is also a large research area. This paper overviews its two main modules, namely the feature extraction module and the classification module. Throughout the paper, it is emphasized that bioimages are a very difficult target for even state-of-the-art image processing and pattern recognition techniques because of noise, deformations, etc. This paper is expected to serve as a tutorial guide bridging biology and image processing researchers for further collaboration on such a difficult target. PMID:23560739
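    Two of the tasks listed above, gray-level transformation and binarization, can be illustrated in a few lines. This is a generic NumPy sketch, not tied to any particular bioimage toolchain; the toy image is invented:

```python
import numpy as np

def stretch_contrast(img):
    """Gray-level transformation: linearly rescale intensities to [0, 255]."""
    img = img.astype(float)
    lo, hi = img.min(), img.max()
    return (255 * (img - lo) / (hi - lo)).astype(np.uint8)

def binarize(img, threshold=None):
    """Global binarization; defaults to the mean intensity as threshold."""
    if threshold is None:
        threshold = img.mean()
    return (img > threshold).astype(np.uint8)

# Toy 'bioimage': dim background with one brighter blob.
img = np.full((8, 8), 40, dtype=np.uint8)
img[2:5, 2:5] = 90

stretched = stretch_contrast(img)   # blob mapped to 255, background to 0
mask = binarize(stretched)          # foreground mask of the 3x3 blob
```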

  2. Image processing and recognition for biological images.

    PubMed

    Uchida, Seiichi

    2013-05-01

    This paper reviews image processing and pattern recognition techniques that are useful for analyzing bioimages. Although the paper does not provide their technical details, it makes it possible to grasp their main tasks and the typical tools for handling them. Image processing is a large research area concerned with improving the visibility of an input image and acquiring valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition is the technique of classifying an input image into one of a set of predefined classes, and it is also a large research area. This paper overviews its two main modules, namely the feature extraction module and the classification module. Throughout the paper, it is emphasized that bioimages are a very difficult target for even state-of-the-art image processing and pattern recognition techniques because of noise, deformations, etc. This paper is expected to serve as a tutorial guide bridging biology and image processing researchers for further collaboration on such a difficult target.

  3. Smart Image Enhancement Process

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J. (Inventor); Rahman, Zia-ur (Inventor); Woodell, Glenn A. (Inventor)

    2012-01-01

    Contrast and lightness measures are used to first classify the image as either non-turbid or turbid. If turbid, the original image is enhanced to generate a first enhanced image. If non-turbid, the original image is classified in terms of a merged contrast/lightness score based on the contrast and lightness measures. The non-turbid image is enhanced to generate a second enhanced image when a poor contrast/lightness score is associated therewith. When the second enhanced image has a poor contrast/lightness score associated therewith, this image is enhanced to generate a third enhanced image. A sharpness measure is computed for one image that is selected from (i) the non-turbid image, (ii) the first enhanced image, (iii) the second enhanced image when a good contrast/lightness score is associated therewith, and (iv) the third enhanced image. If the selected image is not sharp, it is sharpened to generate a sharpened image. The final image is selected from the selected image and the sharpened image.
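    The selection flow described above can be sketched as a decision procedure. The classifier and enhancement functions below are placeholder stand-ins for the patented contrast/lightness and sharpness measures, supplied by the caller:

```python
def select_final_image(img, is_turbid, score, enhance, sharpness_ok, sharpen):
    """Sketch of the enhancement-selection flow; all measures are
    caller-supplied stand-ins, not the patent's actual algorithms."""
    if is_turbid(img):
        candidate = enhance(img)                  # first enhanced image
    else:
        candidate = img
        if score(candidate) == "poor":
            candidate = enhance(candidate)        # second enhanced image
            if score(candidate) == "poor":
                candidate = enhance(candidate)    # third enhanced image
    if not sharpness_ok(candidate):
        candidate = sharpen(candidate)            # sharpened image
    return candidate

# Toy demo: 'images' are integers, enhancement adds 1, sharpening adds 10.
score = lambda i: "poor" if i < 2 else "good"
result = select_final_image(0, lambda i: False, score,
                            lambda i: i + 1, lambda i: i >= 2, lambda i: i + 10)
```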

  4. Digital acquisition system for high-speed 3-D imaging

    NASA Astrophysics Data System (ADS)

    Yafuso, Eiji

    1997-11-01

    High-speed digital three-dimensional (3-D) imagery is possible using multiple independent charge-coupled device (CCD) cameras with sequentially triggered acquisition and individual field storage capability. The system described here utilizes sixteen independent cameras, providing versatility in configuration and image acquisition. By aligning the cameras in nearly coincident lines-of-sight, a sixteen-frame two-dimensional (2-D) sequence can be captured. The delays can be individually adjusted to yield a greater number of acquired frames during the more rapid segments of the event. Additionally, individual integration periods may be adjusted to ensure adequate radiometric response while minimizing image blur. An alternative alignment and triggering scheme arranges the cameras into two angularly separated banks of eight cameras each. By simultaneously triggering correlated stereo pairs, an eight-frame sequence of stereo images may be captured. In the first alignment scheme the camera lines-of-sight cannot be made precisely coincident. Thus representation of the data as a monocular sequence introduces the issue of independent camera coordinate registration with the real scene. This issue arises more significantly using the stereo pair method to reconstruct quantitative 3-D spatial information of the event as a function of time. The principal development here will be the derivation and evaluation of a solution transform and its inverse for the digital data which will yield a 3-D spatial mapping as a function of time.
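    Although the paper derives its own solution transform, the underlying stereo geometry for a rectified pair reduces to the standard relation Z = fB/d. The sketch below uses invented camera parameters, not values from the system described:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic rectified-stereo relation: depth Z = f * B / d,
    where f is focal length in pixels, B the camera baseline in
    meters and d the horizontal disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A point imaged 8 px apart by cameras 0.5 m apart with f = 1200 px:
print(depth_from_disparity(1200, 0.5, 8))  # 75.0 (meters)
```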

  5. IMAGES: An interactive image processing system

    NASA Technical Reports Server (NTRS)

    Jensen, J. R.

    1981-01-01

    The IMAGES interactive image processing system was created specifically for undergraduate remote sensing education in geography. The system is interactive, relatively inexpensive to operate, almost hardware independent, and responsive to numerous users at one time in a time-sharing mode. Most important, it provides a medium whereby theoretical remote sensing principles discussed in lecture may be reinforced in laboratory as students perform computer-assisted image processing. In addition to its use in academic and short course environments, the system has also been used extensively to conduct basic image processing research. The flow of information through the system is discussed including an overview of the programs.

  6. A flexible high-rate USB2 data acquisition system for PET and SPECT imaging

    SciTech Connect

    J. Proffitt, W. Hammond, S. Majewski, V. Popov, R.R. Raylman, A.G. Weisenberger, R. Wojcik

    2006-02-01

    A new flexible data acquisition system has been developed to instrument gamma-ray imaging detectors designed by the Jefferson Lab Detector and Imaging Group. Hardware consists of 16-channel data acquisition modules installed on USB2 carrier boards. Carriers have been designed to accept one, two, and four modules. Application trigger rate and channel density determines the number of acquisition boards and readout computers used. Each channel has an independent trigger, gated integrator and a 2.5 MHz 12-bit ADC. Each module has an FPGA for analog control and signal processing. Processing includes a 5 ns 40-bit trigger time stamp and programmable triggering, gating, ADC timing, offset and gain correction, charge and pulse-width discrimination, sparsification, event counting, and event assembly. The carrier manages global triggering and transfers module data to a USB buffer. High-granularity time-stamped triggering is suitable for modular detectors. Time stamped events permit dynamic studies, complex offline event assembly, and high-rate distributed data acquisition. A sustained USB data rate of 20 Mbytes/s, a sustained trigger rate of 300 kHz for 32 channels, and a peak trigger rate of 2.5 MHz to FIFO memory were achieved. Different trigger, gating, processing, and event assembly techniques were explored. Target applications include >100 kHz coincidence rate PET detectors, dynamic SPECT detectors, miniature and portable gamma detectors for small-animal and clinical use.
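    Offline event assembly from time-stamped singles, as enabled by the module's trigger time stamps, can be sketched as pairing hits that fall within a coincidence window. The window and hit times below are illustrative, not the module's actual parameters:

```python
def find_coincidences(timestamps_ns, window_ns=10):
    """Pair consecutive time-stamped events whose separation is within
    the coincidence window (simple PET-style offline event assembly)."""
    events = sorted(timestamps_ns)
    pairs, i = [], 0
    while i < len(events) - 1:
        if events[i + 1] - events[i] <= window_ns:
            pairs.append((events[i], events[i + 1]))
            i += 2          # both singles consumed by this coincidence
        else:
            i += 1          # unpaired single, move on
    return pairs

hits = [100, 104, 500, 900, 905, 2000]
print(find_coincidences(hits))  # [(100, 104), (900, 905)]
```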

  7. A Pipeline Tool for CCD Image Processing

    NASA Astrophysics Data System (ADS)

    Bell, Jon F.; Young, Peter J.; Roberts, William H.; Sebo, Kim M.

    MSSSO is part of a collaboration developing a wide field imaging CCD mosaic (WFI). As part of this project, we have developed a GUI-based pipeline tool that is an integrated part of MSSSO's CICADA data acquisition environment and processes CCD FITS images as they are acquired. The tool is also designed to run as a stand-alone program to process previously acquired data. IRAF tasks are used as the central engine, including the new NOAO mscred package for processing multi-extension FITS files. The STScI OPUS pipeline environment may be used to manage data and process scheduling. The Motif GUI was developed using SUN Visual Workshop. C++ classes were written to facilitate launching of IRAF and OPUS tasks. While this first version implements calibration processing up to and including flat field corrections, there is scope to extend it to other processing.

  8. Infrared image acquisition system for vein pattern analysis

    NASA Astrophysics Data System (ADS)

    Castro-Ortega, R.; Toxqui-Quitl, C.; Padilla-Vivanco, A.; Solís-Villarreal, J.

    2016-09-01

    The physical shape of the hand vascular distribution contains useful information that can be used for identification and authentication purposes, providing a high level of security as a biometric. Furthermore, this pattern can be used widely in the health field, for example in venography and venipuncture. In this paper, we analyze different IR imaging systems in order to obtain high-visibility images of the hand vein pattern. The images are acquired in the range of 400 nm to 1300 nm, using infrared and thermal cameras. For the first image acquisition system, we use a CCD camera and a light source with peak emission at 880 nm, obtaining the images by reflection. A second system consists only of a ThermaCAM P65 camera acquiring the infrared light naturally emanating from the hand. A method of digital image analysis is implemented using Contrast Limited Adaptive Histogram Equalization (CLAHE) to remove noise. Subsequently, adaptive thresholding and mathematical morphology operations are implemented to extract the vein pattern distribution.
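    A simplified version of the pipeline above can be written in plain NumPy, with global histogram equalization standing in for CLAHE and a basic binary erosion standing in for the morphology step; the toy IR frame is invented:

```python
import numpy as np

def equalize(img):
    """Global histogram equalization (a simplified stand-in for CLAHE)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size
    return (cdf[img] * 255).astype(np.uint8)

def erode(mask):
    """3x3 cross-shaped binary erosion: a pixel survives only if its
    four direct neighbours are also set."""
    m = np.pad(mask, 1)
    return (m[1:-1, 1:-1] & m[:-2, 1:-1] & m[2:, 1:-1]
            & m[1:-1, :-2] & m[1:-1, 2:]).astype(np.uint8)

# Toy IR frame: dark 'vein' stripe on a brighter background.
img = np.full((10, 10), 180, dtype=np.uint8)
img[:, 4:7] = 60

eq = equalize(img)
veins = (eq < eq.mean()).astype(np.uint8)   # veins are the darker class
cleaned = erode(veins)                      # morphology removes the rim
```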

  9. Processing Visual Images

    SciTech Connect

    Litke, Alan

    2006-03-27

    The back of the eye is lined by an extraordinary biological pixel detector, the retina. This neural network is able to extract vital information about the external visual world, and transmit this information in a timely manner to the brain. In this talk, Professor Litke will describe a system that has been implemented to study how the retina processes and encodes dynamic visual images. Based on techniques and expertise acquired in the development of silicon microstrip detectors for high energy physics experiments, this system can simultaneously record the extracellular electrical activity of hundreds of retinal output neurons. After presenting first results obtained with this system, Professor Litke will describe additional applications of this incredible technology.

  10. 75 FR 62069 - Federal Acquisition Regulation; Sudan Waiver Process

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-07

    ... Federal Acquisition Regulation; Sudan Waiver Process AGENCIES: Department of Defense (DoD), General..., Prohibition on contracting with entities that conduct restricted business operations in Sudan, to add specific... on awarding a contract to a contractor that conducts business in Sudan should be waived....

  11. Reading Acquisition Enhances an Early Visual Process of Contour Integration

    ERIC Educational Resources Information Center

    Szwed, Marcin; Ventura, Paulo; Querido, Luis; Cohen, Laurent; Dehaene, Stanislas

    2012-01-01

    The acquisition of reading has an extensive impact on the developing brain and leads to enhanced abilities in phonological processing and visual letter perception. Could this expertise also extend to early visual abilities outside the reading domain? Here we studied the performance of illiterate, ex-illiterate and literate adults closely matched…

  12. Low Cost Coherent Doppler Lidar Data Acquisition and Processing

    NASA Technical Reports Server (NTRS)

    Barnes, Bruce W.; Koch, Grady J.

    2003-01-01

    The work described in this paper details the development of a low-cost, short-development time data acquisition and processing system for a coherent Doppler lidar. This was done using common laboratory equipment and a small software investment. This system provides near real-time wind profile measurements. Coding flexibility created a very useful test bed for new techniques.

  13. Status of RAISE, the Rapid Acquisition Imaging Spectrograph Experiment

    NASA Astrophysics Data System (ADS)

    Laurent, Glenn T.; Hassler, D. M.; DeForest, C.; Ayres, T. R.; Davis, M.; De Pontieu, B.; Schuehle, U.; Warren, H.

    2013-07-01

    The Rapid Acquisition Imaging Spectrograph Experiment (RAISE) sounding rocket payload is a high speed scanning-slit imaging spectrograph designed to observe the dynamics and heating of the solar chromosphere and corona on time scales as short as 100 ms, with 1 arcsec spatial resolution and a velocity sensitivity of 1-2 km/s. The instrument is based on a new class of UV/EUV imaging spectrometers that use only two reflections to provide quasi-stigmatic performance simultaneously over multiple wavelengths and spatial fields. The design uses an off-axis parabolic telescope mirror to form a real image of the sun on the spectrometer entrance aperture. A slit then selects a portion of the solar image, passing its light onto a near-normal incidence toroidal grating, which re-images the spectrally dispersed radiation onto two array detectors. Two full spectral passbands over the same one-dimensional spatial field are recorded simultaneously with no scanning of the detectors or grating. The two different spectral bands (1st-order 1205-1243Å and 1526-1564Å) are imaged onto two intensified Active Pixel Sensor (APS) detectors whose focal planes are individually adjusted for optimized performance. The telescope and grating are coated with B4C to enhance short wavelength (2nd order) reflectance, enabling the instrument to record the brightest lines between 602-622Å and 761-780Å at the same time. RAISE reads out the full field of both detectors at 5-10 Hz, allowing us to record over 1,500 complete spectral observations in a single 5-minute rocket flight, opening up a new domain of high time resolution spectral imaging and spectroscopy. We present an overview of the project, a summary of the maiden flight results, and an update on instrument status.

  14. Filter for biomedical imaging and image processing.

    PubMed

    Mondal, Partha P; Rajan, K; Ahmad, Imteyaz

    2006-07-01

    Image filtering techniques have numerous potential applications in biomedical imaging and image processing. The design of filters largely depends on a priori knowledge about the type of noise corrupting the image. This makes the standard filters application-specific. Widely used filters such as average, Gaussian, and Wiener reduce noisy artifacts by smoothing. However, this operation normally results in smoothing of the edges as well. On the other hand, sharpening filters enhance the high-frequency details, making the image nonsmooth. An integrated general approach to design a finite impulse response filter based on Hebbian learning is proposed for optimal image filtering. This algorithm exploits the interpixel correlation by updating the filter coefficients using Hebbian learning. The algorithm is made iterative for achieving efficient learning from the neighborhood pixels. This algorithm performs optimal smoothing of the noisy image by preserving high-frequency as well as low-frequency features. Evaluation results show that the proposed finite impulse response filter is robust under various noise distributions such as Gaussian noise, salt-and-pepper noise, and speckle noise. Furthermore, the proposed approach does not require any a priori knowledge about the type of noise. The number of unknown parameters is small, and most of these parameters are adaptively obtained from the processed image. The proposed filter is successfully applied for image reconstruction in a positron emission tomography imaging modality. The images reconstructed by the proposed algorithm are found to be superior in quality compared with those reconstructed by existing PET image reconstruction methodologies.
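    The Hebbian coefficient update can be sketched as follows. This is an illustrative reading of the idea (update the kernel from the correlation between each input neighbourhood and the filtered output, then renormalize to unit gain), not the paper's exact algorithm; the learning rate and normalization are invented:

```python
import numpy as np

def hebbian_filter(img, size=3, eta=0.01, iters=5):
    """Sketch of a Hebbian-style adaptive FIR filter: the kernel is
    updated from the input/output correlation at each pixel, then
    renormalized so the filter keeps unit DC gain."""
    pad = size // 2
    w = np.full((size, size), 1.0 / size ** 2)   # start as an average filter
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for _ in range(iters):
        for r in range(img.shape[0]):
            for c in range(img.shape[1]):
                patch = padded[r:r + size, c:c + size]
                y = float((w * patch).sum())
                out[r, c] = y
                # Hebbian step: strengthen weights correlated with the output.
                w = w + eta * patch * y / (patch.max() + 1e-9) ** 2
                w = w / w.sum()                   # renormalize to unit gain
    return out, w

rng = np.random.default_rng(1)
noisy = 100 + rng.normal(0, 10, (16, 16))
smoothed, kernel = hebbian_filter(noisy)
```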

  15. MO-FG-303-01: FEATURED PRESENTATION and BEST IN PHYSICS (THERAPY): Automating LINAC QA: Design and Testing of An Image Acquisition and Processing System Utilizing a Combination of Radioluminescent Phosphors, Embedded X-Ray Markers and Optical Measurements

    SciTech Connect

    Jenkins, C; Naczynski, D; Yu, S; Xing, L

    2015-06-15

    Purpose: The recent development of phosphors to visualize radiation beams from linear accelerators (LINAC) offers a unique opportunity for evaluating radiation fields within the context of the treatment space. The purpose of this study was to establish an automated, self-calibrating prototype system for performing quality assurance (QA) measurements. Methods: A thin layer of Gd2O2S:Tb phosphor and fiducial markers were embedded on several planar faces of a custom-designed phantom. The phantom was arbitrarily placed near iso-center on the couch of a LINAC equipped with on-board megavoltage (MV) and kilovoltage (kV) imagers. A plan consisting of several beams and integrated image acquisitions was delivered. Images of the phantom were collected throughout the delivery. Salient features, such as fiducials, crosshairs and beam edges, were then extracted from these images and used to calibrate the system, adjust for variations in phantom placement, and perform measurements. Beam edges were visualized by imaging the light generated by the phosphor on the phantom, enabling direct comparison with the light field and laser locations. Registration of MV, kV and optical image data was performed using the embedded fiducial markers, enabling comparison of imaging center locations. Measurements specified by TG-142 were calculated and compared with those obtained from a commercially available QA system. Results: The system was able to automatically extract the location of the fiducials, lasers, light field and radiation field from the acquired images regardless of phantom positioning. It was also able to automatically identify the locations of fiducial markers on kV and MV images. All collected measurements were within TG-142 guidelines. The differences between the prototype and the commercially available system were less than 0.2 mm. Conclusion: The prototype system demonstrated the capability of accurately and autonomously evaluating various TG-142 parameters independent of

  16. Simultaneous acquisition of spatial harmonics (SMASH): fast imaging with radiofrequency coil arrays.

    PubMed

    Sodickson, D K; Manning, W J

    1997-10-01

    SiMultaneous Acquisition of Spatial Harmonics (SMASH) is a new fast-imaging technique that increases MR image acquisition speed by an integer factor over existing fast-imaging methods, without significant sacrifices in spatial resolution or signal-to-noise ratio. Image acquisition time is reduced by exploiting spatial information inherent in the geometry of a surface coil array to substitute for some of the phase encoding usually produced by magnetic field gradients. This allows for partially parallel image acquisitions using many of the existing fast-imaging sequences. Unlike the data combination algorithms of prior proposals for parallel imaging, SMASH reconstruction involves a small set of MR signal combinations prior to Fourier transformation, which can be advantageous for artifact handling and practical implementation. A twofold savings in image acquisition time is demonstrated here using commercial phased array coils on two different MR-imaging systems. Larger time savings factors can be expected for appropriate coil designs.
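    The core idea of SMASH, fitting coil weights so that a combination of coil sensitivity profiles mimics a spatial harmonic normally produced by phase-encoding gradients, can be sketched with invented 1-D Gaussian coil profiles (the profiles, FOV sampling and coil count are all illustrative assumptions):

```python
import numpy as np

# Hypothetical 1-D sensitivity profiles: 8 Gaussian coils across the FOV.
fov = np.linspace(0, 1, 128)
centers = np.linspace(0.05, 0.95, 8)
coils = np.exp(-((fov[None, :] - centers[:, None]) / 0.15) ** 2)

def harmonic_weights(coils, m, fov):
    """Least-squares coil weights w so that sum_k w_k C_k(y) approximates
    the m-th spatial harmonic exp(i * 2*pi * m * y)."""
    target = np.exp(2j * np.pi * m * fov)
    basis = coils.T.astype(complex)               # (points, coils)
    w, *_ = np.linalg.lstsq(basis, target, rcond=None)
    return w, basis @ w, target

w0, fit0, t0 = harmonic_weights(coils, 0, fov)    # flat (m = 0) harmonic
w1, fit1, t1 = harmonic_weights(coils, 1, fov)    # first spatial harmonic
```

Each synthesized harmonic substitutes for one gradient-encoded phase step, which is how the acquisition-time savings arise.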

  17. Data acquisition system for harmonic motion microwave Doppler imaging.

    PubMed

    Tafreshi, Azadeh Kamali; Karadaş, Mürsel; Top, Can Barış; Gençer, Nevzat Güneri

    2014-01-01

    Harmonic Motion Microwave Doppler Imaging (HMMDI) is a hybrid method proposed for breast tumor detection, which images the coupled dielectric and elastic properties of the tissue. In this paper, the performance of a data acquisition system for HMMDI method is evaluated on breast phantom materials. A breast fat phantom including fibro-glandular and tumor phantom regions is produced. The phantom is excited using a focused ultrasound probe and a microwave transmitter. The received microwave signal level is measured on three different points inside the phantom (fat, fibro-glandular, and tumor regions). The experimental results using the designed homodyne receiver proved the effectiveness of the proposed setup. In tumor phantom region, the signal level decreased about 3 dB compared to the signal level obtained from the fibro-glandular phantom area, whereas this signal was about 4 dB higher than the received signal from the fat phantom.

  18. Least Squares Time-Series Synchronization in Image Acquisition Systems.

    PubMed

    Piazzo, Lorenzo; Raguso, Maria Carmela; Calzoletti, Luca; Seu, Roberto; Altieri, Bruno

    2016-07-18

    We consider an acquisition system constituted by an array of sensors scanning an image. Each sensor produces a sequence of readouts, called a time-series. In this framework, we discuss the image estimation problem when the time-series are affected by noise and by a time shift. In particular, we introduce an appropriate data model and consider the Least Squares (LS) estimate, showing that it has no closed form. However, the LS problem has a structure that can be exploited to simplify the solution. In particular, based on two known techniques, namely Separable Nonlinear Least Squares (SNLS) and Alternating Least Squares (ALS), we propose and analyze several practical estimation methods. As an additional contribution, we discuss the application of these methods to the data of the Photodetector Array Camera and Spectrometer (PACS), which is an infrared photometer onboard the Herschel satellite. In this context, we investigate the accuracy and the computational complexity of the methods, using both true and simulated data.
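    For the special case of integer shifts and a known reference signal, the least-squares alignment step reduces to picking the cross-correlation peak; the paper's SNLS/ALS methods additionally estimate the image itself and handle non-integer shifts. A minimal sketch with synthetic time-series:

```python
import numpy as np

def estimate_shifts(series, ref):
    """Integer time-shift per sensor via the cross-correlation peak,
    which is the least-squares-optimal alignment for equal-energy
    shifted copies of a known reference."""
    shifts = []
    for s in series:
        corr = np.correlate(s, ref, mode="full")
        shifts.append(int(corr.argmax() - (len(ref) - 1)))
    return shifts

# Common signal observed by three sensors with different integer delays.
t = np.arange(64)
ref = np.exp(-((t - 20) / 4.0) ** 2)
series = [np.roll(ref, d) for d in (0, 3, 7)]
print(estimate_shifts(series, ref))  # [0, 3, 7]
```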

  19. RAISE (Rapid Acquisition Imaging Spectrograph Experiment): Results and Instrument Status

    NASA Astrophysics Data System (ADS)

    Laurent, Glenn T.; Hassler, Donald; DeForest, Craig; Ayres, Tom; Davis, Michael; DePontieu, Bart; Diller, Jed; Graham, Roy; Schule, Udo; Warren, Harry

    2015-04-01

    We present initial results from the successful November 2014 launch of the RAISE (Rapid Acquisition Imaging Spectrograph Experiment) sounding rocket program, including intensity maps, high-speed spectroheliograms and dopplergrams, as well as an update on instrument status. The RAISE sounding rocket payload is the fastest high-speed scanning-slit imaging spectrograph flown to date and is designed to observe the dynamics and heating of the solar chromosphere and corona on time scales as short as 100-200ms, with arcsecond spatial resolution and a velocity sensitivity of 1-2 km/s. The instrument is based on a class of UV/EUV imaging spectrometers that use only two reflections to provide quasi-stigmatic performance simultaneously over multiple wavelengths and spatial fields. The design uses an off-axis parabolic telescope mirror to form a real image of the sun on the spectrometer entrance aperture. A slit then selects a portion of the solar image, passing its light onto a near-normal incidence toroidal grating, which re-images the spectrally dispersed radiation onto two array detectors. Two full spectral passbands over the same one-dimensional spatial field are recorded simultaneously with no scanning of the detectors or grating. The two different spectral bands (1st-order 1205-1243Å and 1526-1564Å) are imaged onto two intensified Active Pixel Sensor (APS) detectors whose focal planes are individually adjusted for optimized performance. RAISE reads out the full field of both detectors at 5-10 Hz, allowing us to record over 1,500 complete spectral observations in a single 5-minute rocket flight, opening up a new domain of high time resolution spectral imaging and spectroscopy. RAISE is designed to study small-scale multithermal dynamics in active region (AR) loops, explore the strength, spectrum and location of high frequency waves in the solar atmosphere, and investigate the nature of transient brightenings in the chromospheric network.

  20. Optimizing hippocampal segmentation in infants utilizing MRI post-acquisition processing.

    PubMed

    Thompson, Deanne K; Ahmadzai, Zohra M; Wood, Stephen J; Inder, Terrie E; Warfield, Simon K; Doyle, Lex W; Egan, Gary F

    2012-04-01

    This study aims to determine the most reliable method for infant hippocampal segmentation by comparing magnetic resonance (MR) imaging post-acquisition processing techniques: contrast to noise ratio (CNR) enhancement, or reformatting to standard orientation. MR scans were performed with a 1.5 T GE scanner to obtain dual echo T2 and proton density (PD) images at term equivalent (38-42 weeks' gestational age). 15 hippocampi were manually traced four times on ten infant images by 2 independent raters on the original T2 image, as well as images processed by: a) combining T2 and PD images (T2-PD) to enhance CNR; then b) reformatting T2-PD images perpendicular to the long axis of the left hippocampus. CNRs and intraclass correlation coefficients (ICC) were calculated. T2-PD images had 17% higher CNR (15.2) than T2 images (12.6). Original T2 volumes' ICC was 0.87 for rater 1 and 0.84 for rater 2, whereas T2-PD images' ICC was 0.95 for rater 1 and 0.87 for rater 2. Reliability of hippocampal segmentation on T2-PD images was not improved by reformatting images (rater 1 ICC = 0.88, rater 2 ICC = 0.66). Post-acquisition processing can improve CNR and hence reliability of hippocampal segmentation in neonate MR scans when tissue contrast is poor. These findings may be applied to enhance boundary definition in infant segmentation for various brain structures or in any volumetric study where image contrast is sub-optimal, enabling hippocampal structure-function relationships to be explored.
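    One common definition of CNR, the absolute mean difference between two regions divided by the pooled noise standard deviation, can be computed as follows (the intensity values below are invented, not the study's data, and the study's exact CNR formula may differ):

```python
import numpy as np

def cnr(region_a, region_b):
    """Contrast-to-noise ratio between two tissue regions:
    |mean difference| over pooled standard deviation."""
    a = np.asarray(region_a, dtype=float)
    b = np.asarray(region_b, dtype=float)
    noise = np.sqrt((a.var() + b.var()) / 2)
    return abs(a.mean() - b.mean()) / noise

# Hypothetical voxel intensities for hippocampus vs. surrounding tissue.
rng = np.random.default_rng(2)
hippocampus = rng.normal(120, 8, 500)
surround = rng.normal(100, 8, 500)
print(cnr(hippocampus, surround))  # roughly 2.5 (contrast 20, noise 8)
```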

  1. FORTRAN Algorithm for Image Processing

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Hull, David R.

    1987-01-01

    FORTRAN computer algorithm containing various image-processing analysis and enhancement functions developed. Algorithm developed specifically to process images of developmental heat-engine materials obtained with sophisticated nondestructive evaluation instruments. Applications of program include scientific, industrial, and biomedical imaging for studies of flaws in materials, analyses of steel and ores, and pathology.

  2. Feedback regulation of microscopes by image processing.

    PubMed

    Tsukada, Yuki; Hashimoto, Koichi

    2013-05-01

    Computational microscope systems are becoming a major part of imaging biological phenomena, and the development of such systems requires the design of automated regulation of microscopes. An important aspect of automated regulation is feedback regulation, which is the focus of this review. As modern microscope systems become more complex, often with many independent components that must work together, computer control is inevitable, since the exact orchestration of parameters and timings for these multiple components is critical to acquire proper images. A number of techniques have been developed for biological imaging to accomplish this. Here, we summarize the basics of computational microscopy for the purpose of building automatically regulated microscopes, focusing on feedback regulation by image processing. These techniques allow high-throughput data acquisition while monitoring both short- and long-term dynamic phenomena, which cannot be achieved without an automated system.
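    A feedback-style focus search, with an image-derived sharpness metric driving the choice of stage position, can be sketched as follows. The blur model and gradient-variance metric are illustrative assumptions, not taken from the review:

```python
import numpy as np

def sharpness(img):
    """Focus metric: variance of horizontal intensity gradients."""
    return np.diff(img, axis=1).var()

def autofocus(acquire, positions):
    """Feedback loop sketch: acquire an image at each candidate focus
    position and keep the one maximizing the sharpness metric."""
    return max(positions, key=lambda z: sharpness(acquire(z)))

# Simulated stage: images get blurrier as |z - 5| grows.
rng = np.random.default_rng(3)
scene = rng.normal(0, 1, (32, 32))

def acquire(z):
    width = abs(z - 5) + 1                      # defocus widens the blur kernel
    kernel = np.ones(width) / width
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, scene)

print(autofocus(acquire, range(0, 11)))  # 5, the in-focus position
```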

  3. Democratizing an electroluminescence imaging apparatus and analytics project for widespread data acquisition in photovoltaic materials

    NASA Astrophysics Data System (ADS)

    Fada, Justin S.; Wheeler, Nicholas R.; Zabiyaka, Davis; Goel, Nikhil; Peshek, Timothy J.; French, Roger H.

    2016-08-01

    We present a description of an electroluminescence (EL) apparatus, easily sourced from commercially available components, with a quantitative image processing platform that demonstrates feasibility for the widespread utility of EL imaging as a characterization tool. We validated our system using a Gage R&R analysis to find a variance contribution by the measurement system of 80.56%, which is typically unacceptable, but through quantitative image processing and development of correction factors a variance contribution by the measurement system of 2.41% was obtained. We further validated the system by quantifying the signal-to-noise ratio (SNR) and found values consistent with other systems published in the literature, at SNR values of 10-100, albeit at exposure times of greater than 1 s compared to 10 ms for other systems. This SNR value range is acceptable for image feature recognition, providing the opportunity for widespread data acquisition and large scale data analytics of photovoltaics.
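    The SNR figure quoted above can be estimated from repeated exposures as the temporal mean over the temporal standard deviation at each pixel. This is a generic sketch with synthetic frames, not the authors' measurement procedure:

```python
import numpy as np

def frame_snr(frames):
    """Per-pixel SNR from repeated exposures (temporal mean over
    temporal standard deviation), averaged across the image."""
    stack = np.asarray(frames, dtype=float)     # (n_frames, H, W)
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    return (mean / (std + 1e-12)).mean()

# Synthetic EL frames: signal level 1000 counts, noise sigma 50.
rng = np.random.default_rng(4)
frames = [1000 + rng.normal(0, 50, (16, 16)) for _ in range(20)]
print(frame_snr(frames))  # roughly 20 (signal 1000 / noise 50)
```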

  4. Democratizing an electroluminescence imaging apparatus and analytics project for widespread data acquisition in photovoltaic materials.

    PubMed

    Fada, Justin S; Wheeler, Nicholas R; Zabiyaka, Davis; Goel, Nikhil; Peshek, Timothy J; French, Roger H

    2016-08-01

    We present a description of an electroluminescence (EL) apparatus, easily sourced from commercially available components, with a quantitative image processing platform that demonstrates feasibility for the widespread utility of EL imaging as a characterization tool. We validated our system using a Gage R&R analysis to find a variance contribution by the measurement system of 80.56%, which is typically unacceptable, but through quantitative image processing and development of correction factors a variance contribution by the measurement system of 2.41% was obtained. We further validated the system by quantifying the signal-to-noise ratio (SNR) and found values consistent with other systems published in the literature, at SNR values of 10-100, albeit at exposure times of greater than 1 s compared to 10 ms for other systems. This SNR value range is acceptable for image feature recognition, providing the opportunity for widespread data acquisition and large scale data analytics of photovoltaics.

  5. The Rapid Acquisition Imaging Spectrograph Experiment (RAISE) Sounding Rocket Investigation

    NASA Astrophysics Data System (ADS)

    Laurent, Glenn T.; Hassler, Donald M.; Deforest, Craig; Slater, David D.; Thomas, Roger J.; Ayres, Thomas; Davis, Michael; de Pontieu, Bart; Diller, Jed; Graham, Roy; Michaelis, Harald; Schuele, Udo; Warren, Harry

    2016-03-01

    We present a summary of the solar observing Rapid Acquisition Imaging Spectrograph Experiment (RAISE) sounding rocket program, including an overview of the design and calibration of the instrument, flight performance, and preliminary chromospheric results from the successful November 2014 launch of the RAISE instrument. The RAISE sounding rocket payload is the fastest scanning-slit solar ultraviolet imaging spectrograph flown to date. RAISE is designed to observe the dynamics and heating of the solar chromosphere and corona on time scales as short as 100-200 ms, with arcsecond spatial resolution and a velocity sensitivity of 1-2 km/s. Two full spectral passbands over the same one-dimensional spatial field are recorded simultaneously with no scanning of the detectors or grating. The two different spectral bands (first-order 1205-1251 Å and 1524-1569 Å) are imaged onto two intensified Active Pixel Sensor (APS) detectors whose focal planes are individually adjusted for optimized performance. RAISE reads out the full field of both detectors at 5-10 Hz, recording up to 1800 complete spectra (per detector) in a single 6-min rocket flight. This opens up a new domain of high-time-resolution spectral imaging and spectroscopy. RAISE is designed to observe small-scale multithermal dynamics in Active Region (AR) and quiet Sun loops, identify the strength, spectrum, and location of high-frequency waves in the solar atmosphere, and determine the nature of energy release in the chromospheric network.

  6. The APL image processing laboratory

    NASA Technical Reports Server (NTRS)

    Jenkins, J. O.; Randolph, J. P.; Tilley, D. G.; Waters, C. A.

    1984-01-01

    The present and proposed capabilities of the Central Image Processing Laboratory, which provides a powerful resource for the advancement of programs in missile technology, space science, oceanography, and biomedical image analysis, are discussed. The use of image digitizing, digital image processing, and digital image output permits a variety of functional capabilities, including: enhancement, pseudocolor, convolution, computer output microfilm, presentation graphics, animations, transforms, geometric corrections, and feature extractions. The hardware and software of the Image Processing Laboratory, consisting of digitizing and processing equipment, software packages, and display equipment, is described. Attention is given to applications for imaging systems, map geometric correction, raster movie display of Seasat ocean data, Seasat and Skylab scenes of Nantucket Island, Space Shuttle imaging radar, differential radiography, and a computerized tomographic scan of the brain.
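
    One of the functional capabilities listed, pseudocolor, is simple to illustrate: a grayscale image is passed through an RGB lookup table. A minimal sketch (the blue-to-red ramp LUT is an arbitrary choice for illustration):

```python
import numpy as np

# Minimal pseudocolor sketch: map an 8-bit grayscale image through a
# 256-entry RGB lookup table (here a simple blue-to-red ramp).
lut = np.zeros((256, 3), dtype=np.uint8)
lut[:, 0] = np.arange(256)          # red channel ramps up with intensity
lut[:, 2] = 255 - np.arange(256)    # blue channel ramps down

gray = np.array([[0, 128], [200, 255]], dtype=np.uint8)
rgb = lut[gray]                     # fancy indexing applies the LUT per pixel
print(rgb.shape)                    # (2, 2, 3)
```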

  7. Evidence on the Effect of DoD Acquisition Policy and Process on Cost Growth of Major Defense Acquisition Programs

    DTIC Science & Technology

    2015-05-13

    Evidence on the Effect of DoD Acquisition Policy and Process on Cost Growth of Major Defense Acquisition Programs, Naval Postgraduate School 12th... There are no obvious candidates. The paper (Appendix B) provides evidence that PAUC growth is not systematically influenced by changes in budget

  8. Robust Depth Image Acquisition Using Modulated Pattern Projection and Probabilistic Graphical Models

    PubMed Central

    Kravanja, Jaka; Žganec, Mario; Žganec-Gros, Jerneja; Dobrišek, Simon; Štruc, Vitomir

    2016-01-01

    Depth image acquisition with structured light approaches in outdoor environments is a challenging problem due to external factors, such as ambient sunlight, which commonly affect the acquisition procedure. This paper presents a novel structured light sensor designed specifically for operation in outdoor environments. The sensor exploits a modulated sequence of structured light projected onto the target scene to counteract environmental factors and estimate a spatial distortion map in a robust manner. The correspondence between the projected pattern and the estimated distortion map is then established using a probabilistic framework based on graphical models. Finally, the depth image of the target scene is reconstructed using a number of reference frames recorded during the calibration process. We evaluate the proposed sensor on experimental data in indoor and outdoor environments and present comparative experiments with other existing methods, as well as commercial sensors. PMID:27775570

  9. Robust Depth Image Acquisition Using Modulated Pattern Projection and Probabilistic Graphical Models.

    PubMed

    Kravanja, Jaka; Žganec, Mario; Žganec-Gros, Jerneja; Dobrišek, Simon; Štruc, Vitomir

    2016-10-19

    Depth image acquisition with structured light approaches in outdoor environments is a challenging problem due to external factors, such as ambient sunlight, which commonly affect the acquisition procedure. This paper presents a novel structured light sensor designed specifically for operation in outdoor environments. The sensor exploits a modulated sequence of structured light projected onto the target scene to counteract environmental factors and estimate a spatial distortion map in a robust manner. The correspondence between the projected pattern and the estimated distortion map is then established using a probabilistic framework based on graphical models. Finally, the depth image of the target scene is reconstructed using a number of reference frames recorded during the calibration process. We evaluate the proposed sensor on experimental data in indoor and outdoor environments and present comparative experiments with other existing methods, as well as commercial sensors.
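
    The depth-reconstruction step in structured-light sensing ultimately rests on triangulation: once the correspondence between the projected pattern and the observed distortion is established, depth follows from the disparity. A hedged sketch of that final relation (all values are illustrative, not taken from the paper):

```python
import numpy as np

# Triangulation sketch: with a projector-camera baseline B and focal
# length f (in pixels), depth follows the pinhole relation Z = f * B / d,
# where d is the observed pattern displacement (disparity) in pixels.
focal_px = 800.0       # hypothetical focal length in pixels
baseline_m = 0.1       # hypothetical projector-camera baseline in metres
disparity_px = np.array([40.0, 20.0, 10.0])

depth_m = focal_px * baseline_m / disparity_px
print(depth_m)         # [2. 4. 8.] -- larger disparity means closer surface
```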

  10. Efficient image acquisition design for a cancer detection system

    NASA Astrophysics Data System (ADS)

    Nguyen, Dung; Roehrig, Hans; Borders, Marisa H.; Fitzpatrick, Kimberly A.; Roveda, Janet

    2013-09-01

    Modern imaging modalities, such as Computed Tomography (CT), Digital Breast Tomosynthesis (DBT), or Magnetic Resonance Tomography (MRT), are able to acquire volumetric images with isotropic resolution in the micrometer (um) or millimeter (mm) range. When used in interactive telemedicine applications, these raw images require huge storage, necessitating a high-bandwidth data communication link. To reduce the cost of transmission and enable archiving, especially for medical applications, image compression is performed. Recent advances in compression algorithms have produced a vast array of data compression techniques, but because of the characteristics of these images, there are challenges to overcome in transmitting them efficiently. In addition, recent studies have raised concerns about the risks of even low-dose mammography for high-risk patients. Our preliminary studies indicate that performing compression before the analog-to-digital conversion (ADC) stage is more efficient than applying compression techniques after the ADC. The linearity of compressed sensing and the ability to perform digital signal processing (DSP) during data conversion open up a new area of research regarding the roles of sparsity in medical image registration, medical image analysis (for example, automatic image processing algorithms to efficiently extract the relevant information for the clinician), further X-ray dose reduction for mammography, and contrast enhancement.
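
    The sparse-recovery idea underlying compressed sensing can be sketched in a few lines. The following toy example (an illustration of the general technique, not the authors' pre-ADC hardware scheme) recovers a sparse vector from random linear measurements with iterative soft thresholding (ISTA):

```python
import numpy as np

# Toy compressed-sensing sketch: a k-sparse signal of length n is
# recovered from m < n random linear measurements by ISTA, which solves
# the lasso problem min 0.5*||y - Ax||^2 + lam*||x||_1.
rng = np.random.default_rng(1)
n, m, k = 100, 50, 4
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k) + 3.0
A = rng.normal(0, 1.0 / np.sqrt(m), (m, n))    # random measurement matrix
y = A @ x_true                                  # compressed measurements

L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of A^T A
lam = 0.01
x = np.zeros(n)
for _ in range(500):
    g = x + (A.T @ (y - A @ x)) / L             # gradient step on the data fit
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold

err = np.linalg.norm(x - x_true)                # reconstruction error
```

Despite having only half as many measurements as unknowns, the sparse structure lets ISTA reconstruct the signal far better than the trivial zero estimate.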

  11. Payload Configurations for Efficient Image Acquisition - Indian Perspective

    NASA Astrophysics Data System (ADS)

    Samudraiah, D. R. M.; Saxena, M.; Paul, S.; Narayanababu, P.; Kuriakose, S.; Kiran Kumar, A. S.

    2014-11-01

    The world is increasingly dependent on remotely sensed data. The data is regularly used for monitoring earth resources and for addressing global problems such as disasters and climate degradation. Remotely sensed data has changed our understanding of other planets. With innovative approaches in data utilization, demand for remote sensing data is ever increasing, and more and more research and development is devoted to data utilization. Satellite resources are scarce, and each launch costs heavily. Each launch is also associated with a large hardware development effort prior to launch, and with a large number of software elements and mathematical algorithms post-launch. The proliferation of low-earth and geostationary satellites has led to increased scarcity in the orbital slots available for newer satellites. The Indian Space Research Organization has always tried to maximize the utility of its satellites. Multiple sensors are flown on each satellite, designed to cater to various spectral bands/frequencies and spatial and temporal resolutions. Bhaskara-1, the first experimental satellite, started with 2 bands in the electro-optical spectrum and 3 bands in the microwave spectrum. The recent Resourcesat-2 incorporates a very efficient image acquisition approach with multi-resolution (3 types of spatial resolution), multi-band (4 spectral bands) electro-optical sensors (LISS-4, LISS-3* and AWiFS). The system has been designed to provide data globally with various data reception stations and onboard data storage capabilities. The Oceansat-2 satellite has a unique sensor combination: an 8-band electro-optical high-sensitivity ocean colour monitor (catering to ocean and land) along with a Ku-band scatterometer to acquire information on ocean winds. INSAT-3D, launched recently, provides high-resolution 6-band image data in the visible, short-wave, mid-wave and long-wave infrared spectrum. It also has 19 band

  12. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images

    PubMed Central

    Afshar, Yaser; Sbalzarini, Ivo F.

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rates. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit in the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers collectively solve the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments. PMID:27046144
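
    The decomposition step can be sketched independently of the distributed machinery: the image is cut into tiles that could be processed on separate machines and later reassembled. A single-process illustration (not the paper's implementation, which also handles halo exchange between neighboring tiles):

```python
import numpy as np

# Sketch of image decomposition for distributed processing: split a large
# image into fixed-size tiles, then reassemble them losslessly.
def split_tiles(img, ty, tx):
    H, W = img.shape
    return [img[y:y + ty, x:x + tx]
            for y in range(0, H, ty) for x in range(0, W, tx)]

def join_tiles(tiles, shape, ty, tx):
    out = np.empty(shape, dtype=tiles[0].dtype)
    i = 0
    for y in range(0, shape[0], ty):
        for x in range(0, shape[1], tx):
            out[y:y + ty, x:x + tx] = tiles[i]
            i += 1
    return out

img = np.arange(64).reshape(8, 8)
tiles = split_tiles(img, 4, 4)
assert np.array_equal(join_tiles(tiles, img.shape, 4, 4), img)  # round trip
print(len(tiles))  # 4 tiles of 4x4
```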

  13. Cooperative processes in image segmentation

    NASA Technical Reports Server (NTRS)

    Davis, L. S.

    1982-01-01

    Research into the role of cooperative, or relaxation, processes in image segmentation is surveyed. Cooperative processes can be employed at several levels of the segmentation process as a preprocessing enhancement step, during supervised or unsupervised pixel classification and, finally, for the interpretation of image segments based on segment properties and relations.

  14. Probability of acquisition of three-dimensional imaging laser radar

    NASA Astrophysics Data System (ADS)

    Dong, Li-jun; Zhu, Shao-lan; Sun, Chuan-dong; Gao, Cun-xiao; Song, Zhi-yuan

    2011-06-01

    Three-dimensional imaging laser radar (3-D ladar) is widely used in modern military, scientific research, agricultural, and industrial applications. It offers many advantages, such as angle-angle-range capture, high resolution, anti-jamming ability, and freedom from multipath effects, but it must scan to search for, acquire, and track targets. This paper presents a novel probability model of target acquisition which provides a theoretical basis for optimizing the scanning mechanism. The model combines space and time, target moving velocity, and ladar scanning velocity. From it, the optimum scanning mechanism that maximizes the probability of acquisition for different targets can be obtained. The result shows that this model provides a method to optimize parameters in the design of the scanner.

  15. TH-E-17A-07: Improved Cine Four-Dimensional Computed Tomography (4D CT) Acquisition and Processing Method

    SciTech Connect

    Castillo, S; Castillo, R; Castillo, E; Pan, T; Ibbott, G; Balter, P; Hobbs, B; Dai, J; Guerrero, T

    2014-06-15

    Purpose: Artifacts arising from the 4D CT acquisition and post-processing methods add systematic uncertainty to the treatment planning process. We propose an alternate cine 4D CT acquisition and post-processing method to consistently reduce artifacts, and explore patient parameters indicative of image quality. Methods: In an IRB-approved protocol, 18 patients with primary thoracic malignancies received a standard cine 4D CT acquisition followed by an oversampling 4D CT that doubled the number of images acquired. A second cohort of 10 patients received the clinical 4D CT plus 3 oversampling scans for intra-fraction reproducibility. The clinical acquisitions were processed by the standard phase sorting method. The oversampling acquisitions were processed using Dijkstra's algorithm to optimize an artifact metric over the available image data. Image quality was evaluated with a one-way mixed ANOVA model using a correlation-based artifact metric calculated from the final 4D CT image sets. Spearman correlations and a linear mixed model tested the association between breathing parameters, patient characteristics, and image quality. Results: The oversampling 4D CT scans reduced artifact presence significantly, by 27% and 28% for the first and second cohorts, respectively. From cohort 2, the inter-replicate deviation for the oversampling method was within approximately 13% of the cross-scan average at the 0.05 significance level. Artifact presence for both the clinical and oversampling methods was significantly correlated with breathing period (ρ=0.407, p-value<0.032 clinical; ρ=0.296, p-value<0.041 oversampling). Artifact presence in the oversampling method was significantly correlated with the amount of data acquired (ρ=-0.335, p-value<0.02), indicating decreased artifact presence with increased breathing cycles per scan location. Conclusion: The 4D CT oversampling acquisition with optimized sorting reduced artifact presence significantly and reproducibly compared to the phase
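
    The standard phase-sorting baseline mentioned in the Methods can be sketched simply: each cine image is assigned, by its acquisition time, to one of a fixed number of respiratory phase bins. An illustrative sketch with made-up timings (not the study's sorting code):

```python
import numpy as np

# Phase-sorting sketch: given a breathing period, map each image's
# acquisition time to a fractional respiratory phase, then to a bin.
N_PHASES = 10
period_s = 4.0                                        # hypothetical period
acq_times_s = np.array([0.0, 0.5, 1.0, 2.0, 3.9, 4.1])

phase = (acq_times_s % period_s) / period_s           # fraction in [0, 1)
bins = np.floor(phase * N_PHASES).astype(int) % N_PHASES
print(bins.tolist())                                  # [0, 1, 2, 5, 9, 0]
```

Irregular breathing makes this time-based binning inconsistent across couch positions, which is one source of the sorting artifacts the oversampling method targets.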

  16. The CINCS (Commanders-in-Chief) and the Acquisition Process

    DTIC Science & Technology

    1988-09-01

    weight to joint views in the PPB and acquisition processes. The current Joint Program Analysis Memorandum (JPAM) is considered in OSD as a pulling...the senior warfighters who can best judge the ultimate use of weapons. In the final analysis, that acceptance should rest on the capabilities of the...program could be a restructured Joint Program Analysis Memorandum which could serve as the Chairman's Program Assessment Memorandum. Short of this

  17. Digital-image processing and image analysis of glacier ice

    USGS Publications Warehouse

    Fitzpatrick, Joan J.

    2013-01-01

    This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document compose a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended but it can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.
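
    One of the grain statistics described, grain count, reduces to connected-component labelling of a binarized thin-section image. A minimal pure-Python sketch (real workflows in FoveaPro or ImageJ also measure size and shape per component):

```python
from collections import deque

# Count grains in a binarized thin-section image by 4-connected
# component labelling (breadth-first flood fill).
def count_grains(mask):
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    grains = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                grains += 1                      # found a new grain
                q = deque([(r, c)])
                seen[r][c] = True
                while q:                         # flood-fill its pixels
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return grains

mask = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1],
        [1, 0, 0, 0]]
print(count_grains(mask))  # 3 separate grains
```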

  18. SPARTACUS - A new system of data acquisition and processing for ultrasonic examination

    NASA Astrophysics Data System (ADS)

    Benoist, Ph.; Cartier, F.; Chapius, N.; Pincemaille, G.

    SPARTACUS, a novel data acquisition and processing system for ultrasonic examination, was developed to overcome a fundamental limitation: characterization, sizing, and SNR-improvement techniques based on information processing cannot be employed when the complete form of the HF signal, and hence its frequency content, is not accessible. In acquisition mode, SPARTACUS records all waveforms continuously in numerical form, at a rate compatible with industrial requirements. In processing mode, SPARTACUS offers vast processing and imaging possibilities, making it possible to set up an analytical method adapted to a specific problem, so that the industrial operator has a tool capable of diagnostic automation in complex testing situations.

  19. Industrial Applications of Image Processing

    NASA Astrophysics Data System (ADS)

    Ciora, Radu Adrian; Simion, Carmen Mihaela

    2014-11-01

    The recent advances in sensor quality and processing power provide us with excellent tools for designing more complex image processing and pattern recognition tasks. In this paper we review existing applications of image processing and pattern recognition in industrial engineering. First we define the role of vision in an industrial setting. Then an overview of image processing techniques is presented, covering feature extraction, object recognition, and industrial robot guidance. Moreover, examples of implementations of such techniques in industry are presented, including automated visual inspection, process control, part identification, and robot control. Finally, we present some conclusions regarding the investigated topics and directions for future investigation.

  20. [Imaging center - optimization of the imaging process].

    PubMed

    Busch, H-P

    2013-04-01

    Hospitals around the world are under increasing pressure to optimize the economic efficiency of treatment processes. Imaging is responsible for a great part of the success but also of the costs of treatment. In routine work an excessive supply of imaging methods leads to an "as well as" strategy up to the limit of the capacity without critical reflection. Exams that have no predictable influence on the clinical outcome are an unjustified burden for the patient. They are useless and threaten the financial situation and existence of the hospital. In recent years the focus of process optimization was exclusively on the quality and efficiency of performed single examinations. In the future critical discussion of the effectiveness of single exams in relation to the clinical outcome will be more important. Unnecessary exams can be avoided, only if in addition to the optimization of single exams (efficiency) there is an optimization strategy for the total imaging process (efficiency and effectiveness). This requires a new definition of processes (Imaging Pathway), new structures for organization (Imaging Center) and a new kind of thinking on the part of the medical staff. Motivation has to be changed from gratification of performed exams to gratification of process quality (medical quality, service quality, economics), including the avoidance of additional (unnecessary) exams.

  1. Syntactic Algorithms for Image Segmentation and a Special Computer Architecture for Image Processing

    DTIC Science & Technology

    1977-12-01

    Experimental Results of Image Segmentation from FLIR (Forward Looking Infrared) Images... Data Acquisition System of... of a picture. Concerning the computer processing time involved in image segmentation, the grey level histogram thresholding approach is quite fast... computer storage and the CPU time for each matching operation. The syntax-controlled method has the advantage of fast computer processing time for

  2. SWNT Imaging Using Multispectral Image Processing

    NASA Astrophysics Data System (ADS)

    Blades, Michael; Pirbhai, Massooma; Rotkin, Slava V.

    2012-02-01

    A flexible optical system was developed to image carbon single-wall nanotube (SWNT) photoluminescence using the multispectral capabilities of a typical CCD camcorder. The built-in Bayer filter of the CCD camera was utilized, with OpenCV C++ libraries for image processing, to decompose the image generated in a high-magnification epifluorescence microscope setup into three pseudo-color channels. By carefully calibrating the filter beforehand, it was possible to extract spectral data from these channels and effectively isolate the SWNT signals from the background.
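
    The Bayer-filter decomposition used here can be sketched at its simplest: the RGGB mosaic recorded by the sensor is split into three channels by subsampling (OpenCV's demosaicing interpolates missing values instead; this illustration does not):

```python
import numpy as np

# Split an RGGB Bayer mosaic into its three color channels by
# subsampling: R at (even, even), G at (even, odd) and (odd, even),
# B at (odd, odd). The mosaic here is a stand-in for a raw frame.
mosaic = np.arange(16, dtype=float).reshape(4, 4)

red   = mosaic[0::2, 0::2]
green = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2.0  # average both G sites
blue  = mosaic[1::2, 1::2]

print(red.shape, green.shape, blue.shape)  # (2, 2) (2, 2) (2, 2)
```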

  3. Image processing with COSMOS

    NASA Astrophysics Data System (ADS)

    Stobie, R. S.; Dodd, R. J.; MacGillivray, H. T.

    1981-12-01

    It is noted that astronomers have for some time been fascinated by the possibility of automatic plate measurement and that measuring engines have been constructed with an ever increasing degree of automation. A description is given of the COSMOS (CoOrdinates, Sizes, Magnitudes, Orientations, and Shapes) system at the Royal Observatory in Edinburgh. An automatic high-speed microdensitometer controlled by a minicomputer is linked to a very fast microcomputer that performs immediate image analysis. The movable carriage, whose position in two coordinates is controlled digitally to an accuracy of 0.5 micron (0.0005 mm) will take plates as large as 356 mm on a side. It is noted that currently the machine operates primarily in the Image Analysis Mode, in which COSMOS must first detect the presence of an image. It does this by scanning and digitizing the photograph in 'raster' fashion and then searching for local enhancements in the density of the exposed emulsion.

  4. Statistical Image Processing.

    DTIC Science & Technology

    1982-11-16

    spectral analysis, texture image analysis and classification, image software package, automatic spatial clustering.

  5. Multi-channel pre-beamformed data acquisition system for research on advanced ultrasound imaging methods.

    PubMed

    Cheung, Chris C P; Yu, Alfred C H; Salimi, Nazila; Yiu, Billy Y S; Tsang, Ivan K H; Kerby, Benjamin; Azar, Reza Zahiri; Dickie, Kris

    2012-02-01

    The lack of open access to the pre-beamformed data of an ultrasound scanner has limited the research of novel imaging methods to a few privileged laboratories. To address this need, we have developed a pre-beamformed data acquisition (DAQ) system that can collect data over 128 array elements in parallel from the Ultrasonix series of research-purpose ultrasound scanners. Our DAQ system comprises three system-level blocks: 1) a connector board that interfaces with the array probe and the scanner through a probe connector port; 2) a main board that triggers DAQ and controls data transfer to a computer; and 3) four receiver boards that are each responsible for acquiring 32 channels of digitized raw data and storing them to on-board memory. This system can acquire pre-beamformed data with 12-bit resolution when using a 40-MHz sampling rate. It houses a 16 GB RAM buffer that is sufficient to store 128 channels of pre-beamformed data for 8000 to 25 000 transmit firings, depending on imaging depth, corresponding to nearly a 2-s period in typical imaging setups. Following acquisition, the data can be transferred through a USB 2.0 link to a computer for offline processing and analysis. To evaluate the feasibility of using the DAQ system for advanced imaging research, two proof-of-concept investigations have been conducted on beamforming and plane-wave B-flow imaging. Results show that adaptive beamforming algorithms such as the minimum variance approach can generate sharper images of a wire cross-section whose diameter is equal to the imaging wavelength (150 μm in our example). Also, plane-wave B-flow imaging can provide more consistent visualization of blood speckle movement given the higher temporal resolution of this imaging approach (2500 fps in our example).
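
    The beamforming investigation can be illustrated with the simplest baseline, delay-and-sum, applied to synthetic pre-beamformed channel data (the paper's example uses minimum-variance weights instead; the integer delays below are made up for illustration):

```python
import numpy as np

# Delay-and-sum sketch: each channel sees the same echo at a known
# per-channel delay; aligning the channels and summing produces a
# coherent gain equal to the channel count.
n_ch, n_samp = 8, 64
pulse_at = 20
delays = np.arange(n_ch)                       # hypothetical sample delays
channels = np.zeros((n_ch, n_samp))
for ch in range(n_ch):
    channels[ch, pulse_at + delays[ch]] = 1.0  # unit echo, delayed per channel

aligned = np.array([np.roll(channels[ch], -delays[ch]) for ch in range(n_ch)])
beamformed = aligned.sum(axis=0)
print(int(beamformed[pulse_at]))               # 8: coherent gain equals n_ch
```

Minimum-variance beamforming replaces the uniform sum with data-adaptive channel weights, which is what sharpens the wire cross-section in the paper's example.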

  6. Industrial applications of process imaging and image processing

    NASA Astrophysics Data System (ADS)

    Scott, David M.; Sunshine, Gregg; Rosen, Lou; Jochen, Ed

    2001-02-01

    Process imaging is the art of visualizing events inside closed industrial processes. Image processing is the art of mathematically manipulating digitized images to extract quantitative information about such processes. Ongoing advances in camera and computer technology have made it feasible to apply these abilities to measurement needs in the chemical industry. To illustrate the point, this paper describes several applications developed at DuPont, where a variety of measurements are based on in-line, at-line, and off-line imaging. Application areas include compounding, melt extrusion, crystallization, granulation, media milling, and particle characterization. Polymer compounded with glass fiber is evaluated by a patented radioscopic (real-time X-ray imaging) technique to measure concentration and dispersion uniformity of the glass. Contamination detection in molten polymer (important for extruder operations) is provided by both proprietary and commercial on-line systems. Crystallization in production reactors is monitored using in-line probes and flow cells. Granulation is controlled by at-line measurements of granule size obtained from image processing. Tomographic imaging provides feedback for improved operation of media mills. Finally, particle characterization is provided by a robotic system that measures individual size and shape for thousands of particles without human supervision. Most of these measurements could not be accomplished with other (non-imaging) techniques.

  7. Acquisition method improvement for Bossa Nova Technologies' full Stokes, passive polarization imaging camera SALSA

    NASA Astrophysics Data System (ADS)

    El Ketara, M.; Vedel, M.; Breugnot, S.

    2016-05-01

    For some applications, fast polarization acquisition is essential (if the observed scene is moving or changing quickly). In this paper, we present a new acquisition method for Bossa Nova Technologies' full Stokes passive polarization imaging camera, the SALSA. This polarization imaging camera is based on a "Division of Time" polarimetry architecture, which preserves the full resolution of the observed image but limits acquisition speed. The goal of this new acquisition method is to overcome the limitations associated with the Division of Time acquisition technique and to obtain high-speed polarization imaging while maintaining image resolution. The efficiency of this new method is demonstrated through different experiments.
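
    The full-Stokes reconstruction behind a Division of Time polarimeter combines intensities measured through successive analyzer settings. A hedged sketch for a single pixel (the measurement values below describe a hypothetical horizontally polarized input, not SALSA data):

```python
import numpy as np

# Stokes vector from six sequential analyzer measurements:
# 0/90/45/135 degree linear polarizers plus right/left circular analyzers.
I0, I90, I45, I135, IR, IL = 1.0, 0.0, 0.5, 0.5, 0.5, 0.5

S = np.array([I0 + I90,      # S0: total intensity
              I0 - I90,      # S1: horizontal vs vertical linear
              I45 - I135,    # S2: +45 vs -45 linear
              IR - IL])      # S3: right vs left circular
dolp = np.hypot(S[1], S[2]) / S[0]   # degree of linear polarization
print(S.tolist(), dolp)              # [1.0, 1.0, 0.0, 0.0] 1.0
```

Because the measurements are sequential, any scene motion between analyzer states corrupts the differences above, which is exactly the Division of Time limitation the paper's acquisition method addresses.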

  8. Optimized list-mode acquisition and data processing procedures for ACS2 based PET systems.

    PubMed

    Langner, Jens; Bühler, Paul; Just, Uwe; Pötzsch, Christian; Will, Edmund; van den Hoff, Jörg

    2006-01-01

    PET systems using the acquisition control system version 2 (ACS2), e.g. the ECAT Exact HR PET scanner series, offer a rather restricted list-mode functionality. For instance, typical transfers of acquisition data consume a considerable amount of time. This represents a severe obstacle to the utilization of potential advantages of list-mode acquisition. In our study, we have developed hardware and software solutions which do not only allow for the integration of list-mode into routine procedures, but also improve the overall runtime stability of the system. We show that our methods are able to speed up the transfer of the acquired data to the image reconstruction and processing workstations by a factor of up to 140. We discuss how this improvement allows for the integration of list-mode-based post-processing methods such as an event-driven movement correction into the data processing environment, and how list-mode is able to improve the overall flexibility of PET investigations in general. Furthermore, we show that our methods are also attractive for conventional histogram-mode acquisition, due to the improved stability of the ACS2 system.
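
    The appeal of list-mode data is that each coincidence event is stored individually and can be rebinned after acquisition, for example into a sinogram, optionally shifting event positions first for event-driven movement correction. An illustrative sketch with synthetic events (not the ACS2 data format):

```python
import numpy as np

# List-mode rebinning sketch: each event carries its own coordinates, so
# histogramming into a sinogram is a post-acquisition choice rather than
# something fixed at scan time (as in histogram-mode acquisition).
rng = np.random.default_rng(2)
n_events = 1000
angles = rng.uniform(0, np.pi, n_events)   # hypothetical projection angles
radii = rng.uniform(-1, 1, n_events)       # hypothetical radial offsets

sinogram, _, _ = np.histogram2d(angles, radii, bins=(16, 32),
                                range=[[0, np.pi], [-1, 1]])
print(int(sinogram.sum()))                 # 1000: every event is binned
```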

  9. Instant super-resolution imaging in live cells and embryos via analog image processing

    PubMed Central

    York, Andrew G.; Chandris, Panagiotis; Nogare, Damian Dalle; Head, Jeffrey; Wawrzusin, Peter; Fischer, Robert S.; Chitnis, Ajay; Shroff, Hari

    2013-01-01

    Existing super-resolution fluorescence microscopes compromise acquisition speed to provide subdiffractive sample information. We report an analog implementation of structured illumination microscopy that enables 3D super-resolution imaging with 145 nm lateral and 350 nm axial resolution, at acquisition speeds up to 100 Hz. By performing image processing operations optically instead of digitally, we removed the need to capture, store, and combine multiple camera exposures, increasing data acquisition rates 10–100x over other super-resolution microscopes and acquiring and displaying super-resolution images in real-time. Low excitation intensities allow imaging over hundreds of 2D sections, and combined physical and computational sectioning allow similar depth penetration to confocal microscopy. We demonstrate the capability of our system by imaging fine, rapidly moving structures including motor-driven organelles in human lung fibroblasts and the cytoskeleton of flowing blood cells within developing zebrafish embryos. PMID:24097271

  10. Biometric iris image acquisition system with wavefront coding technology

    NASA Astrophysics Data System (ADS)

    Hsieh, Sheng-Hsun; Yang, Hsi-Wen; Huang, Shao-Hung; Li, Yung-Hui; Tien, Chung-Hao

    2013-09-01

Biometric signatures for identity recognition have been practiced for centuries. Basically, the personal attributes used for a biometric identification system can be classified into two areas: one is based on physiological attributes, such as DNA, facial features, retinal vasculature, fingerprint, hand geometry, iris texture and so on; the other depends on individual behavioral attributes, such as signature, keystroke, voice and gait style. Among these features, iris recognition is one of the most attractive approaches due to its nature of randomness, texture stability over a lifetime, high entropy density and non-invasive acquisition. While the performance of iris recognition on high quality images is well investigated, few studies have addressed how iris recognition performs on non-ideal image data, especially when the data are acquired under challenging conditions, such as long working distance, dynamic movement of subjects, uncontrolled illumination and so on. There are three main contributions in this paper. Firstly, the optical system parameters, such as magnification and field of view, were optimally designed through first-order optics. Secondly, the irradiance constraint was derived from the optical conservation theorem. Through the relationship between the subject and the detector, we could estimate the limit on working distance when the camera lens and CCD sensor were known. The working distance is set to 3m in our system with pupil diameter 86mm and CCD irradiance 0.3mW/cm2. Finally, we employed a hybrid scheme combining eye tracking with a pan-and-tilt system, wavefront coding technology, filter optimization and post signal recognition to implement a robust iris recognition system in dynamic operation. The blurred image was restored to ensure recognition accuracy over the 3m working distance with 400mm focal length and aperture F/6.3 optics.
The simulation results as well as the experiments validate the proposed code
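The first-order design described (400 mm focal length, 3 m working distance) can be sketched with the thin-lens magnification formula. The iris diameter and pixel pitch below are illustrative assumptions, not values from the paper:

```python
# First-order (thin-lens) estimate of iris image size for a 400 mm focal
# length lens at a 3 m working distance, as in the system described.
# The iris diameter (12 mm) and CCD pixel pitch (7.4 um) are assumptions.

def image_size_mm(object_size_mm, focal_mm, distance_mm):
    """Lateral magnification m = f / (d - f) for an object at distance d."""
    m = focal_mm / (distance_mm - focal_mm)
    return object_size_mm * m

iris_mm = 12.0           # typical human iris diameter (assumption)
f_mm, d_mm = 400.0, 3000.0
img_mm = image_size_mm(iris_mm, f_mm, d_mm)

pixel_pitch_mm = 0.0074  # assumed CCD pixel pitch
pixels_across = img_mm / pixel_pitch_mm
print(f"iris image: {img_mm:.2f} mm -> {pixels_across:.0f} pixels across")
```

Under these assumptions the iris spans roughly 250 pixels, comfortably above the ~150 pixels often cited as adequate for iris texture analysis.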

  11. The logical syntax of number words: theory, acquisition and processing.

    PubMed

    Musolino, Julien

    2009-04-01

Recent work on the acquisition of number words has emphasized the importance of integrating linguistic and developmental perspectives [Musolino, J. (2004). The semantics and acquisition of number words: Integrating linguistic and developmental perspectives. Cognition, 93, 1-41; Papafragou, A., Musolino, J. (2003). Scalar implicatures: Experiments at the semantics-pragmatics interface. Cognition, 86, 253-282; Hurewitz, F., Papafragou, A., Gleitman, L., Gelman, R. (2006). Asymmetries in the acquisition of numbers and quantifiers. Language Learning and Development, 2, 76-97; Huang, Y. T., Snedeker, J., Spelke, L. (submitted for publication). What exactly do numbers mean?]. Specifically, these studies have shown that data from experimental investigations of child language can be used to illuminate core theoretical issues in the semantic and pragmatic analysis of number terms. In this article, I extend this approach to the logico-syntactic properties of number words, focusing on the way numerals interact with each other (e.g. Three boys are holding two balloons) as well as with other quantified expressions (e.g. Three boys are holding each balloon). On the basis of their intuitions, linguists have claimed that such sentences give rise to at least four different interpretations, reflecting the complexity of the linguistic structure and syntactic operations involved. Using psycholinguistic experimentation with preschoolers (n=32) and adult speakers of English (n=32), I show that (a) for adults, the intuitions of linguists can be verified experimentally, (b) by the age of 5, children have knowledge of the core aspects of the logical syntax of number words, (c) in spite of this knowledge, children nevertheless differ from adults in systematic ways, (d) the differences observed between children and adults can be accounted for on the basis of an independently motivated, linguistically-based processing model [Geurts, B. (2003). Quantifying kids. Language

  12. Image processing: some challenging problems.

    PubMed Central

    Huang, T S; Aizawa, K

    1993-01-01

    Image processing can be broadly defined as the manipulation of signals which are inherently multidimensional. The most common such signals are photographs and video sequences. The goals of processing or manipulation can be (i) compression for storage or transmission; (ii) enhancement or restoration; (iii) analysis, recognition, and understanding; or (iv) visualization for human observers. The use of image processing techniques has become almost ubiquitous; they find applications in such diverse areas as astronomy, archaeology, medicine, video communication, and electronic games. Nonetheless, many important problems in image processing remain unsolved. It is the goal of this paper to discuss some of these challenging problems. In Section I, we mention a number of outstanding problems. Then, in the remainder of this paper, we concentrate on one of them: very-low-bit-rate video compression. This is chosen because it involves almost all aspects of image processing. PMID:8234312

  13. Image Processing: Some Challenging Problems

    NASA Astrophysics Data System (ADS)

    Huang, T. S.; Aizawa, K.

    1993-11-01

    Image processing can be broadly defined as the manipulation of signals which are inherently multidimensional. The most common such signals are photographs and video sequences. The goals of processing or manipulation can be (i) compression for storage or transmission; (ii) enhancement or restoration; (iii) analysis, recognition, and understanding; or (iv) visualization for human observers. The use of image processing techniques has become almost ubiquitous; they find applications in such diverse areas as astronomy, archaeology, medicine, video communication, and electronic games. Nonetheless, many important problems in image processing remain unsolved. It is the goal of this paper to discuss some of these challenging problems. In Section I, we mention a number of outstanding problems. Then, in the remainder of this paper, we concentrate on one of them: very-low-bit-rate video compression. This is chosen because it involves almost all aspects of image processing.

  14. Image processing for optical mapping.

    PubMed

    Ravindran, Prabu; Gupta, Aditya

    2015-01-01

Optical Mapping is an established single-molecule, whole-genome analysis system, which has been used to gain a comprehensive understanding of genomic structure and to study structural variation of complex genomes. A critical component of the Optical Mapping system is the image processing module, which extracts single-molecule restriction maps from image datasets of immobilized, restriction-digested and fluorescently stained large DNA molecules. In this review, we describe robust and efficient image processing techniques for processing these massive datasets and extracting accurate restriction maps in the presence of noise, ambiguity and confounding artifacts. We also highlight a few applications of the Optical Mapping system.

  15. High-speed image acquisition technology in quality detection of workpiece surface

    NASA Astrophysics Data System (ADS)

    Wu, Kaihua; Jin, Zexuan; Wang, Wenjie; Chen, Nian

    2016-11-01

High-speed image acquisition technology is of great significance for improving the efficiency of workpiece surface quality detection, and image quality directly affects the final test results. Aiming at high-speed image acquisition for online detection of workpiece surface quality, a high-speed online acquisition method for workpiece images was developed. A high-speed online image acquisition sequence was designed. The quantitative relationship between positioning accuracy, motion blur, exposure time and workpiece speed in high-speed online image acquisition was analyzed. The effect of vibration between the transfer mechanism and the workpiece was analyzed. Fast triggering was implemented with a photoelectric sensor. Accurate positioning was implemented using a high-accuracy time-delay module. Motion blur was controlled by reducing the exposure time. A high-speed image acquisition system was designed based on this method. The positioning accuracy was less than 0.1 mm, and the motion blur was less than one pixel.
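The quantitative relationship between workpiece speed, exposure time and motion blur can be sketched as follows; the speed, magnification and pixel size are illustrative assumptions, not the paper's measured values:

```python
# Motion blur (in pixels) for a workpiece moving at speed v during exposure t:
#   blur_px = v * t * magnification / pixel_size
# To keep blur under one pixel, solve for the maximum exposure time.
# The speed, magnification and pixel size here are illustrative assumptions.

def blur_pixels(speed_mm_s, exposure_s, magnification, pixel_mm):
    return speed_mm_s * exposure_s * magnification / pixel_mm

def max_exposure_s(speed_mm_s, magnification, pixel_mm, max_blur_px=1.0):
    return max_blur_px * pixel_mm / (speed_mm_s * magnification)

v = 500.0        # workpiece speed, mm/s (assumption)
mag = 0.2        # optical magnification (assumption)
pix = 0.005      # pixel size on sensor, mm (assumption)

t_max = max_exposure_s(v, mag, pix)
print(f"max exposure for <1 px blur: {t_max * 1e6:.0f} us")
```

This is why the paper controls blur by reducing exposure time: at these assumed values the exposure budget is only tens of microseconds.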

  16. Image Processing REST Web Services

    DTIC Science & Technology

    2013-03-01

collections, deblurring, contrast enhancement, and super resolution. 1. Original image with target chip to super resolve 2. Unenhanced, extracted target chip 3. Super-resolved target chip 4. Super-resolved, deblurred target chip 5. Super-resolved, deblurred and contrast-enhanced target chip (Image 1. Chaining the image processing algorithms.) 2. Resources: There are two types of resources associated with these
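The chained stages listed in the record (super resolution, deblurring, contrast enhancement) can be sketched as a composable pipeline. The stage implementations below are crude local stand-ins for what the report exposes as REST services; all names and operations are hypothetical:

```python
import numpy as np

# Minimal sketch of chaining image-processing stages in the order the record
# lists them: super-resolve -> deblur -> contrast-enhance. Each stage is a
# crude local stand-in; in the actual system each would be a REST call.

def super_resolve(img, factor=2):
    """Nearest-neighbour upsampling as a stand-in for super resolution."""
    return np.kron(img, np.ones((factor, factor)))

def deblur(img):
    """Stand-in sharpening: unsharp masking with a 3x3 box blur."""
    padded = np.pad(img, 1, mode="edge")
    blurred = sum(padded[i:i + img.shape[0], j:j + img.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
    return np.clip(img + (img - blurred), 0.0, 1.0)

def contrast_enhance(img):
    """Linear stretch to the full [0, 1] range."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else img

def chain(img, stages):
    for stage in stages:
        img = stage(img)
    return img

chip = np.random.default_rng(0).random((8, 8))
out = chain(chip, [super_resolve, deblur, contrast_enhance])
print(out.shape)  # (16, 16)
```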

  17. SOFT-1: Imaging Processing Software

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Five levels of image processing software are enumerated and discussed: (1) logging and formatting; (2) radiometric correction; (3) correction for geometric camera distortion; (4) geometric/navigational corrections; and (5) general software tools. Specific concerns about access to and analysis of digital imaging data within the Planetary Data System are listed.
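The radiometric correction level can be illustrated with the standard dark-frame and flat-field correction formula; the synthetic frames below are assumptions for demonstration only:

```python
import numpy as np

# Standard dark-frame and flat-field radiometric correction:
#   corrected = (raw - dark) / (flat - dark), rescaled by the mean gain
# so corrected values stay in the raw image's units. All frames synthetic.

def flat_field_correct(raw, dark, flat):
    gain = flat - dark
    gain = np.where(gain <= 0, gain.mean(), gain)  # guard dead pixels
    return (raw - dark) * gain.mean() / gain

rng = np.random.default_rng(1)
truth = np.full((4, 4), 100.0)                       # uniform scene
dark = rng.normal(10.0, 0.1, (4, 4))                 # detector offset
vignette = np.linspace(1.0, 0.6, 16).reshape(4, 4)   # uneven response
raw = truth * vignette + dark
flat = 1.0 * vignette + dark                         # flat-field exposure

corrected = flat_field_correct(raw, dark, flat)
print(corrected.std())  # near zero: the vignetting is removed
```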

  18. Photographic image enhancement and processing

    NASA Technical Reports Server (NTRS)

    Lockwood, H. E.

    1975-01-01

    Image processing techniques (computer and photographic) are described which are used within the JSC Photographic Technology Division. Two purely photographic techniques used for specific subject isolation are discussed in detail. Sample imagery is included.

  19. Reading acquisition enhances an early visual process of contour integration.

    PubMed

    Szwed, Marcin; Ventura, Paulo; Querido, Luis; Cohen, Laurent; Dehaene, Stanislas

    2012-01-01

    The acquisition of reading has an extensive impact on the developing brain and leads to enhanced abilities in phonological processing and visual letter perception. Could this expertise also extend to early visual abilities outside the reading domain? Here we studied the performance of illiterate, ex-illiterate and literate adults closely matched in age, socioeconomic and cultural characteristics, on a contour integration task known to depend on early visual processing. Stimuli consisted of a closed egg-shaped contour made of disconnected Gabor patches, within a background of randomly oriented Gabor stimuli. Subjects had to decide whether the egg was pointing left or right. Difficulty was varied by jittering the orientation of the Gabor patches forming the contour. Contour integration performance was lower in illiterates than in both ex-illiterate and literate controls. We argue that this difference in contour perception must reflect a genuine difference in visual function. According to this view, the intensive perceptual training that accompanies reading acquisition also improves early visual abilities, suggesting that the impact of literacy on the visual system is more widespread than originally proposed.

  20. Sgraffito simulation through image processing

    NASA Astrophysics Data System (ADS)

    Guerrero, Roberto A.; Serón Arbeloa, Francisco J.

    2011-10-01

This paper presents a tool for simulating the traditional Sgraffito technique through digital image processing. The tool is based on a digital image pile and a set of attributes recovered from the image at the bottom of the pile using the Streit and Buchanan multiresolution image pyramid. The technique tries to preserve the principles of artistic composition by means of the attributes of color, luminance and shape recovered from the foundation image. A pair of simulated scratching objects establishes how the recovered attributes are to be painted, and different attributes can be painted by using different scratching primitives. The resulting image is a colorimetric composition reached from the image on the top of the pile, the color of the images revealed by scratching and the inner characteristics of each scratching primitive. The technique combines elements of image processing, art and computer graphics, allowing users to make their own free compositions and providing a means for the development of visual communication skills within the user-observer relationship. It also enables the application of the given concepts in non-artistic fields with specific subject tools.

  1. Dual-energy imaging of the chest: Optimization of image acquisition techniques for the 'bone-only' image

    SciTech Connect

    Shkumat, N. A.; Siewerdsen, J. H.; Richard, S.; Paul, N. S.; Yorkston, J.; Van Metter, R.

    2008-02-15

Experiments were conducted to determine optimal acquisition techniques for bone image decompositions for a prototype dual-energy (DE) imaging system. Technique parameters included kVp pair (denoted [kVp(L)/kVp(H)]) and dose allocation (the proportion of dose in low- and high-energy projections), each optimized to provide maximum signal difference-to-noise ratio in DE images. Experiments involved a chest phantom representing an average patient size and containing simulated ribs and lung nodules. Low- and high-energy kVp were varied from 60-90 and 120-150 kVp, respectively. The optimal kVp pair was determined to be [60/130] kVp, with image quality showing a strong dependence on low-kVp selection. Optimal dose allocation was approximately 0.5, i.e., an equal dose imparted by the low- and high-energy projections. The results complement earlier studies of optimal DE soft-tissue image acquisition, with differences attributed to the specific imaging task. Together, the results help to guide the development and implementation of high-performance DE imaging systems, with applications including lung nodule detection and diagnosis, pneumothorax identification, and musculoskeletal imaging (e.g., discrimination of rib fractures from metastasis).

  2. Dual-energy imaging of the chest: optimization of image acquisition techniques for the 'bone-only' image.

    PubMed

    Shkumat, N A; Siewerdsen, J H; Richard, S; Paul, N S; Yorkston, J; Van Metter, R

    2008-02-01

Experiments were conducted to determine optimal acquisition techniques for bone image decompositions for a prototype dual-energy (DE) imaging system. Technique parameters included kVp pair (denoted [kVp(L)/kVp(H)]) and dose allocation (the proportion of dose in low- and high-energy projections), each optimized to provide maximum signal difference-to-noise ratio in DE images. Experiments involved a chest phantom representing an average patient size and containing simulated ribs and lung nodules. Low- and high-energy kVp were varied from 60-90 and 120-150 kVp, respectively. The optimal kVp pair was determined to be [60/130] kVp, with image quality showing a strong dependence on low-kVp selection. Optimal dose allocation was approximately 0.5, i.e., an equal dose imparted by the low- and high-energy projections. The results complement earlier studies of optimal DE soft-tissue image acquisition, with differences attributed to the specific imaging task. Together, the results help to guide the development and implementation of high-performance DE imaging systems, with applications including lung nodule detection and diagnosis, pneumothorax identification, and musculoskeletal imaging (e.g., discrimination of rib fractures from metastasis).
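A common way to form the bone-only image from the low- and high-energy projections is weighted log subtraction. The sketch below uses illustrative attenuation coefficients (not values from this study) to show how choosing the weight cancels the soft-tissue signal:

```python
import numpy as np

# Dual-energy bone image by weighted log subtraction:
#   bone = ln(I_H) - w * ln(I_L)
# Choosing w to cancel soft tissue leaves only bone contrast.
# The attenuation coefficients below are illustrative, not measured values.

mu_soft_L, mu_soft_H = 0.25, 0.20   # soft tissue at low/high kVp (assumed)
mu_bone_L, mu_bone_H = 0.60, 0.30   # bone at low/high kVp (assumed)

t_soft = np.array([[10.0, 10.0], [12.0, 12.0]])  # soft-tissue thickness, cm
t_bone = np.array([[0.0, 1.0], [0.0, 1.0]])      # bone thickness, cm

I_L = np.exp(-(mu_soft_L * t_soft + mu_bone_L * t_bone))
I_H = np.exp(-(mu_soft_H * t_soft + mu_bone_H * t_bone))

w = mu_soft_H / mu_soft_L           # cancels the soft-tissue term
bone = np.log(I_H) - w * np.log(I_L)
print(bone)  # nonzero only where bone is present
```

Note how the first column is zero despite the two rows having different soft-tissue thicknesses: the weight w removes soft-tissue variation entirely.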

  3. Fuzzy image processing in sun sensor

    NASA Technical Reports Server (NTRS)

    Mobasser, S.; Liebe, C. C.; Howard, A.

    2003-01-01

This paper describes how fuzzy image processing is implemented in the instrument. A comparison of the fuzzy image processing with a more conventional image processing algorithm is provided, showing that the fuzzy image processing yields better accuracy than conventional image processing.

  4. Predator Acquisition Program Transition from Rapid to Standard Processes

    DTIC Science & Technology

    2012-06-08

the first Advanced Concept Technology Demonstration to transition into the Defense Acquisition System. When it did, it operated within the Air Force’s... ACRONYMS: ACTD Advanced Concept Technology Demonstration; AF Air Force; DAB Defense Acquisition Board; DARO Defense

  5. Contractor relationships and inter-organizational strategies in NASA's R and D acquisition process

    NASA Technical Reports Server (NTRS)

    Guiltinan, J.

    1976-01-01

    Interorganizational analysis of NASA's acquisition process for research and development systems is discussed. The importance of understanding the contractor environment, constraints, and motives in selecting an acquisition strategy is demonstrated. By articulating clear project goals, by utilizing information about the contractor and his needs at each stage in the acquisition process, and by thorough analysis of the inter-organizational relationship, improved selection of acquisition strategies and business practices is possible.

  6. Image processing using reconfigurable FPGAs

    NASA Astrophysics Data System (ADS)

    Ferguson, Lee

    1996-10-01

The use of reconfigurable field-programmable gate arrays (FPGAs) for imaging applications shows considerable promise for filling the gap that often occurs when digital signal processor chips fail to meet performance specifications. Single-chip DSPs do not have the overall performance to meet the needs of many imaging applications, particularly in real-time designs. Using multiple DSPs to boost performance often presents major design challenges in maintaining data alignment and process synchronization. These challenges can impose serious cost, power consumption and board space penalties. Image processing requires manipulating massive amounts of data at high speed. Although DSP chips can process data at high speeds, their architectures can inhibit overall system performance in real-time imaging. The rate of operations can be increased when they are performed in dedicated hardware, such as special-purpose imaging devices and FPGAs, which provide the horsepower necessary to implement real-time image processing products successfully and cost-effectively. For many fixed applications, non-SRAM-based (antifuse or flash-based) FPGAs provide the raw speed to accomplish standard high-speed functions. However, in applications where algorithms are continuously changing and compute operations must be modified, only SRAM-based FPGAs give enough flexibility. The addition of reconfigurable FPGAs as a flexible hardware facility enables DSP chips to perform optimally. The benefits primarily stem from optimizing the hardware for the algorithms or the use of reconfigurable hardware to enhance the product architecture. And with SRAM-based FPGAs that are capable of partial dynamic reconfiguration, such as the Cache-Logic FPGAs from Atmel, continuous modification of data and logic is not only possible, it is practical as well. First we review the particular demands of image processing. Then we present various applications and discuss strategies for exploiting the capabilities of

  7. Integrating image processing in PACS.

    PubMed

    Faggioni, Lorenzo; Neri, Emanuele; Cerri, Francesca; Turini, Francesca; Bartolozzi, Carlo

    2011-05-01

Integration of RIS and PACS services into a single solution has become a widespread reality in daily radiological practice, allowing substantial acceleration of workflow with greater ease of work compared with older generation film-based radiological activity. In particular, the fast and spectacular recent evolution of digital radiology (with special reference to cross-sectional imaging modalities, such as CT and MRI) has been paralleled by the development of integrated RIS-PACS systems with advanced image processing tools (two- and/or three-dimensional) that were an exclusive task of costly dedicated workstations until a few years ago. This new scenario is likely to further improve productivity in the radiology department, with reduction of the time needed for image interpretation and reporting, as well as to cut costs for the purchase of dedicated standalone image processing workstations. In this paper, a general description of typical integrated RIS-PACS architecture with image processing capabilities will be provided, and the main available image processing tools will be illustrated.

  8. Autonomous Closed-Loop Tasking, Acquisition, Processing, and Evaluation for Situational Awareness Feedback

    NASA Technical Reports Server (NTRS)

    Frye, Stuart; Mandl, Dan; Cappelaere, Pat

    2016-01-01

This presentation describes the closed-loop satellite autonomy methods used to connect users and the assets on Earth Orbiter-1 (EO-1) and similar satellites. The base layer is a distributed architecture based on the Goddard Mission Services Evolution Concept (GMSEC), so each asset remains under independent control. Situational awareness is provided by a middleware layer through a common Application Programmer Interface (API) to GMSEC components developed at GSFC. Users set up their own tasking requests and receive views into immediate past acquisitions in their area of interest and into future acquisition feasibilities across all assets. Automated notifications via pubsub feeds are returned to users, containing published links to image footprints, algorithm results, and full data sets. Theme-based algorithms are available on demand for processing.

  9. Enhanced imaging process for xeroradiography

    NASA Astrophysics Data System (ADS)

    Fender, William D.; Zanrosso, Eddie M.

    1993-09-01

An enhanced mammographic imaging process has been developed which is based on the conventional powder-toner selenium technology used in the Xerox 125/126 x-ray imaging system. The process is derived from improvements in the amorphous selenium x-ray photoconductor, the blue powder toner and the aerosol powder dispersion process. Comparisons of image quality and x-ray dose using the Xerox aluminum-wedge breast phantom and the Radiation Measurements Model 152D breast phantom have been made between the new Enhanced Process, the standard Xerox 125/126 System and screen-film at mammographic x-ray exposure parameters typical of each modality. When comparing the Enhanced Xeromammographic Process with the standard 125/126 System, a distinct advantage is seen for the Enhanced Process, with equivalent mass detection and superior fiber and speck detection. The broader imaging latitude of enhanced and standard Xeroradiography, in comparison to film, is illustrated in images made using the aluminum-wedge breast phantom.

  10. Differential morphology and image processing.

    PubMed

    Maragos, P

    1996-01-01

    Image processing via mathematical morphology has traditionally used geometry to intuitively understand morphological signal operators and set or lattice algebra to analyze them in the space domain. We provide a unified view and analytic tools for morphological image processing that is based on ideas from differential calculus and dynamical systems. This includes ideas on using partial differential or difference equations (PDEs) to model distance propagation or nonlinear multiscale processes in images. We briefly review some nonlinear difference equations that implement discrete distance transforms and relate them to numerical solutions of the eikonal equation of optics. We also review some nonlinear PDEs that model the evolution of multiscale morphological operators and use morphological derivatives. Among the new ideas presented, we develop some general 2-D max/min-sum difference equations that model the space dynamics of 2-D morphological systems (including the distance computations) and some nonlinear signal transforms, called slope transforms, that can analyze these systems in a transform domain in ways conceptually similar to the application of Fourier transforms to linear systems. Thus, distance transforms are shown to be bandpass slope filters. We view the analysis of the multiscale morphological PDEs and of the eikonal PDE solved via weighted distance transforms as a unified area in nonlinear image processing, which we call differential morphology, and briefly discuss its potential applications to image processing and computer vision.
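The discrete distance transforms implemented by min-sum difference equations can be sketched with the classic two-pass recursion. This is a standard construction using city-block weights, not code from the paper:

```python
import numpy as np

# Two-pass (forward/backward) min-sum recursion computing a city-block
# distance transform: a discrete analogue of the eikonal equation with
# unit speed. Each pass propagates d(x) = min(d(x), d(neighbour) + 1).

def distance_transform(mask):
    """mask: boolean array, True = feature pixel (distance 0)."""
    big = mask.size  # larger than any possible distance
    d = np.where(mask, 0, big).astype(float)
    rows, cols = d.shape
    # forward pass: top-left to bottom-right
    for i in range(rows):
        for j in range(cols):
            if i > 0:
                d[i, j] = min(d[i, j], d[i - 1, j] + 1)
            if j > 0:
                d[i, j] = min(d[i, j], d[i, j - 1] + 1)
    # backward pass: bottom-right to top-left
    for i in range(rows - 1, -1, -1):
        for j in range(cols - 1, -1, -1):
            if i < rows - 1:
                d[i, j] = min(d[i, j], d[i + 1, j] + 1)
            if j < cols - 1:
                d[i, j] = min(d[i, j], d[i, j + 1] + 1)
    return d

mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True
print(distance_transform(mask))  # |i-2| + |j-2| at each pixel
```

Two raster passes suffice for the city-block metric; other chamfer weights approximate the Euclidean distance in the same min-sum framework.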

  11. Image Processing Language. Phase 2.

    DTIC Science & Technology

    1988-11-01

knowledge engineering of coherent collections of methodological tools as they appear in the literature, and the implementation of expert knowledge in...knowledge representation becomes even more desirable. The role of morphology (Reference 30) as a knowledge formalization tool is another area which is...sets of image processing algorithms. These analyses are to be carried out in several modes including a complete translation to image algebra machine

  12. Digital processing of radiographic images

    NASA Technical Reports Server (NTRS)

    Bond, A. D.; Ramapriyan, H. K.

    1973-01-01

Some techniques and the software documentation for the digital enhancement of radiographs are presented. Both image handling and image processing operations are considered. The image handling operations dealt with are: (1) conversion of format of data from packed to unpacked and vice versa; (2) automatic extraction of image data arrays; (3) transposition and 90 deg rotations of large data arrays; (4) translation of data arrays for registration; and (5) reduction of the dimensions of data arrays by integral factors. Both the frequency and the spatial domain approaches are presented for the design and implementation of the image processing operations. It is shown that spatial domain recursive implementation of filters is much faster than nonrecursive implementations using fast Fourier transforms (FFTs) for the cases of interest in this work. The recursive implementation of a class of matched filters for enhancing image signal-to-noise ratio is described. Test patterns are used to illustrate the filtering operations. The application of the techniques to radiographic images of metallic structures is demonstrated through several examples.
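The speed advantage claimed for recursive spatial-domain filters can be illustrated with a first-order recursive smoother, whose cost per pixel is constant regardless of the effective kernel width. This is a generic sketch, not the paper's matched filter:

```python
import numpy as np

# First-order recursive (IIR) smoothing along one axis:
#   y[n] = a * x[n] + (1 - a) * y[n-1]
# Run forward then backward for a symmetric, zero-phase response. Cost is
# O(1) per sample per pass, independent of the effective smoothing width,
# which is what makes recursive implementations faster than FFT convolution.

def smooth_1d(x, a=0.3):
    y = np.empty_like(x, dtype=float)
    acc = x[0]
    for n, v in enumerate(x):            # forward pass
        acc = a * v + (1 - a) * acc
        y[n] = acc
    acc = y[-1]
    for n in range(len(x) - 1, -1, -1):  # backward pass
        acc = a * y[n] + (1 - a) * acc
        y[n] = acc
    return y

x = np.zeros(21)
x[10] = 1.0                 # unit impulse
y = smooth_1d(x)
print(y.argmax())           # 10: response centred on the impulse
```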

  13. Advanced technology development for image gathering, coding, and processing

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.

    1990-01-01

    Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.

  14. Image processing of galaxy photographs

    NASA Technical Reports Server (NTRS)

    Arp, H.; Lorre, J.

    1976-01-01

    New computer techniques for analyzing and processing photographic images of galaxies are presented, with interesting scientific findings gleaned from the processed photographic data. Discovery and enhancement of very faint and low-contrast nebulous features, improved resolution of near-limit detail in nebulous and stellar images, and relative colors of a group of nebulosities in the field are attained by the methods. Digital algorithms, nonlinear pattern-recognition filters, linear convolution filters, plate averaging and contrast enhancement techniques, and an atmospheric deconvolution technique are described. New detail is revealed in images of NGC 7331, Stephan's Quintet, Seyfert's Sextet, and the jet in M87, via processes of addition of plates, star removal, contrast enhancement, standard deviation filtering, and computer ratioing to bring out qualitative color differences.
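The plate-averaging step relies on a standard fact: averaging N independently noisy exposures of the same field reduces the noise standard deviation by roughly sqrt(N). A minimal sketch with synthetic data:

```python
import numpy as np

# Plate (frame) averaging: the mean of N independently noisy exposures of
# the same field keeps the signal but reduces noise std by about sqrt(N).

rng = np.random.default_rng(42)
truth = np.linspace(0.0, 1.0, 256)            # stand-in brightness profile
sigma = 0.2
plates = [truth + rng.normal(0, sigma, truth.shape) for _ in range(16)]

stacked = np.mean(plates, axis=0)
single_err = np.std(plates[0] - truth)
stacked_err = np.std(stacked - truth)
print(single_err / stacked_err)  # close to sqrt(16) = 4
```

The same averaging underlies faint-feature detection: structure buried below the noise floor of a single plate emerges once enough plates are stacked.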

  15. FITS Liberator: Image processing software

    NASA Astrophysics Data System (ADS)

    Lindberg Christensen, Lars; Nielsen, Lars Holm; Nielsen, Kaspar K.; Johansen, Teis; Hurt, Robert; de Martin, David

    2012-06-01

    The ESA/ESO/NASA FITS Liberator makes it possible to process and edit astronomical science data in the FITS format to produce stunning images of the universe. Formerly a plugin for Adobe Photoshop, the current version of FITS Liberator is a stand-alone application and no longer requires Photoshop. This image processing software makes it possible to create color images using raw observations from a range of telescopes; the FITS Liberator continues to support the FITS and PDS formats, preferred by astronomers and planetary scientists respectively, which enables data to be processed from a wide range of telescopes and planetary probes, including ESO's Very Large Telescope, the NASA/ESA Hubble Space Telescope, NASA's Spitzer Space Telescope, ESA's XMM-Newton Telescope and Cassini-Huygens or Mars Reconnaissance Orbiter.
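The core operation of such software is mapping high-dynamic-range science data into a displayable image. The arcsinh stretch below is one scaling commonly used for astronomical data; it is a generic sketch on synthetic pixels (loading actual FITS files would typically use astropy.io.fits, omitted here to keep the example dependency-free):

```python
import numpy as np

# Arcsinh stretch: compresses the high dynamic range of astronomical data
# into a displayable [0, 1] image, keeping faint structure visible without
# saturating bright sources. Input here is synthetic.

def asinh_stretch(data, beta=10.0):
    """Map data to [0, 1] with an arcsinh curve; beta sets the softening."""
    d = data - data.min()
    scaled = np.arcsinh(d / beta)
    return scaled / scaled.max()

data = np.geomspace(1.0, 1e5, 64).reshape(8, 8)  # 5 decades of dynamic range
img = asinh_stretch(data)
print(img.min(), img.max())  # 0.0 1.0
```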

  16. Phase in Optical Image Processing

    NASA Astrophysics Data System (ADS)

    Naughton, Thomas J.

    2010-04-01

    The use of phase has a long standing history in optical image processing, with early milestones being in the field of pattern recognition, such as VanderLugt's practical construction technique for matched filters, and (implicitly) Goodman's joint Fourier transform correlator. In recent years, the flexibility afforded by phase-only spatial light modulators and digital holography, for example, has enabled many processing techniques based on the explicit encoding and decoding of phase. One application area concerns efficient numerical computations. Pushing phase measurement to its physical limits, designs employing the physical properties of phase have ranged from the sensible to the wonderful, in some cases making computationally easy problems easier to solve and in other cases addressing mathematics' most challenging computationally hard problems. Another application area is optical image encryption, in which, typically, a phase mask modulates the fractional Fourier transformed coefficients of a perturbed input image, and the phase of the inverse transform is then sensed as the encrypted image. The inherent linearity that makes the system so elegant mitigates against its use as an effective encryption technique, but we show how a combination of optical and digital techniques can restore confidence in that security. We conclude with the concept of digital hologram image processing, and applications of same that are uniquely suited to optical implementation, where the processing, recognition, or encryption step operates on full field information, such as that emanating from a coherently illuminated real-world three-dimensional object.
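The correlator idea can be sketched digitally as phase correlation, which keeps only the Fourier phase of the cross-power spectrum and recovers the translation between two images. This is a generic sketch, not a specific optical design from the paper:

```python
import numpy as np

# Phase correlation: a digital cousin of the optical matched filter that
# uses only the Fourier phase. The inverse transform of the normalized
# cross-power spectrum peaks at the translation between two images.

def phase_correlate(a, b):
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    R = A * np.conj(B)
    R /= np.abs(R) + 1e-12            # keep phase, discard magnitude
    corr = np.fft.ifft2(R).real
    peak = np.unravel_index(corr.argmax(), corr.shape)
    return tuple(int(v) for v in peak)

rng = np.random.default_rng(7)
img = rng.random((32, 32))
shifted = np.roll(img, shift=(5, 3), axis=(0, 1))
print(phase_correlate(shifted, img))  # (5, 3)
```

Discarding the magnitude is what makes the correlation peak sharp and illumination-invariant, which is exactly why phase carries most of the recognisable structure of an image.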

  17. Real-time multi-camera video acquisition and processing platform for ADAS

    NASA Astrophysics Data System (ADS)

    Saponara, Sergio

    2016-04-01

    The paper presents the design of a real-time and low-cost embedded system for image acquisition and processing in Advanced Driver Assisted Systems (ADAS). The system adopts a multi-camera architecture to provide a panoramic view of the objects surrounding the vehicle. Fish-eye lenses are used to achieve a large Field of View (FOV). Since they introduce radial distortion of the images projected on the sensors, a real-time algorithm for their correction is also implemented in a pre-processor. An FPGA-based hardware implementation, re-using IP macrocells for several ADAS algorithms, allows for real-time processing of input streams from VGA automotive CMOS cameras.
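Radial distortion correction of the kind the pre-processor performs can be sketched with a one-parameter polynomial lens model; the model and coefficient below are illustrative assumptions, not the system's calibrated fish-eye model:

```python
# Radial distortion with a one-parameter polynomial model:
#   r_d = r_u * (1 + k * r_u**2)      (distortion applied by the lens)
# Undistortion inverts this per pixel radius, here by fixed-point iteration.
# The coefficient k is an illustrative assumption, not a calibrated value.

def distort(r_u, k):
    return r_u * (1 + k * r_u ** 2)

def undistort(r_d, k, iters=20):
    r_u = r_d                          # initial guess
    for _ in range(iters):
        r_u = r_d / (1 + k * r_u ** 2)
    return r_u

k = 0.1
r_true = 0.8
r_seen = distort(r_true, k)            # radius the sensor records
r_est = undistort(r_seen, k)
print(round(r_est, 6))
```

In a real-time FPGA implementation this per-pixel inversion is typically precomputed into a remapping lookup table rather than iterated per frame.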

  18. Using image processing techniques on proximity probe signals in rotordynamics

    NASA Astrophysics Data System (ADS)

    Diamond, Dawie; Heyns, Stephan; Oberholster, Abrie

    2016-06-01

This paper proposes a new approach to processing proximity probe signals in rotordynamic applications. It is argued that the signal can be interpreted as a one-dimensional image. Existing image processing techniques can then be used to gain information about the object being measured. Some results from one application are presented. Rotor blade tip deflections can be calculated by localizing phase information in this one-dimensional image. It is experimentally shown that the newly proposed method performs more accurately than standard techniques, especially where the sampling rate of the data acquisition system is inadequate by conventional standards.
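Localizing a feature with sub-sample precision in a one-dimensional signal, the kind of problem that arises when the sampling rate is marginal, is commonly done by parabolic interpolation around a cross-correlation peak. The sketch below uses a synthetic pulse and is not the paper's specific phase-localization method:

```python
import numpy as np

# Sub-sample localization of a pulse in a 1-D signal: cross-correlate with
# a reference, then refine the integer peak with parabolic interpolation.
# The Gaussian pulse and the fractional delay are synthetic test data.

def subsample_peak(corr):
    i = int(corr.argmax())
    y0, y1, y2 = corr[i - 1], corr[i], corr[i + 1]
    return i + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)

t = np.arange(200, dtype=float)

def pulse(delay):
    return np.exp(-0.5 * ((t - 100 - delay) / 4.0) ** 2)

ref = pulse(0.0)
sig = pulse(3.4)                     # pulse arrives 3.4 samples late
corr = np.correlate(sig, ref, mode="full")
lag = subsample_peak(corr) - (len(ref) - 1)
print(round(lag, 2))  # 3.4
```

Even though the true delay falls between samples, the parabolic fit recovers it to far better than one-sample resolution.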

  19. Light sheet microscopes: Novel imaging toolbox for visualizing life's processes.

    PubMed

    Heddleston, John M; Chew, Teng-Leong

    2016-11-01

    Capturing dynamic processes in live samples is a nontrivial task in biological imaging. Although fluorescence provides high specificity and contrast compared to other light microscopy techniques, the photophysical principles of this method can have a harmful effect on the sample. Current advances in light sheet microscopy have created a novel imaging toolbox that allows for rapid acquisition of high-resolution fluorescent images with minimal perturbation of the processes of interest. Each unique design has its own advantages and limitations. In this review, we describe several cutting edge light sheet microscopes and their optimal applications.

  20. Fingerprint recognition using image processing

    NASA Astrophysics Data System (ADS)

    Dholay, Surekha; Mishra, Akassh A.

    2011-06-01

    Fingerprint recognition is concerned with the difficult task of efficiently matching the image of a person's fingerprint against the fingerprints present in a database. It is used in forensic science, where it helps to identify criminals, and in the authentication of a particular person, since a fingerprint is unique to each individual. The present paper describes fingerprint recognition methods using various edge detection techniques, and shows how to detect a fingerprint correctly from camera images. The described method does not require a special device; a simple camera suffices, so the technique can also be applied on a camera-equipped mobile phone. Factors affecting the process include poor illumination, noise, viewpoint dependence, climate factors, and imaging conditions. These factors must be considered, so various image enhancement techniques are applied to increase image quality and remove noise. The paper describes a technique of contour tracking on the fingerprint image, followed by edge detection on the contour, and finally matching of the edges inside the contour.
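
    The edge detection step such methods rely on can be illustrated with a plain Sobel gradient-magnitude filter (a generic NumPy sketch, not the authors' code):

```python
import numpy as np

def sobel_edges(img):
    """Gradient-magnitude edge map from 3x3 Sobel kernels, a common
    first step before contour tracking and edge matching."""
    img = img.astype(float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # horizontal gradient
    ky = kx.T                                            # vertical gradient
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):                 # small explicit 2-D correlation
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)            # gradient magnitude

# A vertical step edge produces a strong response only at the boundary.
step = np.zeros((5, 6))
step[:, 3:] = 255.0
edges = sobel_edges(step)
```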

  1. KAM (Knowledge Acquisition Module): A tool to simplify the knowledge acquisition process

    NASA Technical Reports Server (NTRS)

    Gettig, Gary A.

    1988-01-01

    Analysts, knowledge engineers and information specialists are faced with increasing volumes of time-sensitive data in text form, either as free text or highly structured text records. Rapid access to the relevant data in these sources is essential. However, due to the volume and organization of the contents, and limitations of human memory and association, frequently: (1) important information is not located in time; (2) reams of irrelevant data are searched; and (3) interesting or critical associations are missed due to physical or temporal gaps involved in working with large files. The Knowledge Acquisition Module (KAM) is a microcomputer-based expert system designed to assist knowledge engineers, analysts, and other specialists in extracting useful knowledge from large volumes of digitized text and text-based files. KAM formulates non-explicit, ambiguous, or vague relations, rules, and facts into a manageable and consistent formal code. A library of system rules or heuristics is maintained to control the extraction of rules, relations, assertions, and other patterns from the text. These heuristics can be added, deleted or customized by the user. The user can further control the extraction process with optional topic specifications. This allows the user to cluster extracts based on specific topics. Because KAM formalizes diverse knowledge, it can be used by a variety of expert systems and automated reasoning applications. KAM can also perform important roles in computer-assisted training and skill development. Current research efforts include the applicability of neural networks to aid in the extraction process and the conversion of these extracts into standard formats.

  2. A Spartan 6 FPGA-based data acquisition system for dedicated imagers in nuclear medicine

    NASA Astrophysics Data System (ADS)

    Fysikopoulos, E.; Loudos, G.; Georgiou, M.; David, S.; Matsopoulos, G.

    2012-12-01

    We present the development of a four-channel low-cost hardware system for data acquisition, with application in dedicated nuclear medicine imagers. A 12 bit octal channel high-speed analogue to digital converter, with up to 65 Msps sampling rate, was used for the digitization of analogue signals. The digitized data are fed into a field programmable gate array (FPGA), which contains an interface to a bank of double data rate 2 (DDR2)-type memory. The FPGA processes the digitized data and stores the results into the DDR2. An Ethernet link was used for data transmission to a personal computer. The embedded system was designed using Xilinx's embedded development kit (EDK) and was based on Xilinx's Microblaze soft-core processor. The system has been evaluated using two different discrete optical detector arrays (a position-sensitive photomultiplier tube and a silicon photomultiplier) with two different pixelated scintillator arrays (BGO, LSO:Ce). The energy resolution for both detectors was approximately 25%. A clear identification of all crystal elements was achieved in all cases. The data rate of the system with this implementation can reach 60 Mbit/s. The results have shown that this FPGA data acquisition system is a compact and flexible solution for single-photon-detection applications. This paper was originally submitted for inclusion in the special feature on Imaging Systems and Techniques 2011.
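
    A typical processing step in such a system, integrating a digitised detector pulse to estimate deposited energy after baseline subtraction, can be sketched in software (an illustration of the principle, not the authors' firmware; the pulse below is synthetic):

```python
import numpy as np

def pulse_energy(samples, baseline_n=8):
    """Integrate a digitised detector pulse after subtracting the
    baseline estimated from the first few pre-trigger samples, the
    kind of per-event arithmetic an acquisition FPGA performs."""
    baseline = samples[:baseline_n].mean()
    return float((samples - baseline).sum())

# Synthetic 64-sample event: a Gaussian pulse riding on an ADC pedestal.
t = np.arange(64)
pulse = 100.0 * np.exp(-((t - 20.0) ** 2) / 18.0) + 512.0
e = pulse_energy(pulse)                # proportional to deposited energy
```

    Histogramming many such integrals per crystal is what yields the energy spectra and the ~25% energy resolution quoted above.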

  3. Linear algebra and image processing

    NASA Astrophysics Data System (ADS)

    Allali, Mohamed

    2010-09-01

    We use digital image processing (DIP) technology to enhance the teaching of linear algebra so as to make the course more visual and interesting. This visual approach of using technology to link linear algebra to DIP is interesting and unexpected to students and faculty alike.
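
    One concrete way to link the two subjects in the spirit of such a course (the example itself is ours, not the paper's) is low-rank image approximation via the singular value decomposition, which treats an image literally as a matrix:

```python
import numpy as np

# A synthetic rank-1 "image": an outer product of two brightness ramps.
img = np.outer(np.linspace(0.0, 1.0, 8), np.linspace(1.0, 2.0, 8))

# SVD factors the image matrix; keeping the k largest singular values
# gives the best rank-k approximation in the least-squares sense.
U, s, Vt = np.linalg.svd(img, full_matrices=False)
k = 1
approx = (U[:, :k] * s[:k]) @ Vt[:k, :]

err = np.linalg.norm(img - approx)     # ~0 here, since img is rank 1
```

    On a real photograph, plotting `approx` for increasing `k` makes rank, singular values, and compression visually tangible.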

  4. Concept Learning through Image Processing.

    ERIC Educational Resources Information Center

    Cifuentes, Lauren; Yi-Chuan, Jane Hsieh

    This study explored computer-based image processing as a study strategy for middle school students' science concept learning. Specifically, the research examined the effects of computer graphics generation on science concept learning and the impact of using computer graphics to show interrelationships among concepts during study time. The 87…

  5. Linear Algebra and Image Processing

    ERIC Educational Resources Information Center

    Allali, Mohamed

    2010-01-01

    We use digital image processing (DIP) technology to enhance the teaching of linear algebra so as to make the course more visual and interesting. This visual approach of using technology to link linear algebra to DIP is interesting and unexpected to students and faculty alike. (Contains 2 tables and 11 figures.)

  6. Progress in visualizing turbulent flow using single-echo acquisition imaging.

    PubMed

    Wright, Steven M; McDougall, Mary Preston; Bosshard, John C

    2006-01-01

    MRI of flow remains a challenging problem despite significant improvements in imaging speeds. For periodic flow the acquisition can be gated, synchronizing data acquisition with the flow. However, this method fails to work if the flow is sufficiently fast that turbulence occurs, or when it is sufficiently fast that blurring occurs during the excitation of the spins or the acquisition of the signal. This paper describes recent progress in employing a very fast MR imaging technique, Single Echo Acquisition Imaging (SEA-MRI) and spin-tagging to visualize very rapid and turbulent flow patterns. Demonstrations are done on a separating channel phantom with input flow rates ranging from zero to over 100 cm/sec. Spin-tagging enables a "texture" to be placed on the spins, enabling clear visualization of the complex flow patterns, and in some cases measurement of the flow velocity.

  7. Fault recognition depending on seismic acquisition and processing for application to geothermal exploration

    NASA Astrophysics Data System (ADS)

    Buness, H.; von Hartmann, H.; Rumpel, H.; Krawczyk, C. M.; Schulz, R.

    2011-12-01

    Fault systems offer a large potential for deep hydrothermal energy extraction. Most of the existing and planned projects rely on enhanced permeability assumed to be connected with them. The target depth of hydrothermal exploration in Germany is on the order of 3-5 km to ensure an economic operation despite moderate temperature gradients. 3D seismics is the most appropriate geophysical method to image fault systems at these depths, but also one of the most expensive ones. It constitutes a significant part of the total project costs, so its application was (and is) discussed. Cost reduction can in principle be achieved by sparse acquisition. However, the decreased fold inevitably leads to a decreased S/N ratio. To overcome this problem, the application of the CRS (Common Reflection Surface) method has been proposed. The stacking operator of the CRS method inherently includes more traces than the conventional NMO/DMO stacking operator, and hence a better S/N ratio can be achieved. We tested this approach using existing 3D seismic datasets from the two most important hydrothermal provinces in Germany, the Upper Rhine Graben (URG) and the German Molasse Basin (GMB). To simulate a sparse acquisition, we reduced the amount of data to a quarter and a half, respectively, and reprocessed the data, including new velocity analysis and residual static corrections. In the URG, using the variance cube as the basis for a horizon-bound window amplitude analysis proved successful for the detection of small faults that would hardly be recognized in seismic sections. In both regions, CRS processing undoubtedly improved the imaging of small faults in the complete as well as in the reduced versions of the datasets. However, CRS processing could not compensate for the loss of resolution caused by the simulated sparse acquisition, and hence smaller faults became undetectable. The decision for a sparse acquisition thus depends on the scope of the survey.

  8. The Logical Syntax of Number Words: Theory, Acquisition and Processing

    ERIC Educational Resources Information Center

    Musolino, Julien

    2009-01-01

    Recent work on the acquisition of number words has emphasized the importance of integrating linguistic and developmental perspectives [Musolino, J. (2004). The semantics and acquisition of number words: Integrating linguistic and developmental perspectives. "Cognition 93", 1-41; Papafragou, A., Musolino, J. (2003). Scalar implicatures: Scalar…

  9. The magic of image processing

    NASA Astrophysics Data System (ADS)

    Sulentic, J. W.

    1984-05-01

    Digital technology has been used to improve enhancement techniques in astronomical image processing. Continuous tone variations in photographs are assigned density number (DN) values which are arranged in an array. DN locations are processed by computer and turned into pixels which form a reconstruction of the original scene on a television monitor. Digitized data can be manipulated to enhance contrast and filter out gross patterns of light and dark which obscure small-scale features. Separate black and white frames exposed at different wavelengths can be digitized and processed individually, then recombined to produce a final image in color. Several examples of the use of the technique are provided, including photographs of the spiral galaxy M33; four galaxies in Coma Berenices (NGC 4169, 4173, 4174, and 4175); and Stephan's Quintet.
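
    The pipeline described, stretching DN values for contrast and recombining monochrome frames into a color image, can be sketched as follows (synthetic values, not the article's photographs):

```python
import numpy as np

def stretch(dn, lo, hi):
    """Linear contrast stretch: map the DN range [lo, hi] to [0, 255],
    clipping values outside the chosen window."""
    return np.clip((dn.astype(float) - lo) / (hi - lo) * 255.0, 0.0, 255.0)

# Three monochrome frames exposed through different filters (synthetic,
# uniform frames here) are stretched and stacked into one color image.
r = stretch(np.full((4, 4), 120.0), 100.0, 200.0)
g = stretch(np.full((4, 4), 150.0), 100.0, 200.0)
b = stretch(np.full((4, 4), 180.0), 100.0, 200.0)
color = np.dstack([r, g, b])           # H x W x 3 composite
```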

  10. DICOM 3.0 image display and processing tool for teleradiology

    NASA Astrophysics Data System (ADS)

    Wang, Cliff X.; Chimiak, William J.; Hamilton, Craig A.

    1995-05-01

    A typical teleradiology system consists of four sub-systems: (1) image acquisition, (2) image transmission, (3) image viewing, and (4) teleconferencing. An image viewing and processing tool is a very important part of the system: a successful teleradiology system requires an effective image display and processing tool based on a standard, user-friendly GUI. DICOM 3.0 defines vendor-independent data formats and data transfers for digital medical images, and has been widely supported by industry since the first draft of the DICOM 3.0 standard. In this paper, we present a DICOM 3.0 image display and processing tool for teleradiology and teleconsulting. The system provides the user with a flexible image display format and a powerful set of image processing tools. A DICOM panel displays the grouped information of patient, study, result, and acquisition settings. This display and processing tool is designed for both clinical and research use.

  11. Breast image pre-processing for mammographic tissue segmentation.

    PubMed

    He, Wenda; Hogg, Peter; Juette, Arne; Denton, Erika R E; Zwiggelaar, Reyer

    2015-12-01

    During mammographic image acquisition, a compression paddle is used to even the breast thickness in order to obtain optimal image quality. Clinical observation has indicated that some mammograms may exhibit abrupt intensity change and low visibility of tissue structures in the breast peripheral areas. Such appearance discrepancies can affect image interpretation and may not be desirable for computer aided mammography, leading to incorrect diagnosis and/or detection which can have a negative impact on sensitivity and specificity of screening mammography. This paper describes a novel mammographic image pre-processing method to improve image quality for analysis. An image selection process is incorporated to better target problematic images. The processed images show improved mammographic appearances not only in the breast periphery but also across the mammograms. Mammographic segmentation and risk/density classification were performed to facilitate a quantitative and qualitative evaluation. When using the processed images, the results indicated more anatomically correct segmentation in tissue specific areas, and subsequently better classification accuracies were achieved. Visual assessments were conducted in a clinical environment to determine the quality of the processed images and the resultant segmentation. The developed method has shown promising results. It is expected to be useful in early breast cancer detection, risk-stratified screening, and aiding radiologists in the process of decision making prior to surgery and/or treatment.

  12. ImageJ: Image processing and analysis in Java

    NASA Astrophysics Data System (ADS)

    Rasband, W. S.

    2012-06-01

    ImageJ is a public domain Java image processing program inspired by NIH Image. It can display, edit, analyze, process, save and print 8-bit, 16-bit and 32-bit images. It can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and "raw". It supports "stacks", a series of images that share a single window. It is multithreaded, so time-consuming operations such as image file reading can be performed in parallel with other operations.

  13. Design and implementation of non-linear image processing functions for CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Musa, Purnawarman; Sudiro, Sunny A.; Wibowo, Eri P.; Harmanto, Suryadi; Paindavoine, Michel

    2012-11-01

    Today, solid state image sensors are used in many applications such as mobile phones, video surveillance systems, embedded medical imaging and industrial vision systems. These image sensors require the integration, in or near the focal plane, of complex image processing algorithms. Such devices must meet constraints related to the quality of acquired images, the speed and performance of embedded processing, and low power consumption. To achieve these objectives, low-level analog processing allows the useful information in the scene to be extracted directly. For example, an edge detection step followed by local maxima extraction facilitates high-level processing such as object pattern recognition in a visual scene. Our goal was to design an intelligent image sensor prototype achieving high-speed image acquisition and non-linear image processing (such as local minima and maxima calculations). For this purpose, we present in this article the design and test of a 64×64 pixel image sensor built in a standard 0.35 μm CMOS technology including non-linear image processing. The architecture of our sensor, named nLiRIC (non-Linear Rapid Image Capture), is based on the implementation of an analog Minima/Maxima Unit (MMU). This MMU calculates the minimum and maximum values (non-linear functions), in real time, in a 2×2 pixel neighbourhood. Each MMU needs 52 transistors and the pitch of one pixel is 40×40 μm. The total area of the 64×64 pixel array is 12.5 mm². Our tests have shown the validity of the main functions of our new image sensor, such as fast image acquisition (10K frames per second) and minima/maxima calculation in less than one millisecond.
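
    A software model of the MMU's behaviour (the chip computes this in analog hardware; the NumPy sketch below is only illustrative):

```python
import numpy as np

def mmu_2x2(img):
    """Minimum and maximum over every overlapping 2x2 pixel
    neighbourhood, mimicking the sensor's analog Minima/Maxima Unit."""
    a, b = img[:-1, :-1], img[:-1, 1:]   # top-left, top-right pixels
    c, d = img[1:, :-1], img[1:, 1:]     # bottom-left, bottom-right
    stack = np.stack([a, b, c, d])
    return stack.min(axis=0), stack.max(axis=0)

img = np.array([[1, 5],
                [3, 9]])
mn, mx = mmu_2x2(img)                    # single 2x2 neighbourhood
```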

  14. Imageability predicts the age of acquisition of verbs in Chinese children.

    PubMed

    Ma, Weiyi; Golinkoff, Roberta Michnick; Hirsh-Pasek, Kathy; McDonough, Colleen; Tardif, Twila

    2009-03-01

    Verbs are harder to learn than nouns in English and in many other languages, but are relatively easy to learn in Chinese. This paper evaluates one potential explanation for these findings by examining the construct of imageability, or the ability of a word to produce a mental image. Chinese adults rated the imageability of Chinese words from the Chinese Communicative Development Inventory (Tardif et al., in press). Imageability ratings were a reliable predictor of age of acquisition in Chinese for both nouns and verbs. Furthermore, whereas early Chinese and English nouns do not differ in imageability, verbs receive higher imageability ratings in Chinese than in English. Compared with input frequency, imageability independently accounts for a portion of the variance in age of acquisition (AoA) of verb learning in Chinese and English.

  15. Design of a Remote Infrared Images and Other Data Acquisition Station for outdoor applications

    NASA Astrophysics Data System (ADS)

    Béland, M.-A.; Djupkep, F. B. D.; Bendada, A.; Maldague, X.; Ferrarini, G.; Bison, P.; Grinzato, E.

    2013-05-01

    The Infrared Images and Other Data Acquisition Station enables a user, who is located inside a laboratory, to acquire visible and infrared images and distances in an outdoor environment with the help of an Internet connection. This station can acquire data using an infrared camera, a visible camera, and a rangefinder. The system can be used through a web page or through Python functions.

  16. Data Acquisition and Image Reconstruction Systems from the miniPET Scanners to the CARDIOTOM Camera

    NASA Astrophysics Data System (ADS)

    Valastván, I.; Imrek, J.; Hegyesi, G.; Molnár, J.; Novák, D.; Bone, D.; Kerek, A.

    2007-11-01

    Nuclear imaging devices play an important role in medical diagnosis as well as drug research. The first and second generation data acquisition systems and the image reconstruction library developed provide a unified hardware and software platform for the miniPET-I, miniPET-II small animal PET scanners and for the CARDIOTOM™.

  17. Light field sensor and real-time panorama imaging multi-camera system and the design of data acquisition

    NASA Astrophysics Data System (ADS)

    Lu, Yu; Tao, Jiayuan; Wang, Keyi

    2014-09-01

    Advanced image sensors and powerful parallel data acquisition chips can be used to collect more detailed and comprehensive light field information. Using multiple single-aperture, high-resolution sensors to record light field data, and processing the light field data in real time, we can obtain a wide field-of-view (FOV), high-resolution image. Wide-FOV, high-resolution imaging has promising applications in navigation, surveillance and robotics. Quality-enhanced 3D rendering, very high resolution depth map estimation, high dynamic range and other applications can be obtained when these large light field data are post-processed. FOV and resolution are conflicting requirements in a traditional single-aperture optical imaging system and cannot both be satisfied well. We have designed a multi-camera light field data acquisition system, and optimized each sensor's spatial location and relations; it can be used for wide-FOV, high-resolution real-time imaging. Using 5 megapixel CMOS sensors, a field programmable gate array (FPGA) acquires the light field data, processes it in parallel and transmits it to a PC. A common clock signal is distributed to all of the cameras, and a synchronization precision of 40 ns between cameras is achieved. Using 9 CMOS sensors, we built an initial system and obtained a high-resolution 360°×60° FOV image. The system is intended to be flexible, modular and scalable, with much visibility and control over the cameras. It uses the high-speed dedicated camera interface Camera Link for data transfer. The details of the hardware architecture, its internal blocks, the algorithms, and the device calibration procedure are presented, along with imaging results.

  18. Imaging study of a phase-sensitive breast-CT system in continuous acquisition mode

    NASA Astrophysics Data System (ADS)

    Delogu, P.; Golosio, B.; Fedon, C.; Arfelli, F.; Bellazzini, R.; Brez, A.; Brun, F.; Di Lillo, F.; Dreossi, D.; Mettivier, G.; Minuti, M.; Oliva, P.; Pichera, M.; Rigon, L.; Russo, P.; Sarno, A.; Spandre, G.; Tromba, G.; Longo, R.

    2017-01-01

    The SYRMA-CT project aims to set up the first clinical trial of phase-contrast breast computed tomography with synchrotron radiation at the SYRMEP beamline of Elettra, the Italian synchrotron light source. The challenge in dedicated breast CT is to match a high spatial resolution with a low dose level. In order to fulfil these requirements, the SYRMA-CT project uses a large-area CdTe single photon counting detector (Pixirad-8), the simultaneous algebraic reconstruction technique (SART) and phase retrieval pre-processing. This work investigates the imaging performance of the system in a continuous acquisition mode and at a low dose level towards the clinical application. A custom test object and a large surgical sample have been studied.

  19. A dedicated hardware architecture for data acquisition and processing in a time-of-flight emission tomography system (Super PETT)

    SciTech Connect

    Holmes, T.J.; Blaine, G.J.; Fiche, D.C.; Hitchens, R.E.; Snyder, D.L.

    1983-02-01

    The authors present the architecture, implementation and performance aspects of a dedicated processor for use in Super-PETT. The micro-coded machine accepts event data from the acquisition circuitry and constructs in real time any of three pre-image types, including time-of-flight arrays. Pre-images are later backloaded to perform high speed reconstructions. One such processor is assigned to each image slice of the Super-PETT. Event rates and processing times are given for various application modalities.

  20. Rapid acquisition of in vivo biological images by use of optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Tearney, G. J.; Bouma, B. E.; Boppart, S. A.; Golubovic, B.; Swanson, E. A.; Fujimoto, J. G.

    1996-09-01

    The development of techniques for high-speed image acquisition in optical coherence tomography (OCT) systems is essential for suppressing motion artifacts when one is imaging living systems. We describe a new OCT system for performing micrometer-scale, cross-sectional optical imaging at four images/s. To achieve OCT image-acquisition times of less than 1 s, we use a piezoelectric fiber stretcher to vary the reference arm delay. A Kerr-lens mode-locked chromium-doped forsterite laser is employed as the low-coherence source for the high-speed OCT system. Dynamic, motion-artifact-free in vivo imaging of a beating Xenopus laevis (African frog) heart is demonstrated.

  1. Improving Image Quality and Reducing Drift Problems via Automated Data Acquisition and Averaging in Cs-corrected TEM

    SciTech Connect

    Voelkl, E; Jiang, B; Dai, Z R; Bradley, J P

    2008-08-29

    Image acquisition with a CCD camera is a single-button activity: after selecting the exposure time and adjusting the illumination, a button is pressed and the acquired image is perceived as the final, unmodified proof of what was seen in the microscope. Thus it is generally assumed that image processing steps such as 'dark-current correction' and 'gain normalization' do not alter the information content of the image, but rather eliminate unwanted artifacts. Image quality therefore is, among a long list of other parameters, defined by the dynamic range of the CCD camera as well as the maximum allowable exposure time, which depends on sample drift (ignoring sample damage). Despite the fact that most microscopists are satisfied with present, standard image quality, we found that it is relatively easy to improve on existing routines in at least two aspects: (1) suppression of lateral image drift during acquisition by using significantly shorter exposure times with a plurality of exposures (a 3D data set); and (2) improvement in the signal-to-noise ratio by averaging over a given data set, thereby exceeding the dynamic range of the camera.
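
    The averaging argument in (2) is easy to verify numerically: the noise of the mean of N frames falls roughly as 1/sqrt(N). The sketch below uses synthetic, drift-free frames; the actual workflow additionally aligns the frames before averaging:

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.full((16, 16), 100.0)                  # ideal, drift-free scene
frames = [truth + rng.normal(0.0, 10.0, truth.shape)  # short, noisy exposures
          for _ in range(64)]

avg = np.mean(frames, axis=0)                     # pixel-wise average

noise_single = np.std(frames[0] - truth)          # ~10 (one exposure)
noise_avg = np.std(avg - truth)                   # ~10 / sqrt(64) = 1.25
```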

  2. Reengineering the Acquisition/Procurement Process: A Methodology for Requirements Collection

    NASA Technical Reports Server (NTRS)

    Taylor, Randall; Vanek, Thomas

    2011-01-01

    This paper captures the systematic approach taken by JPL's Acquisition Reengineering Project team, the methodology used, challenges faced, and lessons learned. It provides pragmatic "how-to" techniques and tools for collecting requirements and for identifying areas of improvement in an acquisition/procurement process or other core process of interest.

  3. 77 FR 2682 - Defense Federal Acquisition Regulation Supplement; DoD Voucher Processing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-19

    ... Regulation Supplement; DoD Voucher Processing AGENCY: Defense Acquisition Regulations System, Department of Defense (DoD). ACTION: Proposed rule. SUMMARY: DoD is proposing to amend the Defense Federal Acquisition Regulation Supplement (DFARS) to update DoD's voucher processing procedures and better accommodate the use...

  4. Fringe image processing based on structured light series

    NASA Astrophysics Data System (ADS)

    Gai, Shaoyan; Da, Feipeng; Li, Hongyan

    2009-11-01

    The code analysis of the fringe image plays a vital role in the data acquisition of structured light systems, affecting the precision, computational speed and reliability of the measurement process. Based on the self-normalizing characteristic, a fringe image processing method using structured light is proposed. In this method, a series of projected patterns is used when detecting the fringe order of the image pixels. The structured light system geometry is presented, which consists of a white light projector and a digital camera; the former projects sinusoidal fringe patterns upon the object, and the latter acquires the fringe patterns that are deformed by the object's shape. Binary images with distinct white and black stripes can then be obtained, and the ability to resist image noise is improved greatly. The proposed method can be implemented easily and applied to profile measurement based on a special binary code in a wide field.
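
    The self-normalizing idea, comparing a projected pattern with its inverse at each pixel so that surface albedo and ambient light cancel, can be sketched like this (our illustration of the principle, not the authors' algorithm):

```python
import numpy as np

def binarize_fringe(i_pos, i_neg):
    """Self-normalising binarisation: a pixel is 'white' exactly when
    the positive pattern outshines the inverse pattern, so per-pixel
    albedo and ambient offsets cancel out of the comparison."""
    return (i_pos > i_neg).astype(np.uint8)

# Synthetic example: albedo varies pixel to pixel, but the recovered
# binary code matches the projected stripe code regardless.
albedo = np.array([0.2, 1.0, 0.5, 0.8])
pattern = np.array([1.0, 0.0, 1.0, 0.0])      # projected stripe code
ambient = 0.1
i_pos = albedo * pattern + ambient            # camera sees pattern
i_neg = albedo * (1.0 - pattern) + ambient    # camera sees inverse
bits = binarize_fringe(i_pos, i_neg)
```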

  5. Method and apparatus for high speed data acquisition and processing

    DOEpatents

    Ferron, John R.

    1997-01-01

    A method and apparatus for high speed digital data acquisition. The apparatus includes one or more multiplexers for receiving multiple channels of digital data at a low data rate and asserting a multiplexed data stream at a high data rate, and one or more FIFO memories for receiving data from the multiplexers and asserting the data to a real time processor. Preferably, the invention includes two multiplexers, two FIFO memories, and a 64-bit bus connecting the FIFO memories with the processor. Each multiplexer receives four channels of 14-bit digital data at a rate of up to 5 MHz per channel, and outputs a data stream to one of the FIFO memories at a rate of 20 MHz. The FIFO memories assert output data in parallel to the 64-bit bus, thus transferring 14-bit data values to the processor at a combined rate of 40 MHz. The real time processor is preferably a floating-point processor which processes 32-bit floating-point words. A set of mask bits is prestored in each 32-bit storage location of the processor memory into which a 14-bit data value is to be written. After data transfer from the FIFO memories, mask bits are concatenated with each stored 14-bit data value to define a valid 32-bit floating-point word. Preferably, a user can select any of several modes for starting and stopping direct memory transfers of data from the FIFO memories to memory within the real time processor, by setting the content of a control and status register.
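
    The mask-bit scheme can be reproduced in software. If the prestored mask is the bit pattern of 2^23 (an assumed choice for illustration; the abstract does not specify the mask), then OR-ing a 14-bit sample into the low mantissa bits yields a valid 32-bit float from which the sample is recovered exactly:

```python
import numpy as np

# Bit pattern of the float32 value 8388608.0 == 2**23. With this mask
# prestored in a 32-bit word, any integer < 2**23 dropped into the low
# mantissa bits produces the float 2**23 + n, exactly representable.
MASK = 0x4B000000

def to_float_word(sample14):
    """Concatenate the mask bits with a 14-bit sample, reinterpret the
    32-bit word as a float, and subtract 2**23 to recover the sample."""
    word = np.array([MASK | (sample14 & 0x3FFF)], dtype=np.uint32)
    return float(word.view(np.float32)[0]) - 8388608.0

x = to_float_word(12345)     # round-trips the 14-bit value
y = to_float_word(0)
```

    The appeal of the trick is that no per-sample integer-to-float conversion instruction is needed: the DMA write itself produces valid floating-point words.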

  6. Method and apparatus for high speed data acquisition and processing

    DOEpatents

    Ferron, J.R.

    1997-02-11

    A method and apparatus are disclosed for high speed digital data acquisition. The apparatus includes one or more multiplexers for receiving multiple channels of digital data at a low data rate and asserting a multiplexed data stream at a high data rate, and one or more FIFO memories for receiving data from the multiplexers and asserting the data to a real time processor. Preferably, the invention includes two multiplexers, two FIFO memories, and a 64-bit bus connecting the FIFO memories with the processor. Each multiplexer receives four channels of 14-bit digital data at a rate of up to 5 MHz per channel, and outputs a data stream to one of the FIFO memories at a rate of 20 MHz. The FIFO memories assert output data in parallel to the 64-bit bus, thus transferring 14-bit data values to the processor at a combined rate of 40 MHz. The real time processor is preferably a floating-point processor which processes 32-bit floating-point words. A set of mask bits is prestored in each 32-bit storage location of the processor memory into which a 14-bit data value is to be written. After data transfer from the FIFO memories, mask bits are concatenated with each stored 14-bit data value to define a valid 32-bit floating-point word. Preferably, a user can select any of several modes for starting and stopping direct memory transfers of data from the FIFO memories to memory within the real time processor, by setting the content of a control and status register. 15 figs.

  7. Image processing software for imaging spectrometry

    NASA Technical Reports Server (NTRS)

    Mazer, Alan S.; Martin, Miki; Lee, Meemong; Solomon, Jerry E.

    1988-01-01

    The paper presents a software system, Spectral Analysis Manager (SPAM), which has been specifically designed and implemented to provide the exploratory analysis tools necessary for imaging spectrometer data, using only modest computational resources. The basic design objectives are described as well as the major algorithms designed or adapted for high-dimensional images. Included in a discussion of system implementation are interactive data display, statistical analysis, image segmentation and spectral matching, and mixture analysis.

  8. Quantum dot imaging in the second near-infrared optical window: studies on reflectance fluorescence imaging depths by effective fluence rate and multiple image acquisition

    NASA Astrophysics Data System (ADS)

    Jung, Yebin; Jeong, Sanghwa; Nayoun, Won; Ahn, Boeun; Kwag, Jungheon; Geol Kim, Sang; Kim, Sungjee

    2015-04-01

    Quantum dot (QD) imaging capability was investigated by the imaging depth at a near-infrared second optical window (SOW; 1000 to 1400 nm) using time-modulated pulsed laser excitations to control the effective fluence rate. Various media, such as liquid phantoms, tissues, and in vivo small animals, were used and the imaging depths were compared with our predicted values. The QD imaging depth under excitation of continuous 20 mW/cm2 laser was determined to be 10.3 mm for 2 wt% hemoglobin phantom medium and 5.85 mm for 1 wt% intralipid phantom, which were extended by more than two times on increasing the effective fluence rate to 2000 mW/cm2. Bovine liver and porcine skin tissues also showed similar enhancement in the contrast-to-noise ratio (CNR) values. A QD sample was inserted into the abdomen of a mouse. With a higher effective fluence rate, the CNR increased more than twofold and the QD sample became clearly visualized, which was completely undetectable under continuous excitation. Multiple acquisitions of QD images and averaging process pixel by pixel were performed to overcome the thermal noise issue of the detector in SOW, which yielded significant enhancement in the imaging capability, showing up to a 1.5 times increase in the CNR.
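
    The benefit of multiple acquisitions with pixel-by-pixel averaging can be checked on synthetic data. The CNR definition below (mean signal-background difference over background noise) is a common one and an assumption here, as are all the numbers:

```python
import numpy as np

def cnr(signal, background):
    """Contrast-to-noise ratio: mean signal-background difference
    divided by the background noise (one common definition)."""
    return (signal.mean() - background.mean()) / background.std()

rng = np.random.default_rng(1)
# Simulated pixel samples: a QD region (mean 110) over background
# (mean 100), with strong detector thermal noise (sigma = 20).
frames_sig = [110.0 + rng.normal(0.0, 20.0, 500) for _ in range(16)]
frames_bg = [100.0 + rng.normal(0.0, 20.0, 500) for _ in range(16)]

cnr_single = cnr(frames_sig[0], frames_bg[0])       # one acquisition
cnr_avg = cnr(np.mean(frames_sig, axis=0),          # 16 acquisitions,
              np.mean(frames_bg, axis=0))           # averaged pixel-wise
```

    Averaging 16 frames shrinks the thermal noise by a factor of 4, which is why the averaged CNR comes out several times higher than the single-frame value.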

  9. Biomedical signal and image processing.

    PubMed

    Cerutti, Sergio; Baselli, Giuseppe; Bianchi, Anna; Caiani, Enrico; Contini, Davide; Cubeddu, Rinaldo; Dercole, Fabio; Rienzo, Luca; Liberati, Diego; Mainardi, Luca; Ravazzani, Paolo; Rinaldi, Sergio; Signorini, Maria; Torricelli, Alessandro

    2011-01-01

    Generally, physiological modeling and biomedical signal processing constitute two important paradigms of biomedical engineering (BME): their fundamental concepts are taught starting from undergraduate studies and are more completely dealt with in the last years of graduate curricula, as well as in Ph.D. courses. Traditionally, these two cultural aspects were separated, with the first one more oriented to physiological issues and how to model them and the second one more dedicated to the development of processing tools or algorithms to enhance useful information from clinical data. A practical consequence was that those who did models did not do signal processing and vice versa. However, in recent years, the need for closer integration between signal processing and modeling of the relevant biological systems emerged very clearly [1], [2]. This is true not only for training purposes (i.e., to properly prepare the new professional members of BME) but also for the development of newly conceived research projects in which the integration between biomedical signal and image processing (BSIP) and modeling plays a crucial role. Just to give simple examples, topics such as brain-machine or brain-computer interfaces, neuroengineering, nonlinear dynamical analysis of the cardiovascular (CV) system, integration of sensory-motor characteristics aimed at the building of advanced prostheses and rehabilitation tools, and wearable devices for vital sign monitoring all require an intelligent fusion of modeling and signal processing competences that are certainly peculiar to our discipline of BME.

  10. Image processing technique for arbitrary image positioning in holographic stereogram

    NASA Astrophysics Data System (ADS)

    Kang, Der-Kuan; Yamaguchi, Masahiro; Honda, Toshio; Ohyama, Nagaaki

    1990-12-01

    In a one-step holographic stereogram, if the series of original images are used just as they are taken from perspective views, three-dimensional images are usually reconstructed in back of the hologram plane. In order to enhance the sense of perspective of the reconstructed images and minimize blur of the interesting portions, we introduce an image processing technique for making a one-step flat format holographic stereogram in which three-dimensional images can be observed at an arbitrary specified position. Experimental results show the effect of the image processing. Further, we show results of a medical application using this image processing.

  11. Multispectral Image Processing for Plants

    NASA Technical Reports Server (NTRS)

    Miles, Gaines E.

    1991-01-01

    The development of a machine vision system to monitor plant growth and health is one of three essential steps towards establishing an intelligent system capable of accurately assessing the state of a controlled ecological life support system for long-term space travel. Besides a network of sensors, simulators are needed to predict plant features, and artificial intelligence algorithms are needed to determine the state of a plant-based life support system. Multispectral machine vision and image processing can be used to sense plant features, including health and nutritional status.
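    One common way multispectral imaging is used to sense plant health (an illustrative standard index, not necessarily the method of the system described above) is a vegetation index such as NDVI, which contrasts red and near-infrared reflectance: healthy vegetation absorbs red light and reflects NIR strongly. The reflectance values below are invented:

```python
import numpy as np

# Hypothetical 2x2-pixel red and near-infrared reflectance bands.
red = np.array([[0.08, 0.10],
                [0.40, 0.45]])   # healthy leaves absorb red strongly...
nir = np.array([[0.50, 0.55],
                [0.42, 0.44]])   # ...and reflect near-infrared strongly

# NDVI = (NIR - Red) / (NIR + Red): near 1 for healthy canopy,
# near 0 for soil or stressed tissue.
ndvi = (nir - red) / (nir + red)
print(np.round(ndvi, 2))
```

    Here the top row (high NIR, low red) scores as healthy vegetation, while the bottom row scores near zero.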

  12. Framelet lifting in image processing

    NASA Astrophysics Data System (ADS)

    Lu, Da-Yong; Feng, Tie-Yong

    2010-08-01

    To obtain appropriate framelets in image processing, we often need to lift existing framelets. For this purpose the paper presents some methods which allow us to modify existing framelets or filters to construct new ones. The relationships between the matrices used in the lifting schemes and their eigenvalues show that the frame bounds of the lifted wavelet frames are optimal. Moreover, the examples given in Section 4 indicate that the lifted framelets can play the roles of operators such as the weighted average operator, the Sobel operator and the Laplacian operator, which are often used in edge detection and motion estimation applications.
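    The Sobel and Laplacian operators named above are small convolution kernels. A minimal numpy sketch of their use in edge detection (the 3x3 kernels are the standard ones; the test image is a synthetic step edge, not from the paper):

```python
import numpy as np

# Standard edge-detection kernels: horizontal-gradient Sobel and Laplacian.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]])

def filter2d(img, kernel):
    # Minimal 'valid'-mode 2-D correlation with a 3x3 kernel.
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

# A vertical step edge: the Sobel-x response peaks on the edge columns,
# and the Laplacian response is zero in perfectly flat regions.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
print(filter2d(img, SOBEL_X))  # each row of the response: [0. 4. 4. 0.]
```

    The Laplacian kernel sums to zero, which is why it suppresses constant regions and responds only where intensity curvature is present.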

  13. Complex-valued acquisition of the diffraction imaging by incoherent quasimonochromatic light without a support constraint

    SciTech Connect

    Zhang Minghui; Xu Jianfei; Wang Xianfu; Wei Qing

    2010-10-15

    A scheme for complex-valued acquisition of diffraction imaging with quasimonochromatic incoherent light is theoretically proposed. The main idea is to project the real and the imaginary parts of a Fraunhofer diffraction field onto intensity distributions, respectively, with the use of a π/2 phase-changing plate. The whole procedure is free of iterative algorithms and needs no a priori knowledge of the arbitrary object. A numerical experiment and a quantitative confirmation are also given. To our knowledge, this is the first physical proposal for the complex-valued acquisition of diffraction imaging by two-dimensional coherent patterns with thermal illumination.

  14. Weapon Acquisition Program Outcomes and Efforts to Reform DOD’s Acquisition Process

    DTIC Science & Technology

    2016-05-09

    growth while in production. This represents concurrency, which can be caused by many factors, and is a contributor to cost growth. 9. As measured against...portfolio, were also in the 2005 portfolio and represent 80 percent of the portfolio’s current total acquisition cost or over $1.1 of the $1.4 trillion...continues a trend we have seen for the past decade. • When measured from first full estimates the total estimated cost of the portfolio has grown by over

  15. Facilities and the Air Force Systems Acquisition Process.

    DTIC Science & Technology

    1985-05-01

    to provide essential facilities by system Initial Operational Capability (IOC). And secondly, since the systems acquisition process is event...funds exclusively for systems acquisition. This change will remove the current military construction calendar constraint and allow facilities to be

  16. DEVELOPMENT OF MARKETABLE TYPING SKILL--SENSORY PROCESSES UNDERLYING ACQUISITION.

    ERIC Educational Resources Information Center

    WEST, LEONARD J.

    THE PROJECT ATTEMPTED TO PROVIDE FURTHER DATA ON THE DOMINANT HYPOTHESIS ABOUT THE SENSORY MECHANISMS UNDERLYING SKILL ACQUISITION IN TYPEWRITING. IN SO DOING, IT PROPOSED TO FURNISH A BASIS FOR IMPORTANT CORRECTIVES TO SUCH CONVENTIONAL INSTRUCTIONAL PROCEDURES AS TOUCH TYPING. SPECIFICALLY, THE HYPOTHESIS HAS BEEN THAT KINESTHESIS IS NOT…

  17. Developmental Stages in Receptive Grammar Acquisition: A Processability Theory Account

    ERIC Educational Resources Information Center

    Buyl, Aafke; Housen, Alex

    2015-01-01

    This study takes a new look at the topic of developmental stages in the second language (L2) acquisition of morphosyntax by analysing receptive learner data, a language mode that has hitherto received very little attention within this strand of research (for a recent and rare study, see Spinner, 2013). Looking at both the receptive and productive…

  18. Developmental Processes and Stages in the Acquisition of Cardinality.

    ERIC Educational Resources Information Center

    Bermejo, Vicente; Lago, M. Oliva

    1990-01-01

    Cardinality responses are affected by both the direction and nature of the elements in the counting sequence. Error analysis suggests six stages in the acquisition of cardinality. Although there appears to be a developmental dependency between counting and cardinality, this relationship is not significant in all cases. (RH)

  19. Processing of medical images using Maple

    NASA Astrophysics Data System (ADS)

    Toro Betancur, V.

    2013-05-01

    Maple's Image Tools package was used to process medical images. The results showed clearer images and records of their intensities and entropy. The medical images of a rhinocerebral mucormycosis patient, who was not diagnosed early, were processed and analyzed using Maple's tools, which showed, in a clearer way, the affected parts in the perinasal cavities.
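    The intensity and entropy records mentioned above are standard measures and can be reproduced in a few lines without Maple; Shannon entropy of the intensity histogram is a common definition (the synthetic images below are invented for illustration):

```python
import numpy as np

def image_entropy(img, nbins=256):
    # Shannon entropy (in bits) of the image intensity histogram.
    hist, _ = np.histogram(img, bins=nbins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

flat = np.full((64, 64), 128)                                 # constant image
noisy = np.random.default_rng(0).integers(0, 256, (64, 64))   # uniform noise

# A constant image carries zero entropy; uniform noise approaches
# the 8-bit maximum of log2(256) = 8 bits.
print(image_entropy(flat), round(image_entropy(noisy), 1))
```

    Entropy rises with tonal richness, which is one reason it is useful as a record of how processing altered an image.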

  20. Seismic acquisition and processing methodologies in overthrust areas: Some examples from Latin America

    SciTech Connect

    Tilander, N.G.; Mitchel, R.

    1996-08-01

    Overthrust areas represent some of the last frontiers in petroleum exploration today. Billion barrel discoveries in the Eastern Cordillera of Colombia and the Monagas fold-thrust belt of Venezuela during the past decade have highlighted the potential rewards for overthrust exploration. However, the seismic data recorded in many overthrust areas is disappointingly poor. Challenges such as rough topography, complex subsurface structure, presence of high-velocity rocks at the surface, back-scattered energy and severe migration wavefronting continue to lower data quality and reduce interpretability. Lack of well/velocity control also reduces the reliability of depth estimations and migrated images. Failure to obtain satisfactory pre-drill structural images can easily result in costly wildcat failures. Advances in the methodologies used by Chevron for data acquisition, processing and interpretation have produced significant improvements in seismic data quality in Bolivia, Colombia and Trinidad. In this paper, seismic test results showing various swath geometries will be presented. We will also show recent examples of processing methods which have led to improved structural imaging. Rather than focusing on "black box" methodology, we will emphasize the cumulative effect of step-by-step improvements. Finally, the critical significance and interrelation of velocity measurements, modeling and depth migration will be explored. Pre-drill interpretations must ultimately encompass a variety of model solutions, and error bars should be established which realistically reflect the uncertainties in the data.

  1. Concurrent Image Processing Executive (CIPE)

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Cooper, Gregory T.; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.

    1988-01-01

    The design and implementation of a Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high performance science analysis workstation, are discussed. The target machine for this software is a JPL/Caltech Mark IIIfp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3 or Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local-memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules: (1) user interface, (2) host-resident executive, (3) hypercube-resident executive, and (4) application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube, a data management method which distributes, redistributes, and tracks data set information was implemented.

  2. Low cost 3D scanning process using digital image processing

    NASA Astrophysics Data System (ADS)

    Aguilar, David; Romero, Carlos; Martínez, Fernando

    2017-02-01

    This paper shows the design and building of a low cost 3D scanner, able to digitize solid objects through contactless data acquisition, using active object reflection. 3D scanners are used in different applications such as science, engineering and entertainment; they are classified into contact and contactless scanners, the latter being the most widely used although more expensive. This low-cost prototype performs a vertical scan of the object using a fixed camera and a mobile horizontal laser line, which is deformed depending on the three-dimensional surface of the solid. The deformation detected by the camera was analyzed using digital image processing, which allows determining the 3D coordinates by triangulation. The obtained information is processed by a Matlab script, which gives the user a point cloud corresponding to each horizontal scan. The obtained results show acceptable quality and significant details of the digitized objects, making this prototype (built on a LEGO Mindstorms NXT kit) a versatile and cheap tool that can be used in many applications, mainly by engineering students.
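    The triangulation step can be sketched with simple geometry: place the camera at the origin looking along z, and the laser at a baseline offset b along x, aimed back toward the optical axis at angle theta. Intersecting the camera ray x = z·u/f (u is the laser spot's sensor coordinate, f the focal length) with the laser ray x = b - z·tan(theta) gives the depth. All numbers below are hypothetical, not the prototype's calibration:

```python
import math

def depth_from_pixel(u, f, b, theta):
    """Triangulated depth z for a laser spot imaged at sensor coordinate u.

    Derived by intersecting the camera ray x = z*u/f with the laser ray
    x = b - z*tan(theta): z*u/f = b - z*tan(theta)  =>  z = b/(u/f + tan(theta)).
    """
    return b / (u / f + math.tan(theta))

# Illustrative values: 8 mm focal length, 100 mm baseline, laser tilted 30 deg.
f, b, theta = 8.0, 100.0, math.radians(30.0)
for u in (0.0, 0.5, 1.0):  # sensor coordinate in mm
    print(f"u = {u:.1f} mm -> depth = {depth_from_pixel(u, f, b, theta):.1f} mm")
```

    Larger lateral displacement of the laser line on the sensor maps to a nearer surface, which is exactly the "deformation" the scanner measures.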

  3. Troubleshooting digital macro photography for image acquisition and the analysis of biological samples.

    PubMed

    Liepinsh, Edgars; Kuka, Janis; Dambrova, Maija

    2013-01-01

    For years, image acquisition and analysis have been an important part of life science experiments to ensure the adequate and reliable presentation of research results. Since the development of digital photography and digital planimetric methods for image analysis approximately 20 years ago, new equipment and technologies have emerged, which have increased the quality of image acquisition and analysis. Different techniques are available to measure the size of stained tissue samples in experimental animal models of disease; however, the most accurate method is digital macro photography with software that is based on planimetric analysis. In this study, we described the methodology for the preparation of infarcted rat heart and brain tissue samples before image acquisition, digital macro photography techniques and planimetric image analysis. These methods are useful in the macro photography of biological samples and subsequent image analysis. In addition, the techniques that are described in this study include the automated analysis of digital photographs to minimize user input and exclude the risk of researcher-generated errors or bias during image analysis.
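    Planimetric analysis of a digital macro photograph ultimately reduces to counting segmented pixels and applying a spatial calibration, typically obtained from a ruler included in the frame. A toy sketch (the mask, scale, and sample geometry are invented):

```python
import numpy as np

# Hypothetical segmented photograph: True marks infarcted (pale) pixels.
mask = np.zeros((100, 100), dtype=bool)
mask[20:40, 30:80] = True  # a 20 x 50 block = 1000 "infarct" pixels

# Spatial calibration from a ruler in the frame (assumed): 0.05 mm per pixel.
mm_per_px = 0.05
area_mm2 = mask.sum() * mm_per_px ** 2   # planimetric area
fraction = mask.mean()                   # infarct fraction of the section

print(f"{area_mm2:.2f} mm^2, {100 * fraction:.1f} % of section")
```

    Automating the segmentation step, as the abstract advocates, removes the user input that makes manual outlining prone to bias.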

  4. Digital image processing and analysis for activated sludge wastewater treatment.

    PubMed

    Khan, Muhammad Burhan; Lee, Xue Yong; Nisar, Humaira; Ng, Choon Aun; Yeap, Kim Ho; Malik, Aamir Saeed

    2015-01-01

    Activated sludge systems are generally used in wastewater treatment plants for processing domestic influent. Conventionally, activated sludge wastewater treatment is monitored by measuring physico-chemical parameters such as total suspended solids (TSSol), sludge volume index (SVI) and chemical oxygen demand (COD). These tests are conducted in the laboratory and take many hours to give the final measurement. Digital image processing and analysis offers a better alternative, not only to monitor and characterize the current state of activated sludge but also to predict the future state. The characterization by image processing and analysis is done by correlating the time evolution of parameters extracted by image analysis of flocs and filaments with the physico-chemical parameters. This chapter briefly reviews activated sludge wastewater treatment and the procedures of image acquisition, preprocessing, segmentation and analysis in the specific context of activated sludge wastewater treatment. In the latter part, additional procedures such as z-stacking and image stitching are introduced for wastewater image preprocessing, which have not previously been used in the context of activated sludge. Different preprocessing and segmentation techniques are proposed, along with a survey of imaging procedures reported in the literature. Finally, the image analysis based morphological parameters and the correlation of these parameters with regard to monitoring and prediction of activated sludge are discussed. It is observed that image analysis can play a very useful role in the monitoring of activated sludge wastewater treatment plants.
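    A typical segmentation step for floc images is a global threshold such as Otsu's method, which picks the intensity that maximizes the between-class variance of the histogram. A compact numpy sketch on a synthetic micrograph (the intensities are invented; this illustrates the general technique, not the chapter's specific pipeline):

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    # Otsu's method: choose the threshold maximizing between-class variance.
    hist, edges = np.histogram(img, bins=nbins)
    p = hist / hist.sum()
    w0 = np.cumsum(p)                       # class-0 probability
    mu = np.cumsum(p * np.arange(nbins))    # cumulative mean (bin index units)
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    k = np.nanargmax(sigma_b)
    return edges[k + 1]

rng = np.random.default_rng(1)
# Synthetic micrograph: dark background plus one bright floc region.
img = rng.normal(50, 5, (64, 64))
img[20:44, 20:44] = rng.normal(150, 5, (24, 24))

t = otsu_threshold(img)
flocs = img > t
print(f"threshold ~ {t:.0f}, floc pixels: {flocs.sum()}")
```

    From the binary mask, morphological parameters such as floc area and filament length can then be extracted and tracked over time.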

  5. Possible overlapping time frames of acquisition and consolidation phases in object memory processes: a pharmacological approach

    PubMed Central

    Akkerman, Sven; Blokland, Arjan

    2016-01-01

    In previous studies, we have shown that acetylcholinesterase inhibitors and phosphodiesterase inhibitors (PDE-Is) are able to improve object memory by enhancing acquisition processes. On the other hand, only PDE-Is improve consolidation processes. Here we show that the cholinesterase inhibitor donepezil also improves memory performance when administered within 2 min after the acquisition trial. Likewise, both PDE5-I and PDE4-I reversed the scopolamine deficit model when administered within 2 min after the learning trial. PDE5-I was effective up to 45 min after the acquisition trial and PDE4-I was effective when administered between 3 and 5.5 h after the acquisition trial. Taken together, our study suggests that acetylcholine, cGMP, and cAMP are all involved in acquisition processes and that cGMP and cAMP are also involved in early and late consolidation processes, respectively. Most important, these pharmacological studies suggest that acquisition processes continue for some time after the learning trial where they share a short common time frame with early consolidation processes. Additional brain concentration measurements of the drugs suggest that these acquisition processes can continue up to 4–6 min after learning. PMID:26670184

  6. Detection of pseudoaneurysm of the left ventricle by fast imaging employing steady-state acquisition (FIESTA) magnetic resonance imaging.

    PubMed

    Rerkpattanapipat, Pairoj; Mazur, Wojciech; Link, Kerry M; Clark, Hollins P; Hundley, W Gregory

    2003-01-01

    This report highlights the importance of interpreting images throughout the course of a dobutamine MRI stress test. Upon review of the baseline images, the left ventricular (LV) endocardium was not well seen due to flow artifacts associated with low intracavitary blood-flow velocity resulting from a prior myocardial infarction. Physicians implemented a cine fast imaging employing steady-state acquisition (FIESTA) technique that was not subject to low-flow artifact within the LV cavity. With heightened image clarity, physicians unexpectedly identified an LV pseudoaneurysm.

  7. Development and application of a high speed digital data acquisition technique to study steam bubble collapse using particle image velocimetry

    SciTech Connect

    Schmidl, W.D.

    1992-08-01

    The use of a Particle Image Velocimetry (PIV) method, which uses digital cameras for data acquisition, for studying high speed fluid flows is usually limited by the digital camera's frame acquisition rate. The velocity of the fluid under study has to be limited to ensure that the tracer seeds suspended in the fluid remain in the camera's focal plane for at least two consecutive images. However, the use of digital cameras for data acquisition is desirable to simplify and expedite the data analysis process. A technique was developed which will measure fluid velocities with PIV techniques using two successive digital images and two different framing rates simultaneously. The first part of the method will measure changes which occur to the flow field at the relatively slow framing rate of 53.8 ms. The second part will measure changes to the same flow field at the relatively fast framing rate of 100 to 320 μs. The effectiveness of this technique was tested by studying the collapse of steam bubbles in a subcooled tank of water, a relatively high-speed phenomenon. The tracer particles were recorded and velocity vectors for the fluid were obtained far from the steam bubble collapse.
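    In PIV, the velocity vectors come from locating the displacement peak of the cross-correlation between corresponding interrogation windows of two successive images; the shift of the peak, divided by the frame interval, gives the local velocity. A minimal FFT-based sketch on a synthetic tracer pattern (the window size and shift are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Frame 1: a random tracer-particle pattern; frame 2: the same pattern
# shifted by (dy, dx) = (3, 5) pixels, as if the seeded fluid moved.
frame1 = rng.random((64, 64))
true_shift = (3, 5)
frame2 = np.roll(frame1, true_shift, axis=(0, 1))

# Circular cross-correlation via the FFT; the peak sits at the shift.
corr = np.fft.ifft2(np.fft.fft2(frame2) * np.conj(np.fft.fft2(frame1))).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
print(f"recovered shift: ({dy}, {dx})")  # -> (3, 5)
```

    Dividing the recovered pixel shift by the pixel pitch and the inter-frame time (53.8 ms or 100 to 320 μs in the technique above) converts it to a velocity vector.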

  9. Science Process Skill as a Predictor of Acquisition of Knowledge Among Preservice Teachers.

    ERIC Educational Resources Information Center

    Flehinger, Lenore Edith

    This study discusses the relationships between the level of science process skills and the degree of acquisition of new science knowledge. Participants included 257 preservice teachers enrolled in an elementary science methods course. A test, Test of Oceanographic Knowledge, was designed and used to define the level of knowledge acquisition. Level…

  10. 360 degree realistic 3D image display and image processing from real objects

    NASA Astrophysics Data System (ADS)

    Luo, Xin; Chen, Yue; Huang, Yong; Tan, Xiaodi; Horimai, Hideyoshi

    2016-12-01

    A 360-degree realistic 3D image display system based on a direct light scanning method, the so-called Holo-Table, is introduced in this paper. High-density directional continuous 3D motion images can be displayed easily with only one spatial light modulator. Using a holographic screen as the beam deflector, a 360-degree full horizontal viewing angle was achieved. As an accompanying part of the system, a CMOS camera based image acquisition platform was built to feed the display engine, which can perform full 360-degree continuous imaging of the sample at the center. Customized image processing techniques such as scaling, rotation and format transformation were also developed and embedded into the system control software platform. Finally, several samples were imaged to demonstrate the capability of our system.

  11. Calibration of a flood inundation model using a SAR image: influence of acquisition time

    NASA Astrophysics Data System (ADS)

    Van Wesemael, Alexandra; Gobeyn, Sacha; Neal, Jeffrey; Lievens, Hans; Van Eerdenbrugh, Katrien; De Vleeschouwer, Niels; Schumann, Guy; Vernieuwe, Hilde; Di Baldassarre, Giuliano; De Baets, Bernard; Bates, Paul; Verhoest, Niko

    2016-04-01

    Flood risk management has always been in search of effective prediction approaches. As such, the calibration of flood inundation models is continuously improved. In practice, this calibration process consists of finding the optimal roughness parameters, both channel and floodplain Manning coefficients, since these values considerably influence the flood extent in a catchment. In addition, Synthetic Aperture Radar (SAR) images have proven to be a very useful tool in calibrating the flood extent. These images can distinguish between wet (flooded) and dry (non-flooded) pixels through the intensity of backscattered radio waves. To date, however, satellite overpass often occurs only once during a flood event. Therefore, this study is specifically concerned with the effect of the timing of the SAR data acquisition on calibration results. In order to model the flood extent, the raster-based inundation model LISFLOOD-FP is used together with a high resolution synthetic aperture radar image (ERS-2 SAR) of a flood event of the river Dee, Wales, in December 2006. As only one satellite image of the considered case study is available, a synthetic framework is implemented in order to generate a time series of SAR observations. These synthetic observations are then used to calibrate the model at different time instants. In doing so, the sensitivity of the model output to the channel and floodplain Manning coefficients is studied through time. The results suggest clear differences in the spatial variability with which water is held within the floodplain, and these differences appear to vary through time. Calibration by means of satellite flood observations obtained from the rising or receding limb generally leads to more reliable results than calibration with near-peak-flow observations.
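    Calibration against a SAR flood map typically scores each candidate pair of Manning coefficients with a binary fit measure between the modelled and observed wet/dry extents. One widely used choice, sketched below on toy maps, is F = A / (A + B + C), where A counts pixels wet in both maps, B pixels wet only in the model, and C pixels wet only in the observation (a generic illustration, not necessarily the paper's exact metric):

```python
import numpy as np

def flood_fit(model_wet, sar_wet):
    """Binary flood-extent fit F = A / (A + B + C) between two wet masks."""
    a = np.sum(model_wet & sar_wet)    # wet in both
    b = np.sum(model_wet & ~sar_wet)   # model over-prediction
    c = np.sum(~model_wet & sar_wet)   # model under-prediction
    return a / (a + b + c)

# Toy maps: the model slightly over-predicts the observed extent.
obs = np.zeros((10, 10), dtype=bool); obs[2:6, 2:8] = True   # 24 wet pixels
sim = np.zeros((10, 10), dtype=bool); sim[2:7, 2:8] = True   # 30 wet pixels
print(f"F = {flood_fit(sim, obs):.2f}")
```

    Repeating this score over a grid of channel and floodplain Manning values, for observations synthesized at different times, is how the timing sensitivity described above can be quantified.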

  12. 2D imaging and 3D sensing data acquisition and mutual registration for painting conservation

    NASA Astrophysics Data System (ADS)

    Fontana, Raffaella; Gambino, Maria Chiara; Greco, Marinella; Marras, Luciano; Pampaloni, Enrico M.; Pelagotti, Anna; Pezzati, Luca; Poggi, Pasquale

    2004-12-01

    We describe the application of 2D and 3D data acquisition and mutual registration to the conservation of paintings. RGB color image acquisition, IR and UV fluorescence imaging, together with the more recent hyperspectral imaging (32 bands) are among the most useful techniques in this field. They generally are meant to provide information on the painting materials, on the employed techniques and on the object state of conservation. However, only when the various images are perfectly registered on each other and on the 3D model, no ambiguity is possible and safe conclusions may be drawn. We present the integration of 2D and 3D measurements carried out on two different paintings: "Madonna of the Yarnwinder" by Leonardo da Vinci, and "Portrait of Lionello d'Este", by Pisanello, both painted in the XV century.

  14. Computer acquisition of 3D images utilizing dynamic speckles

    NASA Astrophysics Data System (ADS)

    Kamshilin, Alexei A.; Semenov, Dmitry V.; Nippolainen, Ervin; Raita, Erik

    2006-05-01

    We present a novel technique for fast non-contact and continuous profile measurements of rough surfaces by use of dynamic speckles. The dynamic speckle pattern is generated when the laser beam scans the surface under study. The most impressive feature of the proposed technique is its ability to work at extremely high scanning speeds of hundreds of meters per second. The technique is based on continuous frequency measurements of the light-power modulation after spatial filtering of the scattered light. A complete optical-electronic system was designed and fabricated for fast measurement of the speckle velocity, its recalculation into distance, and further data acquisition into a computer. The measured surface profile is displayed on a PC monitor in real time. The response time of the measuring system is below 1 μs. Important parameters of the system such as accuracy, range of measurements, and spatial resolution are analyzed. Limits of the spatial filtering technique used for continuous tracking of the speckle-pattern velocity are shown. Possible ways of further improving the measurement accuracy are demonstrated. Owing to its extremely fast operation, the proposed technique could be applied for online control of the 3D shape of complex objects (e.g., electronic circuits) during their assembly.

  15. Computer-Aided Process and Tools for Mobile Software Acquisition

    DTIC Science & Technology

    2013-04-01

    file-based runtime verification. A case study of formally specifying, validating, and verifying a set of requirements for an iPhone application that...Center for Strategic and International Studies The Making of a DoD Acquisition Lead System Integrator (LSI) Paul Montgomery, Ron Carlson, and John...code against the execution trace of the mobile apps using log file-based runtime verification. A case study of formally specifying, validating, and

  16. The Use of Small Business Administration Section 8(a) Contractors in Automatic Data Processing Acquisitions.

    DTIC Science & Technology

    2007-11-02

    OFFICE OF THE INSPECTOR GENERAL THE USE OF SMALL BUSINESS ADMINISTRATION SECTION 8(a) CONTRACTORS IN AUTOMATIC DATA PROCESSING ACQUISITIONS...CENTER Department of Defense WASHINGTON D.C. 20301-7100 The following acronyms are used in this report. ADP Automatic Data Processing...Office Of The Inspector General: The Use Of Small Business Administration Section 8(a) Contractors in Automatic Data Processing Acquisitions Corporate

  17. The 2014 Broadband Acquisition and Imaging Operation (BAcIO) at Stromboli Volcano (Italy)

    NASA Astrophysics Data System (ADS)

    Scarlato, P.; Taddeucci, J.; Del Bello, E.; Gaudin, D.; Ricci, T.; Andronico, D.; Lodato, L.; Cannata, A.; Ferrari, F.; Orr, T. R.; Sesterhenn, J.; Plescher, R.; Baumgärtel, Y.; Harris, A. J. L.; Bombrun, M.; Barnie, T. D.; Houghton, B. F.; Kueppers, U.; Capponi, A.

    2014-12-01

    In May 2014, Stromboli volcano, one of the best natural laboratories for the study of weak explosive volcanism, hosted a large combination of state-of-the-art and prototype eruption monitoring technologies. Aiming to expand our parameterization capabilities for explosive eruption dynamics, we temporarily deployed in direct view of the active vents a range of imaging, acoustic, and seismic data acquisition systems. Imaging systems included: two high-speed visible cameras acquiring synchronized images at 500 and 1000 frames per second (fps); two thermal infrared forward looking (FLIR) cameras zooming into the active vents and acquiring at 50-200 fps; two FLIR cameras acquiring at lower (3-50 fps) frame rates with a broader field of view; one visible time-lapse camera; one UV camera system for the measurement of sulphur dioxide emission; and one drone equipped with a camcorder. Acoustic systems included: four broadband microphones (range of tens of kHz to 0.1 Hz), two of them co-located with one of the high-speed cameras and one co-located with one of the seismometers (see below); and an acoustic microphone array. This array included sixteen microphones with a circular arrangement located on a steep slope above the active vents. Seismic systems included two broadband seismometers, one of them co-located with one of the high-speed cameras, and one co-located with one of the microphones. The above systems were synchronized with a variety of methods, and temporarily added to the permanent monitoring networks already operating on the island. Observation focus was on pyroclast ejection processes extending from the shallow conduit, through their acceleration and interaction with the atmosphere, and to their dispersal and deposition. 
The 3-D distribution of bombs, the sources of jet noise in the explosions, the comparison between methods for estimating explosion properties, and the relations between erupted gas and magma volumes are some examples of the processes targeted.

  18. Quantitative evaluation of phase processing approaches in susceptibility weighted imaging

    NASA Astrophysics Data System (ADS)

    Li, Ningzhi; Wang, Wen-Tung; Sati, Pascal; Pham, Dzung L.; Butman, John A.

    2012-03-01

Susceptibility weighted imaging (SWI) takes advantage of the local variation in susceptibility between different tissues to enable highly detailed visualization of the cerebral venous system and sensitive detection of intracranial hemorrhages. Thus, it has been increasingly used in magnetic resonance imaging studies of traumatic brain injury as well as other intracranial pathologies. In SWI, magnitude information is combined with phase information to enhance the susceptibility induced image contrast. Because of global susceptibility variations, the rate of phase accumulation varies widely across the image, resulting in phase wrapping artifacts that interfere with the local assessment of phase variation. Homodyne filtering is a common approach to eliminate this global phase variation. However, the filter size requires careful selection in order to preserve image contrast and avoid errors resulting from residual phase wraps. An alternative approach is to apply phase unwrapping prior to high-pass filtering. A suitable phase unwrapping algorithm guarantees no residual phase wraps, but additional computational steps are required. In this work, we quantitatively evaluate these two phase processing approaches on both simulated and real data using different filters and cutoff frequencies. Our analysis leads to an improved understanding of the relationship between phase wraps, susceptibility effects, and acquisition parameters. Although homodyne filtering approaches are faster and more straightforward, phase unwrapping approaches perform more accurately in a wider variety of acquisition scenarios.
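The homodyne approach evaluated above can be sketched in a few lines: the complex image is divided by a low-pass-filtered copy of itself, which removes the slowly varying global phase (and its wraps) before the local phase contrast is read off. A minimal NumPy illustration; the Hann window and its size stand in for the filter-size choice the authors evaluate and are not their actual implementation:

```python
import numpy as np

def homodyne_highpass_phase(complex_img, filter_size=32):
    """High-pass filter the phase of a complex MR image by dividing out
    a k-space low-pass (homodyne) estimate of the slowly varying phase."""
    ny, nx = complex_img.shape
    k = np.fft.fftshift(np.fft.fft2(complex_img))
    # Central k-space window (Hann) captures the global, slowly varying phase
    win = np.zeros((ny, nx))
    cy, cx = ny // 2, nx // 2
    h = np.outer(np.hanning(filter_size), np.hanning(filter_size))
    win[cy - filter_size // 2: cy + filter_size // 2,
        cx - filter_size // 2: cx + filter_size // 2] = h
    low = np.fft.ifft2(np.fft.ifftshift(k * win))
    # Dividing by the low-pass image removes the global phase variation
    ratio = complex_img / (low + 1e-12)
    return np.angle(ratio)
```

Enlarging `filter_size` removes more of the global variation but also suppresses more of the susceptibility contrast, which is exactly the trade-off the study quantifies.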

  19. Eliminating "Hotspots" in Digital Image Processing

    NASA Technical Reports Server (NTRS)

    Salomon, P. M.

    1984-01-01

    Signals from defective picture elements rejected. Image processing program for use with charge-coupled device (CCD) or other mosaic imager augmented with algorithm that compensates for common type of electronic defect. Algorithm prevents false interpretation of "hotspots". Used for robotics, image enhancement, image analysis and digital television.
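A common compensation of this kind replaces any pixel that sits far above its local neighborhood with the neighborhood median, so a defective element is not misread as a real bright feature. A hedged sketch (NTRS does not publish the program's exact algorithm; the 3x3 median and 5-sigma threshold here are illustrative choices):

```python
import numpy as np
from scipy.ndimage import median_filter

def reject_hotspots(image, n_sigma=5.0):
    """Replace 'hotspot' pixels (far above their local neighborhood)
    with the local median, returning the cleaned image and a mask."""
    med = median_filter(image, size=3)
    resid = image - med
    hot = resid > n_sigma * resid.std()
    out = image.copy()
    out[hot] = med[hot]       # substitute the neighborhood median
    return out, hot
```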

  20. Magnetic resonance imaging acquisition techniques intended to decrease movement artefact in paediatric brain imaging: a systematic review.

    PubMed

    Woodfield, Julie; Kealey, Susan

    2015-08-01

    Attaining paediatric brain images of diagnostic quality can be difficult because of young age or neurological impairment. The use of anaesthesia to reduce movement in MRI increases clinical risk and cost, while CT, though faster, exposes children to potentially harmful ionising radiation. MRI acquisition techniques that aim to decrease movement artefact may allow diagnostic paediatric brain imaging without sedation or anaesthesia. We conducted a systematic review to establish the evidence base for ultra-fast sequences and sequences using oversampling of k-space in paediatric brain MR imaging. Techniques were assessed for imaging time, occurrence of movement artefact, the need for sedation, and either image quality or diagnostic accuracy. We identified 24 relevant studies. We found that ultra-fast techniques had shorter imaging acquisition times compared to standard MRI. Techniques using oversampling of k-space required equal or longer imaging times than standard MRI. Both ultra-fast sequences and those using oversampling of k-space reduced movement artefact compared with standard MRI in unsedated children. Assessment of overall diagnostic accuracy was difficult because of the heterogeneous patient populations, imaging indications, and reporting methods of the studies. In children with shunt-treated hydrocephalus there is evidence that ultra-fast MRI is sufficient for the assessment of ventricular size.

  1. Halftoning and Image Processing Algorithms

    DTIC Science & Technology

    1999-02-01

screening techniques with the quality advantages of error diffusion in the halftoning of color maps, and on color image enhancement for halftone ...image quality. Our goals in this research were to advance the understanding in image science for our new halftone algorithm and to contribute to...image retrieval and noise theory for such imagery. In the field of color halftone printing, research was conducted on deriving a theoretical model of our
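For context, the error diffusion that the report pairs with screening works by quantizing each pixel and pushing the quantization error onto not-yet-processed neighbors; the classic Floyd-Steinberg weights (7/16, 3/16, 5/16, 1/16) give a minimal grayscale version. This is background illustration only, not the report's color algorithm:

```python
import numpy as np

def floyd_steinberg(gray):
    """Binary halftone of a grayscale image in [0, 1] by error diffusion:
    each pixel's quantization error is distributed to unprocessed neighbors."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```

Because the error is conserved (except at the image border), the local average of the binary output tracks the input gray level, which is the quality advantage the report attributes to error diffusion.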

  2. Combining image-processing and image compression schemes

    NASA Technical Reports Server (NTRS)

    Greenspan, H.; Lee, M.-C.

    1995-01-01

An investigation into the combining of image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented on the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection schemes, can gain from the added image resolution via the enhancement.

  3. Imageability, age of acquisition, and frequency factors in acronym comprehension.

    PubMed

    Playfoot, David; Izura, Cristina

    2013-06-01

    In spite of their unusual orthographic and phonological form, acronyms (e.g., BBC, HIV, NATO) can become familiar to the reader, and their meaning can be accessed well enough that they are understood. The factors in semantic access for acronym stimuli were assessed using a word association task. Two analyses examined the time taken to generate a word association response to acronym cues. Responses were recorded more quickly to cues that elicited a large proportion of semantic responses, and those that were high in associative strength. Participants were shown to be faster to respond to cues which were imageable or early acquired. Frequency was not a significant predictor of word association responses. Implications for theories of lexical organisation are discussed.

  4. Multiple-CCD stereo acquisition system for high-speed imaging

    NASA Astrophysics Data System (ADS)

    Yafuso, Eiji S.; Sass, David T.; Dereniak, Eustace L.; Hoffman, Steven; Gonzalez, Rene; Gonzalez, Martin; Rettke, Douglas

    1998-10-01

A high-speed 3D imaging system has been developed using multiple independent CCD cameras with sequentially triggered acquisition and individual field storage capability. The system described here utilizes sixteen independent cameras. A stereo alignment and triggering scheme arranges the cameras into two angularly separated banks of eight cameras each. By simultaneously triggering correlated stereo pairs, an eight-frame sequence of stereo images is captured. The delays can be individually adjusted to yield a greater number of acquired frames during more rapid segments of the event, and the individual integration periods may be adjusted to ensure adequate radiometric response while minimizing image blur. Representation of the data as a 3D sequence introduces the issue of independent camera coordinate registration with the real scene. A discussion of the forward and inverse transform operator for the digital data is provided along with a description of the acquisition system.

  5. Evolutionary partial differential equations for biomedical image processing.

    PubMed

    Sarti, Alessandro; Mikula, Karol; Sgallari, Fiorella; Lamberti, Claudio

    2002-04-01

We present a model for processing space-time image sequences and apply it to 3D echo-cardiography. The non-linear evolutionary equations filter the sequence while preserving space-time coherent structures. They have been developed using ideas of regularized Perona-Malik anisotropic diffusion and geometrical diffusion of mean curvature flow type (Malladi-Sethian), combined with the Galilean invariant movie multi-scale analysis of Alvarez et al. A discretization of the space-time filtering equations by means of the finite volume method is discussed in detail. Computational results in processing of 3D echo-cardiographic sequences obtained by rotational acquisition technique and by real-time 3D echo volumetrics acquisition technique are presented. Quantitative error estimation is also provided.
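The Perona-Malik idea the authors build on can be illustrated with the basic explicit scheme: diffusion is driven by neighbor differences, attenuated by an edge-stopping function g so that strong gradients (edges) survive while noise is smoothed. A 2-D single-frame sketch for intuition only, not the authors' regularized space-time finite-volume discretization:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Explicit-scheme Perona-Malik diffusion: smooth within regions while
    the edge-stopping function g suppresses flux across strong gradients."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
    for _ in range(n_iter):
        # One-sided differences to the four neighbors (periodic boundary)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

The time step dt must stay at or below 0.25 for stability of this 4-neighbor explicit scheme; kappa sets the gradient magnitude above which diffusion is effectively switched off.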

  6. A New Method of Theodolite Calibration Based on Image Processing Technology

    NASA Astrophysics Data System (ADS)

    Zou, Hui-Hui; Wu, Hong-Bing; Chen, Di

Aiming at improving the theodolite calibration method for space tracking ships, a calibration device consisting of hardware and software is designed in this paper. The hardware part is a set of optical acquisition components that includes a CCD, a lens and a 0.2" collimator, while the software part contains an image acquisition module, an image processing module, a data processing module and an interface display module. During the calibration process, new methods of image denoising and image character extraction are applied to improve the precision of image measurement. The experimental results show that the theodolite calibration device, applying this image processing technology, meets the calibration criteria for theodolite errors and is more accurate than the manual reading method under the same dockside conditions.

  7. Acquisition of Flexible Image Recognition by Coupling of Reinforcement Learning and a Neural Network

    NASA Astrophysics Data System (ADS)

    Shibata, Katsunari; Kawano, Tomohiko

The authors have proposed a very simple autonomous learning system consisting of one neural network (NN), whose inputs are raw sensor signals and whose outputs are directly passed to actuators as control signals, and which is trained by using reinforcement learning (RL). However, the prevailing opinion seems to be that such simple learning systems do not actually work on complicated tasks in the real world. In this paper, with a view to developing higher functions in robots, the authors bring up the necessity to introduce autonomous learning of a massively parallel and cohesively flexible system with massive inputs based on the consideration about the brain architecture and the sequential property of our consciousness. The authors also bring up the necessity to place more importance on “optimization” of the total system under a uniform criterion than “understandability” for humans. Thus, the authors attempt to stress the importance of their proposed system when considering the future research on robot intelligence. The experimental result in a real-world-like environment shows that image recognition from as many as 6240 visual signals can be acquired through RL under various backgrounds and light conditions without providing any knowledge about image processing or the target object. It works even for camera image inputs that were not experienced in learning. In the hidden layer, template-like representation, division of roles between hidden neurons, and representation to detect the target uninfluenced by light condition or background were observed after learning. The autonomous acquisition of such useful representations or functions suggests the potential of this approach for avoiding the frame problem and for developing higher functions.

  8. Reducing respiratory effect in motion correction for EPI images with sequential slice acquisition order.

    PubMed

    Cheng, Hu; Puce, Aina

    2014-04-30

Motion correction is critical for data analysis of fMRI time series. Most motion correction algorithms treat the head as a rigid body. Respiration of the subject, however, can alter the static magnetic field in the head and result in motion-like slice shifts for echo planar imaging (EPI). The delay of acquisition between slices causes a phase difference in respiration so that the shifts vary with slice positions. To characterize the effect of respiration on motion correction, we acquired fast sampled fMRI data using multi-band EPI and then simulated different acquisition schemes. Our results indicated that respiration introduces additional noise after motion correction. The signal variation between volumes after motion correction increases when the effective TR increases from 675 ms to 2025 ms. This problem can be corrected if slices are acquired sequentially. For EPI with a sequential acquisition scheme, we propose to divide the image volumes into several segments so that slices within each segment are acquired close in time and then perform motion correction on these segments separately. We demonstrated that the temporal signal-to-noise ratio (TSNR) was increased when the motion correction was performed on the segments separately rather than on the whole image. This enhancement of TSNR was not evenly distributed across the segments and was not observed for interleaved acquisition. The level of increase was higher for superior slices. On superior slices the percentage of TSNR gain was comparable to that using image based retrospective correction for respiratory noise. Our results suggest that separate motion correction on segments is highly recommended for sequential acquisition schemes, at least for slices distal to the chest.
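The proposed segment-wise correction amounts to splitting the slice stack into groups of temporally adjacent slices and estimating one correction per group instead of one per volume. The sketch below is a strong simplification assuming translation-only motion estimated by integer phase correlation (real fMRI motion correction uses 6-parameter rigid registration); `correct_by_segments` and its helper are illustrative names, not from the paper:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def phase_corr_shift(ref, img):
    """Estimate the integer (dy, dx) shift to apply to img to match ref."""
    R = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    corr = np.fft.ifft2(R / (np.abs(R) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    ny, nx = ref.shape
    return (dy if dy <= ny // 2 else dy - ny,
            dx if dx <= nx // 2 else dx - nx)

def correct_by_segments(volume, reference, n_segments=3):
    """Motion-correct a slice stack in segments of temporally adjacent
    slices (sequential acquisition), rather than as one rigid body."""
    out = np.empty(volume.shape, dtype=float)
    for seg in np.array_split(np.arange(volume.shape[0]), n_segments):
        # One shift per segment, estimated from its mean slice
        dy, dx = phase_corr_shift(reference[seg].mean(axis=0),
                                  volume[seg].mean(axis=0))
        for z in seg:
            out[z] = nd_shift(volume[z].astype(float), (dy, dx), order=1)
    return out
```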

  9. Whole Heart Coronary Imaging with Flexible Acquisition Window and Trigger Delay

    PubMed Central

    Kawaji, Keigo; Foppa, Murilo; Roujol, Sébastien; Akçakaya, Mehmet; Nezafat, Reza

    2015-01-01

    Coronary magnetic resonance imaging (MRI) requires a correctly timed trigger delay derived from a scout cine scan to synchronize k-space acquisition with the quiescent period of the cardiac cycle. However, heart rate changes between breath-held cine and free-breathing coronary imaging may result in inaccurate timing errors. Additionally, the determined trigger delay may not reflect the period of minimal motion for both left and right coronary arteries or different segments. In this work, we present a whole-heart coronary imaging approach that allows flexible selection of the trigger delay timings by performing k-space sampling over an enlarged acquisition window. Our approach addresses coronary motion in an interactive manner by allowing the operator to determine the temporal window with minimal cardiac motion for each artery region. An electrocardiogram-gated, k-space segmented 3D radial stack-of-stars sequence that employs a custom rotation angle is developed. An interactive reconstruction and visualization platform is then employed to determine the subset of the enlarged acquisition window for minimal coronary motion. Coronary MRI was acquired on eight healthy subjects (5 male, mean age = 37 ± 18 years), where an enlarged acquisition window of 166–220 ms was set 50 ms prior to the scout-derived trigger delay. Coronary visualization and sharpness scores were compared between the standard 120 ms window set at the trigger delay, and those reconstructed using a manually adjusted window. The proposed method using manual adjustment was able to recover delineation of five mid and distal right coronary artery regions that were otherwise not visible from the standard window, and the sharpness scores improved in all coronary regions using the proposed method. This paper demonstrates the feasibility of a whole-heart coronary imaging approach that allows interactive selection of any subset of the enlarged acquisition window for a tailored reconstruction for each branch

  11. Temporal optimisation of image acquisition for land cover classification with Random Forest and MODIS time-series

    NASA Astrophysics Data System (ADS)

    Nitze, Ingmar; Barrett, Brian; Cawkwell, Fiona

    2015-02-01

The analysis and classification of land cover is one of the principal applications in terrestrial remote sensing. Due to the seasonal variability of different vegetation types and land surface characteristics, the ability to discriminate land cover types changes over time. Multi-temporal classification can help to improve the classification accuracies, but different constraints, such as financial restrictions or atmospheric conditions, may impede their application. The optimisation of image acquisition timing and frequencies can help to increase the effectiveness of the classification process. For this purpose, the Feature Importance (FI) measure of the state-of-the-art machine learning method Random Forest was used to determine the optimal image acquisition periods for a general (Grassland, Forest, Water, Settlement, Peatland) and Grassland specific (Improved Grassland, Semi-Improved Grassland) land cover classification in central Ireland based on a 9-year time-series of MODIS Terra 16 day composite data (MOD13Q1). Feature Importances for each acquisition period of the Enhanced Vegetation Index (EVI) and Normalised Difference Vegetation Index (NDVI) were calculated for both classification scenarios. In the general land cover classification, the months December and January showed the highest, and July and August the lowest separability for both VIs over the entire nine-year period. This temporal separability was reflected in the classification accuracies, where the optimal choice of image dates outperformed the worst image date by 13% using NDVI and 5% using EVI on a mono-temporal analysis. With the addition of the next best image periods to the data input the classification accuracies converged quickly to their limit at around 8-10 images. The binary classification schemes, using two classes only, showed a stronger seasonal dependency with a higher intra-annual, but lower inter-annual variation. 
Nonetheless, anomalous weather conditions, such as the cold winter of
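The Feature Importance measure used in the study is exposed directly by common Random Forest implementations. A toy scikit-learn sketch (an assumed stand-in for the authors' toolchain), in which each feature plays the role of one acquisition period's vegetation index and only "period 0" actually separates the classes:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical data: 400 pixels, 8 acquisition periods; only period 0
# carries class information, the rest are noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 8))
y = (X[:, 0] > 0).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]
print("most informative acquisition period:", ranking[0])
```

Ranking the periods by `feature_importances_` and adding images in that order is the essence of the acquisition-timing optimisation described above.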

  12. The acquisition process of musical tonal schema: implications from connectionist modeling

    PubMed Central

    Matsunaga, Rie; Hartono, Pitoyo; Abe, Jun-ichi

    2015-01-01

Using connectionist modeling, we address fundamental questions concerning the acquisition process of musical tonal schema of listeners. Compared to models of previous studies, our connectionist model (Learning Network for Tonal Schema, LeNTS) was better equipped to fulfill three basic requirements. Specifically, LeNTS was equipped with a learning mechanism, bound by culture-general properties, and trained by sufficient melody materials. When exposed to Western music, LeNTS acquired musical ‘scale’ sensitivity early and ‘harmony’ sensitivity later. The order of acquisition of scale and harmony sensitivities shown by LeNTS was consistent with the culture-specific acquisition order shown by musically westernized children. The implications of these results for the acquisition process of a tonal schema of listeners are as follows: (a) the acquisition process may entail small and incremental changes, rather than large and stage-like changes, in corresponding neural circuits; (b) the speed of schema acquisition may mainly depend on musical experiences rather than maturation; and (c) the learning principles of schema acquisition may be culturally invariant while the acquired tonal schemas vary with the culture-specific music to which listeners are exposed. PMID:26441725

  13. Image processing for medical diagnosis using CNN

    NASA Astrophysics Data System (ADS)

    Arena, Paolo; Basile, Adriano; Bucolo, Maide; Fortuna, Luigi

    2003-01-01

Medical diagnosis is one of the most important areas in which image processing procedures are usefully applied. Image processing is an important phase for improving the accuracy of both diagnosis procedures and surgical operations. One of these fields is tumor/cancer detection by using Microarray analysis. The research studies in the Cancer Genetics Branch are mainly involved in a range of experiments including the identification of inherited mutations predisposing family members to malignant melanoma, prostate and breast cancer. In the bio-medical field real-time processing is very important, but image processing is often a quite time-consuming phase. Therefore, techniques able to speed up the elaboration play an important role. From this point of view, a novel approach to image processing has been developed in this work. The new idea is to use Cellular Neural Networks to investigate diagnostic images, such as Magnetic Resonance Imaging, Computed Tomography, and fluorescent cDNA microarray images.

  14. Amplitude image processing by diffractive optics.

    PubMed

    Cagigal, Manuel P; Valle, Pedro J; Canales, V F

    2016-02-22

In contrast to standard digital image processing, which operates on the detected image intensity, we propose to perform amplitude image processing. Amplitude processing, like low-pass or high-pass filtering, is carried out using diffractive optical elements (DOE), since these allow operating on the complex field amplitude before it has been detected. We show the procedure for designing the DOE that corresponds to each operation. Furthermore, we present an analysis of amplitude image processing performance. In particular, a DOE Laplacian filter is applied to simulated astronomical images for detecting two stars one Airy ring apart. We also check by numerical simulations that the use of a Laplacian amplitude filter produces less noisy images than standard digital image processing.
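Numerically, amplitude-domain filtering means applying the transfer function to the complex field before taking |·|² at detection. A sketch of a Fourier-domain Laplacian acting on the field amplitude of two nearby sources (an idealized simulation with a Gaussian amplitude spread, not the authors' DOE design procedure):

```python
import numpy as np

def laplacian_filter_amplitude(field):
    """Apply a Laplacian (transfer function -4*pi^2*|f|^2) to the complex
    field amplitude, as a DOE would, *before* detection."""
    ny, nx = field.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    H = -4 * np.pi ** 2 * (fx ** 2 + fy ** 2)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Two nearby coherent point sources with Gaussian amplitude spread
y, x = np.mgrid[0:64, 0:64]
psf = lambda cy, cx: np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / 8.0)
field = psf(32, 30) + psf(32, 34)                 # complex amplitude
detected = np.abs(laplacian_filter_amplitude(field)) ** 2   # detection
```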

  15. Programmable remapper for image processing

    NASA Technical Reports Server (NTRS)

    Juday, Richard D. (Inventor); Sampsell, Jeffrey B. (Inventor)

    1991-01-01

    A video-rate coordinate remapper includes a memory for storing a plurality of transformations on look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for certain defects of visually impaired people. The invention also includes means for shifting input pixels and means for scrolling the output matrix.
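The look-up-table principle behind the remapper is simple to state in code: each output pixel fetches its value from coordinates stored in a precomputed table, so an arbitrary operator-selected transform runs at a fixed cost per pixel. A minimal gather-style NumPy sketch (nearest-neighbor only; the patent's separate collective and interpolative processors are not modeled):

```python
import numpy as np

def remap(image, map_y, map_x):
    """Coordinate remap via a precomputed look-up table: output pixel (i, j)
    takes its value from input pixel (map_y[i, j], map_x[i, j]); a single
    input pixel may feed several output pixels."""
    return image[map_y, map_x]

# Example transform table: 2x magnification of the top-left quadrant
img = np.arange(16).reshape(4, 4)
i, j = np.mgrid[0:4, 0:4]
out = remap(img, i // 2, j // 2)
```

Swapping the tables swaps the transform, which is how such a device can offer operator-selectable mappings at video rate.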

  16. Quantitative high spatiotemporal imaging of biological processes

    NASA Astrophysics Data System (ADS)

    Borbely, Joe; Otterstrom, Jason; Mohan, Nitin; Manzo, Carlo; Lakadamyali, Melike

    2015-08-01

    Super-resolution microscopy has revolutionized fluorescence imaging providing access to length scales that are much below the diffraction limit. The super-resolution methods have the potential for novel discoveries in biology. However, certain technical limitations must be overcome for this potential to be fulfilled. One of the main challenges is the use of super-resolution to study dynamic events in living cells. In addition, the ability to extract quantitative information from the super-resolution images is confounded by the complex photophysics that the fluorescent probes exhibit during the imaging. Here, we will review recent developments we have been implementing to overcome these challenges and introduce new steps in automated data acquisition towards high-throughput imaging.

  17. A novel acquisition-reconstruction algorithm for surface magnetic resonance imaging.

    PubMed

    Franchi, Danilo; Sotgiu, Antonello; Placidi, Giuseppe

    2008-11-01

    In U-shaped, hand-size magnetic resonance surface scanners, imaging is performed along only one spatial direction, with the application of just one gradient (one-dimensional imaging). Lateral spatial resolution can be obtained by magnet displacement, but, in this case, resolution is very poor (on the order of some millimeters) and cannot be useful for high-resolution imaging applications. In this article, an innovative technique for acquisition and reconstruction of images produced by U-shaped, hand-size MRI surface scanners is presented. The proposed method is based on the acquisition of overlapping strips and an analytical reconstruction technique; it is capable of arbitrarily improving spatial lateral resolution without either using a second magnetic field gradient or making any assumptions about the imaged sample extension. Numerical simulations on synthetic images are reported demonstrating the method functionalities. The presented method also makes it possible to use U-shaped, hand-size MRI surface scanners for high-resolution biomedical applications, such as the imaging of skin lesions.

  18. Methodology for the Elimination of Reflection and System Vibration Effects in Particle Image Velocimetry Data Processing

    NASA Technical Reports Server (NTRS)

    Bremmer, David M.; Hutcheson, Florence V.; Stead, Daniel J.

    2005-01-01

    A methodology to eliminate model reflection and system vibration effects from post processed particle image velocimetry data is presented. Reflection and vibration lead to loss of data, and biased velocity calculations in PIV processing. A series of algorithms were developed to alleviate these problems. Reflections emanating from the model surface caused by the laser light sheet are removed from the PIV images by subtracting an image in which only the reflections are visible from all of the images within a data acquisition set. The result is a set of PIV images where only the seeded particles are apparent. Fiduciary marks painted on the surface of the test model were used as reference points in the images. By locating the centroids of these marks it was possible to shift all of the images to a common reference frame. This image alignment procedure as well as the subtraction of model reflection are performed in a first algorithm. Once the images have been shifted, they are compared with a background image that was recorded under no flow conditions. The second and third algorithms find the coordinates of fiduciary marks in the acquisition set images and the background image and calculate the displacement between these images. The final algorithm shifts all of the images so that fiduciary mark centroids lie in the same location as the background image centroids. This methodology effectively eliminated the effects of vibration so that unbiased data could be used for PIV processing. The PIV data used for this work was generated at the NASA Langley Research Center Quiet Flow Facility. The experiment entailed flow visualization near the flap side edge region of an airfoil model. Commercial PIV software was used for data acquisition and processing. In this paper, the experiment and the PIV acquisition of the data are described. The methodology used to develop the algorithms for reflection and system vibration removal is stated, and the implementation, testing and
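The two core steps of the methodology, reflection subtraction and fiduciary-mark alignment, can be sketched compactly. The helper names and the integer-pixel shift are illustrative simplifications of the paper's algorithms, which locate centroids of painted marks and shift all images to the background frame:

```python
import numpy as np

def centroid(mask):
    """Centroid (y, x) of the nonzero pixels, e.g. a fiduciary mark."""
    ys, xs = np.nonzero(mask)
    return ys.mean(), xs.mean()

def subtract_reflection(frame, reflection_only):
    """Remove the static laser-sheet reflection so only seed particles
    remain (negative residuals are clipped to zero)."""
    return np.clip(frame.astype(int) - reflection_only.astype(int), 0, None)

def align_to_background(frame, background, thresh=200):
    """Integer-pixel shift so the frame's fiduciary-mark centroid matches
    the background image's, compensating system vibration."""
    fy, fx = centroid(frame >= thresh)
    by, bx = centroid(background >= thresh)
    dy, dx = int(round(by - fy)), int(round(bx - fx))
    return np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
```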

  19. Handbook on COMTAL's Image Processing System

    NASA Technical Reports Server (NTRS)

    Faulcon, N. D.

    1983-01-01

    An image processing system is the combination of an image processor with other control and display devices plus the necessary software needed to produce an interactive capability to analyze and enhance image data. Such an image processing system installed at NASA Langley Research Center, Instrument Research Division, Acoustics and Vibration Instrumentation Section (AVIS) is described. Although much of the information contained herein can be found in the other references, it is hoped that this single handbook will give the user better access, in concise form, to pertinent information and usage of the image processing system.

  20. NASA Regional Planetary Image Facility image retrieval and processing system

    NASA Technical Reports Server (NTRS)

    Slavney, Susan

    1986-01-01

    The general design and analysis functions of the NASA Regional Planetary Image Facility (RPIF) image workstation prototype are described. The main functions of the MicroVAX II based workstation will be database searching, digital image retrieval, and image processing and display. The uses of the Transportable Applications Executive (TAE) in the system are described. File access and image processing programs use TAE tutor screens to receive parameters from the user and TAE subroutines are used to pass parameters to applications programs. Interface menus are also provided by TAE.

  1. Non-Equispaced System Matrix Acquisition for Magnetic Particle Imaging Based on Lissajous Node Points.

    PubMed

    Kaethner, Christian; Erb, Wolfgang; Ahlborg, Mandy; Szwargulski, Patryk; Knopp, Tobias; Buzug, Thorsten M

    2016-11-01

Magnetic Particle Imaging (MPI) is an emerging technology in the field of (pre)clinical imaging. The acquisition of a particle signal is realized along specific sampling trajectories covering a defined field of view (FOV). In a system matrix (SM) based reconstruction procedure, the commonly used acquisition path in MPI is a Lissajous trajectory. Such a trajectory features an inhomogeneous coverage of the FOV, i.e. the center region is sampled less densely than the regions towards the edges of the FOV. Conventionally, the respective SM acquisition and the subsequent reconstruction do not reflect this inhomogeneous coverage. Instead, they are performed on an equispaced grid. The objective of this work is to introduce a sampling grid that inherently features the aforementioned inhomogeneity by using node points of Lissajous trajectories. Paired with a tailored polynomial interpolation of the reconstructed MPI signal, the entire image can be recovered. It is the first time that such a trajectory related non-equispaced grid is used for image reconstruction on simulated and measured MPI data and it is shown that the number of sampling positions can be reduced, while the spatial resolution remains constant.
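The inhomogeneous coverage that motivates the non-equispaced grid is easy to reproduce: sampling a Lissajous trajectory and histogramming one coordinate shows the FOV edges visited far more often than the center (the 16/17 frequency ratio here is an illustrative choice, not necessarily the paper's):

```python
import numpy as np

# Sample one period of a 2-D Lissajous trajectory and examine how often
# each part of the FOV (normalized to [-1, 1]) is visited along x.
t = np.linspace(0.0, 1.0, 200_000, endpoint=False)
x = np.cos(2 * np.pi * 16 * t)
y = np.cos(2 * np.pi * 17 * t)
hist, edges = np.histogram(x, bins=10, range=(-1, 1))
print(hist)   # edge bins collect far more samples than center bins
```

A grid built from trajectory node points inherits exactly this density profile, which is why pairing it with a tailored interpolation lets the sample count drop without losing resolution.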

  2. Off-axis quantitative phase imaging processing using CUDA: toward real-time applications

    PubMed Central

    Pham, Hoa; Ding, Huafeng; Sobh, Nahil; Do, Minh; Patel, Sanjay; Popescu, Gabriel

    2011-01-01

    We demonstrate real-time off-axis Quantitative Phase Imaging (QPI) using a phase reconstruction algorithm based on NVIDIA's CUDA programming model. The phase unwrapping component is based on Goldstein's algorithm. By mapping the phase extraction and unwrapping steps to the GPU, we are able to speed up the whole procedure by more than 18.8× with respect to CPU processing and ultimately achieve video rate for megapixel images. Our CUDA implementation also supports processing of multiple images simultaneously. This enables our imaging system to support high-speed, high-throughput, real-time image acquisition and visualization. PMID:21750757
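
Goldstein's branch-cut algorithm is too involved for a short snippet, but the wrapping problem it solves can be illustrated in 1D with Itoh's method (add or subtract 2π wherever successive samples jump by more than π). This is a conceptual stand-in for the paper's GPU pipeline, not its CUDA implementation.

```python
import numpy as np

# A phase ramp exceeding ±π, as produced by off-axis interferometry.
true_phase = np.linspace(0.0, 12 * np.pi, 500)
wrapped = np.angle(np.exp(1j * true_phase))  # measured phase, wrapped into (-pi, pi]

def unwrap_1d(p):
    """Itoh's method: remove 2*pi jumps between successive samples."""
    d = np.diff(p)
    d -= 2 * np.pi * np.round(d / (2 * np.pi))
    return np.concatenate(([p[0]], p[0] + np.cumsum(d)))

unwrapped = unwrap_1d(wrapped)
```

The same per-sample arithmetic is embarrassingly parallel in the phase-extraction stage, which is why mapping it to a GPU pays off; the 2D unwrapping step is the harder part to parallelize.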

  3. An Analysis of the Support Equipment Acquisition Process and Methods Designed to Reduce Acquisition Leadtime

    DTIC Science & Technology

    1991-09-01

    AFIT/GLM/LSY/91S-68, AN ANALYSIS OF THE SUPPORT EQUIPMENT...a specific function or purpose (21:10). Contractor Furnished Equipment (CFE): items acquired or manufactured directly by the contractor for use in the...process, or output: 1) inputs are those items or resources used by the system which allow it to function; 2) the process transforms inputs into outputs

  4. Coordination in serial-parallel image processing

    NASA Astrophysics Data System (ADS)

    Wójcik, Waldemar; Dubovoi, Vladymyr M.; Duda, Marina E.; Romaniuk, Ryszard S.; Yesmakhanova, Laura; Kozbakova, Ainur

    2015-12-01

    Serial-parallel systems are used to transform images, and controlling their operation raises a coordination problem. The paper presents a model of resource-allocation coordination for the task of synchronizing parallel processes; a genetic algorithm for coordination is developed, and its adequacy is verified on the process of parallel image processing.
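
The abstract gives no algorithmic details, so the following is only a toy illustration of a genetic algorithm coordinating resource allocation: the task durations, population size, and operators are all invented for the sketch. Assignments of image-processing tasks to parallel processors evolve toward a smaller makespan (the completion time of the slowest processor).

```python
import random

random.seed(1)
tasks = [random.randint(1, 20) for _ in range(30)]  # hypothetical task durations
n_proc = 4                                          # parallel processors

def makespan(assign):
    """Completion time of the slowest processor for a task->processor map."""
    loads = [0] * n_proc
    for dur, p in zip(tasks, assign):
        loads[p] += dur
    return max(loads)

def crossover(a, b):
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

def mutate(a, rate=0.1):
    return [random.randrange(n_proc) if random.random() < rate else p for p in a]

# Elitist GA: keep the 10 best assignments, refill by crossover + mutation.
pop = [[random.randrange(n_proc) for _ in tasks] for _ in range(40)]
for _ in range(100):
    pop.sort(key=makespan)
    elite = pop[:10]
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(30)]
best = min(pop, key=makespan)
```

The fitness function here is load balance only; a coordination model like the paper's would also fold synchronization costs between serial and parallel stages into the objective.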

  5. Performance of a Novel PMMA Polymer Imaging Bundle for Field Acquisition and Wavefront Sensing

    NASA Astrophysics Data System (ADS)

    Richards, S. N.; Leon-Saval, S.; Goodwin, M.; Zheng, J.; Lawrence, J. S.; Bryant, J. J.; Bland-Hawthorn, J.; Norris, B.; Cvetojevic, N.; Argyros, A.

    2017-01-01

    Imaging bundles provide a convenient way to translate a spatially coherent image, yet conventional imaging bundles made from silica fibre optics typically remain expensive, with large losses due to poor filling factors (~40%). We present the characterisation of a novel polymer imaging bundle made from poly(methyl methacrylate) (PMMA) that is considerably cheaper and a better alternative to silica imaging bundles over short distances (~1 m; from the middle to the edge of a telescope's focal plane). The large increase in filling factor (92% for the polymer imaging bundle) outweighs the large increase in optical attenuation from using PMMA (1 dB/m) instead of silica (10^-3 dB/m). We present and discuss current and possible future multi-object applications of the polymer imaging bundle in the context of astronomical instrumentation including: field acquisition, guiding, wavefront sensing, narrow-band imaging, aperture masking, and speckle imaging. The use of PMMA limits its use in low-light applications (e.g., imaging of galaxies); however, it is possible to fabricate polymer imaging bundles from a range of polymers that are better suited to the desired science.
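
The trade-off between filling factor and attenuation quoted above can be checked with a back-of-envelope calculation; the function below simply combines the two loss terms from the abstract.

```python
def transmission(fill_factor, atten_db_per_m, length_m):
    """Delivered light fraction: filling factor times attenuation loss."""
    return fill_factor * 10 ** (-atten_db_per_m * length_m / 10)

# Figures from the abstract: PMMA 92% fill at 1 dB/m; silica 40% fill at 1e-3 dB/m.
pmma_1m = transmission(0.92, 1.0, 1.0)
silica_1m = transmission(0.40, 1e-3, 1.0)
```

At 1 m the PMMA bundle delivers the larger fraction of light, while at 10 m the silica bundle wins, which is why the abstract restricts the polymer bundle's advantage to short focal-plane runs.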

  6. EXD HME MicroCT Data Acquisition, Processing and Data Request Overview

    SciTech Connect

    Seetho, Isaac M.; Brown, William D.; Martz, Jr., Harry E.

    2016-12-06

    This document is a short summary of the steps required for MicroCT evaluation of a specimen, from data acquisition through image analysis, for the EXD HME program. Expected outputs for each stage are provided. Data shall be shipped to LLNL as described herein.

  7. Image processing on the IBM personal computer

    NASA Technical Reports Server (NTRS)

    Myers, H. J.; Bernstein, R.

    1985-01-01

    An experimental, personal computer image processing system has been developed which provides a variety of processing functions in an environment that connects programs by means of a 'menu' for both casual and experienced users. The system is implemented by a compiled BASIC program that is coupled to assembly language subroutines. Image processing functions encompass subimage extraction, image coloring, area classification, histogramming, contrast enhancement, filtering, and pixel extraction.
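
Several of the listed functions are simple point operations. As an illustration, a percentile-based linear contrast stretch (a generic technique, not the system's actual BASIC/assembly code) might look like:

```python
import numpy as np

def stretch_contrast(img, lo_pct=1, hi_pct=99):
    """Linear contrast stretch: map the [lo, hi] percentile range to [0, 255]."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    out = (img.astype(float) - lo) / max(hi - lo, 1e-9) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)

# A synthetic low-contrast image occupying only a narrow band of gray levels.
rng = np.random.default_rng(0)
flat = rng.integers(100, 140, size=(64, 64)).astype(np.uint8)
enhanced = stretch_contrast(flat)
```

Clipping at the percentiles rather than the absolute min/max makes the stretch robust to a few outlier pixels, at the cost of saturating the extreme tails.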

  8. Data acquisition and analysis for the energy-subtraction Compton scatter camera for medical imaging

    NASA Astrophysics Data System (ADS)

    Khamzin, Murat Kamilevich

    In response to the shortcomings of the Anger camera currently used in conventional SPECT, particularly the trade-off between sensitivity and spatial resolution, a novel energy-subtraction Compton scatter camera, the ESCSC, has been proposed. A successful clinical implementation of the ESCSC could revolutionize the field of SPECT. Features of this camera include the use of silicon and CdZnTe detectors in the primary and secondary detector systems, list-mode time-stamped data acquisition, modular architecture, and post-acquisition data analysis. Previous ESCSC studies were based on Monte Carlo modeling. The objective of this work is to test the theoretical framework developed in those studies by developing the data acquisition and analysis techniques necessary to implement the ESCSC. A camera model working in list mode with time stamping was successfully built and tested, confirming the potential of the ESCSC predicted in the earlier simulation studies. The acquired data were processed during post-acquisition analysis according to preferred-event selection criteria. Along with the construction of the camera model, the post-acquisition analysis was further extended to weight preferred events by the likelihood that a preferred event is a true preferred event. While formulated to show ESCSC capabilities, the results of this study are relevant to any Compton scatter camera implementation, as well as to coincidence data acquisition systems in general.
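
List-mode acquisition with time stamping stores one record per detector hit and defers event selection to analysis time. The sketch below pairs silicon and CdZnTe hits inside a coincidence window and keeps pairs whose energies sum to the Tc-99m photopeak (140.5 keV); the window, energies, and record layout are all invented for illustration and are not the ESCSC's actual parameters.

```python
# Toy list-mode stream: (timestamp_us, detector, energy_keV) records.
events = [
    (100.000, "si", 30.0), (100.004, "czt", 110.5),   # coincident pair
    (250.000, "si", 45.2),                            # unpaired scatter
    (400.000, "czt", 141.0),                          # unpaired absorption
    (600.000, "si", 28.5), (600.002, "czt", 112.0),   # coincident pair
]

WINDOW_US = 0.01      # coincidence window (hypothetical)
E0, TOL = 140.5, 3.0  # Tc-99m photopeak and energy-sum tolerance, keV

def find_preferred(evts):
    """Pair Si/CZT hits inside the window whose energies sum to the photopeak."""
    pairs = []
    for a, b in zip(evts, evts[1:]):
        if (b[0] - a[0] <= WINDOW_US and {a[1], b[1]} == {"si", "czt"}
                and abs(a[2] + b[2] - E0) <= TOL):
            pairs.append((a, b))
    return pairs

pairs = find_preferred(events)
```

Because selection happens after acquisition, the same raw stream can be re-analyzed with different windows or weighting schemes, which is the flexibility the post-acquisition analysis above exploits.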

  9. 48 CFR 736.602-5 - Short selection process for procurements not to exceed the simplified acquisition threshold.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    Short selection process for procurements not to exceed the simplified acquisition threshold (48 CFR 736.602-5; Federal Acquisition Regulations System, 2010 edition). References to FAR 36...

  10. 3D imaging acquisition, modeling, and prototyping for facial defects reconstruction

    NASA Astrophysics Data System (ADS)

    Sansoni, Giovanna; Trebeschi, Marco; Cavagnini, Gianluca; Gastaldi, Giorgio

    2009-01-01

    A novel approach that combines optical three-dimensional imaging, reverse engineering (RE) and rapid prototyping (RP) for mold production in the prosthetic reconstruction of facial prostheses is presented. A commercial laser-stripe digitizer is used to perform the multiview acquisition of the patient's face; the point clouds are aligned and merged in order to obtain a polygonal model, which is then edited to sculpture the virtual prosthesis. Two physical models of both the deformed face and the 'repaired' face are obtained: they differ only in the defect zone. Depending on the material used for the actual prosthesis, the two prototypes can be used either to directly cast the final prosthesis or to fabricate the positive wax pattern. Two case studies are presented, referring to prosthetic reconstructions of an eye and of a nose. The results demonstrate the advantages over conventional techniques as well as the improvements with respect to known automated manufacturing techniques in mold construction. The proposed method results in decreased patient discomfort, reduced dependence on the anaplastologist's skill, and increased repeatability and efficiency of the whole process.

  11. Learning (Not) to Predict: Grammatical Gender Processing in Second Language Acquisition

    ERIC Educational Resources Information Center

    Hopp, Holger

    2016-01-01

    In two experiments, this article investigates the predictive processing of gender agreement in adult second language (L2) acquisition. We test (1) whether instruction on lexical gender can lead to target predictive agreement processing and (2) how variability in lexical gender representations moderates L2 gender agreement processing. In a…

  12. Computers in Public Schools: Changing the Image with Image Processing.

    ERIC Educational Resources Information Center

    Raphael, Jacqueline; Greenberg, Richard

    1995-01-01

    The kinds of educational technologies selected can make the difference between uninspired, rote computer use and challenging learning experiences. University of Arizona's Image Processing for Teaching Project has worked with over 1,000 teachers to develop image-processing techniques that provide students with exciting, open-ended opportunities for…

  13. The selection of field acquisition parameters for dispersion images from multichannel surface wave data

    USGS Publications Warehouse

    Zhang, S.X.; Chan, L.S.; Xia, J.

    2004-01-01

    The accuracy and resolution of surface wave dispersion results depend on the parameters used for acquiring data in the field. The optimized field parameters for acquiring multichannel analysis of surface wave (MASW) dispersion images can be determined if preliminary information on the phase velocity range and interface depth is available. In a case study on a fill slope in Hong Kong, the optimal acquisition parameters were first determined from a preliminary seismic survey prior to a MASW survey. Field tests using different sets of receiver distances and array lengths showed that the most consistent and useful dispersion images were obtained from the optimal acquisition parameters predicted. The inverted S-wave velocities from the dispersion curve obtained at the optimal offset distance range also agreed with those obtained by using direct refraction survey.

  14. System safety management lessons learned from the US Army acquisition process

    SciTech Connect

    Piatt, J.A.

    1989-05-01

    The Assistant Secretary of the Army for Research, Development and Acquisition directed the Army Safety Center to provide an audit of the causes of accidents and safety of use restrictions on recently fielded systems by tracking residual hazards back through the acquisition process. The objective was to develop "lessons learned" that could be applied to the acquisition process to minimize mishaps in fielded systems. System safety management lessons learned are defined as Army practices or policies, derived from past successes and failures, that are expected to be effective in eliminating or reducing specific systemic causes of residual hazards. They are broadly applicable and supportive of the Army structure and acquisition objectives. Pacific Northwest Laboratory (PNL) was given the task of conducting an independent, objective appraisal of the Army's system safety program in the context of the Army materiel acquisition process by focusing on four fielded systems which are products of that process. These systems included the Apache helicopter, the Bradley Fighting Vehicle (BFV), the Tube Launched, Optically Tracked, Wire Guided (TOW) Missile and the High Mobility Multipurpose Wheeled Vehicle (HMMWV). The objective of this study was to develop system safety management lessons learned associated with the acquisition process. The first step was to identify residual hazards associated with the selected systems. Since it was impossible to track all residual hazards through the acquisition process, certain well-known, high visibility hazards were selected for detailed tracking. These residual hazards illustrate a variety of systemic problems. Systemic or process causes were identified for each residual hazard and analyzed to determine why they exist. System safety management lessons learned were developed to address related systemic causal factors. 29 refs., 5 figs.

  15. DDS-Suite - A Dynamic Data Acquisition, Processing, and Analysis System for Wind Tunnel Testing

    NASA Technical Reports Server (NTRS)

    Burnside, Jathan J.

    2012-01-01

    Wind tunnels have optimized their steady-state data systems for acquisition and analysis and have even implemented large dynamic-data acquisition systems; however, development of near real-time processing and analysis tools for dynamic data has lagged. DDS-Suite is a set of tools used to acquire, process, and analyze large amounts of dynamic data. Each phase of the testing process (acquisition, processing, and analysis) is handled by a separate component, so that bottlenecks in one phase do not affect the others, leading to a robust system. DDS-Suite is capable of acquiring 672 channels of dynamic data at a rate of 275 MB/s. More than 300 channels of the system use 24-bit analog-to-digital cards and are capable of producing data with less than 0.01° of phase difference at 1 kHz. System architecture, design philosophy, and examples of use during NASA Constellation and Fundamental Aerodynamic tests are discussed.

  16. Evaluation of the Pre-Milestone I Acquisition Logistics Process at the Aeronautical Systems Center

    DTIC Science & Technology

    1994-09-01

    schedule, and performance was developed to reduce risks and assure specified performances. All acquisition programs are based on identified needs...Engineering and Manufacturing Development: emphasizes risk management; the promising approach is translated into a stable, predictable, cost-effective design...shortcomings or cost overruns [31:7-8]. The acquisition process is an incremental development commitment, phased so that the associated risk is continually

  17. Multispectral Photogrammetric Data Acquisition and Processing for Wall Paintings Studies

    NASA Astrophysics Data System (ADS)

    Pamart, A.; Guillon, O.; Faraci, S.; Gattet, E.; Genevois, M.; Vallet, J. M.; De Luca, L.

    2017-02-01

    In the field of wall paintings studies, different imaging techniques are commonly used for documentation and for decision making in terms of conservation and restoration. There are nowadays challenging issues in merging scientific imaging techniques in a multimodal context (i.e. multi-sensor, multi-dimensional, multi-spectral and multi-temporal approaches). For decades those CH objects have been widely documented with Technical Photography (TP), which gives precious information to understand or retrieve the painting layouts and history. More recently there is an increasing demand for the use of digital photogrammetry in order to provide, as one of the possible outputs, an orthophotomosaic, which offers a possibility for metrical quantification of conservators'/restorers' observations and for action planning. This paper presents ongoing experimentations of the LabCom MAP-CICRP relying on the assumption that those techniques can be merged through a common pipeline to share their respective benefits and create a more complete documentation.

  18. Isolated sixth cranial nerve aplasia visualized with Fast Imaging Employing Steady-State Acquisition (FIESTA) MRI.

    PubMed

    Pilyugina, Svetlana A; Fischbein, Nancy J; Liao, Y Joyce; McCulley, Timothy J

    2007-06-01

    An otherwise healthy 12-month-old girl presented for evaluation of reduced abduction of the left eye detected at 6 months of age. The remainder of the examination was unremarkable. A special MRI sequence, fast imaging employing steady-state acquisition (FIESTA), visualized the right but not the left sixth nerve cisternal segment. This is the first reported use of the MRI FIESTA sequence to diagnose aplasia of the sixth cranial nerve.

  19. Image Processing in Intravascular OCT

    NASA Astrophysics Data System (ADS)

    Wang, Zhao; Wilson, David L.; Bezerra, Hiram G.; Rollins, Andrew M.

    Coronary artery disease is the leading cause of death in the world. Intravascular optical coherence tomography (IVOCT) is rapidly becoming a promising imaging modality for characterization of atherosclerotic plaques and evaluation of coronary stenting. OCT has several unique advantages over alternative technologies, such as intravascular ultrasound (IVUS), due to its better resolution and contrast. For example, OCT is currently the only imaging modality that can measure the thickness of the fibrous cap of an atherosclerotic plaque in vivo. OCT also has the ability to accurately assess the coverage of individual stent struts by neointimal tissue over time. However, it is extremely time-consuming to analyze IVOCT images manually to derive quantitative diagnostic metrics. In this chapter, we introduce some computer-aided methods to automate the common IVOCT image analysis tasks.

  20. Matching rendered and real world images by digital image processing

    NASA Astrophysics Data System (ADS)

    Mitjà, Carles; Bover, Toni; Bigas, Miquel; Escofet, Jaume

    2010-05-01

    Recent advances in computer-generated images (CGI) have been used in commercial and industrial photography, providing a broad scope in product advertising. Mixing real world images with those rendered from virtual space software shows a more or less visible mismatch between the corresponding image qualities. Rendered images are produced by software whose quality is limited only by the output resolution. Real world images are taken with cameras that introduce some amount of image degradation from factors such as residual lens aberrations, diffraction, sensor low-pass anti-aliasing filters, color pattern demosaicing, etc. The effect of all those image quality degradation factors can be characterized by the system Point Spread Function (PSF). Because the image is the convolution of the object with the system PSF, its characterization shows the amount of degradation added to any picture taken. This work explores the use of image processing to degrade the rendered images following the parameters indicated by the real system PSF, attempting to match the virtual and real world image qualities. The system MTF is determined by the slanted edge method, both in laboratory conditions and in the real picture environment, in order to compare the influence of the working conditions on the device performance; an approximation to the system PSF is derived from the two measurements. The rendered images are filtered through a Gaussian filter obtained from the taking system's PSF. Results with and without filtering are shown and compared by measuring the contrast achieved in different regions of the final image.
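
The degradation step described, filtering the rendered image with a Gaussian derived from the measured PSF, can be sketched with a separable convolution; the sigma below is an arbitrary placeholder rather than a value derived from a slanted-edge MTF measurement.

```python
import numpy as np

def gaussian_kernel(sigma):
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian filtering, standing in for the measured system PSF."""
    k = gaussian_kernel(sigma)
    rows = np.array([np.convolve(r, k, mode="same") for r in img])
    return np.array([np.convolve(c, k, mode="same") for c in rows.T]).T

# A synthetic "rendered" image with a perfectly sharp vertical edge.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
soft = blur(img, sigma=1.5)
```

After filtering, the infinitely sharp rendered edge acquires the gradual transition a real camera would produce, which is exactly the matching effect the paper measures via edge contrast.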

  1. Combining advanced imaging processing and low cost remote imaging capabilities

    NASA Astrophysics Data System (ADS)

    Rohrer, Matthew J.; McQuiddy, Brian

    2008-04-01

    Target images are very important for evaluating the situation when Unattended Ground Sensors (UGS) are deployed. These images add a significant amount of information for determining the difference between hostile and non-hostile activities, the number of targets in an area, the difference between animals and people, the movement dynamics of targets, and when specific activities of interest are taking place. The imaging capability of a UGS system should report only target activity, not images without targets in the field of view. Current UGS remote imaging systems are neither optimized for target processing nor low cost. McQ describes in this paper an architectural and technological approach for significantly improving the processing of images to provide target information while reducing the cost of the intelligent remote imaging capability.

  2. Image processing utilizing an APL interface

    NASA Astrophysics Data System (ADS)

    Zmola, Carl; Kapp, Oscar H.

    1991-03-01

    The past few years have seen the growing use of digital techniques in the analysis of electron microscope image data. This trend is driven by the need to maximize the information extracted from the electron micrograph by submitting its digital representation to the broad spectrum of analytical techniques made available by the digital computer. We are developing an image processing system for the analysis of digital images obtained with a scanning transmission electron microscope (STEM) and a scanning electron microscope (SEM). This system, run on an IBM PS/2 model 70/A21, uses menu-based image processing and an interactive APL interface which permits the direct manipulation of image data.

  3. Asynchronous data acquisition and on-the-fly analysis of dose fractionated cryoEM images by UCSFImage.

    PubMed

    Li, Xueming; Zheng, Shawn; Agard, David A; Cheng, Yifan

    2015-11-01

    Newly developed direct electron detection cameras have a high image output frame rate that enables recording dose-fractionated image stacks of frozen hydrated biological samples by electron cryomicroscopy (cryoEM). Such novel image acquisition schemes provide opportunities to analyze cryoEM data in ways that were previously impossible. The file size of a dose-fractionated image stack is 20-60 times larger than that of a single image. Thus, efficient data acquisition and on-the-fly analysis of a large number of dose-fractionated image stacks become a serious challenge to any cryoEM data acquisition system. We have developed a computer-assisted system, named UCSFImage4, for semi-automated cryoEM image acquisition that implements an asynchronous data acquisition scheme. This facilitates efficient acquisition, on-the-fly motion correction, and CTF analysis of dose-fractionated image stacks with a total time of ∼60 s/exposure. Here we report the technical details and configuration of this system.
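
An asynchronous scheme of this kind is essentially a producer/consumer pipeline: the microscope keeps exposing while earlier stacks are processed. The toy sketch below simulates that overlap with threads and a queue; it is not UCSFImage4 code, and all names and timings are invented.

```python
import queue
import threading
import time

frames = queue.Queue()
results = []

def camera(n_stacks):
    """Producer: simulates the detector writing dose-fractionated stacks."""
    for i in range(n_stacks):
        time.sleep(0.001)            # stand-in for the exposure time
        frames.put(f"stack_{i:03d}")
    frames.put(None)                 # end-of-session sentinel

def analyzer():
    """Consumer: simulated on-the-fly motion correction / CTF estimation."""
    while (stack := frames.get()) is not None:
        results.append(stack + ":corrected")

t1 = threading.Thread(target=camera, args=(20,))
t2 = threading.Thread(target=analyzer)
t1.start(); t2.start()
t1.join(); t2.join()
```

Decoupling the two stages through a queue is what lets processing time hide behind acquisition time, so the per-exposure wall-clock cost stays near the exposure itself.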

  4. Improving the spatial resolution of magnetic resonance inverse imaging via the blipped-CAIPI acquisition scheme.

    PubMed

    Chang, Wei-Tang; Setsompop, Kawin; Ahveninen, Jyrki; Belliveau, John W; Witzel, Thomas; Lin, Fa-Hsuan

    2014-05-01

    Using simultaneous acquisition from multiple channels of a radio-frequency (RF) coil array, magnetic resonance inverse imaging (InI) achieves functional MRI acquisitions at a rate of 100 ms per whole-brain volume. InI accelerates the scan by leaving out partition encoding steps and reconstructs images by solving under-determined inverse problems using RF coil sensitivity information. Hence, the correlated spatial information available in the coil array causes spatial blurring in the InI reconstruction. Here, we propose a method that employs gradient blips in the partition encoding direction during the acquisition to provide extra spatial encoding in order to better differentiate signals from different partitions. According to our simulations, this blipped-InI (bInI) method can increase the average spatial resolution by 15.1% (1.3 mm) across the whole brain and by 32.6% (4.2 mm) in subcortical regions, as compared to the InI method. In a visual fMRI experiment, we demonstrate that, compared to InI, the spatial distribution of bInI BOLD response is more consistent with that of a conventional echo-planar imaging (EPI) at the level of individual subjects. With the improved spatial resolution, especially in subcortical regions, bInI can be a useful fMRI tool for obtaining high spatiotemporal information for clinical and cognitive neuroscience studies.

  5. An approach to automated acquisition of cryoEM images from lacey carbon grids.

    PubMed

    Nicholson, William V; White, Howard; Trinick, John

    2010-12-01

    An approach to automated acquisition of cryoEM image data from lacey carbon grids using the Leginon program is described. Automated liquid-nitrogen top-up of the specimen holder dewar was used as a step towards full automation, without operator intervention during the course of data collection. During cryoEM studies of actin labelled with myosin V, we have found it necessary to work with lacey grids rather than Quantifoil or C-flat grids because of the interaction of myosin V with the support film. Lacey grids have irregular holes of variable shape and size, in contrast to Quantifoil or C-flat grids, which have a regular array of similar circular holes on each grid square. Other laboratories also prefer grids with irregular holes for a variety of reasons. It was therefore necessary to develop a strategy different from normal Leginon usage for working with lacey grids, both for targeting holes for image acquisition and for selecting suitable areas for focussing prior to acquisition. This approach was implemented using the extensible framework provided by Leginon, by developing a new MSI application within that framework which includes a new Leginon node implementing a novel method for finding focus targets.

  6. Improving the spatial resolution of Magnetic Resonance Inverse Imaging via the blipped-CAIPI acquisition scheme

    PubMed Central

    Chang, Wei-Tang; Setsompop, Kawin; Ahveninen, Jyrki; Belliveau, John W.; Witzel, Thomas; Lin, Fa-Hsuan

    2014-01-01

    Using simultaneous acquisition from multiple channels of a radio-frequency (RF) coil array, magnetic resonance inverse imaging (InI) achieves functional MRI acquisitions at a rate of 100 ms per whole-brain volume. InI accelerates the scan by leaving out partition encoding steps and reconstructs images by solving under-determined inverse problems using RF coil sensitivity information. Hence, the correlated spatial information available in the coil array causes spatial blurring in the InI reconstruction. Here, we propose a method that employs gradient blips in the partition encoding direction during the acquisition to provide extra spatial encoding in order to better differentiate signals from different partitions. According to our simulations, this blipped-InI (bInI) method can increase the average spatial resolution by 15.1% (1.3 mm) across the whole brain and by 32.6% (4.2 mm) in subcortical regions, as compared to the InI method. In a visual fMRI experiment, we demonstrate that, compared to InI, the spatial distribution of bInI BOLD response is more consistent with that of a conventional echo-planar imaging (EPI) at the level of individual subjects. With the improved spatial resolution, especially in subcortical regions, bInI can be a useful fMRI tool for obtaining high spatiotemporal information for clinical and cognitive neuroscience studies. PMID:24374076

  7. High-Speed MALDI-TOF Imaging Mass Spectrometry: Rapid Ion Image Acquisition and Considerations for Next Generation Instrumentation

    PubMed Central

    Spraggins, Jeffrey M.; Caprioli, Richard M.

    2012-01-01

    A prototype matrix-assisted laser desorption ionization time-of-flight (MALDI-TOF) mass spectrometer has been used for high-speed ion image acquisition. The instrument incorporates a Nd:YLF solid state laser capable of pulse repetition rates up to 5 kHz and continuous laser raster sampling for high-throughput data collection. Lipid ion images of a sagittal rat brain tissue section were collected in 10 min with an effective acquisition rate of roughly 30 pixels/s. These results represent more than a 10-fold increase in throughput compared with current commercially available instrumentation. Experiments aimed at improving conditions for continuous laser raster sampling for imaging are reported, highlighting proper laser repetition rates and stage velocities to avoid signal degradation from significant oversampling. As new high spatial resolution and large sample area applications present themselves, the development of high-speed microprobe MALDI imaging mass spectrometry is essential to meet the needs of those seeking new technologies for rapid molecular imaging. PMID:21953043
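
The reported figures can be cross-checked with simple arithmetic: 10 minutes at roughly 30 pixels/s, and a 5 kHz laser shared across those pixels. The shots-per-pixel value is an upper bound assuming continuous raster sampling, not a number stated in the abstract.

```python
# Figures as reported in the abstract.
pixel_rate_hz = 30        # effective acquisition rate, pixels/s
run_minutes = 10          # reported imaging time for the sagittal section
laser_hz = 5000           # Nd:YLF maximum repetition rate

pixels_acquired = pixel_rate_hz * run_minutes * 60   # total ion-image pixels
shots_per_pixel = laser_hz / pixel_rate_hz           # upper bound on shots/pixel
```

At these settings the 10-minute run corresponds to an 18,000-pixel ion image, which makes concrete the more-than-10-fold throughput gain claimed over commercial instruments.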

  8. A comparison of peripheral imaging technologies for bone and muscle quantification: a technical review of image acquisition.

    PubMed

    Wong, A K

    2016-12-14

    The choice of an appropriate imaging technique to quantify bone, muscle, or muscle adiposity needs to be guided by a thorough understanding of its competitive advantages over other modalities, balanced by its limitations. This review details the technical machinery and methods behind peripheral quantitative computed tomography (pQCT), high-resolution (HR)-pQCT, and magnetic resonance imaging (MRI) that drive successful depiction of bone and muscle morphometry, densitometry, and structure. It discusses a number of image acquisition settings, the challenges associated with using one versus another, and compares the risks and benefits of the different modalities. Issues relevant to all modalities, including partial volume artifact, beam hardening, calibration, and motion assessment, are also detailed. The review further provides data and images to illustrate differences between methods, to better guide the reader in selecting an imaging method strategically. Overall, investigators should be cautious of the impact of imaging parameters on image signal or contrast-to-noise ratios, and of the need to report these settings in future publications. The effect of motion should be assessed on images and a decision made whether to exclude them prior to segmentation. A more standardized approach to imaging bone and muscle on pQCT and MRI could enhance comparability across studies and improve the quality of meta-analyses.

  9. A comparison of peripheral imaging technologies for bone and muscle quantification: a technical review of image acquisition

    PubMed Central

    Wong, A.K.O.

    2016-01-01

    The choice of an appropriate imaging technique to quantify bone, muscle, or muscle adiposity needs to be guided by a thorough understanding of its competitive advantages over other modalities, balanced by its limitations. This review details the technical machinery and methods behind peripheral quantitative computed tomography (pQCT), high-resolution (HR)-pQCT, and magnetic resonance imaging (MRI) that drive successful depiction of bone and muscle morphometry, densitometry, and structure. It discusses a number of image acquisition settings, the challenges associated with using one versus another, and compares the risks and benefits of the different modalities. Issues relevant to all modalities, including partial volume artifact, beam hardening, calibration, and motion assessment, are also detailed. The review further provides data and images to illustrate differences between methods, to better guide the reader in selecting an imaging method strategically. Overall, investigators should be cautious of the impact of imaging parameters on image signal or contrast-to-noise ratios, and of the need to report these settings in future publications. The effect of motion should be assessed on images and a decision made whether to exclude them prior to segmentation. A more standardized approach to imaging bone and muscle on pQCT and MRI could enhance comparability across studies and improve the quality of meta-analyses. PMID:27973379

  10. A computational model associating learning process, word attributes, and age of acquisition.

    PubMed

    Hidaka, Shohei

    2013-01-01

    We propose a new model-based approach linking word learning to the age of acquisition (AoA) of words; a new computational tool for understanding the relationships among word learning processes, psychological attributes, and word AoAs as measures of vocabulary growth. The computational model developed describes the distinct statistical relationships between three theoretical factors underpinning word learning and AoA distributions. Simply put, this model formulates how different learning processes, characterized by change in learning rate over time and/or by the number of exposures required to acquire a word, likely result in different AoA distributions depending on word type. We tested the model in three respects. The first analysis showed that the proposed model accounts for empirical AoA distributions better than a standard alternative. The second analysis demonstrated that the estimated learning parameters well predicted the psychological attributes, such as frequency and imageability, of words. The third analysis illustrated that the developmental trend predicted by our estimated learning parameters was consistent with relevant findings in the developmental literature on word learning in children. We further discuss the theoretical implications of our model-based approach.
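
The paper's actual model relates learning-rate change and required exposures to AoA distributions analytically; as a conceptual stand-in, the Monte Carlo sketch below draws the time of the k-th exposure for a frequent and a rare word and compares the resulting acquisition-age distributions. All parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_aoa(exposure_prob, exposures_needed, n_learners=5000):
    """Time step at which each simulated learner has accumulated enough
    exposures to acquire the word (k-th success of a Bernoulli process)."""
    waits = rng.negative_binomial(exposures_needed, exposure_prob, n_learners)
    return waits + exposures_needed

frequent_word = simulate_aoa(exposure_prob=0.05, exposures_needed=10)
rare_word = simulate_aoa(exposure_prob=0.005, exposures_needed=10)
```

Even this minimal setup reproduces the qualitative link between word frequency and AoA: the frequent word's acquisition ages are both earlier and less dispersed, which is the kind of distributional difference the model ties to word attributes.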

  11. Parallel processing considerations for image recognition tasks

    NASA Astrophysics Data System (ADS)

    Simske, Steven J.

    2011-01-01

    Many image recognition tasks are well-suited to parallel processing. The most obvious example is that many imaging tasks require the analysis of multiple images. From this standpoint, then, parallel processing need be no more complicated than assigning individual images to individual processors. However, there are three less trivial categories of parallel processing that will be considered in this paper: parallel processing (1) by task; (2) by image region; and (3) by meta-algorithm. Parallel processing by task allows the assignment of multiple workflows, as diverse as optical character recognition (OCR), document classification, and barcode reading, to parallel pipelines. This can substantially decrease time to completion for the document tasks. For this approach, each parallel pipeline is generally performing a different task. Parallel processing by image region allows a larger imaging task to be sub-divided into a set of parallel pipelines, each performing the same task but on a different data set. This type of image analysis is readily addressed by a map-reduce approach. Examples include document skew detection and multiple face detection and tracking. Finally, parallel processing by meta-algorithm allows different algorithms to be deployed on the same image simultaneously. This approach may result in improved accuracy.
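
    Parallel processing by image region can be sketched as a small map-reduce: tile the image, map a per-tile analysis across workers, then reduce the partial results. A minimal illustration (the tile size, bright-pixel statistic, and 8x8 toy image are assumptions):

```python
from concurrent.futures import ThreadPoolExecutor

def tile_regions(width, height, tile):
    """Yield (x0, y0, x1, y1) tiles covering a width x height image."""
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            yield (x, y, min(x + tile, width), min(y + tile, height))

def count_bright(image, region, threshold=128):
    """Map step: count bright pixels inside one region."""
    x0, y0, x1, y1 = region
    return sum(1 for y in range(y0, y1) for x in range(x0, x1)
               if image[y][x] >= threshold)

# toy 8x8 "image": a bright 4x4 square on a dark background
image = [[255 if 2 <= x < 6 and 2 <= y < 6 else 0 for x in range(8)]
         for y in range(8)]

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = pool.map(lambda r: count_bright(image, r),
                        tile_regions(8, 8, tile=4))
    total = sum(partials)  # reduce step
```

    The same structure applies whether the map step is a skew estimate, a face detector, or any other per-region task; only the reduce step changes.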

  12. Programmable Iterative Optical Image And Data Processing

    NASA Technical Reports Server (NTRS)

    Jackson, Deborah J.

    1995-01-01

    Proposed method of iterative optical image and data processing overcomes limitations imposed by loss of optical power after repeated passes through many optical elements - especially, beam splitters. Involves selective, timed combination of optical wavefront phase conjugation and amplification to regenerate images in real time to compensate for losses in optical iteration loops; timing such that amplification turned on to regenerate desired image, then turned off so as not to regenerate other, undesired images or spurious light propagating through loops from unwanted reflections.

  13. Evaluation of Acquisition Strategies for Image-Based Construction Site Monitoring

    NASA Astrophysics Data System (ADS)

    Tuttas, S.; Braun, A.; Borrmann, A.; Stilla, U.

    2016-06-01

    Construction site monitoring is an essential task for keeping track of the ongoing construction work and providing up-to-date information for a Building Information Model (BIM). The BIM contains the as-planned states (geometry, schedule, costs, ...) of a construction project. For updating, the as-built state has to be acquired repeatedly and compared to the as-planned state. In the approach presented here, a 3D representation of the as-built state is calculated from photogrammetric images using multi-view stereo reconstruction. On construction sites one has to cope with several difficulties like security aspects, limited accessibility, occlusions or construction activity. Different acquisition strategies and techniques, namely (i) terrestrial acquisition with a hand-held camera, (ii) aerial acquisition using an Unmanned Aerial Vehicle (UAV) and (iii) acquisition using a fixed stereo camera pair at the boom of the crane, are tested on three test sites. They are assessed considering the special needs for the monitoring tasks and limitations on construction sites. The three scenarios are evaluated based on the potential for automation, the required acquisition effort, the necessary equipment and its maintenance, the disturbance of the construction works, and the accuracy and completeness of the resulting point clouds. Based on the experiences during the test cases the following conclusions can be drawn: Terrestrial acquisition has the lowest requirements on the device setup but falls short on automation and coverage. The crane camera shows the lowest flexibility but the highest degree of automation. The UAV approach can provide the best coverage by combining nadir and oblique views, but can be limited by obstacles and security aspects. The accuracy of the point clouds is evaluated based on plane fitting of selected building parts. The RMS errors of the fitted parts range from 1 to a few cm for the UAV and the hand-held scenario. First results show that the crane camera
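
    The plane-fitting accuracy check described above can be sketched as a least-squares fit followed by an RMS residual (a minimal sketch on synthetic points; the plane coefficients and noise level are assumptions, not the paper's data):

```python
import numpy as np

def fit_plane_rms(points):
    """Fit z = a*x + b*y + c by least squares; return (coeffs, RMS error)."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    residuals = points[:, 2] - A @ coeffs
    rms = float(np.sqrt(np.mean(residuals ** 2)))
    return coeffs, rms

# noisy samples of the plane z = 0.1x + 0.2y + 1 (a synthetic wall segment)
rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(200, 2))
z = 0.1 * xy[:, 0] + 0.2 * xy[:, 1] + 1 + rng.normal(0, 0.01, 200)
coeffs, rms = fit_plane_rms(np.c_[xy, z])
```

    Applied to a point-cloud patch segmented from a planar building part, the RMS residual gives the per-part accuracy figure quoted in the abstract.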

  14. Non-linear Post Processing Image Enhancement

    NASA Technical Reports Server (NTRS)

    Hunt, Shawn; Lopez, Alex; Torres, Angel

    1997-01-01

    A non-linear filter for image post processing based on the feedforward Neural Network topology is presented. This study was undertaken to investigate the usefulness of "smart" filters in image post processing. The filter has been shown to be useful in recovering high frequencies, such as those lost during the JPEG compression-decompression process. The filtered images have a higher signal-to-noise ratio and a higher perceived image quality. Simulation studies comparing the proposed filter with the optimum mean-square non-linear filter, examples of the high-frequency recovery, and the statistical properties of the filter are given.

  15. Real-time video image processing

    NASA Astrophysics Data System (ADS)

    Smedley, Kirk G.; Yool, Stephen R.

    1990-11-01

    Lockheed has designed and implemented a prototype real-time Video Enhancement Workbench (VEW) using commercial off-the-shelf hardware and custom software. The hardware components include a Sun workstation, Aspex PIPE image processor, time base corrector, VCR, video camera, and real-time disk subsystem. A comprehensive set of image processing functions can be invoked by the analyst at any time during processing, enabling interactive enhancement and exploitation of video sequences. Processed images can be transmitted and stored within the system in digital or video form. VEW also provides image output to a laser printer and to Interleaf technical publishing software.

  16. How Digital Image Processing Became Really Easy

    NASA Astrophysics Data System (ADS)

    Cannon, Michael

    1988-02-01

    In the early and mid-1970s, digital image processing was the subject of intense university and corporate research. The research lay along two lines: (1) developing mathematical techniques for improving the appearance of or analyzing the contents of images represented in digital form, and (2) creating cost-effective hardware to carry out these techniques. The research has been very effective, as evidenced by the continued decline of image processing as a research topic, and the rapid increase of commercial companies to market digital image processing software and hardware.

  17. Ultrashort echo time imaging using pointwise encoding time reduction with radial acquisition (PETRA).

    PubMed

    Grodzki, David M; Jakob, Peter M; Heismann, Bjoern

    2012-02-01

    Sequences with ultrashort echo times enable new applications of MRI, including bone, tendon, ligament, and dental imaging. In this article, a sequence is presented that achieves the shortest possible encoding time for each k-space point, limited by pulse length, hardware switching times, and gradient performance of the scanner. In pointwise encoding time reduction with radial acquisition (PETRA), outer k-space is filled with radial half-projections, whereas the centre is measured point-by-point on a Cartesian trajectory. This hybrid sequence combines the features of single-point imaging with radial projection imaging. No hardware changes are required. Using this method, 3D images with an isotropic resolution of 1 mm can be obtained in less than 3 minutes. The differences between PETRA and the ultrashort echo time (UTE) sequence are evaluated by simulation and phantom measurements. Advantages of pointwise encoding time reduction with radial acquisition are shown for tissue with a T(2) below 1 ms. The signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) performance, as well as possible limitations of the approach, are investigated. In vivo head, knee, ankle, and wrist examples are presented to prove the feasibility of the sequence. In summary, fast imaging with ultrashort echo time is enabled by PETRA and may help to establish new routine clinical applications of ultrashort echo time sequences.

  18. Quantitative image processing in fluid mechanics

    NASA Technical Reports Server (NTRS)

    Hesselink, Lambertus; Helman, James; Ning, Paul

    1992-01-01

    The current status of digital image processing in fluid flow research is reviewed. In particular, attention is given to a comprehensive approach to the extraction of quantitative data from multivariate databases and examples of recent developments. The discussion covers numerical simulations and experiments, data processing, generation and dissemination of knowledge, traditional image processing, hybrid processing, fluid flow vector field topology, and isosurface analysis using Marching Cubes.

  19. Water surface capturing by image processing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    An alternative means of measuring the water surface interface during laboratory experiments is processing a series of sequentially captured images. Image processing can provide a continuous, non-intrusive record of the water surface profile whose accuracy is not dependent on water depth. More trad...

  20. Digital image processing in cephalometric analysis.

    PubMed

    Jäger, A; Döler, W; Schormann, T

    1989-01-01

    Digital image processing methods were applied to improve the practicability of cephalometric analysis. The individual X-ray film was digitized with the aid of a high-resolution microscope photometer. Digital processing was done using a VAX 8600 computer system. An improvement of the image quality was achieved by means of various digital enhancement and filtering techniques.

  1. Point-and-stare operation and high-speed image acquisition in real-time hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Driver, Richard D.; Bannon, David P.; Ciccone, Domenic; Hill, Sam L.

    2010-04-01

    The design and optical performance of a small-footprint, low-power, turnkey, Point-And-Stare hyperspectral analyzer, capable of fully automated field deployment in remote and harsh environments, is described. The unit is packaged for outdoor operation in an IP56 protected air-conditioned enclosure and includes a mechanically ruggedized fully reflective, aberration-corrected hyperspectral VNIR (400-1000 nm) spectrometer with a board-level detector optimized for point and stare operation, an on-board computer capable of full system data-acquisition and control, and a fully functioning internal hyperspectral calibration system for in-situ system spectral calibration and verification. Performance data on the unit under extremes of real-time survey operation and high spatial and high spectral resolution will be discussed. Hyperspectral acquisition including full parameter tracking is achieved by the addition of a fiber-optic based downwelling spectral channel for solar illumination tracking during hyperspectral acquisition and the use of other sensors for spatial and directional tracking to pinpoint view location. The system is mounted on a Pan-And-Tilt device, automatically controlled from the analyzer's on-board computer, making the Hyperspec™ particularly adaptable for base security, border protection and remote deployments. A hyperspectral macro library has been developed to control hyperspectral image acquisition, system calibration and scene location control. The software allows the system to be operated in a fully automatic mode or under direct operator control through a GigE interface.

  2. Progress in image processing technology related to radiological sciences: a five-year review.

    PubMed

    Huang, H K

    1987-01-01

    The past five years of progress in image processing technology related to radiography applications is reviewed. The following topics are included: image acquisition (computed radiography, X-ray film digitizer); 512, 1024 and 2048 image processor technology; image compression (block quantitation, full-frame bit allocation); image storage (real-time magnetic disk, optical disk); image workstation (input station, display station, diagnostic workstation); and picture archiving and communication systems (PACS). It is anticipated that the growth in this field will continue for many years to come.

  3. An Overview of the Mars Science Laboratory Sample Acquisition, Sample Processing and Handling System

    NASA Astrophysics Data System (ADS)

    Beegle, L. W.; Anderson, R. C.; Hurowitz, J. A.; Jandura, L.; Limonadi, D.

    2012-12-01

    The Mars Science Laboratory (MSL) mission landed on Mars on August 5, 2012. The rover and its scientific payload are designed to identify and assess the habitability, geological, and environmental histories of Gale crater. Unraveling the geologic history of the region and providing an assessment of present and past habitability requires an evaluation of the physical and chemical characteristics of the landing site; this includes providing an in-depth examination of the chemical and physical properties of Martian regolith and rocks. The MSL Sample Acquisition, Processing, and Handling (SA/SPaH) subsystem is the first in-situ system designed to acquire interior rock and soil samples from Martian surface materials. These samples are processed and separated into fine particles and distributed to two onboard analytical science instruments, SAM (Sample Analysis at Mars Instrument Suite) and CheMin (Chemistry and Mineralogy), or to a sample analysis tray for visual inspection. The SA/SPaH subsystem is also responsible for the placement of the two contact instruments, the Alpha Particle X-Ray Spectrometer (APXS) and the Mars Hand Lens Imager (MAHLI), on rock and soil targets. Finally, there is a Dust Removal Tool (DRT) to remove dust particles from rock surfaces for subsequent analysis by the contact and/or mast-mounted instruments (e.g. the Mast Cameras (MastCam) and the Chemistry and Micro-Imaging instruments (ChemCam)). It is expected that the SA/SPaH system will have produced a scooped sample and possibly a drilled sample in the first 90 sols of the mission. Results from these activities and the ongoing testing program will be presented.

  4. Small Interactive Image Processing System (SMIPS) users manual

    NASA Technical Reports Server (NTRS)

    Moik, J. G.

    1973-01-01

    The Small Interactive Image Processing System (SMIPS) is designed to facilitate the acquisition, digital processing and recording of image data as well as pattern recognition in an interactive mode. Objectives of the system are ease of communication with the computer by personnel who are not expert programmers, fast response to requests for information on pictures, complete error recovery as well as simplification of future programming efforts for extension of the system. The SMIPS system is intended for operation under OS/MVT on an IBM 360/75 or 91 computer equipped with the IBM-2250 Model 1 display unit. This terminal is used as an interface between user and main computer. It has an alphanumeric keyboard, a programmed function keyboard and a light pen which are used for specification of input to the system. Output from the system is displayed on the screen as messages and pictures.

  5. TU-AB-207A-01: Image Acquisition Physics and Hardware.

    PubMed

    Li, B

    2016-06-01

    Practicing medical physicists are often charged with the task of evaluating and troubleshooting complex image quality issues related to CT scanners. This course will equip them with a solid and practical understanding of the common CT imaging chain and its major components, with emphasis on acquisition physics and hardware, reconstruction, artifacts, image quality, dose, and advanced clinical applications. The core objective is to explain the effects of these major system components on image quality. This course will not focus on rapidly changing advanced technologies given the two-hour time limit, but the fundamental principles discussed in this course may facilitate better understanding of those more complicated technologies. The course will begin with an overview of CT acquisition physics and geometry. The X-ray tube and CT detector are important acquisition hardware critical to the overall image quality. Each of these two subsystems consists of several major components. An in-depth description of the function and failure modes of these components will be provided. Examples of artifacts related to these failure modes will be presented: off-focal radiation, tube arcing, heel effect, oil bubble, offset drift effect, cross-talk effect, and bad pixels. The fundamentals of CT image reconstruction will first be discussed on an intuitive level, using approaches that do not require rigorous derivation of mathematical formulations. This is followed by a detailed derivation of the Fourier slice theorem: the foundation of the FBP algorithm. FBP for parallel-beam, fan-beam, and cone-beam geometries will be discussed. To address the issue of radiation dose related to x-ray CT, recent advances in iterative reconstruction, their advantages, and clinical applications will also be described. Because of the nature of fundamental physics and mathematics, limitations in data acquisition, and non-ideal conditions of major system components, image artifacts often arise.
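
    The Fourier slice theorem at the heart of FBP can be stated compactly: the 1-D Fourier transform of a parallel-beam projection at angle $\theta$ equals a radial slice of the object's 2-D Fourier transform.

```latex
P_\theta(\omega) = F(\omega\cos\theta,\; \omega\sin\theta),
\qquad
P_\theta(\omega) = \int p_\theta(s)\, e^{-i\omega s}\, ds
```

    Here $p_\theta$ is the projection of the object $f(x,y)$ at angle $\theta$ and $F$ is the 2-D Fourier transform of $f$; filtering each projection with $|\omega|$ and back-projecting over all angles recovers $f$, which is the FBP algorithm.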

  6. The Effects of Processing Instruction and Its Components on the Acquisition of Gender Agreement in Italian

    ERIC Educational Resources Information Center

    Benati, Alessandro

    2004-01-01

    This paper reports an experimental investigation of the relative effects of processing instruction, structured input activities and explicit information on the acquisition of gender agreement in Italian adjectives. Subjects were divided into three groups: the first received processing instruction; the second group structured input only; the third…

  7. Possible Overlapping Time Frames of Acquisition and Consolidation Phases in Object Memory Processes: A Pharmacological Approach

    ERIC Educational Resources Information Center

    Akkerman, Sven; Blokland, Arjan; Prickaerts, Jos

    2016-01-01

    In previous studies, we have shown that acetylcholinesterase inhibitors and phosphodiesterase inhibitors (PDE-Is) are able to improve object memory by enhancing acquisition processes. On the other hand, only PDE-Is improve consolidation processes. Here we show that the cholinesterase inhibitor donepezil also improves memory performance when…

  8. Acquisition of Basic Science Process Skills among Malaysian Upper Primary Students

    ERIC Educational Resources Information Center

    Ong, Eng Tek; Ramiah, Puspa; Ruthven, Kenneth; Salleh, Sabri Mohd; Yusuff, Nik Azmah Nik; Mokhsein, Siti Eshah

    2015-01-01

    This study aims to determine whether there are significant differences in the acquisition of basic science process skills by gender, school location and by grade levels among upper primary school students. Using an established 36-item Basic Science Process Skills test that assesses the skills of observing, communicating, classifying, measuring,…

  9. Executive and Phonological Processes in Second-Language Acquisition

    ERIC Educational Resources Information Center

    Engel de Abreu, Pascale M. J.; Gathercole, Susan E.

    2012-01-01

    This article reports a latent variable study exploring the specific links among executive processes of working memory, phonological short-term memory, phonological awareness, and proficiency in first (L1), second (L2), and third (L3) languages in 8- to 9-year-olds experiencing multilingual education. Children completed multiple L1-measures of…

  10. 76 FR 68037 - Federal Acquisition Regulation; Sudan Waiver Process

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-02

    ... Regulation; Sudan Waiver Process AGENCIES: Department of Defense (DoD), General Services Administration (GSA... prohibition on contracting with entities that conduct restricted business operations in Sudan. This rule adds... that conducts restricted business operations in Sudan. The rule also describes the consultation...

  11. Accelerating COTS Middleware Acquisition: The i-Mate Process

    SciTech Connect

    Liu, Anna; Gorton, Ian

    2003-03-05

    Most major organizations now use some commercial-off-the-shelf middleware components to run their businesses. Key drivers behind this growth include ever-increasing Internet usage and the ongoing need to integrate heterogeneous legacy systems to streamline business processes. As organizations do more business online, they need scalable, high-performance software infrastructures to handle transactions and provide access to core systems.

  12. Human Processing of Knowledge from Texts: Acquisition, Integration, and Reasoning

    DTIC Science & Technology

    1979-06-01

    …or partial ordering of all constituent elements (Barclay, 1973; Foos, Smith, Sabol, & Mynatt, 1976; Hayes-Roth & Hayes-Roth, 1975; Potts, 1972, 1977)… Foos, P. W., Smith, K. H., Sabol, M. A., & Mynatt, B. T. Constructive processes in simple linear-order…

  13. Image processing for cameras with fiber bundle image relay.

    PubMed

    Olivas, Stephen J; Arianpour, Ashkan; Stamenov, Igor; Morrison, Rick; Stack, Ron A; Johnson, Adam R; Agurok, Ilya P; Ford, Joseph E

    2015-02-10

    Some high-performance imaging systems generate a curved focal surface and so are incompatible with focal plane arrays fabricated by conventional silicon processing. One example is a monocentric lens, which forms a wide field-of-view high-resolution spherical image with a radius equal to the focal length. Optical fiber bundles have been used to couple between this focal surface and planar image sensors. However, such fiber-coupled imaging systems suffer from artifacts due to image sampling and incoherent light transfer by the fiber bundle as well as resampling by the focal plane, resulting in a fixed obscuration pattern. Here, we describe digital image processing techniques to improve image quality in a compact 126° field-of-view, 30 megapixel panoramic imager, where a 12 mm focal length F/1.35 lens made of concentric glass surfaces forms a spherical image surface, which is fiber-coupled to six discrete CMOS focal planes. We characterize the locally space-variant system impulse response at various stages: monocentric lens image formation onto the 2.5 μm pitch fiber bundle, image transfer by the fiber bundle, and sensing by a 1.75 μm pitch backside illuminated color focal plane. We demonstrate methods to mitigate moiré artifacts and local obscuration, correct for sphere to plane mapping distortion and vignetting, and stitch together the image data from discrete sensors into a single panorama. We compare processed images from the prototype to those taken with a 10× larger commercial camera with comparable field-of-view and light collection.

  14. Are Changes in Acquisition Policy and Process and in Funding Climate Associated With Cost Growth?

    DTIC Science & Technology

    2015-04-30

    Annual Acquisition Research Symposium, Wednesday Sessions, Volume I. Are Changes in Acquisition Policy and Process and in Funding Climate Associated With Cost Growth? ...acquisition policy and process. Changes in funding climate, however, are found to have a large influence on PAUC growth. These findings have three... David McNicol joined the DoD

  15. Computer graphic method for direct correspondence image acquisition used in full parallax holographic stereograms

    NASA Astrophysics Data System (ADS)

    Madrid Sánchez, Alejandro; Velásquez Prieto, Daniel

    2016-09-01

    The holoprinter technology based on holographic stereograms has driven rapid development of holographic display applications through the holographic recording of a 2D image sequence carrying information about a 3D scene, which can be real or computer generated. The images used in holographic stereograms initially come from the acquisition of different perspectives of the 3D scene with the re-centering camera configuration, and these images must then be rearranged before the optical recording. This paper proposes a method to acquire the required hogel images in one step, without rearrangement algorithms: a virtual camera moves along a virtual rail in conventional computer graphics software. The proposed method reduces the time required to obtain the hogel images and enhances the quality of the 3D holographic images; it can also be applied in different computer graphics software. To validate the method, a full parallax holographic stereogram was made of a computer-generated object.

  16. A uniform method for analytically modeling mulit-target acquisition with independent networked imaging sensors

    NASA Astrophysics Data System (ADS)

    Friedman, Melvin

    2014-05-01

    The problem solved in this paper is easily stated: for a scenario with 𝑛 networked and moving imaging sensors, 𝑚 moving targets and 𝑘 independent observers searching imagery produced by the 𝑛 moving sensors, analytically model system target acquisition probability for each target as a function of time. Information input into the model is the time dependence of 𝘗∞ and 𝜏, two parameters that describe observer-sensor-atmosphere-range-target properties of the target acquisition system for the case where neither the sensor nor target is moving. The parameter 𝘗∞ can be calculated by the NV-IPM model and 𝜏 is estimated empirically from 𝘗∞. In this model 𝑛, 𝑚 and 𝑘 are integers and 𝑘 can be less than, equal to or greater than 𝑛. Increasing 𝑛 and 𝑘 results in a substantial increase in target acquisition probabilities. Because the sensors are networked, a target is said to be detected the moment the first of the 𝑘 observers declares the target. The model applies to time-limited or time-unlimited search, and applies to any imaging sensors operating in any wavelength band provided each sensor can be described by 𝘗∞ and 𝜏 parameters.
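
    For the stationary special case (neither sensor nor target moving, constant 𝘗∞ and 𝜏), the standard time-limited search model and the networked k-observer combination described above reduce to a few lines (a sketch relying on the independence assumption stated in the paper; the numerical parameters are illustrative):

```python
import math

def p_single(t, p_inf, tau):
    """Time-limited search model for one observer and a static
    sensor/target pair: P(t) = P_inf * (1 - exp(-t / tau))."""
    return p_inf * (1.0 - math.exp(-t / tau))

def p_system(t, p_inf, tau, k):
    """k independent observers searching networked imagery: the target
    is acquired the moment the first observer declares it."""
    return 1.0 - (1.0 - p_single(t, p_inf, tau)) ** k
```

    The complement-product form makes the abstract's observation concrete: increasing k raises system acquisition probability, and with several observers it can exceed the single-observer asymptote 𝘗∞.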

  17. Effects of a patient’s name and image on medical knowledge acquisition

    PubMed Central

    Guajardo, Jesus R.; Petershack, Jean A.; Caplow, Julie A.; Littlefield, John H.

    2015-01-01

    Purpose To assess whether there are differences in medical students’ (MS) knowledge acquisition after being provided a virtual patient (VP) case summary with a patient’s name and facial picture included compared to no patient’s name or image. Method 76 MS from four clerkship blocks participated. Blocks one and three (Treatment group) were provided case materials containing the patient’s name and facial picture while blocks two and four (Control group) were provided similar materials without the patient’s name or image. Knowledge acquisition was evaluated with a multiple-choice-question examination (CQA_K). Results Treatment group CQA_K scores were 64.6% (block one, n = 18) and 76.0% (block three, n = 22). Control group scores were 71.7% (block two, n = 17) and 68.4% (block four, n = 19). The ANOVA F-test among the four block mean scores was not significant; F(3, 72) = 1.68, p = 0.18, η² = 0.07. Only 22.2% and 27.3% of the MS from blocks one and three respectively correctly recalled the patient’s name while 16.7% and 40.9% recalled the correct final diagnosis of the patient. Conclusions These results suggest that including a patient’s name and facial picture on reading materials may not improve MS knowledge acquisition. Corroborating studies should be performed before applying these results to the design of instructional materials. PMID:27004072

  18. CT Image Processing Using Public Digital Networks

    PubMed Central

    Rhodes, Michael L.; Azzawi, Yu-Ming; Quinn, John F.; Glenn, William V.; Rothman, Stephen L.G.

    1984-01-01

    Nationwide commercial computer communication is now commonplace for those applications where digital dialogues are generally short and widely distributed, and where bandwidth does not exceed that of dial-up telephone lines. Image processing using such networks is prohibitive because of the large volume of data inherent to digital pictures. With a blend of increasing bandwidth and distributed processing, network image processing becomes possible. This paper examines characteristics of a digital image processing service for a nationwide network of CT scanner installations. Issues of image transmission, data compression, distributed processing, software maintenance, and interfacility communication are also discussed. Included are results that show the volume and type of processing experienced by a network of over 50 CT scanners for the last 32 months.

  19. Efficient Data Capture and Post-Processing for Real-Time Imaging Using AN Ultrasonic Array

    NASA Astrophysics Data System (ADS)

    Moreau, L.; Hunter, A. J.; Drinkwater, B. W.; Wilcox, P. D.

    2010-02-01

    Over the past few years, ultrasonic phased arrays have shown good potential for nondestructive testing (NDT), thanks to high-resolution imaging algorithms. Many algorithms are based on the full matrix capture, obtained by firing each element of an ultrasonic array independently, and collecting the data with all elements. Because of the finite sound velocity in the specimen, two consecutive firings must be separated by a minimum time interval. Therefore, more array elements require longer data acquisition times. Moreover, if the array has N elements, then the full matrix contains N² temporal signals to be processed. Because of the limited calculation speed of current computers, a large matrix of data can result in long post-processing times. In an industrial context where real-time imaging is desirable, it is crucial to reduce acquisition and/or post-processing times. This paper investigates methods designed to reduce acquisition and post-processing times for the total focusing method and wavenumber imaging algorithms. Limited transmission cycles are used to reduce data capture and post-processing. Post-processing times are further reduced by demodulating the data to temporal baseband frequencies. Results are presented so that a compromise can be made between acquisition time, post-processing time and image quality.
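
    The total focusing method named above amounts to delay-and-sum over all N² transmit-receive pairs of the full matrix capture: for each image pixel, sum each A-scan at the sample corresponding to the round-trip time of flight. A minimal sketch (the array geometry, sound speed, sampling rate, and point-scatterer data are all assumed for illustration):

```python
import math

C = 1500.0   # assumed sound speed (m/s)
FS = 1e6     # assumed sampling rate (Hz)
ELEMS = [(x * 1e-3, 0.0) for x in range(8)]  # linear array, 1 mm pitch

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tfm_pixel(fmc, point):
    """Delay-and-sum one image pixel from the full matrix capture
    fmc[tx][rx] (a list of sampled A-scans)."""
    total = 0.0
    for tx, tx_el in enumerate(ELEMS):
        for rx, rx_el in enumerate(ELEMS):
            tof = (dist(tx_el, point) + dist(rx_el, point)) / C
            idx = int(round(tof * FS))
            if idx < len(fmc[tx][rx]):
                total += fmc[tx][rx][idx]
    return total

# synthetic FMC with one point scatterer 10 mm below the array centre
scatterer = (3.5e-3, 10e-3)
fmc = [[[0.0] * 200 for _ in ELEMS] for _ in ELEMS]
for tx, tx_el in enumerate(ELEMS):
    for rx, rx_el in enumerate(ELEMS):
        idx = int(round((dist(tx_el, scatterer) + dist(rx_el, scatterer)) / C * FS))
        fmc[tx][rx][idx] = 1.0

peak = tfm_pixel(fmc, scatterer)      # all 8 x 8 pairs sum coherently
off = tfm_pixel(fmc, (3.5e-3, 5e-3))  # a defocused pixel
```

    Evaluating tfm_pixel over a grid of points yields the TFM image; the cost per pixel scales with N², which is why the paper pursues limited transmission cycles and baseband demodulation.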

  20. Retinal oximetry based on nonsimultaneous image acquisition using a conventional fundus camera.

    PubMed

    Kim, Sun Kwon; Kim, Dong Myung; Suh, Min Hee; Kim, Martha; Kim, Hee Chan

    2011-08-01

    To measure retinal arteriole and venule oxygen saturation (SO₂) using a conventional fundus camera, retinal oximetry based on nonsimultaneous image acquisition was developed and evaluated. Two retinal images were acquired sequentially with a conventional fundus camera using two bandpass filters (568 nm: isosbestic; 600 nm: non-isosbestic wavelength), one after another, instead of the built-in green filter. The images were registered to compensate for the differences caused by eye movements during the image acquisition. Retinal SO₂ was measured using two-wavelength oximetry. To evaluate the sensitivity of the proposed method, SO₂ in the arterioles and venules was compared before and after inhalation of 100% O₂ in 11 healthy subjects. After inhalation of 100% O₂, SO₂ increased from 96.0 ± 6.0% to 98.8 ± 7.1% in the arterioles (p = 0.002) and from 54.0 ± 8.0% to 66.7 ± 7.2% in the venules (p = 0.005) (paired t-test, n = 11). Reproducibility of the method was 2.6% and 5.2% in the arterioles and venules, respectively (average standard deviation of five measurements, n = 11).
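
    Two-wavelength oximetry of this kind computes, per vessel, an optical density at the isosbestic and oxygen-sensitive wavelengths and maps their ratio linearly to SO₂. A sketch (the calibration constants a and b below are hypothetical, not the study's values):

```python
import math

def optical_density(i_vessel, i_background):
    """OD = log10(I_background / I_vessel) at one wavelength."""
    return math.log10(i_background / i_vessel)

def so2_from_odr(od_600, od_568, a=1.28, b=-1.25):
    """Linear two-wavelength model: SO2 = a + b * (OD_600 / OD_568).
    a and b are hypothetical calibration constants."""
    return a + b * (od_600 / od_568)

# a darker vessel at the oxygen-sensitive 600 nm band (relative to the
# isosbestic 568 nm band) implies lower saturation
art = so2_from_odr(optical_density(80, 100), optical_density(50, 100))
ven = so2_from_odr(optical_density(60, 100), optical_density(50, 100))
```

    In practice a and b are obtained by calibrating the optical density ratio against reference saturation values; the isosbestic 568 nm measurement normalizes out vessel diameter and pigmentation effects.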

  1. Evidence on the Effect of DoD Acquisition Policy and Process and Funding Climate on Cancellations of Major Defense Acquisitions Programs

    DTIC Science & Technology

    2015-05-01

    growth and changes over time in acquisition policies and processes, after taking funding climate into account. McN-W (2014) found that there is not a...reasonable summary conclusion is that neither acquisition policy and process changes nor changes in funding climate have had much, if any, effect on... changes in funding climate are strongly associated with cancellations. They arguably have been crucial. DoD force structure, the capabilities that

  2. Image processing of digital chest ionograms.

    PubMed

    Yarwood, J R; Moores, B M

    1988-10-01

    A number of image-processing techniques have been applied to a digital ionographic chest image in order to evaluate their possible effects on this type of image. To quantify any effect, a simulated lesion was superimposed on the image at a variety of locations representing different types of structural detail. Visualization of these lesions was evaluated by a number of observers both before and after the processing operations. The operations employed included grey-scale transformations, histogram operations, and edge-enhancement and smoothing functions. The resulting effects of these operations on the visualization of the simulated lesions are discussed.
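As an example of the histogram operations mentioned above, here is a minimal histogram-equalization sketch (a generic implementation for illustration, not the paper's method):

```python
import numpy as np

def equalize_histogram(img, levels=256):
    """Histogram equalization: map grey levels through the normalized
    cumulative histogram so that intensities spread over the full range."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                                  # normalize to [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(img.dtype)
    return lut[img]                                 # apply lookup table
```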

  3. Grid Computing Application for Brain Magnetic Resonance Image Processing

    NASA Astrophysics Data System (ADS)

    Valdivia, F.; Crépeault, B.; Duchesne, S.

    2012-02-01

    This work emphasizes the use of grid computing and web technology for automatic post-processing of brain magnetic resonance images (MRI) in the context of neuropsychiatric (Alzheimer's disease) research. Post-acquisition image processing is achieved through the interconnection of several individual processes into pipelines. Each process has input and output data ports, options and execution parameters, and performs single tasks such as: a) extracting individual image attributes (e.g. dimensions, orientation, center of mass), b) performing image transformations (e.g. scaling, rotation, skewing, intensity standardization, linear and non-linear registration), c) performing image statistical analyses, and d) producing the necessary quality control images and/or files for user review. The pipelines are built to perform specific sequences of tasks on the alphanumeric data and MRIs contained in our database. The web application is coded in PHP and allows the creation of scripts to create, store and execute pipelines and their instances either on our local cluster or on high-performance computing platforms. To run an instance on an external cluster, the web application opens a communication tunnel through which it copies the necessary files, submits the execution commands and collects the results. We present results of system tests for the processing of a set of 821 brain MRIs from the Alzheimer's Disease Neuroimaging Initiative study via a nonlinear registration pipeline composed of 10 processes. Our results show successful execution on both local and external clusters, and a 4-fold increase in performance when using the external cluster. However, the latter's performance does not scale linearly, as queue waiting times and execution overhead increase with the number of tasks to be executed.

  4. Lossy cardiac x-ray image compression based on acquisition noise

    NASA Astrophysics Data System (ADS)

    de Bruijn, Frederik J.; Slump, Cornelis H.

    1997-05-01

    In lossy medical image compression, the requirements for the preservation of diagnostic integrity cannot be easily formulated in terms of a perceptual model, especially since, in reality, human visual perception depends on numerous factors such as the viewing conditions and psycho-visual factors. Therefore, we investigate the possibility of developing alternative measures for data loss based on the characteristics of the acquisition system, in our case a digital cardiac imaging system. In general, due to the low exposure, cardiac x-ray images tend to be relatively noisy. The main noise contributions are quantum noise and electrical noise. The electrical noise is not correlated with the signal. In addition, the signal can be transformed such that the signal-correlated, Poisson-distributed quantum noise is transformed into an additional zero-mean Gaussian noise source which is uncorrelated with the signal. Furthermore, the system's modulation transfer function imposes a known spatial-frequency limitation on the output signal. Under the assumption that noise which is not correlated with the signal contains no diagnostic information, we have derived a compression measure based on the acquisition parameters of a digital cardiac imaging system. The measure is used for bit assignment and quantization of transform coefficients. We present a blockwise-DCT compression algorithm based on the conventional JPEG standard. However, the bit assignment to the transform coefficients is now determined by an assumed noise variance for each coefficient, for a given set of acquisition parameters. Experiments with the algorithm indicate that a bit rate of 0.6 bit/pixel is feasible without apparent loss of clinical information.
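The noise-driven bit-assignment idea can be illustrated with a toy quantizer whose step size per DCT coefficient is tied to an assumed per-coefficient noise standard deviation, so coefficients are represented no more finely than the acquisition noise warrants. The scaling factor `k` and the array shapes are assumptions of this sketch, not the paper's parameters:

```python
import numpy as np

def quantize_block(dct_block, noise_std, k=2.0):
    """Quantize a block of DCT coefficients with step sizes proportional
    to an assumed acquisition-noise standard deviation per coefficient.

    dct_block : array of DCT coefficients
    noise_std : array (same shape) of assumed noise std per coefficient
    k         : hypothetical step scaling; step = k * noise_std
    Returns the integer quantizer indices and the reconstructed coefficients.
    """
    step = k * noise_std
    q = np.round(dct_block / step)
    return q, q * step
```

Coefficients smaller than their noise-derived step quantize to zero, which is where the bit-rate saving comes from without (by the paper's assumption) losing diagnostic content.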

  5. The impact of cine EPID image acquisition frame rate on markerless soft-tissue tracking

    SciTech Connect

    Yip, Stephen Rottmann, Joerg; Berbeco, Ross

    2014-06-15

    Purpose: Although reduction of the cine electronic portal imaging device (EPID) acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion, leading to poor autotracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz with an amorphous silicon portal imager (AS1000, Varian Medical Systems, Palo Alto, CA). The maximum frame rate of 12.87 Hz is imposed by the EPID. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for autotracking. The difference between the programmed and autotracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at 11 field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging. δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the autotracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible, with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient studies (R = 0.72). Moderate to poor correlation was found between image noise and tracking error, with R = −0.58 and −0.19 for the two studies, respectively. Conclusions: Cine EPID
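Continuous frame averaging as used above can be sketched generically: grouping n consecutive cine frames divides the effective frame rate by n (e.g. 12.87 Hz / 3 ≈ 4.29 Hz), trading noise for motion blur. This is an illustrative implementation, not the study's code:

```python
import numpy as np

def average_frames(frames, n):
    """Average every n consecutive cine frames, emulating a reduced
    acquisition frame rate. frames: (T, H, W) array; trailing frames
    that do not fill a complete group of n are dropped."""
    T = (frames.shape[0] // n) * n
    return frames[:T].reshape(-1, n, *frames.shape[1:]).mean(axis=1)
```

Averaging n frames reduces uncorrelated image noise by roughly √n, but any target displacement within the n-frame window is smeared into motion blur, which is the trade-off the study quantifies.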

  6. Process perspective on image quality evaluation

    NASA Astrophysics Data System (ADS)

    Leisti, Tuomas; Halonen, Raisa; Kokkonen, Anna; Weckman, Hanna; Mettänen, Marja; Lensu, Lasse; Ritala, Risto; Oittinen, Pirkko; Nyman, Göte

    2008-01-01

    The psychological complexity of multivariate image quality evaluation makes it difficult to develop general image quality metrics. Quality evaluation includes several mental processes, and ignoring these processes and using only a few test images can lead to biased results. Using a qualitative/quantitative (Interpretation Based Quality, IBQ) methodology, we examined the process of pair-wise comparison in a setting where the quality of images printed by a laser printer on different paper grades was evaluated. The test image consisted of a picture of a table covered with several objects. Three other images were also used: photographs of a woman, a cityscape and a countryside. In addition to the pair-wise comparisons, observers (N=10) were interviewed about the subjective quality attributes they used in making their quality decisions. An examination of the individual pair-wise comparisons revealed serious inconsistencies in observers' evaluations for the test image content, but not for the other contents. The qualitative analysis showed that this inconsistency was due to the observers' focus of attention. The lack of easily recognizable context in the test image may have contributed to this inconsistency. To obtain reliable knowledge of the effect of image context or attention on subjective image quality, a qualitative methodology is needed.

  7. Combined optimization of image-gathering and image-processing systems for scene feature detection

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Arduini, Robert F.; Samms, Richard W.

    1987-01-01

    The relationship between the image-gathering and image-processing systems for minimum mean squared error estimation of scene characteristics is investigated. A stochastic optimization problem is formulated in which the objective is to determine a spatial characteristic of the scene rather than a feature of the already blurred, sampled and noisy image data. An analytical solution for the optimal characteristic image processor is developed. The Wiener filter for the sampled image case is obtained as a special case, where the desired characteristic is scene restoration. Optimal edge detection is investigated using the Laplacian-of-Gaussian operator ∇²G as the desired characteristic, where G is a two-dimensional Gaussian distribution function. It is shown that the optimal edge detector compensates for the blurring introduced by the image gathering optics, and notably, that it is not circularly symmetric. The lack of circular symmetry is largely due to the geometric effects of the sampling lattice used in image acquisition. The optimal image gathering optical transfer function is also investigated and the results of a sensitivity analysis are shown.
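For reference, the Laplacian-of-Gaussian characteristic itself, before any compensation for the gathering optics or sampling lattice that the paper derives, can be sampled as below. This assumes the desired characteristic is ∇²G for a Gaussian G; the zero-DC normalization is a common practical convention, not from the paper:

```python
import numpy as np

def log_kernel(sigma, size):
    """Sampled Laplacian-of-Gaussian (LoG) kernel ∇²G, up to a constant,
    for G(x, y) = exp(-(x² + y²) / (2σ²))."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    g = np.exp(-r2 / (2.0 * sigma ** 2))
    log = (r2 - 2.0 * sigma ** 2) / sigma ** 4 * g
    return log - log.mean()   # force zero response to constant regions
```

Note the sampled kernel is circularly symmetric; the paper's point is that the *optimal* detector for a real sampling lattice is not.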

  8. Task-driven image acquisition and reconstruction in cone-beam CT.

    PubMed

    Gang, Grace J; Stayman, J Webster; Ehtiati, Tina; Siewerdsen, Jeffrey H

    2015-04-21

    This work introduces a task-driven imaging framework that incorporates a mathematical definition of the imaging task, a model of the imaging system, and a patient-specific anatomical model to prospectively design image acquisition and reconstruction techniques to optimize task performance. The framework is applied to joint optimization of tube current modulation, view-dependent reconstruction kernel, and orbital tilt in cone-beam CT. The system model considers a cone-beam CT system incorporating a flat-panel detector and 3D filtered backprojection and accurately describes the spatially varying noise and resolution over a wide range of imaging parameters in the presence of a realistic anatomical model. Task-based detectability index (d') is incorporated as the objective function in a task-driven optimization of image acquisition and reconstruction techniques. The orbital tilt was optimized through an exhaustive search across tilt angles ranging ± 30°. For each tilt angle, the view-dependent tube current and reconstruction kernel (i.e. the modulation profiles) that maximized detectability were identified via an alternating optimization. The task-driven approach was compared with conventional unmodulated and automatic exposure control (AEC) strategies for a variety of imaging tasks and anthropomorphic phantoms. The task-driven strategy outperformed the unmodulated and AEC cases for all tasks. For example, d' for a sphere detection task in a head phantom was improved by 30% compared to the unmodulated case by using smoother kernels for noisy views and distributing mAs across less noisy views (at fixed total mAs) in a manner that was beneficial to task performance. Similarly for detection of a line-pair pattern, the task-driven approach increased d' by 80% compared to no modulation by means of view-dependent mA and kernel selection that yields modulation transfer function and noise-power spectrum optimal to the task. Optimization of orbital tilt identified the tilt

  11. On Processing Hexagonally Sampled Images

    DTIC Science & Technology

    2011-07-01

    Defines the Euclidean and "city-block" distances (on the image plane) between two points p1 = (a1, r1, c1) and p2 = (a2, r2, c2) of a hexagonally sampled image. DISTRIBUTION A. Approved for public release, distribution unlimited. (96ABW-2011-0325) Neuromorphic Infrared Sensor (NIFS)

  12. Dynamic whole-body PET parametric imaging: I. Concept, acquisition protocol optimization and clinical application

    NASA Astrophysics Data System (ADS)

    Karakatsanis, Nicolas A.; Lodge, Martin A.; Tahari, Abdel K.; Zhou, Y.; Wahl, Richard L.; Rahmim, Arman

    2013-10-01

    Static whole-body PET/CT, employing the standardized uptake value (SUV), is considered the standard clinical approach to diagnosis and treatment response monitoring for a wide range of oncologic malignancies. Alternative PET protocols involving dynamic acquisition of temporal images have been implemented in the research setting, allowing quantification of tracer dynamics, an important capability for tumor characterization and treatment response monitoring. Nonetheless, dynamic protocols have been confined to single-bed coverage, limiting the axial field-of-view to ~15-20 cm, and have not been translated to the routine clinical context of whole-body PET imaging for the inspection of disseminated disease. Here, we pursue a transition to dynamic whole-body PET parametric imaging, by presenting, within a unified framework, clinically feasible multi-bed dynamic PET acquisition protocols and parametric imaging methods. We investigate solutions to address the challenges of: (i) long acquisitions, (ii) small number of dynamic frames per bed, and (iii) non-invasive quantification of kinetics in the plasma. In the present study, a novel dynamic (4D) whole-body PET acquisition protocol of ~45 min total length is presented, composed of (i) an initial 6 min dynamic PET scan (24 frames) over the heart, followed by (ii) a sequence of multi-pass multi-bed PET scans (six passes × seven bed positions, each scanned for 45 s). Standard Patlak linear graphical analysis modeling was employed, coupled with image-derived plasma input function measurements. Ordinary least squares Patlak estimation was used as the baseline regression method to quantify the physiological parameters of tracer uptake rate Ki and total blood distribution volume V on an individual voxel basis. Extensive Monte Carlo simulation studies, using a wide set of published kinetic FDG parameters and GATE and XCAT platforms, were conducted to optimize the acquisition protocol from a range of ten different clinically
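Ordinary least squares Patlak estimation, the baseline regression named above, can be sketched per voxel as follows. This is a generic implementation of the Patlak graphical method (x(t) = ∫Cp dτ / Cp(t), y(t) = C_tissue(t) / Cp(t), with slope Ki and intercept V); variable names and the trapezoidal integration are assumptions of this sketch:

```python
import numpy as np

def patlak_ols(t, cp, ct, t_star=0.0):
    """Ordinary least squares Patlak graphical analysis for one voxel.

    t  : sample times; cp : plasma input function Cp(t)
    ct : tissue time-activity curve C_tissue(t)
    t_star : time after which the Patlak plot is assumed linear
    Returns (Ki, V): uptake rate (slope) and blood distribution volume (intercept).
    """
    # trapezoidal integral of Cp up to each time point
    icp = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 * (cp[1:] + cp[:-1]))))
    x = icp / cp
    y = ct / cp
    m = t >= t_star
    ki, v = np.polyfit(x[m], y[m], 1)   # y = Ki * x + V
    return ki, v
```

In the whole-body setting, the same regression is applied voxel-wise with the image-derived input function supplying Cp.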

  13. Image processing technology for enhanced situational awareness

    NASA Astrophysics Data System (ADS)

    Page, S. F.; Smith, M. I.; Hickman, D.

    2009-09-01

    This paper discusses the integration of a number of advanced image and data processing technologies in support of the development of next-generation Situational Awareness systems for counter-terrorism and crime fighting applications. In particular, the paper discusses the European Union Framework 7 'SAMURAI' project, which is investigating novel approaches to interactive Situational Awareness using cooperative networks of heterogeneous imaging sensors. Specific focus is given to novel Data Fusion aspects of the research which aim to improve system performance through intelligently fusing both image data and non-image data sources, resolving human-machine conflicts, and refining the Situational Awareness picture. In addition, the paper highlights some recent advances in supporting image processing technologies. Finally, future trends in image-based Situational Awareness are identified, such as Post-Event Analysis (also known as 'Back-Tracking'), and the associated technical challenges are discussed.

  14. Image processing system for the measurement of timber truck loads

    NASA Astrophysics Data System (ADS)

    Carvalho, Fernando D.; Correia, Bento A. B.; Davies, Roger; Rodrigues, Fernando C.; Freitas, Jose C. A.

    1993-01-01

    The paper industry uses wood as its raw material. To know the quantity of wood in the pile of sawn tree trunks, every truck load entering the plant is measured to determine its volume. The objective of this procedure is to know the solid volume of wood stocked in the plant. Weighing the tree trunks has its own problems, due to their high capacity for absorbing water. Image processing techniques were used to evaluate the volume of a truck load of logs of wood. The system is based on a PC equipped with an image processing board using data flow processors. Three cameras allow image acquisition of the sides and rear of the truck. The lateral images contain information about the sectional area of the logs, and the rear image contains information about the length of the logs. The machine vision system and the implemented algorithms are described. The results being obtained with the industrial prototype that is now installed in a paper mill are also presented.

  15. Development of a 3-D data acquisition system for human facial imaging

    NASA Astrophysics Data System (ADS)

    Marshall, Stephen J.; Rixon, R. C.; Whiteford, Don N.; Wells, Peter J.; Powell, S. J.

    1990-07-01

    While preparing to conduct human facial surgery, it is necessary to visualise the effects of proposed surgery on the patient's appearance. This visualisation is of great benefit to both surgeon and patient, and has traditionally been achieved by the manual manipulation of photographs. Technological developments in the areas of computer-aided design and optical sensing now make it possible to construct a computer-based imaging system which can simulate the effects of facial surgery on patients. A collaborative project with the aim of constructing a prototype facial imaging system is under way between the National Engineering Laboratory and St George's Hospital. The proposed system will acquire, display and manipulate 3-dimensional facial images of patients requiring facial surgery. The feasibility of using two NEL developed optical measurement methods for 3-D facial data acquisition had been established by their successful application to the measurement of dummy heads. The two optical measurement systems, the NEL Auto-MATE moire fringe contouring system and the NEL STRIPE laser scanning triangulation system, were further developed to adapt them for use in facial imaging and additional tests carried out in which emphasis was placed on the use of live human subjects. The knowledge gained in the execution of the tests enabled the selection of the most suitable of the two methods studied for facial data acquisition. A full description of the methods and equipment used in the study will be given. Additionally, work on the effects of the quality and quantity of measurement data on the facial image will be described. Finally, the question of how best to provide display and manipulation of the facial images will be addressed.

  16. Energy preserving QMF for image processing.

    PubMed

    Lian, Jian-ao; Wang, Yonghui

    2014-07-01

    Implementation of new biorthogonal filter banks (BFBs) for image compression and denoising is performed using test images with diversified characteristics. These new BFBs are linear-phase, have odd lengths, and have a critical feature: the filters preserve signal energy very well. Experimental results show that the proposed filter banks demonstrate promising performance improvement over filter banks widely used in the image processing area, such as the CDF 9/7.

  17. Earth Observation Services (Image Processing Software)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    San Diego State University and Environmental Systems Research Institute, with other agencies, have applied satellite imaging and image processing techniques to geographic information systems (GIS) updating. The resulting images display land use and are used by a regional planning agency for applications like mapping vegetation distribution and preserving wildlife habitats. The EOCAP program provides government co-funding to encourage private investment in, and to broaden the use of NASA-developed technology for analyzing information about Earth and ocean resources.

  18. Nonlinear Optical Image Processing with Bacteriorhodopsin Films

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Deiss, Ron (Technical Monitor)

    1994-01-01

    The transmission properties of some bacteriorhodopsin film spatial light modulators are uniquely suited to allow nonlinear optical image processing operations to be applied to images with multiplicative noise characteristics. A logarithmic amplitude transmission feature of the film permits the conversion of multiplicative noise to additive noise, which may then be linearly filtered out in the Fourier plane of the transformed image. The bacteriorhodopsin film displays the logarithmic amplitude response for write beam intensities spanning a dynamic range greater than 2.0 orders of magnitude. We present experimental results demonstrating the principle and capability for several different image and noise situations, including deterministic noise and speckle. Using the bacteriorhodopsin film, we successfully filter out image noise from the transformed image that cannot be removed from the original image.
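The log-transform principle can be illustrated digitally with a homomorphic-filtering sketch: a logarithm converts multiplicative noise to additive noise, a Fourier-plane filter removes it, and an exponential restores the image. In the paper the logarithmic step is performed optically by the bacteriorhodopsin film; the code below is only an illustrative software analogue, with the low-pass cutoff chosen arbitrarily:

```python
import numpy as np

def homomorphic_filter(img, cutoff=0.1):
    """Suppress multiplicative noise: log() makes the noise additive,
    a low-pass Fourier-plane mask removes high-frequency noise,
    and exp() restores the image domain."""
    log_img = np.log(img + 1e-8)             # multiplicative -> additive
    F = np.fft.fft2(log_img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    mask = (fy ** 2 + fx ** 2) <= cutoff ** 2   # keep low spatial frequencies
    return np.exp(np.fft.ifft2(F * mask).real)
```

The linear Fourier-plane filtering step only works on the noise after the log step, which is the point of the film's logarithmic amplitude response.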

  19. Investigations on the efficiency of cardiac-gated methods for the acquisition of diffusion-weighted images

    NASA Astrophysics Data System (ADS)

    Nunes, Rita G.; Jezzard, Peter; Clare, Stuart

    2005-11-01

    Diffusion-weighted images are inherently very sensitive to motion. Pulsatile motion of the brain can give rise to artifactual signal attenuation leading to over-estimation of the apparent diffusion coefficients, even with snapshot echo planar imaging. Such miscalculations can result in erroneous estimates of the principal diffusion directions. Cardiac gating can be performed to confine acquisition to the quiet portion of the cycle. Although effective, this approach leads to significantly longer acquisition times. On the other hand, it has been demonstrated that pulsatile motion is not significant in regions above the corpus callosum. To reduce acquisition times and improve the efficiency of whole brain cardiac-gated acquisitions, the upper slices of the brain can be imaged during systole, reserving diastole for those slices most affected by pulsatile motion. The merits and disadvantages of this optimized approach are investigated here, in comparison to a more standard gating method and to the non-gated approach.

  20. An Acquisition Guide for Executives

    EPA Pesticide Factsheets

    This guide covers the following subjects: What is Acquisition?; Purpose and Primary Functions of the Agency’s Acquisition System; Key Organizations in Acquisitions; Legal Framework; Key Players in Acquisitions; the Acquisition Process; and Acquisition Thresholds.

  1. Digital Image Processing in Private Industry.

    ERIC Educational Resources Information Center

    Moore, Connie

    1986-01-01

    Examines various types of private industry optical disk installations in terms of business requirements for digital image systems in five areas: records management; transaction processing; engineering/manufacturing; information distribution; and office automation. Approaches for implementing image systems are addressed as well as key success…

  2. Variability of textural features in FDG PET images due to different acquisition modes and reconstruction parameters

    PubMed Central

    GALAVIS, PAULINA E.; HOLLENSEN, CHRISTIAN; JALLOW, NGONEH; PALIWAL, BHUDATT; JERAJ, ROBERT

    2014-01-01

    Background. Characterization of textural features (spatial distributions of image intensity levels) has been considered as a tool for automatic tumor segmentation. The purpose of this work is to study the variability of textural features in PET images due to different acquisition modes and reconstruction parameters. Material and methods. Twenty patients with solid tumors underwent PET/CT scans on a GE Discovery VCT scanner, 45–60 minutes post-injection of 10 mCi of [18F]FDG. Scans were acquired in both 2D and 3D modes. For each acquisition, the raw PET data was reconstructed using five different reconstruction parameters. Lesions were segmented on a default image using a threshold of 40% of maximum SUV. Fifty different texture features were calculated inside the tumors. The ranges of variation of the features were calculated with respect to the average value. Results. The fifty textural features were classified into three categories based on their range of variation: small, intermediate and large variability. Features with small variability (range ≤ 5%) were entropy-first order, energy, maximal correlation coefficient (second-order feature) and low-gray level run emphasis (high-order feature). The features with intermediate variability (10% ≤ range ≤ 25%) were entropy-GLCM, sum entropy, high gray level run emphasis, gray level non-uniformity, small number emphasis, and entropy-NGL. The forty remaining features presented large variations (range > 30%). Conclusion. Textural features such as entropy-first order, energy, maximal correlation coefficient, and low-gray level run emphasis exhibited small variations due to different acquisition modes and reconstruction parameters. Features with low levels of variation are better candidates for reproducible tumor segmentation. Even though features such as contrast-NGTD, coarseness, homogeneity, and busyness have been previously used, our data indicated that these features presented large variations, therefore they could not be
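Two of the low-variability first-order features named above, entropy and energy, are simple functions of the grey-level histogram of the segmented lesion. A generic sketch (the binning scheme and ROI handling are assumptions of this illustration, not the study's settings):

```python
import numpy as np

def first_order_features(roi, levels=32):
    """First-order entropy and energy from the grey-level histogram of a
    segmented ROI of non-negative integer grey levels.

    entropy = -sum(p * log2 p) over occupied bins; energy = sum(p**2)."""
    hist = np.bincount(roi.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()               # grey-level probabilities
    nz = p > 0                          # avoid log2(0)
    entropy = -np.sum(p[nz] * np.log2(p[nz]))
    energy = np.sum(p ** 2)
    return entropy, energy
```

Because these depend only on the intensity histogram, not on spatial arrangement, they are plausibly less sensitive to reconstruction-induced texture changes than the second- and high-order features the study found unstable.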

  3. Inverse Process Analysis for the Acquisition of Thermophysical Data

    SciTech Connect

    Jay Frankel; Adrian Sabau

    2004-10-31

    One of the main barriers in the analysis and design of materials processing and industrial applications is the lack of accurate experimental data on the thermophysical properties of materials. To date, the measurement of most of these high-temperature thermophysical properties has often been plagued by temperature lags that are inherent in measurement techniques. These lags can be accounted for with the appropriate mathematical models, reflecting the experimental apparatus and sample region, in order to deduce the desired measurement as a function of true sample temperature. Differential scanning calorimeter (DSC) measurements are routinely used to determine enthalpies of phase change, phase transition temperatures, glass transition temperatures, and heat capacities. In the aluminum, steel, and metal casting industries, predicting the formation of defects such as shrinkage voids, microporosity, and macrosegregation is limited by the data available on fraction solid and density evolution during solidification. Dilatometer measurements are routinely used to determine the density of a sample at various temperatures. An accurate determination of the thermophysical properties of materials is needed to achieve accuracy in the numerical simulations used to improve or design new material processes. In most of the instruments used to measure properties, the temperature is changed according to instrument controllers and there is a nonhomogeneous temperature distribution within the instrument. Additionally, the sample temperature cannot be measured directly: temperature data are collected from a thermocouple that is placed at a different location than that of the sample, thus introducing a time lag. The goal of this project was to extend the utility, quality and accuracy of two types of commercial instruments, a DSC and a dilatometer, used for thermophysical property measurements in high-temperature environments. In particular, the quantification of solid fraction and

  4. Command Line Image Processing System (CLIPS)

    NASA Astrophysics Data System (ADS)

    Fleagle, S. R.; Meyers, G. L.; Kulinski, R. G.

    1985-06-01

    An interactive image processing language (CLIPS) has been developed for use in an image processing environment. CLIPS uses a simple syntax with extensive on-line help to allow even the most naive user to perform complex image processing tasks. In addition, CLIPS functions as an interpretive language complete with data structures and program control statements. CLIPS statements fall into one of three categories: command, control, and utility statements. Command statements are expressions comprised of intrinsic functions and/or arithmetic operators which act directly on image or user-defined data. Some examples of CLIPS intrinsic functions are ROTATE, FILTER, and EXPONENT. Control statements allow a structured programming style through the use of statements such as DO WHILE and IF-THEN-ELSE. Utility statements such as DEFINE, READ, and WRITE support I/O and user-defined data structures. Since CLIPS uses a table-driven parser, it is easily adapted to any environment. New commands may be added to CLIPS by writing the procedure in a high-level language such as Pascal or FORTRAN and inserting the syntax for that command into the table. However, CLIPS was designed by incorporating most imaging operations into the language as intrinsic functions. CLIPS allows the user to generate new procedures easily with these powerful functions in an interactive or off-line fashion using a text editor. The fact that CLIPS can be used to generate complex procedures quickly or perform basic image processing functions interactively makes it a valuable tool in any image processing environment.
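    The table-driven dispatch that makes such a language extensible can be sketched as follows; the table layout and handler names are hypothetical, for illustration only:

```python
import numpy as np

# Minimal sketch of a table-driven command parser in the spirit of CLIPS:
# new commands are added by inserting a row in the table, not by changing
# the parser itself. All names here are illustrative, not from the system.
COMMAND_TABLE = {
    # name       (handler,                                  arg count)
    "ROTATE":   (lambda img, k: np.rot90(img, int(k)),      1),
    "EXPONENT": (lambda img, p: img ** float(p),            1),
}

def execute(statement, image):
    """Parse one command statement and apply it to an image."""
    name, *args = statement.split()
    handler, nargs = COMMAND_TABLE[name]
    if len(args) != nargs:
        raise ValueError(f"{name} expects {nargs} argument(s)")
    return handler(image, *args)

img = np.array([[1.0, 2.0], [3.0, 4.0]])
rotated = execute("ROTATE 1", img)      # one counterclockwise quarter turn
print(rotated.tolist())
```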

  5. Wide-field flexible endoscope for simultaneous color and NIR fluorescence image acquisition during surveillance colonoscopy

    NASA Astrophysics Data System (ADS)

    García-Allende, P. Beatriz; Nagengast, Wouter B.; Glatz, Jürgen; Ntziachristos, Vasilis

    2013-03-01

    Colorectal cancer (CRC) is the third most common form of cancer and, despite recent declines in both incidence and mortality, remains the second leading cause of cancer-related deaths in the western world. Colonoscopy is the standard for detection and removal of premalignant lesions to prevent CRC. The major challenges that physicians face during surveillance colonoscopy are the high adenoma miss-rates and the lack of functional information to facilitate decision-making concerning which lesions to remove. Targeted imaging with NIR fluorescence would address these limitations. Tissue penetration is increased in the NIR range, while the combination with targeted NIR fluorescent agents provides molecularly specific detection of cancer cells, i.e. a red-flag detection strategy that allows tumor imaging with optimal sensitivity and specificity. This work describes the development of a flexible endoscopic fluorescence imaging method that can be integrated with standard medical endoscopes and facilitates the clinical use of this potential. A semi-disposable coherent fiber-optic imaging bundle that is traditionally employed in the exploration of biliary and pancreatic ducts is proposed, since it is long and thin enough to be guided through the working channel of a traditional video colonoscope, allowing visualization of proximal lesions in the colon. A custom-developed zoom system magnifies the image of the proximal end of the imaging bundle to fill the dimensions of two cameras operating in parallel, providing simultaneous color and fluorescence video acquisition.

  6. Four-channel surface coil array for sequential CW-EPR image acquisition

    NASA Astrophysics Data System (ADS)

    Enomoto, Ayano; Emoto, Miho; Fujii, Hirotada; Hirata, Hiroshi

    2013-09-01

    This article describes a four-channel surface coil array to increase the area of visualization for continuous-wave electron paramagnetic resonance (CW-EPR) imaging. A 776-MHz surface coil array was constructed with four independent surface coil resonators and three kinds of switches. Control circuits for switching the resonators were also built to sequentially perform EPR image acquisition for each resonator. The resonance frequencies of the resonators were shifted using PIN diode switches to decouple the inductively coupled coils. To investigate the area of visualization with the surface coil array, three-dimensional EPR imaging was performed using a glass cell phantom filled with a solution of nitroxyl radicals. The area of visualization obtained with the surface coil array was increased approximately 3.5-fold in comparison to that with a single surface coil resonator. Furthermore, to demonstrate the applicability of this surface coil array to animal imaging, three-dimensional EPR imaging was performed in a living mouse with an exogenously injected nitroxyl radical imaging agent.

  7. Learning and Individual Differences: An Ability/Information-Processing Framework for Skill Acquisition. Final Report.

    ERIC Educational Resources Information Center

    Ackerman, Phillip L.

    A program of theoretical and empirical research focusing on the ability determinants of individual differences in skill acquisition is reviewed. An integrative framework for information-processing and cognitive ability determinants of skills is reviewed, along with principles for ability-skill relations. Experimental manipulations were used to…

  8. Individual Variation in Infant Speech Processing: Implications for Language Acquisition Theories

    ERIC Educational Resources Information Center

    Cristia, Alejandrina

    2009-01-01

    To what extent does language acquisition recruit domain-general processing mechanisms? In this dissertation, evidence concerning this question is garnered from the study of individual differences in infant speech perception and their predictive value with respect to language development in early childhood. In the first experiment, variation in the…

  9. Fingerprint image enhancement by differential hysteresis processing.

    PubMed

    Blotta, Eduardo; Moler, Emilce

    2004-05-10

    A new method to enhance defective fingerprint images through digital image processing tools is presented in this work. When fingerprints have been taken without any care (blurred and in some cases mostly illegible, as in the case presented here), their classification and comparison become nearly impossible. A combination of spatial-domain filters, including a technique called differential hysteresis processing (DHP), is applied to improve these kinds of images. This set of filtering methods proved satisfactory in a wide range of cases by uncovering hidden details that helped to identify persons. Dactyloscopy experts from Policia Federal Argentina and the EAAF have validated these results.

  10. Image processing for HTS SQUID probe microscope

    NASA Astrophysics Data System (ADS)

    Hayashi, T.; Koetitz, R.; Itozaki, H.; Ishikawa, T.; Kawabe, U.

    2005-10-01

    An HTS SQUID probe microscope has been developed using a high-permeability needle to enable high spatial resolution measurement of samples in air even at room temperature. Image processing techniques have also been developed to improve the magnetic field images obtained from the microscope. Artifacts in the data occur due to electromagnetic interference from electric power lines, line drift and flux trapping. The electromagnetic interference could successfully be removed by eliminating the noise peaks from the power spectrum of fast Fourier transforms of line scans of the image. The drift between lines was removed by interpolating the mean field value of each scan line. Artifacts in line scans occurring due to flux trapping or unexpected noise were removed by the detection of a sharp drift and interpolation using the line data of neighboring lines. Highly detailed magnetic field images were obtained from the HTS SQUID probe microscope by the application of these image processing techniques.
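    The per-line drift correction described above can be sketched by equalizing the mean field value of each scan line; a minimal sketch, assuming a simple mean-shift per line is sufficient:

```python
import numpy as np

def remove_line_drift(image):
    """Equalize the mean of each scan line to suppress line-to-line drift.

    Each row's mean is shifted to the global mean, in the spirit of the
    per-line drift correction described in the abstract (details assumed).
    """
    row_means = image.mean(axis=1, keepdims=True)
    return image - row_means + image.mean()

# Synthetic scan: a constant field plus a per-line drift offset
field = np.full((4, 5), 2.0)                   # true magnetic field image
drift = np.array([[0.0], [1.0], [2.0], [3.0]]) # drift added to each scan line
corrected = remove_line_drift(field + drift)
print(float(np.ptp(corrected)))                # drift removed -> all values equal
```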

  11. Image-processing with augmented reality (AR)

    NASA Astrophysics Data System (ADS)

    Babaei, Hossein R.; Mohurutshe, Pagiel L.; Habibi Lashkari, Arash

    2013-03-01

    The aim of this project is to discuss and articulate the intent to create an image-based Android application. The basis of this study is real-time image detection and processing, a convenient approach that allows users to gain information on imagery right on the spot. Past studies have revealed attempts to create image-based applications, but these have only gone as far as creating image finders that work with images already stored within some form of database. The Android platform is rapidly spreading around the world and provides by far the most interactive and technical platform for smart-phones, which is why it was important to base the study and research on it. Augmented Reality allows the user to manipulate the data and to add enhanced features (video, GPS tags) to the image taken.

  12. Image processing via ultrasonics - Status and promise

    NASA Technical Reports Server (NTRS)

    Kornreich, P. G.; Kowel, S. T.; Mahapatra, A.; Nouhi, A.

    1979-01-01

    Acousto-electric devices for electronic imaging of light are discussed. These devices are more versatile than line scan imaging devices in current use. They have the capability of presenting the image information in a variety of modes. The image can be read out in the conventional line scan mode. It can be read out in the form of the Fourier, Hadamard, or other transform. One can take the transform along one direction of the image and line scan in the other direction, or perform other combinations of image processing functions. This is accomplished by applying the appropriate electrical input signals to the device. Since the electrical output signal of these devices can be detected in a synchronous mode, substantial noise reduction is possible.

  13. A new programming metaphor for image processing procedures

    NASA Technical Reports Server (NTRS)

    Smirnov, O. M.; Piskunov, N. E.

    1992-01-01

    Most image processing systems, besides an Application Program Interface (API) which lets users write their own image processing programs, also feature a higher level of programmability. Traditionally, this is a command or macro language, which can be used to build large procedures (scripts) out of simple programs or commands. This approach, a legacy of the teletypewriter, has serious drawbacks. A command language is clumsy when (and if!) it attempts to utilize the capabilities of a multitasking or multiprocessor environment, it is barely adequate for real-time data acquisition and processing, it has a fairly steep learning curve, and the user interface is very inefficient, especially when compared to a graphical user interface (GUI) that systems running under X11 or Windows should otherwise be able to provide. All these difficulties stem from one basic problem: a command language is not a natural metaphor for an image processing procedure. A more natural metaphor - an image processing factory - is described in detail. A factory is a set of programs (applications) that execute separate operations on images, connected by pipes that carry data (images and parameters) between them. The programs function concurrently, processing images as they arrive along pipes, and querying the user for whatever other input they need. From the user's point of view, programming (constructing) factories is a lot like playing with LEGO blocks - much more intuitive than writing scripts. Focus is on some of the difficulties of implementing factory support, most notably the design of an appropriate API. It also shows that factories retain all the functionality of a command language (including loops and conditional branches), while suffering from none of the drawbacks outlined above.
Other benefits of factory programming include self-tuning factories and the process of encapsulation, which lets a factory take the shape of a standard application both from the system and the user's point of view, and
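    The factory metaphor - independent stages connected by pipes, each processing images as they arrive - can be sketched with generator pipelines; all stage names here are illustrative, not from the described system:

```python
# Sketch of the "factory" metaphor: each stage consumes images from its
# input pipe and emits results downstream. Generators stand in for the
# concurrently running programs; stage names are made up for illustration.
def source(images):
    yield from images

def threshold(stream, level):
    for img in stream:
        yield [[1 if px >= level else 0 for px in row] for row in img]

def invert(stream):
    for img in stream:
        yield [[1 - px for px in row] for row in img]

# Wire the factory: source -> threshold -> invert
pipeline = invert(threshold(source([[[3, 7], [9, 1]]]), level=5))
print(next(pipeline))
```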

  14. Overview on METEOSAT geometrical image data processing

    NASA Technical Reports Server (NTRS)

    Diekmann, Frank J.

    1994-01-01

    Digital images acquired from the geostationary METEOSAT satellites are processed and disseminated at ESA's European Space Operations Centre in Darmstadt, Germany. Their scientific value is mainly dependent on their radiometric quality and geometric stability. This paper will give an overview of the image processing activities performed at ESOC, concentrating on geometrical restoration and quality evaluation. The performance of the rectification process for the various satellites over the past years will be presented, and the impacts of external events such as the Pinatubo eruption in 1991 will be explained. Special developments in both hardware and software, necessary to cope with demanding tasks such as new image resampling or correcting for spacecraft anomalies, are presented as well. The rotating lens of MET-5, causing severe geometrical image distortions, is an example of the latter.

  15. Health Hazard Assessment and Toxicity Clearances in the Army Acquisition Process

    NASA Technical Reports Server (NTRS)

    Macko, Joseph A., Jr.

    2000-01-01

    The United States Army Materiel Command, Army Acquisition Pollution Prevention Support Office (AAPPSO) is responsible for creating and managing the U.S. Army Wide Acquisition Pollution Prevention Program. They have established Integrated Process Teams (IPTs) within each of the Major Subordinate Commands of the Army Materiel Command. AAPPSO provides centralized integration, coordination, and oversight of the Army Acquisition Pollution Prevention Program (AAPPP) , and the IPTs provide the decentralized execution of the AAPPSO program. AAPPSO issues policy and guidance, provides resources and prioritizes P2 efforts. It is the policy of the (AAPPP) to require United States Army Surgeon General approval of all materials or substances that will be used as an alternative to existing hazardous materials, toxic materials and substances, and ozone-depleting substances. The Army has a formal process established to address this effort. Army Regulation 40-10 requires a Health Hazard Assessment (HHA) during the Acquisition milestones of a new Army system. Army Regulation 40-5 addresses the Toxicity Clearance (TC) process to evaluate new chemicals and materials prior to acceptance as an alternative. U.S. Army Center for Health Promotion and Preventive Medicine is the Army's matrixed medical health organization that performs the HHA and TC mission.

  16. An intelligent pre-processing framework for standardizing medical images for CAD and other post-processing applications

    NASA Astrophysics Data System (ADS)

    Raghupathi, Lakshminarasimhan; Devarakota, Pandu R.; Wolf, Matthias

    2012-03-01

    There is an increasing need to provide end-users with seamless and secure access to healthcare information acquired from a diverse range of sources. This might include local and remote hospital sites equipped with different vendors and practicing varied acquisition protocols, as well as heterogeneous external sources such as the Internet cloud. In such scenarios, image post-processing tools such as CAD (computer-aided diagnosis), which were hitherto developed using a smaller set of images, may not always work optimally on newer sets of images having entirely different characteristics. In this paper, we propose a framework that assesses the quality of a given input image and automatically applies an appropriate pre-processing method in such a manner that the image characteristics are normalized regardless of source. We focus mainly on medical images, and the objective of the pre-processing method is to standardize the performance of various image processing and workflow applications like CAD so that they perform in a consistent manner. First, our system consists of an assessment step wherein an image is evaluated based on criteria such as noise and image sharpness. Depending on the measured characteristic, we then apply an appropriate normalization technique, giving way to our overall pre-processing framework. A systematic evaluation of the proposed scheme is carried out on a large set of CT images acquired from various vendors, including images reconstructed with next-generation iterative methods. Results demonstrate that the images are normalized and thus suitable for an existing LungCAD prototype.
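    The assess-then-normalize idea can be sketched as below; the noise criterion and the 3x3 mean filter are illustrative stand-ins for the paper's unspecified assessment criteria and normalization techniques:

```python
import numpy as np

def assess_and_normalize(image, noise_threshold=0.05):
    """Assess the noise level of an image, then smooth only if needed.

    Noise is estimated from local pixel differences relative to the image's
    dynamic range; the threshold and the 3x3 mean filter are illustrative
    assumptions, not the framework's actual methods.
    """
    noise = np.std(np.diff(image, axis=1)) / (np.ptp(image) + 1e-12)
    if noise <= noise_threshold:
        return image                       # already clean: pass through
    # simple 3x3 mean filter via edge padding
    p = np.pad(image, 1, mode="edge")
    rows, cols = image.shape
    return sum(p[i:i + rows, j:j + cols]
               for i in range(3) for j in range(3)) / 9.0

smooth = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))   # clean gradient image
rng = np.random.default_rng(0)
noisy = smooth + rng.normal(0.0, 0.3, smooth.shape)  # same image, noisy
cleaned = assess_and_normalize(noisy)
print(assess_and_normalize(smooth) is smooth, cleaned is noisy)
```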

  17. Optimizing Uas Image Acquisition and Geo-Registration for Precision Agriculture

    NASA Astrophysics Data System (ADS)

    Hearst, A. A.; Cherkauer, K. A.; Rainey, K. M.

    2014-12-01

    Unmanned Aircraft Systems (UASs) can acquire imagery of crop fields in various spectral bands, including the visible, near-infrared, and thermal portions of the spectrum. By combining techniques of computer vision, photogrammetry, and remote sensing, these images can be stitched into precise, geo-registered maps, which may have applications in precision agriculture and other industries. However, the utility of these maps will depend on their positional accuracy. Therefore, it is important to quantify positional accuracy and consider the tradeoffs between accuracy, field site setup, and the computational requirements for data processing and analysis. This will enable planning of data acquisition and processing to obtain the required accuracy for a given project. This study focuses on developing and evaluating methods for geo-registration of raw aerial frame photos acquired by a small fixed-wing UAS. This includes visual, multispectral, and thermal imagery at 3, 6, and 14 cm/pix resolutions, respectively. The study area is 10 hectares of soybean fields at the Agronomy Center for Research and Education (ACRE) at Purdue University. The dataset consists of imagery from 6 separate days of flights (surveys) and supporting ground measurements. The Direct Sensor Orientation (DiSO) and Integrated Sensor Orientation (InSO) methods for geo-registration are tested using 16 Ground Control Points (GCPs). Subsets of these GCPs are used to test for the effects of different numbers and spatial configurations of GCPs on positional accuracy. The horizontal and vertical Root Mean Squared Error (RMSE) is used as the primary metric of positional accuracy. Preliminary results from 1 of the 6 surveys show that the DiSO method (0 GCPs used) achieved an RMSE in the X, Y, and Z direction of 2.46 m, 1.04 m, and 1.91 m, respectively. InSO using 5 GCPs achieved an RMSE of 0.17 m, 0.13 m, and 0.44 m. InSO using 10 GCPs achieved an RMSE of 0.10 m, 0.09 m, and 0.12 m. 
Further analysis will identify
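    The positional-accuracy metric used in the study, per-axis RMSE against ground control, can be computed as follows (the checkpoint coordinates are made up for illustration):

```python
import numpy as np

def rmse_per_axis(predicted, truth):
    """Root Mean Squared Error per axis of geo-registered points vs. ground truth."""
    err = np.asarray(predicted) - np.asarray(truth)
    return np.sqrt((err ** 2).mean(axis=0))

# Checkpoint coordinates in meters (X, Y, Z); illustrative, not from the study
truth = np.array([[0.0, 0.0, 0.0],
                  [10.0, 5.0, 2.0]])
pred = truth + np.array([[0.3, 0.0, 0.4],
                         [-0.3, 0.0, -0.4]])
print(rmse_per_axis(pred, truth))   # per-axis RMSE in X, Y, Z
```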

  18. IECON '87: Signal acquisition and processing; Proceedings of the 1987 International Conference on Industrial Electronics, Control, and Instrumentation, Cambridge, MA, Nov. 3, 4, 1987

    NASA Astrophysics Data System (ADS)

    Niederjohn, Russell J.

    1987-01-01

    Theoretical and applications aspects of signal processing are examined in reviews and reports. Topics discussed include speech processing methods, algorithms, and architectures; signal-processing applications in motor and power control; digital signal processing; signal acquisition and analysis; and processing algorithms and applications. Consideration is given to digital coding of speech algorithms, an algorithm for continuous-time processes in discrete-time measurement, quantization noise and filtering schemes for digital control systems, distributed data acquisition for biomechanics research, a microcomputer-based differential distance and velocity measurement system, velocity observations from discrete position encoders, a real-time hardware image preprocessor, and recognition of partially occluded objects by a knowledge-based system.

  19. Reference radiochromic film dosimetry in kilovoltage photon beams during CBCT image acquisition

    SciTech Connect

    Tomic, Nada; Devic, Slobodan; DeBlois, Francois; Seuntjens, Jan

    2010-03-15

    Purpose: A common approach for dose assessment during cone beam computed tomography (CBCT) acquisition is to use thermoluminescent detectors for skin dose measurements (on patients or phantoms) or an ionization chamber (in phantoms) for body dose measurements. However, the benefits of daily CBCT image acquisition, such as margin reduction in the planning target volume and improved image quality, must be weighed against the extra dose received during CBCT acquisitions. Methods: The authors describe a two-dimensional reference dosimetry technique for measuring dose from CBCT scans using the on-board imaging system on a Varian Clinac-iX linear accelerator that employs the XR-QA radiochromic film model, specifically designed for dose measurements at low photon energies. The CBCT dose measurements were performed for three different body regions (head and neck, pelvis, and thorax) using a humanoid Rando phantom. Results: The authors report on both surface dose and dose profile measurements during clinical CBCT procedures carried out on a humanoid Rando phantom. Our measurements show that surface doses per CBCT scan can range anywhere between 0.1 and 4.7 cGy, with the lowest surface dose observed in the head and neck region, while the highest surface dose was observed for the Pelvis spot light CBCT protocol in the pelvic region, on the posterior side of the Rando phantom. The authors also present results of the uncertainty analysis of our XR-QA radiochromic film dosimetry system. Conclusions: The radiochromic film dosimetry protocol described in this work was used to perform dose measurements during CBCT acquisitions with a one-sigma dose measurement uncertainty of up to 3% for doses above 1 cGy. Our protocol is based on film exposure calibration in terms of "air kerma in air," which simplifies both the calibration procedure and reference dosimetry measurements.
The results from a full Monte Carlo investigation of the dose conversion of measured XR-QA film dose at the surface into

  20. Cardiovascular Magnetic Resonance in Cardiology Practice: A Concise Guide to Image Acquisition and Clinical Interpretation.

    PubMed

    Valbuena-López, Silvia; Hinojar, Rocío; Puntmann, Valentina O

    2016-02-01

    Cardiovascular magnetic resonance plays an increasingly important role in routine cardiology clinical practice. It is a versatile imaging modality that allows highly accurate, broad and in-depth assessment of cardiac function and structure and provides information on pertinent clinical questions in diseases such as ischemic heart disease, nonischemic cardiomyopathies, and heart failure, as well as allowing unique indications, such as the assessment and quantification of myocardial iron overload or infiltration. Increasing evidence for the role of cardiovascular magnetic resonance, together with the spread of knowledge and skill outside expert centers, has afforded greater access for patients and wider clinical experience. This review provides a snapshot of cardiovascular magnetic resonance in modern clinical practice by linking image acquisition and postprocessing with effective delivery of the clinical meaning.

  1. Modality-specific processing precedes amodal linguistic processing during L2 sign language acquisition: A longitudinal study.

    PubMed

    Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D

    2016-02-01

    The present study tracked activation pattern differences in response to sign language processing by late hearing second language learners of American Sign Language. Learners were scanned before the start of their language courses. They were scanned again after their first semester of instruction and their second, for a total of 10 months of instruction. The study aimed to characterize modality-specific to modality-general processing throughout the acquisition of sign language. Results indicated that before the acquisition of sign language, neural substrates related to modality-specific processing were present. After approximately 45 h of instruction, the learners transitioned into processing signs on a phonological basis (e.g., supramarginal gyrus, putamen). After one more semester of input, learners transitioned once more to a lexico-semantic processing stage (e.g., left inferior frontal gyrus) at which language control mechanisms (e.g., left caudate, cingulate gyrus) were activated. During these transitional steps right hemispheric recruitment was observed, with increasing left-lateralization, which is similar to other native signers and L2 learners of spoken language; however, specialization for sign language processing with activation in the inferior parietal lobule (i.e., angular gyrus), even for late learners, was observed. As such, the present study is the first to track L2 acquisition of sign language learners in order to characterize modality-independent and modality-specific mechanisms for bilingual language processing.

  2. Engineering the Business of Defense Acquisition: An Analysis of Program Office Processes

    DTIC Science & Technology

    2015-05-01

    Engineering the Business of Defense Acquisition: An Analysis of Program Office Processes. Charles K. Pickar, Naval Postgraduate School; Raymond D. ...

  3. Real-time digital signal processing for live electro-optic imaging.

    PubMed

    Sasagawa, Kiyotaka; Kanno, Atsushi; Tsuchiya, Masahiro

    2009-08-31

    We present an imaging system that enables real-time magnitude and phase detection of modulated signals and its application to a Live Electro-optic Imaging (LEI) system, which realizes instantaneous visualization of RF electric fields. The real-time acquisition of magnitude and phase images of a modulated optical signal at 5 kHz is demonstrated by imaging with a Si-based high-speed CMOS image sensor and real-time signal processing with a digital signal processor. In the LEI system, RF electric fields are probed with light via an electro-optic crystal plate and downconverted to an intermediate frequency by parallel optical heterodyning, which can be detected with the image sensor. The artifacts caused by the optics and the image sensor characteristics are corrected by image processing. As examples, we demonstrate real-time visualization of electric fields from RF circuits.
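    The magnitude and phase detection of a downconverted signal can be sketched with standard IQ (quadrature) demodulation; the sampling and intermediate frequencies below are illustrative, not the system's actual parameters:

```python
import numpy as np

def detect_mag_phase(samples, f_if, fs):
    """Recover magnitude and phase lag of a tone at intermediate frequency f_if.

    Mixes the sampled signal with quadrature references and averages (IQ
    detection). Assumes an integer number of IF cycles in the record.
    """
    t = np.arange(len(samples)) / fs
    i = 2.0 * np.mean(samples * np.cos(2 * np.pi * f_if * t))
    q = 2.0 * np.mean(samples * np.sin(2 * np.pi * f_if * t))
    return np.hypot(i, q), np.arctan2(q, i)

fs, f_if = 100_000.0, 5_000.0                    # assumed rates, for illustration
t = np.arange(1000) / fs                         # exactly 50 IF cycles
sig = 1.5 * np.cos(2 * np.pi * f_if * t - 0.7)   # amplitude 1.5, phase lag 0.7 rad
mag, phase = detect_mag_phase(sig, f_if, fs)
print(round(mag, 6), round(phase, 6))
```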

  4. Software-Based Real-Time Acquisition and Processing of PET Detector Raw Data.

    PubMed

    Goldschmidt, Benjamin; Schug, David; Lerche, Christoph W; Salomon, André; Gebhardt, Pierre; Weissler, Bjoern; Wehner, Jakob; Dueppenbecker, Peter M; Kiessling, Fabian; Schulz, Volkmar

    2016-02-01

    In modern positron emission tomography (PET) readout architectures, the position and energy estimation of scintillation events (singles) and the detection of coincident events (coincidences) are typically carried out on highly integrated, programmable printed circuit boards. The implementation of advanced singles and coincidence processing (SCP) algorithms for these architectures is often limited by the strict constraints of hardware-based data processing. In this paper, we present a software-based data acquisition and processing architecture (DAPA) that offers a high degree of flexibility for advanced SCP algorithms through relaxed real-time constraints and an easily extendible data processing framework. The DAPA is designed to acquire detector raw data from independent (but synchronized) detector modules and process the data for singles and coincidences in real-time using a center-of-gravity (COG)-based, a least-squares (LS)-based, or a maximum-likelihood (ML)-based crystal position and energy estimation approach (CPEEA). To test the DAPA, we adapted it to a preclinical PET detector that outputs detector raw data from 60 independent digital silicon photomultiplier (dSiPM)-based detector stacks and evaluated it with a [(18)F]-fluorodeoxyglucose-filled hot-rod phantom. The DAPA is highly reliable with less than 0.1% of all detector raw data lost or corrupted. For high validation thresholds (37.1 ± 12.8 photons per pixel) of the dSiPM detector tiles, the DAPA is real-time capable up to 55 MBq for the COG-based CPEEA, up to 31 MBq for the LS-based CPEEA, and up to 28 MBq for the ML-based CPEEA. Compared to the COG-based CPEEA, the rods in the image reconstruction of the hot-rod phantom are only slightly better separable and less blurred for the LS- and ML-based CPEEA. While the coincidence time resolution (∼500 ps) and energy resolution (∼12.3%) are comparable for all three CPEEA, the system sensitivity is up to 2.5× higher for the LS- and ML-based CPEEA.
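    Of the three position estimators compared, the center-of-gravity (COG) approach is the simplest; the following is a textbook sketch, not the DAPA's actual implementation:

```python
import numpy as np

def cog_position(pixel_counts):
    """Center-of-gravity estimate of a scintillation event position.

    pixel_counts: 2D array of photon counts from a photosensor tile.
    Returns (row, col) in pixel coordinates; a standard COG estimator,
    shown here for illustration only.
    """
    counts = np.asarray(pixel_counts, dtype=float)
    total = counts.sum()
    rows, cols = np.indices(counts.shape)
    return (rows * counts).sum() / total, (cols * counts).sum() / total

# Light spot centered between pixels (1, 1) and (1, 2)
tile = np.array([[0, 1, 1, 0],
                 [0, 4, 4, 0],
                 [0, 1, 1, 0]])
print(cog_position(tile))
```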

  5. Design of multi-mode compatible image acquisition system for HD area array CCD

    NASA Astrophysics Data System (ADS)

    Wang, Chen; Sui, Xiubao

    2014-11-01

    In line with the current trend in video surveillance toward digitization and high definition, a multimode-compatible image acquisition system for an HD area array CCD is designed. The hardware and software designs of the color video capture system for the HD area array CCD KAI-02150 from the Truesense Imaging company are analyzed, and the structure parameters of the HD area array CCD and the color video gathering principle of the acquisition system are introduced. Then, the CCD control sequence and the timing logic of the whole capture system are realized. The noise in the video signal (kTC noise and 1/f noise) is filtered using the Correlated Double Sampling (CDS) technique to enhance the signal-to-noise ratio of the system. Compatible designs in both software and hardware for two other image sensors of the same series, KAI-04050 and KAI-08050, are put forward; these two HD image sensors have four million and eight million effective pixels, respectively. A Field Programmable Gate Array (FPGA) is adopted as the key controller of the system to perform modularization design from top to bottom, which realizes the hardware design in software and improves development efficiency. At last, the required timing sequence driving is simulated accurately using the Quartus II 12.1 development platform combined with VHDL. The result of the simulation indicates that the driving circuit is characterized by a simple framework, low power consumption, and strong anti-interference ability, which meet the demands of miniaturization and high definition.
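    The kTC-noise cancellation achieved by Correlated Double Sampling can be sketched numerically; the signal levels below are made up for illustration:

```python
import numpy as np

def correlated_double_sample(reset_level, signal_level):
    """CDS: subtract the reset sample from the signal sample per pixel.

    The kTC (reset) noise is common to both samples and cancels in the
    difference; a textbook sketch, not the FPGA implementation.
    """
    return signal_level - reset_level

rng = np.random.default_rng(1)
ktc_noise = rng.normal(0.0, 5.0, 1000)   # same random offset in both samples
reset = 100.0 + ktc_noise                # reset level per pixel
signal = reset + 42.0                    # true video level is 42 units
video = correlated_double_sample(reset, signal)
print(float(video.mean()), float(video.std()))   # noise cancels in the difference
```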

  6. Real-time optical image processing techniques

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang

    1988-01-01

    Nonlinear real-time optical processing based on spatial pulse frequency modulation has been pursued through the analysis, design, and fabrication of pulse frequency modulated halftone screens and the modification of micro-channel spatial light modulators (MSLMs). Micro-channel spatial light modulators are modified via the Fabry-Perot method to achieve the high gamma operation required for nonlinear operation. Real-time nonlinear processing was performed using the halftone screen and MSLM. The experiments showed the effectiveness of the thresholding and also showed the need for higher SBP for image processing. The Hughes LCLV has been characterized and found to yield high gamma (about 1.7) when operated in low-frequency and low-bias mode. Cascading of two LCLVs should also provide enough gamma for nonlinear processing. In this case, the SBP of the LCLV is sufficient but the uniformity of the LCLV needs improvement. Applications investigated include image correlation, computer generation of holograms, pseudo-color image encoding for image enhancement, and associative retrieval in neural processing. The discovery of the only known optical method for dynamic range compression of an input image in real-time by using GaAs photorefractive crystals is reported. Finally, a new architecture for nonlinear multiple-sensory neural processing has been suggested.

  7. SENTINEL-2 Level 1 Products and Image Processing Performances

    NASA Astrophysics Data System (ADS)

    Baillarin, S. J.; Meygret, A.; Dechoz, C.; Petrucci, B.; Lacherade, S.; Tremas, T.; Isola, C.; Martimort, P.; Spoto, F.

    2012-07-01

    Stringent image quality requirements are also described, in particular the geo-location accuracy for both the absolute (better than 12.5 m) and multi-temporal (better than 0.3 pixels) cases. The prototyped image processing techniques (both radiometric and geometric) are then addressed. The radiometric corrections are introduced first; they consist mainly of dark signal and detector relative sensitivity correction, crosstalk correction, and MTF restoration. Special focus is then placed on the geometric corrections, in particular the innovative method of automatic enhancement of the geometric physical model. This method takes advantage of a perfectly geo-referenced Global Reference Image database to correct the physical geometric model of each image taken. The processing is based on an automatic image matching process which provides accurate ground control points between a given band of the image to be refined and a reference image, allowing the viewing model to be dynamically calibrated. The generation of the Global Reference Image database, made of Sentinel-2 pre-calibrated mono-spectral images, is also addressed. In order to perform independent validation of the prototyping activity, an image simulator dedicated to Sentinel-2 has been set up. With it, a set of images has been simulated from various source images, combining different acquisition conditions and landscapes (mountains, deserts, cities ...). Known disturbances have also been simulated so as to estimate the end-to-end performance of the processing chain. Finally, the radiometric and geometric performances obtained by the prototype are presented, in particular the geo-location performance of the Level-1C products, which comfortably fulfils the image quality requirements.

  8. Bistatic SAR: Signal Processing and Image Formation.

    SciTech Connect

    Wahl, Daniel E.; Yocky, David A.

    2014-10-01

    This report describes the significant processing steps that were used to take the raw recorded digitized signals from the bistatic synthetic aperture radar (SAR) hardware built for the NCNS Bistatic SAR project to a final bistatic SAR image. In general, the process steps herein are applicable to bistatic SAR signals that include the direct-path signal and the reflected signal. The steps include preprocessing, data extraction to form a phase history, and finally, image formation. Various plots and values are shown at most steps to illustrate the processing for a bistatic COSMO-SkyMed collection gathered on June 10, 2013 at Kirtland Air Force Base, New Mexico.

  9. Palm print image processing with PCNN

    NASA Astrophysics Data System (ADS)

    Yang, Jun; Zhao, Xianhong

    2010-08-01

    Pulse coupled neural networks (PCNN) are based on Eckhorn's model of the cat visual cortex and imitate mammalian visual processing, while the palm print has long served as a personal biometric feature. This inspired us to combine the two: a novel method for palm print processing is proposed, which includes pre-processing and feature extraction of the palm print image using PCNN; the extracted features are then used for identification. Our experiment shows that a verification rate of 87.5% can be achieved under ideal conditions. We also find that the verification rate decreases due to rotation or shift of the palm.
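
    A minimal Eckhorn-style PCNN iteration can be sketched as follows: each pixel is a neuron with feeding and linking channels, and the sequence of binary firing maps serves as a texture/feature signature of the image. The parameter values and kernel here are illustrative defaults, not the ones used in the paper.

```python
import numpy as np

def pcnn(stimulus, steps=10, beta=0.2, alpha_f=0.1, alpha_l=1.0,
         alpha_e=1.0, v_f=0.5, v_l=0.2, v_e=20.0):
    """Minimal pulse-coupled neural network: returns one binary firing
    map per iteration step (parameters are illustrative)."""
    s = stimulus.astype(float)
    f = np.zeros_like(s)          # feeding channel
    l = np.zeros_like(s)          # linking channel
    e = np.ones_like(s)           # dynamic threshold
    y = np.zeros_like(s)          # firing map
    kernel = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
    maps = []
    for _ in range(steps):
        pad = np.pad(y, 1)        # linking input from 8-neighbour firing
        link = sum(kernel[i, j] * pad[i:i + s.shape[0], j:j + s.shape[1]]
                   for i in range(3) for j in range(3))
        f = np.exp(-alpha_f) * f + v_f * link + s
        l = np.exp(-alpha_l) * l + v_l * link
        u = f * (1.0 + beta * l)              # internal activity
        y = (u > e).astype(float)             # fire where activity exceeds threshold
        e = np.exp(-alpha_e) * e + v_e * y    # raise threshold where fired
        maps.append(y.copy())
    return maps

img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0               # bright square stimulus
firing_maps = pcnn(img, steps=5)
```

    With these defaults the bright square fires on the second step while the background stays silent; feature vectors for matching are typically built from per-step firing counts.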

  10. Image Processing Application for Cognition (IPAC) - Traditional and Emerging Topics in Image Processing in Astronomy (Invited)

    NASA Astrophysics Data System (ADS)

    Pesenson, M.; Roby, W.; Helou, G.; McCollum, B.; Ly, L.; Wu, X.; Laine, S.; Hartley, B.

    2008-08-01

    A new application framework for advanced image processing for astronomy is presented. It implements standard two-dimensional operators, and recent developments in the field of non-astronomical image processing (IP), as well as original algorithms based on nonlinear partial differential equations (PDE). These algorithms are especially well suited for multi-scale astronomical images since they increase signal to noise ratio without smearing localized and diffuse objects. The visualization component is based on the extensive tools that we developed for Spitzer Space Telescope's observation planning tool Spot and archive retrieval tool Leopard. It contains many common features, combines images in new and unique ways and interfaces with many astronomy data archives. Both interactive and batch mode processing are incorporated. In the interactive mode, the user can set up simple processing pipelines, and monitor and visualize the resulting images from each step of the processing stream. The system is platform-independent and has an open architecture that allows extensibility by addition of plug-ins. This presentation addresses astronomical applications of traditional topics of IP (image enhancement, image segmentation) as well as emerging new topics like automated image quality assessment (QA) and feature extraction, which have potential for shaping future developments in the field. Our application framework embodies a novel synergistic approach based on integration of image processing, image visualization and image QA (iQA).

  11. 3D seismic image processing for interpretation

    NASA Astrophysics Data System (ADS)

    Wu, Xinming

    Extracting fault, unconformity, and horizon surfaces from a seismic image is useful for interpretation of geologic structures and stratigraphic features. Although interpretation of these surfaces has been automated to some extent by others, significant manual effort is still required for extracting each type of these geologic surfaces. I propose methods to automatically extract all the fault, unconformity, and horizon surfaces from a 3D seismic image. To a large degree, these methods involve only image processing or array processing, achieved by efficiently solving partial differential equations. For fault interpretation, I propose a linked data structure, which is simpler than triangle or quad meshes, to represent a fault surface. In this simple data structure, each sample of a fault corresponds to exactly one image sample. Using this linked data structure, I extract complete and intersecting fault surfaces without holes from 3D seismic images. I use the same structure in subsequent processing to estimate fault slip vectors. I further propose two methods, using precomputed fault surfaces and slips, to undo faulting in seismic images by simultaneously moving fault blocks and faults themselves. For unconformity interpretation, I first propose a new method to compute an unconformity likelihood image that highlights both the termination areas and the corresponding parallel unconformities and correlative conformities. I then extract unconformity surfaces from the likelihood image and use these surfaces as constraints to more accurately estimate seismic normal vectors that are discontinuous near the unconformities. Finally, I use the estimated normal vectors and the unconformities as constraints to compute a flattened image, in which seismic reflectors are all flat and vertical gaps correspond to the unconformities. Horizon extraction is straightforward after computing a map of image flattening; we can first extract horizontal slices in the flattened space

  12. Image acquisition, geometric correction and display of images from a 2×2 x-ray detector array based on Electron Multiplying Charge Coupled Device (EMCCD) technology.

    PubMed

    Vasan, S N Swetadri; Sharma, P; Ionita, Ciprian N; Titus, A H; Cartwright, A N; Bednarek, D R; Rudin, S

    2013-03-06

    A high resolution (up to 11.2 lp/mm) x-ray detector with a large field of view (8.5 cm × 8.5 cm) has been developed. The detector is a 2 × 2 array of individual imaging modules based on EMCCD technology. Each module outputs a frame of 1088 × 1037 pixels, each 12 bits. The frames from the 4 modules are acquired into the processing computer using one of two techniques. The first uses 2 CameraLink communication channels, each carrying information from two modules; the second uses an application-specific custom integrated circuit, the Multiple Module Multiplexer Integrated Circuit (MMMIC), three of which are used to multiplex the data from the 4 modules into one CameraLink channel. Once the data are acquired using either technique, they are decoded in the graphics processing unit (GPU) to form one single frame of 2176 × 2074 pixels, each 16 bits. Each imaging module uses a fiber optic taper coupled to the EMCCD sensor. To correct for mechanical misalignment between the sensors and the fiber optic tapers and produce a single seamless image, the images in each module may be rotated and translated slightly in the x-y plane with respect to each other. To evaluate the detector acquisition and correction techniques, an aneurysm model was placed over an anthropomorphic head phantom and a coil was guided into the aneurysm under fluoroscopic guidance using the detector array. Image sequences before and after correction are presented which show near-seamless boundary matching and are well suited for fluoroscopic imaging.
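
    The assembly of the four module frames into one output frame can be sketched as below: each module is placed at a per-module offset that absorbs small translational misalignment (the sub-pixel rotation described in the abstract is omitted here). The tile sizes and offsets are illustrative, not the detector's calibration values.

```python
import numpy as np

def assemble_2x2(modules, corners):
    """Place four module frames into one output frame. `corners` maps
    module index -> (row, col) of its top-left corner, allowing small
    per-module translations for misalignment correction (rotation omitted)."""
    h, w = modules[0].shape
    out = np.zeros((2 * h + 8, 2 * w + 8), dtype=np.uint16)  # small margin
    for m, (r0, c0) in zip(modules, corners):
        out[r0:r0 + h, c0:c0 + w] = m
    return out

# Toy 4x5 tiles standing in for the 1088x1037 module frames.
tiles = [np.full((4, 5), i + 1, dtype=np.uint16) for i in range(4)]
corners = [(0, 0), (0, 5), (4, 0), (4, 5)]   # hypothetical offsets
mosaic = assemble_2x2(tiles, corners)
```

    In practice the offsets (and any rotations) would come from a one-time geometric calibration of each module against the taper.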

  13. A high speed data acquisition and processing system for real time data analysis and control

    NASA Astrophysics Data System (ADS)

    Ferron, J. R.

    1992-11-01

    A high speed data acquisition system which is closely coupled with a high speed digital processor is described. Data acquisition at a rate of 40 million 14 bit data values per second is possible simultaneously with data processing at a rate of 80 million floating point operations per second. This is achieved by coupling a commercially available VME format single board computer based on the Intel i860 microprocessor with a custom designed first-in, first-out memory circuit that transfers data at high speed to the processor board memory. Parallel processing to achieve increased computation speed is easily implemented because the data can be transferred simultaneously to multiple processor boards. Possible applications include high speed process control and real time data reduction. A specific example is described in which this hardware is used to implement a feedback control system for 18 parameters which uses 100 input signals and achieves a 100 μs cycle time.
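
    The data path described above, digitizer samples streamed through a FIFO into processor memory where a control law is evaluated each cycle, can be sketched in software. The FIFO depth, gain matrix, and signal counts below are illustrative stand-ins, not the actual hardware parameters.

```python
import numpy as np
from collections import deque

class AcquisitionFIFO:
    """Toy first-in-first-out buffer standing in for the custom FIFO
    memory that streams digitizer samples to the processor board."""
    def __init__(self, depth):
        self.buf = deque(maxlen=depth)
    def push(self, frame):
        self.buf.append(frame)
    def pop(self):
        return self.buf.popleft()
    def __len__(self):
        return len(self.buf)

# Hypothetical control law: 18 feedback parameters computed from 100
# input signals as one matrix-vector product per acquisition cycle.
rng = np.random.default_rng(1)
gain = rng.normal(size=(18, 100))
fifo = AcquisitionFIFO(depth=64)
fifo.push(rng.normal(size=100))   # one cycle's worth of input signals
outputs = gain @ fifo.pop()       # feedback parameters for this cycle
```

    Parallelism as described in the abstract would correspond to pushing the same frame to several such FIFOs, one per processor board.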

  14. Employing image processing techniques for cancer detection using microarray images.

    PubMed

    Dehghan Khalilabad, Nastaran; Hassanpour, Hamid

    2017-02-01

    Microarray technology is a powerful genomic tool for simultaneously studying and analyzing the behavior of thousands of genes. The analysis of images obtained from this technology plays a critical role in the detection and treatment of diseases. The aim of the current study is to develop an automated system for analyzing data from microarray images in order to detect cancerous cases. The proposed system consists of three main phases, namely image processing, data mining, and detection of the disease. The image processing phase performs operations such as refining image rotation, gridding (locating genes), and extracting raw data from images; the data mining phase includes normalizing the extracted data and selecting the most effective genes. Finally, cancerous cells are recognized from the extracted data. To evaluate the performance of the proposed system, microarray databases for breast cancer, myeloid leukemia, and lymphoma from the Stanford Microarray Database are employed. The results indicate that the proposed system is able to identify the type of cancer from these data sets with accuracies of 95.45%, 94.11%, and 100%, respectively.
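
    The gridding step can be illustrated with intensity projection profiles: the image is split into equal bins along each axis and the brightest position per bin is taken as a spot coordinate. This is a simplified stand-in for the paper's gridding method, with invented grid sizes.

```python
import numpy as np

def grid_spot_centers(img, n_rows, n_cols):
    """Locate microarray spots via projection profiles: divide each axis
    into equal bins and take the peak of the summed intensity per bin."""
    centers = []
    row_prof = img.sum(axis=1)            # intensity summed over columns
    col_prof = img.sum(axis=0)            # intensity summed over rows
    r_edges = np.linspace(0, img.shape[0], n_rows + 1, dtype=int)
    c_edges = np.linspace(0, img.shape[1], n_cols + 1, dtype=int)
    for r0, r1 in zip(r_edges[:-1], r_edges[1:]):
        r = r0 + int(np.argmax(row_prof[r0:r1]))
        for c0, c1 in zip(c_edges[:-1], c_edges[1:]):
            c = c0 + int(np.argmax(col_prof[c0:c1]))
            centers.append((r, c))
    return centers

# Synthetic 2x2 array of bright spots at known positions.
array_img = np.zeros((20, 20))
for r, c in [(5, 5), (5, 15), (15, 5), (15, 15)]:
    array_img[r, c] = 10.0
spots = grid_spot_centers(array_img, n_rows=2, n_cols=2)
```

    Raw expression values would then be read out around each located center before normalization and gene selection.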

  15. Development of an automated data acquisition and processing pipeline using multiple telescopes for observing transient phenomena

    NASA Astrophysics Data System (ADS)

    Savant, Vaibhav; Smith, Niall

    2016-07-01

    We report on the current status in the development of a pilot automated data acquisition and reduction pipeline based around the operation of two nodes of remotely operated robotic telescopes in California, USA and Cork, Ireland. The observatories are primarily used as a testbed for automation and instrumentation and as a tool to facilitate STEM (Science, Technology, Engineering, Mathematics) promotion. The Ireland node is situated at Blackrock Castle Observatory (operated by Cork Institute of Technology) and consists of two optical telescopes, 6" and 16" OTAs, housed in two separate domes, while the node in California is its 6" replica. Together they form a pilot Telescope ARrAy known as TARA. QuickPhot is an automated data reduction pipeline designed primarily to shed more light on the microvariability of blazars through precision optical photometry, using data from the TARA telescopes as they constantly monitor predefined targets whenever observing conditions are favourable. After carrying out aperture photometry, if any variability above a given threshold is observed, the reporting telescope will communicate the source concerned and the other nodes will follow up with multi-band observations, taking advantage of their strategically separated time zones. Ultimately we wish to investigate the applicability of shock-in-jet and geometric models, which try to explain the processes at work in AGNs that result in the formation of jets, by looking for temporal and spectral variability in TARA multi-band observations. We are also experimenting with a Two-channel Optical PHotometric Imaging CAMera (TOΦCAM) that we have developed and optimised for simultaneous two-band photometry on our 16" OTA.
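
    The aperture photometry step that drives the variability trigger can be sketched as follows: sum the counts in a circular aperture around the source and subtract the sky level estimated in a surrounding annulus. Radii and image values below are illustrative, not the pipeline's settings.

```python
import numpy as np

def aperture_photometry(img, cx, cy, r_ap, r_in, r_out):
    """Sum source counts in a circular aperture of radius r_ap and
    subtract the median sky level estimated in the annulus (r_in, r_out]."""
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    d = np.hypot(yy - cy, xx - cx)
    ap = d <= r_ap
    annulus = (d > r_in) & (d <= r_out)
    sky_level = np.median(img[annulus])
    return img[ap].sum() - sky_level * ap.sum()

# Flat sky of 5 counts/pixel plus a 100-count point source at the center.
image = np.full((31, 31), 5.0)
image[15, 15] += 100.0
flux = aperture_photometry(image, cx=15, cy=15, r_ap=3, r_in=6, r_out=10)
```

    Repeating this per frame yields a light curve; a change exceeding the chosen threshold would trigger the multi-band follow-up described above.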

  16. Development of a safeguards data acquisition system for the process monitoring of a simulated reprocessing facility

    SciTech Connect

    Wachter, J.W.

    1986-01-01

    As part of the Consolidated Fuel Reprocessing Program of the Fuel Recycle Division at the Oak Ridge National Laboratory (ORNL), an Integrated Process Demonstration (IPD) facility has been constructed for development of reprocessing plant technology. Through the use of cold materials, the IPD facility provides for the integrated operation of the major equipment items of the chemical-processing portion of a nuclear fuel reprocessing plant. The equipment, processes, and the extensive use of computers in data acquisition and control are prototypical of future reprocessing facilities and provide a unique test-bed for nuclear safeguards demonstrations. The data acquisition and control system consists of several microprocessors that communicate with one another and with a host minicomputer over a common data highway. At intervals of a few minutes, a ''snapshot'' is taken of the process variables, and the data are transmitted to a safeguards computer and minicomputer work station for analysis. This paper describes this data acquisition system and the data-handling procedures leading to microscopic process monitoring for safeguards purposes.

  17. Fundamental Concepts of Digital Image Processing

    DOE R&D Accomplishments Database

    Twogood, R. E.

    1983-03-01

    The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has unique requirements, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made without loss of concentration on the task at hand. An example is the acquisition of two-dimensional (2-D) computer-aided tomography (CAT) images, where a medical decision might be made while the patient is still under observation rather than days later.

  18. Thermal Imaging Processes of Polymer Nanocomposite Coatings

    NASA Astrophysics Data System (ADS)

    Meth, Jeffrey

    2015-03-01

    Laser induced thermal imaging (LITI) is a process whereby infrared radiation impinging on a coating on a donor film transfers that coating to a receiving film to produce a pattern. This talk describes how LITI patterning can print color filters for liquid crystal displays, and details the physical processes that are responsible for transferring the nanocomposite coating in a coherent manner that does not degrade its optical properties. Unique features of this process involve heating rates of 10^7 K/s and cooling rates of 10^4 K/s, which implies that not all of the relaxation modes of the polymer are accessed during the imaging process. On the microsecond time scale, the polymer flow is forced by devolatilization of solvents, followed by deformation akin to the constrained blister test, and then fracture caused by differential thermal expansion. The unique combination of disparate physical processes demonstrates the gamut of physics that contribute to advanced material processing in an industrial setting.

  19. Multibeam Sonar Backscatter Data Acquisition and Processing: Guidelines and Recommendations from the GEOHAB Backscatter Working Group

    NASA Astrophysics Data System (ADS)

    Heffron, E.; Lurton, X.; Lamarche, G.; Brown, C.; Lucieer, V.; Rice, G.; Schimel, A.; Weber, T.

    2015-12-01

    Backscatter data acquired with multibeam sonars are now commonly used for the remote geological interpretation of the seabed. The systems' hardware, software, and processing methods and tools have grown in number and improved over the years, yet many issues linger: there are no standard procedures for acquisition, poor or absent calibration, limited understanding and documentation of processing methods, etc. A workshop organized at the GeoHab (a community of geoscientists and biologists around the topic of marine habitat mapping) annual meeting in 2013 was dedicated to seafloor backscatter data from multibeam sonars and concluded that there was an overwhelming need for better coherence and agreement on the topics of acquisition, processing, and interpretation of data. The GeoHab Backscatter Working Group (BSWG) was subsequently created with the purpose of documenting and synthesizing the state of the art in sensors and techniques available today and proposing methods for best practice in the acquisition and processing of backscatter data. Two years later, the resulting document "Backscatter measurements by seafloor-mapping sonars: Guidelines and Recommendations" was completed [1]. The document provides: an introduction to backscatter measurements by seafloor-mapping sonars; a background on the physical principles of sonar backscatter; a discussion of users' needs from a wide spectrum of community end-users; a review of backscatter measurement; an analysis of best practices in data acquisition; a review of data processing principles with details on present software implementation; and finally a synthesis and key recommendations. This presentation reviews the BSWG mandate, structure, and development of this document. It details the various chapter contents, its recommendations to sonar manufacturers, operators, data processing software developers and end-users, and its implications for the marine geology community.
    [1] Downloadable at https://www.niwa.co.nz/coasts-and-oceans/research-projects/backscatter-measurement-guidelines

  20. Image processing algorithm design and implementation for real-time autonomous inspection of mixed waste

    SciTech Connect

    Schalkoff, R.J.; Shaaban, K.M.; Carver, A.E.

    1996-12-31

    The ARIES #1 (Autonomous Robotic Inspection Experimental System) vision system is used to acquire drum surface images under controlled conditions and subsequently perform autonomous visual inspection leading to a classification as 'acceptable' or 'suspect'. Specific topics described include vision system design methodology, algorithmic structure, hardware processing structure, and image acquisition hardware. Most of these capabilities were demonstrated at the ARIES Phase II Demo held on Nov. 30, 1995. Finally, Phase III efforts are briefly addressed.

  1. Radiographic image processing for industrial applications

    NASA Astrophysics Data System (ADS)

    Dowling, Martin J.; Kinsella, Timothy E.; Bartels, Keith A.; Light, Glenn M.

    1998-03-01

    One advantage of working with digital images is the opportunity for enhancement. While it is important to preserve the original image, variations can be generated that yield greater understanding of object properties. It is often possible to effectively increase dynamic range, improve contrast in regions of interest, emphasize subtle features, reduce background noise, and provide more robust detection of faults. This paper describes and illustrates some of these processes using real world examples.
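
    One of the enhancements listed above, improving contrast in a region of interest, is commonly done with a window/level mapping: a chosen intensity band is stretched over the full display range. The sketch below is a generic illustration with invented window settings, not the authors' processing chain.

```python
import numpy as np

def window_level(img, level, width, out_max=255):
    """Map the intensity window [level - width/2, level + width/2] onto
    the full display range; values outside the window are clipped."""
    lo, hi = level - width / 2.0, level + width / 2.0
    out = (img.astype(float) - lo) / (hi - lo)
    return (np.clip(out, 0.0, 1.0) * out_max).astype(np.uint8)

# 16-bit-style radiograph values mapped to an 8-bit display.
frame = np.array([[0, 250, 500, 750, 1000]])
display = window_level(frame, level=500, width=1000)
```

    Narrowing `width` around the intensities of a suspected flaw steepens the mapping there, which is one way to "effectively increase dynamic range" in the displayed region of interest.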

  2. Image processing of angiograms: A pilot study

    NASA Technical Reports Server (NTRS)

    Larsen, L. E.; Evans, R. A.; Roehm, J. O., Jr.

    1974-01-01

    The technology transfer application this report describes is the result of a pilot study of image-processing methods applied to the image enhancement, coding, and analysis of arteriograms. Angiography is a subspecialty of radiology that employs the introduction of media with high X-ray absorption into arteries in order to study vessel pathology as well as to infer disease of the organs supplied by the vessel in question.

  3. System identification by video image processing

    NASA Astrophysics Data System (ADS)

    Shinozuka, Masanobu; Chung, Hung-Chi; Ichitsubo, Makoto; Liang, Jianwen

    2001-07-01

    Emerging image processing techniques demonstrate their potential applications in earthquake engineering, particularly in the area of system identification. In this respect, the objectives of this research are to demonstrate the underlying principle that permits system identification, non-intrusively and remotely, with the aid of a video camera and, as a proof of concept, to apply the principle to a system identification problem involving relative motion, on the basis of the images. In structural control, accelerations at different stories of a building are usually measured and fed back for processing and control. As an alternative, this study attempts to identify the relative motion between different stories of a building for the purpose of on-line structural control by digitizing the images taken by a video camera. For this purpose, video images of the vibration of a structure base-isolated by a friction device under shaking-table excitation were used successfully to observe the relative displacement between the isolated structure and the shaking table. This proof-of-concept experiment demonstrates that the proposed identification method based on digital image processing can be used, with appropriate modifications, to identify many other engineering-significant quantities remotely. In addition to the system identification study in structural dynamics mentioned above, a preliminary study is described involving video imaging of the state of crack damage in road and highway pavement.
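
    Measuring relative displacement from video can be illustrated with template tracking: a patch marking a point on the structure is located in each frame by normalized cross-correlation, and frame-to-frame differences of its position give the displacement. This brute-force sketch is an illustration of the principle, not the study's algorithm; the frames and patch are synthetic.

```python
import numpy as np

def track_template(frame, template):
    """Return the (row, col) of the best normalized cross-correlation
    match of `template` in `frame` (exhaustive search, for clarity)."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            w = frame[r:r + th, c:c + tw]
            w = w - w.mean()
            denom = np.sqrt((w * w).sum()) * t_norm
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

# Synthetic frames: the same patch appears at two different positions.
rng = np.random.default_rng(2)
patch = rng.uniform(1.0, 2.0, size=(5, 5))
frame_a = np.zeros((20, 20))
frame_a[3:8, 4:9] = patch
frame_b = np.zeros((20, 20))
frame_b[6:11, 9:14] = patch
pos_a = track_template(frame_a, patch)
pos_b = track_template(frame_b, patch)
```

    The displacement between the two frames is simply `pos_b - pos_a`; tracking two patches (structure and shaking table) and differencing them yields the relative motion used for control.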

  4. Support Routines for In Situ Image Processing

    NASA Technical Reports Server (NTRS)

    Deen, Robert G.; Pariser, Oleg; Yeates, Matthew C.; Lee, Hyun H.; Lorre, Jean

    2013-01-01

    This software consists of a set of application programs that support ground-based image processing for in situ missions. These programs represent a collection of utility routines that perform miscellaneous functions in the context of the ground data system. Each one fulfills some specific need as determined via operational experience. The most unique aspect of these programs is that they are integrated into the large in situ image processing system via the PIG (Planetary Image Geometry) library. They work directly with in situ data, understanding the appropriate image meta-data fields and updating them properly. The programs themselves are completely multimission; all mission dependencies are handled by PIG. This suite of programs consists of: (1) marscahv: generates a linearized, epipolar-aligned image given a stereo pair of images; these images are optimized for 1-D stereo correlations. (2) marscheckcm: compares the camera model in an image label with one derived via kinematics modeling on the ground. (3) marschkovl: checks the overlaps between a list of images in order to determine which might be stereo pairs; this is useful for non-traditional stereo images like long-baseline pairs or those from an articulating arm camera. (4) marscoordtrans: translates mosaic coordinates from one form into another. (5) marsdispcompare: checks a left-to-right stereo disparity image against a right-to-left disparity image to ensure they are consistent with each other. (6) marsdispwarp: takes one image of a stereo pair and warps it through a disparity map to create a synthetic opposite-eye image; for example, a right-eye image could be transformed to look as if it were taken from the left eye. (7) marsfidfinder: finds fiducial markers in an image by projecting their approximate location and then using correlation to locate the markers to subpixel accuracy. These fiducial markers are small targets attached to the spacecraft surface. This helps verify, or improve, the
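
    The disparity-warp idea behind marsdispwarp can be sketched in a simplified form: each output pixel samples the input image at a column shifted by the disparity value, producing a synthetic opposite-eye view. This nearest-neighbour, horizontal-only sketch is only an analogue of the real tool, and the image and disparity values are invented.

```python
import numpy as np

def disparity_warp(img, disparity):
    """Create a synthetic opposite-eye view: output pixel (r, c) samples
    the input at column c + disparity[r, c] (nearest-neighbour, clipped
    at the image border; horizontal disparities only)."""
    out = np.zeros_like(img)
    rows, cols = img.shape
    for r in range(rows):
        src = np.clip(np.arange(cols) + np.round(disparity[r]).astype(int),
                      0, cols - 1)
        out[r] = img[r, src]
    return out

# Uniform disparity of 2 pixels: the output is the input shifted left by 2.
left = np.arange(25.0).reshape(5, 5)
disp = np.full((5, 5), 2.0)
synthetic = disparity_warp(left, disp)
```

    Comparing such a synthetic view against the actually recorded opposite-eye image is one way to validate a disparity map, in the spirit of marsdispcompare.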

  5. TU-F-CAMPUS-I-02: Contrast Enhanced Cone Beam CT Imaging with Dual- Gantry Image Acquisition and Constrained Iterative Reconstruction-a Simulation Study for Liver Imaging Application

    SciTech Connect

    Zhong, Y; Gupta, S; Lai, C; Wang, T; Shaw, C

    2015-06-15

    Purpose: Contrast time-density curves may help differentiate malignant tumors from normal tissues or benign tumors. Repetitive scans using conventional CT or cone beam CT techniques, which result in unacceptably high dose, may not achieve the desired temporal resolution. In this study we describe and demonstrate a 4D imaging technique for imaging and quantifying contrast flows that requires only one or two 360° scans. Methods: A dual-gantry system is used to simultaneously acquire two projection images at orthogonal orientations. Following the scan, each or both of the two 360° projection sets are used to reconstruct an average contrast-enhanced image set, which is then segmented to form a 3D contrast map. Alternatively, a pre-injection scan may be made and used to reconstruct a pre-injection image set, which is subtracted from the post-injection image set to form the 3D contrast map. Each of the two 360° projection sets is divided into 12 subsets, thus creating 12 pairs of 30° limited-angle projection sets, each corresponding to a time spanning 1/12 of the scanning time. Each pair of projection sets is reconstructed as a time-specific 3D image set with the maximum likelihood estimation iterative algorithm using the contrast map as the constraint. As a demonstration, a 4D abdominal phantom was constructed from clinical CT images, with blood flow through normal tissue and a tumor modeled and the imaging process simulated. Results: We have successfully generated a 4D image phantom and calculated the projection images. The time-density curves derived from the reconstructed image set matched well with the flow model used to generate the phantom. Conclusion: Dual-gantry image acquisition and a constrained iterative reconstruction algorithm may help to obtain time-density curves of contrast agents in blood flows, which may help differentiate malignant tumors from normal tissues or benign tumors.
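
    Once the time-specific image sets exist, deriving a time-density curve reduces to subtracting the pre-injection volume from each time frame and averaging the enhancement over a region of interest. The sketch below shows only that last step, with toy volumes in place of the reconstructed image sets.

```python
import numpy as np

def time_density_curve(frames, pre_injection, roi_mask):
    """Contrast time-density curve: subtract the pre-injection volume
    from each time frame and average the enhancement over the ROI."""
    return np.array([(f - pre_injection)[roi_mask].mean() for f in frames])

# Toy volumes: baseline of 10, enhancement rising over three time frames.
pre = np.full((4, 4), 10.0)
frames = [pre + k for k in (0.0, 5.0, 9.0)]
roi = np.ones((4, 4), dtype=bool)
tdc = time_density_curve(frames, pre, roi)
```

    The shape of this curve (arrival time, peak, washout) is what would be compared between a suspected tumor ROI and normal tissue.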

  6. Graph-based retrospective 4D image construction from free-breathing MRI slice acquisitions

    NASA Astrophysics Data System (ADS)

    Tong, Yubing; Udupa, Jayaram K.; Ciesielski, Krzysztof C.; McDonough, Joseph M.; Mong, Andrew; Campbell, Robert M.

    2014-03-01

    4D or dynamic imaging of the thorax has many potential applications [1, 2]. CT and MRI offer sufficient speed to acquire motion information via 4D imaging; however, they have different constraints and requirements. For both modalities, both prospective and retrospective respiratory gating and tracking techniques have been developed [3, 4]. For pediatric imaging, x-ray radiation becomes a primary concern and MRI remains the de facto choice. The pediatric subjects we deal with often suffer from extreme malformations of their chest wall, diaphragm, and/or spine, such that the patient cooperation needed by some of the gating and tracking techniques is difficult to realize without causing patient discomfort. Moreover, we are interested in the mechanical function of their thorax in its natural form in tidal breathing. Therefore free-breathing MRI acquisition is the ideal imaging modality for these patients. In our setup, for each coronal (or sagittal) slice position, slice images are acquired at a rate of about 200-300 ms/slice over several natural breathing cycles. This produces typically several thousand slices which contain both the anatomic and dynamic information. However, it is not trivial to form a consistent and well-defined 4D volume from these data. In this paper, we present a novel graph-based combinatorial optimization solution for constructing the best possible 4D scene from such data entirely in the digital domain. Our proposed method is purely image-based and does not need breath holding or any external surrogates or instruments to record respiratory motion or tidal volume. Data from both adult and pediatric patients are used to illustrate the performance of the proposed method. Experimental results show that the reconstructed 4D scenes are smooth and consistent spatially and temporally, agreeing with the known shape and motion of the lungs.

  7. Distributed real time data processing architecture for the TJ-II data acquisition system

    SciTech Connect

    Ruiz, M.; Barrera, E.; Lopez, S.; Machon, D.; Vega, J.; Sanchez, E.

    2004-10-01

    This article describes the performance of a new model of architecture that has been developed for the TJ-II data acquisition system in order to increase its real-time data processing capabilities. The current model consists of several PCI eXtensions for Instrumentation (PXI) standard chassis, each with various digitizers. In this architecture, the data processing capability is restricted to the PXI controller's own performance, and the controller must share its CPU resources between data processing and data acquisition tasks. In the new model, a distributed data processing architecture has been developed. The solution adds one or more processing cards to each PXI chassis. This way it is possible to plan how to distribute the processing of all acquired signals among the processing cards and the available resources of the PXI controller. This model allows scalability of the system: more or fewer processing cards can be added based on the requirements of the system. The processing algorithms are implemented in LabVIEW (from National Instruments), providing efficient and time-saving application development compared with other solutions.

  8. How to crack nuts: acquisition process in captive chimpanzees (Pan troglodytes) observing a model.

    PubMed

    Hirata, Satoshi; Morimura, Naruki; Houki, Chiharu

    2009-10-01

    Stone tool use for nut cracking consists of placing a hard-shelled nut onto a stone anvil and then cracking the shell open by pounding it with a stone hammer to get to the kernel. We investigated the acquisition of tool use for nut cracking in a group of captive chimpanzees to clarify what kind of understanding of the tools and actions leads to the acquisition of this type of tool use in the presence of a skilled model. A human experimenter trained a male chimpanzee until he mastered the use of a hammer and anvil stone to crack open macadamia nuts. He was then put in a nut-cracking situation together with his group mates, who were naïve to this tool use; we did not have a control group without a model. The results showed that the process of acquisition could be broken down into several steps, including recognition of applying pressure to the nut, emergence of the use of a combination of three objects, emergence of the hitting action, use of a tool for hitting, and hitting the nut. The chimpanzees recognized these different components separately and practiced them one after another. They gradually united these factors in their behavior, leading to their first success. Their behavior did not clearly improve immediately after observing successful nut cracking by a peer, but observation of a skilled group member seemed to have a gradual, long-term influence on the acquisition of nut cracking by naïve chimpanzees.

  9. Summary of the activities of the subgroup on data acquisition and processing

    SciTech Connect

    Connolly, P.L.; Doughty, D.C.; Elias, J.E.

    1981-01-01

    A data acquisition and handling subgroup consisting of approximately 20 members met during the 1981 ISABELLE summer study. Discussions were led by members of the BNL ISABELLE Data Acquisition Group (DAG) with lively participation from outside users. Particularly large contributions were made by representatives of BNL experiments 734, 735, and the MPS, as well as the Fermilab Colliding Detector Facility and the SLAC LASS Facility. In contrast to the 1978 study, the subgroup did not divide its activities into investigations of various individual detectors, but instead attempted to review the current state-of-the-art in the data acquisition, trigger processing, and data handling fields. A series of meetings first reviewed individual pieces of the problem, including the status of the Fastbus Project, the Nevis trigger processor, the SLAC 168/E and 3081/E emulators, and efforts within DAG. Additional meetings dealt with questions involved in specifying and building complete data acquisition systems. For any given problem, a series of possible solutions was proposed by the members of the subgroup. In general, any given solution had both advantages and disadvantages, and there was never any consensus on which approach was best. However, there was agreement that certain problems could only be handled by systems of a given power or greater. What will be given here is a review of various solutions with associated powers, costs, advantages, and disadvantages.

  10. Results of precision processing (scene correction) of ERTS-1 images using digital image processing techniques

    NASA Technical Reports Server (NTRS)

    Bernstein, R.

    1973-01-01

    ERTS-1 MSS and RBV data recorded on computer compatible tapes have been analyzed and processed, and preliminary results have been obtained. No degradation of intensity (radiance) information occurred in implementing the geometric correction. The quality and resolution of the digitally processed images are very good, due primarily to the fact that the number of film generations and conversions is reduced to a minimum. Processing times for digitally processed images are about equivalent to those of the NDPF electro-optical processor.

  11. Biospeckle image stack process based on artificial neural networks.

    PubMed

    Meschino, Gustavo; Murialdo, Silvia; Passoni, Lucia; Rabal, Hector; Trivi, Marcelo

    2010-01-01

    This paper proposes the identification of regions of interest in biospeckle patterns using unsupervised neural networks of the Self-Organizing Map type. Segmented images are obtained from the acquisition and processing of laser speckle sequences. Dynamic speckle is a phenomenon that occurs when a beam of coherent light illuminates a sample in which there is some type of activity, not visible, which results in a pattern that varies over time. In this particular case the method is applied to the evaluation of bacterial chemotaxis. Image stacks provided by a set of experiments are processed to extract features of the intensity dynamics. A Self-Organizing Map is trained and its cells are colored according to a criterion of similarity. During the recall stage, the features of patterns belonging to a new biospeckle sample impact the map, generating a new image using the color of the map cells impacted by the sample patterns. This method showed better performance in identifying regions of interest than methods that use a single descriptor. To test the method, a chemotaxis assay experiment was performed in which regions were differentiated according to the bacterial motility within the sample.
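    A minimal sketch of the core technique, a one-dimensional Self-Organizing Map trained on feature vectors, is shown below; it assumes nothing about the paper's actual descriptors or map topology, and `train_som` with its parameters is purely illustrative:

```python
import math
import random

def train_som(data, n_cells, n_iters=200, seed=0):
    """Train a 1-D Self-Organizing Map: each cell holds a weight vector
    that is pulled toward samples for which it (or a near neighbor) is
    the best-matching unit (BMU). Learning rate and neighborhood radius
    decay over the iterations."""
    rng = random.Random(seed)
    dim = len(data[0])
    cells = [[rng.random() for _ in range(dim)] for _ in range(n_cells)]
    for t in range(n_iters):
        lr = 0.5 * (1 - t / n_iters)                      # decaying learning rate
        radius = max(1.0, n_cells / 2 * (1 - t / n_iters))  # decaying neighborhood
        x = rng.choice(data)
        # BMU = cell whose weights are nearest to the sample
        bmu = min(range(n_cells),
                  key=lambda c: sum((cells[c][d] - x[d]) ** 2 for d in range(dim)))
        for c in range(n_cells):
            h = math.exp(-((c - bmu) ** 2) / (2 * radius ** 2))  # neighborhood weight
            for d in range(dim):
                cells[c][d] += lr * h * (x[d] - cells[c][d])
    return cells
```

    After training, coloring cells by similarity and mapping new samples to their BMUs yields the segmented image the abstract describes.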

  12. An Analysis of the Acquisition Process at the End of the Fiscal Year.

    DTIC Science & Technology

    1981-12-01

    "cradle to grave" process; acquisition, the contracting officer's role, is steps 4-10, or from definition of purchase requirement through contract...The role of the contracting officer is that of acquiring items and services to support the defense mission. The role of the requisitioner is to complete...major commands. Also the study of obligational rates of multi-year funds may prove to be enlightening as to the role which Congressional control in the

  13. Implementing Electronic Data Interchange (EDI) with Small Business Suppliers in the Pre-Award Acquisition Process

    DTIC Science & Technology

    1993-06-01

    initiative "Electronic Commerce through EDI." Consistent with the DoD initiative to implement EDI with industry, participation of small businesses in the pre...paperwork associated with the pre-award acquisition process, electronic commerce is being integrated with EDI through electronic bulletin boards...This thesis will explore the issues surrounding DoD's successfully implementing the use of Electronic Commerce / Electronic Data Interchange (EC/EDI

  14. Processing Images of Craters for Spacecraft Navigation

    NASA Technical Reports Server (NTRS)

    Cheng, Yang; Johnson, Andrew E.; Matthies, Larry H.

    2009-01-01

    A crater-detection algorithm has been conceived to enable automation of what, heretofore, have been manual processes for utilizing images of craters on a celestial body as landmarks for navigating a spacecraft flying near or landing on that body. The images are acquired by an electronic camera aboard the spacecraft, then digitized, then processed by the algorithm, which consists mainly of the following steps: 1. Edges in an image are detected and placed in a database. 2. Crater rim edges are selected from the edge database. 3. Edges that belong to the same crater are grouped together. 4. An ellipse is fitted to each group of crater edges. 5. Ellipses are refined directly in the image domain to reduce errors introduced in the detection of edges and fitting of ellipses. 6. The quality of each detected crater is evaluated. It is planned to utilize this algorithm as the basis of a computer program for automated, real-time, onboard processing of crater-image data. Experimental studies have led to the conclusion that this algorithm is capable of a detection rate >93 percent, a false-alarm rate <5 percent, a geometric error <0.5 pixel, and a position error <0.3 pixel.
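    Step 4 can be illustrated with a simplified stand-in: a Kasa least-squares circle fit (craters are roughly circular; the paper's actual ellipse fit is more general). The function `fit_circle` below is a hypothetical sketch, not the flight algorithm:

```python
def fit_circle(points):
    """Kasa least-squares circle fit: find (cx, cy, r) minimizing the
    algebraic residual of x^2 + y^2 + D*x + E*y + F = 0 over edge points.
    Solves the 3x3 normal equations by Gaussian elimination."""
    M = [[0.0] * 3 for _ in range(3)]
    v = [0.0] * 3
    for x, y in points:
        row = (x, y, 1.0)
        rhs = -(x * x + y * y)
        for i in range(3):
            v[i] += row[i] * rhs
            for j in range(3):
                M[i][j] += row[i] * row[j]
    # Forward elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 3):
                M[r][c] -= f * M[col][c]
            v[r] -= f * v[col]
    # Back substitution
    z = [0.0] * 3
    for r in (2, 1, 0):
        z[r] = (v[r] - sum(M[r][c] * z[c] for c in range(r + 1, 3))) / M[r][r]
    D, E, F = z
    cx, cy = -D / 2, -E / 2
    return cx, cy, (cx * cx + cy * cy - F) ** 0.5
```

    The full pipeline would then refine the fitted conic directly against image intensities, as in step 5.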

  15. [Image processing of early gastric cancer cases].

    PubMed

    Inamoto, K; Umeda, T; Inamura, K

    1992-11-25

    Computer image processing was used to enhance gastric lesions in order to improve the detection of stomach cancer. Digitization was performed in 25 cases of early gastric cancer that had been confirmed surgically and pathologically. The image processing consisted of grey scale transformation, edge enhancement (Sobel operator), and high-pass filtering (unsharp masking). Grey scale transformation improved image quality for the detection of gastric lesions. The Sobel operator enhanced linear and curved margins while suppressing other structures. High-pass filtering with unsharp masking was superior for visualization of the texture pattern on the mucosa. Eight of 10 small lesions (less than 2.0 cm) were successfully demonstrated. However, the detection of two lesions in the antrum was difficult even with the aid of image enhancement. In the other 15 lesions (more than 2.0 cm), the tumor surface pattern and the margin between the tumor and non-pathological mucosa were clearly visualized. Image processing was considered to contribute to the detection of small early gastric cancer lesions by enhancing the pathological lesions.
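    The Sobel edge-enhancement step mentioned above can be sketched as a plain gradient-magnitude filter; this is the generic textbook operator, not the paper's exact processing chain:

```python
def sobel_magnitude(img):
    """Sobel edge enhancement: gradient magnitude of a 2-D grayscale
    image given as a list of row lists. The one-pixel border, where the
    3x3 kernels do not fit, is left at zero."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Horizontal gradient (responds to vertical edges)
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            # Vertical gradient (responds to horizontal edges)
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

    On a vertical step edge the filter responds strongly along the boundary and not at all in flat regions, which is exactly the margin-enhancing behavior the abstract reports.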

  16. Onboard Image Processing System for Hyperspectral Sensor.

    PubMed

    Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun

    2015-09-25

    Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large-volume and high-speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed and implemented in the onboard circuitry that corrects the sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors, in order to maximize the compression ratio. The employed image compression method is based on the Fast, Efficient, Lossless Image compression System (FELICS), a hierarchical predictive coding method with resolution scaling. To improve FELICS's image decorrelation and entropy coding performance, we apply two-dimensional interpolation prediction and adaptive Golomb-Rice coding. The method supports progressive decompression using resolution scaling while maintaining superior performance in terms of speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which leads to reduced onboard hardware by multiplexing sensor signals into a smaller number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, or fabrication cost.
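    The Golomb-Rice entropy coder at the heart of the scheme can be sketched for a fixed Rice parameter k (the adaptive selection of k and the interpolation predictor are omitted); `rice_encode`/`rice_decode` are illustrative names:

```python
def rice_encode(value, k):
    """Rice code (Golomb code with M = 2**k) for a non-negative integer:
    the quotient value >> k in unary (terminated by '0'), followed by
    the k-bit binary remainder."""
    q = value >> k
    bits = "1" * q + "0"          # unary quotient
    if k:
        bits += format(value & ((1 << k) - 1), "0{}b".format(k))  # remainder
    return bits

def rice_decode(bits, k):
    """Inverse of rice_encode for a single codeword."""
    q = 0
    i = 0
    while bits[i] == "1":         # count the unary prefix
        q += 1
        i += 1
    i += 1                        # skip the terminating '0'
    r = int(bits[i:i + k], 2) if k else 0
    return (q << k) | r
```

    Small prediction residuals get short codes, which is why a good predictor (here, FELICS's two-dimensional interpolation prediction) directly improves the compression ratio.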

  17. Onboard Image Processing System for Hyperspectral Sensor

    PubMed Central

    Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun

    2015-01-01

    Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large-volume and high-speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed and implemented in the onboard circuitry that corrects the sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors, in order to maximize the compression ratio. The employed image compression method is based on the Fast, Efficient, Lossless Image compression System (FELICS), a hierarchical predictive coding method with resolution scaling. To improve FELICS's image decorrelation and entropy coding performance, we apply two-dimensional interpolation prediction and adaptive Golomb-Rice coding. The method supports progressive decompression using resolution scaling while maintaining superior performance in terms of speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which leads to reduced onboard hardware by multiplexing sensor signals into a smaller number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, or fabrication cost. PMID:26404281

  18. Enhanced neutron imaging detector using optical processing

    SciTech Connect

    Hutchinson, D.P.; McElhaney, S.A.

    1992-01-01

    Existing neutron imaging detectors have limited count rates due to inherent physical and electronic limitations. The popular multiwire proportional counter is limited by gas recombination to a count rate of less than 10^5 n/s over the entire array, and the neutron Anger camera, even though improved with new fiber optic encoding methods, can only achieve 10^6 cps over a limited array. We present a preliminary design for a new type of neutron imaging detector with a resolution of 2-5 mm and a count rate capability of 10^6 cps per pixel element. We propose to combine optical and electronic processing to economically increase the throughput of advanced detector systems while simplifying computing requirements. By placing a scintillator screen ahead of an optical image processor followed by a detector array, a high throughput imaging detector may be constructed.

  19. 3D integral imaging with optical processing

    NASA Astrophysics Data System (ADS)

    Martínez-Corral, Manuel; Martínez-Cuenca, Raúl; Saavedra, Genaro; Javidi, Bahram

    2008-04-01

    Integral imaging (InI) systems are imaging devices that provide auto-stereoscopic images of 3D intensity objects. Since the birth of this technology, InI systems have satisfactorily overcome many of their initial drawbacks. Basically, two kinds of procedures have been used: digital and optical. The "3D Imaging and Display Group" at the University of Valencia, with the essential collaboration of Prof. Javidi, has centered its efforts on 3D InI with optical processing. Among other achievements, our group has proposed annular amplitude modulation for enlargement of the depth of field, dynamic focusing for reduction of the facet-braiding effect, and the TRES and MATRES devices to enlarge the viewing angle.

  20. Simplified labeling process for medical image segmentation.

    PubMed

    Gao, Mingchen; Huang, Junzhou; Huang, Xiaolei; Zhang, Shaoting; Metaxas, Dimitris N

    2012-01-01

    Image segmentation plays a crucial role in many medical imaging applications by automatically locating the regions of interest. Typically, supervised learning based segmentation methods require a large set of accurately labeled training data. However, the labeling process is tedious, time consuming, and sometimes not necessary. We propose a robust logistic regression algorithm to handle label outliers such that doctors do not need to waste time on precisely labeling images for the training set. To validate its effectiveness and efficiency, we conduct carefully designed experiments on cervigram image segmentation in the presence of label outliers. Experimental results show that the proposed robust logistic regression algorithms achieve superior performance compared to previous methods, which validates the benefits of the proposed algorithms.
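    A minimal sketch of the idea is shown below: logistic regression fitted by stochastic gradient ascent, with a Huber-style down-weighting of samples whose labels strongly disagree with the model (the likely label outliers). This is an illustrative stand-in, not the authors' exact algorithm; `robust_logreg` and its parameters are assumptions:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def robust_logreg(X, y, epochs=500, lr=0.1, c=0.8):
    """Logistic regression with a simple robustness heuristic: a sample
    whose residual |y - p| exceeds threshold c gets its gradient scaled
    down by c / |y - p|, limiting the influence of mislabeled points."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            resid = yi - p
            wt = 1.0 if abs(resid) < c else c / abs(resid)  # down-weight outliers
            for j, xj in enumerate(xi):
                w[j] += lr * wt * resid * xj
            b += lr * wt * resid
    return w, b
```

    With ordinary logistic regression, a single flipped label can pull the decision boundary noticeably; the weighting caps each sample's gradient contribution instead.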

  1. Polarization information processing and software system design for simultaneously imaging polarimetry

    NASA Astrophysics Data System (ADS)

    Wang, Yahui; Liu, Jing; Jin, Weiqi; Wen, Renjie

    2015-08-01

    Simultaneous imaging polarimetry can realize real-time polarization imaging of a dynamic scene, which has wide application prospects. This paper first briefly illustrates the design of the double separate Wollaston prism simultaneous imaging polarimeter, and then emphasis is put on the polarization information processing methods and software system design for the designed polarimeter. The polarization information processing consists of adaptive image segmentation, high-accuracy image registration, and instrument matrix calibration. Morphological image processing was used for image segmentation by taking the dilation of an image; the accuracy of image registration can reach 0.1 pixel based on spatial and frequency domain cross-correlation; instrument matrix calibration adopted a four-point calibration method. The software system was implemented under Windows in the C++ programming language, and realized synchronous polarization image acquisition and preservation, image processing, and polarization information extraction and display. Polarization data obtained with the designed polarimeter show that the processing methods and the software system effectively perform real-time polarization measurement of the four Stokes parameters of a scene, and that the processing methods effectively improve the polarization detection accuracy.
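    The polarization extraction step can be sketched for the linear Stokes components, assuming a polarimeter that measures intensities behind analyzers at 0°, 45°, 90°, and 135° (the fourth Stokes parameter S3 additionally requires a retarder measurement and is omitted; function names are illustrative):

```python
def stokes_from_intensities(i0, i45, i90, i135):
    """Linear Stokes parameters from four analyzer-angle intensities,
    computed per pixel of the four registered sub-images."""
    s0 = (i0 + i90 + i45 + i135) / 2.0   # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical
    s2 = i45 - i135                      # +45 deg vs. -45 deg
    return s0, s1, s2

def dolp(s0, s1, s2):
    """Degree of linear polarization derived from the Stokes vector."""
    return (s1 ** 2 + s2 ** 2) ** 0.5 / s0
```

    This is why sub-pixel registration matters: the four intensities combined per pixel must come from the same scene point, or the differences S1 and S2 pick up edge artifacts.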

  2. Optimized acquisition time for x-ray fluorescence imaging of gold nanoparticles: a preliminary study using photon counting detector

    NASA Astrophysics Data System (ADS)

    Ren, Liqiang; Wu, Di; Li, Yuhua; Chen, Wei R.; Zheng, Bin; Liu, Hong

    2016-03-01

    X-ray fluorescence (XRF) is a promising spectroscopic technique for characterizing imaging contrast agents with high atomic numbers (Z), such as gold nanoparticles (GNPs), inside small objects. Its utilization for biomedical applications, however, is largely limited to experimental research due to long data acquisition times. The objectives of this study are to apply a photon counting detector array to XRF imaging and to determine an optimized XRF data acquisition time, at which the acquired XRF image is of acceptable quality while allowing the maximum level of radiation dose reduction. A prototype laboratory XRF imaging configuration consisting of a pencil-beam X-ray source and a photon counting detector array (1 × 64 pixels) is employed to acquire the XRF image by exciting the prepared GNP/water solutions. In order to analyze the signal to noise ratio (SNR) improvement versus increased exposure time, all the XRF photons within the energy range of 63-76 keV, which includes the two Kα gold fluorescence peaks, are collected for 1 s, 2 s, 3 s, and so on up to 200 s. The optimized XRF data acquisition time for imaging different GNP solutions is determined as the moment when the acquired XRF image just reaches a quality with an SNR of 20 dB, which corresponds to an acceptable image quality.
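    Under the photon-counting assumption that SNR grows as the square root of exposure time, the optimized acquisition time follows in closed form from a single reference measurement; this is an illustrative model, not the paper's measured procedure:

```python
def acquisition_time_for_snr(snr_1s_db, target_db=20.0):
    """Poisson counting model: SNR(t) = SNR(1 s) * sqrt(t), so in decibels
    SNR_dB(t) = SNR_dB(1 s) + 10*log10(t). Solving for the exposure time t
    at which the target SNR (default 20 dB) is reached gives
    t = 10 ** ((target_db - snr_1s_db) / 10)."""
    return 10 ** ((target_db - snr_1s_db) / 10.0)
```

    For example, a solution measuring 10 dB after one second would need about ten seconds to reach the 20 dB acceptance threshold under this model; in practice the stopping time is read off the measured SNR-versus-time curve.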

  3. Mariner 9-Image processing and products

    USGS Publications Warehouse

    Levinthal, E.C.; Green, W.B.; Cutts, J.A.; Jahelka, E.D.; Johansen, R.A.; Sander, M.J.; Seidman, J.B.; Young, A.T.; Soderblom, L.A.

    1973-01-01

    The purpose of this paper is to describe the system for the display, processing, and production of image-data products created to support the Mariner 9 Television Experiment. Of necessity, the system was large in order to respond to the needs of a large team of scientists with a broad scope of experimental objectives. The desire to generate processed data products as rapidly as possible, to take advantage of adaptive planning during the mission, coupled with the complexities introduced by the nature of the vidicon camera, greatly increased the scale of the ground-image processing effort. This paper describes the systems that carried out the processes and delivered the products necessary for real-time and near-real-time analyses. References are made to the computer algorithms used for the different levels of decalibration and analysis. © 1973.

  4. Web-based document image processing

    NASA Astrophysics Data System (ADS)

    Walker, Frank L.; Thoma, George R.

    1999-12-01

    Increasing numbers of research libraries are turning to the Internet for electronic interlibrary loan and for document delivery to patrons. This has been made possible through the widespread adoption of software such as Ariel and DocView. Ariel, a product of the Research Libraries Group, converts paper-based documents to monochrome bitmapped images and delivers them over the Internet. The National Library of Medicine's DocView is primarily designed for library patrons. While libraries and their patrons are beginning to reap the benefits of this new technology, barriers exist, e.g., differences in image file format, that lead to difficulties in the use of library document information. To research how to overcome such barriers, the Communications Engineering Branch of the Lister Hill National Center for Biomedical Communications, an R and D division of NLM, has developed a web site called the DocMorph Server. This is part of an ongoing intramural R and D program in document imaging that has spanned many aspects of electronic document conversion and preservation, Internet document transmission, and document usage. The DocMorph Server web site is designed to fill two roles. First, in a role that will benefit both libraries and their patrons, it allows Internet users to upload scanned image files for conversion to alternative formats, thereby enabling wider delivery and easier usage of library document information. Second, the DocMorph Server provides the design team an active test bed for evaluating the effectiveness and utility of new document image processing algorithms and functions, so that they may be evaluated for possible inclusion in other image processing software products being developed at NLM or elsewhere. This paper describes the design of the prototype DocMorph Server and the image processing functions being implemented on it.

  5. VPI - VIBRATION PATTERN IMAGER: A CONTROL AND DATA ACQUISITION SYSTEM FOR SCANNING LASER VIBROMETERS

    NASA Technical Reports Server (NTRS)

    Rizzi, S. A.

    1994-01-01

    The Vibration Pattern Imager (VPI) system was designed to control and acquire data from laser vibrometer sensors. The PC computer based system uses a digital signal processing (DSP) board and an analog I/O board to control the sensor and to process the data. The VPI system was originally developed for use with the Ometron VPI Sensor (Ometron Limited, Kelvin House, Worsley Bridge Road, London, SE26 5BX, England), but can be readily adapted to any commercially available sensor which provides an analog output signal and requires analog inputs for control of mirror positioning. VPI's graphical user interface allows the operation of the program to be controlled interactively through keyboard and mouse-selected menu options. The main menu controls all functions for setup, data acquisition, display, file operations, and exiting the program. Two types of data may be acquired with the VPI system: single point or "full field". In the single point mode, time series data is sampled by the A/D converter on the I/O board at a user-defined rate for the selected number of samples. The position of the measuring point, adjusted by mirrors in the sensor, is controlled via a mouse input. In the "full field" mode, the measurement point is moved over a user-selected rectangular area with up to 256 positions in both x and y directions. The time series data is sampled by the A/D converter on the I/O board and converted to a root-mean-square (rms) value by the DSP board. The rms "full field" velocity distribution is then uploaded for display and storage. VPI is written in C language and Texas Instruments' TMS320C30 assembly language for IBM PC series and compatible computers running MS-DOS. The program requires 640K of RAM for execution, and a hard disk with 10Mb or more of disk space is recommended. 
The program also requires a mouse, a VGA graphics display, a Four Channel analog I/O board (Spectrum Signal Processing, Inc.; Westborough, MA), a break-out box and a Spirit-30 board (Sonitech
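    The per-position conversion performed by the DSP board in "full field" mode reduces to a root-mean-square over the sampled time series; the sketch below is generic (buffer sizes and hardware scaling are omitted):

```python
def rms(samples):
    """Root-mean-square value of a sampled velocity time series, the
    single number stored per scan position in 'full field' mode."""
    return (sum(s * s for s in samples) / len(samples)) ** 0.5
```

    Mapping this value over the up-to-256 x 256 grid of mirror positions yields the rms velocity distribution that VPI uploads for display.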

  6. Cranial nerve assessment in posterior fossa tumors with fast imaging employing steady-state acquisition (FIESTA).

    PubMed

    Mikami, Takeshi; Minamida, Yoshihiro; Yamaki, Toshiaki; Koyanagi, Izumi; Nonaka, Tadashi; Houkin, Kiyohiro

    2005-10-01

    Steady-state free precession is widely used for ultra-fast cardiac or abdominal imaging. The purpose of this work was to assess fast imaging employing steady-state acquisition (FIESTA) and to evaluate its efficacy for depicting cranial nerves affected by tumors. Twenty-three consecutive patients with posterior fossa tumors underwent a FIESTA sequence after contrast agent administration, and displacement of the cranial nerves was then evaluated. The 23 posterior fossa tumors comprised 12 schwannomas, eight meningiomas, and three epidermoids. Except in the epidermoid cases, the intensity of all tumors increased on contrast-enhanced FIESTA imaging. In the schwannoma cases, visualization of the nerve became poorer as the tumor increased in size. In cases of encapsulated meningioma, all the cranial nerves of the posterior fossa were depicted regardless of location. The ability to depict the nerves was also significantly higher in meningioma patients than in schwannoma patients (P<0.05). In the epidermoid cases, the extension of the tumors was depicted clearly. Although the FIESTA sequence offers similar contrast to other heavily T2-weighted sequences, it facilitated a superior assessment of the effect of tumors on cranial nerve anatomy. The FIESTA sequence was useful for preoperative simulation of posterior fossa tumors.

  7. Photoacoustic pump-probe tomography of fluorophores in vivo using interleaved image acquisition for motion suppression

    NASA Astrophysics Data System (ADS)

    Märk, Julia; Wagener, Asja; Zhang, Edward; Laufer, Jan

    2017-01-01

    In fluorophores, the excited state lifetime can be modulated using pump-probe excitation. By generating photoacoustic (PA) signals using simultaneous and time-delayed pump and probe excitation pulses at fluences below the maximum permissible exposure, a modulation of the signal amplitude is observed in fluorophores but not in endogenous chromophores. This provides a highly specific contrast mechanism that can be used to recover the location of the fluorophore using difference imaging. The practical challenges in applying this method to in vivo PA tomography include the typically low concentrations of fluorescent contrast agents, and tissue motion. The former results in smaller PA signal amplitudes compared to those measured in blood, while the latter gives rise to difference image artefacts that compromise the unambiguous and potentially noise-limited detection of fluorescent contrast agents. To address this limitation, a method based on interleaved pump-probe image acquisition was developed. It relies on fast switching between simultaneous and time-delayed pump-probe excitation to acquire PA difference signals in quick succession, and to minimise the effects of tissue motion. The feasibility of this method is demonstrated in tissue phantoms and in initial experiments in vivo.

  8. Photoacoustic pump-probe tomography of fluorophores in vivo using interleaved image acquisition for motion suppression

    PubMed Central

    Märk, Julia; Wagener, Asja; Zhang, Edward; Laufer, Jan

    2017-01-01

    In fluorophores, the excited state lifetime can be modulated using pump-probe excitation. By generating photoacoustic (PA) signals using simultaneous and time-delayed pump and probe excitation pulses at fluences below the maximum permissible exposure, a modulation of the signal amplitude is observed in fluorophores but not in endogenous chromophores. This provides a highly specific contrast mechanism that can be used to recover the location of the fluorophore using difference imaging. The practical challenges in applying this method to in vivo PA tomography include the typically low concentrations of fluorescent contrast agents, and tissue motion. The former results in smaller PA signal amplitudes compared to those measured in blood, while the latter gives rise to difference image artefacts that compromise the unambiguous and potentially noise-limited detection of fluorescent contrast agents. To address this limitation, a method based on interleaved pump-probe image acquisition was developed. It relies on fast switching between simultaneous and time-delayed pump-probe excitation to acquire PA difference signals in quick succession, and to minimise the effects of tissue motion. The feasibility of this method is demonstrated in tissue phantoms and in initial experiments in vivo. PMID:28091571

  9. Digital image processing of vascular angiograms

    NASA Technical Reports Server (NTRS)

    Selzer, R. H.; Beckenbach, E. S.; Blankenhorn, D. H.; Crawford, D. W.; Brooks, S. H.

    1975-01-01

    The paper discusses the estimation of the degree of atherosclerosis in the human femoral artery through the use of a digital image processing system for vascular angiograms. The film digitizer uses an electronic image dissector camera to scan the angiogram and convert the recorded optical density information into a numerical format. Another processing step involves locating the vessel edges from the digital image. The computer has been programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements are combined into an atherosclerosis index, which is found in a post-mortem study to correlate well with both visual and chemical estimates of atherosclerotic disease.

  10. Recovery of phase inconsistencies in continuously moving table extended field of view magnetic resonance imaging acquisitions.

    PubMed

    Kruger, David G; Riederer, Stephen J; Rossman, Phillip J; Mostardi, Petrice M; Madhuranthakam, Ananth J; Hu, Houchun H

    2005-09-01

    MR images formed using extended FOV continuously moving table data acquisition can have signal falloff and loss of lateral spatial resolution at localized, periodic positions along the direction of table motion. In this work we identify the origin of these artifacts and provide a means for correction. The artifacts are due to a mismatch of the phase of signals acquired from contiguous sampling fields of view and are most pronounced when the central k-space views are being sampled. Correction can be performed using the phase information from a periodically sampled central view to adjust the phase of all other views of that view cycle, making the net phase uniform across each axial plane. Results from experimental phantom and contrast-enhanced peripheral MRA studies show that the correction technique substantially eliminates the artifact for a variety of phase encode orders.
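    The correction described above can be sketched with complex k-space samples: every view in a cycle is rotated by the conjugate of the phase measured on that cycle's periodically sampled central reference view. `correct_view_phases` and the data layout are illustrative assumptions, not the authors' implementation:

```python
import cmath

def correct_view_phases(views, ref_phases):
    """Remove the per-FOV phase mismatch: for each view cycle, multiply
    every k-space sample of that cycle's views by exp(-i * phi_ref),
    where phi_ref is the phase of the cycle's central reference view,
    making the net phase uniform across the axial plane."""
    corrected = []
    for view, phi in zip(views, ref_phases):
        rot = cmath.exp(-1j * phi)          # conjugate of the reference phase
        corrected.append([s * rot for s in view])
    return corrected
```

    After the rotation, signals from contiguous sampling fields of view add coherently, which is what removes the periodic signal falloff along the table-motion direction.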

  11. [EOS imaging acquisition system : 2D/3D diagnostics of the skeleton].

    PubMed

    Tarhan, T; Froemel, D; Meurer, A

    2015-12-01

    The application spectrum of the EOS imaging acquisition system is versatile. It is especially useful in the diagnostics and planning of corrective surgical procedures in complex orthopedic cases. The application is indicated when assessing deformities and malpositions of the spine, pelvis and lower extremities. It can also be used in the assessment and planning of hip and knee arthroplasty. For the first time physicians have the opportunity to conduct examinations of the whole body under weight-bearing conditions in order to anticipate the effects of a planned surgical procedure on the skeletal system as a whole and therefore on the posture of the patient. Compared to conventional radiographic examination techniques, such as x-ray or computed tomography, the patient is exposed to much less radiation. Therefore, the pediatric application of this technique can be described as reasonable.

  12. Phase incremented echo train acquisition applied to magnetic resonance pore imaging

    NASA Astrophysics Data System (ADS)

    Hertel, S. A.; Galvosas, P.

    2017-02-01

    Efficient phase cycling schemes remain a challenge for NMR techniques whose pulse sequences involve a large number of rf-pulses. Especially complex is the Carr-Purcell-Meiboom-Gill (CPMG) pulse sequence, where the number of rf-pulses can range from hundreds to several thousands. Our recent implementation of Magnetic Resonance Pore Imaging (MRPI) is based on a CPMG rf-pulse sequence in order to refocus the effect of internal gradients inherent in porous media. While the spin dynamics of spin-1/2 systems in CPMG-like experiments are well understood, it is still not straightforward to separate the desired pathway from the spectrum of unwanted coherence pathways. In this contribution we apply Phase Incremented Echo Train Acquisition (PIETA) to MRPI. We show how PIETA offers a convenient way to implement a working phase cycling scheme and how it allows one to gain deeper insight into the amplitudes of undesired pathways.

  13. A digital receiver module with direct data acquisition for magnetic resonance imaging systems.

    PubMed

    Tang, Weinan; Sun, Hongyu; Wang, Weimin

    2012-10-01

    A digital receiver module for magnetic resonance imaging (MRI) with detailed hardware implementations is presented. The module is based on a direct sampling scheme using the latest mixed-signal circuit design techniques. A single field-programmable gate array chip is employed to perform software-based digital down conversion for radio frequency signals. The modular architecture of the receiver allows multiple acquisition channels to be implemented on a highly integrated printed circuit board. To maintain the phase coherence of the receiver and the exciter in the context of direct sampling, an effective phase synchronization method was proposed to achieve a phase deviation as small as 0.09°. The performance of the described receiver module was verified in the experiments for both low- and high-field (0.5 T and 1.5 T) MRI scanners and was compared to a modern commercial MRI receiver system.

  14. Results of shuttle EMU thermal vacuum tests incorporating an infrared imaging camera data acquisition system

    NASA Technical Reports Server (NTRS)

    Anderson, James E.; Tepper, Edward H.; Trevino, Louis A.

    1991-01-01

    Manned tests in Chamber B at NASA JSC were conducted in May and June of 1990 to better quantify the Space Shuttle Extravehicular Mobility Unit's (EMU) thermal performance in the cold environmental extremes of space. Use of an infrared imaging camera with real-time video monitoring of the output significantly added to the scope, quality and interpretation of the test conduct and data acquisition. Results of this test program have been effective in the thermal certification of a new insulation configuration and the '5000 Series' glove. In addition, the acceptable thermal performance of flight garments with visually deteriorated insulation was successfully demonstrated, thereby saving significant inspection and garment replacement costs. This test program also established a new method for collecting data vital to improving crew thermal comfort in cold environments.

  15. Acquisition of priori tissue optical structure based on non-rigid image registration

    NASA Astrophysics Data System (ADS)

    Wan, Wenbo; Li, Jiao; Liu, Lingling; Wang, Yihan; Zhang, Yan; Gao, Feng

    2015-03-01

    Shape-parameterized diffuse optical tomography (DOT), which relies on the a priori assumption that the optical properties are uniformly distributed within each region, has proven effective for reconstructing the optical heterogeneities of complex biological tissue. The a priori tissue optical structure can be acquired with the assistance of anatomical imaging methods such as X-ray computed tomography (XCT), which, however, suffers from low contrast for soft tissues comprising regions of different optical characteristics. For the mouse model, a feasible strategy for a priori tissue optical structure acquisition is proposed based on a non-rigid image registration algorithm. During registration, a mapping matrix is calculated to elastically align the XCT image of the reference mouse to the XCT image of the target mouse. Applying the matrix to the reference atlas, a detailed mesh of the organs/tissues of the reference mouse, yields a registered atlas representing the anatomical structure of the target mouse. By assigning published optical parameters of each organ to the corresponding anatomical structure, the optical structure of the target organism is obtained as a priori information for the DOT reconstruction algorithm. By applying the non-rigid image registration algorithm to a target mouse that is transformed from the reference mouse, the results show that the minimum correlation coefficient can be improved from 0.2781 (before registration) to 0.9032 (after fine registration), and the maximum average Euclidean distance can be decreased from 12.80 mm (before registration) to 1.02 mm (after fine registration), which verifies the effectiveness of the algorithm.

  16. Wavelet-aided pavement distress image processing

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Huang, Peisen S.; Chiang, Fu-Pen

    2003-11-01

    A wavelet-based pavement distress detection and evaluation method is proposed. The method consists of two main parts: real-time processing for distress detection and offline processing for distress evaluation. The real-time processing part includes wavelet transform, distress detection and isolation, and image compression and noise reduction. When a pavement image is decomposed into different frequency subbands by the wavelet transform, the distresses, which are usually irregular in shape, appear as high-amplitude wavelet coefficients in the high-frequency detail subbands, while the background appears in the low-frequency approximation subband. Two statistical parameters, the high-amplitude wavelet coefficient percentage (HAWCP) and the high-frequency energy percentage (HFEP), are established and used as criteria for real-time distress detection and distress image isolation. For compression of isolated distress images, a modified EZW (embedded zerotree wavelet) coder is developed, which simultaneously compresses the images and reduces the noise. The compressed data are saved to the hard drive for further analysis and evaluation. The offline processing includes distress classification, distress quantification, and reconstruction of the original image for distress segmentation, distress mapping, and maintenance decision-making. The compressed data are first loaded and decoded to obtain the wavelet coefficients. The Radon transform is then applied, and the parameters related to the peaks in the Radon domain are used for distress classification. For distress quantification, a norm is defined that can be used as an index for evaluating the severity and extent of the distress. Compared to visual or manual inspection, the proposed method has the advantages of being objective, high-speed, safe, automated, and applicable to different types of pavements and distresses.
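
    The two detection statistics can be computed from any wavelet decomposition; the sketch below uses a single-level 2-D Haar transform written directly in NumPy. The synthetic "pavement" image and the amplitude threshold are assumptions for demonstration only.

```python
import numpy as np

# Illustrative computation of HAWCP and HFEP from a one-level 2-D Haar
# transform: a crack produces high-amplitude coefficients in the detail
# subbands, while the smooth background stays in the approximation subband.
def haar2d(img):
    """One-level 2-D Haar transform: returns (LL, (LH, HL, HH))."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation subband
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return ll, (lh, hl, hh)

rng = np.random.default_rng(0)
img = rng.normal(128.0, 2.0, size=(64, 64))   # smooth background
img[31:35, 11:53] += 60.0                     # a crack-like distress

ll, details = haar2d(img)
high = np.concatenate([s.ravel() for s in details])

threshold = 10.0                              # assumed amplitude threshold
hawcp = np.mean(np.abs(high) > threshold)     # high-amplitude coefficient share
hfep = np.sum(high ** 2) / (np.sum(ll ** 2) + np.sum(high ** 2))
print(round(hawcp, 4), round(hfep, 6))
```

    On a distress-free image both statistics drop toward zero, which is what makes them usable as real-time detection criteria.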

  17. Hemispheric superiority for processing a mirror image.

    PubMed

    Garren, R B; Gehlsen, G M

    1981-04-01

    Thirty-nine adult subjects were administered a test using tachistoscopic half-field presentations to determine hemispheric dominance, and a mirror-tracing task to determine whether a hemispheric superiority exists for processing a mirror image. The results indicate superiority of the nondominant hemisphere for this task.

  18. Image Processing Using a Parallel Architecture.

    DTIC Science & Technology

    1987-12-01

    Computer," Byte, 3: 14-25 (December 1978). McGraw-Hill, 1985. ... Trussell, H. Joel. "Processing of X-ray Images," Proceedings of the IEEE, 69: 615-627 ... Services Electronics Program contract N00014-79-C-0424 (AD-085-846). ... Therrien, Charles W. et al. "A Multiprocessor System for Simulation of

  19. Stochastic processes, estimation theory and image enhancement

    NASA Technical Reports Server (NTRS)

    Assefi, T.

    1978-01-01

    An introductory account of stochastic processes, estimation theory, and image enhancement is presented. The book is primarily intended for first-year graduate students and practicing engineers and scientists whose work requires an acquaintance with the theory. Fundamental concepts of probability that are required to support the main topics are reviewed. The appendices discuss the remaining mathematical background.

  20. Limiting liability via high resolution image processing

    SciTech Connect

    Greenwade, L.E.; Overlin, T.K.

    1996-12-31

    The utilization of high-resolution image processing allows forensic analysts and visualization scientists to assist detectives by enhancing field photographs, and by providing the tools and training to increase the quality and usability of field photos. Through the use of digitized photographs and computerized enhancement software, field evidence can be obtained and processed as 'evidence ready', even in poor lighting and shadowed conditions or darkened rooms. These images, which are most often unusable when taken with standard camera equipment, can be shot in the worst of photographic conditions and be processed into usable evidence. Visualization scientists have taken digital photographic image processing and moved crime scene photography into the technology age. The use of high-resolution technology will assist law enforcement in making better use of crime scene photography and in positive identification of prints. Valuable courtroom and investigation time can be saved and better served by this accurate, performance-based process. Inconclusive evidence does not lead to convictions. Enhancement capability addresses a major problem with crime scene photos: images that, taken with standard equipment and without the benefit of enhancement software, would be inconclusive, allowing guilty parties to go free for lack of evidence.

  1. A Psychometric Study of Reading Processes in L2 Acquisition: Deploying Deep Processing to Push Learners' Discourse Towards Syntactic Processing-Based Constructions

    ERIC Educational Resources Information Center

    Manuel, Carlos J.

    2009-01-01

    This study assesses reading processes and/or strategies needed to deploy deep processing that could push learners towards syntactic-based constructions in L2 classrooms. Research has found L2 acquisition to present varying degrees of success and/or fossilization (Bley-Vroman 1989, Birdsong 1992 and Sharwood Smith 1994). For example, learners have…

  2. Pattern Recognition and Image Processing of Infrared Astronomical Satellite Images

    NASA Astrophysics Data System (ADS)

    He, Lun Xiong

    1996-01-01

    The Infrared Astronomical Satellite (IRAS) images with wavelengths of 60 mu m and 100 mu m contain information mainly on extra-galactic sources and low-temperature interstellar media. The low-temperature interstellar media in the Milky Way impose a "cirrus" screen on IRAS images, especially at the 100 mu m wavelength. This dissertation deals with techniques for removing the "cirrus" clouds from the 100 mu m band in order to achieve accurate determinations of point sources and their intensities (fluxes). We employ an image filtering process which utilizes mathematical morphology and wavelet analysis as the key tools in removing the "cirrus" foreground emission. The filtering process consists of extraction and classification of size information, followed by removal of the cirrus component from each pixel of the image. Extraction of size information is the most important step in this process. It is achieved by either mathematical morphology or wavelet analysis. In the mathematical morphology method, extraction of size information is done using the "sieving" process; in the wavelet method, multi-resolution techniques are employed instead. The classification of size information distinguishes extra-galactic sources from cirrus using their averaged size information. The cirrus component of each pixel is then removed using the averaged cirrus size information. The filtered image contains much less cirrus. Intensity alterations for extra-galactic sources in the filtered image are discussed. It is possible to retain the fluxes of the point sources when we weigh the cirrus component differently pixel by pixel. The importance of uni-directional size information extraction is addressed in this dissertation. Such uni-directional extraction is achieved by constraining the structuring elements, or by constraining the sieving process to be sequential.
The generalizations of mathematical morphology operations based
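
    The morphological "sieving" idea can be illustrated in one dimension: a grayscale opening (erosion followed by dilation) with a window wider than a point source removes the source but preserves the smooth background, so the source is recovered as signal minus opening. The profile, window size and amplitudes below are illustrative assumptions.

```python
import numpy as np

# 1-D grayscale opening as a size-selective sieve: structures narrower
# than the window are removed, the slowly varying "cirrus" survives.
def erode(x, w):
    return np.array([x[max(0, i - w): i + w + 1].min() for i in range(len(x))])

def dilate(x, w):
    return np.array([x[max(0, i - w): i + w + 1].max() for i in range(len(x))])

def opening(x, w):
    return dilate(erode(x, w), w)

idx = np.arange(200)
cirrus = 5.0 + np.sin(idx / 30.0)             # slowly varying background
profile = cirrus.copy()
profile[100:103] += 10.0                      # a compact "point source"

background = opening(profile, w=5)            # sieves out structures < ~11 px
source = profile - background

print(int(np.argmax(source)))                 # 100: the source is recovered
```

    Constraining the window (structuring element) to one axis gives the uni-directional extraction the dissertation describes.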

  3. Image processing techniques for passive millimeter-wave imaging

    NASA Astrophysics Data System (ADS)

    Lettington, Alan H.; Gleed, David G.

    1998-08-01

    We present our results on the application of image processing techniques to passive millimeter-wave imaging and discuss possible future trends. Passive millimeter-wave imaging is useful in poor weather such as fog and cloud. Its spatial resolution, however, can be restricted by the diffraction limit of the front aperture. The resolution may be increased using super-resolution techniques, but often at the expense of processing time. Linear methods may be implemented in real time, but the non-linear methods required to restore missing spatial frequencies are usually more time-consuming. In the present paper we describe fast super-resolution techniques which are potentially capable of being applied in real time. Associated issues such as reducing the influence of noise and improving recognition capability are discussed. Various techniques have been used to enhance passive millimeter-wave images, giving excellent results and providing a significant, quantifiable increase in spatial resolution. Examples of applying these techniques to imagery are given.
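
    A linear restoration of the kind that can run in real time can be sketched as a Wiener deconvolution of a diffraction-blurred scan line. The Gaussian "aperture" blur width and the noise-to-signal ratio below are illustrative assumptions, not measured system parameters.

```python
import numpy as np

# Wiener deconvolution of a blurred 1-D scan line: a linear, FFT-based
# restoration that sharpens within the measured band (it does not restore
# frequencies the aperture removed entirely, which is what the non-linear
# methods are for).
n = 256
x = np.zeros(n)
x[100], x[110] = 1.0, 1.0                    # two closely spaced point targets

k = np.exp(-0.5 * ((np.arange(n) - n // 2) / 4.0) ** 2)
k /= k.sum()                                 # normalized Gaussian blur kernel
K = np.fft.fft(np.fft.ifftshift(k))          # zero-phase transfer function

rng = np.random.default_rng(1)
y = np.fft.ifft(np.fft.fft(x) * K).real + rng.normal(0.0, 1e-3, n)

nsr = 1e-4                                   # assumed noise-to-signal ratio
W = np.conj(K) / (np.abs(K) ** 2 + nsr)      # Wiener inverse filter
restored = np.fft.ifft(np.fft.fft(y) * W).real

# The blurred line merges the two targets; the restored line separates them.
print(round(y[100], 3), round(restored[100], 3))
```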

  4. Measuring Acquisition Workforce Quality through Dynamic Knowledge and Performance: An Exploratory Investigation to Interrelate Acquisition Knowledge with Process Maturity

    DTIC Science & Technology

    2013-10-08

    Figure 5. Combined Score-PCO Relationship ... Figure 6. Organization T Score-PCO Relationship ... Figure 7. Organization R Score-PCO Relationship ... stocks from two DoD contracting centers including Procuring Contracting Officer (PCO) assignment, Defense Acquisition Workforce Improvement Act (DAWIA

  5. Hardware System for Real-Time EMG Signal Acquisition and Separation Processing during Electrical Stimulation.

    PubMed

    Hsueh, Ya-Hsin; Yin, Chieh; Chen, Yan-Hong

    2015-09-01

    The study aimed to develop a real-time electromyography (EMG) signal acquisition and processing device that can acquire signals during electrical stimulation. Since the electrical stimulation output can interfere with EMG signal acquisition, integrating the two elements into one system required modifying the EMG signal transmission and processing method. The whole system was designed in a user-friendly and flexible manner. For EMG signal processing, the system uses an Altera field-programmable gate array (FPGA) as its core to process the hybrid EMG signal and output the isolated signal in real time and in a highly efficient way. The power spectral density was used to evaluate the accuracy of signal processing, and cross-correlation showed that the delay of real-time processing was only 250 μs.
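
    The delay figure can be measured as the lag of the cross-correlation peak between the input EMG and the processed output. The sampling rate and synthetic signals below are assumptions; the 250 μs value is the delay reported in the abstract.

```python
import numpy as np

# Latency measurement by cross-correlation: the lag that maximizes the
# correlation between input and output is the processing delay.
fs = 40_000                                   # assumed sampling rate (Hz)
delay_samples = int(250e-6 * fs)              # 250 us -> 10 samples

rng = np.random.default_rng(2)
emg = rng.normal(size=2000)                   # stand-in for an EMG burst
out = np.concatenate([np.zeros(delay_samples), emg])[:len(emg)]

xcorr = np.correlate(out, emg, mode="full")   # lags -(N-1) .. +(N-1)
lag = np.argmax(xcorr) - (len(emg) - 1)
print(lag / fs * 1e6)                         # 250.0 (microseconds)
```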

  6. Image analysis in modern ophthalmology: from acquisition to computer assisted diagnosis and telemedicine

    NASA Astrophysics Data System (ADS)

    Marrugo, Andrés G.; Millán, María S.; Cristóbal, Gabriel; Gabarda, Salvador; Sorel, Michal; Sroubek, Filip

    2012-06-01

    Medical digital imaging has become a key element of modern health care procedures. It provides visual documentation and a permanent record for the patient and, most importantly, the ability to extract information about many diseases. Modern ophthalmology thrives on and develops with the advances in digital imaging and computing power. In this work we present an overview of recent image processing techniques proposed by the authors in the area of digital eye fundus photography. Our applications range from retinal image quality assessment to image restoration via blind deconvolution and visualization of structural changes over time between patient visits, all proposed within a framework for improving and assisting the medical practice and the forthcoming scenario of the information chain in telemedicine.

  7. APNEA list mode data acquisition and real-time event processing

    SciTech Connect

    Hogle, R.A.; Miller, P.; Bramblett, R.L.

    1997-11-01

    The LMSC Active Passive Neutron Examinations and Assay (APNEA) Data Logger is a VME-based data acquisition system using commercial off-the-shelf hardware with application-specific software. It receives TTL inputs from eighty-eight ³He detector tubes and eight timing signals. Two data sets are generated concurrently for each acquisition session: (1) a list-mode recording of all detector and timing signals, timestamped to 3 microsecond resolution; (2) event accumulations generated in real time by counting events into short (tens of microseconds) and long (seconds) time bins following repetitive triggers. List-mode data sets can be post-processed to: (1) determine the optimum time bins for TRU assay of waste drums, (2) analyze a given data set in several ways to match different assay requirements and conditions, and (3) confirm assay results by examining details of the raw data. Data Logger events are processed and timestamped by an array of 15 TMS320C40 DSPs and delivered to an embedded controller (PowerPC 604) for interim disk storage. Three acquisition modes, corresponding to different trigger sources, are provided. A standard network interface to a remote host system (Windows NT or SunOS) provides for system control, status, and transfer of previously acquired data. 6 figs.

  8. A knowledge acquisition process to analyse operational problems in solid waste management facilities.

    PubMed

    Dokas, Ioannis M; Panagiotakopoulos, Demetrios C

    2006-08-01

    The available expertise on managing and operating solid waste management (SWM) facilities varies among countries and among types of facilities. Few experts are willing to record their experience, and few researchers systematically investigate the chains of events that could trigger operational failures in a facility; expertise acquisition and dissemination in SWM is neither popular nor easy, despite the great need for it. This paper presents a knowledge acquisition process aimed at capturing, codifying and expanding reliable expertise and propagating it to non-experts. The knowledge engineer (KE), the person performing the acquisition, must identify the events (or causes) that could trigger a failure, determine whether a specific event could trigger more than one failure, and establish how various events are related among themselves and how they are linked to specific operational problems. The proposed process, which utilizes logic diagrams (fault trees) widely used in system safety and reliability analyses, was used for the analysis of 24 common landfill operational problems. The acquired knowledge led to the development of a web-based expert system (Landfill Operation Management Advisor, http://loma.civil.duth.gr), which estimates the possibility of occurrence of operational problems, provides advice and suggests solutions.

  9. Visual parameter optimisation for biomedical image processing

    PubMed Central

    2015-01-01

    Background Biomedical image processing methods require users to optimise input parameters to ensure high-quality output. This presents two challenges. First, it is difficult to optimise multiple input parameters for multiple input images. Second, it is difficult to achieve an understanding of underlying algorithms, in particular, relationships between input and output. Results We present a visualisation method that transforms users' ability to understand algorithm behaviour by integrating input and output, and by supporting exploration of their relationships. We discuss its application to a colour deconvolution technique for stained histology images and show how it enabled a domain expert to identify suitable parameter values for the deconvolution of two types of images, and metrics to quantify deconvolution performance. It also enabled a breakthrough in understanding by invalidating an underlying assumption about the algorithm. Conclusions The visualisation method presented here provides analysis capability for multiple inputs and outputs in biomedical image processing that is not supported by previous analysis software. The analysis supported by our method is not feasible with conventional trial-and-error approaches. PMID:26329538

  10. MRI Image Processing Based on Fractal Analysis

    PubMed

    Marusina, Mariya Y; Mochalina, Alexandra P; Frolova, Ekaterina P; Satikov, Valentin I; Barchuk, Anton A; Kuznetcov, Vladimir I; Gaidukov, Vadim S; Tarakanov, Segrey A

    2017-01-01

    Background: Cancer is one of the most common causes of human mortality, with about 14 million new cases and 8.2 million deaths reported in 2012. Early diagnosis of cancer through screening allows interventions to reduce mortality. Fractal analysis of medical images may be useful for this purpose. Materials and Methods: In this study, we examined magnetic resonance (MR) images of healthy livers and livers containing metastases from colorectal cancer. The fractal dimension and the Hurst exponent were chosen as diagnostic features for tomographic imaging. The ImageJ software package was used for image processing, and its FracLac plugin was applied for fractal analysis with a 120x150 pixel area. Calculations of the fractal dimensions of pathological and healthy tissue samples were performed using the box-counting method. Results: In pathological cases (foci formation), the Hurst exponent was less than 0.5 (the region of unstable statistical characteristics). For healthy tissue, the Hurst exponent was greater than 0.5 (the zone of stable characteristics). Conclusions: The study indicated the possibility of employing rapid fractal analysis for the detection of focal lesions of the liver. The Hurst exponent can be used as an important diagnostic characteristic for the analysis of medical images.
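
    The box-counting method named in the abstract can be sketched on synthetic binary masks of known dimension; the paper applies the same principle (via the FracLac plugin) to segmented MR liver images. The masks and box sizes below are illustrative.

```python
import numpy as np

# Box-counting fractal dimension: count occupied boxes N(s) at several box
# sizes s; the slope of log N versus log(1/s) estimates the dimension.
def box_count(mask, size):
    """Number of size x size boxes containing at least one foreground pixel."""
    count = 0
    for i in range(0, mask.shape[0], size):
        for j in range(0, mask.shape[1], size):
            if mask[i:i + size, j:j + size].any():
                count += 1
    return count

def fractal_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    counts = [box_count(mask, s) for s in sizes]
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

plane = np.ones((64, 64), dtype=bool)          # filled region: dimension 2
line = np.zeros((64, 64), dtype=bool)
line[32, :] = True                             # straight line: dimension 1

print(round(fractal_dimension(plane), 3))      # 2.0
print(round(fractal_dimension(line), 3))       # 1.0
```

    A rough lesion boundary yields a non-integer value between these two extremes, which is what makes the dimension usable as a diagnostic feature.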

  11. Subband/transform functions for image processing

    NASA Technical Reports Server (NTRS)

    Glover, Daniel

    1993-01-01

    Functions for image data processing written for use with the MATLAB(TM) software package are presented. These functions provide the capability to transform image data with block transformations (such as the Walsh-Hadamard) and to produce spatial-frequency subbands of the transformed data. Block transforms are equivalent to simple subband systems. The transform coefficients are reordered using a simple permutation to give subbands. The low-frequency subband is a low-resolution version of the original image, while the higher-frequency subbands contain edge information. The transform functions can be cascaded to provide further decomposition into more subbands. If the cascade is applied to all four of the first-stage subbands (in the case of a four-band decomposition), then a uniform structure of sixteen bands is obtained. If the cascade is applied only to the low-frequency subband, an octave structure of seven bands results. Functions for the inverse transforms are also given. These functions can be used for image data compression systems. The transforms do not in themselves produce data compression, but prepare the data for quantization and compression. Sample quantization functions for subbands are also given. A typical compression approach is to subband the image data, quantize it, then use statistical coding (e.g., run-length coding followed by Huffman coding) for compression. Contour plots of image data and subbanded data are shown.
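
    The block-transform-to-subband idea can be sketched with the smallest case, the 2x2 Walsh-Hadamard transform, in NumPy rather than MATLAB: transforming each 2x2 block and regrouping same-index coefficients yields four spatial-frequency subbands, and the orthonormal scaling makes the transform exactly invertible. The test image is synthetic.

```python
import numpy as np

# 2x2 Walsh-Hadamard block transform, regrouped into four subbands.
rng = np.random.default_rng(3)
img = rng.normal(size=(8, 8))

a, b = img[0::2, 0::2], img[0::2, 1::2]       # four samples of each 2x2 block
c, d = img[1::2, 0::2], img[1::2, 1::2]

ll = (a + b + c + d) / 2.0                    # low frequency: low-res image
lh = (a - b + c - d) / 2.0                    # horizontal detail
hl = (a + b - c - d) / 2.0                    # vertical detail
hh = (a - b - c + d) / 2.0                    # diagonal detail

# Inverse: the same orthonormal combinations reassemble the image exactly.
rec = np.empty_like(img)
rec[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
rec[0::2, 1::2] = (ll - lh + hl - hh) / 2.0
rec[1::2, 0::2] = (ll + lh - hl - hh) / 2.0
rec[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
print(np.allclose(rec, img))                  # True
```

    Cascading the same step on `ll` (or on all four subbands) gives the octave or uniform decompositions described above.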

  12. Color Imaging management in film processing

    NASA Astrophysics Data System (ADS)

    Tremeau, Alain; Konik, Hubert; Colantoni, Philippe

    2003-12-01

    The latest research projects in the LIGIV laboratory concern the capture, processing, archiving and display of color images, taking into account the trichromatic nature of the Human Visual System (HVS). One of these projects addresses digital cinematographic film sequences of high resolution and dynamic range. This project aims to optimize the use of content for post-production operators and for the end user. The studies presented in this paper address the use of metadata to optimise the consumption of video content on a device of the user's choice, independent of the nature of the equipment that captured the content. Optimising consumption includes enhancing the quality of image reconstruction on a display. Another part of this project addresses the content-based adaptation of image display. The main focus is on Region of Interest (ROI) operations, based on the ROI concepts of MPEG-7. The aim of this second part is to characterize and ensure the conditions of display even if the display device or display medium changes. This requires, firstly, the definition of a reference color space and of bi-directional color transformations for each peripheral device (camera, display, film recorder, etc.). The complicating factor is that different devices have different color gamuts, depending on the chromaticity of their primaries and the ambient illumination under which they are viewed. To match the displayed image to the intended appearance, all kinds of production metadata (camera specification, camera colour primaries, lighting conditions) should be associated with the film material. Metadata and content together build rich content. The author is assumed to specify conditions as known from digital graphic arts. To control image pre-processing and post-processing, these specifications should be contained in the film's metadata. The specifications are related to ICC profiles but additionally need to consider mesopic viewing conditions.

  13. A prototype data acquisition and processing system for Schumann resonance measurements

    NASA Astrophysics Data System (ADS)

    Tatsis, Giorgos; Votis, Constantinos; Christofilakis, Vasilis; Kostarakis, Panos; Tritakis, Vasilis; Repapis, Christos

    2015-12-01

    In this paper, a cost-effective prototype data acquisition system specifically designed for Schumann resonance measurements, together with an adequate signal processing method, is described in detail. The implemented system captures the magnetic component of the Schumann resonance signal using a magnetic antenna, at sampling rates much higher than the Nyquist rate for efficient signal improvement. New software was developed to obtain the characteristics of the individual resonances of the SR spectrum. The processing techniques used in this software are analyzed thoroughly below. The system's performance and operation are evaluated using preliminary measurements taken in the region of Northwest Greece.
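
    The core spectral step of such processing can be sketched as an averaged windowed periodogram (Welch's method) from which the resonance peaks are read off. The 7.8 Hz test tone, sampling rate and segment length below are illustrative assumptions, not the prototype's actual parameters.

```python
import numpy as np

# Welch-style PSD estimate: split the record into windowed segments,
# average their periodograms, and locate the spectral peak.
fs = 512                                       # assumed sampling rate (Hz)
t = np.arange(60 * fs) / fs                    # one minute of data
rng = np.random.default_rng(4)
sig = np.sin(2 * np.pi * 7.8 * t) + rng.normal(0.0, 1.0, t.size)

nseg = 4096                                    # 8 s segments -> 0.125 Hz bins
win = np.hanning(nseg)
segs = sig[: t.size // nseg * nseg].reshape(-1, nseg)
psd = np.mean(np.abs(np.fft.rfft(segs * win, axis=1)) ** 2, axis=0)
freqs = np.fft.rfftfreq(nseg, 1 / fs)

peak = freqs[np.argmax(psd)]
print(peak)                                    # ~7.8 Hz, the first SR mode
```

    Fitting Lorentzian profiles around each detected peak would then yield the per-mode center frequencies and widths.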

  14. [Digital thoracic radiology: devices, image processing, limits].

    PubMed

    Frija, J; de Géry, S; Lallouet, F; Guermazi, A; Zagdanski, A M; De Kerviler, E

    2001-09-01

    In the first part, the different techniques of digital thoracic radiography are described. Computed radiography with phosphor plates, being the most widely commercialized, receives the most emphasis, but the other detectors are also described: the selenium-coated drum and direct digital radiography with selenium detectors, as well as indirect flat-panel detectors and a system with four high-resolution CCD cameras. In the second part the most important image processing methods are discussed: gradation curves, unsharp mask processing, the MUSICA system, dynamic range compression or reduction, and dual-energy subtraction. In the last part the advantages and drawbacks of computed thoracic radiography are emphasized. The most important are the almost consistently good quality of the images and the possibilities for image processing.

  15. Image processing via VLSI: A concept paper

    NASA Technical Reports Server (NTRS)

    Nathan, R.

    1982-01-01

    Implementing specific image processing algorithms via very large scale integrated (VLSI) systems offers a potent solution to the problem of handling high data rates. Two algorithms stand out as particularly critical: geometric map transformation and filtering or correlation. These two functions form the basis for data calibration, registration and mosaicking. VLSI presents itself as an inexpensive ancillary function to be added to almost any general-purpose computer, and if the geometry and filter algorithms are implemented in VLSI, the processing-rate bottleneck would be significantly relieved. A development approach is presented that identifies the image processing functions that limit present systems with respect to future throughput needs, translates these functions to algorithms, implements them via VLSI technology, and interfaces the hardware to a general-purpose digital computer.

  16. Technique for real-time frontal face image acquisition using stereo system

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.; Vizilter, Yuri V.; Kudryashov, Yuri I.

    2013-04-01

    Most existing face recognition systems are based on two-dimensional images, and the quality of recognition is rather high for frontal images of the face but decreases significantly for other kinds of images. For such systems to operate correctly, it is necessary to compensate for changes in the posture of the person (the camera angle). There are methods for transforming a 2D image of the person to a canonical orientation, but their efficiency depends on the accuracy of determining specific anthropometric points, and problems can arise when the person's face is partly occluded. Another approach is to have a set of person images for different view angles for further processing, but the need to store and process a large number of two-dimensional images makes this method considerably time-consuming. The proposed technique uses a stereo system for fast generation of a 3D model of the person's face and for obtaining a face image in a given orientation using this 3D model. Real-time performance is provided by implementing graph-cut methods for 3D face surface reconstruction and by applying the CUDA software library for parallel computation.

  17. A Practical Approach to Quantitative Processing and Analysis of Small Biological Structures by Fluorescent Imaging

    PubMed Central

    Noller, Crystal M.; Boulina, Maria; McNamara, George; Szeto, Angela; McCabe, Philip M.

    2016-01-01

    Standards in quantitative fluorescent imaging are vaguely recognized and receive insufficient discussion. A common best practice is to acquire images at the Nyquist rate, where the highest signal frequency is assumed to be the highest obtainable resolution of the imaging system. However, this particular standard is set to ensure that all obtainable information is collected. The objective of the current study was to demonstrate that, for quantification purposes, these correctly set acquisition rates can be redundant; instead, the linear size of the objects of interest can be used to calculate sufficient information density in the image. We describe optimized image acquisition parameters and unbiased methods for processing and quantification of medium-size cellular structures. Sections of rabbit aortas were immunohistochemically stained to identify and quantify sympathetic varicosities >2 μm in diameter. Images were processed to reduce background noise and segment objects using free, open-access software. Calculations of the optimal sampling rate for the experiment were based on the size of the objects of interest. The effect of differing sampling rates and processing techniques on object quantification was demonstrated. Oversampling led to a substantial increase in file size, whereas undersampling hindered reliable quantification. Quantification of raw and incorrectly processed images generated false structures, misrepresenting the underlying data. The current study emphasizes the importance of defining image-acquisition parameters based on the structure(s) of interest. The proposed postacquisition processing steps effectively removed background and noise, allowed for reliable quantification, and eliminated user bias. This customizable, reliable method for background subtraction and structure quantification provides a reproducible tool for researchers across biologic disciplines. PMID:27182204
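
    The calculation the authors advocate can be sketched in a few lines: derive the pixel size from the linear size of the structures to be quantified rather than from the optical resolution limit. The numbers are illustrative assumptions, except the 2 μm varicosity diameter, which comes from the abstract.

```python
# Pixel size from object size (quantification-driven sampling).
object_diameter_um = 2.0           # smallest structure to quantify
pixels_per_object = 4              # assumed: >= 2 (Nyquist), 4 gives margin

pixel_size_um = object_diameter_um / pixels_per_object
print(pixel_size_um)               # 0.5 um/pixel suffices for quantification

# For comparison, Nyquist sampling of the optical (Rayleigh) resolution:
wavelength_um, na = 0.52, 1.3      # illustrative emission wavelength and NA
rayleigh_um = 0.61 * wavelength_um / na
nyquist_pixel_um = rayleigh_um / 2.0
print(round(nyquist_pixel_um, 3))  # ~0.122 um/pixel: ~4x oversampled here
```

    The roughly fourfold difference in linear sampling translates into a substantial difference in file size, which is the redundancy the study points out.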

  18. Gaia astrometric instrument calibration and image processing

    NASA Astrophysics Data System (ADS)

    Castaneda, J.; Fabricius, C.; Portell, J.; Garralda, N.; González-Vidal, J. J.; Clotet, M.; Torra, J.

    2017-03-01

    The astrometric instrument calibration and image processing are an integral and critical part of the Gaia mission. The data processing starts with a preliminary treatment, on a daily basis, of the most recently received data, and continues with the execution of several processing chains included in a cyclic reduction system. The cyclic processing chains reprocess all the accumulated data in each iteration, adding the latest measurements and recomputing the outputs to improve the quality of the results. This cyclic processing lasts until convergence of the results is achieved, and the catalogue is consolidated and published periodically. In this paper we describe the core of the data processing which has made possible the first catalogue release from the Gaia mission.

  19. Towards the low-dose characterization of beam sensitive nanostructures via implementation of sparse image acquisition in scanning transmission electron microscopy

    NASA Astrophysics Data System (ADS)

    Hwang, Sunghwan; Han, Chang Wan; Venkatakrishnan, Singanallur V.; Bouman, Charles A.; Ortalan, Volkan

    2017-04-01

    Scanning transmission electron microscopy (STEM) has been successfully utilized to investigate the atomic structure and chemistry of materials with atomic resolution. However, STEM's focused electron probe, with its high current density, causes electron-beam damage, including radiolysis and knock-on damage, when it is exposed to beam-sensitive materials. It is therefore highly desirable to decrease the electron dose used in STEM for the investigation of biological/organic molecules, soft materials, and nanomaterials in general. With the recent emergence of novel sparse signal processing theories, such as compressive sensing and model-based iterative reconstruction, possibilities have opened up for operating STEM under a sparse acquisition scheme to reduce the electron dose. In this paper, we report our recent approach to implementing sparse acquisition in STEM mode, executed by a random sparse scan and a signal processing algorithm called model-based iterative reconstruction (MBIR). In this method, a small portion, such as 5%, of randomly chosen unit sampling areas (i.e., electron probe positions), which correspond to pixels of a STEM image, within the region of interest (ROI) of the specimen is scanned with an electron probe to obtain a sparse image. Sparse images are then reconstructed using the MBIR inpainting algorithm to produce an image of the specimen at the original resolution that is consistent with an image obtained using conventional scanning methods. Experimental results for sampling rates down to 5% show consistency with the full STEM image acquired by the conventional scanning method. However, practical limitations of conventional STEM instruments, such as internal delays of the STEM control electronics and the continuous electron gun emission, currently prevent sparse-acquisition STEM from reaching its full potential for the low-dose imaging conditions required to investigate beam-sensitive materials.
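    A rough sketch of the sparse-acquisition idea: scan a random subset of probe positions, then fill in the unmeasured pixels. Here a simple nearest-neighbor fill stands in for the MBIR inpainting algorithm, which is far more sophisticated; the sampling fraction and function names are illustrative:

```python
import random

def sparse_scan(image, fraction, seed=0):
    """Sample a random `fraction` of probe positions; return {(row, col): value}."""
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    n = max(1, int(fraction * h * w))
    positions = rng.sample([(r, c) for r in range(h) for c in range(w)], n)
    return {(r, c): image[r][c] for (r, c) in positions}

def inpaint_nearest(samples, h, w):
    """Fill each unmeasured pixel from the nearest measured one (stand-in for MBIR)."""
    out = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            nearest = min(samples, key=lambda p: (p[0] - r) ** 2 + (p[1] - c) ** 2)
            out[r][c] = samples[nearest]
    return out
```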

  20. EOS image data processing system definition study

    NASA Technical Reports Server (NTRS)

    Gilbert, J.; Honikman, T.; Mcmahon, E.; Miller, E.; Pietrzak, L.; Yorsz, W.

    1973-01-01

    The Image Processing System (IPS) requirements and configuration are defined for NASA-sponsored advanced technology Earth Observatory System (EOS). The scope included investigation and definition of IPS operational, functional, and product requirements considering overall system constraints and interfaces (sensor, etc.) The scope also included investigation of the technical feasibility and definition of a point design reflecting system requirements. The design phase required a survey of present and projected technology related to general and special-purpose processors, high-density digital tape recorders, and image recorders.

  1. Advanced communications technologies for image processing

    NASA Technical Reports Server (NTRS)

    Likens, W. C.; Jones, H. W.; Shameson, L.

    1984-01-01

    It is essential for image analysts to have the capability to link to remote facilities as a means of accessing both data bases and high-speed processors. This can increase productivity through enhanced data access and minimization of delays. New technology is emerging to provide the high communication data rates needed in image processing. These developments include multi-user sharing of high bandwidth (60 megabits per second) Time Division Multiple Access (TDMA) satellite links, low-cost satellite ground stations, and high speed adaptive quadrature modems that allow 9600 bit per second communications over voice-grade telephone lines.

  2. Image processing with JPEG2000 coders

    NASA Astrophysics Data System (ADS)

    Śliwiński, Przemysław; Smutnicki, Czesław; Chorażyczewski, Artur

    2008-04-01

    In this note, several wavelet-based image processing algorithms are presented. The denoising algorithm is derived from Donoho's thresholding. The rescaling algorithm reuses the subdivision scheme of Sweldens' lifting, while the sensor linearization procedure exploits system identification algorithms developed for nonlinear dynamic systems. The proposed autofocus algorithm is a passive one; it works in the wavelet domain and relies on properties of the lens transfer function. The common advantage of these algorithms is that they can easily be implemented within a JPEG2000 image compression standard encoder, offering simplification of the final circuitry (or the software package) and a reduction in power consumption (program size, respectively) compared to solutions based on separate components.
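    Donoho-style thresholding, the basis of the denoising algorithm, can be illustrated with a single-level Haar transform: shrink the detail coefficients toward zero, then invert. This minimal sketch uses the Haar wavelet for clarity, not the 9/7 or 5/3 wavelets a JPEG2000 encoder would actually provide:

```python
def haar_1d(x):
    """One level of the orthonormal Haar transform: (approximation, detail)."""
    s = 2 ** 0.5
    a = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return a, d

def ihaar_1d(a, d):
    """Inverse of haar_1d."""
    s = 2 ** 0.5
    x = []
    for ai, di in zip(a, d):
        x.extend([(ai + di) / s, (ai - di) / s])
    return x

def soft(v, t):
    """Donoho soft thresholding: shrink each coefficient toward zero by t."""
    return [max(abs(vi) - t, 0.0) * (1 if vi >= 0 else -1) for vi in v]

def denoise(x, t):
    """Threshold the detail band only; the approximation band passes through."""
    a, d = haar_1d(x)
    return ihaar_1d(a, soft(d, t))
```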

  3. Development of filter exchangeable 3CCD camera for multispectral imaging acquisition

    NASA Astrophysics Data System (ADS)

    Lee, Hoyoung; Park, Soo Hyun; Kim, Moon S.; Noh, Sang Ha

    2012-05-01

    There are many methods to acquire multispectral images, but a dynamically band-selective, area-scan multispectral camera has not yet been developed. This research focused on the development of a filter-exchangeable 3CCD camera modified from a conventional 3CCD camera. The camera consists of an F-mount lens, an image splitter without dichroic coating, three bandpass filters, three image sensors, a filter-exchangeable frame, and an electric circuit for parallel image signal processing. In addition, firmware and application software have been developed. Remarkable improvements over a conventional 3CCD camera are the redesigned image splitter and the filter-exchangeable frame. Computer simulation is required to visualize the pathway of rays inside the prism when redesigning the image splitter; the dimensions of the splitter were determined by simulation with options of BK7 glass and non-dichroic coating. These properties were chosen to obtain full-wavelength rays on all film planes. The image splitter was verified with two narrow-waveband line lasers. The filter-exchangeable frame is designed so that bandpass filters can be swapped without changing the displacement of the image sensors on the film plane. The developed 3CCD camera was evaluated in the detection of scab and bruise on Fuji apples. As a result, the filter-exchangeable 3CCD camera offers meaningful functionality for various multispectral applications that require exchanging bandpass filters.

  4. Digital signal and image processing in echocardiography. The American Society of Echocardiography.

    PubMed

    Skorton, D J; Collins, S M; Garcia, E; Geiser, E A; Hillard, W; Koppes, W; Linker, D; Schwartz, G

    1985-12-01

    Digital signal and image processing techniques are acquiring an increasingly important role in the generation and analysis of cardiac images. This is particularly true of 2D echocardiography, in which image acquisition, manipulation, and storage within the echocardiograph, as well as quantitative analysis of echocardiographic data by means of "off-line" systems, depend upon digital techniques. The increasing role of computers in echocardiography makes it essential that echocardiographers and technologists understand the basic principles of digital techniques applied to echocardiographic instrumentation and data analysis. In this article, we have discussed digital techniques as applied to image generation (digital scan conversion, preprocessing, and postprocessing) as well as to the analysis of image data (computer-assisted border detection, 3D reconstruction, tissue characterization, and contrast echocardiography); a general introduction to off-line analysis systems was also given. Experience with other cardiac imaging methods indicates that digital techniques will likely play a dominant role in the future of echocardiographic imaging.

  5. Performance of the front end electronics and data acquisition system for the SLD Cherenkov Ring Imaging Detector

    SciTech Connect

    Abe, K.; Hasegawa, K.; Suekane, F.; Yuta, H.; Antilogus, P.; Aston, D.; Bienz, T.; Bird, F.; Dasu, S.; Dolinsky, S.; Dunwoodie, W.; Hallewell, G.; Hoeflich, J.; Kawahara, H.; Kwon, Y.; Leith, D.W.G.S.; Marshall, D.; Muller, D.; Nagamine, T.; Oxoby, G.; Pavel, T.J.; Ratcliff, B.; Rensing, P.; Schultz, D.; Shapiro, S.; Simopoulos, C.; Solodov, E.; Stiles, P.; Toge, N.; Va'vra, J.

    1991-11-01

    The front end electronics and data acquisition system for the SLD barrel Cherenkov Ring Imaging Detector (CRID) are described. This electronics must provide a 1% charge division measurement with a maximum acceptable noise level of 2000 electrons (rms). Noise and system performance results are presented for the initial SLD engineering run data.
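    Charge division locates a hit from the charge collected at the two ends of a resistive electrode. A sketch with textbook error propagation: the 2000-electron noise figure comes from the abstract, while the signal charge and function names are illustrative assumptions:

```python
def charge_division_position(q_left, q_right):
    """Fractional position along a resistive electrode from the two end charges."""
    total = q_left + q_right
    if total <= 0:
        raise ValueError("no collected charge")
    return q_right / total

def position_rms_error(q_left, q_right, noise_rms_e=2000.0):
    """RMS error of the ratio, propagating uncorrelated end-channel noise (illustrative)."""
    total = q_left + q_right
    return noise_rms_e * (q_left ** 2 + q_right ** 2) ** 0.5 / total ** 2

# With a hypothetical 2e5-electron signal split evenly, two 2000 e (rms) noise
# sources give ~0.7% position error, inside the 1% target quoted in the abstract.
err = position_rms_error(1e5, 1e5)
```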

  6. Translational motion compensation in ISAR image processing.

    PubMed

    Wu, H; Grenier, D; Delisle, G Y; Fang, D G

    1995-01-01

    In inverse synthetic aperture radar (ISAR) imaging, the target rotational motion with respect to the radar line of sight contributes to the imaging ability, whereas the translational motion must be compensated out. This paper presents a novel two-step approach to translational motion compensation using an adaptive range tracking method for range bin alignment and a recursive multiple-scatterer algorithm (RMSA) for signal phase compensation. The initial step of RMSA is equivalent to the dominant-scatterer algorithm (DSA). An error-compensating point source is then recursively synthesized from the selected range bins, where each contains a prominent scatterer. Since the clutter-induced phase errors are reduced by phase averaging, the image speckle noise can be reduced significantly. Experimental data processing for a commercial aircraft and computer simulations confirm the validity of the approach.
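    The two compensation steps can be caricatured in a few lines: integer range-bin alignment by maximizing the circular cross-correlation of profile envelopes, followed by dominant-scatterer (DSA) phase removal — the initial step of RMSA. The recursive synthesis of an error-compensating point source is omitted, and all names are illustrative:

```python
import cmath

def align_shift(reference, profile):
    """Integer circular shift of `profile` that best matches `reference` (envelope correlation)."""
    n = len(reference)
    def score(s):
        return sum(abs(reference[i]) * abs(profile[(i + s) % n]) for i in range(n))
    return max(range(n), key=score)

def dsa_phase_compensate(pulses, bin_index):
    """Dominant-scatterer algorithm: remove each pulse's phase at the chosen range bin."""
    out = []
    for pulse in pulses:
        ref = pulse[bin_index]
        phase = cmath.phase(ref) if ref != 0 else 0.0
        corr = cmath.exp(-1j * phase)           # conjugate phase of the dominant scatterer
        out.append([v * corr for v in pulse])
    return out
```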

  7. Revisiting age-of-acquisition effects in Spanish visual word recognition: the role of item imageability.

    PubMed

    Wilson, Maximiliano A; Cuetos, Fernando; Davies, Rob; Burani, Cristina

    2013-11-01

    Word age-of-acquisition (AoA) affects reading. The mapping hypothesis predicts AoA effects when input-output mappings are arbitrary. In Spanish, the orthography-to-phonology mappings required for word naming are consistent; therefore, no AoA effects are expected. Nevertheless, AoA effects have been found, motivating the present investigation of how AoA can affect reading in Spanish. Four experiments were run to examine reading with a factorial design manipulating AoA and frequency. In Experiments 1 and 2 (immediate and speeded naming), only word frequency affected word naming. In Experiment 3 (lexical decision), both AoA and frequency affected word recognition. In Experiment 4 (immediate naming with highly imageable items), both frequency and AoA affected naming. The results suggest that highly imageable items induce a larger reliance on semantics in reading aloud. Such reliance causes faster naming of earlier acquired words because the corresponding concepts have richer visual and sensory features acquired mainly through direct sensory experience.

  8. Computer image processing in marine resource exploration

    NASA Technical Reports Server (NTRS)

    Paluzzi, P. R.; Normark, W. R.; Hess, G. R.; Hess, H. D.; Cruickshank, M. J.

    1976-01-01

    Pictographic data or imagery is commonly used in marine exploration. Pre-existing image processing techniques (software) similar to those used on imagery obtained from unmanned planetary exploration were used to improve marine photography and side-scan sonar imagery. Features and details not visible by conventional photo processing methods were enhanced by filtering and noise removal on selected deep-sea photographs. Information gained near the periphery of photographs allows improved interpretation and facilitates construction of bottom mosaics where overlapping frames are available. Similar processing techniques were applied to side-scan sonar imagery, including corrections for slant range distortion, and along-track scale changes. The use of digital data processing and storage techniques greatly extends the quantity of information that can be handled, stored, and processed.

  9. IMAGE 100: The interactive multispectral image processing system

    NASA Technical Reports Server (NTRS)

    Schaller, E. S.; Towles, R. W.

    1975-01-01

    The need for rapid, cost-effective extraction of useful information from vast quantities of multispectral imagery available from aircraft or spacecraft has resulted in the design, implementation and application of a state-of-the-art processing system known as IMAGE 100. Operating on the general principle that all objects or materials possess unique spectral characteristics or signatures, the system uses this signature uniqueness to identify similar features in an image by simultaneously analyzing signatures in multiple frequency bands. Pseudo-colors, or themes, are assigned to features having identical spectral characteristics. These themes are displayed on a color CRT, and may be recorded on tape, film, or other media. The system was designed to incorporate key features such as interactive operation, user-oriented displays and controls, and rapid-response machine processing. Owing to these features, the user can readily control and/or modify the analysis process based on his knowledge of the input imagery. Effective use can be made of conventional photographic interpretation skills and state-of-the-art machine analysis techniques in the extraction of useful information from multispectral imagery. This approach results in highly accurate multitheme classification of imagery in seconds or minutes rather than the hours often involved in processing using other means.

  10. Multidimensional energy operator for image processing

    NASA Astrophysics Data System (ADS)

    Maragos, Petros; Bovik, Alan C.; Quatieri, Thomas F.

    1992-11-01

    The 1-D nonlinear differential operator Ψ(f) = (f′)² − f·f″ has recently been introduced to signal processing and has been found very useful for estimating the parameters of sinusoids and the modulating signals of AM-FM signals. It is called an energy operator because it can track the energy of an oscillator source generating a sinusoidal signal. In this paper we introduce the multidimensional extension Φ(f) = ‖∇f‖² − f·∇²f of the 1-D energy operator and briefly outline some of its applications to image processing. We discuss some interesting properties of the multidimensional operator and develop demodulation algorithms to estimate the amplitude envelope and instantaneous frequencies of 2-D spatially-varying AM-FM signals, which can model image texture. The attractive features of the multidimensional operator and the related amplitude/frequency demodulation algorithms are their simplicity, efficiency, and ability to track instantaneously varying spatial modulation patterns.
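    The standard discrete counterpart of the operator is Ψ[x(n)] = x(n)² − x(n−1)·x(n+1), and a straightforward 2-D extension sums the 1-D operator along rows and columns. A sketch (the discretization is the widely used Teager-Kaiser form, not necessarily the paper's exact scheme):

```python
import math

def teager_1d(x):
    """Discrete energy operator: psi[n] = x[n]**2 - x[n-1]*x[n+1]."""
    return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

def energy_2d(img):
    """Discrete analogue of Phi(f): apply the 1-D operator along rows and columns and sum."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            row_term = img[r][c] ** 2 - img[r][c - 1] * img[r][c + 1]
            col_term = img[r][c] ** 2 - img[r - 1][c] * img[r + 1][c]
            out[r][c] = row_term + col_term
    return out

# For a sinusoid x[n] = A*cos(W*n), the output is the constant A**2 * sin(W)**2 —
# this is how the operator tracks the energy of the generating oscillator.
psi = teager_1d([2.0 * math.cos(0.5 * n) for n in range(12)])
```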

  11. Probabilistic Round Trip Contamination Analysis of a Mars Sample Acquisition and Handling Process Using Markovian Decompositions

    NASA Technical Reports Server (NTRS)

    Hudson, Nicolas; Lin, Ying; Barengoltz, Jack

    2010-01-01

    A method is developed for evaluating the probability of a Viable Earth Microorganism (VEM) contaminating a sample during the sample acquisition and handling (SAH) process of a potential future Mars Sample Return mission. A scenario is analyzed in which multiple core samples would be acquired using a rotary percussive coring tool deployed from an arm on a MER-class rover. The analysis is conducted in a structured way by decomposing the sample acquisition and handling process into a series of discrete time steps and breaking the physical system into a set of relevant components. At each discrete time step, two key functions are defined: the probability of a VEM being released from each component, and the transport matrix, which represents the probability of VEM transport from one component to another. By defining the expected number of VEMs on each component at the start of the sampling process, these decompositions allow the expected number of VEMs on each component at each sampling step to be represented as a Markov chain. This formalism provides a rigorous mathematical framework in which to analyze the probability of a VEM entering the sample chain, and it makes the analysis tractable by breaking the process down into small analyzable steps.
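    The Markov decomposition reduces to repeated multiplication of the expected-count vector by the transport matrix. A sketch with a hypothetical three-component chain — the components, probabilities, and starting counts are invented for illustration, not taken from the analysis:

```python
def propagate_expected_vems(expected, transport, steps=1):
    """Expected VEM count per component after `steps` transfer events (v <- v @ T)."""
    n = len(expected)
    v = list(expected)
    for _ in range(steps):
        v = [sum(v[i] * transport[i][j] for i in range(n)) for j in range(n)]
    return v

# Hypothetical components: [coring bit, rover arm, sample]. Each row gives where
# a VEM on that component ends up after one step; rows sum to 1 (counts conserved).
T = [[0.95, 0.04, 0.01],
     [0.00, 0.99, 0.01],
     [0.00, 0.00, 1.00]]
after_one = propagate_expected_vems([100.0, 10.0, 0.0], T)
```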

  12. Processing strategies and software solutions for data-independent acquisition in mass spectrometry.

    PubMed

    Bilbao, Aivett; Varesio, Emmanuel; Luban, Jeremy; Strambio-De-Castillia, Caterina; Hopfgartner, Gérard; Müller, Markus; Lisacek, Frédérique

    2015-03-01

    Data-independent acquisition (DIA) offers several advantages over data-dependent acquisition (DDA) schemes for characterizing complex protein digests analyzed by LC-MS/MS. In contrast to the sequential detection, selection, and analysis of individual ions during DDA, DIA systematically parallelizes the fragmentation of all detectable ions within a wide m/z range regardless of intensity, thereby providing broader dynamic range of detected signals, improved reproducibility for identification, better sensitivity, and accuracy for quantification, and, potentially, enhanced proteome coverage. To fully exploit these advantages, composite or multiplexed fragment ion spectra generated by DIA require more elaborate processing algorithms compared to DDA. This review examines different DIA schemes and, in particular, discusses the concepts applied to and related to data processing. Available software implementations for identification and quantification are presented as comprehensively as possible and examples of software usage are cited. Processing workflows, including complete proprietary frameworks or combinations of modules from different open source data processing packages are described and compared in terms of software availability and usability, programming language, operating system support, input/output data formats, as well as the main principles employed in the algorithms used for identification and quantification. This comparative study concludes with further discussion of current limitations and expectable improvements in the short- and midterm future.

  13. Determinants of famous name processing speed: age of acquisition versus semantic connectedness.

    PubMed

    Smith-Spark, James H; Moore, Viv; Valentine, Tim

    2013-02-01

    The age of acquisition (AoA) and the amount of biographical information known about celebrities have been independently shown to influence the processing of famous people. In this experiment, we investigated the facilitative contribution of both factors to famous name processing. Twenty-four mature adults participated in a familiarity judgement task, in which the names of famous people were grouped orthogonally by AoA and by the number of bits of biographical information known about them (number of facts known; NoFK). Age of acquisition was found to have a significant effect on both reaction time (RT) and accuracy of response, but NoFK did not. The RT data also revealed a significant AoA×NoFK interaction. The amount of information known about a celebrity played a facilitative role in the processing of late-acquired, but not early-acquired, celebrities. Once AoA is controlled, it would appear that the semantic system ceases to have a significant overall influence on the processing of famous people. The pre-eminence of AoA over semantic connectedness is considered in the light of current theories of AoA and how their influence might interact.

  14. Collecting Samples in Gale Crater, Mars; an Overview of the Mars Science Laboratory Sample Acquisition, Sample Processing and Handling System

    NASA Astrophysics Data System (ADS)

    Anderson, R. C.; Jandura, L.; Okon, A. B.; Sunshine, D.; Roumeliotis, C.; Beegle, L. W.; Hurowitz, J.; Kennedy, B.; Limonadi, D.; McCloskey, S.; Robinson, M.; Seybold, C.; Brown, K.

    2012-09-01

    The Mars Science Laboratory Mission (MSL), scheduled to land on Mars in the summer of 2012, consists of a rover and a scientific payload designed to identify and assess the habitability, geological, and environmental histories of Gale crater. Unraveling the geologic history of the region and providing an assessment of present and past habitability requires an evaluation of the physical and chemical characteristics of the landing site; this includes providing an in-depth examination of the chemical and physical properties of Martian regolith and rocks. The MSL Sample Acquisition, Processing, and Handling (SA/SPaH) subsystem will be the first in-situ system designed to acquire interior rock and soil samples from Martian surface materials. These samples are processed, separated into fine particles, and distributed to two onboard analytical science instruments, SAM (Sample Analysis at Mars Instrument Suite) and CheMin (Chemistry and Mineralogy), or to a sample analysis tray for visual inspection. The SA/SPaH subsystem is also responsible for the placement of the two contact instruments, the Alpha Particle X-Ray Spectrometer (APXS) and the Mars Hand Lens Imager (MAHLI), on rock and soil targets. Finally, there is a Dust Removal Tool (DRT) to remove dust particles from rock surfaces for subsequent analysis by the contact and/or mast-mounted instruments (e.g., the Mast Cameras (MastCam) and the Chemistry and Micro-Imaging instruments (ChemCam)).

  15. Novel image processing approach to detect malaria

    NASA Astrophysics Data System (ADS)

    Mas, David; Ferrer, Belen; Cojoc, Dan; Finaurini, Sara; Mico, Vicente; Garcia, Javier; Zalevsky, Zeev

    2015-09-01

    In this paper we present a novel image processing algorithm providing good preliminary capabilities for in vitro detection of malaria. The proposed concept is based upon analysis of the temporal variation of each pixel. Changes in dark pixels indicate that intracellular activity has occurred, signaling the presence of the malaria parasite inside the cell. Preliminary experimental results, involving analysis of red blood cells that were either healthy or infected with malaria parasites, validated the potential benefit of the proposed numerical approach.
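    Per-pixel temporal variation can be scored as the standard deviation of each pixel across the frame stack; pixels fluctuating above a threshold become candidates for parasite activity. This sketch assumes grayscale frames and an ad hoc threshold, and is not the authors' exact algorithm:

```python
def temporal_std(frames):
    """Per-pixel standard deviation across a stack of frames (lists of rows)."""
    t = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            vals = [frames[k][r][c] for k in range(t)]
            mean = sum(vals) / t
            out[r][c] = (sum((v - mean) ** 2 for v in vals) / t) ** 0.5
    return out

def flag_active(std_map, threshold):
    """Mark pixels whose temporal fluctuation exceeds `threshold` (candidate infected cells)."""
    return [[v > threshold for v in row] for row in std_map]
```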

  16. Digital image processing of vascular angiograms

    NASA Technical Reports Server (NTRS)

    Selzer, R. H.; Blankenhorn, D. H.; Beckenbach, E. S.; Crawford, D. W.; Brooks, S. H.

    1975-01-01

    A computer image processing technique was developed to estimate the degree of atherosclerosis in the human femoral artery. With an angiographic film of the vessel as input, the computer was programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements were combined into an atherosclerosis index, which was found to correlate well with both visual and chemical estimates of atherosclerotic disease.

  17. IPLIB (Image processing library) user's manual

    NASA Technical Reports Server (NTRS)

    Faulcon, N. D.; Monteith, J. H.; Miller, K.

    1985-01-01

    IPLIB is a collection of HP FORTRAN 77 subroutines and functions that facilitate the use of a COMTAL image processing system driven by an HP-1000 computer. It is intended for programmers who want to use the HP 1000 to drive the COMTAL Vision One/20 system. It is assumed that the programmer knows HP 1000 FORTRAN 77 or at least one FORTRAN dialect. It is also assumed that the programmer has some familiarity with the COMTAL Vision One/20 system.

  18. Sorting Olive Batches for the Milling Process Using Image Processing

    PubMed Central

    Puerto, Daniel Aguilera; Martínez Gila, Diego Manuel; Gámez García, Javier; Gómez Ortega, Juan

    2015-01-01

    The quality of virgin olive oil obtained in the milling process is directly bound to the characteristics of the olives. Hence, correct classification of the different incoming olive batches is crucial to reach the maximum quality of the oil. The aim of this work is to provide an automatic inspection system, based on computer vision, that classifies different batches of olives entering the milling process. The classification is based on the differentiation between ground and tree olives. For this purpose, three different varieties have been studied (Picudo, Picual and Hojiblanco). The samples were obtained by picking the olives directly from the tree or from the ground. The feature vectors of the samples were obtained from the olive image histograms. Moreover, different image preprocessing techniques were employed, and two classification techniques were used: discriminant analysis and neural networks. The proposed methodology has been validated successfully, obtaining good classification results. PMID:26147729
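    A histogram-based feature vector of the kind described can be computed as below; the bin count, value range, and normalization are illustrative choices, not the paper's:

```python
def histogram_features(channel, bins=8, vmax=256):
    """Normalized intensity histogram of one image channel as a feature vector."""
    counts = [0] * bins
    n = 0
    for row in channel:
        for v in row:
            counts[min(v * bins // vmax, bins - 1)] += 1
            n += 1
    return [c / n for c in counts]   # fractions sum to 1, independent of image size
```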

  19. Memory acquisition and retrieval impact different epigenetic processes that regulate gene expression

    PubMed Central

    2015-01-01

    Background A fundamental question in neuroscience is how memories are stored and retrieved in the brain. Long-term memory formation requires transcription, translation and epigenetic processes that control gene expression. Thus, characterizing genome-wide the transcriptional changes that occur after memory acquisition and retrieval is of broad interest and importance. Genome-wide technologies are commonly used to interrogate transcriptional changes in discovery-based approaches. Their ability to increase scientific insight beyond traditional candidate gene approaches, however, is usually hindered by batch effects and other sources of unwanted variation, which are particularly hard to control in the study of brain and behavior. Results We examined genome-wide gene expression after contextual conditioning in the mouse hippocampus, a brain region essential for learning and memory, at all the time-points at which inhibiting transcription has been shown to impair memory formation. We show that most of the variance in gene expression is not due to conditioning and that by removing unwanted variance through additional normalization we are able to provide novel biological insights. In particular, we show that genes downregulated by memory acquisition and retrieval impact different functions: chromatin assembly and RNA processing, respectively. Levels of histone 2A variant H2AB are reduced only following acquisition, a finding we confirmed using quantitative proteomics. On the other hand, splicing factor Rbfox1 and NMDA receptor-dependent microRNA miR-219 are only downregulated after retrieval, accompanied by an increase in protein levels of miR-219 target CAMKIIγ. Conclusions We provide a thorough characterization of coding and non-coding gene expression during long-term memory formation. We demonstrate that unwanted variance dominates the signal in transcriptional studies of learning and memory and introduce the removal of unwanted variance through normalization as a

  20. Impact of image acquisition on voxel-based-morphometry investigations of age-related structural brain changes.

    PubMed

    Streitbürger, Daniel-Paolo; Pampel, André; Krueger, Gunnar; Lepsien, Jöran; Schroeter, Matthias L; Mueller, Karsten; Möller, Harald E

    2014-02-15

    A growing number of magnetic resonance imaging studies employ voxel-based morphometry (VBM) to assess structural brain changes. Recent reports have shown that image acquisition parameters may influence VBM results. For systematic evaluation, gray-matter-density (GMD) changes associated with aging were investigated by VBM employing acquisitions with different radiofrequency head coils (12-channel matrix coil vs. 32-channel array), different pulse sequences (MP-RAGE vs. MP2RAGE), and different voxel dimensions (1 mm vs. 0.8 mm). Thirty-six healthy subjects, classified as young, middle-aged, or elderly, participated in the study. Two-sample and paired t-tests revealed significant effects of acquisition parameters (coil, pulse sequence, and resolution) on the estimated age-related GMD changes in cortical and subcortical regions. Potential advantages in tissue classification and segmentation were obtained for MP2RAGE. The 32-channel coil generally outperformed the 12-channel coil, with more benefit for MP2RAGE. Further improvement can be expected from higher resolution if the loss in SNR is accounted for. Use of inconsistent acquisition parameters in VBM analyses is likely to introduce systematic bias. Overall, acquisition and protocol changes require careful adaptation of the VBM analysis strategy before generalized conclusions can be drawn.