Science.gov

Sample records for acquisition image processing

  1. Graphical user interface for image acquisition and processing

    DOEpatents

    Goldberg, Kenneth A.

    2002-01-01

    An event-driven, GUI-based image acquisition interface for the IDL programming environment, designed for charge-coupled device (CCD) camera control and image acquisition directly into the IDL environment, where image manipulation and data analysis can be performed, together with a toolbox of real-time analysis applications. Running the image acquisition hardware directly from IDL removes the need to first save images in one program and then import the data into IDL for analysis in a second step. Bringing the data directly into IDL creates the opportunity to apply IDL image processing and display functions in real time. The program allows control over the available CCD detector parameters, data acquisition, file saving and loading, and image manipulation and processing, all from within IDL. The program is built using IDL's widget libraries to control the on-screen display and user interface.

  2. Networks for image acquisition, processing and display

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.

    1990-01-01

    The human visual system comprises layers of networks which sample, process, and code images. Understanding these networks is a valuable means of understanding human vision and of designing autonomous vision systems based on network processing. Ames Research Center has an ongoing program to develop computational models of such networks. The models predict human performance in detection of targets and in discrimination of displayed information. In addition, the models are artificial vision systems sharing properties with biological vision that has been tuned by evolution for high performance. Properties include variable density sampling, noise immunity, multi-resolution coding, and fault-tolerance. The research stresses analysis of noise in visual networks, including sampling, photon, and processing unit noises. Specific accomplishments include: models of sampling array growth with variable density and irregularity comparable to that of the retinal cone mosaic; noise models of networks with signal-dependent and independent noise; models of network connection development for preserving spatial registration and interpolation; multi-resolution encoding models based on hexagonal arrays (HOP transform); and mathematical procedures for simplifying analysis of large networks.

  3. Towards a Platform for Image Acquisition and Processing on RASTA

    NASA Astrophysics Data System (ADS)

    Furano, Gianluca; Guettache, Farid; Magistrati, Giorgio; Tiotto, Gabriele

    2013-08-01

    This paper presents the architecture of a platform for image acquisition and processing based on commercial hardware and space-qualified hardware. The aim is to extend the Reference Architecture Test-bed for Avionics (RASTA) system into a test-bed that allows different hardware and software solutions in the field of image acquisition and processing to be tested. The platform will allow the integration of space-qualified hardware and Commercial Off The Shelf (COTS) hardware in order to test different architectural configurations. The first implementation is being performed on a low-cost commercial board and on the GR712RC board based on the dual-core Leon3 fault-tolerant processor. The platform will include an actuation module with the aim of implementing a complete pipeline from image acquisition to actuation, making it possible to simulate a realistic scenario involving acquisition and actuation.

  4. An effective data acquisition system using image processing

    NASA Astrophysics Data System (ADS)

    Poh, Chung-How; Poh, Chung-Kiak

    2005-12-01

    The authors investigate a data acquisition system utilising the widely available digital multi-meter and the webcam. The system is suited to applications that require sampling rates of less than about 1 Hz, such as ambient temperature recording or monitoring the charging state of rechargeable batteries. The data displayed on the external digital readout are acquired into the computer through the process of template matching. MATLAB is used as the programming language for processing the captured 2-D images in this demonstration. An RC charging experiment with a characteristic time of approximately 33 s is set up to verify the accuracy of the image-to-data conversion. It is found that the acquired data match the steady-state voltage value displayed by the digital meter once an error detection technique has been devised and implemented in the data acquisition script file. It is possible to acquire a number of different readings simultaneously from various sources with this imaging method by placing several digital readouts within the camera's field of view.
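
    The authors' MATLAB implementation is not given in the abstract; the sketch below illustrates the same template-matching idea in Python with OpenCV. The digit-template file names and the fixed bounding boxes of the meter's digits are hypothetical placeholders, not details from the paper.

```python
# Sketch: read a digital meter from a webcam frame by matching digit
# templates. File names and digit positions below are illustrative.
import cv2
import numpy as np

# Pre-cropped templates of the display's digits (placeholder files).
templates = {d: cv2.imread(f"digit_{d}.png", cv2.IMREAD_GRAYSCALE)
             for d in range(10)}

def read_digit(cell: np.ndarray) -> int:
    """Return the digit whose template best matches this image cell."""
    scores = {}
    for d, tpl in templates.items():
        # Normalized correlation is tolerant of global brightness shifts;
        # the cell must be at least as large as the template.
        res = cv2.matchTemplate(cell, tpl, cv2.TM_CCOEFF_NORMED)
        scores[d] = float(res.max())
    return max(scores, key=scores.get)

frame = cv2.imread("webcam_frame.png", cv2.IMREAD_GRAYSCALE)
# Hypothetical (x, y, w, h) boxes for the three digit positions.
digit_boxes = [(10, 5, 40, 60), (55, 5, 40, 60), (100, 5, 40, 60)]
reading = "".join(str(read_digit(frame[y:y + h, x:x + w]))
                  for (x, y, w, h) in digit_boxes)
print("meter reading:", reading)
```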

  5. PET/CT for radiotherapy: image acquisition and data processing.

    PubMed

    Bettinardi, V; Picchio, M; Di Muzio, N; Gianolli, L; Messa, C; Gilardi, M C

    2010-10-01

    This paper focuses on acquisition and processing methods in positron emission tomography/computed tomography (PET/CT) for radiotherapy (RT) applications. The recent technological evolution of PET/CT systems is described. Particular emphasis is placed on the tools needed for patient positioning and immobilization, to be used in PET/CT studies as well as during RT treatment sessions. The effect of organ and lesion motion due to the patient's respiration on PET/CT imaging is discussed. Breathing protocols proposed to minimize PET/CT spatial mismatches due to respiratory movements are illustrated. The respiratory-gated (RG) 4D-PET/CT techniques, developed to measure and compensate for organ and lesion motion, are then introduced. Finally, a description is provided of different acquisition and data processing techniques implemented with the aim of improving: i) the image quality and quantitative accuracy of PET images, and ii) target volume definition and treatment planning in RT, by using specific and personalised motion information.

  6. System of acquisition and processing of images of dynamic speckle

    NASA Astrophysics Data System (ADS)

    Vega, F.; Torres, C.

    2015-01-01

    In this paper we show the design and implementation of a system for the capture and analysis of dynamic speckle. The device consists of a USB camera, an isolated lighting system for imaging, a 633 nm, 10 mW laser pointer as the coherent light source, a diffuser and a laptop for video processing. The equipment enables the acquisition and storage of video as well as the calculation of different statistical descriptors (global activity accumulation vector, activity accumulation matrix, cross-correlation vector, autocorrelation coefficient, Fujii matrix, etc.). The equipment is designed so that it can be taken directly to the site of the biological sample under study and is currently being used in research projects within the group.
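
    As a rough illustration of two of the descriptors listed above, here is a minimal NumPy sketch of the accumulated activity matrix and the Fujii matrix, assuming the video has already been loaded as a stack of grayscale frames; the array shapes are placeholders.

```python
# Dynamic-speckle descriptors over a (num_frames, H, W) stack.
import numpy as np

def activity_matrix(stack: np.ndarray) -> np.ndarray:
    """Accumulated absolute frame-to-frame differences, shape (H, W)."""
    diffs = np.abs(np.diff(stack.astype(np.float64), axis=0))
    return diffs.sum(axis=0)

def fujii_matrix(stack: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """Fujii's weighted differences: sum_k |I_k - I_k+1| / (I_k + I_k+1)."""
    s = stack.astype(np.float64)
    num = np.abs(s[1:] - s[:-1])
    den = s[1:] + s[:-1] + eps          # eps guards against division by zero
    return (num / den).sum(axis=0)

stack = np.random.rand(64, 240, 320)    # stand-in for a captured video
act = activity_matrix(stack)
fuj = fujii_matrix(stack)
```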

  7. Image Acquisition Context

    PubMed Central

    Bidgood, W. Dean; Bray, Bruce; Brown, Nicolas; Mori, Angelo Rossi; Spackman, Kent A.; Golichowski, Alan; Jones, Robert H.; Korman, Louis; Dove, Brent; Hildebrand, Lloyd; Berg, Michael

    1999-01-01

    Objective: To support clinically relevant indexing of biomedical images and image-related information based on the attributes of image acquisition procedures and the judgments (observations) expressed by observers in the process of image interpretation. Design: The authors introduce the notion of “image acquisition context,” the set of attributes that describe image acquisition procedures, and present a standards-based strategy for utilizing the attributes of image acquisition context as indexing and retrieval keys for digital image libraries. Methods: The authors' indexing strategy is based on an interdependent message/terminology architecture that combines the Digital Imaging and Communications in Medicine (DICOM) standard, the SNOMED (Systematized Nomenclature of Human and Veterinary Medicine) vocabulary, and the SNOMED DICOM microglossary. The SNOMED DICOM microglossary provides context-dependent mapping of terminology to DICOM data elements. Results: The capability of embedding standard coded descriptors in DICOM image headers and image-interpretation reports improves the potential for selective retrieval of image-related information. This favorably affects information management in digital libraries. PMID:9925229

  8. Multi-channel high-speed CMOS image acquisition and pre-processing system

    NASA Astrophysics Data System (ADS)

    Sun, Chun-feng; Yuan, Feng; Ding, Zhen-liang

    2008-10-01

    A new multi-channel high-speed CMOS image acquisition and pre-processing system is designed to realize image acquisition, data transmission, timing control and simple image processing for a high-speed CMOS image sensor. The modular structure, LVDS, and ping-pong cache techniques used in the design of the image data acquisition sub-system ensure real-time data acquisition and transmission. Furthermore, a new adaptive-threshold histogram equalization algorithm based on the reassignment of redundant gray levels is incorporated in the image pre-processing module of the FPGA. An iterative method is used to set the threshold value, and the redundant gray levels are redistributed rationally according to the proportional gray-level intervals. Over-enhancement of the background is restrained and the risk of merging foreground details is reduced. Experiments show that the system can acquire, transmit, store and pre-process image data at up to 590 Mpixels/s, which facilitates the design and realization of subsequent systems.
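
    The abstract does not spell out the exact adaptive threshold rule, so the following is only a minimal sketch of the general clip-and-redistribute idea behind such methods: histogram counts above a threshold are treated as redundant and redistributed evenly before the equalization mapping is built. The clip fraction is an assumed parameter.

```python
# Histogram equalization with clipping and redistribution of the excess.
import numpy as np

def clipped_equalize(img: np.ndarray, clip_fraction: float = 0.01) -> np.ndarray:
    """img: uint8 image; clip_fraction: assumed threshold rule (illustrative)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    clip = clip_fraction * img.size                  # clipping threshold
    excess = np.maximum(hist - clip, 0.0).sum()      # "redundant" counts
    hist = np.minimum(hist, clip) + excess / 256.0   # redistribute evenly
    cdf = np.cumsum(hist)
    lut = np.round(255.0 * (cdf - cdf[0]) / (cdf[-1] - cdf[0])).astype(np.uint8)
    return lut[img]

img = (np.random.rand(480, 640) * 256).astype(np.uint8)
out = clipped_equalize(img)
```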

  9. Infrared imagery acquisition process supporting simulation and real image training

    NASA Astrophysics Data System (ADS)

    O'Connor, John

    2012-05-01

    The increasing use of infrared sensors requires the development of advanced infrared training and simulation tools to meet current Warfighter needs. In order to prepare the force, training and simulation images must be both realistic and consistent with each other in order to be effective and to avoid negative training. The US Army Night Vision and Electronic Sensors Directorate has corrected this deficiency by developing and implementing infrared image collection methods that meet the needs of both real-image trainers and real-time simulations. The author presents innovative methods for the collection of high-fidelity digital infrared images and the associated equipment and environmental standards. The collected images are the foundation for the US Army and USMC Recognition of Combat Vehicles (ROC-V) real-image combat ID training and also support simulations including the Night Vision Image Generator and Synthetic Environment Core. The characteristics, consistency, and quality of these images have contributed to the success of these and other programs. To date, this method has been employed to generate signature sets for over 350 vehicles. The needs of future physics-based simulations will also be met by this data. NVESD's ROC-V image database will support the development of training and simulation capabilities as Warfighter needs evolve.

  10. A Real-Time Image Acquisition And Processing System For A RISC-Based Microcomputer

    NASA Astrophysics Data System (ADS)

    Luckman, Adrian J.; Allinson, Nigel M.

    1989-03-01

    A low-cost image acquisition and processing system has been developed for the Acorn Archimedes microcomputer. Using a Reduced Instruction Set Computer (RISC) architecture, the ARM (Acorn Risc Machine) processor provides instruction speeds suitable for image processing applications. The associated improvement in data transfer rate has allowed real-time video image acquisition without the need for frame-store memory external to the microcomputer. The system comprises real-time video digitising hardware which interfaces directly to the Archimedes memory, and software providing an integrated image acquisition and processing environment. The hardware can digitise a video signal at up to 640 samples per video line, with programmable parameters such as sampling rate and gain. Software support includes a work environment for image capture and processing with pixel, neighbourhood and global operators. A friendly user interface is provided with the help of the Archimedes Operating System WIMP (Windows, Icons, Mouse and Pointer) Manager. Windows provide a convenient way of handling images on the screen, and program control is directed mostly by pop-up menus.

  11. A review of breast tomosynthesis. Part I. The image acquisition process

    SciTech Connect

    Sechopoulos, Ioannis

    2013-01-15

    Mammography is a very well-established imaging modality for the early detection and diagnosis of breast cancer. However, since the introduction of digital imaging to the realm of radiology, more advanced, and especially tomographic, imaging methods have become possible. One of these methods, breast tomosynthesis, has finally been introduced to the clinic for routine everyday use, with the potential to eventually replace mammography for breast cancer screening. In this two-part paper, the extensive research performed during the development of breast tomosynthesis is reviewed, with a focus on the research addressing the medical physics aspects of this imaging modality. This first paper reviews the research performed on the issues relevant to the image acquisition process, including system design, optimization of geometry and technique, x-ray scatter, and radiation dose. The companion paper will review all other aspects of breast tomosynthesis imaging, including the reconstruction process.

  12. Liquid crystal materials and structures for image processing and 3D shape acquisition

    NASA Astrophysics Data System (ADS)

    Garbat, K.; Garbat, P.; Jaroszewicz, L.

    2012-03-01

    Image processing supported by liquid crystal devices has been used in numerous imaging applications, including polarization imaging, digital holography and programmable imaging. Liquid crystals have been extensively studied and are massively used in display and optical processing technology. We present here the main relevant parameters of liquid crystals for image processing and 3D shape acquisition, and we compare the main liquid crystal options that can be used, with their respective advantages. We compare the performance of several types of liquid crystal materials: nematic mixtures with high and medium optical and dielectric anisotropies and relatively low rotational viscosities, and nematic materials that may operate in TN mode in mono- and dual-frequency addressing systems.

  13. Quantitative assessment of the impact of biomedical image acquisition on the results obtained from image analysis and processing

    PubMed Central

    2014-01-01

    Introduction Dedicated, automatic algorithms for image analysis and processing are becoming more and more common in medical diagnosis. When creating dedicated algorithms, many factors must be taken into consideration. They are associated with selecting the appropriate algorithm parameters and taking into account the impact of data acquisition on the results obtained. An important feature of algorithms is the possibility of their use in other medical units by other operators. This problem, namely the operator's (acquisition) impact on the results obtained from image analysis and processing, is shown here with a few examples. Material and method The analysed images were obtained from a variety of medical devices, such as thermal imaging and tomography devices and those working in visible light. The objects of imaging were cellular elements, the anterior segment and fundus of the eye, postural defects and others. In total, almost 200,000 images coming from 8 different medical units were analysed. All image analysis algorithms were implemented in C and MATLAB. Results For various algorithms and methods of medical imaging, the impact of image acquisition on the results obtained is different. There are different levels of algorithm sensitivity to changes in the parameters, for example: (1) for microscope settings and the brightness assessment of cellular elements there is a difference of 8%; (2) for thyroid ultrasound images there is a difference in marking the thyroid lobe area which results in a brightness assessment difference of 2%. The method of image acquisition in image analysis and processing also affects: (3) the accuracy of determining the temperature in the characteristic areas on the patient's back for the thermal method - error of 31%; (4) the accuracy of finding characteristic points in photogrammetric images when evaluating postural defects - error of 11%; (5) the accuracy of performing ablative and non-ablative treatments in cosmetology - error of 18

  14. Automated system for acquisition and image processing for the control and monitoring of boned nopal

    NASA Astrophysics Data System (ADS)

    Luevano, E.; de Posada, E.; Arronte, M.; Ponce, L.; Flores, T.

    2013-11-01

    This paper describes the design and fabrication of a system for acquisition and image processing to control the removal of thorns from the nopal vegetable (Opuntia ficus-indica) in an automated machine that uses pulses from a Nd:YAG laser. The areolas, the areas where thorns grow on the bark of the nopal, are located by applying segmentation algorithms to the images obtained by a CCD. Once the position of the areolas is known, the coordinates are sent to a galvo motor system that steers the laser to interact with all areolas and remove the thorns from the nopal. The electronic system comprises a video decoder, memory for image and software storage, and a digital signal processor for system control. The firmware performs tasks of acquisition, preprocessing, segmentation, recognition and interpretation of the areolas. The system succeeds in identifying the areolas and generating a table of their coordinates, which is sent to the galvo motor system that controls the laser for thorn removal.
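
    A minimal sketch of the locate-and-list step described above, assuming a grayscale image in which areolas are brighter than the bark; the threshold and minimum-area values are illustrative assumptions, not the authors' parameters.

```python
# Threshold, label connected regions, and emit a coordinate table.
import numpy as np
from scipy import ndimage

def areola_coordinates(img: np.ndarray, thresh: float, min_area: int = 20):
    mask = img > thresh                        # simple global threshold
    labels, n = ndimage.label(mask)            # connected components
    coords = []
    for i in range(1, n + 1):
        region = labels == i
        if region.sum() >= min_area:           # reject speckle-sized blobs
            cy, cx = ndimage.center_of_mass(region)
            coords.append((cx, cy))            # (x, y) entry for the galvo table
    return coords

img = np.random.rand(480, 640)                 # stand-in for a CCD frame
table = areola_coordinates(img, thresh=0.9)
```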

  15. Knowledge Acquisition, Validation, and Maintenance in a Planning System for Automated Image Processing

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.

    1996-01-01

    A key obstacle hampering the fielding of AI planning applications is the considerable expense of developing, verifying, updating, and maintaining the planning knowledge base (KB). Planning systems must compare favorably in terms of software lifecycle costs with other means of automation such as scripts or rule-based expert systems. This paper describes a planning application for automated image processing and our overall approach to knowledge acquisition for this application.

  16. Horizon Acquisition for Attitude Determination Using Image Processing Algorithms - Results of HORACE on REXUS 16

    NASA Astrophysics Data System (ADS)

    Barf, J.; Rapp, T.; Bergmann, M.; Geiger, S.; Scharf, A.; Wolz, F.

    2015-09-01

    The aim of the Horizon Acquisition Experiment (HORACE) was to prove a new concept for a two-axis horizon sensor using algorithms that process ordinary images, which remains operable at the high spin rates occurring during emergencies. The difficulty of coping with image distortions, which is avoided by conventional horizon sensors, was introduced on purpose, as we envision a system capable of using any optical data. During the flight on REXUS 16, which provided a suitable platform similar to the future application scenario, a malfunction of the payload cameras caused severe degradation of the collected scientific data. Nevertheless, with the aid of simulations we could show that the concept is accurate (±0.6°), fast (~100 ms/frame) and robust enough for coarse attitude determination during emergencies, and also applicable to small satellites. In addition, technical knowledge regarding the design of REXUS experiments, including the detection of interference between SATA and GPS, was gained.
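
    HORACE's actual algorithms are not detailed in the abstract; the sketch below shows one simple way such a sensor can recover a roll angle from a single frame, by classifying bright pixels as sky, finding the sky/ground boundary in each column, and fitting a line. The brightness threshold is an assumption.

```python
# Estimate a roll angle from the horizon line in one grayscale frame.
import numpy as np

def horizon_roll(img: np.ndarray, sky_thresh: float = 0.7) -> float:
    sky = img > sky_thresh                 # bright pixels treated as sky
    cols, rows = [], []
    for x in range(img.shape[1]):
        ys = np.flatnonzero(~sky[:, x])    # ground pixels in this column
        if ys.size:                        # boundary = first ground pixel
            cols.append(x)
            rows.append(ys[0])
    # Assumes the horizon crosses the frame; fit row = slope * col + c.
    slope, _ = np.polyfit(cols, rows, 1)
    return float(np.degrees(np.arctan(slope)))

frame = np.random.rand(240, 320)           # stand-in for a camera frame
print("estimated roll angle [deg]:", horizon_roll(frame))
```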

  17. Multispectral integral imaging acquisition and processing using a monochrome camera and a liquid crystal tunable filter.

    PubMed

    Latorre-Carmona, Pedro; Sánchez-Ortiga, Emilio; Xiao, Xiao; Pla, Filiberto; Martínez-Corral, Manuel; Navarro, Héctor; Saavedra, Genaro; Javidi, Bahram

    2012-11-01

    This paper presents an acquisition system and a procedure to capture 3D scenes in different spectral bands. The acquisition system is formed by a monochrome camera and a Liquid Crystal Tunable Filter (LCTF) that allows images to be acquired at different spectral bands within the [480, 680] nm wavelength interval. The Synthetic Aperture Integral Imaging acquisition technique is used to obtain the elemental images for each wavelength. These elemental images are used to computationally obtain the reconstruction planes of the 3D scene at different depths. The 3D profile of the acquired scene is also obtained by minimizing the variance of the contributions of the elemental images at each image pixel. Experimental results show the viability of recovering the 3D multispectral information of the scene. Integration of 3D and multispectral information could have important benefits in different areas, including skin cancer detection, remote sensing and pattern recognition, among others.
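
    Computational reconstruction in synthetic aperture integral imaging is commonly implemented as a shift-and-average over the elemental images; the sketch below follows that standard formulation (not necessarily the authors' exact implementation), and the pitch, focal length and depth values are placeholders.

```python
# Shift-and-average reconstruction of one depth plane from a (K, K) grid
# of elemental images, each H x W; pitch_px, f and z are placeholders.
import numpy as np

def reconstruct_plane(elemental: np.ndarray, pitch_px: float,
                      f: float, z: float) -> np.ndarray:
    K, _, H, W = elemental.shape
    acc = np.zeros((H, W))
    for i in range(K):
        for j in range(K):
            # Shift proportional to the camera offset scaled by f / z.
            dy = int(round((i - K // 2) * pitch_px * f / z))
            dx = int(round((j - K // 2) * pitch_px * f / z))
            acc += np.roll(elemental[i, j], (dy, dx), axis=(0, 1))
    return acc / (K * K)

ei = np.random.rand(5, 5, 200, 300)        # stand-in elemental images
plane = reconstruct_plane(ei, pitch_px=40.0, f=50.0, z=2000.0)
```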

  18. Data acquisition and processing system of the electron cyclotron emission imaging system of the KSTAR tokamak

    SciTech Connect

    Kim, J. B.; Lee, W.; Yun, G. S.; Park, H. K.; Domier, C. W.; Luhmann, N. C. Jr.

    2010-10-15

    A new innovative electron cyclotron emission imaging (ECEI) diagnostic system for the Korean Superconducting Tokamak Advanced Research (KSTAR) device produces a large amount of data. The design of the data acquisition and processing system of the ECEI diagnostic must therefore accommodate this large data production and flow. The system design is based on a layered structure scalable to future extensions to meet increasing data demands. A software architecture that allows web-based monitoring of the operation status, remote experiments, and data analysis is discussed. The operating software will help machine operators and users validate the acquired data promptly, prepare the next discharge, and enhance experiment performance and data analysis in a distributed environment.

  19. UAV Photogrammetry with Oblique Images: First Analysis on Data Acquisition and Processing

    NASA Astrophysics Data System (ADS)

    Aicardi, I.; Chiabrando, F.; Grasso, N.; Lingua, A. M.; Noardo, F.; Spanò, A.

    2016-06-01

    In recent years, many studies have revealed the advantages of using airborne oblique images for obtaining improved 3D city models (e.g. including façades and building footprints). The data have usually been acquired by expensive airborne cameras installed on traditional aerial platforms. The purpose of this paper is to evaluate the possibility of acquiring and using oblique images for the 3D reconstruction of a historical building, obtained by a UAV (Unmanned Aerial Vehicle) and traditional COTS (Commercial Off-the-Shelf) digital cameras (more compact and lighter than the devices generally used), for the realization of a high-level-of-detail architectural survey. The critical issues of acquisition from a common UAV (flight planning strategies, ground control points, check point distribution and measurement, etc.) are described. Another important aspect considered was the possibility of using such systems as low-cost methods for obtaining complete information from an aerial point of view in emergency situations or, as in the present paper, in the cultural heritage application field. The data processing was realized using an SfM-based approach for point cloud generation: different dense image-matching algorithms implemented in commercial and open-source software were tested. The achieved results are analysed, and the discrepancies from reference LiDAR data are computed for a final evaluation. The system was tested on the S. Maria Chapel, a part of the Novalesa Abbey (Italy).

  20. Three-dimensional ultrasonic imaging of concrete elements using different SAFT data acquisition and processing schemes

    SciTech Connect

    Schickert, Martin

    2015-03-31

    Ultrasonic testing systems using transducer arrays and SAFT (Synthetic Aperture Focusing Technique) reconstruction allow imaging of the internal structure of concrete elements. With one-sided access, three-dimensional representations of the concrete volume can be reconstructed in relatively great detail, permitting objects such as construction elements, built-in components, and flaws to be detected and localized. Different SAFT data acquisition and processing schemes can be utilized, which differ in terms of measuring and computational effort and reconstruction result. In this contribution, two methods are compared with respect to their principle of operation and their imaging characteristics. The first method is the conventional single-channel SAFT algorithm, implemented using a virtual transducer that is moved within a transducer array by electronic switching. The second method is the Combinational SAFT algorithm (C-SAFT), also named Sampling Phased Array (SPA) or Full Matrix Capture/Total Focusing Method (FMC/TFM), which is realized using a combination of virtual transducers within a transducer array. Five variants of these two methods are compared by means of measurements obtained on test specimens containing objects typical of concrete elements. The automated SAFT imaging system FLEXUS is used for the measurements; it includes a three-axis scanner with a 1.0 m × 0.8 m scan range and an electronically switched ultrasonic array consisting of 48 transducers in 16 groups. On the basis of two-dimensional and three-dimensional reconstructed images, qualitative and some quantitative results for image resolution, signal-to-noise ratio, measurement time, and computational effort are discussed in view of the application characteristics of the SAFT variants.
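
    In its basic form, single-channel SAFT reconstruction reduces to delay-and-sum: each image point accumulates the A-scan samples whose round-trip travel time matches the transducer-to-point distance. A minimal NumPy sketch follows; the geometry, sampling rate and sound speed are illustrative values, not those of the FLEXUS system.

```python
# Delay-and-sum SAFT over a 2-D image grid.
import numpy as np

def saft(ascans: np.ndarray, xt: np.ndarray, xs: np.ndarray,
         zs: np.ndarray, c: float, fs: float) -> np.ndarray:
    """ascans: (num_positions, num_samples); xt: transducer x-positions."""
    img = np.zeros((zs.size, xs.size))
    for k, x0 in enumerate(xt):
        for iz, z in enumerate(zs):
            r = np.hypot(xs - x0, z)                      # one-way distance
            idx = np.round(2.0 * r / c * fs).astype(int)  # round-trip sample
            valid = idx < ascans.shape[1]
            img[iz, valid] += ascans[k, idx[valid]]
    return img

ascans = np.random.randn(48, 2048)          # stand-in A-scans
xt = np.linspace(0.0, 0.47, 48)             # transducer positions [m]
xs = np.linspace(0.0, 0.47, 200)            # image grid, lateral [m]
zs = np.linspace(0.01, 0.40, 150)           # image grid, depth [m]
image = saft(ascans, xt, xs, zs, c=4000.0, fs=2e6)  # c: rough value for concrete
```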

  1. Micro-MRI-based image acquisition and processing system for assessing the response to therapeutic intervention

    NASA Astrophysics Data System (ADS)

    Vasilić, B.; Ladinsky, G. A.; Saha, P. K.; Wehrli, F. W.

    2006-03-01

    Osteoporosis is the cause of over 1.5 million bone fractures annually. Most of these fractures occur in sites rich in trabecular bone, a complex network of bony struts and plates found throughout the skeleton. The three-dimensional structure of the trabecular bone network significantly determines mechanical strength and thus fracture resistance. Here we present a data acquisition and processing system that allows efficient noninvasive assessment of trabecular bone structure through a "virtual bone biopsy". High-resolution MR images are acquired, from which the trabecular bone network is extracted by estimating the partial bone occupancy of each voxel. A heuristic voxel subdivision increases the effective resolution of the bone volume fraction map and serves as a basis for subsequent analysis of topological and orientational parameters. Semi-automated registration and segmentation ensure selection of the same anatomical location in subjects imaged at different time points during treatment. It is shown, with excerpts from an ongoing clinical study of early post-menopausal women, that significant reduction in network connectivity occurs in the control group while structural integrity is maintained in the hormone replacement group. The system described should be suited to large-scale studies designed to evaluate the efficacy of therapeutic intervention in subjects with metabolic bone disease.

  2. Monitoring of HTS compound library quality via a high-resolution image acquisition and processing instrument.

    PubMed

    Baillargeon, Pierre; Scampavia, Louis; Einsteder, Ross; Hodder, Peter

    2011-06-01

    This report presents the high-resolution image acquisition and processing instrument for compound management applications (HIAPI-CM). The HIAPI-CM combines imaging spectroscopy and machine-vision analysis to perform rapid assessment of high-throughput screening (HTS) compound library quality. It has been customized to detect and classify typical artifacts found in HTS compound library microtiter plates (MTPs). These artifacts include (1) insufficient volume of liquid compound sample, (2) compound precipitation, and (3) colored compounds that interfere with HTS assay detection format readout. The HIAPI-CM is also configured to automatically query and compare its analysis results to data stored in a LIMS or corporate database, aiding in the detection of compound registration errors. To demonstrate its capabilities, several compound plates (n=5760 wells total) containing different artifacts were measured via automated HIAPI-CM analysis, and the results compared with those obtained by manual (visual) inspection. In all cases, the instrument demonstrated high fidelity (99.8% empty wells; 100.1% filled wells; 94.4% for partially filled wells; 94.0% for wells containing colored compounds), and in the case of precipitate detection, the HIAPI-CM results significantly exceeded the fidelity of visual observations (220.0%). As described, the HIAPI-CM allows noninvasive, nondestructive MTP assessment with a diagnostic throughput of about 1 min per plate, reducing analytical expenses and improving the quality and stewardship of HTS compound libraries.

  3. Image acquisitions, processing and analysis in the process of obtaining characteristics of horse navicular bone

    NASA Astrophysics Data System (ADS)

    Zaborowicz, M.; Włodarek, J.; Przybylak, A.; Przybył, K.; Wojcieszak, D.; Czekała, W.; Ludwiczak, A.; Boniecki, P.; Koszela, K.; Przybył, J.; Skwarcz, J.

    2015-07-01

    The aim of this study was to investigate the possibility of using computer image analysis methods for the assessment and classification of morphological variability and the state of health of the horse navicular bone. The assumption was that the classification would be based on information contained in two-dimensional digital images of the navicular bone combined with information on horse health. The first step in the research was to define the classes of analyzed bones, and then to use computer image analysis methods to obtain characteristics from these images. These characteristics were correlated with data concerning the animal, such as: the side of the hoof, navicular syndrome grade (scale 0-3), type, sex, age, weight, information about lace, and information about heel. This paper is an introduction to the study of neural image analysis in the diagnosis of navicular bone syndrome. The prepared method can serve as an introduction to the study of a non-invasive way to assess the condition of the horse navicular bone.

  4. Thermal Imaging of the Waccasassa Bay Preserve: Image Acquisition and Processing

    USGS Publications Warehouse

    Raabe, Ellen A.; Bialkowska-Jelinska, Elzbieta

    2010-01-01

    Thermal infrared (TIR) imagery was acquired along coastal Levy County, Florida, in March 2009 with the goal of identifying groundwater-discharge locations in Waccasassa Bay Preserve State Park (WBPSP). Groundwater discharge is thermally distinct in winter, when Floridan aquifer temperature, 71-72 degrees F, contrasts with the surrounding cold surface waters. Calibrated imagery was analyzed to assess temperature anomalies and related thermal traces. The influence of warm Gulf water and image artifacts on small features was successfully constrained by image evaluation in three separate zones: Creeks, Bay, and Gulf. Four levels of significant water-temperature anomalies were identified, and 488 sites of interest were mapped. Among the sites identified, at least 80 were determined to be associated with image artifacts and human activity, such as excavation pits and the Florida Barge Canal. Sites of interest were evaluated for geographic concentration and isolation. High site densities, indicating interconnectivity and prevailing flow, were located at Corrigan Reef, No. 4 Channel, Winzy Creek, Cow Creek, Withlacoochee River, and at excavation sites. In other areas, low to moderate site density indicates the presence of independent vents and unique flow paths. A directional distribution assessment of natural seep features produced a northwest trend closely matching the strike direction of regional faults. Naturally occurring seeps were located in karst ponds and tidal creeks, and several submerged sites were detected in Waccasassa River and Bay, representing the first documentation of submarine vents in the Waccasassa region. Drought conditions throughout the region placed constraints on positive feature identification. Low discharge or displacement by landward movement of saltwater may have reduced or reversed flow during this season. Approximately two-thirds of seep locations in the overlap between 2009 and 2005 TIR night imagery were positively re-identified in 2009.

  5. Image gathering, coding, and processing: End-to-end optimization for efficient and robust acquisition of visual information

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.

    1990-01-01

    Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.

  6. Automated ship image acquisition

    NASA Astrophysics Data System (ADS)

    Hammond, T. R.

    2008-04-01

    The experimental Automated Ship Image Acquisition System (ASIA) collects high-resolution ship photographs at a shore-based laboratory, with minimal human intervention. The system uses Automatic Identification System (AIS) data to direct a high-resolution SLR digital camera to ship targets and to identify the ships in the resulting photographs. The photo database is then searchable using the rich data fields from AIS, which include the name, type, call sign and various vessel identification numbers. The high-resolution images from ASIA are intended to provide information that can corroborate AIS reports (e.g., extract identification from the name on the hull) or provide information that has been omitted from the AIS reports (e.g., missing or incorrect hull dimensions, cargo, etc.). Once assembled into a searchable image database, the images can be used for a wide variety of marine safety and security applications. This paper documents the author's experience with the practicality of composing photographs based on AIS reports alone, describing a number of ways in which this can go wrong, from errors in the AIS reports to fixed and mobile obstructions and multiple ships in the shot. The frequency with which various errors occurred in automatically composed photographs collected in Halifax harbour in wintertime was determined by manual examination of the images. Of the images examined, 45% were considered of a quality sufficient to read identification markings, numbers and text off the entire ship. One of the main technical challenges for ASIA lies in automatically differentiating good and bad photographs, so that few bad ones are shown to human users. Initial attempts at automatic photo rating showed 75% agreement with manual assessments.

  7. Hardware acceleration of lucky-region fusion (LRF) algorithm for image acquisition and processing

    NASA Astrophysics Data System (ADS)

    Maignan, William; Koeplinger, David; Carhart, Gary W.; Aubailly, Mathieu; Kiamilev, Fouad; Liu, J. Jiang

    2013-05-01

    "Lucky-region fusion" (LRF) is an image processing technique that has proven successful in enhancing the quality of images distorted by atmospheric turbulence. The LRF algorithm extracts sharp regions of an image obtained from a series of short exposure frames, and "fuses" them into a final image with improved quality. In previous research, the LRF algorithm had been implemented on a PC using a compiled programming language. However, the PC usually does not have sufficient processing power to handle real-time extraction, processing and reduction required when the LRF algorithm is applied not to single picture images but rather to real-time video from fast, high-resolution image sensors. This paper describes a hardware implementation of the LRF algorithm on a Virtex 6 field programmable gate array (FPGA) to achieve real-time video processing. The novelty in our approach is the creation of a "black box" LRF video processing system with a standard camera link input, a user controller interface, and a standard camera link output.

  8. Colony image acquisition and segmentation

    NASA Astrophysics Data System (ADS)

    Wang, W. X.

    2007-12-01

    For the counting of both colonies and plaques, there is a large number of applications, including food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, sterility testing, AMES testing, pharmaceuticals, paints, sterile fluids and fungal contamination. Recently, many researchers and developers have worked on systems of this kind. Investigation shows that some existing systems have problems, mainly in image acquisition and image segmentation. In order to acquire colony images of good quality, an illumination box was constructed as follows: the box includes front lighting and back lighting, which can be selected by users based on the properties of the colony dishes. With the illumination box, the lighting can be uniform and the colony dish can be put in the same place every time, which makes image processing easy. The developed colony image segmentation algorithm consists of the sub-algorithms: (1) image classification; (2) image processing; and (3) colony delineation. The colony delineation algorithm mainly contains procedures based on grey-level similarity, boundary tracing, shape information and colony exclusion. In addition, a number of algorithms were developed for colony analysis. The system has been tested with satisfactory results.

  9. apART: system for the acquisition, processing, archiving, and retrieval of digital images in an open, distributed imaging environment

    NASA Astrophysics Data System (ADS)

    Schneider, Uwe; Strack, Ruediger

    1992-04-01

    apART reflects the structure of an open, distributed environment. In line with the general trend in imaging, network-capable, general-purpose workstations with open-system image communication and image input capabilities are used. Several heterogeneous components, like CCD cameras, slide scanners, and image archives, can be accessed. The system is driven by an object-oriented user interface where devices (image sources and destinations), operators (derived from a commercial image processing library), and images (of different data types) are managed and presented uniformly to the user. Browsing mechanisms are used to traverse devices, operators, and images. An audit trail mechanism is offered to record interactive operations on low-resolution image derivatives. These operations are processed off-line on the original image. Thus, the processing of extremely high-resolution raster images is possible, and the performance of resolution-dependent operations is enhanced significantly during interaction. An object-oriented database system (APRIL), which can be browsed, is integrated into the system. Attribute retrieval is supported by the user interface. Other essential features of the system include: implementation on top of the X Window System (X11R4) and the OSF/Motif widget set; a SUN4 general-purpose workstation, including Ethernet, magneto-optical disc, etc., as the hardware platform for the user interface; complete graphical-interactive parametrization of all operators; support of different image interchange formats (GIF, TIFF, IIF, etc.); and consideration of current IPI standard activities within ISO/IEC for further refinement and extensions.

  10. Acquisition and Analysis of Dynamic Responses of a Historic Pedestrian Bridge using Video Image Processing

    NASA Astrophysics Data System (ADS)

    O'Byrne, Michael; Ghosh, Bidisha; Schoefs, Franck; O'Donnell, Deirdre; Wright, Robert; Pakrashi, Vikram

    2015-07-01

    Video-based tracking is capable of analysing bridge vibrations that are characterised by large amplitudes and low frequencies. This paper presents the use of video images and associated image processing techniques to obtain the dynamic response of a pedestrian suspension bridge in Cork, Ireland. This historic structure is one of the four suspension bridges in Ireland and is notable for its dynamic nature. A video camera was mounted on the river bank and the dynamic responses of the bridge were measured from the video images. The dynamic response is assessed without the need for a reflector on the bridge and in the presence of various forms of luminous complexity in the video image scenes. Vertical deformations of the bridge were measured in this regard. The video image tracking for the measurement of the dynamic responses of the bridge was based on correlating patches in time-lagged scenes in the video images and utilising a zero-mean normalised cross-correlation (ZNCC) metric. The bridge was excited by designed pedestrian movement and by individual cyclists traversing the bridge. The time series data of the dynamic displacement responses of the bridge were analysed to obtain the frequency-domain response. Frequencies obtained from the video analysis were checked against accelerometer data from the bridge obtained while carrying out the same set of experiments used for video-image-based recognition.
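
    The ZNCC metric itself is compact enough to state directly; the brute-force search below is a simplified stand-in for the paper's patch-tracking pipeline, assuming the search window stays inside the frame.

```python
# Zero-mean normalised cross-correlation and a small displacement search.
import numpy as np

def zncc(a: np.ndarray, b: np.ndarray) -> float:
    """ZNCC of two equally sized patches, in [-1, 1]."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def best_match(template: np.ndarray, frame: np.ndarray,
               y0: int, x0: int, search: int = 10):
    """Scan a +/- search window around (y0, x0) for the highest ZNCC."""
    h, w = template.shape
    best = (-2.0, y0, x0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            score = zncc(template, frame[y:y + h, x:x + w])
            if score > best[0]:
                best = (score, y, x)
    return best                    # (score, row, col) of the best position
```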

  11. The influence of the microscope lamp filament colour temperature on the process of digital images of histological slides acquisition standardization

    PubMed Central

    2014-01-01

    Background The aim of this study is to compare digital images of tissue biopsies captured with an optical microscope using the bright-field technique under various light conditions. The range of colour variation in tissue samples immunohistochemically stained with 3,3'-Diaminobenzidine and Haematoxylin is immense and comes from various sources. One of them is an inadequate setting of the camera's white balance relative to the microscope's light colour temperature. Although this type of error can be easily handled during the stage of image acquisition, it can also be eliminated afterwards with the use of colour adjustment algorithms. The examination of the dependence of colour variation on the microscope's light temperature and the settings of the camera was done as introductory research for the process of automatic colour standardization. Methods Six fields of view with empty space among the tissue samples were selected for analysis. Each field of view was acquired 225 times with various microscope light temperatures and camera white balance settings. Fourteen randomly chosen images were corrected and compared with the reference image by the following methods: Mean Square Error, Structural SIMilarity and visual assessment by a viewer. Results For two types of backgrounds and two types of objects, the statistical image descriptors - range, median, mean and its standard deviation of chromaticity on the a and b channels from the CIELab colour space, luminance L, and local colour variability for the objects' specific area - were calculated. The results were averaged over 6 images acquired under the same light conditions and camera settings for each sample. Conclusions The analysis of the results leads to the following conclusions: (1) images collected with the white balance setting adjusted to the light colour temperature cluster in a certain area of the chromatic space; (2) the process of white balance correction for images collected with white balance camera settings not matched to the light temperature
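
    The two numerical comparison methods named in the Methods section, Mean Square Error and Structural SIMilarity, are available in scikit-image; a minimal usage sketch with placeholder file names:

```python
# Compare a corrected image against the reference frame.
from skimage import io
from skimage.metrics import mean_squared_error, structural_similarity

ref = io.imread("reference.png", as_gray=True)   # placeholder file names
img = io.imread("corrected.png", as_gray=True)

print("MSE :", mean_squared_error(ref, img))
# as_gray=True yields floats in [0, 1], hence data_range=1.0.
print("SSIM:", structural_similarity(ref, img, data_range=1.0))
```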

  12. SNAP: Simulating New Acquisition Processes

    NASA Technical Reports Server (NTRS)

    Alfeld, Louis E.

    1997-01-01

    Simulation models of acquisition processes range in scope from isolated applications to the 'Big Picture' captured by SNAP technology. SNAP integrates a family of models to portray the full scope of acquisition planning and management activities, including budgeting, scheduling, testing and risk analysis. SNAP replicates the dynamic management processes that underlie design, production and life-cycle support. SNAP provides the unique 'Big Picture' capability needed to simulate the entire acquisition process and explore the 'what-if' tradeoffs and consequences of alternative policies and decisions. Comparison of cost, schedule and performance tradeoffs helps managers choose the lowest-risk, highest-payoff option at each step in the acquisition process.

  13. Split-screen display system and standardized methods for ultrasound image acquisition and multi-frame data processing

    NASA Technical Reports Server (NTRS)

    Selzer, Robert H. (Inventor); Hodis, Howard N. (Inventor)

    2011-01-01

    A standardized acquisition methodology assists operators to accurately replicate high resolution B-mode ultrasound images obtained over several spaced-apart examinations utilizing a split-screen display in which the arterial ultrasound image from an earlier examination is displayed on one side of the screen while a real-time "live" ultrasound image from a current examination is displayed next to the earlier image on the opposite side of the screen. By viewing both images, whether simultaneously or alternately, while manually adjusting the ultrasound transducer, an operator is able to bring into view the real-time image that best matches a selected image from the earlier ultrasound examination. Utilizing this methodology, dynamic material properties of arterial structures, such as IMT and diameter, are measured in a standard region over successive image frames. Each frame of the sequence has its echo edge boundaries automatically determined by using the immediately prior frame's true echo edge coordinates as initial boundary conditions. Computerized echo edge recognition and tracking over multiple successive image frames enhances measurement of arterial diameter and IMT and allows for improved vascular dimension measurements, including vascular stiffness and IMT determinations.

  14. Data acquisition and processing

    NASA Astrophysics Data System (ADS)

    Tsuda, Toshitaka

    1989-10-01

    Fundamental methods of signal processing used in normal mesosphere-stratosphere-troposphere (MST) radar observations are described. Complex time series of received signals obtained in each range gate are converted into Doppler spectra, from which the mean Doppler shift, spectral width and signal-to-noise ratio (SNR) are estimated. These spectral parameters are further utilized to study the characteristics of scatterers and atmospheric motions.
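
    A compact sketch of that chain for a single range gate: a periodogram from the complex time series, a crude noise floor, then mean Doppler shift, spectral width and SNR as low-order moments. All parameter values are placeholders.

```python
# Doppler spectrum and its low-order moments from complex (I/Q) samples.
import numpy as np

def doppler_moments(iq: np.ndarray, fs: float):
    spec = np.abs(np.fft.fftshift(np.fft.fft(iq))) ** 2
    freqs = np.fft.fftshift(np.fft.fftfreq(iq.size, d=1.0 / fs))
    noise = np.median(spec)                   # crude noise-floor estimate
    sig = np.maximum(spec - noise, 0.0)
    p = sig.sum()
    mean_shift = (freqs * sig).sum() / p      # first moment: Doppler shift
    width = np.sqrt(((freqs - mean_shift) ** 2 * sig).sum() / p)
    snr = p / (noise * iq.size)
    return mean_shift, width, snr

fs = 1000.0                                   # sampling rate [Hz], placeholder
t = np.arange(512) / fs
iq = np.exp(2j * np.pi * 80.0 * t) + 0.5 * (np.random.randn(512)
                                            + 1j * np.random.randn(512))
print(doppler_moments(iq, fs))                # shift should be near 80 Hz
```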

  15. Integral imaging acquisition and processing for visualization of photon counting images in the mid-wave infrared range

    NASA Astrophysics Data System (ADS)

    Latorre-Carmona, P.; Pla, F.; Javidi, B.

    2016-06-01

    In this paper, we present an overview of our previously published work on the application of the maximum likelihood (ML) reconstruction method to integral images acquired with a mid-wave infrared detector on two different types of scenes: one consisting of a road, a group of trees and a vehicle just behind one of the trees (the vehicle being at a distance of more than 200 m from the camera), and another consisting of a view of the Wright Air Force Base airfield, with several hangars and various other types of installations (including warehouses) at distances ranging from 600 m to more than 2 km. Dark current noise is considered, taking into account the particular features of this type of sensor. Results show that this methodology makes it possible to improve visualization in the photon counting domain.

  16. Effective GPR Data Acquisition and Imaging

    NASA Astrophysics Data System (ADS)

    Sato, M.

    2014-12-01

    We have demonstrated that dense GPR data acquisition, typically with an antenna step increment of less than 1/10 of a wavelength, can provide clear 3-dimensional subsurface images, and we have created such 3D GPR images. Now we are interested in developing GPR survey methodologies that require less data acquisition time. In order to speed up the data acquisition, we are studying efficient antenna positioning for GPR surveys and 3-D imaging algorithms. For example, we have developed a dual sensor "ALIS", which combines GPR with a metal detector (electromagnetic induction sensor) for humanitarian demining and acquires GPR data by hand scanning. ALIS is a pulse radar system with a frequency range of 0.5-3 GHz. Its sensor position tracking system has an accuracy of about a few cm, and the data spacing is typically more than a few cm, but it can visualize mines with a diameter of about 8 cm. Two ALIS systems have been deployed by the Cambodian Mine Action Center (CMAC) in minefields in Cambodia since 2009 and have detected more than 80 buried landmines. We are now developing signal processing for an array-type GPR, "Yakumo". Yakumo is an SFCW radar system and a multi-static radar, consisting of 8 transmitter antennas and 8 receiver antennas. We have demonstrated that multi-static data acquisition is not only effective for data acquisition but, at the same time, can increase the quality of GPR images. Archaeological surveys by Yakumo over large areas, more than 100 m by 100 m, have been conducted to promote recovery from the tsunami which struck East Japan in March 2011. With a conventional GPR system, we are developing an interpolation method for radar signals, and have demonstrated that it can increase the quality of the radar images without increasing the number of data acquisition points. When we acquire a one-dimensional GPR profile along a survey line, we can acquire relatively high-density data sets. However, when we need to relocate the data sets along a "virtual" survey line, for example a

  17. Acquisition and applications of 3D images

    NASA Astrophysics Data System (ADS)

    Sterian, Paul; Mocanu, Elena

    2007-08-01

    The moiré fringe method and its analysis, through to medical and entertainment applications, are discussed in this paper. We describe the procedure for capturing 3D images with an Inspeck Camera, a real-time 3D shape acquisition system based on structured-light techniques. The method is a high-resolution one. After processing the images on a computer, we can use the data to create fashionable laser-engraved objects with a Q-switched Nd:YAG. In the medical field we mention plastic surgery and the replacement of X-rays, especially in pediatric use.

  18. X-ray beam modulation, image acquisition and real-time processing in region-of-interest fluoroscopy

    NASA Astrophysics Data System (ADS)

    Yang, Chang-Ying Joseph

    2000-07-01

    Region-of-interest (ROI) fluoroscopy is a technique whereby a partially attenuating filter with an aperture in the center is placed in the x-ray beam between the source and the patient. The part of the x-ray beam that passes un-attenuated through the filter aperture is used to project the main features of interest in the patient, forming the ROI in each fluoroscopic image. The periphery of the image is formed by the projection of the features needed only for reference, using the part of the beam attenuated by the filter. This technique can substantially reduce patient and staff dose and improve the image quality in the ROI. By using Gd as the filter material, it is even possible to improve the x-ray attenuation contrast in the periphery. However, real-time image processing is needed to compensate for the x-ray intensity attenuation in the periphery so that brightness linearity between the two parts of the fluoroscopic image is restored. Based on the method of binary masks, a system was developed to perform this real-time image processing with the flexibility to accommodate both horizontal and vertical movement of the imaging chain relative to the patient. A binary mask is a binary image used to define those regions in the fluoroscopic image which should be processed and those which should not. A method of binary mask generation was proposed so that the region defined as not to be processed in the binary mask maintains as close a resemblance as possible to the ROI of the fluoroscopic image. The construction method for the look-up table used for the processing of the periphery, and its dependence on physical quantities, were described and studied. An algorithm for constantly tracking the change of the ROI in the fluoroscopic images and selecting the proper corresponding binary mask was developed. The quality of the processed ROI fluoroscopic images, such as brightness, contrast and noise, was evaluated and compared using test phantoms. The test
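
    An illustrative sketch of the mask-plus-LUT compensation step (the paper derives its look-up table from the filter's measured attenuation and tracks the ROI frame by frame; the uniform-gain LUT and circular mask here are placeholders only):

```python
# Apply a brightness-restoring LUT to the masked periphery of a frame.
import numpy as np

def equalize_periphery(frame: np.ndarray, mask: np.ndarray,
                       lut: np.ndarray) -> np.ndarray:
    """mask is True where the image must be processed (the periphery)."""
    out = frame.copy()
    out[mask] = lut[frame[mask]]              # LUT applied only under mask
    return out

# Placeholder LUT: undo an assumed uniform 4x attenuation, clipped to 8 bits.
lut = np.clip(np.arange(256) * 4, 0, 255).astype(np.uint8)
frame = (np.random.rand(480, 480) * 64).astype(np.uint8)
yy, xx = np.ogrid[:480, :480]
mask = (yy - 240) ** 2 + (xx - 240) ** 2 > 150 ** 2   # outside circular ROI
corrected = equalize_periphery(frame, mask, lut)
```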

  19. MONSOON: Image Acquisition System or "Pixel Server"

    NASA Astrophysics Data System (ADS)

    Starr, Barry M.; Buchholz, Nick C.; Rahmer, Gustavo; Penegor, Gerald; Schmidt, Ricardo E.; Warner, Michael; Merrill, Michael; Claver, Charles F.; Ho, Y.; Chopra, K. N.; Shroff, C.; Shroff, D.

    2003-03-01

    The MONSOON Image Acquisition System has been designed to meet the need for scalable, multichannel, high-speed image acquisition required by the next-generation optical and infrared detectors and mosaic projects currently under development at NOAO, as described in other papers in these proceedings, such as ORION, NEWFIRM, QUOTA, ODI and LSST. These new systems, with their large scale (64 to 2000 channels) and high performance (up to 1 Gbyte/s), raise new challenges in terms of communication bandwidth, data storage and data processing requirements which are not adequately met by existing astronomical controllers. In order to meet this demand, new techniques have been defined not just for a new detector controller but for a new image acquisition architecture. These extremely large-scale imaging systems also raise less obvious concerns in previously neglected areas of controller design, such as physical size and form factor, power dissipation and cooling near the telescope, system assembly/test/integration time, reliability, and total cost of ownership. At NOAO we have made efforts to look outside the astronomical community for solutions found in other disciplines to similar classes of problems. A large number of the challenges raised by these system needs are already being faced successfully in other areas such as telecommunications, instrumentation and aerospace. Efforts have also been made to use true commercial off-the-shelf (COTS) system elements and to find truly technology-independent solutions for a number of system design issues whenever possible. The MONSOON effort is a full-disclosure development effort by NOAO in collaboration with the CARA ASTEROID project for the benefit of the astronomical community.

  1. Image Acquisition in Real Time

    NASA Technical Reports Server (NTRS)

    2003-01-01

    In 1995, Carlos Jorquera left NASA's Jet Propulsion Laboratory (JPL) to focus on closing the growing void between high-performance cameras and the software required to capture and process the resulting digital images. Since his departure from NASA, Jorquera's efforts have not only satisfied private industry's demand for faster, more flexible, and more favorable software applications, but have blossomed into a successful entrepreneurship that is making its mark with improvements in fields such as medicine, weather forecasting, and X-ray inspection. Formerly a JPL engineer who constructed imaging systems for spacecraft and ground-based astronomy projects, Jorquera is the founder and president of the three-person firm Boulder Imaging Inc., based in Louisville, Colorado. Joining Jorquera to round out the Boulder Imaging staff are Chief Operations Engineer Susan Downey, who also gained experience at JPL working on space-bound projects including Galileo and the Hubble Space Telescope, and Vice President of Engineering and Machine Vision Specialist Jie Zhu Kulbida, who has extensive industrial and research and development experience within the private sector.

  2. Multispectral imaging and image processing

    NASA Astrophysics Data System (ADS)

    Klein, Julie

    2014-02-01

    The color accuracy of conventional RGB cameras is not sufficient for many color-critical applications. One of these applications, namely the measurement of color defects in yarns, is why Prof. Til Aach and the Institute of Image Processing and Computer Vision (RWTH Aachen University, Germany) started off with multispectral imaging. The first acquisition device was a camera using a monochrome sensor and seven bandpass color filters positioned sequentially in front of it. The camera allowed sampling the visible wavelength range more accurately and reconstructing the spectra at each acquired image position. An overview is given of several optical and imaging aspects of the multispectral camera that have been investigated. For instance, optical aberrations caused by the filters and camera lens deteriorate the quality of captured multispectral images. The different aberrations were analyzed thoroughly and compensated based on models of the optical elements and the imaging chain, utilizing image processing. With this compensation, geometrical distortions disappear and sharpness is enhanced, without reducing the color accuracy of the multispectral images. Strong foundations in multispectral imaging were laid, and a fruitful cooperation was initiated with Prof. Bernhard Hill. Current research topics, such as stereo multispectral imaging and goniometric multispectral measurements, that are further explored with his expertise will also be presented in this work.
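
    Spectral reconstruction from a small number of filter channels is commonly posed as regularized linear inversion; the sketch below illustrates the idea with a hypothetical 7-channel sensitivity matrix and is not the institute's actual algorithm:

        import numpy as np

        # Camera model: c = S @ r, where S (7 x N) stacks the effective
        # sensitivities of the seven bandpass channels over N wavelength
        # samples and r is the unknown reflectance spectrum.
        def reconstruct_spectrum(c, S, lam=1e-3):
            # Tikhonov-regularized least-squares estimate of the spectrum.
            N = S.shape[1]
            return np.linalg.solve(S.T @ S + lam * np.eye(N), S.T @ c)

        # Toy example: 7 Gaussian-shaped channels over 400-700 nm.
        wl = np.linspace(400, 700, 31)
        centers = np.linspace(420, 680, 7)
        S = np.exp(-0.5 * ((wl[None, :] - centers[:, None]) / 20.0) ** 2)
        r_true = 0.5 + 0.3 * np.sin(wl / 40.0)
        r_est = reconstruct_spectrum(S @ r_true, S)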

  3. Optimisation of acquisition time in bioluminescence imaging

    NASA Astrophysics Data System (ADS)

    Taylor, Shelley L.; Mason, Suzannah K. G.; Glinton, Sophie; Cobbold, Mark; Styles, Iain B.; Dehghani, Hamid

    2015-03-01

    Decreasing the acquisition time in bioluminescence imaging (BLI) and bioluminescence tomography (BLT) will enable animals to be imaged within the window of stable emission of the bioluminescent source, a higher imaging throughput, and minimisation of the time for which an animal is anaesthetised. This work investigates, through simulation using a heterogeneous mouse model, two methods of decreasing acquisition time: 1. imaging at fewer wavelengths (a reduction from five to three); and 2. increasing the bandwidth of the filters used for imaging. The results indicate that both methods are viable ways of decreasing the acquisition time without a loss in quantitative accuracy. Importantly, when choosing imaging wavelengths, the spectral attenuation of tissue and the emission spectrum of the source must be considered, in order to choose wavelengths at which a high signal can be achieved. Additionally, when increasing the bandwidth of the filters used for imaging, the bandwidth must be accounted for in the reconstruction algorithm.

  4. Image Processing

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Electronic Imagery, Inc.'s ImageScale Plus software, developed through a Small Business Innovation Research (SBIR) contract with Kennedy Space Center for use on the Space Shuttle orbiter in 1991, enables astronauts to conduct image processing, prepare electronic still camera images in orbit, display them, and downlink images to ground-based scientists for evaluation. Electronic Imagery, Inc.'s ImageCount, a spin-off product of ImageScale Plus, is used to count trees in Florida orange groves. Other applications include x-ray and MRI imagery, textile designs, and special effects for movies. As of 1/28/98, the company could not be located; therefore the contact/product information is no longer valid.

  5. Image acquisition in laparoscopic and endoscopic surgery

    NASA Astrophysics Data System (ADS)

    Gill, Brijesh S.; Georgeson, Keith E.; Hardin, William D., Jr.

    1995-04-01

    Laparoscopic and endoscopic surgery rely uniquely on high-quality display of acquired images, but a multitude of problems plague the researcher who attempts to reproduce such images for educational purposes. Some of these are intrinsic limitations of current laparoscopic/endoscopic visualization systems, while others are artifacts solely of the process used to acquire and reproduce such images. Whatever the genesis of these problems, a glance at the current literature will reveal the extent to which endoscopy suffers from an inability to reproduce what the surgeon sees during a procedure. The major intrinsic limitation to the acquisition of high-quality still images from laparoscopic procedures lies in the inability to couple a still camera directly to the laparoscope. While many systems have this capability, it is useful mostly for otolaryngologists, who do not maintain a sterile field around their scopes. For procedures in which a sterile field must be maintained, one trial method has been to use a beam splitter to send light both to the still camera and to the digital video camera. This is no solution, however, since it results in low-quality still images as well as a degradation of the image that the surgeon must use to operate, something no surgeon tolerates lightly. Researchers thus must currently rely on other methods for producing images from a laparoscopic procedure. Most manufacturers provide an optional slide or print maker that produces hardcopy output from the processed composite video signal. The results achieved with such devices are marginal, to say the least. This leaves only one avenue for possible image production, the videotape record of an endoscopic or laparoscopic operation. Video frame grabbing is at least a problem to which industry has applied considerable time and effort. Our own experience with computerized enhancement of videotape frames has been very promising. Computer enhancement allows the researcher to correct several of the
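
    Frame grabbing and enhancement of the kind discussed can be sketched with OpenCV; the file path, frame index, and CLAHE parameters below are placeholders, and CLAHE is one enhancement choice among many rather than the authors' method:

        import cv2

        cap = cv2.VideoCapture("procedure.avi")         # placeholder path
        cap.set(cv2.CAP_PROP_POS_FRAMES, 1500)          # seek to frame of interest
        ok, frame = cap.read()
        cap.release()

        if ok:
            # Enhance local contrast on the luminance channel only,
            # so the color rendition of the tissue is preserved.
            lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
            l, a, b = cv2.split(lab)
            clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
            enhanced = cv2.merge((clahe.apply(l), a, b))
            cv2.imwrite("frame_enhanced.png",
                        cv2.cvtColor(enhanced, cv2.COLOR_LAB2BGR))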

  6. Image acquisition system for a hospital enterprise

    NASA Astrophysics Data System (ADS)

    Moore, Stephen M.; Beecher, David E.

    1998-07-01

    Hospital enterprises are being created through mergers and acquisitions of existing hospitals. One area of interest in the PACS literature has been the integration of information systems and imaging systems. Hospital enterprises with multiple information and imaging systems provide new challenges to the integration task. This paper describes the requirements at the BJC Health System and a testbed system that is designed to acquire images from a number of different modalities and hospitals. This testbed system is integrated with Project Spectrum at BJC which is designed to provide a centralized clinical repository and a single desktop application for physician review of the patient chart (text, lab values, images).

  7. Adaptive processing for enhanced target acquisition

    NASA Astrophysics Data System (ADS)

    Page, Scott F.; Smith, Moira I.; Hickman, Duncan; Bernhardt, Mark; Oxford, William; Watson, Norman; Beath, F.

    2009-05-01

    Conventional air-to-ground target acquisition processes treat the image stream in isolation from external data sources. This ignores information that may be available through modern mission management systems and that could be fused into the detection process to provide enhanced performance. By way of an example relating to target detection, this paper explores the use of a priori knowledge and other sensor information in an adaptive architecture with the aim of enhancing decision-making performance. The approach taken here is to use knowledge of target size, terrain elevation, sensor geometry, solar geometry and atmospheric conditions to characterise the expected spatial and radiometric characteristics of a target in terms of probability density functions. An important consideration in the construction of the target probability density functions is the known error in the a priori knowledge. Potential targets are identified in the imagery, and their spatial and expected radiometric characteristics are used to compute the target likelihood. The adaptive architecture is evaluated alongside a conventional non-adaptive algorithm using synthetic imagery representative of an air-to-ground target acquisition scenario. Lastly, future enhancements to the adaptive scheme are discussed, as well as strategies for managing poor-quality or absent a priori information.
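
    The fusion step can be sketched as evaluating each candidate against prior probability density functions; the Gaussian form, the independence assumption, and all names and numbers below are illustrative, not the paper's model:

        from scipy.stats import norm

        def target_likelihood(size_px, radiance, priors):
            # `priors` holds (mean, sigma) pairs derived from a priori
            # knowledge (target size, sensor/solar geometry, atmosphere);
            # the sigmas encode the known errors in that knowledge.
            p_size = norm.pdf(size_px, *priors["size"])
            p_rad = norm.pdf(radiance, *priors["radiance"])
            return p_size * p_rad   # independence assumed for simplicity

        priors = {"size": (12.0, 3.0), "radiance": (0.85, 0.15)}
        print(target_likelihood(size_px=10.5, radiance=0.9, priors=priors))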

  8. A design of camera simulator for photoelectric image acquisition system

    NASA Astrophysics Data System (ADS)

    Cai, Guanghui; Liu, Wen; Zhang, Xin

    2015-02-01

    In the process of developing photoelectric image acquisition equipment, its function and performance need to be verified. In order to let the photoelectric device replay previously recorded image data during debugging and testing, a design scheme for a camera simulator is presented. In this system, with an FPGA as the control core, image data are saved in NAND flash through a USB 2.0 bus. Because the access rate of the NAND flash is too slow to meet the requirement of the system, pipeline and high-bandwidth-bus techniques are applied in the design to improve the storage rate. The control logic in the FPGA reads image data out of the flash and outputs them separately over three different interfaces, Camera Link, LVDS and PAL, which can provide image data for debugging photoelectric image acquisition equipment and validating algorithms. However, because the standard PAL image resolution is 720×576, which differs from the input image resolution, the PAL image is output after resolution conversion. The experimental results demonstrate that the camera simulator outputs the three image-sequence formats correctly, and they can be captured and displayed by a frame grabber. The three image formats can meet the test requirements of most equipment, shortening debugging time and improving test efficiency.

  9. Self-adaptive iris image acquisition system

    NASA Astrophysics Data System (ADS)

    Dong, Wenbo; Sun, Zhenan; Tan, Tieniu; Qiu, Xianchao

    2008-03-01

    Iris image acquisition is the fundamental step of iris recognition, but capturing high-resolution iris images in real time is very difficult. Most existing systems have a small capture volume and demand that users fully cooperate with the machine, which has become the bottleneck of iris recognition's application. In this paper, we aim at building an active iris image acquisition system that is self-adaptive to users. Two low-resolution cameras are co-located on a pan-tilt unit (PTU), for face and iris image acquisition respectively. Once the face camera detects a face region in the real-time video, the system steers the PTU towards the eye region and automatically zooms, until the iris camera captures a clear iris image for recognition. Compared with other similar works, our contribution is that we use low-resolution cameras, which can transmit image data much faster and are much cheaper than high-resolution cameras. In the system, we use Haar-like cascaded features to detect faces and eyes, a linear transformation to predict the iris camera's position, and a simple heuristic PTU control method to track eyes. A prototype device has been built, and experiments show that our system can automatically capture high-quality iris images within a volume of 0.6 m × 0.4 m × 0.4 m in 3 to 5 seconds on average.
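
    The face/eye detection stage maps directly onto OpenCV's stock Haar cascades; this sketch shows that stage only (the input image path is a placeholder, and PTU prediction and control are omitted):

        import cv2

        face_cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        eye_cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_eye.xml")

        frame = cv2.imread("face.png")                  # placeholder input
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            roi = gray[y:y + h, x:x + w]                # search eyes in the face
            for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
                # Eye center in image coordinates, e.g. to steer the PTU.
                print("eye at", (x + ex + ew // 2, y + ey + eh // 2))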

  10. SU-C-18C-06: Radiation Dose Reduction in Body Interventional Radiology: Clinical Results Utilizing a New Imaging Acquisition and Processing Platform

    SciTech Connect

    Kohlbrenner, R; Kolli, KP; Taylor, A; Kohi, M; Fidelman, N; LaBerge, J; Kerlan, R; Gould, R

    2014-06-01

    Purpose: To quantify the patient radiation dose reduction achieved during transarterial chemoembolization (TACE) procedures performed in a body interventional radiology suite equipped with the Philips Allura Clarity image acquisition and processing platform, compared to TACE procedures performed in the same suite equipped with the Philips Allura Xper platform. Methods: Total fluoroscopy time, cumulative dose area product, and cumulative air kerma were recorded for the first 25 TACE procedures performed to treat hepatocellular carcinoma (HCC) in a Philips body interventional radiology suite equipped with Philips Allura Clarity. The same data were collected for the prior 85 TACE procedures performed to treat HCC in the same suite equipped with Philips Allura Xper. Mean values from these cohorts were compared using two-tailed t tests. Results: Following installation of the Philips Allura Clarity platform, a 42.8% reduction in mean cumulative dose area product (3033.2 versus 1733.6 mGy·cm^2, p < 0.0001) and a 31.2% reduction in mean cumulative air kerma (1445.4 versus 994.2 mGy, p < 0.001) were achieved compared to similar procedures performed in the same suite equipped with the Philips Allura Xper platform. Mean total fluoroscopy time was not significantly different between the two cohorts (1679.3 versus 1791.3 seconds, p = 0.41). Conclusion: This study demonstrates a significant patient radiation dose reduction during TACE procedures performed to treat HCC after a body interventional radiology suite was converted to the Philips Allura Clarity platform from the Philips Allura Xper platform. Future work will focus on evaluation of patient dose reduction in a larger cohort of patients across a broader range of procedures and in specific populations, including obese patients and pediatric patients, and comparison of image quality between the two platforms. Funding for this study was provided by Philips Healthcare, with 5% salary support provided to authors K. Pallav
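
    The cohort comparison reduces to a two-sample t test on per-procedure dose records; the arrays below are synthetic stand-ins generated from the reported means (the study's raw data and exact test variant are not given, and Welch's unequal-variance form is an assumption here):

        import numpy as np
        from scipy.stats import ttest_ind

        rng = np.random.default_rng(0)
        dap_xper = rng.normal(3033.2, 900.0, size=85)      # mGy*cm^2, Allura Xper
        dap_clarity = rng.normal(1733.6, 600.0, size=25)   # mGy*cm^2, Allura Clarity

        t_stat, p_value = ttest_ind(dap_xper, dap_clarity, equal_var=False)
        reduction = 100 * (1 - dap_clarity.mean() / dap_xper.mean())
        print(f"mean DAP reduction {reduction:.1f}%, two-tailed p = {p_value:.4g}")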

  11. Digital image processing.

    PubMed

    Lo, Winnie Y; Puchalski, Sarah M

    2008-01-01

    Image processing or digital image manipulation is one of the greatest advantages of digital radiography (DR). Preprocessing depends on the modality and corrects for system irregularities such as differential light detection efficiency, dead pixels, or dark noise. Processing is manipulation of the raw data just after acquisition. It is generally proprietary and specific to the DR vendor but encompasses manipulations such as unsharp mask filtering within two or more spatial frequency bands, histogram sliding and stretching, and gray scale rendition or lookup table application. These processing steps have a profound effect on the final appearance of the radiograph, but they can also lead to artifacts unique to digital systems. Postprocessing refers to manipulation of the final appearance of the radiograph by the end-user and does not involve alteration of the raw data.
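
    Unsharp mask filtering, one of the processing steps named above, can be sketched in a few lines; the sigma, amount, and 12-bit clip range are illustrative choices, not any vendor's proprietary settings:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def unsharp_mask(img, sigma=3.0, amount=0.7):
            # Add back a scaled copy of the detail that remains after
            # subtracting a Gaussian-blurred version of the image.
            img = img.astype(np.float32)
            return np.clip(img + amount * (img - gaussian_filter(img, sigma)),
                           0, 4095)                     # 12-bit dynamic range

        def two_band(img):
            # Crude two-band variant: sharpen fine detail, then coarse detail.
            return unsharp_mask(unsharp_mask(img, sigma=1.0, amount=0.4),
                                sigma=8.0, amount=0.3)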

  12. The ADIS advanced data acquisition, imaging, and storage system

    SciTech Connect

    Flaherty, J.W.

    1986-01-01

    The design and development of Automated Ultrasonic Scanning Systems (AUSS) by McDonnell Aircraft Company has provided the background for the development of the ADIS advanced data acquisition, imaging, and storage system. The ADIS provides state-of-the-art ultrasonic data processing and imaging features which can be utilized in both laboratory and production-line composite evaluation applications. System features such as real-time imaging, instantaneous electronic rescanning, multitasking capability, histograms, and cross-sections provide the tools necessary to inspect and evaluate composite parts quickly and consistently.

  13. Auditory Processing Disorder and Foreign Language Acquisition

    ERIC Educational Resources Information Center

    Veselovska, Ganna

    2015-01-01

    This article aims at exploring various strategies for coping with the auditory processing disorder in the light of foreign language acquisition. The techniques relevant to dealing with the auditory processing disorder can be attributed to environmental and compensatory approaches. The environmental one involves actions directed at creating a…

  14. Applications Of Digital Image Acquisition In Anthropometry

    NASA Astrophysics Data System (ADS)

    Woolford, Barbara; Lewis, James L.

    1981-10-01

    Anthropometric data on reach and mobility have traditionally been collected by time consuming and relatively inaccurate manual methods. Three dimensional digital image acquisition promises to radically increase the speed and ease of data collection and analysis. A three-camera video anthropometric system for collecting position, velocity, and force data in real time is under development for the Anthropometric Measurement Laboratory at NASA's Johnson Space Center. The use of a prototype of this system for collecting data on reach capabilities and on lateral stability is described. Two extensions of this system are planned.

  15. Reducing the Effects of Background Noise during Auditory Functional Magnetic Resonance Imaging of Speech Processing: Qualitative and Quantitative Comparisons between Two Image Acquisition Schemes and Noise Cancellation

    ERIC Educational Resources Information Center

    Blackman, Graham A.; Hall, Deborah A.

    2011-01-01

    Purpose: The intense sound generated during functional magnetic resonance imaging (fMRI) complicates studies of speech and hearing. This experiment evaluated the benefits of using active noise cancellation (ANC), which attenuates the level of the scanner sound at the participant's ear by up to 35 dB around the peak at 600 Hz. Method: Speech and…

  16. Processability Theory and German Case Acquisition

    ERIC Educational Resources Information Center

    Baten, Kristof

    2011-01-01

    This article represents the first attempt to formulate a hypothetical sequence for German case acquisition by Dutch-speaking learners on the basis of Processability Theory (PT). It will be argued that case forms emerge corresponding to a development from lexical over phrasal to interphrasal morphemes. This development, however, is subject to a…

  17. Probabilistic models of language processing and acquisition.

    PubMed

    Chater, Nick; Manning, Christopher D

    2006-07-01

    Probabilistic methods are providing new explanatory approaches to fundamental cognitive science questions of how humans structure, process and acquire language. This review examines probabilistic models defined over traditional symbolic structures. Language comprehension and production involve probabilistic inference in such models; and acquisition involves choosing the best model, given innate constraints and linguistic and other input. Probabilistic models can account for the learning and processing of language, while maintaining the sophistication of symbolic models. A recent burgeoning of theoretical developments and online corpus creation has enabled large models to be tested, revealing probabilistic constraints in processing, undermining acquisition arguments based on a perceived poverty of the stimulus, and suggesting fruitful links with probabilistic theories of categorization and ambiguity resolution in perception.

  18. Colony image acquisition and genetic segmentation algorithm and colony analyses

    NASA Astrophysics Data System (ADS)

    Wang, W. X.

    2012-01-01

    Colony analysis is used in a large number of fields such as food, dairy, beverages, hygiene, environmental monitoring, water, toxicology and sterility testing. In order to reduce labor and increase analysis accuracy, many researchers and developers have worked on image analysis systems. The main problems in such systems are image acquisition, image segmentation and image analysis. In this paper, to acquire colony images with good quality, an illumination box was constructed, in which the distances between the lights and the dish, the camera lens and the lights, and the camera lens and the dish are adjusted optimally. Image segmentation is based on a genetic approach that allows one to treat the segmentation problem as a global optimization. After image pre-processing and image segmentation, the colony analyses are performed. The colony image analysis consists of (1) basic colony parameter measurements; (2) colony size analysis; (3) colony shape analysis; and (4) colony surface measurements. All the above visual colony parameters can be selected and combined to form new engineering parameters. The colony analysis can be applied in many different applications.
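
    A genetic search for a global segmentation threshold can be sketched as below, using between-class variance (the Otsu criterion) as fitness; the population size, operators, and fitness choice are illustrative, not the paper's algorithm:

        import numpy as np

        def fitness(t, img):
            # Between-class variance: higher is better (Otsu criterion).
            fg, bg = img[img >= t], img[img < t]
            if fg.size == 0 or bg.size == 0:
                return 0.0
            w1, w2 = fg.size / img.size, bg.size / img.size
            return w1 * w2 * (fg.mean() - bg.mean()) ** 2

        def ga_threshold(img, pop=20, gens=40, seed=0):
            rng = np.random.default_rng(seed)
            genes = rng.integers(1, 255, pop).astype(float)
            for _ in range(gens):
                scores = np.array([fitness(t, img) for t in genes])
                parents = genes[np.argsort(scores)][-pop // 2:]      # selection
                children = (rng.permutation(parents) + parents) / 2  # crossover
                children += rng.normal(0, 5, children.size)          # mutation
                genes = np.clip(np.concatenate([parents, children]), 1, 254)
            return genes[np.argmax([fitness(t, img) for t in genes])]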

  19. Research on remote sensing image pixel attribute data acquisition method in AutoCAD

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoyang; Sun, Guangtong; Liu, Jun; Liu, Hui

    2013-07-01

    Remote sensing images are widely used in AutoCAD, but AutoCAD lacks remote sensing image processing functions. In this paper, ObjectARX is used as the secondary development tool, combined with the Image Engine SDK, to realize remote sensing image pixel attribute data acquisition in AutoCAD, which provides critical technical support for remote sensing image processing algorithms in the AutoCAD environment.

  20. Camera settings for UAV image acquisition

    NASA Astrophysics Data System (ADS)

    O'Connor, James; Smith, Mike J.; James, Mike R.

    2016-04-01

    The acquisition of aerial imagery has become more ubiquitous than ever in the geosciences due to the advent of consumer-grade UAVs capable of carrying imaging devices. These allow the collection of high-spatial-resolution data in a timely manner with little expertise. Conversely, the cameras/lenses used to acquire this imagery are often given less thought, and can be unfit for purpose. Given the weight constraints that are frequently an issue with UAV flights, low-payload UAVs (<1 kg) limit the types of cameras/lenses which can be used for specific surveys, and therefore the quality of imagery which can be acquired. This contribution discusses these constraints, which need to be considered when selecting a camera/lens for a UAV survey, and how they can best be optimized. These include balancing the camera exposure triangle (ISO, shutter speed, aperture) to ensure sharp, well-exposed imagery, and its interactions with other camera parameters (sensor size, focal length, pixel pitch) as well as UAV flight parameters (height, velocity).
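
    As a worked example of how the exposure triangle interacts with flight parameters, the longest usable shutter time can be bounded by requiring that forward motion blur stay under a fraction of one ground sample distance; the function, the nadir-pointing assumption, and the 0.5-pixel blur budget are illustrative, not from the paper:

        def max_shutter_s(height_m, velocity_ms, focal_mm, pitch_um, blur_px=0.5):
            # Ground sample distance for a nadir-pointing camera, then the
            # shutter time at which blur (velocity * time) reaches blur_px.
            gsd_m = height_m * (pitch_um * 1e-6) / (focal_mm * 1e-3)
            return blur_px * gsd_m / velocity_ms

        # 100 m AGL, 12 m/s ground speed, 16 mm lens, 4.8 um pixels:
        t = max_shutter_s(100, 12, 16, 4.8)
        print(f"GSD-limited shutter: 1/{1 / t:.0f} s")   # ~1/800 s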

  1. Image-Processing Educator

    NASA Technical Reports Server (NTRS)

    Gunther, F. J.

    1986-01-01

    Apple Image-Processing Educator (AIPE) explores ability of microcomputers to provide personalized computer-assisted instruction (CAI) in digital image processing of remotely sensed images. AIPE is "proof-of-concept" system, not polished production system. User-friendly prompts provide access to explanations of common features of digital image processing and of sample programs that implement these features.

  2. Reproducible high-resolution multispectral image acquisition in dermatology

    NASA Astrophysics Data System (ADS)

    Duliu, Alexandru; Gardiazabal, José; Lasser, Tobias; Navab, Nassir

    2015-07-01

    Multispectral image acquisitions are increasingly popular in dermatology, due to their improved spectral resolution which enables better tissue discrimination. Most applications however focus on restricted regions of interest, imaging only small lesions. In this work we present and discuss an imaging framework for high-resolution multispectral imaging on large regions of interest.

  3. Age of Acquisition and Imageability: A Cross-Task Comparison

    ERIC Educational Resources Information Center

    Ploetz, Danielle M.; Yates, Mark

    2016-01-01

    Previous research has reported an imageability effect on visual word recognition. Words that are high in imageability are recognised more rapidly than are those lower in imageability. However, later researchers argued that imageability was confounded with age of acquisition. In the current research, these two factors were manipulated in a…

  4. Star sensor image acquisition and preprocessing hardware system based on CMOS image sensor and FPGA

    NASA Astrophysics Data System (ADS)

    Hao, Xuetao; Jiang, Jie; Zhang, Guangjun

    2003-09-01

    A star sensor is an avionics instrument used to provide the absolute 3-axis attitude of a spacecraft utilizing star observations. It consists of an electronic camera and associated processing electronics. As an outcome of the advancing state of the art, new-generation star sensors feature faster operation, lower cost, lower power dissipation and smaller size than first-generation star sensors. This paper describes a front-end star sensor image acquisition and pre-processing hardware system based on CMOS image sensor and FPGA technology. In practice, star images are produced by a simple simulator on a PC, acquired by the CMOS image sensor, pre-processed by the FPGA, saved in SRAM, read out over the EPP protocol and validated by image processing software on the PC. The hardware part of the system acquires images through the CMOS image sensor controlled by the FPGA, processes the image data in an FPGA circuit module, and saves images to SRAM for test. It provides the basic image data for star recognition and attitude determination of spacecraft. As an important reference for developing a star sensor prototype, the system validates the performance advantages of the new-generation star sensor.
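
    The basic measurement such a front end feeds to star recognition is the set of star centroids; a minimal sketch of that computation (the threshold choice and the synthetic star are illustrative):

        import numpy as np
        from scipy import ndimage

        def star_centroids(img, k_sigma=5.0):
            # Threshold, label connected bright regions, and return
            # intensity-weighted centroids for the attitude solution.
            thresh = img.mean() + k_sigma * img.std()
            labels, n = ndimage.label(img > thresh)
            return ndimage.center_of_mass(img, labels, range(1, n + 1))

        img = np.zeros((128, 128))
        img[40:43, 60:63] = [[1, 4, 1], [4, 9, 4], [1, 4, 1]]  # synthetic star
        print(star_centroids(img))   # ~[(41.0, 61.0)]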

  5. Biomedical image processing

    SciTech Connect

    Huang, H.K.

    1981-01-01

    Biomedical image processing is a very broad field; it covers biomedical signal gathering, image forming, picture processing, and image display to medical diagnosis based on features extracted from images. This article reviews this topic in both its fundamentals and applications. In its fundamentals, some basic image processing techniques, including outlining, deblurring, noise cleaning, filtering, search, classical analysis and texture analysis, have been reviewed together with examples. The state-of-the-art image processing systems have been introduced and discussed in two categories: general-purpose image processing systems and image analyzers. In order for these systems to be effective for biomedical applications, special biomedical image processing languages have to be developed. The combination of both hardware and software leads to clinical imaging devices. Two different types of clinical imaging devices have been discussed. There are radiological imaging modalities, which include radiography, thermography, ultrasound, nuclear medicine and CT. Among these, thermography is the most noninvasive but is limited in application due to the low energy of its source. X-ray CT is excellent for static anatomical images and is moving toward the measurement of dynamic function, whereas nuclear imaging is moving toward organ metabolism and ultrasound is moving toward tissue physical characteristics. Heart imaging is one of the most interesting and challenging research topics in biomedical image processing; current methods, including the invasive technique of cineangiography and the noninvasive ultrasound, nuclear medicine, transmission, and emission CT methodologies, have been reviewed.

  6. Simultaneous acquisition of differing image types

    DOEpatents

    Demos, Stavros G

    2012-10-09

    A system in one embodiment includes an image forming device for forming an image from an area of interest containing different image components; an illumination device for illuminating the area of interest with light containing multiple components; at least one light source coupled to the illumination device, the at least one light source providing light to the illumination device containing different components, each component having distinct spectral characteristics and relative intensity; an image analyzer coupled to the image forming device, the image analyzer decomposing the image formed by the image forming device into multiple component parts based on type of imaging; and multiple image capture devices, each image capture device receiving one of the component parts of the image. A method in one embodiment includes receiving an image from an image forming device; decomposing the image formed by the image forming device into multiple component parts based on type of imaging; receiving the component parts of the image; and outputting image information based on the component parts of the image. Additional systems and methods are presented.

  7. Image Processing Software

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The Ames digital image velocimetry technology has been incorporated in a commercially available image processing software package that allows motion measurement of images on a PC alone. The software, manufactured by Werner Frei Associates, is IMAGELAB FFT. IMAGELAB FFT is a general purpose image processing system with a variety of other applications, among them image enhancement of fingerprints and use by banks and law enforcement agencies for analysis of videos run during robberies.

  8. Hyperspectral image processing methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hyperspectral image processing refers to the use of computer algorithms to extract, store and manipulate both spatial and spectral information contained in hyperspectral images across the visible and near-infrared portion of the electromagnetic spectrum. A typical hyperspectral image processing work...

  9. Acquisition by Processing Theory: A Theory of Everything?

    ERIC Educational Resources Information Center

    Carroll, Susanne E.

    2004-01-01

    Truscott and Sharwood Smith (henceforth T&SS) propose a novel theory of language acquisition, "Acquisition by Processing Theory" (APT), designed to account for both first and second language acquisition, monolingual and bilingual speech perception and parsing, and speech production. This is a tall order. Like any theoretically ambitious…

  10. Subroutines For Image Processing

    NASA Technical Reports Server (NTRS)

    Faulcon, Nettie D.; Monteith, James H.; Miller, Keith W.

    1988-01-01

    Image Processing Library computer program, IPLIB, is collection of subroutines facilitating use of COMTAL image-processing system driven by HP 1000 computer. Functions include addition or subtraction of two images with or without scaling, display of color or monochrome images, digitization of image from television camera, display of test pattern, manipulation of bits, and clearing of screen. Provides capability to read or write points, lines, and pixels from image; read or write at location of cursor; and read or write array of integers into COMTAL memory. Written in FORTRAN 77.

  11. Medical image processing system

    NASA Astrophysics Data System (ADS)

    Wang, Dezong; Wang, Jinxiang

    1994-12-01

    In this paper a medical image processing system is described. The system, named the NAI200 Medical Image Processing System, has been appraised by the Chinese Government; principles and cases are provided here. Many kinds of pictures are used in modern medical diagnosis, for example B-mode ultrasound, X-ray, CT and MRI. Sometimes the pictures are not good enough for diagnosis: noise obscures the real situation in these pictures, which means image processing is needed. There are four functions in the system. The first part is image processing, involving more than thirty-four programs. The second part is calculation: the areas or volumes of single or multiple tissues are calculated. Three-dimensional reconstruction is the third part: stereo images of organs or tumors are reconstructed from cross-sections. The last part is image storage: all pictures can be transformed into digital images and then stored on hard disk or floppy disk. In this paper not only all functions of the system are introduced, but the basic principles of these functions are also explained in detail. This system has been applied in hospitals, and the images of hundreds of cases have been processed. We describe the functions in combination with real cases; here we introduce only a few examples.

  12. Image processing in medicine

    NASA Astrophysics Data System (ADS)

    Dallas, William J.; Roehrig, Hans

    2001-12-01

    This article is divided into two parts: the first is an opinion, the second is a description. The opinion is that diagnostic medical imaging is not a detection problem. The description is of a specific medical image-processing program. Why the opinion? If medical imaging were a detection problem, then image processing would be unimportant. However, image processing is crucial. We illustrate this fact using three examples: ultrasound, magnetic resonance imaging and, most poignantly, computed radiography. Although the examples are anecdotal, they are illustrative. The description is of the image-processing program ImprocRAD, written by one of the authors (Dallas). First we discuss the motivation for creating yet another image processing program, including system characterization, which is an area of expertise of one of the authors (Roehrig). We then look at the structure of the program and finally, to the point, the specific application: mammographic diagnostic reading. We mention rapid display of mammogram image sets and then discuss processing. In that context, we describe a real-time image-processing tool we term the MammoGlass.

  13. Image Acquisition and Quality in Digital Radiography.

    PubMed

    Alexander, Shannon

    2016-09-01

    Medical imaging has undergone dramatic changes and technological breakthroughs since the introduction of digital radiography. This article presents information on the development of digital radiography and types of digital radiography systems. Aspects of image quality and radiation exposure control are highlighted as well. In addition, the article includes related workplace changes and medicolegal considerations in the digital radiography environment. PMID:27601691

  14. Acquisition by Processing: A Modular Perspective on Language Development

    ERIC Educational Resources Information Center

    Truscott, John; Smith, Mike Sharwood

    2004-01-01

    The paper offers a model of language development, first and second, within a processing perspective. We first sketch a modular view of language, in which competence is embodied in the processing mechanisms. We then propose a novel approach to language acquisition (Acquisition by Processing Theory, or APT), in which development of the module occurs…

  15. Apple Image Processing Educator

    NASA Technical Reports Server (NTRS)

    Gunther, F. J.

    1981-01-01

    A software system design is proposed and demonstrated with pilot-project software. The system permits the Apple II microcomputer to be used for personalized computer-assisted instruction in the digital image processing of LANDSAT images. The programs provide data input, menu selection, graphic and hard-copy displays, and both general and detailed instructions. The pilot-project results are considered to be successful indicators of the capabilities and limits of microcomputers for digital image processing education.

  16. Image Processing Software

    NASA Technical Reports Server (NTRS)

    1992-01-01

    To convert raw data into environmental products, the National Weather Service and other organizations use the Global 9000 image processing system marketed by Global Imaging, Inc. The company's GAE software package is an enhanced version of the TAE, developed by Goddard Space Flight Center to support remote sensing and image processing applications. The system can be operated in three modes and is combined with HP Apollo workstation hardware.

  17. Image processing mini manual

    NASA Technical Reports Server (NTRS)

    Matthews, Christine G.; Posenau, Mary-Anne; Leonard, Desiree M.; Avis, Elizabeth L.; Debure, Kelly R.; Stacy, Kathryn; Vonofenheim, Bill

    1992-01-01

    The intent is to provide an introduction to the image processing capabilities available at the Langley Research Center (LaRC) Central Scientific Computing Complex (CSCC). Various image processing software components are described. Information is given concerning the use of these components in the Data Visualization and Animation Laboratory at LaRC.

  18. Chemical Applications of a Programmable Image Acquisition System

    NASA Astrophysics Data System (ADS)

    Ogren, Paul J.; Henry, Ian; Fletcher, Steven E. S.; Kelly, Ian

    2003-06-01

    Image analysis is widely used in chemistry, both for rapid qualitative evaluations using techniques such as thin layer chromatography (TLC) and for quantitative purposes such as well-plate measurements of analyte concentrations or fragment-size determinations in gel electrophoresis. This paper describes a programmable system for image acquisition and processing that is currently used in the laboratories of our organic and physical chemistry courses. It has also been used in student research projects in analytical chemistry and biochemistry. The potential range of applications is illustrated by brief presentations of four examples: (1) using well-plate optical transmission data to construct a standard concentration absorbance curve; (2) the quantitative analysis of acetaminophen in Tylenol and acetylsalicylic acid in aspirin using TLC with fluorescence detection; (3) the analysis of electrophoresis gels to determine DNA fragment sizes and amounts; and, (4) using color change to follow reaction kinetics. The supplemental material in JCE Online contains information on two additional examples: deconvolution of overlapping bands in protein gel electrophoresis, and the recovery of data from published images or graphs. The JCE Online material also presents additional information on each example, on the system hardware and software, and on the data analysis methodology.
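
    Example (1) above amounts to converting well intensities to absorbance and fitting a Beer-Lambert line; the intensities and concentrations below are made-up stand-ins for values that would be extracted from the acquired well-plate image:

        import numpy as np

        I0 = 240.0                                        # blank-well intensity
        I = np.array([238.0, 190.0, 151.0, 120.0, 95.0])  # standard wells
        conc = np.array([0.0, 0.1, 0.2, 0.3, 0.4])        # standard concentrations

        A = -np.log10(I / I0)                 # Beer-Lambert absorbance
        slope, intercept = np.polyfit(conc, A, 1)
        print(f"A = {slope:.3f} c + {intercept:.3f}")

        # Unknown sample: invert the fitted standard curve.
        A_unknown = -np.log10(130.0 / I0)
        print("c =", (A_unknown - intercept) / slope)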

  19. Image Processing System

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Mallinckrodt Institute of Radiology (MIR) is using a digital image processing system which employs NASA-developed technology. MIR's computer system is the largest radiology system in the world. It is used in diagnostic imaging. Blood vessels are injected with x-ray dye, and the images which are produced indicate whether arteries are hardened or blocked. A computer program developed by Jet Propulsion Laboratory known as Mini-VICAR/IBIS was supplied to MIR by COSMIC. The program provides the basis for developing the computer imaging routines for data processing, contrast enhancement and picture display.

  20. Efficient lossy compression for compressive sensing acquisition of images in compressive sensing imaging systems.

    PubMed

    Li, Xiangwei; Lan, Xuguang; Yang, Meng; Xue, Jianru; Zheng, Nanning

    2014-12-05

    Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, the general image compression solution may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of the CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which could achieve better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the obtained CS measurements from CS acquisition without knowing any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4-2 dB compared with the current state of the art, while maintaining a low computational complexity.
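
    The measurement-domain quantization step can be illustrated with a uniform scalar quantizer over random-projection measurements; this toy encoder/decoder is a sketch of the general idea, not the authors' codec:

        import numpy as np

        rng = np.random.default_rng(1)
        x = rng.standard_normal(1024)                   # stand-in for an image block
        Phi = rng.standard_normal((256, 1024)) / np.sqrt(256)
        y = Phi @ x                                     # CS measurements

        def quantize(y, bits=6):
            # Uniform quantization with a step derived only from the
            # measurements themselves (no prior image statistics needed).
            step = (y.max() - y.min()) / (2 ** bits - 1)
            idx = np.round((y - y.min()) / step).astype(int)
            return idx, y.min(), step                   # what the encoder sends

        idx, y0, step = quantize(y)
        y_hat = y0 + idx * step                         # decoder reconstruction
        print("quantization SNR (dB):",
              10 * np.log10(np.sum(y ** 2) / np.sum((y - y_hat) ** 2)))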

  1. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    PubMed Central

    Li, Xiangwei; Lan, Xuguang; Yang, Meng; Xue, Jianru; Zheng, Nanning

    2014-01-01

    Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, the general image compression solution may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of the CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which could achieve better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the obtained CS measurements from CS acquisition without knowing any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4-2 dB compared with the current state of the art, while maintaining a low computational complexity. PMID:25490597

  2. 29. Perimeter acquisition radar building room #318, data processing system ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    29. Perimeter acquisition radar building room #318, data processing system area; data processor maintenance and operations center, showing data processing consoles - Stanley R. Mickelsen Safeguard Complex, Perimeter Acquisition Radar Building, Limited Access Area, between Limited Access Patrol Road & Service Road A, Nekoma, Cavalier County, ND

  3. Spatial arrangement of color filter array for multispectral image acquisition

    NASA Astrophysics Data System (ADS)

    Shrestha, Raju; Hardeberg, Jon Y.; Khan, Rahat

    2011-03-01

    In the past few years there has been a significant volume of research work carried out in the field of multispectral image acquisition. The focus of most of this work has been to facilitate multispectral image acquisition systems that usually require multiple subsequent shots (e.g. systems based on filter wheels, liquid crystal tunable filters, or active lighting). Recently, an alternative approach for one-shot multispectral image acquisition has been proposed, based on an extension of the color filter array (CFA) standard to produce more than three channels. We can thus introduce the concept of a multispectral color filter array (MCFA). But this field has not been much explored; in particular, little focus has been given to developing systems aimed at the reconstruction of scene spectral reflectance. In this paper, we have explored how the spatial arrangement of a multispectral color filter array affects the acquisition accuracy, constructing MCFAs of different sizes. We have simulated acquisitions of several spectral scenes using different numbers of filters/channels, and compared the results with those obtained by the conventional regular MCFA arrangement, evaluating the precision of the reconstructed scene spectral reflectance in terms of spectral RMS error and colorimetric ΔE*ab color differences. It has been found that the precision and the quality of the reconstructed images are significantly influenced by the spatial arrangement of the MCFA, and the effect becomes more prominent as the number of channels increases. We believe that MCFA-based systems can be a viable alternative for affordable acquisition of multispectral color images, in particular for applications where spatial resolution can be traded off for spectral resolution. We have shown that the spatial arrangement of the array is an important design issue.
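
    The spectral RMS error metric used for the comparisons is straightforward to state; a small sketch (shapes and data are illustrative):

        import numpy as np

        def spectral_rms(r_ref, r_est):
            # RMS error between reference and reconstructed reflectance,
            # computed per pixel over the wavelength axis.
            return np.sqrt(np.mean((r_ref - r_est) ** 2, axis=-1))

        r_ref = np.random.rand(10, 31)                 # 10 pixels, 31 bands
        r_est = r_ref + 0.01 * np.random.randn(10, 31)
        print(spectral_rms(r_ref, r_est).mean())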

  4. Single Acquisition Quantitative Single Point Electron Paramagnetic Resonance Imaging

    PubMed Central

    Jang, Hyungseok; Subramanian, Sankaran; Devasahayam, Nallathamby; Saito, Keita; Matsumoto, Shingo; Krishna, Murali C; McMillan, Alan B

    2013-01-01

    Purpose Electron paramagnetic resonance imaging (EPRI) has emerged as a promising non-invasive technology to dynamically image tissue oxygenation. Due to its extremely short spin-spin relaxation times, EPRI benefits from a single-point imaging (SPI) scheme where the entire FID signal is captured using pure phase encoding. However, direct T2*/pO2 quantification is inhibited due to constant magnitude gradients which result in time-decreasing FOV. Therefore, conventional acquisition techniques require repeated imaging experiments with differing gradient amplitudes (typically 3), which results in long acquisition time. Methods In this study, gridding was evaluated as a method to reconstruct images with equal FOV to enable direct T2*/pO2 quantification within a single imaging experiment. Additionally, an enhanced reconstruction technique that shares high spatial k-space regions throughout different phase encoding time delays was investigated (k-space extrapolation). Results The combined application of gridding and k-space extrapolation enables pixelwise quantification of T2* from a single acquisition with improved image quality across a wide range of phase encoding delay times. The calculated T2*/pO2 does not vary across this time range. Conclusion By utilizing gridding and k-space extrapolation, accurate T2*/pO2 quantification can be achieved within a single dataset to allow enhanced temporal resolution (by a factor of 3). PMID:23913515

  5. Design of area array CCD image acquisition and display system based on FPGA

    NASA Astrophysics Data System (ADS)

    Li, Lei; Zhang, Ning; Li, Tianting; Pan, Yue; Dai, Yuming

    2014-09-01

    With the development of science and technology, the CCD (charge-coupled device) has been widely applied in various fields and plays an important role in modern sensing systems, so researching a real-time image acquisition and display scheme based on a CCD device has great significance. This paper introduces an image data acquisition and display system for an area array CCD based on an FPGA. Several key technical challenges of the system are analyzed and solutions put forward. The FPGA works as the core processing unit in the system and controls the overall timing sequence. The ICX285AL area array CCD image sensor produced by SONY Corporation is used in the system. The FPGA drives the area array CCD, and an analog front end (AFE) processes the CCD image signal, including amplification, filtering, noise elimination and correlated double sampling (CDS). An AD9945 produced by ADI Corporation converts the analog signal to a digital signal. A Camera Link high-speed data transmission circuit was developed, the PC-side image acquisition software was designed, and real-time display of images was realized. Practical testing indicates that the system is stable and reliable in image acquisition and control, and its performance meets the actual project requirements.
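
    The CDS step in the AFE reduces, per pixel, to subtracting the reset sample from the signal sample so that offset and kTC noise common to both cancel; a tiny sketch with illustrative ADC counts:

        import numpy as np

        def cds(reset_level, signal_level):
            # Correlated double sampling: the difference removes noise
            # shared by the reset and signal samples of each pixel.
            return signal_level - reset_level

        reset = np.array([512.0, 515.0, 509.0])    # ADC counts, illustrative
        signal = np.array([1536.0, 1204.0, 898.0])
        print(cds(reset, signal))                  # net photo-signal per pixel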

  6. Process data acquisition: real-time and historical interfaces

    NASA Astrophysics Data System (ADS)

    Rice, Gordon; Moreno, Richard; King, Michael S.

    1997-01-01

    With the advent of touch probe technology, it was discovered that current closed-architecture controllers do not provide adequate resources to support the implementation of process data acquisition on the shop floor. At AlliedSignal, a process data acquisition system has been developed for a flexible manufacturing system utilizing touch probes and customized software which allows fixture and cutting tool related information for an entire process to be captured and stored for off-line analysis. The implementation of this system, with its difficulties and pitfalls, will be presented, along with the functionality required for an open architecture controller to properly support process data acquisition.

  7. Process data acquisition: Real time and historical interfaces

    SciTech Connect

    Rice, G.; Moreno, R.; King, M.

    1996-11-01

    With the advent of touch probe technology, it was discovered that current closed architecture controllers do not provide adequate resources to support the implementation of process data acquisition on the shop floor. At AlliedSignal Federal Manufacturing & Technologies, a process data acquisition system has been developed for a flexible manufacturing system utilizing touch probes and customized software which allows fixture and cutting tool related information for an entire process to be captured and stored for off-line analysis. The implementation of this system, the difficulties and pitfalls, will be presented along with the functionality required for an open architecture controller to properly support process data acquisition.

  8. BAOlab: Image processing program

    NASA Astrophysics Data System (ADS)

    Larsen, Søren S.

    2014-03-01

    BAOlab is an image processing package written in C that should run on nearly any UNIX system with just the standard C libraries. It reads and writes images in standard FITS format; 16- and 32-bit integer as well as 32-bit floating-point formats are supported. Multi-extension FITS files are currently not supported. Among its tools are ishape for size measurements of compact sources, mksynth for generating synthetic images consisting of a background signal including Poisson noise and a number of pointlike sources, imconvol for convolving two images (a “source” and a “kernel”) with each other using fast Fourier transforms (FFTs) and storing the output as a new image, and kfit2d for fitting a two-dimensional King model to an image.
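
    The FFT strategy that imconvol uses for convolution can be sketched in NumPy (this stands in for, and is not, BAOlab's C implementation):

        import numpy as np

        def fft_convolve(source, kernel):
            # Circular convolution of two equal-sized images via FFTs;
            # ifftshift moves the centered kernel's peak to the origin.
            F = np.fft.rfft2(source) * np.fft.rfft2(np.fft.ifftshift(kernel))
            return np.fft.irfft2(F, s=source.shape)

        yy, xx = np.mgrid[-64:64, -64:64]
        psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * 2.0 ** 2))
        psf /= psf.sum()
        img = np.zeros((128, 128)); img[64, 64] = 1.0
        blurred = fft_convolve(img, psf)           # point source -> PSF image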

  9. Methods in Astronomical Image Processing

    NASA Astrophysics Data System (ADS)

    Jörsäter, S.

    A Brief Introductory Note; History of Astronomical Imaging; Astronomical Image Data; Images in Various Formats; Digitized Image Data; Digital Image Data; Philosophy of Astronomical Image Processing; Properties of Digital Astronomical Images; Human Image Processing; Astronomical vs. Computer Science Image Processing; Basic Tools of Astronomical Image Processing; Display Applications; Calibration of Intensity Scales; Calibration of Length Scales; Image Re-shaping; Feature Enhancement; Noise Suppression; Noise and Error Analysis; Image Processing Packages: Design of AIPS and MIDAS; AIPS; MIDAS; Reduction of CCD Data; Bias Subtraction; Clipping; Preflash Subtraction; Dark Subtraction; Flat Fielding; Sky Subtraction; Extinction Correction; Deconvolution Methods; Rebinning/Combining; Summary and Prospects for the Future.
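
    The CCD reduction chain in this outline (bias subtraction, dark subtraction, flat fielding) has a standard arithmetic core; a sketch assuming master calibration frames have already been combined, with sky subtraction and extinction correction omitted:

        import numpy as np

        def calibrate(raw, bias, dark, flat, exp_s, dark_exp_s):
            # Exposure-scaled dark subtraction after bias removal,
            # then division by a median-normalized flat field.
            dark_rate = (dark - bias) / dark_exp_s
            science = raw - bias - dark_rate * exp_s
            return science / (flat / np.median(flat))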

  10. Automatic image acquisition processor and method

    DOEpatents

    Stone, William J.

    1986-01-01

    A computerized method and point location system apparatus is disclosed for ascertaining the center of a primitive or fundamental object whose shape and approximate location are known. The technique involves obtaining an image of the object, selecting a trial center, and generating a locus of points having a predetermined relationship with the center. Such a locus of points could include a circle. The number of points overlying the object in each quadrant is obtained and the counts of these points per quadrant are compared. From this comparison, error signals are provided to adjust the relative location of the trial center. This is repeated until the trial center overlies the geometric center within the predefined accuracy limits.
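
    The patent's procedure can be sketched as an iterative loop: sample a circle around the trial center, count the points overlying the object in each quadrant, and nudge the center until the counts balance; all parameters below are illustrative:

        import numpy as np

        def refine_center(mask, cx, cy, radius, iters=50):
            # `mask` is a boolean image of the object; the circle of test
            # points plays the role of the patent's locus of points.
            ang = np.linspace(0, 2 * np.pi, 360, endpoint=False)
            for _ in range(iters):
                px = np.clip((cx + radius * np.cos(ang)).astype(int),
                             0, mask.shape[1] - 1)
                py = np.clip((cy + radius * np.sin(ang)).astype(int),
                             0, mask.shape[0] - 1)
                inside = mask[py, px]
                right = inside[np.cos(ang) > 0].sum()
                left = inside[np.cos(ang) < 0].sum()
                up = inside[np.sin(ang) > 0].sum()
                down = inside[np.sin(ang) < 0].sum()
                if right == left and up == down:
                    break
                cx += np.sign(right - left)   # error signals move the center
                cy += np.sign(up - down)
            return cx, cy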

  11. Automatic image acquisition processor and method

    DOEpatents

    Stone, W.J.

    1984-01-16

    A computerized method and point location system apparatus is disclosed for ascertaining the center of a primitive or fundamental object whose shape and approximate location are known. The technique involves obtaining an image of the object, selecting a trial center, and generating a locus of points having a predetermined relationship with the center. Such a locus of points could include a circle. The number of points overlying the object in each quadrant is obtained and the counts of these points per quadrant are compared. From this comparison, error signals are provided to adjust the relative location of the trial center. This is repeated until the trial center overlies the geometric center within the predefined accuracy limits.

  12. The Power of Imageability: How the Acquisition of Inflected Forms Is Facilitated in Highly Imageable Verbs and Nouns in Czech Children

    ERIC Educational Resources Information Center

    Smolík, Filip; Kríž, Adam

    2015-01-01

    Imageability is the ability of words to elicit mental sensory images of their referents. Recent research has suggested that imageability facilitates the processing and acquisition of inflected word forms. The present study examined whether inflected word forms are acquired earlier in highly imageable words in Czech children. Parents of 317…

  13. Imaging and Data Acquisition in Clinical Trials for Radiation Therapy.

    PubMed

    FitzGerald, Thomas J; Bishop-Jodoin, Maryann; Followill, David S; Galvin, James; Knopp, Michael V; Michalski, Jeff M; Rosen, Mark A; Bradley, Jeffrey D; Shankar, Lalitha K; Laurie, Fran; Cicchetti, M Giulia; Moni, Janaki; Coleman, C Norman; Deye, James A; Capala, Jacek; Vikram, Bhadrasain

    2016-02-01

    Cancer treatment evolves through oncology clinical trials. Cancer trials are multimodal and complex. Assuring that high-quality data are available to answer not only study objectives but also questions not anticipated at study initiation is the role of quality assurance. The National Cancer Institute reorganized its cancer clinical trials program in 2014. The National Clinical Trials Network (NCTN) was formed, and within it was established a Diagnostic Imaging and Radiation Therapy Quality Assurance Organization. This organization, the Imaging and Radiation Oncology Core (IROC) Group, consists of 6 quality assurance centers that provide imaging and radiation therapy quality assurance for the NCTN. Sophisticated imaging is used for cancer diagnosis, treatment, and management, as well as for image-driven technologies to plan and execute radiation treatment. Integration of imaging and radiation oncology data acquisition, review, management, and archive strategies is essential for trial compliance and future research. Lessons learned from previous trials provide evidence to support diagnostic imaging and radiation therapy data acquisition in NCTN trials.

  14. Applications of digital image acquisition in anthropometry

    NASA Technical Reports Server (NTRS)

    Woolford, B.; Lewis, J. L.

    1981-01-01

    A description is given of a video kinesimeter, a device for the automatic real-time collection of kinematic and dynamic data. Based on the detection of a single bright spot by three TV cameras, the system provides automatic real-time recording of three-dimensional position and force data. It comprises three cameras, two incandescent lights, a voltage comparator circuit, a central control unit, and a mass storage device. The control unit determines the signal threshold for each camera before testing, sequences the lights, synchronizes and analyzes the scan voltages from the three cameras, digitizes force from a dynamometer, and codes the data for transmission to a floppy disk for recording. Two of the three cameras face each other along the 'X' axis; the third camera, which faces the center of the line between the first two, defines the 'Y' axis. An image from the 'Y' camera and either 'X' camera is necessary for determining the three-dimensional coordinates of the point.
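
    The geometric core of recovering a 3-D point from the 'Y' camera plus one 'X' camera is linear triangulation; the sketch below uses standard direct-linear-transform algebra with calibrated 3x4 projection matrices and is not the 1981 hardware's actual computation:

        import numpy as np

        def triangulate(P1, P2, uv1, uv2):
            # Each pixel observation contributes two linear constraints on
            # the homogeneous world point X; solve A X = 0 by SVD.
            A = np.vstack([
                uv1[0] * P1[2] - P1[0],
                uv1[1] * P1[2] - P1[1],
                uv2[0] * P2[2] - P2[0],
                uv2[1] * P2[2] - P2[1],
            ])
            X = np.linalg.svd(A)[2][-1]
            return X[:3] / X[3]               # homogeneous -> Euclidean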

  15. Image processing occupancy sensor

    DOEpatents

    Brackney, Larry J.

    2016-09-27

    A system and method of detecting occupants in a building automation system environment using image-based occupancy detection and position determination. In one example, the system includes an image processing occupancy sensor that detects the number and position of occupants within a space that has controllable building elements such as lighting and ventilation diffusers. Based on the number and position of the occupants, the system can finely control the elements to optimize conditions for the occupants and optimize energy usage, among other advantages.
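
    The patent leaves the detection algorithm open. One common realization is background subtraction followed by connected-component analysis; the sketch below (hypothetical function name, thresholds illustrative) uses scikit-image.

```python
import numpy as np
from skimage.measure import label, regionprops

def detect_occupants(frame, background, threshold=25, min_area=500):
    """Return occupant count and centroids (row, col) from a grayscale
    frame by differencing against an empty-room reference image."""
    moving = np.abs(frame.astype(int) - background.astype(int)) > threshold
    regions = regionprops(label(moving))
    centroids = [r.centroid for r in regions if r.area >= min_area]
    return len(centroids), centroids
```

    Centroids returned this way could then be mapped onto lighting or diffuser zones for the fine-grained control the abstract describes.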

  16. Image acquisition planning for the CHRIS sensor onboard PROBA

    NASA Astrophysics Data System (ADS)

    Fletcher, Peter A.

    2004-10-01

    The CHRIS (Compact High Resolution Imaging Spectrometer) instrument was launched onboard the European Space Agency (ESA) PROBA satellite on 22 October 2001. CHRIS can acquire up to 63 bands of hyperspectral data at a ground spatial resolution of 36m. Alternatively, the instrument can be configured to acquire 18 bands of data with a spatial resolution of 17m. PROBA, by virtue of its agile pointing capability, enables CHRIS to acquire five images of the selected site from different angles. Two sites can be acquired every 24 hours. The hyperspectral and multi-angle capability of CHRIS makes it an important resource for studying BRDF phenomena of vegetation. Other applications include coastal and inland waters, wild fires, education and public relations. An effective data acquisition planning procedure has been implemented and since mid-2002 users have been receiving data for analysis. A cloud prediction routine has been adopted that maximises the image acquisition capacity of CHRIS-PROBA. Image acquisition planning is carried out by RSAC Ltd on behalf of ESA and in co-operation with Sira Technology Ltd and Redu, the ESA ground station in Belgium, responsible for CHRIS-PROBA.

  17. Image-processing pipelines: applications in magnetic resonance histology

    NASA Astrophysics Data System (ADS)

    Johnson, G. Allan; Anderson, Robert J.; Cook, James J.; Long, Christopher; Badea, Alexandra

    2016-03-01

    Image processing has become ubiquitous in imaging research—so ubiquitous that it is easy to lose track of how diverse this processing has become. The Duke Center for In Vivo Microscopy has pioneered the development of Magnetic Resonance Histology (MRH), which generates large multidimensional data sets that can easily reach into the tens of gigabytes. A series of dedicated image-processing workstations and associated software have been assembled to optimize each step of acquisition, reconstruction, post-processing, registration, visualization, and dissemination. This talk will describe the image-processing pipelines from acquisition to dissemination that have become critical to our everyday work.

  18. Q-ball imaging with PROPELLER EPI acquisition.

    PubMed

    Chou, Ming-Chung; Huang, Teng-Yi; Chung, Hsiao-Wen; Hsieh, Tsyh-Jyi; Chang, Hing-Chiu; Chen, Cheng-Yu

    2013-12-01

    Q-ball imaging (QBI) is an imaging technique that is capable of resolving intravoxel fiber crossings; however, the signal readout based on echo-planar imaging (EPI) introduces geometric distortions in the presence of susceptibility gradients. This study proposes an imaging technique that reduces susceptibility distortions in QBI by short-axis PROPELLER EPI acquisition. Conventional QBI and PROPELLER QBI data were acquired from two 3T MR scans of the brains of five healthy subjects. Prior to the PROPELLER reconstruction, residual distortions in single-blade low-resolution b0 and diffusion-weighted images (DWIs) were minimized by linear affine and nonlinear diffeomorphic demon registrations. Subsequently, the PROPELLER keyhole reconstruction was applied to the corrected DWIs to obtain high-resolution PROPELLER DWIs. The generalized fractional anisotropy and orientation distribution function maps contained fewer distortions in PROPELLER QBI than in conventional QBI, and the fiber tracts more closely matched the brain anatomy depicted by turbo spin-echo (TSE) T2-weighted imaging (T2WI). Furthermore, for a fixed echo time (TE), PROPELLER QBI enabled a shorter scan time than conventional QBI. We conclude that PROPELLER QBI can reduce susceptibility distortions without lengthening the acquisition time and is suitable for tracing neuronal fiber tracts in the human brain.

  19. Programmable Image Processing Element

    NASA Astrophysics Data System (ADS)

    Eversole, W. L.; Salzman, J. F.; Taylor, F. V.; Harland, W. L.

    1982-07-01

    The algorithmic solution to many image-processing problems frequently uses sums of products where each multiplicand is an input sample (pixel) and each multiplier is a stored coefficient. This paper presents a large-scale integrated circuit (LSIC) implementation that provides accumulation of nine products and discusses its evolution from design through application. A read-only memory (ROM) accumulate algorithm is used to perform the multiplications and is the key to one-chip implementation. The ROM function is actually implemented with erasable programmable ROM (EPROM) to allow reprogramming of the circuit to a variety of different functions. A real-time brassboard is being constructed to demonstrate four different image-processing operations on TV images.
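
    The ROM-accumulate scheme is easy to emulate in software: each of the nine coefficients gets a lookup table of precomputed products, so the hardware only ever sums table outputs. A Python emulation, assuming an 8-bit input image (the 256-entry table size is that assumption):

```python
import numpy as np

def rom_accumulate(image, kernel):
    """Emulate a 3x3 sum-of-products with per-coefficient lookup tables,
    as a multiplier-free EPROM implementation would compute it."""
    assert kernel.shape == (3, 3) and image.dtype == np.uint8
    tables = [k * np.arange(256) for k in kernel.ravel()]   # product LUTs
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for idx, (di, dj) in enumerate(np.ndindex(3, 3)):
        out += tables[idx][image[di:di + h - 2, dj:dj + w - 2]]
    return out
```

    Reprogramming the tables changes the function, which mirrors the EPROM reprogrammability described above.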

  20. Image-Processing Program

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Hull, D. R.

    1994-01-01

    IMAGEP manipulates digital image data to effect various processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines; within the subroutines are sub-subroutines, also selected via the keyboard. The algorithm has possible scientific, industrial, and biomedical applications in the study of flows in materials, the analysis of steels and ores, and pathology, respectively.

  1. Image processing and reconstruction

    SciTech Connect

    Chartrand, Rick

    2012-06-15

    This talk will examine some mathematical methods for image processing and the solution of underdetermined, linear inverse problems. The talk will have a tutorial flavor, mostly accessible to undergraduates, while still presenting research results. The primary approach is the use of optimization problems. We will find that relaxing the usual assumption of convexity will give us much better results.

  2. Image Processing for Teaching.

    ERIC Educational Resources Information Center

    Greenberg, R.; And Others

    1993-01-01

    The Image Processing for Teaching project provides a powerful medium to excite students about science and mathematics, especially children from minority groups and others whose needs have not been met by traditional teaching. Using professional-quality software on microcomputers, students explore a variety of scientific data sets, including…

  3. Smartphone Image Acquisition During Postmortem Monocular Indirect Ophthalmoscopy.

    PubMed

    Lantz, Patrick E; Schoppe, Candace H; Thibault, Kirk L; Porter, William T

    2016-01-01

    The medical usefulness of smartphones continues to evolve as third-party applications exploit and expand on the smartphones' interface and capabilities. This technical report describes smartphone still-image capture techniques and video-sequence recording capabilities during postmortem monocular indirect ophthalmoscopy. Using these devices and techniques, practitioners can create photographic documentation of fundal findings, clinically and at autopsy, without the expense of a retinal camera. Smartphone image acquisition of fundal abnormalities can promote ophthalmological telemedicine--especially in regions or countries with limited resources--and facilitate prompt, accurate, and unbiased documentation of retinal hemorrhages in infants and young children. PMID:26248715

  4. The Gestalt Process Approach and Word Acquisition.

    ERIC Educational Resources Information Center

    McAllister, Elizabeth

    To whet the curiosity and interest of teachers who may be frustrated with the reading vocabulary achievement of pupils, an informal study compared Piaget's cognitive development theory, recent brain research, and the reading process, and examined how the theory and research apply to reading instruction. The Gestalt Process Approach to teaching…

  5. Language Processes and Second-Language Acquisition.

    ERIC Educational Resources Information Center

    Collins, Larry Lloyd

    A review of the literature and research concerning the language processes of listening, speaking, reading, and writing, and an analysis of the findings regarding the characteristics of these processes and their relationship to the second-language learner led to the following conclusions: (1) the circumstances under which the first language is…

  6. Image-plane processing of visual information

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.

    1984-01-01

    Shannon's theory of information is used to optimize the optical design of sensor-array imaging systems which use neighborhood image-plane signal processing for enhancing edges and compressing dynamic range during image formation. The resultant edge-enhancement, or band-pass-filter, response is found to be very similar to that of human vision. Comparisons of traits in human vision with results from information theory suggest that: (1) Image-plane processing, like preprocessing in human vision, can improve visual information acquisition for pattern recognition when resolving power, sensitivity, and dynamic range are constrained. Improvements include reduced sensitivity to changes in light levels, reduced signal dynamic range, reduced data transmission and processing, and reduced aliasing and photosensor noise degradation. (2) Information content can be an appropriate figure of merit for optimizing the optical design of imaging systems when visual information is acquired for pattern recognition. The design trade-offs involve spatial response, sensitivity, and sampling interval.
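
    The paper derives its band-pass response from information theory; as a rough stand-in, a difference-of-Gaussians filter produces the same kind of center-surround edge enhancement (the two scales below are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_bandpass(image, sigma_center=1.0, sigma_surround=3.0):
    """Center-surround (band-pass) filtering: a fine-scale blur minus a
    coarse-scale blur enhances edges and suppresses slow luminance trends."""
    img = np.asarray(image, dtype=float)
    return gaussian_filter(img, sigma_center) - gaussian_filter(img, sigma_surround)
```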

  7. Image processing technology

    SciTech Connect

    Van Eeckhout, E.; Pope, P.; Balick, L.

    1996-07-01

    This is the final report of a two-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The primary objective of this project was to advance image processing and visualization technologies for environmental characterization. This was effected by developing and implementing analyses of remote sensing data from satellite and airborne platforms, and demonstrating their effectiveness in visualization of environmental problems. Many sources of information were integrated as appropriate using geographic information systems.

  8. Design and characterization of an image acquisition system and its optomechanical module for chip defects inspection on chip sorters

    NASA Astrophysics Data System (ADS)

    Chen, Ming-Fu; Huang, Po-Hsuan; Chen, Yung-Hsiang; Cheng, Yu-Cheng

    2011-08-01

    The chip sorter is one of the packaging facilities in a chip factory. Defects occur in a small fraction of chips during manufacturing; if a defect is larger than the criterion affecting chip quality, the flawed chip has to be detected and removed. Defect inspection systems are usually built around frame CCD imagers, which have several drawbacks: a pause-and-image acquisition mechanism, complicated acquisition control, and easily damaged moving components. In addition, the several images acquired per chip have to be processed radiometrically and geometrically and then mosaicked together before inspection, which degrades the accuracy and efficiency of defect inspection. The choice of image acquisition system and opto-mechanical module is therefore critical for the inspection system. In this article, the design and characterization of a new image acquisition system and its opto-mechanical module are presented. Defects larger than 15μm have to be inspected, and the inspection throughput shall be greater than 0.6 m/sec. The image acquisition system therefore requires (1) a resolution of 5μm and 10μm for the optical lens and the linear CCD imager, respectively; (2) a lens magnification of 2; and (3) a line rate of greater than 120 kHz at the imager output. The structural design and outlines of the new system and module are also described. The proposed system transports chips at constant speed while acquiring images, uses only one image per chip for inspection, needs no image-mosaic process, and simplifies the control of image acquisition, so inspection efficiency and accuracy are substantially improved.
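
    The quoted specifications are mutually consistent, as a one-line check of the required line rate shows:

```python
# Chips move at 0.6 m/s and every 5 μm object-plane line must be sampled
# once, so the linear CCD needs line_rate = speed / sampling_pitch.
speed = 0.6           # m/s, required inspection throughput
pitch = 5e-6          # m, object-plane resolution of the optics
print(speed / pitch)  # 120000.0 lines/s, i.e. the 120 kHz quoted above
```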

  9. Introduction to computer image processing

    NASA Technical Reports Server (NTRS)

    Moik, J. G.

    1973-01-01

    Theoretical backgrounds and digital techniques for a class of image processing problems are presented. Image formation in the context of linear system theory, image evaluation, noise characteristics, and mathematical operations on images and their implementation are discussed. Various techniques for image restoration and image enhancement are presented. Methods for object extraction and the problem of pictorial pattern recognition and classification are discussed.

  10. Image processing for the Arcetri Solar Archive

    NASA Astrophysics Data System (ADS)

    Centrone, M.; Ermolli, I.; Giorgi, F.

    The modelling recently developed to "reconstruct" with high accuracy the measured Total Solar Irradiance (TSI) variations, based on semi-empirical atmosphere models and the observed distribution of solar magnetic regions, can be applied to "construct" the TSI variations back in time making use of observations stored in several historic photographic archives. However, the analysis of images obtained from these archives is not a straightforward task, because the images suffer from several defects originating from the acquisition techniques and the data storage. In this paper we summarize the processing applied to identify solar features on the images obtained by the digitization of the Arcetri solar archive.

  11. New developments in electron microscopy for serial image acquisition of neuronal profiles.

    PubMed

    Kubota, Yoshiyuki

    2015-02-01

    Recent developments in electron microscopy largely automate the continuous acquisition of serial electron micrographs (EMGs), previously achieved by laborious manual serial ultrathin sectioning using an ultramicrotome and ultrastructural image capture process with transmission electron microscopy. The new systems cut thin sections and capture serial EMGs automatically, allowing for acquisition of large data sets in a reasonably short time. The new methods are focused ion beam/scanning electron microscopy, ultramicrotome/serial block-face scanning electron microscopy, automated tape-collection ultramicrotome/scanning electron microscopy and transmission electron microscope camera array. In this review, their positive and negative aspects are discussed.

  13. Metrics for image-based modeling of target acquisition

    NASA Astrophysics Data System (ADS)

    Fanning, Jonathan D.

    2012-06-01

    This paper presents an image-based system performance model. The image-based system model uses an image metric to compare a given degraded image of a target, as seen through the modeled system, to the set of possible targets in the target set. This is repeated for all possible targets to generate a confusion matrix. The confusion matrix is used to determine the probability of identifying a target from the target set when using a particular system in a particular set of conditions. The image metric used in the image-based model should correspond closely to human performance. The image-based model performance is compared to human perception data on Contrast Threshold Function (CTF) tests, naked eye Triangle Orientation Discrimination (TOD), and TOD including an infrared camera system. Image-based system performance modeling is useful because it allows modeling of arbitrary image processing. Modern camera systems include more complex image processing, much of which is nonlinear. Existing linear system models, such as the TTP metric model implemented in NVESD models such as NV-IPM, assume that the entire system is linear and shift invariant (LSI). The LSI assumption makes modeling nonlinear processes difficult, such as local area processing/contrast enhancement (LAP/LACE), turbulence reduction, and image fusion.
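
    A schematic version of the model's core loop, with mean-squared error standing in for the image metric; the paper's point is precisely that the metric should instead track human perception (e.g., a TTP-like metric):

```python
import numpy as np

def confusion_matrix(degraded_sets, templates, metric):
    """degraded_sets[i] holds degraded images of target i as seen through
    the modeled system. Each image is identified as the template with the
    lowest metric score; row-normalized counts estimate P(chosen j | true i)."""
    n = len(templates)
    cm = np.zeros((n, n))
    for i, images in enumerate(degraded_sets):
        for img in images:
            cm[i, np.argmin([metric(img, t) for t in templates])] += 1
        cm[i] /= max(cm[i].sum(), 1.0)
    return cm

mse = lambda a, b: float(np.mean((a - b) ** 2))   # stand-in metric only
```

    The diagonal of the resulting matrix gives the probability of correct identification for each target under the modeled conditions.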

  14. 360-degree dense multiview image acquisition system using time multiplexing

    NASA Astrophysics Data System (ADS)

    Yendo, Tomohiro; Fujii, Toshiaki; Panahpour Tehrani, Mehrdad; Tanimoto, Masayuki

    2010-02-01

    A novel 360-degree 3D image acquisition system that captures multi-view images with a narrow view interval is proposed. The system consists of a scanning optics system and a high-speed camera. The scanning optics system is composed of a double-parabolic mirror shell and a rotating flat mirror tilted at 45 degrees to the horizontal plane. The mirror shell produces a real image of an object placed at the bottom of the shell; the shell is modified from the usual arrangement, familiar from 3D-illusion toys, so that the real image can be captured from a horizontal viewing direction. The rotating mirror placed at the real image reflects the image toward the camera axis. The reflected image observed by the camera varies with the angle of the rotating mirror, so the camera can capture the object from the various viewing directions determined by that angle. To acquire the time-varying reflected images, we use a high-speed camera synchronized with the angle of the rotating mirror; its resolution is 256×256 with a maximum frame rate of 10000fps at that resolution. The rotating speed of the tilted flat mirror is about 27 rev./sec and the number of views is 360. The focal length of the parabolic mirrors is 73mm and their diameter is 360mm. Objects up to about 30mm in length can be acquired. Captured images are compensated for the rotation and distortion caused by the double-parabolic mirror system and reproduced as 3D moving images on the Seelinder display.
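
    The quoted numbers hang together, as a quick check of the per-revolution frame budget shows:

```python
# Frames available per mirror revolution at the quoted settings:
fps = 10000               # camera frame rate at 256x256 resolution
rev_per_s = 27            # rotation speed of the tilted flat mirror
print(fps / rev_per_s)    # ~370 frames per revolution, covering 360 views
```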

  15. scikit-image: image processing in Python

    PubMed Central

    Schönberger, Johannes L.; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D.; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org. PMID:25024921
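
    A minimal usage example of the library described above; every function shown here is part of the documented scikit-image API:

```python
from skimage import data, filters, measure

image = data.coins()                              # bundled sample image
edges = filters.sobel(image)                      # gradient-magnitude edge map
binary = image > filters.threshold_otsu(image)    # global Otsu threshold
labels = measure.label(binary)                    # connected components
print(labels.max(), "connected regions found")
```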

  16. scikit-image: image processing in Python.

    PubMed

    van der Walt, Stéfan; Schönberger, Johannes L; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org.

  17. Face acquisition camera design using the NV-IPM image generation tool

    NASA Astrophysics Data System (ADS)

    Howell, Christopher L.; Choi, Hee-Sue; Reynolds, Joseph P.

    2015-05-01

    In this paper, we demonstrate the utility of the Night Vision Integrated Performance Model (NV-IPM) image generation tool by using it to create a database of face images with controlled degradations. Available face recognition algorithms can then be used to directly evaluate camera designs using these degraded images. By controlling camera effects such as blur, noise, and sampling, we can analyze algorithm performance and establish a more complete performance standard for face acquisition cameras. The ability to accurately simulate imagery and directly test with algorithms not only improves the system design process but greatly reduces development cost.

  18. Parallel asynchronous systems and image processing algorithms

    NASA Technical Reports Server (NTRS)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision system type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.

  19. Patient-adaptive reconstruction and acquisition in dynamic imaging with sensitivity encoding (PARADISE).

    PubMed

    Sharif, Behzad; Derbyshire, J Andrew; Faranesh, Anthony Z; Bresler, Yoram

    2010-08-01

    MRI of the human heart without explicit cardiac synchronization promises to extend the applicability of cardiac MR to a larger patient population and potentially expand its diagnostic capabilities. However, conventional nongated imaging techniques typically suffer from low image quality or inadequate spatio-temporal resolution and fidelity. Patient-Adaptive Reconstruction and Acquisition in Dynamic Imaging with Sensitivity Encoding (PARADISE) is a highly accelerated nongated dynamic imaging method that enables artifact-free imaging with high spatio-temporal resolutions by utilizing novel computational techniques to optimize the imaging process. In addition to using parallel imaging, the method gains acceleration from a physiologically driven spatio-temporal support model; hence, it is doubly accelerated. The support model is patient adaptive, i.e., its geometry depends on dynamics of the imaged slice, e.g., subject's heart rate and heart location within the slice. The proposed method is also doubly adaptive as it adapts both the acquisition and reconstruction schemes. Based on the theory of time-sequential sampling, the proposed framework explicitly accounts for speed limitations of gradient encoding and provides performance guarantees on achievable image quality. The presented in-vivo results demonstrate the effectiveness and feasibility of the PARADISE method for high-resolution nongated cardiac MRI during short breath-hold. PMID:20665794

  20. A flexible high-rate USB2 data acquisition system for PET and SPECT imaging

    SciTech Connect

    J. Proffitt, W. Hammond, S. Majewski, V. Popov, R.R. Raylman, A.G. Weisenberger, R. Wojcik

    2006-02-01

    A new flexible data acquisition system has been developed to instrument gamma-ray imaging detectors designed by the Jefferson Lab Detector and Imaging Group. Hardware consists of 16-channel data acquisition modules installed on USB2 carrier boards. Carriers have been designed to accept one, two, and four modules. Application trigger rate and channel density determine the number of acquisition boards and readout computers used. Each channel has an independent trigger, gated integrator and a 2.5 MHz 12-bit ADC. Each module has an FPGA for analog control and signal processing. Processing includes a 5 ns 40-bit trigger time stamp and programmable triggering, gating, ADC timing, offset and gain correction, charge and pulse-width discrimination, sparsification, event counting, and event assembly. The carrier manages global triggering and transfers module data to a USB buffer. High-granularity time-stamped triggering is suitable for modular detectors. Time stamped events permit dynamic studies, complex offline event assembly, and high-rate distributed data acquisition. A sustained USB data rate of 20 Mbytes/s, a sustained trigger rate of 300 kHz for 32 channels, and a peak trigger rate of 2.5 MHz to FIFO memory were achieved. Different trigger, gating, processing, and event assembly techniques were explored. Target applications include >100 kHz coincidence rate PET detectors, dynamic SPECT detectors, miniature and portable gamma detectors for small-animal and clinical use.
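
    Time-stamped singles make offline coincidence assembly straightforward. Below is a sketch of window-based pairing for the PET use case; the one-match-per-event policy and the window width are assumptions, not the system's firmware behavior:

```python
def find_coincidences(stamps_a, stamps_b, window_ticks=2):
    """Pair events from two detector heads whose trigger time stamps
    (40-bit counters in 5 ns ticks) fall within a coincidence window.
    Both lists must be sorted ascending; returns (i, j) index pairs."""
    pairs, j = [], 0
    for i, ta in enumerate(stamps_a):
        while j < len(stamps_b) and stamps_b[j] < ta - window_ticks:
            j += 1
        if j < len(stamps_b) and abs(stamps_b[j] - ta) <= window_ticks:
            pairs.append((i, j))      # simplistic: first match only
    return pairs
```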

  1. Status of RAISE, the Rapid Acquisition Imaging Spectrograph Experiment

    NASA Astrophysics Data System (ADS)

    Laurent, Glenn T.; Hassler, D. M.; DeForest, C.; Ayres, T. R.; Davis, M.; De Pontieu, B.; Schuehle, U.; Warren, H.

    2013-07-01

    The Rapid Acquisition Imaging Spectrograph Experiment (RAISE) sounding rocket payload is a high speed scanning-slit imaging spectrograph designed to observe the dynamics and heating of the solar chromosphere and corona on time scales as short as 100 ms, with 1 arcsec spatial resolution and a velocity sensitivity of 1-2 km/s. The instrument is based on a new class of UV/EUV imaging spectrometers that use only two reflections to provide quasi-stigmatic performance simultaneously over multiple wavelengths and spatial fields. The design uses an off-axis parabolic telescope mirror to form a real image of the sun on the spectrometer entrance aperture. A slit then selects a portion of the solar image, passing its light onto a near-normal incidence toroidal grating, which re-images the spectrally dispersed radiation onto two array detectors. Two full spectral passbands over the same one-dimensional spatial field are recorded simultaneously with no scanning of the detectors or grating. The two different spectral bands (1st-order 1205-1243Å and 1526-1564Å) are imaged onto two intensified Active Pixel Sensor (APS) detectors whose focal planes are individually adjusted for optimized performance. The telescope and grating are coated with B4C to enhance short wavelength (2nd order) reflectance, enabling the instrument to record the brightest lines between 602-622Å and 761-780Å at the same time. RAISE reads out the full field of both detectors at 5-10 Hz, allowing us to record over 1,500 complete spectral observations in a single 5-minute rocket flight, opening up a new domain of high time resolution spectral imaging and spectroscopy. We present an overview of the project, a summary of the maiden flight results, and an update on instrument status.

  2. Image Processing Diagnostics: Emphysema

    NASA Astrophysics Data System (ADS)

    McKenzie, Alex

    2009-10-01

    Currently the computerized tomography (CT) scan can detect emphysema sooner than traditional x-rays, but other tests are required to measure more accurately the amount of affected lung. CT images clearly show whether a patient has emphysema, but visual inspection alone cannot quantify the degree of the disease, as it appears merely as subtle, barely distinct dark spots on the lung. Our goal is to create a software plug-in to interface with existing open source medical imaging software, to automate the process of accurately diagnosing and determining emphysema severity levels in patients. This will be accomplished by performing a number of statistical calculations using data taken from CT scan images of several patients representing a wide range of severity of the disease. These analyses include an examination of the deviation from a normal distribution curve to determine skewness, a commonly used statistical parameter. Our preliminary results show that this method of assessment appears to be more accurate and robust than currently utilized methods, which involve looking at percentages of radiodensities in air passages of the lung.
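
    The skewness statistic mentioned above is directly computable from the lung-voxel radiodensity histogram. A sketch using SciPy; the selection of lung voxels is omitted and would depend on the segmentation used:

```python
import numpy as np
from scipy.stats import skew

def lung_density_skewness(hu_values):
    """Skewness of the Hounsfield-unit distribution of lung voxels.
    Emphysema shifts mass toward very low HU, so the deviation from a
    normal curve tracks disease severity."""
    return float(skew(np.asarray(hu_values, dtype=float)))
```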

  3. Multislice perfusion of the kidneys using parallel imaging: image acquisition and analysis strategies.

    PubMed

    Gardener, Alexander G; Francis, Susan T

    2010-06-01

    Flow-sensitive alternating inversion recovery arterial spin labeling with parallel imaging acquisition is used to acquire single-shot, multislice perfusion maps of the kidney. A considerable problem for arterial spin labeling methods, which are based on sequential subtraction, is the movement of the kidneys due to respiratory motion between acquisitions. The effects of breathing strategy (free, respiratory-triggered and breath hold) are studied and the use of background suppression is investigated. The application of movement correction by image registration is assessed and perfusion rates are measured. Postacquisition image realignment is shown to improve visual quality and subsequent perfusion quantification. Using such correction, data can be collected from free breathing alone, without the need for a good respiratory trace and in the shortest overall acquisition time, advantageous for patient comfort. The addition of background suppression to arterial spin labeling data is shown to reduce the perfusion signal-to-noise ratio and underestimate perfusion.

  4. RAISE (Rapid Acquisition Imaging Spectrograph Experiment): Results and Instrument Status

    NASA Astrophysics Data System (ADS)

    Laurent, Glenn T.; Hassler, Donald; DeForest, Craig; Ayres, Tom; Davis, Michael; DePontieu, Bart; Diller, Jed; Graham, Roy; Schule, Udo; Warren, Harry

    2015-04-01

    We present initial results from the successful November 2014 launch of the RAISE (Rapid Acquisition Imaging Spectrograph Experiment) sounding rocket program, including intensity maps, high-speed spectroheliograms and dopplergrams, as well as an update on instrument status. The RAISE sounding rocket payload is the fastest high-speed scanning-slit imaging spectrograph flown to date and is designed to observe the dynamics and heating of the solar chromosphere and corona on time scales as short as 100-200ms, with arcsecond spatial resolution and a velocity sensitivity of 1-2 km/s. The instrument is based on a class of UV/EUV imaging spectrometers that use only two reflections to provide quasi-stigmatic performance simultaneously over multiple wavelengths and spatial fields. The design uses an off-axis parabolic telescope mirror to form a real image of the sun on the spectrometer entrance aperture. A slit then selects a portion of the solar image, passing its light onto a near-normal incidence toroidal grating, which re-images the spectrally dispersed radiation onto two array detectors. Two full spectral passbands over the same one-dimensional spatial field are recorded simultaneously with no scanning of the detectors or grating. The two different spectral bands (1st-order 1205-1243Å and 1526-1564Å) are imaged onto two intensified Active Pixel Sensor (APS) detectors whose focal planes are individually adjusted for optimized performance. RAISE reads out the full field of both detectors at 5-10 Hz, allowing us to record over 1,500 complete spectral observations in a single 5-minute rocket flight, opening up a new domain of high time resolution spectral imaging and spectroscopy. RAISE is designed to study small-scale multithermal dynamics in active region (AR) loops, explore the strength, spectrum and location of high frequency waves in the solar atmosphere, and investigate the nature of transient brightenings in the chromospheric network.

  5. Computer image processing and recognition

    NASA Technical Reports Server (NTRS)

    Hall, E. L.

    1979-01-01

    A systematic introduction to the concepts and techniques of computer image processing and recognition is presented. Consideration is given to such topics as image formation and perception; computer representation of images; image enhancement and restoration; reconstruction from projections; digital television, encoding, and data compression; scene understanding; scene matching and recognition; and processing techniques for linear systems.

  6. Image processing and recognition for biological images

    PubMed Central

    Uchida, Seiichi

    2013-01-01

    This paper reviews image processing and pattern recognition techniques that are useful for analyzing bioimages. Although the paper does not provide their technical details, it conveys their main tasks and the typical tools used to handle those tasks. Image processing is a large research area aimed at improving the visibility of an input image and acquiring valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition, the technique of classifying an input image into one of a set of predefined classes, is also a large research area; the paper overviews its two main modules, the feature extraction module and the classification module. Throughout the paper, it is emphasized that bioimages are a very difficult target even for state-of-the-art image processing and pattern recognition techniques, due to noise, deformation, etc. This paper is intended as a tutorial guide to bridge biology and image processing researchers for further collaboration on such a difficult target. PMID:23560739
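
    As one concrete instance of the registration task listed above, a toy example using scikit-image's phase correlation; the simulated translation stands in for drift between bioimage frames:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage import data
from skimage.registration import phase_cross_correlation

fixed = data.camera()
moving = nd_shift(fixed, (5.0, -3.0))          # simulate a shifted frame
offset, error, _ = phase_cross_correlation(fixed, moving)
print(offset)                                  # ~[-5.  3.]: shift to realign
```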

  7. Data acquisition system for harmonic motion microwave Doppler imaging.

    PubMed

    Tafreshi, Azadeh Kamali; Karadaş, Mürsel; Top, Can Barış; Gençer, Nevzat Güneri

    2014-01-01

    Harmonic Motion Microwave Doppler Imaging (HMMDI) is a hybrid method proposed for breast tumor detection, which images the coupled dielectric and elastic properties of the tissue. In this paper, the performance of a data acquisition system for the HMMDI method is evaluated on breast phantom materials. A breast fat phantom including fibro-glandular and tumor phantom regions is produced. The phantom is excited using a focused ultrasound probe and a microwave transmitter. The received microwave signal level is measured at three different points inside the phantom (fat, fibro-glandular, and tumor regions). The experimental results using the designed homodyne receiver proved the effectiveness of the proposed setup: in the tumor phantom region, the signal level decreased by about 3 dB compared to the signal level obtained from the fibro-glandular phantom area, whereas it was about 4 dB higher than the received signal from the fat phantom.

  8. Data acquisition for a medical imaging MWPC detector

    NASA Astrophysics Data System (ADS)

    McKee, B. T. A.; Harvey, P. J.; MacPhail, J. D.

    1991-12-01

    Multiwire proportional chambers, combined with drilled Pb converter stacks, are used as position sensitive gamma-ray detectors for medical imaging at Queen's University. This paper describes novel features of the address readout and data acquisition system. To obtain the interaction position, induced charges from wires in each cathode plane are combined using a three-level encoding scheme into 16 channels for amplification and discrimination, and then decoded within 150 ns using a lookup table in a 64 Kbyte EPROM. A custom interface card in an AT-class personal computer provides handshaking, rate buffering, and diagnostic capabilities for the detector data. Real-time software controls the data transfer and provides extensive monitor and control functions. The data are then transferred through an Ethernet link to a workstation for subsequent image analysis.

  9. The experiment study of image acquisition system based on 3D machine vision

    NASA Astrophysics Data System (ADS)

    Zhou, Haiying; Xiao, Zexin; Zhang, Xuefei; Wei, Zhe

    2011-11-01

    Binocular vision is one of the key technologies for the three-dimensional reconstruction of scenes in machine vision; important three-dimensional information can be acquired with it. Two or more pictures are first captured by cameras, and the three-dimensional information contained in these pictures is then recovered through geometric and other relationships. To improve the measurement accuracy of the image acquisition system, this article studies a binocular-vision image acquisition system for three-dimensional scene reconstruction. Based on the parallax principle and human binocular imaging, an image acquisition system with a double optical path and double CCDs is proposed. Experiments determine the best angle between the optical axes of the double optical path and the best operating distance; from these, the center distance of the double CCDs can be fixed. Two images of the same scene from different viewpoints are captured by the double CCDs, establishing a sound foundation for the three-dimensional reconstruction performed in later image processing. The experimental data show the soundness of this method.
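
    For a rectified two-camera rig, the parallax principle reduces to the classic depth relation Z = fB/d; the numbers below are illustrative, not values from the paper:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth from parallax for a rectified double-CCD rig:
    Z = f * B / d (focal length in pixels, CCD baseline in meters)."""
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(1200, 0.12, 48))    # -> 3.0 (meters)
```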

  10. Smart Image Enhancement Process

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J. (Inventor); Rahman, Zia-ur (Inventor); Woodell, Glenn A. (Inventor)

    2012-01-01

    Contrast and lightness measures are used to first classify the image as either non-turbid or turbid. If turbid, the original image is enhanced to generate a first enhanced image. If non-turbid, the original image is classified in terms of a merged contrast/lightness score based on the contrast and lightness measures. The non-turbid image is enhanced to generate a second enhanced image when a poor contrast/lightness score is associated therewith. When the second enhanced image has a poor contrast/lightness score associated therewith, this image is enhanced to generate a third enhanced image. A sharpness measure is computed for one image that is selected from (i) the non-turbid image, (ii) the first enhanced image, (iii) the second enhanced image when a good contrast/lightness score is associated therewith, and (iv) the third enhanced image. If the selected image is not sharp, it is sharpened to generate a sharpened image. The final image is selected from the selected image and the sharpened image.
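
    The patent's selection logic is easier to follow as code. The sketch keeps the contrast/lightness measures and the enhancement operators abstract, since the record does not describe the underlying algorithms:

```python
def smart_enhance(img, is_turbid, poor_score, is_sharp, enhance, sharpen):
    """Decision flow of the enhancement process; the five callables are
    placeholders for the patented measures and operators."""
    if is_turbid(img):
        selected = enhance(img)                # first enhanced image
    else:
        selected = img
        if poor_score(selected):
            selected = enhance(selected)       # second enhanced image
            if poor_score(selected):
                selected = enhance(selected)   # third enhanced image
    return sharpen(selected) if not is_sharp(selected) else selected
```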

  11. Reading Acquisition Enhances an Early Visual Process of Contour Integration

    ERIC Educational Resources Information Center

    Szwed, Marcin; Ventura, Paulo; Querido, Luis; Cohen, Laurent; Dehaene, Stanislas

    2012-01-01

    The acquisition of reading has an extensive impact on the developing brain and leads to enhanced abilities in phonological processing and visual letter perception. Could this expertise also extend to early visual abilities outside the reading domain? Here we studied the performance of illiterate, ex-illiterate and literate adults closely matched…

  12. Superimposed fringe projection for three-dimensional shape acquisition by image analysis.

    PubMed

    Sasso, Marco; Chiappini, Gianluca; Palmieri, Giacomo; Amodio, Dario

    2009-05-01

    The aim in this work is the development of an image analysis technique for 3D shape acquisition, based on luminous fringe projections. In more detail, the method is based on the simultaneous use of several projectors, which is desirable whenever the surface under inspection has a complex geometry, with undercuts or shadow areas. In these cases, the usual fringe projection technique needs to perform several acquisitions, each time moving the projector or using several projectors alternately. Besides the procedure of fringe projection and phase calculation, an unwrap algorithm has been developed in order to obtain continuous phase maps needed in subsequent calculations for shape extraction. With the technique of simultaneous projections, oriented in such a way as to cover the whole surface, it is possible to increase the speed of the acquisition process and avoid the postprocessing problems related to the matching of different point clouds.
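
    For the unwrapping step, scikit-image ships a 2D phase unwrapper; a toy demonstration on a synthetic wrapped ramp (the ramp stands in for a measured fringe phase):

```python
import numpy as np
from skimage.restoration import unwrap_phase

y, x = np.mgrid[:64, :64]
true_phase = 0.25 * x + 0.1 * y                # smooth synthetic phase map
wrapped = np.angle(np.exp(1j * true_phase))    # folded into (-pi, pi]
continuous = unwrap_phase(wrapped)             # continuous, up to an offset
```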

  13. A Pipeline Tool for CCD Image Processing

    NASA Astrophysics Data System (ADS)

    Bell, Jon F.; Young, Peter J.; Roberts, William H.; Sebo, Kim M.

    MSSSO is part of a collaboration developing a wide field imaging CCD mosaic (WFI). As part of this project, we have developed a GUI based pipeline tool that is an integrated part of MSSSO's CICADA data acquisition environment and processes CCD FITS images as they are acquired. The tool is also designed to run as a stand alone program to process previously acquired data. IRAF tasks are used as the central engine, including the new NOAO mscred package for processing multi-extension FITS files. The STScI OPUS pipeline environment may be used to manage data and process scheduling. The Motif GUI was developed using SUN Visual Workshop. C++ classes were written to facilitate launching of IRAF and OPUS tasks. While this first version implements calibration processing up to and including flat field corrections, there is scope to extend it to other processing.

  14. The Rapid Acquisition Imaging Spectrograph Experiment (RAISE) Sounding Rocket Investigation

    NASA Astrophysics Data System (ADS)

    Laurent, Glenn T.; Hassler, Donald M.; Deforest, Craig; Slater, David D.; Thomas, Roger J.; Ayres, Thomas; Davis, Michael; de Pontieu, Bart; Diller, Jed; Graham, Roy; Michaelis, Harald; Schuele, Udo; Warren, Harry

    2016-03-01

    We present a summary of the solar observing Rapid Acquisition Imaging Spectrograph Experiment (RAISE) sounding rocket program including an overview of the design and calibration of the instrument, flight performance, and preliminary chromospheric results from the successful November 2014 launch of the RAISE instrument. The RAISE sounding rocket payload is the fastest scanning-slit solar ultraviolet imaging spectrograph flown to date. RAISE is designed to observe the dynamics and heating of the solar chromosphere and corona on time scales as short as 100-200ms, with arcsecond spatial resolution and a velocity sensitivity of 1-2km/s. Two full spectral passbands over the same one-dimensional spatial field are recorded simultaneously with no scanning of the detectors or grating. The two different spectral bands (first-order 1205-1251Å and 1524-1569Å) are imaged onto two intensified Active Pixel Sensor (APS) detectors whose focal planes are individually adjusted for optimized performance. RAISE reads out the full field of both detectors at 5-10Hz, recording up to 1800 complete spectra (per detector) in a single 6-min rocket flight. This opens up a new domain of high time resolution spectral imaging and spectroscopy. RAISE is designed to observe small-scale multithermal dynamics in Active Region (AR) and quiet Sun loops, identify the strength, spectrum and location of high frequency waves in the solar atmosphere, and determine the nature of energy release in the chromospheric network.

  15. Automated Image Processing : An Efficient Pipeline Data-Flow Architecture

    NASA Astrophysics Data System (ADS)

    Barreault, G.; Rivoire, A.; Jourlin, M.; Laboure, M. J.; Ramon, S.; Zeboudj, R.; Pinoli, J. C.

    1987-10-01

    In the context of Expert-Systems there is a pressing need for efficient Image Processing algorithms to fit the various applications. This paper presents a new electronic card that performs Image Acquisition, Processing and Display, with an IBM-PC/XT or AT as the host computer. The card features a pipeline data-flow architecture, an efficient and cost-effective solution to most Image Processing problems.

  16. ASPIC: STARLINK image processing package

    NASA Astrophysics Data System (ADS)

    Davenhall, A. C.; Hartley, Ken F.; Penny, Alan J.; Kelly, B. D.; King, Dave J.; Lupton, W. F.; Tudhope, D.; Pike, C. D.; Cooke, J. A.; Pence, W. D.; Wallace, Patrick T.; Brownrigg, D. R. K.; Baines, Dave W. T.; Warren-Smith, Rodney F.; McNally, B. V.; Bell, L. L.; Jones, T. A.; Terrett, Dave L.; Pearce, D. J.; Carey, J. V.; Currie, Malcolm J.; Benn, Chris; Beard, S. M.; Giddings, Jack R.; Balona, Luis A.; Harrison, B.; Wood, Roger; Sparkes, Bill; Allan, Peter M.; Berry, David S.; Shirt, J. V.

    2015-10-01

    ASPIC handled basic astronomical image processing. Early releases concentrated on image arithmetic, standard filters, expansion/contraction/selection/combination of images, and displaying and manipulating images on the ARGS and other devices. Later releases added new astronomy-specific applications to this sound framework. The ASPIC collection of about 400 image-processing programs was written using the Starlink "interim" environment in the 1980s; the software is now obsolete.

  17. Processing Visual Images

    SciTech Connect

    Litke, Alan

    2006-03-27

    The back of the eye is lined by an extraordinary biological pixel detector, the retina. This neural network is able to extract vital information about the external visual world, and transmit this information in a timely manner to the brain. In this talk, Professor Litke will describe a system that has been implemented to study how the retina processes and encodes dynamic visual images. Based on techniques and expertise acquired in the development of silicon microstrip detectors for high energy physics experiments, this system can simultaneously record the extracellular electrical activity of hundreds of retinal output neurons. After presenting first results obtained with this system, Professor Litke will describe additional applications of this incredible technology.

  18. Filter for biomedical imaging and image processing

    NASA Astrophysics Data System (ADS)

    Mondal, Partha P.; Rajan, K.; Ahmad, Imteyaz

    2006-07-01

    Image filtering techniques have numerous potential applications in biomedical imaging and image processing. The design of filters largely depends on a priori knowledge about the type of noise corrupting the image. This makes the standard filters application specific. Widely used filters such as average, Gaussian, and Wiener reduce noisy artifacts by smoothing. However, this operation normally results in smoothing of the edges as well. On the other hand, sharpening filters enhance the high-frequency details, making the image nonsmooth. An integrated general approach to design a finite impulse response filter based on Hebbian learning is proposed for optimal image filtering. This algorithm exploits the interpixel correlation by updating the filter coefficients using Hebbian learning. The algorithm is made iterative for achieving efficient learning from the neighborhood pixels. This algorithm performs optimal smoothing of the noisy image by preserving high-frequency as well as low-frequency features. Evaluation results show that the proposed finite impulse response filter is robust under various noise distributions such as Gaussian noise, salt-and-pepper noise, and speckle noise. Furthermore, the proposed approach does not require any a priori knowledge about the type of noise. The number of unknown parameters is few, and most of these parameters are adaptively obtained from the processed image. The proposed filter is successfully applied for image reconstruction in a positron emission tomography (PET) imaging modality. The images reconstructed by the proposed algorithm are found to be superior in quality compared with those reconstructed by existing PET image reconstruction methodologies.

  19. Image processing in digital radiography.

    PubMed

    Freedman, M T; Artz, D S

    1997-01-01

    Image processing is a critical part of obtaining high-quality digital radiographs. Fortunately, the user of these systems does not need to understand image processing in detail, because the manufacturers provide good starting values. Because radiologists may have different preferences in image appearance, it is helpful to know that many aspects of image appearance can be changed by image processing, and a new preferred setting can be loaded into the computer and saved so that it can become the new standard processing method. Image processing allows one to change the overall optical density of an image and to change its contrast. Spatial frequency processing allows an image to be sharpened, improving its appearance. It also allows noise to be blurred so that it is less visible. Care is necessary to avoid the introduction of artifacts or the hiding of mediastinal tubes.
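
    A standard unsharp-masking sketch of the spatial-frequency sharpening mentioned above; vendors' actual processing differs and is proprietary:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, sigma=2.0, amount=1.0):
    """Sharpen by adding back the high-pass residual; larger `amount`
    boosts edges, larger `sigma` targets coarser structures."""
    img = np.asarray(image, dtype=float)
    return img + amount * (img - gaussian_filter(img, sigma))
```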

  20. Democratizing an electroluminescence imaging apparatus and analytics project for widespread data acquisition in photovoltaic materials

    NASA Astrophysics Data System (ADS)

    Fada, Justin S.; Wheeler, Nicholas R.; Zabiyaka, Davis; Goel, Nikhil; Peshek, Timothy J.; French, Roger H.

    2016-08-01

    We present a description of an electroluminescence (EL) apparatus, easily sourced from commercially available components, with a quantitative image processing platform that demonstrates feasibility for the widespread utility of EL imaging as a characterization tool. We validated our system using a Gage R&R analysis to find a variance contribution by the measurement system of 80.56%, which is typically unacceptable, but through quantitative image processing and development of correction factors a variance contribution by the measurement system of 2.41% was obtained. We further validated the system by quantifying the signal-to-noise ratio (SNR) and found values consistent with other systems published in the literature, at SNR values of 10-100, albeit at exposure times of greater than 1 s compared to 10 ms for other systems. This SNR value range is acceptable for image feature recognition, providing the opportunity for widespread data acquisition and large scale data analytics of photovoltaics.

  1. Democratizing an electroluminescence imaging apparatus and analytics project for widespread data acquisition in photovoltaic materials.

    PubMed

    Fada, Justin S; Wheeler, Nicholas R; Zabiyaka, Davis; Goel, Nikhil; Peshek, Timothy J; French, Roger H

    2016-08-01

    We present a description of an electroluminescence (EL) apparatus, easily sourced from commercially available components, with a quantitative image processing platform that demonstrates feasibility for the widespread utility of EL imaging as a characterization tool. We validated our system using a Gage R&R analysis to find a variance contribution by the measurement system of 80.56%, which is typically unacceptable, but through quantitative image processing and development of correction factors a variance contribution by the measurement system of 2.41% was obtained. We further validated the system by quantifying the signal-to-noise ratio (SNR) and found values consistent with other systems published in the literature, at SNR values of 10-100, albeit at exposure times of greater than 1 s compared to 10 ms for other systems. This SNR value range is acceptable for image feature recognition, providing the opportunity for widespread data acquisition and large scale data analytics of photovoltaics. PMID:27587162

  3. Payload Configurations for Efficient Image Acquisition - Indian Perspective

    NASA Astrophysics Data System (ADS)

    Samudraiah, D. R. M.; Saxena, M.; Paul, S.; Narayanababu, P.; Kuriakose, S.; Kiran Kumar, A. S.

    2014-11-01

    The world increasingly depends on remotely sensed data. The data are regularly used for monitoring Earth resources and for addressing global problems such as disasters and climate degradation; remotely sensed data have also changed our understanding of other planets. With innovative approaches to data utilization, the demand for remote sensing data is ever increasing, and more and more research and development is taken up for data utilization. Satellite resources are scarce and each launch costs heavily; each launch is also associated with a large hardware development effort prior to launch and with a large number of software elements and mathematical algorithms post-launch. The proliferation of low-earth and geostationary satellites has led to increased scarcity of the available orbital slots for newer satellites. The Indian Space Research Organization has always tried to maximize the utility of its satellites: multiple sensors are flown on each satellite, designed to cater to various spectral bands/frequencies and spatial and temporal resolutions. Bhaskara-1, the first experimental satellite, started with 2 bands in the electro-optical spectrum and 3 bands in the microwave spectrum. The recent Resourcesat-2 incorporates a very efficient image acquisition approach, with multi-resolution (3 types of spatial resolution), multi-band (4 spectral bands) electro-optical sensors (LISS-4, LISS-3* and AWiFS); the system has been designed to provide data globally through various data reception stations and onboard data storage. The Oceansat-2 satellite has a unique sensor combination, with an 8-band high-sensitivity electro-optical ocean colour monitor (catering to ocean and land) along with a Ku-band scatterometer to acquire information on ocean winds. INSAT-3D, launched recently, provides high-resolution 6-band image data in the visible, short-wave, mid-wave and long-wave infrared spectrum. It also has 19 band

  4. Towards Quantification of Functional Breast Images Using Dedicated SPECT With Non-Traditional Acquisition Trajectories

    PubMed Central

    Perez, Kristy L.; Cutler, Spencer J.; Madhav, Priti; Tornai, Martin P.

    2012-01-01

    Quantification of radiotracer uptake in breast lesions can provide valuable information to physicians in deciding patient care or determining treatment efficacy. Physical processes (e.g., scatter, attenuation), detector/collimator characteristics, sampling and acquisition trajectories, and reconstruction artifacts contribute to an incorrect measurement of absolute tracer activity and distribution. For these experiments, a cylinder with three syringes of varying radioactivity concentration, and a fillable 800 mL breast with two lesion phantoms containing aqueous 99mTc pertechnetate, were imaged using the SPECT sub-system of the dual-modality SPECT-CT dedicated breast scanner. SPECT images were collected using a compact CZT camera with various 3D acquisitions including vertical axis of rotation, 30° tilted, and complex sinusoidal trajectories. Different energy windows around the photopeak were quantitatively compared, along with appropriate scatter energy windows, to determine the best quantification accuracy after attenuation and dual-window scatter correction. Measured activity concentrations in the reconstructed images for syringes with greater than 10 µCi/mL corresponded to within 10% of the actual dose calibrator measured activity concentration for ±4% and ±8% photopeak energy windows. The same energy windows yielded lesion quantification results within 10% in the breast phantom as well. Results for the more complete complex sinusoidal trajectory are similar to those for the simple vertical-axis acquisition, while additionally allowing anterior chest wall sampling, no image distortion, and reasonably accurate quantification. PMID:22262925

  5. Towards Quantification of Functional Breast Images Using Dedicated SPECT With Non-Traditional Acquisition Trajectories.

    PubMed

    Perez, Kristy L; Cutler, Spencer J; Madhav, Priti; Tornai, Martin P

    2011-10-01

    Quantification of radiotracer uptake in breast lesions can provide valuable information to physicians in deciding patient care or determining treatment efficacy. Physical processes (e.g., scatter, attenuation), detector/collimator characteristics, sampling and acquisition trajectories, and reconstruction artifacts contribute to an incorrect measurement of absolute tracer activity and distribution. For these experiments, a cylinder with three syringes of varying radioactivity concentration, and a fillable 800 mL breast with two lesion phantoms containing aqueous (99m)Tc pertechnetate, were imaged using the SPECT sub-system of the dual-modality SPECT-CT dedicated breast scanner. SPECT images were collected using a compact CZT camera with various 3D acquisitions including vertical axis of rotation, 30° tilted, and complex sinusoidal trajectories. Different energy windows around the photopeak were quantitatively compared, along with appropriate scatter energy windows, to determine the best quantification accuracy after attenuation and dual-window scatter correction. Measured activity concentrations in the reconstructed images for syringes with greater than 10 µCi/mL corresponded to within 10% of the actual dose calibrator measured activity concentration for ±4% and ±8% photopeak energy windows. The same energy windows yielded lesion quantification results within 10% in the breast phantom as well. Results for the more complete complex sinusoidal trajectory are similar to those for the simple vertical-axis acquisition, while additionally allowing anterior chest wall sampling, no image distortion, and reasonably accurate quantification.
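
    The dual-window scatter correction mentioned in these two records is commonly implemented as a scaled subtraction of a secondary energy window from the photopeak window; the sketch below shows that generic form (the scaling factor k ≈ 0.5 and the window-width arguments are assumptions, not parameters from the study).

        import numpy as np

        def dual_window_correct(photopeak, scatter, w_pp, w_sc, k=0.5):
            """Generic dual-energy-window scatter correction.

            photopeak, scatter: projection images (counts) from the
            photopeak and adjacent scatter windows; w_pp, w_sc: window
            widths in keV; k: empirical scatter scaling factor.
            """
            scatter_in_peak = k * scatter * (w_pp / w_sc)
            return np.clip(photopeak - scatter_in_peak, 0.0, None)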

  6. Implementation of a laser beam analyzer using the image acquisition card IMAQ (NI)

    NASA Astrophysics Data System (ADS)

    Rojas-Laguna, R.; Avila-Garcia, M. S.; Alvarado-Mendez, Edgar; Andrade-Lucio, Jose A.; Ibarra-Manzano, O. G.; Torres-Cisneros, Miguel; Castro-Sanchez, R.; Estudillo-Ayala, J. M.; Ibarra-Escamilla, Baldemar

    2001-08-01

    In this work we present the implementation of a laser beam analyzer. The software was designed under the LabVIEW platform using the Image Acquisition Card IMAQ from National Instruments. The objective is to develop a graphical interface that includes image processing tools such as brightness and contrast enhancement, morphological operations, and quantification of dimensions. One application of this graphical interface is a medium-cost laser beam analyzer that is versatile, precise, and easily reconfigurable under this programming environment.
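
    A core computation in any beam analyzer of this kind is the beam centroid and second-moment width; a minimal Python sketch (our illustration, not the LabVIEW code from the record) might look like the following.

        import numpy as np

        def beam_stats(img):
            """Centroid and D4-sigma widths of a beam image (ISO 11146 style)."""
            img = img.astype(float) - np.median(img)   # crude background removal
            img[img < 0] = 0.0
            total = img.sum()
            y, x = np.indices(img.shape)
            cx = (x * img).sum() / total               # centroid column
            cy = (y * img).sum() / total               # centroid row
            sx = np.sqrt((((x - cx) ** 2) * img).sum() / total)
            sy = np.sqrt((((y - cy) ** 2) * img).sum() / total)
            return cx, cy, 4.0 * sx, 4.0 * sy          # D4-sigma diameters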

  7. FORTRAN Algorithm for Image Processing

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Hull, David R.

    1987-01-01

    FORTRAN computer algorithm containing various image-processing analysis and enhancement functions developed. Algorithm developed specifically to process images of developmental heat-engine materials obtained with sophisticated nondestructive evaluation instruments. Applications of program include scientific, industrial, and biomedical imaging for studies of flaws in materials, analyses of steel and ores, and pathology.

  8. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images

    PubMed Central

    Afshar, Yaser; Sbalzarini, Ivo F.

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers coordinate to collectively solve the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10¹⁰ pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments. PMID:27046144

  9. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images.

    PubMed

    Afshar, Yaser; Sbalzarini, Ivo F

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers coordinate to collectively solve the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10¹⁰ pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments.
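
    The decomposition-and-merge idea in these records can be sketched in a few lines: split the image into overlapping sub-images and segment each independently. Here a serial loop stands in for the MPI ranks and a simple threshold stands in for Discrete Region Competition; both substitutions are ours, not the paper's.

        import numpy as np

        def split_with_halo(image, tiles, halo):
            """Split along the first axis into overlapping sub-images."""
            bounds = np.linspace(0, image.shape[0], tiles + 1, dtype=int)
            for i in range(tiles):
                lo = max(bounds[i] - halo, 0)
                hi = min(bounds[i + 1] + halo, image.shape[0])
                yield lo, hi, image[lo:hi]

        def segment(sub):
            return sub > sub.mean()        # stand-in for Region Competition

        image = np.random.default_rng(1).random((1024, 1024))
        labels = np.zeros(image.shape, dtype=bool)
        for lo, hi, sub in split_with_halo(image, tiles=4, halo=8):
            labels[lo:hi] |= segment(sub)  # overlap reconciled by OR-merge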

  11. Understanding the knowledge acquisition process about Earth and Space concepts

    NASA Astrophysics Data System (ADS)

    Frappart, Soren

    There are two main theoretical views of the knowledge acquisition process in science, and the debate between them is still open in the literature. On one view, knowledge is organized into coherent wholes (mental models). On the other, knowledge is described as fragmented sets with no links between the fragments. Mental models have predictive and explanatory power and are constrained by universal presuppositions; they follow a universal, gradual development in three steps, from initial through synthetic to scientific models. By contrast, the fragments are not organised, and development is seen as a situated process in which cultural transmission plays a fundamental role. After presenting these two theoretical positions, we illustrate them with studies of the Earth's shape and gravity performed in different cultural contexts, in order to highlight both the differences and the culturally invariant elements. We show why these questions are important to consider for space concepts such as gravity, orbits and weightlessness. Capturing the processes by which knowledge of specific space concepts is acquired and developed can provide important information for designing relevant, well-adapted instructional strategies. If knowledge acquisition for space concepts is fragmented, then we must consider how to identify those fragments and help the learner build links between them. If the knowledge is organised into coherent mental models, we must consider how to destabilize an irrelevant model and prevent the development of initial and synthetic models. Moreover, the question of what is universal versus culture-dependent in this acquisition process needs to be explored. We also present some of the main misconceptions that have appeared about space concepts. Indeed, in addition to the previous theoretical considerations, the collection and awareness of

  12. Multiscale Image Processing of Solar Image Data

    NASA Astrophysics Data System (ADS)

    Young, C.; Myers, D. C.

    2001-12-01

    It is often said that the blessing and curse of solar physics is too much data. Solar missions such as Yohkoh, SOHO and TRACE have shown us the Sun with amazing clarity but have also increased the amount of highly complex data. We have improved our view of the Sun yet we have not improved our analysis techniques. The standard techniques used for the analysis of solar images generally consist of observing the evolution of features in a sequence of byte-scaled images or a sequence of byte-scaled difference images. The determination of features and structures in the images is done qualitatively by the observer; little quantitative and objective analysis is done with these images. Many advances in image processing techniques have occurred in the past decade, and many of these methods may be well suited to solar image analysis. Multiscale/multiresolution methods are perhaps the most promising. These methods have been used to model the human ability to view and comprehend phenomena on different scales, so they could be used to quantify the image processing performed by the observer's eyes and brain. In this work we present several applications of multiscale techniques applied to solar image data. Specifically, we discuss uses of the wavelet, curvelet, and related transforms to define a multiresolution support for EIT, LASCO and TRACE images.
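
    As a hedged sketch of the multiresolution-support idea (using PyWavelets rather than whatever code the authors used, with the wavelet and threshold as assumed parameters), significant structure can be flagged per scale by thresholding the detail coefficients of a 2D wavelet decomposition:

        import numpy as np
        import pywt  # PyWavelets

        def multiscale_support(img, wavelet="haar", levels=3, nsigma=3.0):
            """Boolean support of significant wavelet detail coefficients."""
            coeffs = pywt.wavedec2(img, wavelet, level=levels)
            support = []
            for detail in coeffs[1:]:   # skip the coarse approximation
                sigma = np.concatenate([d.ravel() for d in detail]).std()
                support.append(tuple(np.abs(d) > nsigma * sigma
                                     for d in detail))
            return support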

  13. Feedback regulation of microscopes by image processing.

    PubMed

    Tsukada, Yuki; Hashimoto, Koichi

    2013-05-01

    Computational microscope systems are becoming a major part of imaging biological phenomena, and the development of such systems requires the design of automated regulation of microscopes. An important aspect of automated regulation is feedback regulation, which is the focus of this review. As modern microscope systems become more complex, often with many independent components that must work together, computer control is inevitable, since the exact orchestration of parameters and timings for these multiple components is critical to acquiring proper images. A number of techniques have been developed for biological imaging to accomplish this. Here, we summarize the basics of computational microscopy for the purpose of building automatically regulated microscopes, focusing on feedback regulation by image processing. These techniques allow high-throughput data acquisition while monitoring both short- and long-term dynamic phenomena, which cannot be achieved without an automated system.
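
    A minimal example of feedback regulation by image processing (our illustration, not taken from the review) is an autofocus loop: score each focal position with an image-sharpness measure and move the stage to the best one. Here acquire() and move_stage() are hypothetical hardware callbacks.

        import numpy as np

        def focus_measure(img):
            """Variance of a discrete Laplacian: larger means sharper."""
            lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
                   np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
            return float(lap.var())

        def autofocus(acquire, move_stage, z_positions):
            """One feedback cycle: image each plane, settle on the sharpest."""
            scores = []
            for z in z_positions:
                move_stage(z)
                scores.append(focus_measure(acquire()))
            best_z = z_positions[int(np.argmax(scores))]
            move_stage(best_z)
            return best_z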

  14. Design and DSP implementation of star image acquisition and star point fast acquiring and tracking

    NASA Astrophysics Data System (ADS)

    Zhou, Guohui; Wang, Xiaodong; Hao, Zhihang

    2006-02-01

    A star sensor is a special high-accuracy photoelectric sensor, and attitude acquisition time is an important performance index of a star sensor. In this paper, the design target is a dynamic performance of 10 attitude samples per second. Based on an analysis of CCD signal timing and star image processing, a new design and a special parallel architecture for accelerating star image processing are presented. In this design, the transfer of data from expanded windows containing stars into the DSP's on-chip memory is scheduled during the inactive period of the CCD frame signal. While the CCD writes the star image to memory, the DSP processes the data already in on-chip memory. This parallelism greatly improves the efficiency of processing, and the proposed scheme results in enormous savings in the memory normally required. In the scheme, DSP HOLD mode and CPLD technology are used to create a memory shared between the CCD and the DSP. Processing efficiency is demonstrated in numerical tests: in the star-acquisition stage, the five brightest stars are acquired in only 3.5 ms; in the star-tracking stage, the data in the five expanded windows containing stars are moved into the DSP's internal memory in 43 µs, and the five star coordinates are obtained in 1.6 ms.
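
    The windowed star-extraction step described above can be sketched as a generic peak-pick-and-centroid routine (window size and star count below are assumed parameters, not the DSP implementation from the paper):

        import numpy as np

        def brightest_star_centroids(frame, n_stars=5, win=7):
            """Centroid the n brightest stars in small windows."""
            img = frame.astype(float).copy()
            half = win // 2
            centroids = []
            for _ in range(n_stars):
                iy, ix = np.unravel_index(np.argmax(img), img.shape)
                y0, y1 = max(iy - half, 0), min(iy + half + 1, img.shape[0])
                x0, x1 = max(ix - half, 0), min(ix + half + 1, img.shape[1])
                w = img[y0:y1, x0:x1]
                yy, xx = np.mgrid[y0:y1, x0:x1]
                s = w.sum()
                centroids.append(((xx * w).sum() / s, (yy * w).sum() / s))
                img[y0:y1, x0:x1] = 0.0  # suppress, then find the next star
            return centroids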

  15. The APL image processing laboratory

    NASA Technical Reports Server (NTRS)

    Jenkins, J. O.; Randolph, J. P.; Tilley, D. G.; Waters, C. A.

    1984-01-01

    The present and proposed capabilities of the Central Image Processing Laboratory, which provides a powerful resource for the advancement of programs in missile technology, space science, oceanography, and biomedical image analysis, are discussed. The use of image digitizing, digital image processing, and digital image output permits a variety of functional capabilities, including: enhancement, pseudocolor, convolution, computer output microfilm, presentation graphics, animations, transforms, geometric corrections, and feature extractions. The hardware and software of the Image Processing Laboratory, consisting of digitizing and processing equipment, software packages, and display equipment, is described. Attention is given to applications for imaging systems, map geometric correction, raster movie display of Seasat ocean data, Seasat and Skylab scenes of Nantucket Island, Space Shuttle imaging radar, differential radiography, and a computerized tomographic scan of the brain.

  16. An Improved Susceptibility Weighted Imaging Method using Multi-Echo Acquisition

    PubMed Central

    Oh, Sung Suk; Oh, Se-Hong; Nam, Yoonho; Han, Dongyeob; Stafford, Randall B.; Hwang, Jinyoung; Kim, Dong-Hyun; Park, HyunWook; Lee, Jongho

    2013-01-01

    Purpose: To introduce novel acquisition and post-processing approaches for susceptibility weighted imaging (SWI) to remove background field inhomogeneity artifacts in both magnitude and phase data. Method: The proposed method acquires three echoes in a 3D gradient echo (GRE) sequence, with a field compensation gradient (z-shim gradient) applied to the third echo. The artifacts in the magnitude data are compensated by signal estimation from all three echoes. The artifacts in the phase signals are removed by modeling the background phase distortions using Gaussians. The method was applied in vivo and compared with conventional SWI. Results: The method successfully compensates for background field inhomogeneity artifacts in magnitude and phase images, and demonstrated improved SWI images. In particular, vessels in the frontal lobe, which were not observed in conventional SWI, were identified with the proposed method. Conclusion: The new method improves image quality in SWI by restoring signal in the frontal and temporal regions. PMID:24105838
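
    For orientation, the conventional SWI combination that the proposed method improves upon multiplies the magnitude image by a phase mask raised to a power; a minimal sketch (assuming a high-pass-filtered phase in radians, with the mask exponent as an assumed parameter) is:

        import numpy as np

        def swi(magnitude, phase, power=4):
            """Conventional SWI: magnitude weighted by a negative-phase mask."""
            mask = np.ones_like(phase, dtype=float)
            neg = phase < 0
            mask[neg] = 1.0 + phase[neg] / np.pi  # ramp from 1 down to 0
            return magnitude * mask ** power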

  17. Target-acquisition performance in undersampled infrared imagers: static imagery to motion video.

    PubMed

    Krapels, Keith; Driggers, Ronald G; Teaney, Brian

    2005-11-20

    In this research we show that the target-acquisition performance of an undersampled imager improves with sensor or target motion. We provide an experiment designed to evaluate the improvement in observer performance as a function of target motion rate in the video. We created the target motion by mounting a thermal imager on a precision two-axis gimbal and varying the sensor motion rate from 0.25 to 1 instantaneous field of view per frame. A midwave thermal imager was used to permit short integration times and remove the effects of motion blur. It is shown that the human visual system performs a superresolution reconstruction that mitigates some aliasing and provides a higher (than static imagery) effective resolution. This process appears to be relatively independent of motion velocity. The results suggest that the benefits of superresolution reconstruction techniques as applied to imaging systems with motion may be limited. PMID:16318174
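
    The superresolution reconstruction attributed here to the human visual system has a simple digital analogue, shift-and-add: place subpixel-shifted low-resolution frames onto a finer grid and average. The sketch below is our illustration (not the authors' method), and assumes the inter-frame shifts are already known.

        import numpy as np

        def shift_and_add(frames, shifts, factor=2):
            """Fuse shifted low-res frames onto a grid 'factor' times finer."""
            H, W = frames[0].shape
            acc = np.zeros((H * factor, W * factor))
            cnt = np.zeros_like(acc)
            for f, (dy, dx) in zip(frames, shifts):
                iy = (np.arange(H)[:, None] * factor
                      + round(dy * factor)) % (H * factor)
                ix = (np.arange(W)[None, :] * factor
                      + round(dx * factor)) % (W * factor)
                acc[iy, ix] += f
                cnt[iy, ix] += 1
            return acc / np.maximum(cnt, 1)  # unvisited pixels stay zero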

  18. 3D Image Acquisition System Based on Shape from Focus Technique

    PubMed Central

    Billiot, Bastien; Cointault, Frédéric; Journaux, Ludovic; Simon, Jean-Claude; Gouton, Pierre

    2013-01-01

    This paper describes the design of a 3D image acquisition system dedicated to natural complex scenes composed of randomly distributed objects with spatial discontinuities. In agronomic sciences, the 3D acquisition of natural scenes is difficult due to their complex nature. Our system is based on the Shape from Focus technique initially used in the microscopic domain. We propose to adapt this technique to the macroscopic domain, and we detail the system as well as the image processing used to perform the technique. Shape from Focus is a monocular, passive 3D acquisition method that avoids the occlusion problem affecting multi-camera systems, a problem that occurs frequently in natural complex scenes such as agronomic scenes. The depth information is obtained by acting on optical parameters, mainly the depth of field. A focus measure is applied to a 2D image stack previously acquired by the system; once this focus measure has been computed, the depth map of the scene can be created. PMID:23591964
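
    The focus-measure-over-a-stack step translates directly into code; the sketch below uses a sum-modified-Laplacian-style measure and takes, per pixel, the focal position of the sharpest frame (the specific focus measure is our choice, not necessarily the paper's).

        import numpy as np

        def depth_from_focus(stack, z_values):
            """stack: (n_z, H, W) image stack; z_values: focus position per frame."""
            sml = []
            for img in stack:
                lx = np.abs(2 * img - np.roll(img, 1, 1) - np.roll(img, -1, 1))
                ly = np.abs(2 * img - np.roll(img, 1, 0) - np.roll(img, -1, 0))
                sml.append(lx + ly)                   # local focus measure
            best = np.argmax(np.stack(sml), axis=0)   # sharpest frame index
            return np.asarray(z_values)[best]         # per-pixel depth map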

  19. TH-E-17A-07: Improved Cine Four-Dimensional Computed Tomography (4D CT) Acquisition and Processing Method

    SciTech Connect

    Castillo, S; Castillo, R; Castillo, E; Pan, T; Ibbott, G; Balter, P; Hobbs, B; Dai, J; Guerrero, T

    2014-06-15

    Purpose: Artifacts arising from the 4D CT acquisition and post-processing methods add systematic uncertainty to the treatment planning process. We propose an alternate cine 4D CT acquisition and post-processing method to consistently reduce artifacts, and explore patient parameters indicative of image quality. Methods: In an IRB-approved protocol, 18 patients with primary thoracic malignancies received a standard cine 4D CT acquisition followed by an oversampling 4D CT that doubled the number of images acquired. A second cohort of 10 patients received the clinical 4D CT plus 3 oversampling scans for intra-fraction reproducibility. The clinical acquisitions were processed by the standard phase sorting method. The oversampling acquisitions were processed using Dijkstra's algorithm to optimize an artifact metric over the available image data. Image quality was evaluated with a one-way mixed ANOVA model using a correlation-based artifact metric calculated from the final 4D CT image sets. Spearman correlations and a linear mixed model tested the association between breathing parameters, patient characteristics, and image quality. Results: The oversampling 4D CT scans reduced artifact presence significantly, by 27% and 28% for the first and second cohorts, respectively. From cohort 2, the inter-replicate deviation for the oversampling method was within approximately 13% of the cross-scan average at the 0.05 significance level. Artifact presence for both the clinical and oversampling methods was significantly correlated with breathing period (ρ=0.407, p-value<0.032 clinical; ρ=0.296, p-value<0.041 oversampling). Artifact presence in the oversampling method was significantly correlated with the amount of data acquired (ρ=-0.335, p-value<0.02), indicating decreased artifact presence with increased breathing cycles per scan location. Conclusion: The 4D CT oversampling acquisition with optimized sorting reduced artifact presence significantly and reproducibly compared to the phase
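
    The paper's correlation-based artifact metric is not spelled out in the abstract; as a hedged stand-in, one common proxy scores a sorted 4D CT phase volume by the mean Pearson correlation between adjacent slices, with dips indicating sorting discontinuities.

        import numpy as np

        def slice_correlation_metric(volume):
            """Mean Pearson correlation of adjacent slices along axis 0."""
            corrs = []
            for a, b in zip(volume[:-1], volume[1:]):
                a = a.ravel() - a.mean()
                b = b.ravel() - b.mean()
                denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
                corrs.append((a @ b) / denom)
            return float(np.mean(corrs))  # lower suggests more artifacts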

  20. Cooperative processes in image segmentation

    NASA Technical Reports Server (NTRS)

    Davis, L. S.

    1982-01-01

    Research into the role of cooperative, or relaxation, processes in image segmentation is surveyed. Cooperative processes can be employed at several levels of the segmentation process: as a preprocessing enhancement step, during supervised or unsupervised pixel classification, and, finally, for the interpretation of image segments based on segment properties and relations.

  1. Voyager image processing at the Image Processing Laboratory

    NASA Technical Reports Server (NTRS)

    Jepsen, P. L.; Mosher, J. A.; Yagi, G. M.; Avis, C. C.; Lorre, J. J.; Garneau, G. W.

    1980-01-01

    This paper discusses new digital processing techniques as applied to the Voyager Imaging Subsystem and devised to explore atmospheric dynamics, spectral variations, and the morphology of Jupiter, Saturn and their satellites. Radiometric and geometric decalibration processes, the modulation transfer function, and processes to determine and remove photometric properties of the atmosphere and surface of Jupiter and its satellites are examined. It is shown that selected images can be processed into 'approach at constant longitude' time-lapse movies, which are useful for observing atmospheric changes on Jupiter. Photographs are included to illustrate various image processing techniques.

  2. NOTE: A method for controlling image acquisition in electronic portal imaging devices

    NASA Astrophysics Data System (ADS)

    Glendinning, A. G.; Hunt, S. G.; Bonnett, D. E.

    2001-02-01

    Certain types of camera-based electronic portal imaging devices (EPIDs) which initiate image acquisition based on sensing a change in video level have been observed to trigger unreliably at the beginning of dynamic multileaf collimation sequences. A simple, novel means of controlling image acquisition with an Elekta linear accelerator (Elekta Oncology Systems, Crawley, UK) is proposed which is based on illumination of a photodetector (ORP-12, Silonex Inc., Plattsburgh, NY, USA) by the electron gun of the accelerator. By incorporating a simple trigger circuit it is possible to derive a beam on/off status signal which changes at least 100 ms before any dose is measured by the accelerator. The status signal does not return to the beam-off state until all dose has been delivered and is suitable for accelerator pulse repetition frequencies of 50-400 Hz. The status signal is thus a reliable means of indicating the initiation and termination of radiation exposure, and thus controlling image acquisition of such EPIDs for this application.

  3. Digital-image processing and image analysis of glacier ice

    USGS Publications Warehouse

    Fitzpatrick, Joan J.

    2013-01-01

    This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document compose a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended but it can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.

  4. Biometric iris image acquisition system with wavefront coding technology

    NASA Astrophysics Data System (ADS)

    Hsieh, Sheng-Hsun; Yang, Hsi-Wen; Huang, Shao-Hung; Li, Yung-Hui; Tien, Chung-Hao

    2013-09-01

    Biometric signatures for identity recognition have been practiced for centuries. Basically, the personal attributes used for a biometric identification system can be classified into two areas: one is based on physiological attributes, such as DNA, facial features, retinal vasculature, fingerprint, hand geometry and iris texture; the other on individual behavioral attributes, such as signature, keystroke, voice and gait. Among these features, iris recognition is one of the most attractive approaches due to its randomness, texture stability over a lifetime, high entropy density and non-invasive acquisition. While the performance of iris recognition on high-quality images is well investigated, few studies have addressed how iris recognition performs on non-ideal image data, especially data acquired under challenging conditions such as long working distance, dynamic subject movement and uncontrolled illumination. There are three main contributions in this paper. Firstly, the optical system parameters, such as magnification and field of view, were optimally designed through first-order optics. Secondly, the irradiance constraints were derived from the optical conservation theorem. From the relationship between the subject and the detector, we could estimate the working-distance limit once the camera lens and CCD sensor were known. The working distance is set to 3 m in our system, with an 86 mm pupil diameter and a CCD irradiance of 0.3 mW/cm2. Finally, we employed a hybrid scheme combining eye tracking with a pan-and-tilt system, wavefront coding technology, filter optimization and post-processing signal recognition to implement a robust iris recognition system in dynamic operation. The blurred image was restored to ensure recognition accuracy over a 3 m working distance with 400 mm focal-length, F/6.3 optics. Simulation results as well as experiments validate the proposed code

  5. SWNT Imaging Using Multispectral Image Processing

    NASA Astrophysics Data System (ADS)

    Blades, Michael; Pirbhai, Massooma; Rotkin, Slava V.

    2012-02-01

    A flexible optical system was developed to image carbon single-wall nanotube (SWNT) photoluminescence using the multispectral capabilities of a typical CCD camcorder. The built-in Bayer filter of the CCD camera was utilized, with OpenCV C++ libraries for image processing, to decompose the image generated in a high-magnification epifluorescence microscope setup into three pseudo-color channels. By carefully calibrating the filter beforehand, it was possible to extract spectral data from these channels and effectively isolate the SWNT signals from the background.
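
    The Bayer-filter decomposition used here reduces, in the simplest case, to de-interleaving the mosaic into its color planes; a sketch assuming an RGGB layout (the actual camcorder layout and the authors' OpenCV code may differ) is:

        import numpy as np

        def bayer_channels(raw):
            """Split an RGGB mosaic into half-resolution R, G, B planes."""
            r = raw[0::2, 0::2]
            g1 = raw[0::2, 1::2]   # two green sites per 2x2 cell
            g2 = raw[1::2, 0::2]
            b = raw[1::2, 1::2]
            return r, (g1 + g2) / 2.0, b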

  6. An image processing algorithm for PPCR imaging

    NASA Astrophysics Data System (ADS)

    Cowen, Arnold R.; Giles, Anthony; Davies, Andrew G.; Workman, A.

    1993-09-01

    During 1990 the UK Department of Health installed two Photostimulable Phosphor Computed Radiography (PPCR) systems in the General Infirmary at Leeds with a view to evaluating the clinical and physical performance of the technology prior to its introduction into the NHS. An issue that came to light from the outset of the project was the radiologists' reservations about the influence of the standard PPCR computerized image processing on image quality and diagnostic performance. An investigation was set up by FAXIL to develop an algorithm to produce single-format, high-quality PPCR images that would be easy to implement and would allay the concerns of radiologists.

  7. The logical syntax of number words: theory, acquisition and processing.

    PubMed

    Musolino, Julien

    2009-04-01

    Recent work on the acquisition of number words has emphasized the importance of integrating linguistic and developmental perspectives [Musolino, J. (2004). The semantics and acquisition of number words: Integrating linguistic and developmental perspectives. Cognition, 93, 1-41; Papafragou, A., & Musolino, J. (2003). Scalar implicatures: Experiments at the semantics-pragmatics interface. Cognition, 86, 253-282; Hurewitz, F., Papafragou, A., Gleitman, L., & Gelman, R. (2006). Asymmetries in the acquisition of numbers and quantifiers. Language Learning and Development, 2, 76-97; Huang, Y. T., Snedeker, J., & Spelke, L. (submitted for publication). What exactly do numbers mean?]. Specifically, these studies have shown that data from experimental investigations of child language can be used to illuminate core theoretical issues in the semantic and pragmatic analysis of number terms. In this article, I extend this approach to the logico-syntactic properties of number words, focusing on the way numerals interact with each other (e.g. Three boys are holding two balloons) as well as with other quantified expressions (e.g. Three boys are holding each balloon). On the basis of their intuitions, linguists have claimed that such sentences give rise to at least four different interpretations, reflecting the complexity of the linguistic structure and syntactic operations involved. Using psycholinguistic experimentation with preschoolers (n=32) and adult speakers of English (n=32), I show that (a) for adults, the intuitions of linguists can be verified experimentally, (b) by the age of 5, children have knowledge of the core aspects of the logical syntax of number words, (c) in spite of this knowledge, children nevertheless differ from adults in systematic ways, and (d) the differences observed between children and adults can be accounted for on the basis of an independently motivated, linguistically-based processing model [Geurts, B. (2003). Quantifying kids. Language

  8. A multispectral three-dimensional acquisition technique for imaging near metal implants.

    PubMed

    Koch, Kevin M; Lorbiecki, John E; Hinks, R Scott; King, Kevin F

    2009-02-01

    Metallic implants used in bone and joint arthroplasty induce severe spatial perturbations to the B0 magnetic field used for high-field clinical magnetic resonance. These perturbations distort the slice-selection and frequency-encoding processes applied in conventional two-dimensional MRI techniques and hinder the diagnosis of complications from arthroplasty. Here, a method is presented whereby multiple three-dimensional fast-spin-echo images are collected using discrete offsets in RF transmission and reception frequency. It is demonstrated that this multi-acquisition variable-resonance image combination technique can be used to generate a composite image that is devoid of slice-plane distortion and possesses greatly reduced distortions in the readout direction, even in the immediate vicinity of metallic implants.
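
    One common way to combine the frequency-offset acquisitions described above into a composite image is a root-sum-of-squares over the spectral bins; the sketch below shows that generic rule (the paper's exact combination may differ).

        import numpy as np

        def combine_spectral_bins(bins):
            """Root-sum-of-squares combination of frequency-offset images."""
            stack = np.stack([np.asarray(b, dtype=float) for b in bins])
            return np.sqrt((stack ** 2).sum(axis=0))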

  9. Matlab-based interface for the simultaneous acquisition of force measures and Doppler ultrasound muscular images.

    PubMed

    Ferrer-Buedo, José; Martínez-Sober, Marcelino; Alakhdar-Mohmara, Yasser; Soria-Olivas, Emilio; Benítez-Martínez, Josep C; Martínez-Martínez, José M

    2013-04-01

    This paper tackles the design of a graphical user interface (GUI) based on Matlab (MathWorks Inc., MA), a worldwide standard in the processing of biosignals, which allows the simultaneous acquisition of muscular force signals and images from an ultrasound scanner. Thus, it is possible to unify two key magnitudes for analyzing the evolution of muscular injuries: the force exerted by the muscle and the section/length of the muscle when that force is exerted. This paper describes the modules developed and finally shows their applicability with a case study analyzing the functioning capacity of the shoulder rotator cuff. PMID:23176896

  10. Instant super-resolution imaging in live cells and embryos via analog image processing

    PubMed Central

    York, Andrew G.; Chandris, Panagiotis; Nogare, Damian Dalle; Head, Jeffrey; Wawrzusin, Peter; Fischer, Robert S.; Chitnis, Ajay; Shroff, Hari

    2013-01-01

    Existing super-resolution fluorescence microscopes compromise acquisition speed to provide subdiffractive sample information. We report an analog implementation of structured illumination microscopy that enables 3D super-resolution imaging with 145 nm lateral and 350 nm axial resolution, at acquisition speeds up to 100 Hz. By performing image processing operations optically instead of digitally, we removed the need to capture, store, and combine multiple camera exposures, increasing data acquisition rates 10–100x over other super-resolution microscopes and acquiring and displaying super-resolution images in real-time. Low excitation intensities allow imaging over hundreds of 2D sections, and combined physical and computational sectioning allow similar depth penetration to confocal microscopy. We demonstrate the capability of our system by imaging fine, rapidly moving structures including motor-driven organelles in human lung fibroblasts and the cytoskeleton of flowing blood cells within developing zebrafish embryos. PMID:24097271

  11. Astronomical Image Processing with Hadoop

    NASA Astrophysics Data System (ADS)

    Wiley, K.; Connolly, A.; Krughoff, S.; Gardner, J.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.

    2011-07-01

    In the coming decade astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. With a requirement that these images be analyzed in real time to identify moving sources such as potentially hazardous asteroids or transient objects such as supernovae, these data streams present many computational challenges. In the commercial world, new techniques that utilize cloud computing have been developed to handle massive data streams. In this paper we describe how cloud computing, and in particular the map-reduce paradigm, can be used in astronomical data processing. We will focus on our experience implementing a scalable image-processing pipeline for the SDSS database using Hadoop (http://hadoop.apache.org). This multi-terabyte imaging dataset approximates future surveys such as those which will be conducted with the LSST. Our pipeline performs image coaddition in which multiple partially overlapping images are registered, integrated and stitched into a single overarching image. We will first present our initial implementation, then describe several critical optimizations that have enabled us to achieve high performance, and finally describe how we are incorporating a large in-house existing image processing library into our Hadoop system. The optimizations involve prefiltering of the input to remove irrelevant images from consideration, grouping individual FITS files into larger, more efficient indexed files, and a hybrid system in which a relational database is used to determine the input images relevant to the task. The incorporation of an existing image processing library, written in C++, presented difficult challenges since Hadoop is programmed primarily in Java. We will describe how we achieved this integration and the sophisticated image processing routines that were made feasible as a result. We will end by briefly describing the longer term goals of our work, namely detection and classification
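
    The coaddition pipeline maps naturally onto map-reduce; the toy sketch below groups registered images by a sky-tile key and averages them in the reduce step (plain Python standing in for Hadoop, with registration and resampling omitted; all names are ours).

        from collections import defaultdict

        import numpy as np

        def map_image(tile_id, pixels):
            """Map step: emit (tile, pixels) for each registered input image."""
            yield tile_id, pixels

        def reduce_tile(pixel_arrays):
            """Reduce step: average every image that landed on one tile."""
            return np.mean(pixel_arrays, axis=0)

        def coadd(records):
            groups = defaultdict(list)
            for tile_id, pixels in records:      # shuffle: group by key
                for k, v in map_image(tile_id, pixels):
                    groups[k].append(v)
            return {k: reduce_tile(v) for k, v in groups.items()}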

  12. Reading acquisition enhances an early visual process of contour integration.

    PubMed

    Szwed, Marcin; Ventura, Paulo; Querido, Luis; Cohen, Laurent; Dehaene, Stanislas

    2012-01-01

    The acquisition of reading has an extensive impact on the developing brain and leads to enhanced abilities in phonological processing and visual letter perception. Could this expertise also extend to early visual abilities outside the reading domain? Here we studied the performance of illiterate, ex-illiterate and literate adults closely matched in age, socioeconomic and cultural characteristics, on a contour integration task known to depend on early visual processing. Stimuli consisted of a closed egg-shaped contour made of disconnected Gabor patches, within a background of randomly oriented Gabor stimuli. Subjects had to decide whether the egg was pointing left or right. Difficulty was varied by jittering the orientation of the Gabor patches forming the contour. Contour integration performance was lower in illiterates than in both ex-illiterate and literate controls. We argue that this difference in contour perception must reflect a genuine difference in visual function. According to this view, the intensive perceptual training that accompanies reading acquisition also improves early visual abilities, suggesting that the impact of literacy on the visual system is more widespread than originally proposed.

  13. Optical Image Acquisition by Vibrating Knife Edge Techniques

    NASA Astrophysics Data System (ADS)

    Samson, Scott A.

    Traditional optical microscopes have inherent limitations in their attainable resolution. These shortcomings are a result of non-propagating evanescent waves created by the small details in the specimen to be imaged. These problems are circumvented in the Near-field Scanning Optical Microscope (NSOM). Previous NSOMs use physical apertures to sample the optical field created by the specimen; by scanning a sub-wavelength-sized aperture past the specimen, very minute details may be imaged. In this thesis, a new method for obtaining images of various objects is studied. The method is a derivative of the scanned knife edge techniques commonly used in optical laboratories. The general setup consists of illuminating a vibrating optically opaque knife edge placed in close proximity to the object. By detecting only the time-varying optical power and utilizing various signal processing techniques, including computer subtraction, beat-frequency detection, and tomographic reconstruction, two-dimensional images of the object may be formed. In essence, a sampler similar to the aperture NSOMs is created. Mathematics, computer simulations, and low-resolution experiments are used to verify the thesis. Various aspects associated with improving the resolution with regard to NSOM are discussed, both theoretically and practically. The vibrating knife edge as a high-resolution sampler is compared to the physically small NSOM aperture. Finally, future uses of the vibrating knife edge techniques and further research are introduced. Applicable references and computer programs are listed in appendices.

  14. Acoustic image-processing software

    NASA Astrophysics Data System (ADS)

    Several algorithms that display, enhance and analyze side-scan sonar images of the seafloor have been developed by the University of Washington, Seattle, as part of an Office of Naval Research funded program in acoustic image analysis. One of these programs, PORTAL, is a small (less than 100K) image display and enhancement program that can run on MS-DOS computers with VGA boards. This program is now available in the public domain for general use in acoustic image processing. PORTAL is designed to display side-scan sonar data that is stored in most standard formats, including SeaMARC I, II, 150 and GLORIA data. In addition to the “standard” formats, PORTAL has a module “front end” that allows the user to modify the program to accept other image formats. In addition to side-scan sonar data, the program can also display digital optical images from scanners and “framegrabbers,” gridded bathymetry data from Sea Beam and other sources, and potential field (magnetics/gravity) data. While limited in image analysis capability, the program allows image enhancement by histogram manipulation and basic filtering operations, including multistage filtering. PORTAL can print reasonably high-quality images on PostScript laser printers and lower-quality images on non-PostScript printers with HP LaserJet emulation. Images suitable only for index sheets are also possible on dot matrix printers.

  15. Dual-energy imaging of the chest: Optimization of image acquisition techniques for the 'bone-only' image

    SciTech Connect

    Shkumat, N. A.; Siewerdsen, J. H.; Richard, S.; Paul, N. S.; Yorkston, J.; Van Metter, R.

    2008-02-15

    Experiments were conducted to determine optimal acquisition techniques for bone image decompositions for a prototype dual-energy (DE) imaging system. Technique parameters included the kVp pair (denoted [kVp^L/kVp^H]) and the dose allocation (the proportion of dose in the low- and high-energy projections), each optimized to provide maximum signal difference-to-noise ratio in DE images. Experiments involved a chest phantom representing an average patient size and containing simulated ribs and lung nodules. Low- and high-energy kVp were varied from 60-90 and 120-150 kVp, respectively. The optimal kVp pair was determined to be [60/130] kVp, with image quality showing a strong dependence on low-kVp selection. Optimal dose allocation was approximately 0.5, i.e., an equal dose imparted by the low- and high-energy projections. The results complement earlier studies of optimal DE soft-tissue image acquisition, with differences attributed to the specific imaging task. Together, the results help to guide the development and implementation of high-performance DE imaging systems, with applications including lung nodule detection and diagnosis, pneumothorax identification, and musculoskeletal imaging (e.g., discrimination of rib fractures from metastasis).
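
    The bone-only decomposition itself is not detailed in the abstract; conventionally it is a weighted log subtraction of the low- and high-kVp projections, with the weight tuned so soft tissue cancels. A hedged sketch (names and the calibration of w are ours):

        import numpy as np

        def de_bone_image(low_kvp, high_kvp, w):
            """Weighted log subtraction for a 'bone-only' DE image.

            low_kvp, high_kvp: projections in linear detector units;
            w: tissue-cancellation weight, calibrated per system.
            """
            lo = np.log(np.clip(low_kvp, 1e-6, None))
            hi = np.log(np.clip(high_kvp, 1e-6, None))
            return hi - w * lo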

  16. Theory of Adaptive Acquisition Method for Image Reconstruction from Projections and Application to EPR Imaging

    NASA Astrophysics Data System (ADS)

    Placidi, G.; Alecci, M.; Sotgiu, A.

    1995-07-01

    An adaptive method for selecting the projections to be used for image reconstruction is presented. The method starts with the acquisition of four projections at angles of 0°, 45°, 90°, 135° and selects the new angles by computing a function of the previous projections. This makes it possible to adapt the selection of projections to the arbitrary shape of the sample, thus measuring a more informative set of projections. When the sample is smooth or has internal symmetries, this technique allows a reduction in the number of projections required to reconstruct the image without loss of information. The method has been tested on simulated data at different values of signal-to-noise ratio (S/N) and on experimental data recorded by an EPR imaging apparatus.
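
    The abstract does not give the authors' selection function, but the adaptive idea can be illustrated with a simple heuristic: bisect the pair of neighbouring measured angles whose projections differ the most (our stand-in rule, seeded with the same 0°, 45°, 90°, 135° start).

        import numpy as np

        def next_angle(angles, projections):
            """Pick the next angle between the most dissimilar neighbours."""
            order = np.argsort(angles)
            a = np.asarray(angles, dtype=float)[order]
            p = [np.asarray(projections[i], dtype=float) for i in order]
            diffs = [np.abs(p[i + 1] - p[i]).sum() for i in range(len(a) - 1)]
            i = int(np.argmax(diffs))
            return 0.5 * (a[i] + a[i + 1])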

  17. Progress in the Development of a new Angiography Suite including the High Resolution Micro-Angiographic Fluoroscope (MAF), a Control, Acquisition, Processing, and Image Display System (CAPIDS), and a New Detector Changer Integrated into a Commercial C-Arm Angiography Unit to Enable Clinical Use.

    PubMed

    Wang, Weiyuan; Ionita, Ciprian N; Keleshis, Christos; Kuhls-Gilcrist, Andrew; Jain, Amit; Bednarek, Daniel; Rudin, Stephen

    2010-03-23

    Due to the high-resolution needs of angiographic and interventional vascular imaging, a Micro-Angiographic Fluoroscope (MAF) detector with a Control, Acquisition, Processing, and Image Display System (CAPIDS) was installed on a detector changer which was attached to the C-arm of a clinical angiographic unit. The MAF detector provides high-resolution, high-sensitivity, and real-time imaging capabilities and consists of a 300 μm-thick CsI phosphor, a dual stage micro-channel plate light image intensifier (LII) coupled to a fiber optic taper (FOT), and a scientific grade frame-transfer CCD camera, providing an image matrix of 1024×1024 35 μm square pixels with 12 bit depth. The Solid-State X-Ray Image Intensifier (SSXII) is an EMCCD (Electron Multiplying charge-coupled device) based detector which provides an image matrix of 1k×1k 32 μm square pixels with 12 bit depth. The changer allows the MAF or a SSXII region-of-interest (ROI) detector to be inserted in front of the standard flat-panel detector (FPD) when higher resolution is needed during angiographic or interventional vascular imaging procedures. The CAPIDS was developed and implemented using LabVIEW software and provides a user-friendly interface that enables control of several clinical radiographic imaging modes of the MAF or SSXII including: fluoroscopy, roadmapping, radiography, and digital-subtraction-angiography (DSA). The total system has been used for image guidance during endovascular image-guided interventions (EIGI) using prototype self-expanding asymmetric vascular stents (SAVS) in over 10 rabbit aneurysm creation and treatment experiments which have demonstrated the system's potential benefits for future clinical use. PMID:21243037

  18. Progress in the Development of a new Angiography Suite including the High Resolution Micro-Angiographic Fluoroscope (MAF), a Control, Acquisition, Processing, and Image Display System (CAPIDS), and a New Detector Changer Integrated into a Commercial C-Arm Angiography Unit to Enable Clinical Use

    PubMed Central

    Wang, Weiyuan; Ionita, Ciprian N; Keleshis, Christos; Kuhls-Gilcrist, Andrew; Jain, Amit; Bednarek, Daniel; Rudin, Stephen

    2010-01-01

    Due to the high-resolution needs of angiographic and interventional vascular imaging, a Micro-Angiographic Fluoroscope (MAF) detector with a Control, Acquisition, Processing, and Image Display System (CAPIDS) was installed on a detector changer which was attached to the C-arm of a clinical angiographic unit. The MAF detector provides high-resolution, high-sensitivity, and real-time imaging capabilities and consists of a 300 μm-thick CsI phosphor, a dual stage micro-channel plate light image intensifier (LII) coupled to a fiber optic taper (FOT), and a scientific grade frame-transfer CCD camera, providing an image matrix of 1024×1024 35 μm square pixels with 12 bit depth. The Solid-State X-Ray Image Intensifier (SSXII) is an EMCCD (Electron Multiplying charge-coupled device) based detector which provides an image matrix of 1k×1k 32 μm square pixels with 12 bit depth. The changer allows the MAF or a SSXII region-of-interest (ROI) detector to be inserted in front of the standard flat-panel detector (FPD) when higher resolution is needed during angiographic or interventional vascular imaging procedures. The CAPIDS was developed and implemented using LabVIEW software and provides a user-friendly interface that enables control of several clinical radiographic imaging modes of the MAF or SSXII including: fluoroscopy, roadmapping, radiography, and digital-subtraction-angiography (DSA). The total system has been used for image guidance during endovascular image-guided interventions (EIGI) using prototype self-expanding asymmetric vascular stents (SAVS) in over 10 rabbit aneurysm creation and treatment experiments which have demonstrated the system's potential benefits for future clinical use. PMID:21243037

  19. Progress in the development of a new angiography suite including the high resolution micro-angiographic fluoroscope (MAF): a control, acquisition, processing, and image display system (CAPIDS), and a new detector changer integrated into a commercial C-arm angiography unit to enable clinical use

    NASA Astrophysics Data System (ADS)

    Wang, Weiyuan; Ionita, Ciprian N.; Keleshis, Christos; Kuhls-Gilcrist, Andrew; Jain, Amit; Bednarek, Daniel; Rudin, Stephen

    2010-04-01

    Due to the high-resolution needs of angiographic and interventional vascular imaging, a Micro-Angiographic Fluoroscope (MAF) detector with a Control, Acquisition, Processing, and Image Display System (CAPIDS) was installed on a detector changer which was attached to the C-arm of a clinical angiographic unit. The MAF detector provides high-resolution, high-sensitivity, and real-time imaging capabilities and consists of a 300 μm-thick CsI phosphor, a dual stage micro-channel plate light image intensifier (LII) coupled to a fiber optic taper (FOT), and a scientific grade frame-transfer CCD camera, providing an image matrix of 1024×1024 35 μm square pixels with 12 bit depth. The Solid-State X-Ray Image Intensifier (SSXII) is an EMCCD (Electron Multiplying charge-coupled device) based detector which provides an image matrix of 1k×1k 32 μm square pixels with 12 bit depth. The changer allows the MAF or a SSXII region-of-interest (ROI) detector to be inserted in front of the standard flat-panel detector (FPD) when higher resolution is needed during angiographic or interventional vascular imaging procedures. The CAPIDS was developed and implemented using LabVIEW software and provides a user-friendly interface that enables control of several clinical radiographic imaging modes of the MAF or SSXII including: fluoroscopy, roadmapping, radiography, and digital-subtraction-angiography (DSA). The total system has been used for image guidance during endovascular image-guided interventions (EIGI) using prototype self-expanding asymmetric vascular stents (SAVS) in over 10 rabbit aneurysm creation and treatment experiments which have demonstrated the system's potential benefits for future clinical use.

  20. Radio reflection imaging of asteroid and comet interiors I: Acquisition and imaging theory

    NASA Astrophysics Data System (ADS)

    Sava, Paul; Ittharat, Detchai; Grimm, Robert; Stillman, David

    2015-05-01

    Imaging the interior structure of comets and asteroids can provide insight into their formation in the early Solar System and can aid in their exploration and hazard mitigation. Accurate imaging can be accomplished using broadband wavefield data penetrating deep inside the object under investigation. This can be done in principle using seismic systems (which is difficult, since it requires contact with the studied object) or using radar systems (which is easier, since it can be conducted from orbit). We advocate the use of radar systems based on instruments similar to the ones currently deployed in space, e.g. the CONSERT experiment of the Rosetta mission, but performing imaging using data reflected from internal interfaces instead of data transmitted through the imaging object. Our core methodology is wavefield extrapolation using time-domain finite differences, a technique often referred to as reverse-time migration and proven to be effective in high-quality imaging of complex geologic structures. The novelty of our approach consists in the use of dual orbiters around the studied object, instead of an orbiter and a lander. Dual-orbiter systems can provide multi-offset data that illuminate the target object from many different illumination angles. Multi-offset data improve image quality (a) by avoiding illumination shadows, (b) by attenuating coherent noise (image artifacts) caused by wavefield multi-pathing, and (c) by providing the information necessary to infer the model parameters needed to simulate wavefields inside the imaging target. The images obtained using multi-offset data are robust with respect to instrument noise comparable in strength to the reflected signal. Dual-orbiter acquisition leads to improved image quality that is directly dependent on the aperture between the transmitter and receiver antennas. We illustrate the proposed methodology using a complex model based on a scaled version of asteroid 433 Eros.

  1. Real-time digital design for an optical coherence tomography acquisition and processing system

    NASA Astrophysics Data System (ADS)

    Ralston, Tyler S.; Mayen, Jose A.; Marks, Dan L.; Boppart, Stephen A.

    2004-07-01

    We present a real-time, multi-dimensional, digital, optical coherence tomography (OCT) acquisition and imaging system. The system consists of conventional OCT optics, a rapid scanning optical delay (RSOD) line to support fast data acquisition rates, and a high-speed A/D converter for sampling the interference waveforms. A 1M-gate Virtex-II field programmable gate array (FPGA) is designed to perform digital down-conversion, which is analogous to demodulating and low-pass filtering the continuous-time signal. The system creates in-phase and quadrature-phase components using a tunable quadrature mixer. Multistage polyphase finite impulse response (FIR) filtering and downsampling are used to remove unneeded high frequencies. A floating-point digital signal processor (DSP) computes the magnitude and phase shifts. The data are read by a host machine and displayed on screen at real-time rates commensurate with the data acquisition rate. This system offers flexible acquisition and processing parameters for a wide range of multi-dimensional optical microscopy techniques.
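
    The digital down-conversion chain described here (quadrature mixing, FIR low-pass filtering, decimation, then magnitude and phase) can be prototyped offline in a few lines; the sketch below uses a single-stage FIR where the FPGA uses multistage polyphase filters, and all rates and tap counts are assumptions.

        import numpy as np
        from scipy.signal import firwin, lfilter

        def digital_down_convert(x, fs, f_carrier, decim=8, ntaps=101):
            """Return envelope and phase of a real fringe signal x."""
            t = np.arange(len(x)) / fs
            lo = np.exp(-2j * np.pi * f_carrier * t)  # quadrature LO
            bb = x * lo                               # complex baseband
            taps = firwin(ntaps, 0.4 * fs / decim, fs=fs)
            bb = lfilter(taps, 1.0, bb)[::decim]      # low-pass + decimate
            return np.abs(bb), np.angle(bb)           # magnitude and phase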

  2. Contractor relationships and inter-organizational strategies in NASA's R and D acquisition process

    NASA Technical Reports Server (NTRS)

    Guiltinan, J.

    1976-01-01

    Interorganizational analysis of NASA's acquisition process for research and development systems is discussed. The importance of understanding the contractor environment, constraints, and motives in selecting an acquisition strategy is demonstrated. By articulating clear project goals, by utilizing information about the contractor and his needs at each stage in the acquisition process, and by thorough analysis of the inter-organizational relationship, improved selection of acquisition strategies and business practices is possible.

  3. Multiplex Mass Spectrometric Imaging with Polarity Switching for Concurrent Acquisition of Positive and Negative Ion Images

    NASA Astrophysics Data System (ADS)

    Korte, Andrew R.; Lee, Young Jin

    2013-06-01

    We have recently developed a multiplex mass spectrometry imaging (MSI) method, which incorporates high-mass-resolution imaging and MS/MS and MS3 imaging of several compounds in a single data acquisition, utilizing a hybrid linear ion trap-Orbitrap mass spectrometer (Perdian and Lee, Anal. Chem. 82, 9393-9400, 2010). Here we extend this capability to obtain positive and negative ion MS and MS/MS spectra in a single MS imaging experiment through polarity switching within spiral steps of each raster step. This methodology was demonstrated for the analysis of various lipid class compounds in a section of mouse brain. It allows for simultaneous imaging of compounds that are readily ionized in positive mode (e.g., phosphatidylcholines and sphingomyelins) and those that are readily ionized in negative mode (e.g., sulfatides, phosphatidylinositols, and phosphatidylserines). MS/MS imaging was also performed for a few compounds in both positive and negative ion mode within the same experimental set-up. Insufficient stabilization time for the Orbitrap high voltage leads to slight deviations in observed masses, but these deviations are systematic and were easily corrected with a two-point calibration to background ions.
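
    The two-point correction mentioned above amounts to fitting a linear map from observed to true m/z using two known background ions. A minimal sketch follows; the ion masses are hypothetical, not the paper's calibrants.

    ```python
    import numpy as np

    # Hypothetical background ions: (observed m/z, known true m/z).
    ref_observed = np.array([445.1196, 663.4512])
    ref_true = np.array([445.1200, 663.4502])

    # Linear two-point calibration: m_true = a * m_obs + b.
    a, b = np.polyfit(ref_observed, ref_true, deg=1)

    def calibrate(mz_observed):
        """Correct systematic mass deviations using the two-point fit."""
        return a * mz_observed + b

    peaks = np.array([760.5851, 806.5680])      # hypothetical lipid peaks
    print(calibrate(peaks))
    ```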

  4. Autonomous Closed-Loop Tasking, Acquisition, Processing, and Evaluation for Situational Awareness Feedback

    NASA Technical Reports Server (NTRS)

    Frye, Stuart; Mandl, Dan; Cappelaere, Pat

    2016-01-01

    This presentation describes the closed-loop satellite autonomy methods used to connect users with the assets on Earth Orbiter-1 (EO-1) and similar satellites. The base layer is a distributed architecture based on the Goddard Mission Services Evolution Concept (GMSEC), leaving each asset under independent control. Situational awareness is provided by a middleware layer through a common Application Programmer Interface (API) to GMSEC components developed at GSFC. Users set up their own tasking requests and receive views into immediate past acquisitions in their area of interest and into future acquisition feasibilities across all assets. Automated notifications containing published links to image footprints, algorithm results, and full data sets are returned to users via pub/sub feeds. Theme-based algorithms are available on demand for processing.

  5. Image analysis and data-acquisition techniques for infrared and CCD cameras for ATF

    NASA Astrophysics Data System (ADS)

    Young, K. G.; Hillis, D. L.

    1988-08-01

    A multipurpose image processing system has been developed for the Advanced Toroidal Facility (ATF) stellarator experiment. This system makes it possible to investigate the complicated topology inherent in stellarator plasmas with conventional video technology. Infrared (IR) and charge-coupled device (CCD) cameras, operated at the standard video framing rate, are used on ATF to measure heat flux patterns to the vacuum vessel wall and visible-light emission from the ionized plasma. These video cameras are coupled with fast acquisition and display systems, developed for a MicroVAX-II, which allow between-shot observation of the dynamic temperature and spatial extent of the plasma generated by ATF. The IR camera system acquires one frame of 60×80 eight-bit pixels every 16.7 ms via storage in a CAMAC module. The CCD data acquisition proceeds automatically, storing the video frames until its 12-bit, 1-Mbyte CAMAC memory is filled. After analysis, transformation, and compression, selected portions of the data are stored on disk. Interactive display of experimental data and theoretical calculations is performed with software written in Interactive Data Language.

  6. Fuzzy image processing in sun sensor

    NASA Technical Reports Server (NTRS)

    Mobasser, S.; Liebe, C. C.; Howard, A.

    2003-01-01

    This paper describes how fuzzy image processing is implemented in the instrument. A comparison of fuzzy image processing with a more conventional image processing algorithm is provided and shows that the fuzzy approach yields better accuracy than conventional image processing.

  7. Signal and Image Processing Operations

    1995-05-10

    VIEW is a software system for processing arbitrary multidimensional signals. It provides facilities for numerical operations, signal displays, and signal databasing. The major emphasis of the system is on the processing of time sequences and multidimensional images. The system is designed to be both portable and extensible. It currently runs on UNIX systems, primarily Sun workstations.

  8. KAM (Knowledge Acquisition Module): A tool to simplify the knowledge acquisition process

    NASA Technical Reports Server (NTRS)

    Gettig, Gary A.

    1988-01-01

    Analysts, knowledge engineers and information specialists are faced with increasing volumes of time-sensitive data in text form, either as free text or highly structured text records. Rapid access to the relevant data in these sources is essential. However, due to the volume and organization of the contents, and limitations of human memory and association, frequently: (1) important information is not located in time; (2) reams of irrelevant data are searched; and (3) interesting or critical associations are missed due to physical or temporal gaps involved in working with large files. The Knowledge Acquisition Module (KAM) is a microcomputer-based expert system designed to assist knowledge engineers, analysts, and other specialists in extracting useful knowledge from large volumes of digitized text and text-based files. KAM formulates non-explicit, ambiguous, or vague relations, rules, and facts into a manageable and consistent formal code. A library of system rules or heuristics is maintained to control the extraction of rules, relations, assertions, and other patterns from the text. These heuristics can be added, deleted or customized by the user. The user can further control the extraction process with optional topic specifications. This allows the user to cluster extracts based on specific topics. Because KAM formalizes diverse knowledge, it can be used by a variety of expert systems and automated reasoning applications. KAM can also perform important roles in computer-assisted training and skill development. Current research efforts include the applicability of neural networks to aid in the extraction process and the conversion of these extracts into standard formats.

  9. Differential morphology and image processing.

    PubMed

    Maragos, P

    1996-01-01

    Image processing via mathematical morphology has traditionally used geometry to intuitively understand morphological signal operators and set or lattice algebra to analyze them in the space domain. We provide a unified view and analytic tools for morphological image processing that is based on ideas from differential calculus and dynamical systems. This includes ideas on using partial differential or difference equations (PDEs) to model distance propagation or nonlinear multiscale processes in images. We briefly review some nonlinear difference equations that implement discrete distance transforms and relate them to numerical solutions of the eikonal equation of optics. We also review some nonlinear PDEs that model the evolution of multiscale morphological operators and use morphological derivatives. Among the new ideas presented, we develop some general 2-D max/min-sum difference equations that model the space dynamics of 2-D morphological systems (including the distance computations) and some nonlinear signal transforms, called slope transforms, that can analyze these systems in a transform domain in ways conceptually similar to the application of Fourier transforms to linear systems. Thus, distance transforms are shown to be bandpass slope filters. We view the analysis of the multiscale morphological PDEs and of the eikonal PDE solved via weighted distance transforms as a unified area in nonlinear image processing, which we call differential morphology, and briefly discuss its potential applications to image processing and computer vision.
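
    The min-sum difference equations mentioned above are exactly the recursions behind discrete distance transforms. The following is a minimal sketch, in the spirit of the operators the paper analyzes but not taken from it: a two-pass (forward/backward) min-sum recursion computing a city-block distance transform.

    ```python
    import numpy as np

    def distance_transform(binary):
        """City-block distance to the nearest foreground pixel, computed by
        two sequential min-sum passes (a 2-D min-sum difference equation)."""
        big = binary.shape[0] * binary.shape[1]     # effectively infinity
        d = np.where(binary, 0, big).astype(int)
        rows, cols = d.shape
        # Forward pass: propagate distances from the top-left corner.
        for i in range(rows):
            for j in range(cols):
                if i > 0:
                    d[i, j] = min(d[i, j], d[i - 1, j] + 1)
                if j > 0:
                    d[i, j] = min(d[i, j], d[i, j - 1] + 1)
        # Backward pass: propagate distances from the bottom-right corner.
        for i in range(rows - 1, -1, -1):
            for j in range(cols - 1, -1, -1):
                if i < rows - 1:
                    d[i, j] = min(d[i, j], d[i + 1, j] + 1)
                if j < cols - 1:
                    d[i, j] = min(d[i, j], d[i, j + 1] + 1)
        return d

    img = np.zeros((7, 7), dtype=bool)
    img[3, 3] = True
    print(distance_transform(img))
    ```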

  10. Real-time multi-camera video acquisition and processing platform for ADAS

    NASA Astrophysics Data System (ADS)

    Saponara, Sergio

    2016-04-01

    The paper presents the design of a real-time and low-cost embedded system for image acquisition and processing in Advanced Driver Assisted Systems (ADAS). The system adopts a multi-camera architecture to provide a panoramic view of the objects surrounding the vehicle. Fish-eye lenses are used to achieve a large Field of View (FOV). Since they introduce radial distortion of the images projected on the sensors, a real-time algorithm for their correction is also implemented in a pre-processor. An FPGA-based hardware implementation, re-using IP macrocells for several ADAS algorithms, allows for real-time processing of input streams from VGA automotive CMOS cameras.
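
    Radial distortion correction of the kind implemented in the pre-processor can be sketched as an inverse remapping: for each undistorted output pixel, compute the corresponding coordinate in the distorted source image and interpolate. A minimal NumPy/SciPy illustration with a hypothetical single-coefficient polynomial model follows; the paper's actual lens model and FPGA pipeline are not reproduced here.

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def undistort(image, k1=0.25):
        """Correct radial distortion r_d = r_u * (1 + k1 * r_u**2) by
        backward-mapping each output pixel into the distorted frame."""
        h, w = image.shape
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        yy, xx = np.mgrid[0:h, 0:w].astype(float)
        # Normalized coordinates and radius in the undistorted output image.
        xn, yn = (xx - cx) / cx, (yy - cy) / cy
        r2 = xn ** 2 + yn ** 2
        scale = 1.0 + k1 * r2                   # forward model used as backward map
        src_x = xn * scale * cx + cx
        src_y = yn * scale * cy + cy
        # Bilinear interpolation at the distorted source coordinates.
        return map_coordinates(image, [src_y, src_x], order=1, mode='nearest')
    ```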

  11. Associative architecture for image processing

    NASA Astrophysics Data System (ADS)

    Adar, Rutie; Akerib, Avidan

    1997-09-01

    This article presents a new generation of parallel processing architecture for real-time image processing. The approach is implemented in a real-time image processor chip, called the Xium™-2, based on combining a fully associative array, which provides the parallel engine, with a serial RISC core on the same die. The architecture is fully programmable and can implement a wide range of color image processing, computer vision, and media processing functions in real time. The associative part of the chip is based on the patent-pending methodology of Associative Computing Ltd. (ACL), which condenses 2048 associative processors, each of 128 'intelligent' bits. Each bit can be a processing bit or a memory bit. At only 33 MHz, in a 0.6-micron manufacturing process, the chip has a computational power of 3 billion ALU operations per second and 66 billion string-search operations per second. The fully programmable nature of the Xium™-2 chip enables developers to use ACL tools to write their own proprietary algorithms combined with existing image processing and analysis functions from ACL's extended set of libraries.

  12. Digital processing of radiographic images

    NASA Technical Reports Server (NTRS)

    Bond, A. D.; Ramapriyan, H. K.

    1973-01-01

    Some techniques, together with their software documentation, are presented for the digital enhancement of radiographs. Both image handling and image processing operations are considered. The image handling operations dealt with are: (1) conversion of data format from packed to unpacked and vice versa; (2) automatic extraction of image data arrays; (3) transposition and 90-degree rotations of large data arrays; (4) translation of data arrays for registration; and (5) reduction of the dimensions of data arrays by integral factors. Both the frequency and the spatial domain approaches are presented for the design and implementation of the image processing operations. It is shown that spatial-domain recursive implementation of filters is much faster than nonrecursive implementations using fast Fourier transforms (FFT) for the cases of interest in this work. The recursive implementation of a class of matched filters for enhancing the image signal-to-noise ratio is described. Test patterns are used to illustrate the filtering operations. The application of the techniques to radiographic images of metallic structures is demonstrated through several examples.

  13. FITS Liberator: Image processing software

    NASA Astrophysics Data System (ADS)

    Lindberg Christensen, Lars; Nielsen, Lars Holm; Nielsen, Kaspar K.; Johansen, Teis; Hurt, Robert; de Martin, David

    2012-06-01

    The ESA/ESO/NASA FITS Liberator makes it possible to process and edit astronomical science data in the FITS format to produce stunning images of the universe. Formerly a plugin for Adobe Photoshop, the current version of FITS Liberator is a stand-alone application and no longer requires Photoshop. This image processing software makes it possible to create color images using raw observations from a range of telescopes; the FITS Liberator continues to support the FITS and PDS formats, preferred by astronomers and planetary scientists respectively, which enables data to be processed from a wide range of telescopes and planetary probes, including ESO's Very Large Telescope, the NASA/ESA Hubble Space Telescope, NASA's Spitzer Space Telescope, ESA's XMM-Newton Telescope and Cassini-Huygens or Mars Reconnaissance Orbiter.

  14. Repetition time and flip angle variation in SPRITE imaging for acquisition time and SAR reduction.

    PubMed

    Shah, N Jon; Kaffanke, Joachim B; Romanzetti, Sandro

    2009-08-01

    Single point imaging methods such as SPRITE are often the technique of choice for imaging fast-relaxing nuclei in solids. Single point imaging sequences based on SPRITE in their conventional form are ill-suited for in vivo applications since the acquisition time is long and the SAR is high. A new sequence design is presented employing variable repetition times and variable flip angles in order to improve the characteristics of SPRITE for in vivo applications. The achievable acquisition time savings as well as SAR reductions and/or SNR increases afforded by this approach were investigated using a resolution phantom as well as PSF simulations. Imaging results in phantoms indicate that acquisition times may be reduced by up to 70% and the SAR may be reduced by 40% without an appreciable loss of image quality. PMID:19447652

  15. Seismic Imaging Processing and Migration

    2000-06-26

    Salvo is a 3D, finite difference, prestack, depth migration code for parallel computers. It is also capable of processing 2D and poststack data. The code requires as input a seismic dataset, a velocity model, and a file of parameters that allows the user to select various options. The code uses this information to produce a seismic image. Some of the options available to the user include the application of various filters and imaging conditions. The code also incorporates phase encoding (patent applied for) to process multiple shots simultaneously.

  16. The Logical Syntax of Number Words: Theory, Acquisition and Processing

    ERIC Educational Resources Information Center

    Musolino, Julien

    2009-01-01

    Recent work on the acquisition of number words has emphasized the importance of integrating linguistic and developmental perspectives [Musolino, J. (2004). The semantics and acquisition of number words: Integrating linguistic and developmental perspectives. "Cognition 93", 1-41; Papafragou, A., Musolino, J. (2003). Scalar implicatures: Scalar…

  17. Acquisition and Processing of Multi-Fold GPR Data for Characterization of Shallow Groundwater Systems

    NASA Astrophysics Data System (ADS)

    Bradford, J. H.

    2004-05-01

    Most ground-penetrating radar (GPR) data are acquired with a constant transmitter-receiver offset, and investigators often apply little or no processing in generating a subsurface image. This mode of operation can provide useful information, but does not take full advantage of the information the GPR signal can carry. In continuous multi-offset (CMO) mode, one acquires several traces with varying source-receiver separations at each point along the survey. CMO acquisition is analogous to common-midpoint acquisition in exploration seismology and gives rise to improved subsurface characterization through three key features: 1) processes such as stacking and velocity filtering significantly attenuate coherent and random noise, resulting in subsurface images that are easier to interpret; 2) CMO data enable measurement of vertical and lateral velocity variations, which leads to improved understanding of material distribution and more accurate depth estimates; and 3) CMO data enable observation of reflected-wave behaviour (i.e., variations in amplitude and spectrum) at a common reflection point for various travel paths through the subsurface; quantification of these variations can be a valuable tool in material property characterization. Although there are a few examples in the literature, investigators rarely acquire CMO GPR data. This is, in large part, due to the fact that CMO acquisition with a single-channel system is labor intensive and time consuming. At present, no multi-channel GPR systems designed for CMO acquisition are commercially available. Over the past 8 years I have designed, conducted, and processed numerous 2D and 3D CMO GPR surveys using a single-channel GPR system. I have developed field procedures that enable a three-person crew to acquire CMO GPR data at a rate comparable to a similar-scale multi-channel seismic reflection survey. Additionally, many recent advances in signal processing developed in the oil and gas industry have yet to see significant application in GPR processing.
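
    The stacking step that gives CMO data their noise advantage can be sketched as follows: apply a normal-moveout (NMO) correction, t(x) = sqrt(t0^2 + x^2/v^2), to align reflections across offsets, then sum the traces so coherent signal adds while random noise averages down. A minimal NumPy sketch with a constant-velocity assumption; the offsets, velocity, and gather are hypothetical.

    ```python
    import numpy as np

    def nmo_stack(gather, offsets, velocity, dt):
        """NMO-correct a common-midpoint gather and stack across offsets.

        gather   : (n_traces, n_samples) array, one trace per offset
        offsets  : source-receiver offsets (m), one per trace
        velocity : constant RMS velocity (m/s), a simplifying assumption
        dt       : sample interval (s)
        """
        n_traces, n_samples = gather.shape
        t0 = np.arange(n_samples) * dt                  # zero-offset times
        stacked = np.zeros(n_samples)
        for trace, x in zip(gather, offsets):
            tx = np.sqrt(t0 ** 2 + (x / velocity) ** 2) # moveout curve
            # Resample the trace at the moveout times (linear interpolation).
            stacked += np.interp(tx, t0, trace, right=0.0)
        return stacked / n_traces                       # noise attenuates ~ 1/sqrt(N)
    ```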

  18. Fingerprint recognition using image processing

    NASA Astrophysics Data System (ADS)

    Dholay, Surekha; Mishra, Akassh A.

    2011-06-01

    Fingerprint recognition is concerned with the difficult task of efficiently matching the image of a person's fingerprint against the fingerprints stored in a database. It is used in forensic science to help identify criminals, and in the authentication of a particular person, since a fingerprint is unique to an individual. This paper describes fingerprint recognition methods using various edge detection techniques, and shows how a fingerprint can be correctly detected from camera images. The described method does not require a special device: a simple camera can be used, so the technique is also applicable to camera-equipped mobile phones. Factors affecting the process include poor illumination, noise, viewpoint dependence, climate factors, and imaging conditions. Because these factors must be accounted for, various image enhancement techniques are applied to increase image quality and remove noise. The paper describes a technique that applies contour tracking to the fingerprint image, performs edge detection on the contour, and then matches the edges inside the contour.

  19. Using image processing techniques on proximity probe signals in rotordynamics

    NASA Astrophysics Data System (ADS)

    Diamond, Dawie; Heyns, Stephan; Oberholster, Abrie

    2016-06-01

    This paper proposes a new approach to processing proximity probe signals in rotordynamic applications. It is argued that the signal can be interpreted as a one-dimensional image, so that existing image processing techniques can be used to gain information about the object being measured. Results from one application are presented: rotor blade tip deflections can be calculated by localizing phase information in this one-dimensional image. It is shown experimentally that the newly proposed method performs more accurately than standard techniques, especially where the sampling rate of the data acquisition system is inadequate by conventional standards.

  20. Linear Algebra and Image Processing

    ERIC Educational Resources Information Center

    Allali, Mohamed

    2010-01-01

    We use the computing technology digital image processing (DIP) to enhance the teaching of linear algebra so as to make the course more visual and interesting. Certainly, this visual approach by using technology to link linear algebra to DIP is interesting and unexpected to both students as well as many faculty. (Contains 2 tables and 11 figures.)

  1. Linear algebra and image processing

    NASA Astrophysics Data System (ADS)

    Allali, Mohamed

    2010-09-01

    We use the computing technology digital image processing (DIP) to enhance the teaching of linear algebra so as to make the course more visual and interesting. Certainly, this visual approach by using technology to link linear algebra to DIP is interesting and unexpected to both students as well as many faculty.

  2. Concept Learning through Image Processing.

    ERIC Educational Resources Information Center

    Cifuentes, Lauren; Yi-Chuan, Jane Hsieh

    This study explored computer-based image processing as a study strategy for middle school students' science concept learning. Specifically, the research examined the effects of computer graphics generation on science concept learning and the impact of using computer graphics to show interrelationships among concepts during study time. The 87…

  3. Performance of reduced bit-depth acquisition for optical frequency domain imaging.

    PubMed

    Goldberg, Brian D; Vakoc, Benjamin J; Oh, Wang-Yuhl; Suter, Melissa J; Waxman, Sergio; Freilich, Mark I; Bouma, Brett E; Tearney, Guillermo J

    2009-09-14

    High-speed optical frequency domain imaging (OFDI) has enabled practical wide-field microscopic imaging in the biological laboratory and clinical medicine. The imaging speed of OFDI, and therefore the field of view, of current systems is limited by the rate at which data can be digitized and archived, rather than by system sensitivity or laser performance. One solution to this bottleneck is to natively digitize OFDI signals at reduced bit depths, e.g., at 8-bit depth rather than the conventional 12-14 bit depth, thereby reducing overall bandwidth. However, the implications of reduced bit-depth acquisition on image quality have not been studied. In this paper, we use simulations and empirical studies to evaluate the effects of reduced bit-depth acquisition on OFDI image quality. We show that image acquisition at 8-bit depth allows high system sensitivity with only a minimal drop in the signal-to-noise ratio compared to higher bit-depth systems. Images of a human coronary artery acquired in vivo at 8-bit depth are presented and compared with images at higher bit-depth acquisition.
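
    The underlying trade-off, how many bits are needed before quantization noise erodes SNR, can be explored numerically. A minimal sketch quantizing a synthetic fringe signal at different bit depths follows; the signal, noise level, and full-scale range are hypothetical, not the paper's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 8192
    clean = 0.4 * np.cos(2 * np.pi * 0.05 * np.arange(n))   # synthetic OFDI fringe
    noisy = clean + 0.01 * rng.standard_normal(n)            # detector noise

    def quantize(x, bits, full_scale=1.0):
        """Mid-tread uniform quantizer spanning [-full_scale, +full_scale]."""
        step = 2.0 * full_scale / (2 ** bits)
        return np.clip(np.round(x / step) * step, -full_scale, full_scale - step)

    for bits in (8, 12, 14):
        q = quantize(noisy, bits)
        noise = q - clean                                    # detector + quantization noise
        snr_db = 10 * np.log10(np.mean(clean ** 2) / np.mean(noise ** 2))
        print(f"{bits}-bit acquisition: SNR = {snr_db:.1f} dB")
    ```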

  4. Image processing applications in NDE

    SciTech Connect

    Morris, R.A.

    1980-01-01

    Nondestructive examination (NDE) can be defined as a technique or collection of techniques that permits one to determine some property of a material or object without damaging the object. There are a large number of such techniques, and most of them use visual imaging in one form or another. They vary from holographic interferometry, where displacements under stress are measured, to the visual inspection of an object's surface to detect cracks after penetrant has been applied. The use of image processing techniques on the images produced by NDE is relatively new and can be divided into three general categories: classical image enhancement, mensuration techniques, and quantitative sensitometry. An example is discussed of how image processing techniques are used to nondestructively and destructively test a product throughout its life cycle. The product followed is the microballoon target used in the laser fusion program. The laser target is a small (50- to 100-µm diameter) glass sphere with a typical wall thickness of 0.5 to 6 µm. The sphere may be used as is or may be given a number of coatings of any number of materials. The beads are mass produced by the millions, and the first nondestructive test is to separate the obviously bad beads (broken or incomplete) from the good ones. After this has been done, the good beads must be inspected for sphericity and wall-thickness uniformity. The microradiography of the uncoated glass bead is performed on a specially designed low-energy x-ray machine. The beads are mounted in a special jig and placed on a Kodak high-resolution plate in a vacuum chamber that contains the x-ray source. The x-ray image is made with an energy of less than 2 keV, and the resulting images are then inspected at a magnification of 500 to 1000X. Some typical results are presented.

  5. Breast image pre-processing for mammographic tissue segmentation.

    PubMed

    He, Wenda; Hogg, Peter; Juette, Arne; Denton, Erika R E; Zwiggelaar, Reyer

    2015-12-01

    During mammographic image acquisition, a compression paddle is used to even the breast thickness in order to obtain optimal image quality. Clinical observation has indicated that some mammograms may exhibit abrupt intensity change and low visibility of tissue structures in the breast peripheral areas. Such appearance discrepancies can affect image interpretation and may not be desirable for computer-aided mammography, leading to incorrect diagnosis and/or detection, which can have a negative impact on the sensitivity and specificity of screening mammography. This paper describes a novel mammographic image pre-processing method to improve image quality for analysis. An image selection process is incorporated to better target problematic images. The processed images show improved mammographic appearance not only in the breast periphery but across the whole mammogram. Mammographic segmentation and risk/density classification were performed to facilitate a quantitative and qualitative evaluation. When using the processed images, the results indicated more anatomically correct segmentation in tissue-specific areas, and subsequently better classification accuracies were achieved. Visual assessments were conducted in a clinical environment to determine the quality of the processed images and the resultant segmentation. The developed method has shown promising results. It is expected to be useful in early breast cancer detection, risk-stratified screening, and aiding radiologists in the process of decision making prior to surgery and/or treatment.

  6. Acquisition and evaluation of radiography images by digital camera.

    PubMed

    Cone, Stephen W; Carucci, Laura R; Yu, Jinxing; Rafiq, Azhar; Doarn, Charles R; Merrell, Ronald C

    2005-04-01

    To determine applicability of low-cost digital imaging for different radiographic modalities used in consultations from remote areas of the Ecuadorian rainforest with limited resources, both medical and financial. Low-cost digital imaging, consisting of hand-held digital cameras, was used for image capture at a remote location. Diagnostic radiographic images were captured in Ecuador by digital camera and transmitted to a password-protected File Transfer Protocol (FTP) server at VCU Medical Center in Richmond, Virginia, using standard Internet connectivity with standard security. After capture and subsequent transfer of images via low-bandwidth Internet connections, attending radiologists in the United States compared diagnoses to those from Ecuador to evaluate quality of image transfer. Corroborative diagnoses were obtained with the digital camera images for greater than 90% of the plain film and computed tomography studies. Ultrasound (U/S) studies demonstrated only 56% corroboration. Images of radiographs captured utilizing commercially available digital cameras can provide quality sufficient for expert consultation for many plain film studies for remote, underserved areas without access to advanced modalities.

  7. Constrained acquisition of ink spreading curves from printed color images.

    PubMed

    Bugnon, Thomas; Hersch, Roger D

    2011-02-01

    Today's spectral reflection prediction models are able to predict the reflection spectra of printed color images with an accuracy as high as the reproduction variability allows. However, to calibrate such models, special uniform calibration patches need to be printed. These calibration patches use space and have to be removed from the final product. The present contribution shows how to deduce the ink spreading behavior of the color halftones from spectral reflectances acquired within printed color images. Image tiles of a color as uniform as possible are selected within the printed images. The ink spreading behavior is fitted by relying on the spectral reflectances of the selected image tiles. A relevance metric specifies the impact of each ink spreading curve on the selected image tiles. These relevance metrics are used to constrain the corresponding ink spreading curves. Experiments performed on an inkjet printer demonstrate that the new constraint-based calibration of the spectral reflection prediction model performs well when predicting color halftones significantly different from the selected image tiles. For some prints, the proposed image based model calibration is more accurate than a classical calibration.

  8. Method and apparatus for high speed data acquisition and processing

    DOEpatents

    Ferron, John R.

    1997-01-01

    A method and apparatus for high speed digital data acquisition. The apparatus includes one or more multiplexers for receiving multiple channels of digital data at a low data rate and asserting a multiplexed data stream at a high data rate, and one or more FIFO memories for receiving data from the multiplexers and asserting the data to a real time processor. Preferably, the invention includes two multiplexers, two FIFO memories, and a 64-bit bus connecting the FIFO memories with the processor. Each multiplexer receives four channels of 14-bit digital data at a rate of up to 5 MHz per channel, and outputs a data stream to one of the FIFO memories at a rate of 20 MHz. The FIFO memories assert output data in parallel to the 64-bit bus, thus transferring 14-bit data values to the processor at a combined rate of 40 MHz. The real time processor is preferably a floating-point processor which processes 32-bit floating-point words. A set of mask bits is prestored in each 32-bit storage location of the processor memory into which a 14-bit data value is to be written. After data transfer from the FIFO memories, mask bits are concatenated with each stored 14-bit data value to define a valid 32-bit floating-point word. Preferably, a user can select any of several modes for starting and stopping direct memory transfers of data from the FIFO memories to memory within the real time processor, by setting the content of a control and status register.

  9. Method and apparatus for high speed data acquisition and processing

    DOEpatents

    Ferron, J.R.

    1997-02-11

    A method and apparatus are disclosed for high speed digital data acquisition. The apparatus includes one or more multiplexers for receiving multiple channels of digital data at a low data rate and asserting a multiplexed data stream at a high data rate, and one or more FIFO memories for receiving data from the multiplexers and asserting the data to a real time processor. Preferably, the invention includes two multiplexers, two FIFO memories, and a 64-bit bus connecting the FIFO memories with the processor. Each multiplexer receives four channels of 14-bit digital data at a rate of up to 5 MHz per channel, and outputs a data stream to one of the FIFO memories at a rate of 20 MHz. The FIFO memories assert output data in parallel to the 64-bit bus, thus transferring 14-bit data values to the processor at a combined rate of 40 MHz. The real time processor is preferably a floating-point processor which processes 32-bit floating-point words. A set of mask bits is prestored in each 32-bit storage location of the processor memory into which a 14-bit data value is to be written. After data transfer from the FIFO memories, mask bits are concatenated with each stored 14-bit data value to define a valid 32-bit floating-point word. Preferably, a user can select any of several modes for starting and stopping direct memory transfers of data from the FIFO memories to memory within the real time processor, by setting the content of a control and status register. 15 figs.
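
    The mask-bit scheme described in both records above, prestoring bits in each 32-bit memory location so that a raw 14-bit sample dropped into it becomes a valid IEEE-754 float, can be illustrated in software. The sketch below shows one plausible reading of that trick, assuming the prestored mask sets the exponent so the mantissa's low bits hold the integer value; the patent's exact bit layout is not reproduced here.

    ```python
    import numpy as np

    # Prestored mask: the exponent bits of 2**23, i.e. the float 8388608.0.
    # OR-ing a value n < 2**23 into the mantissa yields the valid float
    # 8388608.0 + n, so the 14-bit sample is recovered with one subtraction.
    MASK = np.uint32(0x4B000000)

    def words_from_samples(samples14):
        """Combine prestored mask bits with raw 14-bit samples to form
        valid 32-bit floating-point words."""
        words = samples14.astype(np.uint32) | MASK
        return words.view(np.float32)

    samples = np.array([0, 1, 9371, 16383], dtype=np.uint16)   # 14-bit range
    floats = words_from_samples(samples)
    print(floats - 8388608.0)          # -> [0. 1. 9371. 16383.]
    ```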

  10. Quantum dot imaging in the second near-infrared optical window: studies on reflectance fluorescence imaging depths by effective fluence rate and multiple image acquisition

    NASA Astrophysics Data System (ADS)

    Jung, Yebin; Jeong, Sanghwa; Nayoun, Won; Ahn, Boeun; Kwag, Jungheon; Geol Kim, Sang; Kim, Sungjee

    2015-04-01

    Quantum dot (QD) imaging capability was investigated in terms of imaging depth in the near-infrared second optical window (SOW; 1000 to 1400 nm), using time-modulated pulsed laser excitation to control the effective fluence rate. Various media, such as liquid phantoms, tissues, and in vivo small animals, were used, and the imaging depths were compared with our predicted values. The QD imaging depth under excitation with a continuous 20 mW/cm2 laser was determined to be 10.3 mm for a 2 wt% hemoglobin phantom medium and 5.85 mm for a 1 wt% intralipid phantom; both were extended by more than a factor of two on increasing the effective fluence rate to 2000 mW/cm2. Bovine liver and porcine skin tissues showed similar enhancement in contrast-to-noise ratio (CNR) values. A QD sample was inserted into the abdomen of a mouse. With a higher effective fluence rate, the CNR increased more than twofold, and the QD sample, completely undetectable under continuous excitation, became clearly visible. Multiple acquisitions of QD images followed by pixel-by-pixel averaging were performed to overcome detector thermal noise in the SOW, which yielded significant enhancement in imaging capability, showing up to a 1.5-fold increase in CNR.
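
    The multiple-acquisition step works because pixel-by-pixel averaging of N frames suppresses uncorrelated detector noise by roughly sqrt(N), raising the CNR. A minimal sketch with a synthetic image stack; all values are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_frames, h, w = 16, 64, 64
    truth = np.zeros((h, w))
    truth[24:40, 24:40] = 10.0                 # QD region against dark background

    # Stack of noisy acquisitions (thermal detector noise, sigma = 5).
    stack = truth + 5.0 * rng.standard_normal((n_frames, h, w))

    def cnr(img, sig=(slice(24, 40), slice(24, 40)), bg=(slice(0, 16), slice(0, 16))):
        """Contrast-to-noise ratio between a signal ROI and a background ROI."""
        return (img[sig].mean() - img[bg].mean()) / img[bg].std()

    print("single frame CNR:", cnr(stack[0]))
    print("16-frame average CNR:", cnr(stack.mean(axis=0)))   # ~4x improvement
    ```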

  11. Reengineering the Acquisition/Procurement Process: A Methodology for Requirements Collection

    NASA Technical Reports Server (NTRS)

    Taylor, Randall; Vanek, Thomas

    2011-01-01

    This paper captures the systematic approach taken by JPL's Acquisition Reengineering Project team, the methodology used, challenges faced, and lessons learned. It provides pragmatic "how-to" techniques and tools for collecting requirements and for identifying areas of improvement in an acquisition/procurement process or other core process of interest.

  12. Effect of temporal acquisition parameters on image quality of strain time constant elastography.

    PubMed

    Nair, Sanjay; Varghese, Joshua; Chaudhry, Anuj; Righetti, Raffaella

    2015-04-01

    Ultrasound methods to image the time constant (TC) of elastographic tissue parameters have recently been developed. Elastographic TC images from creep or stress-relaxation tests have been shown to provide information on the viscoelastic and poroelastic behavior of tissues. However, the effect of temporal ultrasonic acquisition parameters and input noise on the image quality of the resultant strain TC elastograms has not yet been fully investigated. Understanding such effects could have important implications for clinical applications of these novel techniques. This work reports a simulation study aimed at investigating the effects of varying windows of observation, acquisition frame rate, and strain signal-to-noise ratio (SNR) on the image quality of elastographic TC estimates. A pilot experimental study was used to corroborate the simulation results in specific testing conditions. The results of this work suggest that the total acquisition time necessary for accurate strain TC estimates depends linearly on the underlying strain TC (as estimated from the theoretical strain-vs.-time curve). The results also indicate that it might be possible to make accurate estimates of the elastographic TC (within 10% error) using windows of observation as small as 20% of the underlying TC, provided sufficiently fast acquisition rates (>100 Hz for typical acquisition depths). The limited experimental data reported in this study statistically confirm the simulation trends, indicating that the proposed model can be used as an upper-bound guide for the correct execution of the experiments.
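
    The estimation problem studied here, recovering a time constant from a truncated creep curve, can be sketched as a least-squares fit of strain(t) = A(1 - exp(-t/TC)) over a short observation window. A minimal illustration with hypothetical parameters (not the paper's simulation setup):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def creep(t, amplitude, tc):
        """Single-exponential creep response."""
        return amplitude * (1.0 - np.exp(-t / tc))

    true_tc, frame_rate = 2.0, 200.0              # s, Hz (hypothetical)
    window = 0.2 * true_tc                        # observe only 20% of one TC
    t = np.arange(0.0, window, 1.0 / frame_rate)

    rng = np.random.default_rng(2)
    strain = creep(t, 1.0, true_tc) + 0.001 * rng.standard_normal(t.size)

    (amp_est, tc_est), _ = curve_fit(creep, t, strain, p0=(0.5, 1.0))
    print(f"estimated TC = {tc_est:.2f} s (true {true_tc} s)")
    ```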

  13. Optimization of image acquisition techniques for dual-energy imaging of the chest

    SciTech Connect

    Shkumat, N. A.; Siewerdsen, J. H.; Dhanantwari, A. C.; Williams, D. B.; Richard, S.; Paul, N. S.; Yorkston, J.; Van Metter, R.

    2007-10-15

    Experimental and theoretical studies were conducted to determine optimal acquisition techniques for a prototype dual-energy (DE) chest imaging system. Technique factors investigated included the selection of added x-ray filtration, kVp pair, and the allocation of dose between low- and high-energy projections, with total dose equal to or less than that of a conventional chest radiograph. Optima were computed to maximize lung nodule detectability as characterized by the signal-difference-to-noise ratio (SDNR) in DE chest images. Optimal beam filtration was determined by cascaded systems analysis of DE image SDNR for filter selections across the periodic table (Z_filter = 1-92), demonstrating the importance of differential filtration between low- and high-kVp projections and suggesting optimal high-kVp filters in the range Z_filter = 25-50. For example, added filtration of ~2.1 mm Cu, ~1.2 mm Zr, ~0.7 mm Mo, and ~0.6 mm Ag to the high-kVp beam provided optimal (and nearly equivalent) soft-tissue SDNR. Optimal kVp pair and dose allocation were investigated using a chest phantom presenting simulated lung nodules and ribs for thin, average, and thick body habitus. Low- and high-energy techniques ranged from 60-90 kVp and 120-150 kVp, respectively, with peak soft-tissue SDNR achieved at [60/120] kVp for all patient thicknesses and all levels of imaging dose. A strong dependence on the kVp of the low-energy projection was observed. Optimal allocation of dose between low- and high-energy projections was such that ~30% of the total dose was delivered by the low-kVp projection, exhibiting a fairly weak dependence on kVp pair and dose. The results have guided the implementation of a prototype DE imaging system for imaging trials in early-stage lung nodule detection and diagnosis.
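
    Dual-energy chest imaging of this kind forms a tissue-selective image by weighted logarithmic subtraction of the low- and high-kVp projections: one material cancels when the weight matches its attenuation ratio. A minimal sketch with synthetic transmission images; the attenuation coefficients, thicknesses, and geometry are hypothetical, not the prototype's parameters.

    ```python
    import numpy as np

    def weighted_log_subtraction(low_kvp_img, high_kvp_img, w):
        """Tissue-selective DE image: S = ln(I_high) - w * ln(I_low);
        w is chosen to cancel one material (here, soft tissue)."""
        return np.log(high_kvp_img) - w * np.log(low_kvp_img)

    # Synthetic attenuation: soft tissue everywhere, plus a rib band.
    mu = {"soft": (0.30, 0.20), "bone": (0.60, 0.30)}    # (mu_low, mu_high), 1/cm
    thick_soft = np.full((64, 64), 20.0)                  # cm of soft tissue
    thick_bone = np.zeros((64, 64))
    thick_bone[20:28, :] = 1.0                            # a 1-cm-thick rib

    I_low = np.exp(-(mu["soft"][0] * thick_soft + mu["bone"][0] * thick_bone))
    I_high = np.exp(-(mu["soft"][1] * thick_soft + mu["bone"][1] * thick_bone))

    w_cancel_soft = mu["soft"][1] / mu["soft"][0]         # attenuation ratio
    bone_image = weighted_log_subtraction(I_low, I_high, w_cancel_soft)
    # The soft-tissue term vanishes; remaining contrast is bone thickness only.
    ```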

  14. Developmental Stages in Receptive Grammar Acquisition: A Processability Theory Account

    ERIC Educational Resources Information Center

    Buyl, Aafke; Housen, Alex

    2015-01-01

    This study takes a new look at the topic of developmental stages in the second language (L2) acquisition of morphosyntax by analysing receptive learner data, a language mode that has hitherto received very little attention within this strand of research (for a recent and rare study, see Spinner, 2013). Looking at both the receptive and productive…

  15. Cognitive Skill Acquisition through a Meta-Knowledge Processing Model.

    ERIC Educational Resources Information Center

    McKay, Elspeth

    2002-01-01

    The purpose of this paper is to reopen the discourse on cognitive skill acquisition to focus on the interactive effect of differences in cognitive construct and instructional format. Reports an examination of the contextual issues involved in understanding the interactivity of instructional conditions and cognitive style as a meta-knowledge…

  16. Metadiscursive Processes in the Acquisition of a Second Language.

    ERIC Educational Resources Information Center

    Giacomi, Alain; Vion, Robert

    1986-01-01

    The acquisition of narrative competence in French by an Arabic-speaking migrant worker in interactions with target language speakers was explored, with hypotheses formed about the polyfunctional uses of certain forms to mark the chronology of events in the narrative or to introduce quoted speech. (Author/CB)

  17. Seismic acquisition and processing methodologies in overthrust areas: Some examples from Latin America

    SciTech Connect

    Tilander, N.G.; Mitchel, R.

    1996-08-01

    Overthrust areas represent some of the last frontiers in petroleum exploration today. Billion-barrel discoveries in the Eastern Cordillera of Colombia and the Monagas fold-thrust belt of Venezuela during the past decade have highlighted the potential rewards of overthrust exploration. However, the seismic data recorded in many overthrust areas are disappointingly poor. Challenges such as rough topography, complex subsurface structure, the presence of high-velocity rocks at the surface, back-scattered energy, and severe migration wavefronting continue to lower data quality and reduce interpretability. Lack of well/velocity control also reduces the reliability of depth estimations and migrated images. Failure to obtain satisfactory pre-drill structural images can easily result in costly wildcat failures. Advances in the methodologies used by Chevron for data acquisition, processing, and interpretation have produced significant improvements in seismic data quality in Bolivia, Colombia, and Trinidad. In this paper, seismic test results showing various swath geometries will be presented. We will also show recent examples of processing methods that have led to improved structural imaging. Rather than focusing on "black box" methodology, we will emphasize the cumulative effect of step-by-step improvements. Finally, the critical significance and interrelation of velocity measurements, modeling, and depth migration will be explored. Pre-drill interpretations must ultimately encompass a variety of model solutions, and error bars should be established which realistically reflect the uncertainties in the data.

  18. Need for image processing in infrared camera design

    NASA Astrophysics Data System (ADS)

    Allred, Lloyd G.; Jones, Martin H.

    2000-03-01

    While the value of image processing has long been recognized, it is usually applied during post-processing. For scientific applications, the presence of large noise errors, data dropout, and dead sensors would invalidate any conclusion drawn from the data until noise removal and sensor calibration have been accomplished. With the growing need for ruggedized, real-time image acquisition systems, including automotive and aerospace applications, post-processing may not be an option. With post-processing, the operator does not have the opportunity to view the cleaned-up image. Focal plane arrays are plagued by bad sensors, high manufacturing costs, and low yields, often forcing a six-digit price tag. Perhaps infrared camera design is too serious an issue to leave to the camera manufacturers. Alternative camera designs using a single spinning mirror can yield perfect infrared images at rates up to 12,000 frames per second using a fraction of the hardware in current focal-plane arrays. With a 768 × 5 sensor array, each row of the array produces a redundant 2048 × 768 image. Sensor arrays with flawed sensors would no longer need to be discarded, because data from dead sensors can be dropped, thus increasing manufacturing yields and reducing manufacturing costs. Furthermore, very fast image processing chips are available, allowing for real-time morphological image processing (including real-time sensor calibration), thus significantly increasing thermal precision and making thermal imaging amenable to a wider variety of applications.

  19. Troubleshooting digital macro photography for image acquisition and the analysis of biological samples.

    PubMed

    Liepinsh, Edgars; Kuka, Janis; Dambrova, Maija

    2013-01-01

    For years, image acquisition and analysis have been an important part of life science experiments to ensure the adequate and reliable presentation of research results. Since the development of digital photography and digital planimetric methods for image analysis approximately 20 years ago, new equipment and technologies have emerged, which have increased the quality of image acquisition and analysis. Different techniques are available to measure the size of stained tissue samples in experimental animal models of disease; however, the most accurate method is digital macro photography with software that is based on planimetric analysis. In this study, we described the methodology for the preparation of infarcted rat heart and brain tissue samples before image acquisition, digital macro photography techniques and planimetric image analysis. These methods are useful in the macro photography of biological samples and subsequent image analysis. In addition, the techniques that are described in this study include the automated analysis of digital photographs to minimize user input and exclude the risk of researcher-generated errors or bias during image analysis.

  20. Noise-compensated homotopic non-local regularized reconstruction for rapid retinal optical coherence tomography image acquisitions

    PubMed Central

    2014-01-01

    Background: Optical coherence tomography (OCT) is a minimally invasive imaging technique which utilizes the spatial and temporal coherence properties of optical waves backscattered from biological material. Recent advances in tunable lasers and infrared camera technologies have enabled an increase in OCT imaging speed by a factor of more than 100, which is important for retinal imaging, where we wish to study fast physiological processes in the biological tissue. However, the high scanning rate causes a proportional decrease of the detector exposure time, resulting in a reduction of the system signal-to-noise ratio (SNR). One approach to improving the image quality of OCT tomograms acquired at high speed is to compensate for the noise component in the images without compromising the sharpness of the image details. Methods: In this study, we propose a novel reconstruction method for rapid OCT image acquisitions, based on a noise-compensated homotopic modified James-Stein non-local regularized optimization strategy. The performance of the algorithm was tested on a series of high-resolution OCT images of the human retina acquired at different imaging rates. Results: Quantitative analysis was used to evaluate the performance of the algorithm against two state-of-the-art denoising strategies. The results demonstrate significant SNR improvements when using the proposed approach compared to the other approaches. Conclusions: A new reconstruction method based on a noise-compensated homotopic modified James-Stein non-local regularized optimization strategy was developed for the purpose of improving the quality of rapid OCT image acquisitions. Preliminary results show that the proposed method holds considerable promise as a tool to improve the visualization and analysis of biological material using OCT. PMID:25319186

  1. Multispectral Image Processing for Plants

    NASA Technical Reports Server (NTRS)

    Miles, Gaines E.

    1991-01-01

    The development of a machine vision system to monitor plant growth and health is one of three essential steps towards establishing an intelligent system capable of accurately assessing the state of a controlled ecological life support system for long-term space travel. Besides a network of sensors, simulators are needed to predict plant features, and artificial intelligence algorithms are needed to determine the state of a plant based life support system. Multispectral machine vision and image processing can be used to sense plant features, including health and nutritional status.

  2. Calibration of a flood inundation model using a SAR image: influence of acquisition time

    NASA Astrophysics Data System (ADS)

    Van Wesemael, Alexandra; Gobeyn, Sacha; Neal, Jeffrey; Lievens, Hans; Van Eerdenbrugh, Katrien; De Vleeschouwer, Niels; Schumann, Guy; Vernieuwe, Hilde; Di Baldassarre, Giuliano; De Baets, Bernard; Bates, Paul; Verhoest, Niko

    2016-04-01

    Flood risk management has long been searching for effective prediction approaches, and the calibration of flood inundation models is continuously being improved. In practice, this calibration process consists of finding the optimal roughness parameters, both channel and floodplain Manning coefficients, since these values considerably influence the flood extent in a catchment. Synthetic Aperture Radar (SAR) images have proven to be a very useful tool in calibrating the flood extent: these images can distinguish between wet (flooded) and dry (non-flooded) pixels through the intensity of backscattered radio waves. To date, however, a satellite overpass often occurs only once during a flood event. This study is therefore specifically concerned with the effect of the timing of the SAR data acquisition on calibration results. To model the flood extent, the raster-based inundation model LISFLOOD-FP is used together with a high-resolution synthetic aperture radar image (ERS-2 SAR) of a flood event of the river Dee, Wales, in December 2006. As only one satellite image of the considered case study is available, a synthetic framework is implemented in order to generate a time series of SAR observations. These synthetic observations are then used to calibrate the model at different time instants, and the sensitivity of the model output to the channel and floodplain Manning coefficients is studied through time. The results suggest a clear difference in the spatial variability with which water is held within the floodplain, and these differences appear to vary through time. Calibration by means of satellite flood observations obtained from the rising or receding limb would generally lead to more reliable results than calibration against near-peak-flow observations.

  3. MR imaging of ore for heap bioleaching studies using pure phase encode acquisition methods

    NASA Astrophysics Data System (ADS)

    Fagan, Marijke A.; Sederman, Andrew J.; Johns, Michael L.

    2012-03-01

    Various MRI techniques were considered with respect to imaging of aqueous flow fields in low-grade copper ore. Spin-echo frequency-encoded techniques were shown to produce unacceptable image distortions, which led to pure phase-encoded techniques being considered. Single point imaging multiple point acquisition (SPI-MPA) and spin echo single point imaging (SESPI) techniques were applied. By direct comparison with X-ray tomographic images, both techniques were found to produce distortion-free images of the ore packings at 2 T. The signal-to-noise ratios (SNRs) of the SESPI images were found to be superior to SPI-MPA for equal total acquisition times; this was explained on the basis of NMR relaxation measurements. SESPI was also found to produce suitable images for a range of particle sizes, whereas the SPI-MPA SNR deteriorated markedly as particle size was reduced. Comparisons on a 4.7 T magnet showed significant signal loss from the SPI-MPA images, an effect accentuated in the case of unsaturated flowing systems. Hence it was concluded that SESPI is the most robust imaging method for the study of copper ore heap leaching hydrology.

  4. 2D imaging and 3D sensing data acquisition and mutual registration for painting conservation

    NASA Astrophysics Data System (ADS)

    Fontana, Raffaella; Gambino, Maria Chiara; Greco, Marinella; Marras, Luciano; Pampaloni, Enrico M.; Pelagotti, Anna; Pezzati, Luca; Poggi, Pasquale

    2005-01-01

    We describe the application of 2D and 3D data acquisition and mutual registration to the conservation of paintings. RGB color image acquisition, IR and UV fluorescence imaging, together with the more recent hyperspectral imaging (32 bands) are among the most useful techniques in this field. They generally are meant to provide information on the painting materials, on the employed techniques and on the object state of conservation. However, only when the various images are perfectly registered on each other and on the 3D model, no ambiguity is possible and safe conclusions may be drawn. We present the integration of 2D and 3D measurements carried out on two different paintings: "Madonna of the Yarnwinder" by Leonardo da Vinci, and "Portrait of Lionello d'Este", by Pisanello, both painted in the XV century.

  5. 2D imaging and 3D sensing data acquisition and mutual registration for painting conservation

    NASA Astrophysics Data System (ADS)

    Fontana, Raffaella; Gambino, Maria Chiara; Greco, Marinella; Marras, Luciano; Pampaloni, Enrico M.; Pelagotti, Anna; Pezzati, Luca; Poggi, Pasquale

    2004-12-01

    We describe the application of 2D and 3D data acquisition and mutual registration to the conservation of paintings. RGB color image acquisition, IR and UV fluorescence imaging, together with the more recent hyperspectral imaging (32 bands) are among the most useful techniques in this field. They generally are meant to provide information on the painting materials, on the employed techniques and on the object state of conservation. However, only when the various images are perfectly registered on each other and on the 3D model, no ambiguity is possible and safe conclusions may be drawn. We present the integration of 2D and 3D measurements carried out on two different paintings: "Madonna of the Yarnwinder" by Leonardo da Vinci, and "Portrait of Lionello d'Este", by Pisanello, both painted in the XV century.

  6. Contrast medium administration and image acquisition parameters in renal CT angiography: what radiologists need to know

    PubMed Central

    Saade, Charbel; Deeb, Ibrahim Alsheikh; Mohamad, Maha; Al-Mohiy, Hussain; El-Merhi, Fadi

    2016-01-01

    Over the last decade, exponential advances in computed tomography (CT) technology have resulted in improved spatial and temporal resolution. Faster image acquisition has enabled renal CT angiography to become a viable and effective noninvasive alternative in diagnosing renal vascular pathologies. However, with these advances, new challenges in contrast media administration have emerged. Poor synchronization between the scanner and contrast media administration has reduced the consistency in image quality, with poor spatial and contrast resolution. A comprehensive understanding of contrast media dynamics is essential in the design and implementation of contrast administration and image acquisition protocols. This review includes an overview of the parameters affecting renal artery opacification and current protocol strategies to achieve optimal image quality during renal CT angiography with iodinated contrast media, with current safety issues highlighted. PMID:26728701

  7. Development and application of a high speed digital data acquisition technique to study steam bubble collapse using particle image velocimetry

    SciTech Connect

    Schmidl, W.D.

    1992-08-01

    The use of a Particle Image Velocimetry (PIV) method, which uses digital cameras for data acquisition, for studying high-speed fluid flows is usually limited by the digital camera's frame acquisition rate. The velocity of the fluid under study has to be limited to ensure that the tracer seeds suspended in the fluid remain in the camera's focal plane for at least two consecutive images. However, the use of digital cameras for data acquisition is desirable to simplify and expedite the data analysis process. A technique was developed to measure fluid velocities with PIV techniques using two successive digital images and two different framing rates simultaneously. The first part of the method measures changes occurring in the flow field at the relatively slow framing rate of 53.8 ms. The second part measures changes to the same flow field at the relatively fast framing rate of 100 to 320 µs. The effectiveness of this technique was tested by studying the collapse of steam bubbles in a subcooled tank of water, a relatively high-speed phenomenon. The tracer particles were recorded, and velocity vectors for the fluid were obtained far from the steam bubble collapse.

  8. Development and application of a high speed digital data acquisition technique to study steam bubble collapse using particle image velocimetry

    SciTech Connect

    Schmidl, W.D.

    1992-08-01

    The use of a Particle Image Velocimetry (PIV) method, which uses digital cameras for data acquisition, for studying high-speed fluid flows is usually limited by the digital camera's frame acquisition rate. The velocity of the fluid under study has to be limited to ensure that the tracer seeds suspended in the fluid remain in the camera's focal plane for at least two consecutive images. However, the use of digital cameras for data acquisition is desirable to simplify and expedite the data analysis process. A technique was developed to measure fluid velocities with PIV techniques using two successive digital images and two different framing rates simultaneously. The first part of the method measures changes occurring in the flow field at the relatively slow framing rate of 53.8 ms. The second part measures changes to the same flow field at the relatively fast framing rate of 100 to 320 µs. The effectiveness of this technique was tested by studying the collapse of steam bubbles in a subcooled tank of water, a relatively high-speed phenomenon. The tracer particles were recorded, and velocity vectors for the fluid were obtained far from the steam bubble collapse.

  9. Variability in Fluoroscopic Image Acquisition During Operative Fixation of Ankle Fractures.

    PubMed

    Harris, Dorothy Y; Lindsey, Ronald W

    2015-10-01

    The goal of this study was to determine whether injury, level of surgeon training, and patient factors are associated with increased use of fluoroscopy during open reduction and internal fixation of ankle fractures. These relationships are not well defined. The study was a retrospective chart review of patients treated at an academic institution with primary open reduction and internal fixation of an ankle fracture. Patient demographics, including sex, age, and body mass index, were collected, as was surgeon year of training (residency and fellowship). Image acquisition data included total number of images, total imaging time, and cumulative dose. Ankle fractures were classified according to the Weber and Lauge-Hansen classifications and the number of fixation points. Bivariate analysis and multiple regression models were used to predict increasing fluoroscopic image acquisition. Alpha was set at 0.05. Of 158 patients identified, 58 were excluded. After bivariate analysis, fracture complexity and year of training showed a significant correlation with increasing image acquisition. After multiple regression analysis, fracture complexity and year of training remained statistically significant and were independent predictors of increased image acquisition. Increasing fracture complexity resulted in 20 additional images, 16 additional seconds, and an increase in radiation of 0.7 mGy. Increasing year of training resulted in an additional 6 images and an increase of 0.35 mGy in cumulative dose. The findings suggest that protocols to educate trainee surgeons in minimizing the use of fluoroscopy would be beneficial at all levels of training and should target multiple fracture patterns.

  10. Concurrent Image Processing Executive (CIPE)

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Cooper, Gregory T.; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.

    1988-01-01

    The design and implementation of a Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high performance science analysis workstation, are discussed. The target machine for this software is a JPL/Caltech Mark IIIfp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3 or Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local-memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules: (1) user interface, (2) host-resident executive, (3) hypercube-resident executive, and (4) application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube, a data management method which distributes, redistributes, and tracks data set information was implemented.

  11. HASTE sequence with parallel acquisition and T2 decay compensation: application to carotid artery imaging.

    PubMed

    Zhang, Ling; Kholmovski, Eugene G; Guo, Junyu; Choi, Seong-Eun Kim; Morrell, Glen R; Parker, Dennis L

    2009-01-01

    T2-weighted carotid artery images acquired using the turbo spin-echo (TSE) sequence frequently suffer from motion artifacts due to respiration and blood pulsation. The possibility of using the HASTE sequence to achieve motion-free carotid images was investigated. The HASTE sequence suffers from severe blurring artifacts caused by signal loss in later echoes from T2 decay. Combining HASTE with parallel acquisition (PHASTE) decreases the number of echoes acquired and thus effectively reduces the blurring artifact caused by T2 relaxation. Further improvement in image sharpness can be achieved by performing T2 decay compensation before reconstructing the PHASTE data. Preliminary results have shown successful suppression of motion artifacts with PHASTE imaging. The image quality was enhanced relative to the original HASTE image, but was still less sharp than a non-motion-corrupted TSE image.
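
    One simple reading of "T2 decay compensation" is to reweight each acquired k-space line by the signal decay it suffered across the echo train before reconstruction. The sketch below assumes a single effective T2 and a per-line echo-time list; it illustrates the general idea, not the authors' exact scheme.

```python
import numpy as np

def t2_compensate(kspace_lines, echo_times, t2=0.08):
    # Undo exp(-TE/T2) signal decay per acquired line (times in seconds).
    # kspace_lines: (n_lines, n_readout) complex array, one row per echo.
    weights = np.exp(np.asarray(echo_times) / t2)
    return kspace_lines * weights[:, None]
```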

  12. Artifact reduction in moving-table acquisitions using parallel imaging and multiple averages.

    PubMed

    Fautz, H P; Honal, M; Saueressig, U; Schäfer, O; Kannengiesser, S A R

    2007-01-01

    Two-dimensional (2D) axial continuously-moving-table imaging has to deal with artifacts due to gradient nonlinearity and breathing motion, and has to provide the highest scan efficiency. Parallel imaging techniques (e.g., generalized autocalibrating partially parallel acquisition (GRAPPA)) are used to reduce such artifacts and to avoid ghosting artifacts. The latter occur in T2-weighted multi-spin-echo (SE) acquisitions that omit an additional excitation prior to imaging scans for presaturation purposes. Multiple images are reconstructed from subdivisions of a fully sampled k-space data set, each of which is acquired in a single SE train. These images are then averaged. GRAPPA coil weights are estimated without additional measurements. Compared to conventional image reconstruction, inconsistencies between different subsets of k-space induce fewer artifacts when each k-space part is reconstructed separately and the multiple images are averaged afterwards. These inconsistencies may lead to inaccurate GRAPPA coil weights using the proposed intrinsic GRAPPA calibration. It is shown that aliasing artifacts in single images cancel out after averaging. Phantom and in vivo studies demonstrate the benefit of the proposed reconstruction scheme for free-breathing axial continuously-moving-table imaging using fast multi-SE sequences.
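
    The reconstruction strategy described, reconstructing each k-space subset to a full image and averaging afterwards rather than merging the subsets in k-space, can be sketched as below. The GRAPPA unaliasing itself is omitted; the segments are assumed to have already been filled to full k-space grids.

```python
import numpy as np

def average_of_segment_recons(kspace_segments):
    # Reconstruct each k-space segment separately, then average the
    # magnitude images; inconsistencies between segments then appear as
    # incoherent differences rather than coherent ghosting.
    images = [np.abs(np.fft.ifft2(np.fft.ifftshift(k))) for k in kspace_segments]
    return np.mean(images, axis=0)
```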

  14. Real-time microstructural and functional imaging and image processing in optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Westphal, Volker

    Optical Coherence Tomography (OCT) is a noninvasive optical imaging technique that allows high-resolution cross-sectional imaging of tissue microstructure, achieving a spatial resolution of about 10 μm. OCT is similar to B-mode ultrasound (US) except that it uses infrared light instead of ultrasound. In contrast to US, no coupling gel is needed, simplifying the image acquisition. Furthermore, the fiber optic implementation of OCT is compatible with endoscopes. In recent years, the transition from slow, bench-top imaging systems to real-time clinical systems has been under way. This has led to a variety of applications, namely in ophthalmology, gastroenterology, dermatology and cardiology. First, this dissertation will demonstrate that OCT is capable of imaging and differentiating clinically relevant tissue structures in the gastrointestinal tract. A careful in vitro correlation study between endoscopic OCT images and corresponding histological slides was performed. Besides structural imaging, OCT systems were further developed for functional imaging, for example to visualize blood flow. Previously, imaging flow in small vessels in real-time was not possible. For this research, a new processing scheme similar to real-time Doppler in US was introduced. It was implemented in dedicated hardware to allow real-time acquisition and overlaid display of blood flow in vivo. A sensitivity of 0.5 mm/s was achieved. Optical coherence microscopy (OCM) is a variation of OCT, improving the resolution even further to a few micrometers. Advances made in the OCT scan engine for the Doppler setup enabled real-time imaging in vivo with OCM. In order to generate geometrically correct images for all the previous applications in real-time, extensive image processing algorithms were developed. Algorithms for correction of distortions due to non-telecentric scanning, nonlinear scan mirror movements, and refraction were developed and demonstrated. This has led to interesting new
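
    The real-time Doppler scheme mentioned is, at its core, a phase-difference estimate between successive depth scans. A minimal sketch of that standard estimator follows; the wavelength, line period, and refractive index are illustrative assumptions, not the dissertation's parameters.

```python
import numpy as np

def doppler_velocity(ascan_prev, ascan_curr, wavelength=1.3e-6,
                     line_period=1e-4, n_refr=1.38):
    # Axial flow velocity from the per-pixel phase shift between two
    # successive complex A-scans: v = dphi * lambda / (4 * pi * n * T).
    dphi = np.angle(ascan_curr * np.conj(ascan_prev))
    return dphi * wavelength / (4.0 * np.pi * n_refr * line_period)
```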

  15. Imageability, age of acquisition, and frequency factors in acronym comprehension.

    PubMed

    Playfoot, David; Izura, Cristina

    2013-06-01

    In spite of their unusual orthographic and phonological form, acronyms (e.g., BBC, HIV, NATO) can become familiar to the reader, and their meaning can be accessed well enough that they are understood. The factors in semantic access for acronym stimuli were assessed using a word association task. Two analyses examined the time taken to generate a word association response to acronym cues. Responses were recorded more quickly to cues that elicited a large proportion of semantic responses, and those that were high in associative strength. Participants were shown to be faster to respond to cues which were imageable or early acquired. Frequency was not a significant predictor of word association responses. Implications for theories of lexical organisation are discussed. PMID:23153389

  17. 360 degree realistic 3D image display and image processing from real objects

    NASA Astrophysics Data System (ADS)

    Luo, Xin; Chen, Yue; Huang, Yong; Tan, Xiaodi; Horimai, Hideyoshi

    2016-09-01

    A 360-degree realistic 3D image display system based on a direct light scanning method, the so-called Holo-Table, is introduced in this paper. High-density directional continuous 3D motion images can be displayed easily with only one spatial light modulator. Using a holographic screen as the beam deflector, a 360-degree full horizontal viewing angle was achieved. As an accompanying part of the system, a CMOS camera based image acquisition platform was built to feed the display engine, which can capture full 360-degree continuous images of a sample placed at the center. Customized image processing techniques such as scaling, rotation, and format transformation were also developed and embedded into the system control software platform. Finally, several samples were imaged to demonstrate the capability of our system.

  18. Magnetic resonance imaging acquisition techniques intended to decrease movement artefact in paediatric brain imaging: a systematic review.

    PubMed

    Woodfield, Julie; Kealey, Susan

    2015-08-01

    Attaining paediatric brain images of diagnostic quality can be difficult because of young age or neurological impairment. The use of anaesthesia to reduce movement in MRI increases clinical risk and cost, while CT, though faster, exposes children to potentially harmful ionising radiation. MRI acquisition techniques that aim to decrease movement artefact may allow diagnostic paediatric brain imaging without sedation or anaesthesia. We conducted a systematic review to establish the evidence base for ultra-fast sequences and sequences using oversampling of k-space in paediatric brain MR imaging. Techniques were assessed for imaging time, occurrence of movement artefact, the need for sedation, and either image quality or diagnostic accuracy. We identified 24 relevant studies. We found that ultra-fast techniques had shorter imaging acquisition times compared to standard MRI. Techniques using oversampling of k-space required equal or longer imaging times than standard MRI. Both ultra-fast sequences and those using oversampling of k-space reduced movement artefact compared with standard MRI in unsedated children. Assessment of overall diagnostic accuracy was difficult because of the heterogeneous patient populations, imaging indications, and reporting methods of the studies. In children with shunt-treated hydrocephalus there is evidence that ultra-fast MRI is sufficient for the assessment of ventricular size.

  19. IMAGE FUSION OF RECONSTRUCTED DIGITAL TOMOSYNTHESIS VOLUMES FROM A FRONTAL AND A LATERAL ACQUISITION.

    PubMed

    Arvidsson, Jonathan; Söderman, Christina; Allansdotter Johnsson, Åse; Bernhardt, Peter; Starck, Göran; Kahl, Fredrik; Båth, Magnus

    2016-06-01

    Digital tomosynthesis (DTS) has been used in chest imaging as a low radiation dose alternative to computed tomography (CT). Traditional DTS shows limitations in the spatial resolution in the out-of-plane dimension. As a first indication of whether a dual-plane dual-view (DPDV) DTS data acquisition can yield a fair resolution in all three spatial dimensions, a manual registration between a frontal and a lateral image volume was performed. An anthropomorphic chest phantom was scanned frontally and laterally using a linear DTS acquisition, at 120 kVp. The reconstructed image volumes were resampled and manually co-registered. Expert radiologist delineations of the mediastinal soft tissues enabled calculation of similarity metrics in regard to delineations in a reference CT volume. The fused volume produced the highest total overlap, implying that the fused volume was a more isotropic 3D representation of the examined object than the traditional chest DTS volumes. PMID:26683464

  20. Acquisition of Flexible Image Recognition by Coupling of Reinforcement Learning and a Neural Network

    NASA Astrophysics Data System (ADS)

    Shibata, Katsunari; Kawano, Tomohiko

    The authors have proposed a very simple autonomous learning system consisting of one neural network (NN), whose inputs are raw sensor signals and whose outputs are passed directly to actuators as control signals, and which is trained using reinforcement learning (RL). However, the prevailing opinion seems to be that such simple learning systems do not actually work on complicated tasks in the real world. In this paper, with a view to developing higher functions in robots, the authors argue for the necessity of autonomous learning in a massively parallel and cohesively flexible system with massive inputs, based on considerations of brain architecture and the sequential property of our consciousness. The authors also argue for placing more importance on “optimization” of the total system under a uniform criterion than on “understandability” for humans. The authors thus attempt to stress the importance of their proposed system for future research on robot intelligence. The experimental result in a real-world-like environment shows that image recognition from as many as 6240 visual signals can be acquired through RL under various backgrounds and light conditions without providing any knowledge about image processing or the target object. It works even for camera image inputs that were not encountered during learning. In the hidden layer, template-like representations, a division of roles between hidden neurons, and representations that detect the target regardless of light condition or background were observed after learning. The autonomous acquisition of such useful representations or functions suggests the potential for avoiding the frame problem and for developing higher functions.

  1. Image enhancement based on gamma map processing

    NASA Astrophysics Data System (ADS)

    Tseng, Chen-Yu; Wang, Sheng-Jyh; Chen, Yi-An

    2010-05-01

    This paper proposes a novel image enhancement technique based on Gamma Map Processing (GMP). In this approach, a base gamma map is generated directly from the intensity image. A sequence of gamma map processing operations is then performed to generate a channel-wise gamma map. By mapping each pixel through its estimated gamma, the details, colorfulness, and sharpness of the original image are automatically improved. In addition, the dynamic range of the image can be virtually expanded.
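
    The essential operation, applying a spatially varying gamma per pixel and channel, can be written compactly. The construction of the gamma maps themselves is the paper's contribution and is not reproduced here; the map is simply an input.

```python
import numpy as np

def apply_gamma_map(image_u8, gamma_map):
    # Per-pixel, per-channel gamma correction: out = in ** gamma(x, y, c).
    # gamma_map must broadcast to the image shape; image is 8-bit.
    x = np.clip(image_u8.astype(np.float64) / 255.0, 1e-6, 1.0)
    return np.clip(x ** gamma_map * 255.0, 0, 255).astype(np.uint8)
```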

  2. Reducing respiratory effect in motion correction for EPI images with sequential slice acquisition order.

    PubMed

    Cheng, Hu; Puce, Aina

    2014-04-30

    Motion correction is critical for the data analysis of fMRI time series. Most motion correction algorithms treat the head as a rigid body. Respiration of the subject, however, can alter the static magnetic field in the head and result in motion-like slice shifts in echo planar imaging (EPI). The delay of acquisition between slices causes a phase difference in respiration, so that the shifts vary with slice position. To characterize the effect of respiration on motion correction, we acquired rapidly sampled fMRI data using multi-band EPI and then simulated different acquisition schemes. Our results indicated that respiration introduces additional noise after motion correction. The signal variation between volumes after motion correction increases when the effective TR increases from 675 ms to 2025 ms. This problem can be corrected if slices are acquired sequentially. For EPI with a sequential acquisition scheme, we propose to divide the image volumes into several segments, so that slices within each segment are acquired close in time, and then to perform motion correction on these segments separately. We demonstrated that the temporal signal-to-noise ratio (TSNR) was increased when motion correction was performed on the segments separately rather than on the whole image. This enhancement of TSNR was not evenly distributed across the segments and was not observed for interleaved acquisition. The level of increase was higher for superior slices, where the percentage of TSNR gain was comparable to that of image-based retrospective correction for respiratory noise. Our results suggest that separate motion correction on segments is highly recommended for sequential acquisition schemes, at least for slices distal to the chest.
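
    The proposed correction amounts to grouping slices acquired close in time and registering each group's time series independently. A schematic sketch follows, with the rigid-body registration routine left as a user-supplied callable (any fMRI toolbox function would do; none is prescribed here).

```python
import numpy as np

def segmented_motion_correction(volumes, n_segments, motion_correct):
    # volumes: (time, slices, ny, nx), slices in sequential acquisition order.
    # motion_correct: callable registering a (time, nz, ny, nx) series and
    # returning it corrected (a stand-in, not a specific toolbox function).
    t, nz, ny, nx = volumes.shape
    bounds = np.linspace(0, nz, n_segments + 1).astype(int)
    out = np.empty_like(volumes)
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        # Slices lo..hi-1 were acquired close in time: correct them together.
        out[:, lo:hi] = motion_correct(volumes[:, lo:hi])
    return out
```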

  3. Whole Heart Coronary Imaging with Flexible Acquisition Window and Trigger Delay

    PubMed Central

    Kawaji, Keigo; Foppa, Murilo; Roujol, Sébastien; Akçakaya, Mehmet; Nezafat, Reza

    2015-01-01

    Coronary magnetic resonance imaging (MRI) requires a correctly timed trigger delay derived from a scout cine scan to synchronize k-space acquisition with the quiescent period of the cardiac cycle. However, heart rate changes between breath-held cine and free-breathing coronary imaging may result in inaccurate timing errors. Additionally, the determined trigger delay may not reflect the period of minimal motion for both left and right coronary arteries or different segments. In this work, we present a whole-heart coronary imaging approach that allows flexible selection of the trigger delay timings by performing k-space sampling over an enlarged acquisition window. Our approach addresses coronary motion in an interactive manner by allowing the operator to determine the temporal window with minimal cardiac motion for each artery region. An electrocardiogram-gated, k-space segmented 3D radial stack-of-stars sequence that employs a custom rotation angle is developed. An interactive reconstruction and visualization platform is then employed to determine the subset of the enlarged acquisition window for minimal coronary motion. Coronary MRI was acquired on eight healthy subjects (5 male, mean age = 37 ± 18 years), where an enlarged acquisition window of 166–220 ms was set 50 ms prior to the scout-derived trigger delay. Coronary visualization and sharpness scores were compared between the standard 120 ms window set at the trigger delay, and those reconstructed using a manually adjusted window. The proposed method using manual adjustment was able to recover delineation of five mid and distal right coronary artery regions that were otherwise not visible from the standard window, and the sharpness scores improved in all coronary regions using the proposed method. This paper demonstrates the feasibility of a whole-heart coronary imaging approach that allows interactive selection of any subset of the enlarged acquisition window for a tailored reconstruction for each branch

  4. Sequential CW-EPR image acquisition with 760-MHz surface coil array.

    PubMed

    Enomoto, Ayano; Hirata, Hiroshi

    2011-04-01

    This paper describes the development of a surface coil array that consists of two inductively coupled surface-coil resonators, for use in continuous-wave electron paramagnetic resonance (CW-EPR) imaging at 760 MHz. To make sequential EPR image acquisition possible, we decoupled the surface coils using PIN-diode switches, enabling the resonators' resonance frequency to be shifted by more than 200 MHz. To assess the effectiveness of the surface coil array in CW-EPR imaging, two-dimensional images of a solution of nitroxyl radicals were measured with the developed coil array. Compared to images acquired with an equivalent single coil, the visualized area was extended approximately 2-fold when using the surface coil array. The ability to visualize larger regions of interest through the use of a surface coil array may offer great potential for future EPR imaging studies. PMID:21320789

  5. The Materials Acquisition Process at the University of Technology, Sydney: Equitable Transparent Allocation of Funds.

    ERIC Educational Resources Information Center

    O'Connor, Steve; Flynn, Ann; Lafferty, Susan

    1998-01-01

    Discusses the development of a library acquisition allocation formula at the University of Technology, Sydney. Covers the items included, consultative process adopted, details of the formulae derived and their implementation. (Author)

  6. Temporal optimisation of image acquisition for land cover classification with Random Forest and MODIS time-series

    NASA Astrophysics Data System (ADS)

    Nitze, Ingmar; Barrett, Brian; Cawkwell, Fiona

    2015-02-01

    The analysis and classification of land cover is one of the principal applications in terrestrial remote sensing. Due to the seasonal variability of different vegetation types and land surface characteristics, the ability to discriminate land cover types changes over time. Multi-temporal classification can help to improve classification accuracies, but various constraints, such as financial restrictions or atmospheric conditions, may impede its application. Optimising the timing and frequency of image acquisition can increase the effectiveness of the classification process. For this purpose, the Feature Importance (FI) measure of the state-of-the-art machine learning method Random Forest was used to determine the optimal image acquisition periods for a general (Grassland, Forest, Water, Settlement, Peatland) and a Grassland-specific (Improved Grassland, Semi-Improved Grassland) land cover classification in central Ireland, based on a 9-year time-series of MODIS Terra 16 day composite data (MOD13Q1). Feature Importances for each acquisition period of the Enhanced Vegetation Index (EVI) and Normalised Difference Vegetation Index (NDVI) were calculated for both classification scenarios. In the general land cover classification, the months December and January showed the highest, and July and August the lowest, separability for both VIs over the entire nine-year period. This temporal separability was reflected in the classification accuracies, where the optimal choice of image dates outperformed the worst image date by 13% using NDVI and 5% using EVI in a mono-temporal analysis. With the addition of the next-best image periods to the data input, the classification accuracies quickly converged to their limit at around 8-10 images. The binary classification schemes, using two classes only, showed a stronger seasonal dependency, with higher intra-annual but lower inter-annual variation. Nonetheless, anomalous weather conditions, such as the cold winter of
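
    The Feature Importance ranking underlying this analysis is directly available from standard Random Forest implementations. A toy sketch with synthetic data follows; real inputs would be the per-pixel NDVI or EVI values of each 16-day composite.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.random((500, 23))          # 23 composites per year (synthetic)
y = rng.integers(0, 5, size=500)   # 5 general land cover classes

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
# Acquisition periods ranked by their contribution to class separability.
print(np.argsort(rf.feature_importances_)[::-1][:5])
```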

  7. High Dynamic Range Processing for Magnetic Resonance Imaging

    PubMed Central

    Sukerkar, Preeti A.; Meade, Thomas J.

    2013-01-01

    Purpose: To minimize feature loss in T1- and T2-weighted MRI by merging multiple MR images acquired at different TR and TE to generate an image with increased dynamic range. Materials and Methods: High Dynamic Range (HDR) processing techniques from the field of photography were applied to a series of acquired MR images. Specifically, a method to parameterize the algorithm for MRI data was developed and tested. T1- and T2-weighted images of a number of contrast agent phantoms and a live mouse were acquired with varying TR and TE parameters. The images were computationally merged to produce HDR-MR images. All acquisitions were performed on a 7.05 T Bruker PharmaScan with a multi-echo spin echo pulse sequence. Results: HDR-MRI delineated bright and dark features that were either saturated or indistinguishable from background in standard T1- and T2-weighted MRI. The increased dynamic range preserved intensity gradation over a larger range of T1 and T2 in phantoms and revealed more anatomical features in vivo. Conclusions: We have developed and tested a method to apply HDR processing to MR images. The increased dynamic range of HDR-MR images as compared to standard T1- and T2-weighted images minimizes feature loss caused by magnetization recovery or low SNR. PMID:24250788
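
    A generic version of the merge step, a photographic "hat" weighting that down-weights saturated and near-noise-floor pixels in each co-registered acquisition, is sketched below; the paper's MRI-specific parameterization is not reproduced.

```python
import numpy as np

def hdr_merge(images):
    # Weighted average of co-registered images acquired at different TR/TE;
    # the hat weight peaks at mid-intensity and falls toward the extremes.
    stack = np.stack([im.astype(np.float64) for im in images])
    norm = (stack - stack.min()) / (stack.max() - stack.min() + 1e-12)
    weights = 1.0 - np.abs(2.0 * norm - 1.0) + 1e-6
    return (weights * stack).sum(axis=0) / weights.sum(axis=0)
```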

  8. Quantitative evaluation of phase processing approaches in susceptibility weighted imaging

    NASA Astrophysics Data System (ADS)

    Li, Ningzhi; Wang, Wen-Tung; Sati, Pascal; Pham, Dzung L.; Butman, John A.

    2012-03-01

    Susceptibility weighted imaging (SWI) takes advantage of the local variation in susceptibility between different tissues to enable highly detailed visualization of the cerebral venous system and sensitive detection of intracranial hemorrhages. Thus, it has been increasingly used in magnetic resonance imaging studies of traumatic brain injury as well as other intracranial pathologies. In SWI, magnitude information is combined with phase information to enhance the susceptibility-induced image contrast. Because of global susceptibility variations across the image, the rate of phase accumulation varies widely, resulting in phase-wrapping artifacts that interfere with the local assessment of phase variation. Homodyne filtering is a common approach to eliminating this global phase variation, but the filter size requires careful selection in order to preserve image contrast and avoid errors resulting from residual phase wraps. An alternative approach is to apply phase unwrapping prior to high-pass filtering. A suitable phase unwrapping algorithm guarantees no residual phase wraps but requires additional computational steps. In this work, we quantitatively evaluate these two phase processing approaches on both simulated and real data using different filters and cutoff frequencies. Our analysis leads to an improved understanding of the relationship between phase wraps, susceptibility effects, and acquisition parameters. Although homodyne filtering approaches are faster and more straightforward, phase unwrapping approaches perform more accurately in a wider variety of acquisition scenarios.
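
    The "unwrap first, then high-pass" route can be sketched in a few lines; the filter width and final mask exponent are exactly the free parameters the paper's comparison concerns, and the values below are illustrative only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import unwrap_phase

def swi_combine(complex_image, sigma=8, mask_power=4):
    # Unwrap the phase, remove the smooth background field by subtracting
    # a low-pass copy, then weight the magnitude by a negative-phase mask.
    phase = unwrap_phase(np.angle(complex_image))
    hp_phase = phase - gaussian_filter(phase, sigma)
    mask = np.clip(1.0 + hp_phase / np.pi, 0.0, 1.0)
    return np.abs(complex_image) * mask ** mask_power
```

    Homodyne filtering would instead divide the complex image by a low-pass-filtered copy of itself, with no unwrapping step.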

  9. Combining image-processing and image compression schemes

    NASA Technical Reports Server (NTRS)

    Greenspan, H.; Lee, M.-C.

    1995-01-01

    An investigation into combining image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented for the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection schemes, can benefit from the added image resolution provided by the enhancement.

  10. The acquisition process of musical tonal schema: implications from connectionist modeling

    PubMed Central

    Matsunaga, Rie; Hartono, Pitoyo; Abe, Jun-ichi

    2015-01-01

    Using connectionist modeling, we address fundamental questions concerning the acquisition process of the musical tonal schema of listeners. Compared to models in previous studies, our connectionist model (Learning Network for Tonal Schema, LeNTS) was better equipped to fulfill three basic requirements: LeNTS was equipped with a learning mechanism, bound by culture-general properties, and trained with sufficient melody materials. When exposed to Western music, LeNTS acquired musical ‘scale’ sensitivity early and ‘harmony’ sensitivity later. The order of acquisition of scale and harmony sensitivities shown by LeNTS was consistent with the culture-specific acquisition order shown by musically westernized children. The implications of these results for the acquisition process of a listener's tonal schema are as follows: (a) the acquisition process may entail small and incremental changes, rather than large and stage-like changes, in the corresponding neural circuits; (b) the speed of schema acquisition may depend mainly on musical experience rather than maturation; and (c) the learning principles of schema acquisition may be culturally invariant, while the acquired tonal schemas vary with the culture-specific music to which listeners are exposed. PMID:26441725

  11. Applications Of Image Processing In Criminalistics

    NASA Astrophysics Data System (ADS)

    Krile, Thomas F.; Walkup, John F.; Barsallo, Adonis; Olimb, Hal; Tarng, Jaw-Horng

    1987-01-01

    A review of some basic image processing techniques for enhancement and restoration of images is given. Both digital and optical approaches are discussed. Fingerprint images are used as examples to illustrate the various processing techniques and their potential applications in criminalistics.

  12. Partition-based acquisition model for speed up navigated beta-probe surface imaging

    NASA Astrophysics Data System (ADS)

    Monge, Frédéric; Shakir, Dzhoshkun I.; Navab, Nassir; Jannin, Pierre

    2016-03-01

    Although gross total resection in low-grade glioma surgery leads to better patient outcomes, in-vivo control of the resection borders remains challenging. For this purpose, navigated beta-probe systems combined with 18F-based radiotracers, relying on estimation of the activity distribution over the surface, have been proposed to generate reconstructed images. Early studies have outlined their clinical relevance, leveraging intraoperative functional information despite the low spatial resolution of the reconstructions. To improve reconstruction quality, multiple acquisition models have been proposed. They involve the definition of an attenuation matrix modeling the physics of radiation detection. Yet they require high computational power for efficient intraoperative use. To address this problem, we propose a new acquisition model called the Partition Model (PM), derived from an existing model in which the coefficients of the matrix are taken from a look-up table (LUT). Our model is based on dividing the LUT into regions of averaged homogeneous values for assigning attenuation coefficients. We validated our model using in vitro datasets in which tumors and peri-tumoral tissues were simulated. We compared our acquisition model with the off-the-shelf LUT and the raw method. Both acquisition models outperformed the raw method in terms of tumor contrast (7.97:1 mean T:B), but at a computational cost that hinders real-time use. Both acquisition models reached the same detection performance with references (0.8 mean AUC and 0.77 mean NCC), while PM slightly improves the mean tumor contrast, up to 10.1:1 vs 9.9:1 with the LUT model, and, more importantly, reduces the mean computation time by 7.5%. Our model provides a faster solution for intraoperative use of a navigated beta-probe surface imaging system, with improved image quality.
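
    One plausible reading of the Partition Model, replacing the look-up table with piecewise-constant block averages so that fewer distinct coefficients must be fetched, is sketched below; the block size and LUT geometry are assumptions, not the paper's specification.

```python
import numpy as np

def partition_lut(lut, block=8):
    # Replace each (block x block) region of the attenuation LUT by its
    # mean, trading spatial fidelity for fewer distinct coefficients.
    out = lut.astype(np.float64).copy()
    h, w = lut.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            out[i:i+block, j:j+block] = lut[i:i+block, j:j+block].mean()
    return out
```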

  13. 3D imaging acquisition, modeling, and prototyping for facial defects reconstruction

    NASA Astrophysics Data System (ADS)

    Sansoni, Giovanna; Trebeschi, Marco; Cavagnini, Gianluca; Gastaldi, Giorgio

    2009-01-01

    A novel approach that combines optical three-dimensional imaging, reverse engineering (RE), and rapid prototyping (RP) for mold production in the reconstruction of facial prostheses is presented. A commercial laser-stripe digitizer is used to perform multiview acquisition of the patient's face; the point clouds are aligned and merged to obtain a polygonal model, which is then edited to sculpt the virtual prosthesis. Two physical models, of the deformed face and of the 'repaired' face, are obtained: they differ only in the defect zone. Depending on the material used for the actual prosthesis, the two prototypes can be used either to directly cast the final prosthesis or to fabricate the positive wax pattern. Two case studies are presented, referring to prosthetic reconstructions of an eye and of a nose. The results demonstrate the advantages over conventional techniques as well as the improvements with respect to known automated manufacturing techniques in mold construction. The proposed method results in decreased patient discomfort, reduced dependence on the anaplastologist's skill, and increased repeatability and efficiency of the whole process.

  14. Fast compressive measurements acquisition using optimized binary sensing matrices for low-light-level imaging.

    PubMed

    Ke, Jun; Lam, Edmund Y

    2016-05-01

    Compressive measurements benefit low-light-level imaging (L3-imaging) due to the significantly improved measurement signal-to-noise ratio (SNR). However, as with other compressive imaging (CI) systems, compressive L3-imaging is slow. To accelerate the data acquisition, we develop an algorithm to compute the optimal binary sensing matrix that minimizes the image reconstruction error. First, we make use of the measurement SNR and the reconstruction mean square error (MSE) to define the optimal gray-value sensing matrix. Then, we construct an equality-constrained optimization problem to solve for a binary sensing matrix. Several experimental results show that the latter delivers reconstruction performance similar to the former, while having a smaller dynamic range requirement for the system sensors.
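
    The SNR advantage that motivates binary sensing can be seen in a toy acquisition model: each binary-coded measurement sums roughly half the scene's pixels, collecting far more light than one raster-scanned pixel. The paper's matrix optimization is not reproduced; a random binary matrix and a naive least-squares decoder stand in.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pixels, n_meas = 256, 64
scene = rng.random(n_pixels)                               # vectorized image
A = (rng.random((n_meas, n_pixels)) > 0.5).astype(float)   # binary sensing
y = A @ scene                                              # measurements
# Minimum-norm least-squares stand-in for the paper's reconstruction.
scene_hat = np.linalg.lstsq(A, y, rcond=None)[0]
print(np.linalg.norm(A @ scene_hat - y))                   # ~0: data consistency
```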

  15. Multiwavelength lidar: challenges of data acquisition and processing

    NASA Astrophysics Data System (ADS)

    Duggirala, Ramakrishna Rao; Santhibhavan Vasudevanpillai, Mohankumar; Bhargavan, Presennakumar; Sivarama Pillai, Muraleedharen Nair; Malladi, Satyanarayana

    2006-12-01

    LIDAR operates by transmitting light pulses a few nanoseconds wide into the atmosphere and receiving the signals backscattered from different layers of aerosols and clouds to derive vertical profiles of their physical and optical properties with good spatial resolution. The Data Acquisition System (DAS) of the LIDAR has to handle signals of wide dynamic range (of the order of 5 to 6 decades), and the data have to be sampled at high speeds (more than 10 MSPS) to obtain a spatial resolution of a few metres. This results in a large amount of data collected in a short duration. The ground-based Multiwavelength LIDAR built at the Space Physics Laboratory, Vikram Sarabhai Space Centre, Trivandrum is capable of operating at four wavelengths, namely 1064, 532, 355 and 266 nm, with a PRF of 1 to 20 Hz. The LIDAR has been equipped with a computer-controlled DAS. An Avalanche Photo Diode (APD) detector is used for detection of the return signal from different layers of the atmosphere in the 1064 nm channel. The signal is continuous in nature and is sampled and digitized at the required spatial resolution in the data acquisition window corresponding to the height region of 0 to 45 km. The return signal, which has a wide dynamic range, is handled by two fast 12-bit A/D converters set to different full-scale voltage ranges and sampling up to 40 MSPS (corresponding to a range resolution of a few metres). The other channels, namely 532, 355 and 266 nm, are detected by Photo Multiplier Tubes (PMT), which have higher quantum efficiency at these wavelengths. The PMT output can be either continuous or discrete pulses depending upon the region of probing. Thick layers like clouds and dust generate continuous signals, whereas molecular scattering from higher altitude regions results in discrete signal pulses. The return signals are digitized using fast A/D converters (up to 40 MSPS) as well as counted using fast photon counters. The photon counting channels are capable of counting up to
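
    The quoted figures are consistent with the basic round-trip relation between digitizer rate and range resolution, ΔR = c / (2 f_s); a quick check:

```python
C = 3.0e8  # speed of light, m/s

def range_resolution_m(sampling_rate_hz):
    # Each sample spans c / (2 * f_s) of range (factor 2: round trip).
    return C / (2.0 * sampling_rate_hz)

print(range_resolution_m(40e6))  # 40 MSPS -> 3.75 m, i.e. a few metres
```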

  16. Data acquisition and analysis for the energy-subtraction Compton scatter camera for medical imaging

    NASA Astrophysics Data System (ADS)

    Khamzin, Murat Kamilevich

    In response to the shortcomings of the Anger camera currently used in conventional SPECT, particularly the trade-off between sensitivity and spatial resolution, a novel energy-subtraction Compton scatter camera, or ESCSC, has been proposed. A successful clinical implementation of the ESCSC could revolutionize the field of SPECT. Features of this camera include utilization of silicon and CdZnTe detectors in the primary and secondary detector systems, list-mode time-stamping data acquisition, modular architecture, and post-acquisition data analysis. Previous ESCSC studies were based on Monte Carlo modeling. The objective of this work is to test the theoretical framework developed in previous studies by developing the data acquisition and analysis techniques necessary to implement the ESCSC. A camera model working in list mode with time stamping was successfully built and tested, thus confirming the potential of the ESCSC predicted in previous simulation studies. The acquired data were processed during post-acquisition data analysis based on preferred event selection criteria. Along with constructing the camera model and proving the approach, the post-acquisition data analysis was further extended to include preferred event weighting based on the likelihood of a preferred event being a true preferred event. While formulated to demonstrate ESCSC capabilities, the results of this study are important for any Compton scatter camera implementation as well as for coincidence data acquisition systems in general.

  17. Xenbase: Core features, data acquisition, and data processing

    PubMed Central

    James-Zorn, Christina; Ponferrada, Virgillio G.; Burns, Kevin A.; Fortriede, Joshua D.; Lotay, Vaneet S.; Liu, Yu; Karpinka, J. Brad; Karimi, Kamran; Zorn, Aaron M.; Vize, Peter D.

    2015-01-01

    Xenbase, the Xenopus model organism database (www.xenbase.org), is a cloud-based, web-accessible resource that integrates the diverse genomic and biological data from Xenopus research. Xenopus frogs are one of the major vertebrate animal models used for biomedical research, and Xenbase is the central repository for the enormous amount of data generated using this model tetrapod. The goal of Xenbase is to accelerate discovery by enabling investigators to make novel connections between molecular pathways in Xenopus and human disease. Our relational database and user-friendly interface make these data easy to query and allow investigators to quickly interrogate and link different data types in ways that would otherwise be difficult, time consuming, or impossible. Xenbase also enhances the value of these data through high-quality gene expression curation and data integration, by providing bioinformatics tools optimized for Xenopus experiments, and by linking Xenopus data to other model organisms and to human data. Xenbase draws in data via pipelines that download data, parse the content, and save them into appropriate files and database tables. Furthermore, Xenbase makes these data accessible to the broader biomedical community by continually providing annotated data updates to organizations such as NCBI, UniProtKB and Ensembl. Here we describe our bioinformatics, genome-browsing tools, data acquisition and sharing, our community submitted and literature curation pipelines, text-mining support, gene page features and the curation of gene nomenclature and gene models. PMID:26150211

  20. The selection of field acquisition parameters for dispersion images from multichannel surface wave data

    USGS Publications Warehouse

    Zhang, S.X.; Chan, L.S.; Xia, J.

    2004-01-01

    The accuracy and resolution of surface wave dispersion results depend on the parameters used for acquiring data in the field. The optimized field parameters for acquiring multichannel analysis of surface wave (MASW) dispersion images can be determined if preliminary information on the phase velocity range and interface depth is available. In a case study on a fill slope in Hong Kong, the optimal acquisition parameters were first determined from a preliminary seismic survey prior to a MASW survey. Field tests using different sets of receiver distances and array lengths showed that the most consistent and useful dispersion images were obtained from the optimal acquisition parameters predicted. The inverted S-wave velocities from the dispersion curve obtained at the optimal offset distance range also agreed with those obtained by using direct refraction survey.

  1. An application-specific integrated circuit and data acquisition system for digital X-ray imaging

    NASA Astrophysics Data System (ADS)

    Beuville, E.; Cederström, B.; Danielsson, M.; Luo, L.; Nygren, D.; Oltman, E.; Vestlund, J.

    1998-02-01

    We have developed an Application Specific Integrated Circuit (ASIC) and data acquisition system for digital X-ray imaging. The chip consists of 16 parallel channels, each containing a preamplifier, shaper, comparator and a 16-bit counter. We have demonstrated noiseless single-photon counting above a threshold of 7.2 keV using silicon detectors and are presently capable of maximum counting rates of 2 MHz per channel. The ASIC is controlled by a personal computer through a commercial PCI card, which is also used for data acquisition. The contents of the 16-bit counters are loaded into a shift register and can be transferred to the PC at any time at a rate of 20 MHz. The system is simple, low-cost and high-performance, and is optimised for digital X-ray imaging applications.

  2. The design of a distributed image processing and dissemination system

    SciTech Connect

    Rafferty, P.; Hower, L.

    1990-01-01

    The design and implementation of a distributed image processing and dissemination system was undertaken and accomplished as part of a prototype communication and intelligence (CI) system, the contingency support system (CSS), which is intended to support contingency operations of the Tactical Air Command. The system consists of six (6) Sun 3/180C workstations with integrated ITEX image processors and three (3) 3/50 diskless workstations located at four (4) system nodes (INEL, base, and mobiles). All 3/180C workstations are capable of image system server functions, whereas the 3/50s are image system clients only. Distribution is accomplished via both local and wide area networks using standard Defense Data Network (DDN) protocols (i.e., TCP/IP, et al.) and Defense Satellite Communication Systems (DSCS) compatible SHF Transportable Satellite Earth Terminals (TSET). Image applications utilize Sun's Remote Procedure Call (RPC) to facilitate the image system client and server relationships. The system provides functions to acquire, display, annotate, process, transfer, and manage images via an icon, panel, and menu oriented SunView (trademark) based user interface. Image spatial resolution is 512 × 480 with 8 bits/pixel black and white and 12/24 bits/pixel color, depending on system configuration. Compression is used during various image display and transmission functions to reduce the dynamic range of image data to 12/6/3/2 bits/pixel, depending on the application. Image acquisition is accomplished in real-time or near-real-time by special-purpose ITEX image hardware. As a result, all image displays are highly interactive, with attention given to subsecond response time. 3 refs., 7 figs.

  3. An ImageJ plugin for ion beam imaging and data processing at AIFIRA facility

    NASA Astrophysics Data System (ADS)

    Devès, G.; Daudin, L.; Bessy, A.; Buga, F.; Ghanty, J.; Naar, A.; Sommar, V.; Michelet, C.; Seznec, H.; Barberet, P.

    2015-04-01

    Quantification and imaging of chemical elements at the cellular level requires the combination of techniques such as micro-PIXE, micro-RBS, STIM, and secondary electron imaging, associated with optical and fluorescence microscopy techniques employed prior to irradiation. Such a set of methods generates a large amount of data per experiment. Typically, for each acquisition the following data have to be processed: a chemical map for each element present with a concentration above the detection limit, density and backscattered maps, and mean and local spectra corresponding to relevant regions of interest such as a whole cell, an intracellular compartment, or nanoparticles. These operations are time consuming, repetitive, and as such can be a source of errors in data manipulation. In order to optimize data processing, we have developed a new tool for batch data processing and imaging. This tool has been developed as a plugin for ImageJ, a versatile image processing software that is suitable for basic IBA data operations. Because ImageJ is written in Java, the plugin can be used under Linux, Mac OS X and Windows in both 32-bit and 64-bit modes, which may interest developers working on open-access ion beam facilities like AIFIRA. The main features of this plugin are presented here: listfile processing, spectroscopic imaging, local information extraction, quantitative density maps and database management using OMERO.

  4. Programmable remapper for image processing

    NASA Technical Reports Server (NTRS)

    Juday, Richard D. (Inventor); Sampsell, Jeffrey B. (Inventor)

    1991-01-01

    A video-rate coordinate remapper includes a memory for storing a plurality of transformations as look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for compensating certain visual defects of visually impaired people. The invention also includes means for shifting input pixels and means for scrolling the output matrix.
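
    Table-driven remapping of this general kind reduces, per output pixel, to an indexed gather through the stored transformation. A minimal nearest-neighbour sketch (not the patented many-to-one/one-to-many pipeline):

```python
import numpy as np

def remap(image, map_y, map_x):
    # out[y, x] = image[map_y[y, x], map_x[y, x]] via fancy indexing.
    return image[map_y, map_x]

# Example: a horizontal mirror expressed as a look-up table.
img = np.arange(12).reshape(3, 4)
yy, xx = np.indices(img.shape)
print(remap(img, yy, xx[:, ::-1]))
```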

  5. Learning (Not) to Predict: Grammatical Gender Processing in Second Language Acquisition

    ERIC Educational Resources Information Center

    Hopp, Holger

    2016-01-01

    In two experiments, this article investigates the predictive processing of gender agreement in adult second language (L2) acquisition. We test (1) whether instruction on lexical gender can lead to target predictive agreement processing and (2) how variability in lexical gender representations moderates L2 gender agreement processing. In a…

  6. Diffusion weighted inner volume imaging of lumbar disks based on turbo-STEAM acquisition.

    PubMed

    Hiepe, Patrick; Herrmann, Karl-Heinz; Ros, Christian; Reichenbach, Jürgen R

    2011-09-01

    A magnetic resonance imaging (MRI) technique for diffusion weighted imaging (DWI) is described which, in contrast to echo planar imaging (EPI), is insensitive to off-resonance effects caused by tissue susceptibility differences, magnetic field inhomogeneities, or chemical shifts. The sequence combines a diffusion weighted (DW) spin-echo preparation and a stimulated echo acquisition mode (STEAM) module. Inner volume imaging (IVI) allows reduced rectangular field-of-view (FoV) in the phase encode direction, while suppressing aliasing artifacts that are usually the consequence of reduced FoVs. Sagittal turbo-STEAM images of the lumbar spine were acquired at 3.0T with 2.0 × 2.0 mm² in-plane resolution and 7 mm slice thickness with acquisition times of 407 ms per image. To calculate the apparent diffusion coefficient (ADC) in lumbar intervertebral disks (IVDs), the DW gradients were applied in three orthogonal gradient directions with b-values of 0 and 300 s/mm². For initial assessment of the ADC of normal and abnormal IVDs a pilot study with 8 subjects was performed. Mean ADC values of all normal IVDs were (2.27±0.40)×10⁻³ mm²/s and (1.89±0.34)×10⁻³ mm²/s for turbo-STEAM IVI and SE-EPI acquisition, respectively. Corresponding mean ADC values, averaged over all abnormal disks, were (1.93±0.39)×10⁻³ mm²/s and (1.51±0.46)×10⁻³ mm²/s, respectively, indicating a substantial ADC decrease (p<0.001).
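
    With two b-values, the ADC computation reduces to a per-pixel log-ratio, ADC = ln(S0/Sb)/b. A minimal sketch follows; for the three-direction scheme above, Sb would be the geometric mean of the three diffusion-weighted images.

```python
import numpy as np

def adc_map(s0, s_b, b=300.0):
    # ADC in mm^2/s when b is in s/mm^2; clamp inputs to avoid log(0).
    s0 = np.maximum(s0.astype(np.float64), 1e-6)
    s_b = np.maximum(s_b.astype(np.float64), 1e-6)
    return np.log(s0 / s_b) / b
```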

  7. Handbook on COMTAL's Image Processing System

    NASA Technical Reports Server (NTRS)

    Faulcon, N. D.

    1983-01-01

    An image processing system is the combination of an image processor with other control and display devices plus the necessary software needed to produce an interactive capability to analyze and enhance image data. Such an image processing system installed at NASA Langley Research Center, Instrument Research Division, Acoustics and Vibration Instrumentation Section (AVIS) is described. Although much of the information contained herein can be found in the other references, it is hoped that this single handbook will give the user better access, in concise form, to pertinent information and usage of the image processing system.

  8. System safety management lessons learned from the US Army acquisition process

    SciTech Connect

    Piatt, J.A.

    1989-05-01

    The Assistant Secretary of the Army for Research, Development and Acquisition directed the Army Safety Center to provide an audit of the causes of accidents and safety of use restrictions on recently fielded systems by tracking residual hazards back through the acquisition process. The objective was to develop lessons learned'' that could be applied to the acquisition process to minimize mishaps in fielded systems. System safety management lessons learned are defined as Army practices or policies, derived from past successes and failures, that are expected to be effective in eliminating or reducing specific systemic causes of residual hazards. They are broadly applicable and supportive of the Army structure and acquisition objectives. Pacific Northwest Laboratory (PNL) was given the task of conducting an independent, objective appraisal of the Army's system safety program in the context of the Army materiel acquisition process by focusing on four fielded systems which are products of that process. These systems included the Apache helicopter, the Bradley Fighting Vehicle (BFV), the Tube Launched, Optically Tracked, Wire Guided (TOW) Missile and the High Mobility Multipurpose Wheeled Vehicle (HMMWV). The objective of this study was to develop system safety management lessons learned associated with the acquisition process. The first step was to identify residual hazards associated with the selected systems. Since it was impossible to track all residual hazards through the acquisition process, certain well-known, high visibility hazards were selected for detailed tracking. These residual hazards illustrate a variety of systemic problems. Systemic or process causes were identified for each residual hazard and analyzed to determine why they exist. System safety management lessons learned were developed to address related systemic causal factors. 29 refs., 5 figs.

  9. Methodology for the Elimination of Reflection and System Vibration Effects in Particle Image Velocimetry Data Processing

    NASA Technical Reports Server (NTRS)

    Bremmer, David M.; Hutcheson, Florence V.; Stead, Daniel J.

    2005-01-01

    A methodology to eliminate model reflection and system vibration effects from post-processed particle image velocimetry data is presented. Reflection and vibration lead to loss of data and biased velocity calculations in PIV processing. A series of algorithms were developed to alleviate these problems. Reflections emanating from the model surface caused by the laser light sheet are removed from the PIV images by subtracting an image in which only the reflections are visible from all of the images within a data acquisition set. The result is a set of PIV images where only the seeded particles are apparent. Fiduciary marks painted on the surface of the test model were used as reference points in the images. By locating the centroids of these marks it was possible to shift all of the images to a common reference frame. This image alignment procedure, as well as the subtraction of model reflections, is performed in a first algorithm. Once the images have been shifted, they are compared with a background image that was recorded under no-flow conditions. The second and third algorithms find the coordinates of fiduciary marks in the acquisition set images and the background image and calculate the displacement between these images. The final algorithm shifts all of the images so that fiduciary mark centroids lie in the same location as the background image centroids. This methodology effectively eliminated the effects of vibration so that unbiased data could be used for PIV processing. The PIV data used for this work was generated at the NASA Langley Research Center Quiet Flow Facility. The experiment entailed flow visualization near the flap side edge region of an airfoil model. Commercial PIV software was used for data acquisition and processing. In this paper, the experiment and the PIV acquisition of the data are described. The methodology used to develop the algorithms for reflection and system vibration removal is stated, and the implementation, testing and
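
    The two core operations of this methodology, reflection subtraction and fiduciary-mark registration, can be sketched in a few lines. A simplified numpy/scipy rendition (assuming bright marks on a dark background; the study used commercial PIV software, not this code):

        import numpy as np
        from scipy import ndimage

        def remove_reflections(frames, reflection_img):
            # Subtract the reflections-only image from every frame in the
            # acquisition set, clipping at zero so only particle signal remains.
            return [np.clip(f.astype(float) - reflection_img, 0.0, None) for f in frames]

        def mark_centroid(img, threshold):
            # Centroid (row, col) of a bright fiduciary mark found by thresholding.
            return np.array(ndimage.center_of_mass(img > threshold))

        def align_to_background(img, background, threshold):
            # Shift a frame so its fiduciary-mark centroid coincides with the
            # centroid measured in the no-flow background image.
            offset = mark_centroid(background, threshold) - mark_centroid(img, threshold)
            return ndimage.shift(img, offset, order=1, mode='nearest')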

  10. High-Speed MALDI-TOF Imaging Mass Spectrometry: Rapid Ion Image Acquisition and Considerations for Next Generation Instrumentation

    PubMed Central

    Spraggins, Jeffrey M.; Caprioli, Richard M.

    2012-01-01

    A prototype matrix-assisted laser desorption ionization time-of-flight (MALDI-TOF) mass spectrometer has been used for high-speed ion image acquisition. The instrument incorporates a Nd:YLF solid state laser capable of pulse repetition rates up to 5 kHz and continuous laser raster sampling for high-throughput data collection. Lipid ion images of a sagittal rat brain tissue section were collected in 10 min with an effective acquisition rate of roughly 30 pixels/s. These results represent more than a 10-fold increase in throughput compared with current commercially available instrumentation. Experiments aimed at improving conditions for continuous laser raster sampling for imaging are reported, highlighting proper laser repetition rates and stage velocities to avoid signal degradation from significant oversampling. As new high spatial resolution and large sample area applications present themselves, the development of high-speed microprobe MALDI imaging mass spectrometry is essential to meet the needs of those seeking new technologies for rapid molecular imaging. PMID:21953043
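
    The trade-off highlighted here, laser repetition rate versus stage velocity, is simple arithmetic: for a fixed number of laser shots deposited per pixel, the stage velocity follows from the repetition rate and the pixel pitch. A toy calculation (the shots-per-pixel figure is back-computed from the reported throughput, not stated in the record):

        def raster_settings(rep_rate_hz, pixel_um, shots_per_pixel):
            # Pixels acquired per second, and the stage velocity (um/s) that
            # delivers exactly shots_per_pixel shots to each pixel.
            pixel_rate = rep_rate_hz / shots_per_pixel
            velocity = pixel_rate * pixel_um
            return velocity, pixel_rate

        # 5 kHz laser, 100 um pixels, ~165 shots/pixel -> ~30 pixels/s,
        # consistent with the throughput reported above.
        print(raster_settings(5000, 100, 165))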

  11. DDS-Suite - A Dynamic Data Acquisition, Processing, and Analysis System for Wind Tunnel Testing

    NASA Technical Reports Server (NTRS)

    Burnside, Jathan J.

    2012-01-01

    Wind tunnels have optimized their steady-state data systems for acquisition and analysis and have even implemented large dynamic-data acquisition systems; however, the development of near real-time processing and analysis tools for dynamic data has lagged. DDS-Suite is a set of tools used to acquire, process, and analyze large amounts of dynamic data. Each phase of the testing process (acquisition, processing, and analysis) is handled by a separate component so that bottlenecks in one phase do not affect the others, yielding a robust system. DDS-Suite is capable of acquiring 672 channels of dynamic data at a rate of 275 MB/s. More than 300 channels of the system use 24-bit analog-to-digital cards and are capable of producing data with less than 0.01° of phase difference at 1 kHz. System architecture, design philosophy, and examples of use during NASA Constellation and Fundamental Aerodynamic tests are discussed.

  12. Cognitive processes during fear acquisition and extinction in animals and humans

    PubMed Central

    Hofmann, Stefan G.

    2007-01-01

    Anxiety disorders are highly prevalent. Fear conditioning and extinction learning in animals often serve as simple models of fear acquisition and exposure therapy of anxiety disorders in humans. This article reviews the empirical and theoretical literature on cognitive processes in fear acquisition, extinction, and exposure therapy. It is concluded that exposure therapy is a form of cognitive intervention that specifically changes the expectancy of harm. Implications for therapy research are discussed. PMID:17532105

  13. Off-axis quantitative phase imaging processing using CUDA: toward real-time applications

    PubMed Central

    Pham, Hoa; Ding, Huafeng; Sobh, Nahil; Do, Minh; Patel, Sanjay; Popescu, Gabriel

    2011-01-01

    We demonstrate real-time off-axis Quantitative Phase Imaging (QPI) using a phase reconstruction algorithm based on NVIDIA’s CUDA programming model. The phase unwrapping component is based on Goldstein’s algorithm. By mapping the phase extraction and unwrapping steps to the GPU, we are able to speed up the whole procedure by more than 18.8× with respect to CPU processing and ultimately achieve video rate for mega-pixel images. Our CUDA implementation also supports processing of multiple images simultaneously. This enables our imaging system to support high-speed, high-throughput, real-time image acquisition and visualization. PMID:21750757
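
    The pipeline that was ported to CUDA can be prototyped on the CPU in a few lines: isolate the +1 diffraction order in the Fourier domain, demodulate it to baseband, and take the argument of the resulting field. A numpy sketch (the rectangular sideband window is an assumption, and numpy's 1-D unwrap is only a crude stand-in for the Goldstein unwrapper used in the paper):

        import numpy as np

        def offaxis_phase(interferogram, carrier):
            # carrier: (row, col) of the +1 order in the fftshift-ed spectrum;
            # assumed to lie at least rows//8 (cols//8) away from the border.
            F = np.fft.fftshift(np.fft.fft2(interferogram))
            rows, cols = interferogram.shape
            r0, c0 = carrier
            hr, hc = rows // 8, cols // 8
            win = np.zeros_like(F)
            win[r0 - hr:r0 + hr, c0 - hc:c0 + hc] = 1.0
            # Move the windowed sideband to the spectrum center (demodulation).
            sideband = np.roll(F * win, (rows // 2 - r0, cols // 2 - c0), axis=(0, 1))
            wrapped = np.angle(np.fft.ifft2(np.fft.ifftshift(sideband)))
            return np.unwrap(np.unwrap(wrapped, axis=0), axis=1)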

  14. Evaluation of Acquisition Strategies for Image-Based Construction Site Monitoring

    NASA Astrophysics Data System (ADS)

    Tuttas, S.; Braun, A.; Borrmann, A.; Stilla, U.

    2016-06-01

    Construction site monitoring is an essential task for keeping track of the ongoing construction work and providing up-to-date information for a Building Information Model (BIM). The BIM contains the as-planned states (geometry, schedule, costs, ...) of a construction project. For updating, the as-built state has to be acquired repeatedly and compared to the as-planned state. In the approach presented here, a 3D representation of the as-built state is calculated from photogrammetric images using multi-view stereo reconstruction. On construction sites one has to cope with several difficulties like security aspects, limited accessibility, occlusions or construction activity. Different acquisition strategies and techniques, namely (i) terrestrial acquisition with a hand-held camera, (ii) aerial acquisition using an Unmanned Aerial Vehicle (UAV) and (iii) acquisition using a fixed stereo camera pair at the boom of the crane, are tested on three test sites. They are assessed considering the special needs for the monitoring tasks and limitations on construction sites. The three scenarios are evaluated based on their potential for automation, the required effort for acquisition, the necessary equipment and its maintenance, the disturbance of the construction work, and the accuracy and completeness of the resulting point clouds. Based on the experiences during the test cases, the following conclusions can be drawn: terrestrial acquisition has the lowest requirements on the device setup but falls short in automation and coverage. The crane camera shows the lowest flexibility but the highest degree of automation. The UAV approach can provide the best coverage by combining nadir and oblique views, but can be limited by obstacles and security aspects. The accuracy of the point clouds is evaluated based on plane fitting of selected building parts. The RMS errors of the fitted parts range from 1 to a few cm for the UAV and the hand-held scenario. First results show that the crane camera
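
    The accuracy evaluation described above amounts to fitting a least-squares plane through the points of each selected building part and reporting the RMS of the point-to-plane residuals. A compact SVD-based sketch (a hypothetical helper, not the authors' code):

        import numpy as np

        def plane_fit_rms(points):
            # points: (N, 3) array sampled from one planar building part.
            pts = np.asarray(points, dtype=float)
            centroid = pts.mean(axis=0)
            # The right singular vector with the smallest singular value is
            # the normal of the best-fit (total least squares) plane.
            _, _, vt = np.linalg.svd(pts - centroid)
            normal = vt[-1]
            dists = (pts - centroid) @ normal
            return np.sqrt(np.mean(dists ** 2))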

  15. Optimizing the processing and presentation of PPCR imaging

    NASA Astrophysics Data System (ADS)

    Davies, Andrew G.; Cowen, Arnold R.; Parkin, Geoff J. S.; Bury, Robert F.

    1996-03-01

    Photostimulable phosphor computed radiography (CR) is becoming an increasingly popular image acquisition system. The acceptability of this technique, diagnostically, ergonomically, and economically, is highly influenced by the method by which the image data is presented to the user. Traditional CR systems utilize an 11" by 14" film hardcopy format and can place two images per exposure onto this film, which does not correspond to the sizes and presentations provided by conventional techniques. It is also the authors' experience that the image enhancement algorithms provided by traditional CR systems do not provide optimal image presentation. An alternative image enhancement algorithm was developed, along with a number of hardcopy formats, designed to match the requirements of the image reporting process. The new image enhancement algorithm, called dynamic range reduction (DRR), is designed to provide a single presentation per exposure, maintaining the appearance of a conventional radiograph while optimizing the rendition of diagnostically relevant features within the image. The algorithm was developed on a Sun SPARCstation, but later ported to a Philips EasyVisionRAD workstation. Print formats were developed on the EasyVision to improve the acceptability of the CR hardcopy. For example, for mammographic examinations, four mammograms (a cranio-caudal and a medio-lateral view of each breast) are taken for each patient, with all images placed onto a single sheet of 14" by 17" film. The new composite format provides a more suitable image presentation for reporting and is more economical to produce. It is the use of enhanced image processing and presentation which has enabled all mammography undertaken within the general infirmary to be performed using the CR/EasyVisionRAD DRR/3M 969 combination, without recourse to conventional film/screen mammography.

  16. Computers in Public Schools: Changing the Image with Image Processing.

    ERIC Educational Resources Information Center

    Raphael, Jacqueline; Greenberg, Richard

    1995-01-01

    The kinds of educational technologies selected can make the difference between uninspired, rote computer use and challenging learning experiences. University of Arizona's Image Processing for Teaching Project has worked with over 1,000 teachers to develop image-processing techniques that provide students with exciting, open-ended opportunities for…

  17. Image processing applied to laser cladding process

    SciTech Connect

    Meriaudeau, F.; Truchetet, F.

    1996-12-31

    The laser cladding process, which consists of adding molten powder to a substrate in order to improve or change the behavior of the material against corrosion, fatigue, and so on, involves many parameters. To produce good tracks, some of these parameters need to be controlled during the process. The authors present here a low-cost, high-performance system using two CCD matrix cameras. One camera provides surface temperature measurements while the other gives information relative to the powder distribution or geometric characteristics of the tracks. The surface temperature (via the Beer-Lambert law) enables one to detect variations in the mass feed rate; using such a system the authors are able to detect fluctuations of 2 to 3 g/min in the mass flow rate. The other camera gives information related to the powder distribution: a simple algorithm applied to the data acquired from the CCD matrix camera allows very weak fluctuations in both gas flows (carrier or shielding gas) to be seen. During the process, this camera is also used to perform geometric measurements. The height and the width of the track are obtained in real time and enable the operator to infer process parameters such as the processing speed and the mass flow rate. The authors present the results provided by their system in order to enhance the efficiency of the laser cladding process. The conclusion is dedicated to a summary of the presented work and expectations for the future.

  18. Matching rendered and real world images by digital image processing

    NASA Astrophysics Data System (ADS)

    Mitjà, Carles; Bover, Toni; Bigas, Miquel; Escofet, Jaume

    2010-05-01

    Recent advances in computer-generated imagery (CGI) have been adopted in commercial and industrial photography, providing broad scope for product advertising. Mixing real-world images with those rendered by virtual-space software reveals a more or less visible mismatch in image quality. Rendered images are produced by software whose quality is limited only by the output resolution. Real-world images are taken with cameras subject to a number of image degradation factors, such as residual lens aberrations, diffraction, sensor low-pass anti-aliasing filters, color-pattern demosaicing, etc. The effect of all those image quality degradation factors can be characterized by the system Point Spread Function (PSF). Because the image is the convolution of the object by the system PSF, its characterization quantifies the degradation added to any captured picture. This work explores the use of image processing to degrade the rendered images following the parameters indicated by the real system PSF, attempting to match virtual and real-world image quality. The system MTF is determined by the slanted-edge method both in laboratory conditions and in the real picture environment in order to compare the influence of the working conditions on the device performance; an approximation to the system PSF is derived from the two measurements. The rendered images are filtered through a Gaussian filter obtained from the taking system's PSF. Results with and without filtering are shown and compared by measuring the contrast achieved in different regions of the final image.
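
    If the measured system MTF is summarized by its MTF50 value, the matching Gaussian blur follows in closed form, since a Gaussian PSF with standard deviation sigma has MTF(f) = exp(-2*pi^2*sigma^2*f^2). A sketch of the degradation step (the MTF50 parameterization is an assumption; the paper derives its filter from the full slanted-edge measurement):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def sigma_from_mtf50(mtf50_cyc_per_px):
            # Solve exp(-2*pi^2*sigma^2*f50^2) = 0.5 for sigma (in pixels).
            return np.sqrt(np.log(2.0) / (2.0 * np.pi ** 2 * mtf50_cyc_per_px ** 2))

        def degrade_rendered(rendered, mtf50_cyc_per_px):
            # Blur the rendered image with the Gaussian approximation of the
            # real camera's PSF so both image qualities match.
            sigma = sigma_from_mtf50(mtf50_cyc_per_px)
            return gaussian_filter(rendered.astype(float), sigma=sigma)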

  19. Image Processing in Intravascular OCT

    NASA Astrophysics Data System (ADS)

    Wang, Zhao; Wilson, David L.; Bezerra, Hiram G.; Rollins, Andrew M.

    Coronary artery disease is the leading cause of death in the world. Intravascular optical coherence tomography (IVOCT) is rapidly becoming a promising imaging modality for characterization of atherosclerotic plaques and evaluation of coronary stenting. OCT has several unique advantages over alternative technologies, such as intravascular ultrasound (IVUS), due to its better resolution and contrast. For example, OCT is currently the only imaging modality that can measure the thickness of the fibrous cap of an atherosclerotic plaque in vivo. OCT also has the ability to accurately assess the coverage of individual stent struts by neointimal tissue over time. However, it is extremely time-consuming to analyze IVOCT images manually to derive quantitative diagnostic metrics. In this chapter, we introduce some computer-aided methods to automate the common IVOCT image analysis tasks.

  20. A high speed PC-based data acquisition and control system for positron imaging

    NASA Astrophysics Data System (ADS)

    Leadbeater, T. W.; Parker, D. J.

    2009-06-01

    A modular positron camera with a flexible geometry suitable for performing Positron Emission Particle Tracking (PEPT) studies on a wide range of applications has been constructed. The demand for high-speed list-mode data storage in these experiments motivated the development of an improved data acquisition system to support the existing detectors. A high-speed PC-based data acquisition system is presented. This compact, flexible device replaces the old dedicated hardware while providing the same functionality with superior performance. Data acquisition rates of up to 80 MBytes per second allow coincidence data to be saved to disk for real-time analysis or post-processing. The system stores time information with half-millisecond resolution and supports remote trigger data. Control of the detector system is provided by high-level software running on the same computer.

  1. Optical signal acquisition and processing in future accelerator diagnostics

    SciTech Connect

    Jackson, G.P.; Elliott, A.

    1992-01-01

    Beam detectors such as striplines and wall current monitors rely on matched electrical networks to transmit and process beam information. Frequency bandwidth, noise immunity, reflections, and signal to noise ratio are considerations that require compromises limiting the quality of the measurement. Recent advances in fiber optics related technologies have made it possible to acquire and process beam signals in the optical domain. This paper describes recent developments in the application of these technologies to accelerator beam diagnostics. The design and construction of an optical notch filter used for a stochastic cooling system is used as an example. Conceptual ideas for future beam detectors are also presented.

  2. Optical signal acquisition and processing in future accelerator diagnostics

    SciTech Connect

    Jackson, G.P.; Elliott, A.

    1992-12-31

    Beam detectors such as striplines and wall current monitors rely on matched electrical networks to transmit and process beam information. Frequency bandwidth, noise immunity, reflections, and signal to noise ratio are considerations that require compromises limiting the quality of the measurement. Recent advances in fiber optics related technologies have made it possible to acquire and process beam signals in the optical domain. This paper describes recent developments in the application of these technologies to accelerator beam diagnostics. The design and construction of an optical notch filter used for a stochastic cooling system is used as an example. Conceptual ideas for future beam detectors are also presented.

  3. Information Processing, Knowledge Acquisition and Learning: Developmental Perspectives.

    ERIC Educational Resources Information Center

    Hoyer, W. J.

    1980-01-01

    Several different conceptions of the relationship between learning and development are considered in this article. It is argued that dialectical and ecological developmental orientations might provide a useful basis for synthesizing the contrasting frameworks of the operant, information processing, learning theory, and knowledge acquisition…

  4. Semantic Context and Graphic Processing in the Acquisition of Reading.

    ERIC Educational Resources Information Center

    Thompson, G. B.

    1981-01-01

    Two experiments provided tests of predictions about children's use of semantic contextual information in reading, under conditions of minimal experience with graphic processes. Subjects, aged 6 1/2, 8, and 11, orally read passages of continuous text with normal and with low semantic constraints under various graphic conditions, including cursive…

  5. Executive and Phonological Processes in Second-Language Acquisition

    ERIC Educational Resources Information Center

    Engel de Abreu, Pascale M. J.; Gathercole, Susan E.

    2012-01-01

    This article reports a latent variable study exploring the specific links among executive processes of working memory, phonological short-term memory, phonological awareness, and proficiency in first (L1), second (L2), and third (L3) languages in 8- to 9-year-olds experiencing multilingual education. Children completed multiple L1-measures of…

  6. An Overview of the Mars Science Laboratory Sample Acquisition, Sample Processing and Handling System

    NASA Astrophysics Data System (ADS)

    Beegle, L. W.; Anderson, R. C.; Hurowitz, J. A.; Jandura, L.; Limonadi, D.

    2012-12-01

    The Mars Science Laboratory (MSL) mission landed on Mars on August 5, 2012. The rover and its scientific payload are designed to identify and assess the habitability and the geological and environmental histories of Gale crater. Unraveling the geologic history of the region and providing an assessment of present and past habitability requires an evaluation of the physical and chemical characteristics of the landing site; this includes an in-depth examination of the chemical and physical properties of Martian regolith and rocks. The MSL Sample Acquisition, Processing, and Handling (SA/SPaH) subsystem is the first in-situ system designed to acquire interior rock and soil samples from Martian surface materials. These samples are processed and separated into fine particles and distributed to the two onboard analytical science instruments, SAM (Sample Analysis at Mars Instrument Suite) and CheMin (Chemistry and Mineralogy), or to a sample analysis tray for visual inspection. The SA/SPaH subsystem is also responsible for the placement of the two contact instruments, the Alpha Particle X-Ray Spectrometer (APXS) and the Mars Hand Lens Imager (MAHLI), on rock and soil targets. Finally, there is a Dust Removal Tool (DRT) to remove dust particles from rock surfaces for subsequent analysis by the contact and/or mast-mounted instruments (e.g., the Mast Cameras (MastCam) and the Chemistry and Camera instrument (ChemCam)). It is expected that the SA/SPaH system will have produced a scooped sample and possibly a drilled sample in the first 90 sols of the mission. Results from these activities and the ongoing testing program will be presented.

  7. Axially elongated field-free point data acquisition in magnetic particle imaging.

    PubMed

    Kaethner, Christian; Ahlborg, Mandy; Bringout, Gael; Weber, Matthias; Buzug, Thorsten M

    2015-02-01

    Magnetic particle imaging (MPI) is a new imaging technique offering an excellent possibility to detect iron oxide based nanoparticle accumulations in vivo. The excitation of the particles, and in turn the signal generation in MPI, is achieved using oscillating magnetic fields. In order to realize spatial encoding, a field-free point (FFP) is steered through the field of view (FOV). Such positioning of the FFP can be achieved by mechanical or electromagnetic movement. Conventionally, the data acquisition path is either a planar 2-D or a 3-D FFP trajectory. For human applications, the size of the FOV sampled by such trajectories is strongly limited by heating of the body and by nerve stimulation. In this work, a new approach to acquiring MPI data, based on the axial elongation of a 2-D FFP trajectory, is proposed. It is shown that such an elongation can be used as a data acquisition path to significantly increase the acquisition speed, with negligible loss of spatial resolution.
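
    The proposed acquisition path can be pictured as a dense 2-D FFP trajectory swept slowly along the third axis. A toy generator (the record does not specify the 2-D trajectory type; the Lissajous pattern and all drive parameters here are illustrative only):

        import numpy as np

        def elongated_ffp_trajectory(fx, fy, ax, ay, vz, t):
            # 2-D Lissajous FFP path (drive frequencies fx, fy in Hz,
            # amplitudes ax, ay) elongated along z by a slow linear motion
            # with velocity vz; t is an array of sample times.
            x = ax * np.sin(2.0 * np.pi * fx * t)
            y = ay * np.sin(2.0 * np.pi * fy * t)
            z = vz * t
            return np.stack([x, y, z], axis=1)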

  8. Possible Overlapping Time Frames of Acquisition and Consolidation Phases in Object Memory Processes: A Pharmacological Approach

    ERIC Educational Resources Information Center

    Akkerman, Sven; Blokland, Arjan; Prickaerts, Jos

    2016-01-01

    In previous studies, we have shown that acetylcholinesterase inhibitors and phosphodiesterase inhibitors (PDE-Is) are able to improve object memory by enhancing acquisition processes. On the other hand, only PDE-Is improve consolidation processes. Here we show that the cholinesterase inhibitor donepezil also improves memory performance when…

  9. Data acquisition and online processing requirements for experimentation at the Superconducting Super Collider

    SciTech Connect

    Lankford, A.J.; Barsotti, E.; Gaines, I.

    1989-07-01

    Differences in scale between data acquisition and online processing requirements for detectors at the Superconducting Super Collider and systems for existing large detectors will require new architectures and technological advances in these systems. Emerging technologies will be employed for data transfer, processing, and recording. 9 refs., 3 figs.

  10. Programmable Iterative Optical Image And Data Processing

    NASA Technical Reports Server (NTRS)

    Jackson, Deborah J.

    1995-01-01

    Proposed method of iterative optical image and data processing overcomes limitations imposed by loss of optical power after repeated passes through many optical elements - especially, beam splitters. Involves selective, timed combination of optical wavefront phase conjugation and amplification to regenerate images in real time to compensate for losses in optical iteration loops; timing such that amplification turned on to regenerate desired image, then turned off so as not to regenerate other, undesired images or spurious light propagating through loops from unwanted reflections.

  11. A dual process account of coarticulation in motor skill acquisition.

    PubMed

    Shah, Ashvin; Barto, Andrew G; Fagg, Andrew H

    2013-01-01

    Many tasks, such as typing a password, are decomposed into a sequence of subtasks that can be accomplished in many ways. Behavior that accomplishes subtasks in ways that are influenced by the overall task is often described as "skilled" and exhibits coarticulation. Many accounts of coarticulation use search methods that are informed by representations of the objectives that define skilled behavior. While they aid in describing the strategies the nervous system may follow, they are computationally complex and may be difficult to attribute to brain structures. Here, the authors present a biologically-inspired account whereby skilled behavior is developed through 2 simple processes: (a) a corrective process that ensures that each subtask is accomplished, but does not do so skillfully, and (b) a reinforcement learning process that finds better movements using trial-and-error search that is not informed by representations of any objectives. We implement our account as a computational model controlling a simulated two-armed kinematic "robot" that must hit a sequence of goals with its hands. Behavior displays coarticulation in terms of which hand was chosen, how the corresponding arm was used, and how the other arm was used, suggesting that the account can participate in the development of skilled behavior. PMID:24116847

  12. Phases of learning: How skill acquisition impacts cognitive processing.

    PubMed

    Tenison, Caitlin; Fincham, Jon M; Anderson, John R

    2016-06-01

    This fMRI study examines the changes in participants' information processing as they repeatedly solve the same mathematical problem. We show that the majority of practice-related speedup is produced by discrete changes in cognitive processing. Because the points at which these changes take place vary from problem to problem, and the underlying information processing steps vary in duration, the existence of such discrete changes can be hard to detect. Using two converging approaches, we establish the existence of three learning phases. When solving a problem in one of these learning phases, participants can go through three cognitive stages: Encoding, Solving, and Responding. Each cognitive stage is associated with a unique brain signature. Using a bottom-up approach combining multi-voxel pattern analysis and hidden semi-Markov modeling, we identify the duration of that stage on any particular trial from participants' brain activation patterns. For our top-down approach we developed an ACT-R model of these cognitive stages and simulated how they change over the course of learning. The Solving stage of the first learning phase is long and involves a sequence of arithmetic computations. Participants transition to the second learning phase when they can retrieve the answer, thereby drastically reducing the duration of the Solving stage. With continued practice, participants then transition to the third learning phase when they recognize the problem as a single unit and produce the answer as an automatic response. The duration of this third learning phase is dominated by the Responding stage.

  13. Non-linear Post Processing Image Enhancement

    NASA Technical Reports Server (NTRS)

    Hunt, Shawn; Lopez, Alex; Torres, Angel

    1997-01-01

    A non-linear filter for image post-processing based on the feedforward Neural Network topology is presented. This study was undertaken to investigate the usefulness of "smart" filters in image post-processing. The filter has been shown to be useful in recovering high frequencies, such as those lost during the JPEG compression-decompression process. The filtered images have a higher signal-to-noise ratio and a higher perceived image quality. Simulation studies comparing the proposed filter with the optimum mean-square non-linear filter, examples of the high-frequency recovery, and the statistical properties of the filter are given.

  14. A review of breast tomosynthesis. Part II. Image reconstruction, processing and analysis, and advanced applications.

    PubMed

    Sechopoulos, Ioannis

    2013-01-01

    Many important post-acquisition aspects of breast tomosynthesis imaging can impact its clinical performance. Chief among them is the reconstruction algorithm that generates the representation of the three-dimensional breast volume from the acquired projections. But even after reconstruction, additional processes, such as artifact reduction algorithms, computer aided detection and diagnosis, among others, can also impact the performance of breast tomosynthesis in the clinical realm. In this two part paper, a review of breast tomosynthesis research is performed, with an emphasis on its medical physics aspects. In the companion paper, the first part of this review, the research performed relevant to the image acquisition process is examined. This second part will review the research on the post-acquisition aspects, including reconstruction, image processing, and analysis, as well as the advanced applications being investigated for breast tomosynthesis. PMID:23298127

  15. A review of breast tomosynthesis. Part II. Image reconstruction, processing and analysis, and advanced applications

    PubMed Central

    Sechopoulos, Ioannis

    2013-01-01

    Many important post-acquisition aspects of breast tomosynthesis imaging can impact its clinical performance. Chief among them is the reconstruction algorithm that generates the representation of the three-dimensional breast volume from the acquired projections. But even after reconstruction, additional processes, such as artifact reduction algorithms, computer aided detection and diagnosis, among others, can also impact the performance of breast tomosynthesis in the clinical realm. In this two part paper, a review of breast tomosynthesis research is performed, with an emphasis on its medical physics aspects. In the companion paper, the first part of this review, the research performed relevant to the image acquisition process is examined. This second part will review the research on the post-acquisition aspects, including reconstruction, image processing, and analysis, as well as the advanced applications being investigated for breast tomosynthesis. PMID:23298127

  16. The impact of cine EPID image acquisition frame rate on markerless soft-tissue tracking

    SciTech Connect

    Yip, Stephen Rottmann, Joerg; Berbeco, Ross

    2014-06-15

    Purpose: Although reduction of the cine electronic portal imaging device (EPID) acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion, leading to poor autotracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz with an amorphous silicon portal imager (AS1000, Varian Medical Systems, Palo Alto, CA). The maximum frame rate of 12.87 Hz is imposed by the EPID. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for autotracking. The difference between the programmed and autotracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at 11 field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging. δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the autotracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible, with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient studies (R = 0.72). Moderate to poor correlation was found between image noise and tracking error, with R = −0.58 and −0.19 for the two studies, respectively. Conclusions: Cine EPID
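
    The frame-averaging scheme used to emulate lower acquisition rates is easy to reproduce offline; a numpy sketch (hypothetical helper, not the study's code):

        import numpy as np

        def average_frames(frames, k):
            # Average k consecutive frames; the effective rate drops from
            # 12.87 Hz to 12.87/k Hz (k = 3 gives 4.29 Hz, the threshold
            # below which autotracking errors increased). Trailing frames
            # that do not fill a complete group are dropped.
            frames = np.asarray(frames, dtype=float)
            n = (len(frames) // k) * k
            return frames[:n].reshape(-1, k, *frames.shape[1:]).mean(axis=1)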

  17. Quantitative image processing in fluid mechanics

    NASA Technical Reports Server (NTRS)

    Hesselink, Lambertus; Helman, James; Ning, Paul

    1992-01-01

    The current status of digital image processing in fluid flow research is reviewed. In particular, attention is given to a comprehensive approach to the extraction of quantitative data from multivariate databases and examples of recent developments. The discussion covers numerical simulations and experiments, data processing, generation and dissemination of knowledge, traditional image processing, hybrid processing, fluid flow vector field topology, and isosurface analysis using Marching Cubes.

  18. Lossy cardiac x-ray image compression based on acquisition noise

    NASA Astrophysics Data System (ADS)

    de Bruijn, Frederik J.; Slump, Cornelis H.

    1997-05-01

    In lossy medical image compression, the requirements for the preservation of diagnostic integrity cannot be easily formulated in terms of a perceptual model, especially since, in reality, human visual perception is dependent on numerous factors such as the viewing conditions and psycho-visual factors. Therefore, we investigate the possibility of developing alternative measures for data loss, based on the characteristics of the acquisition system, in our case a digital cardiac imaging system. In general, due to the low exposure, cardiac x-ray images tend to be relatively noisy. The main noise contributions are quantum noise and electrical noise. The electrical noise is not correlated with the signal. In addition, the signal can be transformed such that the correlated Poisson-distributed quantum noise is transformed into an additional zero-mean Gaussian noise source which is uncorrelated with the signal. Furthermore, the system's modulation transfer function imposes a known spatial-frequency limitation on the output signal. Under the assumption that noise which is not correlated with the signal contains no diagnostic information, we have derived a compression measure based on the acquisition parameters of a digital cardiac imaging system. The measure is used for bit-assignment and quantization of transform coefficients. We present a blockwise-DCT compression algorithm which is based on the conventional JPEG standard. However, the bit-assignment to the transform coefficients is now determined by an assumed noise variance for each coefficient, for a given set of acquisition parameters. Experiments with the algorithm indicate that a bit rate of 0.6 bit/pixel is feasible, without apparent loss of clinical information.
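
    The signal transformation alluded to, mapping signal-dependent Poisson quantum noise to approximately signal-independent Gaussian noise, is commonly realized with the Anscombe transform; whether the authors used exactly this variant is not stated, so the sketch below is indicative only:

        import numpy as np

        def anscombe(x):
            # Variance-stabilizing transform: Poisson counts x are mapped so
            # the noise becomes approximately unit-variance Gaussian,
            # independent of the signal level.
            return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

        def inverse_anscombe(y):
            # Simple algebraic inverse (biased at low counts).
            return (np.asarray(y, dtype=float) / 2.0) ** 2 - 3.0 / 8.0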

  19. Water surface capturing by image processing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    An alternative means of measuring the water surface interface during laboratory experiments is processing a series of sequentially captured images. Image processing can provide a continuous, non-intrusive record of the water surface profile whose accuracy is not dependent on water depth. More trad...

  20. Task-driven image acquisition and reconstruction in cone-beam CT.

    PubMed

    Gang, Grace J; Stayman, J Webster; Ehtiati, Tina; Siewerdsen, Jeffrey H

    2015-04-21

    This work introduces a task-driven imaging framework that incorporates a mathematical definition of the imaging task, a model of the imaging system, and a patient-specific anatomical model to prospectively design image acquisition and reconstruction techniques to optimize task performance. The framework is applied to joint optimization of tube current modulation, view-dependent reconstruction kernel, and orbital tilt in cone-beam CT. The system model considers a cone-beam CT system incorporating a flat-panel detector and 3D filtered backprojection and accurately describes the spatially varying noise and resolution over a wide range of imaging parameters in the presence of a realistic anatomical model. Task-based detectability index (d') is incorporated as the objective function in a task-driven optimization of image acquisition and reconstruction techniques. The orbital tilt was optimized through an exhaustive search across tilt angles ranging over ±30°. For each tilt angle, the view-dependent tube current and reconstruction kernel (i.e. the modulation profiles) that maximized detectability were identified via an alternating optimization. The task-driven approach was compared with conventional unmodulated and automatic exposure control (AEC) strategies for a variety of imaging tasks and anthropomorphic phantoms. The task-driven strategy outperformed the unmodulated and AEC cases for all tasks. For example, d' for a sphere detection task in a head phantom was improved by 30% compared to the unmodulated case by using smoother kernels for noisy views and distributing mAs across less noisy views (at fixed total mAs) in a manner that was beneficial to task performance. Similarly for detection of a line-pair pattern, the task-driven approach increased d' by 80% compared to no modulation by means of view-dependent mA and kernel selection that yields modulation transfer function and noise-power spectrum optimal to the task. Optimization of orbital tilt identified the tilt

  1. An Independent Workstation For CT Image Processing And Analysis

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Sewchand, Wilfred

    1988-06-01

    This manuscript describes an independent workstation which consists of a data acquisition and transfer system, a host computer, and a display and record system. The main tasks of the workstation include the collecting and managing of a vast amount of data, creating and processing 2-D and 3-D images, conducting quantitative data analysis, and recording and exchanging information. This workstation not only meets the requirements for routine clinical applications, but it is also used extensively for research purposes. It is stand-alone and works as a physician's workstation; moreover, it can be easily linked into a computer-network and serve as a component of PACS (Picture Archiving and Communication System).

  2. Dynamic whole-body PET parametric imaging: I. Concept, acquisition protocol optimization and clinical application

    NASA Astrophysics Data System (ADS)

    Karakatsanis, Nicolas A.; Lodge, Martin A.; Tahari, Abdel K.; Zhou, Y.; Wahl, Richard L.; Rahmim, Arman

    2013-10-01

    Static whole-body PET/CT, employing the standardized uptake value (SUV), is considered the standard clinical approach to diagnosis and treatment response monitoring for a wide range of oncologic malignancies. Alternative PET protocols involving dynamic acquisition of temporal images have been implemented in the research setting, allowing quantification of tracer dynamics, an important capability for tumor characterization and treatment response monitoring. Nonetheless, dynamic protocols have been confined to single-bed-coverage limiting the axial field-of-view to ~15-20 cm, and have not been translated to the routine clinical context of whole-body PET imaging for the inspection of disseminated disease. Here, we pursue a transition to dynamic whole-body PET parametric imaging, by presenting, within a unified framework, clinically feasible multi-bed dynamic PET acquisition protocols and parametric imaging methods. We investigate solutions to address the challenges of: (i) long acquisitions, (ii) small number of dynamic frames per bed, and (iii) non-invasive quantification of kinetics in the plasma. In the present study, a novel dynamic (4D) whole-body PET acquisition protocol of ~45 min total length is presented, composed of (i) an initial 6 min dynamic PET scan (24 frames) over the heart, followed by (ii) a sequence of multi-pass multi-bed PET scans (six passes × seven bed positions, each scanned for 45 s). Standard Patlak linear graphical analysis modeling was employed, coupled with image-derived plasma input function measurements. Ordinary least squares Patlak estimation was used as the baseline regression method to quantify the physiological parameters of tracer uptake rate Ki and total blood distribution volume V on an individual voxel basis. Extensive Monte Carlo simulation studies, using a wide set of published kinetic FDG parameters and GATE and XCAT platforms, were conducted to optimize the acquisition protocol from a range of ten different clinically
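
    The baseline estimator named here, ordinary least squares Patlak regression, fits the transformed tissue curve y(t) = C_T(t)/C_P(t) against the "Patlak time" x(t) = (integral of C_P from 0 to t)/C_P(t), with slope Ki and intercept V. A per-voxel numpy sketch (variable names hypothetical; assumes C_P > 0 over the fitted frames):

        import numpy as np

        def patlak_ols(ct, cp, t):
            # ct: tissue time-activity curve, cp: plasma input function
            # (image-derived in this protocol), t: frame mid-times.
            ct, cp, t = (np.asarray(a, dtype=float) for a in (ct, cp, t))
            # Cumulative integral of the input function (trapezoid rule).
            icp = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
            x = icp / cp                     # "Patlak time"
            y = ct / cp
            A = np.vstack([x, np.ones_like(x)]).T
            (ki, v), *_ = np.linalg.lstsq(A, y, rcond=None)
            return ki, v                     # uptake rate Ki, distribution volume V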

  3. Is Children's Acquisition of the Passive a Staged Process? Evidence from Six- and Nine-Year-Olds' Production of Passives

    ERIC Educational Resources Information Center

    Messenger, Katherine; Branigan, Holly P.; McLean, Janet F.

    2012-01-01

    We report a syntactic priming experiment that examined whether children's acquisition of the passive is a staged process, with acquisition of constituent structure preceding acquisition of thematic role mappings. Six-year-olds and nine-year-olds described transitive actions after hearing active and passive prime descriptions involving the same or…

  4. Advanced camera image data acquisition system for Pi-of-the-Sky

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, Maciej; Kasprowicz, Grzegorz; Pozniak, Krzysztof; Romaniuk, Ryszard; Wrochna, Grzegorz

    2008-11-01

    The paper describes a new generation of high-performance, remotely controlled CCD cameras designed for astronomical applications. A completely new camera PCB was designed, manufactured, tested, and commissioned. The CCD chip was positioned differently than in the previous design, resulting in better performance of the astronomical video data acquisition system. The camera was built around a low-noise, 4-Mpixel CCD circuit by STA. The electronics of the camera are highly parameterized, reconfigurable, and modular in comparison with the first-generation solution, owing to the use of open software solutions and an FPGA circuit, an Altera Cyclone EP1C6. New algorithms were implemented in the FPGA chip. The camera system uses the following components: a CY7C68013a microcontroller (8051 core) by Cypress, an AD9826 image processor by Analog Devices, an RTL8169s Gigabit Ethernet interface by Realtek, AT45DB642 memory by Atmel, and an ARM926EJ-S-based AT91SAM9260 microprocessor by ARM/Atmel. Software for the camera, its remote control, and image data acquisition is based entirely on open-source platforms, using the ISI image interface and V4L2 API, the AMBA/AHB data bus, and the INDI protocol. The camera will be replicated in 20 units and is designed for continuous on-line, wide-angle observations of the sky in the Pi-of-the-Sky research program.

  5. Design and construction of the front-end electronics data acquisition for the SLD CRID (Cherenkov Ring Imaging Detector)

    SciTech Connect

    Hoeflich, J.; McShurley, D.; Marshall, D.; Oxoby, G.; Shapiro, S.; Stiles, P.; Spencer, E.

    1990-10-01

    We describe the front-end electronics for the Cherenkov Ring Imaging Detector (CRID) of the SLD at the Stanford Linear Accelerator Center. The design philosophy and implementation are discussed with emphasis on the low-noise hybrid amplifiers, signal processing and data acquisition electronics. The system receives signals from a highly efficient single-photo electron detector. These signals are shaped and amplified before being stored in an analog memory and processed by a digitizing system. The data from several ADCs are multiplexed and transmitted via fiber optics to the SLD FASTBUS system. We highlight the technologies used, as well as the space, power dissipation, and environmental constraints imposed on the system. 16 refs., 10 figs.

  6. Image processing for drawing recognition

    NASA Astrophysics Data System (ADS)

    Feyzkhanov, Rustem; Zhelavskaya, Irina

    2014-03-01

    The task of recognizing the edges of rectangular structures is well known. Still, almost all existing methods work with static images and place no limit on processing time. We propose applying homography estimation to a video stream, such as one obtained from a webcam, and present an algorithm that can be successfully used for this kind of application. One of the main use cases of such an application is the recognition of a drawing made by a person on a piece of paper in front of the webcam.

  7. CT Image Processing Using Public Digital Networks

    PubMed Central

    Rhodes, Michael L.; Azzawi, Yu-Ming; Quinn, John F.; Glenn, William V.; Rothman, Stephen L.G.

    1984-01-01

    Nationwide commercial computer communication is now commonplace for those applications where digital dialogues are generally short and widely distributed, and where bandwidth does not exceed that of dial-up telephone lines. Image processing using such networks is prohibitive because of the large volume of data inherent to digital pictures. With a blend of increasing bandwidth and distributed processing, network image processing becomes possible. This paper examines characteristics of a digital image processing service for a nationwide network of CT scanner installations. Issues of image transmission, data compression, distributed processing, software maintenance, and interfacility communication are also discussed. Included are results that show the volume and type of processing experienced by a network of over 50 CT scanners for the last 32 months.

  8. The acquisition of integrated science process skills in a web-based learning environment

    NASA Astrophysics Data System (ADS)

    Saat, Rohaida Mohd.

    2004-01-01

    Web-based learning is becoming prevalent in science learning. Some use specially designed programs, while others use materials available on the Internet. This qualitative case study examined the process of acquisition of integrated science process skills, particularly the skill of controlling variables, in a web-based learning environment among grade 5 children. Data were gathered primarily from children's conversations and teacher-student conversations. Analysis of the data revealed that the children acquired the skill in three phases: from the phase of recognition to the phase of familiarization and finally to the phase of automation. Nevertheless, acquisition of the skill involved only certain subskills of controlling variables. This progression could be influenced by the web-based instructional material that provided declarative knowledge, concrete visualization, and opportunities for practice.

  9. A Future Vision of a Data Acquisition: Distributed Sensing, Processing, and Health Monitoring

    NASA Technical Reports Server (NTRS)

    Figueroa, Fernando; Solano, Wanda; Thurman, Charles; Schmalzel, John

    2000-01-01

    This paper presents a vision of a highly enhanced data acquisition and health monitoring system at the NASA Stennis Space Center (SSC) rocket engine test facility. This vision includes the use of advanced processing capabilities in conjunction with highly autonomous distributed sensing and intelligence to monitor and evaluate the health of data in the context of its associated process. This method is expected to significantly reduce data acquisition costs and improve system reliability. A Universal Signal Conditioning Amplifier (USCA) based system, under development at Kennedy Space Center, is being evaluated for adaptation to the SSC testing infrastructure. Kennedy's USCA architecture offers many advantages, including flexible and auto-configuring data acquisition with improved calibration and verifiability. Possible enhancements at SSC may include multiplexing the distributed USCAs to reduce per-channel cost, and the use of IEEE-485 to Allen-Bradley ControlNet gateways for interfacing with the resident control systems.

  10. Investigations on the efficiency of cardiac-gated methods for the acquisition of diffusion-weighted images

    NASA Astrophysics Data System (ADS)

    Nunes, Rita G.; Jezzard, Peter; Clare, Stuart

    2005-11-01

    Diffusion-weighted images are inherently very sensitive to motion. Pulsatile motion of the brain can give rise to artifactual signal attenuation leading to over-estimation of the apparent diffusion coefficients, even with snapshot echo planar imaging. Such miscalculations can result in erroneous estimates of the principal diffusion directions. Cardiac gating can be performed to confine acquisition to the quiet portion of the cycle. Although effective, this approach leads to significantly longer acquisition times. On the other hand, it has been demonstrated that pulsatile motion is not significant in regions above the corpus callosum. To reduce acquisition times and improve the efficiency of whole brain cardiac-gated acquisitions, the upper slices of the brain can be imaged during systole, reserving diastole for those slices most affected by pulsatile motion. The merits and disadvantages of this optimized approach are investigated here, in comparison to a more standard gating method and to the non-gated approach.

  11. Variability of textural features in FDG PET images due to different acquisition modes and reconstruction parameters

    PubMed Central

    GALAVIS, PAULINA E.; HOLLENSEN, CHRISTIAN; JALLOW, NGONEH; PALIWAL, BHUDATT; JERAJ, ROBERT

    2014-01-01

    Background: Characterization of textural features (spatial distributions of image intensity levels) has been considered as a tool for automatic tumor segmentation. The purpose of this work is to study the variability of the textural features in PET images due to different acquisition modes and reconstruction parameters. Material and methods: Twenty patients with solid tumors underwent PET/CT scans on a GE Discovery VCT scanner, 45–60 minutes post-injection of 10 mCi of [18F]FDG. Scans were acquired in both 2D and 3D modes. For each acquisition, the raw PET data were reconstructed using five different reconstruction parameters. Lesions were segmented on a default image using a threshold of 40% of maximum SUV. Fifty different texture features were calculated inside the tumors. The ranges of variation of the features were calculated with respect to the average value. Results: The fifty textural features were classified into three categories based on their range of variation: small, intermediate, and large variability. Features with small variability (range ≤ 5%) were entropy-first order, energy, maximal correlation coefficient (second-order feature), and low gray-level run emphasis (high-order feature). The features with intermediate variability (10% ≤ range ≤ 25%) were entropy-GLCM, sum entropy, high gray-level run emphasis, gray-level non-uniformity, small number emphasis, and entropy-NGL. The forty remaining features presented large variations (range > 30%). Conclusion: Textural features such as entropy-first order, energy, maximal correlation coefficient, and low gray-level run emphasis exhibited small variations due to different acquisition modes and reconstruction parameters. Features with low levels of variation are better candidates for reproducible tumor segmentation. Even though features such as contrast-NGTD, coarseness, homogeneity, and busyness have been previously used, our data indicated that these features presented large variations, therefore they could not be
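
    Two of the most stable features reported, first-order entropy and energy, are simple histogram statistics of the voxels inside the segmented lesion. A numpy sketch (the bin count is an assumption, not from the study):

        import numpy as np

        def first_order_entropy_energy(lesion_voxels, bins=64):
            # Histogram-based first-order texture features of one lesion.
            hist, _ = np.histogram(np.ravel(lesion_voxels), bins=bins)
            p = hist / hist.sum()
            p = p[p > 0]
            entropy = -np.sum(p * np.log2(p))
            energy = np.sum(p ** 2)
            return entropy, energy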

  12. A user report on the trial use of gesture commands for image manipulation and X-ray acquisition.

    PubMed

    Li, Ellis Chun Fai; Lai, Christopher Wai Keung

    2016-07-01

    A touchless environment for image manipulation and X-ray acquisition may enhance current infection control measures during X-ray examinations simply by avoiding any touch on the control panel. The present study was aimed at designing and performing a trial experiment on using motion-sensing technology to perform image manipulation and X-ray acquisition functions (activities a radiographer frequently performs during an X-ray examination) under an experimental setup. Based on the author's clinical experience, several gesture commands were carefully designed to complete a single X-ray examination. Four radiographers were randomly recruited for the study. They were asked to perform gesture commands in front of a computer integrated with a gesture-based touchless controller. The translational movements of the tips of their thumbs and index fingers while performing different gesture commands were recorded for analysis. Although individual operators were free to decide the extent of movement and the speed at which their fingers and thumbs moved while performing these gesture commands, the results of our study demonstrated that all operators could perform our proposed gesture commands with good consistency, suggesting that motion-sensing technology could, in practice, be integrated into radiographic examinations. To summarize, although the implementation of motion-sensing technology as an input command in radiographic examination might inevitably slow down the examination throughput, considering that extra procedural steps are required to trigger specific gesture commands in sequence, it is advantageous in minimizing the potential for pathogen contamination, and hence cross-infection, during image operation and image processing. PMID:27230385

  13. Wide-field flexible endoscope for simultaneous color and NIR fluorescence image acquisition during surveillance colonoscopy

    NASA Astrophysics Data System (ADS)

    García-Allende, P. Beatriz; Nagengast, Wouter B.; Glatz, Jürgen; Ntziachristos, Vasilis

    2013-03-01

    Colorectal cancer (CRC) is the third most common form of cancer and, despite recent declines in both incidence and mortality, it still remains the second leading cause of cancer-related deaths in the western world. Colonoscopy is the standard for detection and removal of premalignant lesions to prevent CRC. The major challenges that physicians face during surveillance colonoscopy are the high adenoma miss-rates and the lack of functional information to facilitate decision-making concerning which lesions to remove. Targeted imaging with NIR fluorescence would address these limitations. Tissue penetration is increased in the NIR range while the combination with targeted NIR fluorescent agents provides molecularly specific detection of cancer cells, i.e. a red-flag detection strategy that allows tumor imaging with optimal sensitivity and specificity. The development of a flexible endoscopic fluorescence imaging method that can be integrated with standard medical endoscopes and facilitates the clinical use of this potential is described in this work. A semi-disposable coherent fiber optic imaging bundle that is traditionally employed in the exploration of biliary and pancreatic ducts is proposed, since it is long and thin enough to be guided through the working channel of a traditional video colonoscope allowing visualization of proximal lesions in the colon. A custom developed zoom system magnifies the image of the proximal end of the imaging bundle to fill the dimensions of two cameras operating in parallel providing the simultaneous color and fluorescence video acquisition.

  14. Four-channel surface coil array for sequential CW-EPR image acquisition.

    PubMed

    Enomoto, Ayano; Emoto, Miho; Fujii, Hirotada; Hirata, Hiroshi

    2013-09-01

    This article describes a four-channel surface coil array to increase the area of visualization for continuous-wave electron paramagnetic resonance (CW-EPR) imaging. A 776-MHz surface coil array was constructed with four independent surface coil resonators and three kinds of switches. Control circuits for switching the resonators were also built to sequentially perform EPR image acquisition for each resonator. The resonance frequencies of the resonators were shifted using PIN diode switches to decouple the inductively coupled coils. To investigate the area of visualization with the surface coil array, three-dimensional EPR imaging was performed using a glass cell phantom filled with a solution of nitroxyl radicals. The area of visualization obtained with the surface coil array was increased approximately 3.5-fold in comparison to that with a single surface coil resonator. Furthermore, to demonstrate the applicability of this surface coil array to animal imaging, three-dimensional EPR imaging was performed in a living mouse with an exogenously injected nitroxyl radical imaging agent. PMID:23832070

  15. Parallel digital signal processing architectures for image processing

    NASA Astrophysics Data System (ADS)

    Kshirsagar, Shirish P.; Hartley, David A.; Harvey, David M.; Hobson, Clifford A.

    1994-10-01

    This paper describes research into a high-speed image processing system that uses parallel digital signal processors for the processing of electro-optic images. The objective of the system is to reduce the processing time of non-contact inspection problems, including industrial and medical applications. A single processor cannot deliver the processing power these applications require; hence, a MIMD system was designed and constructed to enable fast processing of electro-optic images. The Texas Instruments TMS320C40 digital signal processor is used due to its high-speed floating-point CPU and its support for parallel processing. A custom-designed VISION bus transfers images between processors. The system is being applied to solder joint inspection of high-technology printed circuit boards.

  16. Imaging acquisition display performance: an evaluation and discussion of performance metrics and procedures.

    PubMed

    Silosky, Michael S; Marsh, Rebecca M; Scherzinger, Ann L

    2016-07-08

    When The Joint Commission updated its Requirements for Diagnostic Imaging Services for hospitals and ambulatory care facilities on July 1, 2015, among the new requirements was an annual performance evaluation for acquisition workstation displays. The purpose of this work was to evaluate a large cohort of acquisition displays used in a clinical environment and compare the results with existing performance standards provided by the American College of Radiology (ACR) and the American Association of Physicists in Medicine (AAPM). Measurements of the minimum luminance, maximum luminance, and luminance uniformity were performed on 42 acquisition displays across multiple imaging modalities. The mean values, standard deviations, and ranges were calculated for these metrics. Additionally, visual evaluations of contrast, spatial resolution, and distortion were performed using either the Society of Motion Picture and Television Engineers test pattern or the TG-18-QC test pattern. Finally, an evaluation of local nonuniformities was performed using either a uniform white display or the TG-18-UN80 test pattern. The displays tested were flat panel, liquid crystal displays that ranged from less than 1 year to 10 years of use and had been built by a wide variety of manufacturers. The mean values of Lmin and Lmax for the displays tested were 0.28 ± 0.13 cd/m2 and 135.07 ± 33.35 cd/m2, respectively. The mean maximum luminance deviation for ultrasound and non-ultrasound displays was 12.61% ± 4.85% and 14.47% ± 5.36%, respectively. Visual evaluation of display performance varied depending on several factors, including brightness and contrast settings and the test pattern used for image quality assessment. This work provides a snapshot of the performance of 42 acquisition displays across several imaging modalities in clinical use at a large medical center. Comparison with existing performance standards reveals that changes in display technology and the move from cathode ray
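
    The luminance deviation figures quoted above can be reproduced with a few lines of code. The sketch below computes the maximum luminance deviation from a set of luminance readings, assuming the commonly used AAPM TG-18 formulation 200 x (Lmax - Lmin) / (Lmax + Lmin); the five measurement positions and their values are invented for illustration, not data from this paper.

      # Hypothetical luminance readings (cd/m^2) at the center and four
      # corners of an acquisition display.
      readings = [135.0, 128.4, 131.2, 126.9, 133.5]

      def max_luminance_deviation(luminances):
          # AAPM TG-18 style non-uniformity metric, reported in percent.
          lo, hi = min(luminances), max(luminances)
          return 200.0 * (hi - lo) / (hi + lo)

      print(f"maximum luminance deviation: {max_luminance_deviation(readings):.2f}%")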

  18. Parallel asynchronous hardware implementation of image processing algorithms

    NASA Technical Reports Server (NTRS)

    Coon, Darryl D.; Perera, A. G. U.

    1990-01-01

    Research is being carried out on hardware for a new approach to focal plane processing. The hardware involves silicon injection mode devices. These devices provide a natural basis for parallel, asynchronous focal plane image preprocessing. The simplicity and novel properties of the devices would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture built from arrays of the devices would form a two-dimensional (2-D) array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuron-like, asynchronous, pulse-coded form through the laminar processor. No multiplexing, digitization, or serial processing would occur in the preprocessing stage. High performance is expected, based on pulse coding of input currents down to one picoampere with noise referred to input of about 10 femtoamperes. Linear pulse coding has been observed for input currents spanning seven orders of magnitude. Low power requirements suggest utility in space and in conjunction with very large arrays. Very low dark current and multispectral capability are possible because of hardware compatibility with the cryogenic environment of high-performance detector arrays. This hardware development effort is aimed at systems which would integrate image acquisition and image processing.
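
    As a rough digital analogue of the pulse coding described above, the toy model below integrates a constant input current and emits a pulse each time the accumulated charge crosses a threshold, so the pulse rate scales linearly with the input current. The threshold, time step, and currents are illustrative values, not device parameters from the paper.

      # Toy integrate-and-fire pulse coder: pulse rate ~ input current.
      def pulses_per_second(current_a, threshold_c=1e-13, dt=1e-6):
          charge, count = 0.0, 0
          for _ in range(int(1.0 / dt)):   # simulate one second
              charge += current_a * dt
              if charge >= threshold_c:
                  count += 1
                  charge -= threshold_c
          return count

      for current in (1e-12, 1e-11, 1e-10):   # from one picoampere upward
          print(f"{current:.0e} A -> {pulses_per_second(current)} pulses/s")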

  19. Grid Computing Application for Brain Magnetic Resonance Image Processing

    NASA Astrophysics Data System (ADS)

    Valdivia, F.; Crépeault, B.; Duchesne, S.

    2012-02-01

    This work emphasizes the use of grid computing and web technology for automatic post-processing of brain magnetic resonance images (MRI) in the context of neuropsychiatric (Alzheimer's disease) research. Post-acquisition image processing is achieved through the interconnection of several individual processes into pipelines. Each process has input and output data ports, options, and execution parameters, and performs a single task such as: a) extracting individual image attributes (e.g. dimensions, orientation, center of mass), b) performing image transformations (e.g. scaling, rotation, skewing, intensity standardization, linear and non-linear registration), c) performing image statistical analyses, and d) producing the necessary quality control images and/or files for user review. The pipelines are built to perform specific sequences of tasks on the alphanumeric data and MRIs contained in our database. The web application is coded in PHP and allows the creation of scripts to create, store, and execute pipelines and their instances either on our local cluster or on high-performance computing platforms. To run an instance on an external cluster, the web application opens a communication tunnel through which it copies the necessary files, submits the execution commands, and collects the results. We present results of system tests for the processing of a set of 821 brain MRIs from the Alzheimer's Disease Neuroimaging Initiative study via a nonlinear registration pipeline composed of 10 processes. Our results show successful execution on both local and external clusters, and a 4-fold increase in performance when using the external cluster. However, the latter's performance does not scale linearly, as queue waiting times and execution overhead increase with the number of tasks to be executed.
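
    The pipeline structure described above - individual processes with input and output data ports chained into a sequence - can be sketched in a few lines. The process names and their toy actions below are illustrative stand-ins, not the authors' actual PHP/cluster implementation.

      from dataclasses import dataclass
      from typing import Callable

      @dataclass
      class Process:
          name: str
          run: Callable[[dict], dict]   # consumes and produces named data ports

      def execute_pipeline(processes, data):
          # Thread one subject's data through each process in order.
          for proc in processes:
              data = proc.run(data)
          return data

      pipeline = [
          Process("extract_attributes", lambda d: {**d, "dims": (256, 256, 180)}),
          Process("linear_registration", lambda d: {**d, "registered": True}),
          Process("quality_control", lambda d: {**d, "qc_image": "qc.png"}),
      ]
      result = execute_pipeline(pipeline, {"mri": "subject_0001.mnc"})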

  20. An algorithm to unveil the inner structure of objects concealed by beam divergence in radiographic image acquisition systems

    SciTech Connect

    Almeida, G. L.; Silvani, M. I.; Lopes, R. T.

    2014-11-11

    Two main parameters rule the performance of an image acquisition system, namely spatial resolution and contrast. For radiographic systems using cone-beam arrangements, the farther the source, the better the resolution, but the contrast diminishes due to the lower statistics. A closer source would yield a higher contrast, but it would no longer reproduce the attenuation map of the object, as the incoming beam flux would be reduced by unequal, large divergences and attenuation factors. This work proposes a procedure to correct these effects when the object is comprised of a hull - or encased in one - with a shape that can be described in analytic geometry terms. Such a description allows the construction of a matrix containing the attenuation factors undergone by the beam from the source to its final destination at each coordinate on the 2D detector. Each matrix element incorporates the attenuation suffered by the beam as it travels through the hull wall, as well as its reduction due to the square of the distance to the source and the angle at which it hits the detector surface. When the pixel intensities of the original image are corrected by these factors, the image contrast, reduced by the overall attenuation in the exposure phase, is recovered, allowing one to see details otherwise concealed by the low contrast. In order to verify the soundness of this approach, synthetic images of objects of different shapes, such as plates and tubes, incorporating defects and statistical fluctuation, were generated, recorded for comparison, and afterwards processed to improve their contrast. The algorithm, which generates, processes, and plots the images, was written in Fortran 90. As the resulting images exhibit the expected improvements, it seems worthwhile to carry out further tests with actual experimental radiographs.
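
    The correction matrix described above can be sketched for the simplest geometry, a flat hull of uniform thickness parallel to the detector. Each pixel's factor combines the attenuation along the oblique path through the hull, the inverse square of the source-to-pixel distance, and the cosine of the incidence angle; dividing the recorded image by the expected intensity of a featureless hull restores the lost contrast. All dimensions and the attenuation coefficient below are illustrative, and the sketch is in Python rather than the authors' Fortran 90.

      import numpy as np

      def correction_matrix(nx, ny, pixel_mm, source_mm, mu_per_mm, thickness_mm):
          ys, xs = np.mgrid[0:ny, 0:nx]
          dx = (xs - nx / 2.0) * pixel_mm
          dy = (ys - ny / 2.0) * pixel_mm
          r = np.sqrt(dx**2 + dy**2 + source_mm**2)   # source-to-pixel distance
          cos_theta = source_mm / r                   # incidence angle on detector
          path = thickness_mm / cos_theta             # oblique path through the hull
          expected = np.exp(-mu_per_mm * path) * cos_theta * (source_mm / r) ** 2
          return 1.0 / expected

      corr = correction_matrix(512, 512, 0.2, 300.0, 0.05, 10.0)
      # corrected_image = raw_image * corr   # recovers contrast lost to divergence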

  2. Interactive image processing in swallowing research

    NASA Astrophysics Data System (ADS)

    Dengel, Gail A.; Robbins, JoAnne; Rosenbek, John C.

    1991-06-01

    Dynamic radiographic imaging of the mouth, larynx, pharynx, and esophagus during swallowing is used commonly in clinical diagnosis, treatment and research. Images are recorded on videotape and interpreted conventionally by visual perceptual methods, limited to specific measures in the time domain and binary decisions about the presence or absence of events. An image processing system using personal computer hardware and original software has been developed to facilitate measurement of temporal, spatial and temporospatial parameters. Digitized image sequences derived from videotape are manipulated and analyzed interactively. Animation is used to preserve context and increase efficiency of measurement. Filtering and enhancement functions heighten image clarity and contrast, improving visibility of details which are not apparent on videotape. Distortion effects and extraneous head and body motions are removed prior to analysis, and spatial scales are controlled to permit comparison among subjects. Effects of image processing on intra- and interjudge reliability and research applications are discussed.

  3. Learning and Individual Differences: An Ability/Information-Processing Framework for Skill Acquisition. Final Report.

    ERIC Educational Resources Information Center

    Ackerman, Phillip L.

    A program of theoretical and empirical research focusing on the ability determinants of individual differences in skill acquisition is reviewed. An integrative framework for information-processing and cognitive ability determinants of skills is reviewed, along with principles for ability-skill relations. Experimental manipulations were used to…

  4. The Priority of Listening Comprehension over Speaking in the Language Acquisition Process

    ERIC Educational Resources Information Center

    Xu, Fang

    2011-01-01

    By elaborating on the definition of listening comprehension, the characteristics of spoken discourse, the relationship between STM and LTM, and Krashen's comprehensible input, the paper argues that giving priority to listening comprehension over speaking in the language acquisition process is necessary.

  5. Processes of Language Acquisition in Children with Autism: Evidence from Preferential Looking

    ERIC Educational Resources Information Center

    Swensen, Lauren D.; Kelley, Elizabeth; Fein, Deborah; Naigles, Letitia R.

    2007-01-01

    Two language acquisition processes (comprehension preceding production of word order, the noun bias) were examined in 2- and 3-year-old children (n=10) with autistic spectrum disorder and in typically developing 21-month-olds (n=13). Intermodal preferential looking was used to assess comprehension of subject-verb-object word order and the tendency…

  6. Individual Variation in Infant Speech Processing: Implications for Language Acquisition Theories

    ERIC Educational Resources Information Center

    Cristia, Alejandrina

    2009-01-01

    To what extent does language acquisition recruit domain-general processing mechanisms? In this dissertation, evidence concerning this question is garnered from the study of individual differences in infant speech perception and their predictive value with respect to language development in early childhood. In the first experiment, variation in the…

  7. Earth Observation Services (Image Processing Software)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    San Diego State University and Environmental Systems Research Institute, with other agencies, have applied satellite imaging and image processing techniques to geographic information system (GIS) updating. The resulting images display land use and are used by a regional planning agency for applications like mapping vegetation distribution and preserving wildlife habitats. The EOCAP program provides government co-funding to encourage private investment in, and to broaden the use of, NASA-developed technology for analyzing information about Earth and ocean resources.

  8. Nonlinear Optical Image Processing with Bacteriorhodopsin Films

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Deiss, Ron (Technical Monitor)

    1994-01-01

    The transmission properties of some bacteriorhodopsin film spatial light modulators are uniquely suited to allow nonlinear optical image processing operations to be applied to images with multiplicative noise characteristics. A logarithmic amplitude transmission feature of the film permits the conversion of multiplicative noise to additive noise, which may then be linearly filtered out in the Fourier plane of the transformed image. The bacteriorhodopsin film displays the logarithmic amplitude response for write beam intensities spanning a dynamic range greater than 2.0 orders of magnitude. We present experimental results demonstrating the principle and capability for several different image and noise situations, including deterministic noise and speckle. Using the bacteriorhodopsin film, we successfully filter out image noise from the transformed image that cannot be removed from the original image.
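
    The optical scheme above parallels digital homomorphic filtering: a logarithm converts multiplicative noise to additive noise, a linear low-pass filter in the Fourier plane suppresses it, and an exponential returns to the intensity domain. The sketch below imitates that chain on a synthetic speckled image; the cutoff frequency and the gamma-distributed noise model are illustrative choices, not parameters from the film experiments.

      import numpy as np

      def homomorphic_denoise(image, cutoff=0.15):
          log_img = np.log1p(image)                    # multiplicative -> additive
          spectrum = np.fft.fftshift(np.fft.fft2(log_img))
          fy = np.fft.fftshift(np.fft.fftfreq(image.shape[0]))
          fx = np.fft.fftshift(np.fft.fftfreq(image.shape[1]))
          fxx, fyy = np.meshgrid(fx, fy)
          lowpass = np.sqrt(fxx**2 + fyy**2) < cutoff  # Fourier-plane filter
          filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * lowpass)).real
          return np.expm1(filtered)                    # back to intensities

      rng = np.random.default_rng(0)
      clean = np.ones((128, 128)); clean[32:96, 32:96] = 4.0
      speckled = clean * rng.gamma(shape=8.0, scale=1.0 / 8.0, size=clean.shape)
      denoised = homomorphic_denoise(speckled)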

  9. Optimizing Uas Image Acquisition and Geo-Registration for Precision Agriculture

    NASA Astrophysics Data System (ADS)

    Hearst, A. A.; Cherkauer, K. A.; Rainey, K. M.

    2014-12-01

    Unmanned Aircraft Systems (UASs) can acquire imagery of crop fields in various spectral bands, including the visible, near-infrared, and thermal portions of the spectrum. By combining techniques of computer vision, photogrammetry, and remote sensing, these images can be stitched into precise, geo-registered maps, which may have applications in precision agriculture and other industries. However, the utility of these maps will depend on their positional accuracy. Therefore, it is important to quantify positional accuracy and consider the tradeoffs between accuracy, field site setup, and the computational requirements for data processing and analysis. This will enable planning of data acquisition and processing to obtain the required accuracy for a given project. This study focuses on developing and evaluating methods for geo-registration of raw aerial frame photos acquired by a small fixed-wing UAS. This includes visual, multispectral, and thermal imagery at 3, 6, and 14 cm/pix resolutions, respectively. The study area is 10 hectares of soybean fields at the Agronomy Center for Research and Education (ACRE) at Purdue University. The dataset consists of imagery from 6 separate days of flights (surveys) and supporting ground measurements. The Direct Sensor Orientation (DiSO) and Integrated Sensor Orientation (InSO) methods for geo-registration are tested using 16 Ground Control Points (GCPs). Subsets of these GCPs are used to test for the effects of different numbers and spatial configurations of GCPs on positional accuracy. The horizontal and vertical Root Mean Squared Error (RMSE) is used as the primary metric of positional accuracy. Preliminary results from 1 of the 6 surveys show that the DiSO method (0 GCPs used) achieved an RMSE in the X, Y, and Z direction of 2.46 m, 1.04 m, and 1.91 m, respectively. InSO using 5 GCPs achieved an RMSE of 0.17 m, 0.13 m, and 0.44 m. InSO using 10 GCPs achieved an RMSE of 0.10 m, 0.09 m, and 0.12 m. Further analysis will identify
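
    The RMSE values reported above follow directly from the differences between mapped and surveyed check-point coordinates. The sketch below shows the per-axis computation; the two coordinate triples are invented for illustration.

      import numpy as np

      mapped   = np.array([[500.10, 200.05, 50.2], [612.40, 380.95, 49.9]])
      surveyed = np.array([[500.00, 200.00, 50.0], [612.30, 381.10, 50.1]])

      def rmse_per_axis(a, b):
          # Root mean squared error along X, Y, and Z over all check points.
          return np.sqrt(np.mean((a - b) ** 2, axis=0))

      rx, ry, rz = rmse_per_axis(mapped, surveyed)
      print(f"RMSE X: {rx:.2f} m, Y: {ry:.2f} m, Z: {rz:.2f} m")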

  10. Validation of a target acquisition model for active imager using perception experiments

    NASA Astrophysics Data System (ADS)

    Lapaz, Frédéric; Canevet, Loïc

    2007-10-01

    Active night vision systems based on laser diode emitters have now reached a technology level that allows military applications. In order to predict the performance of observers using such systems, we built an analytic model including sensor, atmosphere, visualization, and eye effects. The perception task was modelled using the Targeting Task Performance (TTP) metric developed by R. Vollmerhausen of the Night Vision and Electronic Sensors Directorate (NVESD). The sensor and atmosphere models have been validated separately. In order to validate the whole model, two identification tests were set up. The first set, submitted to trained observers, was made of hybrid images: target-to-background contrast, blur, and noise were added to armoured vehicle signatures in accordance with the sensor and atmosphere models. The second set of images was made with the same targets, sensed by a real active sensor during field trials. Images were recorded showing different vehicles, at different ranges and orientations, under different illumination and acquisition configurations. This set of real images was built with three different types of gating: wide illumination, illumination of the background, and illumination of the target. Analysis of the perception experiment results showed good concordance between the two sets of images. The calculation of an identification criterion, related to this set of vehicles in the near infrared, gave the same results in both cases. The impact of gating on observer performance was also evaluated.

  11. Digital Image Processing in Private Industry.

    ERIC Educational Resources Information Center

    Moore, Connie

    1986-01-01

    Examines various types of private industry optical disk installations in terms of business requirements for digital image systems in five areas: records management; transaction processing; engineering/manufacturing; information distribution; and office automation. Approaches for implementing image systems are addressed as well as key success…

  12. Cardiovascular Magnetic Resonance in Cardiology Practice: A Concise Guide to Image Acquisition and Clinical Interpretation.

    PubMed

    Valbuena-López, Silvia; Hinojar, Rocío; Puntmann, Valentina O

    2016-02-01

    Cardiovascular magnetic resonance plays an increasingly important role in routine cardiology clinical practice. It is a versatile imaging modality that allows highly accurate, broad and in-depth assessment of cardiac function and structure and provides information on pertinent clinical questions in diseases such as ischemic heart disease, nonischemic cardiomyopathies, and heart failure, as well as allowing unique indications, such as the assessment and quantification of myocardial iron overload or infiltration. Increasing evidence for the role of cardiovascular magnetic resonance, together with the spread of knowledge and skill outside expert centers, has afforded greater access for patients and wider clinical experience. This review provides a snapshot of cardiovascular magnetic resonance in modern clinical practice by linking image acquisition and postprocessing with effective delivery of the clinical meaning.

  13. Reference radiochromic film dosimetry in kilovoltage photon beams during CBCT image acquisition

    SciTech Connect

    Tomic, Nada; Devic, Slobodan; DeBlois, Francois; Seuntjens, Jan

    2010-03-15

    Purpose: A common approach for dose assessment during cone beam computed tomography (CBCT) acquisition is to use thermoluminescent detectors for skin dose measurements (on patients or phantoms) or an ionization chamber (in phantoms) for body dose measurements. However, the benefits of daily CBCT image acquisition, such as margin reduction in the planning target volume and improved image quality, must be weighed against the extra dose received during CBCT acquisitions. Methods: The authors describe a two-dimensional reference dosimetry technique for measuring dose from CBCT scans using the on-board imaging system on a Varian Clinac-iX linear accelerator that employs the XR-QA radiochromic film model, specifically designed for dose measurements at low photon energies. The CBCT dose measurements were performed for three different body regions (head and neck, pelvis, and thorax) using a humanoid Rando phantom. Results: The authors report both surface dose and dose profile measurements during clinical CBCT procedures carried out on a humanoid Rando phantom. Our measurements show that surface doses per CBCT scan can range anywhere between 0.1 and 4.7 cGy, with the lowest surface dose observed in the head and neck region, while the highest surface dose was observed for the Pelvis spot light CBCT protocol in the pelvic region, on the posterior side of the Rando phantom. The authors also present results of the uncertainty analysis of our XR-QA radiochromic film dosimetry system. Conclusions: The radiochromic film dosimetry protocol described in this work was used to perform dose measurements during CBCT acquisitions with a one-sigma dose measurement uncertainty of up to 3% for doses above 1 cGy. Our protocol is based on film exposure calibration in terms of ''air kerma in air,'' which simplifies both the calibration procedure and reference dosimetry measurements. The results from a full Monte Carlo investigation of the dose conversion of measured XR-QA film dose at the surface into

  14. Transient Decline in Hippocampal Theta Activity during the Acquisition Process of the Negative Patterning Task

    PubMed Central

    Sakimoto, Yuya; Okada, Kana; Takeda, Kozue; Sakata, Shogo

    2013-01-01

    Hippocampal function is important in the acquisition of negative patterning but not of simple discrimination. This study examined rat hippocampal theta activity during the acquisition stages (early, middle, and late) of the negative patterning task (A+, B+, AB-). The results showed that hippocampal theta activity began to decline transiently (for 500 ms after non-reinforced stimulus presentation) during the late stage of learning in the negative patterning task. In addition, this transient decline in hippocampal theta activity in the late stage was lower in the negative patterning task than in the simple discrimination task. This transient decline during the late stage of task acquisition may be related to a learning process distinctive of the negative patterning task but not the simple discrimination task. We propose that the transient decline of hippocampal theta activity reflects inhibitory learning and/or response inhibition after the presentation of a compound stimulus specific to the negative patterning task. PMID:23936249

  15. Design of multi-mode compatible image acquisition system for HD area array CCD

    NASA Astrophysics Data System (ADS)

    Wang, Chen; Sui, Xiubao

    2014-11-01

    In line with the current trends in video surveillance toward digitization and high definition, a multimode-compatible image acquisition system for an HD area array CCD is designed. The hardware and software designs of a color video capture system for the HD area array CCD KAI-02150 from Truesense Imaging are analyzed, and the structural parameters of the HD area array CCD and the color video acquisition principle of the system are introduced. The CCD control sequence and the timing logic of the whole capture system are then realized. The noise in the video signal (kTC noise and 1/f noise) is filtered using the correlated double sampling (CDS) technique to enhance the signal-to-noise ratio of the system. Compatible hardware and software designs are put forward for two other image sensors of the same series, the KAI-04050 and KAI-08050, whose effective pixel counts are four million and eight million, respectively. A Field Programmable Gate Array (FPGA) is adopted as the key controller of the system to perform a top-down modular design, which realizes the hardware design in software and improves development efficiency. Finally, the required timing is simulated accurately on the Quartus II 12.1 development platform combined with VHDL. The simulation results indicate that the driving circuit is characterized by a simple framework, low power consumption, and strong anti-interference ability, meeting the demands of miniaturization and high definition.
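
    The noise-suppression step mentioned above reduces to a subtraction: each pixel is sampled once just after reset and once after charge transfer, and the difference cancels the reset (kTC) noise common to both samples. The synthetic numbers below only illustrate that cancellation; they are not measurements from the described system.

      import numpy as np

      rng = np.random.default_rng(1)
      ktc = rng.normal(0.0, 5.0, size=1000)      # reset noise, shared by both samples
      reset_sample = 100.0 + ktc
      video_sample = reset_sample + 40.0 + rng.normal(0.0, 0.5, size=1000)

      cds_output = video_sample - reset_sample   # kTC term cancels exactly
      print(f"mean {cds_output.mean():.2f}, std {cds_output.std():.2f}")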

  16. Parallel image-acquisition in continuous-wave electron paramagnetic resonance imaging with a surface coil array: Proof-of-concept experiments.

    PubMed

    Enomoto, Ayano; Hirata, Hiroshi

    2014-02-01

    This article describes a feasibility study of parallel image-acquisition using a two-channel surface coil array in continuous-wave electron paramagnetic resonance (CW-EPR) imaging. Parallel EPR imaging was performed by multiplexing of EPR detection in the frequency domain. The parallel acquisition system consists of two surface coil resonators and radiofrequency (RF) bridges for EPR detection. To demonstrate the feasibility of this method of parallel image-acquisition with a surface coil array, three-dimensional EPR imaging was carried out using a tube phantom. Technical issues in the multiplexing method of EPR detection were also clarified. We found that degradation in the signal-to-noise ratio due to the interference of RF carriers is a key problem to be solved.

  17. Command Line Image Processing System (CLIPS)

    NASA Astrophysics Data System (ADS)

    Fleagle, S. R.; Meyers, G. L.; Kulinski, R. G.

    1985-06-01

    An interactive image processing language (CLIPS) has been developed for use in an image processing environment. CLIPS uses a simple syntax with extensive on-line help to allow even the most naive user to perform complex image processing tasks. In addition, CLIPS functions as an interpretive language complete with data structures and program control statements. CLIPS statements fall into one of three categories: command, control, and utility statements. Command statements are expressions comprised of intrinsic functions and/or arithmetic operators which act directly on image or user-defined data. Some examples of CLIPS intrinsic functions are ROTATE, FILTER, and EXPONENT. Control statements allow a structured programming style through the use of statements such as DO WHILE and IF-THEN-ELSE. Utility statements such as DEFINE, READ, and WRITE support I/O and user-defined data structures. Since CLIPS uses a table-driven parser, it is easily adapted to any environment. New commands may be added to CLIPS by writing the procedure in a high-level language such as Pascal or FORTRAN and inserting the syntax for that command into the table. However, CLIPS was designed by incorporating most imaging operations into the language as intrinsic functions. CLIPS allows the user to generate new procedures easily with these powerful functions, either interactively or off-line using a text editor. The fact that CLIPS can be used to generate complex procedures quickly or perform basic image processing functions interactively makes it a valuable tool in any image processing environment.
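
    The table-driven design described above can be suggested with a toy dispatch table: each intrinsic function is one entry, and adding a command means registering one more entry. The command names mirror examples from the abstract, but the bodies are Python stand-ins, not CLIPS internals.

      import numpy as np

      COMMANDS = {
          "ROTATE":   lambda img, k: np.rot90(img, int(k)),
          "EXPONENT": lambda img, p: img ** float(p),
      }

      def run(statement, image):
          # Parse "NAME arg ..." and dispatch through the command table.
          name, *args = statement.split()
          return COMMANDS[name](image, *args)

      img = np.arange(9.0).reshape(3, 3)
      img = run("ROTATE 1", img)
      img = run("EXPONENT 2", img)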

  18. Health Hazard Assessment and Toxicity Clearances in the Army Acquisition Process

    NASA Technical Reports Server (NTRS)

    Macko, Joseph A., Jr.

    2000-01-01

    The United States Army Materiel Command, Army Acquisition Pollution Prevention Support Office (AAPPSO) is responsible for creating and managing the U.S. Army-wide Acquisition Pollution Prevention Program. It has established Integrated Process Teams (IPTs) within each of the Major Subordinate Commands of the Army Materiel Command. AAPPSO provides centralized integration, coordination, and oversight of the Army Acquisition Pollution Prevention Program (AAPPP), and the IPTs provide decentralized execution of the AAPPSO program. AAPPSO issues policy and guidance, provides resources, and prioritizes P2 efforts. It is the policy of the AAPPP to require United States Army Surgeon General approval of all materials or substances that will be used as alternatives to existing hazardous materials, toxic materials and substances, and ozone-depleting substances. The Army has a formal process established to address this effort. Army Regulation 40-10 requires a Health Hazard Assessment (HHA) during the acquisition milestones of a new Army system. Army Regulation 40-5 addresses the Toxicity Clearance (TC) process used to evaluate new chemicals and materials prior to their acceptance as alternatives. The U.S. Army Center for Health Promotion and Preventive Medicine is the Army's matrixed medical health organization that performs the HHA and TC missions.

  19. Image processing technique based on image understanding architecture

    NASA Astrophysics Data System (ADS)

    Kuvychko, Igor

    2000-12-01

    The effectiveness of image applications depends directly on their ability to resolve ambiguity and uncertainty in real images. That requires tight integration of low-level image processing with high-level knowledge-based reasoning, which is the essence of the image understanding problem. This article presents a generic computational framework for the solution of the image understanding problem - a Spatial Turing Machine. Instead of a tape of symbols, it works with hierarchical networks dually represented as discrete and continuous structures. The dual representation provides a natural transformation of continuous image information into discrete structures, making it available for analysis. Such structures are data and algorithms at the same time and can perform the graph and diagrammatic operations that are the basis of intelligence. They can create derivative structures that play the role of context, or 'measurement device,' giving the ability to analyze and to run top-down algorithms. Symbols emerge naturally there, and symbolic operations work in combination with new, simplified methods of computational intelligence. That makes images and scenes self-describing and provides flexible ways of resolving uncertainty. Classification of images truly invariant to any transformation can be done by matching their derivative structures. The proposed architecture does not require supercomputers, opening the way to new image technologies.

  20. Performance of a VME-based parallel processing LIDAR data acquisition system (summary)

    SciTech Connect

    Moore, K.; Buttler, B.; Caffrey, M.; Soriano, C.

    1995-05-01

    It may be possible to make accurate, real-time, autonomous, 2- and 3-dimensional wind measurements remotely with an elastic backscatter Light Detection and Ranging (LIDAR) system by incorporating digital parallel processing hardware into the data acquisition system. In this paper, we report the performance of a commercially available digital parallel processing system in implementing the maximum correlation technique for wind sensing using actual LIDAR data. Timing and numerical accuracy are benchmarked against a standard microprocessor implementation.
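
    The maximum correlation technique mentioned above can be illustrated in miniature: the displacement of backscatter features between two successive scans is the lag that maximizes their cross-correlation, and dividing by the scan interval yields a velocity. The signals, gate size, and interval below are synthetic and purely illustrative.

      import numpy as np

      rng = np.random.default_rng(2)
      scan_a = rng.random(256)
      scan_b = np.roll(scan_a, 7) + 0.05 * rng.random(256)   # shifted by 7 gates

      a = scan_a - scan_a.mean()
      b = scan_b - scan_b.mean()
      corr = np.correlate(b, a, mode="full")
      lag = int(corr.argmax()) - (len(scan_a) - 1)           # recovers +7

      gate_m, interval_s = 15.0, 1.0                         # illustrative values
      print(f"shift {lag} gates -> wind ~ {lag * gate_m / interval_s:.0f} m/s")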

  1. IT system supporting acquisition of image data used in the identification of grasslands

    NASA Astrophysics Data System (ADS)

    Mueller, Wojciech; Nowakowski, Krzysztof; Tomczak, Robert J.; Kujawa, Sebastian; Rudowicz-Nawrocka, Janina; Idziaszek, Przemysław; Zawadzki, Adrian

    2013-07-01

    A complex research project was undertaken by the authors to develop a method for the automatic identification of grasslands using neural analysis of aerial photographs taken from relatively low altitudes. The development of such a method requires the collection of a large amount of varied data. To manage these data and to automate their acquisition, an appropriate information system was developed in this study using a variety of commercial and free technologies. Technologies for processing and storing data in the form of raster and vector graphics were pivotal in the development of the research tool.

  2. Software-Based Real-Time Acquisition and Processing of PET Detector Raw Data.

    PubMed

    Goldschmidt, Benjamin; Schug, David; Lerche, Christoph W; Salomon, André; Gebhardt, Pierre; Weissler, Bjoern; Wehner, Jakob; Dueppenbecker, Peter M; Kiessling, Fabian; Schulz, Volkmar

    2016-02-01

    In modern positron emission tomography (PET) readout architectures, the position and energy estimation of scintillation events (singles) and the detection of coincident events (coincidences) are typically carried out on highly integrated, programmable printed circuit boards. The implementation of advanced singles and coincidence processing (SCP) algorithms for these architectures is often limited by the strict constraints of hardware-based data processing. In this paper, we present a software-based data acquisition and processing architecture (DAPA) that offers a high degree of flexibility for advanced SCP algorithms through relaxed real-time constraints and an easily extendible data processing framework. The DAPA is designed to acquire detector raw data from independent (but synchronized) detector modules and process the data for singles and coincidences in real time using a center-of-gravity (COG)-based, a least-squares (LS)-based, or a maximum-likelihood (ML)-based crystal position and energy estimation approach (CPEEA). To test the DAPA, we adapted it to a preclinical PET detector that outputs detector raw data from 60 independent digital silicon photomultiplier (dSiPM)-based detector stacks and evaluated it with a [(18)F]-fluorodeoxyglucose-filled hot-rod phantom. The DAPA is highly reliable, with less than 0.1% of all detector raw data lost or corrupted. For high validation thresholds (37.1 ± 12.8 photons per pixel) of the dSiPM detector tiles, the DAPA is real-time capable up to 55 MBq for the COG-based CPEEA, up to 31 MBq for the LS-based CPEEA, and up to 28 MBq for the ML-based CPEEA. Compared to the COG-based CPEEA, the rods in the image reconstruction of the hot-rod phantom are only slightly better separable and less blurred for the LS- and ML-based CPEEA. While the coincidence time resolution (∼500 ps) and energy resolution (∼12.3%) are comparable for all three CPEEAs, the system sensitivity is up to 2.5× higher for the LS- and ML-based CPEEA.
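
    The center-of-gravity (COG) estimate named above is the simplest of the three approaches: the event position is the photon-count-weighted mean of the pixel coordinates, and the summed counts serve as the energy estimate. The 4x4 count pattern below is invented for illustration, not dSiPM data from the paper.

      import numpy as np

      counts = np.array([[1.0,  3.0,  2.0, 0.0],
                         [4.0, 25.0, 12.0, 1.0],
                         [2.0, 14.0,  8.0, 1.0],
                         [0.0,  2.0,  1.0, 0.0]])

      ys, xs = np.mgrid[0:counts.shape[0], 0:counts.shape[1]]
      total = counts.sum()
      x_cog = (xs * counts).sum() / total     # count-weighted mean column
      y_cog = (ys * counts).sum() / total     # count-weighted mean row
      print(f"position ({x_cog:.2f}, {y_cog:.2f}), energy {total:.0f}")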

  3. Fingerprint image enhancement by differential hysteresis processing.

    PubMed

    Blotta, Eduardo; Moler, Emilce

    2004-05-10

    A new method to enhance defective fingerprint images through digital image processing tools is presented in this work. When fingerprints have been taken without care - blurred and in some cases mostly illegible, as in the case presented here - their classification and comparison become nearly impossible. A combination of spatial domain filters, including a technique called differential hysteresis processing (DHP), is applied to improve this kind of image. This set of filtering methods proved satisfactory in a wide range of cases by uncovering hidden details that helped to identify persons. Dactyloscopy experts from Policia Federal Argentina and the EAAF have validated these results. PMID:15062948

  5. Modality-specific processing precedes amodal linguistic processing during L2 sign language acquisition: A longitudinal study.

    PubMed

    Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D

    2016-02-01

    The present study tracked activation pattern differences in response to sign language processing by late hearing second language learners of American Sign Language. Learners were scanned before the start of their language courses. They were scanned again after their first semester of instruction and their second, for a total of 10 months of instruction. The study aimed to characterize modality-specific to modality-general processing throughout the acquisition of sign language. Results indicated that before the acquisition of sign language, neural substrates related to modality-specific processing were present. After approximately 45 h of instruction, the learners transitioned into processing signs on a phonological basis (e.g., supramarginal gyrus, putamen). After one more semester of input, learners transitioned once more to a lexico-semantic processing stage (e.g., left inferior frontal gyrus) at which language control mechanisms (e.g., left caudate, cingulate gyrus) were activated. During these transitional steps right hemispheric recruitment was observed, with increasing left-lateralization, which is similar to other native signers and L2 learners of spoken language; however, specialization for sign language processing with activation in the inferior parietal lobule (i.e., angular gyrus), even for late learners, was observed. As such, the present study is the first to track L2 acquisition of sign language learners in order to characterize modality-independent and modality-specific mechanisms for bilingual language processing. PMID:26720258

  7. IECON '87: Signal acquisition and processing; Proceedings of the 1987 International Conference on Industrial Electronics, Control, and Instrumentation, Cambridge, MA, Nov. 3, 4, 1987

    NASA Astrophysics Data System (ADS)

    Niederjohn, Russell J.

    1987-01-01

    Theoretical and applications aspects of signal processing are examined in reviews and reports. Topics discussed include speech processing methods, algorithms, and architectures; signal-processing applications in motor and power control; digital signal processing; signal acquisition and analysis; and processing algorithms and applications. Consideration is given to digital coding of speech algorithms, an algorithm for continuous-time processes in discrete-time measurement, quantization noise and filtering schemes for digital control systems, distributed data acquisition for biomechanics research, a microcomputer-based differential distance and velocity measurement system, velocity observations from discrete position encoders, a real-time hardware image preprocessor, and recognition of partially occluded objects by a knowledge-based system.

  8. Image-processing with augmented reality (AR)

    NASA Astrophysics Data System (ADS)

    Babaei, Hossein R.; Mohurutshe, Pagiel L.; Habibi Lashkari, Arash

    2013-03-01

    The aim of this project is to create an image-based Android application built on real-time image detection and processing - a convenient new approach that allows users to obtain information about imagery on the spot. Past studies have described attempts to create image-based applications, but these have only gone as far as image finders that work with images already stored in some form of database. The Android platform is spreading rapidly around the world and provides by far the most interactive and technical platform for smartphones, which is why it was important to base this study and research on it. Augmented reality allows the user to manipulate the data and to add enhanced features (video, GPS tags) to the image taken.

  9. Corn tassel detection based on image processing

    NASA Astrophysics Data System (ADS)

    Tang, Wenbing; Zhang, Yane; Zhang, Dongxing; Yang, Wei; Li, Minzan

    2012-01-01

    Machine vision has been widely applied in facility agriculture and plays an important role in obtaining environmental information. This paper studies the application of image processing to recognize and locate corn tassels for a corn detasseling machine. A corn tassel identification and location method was developed based on image processing, providing automated guidance information for actual corn emasculation operations. According to the color characteristics of corn tassels, image processing techniques were applied to identify tassels in images under the HSI color space, image segmentation was applied to extract the tassel regions, and the tassel features were analyzed and extracted. First, a series of preprocessing procedures were performed. Then, an image segmentation algorithm based on the HSI color space was developed to extract corn tassels from the background, and a region-growing method was proposed to recognize them. The results show that this method is effective for extracting tassel regions from the collected pictures and can be used to derive tassel location information; this provides a theoretical basis for an intelligent corn detasseling machine.
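
    A sketch of the HSI conversion and hue-band thresholding such a pipeline relies on is shown below. The conversion follows the standard RGB-to-HSI formulas, while the hue band and saturation threshold are illustrative guesses, not the calibrated values from the paper.

      import numpy as np

      def rgb_to_hsi(rgb):
          # rgb: float array in [0, 1], shape (h, w, 3)
          r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
          i = (r + g + b) / 3.0
          s = 1.0 - np.minimum(np.minimum(r, g), b) / np.maximum(i, 1e-6)
          num = 0.5 * ((r - g) + (r - b))
          den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-6
          theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
          h = np.where(b <= g, theta, 360.0 - theta)
          return h, s, i

      def tassel_mask(rgb_u8, hue_band=(40.0, 70.0), min_sat=0.15):
          # Keep pixels whose hue falls in a band chosen for the tassel color.
          h, s, _ = rgb_to_hsi(rgb_u8.astype(float) / 255.0)
          return (h >= hue_band[0]) & (h <= hue_band[1]) & (s >= min_sat)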

  10. A new programming metaphor for image processing procedures

    NASA Technical Reports Server (NTRS)

    Smirnov, O. M.; Piskunov, N. E.

    1992-01-01

    Most image processing systems, besides an Application Program Interface (API) which lets users write their own image processing programs, also feature a higher level of programmability. Traditionally, this is a command or macro language, which can be used to build large procedures (scripts) out of simple programs or commands. This approach, a legacy of the teletypewriter, has serious drawbacks. A command language is clumsy when (and if!) it attempts to utilize the capabilities of a multitasking or multiprocessor environment, it is barely adequate for real-time data acquisition and processing, it has a fairly steep learning curve, and its user interface is very inefficient, especially when compared to the graphical user interface (GUI) that systems running under X11 or Windows should otherwise be able to provide. All these difficulties stem from one basic problem: a command language is not a natural metaphor for an image processing procedure. A more natural metaphor - an image processing factory - is described in detail. A factory is a set of programs (applications) that execute separate operations on images, connected by pipes that carry data (images and parameters) between them. The programs function concurrently, processing images as they arrive along pipes and querying the user for whatever other input they need. From the user's point of view, programming (constructing) factories is a lot like playing with LEGO blocks - much more intuitive than writing scripts. The focus is on some of the difficulties of implementing factory support, most notably the design of an appropriate API. It is also shown that factories retain all the functionality of a command language (including loops and conditional branches) while suffering from none of the drawbacks outlined above. Other benefits of factory programming include self-tuning factories and the process of encapsulation, which lets a factory take the shape of a standard application from both the system's and the user's point of view.
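
    The factory metaphor lends itself to a direct sketch: independent worker processes connected by pipes, each consuming items as they arrive and pushing results downstream. The two toy stages below stand in for real image operations and make no claim about the paper's actual API.

      from multiprocessing import Pipe, Process

      def double(x):        # placeholder "application" acting on image data
          return x * 2

      def increment(x):
          return x + 1

      def stage(recv_end, send_end, fn):
          # Run concurrently; None is a shutdown sentinel passed down the line.
          while (item := recv_end.recv()) is not None:
              send_end.send(fn(item))
          send_end.send(None)

      if __name__ == "__main__":
          a_recv, a_send = Pipe(duplex=False)
          b_recv, b_send = Pipe(duplex=False)
          c_recv, c_send = Pipe(duplex=False)
          workers = [Process(target=stage, args=(a_recv, b_send, double)),
                     Process(target=stage, args=(b_recv, c_send, increment))]
          for w in workers:
              w.start()
          for item in (1, 2, 3, None):
              a_send.send(item)
          while (out := c_recv.recv()) is not None:
              print(out)    # 3, 5, 7
          for w in workers:
              w.join()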

  11. A robust adaptive sampling method for faster acquisition of MR images.

    PubMed

    Vellagoundar, Jaganathan; Machireddy, Ramasubba Reddy

    2015-06-01

    A robust adaptive k-space sampling method is proposed for faster acquisition and reconstruction of MR images. In this method, undersampling patterns are generated based on the magnitude profile of fully acquired 2-D k-space data, and images are reconstructed using a compressive sampling reconstruction algorithm. Simulation experiments were done to assess the performance of the proposed method under various signal-to-noise ratio (SNR) levels. The performance of the method is better than that of non-adaptive variable-density sampling when the k-space SNR is greater than 10 dB. The method was implemented on fully acquired multi-slice raw k-space data and on quality assurance phantom data. Data reduction of up to 60% was achieved for the multi-slice imaging data and 75% for the phantom imaging data. The results show that reconstruction accuracy is improved over non-adaptive (conventional) variable-density sampling. The proposed sampling method is signal dependent, and the estimation of sampling locations is robust to noise. As a result, it eliminates the need for a mathematical model and parameter tuning to compute k-space sampling patterns, as required in non-adaptive sampling methods.
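
    The magnitude-guided sampling idea can be sketched as follows: the sampling probability at each k-space location is made proportional to the magnitude of a fully sampled reference, so high-energy (typically low-frequency) regions are retained preferentially. The 40% sampling budget and the random reference data are illustrative, not the paper's settings.

      import numpy as np

      def adaptive_mask(kspace_mag, keep_fraction=0.4, seed=0):
          rng = np.random.default_rng(seed)
          prob = kspace_mag.ravel() / kspace_mag.sum()   # magnitude -> pmf
          chosen = rng.choice(prob.size, size=int(keep_fraction * prob.size),
                              replace=False, p=prob)
          mask = np.zeros(prob.size, dtype=bool)
          mask[chosen] = True
          return mask.reshape(kspace_mag.shape)

      rng = np.random.default_rng(1)
      reference = np.abs(np.fft.fftshift(np.fft.fft2(rng.random((64, 64)))))
      mask = adaptive_mask(reference)
      print(f"sampled {mask.mean():.0%} of k-space")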

  12. Overview on METEOSAT geometrical image data processing

    NASA Technical Reports Server (NTRS)

    Diekmann, Frank J.

    1994-01-01

    Digital images acquired from the geostationary METEOSAT satellites are processed and disseminated at ESA's European Space Operations Centre in Darmstadt, Germany. Their scientific value depends mainly on their radiometric quality and geometric stability. This paper gives an overview of the image processing activities performed at ESOC, concentrating on geometrical restoration and quality evaluation. The performance of the rectification process for the various satellites over the past years is presented, and the impact of external events, such as the Pinatubo eruption in 1991, is explained. Special developments in both hardware and software, necessary to cope with demanding tasks such as new image resampling or correction of spacecraft anomalies, are presented as well. The rotating lens of MET-5, which causes severe geometrical image distortions, is an example of the latter.

  13. SNR-optimized phase-sensitive dual-acquisition turbo spin echo imaging: a fast alternative to FLAIR.

    PubMed

    Lee, Hyunyeol; Park, Jaeseok

    2013-07-01

    Phase-sensitive dual-acquisition single-slab three-dimensional turbo spin echo imaging was recently introduced, producing high-resolution isotropic cerebrospinal fluid attenuated brain images without long inversion recovery preparation. Despite the advantages, the weighted-averaging-based technique suffers from noise amplification resulting from different levels of cerebrospinal fluid signal modulations over the two acquisitions. The purpose of this work is to develop a signal-to-noise ratio-optimized version of the phase-sensitive dual-acquisition single-slab three-dimensional turbo spin echo. Variable refocusing flip angles in the first acquisition are calculated using a three-step prescribed signal evolution while those in the second acquisition are calculated using a two-step pseudo-steady state signal transition with a high flip-angle pseudo-steady state at a later portion of the echo train, balancing the levels of cerebrospinal fluid signals in both the acquisitions. Low spatial frequency signals are sampled during the high flip-angle pseudo-steady state to further suppress noise. Numerical simulations of the Bloch equations were performed to evaluate signal evolutions of brain tissues along the echo train and optimize imaging parameters. In vivo studies demonstrate that compared with conventional phase-sensitive dual-acquisition single-slab three-dimensional turbo spin echo, the proposed optimization yields 74% increase in apparent signal-to-noise ratio for gray matter and 32% decrease in imaging time. The proposed method can be a potential alternative to conventional fluid-attenuated imaging.

  14. An intelligent pre-processing framework for standardizing medical images for CAD and other post-processing applications

    NASA Astrophysics Data System (ADS)

    Raghupathi, Lakshminarasimhan; Devarakota, Pandu R.; Wolf, Matthias

    2012-03-01

    There is an increasing need to provide end-users with seamless and secure access to healthcare information acquired from a diverse range of sources. This might include local and remote hospital sites equipped by different vendors and practicing varied acquisition protocols, as well as heterogeneous external sources such as the Internet cloud. In such scenarios, image post-processing tools such as CAD (computer-aided diagnosis), which were developed using a smaller set of images, may not always work optimally on newer sets of images with entirely different characteristics. In this paper, we propose a framework that assesses the quality of a given input image and automatically applies an appropriate pre-processing method so that the image characteristics are normalized regardless of source. We focus mainly on medical images, and the objective of the pre-processing method is to standardize the performance of various image processing and workflow applications, such as CAD, so that they perform consistently. First, our system consists of an assessment step wherein an image is evaluated based on criteria such as noise and image sharpness. Depending on the measured characteristics, we then apply an appropriate normalization technique, yielding our overall pre-processing framework. A systematic evaluation of the proposed scheme is carried out on a large set of CT images acquired from various vendors, including images reconstructed with next-generation iterative methods. Results demonstrate that the images are normalized and thus suitable for an existing LungCAD prototype [1].
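
    The assess-then-normalize flow can be suggested in a few lines: measure simple image characteristics and dispatch to a matching normalization step. The noise proxy, thresholds, and placeholder filters below are illustrative stand-ins for the criteria used in the paper.

      import numpy as np

      def assess(image):
          noise_proxy = float(np.std(np.diff(image, axis=0)))   # high-frequency energy
          contrast = float(image.max() - image.min())
          return {"noisy": noise_proxy > 20.0, "low_contrast": contrast < 100.0}

      def normalize(image):
          report = assess(image)
          if report["noisy"]:
              # crude two-row average as a placeholder for a proper denoiser
              image = (image[:-1, :] + image[1:, :]) / 2.0
          if report["low_contrast"]:
              span = max(float(np.ptp(image)), 1e-6)
              image = (image - image.min()) / span * 255.0
          return image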

  15. Multibeam Sonar Backscatter Data Acquisition and Processing: Guidelines and Recommendations from the GEOHAB Backscatter Working Group

    NASA Astrophysics Data System (ADS)

    Heffron, E.; Lurton, X.; Lamarche, G.; Brown, C.; Lucieer, V.; Rice, G.; Schimel, A.; Weber, T.

    2015-12-01

    Backscatter data acquired with multibeam sonars are now commonly used for the remote geological interpretation of the seabed. The systems' hardware, software, and processing methods and tools have grown in number and improved over the years, yet many issues linger: there are no standard procedures for acquisition, calibration is poor or absent, and processing methods are poorly understood and documented, etc. A workshop organized at the GeoHab (a community of geoscientists and biologists around the topic of marine habitat mapping) annual meeting in 2013 was dedicated to seafloor backscatter data from multibeam sonars and concluded that there was an overwhelming need for better coherence and agreement on the topics of acquisition, processing, and interpretation of data. The GeoHab Backscatter Working Group (BSWG) was subsequently created with the purpose of documenting and synthesizing the state of the art in sensors and techniques available today and proposing methods for best practice in the acquisition and processing of backscatter data. Two years later, the resulting document "Backscatter measurements by seafloor-mapping sonars: Guidelines and Recommendations" was completed [1]. The document provides: an introduction to backscatter measurements by seafloor-mapping sonars; a background on the physical principles of sonar backscatter; a discussion of users' needs from a wide spectrum of community end-users; a review of backscatter measurement; an analysis of best practices in data acquisition; a review of data processing principles with details on present software implementation; and, finally, a synthesis and key recommendations. This presentation reviews the BSWG mandate, structure, and development of this document. It details the various chapter contents, its recommendations to sonar manufacturers, operators, data processing software developers, and end-users, and its implications for the marine geology community. [1] Downloadable at https://www.niwa.co.nz/coasts-and-oceans/research-projects/backscatter-measurement-guidelines

  16. [Software development of multi-element transient signal acquisition and processing with multi-channel ICP-AES].

    PubMed

    Zhang, Y; Zhuang, Z; Wang, X; Zhu, E; Liu, J

    2000-02-01

    Software for multi-element transient signal acquisition and processing with multi-channel ICP-AES was developed in this paper. It has been successfully applied to signal acquisition and processing in many transient introduction techniques coupled on-line with multi-channel ICP-AES.

  17. Graph-based retrospective 4D image construction from free-breathing MRI slice acquisitions

    NASA Astrophysics Data System (ADS)

    Tong, Yubing; Udupa, Jayaram K.; Ciesielski, Krzysztof C.; McDonough, Joseph M.; Mong, Andrew; Campbell, Robert M.

    2014-03-01

    4D or dynamic imaging of the thorax has many potential applications [1, 2]. CT and MRI offer sufficient speed to acquire motion information via 4D imaging. However, they have different constraints and requirements. For both modalities, both prospective and retrospective respiratory gating and tracking techniques have been developed [3, 4]. For pediatric imaging, x-ray radiation becomes a primary concern and MRI remains the de facto choice. The pediatric subjects we deal with often suffer from extreme malformations of the chest wall, diaphragm, and/or spine, so the patient cooperation needed by some of the gating and tracking techniques is difficult to obtain without causing discomfort. Moreover, we are interested in the mechanical function of the thorax in its natural form during tidal breathing. Therefore free-breathing MRI acquisition is the ideal imaging approach for these patients. In our setup, for each coronal (or sagittal) slice position, slice images are acquired at a rate of about 200-300 ms/slice over several natural breathing cycles. This typically produces several thousand slices which contain both anatomic and dynamic information. However, it is not trivial to form a consistent and well-defined 4D volume from these data. In this paper, we present a novel graph-based combinatorial optimization solution for constructing the best possible 4D scene from such data entirely in the digital domain. Our proposed method is purely image-based and does not need breath holding or any external surrogates or instruments to record respiratory motion or tidal volume. Data from both adult and pediatric patients are used to illustrate the performance of the proposed method. Experimental results show that the reconstructed 4D scenes are smooth and consistent spatially and temporally, agreeing with the known shape and motion of the lungs.
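
    As a toy illustration of the graph idea (not the paper's exact combinatorial formulation), the dynamic-programming sketch below picks, at each slice position, one of the repeatedly acquired candidate slices so that adjacent selections are maximally consistent; the mean-squared-difference cost is an assumption.

```python
import numpy as np

def build_volume(candidates):
    """candidates[p] is an array (n_p, H, W) of slices acquired at position p."""
    n_pos = len(candidates)
    best = np.zeros(len(candidates[0]))   # running path cost per candidate
    back = []                             # back-pointers for path recovery
    for p in range(1, n_pos):
        cur, prev = candidates[p], candidates[p - 1]
        # cost[j, i]: dissimilarity of slice j at p versus slice i at p-1
        cost = np.array([[np.mean((a - b) ** 2) for a in prev] for b in cur])
        tot = cost + best[None, :]
        back.append(tot.argmin(axis=1))
        best = tot.min(axis=1)
    # trace the cheapest path back to obtain one slice index per position
    idx = [int(best.argmin())]
    for bp in reversed(back):
        idx.append(int(bp[idx[-1]]))
    idx.reverse()
    return np.stack([candidates[p][i] for p, i in enumerate(idx)])
```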

  18. Real-time digital signal processing for live electro-optic imaging.

    PubMed

    Sasagawa, Kiyotaka; Kanno, Atsushi; Tsuchiya, Masahiro

    2009-08-31

    We present an imaging system that enables real-time magnitude and phase detection of modulated signals and its application to a Live Electro-optic Imaging (LEI) system, which realizes instantaneous visualization of RF electric fields. The real-time acquisition of magnitude and phase images of a modulated optical signal at 5 kHz is demonstrated by imaging with a Si-based high-speed CMOS image sensor and real-time signal processing with a digital signal processor. In the LEI system, RF electric fields are probed with light via an electro-optic crystal plate and downconverted to an intermediate frequency by parallel optical heterodyning, which can be detected with the image sensor. The artifacts caused by the optics and the image sensor characteristics are corrected by image processing. As examples, we demonstrate real-time visualization of electric fields from RF circuits.
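
    The per-pixel magnitude/phase detection can be pictured as a digital lock-in (IQ demodulation) of the downconverted intermediate-frequency signal. A minimal numpy sketch follows; the array shapes and sampling parameters are illustrative assumptions (the paper's implementation runs on a dedicated DSP).

```python
import numpy as np

def demodulate(frames, f_if, fs):
    """frames: (T, H, W) time samples of the downconverted signal per pixel;
    f_if: intermediate frequency in Hz; fs: frame sampling rate in Hz."""
    t = np.arange(frames.shape[0]) / fs
    ref = np.exp(-2j * np.pi * f_if * t)                 # complex reference
    iq = np.tensordot(ref, frames, axes=(0, 0)) * (2.0 / len(t))
    return np.abs(iq), np.angle(iq)                      # magnitude, phase images
```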

  19. Real-time optical image processing techniques

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang

    1988-01-01

    Nonlinear real-time optical processing based on spatial pulse frequency modulation has been pursued through the analysis, design, and fabrication of pulse-frequency-modulated halftone screens and the modification of micro-channel spatial light modulators (MSLMs). Micro-channel spatial light modulators are modified via the Fabry-Perot method to achieve the high gamma operation required for nonlinear operation. Real-time nonlinear processing was performed using the halftone screen and MSLM. The experiments showed the effectiveness of the thresholding and also showed the need for higher space-bandwidth product (SBP) for image processing. The Hughes LCLV has been characterized and found to yield high gamma (about 1.7) when operated in low-frequency, low-bias mode. Cascading two LCLVs should also provide enough gamma for nonlinear processing; in this case, the SBP of the LCLV is sufficient but its uniformity needs improvement. Applications investigated include image correlation, computer generation of holograms, pseudo-color image encoding for image enhancement, and associative retrieval in neural processing. The discovery of the only known optical method for real-time dynamic range compression of an input image using GaAs photorefractive crystals is reported. Finally, a new architecture for nonlinear multiple-sensory neural processing has been suggested.

  20. Applications of image processing technologies to fine arts

    NASA Astrophysics Data System (ADS)

    Bartolini, Franco; Cappellini, Vito; Del Mastio, Andrea; Piva, Alessandro

    2003-10-01

    Over the past years the progress of electronic imaging has encouraged researchers to develop applications for the fine arts sector. The aspects that have been most investigated are the high-quality acquisition of paintings (from the point of view of both spatial resolution and color calibration), the actual restoration of the works (giving restorers an aid to forecast the results of the interventions they choose), virtual restoration (building a digital copy of the painting as it was originally), and diagnosis (automatically highlighting, evaluating, and monitoring the possible damage a work has suffered). Partially related to image processing are also the technologies for 3D acquisition and modeling of statues. Finally, particular care has recently been given to the distribution of digital copies of cultural heritage objects over the Internet, posing novel problems regarding the effective browsing of digital multimedia archives and the protection of the intellectual property connected to artwork reproductions. The goal of this paper is to review the research results that have been obtained in this field and to present some problems that are still open and can represent a challenging research field for the future.

  1. Methods of Hematoxylin and Eosin Image Information Acquisition and Optimization in Confocal Microscopy

    PubMed Central

    Yoon, Woong Bae; Kim, Hyunjin; Kim, Kwang Gi; Choi, Yongdoo; Chang, Hee Jin

    2016-01-01

    Objectives We produced hematoxylin and eosin (H&E) staining-like color images by using confocal laser scanning microscopy (CLSM), which can obtain the same or more information in comparison to conventional tissue staining. Methods We improved images by using several image converting techniques, including morphological methods, color space conversion methods, and segmentation methods. Results An image obtained after image processing showed coloring very similar to that in images produced by H&E staining, and it is advantageous to conduct analysis through fluorescent dye imaging and microscopy rather than analysis based on single microscopic imaging. Conclusions The colors used in CLSM are different from those seen in H&E staining, which is the method most widely used for pathologic diagnosis and is familiar to pathologists. Computer technology can facilitate the conversion of images by CLSM to be very similar to H&E staining images. We believe that the technique used in this study has great potential for application in clinical tissue analysis. PMID:27525165
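
    One common way to render fluorescence channels with H&E-like colors is a Beer-Lambert-style virtual transmission model; the sketch below is illustrative only (the channel assignment and absorption coefficients are assumptions, not the method used in the paper).

```python
import numpy as np

# per-RGB-channel absorption for hematoxylin-like and eosin-like stains
K_HEMATOXYLIN = np.array([0.86, 1.00, 0.30])
K_EOSIN       = np.array([0.05, 1.00, 0.54])

def virtual_he(nuclear, cytoplasm):
    """nuclear, cytoplasm: (H, W) fluorescence intensities scaled to [0, 1];
    returns an (H, W, 3) RGB image with H&E-like coloring."""
    optical_density = (nuclear[..., None] * K_HEMATOXYLIN
                       + cytoplasm[..., None] * K_EOSIN)
    return np.exp(-2.5 * optical_density)   # virtual transmitted light
```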

  2. Bistatic SAR: Signal Processing and Image Formation.

    SciTech Connect

    Wahl, Daniel E.; Yocky, David A.

    2014-10-01

    This report describes the significant processing steps that were used to take the raw recorded digitized signals from the bistatic synthetic aperture radar (SAR) hardware built for the NCNS Bistatic SAR project to a final bistatic SAR image. In general, the process steps herein are applicable to bistatic SAR signals that include the direct-path signal and the reflected signal. The steps include preprocessing, data extraction to form a phase history, and, finally, image formation. Various plots and values are shown at most steps to illustrate the processing for a bistatic COSMO-SkyMed collection gathered on June 10, 2013 at Kirtland Air Force Base, New Mexico.
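
    The image-formation step can be illustrated with a generic bistatic back-projection sketch; this is an assumption for illustration, not the report's actual processing chain, and all names and parameters below are hypothetical.

```python
import numpy as np

def bistatic_backprojection(pulses, tx, rx, pixels, fc, fs, c=3.0e8):
    """pulses: (P, N) range-compressed complex data; tx, rx: (P, 3) platform
    positions per pulse; pixels: (M, 3) ground coordinates; fc: carrier (Hz);
    fs: fast-time sampling rate (Hz). Returns M complex reflectivities."""
    img = np.zeros(len(pixels), dtype=complex)
    fast_time = np.arange(pulses.shape[1]) / fs
    for p in range(pulses.shape[0]):
        # total bistatic path: transmitter -> pixel -> receiver
        r = (np.linalg.norm(pixels - tx[p], axis=1)
             + np.linalg.norm(pixels - rx[p], axis=1))
        tau = r / c
        # sample each pulse at the bistatic delay (clamped at the ends)
        samp = (np.interp(tau, fast_time, pulses[p].real)
                + 1j * np.interp(tau, fast_time, pulses[p].imag))
        img += samp * np.exp(2j * np.pi * fc * tau)   # remove carrier phase
    return img
```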

  3. Palm print image processing with PCNN

    NASA Astrophysics Data System (ADS)

    Yang, Jun; Zhao, Xianhong

    2010-08-01

    The pulse coupled neural network (PCNN) is based on Eckhorn's model of the cat visual cortex and imitates mammalian visual processing; the palm print, meanwhile, has a long history as a personal biometric feature. This inspired us to combine the two: a novel method for palm print processing is proposed, which includes pre-processing and feature extraction of the palm print image using PCNN; the extracted feature is then used for identification. Our experiments show that a verification rate of 87.5% can be achieved under ideal conditions. We also find that the verification rate decreases due to rotation or shift of the palm.
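
    A minimal version of the standard Eckhorn-style PCNN iteration is sketched below, returning the first-firing-time map often used as a feature image; the parameter values are illustrative defaults, not those of the paper.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn(S, n_iter=10, beta=0.2, aF=0.1, aL=1.0, aT=0.3,
         VF=0.5, VL=0.2, VT=20.0):
    """S: normalized gray-scale image acting as the external stimulus."""
    W = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
    F = np.zeros_like(S, dtype=float)   # feeding input
    L = np.zeros_like(F)                # linking input
    T = np.ones_like(F)                 # dynamic threshold
    Y = np.zeros_like(F)                # pulse output
    fire_time = np.zeros_like(F)        # first-firing map used as a feature
    for n in range(1, n_iter + 1):
        link = convolve(Y, W, mode="constant")
        F = np.exp(-aF) * F + VF * link + S
        L = np.exp(-aL) * L + VL * link
        U = F * (1.0 + beta * L)        # internal activity
        Y = (U > T).astype(float)       # neurons fire where activity > threshold
        T = np.exp(-aT) * T + VT * Y    # firing raises the threshold
        fire_time[(Y > 0) & (fire_time == 0)] = n
    return fire_time
```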

  4. Twofold processing for denoising ultrasound medical images.

    PubMed

    Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y

    2015-01-01

    Medical ultrasound (US) imaging non-invasively pictures the inside of the human body for disease diagnostics. Speckle noise attacks ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold uses block-based thresholding, both hard (BHT) and soft (BST), on pixels in the wavelet domain with 8, 16, 32 and 64 non-overlapping block sizes. This first fold is an effective method for reducing speckle, but it also blurs the object of interest. The second fold restores object boundaries and texture with adaptive wavelet fusion. The degraded object in the block-thresholded US image is restored through wavelet coefficient fusion of the object in the original US image and the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with the normalized differential mean (NDF) to introduce the highest level of contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are thus named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate visual quality improvement to an interesting level with the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. Peak signal-to-noise ratio (PSNR), normalized cross-correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. Validation of the proposed method is done by comparison with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images were provided by the AMMA hospital radiology labs at Vijayawada, India. PMID:26697285
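
    A simplified sketch of the first fold (blockwise thresholding of wavelet detail coefficients) follows; the per-block threshold rule is an illustrative assumption, and the adaptive fusion of the second fold is omitted.

```python
import numpy as np
import pywt

def block_threshold(img, block=32, wavelet="db4", level=3, soft=True):
    """Blockwise hard/soft thresholding of wavelet detail coefficients."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    out = [coeffs[0]]                                  # keep approximation
    mode = "soft" if soft else "hard"
    for detail in coeffs[1:]:
        bands = []
        for c in detail:                               # cH, cV, cD sub-bands
            c = c.copy()
            for i in range(0, c.shape[0], block):
                for j in range(0, c.shape[1], block):
                    blk = c[i:i + block, j:j + block]
                    t = np.median(np.abs(blk)) / 0.6745   # per-block threshold
                    blk[...] = pywt.threshold(blk, t, mode=mode)
            bands.append(c)
        out.append(tuple(bands))
    return pywt.waverec2(out, wavelet)
```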

  5. SENTINEL-2 Level 1 Products and Image Processing Performances

    NASA Astrophysics Data System (ADS)

    Baillarin, S. J.; Meygret, A.; Dechoz, C.; Petrucci, B.; Lacherade, S.; Tremas, T.; Isola, C.; Martimort, P.; Spoto, F.

    2012-07-01

    stringent image quality requirements are also described, in particular the geo-location accuracy for both absolute (better than 12.5 m) and multi-temporal (better than 0.3 pixels) cases. Then, the prototyped image processing techniques (both radiometric and geometric) will be addressed. The radiometric corrections will be first introduced. They consist mainly in dark signal and detector relative sensitivity correction, crosstalk correction and MTF restoration. Then, a special focus will be done on the geometric corrections. In particular the innovative method of automatic enhancement of the geometric physical model will be detailed. This method takes advantage of a Global Reference Image database, perfectly geo-referenced, to correct the physical geometric model of each image taken. The processing is based on an automatic image matching process which provides accurate ground control points between a given band of the image to refine and a reference image, allowing to dynamically calibrate the viewing model. The generation of the Global Reference Image database made of Sentinel-2 pre-calibrated mono-spectral images will be also addressed. In order to perform independent validation of the prototyping activity, an image simulator dedicated to Sentinel-2 has been set up. Thanks to this, a set of images have been simulated from various source images and combining different acquisition conditions and landscapes (mountains, deserts, cities …). Given disturbances have been also simulated so as to estimate the end to end performance of the processing chain. Finally, the radiometric and geometric performances obtained by the prototype will be presented. In particular, the geo-location performance of the level-1C products which widely fulfils the image quality requirements will be provided.

  6. Transaction recording in medical image processing

    NASA Astrophysics Data System (ADS)

    Riedel, Christian H.; Ploeger, Andreas; Onnasch, Dietrich G. W.; Mehdorn, Hubertus M.

    1999-07-01

    In medical image processing, original image data on archive servers must never be modified directly. On the other hand, images from read-only media like CD-ROM cannot be changed and saved on the same storage medium. In both cases the modified data have to be stored as a second version, and large amounts of storage volume are needed. We avoid these problems by using a program which records only the transactions applied to the images. Each transaction is stored and used for further utilization and for renewed submission of the modified data. Conventionally, every time an image is viewed or printed, the modified version has to be saved in addition to the recorded data, either automatically or by the user. Compared to these approaches, which not only squander storage space but are also time consuming, our program has the following advantages. First, the original image data, which may not be modified, are protected against manipulation. Second, only small amounts of storage volume and network bandwidth are needed. Third, approved image operations can be automated by macros derived from transaction recordings. Finally, operations on the original data can always be controlled and traced back. As the handling of images becomes easier with this concept, the security of the original image data is guaranteed.
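
    A minimal sketch of the transaction-recording idea: operations are logged instead of saving modified pixels, and the log is replayed against the untouched original on demand. The class and operation names are hypothetical, for illustration only.

```python
import numpy as np

class TransactionLog:
    def __init__(self):
        self.ops = []   # e.g. [("window", (40, 400)), ("rot90", (1,))]

    def record(self, name, *args):
        """Append one transaction; the original image is never touched."""
        self.ops.append((name, args))

    def replay(self, original):
        """Regenerate the modified image by replaying the log."""
        img = original.astype(np.float64)
        for name, args in self.ops:
            if name == "window":          # gray-level windowing: center, width
                c, w = args
                img = np.clip((img - (c - w / 2)) / w, 0, 1)
            elif name == "rot90":         # rotation by multiples of 90 degrees
                img = np.rot90(img, args[0])
        return img
```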

  7. Image Processing Application for Cognition (IPAC) - Traditional and Emerging Topics in Image Processing in Astronomy (Invited)

    NASA Astrophysics Data System (ADS)

    Pesenson, M.; Roby, W.; Helou, G.; McCollum, B.; Ly, L.; Wu, X.; Laine, S.; Hartley, B.

    2008-08-01

    A new application framework for advanced image processing for astronomy is presented. It implements standard two-dimensional operators, and recent developments in the field of non-astronomical image processing (IP), as well as original algorithms based on nonlinear partial differential equations (PDE). These algorithms are especially well suited for multi-scale astronomical images since they increase signal to noise ratio without smearing localized and diffuse objects. The visualization component is based on the extensive tools that we developed for Spitzer Space Telescope's observation planning tool Spot and archive retrieval tool Leopard. It contains many common features, combines images in new and unique ways and interfaces with many astronomy data archives. Both interactive and batch mode processing are incorporated. In the interactive mode, the user can set up simple processing pipelines, and monitor and visualize the resulting images from each step of the processing stream. The system is platform-independent and has an open architecture that allows extensibility by addition of plug-ins. This presentation addresses astronomical applications of traditional topics of IP (image enhancement, image segmentation) as well as emerging new topics like automated image quality assessment (QA) and feature extraction, which have potential for shaping future developments in the field. Our application framework embodies a novel synergistic approach based on integration of image processing, image visualization and image QA (iQA).

  8. Quantitative assessment of susceptibility weighted imaging processing methods

    PubMed Central

    Li, Ningzhi; Wang, Wen-Tung; Sati, Pascal; Pham, Dzung L.; Butman, John A.

    2013-01-01

    Purpose To evaluate different susceptibility weighted imaging (SWI) phase processing methods and parameter selection, thereby improving understanding of potential artifacts, as well as facilitating choice of methodology in clinical settings. Materials and Methods Two major phase processing methods, Homodyne-filtering and phase unwrapping-high pass (HP) filtering, were investigated with various phase unwrapping approaches, filter sizes, and filter types. Magnitude and phase images were acquired from a healthy subject and brain injury patients on a 3T clinical Siemens MRI system. Results were evaluated based on image contrast to noise ratio and presence of processing artifacts. Results When using a relatively small filter size (32 pixels for the matrix size 512 × 512 pixels), all Homodyne-filtering methods were subject to phase errors leading to 2% to 3% masked brain area in lower and middle axial slices. All phase unwrapping-filtering/smoothing approaches demonstrated fewer phase errors and artifacts compared to the Homodyne-filtering approaches. For performing phase unwrapping, Fourier-based methods, although less accurate, were 2–4 orders of magnitude faster than the PRELUDE, Goldstein and Quality-guide methods. Conclusion Although Homodyne-filtering approaches are faster and more straightforward, phase unwrapping followed by HP filtering approaches perform more accurately in a wider variety of acquisition scenarios. PMID:24923594
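
    For reference, a compact homodyne high-pass phase filter of the kind compared above can be written as a short numpy sketch; the window size and mask exponent are illustrative, not the study's settings.

```python
import numpy as np

def homodyne_swi(cplx, filt=32, n_mask=4):
    """cplx: complex image; filt: central k-space window size in pixels."""
    k = np.fft.fftshift(np.fft.fft2(cplx))
    win = np.zeros_like(k)
    c0, c1 = k.shape[0] // 2, k.shape[1] // 2
    sl0 = slice(c0 - filt // 2, c0 + filt // 2)
    sl1 = slice(c1 - filt // 2, c1 + filt // 2)
    win[sl0, sl1] = k[sl0, sl1]                      # keep low frequencies only
    low = np.fft.ifft2(np.fft.ifftshift(win))
    phase = np.angle(cplx / (low + 1e-12))           # high-pass phase
    mask = np.clip((phase + np.pi) / np.pi, 0, 1)    # negative phase -> < 1
    return np.abs(cplx) * mask ** n_mask             # SWI-weighted magnitude
```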

  9. Fundamental concepts of digital image processing

    SciTech Connect

    Twogood, R.E.

    1983-03-01

    The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has requirements unique from the others, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the obtaining of two-dimensional (2-D) computer-aided tomography (CAT) images. A medical decision might be made while the patient is still under observation rather than days later.

  10. Fundamental Concepts of Digital Image Processing

    DOE R&D Accomplishments Database

    Twogood, R. E.

    1983-03-01

    The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has requirements unique from the others, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the obtaining of two-dimensional (2-D) computer-aided tomography (CAT) images. A medical decision might be made while the patient is still under observation rather than days later.

  11. Angiographic imaging using an 18.9 MHz swept-wavelength laser that is phase-locked to the data acquisition clock and resonant scanners (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Tozburun, Serhat; Blatter, Cedric; Siddiqui, Meena; Nam, Ahhyun S.; Vakoc, Benjamin J.

    2016-03-01

    In this study, we present an angiographic system comprising a novel 18.9 MHz swept-wavelength source integrated with a MEMS-based 23.7 kHz fast-axis scanner. The system provides rapid acquisition of frames and volumes on which a range of Doppler and intensity-based angiographic analyses can be performed. Notably, the source and data acquisition computer can be directly phase-locked to provide an intrinsically phase-stable imaging system supporting Doppler measurements without the need for individual A-line triggers or post-processing phase calibration algorithms. The system is integrated with a 1.8 gigasample-per-second (GS/s) acquisition card supporting continuous acquisition to computer RAM for 10 seconds. Using this system, we demonstrate phase-stable acquisitions across volumes acquired at 60 Hz. We also highlight the ability to perform C-mode angiography, providing volume perfusion measurements with 30 Hz temporal resolution. Ultimately, the speed and phase stability of this laser and MEMS scanner platform can be leveraged to accelerate OCT-based angiography and both phase-sensitive and phase-insensitive extraction of blood flow velocity.

  12. How to crack nuts: acquisition process in captive chimpanzees (Pan troglodytes) observing a model.

    PubMed

    Hirata, Satoshi; Morimura, Naruki; Houki, Chiharu

    2009-10-01

    Stone tool use for nut cracking consists of placing a hard-shelled nut onto a stone anvil and then cracking the shell open by pounding it with a stone hammer to get to the kernel. We investigated the acquisition of tool use for nut cracking in a group of captive chimpanzees to clarify what kind of understanding of the tools and actions leads to the acquisition of this type of tool use in the presence of a skilled model. A human experimenter trained a male chimpanzee until he mastered the use of a hammer and anvil stone to crack open macadamia nuts. He was then put in a nut-cracking situation together with his group mates, who were naïve to this tool use; we did not have a control group without a model. The results showed that the process of acquisition could be broken down into several steps, including recognition of applying pressure to the nut, emergence of the use of a combination of three objects, emergence of the hitting action, using a tool for hitting, and hitting the nut. The chimpanzees recognized these different components separately and practiced them one after another. They gradually united these factors in their behavior, leading to their first success. Their behavior did not clearly improve immediately after observing successful nut cracking by a peer, but observation of a skilled group member seemed to have a gradual, long-term influence on the acquisition of nut cracking by naïve chimpanzees.

  13. Summary of the activities of the subgroup on data acquisition and processing

    SciTech Connect

    Connolly, P.L.; Doughty, D.C.; Elias, J.E.

    1981-01-01

    A data acquisition and handling subgroup consisting of approximately 20 members met during the 1981 ISABELLE summer study. Discussions were led by members of the BNL ISABELLE Data Acquisition Group (DAG) with lively participation from outside users. Particularly large contributions were made by representatives of BNL experiments 734, 735, and the MPS, as well as the Fermilab Colliding Detector Facility and the SLAC LASS Facility. In contrast to the 1978 study, the subgroup did not divide its activities into investigations of various individual detectors, but instead attempted to review the current state of the art in the data acquisition, trigger processing, and data handling fields. A series of meetings first reviewed individual pieces of the problem, including the status of the Fastbus project, the Nevis trigger processor, the SLAC 168/E and 3081/E emulators, and efforts within DAG. Additional meetings dealt with the questions involved in specifying and building complete data acquisition systems. For any given problem, a series of possible solutions was proposed by the members of the subgroup. In general, any given solution had both advantages and disadvantages, and there was never any consensus on which approach was best. However, there was agreement that certain problems could only be handled by systems of a given power or greater. What is given here is a review of the various solutions with their associated powers, costs, advantages, and disadvantages.

  14. Image processing of angiograms: A pilot study

    NASA Technical Reports Server (NTRS)

    Larsen, L. E.; Evans, R. A.; Roehm, J. O., Jr.

    1974-01-01

    The technology transfer application this report describes is the result of a pilot study of image-processing methods applied to the image enhancement, coding, and analysis of arteriograms. Angiography is a subspecialty of radiology that employs the introduction of media with high X-ray absorption into arteries in order to study vessel pathology as well as to infer disease of the organs supplied by the vessel in question.

  15. Data acquisition system to interface between imaging instruments and the network: Applications in electron microscopy and ultrasound

    NASA Astrophysics Data System (ADS)

    Kapp, Oscar H.; Ruan, Shengyang

    1997-09-01

    A system for data acquisition for imaging instruments utilizing a computer network was created. Two versions of this system, both with the same basic design, were separately installed in conjunction with an electron microscope and a clinical ultrasound device. They serve the functions of data acquisition and data server to manage and to transfer images from these instruments. The virtues of this system are its simplicity of design, universality, cost effectiveness, ease of management, security for data, and instrument protection. This system, with little or no modification, may be used in conjunction with a broad range of data acquiring instruments in scientific, industrial, and medical laboratories.

  16. Image gathering and processing - Information and fidelity

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Halyo, N.; Samms, R. W.; Stacy, K.

    1985-01-01

    In this paper we formulate and use information and fidelity criteria to assess image gathering and processing, combining optical design with image-forming and edge-detection algorithms. The optical design of the image-gathering system revolves around the relationship among sampling passband, spatial response, and signal-to-noise ratio (SNR). Our formulations of information, fidelity, and optimal (Wiener) restoration account for the insufficient sampling (i.e., aliasing) common in image gathering as well as for the blurring and noise that conventional formulations account for. Performance analyses and simulations for ordinary optical-design constraints and random scenes indicate that (1) different image-forming algorithms prefer different optical designs; (2) informationally optimized designs maximize the robustness of optimal image restorations and lead to the highest-spatial-frequency channel (relative to the sampling passband) for which edge detection is reliable (if the SNR is sufficiently high); and (3) combining the informationally optimized design with a 3 by 3 lateral-inhibitory image-plane-processing algorithm leads to a spatial-response shape that approximates the optimal edge-detection response of (Marr's model of) human vision and thus reduces the data preprocessing and transmission required for machine vision.
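
    The optimal (Wiener) restoration referred to above has a compact frequency-domain form. A minimal sketch for a known blur OTF and a flat signal-to-noise spectrum follows; it ignores the aliasing terms that the paper's formulation additionally accounts for.

```python
import numpy as np

def wiener_restore(img, H, snr=100.0):
    """img: blurred, noisy image; H: optical transfer function, same shape;
    snr: assumed (flat) signal-to-noise power ratio."""
    G = np.fft.fft2(img)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # Wiener filter
    return np.real(np.fft.ifft2(W * G))
```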

  17. CCD architecture for spacecraft SAR image processing

    NASA Technical Reports Server (NTRS)

    Arens, W. E.

    1977-01-01

    A real-time synthetic aperture radar (SAR) image processing architecture amenable to future on-board spacecraft applications is currently under development. Using state-of-the-art charge-coupled device (CCD) technology, low cost and power are inherent features. Other characteristics include the ability to reprogram correlation reference functions, correct for range migration, and compensate for antenna beam pointing errors on the spacecraft in real time. The first spaceborne demonstration is scheduled to be flown as an experiment on a 1982 Shuttle imaging radar mission (SIR-B). This paper describes the architecture and implementation characteristics of this initial spaceborne CCD SAR image processor.

  18. Acquisition of priori tissue optical structure based on non-rigid image registration

    NASA Astrophysics Data System (ADS)

    Wan, Wenbo; Li, Jiao; Liu, Lingling; Wang, Yihan; Zhang, Yan; Gao, Feng

    2015-03-01

    Shape-parameterized diffuse optical tomography (DOT), which is based on the a priori assumption of a uniform distribution of the optical properties within each region, has shown its effectiveness for reconstructing the optical heterogeneities of complex biological tissue. The a priori tissue optical structure can be acquired with the assistance of anatomical imaging methods such as X-ray computed tomography (XCT), which, however, suffers from low contrast for soft tissues that comprise regions of different optical characteristics. For the mouse model, a feasible strategy for a priori tissue optical structure acquisition is proposed based on a non-rigid image registration algorithm. During registration, a mapping matrix is calculated to elastically align the XCT image of the reference mouse to the XCT image of the target mouse. Applying the matrix to the reference atlas, which is a detailed mesh of organs/tissues in the reference mouse, a registered atlas can be obtained as the anatomical structure of the target mouse. By assigning the optical parameters published in the literature for each organ to the corresponding anatomical structure, the optical structure of the target organism can be obtained as a priori information for the DOT reconstruction algorithm. By applying the non-rigid image registration algorithm to a target mouse that is transformed from the reference mouse, the results show that the minimum correlation coefficient can be improved from 0.2781 (before registration) to 0.9032 (after fine registration), and the maximum average Euclidean distance can be decreased from 12.80 mm (before registration) to 1.02 mm (after fine registration), verifying the effectiveness of the algorithm.
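
    A hedged sketch of the atlas-mapping step using an off-the-shelf B-spline registration in SimpleITK follows; the paper's own non-rigid algorithm is not specified here, and the file names and parameter values are assumptions.

```python
import SimpleITK as sitk

# hypothetical file names for the target/reference XCT volumes and the atlas
fixed = sitk.ReadImage("target_mouse_xct.nii", sitk.sitkFloat32)
moving = sitk.ReadImage("reference_mouse_xct.nii", sitk.sitkFloat32)
atlas = sitk.ReadImage("reference_atlas_labels.nii")   # organ label map

# deformable (B-spline) registration of reference onto target anatomy
tx0 = sitk.BSplineTransformInitializer(fixed, [8, 8, 8])
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsCorrelation()
reg.SetOptimizerAsLBFGSB(numberOfIterations=100)
reg.SetInitialTransform(tx0, inPlace=True)
reg.SetInterpolator(sitk.sitkLinear)
tx = reg.Execute(fixed, moving)

# warp the organ atlas onto the target (nearest neighbour preserves labels)
warped_atlas = sitk.Resample(atlas, fixed, tx, sitk.sitkNearestNeighbor, 0)
```

    Per-organ optical parameters from the literature can then be assigned to the warped label map to form the a priori optical structure.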

  19. VPI - VIBRATION PATTERN IMAGER: A CONTROL AND DATA ACQUISITION SYSTEM FOR SCANNING LASER VIBROMETERS

    NASA Technical Reports Server (NTRS)

    Rizzi, S. A.

    1994-01-01

    The Vibration Pattern Imager (VPI) system was designed to control and acquire data from laser vibrometer sensors. The PC-based system uses a digital signal processing (DSP) board and an analog I/O board to control the sensor and to process the data. The VPI system was originally developed for use with the Ometron VPI Sensor (Ometron Limited, Kelvin House, Worsley Bridge Road, London, SE26 5BX, England), but can be readily adapted to any commercially available sensor which provides an analog output signal and requires analog inputs for control of mirror positioning. VPI's graphical user interface allows the operation of the program to be controlled interactively through keyboard and mouse-selected menu options. The main menu controls all functions for setup, data acquisition, display, file operations, and exiting the program. Two types of data may be acquired with the VPI system: single point or "full field". In the single point mode, time series data are sampled by the A/D converter on the I/O board at a user-defined rate for the selected number of samples. The position of the measuring point, adjusted by mirrors in the sensor, is controlled via mouse input. In the "full field" mode, the measurement point is moved over a user-selected rectangular area with up to 256 positions in both x and y directions. The time series data are sampled by the A/D converter on the I/O board and converted to a root-mean-square (rms) value by the DSP board. The rms "full field" velocity distribution is then uploaded for display and storage. VPI is written in C and Texas Instruments' TMS320C30 assembly language for IBM PC series and compatible computers running MS-DOS. The program requires 640K of RAM for execution, and a hard disk with 10 MB or more of free space is recommended. The program also requires a mouse, a VGA graphics display, a Four Channel analog I/O board (Spectrum Signal Processing, Inc.; Westborough, MA), a break-out box and a Spirit-30 board (Sonitech

  20. Optimized acquisition time for x-ray fluorescence imaging of gold nanoparticles: a preliminary study using photon counting detector

    NASA Astrophysics Data System (ADS)

    Ren, Liqiang; Wu, Di; Li, Yuhua; Chen, Wei R.; Zheng, Bin; Liu, Hong

    2016-03-01

    X-ray fluorescence (XRF) is a promising spectroscopic technique for characterizing imaging contrast agents with high atomic numbers (Z), such as gold nanoparticles (GNPs), inside small objects. Its utilization for biomedical applications, however, has been largely limited to experimental research due to long data acquisition times. The objectives of this study are to apply a photon counting detector array to XRF imaging and to determine an optimized XRF data acquisition time at which the acquired XRF image is of acceptable quality, allowing the maximum level of radiation dose reduction. A prototype laboratory XRF imaging configuration consisting of a pencil-beam X-ray source and a photon counting detector array (1 × 64 pixels) is employed to acquire the XRF image by exciting the prepared GNP/water solutions. In order to analyze the signal-to-noise ratio (SNR) improvement versus increased exposure time, all the XRF photons within the energy range of 63-76 keV, which includes the two gold Kα fluorescence peaks, are collected for 1 s, 2 s, 3 s, and so on, all the way up to 200 s. The optimized XRF data acquisition time for imaging different GNP solutions is determined as the moment when the acquired XRF image just reaches a quality with an SNR of 20 dB, which corresponds to an acceptable image quality.
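
    The stopping rule lends itself to a back-of-envelope calculation: under Poisson counting statistics the SNR grows roughly as the square root of the acquisition time, so a short pilot measurement can predict when the 20 dB target will be reached. The pilot numbers below are made up for illustration.

```python
def time_for_target(snr_pilot_db, t_pilot, target_db=20.0):
    """Scale acquisition time so that the SNR (in dB) reaches the target.
    SNR_lin is proportional to sqrt(t), so
    t_target = t_pilot * (SNR_target / SNR_pilot)**2."""
    ratio = 10 ** ((target_db - snr_pilot_db) / 20.0)
    return t_pilot * ratio ** 2

print(time_for_target(snr_pilot_db=11.0, t_pilot=10.0))  # ~79 s to reach 20 dB
```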

  1. In-situ Image Acquisition Strategy on Asteroid Surface by MINERVA Rover in HAYABUSA Mission

    NASA Astrophysics Data System (ADS)

    Yoshimitsu, T.; Sasaki, S.; Yanagisawa, M.

    The Institute of Space and Astronautical Science (ISAS) launched the engineering test spacecraft ``HAYABUSA'' (formerly called ``MUSES-C'') to the near-Earth asteroid ``ITOKAWA (1998SF36)'' on May 9, 2003. HAYABUSA will reach the target asteroid after two years of interplanetary cruise and will descend onto the asteroid surface in 2005 to acquire some fragments, which will be brought back to the Earth in 2007. A tiny rover called ``MINERVA'' is aboard the HAYABUSA spacecraft. MINERVA is the first asteroid rover in the world. It will be deployed onto the surface immediately before the spacecraft touches the asteroid to acquire fragments. It will then autonomously move over the surface by hopping for a couple of days, and the data obtained at multiple places will be transmitted to the Earth via the mother spacecraft. Small cameras and thermometers are installed in the rover. This paper describes the image acquisition strategy of the cameras installed in the rover.

  2. A digital receiver module with direct data acquisition for magnetic resonance imaging systems

    NASA Astrophysics Data System (ADS)

    Tang, Weinan; Sun, Hongyu; Wang, Weimin

    2012-10-01

    A digital receiver module for magnetic resonance imaging (MRI) with detailed hardware implementations is presented. The module is based on a direct sampling scheme using the latest mixed-signal circuit design techniques. A single field-programmable gate array chip is employed to perform software-based digital down conversion for radio frequency signals. The modular architecture of the receiver allows multiple acquisition channels to be implemented on a highly integrated printed circuit board. To maintain the phase coherence of the receiver and the exciter in the context of direct sampling, an effective phase synchronization method was proposed to achieve a phase deviation as small as 0.09°. The performance of the described receiver module was verified in the experiments for both low- and high-field (0.5 T and 1.5 T) MRI scanners and was compared to a modern commercial MRI receiver system.

  3. Recovery of phase inconsistencies in continuously moving table extended field of view magnetic resonance imaging acquisitions.

    PubMed

    Kruger, David G; Riederer, Stephen J; Rossman, Phillip J; Mostardi, Petrice M; Madhuranthakam, Ananth J; Hu, Houchun H

    2005-09-01

    MR images formed using extended FOV continuously moving table data acquisition can have signal falloff and loss of lateral spatial resolution at localized, periodic positions along the direction of table motion. In this work we identify the origin of these artifacts and provide a means for correction. The artifacts are due to a mismatch of the phase of signals acquired from contiguous sampling fields of view and are most pronounced when the central k-space views are being sampled. Correction can be performed using the phase information from a periodically sampled central view to adjust the phase of all other views of that view cycle, making the net phase uniform across each axial plane. Results from experimental phantom and contrast-enhanced peripheral MRA studies show that the correction technique substantially eliminates the artifact for a variety of phase encode orders.
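
    Schematically, the correction can be written as removing, from every view of a cycle, the phase measured on the periodically re-sampled central view. A numpy sketch with an assumed data layout follows.

```python
import numpy as np

def correct_view_cycle(views, center_index=0):
    """views: (n_views, n_readout) complex k-space lines of one view cycle;
    center_index: position of the periodically sampled central view."""
    # reference phase taken at the echo centre of the central view
    phi = np.angle(views[center_index, views.shape[1] // 2])
    # remove it from every view so the net phase is uniform across the cycle
    return views * np.exp(-1j * phi)
```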

  4. A digital receiver module with direct data acquisition for magnetic resonance imaging systems.

    PubMed

    Tang, Weinan; Sun, Hongyu; Wang, Weimin

    2012-10-01

    A digital receiver module for magnetic resonance imaging (MRI) with detailed hardware implementations is presented. The module is based on a direct sampling scheme using the latest mixed-signal circuit design techniques. A single field-programmable gate array chip is employed to perform software-based digital down conversion for radio frequency signals. The modular architecture of the receiver allows multiple acquisition channels to be implemented on a highly integrated printed circuit board. To maintain the phase coherence of the receiver and the exciter in the context of direct sampling, an effective phase synchronization method was proposed to achieve a phase deviation as small as 0.09°. The performance of the described receiver module was verified in the experiments for both low- and high-field (0.5 T and 1.5 T) MRI scanners and was compared to a modern commercial MRI receiver system.

  5. Data acquisition, processing and firing aid software for multichannel EMP simulation

    NASA Astrophysics Data System (ADS)

    Eumurian, Gregoire; Arbaud, Bruno

    1986-08-01

    Electromagnetic compatibility testing yields a large quantity of data for systematic analysis. An automated data acquisition system has been developed, based on standard EMP instrumentation, which allows a pre-established program to be followed while orienting the measurements according to the results obtained. The system is controlled by a computer running interactive programs (multitask windows, scrollable menus, mouse, etc.) which handle the measurement channels, files, and displays, and process data, in addition to providing a firing aid.

  6. Image processing algorithm design and implementation for real-time autonomous inspection of mixed waste

    SciTech Connect

    Schalkoff, R.J.; Shaaban, K.M.; Carver, A.E.

    1996-12-31

    The ARIES #1 (Autonomous Robotic Inspection Experimental System) vision system is used to acquire drum surface images under controlled conditions and subsequently perform autonomous visual inspection leading to a classification as 'acceptable' or 'suspect'. Specific topics described include vision system design methodology, algorithmic structure, hardware processing structure, and image acquisition hardware. Most of these capabilities were demonstrated at the ARIES Phase II demo held on Nov. 30, 1995. Finally, Phase III efforts are briefly addressed.

  7. Industrial Holography Combined With Image Processing

    NASA Astrophysics Data System (ADS)

    Schorner, J.; Rottenkolber, H.; Roid, W.; Hinsch, K.

    1988-01-01

    Holographic test methods have become a valuable tool for the engineer in research and development, and in the field of non-destructive quality control holographic test equipment is now accepted for tests within the production line. Producers of aircraft tyres, for example, use holographic tests to back the guarantee of their tyres; together with image processing, the whole test cycle is automated, and defects within the tyre are found automatically and listed on a printout. The power engine industry uses holographic vibration tests for the optimization of its constructions. In the plastics industry, tanks, wheels, seats, and fans are tested holographically to find the optimum shape. The automotive industry makes holography a tool for noise reduction. Instant holography and image processing techniques for quantitative analysis have led to economic application of holographic test methods. New developments of holographic units in combination with image processing are presented.

  8. Support Routines for In Situ Image Processing

    NASA Technical Reports Server (NTRS)

    Deen, Robert G.; Pariser, Oleg; Yeates, Matthew C.; Lee, Hyun H.; Lorre, Jean

    2013-01-01

    This software consists of a set of application programs that support ground-based image processing for in situ missions. These programs represent a collection of utility routines that perform miscellaneous functions in the context of the ground data system. Each one fulfills some specific need as determined via operational experience. The most unique aspect of these programs is that they are integrated into the large in situ image processing system via the PIG (Planetary Image Geometry) library. They work directly with in situ data, understanding the appropriate image meta-data fields and updating them properly. The programs themselves are completely multimission; all mission dependencies are handled by PIG. This suite of programs consists of: (1) marscahv: generates a linearized, epipolar-aligned image given a stereo pair of images; these images are optimized for 1-D stereo correlations. (2) marscheckcm: compares the camera model in an image label with one derived via kinematics modeling on the ground. (3) marschkovl: checks the overlaps between a list of images in order to determine which might be stereo pairs; this is useful for non-traditional stereo images like long-baseline pairs or those from an articulating arm camera. (4) marscoordtrans: translates mosaic coordinates from one form into another. (5) marsdispcompare: checks a left-to-right stereo disparity image against a right-to-left disparity image to ensure they are consistent with each other. (6) marsdispwarp: takes one image of a stereo pair and warps it through a disparity map to create a synthetic opposite-eye image; for example, a right-eye image could be transformed to look as if it were taken from the left eye. (7) marsfidfinder: finds fiducial markers in an image by projecting their approximate location and then using correlation to locate the markers to subpixel accuracy. These fiducial markers are small targets attached to the spacecraft surface. This helps verify, or improve, the

  9. Processing infrared images of aircraft lapjoints

    NASA Technical Reports Server (NTRS)

    Syed, Hazari; Winfree, William P.; Cramer, K. E.

    1992-01-01

    Techniques for processing IR images of aging aircraft lapjoint data are discussed. Attention is given to a technique for detecting disbonds in aircraft lapjoints which clearly delineates the disbonded region from the bonded regions. The technique is weak on unpainted aircraft skin surfaces, but this limitation can be overcome by using a self-adhering contact sheet. Neural network analysis of raw temperature data has been shown to be an effective tool for the visualization of images. Numerical simulation results show the above processing technique to be effective in delineating disbonds.

  10. Technique for real-time frontal face image acquisition using stereo system

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.; Vizilter, Yuri V.; Kudryashov, Yuri I.

    2013-04-01

    Most existing face recognition systems are based on two-dimensional images, and the quality of recognition is rather high for frontal face images but decreases significantly for other views. It is necessary to compensate for the effect of a change in the posture of a person (the camera angle) for the correct operation of such systems. There are methods for transforming a 2D image of a person into the canonical orientation, but their efficiency depends on the accuracy of determining specific anthropometric points, and problems can arise when the person's face is partly occluded. Another approach is to keep a set of person images at different view angles for further processing, but the need to store and process a large number of two-dimensional images makes this method considerably time consuming. The proposed technique uses a stereo system for fast generation of a 3D model of the person's face and for obtaining a face image in a given orientation from this 3D model. Real-time performance is provided by implementing graph cut methods for face surface 3D reconstruction and applying the CUDA software library for parallel calculation.

  11. From Acoustic Segmentation to Language Processing: Evidence from Optical Imaging

    PubMed Central

    Obrig, Hellmuth; Rossi, Sonja; Telkemeyer, Silke; Wartenburger, Isabell

    2010-01-01

    During language acquisition in infancy and when learning a foreign language, the segmentation of the auditory stream into words and phrases is a complex process. Intuitively, learners use “anchors” to segment the acoustic speech stream into meaningful units like words and phrases. Regularities on a segmental (e.g., phonological) or suprasegmental (e.g., prosodic) level can provide such anchors. Regarding the neuronal processing of these two kinds of linguistic cues a left-hemispheric dominance for segmental and a right-hemispheric bias for suprasegmental information has been reported in adults. Though lateralization is common in a number of higher cognitive functions, its prominence in language may also be a key to understanding the rapid emergence of the language network in infants and the ease at which we master our language in adulthood. One question here is whether the hemispheric lateralization is driven by linguistic input per se or whether non-linguistic, especially acoustic factors, “guide” the lateralization process. Methodologically, functional magnetic resonance imaging provides unsurpassed anatomical detail for such an enquiry. However, instrumental noise, experimental constraints and interference with EEG assessment limit its applicability, pointedly in infants and also when investigating the link between auditory and linguistic processing. Optical methods have the potential to fill this gap. Here we review a number of recent studies using optical imaging to investigate hemispheric differences during segmentation and basic auditory feature analysis in language development. PMID:20725516

  12. From acoustic segmentation to language processing: evidence from optical imaging.

    PubMed

    Obrig, Hellmuth; Rossi, Sonja; Telkemeyer, Silke; Wartenburger, Isabell

    2010-01-01

    During language acquisition in infancy and when learning a foreign language, the segmentation of the auditory stream into words and phrases is a complex process. Intuitively, learners use "anchors" to segment the acoustic speech stream into meaningful units like words and phrases. Regularities on a segmental (e.g., phonological) or suprasegmental (e.g., prosodic) level can provide such anchors. Regarding the neuronal processing of these two kinds of linguistic cues a left-hemispheric dominance for segmental and a right-hemispheric bias for suprasegmental information has been reported in adults. Though lateralization is common in a number of higher cognitive functions, its prominence in language may also be a key to understanding the rapid emergence of the language network in infants and the ease at which we master our language in adulthood. One question here is whether the hemispheric lateralization is driven by linguistic input per se or whether non-linguistic, especially acoustic factors, "guide" the lateralization process. Methodologically, functional magnetic resonance imaging provides unsurpassed anatomical detail for such an enquiry. However, instrumental noise, experimental constraints and interference with EEG assessment limit its applicability, pointedly in infants and also when investigating the link between auditory and linguistic processing. Optical methods have the potential to fill this gap. Here we review a number of recent studies using optical imaging to investigate hemispheric differences during segmentation and basic auditory feature analysis in language development.

  13. FLIPS: Friendly Lisp Image Processing System

    NASA Astrophysics Data System (ADS)

    Gee, Shirley J.

    1991-08-01

    The Friendly Lisp Image Processing System (FLIPS) is the interface to Advanced Target Detection (ATD), a multi-resolutional image analysis system developed by Hughes in conjunction with the Hughes Research Laboratories. Both menu- and graphics-driven, FLIPS enhances system usability by supporting the interactive nature of research and development. Although much progress has been made, fully automated image understanding technology that is both robust and reliable is not a reality. In situations where highly accurate results are required, skilled human analysts must still verify the findings of these systems. Furthermore, the systems often require processing times several orders of magnitude greater than that needed by veteran personnel to analyze the same image. The purpose of FLIPS is to facilitate the ability of an image analyst to take statistical measurements on digital imagery in a timely fashion, a capability critical in research environments where a large percentage of time is expended in algorithm development. In many cases, this entails minor modifications or code tinkering. Without a well-developed man-machine interface, throughput is unduly constricted. FLIPS provides mechanisms which support rapid prototyping for ATD. This paper examines the ATD/FLIPS system. The philosophy of ATD in addressing image understanding problems is described, and the capabilities of FLIPS are discussed, along with a description of the interaction between ATD and FLIPS. Finally, an overview of current plans for the system is outlined.

  14. Signal displacement in spiral-in acquisitions: simulations and implications for imaging in SFG regions.

    PubMed

    Brewer, Kimberly D; Rioux, James A; Klassen, Martyn; Bowen, Chris V; Beyea, Steven D

    2012-07-01

    Susceptibility field gradients (SFGs) cause problems for functional magnetic resonance imaging (fMRI) in regions like the orbital frontal lobes, leading to signal loss and image artifacts (signal displacement and "pile-up"). Pulse sequences with spiral-in k-space trajectories are often used when acquiring fMRI in SFG regions such as inferior/medial temporal cortex because it is believed that they have improved signal recovery and decreased signal displacement properties. Previously postulated theories explain differing reasons why spiral-in appears to perform better than spiral-out; however it is clear that multiple mechanisms are occurring in parallel. This study explores differences in spiral-in and spiral-out images using human and phantom empirical data, as well as simulations consistent with the phantom model. Using image simulations, the displacement of signal was characterized using point spread functions (PSFs) and target maps, the latter of which are conceptually inverse PSFs describing which spatial locations contribute signal to a particular voxel. The magnitude of both PSFs and target maps was found to be identical for spiral-out and spiral-in acquisitions, with signal in target maps being displaced from distant regions in both cases. However, differences in the phase of the signal displacement patterns that consequently lead to changes in the intervoxel phase coherence were found to be a significant mechanism explaining differences between the spiral sequences. The results demonstrate that spiral-in trajectories do preserve more total signal in SFG regions than spiral-out; however, spiral-in does not in fact exhibit decreased signal displacement. Given that this signal can be displaced by significant distances, its recovery may not be preferable for all fMRI applications.

  15. Roughness Estimation from Point Clouds - A Comparison of Terrestrial Laser Scanning and Image Matching by Unmanned Aerial Vehicle Acquisitions

    NASA Astrophysics Data System (ADS)

    Rutzinger, Martin; Bremer, Magnus; Ragg, Hansjörg

    2013-04-01

    Recently, terrestrial laser scanning (TLS) and matching of images acquired by unmanned aerial vehicles (UAVs) have come into operational use for 3D geodata acquisition in geoscience applications. However, the two systems cover different application domains in terms of acquisition conditions and data properties, i.e., accuracy and line of sight. In this study we investigate the major differences between the two platforms for terrain roughness estimation. Terrain roughness is an important input for various applications such as morphometry studies, geomorphologic mapping, and natural process modeling (e.g., rockfall, avalanche, and hydraulic modeling). Data were collected simultaneously by TLS using an Optech ILRIS3D and a rotary UAV (an octocopter from twins.nrn) for a 900 m² test site located in a riverbed in Tyrol, Austria (Judenbach, Mieming). The TLS point cloud was acquired from three scan positions. These were registered using an iterative closest point algorithm and a target-based referencing approach. For registration, geometric targets (spheres) with a diameter of 20 cm were used. These targets were measured with dGPS for absolute georeferencing. The TLS point cloud has an average point density of 19,000 pts/m², which represents a point spacing of about 5 mm. Fifteen images were acquired by the UAV at a height of 20 m using a calibrated camera with a focal length of 18.3 mm. A 3D point cloud containing RGB attributes was derived using the APERO/MICMAC software, by a direct georeferencing approach based on the aircraft IMU data. The point cloud was finally co-registered with the TLS data to guarantee optimal preparation for the analysis. The UAV point cloud has an average point density of 17,500 pts/m², which represents a point spacing of 7.5 mm. After registration and georeferencing, the level of detail of the roughness representation in both point clouds was compared considering elevation differences, roughness, and representation of different grain
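
    A minimal illustration of the kind of roughness measure compared here: treating roughness as the standard deviation of plane-detrended elevations within small grid cells. This NumPy sketch (the function name and cell size are our own choices, not from the paper) could be applied to either the TLS or the UAV point cloud.

      import numpy as np

      def roughness_grid(points, cell=0.1):
          """Roughness per grid cell: std. dev. of plane-detrended elevations.

          points: (N, 3) array of x, y, z in metres; cell: cell size in metres.
          """
          ij = np.floor(points[:, :2] / cell).astype(int)
          cells, inv = np.unique(ij, axis=0, return_inverse=True)
          rough = np.full(len(cells), np.nan)
          for k in range(len(cells)):
              sel = inv == k
              if sel.sum() >= 3:
                  x, y, z = points[sel, 0], points[sel, 1], points[sel, 2]
                  A = np.column_stack([x, y, np.ones_like(x)])
                  coef, *_ = np.linalg.lstsq(A, z, rcond=None)  # best-fit plane
                  rough[k] = np.std(z - A @ coef)               # detrended spread
          return cells, rough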

  16. Onboard Image Processing System for Hyperspectral Sensor.

    PubMed

    Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun

    2015-09-25

    Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large-volume and high-speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed and is implemented in the onboard circuitry that corrects the sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors, in order to maximize the compression ratio. The employed image compression method is based on the Fast, Efficient, Lossless Image compression System (FELICS), a hierarchical predictive coding method with resolution scaling. To improve FELICS's image decorrelation and entropy coding performance, we apply two-dimensional interpolation prediction and adaptive Golomb-Rice coding. The method supports progressive decompression using resolution scaling while still maintaining superior performance in terms of speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which leads to reduced onboard hardware by multiplexing sensor signals into a smaller number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding to its size, weight, power consumption, or fabrication cost.
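
    The entropy-coding stage named above is adaptive Golomb-Rice coding. As a hedged sketch of the underlying code (the adaptive parameter selection and the two-dimensional interpolation predictor are omitted), a residual mapped to a non-negative integer n is written as a unary quotient followed by k remainder bits:

      def golomb_rice_encode(n, k):
          """Golomb-Rice codeword for a non-negative integer n, parameter k.

          Quotient n >> k in unary (ones terminated by a zero), then the k
          low-order bits of n. Signed residuals e are usually mapped first,
          e.g. m = 2*e if e >= 0 else -2*e - 1.
          """
          bits = '1' * (n >> k) + '0'
          if k:
              bits += format(n & ((1 << k) - 1), '0{}b'.format(k))
          return bits

      assert golomb_rice_encode(9, 2) == '11001'  # q=2 -> '110', r=1 -> '01'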

  19. Process to process communication over Fastbus in the data acquisition system of the ALEPH TPC

    SciTech Connect

    Lusiani, A. (Division PPE; Scuola Normale Superiore, Pisa)

    1994-02-01

    The data acquisition system of the ALEPH TPC includes a VAX/VMS computer cluster and 36 intelligent Fastbus modules (ALEPH TPPs) running the OS9 multitasking real-time operating system. Dedicated software has been written to reliably exchange information over Fastbus between the VAX/VMS cluster and the 36 TPPs, to initialize and co-ordinate the microprocessors, and to monitor and debug their operation. The functionality and performance of this software are presented together with an overview of the applications that rely on it.

  20. Processing Images of Craters for Spacecraft Navigation

    NASA Technical Reports Server (NTRS)

    Cheng, Yang; Johnson, Andrew E.; Matthies, Larry H.

    2009-01-01

    A crater-detection algorithm has been conceived to enable automation of what, heretofore, have been manual processes for utilizing images of craters on a celestial body as landmarks for navigating a spacecraft flying near or landing on that body. The images are acquired by an electronic camera aboard the spacecraft, then digitized, then processed by the algorithm, which consists mainly of the following steps: 1. Edges in an image are detected and placed in a database. 2. Crater rim edges are selected from the edge database. 3. Edges that belong to the same crater are grouped together. 4. An ellipse is fitted to each group of crater edges. 5. Ellipses are refined directly in the image domain to reduce errors introduced in the detection of edges and fitting of ellipses. 6. The quality of each detected crater is evaluated. It is planned to utilize this algorithm as the basis of a computer program for automated, real-time, onboard processing of crater-image data. Experimental studies have led to the conclusion that this algorithm is capable of a detection rate >93 percent, a false-alarm rate <5 percent, a geometric error <0.5 pixel, and a position error <0.3 pixel.
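
    A much-simplified sketch of steps 1-4 using OpenCV. The thresholds and minimum contour length are illustrative guesses, and the published rim-selection and grouping logic (and the refinement and quality evaluation of steps 5-6) is far more selective than taking raw contours:

      import cv2

      def detect_crater_ellipses(gray):
          """Toy version of the pipeline: edges -> edge groups -> ellipse fits."""
          edges = cv2.Canny(gray, 50, 150)                        # step 1
          contours, _ = cv2.findContours(edges, cv2.RETR_LIST,
                                         cv2.CHAIN_APPROX_NONE)   # steps 2-3
          ellipses = []
          for c in contours:
              if len(c) >= 20:                 # skip fragments too short to fit
                  ellipses.append(cv2.fitEllipse(c))  # step 4: ((cx, cy), axes, angle)
          return ellipses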

  1. Cardiac imaging in diagnostic VCT using multi-sector data acquisition and image reconstruction: step-and-shoot scan vs. helical scan

    NASA Astrophysics Data System (ADS)

    Tang, Xiangyang; Hsieh, Jiang; Seamans, John L.; Dong, Fang; Okerlund, Darin

    2008-03-01

    Since the advent of multi-slice CT, helical scanning has played an increasingly important role in cardiac imaging. With the availability of diagnostic volumetric CT, the step-and-shoot scan has recently become popular. A step-and-shoot scan decouples patient table motion from heart beating, so the temporal window for data acquisition and image reconstruction can be optimized, resulting in significantly reduced radiation dose and improved tolerance to heart rate variation and inter-cycle cardiac motion inconsistency. Multi-sector data acquisition and image reconstruction have been utilized in helical cardiac imaging to improve temporal resolution, but suffer from the coupling of heart beating and patient table motion. Recognizing the clinical demands, the multi-sector data acquisition scheme for the step-and-shoot scan is investigated in this paper. The most outstanding feature of multi-sector data acquisition combined with the step-and-shoot scan is the decoupling of patient table movement from heart beating, which offers the opportunity to employ prospective ECG gating to improve dose efficiency and to fine-tune the cardiac imaging phase to suppress artifacts caused by inter-cycle cardiac motion inconsistency. The improvement in temporal resolution and the resultant suppression of motion artifacts are evaluated via motion phantoms driven by artificial ECG signals. Both theoretical analysis and experimental evaluation show promising results for the multi-sector data acquisition scheme employed with the step-and-shoot scan. With the ever-increasing gantry rotation speed and detector longitudinal coverage in state-of-the-art VCT scanners, it is expected that the step-and-shoot scan with a multi-sector data acquisition scheme will play an increasingly important role in cardiac imaging using diagnostic VCT scanners.

  2. Enhanced neutron imaging detector using optical processing

    SciTech Connect

    Hutchinson, D.P.; McElhaney, S.A.

    1992-08-01

    Existing neutron imaging detectors have limited count rates due to inherent physical and electronic limitations. The popular multiwire proportional counter is limited by gas recombination to a count rate of less than 10^5 n/s over the entire array, and the neutron Anger camera, even though improved with new fiber-optic encoding methods, can only achieve 10^6 cps over a limited array. We present a preliminary design for a new type of neutron imaging detector with a resolution of 2-5 mm and a count rate capability of 10^6 cps per pixel element. We propose to combine optical and electronic processing to economically increase the throughput of advanced detector systems while simplifying computing requirements. By placing a scintillator screen ahead of an optical image processor followed by a detector array, a high-throughput imaging detector may be constructed.

  3. Simplified labeling process for medical image segmentation.

    PubMed

    Gao, Mingchen; Huang, Junzhou; Huang, Xiaolei; Zhang, Shaoting; Metaxas, Dimitris N

    2012-01-01

    Image segmentation plays a crucial role in many medical imaging applications by automatically locating the regions of interest. Typically, supervised learning-based segmentation methods require a large set of accurately labeled training data. However, the labeling process is tedious, time-consuming, and sometimes not necessary. We propose a robust logistic regression algorithm to handle label outliers such that doctors do not need to waste time on precisely labeling images for the training set. To validate its effectiveness and efficiency, we conduct carefully designed experiments on cervigram image segmentation in the presence of label outliers. Experimental results show that the proposed robust logistic regression algorithms achieve superior performance compared to previous methods, which validates the benefits of the proposed algorithms. PMID:23286072
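
    The paper's exact estimator is not reproduced here; the sketch below shows one generic way to make logistic regression tolerant of label outliers, by zero-weighting samples whose per-example loss is far above the median on each iteration (all names and constants are ours):

      import numpy as np

      def robust_logreg(X, y, iters=200, lr=0.1, clip=3.0):
          """Gradient-descent logistic regression that down-weights samples
          with unusually high loss, i.e. suspected label outliers."""
          w = np.zeros(X.shape[1])
          for _ in range(iters):
              p = 1.0 / (1.0 + np.exp(-(X @ w)))
              loss = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
              keep = (loss < clip * np.median(loss) + 1e-12).astype(float)
              w -= lr * (X.T @ (keep * (p - y))) / len(y)
          return w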

  4. MATHEMATICAL METHODS IN MEDICAL IMAGE PROCESSING

    PubMed Central

    ANGENENT, SIGURD; PICHON, ERIC; TANNENBAUM, ALLEN

    2013-01-01

    In this paper, we describe some central mathematical problems in medical imaging. The subject has been undergoing rapid changes driven by better hardware and software. Much of the software is based on novel methods utilizing geometric partial differential equations in conjunction with standard signal/image processing techniques as well as computer graphics facilitating man/machine interactions. As part of this enterprise, researchers have been trying to base biomedical engineering principles on rigorous mathematical foundations for the development of software methods to be integrated into complete therapy delivery systems. These systems support the more effective delivery of many image-guided procedures such as radiation therapy, biopsy, and minimally invasive surgery. We will show how mathematics may impact some of the main problems in this area, including image enhancement, registration, and segmentation. PMID:23645963
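
    As one concrete instance of the PDE-based enhancement methods surveyed (our choice of example, not taken from the paper): Perona-Malik anisotropic diffusion smooths noise while preserving edges by making the diffusivity fall off where the image gradient is large.

      import numpy as np

      def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
          """Explicit scheme for I_t = div(g(|grad I|) grad I), with
          g(s) = 1 / (1 + (s / kappa)^2); dt <= 0.25 for stability."""
          u = img.astype(float)
          g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)
          for _ in range(n_iter):
              dn = np.roll(u, -1, 0) - u   # differences to the four neighbours
              ds = np.roll(u, 1, 0) - u
              de = np.roll(u, -1, 1) - u
              dw = np.roll(u, 1, 1) - u
              u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
          return u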

  5. A knowledge acquisition process to analyse operational problems in solid waste management facilities.

    PubMed

    Dokas, Ioannis M; Panagiotakopoulos, Demetrios C

    2006-08-01

    The available expertise on managing and operating solid waste management (SWM) facilities varies among countries and among types of facilities. Few experts are willing to record their experience, while few researchers systematically investigate the chains of events that could trigger operational failures in a facility; expertise acquisition and dissemination in SWM is neither popular nor easy, despite the great need for it. This paper presents a knowledge acquisition process aimed at capturing, codifying and expanding reliable expertise and propagating it to non-experts. The knowledge engineer (KE), the person performing the acquisition, must identify the events (or causes) that could trigger a failure, determine whether a specific event could trigger more than one failure, and establish how various events are related among themselves and how they are linked to specific operational problems. The proposed process, which utilizes logic diagrams (fault trees) widely used in system safety and reliability analyses, was used for the analysis of 24 common landfill operational problems. The acquired knowledge led to the development of a web-based expert system (Landfill Operation Management Advisor, http://loma.civil.duth.gr), which estimates the likelihood of operational problems, provides advice and suggests solutions.
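
    The logic diagrams mentioned combine basic-event probabilities through AND/OR gates. A minimal evaluator, assuming independent basic events (the tree below is a made-up illustration, not one of the 24 analyzed problems):

      def evaluate(node):
          """node is a probability (float) or ('AND'|'OR', [children])."""
          if isinstance(node, float):
              return node
          gate, children = node
          probs = [evaluate(c) for c in children]
          if gate == 'AND':                       # all child events must occur
              out = 1.0
              for p in probs:
                  out *= p
              return out
          out = 1.0                               # OR: 1 - P(no child occurs)
          for p in probs:
              out *= 1.0 - p
          return 1.0 - out

      tree = ('AND', [0.05, ('OR', [0.10, 0.20])])
      print(evaluate(tree))   # 0.05 * (1 - 0.9 * 0.8) = 0.014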

  6. APNEA list mode data acquisition and real-time event processing

    SciTech Connect

    Hogle, R.A.; Miller, P.; Bramblett, R.L.

    1997-11-01

    The LMSC Active Passive Neutron Examinations and Assay (APNEA) Data Logger is a VME-based data acquisition system using commercial off-the-shelf hardware with application-specific software. It receives TTL inputs from eighty-eight ³He detector tubes and eight timing signals. Two data sets are generated concurrently for each acquisition session: (1) a List Mode recording of all detector and timing signals, timestamped to 3 microsecond resolution; (2) Event Accumulations generated in real time by counting events into short (tens of microseconds) and long (seconds) time bins following repetitive triggers. List Mode data sets can be post-processed to: (1) determine the optimum time bins for TRU assay of waste drums, (2) analyze a given data set in several ways to match different assay requirements and conditions, and (3) confirm assay results by examining details of the raw data. Data Logger events are processed and timestamped by an array of 15 TMS320C40 DSPs and delivered to an embedded controller (PowerPC 604) for interim disk storage. Three acquisition modes, corresponding to different trigger sources, are provided. A standard network interface to a remote host system (Windows NT or SunOS) provides for system control, status, and transfer of previously acquired data.
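
    The post-processing described, i.e. choosing optimum time bins after the fact, is possible because list-mode data keep every timestamp. A sketch of the re-binning step (array names and bin widths are illustrative):

      import numpy as np

      def rebin_list_mode(event_times, trigger_times, bin_edges):
          """Histogram list-mode events into time bins following each trigger;
          times in seconds, bin_edges relative to the trigger."""
          counts = np.zeros(len(bin_edges) - 1)
          for t0 in trigger_times:
              counts += np.histogram(event_times - t0, bins=bin_edges)[0]
          return counts

      short_bins = np.arange(0.0, 1e-3, 20e-6)   # tens-of-microseconds bins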

  7. Phonological processing in deaf signers and the impact of age of first language acquisition.

    PubMed

    MacSweeney, Mairéad; Waters, Dafydd; Brammer, Michael J; Woll, Bencie; Goswami, Usha

    2008-04-15

    Just as words can rhyme, the signs of a signed language can share structural properties, such as location. Linguistic description at this level is termed phonology. We report that a left-lateralised fronto-parietal network is engaged during phonological similarity judgements made in both English (rhyme) and British Sign Language (BSL; location). Since these languages operate in different modalities, these data suggest that the neural network supporting phonological processing is, to some extent, supramodal. Activation within this network was however modulated by language (BSL/English), hearing status (deaf/hearing), and age of BSL acquisition (native/non-native). The influence of language and hearing status suggests an important role for the posterior portion of the left inferior frontal gyrus in speech-based phonological processing in deaf people. This, we suggest, is due to increased reliance on the articulatory component of speech when the auditory component is absent. With regard to age of first language acquisition, non-native signers activated the left inferior frontal gyrus more than native signers during the BSL task, and also during the task performed in English, which both groups acquired late. This is the first neuroimaging demonstration that age of first language acquisition has implications not only for the neural systems supporting the first language, but also for networks supporting languages learned subsequently.

  8. On the Contrastive Analysis of Features in Second Language Acquisition: Uninterpretable Gender on Past Participles in English-French Processing

    ERIC Educational Resources Information Center

    Dekydtspotter, Laurent; Renaud, Claire

    2009-01-01

    Lardiere's discussion raises important questions about the use of features in second language (L2) acquisition. This response examines predictions for processing of a feature-valuing model vs. a frequency-sensitive, associative model in explaining the acquisition of French past participle agreement. Results from a reading-time experiment support…

  9. A Psychometric Study of Reading Processes in L2 Acquisition: Deploying Deep Processing to Push Learners' Discourse Towards Syntactic Processing-Based Constructions

    ERIC Educational Resources Information Center

    Manuel, Carlos J.

    2009-01-01

    This study assesses reading processes and/or strategies needed to deploy deep processing that could push learners towards syntactic-based constructions in L2 classrooms. Research has found L2 acquisition to present varying degrees of success and/or fossilization (Bley-Vroman 1989, Birdsong 1992 and Sharwood Smith 1994). For example, learners have…

  10. Functional optoacoustic imaging of moving objects using microsecond-delay acquisition of multispectral three-dimensional tomographic data.

    PubMed

    Deán-Ben, Xosé Luís; Bay, Erwin; Razansky, Daniel

    2014-07-30

    The breakthrough capacity of optoacoustics for three-dimensional visualization of dynamic events in real time has recently been showcased. Yet, efficient spectral unmixing for functional imaging of entire volumetric regions is significantly challenged by motion artifacts in concurrent acquisitions at multiple wavelengths. Here, we introduce a method for simultaneous acquisition of multispectral volumetric datasets by introducing a microsecond-level delay between excitation laser pulses at different wavelengths. Robust performance is demonstrated by real-time volumetric visualization of functional blood parameters in human vasculature with a handheld matrix array optoacoustic probe. This approach can avert image artifacts caused by velocities greater than 2 m/s; thus, it not only facilitates imaging in the presence of respiratory, cardiac, or other fast intrinsic movements in living tissues, but can also achieve artifact-free imaging under more significant motion, e.g., abrupt displacements during handheld operation in a clinical environment.
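
    Spectral unmixing of the multiwavelength volumes reduces, per voxel, to a linear least-squares problem when the chromophore absorption spectra are known. A hedged sketch (matrix shapes and names are ours, not from the paper):

      import numpy as np

      def unmix(images, spectra):
          """images: (W, Nvox) signals at W wavelengths; spectra: (W, C) known
          absorption spectra of C chromophores (e.g. HbO2 and Hb).
          Returns (C, Nvox) least-squares concentration estimates."""
          coeffs, *_ = np.linalg.lstsq(spectra, images, rcond=None)
          return coeffs

      # Oxygen saturation would follow as HbO2 / (HbO2 + Hb) per voxel.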

  11. Web-based document image processing

    NASA Astrophysics Data System (ADS)

    Walker, Frank L.; Thoma, George R.

    1999-12-01

    Increasing numbers of research libraries are turning to the Internet for electronic interlibrary loan and for document delivery to patrons. This has been made possible through the widespread adoption of software such as Ariel and DocView. Ariel, a product of the Research Libraries Group, converts paper-based documents to monochrome bitmapped images and delivers them over the Internet. The National Library of Medicine's DocView is primarily designed for library patrons. Although libraries and their patrons are beginning to reap the benefits of this new technology, barriers exist, e.g., differences in image file format, that lead to difficulties in the use of library document information. To research how to overcome such barriers, the Communications Engineering Branch of the Lister Hill National Center for Biomedical Communications, an R&D division of NLM, has developed a web site called the DocMorph Server. This is part of an ongoing intramural R&D program in document imaging that has spanned many aspects of electronic document conversion and preservation, Internet document transmission, and document usage. The DocMorph Server web site is designed to fill two roles. First, in a role that will benefit both libraries and their patrons, it allows Internet users to upload scanned image files for conversion to alternative formats, thereby enabling wider delivery and easier usage of library document information. Second, the DocMorph Server provides the design team an active test bed for evaluating the effectiveness and utility of new document image processing algorithms and functions, so that they may be evaluated for possible inclusion in other image processing software products being developed at NLM or elsewhere. This paper describes the design of the prototype DocMorph Server and the image processing functions being implemented on it.

  12. Polarization information processing and software system design for simultaneously imaging polarimetry

    NASA Astrophysics Data System (ADS)

    Wang, Yahui; Liu, Jing; Jin, Weiqi; Wen, Renjie

    2015-08-01

    Simultaneous imaging polarimetry can realize real-time polarization imaging of a dynamic scene, which has wide application prospects. This paper first briefly illustrates the design of a double separate Wollaston prism simultaneous imaging polarimeter, and then emphasis is placed on the polarization information processing methods and software system design for the designed polarimeter. The polarization information processing methods consist of adaptive image segmentation, high-accuracy image registration, and instrument matrix calibration. Morphological image processing was used for image segmentation by taking the dilation of an image; the accuracy of image registration can reach 0.1 pixel based on spatial- and frequency-domain cross-correlation; instrument matrix calibration adopted a four-point calibration method. The software system was implemented under the Windows environment in the C++ programming language, and realizes synchronous polarization image acquisition and preservation, image processing, and polarization information extraction and display. Polarization data obtained with the designed polarimeter show that the polarization information processing methods and the software system effectively perform real-time polarization measurement of the four Stokes parameters of a scene, and that the processing methods effectively improve the polarization detection accuracy.
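
    For the linear Stokes components, four registered intensity images behind analyzers at 0, 45, 90, and 135 degrees suffice. The sketch below assumes an ideal instrument matrix, whereas the system described applies a calibrated one:

      import numpy as np

      def linear_stokes(i0, i45, i90, i135):
          """Linear Stokes parameters from four registered polarization images."""
          s0 = 0.5 * (i0 + i45 + i90 + i135)          # total intensity
          s1 = i0 - i90
          s2 = i45 - i135
          dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)  # degree of linear pol.
          aop = 0.5 * np.arctan2(s2, s1)                         # angle of polarization
          return s0, s1, s2, dolp, aop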

  13. Mariner 9-Image processing and products

    USGS Publications Warehouse

    Levinthal, E.C.; Green, W.B.; Cutts, J.A.; Jahelka, E.D.; Johansen, R.A.; Sander, M.J.; Seidman, J.B.; Young, A.T.; Soderblom, L.A.

    1973-01-01

    The purpose of this paper is to describe the system for the display, processing, and production of image-data products created to support the Mariner 9 Television Experiment. Of necessity, the system was large in order to respond to the needs of a large team of scientists with a broad scope of experimental objectives. The desire to generate processed data products as rapidly as possible, to take advantage of adaptive planning during the mission, coupled with the complexities introduced by the nature of the vidicon camera, greatly increased the scale of the ground-image processing effort. This paper describes the systems that carried out the processes and delivered the products necessary for real-time and near-real-time analyses. References are made to the computer algorithms used for the different levels of decalibration and analysis. © 1973.

  14. Improving Synthetic Aperture Image by Image Compounding in Beamforming Process

    NASA Astrophysics Data System (ADS)

    Martínez-Graullera, Oscar; Higuti, Ricardo T.; Martín, Carlos J.; Ullate, Luis. G.; Romero, David; Parrilla, Montserrat

    2011-06-01

    In this work, signal processing techniques are used to improve the quality of images based on multi-element synthetic aperture techniques. Using several apodization functions to obtain different side lobe distributions, a polarity function and a threshold criterion are used to develop an image compounding technique. The spatial diversity is increased using an additional array, which generates complementary information about the defects, improving the results of the proposed algorithm and producing high-resolution, high-contrast images. The inspection of isotropic plate-like structures using linear arrays and Lamb waves is presented. Experimental results are shown for a 1-mm-thick isotropic aluminum plate with artificial defects using linear arrays formed by 30 piezoelectric elements, with the low-dispersion symmetric mode S0 at a frequency of 330 kHz.
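
    A schematic reading of the compounding rule (our simplification, not the authors' exact criterion): reconstruct the same view with several apodizations, suppress pixels where the signed RF samples disagree in polarity (side lobes move with the apodization while the defect response does not), and keep the minimum envelope elsewhere.

      import numpy as np

      def compound(images_rf, threshold=0.1):
          """images_rf: list of K co-registered signed RF images, one per
          apodization; threshold in the same units as the RF data.
          Returns a compounded envelope image."""
          stack = np.stack(images_rf)                              # (K, H, W)
          same_sign = np.all(stack > 0, 0) | np.all(stack < 0, 0)  # polarity test
          env_min = np.min(np.abs(stack), 0)                       # minimum envelope
          return np.where(same_sign & (env_min > threshold), env_min, 0.0)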

  15. Digital image processing of vascular angiograms

    NASA Technical Reports Server (NTRS)

    Selzer, R. H.; Beckenbach, E. S.; Blankenhorn, D. H.; Crawford, D. W.; Brooks, S. H.

    1975-01-01

    The paper discusses the estimation of the degree of atherosclerosis in the human femoral artery through the use of a digital image processing system for vascular angiograms. The film digitizer uses an electronic image dissector camera to scan the angiogram and convert the recorded optical density information into a numerical format. Another processing step involves locating the vessel edges from the digital image. The computer has been programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements are combined into an atherosclerosis index, which is found in a post-mortem study to correlate well with both visual and chemical estimates of atherosclerotic disease.
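
    A minimal sketch of the edge-location step on one densitometric scan line across the vessel (the extrema-of-gradient heuristic and all names here are ours, not the paper's exact procedure):

      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      def vessel_edges(profile, sigma=2.0):
          """profile: 1-D optical-density profile across the vessel.
          Returns (left, right, width_in_pixels) taken at the extrema of
          the smoothed gradient."""
          grad = gaussian_filter1d(np.asarray(profile, float), sigma, order=1)
          e1, e2 = int(np.argmax(grad)), int(np.argmin(grad))
          left, right = min(e1, e2), max(e1, e2)
          return left, right, right - left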

  16. Stochastic processes, estimation theory and image enhancement

    NASA Technical Reports Server (NTRS)

    Assefi, T.

    1978-01-01

    An introductory account of stochastic processes, estimation theory, and image enhancement is presented. The book is primarily intended for first-year graduate students and practicing engineers and scientists whose work requires an acquaintance with the theory. Fundamental concepts of probability that are required to support the main topics are reviewed, and the appendices discuss the remaining mathematical background.

  17. Age of acquisition and imageability ratings for a large set of words, including verbs and function words.

    PubMed

    Bird, H; Franklin, S; Howard, D

    2001-02-01

    Age of acquisition and imageability ratings were collected for 2,645 words, including 892 verbs and 213 function words. Words that were ambiguous as to grammatical category were disambiguated: Verbs were shown in their infinitival form, and nouns (where appropriate) were preceded by the indefinite article (such as to crack and a crack). Subjects were speakers of British English selected from a wide age range, so that differences in the responses across age groups could be compared. Within the subset of early acquired noun/verb homonyms, the verb forms were rated as later acquired than the nouns, and the verb homonyms of high-imageability nouns were rated as significantly less imageable than their noun counterparts. A small number of words received significantly earlier or later age of acquisition ratings when the 20-40 years and 50-80 years age groups were compared. These tend to comprise words that have come to be used more frequently in recent years (either through technological advances or social change), or those that have fallen out of common usage. Regression analyses showed that although word length, familiarity, and concreteness make independent contributions to the age of acquisition measure, frequency and imageability are the most important predictors of rated age of acquisition.

  18. Limiting liability via high resolution image processing

    SciTech Connect

    Greenwade, L.E.; Overlin, T.K.

    1996-12-31

    The utilization of high-resolution image processing allows forensic analysts and visualization scientists to assist detectives by enhancing field photographs, and by providing the tools and training to increase the quality and usability of field photos. Through the use of digitized photographs and computerized enhancement software, field evidence can be obtained and processed as "evidence ready", even in poor lighting and shadowed conditions or darkened rooms. These images, which are most often unusable when taken with standard camera equipment, can be shot in the worst of photographic conditions and be processed as usable evidence. Visualization scientists have taken digital photographic image processing and moved crime scene photography into the technology age. The use of high-resolution technology will assist law enforcement in making better use of crime scene photography and positive identification of prints. Valuable courtroom and investigation time can be saved and better served by this accurate, performance-based process. Inconclusive evidence does not lead to convictions. Enhancement of the photographic capability helps solve a major problem with crime scene photos: images that, if taken with standard equipment and without the benefit of enhancement software, would be inconclusive, allowing guilty parties to go free for lack of evidence.

  19. Measurement of eye lens dose for Varian On-Board Imaging with different cone-beam computed tomography acquisition techniques.

    PubMed

    Deshpande, Sudesh; Dhote, Deepak; Thakur, Kalpna; Pawar, Amol; Kumar, Rajesh; Kumar, Munish; Kulkarni, M S; Sharma, S D; Kannan, V

    2016-01-01

    The objective of this work was to measure patient eye lens dose for different cone-beam computed tomography (CBCT) acquisition protocols of Varian's On-Board Imaging (OBI) system using optically stimulated luminescence dosimeters (OSLDs) and to study the variation in eye lens dose with patient geometry and the distance from the isocenter to the eye lens. During the experimental measurements, an OSLD was placed on the patient between the eyebrows, in line with the nose, during CBCT image acquisition to measure eye lens dose. The eye lens dose measurements were carried out for three different cone-beam acquisition protocols (standard-dose head, low-dose head [LDH], and high-quality head [HQH]) of the Varian OBI. Measured doses were correlated with patient geometry and the distance between the isocenter and the eye lens. Measured eye lens doses for the standard head and HQH protocols were in the range of 1.8-3.2 mGy and 4.5-9.9 mGy, respectively, whereas the measured eye lens dose for the LDH protocol was in the range of 0.3-0.7 mGy. The measured data indicate that the eye lens dose to the patient depends on the selected imaging protocol. It was also observed that the eye lens dose does not depend on patient geometry but strongly depends on the distance between the eye lens and the treatment field isocenter. The undoubted advantages of the imaging system should not be counterbalanced by inappropriate selection of the imaging protocol, especially a very intense imaging protocol. PMID:27651564

  20. Visual parameter optimisation for biomedical image processing

    PubMed Central

    2015-01-01

    Background Biomedical image processing methods require users to optimise input parameters to ensure high-quality output. This presents two challenges. First, it is difficult to optimise multiple input parameters for multiple input images. Second, it is difficult to achieve an understanding of underlying algorithms, in particular, relationships between input and output. Results We present a visualisation method that transforms users' ability to understand algorithm behaviour by integrating input and output, and by supporting exploration of their relationships. We discuss its application to a colour deconvolution technique for stained histology images and show how it enabled a domain expert to identify suitable parameter values for the deconvolution of two types of images, and metrics to quantify deconvolution performance. It also enabled a breakthrough in understanding by invalidating an underlying assumption about the algorithm. Conclusions The visualisation method presented here provides analysis capability for multiple inputs and outputs in biomedical image processing that is not supported by previous analysis software. The analysis supported by our method is not feasible with conventional trial-and-error approaches. PMID:26329538

  1. Subband/transform functions for image processing

    NASA Technical Reports Server (NTRS)

    Glover, Daniel

    1993-01-01

    Functions for image data processing written for use with the MATLAB(TM) software package are presented. These functions provide the capability to transform image data with block transformations (such as the Walsh-Hadamard) and to produce spatial frequency subbands of the transformed data. Block transforms are equivalent to simple subband systems. The transform coefficients are reordered using a simple permutation to give subbands. The low frequency subband is a low resolution version of the original image, while the higher frequency subbands contain edge information. The transform functions can be cascaded to provide further decomposition into more subbands. If the cascade is applied to all four of the first stage subbands (in the case of a four band decomposition), then a uniform structure of sixteen bands is obtained. If the cascade is applied only to the low frequency subband, an octave structure of seven bands results. Functions for the inverse transforms are also given. These functions can be used for image data compression systems. The transforms do not in themselves produce data compression, but prepare the data for quantization and compression. Sample quantization functions for subbands are also given. A typical compression approach is to subband the image data, quantize it, then use statistical coding (e.g., run-length coding followed by Huffman coding) for compression. Contour plots of image data and subbanded data are shown.
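
    The functions described are MATLAB code; as a Python/NumPy analogue, a 2x2 Walsh-Hadamard block transform with the permutation into four subbands looks like this (cascading block2x2_subbands on ll then reproduces the uniform or octave structures described):

      import numpy as np

      def block2x2_subbands(img):
          """2x2 Walsh-Hadamard block transform, coefficients regrouped into
          four subbands: ll is a half-resolution image, the others hold edges."""
          a = img[0::2, 0::2].astype(float)   # one sample per block corner
          b = img[0::2, 1::2].astype(float)
          c = img[1::2, 0::2].astype(float)
          d = img[1::2, 1::2].astype(float)
          ll = (a + b + c + d) / 2.0          # low-pass subband
          lh = (a - b + c - d) / 2.0          # horizontal detail
          hl = (a + b - c - d) / 2.0          # vertical detail
          hh = (a - b - c + d) / 2.0          # diagonal detail
          return ll, lh, hl, hh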

  2. Novel ultrahigh resolution data acquisition and image reconstruction for multi-detector row CT

    SciTech Connect

    Flohr, T. G.; Stierstorfer, K.; Suess, C.; Schmidt, B.; Primak, A. N.; McCollough, C. H.

    2007-05-15

    We present and evaluate a special ultrahigh resolution mode providing considerably enhanced spatial resolution both in the scan plane and in the z-axis direction for a routine medical multi-detector row computed tomography (CT) system. Data acquisition is performed by using a flying focal spot both in the scan plane and in the z-axis direction in combination with tantalum grids that are inserted in front of the multi-row detector to reduce the aperture of the detector elements both in-plane and in the z-axis direction. The dose utilization of the system for standard applications is not affected, since the grids are moved into place only when needed and are removed for standard scanning. By means of this technique, image slices with a nominal section width of 0.4 mm (measured full width at half maximum=0.45 mm) can be reconstructed in spiral mode on a CT system with a detector configuration of 32x0.6 mm. The measured 2% value of the in-plane modulation transfer function (MTF) is 20.4 lp/cm, the measured 2% value of the longitudinal (z axis) MTF is 21.5 lp/cm. In a resolution phantom with metal line pair test patterns, spatial resolution of 20 lp/cm can be demonstrated both in the scan plane and along the z axis. This corresponds to an object size of 0.25 mm that can be resolved. The new mode is intended for ultrahigh resolution bone imaging, in particular for wrists, joints, and inner ear studies, where a higher level of image noise due to the reduced aperture is an acceptable trade-off for the clinical benefit brought about by the improved spatial resolution.

  3. Color Imaging management in film processing

    NASA Astrophysics Data System (ADS)

    Tremeau, Alain; Konik, Hubert; Colantoni, Philippe

    2003-12-01

    The latest research projects in the LIGIV laboratory concern the capture, processing, archiving, and display of color images considering the trichromatic nature of the Human Visual System (HVS). Among these projects, one addresses digital cinematographic film sequences of high resolution and dynamic range. This project aims to optimize the use of content for post-production operators and for the end user. The studies presented in this paper address the use of metadata to optimize the consumption of video content on a device of the user's choice, independent of the nature of the equipment that captured the content. Optimizing consumption includes enhancing the quality of image reconstruction on a display. Another part of this project addresses the content-based adaptation of image display. The main focus is on Region of Interest (ROI) operations, based on the ROI concepts of MPEG-7. The aim of this second part is to characterize and ensure the conditions of display even if the display device or display medium changes. This requires, first, the definition of a reference color space and of bidirectional color transformations for each peripheral device (camera, display, film recorder, etc.). The complicating factor is that different devices have different color gamuts, depending on the chromaticity of their primaries and the ambient illumination under which they are viewed. To match the displayed image to the intended appearance, all kinds of production metadata (camera specification, camera color primaries, lighting conditions) should be associated with the film material. Metadata and content together build rich content. The author is assumed to specify conditions as known from the digital graphic arts. To control image pre-processing and post-processing, these specifications should be contained in the film's metadata. The specifications are related to ICC profiles but additionally need to consider mesopic viewing conditions.
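
    The bidirectional color transformations mentioned are, in the simplest case, a 3x3 primaries matrix applied to linearized RGB. As an illustration, with the standard sRGB values standing in for a measured device profile:

      import numpy as np

      # sRGB (D65) to CIE XYZ, applied after removing the display gamma.
      M_SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                                [0.2126, 0.7152, 0.0722],
                                [0.0193, 0.1192, 0.9505]])

      def srgb_to_xyz(rgb):
          """rgb: (..., 3) in [0, 1]. A real device profile supplies its own
          tone curve and primaries matrix in place of these sRGB constants."""
          lin = np.where(rgb <= 0.04045, rgb / 12.92,
                         ((rgb + 0.055) / 1.055) ** 2.4)   # undo display gamma
          return lin @ M_SRGB_TO_XYZ.T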

  4. Bitplane Image Coding With Parallel Coefficient Processing.

    PubMed

    Auli-Llinas, Francesc; Enfedaque, Pablo; Moure, Juan C; Sanchez, Victor

    2016-01-01

    Image coding systems have been traditionally tailored for multiple instruction, multiple data (MIMD) computing. In general, they partition the (transformed) image into codeblocks that can be coded in the cores of MIMD-based processors. Each core executes a sequential flow of instructions to process the coefficients in the codeblock, independently and asynchronously from the other cores. Bitplane coding is a common strategy to code such data. Most of its mechanisms require sequential processing of the coefficients. Recent years have seen the rise of processing accelerators with enhanced computational performance and power efficiency whose architecture is mainly based on the single instruction, multiple data (SIMD) principle. SIMD computing refers to the execution of the same instruction on multiple data in a lockstep, synchronous way. Unfortunately, current bitplane coding strategies cannot fully profit from such processors due to the inherently sequential nature of the coding task. This paper presents bitplane image coding with parallel coefficient processing (BPC-PaCo), a coding method that can process many coefficients within a codeblock in parallel and synchronously. To this end, the scanning order, the context formation, the probability model, and the arithmetic coder of the coding engine have been re-formulated. The experimental results suggest that the penalty in coding performance of BPC-PaCo with respect to the traditional strategies is almost negligible.
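
    Bitplane coders visit the coefficients one binary digit at a time, most significant plane first, so a truncated stream still decodes to a coarse image. A sketch of the plane-extraction step (the context modelling and arithmetic coding that BPC-PaCo reformulates are not shown):

      import numpy as np

      def bitplanes(coeffs, n_planes=8):
          """Magnitude bitplanes of integer coefficients, MSB plane first
          (signs are typically coded separately)."""
          mag = np.abs(np.asarray(coeffs)).astype(np.int64)
          return [((mag >> p) & 1).astype(np.uint8)
                  for p in range(n_planes - 1, -1, -1)]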

  5. [Digital thoracic radiology: devices, image processing, limits].

    PubMed

    Frija, J; de Géry, S; Lallouet, F; Guermazi, A; Zagdanski, A M; De Kerviler, E

    2001-09-01

    In the first part, the different techniques of digital thoracic radiography are described. Since computed radiography with phosphor plates is the most widely commercialized technique, it receives the most emphasis, but the other detectors are also described, such as the selenium-coated drum and direct digital radiography with selenium detectors, as well as indirect flat-panel detectors and a system with four high-resolution CCD cameras. In a second step, the most important image processing methods are discussed: gradation curves, unsharp mask processing, the MUSICA system, dynamic range compression or reduction, and subtraction with dual energy. In the last part, the advantages and drawbacks of computed thoracic radiography are discussed. The most important are the almost consistently good quality of the images and the possibilities of image processing.
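
    Of the processing steps listed, unsharp masking has the simplest form: add back a scaled difference between the image and a blurred copy of itself. A generic sketch (sigma and amount are illustrative values, not taken from the paper):

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def unsharp_mask(img, sigma=25.0, amount=0.7):
          """out = img + amount * (img - blur(img)); a large sigma boosts
          low-frequency contrast, as is useful for chest images."""
          blurred = gaussian_filter(img.astype(float), sigma)
          return img + amount * (img - blurred)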

  7. Users' perceptions of the impact of electronic aids to daily living throughout the acquisition process.

    PubMed

    Ripat, Jacquie; Strock, Anne

    2004-01-01

    This study investigated the experience of seven new users of a particular type of assistive technology through the stages of anticipating, acquiring, and using an electronic aid to daily living. A mixed-methods research approach was used to explore each of these stages. The Psychosocial Impact of Assistive Devices Scale was used to measure the perceived impact of the new assistive technology on users' quality of life, and findings were further explored and developed through open-ended questioning of the participants. Results indicated that, before acquiring the device, users predicted that the electronic aid to daily living would have a positive impact on their feelings of competence and confidence and that the device would enable them in a positive way. One month after acquiring the device, a reduced, yet still positive, impact was observed. By 3 and 6 months after acquisition, perceived impact had returned to the same high positive level as before acquisition. It is suggested that prior to receiving the device, potential users have positive expectations for the device that are not based in experience. At the early acquisition stage, users adjust their expectations of the role of the assistive technology in their lives and strive to balance expectations with reality. Three to 6 months after acquiring an electronic aid to daily living, the participants have a highly positive view, grounded in experience and reality, of how the device impacts their lives. A model illustrating the electronic aid to daily living acquisition process is proposed, and suggestions for future study are provided.

  8. EOS image data processing system definition study

    NASA Technical Reports Server (NTRS)

    Gilbert, J.; Honikman, T.; Mcmahon, E.; Miller, E.; Pietrzak, L.; Yorsz, W.

    1973-01-01

    The Image Processing System (IPS) requirements and configuration are defined for the NASA-sponsored advanced technology Earth Observatory System (EOS). The scope included investigation and definition of IPS operational, functional, and product requirements, considering overall system constraints and interfaces (sensor, etc.). The scope also included investigation of the technical feasibility and definition of a point design reflecting system requirements. The design phase required a survey of present and projected technology related to general and special-purpose processors, high-density digital tape recorders, and image recorders.

  9. How to crack nuts: acquisition process in captive chimpanzees (Pan troglodytes) observing a model.

    PubMed

    Hirata, Satoshi; Morimura, Naruki; Houki, Chiharu

    2009-10-01

    Stone tool use for nut cracking consists of placing a hard-shelled nut onto a stone anvil and then cracking the shell open by pounding it with a stone hammer to get to the kernel. We investigated the acquisition of tool use for nut cracking in a group of captive chimpanzees to clarify what kind of understanding of the tools and actions will lead to the acquisition of this type of tool use in the presence of a skilled model. A human experimenter trained a male chimpanzee until he mastered the use of a hammer and anvil stone to crack open macadamia nuts. He was then put in a nut-cracking situation together with his group mates, who were naïve to this tool use; we did not have a control group without a model. The results showed that the process of acquisition could be broken down into several steps, including recognition of applying pressure to the nut, emergence of the use of a combination of three objects, emergence of the hitting action, using a tool for hitting, and hitting the nut. The chimpanzees recognized these different components separately and practiced them one after another. They gradually united these factors in their behavior, leading to their first success. Their behavior did not clearly improve immediately after observing successful nut cracking by a peer, but observation of a skilled group member seemed to have a gradual, long-term influence on the acquisition of nut cracking by naïve chimpanzees. PMID:19727866

  10. Processing strategies and software solutions for data-independent acquisition in mass spectrometry.

    PubMed

    Bilbao, Aivett; Varesio, Emmanuel; Luban, Jeremy; Strambio-De-Castillia, Caterina; Hopfgartner, Gérard; Müller, Markus; Lisacek, Frédérique

    2015-03-01

    Data-independent acquisition (DIA) offers several advantages over data-dependent acquisition (DDA) schemes for characterizing complex protein digests analyzed by LC-MS/MS. In contrast to the sequential detection, selection, and analysis of individual ions during DDA, DIA systematically parallelizes the fragmentation of all detectable ions within a wide m/z range regardless of intensity, thereby providing a broader dynamic range of detected signals, improved reproducibility for identification, better sensitivity and accuracy for quantification, and, potentially, enhanced proteome coverage. To fully exploit these advantages, composite or multiplexed fragment ion spectra generated by DIA require more elaborate processing algorithms than DDA. This review examines different DIA schemes and, in particular, discusses the concepts applied to data processing. Available software implementations for identification and quantification are presented as comprehensively as possible, and examples of software usage are cited. Processing workflows, including complete proprietary frameworks or combinations of modules from different open-source data processing packages, are described and compared in terms of software availability and usability, programming language, operating system support, input/output data formats, and the main principles employed in the algorithms used for identification and quantification. This comparative study concludes with further discussion of current limitations and foreseeable improvements in the short and medium term.

  13. A Practical Approach to Quantitative Processing and Analysis of Small Biological Structures by Fluorescent Imaging.

    PubMed

    Noller, Crystal M; Boulina, Maria; McNamara, George; Szeto, Angela; McCabe, Philip M; Mendez, Armando J

    2016-09-01

    Standards in quantitative fluorescent imaging are vaguely recognized and receive insufficient discussion. A common best practice is to acquire images at the Nyquist rate, where the highest signal frequency is assumed to be the highest obtainable resolution of the imaging system. However, this particular standard is set to ensure that all obtainable information is being collected. The objective of the current study was to demonstrate that for quantification purposes, these correctly set acquisition rates can be redundant; instead, the linear size of the objects of interest can be used to calculate sufficient information density in the image. We describe optimized image acquisition parameters and unbiased methods for processing and quantification of medium-size cellular structures. Sections of rabbit aortas were immunohistochemically stained to identify and quantify sympathetic varicosities, >2 μm in diameter. Images were processed to reduce background noise and segment objects using free, open-access software. Calculations of the optimal sampling rate for the experiment were based on the size of the objects of interest. The effect of differing sampling rates and processing techniques on object quantification was demonstrated. Oversampling led to a substantial increase in file size, whereas undersampling hindered reliable quantification. Quantification of raw and incorrectly processed images generated false structures, misrepresenting the underlying data. The current study emphasizes the importance of defining image-acquisition parameters based on the structure(s) of interest. The proposed postacquisition processing steps effectively removed background and noise, allowed for reliable quantification, and eliminated user bias. This customizable, reliable method for background subtraction and structure quantification provides a reproducible tool for researchers across biologic disciplines. PMID:27182204
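
    The object-size criterion proposed above amounts to a one-line calculation: choose the pixel size so that the smallest structure of interest spans a fixed number of pixels, instead of defaulting to the Nyquist rate of the optics. The numbers below are illustrative, using the >2 μm varicosities from the study.

      def pixel_size_um(object_diameter_um, samples_across=4.0):
          """Pixel size so the smallest object of interest spans
          samples_across pixels."""
          return object_diameter_um / samples_across

      print(pixel_size_um(2.0))   # 2 um varicosity, 4 px across -> 0.5 um/px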

  14. Human movement analysis with image processing in real time

    NASA Astrophysics Data System (ADS)

    Fauvet, Eric; Paindavoine, Michel; Cannard, F.

    1991-04-01

    In the field of the human sciences, many applications need to know the kinematic characteristics of human movements. Psychology associates these characteristics with the control mechanism; sport science and biomechanics associate them with the performance of the sportsman or of the patient. If the trainer or the doctor knows the motion properties, he can correct the gesture of the subject to obtain a better performance. Roherton's studies show the evolution of children's motion. Several investigation methods are able to measure human movement, but most current studies are based on image processing. Often the systems work at the TV standard (50 frames per second), which permits the study of only very slow gestures. Having a human operator manually analyze the digitized sequence of the film makes the operation very expensive, especially long, and imprecise. On these different grounds many human movement analysis systems were implemented. They consist of: markers, which are fixed to the anatomically interesting points on the subject in motion, and image compression, which is the art of coding picture data. Generally the compression is limited to the calculation of the centroid coordinates for each marker. These systems differ from one another in image acquisition and marker detection.
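
    The "compression to centroid coordinates" step the abstract describes can be sketched in a few lines. Below is a hypothetical Python illustration (using scipy.ndimage; the threshold and the synthetic frame are assumptions, not the system described):

      import numpy as np
      from scipy import ndimage

      def marker_centroids(frame, threshold):
          # Threshold a grayscale frame, label connected bright blobs (markers),
          # and return one (row, col) centroid per blob -- the compressed
          # representation that replaces the full image.
          binary = frame > threshold
          labels, n = ndimage.label(binary)
          return ndimage.center_of_mass(frame, labels, range(1, n + 1))

      frame = np.zeros((120, 160))
      frame[40:44, 60:64] = 1.0                 # one synthetic marker
      print(marker_centroids(frame, 0.5))       # [(41.5, 61.5)]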

  15. Memory acquisition and retrieval impact different epigenetic processes that regulate gene expression

    PubMed Central

    2015-01-01

    Background A fundamental question in neuroscience is how memories are stored and retrieved in the brain. Long-term memory formation requires transcription, translation and epigenetic processes that control gene expression. Thus, characterizing genome-wide the transcriptional changes that occur after memory acquisition and retrieval is of broad interest and importance. Genome-wide technologies are commonly used to interrogate transcriptional changes in discovery-based approaches. Their ability to increase scientific insight beyond traditional candidate gene approaches, however, is usually hindered by batch effects and other sources of unwanted variation, which are particularly hard to control in the study of brain and behavior. Results We examined genome-wide gene expression after contextual conditioning in the mouse hippocampus, a brain region essential for learning and memory, at all the time-points in which inhibiting transcription has been shown to impair memory formation. We show that most of the variance in gene expression is not due to conditioning and that by removing unwanted variance through additional normalization we are able to provide novel biological insights. In particular, we show that genes downregulated by memory acquisition and retrieval impact different functions: chromatin assembly and RNA processing, respectively. Levels of histone 2A variant H2AB are reduced only following acquisition, a finding we confirmed using quantitative proteomics. On the other hand, splicing factor Rbfox1 and NMDA receptor-dependent microRNA miR-219 are only downregulated after retrieval, accompanied by an increase in protein levels of miR-219 target CAMKIIγ. Conclusions We provide a thorough characterization of coding and non-coding gene expression during long-term memory formation. We demonstrate that unwanted variance dominates the signal in transcriptional studies of learning and memory and introduce the removal of unwanted variance through normalization as a
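
    The "removal of unwanted variance through normalization" can be illustrated with a generic factor-regression sketch. The Python below is an RUV-style stand-in under stated assumptions (a samples-by-genes matrix and a set of control genes presumed unaffected by conditioning); it is not the authors' pipeline:

      import numpy as np

      def remove_unwanted_variance(expr, control_idx, k=2):
          # expr: samples x genes matrix; control_idx: genes assumed unaffected
          # by the experimental condition. Estimate k unwanted factors from the
          # controls via SVD and regress them out of every gene.
          centered = expr - expr.mean(axis=0)
          u, s, vt = np.linalg.svd(centered[:, control_idx], full_matrices=False)
          W = u[:, :k]                                  # unwanted factor scores per sample
          beta, *_ = np.linalg.lstsq(W, centered, rcond=None)
          return expr - W @ beta

      expr = np.random.default_rng(0).random((12, 500))     # placeholder matrix
      cleaned = remove_unwanted_variance(expr, control_idx=np.arange(50))
      print(cleaned.shape)                                  # (12, 500)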

  16. Thermal infrared pushbroom imagery acquisition and processing. [of NASA's Advanced Land Observing System

    NASA Technical Reports Server (NTRS)

    Brown, T. J.; Corbett, F. J.; Spera, T. J.; Andrada, T.

    1982-01-01

    A 9-element focal plane detector array and signal processing electronics were developed and delivered in December 1977. They were integrated into a thermal infrared imaging system using LSI microprocessor image processing and CRT display. After three years of laboratory operation, the focal plane has demonstrated high reliability and performance. On the basis of the 9-channel breadboard, the 90-element Aircraft Pushbroom IR/CCD Focal Plane Development Program was funded in October 1977. A follow-on program was awarded in July 1979 for the construction of a field test instrument and image processing facility. The objective of this project was to demonstrate thermal infrared pushbroom hard-copy imagery. It is pointed out that the successful development of the 9-element and 90-element thermal infrared hybrid imaging systems using photoconductive (Hg,Cd)Te has verified the operational concept of 8 to 14 micrometer pushbroom scanners.

  17. Collecting Samples in Gale Crater, Mars; an Overview of the Mars Science Laboratory Sample Acquisition, Sample Processing and Handling System

    NASA Astrophysics Data System (ADS)

    Anderson, R. C.; Jandura, L.; Okon, A. B.; Sunshine, D.; Roumeliotis, C.; Beegle, L. W.; Hurowitz, J.; Kennedy, B.; Limonadi, D.; McCloskey, S.; Robinson, M.; Seybold, C.; Brown, K.

    2012-09-01

    The Mars Science Laboratory Mission (MSL), scheduled to land on Mars in the summer of 2012, consists of a rover and a scientific payload designed to identify and assess the habitability, geological, and environmental histories of Gale crater. Unraveling the geologic history of the region and providing an assessment of present and past habitability requires an evaluation of the physical and chemical characteristics of the landing site; this includes providing an in-depth examination of the chemical and physical properties of Martian regolith and rocks. The MSL Sample Acquisition, Processing, and Handling (SA/SPaH) subsystem will be the first in-situ system designed to acquire interior rock and soil samples from Martian surface materials. These samples are processed and separated into fine particles and distributed to two onboard analytical science instruments, SAM (Sample Analysis at Mars Instrument Suite) and CheMin (Chemistry and Mineralogy), or to a sample analysis tray for visual inspection. The SA/SPaH subsystem is also responsible for the placement of the two contact instruments, the Alpha Particle X-Ray Spectrometer (APXS) and the Mars Hand Lens Imager (MAHLI), on rock and soil targets. Finally, there is a Dust Removal Tool (DRT) to remove dust particles from rock surfaces for subsequent analysis by the contact and/or mast-mounted instruments (e.g., the Mast Cameras (MastCam) and the Chemistry and Micro-Imaging instruments (ChemCam)).

  18. Translational motion compensation in ISAR image processing.

    PubMed

    Wu, H; Grenier, D; Delisle, G Y; Fang, D G

    1995-01-01

    In inverse synthetic aperture radar (ISAR) imaging, the target rotational motion with respect to the radar line of sight contributes to the imaging ability, whereas the translational motion must be compensated out. This paper presents a novel two-step approach to translational motion compensation using an adaptive range tracking method for range bin alignment and a recursive multiple-scatterer algorithm (RMSA) for signal phase compensation. The initial step of RMSA is equivalent to the dominant-scatterer algorithm (DSA). An error-compensating point source is then recursively synthesized from the selected range bins, where each contains a prominent scatterer. Since the clutter-induced phase errors are reduced by phase averaging, the image speckle noise can be reduced significantly. Experimental data processing for a commercial aircraft and computer simulations confirm the validity of the approach.
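
    The dominant-scatterer step that initializes RMSA can be sketched as follows. This is a minimal Python illustration assuming range-aligned complex profiles; the bin-selection heuristic (strong mean amplitude, low normalized amplitude variance) is chosen for illustration rather than taken from the paper:

      import numpy as np

      def dsa_phase_compensation(profiles):
          # profiles: complex (n_pulses, n_bins) range profiles after range-bin
          # alignment. Pick a strong, amplitude-stable bin, take its phase
          # history as the reference, and conjugate it out of every bin.
          amp = np.abs(profiles)
          strong = amp.mean(axis=0) > amp.mean()              # candidate bins
          stability = amp.var(axis=0) / (amp.mean(axis=0) ** 2 + 1e-12)
          stability[~strong] = np.inf
          ref = int(np.argmin(stability))
          return profiles * np.exp(-1j * np.angle(profiles[:, ref]))[:, None]

    The recursive step of RMSA would then synthesize an error-compensating point source from several such prominent-scatterer bins and average their phases, which is what reduces the clutter-induced phase error and the speckle.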

  19. Three-dimensional fast imaging employing steady-state acquisition MRI and its diagnostic value for lumbar foraminal stenosis.

    PubMed

    Nemoto, Osamu; Fujikawa, Akira; Tachibana, Atsuko

    2014-07-01

    The aim of this study was to evaluate the usefulness of three-dimensional (3D) fast imaging employing steady-state acquisition (3D FIESTA) in the diagnosis of lumbar foraminal stenosis (LFS). Fifteen patients with LFS and 10 healthy volunteers were studied. All patients met the following criteria: (1) single L5 radiculopathy without compressive lesion in the spinal canal, (2) pain reproduction during provocative radiculography, and (3) improvement of symptoms after surgery. We retrospectively compared the symptomatic nerve roots to the asymptomatic nerve roots on fast spin-echo (FSE) T1 sagittal, FSE T2 axial and reconstituted 3D FIESTA images. The κ values for interobserver agreement in determining the presence of LFS were 0.525 for FSE T1 sagittal images, 0.735 for FSE T2 axial images, 0.750 for 3D FIESTA sagittal images, 0.733 for axial images, and 0.953 for coronal images. The sensitivities and specificities were 60 and 86 % for FSE T1 sagittal images, 27 and 91 % for FSE T2 axial images, 60 and 97 % for 3D FIESTA sagittal images, 60 and 94 % for 3D FIESTA axial images, and 100 and 97 % for 3D FIESTA coronal images, respectively. 3D FIESTA can provide more reliable and additional information on the course of the lumbar nerve root compared with conventional magnetic resonance imaging. In particular, the use of 3D FIESTA coronal images enables accurate diagnosis of LFS.
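
    The reported κ, sensitivity and specificity values come from standard 2x2 contingency-table arithmetic, sketched below in Python. The example counts are hypothetical (chosen only to roughly reproduce figures of the coronal-image magnitude), not the study's raw data:

      def diagnostic_stats(tp, fp, fn, tn):
          # Sensitivity/specificity from a 2x2 table, plus Cohen's kappa when the
          # same four cells are read as agreement counts between two observers.
          sens = tp / (tp + fn)
          spec = tn / (tn + fp)
          n = tp + fp + fn + tn
          po = (tp + tn) / n                                        # observed agreement
          pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2  # chance agreement
          kappa = (po - pe) / (1 - pe)
          return sens, spec, kappa

      print(diagnostic_stats(15, 1, 0, 29))   # ~ (1.00, 0.97, 0.95)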

  20. Architecture for web-based image processing

    NASA Astrophysics Data System (ADS)

    Srini, Vason P.; Pini, David; Armstrong, Matt D.; Alalusi, Sayf H.; Thendean, John; Ueng, Sain-Zee; Bushong, David P.; Borowski, Erek S.; Chao, Elaine; Rabaey, Jan M.

    1997-09-01

    A computer systems architecture for processing medical images and other data coming over the Web is proposed. The architecture comprises a Java engine for communicating images over the Internet, storing data in local memory, doing floating point calculations, and a coprocessor MIMD parallel DSP for doing fine-grained operations found in video, graphics, and image processing applications. The local memory is shared between the Java engine and the parallel DSP. Data coming from the Web is stored in the local memory. This approach avoids the frequent movement of image data between a host processor's memory and an image processor's memory, found in many image processing systems. A low-power, high-performance parallel DSP architecture containing many processors interconnected by a segmented hierarchical network has been developed. The instruction set of the 16-bit processor supports video, graphics, and image processing calculations. Two's complement arithmetic, saturation arithmetic, and packed instructions are supported. Higher data precision such as 32-bit and 64-bit can be achieved by cascading processors. A VLSI chip implementation of the architecture containing 64 processors organized in 16 clusters and interconnected by a statically programmable hierarchical bus is in progress. The buses are segmentable by programming switches on the bus. The instruction memory of each processor has sixteen 40-bit words. Data streaming through the processor is manipulated by the instructions. Multiple operations can be performed in a single cycle in a processor. A low-power handshake protocol is used for synchronization between the sender and the receiver of data. Temporary storage for data and filter coefficients is provided in each chip. A 256 by 16 memory unit is included in each of the 16 clusters. The memory unit can be used as a delay line, FIFO, lookup table or random access memory. The architecture is scalable with technology. Portable multimedia terminals like U

  1. Problem solving in nursing practice: application, process, skill acquisition and measurement.

    PubMed

    Roberts, J D; While, A E; Fitzpatrick, J M

    1993-06-01

    This paper analyses the role of problem solving in nursing practice including the process, acquisition and measurement of problem-solving skills. It is argued that while problem-solving ability is acknowledged as critical if today's nurse practitioner is to maintain effective clinical practice, to date it retains a marginal place in nurse education curricula. Further, it has attracted limited empirical study. Such an omission, it is argued, requires urgent redress if the nursing profession is to meet effectively the challenges of the next decade and beyond.

  2. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    DOEpatents

    Engh, G.J. van den; Stokdijk, W.

    1992-09-22

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate. 17 figs.

  3. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    DOEpatents

    van den Engh, Gerrit J.; Stokdijk, Willem

    1992-01-01

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate.
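
    The FIFO-per-channel scheme with trigger-assigned event IDs can be mimicked in software. Below is a minimal Python simulation of the bus-controller readout and the ID-based error check (a sketch of the patented behavior, not an implementation of the hardware):

      from collections import deque

      class Channel:
          def __init__(self):
              self.fifo = deque()
          def digitize(self, event_id, value):
              self.fifo.append((event_id, value))   # independent per-channel storage

      def read_event(channels):
          # Bus-controller sketch: pop the oldest entry from every FIFO and
          # verify that all event IDs match (the error-detection step).
          entries = [ch.fifo.popleft() for ch in channels]
          ids = {eid for eid, _ in entries}
          if len(ids) != 1:
              raise RuntimeError(f"channel desynchronization: IDs {ids}")
          return [v for _, v in entries]

      chans = [Channel() for _ in range(4)]
      for eid in range(3):                          # the trigger circuit assigns the ID
          for ch in chans:
              ch.digitize(eid, value=eid * 0.5)
      print(read_event(chans))                      # oldest event from all channels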

  4. Improving in situ data acquisition using training images and a Bayesian mixture model

    NASA Astrophysics Data System (ADS)

    Abdollahifard, Mohammad Javad; Mariethoz, Gregoire; Pourfard, Mohammadreza

    2016-06-01

    Estimating the spatial distribution of physical processes using a minimum number of samples is of vital importance in earth science applications where sampling is costly. In recent years, training image-based methods have received considerable attention for interpolation and simulation. However, training images have never been employed to optimize the spatial sampling process. In this paper, a sequential compressive sampling method is presented which decides the location of new samples based on a training image. First, a Bayesian mixture model is developed based on the training patterns. Then, using this model, unknown values are estimated based on a limited number of random samples. Since the model is probabilistic, it allows estimating local uncertainty conditional on the available samples. Based on this, new samples are sequentially extracted from the locations with maximum uncertainty. Experiments show that compared to a random sampling strategy, the proposed supervised sampling method significantly reduces the number of samples needed to achieve the same level of accuracy, even when the training image is not optimally chosen. The method has the potential to reduce the number of observations necessary for the characterization of environmental processes.
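
    The sequential loop itself is compact. A generic Python sketch follows; the `model.fit`/`model.predict` interface and the toy distance-based variance model are assumptions standing in for the paper's Bayesian mixture built from training patterns:

      import numpy as np

      def sequential_sampling(field, model, n_samples, rng=np.random.default_rng(0)):
          # Start with a few random samples, then repeatedly sample wherever the
          # model's predictive uncertainty peaks.
          idx = list(rng.choice(field.size, size=4, replace=False))
          for _ in range(n_samples - len(idx)):
              model.fit(np.array(idx), field.ravel()[idx])
              _, var = model.predict()             # one variance per grid cell
              var[idx] = -np.inf                   # never resample a known location
              idx.append(int(np.argmax(var)))
          return idx

      class LocalModel:
          # Toy stand-in: predicted variance grows with squared distance to the
          # nearest sampled cell (the paper uses a Bayesian mixture instead).
          def __init__(self, shape): self.shape = shape
          def fit(self, idx, vals): self.idx = idx
          def predict(self):
              rows, cols = np.unravel_index(np.arange(np.prod(self.shape)), self.shape)
              srows, scols = np.unravel_index(self.idx, self.shape)
              d = np.min((rows[:, None] - srows) ** 2 + (cols[:, None] - scols) ** 2, axis=1)
              return None, d.astype(float)

      field = np.random.default_rng(2).random((20, 20))
      print(sequential_sampling(field, LocalModel(field.shape), 10))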

  5. Computer image processing in marine resource exploration

    NASA Technical Reports Server (NTRS)

    Paluzzi, P. R.; Normark, W. R.; Hess, G. R.; Hess, H. D.; Cruickshank, M. J.

    1976-01-01

    Pictographic data or imagery is commonly used in marine exploration. Pre-existing image processing techniques (software) similar to those used on imagery obtained from unmanned planetary exploration were used to improve marine photography and side-scan sonar imagery. Features and details not visible by conventional photo processing methods were enhanced by filtering and noise removal on selected deep-sea photographs. Information gained near the periphery of photographs allows improved interpretation and facilitates construction of bottom mosaics where overlapping frames are available. Similar processing techniques were applied to side-scan sonar imagery, including corrections for slant range distortion, and along-track scale changes. The use of digital data processing and storage techniques greatly extends the quantity of information that can be handled, stored, and processed.
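
    One of the corrections named, slant-range distortion, has a standard flat-seafloor form: the ground range is the leg of a right triangle whose hypotenuse is the slant range and whose other leg is the tow-fish altitude. A minimal Python sketch under those assumptions:

      import numpy as np

      def ground_range(slant_range, tow_altitude):
          # Flat-bottom slant-range correction for side-scan sonar:
          # ground = sqrt(slant^2 - altitude^2), defined only past the nadir return.
          s = np.asarray(slant_range, dtype=float)
          return np.sqrt(np.maximum(s**2 - tow_altitude**2, 0.0))

      print(ground_range([30, 50, 100], tow_altitude=30.0))   # [0., 40., ~95.4]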

  6. Acquisition of quantitative physiological data and computerized image reconstruction using a single scan TV system

    NASA Technical Reports Server (NTRS)

    Baily, N. A.

    1975-01-01

    Single scan operation of television X-ray fluoroscopic systems allows both analog and digital reconstruction of tomographic sections from single plane images. This type of system, combined with a minimum of statistical processing, showed excellent capabilities for delineating small changes in differential X-ray attenuation. Patient dose reduction is significant when compared to normal operation or film recording. Flat screen, low light level systems were both rugged and light in weight, making them applicable for a variety of special purposes. Three dimensional information was available from the tomographic methods, and the recorded data was sufficient when used with appropriate computer display devices to give representative 3D images.

  7. Analysis of physical processes via imaging vectors

    NASA Astrophysics Data System (ADS)

    Volovodenko, V.; Efremova, N.; Efremov, V.

    2016-06-01

    Practically all modeled processes are in one way or another random. The foremost theoretical foundation embraces Markov processes, represented in different forms. A Markov process is a random process that undergoes transitions from one state to another on a state space, where the probability distribution of the next state depends only on the current state and not on the sequence of events that preceded it. In Markov processes, the model of the future therefore does not change when the information about preceding times is expanded or refined. Basically, modeling physical fields involves processes changing in time, i.e. non-stationary processes. In this case, the application of the Laplace transformation introduces unjustified complications into the description, whereas the transition to other representations results in explicit simplification. The method of imaging vectors renders constructive mathematical models and the necessary transitions in the modeling process and the analysis itself. The flexibility of a model built on a polynomial basis allows a rapid transition of the mathematical model and further acceleration of the analysis. It should be noted that the mathematical description permits operator representation. Conversely, operator representation of the structures, algorithms and data processing procedures significantly improves the flexibility of the modeling process.

  8. Digital image processing of vascular angiograms

    NASA Technical Reports Server (NTRS)

    Selzer, R. H.; Blankenhorn, D. H.; Beckenbach, E. S.; Crawford, D. W.; Brooks, S. H.

    1975-01-01

    A computer image processing technique was developed to estimate the degree of atherosclerosis in the human femoral artery. With an angiographic film of the vessel as input, the computer was programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements were combined into an atherosclerosis index, which was found to correlate well with both visual and chemical estimates of atherosclerotic disease.

  9. Novel image processing approach to detect malaria

    NASA Astrophysics Data System (ADS)

    Mas, David; Ferrer, Belen; Cojoc, Dan; Finaurini, Sara; Mico, Vicente; Garcia, Javier; Zalevsky, Zeev

    2015-09-01

    In this paper we present a novel image processing algorithm providing good preliminary capabilities for in vitro detection of malaria. The proposed concept is based upon analysis of the temporal variation of each pixel. Changes in dark pixels mean that intracellular activity happened, indicating the presence of the malaria parasite inside the cell. Preliminary experimental results involving analysis of red blood cells being either healthy or infected with malaria parasites validated the potential benefit of the proposed numerical approach.
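
    The temporal-variation idea can be sketched directly: restrict attention to dark (cell-interior) pixels and flag those whose intensity fluctuates over the sequence. The quantile and the threshold factor below are illustrative assumptions, not the paper's values:

      import numpy as np

      def flag_active_cells(stack, dark_quantile=0.2):
          # stack: (n_frames, H, W) grayscale video of blood cells. Look only at
          # dark pixels and flag those with unusually large temporal variation,
          # suggestive of intracellular parasite activity.
          mean = stack.mean(axis=0)
          dark = mean < np.quantile(mean, dark_quantile)
          temporal_std = stack.std(axis=0)
          return dark & (temporal_std > 2.0 * np.median(temporal_std[dark]))

      stack = np.random.default_rng(0).random((50, 64, 64))   # placeholder video
      print(flag_active_cells(stack).sum())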

  10. IPLIB (Image processing library) user's manual

    NASA Technical Reports Server (NTRS)

    Faulcon, N. D.; Monteith, J. H.; Miller, K.

    1985-01-01

    IPLIB is a collection of HP FORTRAN 77 subroutines and functions that facilitate the use of a COMTAL image processing system driven by an HP-1000 computer. It is intended for programmers who want to use the HP 1000 to drive the COMTAL Vision One/20 system. It is assumed that the programmer knows HP 1000 FORTRAN 77 or at least one FORTRAN dialect. It is also assumed that the programmer has some familiarity with the COMTAL Vision One/20 system.

  11. Gradient-based correction of chromatic aberration in the joint acquisition of color and near-infrared images

    NASA Astrophysics Data System (ADS)

    Sadeghipoor, Zahra; Lu, Yue M.; Süsstrunk, Sabine

    2015-02-01

    Chromatic aberration distortions such as wavelength-dependent blur are caused by imperfections in photographic lenses. These distortions are much more severe in the case of color and near-infrared joint acquisition, as a wider band of wavelengths is captured. In this paper, we consider a scenario where the color image is in focus, and the NIR image captured with the same lens and same focus settings is out-of-focus and blurred. To reduce chromatic aberration distortions, we propose an algorithm that estimates the blur kernel and deblurs the NIR image using the sharp color image as a guide in both steps. In the deblurring step, we retrieve the lost details of the NIR image by exploiting the sharp edges of the color image, as the gradients of color and NIR images are often correlated. However, differences of scene reflections and light in visible and NIR bands cause the gradients of color and NIR images to be different in some regions of the image. To handle this issue, our algorithm measures the similarities and differences between the gradients of the NIR and color channels. The similarity measures guide the deblurring algorithm to efficiently exploit the gradients of the color image in reconstructing high-frequency details of NIR, without discarding the inherent differences between these images. Simulation results verify the effectiveness of our algorithm, both in estimating the blur kernel and deblurring the NIR image, without producing ringing artifacts inherent to the results of most deblurring methods.

  12. Sorting Olive Batches for the Milling Process Using Image Processing

    PubMed Central

    Puerto, Daniel Aguilera; Martínez Gila, Diego Manuel; Gámez García, Javier; Gómez Ortega, Juan

    2015-01-01

    The quality of virgin olive oil obtained in the milling process is directly bound to the characteristics of the olives. Hence, the correct classification of the different incoming olive batches is crucial to reach the maximum quality of the oil. The aim of this work is to provide an automatic inspection system, based on computer vision, and to classify automatically different batches of olives entering the milling process. The classification is based on the differentiation between ground and tree olives. For this purpose, three different species have been studied (Picudo, Picual and Hojiblanco). The samples have been obtained by picking the olives directly from the tree or from the ground. The feature vector of the samples has been obtained on the basis of the olive image histograms. Moreover, different image preprocessing has been employed, and two classification techniques have been used: these are discriminant analysis and neural networks. The proposed methodology has been validated successfully, obtaining good classification results. PMID:26147729

  13. Sorting Olive Batches for the Milling Process Using Image Processing.

    PubMed

    Aguilera Puerto, Daniel; Martínez Gila, Diego Manuel; Gámez García, Javier; Gómez Ortega, Juan

    2015-01-01

    The quality of virgin olive oil obtained in the milling process is directly bound to the characteristics of the olives. Hence, the correct classification of the different incoming olive batches is crucial to reach the maximum quality of the oil. The aim of this work is to provide an automatic inspection system, based on computer vision, and to classify automatically different batches of olives entering the milling process. The classification is based on the differentiation between ground and tree olives. For this purpose, three different species have been studied (Picudo, Picual and Hojiblanco). The samples have been obtained by picking the olives directly from the tree or from the ground. The feature vector of the samples has been obtained on the basis of the olive image histograms. Moreover, different image preprocessing has been employed, and two classification techniques have been used: these are discriminant analysis and neural networks. The proposed methodology has been validated successfully, obtaining good classification results. PMID:26147729
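
    The histogram-feature-plus-discriminant-analysis pipeline is straightforward to sketch with scikit-learn. The Python below uses placeholder images, fake labels and an illustrative bin count; it shows the shape of the approach, not the paper's exact preprocessing:

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      def histogram_features(image, bins=32):
          # Per-channel intensity histograms, concatenated and normalized -- the
          # kind of feature vector the paper derives from olive-batch images.
          feats = [np.histogram(image[..., c], bins=bins, range=(0, 255))[0]
                   for c in range(image.shape[-1])]
          v = np.concatenate(feats).astype(float)
          return v / v.sum()

      rng = np.random.default_rng(0)
      imgs = rng.integers(0, 256, size=(40, 32, 32, 3))       # placeholder "olive" images
      X = np.stack([histogram_features(im) for im in imgs])   # 40 x 96 feature matrix
      y = rng.integers(0, 2, 40)                              # 0 = ground, 1 = tree (fake labels)
      clf = LinearDiscriminantAnalysis().fit(X, y)            # one of the two classifiers tried
      print(clf.score(X, y))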

  14. Color Image Processing and Object Tracking System

    NASA Technical Reports Server (NTRS)

    Klimek, Robert B.; Wright, Ted W.; Sielken, Robert S.

    1996-01-01

    This report describes a personal computer based system for automatic and semiautomatic tracking of objects on film or video tape, developed to meet the needs of the Microgravity Combustion and Fluids Science Research Programs at the NASA Lewis Research Center. The system consists of individual hardware components working under computer control to achieve a high degree of automation. The most important hardware components include 16-mm and 35-mm film transports, a high resolution digital camera mounted on an x-y-z micro-positioning stage, an S-VHS tapedeck, a Hi8 tapedeck, a video laserdisk, and a framegrabber. All of the image input devices are remotely controlled by a computer. Software was developed to integrate the overall operation of the system including device frame incrementation, grabbing of image frames, image processing of the object's neighborhood, locating the position of the object being tracked, and storing the coordinates in a file. This process is performed repeatedly until the last frame is reached. Several different tracking methods are supported. To illustrate the process, two representative applications of the system are described. These applications represent typical uses of the system and include tracking the propagation of a flame front and tracking the movement of a liquid-gas interface with extremely poor visibility.

  15. An Integrated Data Acquisition / User Request/ Processing / Delivery System for Airborne Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Chapman, B.; Chu, A.; Tung, W.

    2003-12-01

    Airborne science data has historically played an important role in the development of the scientific underpinnings for spaceborne missions. When the science community determines the need for new types of spaceborne measurements, airborne campaigns are often crucial in risk mitigation for these future missions. However, full exploitation of the acquired data may be difficult due to its experimental and transitory nature. Externally, the most problematic issue (particularly for those not involved in requesting the data acquisitions) may be the difficulty in searching for, requesting, and receiving the data, or even knowing the data exist. This can result in a rather small, insular community of users for these data sets. Internally, the difficulty for the project is in maintaining a robust processing and archival system during periods of changing mission priorities and evolving technologies. The NASA/JPL Airborne Synthetic Aperture Radar (AIRSAR) has acquired data for a large and varied community of scientists and engineers for 15 years. AIRSAR is presently supporting current NASA Earth Science Enterprise experiments, such as the Soil Moisture EXperiment (SMEX) and the Cold Land Processes experiment (CLPX), as well as experiments conducted as many as 10 years ago. During that time, its processing, data ordering, and data delivery system has undergone evolutionary change as the cost and capability of resources have improved. AIRSAR now has a fully integrated data acquisition/user request/processing/delivery system through which most components of the data fulfillment process communicate via shared information within a database. The integration of these functions has reduced errors and increased the throughput of processed data to customers.

  16. Automated synthesis of image processing procedures using AI planning techniques

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Mortensen, Helen

    1994-01-01

    This paper describes the Multimission VICAR (Video Image Communication and Retrieval) Planner (MVP) (Chien 1994) system, which uses artificial intelligence planning techniques (Iwasaki & Friedland, 1985, Pemberthy & Weld, 1992, Stefik, 1981) to automatically construct executable complex image processing procedures (using models of the smaller constituent image processing subprograms) in response to image processing requests made to the JPL Multimission Image Processing Laboratory (MIPL). The MVP system allows the user to specify the image processing requirements in terms of the various types of correction required. Given this information, MVP derives unspecified required processing steps and determines appropriate image processing programs and parameters to achieve the specified image processing goals. This information is output as an executable image processing program which can then be executed to fill the processing request.

  17. A multiple process solution to the logical problem of language acquisition*

    PubMed Central

    MACWHINNEY, BRIAN

    2006-01-01

    Many researchers believe that there is a logical problem at the center of language acquisition theory. According to this analysis, the input to the learner is too inconsistent and incomplete to determine the acquisition of grammar. Moreover, when corrective feedback is provided, children tend to ignore it. As a result, language learning must rely on additional constraints from universal grammar. To solve this logical problem, theorists have proposed a series of constraints and parameterizations on the form of universal grammar. Plausible alternatives to these constraints include: conservatism, item-based learning, indirect negative evidence, competition, cue construction, and monitoring. Careful analysis of child language corpora has cast doubt on claims regarding the absence of positive exemplars. Using demonstrably available positive data, simple learning procedures can be formulated for each of the syntactic structures that have traditionally motivated invocation of the logical problem. Within the perspective of emergentist theory (MacWhinney, 2001), the operation of a set of mutually supportive processes is viewed as providing multiple buffering for developmental outcomes. However, the fact that some syntactic structures are more difficult to learn than others can be used to highlight areas of intense grammatical competition and processing load. PMID:15658750

  18. A sophisticated, multi-channel data acquisition and processing system for high frequency noise research

    NASA Technical Reports Server (NTRS)

    Hall, David G.; Bridges, James

    1992-01-01

    A sophisticated, multi-channel computerized data acquisition and processing system was developed at the NASA LeRC for use in noise experiments. This technology, which is available for transfer to industry, provides a convenient, cost-effective alternative to analog tape recording for high frequency acoustic measurements. This system provides 32-channel acquisition of microphone signals with an analysis bandwidth up to 100 kHz per channel. Cost was minimized through the use of off-the-shelf components. Requirements to allow for future expansion were met by choosing equipment which adheres to established industry standards for hardware and software. Data processing capabilities include narrow band and 1/3 octave spectral analysis, compensation for microphone frequency response/directivity, and correction of acoustic data to standard day conditions. The system was used successfully in a major wind tunnel test program at NASA LeRC to acquire and analyze jet noise data in support of the High Speed Civil Transport (HSCT) program.
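
    Narrow-band spectral analysis of one such microphone channel can be sketched with an off-the-shelf periodogram routine. The sample rate and test tone below are hypothetical; 1/3-octave synthesis, microphone-response compensation and standard-day corrections would follow in a full pipeline:

      import numpy as np
      from scipy import signal

      fs = 200_000                            # supports the 100 kHz analysis bandwidth
      t = np.arange(fs) / fs                  # one second of data
      rng = np.random.default_rng(0)
      x = np.sin(2 * np.pi * 40_000 * t) + 0.1 * rng.standard_normal(fs)

      f, pxx = signal.welch(x, fs=fs, nperseg=4096)   # narrow-band spectrum
      print(f[np.argmax(pxx)])                        # ~40 kHz tone recovered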

  19. Mars Science Laboratory Sample Acquisition, Sample Processing and Handling: Subsystem Design and Test Challenges

    NASA Technical Reports Server (NTRS)

    Jandura, Louise

    2010-01-01

    The Sample Acquisition/Sample Processing and Handling subsystem for the Mars Science Laboratory is a highly-mechanized, Rover-based sampling system that acquires powdered rock and regolith samples from the Martian surface, sorts the samples into fine particles through sieving, and delivers small portions of the powder into two science instruments inside the Rover. SA/SPaH utilizes 17 actuated degrees-of-freedom to perform the functions needed to produce 5 sample pathways in support of the scientific investigation on Mars. Both hardware redundancy and functional redundancy are employed in configuring this sampling system so some functionality is retained even with the loss of a degree-of-freedom. Intentional dynamic environments are created to move sample while vibration isolators attenuate this environment at the sensitive instruments located near the dynamic sources. In addition to the typical flight hardware qualification test program, two additional types of testing are essential for this kind of sampling system: characterization of the intentionally-created dynamic environment and testing of the sample acquisition and processing hardware functions using Mars analog materials in a low pressure environment. The overall subsystem design and configuration are discussed along with some of the challenges, tradeoffs, and lessons learned in the areas of fault tolerance, intentional dynamic environments, and special testing.

  20. A multiple process solution to the logical problem of language acquisition.

    PubMed

    MacWhinney, Brian

    2004-11-01

    Many researchers believe that there is a logical problem at the centre of language acquisition theory. According to this analysis, the input to the learner is too inconsistent and incomplete to determine the acquisition of grammar. Moreover, when corrective feedback is provided, children tend to ignore it. As a result, language learning must rely on additional constraints from universal grammar. To solve this logical problem, theorists have proposed a series of constraints and parameterizations on the form of universal grammar. Plausible alternatives to these constraints include: conservatism, item-based learning, indirect negative evidence, competition, cue construction, and monitoring. Careful analysis of child language corpora has cast doubt on claims regarding the absence of positive exemplars. Using demonstrably available positive data, simple learning procedures can be formulated for each of the syntactic structures that have traditionally motivated invocation of the logical problem. Within the perspective of emergentist theory (MacWhinney, 2001), the operation of a set of mutually supportive processes is viewed as providing multiple buffering for developmental outcomes. However, the fact that some syntactic structures are more difficult to learn than others can be used to highlight areas of intense grammatical competition and processing load.

  1. Compressed sensing reconstruction for whole-heart imaging with 3D radial trajectories: a graphics processing unit implementation.

    PubMed

    Nam, Seunghoon; Akçakaya, Mehmet; Basha, Tamer; Stehning, Christian; Manning, Warren J; Tarokh, Vahid; Nezafat, Reza

    2013-01-01

    A disadvantage of three-dimensional (3D) isotropic acquisition in whole-heart coronary MRI is the prolonged data acquisition time. Isotropic 3D radial trajectories allow undersampling of k-space data in all three spatial dimensions, enabling accelerated acquisition of the volumetric data. Compressed sensing (CS) reconstruction can provide further acceleration in the acquisition by removing the incoherent artifacts due to undersampling and improving the image quality. However, the heavy computational overhead of the CS reconstruction has been a limiting factor for its application. In this article, a parallelized implementation of an iterative CS reconstruction method for 3D radial acquisitions using a commercial graphics processing unit is presented. The execution time of the graphics processing unit-implemented CS reconstruction was compared with that of the C++ implementation, and the efficacy of the undersampled 3D radial acquisition with CS reconstruction was investigated in both phantom and whole-heart coronary data sets. Subsequently, the efficacy of CS in suppressing streaking artifacts in 3D whole-heart coronary MRI with 3D radial imaging and its convergence properties were studied. The CS reconstruction provides improved image quality (in terms of vessel sharpness and suppression of noise-like artifacts) compared with the conventional 3D gridding algorithm, and the graphics processing unit implementation greatly reduces the execution time of CS reconstruction, yielding a 34-54 times speed-up compared with the C++ implementation. PMID:22392604
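
    The abstract does not specify the CS solver in detail, so the sketch below uses plain iterative soft-thresholding (ISTA) as an illustrative stand-in: a gradient step on the data-fidelity term followed by an l1 shrinkage. `A`/`At` stand for an undersampled forward operator (for 3D radial imaging, gridding/regridding) and its adjoint; the toy matrix demo is an assumption for runnability:

      import numpy as np

      def ista(A, At, y, lam, n_iter=300, step=0.05):
          # Minimize ||A x - y||^2 + lam * ||x||_1 by proximal gradient descent.
          # step must be smaller than 1 / ||A||^2 for convergence.
          x = At(y)
          for _ in range(n_iter):
              x = x - step * At(A(x) - y)                          # gradient step
              x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0)  # soft threshold
          return x

      rng = np.random.default_rng(0)
      M = rng.standard_normal((64, 256)) / 8.0        # toy undersampled operator
      x_true = np.zeros(256); x_true[[10, 100, 200]] = [1.0, -0.5, 2.0]
      y = M @ x_true
      x_hat = ista(lambda v: M @ v, lambda v: M.T @ v, y, lam=0.01)
      print(np.argsort(-np.abs(x_hat))[:3])           # recovers the sparse support

    In practice the sparsifying transform is applied in a wavelet or finite-difference domain rather than directly on the image, but the iteration structure is the same.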

  2. FITSH- a software package for image processing

    NASA Astrophysics Data System (ADS)

    Pál, András.

    2012-04-01

    In this paper we describe the main features of the software package named FITSH, intended to provide a standalone environment for analysis of data acquired by imaging astronomical detectors. The package both provides utilities for the full pipeline of subsequent related data-processing steps (including image calibration, astrometry, source identification, photometry, differential analysis, low-level arithmetic operations, multiple-image combinations, spatial transformations and interpolations) and aids the interpretation of the (mainly photometric and/or astrometric) results. The package also features a consistent implementation of photometry based on image subtraction, point spread function fitting and aperture photometry and provides easy-to-use interfaces for comparisons and for picking the most suitable method for a particular problem. The set of utilities found in this package is built on top of the commonly used UNIX/POSIX shells (hence the name of the package); therefore, both frequently used and well-documented tools for such environments can be exploited and managing a massive amount of data is rather convenient.

  3. Vector processing enhancements for real-time image analysis.

    SciTech Connect

    Shoaf, S.; APS Engineering Support Division

    2008-01-01

    A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.

  4. Automated video-microscopic imaging and data acquisition system for colloid deposition measurements

    DOEpatents

    Abdel-Fattah, Amr I.; Reimus, Paul W.

    2004-12-28

    A video microscopic visualization system and image processing and data extraction and processing method for in situ detailed quantification of the deposition of sub-micrometer particles onto an arbitrary surface and determination of their concentration across the bulk suspension. The extracted data includes (a) surface concentration and flux of deposited, attached and detached colloids, (b) surface concentration and flux of arriving and departing colloids, (c) distribution of colloids in the bulk suspension in the direction perpendicular to the deposition surface, and (d) spatial and temporal distributions of deposited colloids.

  5. Portable EDITOR (PEDITOR): A portable image processing system. [satellite images

    NASA Technical Reports Server (NTRS)

    Angelici, G.; Slye, R.; Ozga, M.; Ritter, P.

    1986-01-01

    The PEDITOR image processing system was created to be readily transferable from one type of computer system to another. While nearly identical in function and operation to its predecessor, EDITOR, PEDITOR employs additional techniques which greatly enhance its portability. These cover system structure and processing. In order to confirm the portability of the software system, two different types of computer systems running greatly differing operating systems were used as target machines. A DEC-20 computer running the TOPS-20 operating system and using a Pascal Compiler was utilized for initial code development. The remaining programmers used a Motorola Corporation 68000-based Forward Technology FT-3000 supermicrocomputer running the UNIX-based XENIX operating system and using the Silicon Valley Software Pascal compiler and the XENIX C compiler for their initial code development.

  6. The Airborne Ocean Color Imager - System description and image processing

    NASA Technical Reports Server (NTRS)

    Wrigley, Robert C.; Slye, Robert E.; Klooster, Steven A.; Freedman, Richard S.; Carle, Mark; Mcgregor, Lloyd F.

    1992-01-01

    The Airborne Ocean Color Imager was developed as an aircraft instrument to simulate the spectral and radiometric characteristics of the next generation of satellite ocean color instrumentation. Data processing programs have been developed as extensions of the Coastal Zone Color Scanner algorithms for atmospheric correction and bio-optical output products. The latter include several bio-optical algorithms for estimating phytoplankton pigment concentration, as well as one for the diffuse attenuation coefficient of the water. Additional programs have been developed to geolocate these products and remap them into a georeferenced data base, using data from the aircraft's inertial navigation system. Examples illustrate the sequential data products generated by the processing system, using data from flightlines near the mouth of the Mississippi River: from raw data to atmospherically corrected data, to bio-optical data, to geolocated data, and, finally, to georeferenced data.

  7. Image processing on MPP-like arrays

    SciTech Connect

    Coletti, N.B.

    1983-01-01

    The desirability and suitability of using very large arrays of processors such as the Massively Parallel Processor (MPP) for processing remotely sensed images is investigated. The dissertation can be broken into two areas. The first area is the mathematical analysis of emulating the Bitonic Sorting Network on an array of processors. This sort is useful in histogramming images that have a very large number of pixel values (or gray levels). The optimal number of routing steps required to emulate an N = 2^k x 2^k element network on a 2^n x 2^n array (k ≤ n ≤ 7), provided each processor contains one element before and after every merge sequence, is proved to be 14√N − 4log2(N) − 14. Several already existing emulations achieve this lower bound. The number of elements sorted dictates a particular sorting network, and hence the number of routing steps. It is established that the cardinality N = (3/4) x 2^(2n) elements uses the absolute minimum of routing steps, 8√(3N) − 4log2(N) − (20 − 4log2(3)). An algorithm achieving this bound is presented. The second area covers the implementation of the image processing tasks. In particular, the histogramming of large numbers of gray levels, geometric distortion determination and its efficient correction, fast Fourier transforms, and statistical clustering are investigated.
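
    The two bounds are easy to evaluate numerically; a small Python helper transcribing the formulas as reconstructed above:

      import math

      def bound_power_of_two(N):
          # Lower bound for an N = 2^k x 2^k element network.
          return 14 * math.sqrt(N) - 4 * math.log2(N) - 14

      def bound_three_quarters(N):
          # Bound for the N = (3/4) * 2^(2n) case.
          return 8 * math.sqrt(3 * N) - 4 * math.log2(N) - (20 - 4 * math.log2(3))

      # k = n = 7 case (N = 2^14): 14*128 - 56 - 14 = 1722 routing steps.
      print(bound_power_of_two(2**14), bound_three_quarters(0.75 * 2**14))  # 1722.0 1468.0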

  8. Low-Dose Micro-CT Imaging for Vascular Segmentation and Analysis Using Sparse-View Acquisitions

    PubMed Central

    Vandeghinste, Bert; Vandenberghe, Stefaan; Vanhove, Chris; Staelens, Steven; Van Holen, Roel

    2013-01-01

    The aim of this study is to investigate whether reliable and accurate 3D geometrical models of the murine aortic arch can be constructed from sparse-view data in vivo micro-CT acquisitions. This would considerably reduce acquisition time and X-ray dose. In vivo contrast-enhanced micro-CT datasets were reconstructed using a conventional filtered back projection algorithm (FDK), the image space reconstruction algorithm (ISRA) and total variation regularized ISRA (ISRA-TV). The reconstructed images were then semi-automatically segmented. Segmentations of high- and low-dose protocols were compared and evaluated based on voxel classification, 3D model diameters and centerline differences. FDK reconstruction does not lead to accurate segmentation in the case of low-view acquisitions. ISRA manages accurate segmentation with 1024 or more projection views. ISRA-TV needs a minimum of 256 views. These results indicate that accurate vascular models can be obtained from micro-CT scans with 8 times less X-ray dose and acquisition time, as long as regularized iterative reconstruction is used. PMID:23840893
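
    The core ISRA update is multiplicative and preserves nonnegativity. A minimal Python sketch, with `A`/`At` standing for the sparse-view CT projector and backprojector (the paper's ISRA-TV variant would interleave a total-variation regularization step between updates); the toy system is an assumption for runnability:

      import numpy as np

      def isra(A, At, y, n_iter=500):
          # Image Space Reconstruction Algorithm: x <- x * At(y) / At(A(x)).
          # The ratio form keeps the iterate nonnegative throughout.
          aty = At(y)
          x = np.ones_like(aty)
          for _ in range(n_iter):
              x = x * aty / (At(A(x)) + 1e-12)
          return x

      rng = np.random.default_rng(0)
      P = rng.random((40, 25))                  # toy nonnegative "projector"
      x_true = rng.random(25)
      y = P @ x_true
      x = isra(lambda v: P @ v, lambda v: P.T @ v, y)
      print(np.abs(x - x_true).max())           # small residual on the toy system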

  9. Age of acquisition and imageability norms for base and morphologically complex words in English and in Spanish.

    PubMed

    Davies, Shakiela K; Izura, Cristina; Socas, Rosy; Dominguez, Alberto

    2016-03-01

    The extent to which processing words involves breaking them down into smaller units or morphemes or is the result of an interactive activation of other units, such as meanings, letters, and sounds (e.g., dis-agree-ment vs. disagreement), is currently under debate. Disentangling morphology from phonology and semantics is often a methodological challenge, because orthogonal manipulations are difficult to achieve (e.g., semantically unrelated words are often phonologically related: casual-casualty and, vice versa, sign-signal). The present norms provide a morphological classification of 3,263 suffixed derived words from two widely spoken languages: English (2,204 words) and Spanish (1,059 words). Morphologically complex words were sorted into four categories according to the nature of their relationship with the base word: phonologically transparent (friend-friendly), phonologically opaque (child-children), semantically transparent (habit-habitual), and semantically opaque (event-eventual). In addition, ratings were gathered for age of acquisition, imageability, and semantic distance (i.e., the extent to which the meaning of the complex derived form could be drawn from the meaning of its base constituents). The norms were completed by adding values for word frequency; word length in number of phonemes, letters, and syllables; lexical similarity, as measured by the number of neighbors; and morphological family size. A series of comparative analyses from the collated ratings for the base and derived words were also carried out. The results are discussed in relation to recent findings.

  10. Squeezing through the Now-or-Never bottleneck: Reconnecting language processing, acquisition, change, and structure.

    PubMed

    Chater, Nick; Christiansen, Morten H

    2016-01-01

    If human language must be squeezed through a narrow cognitive bottleneck, what are the implications for language processing, acquisition, change, and structure? In our target article, we suggested that the implications are far-reaching and form the basis of an integrated account of many apparently unconnected aspects of language and language processing, as well as suggesting revision of many existing theoretical accounts. With some exceptions, commentators were generally supportive both of the existence of the bottleneck and its potential implications. Many commentators suggested additional theoretical and linguistic nuances and extensions, links with prior work, and relevant computational and neuroscientific considerations; some argued for related but distinct viewpoints; a few, though, felt traditional perspectives were being abandoned too readily. Our response attempts to build on the many suggestions raised by the commentators and to engage constructively with challenges to our approach.

  11. Squeezing through the Now-or-Never bottleneck: Reconnecting language processing, acquisition, change, and structure.

    PubMed

    Chater, Nick; Christiansen, Morten H

    2016-01-01

    If human language must be squeezed through a narrow cognitive bottleneck, what are the implications for language processing, acquisition, change, and structure? In our target article, we suggested that the implications are far-reaching and form the basis of an integrated account of many apparently unconnected aspects of language and language processing, as well as suggesting revision of many existing theoretical accounts. With some exceptions, commentators were generally supportive both of the existence of the bottleneck and its potential implications. Many commentators suggested additional theoretical and linguistic nuances and extensions, links with prior work, and relevant computational and neuroscientific considerations; some argued for related but distinct viewpoints; a few, though, felt traditional perspectives were being abandoned too readily. Our response attempts to build on the many suggestions raised by the commentators and to engage constructively with challenges to our approach. PMID:27561252

  12. Acquisition of material properties in production for sheet metal forming processes

    SciTech Connect

    Heingärtner, Jörg; Hora, Pavel; Neumann, Anja; Hortig, Dirk; Rencki, Yasar

    2013-12-16

    In past work a measurement system for the in-line acquisition of material properties was developed at IVP. This system is based on the non-destructive eddy-current principle. Using this system, a 100% control of material properties of the processed material is possible. The system can be used for ferromagnetic materials like standard steels as well as paramagnetic materials like Aluminum and stainless steel. Used as an in-line measurement system, it can be configured as a stand-alone system to control material properties and sort out inapplicable material or as part of a control system of the forming process. In both cases, the acquired data can be used as input data for numerical simulations, e.g. stochastic simulations based on real world data.

  13. Multi-image acquisition-based distance sensor using agile laser spot beam.

    PubMed

    Riza, Nabeel A; Amin, M Junaid

    2014-09-01

    We present a novel laser-based distance measurement technique that uses multiple-image-based spatial processing to enable distance measurements. Compared with the first-generation distance sensor using spatial processing, the modified sensor is no longer hindered by the classic Rayleigh axial resolution limit for the propagating laser beam at its minimum beam waist location. The proposed high-resolution distance sensor design uses an electronically controlled variable focus lens (ECVFL) in combination with an optical imaging device, such as a charged-coupled device (CCD), to produce and capture different laser spot size images on a target with these beam spot sizes different from the minimal spot size possible at this target distance. By exploiting the unique relationship of the target located spot sizes with the varying ECVFL focal length for each target distance, the proposed distance sensor can compute the target distance with a distance measurement resolution better than the axial resolution via the Rayleigh resolution criterion. Using a 30 mW 633 nm He-Ne laser coupled with an electromagnetically actuated liquid ECVFL, along with a 20 cm focal length bias lens, and using five spot images captured per target position by a CCD-based Nikon camera, a proof-of-concept proposed distance sensor is successfully implemented in the laboratory over target ranges from 10 to 100 cm with a demonstrated sub-cm axial resolution, which is better than the axial Rayleigh resolution limit at these target distances. Applications for the proposed potentially cost-effective distance sensor are diverse and include industrial inspection and measurement and 3D object shape mapping and imaging.
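
    The principle can be sketched as model fitting: each candidate distance predicts a distinct spot-size-versus-focal-length curve, and the measured curve selects the distance. In the Python below, `spot_model` is an assumed calibrated forward model (e.g., from Gaussian-beam propagation through the ECVFL plus bias lens) and the demo curve is a toy stand-in, not the authors' optics:

      import numpy as np

      def estimate_distance(focal_lengths, spot_sizes, spot_model, candidates):
          # Pick the candidate distance whose predicted spot-size curve best
          # matches the measured spot sizes (least squares over the image set).
          errs = [np.sum((spot_model(focal_lengths, d) - spot_sizes) ** 2)
                  for d in candidates]
          return candidates[int(np.argmin(errs))]

      model = lambda f, d: np.abs(d - f) / f + 0.5    # toy stand-in curve
      f_vals = np.array([18., 19., 20., 21., 22.])    # ECVFL settings, cm
      meas = model(f_vals, 60.0)                      # pretend target at 60 cm
      print(estimate_distance(f_vals, meas, model, candidates=np.arange(10., 101.)))

    Because several spot-size measurements constrain the fit jointly, the distance estimate can resolve finer than the single-beam Rayleigh axial limit, which is the paper's central claim.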

  14. Diffusion MRI of the neonate brain: acquisition, processing and analysis techniques.

    PubMed

    Pannek, Kerstin; Guzzetta, Andrea; Colditz, Paul B; Rose, Stephen E

    2012-10-01

    Diffusion MRI (dMRI) is a popular noninvasive imaging modality for the investigation of the neonate brain. It enables the assessment of white matter integrity, and is particularly suited for studying white matter maturation in the preterm and term neonate brain. Diffusion tractography allows the delineation of white matter pathways and assessment of connectivity in vivo. In this review, we address the challenges of performing and analysing neonate dMRI. Of particular importance in dMRI analysis is adequate data preprocessing to reduce image distortions inherent to the acquisition technique, as well as artefacts caused by head movement. We present a summary of techniques that should be used in the preprocessing of neonate dMRI data, and demonstrate the effect of these important correction steps. Furthermore, we give an overview of available analysis techniques, ranging from voxel-based analysis of anisotropy metrics including tract-based spatial statistics (TBSS) to recently developed methods of statistical analysis addressing issues of resolving complex white matter architecture. We highlight the importance of resolving crossing fibres for tractography and outline several tractography-based techniques, including connectivity-based segmentation, the connectome and tractography mapping. These techniques provide powerful tools for the investigation of brain development and maturation. PMID:22903761

  15. Development of the SOFIA Image Processing Tool

    NASA Technical Reports Server (NTRS)

    Adams, Alexander N.

    2011-01-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) is a Boeing 747SP carrying a 2.5 meter infrared telescope capable of operating at altitudes between twelve and fourteen kilometers, which is above more than 99 percent of the water vapor in the atmosphere. The ability to make observations above most water vapor, coupled with the ability to make observations from anywhere at any time, makes SOFIA one of the world's premier infrared observatories. SOFIA uses three visible light CCD imagers to assist in pointing the telescope. The data from these imagers is stored in archive files, as is housekeeping data, which contains information such as boresight and area of interest locations. A tool that could both extract and process data from the archive files was developed.

  16. Image processing and the Arithmetic Fourier Transform

    SciTech Connect

    Tufts, D.W.; Fan, Z.; Cao, Z.

    1989-01-01

    A new Fourier technique, the Arithmetic Fourier Transform (AFT), was recently developed for signal processing. This approach is based on the number-theoretic method of Mobius inversion. The AFT needs only additions except for a small number of multiplications by prescribed scale factors. This new algorithm is also well suited to parallel processing, and there is no accumulation of rounding errors in the AFT algorithm. In this reprint, the AFT is used to compute the discrete cosine transform and is also extended to 2-D cases for image processing. A 2-D Mobius inversion formula is proved. It is then applied to the computation of Fourier coefficients of a periodic 2-D function. It is shown that the output of an array of delay-line (or transversal) filters is the Mobius transform of the input harmonic terms. The 2-D Fourier coefficients can therefore be obtained through Mobius inversion of the output of the filter array.
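
    The mechanics of the 1-D AFT are easy to demonstrate for an even, period-1 signal with finitely many cosine coefficients: the sample average A(n) over n equispaced points collects every coefficient whose index is a multiple of n, and Mobius inversion untangles them. A self-contained Python sketch (illustrative, not the reprint's algorithm verbatim):

      import numpy as np

      def mobius(n):
          # Mobius function by trial division (adequate for small n).
          mu, d = 1, 2
          while d * d <= n:
              if n % d == 0:
                  n //= d
                  if n % d == 0:
                      return 0
                  mu = -mu
              d += 1
          return -mu if n > 1 else mu

      def aft_cosine_coeffs(f, K):
          # For f(t) = a0 + sum_k a_k cos(2 pi k t), k <= K:
          # A(n) = a0 + sum_j a_{jn}, so a_n = sum_j mobius(j) * (A(jn) - a0).
          avg = lambda n: np.mean([f(m / n) for m in range(n)])
          a0 = avg(K + 1)                       # alias-free once n exceeds K
          return a0, {n: sum(mobius(j) * (avg(j * n) - a0)
                             for j in range(1, K // n + 1))
                      for n in range(1, K + 1)}

      f = lambda t: 2 * np.cos(2 * np.pi * t) + 0.5 * np.cos(6 * np.pi * t)
      print(aft_cosine_coeffs(f, 4))            # a0~0, a1~2, a3~0.5, a2=a4~0

    Only the averages involve data; everything else is additions and a few fixed scale factors, which is what makes the transform attractive for parallel hardware.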

  17. Automated Confocal Laser Scanning Microscopy and Semiautomated Image Processing for Analysis of Biofilms

    PubMed Central

    Kuehn, Martin; Hausner, Martina; Bungartz, Hans-Joachim; Wagner, Michael; Wilderer, Peter A.; Wuertz, Stefan

    1998-01-01

    The purpose of this study was to develop and apply a quantitative optical method suitable for routine measurements of biofilm structures under in situ conditions. A computer program was designed to perform automated investigations of biofilms by using image acquisition and image analysis techniques. To obtain a representative profile of a growing biofilm, a nondestructive procedure was created to study and quantify undisturbed microbial populations within the physical environment of a glass flow cell. Key components of the computer-controlled processing described in this paper are the on-line collection of confocal two-dimensional (2D) cross-sectional images from a preset 3D domain of interest followed by the off-line analysis of these 2D images. With the quantitative extraction of information contained in each image, a three-dimensional reconstruction of the principal biological events can be achieved. The program is convenient to handle and was generated to determine biovolumes and thus facilitate the examination of dynamic processes within biofilms. In the present study, Pseudomonas fluorescens or a green fluorescent protein-expressing Escherichia coli strain, EC12, was inoculated into glass flow cells and the respective monoculture biofilms were analyzed in three dimensions. In this paper we describe a method for the routine measurements of biofilms by using automated image acquisition and semiautomated image analysis. PMID:9797255
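
    The biovolume computation at the heart of such analyses reduces to counting segmented voxels. A minimal Python sketch with a global threshold and an assumed voxel size (the paper's semiautomated segmentation is more elaborate):

      import numpy as np

      def biovolume(zstack, threshold, voxel_um3):
          # zstack: (n_slices, H, W) confocal intensities. Segment each 2D slice
          # by a global threshold and sum biomass voxels across the stack.
          return (zstack > threshold).sum() * voxel_um3

      stack = np.random.default_rng(1).random((20, 256, 256))   # placeholder stack
      print(biovolume(stack, 0.9, voxel_um3=0.1 * 0.1 * 0.5))   # um^3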

  18. HYMOSS signal processing for pushbroom spectral imaging

    NASA Technical Reports Server (NTRS)

    Ludwig, David E.

    1991-01-01

    The objective of the Pushbroom Spectral Imaging Program was to develop on-focal-plane electronics which compensate for detector array non-uniformities. The approach taken was to implement a simple two-point calibration algorithm on the focal plane which allows for offset and linear gain correction. The key on-focal-plane features which made this technique feasible were the use of a high-quality transimpedance amplifier (TIA) and an analog-to-digital converter for each detector channel. Gain compensation is accomplished by varying the feedback capacitance of the integrate-and-dump TIA. Offset correction is performed by storing offsets in a special on-focal-plane offset register and digitally subtracting the offsets from the readout data during the multiplexing operation. A custom integrated circuit was designed, fabricated, and tested on this program which proved that nonuniformity-compensated, analog-to-digital converting circuits may be used to read out infrared detectors. Irvine Sensors Corporation (ISC) successfully demonstrated innovative on-focal-plane functions that allow for correction of detector non-uniformities. Most of the circuit functions demonstrated on this program are finding their way onto future ICs because of their impact on reduced downstream processing, increased focal plane performance, simplified focal plane control, and a reduced number of dewar connections, as well as the noise immunity of a digital interface dewar. The potential commercial applications for this integrated circuit are primarily in imaging systems, which may be used for security monitoring, manufacturing process monitoring, robotics, and spectral imaging in analytical instrumentation.
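
    The two-point calibration itself is simple to state in software, even though the paper implements it in on-focal-plane hardware: a dark (low) reference fixes each pixel's offset and a uniform bright (high) reference fixes its gain. A hedged NumPy sketch of the arithmetic (the reference levels and array sizes are invented for illustration):

        import numpy as np

        def two_point_calibrate(raw, dark, flat, target_lo=0.0, target_hi=1.0):
            # dark: per-pixel response to a low (cold) reference
            # flat: per-pixel response to a high uniform reference
            gain = (target_hi - target_lo) / np.maximum(flat - dark, 1e-12)
            return (raw - dark) * gain + target_lo

        # Simulated 4x4 detector with per-pixel gain/offset nonuniformity:
        rng = np.random.default_rng(0)
        true_gain = 1 + 0.1 * rng.standard_normal((4, 4))
        true_offset = 0.05 * rng.standard_normal((4, 4))
        scene = 0.6  # uniform irradiance
        raw = true_gain * scene + true_offset
        dark = true_gain * 0.0 + true_offset
        flat = true_gain * 1.0 + true_offset
        print(np.allclose(two_point_calibrate(raw, dark, flat), scene))  # True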

  19. HYMOSS signal processing for pushbroom spectral imaging

    NASA Astrophysics Data System (ADS)

    Ludwig, David E.

    1991-06-01

    The objective of the Pushbroom Spectral Imaging Program was to develop on-focal-plane electronics which compensate for detector array non-uniformities. The approach taken was to implement a simple two-point calibration algorithm on the focal plane which allows for offset and linear gain correction. The key on-focal-plane features which made this technique feasible were the use of a high-quality transimpedance amplifier (TIA) and an analog-to-digital converter for each detector channel. Gain compensation is accomplished by varying the feedback capacitance of the integrate-and-dump TIA. Offset correction is performed by storing offsets in a special on-focal-plane offset register and digitally subtracting the offsets from the readout data during the multiplexing operation. A custom integrated circuit was designed, fabricated, and tested on this program which proved that nonuniformity-compensated, analog-to-digital converting circuits may be used to read out infrared detectors. Irvine Sensors Corporation (ISC) successfully demonstrated innovative on-focal-plane functions that allow for correction of detector non-uniformities. Most of the circuit functions demonstrated on this program are finding their way onto future ICs because of their impact on reduced downstream processing, increased focal plane performance, simplified focal plane control, and a reduced number of dewar connections, as well as the noise immunity of a digital interface dewar. The potential commercial applications for this integrated circuit are primarily in imaging systems, which may be used for security monitoring, manufacturing process monitoring, robotics, and spectral imaging in analytical instrumentation.

  20. The Influence of Working Memory and Phonological Processing on English Language Learner Children's Bilingual Reading and Language Acquisition

    ERIC Educational Resources Information Center

    Swanson, H. Lee; Orosco, Michael J.; Lussier, Cathy M.; Gerber, Michael M.; Guzman-Orth, Danielle A.

    2011-01-01

    In this study, we explored whether the contribution of working memory (WM) to children's (N = 471) 2nd language (L2) reading and language acquisition was best accounted for by processing efficiency at a phonological level and/or by executive processes independent of phonological processing. Elementary school children (Grades 1, 2, & 3) whose 1st…

  1. Relationships among process skills development, knowledge acquisition, and gender in microcomputer-based chemistry laboratories

    NASA Astrophysics Data System (ADS)

    Krieger, Carla Repsher

    This study investigated how instruction in MBL environments can be designed to facilitate process skills development and knowledge acquisition among high school chemistry students. Ninety-eight college preparatory chemistry students in six intact classes were randomly assigned to one of three treatment groups: MBL with enhanced instruction in Macroscopic knowledge, MBL with enhanced instruction in Microscopic knowledge, and MBL with enhanced instruction in Symbolic knowledge. Each treatment group completed a total of four MBL titrations involving acids and bases. After the first and third titrations, the Macroscopic, Microscopic and Symbolic groups received enhanced instruction in the Macroscopic, Microscopic and Symbolic modes, respectively. During each titration, participants used audiotapes to record their verbal interactions. The study also explored the effects of three potential covariates (age, mathematics background, and computer usage) on the relationships among the independent variables (type of enhanced instruction and gender) and the dependent variables (science process skills and knowledge acquisition). Process skills were measured via gain scores on a standardized test. Analysis of Covariance eliminated age, mathematics background, and computer usage as covariates in this study. Analysis of Variance identified no significant effects on process skills attributable to treatment or gender. Knowledge acquisition was assessed via protocol analysis of statements made by the participants during the four titrations. Statements were categorized as procedural, observational, conceptual/analytical, or miscellaneous. Statement category percentages were analyzed for trends across treatments, genders, and experiments. Instruction emphasizing the Macroscopic mode may have increased percentages of observational and miscellaneous statements and decreased percentages of procedural and conceptual/analytical statements. Instruction emphasizing the Symbolic mode may have

  2. Recovering the dynamics of root growth and development using novel image acquisition and analysis methods.

    PubMed

    Wells, Darren M; French, Andrew P; Naeem, Asad; Ishaq, Omer; Traini, Richard; Hijazi, Hussein; Bennett, Malcolm J; Pridmore, Tony P

    2012-06-01

    Roots are highly responsive to environmental signals encountered in the rhizosphere, such as nutrients, mechanical resistance and gravity. As a result, root growth and development are very plastic. If this complex and vital process is to be understood, methods and tools are required to capture the dynamics of root responses. Tools are needed that are high-throughput, support large-scale experimental work, and provide accurate, high-resolution, quantitative data. We describe and demonstrate the efficacy of the high-throughput and high-resolution root imaging systems recently developed within the Centre for Plant Integrative Biology (CPIB). This toolset includes (i) robotic imaging hardware to generate time-lapse datasets from standard cameras under infrared illumination and (ii) automated image analysis methods and software to extract quantitative information about root growth and development, both from these images and via high-resolution light microscopy. These methods are demonstrated using data gathered during an experimental study of the gravitropic response of Arabidopsis thaliana.

  3. Recovering the dynamics of root growth and development using novel image acquisition and analysis methods

    PubMed Central

    Wells, Darren M.; French, Andrew P.; Naeem, Asad; Ishaq, Omer; Traini, Richard; Hijazi, Hussein; Bennett, Malcolm J.; Pridmore, Tony P.

    2012-01-01

    Roots are highly responsive to environmental signals encountered in the rhizosphere, such as nutrients, mechanical resistance and gravity. As a result, root growth and development are very plastic. If this complex and vital process is to be understood, methods and tools are required to capture the dynamics of root responses. Tools are needed that are high-throughput, support large-scale experimental work, and provide accurate, high-resolution, quantitative data. We describe and demonstrate the efficacy of the high-throughput and high-resolution root imaging systems recently developed within the Centre for Plant Integrative Biology (CPIB). This toolset includes (i) robotic imaging hardware to generate time-lapse datasets from standard cameras under infrared illumination and (ii) automated image analysis methods and software to extract quantitative information about root growth and development, both from these images and via high-resolution light microscopy. These methods are demonstrated using data gathered during an experimental study of the gravitropic response of Arabidopsis thaliana. PMID:22527394

  4. Quantitative analysis of geomorphic processes using satellite image data at different scales

    NASA Technical Reports Server (NTRS)

    Williams, R. S., Jr.

    1985-01-01

    When aerial and satellite photographs and images are used in the quantitative analysis of geomorphic processes, either through direct observation of active processes or by analysis of landforms resulting from inferred active or dormant processes, a number of limitations in the use of such data must be considered. Active geomorphic processes work at different scales and rates. Therefore, the capability of imaging an active or dormant process depends primarily on the scale of the process and the spatial-resolution characteristic of the imaging system. Scale is an important factor in recording continuous and discontinuous active geomorphic processes, because what is not recorded will not be considered or even suspected in the analysis of orbital images. If the geomorphic process, or the landform change caused by the process, is less than 200 m in x-y dimension, then it will not be recorded. Although the scale factor is critical in the recording of discontinuous active geomorphic processes, the repeat interval of orbital-image acquisition of a planetary surface is also a consideration, in order to capture a recurring short-lived geomorphic process or to record changes caused by either a continuous or a discontinuous geomorphic process.
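
    The scale argument reduces to a sampling rule of thumb: a process, or the landform change it produces, is resolvable only if it spans at least a couple of ground sample distances. A one-line sketch of that check (the two-pixel criterion and the 80 m example are illustrative assumptions, not values from the paper):

        def detectable(feature_size_m, gsd_m, pixels_required=2):
            # Nyquist-style rule: a feature must span at least a few
            # ground sample distances (pixels) to be recorded at all.
            return feature_size_m >= pixels_required * gsd_m

        print(detectable(200, 80))  # True: a 200 m feature at 80 m GSD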

  5. A New Image Processing and GIS Package

    NASA Technical Reports Server (NTRS)

    Rickman, D.; Luvall, J. C.; Cheng, T.

    1998-01-01

    The image processing and GIS package ELAS was developed during the 1980's by NASA. It proved to be popular, influential and powerful in the manipulation of digital imagery. Before the advent of PCs it was used by hundreds of institutions, mostly schools. It is the unquestioned, direct progenitor of two commercial GIS remote sensing packages, ERDAS and MapX, and influenced others, such as PCI. Its power was demonstrated by its use for work far beyond its original purpose: it has been applied to several different types of medical imagery, photomicrographs of rock, images of turtle flippers and numerous other esoteric imagery. Although development largely stopped in the early 1990's, the package still offers as much or more power and flexibility than any other roughly comparable package, public or commercial. It is a huge body of code, representing more than a decade of work by full-time, professional programmers. The current versions all have several deficiencies compared to current software standards and usage, notably the strictly command-line interface. In order to support their research needs, the authors are fundamentally changing ELAS, and in the process greatly increasing its power, utility, and ease of use. The new software is called ELAS II. This paper discusses the design of ELAS II.

  6. Cognitive processes during fear acquisition and extinction in animals and humans: implications for exposure therapy of anxiety disorders.

    PubMed

    Hofmann, Stefan G

    2008-02-01

    Anxiety disorders are highly prevalent. Fear conditioning and extinction learning in animals often serve as simple models of fear acquisition and exposure therapy of anxiety disorders in humans. This article reviews the empirical and theoretical literature on cognitive processes in fear acquisition, extinction, and exposure therapy. It is concluded that exposure therapy is a form of cognitive intervention that specifically changes the expectancy of harm. Implications for therapy research are discussed.

  7. Adaptive anisotropic gaussian filtering to reduce acquisition time in cardiac diffusion tensor imaging.

    PubMed

    Mazumder, Ria; Clymer, Bradley D; Mo, Xiaokui; White, Richard D; Kolipaka, Arunark

    2016-06-01

    Diffusion tensor imaging (DTI) is used to quantify myocardial fiber orientation based on helical angles (HA). Accurate HA measurements require multiple excitations (NEX) and/or several diffusion encoding directions (DED). However, increasing NEX and/or DED increases acquisition time (TA). Therefore, in this study, we propose to reduce TA by implementing a 3D adaptive anisotropic Gaussian filter (AAGF) on the DTI data acquired from ex-vivo healthy and infarcted porcine hearts. DTI was performed on ex-vivo hearts [9 healthy, 3 with myocardial infarction (MI)] with several combinations of DED and NEX. AAGF, a mean filter (AVF) and a median filter (MF) were applied to the primary eigenvectors of the diffusion tensor prior to HA estimation, and the performance of AAGF was compared against AVF and MF. Root mean square error (RMSE), concordance correlation coefficients and Bland-Altman's technique were used to determine the optimal combination of DED and NEX that generated the best HA maps in the least possible TA. Lastly, the effect of implementing AAGF on the infarcted porcine hearts was also investigated. RMSE in HA estimation for AAGF was lower compared to AVF or MF. After filtering with AAGF, fewer DED and NEX were required to achieve HA maps with similar integrity to those obtained from higher NEX and/or DED. Pathological alterations in HA orientation in the MI model were preserved after filtering. Our results demonstrate that AAGF reduces TA without affecting the integrity of the myocardial microstructure. PMID:26843150
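
    The filtering step operates on the primary-eigenvector field before HA estimation. As a simplified stand-in for the adaptive filter in the paper, the sketch below applies a fixed (non-adaptive) anisotropic Gaussian to each vector component and renormalizes; the per-axis widths are invented for illustration:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def smooth_eigenvectors(v, sigmas=(1.0, 1.0, 0.5)):
            # v: primary eigenvector field of shape (Z, Y, X, 3).
            # A different sigma per spatial axis makes the Gaussian anisotropic.
            # (A real DTI pipeline must also handle eigenvector sign ambiguity
            # before averaging; omitted here for brevity.)
            out = np.stack([gaussian_filter(v[..., i], sigma=sigmas)
                            for i in range(3)], axis=-1)
            norm = np.linalg.norm(out, axis=-1, keepdims=True)
            return out / np.maximum(norm, 1e-12)  # back to unit vectors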

  8. Measurement of eye lens dose for Varian On-Board Imaging with different cone-beam computed tomography acquisition techniques

    PubMed Central

    Deshpande, Sudesh; Dhote, Deepak; Thakur, Kalpna; Pawar, Amol; Kumar, Rajesh; Kumar, Munish; Kulkarni, M. S.; Sharma, S. D.; Kannan, V.

    2016-01-01

    The objective of this work was to measure patient eye lens dose for different cone-beam computed tomography (CBCT) acquisition protocols of Varian's On-Board Imaging (OBI) system using optically stimulated luminescence dosimeters (OSLDs) and to study the variation in eye lens dose with patient geometry and with the distance from the isocenter to the eye lens. During the experimental measurements, an OSLD was placed on the patient between the eyebrows, in line with the nose, during CBCT image acquisition to measure eye lens dose. The eye lens dose measurements were carried out for three different cone-beam acquisition protocols (standard-dose head, low-dose head [LDH], and high-quality head [HQH]) of the Varian OBI. Measured doses were correlated with patient geometry and with the distance between the isocenter and the eye lens. Measured eye lens doses for the standard head and HQH protocols were in the range of 1.8–3.2 mGy and 4.5–9.9 mGy, respectively, whereas the measured eye lens dose for the LDH protocol was in the range of 0.3–0.7 mGy. The measured data indicate that the eye lens dose to the patient depends on the selected imaging protocol. It was also observed that eye lens dose does not depend on patient geometry but strongly depends on the distance between the eye lens and the treatment field isocenter. The undoubted advantages of the imaging system should not be counterbalanced by inappropriate selection of the imaging protocol, especially a very intense one. PMID:27651564

  9. A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines

    PubMed Central

    Mikut, Ralf; Reischl, Markus

    2016-01-01

    The parametrization of automatic image processing routines is time-consuming if many image processing parameters are involved. An expert can tune parameters sequentially to get desired results, but this may not be productive for applications with difficult image analysis tasks, e.g. when high noise and shading levels are present or images vary in their characteristics due to different acquisition conditions. In such cases, parameters need to be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is evaluated on a benchmark data set that contains challenging image distortions of increasing strength, which enables us to compare different standard image segmentation algorithms in a feedback vs. feedforward implementation by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present. Such a framework evaluates the robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts. PMID:27764213
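
    The feedback principle is easy to illustrate on the simplest possible segmentation routine: a global threshold whose single parameter is driven by the mismatch between the current result and an abstract ground truth (here, a target foreground fraction). A toy sketch, with all names and the update rule invented for illustration:

        import numpy as np

        def adapt_threshold(image, target_frac, steps=50, lr=0.5):
            # Feedback loop: the deviation of the segmented foreground
            # fraction from the target drives the threshold update.
            t = float(image.mean())
            for _ in range(steps):
                frac = (image > t).mean()      # current segmentation result
                error = frac - target_frac     # feedback signal
                t += lr * error * image.std()  # too much foreground -> raise t
            return t

        img = np.random.default_rng(2).random((128, 128))
        t = adapt_threshold(img, target_frac=0.25)
        print(t, (img > t).mean())  # threshold and achieved fraction (~0.25)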

  10. Acquisition of quantitative physiological data and computerized image reconstruction using a single scan TV system

    NASA Technical Reports Server (NTRS)

    Baily, N. A.

    1976-01-01

    A single-scan radiography system has been interfaced to a minicomputer, and the combined system has been used with a variety of fluoroscopic systems and image intensifiers available in clinical facilities. The system's response range is analyzed, and several applications are described. These include determination of the gray scale for typical X-ray-fluoroscopic-television chains, measurement of gallstone volume in patients, localization of markers or other small anatomical features, determinations of organ areas and volumes, computer reconstruction of tomographic sections of organs in motion, and computer reconstruction of transverse axial body sections from fluoroscopic images. It is concluded that this type of system combined with a minimum of statistical processing shows excellent capabilities for delineating small changes in differential X-ray attenuation.

  11. Using Image Processing to Determine Emphysema Severity

    NASA Astrophysics Data System (ADS)

    McKenzie, Alexander; Sadun, Alberto

    2010-10-01

    Currently, X-rays and computerized tomography (CT) scans are used to detect emphysema, but other tests are required to accurately quantify the amount of lung that has been affected by the disease. These images clearly show if a patient has emphysema, but are unable, by visual scan alone, to quantify the degree of the disease, as it presents as subtle, dark spots on the lung. Our goal is to use these CT scans to accurately diagnose and determine emphysema severity levels in patients. This will be accomplished by performing several different analyses of CT scan images of several patients representing a wide range of severity of the disease. In addition to analyzing the original CT data, this process will convert the data to one- and two-bit images and will then examine the deviation from a normal distribution curve to determine skewness. Our preliminary results show that this method of assessment appears to be more accurate and robust than the currently utilized methods, which involve looking at percentages of radiodensities in the air passages of the lung.
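
    Both the conventional radiodensity-percentage measure and the distribution-shape measure described here can be computed from the histogram of lung-voxel attenuation values. A minimal sketch of the two scores (the -950 HU cutoff is the conventional density-mask threshold from the CT literature, not a value stated in this abstract):

        import numpy as np
        from scipy.stats import skew

        def emphysema_scores(lung_hu):
            # lung_hu: 1-D array of Hounsfield units for lung voxels only.
            lung_hu = np.asarray(lung_hu, dtype=float)
            laa950 = (lung_hu < -950).mean()  # fraction of low-attenuation voxels
            return laa950, skew(lung_hu)      # deviation from a normal curve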

  12. Image processing to optimize wave energy converters

    NASA Astrophysics Data System (ADS)

    Bailey, Kyle Marc-Anthony

    The world is turning to renewable energies as a means of ensuring the planet's future and well-being. There have been a few attempts in the past to utilize wave power as a means of generating electricity through the use of Wave Energy Converters (WEC), but only recently have they become a focal point in the renewable energy field, and over the past few years there has been a global drive to advance the efficiency of WEC. Wave power is produced by placing a mechanical device either onshore or offshore that captures the energy within ocean surface waves. This paper seeks to provide a novel and innovative way to estimate ocean wave frequency through the use of image processing, achieved by applying a complex modulated lapped orthogonal transform filter bank to satellite images of ocean waves. The complex modulated lapped orthogonal transform filter bank provides an equal subband decomposition of the Nyquist-bounded discrete-time Fourier transform spectrum. The maximum energy of the 2D complex modulated lapped transform subband is used to determine the horizontal and vertical frequency, which subsequently can be used to determine the wave frequency in the direction of the WEC by a simple trigonometric scaling. The robustness of the proposed method is demonstrated by applications to simulated and real satellite images where the frequency is known.
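
    The final two steps, picking the spectral peak of maximum energy and projecting the resulting (horizontal, vertical) frequency pair onto the WEC direction, can be sketched with a plain 2D FFT standing in for the modulated lapped transform filter bank (the function and parameter names are invented for illustration):

        import numpy as np

        def dominant_wave_frequency(img, dx=1.0, wec_heading_deg=0.0):
            # 2D spectrum of the zero-mean image (DC removed first).
            F = np.fft.fftshift(np.fft.fft2(img - img.mean()))
            fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0], d=dx))
            fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1], d=dx))
            iy, ix = np.unravel_index(np.argmax(np.abs(F)), F.shape)
            # Note: the peak is defined up to sign (conjugate symmetry).
            # Trigonometric scaling: component of (fx, fy) along the WEC heading.
            theta = np.deg2rad(wec_heading_deg)
            return fx[ix] * np.cos(theta) + fy[iy] * np.sin(theta)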

  13. Performing Quantitative Imaging Acquisition, Analysis and Visualization Using the Best of Open Source and Commercial Software Solutions

    PubMed Central

    Shenoy, Shailesh M.

    2016-01-01

    A challenge in any imaging laboratory, especially one that uses modern techniques, is to achieve a sustainable and productive balance between using open source and commercial software to perform quantitative image acquisition, analysis and visualization. In addition to considering the expense of software licensing, one must consider factors such as the quality and usefulness of the software’s support, training and documentation. Also, one must consider the reproducibility with which multiple people generate results using the same software to perform the same analysis, how one may distribute their methods to the community using the software and the potential for achieving automation to improve productivity. PMID:27516727

  14. Platform for distributed image processing and image retrieval

    NASA Astrophysics Data System (ADS)

    Gueld, Mark O.; Thies, Christian J.; Fischer, Benedikt; Keysers, Daniel; Wein, Berthold B.; Lehmann, Thomas M.

    2003-06-01

    We describe a platform for the implementation of a system for content-based image retrieval in medical applications (IRMA). To cope with the constantly evolving medical knowledge, the platform offers a flexible feature model to store and uniformly access all feature types required within a multi-step retrieval approach. A structured generation history for each feature allows the automatic identification and re-use of already computed features. The platform uses directed acyclic graphs composed of processing steps and control elements to model arbitrary retrieval algorithms. This visually intuitive, data-flow oriented representation vastly improves the interdisciplinary communication between computer scientists and physicians during the development of new retrieval algorithms. The execution of the graphs is fully automated within the platform. Each processing step is modeled as a feature transformation. Due to a high degree of system transparency, both the implementation and the evaluation of retrieval algorithms are accelerated significantly. The platform uses a client-server architecture consisting of a central database, a central job scheduler, instances of a daemon service, and clients which embed user-implemented feature transformations. Automatically distributed batch processing and distributed feature storage enable the cost-efficient use of an existing workstation cluster.
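
    The two central ideas, retrieval algorithms as directed acyclic graphs of processing steps and a generation history that lets already computed features be re-used, can be miniaturized as follows (a toy sketch; the class and function names are invented, not the IRMA API):

        from dataclasses import dataclass, field
        from typing import Callable, Dict, List

        @dataclass
        class Step:
            func: Callable                                 # feature transformation
            inputs: List[str] = field(default_factory=list)

        def compute(steps: Dict[str, Step], target: str, cache: Dict[str, object]):
            # The cache plays the role of the generation history: a feature
            # already computed anywhere in the DAG is never recomputed.
            if target not in cache:
                step = steps[target]
                args = [compute(steps, name, cache) for name in step.inputs]
                cache[target] = step.func(*args)
            return cache[target]

        steps = {
            "image": Step(lambda: [1, 2, 3, 4]),
            "mean": Step(lambda im: sum(im) / len(im), ["image"]),
            "range": Step(lambda im: max(im) - min(im), ["image"]),
        }
        cache = {}
        print(compute(steps, "mean", cache), compute(steps, "range", cache))
        # "image" was computed once and re-used for both features.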

  15. Imaging fault zones using 3D seismic image processing techniques

    NASA Astrophysics Data System (ADS)

    Iacopini, David; Butler, Rob; Purves, Steve

    2013-04-01

    Significant advances in the structural analysis of deep-water structures, salt tectonics and extensional rift basins have come from descriptions of fault system geometries imaged in 3D seismic data. However, even where seismic data are excellent, in most cases the trajectory of thrust faults is highly conjectural, and significant uncertainty still exists as to the patterns of deformation that develop between the main fault segments, and even as to the fault architectures themselves. Moreover, structural interpretations that conventionally define faults by breaks and apparent offsets of seismic reflectors are commonly conditioned by a narrow range of theoretical models of fault behavior. For example, almost all interpretations of thrust geometries on seismic data rely on theoretical "end-member" behaviors in which concepts such as strain localization or multilayer mechanics are simply avoided. Yet analogue outcrop studies confirm that such descriptions are commonly unsatisfactory and incomplete. In order to fill these gaps and improve the 3D visualization of deformation in the subsurface, seismic attribute methods are developed here in conjunction with conventional mapping of reflector amplitudes (Marfurt & Chopra, 2007). These signal processing techniques, recently developed and applied especially by the oil industry, use variations in the amplitude and phase of the seismic wavelet; the resulting seismic attributes improve signal interpretation and are calculated over the entire 3D seismic dataset. In this contribution we show 3D seismic examples of fault structures from gravity-driven deep-water thrust structures and extensional basin systems to indicate how 3D seismic image processing methods can not only improve geometrical interpretations of the faults but also begin to map both strain and damage through amplitude/phase properties of the seismic signal. This is done by quantifying and delineating the short-range anomalies in the intensity of reflector amplitudes.
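
    Attributes built from the amplitude and phase of the seismic wavelet are conventionally derived from the analytic (complex) trace. A minimal per-trace sketch of that computation (standard complex-trace analysis, offered as background rather than the authors' specific attribute set):

        import numpy as np
        from scipy.signal import hilbert

        def complex_trace_attributes(trace):
            # Analytic signal of one seismic trace: real part is the data,
            # imaginary part its Hilbert transform.
            analytic = hilbert(np.asarray(trace, dtype=float))
            envelope = np.abs(analytic)  # instantaneous amplitude
            phase = np.angle(analytic)   # instantaneous phase (radians)
            return envelope, phase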

  16. Multispectral image processing: the nature factor

    NASA Astrophysics Data System (ADS)

    Watkins, Wendell R.

    1998-09-01

    The images processed by our brain represent our window into the world. For some animals this window is derived from a single eye; for others, including humans, two eyes provide stereo imagery; for others, like the black widow spider, several eyes are used (8 eyes); and some insects, like the common housefly, utilize thousands of eyes (ommatidia). Still other animals like the bat and dolphin have eyes for regular vision, but employ acoustic sonar vision for seeing where their regular eyes don't work, such as in pitch-black caves or turbid water. Of course, other animals have adapted to dark environments by bringing along their own lighting, such as the firefly and several creatures from the depths of the ocean floor. Animal vision is truly varied and has developed over millennia in many remarkable ways. We have learned a lot about vision processes by studying these animal systems and can still learn even more.

  17. Digitizing data acquisition and time-of-flight pulse processing for ToF-ERDA

    NASA Astrophysics Data System (ADS)

    Julin, Jaakko; Sajavaara, Timo

    2016-01-01

    A versatile system to capture and analyze signals from microchannel plate (MCP) based time-of-flight detectors and ionization-based energy detectors such as silicon diodes and gas ionization chambers (GIC) is introduced. The system is based on commercial digitizers and custom software. It forms a part of a ToF-ERDA spectrometer, which has to be able to detect recoil atoms of many different species and energies. Compared to the currently used analogue electronics, the digitizing system provides comparable time-of-flight resolution and improved hydrogen detection efficiency, while allowing the operation of the spectrometer to be studied and optimized after the measurement. The hardware, data acquisition software and digital pulse processing algorithms to suit this application are described in detail.
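
    In a digitizer-based system, the timing that analogue discriminators used to provide is recovered in software from the sampled pulses. A hedged sketch of one common approach, constant-fraction-style timing with linear interpolation (a generic digital pulse processing technique, not necessarily the algorithm used in the paper):

        import numpy as np

        def cf_time(pulse, dt, fraction=0.3):
            # Timestamp where the pulse first crosses a fixed fraction of
            # its peak, refined by linear interpolation between samples.
            pulse = np.asarray(pulse, dtype=float)
            thr = fraction * pulse.max()
            i = int(np.argmax(pulse >= thr))  # first sample at/above threshold
            if i == 0:
                return 0.0
            frac = (thr - pulse[i - 1]) / (pulse[i] - pulse[i - 1])
            return (i - 1 + frac) * dt

        # Time of flight = stop timestamp - start timestamp, e.g.:
        # tof = cf_time(stop_pulse, dt) - cf_time(start_pulse, dt)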

  18. MSL's Widgets: Adding Robustness to Martian Sample Acquisition, Handling, and Processing

    NASA Technical Reports Server (NTRS)

    Roumeliotis, Chris; Kennedy, Brett; Lin, Justin; DeGrosse, Patrick; Cady, Ian; Onufer, Nicholas; Sigel, Deborah; Jandura, Louise; Anderson, Robert; Katz, Ira; Slimko, Eric; Limonadi, Daniel

    2013-01-01

    Mars Science Laboratory's (MSL) Sample Acquisition, Sample Processing and Handling (SA-SPaH) system is one of the most ambitious terrain interaction and manipulation systems ever built and successfully used outside of planet Earth. Mars has a ruthless environment that has surprised many who have tried to explore there. The robustness widget program was implemented by the MSL project to help ensure the SA-SPaH system would be robust to the surprises of this ruthless Martian environment. The program was carried out under extreme schedule pressure and responsibility, but was accomplished with resounding success. This paper is a behind-the-scenes look at MSL's robustness widgets: the particle fun zone, the wind guards, and the portioner pokers.

  19. Bone feature analysis using image processing techniques.

    PubMed

    Liu, Z Q; Austin, T; Thomas, C D; Clement, J G

    1996-01-01

    In order to establish the correlation between bone structure and age, and to obtain information about age-related bone changes, it is necessary to study microstructural features of human bone. Traditionally, in bone biology and forensic science, the analysis of bone cross-sections has been carried out manually. Such a process is known to be slow, inefficient and prone to human error, and consequently the results obtained so far have been unreliable. In this paper we present a new approach to quantitative analysis of cross-sections of human bones using digital image processing techniques. We demonstrate that such a system is able to extract various bone features consistently and is capable of providing more reliable data and statistics for bones. Consequently, we will be able to correlate features of bone microstructure with age, and possibly also with age-related bone diseases such as osteoporosis. The development of knowledge-based computer vision systems for automated bone image analysis can now be considered feasible.
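
    The kind of feature counting that replaces the manual procedure typically begins with thresholding and connected-component labeling of a cross-section image. A toy sketch of that first step (the threshold and feature names are illustrative; the paper's actual pipeline is more elaborate):

        import numpy as np
        from scipy import ndimage

        def count_bone_features(section, thresh):
            # Candidate microstructural features (e.g. osteons, canals) as
            # connected components of the thresholded cross-section.
            binary = section > thresh
            labels, n = ndimage.label(binary)
            areas = ndimage.sum(binary, labels, index=range(1, n + 1))
            return n, areas  # feature count and per-feature area (pixels)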

  20. Signal processing for imaging and mapping ladar

    NASA Astrophysics Data System (ADS)

    Grönwall, Christina; Tolt, Gustav

    2011-11-01

    The new generation of laser-based FLASH 3D imaging sensors enables data collection at video rate. This opens up for real-time data analysis but also sets demands on the signal processing. In this paper the possibilities and challenges of this new data type are discussed. The commonly used focal-plane-array based detectors produce range estimates that vary with the target's surface reflectance and target range, and our experience is that the built-in signal processing may not compensate fully for that. We propose a simple adjustment that can be used even if some sensor parameters are not known. Compared to scanning laser radar systems, the cost of instantaneous image collection is lower range accuracy; by gathering range information from several frames, the geometrical information of the target can be obtained. We also present an approach for how range data can be used to remove foreground clutter in front of a target. Further, we illustrate how range data enable target classification in near real-time and show that the results can be improved if several frames are co-registered. Examples using data from forest and maritime scenes are shown.
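
    Foreground clutter removal with range data amounts to gating each pixel on its measured range. A minimal sketch of that idea (array names and the gate width are invented for illustration):

        import numpy as np

        def range_gate(range_img, intensity_img, target_range, gate=2.0):
            # Keep only pixels whose measured range lies within +/- gate
            # of the target range; foreground clutter is zeroed out.
            mask = np.abs(range_img - target_range) <= gate
            return np.where(mask, intensity_img, 0.0)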