Science.gov

Sample records for acquisition image processing

  1. Graphical user interface for image acquisition and processing

    DOEpatents

    Goldberg, Kenneth A.

    2002-01-01

An event-driven GUI-based image acquisition interface for the IDL programming environment, designed for CCD camera control and image acquisition directly into the IDL environment where image manipulation and data analysis can be performed, together with a toolbox of real-time analysis applications. Running the image acquisition hardware directly from IDL removes the necessity of first saving images in one program and then importing the data into IDL for analysis in a second step. Bringing the data directly into IDL creates an opportunity for the implementation of IDL image processing and display functions in real time. The program allows control over the available charge-coupled device (CCD) detector parameters, data acquisition, file saving and loading, and image manipulation and processing, all from within IDL. The program is built using IDL's widget libraries to control the on-screen display and user interface.
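The acquire-directly-into-the-analysis-environment workflow described above can be sketched as follows. The original runs in IDL; this Python sketch is only illustrative, and `acquire_frame` is a hypothetical stand-in for a real camera driver call:

```python
import numpy as np

def acquire_frame(exposure_s=0.1, shape=(512, 512),
                  rng=np.random.default_rng(0)):
    """Stand-in for a CCD readout; a real driver call would go here.
    Simulates photon shot noise with a Poisson draw."""
    return rng.poisson(lam=1000.0 * exposure_s, size=shape).astype(np.float64)

def acquisition_loop(n_frames=5):
    """Acquire and analyze each frame in the same environment, avoiding the
    save-in-one-program, import-into-another round trip the abstract mentions."""
    stats = []
    for _ in range(n_frames):
        frame = acquire_frame()
        # Real-time analysis step: background-subtracted mean signal.
        stats.append(float(frame.mean() - np.median(frame)))
    return stats
```

Because acquisition and analysis share one process, each frame is available to processing code the moment it is read out, with no intermediate files.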

  2. Networks for image acquisition, processing and display

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.

    1990-01-01

    The human visual system comprises layers of networks which sample, process, and code images. Understanding these networks is a valuable means of understanding human vision and of designing autonomous vision systems based on network processing. Ames Research Center has an ongoing program to develop computational models of such networks. The models predict human performance in detection of targets and in discrimination of displayed information. In addition, the models are artificial vision systems sharing properties with biological vision that has been tuned by evolution for high performance. Properties include variable density sampling, noise immunity, multi-resolution coding, and fault-tolerance. The research stresses analysis of noise in visual networks, including sampling, photon, and processing unit noises. Specific accomplishments include: models of sampling array growth with variable density and irregularity comparable to that of the retinal cone mosaic; noise models of networks with signal-dependent and independent noise; models of network connection development for preserving spatial registration and interpolation; multi-resolution encoding models based on hexagonal arrays (HOP transform); and mathematical procedures for simplifying analysis of large networks.

  3. Towards a Platform for Image Acquisition and Processing on RASTA

    NASA Astrophysics Data System (ADS)

    Furano, Gianluca; Guettache, Farid; Magistrati, Giorgio; Tiotto, Gabriele

    2013-08-01

This paper presents the architecture of a platform for image acquisition and processing based on commercial hardware and space-qualified hardware. The aim is to extend the Reference Architecture Test-bed for Avionics (RASTA) system in order to obtain a test-bed that allows testing different hardware and software solutions in the field of image acquisition and processing. The platform will allow the integration of space-qualified hardware and Commercial Off The Shelf (COTS) hardware in order to test different architectural configurations. The first implementation is being performed on a low-cost commercial board and on the GR712RC board based on the dual-core LEON3 fault-tolerant processor. The platform will include an actuation module with the aim of implementing a complete pipeline from image acquisition to actuation, making possible the simulation of a real-case scenario involving acquisition and actuation.

  4. Stable image acquisition for mobile image processing applications

    NASA Astrophysics Data System (ADS)

    Henning, Kai-Fabian; Fritze, Alexander; Gillich, Eugen; Mönks, Uwe; Lohweg, Volker

    2015-02-01

Today, mobile devices (smartphones, tablets, etc.) are widespread and of high importance to their users, and their performance as well as their versatility increases over time. This creates the opportunity to use such devices for more specific tasks, such as image processing in an industrial context. For the analysis of images, requirements like image quality (blur, illumination, etc.) as well as a defined relative position between the device and the object to be inspected are crucial. Since mobile devices are handheld and used in constantly changing environments, the challenge is to fulfill these requirements. We present an approach that overcomes these obstacles and stabilizes the image capturing process, such that image analysis on mobile devices is significantly improved. To this end, image processing methods are combined with sensor fusion concepts. The approach consists of three main parts. First, pose estimation methods are used to guide the user in moving the device to a defined position. Second, the sensor data and the pose information are combined for relative motion estimation. Finally, the image capturing process is automated: it is triggered depending on the alignment of the device with the object, as well as on the image quality that can be achieved under consideration of motion and environmental effects.
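The automated trigger in the third step can be sketched as a simple gate on alignment, motion, and image quality. This is a minimal illustration rather than the authors' implementation; the variance-of-Laplacian focus measure and all thresholds are assumptions:

```python
import numpy as np

def sharpness(img):
    """Variance of a discrete Laplacian: a common focus measure
    (higher values indicate a sharper image)."""
    lap = (-4.0 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def should_capture(img, pose_error_deg, motion_deg_s,
                   min_sharpness=50.0, max_pose_error=2.0, max_motion=1.0):
    """Trigger the capture only when the device is aligned with the target,
    nearly still, and the preview frame is sharp enough."""
    return (pose_error_deg <= max_pose_error
            and motion_deg_s <= max_motion
            and sharpness(img) >= min_sharpness)
```

In a real system the pose error and motion rate would come from the sensor-fusion estimate (IMU plus pose estimation), and the sharpness check would run on the live preview stream.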

  5. System of acquisition and processing of images of dynamic speckle

    NASA Astrophysics Data System (ADS)

Vega, F.; Torres, C.

    2015-01-01

In this paper we show the design and implementation of a system for the capture and analysis of dynamic speckle. The device consists of a USB camera, an isolated lighting system for imaging, a 633-nm, 10-mW laser pointer as the coherent light source, a diffuser, and a laptop for video processing. The equipment enables the acquisition and storage of video and also computes different statistical-analysis descriptors (global activity accumulation vector, activity accumulation matrix, cross-correlation vector, autocorrelation coefficient, Fujii matrix, etc.). The equipment is designed so that it can be taken directly to the site of the biological sample under study, and it is currently being used in research projects within the group.
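Several of the descriptors listed above are per-pixel statistics over the frame stack. As an illustration (not the authors' code), the Fujii matrix is commonly defined as an accumulated, intensity-weighted frame difference:

```python
import numpy as np

def fujii_matrix(frames, eps=1e-9):
    """Fujii differential descriptor of a speckle frame stack:
    D(x, y) = sum_k |I_k - I_{k+1}| / (I_k + I_{k+1}), computed pixel-wise.
    `frames` has shape (n_frames, height, width); `eps` avoids division by zero."""
    frames = np.asarray(frames, dtype=np.float64)
    num = np.abs(np.diff(frames, axis=0))   # |I_k - I_{k+1}| for consecutive frames
    den = frames[:-1] + frames[1:] + eps    # I_k + I_{k+1}
    return (num / den).sum(axis=0)
```

Static regions of the sample yield values near zero, while active regions (e.g., biological activity) accumulate large values, which is what makes the map useful for activity visualization.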

  6. [An image acquisition & processing system of the wireless endoscope based on DSP].

    PubMed

    Zhang, Jin-hua; Peng, Cheng-lin; Zhao, De-chun; Yang-Li

    2006-07-01

This paper covers an image acquisition and processing system for a capsule-style endoscope. Images sent by the endoscope are compressed and encoded by the digital signal processor (DSP), and the data are saved to the hard disk of a PC for analysis and processing in the image browser workstation. PMID:17039927

  7. Image Acquisition Context

    PubMed Central

    Bidgood, W. Dean; Bray, Bruce; Brown, Nicolas; Mori, Angelo Rossi; Spackman, Kent A.; Golichowski, Alan; Jones, Robert H.; Korman, Louis; Dove, Brent; Hildebrand, Lloyd; Berg, Michael

    1999-01-01

    Objective: To support clinically relevant indexing of biomedical images and image-related information based on the attributes of image acquisition procedures and the judgments (observations) expressed by observers in the process of image interpretation. Design: The authors introduce the notion of “image acquisition context,” the set of attributes that describe image acquisition procedures, and present a standards-based strategy for utilizing the attributes of image acquisition context as indexing and retrieval keys for digital image libraries. Methods: The authors' indexing strategy is based on an interdependent message/terminology architecture that combines the Digital Imaging and Communication in Medicine (DICOM) standard, the SNOMED (Systematized Nomenclature of Human and Veterinary Medicine) vocabulary, and the SNOMED DICOM microglossary. The SNOMED DICOM microglossary provides context-dependent mapping of terminology to DICOM data elements. Results: The capability of embedding standard coded descriptors in DICOM image headers and image-interpretation reports improves the potential for selective retrieval of image-related information. This favorably affects information management in digital libraries. PMID:9925229

  8. Infrared imagery acquisition process supporting simulation and real image training

    NASA Astrophysics Data System (ADS)

    O'Connor, John

    2012-05-01

    The increasing use of infrared sensors requires development of advanced infrared training and simulation tools to meet current Warfighter needs. In order to prepare the force, a challenge exists for training and simulation images to be both realistic and consistent with each other to be effective and avoid negative training. The US Army Night Vision and Electronic Sensors Directorate has corrected this deficiency by developing and implementing infrared image collection methods that meet the needs of both real image trainers and real-time simulations. The author presents innovative methods for collection of high-fidelity digital infrared images and the associated equipment and environmental standards. The collected images are the foundation for US Army, and USMC Recognition of Combat Vehicles (ROC-V) real image combat ID training and also support simulations including the Night Vision Image Generator and Synthetic Environment Core. The characteristics, consistency, and quality of these images have contributed to the success of these and other programs. To date, this method has been employed to generate signature sets for over 350 vehicles. The needs of future physics-based simulations will also be met by this data. NVESD's ROC-V image database will support the development of training and simulation capabilities as Warfighter needs evolve.

  9. A review of breast tomosynthesis. Part I. The image acquisition process

    PubMed Central

    Sechopoulos, Ioannis

    2013-01-01

Mammography is a very well-established imaging modality for the early detection and diagnosis of breast cancer. However, since the introduction of digital imaging to the realm of radiology, more advanced, and especially tomographic, imaging methods have been made possible. One of these methods, breast tomosynthesis, has finally been introduced to the clinic for routine everyday use, with the potential to replace mammography for breast cancer screening in the future. In this two-part paper, the extensive research performed during the development of breast tomosynthesis is reviewed, with a focus on the research addressing the medical physics aspects of this imaging modality. This first paper reviews the research performed on the issues relevant to the image acquisition process, including system design, optimization of geometry and technique, x-ray scatter, and radiation dose. The companion to this paper reviews all other aspects of breast tomosynthesis imaging, including the reconstruction process. PMID:23298126

  10. A review of breast tomosynthesis. Part I. The image acquisition process

    SciTech Connect

    Sechopoulos, Ioannis

    2013-01-15

Mammography is a very well-established imaging modality for the early detection and diagnosis of breast cancer. However, since the introduction of digital imaging to the realm of radiology, more advanced, and especially tomographic, imaging methods have been made possible. One of these methods, breast tomosynthesis, has finally been introduced to the clinic for routine everyday use, with the potential to replace mammography for breast cancer screening in the future. In this two-part paper, the extensive research performed during the development of breast tomosynthesis is reviewed, with a focus on the research addressing the medical physics aspects of this imaging modality. This first paper reviews the research performed on the issues relevant to the image acquisition process, including system design, optimization of geometry and technique, x-ray scatter, and radiation dose. The companion to this paper reviews all other aspects of breast tomosynthesis imaging, including the reconstruction process.

  11. A review of breast tomosynthesis. Part I. The image acquisition process.

    PubMed

    Sechopoulos, Ioannis

    2013-01-01

Mammography is a very well-established imaging modality for the early detection and diagnosis of breast cancer. However, since the introduction of digital imaging to the realm of radiology, more advanced, and especially tomographic, imaging methods have been made possible. One of these methods, breast tomosynthesis, has finally been introduced to the clinic for routine everyday use, with the potential to replace mammography for breast cancer screening in the future. In this two-part paper, the extensive research performed during the development of breast tomosynthesis is reviewed, with a focus on the research addressing the medical physics aspects of this imaging modality. This first paper reviews the research performed on the issues relevant to the image acquisition process, including system design, optimization of geometry and technique, x-ray scatter, and radiation dose. The companion to this paper reviews all other aspects of breast tomosynthesis imaging, including the reconstruction process. PMID:23298126

  12. Quantitative assessment of the impact of biomedical image acquisition on the results obtained from image analysis and processing

    PubMed Central

    2014-01-01

Introduction: Dedicated, automatic algorithms for image analysis and processing are becoming more and more common in medical diagnosis. When creating dedicated algorithms, many factors must be taken into consideration. They are associated with selecting the appropriate algorithm parameters and taking into account the impact of data acquisition on the results obtained. An important feature of algorithms is the possibility of their use in other medical units by other operators. This problem, namely the operator's (acquisition) impact on the results obtained from image analysis and processing, is shown on a few examples. Material and method: The analysed images were obtained from a variety of medical devices such as thermal imaging and tomography devices and those working in visible light. The objects of imaging were cellular elements, the anterior segment and fundus of the eye, postural defects and others. In total, almost 200,000 images coming from 8 different medical units were analysed. All image analysis algorithms were implemented in C and Matlab. Results: For various algorithms and methods of medical imaging, the impact of image acquisition on the results obtained is different. There are different levels of algorithm sensitivity to changes in the parameters, for example: (1) for microscope settings and the brightness assessment of cellular elements there is a difference of 8%; (2) for the thyroid ultrasound images there is a difference in marking the thyroid lobe area which results in a brightness assessment difference of 2%. The method of image acquisition in image analysis and processing also affects: (3) the accuracy of determining the temperature in the characteristic areas on the patient's back for the thermal method - error of 31%; (4) the accuracy of finding characteristic points in photogrammetric images when evaluating postural defects - error of 11%; (5) the accuracy of performing ablative and non-ablative treatments in cosmetology - error of 18%.

  13. Automated system for acquisition and image processing for the control and monitoring boned nopal

    NASA Astrophysics Data System (ADS)

    Luevano, E.; de Posada, E.; Arronte, M.; Ponce, L.; Flores, T.

    2013-11-01

This paper describes the design and fabrication of a system for image acquisition and processing to control the removal of thorns from the nopal vegetable (Opuntia ficus indica) in an automated machine that uses pulses from an Nd:YAG laser. The areolas, the areas where thorns grow on the bark of the nopal, are located by applying segmentation algorithms to the images obtained by a CCD. Once the positions of the areolas are known, their coordinates are sent to a motor system that steers the laser to interact with all areolas and remove the thorns from the nopal. The electronic system comprises a video decoder, memory for image and software storage, and a digital signal processor for system control. The firmware performs the tasks of acquisition, preprocessing, segmentation, recognition, and interpretation of the areolas. The system succeeds in identifying the areolas and generating a table of their coordinates, which is sent to the galvo motor system that controls the laser for thorn removal.
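The locate-then-target step described above can be sketched with a generic threshold-and-label segmentation. This is a hedged illustration of the idea, not the authors' algorithm; in particular, the assumption that areolas appear darker than the surrounding bark is mine:

```python
import numpy as np
from scipy import ndimage

def areola_coordinates(gray, thresh=0.5):
    """Segment dark, areola-like blobs in a normalized grayscale image and
    return an (N, 2) table of their (row, col) centroids, i.e. the coordinate
    table that would be handed to the galvo motor controller."""
    mask = gray < thresh                      # assumption: areolas darker than bark
    labels, n = ndimage.label(mask)           # connected-component labeling
    if n == 0:
        return np.empty((0, 2))
    # Centroid of each labeled blob becomes one laser target.
    return np.array(ndimage.center_of_mass(mask, labels, range(1, n + 1)))
```

Each centroid would then be converted from pixel coordinates to galvo angles through a calibration mapping before firing the laser.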

  14. Knowledge Acquisition, Validation, and Maintenance in a Planning System for Automated Image Processing

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.

    1996-01-01

A key obstacle hampering the fielding of AI planning applications is the considerable expense of developing, verifying, updating, and maintaining the planning knowledge base (KB). Planning systems must compare favorably in terms of software lifecycle costs to other means of automation, such as scripts or rule-based expert systems. This paper describes a planning application for automated image processing and our overall approach to knowledge acquisition for this application.

  15. Exploitation of realistic computational anthropomorphic phantoms for the optimization of nuclear imaging acquisition and processing protocols.

    PubMed

    Loudos, George K; Papadimitroulas, Panagiotis G; Kagadis, George C

    2014-01-01

Monte Carlo (MC) simulations play a crucial role in nuclear medical imaging, since they can provide the ground truth for clinical acquisitions by integrating and quantifying all physical parameters that affect image quality. Over the last decade, a number of realistic computational anthropomorphic models have been developed to serve imaging as well as other biomedical engineering applications. The combination of MC techniques with realistic computational phantoms can provide a powerful tool for pre- and post-processing in imaging, data analysis, and dosimetry. This work aims to create a global database of simulated Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) exams; the methodology, as well as its first elements, is presented. Simulations are performed using the well-validated, open-source GATE toolkit, standard anthropomorphic phantoms, and activity distributions of various radiopharmaceuticals derived from the literature. The resulting images, projections, and sinograms of each study are provided in the database and can be further exploited to evaluate processing and reconstruction algorithms. Patient studies with different characteristics are included in the database, and different computational phantoms were tested for the same acquisitions. These include the XCAT, Zubal, and Virtual Family phantoms, some of which are used for the first time in nuclear imaging. The created database will be freely available, and our current work is directed towards its extension by simulating additional clinical pathologies. PMID:25570355

  16. Digital image processing: a primer for JVIR authors and readers: part 2: digital image acquisition.

    PubMed

    LaBerge, Jeanne M; Andriole, Katherine P

    2003-11-01

    This is the second installment of a three-part series on digital image processing intended to prepare authors for online submission of manuscripts. In the first article of the series, we reviewed the fundamentals of digital image architecture. In this article, we describe the ways that an author can import digital images to the computer desktop. We explore the modern imaging network and explain how to import picture archiving and communications systems (PACS) images to the desktop. Options and techniques for producing digital hard copy film are also presented. PMID:14605101

  17. Object-oriented programming approach to CCD data acquisition and image processing

    NASA Astrophysics Data System (ADS)

    Naidu, B. Nagaraja; Srinivasan, R.; Shankar, S. Murali

    1997-10-01

In the recent past, both CCD camera controller hardware and software have witnessed dynamic change to keep pace with astronomers' imaging requirements. Conventional data acquisition software is based on menu-driven programs developed using structured high-level languages in a non-window environment. An application under Windows offers several advantages to users over the non-window approach, such as multitasking, access to large memory, and inter-application communication. Windows also provides many programming facilities to developers, such as device-independent graphics, support for a wide range of input/output devices, menus, icons, and bitmaps. However, programming for the Windows environment under structured programming demands an in-depth knowledge of events, formats, handles, and inner workings. The object-oriented approach simplifies the task of programming for Windows by using object windows, which manage the message-processing behavior and insulate the developer from the details of the inner workings of Windows. As a result, a Windows application can be developed in much less time and with less effort compared to conventional approaches. We have designed and developed easy-to-use CCD data acquisition and processing software under the Microsoft Windows 3.1 operating environment using Object Pascal for Windows. The acquisition software exploits the advantages of objects to provide custom-specific toolboxes implementing different functions of CCD data acquisition and image processing. In this paper the hierarchy of the software structure and various application functions are presented. The flexibility of the software in handling different CCDs and also mosaic arrangements is illustrated.
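The encapsulation argument above can be illustrated with a short object-oriented sketch (in Python rather than the paper's Object Pascal; all class and method names are hypothetical):

```python
class CCDCamera:
    """Encapsulates detector parameters and readout so that callers never
    touch controller internals, analogous to how object windows insulate
    the developer from the inner workings of Windows."""

    def __init__(self, width=64, height=64, exposure_s=1.0):
        self.width = width
        self.height = height
        self.exposure_s = exposure_s

    def expose(self):
        """Stand-in readout: a real subclass would talk to hardware."""
        return [[0] * self.width for _ in range(self.height)]


class MosaicCamera(CCDCamera):
    """Subclassing handles variant hardware (e.g. mosaic arrangements)
    without changing acquisition code written against the base interface."""

    def __init__(self, tiles=4, **kwargs):
        super().__init__(**kwargs)
        self.tiles = tiles

    def expose(self):
        # One frame per tile, each read out through the base-class interface.
        frames = []
        for _ in range(self.tiles):
            frames.append(super().expose())
        return frames
```

Acquisition code that only calls `expose()` works unchanged with either class, which is the flexibility the abstract claims for the object-oriented design.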

  18. Uav Photogrammetry with Oblique Images: First Analysis on Data Acquisition and Processing

    NASA Astrophysics Data System (ADS)

    Aicardi, I.; Chiabrando, F.; Grasso, N.; Lingua, A. M.; Noardo, F.; Spanò, A.

    2016-06-01

In recent years, many studies have revealed the advantages of using airborne oblique images for obtaining improved 3D city models (e.g., including façades and building footprints). Such data are usually acquired by expensive airborne cameras installed on traditional aerial platforms. The purpose of this paper is to evaluate the possibility of acquiring and using oblique images for the 3D reconstruction of a historical building, obtained by a UAV (Unmanned Aerial Vehicle) carrying traditional COTS (Commercial Off-the-Shelf) digital cameras (more compact and lighter than the devices generally used), for the realization of a high-level-of-detail architectural survey. The critical issues of acquisition from a common UAV (flight planning strategies, ground control point and check point distribution and measurement, etc.) are described. Another important aspect considered was the evaluation of the possibility of using such systems as low-cost methods for obtaining complete information from an aerial point of view in emergency situations or, as in the present paper, in the cultural heritage application field. The data processing was realized using an SfM-based approach for point cloud generation: different dense image-matching algorithms implemented in commercial and open-source software packages were tested. The achieved results are analysed, and the discrepancies from reference LiDAR data are computed for a final evaluation. The system was tested on the S. Maria Chapel, a part of the Novalesa Abbey (Italy).

  19. Three-dimensional ultrasonic imaging of concrete elements using different SAFT data acquisition and processing schemes

    SciTech Connect

    Schickert, Martin

    2015-03-31

    Ultrasonic testing systems using transducer arrays and the SAFT (Synthetic Aperture Focusing Technique) reconstruction allow for imaging the internal structure of concrete elements. At one-sided access, three-dimensional representations of the concrete volume can be reconstructed in relatively great detail, permitting to detect and localize objects such as construction elements, built-in components, and flaws. Different SAFT data acquisition and processing schemes can be utilized which differ in terms of the measuring and computational effort and the reconstruction result. In this contribution, two methods are compared with respect to their principle of operation and their imaging characteristics. The first method is the conventional single-channel SAFT algorithm which is implemented using a virtual transducer that is moved within a transducer array by electronic switching. The second method is the Combinational SAFT algorithm (C-SAFT), also named Sampling Phased Array (SPA) or Full Matrix Capture/Total Focusing Method (TFM/FMC), which is realized using a combination of virtual transducers within a transducer array. Five variants of these two methods are compared by means of measurements obtained at test specimens containing objects typical of concrete elements. The automated SAFT imaging system FLEXUS is used for the measurements which includes a three-axis scanner with a 1.0 m × 0.8 m scan range and an electronically switched ultrasonic array consisting of 48 transducers in 16 groups. On the basis of two-dimensional and three-dimensional reconstructed images, qualitative and some quantitative results of the parameters image resolution, signal-to-noise ratio, measurement time, and computational effort are discussed in view of application characteristics of the SAFT variants.

  20. Three-dimensional ultrasonic imaging of concrete elements using different SAFT data acquisition and processing schemes

    NASA Astrophysics Data System (ADS)

    Schickert, Martin

    2015-03-01

    Ultrasonic testing systems using transducer arrays and the SAFT (Synthetic Aperture Focusing Technique) reconstruction allow for imaging the internal structure of concrete elements. At one-sided access, three-dimensional representations of the concrete volume can be reconstructed in relatively great detail, permitting to detect and localize objects such as construction elements, built-in components, and flaws. Different SAFT data acquisition and processing schemes can be utilized which differ in terms of the measuring and computational effort and the reconstruction result. In this contribution, two methods are compared with respect to their principle of operation and their imaging characteristics. The first method is the conventional single-channel SAFT algorithm which is implemented using a virtual transducer that is moved within a transducer array by electronic switching. The second method is the Combinational SAFT algorithm (C-SAFT), also named Sampling Phased Array (SPA) or Full Matrix Capture/Total Focusing Method (TFM/FMC), which is realized using a combination of virtual transducers within a transducer array. Five variants of these two methods are compared by means of measurements obtained at test specimens containing objects typical of concrete elements. The automated SAFT imaging system FLEXUS is used for the measurements which includes a three-axis scanner with a 1.0 m × 0.8 m scan range and an electronically switched ultrasonic array consisting of 48 transducers in 16 groups. On the basis of two-dimensional and three-dimensional reconstructed images, qualitative and some quantitative results of the parameters image resolution, signal-to-noise ratio, measurement time, and computational effort are discussed in view of application characteristics of the SAFT variants.
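Both records above rest on delay-and-sum reconstruction. A minimal single-channel SAFT kernel can be sketched as follows; this is an illustrative Python sketch, not the FLEXUS implementation, and it assumes a 2-D geometry with a known constant sound speed:

```python
import numpy as np

def saft_reconstruct(ascans, positions, dt, c, grid_x, grid_z):
    """Single-channel SAFT by delay-and-sum: every image point accumulates
    each A-scan's sample at the pulse-echo travel time from that scan's
    transducer position. (C-SAFT/FMC would instead sum over all
    transmit-receive pairs of the array.)"""
    ascans = np.asarray(ascans, dtype=np.float64)  # shape (n_positions, n_samples)
    image = np.zeros((len(grid_z), len(grid_x)))
    n_samples = ascans.shape[1]
    for a, x0 in zip(ascans, positions):
        for j, x in enumerate(grid_x):
            for i, z in enumerate(grid_z):
                t = 2.0 * np.hypot(x - x0, z) / c   # two-way travel time
                k = int(round(t / dt))               # nearest time sample
                if 0 <= k < n_samples:
                    image[i, j] += a[k]
    return image
```

Echoes from a real reflector add coherently at its location while adding incoherently elsewhere, which is the synthetic focusing effect both records exploit.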

  1. Micro-MRI-based image acquisition and processing system for assessing the response to therapeutic intervention

    NASA Astrophysics Data System (ADS)

    Vasilić, B.; Ladinsky, G. A.; Saha, P. K.; Wehrli, F. W.

    2006-03-01

Osteoporosis is the cause of over 1.5 million bone fractures annually. Most of these fractures occur in sites rich in trabecular bone, a complex network of bony struts and plates found throughout the skeleton. The three-dimensional structure of the trabecular bone network significantly determines mechanical strength and thus fracture resistance. Here we present a data acquisition and processing system that allows efficient noninvasive assessment of trabecular bone structure through a "virtual bone biopsy". High-resolution MR images are acquired, from which the trabecular bone network is extracted by estimating the partial bone occupancy of each voxel. A heuristic voxel subdivision increases the effective resolution of the bone volume fraction map and serves as a basis for subsequent analysis of topological and orientational parameters. Semi-automated registration and segmentation ensure selection of the same anatomical location in subjects imaged at different time points during treatment. It is shown, with excerpts from an ongoing clinical study of early post-menopausal women, that significant reduction in network connectivity occurs in the control group, while structural integrity is maintained in the hormone replacement group. The system described should be suited to large-scale studies designed to evaluate the efficacy of therapeutic intervention in subjects with metabolic bone disease.

  2. Image acquisitions, processing and analysis in the process of obtaining characteristics of horse navicular bone

    NASA Astrophysics Data System (ADS)

    Zaborowicz, M.; Włodarek, J.; Przybylak, A.; Przybył, K.; Wojcieszak, D.; Czekała, W.; Ludwiczak, A.; Boniecki, P.; Koszela, K.; Przybył, J.; Skwarcz, J.

    2015-07-01

The aim of this study was to investigate the possibility of using computer image analysis methods for the assessment and classification of the morphological variability and the state of health of the horse navicular bone. The assumption was that the classification could be based on information contained in two-dimensional digital images of the navicular bone together with information on the horse's health. The first step in the research was to define the classes of the analyzed bones and then to use computer image analysis methods to obtain characteristics from these images. These characteristics were correlated with data concerning the animal, such as: side of the hoof, grade of navicular syndrome (scale 0-3), type, sex, age, weight, information about lace, and information about heel. This paper presents an introduction to the study of the use of neural image analysis in the diagnosis of navicular bone syndrome. The prepared method can serve as an introduction to the study of a non-invasive way of assessing the condition of the horse navicular bone.

  3. Thermal Imaging of the Waccasassa Bay Preserve: Image Acquisition and Processing

    USGS Publications Warehouse

    Raabe, Ellen A.; Bialkowska-Jelinska, Elzbieta

    2010-01-01

Thermal infrared (TIR) imagery was acquired along coastal Levy County, Florida, in March 2009 with the goal of identifying groundwater-discharge locations in Waccasassa Bay Preserve State Park (WBPSP). Groundwater discharge is thermally distinct in winter, when the Floridan aquifer temperature, 71-72 degrees F, contrasts with the surrounding cold surface waters. Calibrated imagery was analyzed to assess temperature anomalies and related thermal traces. The influence of warm Gulf water and image artifacts on small features was successfully constrained by image evaluation in three separate zones: Creeks, Bay, and Gulf. Four levels of significant water-temperature anomalies were identified, and 488 sites of interest were mapped. Among the sites identified, at least 80 were determined to be associated with image artifacts and human activity, such as excavation pits and the Florida Barge Canal. Sites of interest were evaluated for geographic concentration and isolation. High site densities, indicating interconnectivity and prevailing flow, were located at Corrigan Reef, No. 4 Channel, Winzy Creek, Cow Creek, Withlacoochee River, and at excavation sites. In other areas, low to moderate site density indicates the presence of independent vents and unique flow paths. A directional distribution assessment of natural seep features produced a northwest trend closely matching the strike direction of regional faults. Naturally occurring seeps were located in karst ponds and tidal creeks, and several submerged sites were detected in Waccasassa River and Bay, representing the first documentation of submarine vents in the Waccasassa region. Drought conditions throughout the region placed constraints on positive feature identification. Low discharge or displacement by landward movement of saltwater may have reduced or reversed flow during this season. Approximately two-thirds of the seep locations in the overlap between the 2009 and 2005 TIR night imagery were positively re-identified in 2009.

  4. A CCD/CMOS process for integrated image acquisition and early vision signal processing

    NASA Astrophysics Data System (ADS)

    Keast, Craig L.; Sodini, Charles G.

The development of a technology which integrates a four-phase, buried-channel CCD in an existing 1.75 micron CMOS process is described. The four-phase clock is employed in the integrated early-vision system to minimize process complexity. Burying the channel minimizes signal corruption and enhances lateral fringing fields. The CMOS process enhancements for the CCD are described, highlighting a new double-poly process and the buried channel, and the integration is outlined. The functionality and transfer efficiency of the process enhancement were appraised by measuring CCD shift registers at 100 kHz. CMOS measurement results are presented, including threshold voltages, poly-to-poly capacitor voltage and temperature coefficients, and dark current. A CCD/CMOS processor that combines smoothing and segmentation operations is described. The integrated CCD/CMOS process functions correctly owing to the enhancement-compatible design of the CMOS process and the consistent use of baseline process steps in the CCD modules.

  5. Image gathering, coding, and processing: End-to-end optimization for efficient and robust acquisition of visual information

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.

    1990-01-01

    Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.
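The Laplacian pyramid mentioned above can be sketched in a few lines of NumPy. This is a generic illustration of the multi-resolution decomposition, not the authors' focal-plane implementation; the Gaussian kernel width, number of levels, and 2x sampling scheme are all illustrative assumptions.

```python
import numpy as np

def gaussian_blur(img, sigma=1.0):
    # Separable Gaussian blur with a small 1-D kernel applied to rows and columns.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, out)

def down(img):
    # Low-pass filter, then subsample by 2 in each direction.
    return gaussian_blur(img)[::2, ::2]

def up(img, shape):
    # Upsample by pixel replication to the target shape, then smooth.
    big = np.kron(img, np.ones((2, 2)))[:shape[0], :shape[1]]
    return gaussian_blur(big)

def laplacian_pyramid(img, levels=3):
    """Decompose an image into band-pass (detail) levels plus a low-pass residual."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        small = down(cur)
        pyr.append(cur - up(small, cur.shape))  # band-pass detail at this scale
        cur = small
    pyr.append(cur)  # low-pass residual
    return pyr

def reconstruct(pyr):
    """Invert the decomposition: upsample the residual and add back each detail level."""
    cur = pyr[-1]
    for detail in reversed(pyr[:-1]):
        cur = detail + up(cur, detail.shape)
    return cur
```

Because each detail level stores exactly what the upsampled coarse level loses, the reconstruction is exact up to floating-point rounding.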

  6. Automated ship image acquisition

    NASA Astrophysics Data System (ADS)

    Hammond, T. R.

    2008-04-01

The experimental Automated Ship Image Acquisition System (ASIA) collects high-resolution ship photographs at a shore-based laboratory, with minimal human intervention. The system uses Automatic Identification System (AIS) data to direct a high-resolution SLR digital camera to ship targets and to identify the ships in the resulting photographs. The photo database is then searchable using the rich data fields from AIS, which include the name, type, call sign and various vessel identification numbers. The high-resolution images from ASIA are intended to provide information that can corroborate AIS reports (e.g., extract identification from the name on the hull) or provide information that has been omitted from the AIS reports (e.g., missing or incorrect hull dimensions, cargo, etc.). Once assembled into a searchable image database, the images can be used for a wide variety of marine safety and security applications. This paper documents the author's experience with the practicality of composing photographs based on AIS reports alone, describing a number of ways in which this can go wrong, from errors in the AIS reports, to fixed and mobile obstructions, to multiple ships in the shot. The frequency with which various errors occurred in automatically composed photographs collected in Halifax harbour in wintertime was determined by manual examination of the images. 45% of the images examined were considered of a quality sufficient to read identification markings, numbers and text off the entire ship. One of the main technical challenges for ASIA lies in automatically differentiating good and bad photographs, so that few bad ones are shown to human users. Initial attempts at automatic photo rating showed 75% agreement with manual assessments.

  7. Model-based estimation of breast percent density in raw and processed full-field digital mammography images from image-acquisition physics and patient-image characteristics

    NASA Astrophysics Data System (ADS)

    Keller, Brad M.; Nathan, Diane L.; Conant, Emily F.; Kontos, Despina

    2012-03-01

    Breast percent density (PD%), as measured mammographically, is one of the strongest known risk factors for breast cancer. While the majority of studies to date have focused on PD% assessment from digitized film mammograms, digital mammography (DM) is becoming increasingly common, and allows for direct PD% assessment at the time of imaging. This work investigates the accuracy of a generalized linear model-based (GLM) estimation of PD% from raw and postprocessed digital mammograms, utilizing image acquisition physics, patient characteristics and gray-level intensity features of the specific image. The model is trained in a leave-one-woman-out fashion on a series of 81 cases for which bilateral, mediolateral-oblique DM images were available in both raw and post-processed format. Baseline continuous and categorical density estimates were provided by a trained breast-imaging radiologist. Regression analysis is performed and Pearson's correlation, r, and Cohen's kappa, κ, are computed. The GLM PD% estimation model performed well on both processed (r=0.89, p<0.001) and raw (r=0.75, p<0.001) images. Model agreement with radiologist assigned density categories was also high for processed (κ=0.79, p<0.001) and raw (κ=0.76, p<0.001) images. Model-based prediction of breast PD% could allow for a reproducible estimation of breast density, providing a rapid risk assessment tool for clinical practice.
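The leave-one-woman-out validation scheme described above can be illustrated with a plain linear model. Everything below is synthetic: the feature columns merely stand in for the paper's acquisition-physics, patient, and grey-level features, and the coefficients and noise are invented; only the cross-validation structure and the Pearson-correlation check follow the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 81  # number of cases, matching the study size

# Hypothetical per-image features (all values synthetic).
X = np.column_stack([
    rng.uniform(26, 32, n),    # tube voltage (kVp) - acquisition physics
    rng.uniform(60, 180, n),   # exposure (mAs) - acquisition physics
    rng.uniform(40, 75, n),    # patient age
    rng.normal(0.5, 0.1, n),   # mean normalised grey level
    rng.normal(0.2, 0.05, n),  # grey-level spread proxy
])
true_w = np.array([1.5, -0.05, -0.2, 40.0, 10.0])
y = X @ true_w + rng.normal(0, 1.0, n)  # stand-in for radiologist PD% readings

def loo_predictions(X, y):
    """Leave-one-out: refit the linear model without case i, then predict case i."""
    Xd = np.column_stack([np.ones(len(y)), X])  # add intercept column
    preds = np.empty(len(y))
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        w, *_ = np.linalg.lstsq(Xd[mask], y[mask], rcond=None)
        preds[i] = Xd[i] @ w
    return preds

preds = loo_predictions(X, y)
r = np.corrcoef(preds, y)[0, 1]  # Pearson correlation against the reference values
```

A categorical agreement statistic such as Cohen's kappa would be computed the same way, after binning the continuous predictions into density categories.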

  8. Hardware acceleration of lucky-region fusion (LRF) algorithm for image acquisition and processing

    NASA Astrophysics Data System (ADS)

    Maignan, William; Koeplinger, David; Carhart, Gary W.; Aubailly, Mathieu; Kiamilev, Fouad; Liu, J. Jiang

    2013-05-01

    "Lucky-region fusion" (LRF) is an image processing technique that has proven successful in enhancing the quality of images distorted by atmospheric turbulence. The LRF algorithm extracts sharp regions of an image obtained from a series of short exposure frames, and "fuses" them into a final image with improved quality. In previous research, the LRF algorithm had been implemented on a PC using a compiled programming language. However, the PC usually does not have sufficient processing power to handle real-time extraction, processing and reduction required when the LRF algorithm is applied not to single picture images but rather to real-time video from fast, high-resolution image sensors. This paper describes a hardware implementation of the LRF algorithm on a Virtex 6 field programmable gate array (FPGA) to achieve real-time video processing. The novelty in our approach is the creation of a "black box" LRF video processing system with a standard camera link input, a user controller interface, and a standard camera link output.
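The core of the LRF idea can be sketched as a tile-based selection: for each local region, keep the pixels from whichever short-exposure frame is locally sharpest. This is a simplified software sketch, not the authors' FPGA pipeline; the gradient-energy sharpness metric and the fixed tile size are assumptions, and a practical implementation would blend tile boundaries rather than copy them hard.

```python
import numpy as np

def sharpness_map(frame, box=8):
    """Local sharpness: mean gradient-magnitude energy over box x box tiles."""
    gy, gx = np.gradient(frame.astype(float))
    energy = gx**2 + gy**2
    m = np.zeros((frame.shape[0] // box, frame.shape[1] // box))
    for i in range(m.shape[0]):
        for j in range(m.shape[1]):
            m[i, j] = energy[i*box:(i+1)*box, j*box:(j+1)*box].mean()
    return m

def lucky_region_fusion(frames, box=8):
    """Fuse each tile from whichever short-exposure frame is locally sharpest."""
    maps = np.stack([sharpness_map(f, box) for f in frames])
    best = maps.argmax(axis=0)  # winning frame index per tile
    fused = np.zeros_like(frames[0], dtype=float)
    for i in range(best.shape[0]):
        for j in range(best.shape[1]):
            k = best[i, j]
            fused[i*box:(i+1)*box, j*box:(j+1)*box] = \
                frames[k][i*box:(i+1)*box, j*box:(j+1)*box]
    return fused
```

On an FPGA the same per-tile comparison runs as a streaming pipeline, which is what makes real-time video rates achievable.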

  9. Colony image acquisition and segmentation

    NASA Astrophysics Data System (ADS)

    Wang, W. X.

    2007-12-01

For counting of both colonies and plaques, there is a large number of applications, including food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, sterility testing, AMES testing, pharmaceuticals, paints, sterile fluids and fungal contamination. Recently, many researchers and developers have worked on systems of this kind. Investigation shows that some existing systems have problems, chief among them image acquisition and image segmentation. In order to acquire colony images of good quality, an illumination box was constructed that includes both front lighting and back lighting, selectable by users according to the properties of the colony dishes. With the illumination box, lighting is uniform and the colony dish can be placed in the same position every time, which simplifies image processing. The developed colony image segmentation algorithm consists of three sub-algorithms: (1) image classification; (2) image processing; and (3) colony delineation. The colony delineation algorithm mainly contains procedures based on grey-level similarity, boundary tracing, shape information and colony exclusion. In addition, a number of algorithms were developed for colony analysis. The system has been tested with satisfactory results.
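The grey-level-similarity stage of a colony delineation pipeline is commonly a threshold followed by connected-component labelling, which also yields the colony count. The sketch below is a generic illustration of that step under those assumptions, not the paper's algorithm, which additionally uses boundary tracing and shape information.

```python
import numpy as np
from collections import deque

def segment_colonies(img, thresh):
    """Grey-level thresholding followed by 4-connected component labelling.

    Returns a label image (0 = background) and the number of components found.
    """
    mask = img > thresh
    labels = np.zeros(img.shape, dtype=int)
    current = 0
    for si in range(img.shape[0]):
        for sj in range(img.shape[1]):
            if mask[si, sj] and labels[si, sj] == 0:
                current += 1                      # start a new colony label
                q = deque([(si, sj)])
                labels[si, sj] = current
                while q:                          # breadth-first flood fill
                    i, j = q.popleft()
                    for ni, nj in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
                        if (0 <= ni < img.shape[0] and 0 <= nj < img.shape[1]
                                and mask[ni, nj] and labels[ni, nj] == 0):
                            labels[ni, nj] = current
                            q.append((ni, nj))
    return labels, current
```

With back lighting, colonies appear dark on a bright field, so the comparison would be inverted; uniform illumination is what makes a single global threshold workable.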

  10. Acquisition and Analysis of Dynamic Responses of a Historic Pedestrian Bridge using Video Image Processing

    NASA Astrophysics Data System (ADS)

    O'Byrne, Michael; Ghosh, Bidisha; Schoefs, Franck; O'Donnell, Deirdre; Wright, Robert; Pakrashi, Vikram

    2015-07-01

Video-based tracking is capable of analysing bridge vibrations that are characterised by large amplitudes and low frequencies. This paper presents the use of video images and associated image processing techniques to obtain the dynamic response of a pedestrian suspension bridge in Cork, Ireland. This historic structure is one of the four suspension bridges in Ireland and is notable for its dynamic nature. A video camera was mounted on the river bank and the dynamic responses of the bridge were measured from the video images. The dynamic response is assessed without the need for a reflector on the bridge and in the presence of various forms of luminous complexity in the video image scenes. Vertical deformations of the bridge were measured in this regard. The video image tracking for the measurement of the dynamic responses of the bridge was based on correlating patches in time-lagged scenes in video images, utilising a zero-mean normalised cross-correlation (ZNCC) metric. The bridge was excited by designed pedestrian movement and by individual cyclists traversing the bridge. The time series data of dynamic displacement responses of the bridge were analysed to obtain the frequency domain response. Frequencies obtained from video analysis were checked against accelerometer data from the bridge obtained while carrying out the same set of experiments used for video image based recognition.
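Patch tracking with a ZNCC metric can be sketched directly: compare a template patch from one frame against shifted candidate windows in the next frame and keep the best-scoring displacement. This is a minimal brute-force illustration of the metric, assuming a small integer search window; it is not the authors' tracking code, which would add sub-pixel refinement.

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalised cross-correlation between two equal-size patches."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a**2).sum() * (b**2).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def track_patch(prev, curr, top, left, size, search=5):
    """Find the integer-pixel displacement of a patch between consecutive frames."""
    template = prev[top:top+size, left:left+size]
    best, best_dy, best_dx = -2.0, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            t, l = top + dy, left + dx
            if t < 0 or l < 0 or t+size > curr.shape[0] or l+size > curr.shape[1]:
                continue  # candidate window falls outside the frame
            score = zncc(template, curr[t:t+size, l:l+size])
            if score > best:
                best, best_dy, best_dx = score, dy, dx
    return best_dy, best_dx, best
```

Because ZNCC subtracts the patch means and normalises by their energies, the score is insensitive to uniform brightness and contrast changes, which is why it copes with the varying illumination mentioned above.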

  12. Optoelectronic/image processing module for enhanced fringe pattern acquisition and analysis

    NASA Astrophysics Data System (ADS)

    Dymny, Grzegorz; Kujawinska, Malgorzata

    1996-08-01

The paper introduces an optoelectronic/image processing module, OIMP, which enables more convenient implementation of full-field optical testing methods in industry. OIMP consists of two miniature CCD cameras and an optical wavefront modification system which recombines the beams produced by the opto-mechanical measurement system and images fringe patterns on the CCD matrices. The module makes possible simultaneous registration of three monochromatic images as the R, G, B components of a colour video signal by means of a single frame grabber or by VCR on video tape. This enables convenient and inexpensive storage of large quantities of data, which may be analyzed by the spatial-carrier phase-shifting method of automatic fringe pattern analysis. The usefulness of OIMP is shown by two examples: simultaneous analysis of u and v in-plane displacements in a grating interferometry system, and complex shape determination by fringe projection systems.

  13. Automated ground data acquisition and processing system for calibration and performance assessment of the EO-1 Advanced Land Imager

    NASA Astrophysics Data System (ADS)

    Viggh, Herbert E. M.; Mendenhall, Jeffrey A.; Sayer, Ronald W.; Stuart, J. S.; Gibbs, Margaret D.

    1999-09-01

The calibration and performance assessment of the Earth Observing-1 (EO-1) Advanced Land Imager (ALI) required a ground data system for acquiring and processing ALI data. In order to meet tight schedule and budget requirements, an automated system was developed that could be run by a single operator. This paper describes the overall system and the individual Electrical Ground Support Equipment (EGSE) and computer components used. The ALI Calibration Control Node (ACCN) serves as a test executive with a single graphical user interface to the system, controlling calibration equipment and issuing data acquisition and processing requests to the other EGSE and computers. EGSE1, a custom data acquisition system, collects ALI science data and also passes ALI commanding and housekeeping telemetry collection requests to EGSE2 and EGSE3, which are implemented on an ASIST workstation. The performance assessment machine stores and processes collected ALI data, automatically displaying quick-look processing results. The custom communications protocol developed to interface these various machines and to automate their interactions is described, including the various modes of operation needed to support spatial, radiometric, spectral, and functional calibration and performance assessment of the ALI.

  14. SNAP: Simulating New Acquisition Processes

    NASA Technical Reports Server (NTRS)

    Alfeld, Louis E.

    1997-01-01

Simulation models of acquisition processes range in scope from isolated applications to the 'Big Picture' captured by SNAP technology. SNAP integrates a family of models to portray the full scope of acquisition planning and management activities, including budgeting, scheduling, testing and risk analysis. SNAP replicates the dynamic management processes that underlie design, production and life-cycle support. SNAP provides the unique 'Big Picture' capability needed to simulate the entire acquisition process and explore the 'what-if' tradeoffs and consequences of alternative policies and decisions. Comparison of cost, schedule and performance tradeoffs helps managers choose the lowest-risk, highest-payoff option at each step in the acquisition process.

  15. Graphics Processing Unit (GPU) implementation of image processing algorithms to improve system performance of the Control, Acquisition, Processing, and Image Display System (CAPIDS) of the Micro-Angiographic Fluoroscope (MAF).

    PubMed

    Vasan, S N Swetadri; Ionita, Ciprian N; Titus, A H; Cartwright, A N; Bednarek, D R; Rudin, S

    2012-02-23

    We present the image processing upgrades implemented on a Graphics Processing Unit (GPU) in the Control, Acquisition, Processing, and Image Display System (CAPIDS) for the custom Micro-Angiographic Fluoroscope (MAF) detector. Most of the image processing currently implemented in the CAPIDS system is pixel independent; that is, the operation on each pixel is the same and the operation on one does not depend upon the result from the operation on the other, allowing the entire image to be processed in parallel. GPU hardware was developed for this kind of massive parallel processing implementation. Thus for an algorithm which has a high amount of parallelism, a GPU implementation is much faster than a CPU implementation. The image processing algorithm upgrades implemented on the CAPIDS system include flat field correction, temporal filtering, image subtraction, roadmap mask generation and display window and leveling. A comparison between the previous and the upgraded version of CAPIDS has been presented, to demonstrate how the improvement is achieved. By performing the image processing on a GPU, significant improvements (with respect to timing or frame rate) have been achieved, including stable operation of the system at 30 fps during a fluoroscopy run, a DSA run, a roadmap procedure and automatic image windowing and leveling during each frame. PMID:24027619
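The pixel-independent operations listed above are exactly the kind that map well to massively parallel GPU hardware: each output pixel depends only on the corresponding pixels of the inputs. The NumPy sketch below shows that elementwise structure (vectorized on the CPU here as a stand-in for GPU kernels); the function names and the recursive form of the temporal filter are illustrative assumptions, not the CAPIDS code.

```python
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Gain/offset correction: each output pixel depends only on the same
    pixel of raw, dark, and flat, so all pixels can be computed in parallel."""
    gain = flat - dark
    gain[gain == 0] = 1  # guard against division by zero in dead pixels
    return (raw - dark) / gain

def temporal_filter(prev_filtered, new_frame, alpha=0.25):
    """Recursive temporal (IIR) noise filter -- again purely elementwise."""
    return alpha * new_frame + (1 - alpha) * prev_filtered

def subtract_mask(frame, mask):
    """Image subtraction for DSA / roadmap display: elementwise difference."""
    return frame - mask
```

Each of these maps naturally onto one GPU thread per pixel, which is why a GPU port of such a chain can sustain full frame rates where a serial CPU loop cannot.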

  18. Taking the perfect nuclear image: quality control, acquisition, and processing techniques for cardiac SPECT, PET, and hybrid imaging.

    PubMed

    Case, James A; Bateman, Timothy M

    2013-10-01

Nuclear cardiology for the past 40 years has distinguished itself in its ability to non-invasively assess regional myocardial blood flow and identify obstructive coronary disease. This has led to advances in managing the diagnosis, risk stratification, and prognostic assessment of cardiac patients. These advances have all been predicated on the collection of high-quality nuclear image data. National and international professional societies have established guidelines for nuclear laboratories to maintain high-quality nuclear cardiology services. In addition, laboratory accreditation has further advanced the goal of establishing high-quality standards for the provision of nuclear cardiology services. This article summarizes the principles of nuclear cardiology single photon emission computed tomography (SPECT) and positron emission tomography (PET) imaging and techniques for maintaining quality, from the calibration of imaging equipment to post-processing techniques. It also explores the quality considerations of newer technologies such as cadmium zinc telluride (CZT)-based SPECT systems and absolute blood flow measurement techniques using PET. PMID:23868070

  19. Real-time multilevel process monitoring and control of CR image acquisition and preprocessing for PACS and ICU

    NASA Astrophysics Data System (ADS)

    Zhang, Jianguo; Wong, Stephen T. C.; Andriole, Katherine P.; Wong, Albert W. K.; Huang, H. K.

    1996-05-01

The purpose of this paper is to present a control theory and a fault-tolerance algorithm developed for real-time monitoring and control of the acquisition and preprocessing of computed radiographs for PACS and Intensive Care Unit operations. This monitoring and control system uses an event-driven, multilevel processing approach to remove computational bottlenecks and to improve system reliability. Its computational performance and processing reliability are evaluated and compared with those of the traditional, single-level processing approach.

  20. Split-screen display system and standardized methods for ultrasound image acquisition and multi-frame data processing

    NASA Technical Reports Server (NTRS)

    Selzer, Robert H. (Inventor); Hodis, Howard N. (Inventor)

    2011-01-01

    A standardized acquisition methodology assists operators to accurately replicate high resolution B-mode ultrasound images obtained over several spaced-apart examinations utilizing a split-screen display in which the arterial ultrasound image from an earlier examination is displayed on one side of the screen while a real-time "live" ultrasound image from a current examination is displayed next to the earlier image on the opposite side of the screen. By viewing both images, whether simultaneously or alternately, while manually adjusting the ultrasound transducer, an operator is able to bring into view the real-time image that best matches a selected image from the earlier ultrasound examination. Utilizing this methodology, dynamic material properties of arterial structures, such as IMT and diameter, are measured in a standard region over successive image frames. Each frame of the sequence has its echo edge boundaries automatically determined by using the immediately prior frame's true echo edge coordinates as initial boundary conditions. Computerized echo edge recognition and tracking over multiple successive image frames enhances measurement of arterial diameter and IMT and allows for improved vascular dimension measurements, including vascular stiffness and IMT determinations.
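The frame-to-frame edge propagation described above, using the previous frame's echo-edge coordinates as initial boundary conditions, can be sketched as a constrained gradient search: for each image column, the new edge row is sought only within a small window around the previous frame's edge. This is a simplified one-dimensional illustration, not the patented method; the gradient criterion and window size are assumptions.

```python
import numpy as np

def track_edge(frame, prev_edge, search=3):
    """For each image column, refine the echo-edge row by taking the strongest
    vertical intensity gradient within +/-search rows of the previous frame's edge.

    frame:     2-D grey-level image (rows = depth, cols = lateral position)
    prev_edge: per-column edge row indices from the previous frame
    """
    grad = np.abs(np.diff(frame.astype(float), axis=0))  # vertical gradient
    edge = np.empty_like(prev_edge)
    for col, row0 in enumerate(prev_edge):
        lo = max(row0 - search, 0)
        hi = min(row0 + search + 1, grad.shape[0])
        edge[col] = lo + int(np.argmax(grad[lo:hi, col]))  # strongest echo edge
    return edge
```

Applying this over successive frames yields the per-frame boundary sequence from which dynamic diameter and IMT measurements can be derived.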

  1. Data acquisition and processing

    NASA Astrophysics Data System (ADS)

    Tsuda, Toshitaka

    1989-10-01

    Fundamental methods of signal processing used in normal mesosphere stratosphere troposphere (MST) radar observations are described. Complex time series of received signals obtained in each range gate are converted into Doppler spectra, from which the mean Doppler shift, spectral width and signal-to-noise ratio (SNR) are estimated. These spectral parameters are further utilized to study characteristics of scatterers and atmospheric motions.
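The spectral-parameter estimation described above can be sketched as a periodogram followed by its first two moments. This is a textbook-style illustration under simple assumptions (single FFT, median noise floor), not an operational MST radar pipeline, which would average many spectra and apply more careful noise estimation.

```python
import numpy as np

def doppler_moments(iq, fs):
    """Estimate mean Doppler shift, spectral width, and SNR from the complex
    (I/Q) time series of one range gate, via the periodogram's moments."""
    n = len(iq)
    spec = np.abs(np.fft.fftshift(np.fft.fft(iq)))**2 / n  # Doppler spectrum
    freqs = np.fft.fftshift(np.fft.fftfreq(n, d=1.0 / fs))
    noise = np.median(spec)                  # crude noise-floor estimate
    signal = np.clip(spec - noise, 0, None)  # noise-subtracted spectrum
    power = signal.sum()
    mean_shift = (freqs * signal).sum() / power            # first moment
    width = np.sqrt(((freqs - mean_shift)**2 * signal).sum() / power)  # second
    snr = power / (noise * n)
    return mean_shift, width, snr
```

The mean shift gives the radial velocity of the scatterers, the width reflects turbulence and beam broadening, and the SNR characterises scatterer strength.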

  2. New field programmable gate array-based image-oriented acquisition and real-time processing applied to plasma facing component thermal monitoring

    SciTech Connect

    Martin, V.; Dunand, G.; Moncada, V.; Jouve, M.; Travere, J.-M.

    2010-10-15

During operation of present fusion devices, the plasma facing components (PFCs) are exposed to high heat fluxes. Understanding and preventing overheating of these components during long pulse discharges is a crucial safety issue for future devices like ITER. Infrared digital cameras interfaced with complex optical systems have become a routine diagnostic to measure surface temperatures in many magnetic fusion devices. Due to the complexity of the observed scenes and the large amount of data produced, the use of high-performance computing hardware for real-time image processing is mandatory to avoid PFC damage. At Tore Supra, we have recently made a major upgrade of our real-time infrared image acquisition and processing board by the use of a new field programmable gate array (FPGA) optimized for image processing. This paper describes the new possibilities offered by this board in terms of image calibration and image interpretation (abnormal thermal event detection) compared to the previous system.

  3. Integral imaging acquisition and processing for visualization of photon counting images in the mid-wave infrared range

    NASA Astrophysics Data System (ADS)

    Latorre-Carmona, P.; Pla, F.; Javidi, B.

    2016-06-01

In this paper, we present an overview of our previously published work on the application of the maximum likelihood (ML) reconstruction method to integral images acquired with a mid-wave infrared detector on two different types of scenes: one consisting of a road, a group of trees and a vehicle just behind one of the trees (with the car at a distance of more than 200 m from the camera), and another consisting of a view of the Wright Air Force Base airfield, with several hangars and various other types of installations (including warehouses) at distances ranging from 600 m to more than 2 km. Dark current noise is considered, taking into account the particular features this type of sensor has. Results show that this methodology makes it possible to improve visualization in the photon counting domain.

  4. Effective GPR Data Acquisition and Imaging

    NASA Astrophysics Data System (ADS)

    Sato, M.

    2014-12-01

We have demonstrated that dense GPR data acquisition, typically with an antenna step increment of less than 1/10 wavelength, can provide clear 3-dimensional subsurface images, and we have created 3D GPR images from such data. Now we are interested in developing GPR survey methodologies which require less data acquisition time. In order to speed up the data acquisition, we are studying efficient antenna positioning for GPR survey and 3-D imaging algorithms. For example, we have developed a dual sensor, "ALIS", which combines GPR with a metal detector (electromagnetic induction sensor) for humanitarian demining and acquires GPR data by hand scanning. ALIS is a pulse radar system with a frequency range of 0.5-3 GHz. The sensor position tracking system has an accuracy of about a few cm, and the data spacing is typically more than a few cm, but it can visualize mines which have a diameter of about 8 cm. Two ALIS systems have been deployed by the Cambodian Mine Action Center (CMAC) in mine fields in Cambodia since 2009 and have detected more than 80 buried land mines. We are now developing signal processing for an array-type GPR, "Yakumo". Yakumo is an SFCW radar system and a multi-static radar, consisting of 8 transmitter antennas and 8 receiver antennas. We have demonstrated that multi-static data acquisition is not only efficient for data acquisition but can at the same time increase the quality of GPR images. Archaeological surveys by Yakumo over large areas, more than 100 m by 100 m, have been conducted to promote recovery from the tsunami that struck East Japan in March 2011. With a conventional GPR system, we are developing an interpolation method for radar signals, and have demonstrated that it can increase the quality of the radar images without increasing the number of data acquisition points. When we acquire a one-dimensional GPR profile along a survey line, we can acquire relatively high density data sets. However, when we need to relocate the data sets along a "virtual" survey line, for example a

  5. Acquisition and applications of 3D images

    NASA Astrophysics Data System (ADS)

    Sterian, Paul; Mocanu, Elena

    2007-08-01

The moiré fringe method and its analysis, up to medical and entertainment applications, are discussed in this paper. We describe the procedure of capturing 3D images with an Inspeck camera, a real-time 3D shape acquisition system based on structured light techniques. The method is a high-resolution one. After processing the images by computer, we can use the data for creating fashionable objects by engraving them with a Q-switched Nd:YAG laser. In the medical field we mention plastic surgery and the replacement of X-rays, especially in pediatric use.

  6. The analysis of image acquisition in LabVIEW

    NASA Astrophysics Data System (ADS)

    Xu, Wuni; Zhong, Lanxiang

    2011-06-01

In this paper, four methods of image acquisition in LabVIEW are described, and their realization principles and procedures, in combination with different hardware architectures, are illustrated in a virtual instrument laboratory. Experimental results show that the methods of image acquisition in LabVIEW have many advantages, such as easier configuration, lower complexity and stronger practicability than in VB and C++. The methods are thus well suited to laying the foundation for research in image processing, machine vision and pattern recognition.

  7. Optical image acquisition system for colony analysis

    NASA Astrophysics Data System (ADS)

    Wang, Weixing; Jin, Wenbiao

    2006-02-01

For counting of both colonies and plaques, there is a large number of applications, including food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, sterility testing, AMES testing, pharmaceuticals, paints, sterile fluids and fungal contamination. Recently, many researchers and developers have worked on systems of this kind. Investigation shows that some existing systems have problems, as they are products of a relatively new technology. One of the main problems is image acquisition. In order to acquire colony images of good quality, an illumination box was constructed that includes both front lighting and back lighting, selectable by users according to the properties of the colony dishes. With the illumination box, lighting is uniform and the colony dish can be placed in the same position every time, which simplifies image processing. A digital camera in the top of the box is connected to a PC with a USB cable, and all the camera functions are controlled by the computer.

  8. Image Acquisition in Real Time

    NASA Technical Reports Server (NTRS)

    2003-01-01

    In 1995, Carlos Jorquera left NASA's Jet Propulsion Laboratory (JPL) to focus on closing the growing gap between high-performance cameras and the software required to capture and process the resulting digital images. Since his departure from NASA, Jorquera's efforts have not only satisfied private industry's appetite for faster, more flexible, and more capable software applications, but have blossomed into a successful entrepreneurship that is making its mark with improvements in fields such as medicine, weather forecasting, and X-ray inspection. Formerly a JPL engineer who constructed imaging systems for spacecraft and ground-based astronomy projects, Jorquera is the founder and president of the three-person firm Boulder Imaging Inc., based in Louisville, Colorado. Joining Jorquera to round out the Boulder Imaging staff are Chief Operations Engineer Susan Downey, who also gained experience at JPL working on space-bound projects including Galileo and the Hubble Space Telescope, and Vice President of Engineering and Machine Vision Specialist Jie Zhu Kulbida, who has extensive industrial and research and development experience in the private sector.

  9. Optimisation of acquisition time in bioluminescence imaging

    NASA Astrophysics Data System (ADS)

    Taylor, Shelley L.; Mason, Suzannah K. G.; Glinton, Sophie; Cobbold, Mark; Styles, Iain B.; Dehghani, Hamid

    2015-03-01

    Decreasing the acquisition time in bioluminescence imaging (BLI) and bioluminescence tomography (BLT) will enable animals to be imaged within the window of stable emission of the bioluminescent source, a higher imaging throughput and minimisation of the time which an animal is anaesthetised. This work investigates, through simulation using a heterogeneous mouse model, two methods of decreasing acquisition time: 1. Imaging at fewer wavelengths (a reduction from five to three); and 2. Increasing the bandwidth of filters used for imaging. The results indicate that both methods are viable ways of decreasing the acquisition time without a loss in quantitative accuracy. Importantly, when choosing imaging wavelengths, the spectral attenuation of tissue and emission spectrum of the source must be considered, in order to choose wavelengths at which a high signal can be achieved. Additionally, when increasing the bandwidth of the filters used for imaging, the bandwidth must be accounted for in the reconstruction algorithm.
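    The trade-off the abstract describes can be illustrated numerically. The sketch below is not the authors' simulation: it assumes a Gaussian emission spectrum and ideal top-hat filters, and shows that three wider, well-placed bands collect more photons per acquisition than five narrow ones, so the same total signal needs proportionally less integration time.

```python
import numpy as np

# Hypothetical emission spectrum: Gaussian peak near 600 nm (assumed, for illustration).
wavelengths = np.arange(500.0, 701.0, 1.0)                  # nm, 1 nm sampling
emission = np.exp(-0.5 * ((wavelengths - 600.0) / 30.0) ** 2)

def band_signal(center_nm, bandwidth_nm):
    """Relative photon count through an ideal top-hat filter."""
    in_band = np.abs(wavelengths - center_nm) <= bandwidth_nm / 2.0
    return emission[in_band].sum()

five_narrow = sum(band_signal(c, 10) for c in (560, 580, 600, 620, 640))
three_wide = sum(band_signal(c, 30) for c in (580, 600, 620))

# Wider bands near the emission peak collect more light per acquisition,
# so reaching the same photon count takes less exposure time.
assert three_wide > five_narrow
```

    The choice of centers matters as much as the bandwidth: shifting the narrow bands away from the emission peak reduces their signal further, which is the abstract's point about considering the source spectrum when picking imaging wavelengths.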

  10. Image Processing

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Electronic Imagery, Inc.'s ImageScale Plus software, developed through a Small Business Innovation Research (SBIR) contract with Kennedy Space Flight Center for use on space shuttle Orbiter in 1991, enables astronauts to conduct image processing, prepare electronic still camera images in orbit, display them and downlink images to ground based scientists for evaluation. Electronic Imagery, Inc.'s ImageCount, a spin-off product of ImageScale Plus, is used to count trees in Florida orange groves. Other applications include x-ray and MRI imagery, textile designs and special effects for movies. As of 1/28/98, company could not be located, therefore contact/product information is no longer valid.

  11. Multispectral imaging and image processing

    NASA Astrophysics Data System (ADS)

    Klein, Julie

    2014-02-01

    The color accuracy of conventional RGB cameras is not sufficient for many color-critical applications. One of these applications, namely the measurement of color defects in yarns, is why Prof. Til Aach and the Institute of Image Processing and Computer Vision (RWTH Aachen University, Germany) started off with multispectral imaging. The first acquisition device was a camera using a monochrome sensor and seven bandpass color filters positioned sequentially in front of it. The camera allowed sampling the visible wavelength range more accurately and reconstructing the spectra for each acquired image position. An overview will be given of several optical and imaging aspects of the multispectral camera that have been investigated. For instance, optical aberrations caused by the filters and camera lens deteriorate the quality of captured multispectral images. The different aberrations were analyzed thoroughly and compensated based on models of the optical elements and the imaging chain by utilizing image processing. With this compensation, geometrical distortions disappear and sharpness is enhanced, without reducing the color accuracy of the multispectral images. Strong foundations in multispectral imaging were laid, and a fruitful cooperation was initiated with Prof. Bernhard Hill. Current research topics, like stereo multispectral imaging and goniometric multispectral measurements, that are further explored with his expertise will also be presented in this work.
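    The filter-wheel measurement model lends itself to a small linear-algebra sketch: each of the seven filters yields one weighted integral of the scene reflectance, and the spectrum is recovered by regularized least squares. The Gaussian sensitivities and the smoothness prior below are assumptions for illustration, not the Aachen camera's actual characteristics.

```python
import numpy as np

wl = np.linspace(400.0, 700.0, 31)             # visible range, 10 nm sampling

# Hypothetical bandpass sensitivities of the 7 sequential filters (Gaussian, assumed).
centers = np.linspace(420.0, 680.0, 7)
S = np.exp(-0.5 * ((wl[None, :] - centers[:, None]) / 25.0) ** 2)   # 7 x 31

r_true = 0.5 + 0.3 * np.sin(wl / 60.0)         # smooth "true" reflectance spectrum
m = S @ r_true                                  # the 7 camera measurements

# Tikhonov-regularized reconstruction of the 31-point spectrum from 7 measurements.
lam = 1e-2
r_hat = np.linalg.solve(S.T @ S + lam * np.eye(len(wl)), S.T @ m)
```

    With only seven measurements the problem is underdetermined, so the regularizer selects the smoothest spectrum consistent with the data; the reconstruction is accurate where the filters have coverage and decays toward the edges of the sampled range.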

  12. Acoustic imaging systems (for robotic object acquisition)

    NASA Astrophysics Data System (ADS)

    Richardson, J. M.; Martin, J. F.; Marsh, K. A.; Schoenwald, J. S.

    1985-03-01

    The long-term objective of the effort is to establish successful approaches for 3D acoustic imaging of dense solid objects in air to provide the information required for acquisition and manipulation of these objects by a robotic system. The objective of this first year's work was to achieve and demonstrate the determination of the external geometry (shape) of such objects with a fixed sparse array of sensors, without the aid of geometrical models or extensive training procedures. Conventional approaches for acoustic imaging fall into two basic categories. The first category is used exclusively for dense solid objects. It involves echo-ranging from a large number of sensor positions, achieved either through the use of a larger array of transducers or through extensive physical scanning of a small array. This approach determines the distance to specular reflection points from each sensor position; with suitable processing an image can be inferred. The second category uses the full acoustic waveforms to provide an image, but is strictly applicable only to weak inhomogeneities. The most familiar example is medical imaging of the soft tissue portions of the body where the range of acoustic impedance is relatively small.

  13. Image acquisition in the Pi-of-the-Sky project

    NASA Astrophysics Data System (ADS)

    Jegier, M.; Nawrocki, K.; Poźniak, K.; Sokołowski, M.

    2006-10-01

    Modern astronomical image acquisition systems dedicated to sky surveys provide a large amount of data in a single measurement session. During one session lasting a few hours it is possible to collect as much as 100 GB of data, which must be transferred from the camera and processed. This paper presents some aspects of image acquisition in a sky survey system. It describes a dedicated USB Linux driver for the first version of the "Pi of The Sky" CCD camera (later versions also have an Ethernet interface) and the test program for the camera, together with a driver wrapper providing core device functionality. Finally, the paper describes an algorithm for matching several images based on image features, i.e. star positions and their brightness.
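    A minimal version of feature-based frame matching from star positions and brightness can be sketched as below. The rank-matching and median-offset strategy is an illustrative simplification, not the Pi of The Sky algorithm, and the catalogues are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical star list for frame A: columns (x, y, brightness).
stars_a = np.column_stack([rng.uniform(0, 2048, 50),
                           rng.uniform(0, 2048, 50),
                           rng.uniform(1.0, 100.0, 50)])

true_shift = np.array([12.3, -7.8])
stars_b = stars_a.copy()
stars_b[:, :2] += true_shift + rng.normal(0, 0.3, (50, 2))   # same stars, shifted frame

def match_frames(a, b, n=20):
    """Pair the n brightest stars of each frame by brightness rank,
    then take the median positional offset as the frame-to-frame shift."""
    ia = np.argsort(a[:, 2])[::-1][:n]
    ib = np.argsort(b[:, 2])[::-1][:n]
    return np.median(b[ib, :2] - a[ia, :2], axis=0)

shift = match_frames(stars_a, stars_b)
```

    The median makes the estimate robust to a few mismatched pairs; a production matcher would likely use rotation/scale-invariant star patterns rather than brightness rank alone.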

  14. A design of camera simulator for photoelectric image acquisition system

    NASA Astrophysics Data System (ADS)

    Cai, Guanghui; Liu, Wen; Zhang, Xin

    2015-02-01

    In the process of developing photoelectric image acquisition equipment, its function and performance need to be verified. In order to let the photoelectric device replay previously recorded image data during debugging and testing, a design scheme for a camera simulator is presented. In this system, with an FPGA as the control core, image data are saved to NAND flash through the USB 2.0 bus. Because the access rate of the NAND flash is too slow to meet the system's requirements, pipeline and high-bandwidth-bus techniques are applied in the design to improve the storage rate. The FPGA control logic reads the image data out of flash and outputs them separately over three different interfaces, Camera Link, LVDS and PAL, which can provide image data for debugging and algorithm validation of photoelectric image acquisition equipment. However, because the standard PAL image resolution is 720*576, which differs from the input image resolution, the image is output after resolution conversion. The experimental results demonstrate that the camera simulator outputs the three image-sequence formats correctly, and they can be captured and displayed by a frame grabber. The three-format image data can meet the test requirements of most equipment, shortening debugging time and improving test efficiency.
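    The resolution conversion mentioned for the PAL channel can be as simple as index-mapped (nearest-neighbour) resampling; the abstract does not state which interpolation the FPGA uses, so the sketch below assumes the simplest scheme.

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resolution conversion (e.g. sensor frame -> PAL 720x576)."""
    in_h, in_w = img.shape[:2]
    rows = np.arange(out_h) * in_h // out_h     # source row for each output row
    cols = np.arange(out_w) * in_w // out_w     # source column for each output column
    return img[rows[:, None], cols]

frame = np.random.default_rng(2).integers(0, 256, (1024, 1280), dtype=np.uint8)
pal = resize_nearest(frame, 576, 720)           # PAL active resolution
```

    Index-mapped resampling maps naturally to FPGA lookup logic because each output pixel reads exactly one source address; smoother results would need bilinear weights at the cost of multipliers.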

  15. Functional MRI Using Regularized Parallel Imaging Acquisition

    PubMed Central

    Lin, Fa-Hsuan; Huang, Teng-Yi; Chen, Nan-Kuei; Wang, Fu-Nien; Stufflebeam, Steven M.; Belliveau, John W.; Wald, Lawrence L.; Kwong, Kenneth K.

    2013-01-01

    Parallel MRI techniques reconstruct full-FOV images from undersampled k-space data by using the uncorrelated information from RF array coil elements. One disadvantage of parallel MRI is that the image signal-to-noise ratio (SNR) is degraded because of the reduced data samples and the spatially correlated nature of multiple RF receivers. Regularization has been proposed to mitigate the SNR loss originating from the latter. Since regularization requires a static prior, the dynamic contrast-to-noise ratio (CNR) in parallel MRI will be affected. In this paper we investigate the CNR of regularized sensitivity encoding (SENSE) acquisitions. We propose to implement regularized parallel MRI acquisitions in functional MRI (fMRI) experiments by incorporating the prior from combined segmented echo-planar imaging (EPI) acquisition into SENSE reconstructions. We investigated the impact of regularization on the CNR by performing parametric simulations at various BOLD contrasts, acceleration rates, and sizes of the active brain areas. As quantified by receiver operating characteristic (ROC) analysis, the simulations suggest that the detection power of SENSE fMRI can be improved by regularized reconstructions, compared to unregularized reconstructions. Human motor and visual fMRI data acquired at different field strengths and with different array coils also demonstrate that regularized SENSE improves the detection of functionally active brain regions. PMID:16032694
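    Regularized SENSE with a static prior has the closed form x̂ = (EᴴE + λI)⁻¹(Eᴴy + λx_prior), where E stacks the Fourier-encoded, coil-weighted samples. The 1-D toy below (four invented coil sensitivities, acceleration R = 2) is a sketch of that formula, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64                                        # 1-D "image" for simplicity
x_true = np.zeros(n)
x_true[20:28] = 1.0
x_true[40:44] = 0.6

# Hypothetical smooth sensitivities of a 4-element coil array (assumed shapes).
coils = np.stack([np.exp(-0.5 * ((np.arange(n) - c) / 20.0) ** 2)
                  for c in (0, 21, 42, 63)])

R = 2                                         # acceleration factor
F = np.fft.fft(np.eye(n)) / np.sqrt(n)        # DFT matrix
rows = np.arange(0, n, R)                     # retained k-space lines
E = np.vstack([F[rows] @ np.diag(c) for c in coils])     # encoding matrix (128 x 64)
noise = 0.01 * (rng.normal(size=E.shape[0]) + 1j * rng.normal(size=E.shape[0]))
y = E @ x_true + noise

# Static prior, e.g. from a separate low-resolution (segmented EPI) acquisition.
x_prior = np.convolve(x_true, np.ones(5) / 5, mode="same")
lam = 0.05
x_hat = np.linalg.solve(E.conj().T @ E + lam * np.eye(n),
                        E.conj().T @ y + lam * x_prior)
```

    The data term dominates where the coils condition the problem well; the prior mainly stabilizes poorly encoded pixels, which is also where it can bias dynamic contrast, the CNR effect the paper quantifies.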

  16. Digital image acquisition in in vivo confocal microscopy.

    PubMed

    Petroll, W M; Cavanagh, H D; Lemp, M A; Andrews, P M; Jester, J V

    1992-01-01

    A flexible system for the real-time acquisition of in vivo images has been developed. Images are generated using a tandem scanning confocal microscope interfaced to a low-light-level camera. The video signal from the camera is digitized and stored using a Gould image processing system with a real-time digital disk (RTDD). The RTDD can store up to 3200 512 x 512 pixel images at video rates (30 images/s). Images can be input directly from the camera during the study, or off-line from a Super VHS video recorder. Once a segment of experimental interest is digitized onto the RTDD, the user can interactively step through the images, average stable sequences, and identify candidates for further processing and analysis. Examples of how this system can be used to study the physiology of various organ systems in vivo are presented. PMID:1552573
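    Averaging stable frame sequences trades temporal resolution for noise: with N frames of uncorrelated noise, the residual noise standard deviation drops by about √N. A minimal numerical check on synthetic frames (not confocal data):

```python
import numpy as np

rng = np.random.default_rng(4)
truth = rng.uniform(50, 200, (64, 64))              # a stable scene

frames = truth + rng.normal(0, 10.0, (16, 64, 64))  # 16 noisy video frames
single_err = np.std(frames[0] - truth)              # noise of one frame (~10)
avg_err = np.std(frames.mean(axis=0) - truth)       # noise after averaging (~10/4)
```

    This is why the RTDD workflow of identifying a *stable* segment first matters: averaging frames with motion blurs structure instead of suppressing noise.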

  17. SU-C-18C-06: Radiation Dose Reduction in Body Interventional Radiology: Clinical Results Utilizing a New Imaging Acquisition and Processing Platform

    SciTech Connect

    Kohlbrenner, R; Kolli, KP; Taylor, A; Kohi, M; Fidelman, N; LaBerge, J; Kerlan, R; Gould, R

    2014-06-01

    Purpose: To quantify the patient radiation dose reduction achieved during transarterial chemoembolization (TACE) procedures performed in a body interventional radiology suite equipped with the Philips Allura Clarity imaging acquisition and processing platform, compared to TACE procedures performed in the same suite equipped with the Philips Allura Xper platform. Methods: Total fluoroscopy time, cumulative dose area product, and cumulative air kerma were recorded for the first 25 TACE procedures performed to treat hepatocellular carcinoma (HCC) in a Philips body interventional radiology suite equipped with Philips Allura Clarity. The same data were collected for the prior 85 TACE procedures performed to treat HCC in the same suite equipped with Philips Allura Xper. Mean values from these cohorts were compared using two-tailed t tests. Results: Following installation of the Philips Allura Clarity platform, a 42.8% reduction in mean cumulative dose area product (3033.2 versus 1733.6 mGy·cm², p < 0.0001) and a 31.2% reduction in mean cumulative air kerma (1445.4 versus 994.2 mGy, p < 0.001) were achieved compared to similar procedures performed in the same suite equipped with the Philips Allura Xper platform. Mean total fluoroscopy time was not significantly different between the two cohorts (1679.3 versus 1791.3 seconds, p = 0.41). Conclusion: This study demonstrates a significant patient radiation dose reduction during TACE procedures performed to treat HCC after a body interventional radiology suite was converted to the Philips Allura Clarity platform from the Philips Allura Xper platform. Future work will focus on evaluation of patient dose reduction in a larger cohort of patients across a broader range of procedures and in specific populations, including obese patients and pediatric patients, and comparison of image quality between the two platforms. Funding for this study was provided by Philips Healthcare, with 5% salary support provided to authors K. Pallav
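    The cohort comparison uses a two-tailed t test on unequal group sizes (85 vs 25). A self-contained Welch version is sketched below; the cohort means match the abstract, but the spreads and the samples themselves are invented for illustration.

```python
import math
import numpy as np

def welch_t(a, b):
    """Two-sample Welch t statistic and approximate degrees of freedom."""
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    t = (a.mean() - b.mean()) / math.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

rng = np.random.default_rng(5)
# Synthetic dose-area-product cohorts in mGy*cm^2 -- NOT the paper's patient data;
# only the two means are taken from the abstract, the spreads are assumed.
xper = rng.normal(3033.2, 900.0, 85)
clarity = rng.normal(1733.6, 600.0, 25)
t, df = welch_t(xper, clarity)
```

    Welch's form avoids assuming equal variances, which is safer when the new platform changes not only the mean dose but also its spread; the two-sided p-value would then come from the t distribution with df degrees of freedom.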

  18. Personal computer process data acquisition

    SciTech Connect

    Dworjanyn, L.O. )

    1989-01-01

    A simple BASIC program was written to permit personal computer data collection of process temperatures, pressures, flows, and inline analyzer outputs for a batch-type unit operation. The field voltage outputs were read on an IEEE programmable digital multimeter using a programmable scanner to select different output lines. The data were stored in ASCII format to allow direct analysis by spreadsheet programs. 1 fig., 1 tab.
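    The scheme (scan channels, convert volts to engineering units, write ASCII rows a spreadsheet can ingest) is easy to sketch in modern terms. The tag names, readings, and scale factors below are invented for illustration.

```python
import csv
import io

# Hypothetical scanner readings: (channel tag, measured volts).
readings = [("T1", 4.02), ("P1", 2.55), ("F1", 1.10)]
# Assumed engineering-unit scale factors (units per volt) for each channel.
scale = {"T1": 50.0, "P1": 100.0, "F1": 20.0}

buf = io.StringIO()                    # stands in for a file on disk
writer = csv.writer(buf)
writer.writerow(["tag", "volts", "value"])
for tag, volts in readings:
    writer.writerow([tag, volts, volts * scale[tag]])
```

    Comma-separated ASCII remains the lowest-friction path into spreadsheet analysis, exactly as in the original setup.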

  19. Auditory Processing Disorder and Foreign Language Acquisition

    ERIC Educational Resources Information Center

    Veselovska, Ganna

    2015-01-01

    This article aims at exploring various strategies for coping with the auditory processing disorder in the light of foreign language acquisition. The techniques relevant to dealing with the auditory processing disorder can be attributed to environmental and compensatory approaches. The environmental one involves actions directed at creating a…

  20. The ADIS advanced data acquisition, imaging, and storage system

    SciTech Connect

    Flaherty, J.W.

    1986-01-01

    The design and development of Automated Ultrasonic Scanning Systems (AUSS) by McDonnell Aircraft Company has provided the background for the development of the ADIS advanced data acquisition, imaging, and storage system. The ADIS provides state-of-the-art ultrasonic data processing and imaging features which can be utilized in both laboratory and production-line composite evaluation applications. System features such as real-time imaging, instantaneous electronic rescanning, multitasking capability, histograms, and cross-sections provide the tools necessary to inspect and evaluate composite parts quickly and consistently.

  1. Applications Of Digital Image Acquisition In Anthropometry

    NASA Astrophysics Data System (ADS)

    Woolford, Barbara; Lewis, James L.

    1981-10-01

    Anthropometric data on reach and mobility have traditionally been collected by time consuming and relatively inaccurate manual methods. Three dimensional digital image acquisition promises to radically increase the speed and ease of data collection and analysis. A three-camera video anthropometric system for collecting position, velocity, and force data in real time is under development for the Anthropometric Measurement Laboratory at NASA's Johnson Space Center. The use of a prototype of this system for collecting data on reach capabilities and on lateral stability is described. Two extensions of this system are planned.

  2. CCD image data acquisition system for optical astronomy.

    NASA Astrophysics Data System (ADS)

    Bhat, P. N.; Patnaik, K.; Kembhavi, A. K.; Patnaik, A. R.; Prabhu, T. P.

    1990-11-01

    A complete image processing system based on a charge coupled device (CCD) has been developed at TIFR, Bombay, for use in optical astronomy. The system consists of a P-8600/B GEC CCD chip, a CCD controller, and a VAX 11/725 minicomputer to carry out image acquisition and display on a VS-11 monitor. All the necessary software and part of the hardware were developed locally, integrated together and installed at the Vainu Bappu Observatory at Kavalur. The CCD as an imaging device and its advantages over the conventional photographic plate are briefly reviewed. The acquisition system is described in detail. Preliminary results are presented and the future research programme is outlined.

  3. Processability Theory and German Case Acquisition

    ERIC Educational Resources Information Center

    Baten, Kristof

    2011-01-01

    This article represents the first attempt to formulate a hypothetical sequence for German case acquisition by Dutch-speaking learners on the basis of Processability Theory (PT). It will be argued that case forms emerge corresponding to a development from lexical over phrasal to interphrasal morphemes. This development, however, is subject to a…

  4. Reducing the Effects of Background Noise during Auditory Functional Magnetic Resonance Imaging of Speech Processing: Qualitative and Quantitative Comparisons between Two Image Acquisition Schemes and Noise Cancellation

    ERIC Educational Resources Information Center

    Blackman, Graham A.; Hall, Deborah A.

    2011-01-01

    Purpose: The intense sound generated during functional magnetic resonance imaging (fMRI) complicates studies of speech and hearing. This experiment evaluated the benefits of using active noise cancellation (ANC), which attenuates the level of the scanner sound at the participant's ear by up to 35 dB around the peak at 600 Hz. Method: Speech and…

  5. Image Processing

    NASA Technical Reports Server (NTRS)

    1987-01-01

    A new spinoff product was derived from Geospectra Corporation's expertise in processing LANDSAT data in a software package. Called ATOM (for Automatic Topographic Mapping), it's capable of digitally extracting elevation information from stereo photos taken by spaceborne cameras. ATOM offers a new dimension of realism in applications involving terrain simulations, producing extremely precise maps of an area's elevations at a lower cost than traditional methods. ATOM has a number of applications involving defense training simulations and offers utility in architecture, urban planning, forestry, petroleum and mineral exploration.

  6. Digital image processing.

    PubMed

    Seeram, Euclid

    2004-01-01

    Digital image processing is now commonplace in radiology, nuclear medicine and sonography. This article outlines underlying principles and concepts of digital image processing. After completing this article, readers should be able to: List the limitations of film-based imaging. Identify major components of a digital imaging system. Describe the history and application areas of digital image processing. Discuss image representation and the fundamentals of digital image processing. Outline digital image processing techniques and processing operations used in selected imaging modalities. Explain the basic concepts and visualization tools used in 3-D and virtual reality imaging. Recognize medical imaging informatics as a new area of specialization for radiologic technologists. PMID:15352557

  7. Major system acquisitions process (A-109)

    NASA Technical Reports Server (NTRS)

    Saric, C.

    1991-01-01

    The Major System examined is a combination of elements (hardware, software, facilities, and services) that function together to produce the capabilities required to fulfill a mission need. The system acquisition process is a sequence of activities beginning with documentation of a mission need and ending with introduction of the major system into operational use or other successful achievement of program objectives. It is concluded that the A-109 process makes sense and provides a systematic, integrated management approach, along with appropriate management-level involvement and innovative 'best ideas' from the private sector, in satisfying mission needs.

  8. Colony image acquisition and genetic segmentation algorithm and colony analyses

    NASA Astrophysics Data System (ADS)

    Wang, W. X.

    2012-01-01

    Colony analysis is used in a large number of fields such as food, dairy, beverages, hygiene, environmental monitoring, water, toxicology and sterility testing. In order to reduce labor and increase analysis accuracy, many researchers and developers have made efforts to build image analysis systems. The main problems in these systems are image acquisition, image segmentation and image analysis. In this paper, to acquire colony images with good quality, an illumination box was constructed in which the distances between the lights and the dish, the camera lens and the lights, and the camera lens and the dish are adjusted optimally. Image segmentation is based on a genetic approach that allows one to treat the segmentation problem as a global optimization. After image pre-processing and image segmentation, the colony analyses are performed. The colony image analysis consists of (1) basic colony parameter measurements; (2) colony size analysis; (3) colony shape analysis; and (4) colony surface measurements. All of the above visual colony parameters can be selected and combined together to form new engineering parameters. The colony analysis can be applied to different applications.
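    The genetic, global-optimization view of segmentation can be sketched with a tiny genetic algorithm that searches for a threshold maximizing between-class variance (the Otsu criterion) on a synthetic colony image. This is an illustrative stand-in for the approach, not the author's algorithm.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic colony image: dark background with two bright blobs (assumed data).
img = rng.normal(60, 8, (128, 128))
img[30:50, 30:50] += 100
img[80:110, 70:100] += 100
img = np.clip(img, 0, 255)

def between_class_variance(image, t):
    """Otsu's objective: weighted squared distance between class means."""
    fg, bg = image[image > t], image[image <= t]
    if fg.size == 0 or bg.size == 0:
        return 0.0
    wf, wb = fg.size / image.size, bg.size / image.size
    return wf * wb * (fg.mean() - bg.mean()) ** 2

# Tiny genetic search over threshold values: select the fittest, mutate them.
pop = rng.uniform(0, 255, 20)
for _ in range(30):
    fitness = np.array([between_class_variance(img, t) for t in pop])
    parents = pop[np.argsort(fitness)[-10:]]            # selection (elitist)
    children = parents + rng.normal(0, 5, 10)           # mutation
    pop = np.concatenate([parents, children])
best = pop[np.argmax([between_class_variance(img, t) for t in pop])]
```

    For a single scalar threshold an exhaustive scan would be cheaper; the genetic formulation pays off when the segmentation has many coupled parameters, which is where treating it as global optimization matters.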

  9. Mosaic acquisition and processing for optical-resolution photoacoustic microscopy

    NASA Astrophysics Data System (ADS)

    Shao, Peng; Shi, Wei; Chee, Ryan K. W.; Zemp, Roger J.

    2012-08-01

    In optical-resolution photoacoustic microscopy (OR-PAM), data acquisition time is limited by both the laser pulse repetition rate (PRR) and the scanning speed. Optical scanning offers high speed but a limited field of view, determined by the ultrasound transducer sensitivity. In this paper, we propose a hybrid optical- and mechanical-scanning OR-PAM system with mosaic data acquisition and processing. The system employs fast-scanning mirrors and a diode-pumped, nanosecond-pulsed, Ytterbium-doped, 532-nm fiber laser with a PRR of up to 600 kHz. Data for a sequence of image mosaic patches are acquired systematically with optical scanning at predetermined mechanical scanning locations. After all imaging locations are covered, a large panoramic scene is generated by stitching the mosaic patches together. The proposed system is shown to be at least 20 times faster than previously reported OR-PAM systems.
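    Because the mechanical stage positions are predetermined, stitching reduces to placing each optically scanned patch at its known offset in the panorama. A minimal sketch of that idea (no blending or sub-pixel registration, which a real system would likely add):

```python
import numpy as np

def stitch_mosaic(patches, positions, panorama_shape):
    """Place optically scanned patches at known mechanical-stage offsets."""
    pano = np.zeros(panorama_shape)
    for patch, (r, c) in zip(patches, positions):
        h, w = patch.shape
        pano[r:r + h, c:c + w] = patch
    return pano

rng = np.random.default_rng(7)
patches = [rng.uniform(size=(64, 64)) for _ in range(4)]
positions = [(0, 0), (0, 64), (64, 0), (64, 64)]     # 2x2 grid of stage locations
pano = stitch_mosaic(patches, positions, (128, 128))
```

    Known offsets make stitching O(1) per patch; feature-based registration is only needed if the stage repeatability is worse than a pixel.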

  10. Camera settings for UAV image acquisition

    NASA Astrophysics Data System (ADS)

    O'Connor, James; Smith, Mike J.; James, Mike R.

    2016-04-01

    The acquisition of aerial imagery has become more ubiquitous than ever in the geosciences due to the advent of consumer-grade UAVs capable of carrying imaging devices. These allow the collection of high-spatial-resolution data in a timely manner with little expertise. Conversely, the cameras and lenses used to acquire this imagery are often given less thought, and can be unfit for purpose. Given the weight constraints that are frequently an issue in UAV flights, low-payload UAVs (<1 kg) limit the types of cameras and lenses which can be used for specific surveys, and therefore the quality of imagery which can be acquired. This contribution discusses these constraints, which need to be considered when selecting a camera and lens for a UAV survey, and how they can best be optimized. These include balancing the camera exposure triangle (ISO, shutter speed, aperture) to ensure sharp, well-exposed imagery, and its interactions with other camera parameters (sensor size, focal length, pixel pitch) as well as UAV flight parameters (height, velocity).
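    One quantitative link between the flight and camera parameters is forward motion blur: blur in pixels ≈ velocity × shutter time / ground sample distance (GSD). The sketch below derives the longest usable shutter time; the camera numbers are assumptions, loosely modelled on a 1-inch-sensor UAV camera, not values from the abstract.

```python
def ground_sample_distance(sensor_width_mm, focal_mm, pixels_across, altitude_m):
    """GSD: ground footprint of one pixel, in metres per pixel."""
    pixel_pitch_mm = sensor_width_mm / pixels_across
    return pixel_pitch_mm * altitude_m / focal_mm

def max_shutter_for_blur(gsd_m, velocity_ms, max_blur_px=0.5):
    """Longest shutter time keeping forward motion blur under max_blur_px."""
    return max_blur_px * gsd_m / velocity_ms

# Assumed survey: 13.2 mm wide sensor, 8.8 mm lens, 5472 px across, 100 m altitude.
gsd = ground_sample_distance(13.2, 8.8, 5472, 100.0)   # ~2.7 cm/px
t_max = max_shutter_for_blur(gsd, 10.0)                 # at 10 m/s flight speed
```

    The result (on the order of 1/1000 s here) then fixes one corner of the exposure triangle: with shutter time capped by motion blur, a well-exposed image must come from aperture and ISO, which in turn trade depth of field against noise.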

  11. Research on remote sensing image pixel attribute data acquisition method in AutoCAD

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoyang; Sun, Guangtong; Liu, Jun; Liu, Hui

    2013-07-01

    Remote sensing images are widely used in AutoCAD, but AutoCAD lacks remote sensing image processing functions. In this paper, ObjectARX was used as the secondary development tool, combined with the Image Engine SDK, to realize remote sensing image pixel attribute data acquisition in AutoCAD, which provides critical technical support for remote sensing image processing algorithms in the AutoCAD environment.

  12. Reproducible high-resolution multispectral image acquisition in dermatology

    NASA Astrophysics Data System (ADS)

    Duliu, Alexandru; Gardiazabal, José; Lasser, Tobias; Navab, Nassir

    2015-07-01

    Multispectral image acquisitions are increasingly popular in dermatology, due to their improved spectral resolution which enables better tissue discrimination. Most applications however focus on restricted regions of interest, imaging only small lesions. In this work we present and discuss an imaging framework for high-resolution multispectral imaging on large regions of interest.

  13. Age of Acquisition and Imageability: A Cross-Task Comparison

    ERIC Educational Resources Information Center

    Ploetz, Danielle M.; Yates, Mark

    2016-01-01

    Previous research has reported an imageability effect on visual word recognition. Words that are high in imageability are recognised more rapidly than are those lower in imageability. However, later researchers argued that imageability was confounded with age of acquisition. In the current research, these two factors were manipulated in a…

  14. Image-Processing Educator

    NASA Technical Reports Server (NTRS)

    Gunther, F. J.

    1986-01-01

    Apple Image-Processing Educator (AIPE) explores ability of microcomputers to provide personalized computer-assisted instruction (CAI) in digital image processing of remotely sensed images. AIPE is "proof-of-concept" system, not polished production system. User-friendly prompts provide access to explanations of common features of digital image processing and of sample programs that implement these features.

  15. 28. Perimeter acquisition radar building room #302, signal process and ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    28. Perimeter acquisition radar building room #302, signal process and analog receiver room - Stanley R. Mickelsen Safeguard Complex, Perimeter Acquisition Radar Building, Limited Access Area, between Limited Access Patrol Road & Service Road A, Nekoma, Cavalier County, ND

  16. Simultaneous acquisition of differing image types

    DOEpatents

    Demos, Stavros G

    2012-10-09

    A system in one embodiment includes an image forming device for forming an image from an area of interest containing different image components; an illumination device for illuminating the area of interest with light containing multiple components; at least one light source coupled to the illumination device, the at least one light source providing light to the illumination device containing different components, each component having distinct spectral characteristics and relative intensity; an image analyzer coupled to the image forming device, the image analyzer decomposing the image formed by the image forming device into multiple component parts based on type of imaging; and multiple image capture devices, each image capture device receiving one of the component parts of the image. A method in one embodiment includes receiving an image from an image forming device; decomposing the image formed by the image forming device into multiple component parts based on type of imaging; receiving the component parts of the image; and outputting image information based on the component parts of the image. Additional systems and methods are presented.

  17. Acquisition by Processing Theory: A Theory of Everything?

    ERIC Educational Resources Information Center

    Carroll, Susanne E.

    2004-01-01

    Truscott and Sharwood Smith (henceforth T&SS) propose a novel theory of language acquisition, "Acquisition by Processing Theory" (APT), designed to account for both first and second language acquisition, monolingual and bilingual speech perception and parsing, and speech production. This is a tall order. Like any theoretically ambitious…

  18. Hyperspectral image processing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hyperspectral image processing refers to the use of computer algorithms to extract, store and manipulate both spatial and spectral information contained in hyperspectral images across the visible and near-infrared portion of the electromagnetic spectrum. A typical hyperspectral image processing work...

  19. Hyperspectral image processing methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hyperspectral image processing refers to the use of computer algorithms to extract, store and manipulate both spatial and spectral information contained in hyperspectral images across the visible and near-infrared portion of the electromagnetic spectrum. A typical hyperspectral image processing work...

  20. Hybrid image processing

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1990-01-01

    Partly-digital, partly-optical 'hybrid' image processing attempts to use the properties of each domain to synergistic advantage: while Fourier optics furnishes speed, digital processing allows the use of much greater algorithmic complexity. The video-rate image-coordinate transformation used is a critical technology for real-time hybrid image-pattern recognition. Attention is given to the separation of pose variables, image registration, and both single- and multiple-frame registration.

  1. Subroutines For Image Processing

    NASA Technical Reports Server (NTRS)

    Faulcon, Nettie D.; Monteith, James H.; Miller, Keith W.

    1988-01-01

    Image Processing Library computer program, IPLIB, is collection of subroutines facilitating use of COMTAL image-processing system driven by HP 1000 computer. Functions include addition or subtraction of two images with or without scaling, display of color or monochrome images, digitization of image from television camera, display of test pattern, manipulation of bits, and clearing of screen. Provides capability to read or write points, lines, and pixels from image; read or write at location of cursor; and read or write array of integers into COMTAL memory. Written in FORTRAN 77.

  2. Image Acquisition and Quality in Digital Radiography.

    PubMed

    Alexander, Shannon

    2016-09-01

    Medical imaging has undergone dramatic changes and technological breakthroughs since the introduction of digital radiography. This article presents information on the development of digital radiography and types of digital radiography systems. Aspects of image quality and radiation exposure control are highlighted as well. In addition, the article includes related workplace changes and medicolegal considerations in the digital radiography environment. PMID:27601691

  3. Chemical Applications of a Programmable Image Acquisition System

    NASA Astrophysics Data System (ADS)

    Ogren, Paul J.; Henry, Ian; Fletcher, Steven E. S.; Kelly, Ian

    2003-06-01

    Image analysis is widely used in chemistry, both for rapid qualitative evaluations using techniques such as thin layer chromatography (TLC) and for quantitative purposes such as well-plate measurements of analyte concentrations or fragment-size determinations in gel electrophoresis. This paper describes a programmable system for image acquisition and processing that is currently used in the laboratories of our organic and physical chemistry courses. It has also been used in student research projects in analytical chemistry and biochemistry. The potential range of applications is illustrated by brief presentations of four examples: (1) using well-plate optical transmission data to construct a standard concentration absorbance curve; (2) the quantitative analysis of acetaminophen in Tylenol and acetylsalicylic acid in aspirin using TLC with fluorescence detection; (3) the analysis of electrophoresis gels to determine DNA fragment sizes and amounts; and, (4) using color change to follow reaction kinetics. The supplemental material in JCE Online contains information on two additional examples: deconvolution of overlapping bands in protein gel electrophoresis, and the recovery of data from published images or graphs. The JCE Online material also presents additional information on each example, on the system hardware and software, and on the data analysis methodology.
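Example (1), constructing a standard curve from well-plate transmission data, can be sketched with the Beer-Lambert relation A = -log10(I/I0) and a linear fit. The intensity values below are hypothetical stand-ins for mean pixel intensities read from an acquired well-plate image, not data from the paper:

```python
import numpy as np

def absorbance(i_well, i_blank):
    """Convert transmitted intensity to absorbance: A = -log10(I / I0)."""
    return -np.log10(np.asarray(i_well, dtype=float) / float(i_blank))

def standard_curve(concentrations, absorbances):
    """Least-squares fit A = m*c + b; returns slope and intercept."""
    m, b = np.polyfit(concentrations, absorbances, 1)
    return m, b

# Hypothetical mean pixel intensities from wells of known concentration
i_blank = 200.0
conc = np.array([0.0, 0.1, 0.2, 0.4])
i_wells = i_blank * 10.0 ** (-0.5 * conc)   # synthetic: A = 0.5 per unit conc.

A = absorbance(i_wells, i_blank)
m, b = standard_curve(conc, A)
```

An unknown sample's concentration then follows from its measured absorbance as c = (A - b) / m.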

  4. 29. Perimeter acquisition radar building room #318, data processing system ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    29. Perimeter acquisition radar building room #318, data processing system area; data processor maintenance and operations center, showing data processing consoles - Stanley R. Mickelsen Safeguard Complex, Perimeter Acquisition Radar Building, Limited Access Area, between Limited Access Patrol Road & Service Road A, Nekoma, Cavalier County, ND

  5. Image Processing Software

    NASA Technical Reports Server (NTRS)

    1992-01-01

    To convert raw data into environmental products, the National Weather Service and other organizations use the Global 9000 image processing system marketed by Global Imaging, Inc. The company's GAE software package is an enhanced version of the TAE, developed by Goddard Space Flight Center to support remote sensing and image processing applications. The system can be operated in three modes and is combined with HP Apollo workstation hardware.

  6. Apple Image Processing Educator

    NASA Technical Reports Server (NTRS)

    Gunther, F. J.

    1981-01-01

    A software system design is proposed and demonstrated with pilot-project software. The system permits the Apple II microcomputer to be used for personalized computer-assisted instruction in the digital image processing of LANDSAT images. The programs provide data input, menu selection, graphic and hard-copy displays, and both general and detailed instructions. The pilot-project results are considered to be successful indicators of the capabilities and limits of microcomputers for digital image processing education.

  7. Image processing mini manual

    NASA Technical Reports Server (NTRS)

    Matthews, Christine G.; Posenau, Mary-Anne; Leonard, Desiree M.; Avis, Elizabeth L.; Debure, Kelly R.; Stacy, Kathryn; Vonofenheim, Bill

    1992-01-01

    The intent is to provide an introduction to the image processing capabilities available at the Langley Research Center (LaRC) Central Scientific Computing Complex (CSCC). Various image processing software components are described. Information is given concerning the use of these components in the Data Visualization and Animation Laboratory at LaRC.

  8. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    PubMed Central

    Li, Xiangwei; Lan, Xuguang; Yang, Meng; Xue, Jianru; Zheng, Nanning

    2014-01-01

Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from those of traditional image acquisition, a general image compression solution may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images that considers the distinctive features of CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which achieves better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the CS measurements obtained from CS acquisition, without any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4∼2 dB compared with the current state of the art, while maintaining low computational complexity. PMID:25490597
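The paper's adaptive acquisition and universal quantization schemes are its own; as a generic sketch of the underlying model, the following shows CS acquisition as random projection (m measurements of an n-sample sparse signal, m ≪ n) followed by plain uniform quantization of the measurements. All sizes and the Gaussian sensing matrix are illustrative choices, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse signal of length n with k nonzeros (stand-in for a compressible image patch)
n, k, m = 256, 8, 96
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# CS acquisition: m << n random projections acquire and compress simultaneously
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x

# Uniform quantization of the measurements, using no prior about the image
step = (y.max() - y.min()) / 255
y_q = np.round((y - y.min()) / step) * step + y.min()

quant_err = np.abs(y - y_q).max()   # bounded by half a quantization step
```

A CS reconstruction algorithm (e.g. basis pursuit) would then recover the image from the quantized measurements `y_q`.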

  9. Influence of acquisition parameters on MV-CBCT image quality.

    PubMed

    Gayou, Olivier

    2012-01-01

The production of high quality pretreatment images plays an increasing role in image-guided radiotherapy (IGRT) and adaptive radiation therapy (ART). Megavoltage cone-beam computed tomography (MV-CBCT) is the simplest of all the commercially available volumetric imaging systems for localization. It also suffers the most from relatively poor contrast, due to the energy range of the imaging photons. Several avenues can be investigated to improve MV-CBCT image quality while maintaining an acceptable patient exposure: beam generation, detector technology, reconstruction parameters, and acquisition parameters. This article presents a study of the effects of the acquisition scan length and number of projections of a Siemens Artiste MV-CBCT system on image quality, within the range provided by the manufacturer. It also discusses other aspects, beyond image quality, that one should consider when selecting an acquisition protocol. Noise and uniformity were measured on the image of a cylindrical water phantom. Spatial resolution was measured using the same phantom half filled with water to provide a sharp water/air interface from which to derive the modulation transfer function (MTF). Contrast-to-noise ratio (CNR) was measured on a pelvis-shaped phantom with four inserts of different electron densities relative to water (1.043, 1.117, 1.513, and 0.459). Uniformity was independent of acquisition protocol. Noise decreased from 1.96% to 1.64% when the total number of projections was increased from 100 to 600 for a total exposure of 13.5 MU. The CNR showed a ±5% dependence on the number of projections and a 10% dependence on the scan length. However, these variations were not statistically significant. The spatial resolution was unaffected by the arc length or the sampling rate. Acquisition parameters have little to no effect on the image quality of the MV-CBCT system within the range of parameters available on the system. Considerations other than image quality, such as memory
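The CNR measurement described above can be sketched as the contrast between an insert ROI and the background, divided by the pooled noise. The ROI values below are synthetic; the pooled-variance noise definition is one common convention, not necessarily the one used in the paper:

```python
import numpy as np

def cnr(roi_insert, roi_background):
    """Contrast-to-noise ratio between an insert ROI and a background ROI,
    using the pooled standard deviation of the two ROIs as the noise term."""
    roi_insert = np.asarray(roi_insert, float)
    roi_background = np.asarray(roi_background, float)
    contrast = abs(roi_insert.mean() - roi_background.mean())
    noise = np.sqrt(0.5 * (roi_insert.var() + roi_background.var()))
    return contrast / noise

# Synthetic phantom ROIs: background at 100, insert at 110, noise sigma = 2
rng = np.random.default_rng(1)
background = rng.normal(100.0, 2.0, size=(64, 64))
insert = rng.normal(110.0, 2.0, size=(32, 32))
value = cnr(insert, background)   # roughly 10 / 2 = 5
```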

  10. Image Processing System

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Mallinckrodt Institute of Radiology (MIR) is using a digital image processing system which employs NASA-developed technology. MIR's computer system is the largest radiology system in the world. It is used in diagnostic imaging. Blood vessels are injected with x-ray dye, and the images which are produced indicate whether arteries are hardened or blocked. A computer program developed by Jet Propulsion Laboratory known as Mini-VICAR/IBIS was supplied to MIR by COSMIC. The program provides the basis for developing the computer imaging routines for data processing, contrast enhancement and picture display.

  11. Single Acquisition Quantitative Single Point Electron Paramagnetic Resonance Imaging

    PubMed Central

    Jang, Hyungseok; Subramanian, Sankaran; Devasahayam, Nallathamby; Saito, Keita; Matsumoto, Shingo; Krishna, Murali C; McMillan, Alan B

    2013-01-01

    Purpose Electron paramagnetic resonance imaging (EPRI) has emerged as a promising non-invasive technology to dynamically image tissue oxygenation. Due to its extremely short spin-spin relaxation times, EPRI benefits from a single-point imaging (SPI) scheme where the entire FID signal is captured using pure phase encoding. However, direct T2*/pO2 quantification is inhibited due to constant magnitude gradients which result in time-decreasing FOV. Therefore, conventional acquisition techniques require repeated imaging experiments with differing gradient amplitudes (typically 3), which results in long acquisition time. Methods In this study, gridding was evaluated as a method to reconstruct images with equal FOV to enable direct T2*/pO2 quantification within a single imaging experiment. Additionally, an enhanced reconstruction technique that shares high spatial k-space regions throughout different phase encoding time delays was investigated (k-space extrapolation). Results The combined application of gridding and k-space extrapolation enables pixelwise quantification of T2* from a single acquisition with improved image quality across a wide range of phase encoding delay times. The calculated T2*/pO2 does not vary across this time range. Conclusion By utilizing gridding and k-space extrapolation, accurate T2*/pO2 quantification can be achieved within a single dataset to allow enhanced temporal resolution (by a factor of 3). PMID:23913515
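Once reconstructed images share a common FOV across phase-encoding delay times, the pixelwise T2* quantification reduces to a mono-exponential fit S(t) = S0·exp(-t/T2*) at each pixel. A minimal log-linear version of that fit is sketched below on synthetic data; the delay times and map sizes are illustrative, not the study's acquisition settings:

```python
import numpy as np

def fit_t2star(images, delays):
    """Pixelwise mono-exponential fit S(t) = S0 * exp(-t / T2*) via
    log-linear least squares.

    images: array of shape (n_delays, ny, nx); delays: n_delays times.
    """
    t = np.asarray(delays, float)
    logs = np.log(np.asarray(images, float)).reshape(len(t), -1)
    slope, intercept = np.polyfit(t, logs, 1)   # fits all pixels at once
    t2star = -1.0 / slope                       # slope = -1/T2*
    s0 = np.exp(intercept)
    shape = images.shape[1:]
    return t2star.reshape(shape), s0.reshape(shape)

# Synthetic data: T2* = 2.0 everywhere, S0 = 100, three delay times
delays = np.array([0.5, 1.0, 1.5])
truth = 100.0 * np.exp(-delays / 2.0)
images = truth[:, None, None] * np.ones((3, 4, 4))

t2map, s0map = fit_t2star(images, delays)
```

With noisy data, a weighted or nonlinear fit would be preferable to the plain log-linear fit shown here.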

  12. Visual color image processing

    NASA Astrophysics Data System (ADS)

    Qiu, Guoping; Schaefer, Gerald

    1999-12-01

    In this paper, we propose a color image processing method by combining modern signal processing technique with knowledge about the properties of the human color vision system. Color signals are processed differently according to their visual importance. The emphasis of the technique is on the preservation of total visual quality of the image and simultaneously taking into account computational efficiency. A specific color image enhancement technique, termed Hybrid Vector Median Filtering is presented. Computer simulations have been performed to demonstrate that the new approach is technically sound and results are comparable to or better than traditional methods.
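The paper's Hybrid Vector Median Filtering is not specified in the abstract; as background, a plain vector median filter (the building block the name suggests) picks, for each window, the color vector whose summed distance to all other vectors in the window is smallest. A minimal sketch:

```python
import numpy as np

def vector_median(window):
    """Vector median of a set of color vectors: the member whose summed
    L2 distance to all the others is smallest."""
    v = np.asarray(window, float)                              # shape (n, 3)
    d = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=-1).sum(axis=1)
    return v[np.argmin(d)]

def vmf(image):
    """Apply a 3x3 vector median filter to an RGB image (borders unfiltered)."""
    img = np.asarray(image, float)
    out = img.copy()
    ny, nx, _ = img.shape
    for y in range(1, ny - 1):
        for x in range(1, nx - 1):
            win = img[y - 1:y + 2, x - 1:x + 2].reshape(9, 3)
            out[y, x] = vector_median(win)
    return out

# A flat red image with one impulse-noise pixel: the VMF removes the outlier
img = np.tile(np.array([200.0, 0.0, 0.0]), (5, 5, 1))
img[2, 2] = [0.0, 255.0, 0.0]
clean = vmf(img)
```

Because the output is always one of the input vectors, the filter never introduces colors that were not present in the window, which is the property that makes vector medians attractive for color images.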

  13. Second Language Vocabulary Acquisition: A Lexical Input Processing Approach

    ERIC Educational Resources Information Center

    Barcroft, Joe

    2004-01-01

    This article discusses the importance of vocabulary in second language acquisition (SLA), presents an overview of major strands of research on vocabulary acquisition, and discusses five principles for effective second language (L2) vocabulary instruction based on research findings on lexical input processing. These principles emphasize…

  14. Design of area array CCD image acquisition and display system based on FPGA

    NASA Astrophysics Data System (ADS)

    Li, Lei; Zhang, Ning; Li, Tianting; Pan, Yue; Dai, Yuming

    2014-09-01

With the development of science and technology, the CCD (charge-coupled device) has been widely applied in many fields and plays an important role in modern sensing systems, so a real-time image acquisition and display scheme based on a CCD device is of great practical significance. This paper introduces an image data acquisition and display system for an area-array CCD based on an FPGA. Several key technical challenges of the system are analyzed and solutions are put forward. The FPGA serves as the core processing unit of the system and controls the overall timing sequence. The system uses the ICX285AL area-array CCD image sensor produced by Sony Corporation. The FPGA generates the drive signals for the area-array CCD, and an analog front end (AFE) processes the CCD image signal, performing amplification, filtering, noise elimination, and correlated double sampling (CDS); an AD9945 produced by Analog Devices converts the analog signal to a digital signal. A Camera Link high-speed data transmission circuit was developed, the PC-side image acquisition software was completed, and real-time display of images was realized. Practical testing indicates that the system is stable and reliable in image acquisition and control, and that its performance meets the project requirements.

  15. CCD image acquisition for multispectral teledetection

    NASA Astrophysics Data System (ADS)

    Peralta-Fabi, R.; Peralta, A.; Prado, Jorge M.; Vicente, Esau; Navarette, M.

    1992-08-01

A low-cost, high-reliability multispectral video system has been developed for airborne remote sensing. Three lightweight CCD cameras are mounted together with a photographic camera in a self-contained Kevlar composite structure. The CCD cameras are remotely controlled and have spectral filters (80 nm at 50% T) placed in front of their optics, and all cameras are aligned to capture the same image field. Filters may be changed to adjust the spectral bands to the object's reflectance properties, but a set of bands common to most remote sensing aircraft and satellites is usually used, covering the visible and near IR. This paper presents results obtained with this system and some comparisons of its cost, resolution, and atmospheric-correction advantages with respect to other, more costly devices. A brief description is also given of the Remotely Piloted Vehicle (RPV) project in which the camera system will be mounted. The images so obtained replace the costlier ones obtained by satellites in several specific applications. Other applications under development include fire monitoring, identification of vegetation in the field and in the laboratory, discrimination of objects by color for industrial applications, and geological and engineering surveys.

  16. Automatic image acquisition processor and method

    DOEpatents

    Stone, W.J.

    1984-01-16

    A computerized method and point location system apparatus is disclosed for ascertaining the center of a primitive or fundamental object whose shape and approximate location are known. The technique involves obtaining an image of the object, selecting a trial center, and generating a locus of points having a predetermined relationship with the center. Such a locus of points could include a circle. The number of points overlying the object in each quadrant is obtained and the counts of these points per quadrant are compared. From this comparison, error signals are provided to adjust the relative location of the trial center. This is repeated until the trial center overlies the geometric center within the predefined accuracy limits.

  17. Automatic image acquisition processor and method

    DOEpatents

    Stone, William J.

    1986-01-01

    A computerized method and point location system apparatus is disclosed for ascertaining the center of a primitive or fundamental object whose shape and approximate location are known. The technique involves obtaining an image of the object, selecting a trial center, and generating a locus of points having a predetermined relationship with the center. Such a locus of points could include a circle. The number of points overlying the object in each quadrant is obtained and the counts of these points per quadrant are compared. From this comparison, error signals are provided to adjust the relative location of the trial center. This is repeated until the trial center overlies the geometric center within the predefined accuracy limits.
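The centering loop these two patents describe can be sketched on a synthetic binary image: place a circle of probe points around the trial center, count the points that land on the object in each quadrant, and nudge the trial center toward the quadrants with higher counts until they balance. The probe radius, gain, and iteration count below are illustrative choices, not values from the patents:

```python
import numpy as np

def find_center(mask, start, radius, gain=0.05, iters=100):
    """Iteratively locate the center of a disk-like object in a binary mask
    by balancing per-quadrant counts of probe points on a circle."""
    ny, nx = mask.shape
    theta = np.linspace(0.0, 2.0 * np.pi, 72, endpoint=False)
    cx, cy = float(start[0]), float(start[1])
    for _ in range(iters):
        ix = np.clip(np.round(cx + radius * np.cos(theta)).astype(int), 0, nx - 1)
        iy = np.clip(np.round(cy + radius * np.sin(theta)).astype(int), 0, ny - 1)
        on = mask[iy, ix].astype(float)         # probe points overlying the object
        # error signals: right/left and upper/lower count imbalance
        ex = (on * np.sign(np.cos(theta))).sum()
        ey = (on * np.sign(np.sin(theta))).sum()
        cx += gain * ex
        cy += gain * ey
    return cx, cy

# Synthetic 100x100 mask with a disk of radius 20 centered at (55, 48)
yy, xx = np.mgrid[0:100, 0:100]
mask = ((xx - 55) ** 2 + (yy - 48) ** 2) <= 20 ** 2

cx, cy = find_center(mask, start=(45, 40), radius=18)
```

When all probe points lie on the object, the quadrant counts cancel and the center stops moving, which is the convergence condition the patents describe.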

  18. Meteorological image processing applications

    NASA Technical Reports Server (NTRS)

    Bracken, P. A.; Dalton, J. T.; Hasler, A. F.; Adler, R. F.

    1979-01-01

    Meteorologists at NASA's Goddard Space Flight Center are conducting an extensive program of research in weather and climate related phenomena. This paper focuses on meteorological image processing applications directed toward gaining a detailed understanding of severe weather phenomena. In addition, the paper discusses the ground data handling and image processing systems used at the Goddard Space Flight Center to support severe weather research activities and describes three specific meteorological studies which utilized these facilities.

  19. Applications of digital image acquisition in anthropometry

    NASA Technical Reports Server (NTRS)

    Woolford, B.; Lewis, J. L.

    1981-01-01

    A description is given of a video kinesimeter, a device for the automatic real-time collection of kinematic and dynamic data. Based on the detection of a single bright spot by three TV cameras, the system provides automatic real-time recording of three-dimensional position and force data. It comprises three cameras, two incandescent lights, a voltage comparator circuit, a central control unit, and a mass storage device. The control unit determines the signal threshold for each camera before testing, sequences the lights, synchronizes and analyzes the scan voltages from the three cameras, digitizes force from a dynamometer, and codes the data for transmission to a floppy disk for recording. Two of the three cameras face each other along the 'X' axis; the third camera, which faces the center of the line between the first two, defines the 'Y' axis. An image from the 'Y' camera and either 'X' camera is necessary for determining the three-dimensional coordinates of the point.

  20. An extended-source spatial acquisition process based on maximum likelihood criterion for planetary optical communications

    NASA Technical Reports Server (NTRS)

    Yan, Tsun-Yee

    1992-01-01

    This paper describes an extended-source spatial acquisition process based on the maximum likelihood criterion for interplanetary optical communications. The objective is to use the sun-lit Earth image as a receiver beacon and point the transmitter laser to the Earth-based receiver to establish a communication path. The process assumes the existence of a reference image. The uncertainties between the reference image and the received image are modeled as additive white Gaussian disturbances. It has been shown that the optimal spatial acquisition requires solving two nonlinear equations to estimate the coordinates of the transceiver from the received camera image in the transformed domain. The optimal solution can be obtained iteratively by solving two linear equations. Numerical results using a sample sun-lit Earth as a reference image demonstrate that sub-pixel resolutions can be achieved in a high disturbance environment. Spatial resolution is quantified by Cramer-Rao lower bounds.
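The paper's two-equation ML estimator is not reproduced in the abstract. As a sketch of the underlying idea only: with additive white Gaussian disturbances, ML estimation of a pure translation between a reference image and a received image reduces to locating the cross-correlation peak, and sub-pixel resolution can be obtained by interpolating around it. The parabolic-fit refinement and the synthetic beacon below are illustrative substitutions, not the paper's method:

```python
import numpy as np

def subpixel_shift(ref, img):
    """Estimate the (dy, dx) translation of `img` relative to `ref` from the
    cross-correlation peak, refined to sub-pixel accuracy by a parabolic fit."""
    c = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    ny, nx = c.shape
    py, px = np.unravel_index(np.argmax(c), c.shape)

    def vertex(cm, c0, cp):
        # vertex of the parabola through (-1, cm), (0, c0), (1, cp)
        d = cm - 2.0 * c0 + cp
        return 0.0 if d == 0.0 else 0.5 * (cm - cp) / d

    dy = py + vertex(c[(py - 1) % ny, px], c[py, px], c[(py + 1) % ny, px])
    dx = px + vertex(c[py, (px - 1) % nx], c[py, px], c[py, (px + 1) % nx])
    # unwrap circular shifts to signed values
    if dy > ny / 2:
        dy -= ny
    if dx > nx / 2:
        dx -= nx
    return dy, dx

# Synthetic beacon image and a shifted copy of it
yy, xx = np.mgrid[0:64, 0:64]
ref = np.exp(-((xx - 30.0) ** 2 + (yy - 28.0) ** 2) / 50.0)
img = np.roll(ref, (3, 5), axis=(0, 1))
dy, dx = subpixel_shift(ref, img)
```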

  1. A digital imaging photometry system for cometary data acquisition

    NASA Technical Reports Server (NTRS)

    Clifton, K. S.; Benson, C. M.; Gary, G. A.

    1986-01-01

This report describes a digital imaging photometry system developed in the Space Science Laboratory at the Marshall Space Flight Center. The photometric system used for cometary data acquisition is based on an intensified secondary electron conduction (ISEC) vidicon coupled to a versatile data acquisition system which allows real-time interactive operation. Field tests on the Orion and Rosette nebulae indicate a limiting magnitude of approximately m sub v = 14 over the 40 arcmin field-of-view. Observations were conducted of Comet Giacobini-Zinner in August 1985. The resulting data are discussed in relation to the capabilities of the digital analysis system. The development program concluded on August 31, 1985.

  2. Imaging and Data Acquisition in Clinical Trials for Radiation Therapy.

    PubMed

    FitzGerald, Thomas J; Bishop-Jodoin, Maryann; Followill, David S; Galvin, James; Knopp, Michael V; Michalski, Jeff M; Rosen, Mark A; Bradley, Jeffrey D; Shankar, Lalitha K; Laurie, Fran; Cicchetti, M Giulia; Moni, Janaki; Coleman, C Norman; Deye, James A; Capala, Jacek; Vikram, Bhadrasain

    2016-02-01

Cancer treatment evolves through oncology clinical trials. Cancer trials are multimodal and complex. Assuring that high-quality data are available to answer not only study objectives but also questions not anticipated at study initiation is the role of quality assurance. The National Cancer Institute reorganized its cancer clinical trials program in 2014. The National Clinical Trials Network (NCTN) was formed, and within it was established a diagnostic imaging and radiation therapy quality assurance organization: the Imaging and Radiation Oncology Core (IROC) Group, consisting of 6 quality assurance centers that provide imaging and radiation therapy quality assurance for the NCTN. Sophisticated imaging is used for cancer diagnosis, treatment, and management, as well as for image-driven technologies to plan and execute radiation treatment. Integration of imaging and radiation oncology data acquisition, review, management, and archive strategies is essential for trial compliance and future research. Lessons learned from previous trials provide evidence to support diagnostic imaging and radiation therapy data acquisition in NCTN trials. PMID:26853346

  3. Methods in Astronomical Image Processing

    NASA Astrophysics Data System (ADS)

    Jörsäter, S.

Contents: A Brief Introductory Note; History of Astronomical Imaging; Astronomical Image Data; Images in Various Formats; Digitized Image Data; Digital Image Data; Philosophy of Astronomical Image Processing; Properties of Digital Astronomical Images; Human Image Processing; Astronomical vs. Computer Science Image Processing; Basic Tools of Astronomical Image Processing; Display Applications; Calibration of Intensity Scales; Calibration of Length Scales; Image Re-shaping; Feature Enhancement; Noise Suppression; Noise and Error Analysis; Image Processing Packages: Design of AIPS and MIDAS; AIPS; MIDAS; Reduction of CCD Data; Bias Subtraction; Clipping; Preflash Subtraction; Dark Subtraction; Flat Fielding; Sky Subtraction; Extinction Correction; Deconvolution Methods; Rebinning/Combining; Summary and Prospects for the Future.
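The CCD-reduction chapters listed above (bias subtraction, dark subtraction, flat fielding) follow the standard calibration recipe, which can be sketched as follows. The frame values are synthetic and the dark-scaling and flat-normalization conventions are one common choice among several:

```python
import numpy as np

def calibrate(raw, bias, dark, flat, exptime, dark_exptime):
    """Standard CCD reduction: subtract bias, subtract dark current scaled to
    the science exposure time, then divide by a normalized flat field."""
    dark_rate = (dark - bias) / dark_exptime            # dark current per second
    flat_norm = (flat - bias) / np.median(flat - bias)  # unit-median flat
    return (raw - bias - dark_rate * exptime) / flat_norm

# Synthetic frames: bias 100 ADU, dark current 2 ADU/s, uniform sensitivity
bias = np.full((8, 8), 100.0)
dark = bias + 2.0 * 10.0          # 10 s dark frame
flat = bias + 5000.0              # well-exposed flat field
raw = bias + 2.0 * 30.0 + 400.0   # 30 s science frame with a 400 ADU signal

science = calibrate(raw, bias, dark, flat, exptime=30.0, dark_exptime=10.0)
```

On this synthetic input the calibration recovers the flat 400 ADU signal exactly; on real data, sky subtraction and extinction correction (also chapters above) would follow.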

  4. The Power of Imageability: How the Acquisition of Inflected Forms Is Facilitated in Highly Imageable Verbs and Nouns in Czech Children

    ERIC Educational Resources Information Center

    Smolík, Filip; Kríž, Adam

    2015-01-01

    Imageability is the ability of words to elicit mental sensory images of their referents. Recent research has suggested that imageability facilitates the processing and acquisition of inflected word forms. The present study examined whether inflected word forms are acquired earlier in highly imageable words in Czech children. Parents of 317…

  5. Current image acquisition options in PET/MR.

    PubMed

    Boellaard, Ronald; Quick, Harald H

    2015-05-01

Whole-body PET/MR hybrid imaging combines the excellent soft tissue contrast and various functional imaging parameters provided by MR with the high sensitivity and quantification of radiotracer uptake provided by PET. Although clinical evaluation is now under way, PET/MR demands new technologies and innovative solutions, currently the subject of interdisciplinary research. Attenuation correction (AC) of human soft tissues and of hardware components has to be MR based to maintain the quantification of PET imaging, as CT attenuation information is missing. MR-based AC is inherently associated with the following challenges: patient tissues are segmented into only a few tissue classes, providing discrete attenuation coefficients; bone is substituted as soft tissue in MR-based AC; the limited field of view in MRI leads to truncations in body imaging and, consequently, in MR-based AC; and correct segmentation of lung tissue may be hampered by breathing artifacts. Use of time of flight during PET image acquisition and reconstruction, however, may improve the accuracy of AC. This article provides a status of current image acquisition options in PET/MR hybrid imaging. PMID:25841274

  6. Image acquisition planning for the CHRIS sensor onboard PROBA

    NASA Astrophysics Data System (ADS)

    Fletcher, Peter A.

    2004-10-01

The CHRIS (Compact High Resolution Imaging Spectrometer) instrument was launched onboard the European Space Agency (ESA) PROBA satellite on 22 October 2001. CHRIS can acquire up to 63 bands of hyperspectral data at a ground spatial resolution of 36 m. Alternatively, the instrument can be configured to acquire 18 bands of data with a spatial resolution of 17 m. PROBA, by virtue of its agile pointing capability, enables CHRIS to acquire five different angle images of the selected site. Two sites can be acquired every 24 hours. The hyperspectral and multi-angle capability of CHRIS makes it an important resource for studying BRDF phenomena of vegetation. Other applications include coastal and inland waters, wild fires, education and public relations. An effective data acquisition planning procedure has been implemented and since mid-2002 users have been receiving data for analysis. A cloud prediction routine has been adopted that maximises the image acquisition capacity of CHRIS-PROBA. Image acquisition planning is carried out by RSAC Ltd on behalf of ESA and in co-operation with Sira Technology Ltd and Redu, the ESA ground station in Belgium, responsible for CHRIS-PROBA.

  7. High-accuracy data acquisition architectures for ultrasonic imaging.

    PubMed

    Kalashnikov, Alexander N; Ivchenko, Vladimir G; Challis, Richard E; Hayes-Gill, Barrie R

    2007-08-01

    This paper proposes a novel architecture for a data acquisition system intended to support the next generation of ultrasonic imaging instruments operating at or above 100 MHz. Existing systems have relatively poor signal-to-noise ratios and are limited in terms of their maximum data sampling rate, both of which are improved by a combination of embedded averaging and embedded interleaved sampling. "On-the-fly" pipelined operation minimizes control overheads for signal averaging. A two-clock sampling timing system provides for effective sampling rates that are a factor of 20 or more above the basic sampling rate of the analog-to-digital converter (ADC). The system uses commercial field-programmable gate array devices operated at clock frequencies commensurable with the ADC clock. Implementation is via the Xilinx Xtreme digital signal processing development kit, available at low cost. Sample rates of up to 2160 MHz have been achieved in combination with up to 16384 coherent averages using the above-mentioned off-the-shelf hardware. PMID:17703663
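The two techniques the architecture embeds, coherent averaging and interleaved (equivalent-time) sampling, can be sketched in software: a repetitive waveform is sampled in m passes offset by fractions of the base sampling period, multiplying the effective rate by m, and each pass is averaged n times to suppress noise by √n. All rates and counts below are illustrative, far below the paper's 100 MHz-class hardware:

```python
import numpy as np

def equivalent_time_acquire(signal_fn, t_end, base_dt, m, n_avg, noise_sigma, rng):
    """Equivalent-time sampling of a repetitive waveform with coherent averaging.

    Each of the m interleaved passes samples at the base rate, offset by
    k * base_dt / m; merging the passes gives an effective rate m times the
    ADC rate, and n_avg coherent averages reduce noise by sqrt(n_avg).
    """
    base_t = np.arange(0.0, t_end, base_dt)
    traces = []
    for k in range(m):                       # interleaved passes
        t = base_t + k * base_dt / m
        acc = np.zeros_like(t)
        for _ in range(n_avg):               # coherent (synchronous) averaging
            acc += signal_fn(t) + rng.normal(0.0, noise_sigma, t.size)
        traces.append(acc / n_avg)
    out = np.empty(m * base_t.size)          # merge into one dense record
    for k in range(m):
        out[k::m] = traces[k]
    return out

rng = np.random.default_rng(2)
f = lambda t: np.sin(2 * np.pi * 5.0 * t)
rec = equivalent_time_acquire(f, 1.0, 0.01, m=4, n_avg=100, noise_sigma=0.5, rng=rng)
t_fine = np.arange(rec.size) * (0.01 / 4)
residual = np.sqrt(np.mean((rec - f(t_fine)) ** 2))   # ~ 0.5 / sqrt(100)
```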

  8. Onboard image processing

    NASA Technical Reports Server (NTRS)

    Martin, D. R.; Samulon, A. S.

    1979-01-01

    The possibility of onboard geometric correction of Thematic Mapper type imagery to make possible image registration is considered. Typically, image registration is performed by processing raw image data on the ground. The geometric distortion (e.g., due to variation in spacecraft location and viewing angle) is estimated by using a Kalman filter updated by correlating the received data with a small reference subimage, which has known location. Onboard image processing dictates minimizing the complexity of the distortion estimation while offering the advantages of a real time environment. In keeping with this, the distortion estimation can be replaced by information obtained from the Global Positioning System and from advanced star trackers. Although not as accurate as the conventional ground control point technique, this approach is capable of achieving subpixel registration. Appropriate attitude commands can be used in conjunction with image processing to achieve exact overlap of image frames. The magnitude of the various distortion contributions, the accuracy with which they can be measured in real time, and approaches to onboard correction are investigated.

  9. Impact of image acquisition timing on image quality for dual energy contrast-enhanced breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Hill, Melissa L.; Mainprize, James G.; Puong, Sylvie; Carton, Ann-Katherine; Iordache, Razvan; Muller, Serge; Yaffe, Martin J.

    2012-03-01

Dual-energy contrast-enhanced digital breast tomosynthesis (DE CE-DBT) image quality is affected by a large parameter space including the tomosynthesis acquisition geometry, imaging technique factors, the choice of reconstruction algorithm, and the subject breast characteristics. The influence of most of these factors on reconstructed image quality is well understood for DBT. However, due to the contrast agent uptake kinetics in CE imaging, the subject breast characteristics change over time, presenting a challenge for optimization. In this work we experimentally evaluate the sensitivity of the reconstructed image quality to the timing of the low-energy and high-energy images and to changes in iodine concentration during image acquisition. For four contrast uptake patterns, a variety of acquisition protocols were tested with different timing and geometry. The influence of the choice of reconstruction algorithm (SART or FBP) was also assessed. Image quality was evaluated in terms of the lesion signal-difference-to-noise ratio (LSDNR) in the central slice of DE CE-DBT reconstructions. Results suggest that for maximum image quality, the low- and high-energy image acquisitions should be made within one x-ray tube sweep, as separate low- and high-energy tube sweeps can degrade LSDNR. In terms of LSDNR per square-root dose, the image quality is nearly equal between SART reconstructions with 9 and 15 angular views, but using fewer angular views can result in a significant improvement in the quantitative accuracy of the reconstructions due to the shorter imaging time interval.
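Dual-energy contrast-enhanced imaging generally forms a weighted log subtraction of the low- and high-energy images so that tissue contrast cancels and the iodine signal remains; the specific weights and spectra here are a hypothetical two-material toy model, not the paper's system:

```python
import numpy as np

def dual_energy_subtract(i_low, i_high, w):
    """Weighted log subtraction SDE = L_high - w * L_low, with L = -ln(I).
    Choosing w to null the tissue term leaves a signal proportional to iodine."""
    l_low, l_high = -np.log(i_low), -np.log(i_high)
    return l_high - w * l_low

# Toy model: I = exp(-(mu_tissue * t + mu_iodine * c)) at each energy
mu_t_low, mu_t_high = 0.8, 0.4    # hypothetical tissue attenuation coefficients
mu_i_low, mu_i_high = 2.0, 1.6    # hypothetical iodine attenuation coefficients
w = mu_t_high / mu_t_low          # weight that cancels the tissue term

t = np.array([3.0, 4.0, 5.0])     # varying tissue thickness across three pixels
c = np.array([0.0, 0.2, 0.0])     # iodine present only in the middle pixel
i_low = np.exp(-(mu_t_low * t + mu_i_low * c))
i_high = np.exp(-(mu_t_high * t + mu_i_high * c))

sde = dual_energy_subtract(i_low, i_high, w)   # nonzero only where iodine is
```

The tissue term cancels for every thickness t, so the subtracted signal depends only on the iodine path c, which is why timing (and hence iodine concentration changes) between the two exposures matters.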

  10. Image sets for satellite image processing systems

    NASA Astrophysics Data System (ADS)

    Peterson, Michael R.; Horner, Toby; Temple, Asael

    2011-06-01

The development of novel image processing algorithms requires a diverse and relevant set of training images to ensure the general applicability of such algorithms for their required tasks. Images must be appropriately chosen for the algorithm's intended applications. Image processing algorithms often employ the discrete wavelet transform (DWT) algorithm to provide efficient compression and near-perfect reconstruction of image data. Defense applications often require the transmission of images and video across noisy or low-bandwidth channels. Unfortunately, the DWT algorithm's performance deteriorates in the presence of noise. Evolutionary algorithms are often able to train image filters that outperform DWT filters in noisy environments. Here, we present and evaluate two image sets suitable for the training of such filters for satellite and unmanned aerial vehicle imagery applications. We demonstrate the use of the first image set as a training platform for evolutionary algorithms that optimize DWT-based image transform filters for satellite image compression. We evaluate the suitability of each image as a training image during optimization. Each image is ranked according to its suitability as a training image and its difficulty as a test image. The second image set provides a test-bed for holdout validation of trained image filters. These images are used to independently verify that trained filters will provide strong performance on unseen satellite images. Collectively, these image sets are suitable for the development of image processing algorithms for satellite and reconnaissance imagery applications.
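The DWT compress-then-reconstruct cycle that such training evaluates can be sketched with the simplest wavelet, the Haar filter pair: transform, zero the small detail coefficients, and invert. This is a generic illustration (one level, Haar, hard threshold), not the filters or image sets of the paper:

```python
import numpy as np

def haar2d(x):
    """One level of the 2-D Haar DWT: returns (LL, LH, HL, HH) subbands."""
    a = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)   # rows: lowpass
    d = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)   # rows: highpass
    ll = (a[0::2] + a[1::2]) / np.sqrt(2)
    lh = (a[0::2] - a[1::2]) / np.sqrt(2)
    hl = (d[0::2] + d[1::2]) / np.sqrt(2)
    hh = (d[0::2] - d[1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d (perfect reconstruction without thresholding)."""
    a = np.empty((ll.shape[0] * 2, ll.shape[1]))
    d = np.empty_like(a)
    a[0::2], a[1::2] = (ll + lh) / np.sqrt(2), (ll - lh) / np.sqrt(2)
    d[0::2], d[1::2] = (hl + hh) / np.sqrt(2), (hl - hh) / np.sqrt(2)
    x = np.empty((a.shape[0], a.shape[1] * 2))
    x[:, 0::2], x[:, 1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
    return x

# Compress by zeroing small detail coefficients, then reconstruct
rng = np.random.default_rng(3)
img = np.cumsum(rng.random((32, 32)), axis=1)       # smooth-ish test image
ll, lh, hl, hh = haar2d(img)
thresh = 0.1
lh, hl, hh = [np.where(np.abs(b) < thresh, 0.0, b) for b in (lh, hl, hh)]
recon = ihaar2d(ll, lh, hl, hh)
err = np.abs(recon - img).max()                     # bounded by the threshold
```

Evolving the filter taps (rather than fixing them to Haar or Daubechies values) is the optimization the paper's training images support.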

  11. Smartphone Image Acquisition During Postmortem Monocular Indirect Ophthalmoscopy.

    PubMed

    Lantz, Patrick E; Schoppe, Candace H; Thibault, Kirk L; Porter, William T

    2016-01-01

    The medical usefulness of smartphones continues to evolve as third-party applications exploit and expand on the smartphones' interface and capabilities. This technical report describes smartphone still-image capture techniques and video-sequence recording capabilities during postmortem monocular indirect ophthalmoscopy. Using these devices and techniques, practitioners can create photographic documentation of fundal findings, clinically and at autopsy, without the expense of a retinal camera. Smartphone image acquisition of fundal abnormalities can promote ophthalmological telemedicine--especially in regions or countries with limited resources--and facilitate prompt, accurate, and unbiased documentation of retinal hemorrhages in infants and young children. PMID:26248715

  12. Image Processing for Teaching.

    ERIC Educational Resources Information Center

    Greenberg, R.; And Others

    1993-01-01

    The Image Processing for Teaching project provides a powerful medium to excite students about science and mathematics, especially children from minority groups and others whose needs have not been met by traditional teaching. Using professional-quality software on microcomputers, students explore a variety of scientific data sets, including…

  13. Image-Processing Program

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Hull, D. R.

    1994-01-01

    IMAGEP manipulates digital image data to effect various processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines; within the subroutines are sub-subroutines, also selected via the keyboard. The algorithm has possible scientific, industrial, and biomedical applications in the study of flows in materials, the analysis of steels and ores, and pathology, respectively.

  14. Image processing and reconstruction

    SciTech Connect

    Chartrand, Rick

    2012-06-15

    This talk will examine some mathematical methods for image processing and the solution of underdetermined, linear inverse problems. The talk will have a tutorial flavor, mostly accessible to undergraduates, while still presenting research results. The primary approach is the use of optimization problems. We will find that relaxing the usual assumption of convexity will give us much better results.

  15. Auditory Processing Disorders: Acquisition and Treatment

    ERIC Educational Resources Information Center

    Moore, David R.

    2007-01-01

    Auditory processing disorder (APD) describes a mixed and poorly understood listening problem characterised by poor speech perception, especially in challenging environments. APD may include an inherited component, and this may be major, but studies reviewed here of children with long-term otitis media with effusion (OME) provide strong evidence…

  16. 75 FR 62069 - Federal Acquisition Regulation; Sudan Waiver Process

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-07

    ... Federal Acquisition Regulation; Sudan Waiver Process AGENCIES: Department of Defense (DoD), General... criteria that an agency must address in a waiver request and a waiver consultation process regarding... Operations in Sudan and Imports from Burma, in the Federal Register at 74 FR 40463 on August 11,...

  17. Retinomorphic image processing.

    PubMed

    Ghosh, Kuntal; Bhaumik, Kamales; Sarkar, Sandip

    2008-01-01

    The present work is aimed at understanding and explaining some of the aspects of visual signal processing at the retinal level while exploiting the same towards the development of some simple techniques in the domain of digital image processing. Classical studies on retinal physiology revealed the nature of contrast sensitivity of the receptive field of bipolar or ganglion cells, which lie in the outer and inner plexiform layers of the retina. To explain these observations, a difference of Gaussian (DOG) filter was suggested, which was subsequently modified to a Laplacian of Gaussian (LOG) filter for computational ease in handling two-dimensional retinal inputs. To date, almost all image processing algorithms used in various branches of science and engineering have followed the LOG filter or one of its variants. Recent observations in retinal physiology, however, indicate that the retinal ganglion cells receive input from a larger area than the classical receptive fields. We have proposed an isotropic model for the non-classical receptive field of the retinal ganglion cells, corroborated by these recent observations, by introducing higher-order derivatives of Gaussian expressed as a linear combination of Gaussians only. In digital image processing, this provides a new mechanism of edge detection on one hand and image half-toning on the other. It has also been found that living systems may sometimes prefer to "perceive" the external scenario by adding noise to the received signals in the pre-processing level for arriving at better information on light and shade in the edge map. The proposed model also explains many brightness-contrast illusions hitherto unexplained not only by the classical isotropic model but also by some other Gestalt and Constructivist models or by non-isotropic multi-scale models. The proposed model is easy to implement in both the analog and digital domains. A scheme for implementation in the analog domain generates a new silicon retina
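The classical DOG filter mentioned above is easy to sketch: blur the image with two Gaussians of different widths and subtract, giving a band-pass response that peaks at edges. A NumPy-only illustration (an assumed separable-blur implementation, not the authors' non-classical model):

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 1D Gaussian kernel truncated at ~3 sigma."""
    radius = int(3 * sigma) + 1
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur via 1D convolutions along rows then columns."""
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def dog_edges(img, sigma=1.0, ratio=1.6):
    """Difference-of-Gaussians: a band-pass filter approximating the LOG."""
    return blur(img, sigma) - blur(img, ratio * sigma)

img = np.zeros((32, 32))
img[:, 16:] = 1.0                 # vertical step edge
edges = dog_edges(img)
# response is zero in flat regions and nonzero near the edge column
```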

  18. Image-plane processing of visual information

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.

    1984-01-01

    Shannon's theory of information is used to optimize the optical design of sensor-array imaging systems which use neighborhood image-plane signal processing for enhancing edges and compressing dynamic range during image formation. The resultant edge-enhancement, or band-pass-filter, response is found to be very similar to that of human vision. Comparisons of traits in human vision with results from information theory suggest that: (1) Image-plane processing, like preprocessing in human vision, can improve visual information acquisition for pattern recognition when resolving power, sensitivity, and dynamic range are constrained. Improvements include reduced sensitivity to changes in light levels, reduced signal dynamic range, reduced data transmission and processing, and reduced aliasing and photosensor noise degradation. (2) Information content can be an appropriate figure of merit for optimizing the optical design of imaging systems when visual information is acquired for pattern recognition. The design trade-offs involve spatial response, sensitivity, and sampling interval.

  19. Modeling the target acquisition performance of active imaging systems.

    PubMed

    Espinola, Richard L; Jacobs, Eddie L; Halford, Carl E; Vollmerhausen, Richard; Tofsted, David H

    2007-04-01

    Recent developments of active imaging system technology in the defense and security community have driven the need for a theoretical understanding of its operation and performance in military applications such as target acquisition. In this paper, the modeling of active imaging systems, developed at the U.S. Army RDECOM CERDEC Night Vision & Electronic Sensors Directorate, is presented with particular emphasis on the impact of coherent effects such as speckle and atmospheric scintillation. Experimental results from human perception tests are in good agreement with the model results, validating the modeling of coherent effects as additional noise sources. Example trade studies on the design of a conceptual active imaging system to mitigate deleterious coherent effects are shown. PMID:19532626

  20. Modeling the target acquisition performance of active imaging systems

    NASA Astrophysics Data System (ADS)

    Espinola, Richard L.; Jacobs, Eddie L.; Halford, Carl E.; Vollmerhausen, Richard; Tofsted, David H.

    2007-04-01

    Recent developments of active imaging system technology in the defense and security community have driven the need for a theoretical understanding of its operation and performance in military applications such as target acquisition. In this paper, the modeling of active imaging systems, developed at the U.S. Army RDECOM CERDEC Night Vision & Electronic Sensors Directorate, is presented with particular emphasis on the impact of coherent effects such as speckle and atmospheric scintillation. Experimental results from human perception tests are in good agreement with the model results, validating the modeling of coherent effects as additional noise sources. Example trade studies on the design of a conceptual active imaging system to mitigate deleterious coherent effects are shown.

  1. New developments in electron microscopy for serial image acquisition of neuronal profiles.

    PubMed

    Kubota, Yoshiyuki

    2015-02-01

    Recent developments in electron microscopy largely automate the continuous acquisition of serial electron micrographs (EMGs), previously achieved by laborious manual serial ultrathin sectioning using an ultramicrotome and ultrastructural image capture process with transmission electron microscopy. The new systems cut thin sections and capture serial EMGs automatically, allowing for acquisition of large data sets in a reasonably short time. The new methods are focused ion beam/scanning electron microscopy, ultramicrotome/serial block-face scanning electron microscopy, automated tape-collection ultramicrotome/scanning electron microscopy and transmission electron microscope camera array. In this review, their positive and negative aspects are discussed. PMID:25564566

  2. Image processing technology

    SciTech Connect

    Van Eeckhout, E.; Pope, P.; Balick, L.

    1996-07-01

    This is the final report of a two-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The primary objective of this project was to advance image processing and visualization technologies for environmental characterization. This was effected by developing and implementing analyses of remote sensing data from satellite and airborne platforms, and demonstrating their effectiveness in visualization of environmental problems. Many sources of information were integrated as appropriate using geographic information systems.

  3. Semi-automated Image Processing for Preclinical Bioluminescent Imaging

    PubMed Central

    Slavine, Nikolai V; McColl, Roderick W

    2015-01-01

    Objective Bioluminescent imaging is a valuable noninvasive technique for investigating tumor dynamics and specific biological molecular events in living animals to better understand the effects of human disease in animal models. The purpose of this study was to develop and test a strategy behind automated methods for bioluminescence image processing from the data acquisition to obtaining 3D images. Methods In order to optimize this procedure, a semi-automated image processing approach with a multi-modality image-handling environment was developed. To identify a bioluminescent source location and strength we used the light flux detected on the surface of the imaged object by CCD cameras. For phantom calibration tests and object surface reconstruction we used the MLEM algorithm. For internal bioluminescent sources we used the diffusion approximation, balancing the internal and external intensities on the boundary of the media, to determine an initial approximation for the photon fluence; we subsequently applied a novel iterative deconvolution method to obtain the final reconstruction result. Results We find that the reconstruction techniques successfully used the depth-dependent light transport approach and semi-automated image processing to provide a realistic 3D model of the lung tumor. Our image processing software can optimize and decrease the time of the volumetric imaging and quantitative assessment. Conclusion The data obtained from light phantom and lung mouse tumor images demonstrate the utility of the image reconstruction algorithms and semi-automated approach for the bioluminescent image processing procedure. We suggest that the developed image processing approach can be applied to preclinical imaging studies: characterizing tumor growth, identifying metastases, and potentially determining the effectiveness of cancer treatment. PMID:26618187
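The MLEM algorithm used in the calibration step has a standard multiplicative update, x ← x · (Aᵀ(y / Ax)) / (Aᵀ1), which preserves non-negativity. A toy NumPy sketch under assumed conditions (the 3×3 system matrix and source values are illustrative, not from the study):

```python
import numpy as np

def mlem(A, y, n_iter=2000):
    """Maximum-Likelihood Expectation-Maximization for y = A @ x with x >= 0."""
    x = np.ones(A.shape[1])           # flat, positive initial estimate
    sens = A.sum(axis=0)              # sensitivity term, A^T 1
    for _ in range(n_iter):
        proj = A @ x                  # forward projection
        ratio = np.where(proj > 0, y / proj, 0.0)
        x *= (A.T @ ratio) / sens     # multiplicative update
    return x

A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5],
              [0.5, 0.0, 1.0]])       # illustrative system matrix
x_true = np.array([2.0, 1.0, 3.0])   # illustrative source distribution
y = A @ x_true                       # noiseless measured light flux
x_hat = mlem(A, y)
```

With noiseless, consistent data and a non-negative interior solution, the iterates converge to the exact source distribution.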

  4. Metrics for image-based modeling of target acquisition

    NASA Astrophysics Data System (ADS)

    Fanning, Jonathan D.

    2012-06-01

    This paper presents an image-based system performance model. The image-based system model uses an image metric to compare a given degraded image of a target, as seen through the modeled system, to the set of possible targets in the target set. This is repeated for all possible targets to generate a confusion matrix. The confusion matrix is used to determine the probability of identifying a target from the target set when using a particular system in a particular set of conditions. The image metric used in the image-based model should correspond closely to human performance. The image-based model performance is compared to human perception data on Contrast Threshold Function (CTF) tests, naked eye Triangle Orientation Discrimination (TOD), and TOD including an infrared camera system. Image-based system performance modeling is useful because it allows modeling of arbitrary image processing. Modern camera systems include more complex image processing, much of which is nonlinear. Existing linear system models, such as the TTP metric model implemented in NVESD models such as NV-IPM, assume that the entire system is linear and shift invariant (LSI). The LSI assumption makes it difficult to model nonlinear processes such as local area processing/contrast enhancement (LAP/LACE), turbulence reduction, and image fusion.
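The confusion-matrix procedure described above can be sketched with stand-in components: a hypothetical target set, additive noise standing in for "as seen through the modeled system," and RMS difference standing in for the image metric. Everything here is an illustrative assumption, not an NVESD model internal:

```python
import numpy as np

rng = np.random.default_rng(1)
targets = [rng.random((16, 16)) for _ in range(4)]   # hypothetical target set

def degrade(img, noise=0.3):
    """Stand-in for the modeled system: additive sensor noise."""
    return img + rng.normal(0.0, noise, img.shape)

def identify(degraded, templates):
    """Pick the template with the smallest RMS difference (the image metric)."""
    errs = [np.sqrt(np.mean((degraded - t) ** 2)) for t in templates]
    return int(np.argmin(errs))

n_trials = 200
C = np.zeros((4, 4))                 # confusion matrix, rows = true target
for i, t in enumerate(targets):
    for _ in range(n_trials):
        C[i, identify(degrade(t), targets)] += 1
C /= n_trials
p_id = np.trace(C) / 4               # average probability of identification
```

Replacing the RMS metric with one calibrated against human perception data is exactly the modeling question the paper addresses.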

  5. 360-degree dense multiview image acquisition system using time multiplexing

    NASA Astrophysics Data System (ADS)

    Yendo, Tomohiro; Fujii, Toshiaki; Panahpour Tehrani, Mehrdad; Tanimoto, Masayuki

    2010-02-01

    A novel 360-degree 3D image acquisition system that captures multi-view images with a narrow view interval is proposed. The system consists of a scanning optics system and a high-speed camera. The scanning optics system is composed of a double-parabolic mirror shell and a rotating flat mirror tilted at 45 degrees to the horizontal plane. The mirror shell produces a real image of an object that is placed at the bottom of the shell. The mirror shell is modified from the usual system, used as a 3D illusion toy, so that the real image can be captured from the right horizontal viewing direction. The rotating mirror in the real image reflects the image to the camera-axis direction. The reflected image observed from the camera varies according to the angle of the rotating mirror. This means that the camera can capture the object from various viewing directions that are determined by the angle of the rotating mirror. To acquire the time-varying reflected images, we use a high-speed camera that is synchronized with the angle of the rotating mirror. We have used a high-speed camera whose resolution is 256×256 and whose maximum frame rate at that resolution is 10000 fps. The rotating speed of the tilted flat mirror is about 27 rev./sec. The number of views is 360. The focal length of the parabolic mirrors is 73 mm and the diameter is 360 mm. Objects whose length is less than about 30 mm can be acquired. Captured images are compensated for rotation and for distortion caused by the double-parabolic mirror system, and reproduced as 3D moving images by the Seelinder display.
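The quoted figures are self-consistent: at 10000 fps and roughly 27 revolutions per second, the camera captures about 370 frames per mirror revolution, matching the stated 360 views at an angular step of just under one degree:

```python
frame_rate_fps = 10_000        # high-speed camera, from the abstract
mirror_speed_rps = 27          # rotating-mirror speed, from the abstract

frames_per_rev = frame_rate_fps / mirror_speed_rps   # ~370 frames per revolution
angular_step_deg = 360.0 / frames_per_rev            # just under 1 degree per view
```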

  6. Introduction to computer image processing

    NASA Technical Reports Server (NTRS)

    Moik, J. G.

    1973-01-01

    Theoretical backgrounds and digital techniques for a class of image processing problems are presented. Image formation in the context of linear system theory, image evaluation, noise characteristics, mathematical operations on image and their implementation are discussed. Various techniques for image restoration and image enhancement are presented. Methods for object extraction and the problem of pictorial pattern recognition and classification are discussed.

  7. Face acquisition camera design using the NV-IPM image generation tool

    NASA Astrophysics Data System (ADS)

    Howell, Christopher L.; Choi, Hee-Sue; Reynolds, Joseph P.

    2015-05-01

    In this paper, we demonstrate the utility of the Night Vision Integrated Performance Model (NV-IPM) image generation tool by using it to create a database of face images with controlled degradations. Available face recognition algorithms can then be used to directly evaluate camera designs using these degraded images. By controlling camera effects such as blur, noise, and sampling, we can analyze algorithm performance and establish a more complete performance standard for face acquisition cameras. The ability to accurately simulate imagery and directly test with algorithms not only improves the system design process but greatly reduces development cost.

  8. Future image acquisition trends for PET/MRI.

    PubMed

    Boss, Andreas; Weiger, Markus; Wiesinger, Florian

    2015-05-01

    Hybrid PET/MRI scanners have become commercially available in the past years but are not yet widely distributed. The combination of a state-of-the-art PET with a state-of-the-art MRI scanner provides numerous potential advantages compared with the established PET/CT hybrid systems, namely, increased soft tissue contrast; functional information from MRI such as diffusion, perfusion, and blood oxygenation level-dependent techniques; true multiplanar data acquisition; and reduced radiation exposure. However, current PET/MRI technology is hampered by several shortcomings compared with PET/CT, the most important issues being how to use MR data for PET attenuation correction and the low sensitivity of MRI for small-scale pulmonary pathologies compared with high-resolution CT. Moreover, the optimal choice for hybrid PET/MRI acquisition protocols needs to be defined providing the highest possible degree of sensitivity and specificity within the constraints of the available measurement time. A multitude of new acquisition strategies of PET and MRI not only offer to overcome current obstacles of hybrid PET/MRI but also provide deeper insights into the pathophysiology of oncological, inflammatory, or degenerative diseases from the combination of molecular and functional imaging techniques. PMID:25841275

  9. Reading Acquisition Enhances an Early Visual Process of Contour Integration

    ERIC Educational Resources Information Center

    Szwed, Marcin; Ventura, Paulo; Querido, Luis; Cohen, Laurent; Dehaene, Stanislas

    2012-01-01

    The acquisition of reading has an extensive impact on the developing brain and leads to enhanced abilities in phonological processing and visual letter perception. Could this expertise also extend to early visual abilities outside the reading domain? Here we studied the performance of illiterate, ex-illiterate and literate adults closely matched…

  10. Low Cost Coherent Doppler Lidar Data Acquisition and Processing

    NASA Technical Reports Server (NTRS)

    Barnes, Bruce W.; Koch, Grady J.

    2003-01-01

    The work described in this paper details the development of a low-cost, short-development time data acquisition and processing system for a coherent Doppler lidar. This was done using common laboratory equipment and a small software investment. This system provides near real-time wind profile measurements. Coding flexibility created a very useful test bed for new techniques.

  11. scikit-image: image processing in Python.

    PubMed

    van der Walt, Stéfan; Schönberger, Johannes L; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org. PMID:25024921
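As an example of the kind of algorithm scikit-image packages (it ships this one as skimage.filters.threshold_otsu), here is a self-contained NumPy implementation of Otsu thresholding; the bimodal test image is an illustrative assumption:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu's method: choose the threshold maximizing between-class variance.
    (scikit-image provides this as skimage.filters.threshold_otsu.)"""
    hist, edges = np.histogram(img, bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(hist)                     # pixels at or below each bin
    w1 = w0[-1] - w0                         # pixels above each bin
    m0 = np.cumsum(hist * centers)
    mu0 = m0 / np.maximum(w0, 1)             # mean of the lower class
    mu1 = (m0[-1] - m0) / np.maximum(w1, 1)  # mean of the upper class
    var_between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(var_between)]

# bimodal test image: dark background with a bright square
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
t = otsu_threshold(img)                      # separates the two modes
```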

  12. scikit-image: image processing in Python

    PubMed Central

    Schönberger, Johannes L.; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D.; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org. PMID:25024921

  13. A flexible high-rate USB2 data acquisition system for PET and SPECT imaging

    SciTech Connect

    J. Proffitt, W. Hammond, S. Majewski, V. Popov, R.R. Raylman, A.G. Weisenberger, R. Wojcik

    2006-02-01

    A new flexible data acquisition system has been developed to instrument gamma-ray imaging detectors designed by the Jefferson Lab Detector and Imaging Group. Hardware consists of 16-channel data acquisition modules installed on USB2 carrier boards. Carriers have been designed to accept one, two, and four modules. Application trigger rate and channel density determine the number of acquisition boards and readout computers used. Each channel has an independent trigger, gated integrator and a 2.5 MHz 12-bit ADC. Each module has an FPGA for analog control and signal processing. Processing includes a 5 ns 40-bit trigger time stamp and programmable triggering, gating, ADC timing, offset and gain correction, charge and pulse-width discrimination, sparsification, event counting, and event assembly. The carrier manages global triggering and transfers module data to a USB buffer. High-granularity time-stamped triggering is suitable for modular detectors. Time-stamped events permit dynamic studies, complex offline event assembly, and high-rate distributed data acquisition. A sustained USB data rate of 20 Mbytes/s, a sustained trigger rate of 300 kHz for 32 channels, and a peak trigger rate of 2.5 MHz to FIFO memory were achieved. Different trigger, gating, processing, and event assembly techniques were explored. Target applications include >100 kHz coincidence rate PET detectors, dynamic SPECT detectors, miniature and portable gamma detectors for small-animal and clinical use.
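The quoted throughput figures imply a per-event byte budget; a back-of-the-envelope consistency check (rates taken from the abstract, the interpretation of the per-channel budget is an assumption):

```python
usb_rate_bytes_s = 20e6        # sustained USB2 throughput, from the abstract
trigger_rate_hz = 300e3        # sustained trigger rate for 32 channels

bytes_per_event = usb_rate_bytes_s / trigger_rate_hz   # ~66 bytes per trigger
bytes_per_channel = bytes_per_event / 32               # ~2 bytes per channel,
# i.e. roughly one 12-bit ADC sample padded to 16 bits, with the remainder
# available for the time stamp and event headers
```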

  14. Status of RAISE, the Rapid Acquisition Imaging Spectrograph Experiment

    NASA Astrophysics Data System (ADS)

    Laurent, Glenn T.; Hassler, D. M.; DeForest, C.; Ayres, T. R.; Davis, M.; De Pontieu, B.; Schuehle, U.; Warren, H.

    2013-07-01

    The Rapid Acquisition Imaging Spectrograph Experiment (RAISE) sounding rocket payload is a high speed scanning-slit imaging spectrograph designed to observe the dynamics and heating of the solar chromosphere and corona on time scales as short as 100 ms, with 1 arcsec spatial resolution and a velocity sensitivity of 1-2 km/s. The instrument is based on a new class of UV/EUV imaging spectrometers that use only two reflections to provide quasi-stigmatic performance simultaneously over multiple wavelengths and spatial fields. The design uses an off-axis parabolic telescope mirror to form a real image of the sun on the spectrometer entrance aperture. A slit then selects a portion of the solar image, passing its light onto a near-normal incidence toroidal grating, which re-images the spectrally dispersed radiation onto two array detectors. Two full spectral passbands over the same one-dimensional spatial field are recorded simultaneously with no scanning of the detectors or grating. The two different spectral bands (1st-order 1205-1243Å and 1526-1564Å) are imaged onto two intensified Active Pixel Sensor (APS) detectors whose focal planes are individually adjusted for optimized performance. The telescope and grating are coated with B4C to enhance short wavelength (2nd order) reflectance, enabling the instrument to record the brightest lines between 602-622Å and 761-780Å at the same time. RAISE reads out the full field of both detectors at 5-10 Hz, allowing us to record over 1,500 complete spectral observations in a single 5-minute rocket flight, opening up a new domain of high time resolution spectral imaging and spectroscopy. We present an overview of the project, a summary of the maiden flight results, and an update on instrument status.

  15. RAISE (Rapid Acquisition Imaging Spectrograph Experiment): Results and Instrument Status

    NASA Astrophysics Data System (ADS)

    Laurent, Glenn T.; Hassler, Donald; DeForest, Craig; Ayres, Tom; Davis, Michael; DePontieu, Bart; Diller, Jed; Graham, Roy; Schule, Udo; Warren, Harry

    2015-04-01

    We present initial results from the successful November 2014 launch of the RAISE (Rapid Acquisition Imaging Spectrograph Experiment) sounding rocket program, including intensity maps, high-speed spectroheliograms and dopplergrams, as well as an update on instrument status. The RAISE sounding rocket payload is the fastest high-speed scanning-slit imaging spectrograph flown to date and is designed to observe the dynamics and heating of the solar chromosphere and corona on time scales as short as 100-200ms, with arcsecond spatial resolution and a velocity sensitivity of 1-2 km/s. The instrument is based on a class of UV/EUV imaging spectrometers that use only two reflections to provide quasi-stigmatic performance simultaneously over multiple wavelengths and spatial fields. The design uses an off-axis parabolic telescope mirror to form a real image of the sun on the spectrometer entrance aperture. A slit then selects a portion of the solar image, passing its light onto a near-normal incidence toroidal grating, which re-images the spectrally dispersed radiation onto two array detectors. Two full spectral passbands over the same one-dimensional spatial field are recorded simultaneously with no scanning of the detectors or grating. The two different spectral bands (1st-order 1205-1243Å and 1526-1564Å) are imaged onto two intensified Active Pixel Sensor (APS) detectors whose focal planes are individually adjusted for optimized performance. RAISE reads out the full field of both detectors at 5-10 Hz, allowing us to record over 1,500 complete spectral observations in a single 5-minute rocket flight, opening up a new domain of high time resolution spectral imaging and spectroscopy. RAISE is designed to study small-scale multithermal dynamics in active region (AR) loops, explore the strength, spectrum and location of high frequency waves in the solar atmosphere, and investigate the nature of transient brightenings in the chromospheric network.

  16. Applications of Digital Image Processing XI

    NASA Technical Reports Server (NTRS)

    Cho, Y. -C.

    1988-01-01

    A new technique, digital image velocimetry, is proposed for the measurement of instantaneous velocity fields of time dependent flows. A time sequence of single-exposure images of seed particles is captured with a high-speed camera, and a finite number of the single-exposure images are sampled within a prescribed period in time. The sampled images are then digitized on an image processor, enhanced, and superimposed to construct an image which is equivalent to a multiple exposure image used in both laser speckle velocimetry and particle image velocimetry. The superimposed image and a single-exposure image are digitally Fourier transformed for extraction of information on the velocity field. A great enhancement of the dynamic range of the velocity measurement is accomplished through the new technique by manipulating the Fourier transform of both the single-exposure image and the superimposed image. Also the direction of the velocity vector is unequivocally determined. With the use of a high-speed video camera, the whole process from image acquisition to velocity determination can be carried out electronically; thus this technique can be developed into a real-time capability.
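The Fourier-transform step that extracts displacement information can be illustrated with phase correlation, a standard FFT-based shift estimator between two frames (a NumPy sketch of the general idea, not the paper's exact procedure):

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer-pixel shift of frame a relative to frame b via
    the normalized cross-power spectrum (FFT-based correlation)."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.maximum(np.abs(F), 1e-12)         # keep only the phase
    corr = np.fft.ifft2(F).real               # delta peak at the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map the peak location to a signed shift
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
frame1 = rng.random((64, 64))                     # seed-particle pattern
frame2 = np.roll(frame1, (3, -5), axis=(0, 1))    # simulated displacement
shift = phase_correlation_shift(frame2, frame1)   # recovers (3, -5)
```

Because the correlation peak is signed, the direction of the velocity vector is determined unambiguously, as the abstract notes.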

  17. Image Processing Diagnostics: Emphysema

    NASA Astrophysics Data System (ADS)

    McKenzie, Alex

    2009-10-01

    Currently the computerized tomography (CT) scan can detect emphysema sooner than traditional x-rays, but other tests are required to measure more accurately the amount of affected lung. CT scan images show clearly whether a patient has emphysema, but visual inspection alone cannot quantify the degree of the disease, as it appears merely as subtle, barely distinct dark spots on the lung. Our goal is to create a software plug-in to interface with existing open source medical imaging software, to automate the process of accurately diagnosing and determining emphysema severity levels in patients. This will be accomplished by performing a number of statistical calculations using data taken from CT scan images of several patients representing a wide range of severity of the disease. These analyses include an examination of the deviation from a normal distribution curve to determine skewness, a commonly used statistical parameter. Our preliminary results show that this method of assessment appears to be more accurate and robust than currently utilized methods which involve looking at percentages of radiodensities in air passages of the lung.
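The skewness referred to above is the third standardized moment, which measures asymmetry relative to a normal distribution. A NumPy sketch on synthetic density distributions (the HU-like values are illustrative assumptions, not patient data):

```python
import numpy as np

def skewness(x):
    """Third standardized moment: zero for a symmetric (e.g. normal) sample."""
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std()
    return ((x - m) ** 3).mean() / s ** 3

rng = np.random.default_rng(2)
# hypothetical lung-density samples in HU-like units
healthy = rng.normal(-750, 60, 10_000)                   # roughly symmetric
emphysema = np.concatenate([rng.normal(-750, 60, 8_000),
                            rng.normal(-950, 20, 2_000)])  # extra low-density tail
# the low-density (air-trapping) tail makes the emphysema sample left-skewed
```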

  18. Superimposed fringe projection for three-dimensional shape acquisition by image analysis

    SciTech Connect

    Sasso, Marco; Chiappini, Gianluca; Palmieri, Giacomo; Amodio, Dario

    2009-05-01

    The aim in this work is the development of an image analysis technique for 3D shape acquisition, based on luminous fringe projections. In more detail, the method is based on the simultaneous use of several projectors, which is desirable whenever the surface under inspection has a complex geometry, with undercuts or shadow areas. In these cases, the usual fringe projection technique needs to perform several acquisitions, each time moving the projector or using several projectors alternately. Besides the procedure of fringe projection and phase calculation, an unwrap algorithm has been developed in order to obtain continuous phase maps needed in following calculations for shape extraction. With the technique of simultaneous projections, oriented in such a way to cover all of the surface, it is possible to increase the speed of the acquisition process and avoid the postprocessing problems related to the matching of different point clouds.
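The phase-unwrapping step described above can be illustrated in one dimension with NumPy's built-in np.unwrap, which removes the 2π jumps of a wrapped fringe phase (the linear phase ramp is a synthetic example, not the paper's data):

```python
import numpy as np

# a synthetic continuous phase ramp, as produced by projected fringes
true_phase = np.linspace(0.0, 6 * np.pi, 200)

# what the fringe analysis actually measures: phase wrapped into (-pi, pi]
wrapped = np.angle(np.exp(1j * true_phase))

# restore a continuous phase map suitable for shape extraction
unwrapped = np.unwrap(wrapped)
```

Real fringe images require a 2D unwrapping algorithm, which is why the authors developed their own; the 1D case conveys the principle.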

  19. Computer image processing and recognition

    NASA Technical Reports Server (NTRS)

    Hall, E. L.

    1979-01-01

    A systematic introduction to the concepts and techniques of computer image processing and recognition is presented. Consideration is given to such topics as image formation and perception; computer representation of images; image enhancement and restoration; reconstruction from projections; digital television, encoding, and data compression; scene understanding; scene matching and recognition; and processing techniques for linear systems.

  20. Smart Image Enhancement Process

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J. (Inventor); Rahman, Zia-ur (Inventor); Woodell, Glenn A. (Inventor)

    2012-01-01

    Contrast and lightness measures are used to first classify the image as either non-turbid or turbid. If turbid, the original image is enhanced to generate a first enhanced image. If non-turbid, the original image is classified in terms of a merged contrast/lightness score based on the contrast and lightness measures. The non-turbid image is enhanced to generate a second enhanced image when a poor contrast/lightness score is associated therewith. When the second enhanced image has a poor contrast/lightness score associated therewith, this image is enhanced to generate a third enhanced image. A sharpness measure is computed for one image selected from (i) the non-turbid image, (ii) the first enhanced image, (iii) the second enhanced image when a good contrast/lightness score is associated therewith, and (iv) the third enhanced image. If the selected image is not sharp, it is sharpened to generate a sharpened image. The final image is selected from the selected image and the sharpened image.

  1. The Rapid Acquisition Imaging Spectrograph Experiment (RAISE) Sounding Rocket Investigation

    NASA Astrophysics Data System (ADS)

    Laurent, Glenn T.; Hassler, Donald M.; Deforest, Craig; Slater, David D.; Thomas, Roger J.; Ayres, Thomas; Davis, Michael; de Pontieu, Bart; Diller, Jed; Graham, Roy; Michaelis, Harald; Schuele, Udo; Warren, Harry

    2016-03-01

    We present a summary of the solar observing Rapid Acquisition Imaging Spectrograph Experiment (RAISE) sounding rocket program, including an overview of the design and calibration of the instrument, flight performance, and preliminary chromospheric results from the successful November 2014 launch of the RAISE instrument. The RAISE sounding rocket payload is the fastest scanning-slit solar ultraviolet imaging spectrograph flown to date. RAISE is designed to observe the dynamics and heating of the solar chromosphere and corona on time scales as short as 100-200 ms, with arcsecond spatial resolution and a velocity sensitivity of 1-2 km/s. Two full spectral passbands over the same one-dimensional spatial field are recorded simultaneously with no scanning of the detectors or grating. The two different spectral bands (first-order 1205-1251 Å and 1524-1569 Å) are imaged onto two intensified Active Pixel Sensor (APS) detectors whose focal planes are individually adjusted for optimized performance. RAISE reads out the full field of both detectors at 5-10 Hz, recording up to 1800 complete spectra (per detector) in a single 6-min rocket flight. This opens up a new domain of high-time-resolution spectral imaging and spectroscopy. RAISE is designed to observe small-scale multithermal dynamics in Active Region (AR) and quiet-Sun loops, identify the strength, spectrum and location of high-frequency waves in the solar atmosphere, and determine the nature of energy release in the chromospheric network.

  2. IMAGES: An interactive image processing system

    NASA Technical Reports Server (NTRS)

    Jensen, J. R.

    1981-01-01

    The IMAGES interactive image processing system was created specifically for undergraduate remote sensing education in geography. The system is interactive, relatively inexpensive to operate, almost hardware independent, and responsive to numerous users at one time in a time-sharing mode. Most important, it provides a medium whereby theoretical remote sensing principles discussed in lecture may be reinforced in laboratory as students perform computer-assisted image processing. In addition to its use in academic and short course environments, the system has also been used extensively to conduct basic image processing research. The flow of information through the system is discussed including an overview of the programs.

  3. Efficient image acquisition design for a cancer detection system

    NASA Astrophysics Data System (ADS)

    Nguyen, Dung; Roehrig, Hans; Borders, Marisa H.; Fitzpatrick, Kimberly A.; Roveda, Janet

    2013-09-01

    Modern imaging modalities, such as Computed Tomography (CT), Digital Breast Tomosynthesis (DBT) or Magnetic Resonance Tomography (MRT), are able to acquire volumetric images with an isotropic resolution in the micrometer (um) or millimeter (mm) range. When used in interactive telemedicine applications, these raw images require huge storage, thereby necessitating a high-bandwidth data communication link. To reduce the cost of transmission and enable archiving, especially for medical applications, image compression is performed. Recent advances in compression algorithms have resulted in a vast array of data compression techniques, but because of the characteristics of these images, there are challenges to overcome in transmitting them efficiently. In addition, recent studies raise concerns about the risk of low-dose mammography for high-risk patients. Our preliminary studies indicate that performing compression before the analog-to-digital conversion (ADC) stage is more efficient than applying compression techniques after the ADC. The linearity of compressed sensing and the ability to perform digital signal processing (DSP) during data conversion open up a new area of research regarding the roles of sparsity in medical image registration; medical image analysis (for example, automatic image processing algorithms to efficiently extract the relevant information for the clinician); further X-ray dose reduction for mammography; and contrast enhancement.
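The idea of compressing before the ADC can be illustrated with the standard compressed-sensing measurement model: a sparse signal is observed through a few random linear projections, so only m << n values need to be digitized. A sketch under assumed sizes and a Gaussian measurement matrix (not the authors' hardware design):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 64                                  # signal length, measurements
x = np.zeros(n)
x[[10, 50, 200]] = [1.0, -2.0, 0.5]             # sparse scan line
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
y = Phi @ x                                     # m << n compressed samples
```

Because the measurement is linear (doubling x doubles y), it can in principle be folded into the analogue front end before quantization, which is the linearity property the abstract highlights; sparse recovery of x from y is a separate step, omitted here.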

  4. A Pipeline Tool for CCD Image Processing

    NASA Astrophysics Data System (ADS)

    Bell, Jon F.; Young, Peter J.; Roberts, William H.; Sebo, Kim M.

    MSSSO is part of a collaboration developing a wide field imaging CCD mosaic (WFI). As part of this project, we have developed a GUI based pipeline tool that is an integrated part of MSSSO's CICADA data acquisition environment and processes CCD FITS images as they are acquired. The tool is also designed to run as a stand alone program to process previously acquired data. IRAF tasks are used as the central engine, including the new NOAO mscred package for processing multi-extension FITS files. The STScI OPUS pipeline environment may be used to manage data and process scheduling. The Motif GUI was developed using SUN Visual Workshop. C++ classes were written to facilitate launching of IRAF and OPUS tasks. While this first version implements calibration processing up to and including flat field corrections, there is scope to extend it to other processing.

  5. Democratizing an electroluminescence imaging apparatus and analytics project for widespread data acquisition in photovoltaic materials.

    PubMed

    Fada, Justin S; Wheeler, Nicholas R; Zabiyaka, Davis; Goel, Nikhil; Peshek, Timothy J; French, Roger H

    2016-08-01

    We present a description of an electroluminescence (EL) apparatus, easily sourced from commercially available components, with a quantitative image processing platform that demonstrates feasibility for the widespread utility of EL imaging as a characterization tool. We validated our system using a Gage R&R analysis to find a variance contribution by the measurement system of 80.56%, which is typically unacceptable, but through quantitative image processing and development of correction factors a variance contribution by the measurement system of 2.41% was obtained. We further validated the system by quantifying the signal-to-noise ratio (SNR) and found values consistent with other systems published in the literature, at SNR values of 10-100, albeit at exposure times of greater than 1 s compared to 10 ms for other systems. This SNR value range is acceptable for image feature recognition, providing the opportunity for widespread data acquisition and large scale data analytics of photovoltaics. PMID:27587162

  6. Democratizing an electroluminescence imaging apparatus and analytics project for widespread data acquisition in photovoltaic materials

    NASA Astrophysics Data System (ADS)

    Fada, Justin S.; Wheeler, Nicholas R.; Zabiyaka, Davis; Goel, Nikhil; Peshek, Timothy J.; French, Roger H.

    2016-08-01

    We present a description of an electroluminescence (EL) apparatus, easily sourced from commercially available components, with a quantitative image processing platform that demonstrates feasibility for the widespread utility of EL imaging as a characterization tool. We validated our system using a Gage R&R analysis to find a variance contribution by the measurement system of 80.56%, which is typically unacceptable, but through quantitative image processing and development of correction factors a variance contribution by the measurement system of 2.41% was obtained. We further validated the system by quantifying the signal-to-noise ratio (SNR) and found values consistent with other systems published in the literature, at SNR values of 10-100, albeit at exposure times of greater than 1 s compared to 10 ms for other systems. This SNR value range is acceptable for image feature recognition, providing the opportunity for widespread data acquisition and large scale data analytics of photovoltaics.

  7. Payload Configurations for Efficient Image Acquisition - Indian Perspective

    NASA Astrophysics Data System (ADS)

    Samudraiah, D. R. M.; Saxena, M.; Paul, S.; Narayanababu, P.; Kuriakose, S.; Kiran Kumar, A. S.

    2014-11-01

    The world is increasingly dependent on remotely sensed data. The data are regularly used for monitoring Earth resources and for addressing global problems such as disasters, climate degradation, etc. Remotely sensed data have changed our perspective on and understanding of other planets. With innovative approaches to data utilization, the demand for remote sensing data is ever increasing, and more and more research and development is taken up for data utilization. Satellite resources are scarce, and each launch costs heavily; each launch also involves a large hardware development effort before launch and a large number of software elements and mathematical algorithms after launch. The proliferation of low-Earth and geostationary satellites has led to increased scarcity of available orbital slots for newer satellites. The Indian Space Research Organisation has always tried to maximize the utility of its satellites: multiple sensors are flown on each satellite, and on each satellite the sensors are designed to cater to various spectral bands/frequencies and spatial and temporal resolutions. Bhaskara-1, the first experimental satellite, started with 2 bands in the electro-optical spectrum and 3 bands in the microwave spectrum. The recent Resourcesat-2 incorporates a very efficient image acquisition approach with multi-resolution (3 types of spatial resolution), multi-band (4 spectral bands) electro-optical sensors (LISS-4, LISS-3* and AWiFS). The system has been designed to provide data globally through various data reception stations and onboard data storage capabilities. The Oceansat-2 satellite has a unique sensor combination: an 8-band, highly sensitive electro-optical ocean colour monitor (catering to ocean and land) along with a Ku-band scatterometer to acquire information on ocean winds. INSAT-3D, launched recently, provides high-resolution 6-band image data in the visible, short-wave, mid-wave and long-wave infrared spectrum. It also has 19 band

  8. Understanding the knowledge acquisition process about Earth and Space concepts

    NASA Astrophysics Data System (ADS)

    Frappart, Soren

    There exist two main theoretical views concerning the knowledge acquisition process in science, and these views are still debated in the literature. On the one hand, knowledge is considered to be organized into coherent wholes (mental models). On the other hand, knowledge is described as fragmented sets with no links between the fragments. Mental models have predictive and explanatory power and are constrained by universal presuppositions. They follow a universal, gradual development in three steps, from initial to synthetic to scientific models. The fragments, on the contrary, are not organized, and development is seen as a situated process in which cultural transmission plays a fundamental role. After a presentation of those two theoretical positions, we illustrate them with examples of studies related to the Earth's shape and gravity performed in different cultural contexts, in order to highlight both the differences and the invariant cultural elements. We show how important these issues are to take into account and to question for space concepts such as gravity, orbits, and weightlessness. Indeed, capturing the processes of acquisition and development of knowledge concerning specific space concepts can give us important information for developing relevant and adapted strategies for instruction. If the process of knowledge acquisition for space concepts is fragmented, then we have to think about how we could identify those fragments and help the learner build links between them. If the knowledge is organized into coherent mental models, we have to think about how to destabilize a non-relevant model and prevent the development of initial and synthetic models. Moreover, the question of what is universal versus what is culture-dependent in this acquisition process needs to be explored. We also present some main misconceptions that appeared about space concepts.
Indeed, in addition to the previous theoretical considerations, the collection and awareness of

  9. FPGA Based Data Acquisition and Processing for Gamma Ray Tomography

    NASA Astrophysics Data System (ADS)

    Schlaberg, H. Inaki; Li, Donghui; Wu, Yingxiang; Wang, Mi

    2007-06-01

    Data acquisition and processing for gamma ray tomography has traditionally been performed with analogue electronic circuitry. Detectors convert the received photons into electrical signals, which are then shaped and conditioned for the next counting stage. An approach using an FPGA (field-programmable gate array) based data acquisition and processing system for gamma ray tomography is presented in this paper. With recently introduced low-cost, high-speed analogue-to-digital converters and digital signal processors, the electrical output of the detectors can be converted into the digital domain with only simple analogue signal conditioning. This step can significantly reduce the number of components and the size of the instrument, as much of the analogue processing circuitry is eliminated. To count the number of incident photons from the converted electrical signal, a peak detection algorithm can be developed for the DSP (Digital Signal Processor). However, the relatively high sample rate, and the consequently low number of processor cycles available to process each sample, make it more effective to implement the peak detection algorithm on the FPGA. This paper presents the development of the acquisition system hardware and simulation results of the peak detection with previously recorded experimental data from a flow loop.
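The counting logic is simple enough to prototype in software before committing it to FPGA fabric. A hypothetical sketch of threshold-plus-local-maximum peak detection (the function name and threshold handling are assumptions, not the paper's algorithm):

```python
import numpy as np

def count_peaks(samples, threshold):
    # Count one pulse per local maximum that rises above the
    # discrimination threshold; the >=/> asymmetry avoids double
    # counting flat-topped pulses.
    s = np.asarray(samples, dtype=float)
    peaks = 0
    for i in range(1, len(s) - 1):
        if s[i] > threshold and s[i] >= s[i - 1] and s[i] > s[i + 1]:
            peaks += 1
    return peaks
```

On hardware, the same comparison chain maps naturally onto a short pipeline of registers, one sample per clock.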

  10. Towards Quantification of Functional Breast Images Using Dedicated SPECT With Non-Traditional Acquisition Trajectories

    PubMed Central

    Perez, Kristy L.; Cutler, Spencer J.; Madhav, Priti; Tornai, Martin P.

    2012-01-01

    Quantification of radiotracer uptake in breast lesions can provide valuable information to physicians in deciding patient care or determining treatment efficacy. Physical processes (e.g., scatter, attenuation), detector/collimator characteristics, sampling and acquisition trajectories, and reconstruction artifacts contribute to an incorrect measurement of absolute tracer activity and distribution. For these experiments, a cylinder with three syringes of varying radioactivity concentration, and a fillable 800 mL breast phantom with two lesion phantoms containing aqueous 99mTc pertechnetate, were imaged using the SPECT sub-system of the dual-modality SPECT-CT dedicated breast scanner. SPECT images were collected using a compact CZT camera with various 3D acquisitions, including vertical axis of rotation, 30° tilted, and complex sinusoidal trajectories. Different energy windows around the photopeak were quantitatively compared, along with appropriate scatter energy windows, to determine the best quantification accuracy after attenuation and dual-window scatter correction. Measured activity concentrations in the reconstructed images for syringes with greater than 10 µCi/mL corresponded to within 10% of the dose-calibrator-measured activity concentration for ±4% and ±8% photopeak energy windows. The same energy windows yielded lesion quantification results within 10% in the breast phantom as well. Results for the more complete complex sinusoidal trajectory are similar to those for the simple vertical axis acquisition, while additionally allowing anterior chest wall sampling, with no image distortion and reasonably accurate quantification. PMID:22262925
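The dual-window scatter correction mentioned above is conventionally a scaled subtraction of scatter-window counts from photopeak-window counts. A minimal sketch (variable names and the calibration factor k are assumptions, not the authors' implementation):

```python
def dual_window_scatter_correct(photopeak, scatter, w_photo, w_scatter, k=1.0):
    # Estimate the scatter contribution inside the photopeak window
    # from counts in an adjacent scatter window, scaled by the
    # window-width ratio, and subtract it per projection pixel.
    return photopeak - k * (w_photo / w_scatter) * scatter
```

In practice the corrected projections are what enter reconstruction, after which the attenuation correction is applied.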

  11. Towards Quantification of Functional Breast Images Using Dedicated SPECT With Non-Traditional Acquisition Trajectories.

    PubMed

    Perez, Kristy L; Cutler, Spencer J; Madhav, Priti; Tornai, Martin P

    2011-10-01

    Quantification of radiotracer uptake in breast lesions can provide valuable information to physicians in deciding patient care or determining treatment efficacy. Physical processes (e.g., scatter, attenuation), detector/collimator characteristics, sampling and acquisition trajectories, and reconstruction artifacts contribute to an incorrect measurement of absolute tracer activity and distribution. For these experiments, a cylinder with three syringes of varying radioactivity concentration, and a fillable 800 mL breast phantom with two lesion phantoms containing aqueous (99m)Tc pertechnetate, were imaged using the SPECT sub-system of the dual-modality SPECT-CT dedicated breast scanner. SPECT images were collected using a compact CZT camera with various 3D acquisitions, including vertical axis of rotation, 30° tilted, and complex sinusoidal trajectories. Different energy windows around the photopeak were quantitatively compared, along with appropriate scatter energy windows, to determine the best quantification accuracy after attenuation and dual-window scatter correction. Measured activity concentrations in the reconstructed images for syringes with greater than 10 µCi/mL corresponded to within 10% of the dose-calibrator-measured activity concentration for ±4% and ±8% photopeak energy windows. The same energy windows yielded lesion quantification results within 10% in the breast phantom as well. Results for the more complete complex sinusoidal trajectory are similar to those for the simple vertical axis acquisition, while additionally allowing anterior chest wall sampling, with no image distortion and reasonably accurate quantification. PMID:22262925

  12. Processing Visual Images

    SciTech Connect

    Litke, Alan

    2006-03-27

    The back of the eye is lined by an extraordinary biological pixel detector, the retina. This neural network is able to extract vital information about the external visual world, and transmit this information in a timely manner to the brain. In this talk, Professor Litke will describe a system that has been implemented to study how the retina processes and encodes dynamic visual images. Based on techniques and expertise acquired in the development of silicon microstrip detectors for high energy physics experiments, this system can simultaneously record the extracellular electrical activity of hundreds of retinal output neurons. After presenting first results obtained with this system, Professor Litke will describe additional applications of this incredible technology.

  13. ASPIC: STARLINK image processing package

    NASA Astrophysics Data System (ADS)

    Davenhall, A. C.; Hartley, Ken F.; Penny, Alan J.; Kelly, B. D.; King, Dave J.; Lupton, W. F.; Tudhope, D.; Pike, C. D.; Cooke, J. A.; Pence, W. D.; Wallace, Patrick T.; Brownrigg, D. R. K.; Baines, Dave W. T.; Warren-Smith, Rodney F.; McNally, B. V.; Bell, L. L.; Jones, T. A.; Terrett, Dave L.; Pearce, D. J.; Carey, J. V.; Currie, Malcolm J.; Benn, Chris; Beard, S. M.; Giddings, Jack R.; Balona, Luis A.; Harrison, B.; Wood, Roger; Sparkes, Bill; Allan, Peter M.; Berry, David S.; Shirt, J. V.

    2015-10-01

    ASPIC handled basic astronomical image processing. Early releases concentrated on image arithmetic, standard filters, expansion/contraction/selection/combination of images, and displaying and manipulating images on the ARGS and other devices. Later releases added new astronomy-specific applications to this sound framework. The ASPIC collection of about 400 image-processing programs was written using the Starlink "interim" environment in the 1980s; the software is now obsolete.

  14. Multi-Channel Data Acquisition System for Nuclear Pulse Processing

    SciTech Connect

    Myjak, Mitchell J.; Ma, Ding; Robinson, Dirk J.; La Rue, George S.

    2009-11-13

    We have developed a compact, inexpensive electronics package that can digitize pulse-mode or current-mode data from 32 detector outputs in parallel. The electronics package consists of two circuit boards: a custom acquisition board and an off-the-shelf processing board. The acquisition board features a custom-designed integrated circuit that contains an array of charge-to-pulse-width converters. The processing board contains a field programmable gate array that digitizes the pulse widths, performs event discrimination, constructs energy histograms, and executes any user-defined software. Together, the two boards cost around $1000. The module can transfer data to a computer or operate entirely as a standalone system. The design achieves 0.20% nonlinearity and 0.18% FWHM precision at full scale. However, the overall performance could be improved with some modifications to the integrated circuit.

  15. Implementation of a laser beam analyzer using the image acquisition card IMAQ (NI)

    NASA Astrophysics Data System (ADS)

    Rojas-Laguna, R.; Avila-Garcia, M. S.; Alvarado-Mendez, Edgar; Andrade-Lucio, Jose A.; Obarra-Manzano, O. G.; Torres-Cisneros, Miguel; Castro-Sanchez, R.; Estudillo-Ayala, J. M.; Ibarra-Escamilla, Baldeamr

    2001-08-01

    In this work we describe the implementation of a laser beam analyzer. The software was designed on the LabVIEW platform using the IMAQ image acquisition card from National Instruments. The objective is to develop a graphical interface that includes image processing tools such as brightness and contrast enhancement, morphological operations, and quantification of dimensions. One application of this graphical interface is as a medium-cost laser beam analyzer that is versatile, precise, and easily reconfigurable under this programming environment.

  16. Acquisition and Processing of Multi-source Technique Offshore with Different Types of Source

    NASA Astrophysics Data System (ADS)

    Li, L.; Tong, S.; Zhou, H. W.

    2015-12-01

    Multi-source blended offshore seismic acquisition has been developed in recent years. The technology aims to improve the efficiency of acquisition or to enhance image quality through dense spatial sampling. Previous methods usually use several sources of the same type; we propose applying sources with different central frequencies to image multiscale target layers at different depths. A low-frequency seismic source can image deep structure but has low resolution at shallow depths, which can be compensated by a high-frequency source. By combining the low- and high-frequency images, we obtain high-resolution profiles at both shallow and deep levels. Accordingly, we carried out a 2-D cruise using spark sources with 300 Hz and 2000 Hz central frequencies, fired randomly with a certain delay time. In this process we separate the blended data by denoising methods, including median filtering and the curvelet transform, and then match the prestack data to obtain final profiles. The median filter can suppress impulse noise while protecting edges, and the curvelet transform has multiscale characteristics and a powerful sparse expression ability; iterative noise elimination produces good results. Prestack matched filtering is important when integrating the wavelets of the two different spark sources because of their different characteristics, making the data consistent in reflection time, amplitude, frequency, and phase. Compared with profiles produced using either single type of source, the blended-acquisition image shows higher resolution at shallow depths and yields more information at deep locations.
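The median filtering used in the blended-data separation can be sketched in one dimension: a sliding-window median removes isolated spikes (such as crosstalk from the other source) while leaving edges intact. A minimal illustration, with the window width assumed:

```python
import numpy as np

def median_filter_1d(trace, width=5):
    # Sliding-window median with edge padding: isolated impulses
    # narrower than half the window are replaced by neighbours.
    half = width // 2
    padded = np.pad(np.asarray(trace, dtype=float), half, mode='edge')
    return np.array([np.median(padded[i:i + width])
                     for i in range(len(trace))])
```

In the actual workflow this would run trace by trace in a domain where the interfering source appears incoherent, which is what makes it look like impulse noise.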

  17. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images

    PubMed Central

    Afshar, Yaser; Sbalzarini, Ivo F.

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rates. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit in the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers collectively solve the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments. PMID:27046144

  18. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images.

    PubMed

    Afshar, Yaser; Sbalzarini, Ivo F

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rates. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit in the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers collectively solve the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments. PMID:27046144

  19. A general algorithm for magnetic resonance imaging simulation: a versatile tool to collect information about imaging artefacts and new acquisition techniques.

    PubMed

    Placidi, Giuseppe; Alecci, Marcello; Sotgiu, Antonello

    2002-01-01

    An innovative algorithm for Magnetic Resonance Imaging (MRI) capable of demonstrating the source of various artefacts and driving the hardware and software acquisition process is presented. The algorithm is based on the application of the Bloch equations to the magnetization vector of each point of the simulated object, as requested by the instructions of the MRI pulse sequence. The collected raw data are then used to reconstruct the image of the object. The general structure of the algorithm makes it possible to simulate a great range of imaging situations in order to explain the nature of unwanted artefacts and to study new acquisition techniques. The way the algorithm structures the sequence has also allowed the easy implementation of MRI data acquisition on a commercial general-purpose DSP-based data acquisition board, thus facilitating the comparison between simulated and experimental results. PMID:15460653
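The per-voxel update at the heart of such a simulator follows the Bloch equations: transverse precession at the local off-resonance frequency plus T1/T2 relaxation. A minimal single-step sketch (rotating frame, no RF; names and the discretization are assumptions, not the authors' code):

```python
import numpy as np

def bloch_step(M, dt, T1, T2, omega):
    # One time step for the magnetization vector M = (Mx, My, Mz):
    # rotate the transverse component about z by omega*dt, then apply
    # T2 decay to (Mx, My) and T1 recovery toward equilibrium Mz = 1.
    Mx, My, Mz = M
    c, s = np.cos(omega * dt), np.sin(omega * dt)
    Mx, My = Mx * c + My * s, -Mx * s + My * c   # precession about z
    E1, E2 = np.exp(-dt / T1), np.exp(-dt / T2)  # relaxation factors
    return np.array([Mx * E2, My * E2, Mz * E1 + (1.0 - E1)])
```

Iterating this update per voxel while a simulated pulse sequence switches gradients (i.e., omega) is what generates the raw k-space data the abstract describes.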

  20. FORTRAN Algorithm for Image Processing

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Hull, David R.

    1987-01-01

    FORTRAN computer algorithm containing various image-processing analysis and enhancement functions developed. Algorithm developed specifically to process images of developmental heat-engine materials obtained with sophisticated nondestructive evaluation instruments. Applications of program include scientific, industrial, and biomedical imaging for studies of flaws in materials, analyses of steel and ores, and pathology.

  1. TH-E-17A-07: Improved Cine Four-Dimensional Computed Tomography (4D CT) Acquisition and Processing Method

    SciTech Connect

    Castillo, S; Castillo, R; Castillo, E; Pan, T; Ibbott, G; Balter, P; Hobbs, B; Dai, J; Guerrero, T

    2014-06-15

    Purpose: Artifacts arising from the 4D CT acquisition and post-processing methods add systematic uncertainty to the treatment planning process. We propose an alternate cine 4D CT acquisition and post-processing method to consistently reduce artifacts, and explore patient parameters indicative of image quality. Methods: In an IRB-approved protocol, 18 patients with primary thoracic malignancies received a standard cine 4D CT acquisition followed by an oversampling 4D CT that doubled the number of images acquired. A second cohort of 10 patients received the clinical 4D CT plus 3 oversampling scans for intra-fraction reproducibility. The clinical acquisitions were processed by the standard phase sorting method. The oversampling acquisitions were processed using Dijkstra's algorithm to optimize an artifact metric over the available image data. Image quality was evaluated with a one-way mixed ANOVA model using a correlation-based artifact metric calculated from the final 4D CT image sets. Spearman correlations and a linear mixed model tested the association between breathing parameters, patient characteristics, and image quality. Results: The oversampling 4D CT scans reduced artifact presence significantly, by 27% and 28% for the first and second cohorts respectively. From cohort 2, the inter-replicate deviation for the oversampling method was within approximately 13% of the cross-scan average at the 0.05 significance level. Artifact presence for both clinical and oversampling methods was significantly correlated with breathing period (ρ=0.407, p-value<0.032 clinical; ρ=0.296, p-value<0.041 oversampling). Artifact presence in the oversampling method was significantly correlated with the amount of data acquired (ρ=-0.335, p-value<0.02), indicating decreased artifact presence with increased breathing cycles per scan location. Conclusion: The 4D CT oversampling acquisition with optimized sorting reduced artifact presence significantly and reproducibly compared to the phase

  2. Feedback regulation of microscopes by image processing.

    PubMed

    Tsukada, Yuki; Hashimoto, Koichi

    2013-05-01

    Computational microscope systems are becoming a major part of imaging biological phenomena, and the development of such systems requires the design of automated regulation of microscopes. An important aspect of automated regulation is feedback regulation, which is the focus of this review. As modern microscope systems become more complex, often with many independent components that must work together, computer control is inevitable, since the exact orchestration of parameters and timings for these multiple components is critical to acquiring proper images. A number of techniques have been developed for biological imaging to accomplish this. Here, we summarize the basics of computational microscopy for the purpose of building automatically regulated microscopes, focusing on feedback regulation by image processing. These techniques allow high-throughput data acquisition while monitoring both short- and long-term dynamic phenomena, which cannot be achieved without an automated system. PMID:23594233

  3. The APL image processing laboratory

    NASA Technical Reports Server (NTRS)

    Jenkins, J. O.; Randolph, J. P.; Tilley, D. G.; Waters, C. A.

    1984-01-01

The present and proposed capabilities of the Central Image Processing Laboratory, which provides a powerful resource for the advancement of programs in missile technology, space science, oceanography, and biomedical image analysis, are discussed. The use of image digitizing, digital image processing, and digital image output permits a variety of functional capabilities, including: enhancement, pseudocolor, convolution, computer output microfilm, presentation graphics, animations, transforms, geometric corrections, and feature extractions. The hardware and software of the Image Processing Laboratory, consisting of digitizing and processing equipment, software packages, and display equipment, are described. Attention is given to applications for imaging systems, map geometric correction, raster movie display of Seasat ocean data, Seasat and Skylab scenes of Nantucket Island, Space Shuttle imaging radar, differential radiography, and a computerized tomographic scan of the brain.

  4. Multiscale Image Processing of Solar Image Data

    NASA Astrophysics Data System (ADS)

    Young, C.; Myers, D. C.

    2001-12-01

It is often said that the blessing and curse of solar physics is too much data. Solar missions such as Yohkoh, SOHO and TRACE have shown us the Sun with amazing clarity but have also increased the amount of highly complex data. We have improved our view of the Sun yet we have not improved our analysis techniques. The standard techniques used for analysis of solar images generally consist of observing the evolution of features in a sequence of byte scaled images or a sequence of byte scaled difference images. The determination of features and structures in the images is done qualitatively by the observer. There is little quantitative and objective analysis done with these images. Many advances in image processing techniques have occurred in the past decade. Many of these methods are possibly suited for solar image analysis. Multiscale/Multiresolution methods are perhaps the most promising. These methods have been used to formulate the human ability to view and comprehend phenomena on different scales. So these techniques could be used to quantify the image processing done by observers' eyes and brains. In this work we present several applications of multiscale techniques applied to solar image data. Specifically, we discuss uses of the wavelet, curvelet, and related transforms to define a multiresolution support for EIT, LASCO and TRACE images.
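A single analysis level of the Haar transform is the simplest instance of the multiresolution decompositions discussed; the sketch below is illustrative only, since the authors use wavelet and curvelet transforms whose exact filters are not given in the abstract:

```python
def haar_step(v):
    """Single Haar analysis step on a 1-D sequence of even length:
    returns (averages, differences)."""
    avg = [(v[i] + v[i + 1]) / 2 for i in range(0, len(v), 2)]
    dif = [(v[i] - v[i + 1]) / 2 for i in range(0, len(v), 2)]
    return avg, dif

def haar2d(img):
    """One level of the separable 2-D Haar transform on an even-sized
    image (list of lists); returns the four subbands (LL, LH, HL, HH).
    Detail subbands concentrate edges, which is how a multiresolution
    support separates structure from smooth background."""
    # Transform every row into (average, difference) halves.
    row_a, row_d = zip(*(haar_step(r) for r in img))
    # Then transform the columns of each half.
    def col_split(block):
        a, d = zip(*(haar_step(list(c)) for c in zip(*block)))
        return ([list(r) for r in zip(*a)],
                [list(r) for r in zip(*d)])
    LL, LH = col_split(row_a)
    HL, HH = col_split(row_d)
    return LL, LH, HL, HH
```

On a constant image the approximation band carries the signal and all detail bands are zero, as expected.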

  5. Target-acquisition performance in undersampled infrared imagers: static imagery to motion video.

    PubMed

    Krapels, Keith; Driggers, Ronald G; Teaney, Brian

    2005-11-20

    In this research we show that the target-acquisition performance of an undersampled imager improves with sensor or target motion. We provide an experiment designed to evaluate the improvement in observer performance as a function of target motion rate in the video. We created the target motion by mounting a thermal imager on a precision two-axis gimbal and varying the sensor motion rate from 0.25 to 1 instantaneous field of view per frame. A midwave thermal imager was used to permit short integration times and remove the effects of motion blur. It is shown that the human visual system performs a superresolution reconstruction that mitigates some aliasing and provides a higher (than static imagery) effective resolution. This process appears to be relatively independent of motion velocity. The results suggest that the benefits of superresolution reconstruction techniques as applied to imaging systems with motion may be limited. PMID:16318174
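The super-resolution reconstruction attributed here to the visual system is commonly modeled as shift-and-add. A 1-D sketch under the assumption of known integer sub-pixel shifts; note the paper itself reports a perception experiment, not this algorithm:

```python
def shift_and_add(frames, shifts, factor):
    """Interleave undersampled 1-D frames taken at known sub-pixel
    shifts onto a grid `factor` times finer -- the textbook
    shift-and-add model of super-resolution from moving video.
    `shifts[k]` is frame k's offset in fine-grid samples (< factor)."""
    n = len(frames[0]) * factor
    total = [0.0] * n
    count = [0] * n
    for frame, s in zip(frames, shifts):
        for i, v in enumerate(frame):
            total[i * factor + s] += v
            count[i * factor + s] += 1
    # Average where samples landed; gaps stay zero in this sketch.
    return [t / c if c else 0.0 for t, c in zip(total, count)]
```

Two half-rate frames offset by one fine sample recover the full-rate signal exactly in this idealized, noise-free setting.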

  6. 3D Image Acquisition System Based on Shape from Focus Technique

    PubMed Central

    Billiot, Bastien; Cointault, Frédéric; Journaux, Ludovic; Simon, Jean-Claude; Gouton, Pierre

    2013-01-01

This paper describes the design of a 3D image acquisition system dedicated to natural complex scenes composed of randomly distributed objects with spatial discontinuities. In agronomic sciences, the 3D acquisition of natural scenes is difficult due to their complex nature. Our system is based on the Shape from Focus technique initially used in the microscopic domain. We propose to adapt this technique to the macroscopic domain, and we detail the system as well as the image processing used to perform it. The Shape from Focus technique is a monocular and passive 3D acquisition method that resolves the occlusion problem affecting multi-camera systems. Indeed, this problem occurs frequently in natural complex scenes like agronomic scenes. The depth information is obtained by acting on optical parameters, mainly the depth of field. A focus measure is applied to a 2D image stack previously acquired by the system. Once this focus measure is computed, the depth map of the scene can be created. PMID:23591964
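A minimal sketch of the Shape-from-Focus pipeline described: a per-pixel focus measure applied to a 2-D image stack, with the index of the peak taken as the discrete depth estimate. The modified-Laplacian-style measure used here is a common choice in the Shape-from-Focus literature, not necessarily the authors' measure:

```python
def laplacian_energy(img, x, y):
    """Focus measure at one interior pixel: sum of absolute second
    differences in x and y (a modified-Laplacian-style criterion)."""
    return (abs(2 * img[y][x] - img[y][x - 1] - img[y][x + 1]) +
            abs(2 * img[y][x] - img[y - 1][x] - img[y + 1][x]))

def depth_from_focus(stack):
    """For every interior pixel, pick the stack index where the focus
    measure peaks; that index is the (discrete) depth estimate."""
    h, w = len(stack[0]), len(stack[0][0])
    depth = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            scores = [laplacian_energy(img, x, y) for img in stack]
            depth[y][x] = max(range(len(stack)), key=scores.__getitem__)
    return depth
```

A pixel that is sharp only in the second slice of a two-image stack is assigned depth index 1.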

  7. Cooperative processes in image segmentation

    NASA Technical Reports Server (NTRS)

    Davis, L. S.

    1982-01-01

    Research into the role of cooperative, or relaxation, processes in image segmentation is surveyed. Cooperative processes can be employed at several levels of the segmentation process as a preprocessing enhancement step, during supervised or unsupervised pixel classification and, finally, for the interpretation of image segments based on segment properties and relations.

  8. Advances in diffusion MRI acquisition and processing in the Human Connectome Project.

    PubMed

    Sotiropoulos, Stamatios N; Jbabdi, Saad; Xu, Junqian; Andersson, Jesper L; Moeller, Steen; Auerbach, Edward J; Glasser, Matthew F; Hernandez, Moises; Sapiro, Guillermo; Jenkinson, Mark; Feinberg, David A; Yacoub, Essa; Lenglet, Christophe; Van Essen, David C; Ugurbil, Kamil; Behrens, Timothy E J

    2013-10-15

    The Human Connectome Project (HCP) is a collaborative 5-year effort to map human brain connections and their variability in healthy adults. A consortium of HCP investigators will study a population of 1200 healthy adults using multiple imaging modalities, along with extensive behavioral and genetic data. In this overview, we focus on diffusion MRI (dMRI) and the structural connectivity aspect of the project. We present recent advances in acquisition and processing that allow us to obtain very high-quality in-vivo MRI data, whilst enabling scanning of a very large number of subjects. These advances result from 2 years of intensive efforts in optimising many aspects of data acquisition and processing during the piloting phase of the project. The data quality and methods described here are representative of the datasets and processing pipelines that will be made freely available to the community at quarterly intervals, beginning in 2013. PMID:23702418

  9. Advances in diffusion MRI acquisition and processing in the Human Connectome Project

    PubMed Central

Sotiropoulos, Stamatios N; Jbabdi, Saad; Xu, Junqian; Andersson, Jesper L; Moeller, Steen; Auerbach, Edward J; Glasser, Matthew F; Hernandez, Moises; Sapiro, Guillermo; Jenkinson, Mark; Feinberg, David A; Yacoub, Essa; Lenglet, Christophe; Van Essen, David C; Ugurbil, Kamil; Behrens, Timothy EJ

    2013-01-01

    The Human Connectome Project (HCP) is a collaborative 5-year effort to map human brain connections and their variability in healthy adults. A consortium of HCP investigators will study a population of 1200 healthy adults using multiple imaging modalities, along with extensive behavioral and genetic data. In this overview, we focus on diffusion MRI (dMRI) and the structural connectivity aspect of the project. We present recent advances in acquisition and processing that allow us to obtain very high-quality in-vivo MRI data, while enabling scanning of a very large number of subjects. These advances result from 2 years of intensive efforts in optimising many aspects of data acquisition and processing during the piloting phase of the project. The data quality and methods described here are representative of the datasets and processing pipelines that will be made freely available to the community at quarterly intervals, beginning in 2013. PMID:23702418

  10. Voyager image processing at the Image Processing Laboratory

    NASA Technical Reports Server (NTRS)

    Jepsen, P. L.; Mosher, J. A.; Yagi, G. M.; Avis, C. C.; Lorre, J. J.; Garneau, G. W.

    1980-01-01

This paper discusses new digital processing techniques as applied to the Voyager Imaging Subsystem and devised to explore atmospheric dynamics, spectral variations, and the morphology of Jupiter, Saturn and their satellites. Radiometric and geometric decalibration processes, the modulation transfer function, and processes to determine and remove photometric properties of the atmosphere and surface of Jupiter and its satellites are examined. It is shown that selected images can be processed into 'approach at constant longitude' time lapse movies which are useful in observing atmospheric changes of Jupiter. Photographs are included to illustrate various image processing techniques.

  11. Biometric iris image acquisition system with wavefront coding technology

    NASA Astrophysics Data System (ADS)

    Hsieh, Sheng-Hsun; Yang, Hsi-Wen; Huang, Shao-Hung; Li, Yung-Hui; Tien, Chung-Hao

    2013-09-01

Biometric signatures for identity recognition have been practiced for centuries. Basically, the personal attributes used for a biometric identification system can be classified into two areas: one is based on physiological attributes, such as DNA, facial features, retinal vasculature, fingerprint, hand geometry, iris texture and so on; the other depends on individual behavioral attributes, such as signature, keystroke, voice and gait style. Among these features, iris recognition is one of the most attractive approaches due to its nature of randomness, texture stability over a lifetime, high entropy density and non-invasive acquisition. While the performance of iris recognition on high-quality images is well investigated, few studies have addressed how iris recognition performs on non-ideal image data, especially when the data are acquired under challenging conditions, such as long working distance, dynamic movement of subjects, uncontrolled illumination conditions and so on. There are three main contributions in this paper. Firstly, the optical system parameters, such as magnification and field of view, were optimally designed through first-order optics. Secondly, the irradiance constraints were derived from the optical conservation theorem. Through the relationship between the subject and the detector, we could estimate the limitation of working distance when the camera lens and CCD sensor were known. The working distance is set to 3m in our system with pupil diameter 86mm and CCD irradiance 0.3mW/cm2. Finally, we employed a hybrid scheme combining eye tracking with a pan-and-tilt system, wavefront coding technology, filter optimization and post signal recognition to implement a robust iris recognition system in dynamic operation. The blurred image was restored to ensure recognition accuracy over 3m working distance with 400mm focal length and aperture F/6.3 optics. Simulation results as well as experiments validate the proposed coding approach.

  12. Acquisition method improvement for Bossa Nova Technologies' full Stokes, passive polarization imaging camera SALSA

    NASA Astrophysics Data System (ADS)

    El Ketara, M.; Vedel, M.; Breugnot, S.

    2016-05-01

For some applications, the need for fast polarization acquisition is essential (if the scene observed is moving or changing quickly). In this paper, we present a new acquisition method for Bossa Nova Technologies' full Stokes passive polarization imaging camera, the SALSA. This polarization imaging camera is based on a "Division of Time polarimetry" architecture. This technique has the advantage of preserving the full resolution of the observed image, but at the cost of acquisition speed. The goal of this new acquisition method is to overcome the limitations associated with the Division of Time acquisition technique and to obtain high-speed polarization imaging while maintaining the image resolution. The efficiency of this new method is demonstrated in this paper through different experiments.
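A Division-of-Time polarimeter reconstructs the Stokes vector from sequential intensity measurements. The textbook six-measurement sketch below is illustrative; the SALSA camera's actual measurement set and calibration are not specified in the abstract:

```python
def stokes_from_measurements(i0, i45, i90, i135, i_rcp, i_lcp):
    """Full Stokes vector from six sequential intensity measurements:
    a linear analyzer at 0/45/90/135 degrees plus right/left circular
    analyzers, acquired one after another as in Division-of-Time
    polarimetry."""
    s0 = i0 + i90        # total intensity
    s1 = i0 - i90        # horizontal vs vertical linear component
    s2 = i45 - i135      # +45 vs -45 degree linear component
    s3 = i_rcp - i_lcp   # right vs left circular component
    return s0, s1, s2, s3

def degree_of_polarization(s0, s1, s2, s3):
    """Fraction of the light that is polarized (1.0 = fully polarized)."""
    return (s1 * s1 + s2 * s2 + s3 * s3) ** 0.5 / s0
```

For fully horizontally polarized light the vector is (1, 1, 0, 0) with a degree of polarization of 1.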

  13. Remote online processing of multispectral image data

    NASA Astrophysics Data System (ADS)

    Groh, Christine; Rothe, Hendrik

    2005-10-01

Within the scope of this paper, a compact and economical data acquisition system for multispectral images is described. It consists of a CCD camera, a liquid crystal tunable filter, and an associated concept for data processing. Despite their limited functionality (e.g. regarding calibration) in comparison with commercial systems such as AVIRIS, the use of these upcoming compact multispectral camera systems can be advantageous in many applications. Additional benefit can be derived by adding online data processing. In order to maintain the system's low weight and price, this work proposes to separate the data acquisition and processing modules, and transmit pre-processed camera data online to a stationary high performance computer for further processing. The inevitable data transmission has to be optimised because of bandwidth limitations. All mentioned considerations hold especially for applications involving mini-unmanned-aerial-vehicles (mini-UAVs). Due to their limited internal payload, the use of a lightweight, compact camera system is of particular importance. This work focuses on the optimal software interface between pre-processed data (from the camera system), transmitted data (regarding small bandwidth) and post-processed data (on the high performance computer). Discussed parameters are pre-processing algorithms, channel bandwidth, and resulting accuracy in the classification of multispectral image data. The benchmarked pre-processing algorithms include diagnostic statistics, tests of internal determination coefficients, as well as loss-free and lossy data compression methods. The resulting classification precision is computed in comparison to a classification performed with the original image dataset.

  14. Digital-image processing and image analysis of glacier ice

    USGS Publications Warehouse

    Fitzpatrick, Joan J.

    2013-01-01

    This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document compose a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended but it can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.

  15. Industrial Applications of Image Processing

    NASA Astrophysics Data System (ADS)

    Ciora, Radu Adrian; Simion, Carmen Mihaela

    2014-11-01

The recent advances in sensor quality and processing power provide us with excellent tools for designing more complex image processing and pattern recognition tasks. In this paper we review the existing applications of image processing and pattern recognition in industrial engineering. First we define the role of vision in an industrial setting. Then an overview of some image processing techniques is presented: feature extraction, object recognition, and industrial robotic guidance. Moreover, examples of implementations of such techniques in industry are presented. Such implementations include automated visual inspection, process control, part identification, and robot control. Finally, we present some conclusions regarding the investigated topics and directions for future investigation.

  16. An image processing algorithm for PPCR imaging

    NASA Astrophysics Data System (ADS)

    Cowen, Arnold R.; Giles, Anthony; Davies, Andrew G.; Workman, A.

    1993-09-01

During 1990 The UK Department of Health installed two Photostimulable Phosphor Computed Radiography (PPCR) systems in the General Infirmary at Leeds with a view to evaluating the clinical and physical performance of the technology prior to its introduction into the NHS. An issue that came to light from the outset of the projects was the radiologists' reservations about the influence of the standard PPCR computerized image processing on image quality and diagnostic performance. An investigation was set up by FAXIL to develop an algorithm to produce single format high quality PPCR images that would be easy to implement and allay the concerns of radiologists.

  17. SWNT Imaging Using Multispectral Image Processing

    NASA Astrophysics Data System (ADS)

    Blades, Michael; Pirbhai, Massooma; Rotkin, Slava V.

    2012-02-01

A flexible optical system was developed to image carbon single-wall nanotube (SWNT) photoluminescence using the multispectral capabilities of a typical CCD camcorder. The built-in Bayer filter of the CCD camera was utilized, using OpenCV C++ libraries for image processing, to decompose the image generated in a high magnification epifluorescence microscope setup into three pseudo-color channels. By carefully calibrating the filter beforehand, it was possible to extract spectral data from these channels, and effectively isolate the SWNT signals from the background.
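Splitting a raw Bayer mosaic into three pseudo-color channels, as described, can be sketched as follows. The RGGB layout assumed here is the most common arrangement, but the camcorder's actual mosaic and the spectral calibration step are not given in the abstract:

```python
def split_bayer_rggb(raw):
    """Split an RGGB Bayer mosaic (list of lists, even dimensions) into
    R, G, B channels at half resolution; the two green sites per 2x2
    cell are averaged.  Layout assumed:
        R G
        G B
    """
    h, w = len(raw), len(raw[0])
    R = [[raw[y][x] for x in range(0, w, 2)] for y in range(0, h, 2)]
    B = [[raw[y][x] for x in range(1, w, 2)] for y in range(1, h, 2)]
    G = [[(raw[y][x + 1] + raw[y + 1][x]) / 2
          for x in range(0, w, 2)] for y in range(0, h, 2)]
    return R, G, B
```

On a single 2x2 cell `[[1, 2], [3, 4]]` this yields R=1, G=(2+3)/2, B=4.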

  18. Design of smart imagers with image processing

    NASA Astrophysics Data System (ADS)

    Serova, Evgeniya N.; Shiryaev, Yury A.; Udovichenko, Anton O.

    2005-06-01

This paper is devoted to the creation of novel CMOS APS imagers with focal-plane parallel image preprocessing for smart technical vision and electro-optical systems based on neural implementation. Using an analysis of the main features of biological vision, the desired artificial vision characteristics are defined, and the image processing tasks that can be implemented by smart focal-plane preprocessing CMOS imagers with neural networks are determined. Eventual results are important for medicine and aerospace ecological monitoring; questions of complexity and of ways to implement neural nets in CMOS APS are also considered. To reduce real image preprocessing time, special methods based on edge detection and neighboring-frame subtraction will be considered and simulated. To select optimal methods and mathematical operators for edge detection, various medical, technical and aerospace images will be tested. An important research direction will be devoted to analogue implementation of the main preprocessing operations (addition, subtraction, neighboring-frame subtraction, modulus, and edge detection of pixel signals) in the focal plane of CMOS APS imagers. We present the following results: an algorithm of edge detection for analog realization, and patented focal-plane circuits for analog image preprocessing (edge detection and motion detection).
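Among the mathematical operators for edge detection such a study would compare, the Sobel operator is a standard baseline. A plain-Python sketch of the gradient-magnitude edge map (illustrative only; the paper's analog focal-plane circuits implement the operation differently):

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Gradient-magnitude edge map via the Sobel operator; border
    pixels are left at zero for simplicity."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

A vertical step edge of unit height produces a response of 4 (the sum of the Sobel column weights) along the edge.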

  19. An interactive image processing system.

    PubMed

    Troxel, D E

    1981-01-01

    A multiuser multiprocessing image processing system has been developed. It is an interactive picture manipulation and enhancement facility which is capable of executing a variety of image processing operations while simultaneously controlling real-time input and output of pictures. It was designed to provide a reliable picture processing system which would be cost-effective in the commercial production environment. Additional goals met by the system include flexibility and ease of operation and modification. PMID:21868923

  20. Instant super-resolution imaging in live cells and embryos via analog image processing

    PubMed Central

    York, Andrew G.; Chandris, Panagiotis; Nogare, Damian Dalle; Head, Jeffrey; Wawrzusin, Peter; Fischer, Robert S.; Chitnis, Ajay; Shroff, Hari

    2013-01-01

    Existing super-resolution fluorescence microscopes compromise acquisition speed to provide subdiffractive sample information. We report an analog implementation of structured illumination microscopy that enables 3D super-resolution imaging with 145 nm lateral and 350 nm axial resolution, at acquisition speeds up to 100 Hz. By performing image processing operations optically instead of digitally, we removed the need to capture, store, and combine multiple camera exposures, increasing data acquisition rates 10–100x over other super-resolution microscopes and acquiring and displaying super-resolution images in real-time. Low excitation intensities allow imaging over hundreds of 2D sections, and combined physical and computational sectioning allow similar depth penetration to confocal microscopy. We demonstrate the capability of our system by imaging fine, rapidly moving structures including motor-driven organelles in human lung fibroblasts and the cytoskeleton of flowing blood cells within developing zebrafish embryos. PMID:24097271

  1. Image Processing: Some Challenging Problems

    NASA Astrophysics Data System (ADS)

    Huang, T. S.; Aizawa, K.

    1993-11-01

    Image processing can be broadly defined as the manipulation of signals which are inherently multidimensional. The most common such signals are photographs and video sequences. The goals of processing or manipulation can be (i) compression for storage or transmission; (ii) enhancement or restoration; (iii) analysis, recognition, and understanding; or (iv) visualization for human observers. The use of image processing techniques has become almost ubiquitous; they find applications in such diverse areas as astronomy, archaeology, medicine, video communication, and electronic games. Nonetheless, many important problems in image processing remain unsolved. It is the goal of this paper to discuss some of these challenging problems. In Section I, we mention a number of outstanding problems. Then, in the remainder of this paper, we concentrate on one of them: very-low-bit-rate video compression. This is chosen because it involves almost all aspects of image processing.

  2. Contractor relationships and inter-organizational strategies in NASA's R and D acquisition process

    NASA Technical Reports Server (NTRS)

    Guiltinan, J.

    1976-01-01

    Interorganizational analysis of NASA's acquisition process for research and development systems is discussed. The importance of understanding the contractor environment, constraints, and motives in selecting an acquisition strategy is demonstrated. By articulating clear project goals, by utilizing information about the contractor and his needs at each stage in the acquisition process, and by thorough analysis of the inter-organizational relationship, improved selection of acquisition strategies and business practices is possible.

  3. Image processing of aerodynamic data

    NASA Technical Reports Server (NTRS)

    Faulcon, N. D.

    1985-01-01

The use of digital image processing techniques in analyzing and evaluating aerodynamic data is discussed. An image processing system that converts images derived from digital data or from transparent film into black and white, full color, or false color pictures is described. Applications to black and white images of a model wing with a NACA 64-210 section in simulated rain and to computed flow properties for transonic flow past a NACA 0012 airfoil are presented. Image processing techniques are used to visualize the variations of water film thicknesses on the wing model and to illustrate the contours of computed Mach numbers for the flow past the NACA 0012 airfoil. Since the computed data for the NACA 0012 airfoil are available only at discrete spatial locations, an interpolation method is used to provide values of the Mach number over the entire field.
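The interpolation method is not specified in the abstract; bilinear interpolation on the sample grid is one standard choice and can be sketched as:

```python
import math

def bilinear(grid, x, y):
    """Bilinear interpolation of a scalar field (e.g. Mach number)
    sampled on an integer grid; (x, y) must lie strictly inside the
    grid so that all four surrounding samples exist."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * grid[y0][x0] +
            dx * (1 - dy) * grid[y0][x0 + 1] +
            (1 - dx) * dy * grid[y0 + 1][x0] +
            dx * dy * grid[y0 + 1][x0 + 1])
```

At the center of a unit cell the result is simply the average of the four corner values.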

  4. Autonomous Closed-Loop Tasking, Acquisition, Processing, and Evaluation for Situational Awareness Feedback

    NASA Technical Reports Server (NTRS)

    Frye, Stuart; Mandl, Dan; Cappelaere, Pat

    2016-01-01

This presentation describes the closed loop satellite autonomy methods used to connect users and the assets on Earth Orbiter-1 (EO-1) and similar satellites. The base layer is a distributed architecture based on the Goddard Mission Services Evolution Concept (GMSEC), so each asset remains under independent control. Situational awareness is provided by a middleware layer through a common Application Programmer Interface (API) to GMSEC components developed at GSFC. Users set up their own tasking requests, receive views into immediate past acquisitions in their area of interest, and into future feasibilities for acquisition across all assets. Automated notifications via pubsub feeds are returned to users containing published links to image footprints, algorithm results, and full data sets. Theme-based algorithms are available on-demand for processing.

  5. Progress in the Development of a new Angiography Suite including the High Resolution Micro-Angiographic Fluoroscope (MAF), a Control, Acquisition, Processing, and Image Display System (CAPIDS), and a New Detector Changer Integrated into a Commercial C-Arm Angiography Unit to Enable Clinical Use.

    PubMed

    Wang, Weiyuan; Ionita, Ciprian N; Keleshis, Christos; Kuhls-Gilcrist, Andrew; Jain, Amit; Bednarek, Daniel; Rudin, Stephen

    2010-03-23

    Due to the high-resolution needs of angiographic and interventional vascular imaging, a Micro-Angiographic Fluoroscope (MAF) detector with a Control, Acquisition, Processing, and Image Display System (CAPIDS) was installed on a detector changer which was attached to the C-arm of a clinical angiographic unit. The MAF detector provides high-resolution, high-sensitivity, and real-time imaging capabilities and consists of a 300 μm-thick CsI phosphor, a dual stage micro-channel plate light image intensifier (LII) coupled to a fiber optic taper (FOT), and a scientific grade frame-transfer CCD camera, providing an image matrix of 1024×1024 35 μm square pixels with 12 bit depth. The Solid-State X-Ray Image Intensifier (SSXII) is an EMCCD (Electron Multiplying charge-coupled device) based detector which provides an image matrix of 1k×1k 32 μm square pixels with 12 bit depth. The changer allows the MAF or a SSXII region-of-interest (ROI) detector to be inserted in front of the standard flat-panel detector (FPD) when higher resolution is needed during angiographic or interventional vascular imaging procedures. The CAPIDS was developed and implemented using LabVIEW software and provides a user-friendly interface that enables control of several clinical radiographic imaging modes of the MAF or SSXII including: fluoroscopy, roadmapping, radiography, and digital-subtraction-angiography (DSA). The total system has been used for image guidance during endovascular image-guided interventions (EIGI) using prototype self-expanding asymmetric vascular stents (SAVS) in over 10 rabbit aneurysm creation and treatment experiments which have demonstrated the system's potential benefits for future clinical use. PMID:21243037

  6. Progress in the development of a new angiography suite including the high resolution micro-angiographic fluoroscope (MAF): a control, acquisition, processing, and image display system (CAPIDS), and a new detector changer integrated into a commercial C-arm angiography unit to enable clinical use

    NASA Astrophysics Data System (ADS)

    Wang, Weiyuan; Ionita, Ciprian N.; Keleshis, Christos; Kuhls-Gilcrist, Andrew; Jain, Amit; Bednarek, Daniel; Rudin, Stephen

    2010-04-01

    Due to the high-resolution needs of angiographic and interventional vascular imaging, a Micro-Angiographic Fluoroscope (MAF) detector with a Control, Acquisition, Processing, and Image Display System (CAPIDS) was installed on a detector changer which was attached to the C-arm of a clinical angiographic unit. The MAF detector provides high-resolution, high-sensitivity, and real-time imaging capabilities and consists of a 300 μm-thick CsI phosphor, a dual stage micro-channel plate light image intensifier (LII) coupled to a fiber optic taper (FOT), and a scientific grade frame-transfer CCD camera, providing an image matrix of 1024×1024 35 μm square pixels with 12 bit depth. The Solid-State X-Ray Image Intensifier (SSXII) is an EMCCD (Electron Multiplying charge-coupled device) based detector which provides an image matrix of 1k×1k 32 μm square pixels with 12 bit depth. The changer allows the MAF or a SSXII region-of-interest (ROI) detector to be inserted in front of the standard flat-panel detector (FPD) when higher resolution is needed during angiographic or interventional vascular imaging procedures. The CAPIDS was developed and implemented using LabVIEW software and provides a user-friendly interface that enables control of several clinical radiographic imaging modes of the MAF or SSXII including: fluoroscopy, roadmapping, radiography, and digital-subtraction-angiography (DSA). The total system has been used for image guidance during endovascular image-guided interventions (EIGI) using prototype self-expanding asymmetric vascular stents (SAVS) in over 10 rabbit aneurysm creation and treatment experiments which have demonstrated the system's potential benefits for future clinical use.

  7. Progress in the Development of a new Angiography Suite including the High Resolution Micro-Angiographic Fluoroscope (MAF), a Control, Acquisition, Processing, and Image Display System (CAPIDS), and a New Detector Changer Integrated into a Commercial C-Arm Angiography Unit to Enable Clinical Use

    PubMed Central

    Wang, Weiyuan; Ionita, Ciprian N; Keleshis, Christos; Kuhls-Gilcrist, Andrew; Jain, Amit; Bednarek, Daniel; Rudin, Stephen

    2010-01-01

    Due to the high-resolution needs of angiographic and interventional vascular imaging, a Micro-Angiographic Fluoroscope (MAF) detector with a Control, Acquisition, Processing, and Image Display System (CAPIDS) was installed on a detector changer which was attached to the C-arm of a clinical angiographic unit. The MAF detector provides high-resolution, high-sensitivity, and real-time imaging capabilities and consists of a 300 μm-thick CsI phosphor, a dual stage micro-channel plate light image intensifier (LII) coupled to a fiber optic taper (FOT), and a scientific grade frame-transfer CCD camera, providing an image matrix of 1024×1024 35 μm square pixels with 12 bit depth. The Solid-State X-Ray Image Intensifier (SSXII) is an EMCCD (Electron Multiplying charge-coupled device) based detector which provides an image matrix of 1k×1k 32 μm square pixels with 12 bit depth. The changer allows the MAF or a SSXII region-of-interest (ROI) detector to be inserted in front of the standard flat-panel detector (FPD) when higher resolution is needed during angiographic or interventional vascular imaging procedures. The CAPIDS was developed and implemented using LabVIEW software and provides a user-friendly interface that enables control of several clinical radiographic imaging modes of the MAF or SSXII including: fluoroscopy, roadmapping, radiography, and digital-subtraction-angiography (DSA). The total system has been used for image guidance during endovascular image-guided interventions (EIGI) using prototype self-expanding asymmetric vascular stents (SAVS) in over 10 rabbit aneurysm creation and treatment experiments which have demonstrated the system's potential benefits for future clinical use. PMID:21243037

  8. Radio reflection imaging of asteroid and comet interiors I: Acquisition and imaging theory

    NASA Astrophysics Data System (ADS)

    Sava, Paul; Ittharat, Detchai; Grimm, Robert; Stillman, David

    2015-05-01

    Imaging the interior structure of comets and asteroids can provide insight into their formation in the early Solar System, and can aid in their exploration and hazard mitigation. Accurate imaging can be accomplished using broadband wavefield data penetrating deep inside the object under investigation. This can be done in principle using seismic systems (which is difficult since it requires contact with the studied object), or using radar systems (which is easier since it can be conducted from orbit). We advocate the use of radar systems based on instruments similar to the ones currently deployed in space, e.g. the CONSERT experiment of the Rosetta mission, but perform imaging using data reflected from internal interfaces, instead of data transmitted through the imaging object. Our core methodology is wavefield extrapolation using time-domain finite differences, a technique often referred to as reverse-time migration and proven to be effective in high-quality imaging of complex geologic structures. The novelty of our approach consists in the use of dual orbiters around the studied object, instead of an orbiter and a lander. Dual orbiter systems can provide multi-offset data that illuminate the target object from many different illumination angles. Multi-offset data improve image quality (a) by avoiding illumination shadows, (b) by attenuating coherent noise (image artifacts) caused by wavefield multi-pathing, and (c) by providing information necessary to infer the model parameters needed to simulate wavefields inside the imaging target. The images obtained using multi-offset are robust with respect to instrument noise comparable in strength with the reflected signal. Dual-orbiter acquisition leads to improved image quality which is directly dependent on the aperture between the transmitter and receiver antennas. We illustrate the proposed methodology using a complex model based on a scaled version of asteroid 433 Eros.
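The core of the reverse-time approach, wavefield extrapolation by time-domain finite differences, can be sketched in one dimension. The grid, homogeneous velocity model, and Ricker-style source below are illustrative assumptions, not the authors' configuration:

```python
import numpy as np

# Minimal 1-D acoustic extrapolation by explicit time-domain finite
# differences -- the kernel underlying reverse-time migration.
# dt and dx are chosen so the CFL number v*dt/dx = 0.4 < 1 (stable).
nx, dx, dt, nt = 201, 5.0, 0.001, 400
v = np.full(nx, 2000.0)          # homogeneous velocity model, m/s (assumed)
p_prev = np.zeros(nx)            # pressure at t - dt
p_curr = np.zeros(nx)            # pressure at t
src = nx // 2                    # source position at the grid centre

for it in range(nt):
    lap = np.zeros(nx)
    # second-order spatial Laplacian on interior points
    lap[1:-1] = (p_curr[2:] - 2 * p_curr[1:-1] + p_curr[:-2]) / dx**2
    p_next = 2 * p_curr - p_prev + (v * dt) ** 2 * lap
    # Ricker-like source wavelet injection (illustrative 25 Hz pulse)
    t = it * dt
    arg = (np.pi * 25.0 * (t - 0.04)) ** 2
    p_next[src] += (1 - 2 * arg) * np.exp(-arg)
    p_prev, p_curr = p_curr, p_next
```

After 400 steps the injected pulse has propagated away from the source; a reflected-data imaging condition would correlate such a forward field with a time-reversed receiver field.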

  9. KAM (Knowledge Acquisition Module): A tool to simplify the knowledge acquisition process

    NASA Technical Reports Server (NTRS)

    Gettig, Gary A.

    1988-01-01

    Analysts, knowledge engineers and information specialists are faced with increasing volumes of time-sensitive data in text form, either as free text or highly structured text records. Rapid access to the relevant data in these sources is essential. However, due to the volume and organization of the contents, and limitations of human memory and association, frequently: (1) important information is not located in time; (2) reams of irrelevant data are searched; and (3) interesting or critical associations are missed due to physical or temporal gaps involved in working with large files. The Knowledge Acquisition Module (KAM) is a microcomputer-based expert system designed to assist knowledge engineers, analysts, and other specialists in extracting useful knowledge from large volumes of digitized text and text-based files. KAM formulates non-explicit, ambiguous, or vague relations, rules, and facts into a manageable and consistent formal code. A library of system rules or heuristics is maintained to control the extraction of rules, relations, assertions, and other patterns from the text. These heuristics can be added, deleted or customized by the user. The user can further control the extraction process with optional topic specifications. This allows the user to cluster extracts based on specific topics. Because KAM formalizes diverse knowledge, it can be used by a variety of expert systems and automated reasoning applications. KAM can also perform important roles in computer-assisted training and skill development. Current research efforts include the applicability of neural networks to aid in the extraction process and the conversion of these extracts into standard formats.

  10. Real-time multi-camera video acquisition and processing platform for ADAS

    NASA Astrophysics Data System (ADS)

    Saponara, Sergio

    2016-04-01

    The paper presents the design of a real-time and low-cost embedded system for image acquisition and processing in Advanced Driver Assisted Systems (ADAS). The system adopts a multi-camera architecture to provide a panoramic view of the objects surrounding the vehicle. Fish-eye lenses are used to achieve a large Field of View (FOV). Since they introduce radial distortion of the images projected on the sensors, a real-time algorithm for their correction is also implemented in a pre-processor. An FPGA-based hardware implementation, re-using IP macrocells for several ADAS algorithms, allows for real-time processing of input streams from VGA automotive CMOS cameras.
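A radial-distortion pre-correction of the kind the pre-processor performs can be sketched as an inverse pixel mapping. The single-coefficient polynomial model, the `k1` value, and nearest-neighbour sampling below are assumptions for illustration, not the paper's FPGA implementation:

```python
import numpy as np

def undistort(img, k1, cx, cy):
    """Correct radial distortion r_d = r_u*(1 + k1*r_u^2) by inverse mapping:
    for each undistorted pixel, look up the distorted source coordinate."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    x, y = xs - cx, ys - cy                  # coordinates relative to centre
    r2 = x * x + y * y
    xd = x * (1 + k1 * r2) + cx              # distorted source coordinates
    yd = y * (1 + k1 * r2) + cy
    xi = np.clip(np.round(xd).astype(int), 0, w - 1)   # nearest neighbour
    yi = np.clip(np.round(yd).astype(int), 0, h - 1)
    return img[yi, xi]

img = np.arange(64, dtype=float).reshape(8, 8)
out = undistort(img, k1=1e-3, cx=3.5, cy=3.5)
```

Pixels near the optical centre are essentially unchanged by a radial model, while displacement grows with the cube of the radius toward the edges.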

  11. Post-acquisition small-animal respiratory gated imaging using micro cone-beam CT

    NASA Astrophysics Data System (ADS)

    Hu, Jicun; Haworth, Steven T.; Molthen, Robert C.; Dawson, Christopher A.

    2004-04-01

On many occasions, it is desirable to image lungs in vivo to perform a pulmonary physiology study. Since the lungs are moving, gating with respect to the ventilatory phase has to be performed in order to minimize motion artifacts. Gating can be done in real time, similar to cardiac imaging in clinical applications; however, there are technical problems that have led us to investigate different approaches. The problems include breath-to-breath inconsistencies in tidal volume, which make the precise detection of ventilatory phase difficult, and the relatively high ventilation rates seen in small animals (rats and mice have ventilation rates in the range of a hundred cycles per minute), which challenge the capture rate of many imaging systems (this is particularly true of our system, which utilizes cone-beam geometry and a 2-dimensional detector). Instead of pre-capture ventilation gating, we implemented a method of post-acquisition gating. We acquire a sequence of projection images at 30 frames per second for each of 360 viewing angles. During each capture sequence the rat undergoes multiple ventilation cycles. Using the sequence of projection images, an automated region-of-interest algorithm, based on integrated grayscale intensity, tracks the ventilatory phase of the lungs. In the processing of an image sequence, multiple projection images are identified at a particular phase and averaged to improve the signal-to-noise ratio. The resulting averaged projection images are input to a Feldkamp cone-beam reconstruction algorithm in order to obtain isotropic image volumes. Minimal-motion-artifact data sets improve qualitative and quantitative analysis techniques useful in physiologic studies of pulmonary structure and function.
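The post-acquisition gating scheme (track the ventilatory phase with an integrated-intensity trace, then average the frames that fall at a common phase) can be sketched on synthetic data. The sinusoidal "breathing" signal and noise level below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, h, w = 90, 16, 16
# synthetic ventilation: 3 breath cycles across the capture sequence
phase = np.sin(2 * np.pi * np.arange(n_frames) / 30.0)
frames = phase[:, None, None] + 0.1 * rng.standard_normal((n_frames, h, w))

# integrated grayscale intensity of each frame tracks the ventilatory phase
trace = frames.sum(axis=(1, 2))
# select frames at a common phase: local maxima of the intensity trace
peaks = [i for i in range(1, n_frames - 1)
         if trace[i] > trace[i - 1] and trace[i] > trace[i + 1]]
# average the selected projections to improve the signal-to-noise ratio
gated = frames[peaks].mean(axis=0)
```

Averaging the n selected frames reduces the random-noise standard deviation by roughly a factor of sqrt(n) while preserving the anatomy at the chosen phase.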

  12. Fuzzy image processing in sun sensor

    NASA Technical Reports Server (NTRS)

    Mobasser, S.; Liebe, C. C.; Howard, A.

    2003-01-01

This paper will describe how fuzzy image processing is implemented in the instrument. A comparison of the fuzzy image processing and a more conventional image processing algorithm is provided and shows that fuzzy image processing yields better accuracy than conventional image processing.
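As an illustration of the general idea (the instrument's actual membership functions are not described here), a fuzzy centroid replaces a hard detection threshold with a graded membership, so dim boundary pixels still contribute proportionally to the sun-spot position estimate:

```python
import numpy as np

def fuzzy_centroid(row, lo=50.0, hi=200.0):
    """Membership-weighted centroid along one pixel row.
    lo/hi define an assumed linear ramp for the 'sun pixel' membership."""
    row = np.asarray(row, dtype=float)
    x = np.arange(len(row), dtype=float)
    mu = np.clip((row - lo) / (hi - lo), 0.0, 1.0)   # fuzzy membership grade
    return float((x * mu).sum() / mu.sum())

row = [0, 0, 0, 60, 255, 255, 60, 0, 0, 0]   # bright spot centred at 4.5
c = fuzzy_centroid(row)
```

A hard threshold at, say, 100 would discard the two shoulder pixels entirely; the fuzzy version weights them lightly, giving sub-pixel accuracy that degrades gracefully with noise.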

  13. The Logical Syntax of Number Words: Theory, Acquisition and Processing

    ERIC Educational Resources Information Center

    Musolino, Julien

    2009-01-01

    Recent work on the acquisition of number words has emphasized the importance of integrating linguistic and developmental perspectives [Musolino, J. (2004). The semantics and acquisition of number words: Integrating linguistic and developmental perspectives. "Cognition 93", 1-41; Papafragou, A., Musolino, J. (2003). Scalar implicatures: Scalar…

  14. Signal and Image Processing Operations

    1995-05-10

    VIEW is a software system for processing arbitrary multidimensional signals. It provides facilities for numerical operations, signal displays, and signal databasing. The major emphasis of the system is on the processing of time-sequences and multidimensional images. The system is designed to be both portable and extensible. It runs currently on UNIX systems, primarily SUN workstations.

  15. A Spartan 6 FPGA-based data acquisition system for dedicated imagers in nuclear medicine

    NASA Astrophysics Data System (ADS)

    Fysikopoulos, E.; Loudos, G.; Georgiou, M.; David, S.; Matsopoulos, G.

    2012-12-01

We present the development of a four-channel low-cost hardware system for data acquisition, with application in dedicated nuclear medicine imagers. A 12 bit octal channel high-speed analogue to digital converter, with up to 65 Msps sampling rate, was used for the digitization of analogue signals. The digitized data are fed into a field programmable gate array (FPGA), which contains an interface to a bank of double data rate 2 (DDR2)-type memory. The FPGA processes the digitized data and stores the results into the DDR2. An ethernet link was used for data transmission to a personal computer. The embedded system was designed using Xilinx's embedded development kit (EDK) and was based on Xilinx's Microblaze soft-core processor. The system has been evaluated using two different discrete optical detector arrays (a position-sensitive photomultiplier tube and a silicon photomultiplier) with two different pixelated scintillator arrays (BGO, LSO:Ce). The energy resolution for both detectors was approximately 25%. A clear identification of all crystal elements was achieved in all cases. The data rate of the system with this implementation can reach 60 Mbits/s. The results have shown that this FPGA data acquisition system is a compact and flexible solution for single-photon-detection applications. This paper was originally submitted for inclusion in the special feature on Imaging Systems and Techniques 2011.

  16. Acquisition and Processing of Multi-Fold GPR Data for Characterization of Shallow Groundwater Systems

    NASA Astrophysics Data System (ADS)

    Bradford, J. H.

    2004-05-01

Most ground-penetrating radar (GPR) data are acquired with a constant transmitter-receiver offset, and investigators often apply little or no processing in generating a subsurface image. This mode of operation can provide useful information, but does not take full advantage of the information the GPR signal can carry. In continuous multi-offset (CMO) mode, one acquires several traces with varying source-receiver separations at each point along the survey. CMO acquisition is analogous to common-midpoint acquisition in exploration seismology and gives rise to improved subsurface characterization through three key features: 1) Processes such as stacking and velocity filtering significantly attenuate coherent and random noise, resulting in subsurface images that are easier to interpret; 2) CMO data enable measurement of vertical and lateral velocity variations, which leads to improved understanding of material distribution and more accurate depth estimates; and 3) CMO data enable observation of reflected wave behaviour (i.e., variations in amplitude and spectrum) at a common reflection point for various travel paths through the subsurface; quantification of these variations can be a valuable tool in material property characterization. Although there are a few examples in the literature, investigators rarely acquire CMO GPR data. This is, in large part, due to the fact that CMO acquisition with a single-channel system is labor intensive and time consuming. At present, no multi-channel GPR systems designed for CMO acquisition are commercially available. Over the past 8 years I have designed, conducted, and processed numerous 2D and 3D CMO GPR surveys using a single-channel GPR system. I have developed field procedures that enable a three-man crew to acquire CMO GPR data at a rate comparable to a similar-scale multi-channel seismic reflection survey. Additionally, many recent advances in signal processing developed in the oil and gas industry have yet to see significant
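Feature 1 above, flattening a common-midpoint gather by normal-moveout (NMO) correction and then stacking, can be sketched on synthetic data. The velocity, event time, offsets, and noise level below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
v, t0 = 0.1, 50                 # velocity in m/ns, zero-offset two-way time in ns
offsets = np.arange(0.5, 4.5, 0.5)       # 8 source-receiver offsets, metres
nt = 200                                  # time samples at 1 ns sampling
gather = 0.3 * rng.standard_normal((len(offsets), nt))
for i, x in enumerate(offsets):
    # a reflection follows the hyperbola t(x) = sqrt(t0^2 + (x/v)^2)
    t = int(round((t0**2 + (x / v) ** 2) ** 0.5))
    gather[i, t] += 1.0

# NMO correction: shift each trace so the event maps back to t0, then stack
nmo = np.zeros_like(gather)
for i, x in enumerate(offsets):
    shift = int(round((t0**2 + (x / v) ** 2) ** 0.5)) - t0
    nmo[i, : nt - shift] = gather[i, shift:]
stack = nmo.mean(axis=0)        # random noise drops by roughly sqrt(8)
```

After correction the event is aligned at t0 on every trace, so the stack reinforces it while incoherent noise averages down.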

  17. Differential morphology and image processing.

    PubMed

    Maragos, P

    1996-01-01

    Image processing via mathematical morphology has traditionally used geometry to intuitively understand morphological signal operators and set or lattice algebra to analyze them in the space domain. We provide a unified view and analytic tools for morphological image processing that is based on ideas from differential calculus and dynamical systems. This includes ideas on using partial differential or difference equations (PDEs) to model distance propagation or nonlinear multiscale processes in images. We briefly review some nonlinear difference equations that implement discrete distance transforms and relate them to numerical solutions of the eikonal equation of optics. We also review some nonlinear PDEs that model the evolution of multiscale morphological operators and use morphological derivatives. Among the new ideas presented, we develop some general 2-D max/min-sum difference equations that model the space dynamics of 2-D morphological systems (including the distance computations) and some nonlinear signal transforms, called slope transforms, that can analyze these systems in a transform domain in ways conceptually similar to the application of Fourier transforms to linear systems. Thus, distance transforms are shown to be bandpass slope filters. We view the analysis of the multiscale morphological PDEs and of the eikonal PDE solved via weighted distance transforms as a unified area in nonlinear image processing, which we call differential morphology, and briefly discuss its potential applications to image processing and computer vision. PMID:18285181
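A minimal instance of the min-sum difference equations discussed above is the two-pass chamfer distance transform, in which each pixel takes the minimum over neighbours of (neighbour distance + local weight). This is a generic textbook formulation, not code from the paper:

```python
import numpy as np

def chamfer(mask):
    """City-block distance to the nearest True pixel, via the min-sum
    difference equation run in a forward and a backward raster pass."""
    h, w = mask.shape
    d = np.where(mask, 0, h + w).astype(float)   # h+w bounds any distance
    for y in range(h):                           # forward pass
        for x in range(w):
            if y > 0:
                d[y, x] = min(d[y, x], d[y - 1, x] + 1)
            if x > 0:
                d[y, x] = min(d[y, x], d[y, x - 1] + 1)
    for y in range(h - 1, -1, -1):               # backward pass
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y, x] = min(d[y, x], d[y + 1, x] + 1)
            if x < w - 1:
                d[y, x] = min(d[y, x], d[y, x + 1] + 1)
    return d

mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True                 # single seed pixel at the centre
dist = chamfer(mask)
```

With unit weights and 4-neighbours the two passes reproduce the exact city-block distance; weighted masks give closer approximations to Euclidean distance, i.e. to the eikonal solution.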

  18. Associative architecture for image processing

    NASA Astrophysics Data System (ADS)

    Adar, Rutie; Akerib, Avidan

    1997-09-01

This article presents a new generation in parallel processing architecture for real-time image processing. The approach is implemented in a real-time image processor chip, called the Xium™-2, based on combining a fully associative array, which provides the parallel engine, with a serial RISC core on the same die. The architecture is fully programmable and can be programmed to implement a wide range of color image processing, computer vision and media processing functions in real time. The associative part of the chip is based on the patent-pending methodology of Associative Computing Ltd. (ACL), which condenses 2048 associative processors, each of 128 'intelligent' bits. Each bit can be a processing bit or a memory bit. At only 33 MHz and a 0.6 micron manufacturing process, the chip has a computational power of 3 billion ALU operations per second and 66 billion string search operations per second. The fully programmable nature of the Xium™-2 chip enables developers to use ACL tools to write their own proprietary algorithms combined with existing image processing and analysis functions from ACL's extended set of libraries.

  19. Fault recognition depending on seismic acquisition and processing for application to geothermal exploration

    NASA Astrophysics Data System (ADS)

    Buness, H.; von Hartmann, H.; Rumpel, H.; Krawczyk, C. M.; Schulz, R.

    2011-12-01

Fault systems offer a large potential for deep hydrothermal energy extraction. Most of the existing and planned projects rely on enhanced permeability assumed to be connected with them. The target depth of hydrothermal exploration in Germany is in the order of 3-5 km to ensure economic operation due to moderate temperature gradients. 3D seismics is the most appropriate geophysical method to image fault systems at these depths, but also one of the most expensive ones. It constitutes a significant part of the total project costs, so its application was (and is) discussed. Cost reduction can in principle be achieved by sparse acquisition. However, the decreased fold inevitably leads to a decreased S/N ratio. To overcome this problem, the application of the CRS (Common Reflection Surface) method has been proposed. The stacking operator of the CRS method inherently includes more traces than the conventional NMO/DMO stacking operator, and hence a better S/N ratio can be achieved. We tested this approach using existing 3D seismic datasets of the two most important hydrothermal provinces in Germany, the Upper Rhine Graben (URG) and the German Molasse Basin (GMB). To simulate a sparse acquisition, we reduced the amount of data to a quarter and a half, respectively, and reprocessed the data, including new velocity analysis and residual static corrections. In the URG, the utilization of the variance cube as the basis for a horizon-bound window amplitude analysis has been successful for the detection of small faults, which would hardly be recognized in seismic sections. In both regions, CRS processing undoubtedly improved the imaging of small faults in the complete as well as in the reduced versions of the datasets. However, CRS processing could not compensate for the loss of resolution due to the reduction associated with the simulated sparse acquisition, and hence smaller faults became undetectable. The decision for a sparse acquisition thus depends on the scope of the survey.

  20. Low cost FPGA based data acquisition system for a gamma imaging probe

    NASA Astrophysics Data System (ADS)

    Fysikopoulos, E.; Georgiou, M.; Loudos, G.; Matsopoulos, G.

    2013-11-01

We present the development of a low-cost field programmable gate array (FPGA) based data acquisition system for a gamma imaging probe proposed for sentinel lymph node (SLN) mapping. Radioguided surgery using a gamma probe is an established practice and has been widely introduced in SLN biopsies. For such applications, imaging systems require compact readout electronics and flexibility. Embedded systems implemented in FPGA technology offer new possibilities in data acquisition for nuclear medicine imagers. FPGAs are inexpensive compared to the application-specific integrated circuits (ASICs) usually used for the readout electronics of dedicated gamma cameras, and their size is rather small. In this study, cost-effective analog to digital converters (ADCs) were used and signal processing algorithms were implemented in the FPGA to extract the energy and position information. The analog front-end electronics were carefully designed taking into account the low sampling rate of the ADCs. The reference gamma probe has a small field of view (2.5 cm × 2.5 cm) and is based on the R8900U-00-C12 position sensitive photomultiplier tube (PSPMT) coupled to a pixellated CsI(Na) scintillator with 1 mm × 1 mm × 5 mm crystal element size. Measurements were carried out using a general purpose collimator and 99mTc sources emitting at 140 keV. Performance parameters for the imaging gamma probe were compared with those obtained when data were acquired using standard NIM (Nuclear Instrumentation Module) electronics and found to be in very good agreement, which demonstrates the efficiency of the proposed implementation.
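Position and energy extraction from a PSPMT is commonly done with Anger (centre-of-gravity) logic on four charge-division signals. The sketch below assumes such a resistive-chain readout; it illustrates the arithmetic only and is not the authors' firmware:

```python
def anger(xa, xb, ya, yb):
    """Anger logic on the four charge-division signals of a resistive-chain
    PSPMT readout: total charge gives energy, normalised differences give
    the interaction position in [-1, 1] along each axis."""
    e = xa + xb + ya + yb          # total charge ~ deposited energy
    x = (xa - xb) / (xa + xb)      # normalised x centroid
    y = (ya - yb) / (ya + yb)      # normalised y centroid
    return e, x, y

# an event pulled toward the +x side of the tube (illustrative values)
e, x, y = anger(300.0, 100.0, 200.0, 200.0)
```

Histogramming (x, y) over many events produces the flood map from which the individual crystal elements of the pixellated scintillator are identified.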

  1. Digital processing of radiographic images

    NASA Technical Reports Server (NTRS)

    Bond, A. D.; Ramapriyan, H. K.

    1973-01-01

Some techniques and the software documentation for the digital enhancement of radiographs are presented. Both image handling and image processing operations are considered. The image handling operations dealt with are: (1) conversion of format of data from packed to unpacked and vice versa; (2) automatic extraction of image data arrays; (3) transposition and 90 deg rotations of large data arrays; (4) translation of data arrays for registration; and (5) reduction of the dimensions of data arrays by integral factors. Both the frequency and the spatial domain approaches are presented for the design and implementation of the image processing operations. It is shown that spatial domain recursive implementation of filters is much faster than nonrecursive implementations using fast Fourier transforms (FFT) for the cases of interest in this work. The recursive implementation of a class of matched filters for enhancing image signal-to-noise ratio is described. Test patterns are used to illustrate the filtering operations. The application of the techniques to radiographic images of metallic structures is demonstrated through several examples.

  2. Advanced technology development for image gathering, coding, and processing

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.

    1990-01-01

    Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.

  3. Seismic Imaging Processing and Migration

    2000-06-26

Salvo is a 3D, finite difference, prestack, depth migration code for parallel computers. It is also capable of processing 2D and poststack data. The code requires as input a seismic dataset, a velocity model and a file of parameters that allows the user to select various options. The code uses this information to produce a seismic image. Some of the options available to the user include the application of various filters and imaging conditions. The code also incorporates phase encoding (patent applied for) to process multiple shots simultaneously.

  4. 48 CFR 636.602-5 - Short selection processes for contracts not to exceed the simplified acquisition threshold.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Acquisition Regulations System DEPARTMENT OF STATE SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND... not to exceed the simplified acquisition threshold. The short selection process described in FAR...

  5. 48 CFR 636.602-5 - Short selection processes for contracts not to exceed the simplified acquisition threshold.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Acquisition Regulations System DEPARTMENT OF STATE SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND... not to exceed the simplified acquisition threshold. The short selection process described in FAR...

  6. Effect of image bit depth on target acquisition modeling

    NASA Astrophysics Data System (ADS)

    Teaney, Brian P.; Reynolds, Joseph P.

    2008-04-01

The impact of bit depth on human-in-the-loop recognition and identification performance is of particular importance when considering trade-offs between resolution and bandwidth of sensor systems. This paper presents the results from two perception studies designed to measure the effects of quantization and finite bit depth on target acquisition performance. The results in this paper allow for the inclusion of limited bit depth and quantization as an additional noise term in NVESD sensor performance models.

  7. Fingerprint recognition using image processing

    NASA Astrophysics Data System (ADS)

    Dholay, Surekha; Mishra, Akassh A.

    2011-06-01

Fingerprint recognition is concerned with the difficult task of efficiently matching the image of a person's fingerprint against the fingerprints present in a database. It is used in forensic science to help identify criminals and in the authentication of individuals, since a fingerprint is unique to each person. The present paper describes fingerprint recognition methods using various edge detection techniques and shows how a correct fingerprint can be detected from camera images. The method does not require a special device; a simple camera suffices, so the technique can also be used on a camera-equipped mobile phone. Factors affecting the process include poor illumination, noise, viewpoint dependence, climate, and imaging conditions; these must be considered, so various image enhancement techniques are applied to increase image quality and remove noise. The described technique applies contour tracking to the fingerprint image, then edge detection on the contour, and finally matches the edges inside the contour.
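The edge-detection stage can be illustrated with a Sobel gradient operator, a common choice for such pipelines (the abstract does not commit to a specific kernel, so this is an assumed example):

```python
import numpy as np

def sobel_mag(img):
    """Gradient magnitude via the 3x3 Sobel kernels; large values mark
    edges such as fingerprint ridge boundaries."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            gx[y, x] = (patch * kx).sum()
            gy[y, x] = (patch * ky).sum()
    return np.hypot(gx, gy)

# vertical step edge: the magnitude peaks along the boundary column
img = np.zeros((8, 8))
img[:, 4:] = 1.0
mag = sobel_mag(img)
```

In practice the enhancement steps mentioned above (denoising, illumination correction) would run before this operator so that noise does not masquerade as ridge edges.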

  8. Imageability predicts the age of acquisition of verbs in Chinese children

    PubMed Central

    Ma, Weiyi; Golinkoff, Roberta Michnick; Hirsh-Pasek, Kathy; McDonough, Colleen; Tardif, Twila

    2010-01-01

    Verbs are harder to learn than nouns in English and in many other languages, but are relatively easy to learn in Chinese. This paper evaluates one potential explanation for these findings by examining the construct of imageability, or the ability of a word to produce a mental image. Chinese adults rated the imageability of Chinese words from the Chinese Communicative Development Inventory (Tardif et al., in press). Imageability ratings were a reliable predictor of age of acquisition in Chinese for both nouns and verbs. Furthermore, whereas early Chinese and English nouns do not differ in imageability, verbs receive higher imageability ratings in Chinese than in English. Compared with input frequency, imageability independently accounts for a portion of the variance in age of acquisition (AoA) of verb learning in Chinese and English. PMID:18937878

  9. Using image processing techniques on proximity probe signals in rotordynamics

    NASA Astrophysics Data System (ADS)

    Diamond, Dawie; Heyns, Stephan; Oberholster, Abrie

    2016-06-01

This paper proposes a new approach to processing proximity probe signals in rotordynamic applications. It is argued that the signal be interpreted as a one-dimensional image. Existing image processing techniques can then be used to gain information about the object being measured. Some results from one application are presented. Rotor blade tip deflections can be calculated by localizing phase information in this one-dimensional image. It is experimentally shown that the newly proposed method performs more accurately than standard techniques, especially where the sampling rate of the data acquisition system is inadequate by conventional standards.
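One standard way to localize a pulse in such a one-dimensional image is cross-correlation with a reference pulse; the shift of the correlation peak gives the blade arrival delay, from which tip deflection follows. The synthetic Gaussian pulses below are a generic illustration, not necessarily the authors' exact estimator:

```python
import numpy as np

n = 256
t = np.arange(n)
# reference probe pulse centred at sample 100, and the same pulse
# delayed by 12 samples (e.g. a deflected blade arriving late)
ref = np.exp(-0.5 * ((t - 100) / 5.0) ** 2)
sig = np.exp(-0.5 * ((t - 112) / 5.0) ** 2)

# the cross-correlation peak sits at lag = delay
corr = np.correlate(sig, ref, mode="full")
delay = int(np.argmax(corr)) - (n - 1)
```

Fitting a parabola through the three samples around the peak would refine this to sub-sample precision, which is the usual remedy when the acquisition sampling rate is marginal.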

  10. A Procedure of Image Acquisition and Display Based on Ov7670

    NASA Astrophysics Data System (ADS)

    Yao, Jun; Yang, Dongxuan

We present a design in which the K60 MCU, using a DMA data-transfer driver, acquires image data from the OV7670 image sensor and transmits the collected data through the serial port to a PC, achieving real-time, synchronized acquisition and display of the image data stream.

  11. Performance of reduced bit-depth acquisition for optical frequency domain imaging

    PubMed Central

    Goldberg, Brian D.; Vakoc, Benjamin J.; Oh, Wang-Yuhl; Suter, Melissa J.; Waxman, Sergio; Freilich, Mark I.; Bouma, Brett E.; Tearney, Guillermo J.

    2009-01-01

    High-speed optical frequency domain imaging (OFDI) has enabled practical wide-field microscopic imaging in the biological laboratory and clinical medicine. The imaging speed of OFDI, and therefore the field of view, of current systems is limited by the rate at which data can be digitized and archived rather than the system sensitivity or laser performance. One solution to this bottleneck is to natively digitize OFDI signals at reduced bit depths, e.g., at 8-bit depth rather than the conventional 12–14 bit depth, thereby reducing overall bandwidth. However, the implications of reduced bit-depth acquisition on image quality have not been studied. In this paper, we use simulations and empirical studies to evaluate the effects of reduced depth acquisition on OFDI image quality. We show that image acquisition at 8-bit depth allows high system sensitivity with only a minimal drop in the signal-to-noise ratio compared to higher bit-depth systems. Images of a human coronary artery acquired in vivo at 8-bit depth are presented and compared with images at higher bit-depth acquisition. PMID:19770914
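The roughly 6 dB-per-bit SNR cost of reduced bit depth can be illustrated by quantizing a synthetic fringe signal; the full-scale range, signal frequency, and amplitude below are assumptions, not the authors' system parameters:

```python
import numpy as np

def quantize(x, bits):
    """Ideal uniform quantizer over an assumed full scale of [-1, 1)."""
    levels = 2 ** bits
    q = 2.0 / levels
    return np.clip(np.round(x / q), -levels // 2, levels // 2 - 1) * q

def snr_db(x, bits):
    """SNR of the quantized signal relative to the quantization error."""
    err = quantize(x, bits) - x
    return 10 * np.log10(np.mean(x ** 2) / np.mean(err ** 2))

t = np.linspace(0, 1, 4096, endpoint=False)
sig = 0.9 * np.sin(2 * np.pi * 50 * t)     # fringe signal near full scale
s8, s12 = snr_db(sig, 8), snr_db(sig, 12)
```

For a near-full-scale sinusoid the 8-bit SNR lands near the textbook 6.02·B + 1.76 dB figure (about 50 dB), which is why 8-bit acquisition costs so little when the optical noise floor, not quantization, dominates.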

  12. Data Acquisition and Image Reconstruction Systems from the miniPET Scanners to the CARDIOTOM Camera

    SciTech Connect

    Valastvan, I.; Imrek, J.; Hegyesi, G.; Molnar, J.; Novak, D.; Bone, D.; Kerek, A.

    2007-11-26

Nuclear imaging devices play an important role in medical diagnosis as well as drug research. The first and second generation data acquisition systems and the image reconstruction library developed provide a unified hardware and software platform for the miniPET-I and miniPET-II small animal PET scanners and for the CARDIOTOM™.

  13. Towards the development of Hyperspectral Images of trench walls. Robotrench: Automatic Data acquisition

    NASA Astrophysics Data System (ADS)

    Ragona, D. E.; Minster, B.; Rockwell, T. K.; Fialko, Y.; Bloom, R. G.; Hemlinger, M.

    2004-12-01

Previous studies on imaging spectrometry of paleoseismological excavations (Ragona et al., 2003, 2004) showed that low-resolution hyperspectral imagery of a trench wall, processed with a supervised classification algorithm, provided more stratigraphic information than a high-resolution digital photograph of the same exposure. Although the low-resolution images depicted the most important variations, a higher resolution hyperspectral image is necessary to assist in the recognition and documentation of paleoseismic events. Because our spectroradiometer can only acquire one pixel at a time, creating a 25 psi image of a 1 x 1 m area of a trench wall requires 40000 individual measurements. To ease this extensive task we designed and built a device that can automatically position the spectroradiometer probe along the x-z plane of a trench wall. This device, informally named Robotrench, has two 7-foot-long axes of motion (horizontal and vertical) commanded by a stepper motor controller board and a laptop computer. A platform provides the set-up for the spectroradiometer probe and for the calibrated illumination system. A small circuit provides the interface between the Robotrench motion and the spectroradiometer data collection. At its best, the Robotrench-spectroradiometer pair can automatically record 1500-2000 pixels/hour, making the image acquisition process slow but feasible. At the time of this abstract submission, only a small calibration experiment had been completed. This experiment was designed to calibrate the X-Z axes and to test the instrument performance. We measured a 20 x 10 cm brick wall at a 25 psi resolution. Three reference marks were set up on the trench wall as control points for the image registration process. The experiment was conducted at night under artificial light (stabilized 2 x 50 W halogen lamps). The data obtained were processed with the Spectral Angle Mapper algorithm. The image recovered from the data showed an
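The Spectral Angle Mapper step can be sketched as assigning each pixel spectrum to the reference spectrum with which it makes the smallest angle; because only spectral shape matters, the classification is insensitive to overall brightness. The reference spectra below are made up for illustration:

```python
import numpy as np

def sam_classify(pixel, refs):
    """Spectral Angle Mapper: return the index of the reference spectrum
    making the smallest angle with the pixel spectrum."""
    angles = [np.arccos(np.clip(
        np.dot(pixel, r) / (np.linalg.norm(pixel) * np.linalg.norm(r)),
        -1.0, 1.0)) for r in refs]
    return int(np.argmin(angles))

refs = [np.array([1.0, 0.2, 0.1]),   # hypothetical "unit 1" spectrum
        np.array([0.1, 0.3, 1.0])]   # hypothetical "unit 2" spectrum

cls_dark = sam_classify(0.5 * refs[0], refs)          # shadowed unit 1
cls_other = sam_classify(np.array([0.2, 0.4, 1.1]), refs)
```

A pixel that is simply a darker version of a reference (here, half brightness) makes a zero angle with it, which is exactly the illumination invariance that matters on an unevenly lit trench wall.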

  14. Computer image processing: Geologic applications

    NASA Technical Reports Server (NTRS)

    Abrams, M. J.

    1978-01-01

Computer image processing of digital data was performed to support several geological studies. The specific goals were to: (1) relate the mineral content to the spectral reflectance of certain geologic materials, (2) determine the influence of environmental factors, such as atmosphere and vegetation, and (3) improve image processing techniques. For detection of spectral differences related to mineralogy, the technique of band ratioing was found to be the most useful. The influence of atmospheric scattering and methods to correct for the scattering were also studied. Two techniques were used to correct for atmospheric effects: (1) dark object subtraction, and (2) normalization by use of ground spectral measurements. Of the two, the first technique proved to be the most successful for removing the effects of atmospheric scattering. A digital mosaic was produced from two side-lapping LANDSAT frames. The advantages were that the same enhancement algorithm can be applied to both frames, and there is no seam where the two images are joined.
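Dark-object subtraction followed by band ratioing can be sketched on synthetic bands. The gains, additive haze offsets, and the assumption that the scene contains a truly dark pixel (the premise of the dark-object method) are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
reflect = rng.uniform(0.2, 0.8, (8, 8))   # synthetic surface reflectance
reflect[0, 0] = 0.0                       # a genuinely dark object (shadow)

# sensor digital numbers: gain * reflectance + additive path radiance (haze)
band_a = 100 * reflect + 12.0
band_b = 50 * reflect + 7.0               # band B: half the gain, its own haze

# dark-object subtraction: treat the minimum DN as pure atmospheric scatter
a = band_a - band_a.min()
b = band_b - band_b.min()
ratio = a / np.maximum(b, 1e-6)           # band ratio with the haze removed
```

With the additive offsets removed, the ratio recovers the pure 2:1 spectral contrast everywhere (except at the all-zero dark pixel itself); without the subtraction, the haze terms would bias the ratio differently in bright and dark areas.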

  15. Linear Algebra and Image Processing

    ERIC Educational Resources Information Center

    Allali, Mohamed

    2010-01-01

    We use digital image processing (DIP) as a computing technology to enhance the teaching of linear algebra and make the course more visual and interesting. This visual approach of using technology to link linear algebra to DIP has proven interesting and unexpected to students and faculty alike. (Contains 2 tables and 11 figures.)
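
One classic classroom example of the linear algebra/DIP link (an illustration; not necessarily the exercise used in the article) is low-rank image compression via the singular value decomposition:

```python
import numpy as np

# A grayscale image is just a matrix, so linear algebra applies directly.
# Keeping only the k largest singular values gives a compressed image whose
# Frobenius error equals the energy in the discarded singular values
# (the Eckart-Young theorem).
rng = np.random.default_rng(0)
image = rng.random((32, 32))          # stand-in for a grayscale image

U, s, Vt = np.linalg.svd(image, full_matrices=False)
k = 8
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
err = np.linalg.norm(image - approx)  # == sqrt(sum of s[k:]**2)
```

Students can vary `k` and watch the image sharpen, tying abstract rank directly to visible detail.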

  16. Concept Learning through Image Processing.

    ERIC Educational Resources Information Center

    Cifuentes, Lauren; Yi-Chuan, Jane Hsieh

    This study explored computer-based image processing as a study strategy for middle school students' science concept learning. Specifically, the research examined the effects of computer graphics generation on science concept learning and the impact of using computer graphics to show interrelationships among concepts during study time. The 87…

  17. Light field sensor and real-time panorama imaging multi-camera system and the design of data acquisition

    NASA Astrophysics Data System (ADS)

    Lu, Yu; Tao, Jiayuan; Wang, Keyi

    2014-09-01

    Advanced image sensors and powerful parallel data acquisition chips can be used to collect more detailed and comprehensive light field information. By recording light field data with multiple single-aperture, high-resolution sensors and processing those data in real time, we can obtain a wide field-of-view (FOV), high-resolution image. Wide-FOV, high-resolution imaging has promising applications in navigation, surveillance and robotics. Quality-enhanced 3D rendering, very high resolution depth map estimation, high dynamic range and other applications can be obtained by post-processing these large light field data. FOV and resolution are contradictory requirements in a traditional single-aperture optical imaging system and cannot be reconciled well. We have designed a multi-camera light field data acquisition system and optimized each sensor's spatial location and relations; it can be used for wide-FOV, high-resolution real-time imaging. The system uses 5-megapixel CMOS sensors, with a field-programmable gate array (FPGA) acquiring the light field data, processing it in parallel and transmitting it to a PC. A common clock signal is distributed to all of the cameras, and a synchronization precision of 40 ns is achieved for each camera. Using 9 CMOS sensors we built an initial system and obtained a high-resolution 360°×60° FOV image. The system is intended to be flexible, modular and scalable, with much visibility and control over the cameras. For system data transfer we used the high-speed dedicated camera interface Camera Link. The details of the hardware architecture, its internal blocks, the algorithms, and the device calibration procedure are presented, along with imaging results.

  18. Method and apparatus for high speed data acquisition and processing

    DOEpatents

    Ferron, J.R.

    1997-02-11

    A method and apparatus are disclosed for high speed digital data acquisition. The apparatus includes one or more multiplexers for receiving multiple channels of digital data at a low data rate and asserting a multiplexed data stream at a high data rate, and one or more FIFO memories for receiving data from the multiplexers and asserting the data to a real time processor. Preferably, the invention includes two multiplexers, two FIFO memories, and a 64-bit bus connecting the FIFO memories with the processor. Each multiplexer receives four channels of 14-bit digital data at a rate of up to 5 MHz per channel, and outputs a data stream to one of the FIFO memories at a rate of 20 MHz. The FIFO memories assert output data in parallel to the 64-bit bus, thus transferring 14-bit data values to the processor at a combined rate of 40 MHz. The real time processor is preferably a floating-point processor which processes 32-bit floating-point words. A set of mask bits is prestored in each 32-bit storage location of the processor memory into which a 14-bit data value is to be written. After data transfer from the FIFO memories, mask bits are concatenated with each stored 14-bit data value to define a valid 32-bit floating-point word. Preferably, a user can select any of several modes for starting and stopping direct memory transfers of data from the FIFO memories to memory within the real time processor, by setting the content of a control and status register. 15 figs.
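
The mask-bit scheme in the patent prestores bits in each 32-bit word so that a raw 14-bit sample dropped into the low bits forms a valid floating-point number with no conversion step. A common instance of this trick, shown here as an assumption rather than the patent's exact bit pattern, presets the word to the float 2^23 (whose mantissa is zero) so that OR-ing an integer n into the mantissa yields the float 2^23 + n:

```python
import struct

MASK = 0x4B000000            # IEEE-754 bit pattern of 8388608.0 (2**23)

def masked_to_float(sample14):
    """Concatenate prestored mask bits with a 14-bit sample, reinterpret the
    word as an IEEE-754 float, and remove the 2**23 bias."""
    word = MASK | (sample14 & 0x3FFF)
    (value,) = struct.unpack("<f", struct.pack("<I", word))
    return value - 8388608.0

# A 14-bit ADC sample comes back exactly, already in floating point.
recovered = masked_to_float(12345)
```

The payoff in hardware is that the DMA engine can deposit raw samples straight into processor memory and the floating-point unit can use them immediately.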

  19. Method and apparatus for high speed data acquisition and processing

    DOEpatents

    Ferron, John R.

    1997-01-01

    A method and apparatus for high speed digital data acquisition. The apparatus includes one or more multiplexers for receiving multiple channels of digital data at a low data rate and asserting a multiplexed data stream at a high data rate, and one or more FIFO memories for receiving data from the multiplexers and asserting the data to a real time processor. Preferably, the invention includes two multiplexers, two FIFO memories, and a 64-bit bus connecting the FIFO memories with the processor. Each multiplexer receives four channels of 14-bit digital data at a rate of up to 5 MHz per channel, and outputs a data stream to one of the FIFO memories at a rate of 20 MHz. The FIFO memories assert output data in parallel to the 64-bit bus, thus transferring 14-bit data values to the processor at a combined rate of 40 MHz. The real time processor is preferably a floating-point processor which processes 32-bit floating-point words. A set of mask bits is prestored in each 32-bit storage location of the processor memory into which a 14-bit data value is to be written. After data transfer from the FIFO memories, mask bits are concatenated with each stored 14-bit data value to define a valid 32-bit floating-point word. Preferably, a user can select any of several modes for starting and stopping direct memory transfers of data from the FIFO memories to memory within the real time processor, by setting the content of a control and status register.

  20. Rapid acquisition of high-volume microscopic images using predicted focal plane.

    PubMed

    Yu, Lingjie; Wang, Rongwu; Zhou, Jinfeng; Xu, Bugao

    2016-09-01

    For an automated microscopic imaging system, the image acquisition speed is one of the most critical performance features because many applications require analysing high volumes of images. This paper illustrates a novel approach for the rapid acquisition of high-volume microscopic images used to count blood cells automatically. The approach first forms a panoramic image of the sample slide by stitching sequential images captured at a low magnification, selects a few basic points (x, y) indicating the target areas from the panoramic image, and then refocuses the slide at each of the basic points at the regular magnification to record the depth position (z). The focusing coordinates (x, y, z) at these basic points are used to calculate a predicted focal plane that defines the relationship between the focus position (z) and the stage position (x, y). Via the predicted focal plane, the system can directly focus the objective lens at any local view, and can save tremendous image-acquisition time by avoiding the autofocusing function. The experiments showed how to determine the optimal number of basic points for a given imaging condition, and proved that there is no significant difference between images captured using the autofocusing function and those captured using the predicted focal plane. PMID:27229441
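
The predicted-focal-plane idea can be sketched as a least-squares plane fit z = ax + by + c through the (x, y, z) focus coordinates of the basic points; the paper's exact model may differ, so treat this as illustrative:

```python
import numpy as np

def fit_focal_plane(points):
    """Fit z = a*x + b*y + c by least squares to rows of (x, y, z)."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs                              # (a, b, c)

def predict_focus(coeffs, x, y):
    """Predict the focus position z for any stage position (x, y)."""
    a, b, c = coeffs
    return a * x + b * y + c

# A slide tilted by a known plane is recovered exactly from 4 basic points.
coeffs = fit_focal_plane([(0, 0, 1.0), (10, 0, 1.2), (0, 10, 0.8), (10, 10, 1.0)])
z = predict_focus(coeffs, 5, 5)
```

Once the plane is fitted, every subsequent field of view is focused by a single stage move instead of a full autofocus sweep, which is where the acquisition-time savings come from.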

  1. Reengineering the Acquisition/Procurement Process: A Methodology for Requirements Collection

    NASA Technical Reports Server (NTRS)

    Taylor, Randall; Vanek, Thomas

    2011-01-01

    This paper captures the systematic approach taken by JPL's Acquisition Reengineering Project team, the methodology used, challenges faced, and lessons learned. It provides pragmatic "how-to" techniques and tools for collecting requirements and for identifying areas of improvement in an acquisition/procurement process or other core process of interest.

  2. 77 FR 2682 - Defense Federal Acquisition Regulation Supplement; DoD Voucher Processing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-19

    ... Regulation Supplement; DoD Voucher Processing AGENCY: Defense Acquisition Regulations System, Department of Defense (DoD). ACTION: Proposed rule. SUMMARY: DoD is proposing to amend the Defense Federal Acquisition Regulation Supplement (DFARS) to update DoD's voucher processing procedures and better accommodate the use...

  3. ImageJ: Image processing and analysis in Java

    NASA Astrophysics Data System (ADS)

    Rasband, W. S.

    2012-06-01

    ImageJ is a public domain Java image processing program inspired by NIH Image. It can display, edit, analyze, process, save and print 8-bit, 16-bit and 32-bit images. It can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and "raw". It supports "stacks", a series of images that share a single window. It is multithreaded, so time-consuming operations such as image file reading can be performed in parallel with other operations.

  4. Design and implementation of non-linear image processing functions for CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Musa, Purnawarman; Sudiro, Sunny A.; Wibowo, Eri P.; Harmanto, Suryadi; Paindavoine, Michel

    2012-11-01

    Today, solid-state image sensors are used in many applications such as mobile phones, video surveillance systems, embedded medical imaging and industrial vision systems. These image sensors require the integration, in the focal plane (or near the focal plane), of complex image processing algorithms. Such devices must meet constraints related to the quality of acquired images, the speed and performance of embedded processing, and low power consumption. To achieve these objectives, low-level analog processing allows the useful information in the scene to be extracted directly. For example, an edge detection step followed by a local maxima extraction facilitates high-level processing such as object pattern recognition in a visual scene. Our goal was to design an intelligent image sensor prototype achieving high-speed image acquisition and non-linear image processing (such as local minima and maxima calculations). For this purpose, we present in this article the design and test of a 64×64 pixel image sensor built in a standard 0.35 μm CMOS technology and including non-linear image processing. The architecture of our sensor, named nLiRIC (non-Linear Rapid Image Capture), is based on the implementation of an analog Minima/Maxima Unit (MMU). This MMU calculates the minimum and maximum values (non-linear functions), in real time, in a 2×2 pixel neighbourhood. Each MMU needs 52 transistors and the pixel pitch is 40×40 μm. The total area of the 64×64 pixel array is 12.5 mm². Our tests have shown the validity of the main functions of our new image sensor, such as fast image acquisition (10K frames per second) and minima/maxima calculations in less than one ms.
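
A digital model of the MMU's function makes the operation concrete. The chip computes this in analog hardware; the sketch below is purely for illustration:

```python
import numpy as np

def mmu_2x2(image):
    """Return, for every overlapping 2x2 pixel neighbourhood, the local
    minimum and maximum (the non-linear functions the MMU implements)."""
    img = np.asarray(image, dtype=float)
    blocks = np.stack([img[:-1, :-1], img[:-1, 1:], img[1:, :-1], img[1:, 1:]])
    return blocks.min(axis=0), blocks.max(axis=0)

frame = np.array([[1, 5, 2],
                  [7, 3, 8],
                  [4, 6, 0]])
mins, maxs = mmu_2x2(frame)
```

The difference `maxs - mins` is a cheap local-contrast map, which is one reason min/max neighbourhoods are useful ahead of edge detection.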

  5. Developmental Stages in Receptive Grammar Acquisition: A Processability Theory Account

    ERIC Educational Resources Information Center

    Buyl, Aafke; Housen, Alex

    2015-01-01

    This study takes a new look at the topic of developmental stages in the second language (L2) acquisition of morphosyntax by analysing receptive learner data, a language mode that has hitherto received very little attention within this strand of research (for a recent and rare study, see Spinner, 2013). Looking at both the receptive and productive…

  6. DEVELOPMENT OF MARKETABLE TYPING SKILL--SENSORY PROCESSES UNDERLYING ACQUISITION.

    ERIC Educational Resources Information Center

    WEST, LEONARD J.

    THE PROJECT ATTEMPTED TO PROVIDE FURTHER DATA ON THE DOMINANT HYPOTHESIS ABOUT THE SENSORY MECHANISMS UNDERLYING SKILL ACQUISITION IN TYPEWRITING. IN SO DOING, IT PROPOSED TO FURNISH A BASIS FOR IMPORTANT CORRECTIVES TO SUCH CONVENTIONAL INSTRUCTIONAL PROCEDURES AS TOUCH TYPING. SPECIFICALLY, THE HYPOTHESIS HAS BEEN THAT KINESTHESIS IS NOT…

  7. Effect of temporal acquisition parameters on image quality of strain time constant elastography.

    PubMed

    Nair, Sanjay; Varghese, Joshua; Chaudhry, Anuj; Righetti, Raffaella

    2015-04-01

    Ultrasound methods to image the time constant (TC) of elastographic tissue parameters have been developed recently. Elastographic TC images from creep or stress relaxation tests have been shown to provide information on the viscoelastic and poroelastic behavior of tissues. However, the effect of temporal ultrasonic acquisition parameters and input noise on the image quality of the resultant strain TC elastograms has not yet been fully investigated. Understanding such effects could have important implications for clinical applications of these novel techniques. This work reports a simulation study aimed at investigating the effects of varying windows of observation, acquisition frame rate, and strain signal-to-noise ratio (SNR) on the image quality of elastographic TC estimates. A pilot experimental study was used to corroborate the simulation results in specific testing conditions. The results of this work suggest that the total acquisition time necessary for accurate strain TC estimates depends linearly on the underlying strain TC (as estimated from the theoretical strain-vs.-time curve). The results also indicate that it might be possible to make accurate estimates of the elastographic TC (within 10% error) using windows of observation as small as 20% of the underlying TC, provided sufficiently fast acquisition rates (>100 Hz for typical acquisition depths). The limited experimental data reported in this study statistically confirm the simulation trends, suggesting that the proposed model can be used as upper-bound guidance for the correct execution of the experiments. PMID:24942645
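
An idealized example of TC estimation from a short observation window (a sketch under simplifying assumptions, not the paper's estimator): sample the creep curve strain(t) = 1 - exp(-t/TC), linearize it as y = -ln(1 - strain) = t/TC, and fit the slope through the origin.

```python
import numpy as np

def estimate_tc(t, strain):
    """Estimate the time constant TC from noiseless samples of an
    idealized creep curve strain(t) = 1 - exp(-t / TC)."""
    t = np.asarray(t, dtype=float)
    y = -np.log(1.0 - np.asarray(strain, dtype=float))
    slope = np.sum(t * y) / np.sum(t * t)      # least-squares fit of y = slope*t
    return 1.0 / slope

tc_true = 2.0
t = np.linspace(0.05, 0.4, 50)                 # window = 20% of the true TC
strain = 1.0 - np.exp(-t / tc_true)
tc_est = estimate_tc(t, strain)
```

With noise added to `strain`, the estimate degrades as the window shrinks, which is the trade-off the simulation study quantifies.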

  8. Selection of methods for etching carbides in MAR-M509 cobalt-base superalloy and acquisition of their images

    SciTech Connect

    Szala, Janusz . E-mail: janusz.szala@polsl.pl; Szczotok, Agnieszka . E-mail: agnieszka.szczotok@polsl.pl; Richter, Janusz . E-mail: janusz.richter@polsl.pl; Cwajna, Jan . E-mail: jan.cwajna@polsl.pl; Maciejny, Adolf . E-mail: adolf.maciejny@polsl.pl

    2006-06-15

    The paper summarizes results of research into conditions for selectively revealing carbides and carbide eutectics occurring in the structure of MAR-M509 cobalt-base alloy, as well as conditions for their detection. Also discussed are the various conditions for acquisition and registration of structural images (by means of light and scanning electron microscopes) to ensure the selective detection of carbides and carbide eutectics. In particular, the influence of accelerating voltage on the possibility of automating the detection process is analyzed. Very good results were obtained on images registered by applying very low accelerating voltages (0.5 to 1 kV).

  9. Quantum dot imaging in the second near-infrared optical window: studies on reflectance fluorescence imaging depths by effective fluence rate and multiple image acquisition

    NASA Astrophysics Data System (ADS)

    Jung, Yebin; Jeong, Sanghwa; Nayoun, Won; Ahn, Boeun; Kwag, Jungheon; Geol Kim, Sang; Kim, Sungjee

    2015-04-01

    Quantum dot (QD) imaging capability was investigated in terms of imaging depth in the near-infrared second optical window (SOW; 1000 to 1400 nm), using time-modulated pulsed laser excitations to control the effective fluence rate. Various media, such as liquid phantoms, tissues, and in vivo small animals, were used and the imaging depths were compared with our predicted values. The QD imaging depth under excitation with a continuous 20 mW/cm2 laser was determined to be 10.3 mm for a 2 wt% hemoglobin phantom medium and 5.85 mm for a 1 wt% intralipid phantom; these were extended more than twofold upon increasing the effective fluence rate to 2000 mW/cm2. Bovine liver and porcine skin tissues also showed similar enhancement in contrast-to-noise ratio (CNR) values. A QD sample was inserted into the abdomen of a mouse. With a higher effective fluence rate, the CNR increased more than twofold and the QD sample, completely undetectable under continuous excitation, became clearly visualized. Multiple acquisitions of QD images with pixel-by-pixel averaging were performed to overcome the thermal noise issue of the detector in the SOW, yielding significant enhancement in imaging capability with up to a 1.5-fold increase in the CNR.
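
The multiple-acquisition scheme works because averaging N frames suppresses uncorrelated detector noise by roughly sqrt(N). A sketch with illustrative numbers, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)
signal = np.full((64, 64), 10.0)               # constant "QD" signal level
# 16 acquisitions, each corrupted by independent Gaussian thermal noise.
frames = signal + rng.normal(0.0, 5.0, size=(16, 64, 64))

single_noise = np.std(frames[0] - signal)      # noise in one acquisition
averaged = frames.mean(axis=0)                 # pixel-by-pixel average
avg_noise = np.std(averaged - signal)          # ~single_noise / sqrt(16)
```

For fixed-pattern (correlated) noise the averaging gain disappears, which is why the technique targets the thermal noise of the SOW detector specifically.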

  10. Active imaging system performance model for target acquisition

    NASA Astrophysics Data System (ADS)

    Espinola, Richard L.; Teaney, Brian; Nguyen, Quang; Jacobs, Eddie L.; Halford, Carl E.; Tofsted, David H.

    2007-04-01

    The U.S. Army RDECOM CERDEC Night Vision & Electronic Sensors Directorate has developed a laser-range-gated imaging system performance model for the detection, recognition, and identification of vehicle targets. The model is based on the established US Army RDECOM CERDEC NVESD sensor performance models of the human system response through an imaging system. The Java-based model, called NVLRG, accounts for the effect of active illumination, atmospheric attenuation, and turbulence effects relevant to LRG imagers, such as speckle and scintillation, and for the critical sensor and display components. This model can be used to assess the performance of recently proposed active SWIR systems through various trade studies. This paper will describe the NVLRG model in detail, discuss the validation of recent model components, present initial trade study results, and outline plans to validate and calibrate the end-to-end model with field data through human perception testing.

  11. Compressive image acquisition and classification via secant projections

    NASA Astrophysics Data System (ADS)

    Li, Yun; Hegde, Chinmay; Sankaranarayanan, Aswin C.; Baraniuk, Richard; Kelly, Kevin F.

    2015-06-01

    Given its importance in a wide variety of machine vision applications, extending high-speed object detection and recognition beyond the visible spectrum in a cost-effective manner presents a significant technological challenge. As a step in this direction, we developed a novel approach for target image classification using a compressive sensing architecture. Here we report the first implementation of this approach utilizing the compressive single-pixel camera system. The core of our approach rests on the design of new measurement patterns, or projections, that are tuned to objects of interest. Our measurement patterns are based on the notion of secant projections of image classes that are constructed using two different approaches. Both approaches show at least a twofold improvement in terms of the number of measurements over the conventional, data-oblivious compressive matched filter. As more noise is added to the image, the second method proves to be the most robust.
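
A toy sketch of compressive classification: instead of measuring all N pixels, take M << N linear projections y = Phi x and classify in the measurement domain. A random Phi stands in here for the paper's tuned secant projections (which are designed, not random), so this illustrates the data-oblivious baseline the authors improve upon:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pixels, n_meas = 256, 16
phi = rng.normal(size=(n_meas, n_pixels)) / np.sqrt(n_meas)  # measurement matrix

class_a = np.zeros(n_pixels); class_a[:128] = 1.0            # two target classes
class_b = np.zeros(n_pixels); class_b[128:] = 1.0
templates = {"A": phi @ class_a, "B": phi @ class_b}

def classify(x):
    """Nearest-template classification using only M compressive measurements."""
    y = phi @ x
    return min(templates, key=lambda k: np.linalg.norm(y - templates[k]))

label = classify(class_a + rng.normal(0.0, 0.05, n_pixels))  # noisy class-A image
```

Secant projections are chosen to preserve the pairwise differences between class images, which is why they need fewer measurements than a random Phi for the same accuracy.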

  12. High speed CMOS image acquisition and transmission system based on USB

    NASA Astrophysics Data System (ADS)

    Cui, Yundong; Jiang, Jie; Zhang, Guangjun

    2008-10-01

    A high-speed CMOS image acquisition and transmission system, composed of the CMOS image sensor IBIS5-A-1300, the USB 2.0 interface chip EZ-USB FX2 and an FPGA (Field Programmable Gate Array), is designed and developed. The design of the IBIS5-A-1300 driving timing, the USB interface chip timing, the firmware and the application program is introduced. Experiments show that the system offers high resolution and a high frame rate, supports single-frame acquisition and video preview, conforms to the USB 2.0 specification, and meets the demands of real-time data transmission.

  13. Seismic acquisition and processing methodologies in overthrust areas: Some examples from Latin America

    SciTech Connect

    Tilander, N.G.; Mitchel, R.

    1996-08-01

    Overthrust areas represent some of the last frontiers in petroleum exploration today. Billion-barrel discoveries in the Eastern Cordillera of Colombia and the Monagas fold-thrust belt of Venezuela during the past decade have highlighted the potential rewards of overthrust exploration. However, the seismic data recorded in many overthrust areas are disappointingly poor. Challenges such as rough topography, complex subsurface structure, the presence of high-velocity rocks at the surface, back-scattered energy and severe migration wavefronting continue to lower data quality and reduce interpretability. Lack of well/velocity control also reduces the reliability of depth estimations and migrated images. Failure to obtain satisfactory pre-drill structural images can easily result in costly wildcat failures. Advances in the methodologies used by Chevron for data acquisition, processing and interpretation have produced significant improvements in seismic data quality in Bolivia, Colombia and Trinidad. In this paper, seismic test results showing various swath geometries will be presented. We will also show recent examples of processing methods which have led to improved structural imaging. Rather than focusing on "black box" methodology, we will emphasize the cumulative effect of step-by-step improvements. Finally, the critical significance and interrelation of velocity measurements, modeling and depth migration will be explored. Pre-drill interpretations must ultimately encompass a variety of model solutions, and error bars should be established which realistically reflect the uncertainties in the data.

  14. Digital image processing: a primer for JVIR authors and readers: part 1: the fundamentals.

    PubMed

    LaBerge, Jeanne M; Andriole, Katherine P

    2003-10-01

    Online submission of manuscripts will be mandatory for most journals in the near future. To prepare authors for this requirement and to acquaint readers with this new development, herein the basics of digital image processing are described. From the fundamentals of digital image architecture, through acquisition, editing, and storage of digital images, the steps necessary to prepare an image for online submission are reviewed. In this article, the first of a three-part series, the structure of the digital image is described. In subsequent articles, the acquisition and editing of digital images will be reviewed. PMID:14551267

  15. Dual camera system for acquisition of high resolution images

    NASA Astrophysics Data System (ADS)

    Papon, Jeremie A.; Broussard, Randy P.; Ives, Robert W.

    2007-02-01

    Video surveillance is ubiquitous in modern society, but surveillance cameras are severely limited in utility by their low resolution. With this in mind, we have developed a system that can autonomously take high resolution still frame images of moving objects. In order to do this, we combine a low resolution video camera and a high resolution still frame camera mounted on a pan/tilt mount. In order to determine what should be photographed (objects of interest), we employ a hierarchical method which first separates foreground from background using a temporal-based median filtering technique. We then use a feed-forward neural network classifier on the foreground regions to determine whether the regions contain the objects of interest. This is done over several frames, and a motion vector is deduced for the object. The pan/tilt mount then focuses the high resolution camera on the next predicted location of the object, and an image is acquired. All components are controlled through a single MATLAB graphical user interface (GUI). The final system we present will be able to detect multiple moving objects simultaneously, track them, and acquire high resolution images of them. Results will demonstrate performance tracking and imaging varying numbers of objects moving at different speeds.
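
The temporal-median foreground/background separation used as the pipeline's first stage can be sketched as follows (the window length and threshold are illustrative assumptions, not the paper's values):

```python
import numpy as np

def foreground_mask(frames, threshold=20.0):
    """Estimate the static background as the per-pixel median over a window
    of frames, then flag pixels in the newest frame that differ from it."""
    stack = np.asarray(frames, dtype=float)    # shape (n_frames, rows, cols)
    background = np.median(stack, axis=0)
    return np.abs(stack[-1] - background) > threshold

# A bright object present only in the last frame is detected as foreground.
frames = np.zeros((5, 8, 8))
frames[-1, 2:4, 2:4] = 255.0
mask = foreground_mask(frames)
```

The median is preferred over the mean here because a briefly passing object barely shifts the median, so the background estimate stays clean.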

  16. Vehicle positioning using image processing

    NASA Astrophysics Data System (ADS)

    Kaur, Amardeep; Watkins, Steve E.; Swift, Theresa M.

    2009-03-01

    An image-processing approach is described that detects the position of a vehicle on a bridge. A load-bearing vehicle must be carefully positioned on a bridge for quantitative bridge monitoring. The personnel required for setup and testing and the time required for bridge closure or traffic control are important management and cost considerations. Consequently, bridge monitoring and inspections are good candidates for smart embedded systems. The objectives of this work are to reduce the need for personnel time and to minimize the time for bridge closure. An approach is proposed that uses a passive target on the bridge and camera instrumentation on the load vehicle. The orientation of the vehicle-mounted camera and the target determines the position. The experiment used pre-defined concentric circles as the target, a FireWire camera for image capture, and MATLAB for computer processing. Various image-processing techniques are compared, with respect to speed and accuracy in the positioning application, for determining the orientation of the target circles. The techniques for determining the target orientation use algorithms based on the centroid feature, template matching, color features, and Hough transforms. Timing parameters are determined for each algorithm to assess the feasibility of real-time use in a position-triggering system. Also, the effect of variations in the size and color of the circles is examined. The development can be combined with embedded sensors and sensor nodes for a complete automated procedure. As the load vehicle moves to the proper position, the image-based system can trigger an embedded measurement, which is then transmitted back to the vehicle control computer through a wireless link.
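
The simplest of the compared techniques, the centroid feature, reduces to averaging the coordinates of the segmented target pixels. A minimal sketch (illustrative; the paper also evaluates template matching, color features and Hough transforms):

```python
import numpy as np

def centroid(binary):
    """Return the (x, y) centroid of the True pixels in a binary mask."""
    ys, xs = np.nonzero(binary)        # row indices = y, column indices = x
    return xs.mean(), ys.mean()

# A segmented 3x3 target blob: its centroid gives the target position,
# from which the vehicle offset can be computed.
target = np.zeros((10, 10), dtype=bool)
target[4:7, 5:8] = True
cx, cy = centroid(target)
```

Because it is a single pass over the mask, the centroid is typically the fastest of the candidate algorithms, at the cost of sensitivity to segmentation noise.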

  17. Possible overlapping time frames of acquisition and consolidation phases in object memory processes: a pharmacological approach.

    PubMed

    Akkerman, Sven; Blokland, Arjan; Prickaerts, Jos

    2016-01-01

    In previous studies, we have shown that acetylcholinesterase inhibitors and phosphodiesterase inhibitors (PDE-Is) are able to improve object memory by enhancing acquisition processes. On the other hand, only PDE-Is improve consolidation processes. Here we show that the cholinesterase inhibitor donepezil also improves memory performance when administered within 2 min after the acquisition trial. Likewise, both PDE5-I and PDE4-I reversed the scopolamine deficit model when administered within 2 min after the learning trial. PDE5-I was effective up to 45 min after the acquisition trial and PDE4-I was effective when administered between 3 and 5.5 h after the acquisition trial. Taken together, our study suggests that acetylcholine, cGMP, and cAMP are all involved in acquisition processes and that cGMP and cAMP are also involved in early and late consolidation processes, respectively. Most important, these pharmacological studies suggest that acquisition processes continue for some time after the learning trial where they share a short common time frame with early consolidation processes. Additional brain concentration measurements of the drugs suggest that these acquisition processes can continue up to 4-6 min after learning. PMID:26670184

  18. Image processing photosensor for robots

    NASA Astrophysics Data System (ADS)

    Vinogradov, Sergey L.; Shubin, Vitaly E.

    1995-01-01

    Some aspects of the possible applications of a new, nontraditional generation of advanced photosensors with inherent internal image processing for multifunctional optoelectronic systems, such as machine vision systems (MVS), are discussed. The optical information in these solid-state photosensors, so-called photoelectric structures with memory (PESM), is registered and stored in the form of 2D charge and potential patterns in the plane of the layers; it may then be transferred and transformed in the normal direction through interaction of these patterns. PESM ensure a high potential for massively parallel processing, with effective rates up to 10^14 operations/bit/s in integral operations such as addition, subtraction, contouring and image correlation. A wide variety of devices and apparatus may be developed on their basis, ranging from automatic rangefinders to MVS for equipping robotized industries. Principal features, the physical background of the main primary operations, and complex functional algorithms for object selection, tracking, and guidance are briefly described. Examples are presented of the possible application of the PESM as an intelligent 'supervideosensor' that combines a high-quality imager, memory media and a high-capacity special-purpose processor.

  19. Image processing software for imaging spectrometry

    NASA Technical Reports Server (NTRS)

    Mazer, Alan S.; Martin, Miki; Lee, Meemong; Solomon, Jerry E.

    1988-01-01

    The paper presents a software system, Spectral Analysis Manager (SPAM), which has been specifically designed and implemented to provide the exploratory analysis tools necessary for imaging spectrometer data, using only modest computational resources. The basic design objectives are described as well as the major algorithms designed or adapted for high-dimensional images. Included in a discussion of system implementation are interactive data display, statistical analysis, image segmentation and spectral matching, and mixture analysis.

  20. Noise-compensated homotopic non-local regularized reconstruction for rapid retinal optical coherence tomography image acquisitions

    PubMed Central

    2014-01-01

    Background Optical coherence tomography (OCT) is a minimally invasive imaging technique, which utilizes the spatial and temporal coherence properties of optical waves backscattered from biological material. Recent advances in tunable lasers and infrared camera technologies have enabled an increase in OCT imaging speed by a factor of more than 100, which is important for retinal imaging, where we wish to study fast physiological processes in biological tissue. However, the high scanning rate causes a proportional decrease of the detector exposure time, resulting in a reduction of the system signal-to-noise ratio (SNR). One approach to improving the image quality of OCT tomograms acquired at high speed is to compensate for the noise component in the images without compromising the sharpness of the image details. Methods In this study, we propose a novel reconstruction method for rapid OCT image acquisitions, based on a noise-compensated homotopic modified James-Stein non-local regularized optimization strategy. The performance of the algorithm was tested on a series of high resolution OCT images of the human retina acquired at different imaging rates. Results Quantitative analysis was used to evaluate the performance of the algorithm against two state-of-the-art denoising strategies. The results demonstrate significant SNR improvements when using our proposed approach compared to the other approaches. Conclusions A new reconstruction method based on a noise-compensated homotopic modified James-Stein non-local regularized optimization strategy was developed for the purpose of improving the quality of rapid OCT image acquisitions. Preliminary results suggest that the proposed method holds considerable promise as a tool to improve the visualization and analysis of biological material using OCT. PMID:25319186

  1. Calibration of a flood inundation model using a SAR image: influence of acquisition time

    NASA Astrophysics Data System (ADS)

    Van Wesemael, Alexandra; Gobeyn, Sacha; Neal, Jeffrey; Lievens, Hans; Van Eerdenbrugh, Katrien; De Vleeschouwer, Niels; Schumann, Guy; Vernieuwe, Hilde; Di Baldassarre, Giuliano; De Baets, Bernard; Bates, Paul; Verhoest, Niko

    2016-04-01

    Flood risk management has long searched for effective prediction approaches, and the calibration of flood inundation models is continuously being improved. In practice, this calibration process consists of finding the optimal roughness parameters, both channel and floodplain Manning coefficients, since these values considerably influence the flood extent in a catchment. In addition, Synthetic Aperture Radar (SAR) images have been proven to be a very useful tool in calibrating the flood extent. These images can distinguish between wet (flooded) and dry (non-flooded) pixels through the intensity of backscattered radio waves. To date, however, a satellite overpass often occurs only once during a flood event. Therefore, this study is specifically concerned with the effect of the timing of the SAR data acquisition on calibration results. In order to model the flood extent, the raster-based inundation model LISFLOOD-FP is used together with a high-resolution synthetic aperture radar image (ERS-2 SAR) of a flood event of the river Dee, Wales, in December 2006. As only one satellite image of the considered case study is available, a synthetic framework is implemented in order to generate a time series of SAR observations. These synthetic observations are then used to calibrate the model at different time instants. In doing so, the sensitivity of the model output to the channel and floodplain Manning coefficients is studied through time. The results suggest clear differences in the spatial variability with which water is held within the floodplain, and these differences appear to vary through time. Calibration by means of satellite flood observations obtained from the rising or receding limb would generally lead to more reliable results than near-peak flow observations.

  2. Contrast medium administration and image acquisition parameters in renal CT angiography: what radiologists need to know

    PubMed Central

    Saade, Charbel; Deeb, Ibrahim Alsheikh; Mohamad, Maha; Al-Mohiy, Hussain; El-Merhi, Fadi

    2016-01-01

    Over the last decade, exponential advances in computed tomography (CT) technology have resulted in improved spatial and temporal resolution. Faster image acquisition has enabled renal CT angiography to become a viable and effective noninvasive alternative in diagnosing renal vascular pathologies. However, with these advances, new challenges in contrast media administration have emerged. Poor synchronization between the scanner and contrast media administration has reduced the consistency in image quality, with poor spatial and contrast resolution. A comprehensive understanding of contrast media dynamics is essential in the design and implementation of contrast administration and image acquisition protocols. This review includes an overview of the parameters affecting renal artery opacification and current protocol strategies to achieve optimal image quality during renal CT angiography with iodinated contrast media, with current safety issues highlighted. PMID:26728701

  3. 2D imaging and 3D sensing data acquisition and mutual registration for painting conservation

    NASA Astrophysics Data System (ADS)

    Fontana, Raffaella; Gambino, Maria Chiara; Greco, Marinella; Marras, Luciano; Pampaloni, Enrico M.; Pelagotti, Anna; Pezzati, Luca; Poggi, Pasquale

    2005-01-01

    We describe the application of 2D and 3D data acquisition and mutual registration to the conservation of paintings. RGB color image acquisition, IR and UV fluorescence imaging, together with the more recent hyperspectral imaging (32 bands), are among the most useful techniques in this field. They are generally meant to provide information on the painting materials, on the techniques employed, and on the object's state of conservation. However, only when the various images are precisely registered with each other and with the 3D model can ambiguity be removed and safe conclusions drawn. We present the integration of 2D and 3D measurements carried out on two different paintings: "Madonna of the Yarnwinder" by Leonardo da Vinci, and "Portrait of Lionello d'Este" by Pisanello, both painted in the XV century.

  5. Development and application of a high speed digital data acquisition technique to study steam bubble collapse using particle image velocimetry

    SciTech Connect

    Schmidl, W.D.

    1992-08-01

    The use of a Particle Image Velocimetry (PIV) method that relies on digital cameras for data acquisition to study high-speed fluid flows is usually limited by the digital camera's frame acquisition rate. The velocity of the fluid under study has to be limited to ensure that the tracer seeds suspended in the fluid remain in the camera's focal plane for at least two consecutive images. However, the use of digital cameras for data acquisition is desirable to simplify and expedite the data analysis process. A technique was developed which measures fluid velocities with PIV techniques using two successive digital images and two different framing rates simultaneously. The first part of the method measures changes occurring in the flow field at a relatively slow frame interval of 53.8 ms. The second part measures changes to the same flow field at a relatively fast frame interval of 100 to 320 μs. The effectiveness of this technique was tested by studying the collapse of steam bubbles in a subcooled tank of water, a relatively high-speed phenomenon. The tracer particles were recorded and velocity vectors for the fluid were obtained far from the steam bubble collapse.
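    The core PIV operation this abstract relies on, recovering the displacement of tracer patterns between two successive frames, can be sketched as a cross-correlation peak search (illustrative code, not the author's; production PIV uses FFT-based correlation over interrogation windows). Dividing the recovered shift by the frame interval then gives a velocity estimate.

```python
# Illustrative sketch of the PIV building block: find the (dy, dx) shift
# of frame_b relative to frame_a that maximises a discrete correlation.
# Frames are small 2-D lists of intensities; data are toy values.

def best_shift(frame_a, frame_b, max_shift):
    """Exhaustive correlation peak search over shifts up to max_shift."""
    h, w = len(frame_a), len(frame_a[0])
    best, best_score = (0, 0), float("-inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = 0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        # accumulate overlap correlation for this shift
                        score += frame_a[y][x] * frame_b[yy][xx]
            if score > best_score:
                best, best_score = (dy, dx), score
    return best
```

With a bright tracer at one pixel in the first frame and one row lower in the second, the search returns a shift of one row; velocity would follow as shift × pixel size / frame interval.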

  7. MR imaging of ore for heap bioleaching studies using pure phase encode acquisition methods

    NASA Astrophysics Data System (ADS)

    Fagan, Marijke A.; Sederman, Andrew J.; Johns, Michael L.

    2012-03-01

    Various MRI techniques were considered with respect to imaging of aqueous flow fields in low-grade copper ore. Spin echo frequency encoded techniques were shown to produce unacceptable image distortions, which led to pure phase encoded techniques being considered. Single point imaging multiple point acquisition (SPI-MPA) and spin echo single point imaging (SESPI) techniques were applied. By direct comparison with X-ray tomographic images, both techniques were found to produce distortion-free images of the ore packings at 2 T. The signal to noise ratios (SNRs) of the SESPI images were found to be superior to SPI-MPA for equal total acquisition times; this was explained based on NMR relaxation measurements. SESPI was also found to produce suitable images for a range of particle sizes, whereas the SPI-MPA SNR deteriorated markedly as particle size was reduced. Comparisons on a 4.7 T magnet showed significant signal loss from the SPI-MPA images, the effect of which was accentuated in the case of unsaturated flowing systems. Hence it was concluded that SESPI was the most robust imaging method for the study of copper ore heap leaching hydrology.

  8. The 2014 Broadband Acquisition and Imaging Operation (BAcIO) at Stromboli Volcano (Italy)

    NASA Astrophysics Data System (ADS)

    Scarlato, P.; Taddeucci, J.; Del Bello, E.; Gaudin, D.; Ricci, T.; Andronico, D.; Lodato, L.; Cannata, A.; Ferrari, F.; Orr, T. R.; Sesterhenn, J.; Plescher, R.; Baumgärtel, Y.; Harris, A. J. L.; Bombrun, M.; Barnie, T. D.; Houghton, B. F.; Kueppers, U.; Capponi, A.

    2014-12-01

    In May 2014, Stromboli volcano, one of the best natural laboratories for the study of weak explosive volcanism, hosted a large combination of state-of-the-art and prototype eruption monitoring technologies. Aiming to expand our parameterization capabilities for explosive eruption dynamics, we temporarily deployed in direct view of the active vents a range of imaging, acoustic, and seismic data acquisition systems. Imaging systems included: two high-speed visible cameras acquiring synchronized images at 500 and 1000 frames per second (fps); two thermal infrared forward looking (FLIR) cameras zooming into the active vents and acquiring at 50-200 fps; two FLIR cameras acquiring at lower (3-50 fps) frame rates with a broader field of view; one visible time-lapse camera; one UV camera system for the measurement of sulphur dioxide emission; and one drone equipped with a camcorder. Acoustic systems included: four broadband microphones (range of tens of kHz to 0.1 Hz), two of them co-located with one of the high-speed cameras and one co-located with one of the seismometers (see below); and an acoustic microphone array. This array included sixteen microphones with a circular arrangement located on a steep slope above the active vents. Seismic systems included two broadband seismometers, one of them co-located with one of the high-speed cameras, and one co-located with one of the microphones. The above systems were synchronized with a variety of methods, and temporarily added to the permanent monitoring networks already operating on the island. Observation focus was on pyroclast ejection processes extending from the shallow conduit, through their acceleration and interaction with the atmosphere, and to their dispersal and deposition. 
The 3-D distribution of bombs, the sources of jet noise in the explosions, the comparison between methods for estimating explosion properties, and the relations between erupted gas and magma volumes are some examples of the processes targeted.

  9. 3D GRASE PROPELLER: Improved Image Acquisition Technique for Arterial Spin Labeling Perfusion Imaging

    PubMed Central

    Tan, Huan; Hoge, W. Scott; Hamilton, Craig A.; Günther, Matthias; Kraft, Robert A.

    2014-01-01

    Arterial spin labeling (ASL) is a non-invasive technique that can quantitatively measure cerebral blood flow (CBF). While ASL traditionally employs 2D EPI or spiral acquisition trajectories, single-shot 3D GRASE is gaining popularity in ASL due to its inherent SNR advantage and spatial coverage. However, a major limitation of 3D GRASE is through-plane blurring caused by T2 decay. A novel technique combining 3D GRASE and a PROPELLER trajectory (3DGP) is presented to minimize through-plane blurring without sacrificing perfusion sensitivity or increasing total scan time. Full-brain perfusion images were acquired at a 3×3×5 mm³ nominal voxel size with Q2TIPS-FAIR as the ASL preparation sequence. Data from 5 healthy subjects were acquired on a GE 1.5T scanner in less than 4 minutes per subject. While showing good agreement with 3D GRASE in CBF quantification, 3DGP demonstrated reduced through-plane blurring, improved anatomical detail, high repeatability, and robustness against motion, making it suitable for routine clinical use. PMID:21254211

  10. Multispectral Image Processing for Plants

    NASA Technical Reports Server (NTRS)

    Miles, Gaines E.

    1991-01-01

    The development of a machine vision system to monitor plant growth and health is one of three essential steps towards establishing an intelligent system capable of accurately assessing the state of a controlled ecological life support system for long-term space travel. Besides a network of sensors, simulators are needed to predict plant features, and artificial intelligence algorithms are needed to determine the state of a plant-based life support system. Multispectral machine vision and image processing can be used to sense plant features, including health and nutritional status.

  11. Image processing technique for arbitrary image positioning in holographic stereogram

    NASA Astrophysics Data System (ADS)

    Kang, Der-Kuan; Yamaguchi, Masahiro; Honda, Toshio; Ohyama, Nagaaki

    1990-12-01

    In a one-step holographic stereogram, if the series of original images are used just as they are taken from perspective views, three-dimensional images are usually reconstructed behind the hologram plane. In order to enhance the sense of perspective of the reconstructed images and minimize blur in the portions of interest, we introduce an image processing technique for making a one-step flat-format holographic stereogram in which three-dimensional images can be observed at an arbitrary specified position. Experimental results show the effect of the image processing. Further, we show results of a medical application using this image processing.

  12. Hybrid Ultrasound and MRI Acquisitions for High-Speed Imaging of Respiratory Organ Motion

    PubMed Central

    Preiswerk, Frank; Toews, Matthew; Hoge, W. Scott; Chiou, Jr-yuan George; Panych, Lawrence P.; Wells, William M.; Madore, Bruno

    2016-01-01

    Magnetic Resonance (MR) imaging provides excellent image quality at a high cost and low frame rate. Ultrasound (US) provides poor image quality at a low cost and high frame rate. We propose an instance-based learning system to obtain the best of both worlds: high-quality MR images at high frame rates from a low-cost single-element US sensor. Concurrent US and MRI pairs are acquired during a relatively brief offline learning phase involving the US transducer and MR scanner. High-frame-rate, high-quality MR imaging of respiratory organ motion is then predicted from US measurements, even after stopping MRI acquisition, using a probabilistic kernel regression framework. Experimental results show predicted MR images to be highly representative of actual MR images. PMID:27135063
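    The prediction step can be sketched as Nadaraya-Watson kernel regression: each training MR image is weighted by the similarity of its concurrent US measurement to the new US sample. This is a minimal sketch in the spirit of the abstract, with a scalar US signal, tiny flattened "images", and a made-up bandwidth, not the paper's actual framework.

```python
# Minimal kernel-regression sketch (illustrative data and bandwidth):
# predict an MR "image" (flat list of pixels) as a Gaussian-kernel-
# weighted average of training MR images, keyed on the US measurement.
import math

def predict_mr(us_value, train_us, train_mr, bandwidth=1.0):
    """Nadaraya-Watson estimate of the MR image matching `us_value`."""
    # kernel weight for each training pair, based on US similarity
    weights = [math.exp(-((us_value - u) ** 2) / (2 * bandwidth ** 2))
               for u in train_us]
    total = sum(weights)
    n_pix = len(train_mr[0])
    # weighted average, pixel by pixel
    return [sum(w * img[p] for w, img in zip(weights, train_mr)) / total
            for p in range(n_pix)]
```

Because the weights are a normalized kernel density, a US sample close to one training measurement reproduces that pairing's MR image almost exactly, which is the instance-based behaviour the abstract describes.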

  13. Optimized image acquisition for breast tomosynthesis in projection and reconstruction space

    SciTech Connect

    Chawla, Amarpreet S.; Lo, Joseph Y.; Baker, Jay A.; Samei, Ehsan

    2009-11-15

    Breast tomosynthesis has been an exciting new development in the field of breast imaging. While the diagnostic improvement via tomosynthesis is notable, its full potential has not yet been realized. This may be attributed to the dependence of the diagnostic quality of tomosynthesis on multiple variables, each of which needs to be optimized. These include dose, the number of angular projections, and the total angular span of those projections. In this study, the authors investigated the effects of these acquisition parameters on the overall diagnostic image quality of breast tomosynthesis in both the projection and reconstruction space. Five mastectomy specimens were imaged using a prototype tomosynthesis system. Twenty-five angular projections of each specimen were acquired at 6.2 times the typical single-view clinical dose level. Images at lower dose levels were then simulated using a noise modification routine. Each projection image was supplemented with 84 simulated 3 mm 3D lesions embedded at the centers of 84 nonoverlapping ROIs. The projection images were then reconstructed using a filtered backprojection algorithm at different combinations of acquisition parameters to investigate which of the many possible combinations maximizes performance. Performance was evaluated in terms of a Laguerre-Gauss channelized Hotelling observer model-based measure of lesion detectability. The analysis was also performed without reconstruction by combining the model results from projection images using a Bayesian decision fusion algorithm. The effects of the acquisition parameters on projection images and reconstructed slices were then compared to derive an optimization rule for tomosynthesis. The results indicated that projection images yield comparable but higher performance than reconstructed images. Both modes, however, offered similar trends: performance improved with an increase in the total acquisition dose level and the angular span.
Using a constant dose level and angular

  14. Concurrent Image Processing Executive (CIPE)

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Cooper, Gregory T.; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.

    1988-01-01

    The design and implementation of a Concurrent Image Processing Executive (CIPE), intended to become the support system software for a prototype high-performance science analysis workstation, are discussed. The target machine for this software is a JPL/Caltech Mark IIIfp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3 or Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local-memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules: (1) user interface, (2) host-resident executive, (3) hypercube-resident executive, and (4) application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the others. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube, a data management method which distributes, redistributes, and tracks data set information was implemented.

  15. Real-time microstructural and functional imaging and image processing in optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Westphal, Volker

    Optical Coherence Tomography (OCT) is a noninvasive optical imaging technique that allows high-resolution cross-sectional imaging of tissue microstructure, achieving a spatial resolution of about 10 μm. OCT is similar to B-mode ultrasound (US) except that it uses infrared light instead of ultrasound. In contrast to US, no coupling gel is needed, simplifying image acquisition. Furthermore, the fiber-optic implementation of OCT is compatible with endoscopes. In recent years, the transition from slow, bench-top imaging systems to real-time clinical systems has been under way. This has led to a variety of applications, namely in ophthalmology, gastroenterology, dermatology, and cardiology. First, this dissertation demonstrates that OCT is capable of imaging and differentiating clinically relevant tissue structures in the gastrointestinal tract; a careful in vitro correlation study between endoscopic OCT images and corresponding histological slides was performed. Besides structural imaging, OCT systems were further developed for functional imaging, for example to visualize blood flow. Previously, imaging flow in small vessels in real time was not possible. For this research, a new processing scheme similar to real-time Doppler in US was introduced. It was implemented in dedicated hardware to allow real-time acquisition and overlaid display of blood flow in vivo; a sensitivity of 0.5 mm/s was achieved. Optical coherence microscopy (OCM) is a variation of OCT that improves the resolution even further, to a few micrometers. Advances made in the OCT scan engine for the Doppler setup enabled real-time imaging in vivo with OCM. In order to generate geometrically correct images for all the previous applications in real time, extensive image processing algorithms were developed. Algorithms for the correction of distortions due to non-telecentric scanning, nonlinear scan mirror movements, and refraction were developed and demonstrated. This has led to interesting new

  16. IMAGE FUSION OF RECONSTRUCTED DIGITAL TOMOSYNTHESIS VOLUMES FROM A FRONTAL AND A LATERAL ACQUISITION.

    PubMed

    Arvidsson, Jonathan; Söderman, Christina; Allansdotter Johnsson, Åse; Bernhardt, Peter; Starck, Göran; Kahl, Fredrik; Båth, Magnus

    2016-06-01

    Digital tomosynthesis (DTS) has been used in chest imaging as a low radiation dose alternative to computed tomography (CT). Traditional DTS shows limitations in the spatial resolution in the out-of-plane dimension. As a first indication of whether a dual-plane dual-view (DPDV) DTS data acquisition can yield a fair resolution in all three spatial dimensions, a manual registration between a frontal and a lateral image volume was performed. An anthropomorphic chest phantom was scanned frontally and laterally using a linear DTS acquisition, at 120 kVp. The reconstructed image volumes were resampled and manually co-registered. Expert radiologist delineations of the mediastinal soft tissues enabled calculation of similarity metrics in regard to delineations in a reference CT volume. The fused volume produced the highest total overlap, implying that the fused volume was a more isotropic 3D representation of the examined object than the traditional chest DTS volumes. PMID:26683464
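    The "total overlap" evaluation of co-registered delineations rests on a volume-overlap metric between binary masks. As a hedged illustration (not the paper's exact metric or code), the sketch below computes the closely related Dice coefficient between two flattened binary delineation masks.

```python
# Illustrative overlap metric for comparing a delineation against a
# reference (CT) delineation: the Dice coefficient between two binary
# masks, here represented as flat lists of 0/1 values.

def dice(mask_a, mask_b):
    """Dice = 2|A ∩ B| / (|A| + |B|); 1.0 for identical masks."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    # convention: two empty masks count as a perfect match
    return 2.0 * inter / size if size else 1.0
```

A fused frontal-plus-lateral volume producing a higher overlap score than either single-view volume is exactly the comparison the abstract reports.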

  17. Acquisition of Flexible Image Recognition by Coupling of Reinforcement Learning and a Neural Network

    NASA Astrophysics Data System (ADS)

    Shibata, Katsunari; Kawano, Tomohiko

    The authors have proposed a very simple autonomous learning system consisting of one neural network (NN), whose inputs are raw sensor signals and whose outputs are passed directly to actuators as control signals, and which is trained using reinforcement learning (RL). However, the prevailing opinion seems to be that such simple learning systems do not actually work on complicated tasks in the real world. In this paper, with a view to developing higher functions in robots, the authors argue for the necessity of autonomous learning in a massively parallel and cohesively flexible system with massive inputs, based on considerations of brain architecture and the sequential property of our consciousness. They also argue for placing more importance on “optimization” of the total system under a uniform criterion than on “understandability” for humans. The authors thus attempt to stress the importance of their proposed system for future research on robot intelligence. Experimental results in a real-world-like environment show that image recognition from as many as 6240 visual signals can be acquired through RL under various backgrounds and lighting conditions, without providing any knowledge about image processing or the target object; the system works even for camera image inputs that were not experienced during learning. After learning, the hidden layer exhibited template-like representations, a division of roles between hidden neurons, and representations that detect the target regardless of lighting condition or background. The autonomous acquisition of such useful representations or functions suggests the potential to avoid the frame problem and to develop higher functions.

  18. The acquisition process of musical tonal schema: implications from connectionist modeling

    PubMed Central

    Matsunaga, Rie; Hartono, Pitoyo; Abe, Jun-ichi

    2015-01-01

    Using connectionist modeling, we address fundamental questions concerning the process by which listeners acquire a musical tonal schema. Compared to models of previous studies, our connectionist model (Learning Network for Tonal Schema, LeNTS) was better equipped to fulfill three basic requirements: LeNTS was equipped with a learning mechanism, bound by culture-general properties, and trained on sufficient melody materials. When exposed to Western music, LeNTS acquired musical ‘scale’ sensitivity early and ‘harmony’ sensitivity later. The order of acquisition of scale and harmony sensitivities shown by LeNTS was consistent with the culture-specific acquisition order shown by musically westernized children. The implications of these results for the acquisition of a tonal schema are as follows: (a) the acquisition process may entail small and incremental changes, rather than large and stage-like changes, in the corresponding neural circuits; (b) the speed of schema acquisition may depend mainly on musical experience rather than maturation; and (c) the learning principles of schema acquisition may be culturally invariant, while the acquired tonal schemas vary with the culture-specific music to which listeners are exposed. PMID:26441725

  19. Optical hyperspectral imaging in microscopy and spectroscopy - a review of data acquisition.

    PubMed

    Gao, Liang; Smith, R Theodore

    2015-06-01

    Rather than simply acting as a photographic camera capturing two-dimensional (x, y) intensity images or a spectrometer acquiring spectra (λ), a hyperspectral imager measures entire three-dimensional (x, y, λ) datacubes for multivariate analysis, providing structural, molecular, and functional information about biological cells or tissue with unprecedented detail. Such data also give clinical insights for disease diagnosis and treatment. We summarize the principles underpinning this technology, highlight its practical implementation, and discuss its recent applications at microscopic to macroscopic scales. [Figure caption: Datacube acquisition strategies in hyperspectral imaging. x, y, spatial coordinates; λ, wavelength.] PMID:25186815
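    The (x, y, λ) datacube structure the abstract describes can be made concrete with a toy sketch: each spatial pixel stores a full spectrum, and either a per-pixel spectrum or a single-wavelength band image can be pulled out for analysis. The code below is illustrative only; real datacubes are dense arrays, not nested lists.

```python
# Illustrative (x, y, λ) datacube as nested lists; values are made up.
# Each spatial location holds a full spectrum (one value per band).

def spectrum_at(cube, x, y):
    """Return the λ-vector (full spectrum) stored at pixel (x, y)."""
    return cube[x][y]

def band_image(cube, band):
    """Return the 2-D intensity image for a single wavelength index,
    i.e. one 'slice' of the datacube along λ."""
    return [[pixel[band] for pixel in row] for row in cube]
```

Slicing along λ reproduces what a conventional camera-plus-filter would record, while slicing along (x, y) reproduces a point spectrometer, which is the distinction the abstract draws.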

  20. Image enhancement based on gamma map processing

    NASA Astrophysics Data System (ADS)

    Tseng, Chen-Yu; Wang, Sheng-Jyh; Chen, Yi-An

    2010-05-01

    This paper proposes a novel image enhancement technique based on Gamma Map Processing (GMP). In this approach, a base gamma map is generated directly from the intensity image. After that, a sequence of gamma map processing operations is performed to generate a channel-wise gamma map. By mapping through the estimated gamma values, the detail, colorfulness, and sharpness of the original image are automatically improved. In addition, the dynamic range of the images can be virtually expanded.
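    The idea of a per-pixel gamma map, as opposed to a single global gamma, can be sketched simply. The map-generation rule below (dark pixels get gamma < 1 to brighten, bright pixels gamma > 1) is a hypothetical stand-in; the paper's actual map construction and channel-wise processing are more elaborate.

```python
# Simplified gamma-map sketch (illustrative rule, not the paper's):
# build a per-pixel gamma map from the intensity image, then map each
# pixel of a channel through its own gamma. Intensities lie in [0, 1].

def make_gamma_map(intensity, lo=0.5, hi=1.5):
    """Toy base gamma map: gamma varies linearly with local intensity."""
    return [[lo + (hi - lo) * v for v in row] for row in intensity]

def apply_gamma_map(channel, gamma_map):
    """Per-pixel gamma correction: out = in ** gamma(pixel)."""
    return [[v ** g for v, g in zip(c_row, g_row)]
            for c_row, g_row in zip(channel, gamma_map)]
```

Because each pixel gets its own exponent, shadows can be lifted while highlights are compressed in the same pass, which is what lets a gamma map enhance detail and apparent dynamic range together.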

  1. On Accuracy of Knowledge Acquisition for Decision Making Processes Acquiring Subjective Information on the Internet

    NASA Astrophysics Data System (ADS)

    Fujimoto, Kazunori; Yamamoto, Yutaka

    This paper presents a mathematical model for decision making processes in which the knowledge for the decision is constructed automatically from subjective information on the Internet. This mathematical model enables us to determine the required degree of accuracy of knowledge acquisition for constructing decision support systems using two technologies: automated knowledge acquisition from information on the Internet and automated reasoning about the acquired knowledge. The model consists of three elements: a knowledge source, which is a set of subjective information on the Internet; knowledge acquisition, which builds a knowledge base within a computer from the knowledge source; and a decision rule, which chooses a set of alternatives by using the knowledge base. One of the important features of this model is that it contains not only decision making processes but also knowledge acquisition processes. This feature enables analysis of decision processes in terms of the sufficiency of knowledge sources and the accuracy of knowledge acquisition methods. Based on the model, decision processes by which the knowledge source and the knowledge base lead to the same choices are given, and the required degree of accuracy of knowledge acquisition is quantified as a required accuracy value. To show how this value can be used in designing decision support systems, it is calculated for several examples of knowledge sources and decision rules. This paper also describes the computational complexity of the required accuracy value calculation and shows a computation principle for reducing the complexity to polynomial order in the size of the knowledge source.

  2. Temporal optimisation of image acquisition for land cover classification with Random Forest and MODIS time-series

    NASA Astrophysics Data System (ADS)

    Nitze, Ingmar; Barrett, Brian; Cawkwell, Fiona

    2015-02-01

    The analysis and classification of land cover is one of the principal applications in terrestrial remote sensing. Due to the seasonal variability of different vegetation types and land surface characteristics, the ability to discriminate land cover types changes over time. Multi-temporal classification can help to improve classification accuracies, but different constraints, such as financial restrictions or atmospheric conditions, may impede its application. The optimisation of image acquisition timing and frequency can help to increase the effectiveness of the classification process. For this purpose, the Feature Importance (FI) measure of the state-of-the-art machine learning method Random Forest was used to determine the optimal image acquisition periods for a general (Grassland, Forest, Water, Settlement, Peatland) and a Grassland-specific (Improved Grassland, Semi-Improved Grassland) land cover classification in central Ireland, based on a 9-year time-series of MODIS Terra 16-day composite data (MOD13Q1). Feature Importances for each acquisition period of the Enhanced Vegetation Index (EVI) and Normalised Difference Vegetation Index (NDVI) were calculated for both classification scenarios. In the general land cover classification, the months December and January showed the highest, and July and August the lowest, separability for both VIs over the entire nine-year period. This temporal separability was reflected in the classification accuracies, where the optimal choice of image dates outperformed the worst image date by 13% using NDVI and 5% using EVI in a mono-temporal analysis. With the addition of the next-best image periods to the data input, the classification accuracies converged quickly to their limit at around 8-10 images. The binary classification schemes, using two classes only, showed a stronger seasonal dependency, with a higher intra-annual but lower inter-annual variation.
Nonetheless anomalous weather conditions, such as the cold winter of
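    The selection step the abstract describes, ranking acquisition periods by their Feature Importance and keeping the few best dates, can be sketched independently of the classifier. This is hypothetical code (Random Forest itself, e.g. scikit-learn's `feature_importances_`, would supply the real scores).

```python
# Hypothetical sketch of acquisition-period selection: given per-period
# Feature Importance scores (as a Random Forest would report), keep the
# k highest-scoring periods for a reduced multi-temporal input set.

def top_periods(importances, k):
    """importances: {period_label: score}. Return the k best labels,
    highest score first."""
    return sorted(importances, key=importances.get, reverse=True)[:k]
```

Re-running the classification with only the selected periods, and watching accuracy converge as k grows toward 8-10 images, mirrors the experiment reported above.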

  3. TestSTORM: Simulator for optimizing sample labeling and image acquisition in localization based super-resolution microscopy.

    PubMed

    Sinkó, József; Kákonyi, Róbert; Rees, Eric; Metcalf, Daniel; Knight, Alex E; Kaminski, Clemens F; Szabó, Gábor; Erdélyi, Miklós

    2014-03-01

    Localization-based super-resolution microscopy image quality depends on several factors, such as dye choice and labeling strategy, microscope quality, user-defined parameters such as frame rate and frame count, and the image processing algorithm. Experimental optimization of these parameters can be time-consuming and expensive, so we present TestSTORM, a simulator that can be used to optimize these steps. TestSTORM users can select from among four different structures with specific patterns, dye, and acquisition parameters. Example results are shown, and the results of the vesicle pattern are compared with experimental data. Moreover, image stacks can be generated for further evaluation using localization algorithms, offering a tool for further software development. PMID:24688813

  4. Xenbase: Core features, data acquisition, and data processing.

    PubMed

    James-Zorn, Christina; Ponferrada, Virgillio G; Burns, Kevin A; Fortriede, Joshua D; Lotay, Vaneet S; Liu, Yu; Brad Karpinka, J; Karimi, Kamran; Zorn, Aaron M; Vize, Peter D

    2015-08-01

    Xenbase, the Xenopus model organism database (www.xenbase.org), is a cloud-based, web-accessible resource that integrates the diverse genomic and biological data from Xenopus research. Xenopus frogs are one of the major vertebrate animal models used for biomedical research, and Xenbase is the central repository for the enormous amount of data generated using this model tetrapod. The goal of Xenbase is to accelerate discovery by enabling investigators to make novel connections between molecular pathways in Xenopus and human disease. Our relational database and user-friendly interface make these data easy to query and allow investigators to quickly interrogate and link different data types in ways that would otherwise be difficult, time-consuming, or impossible. Xenbase also enhances the value of these data through high-quality gene expression curation and data integration, by providing bioinformatics tools optimized for Xenopus experiments, and by linking Xenopus data to other model organisms and to human data. Xenbase draws in data via pipelines that download data, parse the content, and save them into appropriate files and database tables. Furthermore, Xenbase makes these data accessible to the broader biomedical community by continually providing annotated data updates to organizations such as NCBI, UniProtKB, and Ensembl. Here, we describe our bioinformatics, genome-browsing tools, data acquisition and sharing, our community submitted and literature curation pipelines, text-mining support, gene page features, and the curation of gene nomenclature and gene models. PMID:26150211

  5. Cluster-based parallel image processing toolkit

    NASA Astrophysics Data System (ADS)

    Squyres, Jeffery M.; Lumsdaine, Andrew; Stevenson, Robert L.

    1995-03-01

    Many image processing tasks exhibit a high degree of data locality and parallelism and map quite readily to specialized massively parallel computing hardware. However, as network technologies continue to mature, workstation clusters are becoming a viable and economical parallel computing resource, so it is important to understand how to use these environments for parallel image processing as well. In this paper we discuss our implementation of a parallel image processing software library (the Parallel Image Processing Toolkit). The Toolkit uses a message-passing model of parallelism designed around the Message Passing Interface (MPI) standard. Experimental results are presented to demonstrate the parallel speedup obtained with the Parallel Image Processing Toolkit in a typical workstation cluster over a wide variety of image processing tasks. We also discuss load balancing and the potential for parallelizing portions of image processing tasks that seem to be inherently sequential, such as visualization and data I/O.
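
    The data-parallel pattern described (split the image into tiles, filter each tile on a different worker, reassemble the result) can be sketched without an MPI installation by using Python threads as a stand-in for MPI ranks. The 3-point horizontal mean filter and the row-wise split are illustrative choices: a purely horizontal filter needs no halo exchange between row blocks.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def smooth_tile(tile):
    # 3-point horizontal mean filter on one row block (edge-padded)
    padded = np.pad(tile, ((0, 0), (1, 1)), mode='edge')
    return (padded[:, :-2] + padded[:, 1:-1] + padded[:, 2:]) / 3.0

def parallel_smooth(image, workers=4):
    # split into row blocks, filter each block on its own worker, reassemble
    tiles = np.array_split(image, workers, axis=0)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return np.vstack(list(pool.map(smooth_tile, tiles)))
```

    In a real MPI port each tile would live on a different node and the map would become a scatter/gather; filters with vertical support would additionally need ghost rows exchanged at tile boundaries, which is where the load-balancing questions the paper discusses arise.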

  6. Learning (Not) to Predict: Grammatical Gender Processing in Second Language Acquisition

    ERIC Educational Resources Information Center

    Hopp, Holger

    2016-01-01

    In two experiments, this article investigates the predictive processing of gender agreement in adult second language (L2) acquisition. We test (1) whether instruction on lexical gender can lead to target predictive agreement processing and (2) how variability in lexical gender representations moderates L2 gender agreement processing. In a…

  7. Data acquisition and analysis for the energy-subtraction Compton scatter camera for medical imaging

    NASA Astrophysics Data System (ADS)

    Khamzin, Murat Kamilevich

    In response to the shortcomings of the Anger camera currently being used in conventional SPECT, particularly the trade-off between sensitivity and spatial resolution, a novel energy-subtraction Compton scatter camera, or the ESCSC, has been proposed. A successful clinical implementation of the ESCSC could revolutionize the field of SPECT. Features of this camera include utilization of silicon and CdZnTe detectors in primary and secondary detector systems, list-mode time stamping data acquisition, modular architecture, and post-acquisition data analysis. Previous ESCSC studies were based on Monte Carlo modeling. The objective of this work is to test the theoretical framework developed in previous studies by developing the data acquisition and analysis techniques necessary to implement the ESCSC. The camera model working in list-mode with time stamping was successfully built and tested, thus confirming the potential of the ESCSC predicted in previous simulation studies. The obtained data were processed during the post-acquisition data analysis based on preferred event selection criteria. Along with the construction of a camera model and proof of the approach, the post-acquisition data analysis was further extended to include preferred event weighting based on the likelihood of a preferred event being a true preferred event. While formulated to show ESCSC capabilities, the results of this study are important for any Compton scatter camera implementation as well as for coincidence data acquisition systems in general.

  8. 76 FR 68037 - Federal Acquisition Regulation; Sudan Waiver Process

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-02

    ... Regulation; Sudan Waiver Process AGENCIES: Department of Defense (DoD), General Services Administration (GSA... that conducts restricted business operations in Sudan. The rule also describes the consultation process... Federal Register at 75 FR 62069 on October 7, 2010, to revise FAR 25.702, Prohibition on contracting...

  9. 48 CFR 36.602-5 - Short selection process for contracts not to exceed the simplified acquisition threshold.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 1 2012-10-01 2012-10-01 false Short selection process... Acquisition Regulations System FEDERAL ACQUISITION REGULATION SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Architect-Engineer Services 36.602-5 Short selection process...

  10. 48 CFR 36.602-5 - Short selection process for contracts not to exceed the simplified acquisition threshold.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 1 2013-10-01 2013-10-01 false Short selection process... Acquisition Regulations System FEDERAL ACQUISITION REGULATION SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Architect-Engineer Services 36.602-5 Short selection process...

  11. 48 CFR 36.602-5 - Short selection process for contracts not to exceed the simplified acquisition threshold.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 1 2014-10-01 2014-10-01 false Short selection process... Acquisition Regulations System FEDERAL ACQUISITION REGULATION SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Architect-Engineer Services 36.602-5 Short selection process...

  12. 48 CFR 36.602-5 - Short selection process for contracts not to exceed the simplified acquisition threshold.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Short selection process... Acquisition Regulations System FEDERAL ACQUISITION REGULATION SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Architect-Engineer Services 36.602-5 Short selection process...

  13. DDS-Suite - A Dynamic Data Acquisition, Processing, and Analysis System for Wind Tunnel Testing

    NASA Technical Reports Server (NTRS)

    Burnside, Jathan J.

    2012-01-01

    Wind tunnels have optimized their steady-state data systems for acquisition and analysis and have even implemented large dynamic-data acquisition systems; however, development of near-real-time processing and analysis tools for dynamic data has lagged. DDS-Suite is a set of tools used to acquire, process, and analyze large amounts of dynamic data. Each phase of the testing process (acquisition, processing, and analysis) is handled by a separate component, so that bottlenecks in one phase of the process do not affect the others, leading to a robust system. DDS-Suite is capable of acquiring 672 channels of dynamic data at a rate of 275 MB/s. More than 300 channels of the system use 24-bit analog-to-digital cards and are capable of producing data with less than 0.01 of phase difference at 1 kHz. System architecture, design philosophy, and examples of use during NASA Constellation and Fundamental Aerodynamic tests are discussed.
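
    The decoupling described, with separate acquisition, processing, and analysis components joined by buffers so that a slowdown in one phase does not immediately stall the others, is the classic pipeline pattern. A minimal sketch with plain Python queues and threads standing in for DDS-Suite's components; the per-stage logic here is invented purely for illustration.

```python
import queue
import threading

def acquire(out_q, n):
    # stage 1: produce n raw samples, then a sentinel marking end-of-stream
    for i in range(n):
        out_q.put(i)
    out_q.put(None)

def process(in_q, out_q):
    # stage 2: transform each sample as it arrives, forward the sentinel
    while (item := in_q.get()) is not None:
        out_q.put(item * 2)
    out_q.put(None)

def analyze(in_q, results):
    # stage 3: accumulate processed samples for later analysis
    while (item := in_q.get()) is not None:
        results.append(item)

raw_q, proc_q, results = queue.Queue(maxsize=8), queue.Queue(maxsize=8), []
stages = [threading.Thread(target=acquire, args=(raw_q, 5)),
          threading.Thread(target=process, args=(raw_q, proc_q)),
          threading.Thread(target=analyze, args=(proc_q, results))]
for t in stages:
    t.start()
for t in stages:
    t.join()
```

    The bounded queues between stages absorb transient slowdowns in any one stage without immediately stalling its neighbors, which is the robustness property the abstract claims for the separated components.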

  14. Combining image-processing and image compression schemes

    NASA Technical Reports Server (NTRS)

    Greenspan, H.; Lee, M.-C.

    1995-01-01

    An investigation into combining image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented on the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection schemes, can gain from the added image resolution via the enhancement.
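
    Pyramid coding of the kind discussed stores, at each level, the detail lost by downsampling plus a final coarse residual, so an enhancement scheme can act on individual bands. A minimal Laplacian-pyramid sketch using 2x2 mean downsampling; the specific filter is an illustrative stand-in, not the paper's coder.

```python
import numpy as np

def build_pyramid(img, levels=2):
    """Simple Laplacian pyramid: each level holds the detail lost by 2x2 mean
    downsampling; the last entry is the coarse residual. Image dimensions
    must be divisible by 2**levels."""
    pyr = []
    for _ in range(levels):
        coarse = img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))
        up = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)
        pyr.append(img - up)     # detail (Laplacian) band
        img = coarse
    pyr.append(img)              # coarse residual
    return pyr

def reconstruct(pyr):
    # invert the decomposition: upsample the residual and add back each band
    img = pyr[-1]
    for detail in reversed(pyr[:-1]):
        img = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1) + detail
    return img
```

    Quantizing the detail bands coarsely is what yields the low bit rates, and sharpening a band before reconstruction is one way to fold enhancement into the coder.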

  15. Applications Of Image Processing In Criminalistics

    NASA Astrophysics Data System (ADS)

    Krile, Thomas F.; Walkup, John F.; Barsallo, Adonis; Olimb, Hal; Tarng, Jaw-Horng

    1987-01-01

    A review of some basic image processing techniques for enhancement and restoration of images is given. Both digital and optical approaches are discussed. Fingerprint images are used as examples to illustrate the various processing techniques and their potential applications in criminalistics.

  16. Online image acquisition system for wheel set measurement based on asynchronous reset mode

    NASA Astrophysics Data System (ADS)

    Wu, Kaihua; Guo, Yu; Chen, Yixin

    2011-11-01

    The wearing degree of the wheel set is one of the main factors that influence the safety and stability of a running train, so measurement of wheel set wear is of significant importance to railway safety. An automatic measurement method for the geometrical parameters of wheel sets based on an optoelectronic technique is proposed. In this method, linear structured laser light is projected onto the wheel tread surface, and the geometrical parameters are deduced from the profile image. An online image acquisition system was designed based on asynchronous reset of the CCD, and the entire time sequence of the asynchronous reset was designed. The image is acquired only when the wheel moves into the designed position, with acquisition triggered by a hardware interrupt. The quantitative relation between position accuracy and speed, time-delay error, CCD resolution and imaging region was deduced, and the relation between motion blur, speed and exposure time was also determined. The measuring system was installed along a straight railway section; when the wheel set runs at a limited speed, the devices placed along the railway line measure the geometrical parameters automatically. Position accuracy reached 1.1 mm at a moving speed of 2 km/h, and motion blur was limited to less than one pixel with the exposure time set to 1/5550 s. The image definition can meet the demands of real-time, online measurement.
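
    The reported settings are self-consistent: the motion blur equals the distance the wheel travels during one exposure, and at 2 km/h with a 1/5550 s exposure that distance is about 0.1 mm, well below a typical pixel footprint.

```python
def motion_blur_mm(speed_kmh, exposure_s):
    # distance travelled during one exposure: v [m/s] * t [s], in millimetres
    return speed_kmh / 3.6 * exposure_s * 1000.0

blur = motion_blur_mm(2.0, 1.0 / 5550.0)   # about 0.1 mm
```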

  17. High dynamic range image acquisition method for 3D solder paste measurement

    NASA Astrophysics Data System (ADS)

    Li, Xiaohui; Sun, Changku; Wang, Peng; Xu, Yixin

    2013-12-01

    In a solder paste inspection system based on structured-light vision, local oversaturation emerges because of the varying reflection coefficient when the laser light is projected onto the PCB surface. To address this, a high-dynamic-range image acquisition system for solder paste inspection was developed to reduce local oversaturation and misjudgment. Reflective liquid crystal on silicon (LCoS) has the merit that it can adjust the reflectance of the incident light per pixel. Exploiting this, a high-dynamic-range image acquisition system was built using a high-resolution LCoS device and a CCD image sensor. The optical system was constructed from the imaging lens, the relay lens and a polarizing beam splitter (PBS), and the hardware system consisted of an ARM development board, a video generic board, an MCU and an HX7809, according to the electrical characteristics of the LCoS. Tests show that the system reduces image oversaturation and improves image quality.
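
    The per-pixel attenuation idea can be sketched abstractly: wherever the sensor clips, reduce the LCoS transmittance for those pixels and re-expose, then divide the final reading by the attenuation mask to recover radiance. This is an idealized, noiseless simulation of the principle, not the paper's hardware control loop; the halving step and loop count are invented.

```python
import numpy as np

def hdr_capture(scene, full_well=255.0, steps=4):
    """Iteratively darken pixels that saturate the sensor, then divide the
    final reading by the attenuation mask to recover scene radiance."""
    atten = np.ones_like(scene)
    for _ in range(steps):
        reading = np.minimum(scene * atten, full_well)
        atten[reading >= full_well] *= 0.5   # halve transmittance where clipped
        if not (scene * atten >= full_well).any():
            break                            # nothing clips anymore
    reading = np.minimum(scene * atten, full_well)
    return reading / atten
```

    A pixel 4x brighter than the sensor's full well is recovered after two attenuation rounds, which is the dynamic-range extension the LCoS provides.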

  18. The selection of field acquisition parameters for dispersion images from multichannel surface wave data

    USGS Publications Warehouse

    Zhang, S.X.; Chan, L.S.; Xia, J.

    2004-01-01

    The accuracy and resolution of surface wave dispersion results depend on the parameters used for acquiring data in the field. The optimized field parameters for acquiring multichannel analysis of surface wave (MASW) dispersion images can be determined if preliminary information on the phase velocity range and interface depth is available. In a case study on a fill slope in Hong Kong, the optimal acquisition parameters were first determined from a preliminary seismic survey prior to a MASW survey. Field tests using different sets of receiver distances and array lengths showed that the most consistent and useful dispersion images were obtained from the optimal acquisition parameters predicted. The inverted S-wave velocities from the dispersion curve obtained at the optimal offset distance range also agreed with those obtained by using direct refraction survey.

  19. An application-specific integrated circuit and data acquisition system for digital X-ray imaging

    NASA Astrophysics Data System (ADS)

    Beuville, E.; Cederström, B.; Danielsson, M.; Luo, L.; Nygren, D.; Oltman, E.; Vestlund, J.

    1998-02-01

    We have developed an Application Specific Integrated Circuit (ASIC) and data acquisition system for digital X-ray imaging. The chip consists of 16 parallel channels, each containing a preamplifier, shaper, comparator and a 16-bit counter. We have demonstrated noiseless single-photon counting above a threshold of 7.2 keV using silicon detectors and are presently capable of maximum counting rates of 2 MHz per channel. The ASIC is controlled by a personal computer through a commercial PCI card, which is also used for data acquisition. The contents of the 16-bit counters are loaded into a shift register and can be transferred to the PC at any time at a rate of 20 MHz. The system is simple, low-cost and high-performance, and is optimised for digital X-ray imaging applications.
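
    Two of the quoted figures can be sanity-checked arithmetically, assuming (as a simplification not stated in the abstract) that the 16 counters shift out serially as one 256-bit chain at the 20 MHz transfer rate:

```python
CHANNELS, COUNTER_BITS = 16, 16
SHIFT_RATE_HZ = 20e6        # readout clock
MAX_RATE_HZ = 2e6           # maximum counting rate per channel

# time to shift the full counter contents out to the PC
readout_s = CHANNELS * COUNTER_BITS / SHIFT_RATE_HZ   # 12.8 microseconds

# time for a 16-bit counter to overflow at the maximum counting rate,
# an upper bound on the usable integration time between readouts
overflow_s = (2 ** COUNTER_BITS - 1) / MAX_RATE_HZ    # about 33 ms
```

    Readout is therefore thousands of times faster than counter overflow, so frames can be read out frequently with negligible dead time.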

  20. The acquisition process of musical tonal schema: implications from connectionist modeling.

    PubMed

    Matsunaga, Rie; Hartono, Pitoyo; Abe, Jun-Ichi

    2015-01-01

    Using connectionist modeling, we address fundamental questions concerning the acquisition process of musical tonal schema of listeners. Compared to models of previous studies, our connectionist model (Learning Network for Tonal Schema, LeNTS) was better equipped to fulfill three basic requirements. Specifically, LeNTS was equipped with a learning mechanism, bound by culture-general properties, and trained by sufficient melody materials. When exposed to Western music, LeNTS acquired musical 'scale' sensitivity early and 'harmony' sensitivity later. The order of acquisition of scale and harmony sensitivities shown by LeNTS was consistent with the culture-specific acquisition order shown by musically westernized children. The implications of these results for the acquisition process of a tonal schema of listeners are as follows: (a) the acquisition process may entail small and incremental changes, rather than large and stage-like changes, in corresponding neural circuits; (b) the speed of schema acquisition may mainly depend on musical experiences rather than maturation; and PMID:26441725

  1. Automatic processing of ultrasound images for nondestructive testing

    NASA Astrophysics Data System (ADS)

    Goodfriend, Leon

    1993-12-01

    Ultrasonic non-destructive testing of carbon fiber composite (CFC) aircraft panels has, in the past, been a time-consuming and laborious process. Data acquisition (using C-scan techniques) takes on the order of 1 hour per m², and the decision as to whether the panel meets the testing standard (technically known as sentencing) is an unexciting and repetitive visual task for a human operator. This paper introduces a new system for automated sentencing of CFC panels of solid or matrix (honeycomb) construction. It begins with a brief description of a new parallel-scanning ultrasound rig which greatly reduces the time required for data acquisition. A detailed description is then given of the design and implementation of a computer vision system which processes the resulting ultrasound images.
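
    At its core, automated sentencing reduces to comparing the C-scan amplitude map against a standard: threshold the map, measure the flagged area, accept or reject. A toy sketch; the threshold, area limit and decision rule here are invented, not the paper's actual standard.

```python
import numpy as np

def sentence_panel(cscan, amp_threshold=0.5, max_defect_px=4):
    """Reject a panel when too many pixels fall below the expected
    through-transmission amplitude (a crude stand-in for a real standard)."""
    defect_px = int((cscan < amp_threshold).sum())
    return ('reject' if defect_px > max_defect_px else 'accept'), defect_px

scan = np.ones((10, 10))
scan[4:7, 4:7] = 0.1       # a simulated 9-pixel void
```

    A production system would additionally size and localize each indication, since standards typically limit individual defect dimensions rather than total flagged area.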

  2. A New Method of Theodolite Calibration Based on Image Processing Technology

    NASA Astrophysics Data System (ADS)

    Zou, Hui-Hui; Wu, Hong-Bing; Chen, Di

    Aiming at improving the theodolite calibration method for space tracking ship, a calibration device which consists of hardware and software is designed in this paper. Hereinto, the hardware part is a set of optical acquisition system that includes CCD, lens and 0.2" collimator, while the software part contains image acquisition module, image processing module, data processing module and interface display module. During the calibration process, the new methods of image denoising and image character extraction are applied to improve the precision of image measure. The result of the experiment shows that the calibration criteria of the theodolite errors was met by applying the image processing technology of the theodolite calibration device, it is more accurate than the manual reading method under the same situation in dock.

  3. A computational model associating learning process, word attributes, and age of acquisition.

    PubMed

    Hidaka, Shohei

    2013-01-01

    We propose a new model-based approach linking word learning to the age of acquisition (AoA) of words; a new computational tool for understanding the relationships among word learning processes, psychological attributes, and word AoAs as measures of vocabulary growth. The computational model developed describes the distinct statistical relationships between three theoretical factors underpinning word learning and AoA distributions. Simply put, this model formulates how different learning processes, characterized by change in learning rate over time and/or by the number of exposures required to acquire a word, likely result in different AoA distributions depending on word type. We tested the model in three respects. The first analysis showed that the proposed model accounts for empirical AoA distributions better than a standard alternative. The second analysis demonstrated that the estimated learning parameters well predicted the psychological attributes, such as frequency and imageability, of words. The third analysis illustrated that the developmental trend predicted by our estimated learning parameters was consistent with relevant findings in the developmental literature on word learning in children. We further discuss the theoretical implications of our model-based approach. PMID:24223699

  4. An approach to automated acquisition of cryoEM images from lacey carbon grids

    PubMed Central

    Nicholson, William V.; White, Howard; Trinick, John

    2010-01-01

    An approach to automated acquisition of cryoEM image data from lacey carbon grids using the Leginon program is described. Automated liquid nitrogen top up of the specimen holder dewar was used as a step towards full automation, without operator intervention during the course of data collection. During cryoEM studies of actin labelled with myosin V, we have found it necessary to work with lacey grids rather than Quantifoil or C-flat grids due to interaction of myosin V with the support film. Lacey grids have irregular holes of variable shape and size, in contrast to Quantifoil or C-flat grids which have a regular array of similar circular holes on each grid square. Other laboratories also prefer to work with grids with irregular holes for a variety of reasons. Therefore, it was necessary to develop a different strategy from normal Leginon usage for working with lacey grids for targetting holes for image acquisition and suitable areas for focussing prior to image acquisition. This approach was implemented by using the extensible framework provided by Leginon and by developing a new MSI application within that framework which includes a new Leginon node (for a novel method for finding focus targets). PMID:20817100
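
    Hole targeting on irregular lacey grids cannot rely on a template of identical circular holes, so a plausible fallback is generic blob detection: threshold the image and take centroids of sufficiently large connected bright (electron-transparent) regions. A self-contained flood-fill sketch of that idea, not Leginon's actual algorithm:

```python
import numpy as np
from collections import deque

def find_holes(img, thresh=0.5, min_area=3):
    """Label bright regions by 4-connected flood fill and return the centroid
    of each region large enough to be a usable hole."""
    mask = img > thresh
    seen = np.zeros_like(mask, dtype=bool)
    centers = []
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        q, pix = deque([(sy, sx)]), []
        seen[sy, sx] = True
        while q:
            y, x = q.popleft()
            pix.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    q.append((ny, nx))
        if len(pix) >= min_area:
            ys, xs = zip(*pix)
            centers.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return centers
```

    Real hole finding would additionally filter on ice thickness and distance to the carbon edge before queuing a region for acquisition or focus.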

  5. Optical signal acquisition and processing in future accelerator diagnostics

    SciTech Connect

    Jackson, G.P.; Elliott, A.

    1992-01-01

    Beam detectors such as striplines and wall current monitors rely on matched electrical networks to transmit and process beam information. Frequency bandwidth, noise immunity, reflections, and signal to noise ratio are considerations that require compromises limiting the quality of the measurement. Recent advances in fiber optics related technologies have made it possible to acquire and process beam signals in the optical domain. This paper describes recent developments in the application of these technologies to accelerator beam diagnostics. The design and construction of an optical notch filter used for a stochastic cooling system is used as an example. Conceptual ideas for future beam detectors are also presented.

  6. Optical signal acquisition and processing in future accelerator diagnostics

    SciTech Connect

    Jackson, G.P.; Elliott, A.

    1992-12-31

    Beam detectors such as striplines and wall current monitors rely on matched electrical networks to transmit and process beam information. Frequency bandwidth, noise immunity, reflections, and signal to noise ratio are considerations that require compromises limiting the quality of the measurement. Recent advances in fiber optics related technologies have made it possible to acquire and process beam signals in the optical domain. This paper describes recent developments in the application of these technologies to accelerator beam diagnostics. The design and construction of an optical notch filter used for a stochastic cooling system is used as an example. Conceptual ideas for future beam detectors are also presented.

  7. Isolating Intrinsic Processing Disorders from Second Language Acquisition.

    ERIC Educational Resources Information Center

    Lock, Robin H.; Layton, Carol A.

    2002-01-01

    Evaluation of the validity of the Learning Disabilities Diagnostic Inventory with limited-English-proficient (LEP) students in grades 2-7 found that nondisabled LEP students were over-identified as having intrinsic processing deficits. Examination of individual student protocols highlighted the need to train teacher-raters in language acquisition…

  8. Accelerating COTS Middleware Acquisition: The i-Mate Process

    SciTech Connect

    Liu, Anna; Gorton, Ian

    2003-03-05

    Most major organizations now use some commercial-off-the-shelf middleware components to run their businesses. Key drivers behind this growth include ever-increasing Internet usage and the ongoing need to integrate heterogeneous legacy systems to streamline business processes. As organizations do more business online, they need scalable, high-performance software infrastructures to handle transactions and provide access to core systems.

  9. Data acquisition system for an advanced x-ray imaging crystal spectrometer using a segmented position-sensitive detector.

    PubMed

    Nam, U W; Lee, S G; Bak, J G; Moon, M K; Cheon, J K; Lee, C H

    2007-10-01

    A versatile time-to-digital converter based data acquisition system for a segmented position-sensitive detector has been developed. This data acquisition system was successfully demonstrated with a two-segment position-sensitive detector. The data acquisition system will be developed further to support multisegmented position-sensitive detectors to improve the photon count rate capability of the advanced x-ray imaging crystal spectrometer system. PMID:17979416

  10. The design of a distributed image processing and dissemination system

    SciTech Connect

    Rafferty, P.; Hower, L.

    1990-01-01

    The design and implementation of a distributed image processing and dissemination system was undertaken and accomplished as part of a prototype communication and intelligence (CI) system, the contingency support system (CSS), which is intended to support contingency operations of the Tactical Air Command. The system consists of six (6) Sun 3/180C workstations with integrated ITEX image processors and three (3) 3/50 diskless workstations located at four (4) system nodes (INEL, base, and mobiles). All 3/180C workstations are capable of image system server functions, whereas the 3/50s are image system clients only. Distribution is accomplished via both local and wide area networks using standard Defense Data Network (DDN) protocols (i.e., TCP/IP, et al.) and Defense Satellite Communication Systems (DSCS) compatible SHF Transportable Satellite Earth Terminals (TSET). Image applications utilize Sun's Remote Procedure Call (RPC) to facilitate the image system client and server relationships. The system provides functions to acquire, display, annotate, process, transfer, and manage images via an icon, panel, and menu oriented SunView(TM)-based user interface. Image spatial resolution is 512 x 480 with 8 bits/pixel black and white and 12/24 bits/pixel color, depending on system configuration. Compression is used during various image display and transmission functions to reduce the dynamic range of image data to 12/6/3/2 bits/pixel, depending on the application. Image acquisition is accomplished in real-time or near-real-time by special-purpose ITEX image hardware. As a result, all image displays are highly interactive, with attention given to subsecond response time. 3 refs., 7 figs.
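
    The dynamic-range reduction mentioned (12 bits/pixel down to 6, 3, or 2) amounts to requantization. A minimal sketch of bit-depth reduction by dropping low-order bits; the system's actual codec is not specified in the abstract, so truncation here is only illustrative.

```python
import numpy as np

def requantize(img12, out_bits):
    # keep only the out_bits most significant bits of 12-bit data,
    # leaving values on the original 0..4095 scale for display
    shift = 12 - out_bits
    return (img12.astype(np.uint16) >> shift) << shift
```

    The worst-case error of this scheme is 2**(12 - out_bits) - 1 counts, which makes the bandwidth-versus-fidelity trade-off explicit.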

  11. Programmable remapper for image processing

    NASA Technical Reports Server (NTRS)

    Juday, Richard D. (Inventor); Sampsell, Jeffrey B. (Inventor)

    1991-01-01

    A video-rate coordinate remapper includes a memory for storing a plurality of transformations on look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for compensating for certain visual defects of visually impaired people. The invention also includes means for shifting input pixels and means for scrolling the output matrix.
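
    The look-up-table remapping at the heart of the device can be sketched with array indexing: precompute, for every output pixel, the input coordinate to fetch. The 180-degree rotation below is only an illustrative transform; the patent's transforms for low-vision aids are more elaborate.

```python
import numpy as np

def remap(img, lut_y, lut_x):
    # each output pixel fetches its stored input coordinate (one-to-one case)
    return img[lut_y, lut_x]

h, w = 4, 4
yy, xx = np.mgrid[0:h, 0:w]
rot_y, rot_x = (h - 1) - yy, (w - 1) - xx   # 180-degree rotation table
```

    Because the tables are precomputed, swapping transforms at video rate is just a memory-bank switch, which is what makes the approach operator-selectable.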

  12. High-Speed MALDI-TOF Imaging Mass Spectrometry: Rapid Ion Image Acquisition and Considerations for Next Generation Instrumentation

    PubMed Central

    Spraggins, Jeffrey M.; Caprioli, Richard M.

    2012-01-01

    A prototype matrix-assisted laser desorption ionization time-of-flight (MALDI-TOF) mass spectrometer has been used for high-speed ion image acquisition. The instrument incorporates a Nd:YLF solid state laser capable of pulse repetition rates up to 5 kHz and continuous laser raster sampling for high-throughput data collection. Lipid ion images of a sagittal rat brain tissue section were collected in 10 min with an effective acquisition rate of roughly 30 pixels/s. These results represent more than a 10-fold increase in throughput compared with current commercially available instrumentation. Experiments aimed at improving conditions for continuous laser raster sampling for imaging are reported, highlighting proper laser repetition rates and stage velocities to avoid signal degradation from significant oversampling. As new high spatial resolution and large sample area applications present themselves, the development of high-speed microprobe MALDI imaging mass spectrometry is essential to meet the needs of those seeking new technologies for rapid molecular imaging. PMID:21953043

  13. An ImageJ plugin for ion beam imaging and data processing at AIFIRA facility

    NASA Astrophysics Data System (ADS)

    Devès, G.; Daudin, L.; Bessy, A.; Buga, F.; Ghanty, J.; Naar, A.; Sommar, V.; Michelet, C.; Seznec, H.; Barberet, P.

    2015-04-01

    Quantification and imaging of chemical elements at the cellular level requires the use of a combination of techniques such as micro-PIXE, micro-RBS, STIM, and secondary electron imaging, associated with optical and fluorescence microscopy techniques employed prior to irradiation. Such a numerous set of methods generates an important amount of data per experiment. Typically, for each acquisition the following data have to be processed: a chemical map for each element present with a concentration above the detection limit, density and backscattered maps, and mean and local spectra corresponding to relevant regions of interest such as a whole cell, an intracellular compartment, or nanoparticles. These operations are time-consuming, repetitive and, as such, could be a source of errors in data manipulation. In order to optimize data processing, we have developed a new tool for batch data processing and imaging. This tool has been developed as a plugin for ImageJ, a versatile software for image processing that is suitable for the treatment of basic IBA data operations. Because ImageJ is written in Java, the plugin can be used under Linux, Mac OS X and Windows in both 32-bit and 64-bit modes, which may interest developers working on open-access ion beam facilities like AIFIRA. The main features of this plugin are presented here: listfile processing, spectroscopic imaging, local information extraction, quantitative density maps and database management using OMERO.
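
    The repetitive per-acquisition step being batched, extracting the mean signal of each elemental map inside a region of interest such as a whole cell, is straightforward once maps and ROI masks are arrays. An illustrative sketch in Python rather than the plugin's actual Java/ImageJ API:

```python
import numpy as np

def roi_means(elemental_maps, roi_mask):
    # mean signal of every elemental map inside one region of interest
    return {name: float(m[roi_mask].mean()) for name, m in elemental_maps.items()}

maps = {'Fe': np.ones((4, 4)), 'Zn': 2.0 * np.ones((4, 4))}
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True               # a hypothetical cell outline
```

    Looping this over every acquisition in a list file, and writing the results to a database, is exactly the batch workflow the plugin automates.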

  14. Evaluation of Acquisition Strategies for Image-Based Construction Site Monitoring

    NASA Astrophysics Data System (ADS)

    Tuttas, S.; Braun, A.; Borrmann, A.; Stilla, U.

    2016-06-01

    Construction site monitoring is an essential task for keeping track of the ongoing construction work and providing up-to-date information for a Building Information Model (BIM). The BIM contains the as-planned states (geometry, schedule, costs, ...) of a construction project. For updating, the as-built state has to be acquired repeatedly and compared to the as-planned state. In the approach presented here, a 3D representation of the as-built state is calculated from photogrammetric images using multi-view stereo reconstruction. On construction sites one has to cope with several difficulties such as security aspects, limited accessibility, occlusions or construction activity. Different acquisition strategies and techniques, namely (i) terrestrial acquisition with a hand-held camera, (ii) aerial acquisition using an Unmanned Aerial Vehicle (UAV) and (iii) acquisition using a fixed stereo camera pair at the boom of the crane, are tested on three test sites. They are assessed considering the special needs of the monitoring tasks and the limitations on construction sites. The three scenarios are evaluated based on the ability of automation, the required acquisition effort, the necessary equipment and its maintenance, the disturbance of the construction works, and the accuracy and completeness of the resulting point clouds. Based on the experiences during the test cases, the following conclusions can be drawn: terrestrial acquisition has the lowest requirements on the device setup but lacks automation and coverage. The crane camera shows the lowest flexibility but the highest grade of automation. The UAV approach can provide the best coverage by combining nadir and oblique views, but can be limited by obstacles and security aspects. The accuracy of the point clouds is evaluated based on plane fitting of selected building parts. The RMS errors of the fitted parts range from 1 to a few cm for the UAV and the hand-held scenario. First results show that the crane camera
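
    The accuracy measure used, the RMS residual of a plane fitted to points sampled from a planar building part, can be sketched with an ordinary least-squares fit (valid for parts that are not near-vertical in the chosen frame):

```python
import numpy as np

def plane_rms(points):
    # least-squares plane z = a*x + b*y + c; return RMS of the residuals
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    residuals = points[:, 2] - A @ coeffs
    return float(np.sqrt(np.mean(residuals ** 2)))
```

    For walls and other steep surfaces a total-least-squares fit (smallest eigenvector of the point covariance) would be the more robust choice, since the z = f(x, y) parameterization degenerates there.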

  15. Handbook on COMTAL's Image Processing System

    NASA Technical Reports Server (NTRS)

    Faulcon, N. D.

    1983-01-01

    An image processing system is the combination of an image processor with other control and display devices plus the necessary software needed to produce an interactive capability to analyze and enhance image data. Such an image processing system installed at NASA Langley Research Center, Instrument Research Division, Acoustics and Vibration Instrumentation Section (AVIS) is described. Although much of the information contained herein can be found in the other references, it is hoped that this single handbook will give the user better access, in concise form, to pertinent information and usage of the image processing system.

  16. A dual process account of coarticulation in motor skill acquisition.

    PubMed

    Shah, Ashvin; Barto, Andrew G; Fagg, Andrew H

    2013-01-01

    Many tasks, such as typing a password, are decomposed into a sequence of subtasks that can be accomplished in many ways. Behavior that accomplishes subtasks in ways influenced by the overall task is often described as "skilled" and exhibits coarticulation. Many accounts of coarticulation use search methods that are informed by representations of the objectives that define skilled behavior. While such accounts aid in describing the strategies the nervous system may follow, they are computationally complex and may be difficult to attribute to brain structures. Here we present a biologically-inspired account whereby skilled behavior is developed through two simple processes: (a) a corrective process that ensures that each subtask is accomplished, but does not do so skillfully, and (b) a reinforcement learning process that finds better movements using trial-and-error search that is not informed by representations of any objectives. We implement our account as a computational model controlling a simulated two-armed kinematic "robot" that must hit a sequence of goals with its hands. Behavior displays coarticulation in terms of which hand was chosen, how the corresponding arm was used, and how the other arm was used, suggesting that the account can participate in the development of skilled behavior. PMID:24116847
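    The two processes above can be sketched in miniature: a corrective process that always closes the remaining gap to the goal, and a trial-and-error process that improves the open-loop movement without consulting any task-level objective representation. The one-dimensional task, cost function, and parameter values below are illustrative inventions, not the paper's two-armed robot model.

```python
import random

# Goal position for the single subtask in this toy (illustrative value).
GOAL = 10.0

def movement_cost(ballistic_amplitude):
    """Corrective effort: the distance left after the open-loop (ballistic)
    phase. A corrective process closes this gap, so the goal is always
    reached; skill means needing less correction."""
    return abs(GOAL - ballistic_amplitude)

def learn(initial_amplitude, trials=200, step=0.5, seed=0):
    """Trial-and-error search: perturb the ballistic amplitude and keep
    the perturbation iff it reduced the corrective effort on that trial.
    No representation of the overall objective is consulted."""
    rng = random.Random(seed)
    amp = initial_amplitude
    cost = movement_cost(amp)
    for _ in range(trials):
        candidate = amp + rng.uniform(-step, step)
        candidate_cost = movement_cost(candidate)
        if candidate_cost < cost:
            amp, cost = candidate, candidate_cost
    return amp, cost
```

    Over repeated trials the ballistic movement absorbs most of the work and the corrective effort shrinks, which is the sense in which the behavior becomes "skilled" in this toy.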

  17. Possible Overlapping Time Frames of Acquisition and Consolidation Phases in Object Memory Processes: A Pharmacological Approach

    ERIC Educational Resources Information Center

    Akkerman, Sven; Blokland, Arjan; Prickaerts, Jos

    2016-01-01

    In previous studies, we have shown that acetylcholinesterase inhibitors and phosphodiesterase inhibitors (PDE-Is) are able to improve object memory by enhancing acquisition processes. On the other hand, only PDE-Is improve consolidation processes. Here we show that the cholinesterase inhibitor donepezil also improves memory performance when…

  18. Data acquisition and online processing requirements for experimentation at the Superconducting Super Collider

    SciTech Connect

    Lankford, A.J.; Barsotti, E.; Gaines, I.

    1989-07-01

    Differences in scale between data acquisition and online processing requirements for detectors at the Superconducting Super Collider and systems for existing large detectors will require new architectures and technological advances in these systems. Emerging technologies will be employed for data transfer, processing, and recording. 9 refs., 3 figs.

  19. Methodology for the Elimination of Reflection and System Vibration Effects in Particle Image Velocimetry Data Processing

    NASA Technical Reports Server (NTRS)

    Bremmer, David M.; Hutcheson, Florence V.; Stead, Daniel J.

    2005-01-01

    A methodology to eliminate model reflection and system vibration effects from post processed particle image velocimetry data is presented. Reflection and vibration lead to loss of data, and biased velocity calculations in PIV processing. A series of algorithms were developed to alleviate these problems. Reflections emanating from the model surface caused by the laser light sheet are removed from the PIV images by subtracting an image in which only the reflections are visible from all of the images within a data acquisition set. The result is a set of PIV images where only the seeded particles are apparent. Fiduciary marks painted on the surface of the test model were used as reference points in the images. By locating the centroids of these marks it was possible to shift all of the images to a common reference frame. This image alignment procedure as well as the subtraction of model reflection are performed in a first algorithm. Once the images have been shifted, they are compared with a background image that was recorded under no flow conditions. The second and third algorithms find the coordinates of fiduciary marks in the acquisition set images and the background image and calculate the displacement between these images. The final algorithm shifts all of the images so that fiduciary mark centroids lie in the same location as the background image centroids. This methodology effectively eliminated the effects of vibration so that unbiased data could be used for PIV processing. The PIV data used for this work was generated at the NASA Langley Research Center Quiet Flow Facility. The experiment entailed flow visualization near the flap side edge region of an airfoil model. Commercial PIV software was used for data acquisition and processing. In this paper, the experiment and the PIV acquisition of the data are described. The methodology used to develop the algorithms for reflection and system vibration removal is stated, and the implementation, testing and
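    The reflection-subtraction and fiduciary-alignment steps described above can be sketched as follows, assuming grayscale frames stored as 2-D lists of intensities. The function names and the brightness threshold are illustrative assumptions, not part of the commercial PIV software used in the study.

```python
def subtract_reflection(frame, reflection_image):
    """Remove static model reflections by subtracting a reflections-only
    image from a PIV frame, clamping at zero so only seeded particles remain."""
    return [[max(0, p - r) for p, r in zip(frow, rrow)]
            for frow, rrow in zip(frame, reflection_image)]

def centroid(image, threshold=128):
    """Centroid (row, col) of bright pixels, e.g. a fiduciary mark."""
    total = rsum = csum = 0
    for r, row in enumerate(image):
        for c, v in enumerate(row):
            if v >= threshold:
                total += 1
                rsum += r
                csum += c
    return (rsum / total, csum / total)

def shift_to_reference(frame_centroid, reference_centroid):
    """Integer pixel shift that aligns a frame's fiduciary-mark centroid
    with the background image's centroid (the vibration-removal step)."""
    dr = round(reference_centroid[0] - frame_centroid[0])
    dc = round(reference_centroid[1] - frame_centroid[1])
    return dr, dc
```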

  20. Sequential Processes In Image Generation.

    ERIC Educational Resources Information Center

    Kosslyn, Stephen M.; And Others

    1988-01-01

    Results of three experiments are reported, which indicate that images of simple two-dimensional patterns are formed sequentially. The subjects included 48 undergraduates and 16 members of the Harvard University (Cambridge, Mass.) community. A new objective methodology indicates that images of complex letters require more time to generate. (TJH)

  1. Phases of learning: How skill acquisition impacts cognitive processing.

    PubMed

    Tenison, Caitlin; Fincham, Jon M; Anderson, John R

    2016-06-01

    This fMRI study examines the changes in participants' information processing as they repeatedly solve the same mathematical problem. We show that the majority of practice-related speedup is produced by discrete changes in cognitive processing. Because the points at which these changes take place vary from problem to problem, and the underlying information processing steps vary in duration, the existence of such discrete changes can be hard to detect. Using two converging approaches, we establish the existence of three learning phases. When solving a problem in one of these learning phases, participants can go through three cognitive stages: Encoding, Solving, and Responding. Each cognitive stage is associated with a unique brain signature. Using a bottom-up approach combining multi-voxel pattern analysis and hidden semi-Markov modeling, we identify the duration of each stage on any particular trial from participants' brain activation patterns. For our top-down approach we developed an ACT-R model of these cognitive stages and simulated how they change over the course of learning. The Solving stage of the first learning phase is long and involves a sequence of arithmetic computations. Participants transition to the second learning phase when they can retrieve the answer, thereby drastically reducing the duration of the Solving stage. With continued practice, participants then transition to the third learning phase when they recognize the problem as a single unit and produce the answer as an automatic response. The duration of this third learning phase is dominated by the Responding stage. PMID:27018936

  2. Evaluation of security algorithms used for security processing on DICOM images

    NASA Astrophysics Data System (ADS)

    Chen, Xiaomeng; Shuai, Jie; Zhang, Jianguo; Huang, H. K.

    2005-04-01

    In this paper, we developed a security approach to provide security measures and features for PACS image acquisition and teleradiology image transmission. The security processing of medical images was based on a public key infrastructure (PKI) and included digital signatures and data encryption to achieve the security features of confidentiality, privacy, authenticity, integrity, and non-repudiation. Many algorithms can be used in a PKI for data encryption and digital signature. In this research, we selected several algorithms to perform security processing on different DICOM images in a PACS environment, evaluated the security processing performance of these algorithms, and examined the relationship between performance and image types, sizes, and implementation methods.
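    The kind of measurement described, timing different algorithms over payloads of different sizes, can be sketched with the standard library. Real DICOM security processing would sign and encrypt with PKI keys; here stdlib hash digests stand in as an assumption, since the scaling question (performance versus image size and algorithm) has the same shape. The algorithm names and sizes are illustrative.

```python
import hashlib
import time

def time_digest(algorithm, payload, repeats=3):
    """Best-of-N wall-clock time to digest `payload` with `algorithm`."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        hashlib.new(algorithm, payload).digest()
        best = min(best, time.perf_counter() - start)
    return best

def benchmark(algorithms, sizes):
    """Map (algorithm, size) -> seconds, using zero-filled synthetic 'images'."""
    results = {}
    for size in sizes:
        payload = b"\x00" * size
        for algo in algorithms:
            results[(algo, size)] = time_digest(algo, payload)
    return results

# Time two digest algorithms over 64 KB and 1 MB synthetic images.
results = benchmark(["sha1", "sha256"], [2**16, 2**20])
```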

  3. Image processing on the IBM personal computer

    NASA Technical Reports Server (NTRS)

    Myers, H. J.; Bernstein, R.

    1985-01-01

    An experimental, personal computer image processing system has been developed which provides a variety of processing functions in an environment that connects programs by means of a 'menu' for both casual and experienced users. The system is implemented by a compiled BASIC program that is coupled to assembly language subroutines. Image processing functions encompass subimage extraction, image coloring, area classification, histogramming, contrast enhancement, filtering, and pixel extraction.
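    Two of the listed functions, histogramming and contrast enhancement, can be sketched for an 8-bit grayscale image held as a 2-D list. The original system was a compiled BASIC program with assembly subroutines, so this Python version is purely illustrative.

```python
def histogram(image, levels=256):
    """Count how many pixels take each intensity value."""
    counts = [0] * levels
    for row in image:
        for v in row:
            counts[v] += 1
    return counts

def contrast_stretch(image, levels=256):
    """Linear contrast enhancement: remap [min, max] intensities onto
    the full [0, levels-1] range."""
    lo = min(min(row) for row in image)
    hi = max(max(row) for row in image)
    if hi == lo:
        return [row[:] for row in image]  # flat image: nothing to stretch
    scale = (levels - 1) / (hi - lo)
    return [[round((v - lo) * scale) for v in row] for row in image]
```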

  4. Image processing applied to laser cladding process

    SciTech Connect

    Meriaudeau, F.; Truchetet, F.

    1996-12-31

    The laser cladding process, which consists of adding a melted powder to a substrate in order to improve or change the behavior of the material against corrosion, fatigue and so on, involves many parameters. In order to produce good tracks, some parameters need to be controlled during the process. The authors present here a low-cost system using two CCD matrix cameras. One camera provides surface temperature measurements while the other gives information on the powder distribution or geometric characteristics of the tracks. The surface temperature (via the Beer-Lambert law) enables one to detect variations in the mass feed rate. Using such a system the authors are able to detect fluctuations of 2 to 3 g/min in the mass flow rate. The other camera gives information related to the powder distribution; a simple algorithm applied to the data acquired from the CCD matrix camera allows very weak fluctuations in both gas flows (carrier or protection gas) to be seen. During the process, this camera is also used to perform geometric measurements. The height and the width of the track are obtained in real time and enable the operator to infer process parameters such as the processing speed and the mass flow rate. The authors present the results provided by their system in order to enhance the efficiency of the laser cladding process. The conclusion summarizes the presented work and expectations for the future.
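    The real-time geometric measurement can be sketched as thresholding one camera scan line across the bright melt track and converting the above-threshold extent to physical units. The threshold value and pixel scale below are illustrative assumptions, not the paper's calibration.

```python
def track_width(scanline, threshold, mm_per_pixel):
    """Width of the above-threshold (bright track) region in one scan line,
    converted from pixels to millimetres."""
    indices = [i for i, v in enumerate(scanline) if v >= threshold]
    if not indices:
        return 0.0  # no track visible in this line
    return (max(indices) - min(indices) + 1) * mm_per_pixel
```

    The same one-line measurement applied to a profile across the track, rather than along it, would give the track height.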

  5. Computers in Public Schools: Changing the Image with Image Processing.

    ERIC Educational Resources Information Center

    Raphael, Jacqueline; Greenberg, Richard

    1995-01-01

    The kinds of educational technologies selected can make the difference between uninspired, rote computer use and challenging learning experiences. University of Arizona's Image Processing for Teaching Project has worked with over 1,000 teachers to develop image-processing techniques that provide students with exciting, open-ended opportunities for…

  6. Magnetic resonance image enhancement by reducing receptors' effective size and enabling multiple channel acquisition.

    PubMed

    Yepes-Calderon, Fernando; Velasquez, Adriana; Lepore, Natasha; Beuf, Olivier

    2014-01-01

    Magnetic resonance imaging is empowered by parallel reading, which reduces acquisition time dramatically. The time saved by parallelization can be used to increase image quality or to enable specialized scanning protocols in clinical and research environments. In small animals, sizing constraints make multi-channel approaches even more necessary, as they help to improve the typically low spatial resolution and lower signal-to-noise ratio; however, the use of multiple channels also generates mutual induction (MI) effects that impair image formation. Here, we created coils and used the shared-capacitor technique to diminish first-order MI effects, and pre-amplifiers to deal with higher-order MI-related image deterioration. The constructed devices were tested by imaging phantoms containing identical solutions, thus creating the conditions for several statistical comparisons. We confirm that the shared-capacitor strategy can recover the receptor capacity in compounded coils when working at the dimensions imposed by small-animal imaging. Additionally, we demonstrate that the use of pre-amplifiers does not significantly reduce the quality of the images. Moreover, in light of our results, the two MI-avoiding techniques can be used together, establishing the practical feasibility of flexible array coils populated with multiple loops for small-animal imaging. PMID:25570478

  7. Detailed design of the GOSAT DHF at NIES and data acquisition/processing/distribution strategy

    NASA Astrophysics Data System (ADS)

    Watanabe, Hiroshi; Ishihara, Hironari; Hayashi, Kenji; Kawazoe, Fumie; Kikuchi, Nobuyuki; Eguchi, Nawo; Matsunaga, Tsuneo; Yokota, Tatsuya

    2008-10-01

    The GOSAT Project (GOSAT stands for Greenhouse gases Observation SATellite) is a joint project of MOE (Ministry of the Environment), JAXA (Japan Aerospace Exploration Agency) and NIES (National Institute for Environmental Studies). Data acquired by TANSO-FTS (Fourier Transform Spectrometer) and TANSO-CAI (Cloud and Aerosol Imager) on GOSAT (TANSO stands for Thermal And Near infrared Sensor for carbon Observation) will be collected at the Tsukuba Space Center at JAXA. The level 1A and 1B data of the FTS (interferogram and spectra, respectively) and the level 1A data of CAI (uncorrected data) will be generated at JAXA and transferred to the GOSAT Data Handling Facility (DHF) at NIES for further processing. Radiometric and geometric corrections will be applied to CAI L1A data to generate CAI L1B data. From CAI L1B data, cloud coverage and aerosol information (CAI Level 2 data) will be estimated. FTS data recognized by CAI as having low cloud coverage will be processed to generate column concentrations of carbon dioxide (CO2) and methane (CH4) (FTS Level 2 data). Level 3 data will be global maps of column concentrations of greenhouse gases averaged in time and space. Level 4 data will be the global distribution of carbon sources/sinks estimated by an inverse model, together with the re-calculated forward model. The major data flow is also described. The Critical Design Review (CDR) of the DHF was completed in early July 2007 to prepare for the scheduled launch of GOSAT in early 2009. In this manuscript, major changes after the CDR are discussed. In addition, the FTS data acquisition scenario is discussed. The data products can be searched and will be open to the public through the GOSAT DHF after the data validation process. The data acquisition plan is also discussed, covering lattice-point observation over land and sun-glint observation over water. The Principal Investigators who submitted a proposal for the Research Announcement will have a chance to request the

  8. Is Children's Acquisition of the Passive a Staged Process? Evidence from Six- and Nine-Year-Olds' Production of Passives

    ERIC Educational Resources Information Center

    Messenger, Katherine; Branigan, Holly P.; McLean, Janet F.

    2012-01-01

    We report a syntactic priming experiment that examined whether children's acquisition of the passive is a staged process, with acquisition of constituent structure preceding acquisition of thematic role mappings. Six-year-olds and nine-year-olds described transitive actions after hearing active and passive prime descriptions involving the same or…

  9. A uniform method for analytically modeling multi-target acquisition with independent networked imaging sensors

    NASA Astrophysics Data System (ADS)

    Friedman, Melvin

    2014-05-01

    The problem solved in this paper is easily stated: for a scenario with n networked and moving imaging sensors, m moving targets and k independent observers searching imagery produced by the n moving sensors, analytically model the system target acquisition probability for each target as a function of time. Input to the model is the time dependence of P∞ and τ, two parameters that describe the observer-sensor-atmosphere-range-target properties of the target acquisition system for the case where neither the sensor nor the target is moving. The parameter P∞ can be calculated by the NV-IPM model, and τ is estimated empirically from P∞. In this model n, m and k are integers, and k can be less than, equal to or greater than n. Increasing n and k results in a substantial increase in target acquisition probabilities. Because the sensors are networked, a target is said to be detected the moment the first of the k observers declares the target. The model applies to time-limited or time-unlimited search, and applies to any imaging sensors operating in any wavelength band provided each sensor can be described by P∞ and τ parameters.
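    The probability bookkeeping described above can be sketched for the static case, assuming the classic search form P(t) = P∞(1 − exp(−t/τ)) for a single observer on a single sensor and independence across the k observers. The paper's full time-dependent, moving-sensor formulation is more general; this is only the combining rule.

```python
import math

def single_observer_prob(p_inf, tau, t):
    """Acquisition probability for one observer on one static sensor by
    time t: P(t) = P_inf * (1 - exp(-t / tau))."""
    return p_inf * (1.0 - math.exp(-t / tau))

def system_prob(p_inf, tau, t, k):
    """Networked-system probability: the target counts as acquired the
    moment the first of k independent observers declares it, so the
    system fails only if all k observers fail."""
    p = single_observer_prob(p_inf, tau, t)
    return 1.0 - (1.0 - p) ** k
```

    The complement rule in `system_prob` is what produces the substantial gain from increasing k noted in the abstract.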

  10. Image Processing in Intravascular OCT

    NASA Astrophysics Data System (ADS)

    Wang, Zhao; Wilson, David L.; Bezerra, Hiram G.; Rollins, Andrew M.

    Coronary artery disease is the leading cause of death in the world. Intravascular optical coherence tomography (IVOCT) is rapidly becoming a promising imaging modality for characterization of atherosclerotic plaques and evaluation of coronary stenting. OCT has several unique advantages over alternative technologies, such as intravascular ultrasound (IVUS), due to its better resolution and contrast. For example, OCT is currently the only imaging modality that can measure the thickness of the fibrous cap of an atherosclerotic plaque in vivo. OCT also has the ability to accurately assess the coverage of individual stent struts by neointimal tissue over time. However, it is extremely time-consuming to analyze IVOCT images manually to derive quantitative diagnostic metrics. In this chapter, we introduce some computer-aided methods to automate the common IVOCT image analysis tasks.

  11. Combining advanced imaging processing and low cost remote imaging capabilities

    NASA Astrophysics Data System (ADS)

    Rohrer, Matthew J.; McQuiddy, Brian

    2008-04-01

    Target images are very important for evaluating the situation when Unattended Ground Sensors (UGS) are deployed. These images add a significant amount of information for determining the difference between hostile and non-hostile activities, the number of targets in an area, the difference between animals and people, the movement dynamics of targets, and when specific activities of interest are taking place. The imaging capability of a UGS system should capture only target activity, not images without targets in the field of view. Current UGS remote imaging systems are neither optimized for target processing nor low cost. In this paper, McQ describes an architectural and technological approach for significantly improving the processing of images to provide target information while reducing the cost of the intelligent remote imaging capability.

  12. Matching rendered and real world images by digital image processing

    NASA Astrophysics Data System (ADS)

    Mitjà, Carles; Bover, Toni; Bigas, Miquel; Escofet, Jaume

    2010-05-01

    Recent advances in computer-generated imagery (CGI) have been used in commercial and industrial photography, providing a broad scope in product advertising. Mixing real-world images with those rendered from virtual-space software shows a more or less visible mismatch between the corresponding image quality performances. Rendered images are produced by software whose quality is limited only by the output resolution. Real-world images are taken with cameras subject to image degradation factors such as residual lens aberrations, diffraction, sensor low-pass anti-aliasing filters, color-pattern demosaicing, etc. The effect of all these image quality degradation factors can be characterized by the system Point Spread Function (PSF). Because the image is the convolution of the object with the system PSF, its characterization quantifies the degradation added to any captured picture. This work explores the use of image processing to degrade the rendered images following the parameters indicated by the real system PSF, attempting to match the virtual and real-world image qualities. The system MTF is determined by the slanted-edge method both in laboratory conditions and in the real picture environment in order to compare the influence of the working conditions on the device performance; an approximation to the system PSF is derived from the two measurements. The rendered images are filtered through a Gaussian filter obtained from the taking system's PSF. Results with and without filtering are shown and compared by measuring the contrast achieved in different regions of the final image.
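    The matching step can be sketched as blurring the rendered image with a Gaussian approximation of the taking system's PSF. In the paper the Gaussian width comes from the slanted-edge MTF measurement; here sigma is a free parameter, and a simple separable convolution with edge clamping stands in for a production filter.

```python
import math

def gaussian_kernel(sigma, radius=None):
    """Normalized 1-D Gaussian kernel (sums to 1)."""
    if radius is None:
        radius = max(1, int(3 * sigma))
    weights = [math.exp(-(i * i) / (2 * sigma * sigma))
               for i in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def blur_rows(image, kernel):
    """Convolve each row with the kernel, clamping at the image edges."""
    radius = len(kernel) // 2
    out = []
    for row in image:
        new_row = []
        for c in range(len(row)):
            acc = 0.0
            for k, w in enumerate(kernel):
                cc = min(max(c + k - radius, 0), len(row) - 1)
                acc += w * row[cc]
            new_row.append(acc)
        out.append(new_row)
    return out

def gaussian_blur(image, sigma):
    """Separable 2-D blur: filter rows, transpose, filter again, transpose back."""
    kernel = gaussian_kernel(sigma)
    once = blur_rows(image, kernel)
    transposed = [list(col) for col in zip(*once)]
    twice = blur_rows(transposed, kernel)
    return [list(col) for col in zip(*twice)]
```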

  13. The impact of cine EPID image acquisition frame rate on markerless soft-tissue tracking

    SciTech Connect

    Yip, Stephen Rottmann, Joerg; Berbeco, Ross

    2014-06-15

    Purpose: Although reduction of the cine electronic portal imaging device (EPID) acquisition frame rate through multiple frame averaging may reduce the hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion, leading to poor autotracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz with an amorphous silicon portal imager (AS1000, Varian Medical Systems, Palo Alto, CA). The maximum frame rate of 12.87 Hz is imposed by the EPID. Lower frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for autotracking. The difference between the programmed and autotracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at 11 field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging. δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the autotracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible, with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient studies (R = 0.72). Moderate to poor correlation was found between image noise and tracking error, with R = −0.58 and −0.19 for the two studies, respectively. Conclusions: Cine EPID
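    The frame-averaging operation at the center of the study can be sketched directly: averaging n consecutive frames lowers the effective rate from the native 12.87 Hz (three-frame averaging gives the 4.29 Hz threshold quoted above) and reduces noise, at the cost of smearing motion across the averaged window. Frames are represented here as flat lists of pixel values; this is an illustration, not the EPID acquisition code.

```python
def average_frames(frames, n):
    """Average consecutive, non-overlapping groups of n frames.
    Each frame is a flat list of pixel values of equal length."""
    averaged = []
    for start in range(0, len(frames) - n + 1, n):
        group = frames[start:start + n]
        averaged.append([sum(vals) / n for vals in zip(*group)])
    return averaged

def effective_rate(native_hz, n):
    """Effective frame rate after n-frame averaging."""
    return native_hz / n
```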

  14. Task-driven image acquisition and reconstruction in cone-beam CT

    NASA Astrophysics Data System (ADS)

    Gang, Grace J.; Webster Stayman, J.; Ehtiati, Tina; Siewerdsen, Jeffrey H.

    2015-04-01

    This work introduces a task-driven imaging framework that incorporates a mathematical definition of the imaging task, a model of the imaging system, and a patient-specific anatomical model to prospectively design image acquisition and reconstruction techniques to optimize task performance. The framework is applied to joint optimization of tube current modulation, view-dependent reconstruction kernel, and orbital tilt in cone-beam CT. The system model considers a cone-beam CT system incorporating a flat-panel detector and 3D filtered backprojection and accurately describes the spatially varying noise and resolution over a wide range of imaging parameters in the presence of a realistic anatomical model. Task-based detectability index (d′) is incorporated as the objective function in a task-driven optimization of image acquisition and reconstruction techniques. The orbital tilt was optimized through an exhaustive search across tilt angles ranging over ±30°. For each tilt angle, the view-dependent tube current and reconstruction kernel (i.e., the modulation profiles) that maximized detectability were identified via an alternating optimization. The task-driven approach was compared with conventional unmodulated and automatic exposure control (AEC) strategies for a variety of imaging tasks and anthropomorphic phantoms. The task-driven strategy outperformed the unmodulated and AEC cases for all tasks. For example, d′ for a sphere detection task in a head phantom was improved by 30% compared to the unmodulated case by using smoother kernels for noisy views and distributing mAs across less noisy views (at fixed total mAs) in a manner that was beneficial to task performance. Similarly, for detection of a line-pair pattern, the task-driven approach increased d′ by 80% compared to no modulation by means of view-dependent mA and kernel selection that yields a modulation transfer function and noise-power spectrum optimal to the task. Optimization of orbital tilt identified the
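    The outer loop of the optimization described above can be sketched as an exhaustive search over tilt angles, where an inner routine returns the best detectability d′ achievable at each tilt. The quadratic toy objective below is a stand-in for the full system model and alternating mA/kernel optimization; its peak location is an invented value.

```python
def best_tilt(detectability, tilts):
    """Exhaustive search: return the (tilt, d') pair maximizing d'."""
    return max(((t, detectability(t)) for t in tilts), key=lambda pair: pair[1])

# Toy stand-in for the inner optimization: a detectability surface that
# happens to peak at +10 degrees (purely illustrative).
def toy_detectability(tilt):
    return 1.0 - ((tilt - 10) / 30.0) ** 2

# Search the +/-30 degree range in 5 degree steps, as in an exhaustive sweep.
tilt_star, d_star = best_tilt(toy_detectability, range(-30, 31, 5))
```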

  15. Task-driven image acquisition and reconstruction in cone-beam CT

    PubMed Central

    Gang, Grace J.; Stayman, J. Webster; Ehtiati, Tina; Siewerdsen, Jeffrey H.

    2015-01-01

    This work introduces a task-driven imaging framework that incorporates a mathematical definition of the imaging task, a model of the imaging system, and a patient-specific anatomical model to prospectively design image acquisition and reconstruction techniques to optimize task performance. The framework is applied to joint optimization of tube current modulation, view-dependent reconstruction kernel, and orbital tilt in cone-beam CT. The system model considers a cone-beam CT system incorporating a flat-panel detector and 3D filtered backprojection and accurately describes the spatially varying noise and resolution over a wide range of imaging parameters and in the presence of a realistic anatomical model. Task-based detectability index (d') is incorporated as the objective function in a task-driven optimization of image acquisition and reconstruction techniques. The orbital tilt was optimized through an exhaustive search across tilt angles ranging over ±30°. For each tilt angle, the view-dependent tube current and reconstruction kernel (i.e., the modulation profiles) that maximized detectability were identified via an alternating optimization. The task-driven approach was compared with conventional unmodulated and automatic exposure control (AEC) strategies for a variety of imaging tasks and anthropomorphic phantoms. The task-driven strategy outperformed the unmodulated and AEC cases for all tasks. For example, d' for a sphere detection task in a head phantom was improved by 30% compared to the unmodulated case by using smoother kernels for noisy views and distributing mAs across less noisy views (at fixed total mAs) in a manner that was beneficial to task performance. Similarly, for detection of a line-pair pattern, the task-driven approach increased d' by 80% compared to no modulation by means of view-dependent mA and kernel selection that yields a modulation transfer function and noise-power spectrum optimal to the task. Optimization of orbital tilt identified the

  16. The acquisition of integrated science process skills in a web-based learning environment

    NASA Astrophysics Data System (ADS)

    Saat, Rohaida Mohd.

    2004-01-01

    Web-based learning is becoming prevalent in science learning. Some use specially designed programs, while others use materials available on the Internet. This qualitative case study examined the process of acquisition of integrated science process skills, particularly the skill of controlling variables, in a web-based learning environment among grade 5 children. Data were gathered primarily from children's conversations and teacher-student conversations. Analysis of the data revealed that the children acquired the skill in three phases: from the phase of recognition to the phase of familiarization and finally to the phase of automation. Nevertheless, the acquisition of the skill only involved the acquisition of certain subskills of the skill of controlling variables. This progression could be influenced by the web-based instructional material that provided declarative knowledge, concrete visualization and opportunities for practice.

  17. A Future Vision of a Data Acquisition: Distributed Sensing, Processing, and Health Monitoring

    NASA Technical Reports Server (NTRS)

    Figueroa, Fernando; Solano, Wanda; Thurman, Charles; Schmalzel, John

    2000-01-01

    This paper presents a vision of a highly enhanced data acquisition and health monitoring system for the NASA Stennis Space Center (SSC) rocket engine test facility. This vision includes the use of advanced processing capabilities in conjunction with highly autonomous distributed sensing and intelligence to monitor and evaluate the health of data in the context of its associated process. This method is expected to significantly reduce data acquisition costs and improve system reliability. A Universal Signal Conditioning Amplifier (USCA)-based system, under development at Kennedy Space Center, is being evaluated for adaptation to the SSC testing infrastructure. Kennedy's USCA architecture offers many advantages, including flexible and auto-configuring data acquisition with improved calibration and verifiability. Possible enhancements at SSC may include multiplexing the distributed USCAs to reduce per-channel cost, and the use of IEEE-485 to Allen-Bradley ControlNet gateways for interfacing with the resident control systems.

  18. Programmable Iterative Optical Image And Data Processing

    NASA Technical Reports Server (NTRS)

    Jackson, Deborah J.

    1995-01-01

    Proposed method of iterative optical image and data processing overcomes limitations imposed by loss of optical power after repeated passes through many optical elements - especially, beam splitters. Involves selective, timed combination of optical wavefront phase conjugation and amplification to regenerate images in real time to compensate for losses in optical iteration loops; timing such that amplification turned on to regenerate desired image, then turned off so as not to regenerate other, undesired images or spurious light propagating through loops from unwanted reflections.

  19. Dynamic whole-body PET parametric imaging: I. Concept, acquisition protocol optimization and clinical application

    NASA Astrophysics Data System (ADS)

    Karakatsanis, Nicolas A.; Lodge, Martin A.; Tahari, Abdel K.; Zhou, Y.; Wahl, Richard L.; Rahmim, Arman

    2013-10-01

    Static whole-body PET/CT, employing the standardized uptake value (SUV), is considered the standard clinical approach to diagnosis and treatment response monitoring for a wide range of oncologic malignancies. Alternative PET protocols involving dynamic acquisition of temporal images have been implemented in the research setting, allowing quantification of tracer dynamics, an important capability for tumor characterization and treatment response monitoring. Nonetheless, dynamic protocols have been confined to single-bed coverage, limiting the axial field-of-view to ˜15-20 cm, and have not been translated to the routine clinical context of whole-body PET imaging for the inspection of disseminated disease. Here, we pursue a transition to dynamic whole-body PET parametric imaging by presenting, within a unified framework, clinically feasible multi-bed dynamic PET acquisition protocols and parametric imaging methods. We investigate solutions to address the challenges of: (i) long acquisitions, (ii) a small number of dynamic frames per bed, and (iii) non-invasive quantification of kinetics in the plasma. In the present study, a novel dynamic (4D) whole-body PET acquisition protocol of ˜45 min total length is presented, composed of (i) an initial 6 min dynamic PET scan (24 frames) over the heart, followed by (ii) a sequence of multi-pass multi-bed PET scans (six passes × seven bed positions, each scanned for 45 s). Standard Patlak linear graphical analysis modeling was employed, coupled with image-derived plasma input function measurements. Ordinary least squares Patlak estimation was used as the baseline regression method to quantify the physiological parameters of tracer uptake rate Ki and total blood distribution volume V on an individual voxel basis. Extensive Monte Carlo simulation studies, using a wide set of published kinetic FDG parameters and the GATE and XCAT platforms, were conducted to optimize the acquisition protocol from a range of ten different clinically
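    The baseline estimator named above, ordinary least squares Patlak regression for a single voxel, can be sketched as follows: the Patlak abscissa is the running integral of the plasma input divided by the plasma value, the ordinate is tissue over plasma, the fitted slope is Ki and the intercept is the distribution volume V. Trapezoidal integration and the test data are illustrative choices, not the paper's implementation.

```python
def patlak_ols(times, plasma, tissue):
    """OLS Patlak fit for one voxel: returns (Ki, V).
    times, plasma (input function) and tissue curves share a time grid."""
    xs, ys, integral = [], [], 0.0
    for i in range(1, len(times)):
        # Running trapezoidal integral of the plasma input function.
        integral += 0.5 * (plasma[i] + plasma[i - 1]) * (times[i] - times[i - 1])
        xs.append(integral / plasma[i])   # Patlak abscissa
        ys.append(tissue[i] / plasma[i])  # Patlak ordinate
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    ki = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
         sum((x - mx) ** 2 for x in xs)
    v = my - ki * mx
    return ki, v
```

    In the multi-bed protocol each voxel contributes only the few late frames from its bed position's passes, which is why the small-number-of-frames challenge (ii) matters for this regression.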

  20. Utilizing image processing techniques to compute herbivory.

    PubMed

    Olson, T E; Barlow, V M

    2001-01-01

    Leafy spurge (Euphorbia esula L. sensu lato) is a perennial weed species common to the north-central United States and southern Canada. The plant is a foreign species toxic to cattle. Spurge infestation can reduce cattle carrying capacity by 50 to 75 percent [1]. University of Wyoming Entomology doctoral candidate Vonny Barlow is conducting research in the area of biological control of leafy spurge via the Aphthona nigriscutis Foudras flea beetle. He is addressing the question of variability within leafy spurge and its potential impact on flea beetle herbivory. One component of Barlow's research consists of measuring the herbivory of leafy spurge plant specimens after introducing adult beetles. Herbivory, the degree of consumption of the plant's leaves, was measured in two ways. First, Barlow assigned each consumed plant specimen a visual rank from 1 to 5. Second, image processing techniques were applied to "before" and "after" images of each plant specimen to quantify herbivory more accurately. Standardized techniques were used to acquire images before and after beetles were allowed to feed on the plants for a period of 12 days. Matlab was used as the image processing tool. The image processing algorithm allowed the user to crop the portion of the "before" image containing only plant foliage. Matlab then cropped the "after" image to the same dimensions and converted both images from RGB to grayscale. The grayscale images were converted to binary based on a user-defined threshold value. Finally, herbivory was computed from the number of black pixels in the "before" and "after" images. The image processing results were mixed. Although this technique depends on user input and non-ideal images, the data are useful to Barlow's research and offer insight into better imaging systems and processing algorithms. PMID:11347423
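
    The crop/grayscale/threshold/count pipeline described above is straightforward to reproduce. The original work used Matlab; the sketch below is a Python/NumPy stand-in with an assumed luminance conversion and toy images, and it assumes foliage appears dark against a light background:

```python
import numpy as np

def herbivory_fraction(before_rgb, after_rgb, threshold=0.5):
    """Fractional leaf loss from 'before'/'after' images: grayscale,
    binarize at a user-defined threshold, then compare dark-pixel counts."""
    def foliage_pixels(rgb):
        gray = rgb @ np.array([0.299, 0.587, 0.114])   # luminance weights (assumed)
        return np.count_nonzero(gray < threshold)      # foliage assumed dark
    before = foliage_pixels(before_rgb)
    after = foliage_pixels(after_rgb)
    return (before - after) / before

# Toy 4x4 images: 8 dark foliage pixels before feeding, 4 after.
before = np.ones((4, 4, 3)); before[:2, :, :] = 0.1
after = np.ones((4, 4, 3)); after[:1, :, :] = 0.1
loss = herbivory_fraction(before, after)   # -> 0.5, i.e. 50% consumed
```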

  1. Single-heartbeat electromechanical wave imaging with optimal strain estimation using temporally unequispaced acquisition sequences

    NASA Astrophysics Data System (ADS)

    Provost, Jean; Thiébaut, Stéphane; Luo, Jianwen; Konofagou, Elisa E.

    2012-02-01

    Electromechanical Wave Imaging (EWI) is a non-invasive, ultrasound-based imaging method capable of mapping the electromechanical wave (EW) in vivo, i.e., the transient deformations occurring in response to the electrical activation of the heart. Optimal imaging frame rates, in terms of the elastographic signal-to-noise ratio, to capture the EW cannot be achieved with conventional imaging sequences, in which the frame rate is low and tied to the imaging parameters. To achieve higher frame rates, EWI is typically performed by acquiring sectors during separate heartbeats and then combining them into a single view. However, the frame rates achieved remain potentially sub-optimal, and this approach precludes the study of non-periodic arrhythmias. This paper describes a temporally unequispaced acquisition sequence (TUAS) for which a wide range of frame rates is achievable independently of the imaging parameters, while maintaining a full view of the heart at high beam density. TUAS is first used to determine the optimal frame rate for EWI in a paced canine heart in vivo and then to image during ventricular fibrillation. These results indicate how EWI can be optimally performed within a single heartbeat, during free breathing and in real time, for both periodic and non-periodic cardiac events.

  2. A review of breast tomosynthesis. Part II. Image reconstruction, processing and analysis, and advanced applications.

    PubMed

    Sechopoulos, Ioannis

    2013-01-01

    Many important post-acquisition aspects of breast tomosynthesis imaging can impact its clinical performance. Chief among them is the reconstruction algorithm that generates the representation of the three-dimensional breast volume from the acquired projections. But even after reconstruction, additional processes, such as artifact reduction algorithms and computer-aided detection and diagnosis, among others, can also impact the performance of breast tomosynthesis in the clinical realm. In this two-part paper, a review of breast tomosynthesis research is performed, with an emphasis on its medical physics aspects. In the companion paper, the first part of this review, the research performed relevant to the image acquisition process is examined. This second part reviews the research on the post-acquisition aspects, including reconstruction, image processing, and analysis, as well as the advanced applications being investigated for breast tomosynthesis. PMID:23298127

  3. A review of breast tomosynthesis. Part II. Image reconstruction, processing and analysis, and advanced applications

    PubMed Central

    Sechopoulos, Ioannis

    2013-01-01

    Many important post-acquisition aspects of breast tomosynthesis imaging can impact its clinical performance. Chief among them is the reconstruction algorithm that generates the representation of the three-dimensional breast volume from the acquired projections. But even after reconstruction, additional processes, such as artifact reduction algorithms and computer-aided detection and diagnosis, among others, can also impact the performance of breast tomosynthesis in the clinical realm. In this two-part paper, a review of breast tomosynthesis research is performed, with an emphasis on its medical physics aspects. In the companion paper, the first part of this review, the research performed relevant to the image acquisition process is examined. This second part reviews the research on the post-acquisition aspects, including reconstruction, image processing, and analysis, as well as the advanced applications being investigated for breast tomosynthesis. PMID:23298127

  4. How Digital Image Processing Became Really Easy

    NASA Astrophysics Data System (ADS)

    Cannon, Michael

    1988-02-01

    In the early and mid-1970s, digital image processing was the subject of intense university and corporate research. The research lay along two lines: (1) developing mathematical techniques for improving the appearance of or analyzing the contents of images represented in digital form, and (2) creating cost-effective hardware to carry out these techniques. The research has been very effective, as evidenced by the continued decline of image processing as a research topic and the rapid increase in commercial companies marketing digital image processing software and hardware.

  5. Non-linear Post Processing Image Enhancement

    NASA Technical Reports Server (NTRS)

    Hunt, Shawn; Lopez, Alex; Torres, Angel

    1997-01-01

    A non-linear filter for image post processing based on the feedforward neural network topology is presented. This study was undertaken to investigate the usefulness of "smart" filters in image post processing. The filter has been shown to be useful in recovering high frequencies, such as those lost during the JPEG compression-decompression process. The filtered images have a higher signal-to-noise ratio and a higher perceived image quality. Simulation studies comparing the proposed filter with the optimum mean square non-linear filter, examples of the high frequency recovery, and the statistical properties of the filter are given.

  6. Quantitative image processing in fluid mechanics

    NASA Technical Reports Server (NTRS)

    Hesselink, Lambertus; Helman, James; Ning, Paul

    1992-01-01

    The current status of digital image processing in fluid flow research is reviewed. In particular, attention is given to a comprehensive approach to the extraction of quantitative data from multivariate databases and examples of recent developments. The discussion covers numerical simulations and experiments, data processing, generation and dissemination of knowledge, traditional image processing, hybrid processing, fluid flow vector field topology, and isosurface analysis using Marching Cubes.

  7. Anthropological methods of optical image processing

    NASA Astrophysics Data System (ADS)

    Ginzburg, V. M.

    1981-12-01

    Some applications of the new method for optical image processing, based on a prior separation of informative elements (IE) with the help of a defocusing equal to the average eye defocusing, considered in a previous paper, are described. A diagram of a "drawing" robot with the use of defocusing and other mechanisms of the human visual system (VS) is given. Methods of narrowing the TV channel bandwidth and elimination of noises in computer image processing by prior image defocusing are described.

  8. Water surface capturing by image processing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    An alternative means of measuring the water surface interface during laboratory experiments is processing a series of sequentially captured images. Image processing can provide a continuous, non-intrusive record of the water surface profile whose accuracy is not dependent on water depth. More trad...

  9. Automatic processing, analysis, and recognition of images

    NASA Astrophysics Data System (ADS)

    Abrukov, Victor S.; Smirnov, Evgeniy V.; Ivanov, Dmitriy G.

    2004-11-01

    New approaches and computer codes (A&CC) for automatic processing, analysis and recognition of images are offered. The A&CC are based on representing an object image as a collection of pixels of various colours and on consecutive automatic painting of the distinguished parts of the image. The technical objectives of the A&CC centre on 1) image processing, 2) image feature extraction, and 3) image analysis, among others, in any sequence and combination. The A&CC allow one to obtain various geometrical and statistical parameters of an object image and its parts. Further possibilities arise from the use of artificial neural network technologies. We believe that the A&CC can be used in the creation of testing and control systems in various fields of industry and in military applications (airborne imaging systems, tracking of moving objects), in medical diagnostics, in the creation of new software for CCDs, in industrial vision and decision-making systems, etc. The capabilities of the A&CC have been tested on image analysis of model fires, plumes of sprayed fluid, and ensembles of particles; on decoding of interferometric images; on digitization of paper diagrams of electrical signals; on text recognition; on image de-noising and filtering; on analysis of astronomical images and aerial photography; and on object detection.
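
    The "consecutive automatic painting" of distinguished image parts reads like connected-component labelling by flood fill. The authors' codes are not given here, so the following is only an illustrative sketch on a toy character grid:

```python
from collections import deque

def paint_regions(grid):
    """Label 4-connected same-colour regions of a pixel grid by flood fill."""
    h, w = len(grid), len(grid[0])
    labels = [[0] * w for _ in range(h)]
    n = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx]:
                continue
            n += 1                       # start "painting" a new part
            colour = grid[sy][sx]
            queue = deque([(sy, sx)])
            labels[sy][sx] = n
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w \
                            and not labels[ny][nx] and grid[ny][nx] == colour:
                        labels[ny][nx] = n
                        queue.append((ny, nx))
    return labels, n

grid = ["aab",
        "aab",
        "bbb"]
labels, n = paint_regions(grid)   # two parts: the 'a' block and the 'b' region
```

    Geometrical and statistical parameters of each part (area as pixel count per label, bounding box, and so on) then follow directly from the label map.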

  10. SUPRIM: easily modified image processing software.

    PubMed

    Schroeter, J P; Bretaudiere, J P

    1996-01-01

    A flexible, modular software package intended for the processing of electron microscopy images is presented. The system consists of a set of image processing tools or filters, written in the C programming language, and a command line style user interface based on the UNIX shell. The pipe and filter structure of UNIX and the availability of command files in the form of shell scripts eases the construction of complex image processing procedures from the simpler tools. Implementation of a new image processing algorithm in SUPRIM may often be performed by construction of a new shell script, using already existing tools. Currently, the package has been used for two- and three-dimensional image processing and reconstruction of macromolecules and other structures of biological interest. PMID:8742734

  11. Design and construction of the front-end electronics data acquisition for the SLD CRID (Cherenkov Ring Imaging Detector)

    SciTech Connect

    Hoeflich, J.; McShurley, D.; Marshall, D.; Oxoby, G.; Shapiro, S.; Stiles, P. ); Spencer, E. . Inst. for Particle Physics)

    1990-10-01

    We describe the front-end electronics for the Cherenkov Ring Imaging Detector (CRID) of the SLD at the Stanford Linear Accelerator Center. The design philosophy and implementation are discussed with emphasis on the low-noise hybrid amplifiers, signal processing and data acquisition electronics. The system receives signals from a highly efficient single-photoelectron detector. These signals are shaped and amplified before being stored in an analog memory and processed by a digitizing system. The data from several ADCs are multiplexed and transmitted via fiber optics to the SLD FASTBUS system. We highlight the technologies used, as well as the space, power dissipation, and environmental constraints imposed on the system. 16 refs., 10 figs.

  12. Advanced camera image data acquisition system for Pi-of-the-Sky

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, Maciej; Kasprowicz, Grzegorz; Pozniak, Krzysztof; Romaniuk, Ryszard; Wrochna, Grzegorz

    2008-11-01

    The paper describes a new generation of high performance, remotely controlled CCD cameras designed for astronomical applications. A completely new camera PCB was designed, manufactured, tested and commissioned. The CCD chip was positioned differently than in the previous design, resulting in better performance of the astronomical video data acquisition system. The camera was built using a low-noise, 4-Mpixel CCD circuit by STA. The electronic circuit of the camera is highly parameterized, reconfigurable and modular in comparison with the first-generation solution, due to the use of open software solutions and an FPGA circuit, the Altera Cyclone EP1C6. New algorithms were implemented in the FPGA chip. The camera system uses the following advanced electronic circuits: the Cypress CY7C68013a microcontroller (8051 core), the Analog Devices AD9826 image processor, the Realtek RTL8169s GigEth interface, the Atmel AT45DB642 SDRAM memory, and the ARM926EJ-S AT91SAM9260 microprocessor by ARM and Atmel. Software solutions for the camera, its remote control and image data acquisition are based entirely on open source platforms, using the ISI and V4L2 image interfaces, the AMBA and AHB data buses, and the INDI protocol. The camera will be replicated in 20 units and is designed for continuous on-line, wide-angle observations of the sky in the Pi-of-the-Sky research program.

  13. Ultimate Attainment in Second Language Acquisition: Near-Native Sentence Processing in Spanish

    ERIC Educational Resources Information Center

    Jegerski, Jill

    2010-01-01

    A study of near-native sentence processing was carried out using the self-paced reading method. Twenty-three near-native speakers of Spanish were identified on the basis of native-like proficiency, age of onset of acquisition after 15 years, and a minimum of three years ongoing residency in Spanish-speaking countries. The sentence comprehension…

  14. Development of a data acquisition and processing system for precision agriculture

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A data acquisition and processing system for precision agriculture was developed by using MapX5.0 and Visual C 6.0. This system can be used easily and quickly for drawing grid maps in-field, creating parameters for grid-reorganization, guiding in-field data collection, converting data between diffe...

  15. A Problem-Based Learning Model for Teaching the Instructional Design Business Acquisition Process.

    ERIC Educational Resources Information Center

    Kapp, Karl M.; Phillips, Timothy L.; Wanner, Janice H.

    2002-01-01

    Outlines a conceptual framework for using a problem-based learning model for teaching the Instructional Design Business Acquisition Process. Discusses writing a response to a request for proposal, developing a working prototype, orally presenting the solution, and the impact of problem-based learning on students' perception of their confidence in…

  16. Learning and Individual Differences: An Ability/Information-Processing Framework for Skill Acquisition. Final Report.

    ERIC Educational Resources Information Center

    Ackerman, Phillip L.

    A program of theoretical and empirical research focusing on the ability determinants of individual differences in skill acquisition is reviewed. An integrative framework for information-processing and cognitive ability determinants of skills is reviewed, along with principles for ability-skill relations. Experimental manipulations were used to…

  17. Development of a data acquisition and processing system for precision agriculture

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A data acquisition and processing system for precision agriculture was developed by using MapX5.0 and Visual C6.0. This system can be used easily and quickly for drawing grid maps in-field, making out parameters for grid-reorganization, guiding for in-field data collection, converting data between ...

  18. The Processing Cost of Reference Set Computation: Acquisition of Stress Shift and Focus

    ERIC Educational Resources Information Center

    Reinhart, Tanya

    2004-01-01

    Reference set computation -- the construction of a (global) comparison set to determine whether a given derivation is appropriate in context -- comes with a processing cost. I argue that this cost is directly visible at the acquisition stage: In those linguistic areas in which it has been independently established that such computation is indeed…

  19. Processes of Language Acquisition in Children with Autism: Evidence from Preferential Looking

    ERIC Educational Resources Information Center

    Swensen, Lauren D.; Kelley, Elizabeth; Fein, Deborah; Naigles, Letitia R.

    2007-01-01

    Two language acquisition processes (comprehension preceding production of word order, the noun bias) were examined in 2- and 3-year-old children (n=10) with autistic spectrum disorder and in typically developing 21-month-olds (n=13). Intermodal preferential looking was used to assess comprehension of subject-verb-object word order and the tendency…

  20. A user report on the trial use of gesture commands for image manipulation and X-ray acquisition.

    PubMed

    Li, Ellis Chun Fai; Lai, Christopher Wai Keung

    2016-07-01

    A touchless environment for image manipulation and X-ray acquisition may enhance current infection control measures during X-ray examinations simply by avoiding any touch on the control panel. The present study aimed to design and carry out a trial experiment on using motion-sensing technology to perform image manipulation and X-ray acquisition functions (the activities a radiographer performs most frequently during an X-ray examination) under an experimental setup. Based on the authors' clinical experience, several gesture commands were carefully designed to complete a single X-ray examination. Four radiographers were randomly recruited for the study. They were asked to perform gesture commands in front of a computer integrated with a gesture-based touchless controller. The translational movements of the tips of their thumb and index finger while performing different gesture commands were recorded for analysis. Although individual operators were free to decide the extent and speed of finger and thumb movement while performing these gesture commands, the results demonstrated that all operators could perform the proposed gesture commands with good consistency, suggesting that motion-sensing technology could, in practice, be integrated into radiographic examinations. In summary, although implementing motion-sensing technology as an input method might inevitably slow down examination throughput, given the extra procedural steps required to trigger specific gesture commands in sequence, it is advantageous in minimizing the potential for pathogen contamination, and hence cross infection, during image operation and image processing. PMID:27230385

  1. Wide-field flexible endoscope for simultaneous color and NIR fluorescence image acquisition during surveillance colonoscopy

    NASA Astrophysics Data System (ADS)

    García-Allende, P. Beatriz; Nagengast, Wouter B.; Glatz, Jürgen; Ntziachristos, Vasilis

    2013-03-01

    Colorectal cancer (CRC) is the third most common form of cancer and, despite recent declines in both incidence and mortality, it remains the second leading cause of cancer-related deaths in the western world. Colonoscopy is the standard for detection and removal of premalignant lesions to prevent CRC. The major challenges that physicians face during surveillance colonoscopy are the high adenoma miss-rates and the lack of functional information to facilitate decision-making concerning which lesions to remove. Targeted imaging with NIR fluorescence would address these limitations: tissue penetration is increased in the NIR range, while the combination with targeted NIR fluorescent agents provides molecularly specific detection of cancer cells, i.e. a red-flag detection strategy that allows tumor imaging with optimal sensitivity and specificity. This work describes the development of a flexible endoscopic fluorescence imaging method that can be integrated with standard medical endoscopes and facilitates the clinical use of this potential. A semi-disposable coherent fiber-optic imaging bundle of the kind traditionally employed in the exploration of biliary and pancreatic ducts is proposed, since it is long and thin enough to be guided through the working channel of a conventional video colonoscope, allowing visualization of proximal lesions in the colon. A custom-developed zoom system magnifies the image of the proximal end of the imaging bundle to fill the sensor dimensions of two cameras operating in parallel, providing simultaneous color and fluorescence video acquisition.

  2. Imaging acquisition display performance: an evaluation and discussion of performance metrics and procedures.

    PubMed

    Silosky, Michael S; Marsh, Rebecca M; Scherzinger, Ann L

    2016-01-01

    When The Joint Commission updated its Requirements for Diagnostic Imaging Services for hospitals and ambulatory care facilities on July 1, 2015, among the new requirements was an annual performance evaluation for acquisition workstation displays. The purpose of this work was to evaluate a large cohort of acquisition displays used in a clinical environment and compare the results with existing performance standards provided by the American College of Radiology (ACR) and the American Association of Physicists in Medicine (AAPM). Measurements of the minimum luminance, maximum luminance, and luminance uniformity were performed on 42 acquisition displays across multiple imaging modalities. The mean values, standard deviations, and ranges were calculated for these metrics. Additionally, visual evaluations of contrast, spatial resolution, and distortion were performed using either the Society of Motion Picture and Television Engineers test pattern or the TG-18-QC test pattern. Finally, an evaluation of local nonuniformities was performed using either a uniform white display or the TG-18-UN80 test pattern. Displays tested were flat-panel liquid crystal displays that ranged from less than 1 year to 10 years of use and had been built by a wide variety of manufacturers. The mean values of Lmin and Lmax for the displays tested were 0.28 ± 0.13 cd/m2 and 135.07 ± 33.35 cd/m2, respectively. The mean maximum luminance deviation for ultrasound and non-ultrasound displays was 12.61% ± 4.85% and 14.47% ± 5.36%, respectively. Visual evaluation of display performance varied depending on several factors, including brightness and contrast settings and the test pattern used for image quality assessment. This work provides a snapshot of the performance of 42 acquisition displays across several imaging modalities in clinical use at a large medical center. Comparison with existing performance standards reveals that changes in display technology and the move from cathode ray

  3. 48 CFR 636.602-5 - Short selection processes for contracts not to exceed the simplified acquisition threshold.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Short selection processes... ARCHITECT-ENGINEER CONTRACTS Architect-Engineer Services 636.602-5 Short selection processes for contracts not to exceed the simplified acquisition threshold. The short selection process described in FAR...

  4. 48 CFR 636.602-5 - Short selection processes for contracts not to exceed the simplified acquisition threshold.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 4 2011-10-01 2011-10-01 false Short selection processes... ARCHITECT-ENGINEER CONTRACTS Architect-Engineer Services 636.602-5 Short selection processes for contracts not to exceed the simplified acquisition threshold. The short selection process described in FAR...

  5. 48 CFR 1336.602-5 - Short selection process for contracts not to exceed the simplified acquisition threshold.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Short selection process... CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Architect-Engineer Services 1336.602-5 Short selection process... exceed the simplified acquisition threshold, either or both of the short selection processes set out...

  6. Image processing for cameras with fiber bundle image relay.

    PubMed

    Olivas, Stephen J; Arianpour, Ashkan; Stamenov, Igor; Morrison, Rick; Stack, Ron A; Johnson, Adam R; Agurok, Ilya P; Ford, Joseph E

    2015-02-10

    Some high-performance imaging systems generate a curved focal surface and so are incompatible with focal plane arrays fabricated by conventional silicon processing. One example is a monocentric lens, which forms a wide field-of-view high-resolution spherical image with a radius equal to the focal length. Optical fiber bundles have been used to couple between this focal surface and planar image sensors. However, such fiber-coupled imaging systems suffer from artifacts due to image sampling and incoherent light transfer by the fiber bundle as well as resampling by the focal plane, resulting in a fixed obscuration pattern. Here, we describe digital image processing techniques to improve image quality in a compact 126° field-of-view, 30 megapixel panoramic imager, where a 12 mm focal length F/1.35 lens made of concentric glass surfaces forms a spherical image surface, which is fiber-coupled to six discrete CMOS focal planes. We characterize the locally space-variant system impulse response at various stages: monocentric lens image formation onto the 2.5 μm pitch fiber bundle, image transfer by the fiber bundle, and sensing by a 1.75 μm pitch backside illuminated color focal plane. We demonstrate methods to mitigate moiré artifacts and local obscuration, correct for sphere to plane mapping distortion and vignetting, and stitch together the image data from discrete sensors into a single panorama. We compare processed images from the prototype to those taken with a 10× larger commercial camera with comparable field-of-view and light collection. PMID:25968031
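
    One simple stand-in for the local-obscuration correction mentioned above is to interpolate over the pixels hidden by fiber cladding using their valid neighbours; the paper's actual method may differ, and the obscured-pixel lattice below is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.uniform(0.4, 0.6, (16, 16))   # stand-in for a fiber-coupled frame
mask = np.zeros((16, 16), dtype=bool)
mask[::4, ::4] = True                   # hypothetical cladding-obscured lattice
img[mask] = 0.0                         # obscured pixels carry no signal

# Replace each obscured pixel by the mean of its valid 8-neighbourhood.
filled = img.copy()
for y, x in zip(*np.nonzero(mask)):
    y0, y1 = max(y - 1, 0), min(y + 2, 16)
    x0, x1 = max(x - 1, 0), min(x + 2, 16)
    patch = img[y0:y1, x0:x1]
    good = ~mask[y0:y1, x0:x1]
    filled[y, x] = patch[good].mean()
```

    In the real system the obscuration pattern would come from a calibration image of the bundle rather than a fixed lattice.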

  7. CT Image Processing Using Public Digital Networks

    PubMed Central

    Rhodes, Michael L.; Azzawi, Yu-Ming; Quinn, John F.; Glenn, William V.; Rothman, Stephen L.G.

    1984-01-01

    Nationwide commercial computer communication is now commonplace for those applications where digital dialogues are generally short and widely distributed, and where bandwidth does not exceed that of dial-up telephone lines. Image processing using such networks is prohibitive because of the large volume of data inherent to digital pictures. With a blend of increasing bandwidth and distributed processing, network image processing becomes possible. This paper examines characteristics of a digital image processing service for a nationwide network of CT scanner installations. Issues of image transmission, data compression, distributed processing, software maintenance, and interfacility communication are also discussed. Included are results that show the volume and type of processing experienced by a network of over 50 CT scanners for the last 32 months.

  8. Image processing for drawing recognition

    NASA Astrophysics Data System (ADS)

    Feyzkhanov, Rustem; Zhelavskaya, Irina

    2014-03-01

    The task of recognizing the edges of rectangular structures is well known. Still, almost all existing approaches work with static images and have no limit on processing time. We propose applying homography estimation to a video stream obtained from a webcam, and present an algorithm suited to this kind of application. One of the main use cases is the recognition of drawings made on a piece of paper held before the webcam.

  9. Parallel digital signal processing architectures for image processing

    NASA Astrophysics Data System (ADS)

    Kshirsagar, Shirish P.; Hartley, David A.; Harvey, David M.; Hobson, Clifford A.

    1994-10-01

    This paper describes research into a high speed image processing system using parallel digital signal processors for the processing of electro-optic images. The objective of the system is to reduce the processing time of non-contact type inspection problems, including industrial and medical applications. A single processor cannot deliver the processing power required for these applications; hence, a MIMD system was designed and constructed to enable fast processing of electro-optic images. The Texas Instruments TMS320C40 digital signal processor is used due to its high speed floating point CPU and its support for the parallel processing environment. A custom designed VISION bus is provided to transfer images between processors. The system is being applied to solder joint inspection of high technology printed circuit boards.

  10. Self-organizing Symbol Acquisition and Motion Generation based on Dynamics-based Information Processing System

    NASA Astrophysics Data System (ADS)

    Okada, Masafumi; Nakamura, Daisuke; Nakamura, Yoshihiko

    The abilities to acquire and manipulate symbols are among the characteristics that distinguish human beings from other creatures. In this paper, based on the recurrent self-organizing map and a dynamics-based information processing system, we propose a dynamics-based self-organizing map (DBSOM). This method enables the design of a topological map from time-sequence data, which supports recognition and generation of robot motion. Using this method, we design a self-organizing symbol acquisition system and a motion generation system for a humanoid robot. By implementing the DBSOM on a robot in the real world, we realize symbol acquisition from experimental data and investigate the spatial properties of the obtained DBSOM.

  11. An algorithm to unveil the inner structure of objects concealed by beam divergence in radiographic image acquisition systems

    NASA Astrophysics Data System (ADS)

    Almeida, G. L.; Silvani, M. I.; Lopes, R. T.

    2014-11-01

    Two main parameters rule the performance of an image acquisition system, namely spatial resolution and contrast. For radiographic systems using cone beam arrangements, the farther the source, the better the resolution, but the contrast diminishes due to the lower statistics. A closer source would yield a higher contrast, but it would no longer reproduce the attenuation map of the object, as the incoming beam flux would be reduced by unequal, large divergences and attenuation factors. This work proposes a procedure to correct these effects when the object is comprised of a hull - or encased in one - whose shape can be described in analytical-geometry terms. Such a description allows the construction of a matrix containing the attenuation factors undergone by the beam from the source to its final destination at each coordinate on the 2D detector. Each matrix element incorporates the attenuation suffered by the beam on its way through the hull wall, as well as its reduction due to the square of the distance to the source and the angle at which it hits the detector surface. When the pixel intensities of the original image are corrected by these factors, the image contrast, reduced by the overall attenuation in the exposure phase, is recovered, allowing one to see details otherwise concealed by the low contrast. In order to verify the soundness of this approach, synthetic images of objects of different shapes, such as plates and tubes, incorporating defects and statistical fluctuations, were generated, recorded for comparison, and then processed to improve their contrast. The developed algorithm, which generates, processes and plots the images, has been written in Fortran 90. As the resulting final images exhibit the expected improvements, it seems worthwhile to carry out further tests with actual experimental radiographs.
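
    For the simplest hull geometry, a flat plate parallel to the detector, the per-pixel correction described above can be sketched directly: each detector coordinate gets an inverse-square factor, an incidence-angle factor, and an attenuation factor over the slanted path through the wall. The paper's algorithm is in Fortran 90 and handles general analytical shapes; this NumPy illustration uses entirely hypothetical numbers:

```python
import numpy as np

SDD = 100.0          # source-to-detector distance (arbitrary units, hypothetical)
mu, wall = 0.5, 1.0  # hull attenuation coefficient and wall thickness (hypothetical)

# Detector pixel coordinates centred on the beam axis.
xs = np.arange(-32, 32) + 0.5
X, Y = np.meshgrid(xs, xs)

r = np.sqrt(X**2 + Y**2 + SDD**2)   # source-to-pixel distance
cos_t = SDD / r                     # cosine of the beam incidence angle

# Relative flux at each pixel: inverse-square falloff, oblique incidence,
# and attenuation over the slanted path through the wall (wall / cos_t).
flux = (SDD / r) ** 2 * cos_t * np.exp(-mu * wall / cos_t)
correction = flux / flux.max()      # normalised to the beam axis

# Dividing a raw frame by this map flattens the divergence/attenuation
# pattern, recovering contrast away from the image centre.
raw = np.random.default_rng(0).poisson(1000 * correction).astype(float)
corrected = raw / correction
```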

  12. An algorithm to unveil the inner structure of objects concealed by beam divergence in radiographic image acquisition systems

    SciTech Connect

    Almeida, G. L.; Silvani, M. I.; Lopes, R. T.

    2014-11-11

    Two main parameters rule the performance of an image acquisition system, namely spatial resolution and contrast. For radiographic systems using cone-beam arrangements, the farther the source, the better the resolution, but the contrast diminishes due to the lower statistics. A closer source would yield a higher contrast, but the image would no longer reproduce the attenuation map of the object, as the incoming beam flux would be reduced by unequal, large divergences and attenuation factors. This work proposes a procedure to correct these effects when the object consists of a hull, or is encased in one, whose shape can be described in analytical-geometry terms. Such a description allows the construction of a matrix containing the attenuation factors undergone by the beam from the source to its final destination at each coordinate on the 2D detector. Each matrix element incorporates the attenuation suffered by the beam in its travel through the hull wall, as well as its reduction due to the square of the distance to the source and the angle at which it hits the detector surface. When the pixel intensities of the original image are corrected by these factors, the image contrast, reduced by the overall attenuation in the exposure phase, is recovered, allowing one to see details otherwise concealed by the low contrast. In order to verify the soundness of this approach, synthetic images of objects of different shapes, such as plates and tubes, incorporating defects and statistical fluctuations, have been generated, recorded for further comparison, and afterwards processed to improve their contrast. The developed algorithm, which generates, processes, and plots the images, has been written in Fortran 90. As the resulting final images exhibit the expected improvements, it seems worthwhile to carry out further tests with actual experimental radiographs.
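    The correction described in this abstract lends itself to a compact numerical sketch. The following Python fragment is a simplified stand-in for the authors' Fortran 90 code, assuming the simplest hull geometry, a flat plate parallel to the detector, with hypothetical distances and attenuation coefficient:

```python
import numpy as np

def correction_matrix(nx, ny, pixel=0.1, sdd=100.0, mu=0.5, thickness=1.0):
    """Correction factors for a point source imaging through a flat hull wall.

    Each element combines the three effects named in the abstract:
    inverse-square falloff, oblique incidence on the detector, and the
    extra attenuation along the slanted path through the wall. Factors
    are normalized to 1 on the beam axis. (All lengths in cm; the
    values are hypothetical, not taken from the paper.)
    """
    ys, xs = np.meshgrid(
        (np.arange(ny) - ny / 2 + 0.5) * pixel,
        (np.arange(nx) - nx / 2 + 0.5) * pixel,
        indexing="ij",
    )
    r2 = xs ** 2 + ys ** 2 + sdd ** 2      # squared source-to-pixel distance
    cos_theta = sdd / np.sqrt(r2)          # incidence angle on the detector
    path = thickness / cos_theta           # slant path through the hull wall
    return (sdd ** 2 / r2) * cos_theta * np.exp(-mu * (path - thickness))

def restore_contrast(image, factor):
    """Divide out the geometric and attenuation losses pixel by pixel."""
    return image / factor
```

Dividing a simulated radiograph by this factor matrix restores the contrast lost to beam divergence and wall attenuation, which is the essence of the proposed procedure.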

  13. Health Hazard Assessment and Toxicity Clearances in the Army Acquisition Process

    NASA Technical Reports Server (NTRS)

    Macko, Joseph A., Jr.

    2000-01-01

    The United States Army Materiel Command, Army Acquisition Pollution Prevention Support Office (AAPPSO) is responsible for creating and managing the U.S. Army Wide Acquisition Pollution Prevention Program. They have established Integrated Process Teams (IPTs) within each of the Major Subordinate Commands of the Army Materiel Command. AAPPSO provides centralized integration, coordination, and oversight of the Army Acquisition Pollution Prevention Program (AAPPP) , and the IPTs provide the decentralized execution of the AAPPSO program. AAPPSO issues policy and guidance, provides resources and prioritizes P2 efforts. It is the policy of the (AAPPP) to require United States Army Surgeon General approval of all materials or substances that will be used as an alternative to existing hazardous materials, toxic materials and substances, and ozone-depleting substances. The Army has a formal process established to address this effort. Army Regulation 40-10 requires a Health Hazard Assessment (HHA) during the Acquisition milestones of a new Army system. Army Regulation 40-5 addresses the Toxicity Clearance (TC) process to evaluate new chemicals and materials prior to acceptance as an alternative. U.S. Army Center for Health Promotion and Preventive Medicine is the Army's matrixed medical health organization that performs the HHA and TC mission.

  14. Parallel asynchronous hardware implementation of image processing algorithms

    NASA Technical Reports Server (NTRS)

    Coon, Darryl D.; Perera, A. G. U.

    1990-01-01

    Research is being carried out on hardware for a new approach to focal plane processing. The hardware involves silicon injection mode devices. These devices provide a natural basis for parallel asynchronous focal plane image preprocessing. The simplicity and novel properties of the devices would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture built from arrays of the devices would form a two-dimensional (2-D) array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuron-like asynchronous pulse-coded form through the laminar processor. No multiplexing, digitization, or serial processing would occur in the preprocessing stage. High performance is expected, based on pulse coding of input currents down to one picoampere with noise referred to input of about 10 femtoamperes. Linear pulse coding has been observed for input currents ranging up to seven orders of magnitude. Low power requirements suggest utility in space and in conjunction with very large arrays. Very low dark current and multispectral capability are possible because of hardware compatibility with the cryogenic environment of high-performance detector arrays. The aforementioned hardware development effort is aimed at systems which would integrate image acquisition and image processing.

  15. Grid Computing Application for Brain Magnetic Resonance Image Processing

    NASA Astrophysics Data System (ADS)

    Valdivia, F.; Crépeault, B.; Duchesne, S.

    2012-02-01

    This work emphasizes the use of grid computing and web technology for automatic post-processing of brain magnetic resonance images (MRI) in the context of neuropsychiatric (Alzheimer's disease) research. Post-acquisition image processing is achieved through the interconnection of several individual processes into pipelines. Each process has input and output data ports, options and execution parameters, and performs single tasks such as: a) extracting individual image attributes (e.g. dimensions, orientation, center of mass), b) performing image transformations (e.g. scaling, rotation, skewing, intensity standardization, linear and non-linear registration), c) performing image statistical analyses, and d) producing the necessary quality control images and/or files for user review. The pipelines are built to perform specific sequences of tasks on the alphanumeric data and MRIs contained in our database. The web application is coded in PHP and allows the creation of scripts to create, store and execute pipelines and their instances either on our local cluster or on high-performance computing platforms. To run an instance on an external cluster, the web application opens a communication tunnel through which it copies the necessary files, submits the execution commands and collects the results. We present results of system tests for the processing of a set of 821 brain MRIs from the Alzheimer's Disease Neuroimaging Initiative study via a nonlinear registration pipeline composed of 10 processes. Our results show successful execution on both local and external clusters, and a 4-fold increase in performance when using the external cluster. However, the latter's performance does not scale linearly, as queue waiting times and execution overhead increase with the number of tasks to be executed.
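    The pipeline structure described above, individual processes with input and output data ports and options, chained into sequences, can be sketched as follows; the stage names and tasks are hypothetical illustrations, not the actual processes of the web application:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Process:
    """One pipeline stage: a named task with options and data ports."""
    name: str
    task: Callable[[dict], dict]
    options: dict = field(default_factory=dict)

    def run(self, data: dict) -> dict:
        # Merge the stage options into the input ports, run the task,
        # and merge its output ports back into the shared data record.
        out = self.task({**data, **self.options})
        return {**data, **out}

def run_pipeline(stages, data):
    """Execute the stages in sequence, threading the data through."""
    for stage in stages:
        data = stage.run(data)
    return data

# Hypothetical two-stage pipeline: extract an attribute, then scale an image.
attrs = Process("attributes", lambda d: {"dims": len(d["image"])})
scale = Process("scale",
                lambda d: {"image": [v * d["factor"] for v in d["image"]]},
                options={"factor": 2})
result = run_pipeline([attrs, scale], {"image": [1, 2, 3]})
```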

  16. 48 CFR 1036.602-5 - Short selection process for contracts not to exceed the simplified acquisition threshold.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS; Architect-Engineer Services; 1036.602-5 Short selection process for contracts not to exceed the simplified acquisition threshold. 48 Federal Acquisition Regulations System, 2014-10-01.

  17. Process perspective on image quality evaluation

    NASA Astrophysics Data System (ADS)

    Leisti, Tuomas; Halonen, Raisa; Kokkonen, Anna; Weckman, Hanna; Mettänen, Marja; Lensu, Lasse; Ritala, Risto; Oittinen, Pirkko; Nyman, Göte

    2008-01-01

    The psychological complexity of multivariate image quality evaluation makes it difficult to develop general image quality metrics. Quality evaluation involves several mental processes, and ignoring these processes and using only a few test images can lead to biased results. Using a qualitative/quantitative (Interpretation Based Quality, IBQ) methodology, we examined the process of pair-wise comparison in a setting where the quality of images printed by a laser printer on different paper grades was evaluated. The test image consisted of a picture of a table covered with several objects. Three other images were also used: photographs of a woman, a cityscape, and a countryside. In addition to the pair-wise comparisons, observers (N=10) were interviewed about the subjective quality attributes they used in making their quality decisions. An examination of the individual pair-wise comparisons revealed serious inconsistencies in observers' evaluations for the test image content, but not for the other contents. The qualitative analysis showed that this inconsistency was due to the observers' focus of attention. The lack of an easily recognizable context in the test image may have contributed to this inconsistency. To obtain reliable knowledge of the effect of image context or attention on subjective image quality, a qualitative methodology is needed.

  18. Performance of a VME-based parallel processing LIDAR data acquisition system (summary)

    SciTech Connect

    Moore, K.; Buttler, B.; Caffrey, M.; Soriano, C.

    1995-05-01

    It may be possible to make accurate, real-time, autonomous, two- and three-dimensional wind measurements remotely with an elastic backscatter Light Detection and Ranging (LIDAR) system by incorporating digital parallel processing hardware into the data acquisition system. In this paper, we report the performance of a commercially available digital parallel processing system in implementing the maximum correlation technique for wind sensing using actual LIDAR data. Timing and numerical accuracy are benchmarked against a standard microprocessor implementation.
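    A toy sketch of the maximum correlation technique referenced above: given two backscatter profiles recorded a short time apart, the bin shift that maximizes their normalized correlation gives the radial displacement of aerosol structures, and hence a wind speed. The parameters and interface are hypothetical, not the benchmarked implementation:

```python
import numpy as np

def wind_speed_max_corr(profile_a, profile_b, bin_size_m, dt_s, max_shift=20):
    """Radial wind speed from two backscatter profiles taken dt_s apart.

    Slides one profile against the other and keeps the integer bin shift
    with the highest normalized correlation; the implied displacement of
    the aerosol structures over dt_s gives the speed.
    """
    a = np.asarray(profile_a, float)
    b = np.asarray(profile_b, float)
    a = a - a.mean()
    b = b - b.mean()
    best_shift, best_corr = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:                        # b displaced forward by s bins
            seg_a, seg_b = a[:len(a) - s], b[s:]
        else:                             # b displaced backward by -s bins
            seg_a, seg_b = a[-s:], b[:len(b) + s]
        norm = np.linalg.norm(seg_a) * np.linalg.norm(seg_b) + 1e-12
        c = np.dot(seg_a, seg_b) / norm
        if c > best_corr:
            best_corr, best_shift = c, s
    return best_shift * bin_size_m / dt_s
```

A production version would interpolate around the correlation peak for sub-bin shifts; the integer search above is only the core idea.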

  19. Optimizing Uas Image Acquisition and Geo-Registration for Precision Agriculture

    NASA Astrophysics Data System (ADS)

    Hearst, A. A.; Cherkauer, K. A.; Rainey, K. M.

    2014-12-01

    Unmanned Aircraft Systems (UASs) can acquire imagery of crop fields in various spectral bands, including the visible, near-infrared, and thermal portions of the spectrum. By combining techniques of computer vision, photogrammetry, and remote sensing, these images can be stitched into precise, geo-registered maps, which may have applications in precision agriculture and other industries. However, the utility of these maps will depend on their positional accuracy. Therefore, it is important to quantify positional accuracy and consider the tradeoffs between accuracy, field site setup, and the computational requirements for data processing and analysis. This will enable planning of data acquisition and processing to obtain the required accuracy for a given project. This study focuses on developing and evaluating methods for geo-registration of raw aerial frame photos acquired by a small fixed-wing UAS. This includes visual, multispectral, and thermal imagery at 3, 6, and 14 cm/pix resolutions, respectively. The study area is 10 hectares of soybean fields at the Agronomy Center for Research and Education (ACRE) at Purdue University. The dataset consists of imagery from 6 separate days of flights (surveys) and supporting ground measurements. The Direct Sensor Orientation (DiSO) and Integrated Sensor Orientation (InSO) methods for geo-registration are tested using 16 Ground Control Points (GCPs). Subsets of these GCPs are used to test for the effects of different numbers and spatial configurations of GCPs on positional accuracy. The horizontal and vertical Root Mean Squared Error (RMSE) is used as the primary metric of positional accuracy. Preliminary results from 1 of the 6 surveys show that the DiSO method (0 GCPs used) achieved an RMSE in the X, Y, and Z direction of 2.46 m, 1.04 m, and 1.91 m, respectively. InSO using 5 GCPs achieved an RMSE of 0.17 m, 0.13 m, and 0.44 m. InSO using 10 GCPs achieved an RMSE of 0.10 m, 0.09 m, and 0.12 m. 
Further analysis will identify
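    The RMSE metric used in this study is straightforward to compute from check-point residuals; a minimal sketch (function and argument names are illustrative, not from the study's toolchain):

```python
import numpy as np

def georef_rmse(predicted_xyz, surveyed_xyz):
    """Positional accuracy of a geo-registered mosaic as per-axis RMSE.

    predicted_xyz: N x 3 coordinates read from the mosaic at check points;
    surveyed_xyz: the corresponding ground-surveyed coordinates. Returns
    (rmse_x, rmse_y, rmse_z), the accuracy metric quoted in the abstract.
    """
    d = np.asarray(predicted_xyz, float) - np.asarray(surveyed_xyz, float)
    return tuple(np.sqrt(np.mean(d ** 2, axis=0)))
```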

  20. Software-Based Real-Time Acquisition and Processing of PET Detector Raw Data.

    PubMed

    Goldschmidt, Benjamin; Schug, David; Lerche, Christoph W; Salomon, André; Gebhardt, Pierre; Weissler, Bjoern; Wehner, Jakob; Dueppenbecker, Peter M; Kiessling, Fabian; Schulz, Volkmar

    2016-02-01

    In modern positron emission tomography (PET) readout architectures, the position and energy estimation of scintillation events (singles) and the detection of coincident events (coincidences) are typically carried out on highly integrated, programmable printed circuit boards. The implementation of advanced singles and coincidence processing (SCP) algorithms for these architectures is often limited by the strict constraints of hardware-based data processing. In this paper, we present a software-based data acquisition and processing architecture (DAPA) that offers a high degree of flexibility for advanced SCP algorithms through relaxed real-time constraints and an easily extendible data processing framework. The DAPA is designed to acquire detector raw data from independent (but synchronized) detector modules and process the data for singles and coincidences in real time using a center-of-gravity (COG)-based, a least-squares (LS)-based, or a maximum-likelihood (ML)-based crystal position and energy estimation approach (CPEEA). To test the DAPA, we adapted it to a preclinical PET detector that outputs detector raw data from 60 independent digital silicon photomultiplier (dSiPM)-based detector stacks and evaluated it with a [(18)F]-fluorodeoxyglucose-filled hot-rod phantom. The DAPA is highly reliable, with less than 0.1% of all detector raw data lost or corrupted. For high validation thresholds (37.1 ± 12.8 photons per pixel) of the dSiPM detector tiles, the DAPA is real-time capable up to 55 MBq for the COG-based CPEEA, up to 31 MBq for the LS-based CPEEA, and up to 28 MBq for the ML-based CPEEA. Compared to the COG-based CPEEA, the rods in the image reconstruction of the hot-rod phantom are only slightly better separable and less blurred for the LS- and ML-based CPEEA. While the coincidence time resolution (∼500 ps) and energy resolution (∼12.3%) are comparable for all three CPEEA, the system sensitivity is up to 2.5× higher for the LS- and ML-based CPEEA.
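    Of the three CPEEAs, the COG-based estimate is the simplest to sketch. A minimal illustration with a hypothetical interface, not the DAPA code:

```python
import numpy as np

def cog_position_energy(counts, x_pos, y_pos):
    """Center-of-gravity (COG) crystal position and energy estimate.

    counts: per-pixel photon counts for one scintillation event;
    x_pos, y_pos: pixel center coordinates. The energy is the summed
    count and the position the count-weighted mean -- the simplest form
    of the COG-based CPEEA named in the abstract.
    """
    counts = np.asarray(counts, float)
    energy = counts.sum()
    x = (counts * x_pos).sum() / energy
    y = (counts * y_pos).sum() / energy
    return x, y, energy
```

The LS- and ML-based approaches instead fit the measured light distribution against per-crystal reference responses, which is what buys the higher sensitivity reported above.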

  1. Combined optimization of image-gathering and image-processing systems for scene feature detection

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Arduini, Robert F.; Samms, Richard W.

    1987-01-01

    The relationship between the image-gathering and image-processing systems for minimum mean-squared-error estimation of scene characteristics is investigated. A stochastic optimization problem is formulated in which the objective is to determine a spatial characteristic of the scene rather than a feature of the already blurred, sampled, and noisy image data. An analytical solution for the optimal characteristic image processor is developed. The Wiener filter for the sampled-image case is obtained as a special case, in which the desired characteristic is scene restoration. Optimal edge detection is investigated using the Laplacian-of-Gaussian operator ∇²G as the desired characteristic, where G is a two-dimensional Gaussian distribution function. It is shown that the optimal edge detector compensates for the blurring introduced by the image-gathering optics and, notably, that it is not circularly symmetric. The lack of circular symmetry is largely due to the geometric effects of the sampling lattice used in image acquisition. The optimal image-gathering optical transfer function is also investigated, and the results of a sensitivity analysis are shown.
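    The desired characteristic, the Laplacian-of-Gaussian operator, can be sketched directly. Note that this plain sampled kernel is circularly symmetric and contains none of the blur and sampling-lattice compensation that makes the paper's optimal detector asymmetric; the kernel size and sigma below are hypothetical:

```python
import numpy as np

def log_kernel(size=9, sigma=1.5):
    """Sampled Laplacian-of-Gaussian kernel (the desired characteristic)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    g = np.exp(-r2 / (2 * sigma ** 2))
    log = (r2 - 2 * sigma ** 2) / sigma ** 4 * g
    return log - log.mean()            # zero mean: flat regions give zero

def convolve2d_same(image, kernel):
    """Naive same-size 2-D convolution with zero padding (no SciPy)."""
    image = np.asarray(image, float)
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    out = np.zeros(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel[::-1, ::-1])
    return out
```

Applied to an image, the response is near zero in flat regions and peaks on either side of intensity steps, which is the classical edge signature this paper generalizes.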

  2. Modality-specific processing precedes amodal linguistic processing during L2 sign language acquisition: A longitudinal study.

    PubMed

    Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D

    2016-02-01

    The present study tracked activation pattern differences in response to sign language processing by late hearing second language learners of American Sign Language. Learners were scanned before the start of their language courses. They were scanned again after their first semester of instruction and again after their second, for a total of 10 months of instruction. The study aimed to characterize the transition from modality-specific to modality-general processing throughout the acquisition of sign language. Results indicated that before the acquisition of sign language, neural substrates related to modality-specific processing were present. After approximately 45 h of instruction, the learners transitioned into processing signs on a phonological basis (e.g., supramarginal gyrus, putamen). After one more semester of input, learners transitioned once more to a lexico-semantic processing stage (e.g., left inferior frontal gyrus) at which language control mechanisms (e.g., left caudate, cingulate gyrus) were activated. During these transitional steps, right-hemispheric recruitment was observed, with increasing left-lateralization, a pattern similar to that of native signers and L2 learners of spoken language; however, specialization for sign language processing, with activation in the inferior parietal lobule (i.e., the angular gyrus), was observed even in these late learners. As such, the present study is the first to track L2 acquisition of sign language learners in order to characterize modality-independent and modality-specific mechanisms for bilingual language processing. PMID:26720258

  3. Reference radiochromic film dosimetry in kilovoltage photon beams during CBCT image acquisition

    SciTech Connect

    Tomic, Nada; Devic, Slobodan; DeBlois, Francois; Seuntjens, Jan

    2010-03-15

    Purpose: A common approach for dose assessment during cone beam computed tomography (CBCT) acquisition is to use thermoluminescent detectors for skin dose measurements (on patients or phantoms) or an ionization chamber (in phantoms) for body dose measurements. However, the benefits of a daily CBCT image acquisition, such as margin reduction in the planning target volume and improved image quality, must be weighed against the extra dose received during CBCT acquisitions. Methods: The authors describe a two-dimensional reference dosimetry technique for measuring dose from CBCT scans using the on-board imaging system on a Varian Clinac-iX linear accelerator that employs the XR-QA radiochromic film model, specifically designed for dose measurements at low photon energies. The CBCT dose measurements were performed for three different body regions (head and neck, pelvis, and thorax) using a humanoid Rando phantom. Results: The authors report on both surface dose and dose profile measurements during clinical CBCT procedures carried out on a humanoid Rando phantom. Our measurements show that the surface doses per CBCT scan can range anywhere between 0.1 and 4.7 cGy, with the lowest surface dose observed in the head and neck region, while the highest surface dose was observed for the Pelvis spot light CBCT protocol in the pelvic region, on the posterior side of the Rando phantom. The authors also present results of the uncertainty analysis of our XR-QA radiochromic film dosimetry system. Conclusions: The radiochromic film dosimetry protocol described in this work was used to perform dose measurements during CBCT acquisitions with a one-sigma dose measurement uncertainty of up to 3% for doses above 1 cGy. Our protocol is based on film exposure calibration in terms of "air kerma in air," which simplifies both the calibration procedure and reference dosimetry measurements.
The results from a full Monte Carlo investigation of the dose conversion of measured XR-QA film dose at the surface into

  4. Cardiovascular Magnetic Resonance in Cardiology Practice: A Concise Guide to Image Acquisition and Clinical Interpretation.

    PubMed

    Valbuena-López, Silvia; Hinojar, Rocío; Puntmann, Valentina O

    2016-02-01

    Cardiovascular magnetic resonance plays an increasingly important role in routine cardiology clinical practice. It is a versatile imaging modality that allows highly accurate, broad and in-depth assessment of cardiac function and structure and provides information on pertinent clinical questions in diseases such as ischemic heart disease, nonischemic cardiomyopathies, and heart failure, as well as allowing unique indications, such as the assessment and quantification of myocardial iron overload or infiltration. Increasing evidence for the role of cardiovascular magnetic resonance, together with the spread of knowledge and skill outside expert centers, has afforded greater access for patients and wider clinical experience. This review provides a snapshot of cardiovascular magnetic resonance in modern clinical practice by linking image acquisition and postprocessing with effective delivery of the clinical meaning. PMID:26778592

  5. Validation of a target acquisition model for active imager using perception experiments

    NASA Astrophysics Data System (ADS)

    Lapaz, Frédéric; Canevet, Loïc

    2007-10-01

    Active night vision systems based on laser diode emitters have now reached a technology level allowing military applications. In order to predict the performance of observers using such systems, we built an analytic model including sensor, atmosphere, visualization and eye effects. The perception task has been modelled using the Targeting Task Performance metric (TTP metric) developed by R. Vollmerhausen of the Night Vision and Electronic Sensors Directorate (NVESD). The sensor and atmosphere models have been validated separately. In order to validate the whole model, two identification tests were set up. The first set, submitted to trained observers, was made of hybrid images: the target-to-background contrast, the blur, and the noise were added to armoured vehicle signatures in accordance with the sensor and atmosphere models. The second set of images was made with the same targets, sensed by a real active sensor during field trials. Images were recorded showing different vehicles, at different ranges and orientations, under different illumination and acquisition configurations. Indeed, this set of real images was built with three different types of gating: wide illumination, illumination of the background, and illumination of the target. Analysis of the perception experiment results showed good concordance between the two sets of images. The calculation of an identification criterion, related to this set of vehicles in the near infrared, gave the same results in both cases. The impact of gating on observers' performance was also evaluated.

  6. SPECT data acquisition and image reconstruction in a stationary small animal SPECT/MRI system

    NASA Astrophysics Data System (ADS)

    Xu, Jingyan; Chen, Si; Yu, Jianhua; Meier, Dirk; Wagenaar, Douglas J.; Patt, Bradley E.; Tsui, Benjamin M. W.

    2010-04-01

    The goal of the study was to investigate data acquisition strategies and image reconstruction methods for a stationary SPECT insert that can operate inside an MRI scanner with a 12 cm bore diameter for simultaneous SPECT/MRI imaging of small animals. The SPECT insert consists of 3 octagonal rings of 8 MR-compatible CZT detectors per ring surrounding a multi-pinhole (MPH) collimator sleeve. Each pinhole is constructed to project the field-of-view (FOV) to one CZT detector. All 24 pinholes are focused to a cylindrical FOV of 25 mm in diameter and 34 mm in length. The data acquisition strategies we evaluated were optional collimator rotations to improve tomographic sampling; and the image reconstruction methods were iterative ML-EM with and without compensation for the geometric response function (GRF) of the MPH collimator. For this purpose, we developed an analytic simulator that calculates the system matrix with the GRF models of the MPH collimator. The simulator was used to generate projection data of a digital rod phantom with pinhole aperture sizes of 1 mm and 2 mm and with different collimator rotation patterns. Iterative ML-EM reconstruction with and without GRF compensation were used to reconstruct the projection data from the central ring of 8 detectors only, and from all 24 detectors. Our results indicated that without GRF compensation and at the default design of 24 projection views, the reconstructed images had significant artifacts. Accurate GRF compensation substantially improved the reconstructed image resolution and reduced image artifacts. With accurate GRF compensation, useful reconstructed images can be obtained using 24 projection views only. This last finding potentially enables dynamic SPECT (and/or MRI) studies in small animals, one of many possible application areas of the SPECT/MRI system. Further research efforts are warranted including experimentally measuring the system matrix for improved geometrical accuracy, incorporating the co
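    The iterative ML-EM reconstruction evaluated in this study can be sketched generically; GRF compensation amounts to folding the geometric response model into the system matrix A. A minimal, hypothetical form:

```python
import numpy as np

def ml_em(system_matrix, projections, n_iter=50):
    """Plain ML-EM iteration for emission tomography.

    system_matrix: (n_bins, n_voxels) matrix A, where A[i, j] is the
    probability that a photon emitted in voxel j is detected in bin i
    (with or without the geometric response function folded in);
    projections: measured counts per detector bin.
    """
    A = np.asarray(system_matrix, float)
    y = np.asarray(projections, float)
    sens = A.sum(axis=0)                    # per-voxel sensitivity
    x = np.ones(A.shape[1])                 # uniform initial estimate
    for _ in range(n_iter):
        expected = A @ x                    # forward-project current image
        ratio = np.where(expected > 0, y / expected, 0.0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)   # multiplicative update
    return x
```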

  7. Interactive image processing in swallowing research

    NASA Astrophysics Data System (ADS)

    Dengel, Gail A.; Robbins, JoAnne; Rosenbek, John C.

    1991-06-01

    Dynamic radiographic imaging of the mouth, larynx, pharynx, and esophagus during swallowing is used commonly in clinical diagnosis, treatment and research. Images are recorded on videotape and interpreted conventionally by visual perceptual methods, limited to specific measures in the time domain and binary decisions about the presence or absence of events. An image processing system using personal computer hardware and original software has been developed to facilitate measurement of temporal, spatial and temporospatial parameters. Digitized image sequences derived from videotape are manipulated and analyzed interactively. Animation is used to preserve context and increase efficiency of measurement. Filtering and enhancement functions heighten image clarity and contrast, improving visibility of details which are not apparent on videotape. Distortion effects and extraneous head and body motions are removed prior to analysis, and spatial scales are controlled to permit comparison among subjects. Effects of image processing on intra- and interjudge reliability and research applications are discussed.

  8. Design of multi-mode compatible image acquisition system for HD area array CCD

    NASA Astrophysics Data System (ADS)

    Wang, Chen; Sui, Xiubao

    2014-11-01

    In line with the current trends in video surveillance toward digitization and high definition, a multi-mode compatible image acquisition system for an HD area-array CCD is designed. The hardware and software designs of the color video capture system for the HD area-array CCD KAI-02150 from the Truesense Imaging company are analyzed, and the structure parameters of the HD area-array CCD and the color video gathering principle of the acquisition system are introduced. Then, the CCD control sequence and the timing logic of the whole capture system are realized. The video-signal noise (kTC noise and 1/f noise) is filtered out using the Correlated Double Sampling (CDS) technique to enhance the signal-to-noise ratio of the system. Compatible designs in both software and hardware for two other image sensors of the same series, the KAI-04050 and KAI-08050, are put forward; the effective pixel counts of these two HD image sensors are four million and eight million, respectively. A Field Programmable Gate Array (FPGA) is adopted as the key controller of the system to perform a top-down modular design, which implements the hardware design in software and improves development efficiency. Finally, the required timing-sequence driving is simulated accurately on the Quartus II 12.1 development platform using VHDL. The result of the simulation indicates that the driving circuit is characterized by a simple framework, low power consumption, and strong anti-interference ability, meeting current demands for miniaturization and high definition.
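    The CDS step mentioned above can be sketched in a few lines: each pixel is sampled once at reset and once after integration, and the difference cancels the kTC reset noise common to both samples (hypothetical numbers, not the FPGA implementation):

```python
import numpy as np

def correlated_double_sample(reset_level, signal_level):
    """Correlated Double Sampling: subtract the reset (reference) sample
    from the signal sample for each pixel. The kTC reset noise is common
    to both samples and cancels, leaving the photo-generated signal."""
    return np.asarray(signal_level, float) - np.asarray(reset_level, float)

# Illustration: identical kTC noise on both samples cancels exactly.
rng = np.random.default_rng(0)
ktc = rng.normal(0.0, 5.0, size=1000)    # reset noise, frozen per pixel
reset = 100.0 + ktc                       # reference sample at reset
signal = 100.0 + ktc + 42.0               # same noise + true video level
video = correlated_double_sample(reset, signal)
```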

  9. Optimal Short-Time Acquisition Schemes in High Angular Resolution Diffusion-Weighted Imaging

    PubMed Central

    Prčkovska, V.; Achterberg, H. C.; Bastiani, M.; Pullens, P.; Balmashnova, E.; ter Haar Romeny, B. M.; Vilanova, A.; Roebroeck, A.

    2013-01-01

    This work investigates the possibilities of applying high-angular-resolution-diffusion-imaging- (HARDI-) based methods in a clinical setting by investigating the performance of non-Gaussian diffusion probability density function (PDF) estimation for a range of b-values and diffusion gradient direction tables. It does so at realistic SNR levels achievable in limited time on a high-performance 3T system for the whole human brain in vivo. We use both computational simulations and in vivo brain scans to quantify the angular resolution of two selected reconstruction methods: Q-ball imaging and the diffusion orientation transform (DOT). We propose a new analytical solution to the orientation distribution function (ODF) derived from the DOT. Both techniques are analytical decomposition approaches that require identical acquisition and modest postprocessing times and, given the proposed modifications of the DOT, can be analyzed in a similar fashion. We find that an optimal HARDI protocol given a stringent time constraint (<10 min) combines a moderate b-value (around 2000 s/mm²) with a relatively high number of acquired directions (>48). Our findings generalize to other methods and additional improvements in MR acquisition techniques. PMID:23554808

  10. Earth Observation Services (Image Processing Software)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    San Diego State University and Environmental Systems Research Institute, with other agencies, have applied satellite imaging and image processing techniques to geographic information systems (GIS) updating. The resulting images display land use and are used by a regional planning agency for applications like mapping vegetation distribution and preserving wildlife habitats. The EOCAP program provides government co-funding to encourage private investment in, and to broaden the use of NASA-developed technology for analyzing information about Earth and ocean resources.

  11. Nonlinear Optical Image Processing with Bacteriorhodopsin Films

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Deiss, Ron (Technical Monitor)

    1994-01-01

    The transmission properties of some bacteriorhodopsin film spatial light modulators are uniquely suited to allow nonlinear optical image processing operations to be applied to images with multiplicative noise characteristics. A logarithmic amplitude transmission feature of the film permits the conversion of multiplicative noise to additive noise, which may then be linearly filtered out in the Fourier plane of the transformed image. The bacteriorhodopsin film displays the logarithmic amplitude response for write beam intensities spanning a dynamic range greater than 2.0 orders of magnitude. We present experimental results demonstrating the principle and capability for several different image and noise situations, including deterministic noise and speckle. Using the bacteriorhodopsin film, we successfully filter out image noise from the transformed image that cannot be removed from the original image.
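    The film's logarithmic response can be mimicked in software to show the principle: a log transform converts multiplicative noise to additive noise, which a linear Fourier-plane filter then removes. The sketch below assumes a hypothetical periodic multiplicative noise pattern; it is a software analogue, not the optical experiment:

```python
import numpy as np

def homomorphic_denoise(image, noise_period=4):
    """Log transform (the film's role), linear Fourier-plane notch filter,
    then exponentiate back. The image must be strictly positive.

    Suppresses the spatial frequency of a periodic multiplicative
    pattern along the horizontal axis (hypothetical noise model).
    """
    log_img = np.log(image)                 # multiplicative -> additive
    spec = np.fft.fft2(log_img)
    freqs = np.fft.fftfreq(image.shape[1])
    # Notch out the noise's fundamental frequency along axis 1:
    kill = np.isclose(np.abs(freqs), 1.0 / noise_period)
    spec[:, kill] = 0.0
    return np.exp(np.fft.ifft2(spec).real)  # back to intensity domain
```

Residual harmonics of the log-transformed pattern remain, so the recovery is approximate, which mirrors the experimental situation where filtering is imperfect.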

  12. IECON '87: Signal acquisition and processing; Proceedings of the 1987 International Conference on Industrial Electronics, Control, and Instrumentation, Cambridge, MA, Nov. 3, 4, 1987

    NASA Astrophysics Data System (ADS)

    Niederjohn, Russell J.

    1987-01-01

    Theoretical and applications aspects of signal processing are examined in reviews and reports. Topics discussed include speech processing methods, algorithms, and architectures; signal-processing applications in motor and power control; digital signal processing; signal acquisition and analysis; and processing algorithms and applications. Consideration is given to digital coding of speech algorithms, an algorithm for continuous-time processes in discrete-time measurement, quantization noise and filtering schemes for digital control systems, distributed data acquisition for biomechanics research, a microcomputer-based differential distance and velocity measurement system, velocity observations from discrete position encoders, a real-time hardware image preprocessor, and recognition of partially occluded objects by a knowledge-based system.

  13. Accelerated image processing on FPGAs.

    PubMed

    Draper, Bruce A; Beveridge, J Ross; Böhm, A P Willem; Ross, Charles; Chawathe, Monica

    2003-01-01

    The Cameron project has developed a language called single assignment C (SA-C), and a compiler for mapping image-based applications written in SA-C to field programmable gate arrays (FPGAs). The paper tests this technology by implementing several applications in SA-C and compiling them to an Annapolis Microsystems (AMS) WildStar board with a Xilinx XCV2000E FPGA. The performance of these applications on the FPGA is compared to the performance of the same applications written in assembly code or C for an 800 MHz Pentium III. (Although no comparison across processors is perfect, these chips were the first of their respective classes fabricated at 0.18 microns, and are therefore of comparable ages.) We find that applications written in SA-C and compiled to FPGAs are between 8 and 800 times faster than the equivalent program run on the Pentium III. PMID:18244709

  14. Digital Image Processing in Private Industry.

    ERIC Educational Resources Information Center

    Moore, Connie

    1986-01-01

    Examines various types of private industry optical disk installations in terms of business requirements for digital image systems in five areas: records management; transaction processing; engineering/manufacturing; information distribution; and office automation. Approaches for implementing image systems are addressed as well as key success…

  15. Multibeam Sonar Backscatter Data Acquisition and Processing: Guidelines and Recommendations from the GEOHAB Backscatter Working Group

    NASA Astrophysics Data System (ADS)

    Heffron, E.; Lurton, X.; Lamarche, G.; Brown, C.; Lucieer, V.; Rice, G.; Schimel, A.; Weber, T.

    2015-12-01

    Backscatter data acquired with multibeam sonars are now commonly used for the remote geological interpretation of the seabed. The system hardware, software, and processing methods and tools have grown in number and improved over the years, yet many issues linger: there are no standard procedures for acquisition, calibration is poor or absent, and processing methods are only partially understood and documented. A workshop organized at the 2013 annual meeting of GeoHab (a community of geoscientists and biologists working on marine habitat mapping) was dedicated to seafloor backscatter data from multibeam sonars and concluded that there was an overwhelming need for better coherence and agreement on the acquisition, processing, and interpretation of these data. The GeoHab Backscatter Working Group (BSWG) was subsequently created to document and synthesize the state of the art in the sensors and techniques available today and to propose best-practice methods for the acquisition and processing of backscatter data. Two years later, the resulting document, "Backscatter measurements by seafloor-mapping sonars: Guidelines and Recommendations" [1], was completed. The document provides: an introduction to backscatter measurements by seafloor-mapping sonars; a background on the physical principles of sonar backscatter; a discussion of users' needs from a wide spectrum of community end-users; a review of backscatter measurement; an analysis of best practices in data acquisition; a review of data-processing principles with details on present software implementations; and, finally, a synthesis and key recommendations. This presentation reviews the BSWG mandate and structure and the development of the document. It details the contents of the various chapters, the recommendations to sonar manufacturers, operators, data-processing software developers, and end-users, and the implications for the marine geology community.
[1] Downloadable at https://www.niwa.co.nz/coasts-and-oceans/research-projects/backscatter-measurement-guidelines

  16. Checking Fits With Digital Image Processing

    NASA Technical Reports Server (NTRS)

    Davis, R. M.; Geaslen, W. D.

    1988-01-01

    Computer-aided video inspection of mechanical and electrical connectors feasible. Report discusses work done on digital image processing for computer-aided interface verification (CAIV). Two kinds of components examined: mechanical mating flange and electrical plug.

  17. Recent developments in digital image processing at the Image Processing Laboratory of JPL.

    NASA Technical Reports Server (NTRS)

    O'Handley, D. A.

    1973-01-01

    Review of some of the computer-aided digital image processing techniques recently developed. Special attention is given to mapping and mosaicking techniques and to preliminary developments in range determination from stereo image pairs. The discussed image processing utilization areas include space, biomedical, and robotic applications.

  18. Command Line Image Processing System (CLIPS)

    NASA Astrophysics Data System (ADS)

    Fleagle, S. R.; Meyers, G. L.; Kulinski, R. G.

    1985-06-01

    An interactive image processing language (CLIPS) has been developed for use in an image processing environment. CLIPS uses a simple syntax with extensive on-line help that allows even the most naive user to perform complex image processing tasks. In addition, CLIPS functions as an interpretive language complete with data structures and program control statements. CLIPS statements fall into one of three categories: command, control, and utility statements. Command statements are expressions comprised of intrinsic functions and/or arithmetic operators which act directly on image or user-defined data. Examples of CLIPS intrinsic functions are ROTATE, FILTER, and EXPONENT. Control statements allow a structured programming style through the use of statements such as DO WHILE and IF-THEN-ELSE. Utility statements such as DEFINE, READ, and WRITE support I/O and user-defined data structures. Since CLIPS uses a table-driven parser, it is easily adapted to any environment. New commands may be added to CLIPS by writing the procedure in a high-level language such as Pascal or FORTRAN and inserting the syntax for that command into the table. CLIPS was designed by incorporating most imaging operations into the language as intrinsic functions, and it allows the user to generate new procedures easily with these powerful functions, either interactively or off line using a text editor. The fact that CLIPS can be used to generate complex procedures quickly or perform basic image processing functions interactively makes it a valuable tool in any image processing environment.
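    The table-driven design can be sketched as follows. Command names mirror the intrinsic functions mentioned above, but the handlers and argument conventions are illustrative assumptions, not the actual CLIPS implementation (which was written in a high-level language such as Pascal or FORTRAN):

```python
import numpy as np

# The "table": command name -> (handler, expected argument count).
# Handlers are hypothetical stand-ins for CLIPS intrinsic functions.
def rotate(img, quarter_turns):
    return np.rot90(img, int(quarter_turns))

def exponent(img, power):
    return img ** float(power)

COMMANDS = {
    "ROTATE":   (rotate, 1),
    "EXPONENT": (exponent, 1),
}

def execute(statement, img):
    """Parse one command statement against the table and apply it."""
    name, *args = statement.split()
    handler, nargs = COMMANDS[name]
    if len(args) != nargs:
        raise ValueError(f"{name} expects {nargs} argument(s)")
    return handler(img, *args)
```

Adding a command then means adding one table entry, not touching the parser.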

  19. 48 CFR 436.602-5 - Short selection process for contracts not to exceed the simplified acquisition threshold.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 4 2012-10-01 2012-10-01 false Short selection process... Acquisition Regulations System DEPARTMENT OF AGRICULTURE SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Architect-Engineer Service 436.602-5 Short selection process for contracts...

  20. 48 CFR 1336.602-5 - Short selection process for contracts not to exceed the simplified acquisition threshold.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Architect-Engineer Services 1336.602-5 Short selection process... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Short selection process for contracts not to exceed the simplified acquisition threshold. 1336.602-5 Section...

  1. 48 CFR 736.602-5 - Short selection process for procurements not to exceed the simplified acquisition threshold.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Short selection process... CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Architect-Engineer Services 736.602-5 Short selection process for procurements not to exceed the simplified acquisition threshold. References to FAR...

  2. 48 CFR 836.602-5 - Short selection process for contracts not to exceed the simplified acquisition threshold.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 5 2014-10-01 2014-10-01 false Short selection process... Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Architect-Engineer Services 836.602-5 Short selection process...

  3. 48 CFR 436.602-5 - Short selection process for contracts not to exceed the simplified acquisition threshold.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 4 2013-10-01 2013-10-01 false Short selection process... Acquisition Regulations System DEPARTMENT OF AGRICULTURE SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Architect-Engineer Service 436.602-5 Short selection process for contracts...

  4. 48 CFR 1336.602-5 - Short selection process for contracts not to exceed the simplified acquisition threshold.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Architect-Engineer Services 1336.602-5 Short selection process... 48 Federal Acquisition Regulations System 5 2014-10-01 2014-10-01 false Short selection process for contracts not to exceed the simplified acquisition threshold. 1336.602-5 Section...

  5. 48 CFR 736.602-5 - Short selection process for procurements not to exceed the simplified acquisition threshold.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 5 2014-10-01 2014-10-01 false Short selection process... CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Architect-Engineer Services 736.602-5 Short selection process for procurements not to exceed the simplified acquisition threshold. References to FAR...

  6. 48 CFR 836.602-5 - Short selection process for contracts not to exceed the simplified acquisition threshold.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Short selection process for contracts not to exceed the simplified acquisition threshold. 836.602-5 Section 836.602-5 Federal... AND ARCHITECT-ENGINEER CONTRACTS Architect-Engineer Services 836.602-5 Short selection process...

  7. 48 CFR 436.602-5 - Short selection process for contracts not to exceed the simplified acquisition threshold.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 4 2011-10-01 2011-10-01 false Short selection process for contracts not to exceed the simplified acquisition threshold. 436.602-5 Section 436.602-5 Federal... ARCHITECT-ENGINEER CONTRACTS Architect-Engineer Service 436.602-5 Short selection process for contracts...

  8. 48 CFR 436.602-5 - Short selection process for contracts not to exceed the simplified acquisition threshold.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Short selection process for contracts not to exceed the simplified acquisition threshold. 436.602-5 Section 436.602-5 Federal... ARCHITECT-ENGINEER CONTRACTS Architect-Engineer Service 436.602-5 Short selection process for contracts...

  9. CAD/CAM-coupled image processing systems

    NASA Astrophysics Data System (ADS)

    Ahlers, Rolf-Juergen; Rauh, W.

    1990-08-01

    Image processing systems have found wide application in industry. For most computer-integrated manufacturing facilities it is necessary to adapt these systems so that they can automate the interaction with, and the integration of, CAD and CAM systems. In this paper new approaches are described that make use of the coupling of CAD and image processing, as well as the automatic generation of programmes for the machining of products.

  10. Color image processing for date quality evaluation

    NASA Astrophysics Data System (ADS)

    Lee, Dah Jye; Archibald, James K.

    2010-01-01

    Many agricultural non-contact visual inspection applications use color image processing techniques because color is often a good indicator of product quality. Color evaluation is an essential step in the processing and inventory control of fruits and vegetables that directly affects profitability. Most color spaces such as RGB and HSV represent colors with three-dimensional data, which makes color image processing a challenging task. Since most agricultural applications only require analysis on a predefined set or range of colors, mapping these relevant colors to a small number of indexes allows simple and efficient color image processing for quality evaluation. This paper presents a simple but efficient color mapping and image processing technique that is designed specifically for real-time quality evaluation of Medjool dates. In contrast with more complex color image processing techniques, the proposed color mapping method makes it easy for a human operator to specify and adjust color-preference settings for different color groups representing distinct quality levels. Using this color mapping technique, the color image is first converted to a color map in which a single color index represents the color value of each pixel. Fruit maturity level is evaluated based on these color indices. A skin lamination threshold is then determined based on the fruit surface characteristics. This adaptive threshold is used to detect delaminated fruit skin and hence determine the fruit quality. This robust color grading technique has been applied to real-time Medjool date grading.
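    A minimal sketch of such a color-index mapping follows. The reference colors and the maturity rule are hypothetical placeholders for the operator-defined settings, not the paper's calibrated values:

```python
import numpy as np

# Hypothetical operator-defined reference colors (RGB), one per quality group.
REFERENCE_COLORS = np.array([
    [180, 60, 40],   # index 0: light red (illustrative "less mature")
    [120, 40, 30],   # index 1: dark red
    [ 60, 30, 20],   # index 2: brown (illustrative "fully mature")
], dtype=float)

def color_map(rgb_image):
    """Map an H x W x 3 image to a 2-D map of nearest-reference color indices."""
    pixels = rgb_image.reshape(-1, 3).astype(float)
    # squared Euclidean distance from every pixel to every reference color
    d2 = ((pixels[:, None, :] - REFERENCE_COLORS[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1).reshape(rgb_image.shape[:2])

def maturity_score(index_map):
    """Fraction of pixels mapped to the 'fully mature' index (toy grading rule)."""
    return float((index_map == 2).mean())
```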

  11. Image acquisition, geometric correction and display of images from a 2×2 x-ray detector array based on Electron Multiplying Charge Coupled Device (EMCCD) technology

    PubMed Central

    Vasan, S.N Swetadri; Sharma, P.; Ionita, Ciprian N.; Titus, A.H.; Cartwright, A.N.; Bednarek, D.R; Rudin, S.

    2013-01-01

    A high resolution (up to 11.2 lp/mm) x-ray detector with a large field of view (8.5 cm × 8.5 cm) has been developed. The detector is a 2 × 2 array of individual imaging modules based on EMCCD technology. Each module outputs a frame of size 1088 × 1037 pixels, each 12 bits. The frames from the 4 modules are acquired into the processing computer using one of two techniques. The first uses 2 CameraLink communication channels, each carrying information from two modules; the second uses an application-specific custom integrated circuit, the Multiple Module Multiplexer Integrated Circuit (MMMIC), three of which are used to multiplex the data from the 4 modules into one CameraLink channel. Once the data is acquired using either of these techniques, it is decoded in the graphics processing unit (GPU) to form one single frame of size 2176 × 2074 pixels, each 16 bits. Each imaging module uses a fiber optic taper coupled to the EMCCD sensor. To correct for mechanical misalignment between the sensors and the fiber optic tapers and produce a single seamless image, the images in each module may be rotated and translated slightly in the x-y plane with respect to each other. To evaluate the detector acquisition and correction techniques, an aneurysm model was placed over an anthropomorphic head phantom and a coil was guided into the aneurysm under fluoroscopic guidance using the detector array. Image sequences before and after correction are presented which show near-seamless boundary matching and are well suited for fluoroscopic imaging. PMID:24353388
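    The per-module rotation-and-translation correction can be sketched as follows. The `assemble` function, angles, and offsets are illustrative assumptions, not the authors' GPU implementation:

```python
import numpy as np
from scipy.ndimage import rotate, shift

def assemble(modules, angles_deg, offsets, tile=1037):
    """Stitch 4 module images (2x2 raster order) into one frame.

    Each module is rotated and translated slightly to compensate for
    mechanical misalignment before being placed into its quadrant.
    """
    out = np.zeros((2 * tile, 2 * tile))
    for k, (img, ang, (dy, dx)) in enumerate(zip(modules, angles_deg, offsets)):
        corrected = shift(rotate(img, ang, reshape=False, order=1),
                          (dy, dx), order=1)
        r, c = divmod(k, 2)                    # quadrant of this module
        out[r * tile:(r + 1) * tile,
            c * tile:(c + 1) * tile] = corrected[:tile, :tile]
    return out
```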

  12. DIII-D Thomson Scattering Diagnostic Data Acquisition, Processing and Analysis Software

    SciTech Connect

    Middaugh, K.R.; Bray, B.D.; Hsieh, C.L.; McHarg, B.B., Jr.; Penaflor, B.G.

    1999-06-01

    One of the diagnostic systems critical to the success of the DIII-D tokamak experiment is the Thomson scattering diagnostic. This diagnostic is unique in that it measures local electron temperature and density: (1) at multiple locations within the tokamak plasma; and (2) at different times throughout the plasma duration. Thomson ''raw'' data are digitized signals of scattered light, measured at different times and locations, from the laser beam paths fired into the plasma. Real-time acquisition of this data is performed by specialized hardware. Once obtained, the raw data are processed into meaningful temperature and density values which can be analyzed for measurement quality. This paper will provide an overview of the entire Thomson scattering diagnostic software and will focus on the data acquisition, processing, and analysis software implementation. The software falls into three general categories: (1) Set-up and Control: Initializes and controls all Thomson hardware and software, synchronizes with other DIII-D computers, and invokes other Thomson software as appropriate. (2) Data Acquisition and Processing: Obtains raw measured data from memory and processes it into temperature and density values. (3) Analysis: Provides a graphical user interface in which to perform analysis and sophisticated plotting of analysis parameters.

  13. Image processing technique based on image understanding architecture

    NASA Astrophysics Data System (ADS)

    Kuvychko, Igor

    2000-12-01

    The effectiveness of image applications depends directly on their ability to resolve ambiguity and uncertainty in real images. That requires tight integration of low-level image processing with high-level knowledge-based reasoning, which is the solution of the image understanding problem. This article presents a generic computational framework necessary for the solution of the image understanding problem: the Spatial Turing Machine. Instead of a tape of symbols, it works with hierarchical networks dually represented as discrete and continuous structures. The dual representation provides a natural transformation of continuous image information into discrete structures, making it available for analysis. Such structures are data and algorithms at the same time and are able to perform graph and diagrammatic operations, which are the basis of intelligence. They can create derivative structures that play the role of context, or 'measurement device,' giving the ability to analyze and to run top-down algorithms. Symbols naturally emerge there, and symbolic operations work in combination with new simplified methods of computational intelligence. That makes images and scenes self-describing and provides flexible ways of resolving uncertainty. Classification of images truly invariant to any transformation could be done via matching their derivative structures. The proposed architecture does not require supercomputers, opening the way to new image technologies.

  14. Nanosecond image processing using stimulated photon echoes.

    PubMed

    Xu, E Y; Kröll, S; Huestis, D L; Kachru, R; Kim, M K

    1990-05-15

    Processing of two-dimensional images on a nanosecond time scale is demonstrated using the stimulated photon echoes in a rare-earth-doped crystal (0.1 at. % Pr(3+):LaF(3)). Two spatially encoded laser pulses (pictures) resonant with the (3)P(0)-(3)H(4) transition of Pr(3+) were stored by focusing the image pulses sequentially into the Pr(3+):LaF(3) crystal. The stored information is retrieved and processed by a third read pulse, generating the echo that is the spatial convolution or correlation of the input images. Application of this scheme to high-speed pattern recognition is discussed. PMID:19768008
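    Digitally, the convolution and correlation that the echo produces correspond to products in the Fourier plane. The sketch below is a numerical analogue only, not the optical echo system itself; function names are assumptions:

```python
import numpy as np

def fourier_correlate(a, b):
    """Circular cross-correlation of two equal-size images via FFT."""
    return np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real

def fourier_convolve(a, b):
    """Circular convolution of two equal-size images via FFT."""
    return np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)).real
```

For pattern recognition, the correlation peak location indicates where one image's pattern best matches the other.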

  15. New approach for underwater imaging and processing

    NASA Astrophysics Data System (ADS)

    Wen, Yanan; Tian, Weijian; Zheng, Bing; Zhou, Guozun; Dong, Hui; Wu, Qiong

    2014-05-01

    Due to the absorptive and scattering nature of water, the characteristics of underwater images differ from those of images in air: underwater images suffer from poor visibility and noise. Obtaining a clear original image and processing that image are two important problems to be solved in underwater clear-vision research. In this paper a new approach is presented to solve these problems. First, an inhomogeneous illumination method is developed to obtain a clear original image. A normal-illumination imaging system and an inhomogeneous-illumination imaging system were used to capture images at the same distance; the results show that the contrast and definition of the processed image are greatly improved by the inhomogeneous illumination method. Second, based on the theory of photon transport in water and the particular requirements of underwater target detection, the characteristics of laser scattering on underwater target surfaces and the spatial and temporal characteristics of the oceanic optical channel have been studied. Using Monte Carlo simulation, we studied how water-quality and other system parameters affect light transmitted through water in the spatial and temporal domains, providing theoretical support for enhancing the SNR and the operational distance.
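    A toy Monte Carlo in the spirit of the simulation described above: each photon travels an exponentially distributed free path and is either absorbed or scattered at each interaction. The coefficients and the forward-only geometry are illustrative simplifications, not measured water properties:

```python
import numpy as np

rng = np.random.default_rng(0)
absorb, scatter = 0.05, 0.15      # per-metre coefficients (hypothetical)
attenuation = absorb + scatter
n_photons, target_range = 10_000, 10.0

survived = 0
for _ in range(n_photons):
    x = 0.0
    while True:
        x += rng.exponential(1.0 / attenuation)       # free path length
        if x >= target_range:
            survived += 1
            break
        if rng.uniform() < absorb / attenuation:      # interaction is an absorption
            break
        # otherwise scattered; forward-only toy geometry, so keep going

fraction = survived / n_photons   # expected near exp(-absorb * target_range)
```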

  16. Distributed real time data processing architecture for the TJ-II data acquisition system

    NASA Astrophysics Data System (ADS)

    Ruiz, M.; Barrera, E.; López, S.; Machón, D.; Vega, J.; Sánchez, E.

    2004-10-01

    This article describes the performance of a new model of architecture that has been developed for the TJ-II data acquisition system in order to increase its real-time data processing capabilities. The current model consists of several PXI (PCI eXtensions for Instrumentation) standard chassis, each with various digitizers. In this architecture, the data processing capability is restricted to the PXI controller's own performance, and the controller must share its CPU resources between the data processing and data acquisition tasks. In the new model, a distributed data processing architecture has been developed. The solution adds one or more processing cards to each PXI chassis. This way it is possible to plan how to distribute the processing of all acquired signals among the processing cards and the available resources of the PXI controller. This model allows scalability of the system: more or fewer processing cards can be added based on the requirements of the system. The processing algorithms are implemented in LabVIEW (from National Instruments), providing efficiency and time-saving application development when compared with other efficient solutions.

  17. Image processing via ultrasonics - Status and promise

    NASA Technical Reports Server (NTRS)

    Kornreich, P. G.; Kowel, S. T.; Mahapatra, A.; Nouhi, A.

    1979-01-01

    Acousto-electric devices for electronic imaging of light are discussed. These devices are more versatile than line scan imaging devices in current use. They have the capability of presenting the image information in a variety of modes. The image can be read out in the conventional line scan mode. It can be read out in the form of the Fourier, Hadamard, or other transform. One can take the transform along one direction of the image and line scan in the other direction, or perform other combinations of image processing functions. This is accomplished by applying the appropriate electrical input signals to the device. Since the electrical output signal of these devices can be detected in a synchronous mode, substantial noise reduction is possible.
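    As a digital illustration of the mixed readout mode mentioned above (transform along one axis, line scan along the other), a Hadamard transform can be applied to the columns of an image. The functions below are a numerical sketch only; the devices perform this operation in hardware:

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction n x n Hadamard matrix (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def hadamard_readout(image):
    """Transform along columns while line-scanning along rows."""
    return hadamard(image.shape[0]) @ image
```

Because H H^T = n I, the original image is recovered exactly from the transformed readout.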

  18. Image-processing with augmented reality (AR)

    NASA Astrophysics Data System (ADS)

    Babaei, Hossein R.; Mohurutshe, Pagiel L.; Habibi Lashkari, Arash

    2013-03-01

    In this project, the aim is to discuss and articulate the intent to create an image-based Android Application. The basis of this study is on real-time image detection and processing. It's a new convenient measure that allows users to gain information on imagery right on the spot. Past studies have revealed attempts to create image based applications but have only gone up to crating image finders that only work with images that are already stored within some form of database. Android platform is rapidly spreading around the world and provides by far the most interactive and technical platform for smart-phones. This is why it was important to base the study and research on it. Augmented Reality is this allows the user to maipulate the data and can add enhanced features (video, GPS tags) to the image taken.

  19. A decoupled coil detector array for fast image acquisition in magnetic resonance imaging.

    PubMed

    Kwiat, D; Einav, S; Navon, G

    1991-01-01

    A method for magnetic resonance imaging (MRI) is investigated here, whereby an object is put under a homogeneous magnetic field, and the image is obtained by applying inverse source procedures to the data collected in an array of coil detectors surrounding the object. The induced current in each coil due to the precession of the magnetic dipole in each voxel depends on the characteristics of both the magnetic dipole frequency and strength, together with its distance from the coil, the coil direction in space, and the electrical properties of the coils. By calculating the induced current signals over an array of coil detectors, a relationship is established between the set of signals and the structure of the body under investigation. This linear relation can then be represented in matrix notation, and inversion of the matrix will produce an image of the body. Important problems which must be considered in the proposed method are the signal-to-noise ratio (SNR) and coupling between adjacent coils. Solutions to these problems will provide a new method for obtaining an instantaneous image by NMR, with no need for gradient switching for encoding. A general algorithm for decoupling of the coils is presented, and fast sampling of the signal, instead of filtering, is used in order to reduce both noise and numerical roundoff errors at the same time. Sensitivity considerations are made with respect to the number of coils required and its relation to coil radius and SNR. A computer simulation demonstrates the feasibility of this new modality. Based on the solutions presented here for the problems involved in the use of a large number of coils for a simultaneous recording of the signal, an improved method of multicoil recording is suggested, whereby it is combined with the conventional zeugmatographic method with read and phase gradients, to result in a novel method of magnetic resonance imaging. In the combined method, there are no phase-encoding gradients. Only a
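    A toy numerical version of the linear inverse-source relation follows: coil signals are a known linear combination of voxel magnetizations, s = A m, so the image is recovered by inverting the sensitivity matrix. The dimensions and the inverse-square sensitivity model are illustrative assumptions, not the authors' physical model:

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels = n_coils = 6

# coils on a circle of radius 2 around voxels scattered in the unit square
voxel_pos = rng.uniform(0.0, 1.0, (n_voxels, 2))
angles = np.linspace(0.0, 2.0 * np.pi, n_coils, endpoint=False)
coil_pos = 2.0 * np.c_[np.cos(angles), np.sin(angles)]

# toy sensitivity: coil response falls off as inverse squared distance
dist = np.linalg.norm(coil_pos[:, None, :] - voxel_pos[None, :, :], axis=-1)
A = 1.0 / dist ** 2

m_true = rng.uniform(0.0, 1.0, n_voxels)   # voxel magnetizations ("image")
s = A @ m_true                             # signals recorded by the coil array
m_est = np.linalg.solve(A, s)              # invert the linear relation
```

In practice conditioning, noise, and coil coupling make the inversion far harder than this noiseless toy suggests, which is exactly the SNR and decoupling discussion above.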

  20. Characterization of digital signal processing in the DiDAC data acquisition system

    SciTech Connect

    Parson, J.D.; Olivier, T.L.; Habbersett, R.C.; Martin, J.C.; Wilder, M.E.; Jett, J.H.

    1993-01-01

    A new generation data acquisition system for flow cytometers has been constructed. This Digital Data Acquisition and Control (DiDAC) system is based on the VME architecture and uses both the standard VME bus and a private bus for system communication and data transfer. At the front end of the system is a free-running 20 MHz ADC. The output of a detector preamp provides the signal for digitization. The digitized waveform is passed to a custom-built digital signal processing circuit that extracts the height, width, and integral of the waveform. Calculation of these parameters is started (and stopped) when the waveform exceeds (and falls below) a preset threshold value. The free-running ADC is specified to have 10-bit accuracy at 25 MHz. The authors have characterized it and compared the results to those obtained with conventional analog signal processing followed by digitization. Comparisons are made between the two approaches in terms of measurement CV, linearity, and other aspects.
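    The threshold-gated extraction of height, width, and integral can be sketched digitally as follows. This is a software sketch under assumed conventions, not the DiDAC hardware circuit; the function name and sample spacing are assumptions:

```python
import numpy as np

def pulse_features(waveform, threshold, dt=50e-9):
    """Return (height, width, integral) of the first above-threshold pulse.

    dt = 50 ns corresponds to a 20 MHz sampling clock. Computation starts
    when the waveform exceeds the threshold and stops when it falls below.
    """
    above = waveform > threshold
    if not above.any():
        return None
    start = int(np.argmax(above))                 # first sample over threshold
    end = start + int(np.argmax(~above[start:]))  # first sample back under
    if end == start:                              # pulse runs to end of record
        end = len(waveform)
    pulse = waveform[start:end]
    return float(pulse.max()), (end - start) * dt, float(pulse.sum() * dt)
```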

  1. Processes of language acquisition in children with autism: evidence from preferential looking.

    PubMed

    Swensen, Lauren D; Kelley, Elizabeth; Fein, Deborah; Naigles, Letitia R

    2007-01-01

    Two language acquisition processes (comprehension preceding production of word order, the noun bias) were examined in 2- and 3-year-old children (n=10) with autistic spectrum disorder and in typically developing 21-month-olds (n=13). Intermodal preferential looking was used to assess comprehension of subject-verb-object word order and the tendency to map novel words onto objects rather than actions. Spontaneous speech samples were also collected. Results demonstrated significant comprehension of word order in both groups well before production. Moreover, children in both groups consistently showed the noun bias. Comprehension preceding production and the noun bias appear to be robust processes of language acquisition, observable in both typical and language-impaired populations. PMID:17381789

  2. Summary of the activities of the subgroup on data acquisition and processing

    SciTech Connect

    Connolly, P.L.; Doughty, D.C.; Elias, J.E.

    1981-01-01

    A data acquisition and handling subgroup consisting of approximately 20 members met during the 1981 ISABELLE summer study. Discussions were led by members of the BNL ISABELLE Data Acquisition Group (DAG) with lively participation from outside users. Particularly large contributions were made by representatives of BNL experiments 734, 735, and the MPS, as well as the Fermilab Colliding Detector Facility and the SLAC LASS Facility. In contrast to the 1978 study, the subgroup did not divide its activities into investigations of various individual detectors, but instead attempted to review the current state of the art in the data acquisition, trigger processing, and data handling fields. A series of meetings first reviewed individual pieces of the problem, including the status of the Fastbus Project, the Nevis trigger processor, the SLAC 168/E and 3081/E emulators, and efforts within DAG. Additional meetings dealt with questions of specifying and building complete data acquisition systems. For any given problem, a series of possible solutions was proposed by the members of the subgroup. In general, any given solution had both advantages and disadvantages, and there was never any consensus on which approach was best. However, there was agreement that certain problems could only be handled by systems of a given power or greater. What will be given here is a review of the various solutions with their associated powers, costs, advantages, and disadvantages.

  3. A new programming metaphor for image processing procedures

    NASA Technical Reports Server (NTRS)

    Smirnov, O. M.; Piskunov, N. E.

    1992-01-01

    Most image processing systems, besides an Application Program Interface (API) which lets users write their own image processing programs, also feature a higher level of programmability. Traditionally, this is a command or macro language, which can be used to build large procedures (scripts) out of simple programs or commands. This approach, a legacy of the teletypewriter, has serious drawbacks. A command language is clumsy when (and if!) it attempts to utilize the capabilities of a multitasking or multiprocessor environment, it is barely adequate for real-time data acquisition and processing, it has a fairly steep learning curve, and its user interface is very inefficient, especially when compared to the graphical user interface (GUI) that systems running under X11 or Windows should otherwise be able to provide. All these difficulties stem from one basic problem: a command language is not a natural metaphor for an image processing procedure. A more natural metaphor, an image processing factory, is described in detail. A factory is a set of programs (applications) that execute separate operations on images, connected by pipes that carry data (images and parameters) between them. The programs function concurrently, processing images as they arrive along pipes, and querying the user for whatever other input they need. From the user's point of view, programming (constructing) factories is a lot like playing with LEGO blocks: much more intuitive than writing scripts. The focus is on some of the difficulties of implementing factory support, most notably the design of an appropriate API. It is also shown that factories retain all the functionality of a command language (including loops and conditional branches), while suffering from none of the drawbacks outlined above.
Other benefits of factory programming include self-tuning factories and the process of encapsulation, which lets a factory take the shape of a standard application both from the system and the user's point of view, and
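    As a rough illustration of the factory metaphor described in this record, the sketch below wires a few toy "applications" together with queues standing in for pipes, each stage running concurrently and processing images as they arrive. The stage functions and data are hypothetical, not taken from the paper.

```python
# Minimal sketch of an "image processing factory": independent workers
# connected by pipes (queues), streaming images through concurrently.
import threading
import queue

SENTINEL = None  # marks end of the image stream

def stage(func, inbox, outbox):
    """Run one factory 'application': read from a pipe, process, write."""
    while True:
        item = inbox.get()
        if item is SENTINEL:
            outbox.put(SENTINEL)
            break
        outbox.put(func(item))

def run_factory(images, *funcs):
    """Wire stages together with pipes and stream images through them."""
    pipes = [queue.Queue() for _ in range(len(funcs) + 1)]
    workers = [
        threading.Thread(target=stage, args=(f, pipes[i], pipes[i + 1]))
        for i, f in enumerate(funcs)
    ]
    for w in workers:
        w.start()
    for img in images:
        pipes[0].put(img)
    pipes[0].put(SENTINEL)
    results = []
    while True:
        out = pipes[-1].get()
        if out is SENTINEL:
            break
        results.append(out)
    for w in workers:
        w.join()
    return results

# Toy "images" (lists of pixel values) flowing through two stages.
denoise = lambda img: [max(p - 1, 0) for p in img]
threshold = lambda img: [1 if p > 2 else 0 for p in img]
print(run_factory([[3, 1, 5], [0, 4, 2]], denoise, threshold))  # [[0, 0, 1], [0, 1, 0]]
```

    Loops and conditional branches, which the paper notes factories retain, would correspond here to stages that route items between pipes rather than simply forwarding them.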

  4. Graph-based retrospective 4D image construction from free-breathing MRI slice acquisitions

    NASA Astrophysics Data System (ADS)

    Tong, Yubing; Udupa, Jayaram K.; Ciesielski, Krzysztof C.; McDonough, Joseph M.; Mong, Andrew; Campbell, Robert M.

    2014-03-01

    4D or dynamic imaging of the thorax has many potential applications [1, 2]. CT and MRI offer sufficient speed to acquire motion information via 4D imaging. However they have different constraints and requirements. For both modalities both prospective and retrospective respiratory gating and tracking techniques have been developed [3, 4]. For pediatric imaging, x-ray radiation becomes a primary concern and MRI remains as the de facto choice. The pediatric subjects we deal with often suffer from extreme malformations of their chest wall, diaphragm, and/or spine, as such patient cooperation needed by some of the gating and tracking techniques are difficult to realize without causing patient discomfort. Moreover, we are interested in the mechanical function of their thorax in its natural form in tidal breathing. Therefore free-breathing MRI acquisition is the ideal modality of imaging for these patients. In our set up, for each coronal (or sagittal) slice position, slice images are acquired at a rate of about 200-300 ms/slice over several natural breathing cycles. This produces typically several thousands of slices which contain both the anatomic and dynamic information. However, it is not trivial to form a consistent and well defined 4D volume from these data. In this paper, we present a novel graph-based combinatorial optimization solution for constructing the best possible 4D scene from such data entirely in the digital domain. Our proposed method is purely image-based and does not need breath holding or any external surrogates or instruments to record respiratory motion or tidal volume. Both adult and children patients' data are used to illustrate the performance of the proposed method. Experimental results show that the reconstructed 4D scenes are smooth and consistent spatially and temporally, agreeing with known shape and motion of the lungs.
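    The record does not spell out the graph construction, but the core idea of choosing one acquired slice per position so that adjacent choices are mutually consistent can be sketched as a shortest path through a layered graph. Everything below (the dissimilarity measure, the toy data) is illustrative and not the authors' actual algorithm.

```python
# Toy layered-graph optimization: candidates[i] holds the repeated
# acquisitions at slice position i; pick one per position minimizing
# total dissimilarity between adjacent chosen slices (dynamic programming).
def dissimilarity(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def best_slice_sequence(candidates):
    """candidates[i] = list of candidate slices (flat pixel lists) at position i."""
    n = len(candidates)
    cost = [[0.0] * len(c) for c in candidates]  # best cost ending at each node
    back = [[0] * len(c) for c in candidates]    # backpointers
    for i in range(1, n):
        for j, s in enumerate(candidates[i]):
            choices = [
                (cost[i - 1][k] + dissimilarity(p, s), k)
                for k, p in enumerate(candidates[i - 1])
            ]
            cost[i][j], back[i][j] = min(choices)
    # backtrack from the cheapest final candidate
    j = min(range(len(candidates[-1])), key=lambda k: cost[-1][k])
    path = [j]
    for i in range(n - 1, 0, -1):
        j = back[i][j]
        path.append(j)
    return path[::-1]

# Three positions, two candidates each; the mutually consistent choice
# is candidate 0 at every position.
layers = [
    [[0, 0], [9, 9]],
    [[1, 1], [8, 8]],
    [[0, 1], [9, 8]],
]
print(best_slice_sequence(layers))  # [0, 0, 0]
```

    A real implementation would of course operate on full 2D slices, a richer image similarity, and one such optimization per respiratory phase.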

  5. Digital signal processing and data acquisition employing diode lasers for lidar-hygrometer

    NASA Astrophysics Data System (ADS)

    Naboko, Sergei V.; Pavlov, Lyubomir Y.; Penchev, Stoyan P.; Naboko, Vassily N.; Pencheva, Vasilka H.; Donchev, T.

    2003-11-01

    The paper addresses novel aspects of applying laser radar (lidar) to differential absorption spectroscopy and atmospheric gas monitoring, with emphasis on the advantages of the class of powerful pulsed laser diodes. The task of determining atmospheric humidity (water vapor being a major greenhouse gas) and its measurement demands match well the potential of the acquisition system. The proposed system delegates operations to a Digital Signal Processing (DSP) module, preserving the informative part of the signal through real-time pre-processing, followed by post-processing on a personal computer.

  6. Overview on METEOSAT geometrical image data processing

    NASA Technical Reports Server (NTRS)

    Diekmann, Frank J.

    1994-01-01

    Digital images acquired from the geostationary METEOSAT satellites are processed and disseminated at ESA's European Space Operations Centre (ESOC) in Darmstadt, Germany. Their scientific value depends mainly on their radiometric quality and geometric stability. This paper gives an overview of the image processing activities performed at ESOC, concentrating on geometrical restoration and quality evaluation. The performance of the rectification process for the various satellites over the past years is presented, and the impacts of external events, such as the Pinatubo eruption in 1991, are explained. Special developments in both hardware and software, necessary to cope with demanding tasks such as new image resampling or correction of spacecraft anomalies, are presented as well. The rotating lens of MET-5, causing severe geometrical image distortions, is an example of the latter.

  7. Mobile digital data acquisition and recording system for geoenergy process monitoring and control

    SciTech Connect

    Kimball, K B; Ogden, H C

    1980-12-01

    Three mobile, general purpose data acquisition and recording systems have been built to support geoenergy field experiments. These systems were designed to record and display information from large assortments of sensors used to monitor in-situ combustion recovery or similar experiments. They provide experimenters and operations personnel with easy access to current and past data for evaluation and control of the process, and provide permanent recordings for subsequent detailed analysis. The configurations of these systems and their current capabilities are briefly described.

  8. Methods of Hematoxylin and Eosin Image Information Acquisition and Optimization in Confocal Microscopy

    PubMed Central

    Yoon, Woong Bae; Kim, Hyunjin; Kim, Kwang Gi; Choi, Yongdoo; Chang, Hee Jin

    2016-01-01

    Objectives We produced hematoxylin and eosin (H&E) staining-like color images by using confocal laser scanning microscopy (CLSM), which can obtain the same or more information in comparison to conventional tissue staining. Methods We improved images by using several image converting techniques, including morphological methods, color space conversion methods, and segmentation methods. Results An image obtained after image processing showed coloring very similar to that in images produced by H&E staining, and it is advantageous to conduct analysis through fluorescent dye imaging and microscopy rather than analysis based on single microscopic imaging. Conclusions The colors used in CLSM are different from those seen in H&E staining, which is the method most widely used for pathologic diagnosis and is familiar to pathologists. Computer technology can facilitate the conversion of images by CLSM to be very similar to H&E staining images. We believe that the technique used in this study has great potential for application in clinical tissue analysis. PMID:27525165

  9. An intelligent pre-processing framework for standardizing medical images for CAD and other post-processing applications

    NASA Astrophysics Data System (ADS)

    Raghupathi, Lakshminarasimhan; Devarakota, Pandu R.; Wolf, Matthias

    2012-03-01

    There is an increasing need to provide end-users with seamless and secure access to healthcare information acquired from a diverse range of sources. This might include local and remote hospital sites equipped by different vendors and practicing varied acquisition protocols, and also heterogeneous external sources such as the Internet cloud. In such scenarios, image post-processing tools such as CAD (computer-aided diagnosis), which were hitherto developed using a smaller set of images, may not always work optimally on newer sets of images having entirely different characteristics. In this paper, we propose a framework that assesses the quality of a given input image and automatically applies an appropriate pre-processing method in such a manner that the image characteristics are normalized regardless of their source. We focus mainly on medical images, and the objective of the said pre-processing method is to standardize the performance of various image processing and workflow applications like CAD so that they perform in a consistent manner. First, our system consists of an assessment step wherein an image is evaluated based on criteria such as noise, image sharpness, etc. Depending on the measured characteristic, we then apply an appropriate normalization technique, thus giving way to our overall pre-processing framework. A systematic evaluation of the proposed scheme is carried out on a large set of CT images acquired from various vendors, including images reconstructed with next-generation iterative methods. Results demonstrate that the images are normalized and thus suitable for an existing LungCAD prototype.
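    The assess-then-normalize idea can be sketched as below: measure simple proxies for noise and sharpness, and dispatch a matching normalization step. The thresholds and the mean-filter stand-in are illustrative assumptions, not the paper's actual criteria or methods.

```python
# Hedged sketch: image quality assessment followed by conditional
# normalization. Proxies, thresholds, and filter are illustrative only.
import numpy as np

def assess(img):
    noise = np.std(np.diff(img, axis=1))          # crude noise proxy
    sharp = np.mean(np.abs(np.gradient(img)[0]))  # crude sharpness proxy
    return noise, sharp

def normalize(img, noise_thresh=20.0):
    noise, _ = assess(img)
    if noise > noise_thresh:
        # 3x3 mean filter as a stand-in for a proper denoiser
        padded = np.pad(img, 1, mode="edge")
        img = sum(
            padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
            for dy in range(3) for dx in range(3)
        ) / 9.0
    return img

rng = np.random.default_rng(0)
noisy = 100 + 50 * rng.standard_normal((32, 32))
clean = normalize(noisy)
print(assess(noisy)[0] > assess(clean)[0])  # True: smoothing reduced the noise proxy
```

    A production framework would branch to different correction chains (denoising, sharpening, intensity standardization) depending on which measured characteristic is out of range.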

  10. Analog signal processing for optical coherence imaging systems

    NASA Astrophysics Data System (ADS)

    Xu, Wei

    Optical coherence tomography (OCT) and optical coherence microscopy (OCM) are non-invasive optical coherence imaging techniques, which enable micron-scale resolution, depth-resolved imaging capability. Both OCT and OCM are based on Michelson interferometer theory. They are widely used in ophthalmology, gastroenterology and dermatology, because of their high resolution, safety and low cost. OCT creates cross-sectional images whereas OCM obtains en face images. In this dissertation, the design and development of three increasingly complicated analog signal processing (ASP) solutions for optical coherence imaging are presented. The first ASP solution was implemented for a time domain OCT system with a Rapid Scanning Optical Delay line (RSOD)-based optical signal modulation and logarithmic amplifier (Log amp) based demodulation. This OCT system can acquire up to 1600 A-scans per second. The measured dynamic range is 106 dB at 200 A-scans per second. This OCT signal processing electronics includes an off-the-shelf filter box with a Log amp circuit implemented on a PCB. The second ASP solution was developed for an OCM system with synchronized modulation and demodulation and compensation for interferometer phase drift. This OCM acquired micron-scale resolution, high dynamic range images at acquisition speeds up to 45,000 pixels/second. This OCM ASP solution is fully custom designed on a perforated circuit board. The third ASP solution was implemented on a single 2.2 mm x 2.2 mm complementary metal oxide semiconductor (CMOS) chip. This design is expandable to a multiple channel OCT system. A single on-chip CMOS photodetector and ASP channel was used for coherent demodulation in a time domain OCT system. Cross-sectional images were acquired with a dynamic range of 76 dB (limited by photodetector responsivity). When incorporated with a bump-bonded InGaAs photodiode with higher responsivity, the expected dynamic range is close to 100 dB.

  11. Real-time optical image processing techniques

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang

    1988-01-01

    Nonlinear real-time optical processing based on spatial pulse frequency modulation has been pursued through the analysis, design, and fabrication of pulse frequency modulated halftone screens and the modification of micro-channel spatial light modulators (MSLMs). Micro-channel spatial light modulators are modified via the Fabry-Perot method to achieve the high gamma operation required for nonlinear operation. Real-time nonlinear processing was performed using the halftone screen and MSLM. The experiments showed the effectiveness of the thresholding and also showed the need for higher SBP for image processing. The Hughes LCLV has been characterized and found to yield high gamma (about 1.7) when operated in low-frequency, low-bias mode. Cascading two LCLVs should also provide enough gamma for nonlinear processing. In this case, the SBP of the LCLV is sufficient but the uniformity of the LCLV needs improvement. Applications investigated include image correlation, computer generation of holograms, pseudo-color image encoding for image enhancement, and associative retrieval in neural processing. The discovery of the only known optical method for dynamic range compression of an input image in real time by using GaAs photorefractive crystals is reported. Finally, a new architecture for nonlinear multiple-sensory neural processing has been suggested.

  12. Hyperspectral image acquisition and analysis of cultured bacteria for the discrimination of urinary tract infections.

    PubMed

    Turra, Giovanni; Conti, Nicola; Signoroni, Alberto

    2015-08-01

    Because of their widespread diffusion and impact on human health, early identification of pathogens responsible for urinary tract infections (UTI) is one of the main challenges of clinical microbiology. Currently, bacteria culturing on chromogenic plates is widely adopted for UTI detection for its readily interpretable visual outcomes. However, the search for alternative solutions can be highly attractive, especially in the rapidly developing context of bacteriology laboratory automation and digitization, as long as they can improve cost-effectiveness or allow early discrimination. In this work, we consider and develop hyperspectral image acquisition and analysis solutions to verify the feasibility of a "virtual chromogenic agar" approach, based on the acquisition of spectral signatures from bacterial colonies growing on blood agar plates, and their interpretation by means of machine learning solutions. We implemented and tested two classification approaches (PCA+SVM and RSIMCA) that evidenced good capability to discriminate among five selected UTI bacteria. For its better performance, robustness, and suitability for an expanding set of pathogens, we conclude that the RSIMCA-based approach is worth further investigation from a clinical usage perspective. PMID:26736373
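    The PCA step of the PCA+SVM pipeline mentioned above can be sketched with plain numpy; a nearest-centroid rule stands in for the SVM here, and the two "species" spectra are synthetic, so this is only a shape of the approach, not the authors' implementation.

```python
# Hedged sketch: project per-colony spectral signatures onto principal
# components, then classify. Nearest-centroid replaces the paper's SVM;
# the spectra are synthetic.
import numpy as np

def pca_fit(X, k):
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]                 # top-k principal axes

def pca_project(X, mean, axes):
    return (X - mean) @ axes.T

rng = np.random.default_rng(1)
# Two synthetic "species", each a distinct spectral shape plus noise.
base = {0: np.linspace(0, 1, 50), 1: np.linspace(1, 0, 50)}
X = np.vstack([base[c] + 0.05 * rng.standard_normal(50)
               for c in [0, 1] * 20])
y = np.array([0, 1] * 20)

mean, axes = pca_fit(X, 2)
Z = pca_project(X, mean, axes)
centroids = np.array([Z[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((Z[:, None, :] - centroids) ** 2).sum(-1), axis=1)
print((pred == y).mean())  # 1.0 on this clearly separable toy data
```

    With real colony spectra one would use an SVM (or RSIMCA) on the projected features and hold out data for validation.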

  13. The Effect of Light Conditions on Photoplethysmographic Image Acquisition Using a Commercial Camera

    PubMed Central

    Liu, He; Wang, Yadong

    2014-01-01

    Cameras embedded in consumer devices have previously been used as physiological information sensors. The waveform of photoplethysmographic image (PPGi) signals may be significantly affected by light spectra and intensity. The purpose of this paper is to evaluate the performance of PPGi waveform acquisition in the red, green, and blue channels using a commercial camera under different light conditions. The system developed for this paper comprises a commercial camera and light sources with varied spectra and intensities. Signals were acquired from the fingertips of 12 healthy subjects. Extensive experiments using lights of different wavelengths and white light of varying intensity, reported in this paper, showed that almost all light spectra can acquire acceptable pulse rates, but only 470-, 490-, 505-, 590-, 600-, 610-, 625-, and 660-nm wavelength lights showed better PPGi waveform performance compared with the gold standard. At lower light intensity, light spectra >600 nm still showed better performance. The change in pulse amplitude (ac) and dc amplitude was also investigated across the different light intensities and spectra. With increasing light intensity, the dc amplitude increased, whereas the ac component showed an initial increase followed by a decrease. Most of the subjects achieved their maximum averaged ac output when the averaged dc output was in the range from 180 to 220 pixel values (8-bit, maximum pixel value 255). The results suggest that an adaptive solution could be developed to optimize the design of PPGi-based physiological signal acquisition devices in different light conditions. PMID:27170870
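    The ac/dc decomposition discussed above is simple to state in code: the dc amplitude is the mean level of the channel signal and the ac amplitude is the pulsatile peak-to-peak excursion around it. The waveform below is synthetic, with an assumed frame rate and pulse frequency.

```python
# Hedged sketch of ac/dc extraction from a PPGi channel signal.
# Frame rate, pulse rate, and amplitudes are illustrative.
import numpy as np

fs = 30.0                        # camera frame rate (frames/s), assumed
t = np.arange(0, 10, 1 / fs)
dc_level, ac_level, pulse_hz = 200.0, 4.0, 1.2
signal = dc_level + (ac_level / 2) * np.sin(2 * np.pi * pulse_hz * t)

dc = signal.mean()               # dc amplitude: mean pixel level
ac = signal.max() - signal.min() # ac amplitude: peak-to-peak excursion
print(round(dc), round(ac))      # 200 4
```

    On real data one would detrend first, since ambient drift would otherwise dominate the peak-to-peak estimate.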

  14. Applications of image processing technologies to fine arts

    NASA Astrophysics Data System (ADS)

    Bartolini, Franco; Cappellini, Vito; Del Mastio, Andrea; Piva, Alessandro

    2003-10-01

    Over the past years the progress of electronic imaging has encouraged researchers to develop applications for the fine arts sector. The aspects that have been most investigated are the high-quality acquisition of paintings (both from the point of view of spatial resolution and of color calibration), the actual restoration of the works (to give restorers an aid to forecast the results of the tasks they choose), virtual restoration (to try to build a digital copy of the painting as it was at the origin), and diagnosis (to automatically highlight, evaluate and monitor the possible damages that a work has suffered). Partially related to image processing are also the technologies for 3D acquisition and modeling of statues. Finally, particular care has recently been given to the distribution of digital copies of cultural heritage objects over the Internet, thus posing novel problems regarding the effective browsing of digital multimedia archives and the protection of the Intellectual Property connected to art-work reproductions. The goal of this paper is to review the research results that have been obtained in this field, and to present some problems that are still open and can represent a challenging research field for the future.

  15. Bistatic SAR: Signal Processing and Image Formation.

    SciTech Connect

    Wahl, Daniel E.; Yocky, David A.

    2014-10-01

    This report describes the significant processing steps that were used to take the raw recorded digitized signals from the bistatic synthetic aperture RADAR (SAR) hardware built for the NCNS Bistatic SAR project to a final bistatic SAR image. In general, the process steps herein are applicable to bistatic SAR signals that include the direct-path signal and the reflected signal. The steps include preprocessing, data extraction to form a phase history, and finally, image formation. Various plots and values are shown at most steps to illustrate the processing for a bistatic COSMO-SkyMed collection gathered on June 10, 2013 at Kirtland Air Force Base, New Mexico.

  16. Angiographic imaging using an 18.9 MHz swept-wavelength laser that is phase-locked to the data acquisition clock and resonant scanners (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Tozburun, Serhat; Blatter, Cedric; Siddiqui, Meena; Nam, Ahhyun S.; Vakoc, Benjamin J.

    2016-03-01

    In this study, we present an angiographic system comprising a novel 18.9 MHz swept-wavelength source integrated with a MEMS-based 23.7 kHz fast-axis scanner. The system provides rapid acquisition of frames and volumes on which a range of Doppler and intensity-based angiographic analyses can be performed. Notably, the source and data acquisition computer can be directly phase-locked to provide an intrinsically phase-stable imaging system supporting Doppler measurements without the need for individual A-line triggers or post-processing phase calibration algorithms. The system is integrated with a 1.8 gigasample-per-second (GS/s) acquisition card supporting continuous acquisition to computer RAM for 10 seconds. Using this system, we demonstrate phase-stable acquisitions across volumes acquired at 60 Hz frequency. We also highlight the ability to perform c-mode angiography providing volume perfusion measurements with 30 Hz temporal resolution. Ultimately, the speed and phase stability of this laser and MEMS scanner platform can be leveraged to accelerate OCT-based angiography and both phase-sensitive and phase-insensitive extraction of blood flow velocity.

  17. Twofold processing for denoising ultrasound medical images.

    PubMed

    Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y

    2015-01-01

    Ultrasound medical (US) imaging non-invasively pictures the inside of a human body for disease diagnostics. Speckle noise attacks ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold uses block-based thresholding, both hard (BHT) and soft (BST), on pixels in the wavelet domain with 8, 16, 32 and 64 non-overlapping block sizes. This first-fold process reduces speckle effectively but also blurs the object of interest. The second fold restores object boundaries and texture with adaptive wavelet fusion. The degraded object restoration in the block-thresholded US image is carried out through wavelet coefficient fusion of the object in the original US image and the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with normalized differential mean (NDF) to introduce the highest level of contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are thus named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate visual quality improvement to an interesting level with the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. Peak signal to noise ratio (PSNR), normalized cross correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. Validation of the proposed method is done by comparison with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images are provided by AMMA hospital radiology labs at Vijayawada, India. PMID:26697285
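    The hard versus soft thresholding applied per block in the wavelet domain is standard and easy to sketch. The coefficients below are synthetic stand-ins for one wavelet subband block; the block-size selection, fusion rules, and NDF adaptation of the paper are not reproduced.

```python
# Hard vs soft thresholding of (stand-in) wavelet coefficients.
import numpy as np

def hard_threshold(c, t):
    # keep coefficients above the threshold, zero the rest
    return np.where(np.abs(c) > t, c, 0.0)

def soft_threshold(c, t):
    # shrink all coefficients toward zero by the threshold
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

coeffs = np.array([-3.0, -0.5, 0.2, 1.5, 4.0])
print(hard_threshold(coeffs, 1.0).tolist())  # [-3.0, 0.0, 0.0, 1.5, 4.0]
print(soft_threshold(coeffs, 1.0).tolist())  # [-2.0, -0.0, 0.0, 0.5, 3.0]
```

    Hard thresholding preserves the amplitude of retained coefficients (sharper but noisier), while soft thresholding shrinks them uniformly (smoother but more blurring), which is why the paper's second fold restores object detail afterwards.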

  18. VPI - VIBRATION PATTERN IMAGER: A CONTROL AND DATA ACQUISITION SYSTEM FOR SCANNING LASER VIBROMETERS

    NASA Technical Reports Server (NTRS)

    Rizzi, S. A.

    1994-01-01

    The Vibration Pattern Imager (VPI) system was designed to control and acquire data from laser vibrometer sensors. The PC computer based system uses a digital signal processing (DSP) board and an analog I/O board to control the sensor and to process the data. The VPI system was originally developed for use with the Ometron VPI Sensor (Ometron Limited, Kelvin House, Worsley Bridge Road, London, SE26 5BX, England), but can be readily adapted to any commercially available sensor which provides an analog output signal and requires analog inputs for control of mirror positioning. VPI's graphical user interface allows the operation of the program to be controlled interactively through keyboard and mouse-selected menu options. The main menu controls all functions for setup, data acquisition, display, file operations, and exiting the program. Two types of data may be acquired with the VPI system: single point or "full field". In the single point mode, time series data is sampled by the A/D converter on the I/O board at a user-defined rate for the selected number of samples. The position of the measuring point, adjusted by mirrors in the sensor, is controlled via a mouse input. In the "full field" mode, the measurement point is moved over a user-selected rectangular area with up to 256 positions in both x and y directions. The time series data is sampled by the A/D converter on the I/O board and converted to a root-mean-square (rms) value by the DSP board. The rms "full field" velocity distribution is then uploaded for display and storage. VPI is written in C language and Texas Instruments' TMS320C30 assembly language for IBM PC series and compatible computers running MS-DOS. The program requires 640K of RAM for execution, and a hard disk with 10Mb or more of disk space is recommended. 
The program also requires a mouse, a VGA graphics display, a Four Channel analog I/O board (Spectrum Signal Processing, Inc.; Westborough, MA), a break-out box and a Spirit-30 board (Sonitech

  19. SENTINEL-2 Level 1 Products and Image Processing Performances

    NASA Astrophysics Data System (ADS)

    Baillarin, S. J.; Meygret, A.; Dechoz, C.; Petrucci, B.; Lacherade, S.; Tremas, T.; Isola, C.; Martimort, P.; Spoto, F.

    2012-07-01

    The stringent image quality requirements are also described, in particular the geo-location accuracy for both absolute (better than 12.5 m) and multi-temporal (better than 0.3 pixels) cases. Then, the prototyped image processing techniques (both radiometric and geometric) are addressed. The radiometric corrections are introduced first. They consist mainly of dark signal and detector relative sensitivity correction, crosstalk correction and MTF restoration. Then, a special focus is put on the geometric corrections. In particular, the innovative method of automatic enhancement of the geometric physical model is detailed. This method takes advantage of a Global Reference Image database, perfectly geo-referenced, to correct the physical geometric model of each image taken. The processing is based on an automatic image matching process which provides accurate ground control points between a given band of the image to refine and a reference image, allowing the viewing model to be dynamically calibrated. The generation of the Global Reference Image database, made of Sentinel-2 pre-calibrated mono-spectral images, is also addressed. In order to perform independent validation of the prototyping activity, an image simulator dedicated to Sentinel-2 has been set up. Thanks to this, a set of images has been simulated from various source images, combining different acquisition conditions and landscapes (mountains, deserts, cities …). Given disturbances have also been simulated so as to estimate the end-to-end performance of the processing chain. Finally, the radiometric and geometric performances obtained by the prototype are presented, in particular the geo-location performance of the Level-1C products, which widely fulfils the image quality requirements.

  20. Reducing the formation of image artifacts during spectroscopic micro-CT acquisitions

    NASA Astrophysics Data System (ADS)

    Zuber, Marcus; Koenig, Thomas; Hussain, Rubaiya; Hamann, Elias; Ballabriga, Rafael; Campbell, Michael; Fauler, Alex; Fiederle, Michael; Baumbach, Tilo

    2015-03-01

    Spectroscopic micro-computed tomography using photon counting detectors is a technology that promises to deliver material-specific images in pre-clinical research. Inherent to such applications is the need for a high spatial resolution, which can only be achieved with small focal spot sizes in the micrometer range. This limits the achievable x-ray fluxes and implies long acquisitions easily exceeding one hour, during which it is paramount to maintain a constant detector response. Given that photon-counting detectors are delicate systems, with each pixel hosting advanced analog and digital circuitry, this can represent a challenging task. In this contribution, we illustrate our findings on how to reduce image artifacts in computed tomography reconstructions under these conditions, using a Medipix3RX detector featuring a cadmium telluride sensor. We find that maintaining a constant temperature is a prerequisite to guarantee energy threshold stability. More importantly, we identify varying sensor leakage currents as a significant source of artifact formation. We show that these leakage currents can render the corresponding images unusable if the ambient temperature fluctuates, as caused by air conditioning, for example. We conclude by demonstrating the necessity of an adjustable leakage current compensation.

  1. Acquisition of priori tissue optical structure based on non-rigid image registration

    NASA Astrophysics Data System (ADS)

    Wan, Wenbo; Li, Jiao; Liu, Lingling; Wang, Yihan; Zhang, Yan; Gao, Feng

    2015-03-01

    Shape-parameterized diffuse optical tomography (DOT), which is based on the a priori assumption that optical properties are uniformly distributed within each region, has shown its effectiveness for reconstructing complex biological tissue optical heterogeneities. The a priori tissue optical structure can be acquired with the assistance of anatomical imaging methods such as X-ray computed tomography (XCT), which suffers from low contrast for soft tissues comprising regions of different optical characteristics. For the mouse model, a feasible strategy for a priori tissue optical structure acquisition is proposed based on a non-rigid image registration algorithm. During registration, a mapping matrix is calculated to elastically align the XCT image of the reference mouse to the XCT image of the target mouse. Applying the matrix to the reference atlas, which is a detailed mesh of organs/tissues in the reference mouse, a registered atlas can be obtained as the anatomical structure of the target mouse. By assigning the literature-published optical parameters of each organ to the corresponding anatomical structure, the optical structure of the target organism can be obtained as a priori information for the DOT reconstruction algorithm. By applying the non-rigid image registration algorithm to a target mouse which is transformed from the reference mouse, the results show that the minimum correlation coefficient can be improved from 0.2781 (before registration) to 0.9032 (after fine registration), and the maximum average Euclidean distance can be decreased from 12.80 mm (before registration) to 1.02 mm (after fine registration), which verifies the effectiveness of the algorithm.
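    The two evaluation metrics quoted above (correlation coefficient between images, average Euclidean distance between corresponding landmarks) can be sketched directly; the transform and points below are synthetic, and a pure translation stands in for the paper's elastic mapping.

```python
# Hedged sketch of the registration quality metrics: image correlation
# and mean landmark distance before/after applying a recovered mapping.
import numpy as np

def correlation(a, b):
    """Pearson correlation coefficient between two images."""
    return np.corrcoef(np.ravel(a), np.ravel(b))[0, 1]

def mean_euclid(p, q):
    """Average Euclidean distance between corresponding landmarks."""
    return float(np.mean(np.linalg.norm(p - q, axis=1)))

ref_pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
shift = np.array([3.0, -2.0])
target_pts = ref_pts + shift                    # target = reference moved rigidly

before = mean_euclid(ref_pts, target_pts)
after = mean_euclid(ref_pts + shift, target_pts)  # apply the (known) mapping
print(before, after)                            # distance drops to 0 after mapping

img = np.arange(16.0).reshape(4, 4)
print(correlation(img, img))                    # identical images give 1.0
```

    In the paper's setting the mapping is elastic (per-point displacements) and the metrics are computed per organ region, but the bookkeeping is the same.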

  2. A digital receiver module with direct data acquisition for magnetic resonance imaging systems

    NASA Astrophysics Data System (ADS)

    Tang, Weinan; Sun, Hongyu; Wang, Weimin

    2012-10-01

    A digital receiver module for magnetic resonance imaging (MRI) with detailed hardware implementations is presented. The module is based on a direct sampling scheme using the latest mixed-signal circuit design techniques. A single field-programmable gate array chip is employed to perform software-based digital down conversion for radio frequency signals. The modular architecture of the receiver allows multiple acquisition channels to be implemented on a highly integrated printed circuit board. To maintain the phase coherence of the receiver and the exciter in the context of direct sampling, an effective phase synchronization method was proposed to achieve a phase deviation as small as 0.09°. The performance of the described receiver module was verified in the experiments for both low- and high-field (0.5 T and 1.5 T) MRI scanners and was compared to a modern commercial MRI receiver system.
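    The software-based digital down conversion described above can be sketched in numpy: a directly sampled RF signal is mixed with a numerically controlled oscillator and low-pass filtered to recover the baseband envelope. The frequencies and the moving-average filter are illustrative assumptions, not the module's actual parameters.

```python
# Hedged sketch of digital down conversion (DDC) for a direct-sampling
# MRI receiver. All numbers are illustrative.
import numpy as np

fs = 1.0e6          # sample rate (Hz), assumed
f_rf = 100.0e3      # RF (Larmor-like) frequency, assumed
n = np.arange(4096)
envelope = np.exp(-n / 2000.0)                      # slowly decaying MR-like signal
rf = envelope * np.cos(2 * np.pi * f_rf * n / fs)   # directly sampled RF

# mix down with a numerically controlled oscillator (NCO) ...
nco = np.exp(-2j * np.pi * f_rf * n / fs)
mixed = rf * nco
# ... then low-pass filter with a simple moving average to reject the 2f term
kernel = np.ones(64) / 64.0
baseband = np.convolve(mixed, kernel, mode="same")

recovered = 2 * np.abs(baseband)   # factor 2 restores the mixing amplitude loss
err = np.max(np.abs(recovered[200:-200] - envelope[200:-200]))
print(err < 0.05)  # True: envelope recovered to within a few percent
```

    A hardware FPGA implementation would use a CIC or FIR decimating filter instead of this moving average, and phase coherence with the exciter (the 0.09° figure above) depends on the NCO and sampling clocks being locked.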

  3. A structured database and image acquisition system in support of palynological studies: CHITINOS.

    PubMed

    Achab, A; Asselin, E; Liang, B

    2000-12-01

    CHITINOS is a microfossil image and data acquisition system developed to support palynologists from field work to report production. The system is intended for chitinozoans, but it can also accommodate other fossil groups. Thanks to its client-server architecture, the system can be accessed by multiple users. The database can be filled with data acquired during palynological work or taken from the literature. The system allows for the easy input, update, management, analysis and retrieval of paleontological data to enable the paleontologist to elucidate paleogeographic patterns, changes in biodiversity and taxonomic differentiations. Query and plot interfaces are intended for report production. The system was designed as the basis of a knowledge expert system by providing a new perspective in the interpretation of interrelated data. PMID:11164209

  4. Recovery of phase inconsistencies in continuously moving table extended field of view magnetic resonance imaging acquisitions.

    PubMed

    Kruger, David G; Riederer, Stephen J; Rossman, Phillip J; Mostardi, Petrice M; Madhuranthakam, Ananth J; Hu, Houchun H

    2005-09-01

    MR images formed using extended FOV continuously moving table data acquisition can have signal falloff and loss of lateral spatial resolution at localized, periodic positions along the direction of table motion. In this work we identify the origin of these artifacts and provide a means for correction. The artifacts are due to a mismatch of the phase of signals acquired from contiguous sampling fields of view and are most pronounced when the central k-space views are being sampled. Correction can be performed using the phase information from a periodically sampled central view to adjust the phase of all other views of that view cycle, making the net phase uniform across each axial plane. Results from experimental phantom and contrast-enhanced peripheral MRA studies show that the correction technique substantially eliminates the artifact for a variety of phase encode orders. PMID:16086304
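    The correction described above amounts to rotating each view cycle's k-space data by the phase measured on a repeated central reference view. The toy sketch below applies exactly that bookkeeping to synthetic data; how the reference phase is obtained in practice differs from this simplification.

```python
# Hedged sketch: remove per-cycle global phase offsets from k-space views
# using a reference phase measurement. Data and offsets are synthetic.
import numpy as np

rng = np.random.default_rng(2)
true_views = rng.standard_normal((8, 16)) + 1j * rng.standard_normal((8, 16))

# each view cycle is corrupted by one unknown global phase offset
offsets = rng.uniform(-np.pi, np.pi, size=8)
acquired = true_views * np.exp(1j * offsets)[:, None]

# phase measured against a reference: here the central sample of each view,
# compared (for the sake of the toy example) with its uncorrupted phase
measured = np.angle(acquired[:, 8]) - np.angle(true_views[:, 8])
corrected = acquired * np.exp(-1j * measured)[:, None]

print(np.allclose(corrected, true_views))  # True
```

    Note that 2π wrapping in the measured phase difference is harmless here, since only exp(-1j*measured) is applied.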

  5. 3D seismic image processing for interpretation

    NASA Astrophysics Data System (ADS)

    Wu, Xinming

Extracting fault, unconformity, and horizon surfaces from a seismic image is useful for interpretation of geologic structures and stratigraphic features. Although interpretation of these surfaces has been automated to some extent by others, significant manual effort is still required for extracting each type of these geologic surfaces. I propose methods to automatically extract all the fault, unconformity, and horizon surfaces from a 3D seismic image. To a large degree, these methods involve only image or array processing, achieved by efficiently solving partial differential equations. For fault interpretation, I propose a linked data structure, which is simpler than triangle or quad meshes, to represent a fault surface. In this simple data structure, each sample of a fault corresponds to exactly one image sample. Using this linked data structure, I extract complete and intersecting fault surfaces without holes from 3D seismic images. I use the same structure in subsequent processing to estimate fault slip vectors. I further propose two methods, using precomputed fault surfaces and slips, to undo faulting in seismic images by simultaneously moving fault blocks and faults themselves. For unconformity interpretation, I first propose a new method to compute an unconformity likelihood image that highlights both the termination areas and the corresponding parallel unconformities and correlative conformities. I then extract unconformity surfaces from the likelihood image and use these surfaces as constraints to more accurately estimate seismic normal vectors that are discontinuous near the unconformities. Finally, I use the estimated normal vectors, with the unconformities as constraints, to compute a flattened image in which seismic reflectors are all flat and vertical gaps correspond to the unconformities. Horizon extraction is straightforward after computing a map of image flattening; we can first extract horizontal slices in the flattened space

  6. Image Processing Application for Cognition (IPAC) - Traditional and Emerging Topics in Image Processing in Astronomy (Invited)

    NASA Astrophysics Data System (ADS)

    Pesenson, M.; Roby, W.; Helou, G.; McCollum, B.; Ly, L.; Wu, X.; Laine, S.; Hartley, B.

    2008-08-01

    A new application framework for advanced image processing for astronomy is presented. It implements standard two-dimensional operators, and recent developments in the field of non-astronomical image processing (IP), as well as original algorithms based on nonlinear partial differential equations (PDE). These algorithms are especially well suited for multi-scale astronomical images since they increase signal to noise ratio without smearing localized and diffuse objects. The visualization component is based on the extensive tools that we developed for Spitzer Space Telescope's observation planning tool Spot and archive retrieval tool Leopard. It contains many common features, combines images in new and unique ways and interfaces with many astronomy data archives. Both interactive and batch mode processing are incorporated. In the interactive mode, the user can set up simple processing pipelines, and monitor and visualize the resulting images from each step of the processing stream. The system is platform-independent and has an open architecture that allows extensibility by addition of plug-ins. This presentation addresses astronomical applications of traditional topics of IP (image enhancement, image segmentation) as well as emerging new topics like automated image quality assessment (QA) and feature extraction, which have potential for shaping future developments in the field. Our application framework embodies a novel synergistic approach based on integration of image processing, image visualization and image QA (iQA).
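
The abstract does not spell out its PDE algorithms; a classic member of the nonlinear-diffusion family it refers to is Perona-Malik anisotropic diffusion, which smooths noise while preserving edges. A generic sketch with illustrative parameters and periodic boundaries for brevity, not the IPAC implementation:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Nonlinear (anisotropic) diffusion denoising; generic sketch."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite-difference gradients toward the four neighbours
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        # Edge-stopping conductances: small where gradients are large,
        # so edges are preserved while flat regions are smoothed
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```

This is why such schemes can raise the signal-to-noise ratio without smearing localized objects: diffusion is suppressed across strong gradients.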

  7. Quantitative assessment of susceptibility weighted imaging processing methods

    PubMed Central

    Li, Ningzhi; Wang, Wen-Tung; Sati, Pascal; Pham, Dzung L.; Butman, John A.

    2013-01-01

    Purpose To evaluate different susceptibility weighted imaging (SWI) phase processing methods and parameter selection, thereby improving understanding of potential artifacts, as well as facilitating choice of methodology in clinical settings. Materials and Methods Two major phase processing methods, Homodyne-filtering and phase unwrapping-high pass (HP) filtering, were investigated with various phase unwrapping approaches, filter sizes, and filter types. Magnitude and phase images were acquired from a healthy subject and brain injury patients on a 3T clinical Siemens MRI system. Results were evaluated based on image contrast to noise ratio and presence of processing artifacts. Results When using a relatively small filter size (32 pixels for the matrix size 512 × 512 pixels), all Homodyne-filtering methods were subject to phase errors leading to 2% to 3% masked brain area in lower and middle axial slices. All phase unwrapping-filtering/smoothing approaches demonstrated fewer phase errors and artifacts compared to the Homodyne-filtering approaches. For performing phase unwrapping, Fourier-based methods, although less accurate, were 2–4 orders of magnitude faster than the PRELUDE, Goldstein and Quality-guide methods. Conclusion Although Homodyne-filtering approaches are faster and more straightforward, phase unwrapping followed by HP filtering approaches perform more accurately in a wider variety of acquisition scenarios. PMID:24923594
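
Homodyne filtering, the first of the two phase-processing methods compared, divides the complex image by a low-pass-filtered copy of itself so that only high-spatial-frequency phase survives. A textbook sketch with the filter simplified to a square central k-space window; not the paper's exact processing:

```python
import numpy as np

def homodyne_phase(cplx, filt=32):
    """High-pass phase image via homodyne filtering (textbook sketch).

    cplx : complex MR image
    filt : k-space window width in pixels (e.g. 32 for a 512x512 matrix)
    """
    ny, nx = cplx.shape
    k = np.fft.fftshift(np.fft.fft2(cplx))
    # Keep only a small central k-space window -> smooth (low-pass) image
    win = np.zeros_like(k)
    cy, cx = ny // 2, nx // 2
    win[cy - filt // 2:cy + filt // 2, cx - filt // 2:cx + filt // 2] = 1
    low = np.fft.ifft2(np.fft.ifftshift(k * win))
    # Dividing by the low-pass image removes slowly varying background phase
    return np.angle(cplx / (low + 1e-12))
```

The filter-size dependence studied in the paper corresponds to the `filt` parameter here: a small window removes more background phase but is more prone to the phase errors the authors observed.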

  8. Optimized acquisition time for x-ray fluorescence imaging of gold nanoparticles: a preliminary study using photon counting detector

    NASA Astrophysics Data System (ADS)

    Ren, Liqiang; Wu, Di; Li, Yuhua; Chen, Wei R.; Zheng, Bin; Liu, Hong

    2016-03-01

X-ray fluorescence (XRF) is a promising spectroscopic technique to characterize imaging contrast agents with high atomic numbers (Z), such as gold nanoparticles (GNPs), inside small objects. Its utilization for biomedical applications, however, is largely limited to experimental research by long data acquisition times. The objectives of this study are to apply a photon counting detector array for XRF imaging and to determine an optimized XRF data acquisition time, at which the acquired XRF image is of acceptable quality while allowing the maximum level of radiation dose reduction. A prototype laboratory XRF imaging configuration consisting of a pencil-beam X-ray source and a photon counting detector array (1 × 64 pixels) is employed to acquire the XRF image by exciting prepared GNP/water solutions. In order to analyze the signal-to-noise ratio (SNR) improvement versus exposure time, all XRF photons within the energy range of 63-76 keV, which includes the two Kα gold fluorescence peaks, are collected for 1 s, 2 s, 3 s, and so on up to 200 s. The optimized XRF data acquisition time for imaging different GNP solutions is determined as the moment when the acquired XRF image just reaches an SNR of 20 dB, which corresponds to acceptable image quality.
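
Under Poisson counting statistics the SNR grows as the square root of exposure time, so the "optimized" time is simply the first exposure whose SNR crosses the 20 dB threshold. An illustrative sketch with hypothetical count rates; the study determines this empirically from measured images, not from a rate model:

```python
import math

def optimized_acquisition_time(rate_sig, rate_bg, target_db=20.0):
    """Shortest counting time reaching a target SNR, assuming Poisson
    statistics (SNR ~ sqrt(t)); illustrative only.

    rate_sig : fluorescence count rate (counts/s), hypothetical
    rate_bg  : background count rate (counts/s), hypothetical
    """
    target = 10 ** (target_db / 20.0)  # 20 dB -> amplitude ratio of 10
    t = 1.0
    while True:
        sig = rate_sig * t
        noise = math.sqrt((rate_sig + rate_bg) * t)  # shot noise
        if sig / noise >= target:
            return t
        t += 1.0  # the study stepped exposures in 1 s increments
```

Because SNR scales as sqrt(t), halving the required SNR margin cuts the dose by a factor of four, which is the motivation for stopping at the 20 dB threshold.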

  9. Hardware System for Real-Time EMG Signal Acquisition and Separation Processing during Electrical Stimulation.

    PubMed

    Hsueh, Ya-Hsin; Yin, Chieh; Chen, Yan-Hong

    2015-09-01

The study aimed to develop a real-time electromyography (EMG) signal acquisition and processing device that can acquire signals during electrical stimulation. Since the electrical stimulation output can affect EMG signal acquisition, the EMG signal transmission and processing methods had to be modified to integrate the two elements into one system. The whole system was designed in a user-friendly and flexible manner. For EMG signal processing, the system used an Altera Field Programmable Gate Array (FPGA) as its core to process the hybrid EMG signal in real time and output the isolated signal in a highly efficient way. The system used the power spectral density to evaluate the accuracy of signal processing, and cross-correlation showed that the delay of real-time processing was only 250 μs. PMID:26210898
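
The reported 250 μs latency comes from a cross-correlation between the input and processed signals: the lag at the correlation peak, divided by the sampling rate, is the delay. A generic sketch of that evaluation idea, with the sampling rate and signals hypothetical:

```python
import numpy as np

def delay_by_xcorr(x, y, fs):
    """Estimate the lag of y relative to x, in seconds, from the peak of
    their cross-correlation; generic sketch of the evaluation method.

    x, y : 1-D signals (y is a possibly delayed copy of x)
    fs   : sampling rate in Hz
    """
    x = x - np.mean(x)
    y = y - np.mean(y)
    corr = np.correlate(y, x, mode="full")
    lag = np.argmax(corr) - (len(x) - 1)  # samples by which y trails x
    return lag / fs
```

At a hypothetical 4 kHz sampling rate, a one-sample lag corresponds to exactly 250 μs.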

  10. Age Effects on the Process of L2 Acquisition? Evidence from the Acquisition of Negation and Finiteness in L2 German

    ERIC Educational Resources Information Center

    Dimroth, Christine

    2008-01-01

    It is widely assumed that ultimate attainment in adult second language (L2) learners often differs quite radically from ultimate attainment in child L2 learners. This article addresses the question of whether learners at different ages also show qualitative differences in the process of L2 acquisition. Longitudinal production data from two…

  11. Thermal Imaging Processes of Polymer Nanocomposite Coatings

    NASA Astrophysics Data System (ADS)

    Meth, Jeffrey

    2015-03-01

Laser induced thermal imaging (LITI) is a process whereby infrared radiation impinging on a coating on a donor film transfers that coating to a receiving film to produce a pattern. This talk describes how LITI patterning can print color filters for liquid crystal displays, and details the physical processes that are responsible for transferring the nanocomposite coating in a coherent manner that does not degrade its optical properties. Unique features of this process involve heating rates of 10^7 K/s and cooling rates of 10^4 K/s, which implies that not all of the relaxation modes of the polymer are accessed during the imaging process. On the microsecond time scale, the polymer flow is forced by devolatilization of solvents, followed by deformation akin to the constrained blister test, and then fracture caused by differential thermal expansion. The unique combination of disparate physical processes demonstrates the gamut of physics that contribute to advanced material processing in an industrial setting.

  12. Fundamental Concepts of Digital Image Processing

    DOE R&D Accomplishments Database

    Twogood, R. E.

    1983-03-01

The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has requirements unique from the others, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the obtaining of two-dimensional (2-D) computer-aided tomography (CAT) images. A medical decision might be made while the patient is still under observation rather than days later.

  13. APNEA list mode data acquisition and real-time event processing

    SciTech Connect

    Hogle, R.A.; Miller, P.; Bramblett, R.L.

    1997-11-01

The LMSC Active Passive Neutron Examinations and Assay (APNEA) Data Logger is a VME-based data acquisition system using commercial off-the-shelf hardware with application-specific software. It receives TTL inputs from eighty-eight ³He detector tubes and eight timing signals. Two data sets are generated concurrently for each acquisition session: (1) List Mode recording of all detector and timing signals, timestamped to 3 microsecond resolution; (2) Event Accumulations generated in real time by counting events into short (tens of microseconds) and long (seconds) time bins following repetitive triggers. List Mode data sets can be post-processed to: (1) determine the optimum time bins for TRU assay of waste drums, (2) analyze a given data set in several ways to match different assay requirements and conditions, and (3) confirm assay results by examining details of the raw data. Data Logger events are processed and timestamped by an array of 15 TMS320C40 DSPs and delivered to an embedded controller (PowerPC604) for interim disk storage. Three acquisition modes, corresponding to different trigger sources, are provided. A standard network interface to a remote host system (Windows NT or SunOS) provides for system control, status, and transfer of previously acquired data. 6 figs.

  14. A knowledge acquisition process to analyse operational problems in solid waste management facilities.

    PubMed

    Dokas, Ioannis M; Panagiotakopoulos, Demetrios C

    2006-08-01

    The available expertise on managing and operating solid waste management (SWM) facilities varies among countries and among types of facilities. Few experts are willing to record their experience, while few researchers systematically investigate the chains of events that could trigger operational failures in a facility; expertise acquisition and dissemination, in SWM, is neither popular nor easy, despite the great need for it. This paper presents a knowledge acquisition process aimed at capturing, codifying and expanding reliable expertise and propagating it to non-experts. The knowledge engineer (KE), the person performing the acquisition, must identify the events (or causes) that could trigger a failure, determine whether a specific event could trigger more than one failure, and establish how various events are related among themselves and how they are linked to specific operational problems. The proposed process, which utilizes logic diagrams (fault trees) widely used in system safety and reliability analyses, was used for the analysis of 24 common landfill operational problems. The acquired knowledge led to the development of a web-based expert system (Landfill Operation Management Advisor, http://loma.civil.duth.gr), which estimates the occurrence possibility of operational problems, provides advice and suggests solutions. PMID:16941992
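
The fault trees at the heart of this process are logic diagrams whose AND/OR gates connect triggering events to an operational failure. A toy evaluation sketch with hypothetical landfill event names; the actual LOMA knowledge base covering 24 operational problems is far richer:

```python
def evaluate_fault_tree(node, events):
    """Evaluate a simple fault tree.

    node   : either a basic event name (str), or a tuple
             ("AND" | "OR", [child nodes]) representing a gate
    events : dict mapping event names to True/False observations;
             all names here are hypothetical illustrations
    """
    if isinstance(node, str):
        return events.get(node, False)
    gate, children = node
    results = [evaluate_fault_tree(c, events) for c in children]
    return all(results) if gate == "AND" else any(results)

# Hypothetical tree: leachate overflow occurs if the liner tears, OR if
# heavy rain AND a pump failure happen together.
leachate_overflow = ("OR", ["liner_tear",
                            ("AND", ["heavy_rain", "pump_failure"])])
```

Traversing such trees also lets the knowledge engineer see when one event participates in several failure chains, which is exactly the cross-linking the paper emphasizes.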

  15. Image processing of angiograms: A pilot study

    NASA Technical Reports Server (NTRS)

    Larsen, L. E.; Evans, R. A.; Roehm, J. O., Jr.

    1974-01-01

    The technology transfer application this report describes is the result of a pilot study of image-processing methods applied to the image enhancement, coding, and analysis of arteriograms. Angiography is a subspecialty of radiology that employs the introduction of media with high X-ray absorption into arteries in order to study vessel pathology as well as to infer disease of the organs supplied by the vessel in question.

  16. Future projects in pulse image processing

    NASA Astrophysics Data System (ADS)

    Kinser, Jason M.

    1999-03-01

Pulse-Coupled Neural Networks (PCNNs) have generated considerable interest as image processing tools. Past applications include image segmentation, edge extraction, texture extraction, de-noising, object isolation, foveation and fusion. These past applications do not comprise a complete list of useful applications of the PCNN. Future avenues of research will include level set analysis, binary (optical) correlators, artificial life simulations, maze running and filter jet analysis. This presentation will explore these future avenues of PCNN research.

  17. Image analysis in modern ophthalmology: from acquisition to computer assisted diagnosis and telemedicine

    NASA Astrophysics Data System (ADS)

    Marrugo, Andrés G.; Millán, María S.; Cristóbal, Gabriel; Gabarda, Salvador; Sorel, Michal; Sroubek, Filip

    2012-06-01

Medical digital imaging has become a key element of modern health care procedures. It provides visual documentation, a permanent record for the patients, and, most importantly, the ability to extract information about many diseases. Modern ophthalmology thrives and develops on the advances in digital imaging and computing power. In this work we present an overview of recent image processing techniques proposed by the authors in the area of digital eye fundus photography. Our applications range from retinal image quality assessment to image restoration via blind deconvolution and visualization of structural changes in time between patient visits. All are proposed within a framework for improving and assisting medical practice and the forthcoming information chain in telemedicine.

  18. A prototype data acquisition and processing system for Schumann resonance measurements

    NASA Astrophysics Data System (ADS)

    Tatsis, Giorgos; Votis, Constantinos; Christofilakis, Vasilis; Kostarakis, Panos; Tritakis, Vasilis; Repapis, Christos

    2015-12-01

In this paper, a cost-effective prototype data acquisition system specifically designed for Schumann resonance measurements, together with an adequate signal processing method, is described in detail. The implemented system captures the magnetic component of the Schumann resonance signal, using a magnetic antenna, at sampling rates much higher than the Nyquist rate for efficient signal improvement. To obtain the characteristics of the individual resonances of the SR spectrum, new and efficient software was developed; the processing techniques it uses are analyzed thoroughly below. The system's performance and operation are evaluated using preliminary measurements taken in Northwest Greece.
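
Resolving the individual Schumann resonances (near 7.8, 14, 20 Hz, and so on) typically starts from an averaged periodogram of the oversampled antenna signal. A Bartlett-style sketch in NumPy; this is a standard estimator, not the authors' software:

```python
import numpy as np

def averaged_psd(x, fs, seg_len=4096):
    """Averaged (Bartlett-style) periodogram of a real signal.

    x       : 1-D sampled signal
    fs      : sampling rate in Hz
    seg_len : samples per segment; frequency resolution is fs / seg_len
    """
    n_seg = len(x) // seg_len
    segs = np.reshape(x[:n_seg * seg_len], (n_seg, seg_len))
    win = np.hanning(seg_len)
    # Average |FFT|^2 over segments to reduce the variance of the estimate
    spec = np.mean(np.abs(np.fft.rfft(segs * win, axis=1)) ** 2, axis=0)
    freqs = np.fft.rfftfreq(seg_len, d=1.0 / fs)
    return freqs, spec
```

Peak positions, widths, and amplitudes of the individual resonances can then be extracted by fitting the spectral peaks in the averaged estimate.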

  19. A Psychometric Study of Reading Processes in L2 Acquisition: Deploying Deep Processing to Push Learners' Discourse Towards Syntactic Processing-Based Constructions

    ERIC Educational Resources Information Center

    Manuel, Carlos J.

    2009-01-01

    This study assesses reading processes and/or strategies needed to deploy deep processing that could push learners towards syntactic-based constructions in L2 classrooms. Research has found L2 acquisition to present varying degrees of success and/or fossilization (Bley-Vroman 1989, Birdsong 1992 and Sharwood Smith 1994). For example, learners have…

  20. CCD architecture for spacecraft SAR image processing

    NASA Technical Reports Server (NTRS)

    Arens, W. E.

    1977-01-01

    A real-time synthetic aperture radar (SAR) image processing architecture amenable to future on-board spacecraft applications is currently under development. Using state-of-the-art charge-coupled device (CCD) technology, low cost and power are inherent features. Other characteristics include the ability to reprogram correlation reference functions, correct for range migration, and compensate for antenna beam pointing errors on the spacecraft in real time. The first spaceborne demonstration is scheduled to be flown as an experiment on a 1982 Shuttle imaging radar mission (SIR-B). This paper describes the architecture and implementation characteristics of this initial spaceborne CCD SAR image processor.

  1. Infrared image processing and data analysis

    NASA Astrophysics Data System (ADS)

    Ibarra-Castanedo, C.; González, D.; Klein, M.; Pilla, M.; Vallerand, S.; Maldague, X.

    2004-12-01

Infrared thermography in nondestructive testing provides images (thermograms) in which zones of interest (defects) sometimes appear as subtle signatures. In this context, raw images are often not appropriate, since most defects will be missed. In other cases, what is needed is a quantitative analysis, such as for defect detection and characterization. In this paper, various methods of data analysis required for preprocessing and/or processing of images are presented. References from the literature are provided for the briefly discussed known methods, while novelties are elaborated in more detail in the text, which also includes experimental results.

  2. On the Contrastive Analysis of Features in Second Language Acquisition: Uninterpretable Gender on Past Participles in English-French Processing

    ERIC Educational Resources Information Center

    Dekydtspotter, Laurent; Renaud, Claire

    2009-01-01

    Lardiere's discussion raises important questions about the use of features in second language (L2) acquisition. This response examines predictions for processing of a feature-valuing model vs. a frequency-sensitive, associative model in explaining the acquisition of French past participle agreement. Results from a reading-time experiment support…

  3. 48 CFR 1036.602-5 - Short selection process for contracts not to exceed the simplified acquisition threshold.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Short selection process for contracts not to exceed the simplified acquisition threshold. 1036.602-5 Section 1036.602-5... CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Architect-Engineer Services 1036.602-5 Short selection...

  4. 48 CFR 736.602-5 - Short selection process for procurements not to exceed the simplified acquisition threshold.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Short selection process for procurements not to exceed the simplified acquisition threshold. 736.602-5 Section 736.602-5... CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Architect-Engineer Services 736.602-5...

  5. Image processing algorithm design and implementation for real-time autonomous inspection of mixed waste

    SciTech Connect

    Schalkoff, R.J.; Shaaban, K.M.; Carver, A.E.

    1996-12-31

The ARIES #1 (Autonomous Robotic Inspection Experimental System) vision system is used to acquire drum surface images under controlled conditions and subsequently perform autonomous visual inspection leading to a classification as "acceptable" or "suspect". Specific topics described include vision system design methodology, algorithmic structure, hardware processing structure, and image acquisition hardware. Most of these capabilities were demonstrated at the ARIES Phase II Demo held on Nov. 30, 1995. Finally, Phase III efforts are briefly addressed.

  6. Technique for real-time frontal face image acquisition using stereo system

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.; Vizilter, Yuri V.; Kudryashov, Yuri I.

    2013-04-01

Most existing face recognition systems are based on two-dimensional images. The quality of recognition is rather high for frontal face images, but for other views it decreases significantly. For such systems to operate correctly, the effect of a change in the subject's pose (the camera angle) must be compensated. There are methods for transforming a 2D image of a person to a canonical orientation; their efficiency depends on the accuracy with which specific anthropometric points are determined, and problems can arise when the face is partly occluded. Another approach is to store a set of images of the person at different view angles for further processing, but the need to store and process a large number of two-dimensional images makes this method considerably time-consuming. The proposed technique uses a stereo system for fast generation of a 3D model of the face and for obtaining a face image in a given orientation from this 3D model. Real-time performance is achieved by implementing graph cut methods for 3D reconstruction of the face surface and applying the CUDA software library for parallel computation.

  7. Industrial Holography Combined With Image Processing

    NASA Astrophysics Data System (ADS)

    Schorner, J.; Rottenkolber, H.; Roid, W.; Hinsch, K.

    1988-01-01

Holographic test methods have become a valuable tool for engineers in research and development, and holographic test equipment is now also accepted for non-destructive quality control within the production line. Producers of aircraft tyres, for example, use holographic tests to back the guarantee of their tyres; together with image processing, the whole test cycle is automated, and defects within the tyre are found automatically and listed in a printed report. The power engine industry uses holographic vibration tests to optimize its designs. In the plastics industry, tanks, wheels, seats and fans are tested holographically to find the optimum shape. The automotive industry makes holography a tool for noise reduction. Instant holography and image processing techniques for quantitative analysis have led to economical application of holographic test methods. New developments of holographic units in combination with image processing are presented.

  8. DSP based image processing for retinal prosthesis.

    PubMed

    Parikh, Neha J; Weiland, James D; Humayun, Mark S; Shah, Saloni S; Mohile, Gaurav S

    2004-01-01

Real-time image processing in a retinal prosthesis consists of the implementation of various image processing algorithms such as edge detection, edge enhancement, and decimation. These computations can have a high level of complexity in real time, and hence the use of digital signal processors (DSPs) for implementing such algorithms is proposed here. This application requires DSPs that are highly computationally efficient while operating at low power. DSPs offer computational capabilities of hundreds of millions of instructions per second (MIPS) or millions of floating point operations per second (MFLOPS), with certain processor configurations having low power consumption. The various image processing algorithms, the DSP requirements, and the capabilities of different platforms are discussed in this paper. PMID:17271974
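
Edge detection is the first algorithm listed; a plain NumPy sketch of the standard 3×3 Sobel operator shows the kind of per-pixel workload the DSP must sustain in real time. This is a generic reference implementation, not the proposed DSP code:

```python
import numpy as np

def sobel_edges(img):
    """Edge-magnitude image from the standard 3x3 Sobel operators."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    # Replicate border pixels so the output keeps the input shape
    pad = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros(img.shape, dtype=float)
    gy = np.zeros(img.shape, dtype=float)
    # Accumulate the two directional gradients kernel-tap by kernel-tap
    for i in range(3):
        for j in range(3):
            patch = pad[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)
```

Each output pixel costs on the order of 18 multiply-accumulates, which is why sustained MIPS/MFLOPS throughput at low power is the deciding DSP metric for this application.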

  9. Three-dimensional image signals: processing methods

    NASA Astrophysics Data System (ADS)

    Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru

    2010-11-01

Over the years, extensive studies have been carried out to apply coherent-optics methods to real-time processing, communications and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. Recent progress in data-processing networks and communication systems has considerably increased the capacity of information exchange. We describe the results of a literature survey of processing methods for three-dimensional image signals. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. Images captured by an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses; this requires considerable processing power and memory to create and render the complex mix of colors, textures, and virtual lighting and perspective necessary to make figures appear three-dimensional. Also, using a standard digital camera and a technique called phase-shift interferometry, we can capture "digital holograms": holograms that can be stored on a computer and transmitted over conventional networks. We present methods to process digital holograms for Internet transmission, along with results.

  10. Support Routines for In Situ Image Processing

    NASA Technical Reports Server (NTRS)

    Deen, Robert G.; Pariser, Oleg; Yeates, Matthew C.; Lee, Hyun H.; Lorre, Jean

    2013-01-01

This software consists of a set of application programs that support ground-based image processing for in situ missions. These programs represent a collection of utility routines that perform miscellaneous functions in the context of the ground data system. Each one fulfills some specific need as determined via operational experience. The most distinctive aspect of these programs is that they are integrated into the large in situ image processing system via the PIG (Planetary Image Geometry) library. They work directly with space in situ data, understanding the appropriate image meta-data fields and updating them properly. The programs themselves are completely multimission; all mission dependencies are handled by PIG. This suite of programs consists of: (1) marscahv: Generates a linearized, epipolar-aligned image given a stereo pair of images. These images are optimized for 1-D stereo correlations, (2) marscheckcm: Compares the camera model in an image label with one derived via kinematics modeling on the ground, (3) marschkovl: Checks the overlaps between a list of images in order to determine which might be stereo pairs. This is useful for non-traditional stereo images like long-baseline or those from an articulating arm camera, (4) marscoordtrans: Translates mosaic coordinates from one form into another, (5) marsdispcompare: Checks a left-to-right stereo disparity image against a right-to-left disparity image to ensure they are consistent with each other, (6) marsdispwarp: Takes one image of a stereo pair and warps it through a disparity map to create a synthetic opposite-eye image. For example, a right eye image could be transformed to look like it was taken from the left eye via this program, (7) marsfidfinder: Finds fiducial markers in an image by projecting their approximate location and then using correlation to locate the markers to subpixel accuracy. These fiducial markers are small targets attached to the spacecraft surface. This helps verify, or improve, the
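
The warp performed by marsdispwarp can be pictured as a per-pixel lookup through the disparity map. A deliberately simplified sketch for horizontal-only, integer disparities; the real tool handles full 2-D disparity maps and the image meta-data described above:

```python
import numpy as np

def disparity_warp(right, disp):
    """Warp a right-eye image into a synthetic left-eye view.

    right : 2-D right-eye image
    disp  : horizontal disparity map in pixels, left -> right
    """
    h, w = right.shape
    out = np.zeros_like(right)
    for y in range(h):
        for x in range(w):
            # A left-image pixel at x sees the right-image pixel at x - disp
            xr = x - int(round(disp[y, x]))
            if 0 <= xr < w:
                out[y, x] = right[y, xr]
    return out
```

Comparing such a warped image against the actual opposite-eye image is one way to validate a disparity map, which is also the idea behind marsdispcompare.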

  11. Development of filter exchangeable 3CCD camera for multispectral imaging acquisition

    NASA Astrophysics Data System (ADS)

    Lee, Hoyoung; Park, Soo Hyun; Kim, Moon S.; Noh, Sang Ha

    2012-05-01

There are many methods of acquiring multispectral images, but a dynamically band-selective, area-scan multispectral camera has not yet been developed. This research focused on the development of a filter-exchangeable 3CCD camera modified from a conventional 3CCD camera. The camera consists of an F-mount lens, an image splitter without dichroic coating, three bandpass filters, three image sensors, a filter-exchangeable frame, and an electric circuit for parallel image signal processing; in addition, firmware and application software have been developed. Remarkable improvements compared to a conventional 3CCD camera are its redesigned image splitter and filter-exchangeable frame. Computer simulation is required to visualize the pathway of rays inside the prism when redesigning the image splitter; the dimensions of the splitter are then determined by a computer simulation with options of BK7 glass and non-dichroic coating. These properties have been chosen to obtain full-wavelength rays on all film planes. The image splitter is verified with two line lasers of narrow waveband. The filter-exchangeable frame is designed to allow swapping bandpass filters without displacing the image sensors on the film plane. The developed 3CCD camera is evaluated for detecting scab and bruise on Fuji apples. As a result, the filter-exchangeable 3CCD camera could provide meaningful functionality for various multispectral applications that require exchanging bandpass filters.

  12. Signal displacement in spiral-in acquisitions: simulations and implications for imaging in SFG regions.

    PubMed

    Brewer, Kimberly D; Rioux, James A; Klassen, Martyn; Bowen, Chris V; Beyea, Steven D

    2012-07-01

    Susceptibility field gradients (SFGs) cause problems for functional magnetic resonance imaging (fMRI) in regions like the orbital frontal lobes, leading to signal loss and image artifacts (signal displacement and "pile-up"). Pulse sequences with spiral-in k-space trajectories are often used when acquiring fMRI in SFG regions such as inferior/medial temporal cortex because it is believed that they have improved signal recovery and decreased signal displacement properties. Previously postulated theories explain differing reasons why spiral-in appears to perform better than spiral-out; however it is clear that multiple mechanisms are occurring in parallel. This study explores differences in spiral-in and spiral-out images using human and phantom empirical data, as well as simulations consistent with the phantom model. Using image simulations, the displacement of signal was characterized using point spread functions (PSFs) and target maps, the latter of which are conceptually inverse PSFs describing which spatial locations contribute signal to a particular voxel. The magnitude of both PSFs and target maps was found to be identical for spiral-out and spiral-in acquisitions, with signal in target maps being displaced from distant regions in both cases. However, differences in the phase of the signal displacement patterns that consequently lead to changes in the intervoxel phase coherence were found to be a significant mechanism explaining differences between the spiral sequences. The results demonstrate that spiral-in trajectories do preserve more total signal in SFG regions than spiral-out; however, spiral-in does not in fact exhibit decreased signal displacement. Given that this signal can be displaced by significant distances, its recovery may not be preferable for all fMRI applications. PMID:22503093

  13. Roughness Estimation from Point Clouds - A Comparison of Terrestrial Laser Scanning and Image Matching by Unmanned Aerial Vehicle Acquisitions

    NASA Astrophysics Data System (ADS)

    Rutzinger, Martin; Bremer, Magnus; Ragg, Hansjörg

    2013-04-01

    Recently, terrestrial laser scanning (TLS) and matching of images acquired by unmanned aerial vehicles (UAV) have come into operational use for 3D geodata acquisition in geoscience applications. However, the two systems cover different application domains in terms of acquisition conditions and data properties, i.e. accuracy and line of sight. In this study we investigate the major differences between the two platforms for terrain roughness estimation. Terrain roughness is an important input for various applications such as morphometry studies, geomorphologic mapping, and natural process modeling (e.g. rockfall, avalanche, and hydraulic modeling). Data were collected simultaneously by TLS using an Optech ILRIS3D and a rotary UAV, an octocopter from twins.nrn, for a 900 m² test site located in a riverbed in Tyrol, Austria (Judenbach, Mieming). The TLS point cloud was acquired from three scan positions, which were registered using an iterative closest point algorithm and a target-based referencing approach. For registration, geometric targets (spheres) with a diameter of 20 cm were used; these targets were measured with dGPS for absolute georeferencing. The TLS point cloud has an average point density of 19,000 pts/m², which corresponds to a point spacing of about 5 mm. 15 images were acquired by UAV at a height of 20 m using a calibrated camera with a focal length of 18.3 mm. A 3D point cloud containing RGB attributes was derived using the APERO/MICMAC software, by a direct georeferencing approach based on the aircraft IMU data. The point cloud was finally co-registered with the TLS data to guarantee an optimal preparation for the analysis. The UAV point cloud has an average point density of 17,500 pts/m², which corresponds to a point spacing of 7.5 mm. After registration and georeferencing, the level of detail of the roughness representation in both point clouds has been compared considering elevation differences, roughness and representation of different grain
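    A cell-based roughness estimator of the kind used to compare the two point clouds can be sketched as follows. The abstract does not state the exact estimator, so this sketch assumes one common definition: the standard deviation of elevations after removing each grid cell's best-fit plane. The function name and the cell size are illustrative.

```python
import numpy as np

def cell_roughness(points, cell_size=0.25):
    """Per-cell terrain roughness: std. dev. of elevations after removing
    the cell's least-squares plane (one common definition; the paper's
    exact estimator is not given in the abstract)."""
    xy = points[:, :2]
    z = points[:, 2]
    keys = np.floor(xy / cell_size).astype(int)
    roughness = {}
    for key in {tuple(k) for k in keys}:
        mask = np.all(keys == key, axis=1)
        if mask.sum() < 4:
            continue  # too few points to fit a plane robustly
        # Fit plane z = a*x + b*y + c by least squares, keep residual std.
        A = np.c_[xy[mask], np.ones(mask.sum())]
        coeffs, *_ = np.linalg.lstsq(A, z[mask], rcond=None)
        residuals = z[mask] - A @ coeffs
        roughness[key] = residuals.std()
    return roughness
```

    On a perfectly planar (even tilted) surface this returns roughness near zero per cell, while micro-topography shows up as the residual standard deviation, so the comparison of TLS and UAV clouds reduces to comparing these per-cell values.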

  14. Processing infrared images of aircraft lapjoints

    NASA Technical Reports Server (NTRS)

    Syed, Hazari; Winfree, William P.; Cramer, K. E.

    1992-01-01

    Techniques for processing IR images of aging-aircraft lapjoint data are discussed. Attention is given to a technique for detecting disbonds in aircraft lapjoints which clearly delineates the disbonded region from the bonded regions. The technique is less effective on unpainted aircraft skin surfaces, but this limitation can be overcome by using a self-adhering contact sheet. Neural network analysis of raw temperature data has been shown to be an effective tool for visualization of images. Numerical simulation results show the above processing technique to be effective in delineating the disbonds.

  15. Results of precision processing (scene correction) of ERTS-1 images using digital image processing techniques

    NASA Technical Reports Server (NTRS)

    Bernstein, R.

    1973-01-01

    ERTS-1 MSS and RBV data recorded on computer compatible tapes have been analyzed and processed, and preliminary results have been obtained. No degradation of intensity (radiance) information occurred in implementing the geometric correction. The quality and resolution of the digitally processed images are very good, due primarily to the fact that the number of film generations and conversions is reduced to a minimum. Processing times of digitally processed images are about equivalent to the NDPF electro-optical processor.

  16. Cardiac imaging in diagnostic VCT using multi-sector data acquisition and image reconstruction: step-and-shoot scan vs. helical scan

    NASA Astrophysics Data System (ADS)

    Tang, Xiangyang; Hsieh, Jiang; Seamans, John L.; Dong, Fang; Okerlund, Darin

    2008-03-01

    Since the advent of multi-slice CT, helical scan has played an increasingly important role in cardiac imaging. With the availability of diagnostic volumetric CT, step-and-shoot scan has been becoming popular recently. Step-and-shoot scan decouples patient table motion from heart beating, and thus the temporal window for data acquisition and image reconstruction can be optimized, resulting in significantly reduced radiation dose and improved tolerance to heart beat rate variation and inter-cycle cardiac motion inconsistency. Multi-sector data acquisition and image reconstruction have been utilized in helical cardiac imaging to improve temporal resolution, but suffer from the coupling of heart beating and patient table motion. Recognizing the clinical demands, the multi-sector data acquisition scheme for step-and-shoot scan is investigated in this paper. The most outstanding feature of multi-sector data acquisition combined with the step-and-shoot scan is the decoupling of patient table proceeding from heart beating, which offers the opportunity to employ prospective ECG-gating to improve dose efficiency and to finely adjust the cardiac imaging phase to suppress artifacts caused by inter-cycle cardiac motion inconsistency. The improvement in temporal resolution and the resultant suppression of motion artifacts are evaluated via motion phantoms driven by artificial ECG signals. Both theoretical analysis and experimental evaluation show promising results for the multi-sector data acquisition scheme to be employed with the step-and-shoot scan. With the ever-increasing gantry rotation speed and detector longitudinal coverage in state-of-the-art VCT scanners, it is expected that the step-and-shoot scan with multi-sector data acquisition scheme would play an increasingly important role in cardiac imaging using diagnostic VCT scanners.

  17. FLIPS: Friendly Lisp Image Processing System

    NASA Astrophysics Data System (ADS)

    Gee, Shirley J.

    1991-08-01

    The Friendly Lisp Image Processing System (FLIPS) is the interface to Advanced Target Detection (ATD), a multi-resolutional image analysis system developed by Hughes in conjunction with the Hughes Research Laboratories. Both menu- and graphics-driven, FLIPS enhances system usability by supporting the interactive nature of research and development. Although much progress has been made, fully automated image understanding technology that is both robust and reliable is not a reality. In situations where highly accurate results are required, skilled human analysts must still verify the findings of these systems. Furthermore, the systems often require processing times several orders of magnitude greater than that needed by veteran personnel to analyze the same image. The purpose of FLIPS is to facilitate the ability of an image analyst to take statistical measurements on digital imagery in a timely fashion, a capability critical in research environments where a large percentage of time is expended in algorithm development. In many cases, this entails minor modifications or code tinkering. Without a well-developed man-machine interface, throughput is unduly constricted. FLIPS provides mechanisms which support rapid prototyping for ATD. This paper examines the ATD/FLIPS system. The philosophy of ATD in addressing image understanding problems is described, and the capabilities of FLIPS are discussed, along with a description of the interaction between ATD and FLIPS. Finally, an overview of current plans for the system is outlined.

  18. Processing strategies and software solutions for data-independent acquisition in mass spectrometry.

    PubMed

    Bilbao, Aivett; Varesio, Emmanuel; Luban, Jeremy; Strambio-De-Castillia, Caterina; Hopfgartner, Gérard; Müller, Markus; Lisacek, Frédérique

    2015-03-01

    Data-independent acquisition (DIA) offers several advantages over data-dependent acquisition (DDA) schemes for characterizing complex protein digests analyzed by LC-MS/MS. In contrast to the sequential detection, selection, and analysis of individual ions during DDA, DIA systematically parallelizes the fragmentation of all detectable ions within a wide m/z range regardless of intensity, thereby providing broader dynamic range of detected signals, improved reproducibility for identification, better sensitivity, and accuracy for quantification, and, potentially, enhanced proteome coverage. To fully exploit these advantages, composite or multiplexed fragment ion spectra generated by DIA require more elaborate processing algorithms compared to DDA. This review examines different DIA schemes and, in particular, discusses the concepts applied to and related to data processing. Available software implementations for identification and quantification are presented as comprehensively as possible and examples of software usage are cited. Processing workflows, including complete proprietary frameworks or combinations of modules from different open source data processing packages are described and compared in terms of software availability and usability, programming language, operating system support, input/output data formats, as well as the main principles employed in the algorithms used for identification and quantification. This comparative study concludes with further discussion of current limitations and expectable improvements in the short- and midterm future. PMID:25430050

  19. Probabilistic Round Trip Contamination Analysis of a Mars Sample Acquisition and Handling Process Using Markovian Decompositions

    NASA Technical Reports Server (NTRS)

    Hudson, Nicolas; Lin, Ying; Barengoltz, Jack

    2010-01-01

    A method for evaluating the probability of a Viable Earth Microorganism (VEM) contaminating a sample during the sample acquisition and handling (SAH) process of a potential future Mars Sample Return mission is developed. A scenario where multiple core samples would be acquired using a rotary percussive coring tool, deployed from an arm on a MER-class rover, is analyzed. The analysis is conducted in a structured way by decomposing the sample acquisition and handling process into a series of discrete time steps, and breaking the physical system into a set of relevant components. At each discrete time step, two key functions are defined: the probability of a VEM being released from each component, and the transport matrix, which represents the probability of VEM transport from one component to another. By defining the expected number of VEMs on each component at the start of the sampling process, these decompositions allow the expected number of VEMs on each component at each sampling step to be represented as a Markov chain. This formalism provides a rigorous mathematical framework in which to analyze the probability of a VEM entering the sample chain, as well as making the analysis tractable by breaking the process down into small analyzable steps.
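    The decomposition described above, per-component release probabilities plus a transport matrix at each discrete step, can be sketched as a Markov-chain update on expected VEM counts. All component names and numbers in the usage below are hypothetical, not values from the study.

```python
import numpy as np

def propagate_vems(n0, release, transport, steps):
    """Propagate expected VEM counts across components over discrete steps.

    n0        -- expected VEM count on each component at the start
    release   -- per-step probability that a VEM is released from each component
    transport -- transport[i, j]: probability a released VEM moves from
                 component i to component j (row sums <= 1; any deficit
                 is loss, e.g. VEMs lost to the environment)
    """
    n = np.asarray(n0, dtype=float)
    history = [n.copy()]
    for _ in range(steps):
        released = n * release              # expected VEMs leaving each component
        n = n - released + released @ transport
        history.append(n.copy())            # Markov update: next state depends
    return history                          # only on the current one
```

    For example, with three hypothetical components (coring tool, sample, container), 100 expected VEMs starting on the tool, a 10% per-step release probability from the tool, and all released VEMs transported to the sample, one step moves 10 expected VEMs onto the sample.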

  20. The effect of acquisition interval and spatial resolution on dynamic cardiac imaging with a stationary SPECT camera.

    PubMed

    Roberts, J; Maddula, R; Clackdoyle, R; DiBella, E; Fu, Z

    2007-08-01

    The current SPECT scanning paradigm that acquires images by slow rotation of multiple detectors in body-contoured orbits around the patient is not suited to the rapid collection of tomographically complete data. During rapid image acquisition, mechanical and patient safety constraints limit the detector orbit to circular paths at increased distances from the patient, resulting in decreased spatial resolution. We consider a novel dynamic rotating slant-hole (DyRoSH) SPECT camera that can collect full tomographic data every 2 s, employing three stationary detectors mounted with slant-hole collimators that rotate at 30 rpm. Because the detectors are stationary, they can be placed much closer to the patient than is possible with conventional SPECT systems. We propose that the decoupling of the detector position from the mechanics of rapid image acquisition offers an additional degree of freedom which can be used to improve accuracy in measured kinetic parameter estimates. With simulations and list-mode reconstructions, we consider the effects of different acquisition intervals on dynamic cardiac imaging, comparing a conventional three detector SPECT system with the proposed DyRoSH SPECT system. Kinetic parameters of a two-compartment model of myocardial perfusion for technetium-99m-teboroxime were estimated. When compared to a conventional SPECT scanner for the same acquisition periods, the proposed DyRoSH system shows equivalent or reduced bias or standard deviation values for the kinetic parameter estimates. The DyRoSH camera with a 2 s acquisition period does not show any improvement compared to a DyRoSH camera with a 10 s acquisition period. PMID:17634648
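    The two-compartment perfusion model whose kinetic parameters are estimated in such studies has the standard one-tissue form dC_t/dt = K1·C_b(t) − k2·C_t(t). A minimal simulation sketch, using forward Euler integration, is shown below; the study's actual estimation runs on list-mode reconstructions, and the exact teboroxime model variant may differ.

```python
import numpy as np

def myocardial_tac(blood_curve, k1, k2, dt):
    """Tissue time-activity curve from a two-compartment (one-tissue) model:
        dC_t/dt = K1 * C_b(t) - k2 * C_t(t)
    integrated with simple forward Euler; K1 and k2 are the kinetic
    parameters estimated from the dynamic SPECT data."""
    ct = np.zeros(len(blood_curve), dtype=float)
    for i in range(1, len(blood_curve)):
        ct[i] = ct[i - 1] + dt * (k1 * blood_curve[i - 1] - k2 * ct[i - 1])
    return ct
```

    With a constant blood input the tissue curve rises monotonically toward the equilibrium value K1/k2, which is the behavior against which estimated parameters can be sanity-checked.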

  1. How to crack nuts: acquisition process in captive chimpanzees (Pan troglodytes) observing a model.

    PubMed

    Hirata, Satoshi; Morimura, Naruki; Houki, Chiharu

    2009-10-01

    Stone tool use for nut cracking consists of placing a hard-shelled nut onto a stone anvil and then cracking the shell open by pounding it with a stone hammer to get to the kernel. We investigated the acquisition of tool use for nut cracking in a group of captive chimpanzees to clarify what kind of understanding of the tools and actions will lead to the acquisition of this type of tool use in the presence of a skilled model. A human experimenter trained a male chimpanzee until he mastered the use of a hammer and anvil stone to crack open macadamia nuts. He was then put in a nut-cracking situation together with his group mates, who were naïve to this tool use; we did not have a control group without a model. The results showed that the process of acquisition could be broken down into several steps, including recognition of applying pressure to the nut, emergence of the use of a combination of three objects, emergence of the hitting action, use of a tool for hitting, and hitting the nut. The chimpanzees recognized these different components separately and practiced them one after another. They gradually united these factors in their behavior leading to their first success. Their behavior did not clearly improve immediately after observing successful nut cracking by a peer, but observation of a skilled group member seemed to have a gradual, long-term influence on the acquisition of nut cracking by naïve chimpanzees. PMID:19727866

  2. Product review: lucis image processing software.

    PubMed

    Johnson, J E

    1999-04-01

    Lucis is a software program that allows the manipulation of images through the process of selective contrast pattern emphasis. Using an image-processing algorithm called Differential Hysteresis Processing (DHP), Lucis extracts and highlights patterns based on variations in image intensity (luminance). The result is that details can be seen that would otherwise be hidden in deep shadow or excessive brightness. The software is contained on a single floppy disk, is easy to install on a PC, simple to use, and runs on Windows 95, Windows 98, and Windows NT operating systems. The cost is $8,500 for a license, but is estimated to save a great deal of money in photographic materials, time, and labor that would have otherwise been spent in the darkroom. Superb images are easily obtained from unstained (no lead or uranium) sections, and stored image files sent to laser printers are of publication quality. The software can be used not only for all types of microscopy, including color fluorescence light microscopy, biological and materials science electron microscopy (TEM and SEM), but will be beneficial in medicine, such as X-ray films (pending approval by the FDA), and in the arts. PMID:10206154

  3. Processing Images of Craters for Spacecraft Navigation

    NASA Technical Reports Server (NTRS)

    Cheng, Yang; Johnson, Andrew E.; Matthies, Larry H.

    2009-01-01

    A crater-detection algorithm has been conceived to enable automation of what, heretofore, have been manual processes for utilizing images of craters on a celestial body as landmarks for navigating a spacecraft flying near or landing on that body. The images are acquired by an electronic camera aboard the spacecraft, then digitized, then processed by the algorithm, which consists mainly of the following steps: 1. Edges in an image are detected and placed in a database. 2. Crater rim edges are selected from the edge database. 3. Edges that belong to the same crater are grouped together. 4. An ellipse is fitted to each group of crater edges. 5. Ellipses are refined directly in the image domain to reduce errors introduced in the detection of edges and fitting of ellipses. 6. The quality of each detected crater is evaluated. It is planned to utilize this algorithm as the basis of a computer program for automated, real-time, onboard processing of crater-image data. Experimental studies have led to the conclusion that this algorithm is capable of a detection rate >93 percent, a false-alarm rate <5 percent, a geometric error <0.5 pixel, and a position error <0.3 pixel.
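    Step 4 fits an ellipse to each group of crater rim edges. As a simplified, runnable stand-in, the least-squares (Kåsa) circle fit below shows the linear normal-equation pattern involved; the flight algorithm's full conic (ellipse) fit and the sub-pixel refinement of step 5 are not reproduced here.

```python
import numpy as np

def fit_circle(xs, ys):
    """Least-squares (Kasa) circle fit to rim edge points.

    Rearranging (x-cx)^2 + (y-cy)^2 = r^2 gives the linear system
    x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2),
    solved directly with one lstsq call."""
    A = np.c_[2 * xs, 2 * ys, np.ones(len(xs))]
    b = xs**2 + ys**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r
```

    An ellipse fit follows the same pattern with a five-parameter conic model in place of the three-parameter circle; the circle version suffices to recover center and radius from noisy or partial rim edges.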

  4. Onboard Image Processing System for Hyperspectral Sensor.

    PubMed

    Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun

    2015-01-01

    Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large volume and high speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed, and is implemented in the onboard correction circuitry of sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors in order to maximize the compression ratio. The employed image compression method is based on Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve FELICS's performance of image decorrelation and entropy coding, we apply a two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while still maintaining superior performance in terms of speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which leads to reduced onboard hardware by multiplexing sensor signals into a reduced number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, and fabrication cost. PMID:26404281
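    The prediction-plus-Golomb-Rice structure described above can be sketched as follows. The flight system's two-dimensional interpolation predictor and adaptive parameter selection are replaced here by a trivial previous-sample predictor and a fixed Rice parameter k, so this shows only the building blocks, not the actual onboard coder.

```python
def golomb_rice_encode(value, k):
    """Golomb-Rice code for a non-negative integer: unary quotient
    (value >> k) terminated by '0', then k binary remainder bits."""
    q, r = value >> k, value & ((1 << k) - 1)
    code = "1" * q + "0"
    if k:
        code += format(r, f"0{k}b")
    return code

def encode_residuals(samples, k=2):
    """Rice-code prediction residuals after a previous-sample predictor
    and zig-zag mapping of signed residuals to non-negative integers."""
    codes = []
    prev = 0
    for s in samples:
        resid = s - prev
        prev = s
        mapped = 2 * resid if resid >= 0 else -2 * resid - 1  # zig-zag
        codes.append(golomb_rice_encode(mapped, k))
    return codes
```

    Because decorrelated sensor data concentrate small residuals, short codes dominate; the adaptive version picks k per context so that the expected code length stays near the residual entropy.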

  5. Onboard Image Processing System for Hyperspectral Sensor

    PubMed Central

    Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun

    2015-01-01

    Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large volume and high speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed, and is implemented in the onboard correction circuitry of sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors in order to maximize the compression ratio. The employed image compression method is based on Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve FELICS’s performance of image decorrelation and entropy coding, we apply a two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while still maintaining superior performance in terms of speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which leads to reduced onboard hardware by multiplexing sensor signals into a reduced number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, and fabrication cost. PMID:26404281

  6. FITSH: Software Package for Image Processing

    NASA Astrophysics Data System (ADS)

    Pál, András

    2011-11-01

    FITSH provides a standalone environment for analysis of data acquired by imaging astronomical detectors. The package provides utilities both for the full pipeline of subsequent related data processing steps (including image calibration, astrometry, source identification, photometry, differential analysis, low-level arithmetic operations, multiple image combinations, spatial transformations and interpolations, etc.) and for aiding the interpretation of the (mainly photometric and/or astrometric) results. The package also features a consistent implementation of photometry based on image subtraction, point spread function fitting and aperture photometry, and provides easy-to-use interfaces for comparisons and for picking the most suitable method for a particular problem. The utilities in the package are built on top of the commonly used UNIX/POSIX shells (hence the name of the package), therefore both frequently used and well-documented tools for such environments can be exploited, and managing massive amounts of data is rather convenient.

  7. Simplified labeling process for medical image segmentation.

    PubMed

    Gao, Mingchen; Huang, Junzhou; Huang, Xiaolei; Zhang, Shaoting; Metaxas, Dimitris N

    2012-01-01

    Image segmentation plays a crucial role in many medical imaging applications by automatically locating the regions of interest. Typically, supervised-learning-based segmentation methods require a large set of accurately labeled training data. However, the labeling process is tedious, time consuming and sometimes not necessary. We propose a robust logistic regression algorithm to handle label outliers such that doctors do not need to waste time on precisely labeling images for the training set. To validate its effectiveness and efficiency, we conduct carefully designed experiments on cervigram image segmentation in the presence of label outliers. Experimental results show that the proposed robust logistic regression algorithm achieves superior performance compared to previous methods, which validates the benefits of the proposed algorithm. PMID:23286072
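    The abstract does not specify the paper's robust formulation, but one common way to make logistic regression tolerant of label outliers is a label-flip noise model, sketched here with hypothetical data. The parameter names, eps, learning rate, and the plain gradient-ascent loop are all illustrative, not the authors' method.

```python
import numpy as np

def robust_logreg(X, y, eps=0.1, lr=0.5, iters=2000):
    """Logistic regression with a label-flip noise model:
    P(y=1|x) = eps + (1 - 2*eps) * sigmoid(w.x), so a single mislabeled
    point cannot drive the log-likelihood to -infinity (one common
    robustification; the paper's exact formulation may differ)."""
    Xb = np.c_[X, np.ones(len(X))]          # append bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        s = 1.0 / (1.0 + np.exp(-Xb @ w))
        p = eps + (1.0 - 2.0 * eps) * s     # noise-floored probability
        # Gradient of the log-likelihood w.r.t. w (ascent step)
        dz = (y / p - (1 - y) / (1 - p)) * (1 - 2 * eps) * s * (1 - s)
        w += lr * Xb.T @ dz / len(X)
    return w

def predict(w, X):
    Xb = np.c_[X, np.ones(len(X))]
    return (Xb @ w > 0).astype(int)
```

    Because the predicted probability is floored at eps, the gradient contribution of a grossly mislabeled pixel saturates instead of growing without bound, which is what lets imprecise training labels be tolerated.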

  8. MATHEMATICAL METHODS IN MEDICAL IMAGE PROCESSING

    PubMed Central

    ANGENENT, SIGURD; PICHON, ERIC; TANNENBAUM, ALLEN

    2013-01-01

    In this paper, we describe some central mathematical problems in medical imaging. The subject has been undergoing rapid changes driven by better hardware and software. Much of the software is based on novel methods utilizing geometric partial differential equations in conjunction with standard signal/image processing techniques as well as computer graphics facilitating man/machine interactions. As part of this enterprise, researchers have been trying to base biomedical engineering principles on rigorous mathematical foundations for the development of software methods to be integrated into complete therapy delivery systems. These systems support the more effective delivery of many image-guided procedures such as radiation therapy, biopsy, and minimally invasive surgery. We will show how mathematics may impact some of the main problems in this area, including image enhancement, registration, and segmentation. PMID:23645963

  9. Enhanced neutron imaging detector using optical processing

    SciTech Connect

    Hutchinson, D.P.; McElhaney, S.A.

    1992-08-01

    Existing neutron imaging detectors have limited count rates due to inherent physical and electronic limitations. The popular multiwire proportional counter is limited by gas recombination to a count rate of less than 10{sup 5} n/s over the entire array, and the neutron Anger camera, even though improved with new fiber optic encoding methods, can only achieve 10{sup 6} cps over a limited array. We present a preliminary design for a new type of neutron imaging detector with a resolution of 2--5 mm and a count rate capability of 10{sup 6} cps per pixel element. We propose to combine optical and electronic processing to economically increase the throughput of advanced detector systems while simplifying computing requirements. By placing a scintillator screen ahead of an optical image processor followed by a detector array, a high throughput imaging detector may be constructed.

  10. Mariner 9 - Image processing and products.

    NASA Technical Reports Server (NTRS)

    Levinthal, E. C.; Green, W. B.; Cutts, J. A.; Jahelka, E. D.; Johansen, R. A.; Sander, M. J.; Seidman, J. B.; Young, A. T.; Soderblom, L. A.

    1972-01-01

    The purpose of this paper is to describe the system for the display, processing, and production of image data products created to support the Mariner 9 Television Experiment. Of necessity, the system was large in order to respond to the needs of a large team of scientists with a broad scope of experimental objectives. The desire to generate processed data products as rapidly as possible to take advantage of adaptive planning during the mission, coupled with the complexities introduced by the nature of the vidicon camera, greatly increased the scale of the ground image processing effort. This paper describes the systems that carried out the processes and delivered the products necessary for real-time and near-real-time analyses. References are made to the computer algorithms used for the different levels of decalibration and analysis.

  11. Mariner 9 - Image processing and products.

    NASA Technical Reports Server (NTRS)

    Levinthal, E. C.; Green, W. B.; Cutts, J. A.; Jahelka, E. D.; Johansen, R. A.; Sander, M. J.; Seidman, J. B.; Young, A. T.; Soderblom, L. A.

    1973-01-01

    The purpose of this paper is to describe the system for the display, processing, and production of image-data products created to support the Mariner 9 Television Experiment. Of necessity, the system was large in order to respond to the needs of a large team of scientists with a broad scope of experimental objectives. The desire to generate processed data products as rapidly as possible, coupled with the complexities introduced by the nature of the vidicon camera, greatly increased the scale of the ground-image processing effort. This paper describes the systems that carried out the processes and delivered the products necessary for real-time and near-real-time analyses. References are made to the computer algorithms used for the different levels of decalibration and analysis.

  12. Mariner 9-Image processing and products

    USGS Publications Warehouse

    Levinthal, E.C.; Green, W.B.; Cutts, J.A.; Jahelka, E.D.; Johansen, R.A.; Sander, M.J.; Seidman, J.B.; Young, A.T.; Soderblom, L.A.

    1973-01-01

    The purpose of this paper is to describe the system for the display, processing, and production of image-data products created to support the Mariner 9 Television Experiment. Of necessity, the system was large in order to respond to the needs of a large team of scientists with a broad scope of experimental objectives. The desire to generate processed data products as rapidly as possible to take advantage of adaptive planning during the mission, coupled with the complexities introduced by the nature of the vidicon camera, greatly increased the scale of the ground-image processing effort. This paper describes the systems that carried out the processes and delivered the products necessary for real-time and near-real-time analyses. References are made to the computer algorithms used for the different levels of decalibration and analysis.

  13. Memory acquisition and retrieval impact different epigenetic processes that regulate gene expression

    PubMed Central

    2015-01-01

    Background A fundamental question in neuroscience is how memories are stored and retrieved in the brain. Long-term memory formation requires transcription, translation and epigenetic processes that control gene expression. Thus, characterizing genome-wide the transcriptional changes that occur after memory acquisition and retrieval is of broad interest and importance. Genome-wide technologies are commonly used to interrogate transcriptional changes in discovery-based approaches. Their ability to increase scientific insight beyond traditional candidate gene approaches, however, is usually hindered by batch effects and other sources of unwanted variation, which are particularly hard to control in the study of brain and behavior. Results We examined genome-wide gene expression after contextual conditioning in the mouse hippocampus, a brain region essential for learning and memory, at all the time-points in which inhibiting transcription has been shown to impair memory formation. We show that most of the variance in gene expression is not due to conditioning and that by removing unwanted variance through additional normalization we are able to provide novel biological insights. In particular, we show that genes downregulated by memory acquisition and retrieval impact different functions: chromatin assembly and RNA processing, respectively. Levels of histone 2A variant H2AB are reduced only following acquisition, a finding we confirmed using quantitative proteomics. On the other hand, splicing factor Rbfox1 and NMDA receptor-dependent microRNA miR-219 are only downregulated after retrieval, accompanied by an increase in protein levels of miR-219 target CAMKIIγ. Conclusions We provide a thorough characterization of coding and non-coding gene expression during long-term memory formation. We demonstrate that unwanted variance dominates the signal in transcriptional studies of learning and memory and introduce the removal of unwanted variance through normalization as a

  14. Web-based document image processing

    NASA Astrophysics Data System (ADS)

    Walker, Frank L.; Thoma, George R.

    1999-12-01

    Increasing numbers of research libraries are turning to the Internet for electronic interlibrary loan and for document delivery to patrons. This has been made possible through the widespread adoption of software such as Ariel and DocView. Ariel, a product of the Research Libraries Group, converts paper-based documents to monochrome bitmapped images and delivers them over the Internet. The National Library of Medicine's DocView is primarily designed for library patrons. Although patrons are beginning to reap the benefits of this new technology, barriers exist, e.g., differences in image file format, that lead to difficulties in the use of library document information. To research how to overcome such barriers, the Communications Engineering Branch of the Lister Hill National Center for Biomedical Communications, an R and D division of NLM, has developed a web site called the DocMorph Server. This is part of an ongoing intramural R and D program in document imaging that has spanned many aspects of electronic document conversion and preservation, Internet document transmission and document usage. The DocMorph Server Web site is designed to fill two roles. First, in a role that will benefit both libraries and their patrons, it allows Internet users to upload scanned image files for conversion to alternative formats, thereby enabling wider delivery and easier usage of library document information. Second, the DocMorph Server provides the design team an active test bed for evaluating the effectiveness and utility of new document image processing algorithms and functions, so that they may be evaluated for possible inclusion in other image processing software products being developed at NLM or elsewhere. This paper describes the design of the prototype DocMorph Server and the image processing functions being implemented on it.

  15. Digital image processing of vascular angiograms

    NASA Technical Reports Server (NTRS)

    Selzer, R. H.; Beckenbach, E. S.; Blankenhorn, D. H.; Crawford, D. W.; Brooks, S. H.

    1975-01-01

    The paper discusses the estimation of the degree of atherosclerosis in the human femoral artery through the use of a digital image processing system for vascular angiograms. The film digitizer uses an electronic image dissector camera to scan the angiogram and convert the recorded optical density information into a numerical format. Another processing step involves locating the vessel edges from the digital image. The computer has been programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements are combined into an atherosclerosis index, which is found in a post-mortem study to correlate well with both visual and chemical estimates of atherosclerotic disease.

  16. Polarization information processing and software system design for simultaneously imaging polarimetry

    NASA Astrophysics Data System (ADS)

    Wang, Yahui; Liu, Jing; Jin, Weiqi; Wen, Renjie

    2015-08-01

    Simultaneous imaging polarimetry can realize real-time polarization imaging of a dynamic scene, which has wide application prospects. This paper first briefly illustrates the design of the double separate Wollaston prism simultaneous imaging polarimeter, and then emphasis is placed on the polarization information processing methods and software system design for the designed polarimeter. The polarization information processing methods consist of adaptive image segmentation, high-accuracy image registration and instrument matrix calibration. Morphological image processing (dilation) is used for image segmentation; image registration accuracy reaches 0.1 pixel based on spatial- and frequency-domain cross-correlation; instrument matrix calibration adopts a four-point calibration method. The software system was implemented under Windows in C++, and realizes synchronous polarization image acquisition and preservation, image processing, and polarization information extraction and display. Polarization data obtained with the designed polarimeter show that the processing methods and software system effectively realize live polarization measurement of the four Stokes parameters of a scene and improve the polarization detection accuracy.
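    The Stokes parameters are recovered from the simultaneously acquired channel intensities through the calibrated instrument matrix. As a minimal sketch, assuming four ideal linear analyzers at 0, 45, 90 and 135 degrees (which recovers only the linear components S0-S2; the circular component S3 requires a retarder channel, and calibration replaces this ideal matrix with a measured one):

    ```python
    import numpy as np

    # Instrument matrix for four ideal linear analyzers at 0, 45, 90 and 135 deg.
    # Channel k sees I_k = 0.5 * (S0 + S1*cos(2*theta_k) + S2*sin(2*theta_k)).
    angles = np.deg2rad([0.0, 45.0, 90.0, 135.0])
    A = 0.5 * np.stack([np.ones_like(angles),
                        np.cos(2 * angles),
                        np.sin(2 * angles)], axis=1)

    def stokes_from_intensities(intensities, instrument_matrix=A):
        """Recover the linear Stokes components [S0, S1, S2] from the four
        simultaneously acquired channel intensities by least squares."""
        s, *_ = np.linalg.lstsq(instrument_matrix,
                                np.asarray(intensities, float), rcond=None)
        return s

    # Round trip on a simulated, partially linearly polarized beam.
    s_true = np.array([1.0, 0.3, -0.2])
    intensities = A @ s_true
    s_est = stokes_from_intensities(intensities)
    print(np.round(s_est, 6))
    ```

    The least-squares solve also averages out channel noise when the four measurements are redundant for the three linear unknowns.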

  17. Progressive band processing for hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Schultz, Robert C.

    Hyperspectral imaging has emerged as an image processing technique in many applications. Hyperspectral data is so called mainly because of the massive amount of information provided by the hundreds of spectral bands that can be used for data analysis. However, due to very high band-to-band correlation, much of this information may also be redundant. Consequently, how to effectively and best utilize such rich spectral information becomes very challenging. One general approach is data dimensionality reduction, which can be performed by data compression techniques, such as data transforms, and data reduction techniques, such as band selection. This dissertation presents a new area in hyperspectral imaging, to be called progressive hyperspectral imaging, which has not been explored in the past. Specifically, it derives a new theory, called Progressive Band Processing (PBP), of hyperspectral data that can significantly reduce computing time and can also be realized in real time. It is particularly suited for application areas such as hyperspectral data communications and transmission, where data can be communicated and transmitted progressively through spectral or satellite channels with limited data storage. Most importantly, PBP allows users to screen preliminary results before deciding to continue with processing the complete data set. These advantages benefit users of hyperspectral data by reducing processing time and increasing the timeliness of crucial decisions made based on the data, such as identifying key intelligence information when a required response time is short.
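    The abstract does not reproduce the PBP recursions, but the progressive idea can be illustrated with a toy band-recursive statistic: each arriving band updates the previous estimate, so a preliminary result exists after every band rather than only after the full cube is received. The running mean below is an illustrative stand-in, not the dissertation's actual detector update.

    ```python
    import numpy as np

    def progressive_band_mean(bands):
        """Yield a running per-pixel spectral mean as bands arrive one at a time.

        Illustrates the progressive principle: a recursive update makes a
        preliminary result available after every band, with O(pixels) work
        per band and no need to store the full cube."""
        mean = None
        for k, band in enumerate(bands, start=1):
            if mean is None:
                mean = band.astype(float).copy()
            else:
                mean += (band - mean) / k   # recursive mean update
            yield mean.copy()

    rng = np.random.default_rng(0)
    cube = rng.random((5, 4, 4))            # 5 bands of a 4x4 scene
    estimates = list(progressive_band_mean(cube))
    print(np.allclose(estimates[-1], cube.mean(axis=0)))
    ```

    After the last band the progressive estimate agrees with the batch computation, but any intermediate estimate could already have been screened.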

  18. Stochastic processes, estimation theory and image enhancement

    NASA Technical Reports Server (NTRS)

    Assefi, T.

    1978-01-01

    An introductory account of stochastic processes, estimation theory, and image enhancement is presented. The book is primarily intended for first-year graduate students and practicing engineers and scientists whose work requires an acquaintance with the theory. Fundamental concepts of probability that are required to support the main topics are reviewed, and the appendices discuss the remaining mathematical background.

  19. Improving Synthetic Aperture Image by Image Compounding in Beamforming Process

    NASA Astrophysics Data System (ADS)

    Martínez-Graullera, Oscar; Higuti, Ricardo T.; Martín, Carlos J.; Ullate, Luis G.; Romero, David; Parrilla, Montserrat

    2011-06-01

    In this work, signal processing techniques are used to improve the quality of images based on multi-element synthetic aperture techniques. Using several apodization functions to obtain different side lobe distributions, a polarity function and a threshold criterion are used to develop an image compounding technique. The spatial diversity is increased using an additional array, which generates complementary information about the defects, improving the results of the proposed algorithm and producing high resolution and contrast images. The inspection of isotropic plate-like structures using linear arrays and Lamb waves is presented. Experimental results are shown for a 1-mm-thick isotropic aluminum plate with artificial defects using linear arrays formed by 30 piezoelectric elements, with the low-dispersion symmetric mode S0 at a frequency of 330 kHz.
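    A drastically simplified sketch of the compounding idea: images beamformed with different apodization windows share their main lobes but place side lobes differently, so a pixel-wise minimum across the envelope images suppresses side-lobe artefacts. The paper's method additionally uses a polarity function and a threshold criterion; the plain minimum below is only an illustrative reduction of that idea, and the arrays are hypothetical envelope images.

    ```python
    import numpy as np

    def compound_min(images):
        """Pixel-wise minimum across envelope images beamformed with different
        apodizations: true reflectors survive (present in all images), while
        side lobes, which fall in different pixels per apodization, are cut."""
        return np.min(np.stack([np.abs(im) for im in images]), axis=0)

    a = np.array([[1.0, 0.4], [0.0, 1.0]])   # hypothetical envelope image 1
    b = np.array([[1.0, 0.0], [0.3, 1.0]])   # side lobes land in other pixels
    print(compound_min([a, b]))
    ```

    The two defects on the diagonal are kept at full amplitude while the non-overlapping side-lobe responses are removed.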

  20. Limiting liability via high resolution image processing

    SciTech Connect

    Greenwade, L.E.; Overlin, T.K.

    1996-12-31

    The utilization of high resolution image processing allows forensic analysts and visualization scientists to assist detectives by enhancing field photographs, and by providing the tools and training to increase the quality and usability of field photos. Through the use of digitized photographs and computerized enhancement software, field evidence can be obtained and processed as `evidence ready`, even in poor lighting and shadowed conditions or darkened rooms. These images, which are most often unusable when taken with standard camera equipment, can be shot in the worst of photographic conditions and be processed as usable evidence. Visualization scientists have taken digital photographic image processing and moved crime scene photography into the technology age. The use of high resolution technology will assist law enforcement in making better use of crime scene photography and positive identification of prints. Valuable courtroom and investigation time can be saved and better served by this accurate, performance based process. Inconclusive evidence does not lead to convictions. Enhanced photographic capability helps solve a major problem with crime scene photos: images that, if taken with standard equipment and without the benefit of enhancement software, would be inconclusive, allowing guilty parties to go free for lack of evidence.

  1. Novel ultrahigh resolution data acquisition and image reconstruction for multi-detector row CT

    SciTech Connect

    Flohr, T. G.; Stierstorfer, K.; Suess, C.; Schmidt, B.; Primak, A. N.; McCollough, C. H.

    2007-05-15

    We present and evaluate a special ultrahigh resolution mode providing considerably enhanced spatial resolution both in the scan plane and in the z-axis direction for a routine medical multi-detector row computed tomography (CT) system. Data acquisition is performed by using a flying focal spot both in the scan plane and in the z-axis direction in combination with tantalum grids that are inserted in front of the multi-row detector to reduce the aperture of the detector elements both in-plane and in the z-axis direction. The dose utilization of the system for standard applications is not affected, since the grids are moved into place only when needed and are removed for standard scanning. By means of this technique, image slices with a nominal section width of 0.4 mm (measured full width at half maximum=0.45 mm) can be reconstructed in spiral mode on a CT system with a detector configuration of 32x0.6 mm. The measured 2% value of the in-plane modulation transfer function (MTF) is 20.4 lp/cm, the measured 2% value of the longitudinal (z axis) MTF is 21.5 lp/cm. In a resolution phantom with metal line pair test patterns, spatial resolution of 20 lp/cm can be demonstrated both in the scan plane and along the z axis. This corresponds to an object size of 0.25 mm that can be resolved. The new mode is intended for ultrahigh resolution bone imaging, in particular for wrists, joints, and inner ear studies, where a higher level of image noise due to the reduced aperture is an acceptable trade-off for the clinical benefit brought about by the improved spatial resolution.
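    The quoted correspondence between 20 lp/cm and a 0.25 mm resolvable object follows from one line pair being one bar plus one gap:

    ```python
    # Resolution in line pairs per cm maps to the smallest resolvable bar width:
    # one line pair = one bar + one gap, so bar width = 1 / (2 * lp_per_cm) cm.
    def resolvable_object_mm(lp_per_cm):
        return 10.0 / (2.0 * lp_per_cm)   # factor 10 converts cm to mm

    print(resolvable_object_mm(20.0))  # 0.25 mm, matching the quoted figure
    ```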

  2. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    SciTech Connect

    van den Engh, Gerrit J.; Stokdijk, Willem

    1992-01-01

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate.
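    The architecture can be mimicked in a toy software model (the class and function names below are hypothetical; the real system does this with hardware FIFOs and a bus controller): each channel buffers digitized pulses tagged with a trigger-generated ID, and a bus read pops the oldest entry from every channel while an ID comparison plays the role of the error-detection circuit.

    ```python
    from collections import deque

    class ChannelFIFO:
        """One acquisition channel: buffers digitized pulses with a trigger ID."""
        def __init__(self):
            self.fifo = deque()

        def digitize(self, trigger_id, pulse_height):
            self.fifo.append((trigger_id, pulse_height))

    def bus_read(channels):
        """Move the oldest entry from every channel onto the 'bus'; verify that
        all channels report the same trigger ID (error-detection analogue)."""
        entries = [ch.fifo.popleft() for ch in channels]
        ids = {trigger_id for trigger_id, _ in entries}
        if len(ids) != 1:
            raise RuntimeError(f"channel desynchronization: IDs {sorted(ids)}")
        return entries

    channels = [ChannelFIFO() for _ in range(3)]
    for event_id in range(2):              # trigger fires: all channels digitize
        for ch in channels:
            ch.digitize(event_id, pulse_height=0.5 + event_id)
    print(bus_read(channels))              # oldest event (ID 0) from each channel
    ```

    Because digitization is decoupled from the bus transfer by the FIFOs, channels can accept the next event while earlier ones drain, which is the source of the system's speed.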

  3. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    DOEpatents

    Engh, G.J. van den; Stokdijk, W.

    1992-09-22

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate. 17 figs.

  4. Processing Infrared Images For Fire Management Applications

    NASA Astrophysics Data System (ADS)

    Warren, John R.; Pratt, William K.

    1981-12-01

    The USDA Forest Service has used airborne infrared systems for forest fire detection and mapping for many years. The transfer of the images from plane to ground and the transposition of fire spots and perimeters to maps has been performed manually. A new system has been developed which uses digital image processing, transmission, and storage. Interactive graphics, high resolution color display, calculations, and computer model compatibility are featured in the system. Images are acquired by an IR line scanner and converted to 1024 x 1024 x 8 bit frames for transmission to the ground at a 1.544 Mbit/s rate over a 14.7 GHz carrier. Individual frames are received and stored, then transferred to a solid state memory to refresh the display at a conventional 30 frames per second rate. Line length and area calculations, false color assignment, X-Y scaling, and image enhancement are available. Fire spread can be calculated for display and fire perimeters plotted on maps. The performance requirements, basic system, and image processing will be described.

  5. Visual parameter optimisation for biomedical image processing

    PubMed Central

    2015-01-01

    Background Biomedical image processing methods require users to optimise input parameters to ensure high-quality output. This presents two challenges. First, it is difficult to optimise multiple input parameters for multiple input images. Second, it is difficult to achieve an understanding of underlying algorithms, in particular, relationships between input and output. Results We present a visualisation method that transforms users' ability to understand algorithm behaviour by integrating input and output, and by supporting exploration of their relationships. We discuss its application to a colour deconvolution technique for stained histology images and show how it enabled a domain expert to identify suitable parameter values for the deconvolution of two types of images, and metrics to quantify deconvolution performance. It also enabled a breakthrough in understanding by invalidating an underlying assumption about the algorithm. Conclusions The visualisation method presented here provides analysis capability for multiple inputs and outputs in biomedical image processing that is not supported by previous analysis software. The analysis supported by our method is not feasible with conventional trial-and-error approaches. PMID:26329538

  6. Subband/transform functions for image processing

    NASA Technical Reports Server (NTRS)

    Glover, Daniel

    1993-01-01

    Functions for image data processing written for use with the MATLAB(TM) software package are presented. These functions provide the capability to transform image data with block transformations (such as the Walsh Hadamard) and to produce spatial frequency subbands of the transformed data. Block transforms are equivalent to simple subband systems. The transform coefficients are reordered using a simple permutation to give subbands. The low frequency subband is a low resolution version of the original image, while the higher frequency subbands contain edge information. The transform functions can be cascaded to provide further decomposition into more subbands. If the cascade is applied to all four of the first stage subbands (in the case of a four band decomposition), then a uniform structure of sixteen bands is obtained. If the cascade is applied only to the low frequency subband, an octave structure of seven bands results. Functions for the inverse transforms are also given. These functions can be used for image data compression systems. The transforms do not in themselves produce data compression, but prepare the data for quantization and compression. Sample quantization functions for subbands are also given. A typical compression approach is to subband the image data, quantize it, then use statistical coding (e.g., run-length coding followed by Huffman coding) for compression. Contour plots of image data and subbanded data are shown.
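    The original functions are written for MATLAB; the core idea of a block transform whose reordered coefficients form subbands can be sketched in Python with the smallest case, a 2x2 Walsh-Hadamard kernel: coefficient (0,0) of every block forms the low-resolution subband, and the other three positions carry edge information.

    ```python
    import numpy as np

    H = np.array([[1.0, 1.0],
                  [1.0, -1.0]]) / 2.0   # scaled 2x2 Walsh-Hadamard kernel

    def block_transform_subbands(img):
        """Apply a 2x2 Walsh-Hadamard transform to non-overlapping blocks, then
        permute coefficients into four subbands: low-pass plus three details."""
        h, w = img.shape
        blocks = img.reshape(h // 2, 2, w // 2, 2).transpose(0, 2, 1, 3)
        coeffs = H @ blocks @ H.T       # transform every 2x2 block at once
        # Coefficient (i, j) of each block goes to subband (i, j):
        return {(i, j): coeffs[:, :, i, j] for i in range(2) for j in range(2)}

    img = np.arange(16, dtype=float).reshape(4, 4)
    bands = block_transform_subbands(img)
    print(bands[(0, 0)])   # low-frequency subband: a 2x2 low-resolution image
    ```

    Cascading the same function on `bands[(0, 0)]` gives the octave structure described in the text; cascading on all four subbands gives the uniform structure.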

  7. Color Imaging management in film processing

    NASA Astrophysics Data System (ADS)

    Tremeau, Alain; Konik, Hubert; Colantoni, Philippe

    2003-12-01

    The latest research projects in the LIGIV laboratory concern the capture, processing, archiving and display of color images considering the trichromatic nature of the Human Visual System (HVS). Among these projects, one addresses digital cinematographic film sequences of high resolution and dynamic range. This project aims to optimize the use of content for post-production operators and for the end user. The studies presented in this paper address the use of metadata to optimise the consumption of video content on a device of the user's choice, independent of the nature of the equipment that captured the content. Optimising consumption includes enhancing the quality of image reconstruction on a display. Another part of this project addresses the content-based adaptation of image display. The main focus is on Regions of Interest (ROI) operations, based on the ROI concepts of MPEG-7. The aim of this second part is to characterize and ensure the conditions of display even if the display device or display media changes. This requires firstly the definition of a reference color space and of bi-directional color transformations for each peripheral device (camera, display, film recorder, etc.). The complicating factor is that different devices have different color gamuts, depending on the chromaticity of their primaries and the ambient illumination under which they are viewed. To match the displayed image to the aimed appearance, all kinds of production metadata (camera specification, camera colour primaries, lighting conditions) should be associated with the film material. Metadata and content together build rich content. The author is assumed to specify conditions as known from the digital graphic arts. To control image pre-processing and post-processing, these specifications should be contained in the film's metadata. The specifications are related to ICC profiles but additionally need to consider mesopic viewing conditions.

  8. Bitplane Image Coding With Parallel Coefficient Processing.

    PubMed

    Auli-Llinas, Francesc; Enfedaque, Pablo; Moure, Juan C; Sanchez, Victor

    2016-01-01

    Image coding systems have been traditionally tailored for multiple instruction, multiple data (MIMD) computing. In general, they partition the (transformed) image into codeblocks that can be coded in the cores of MIMD-based processors. Each core executes a sequential flow of instructions to process the coefficients in the codeblock, independently and asynchronously from the other cores. Bitplane coding is a common strategy to code such data. Most of its mechanisms require sequential processing of the coefficients. Recent years have seen the rise of processing accelerators with enhanced computational performance and power efficiency whose architecture is mainly based on the single instruction, multiple data (SIMD) principle. SIMD computing refers to the execution of the same instruction on multiple data in a lockstep synchronous way. Unfortunately, current bitplane coding strategies cannot fully profit from such processors due to their inherently sequential coding tasks. This paper presents bitplane image coding with parallel coefficient (BPC-PaCo) processing, a coding method that can process many coefficients within a codeblock in parallel and synchronously. To this end, the scanning order, the context formation, the probability model, and the arithmetic coder of the coding engine have been re-formulated. The experimental results suggest that the penalization in coding performance of BPC-PaCo with respect to the traditional strategies is almost negligible. PMID:26441420
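    The central data structure, the bitplanes of a codeblock's coefficients, lends itself naturally to lockstep processing: the same shift-and-mask operation applies to every coefficient at once. The sketch below shows only bitplane extraction, not BPC-PaCo's reformulated scanning, context modeling or arithmetic coding.

    ```python
    import numpy as np

    def bitplanes(coeffs, nbits=8):
        """Split unsigned integer coefficients into bitplanes, MSB first.
        Each plane is produced by one vectorized shift-and-mask over the whole
        codeblock, mimicking SIMD lockstep processing of all coefficients."""
        c = np.asarray(coeffs, dtype=np.uint32)
        return [((c >> b) & 1).astype(np.uint8) for b in range(nbits - 1, -1, -1)]

    codeblock = np.array([[5, 12], [7, 0]])
    planes = bitplanes(codeblock, nbits=4)
    for p in planes:
        print(p)
    ```

    A bitplane coder visits these planes from most to least significant, which is what makes the bitstream embedded (truncatable at any point).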

  9. [Digital thoracic radiology: devices, image processing, limits].

    PubMed

    Frija, J; de Géry, S; Lallouet, F; Guermazi, A; Zagdanski, A M; De Kerviler, E

    2001-09-01

    In the first part, the different techniques of digital thoracic radiography are described. Since computed radiography with phosphor plates is the most widely commercialized, it receives the most emphasis, but the other detectors are also described, such as the selenium-coated drum and direct digital radiography with selenium detectors, as well as indirect flat-panel detectors and a system with four high-resolution CCD cameras. In a second step, the most important image processing techniques are discussed: gradation curves, unsharp mask processing, the MUSICA system, dynamic range compression or reduction, and subtraction with dual energy. In the last part, the advantages and drawbacks of computed thoracic radiography are emphasized. The most important are the consistently good quality of the images and the possibilities of image processing. PMID:11567193

  10. Image processing via VLSI: A concept paper

    NASA Technical Reports Server (NTRS)

    Nathan, R.

    1982-01-01

    Implementing specific image processing algorithms via very large scale integrated (VLSI) systems offers a potent solution to the problem of handling high data rates. Two algorithms stand out as being particularly critical: geometric map transformation and filtering or correlation. These two functions form the basis for data calibration, registration and mosaicking. VLSI presents itself as an inexpensive ancillary function to be added to almost any general purpose computer, and if the geometry and filter algorithms are implemented in VLSI, the processing rate bottleneck would be significantly relieved. A development effort is described that identifies the image processing functions limiting present systems in meeting future throughput needs, translates these functions into algorithms, implements them via VLSI technology, and interfaces the hardware to a general purpose digital computer.

  11. Super-resolution reconstruction to increase the spatial resolution of diffusion weighted images from orthogonal anisotropic acquisitions.

    PubMed

    Scherrer, Benoit; Gholipour, Ali; Warfield, Simon K

    2012-10-01

    Diffusion-weighted imaging (DWI) enables non-invasive investigation and characterization of the white matter but suffers from a relatively poor spatial resolution. Increasing the spatial resolution in DWI is challenging with a single-shot EPI acquisition due to the decreased signal-to-noise ratio and the T2* relaxation effect amplified with increased echo time. In this work we propose a super-resolution reconstruction (SRR) technique based on the acquisition of multiple anisotropic orthogonal DWI scans. DWI scans acquired in different planes are not typically closely aligned due to the geometric distortion introduced by magnetic susceptibility differences in each phase-encoding direction. We compensate each scan for geometric distortion by acquisition of a dual echo gradient echo field map, providing an estimate of the field inhomogeneity. We address the problem of patient motion by aligning the volumes in both space and q-space. The SRR is formulated as a maximum a posteriori problem. It relies on a volume acquisition model which describes how the acquired scans are observations of an unknown high-resolution image which we aim to recover. Our model enables the introduction of image priors that exploit spatial homogeneity and enables regularized solutions. We detail our SRR optimization procedure and report experiments including numerical simulations, synthetic SRR and real world SRR. In particular, we demonstrate that combining distortion compensation and SRR provides better results than acquisition of a single isotropic scan for the same acquisition duration time. Importantly, SRR enables DWI with resolution beyond the scanner hardware limitations. This work provides the first evidence that SRR, which employs conventional single shot EPI techniques, enables resolution enhancement in DWI, and may dramatically impact the role of DWI in both neuroscience and clinical applications. PMID:22770597
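    A toy one-dimensional analogue may clarify the SRR formulation: each low-resolution scan is a shifted, averaged observation of the unknown high-resolution signal, and MAP estimation with a smoothness prior reduces to regularized least squares. The operators, signal and regularization weight below are illustrative, not the paper's 3-D acquisition model.

    ```python
    import numpy as np

    # Toy 1-D SRR: each scan y_k = D_k x is a shifted 2-sample average of the
    # unknown high-resolution x. MAP with a smoothness prior amounts to
    # minimizing ||D_k x - y_k||^2 + lam * ||L x||^2 over all scans.
    n = 8
    x_true = np.sin(np.linspace(0, np.pi, n))

    def downsample_op(n, shift):
        """2:1 averaging matrix with a sub-sample shift (the anisotropic scans)."""
        D = np.zeros((n // 2, n))
        for r in range(n // 2):
            D[r, (2 * r + shift) % n] = 0.5
            D[r, (2 * r + shift + 1) % n] = 0.5
        return D

    ops = [downsample_op(n, s) for s in (0, 1)]      # two complementary scans
    obs = [D @ x_true for D in ops]

    L = np.diff(np.eye(n), axis=0)                   # first-difference prior
    lam = 1e-3
    A = np.vstack(ops + [np.sqrt(lam) * L])
    b = np.concatenate(obs + [np.zeros(L.shape[0])])
    x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(np.max(np.abs(x_hat - x_true)))            # small reconstruction error
    ```

    No single scan determines x on its own; it is the complementary shifts, like the orthogonal planes in the paper, that make the high-resolution estimate recoverable.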

  12. An Integrated Data Acquisition / User Request/ Processing / Delivery System for Airborne Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Chapman, B.; Chu, A.; Tung, W.

    2003-12-01

    Airborne science data has historically played an important role in the development of the scientific underpinnings for spaceborne missions. When the science community determines the need for new types of spaceborne measurements, airborne campaigns are often crucial in risk mitigation for these future missions. However, full exploitation of the acquired data may be difficult due to its experimental and transitory nature. Externally to the project, most problematic (in particular, for those not involved in requesting the data acquisitions) may be the difficulty in searching for, requesting, and receiving the data, or even knowing the data exist. This can result in a rather small, insular community of users for these data sets. Internally, the difficulty for the project is in maintaining a robust processing and archival system during periods of changing mission priorities and evolving technologies. The NASA/JPL Airborne Synthetic Aperture Radar (AIRSAR) has acquired data for a large and varied community of scientists and engineers for 15 years. AIRSAR is presently supporting current NASA Earth Science Enterprise experiments, such as the Soil Moisture EXperiment (SMEX) and the Cold Land Processes experiment (CLPX), as well as experiments conducted as many as 10 years ago. During that time, its processing, data ordering, and data delivery system has undergone evolutionary change as the cost and capability of resources have improved. AIRSAR now has a fully integrated data acquisition/user request/processing/delivery system through which most components of the data fulfillment process communicate via shared information within a database. The integration of these functions has reduced errors and increased throughput of processed data to customers.

  13. EOS image data processing system definition study

    NASA Technical Reports Server (NTRS)

    Gilbert, J.; Honikman, T.; Mcmahon, E.; Miller, E.; Pietrzak, L.; Yorsz, W.

    1973-01-01

    The Image Processing System (IPS) requirements and configuration are defined for NASA-sponsored advanced technology Earth Observatory System (EOS). The scope included investigation and definition of IPS operational, functional, and product requirements considering overall system constraints and interfaces (sensor, etc.) The scope also included investigation of the technical feasibility and definition of a point design reflecting system requirements. The design phase required a survey of present and projected technology related to general and special-purpose processors, high-density digital tape recorders, and image recorders.

  14. Improving in situ data acquisition using training images and a Bayesian mixture model

    NASA Astrophysics Data System (ADS)

    Abdollahifard, Mohammad Javad; Mariethoz, Gregoire; Pourfard, Mohammadreza

    2016-06-01

    Estimating the spatial distribution of physical processes using a minimum number of samples is of vital importance in earth science applications where sampling is costly. In recent years, training image-based methods have received a lot of attention for interpolation and simulation. However, training images have never been employed to optimize the spatial sampling process. In this paper, a sequential compressive sampling method is presented which decides the location of new samples based on a training image. First, a Bayesian mixture model is developed based on the training patterns. Then, using this model, unknown values are estimated based on a limited number of random samples. Since the model is probabilistic, it allows estimating local uncertainty conditionally to the available samples. Based on this, new samples are sequentially extracted from the locations with maximum uncertainty. Experiments show that compared to a random sampling strategy, the proposed supervised sampling method significantly reduces the number of samples needed to achieve the same level of accuracy, even when the training image is not optimally chosen. The method has the potential to reduce the number of observations necessary for the characterization of environmental processes.
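    The sampling loop itself is simple; the quality of the method comes from the uncertainty model. The sketch below keeps the loop (estimate uncertainty everywhere, sample where it is largest, repeat) but substitutes a crude proxy, distance to the nearest existing sample, for the paper's training-image-based Bayesian mixture model.

    ```python
    import numpy as np

    def sequential_sampling(field, n_samples):
        """Toy uncertainty-driven sequential sampling. The real method scores
        uncertainty with a Bayesian mixture model learned from a training
        image; here distance to the nearest existing sample stands in as a
        crude uncertainty proxy to show the loop structure."""
        h, w = field.shape
        ys, xs = np.mgrid[0:h, 0:w]
        coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
        sampled = [np.array([h // 2, w // 2], dtype=float)]   # seed sample
        for _ in range(n_samples - 1):
            d = np.min(np.linalg.norm(
                coords[:, None, :] - np.array(sampled)[None, :, :], axis=2),
                axis=1)
            sampled.append(coords[np.argmax(d)])   # most "uncertain" location
        idx = np.array(sampled, dtype=int)
        return idx, field[idx[:, 0], idx[:, 1]]

    rng = np.random.default_rng(1)
    field = rng.random((8, 8))
    locations, values = sequential_sampling(field, 5)
    print(locations)
    ```

    With the model-based uncertainty of the paper, the loop additionally concentrates samples where the training patterns predict the field is hard to reconstruct, not just where samples are sparse.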

  15. A Practical Approach to Quantitative Processing and Analysis of Small Biological Structures by Fluorescent Imaging.

    PubMed

    Noller, Crystal M; Boulina, Maria; McNamara, George; Szeto, Angela; McCabe, Philip M; Mendez, Armando J

    2016-09-01

    Standards in quantitative fluorescent imaging are vaguely recognized and receive insufficient discussion. A common best practice is to acquire images at the Nyquist rate, where the highest signal frequency is assumed to be the highest obtainable resolution of the imaging system. However, this particular standard is set to ensure that all obtainable information is being collected. The objective of the current study was to demonstrate that for quantification purposes, these correctly set acquisition rates can be redundant; instead, the linear size of the objects of interest can be used to calculate sufficient information density in the image. We describe optimized image acquisition parameters and unbiased methods for processing and quantification of medium-size cellular structures. Sections of rabbit aortas were immunohistochemically stained to identify and quantify sympathetic varicosities, >2 μm in diameter. Images were processed to reduce background noise and segment objects using free, open-access software. Calculations of the optimal sampling rate for the experiment were based on the size of the objects of interest. The effect of differing sampling rates and processing techniques on object quantification was demonstrated. Oversampling led to a substantial increase in file size, whereas undersampling hindered reliable quantification. Quantification of raw and incorrectly processed images generated false structures, misrepresenting the underlying data. The current study emphasizes the importance of defining image-acquisition parameters based on the structure(s) of interest. The proposed postacquisition processing steps effectively removed background and noise, allowed for reliable quantification, and eliminated user bias. This customizable, reliable method for background subtraction and structure quantification provides a reproducible tool for researchers across biologic disciplines. PMID:27182204
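    The paper's core argument, that object size rather than system resolution can set the sampling density, amounts to a small calculation. The factor of 4 samples across an object below is an illustrative assumption, not a value taken from the paper.

    ```python
    def pixel_size_um(object_diameter_um, samples_per_object=4):
        """Pixel size sufficient to quantify objects of a given diameter.
        The choice of 4 samples across the object is an illustrative
        assumption for counting/segmentation, not a value from the paper."""
        return object_diameter_um / samples_per_object

    # Varicosities >2 um across: ~0.5 um pixels suffice for quantification,
    # even where the optics could resolve (and Nyquist sampling of the system
    # resolution would demand) far finer pixels.
    print(pixel_size_um(2.0))
    ```

    This is why Nyquist-rate acquisition, while correct for capturing all obtainable information, can be redundant for quantification and inflate file sizes.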

  16. A Practical Approach to Quantitative Processing and Analysis of Small Biological Structures by Fluorescent Imaging

    PubMed Central

    Noller, Crystal M.; Boulina, Maria; McNamara, George; Szeto, Angela; McCabe, Philip M.

    2016-01-01

    Standards in quantitative fluorescent imaging are vaguely recognized and receive insufficient discussion. A common best practice is to acquire images at the Nyquist rate, where the highest signal frequency is assumed to be the highest obtainable resolution of the imaging system. However, this particular standard is set to ensure that all obtainable information is being collected. The objective of the current study was to demonstrate that for quantification purposes, these correctly set acquisition rates can be redundant; instead, the linear size of the objects of interest can be used to calculate sufficient information density in the image. We describe optimized image acquisition parameters and unbiased methods for processing and quantification of medium-size cellular structures. Sections of rabbit aortas were immunohistochemically stained to identify and quantify sympathetic varicosities, >2 μm in diameter. Images were processed to reduce background noise and segment objects using free, open-access software. Calculations of the optimal sampling rate for the experiment were based on the size of the objects of interest. The effect of differing sampling rates and processing techniques on object quantification was demonstrated. Oversampling led to a substantial increase in file size, whereas undersampling hindered reliable quantification. Quantification of raw and incorrectly processed images generated false structures, misrepresenting the underlying data. The current study emphasizes the importance of defining image-acquisition parameters based on the structure(s) of interest. The proposed postacquisition processing steps effectively removed background and noise, allowed for reliable quantification, and eliminated user bias. This customizable, reliable method for background subtraction and structure quantification provides a reproducible tool for researchers across biologic disciplines. PMID:27182204

  17. Acquisition of quantitative physiological data and computerized image reconstruction using a single scan TV system

    NASA Technical Reports Server (NTRS)

    Baily, N. A.

    1975-01-01

    Single-scan operation of television X-ray fluoroscopic systems allows both analog and digital reconstruction of tomographic sections from single-plane images. This type of system, combined with a minimum of statistical processing, showed excellent capability for delineating small changes in differential X-ray attenuation. Patient dose reduction is significant when compared to normal operation or film recording. Flat-screen, low-light-level systems were both rugged and light in weight, making them applicable to a variety of special purposes. Three-dimensional information was available from the tomographic methods, and the recorded data were sufficient, when used with appropriate computer display devices, to give representative 3D images.

  18. Electronics Signal Processing for Medical Imaging

    NASA Astrophysics Data System (ADS)

    Turchetta, Renato

    This paper describes the way the signal coming from a radiation detector is conditioned and processed to produce images useful for medical applications. First of all, the small signal produced by the radiation is processed by analogue electronics specifically designed to produce a good signal-to-noise ratio. The optimised analogue signal produced at this stage can then be processed and transformed into digital information that is eventually stored in a computer, where it can be further processed as required. After an introduction to the general requirements of the processing electronics, we will review the basic building blocks that process the `tiny' analogue signal coming from a radiation detector. We will in particular analyse how it is possible to optimise the signal-to-noise ratio of the electronics. Some exercises, developed in the tutorial, will help to understand this fundamental part. The blocks needed to process the analogue signal and transform it into a digital code will be described. The description of electronics systems used for medical imaging will conclude the lecture.
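One basic reason conditioning improves the signal-to-noise ratio can be shown with a toy numerical experiment (this sketch is illustrative only and not taken from the paper): averaging N independent noisy acquisitions of the same signal reduces the noise standard deviation by roughly a factor of sqrt(N), analogous to what shaping/integration stages achieve in the analogue front end.

```python
# Illustrative sketch (not from the paper): averaging N independent
# samples of a constant signal in white noise improves SNR ~ sqrt(N).
import numpy as np

rng = np.random.default_rng(0)

def snr(signal_amplitude, noise):
    """Simple SNR estimate: amplitude over noise standard deviation."""
    return signal_amplitude / np.std(noise)

amplitude, sigma = 1.0, 0.5
noise = rng.normal(0.0, sigma, size=(1000, 64))  # 64 repeated acquisitions

single = snr(amplitude, noise[:, 0])        # one acquisition: SNR ~ 2
averaged = snr(amplitude, noise.mean(axis=1))  # 64 averaged: ~sqrt(64) better

print(single, averaged)
```

Real front-end electronics uses analogue filters (e.g. charge-sensitive preamplifier followed by a shaper) rather than literal averaging, but the statistical principle, trading bandwidth or time for noise, is the same.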

  19. Human movement analysis with image processing in real time

    NASA Astrophysics Data System (ADS)

    Fauvet, Eric; Paindavoine, Michel; Cannard, F.

    1991-04-01

    In the field of the human sciences, many applications need to know the kinematic characteristics of human movements. Psychology associates these characteristics with control mechanisms; sport science and biomechanics associate them with the performance of the athlete or of the patient, so that the trainer or the doctor can correct the subject's gesture to obtain a better performance once the motion properties are known. Roberton's studies show the evolution of children's motion. Several investigation methods are able to measure human movement, but most current studies are based on image processing. Often the systems work at the TV standard (50 frames per second), which permits the study of only very slow gestures. A human operator analysing the digitized film sequence manually makes the operation very expensive, especially long, and imprecise. On these grounds, many human-movement analysis systems have been implemented. They consist of: markers, which are fixed to the anatomically interesting points on the subject in motion, and image compression, which is the art of coding picture data; generally the compression is limited to the calculation of centroid coordinates for each marker. These systems differ from one another in image acquisition and marker detection.
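The marker-compression step described in this abstract, reducing each marker blob to a single pair of centroid coordinates, can be sketched as an intensity-weighted centroid over a thresholded region of interest (the function below is a hypothetical minimal version, not the system's actual implementation):

```python
# Hypothetical sketch of the marker-compression step: reduce one
# marker's region of interest (ROI) to intensity-weighted centroid
# coordinates, the only data most such systems keep per frame.
import numpy as np

def weighted_centroid(roi, threshold=0.0):
    """Return the (row, col) intensity-weighted centroid of a marker ROI,
    ignoring pixels at or below the threshold."""
    w = np.where(roi > threshold, roi.astype(float), 0.0)
    total = w.sum()
    ys, xs = np.mgrid[0:roi.shape[0], 0:roi.shape[1]]
    return (ys * w).sum() / total, (xs * w).sum() / total

# A synthetic 5x5 marker blob, brightest at (2, 2):
blob = np.zeros((5, 5))
blob[1:4, 1:4] = 1.0
blob[2, 2] = 4.0
print(weighted_centroid(blob))  # (2.0, 2.0)
```

Compressing each marker to two floating-point coordinates per frame is what makes long motion sequences tractable compared with storing and manually digitizing full video frames.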

  20. A multiple process solution to the logical problem of language acquisition*

    PubMed Central

    MACWHINNEY, BRIAN

    2006-01-01

    Many researchers believe that there is a logical problem at the center of language acquisition theory. According to this analysis, the input to the learner is too inconsistent and incomplete to determine the acquisition of grammar. Moreover, when corrective feedback is provided, children tend to ignore it. As a result, language learning must rely on additional constraints from universal grammar. To solve this logical problem, theorists have proposed a series of constraints and parameterizations on the form of universal grammar. Plausible alternatives to these constraints include: conservatism, item-based learning, indirect negative evidence, competition, cue construction, and monitoring. Careful analysis of child language corpora has cast doubt on claims regarding the absence of positive exemplars. Using demonstrably available positive data, simple learning procedures can be formulated for each of the syntactic structures that have traditionally motivated invocation of the logical problem. Within the perspective of emergentist theory (MacWhinney, 2001), the operation of a set of mutually supportive processes is viewed as providing multiple buffering for developmental outcomes. However, the fact that some syntactic structures are more difficult to learn than others can be used to highlight areas of intense grammatical competition and processing load. PMID:15658750