Science.gov

Sample records for acquisition image analysis

  1. Troubleshooting digital macro photography for image acquisition and the analysis of biological samples.

    PubMed

    Liepinsh, Edgars; Kuka, Janis; Dambrova, Maija

    2013-01-01

    For years, image acquisition and analysis have been an important part of life science experiments to ensure the adequate and reliable presentation of research results. Since the development of digital photography and digital planimetric methods for image analysis approximately 20 years ago, new equipment and technologies have emerged, which have increased the quality of image acquisition and analysis. Different techniques are available to measure the size of stained tissue samples in experimental animal models of disease; however, the most accurate method is digital macro photography with software that is based on planimetric analysis. In this study, we described the methodology for the preparation of infarcted rat heart and brain tissue samples before image acquisition, digital macro photography techniques and planimetric image analysis. These methods are useful in the macro photography of biological samples and subsequent image analysis. In addition, the techniques that are described in this study include the automated analysis of digital photographs to minimize user input and exclude the risk of researcher-generated errors or bias during image analysis.
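As a minimal sketch of the planimetric idea behind such analyses (not the authors' actual software), the stained or infarcted area fraction of a photographed tissue section can be estimated by thresholding a grayscale image; the threshold value and the synthetic image below are illustrative assumptions:

```python
import numpy as np

def infarct_area_fraction(image, threshold):
    """Fraction of pixels whose intensity exceeds `threshold` -- a
    minimal planimetric measurement on a grayscale photograph
    represented as a 2-D array."""
    mask = image > threshold          # pixels classified as infarcted
    return mask.sum() / image.size    # area fraction, 0.0 .. 1.0

# Synthetic 4x4 "photograph": four bright (infarcted) pixels out of 16.
img = np.array([[200, 200,  50,  50],
                [200, 200,  50,  50],
                [ 50,  50,  50,  50],
                [ 50,  50,  50,  50]], dtype=float)
print(infarct_area_fraction(img, threshold=100))  # → 0.25
```

Automated pipelines of the kind described above would add steps such as dish detection and automatic threshold selection, but the area measurement itself reduces to this pixel count.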

  2. Multislice perfusion of the kidneys using parallel imaging: image acquisition and analysis strategies.

    PubMed

    Gardener, Alexander G; Francis, Susan T

    2010-06-01

    Flow-sensitive alternating inversion recovery arterial spin labeling with parallel imaging acquisition is used to acquire single-shot, multislice perfusion maps of the kidney. A considerable problem for arterial spin labeling methods, which are based on sequential subtraction, is the movement of the kidneys due to respiratory motion between acquisitions. The effects of breathing strategy (free, respiratory-triggered and breath hold) are studied and the use of background suppression is investigated. The application of movement correction by image registration is assessed and perfusion rates are measured. Postacquisition image realignment is shown to improve visual quality and subsequent perfusion quantification. Using such correction, data can be collected from free breathing alone, without the need for a good respiratory trace and in the shortest overall acquisition time, advantageous for patient comfort. The addition of background suppression to arterial spin labeling data is shown to reduce the perfusion signal-to-noise ratio and underestimate perfusion.
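The sequential subtraction at the heart of such arterial spin labeling measurements can be sketched as a control-minus-label difference image (a simplified illustration; real quantification adds kinetic modelling and the image registration discussed above):

```python
import numpy as np

def perfusion_weighted(controls, labels):
    """Mean pairwise control-minus-label difference image -- the basic
    sequential subtraction behind FAIR ASL perfusion mapping.
    `controls` and `labels` have shape (n_pairs, ny, nx)."""
    return (controls - labels).mean(axis=0)

# Synthetic data: tissue signal 100, labelled frames 2 units lower.
controls = np.full((3, 2, 2), 100.0)
labels = np.full((3, 2, 2), 98.0)
pwi = perfusion_weighted(controls, labels)   # every pixel equals 2.0
```

Misregistration between the control and label frames corrupts exactly this subtraction, which is why the post-acquisition realignment described above improves the perfusion maps.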

  3. Image Acquisition Context

    PubMed Central

    Bidgood, W. Dean; Bray, Bruce; Brown, Nicolas; Mori, Angelo Rossi; Spackman, Kent A.; Golichowski, Alan; Jones, Robert H.; Korman, Louis; Dove, Brent; Hildebrand, Lloyd; Berg, Michael

    1999-01-01

    Objective: To support clinically relevant indexing of biomedical images and image-related information based on the attributes of image acquisition procedures and the judgments (observations) expressed by observers in the process of image interpretation. Design: The authors introduce the notion of “image acquisition context,” the set of attributes that describe image acquisition procedures, and present a standards-based strategy for utilizing the attributes of image acquisition context as indexing and retrieval keys for digital image libraries. Methods: The authors' indexing strategy is based on an interdependent message/terminology architecture that combines the Digital Imaging and Communication in Medicine (DICOM) standard, the SNOMED (Systematized Nomenclature of Human and Veterinary Medicine) vocabulary, and the SNOMED DICOM microglossary. The SNOMED DICOM microglossary provides context-dependent mapping of terminology to DICOM data elements. Results: The capability of embedding standard coded descriptors in DICOM image headers and image-interpretation reports improves the potential for selective retrieval of image-related information. This favorably affects information management in digital libraries. PMID:9925229

  4. Quantitative assessment of the impact of biomedical image acquisition on the results obtained from image analysis and processing

    PubMed Central

    2014-01-01

Introduction Dedicated, automatic algorithms for image analysis and processing are becoming more and more common in medical diagnosis. When creating dedicated algorithms, many factors must be taken into consideration. They are associated with selecting the appropriate algorithm parameters and taking into account the impact of data acquisition on the results obtained. An important feature of algorithms is the possibility of their use in other medical units by other operators. This problem, namely the operator’s (acquisition) impact on the results obtained from image analysis and processing, has been shown using a few examples. Material and method The analysed images were obtained from a variety of medical devices such as thermal imaging, tomography devices and those working in visible light. The objects of imaging were cellular elements, the anterior segment and fundus of the eye, postural defects and others. In total, almost 200,000 images coming from 8 different medical units were analysed. All image analysis algorithms were implemented in C and Matlab. Results For various algorithms and methods of medical imaging, the impact of image acquisition on the results obtained is different. There are different levels of algorithm sensitivity to changes in the parameters, for example: (1) for microscope settings and the brightness assessment of cellular elements there is a difference of 8%; (2) for the thyroid ultrasound images there is a difference in marking the thyroid lobe area which results in a brightness assessment difference of 2%. The method of image acquisition in image analysis and processing also affects: (3) the accuracy of determining the temperature in the characteristic areas on the patient’s back for the thermal method - error of 31%; (4) the accuracy of finding characteristic points in photogrammetric images when evaluating postural defects – error of 11%; (5) the accuracy of performing ablative and non-ablative treatments in cosmetology – error of 18%.

  5. Superimposed fringe projection for three-dimensional shape acquisition by image analysis.

    PubMed

    Sasso, Marco; Chiappini, Gianluca; Palmieri, Giacomo; Amodio, Dario

    2009-05-01

    The aim in this work is the development of an image analysis technique for 3D shape acquisition, based on luminous fringe projections. In more detail, the method is based on the simultaneous use of several projectors, which is desirable whenever the surface under inspection has a complex geometry, with undercuts or shadow areas. In these cases, the usual fringe projection technique needs to perform several acquisitions, each time moving the projector or using several projectors alternately. Besides the procedure of fringe projection and phase calculation, an unwrap algorithm has been developed in order to obtain continuous phase maps needed in following calculations for shape extraction. With the technique of simultaneous projections, oriented in such a way to cover all of the surface, it is possible to increase the speed of the acquisition process and avoid the postprocessing problems related to the matching of different point clouds.
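The continuous phase maps mentioned above come from unwrapping the 2π-wrapped phase recovered from the fringes. A minimal one-dimensional illustration using NumPy's standard unwrap routine (not the authors' algorithm, which handles 2-D maps from multiple simultaneous projectors):

```python
import numpy as np

# A linearly increasing "true" phase, as a fringe pattern would encode
# across a smooth surface.
true_phase = np.linspace(0, 4 * np.pi, 50)
wrapped = np.angle(np.exp(1j * true_phase))   # wrapped into (-pi, pi]
unwrapped = np.unwrap(wrapped)                # restore continuity
print(np.allclose(unwrapped, true_phase))     # → True
```

One-dimensional unwrapping succeeds here because adjacent samples differ by less than π; the 2-D case with shadows and undercuts is what makes dedicated unwrap algorithms necessary.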

  6. Image analysis and data-acquisition techniques for infrared and CCD cameras for ATF

    NASA Astrophysics Data System (ADS)

    Young, K. G.; Hillis, D. L.

    1988-08-01

    A multipurpose image processing system has been developed for the Advanced Toroidal Facility (ATF) stellarator experiment. This system makes it possible to investigate the complicated topology inherent in stellarator plasmas with conventional video technology. Infrared (IR) and charge-coupled device (CCD) cameras, operated at the standard video framing rate, are used on ATF to measure heat flux patterns to the vacuum vessel wall and visible-light emission from the ionized plasma. These video cameras are coupled with fast acquisition and display systems, developed for a MicroVAX-II, which allow between-shot observation of the dynamic temperature and spatial extent of the plasma generated by ATF. The IR camera system provides acquisition of one frame of 60×80 eight-bit pixels every 16.7 ms via storage in a CAMAC module. The CCD data acquisition proceeds automatically, storing the video frames until its 12-bit, 1-Mbyte CAMAC memory is filled. After analysis, transformation, and compression, selected portions of the data are stored on disk. Interactive display of experimental data and theoretical calculations are performed with software written in Interactive Data Language.

  7. Automated ship image acquisition

    NASA Astrophysics Data System (ADS)

    Hammond, T. R.

    2008-04-01

The experimental Automated Ship Image Acquisition System (ASIA) collects high-resolution ship photographs at a shore-based laboratory, with minimal human intervention. The system uses Automatic Identification System (AIS) data to direct a high-resolution SLR digital camera to ship targets and to identify the ships in the resulting photographs. The photo database is then searchable using the rich data fields from AIS, which include the name, type, call sign and various vessel identification numbers. The high-resolution images from ASIA are intended to provide information that can corroborate AIS reports (e.g., extract identification from the name on the hull) or provide information that has been omitted from the AIS reports (e.g., missing or incorrect hull dimensions, cargo, etc.). Once assembled into a searchable image database, the images can be used for a wide variety of marine safety and security applications. This paper documents the author's experience with the practicality of composing photographs based on AIS reports alone, describing a number of ways in which this can go wrong, from errors in the AIS reports to fixed and mobile obstructions and multiple ships in the shot. The frequency with which various errors occurred in automatically composed photographs collected in Halifax harbour in winter was determined by manual examination of the images. Of the images examined, 45% were considered of sufficient quality to read identification markings, numbers and text off the entire ship. One of the main technical challenges for ASIA lies in automatically differentiating good and bad photographs, so that few bad ones are shown to human users. Initial attempts at automatic photo rating showed 75% agreement with manual assessments.
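Aiming a shore-based camera from an AIS position report reduces to computing the bearing from the camera to the ship. A hedged sketch using the standard initial great-circle bearing formula (the coordinates below are illustrative, not taken from the paper):

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from the camera at (lat1, lon1) to a
    ship's AIS position (lat2, lon2), in degrees clockwise from north."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(p2)
    x = (math.cos(p1) * math.sin(p2)
         - math.sin(p1) * math.cos(p2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360.0

# A ship due east of the camera should bear roughly 90 degrees.
print(bearing_deg(44.65, -63.57, 44.65, -63.50))
```

Errors in the AIS position feed directly into this pan angle, which is one of the failure modes for automatically composed photographs that the paper catalogues.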

  8. Colony image acquisition and segmentation

    NASA Astrophysics Data System (ADS)

    Wang, W. X.

    2007-12-01

Colony and plaque counting has a large number of applications, including food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, sterility testing, AMES testing, pharmaceuticals, paints, sterile fluids and fungal contamination. Many researchers and developers have recently worked on such systems, but investigation shows that existing systems have problems, chiefly in image acquisition and image segmentation. To acquire colony images of good quality, an illumination box was constructed that provides both front lighting and back lighting, selectable by the user according to the properties of the colony dishes. With the illumination box, the lighting is uniform and the colony dish can be placed in the same position every time, which simplifies image processing. The developed colony image segmentation algorithm consists of three sub-algorithms: (1) image classification; (2) image processing; and (3) colony delineation. The colony delineation algorithm mainly contains procedures based on grey-level similarity, boundary tracing, shape information and colony exclusion. In addition, a number of algorithms were developed for colony analysis. The system has been tested with satisfactory results.
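The grey-level-similarity step of such a segmentation pipeline can be sketched as thresholding followed by connected-component labelling (a simplified stand-in for the delineation algorithm described above; the threshold value and synthetic dish are assumptions):

```python
import numpy as np
from scipy import ndimage

def count_colonies(image, threshold, min_pixels=1):
    """Grey-level thresholding followed by connected-component
    labelling; components smaller than `min_pixels` are discarded."""
    mask = image > threshold
    labeled, n = ndimage.label(mask)   # 4-connectivity by default
    sizes = ndimage.sum(mask, labeled, range(1, n + 1))
    return int(np.sum(np.asarray(sizes) >= min_pixels))

# Synthetic back-lit dish: two bright colonies on a dark background.
dish = np.zeros((8, 8))
dish[1:3, 1:3] = 200   # colony 1
dish[5:7, 5:7] = 180   # colony 2
print(count_colonies(dish, threshold=100))  # → 2
```

A real system would add the boundary-tracing and shape-based steps to split touching colonies, which simple labelling cannot separate.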

  9. Data acquisition and analysis for the energy-subtraction Compton scatter camera for medical imaging

    NASA Astrophysics Data System (ADS)

    Khamzin, Murat Kamilevich

In response to the shortcomings of the Anger camera currently used in conventional SPECT, particularly the trade-off between sensitivity and spatial resolution, a novel energy-subtraction Compton scatter camera, or ESCSC, has been proposed. A successful clinical implementation of the ESCSC could revolutionize the field of SPECT. Features of this camera include silicon and CdZnTe detectors in the primary and secondary detector systems, list-mode time-stamping data acquisition, modular architecture, and post-acquisition data analysis. Previous ESCSC studies were based on Monte Carlo modeling. The objective of this work is to test the theoretical framework developed in those studies by developing the data acquisition and analysis techniques necessary to implement the ESCSC. A camera model working in list mode with time stamping was successfully built and tested, confirming the potential of the ESCSC predicted in previous simulation studies. The acquired data were processed during post-acquisition data analysis based on preferred-event selection criteria. Along with the construction of a camera model and proof of the approach, the post-acquisition data analysis was further extended to include preferred-event weighting based on the likelihood of a preferred event being a true preferred event. While formulated to show ESCSC capabilities, the results of this study are important for any Compton scatter camera implementation as well as for coincidence data acquisition systems in general.
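For a two-interaction list-mode event, energy subtraction gives the initial photon energy as the sum of the two deposits, and the Compton formula then gives the scattering angle. A minimal sketch (the energies below are illustrative, chosen near the 140.5 keV Tc-99m line, and the event-selection criteria of the thesis are not modelled):

```python
import math

MEC2 = 511.0  # electron rest energy, keV

def compton_event(e1, e2):
    """Total initial photon energy E0 = E1 + E2 for a two-interaction
    event, and the scattering angle (degrees) from the Compton formula
    cos(theta) = 1 - mec2 * (1/E2 - 1/E0)."""
    e0 = e1 + e2
    cos_t = 1.0 - MEC2 * (1.0 / e2 - 1.0 / e0)
    return e0, math.degrees(math.acos(cos_t))

# 20 keV deposited in the silicon scatterer, remainder in CdZnTe.
e0, theta = compton_event(e1=20.0, e2=120.5)
```

The cone of directions defined by `theta` around the scatter axis is what a Compton camera back-projects to form an image.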

  10. Performing Quantitative Imaging Acquisition, Analysis and Visualization Using the Best of Open Source and Commercial Software Solutions

    PubMed Central

    Shenoy, Shailesh M.

    2016-01-01

    A challenge in any imaging laboratory, especially one that uses modern techniques, is to achieve a sustainable and productive balance between using open source and commercial software to perform quantitative image acquisition, analysis and visualization. In addition to considering the expense of software licensing, one must consider factors such as the quality and usefulness of the software’s support, training and documentation. Also, one must consider the reproducibility with which multiple people generate results using the same software to perform the same analysis, how one may distribute their methods to the community using the software and the potential for achieving automation to improve productivity. PMID:27516727

  11. Image acquisitions, processing and analysis in the process of obtaining characteristics of horse navicular bone

    NASA Astrophysics Data System (ADS)

    Zaborowicz, M.; Włodarek, J.; Przybylak, A.; Przybył, K.; Wojcieszak, D.; Czekała, W.; Ludwiczak, A.; Boniecki, P.; Koszela, K.; Przybył, J.; Skwarcz, J.

    2015-07-01

    The aim of this study was investigate the possibility of using methods of computer image analysis for the assessment and classification of morphological variability and the state of health of horse navicular bone. Assumption was that the classification based on information contained in the graphical form two-dimensional digital images of navicular bone and information of horse health. The first step in the research was define the classes of analyzed bones, and then using methods of computer image analysis for obtaining characteristics from these images. This characteristics were correlated with data concerning the animal, such as: side of hooves, number of navicular syndrome (scale 0-3), type, sex, age, weight, information about lace, information about heel. This paper shows the introduction to the study of use the neural image analysis in the diagnosis of navicular bone syndrome. Prepared method can provide an introduction to the study of non-invasive way to assess the condition of the horse navicular bone.

  12. Uav Photogrammetry with Oblique Images: First Analysis on Data Acquisition and Processing

    NASA Astrophysics Data System (ADS)

    Aicardi, I.; Chiabrando, F.; Grasso, N.; Lingua, A. M.; Noardo, F.; Spanò, A.

    2016-06-01

In recent years, many studies have revealed the advantages of using airborne oblique images for obtaining improved 3D city models (e.g. including façades and building footprints). Such data have usually been acquired by expensive airborne cameras installed on traditional aerial platforms. The purpose of this paper is to evaluate the possibility of acquiring and using oblique images for the 3D reconstruction of a historical building, obtained by a UAV (Unmanned Aerial Vehicle) and traditional COTS (Commercial Off-the-Shelf) digital cameras (more compact and lighter than the devices generally used), for the realization of a high-level-of-detail architectural survey. The critical issues of acquisition from a common UAV (flight planning strategies, ground control point and check point distribution and measurement, etc.) are described. Another important aspect considered was the evaluation of the possibility of using such systems as a low-cost method for obtaining complete information from an aerial point of view in case of emergencies or, as in the present paper, in the cultural heritage application field. The data processing was realized using an SfM-based approach for point cloud generation: different dense image-matching algorithms implemented in commercial and open source software were tested. The achieved results are analysed and the discrepancies from reference LiDAR data are computed for a final evaluation. The system was tested on the S. Maria Chapel, a part of the Novalesa Abbey (Italy).
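Computing discrepancies between an SfM point cloud and reference LiDAR data is commonly done via nearest-neighbour distances. A minimal cloud-to-cloud sketch (the mean nearest-neighbour distance is one simple choice of metric, not necessarily the one used in the paper):

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_discrepancy(points, reference):
    """Mean nearest-neighbour distance from a reconstructed point cloud
    to a reference (e.g. LiDAR) cloud."""
    dists, _ = cKDTree(reference).query(points)
    return float(dists.mean())

# Toy clouds: the "SfM" cloud is the reference shifted 0.1 m along x.
ref = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
sfm = ref + np.array([0.1, 0.0, 0.0])
print(cloud_discrepancy(sfm, ref))   # mean offset of 0.1
```

Note this measure is asymmetric; evaluations often report it in both directions or use a robust statistic to suppress outlier points from matching failures.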

  13. Particle size determination using TEM: a discussion of image acquisition and analysis for the novice microscopist.

    PubMed

    Pyrz, William D; Buttrey, Douglas J

    2008-10-21

    As nanoparticle synthesis capabilities advance, there is an increasing need for reliable nanoparticle size distribution analysis. Transmission electron microscopy (TEM) can be used to directly image nanoparticles at scales approaching a single atom. However, the advantage gained by being able to "see" these nanoparticles comes with several tradeoffs that must be addressed and balanced. For effective nanoparticle characterization, the proper selection of imaging type (bright vs dark field), magnification, and analysis method (manual vs automated) is critical. These decisions control the measurement resolution, the contrast between the particle and background, the number of particles in each image, the subsequent analysis efficiency, and the proper determination of the particle-background boundary and affect the significance of electron beam damage to the sample. In this work, the relationship between the critical decisions required for TEM analysis of small nanoparticles and the statistical effects of these factors on the resulting size distribution is presented.
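Whatever imaging choices are made, the end product of such a TEM study is a size distribution. A minimal sketch of the summary statistics typically reported (the sample diameters below are invented for illustration):

```python
import numpy as np

def size_stats(diameters_nm):
    """Mean, sample standard deviation and count for a set of particle
    diameters measured from TEM images."""
    d = np.asarray(diameters_nm, dtype=float)
    return d.mean(), d.std(ddof=1), d.size

# Hypothetical diameters (nm) measured from several micrographs.
mean, sd, n = size_stats([4.1, 3.9, 4.0, 4.2, 3.8])
print(mean)   # mean diameter, 4.0 nm for this sample
```

The paper's point is that the numbers fed into such statistics already depend on imaging mode, magnification and boundary-determination method, so those choices should be reported alongside the distribution.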

  14. Optimizing the acquisition and analysis of confocal images for quantitative single-mobile-particle detection.

    PubMed

    Friaa, Ouided; Furukawa, Melissa; Shamas-Din, Aisha; Leber, Brian; Andrews, David W; Fradin, Cécile

    2013-08-01

Quantification of the fluorescence properties of diffusing particles in solution is an invaluable source of information for characterizing the interactions, stoichiometry, or conformation of molecules directly in their native environment. In the case of heterogeneous populations, single-particle detection should be the method of choice and it can, in principle, be achieved by using confocal imaging. However, the detection of single mobile particles in confocal images presents specific challenges. In particular, it requires an adapted set of imaging parameters for capturing the confocal images and an adapted event-detection scheme for analyzing the image. Herein, we report a theoretical framework that allows a prediction of the properties of a homogeneous particle population. This model assumes that the particles have linear trajectories with reference to the confocal volume, which holds true for particles with moderate mobility. We compare the predictions of our model to the results as obtained by analyzing the confocal images of solutions of fluorescently labeled liposomes. Based on this comparison, we propose improvements to the simple line-by-line thresholding event-detection scheme, which is commonly used for single-mobile-particle detection. We show that an optimal combination of imaging and analysis parameters allows the reliable detection of fluorescent liposomes for concentrations between 1 and 100 pM. This result confirms the importance of confocal single-particle detection as a complementary technique to ensemble fluorescence-correlation techniques for studies of mobile particles.
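The line-by-line thresholding scheme discussed above can be sketched as finding above-threshold runs in each scan line (a simplified illustration; the minimum-width criterion is an assumed refinement, not necessarily the authors' exact scheme):

```python
import numpy as np

def detect_events(line, threshold, min_width=2):
    """Return (start, stop) pixel runs where the signal in one scan line
    exceeds `threshold` for at least `min_width` consecutive pixels."""
    above = (np.asarray(line) > threshold).astype(int)
    # Pad with zeros so every run has both a rising and a falling edge.
    edges = np.flatnonzero(np.diff(np.concatenate(([0], above, [0]))))
    return [(int(a), int(b)) for a, b in edges.reshape(-1, 2)
            if b - a >= min_width]

# One 3-pixel particle transit and one 1-pixel noise spike.
line = [0, 0, 5, 6, 7, 0, 0, 9, 0]
print(detect_events(line, threshold=3))  # → [(2, 5)]
```

Requiring a minimum run width is one simple way to reject single-pixel shot-noise spikes that a bare threshold would count as events.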

  15. Acquisition and Analysis of Dynamic Responses of a Historic Pedestrian Bridge using Video Image Processing

    NASA Astrophysics Data System (ADS)

    O'Byrne, Michael; Ghosh, Bidisha; Schoefs, Franck; O'Donnell, Deirdre; Wright, Robert; Pakrashi, Vikram

    2015-07-01

Video based tracking is capable of analysing bridge vibrations that are characterised by large amplitudes and low frequencies. This paper presents the use of video images and associated image processing techniques to obtain the dynamic response of a pedestrian suspension bridge in Cork, Ireland. This historic structure is one of the four suspension bridges in Ireland and is notable for its dynamic nature. A video camera is mounted on the river-bank and the dynamic responses of the bridge have been measured from the video images. The dynamic response is assessed without the need of a reflector on the bridge and in the presence of various forms of luminous complexities in the video image scenes. Vertical deformations of the bridge were measured in this regard. The video image tracking for the measurement of dynamic responses of the bridge were based on correlating patches in time-lagged scenes in video images and utilising a zero mean normalised cross correlation (ZNCC) metric. The bridge was excited by designed pedestrian movement and by individual cyclists traversing the bridge. The time series data of dynamic displacement responses of the bridge were analysed to obtain the frequency domain response. Frequencies obtained from video analysis were checked against accelerometer data from the bridge obtained while carrying out the same set of experiments used for video image based recognition.
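The ZNCC metric used for patch matching can be written down directly; a minimal sketch (ZNCC is invariant to patch gain and offset, which is why it tolerates the varying illumination mentioned above):

```python
import numpy as np

def zncc(patch_a, patch_b):
    """Zero mean normalised cross correlation between two equally sized
    image patches: +1 for identical patterns (up to gain and offset),
    near 0 for unrelated ones, -1 for inverted ones."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

a = np.array([[1.0, 2, 3], [4, 5, 6], [7, 8, 9]])
b = 2.0 * a + 10.0            # same pattern, different gain and offset
print(zncc(a, b))             # → 1.0
```

Tracking then slides the template patch over a search window in the next frame and takes the location of maximum ZNCC as the displaced patch position.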

  17. Recovering the dynamics of root growth and development using novel image acquisition and analysis methods.

    PubMed

Wells, Darren M; French, Andrew P; Naeem, Asad; Ishaq, Omer; Traini, Richard; Hijazi, Hussein I; Bennett, Malcolm J; Pridmore, Tony P

    2012-06-01

    Roots are highly responsive to environmental signals encountered in the rhizosphere, such as nutrients, mechanical resistance and gravity. As a result, root growth and development is very plastic. If this complex and vital process is to be understood, methods and tools are required to capture the dynamics of root responses. Tools are needed which are high-throughput, supporting large-scale experimental work, and provide accurate, high-resolution, quantitative data. We describe and demonstrate the efficacy of the high-throughput and high-resolution root imaging systems recently developed within the Centre for Plant Integrative Biology (CPIB). This toolset includes (i) robotic imaging hardware to generate time-lapse datasets from standard cameras under infrared illumination and (ii) automated image analysis methods and software to extract quantitative information about root growth and development both from these images and via high-resolution light microscopy. These methods are demonstrated using data gathered during an experimental study of the gravitropic response of Arabidopsis thaliana.
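One quantity extracted in such gravitropism studies is the orientation of the root tip over time. A hedged sketch of an angle-from-vertical measure computed from two tracked tip positions (image coordinates with y increasing downward are an assumption here, not necessarily the CPIB convention):

```python
import math

def tip_angle_deg(p0, p1):
    """Angle of root growth relative to the gravity vector (straight
    down), from two successive tip positions (x, y) in image
    coordinates with y increasing downward. 0 = growing straight down,
    positive = deflected toward +x."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    return math.degrees(math.atan2(dx, dy))

# Tip moves straight down between two time-lapse frames.
print(tip_angle_deg((100, 50), (100, 80)))  # → 0.0
```

Plotting this angle against time for many seedlings is one standard readout of the gravitropic response demonstrated in the study.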

  18. Recovering the dynamics of root growth and development using novel image acquisition and analysis methods

    PubMed Central

    Wells, Darren M.; French, Andrew P.; Naeem, Asad; Ishaq, Omer; Traini, Richard; Hijazi, Hussein; Bennett, Malcolm J.; Pridmore, Tony P.

    2012-01-01

    Roots are highly responsive to environmental signals encountered in the rhizosphere, such as nutrients, mechanical resistance and gravity. As a result, root growth and development is very plastic. If this complex and vital process is to be understood, methods and tools are required to capture the dynamics of root responses. Tools are needed which are high-throughput, supporting large-scale experimental work, and provide accurate, high-resolution, quantitative data. We describe and demonstrate the efficacy of the high-throughput and high-resolution root imaging systems recently developed within the Centre for Plant Integrative Biology (CPIB). This toolset includes (i) robotic imaging hardware to generate time-lapse datasets from standard cameras under infrared illumination and (ii) automated image analysis methods and software to extract quantitative information about root growth and development both from these images and via high-resolution light microscopy. These methods are demonstrated using data gathered during an experimental study of the gravitropic response of Arabidopsis thaliana. PMID:22527394

  19. Super-resolved image acquisition with full-field localization-based microscopy: theoretical analysis and evaluation

    NASA Astrophysics Data System (ADS)

    Son, Taehwang; Lee, Wonju; Kim, Donghyun

    2016-02-01

We analyze and evaluate super-resolved image acquisition with full-field localization microscopy in which an individual signal sampled by localization may or may not be switched. For the analysis, the Nyquist-Shannon sampling theorem based on the ideal delta function was extended to sampling with a unit pulse comb and a surface-enhanced localized near-field that was numerically calculated with finite difference time domain. Sampling with a unit pulse was investigated in the Fourier domain, where the magnitude of the baseband becomes larger than that of the adjacent subband, i.e. the aliasing effect is reduced owing to the pulse width. The standard Lena image was employed as the imaging target and a diffraction-limited optical system was assumed. A peak signal-to-noise ratio (PSNR) was introduced to evaluate the efficiency of image reconstruction quantitatively. When the target was sampled without switching by a unit pulse as the sampling width and period were varied, the PSNR increased to 18.1 dB, which is the PSNR of a conventional diffraction-limited image. The PSNR was found to increase with a longer pulse width due to the reduced aliasing effect. When switching of individual sampling pulses was applied, blurry artifacts outside the excited field were removed for each pulse and the PSNR rose to 25.6 dB with a shortened pulse period, i.e. an effective resolution of 72 nm was obtained, which can be decreased further.
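The PSNR figure of merit used above is defined from the mean squared error against a reference image; a minimal sketch (the peak value of 255 assumes 8-bit images):

```python
import numpy as np

def psnr_db(reference, reconstruction, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    reconstruction: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((reference - reconstruction) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Uniform error of 1 grey level against a flat reference image.
ref = np.zeros((4, 4))
rec = ref + 1.0
print(round(psnr_db(ref, rec), 2))  # → 48.13
```

Higher PSNR means a reconstruction closer to the reference, which is how the 18.1 dB diffraction-limited and 25.6 dB switched-sampling results above are compared.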

  20. Graphical user interface for image acquisition and processing

    DOEpatents

    Goldberg, Kenneth A.

    2002-01-01

An event-driven GUI-based image acquisition interface for the IDL programming environment, designed for CCD camera control and image acquisition directly into the IDL environment, where image manipulation and data analysis can be performed, together with a toolbox of real-time analysis applications. Running the image acquisition hardware directly from IDL removes the necessity of first saving images in one program and then importing the data into IDL for analysis in a second step. Bringing the data directly into IDL creates an opportunity for the implementation of IDL image processing and display functions in real time. The program allows control over the available charge-coupled device (CCD) detector parameters, data acquisition, file saving and loading, and image manipulation and processing, all from within IDL. The program is built using IDL's widget libraries to control the on-screen display and user interface.

  1. Acquisition and applications of 3D images

    NASA Astrophysics Data System (ADS)

    Sterian, Paul; Mocanu, Elena

    2007-08-01

The moiré fringe method and its analysis, through to medical and entertainment applications, are discussed in this paper. We describe the procedure of capturing 3D images with an Inspeck camera, a real-time 3D shape acquisition system based on structured-light techniques. The method is a high-resolution one. After processing the images on a computer, the data can be used to create fashionable objects by engraving them with a Q-switched Nd:YAG laser. In the medical field, we mention plastic surgery and the replacement of X-rays, especially in pediatric use.

  2. EMOTIONS AND IMAGES IN LANGUAGE--A LEARNING ANALYSIS OF THEIR ACQUISITION AND FUNCTION.

    ERIC Educational Resources Information Center

    STAATS, ARTHUR W.

THIS ARTICLE PRESENTED THEORETICAL AND EXPERIMENTAL ANALYSES CONCERNING IMPORTANT ASPECTS OF LANGUAGE. IT WAS SUGGESTED THAT A LEARNING THEORY WHICH INTEGRATES INSTRUMENTAL AND CLASSICAL CONDITIONING, CUTTING ACROSS THEORETICAL LINES, COULD SERVE AS THE BASIS FOR A COMPREHENSIVE THEORY OF LANGUAGE ACQUISITION AND FUNCTION. THE PAPER ILLUSTRATED THE…

  3. Applications Of Digital Image Acquisition In Anthropometry

    NASA Astrophysics Data System (ADS)

    Woolford, Barbara; Lewis, James L.

    1981-10-01

    Anthropometric data on reach and mobility have traditionally been collected by time consuming and relatively inaccurate manual methods. Three dimensional digital image acquisition promises to radically increase the speed and ease of data collection and analysis. A three-camera video anthropometric system for collecting position, velocity, and force data in real time is under development for the Anthropometric Measurement Laboratory at NASA's Johnson Space Center. The use of a prototype of this system for collecting data on reach capabilities and on lateral stability is described. Two extensions of this system are planned.

  4. Noun Imageability Facilitates the Acquisition of Plurals: Survival Analysis of Plural Emergence in Children

    ERIC Educational Resources Information Center

    Smolík, Filip

    2014-01-01

    Some research in child language suggests that semantically general verbs appear in grammatical structures earlier than semantically complex, specific ones. The present study examines whether this is also the case for nouns, using imageability as a proxy measure of semantic generality. Longitudinal corpus data from 12 children from the Manchester corpus…

  5. Optimisation of acquisition time in bioluminescence imaging

    NASA Astrophysics Data System (ADS)

    Taylor, Shelley L.; Mason, Suzannah K. G.; Glinton, Sophie; Cobbold, Mark; Styles, Iain B.; Dehghani, Hamid

    2015-03-01

    Decreasing the acquisition time in bioluminescence imaging (BLI) and bioluminescence tomography (BLT) will enable animals to be imaged within the window of stable emission of the bioluminescent source, a higher imaging throughput and minimisation of the time which an animal is anaesthetised. This work investigates, through simulation using a heterogeneous mouse model, two methods of decreasing acquisition time: 1. Imaging at fewer wavelengths (a reduction from five to three); and 2. Increasing the bandwidth of filters used for imaging. The results indicate that both methods are viable ways of decreasing the acquisition time without a loss in quantitative accuracy. Importantly, when choosing imaging wavelengths, the spectral attenuation of tissue and emission spectrum of the source must be considered, in order to choose wavelengths at which a high signal can be achieved. Additionally, when increasing the bandwidth of the filters used for imaging, the bandwidth must be accounted for in the reconstruction algorithm.

  6. Low-Dose Micro-CT Imaging for Vascular Segmentation and Analysis Using Sparse-View Acquisitions

    PubMed Central

    Vandeghinste, Bert; Vandenberghe, Stefaan; Vanhove, Chris; Staelens, Steven; Van Holen, Roel

    2013-01-01

    The aim of this study is to investigate whether reliable and accurate 3D geometrical models of the murine aortic arch can be constructed from sparse-view data in vivo micro-CT acquisitions. This would considerably reduce acquisition time and X-ray dose. In vivo contrast-enhanced micro-CT datasets were reconstructed using a conventional filtered back projection algorithm (FDK), the image space reconstruction algorithm (ISRA) and total variation regularized ISRA (ISRA-TV). The reconstructed images were then semi-automatically segmented. Segmentations of high- and low-dose protocols were compared and evaluated based on voxel classification, 3D model diameters and centerline differences. FDK reconstruction does not lead to accurate segmentation in the case of low-view acquisitions. ISRA manages accurate segmentation with 1024 or more projection views. ISRA-TV needs a minimum of 256 views. These results indicate that accurate vascular models can be obtained from micro-CT scans with 8 times less X-ray dose and acquisition time, as long as regularized iterative reconstruction is used. PMID:23840893

  7. Optimization of the imaging quality of 64-slice CT acquisition protocol using Taguchi analysis: A phantom study.

    PubMed

    Pan, Lung Fa; Erdene, Erdenetsetseg; Chen, Chun Chi; Pan, Lung Kwang

    2015-01-01

    In this study, the phantom imaging quality of a 64-slice CT acquisition protocol was quantitatively evaluated using Taguchi analysis. An acrylic line-group phantom was designed and assembled with multiple layers of solid water plate in order to imitate the adult abdomen, and scanned with a Philips Brilliance CT in order to simulate a clinical examination. According to the Taguchi L8(2^7) orthogonal array, four major factors of the acquisition protocol were optimized: (A) the CT slice thickness, (B) the image reconstruction filter type, (C) the spiral CT pitch, and (D) the matrix size. The reconstructed line-group phantom image was counted by four radiologists for three discrete rounds in order to obtain the averages and standard deviations of the line counts and the corresponding signal-to-noise ratios (S/N). The quantified S/N values were analyzed, and the optimal combination of the four factor settings was determined to be (A) a 1-mm slice thickness, (B) a sharp filter type, (C) a 1.172 spiral CT pitch, and (D) a 1024×1024 matrix size. The dominant factors were (B) the filter type and the cross interaction between the filter type and the CT slice thickness (A×B). The minor factors were (C) the spiral CT pitch and (D) the matrix size, since neither was capable of yielding a 95% confidence level in the ANOVA test. PMID:26405931
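
The Taguchi procedure summarized above (a larger-the-better S/N per run, then, for each factor, the level with the higher mean S/N) can be sketched as follows. The line counts and the factor-to-column assignment here are illustrative assumptions, not the paper's data:

```python
import math

def sn_larger_is_better(ys):
    """Taguchi larger-the-better signal-to-noise ratio in dB."""
    return -10 * math.log10(sum(1 / y**2 for y in ys) / len(ys))

# First four columns of an L8(2^7) design: factors A-D at levels 0/1.
L8 = [(0, 0, 0, 0), (0, 0, 1, 1), (0, 1, 0, 1), (0, 1, 1, 0),
      (1, 0, 0, 1), (1, 0, 1, 0), (1, 1, 0, 0), (1, 1, 1, 1)]

# One S/N value per run, from hypothetical repeated line counts.
sn = [sn_larger_is_better(ys) for ys in
      [(8, 9, 8), (9, 9, 10), (7, 8, 7), (8, 8, 9),
       (9, 10, 10), (10, 10, 11), (8, 9, 9), (9, 10, 9)]]

# For each factor, keep the level whose runs have the higher mean S/N.
best = []
for f in range(4):
    means = [sum(s for row, s in zip(L8, sn) if row[f] == lv) / 4
             for lv in (0, 1)]
    best.append(0 if means[0] >= means[1] else 1)
print(best)  # optimal level index for factors A-D
```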

  8. In vivo confocal microscopy of the cornea: New developments in image acquisition, reconstruction and analysis using the HRT-Rostock Corneal Module

    PubMed Central

    Petroll, W. Matthew; Robertson, Danielle M.

    2015-01-01

    The optical sectioning ability of confocal microscopy allows high magnification images to be obtained from different depths within a thick tissue specimen, and is thus ideally suited to the study of intact tissue in living subjects. In vivo confocal microscopy has been used in a variety of corneal research and clinical applications since its development over 25 years ago. In this article we review the latest developments in quantitative corneal imaging with the Heidelberg Retinal Tomograph with Rostock Corneal Module (HRT-RCM). We provide an overview of the unique strengths and weaknesses of the HRT-RCM. We discuss techniques for performing 3-D imaging with the HRT-RCM, including hardware and software modifications that allow full thickness confocal microscopy through focusing (CMTF) of the cornea, which can provide quantitative measurements of corneal sublayer thicknesses, stromal cell and extracellular matrix backscatter, and depth dependent changes in corneal keratocyte density. We also review current approaches for quantitative imaging of the subbasal nerve plexus, which require a combination of advanced image acquisition and analysis procedures, including wide field mapping and 3-D reconstruction of nerve structures. The development of new hardware, software, and acquisition techniques continues to expand the number of applications of the HRT-RCM for quantitative in vivo corneal imaging at the cellular level. Knowledge of these rapidly evolving strategies should benefit corneal clinicians and basic scientists alike. PMID:25998608

  9. Image Acquisition in Real Time

    NASA Technical Reports Server (NTRS)

    2003-01-01

    In 1995, Carlos Jorquera left NASA's Jet Propulsion Laboratory (JPL) to focus on erasing the growing void between high-performance cameras and the requisite software to capture and process the resulting digital images. Since his departure from NASA, Jorquera's efforts have not only satisfied the private industry's cravings for faster, more flexible, and more favorable software applications, but have blossomed into a successful entrepreneurship that is making its mark with improvements in fields such as medicine, weather forecasting, and X-ray inspection. Formerly a JPL engineer who constructed imaging systems for spacecraft and ground-based astronomy projects, Jorquera is the founder and president of the three-person firm, Boulder Imaging Inc., based in Louisville, Colorado. Joining Jorquera to round out the Boulder Imaging staff are Chief Operations Engineer Susan Downey, who also gained experience at JPL working on space-bound projects including Galileo and the Hubble Space Telescope, and Vice President of Engineering and Machine Vision Specialist Jie Zhu Kulbida, who has extensive industrial and research and development experience within the private sector.

  10. Image acquisition system for a hospital enterprise

    NASA Astrophysics Data System (ADS)

    Moore, Stephen M.; Beecher, David E.

    1998-07-01

    Hospital enterprises are being created through mergers and acquisitions of existing hospitals. One area of interest in the PACS literature has been the integration of information systems and imaging systems. Hospital enterprises with multiple information and imaging systems provide new challenges to the integration task. This paper describes the requirements at the BJC Health System and a testbed system that is designed to acquire images from a number of different modalities and hospitals. This testbed system is integrated with Project Spectrum at BJC which is designed to provide a centralized clinical repository and a single desktop application for physician review of the patient chart (text, lab values, images).

  11. Effective GPR Data Acquisition and Imaging

    NASA Astrophysics Data System (ADS)

    Sato, M.

    2014-12-01

    We have demonstrated that dense GPR data acquisition, typically with an antenna step increment of less than 1/10 wavelength, can provide clear 3-dimensional subsurface images, and we created 3D GPR images. We are now interested in developing GPR survey methodologies that require less data acquisition time. In order to speed up data acquisition, we are studying efficient antenna positioning for GPR surveys and 3-D imaging algorithms. For example, we have developed a dual sensor "ALIS", which combines GPR with a metal detector (electromagnetic induction sensor) for humanitarian demining and acquires GPR data by hand scanning. ALIS is a pulse radar system with a frequency range of 0.5-3 GHz. The sensor position tracking system has an accuracy of a few cm, and the data spacing is typically more than a few cm, but it can visualize mines about 8 cm in diameter. Two ALIS systems have been deployed by the Cambodian Mine Action Center (CMAC) in minefields in Cambodia since 2009 and have detected more than 80 buried landmines. We are now developing signal processing for "Yakumo", an array-type GPR. Yakumo is an SFCW radar system and a multi-static radar, consisting of 8 transmitter antennas and 8 receiver antennas. We have demonstrated that multi-static data acquisition is not only efficient but also increases the quality of GPR images. Archaeological surveys by Yakumo over large areas, more than 100 m by 100 m, have been conducted to promote recovery from the tsunami that struck East Japan in March 2011. With a conventional GPR system, we are developing an interpolation method for radar signals, and have demonstrated that it can increase the quality of the radar images without increasing the number of data acquisition points. When we acquire a one-dimensional GPR profile along a survey line, we can acquire relatively high-density data sets. However, when we need to relocate the data sets along a "virtual" survey line, for example a

  12. Self-adaptive iris image acquisition system

    NASA Astrophysics Data System (ADS)

    Dong, Wenbo; Sun, Zhenan; Tan, Tieniu; Qiu, Xianchao

    2008-03-01

    Iris image acquisition is the fundamental step of iris recognition, but capturing high-resolution iris images in real time is very difficult. The most common systems have a small capture volume and demand that users fully cooperate with the machine, which has become the bottleneck for the application of iris recognition. In this paper, we aim at building an active iris image acquisition system that is self-adaptive to users. Two low-resolution cameras are co-located on a pan-tilt unit (PTU), for face and iris image acquisition respectively. Once the face camera detects a face region in the real-time video, the system steers the PTU towards the eye region and automatically zooms, until the iris camera captures a clear iris image for recognition. Compared with other similar works, our contribution is that we use low-resolution cameras, which can transmit image data much faster and are much cheaper than high-resolution cameras. In the system, we use Haar-like cascaded features to detect faces and eyes, a linear transformation to predict the iris camera's position, and a simple heuristic PTU control method to track the eyes. A prototype device has been built, and experiments show that our system can automatically capture a high-quality iris image within a volume of 0.6m×0.4m×0.4m in 3 to 5 seconds on average.

  13. Retinal Imaging and Image Analysis

    PubMed Central

    Abràmoff, Michael D.; Garvin, Mona K.; Sonka, Milan

    2011-01-01

    Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of blindness in the industrialized world that includes age-related macular degeneration, diabetic retinopathy, and glaucoma, the review is devoted to retinal imaging and image analysis methods and their clinical implications. Methods for 2-D fundus imaging and techniques for 3-D optical coherence tomography (OCT) imaging are reviewed. Special attention is given to quantitative techniques for analysis of fundus photographs with a focus on clinically relevant assessment of retinal vasculature, identification of retinal lesions, assessment of optic nerve head (ONH) shape, building retinal atlases, and to automated methods for population screening for retinal diseases. A separate section is devoted to 3-D analysis of OCT images, describing methods for segmentation and analysis of retinal layers, retinal vasculature, and 2-D/3-D detection of symptomatic exudate-associated derangements, as well as to OCT-based analysis of ONH morphology and shape. Throughout the paper, aspects of image acquisition, image analysis, and clinical relevance are treated together considering their mutually interlinked relationships. PMID:21743764

  14. MONSOON: Image Acquisition System or "Pixel Server"

    NASA Astrophysics Data System (ADS)

    Starr, Barry M.; Buchholz, Nick C.; Rahmer, Gustavo; Penegor, Gerald; Schmidt, Ricardo E.; Warner, Michael; Merrill, Michael; Claver, Charles F.; Ho, Y.; Chopra, K. N.; Shroff, C.; Shroff, D.

    2003-03-01

    The MONSOON Image Acquisition System has been designed to meet the need for scalable, multichannel, high-speed image acquisition required for the next-generation optical and infrared detectors and mosaic projects currently under development at NOAO, as described in other papers in these proceedings, such as ORION, NEWFIRM, QUOTA, ODI and LSST. These new systems, with their large scale (64 to 2000 channels) and high performance (up to 1 Gbyte/s), raise new challenges in terms of communication bandwidth, data storage and data processing requirements which are not adequately met by existing astronomical controllers. In order to meet this demand, new techniques have been defined for not only a new detector controller, but rather a new image acquisition architecture. These extremely large scale imaging systems also raise less obvious concerns in previously neglected areas of controller design such as physical size and form factor issues, power dissipation and cooling near the telescope, system assembly/test/integration time, reliability, and total cost of ownership. At NOAO we have made efforts to look outside the astronomical community for solutions found in other disciplines to similar classes of problems. A large number of the challenges raised by these system needs are already being faced successfully in other areas such as telecommunications, instrumentation and aerospace. Efforts have also been made to use true commercial off-the-shelf (COTS) system elements, and to find truly technology-independent solutions for a number of system design issues whenever possible. The MONSOON effort is a full-disclosure development effort by NOAO in collaboration with the CARA ASTEROID project for the benefit of the astronomical community.

  15. Colony image acquisition and genetic segmentation algorithm and colony analyses

    NASA Astrophysics Data System (ADS)

    Wang, W. X.

    2012-01-01

    Colony analysis is used in a large number of fields such as food, dairy, beverages, hygiene, environmental monitoring, water, toxicology and sterility testing. In order to reduce labor and increase analysis accuracy, many researchers and developers have built image analysis systems. The main problems in such systems are image acquisition, image segmentation and image analysis. In this paper, to acquire colony images with good quality, an illumination box was constructed. In the box, the distances between lights and dish, camera lens and lights, and camera lens and dish are adjusted optimally. Image segmentation is based on a genetic approach that allows one to treat the segmentation problem as a global optimization. After image pre-processing and image segmentation, the colony analyses are performed. The colony image analysis consists of (1) basic colony parameter measurements; (2) colony size analysis; (3) colony shape analysis; and (4) colony surface measurements. All of the above visual colony parameters can be selected and combined to form new engineering parameters. The colony analysis can be applied in different applications.
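
The "basic colony parameter" step can be made concrete with a toy sketch: area, perimeter and circularity (4πA/P²) measured on a binary colony mask. This is a pure-Python illustration under the assumption of a 4-neighbour perimeter; a real system would first segment the mask, e.g. with the genetic approach described above:

```python
import math

def colony_params(mask):
    """Area, 4-neighbour perimeter and circularity of a binary mask."""
    h, w = len(mask), len(mask[0])
    area = sum(sum(row) for row in mask)
    perim = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                # each exposed 4-neighbour edge contributes to the perimeter
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < h and 0 <= nx < w) or not mask[ny][nx]:
                        perim += 1
    circ = 4 * math.pi * area / perim**2 if perim else 0.0
    return area, perim, circ

print(colony_params([[1, 1], [1, 1]]))  # (4, 8, ~0.785)
```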

  16. Prospector: A web-based tool for rapid acquisition of gold standard data for pathology research and image analysis

    PubMed Central

    Wright, Alexander I.; Magee, Derek R.; Quirke, Philip; Treanor, Darren E.

    2015-01-01

    Background: Obtaining ground truth for pathological images is essential for various experiments, especially for training and testing image analysis algorithms. However, obtaining pathologist input is often difficult, time consuming and expensive. This leads to algorithms being over-fitted to small datasets, and inappropriate validation, which causes poor performance on real world data. There is a great need to gather data from pathologists in a simple and efficient manner, in order to maximise the amount of data obtained. Methods: We present a lightweight, web-based HTML5 system for administering and participating in data collection experiments. The system is designed for rapid input with minimal effort, and can be accessed from anywhere in the world with a reliable internet connection. Results: We present two case studies that use the system to assess how limitations on fields of view affect pathologist agreement, and to what extent poorly stained slides affect judgement. In both cases, the system collects pathologist scores at a rate of less than two seconds per image. Conclusions: The system has multiple potential applications in pathology and other domains. PMID:26110089

  17. Image acquisition in laparoscopic and endoscopic surgery

    NASA Astrophysics Data System (ADS)

    Gill, Brijesh S.; Georgeson, Keith E.; Hardin, William D., Jr.

    1995-04-01

    Laparoscopic and endoscopic surgery rely uniquely on high quality display of acquired images, but a multitude of problems plague the researcher who attempts to reproduce such images for educational purposes. Some of these are intrinsic limitations of current laparoscopic/endoscopic visualization systems, while others are artifacts solely of the process used to acquire and reproduce such images. Whatever the genesis of these problems, a glance at current literature will reveal the extent to which endoscopy suffers from an inability to reproduce what the surgeon sees during a procedure. The major intrinsic limitation to the acquisition of high-quality still images from laparoscopic procedures lies in the inability to couple directly a camera to the laparoscope. While many systems have this capability, this is useful mostly for otolaryngologists, who do not maintain a sterile field around their scopes. For procedures in which a sterile field must be maintained, one trial method has been to use a beam splitter to send light both to the still camera and the digital video camera. This is no solution, however, since this results in low quality still images as well as a degradation of the image that the surgeon must use to operate, something no surgeon tolerates lightly. Researchers thus must currently rely on other methods for producing images from a laparoscopic procedure. Most manufacturers provide an optional slide or print maker that provides a hardcopy output from the processed composite video signal. The results achieved from such devices are marginal, to say the least. This leaves only one avenue for possible image production, the videotape record of an endoscopic or laparoscopic operation. Video frame grabbing is at least a problem to which industry has applied considerable time and effort to solving. Our own experience with computerized enhancement of videotape frames has been very promising. Computer enhancement allows the researcher to correct several of the

  18. Quantitative ADF STEM: acquisition, analysis and interpretation

    NASA Astrophysics Data System (ADS)

    Jones, L.

    2016-01-01

    Quantitative annular dark-field in the scanning transmission electron microscope (ADF STEM), where image intensities are used to provide composition and thickness measurements, has enjoyed a renaissance during the last decade. Now in a post aberration-correction era many aspects of the technique are being revisited. Here the recent progress and emerging best-practice for such aberration corrected quantitative ADF STEM is discussed including issues relating to proper acquisition of experimental data and its calibration, approaches for data analysis, the utility of such data, its interpretation and limitations.

  19. Image acquisition planning for the CHRIS sensor onboard PROBA

    NASA Astrophysics Data System (ADS)

    Fletcher, Peter A.

    2004-10-01

    The CHRIS (Compact High Resolution Imaging Spectrometer) instrument was launched onboard the European Space Agency (ESA) PROBA satellite on 22 October 2001. CHRIS can acquire up to 63 bands of hyperspectral data at a ground spatial resolution of 36 m. Alternatively, the instrument can be configured to acquire 18 bands of data with a spatial resolution of 17 m. PROBA, by virtue of its agile pointing capability, enables CHRIS to acquire five images of the selected site at different angles. Two sites can be acquired every 24 hours. The hyperspectral and multi-angle capability of CHRIS makes it an important resource for studying BRDF phenomena of vegetation. Other applications include coastal and inland waters, wild fires, education and public relations. An effective data acquisition planning procedure has been implemented, and since mid-2002 users have been receiving data for analysis. A cloud prediction routine has been adopted that maximises the image acquisition capacity of CHRIS-PROBA. Image acquisition planning is carried out by RSAC Ltd on behalf of ESA, in co-operation with Sira Technology Ltd and Redu, the ESA ground station in Belgium responsible for CHRIS-PROBA.

  20. Camera settings for UAV image acquisition

    NASA Astrophysics Data System (ADS)

    O'Connor, James; Smith, Mike J.; James, Mike R.

    2016-04-01

    The acquisition of aerial imagery has become more ubiquitous than ever in the geosciences due to the advent of consumer-grade UAVs capable of carrying imaging devices. These allow the collection of high spatial resolution data in a timely manner with little expertise. Conversely, the cameras/lenses used to acquire this imagery are often given less thought, and can be unfit for purpose. Given weight constraints which are frequently an issue with UAV flights, low-payload UAVs (<1 kg) limit the types of cameras/lenses which could potentially be used for specific surveys, and therefore the quality of imagery which can be acquired. This contribution discusses these constraints, which need to be considered when selecting a camera/lens for conducting a UAV survey and how they can best be optimized. These include balancing of the camera exposure triangle (ISO, Shutter speed, Aperture) to ensure sharp, well exposed imagery, and its interactions with other camera parameters (Sensor size, Focal length, Pixel pitch) as well as UAV flight parameters (height, velocity).
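
The interactions named above between sensor parameters (pixel pitch, focal length), flight parameters (height, velocity) and shutter speed can be illustrated with two standard back-of-envelope relations: ground sample distance and forward motion blur. The numbers below are illustrative assumptions, not values from the abstract:

```python
def gsd_m(pixel_pitch_m, height_m, focal_length_m):
    """Ground sample distance: ground distance covered by one pixel."""
    return pixel_pitch_m * height_m / focal_length_m

def motion_blur_px(velocity_ms, shutter_s, gsd):
    """Forward motion during the exposure, expressed in pixels."""
    return velocity_ms * shutter_s / gsd

g = gsd_m(4.8e-6, 100.0, 0.024)        # 4.8 um pitch, 100 m AGL, 24 mm lens
print(g)                               # ~0.02 m per pixel
print(motion_blur_px(10.0, 1 / 1000, g))  # ~0.5 px at 10 m/s, 1/1000 s
```

Keeping the blur well under one pixel is one way to trade shutter speed against ISO and aperture in the exposure triangle the abstract refers to.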

  1. Reproducible high-resolution multispectral image acquisition in dermatology

    NASA Astrophysics Data System (ADS)

    Duliu, Alexandru; Gardiazabal, José; Lasser, Tobias; Navab, Nassir

    2015-07-01

    Multispectral image acquisitions are increasingly popular in dermatology, due to their improved spectral resolution which enables better tissue discrimination. Most applications however focus on restricted regions of interest, imaging only small lesions. In this work we present and discuss an imaging framework for high-resolution multispectral imaging on large regions of interest.

  2. Age of Acquisition and Imageability: A Cross-Task Comparison

    ERIC Educational Resources Information Center

    Ploetz, Danielle M.; Yates, Mark

    2016-01-01

    Previous research has reported an imageability effect on visual word recognition. Words that are high in imageability are recognised more rapidly than are those lower in imageability. However, later researchers argued that imageability was confounded with age of acquisition. In the current research, these two factors were manipulated in a…

  3. An Analysis of Spacecraft Localization from Descent Image Data for Pinpoint Landing on Mars and other Cratered Bodies Data Acquisition

    NASA Technical Reports Server (NTRS)

    Ansar, Adnan; Cheng, Yang

    2005-01-01

    A pinpoint landing capability will be a critical component of many planned NASA missions to Mars and beyond. Implicit in the requirement is the ability to accurately localize the spacecraft with respect to the terrain during descent. In this paper, we present evidence that a vision-based solution using craters as landmarks is both practical and able to meet the requirements of next-generation missions. Our emphasis in this paper is on the feasibility of such a system in terms of (a) localization accuracy and (b) applicability to Martian terrain. We show that accuracy of well under 100 meters can be expected under suitable conditions. We also present a sensitivity analysis that makes an explicit connection between input data and the robustness of our pose estimate. In addition, we present an analysis of the susceptibility of our technique to inherently ambiguous configurations of craters, and show that the probability of failure due to such ambiguity is extremely small.

  4. Simultaneous acquisition of differing image types

    DOEpatents

    Demos, Stavros G

    2012-10-09

    A system in one embodiment includes an image forming device for forming an image from an area of interest containing different image components; an illumination device for illuminating the area of interest with light containing multiple components; at least one light source coupled to the illumination device, the at least one light source providing light to the illumination device containing different components, each component having distinct spectral characteristics and relative intensity; an image analyzer coupled to the image forming device, the image analyzer decomposing the image formed by the image forming device into multiple component parts based on type of imaging; and multiple image capture devices, each image capture device receiving one of the component parts of the image. A method in one embodiment includes receiving an image from an image forming device; decomposing the image formed by the image forming device into multiple component parts based on type of imaging; receiving the component parts of the image; and outputting image information based on the component parts of the image. Additional systems and methods are presented.

  5. Data acquisition for a medical imaging MWPC detector

    NASA Astrophysics Data System (ADS)

    McKee, B. T. A.; Harvey, P. J.; MacPhail, J. D.

    1991-12-01

    Multiwire proportional chambers, combined with drilled Pb converter stacks, are used as position sensitive gamma-ray detectors for medical imaging at Queen's University. This paper describes novel features of the address readout and data acquisition system. To obtain the interaction position, induced charges from wires in each cathode plane are combined using a three-level encoding scheme into 16 channels for amplification and discrimination, and then decoded within 150 ns using a lookup table in a 64 Kbyte EPROM. A custom interface card in an AT-class personal computer provides handshaking, rate buffering, and diagnostic capabilities for the detector data. Real-time software controls the data transfer and provides extensive monitor and control functions. The data are then transferred through an Ethernet link to a workstation for subsequent image analysis.
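
The encode-then-lookup idea in the abstract can be sketched in software. The actual three-level encoding scheme is not specified, so the 2-of-16 code and the 64-wire plane below are hypothetical; only the idea of a precomputed table standing in for the 64 Kbyte EPROM comes from the text:

```python
def encode(wire):
    """Hypothetical 2-of-16 code: one 'fine' channel (0-7) plus one
    'coarse' channel (8-15) per wire; supports 64 wires per plane."""
    assert 0 <= wire < 64
    return (1 << (wire % 8)) | (1 << (8 + wire // 8))

# Build the lookup table once: this plays the role of the EPROM contents.
table = {encode(w): w for w in range(64)}

def decode(pattern):
    """Single constant-time lookup, like the 150 ns EPROM decode."""
    return table.get(pattern)

print(decode(encode(42)))  # 42
```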

  6. Chemical Applications of a Programmable Image Acquisition System

    NASA Astrophysics Data System (ADS)

    Ogren, Paul J.; Henry, Ian; Fletcher, Steven E. S.; Kelly, Ian

    2003-06-01

    Image analysis is widely used in chemistry, both for rapid qualitative evaluations using techniques such as thin layer chromatography (TLC) and for quantitative purposes such as well-plate measurements of analyte concentrations or fragment-size determinations in gel electrophoresis. This paper describes a programmable system for image acquisition and processing that is currently used in the laboratories of our organic and physical chemistry courses. It has also been used in student research projects in analytical chemistry and biochemistry. The potential range of applications is illustrated by brief presentations of four examples: (1) using well-plate optical transmission data to construct a standard concentration absorbance curve; (2) the quantitative analysis of acetaminophen in Tylenol and acetylsalicylic acid in aspirin using TLC with fluorescence detection; (3) the analysis of electrophoresis gels to determine DNA fragment sizes and amounts; and, (4) using color change to follow reaction kinetics. The supplemental material in JCE Online contains information on two additional examples: deconvolution of overlapping bands in protein gel electrophoresis, and the recovery of data from published images or graphs. The JCE Online material also presents additional information on each example, on the system hardware and software, and on the data analysis methodology.

  7. The electron spectroscopy for chemical analysis microscopy beamline data acquisition system at ELETTRA

    NASA Astrophysics Data System (ADS)

    Gariazzo, C.; Krempaska, R.; Morrison, G. R.

    1996-07-01

    The electron spectroscopy for chemical analysis (ESCA) microscopy data acquisition system enables the user to control the imaging and spectroscopy modes of operation of the ESCA microscopy beamline at ELETTRA. It allows the user to integrate all experiment, beamline and machine operations in one single environment. The system also provides simple data analysis for both spectral and image data to guide further data acquisition.

  8. System of acquisition and processing of images of dynamic speckle

    NASA Astrophysics Data System (ADS)

    Vega, F.; Torres, C.

    2015-01-01

    In this paper we show the design and implementation of a system for the capture and analysis of dynamic speckle. The device consists of a USB camera, an isolated lighting system for imaging, a 633 nm, 10 mW laser pointer as the coherent light source, a diffuser and a laptop for processing the video. The equipment enables the acquisition and storage of video, and also calculates different statistical descriptors (global activity accumulation vector, activity accumulation matrix, cross-correlation vector, autocorrelation coefficient, Fujii matrix, etc.). The equipment is designed so that it can be taken directly to the site of the biological sample under study, and it is currently being used in research projects within the group.
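
One of the descriptors named above, the Fujii matrix, has a standard per-pixel form F(x,y) = Σ_t |I_t − I_{t+1}| / (I_t + I_{t+1}) over consecutive frames. A toy pure-Python sketch, with invented frame data for illustration:

```python
def fujii(frames):
    """Fujii activity matrix of a stack of equally sized grayscale frames."""
    h, w = len(frames[0]), len(frames[0][0])
    F = [[0.0] * w for _ in range(h)]
    for a, b in zip(frames, frames[1:]):          # consecutive frame pairs
        for y in range(h):
            for x in range(w):
                s = a[y][x] + b[y][x]
                if s:                             # skip zero-intensity pixels
                    F[y][x] += abs(a[y][x] - b[y][x]) / s
    return F

frames = [[[10, 0]], [[30, 0]], [[10, 0]]]        # 3 frames, 1x2 pixels
print(fujii(frames))  # [[1.0, 0.0]]: active pixel vs. static pixel
```

High values of F mark pixels with strong speckle activity, which is what makes the descriptor useful for biological samples.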

  9. Towards a Platform for Image Acquisition and Processing on RASTA

    NASA Astrophysics Data System (ADS)

    Furano, Gianluca; Guettache, Farid; Magistrati, Giorgio; Tiotto, Gabriele

    2013-08-01

    This paper presents the architecture of a platform for image acquisition and processing based on commercial hardware and space qualified hardware. The aim is to extend the Reference Architecture Test-bed for Avionics (RASTA) system in order to obtain a Test-bed that allows testing different hardware and software solutions in the field of image acquisition and processing. The platform will allow the integration of space qualified hardware and Commercial Off The Shelf (COTS) hardware in order to test different architectural configurations. The first implementation is being performed on a low cost commercial board and on the GR712RC board based on the Dual Core Leon3 fault tolerant processor. The platform will include an actuation module with the aim of implementing a complete pipeline from image acquisition to actuation, making possible the simulation of a real case scenario involving acquisition and actuation.

  10. Networks for image acquisition, processing and display

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.

    1990-01-01

    The human visual system comprises layers of networks which sample, process, and code images. Understanding these networks is a valuable means of understanding human vision and of designing autonomous vision systems based on network processing. Ames Research Center has an ongoing program to develop computational models of such networks. The models predict human performance in detection of targets and in discrimination of displayed information. In addition, the models are artificial vision systems sharing properties with biological vision that has been tuned by evolution for high performance. Properties include variable density sampling, noise immunity, multi-resolution coding, and fault-tolerance. The research stresses analysis of noise in visual networks, including sampling, photon, and processing unit noises. Specific accomplishments include: models of sampling array growth with variable density and irregularity comparable to that of the retinal cone mosaic; noise models of networks with signal-dependent and independent noise; models of network connection development for preserving spatial registration and interpolation; multi-resolution encoding models based on hexagonal arrays (HOP transform); and mathematical procedures for simplifying analysis of large networks.

  11. Image Acquisition and Quality in Digital Radiography.

    PubMed

    Alexander, Shannon

    2016-09-01

    Medical imaging has undergone dramatic changes and technological breakthroughs since the introduction of digital radiography. This article presents information on the development of digital radiography and types of digital radiography systems. Aspects of image quality and radiation exposure control are highlighted as well. In addition, the article includes related workplace changes and medicolegal considerations in the digital radiography environment. PMID:27601691

  12. TOM software toolbox: acquisition and analysis for electron tomography.

    PubMed

    Nickell, Stephan; Förster, Friedrich; Linaroudis, Alexandros; Net, William Del; Beck, Florian; Hegerl, Reiner; Baumeister, Wolfgang; Plitzko, Jürgen M

    2005-03-01

    Automated data acquisition procedures have changed the perspectives of electron tomography (ET) in a profound manner. Elaborate data acquisition schemes with autotuning functions minimize exposure of the specimen to the electron beam and sophisticated image analysis routines retrieve a maximum of information from noisy data sets. "TOM software toolbox" integrates established algorithms and new concepts tailored to the special needs of low dose ET. It provides a user-friendly unified platform for all processing steps: acquisition, alignment, reconstruction, and analysis. Designed as a collection of computational procedures it is a complete software solution within a highly flexible framework. TOM represents a new way of working with the electron microscope and can serve as the basis for future high-throughput applications.

  13. Payload Configurations for Efficient Image Acquisition - Indian Perspective

    NASA Astrophysics Data System (ADS)

    Samudraiah, D. R. M.; Saxena, M.; Paul, S.; Narayanababu, P.; Kuriakose, S.; Kiran Kumar, A. S.

    2014-11-01

    sounder for providing vertical profiles of water vapour, temperature, etc. The same system has data relay transponders for acquiring data from weather stations. The payload configurations have gone through significant changes over the years to increase the data rate per kilogram of payload. Future Indian remote sensing systems are planned with highly efficient modes of image acquisition. This paper analyses the strides taken by ISRO (Indian Space Research Organisation) in achieving high efficiency in remote sensing image data acquisition. Parameters related to the efficiency of image data acquisition are defined and a methodology is worked out to compute them. Some of the Indian payloads are analysed with respect to the system/subsystem parameters that decide the configuration of the payload. Based on the analysis, possible configuration approaches that can provide high efficiency are identified. A case study is carried out with an improved configuration and the resulting efficiency improvements are reported. This methodology may be used for assessing other electro-optical payloads or missions and can be extended to other types of payloads and missions.

  14. A design of camera simulator for photoelectric image acquisition system

    NASA Astrophysics Data System (ADS)

    Cai, Guanghui; Liu, Wen; Zhang, Xin

    2015-02-01

    In the process of developing photoelectric image acquisition equipment, its function and performance must be verified. To let the photoelectric device replay previously recorded image data during debugging and testing, a design scheme for a camera simulator is presented. In this system, with an FPGA as the control core, image data is saved to NAND flash through the USB 2.0 bus. Because the access rate of the NAND flash is too slow to meet the system's requirements, pipelining and high-bandwidth bus techniques are applied in the design to improve the storage rate. The FPGA control logic reads the image data out of flash and outputs it separately over three interfaces: Camera Link, LVDS, and PAL, providing image data for the debugging and algorithm validation of photoelectric image acquisition equipment. However, because standard PAL resolution is 720*576 and may differ from the input image resolution, the image is output after resolution conversion. The experimental results demonstrate that the camera simulator correctly outputs image sequences in all three formats, which can be captured and displayed by a frame grabber. The three-format image data can meet the test requirements of most equipment, shortening debugging time and improving test efficiency.

  15. Efficient lossy compression for compressive sensing acquisition of images in compressive sensing imaging systems.

    PubMed

    Li, Xiangwei; Lan, Xuguang; Yang, Meng; Xue, Jianru; Zheng, Nanning

    2014-12-05

    Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, the general image compression solution may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of the CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which could achieve better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the obtained CS measurements from CS acquisition without knowing any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4-2 dB compared with the current state of the art, while maintaining a low computational complexity.
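
    The quantization stage can be illustrated with a plain uniform scalar quantizer; this is a generic stand-in for the paper's universal quantization, with an arbitrary step size:

```python
import numpy as np

def quantize(measurements, step):
    # Map each CS measurement to an integer bin index (uniform quantizer).
    return np.round(np.asarray(measurements) / step).astype(int)

def dequantize(indices, step):
    # Reconstruct measurement values at the bin centers.
    return np.asarray(indices) * step

y = np.array([0.31, -1.27, 2.04, 0.5])   # made-up CS measurements
q = quantize(y, step=0.1)
y_hat = dequantize(q, step=0.1)
# Round-trip error is bounded by half the step size
```

    The integer indices `q` are what an entropy coder would then compress; the step size controls the rate-distortion trade-off.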

  16. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    PubMed Central

    Li, Xiangwei; Lan, Xuguang; Yang, Meng; Xue, Jianru; Zheng, Nanning

    2014-01-01

    Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, the general image compression solution may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of the CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which could achieve better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the obtained CS measurements from CS acquisition without knowing any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4-2 dB compared with the current state of the art, while maintaining a low computational complexity. PMID:25490597

  17. Spatial arrangement of color filter array for multispectral image acquisition

    NASA Astrophysics Data System (ADS)

    Shrestha, Raju; Hardeberg, Jon Y.; Khan, Rahat

    2011-03-01

    In the past few years there has been a significant volume of research in the field of multispectral image acquisition. Most of this work has focused on multispectral image acquisition systems that require multiple sequential shots (e.g., systems based on filter wheels, liquid crystal tunable filters, or active lighting). Recently, an alternative approach for one-shot multispectral image acquisition has been proposed, based on an extension of the color filter array (CFA) standard to produce more than three channels; we can thus introduce the concept of a multispectral color filter array (MCFA). This field has not been much explored, however, and in particular little attention has been given to systems aimed at reconstructing scene spectral reflectance. In this paper, we explore how the spatial arrangement of a multispectral color filter array affects acquisition accuracy, constructing MCFAs of different sizes. We simulate acquisitions of several spectral scenes using different numbers of filters/channels and compare the results with those obtained by the conventional regular MCFA arrangement, evaluating the precision of the reconstructed scene spectral reflectance in terms of spectral RMS error and colorimetric ΔE*ab color differences. We find that the precision and quality of the reconstructed images are significantly influenced by the spatial arrangement of the MCFA, and that this effect becomes more prominent as the number of channels increases. We believe that MCFA-based systems can be a viable alternative for affordable acquisition of multispectral color images, in particular for applications where spatial resolution can be traded off for spectral resolution. We show that the spatial arrangement of the array is an important design issue.
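
    The spectral RMS error used to evaluate the reconstructions is the root-mean-square difference between the reference and reconstructed reflectance spectra. A minimal sketch (the spectra below are made up):

```python
import numpy as np

def spectral_rms(reference, reconstructed):
    """RMS error between reflectance spectra, computed along the last
    axis, so it works per pixel for an (H, W, n_bands) image."""
    diff = np.asarray(reference) - np.asarray(reconstructed)
    return np.sqrt(np.mean(diff ** 2, axis=-1))

# Single made-up spectrum with two bands reconstructed slightly off
ref = np.array([0.2, 0.4, 0.6, 0.8])
rec = np.array([0.2, 0.5, 0.6, 0.7])
err = spectral_rms(ref, rec)
```

    Averaging `err` over all pixels of a simulated scene gives a single score per MCFA arrangement, which is how arrangements can be ranked.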

  18. Guidelines for dynamic data acquisition and analysis

    NASA Technical Reports Server (NTRS)

    Piersol, Allan G.

    1992-01-01

    The recommendations concerning pyroshock data presented in the final draft of a proposed military handbook on Guidelines for Dynamic Data Acquisition and Analysis are reviewed. The structural responses produced by pyroshocks are considered to be one of the most difficult types of dynamic data to accurately measure and analyze.

  19. Single Acquisition Quantitative Single Point Electron Paramagnetic Resonance Imaging

    PubMed Central

    Jang, Hyungseok; Subramanian, Sankaran; Devasahayam, Nallathamby; Saito, Keita; Matsumoto, Shingo; Krishna, Murali C; McMillan, Alan B

    2013-01-01

    Purpose Electron paramagnetic resonance imaging (EPRI) has emerged as a promising non-invasive technology to dynamically image tissue oxygenation. Due to its extremely short spin-spin relaxation times, EPRI benefits from a single-point imaging (SPI) scheme where the entire FID signal is captured using pure phase encoding. However, direct T2*/pO2 quantification is inhibited due to constant magnitude gradients which result in time-decreasing FOV. Therefore, conventional acquisition techniques require repeated imaging experiments with differing gradient amplitudes (typically 3), which results in long acquisition time. Methods In this study, gridding was evaluated as a method to reconstruct images with equal FOV to enable direct T2*/pO2 quantification within a single imaging experiment. Additionally, an enhanced reconstruction technique that shares high spatial k-space regions throughout different phase encoding time delays was investigated (k-space extrapolation). Results The combined application of gridding and k-space extrapolation enables pixelwise quantification of T2* from a single acquisition with improved image quality across a wide range of phase encoding delay times. The calculated T2*/pO2 does not vary across this time range. Conclusion By utilizing gridding and k-space extrapolation, accurate T2*/pO2 quantification can be achieved within a single dataset to allow enhanced temporal resolution (by a factor of 3). PMID:23913515
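
    The pixelwise T2* quantification amounts to fitting a monoexponential decay S(t) = S0 exp(-t/T2*) to the signal across phase-encoding delay times, which becomes a linear fit on a log scale. A minimal sketch with synthetic data (the log-linear estimator and all values here are illustrative choices, not the paper's stated method):

```python
import numpy as np

def fit_t2star(signals, delays):
    """Estimate T2* from signal magnitudes acquired at several
    phase-encoding delays via a log-linear least-squares fit."""
    slope, _ = np.polyfit(np.asarray(delays, float),
                          np.log(np.asarray(signals, float)), 1)
    return -1.0 / slope

# Synthetic decay with T2* = 2.0 (arbitrary time units)
delays = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
signals = 10.0 * np.exp(-delays / 2.0)
t2star = fit_t2star(signals, delays)
```

    In the single-acquisition scheme, the different "delays" come from one dataset (via gridding and k-space extrapolation) rather than from repeated experiments.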

  20. Automatic image acquisition processor and method

    DOEpatents

    Stone, William J.

    1986-01-01

    A computerized method and point location system apparatus is disclosed for ascertaining the center of a primitive or fundamental object whose shape and approximate location are known. The technique involves obtaining an image of the object, selecting a trial center, and generating a locus of points having a predetermined relationship with the center. Such a locus of points could include a circle. The number of points overlying the object in each quadrant is obtained and the counts of these points per quadrant are compared. From this comparison, error signals are provided to adjust the relative location of the trial center. This is repeated until the trial center overlies the geometric center within the predefined accuracy limits.
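
    The iteration described above can be sketched roughly as follows; this is a loose illustration of the quadrant-count idea, not the patented implementation, and the disk image, step size, and half-plane comparisons are choices made here for brevity:

```python
import numpy as np

def locate_center(mask, trial, radius, step=0.5, iters=60):
    """Refine a trial center for a filled object in a binary image:
    sample a circle of points around the trial center, count how many
    overlie the object on each side, and nudge the center toward the
    side with the larger count until the counts balance."""
    cy, cx = float(trial[0]), float(trial[1])
    angles = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    sin_a, cos_a = np.sin(angles), np.cos(angles)
    h, w = mask.shape
    for _ in range(iters):
        ys = np.clip((cy + radius * sin_a).astype(int), 0, h - 1)
        xs = np.clip((cx + radius * cos_a).astype(int), 0, w - 1)
        on = mask[ys, xs].astype(bool)
        # Error signals from the left/right and up/down count imbalance
        dx = int(on[cos_a > 0].sum()) - int(on[cos_a < 0].sum())
        dy = int(on[sin_a > 0].sum()) - int(on[sin_a < 0].sum())
        if dx == 0 and dy == 0:
            break
        cx += step * np.sign(dx)
        cy += step * np.sign(dy)
    return cy, cx

# Synthetic test object: a filled disk of radius 20 centered at (50, 60)
yy, xx = np.mgrid[0:100, 0:120]
disk = (yy - 50) ** 2 + (xx - 60) ** 2 <= 20 ** 2
center = locate_center(disk, trial=(44.0, 53.0), radius=20)
```

    Starting from a deliberately offset trial center, the refined center converges to within about a pixel of the true disk center.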

  1. Automatic image acquisition processor and method

    DOEpatents

    Stone, W.J.

    1984-01-16

    A computerized method and point location system apparatus is disclosed for ascertaining the center of a primitive or fundamental object whose shape and approximate location are known. The technique involves obtaining an image of the object, selecting a trial center, and generating a locus of points having a predetermined relationship with the center. Such a locus of points could include a circle. The number of points overlying the object in each quadrant is obtained and the counts of these points per quadrant are compared. From this comparison, error signals are provided to adjust the relative location of the trial center. This is repeated until the trial center overlies the geometric center within the predefined accuracy limits.

  2. Imaging and Data Acquisition in Clinical Trials for Radiation Therapy.

    PubMed

    FitzGerald, Thomas J; Bishop-Jodoin, Maryann; Followill, David S; Galvin, James; Knopp, Michael V; Michalski, Jeff M; Rosen, Mark A; Bradley, Jeffrey D; Shankar, Lalitha K; Laurie, Fran; Cicchetti, M Giulia; Moni, Janaki; Coleman, C Norman; Deye, James A; Capala, Jacek; Vikram, Bhadrasain

    2016-02-01

    Cancer treatment evolves through oncology clinical trials. Cancer trials are multimodal and complex. Assuring that high-quality data are available to answer not only study objectives but also questions not anticipated at study initiation is the role of quality assurance. The National Cancer Institute reorganized its cancer clinical trials program in 2014. The National Clinical Trials Network (NCTN) was formed, and within it was established a Diagnostic Imaging and Radiation Therapy Quality Assurance Organization. This organization, the Imaging and Radiation Oncology Core Group, consists of 6 quality assurance centers that provide imaging and radiation therapy quality assurance for the NCTN. Sophisticated imaging is used for cancer diagnosis, treatment, and management, as well as for image-driven technologies to plan and execute radiation treatment. Integration of imaging and radiation oncology data acquisition, review, management, and archive strategies is essential for trial compliance and future research. Lessons learned from previous trials provide evidence to support diagnostic imaging and radiation therapy data acquisition in NCTN trials.

  3. Applications of digital image acquisition in anthropometry

    NASA Technical Reports Server (NTRS)

    Woolford, B.; Lewis, J. L.

    1981-01-01

    A description is given of a video kinesimeter, a device for the automatic real-time collection of kinematic and dynamic data. Based on the detection of a single bright spot by three TV cameras, the system provides automatic real-time recording of three-dimensional position and force data. It comprises three cameras, two incandescent lights, a voltage comparator circuit, a central control unit, and a mass storage device. The control unit determines the signal threshold for each camera before testing, sequences the lights, synchronizes and analyzes the scan voltages from the three cameras, digitizes force from a dynamometer, and codes the data for transmission to a floppy disk for recording. Two of the three cameras face each other along the 'X' axis; the third camera, which faces the center of the line between the first two, defines the 'Y' axis. An image from the 'Y' camera and either 'X' camera is necessary for determining the three-dimensional coordinates of the point.

  4. TARA control, data acquisition and analysis system

    SciTech Connect

    Gaudreau, M.P.J.; Sullivan, J.D.; Fredian, T.W.; Karcher, C.A.; Sevillano, E.; Stillerman, J.; Thomas, P.

    1983-12-01

    The MIT tandem mirror (TARA) control, data acquisition and analysis system consists of two major parts: (1) a Gould 584 industrial programmable controller (PC) to control engineering functions; and (2) a VAX 11/750 based data acquisition and analysis system for physics analysis. The PC is designed for use in harsh industrial environments and has proven to be a reliable and cost-effective means for automated control. The PC configuration is dedicated to control tasks on the TARA magnet, vacuum, RF, neutral beam, diagnostics, and utility systems. The data transfer functions are used to download system operating parameters from menu selectable tables. Real time status reports are provided to video terminals and as blocks of data to the host computer for storage. The data acquisition and analysis system for TARA was designed to provide high throughput and ready access to data from earlier runs. The adopted design uses pre-existing software packages in a system which is simple, coherent, fast, and which requires a minimum of new software development. The computer configuration is a VAX 11/750 running VMS with 124 M byte massbus disk and 1.4 G byte unibus disk subsystem.

  5. The ADIS advanced data acquisition, imaging, and storage system

    SciTech Connect

    Flaherty, J.W.

    1986-01-01

    The design and development of Automated Ultrasonic Scanning Systems (AUSS) by McDonnell Aircraft Company has provided the background for the development of the ADIS advanced data acquisition, imaging, and storage system. The ADIS provides state-of-the-art ultrasonic data processing and imaging features which can be utilized in both laboratory and production-line composite evaluation applications. System features such as real-time imaging, instantaneous electronic rescanning, multitasking capability, histograms, and cross-sections provide the tools necessary to inspect and evaluate composite parts quickly and consistently.

  6. Multiplex Mass Spectrometric Imaging with Polarity Switching for Concurrent Acquisition of Positive and Negative Ion Images

    NASA Astrophysics Data System (ADS)

    Korte, Andrew R.; Lee, Young Jin

    2013-06-01

    We have recently developed a multiplex mass spectrometry imaging (MSI) method which incorporates high mass resolution imaging and MS/MS and MS3 imaging of several compounds in a single data acquisition utilizing a hybrid linear ion trap-Orbitrap mass spectrometer (Perdian and Lee, Anal. Chem. 82, 9393-9400, 2010). Here we extend this capability to obtain positive and negative ion MS and MS/MS spectra in a single MS imaging experiment through polarity switching within spiral steps of each raster step. This methodology was demonstrated for the analysis of various lipid class compounds in a section of mouse brain. This allows for simultaneous imaging of compounds that are readily ionized in positive mode (e.g., phosphatidylcholines and sphingomyelins) and those that are readily ionized in negative mode (e.g., sulfatides, phosphatidylinositols and phosphatidylserines). MS/MS imaging was also performed for a few compounds in both positive and negative ion mode within the same experimental set-up. Insufficient stabilization time for the Orbitrap high voltage leads to slight deviations in observed masses, but these deviations are systematic and were easily corrected with a two-point calibration to background ions.
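
    The two-point correction mentioned at the end is a linear remapping anchored at two background ions of known mass. A minimal sketch with made-up m/z values:

```python
def two_point_calibrate(observed, ref_observed, ref_true):
    """Linear two-point mass recalibration: map observed m/z values so
    that the two reference ions land exactly on their known masses."""
    (o1, o2), (t1, t2) = ref_observed, ref_true
    slope = (t2 - t1) / (o2 - o1)
    return [t1 + slope * (m - o1) for m in observed]

# Hypothetical background ions observed slightly off their true masses
ref_obs = (301.008, 725.021)
ref_true = (301.000, 725.000)
corrected = two_point_calibrate([450.013, 301.008], ref_obs, ref_true)
```

    Because the observed deviations are systematic, anchoring the line at two well-separated background ions corrects masses across the whole spectrum.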

  7. Data acquisition and analysis on a Macintosh

    NASA Technical Reports Server (NTRS)

    Watts, Michael E.; St. Jean, Megan M.

    1991-01-01

    The introduction of inexpensive analog-to-digital boards for the Macintosh opens the way for its use in areas that have previously been filled by either specialized, dedicated or more expensive mainframe based systems. Two such Macintosh-based systems are the Acoustic Laboratory Data Acquisition System (ALDAS) and the Jet Calibration and Hover Test Facility (JCAHT) data acquisition system. ALDAS provides an inexpensive, transportable means to digitize four channels at up to 50,000 samples per second and analyze this data. The ALDAS software package was written for use with rotorcraft acoustics and performs automatic acoustic calibration of channels, data display, and various types of data analysis. The program can use data obtained either from internal analog-to-digital conversion or discrete external data imported in ASCII format. All aspects of ALDAS can be improved as new hardware becomes available and new features are introduced into the code. The JCAHT data acquisition system was built as not only an analysis program but also to act as the online safety monitoring system. This paper will provide an overview of these systems.

  8. Q-ball imaging with PROPELLER EPI acquisition.

    PubMed

    Chou, Ming-Chung; Huang, Teng-Yi; Chung, Hsiao-Wen; Hsieh, Tsyh-Jyi; Chang, Hing-Chiu; Chen, Cheng-Yu

    2013-12-01

    Q-ball imaging (QBI) is an imaging technique that is capable of resolving intravoxel fiber crossings; however, the signal readout based on echo-planar imaging (EPI) introduces geometric distortions in the presence of susceptibility gradients. This study proposes an imaging technique that reduces susceptibility distortions in QBI by short-axis PROPELLER EPI acquisition. Conventional QBI and PROPELLER QBI data were acquired from two 3T MR scans of the brains of five healthy subjects. Prior to the PROPELLER reconstruction, residual distortions in single-blade low-resolution b0 and diffusion-weighted images (DWIs) were minimized by linear affine and nonlinear diffeomorphic demon registrations. Subsequently, the PROPELLER keyhole reconstruction was applied to the corrected DWIs to obtain high-resolution PROPELLER DWIs. The generalized fractional anisotropy and orientation distribution function maps contained fewer distortions in PROPELLER QBI than in conventional QBI, and the fiber tracts more closely matched the brain anatomy depicted by turbo spin-echo (TSE) T2-weighted imaging (T2WI). Furthermore, for a fixed TE, PROPELLER QBI enabled a shorter scan time than conventional QBI. We conclude that PROPELLER QBI can reduce susceptibility distortions without lengthening the acquisition time and is suitable for tracing neuronal fiber tracts in the human brain.

  9. Reflections on ultrasound image analysis.

    PubMed

    Alison Noble, J

    2016-10-01

    Ultrasound (US) image analysis has advanced considerably in twenty years. Progress in ultrasound image analysis has always been fundamental to the advancement of image-guided interventions research due to the real-time acquisition capability of ultrasound and this has remained true over the two decades. But in quantitative ultrasound image analysis - which takes US images and turns them into more meaningful clinical information - thinking has perhaps more fundamentally changed. From roots as a poor cousin to Computed Tomography (CT) and Magnetic Resonance (MR) image analysis, both of which have richer anatomical definition and thus were better suited to the earlier eras of medical image analysis which were dominated by model-based methods, ultrasound image analysis has now entered an exciting new era, assisted by advances in machine learning and the growing clinical and commercial interest in employing low-cost portable ultrasound devices outside traditional hospital-based clinical settings. This short article provides a perspective on this change, and highlights some challenges ahead and potential opportunities in ultrasound image analysis which may both have high impact on healthcare delivery worldwide in the future but may also, perhaps, take the subject further away from CT and MR image analysis research with time. PMID:27503078

  11. Smartphone Image Acquisition During Postmortem Monocular Indirect Ophthalmoscopy.

    PubMed

    Lantz, Patrick E; Schoppe, Candace H; Thibault, Kirk L; Porter, William T

    2016-01-01

    The medical usefulness of smartphones continues to evolve as third-party applications exploit and expand on the smartphones' interface and capabilities. This technical report describes smartphone still-image capture techniques and video-sequence recording capabilities during postmortem monocular indirect ophthalmoscopy. Using these devices and techniques, practitioners can create photographic documentation of fundal findings, clinically and at autopsy, without the expense of a retinal camera. Smartphone image acquisition of fundal abnormalities can promote ophthalmological telemedicine--especially in regions or countries with limited resources--and facilitate prompt, accurate, and unbiased documentation of retinal hemorrhages in infants and young children. PMID:26248715

  12. Variability in Fluoroscopic Image Acquisition During Operative Fixation of Ankle Fractures.

    PubMed

    Harris, Dorothy Y; Lindsey, Ronald W

    2015-10-01

    The goal of this study was to determine whether injury, level of surgeon training, and patient factors are associated with increased use of fluoroscopy during open reduction and internal fixation of ankle fractures. These relationships are not well defined. The study was a retrospective chart review of patients treated at an academic institution with primary open reduction and internal fixation of an ankle. Patient demographics, including sex, age, and body mass index, were collected, as was surgeon year of training (residency and fellowship). Image acquisition data included total number of images, total imaging time, and cumulative dose. Ankle fractures were classified according to the Weber and Lauge-Hansen classifications and the number of fixation points. Bivariate analysis and multiple regression models were used to predict increasing fluoroscopic image acquisition. Alpha was set at 0.05. Of 158 patients identified, 58 were excluded. After bivariate analysis, fracture complexity and year of training showed a significant correlation with increasing image acquisition. After multiple regression analysis, fracture complexity and year of training remained clinically significant and were independent predictors of increased image acquisition. Increasing fracture complexity resulted in 20 additional images, 16 additional seconds, and an increase in radiation of 0.7 mGy. Increasing year of training resulted in an additional 6 images and an increase of 0.35 mGy in cumulative dose. The findings suggest that protocols to educate trainee surgeons in minimizing the use of fluoroscopy would be beneficial at all levels of training and should target multiple fracture patterns.

  13. Isothermal thermogravimetric data acquisition analysis system

    NASA Technical Reports Server (NTRS)

    Cooper, Kenneth, Jr.

    1991-01-01

    The description of an Isothermal Thermogravimetric Analysis (TGA) Data Acquisition System is presented. The system consists of software and hardware to perform a wide variety of TGA experiments. The software is written in ANSI C using Borland's Turbo C++. The hardware consists of a 486/25 MHz machine with a Capital Equipment Corp. IEEE488 interface card. The interface is to a Hewlett Packard 3497A data acquisition system using two analog input cards and a digital actuator card. The system provides for 16 TGA rigs with weight and temperature measurements from each rig. Data collection is conducted in three phases. Acquisition is done at a rapid rate during initial startup, at a slower rate during extended data collection periods, and finally at a fast rate during shutdown. Parameters controlling the rate and duration of each phase are user programmable. Furnace control (raising and lowering) is also programmable. Provision is made for automatic restart in the event of power failure or other abnormal terminations. Initial trial runs were conducted to show system stability.
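
    The three-phase collection scheme (rapid at startup, slower during the extended middle period, rapid again at shutdown) can be sketched as a timestamp generator; the rates and durations below are illustrative, not the system's actual parameters:

```python
def sample_times(fast_rate, slow_rate, t_startup, t_main, t_shutdown):
    """Acquisition timestamps (seconds) for the three TGA phases.
    Rates are in samples per second; each phase samples at a fixed
    interval starting at the phase boundary."""
    times, t0 = [], 0.0
    for duration, rate in ((t_startup, fast_rate),
                           (t_main, slow_rate),
                           (t_shutdown, fast_rate)):
        n = int(round(duration * rate))
        dt = 1.0 / rate
        times.extend(t0 + i * dt for i in range(n))
        t0 += duration
    return times

# 2 s startup at 5 Hz, 10 s main phase at 1 Hz, 2 s shutdown at 5 Hz
schedule = sample_times(5.0, 1.0, 2.0, 10.0, 2.0)
```

    In the real system the rates, durations, and the transition into the shutdown phase would be taken from the user-programmable parameters rather than fixed arguments.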

  14. Beach disturbance caused by off-road vehicles (ORVs) on sandy shores: relationship with traffic volumes and a new method to quantify impacts using image-based data acquisition and analysis.

    PubMed

    Schlacher, Thomas A; Morrison, Jennifer M

    2008-09-01

    Vehicles cause environmental damage on sandy beaches, including physical displacement and compaction of the sediment. Such physical habitat disturbance provides a relatively simple indicator of ORV-related impacts that is potentially useful in monitoring the efficacy of beach traffic management interventions; such interventions also require data on the relationship between traffic volumes and the resulting levels of impact. Here we determined how the extent of beach disturbance is linked to traffic volumes and tested the utility of image-based data acquisition to monitor beach surfaces. Experimental traffic application resulted in disturbance effects ranging from 15% of the intertidal zone being rutted after 10 vehicle passes to 85% after 100 passes. A new camera platform, specifically designed for beach surveys, was field tested and the resulting image-based data compared with traditional line-intercept methods and in situ measurements using quadrats. All techniques gave similar results in terms of quantifying the relationship between traffic intensity and beach disturbance. However, the physical, in situ measurements, using quadrats, generally produced higher (+4.68%) estimates than photos taken with the camera platform coupled with off-site image analysis. Image-based methods can be more costly, but in politically and socially sensitive monitoring applications, such as ORV use on sandy beaches, they are superior in providing unbiased and permanent records of environmental conditions in relation to anthropogenic pressures.
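
    The image-based disturbance estimate discussed above amounts to classifying pixels as rutted and taking the rutted fraction of the transect. A minimal sketch, assuming a synthetic grayscale image and a simple threshold as a stand-in for the real classifier:

```python
import numpy as np

# Illustrative only: estimate the percentage of a beach-surface image
# classified as rutted. The "classifier" is a plain intensity
# threshold on random data; a real workflow would use calibrated
# imagery from the camera platform described in the abstract.
rng = np.random.default_rng(1)
img = rng.uniform(0, 1, (200, 300))   # synthetic intensity image
ruts = img > 0.85                     # hypothetical rut classifier
percent_disturbed = 100.0 * ruts.mean()
print(round(percent_disturbed, 1))    # ~15% by construction
```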

  15. 360-degree dense multiview image acquisition system using time multiplexing

    NASA Astrophysics Data System (ADS)

    Yendo, Tomohiro; Fujii, Toshiaki; Panahpour Tehrani, Mehrdad; Tanimoto, Masayuki

    2010-02-01

    A novel 360-degree 3D image acquisition system that captures multi-view images with a narrow view interval is proposed. The system consists of a scanning optics system and a high-speed camera. The scanning optics system is composed of a double-parabolic mirror shell and a rotating flat mirror tilted at 45 degrees to the horizontal plane. The mirror shell produces a real image of an object placed at the bottom of the shell. The shell is a modified version of the arrangement used in common 3D-illusion toys, altered so that the real image can be captured from a horizontal viewing direction. The rotating mirror placed at the real image reflects it toward the camera axis, and the reflected image observed by the camera varies with the angle of the rotating mirror. The camera can therefore capture the object from viewing directions determined by the mirror angle. To acquire the time-varying reflected images, we use a high-speed camera synchronized with the angle of the rotating mirror; its resolution is 256×256 and its maximum frame rate at that resolution is 10,000 fps. The rotation speed of the tilted flat mirror is about 27 rev/s, and the number of views is 360. The focal length of the parabolic mirrors is 73 mm and their diameter is 360 mm; objects smaller than about 30 mm in length can be acquired. Captured images are compensated for the rotation and distortion introduced by the double-parabolic mirror system and reproduced as 3D moving images on the Seelinder display.
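
    The synchronization arithmetic implied by the figures above can be sketched directly: with a 10,000 fps camera and a mirror at about 27 rev/s, roughly 10000/27 ≈ 370 frames fall within each revolution, enough to pick 360 evenly spaced views. The function below is an illustrative mapping from frame index to viewing angle, not the actual synchronization logic of the instrument.

```python
# Illustrative frame-to-angle mapping for a mirror-scanned multiview
# capture, using the rates quoted in the abstract.
FPS = 10_000          # camera frame rate (frames/s)
REV_PER_S = 27.0      # mirror rotation rate (rev/s)

def view_angle_deg(frame_index):
    """Viewing direction (degrees, mod 360) for a given frame index."""
    revolutions = frame_index * REV_PER_S / FPS
    return (revolutions * 360.0) % 360.0

frames_per_rev = FPS / REV_PER_S
print(round(frames_per_rev, 1))     # 370.4 frames per revolution
print(round(view_angle_deg(1), 3))  # 0.972 degrees between adjacent frames
```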

  16. Commonality analysis as a knowledge acquisition problem

    NASA Technical Reports Server (NTRS)

    Yeager, Dorian P.

    1987-01-01

    Commonality analysis is a systematic attempt to reduce costs in a large scale engineering project by discontinuing development of certain components during the design phase. Each discontinued component is replaced by another component that has sufficient functionality to be considered an appropriate substitute. The replacement strategy is driven by economic considerations. The System Commonality Analysis Tool (SCAT) is based on an oversimplified model of the problem and incorporates no knowledge acquisition component. In fact, the process of arriving at a compromise between functionality and economy is quite complex, with many opportunities for the application of expert knowledge. Such knowledge is of two types: general knowledge expressible as heuristics or mathematical laws potentially applicable to any set of components, and specific knowledge about the way in which elements of a given set of components interrelate. Examples of both types of knowledge are presented, and a framework is proposed for integrating the knowledge into a more general and usable tool.

  17. MTX data acquisition and analysis computer network

    SciTech Connect

    Butner, D.N.; Casper, T.A.; Brown, M.D.; Drlik, M.; Meyer, W.H.; Moller, J.M.

    1990-10-01

    For the MTX experiment, we use a network of computers for plasma diagnostic data acquisition and analysis. This multivendor network employs VMS, UNIX, and BASIC based computers connected in a local area Ethernet network. Some of the data is acquired directly into a VAX/VMS computer cluster over a fiber-optic serial CAMAC highway. Several HP-Unix workstations and HP-BASIC instrument control computers acquire and analyze data for the more data intensive or specialized diagnostics. The VAX/VMS system is used for global analysis of the data and serves as the central data archiving and retrieval manager. Shot synchronization and control of data flow are implemented by task-to-task message passing using our interprocess communication system. The system has been in operation during our initial MTX tokamak and FEL experiments; it has operated reliably with data rates typically in the range of 5 Mbytes/shot without limiting the experimental shot rate.

  18. An effective data acquisition system using image processing

    NASA Astrophysics Data System (ADS)

    Poh, Chung-How; Poh, Chung-Kiak

    2005-12-01

    The authors investigate a data acquisition system utilising the widely available digital multi-meter and the webcam. The system is suited for applications that require sampling rates of less than about 1 Hz, such as ambient temperature recording or monitoring the charging state of rechargeable batteries. The data displayed on the external digital readout is acquired into the computer through the process of template matching. MATLAB is used as the programming language for processing the captured 2-D images in this demonstration. An RC charging experiment with a time constant of approximately 33 s is set up to verify the accuracy of the image-to-data conversion. It is found that the acquired data matches the steady-state voltage value displayed by the digital meter after an error detection technique has been devised and implemented into the data acquisition script file. It is possible to acquire a number of different readings simultaneously from various sources with this imaging method by placing a number of digital readouts within the camera's field-of-view.
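
    The core of the readout method above is template matching: slide a digit template over each frame and take the position with the highest normalized correlation. The original work uses MATLAB; the sketch below re-expresses the idea in plain NumPy on a toy image, with a brute-force loop rather than an optimized matcher. A real reader would match one template per digit (0-9) at each digit position of the display.

```python
import numpy as np

# Minimal normalized cross-correlation template matcher (illustrative).
def best_match(image, template):
    th, tw = template.shape
    t = template - template.mean()
    best, pos = -np.inf, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            win = image[y:y+th, x:x+tw]
            w = win - win.mean()
            denom = np.sqrt((w**2).sum() * (t**2).sum())
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, pos = score, (y, x)
    return pos, best

# Toy example: embed the template inside a larger random image.
rng = np.random.default_rng(2)
template = rng.uniform(0, 1, (5, 4))
image = rng.uniform(0, 1, (20, 20))
image[7:12, 3:7] = template          # known location (7, 3)
pos, score = best_match(image, template)
print(pos)  # (7, 3)
```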

  19. PET/CT for radiotherapy: image acquisition and data processing.

    PubMed

    Bettinardi, V; Picchio, M; Di Muzio, N; Gianolli, L; Messa, C; Gilardi, M C

    2010-10-01

    This paper focuses on acquisition and processing methods in positron emission tomography/computed tomography (PET/CT) for radiotherapy (RT) applications. The recent technological evolutions of PET/CT systems are described. Particular emphasis is dedicated to the tools needed for patient positioning and immobilization, to be used in PET/CT studies as well as during RT treatment sessions. The effect of organ and lesion motion due to the patient's respiration on PET/CT imaging is discussed. Breathing protocols proposed to minimize PET/CT spatial mismatches in relation to respiratory movements are illustrated. The respiratory gated (RG) 4D-PET/CT techniques, developed to measure and compensate for organ and lesion motion, are then introduced. Finally, a description is provided of different acquisition and data processing techniques, implemented with the aim of improving: i) image quality and quantitative accuracy of PET images, and ii) target volume definition and treatment planning in RT, by using specific and personalised motion information.

  20. Status of RAISE, the Rapid Acquisition Imaging Spectrograph Experiment

    NASA Astrophysics Data System (ADS)

    Laurent, Glenn T.; Hassler, D. M.; DeForest, C.; Ayres, T. R.; Davis, M.; De Pontieu, B.; Schuehle, U.; Warren, H.

    2013-07-01

    The Rapid Acquisition Imaging Spectrograph Experiment (RAISE) sounding rocket payload is a high speed scanning-slit imaging spectrograph designed to observe the dynamics and heating of the solar chromosphere and corona on time scales as short as 100 ms, with 1 arcsec spatial resolution and a velocity sensitivity of 1-2 km/s. The instrument is based on a new class of UV/EUV imaging spectrometers that use only two reflections to provide quasi-stigmatic performance simultaneously over multiple wavelengths and spatial fields. The design uses an off-axis parabolic telescope mirror to form a real image of the sun on the spectrometer entrance aperture. A slit then selects a portion of the solar image, passing its light onto a near-normal incidence toroidal grating, which re-images the spectrally dispersed radiation onto two array detectors. Two full spectral passbands over the same one-dimensional spatial field are recorded simultaneously with no scanning of the detectors or grating. The two different spectral bands (1st-order 1205-1243Å and 1526-1564Å) are imaged onto two intensified Active Pixel Sensor (APS) detectors whose focal planes are individually adjusted for optimized performance. The telescope and grating are coated with B4C to enhance short wavelength (2nd order) reflectance, enabling the instrument to record the brightest lines between 602-622Å and 761-780Å at the same time. RAISE reads out the full field of both detectors at 5-10 Hz, allowing us to record over 1,500 complete spectral observations in a single 5-minute rocket flight, opening up a new domain of high time resolution spectral imaging and spectroscopy. We present an overview of the project, a summary of the maiden flight results, and an update on instrument status.

  1. Radiologist and automated image analysis

    NASA Astrophysics Data System (ADS)

    Krupinski, Elizabeth A.

    1999-07-01

    Significant advances are being made in the area of automated medical image analysis. Part of the progress is due to the general advances being made in the types of algorithms used to process images and perform various detection and recognition tasks. A more important reason for this growth in medical image analysis may, however, be quite different. The use of computer workstations, digital image acquisition technologies and CRT monitors for the display of medical images for primary diagnostic reading is becoming more prevalent in radiology departments around the world. With the advance in computer-based displays has come the realization that displaying images on a CRT monitor is not the same as displaying film on a viewbox. There are perceptual, cognitive and ergonomic issues that must be considered if radiologists are to accept this change in technology and display. The bottom line is that radiologists' performance must be evaluated with these new technologies and image analysis techniques in order to verify that diagnostic performance is at least as good with these new technologies and image analysis procedures as with film-based displays. The goal of this paper is to address some of the perceptual, cognitive and ergonomic issues associated with reading radiographic images from digital displays.

  2. RAISE (Rapid Acquisition Imaging Spectrograph Experiment): Results and Instrument Status

    NASA Astrophysics Data System (ADS)

    Laurent, Glenn T.; Hassler, Donald; DeForest, Craig; Ayres, Tom; Davis, Michael; DePontieu, Bart; Diller, Jed; Graham, Roy; Schule, Udo; Warren, Harry

    2015-04-01

    We present initial results from the successful November 2014 launch of the RAISE (Rapid Acquisition Imaging Spectrograph Experiment) sounding rocket program, including intensity maps, high-speed spectroheliograms and dopplergrams, as well as an update on instrument status. The RAISE sounding rocket payload is the fastest high-speed scanning-slit imaging spectrograph flown to date and is designed to observe the dynamics and heating of the solar chromosphere and corona on time scales as short as 100-200ms, with arcsecond spatial resolution and a velocity sensitivity of 1-2 km/s. The instrument is based on a class of UV/EUV imaging spectrometers that use only two reflections to provide quasi-stigmatic performance simultaneously over multiple wavelengths and spatial fields. The design uses an off-axis parabolic telescope mirror to form a real image of the sun on the spectrometer entrance aperture. A slit then selects a portion of the solar image, passing its light onto a near-normal incidence toroidal grating, which re-images the spectrally dispersed radiation onto two array detectors. Two full spectral passbands over the same one-dimensional spatial field are recorded simultaneously with no scanning of the detectors or grating. The two different spectral bands (1st-order 1205-1243Å and 1526-1564Å) are imaged onto two intensified Active Pixel Sensor (APS) detectors whose focal planes are individually adjusted for optimized performance. RAISE reads out the full field of both detectors at 5-10 Hz, allowing us to record over 1,500 complete spectral observations in a single 5-minute rocket flight, opening up a new domain of high time resolution spectral imaging and spectroscopy. RAISE is designed to study small-scale multithermal dynamics in active region (AR) loops, explore the strength, spectrum and location of high frequency waves in the solar atmosphere, and investigate the nature of transient brightenings in the chromospheric network.

  3. Data acquisition system for harmonic motion microwave Doppler imaging.

    PubMed

    Tafreshi, Azadeh Kamali; Karadaş, Mürsel; Top, Can Barış; Gençer, Nevzat Güneri

    2014-01-01

    Harmonic Motion Microwave Doppler Imaging (HMMDI) is a hybrid method proposed for breast tumor detection, which images the coupled dielectric and elastic properties of the tissue. In this paper, the performance of a data acquisition system for the HMMDI method is evaluated on breast phantom materials. A breast fat phantom including fibro-glandular and tumor phantom regions is produced. The phantom is excited using a focused ultrasound probe and a microwave transmitter. The received microwave signal level is measured at three different points inside the phantom (fat, fibro-glandular, and tumor regions). The experimental results using the designed homodyne receiver proved the effectiveness of the proposed setup. In the tumor phantom region, the signal level decreased by about 3 dB relative to the signal level obtained from the fibro-glandular phantom area, and was about 4 dB higher than the received signal from the fat phantom.

  4. The Rapid Acquisition Imaging Spectrograph Experiment (RAISE) Sounding Rocket Investigation

    NASA Astrophysics Data System (ADS)

    Laurent, Glenn T.; Hassler, Donald M.; Deforest, Craig; Slater, David D.; Thomas, Roger J.; Ayres, Thomas; Davis, Michael; de Pontieu, Bart; Diller, Jed; Graham, Roy; Michaelis, Harald; Schuele, Udo; Warren, Harry

    2016-03-01

    We present a summary of the solar observing Rapid Acquisition Imaging Spectrograph Experiment (RAISE) sounding rocket program including an overview of the design and calibration of the instrument, flight performance, and preliminary chromospheric results from the successful November 2014 launch of the RAISE instrument. The RAISE sounding rocket payload is the fastest scanning-slit solar ultraviolet imaging spectrograph flown to date. RAISE is designed to observe the dynamics and heating of the solar chromosphere and corona on time scales as short as 100-200ms, with arcsecond spatial resolution and a velocity sensitivity of 1-2km/s. Two full spectral passbands over the same one-dimensional spatial field are recorded simultaneously with no scanning of the detectors or grating. The two different spectral bands (first-order 1205-1251Å and 1524-1569Å) are imaged onto two intensified Active Pixel Sensor (APS) detectors whose focal planes are individually adjusted for optimized performance. RAISE reads out the full field of both detectors at 5-10Hz, recording up to 1800 complete spectra (per detector) in a single 6-min rocket flight. This opens up a new domain of high time resolution spectral imaging and spectroscopy. RAISE is designed to observe small-scale multithermal dynamics in Active Region (AR) and quiet Sun loops, identify the strength, spectrum and location of high frequency waves in the solar atmosphere, and determine the nature of energy release in the chromospheric network.

  5. Reducing respiratory effect in motion correction for EPI images with sequential slice acquisition order.

    PubMed

    Cheng, Hu; Puce, Aina

    2014-04-30

    Motion correction is critical for data analysis of fMRI time series. Most motion correction algorithms treat the head as a rigid body. Respiration of the subject, however, can alter the static magnetic field in the head and result in motion-like slice shifts for echo planar imaging (EPI). The delay of acquisition between slices causes a phase difference in respiration so that the shifts vary with slice positions. To characterize the effect of respiration on motion correction, we acquired fast sampled fMRI data using multi-band EPI and then simulated different acquisition schemes. Our results indicated that respiration introduces additional noise after motion correction. The signal variation between volumes after motion correction increases when the effective TR increases from 675ms to 2025ms. This problem can be corrected if slices are acquired sequentially. For EPI with a sequential acquisition scheme, we propose to divide the image volumes into several segments so that slices within each segment are acquired close in time and then perform motion correction on these segments separately. We demonstrated that the temporal signal-to-noise ratio (TSNR) was increased when the motion correction was performed on the segments separately rather than on the whole image. This enhancement of TSNR was not evenly distributed across the segments and was not observed for interleaved acquisition. The level of increase was higher for superior slices. On superior slices the percentage of TSNR gain was comparable to that using image based retrospective correction for respiratory noise. Our results suggest that separate motion correction on segments is highly recommended for sequential acquisition schemes, at least for slices distal to the chest.
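
    The comparison metric above, temporal SNR, is simply the per-voxel temporal mean divided by the temporal standard deviation of the corrected series. The sketch below computes it on synthetic data and illustrates the segment-splitting idea (slices acquired close in time grouped into slabs); it is not the authors' pipeline, and the array shapes and segment count are arbitrary.

```python
import numpy as np

# TSNR on a synthetic "motion-corrected" EPI time series.
rng = np.random.default_rng(3)
n_vols, nz, ny, nx = 50, 12, 8, 8
series = 1000 + rng.normal(0, 10, (n_vols, nz, ny, nx))

tsnr = series.mean(axis=0) / series.std(axis=0, ddof=1)
print(round(float(tsnr.mean()), 1))  # ~ 1000/10 = 100

# Segment-wise idea: slices acquired close in time share a respiratory
# phase, so each slab would be realigned separately before computing TSNR.
segments = np.array_split(np.arange(nz), 3)   # e.g. 3 slabs of 4 slices
print([seg.tolist() for seg in segments])
```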

  6. The selection of field acquisition parameters for dispersion images from multichannel surface wave data

    USGS Publications Warehouse

    Zhang, S.X.; Chan, L.S.; Xia, J.

    2004-01-01

    The accuracy and resolution of surface wave dispersion results depend on the parameters used for acquiring data in the field. The optimized field parameters for acquiring multichannel analysis of surface wave (MASW) dispersion images can be determined if preliminary information on the phase velocity range and interface depth is available. In a case study on a fill slope in Hong Kong, the optimal acquisition parameters were first determined from a preliminary seismic survey prior to a MASW survey. Field tests using different sets of receiver distances and array lengths showed that the most consistent and useful dispersion images were obtained from the optimal acquisition parameters predicted. The inverted S-wave velocities from the dispersion curve obtained at the optimal offset distance range also agreed with those obtained by using direct refraction survey.

  7. Advances in GPR data acquisition and analysis for archaeology

    NASA Astrophysics Data System (ADS)

    Zhao, Wenke; Tian, Gang; Forte, Emanuele; Pipan, Michele; Wang, Yimin; Li, Xuejing; Shi, Zhanjie; Liu, Haiyan

    2015-07-01

    The primary objective of this study is to evaluate the applicability and the effectiveness of ground-penetrating radar (GPR) for identifying a thin burnt soil layer, buried more than 2 m below the topographic surface at the Liangzhu Site in Southeastern China. The site was chosen for its relatively challenging conditions for GPR techniques, due to electrical conductivity and to the presence of peach tree roots that produce scattering. We completed the data acquisition by using 100 and 200 MHz antennas in TE and TM broadside and cross-polarized configurations. In the data processing and interpretation phase, we used GPR attribute analysis, including instantaneous phase and geometrical attributes. Ground-truthing performed after the geophysical surveys validated the GPR imaging, confirmed the electrical conductivity and relative dielectric permittivity (RDP) measurements performed at different depths, and allowed a reliable quantitative correlation between GPR results and subsurface physical properties. The research demonstrates that multiple antenna configurations in GPR data acquisition combined with attribute analysis can enhance the ability to characterize prehistoric archaeological remains even in complex subsurface conditions.
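
    The instantaneous-phase attribute mentioned above is the angle of the analytic signal of each trace. A minimal sketch, using an FFT-based Hilbert transform in plain NumPy on a synthetic single-frequency "trace" (a real workflow would apply this trace-by-trace to the radargram):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via an FFT-based Hilbert transform (numpy only)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

t = np.linspace(0, 1, 500, endpoint=False)
trace = np.cos(2 * np.pi * 40 * t)          # synthetic 40 Hz "trace"
phase = np.angle(analytic_signal(trace))    # instantaneous phase (rad)
inst_freq = np.diff(np.unwrap(phase)) * 500 / (2 * np.pi)
print(round(float(inst_freq.mean()), 1))    # 40.0, recovering the tone
```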

  8. Acquisition and analysis of accelerometer data

    NASA Technical Reports Server (NTRS)

    Verges, Keith R.

    1990-01-01

    Acceleration data reduction must be undertaken with a complete understanding of the physical process, the means by which the data are acquired, and finally, the calculations necessary to put the data into a meaningful format. Discussed here are the acceleration sensor requirements dictated by the desired measurements. Sensor noise, dynamic range, and linearity are determined from the physical parameters of the experiment. The digitizer requirements are then discussed: the system from sensor to digital storage medium is integrated, and rules of thumb for experiment duration, filter response, and number of bits are explained. Data reduction techniques after storage are also covered. Time domain operations, including decimation, digital filtering, and averaging, are treated, as well as frequency domain methods, including windowing, the difference between power and amplitude spectra, and simple noise determination via coherence analysis. Finally, an example experiment using the Teledyne Geotech Model 44000 Seismometer to measure from 1 Hz to 10^-6 Hz is discussed. The sensor, data acquisition system, and example spectra are presented.
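
    Two of the reduction steps above (decimation and the amplitude-vs-power spectrum distinction) can be sketched on a synthetic acceleration record. This is an illustrative NumPy-only sketch: the block-average pre-filter is a crude stand-in for a proper anti-alias filter, and all rates and amplitudes are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 100.0                       # original sample rate (Hz)
t = np.arange(0, 20, 1 / fs)
accel = 1e-3 * np.sin(2 * np.pi * 1.0 * t) + 1e-4 * rng.normal(size=t.size)

# Decimate by 4: crude anti-alias average over each block of 4 samples.
dec = accel[:accel.size // 4 * 4].reshape(-1, 4).mean(axis=1)
fs_dec = fs / 4

# One-sided spectra of the decimated record.
n = dec.size
window = np.hanning(n)
spec = np.fft.rfft(dec * window)
freqs = np.fft.rfftfreq(n, 1 / fs_dec)
amplitude = np.abs(spec) * 2 / window.sum()   # amplitude spectrum (units of g)
power = amplitude ** 2                        # power spectrum (units of g^2)
peak = freqs[np.argmax(amplitude[1:]) + 1]
print(round(peak, 2))  # 1.0 : the tone survives decimation
```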

  9. Optimization of image acquisition techniques for dual-energy imaging of the chest

    SciTech Connect

    Shkumat, N. A.; Siewerdsen, J. H.; Dhanantwari, A. C.; Williams, D. B.; Richard, S.; Paul, N. S.; Yorkston, J.; Van Metter, R.

    2007-10-15

    Experimental and theoretical studies were conducted to determine optimal acquisition techniques for a prototype dual-energy (DE) chest imaging system. Technique factors investigated included the selection of added x-ray filtration, kVp pair, and the allocation of dose between low- and high-energy projections, with total dose equal to or less than that of a conventional chest radiograph. Optima were computed to maximize lung nodule detectability as characterized by the signal-difference-to-noise ratio (SDNR) in DE chest images. Optimal beam filtration was determined by cascaded systems analysis of DE image SDNR for filter selections across the periodic table (Z_filter = 1-92), demonstrating the importance of differential filtration between low- and high-kVp projections and suggesting optimal high-kVp filters in the range Z_filter = 25-50. For example, added filtration of ~2.1 mm Cu, ~1.2 mm Zr, ~0.7 mm Mo, and ~0.6 mm Ag to the high-kVp beam provided optimal (and nearly equivalent) soft-tissue SDNR. Optimal kVp pair and dose allocation were investigated using a chest phantom presenting simulated lung nodules and ribs for thin, average, and thick body habitus. Low- and high-energy techniques ranged from 60-90 kVp and 120-150 kVp, respectively, with peak soft-tissue SDNR achieved at [60/120] kVp for all patient thicknesses and all levels of imaging dose. A strong dependence on the kVp of the low-energy projection was observed. Optimal allocation of dose between low- and high-energy projections was such that ~30% of the total dose was delivered by the low-kVp projection, exhibiting a fairly weak dependence on kVp pair and dose. The results have guided the implementation of a prototype DE imaging system for imaging trials in early-stage lung nodule detection and diagnosis.
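
    The subtraction that such optimized acquisitions feed is a weighted log subtraction of the low- and high-kVp projections: choosing the weight as the ratio of the cancelled material's attenuation at the two energies removes that material from the result. The attenuation values below are made up for illustration; the abstract's system and parameters are not reproduced here.

```python
# Weighted log subtraction sketch with hypothetical mu*t values.
mu_low = {"soft": 0.25, "bone": 0.60}    # hypothetical attenuation at 60 kVp
mu_high = {"soft": 0.18, "bone": 0.30}   # hypothetical attenuation at 120 kVp

# Simulated log-signals for a pixel containing soft tissue + bone.
low = mu_low["soft"] + mu_low["bone"]
high = mu_high["soft"] + mu_high["bone"]

w = mu_low["bone"] / mu_high["bone"]     # weight that cancels bone
soft_only = low - w * high               # bone term drops out exactly
print(round(soft_only, 4))  # -0.11 = 0.25 - 2.0*0.18 (soft tissue only)
```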

  10. Fidelity Analysis of Sampled Imaging Systems

    NASA Technical Reports Server (NTRS)

    Park, Stephen K.; Rahman, Zia-ur

    1999-01-01

    Many modeling, simulation and performance analysis studies of sampled imaging systems are inherently incomplete because they are conditioned on a discrete-input, discrete-output model that only accounts for blurring during image acquisition and additive noise. For those sampled imaging systems where the effects of digital image acquisition, digital filtering and reconstruction are significant, the modeling, simulation and performance analysis should be based on a more comprehensive continuous-input, discrete-processing, continuous-output end-to-end model. This more comprehensive model should properly account for the low-pass filtering effects of image acquisition prior to sampling, the potentially important noiselike effects of the aliasing caused by sampling, additive noise due to device electronics and quantization, the generally high-boost filtering effects of digital processing, and the low-pass filtering effects of image reconstruction. This model should not, however, be so complex as to preclude significant mathematical analysis, particularly the mean-square (fidelity) type of analysis so common in linear system theory. We demonstrate that, although the mathematics of such a model is more complex, the increase in complexity is not so great as to prevent a complete fidelity-metric analysis at both the component level and at the end-to-end system level: that is, computable mean-square-based fidelity metrics are developed by which both component-level and system-level performance can be quantified. In addition, we demonstrate that system performance can be assessed qualitatively by visualizing the output image as the sum of three component images, each of which relates to a corresponding fidelity metric. The cascaded, or filtered, component accounts for the end-to-end system filtering of image acquisition, digital processing, and image reconstruction; the random noise component accounts for additive random noise, modulated by digital processing and image reconstruction; and the aliasing component accounts for the noiselike effects of sampling.
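
    The three-component decomposition described above can be written schematically. The notation here is ours, not the paper's: take $f$ as the continuous scene and $e$ as the end-to-end impulse response (acquisition, digital processing, reconstruction); then the output image decomposes as

```latex
\hat{g}(x) \;=\;
  \underbrace{(e * f)(x)}_{\text{filtered component}}
  \;+\; \underbrace{n_a(x)}_{\text{aliasing}}
  \;+\; \underbrace{n_r(x)}_{\text{random noise}}
```

with a mean-square fidelity metric attached to each term.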

  11. Planning applications in image analysis

    NASA Technical Reports Server (NTRS)

    Boddy, Mark; White, Jim; Goldman, Robert; Short, Nick, Jr.

    1994-01-01

    We describe two interim results from an ongoing effort to automate the acquisition, analysis, archiving, and distribution of satellite earth science data. Both results are applications of Artificial Intelligence planning research to the automatic generation of processing steps for image analysis tasks. First, we have constructed a linear conditional planner (CPed), used to generate conditional processing plans. Second, we have extended an existing hierarchical planning system to make use of durations, resources, and deadlines, thus supporting the automatic generation of processing steps in time and resource-constrained environments.

  12. Democratizing an electroluminescence imaging apparatus and analytics project for widespread data acquisition in photovoltaic materials

    NASA Astrophysics Data System (ADS)

    Fada, Justin S.; Wheeler, Nicholas R.; Zabiyaka, Davis; Goel, Nikhil; Peshek, Timothy J.; French, Roger H.

    2016-08-01

    We present a description of an electroluminescence (EL) apparatus, easily sourced from commercially available components, with a quantitative image processing platform that demonstrates feasibility for the widespread utility of EL imaging as a characterization tool. We validated our system using a Gage R&R analysis to find a variance contribution by the measurement system of 80.56%, which is typically unacceptable, but through quantitative image processing and development of correction factors a variance contribution by the measurement system of 2.41% was obtained. We further validated the system by quantifying the signal-to-noise ratio (SNR) and found values consistent with other systems published in the literature, at SNR values of 10-100, albeit at exposure times of greater than 1 s compared to 10 ms for other systems. This SNR value range is acceptable for image feature recognition, providing the opportunity for widespread data acquisition and large scale data analytics of photovoltaics.
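
    The SNR figure quoted above can be estimated from repeated exposures as the per-pixel mean divided by the per-pixel temporal standard deviation. The sketch below does this on synthetic frames constructed to land in the reported 10-100 range; it is not the authors' pipeline, and the frame count, signal level, and noise level are invented.

```python
import numpy as np

# Estimate per-pixel SNR from a stack of repeated (synthetic) EL frames.
rng = np.random.default_rng(5)
n_frames, h, w = 16, 64, 64
true_signal = 500.0
frames = true_signal + rng.normal(0, 10, (n_frames, h, w))  # noise sd = 10

signal = frames.mean(axis=0)           # per-pixel mean over frames
noise = frames.std(axis=0, ddof=1)     # per-pixel temporal noise
snr = float((signal / noise).mean())
print(round(snr))  # ~ 500/10 = 50, within the reported 10-100 range
```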

  15. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images

    PubMed Central

    Afshar, Yaser; Sbalzarini, Ivo F.

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collectively solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments. PMID:27046144
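    The decomposition step described above, splitting the input image into sub-images for distribution across workers, can be sketched in a single process. The tile size and halo (ghost-layer) width below are illustrative choices, not the paper's implementation:

```python
# Sketch of domain decomposition with a halo: each tile carries a border of
# extra context pixels so a worker can segment its interior independently.
import numpy as np

def split_with_halo(img, tile, halo):
    h, w = img.shape
    tiles = []
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            y0, x0 = max(y - halo, 0), max(x - halo, 0)
            y1, x1 = min(y + tile + halo, h), min(x + tile + halo, w)
            # Each entry: (interior origin, padded sub-image to ship out)
            tiles.append(((y, x), img[y0:y1, x0:x1].copy()))
    return tiles

img = np.arange(64, dtype=float).reshape(8, 8)
tiles = split_with_halo(img, tile=4, halo=1)
# A 2x2 grid of 4x4 interiors -> 4 tiles, each padded by up to 1 halo pixel.
```

In the distributed setting each tile would go to a different machine, with network communication reconciling segments that cross tile borders.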

  16. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images.

    PubMed

    Afshar, Yaser; Sbalzarini, Ivo F

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collectively solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments.

  18. NOTE: A method for controlling image acquisition in electronic portal imaging devices

    NASA Astrophysics Data System (ADS)

    Glendinning, A. G.; Hunt, S. G.; Bonnett, D. E.

    2001-02-01

    Certain types of camera-based electronic portal imaging devices (EPIDs) which initiate image acquisition based on sensing a change in video level have been observed to trigger unreliably at the beginning of dynamic multileaf collimation sequences. A simple, novel means of controlling image acquisition with an Elekta linear accelerator (Elekta Oncology Systems, Crawley, UK) is proposed which is based on illumination of a photodetector (ORP-12, Silonex Inc., Plattsburgh, NY, USA) by the electron gun of the accelerator. By incorporating a simple trigger circuit it is possible to derive a beam on/off status signal which changes at least 100 ms before any dose is measured by the accelerator. The status signal does not return to the beam-off state until all dose has been delivered and is suitable for accelerator pulse repetition frequencies of 50-400 Hz. The status signal is thus a reliable means of indicating the initiation and termination of radiation exposure, and thus controlling image acquisition of such EPIDs for this application.

  19. Biometric iris image acquisition system with wavefront coding technology

    NASA Astrophysics Data System (ADS)

    Hsieh, Sheng-Hsun; Yang, Hsi-Wen; Huang, Shao-Hung; Li, Yung-Hui; Tien, Chung-Hao

    2013-09-01

    Biometric signatures for identity recognition have been practiced for centuries. Basically, the personal attributes used for a biometric identification system can be classified into two areas: one is based on physiological attributes, such as DNA, facial features, retinal vasculature, fingerprint, hand geometry, iris texture and so on; the other is based on individual behavioral attributes, such as signature, keystroke, voice and gait style. Among these features, iris recognition is one of the most attractive approaches due to its nature of randomness, texture stability over a lifetime, high entropy density and non-invasive acquisition. While the performance of iris recognition on high-quality images is well investigated, few studies have addressed how iris recognition performs on non-ideal image data, especially data acquired in challenging conditions, such as long working distance, dynamic movement of subjects, uncontrolled illumination conditions and so on. There are three main contributions in this paper. Firstly, the optical system parameters, such as magnification and field of view, were optimally designed through first-order optics. Secondly, the irradiance constraint was derived from the optical conservation theorem. Through the relationship between the subject and the detector, we could estimate the limitation of the working distance when the camera lens and CCD sensor were known. The working distance is set to 3 m in our system, with a pupil diameter of 86 mm and CCD irradiance of 0.3 mW/cm2. Finally, we employed a hybrid scheme combining eye tracking with a pan-and-tilt system, wavefront coding technology, filter optimization and post signal recognition to implement a robust iris recognition system in dynamic operation. The blurred image was restored to ensure recognition accuracy over the 3 m working distance with 400 mm focal length and F/6.3 optics. Simulation results as well as experiments validate the proposed scheme.
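    The first-order-optics design step above can be checked with the thin-lens relations. The 400 mm focal length and 3 m working distance come from the abstract; the ~12 mm iris diameter is a typical assumed value, not from the paper:

```python
# First-order check: magnification of a distant object and the resulting
# image-side size of the iris on the sensor (thin-lens approximation).
f_mm = 400.0            # focal length, from the abstract
d_mm = 3000.0           # working distance, from the abstract
m = f_mm / (d_mm - f_mm)    # |m| ~= f / (d - f) for an object beyond f
iris_mm = 12.0              # assumed typical iris diameter
image_mm = m * iris_mm      # iris diameter projected onto the sensor
```

At these numbers the iris spans roughly 1.8 mm on the sensor, which sets the pixel pitch needed for the texture resolution iris recognition requires.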

  20. Star sensor image acquisition and preprocessing hardware system based on CMOS image sensor and FPGA

    NASA Astrophysics Data System (ADS)

    Hao, Xuetao; Jiang, Jie; Zhang, Guangjun

    2003-09-01

    A star sensor is an avionics instrument used to provide the absolute 3-axis attitude of a spacecraft utilizing star observations. It consists of an electronic camera and associated processing electronics. As an outcome of the advancing state of the art, the new generation of star sensors features higher speed, lower cost, lower power dissipation and smaller size than the first generation. This paper describes a star sensor front-end image acquisition and pre-processing hardware system based on CMOS image sensor and FPGA technology. In practice, star images are produced by a simple simulator on a PC, acquired by the CMOS image sensor, pre-processed by the FPGA, saved in SRAM, read out over the EPP protocol and validated by image processing software on the PC. The hardware part of the system acquires images through the CMOS image sensor controlled by the FPGA, processes the image data in an FPGA circuit module, and saves images to SRAM for test. It provides basic image data for star recognition and spacecraft attitude determination. As an important reference for developing a star sensor prototype, the system validates the performance advantages of the new generation of star sensors.

  1. Web-based data acquisition and management system for GOSAT validation Lidar data analysis

    NASA Astrophysics Data System (ADS)

    Okumura, Hiroshi; Takubo, Shoichiro; Kawasaki, Takeru; Abdullah, Indra N.; Uchino, Osamu; Morino, Isamu; Yokota, Tatsuya; Nagai, Tomohiro; Sakai, Tetsu; Maki, Takashi; Arai, Kohei

    2012-11-01

    A web-based data acquisition and management system for GOSAT (Greenhouse gases Observing SATellite) validation lidar data analysis has been developed. The system consists of a data acquisition sub-system (DAS) and a data management sub-system (DMS). The DAS, written in Perl, acquires AMeDAS ground-level meteorological data, rawinsonde upper-air meteorological data, ground-level oxidant data, skyradiometer data, skyview camera images, meteorological satellite IR image data and GOSAT validation lidar data. The DMS, written in PHP, displays satellite-pass dates and all acquired data.

  2. Dual-energy imaging of the chest: Optimization of image acquisition techniques for the 'bone-only' image

    SciTech Connect

    Shkumat, N. A.; Siewerdsen, J. H.; Richard, S.; Paul, N. S.; Yorkston, J.; Van Metter, R.

    2008-02-15

    Experiments were conducted to determine optimal acquisition techniques for bone image decompositions for a prototype dual-energy (DE) imaging system. Technique parameters included kVp pair (denoted [kVp_L/kVp_H]) and dose allocation (the proportion of dose in low- and high-energy projections), each optimized to provide maximum signal difference-to-noise ratio in DE images. Experiments involved a chest phantom representing an average patient size and containing simulated ribs and lung nodules. Low- and high-energy kVp were varied from 60-90 and 120-150 kVp, respectively. The optimal kVp pair was determined to be [60/130] kVp, with image quality showing a strong dependence on low-kVp selection. Optimal dose allocation was approximately 0.5, i.e., an equal dose imparted by the low- and high-energy projections. The results complement earlier studies of optimal DE soft-tissue image acquisition, with differences attributed to the specific imaging task. Together, the results help to guide the development and implementation of high-performance DE imaging systems, with applications including lung nodule detection and diagnosis, pneumothorax identification, and musculoskeletal imaging (e.g., discrimination of rib fractures from metastasis).
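    The bone-only decomposition underlying DE imaging is commonly computed as a weighted log subtraction of the low- and high-kVp projections. A minimal sketch; the weight w is a hypothetical soft-tissue-cancellation factor, not a value from the study:

```python
# Weighted log subtraction for a dual-energy "bone-only" image:
# I_bone = ln(I_H) - w * ln(I_L), evaluated pixel by pixel.
import math

def bone_image(low, high, w=0.5):
    return [[math.log(h) - w * math.log(l) for l, h in zip(row_l, row_h)]
            for row_l, row_h in zip(low, high)]

# Toy 2x2 low- and high-energy projections (detector counts):
low  = [[100.0, 80.0], [60.0, 100.0]]
high = [[120.0, 90.0], [50.0, 120.0]]
b = bone_image(low, high)
```

In practice w is tuned so soft-tissue contrast cancels, leaving bone; the kVp pair and dose allocation studied in the abstract determine the noise in this subtraction.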

  3. Theory of Adaptive Acquisition Method for Image Reconstruction from Projections and Application to EPR Imaging

    NASA Astrophysics Data System (ADS)

    Placidi, G.; Alecci, M.; Sotgiu, A.

    1995-07-01

    An adaptive method for selecting the projections to be used for image reconstruction is presented. The method starts with the acquisition of four projections at angles of 0°, 45°, 90°, 135° and selects the new angles by computing a function of the previous projections. This makes it possible to adapt the selection of projections to the arbitrary shape of the sample, thus measuring a more informative set of projections. When the sample is smooth or has internal symmetries, this technique allows a reduction in the number of projections required to reconstruct the image without loss of information. The method has been tested on simulated data at different values of signal-to-noise ratio (S/N) and on experimental data recorded by an EPR imaging apparatus.
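    One plausible reading of the adaptive selection step above, as a sketch rather than the authors' exact selection function: after the initial 0°, 45°, 90°, 135° projections, bisect the adjacent pair of angles whose projections differ most, since that is where the sample is least smooth:

```python
# Pick the next projection angle by bisecting the adjacent angle pair whose
# projections differ most (sum-of-squares difference as the "function of the
# previous projections"; an illustrative choice).
def next_angle(angles, projections):
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    pairs = list(zip(angles, angles[1:], projections, projections[1:]))
    a0, a1, _, _ = max(pairs, key=lambda t: dist(t[2], t[3]))
    return (a0 + a1) / 2.0

angles = [0.0, 45.0, 90.0, 135.0]
projs = [[1, 1, 1], [1, 1, 1], [1, 1, 1], [5, 0, 2]]  # toy projections
a = next_angle(angles, projs)   # largest change lies between 90 and 135
```

For a smooth or symmetric sample the projection differences stay small everywhere, so fewer angles are needed, which is the reduction the abstract reports.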

  4. Oncological image analysis.

    PubMed

    Brady, Sir Michael; Highnam, Ralph; Irving, Benjamin; Schnabel, Julia A

    2016-10-01

    Cancer is one of the world's major healthcare challenges and, as such, an important application of medical image analysis. After a brief introduction to cancer, we summarise some of the major developments in oncological image analysis over the past 20 years, concentrating on those from the authors' laboratories, and then outline opportunities and challenges for the next decade.

  5. 77 FR 40552 - Federal Acquisition Regulation; Price Analysis Techniques

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-10

    ... Federal Acquisition Regulation; Price Analysis Techniques AGENCY: Department of Defense (DoD), General... clarify the use of a price analysis technique in order to establish a fair and reasonable price. DATES....404-1(b)(2) addresses various price analysis techniques and procedures the Government may use...

  6. NOVA-NREL Optimal Vehicle Acquisition Analysis (Brochure)

    SciTech Connect

    Blakley, H.

    2011-03-01

    Federal fleet managers face unique challenges in accomplishing their mission - meeting agency transportation needs while complying with Federal goals and mandates. Included in these challenges are a variety of statutory requirements, executive orders, and internal goals and objectives that typically focus on petroleum consumption and greenhouse gas (GHG) emissions reductions, alternative fuel vehicle (AFV) acquisitions, and alternative fuel use increases. Given the large number of mandates affecting Federal fleets and the challenges faced by all fleet managers in executing day-to-day operations, a primary challenge for agencies and other organizations is ensuring that they are as efficient as possible in using constrained fleet budgets. An NREL Optimal Vehicle Acquisition (NOVA) analysis makes use of a mathematical model with a variety of fleet-related data to create an optimal vehicle acquisition strategy for a given goal, such as petroleum or GHG reduction. The analysis can help fleets develop a vehicle acquisition strategy that maximizes petroleum and greenhouse gas reductions.

  7. Radio reflection imaging of asteroid and comet interiors I: Acquisition and imaging theory

    NASA Astrophysics Data System (ADS)

    Sava, Paul; Ittharat, Detchai; Grimm, Robert; Stillman, David

    2015-05-01

    Imaging the interior structure of comets and asteroids can provide insight into their formation in the early Solar System, and can aid in their exploration and hazard mitigation. Accurate imaging can be accomplished using broadband wavefield data penetrating deep inside the object under investigation. This can be done in principle using seismic systems (which is difficult since it requires contact with the studied object), or using radar systems (which is easier since it can be conducted from orbit). We advocate the use of radar systems based on instruments similar to the ones currently deployed in space, e.g. the CONSERT experiment of the Rosetta mission, but perform imaging using data reflected from internal interfaces, instead of data transmitted through the imaging object. Our core methodology is wavefield extrapolation using time-domain finite differences, a technique often referred to as reverse-time migration and proven to be effective in high-quality imaging of complex geologic structures. The novelty of our approach consists in the use of dual orbiters around the studied object, instead of an orbiter and a lander. Dual orbiter systems can provide multi-offset data that illuminate the target object from many different illumination angles. Multi-offset data improve image quality (a) by avoiding illumination shadows, (b) by attenuating coherent noise (image artifacts) caused by wavefield multi-pathing, and (c) by providing information necessary to infer the model parameters needed to simulate wavefields inside the imaging target. The images obtained using multi-offset are robust with respect to instrument noise comparable in strength with the reflected signal. Dual-orbiter acquisition leads to improved image quality which is directly dependent on the aperture between the transmitter and receiver antennas. We illustrate the proposed methodology using a complex model based on a scaled version of asteroid 433 Eros.

  8. Dynamic whole-body PET parametric imaging: I. Concept, acquisition protocol optimization and clinical application

    NASA Astrophysics Data System (ADS)

    Karakatsanis, Nicolas A.; Lodge, Martin A.; Tahari, Abdel K.; Zhou, Y.; Wahl, Richard L.; Rahmim, Arman

    2013-10-01

    Static whole-body PET/CT, employing the standardized uptake value (SUV), is considered the standard clinical approach to diagnosis and treatment response monitoring for a wide range of oncologic malignancies. Alternative PET protocols involving dynamic acquisition of temporal images have been implemented in the research setting, allowing quantification of tracer dynamics, an important capability for tumor characterization and treatment response monitoring. Nonetheless, dynamic protocols have been confined to single-bed-coverage limiting the axial field-of-view to ˜15-20 cm, and have not been translated to the routine clinical context of whole-body PET imaging for the inspection of disseminated disease. Here, we pursue a transition to dynamic whole-body PET parametric imaging, by presenting, within a unified framework, clinically feasible multi-bed dynamic PET acquisition protocols and parametric imaging methods. We investigate solutions to address the challenges of: (i) long acquisitions, (ii) small number of dynamic frames per bed, and (iii) non-invasive quantification of kinetics in the plasma. In the present study, a novel dynamic (4D) whole-body PET acquisition protocol of ˜45 min total length is presented, composed of (i) an initial 6 min dynamic PET scan (24 frames) over the heart, followed by (ii) a sequence of multi-pass multi-bed PET scans (six passes × seven bed positions, each scanned for 45 s). Standard Patlak linear graphical analysis modeling was employed, coupled with image-derived plasma input function measurements. Ordinary least squares Patlak estimation was used as the baseline regression method to quantify the physiological parameters of tracer uptake rate Ki and total blood distribution volume V on an individual voxel basis. Extensive Monte Carlo simulation studies, using a wide set of published kinetic FDG parameters and GATE and XCAT platforms, were conducted to optimize the acquisition protocol from a range of ten different clinically
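    The ordinary least squares Patlak estimation named above reduces, per voxel, to fitting a line: the tissue-to-plasma ratio versus "Patlak time" (the normalized plasma integral) has slope Ki and intercept V. A minimal sketch with synthetic values, not study data:

```python
# OLS fit of the linearized Patlak model:
#   C_t(t)/C_p(t) = Ki * (int_0^t C_p du)/C_p(t) + V
def patlak_ols(x, y):
    n = float(len(x))
    mx, my = sum(x) / n, sum(y) / n
    ki = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
          / sum((xi - mx) ** 2 for xi in x))
    v = my - ki * mx
    return ki, v

x = [1.0, 2.0, 3.0, 4.0]        # "Patlak time" samples (synthetic)
y = [0.25, 0.45, 0.65, 0.85]    # tissue-to-plasma ratios on an exact line
ki, v = patlak_ols(x, y)        # recovers Ki = 0.2, V = 0.05
```

In the whole-body protocol this fit is repeated voxel-wise across the six passes, with the image-derived input function supplying C_p(t).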

  9. Research on remote sensing image pixel attribute data acquisition method in AutoCAD

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoyang; Sun, Guangtong; Liu, Jun; Liu, Hui

    2013-07-01

    Remote sensing images are widely used in AutoCAD, but AutoCAD lacks remote sensing image processing functions. In this paper, ObjectARX was used as the secondary development tool, combined with the Image Engine SDK, to realize remote sensing image pixel attribute data acquisition in AutoCAD, which provides critical technical support for remote sensing image processing algorithms in the AutoCAD environment.

  10. Cardiac imaging in diagnostic VCT using multi-sector data acquisition and image reconstruction: step-and-shoot scan vs. helical scan

    NASA Astrophysics Data System (ADS)

    Tang, Xiangyang; Hsieh, Jiang; Seamans, John L.; Dong, Fang; Okerlund, Darin

    2008-03-01

    Since the advent of multi-slice CT, helical scan has played an increasingly important role in cardiac imaging. With the availability of diagnostic volumetric CT, step-and-shoot scan has recently become popular. Step-and-shoot scan decouples patient table motion from heart beating, and thus the temporal window for data acquisition and image reconstruction can be optimized, resulting in significantly reduced radiation dose and improved tolerance to heart beat rate variation and inter-cycle cardiac motion inconsistency. Multi-sector data acquisition and image reconstruction have been utilized in helical cardiac imaging to improve temporal resolution, but suffer from the coupling of heart beating and patient table motion. Recognizing the clinical demands, the multi-sector data acquisition scheme for step-and-shoot scan is investigated in this paper. The most outstanding feature of multi-sector data acquisition combined with the step-and-shoot scan is the decoupling of patient table advancement from heart beating, which offers the opportunity to employ prospective ECG-gating to improve dose efficiency and to finely adjust the cardiac imaging phase to suppress artifacts caused by inter-cycle cardiac motion inconsistency. The improvement in temporal resolution and the resultant suppression of motion artifacts are evaluated via motion phantoms driven by artificial ECG signals. Both theoretical analysis and experimental evaluation show promising results for the multi-sector data acquisition scheme employed with the step-and-shoot scan. With the ever-increasing gantry rotation speed and detector longitudinal coverage in state-of-the-art VCT scanners, it is expected that step-and-shoot scan with a multi-sector data acquisition scheme will play an increasingly important role in cardiac imaging using diagnostic VCT scanners.

  11. Multispectral integral imaging acquisition and processing using a monochrome camera and a liquid crystal tunable filter.

    PubMed

    Latorre-Carmona, Pedro; Sánchez-Ortiga, Emilio; Xiao, Xiao; Pla, Filiberto; Martínez-Corral, Manuel; Navarro, Héctor; Saavedra, Genaro; Javidi, Bahram

    2012-11-01

    This paper presents an acquisition system and a procedure to capture 3D scenes in different spectral bands. The acquisition system is formed by a monochrome camera and a Liquid Crystal Tunable Filter (LCTF) that allows images to be acquired at different spectral bands in the [480, 680] nm wavelength interval. The Synthetic Aperture Integral Imaging acquisition technique is used to obtain the elemental images for each wavelength. These elemental images are used to computationally obtain the reconstruction planes of the 3D scene at different depth planes. The 3D profile of the acquired scene is also obtained using a minimization of the variance of the contribution of the elemental images at each image pixel. Experimental results show the viability of recovering the 3D multispectral information of the scene. Integration of 3D and multispectral information could have important benefits in different areas, including skin cancer detection, remote sensing and pattern recognition, among others.
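    The computational reconstruction step above is, at its core, a shift-and-sum: each elemental image is shifted by a disparity proportional to camera offset over depth and the results are averaged, so only objects at the chosen depth add coherently. A 1-D toy sketch (pitch and depth values are illustrative):

```python
# Shift-and-sum reconstruction at a chosen depth plane (1-D toy version of
# synthetic aperture integral imaging).
import numpy as np

def reconstruct(elementals, pitch, depth):
    # Pixel disparity between adjacent elemental images scales as pitch/depth.
    shift = int(round(pitch / depth))
    acc = np.zeros_like(elementals[0], dtype=float)
    for k, e in enumerate(elementals):
        acc += np.roll(e, -k * shift)
    return acc / len(elementals)

e0 = np.array([0, 0, 1, 0, 0], dtype=float)
elems = [np.roll(e0, k) for k in range(3)]        # object moves 1 px/camera
rec = reconstruct(elems, pitch=10.0, depth=10.0)  # in-focus plane: shift = 1
```

At the correct depth the point stays sharp (all copies align); at other depths the copies spread out, which is also why the per-pixel variance across elemental images, used for the 3D profile, is minimized at the true depth.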

  12. Development of data acquisition and analysis software for multichannel detectors

    SciTech Connect

    Chung, Y.

    1988-06-01

    This report describes the development of data acquisition and analysis software for Apple Macintosh computers, capable of controlling two multichannel detectors. With the help of outstanding graphics capabilities, easy-to-use user interface, and several other built-in convenience features, this application has enhanced the productivity and the efficiency of data analysis. 2 refs., 6 figs.

  13. 78 FR 37690 - Federal Acquisition Regulation; Price Analysis Techniques

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-21

    ... published a proposed rule in the Federal Register at 77 FR 40552 on July 10, 2012, to clarify and pinpoint a... Federal Acquisition Regulation; Price Analysis Techniques AGENCY: Department of Defense (DoD), General... clarify and give a precise reference in the use of a price analysis technique in order to establish a...

  14. Validation of a target acquisition model for active imager using perception experiments

    NASA Astrophysics Data System (ADS)

    Lapaz, Frédéric; Canevet, Loïc

    2007-10-01

    Active night vision systems based on laser diode emitters have now reached a technology level allowing military applications. In order to predict the performance of observers using such systems, we built an analytic model including sensor, atmosphere, visualization and eye effects. The perception task has been modelled using the Targeting Task Performance metric (TTP metric) developed by R. Vollmerhausen from the Night Vision and Electronic Sensors Directorate (NVESD). Sensor and atmosphere models have been validated separately. In order to validate the whole model, two identification tests have been set up. The first set submitted to trained observers was made of hybrid images. The target-to-background contrast, the blur and the noise were added to armoured vehicle signatures in accordance with the sensor and atmosphere models. The second set of images was made with the same targets, sensed by a real active sensor during field trials. Images were recorded, showing different vehicles, at different ranges and orientations, under different illumination and acquisition configurations. Indeed, this set of real images was built with three different types of gating: wide illumination, illumination of the background and illumination of the target. Analysis of the perception experiment results showed good concordance between the two sets of images. The calculation of an identification criterion, related to this set of vehicles in the near infrared, gave the same results in both cases. The impact of gating on observer performance was also evaluated.

  15. Data acquisition and processing system of the electron cyclotron emission imaging system of the KSTAR tokamak

    SciTech Connect

    Kim, J. B.; Lee, W.; Yun, G. S.; Park, H. K.; Domier, C. W.; Luhmann, N. C. Jr.

    2010-10-15

    A new innovative electron cyclotron emission imaging (ECEI) diagnostic system for the Korean Superconducting Tokamak Advanced Research (KSTAR) device produces a large amount of data. The design of the data acquisition and processing system of the ECEI diagnostic must therefore accommodate this large data production and flow. The system design is based on a layered structure scalable to future extension to accommodate increasing data demands. Software architecture that allows web-based monitoring of the operation status, remote experiment, and data analysis is discussed. The operating software will help machine operators and users validate the acquired data promptly, prepare the next discharge, and enhance experiment performance and data analysis in a distributed environment.

  16. Repetition time and flip angle variation in SPRITE imaging for acquisition time and SAR reduction.

    PubMed

    Shah, N Jon; Kaffanke, Joachim B; Romanzetti, Sandro

    2009-08-01

    Single point imaging methods such as SPRITE are often the technique of choice for imaging fast-relaxing nuclei in solids. Single point imaging sequences based on SPRITE in their conventional form are ill-suited for in vivo applications since the acquisition time is long and the SAR is high. A new sequence design is presented employing variable repetition times and variable flip angles in order to improve the characteristics of SPRITE for in vivo applications. The achievable acquisition time savings as well as SAR reductions and/or SNR increases afforded by this approach were investigated using a resolution phantom as well as PSF simulations. Imaging results in phantoms indicate that acquisition times may be reduced by up to 70% and the SAR may be reduced by 40% without an appreciable loss of image quality. PMID:19447652
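    The acquisition-time saving above follows from simple arithmetic: total SPRITE scan time is the sum of per-point repetition times, so ramping TR down where full recovery is unnecessary shortens the scan. A back-of-envelope sketch with a hypothetical linear TR ramp (the schedule is illustrative, not the published sequence):

```python
# Constant-TR versus linearly ramped variable-TR SPRITE schedule: the scan
# time is just the sum of the per-point repetition times.
n_points = 1000
tr_const_ms = 2.0
const_time = n_points * tr_const_ms

# Hypothetical ramp: TR falls linearly from 2.0 ms to 0.5 ms over the scan.
var_time = sum(2.0 - 1.5 * k / (n_points - 1) for k in range(n_points))

saving = 100.0 * (1.0 - var_time / const_time)  # percent time saved
```

This toy ramp saves 37.5%; the paper's optimized joint TR/flip-angle schedules reach larger savings (up to 70%) while also managing SAR, which scales with the RF duty cycle.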

  17. Multi-channel high-speed CMOS image acquisition and pre-processing system

    NASA Astrophysics Data System (ADS)

    Sun, Chun-feng; Yuan, Feng; Ding, Zhen-liang

    2008-10-01

    A new multi-channel high-speed CMOS image acquisition and pre-processing system is designed to realize image acquisition, data transmission, timing control and simple image processing with a high-speed CMOS image sensor. The modular structure design and the LVDS and ping-pong cache techniques used in the image data acquisition sub-system ensure real-time data acquisition and transmission. Furthermore, a new histogram equalization algorithm with an adaptive threshold, based on the reassignment of redundant gray levels, is incorporated in the image pre-processing module of the FPGA. An iterative method is used to set the threshold, and redundant gray levels are redistributed rationally according to the proportional gray-level interval. Over-enhancement of the background is restrained and the possibility of merging foreground details is reduced. Experiments show that the system can realize image acquisition, transmission, storage and pre-processing at data rates up to 590 Mpixels/s, which supports the design and realization of subsequent systems.
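    The redistribution idea above can be sketched as threshold-clipped histogram equalization: counts above a clip level are treated as redundant and spread back over the histogram before the equalizing lookup table is built. The fixed clip value below is a hypothetical stand-in for the paper's iteratively chosen threshold:

```python
# Clipped histogram equalization with redistribution of the clipped
# (redundant) counts; returns a gray-level mapping table (LUT).
def equalize(hist, clip):
    excess = sum(max(h - clip, 0) for h in hist)
    spread = excess / len(hist)             # uniform re-spread (illustrative)
    clipped = [min(h, clip) + spread for h in hist]
    total = sum(clipped)
    acc, lut = 0.0, []
    for h in clipped:
        acc += h
        lut.append(round((len(hist) - 1) * acc / total))
    return lut

hist = [100, 0, 0, 0, 0, 0, 0, 4]   # background-dominated 8-level histogram
lut = equalize(hist, clip=10)
```

Without clipping, the huge background bin would consume nearly the whole output range; clipping restrains that over-enhancement, which is the behavior the abstract describes.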

  18. Reference radiochromic film dosimetry in kilovoltage photon beams during CBCT image acquisition

    SciTech Connect

    Tomic, Nada; Devic, Slobodan; DeBlois, Francois; Seuntjens, Jan

    2010-03-15

    Purpose: A common approach for dose assessment during cone beam computed tomography (CBCT) acquisition is to use thermoluminescent detectors for skin dose measurements (on patients or phantoms) or an ionization chamber (in phantoms) for body dose measurements. However, the benefits of daily CBCT image acquisition, such as margin reduction in the planning target volume, and the image quality must be weighed against the extra dose received during CBCT acquisitions. Methods: The authors describe a two-dimensional reference dosimetry technique for measuring dose from CBCT scans using the on-board imaging system on a Varian Clinac-iX linear accelerator that employs the XR-QA radiochromic film model, specifically designed for dose measurements at low photon energies. The CBCT dose measurements were performed for three different body regions (head and neck, pelvis, and thorax) using a humanoid Rando phantom. Results: The authors report on both surface dose and dose profile measurements during clinical CBCT procedures carried out on a humanoid Rando phantom. Our measurements show that surface doses per CBCT scan can range anywhere between 0.1 and 4.7 cGy, with the lowest surface dose observed in the head and neck region, while the highest surface dose was observed for the Pelvis spot light CBCT protocol in the pelvic region, on the posterior side of the Rando phantom. The authors also present results of the uncertainty analysis of our XR-QA radiochromic film dosimetry system. Conclusions: The radiochromic film dosimetry protocol described in this work was used to perform dose measurements during CBCT acquisitions with a one-sigma dose measurement uncertainty of up to 3% for doses above 1 cGy. Our protocol is based on film exposure calibration in terms of "air kerma in air," which simplifies both the calibration procedure and reference dosimetry measurements.
The results from a full Monte Carlo investigation of the dose conversion of measured XR-QA film dose at the surface into

  19. Metrics for image-based modeling of target acquisition

    NASA Astrophysics Data System (ADS)

    Fanning, Jonathan D.

    2012-06-01

    This paper presents an image-based system performance model. The image-based system model uses an image metric to compare a given degraded image of a target, as seen through the modeled system, to the set of possible targets in the target set. This is repeated for all possible targets to generate a confusion matrix. The confusion matrix is used to determine the probability of identifying a target from the target set when using a particular system in a particular set of conditions. The image metric used in the image-based model should correspond closely to human performance. The image-based model performance is compared to human perception data on Contrast Threshold Function (CTF) tests, naked eye Triangle Orientation Discrimination (TOD), and TOD including an infrared camera system. Image-based system performance modeling is useful because it allows modeling of arbitrary image processing. Modern camera systems include more complex image processing, much of which is nonlinear. Existing linear system models, such as the TTP metric model implemented in NVESD models such as NV-IPM, assume that the entire system is linear and shift invariant (LSI). The LSI assumption makes modeling nonlinear processes difficult, such as local area processing/contrast enhancement (LAP/LACE), turbulence reduction, and image fusion.
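    The decision step of the image-based model above can be skeletonized: compare a degraded image of each true target against every template with an image metric and tally a confusion matrix, whose diagonal gives the probability of identification. Plain sum-of-squared-differences stands in for the perceptually validated metric, and the "images" are toy vectors:

```python
# Build a confusion matrix by nearest-template matching, then read off the
# probability of identification from its diagonal.
def confusion(templates, degraded):
    def ssd(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    n = len(templates)
    m = [[0] * n for _ in range(n)]
    for i, img in enumerate(degraded):          # row i: true target i
        j = min(range(n), key=lambda k: ssd(img, templates[k]))
        m[i][j] += 1                            # column j: chosen target
    return m

templates = [[1.0, 0.0], [0.0, 1.0]]            # the target set
degraded = [[0.9, 0.1], [0.2, 0.8]]             # as seen through the system
cm = confusion(templates, degraded)
p_id = sum(cm[i][i] for i in range(2)) / 2.0    # probability of identification
```

The point of the image-based approach is that `degraded` can be produced by arbitrary, including nonlinear, image processing, which an LSI transfer-function model cannot represent.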

  20. Analysis of patient movement during 3D USCT data acquisition

    NASA Astrophysics Data System (ADS)

    Ruiter, N. V.; Hopp, T.; Zapf, M.; Kretzek, E.; Gemmeke, H.

    2016-04-01

    In our first clinical study with a full 3D Ultrasound Computer Tomography (USCT) system, patient data were acquired in eight minutes for one breast. In this paper, patient movement during the acquisition is analyzed quantitatively and, as far as possible, corrected in the resulting images. The movement was tracked in ten successive reflectivity reconstructions of full breast volumes acquired during 10 s intervals at different aperture positions, which were separated by 41 s intervals. The mean distance between initial and final position was 2.2 mm (standard deviation (STD) +/- 0.9 mm, max. 4.1 mm, min. 0.8 mm), and the average sum of all moved distances was 4.9 mm (STD +/- 1.9 mm, max. 8.8 mm, min. 2.7 mm). The tracked movement was corrected by summing successive images transformed according to the detected movement. The contrast of these images increased and additional image content became visible.
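    A minimal 2D sketch of the transform-and-sum correction, assuming the per-frame movement has already been tracked and reduces to an integer pixel shift. The blob image, the simulated drift, and the contrast measure below are invented for illustration; the paper works on full 3D reflectivity volumes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for a reflectivity image: a bright blob on background.
base = np.zeros((64, 64))
base[28:36, 28:36] = 1.0

# Ten successive acquisitions, each shifted by a known (tracked) drift + noise.
shifts = [(i, i // 2) for i in range(10)]        # simulated patient movement in pixels
frames = [np.roll(np.roll(base, dy, 0), dx, 1) + 0.2 * rng.normal(size=base.shape)
          for dy, dx in shifts]

# Naive sum (no correction) vs movement-corrected sum: undo each tracked shift first.
naive = np.mean(frames, axis=0)
corrected = np.mean([np.roll(np.roll(f, -dy, 0), -dx, 1)
                     for f, (dy, dx) in zip(frames, shifts)], axis=0)

# Simple contrast measure: blob mean minus background mean.
def contrast(img):
    mask = np.zeros_like(img, bool)
    mask[28:36, 28:36] = True
    return img[mask].mean() - img[~mask].mean()

print(f"contrast naive={contrast(naive):.2f}  corrected={contrast(corrected):.2f}")
```

    Averaging without correction smears the blob along the drift path and lowers its contrast; undoing the tracked shift before summing restores it, mirroring the contrast increase reported above.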

  1. Performance of reduced bit-depth acquisition for optical frequency domain imaging.

    PubMed

    Goldberg, Brian D; Vakoc, Benjamin J; Oh, Wang-Yuhl; Suter, Melissa J; Waxman, Sergio; Freilich, Mark I; Bouma, Brett E; Tearney, Guillermo J

    2009-09-14

    High-speed optical frequency domain imaging (OFDI) has enabled practical wide-field microscopic imaging in the biological laboratory and clinical medicine. The imaging speed of OFDI, and therefore the field of view, of current systems is limited by the rate at which data can be digitized and archived rather than by the system sensitivity or laser performance. One solution to this bottleneck is to natively digitize OFDI signals at reduced bit depths, e.g., at 8-bit depth rather than the conventional 12-14-bit depth, thereby reducing overall bandwidth. However, the implications of reduced bit-depth acquisition on image quality have not been studied. In this paper, we use simulations and empirical studies to evaluate the effects of reduced bit-depth acquisition on OFDI image quality. We show that image acquisition at 8-bit depth allows high system sensitivity with only a minimal drop in the signal-to-noise ratio compared with higher bit-depth systems. Images of a human coronary artery acquired in vivo at 8-bit depth are presented and compared with images acquired at higher bit depths.
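    The bit-depth trade-off can be illustrated with a simple quantization simulation. The sinusoidal fringe signal and the uniform quantizer below are illustrative assumptions, not the paper's actual OFDI acquisition chain; they only show how quantization SNR scales with bit depth (roughly 6 dB per bit).

```python
import numpy as np

# Hypothetical OFDI fringe signal occupying most of the digitizer's input range.
t = np.linspace(0, 1, 4096, endpoint=False)
signal = 0.9 * np.sin(2 * np.pi * 80 * t)

def quantize(x, bits):
    """Uniform mid-tread quantizer over the input range [-1, 1)."""
    levels = 2 ** bits
    step = 2.0 / levels
    return np.clip(np.round(x / step) * step, -1, 1 - step)

def snr_db(x, xq):
    """SNR of the quantized signal relative to the quantization error."""
    return 10 * np.log10(np.mean(x ** 2) / np.mean((x - xq) ** 2))

for bits in (8, 12, 14):
    print(f"{bits}-bit: SNR = {snr_db(signal, quantize(signal, bits)):.1f} dB")
```

    For a full-scale signal, 8-bit quantization already yields roughly 50 dB of quantization SNR, which is consistent with the paper's finding that the drop relative to 12-14-bit systems can be minimal when other noise sources dominate.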

  2. Digital-image processing and image analysis of glacier ice

    USGS Publications Warehouse

    Fitzpatrick, Joan J.

    2013-01-01

    This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of the physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document constitute a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset for Adobe Photoshop CS5 Extended, but the analysis can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.
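    The core of any such grain-statistics pipeline is connected-component labelling of a binarized thin-section image followed by per-grain measurements. A toy sketch, with an invented 5x5 binary array and a simple 4-connected flood fill standing in for the FoveaPro/ImageJ tools named above:

```python
import numpy as np

# Hypothetical binarized thin-section image: 1 = grain interior, 0 = grain boundary.
img = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 0, 1, 1],
    [0, 0, 0, 1, 1],
    [1, 0, 0, 0, 0],
    [1, 1, 0, 1, 1],
])

def label_grains(binary):
    """4-connected component labelling via iterative flood fill."""
    labels = np.zeros_like(binary)
    current = 0
    for seed in zip(*np.nonzero(binary)):
        if labels[seed]:
            continue                      # already belongs to a labelled grain
        current += 1
        stack = [seed]
        while stack:
            y, x = stack.pop()
            if (0 <= y < binary.shape[0] and 0 <= x < binary.shape[1]
                    and binary[y, x] and not labels[y, x]):
                labels[y, x] = current
                stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, current

labels, n = label_grains(img)
areas = [int((labels == i).sum()) for i in range(1, n + 1)]  # grain areas in pixels
print(f"{n} grains, areas {sorted(areas)}")
```

    From the label map, size distributions, shape factors, and neighbour relations follow directly; in practice a scientific package's labelling routine would replace this toy flood fill.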

  3. Infrared imagery acquisition process supporting simulation and real image training

    NASA Astrophysics Data System (ADS)

    O'Connor, John

    2012-05-01

    The increasing use of infrared sensors requires the development of advanced infrared training and simulation tools to meet current Warfighter needs. To prepare the force effectively and avoid negative training, training and simulation images must be both realistic and consistent with each other. The US Army Night Vision and Electronic Sensors Directorate has addressed this deficiency by developing and implementing infrared image collection methods that meet the needs of both real-image trainers and real-time simulations. The author presents innovative methods for the collection of high-fidelity digital infrared images and the associated equipment and environmental standards. The collected images are the foundation for the US Army and USMC Recognition of Combat Vehicles (ROC-V) real-image combat ID training and also support simulations including the Night Vision Image Generator and Synthetic Environment Core. The characteristics, consistency, and quality of these images have contributed to the success of these and other programs. To date, this method has been employed to generate signature sets for over 350 vehicles. The needs of future physics-based simulations will also be met by this data. NVESD's ROC-V image database will support the development of training and simulation capabilities as Warfighter needs evolve.

  4. ACQ4: an open-source software platform for data acquisition and analysis in neurophysiology research

    PubMed Central

    Campagnola, Luke; Kratz, Megan B.; Manis, Paul B.

    2014-01-01

    The complexity of modern neurophysiology experiments requires specialized software to coordinate multiple acquisition devices and analyze the collected data. We have developed ACQ4, an open-source software platform for performing data acquisition and analysis in experimental neurophysiology. This software integrates the tasks of acquiring, managing, and analyzing experimental data. ACQ4 has been used primarily for standard patch-clamp electrophysiology, laser scanning photostimulation, multiphoton microscopy, intrinsic imaging, and calcium imaging. The system is highly modular, which facilitates the addition of new devices and functionality. The modules included with ACQ4 provide for rapid construction of acquisition protocols, live video display, and customizable analysis tools. Position-aware data collection allows automated construction of image mosaics and registration of images with 3-dimensional anatomical atlases. ACQ4 uses free and open-source tools including Python, NumPy/SciPy for numerical computation, PyQt for the user interface, and PyQtGraph for scientific graphics. Supported hardware includes cameras, patch clamp amplifiers, scanning mirrors, lasers, shutters, Pockels cells, motorized stages, and more. ACQ4 is available for download at http://www.acq4.org. PMID:24523692

  5. Quantitative image analysis of synovial tissue.

    PubMed

    van der Hall, Pascal O; Kraan, Maarten C; Tak, Paul Peter

    2007-01-01

    Quantitative image analysis is a form of imaging that includes microscopic histological quantification, video microscopy, image analysis, and image processing. Its hallmarks are the generation of reliable, reproducible, and efficient measurements via strict calibration and step-by-step control of the acquisition, storage, and evaluation of images with dedicated hardware and software. Major advantages of quantitative image analysis over traditional techniques include sophisticated calibration systems, interactivity, speed, and control of inter- and intraobserver variation. This results in a well-controlled environment, which is essential for quality control and reproducibility, and helps to optimize sensitivity and specificity. To achieve this, an optimal quantitative image analysis system combines solid software engineering with easy interactivity for the operator. The system also needs to be as transparent as possible in generating the data, because a "black box" design will deliver uncontrollable results. Beyond these general aspects, the necessity of interactivity specifically for the analysis of synovial tissue is highlighted by the added value of identifying and quantifying information in areas such as the intimal lining layer, blood vessels, and lymphocyte aggregates. Speed is another important aspect of digital cytometry. Rapidly increasing numbers of samples, together with the accumulation of a variety of markers and detection techniques, have made traditional analysis techniques such as manual quantification and semi-quantitative analysis impractical. It can be anticipated that the development of even more powerful computer systems with sophisticated software will further facilitate reliable analysis at high speed.

  6. Design of area array CCD image acquisition and display system based on FPGA

    NASA Astrophysics Data System (ADS)

    Li, Lei; Zhang, Ning; Li, Tianting; Pan, Yue; Dai, Yuming

    2014-09-01

    With the development of science and technology, CCDs (charge-coupled devices) have been widely applied in various fields and play an important role in modern sensing systems, so a real-time image acquisition and display design based on a CCD device has great practical significance. This paper introduces an image data acquisition and display system for an area array CCD based on an FPGA. Several key technical challenges of the system are analyzed and solutions put forward. The FPGA serves as the core processing unit and controls the overall timing sequence. The system uses the ICX285AL area array CCD image sensor produced by SONY Corporation. The FPGA drives the area array CCD, and an analog front end (AFE) processes the CCD image signal, including amplification, filtering, noise elimination, and correlated double sampling (CDS). An AD9945 from ADI Corporation converts the analog signal to a digital signal. A Camera Link high-speed data transmission circuit was developed, the PC-side image acquisition software was completed, and real-time display of images was realized. Practical testing indicates that the system is stable and reliable in image acquisition and control, and that its performance meets the project requirements.

  7. Acquisition and evaluation of radiography images by digital camera.

    PubMed

    Cone, Stephen W; Carucci, Laura R; Yu, Jinxing; Rafiq, Azhar; Doarn, Charles R; Merrell, Ronald C

    2005-04-01

    The purpose of this study was to determine the applicability of low-cost digital imaging for the different radiographic modalities used in consultations from remote areas of the Ecuadorian rainforest with limited medical and financial resources. Low-cost digital imaging, consisting of hand-held digital cameras, was used for image capture at a remote location. Diagnostic radiographic images were captured in Ecuador by digital camera and transmitted to a password-protected File Transfer Protocol (FTP) server at VCU Medical Center in Richmond, Virginia, using standard Internet connectivity with standard security. After capture and transfer of the images via low-bandwidth Internet connections, attending radiologists in the United States compared their diagnoses to those from Ecuador to evaluate the quality of the image transfer. Corroborative diagnoses were obtained with the digital camera images for greater than 90% of the plain film and computed tomography studies; ultrasound (U/S) studies demonstrated only 56% corroboration. Images of radiographs captured with commercially available digital cameras can provide quality sufficient for expert consultation on many plain film studies for remote, underserved areas without access to advanced modalities.

  8. Constrained acquisition of ink spreading curves from printed color images.

    PubMed

    Bugnon, Thomas; Hersch, Roger D

    2011-02-01

    Today's spectral reflection prediction models are able to predict the reflection spectra of printed color images with an accuracy as high as the reproduction variability allows. However, to calibrate such models, special uniform calibration patches need to be printed. These calibration patches occupy space and have to be removed from the final product. The present contribution shows how to deduce the ink spreading behavior of color halftones from spectral reflectances acquired within printed color images. Image tiles of a color as uniform as possible are selected within the printed images. The ink spreading behavior is fitted by relying on the spectral reflectances of the selected image tiles. A relevance metric specifies the impact of each ink spreading curve on the selected image tiles. These relevance metrics are used to constrain the corresponding ink spreading curves. Experiments performed on an inkjet printer demonstrate that the new constraint-based calibration of the spectral reflection prediction model performs well when predicting color halftones significantly different from the selected image tiles. For some prints, the proposed image-based model calibration is more accurate than a classical calibration.

  9. Monitoring of HTS compound library quality via a high-resolution image acquisition and processing instrument.

    PubMed

    Baillargeon, Pierre; Scampavia, Louis; Einsteder, Ross; Hodder, Peter

    2011-06-01

    This report presents the high-resolution image acquisition and processing instrument for compound management applications (HIAPI-CM). The HIAPI-CM combines imaging spectroscopy and machine-vision analysis to perform rapid assessment of high-throughput screening (HTS) compound library quality. It has been customized to detect and classify typical artifacts found in HTS compound library microtiter plates (MTPs). These artifacts include (1) insufficient volume of liquid compound sample, (2) compound precipitation, and (3) colored compounds that interfere with HTS assay detection format readout. The HIAPI-CM is also configured to automatically query and compare its analysis results to data stored in a LIMS or corporate database, aiding in the detection of compound registration errors. To demonstrate its capabilities, several compound plates (n=5760 wells total) containing different artifacts were measured via automated HIAPI-CM analysis, and the results were compared with those obtained by manual (visual) inspection. In all cases, the instrument demonstrated high fidelity (99.8% for empty wells; 100.1% for filled wells; 94.4% for partially filled wells; 94.0% for wells containing colored compounds), and in the case of precipitate detection, the HIAPI-CM results significantly exceeded the fidelity of visual observations (220.0%). As described, the HIAPI-CM allows for noninvasive, nondestructive MTP assessment with a diagnostic throughput of about 1 min per plate, reducing analytical expenses and improving the quality and stewardship of HTS compound libraries.

  10. FABIA: factor analysis for bicluster acquisition

    PubMed Central

    Hochreiter, Sepp; Bodenhofer, Ulrich; Heusel, Martin; Mayr, Andreas; Mitterecker, Andreas; Kasim, Adetayo; Khamiakova, Tatsiana; Van Sanden, Suzy; Lin, Dan; Talloen, Willem; Bijnens, Luc; Göhlmann, Hinrich W. H.; Shkedy, Ziv; Clevert, Djork-Arné

    2010-01-01

    Motivation: Biclustering of transcriptomic data groups genes and samples simultaneously. It is emerging as a standard tool for extracting knowledge from gene expression measurements. We propose a novel generative approach for biclustering called ‘FABIA: Factor Analysis for Bicluster Acquisition’. FABIA is based on a multiplicative model, which accounts for linear dependencies between gene expression and conditions, and also captures the heavy-tailed distributions observed in real-world transcriptomic data. The generative framework allows well-founded model selection methods to be utilized and Bayesian techniques to be applied. Results: On 100 simulated datasets with known, artificially implanted biclusters, FABIA clearly outperformed all 11 competitors. On these datasets, FABIA was able to separate spurious biclusters from true biclusters by ranking biclusters according to their information content. FABIA was tested on three microarray datasets with known subclusters, where it was twice the best and once the second-best method among the compared biclustering approaches. Availability: FABIA is available as an R package on Bioconductor (http://www.bioconductor.org). All datasets, results and software are available at http://www.bioinf.jku.at/software/fabia/fabia.html Contact: hochreit@bioinf.jku.at Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20418340

  11. An Imaging And Graphics Workstation For Image Sequence Analysis

    NASA Astrophysics Data System (ADS)

    Mostafavi, Hassan

    1990-01-01

    This paper describes an application-specific engineering workstation designed and developed to analyze imagery sequences from a variety of sources. The system combines the software and hardware environment of modern graphics-oriented workstations with digital image acquisition, processing, and display techniques. The objective is to achieve automation and high throughput for many data reduction tasks involving metric studies of image sequences. The applications of such an automated data reduction tool include analysis of the trajectory and attitude of aircraft, missiles, stores, and other flying objects in various flight regimes, including launch and separation as well as regular flight maneuvers. The workstation can also be used in an on-line or off-line mode to study three-dimensional motion of aircraft models in simulated flight conditions such as wind tunnels. The system's key features are: 1) acquisition and storage of image sequences by digitizing real-time video or frames from a film strip; 2) computer-controlled movie loop playback, slow motion, and freeze-frame display combined with digital image sharpening, noise reduction, contrast enhancement, and interactive image magnification; 3) multiple leading-edge tracking, in addition to object centroids, at up to 60 fields per second from either live input video or a stored image sequence; 4) automatic and manual field-of-view and spatial calibration; 5) image sequence database generation and management, including the measurement data products; 6) off-line analysis software for trajectory plotting and statistical analysis; 7) model-based estimation and tracking of object attitude angles; and 8) interface to a variety of video players and film transport sub-systems.

  12. Image Analysis of Foods.

    PubMed

    Russ, John C

    2015-09-01

    The structure of foods, both natural and processed, is controlled by many variables ranging from biology to chemistry and mechanical forces. The structure in turn controls many of the properties of the food, including consumer acceptance, taste, mouthfeel, appearance, and nutrition. Imaging provides an important tool for measuring the structure of foods. This includes 2-dimensional (2D) images of surfaces and sections, for example as viewed in a microscope, as well as 3-dimensional (3D) images of internal structure as may be produced by confocal microscopy, computed tomography, or magnetic resonance imaging. The use of images also guides robotics for harvesting and sorting. Processing of images may be needed to calibrate colors, reduce noise, enhance detail, and delineate structure and dimensions. Measurement of structural information such as volume fraction and internal surface area, as well as the analysis of object size, location, and shape in both 2- and 3-dimensional images, is illustrated and described, with primary references and examples from a wide range of applications. PMID:26270611

  13. Effect of temporal acquisition parameters on image quality of strain time constant elastography.

    PubMed

    Nair, Sanjay; Varghese, Joshua; Chaudhry, Anuj; Righetti, Raffaella

    2015-04-01

    Ultrasound methods to image the time constant (TC) of elastographic tissue parameters have recently been developed. Elastographic TC images from creep or stress relaxation tests have been shown to provide information on the viscoelastic and poroelastic behavior of tissues. However, the effect of temporal ultrasonic acquisition parameters and input noise on the image quality of the resultant strain TC elastograms has not been fully investigated. Understanding such effects could have important implications for clinical applications of these novel techniques. This work reports a simulation study aimed at investigating the effects of varying windows of observation, acquisition frame rate, and strain signal-to-noise ratio (SNR) on the image quality of elastographic TC estimates. A pilot experimental study was used to corroborate the simulation results in specific testing conditions. The results of this work suggest that the total acquisition time necessary for accurate strain TC estimates depends linearly on the underlying strain TC (as estimated from the theoretical strain-vs.-time curve). The results also indicate that it might be possible to make accurate estimates of the elastographic TC (within 10% error) using windows of observation as small as 20% of the underlying TC, provided sufficiently fast acquisition rates (>100 Hz for typical acquisition depths). The limited experimental data reported in this study statistically confirm the simulation trends, indicating that the proposed model can be used as upper-bound guidance for the correct execution of the experiments.
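    The idea that a short observation window can still pin down the TC can be sketched on a noise-free creep curve. The TC value, frame rate, and window fractions below are illustrative assumptions; real data would add the strain noise the study varies, which is what makes short windows and slow frame rates problematic in practice.

```python
import numpy as np

# Idealized creep strain curve: eps(t) = eps_inf * (1 - exp(-t / TC)).
TC, eps_inf = 2.0, 1.0          # hypothetical time constant (s) and plateau strain

def estimate_tc(frame_rate_hz, window_frac):
    """Estimate TC from a window covering window_frac * TC of the curve,
    by linearizing ln(1 - eps/eps_inf) = -t/TC and fitting the slope."""
    t = np.arange(0.0, window_frac * TC, 1.0 / frame_rate_hz)
    eps = eps_inf * (1 - np.exp(-t / TC))
    slope = np.polyfit(t, np.log(1 - eps / eps_inf), 1)[0]
    return -1.0 / slope

for window in (0.2, 1.0, 3.0):
    print(f"window {window:.1f}*TC -> TC estimate {estimate_tc(100, window):.3f} s")
```

    On noiseless data even a 0.2*TC window recovers the true TC exactly; adding noise degrades the short-window fits first, which is why the study ties the usable window length to the strain SNR and frame rate.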

  14. Noise-compensated homotopic non-local regularized reconstruction for rapid retinal optical coherence tomography image acquisitions

    PubMed Central

    2014-01-01

    Background Optical coherence tomography (OCT) is a minimally invasive imaging technique, which utilizes the spatial and temporal coherence properties of optical waves backscattered from biological material. Recent advances in tunable lasers and infrared camera technologies have enabled an increase in OCT imaging speed by a factor of more than 100, which is important for retinal imaging, where we wish to study fast physiological processes in the biological tissue. However, the high scanning rate causes a proportional decrease of the detector exposure time, resulting in a reduction of the system signal-to-noise ratio (SNR). One approach to improving the image quality of OCT tomograms acquired at high speed is to compensate for the noise component in the images without compromising the sharpness of the image details. Methods In this study, we propose a novel reconstruction method for rapid OCT image acquisitions, based on a noise-compensated homotopic modified James-Stein non-local regularized optimization strategy. The performance of the algorithm was tested on a series of high-resolution OCT images of the human retina acquired at different imaging rates. Results Quantitative analysis was used to evaluate the performance of the algorithm against two state-of-the-art denoising strategies. Results demonstrate significant SNR improvements when using our proposed approach compared to the other approaches. Conclusions A new reconstruction method based on a noise-compensated homotopic modified James-Stein non-local regularized optimization strategy was developed for the purpose of improving the quality of rapid OCT image acquisitions. Preliminary results show the proposed method holds considerable promise as a tool to improve the visualization and analysis of biological material using OCT. PMID:25319186

  15. Data acquisition and analysis procedures for high-resolution atomic force microscopy in three dimensions.

    PubMed

    Albers, Boris J; Schwendemann, Todd C; Baykara, Mehmet Z; Pilet, Nicolas; Liebmann, Marcus; Altman, Eric I; Schwarz, Udo D

    2009-07-01

    Data acquisition and analysis procedures for noncontact atomic force microscopy that allow the recording of dense three-dimensional (3D) surface force and energy fields with atomic resolution are presented. The main obstacles to producing high-quality 3D force maps are long acquisition times, which lead to data sets being distorted by drift and tip changes. Both problems are reduced, but not eliminated, by low-temperature operation. The procedures presented here employ an image-by-image data acquisition scheme that cuts measurement times by avoiding repeated recording of redundant information, while allowing post-acquisition drift correction. All steps are detailed with the example of measurements performed on highly oriented pyrolytic graphite in ultrahigh vacuum at a temperature of 6 K. The area covered spans several unit cells laterally and, vertically, from the attractive region to where no force could be measured. The resulting fine data mesh maps piconewton forces with <7 pm lateral and <2 pm vertical resolution. From this 3D data set, two-dimensional cuts along any plane can be plotted. Cuts in a plane parallel to the sample surface show atomic resolution, while cuts along the surface normal visualize how the attractive atomic force fields extend into vacuum. At the same time, maps of the tip-sample potential energy, the lateral tip-sample forces, and the energy dissipated during cantilever oscillation can be produced with identical resolution.

  16. Optical Image Acquisition by Vibrating Knife Edge Techniques

    NASA Astrophysics Data System (ADS)

    Samson, Scott A.

    Traditional optical microscopes have inherent limitations in their attainable resolution. These shortcomings are a result of non-propagating evanescent waves being created by the small details in the specimen to be imaged. These problems are circumvented in the Near-field Scanning Optical Microscope (NSOM). Previous NSOMs use physical apertures to sample the optical field created by the specimen. By scanning a sub-wavelength-sized aperture past the specimen, very minute details may be imaged. In this thesis, a new method for obtaining images of various objects is studied. The method is a derivative of scanned knife edge techniques commonly used in optical laboratories. The general setup consists of illuminating a vibrating optically-opaque knife edge placed in close proximity to the object. By detecting only the time-varying optical power and utilizing various signal processing techniques, including computer subtraction, beat frequency detection, and tomographic reconstruction, two-dimensional images of the object may be formed. In essence, a sampler similar to the aperture NSOMs is created. Mathematics, computer simulations, and low-resolution experiments are used to verify the thesis. Various aspects associated with improving the resolution with regard to NSOM are discussed, both theoretically and practically. The vibrating knife edge as a high-resolution sampler is compared to the physically small NSOM aperture. Finally, future uses of the vibrating knife edge techniques and further research are introduced. Applicable references and computer programs are listed in appendices.

  17. Machine Learning and Computer Vision System for Phenotype Data Acquisition and Analysis in Plants

    PubMed Central

    Navarro, Pedro J.; Pérez, Fernando; Weiss, Julia; Egea-Cortines, Marcos

    2016-01-01

    Phenomics is a technology-driven approach with a promising future for obtaining unbiased data on biological systems. Image acquisition is relatively simple; however, data handling and analysis are not as well developed as the sampling capacity. We present a system based on machine learning (ML) algorithms and computer vision intended to solve automatic phenotype data analysis in plant material. We developed a growth chamber able to accommodate species of various sizes. Night image acquisition requires near-infrared lighting. For the ML process, we tested three different algorithms: k-nearest neighbour (kNN), Naive Bayes Classifier (NBC), and Support Vector Machine (SVM). Each ML algorithm was executed with different kernel functions and trained with raw data and two types of data normalisation. Different metrics were computed to determine the optimal configuration of the machine learning algorithms. We obtained a performance of 99.31% with kNN for RGB images and 99.34% with SVM for NIR images. Our results show that ML techniques can speed up phenomic data analysis. Furthermore, both RGB and NIR images can be segmented successfully but may require different ML algorithms for segmentation. PMID:27164103
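    A minimal kNN pixel classifier in the spirit of the segmentation step: each pixel's RGB value is classified by majority vote of its nearest labelled training pixels. The class centroids, training-set sizes, and test pixels below are invented for illustration, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical training pixels: RGB triples labelled plant (1) or background (0).
plant = rng.normal(loc=[40, 120, 50], scale=10, size=(50, 3))    # greenish leaf pixels
backgr = rng.normal(loc=[110, 100, 90], scale=10, size=(50, 3))  # greyish soil pixels
X_train = np.vstack([plant, backgr])
y_train = np.array([1] * 50 + [0] * 50)

def knn_predict(X, X_train, y_train, k=3):
    """Classify each pixel by majority vote of its k nearest training pixels."""
    d = np.linalg.norm(X[:, None, :] - X_train[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return (y_train[nearest].mean(axis=1) > 0.5).astype(int)

X_test = np.array([[45, 115, 55],    # plant-like pixel
                   [105, 95, 85]])   # background-like pixel
print(knn_predict(X_test, X_train, y_train))
```

    Applied per pixel over a whole image, this yields a plant/background mask; the paper's point is that the best-performing classifier (kNN vs. SVM) can differ between RGB and NIR imagery.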

  20. Image analysis library software development

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Bryant, J.

    1977-01-01

    The Image Analysis Library consists of a collection of general purpose mathematical/statistical routines and special purpose data analysis/pattern recognition routines basic to the development of image analysis techniques for support of current and future Earth Resources Programs. Work was done to provide a collection of computer routines and associated documentation which form a part of the Image Analysis Library.

  1. MR imaging of ore for heap bioleaching studies using pure phase encode acquisition methods

    NASA Astrophysics Data System (ADS)

    Fagan, Marijke A.; Sederman, Andrew J.; Johns, Michael L.

    2012-03-01

Various MRI techniques were considered with respect to imaging of aqueous flow fields in low grade copper ore. Spin echo frequency encoded techniques were shown to produce unacceptable image distortions, which led to pure phase encoded techniques being considered. Single point imaging multiple point acquisition (SPI-MPA) and spin echo single point imaging (SESPI) techniques were applied. By direct comparison with X-ray tomographic images, both techniques were found to produce distortion-free images of the ore packings at 2 T. The signal to noise ratios (SNRs) of the SESPI images were found to be superior to SPI-MPA for equal total acquisition times; this was explained based on NMR relaxation measurements. SESPI was also found to produce suitable images for a range of particle sizes, whereas SPI-MPA SNR deteriorated markedly as particle size was reduced. Comparisons on a 4.7 T magnet showed significant signal loss from the SPI-MPA images, the effect of which was accentuated in the case of unsaturated flowing systems. Hence it was concluded that SESPI was the most robust imaging method for the study of copper ore heap leaching hydrology.

  2. 2D imaging and 3D sensing data acquisition and mutual registration for painting conservation

    NASA Astrophysics Data System (ADS)

    Fontana, Raffaella; Gambino, Maria Chiara; Greco, Marinella; Marras, Luciano; Pampaloni, Enrico M.; Pelagotti, Anna; Pezzati, Luca; Poggi, Pasquale

    2005-01-01

We describe the application of 2D and 3D data acquisition and mutual registration to the conservation of paintings. RGB color image acquisition, IR and UV fluorescence imaging, together with the more recent hyperspectral imaging (32 bands), are among the most useful techniques in this field. They are generally meant to provide information on the painting materials, on the techniques employed and on the object's state of conservation. However, only when the various images are precisely registered with each other and with the 3D model can ambiguity be removed and safe conclusions drawn. We present the integration of 2D and 3D measurements carried out on two different paintings: "Madonna of the Yarnwinder" by Leonardo da Vinci and "Portrait of Lionello d'Este" by Pisanello, both painted in the 15th century.

  4. Contrast medium administration and image acquisition parameters in renal CT angiography: what radiologists need to know

    PubMed Central

    Saade, Charbel; Deeb, Ibrahim Alsheikh; Mohamad, Maha; Al-Mohiy, Hussain; El-Merhi, Fadi

    2016-01-01

Over the last decade, exponential advances in computed tomography (CT) technology have resulted in improved spatial and temporal resolution. Faster image acquisition enabled renal CT angiography to become a viable and effective noninvasive alternative in diagnosing renal vascular pathologies. However, with these advances, new challenges in contrast media administration have emerged. Poor synchronization between the scanner and contrast media administration has reduced consistency in image quality, yielding poor spatial and contrast resolution. Comprehensive understanding of contrast media dynamics is essential in the design and implementation of contrast administration and image acquisition protocols. This review includes an overview of the parameters affecting renal artery opacification and current protocol strategies to achieve optimal image quality during renal CT angiography with iodinated contrast media, with current safety issues highlighted. PMID:26728701

  5. Development and application of a high speed digital data acquisition technique to study steam bubble collapse using particle image velocimetry

    SciTech Connect

    Schmidl, W.D.

    1992-08-01

The use of a Particle Image Velocimetry (PIV) method, which uses digital cameras for data acquisition, for studying high speed fluid flows is usually limited by the digital camera's frame acquisition rate. The velocity of the fluid under study has to be limited to ensure that the tracer seeds suspended in the fluid remain in the camera's focal plane for at least two consecutive images. However, the use of digital cameras for data acquisition is desirable to simplify and expedite the data analysis process. A technique was developed which measures fluid velocities with PIV techniques using two successive digital images and two different framing rates simultaneously. The first part of the method measures changes which occur to the flow field at a relatively slow frame interval of 53.8 ms. The second part measures changes to the same flow field at a relatively fast frame interval of 100 to 320 µs. The effectiveness of this technique was tested by studying the collapse of steam bubbles in a subcooled tank of water, a relatively high speed phenomenon. The tracer particles were recorded and velocity vectors for the fluid were obtained far from the steam bubble collapse.
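The core PIV operation, recovering the displacement of the tracer pattern between two exposures from the peak of their cross-correlation, can be sketched with an FFT. This is a minimal sketch of the PIV principle only, not the paper's system; the synthetic tracer field and the circular-shift assumption are illustrative.

```python
import numpy as np

def piv_displacement(frame_a, frame_b):
    """Estimate the mean tracer displacement between two frames from the
    peak of their circular cross-correlation, computed via the FFT."""
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Interpret peaks past the midpoint as negative shifts.
    if dy > corr.shape[0] // 2:
        dy -= corr.shape[0]
    if dx > corr.shape[1] // 2:
        dx -= corr.shape[1]
    return int(dy), int(dx)

# Synthetic tracer field, shifted by (3, -2) pixels between exposures.
rng = np.random.default_rng(0)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, (3, -2), axis=(0, 1))
print(piv_displacement(frame_a, frame_b))  # (3, -2)
```

Dividing the recovered pixel displacement by the frame interval (53.8 ms or 100-320 µs in the study) and multiplying by the pixel size would then give the velocity vector for that interrogation window.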

  7. Image sequence analysis workstation for multipoint motion analysis

    NASA Astrophysics Data System (ADS)

    Mostafavi, Hassan

    1990-08-01

This paper describes an application-specific engineering workstation designed and developed to analyze the motion of objects from video sequences. The system combines the software and hardware environment of a modern graphics-oriented workstation with digital image acquisition, processing and display techniques. In addition to automation and increased throughput of data reduction tasks, the objective of the system is to provide less invasive methods of measurement by offering the ability to track objects that are more complex than reflective markers. Grey-level image processing and spatial/temporal adaptation of the processing parameters are used for location and tracking of more complex features of objects under uncontrolled lighting and background conditions. The applications of such an automated and noninvasive measurement tool include analysis of the trajectory and attitude of rigid bodies such as human limbs, robots, aircraft in flight, etc. The system's key features are: 1) acquisition and storage of image sequences by digitizing and storing real-time video; 2) computer-controlled movie-loop playback, freeze-frame display, and digital image enhancement; 3) multiple leading-edge tracking, in addition to object centroids, at up to 60 fields per second from either live input video or a stored image sequence; 4) model-based estimation and tracking of the six degrees of freedom of a rigid body; 5) field-of-view and spatial calibration; 6) image sequence and measurement database management; and 7) offline analysis software for trajectory plotting and statistical analysis.

  8. Micro-MRI-based image acquisition and processing system for assessing the response to therapeutic intervention

    NASA Astrophysics Data System (ADS)

    Vasilić, B.; Ladinsky, G. A.; Saha, P. K.; Wehrli, F. W.

    2006-03-01

Osteoporosis is the cause of over 1.5 million bone fractures annually. Most of these fractures occur in sites rich in trabecular bone, a complex network of bony struts and plates found throughout the skeleton. The three-dimensional structure of the trabecular bone network significantly determines mechanical strength and thus fracture resistance. Here we present a data acquisition and processing system that allows efficient noninvasive assessment of trabecular bone structure through a "virtual bone biopsy". High-resolution MR images are acquired from which the trabecular bone network is extracted by estimating the partial bone occupancy of each voxel. A heuristic voxel subdivision increases the effective resolution of the bone volume fraction map and serves as a basis for subsequent analysis of topological and orientational parameters. Semi-automated registration and segmentation ensure selection of the same anatomical location in subjects imaged at different time points during treatment. It is shown, with excerpts from an ongoing clinical study of early post-menopausal women, that significant reduction in network connectivity occurs in the control group while structural integrity is maintained in the hormone replacement group. The system described should be suited for large-scale studies designed to evaluate the efficacy of therapeutic intervention in subjects with metabolic bone disease.

  9. Data acquisition and analysis using the IBM Computer System 9000

    SciTech Connect

    Mueller, G.E.

    1985-10-01

A data-acquisition, analysis, and graphing program has been developed on the IBM CS-9000 multitask computer to support the UNM/SNL/GA Thermal-Hydraulic Test Facility. The software has been written in Computer System BASIC, which allows accessing and configuring I/O devices. The CS-9000 has been interfaced with an HP 3497A Data Acquisition/Control Unit and an HP 7470A Graphics Plotter through the IEEE-488 Bus. With this configuration the system is capable of scanning 60 channels of analog thermocouple-compensated input, 20 channels of analog pressure transducer input, and 16 channels of digital mass flow rate input. The CS-9000 graphics coupled with the HP 7470A provides useful visualization of changes in measured parameters. 8 refs., 7 figs.

  10. Improvement of web-based data acquisition and management system for GOSAT validation lidar data analysis

    NASA Astrophysics Data System (ADS)

    Okumura, Hiroshi; Takubo, Shoichiro; Kawasaki, Takeru; Abdullah, Indra Nugraha; Uchino, Osamu; Morino, Isamu; Yokota, Tatsuya; Nagai, Tomohiro; Sakai, Tetsu; Maki, Takashi; Arai, Kohei

    2013-01-01

A web-based data acquisition and management system for GOSAT (Greenhouse gases Observing SATellite) validation lidar data analysis has been developed. The system consists of a data acquisition sub-system (DAS) and a data management sub-system (DMS). DAS, written in Perl, acquires AMeDAS (Automated Meteorological Data Acquisition System) ground-level local meteorological data, GPS radiosonde upper-air meteorological data, ground-level oxidant data, skyradiometer data, skyview camera images, meteorological satellite IR image data and GOSAT validation lidar data. DMS, written in PHP, displays satellite-pass dates and all acquired data. In this article, we briefly describe some improvements for higher performance and higher data usability. DAS now automatically calculates molecular number density profiles from the GPS radiosonde upper-air meteorological data and the U.S. standard atmosphere model. Predicted ozone density profile images above Saga city are also calculated using the Meteorological Research Institute (MRI) chemistry-climate model version 2 for comparison with actual ozone DIAL data.

  11. A Real-Time Image Acquisition And Processing System For A RISC-Based Microcomputer

    NASA Astrophysics Data System (ADS)

    Luckman, Adrian J.; Allinson, Nigel M.

    1989-03-01

A low-cost image acquisition and processing system has been developed for the Acorn Archimedes microcomputer. Using a Reduced Instruction Set Computer (RISC) architecture, the ARM (Acorn RISC Machine) processor provides instruction speeds suitable for image processing applications. The associated improvement in data transfer rate has allowed real-time video image acquisition without the need for frame-store memory external to the microcomputer. The system comprises real-time video digitising hardware which interfaces directly to the Archimedes memory, and software to provide an integrated image acquisition and processing environment. The hardware can digitise a video signal at up to 640 samples per video line with programmable parameters such as sampling rate and gain. Software support includes a work environment for image capture and processing with pixel, neighbourhood and global operators. A friendly user interface is provided with the help of the Archimedes Operating System WIMP (Windows, Icons, Mouse and Pointer) Manager. Windows provide a convenient way of handling images on the screen, and program control is directed mostly by pop-up menus.

  12. Medical Image Analysis Facility

    NASA Technical Reports Server (NTRS)

    1978-01-01

To improve the quality of photos sent to Earth by unmanned spacecraft, NASA's Jet Propulsion Laboratory (JPL) developed a computerized image enhancement process that brings out detail not visible in the basic photo. JPL is now applying this technology to biomedical research in its Medical Image Analysis Facility, which employs computer enhancement techniques to analyze x-ray films of internal organs, such as the heart and lung. A major objective is the study of the effects of stress on persons with heart disease. In animal tests, computerized image processing is being used to study coronary artery lesions and the degree to which they reduce arterial blood flow when stress is applied. The photos illustrate the enhancement process. The upper picture is an x-ray photo in which the artery (dotted line) is barely discernible; in the post-enhancement photo at right, the whole artery and the lesions along its wall are clearly visible. The Medical Image Analysis Facility offers a faster means of studying the effects of complex coronary lesions in humans, and the research now being conducted on animals is expected to have important application to the diagnosis and treatment of human coronary disease. Other uses of the facility's image processing capability include analysis of muscle biopsy and pap smear specimens, and study of the microscopic structure of fibroprotein in the human lung. Working with JPL on experiments are NASA's Ames Research Center, the University of Southern California School of Medicine, and Rancho Los Amigos Hospital, Downey, California.

  13. HASTE sequence with parallel acquisition and T2 decay compensation: application to carotid artery imaging.

    PubMed

    Zhang, Ling; Kholmovski, Eugene G; Guo, Junyu; Choi, Seong-Eun Kim; Morrell, Glen R; Parker, Dennis L

    2009-01-01

T2-weighted carotid artery images acquired using the turbo spin-echo (TSE) sequence frequently suffer from motion artifacts due to respiration and blood pulsation. The possibility of using the HASTE sequence to achieve motion-free carotid images was investigated. The HASTE sequence suffers from severe blurring artifacts caused by signal loss in later echoes due to T2 decay. Combining HASTE with parallel acquisition (PHASTE) decreases the number of echoes acquired and thus effectively reduces the blurring artifact caused by T2 relaxation. Further improvement in image sharpness can be achieved by performing T2 decay compensation before reconstructing the PHASTE data. Preliminary results have shown successful suppression of motion artifacts with PHASTE imaging. The image quality was enhanced relative to the original HASTE image, but was still less sharp than a non-motion-corrupted TSE image.

  14. Artifact reduction in moving-table acquisitions using parallel imaging and multiple averages.

    PubMed

    Fautz, H P; Honal, M; Saueressig, U; Schäfer, O; Kannengiesser, S A R

    2007-01-01

Two-dimensional (2D) axial continuously-moving-table imaging has to deal with artifacts due to gradient nonlinearity and breathing motion, and has to provide the highest scan efficiency. Parallel imaging techniques (e.g., generalized autocalibrating partially parallel acquisition (GRAPPA)) are used to reduce such artifacts and to avoid ghosting artifacts. The latter occur in T(2)-weighted multi-spin-echo (SE) acquisitions that omit an additional excitation prior to imaging scans for presaturation purposes. Multiple images are reconstructed from subdivisions of a fully sampled k-space data set, each of which is acquired in a single SE train. These images are then averaged. GRAPPA coil weights are estimated without additional measurements. Compared to conventional image reconstruction, inconsistencies between different subsets of k-space induce fewer artifacts when each k-space part is reconstructed separately and the multiple images are averaged afterwards. These inconsistencies may lead to inaccurate GRAPPA coil weights using the proposed intrinsic GRAPPA calibration. It is shown that aliasing artifacts in single images are canceled out after averaging. Phantom and in vivo studies demonstrate the benefit of the proposed reconstruction scheme for free-breathing axial continuously-moving-table imaging using fast multi-SE sequences.

  15. Patient-adaptive reconstruction and acquisition in dynamic imaging with sensitivity encoding (PARADISE).

    PubMed

    Sharif, Behzad; Derbyshire, J Andrew; Faranesh, Anthony Z; Bresler, Yoram

    2010-08-01

MRI of the human heart without explicit cardiac synchronization promises to extend the applicability of cardiac MR to a larger patient population and potentially expand its diagnostic capabilities. However, conventional nongated imaging techniques typically suffer from low image quality or inadequate spatio-temporal resolution and fidelity. Patient-Adaptive Reconstruction and Acquisition in Dynamic Imaging with Sensitivity Encoding (PARADISE) is a highly accelerated nongated dynamic imaging method that enables artifact-free imaging with high spatio-temporal resolutions by utilizing novel computational techniques to optimize the imaging process. In addition to using parallel imaging, the method gains acceleration from a physiologically driven spatio-temporal support model; hence, it is doubly accelerated. The support model is patient adaptive, i.e., its geometry depends on dynamics of the imaged slice, e.g., the subject's heart rate and heart location within the slice. The proposed method is also doubly adaptive as it adapts both the acquisition and reconstruction schemes. Based on the theory of time-sequential sampling, the proposed framework explicitly accounts for speed limitations of gradient encoding and provides performance guarantees on achievable image quality. The presented in vivo results demonstrate the effectiveness and feasibility of the PARADISE method for high-resolution nongated cardiac MRI during a short breath-hold. PMID:20665794

  17. Imageability, age of acquisition, and frequency factors in acronym comprehension.

    PubMed

    Playfoot, David; Izura, Cristina

    2013-06-01

    In spite of their unusual orthographic and phonological form, acronyms (e.g., BBC, HIV, NATO) can become familiar to the reader, and their meaning can be accessed well enough that they are understood. The factors in semantic access for acronym stimuli were assessed using a word association task. Two analyses examined the time taken to generate a word association response to acronym cues. Responses were recorded more quickly to cues that elicited a large proportion of semantic responses, and those that were high in associative strength. Participants were shown to be faster to respond to cues which were imageable or early acquired. Frequency was not a significant predictor of word association responses. Implications for theories of lexical organisation are discussed. PMID:23153389

  19. Temporal optimisation of image acquisition for land cover classification with Random Forest and MODIS time-series

    NASA Astrophysics Data System (ADS)

    Nitze, Ingmar; Barrett, Brian; Cawkwell, Fiona

    2015-02-01

The analysis and classification of land cover is one of the principal applications in terrestrial remote sensing. Due to the seasonal variability of different vegetation types and land surface characteristics, the ability to discriminate land cover types changes over time. Multi-temporal classification can help to improve classification accuracies, but different constraints, such as financial restrictions or atmospheric conditions, may impede their application. The optimisation of image acquisition timing and frequencies can help to increase the effectiveness of the classification process. For this purpose, the Feature Importance (FI) measure of the state-of-the-art machine learning method Random Forest was used to determine the optimal image acquisition periods for a general (Grassland, Forest, Water, Settlement, Peatland) and Grassland-specific (Improved Grassland, Semi-Improved Grassland) land cover classification in central Ireland, based on a 9-year time-series of MODIS Terra 16 day composite data (MOD13Q1). Feature Importances for each acquisition period of the Enhanced Vegetation Index (EVI) and Normalised Difference Vegetation Index (NDVI) were calculated for both classification scenarios. In the general land cover classification, the months December and January showed the highest, and July and August the lowest, separability for both VIs over the entire nine-year period. This temporal separability was reflected in the classification accuracies, where the optimal choice of image dates outperformed the worst image date by 13% using NDVI and 5% using EVI in a mono-temporal analysis. With the addition of the next best image periods to the data input, the classification accuracies converged quickly to their limit at around 8-10 images. The binary classification schemes, using two classes only, showed a stronger seasonal dependency with a higher intra-annual, but lower inter-annual, variation.
Nonetheless anomalous weather conditions, such as the cold winter of
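The idea of ranking acquisition periods by how much a classifier relies on them can be sketched with permutation importance: shuffle one feature (one acquisition period) and measure the accuracy drop. This is an illustrative stand-in only; the toy "monthly NDVI" data, the nearest-centroid classifier (in place of the study's Random Forest), and the choice of month 0 as the informative period are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: 200 pixels x 12 "monthly NDVI" features. Only month 0
# (a hypothetical winter composite) separates the two land-cover classes.
n = 200
y = rng.integers(0, 2, n)
X = rng.normal(0.5, 0.1, (n, 12))
X[:, 0] += 0.3 * y              # informative acquisition period

def nearest_centroid_accuracy(X, y):
    """Accuracy of a nearest-centroid classifier, a simple stand-in
    for the Random Forest used in the study."""
    c0, c1 = X[y == 0].mean(0), X[y == 1].mean(0)
    pred = (np.linalg.norm(X - c1, axis=1) <
            np.linalg.norm(X - c0, axis=1)).astype(int)
    return (pred == y).mean()

base = nearest_centroid_accuracy(X, y)
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])   # destroy feature j
    importance.append(base - nearest_centroid_accuracy(Xp, y))

best_period = int(np.argmax(importance))
print(best_period)  # the informative month 0 ranks highest
```

Periods whose shuffling barely changes the accuracy (the noise months here) are the ones an acquisition plan could safely drop, which is the logic behind converging to 8-10 well-chosen images.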

  20. Magnetic resonance imaging acquisition techniques intended to decrease movement artefact in paediatric brain imaging: a systematic review.

    PubMed

    Woodfield, Julie; Kealey, Susan

    2015-08-01

    Attaining paediatric brain images of diagnostic quality can be difficult because of young age or neurological impairment. The use of anaesthesia to reduce movement in MRI increases clinical risk and cost, while CT, though faster, exposes children to potentially harmful ionising radiation. MRI acquisition techniques that aim to decrease movement artefact may allow diagnostic paediatric brain imaging without sedation or anaesthesia. We conducted a systematic review to establish the evidence base for ultra-fast sequences and sequences using oversampling of k-space in paediatric brain MR imaging. Techniques were assessed for imaging time, occurrence of movement artefact, the need for sedation, and either image quality or diagnostic accuracy. We identified 24 relevant studies. We found that ultra-fast techniques had shorter imaging acquisition times compared to standard MRI. Techniques using oversampling of k-space required equal or longer imaging times than standard MRI. Both ultra-fast sequences and those using oversampling of k-space reduced movement artefact compared with standard MRI in unsedated children. Assessment of overall diagnostic accuracy was difficult because of the heterogeneous patient populations, imaging indications, and reporting methods of the studies. In children with shunt-treated hydrocephalus there is evidence that ultra-fast MRI is sufficient for the assessment of ventricular size.

  1. A real-time satellite data acquisition, analysis and display system - A practical application of the GOES network

    NASA Technical Reports Server (NTRS)

    Sutherland, R. A.; Langford, J. L.; Bartholic, J. F.; Bill, R. G., Jr.

    1979-01-01

    A real-time satellite data acquisition, analysis and display system is described which uses analog data transmitted by telephone line over the GOES network. Results are displayed on the system color video monitor as 'thermal' images which originated from infrared surface radiation sensed by the Geostationary Operational Environmental Satellite (GOES).

  2. The experiment study of image acquisition system based on 3D machine vision

    NASA Astrophysics Data System (ADS)

    Zhou, Haiying; Xiao, Zexin; Zhang, Xuefei; Wei, Zhe

    2011-11-01

Binocular vision is one of the key technologies for three-dimensional scene reconstruction in machine vision. Important three-dimensional information can be acquired through binocular vision: two or more pictures are first captured by cameras, and the three-dimensional information contained in these pictures is then recovered through geometric and other relationships. To improve the measurement accuracy of the image acquisition system, a binocular-vision image acquisition system for three-dimensional scene reconstruction is studied in this article. Based on the parallax principle of human binocular imaging, an image acquisition system using dual optical paths and dual CCDs is proposed. Experiments determine the optimal angle between the optical axes of the two optical paths and the optimal operating distance; from these, the centre distance of the two CCDs is determined. Two images of the same scene from different viewpoints are captured by the dual CCDs, establishing a solid foundation for the subsequent three-dimensional reconstruction in later image processing. The experimental data show the rationality of this method.
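The parallax principle behind such a dual-CCD setup reduces, in the simplest rectified pinhole model, to the relation Z = f·B/d: depth is focal length times baseline (the centre distance between the CCDs) divided by the horizontal disparity between the two views. A minimal sketch, with illustrative numbers not taken from the paper:

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Depth of a point from its disparity between two rectified views:
    Z = f * B / d (pinhole model; units of Z follow the baseline)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

# f = 1200 px, baseline = 60 mm, disparity = 8 px  ->  Z = 9000 mm
print(depth_from_disparity(1200, 60, 8))  # 9000.0
```

The relation also shows why the axis angle and operating distance matter: for a fixed baseline, distant points produce small disparities, so depth resolution degrades quickly outside the optimal working range.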

  3. IMAGE FUSION OF RECONSTRUCTED DIGITAL TOMOSYNTHESIS VOLUMES FROM A FRONTAL AND A LATERAL ACQUISITION.

    PubMed

    Arvidsson, Jonathan; Söderman, Christina; Allansdotter Johnsson, Åse; Bernhardt, Peter; Starck, Göran; Kahl, Fredrik; Båth, Magnus

    2016-06-01

    Digital tomosynthesis (DTS) has been used in chest imaging as a low radiation dose alternative to computed tomography (CT). Traditional DTS shows limitations in the spatial resolution in the out-of-plane dimension. As a first indication of whether a dual-plane dual-view (DPDV) DTS data acquisition can yield a fair resolution in all three spatial dimensions, a manual registration between a frontal and a lateral image volume was performed. An anthropomorphic chest phantom was scanned frontally and laterally using a linear DTS acquisition, at 120 kVp. The reconstructed image volumes were resampled and manually co-registered. Expert radiologist delineations of the mediastinal soft tissues enabled calculation of similarity metrics in regard to delineations in a reference CT volume. The fused volume produced the highest total overlap, implying that the fused volume was a more isotropic 3D representation of the examined object than the traditional chest DTS volumes. PMID:26683464
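One common similarity metric for comparing expert delineations against a reference, such as the "total overlap" reported above, is the Dice coefficient. The abstract does not name its exact formula, so this is an assumed sketch on toy binary masks:

```python
import numpy as np

def dice_overlap(mask_a, mask_b):
    """Dice similarity coefficient between two binary delineations:
    2|A ∩ B| / (|A| + |B|), in [0, 1]."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

a = np.zeros((8, 8), int); a[2:6, 2:6] = 1   # 16-pixel square
b = np.zeros((8, 8), int); b[3:7, 3:7] = 1   # same square, shifted by 1
print(dice_overlap(a, b))  # 2*9/(16+16) = 0.5625
```

Because a more isotropic fused volume distorts the out-of-plane dimension less, its delineations should align better with the CT reference, raising exactly this kind of overlap score.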

  4. Automated system for acquisition and image processing for the control and monitoring boned nopal

    NASA Astrophysics Data System (ADS)

    Luevano, E.; de Posada, E.; Arronte, M.; Ponce, L.; Flores, T.

    2013-11-01

This paper describes the design and fabrication of a system for acquisition and image processing to control the removal of thorns from the nopal vegetable (Opuntia ficus-indica) in an automated machine that uses pulses of an Nd:YAG laser. The areolas, the areas where thorns grow on the bark of the nopal, are located by applying segmentation algorithms to the images obtained by a CCD. Once the position of the areolas is known, their coordinates are sent to a motor system that steers the laser to interact with all areolas and remove the thorns of the nopal. The electronic system comprises a video decoder, memory for image and software storage, and a digital signal processor for system control. The firmware performs acquisition, preprocessing, segmentation, recognition and interpretation of the areolas. The system succeeds in identifying areolas and generating a table of their coordinates, which is sent to the galvo motor system that controls the laser for thorn removal.
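The segmentation-to-coordinates step, thresholding the image, labelling connected spots, and emitting a table of centroids for the positioning system, can be sketched as follows. This is illustrative only (the paper's DSP firmware is not described in detail); the dark-spot threshold and 4-connectivity are assumptions.

```python
import numpy as np
from collections import deque

def areola_centroids(image, threshold=128):
    """Locate dark spots (assumed areolas) in a grayscale image and
    return their (row, col) centroids for a laser-positioning table."""
    mask = image < threshold                  # assume areolas are dark
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    centroids = []
    for r in range(h):
        for c in range(w):
            if mask[r, c] and not seen[r, c]:
                # flood-fill one connected component (4-connectivity)
                q, pix = deque([(r, c)]), []
                seen[r, c] = True
                while q:
                    y, x = q.popleft()
                    pix.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                ys, xs = zip(*pix)
                centroids.append((sum(ys) / len(pix), sum(xs) / len(pix)))
    return centroids

img = np.full((10, 10), 255, np.uint8)
img[1:3, 1:3] = 0                             # one 2x2 dark spot
img[6:9, 5:8] = 0                             # one 3x3 dark spot
print(areola_centroids(img))  # [(1.5, 1.5), (7.0, 6.0)]
```

In the machine described, each centroid in this table would be converted to galvo angles so the laser can visit every areola in turn.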

  6. New developments in electron microscopy for serial image acquisition of neuronal profiles.

    PubMed

    Kubota, Yoshiyuki

    2015-02-01

    Recent developments in electron microscopy largely automate the continuous acquisition of serial electron micrographs (EMGs), previously achieved by laborious manual serial ultrathin sectioning using an ultramicrotome and ultrastructural image capture process with transmission electron microscopy. The new systems cut thin sections and capture serial EMGs automatically, allowing for acquisition of large data sets in a reasonably short time. The new methods are focused ion beam/scanning electron microscopy, ultramicrotome/serial block-face scanning electron microscopy, automated tape-collection ultramicrotome/scanning electron microscopy and transmission electron microscope camera array. In this review, their positive and negative aspects are discussed. PMID:25564566

  7. Whole Heart Coronary Imaging with Flexible Acquisition Window and Trigger Delay

    PubMed Central

    Kawaji, Keigo; Foppa, Murilo; Roujol, Sébastien; Akçakaya, Mehmet; Nezafat, Reza

    2015-01-01

    Coronary magnetic resonance imaging (MRI) requires a correctly timed trigger delay derived from a scout cine scan to synchronize k-space acquisition with the quiescent period of the cardiac cycle. However, heart rate changes between breath-held cine and free-breathing coronary imaging may result in timing errors. Additionally, the determined trigger delay may not reflect the period of minimal motion for both left and right coronary arteries or different segments. In this work, we present a whole-heart coronary imaging approach that allows flexible selection of the trigger delay timings by performing k-space sampling over an enlarged acquisition window. Our approach addresses coronary motion in an interactive manner by allowing the operator to determine the temporal window with minimal cardiac motion for each artery region. An electrocardiogram-gated, k-space segmented 3D radial stack-of-stars sequence that employs a custom rotation angle is developed. An interactive reconstruction and visualization platform is then employed to determine the subset of the enlarged acquisition window with minimal coronary motion. Coronary MRI was acquired on eight healthy subjects (5 male, mean age = 37 ± 18 years), where an enlarged acquisition window of 166–220 ms was set 50 ms prior to the scout-derived trigger delay. Coronary visualization and sharpness scores were compared between the standard 120 ms window set at the trigger delay and those reconstructed using a manually adjusted window. The proposed method using manual adjustment was able to recover delineation of five mid and distal right coronary artery regions that were otherwise not visible from the standard window, and the sharpness scores improved in all coronary regions using the proposed method. This paper demonstrates the feasibility of a whole-heart coronary imaging approach that allows interactive selection of any subset of the enlarged acquisition window for a tailored reconstruction for each branch.

  8. Face acquisition camera design using the NV-IPM image generation tool

    NASA Astrophysics Data System (ADS)

    Howell, Christopher L.; Choi, Hee-Sue; Reynolds, Joseph P.

    2015-05-01

    In this paper, we demonstrate the utility of the Night Vision Integrated Performance Model (NV-IPM) image generation tool by using it to create a database of face images with controlled degradations. Available face recognition algorithms can then be used to directly evaluate camera designs using these degraded images. By controlling camera effects such as blur, noise, and sampling, we can analyze algorithm performance and establish a more complete performance standard for face acquisition cameras. The ability to accurately simulate imagery and directly test with algorithms not only improves the system design process but also greatly reduces development cost.

  9. A review of breast tomosynthesis. Part I. The image acquisition process

    SciTech Connect

    Sechopoulos, Ioannis

    2013-01-15

    Mammography is a very well-established imaging modality for the early detection and diagnosis of breast cancer. However, since the introduction of digital imaging to the realm of radiology, more advanced, and especially tomographic, imaging methods have been made possible. One of these methods, breast tomosynthesis, has finally been introduced to the clinic for routine everyday use, with the potential to eventually replace mammography for breast cancer screening. In this two-part paper, the extensive research performed during the development of breast tomosynthesis is reviewed, with a focus on the research addressing the medical physics aspects of this imaging modality. This first paper reviews the research performed on the issues relevant to the image acquisition process, including system design, optimization of geometry and technique, x-ray scatter, and radiation dose. The companion to this paper reviews all other aspects of breast tomosynthesis imaging, including the reconstruction process.

  10. Sequential CW-EPR image acquisition with 760-MHz surface coil array.

    PubMed

    Enomoto, Ayano; Hirata, Hiroshi

    2011-04-01

    This paper describes the development of a surface coil array, consisting of two inductively coupled surface-coil resonators, for use in continuous-wave electron paramagnetic resonance (CW-EPR) imaging at 760 MHz. To make sequential EPR image acquisition possible, we decoupled the surface coils using PIN-diode switches, enabling a shift of each resonator's resonance frequency by more than 200 MHz. To assess the effectiveness of the surface coil array in CW-EPR imaging, two-dimensional images of a solution of nitroxyl radicals were measured with the developed coil array. Compared to images acquired with an equivalent single coil, the visualized area was extended approximately 2-fold when using the surface coil array. The ability to visualize larger regions of interest through the use of a surface coil array may offer great potential in future EPR imaging studies. PMID:21320789

  11. Pain related inflammation analysis using infrared images

    NASA Astrophysics Data System (ADS)

    Bhowmik, Mrinal Kanti; Bardhan, Shawli; Das, Kakali; Bhattacharjee, Debotosh; Nath, Satyabrata

    2016-05-01

    Medical Infrared Thermography (MIT) offers a potential non-invasive, non-contact, and radiation-free imaging modality for the assessment of painful abnormal inflammation in the human body. The assessment of inflammation mainly depends on the emission of heat from the skin surface. Arthritis is a disease of joint damage that generates inflammation in one or more anatomical joints of the body. Osteoarthritis (OA) is the most frequently occurring form of arthritis, and rheumatoid arthritis (RA) is the most threatening form. In this study, inflammatory analysis has been performed on infrared images of patients suffering from RA and OA. For the analysis, a dataset of 30 bilateral knee thermograms has been captured from patients with RA and OA by following a thermogram acquisition standard. The thermograms are pre-processed, and areas of interest are extracted for further processing. The investigation of the spread of inflammation is performed along with statistical analysis of the pre-processed thermograms. The objectives of the study include: i) generation of a novel thermogram acquisition standard for inflammatory pain disease; ii) analysis of the spread of the inflammation related to RA and OA using K-means clustering; iii) first- and second-order statistical analysis of pre-processed thermograms. The conclusion reflects that, in most cases, RA-related inflammation affects both knees, whereas inflammation related to OA is present in a unilateral knee. Also, due to the spread of inflammation in OA, contralateral asymmetries are detected through the statistical analysis.
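The K-means step in objective (ii) can be sketched on one-dimensional pixel temperatures: warm inflamed pixels separate from the cooler background. A minimal 1-D K-means, with toy temperature values and k=2 as illustrative assumptions (the paper does not specify its exact clustering setup):

```python
# Minimal 1-D K-means: partition thermogram pixel temperatures into
# "inflamed" (warm) and "background" (cool) clusters.

def kmeans_1d(values, k, iters=50):
    # initialize centers spread evenly across the value range
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        new_centers = [sum(c) / len(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
        if new_centers == centers:   # converged
            break
        centers = new_centers
    return centers, clusters

# Toy skin temperatures (deg C): background near 31, inflamed region near 35
temps = [30.8, 31.0, 31.2, 31.1, 34.8, 35.0, 35.2]
centers, clusters = kmeans_1d(temps, k=2)
```

The warmer cluster would mark the candidate inflamed region whose spatial spread is then analyzed.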

  12. Analysis Method for Non-Nominal First Acquisition

    NASA Technical Reports Server (NTRS)

    Sieg, Detlef; Mugellesi-Dow, Roberta

    2007-01-01

    First, this paper describes how the launcher trajectory can be modelled for contingency analysis without much information about the launch vehicle itself. From a dense sequence of state vectors, a velocity profile is derived which is sufficiently accurate to enable the Flight Dynamics Team to integrate parts of the launcher trajectory on its own and to simulate contingency cases by modifying the velocity profile. The paper then focuses on the thorough visibility analysis which has to follow the contingency-case or burn-performance simulations. In the ideal case it is possible to identify a ground station which is able to acquire the satellite independently of the burn performance. The correlations between the burn performance and the pointing at subsequent ground stations are derived with the aim of establishing simple guidelines which can be applied quickly and which significantly improve the chance of acquisition at subsequent ground stations. In the paper the method is applied to the Soyuz/Fregat launch with the MetOp satellite. Overall, the paper shows that launcher trajectory modelling with the simulation of contingency cases, in connection with a ground-station visibility analysis, leads to a proper selection of ground stations and acquisition methods. In the MetOp case this ensured successful contact at all ground stations during the first hour after separation, without having to rely on any early orbit determination result or state vector update.
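The velocity profile derived from a dense sequence of state vectors can be sketched by finite differences over consecutive position samples. The trajectory data and function name below are toy illustrations, not MetOp/Soyuz values:

```python
# Sketch: derive a speed-vs-time profile from a dense sequence of
# (time, position) state-vector samples by finite differencing.

def velocity_profile(times, positions):
    """Return (midpoint time, speed) pairs between consecutive samples."""
    profile = []
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        dv = [(positions[i][j] - positions[i - 1][j]) / dt for j in range(3)]
        speed = sum(c * c for c in dv) ** 0.5
        profile.append((0.5 * (times[i] + times[i - 1]), speed))
    return profile

# Toy trajectory: uniform motion at 7.5 km/s along x (positions in km, t in s)
ts = [0.0, 10.0, 20.0]
xs = [[0.0, 0.0, 0.0], [75.0, 0.0, 0.0], [150.0, 0.0, 0.0]]
prof = velocity_profile(ts, xs)
```

A contingency case would then be simulated by scaling or truncating this profile before re-integrating the trajectory.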

  13. Live dynamic OCT imaging of cardiac structure and function in mouse embryos with 43 Hz direct volumetric data acquisition

    NASA Astrophysics Data System (ADS)

    Wang, Shang; Singh, Manmohan; Lopez, Andrew L.; Wu, Chen; Raghunathan, Raksha; Schill, Alexander; Li, Jiasong; Larin, Kirill V.; Larina, Irina V.

    2016-03-01

    Efficient phenotyping of cardiac dynamics in live mouse embryos has significant implications for the understanding of early mammalian heart development and congenital cardiac defects. Recent studies established optical coherence tomography (OCT) as a powerful tool for live embryonic heart imaging in various animal models. However, current four-dimensional (4D) OCT imaging of the beating embryonic heart largely relies on gated data acquisition or postacquisition synchronization, which introduces errors when cardiac cycles lack perfect periodicity and is time-consuming and computationally expensive. Here, we report direct 4D OCT imaging of the structure and function of cardiac dynamics in live mouse embryos achieved by employing a Fourier-domain mode-locked swept laser source that enables a ~1.5 MHz A-line rate. By utilizing both forward and backward scans of a resonant mirror, we obtained a ~6.4 kHz frame rate, which allows for a direct volumetric data acquisition speed of ~43 Hz, around 20 times the early-stage mouse embryonic heart rate. Our experiments were performed on mouse embryos at embryonic day 9.5. Time-resolved 3D cardiodynamics clearly show the heart structure in motion. We present analysis of cardiac wall movement and its velocity for the primitive atrium and ventricle. Our results suggest that the combination of ultrahigh-speed OCT imaging with live embryo culture could be a useful embryonic heart phenotyping approach for mouse mutants modeling human congenital heart diseases.
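The rates quoted above can be cross-checked arithmetically. The per-frame and per-volume line counts below are inferred from the stated rates, not taken from the paper:

```python
# Arithmetic check of the quoted OCT acquisition rates.

a_line_rate = 1.5e6   # A-scans per second (FDML swept source)
frame_rate = 6.4e3    # B-scans per second (forward + backward resonant scans)
volume_rate = 43.0    # volumes per second

a_lines_per_frame = a_line_rate / frame_rate    # ~234 A-lines per B-scan
frames_per_volume = frame_rate / volume_rate    # ~149 B-scans per volume

# An E9.5 mouse embryonic heart beats at roughly 2 Hz (an assumed typical
# value), so 43 Hz volumes give ~20 volumes per cardiac cycle, consistent
# with the "around 20 times" statement.
heart_rate = 2.0
volumes_per_beat = volume_rate / heart_rate
```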

  14. Towards Quantification of Functional Breast Images Using Dedicated SPECT With Non-Traditional Acquisition Trajectories

    PubMed Central

    Perez, Kristy L.; Cutler, Spencer J.; Madhav, Priti; Tornai, Martin P.

    2012-01-01

    Quantification of radiotracer uptake in breast lesions can provide valuable information to physicians in deciding patient care or determining treatment efficacy. Physical processes (e.g., scatter, attenuation), detector/collimator characteristics, sampling and acquisition trajectories, and reconstruction artifacts contribute to an incorrect measurement of absolute tracer activity and distribution. For these experiments, a cylinder with three syringes of varying radioactivity concentration, and a fillable 800 mL breast with two lesion phantoms containing aqueous 99mTc pertechnetate, were imaged using the SPECT sub-system of the dual-modality SPECT-CT dedicated breast scanner. SPECT images were collected using a compact CZT camera with various 3D acquisitions including vertical axis of rotation, 30° tilted, and complex sinusoidal trajectories. Different energy windows around the photopeak were quantitatively compared, along with appropriate scatter energy windows, to determine the best quantification accuracy after attenuation and dual-window scatter correction. Measured activity concentrations in the reconstructed images for syringes with greater than 10 µCi/mL corresponded to within 10% of the actual dose-calibrator-measured activity concentration for ±4% and ±8% photopeak energy windows. The same energy windows yielded lesion quantification results within 10% in the breast phantom as well. Results for the more complete complex sinusoidal trajectory are similar to those for the simple vertical-axis acquisition, and it additionally allows anterior chest wall sampling, no image distortion, and reasonably accurate quantification. PMID:22262925
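The dual-window scatter correction mentioned above can be sketched as follows: scatter counts inside the photopeak window are estimated from a secondary scatter window scaled by the window-width ratio, then subtracted. The window widths, counts, and scaling rule here are illustrative assumptions, not the paper's calibrated values:

```python
# Sketch of a dual-energy-window scatter correction.

def dual_window_correct(photopeak_counts, scatter_counts,
                        photopeak_width, scatter_width):
    """Estimate primary (unscattered) counts in the photopeak window."""
    scatter_estimate = scatter_counts * (photopeak_width / scatter_width)
    return max(photopeak_counts - scatter_estimate, 0.0)

# 140 keV Tc-99m photopeak with a +/-4% window (11.2 keV wide) and an
# adjacent scatter window of equal width just below the peak.
primary = dual_window_correct(photopeak_counts=10000.0, scatter_counts=1500.0,
                              photopeak_width=11.2, scatter_width=11.2)
```

Attenuation correction would then be applied to the scatter-corrected counts before comparing against the dose-calibrator activity.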

  16. Partition-based acquisition model for speed up navigated beta-probe surface imaging

    NASA Astrophysics Data System (ADS)

    Monge, Frédéric; Shakir, Dzhoshkun I.; Navab, Nassir; Jannin, Pierre

    2016-03-01

    Although gross total resection in low-grade glioma surgery leads to a better patient outcome, the in-vivo control of resection borders remains challenging. For this purpose, navigated beta-probe systems combined with 18F-based radiotracers, relying on activity distribution surface estimation, have been proposed to generate reconstructed images. Their clinical relevance has been outlined by early studies where intraoperative functional information is leveraged, although with low spatial resolution in the reconstruction. To improve reconstruction quality, multiple acquisition models have been proposed. They involve the definition of an attenuation matrix modeling the physics of radiation detection. Yet, they require high computational power for efficient intraoperative use. To address this problem, we propose a new acquisition model called the Partition Model (PM), building on an existing model where the coefficients of the matrix are taken from a look-up table (LUT). Our model divides the LUT into averaged homogeneous values for assigning attenuation coefficients. We validated our model using in vitro datasets in which tumors and peri-tumoral tissues were simulated, and compared our acquisition model with the off-the-shelf LUT and the raw method. Both acquisition models outperformed the raw method in terms of tumor contrast (7.97:1 mean T:B), though at the cost of real-time usability. Both acquisition models reached the same detection performance against references (0.8 mean AUC and 0.77 mean NCC), where PM slightly improves the mean tumor contrast, up to 10.1:1 vs 9.9:1 with the LUT model, and, more importantly, reduces the mean computation time by 7.5%. Our model gives a faster solution for intraoperative use of a navigated beta-probe surface imaging system, with improved image quality.
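The core idea of the Partition Model, dividing the LUT into partitions replaced by their average values, can be sketched minimally. The LUT values, partition count, and contiguous-partition scheme below are illustrative assumptions about what the paper describes:

```python
# Sketch: replace each contiguous partition of an attenuation LUT by its
# mean, so fewer distinct coefficients must be handled at run time.

def partition_lut(lut, n_partitions):
    """Return a LUT where each partition's entries equal the partition mean."""
    size = len(lut)
    out = []
    for p in range(n_partitions):
        start = p * size // n_partitions
        end = (p + 1) * size // n_partitions
        chunk = lut[start:end]
        mean = sum(chunk) / len(chunk)
        out.extend([mean] * len(chunk))
    return out

# Toy LUT of 8 attenuation coefficients reduced to 2 homogeneous partitions
lut = [0.10, 0.12, 0.11, 0.13, 0.50, 0.52, 0.51, 0.53]
pm_lut = partition_lut(lut, n_partitions=2)
```

Averaging trades a little coefficient fidelity for fewer distinct values, which is where the reported computation-time saving would come from.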

  17. A pairwise image analysis with sparse decomposition

    NASA Astrophysics Data System (ADS)

    Boucher, A.; Cloppet, F.; Vincent, N.

    2013-02-01

    This paper aims to detect the evolution between two images representing the same scene. The evolution detection problem has many practical applications, especially in medical imaging. Indeed, the concept of a patient "file" implies the joint analysis of different acquisitions taken at different times, and the detection of significant modifications. The research presented in this paper is carried out within the application context of computer-assisted diagnosis (CAD) applied to mammograms. It is performed on already registered pairs of images. As the registration is never perfect, we must develop a comparison method sufficiently well adapted to detect real small differences between comparable tissues. In many applications, the assessment of similarity used during the registration step is also used for the interpretation step that prompts suspicious regions. In our case, registration is assumed to match the spatial coordinates of similar anatomical elements. In this paper, in order to process the medical images at tissue level, the image representation is based on elementary patterns, therefore seeking patterns, not pixels. Besides, as the studied images have low entropy, the decomposed signal is expressed in a parsimonious way. Parsimonious representations are known to help extract the significant structures of a signal and to generate a compact version of the data. This change of representation should allow us to compare the studied images in a short time, thanks to the low weight of the images thus represented, while maintaining good representativeness. The good precision of our results shows the efficiency of the approach.

  18. Data acquisition and analysis at the Structural Biology Center

    SciTech Connect

    Westbrook, M.L.; Coleman, T.A.; Daly, R.T.; Pflugrath, J.W.

    1996-12-31

    The Structural Biology Center (SBC), a national user facility for macromolecular crystallography located at Argonne National Laboratory's Advanced Photon Source, is currently being built and commissioned. SBC facilities include a bending-magnet beamline, an insertion-device beamline, laboratory and office space adjacent to the beamlines, and associated instrumentation, experimental apparatus, and facilities. SBC technical facilities will support anomalous dispersion phasing experiments, data collection from microcrystals, data collection from crystals with large molecular structures and rapid data collection from multiple related crystal structures for protein engineering and drug design. The SBC Computing Systems and Software Engineering Group is tasked with developing the SBC Control System, which includes computing systems, network, and software. The emphasis of SBC Control System development has been to provide efficient and convenient beamline control, data acquisition, and data analysis for maximal facility and experimenter productivity. This paper describes the SBC Control System development, specifically data acquisition and analysis at the SBC, and the development methods used to meet this goal.

  19. Instrumentation for automated acquisition and analysis of TLD glow curves

    NASA Astrophysics Data System (ADS)

    Bostock, I. J.; Kennett, T. J.; Harvey, J. W.

    1991-04-01

    Instrumentation for the automated and complete acquisition of thermoluminescent dosimeter (TLD) data from a Panasonic UD-702E TLD reader is reported. The system that has been developed consists of both hardware and software components and is designed to operate with an IBM-type personal computer. Acquisition of glow curve, timing, and heating data has been integrated with elementary numerical analysis to permit real-time validity and diagnostic assessments to be made. This allows the optimization of critical parameters such as duration of the heating cycles and the time window for the integration of the dosimetry peak. The form of the Li2B4O7:Cu TLD glow curve has been studied and a mathematical representation devised to assist in the implementation of automated analysis. Differences in the shape of the curve can be used to identify dosimetry peaks due to artifacts or to identify failing components. Examples of the use of this system for quality assurance in the TLD monitoring program at McMaster University are presented.
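Integrating the dosimetry peak over a time window, one of the critical parameters mentioned above, can be sketched with a trapezoidal sum. The synthetic glow-curve samples and window bounds are illustrations, not Li2B4O7:Cu data:

```python
# Sketch: integrate the dosimetry peak of a glow curve over a time window.

def integrate_peak(times, counts, t_start, t_end):
    """Trapezoidal integral of the glow curve between t_start and t_end."""
    total = 0.0
    for i in range(1, len(times)):
        if times[i - 1] >= t_start and times[i] <= t_end:
            total += 0.5 * (counts[i - 1] + counts[i]) * (times[i] - times[i - 1])
    return total

# Synthetic glow curve sampled every second, with a peak between t=2 and t=5
ts = [0, 1, 2, 3, 4, 5, 6]
cs = [0.0, 1.0, 5.0, 9.0, 5.0, 1.0, 0.0]
dose_signal = integrate_peak(ts, cs, t_start=2, t_end=5)
```

Shifting `t_start`/`t_end` and comparing integrals is one way such a system could tune the window against artifact peaks.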

  20. Image based performance analysis of thermal imagers

    NASA Astrophysics Data System (ADS)

    Wegner, D.; Repasi, E.

    2016-05-01

    Due to advances in technology, modern thermal imagers resemble sophisticated image processing systems in functionality. Advanced signal and image processing tools enclosed in the camera body extend the basic image-capturing capability of thermal cameras, enhancing the display presentation of the captured scene or of specific scene details. Usually, the implemented methods are proprietary company expertise, distributed without extensive documentation. This makes the comparison of thermal imagers, especially from different companies, a difficult task (or at least a very time-consuming and expensive one, e.g., requiring the execution of a field trial and/or an observer trial). A thermal camera equipped with turbulence mitigation capability is an example of such a closed system. The Fraunhofer IOSB has started to build a system for testing thermal imagers by image-based methods in the lab environment. This will extend our capability of measuring the classical IR-system parameters (e.g., MTF, MTDP) in the lab. The system is set up around an IR scene projector, which is necessary for the thermal display (projection) of an image sequence for the IR camera under test. The same set of thermal test sequences can be presented to every unit under test; for turbulence mitigation tests, this could be, e.g., the same turbulence sequence. During system tests, gradual variation of input parameters (e.g., thermal contrast) can be applied. First ideas on test scene selection, and on how to assemble an imaging suite (a set of image sequences) for the analysis of thermal imaging systems containing such black boxes in the image-forming path, are discussed.

  1. A flexible high-rate USB2 data acquisition system for PET and SPECT imaging

    SciTech Connect

    J. Proffitt, W. Hammond, S. Majewski, V. Popov, R.R. Raylman, A.G. Weisenberger, R. Wojcik

    2006-02-01

    A new flexible data acquisition system has been developed to instrument gamma-ray imaging detectors designed by the Jefferson Lab Detector and Imaging Group. Hardware consists of 16-channel data acquisition modules installed on USB2 carrier boards. Carriers have been designed to accept one, two, and four modules. Application trigger rate and channel density determine the number of acquisition boards and readout computers used. Each channel has an independent trigger, gated integrator and a 2.5 MHz 12-bit ADC. Each module has an FPGA for analog control and signal processing. Processing includes a 5 ns 40-bit trigger time stamp and programmable triggering, gating, ADC timing, offset and gain correction, charge and pulse-width discrimination, sparsification, event counting, and event assembly. The carrier manages global triggering and transfers module data to a USB buffer. High-granularity time-stamped triggering is suitable for modular detectors. Time-stamped events permit dynamic studies, complex offline event assembly, and high-rate distributed data acquisition. A sustained USB data rate of 20 Mbytes/s, a sustained trigger rate of 300 kHz for 32 channels, and a peak trigger rate of 2.5 MHz to FIFO memory were achieved. Different trigger, gating, processing, and event assembly techniques were explored. Target applications include >100 kHz coincidence rate PET detectors, dynamic SPECT detectors, and miniature and portable gamma detectors for small-animal and clinical use.
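The offline event assembly enabled by the 5 ns, 40-bit time stamps can be sketched as grouping channel hits whose stamps fall within a coincidence window. The field layout, window width, and hit values below are assumptions for illustration, not the system's actual firmware logic:

```python
# Sketch: assemble time-stamped hits into coincidence events.

TICK_NS = 5            # one time-stamp tick = 5 ns
ROLLOVER = 1 << 40     # the 40-bit counter rolls over after ~1.5 hours

def assemble_events(hits, window_ns=50):
    """Group (timestamp_ticks, channel) hits into coincidence events."""
    hits = sorted(hits)
    events, current = [], []
    for ts, ch in hits:
        if current and (ts - current[0][0]) * TICK_NS > window_ns:
            events.append(current)
            current = []
        current.append((ts, ch))
    if current:
        events.append(current)
    return events

# Two coincident hits on different channels (25 ns apart), one isolated hit
hits = [(1000, 3), (1005, 7), (100000, 3)]
events = assemble_events(hits)
```

For PET, the two-hit event would be kept as a coincidence pair and the singleton discarded.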

  2. Liquid crystal materials and structures for image processing and 3D shape acquisition

    NASA Astrophysics Data System (ADS)

    Garbat, K.; Garbat, P.; Jaroszewicz, L.

    2012-03-01

    Image processing supported by liquid crystal devices has been used in numerous imaging applications, including polarization imaging, digital holography, and programmable imaging. Liquid crystals have been extensively studied and are widely used in display and optical processing technology. We present here the main relevant parameters of liquid crystals for image processing and 3D shape acquisition, and we compare the main liquid crystal options that can be used, with their respective advantages. We compare the performance of several types of liquid crystal materials: nematic mixtures with high and medium optical and dielectric anisotropies and relatively low rotational viscosities, which may operate in TN mode in mono- and dual-frequency addressing systems.

  3. Fast compressive measurements acquisition using optimized binary sensing matrices for low-light-level imaging.

    PubMed

    Ke, Jun; Lam, Edmund Y

    2016-05-01

    Compressive measurements benefit low-light-level imaging (L3-imaging) due to the significantly improved measurement signal-to-noise ratio (SNR). However, as with other compressive imaging (CI) systems, compressive L3-imaging is slow. To accelerate the data acquisition, we develop an algorithm to compute the optimal binary sensing matrix that can minimize the image reconstruction error. First, we make use of the measurement SNR and the reconstruction mean square error (MSE) to define the optimal gray-value sensing matrix. Then, we construct an equality-constrained optimization problem to solve for a binary sensing matrix. From several experimental results, we show that the latter delivers a similar reconstruction performance as the former, while having a smaller dynamic range requirement to system sensors.
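A much simpler stand-in for the paper's equality-constrained optimization is thresholding each entry of a gray-valued sensing matrix at the global mean; this illustrates only the binarization idea and the measurement model y = Ax, with toy matrix values:

```python
# Sketch: binarize a gray-valued sensing matrix, then take compressive
# measurements y = A x with the binary matrix.

def binarize(matrix):
    """Threshold a gray-valued sensing matrix at its global mean."""
    flat = [v for row in matrix for v in row]
    mean = sum(flat) / len(flat)
    return [[1 if v >= mean else 0 for v in row] for row in matrix]

# Toy 2x4 gray-valued sensing matrix (M=2 measurements of an N=4 signal)
A_gray = [[0.9, 0.1, 0.7, 0.2],
          [0.3, 0.8, 0.1, 0.6]]
A_bin = binarize(A_gray)

# Compressive measurement of a toy signal
x = [1.0, 2.0, 3.0, 4.0]
y = [sum(a * xi for a, xi in zip(row, x)) for row in A_bin]
```

Binary entries are what make fast optical implementation (e.g., on a digital micromirror device) practical, which is why the paper seeks a binary matrix matching the gray-valued one's reconstruction performance.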

  4. An application-specific integrated circuit and data acquisition system for digital X-ray imaging

    NASA Astrophysics Data System (ADS)

    Beuville, E.; Cederström, B.; Danielsson, M.; Luo, L.; Nygren, D.; Oltman, E.; Vestlund, J.

    1998-02-01

    We have developed an Application Specific Integrated Circuit (ASIC) and data acquisition system for digital X-ray imaging. The chip consists of 16 parallel channels, each containing a preamplifier, shaper, comparator and a 16-bit counter. We have demonstrated noiseless single-photon counting above a threshold of 7.2 keV using silicon detectors and are presently capable of maximum counting rates of 2 MHz per channel. The ASIC is controlled by a personal computer through a commercial PCI card, which is also used for data acquisition. The contents of the 16-bit counters are loaded into a shift register and transferred to the PC at any time at a rate of 20 MHz. The system is simple, low cost, and high performance, and is optimised for digital X-ray imaging applications.
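The per-channel counting logic can be sketched in a few lines: each pulse above the comparator threshold increments a 16-bit counter, which wraps like the hardware register. The pulse heights below are illustrative, not measured data:

```python
# Sketch: threshold comparator feeding a wrapping 16-bit counter,
# as in one channel of the photon-counting ASIC.

def count_photons(pulse_heights_keV, threshold_keV=7.2):
    """Count pulses above threshold, wrapping at 16 bits like the register."""
    count = 0
    for h in pulse_heights_keV:
        if h > threshold_keV:
            count = (count + 1) & 0xFFFF   # 16-bit wrap-around
    return count

# Noise pulses below 7.2 keV are rejected; X-ray pulses above are counted
pulses = [2.1, 8.0, 17.4, 5.0, 22.0, 7.1]
n = count_photons(pulses)
```

Setting the threshold above the electronic noise floor is what makes the counting effectively noiseless.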

  5. Knowledge Acquisition, Validation, and Maintenance in a Planning System for Automated Image Processing

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.

    1996-01-01

    A key obstacle hampering the fielding of AI planning applications is the considerable expense of developing, verifying, updating, and maintaining the planning knowledge base (KB). Planning systems must be able to compare favorably in terms of software lifecycle costs to other means of automation such as scripts or rule-based expert systems. This paper describes a planning application for automated image processing and our overall approach to knowledge acquisition for this application.

  6. Diffusion weighted inner volume imaging of lumbar disks based on turbo-STEAM acquisition.

    PubMed

    Hiepe, Patrick; Herrmann, Karl-Heinz; Ros, Christian; Reichenbach, Jürgen R

    2011-09-01

    A magnetic resonance imaging (MRI) technique for diffusion weighted imaging (DWI) is described which, in contrast to echo planar imaging (EPI), is insensitive to off-resonance effects caused by tissue susceptibility differences, magnetic field inhomogeneities, or chemical shifts. The sequence combines a diffusion weighted (DW) spin-echo preparation and a stimulated echo acquisition mode (STEAM) module. Inner volume imaging (IVI) allows reduced rectangular field-of-view (FoV) in the phase encode direction, while suppressing aliasing artifacts that are usually the consequence of reduced FoVs. Sagittal turbo-STEAM images of the lumbar spine were acquired at 3.0T with 2.0 × 2.0 mm² in-plane resolution and 7 mm slice thickness with acquisition times of 407 ms per image. To calculate the apparent diffusion coefficient (ADC) in lumbar intervertebral disks (IVDs), the DW gradients were applied in three orthogonal gradient directions with b-values of 0 and 300 s/mm². For initial assessment of the ADC of normal and abnormal IVDs a pilot study with 8 subjects was performed. Mean ADC values of all normal IVDs were (2.27±0.40)×10⁻³ mm²/s and (1.89±0.34)×10⁻³ mm²/s for turbo-STEAM IVI and SE-EPI acquisition, respectively. Corresponding mean ADC values, averaged over all abnormal disks, were (1.93±0.39)×10⁻³ mm²/s and (1.51±0.46)×10⁻³ mm²/s, respectively, indicating a substantial ADC decrease (p<0.001).
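The ADC values reported above follow from the two b-values used (b = 0 and 300 s/mm²) via the standard mono-exponential model, ADC = ln(S0/Sb)/(b - b0). The signal values in this worked example are synthetic, chosen to reproduce the reported normal-disk mean:

```python
# Worked example of the two-point ADC calculation.

import math

def adc(s0, sb, b=300.0, b0=0.0):
    """Apparent diffusion coefficient in mm^2/s from two DW signals."""
    return math.log(s0 / sb) / (b - b0)

# Synthetic signals giving an ADC equal to the reported normal-disk
# turbo-STEAM mean of 2.27e-3 mm^2/s
s0 = 1000.0
sb = s0 * math.exp(-300.0 * 2.27e-3)
adc_value = adc(s0, sb)
```

With three orthogonal diffusion directions, the per-direction ADCs would typically be averaged into a mean diffusivity.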

  7. 3D Image Acquisition System Based on Shape from Focus Technique

    PubMed Central

    Billiot, Bastien; Cointault, Frédéric; Journaux, Ludovic; Simon, Jean-Claude; Gouton, Pierre

    2013-01-01

    This paper describes the design of a 3D image acquisition system dedicated to natural complex scenes composed of randomly distributed objects with spatial discontinuities. In agronomic sciences, the 3D acquisition of natural scenes is difficult due to their complex nature. Our system is based on the Shape from Focus technique, initially used in the microscopic domain. We propose to adapt this technique to the macroscopic domain, and we detail the system as well as the image processing used to perform the technique. Shape from Focus is a monocular and passive 3D acquisition method that resolves the occlusion problem affecting multi-camera systems; this problem occurs frequently in natural complex scenes such as agronomic scenes. The depth information is obtained by acting on optical parameters, mainly the depth of field. A focus measure is applied to a 2D image stack previously acquired by the system; once this focus measure is computed, the depth map of the scene can be created. PMID:23591964
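The depth-map step of Shape from Focus can be sketched as an argmax over the stack: for each pixel, the slice where the focus measure peaks gives the depth. The stack values below stand in for an already-computed focus measure (e.g., a local contrast operator), which the paper's actual measure would replace:

```python
# Sketch: Shape-from-Focus depth map as a per-pixel argmax of a focus
# measure across the image stack.

def depth_map(stack):
    """stack[k][y][x] holds the focus measure for slice k at pixel (y, x).
    Returns, per pixel, the slice index of maximal focus (the depth)."""
    depth = []
    n_slices = len(stack)
    h, w = len(stack[0]), len(stack[0][0])
    for y in range(h):
        row = []
        for x in range(w):
            best = max(range(n_slices), key=lambda k: stack[k][y][x])
            row.append(best)
        depth.append(row)
    return depth

# 3-slice stack over a 2x2 image: left pixels focus in slice 0, right in slice 2
stack = [
    [[9.0, 1.0], [9.0, 1.0]],   # slice 0
    [[3.0, 3.0], [3.0, 3.0]],   # slice 1
    [[1.0, 9.0], [1.0, 9.0]],   # slice 2
]
dm = depth_map(stack)
```

Mapping each slice index to the known focus distance of that acquisition converts this index map into metric depth.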

  8. The Power of Imageability: How the Acquisition of Inflected Forms Is Facilitated in Highly Imageable Verbs and Nouns in Czech Children

    ERIC Educational Resources Information Center

    Smolík, Filip; Kríž, Adam

    2015-01-01

    Imageability is the ability of words to elicit mental sensory images of their referents. Recent research has suggested that imageability facilitates the processing and acquisition of inflected word forms. The present study examined whether inflected word forms are acquired earlier in highly imageable words in Czech children. Parents of 317…

  9. High-Speed MALDI-TOF Imaging Mass Spectrometry: Rapid Ion Image Acquisition and Considerations for Next Generation Instrumentation

    PubMed Central

    Spraggins, Jeffrey M.; Caprioli, Richard M.

    2012-01-01

    A prototype matrix-assisted laser desorption ionization time-of-flight (MALDI-TOF) mass spectrometer has been used for high-speed ion image acquisition. The instrument incorporates a Nd:YLF solid state laser capable of pulse repetition rates up to 5 kHz and continuous laser raster sampling for high-throughput data collection. Lipid ion images of a sagittal rat brain tissue section were collected in 10 min with an effective acquisition rate of roughly 30 pixels/s. These results represent more than a 10-fold increase in throughput compared with current commercially available instrumentation. Experiments aimed at improving conditions for continuous laser raster sampling for imaging are reported, highlighting proper laser repetition rates and stage velocities to avoid signal degradation from significant oversampling. As new high spatial resolution and large sample area applications present themselves, the development of high-speed microprobe MALDI imaging mass spectrometry is essential to meet the needs of those seeking new technologies for rapid molecular imaging. PMID:21953043

  10. Evolutionary analysis of iron (Fe) acquisition system in Marchantia polymorpha.

    PubMed

    Lo, Jing-Chi; Tsednee, Munkhtsetseg; Lo, Ying-Chu; Yang, Shun-Chung; Hu, Jer-Ming; Ishizaki, Kimitsune; Kohchi, Takayuki; Lee, Der-Chuen; Yeh, Kuo-Chen

    2016-07-01

To acquire appropriate iron (Fe), vascular plants have developed two unique strategies: the reduction-based strategy I of nongraminaceous plants for Fe²⁺ and the chelation-based strategy II of graminaceous plants for Fe³⁺. However, the mechanism of Fe uptake in bryophytes, the earliest-diverging branch of land plants and one dominant in the gametophyte generation, is less clear. Fe isotope fractionation analysis demonstrated that the liverwort Marchantia polymorpha uses reduction-based Fe acquisition. Enhanced activities of ferric chelate reductase and proton ATPase were detected under Fe-deficient conditions. However, M. polymorpha did not show mugineic acid family phytosiderophores, the key components of strategy II, or their precursor nicotianamine. Five ZIP (ZRT/IRT-like protein) homologs were identified and speculated to be involved in Fe uptake in M. polymorpha. MpZIP3 knockdown conferred reduced growth under Fe-deficient conditions, and MpZIP3 overexpression increased Fe content under excess Fe. Thus, a nonvascular liverwort, M. polymorpha, uses strategy I for Fe acquisition. This system may have been acquired in the common ancestor of land plants and co-opted from the gametophyte to the sporophyte generation in the evolution of land plants. PMID:26948158

  11. Implementation of a laser beam analyzer using the image acquisition card IMAQ (NI)

    NASA Astrophysics Data System (ADS)

Rojas-Laguna, R.; Avila-Garcia, M. S.; Alvarado-Mendez, Edgar; Andrade-Lucio, Jose A.; Ibarra-Manzano, O. G.; Torres-Cisneros, Miguel; Castro-Sanchez, R.; Estudillo-Ayala, J. M.; Ibarra-Escamilla, Baldemar

    2001-08-01

In this work we present the implementation of a laser beam analyzer. The software was designed on the LabView platform using the Image Acquisition Card IMAQ from National Instruments. The objective is to develop a graphic interface that includes image processing tools such as brightness and contrast enhancement, morphological operations, and quantification of dimensions. One application of this graphic interface is as a laser beam analyzer of medium cost that is versatile, precise, and easily reconfigurable in this programming environment.

  12. Evaluation of Acquisition Strategies for Image-Based Construction Site Monitoring

    NASA Astrophysics Data System (ADS)

    Tuttas, S.; Braun, A.; Borrmann, A.; Stilla, U.

    2016-06-01

Construction site monitoring is an essential task for keeping track of the ongoing construction work and providing up-to-date information for a Building Information Model (BIM). The BIM contains the as-planned states (geometry, schedule, costs, ...) of a construction project. For updating, the as-built state has to be acquired repeatedly and compared to the as-planned state. In the approach presented here, a 3D representation of the as-built state is calculated from photogrammetric images using multi-view stereo reconstruction. On construction sites one has to cope with several difficulties such as security aspects, limited accessibility, occlusions and construction activity. Different acquisition strategies and techniques, namely (i) terrestrial acquisition with a hand-held camera, (ii) aerial acquisition using an Unmanned Aerial Vehicle (UAV) and (iii) acquisition using a fixed stereo camera pair at the boom of the crane, are tested on three test sites. They are assessed considering the special needs of the monitoring tasks and the limitations on construction sites. The three scenarios are evaluated based on the potential for automation, the required acquisition effort, the necessary equipment and its maintenance, the disturbance of the construction work, and the accuracy and completeness of the resulting point clouds. Based on the experiences during the test cases, the following conclusions can be drawn: terrestrial acquisition has the lowest requirements on the device setup but lacks automation and coverage; the crane camera shows the lowest flexibility but the highest grade of automation; the UAV approach can provide the best coverage by combining nadir and oblique views, but can be limited by obstacles and security aspects. The accuracy of the point clouds is evaluated based on plane fitting of selected building parts. The RMS errors of the fitted parts range from 1 to a few cm for the UAV and the hand-held scenario. First results show that the crane camera
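The plane-fitting evaluation mentioned above can be sketched with a standard SVD-based least-squares fit; this is a generic approach under the assumption that accuracy is measured as the RMS of point-to-plane residuals, not the authors' exact tooling:

```python
import numpy as np

def fit_plane_rms(points):
    """Fit a least-squares plane through 3D points and return the RMS of
    the point-to-plane distances, as used to assess point-cloud accuracy."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The plane normal is the right singular vector with the smallest
    # singular value of the centered point matrix.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    dists = (pts - centroid) @ normal       # signed point-to-plane distances
    return np.sqrt(np.mean(dists ** 2))

# A wall-like patch with ~1 cm of measurement noise (units: metres)
rng = np.random.default_rng(1)
x, y = rng.uniform(0, 5, (2, 500))
z = 0.2 * x + 0.1 * y + rng.normal(0, 0.01, 500)
rms = fit_plane_rms(np.column_stack([x, y, z]))   # ~0.01 m, i.e. ~1 cm
```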

  13. Automated acquisition and analysis of airway surface liquid height by confocal microscopy

    PubMed Central

    Choi, Hyun-Chul; Kim, Christine Seul Ki

    2015-01-01

    The airway surface liquid (ASL) is a thin-liquid layer that lines the luminal side of airway epithelia. ASL contains many molecules that are involved in primary innate defense in the lung. Measurement of ASL height on primary airway cultures by confocal microscopy is a powerful tool that has enabled researchers to study ASL physiology and pharmacology. Previously, ASL image acquisition and analysis were performed manually. However, this process is time and labor intensive. To increase the throughput, we have developed an automatic ASL measurement technique that combines a fully automated confocal microscope with novel automatic image analysis software that was written with image processing techniques derived from the computer science field. We were able to acquire XZ ASL images at the rate of ∼1 image/s in a reproducible fashion. Our automatic analysis software was able to analyze images at the rate of ∼32 ms/image. As proofs of concept, we generated a time course for ASL absorption and a dose response in the presence of SPLUNC1, a known epithelial sodium channel inhibitor, on human bronchial epithelial cultures. Using this approach, we determined the IC50 for SPLUNC1 to be 6.53 μM. Furthermore, our technique successfully detected a difference in ASL height between normal and cystic fibrosis (CF) human bronchial epithelial cultures and detected changes in ATP-stimulated Cl−/ASL secretion. We conclude that our automatic ASL measurement technique can be applied for repeated ASL height measurements with high accuracy and consistency and increased throughput. PMID:26001773
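The core height measurement on an XZ confocal image can be sketched as a threshold-and-count operation; this is a simplified stand-in for the paper's analysis software, and the threshold rule and pixel geometry here are assumptions:

```python
import numpy as np

def asl_height(xz_image, threshold, z_step_um):
    """Estimate airway surface liquid height from an XZ confocal image:
    threshold the dye signal, count liquid pixels per column, and convert
    pixel counts to microns via the z step size."""
    mask = xz_image > threshold                   # liquid pixels
    column_heights = mask.sum(axis=0) * z_step_um # per-column height (um)
    return column_heights.mean()                  # mean ASL height (um)

# Synthetic image: a 7-pixel-thick liquid layer with a 0.5 um z step
img = np.zeros((50, 100))
img[10:17, :] = 1.0
h = asl_height(img, threshold=0.5, z_step_um=0.5)   # -> 3.5 um
```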

  14. An Improved Susceptibility Weighted Imaging Method using Multi-Echo Acquisition

    PubMed Central

    Oh, Sung Suk; Oh, Se-Hong; Nam, Yoonho; Han, Dongyeob; Stafford, Randall B.; Hwang, Jinyoung; Kim, Dong-Hyun; Park, HyunWook; Lee, Jongho

    2013-01-01

Purpose: To introduce novel acquisition and post-processing approaches for susceptibility weighted imaging (SWI) to remove background field inhomogeneity artifacts in both magnitude and phase data. Method: The proposed method acquires three echoes in a 3D gradient echo (GRE) sequence, with a field compensation gradient (z-shim gradient) applied to the third echo. The artifacts in the magnitude data are compensated by signal estimation from all three echoes. The artifacts in phase signals are removed by modeling the background phase distortions using Gaussians. The method was applied in vivo and compared with conventional SWI. Results: The method successfully compensates for background field inhomogeneity artifacts in magnitude and phase images, and demonstrated improved SWI images. In particular, vessels in the frontal lobe, which were not observed in conventional SWI, were identified with the proposed method. Conclusion: The new method improves image quality in SWI by restoring signal in the frontal and temporal regions. PMID:24105838

  15. Target-acquisition performance in undersampled infrared imagers: static imagery to motion video.

    PubMed

    Krapels, Keith; Driggers, Ronald G; Teaney, Brian

    2005-11-20

    In this research we show that the target-acquisition performance of an undersampled imager improves with sensor or target motion. We provide an experiment designed to evaluate the improvement in observer performance as a function of target motion rate in the video. We created the target motion by mounting a thermal imager on a precision two-axis gimbal and varying the sensor motion rate from 0.25 to 1 instantaneous field of view per frame. A midwave thermal imager was used to permit short integration times and remove the effects of motion blur. It is shown that the human visual system performs a superresolution reconstruction that mitigates some aliasing and provides a higher (than static imagery) effective resolution. This process appears to be relatively independent of motion velocity. The results suggest that the benefits of superresolution reconstruction techniques as applied to imaging systems with motion may be limited. PMID:16318174

  16. Cardiac imaging with multi-sector data acquisition in volumetric CT: variation of effective temporal resolution and its potential clinical consequences

    NASA Astrophysics Data System (ADS)

    Tang, Xiangyang; Hsieh, Jiang; Taha, Basel H.; Vass, Melissa L.; Seamans, John L.; Okerlund, Darin R.

    2009-02-01

With the increasing longitudinal detector dimension available in diagnostic volumetric CT, the step-and-shoot scan is becoming popular for cardiac imaging. In comparison to the helical scan, the step-and-shoot scan decouples patient table movement from cardiac gating/triggering, which facilitates cardiac imaging via multi-sector data acquisition, as well as the management of inter-cycle heart-beat variation (arrhythmia) and radiation dose efficiency. Ideally, a multi-sector data acquisition can improve temporal resolution by a factor equal to the number of sectors (best scenario). In reality, however, the effective temporal resolution is jointly determined by gantry rotation speed and patient heart rate, and may be significantly lower than the ideal, down to no improvement at all (worst scenario). Hence, it is clinically relevant to investigate the behavior of the effective temporal resolution in cardiac imaging with multi-sector data acquisition. In this study, a 5-second cine scan of a porcine heart, which cascades 6 porcine cardiac cycles, is acquired. In addition to theoretical analysis and a motion phantom study, the clinical consequences of the effective temporal resolution variation are evaluated qualitatively or quantitatively. By employing a 2-sector image reconstruction strategy, a total of 15 cases (the C(6, 2) combinations of cardiac cycle pairs) between the best and worst scenarios are studied, providing informative guidance for the design and optimization of CT cardiac imaging in volumetric CT with multi-sector data acquisition.
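The dependence of the effective temporal window on rotation time and heart rate can be illustrated with a toy model; it assumes conjugate rays are equivalent (180° periodicity) and ignores the fan angle, so it is a sketch of the best/worst-scenario behavior, not the paper's analysis:

```python
import numpy as np

def sector_angles(rot_time_s, rr_s, n_sectors):
    """Gantry start angle (deg, mod 180) at the same cardiac phase in
    consecutive cardiac cycles. Multi-sector gating only helps when these
    angles split the short-scan range evenly; when they coincide,
    temporal resolution is not improved."""
    return np.array([(360.0 * k * rr_s / rot_time_s) % 180.0
                     for k in range(n_sectors)])

def effective_window(rot_time_s, rr_s, n_sectors):
    """Effective temporal window of an n-sector reconstruction (toy model):
    the largest angular gap between sector start angles, converted to time.
    Best case rot_time/(2n); worst case rot_time/2 (no improvement)."""
    angles = np.sort(sector_angles(rot_time_s, rr_s, n_sectors))
    gaps = np.diff(np.append(angles, angles[0] + 180.0))
    return rot_time_s * gaps.max() / 360.0

# 0.4 s rotation: at RR = 0.5 s the two sectors interleave perfectly
best = effective_window(0.4, 0.5, 2)    # rot/4 = 0.1 s
# at RR = 0.4 s the two sectors coincide: no gain over single-sector
worst = effective_window(0.4, 0.4, 2)   # rot/2 = 0.2 s
```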

  17. A high speed PC-based data acquisition and control system for positron imaging

    NASA Astrophysics Data System (ADS)

    Leadbeater, T. W.; Parker, D. J.

    2009-06-01

A modular positron camera with a flexible geometry suitable for performing Positron Emission Particle Tracking (PEPT) studies on a wide range of applications has been constructed. The demand for the high-speed list-mode data storage required for these experiments has motivated the development of an improved data acquisition system to support the existing detectors. A high-speed PC-based data acquisition system is presented. This device replaces the old dedicated hardware with a compact, flexible device with the same functionality and superior performance. Data acquisition rates of up to 80 MBytes per second allow coincidence data to be saved to disk for real-time analysis or post-processing. The system supports the storage of time information with half-millisecond resolution and provides support for remote trigger data. Control of the detector system is provided by high-level software running on the same computer.

  18. Onboard utilization of ground control points for image correction. Volume 2: Analysis and simulation results

    NASA Technical Reports Server (NTRS)

    1981-01-01

An approach to remote sensing that meets future mission requirements was investigated. The deterministic acquisition of data and the rapid correction of data for radiometric effects and image distortions are the most critical limitations of remote sensing. The following topics are discussed: onboard image correction systems, GCP navigation system simulation, GCP analysis, and image correction analysis.

  19. Axially elongated field-free point data acquisition in magnetic particle imaging.

    PubMed

    Kaethner, Christian; Ahlborg, Mandy; Bringout, Gael; Weber, Matthias; Buzug, Thorsten M

    2015-02-01

Magnetic particle imaging (MPI) is a new imaging technique offering an excellent ability to detect iron-oxide-based nanoparticle accumulations in vivo. The excitation of the particles, and in turn the signal generation in MPI, is achieved using oscillating magnetic fields. To realize spatial encoding, a field-free point (FFP) is steered through the field of view (FOV). Such positioning of the FFP can be achieved by mechanical or electromagnetic movement. Conventionally, the data acquisition path is either a planar 2-D or a 3-D FFP trajectory. For human applications, the size of the FOV sampled by such trajectories is strongly limited by heating of the body and by nerve stimulation. In this work, a new approach to acquiring MPI data, based on the axial elongation of a 2-D FFP trajectory, is proposed. It is shown that such an elongation can be used as a data acquisition path to significantly increase the acquisition speed, with negligible loss of spatial resolution.

  20. Spotlight-8 Image Analysis Software

    NASA Technical Reports Server (NTRS)

    Klimek, Robert; Wright, Ted

    2006-01-01

    Spotlight is a cross-platform GUI-based software package designed to perform image analysis on sequences of images generated by combustion and fluid physics experiments run in a microgravity environment. Spotlight can perform analysis on a single image in an interactive mode or perform analysis on a sequence of images in an automated fashion. Image processing operations can be employed to enhance the image before various statistics and measurement operations are performed. An arbitrarily large number of objects can be analyzed simultaneously with independent areas of interest. Spotlight saves results in a text file that can be imported into other programs for graphing or further analysis. Spotlight can be run on Microsoft Windows, Linux, and Apple OS X platforms.

  1. The impact of cine EPID image acquisition frame rate on markerless soft-tissue tracking

    SciTech Connect

    Yip, Stephen Rottmann, Joerg; Berbeco, Ross

    2014-06-15

Purpose: Although reduction of the cine electronic portal imaging device (EPID) acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion, leading to poor autotracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz with an amorphous silicon portal imager (AS1000, Varian Medical Systems, Palo Alto, CA). The maximum frame rate of 12.87 Hz is imposed by the EPID. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for autotracking. The difference between the programmed and autotracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Additionally, lung tumors on 1747 frames acquired at 11 field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging; here δ was defined by the position difference of the two tracking methods. Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the autotracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible, with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient studies (R = 0.72). Moderate to poor correlation was found between image noise and tracking error, with R = −0.58 and −0.19 for the phantom and patient studies, respectively. Conclusions: Cine EPID
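The frame-averaging trade-off at the heart of this study can be sketched numerically: averaging N consecutive frames lowers the effective frame rate by N (12.87 Hz → ~4.29 Hz for N = 3) and reduces noise roughly as 1/√N, while blurring any inter-frame motion. The code below shows only the noise side on pure-noise frames; variable names are illustrative:

```python
import numpy as np

def average_frames(frames, n_avg):
    """Continuous frame averaging: replace every n_avg consecutive frames
    by their mean, reducing noise (~1/sqrt(n_avg)) at the cost of motion
    blur for moving soft tissue."""
    frames = np.asarray(frames, dtype=float)
    n = (len(frames) // n_avg) * n_avg          # drop any trailing remainder
    return frames[:n].reshape(-1, n_avg, *frames.shape[1:]).mean(axis=1)

rng = np.random.default_rng(0)
raw = rng.normal(0, 1.0, (120, 16, 16))   # pure-noise frames, sigma = 1
avg = average_frames(raw, 3)              # 12.87 Hz -> ~4.29 Hz
noise_raw = raw.std()                     # ~1.0
noise_avg = avg.std()                     # ~1/sqrt(3) ~ 0.58
```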

  2. Lossy cardiac x-ray image compression based on acquisition noise

    NASA Astrophysics Data System (ADS)

    de Bruijn, Frederik J.; Slump, Cornelis H.

    1997-05-01

In lossy medical image compression, the requirements for the preservation of diagnostic integrity cannot be easily formulated in terms of a perceptual model, especially since, in reality, human visual perception depends on numerous factors such as the viewing conditions and psycho-visual factors. Therefore, we investigate the possibility of developing alternative measures for data loss based on the characteristics of the acquisition system, in our case a digital cardiac imaging system. In general, due to the low exposure, cardiac x-ray images tend to be relatively noisy. The main noise contributions are quantum noise and electrical noise. The electrical noise is not correlated with the signal. In addition, the signal can be transformed such that the correlated Poisson-distributed quantum noise is transformed into an additional zero-mean Gaussian noise source which is uncorrelated with the signal. Furthermore, the system's modulation transfer function imposes a known spatial-frequency limitation on the output signal. Under the assumption that noise which is not correlated with the signal contains no diagnostic information, we have derived a compression measure based on the acquisition parameters of a digital cardiac imaging system. The measure is used for bit-assignment and quantization of transform coefficients. We present a blockwise-DCT compression algorithm which is based on the conventional JPEG standard. However, the bit-assignment to the transform coefficients is now determined by an assumed noise variance for each coefficient, for a given set of acquisition parameters. Experiments with the algorithm indicate that a bit rate of 0.6 bit/pixel is feasible without apparent loss of clinical information.
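The noise-driven bit-assignment idea can be sketched on a single 8×8 block. The rule "quantization step proportional to the assumed per-coefficient noise std" is a toy stand-in for the paper's derivation, and the noise map is invented; the point is that coefficients dominated by noise get coarse steps, since spending bits there preserves no diagnostic information:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix (rows are basis vectors)."""
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def quantize_block(block, noise_std):
    """Quantize an 8x8 block's DCT coefficients with per-coefficient step
    sizes tied to an assumed acquisition-noise standard deviation, then
    dequantize and invert the transform."""
    C = dct_matrix(8)
    coeffs = C @ block @ C.T
    step = 2.0 * noise_std            # toy rule: step ~ noise amplitude
    q = np.round(coeffs / step)
    return C.T @ (q * step) @ C

rng = np.random.default_rng(0)
clean = np.outer(np.linspace(0, 1, 8), np.linspace(0, 1, 8)) * 100  # smooth ramp
noise_std = np.full((8, 8), 1.0)
noise_std[4:, :] = 50.0    # assume high-frequency rows are noise-dominated
recon = quantize_block(clean + rng.normal(0, 1.0, (8, 8)), noise_std)
err = np.abs(recon - clean).mean()   # small: the ramp lives in low frequencies
```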

  3. Design and characterization of an image acquisition system and its optomechanical module for chip defects inspection on chip sorters

    NASA Astrophysics Data System (ADS)

    Chen, Ming-Fu; Huang, Po-Hsuan; Chen, Yung-Hsiang; Cheng, Yu-Cheng

    2011-08-01

The chip sorter is one of the packaging facilities in a chip factory. Defects occur in a few chips during the manufacturing processes. If the size of a chip defect exceeds a criterion impacting chip quality, the flawed chip has to be detected and removed. Defect inspection systems are usually developed with frame CCD imagers. Such systems have several drawbacks, including a pause-type mechanism for image acquisition, complicated acquisition control, and easily damaged moving components. Moreover, the acquired images of each chip have to be processed radiometrically and geometrically and then stitched together before inspection, which impacts the accuracy and efficiency of defect inspection. The approach taken to the image acquisition system and its opto-mechanical module is therefore critical for the inspection system. In this article, the design and characterization of a new image acquisition system and its opto-mechanical module are presented. Defects larger than 15 μm have to be inspected, and inspection throughput shall be greater than 0.6 m/sec. The image acquisition system therefore has the following characteristics: (1) a resolution of 5 μm and 10 μm for the optical lens and linear CCD imager, respectively; (2) a lens magnification of 2; and (3) a line rate greater than 120 kHz for imager output. The design of the structure and outlines of the new system and module is also described in this work. The proposed system has the advantages of transporting chips at constant speed while acquiring images, using only one image per chip for inspection, requiring no image-mosaic process, and simplifying the control of image acquisition, so inspection efficiency and accuracy are substantially improved.

  4. Task-driven image acquisition and reconstruction in cone-beam CT.

    PubMed

    Gang, Grace J; Stayman, J Webster; Ehtiati, Tina; Siewerdsen, Jeffrey H

    2015-04-21

This work introduces a task-driven imaging framework that incorporates a mathematical definition of the imaging task, a model of the imaging system, and a patient-specific anatomical model to prospectively design image acquisition and reconstruction techniques to optimize task performance. The framework is applied to joint optimization of tube current modulation, view-dependent reconstruction kernel, and orbital tilt in cone-beam CT. The system model considers a cone-beam CT system incorporating a flat-panel detector and 3D filtered backprojection and accurately describes the spatially varying noise and resolution over a wide range of imaging parameters in the presence of a realistic anatomical model. Task-based detectability index (d') is incorporated as the objective function in a task-driven optimization of image acquisition and reconstruction techniques. The orbital tilt was optimized through an exhaustive search across tilt angles ranging over ±30°. For each tilt angle, the view-dependent tube current and reconstruction kernel (i.e. the modulation profiles) that maximized detectability were identified via an alternating optimization. The task-driven approach was compared with conventional unmodulated and automatic exposure control (AEC) strategies for a variety of imaging tasks and anthropomorphic phantoms. The task-driven strategy outperformed the unmodulated and AEC cases for all tasks. For example, d' for a sphere detection task in a head phantom was improved by 30% compared to the unmodulated case by using smoother kernels for noisy views and distributing mAs across less noisy views (at fixed total mAs) in a manner that was beneficial to task performance. Similarly, for detection of a line-pair pattern, the task-driven approach increased d' by 80% compared to no modulation by means of view-dependent mA and kernel selection that yields a modulation transfer function and noise-power spectrum optimal for the task. Optimization of orbital tilt identified the tilt

  5. Spectral analysis for automated exploration and sample acquisition

    NASA Technical Reports Server (NTRS)

    Eberlein, Susan; Yates, Gigi

    1992-01-01

    Future space exploration missions will rely heavily on the use of complex instrument data for determining the geologic, chemical, and elemental character of planetary surfaces. One important instrument is the imaging spectrometer, which collects complete images in multiple discrete wavelengths in the visible and infrared regions of the spectrum. Extensive computational effort is required to extract information from such high-dimensional data. A hierarchical classification scheme allows multispectral data to be analyzed for purposes of mineral classification while limiting the overall computational requirements. The hierarchical classifier exploits the tunability of a new type of imaging spectrometer which is based on an acousto-optic tunable filter. This spectrometer collects a complete image in each wavelength passband without spatial scanning. It may be programmed to scan through a range of wavelengths or to collect only specific bands for data analysis. Spectral classification activities employ artificial neural networks, trained to recognize a number of mineral classes. Analysis of the trained networks has proven useful in determining which subsets of spectral bands should be employed at each step of the hierarchical classifier. The network classifiers are capable of recognizing all mineral types which were included in the training set. In addition, the major components of many mineral mixtures can also be recognized. This capability may prove useful for a system designed to evaluate data in a strange environment where details of the mineral composition are not known in advance.
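The hierarchical band-subset scheme can be sketched with a nearest-centroid stand-in for the paper's neural-network classifiers: a coarse pass on a few bands picks a mineral group, then a fine pass on a group-specific band subset picks the class, keeping total computation low. The mineral names, band layout, and centroid values below are invented for illustration:

```python
import numpy as np

def nearest_centroid(spectrum, centroids, bands):
    """Classify by nearest centroid, using only the listed spectral bands."""
    d = [np.linalg.norm(spectrum[bands] - c[bands]) for c in centroids.values()]
    return list(centroids)[int(np.argmin(d))]

def hierarchical_classify(spectrum, coarse, fine, coarse_bands, fine_bands):
    """Two-stage scheme: coarse group on a few bands, then a class within
    that group on a group-specific band subset."""
    group = nearest_centroid(spectrum, coarse, coarse_bands)
    return group, nearest_centroid(spectrum, fine[group], fine_bands[group])

coarse = {"carbonate": np.array([0.9, 0.8, 0.2, 0.2, 0.5, 0.5]),
          "silicate":  np.array([0.2, 0.3, 0.9, 0.8, 0.5, 0.5])}
fine = {"carbonate": {"calcite":  np.array([0.9, 0.8, 0.2, 0.2, 0.9, 0.1]),
                      "dolomite": np.array([0.9, 0.8, 0.2, 0.2, 0.1, 0.9])},
        "silicate":  {"quartz":   np.array([0.2, 0.3, 0.9, 0.8, 0.9, 0.1]),
                      "feldspar": np.array([0.2, 0.3, 0.9, 0.8, 0.1, 0.9])}}
coarse_bands = [0, 1, 2, 3]                  # bands that separate the groups
fine_bands = {"carbonate": [4, 5], "silicate": [4, 5]}

sample = np.array([0.85, 0.8, 0.25, 0.2, 0.15, 0.85])  # noisy dolomite-like spectrum
group, mineral = hierarchical_classify(sample, coarse, fine, coarse_bands, fine_bands)
```

Only bands 0-3 are examined in the first stage and bands 4-5 in the second, mirroring how the tunable filter need only collect the bands each stage requires.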

  7. Oncological image analysis: medical and molecular image analysis

    NASA Astrophysics Data System (ADS)

    Brady, Michael

    2007-03-01

    This paper summarises the work we have been doing on joint projects with GE Healthcare on colorectal and liver cancer, and with Siemens Molecular Imaging on dynamic PET. First, we recall the salient facts about cancer and oncological image analysis. Then we introduce some of the work that we have done on analysing clinical MRI images of colorectal and liver cancer, specifically the detection of lymph nodes and segmentation of the circumferential resection margin. In the second part of the paper, we shift attention to the complementary aspect of molecular image analysis, illustrating our approach with some recent work on: tumour acidosis, tumour hypoxia, and multiply drug resistant tumours.

  8. A multispectral three-dimensional acquisition technique for imaging near metal implants.

    PubMed

    Koch, Kevin M; Lorbiecki, John E; Hinks, R Scott; King, Kevin F

    2009-02-01

Metallic implants used in bone and joint arthroplasty induce severe spatial perturbations to the B0 magnetic field used for high-field clinical magnetic resonance. These perturbations distort the slice-selection and frequency-encoding processes applied in conventional two-dimensional MRI techniques and hinder the diagnosis of complications from arthroplasty. Here, a method is presented whereby multiple three-dimensional fast-spin-echo images are collected using discrete offsets in RF transmission and reception frequency. It is demonstrated that this multi-acquisition variable-resonance image combination technique can be used to generate a composite image that is devoid of slice-plane distortion and possesses greatly reduced distortions in the readout direction, even in the immediate vicinity of metallic implants.
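The combination step can be sketched as a root-sum-of-squares over the frequency-offset bin images, a common choice for merging such acquisitions (the abstract does not specify the exact combination rule, so treat this as an assumption); each voxel then draws signal from whichever offsets excited it:

```python
import numpy as np

def combine_bins(bin_images):
    """Combine spectrally-binned magnitude images into one composite via
    root-sum-of-squares, so signal excited at any frequency offset
    contributes to the final image."""
    stack = np.asarray(bin_images, dtype=float)
    return np.sqrt((stack ** 2).sum(axis=0))

# Two voxels whose signal is split differently across three frequency bins
bin_a = np.array([[3.0, 0.0]])
bin_b = np.array([[0.0, 4.0]])
bin_c = np.array([[4.0, 3.0]])
composite = combine_bins([bin_a, bin_b, bin_c])   # both voxels -> 5.0
```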

  9. The influence of the microscope lamp filament colour temperature on the process of digital images of histological slides acquisition standardization

    PubMed Central

    2014-01-01

    Background The aim of this study is to compare digital images of tissue biopsies captured with an optical microscope in bright-field mode under various lighting conditions. The colour variation in tissue samples immunohistochemically stained with 3,3'-diaminobenzidine and haematoxylin is immense and comes from various sources. One of them is a camera white-balance setting that does not match the colour temperature of the microscope lamp. Although this type of error is most easily handled at the image acquisition stage, it can also be eliminated afterwards with colour adjustment algorithms. The dependence of colour variation on the microscope light temperature and the camera settings is examined here as introductory research for the process of automatic colour standardization. Methods Six fields of view with empty space among the tissue samples were selected for analysis. Each field of view was acquired 225 times with various microscope light temperatures and camera white-balance settings. Fourteen randomly chosen images were corrected and compared with the reference image by the following methods: Mean Square Error, Structural SIMilarity (SSIM) and visual assessment. Results For two types of background and two types of object, the statistical image descriptors (range, median, mean and standard deviation of chromaticity on the a and b channels of the CIELab colour space, luminance L, and local colour variability for objects' specific areas) were calculated. The results were averaged over the 6 images acquired under the same light conditions and camera settings for each sample.
Conclusions The analysis of the results leads to the following conclusions: (1) images collected with the white-balance setting adjusted to the light colour temperature cluster in a certain area of chromatic space; (2) the process of white-balance correction for images collected with white balance camera settings not matched to the light temperature
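Two of the comparison descriptors used in the study, Mean Square Error and per-channel statistics, are simple to reproduce. A minimal Python sketch assuming 8-bit channel data; the function names and toy images are illustrative, and SSIM is omitted for brevity:

```python
import numpy as np

def mean_square_error(img_a, img_b):
    """Pixelwise MSE between two images of identical shape."""
    a = np.asarray(img_a, dtype=float)
    b = np.asarray(img_b, dtype=float)
    return float(((a - b) ** 2).mean())

def channel_stats(channel):
    """Range, median, mean and standard deviation of one colour channel."""
    c = np.asarray(channel, dtype=float)
    return {"range": float(c.max() - c.min()),
            "median": float(np.median(c)),
            "mean": float(c.mean()),
            "std": float(c.std())}

reference = np.full((4, 4), 128.0)     # flat reference background
corrected = reference + 2.0            # a slightly shifted test image
mse = mean_square_error(reference, corrected)
stats = channel_stats(corrected)
```

In practice the channels would first be converted from RGB to CIELab so that the a/b chromaticity statistics match those reported above.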

  10. Advanced camera image data acquisition system for Pi-of-the-Sky

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, Maciej; Kasprowicz, Grzegorz; Pozniak, Krzysztof; Romaniuk, Ryszard; Wrochna, Grzegorz

    2008-11-01

    The paper describes a new generation of high-performance, remotely controlled CCD cameras designed for astronomical applications. A completely new camera PCB was designed, manufactured, tested and commissioned. The CCD chip was positioned differently than in the previous design, resulting in better performance of the astronomical video data acquisition system. The camera was built using a low-noise, 4-Mpixel CCD circuit by STA. The electronic circuit of the camera is highly parameterized, reconfigurable and modular in comparison with the first-generation solution, owing to the use of open software solutions and an FPGA (Altera Cyclone EP1C6). New algorithms were implemented in the FPGA chip. The camera system uses the following advanced electronic circuits: a Cypress CY7C68013A microcontroller (8051 core), an Analog Devices AD9826 image processor, a Realtek RTL8169s Gigabit Ethernet interface, Atmel AT45DB642 memory, and an Atmel AT91SAM9260 microprocessor with an ARM926EJ-S core. Software for the camera, its remote control and image data acquisition is based entirely on open-source platforms, using the ISI image interface, the V4L2 API, the AMBA/AHB data bus and the INDI protocol. The camera will be replicated in 20 pieces and is designed for continuous on-line, wide-angle observations of the sky in the Pi-of-the-Sky research program.

  11. Design and DSP implementation of star image acquisition and star point fast acquiring and tracking

    NASA Astrophysics Data System (ADS)

    Zhou, Guohui; Wang, Xiaodong; Hao, Zhihang

    2006-02-01

    A star sensor is a special high-accuracy photoelectric sensor, and attitude acquisition time is an important performance index of such a sensor. In this paper, the design target is a dynamic performance of 10 attitude samples per second. Based on an analysis of the CCD signal timing and star image processing, a new design with a dedicated parallel architecture for accelerating star image processing is presented. In this design, the transfer of data from expanded windows containing stars to the DSP's on-chip memory is scheduled during the invalid period of the CCD frame signal; while the CCD image is being saved to memory, the DSP processes the data already in on-chip memory. This parallelism greatly improves processing efficiency, and the scheme yields enormous savings in the memory normally required. DSP HOLD mode and CPLD technology are used to implement a memory shared between the CCD and the DSP. The efficiency of the scheme is demonstrated in numerical tests: in the star acquisition stage, the five brightest stars are acquired in only 3.5 ms; in the star tracking stage, the data in five expanded windows containing stars are moved into the DSP's internal memory in 43 µs, and five star coordinates are computed in 1.6 ms.
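The star acquisition step, locating the brightest stars and cutting an expanded window around each, can be sketched algorithmically. This is only an illustration in Python; the paper's implementation runs on a DSP with shared memory, and all names here are hypothetical:

```python
import numpy as np

def brightest_star_windows(frame, n_stars=5, half=1):
    """Locate the n brightest pixels and cut a small window around each,
    mimicking the 'expanded window' transfer described above."""
    img = np.asarray(frame, dtype=float).copy()
    windows = []
    for _ in range(n_stars):
        y, x = np.unravel_index(np.argmax(img), img.shape)
        y0, y1 = max(y - half, 0), min(y + half + 1, img.shape[0])
        x0, x1 = max(x - half, 0), min(x + half + 1, img.shape[1])
        windows.append(((y, x), img[y0:y1, x0:x1].copy()))
        img[y0:y1, x0:x1] = -np.inf    # suppress this star, then re-search
    return windows

frame = np.zeros((8, 8))
frame[2, 2], frame[5, 6] = 10.0, 7.0   # two synthetic stars
found = brightest_star_windows(frame, n_stars=2)
```

On the real sensor, only these small windows (not the full frame) are moved into DSP memory, which is what makes the 43 µs transfer time achievable.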

  12. Investigations on the efficiency of cardiac-gated methods for the acquisition of diffusion-weighted images

    NASA Astrophysics Data System (ADS)

    Nunes, Rita G.; Jezzard, Peter; Clare, Stuart

    2005-11-01

    Diffusion-weighted images are inherently very sensitive to motion. Pulsatile motion of the brain can give rise to artifactual signal attenuation leading to over-estimation of the apparent diffusion coefficients, even with snapshot echo planar imaging. Such miscalculations can result in erroneous estimates of the principal diffusion directions. Cardiac gating can be performed to confine acquisition to the quiet portion of the cycle. Although effective, this approach leads to significantly longer acquisition times. On the other hand, it has been demonstrated that pulsatile motion is not significant in regions above the corpus callosum. To reduce acquisition times and improve the efficiency of whole brain cardiac-gated acquisitions, the upper slices of the brain can be imaged during systole, reserving diastole for those slices most affected by pulsatile motion. The merits and disadvantages of this optimized approach are investigated here, in comparison to a more standard gating method and to the non-gated approach.

  13. Variability of textural features in FDG PET images due to different acquisition modes and reconstruction parameters

    PubMed Central

    GALAVIS, PAULINA E.; HOLLENSEN, CHRISTIAN; JALLOW, NGONEH; PALIWAL, BHUDATT; JERAJ, ROBERT

    2014-01-01

    Background Characterization of textural features (spatial distributions of image intensity levels) has been considered as a tool for automatic tumor segmentation. The purpose of this work is to study the variability of textural features in PET images due to different acquisition modes and reconstruction parameters. Material and methods Twenty patients with solid tumors underwent PET/CT scans on a GE Discovery VCT scanner, 45–60 minutes post-injection of 10 mCi of [18F]FDG. Scans were acquired in both 2D and 3D modes. For each acquisition, the raw PET data were reconstructed using five different reconstruction parameters. Lesions were segmented on a default image using a threshold of 40% of the maximum SUV. Fifty different texture features were calculated inside the tumors. The range of variation of each feature was calculated with respect to its average value. Results The fifty textural features were classified by range of variation into three categories: small, intermediate and large variability. Features with small variability (range ≤ 5%) were first-order entropy, energy, maximal correlation coefficient (a second-order feature) and low gray-level run emphasis (a high-order feature). Features with intermediate variability (10% ≤ range ≤ 25%) were entropy-GLCM, sum entropy, high gray-level run emphasis, gray-level non-uniformity, small number emphasis, and entropy-NGL. The forty remaining features presented large variations (range > 30%). Conclusion Textural features such as first-order entropy, energy, maximal correlation coefficient, and low gray-level run emphasis exhibited small variations due to different acquisition modes and reconstruction parameters. Features with low levels of variation are better candidates for reproducible tumor segmentation. Even though features such as contrast-NGTD, coarseness, homogeneity, and busyness have been used previously, our data indicated that these features presented large variations and therefore could not be recommended for reproducible tumor quantification.
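Two of the small-variability features (first-order entropy and energy) and the range-of-variation metric can be sketched as follows; the bin count and toy measurements are illustrative, not values from the study:

```python
import numpy as np

def first_order_features(image, bins=16):
    """First-order texture features from the grey-level histogram:
    entropy and energy of the intensity distribution."""
    hist, _ = np.histogram(np.asarray(image).ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = float(-(p * np.log2(p)).sum())
    energy = float((p ** 2).sum())
    return entropy, energy

def percent_range(values):
    """Range of variation relative to the average, as used above to
    classify features into small / intermediate / large variability."""
    v = np.asarray(values, dtype=float)
    return float(100.0 * (v.max() - v.min()) / v.mean())

# Hypothetical feature value measured under five reconstruction settings
measurements = [1.00, 1.02, 0.99, 1.01, 1.03]
spread = percent_range(measurements)   # falls in the "small" category
```

A feature whose spread stays below 5% across reconstructions would, by the study's criterion, be a reasonable candidate for reproducible segmentation.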

  14. A user report on the trial use of gesture commands for image manipulation and X-ray acquisition.

    PubMed

    Li, Ellis Chun Fai; Lai, Christopher Wai Keung

    2016-07-01

    A touchless environment for image manipulation and X-ray acquisition may enhance current infection-control measures during X-ray examinations, simply by avoiding any contact with the control panel. The present study aimed to design and trial the use of motion-sensing technology to perform image manipulation and X-ray acquisition functions (activities a radiographer frequently performs during an X-ray examination) in an experimental setup. Based on the authors' clinical experience, several gesture commands were carefully designed to complete a single X-ray examination. Four radiographers were randomly recruited for the study and asked to perform the gesture commands in front of a computer integrated with a gesture-based touchless controller. The translational movements of the tips of their thumbs and index fingers while performing the different gesture commands were recorded for analysis. Although individual operators were free to decide the extent and speed of their finger and thumb movements, the results demonstrated that all operators could perform the proposed gesture commands with good consistency, suggesting that motion-sensing technology could, in practice, be integrated into radiographic examinations. In summary, although motion-sensing input might inevitably slow examination throughput, given the extra procedural steps required to trigger specific gesture commands in sequence, it is advantageous in minimizing the pathogen contamination during image operation and processing that can lead to cross-infection. PMID:27230385

  15. Wide-field flexible endoscope for simultaneous color and NIR fluorescence image acquisition during surveillance colonoscopy

    NASA Astrophysics Data System (ADS)

    García-Allende, P. Beatriz; Nagengast, Wouter B.; Glatz, Jürgen; Ntziachristos, Vasilis

    2013-03-01

    Colorectal cancer (CRC) is the third most common form of cancer and, despite recent declines in both incidence and mortality, it still remains the second leading cause of cancer-related deaths in the western world. Colonoscopy is the standard for detection and removal of premalignant lesions to prevent CRC. The major challenges that physicians face during surveillance colonoscopy are the high adenoma miss-rates and the lack of functional information to facilitate decision-making concerning which lesions to remove. Targeted imaging with NIR fluorescence would address these limitations. Tissue penetration is increased in the NIR range while the combination with targeted NIR fluorescent agents provides molecularly specific detection of cancer cells, i.e. a red-flag detection strategy that allows tumor imaging with optimal sensitivity and specificity. The development of a flexible endoscopic fluorescence imaging method that can be integrated with standard medical endoscopes and facilitates the clinical use of this potential is described in this work. A semi-disposable coherent fiber optic imaging bundle that is traditionally employed in the exploration of biliary and pancreatic ducts is proposed, since it is long and thin enough to be guided through the working channel of a traditional video colonoscope allowing visualization of proximal lesions in the colon. A custom developed zoom system magnifies the image of the proximal end of the imaging bundle to fill the dimensions of two cameras operating in parallel providing the simultaneous color and fluorescence video acquisition.

  16. Four-channel surface coil array for sequential CW-EPR image acquisition.

    PubMed

    Enomoto, Ayano; Emoto, Miho; Fujii, Hirotada; Hirata, Hiroshi

    2013-09-01

    This article describes a four-channel surface coil array to increase the area of visualization for continuous-wave electron paramagnetic resonance (CW-EPR) imaging. A 776-MHz surface coil array was constructed with four independent surface coil resonators and three kinds of switches. Control circuits for switching the resonators were also built to sequentially perform EPR image acquisition for each resonator. The resonance frequencies of the resonators were shifted using PIN diode switches to decouple the inductively coupled coils. To investigate the area of visualization with the surface coil array, three-dimensional EPR imaging was performed using a glass cell phantom filled with a solution of nitroxyl radicals. The area of visualization obtained with the surface coil array was increased approximately 3.5-fold in comparison to that with a single surface coil resonator. Furthermore, to demonstrate the applicability of this surface coil array to animal imaging, three-dimensional EPR imaging was performed in a living mouse with an exogenously injected nitroxyl radical imaging agent. PMID:23832070

  17. Imaging acquisition display performance: an evaluation and discussion of performance metrics and procedures.

    PubMed

    Silosky, Michael S; Marsh, Rebecca M; Scherzinger, Ann L

    2016-07-08

    When The Joint Commission updated its Requirements for Diagnostic Imaging Services for hospitals and ambulatory care facilities on July 1, 2015, among the new requirements was an annual performance evaluation for acquisition workstation displays. The purpose of this work was to evaluate a large cohort of acquisition displays used in a clinical environment and compare the results with existing performance standards provided by the American College of Radiology (ACR) and the American Association of Physicists in Medicine (AAPM). Measurements of the minimum luminance, maximum luminance, and luminance uniformity were performed on 42 acquisition displays across multiple imaging modalities. The mean values, standard deviations, and ranges were calculated for these metrics. Additionally, visual evaluations of contrast, spatial resolution, and distortion were performed using either the Society of Motion Picture and Television Engineers test pattern or the TG-18-QC test pattern. Finally, an evaluation of local nonuniformities was performed using either a uniform white display or the TG-18-UN80 test pattern. The displays tested were flat-panel liquid crystal displays, ranging in age from under 1 year to 10 years of use, built by a wide variety of manufacturers. The mean values of Lmin and Lmax for the displays tested were 0.28 ± 0.13 cd/m2 and 135.07 ± 33.35 cd/m2, respectively. The mean maximum luminance deviation for ultrasound and non-ultrasound displays was 12.61% ± 4.85% and 14.47% ± 5.36%, respectively. Visual evaluation of display performance varied depending on several factors, including brightness and contrast settings and the test pattern used for image quality assessment. This work provides a snapshot of the performance of 42 acquisition displays across several imaging modalities in clinical use at a large medical center. Comparison with existing performance standards reveals the impact of changes in display technology and of the move from cathode ray tube to liquid crystal displays.
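The luminance metrics involved are straightforward to compute. A short sketch, assuming the common 200·(Lmax − Lmin)/(Lmax + Lmin) definition of percent luminance non-uniformity (this abstract does not state the exact formula the authors used, and the measurement values below are hypothetical):

```python
def luminance_ratio(l_min, l_max):
    """Luminance ratio of a display: maximum over minimum luminance."""
    return l_max / l_min

def nonuniformity_percent(luminances):
    """Percent luminance non-uniformity across measurement points on
    one display: 200 * (max - min) / (max + min)."""
    lo, hi = min(luminances), max(luminances)
    return 200.0 * (hi - lo) / (hi + lo)

# Five-point luminance measurement (cd/m2) on a hypothetical display
points = [120.0, 125.0, 130.0, 128.0, 122.0]
nu = nonuniformity_percent(points)
```

Repeating this per display and aggregating mean, standard deviation and range across the cohort reproduces the kind of summary statistics reported above.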

  18. Matlab-based interface for the simultaneous acquisition of force measures and Doppler ultrasound muscular images.

    PubMed

    Ferrer-Buedo, José; Martínez-Sober, Marcelino; Alakhdar-Mohmara, Yasser; Soria-Olivas, Emilio; Benítez-Martínez, Josep C; Martínez-Martínez, José M

    2013-04-01

    This paper tackles the design of a graphical user interface (GUI) based on Matlab (MathWorks Inc., MA), a worldwide standard in the processing of biosignals, which allows the simultaneous acquisition of muscular force signals and images from an ultrasound scanner. Thus, it is possible to unify two key magnitudes for analyzing the evolution of muscular injuries: the force exerted by the muscle and the section/length of the muscle when such force is exerted. This paper describes the modules developed and demonstrates their applicability with a case study analyzing the functional capacity of the shoulder rotator cuff. PMID:23176896

  20. Histopathological Image Analysis: A Review

    PubMed Central

    Gurcan, Metin N.; Boucheron, Laura; Can, Ali; Madabhushi, Anant; Rajpoot, Nasir; Yener, Bulent

    2010-01-01

    Over the past decade, dramatic increases in computational power and improvements in image analysis algorithms have allowed the development of powerful computer-assisted analytical approaches to radiological data. With the recent advent of whole-slide digital scanners, tissue histopathology slides can now be digitized and stored in digital image form. Consequently, digitized tissue histopathology has become amenable to the application of computerized image analysis and machine learning techniques. Analogous to the role of computer-assisted diagnosis (CAD) algorithms in medical imaging, which complement the opinion of a radiologist, CAD algorithms have begun to be developed for disease detection, diagnosis, and prognosis prediction to complement the opinion of the pathologist. In this paper, we review the recent state of the art in CAD technology for digitized histopathology. This paper also briefly describes the development and application of novel image analysis technology for a few specific histopathology-related problems being pursued in the United States and Europe. PMID:20671804

  1. Methods of Hematoxylin and Eosin Image Information Acquisition and Optimization in Confocal Microscopy

    PubMed Central

    Yoon, Woong Bae; Kim, Hyunjin; Kim, Kwang Gi; Choi, Yongdoo; Chang, Hee Jin

    2016-01-01

    Objectives We produced hematoxylin and eosin (H&E) staining-like color images by using confocal laser scanning microscopy (CLSM), which can obtain the same or more information compared with conventional tissue staining. Methods We improved the images by using several image-conversion techniques, including morphological methods, color space conversion methods, and segmentation methods. Results The image obtained after processing showed coloring very similar to that of images produced by H&E staining, and conducting analysis through fluorescent-dye imaging and confocal microscopy offers advantages over analysis based on single microscopic imaging alone. Conclusions The colors used in CLSM are different from those seen in H&E staining, which is the method most widely used for pathologic diagnosis and is familiar to pathologists. Computer technology can facilitate the conversion of CLSM images so that they closely resemble H&E-stained images. We believe that the technique used in this study has great potential for application in clinical tissue analysis. PMID:27525165
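One way such a conversion can work is a Beer-Lambert style mapping from fluorescence channels to stain transmission, a generic "virtual H&E" approach. The sketch below is an illustration of that idea, not the authors' method; the optical-density vectors are rough illustrative values:

```python
import numpy as np

# Approximate RGB optical densities of the two stains (illustrative)
HEMATOXYLIN_OD = np.array([0.65, 0.70, 0.29])
EOSIN_OD       = np.array([0.07, 0.99, 0.11])

def fluorescence_to_he(nuclear, cytoplasm):
    """Map two normalised fluorescence channels (0..1) to an H&E-like
    RGB image via a Beer-Lambert transmission model: bright nuclear
    signal renders purple-blue, cytoplasmic signal renders pink."""
    nuc = np.asarray(nuclear, dtype=float)[..., None]
    cyt = np.asarray(cytoplasm, dtype=float)[..., None]
    transmission = np.exp(-(nuc * HEMATOXYLIN_OD + cyt * EOSIN_OD))
    return (255.0 * transmission).astype(np.uint8)

nuclear = np.array([[1.0, 0.0]])     # strong nuclear signal on the left
cytoplasm = np.array([[0.0, 1.0]])   # cytoplasmic signal on the right
rgb = fluorescence_to_he(nuclear, cytoplasm)
```

Pixels with no signal stay white (full transmission), matching the bright background of a conventionally stained slide.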

  2. Optimizing Uas Image Acquisition and Geo-Registration for Precision Agriculture

    NASA Astrophysics Data System (ADS)

    Hearst, A. A.; Cherkauer, K. A.; Rainey, K. M.

    2014-12-01

    Unmanned Aircraft Systems (UASs) can acquire imagery of crop fields in various spectral bands, including the visible, near-infrared, and thermal portions of the spectrum. By combining techniques of computer vision, photogrammetry, and remote sensing, these images can be stitched into precise, geo-registered maps, which may have applications in precision agriculture and other industries. However, the utility of these maps will depend on their positional accuracy. Therefore, it is important to quantify positional accuracy and consider the tradeoffs between accuracy, field site setup, and the computational requirements for data processing and analysis. This will enable planning of data acquisition and processing to obtain the required accuracy for a given project. This study focuses on developing and evaluating methods for geo-registration of raw aerial frame photos acquired by a small fixed-wing UAS. This includes visual, multispectral, and thermal imagery at 3, 6, and 14 cm/pix resolutions, respectively. The study area is 10 hectares of soybean fields at the Agronomy Center for Research and Education (ACRE) at Purdue University. The dataset consists of imagery from 6 separate days of flights (surveys) and supporting ground measurements. The Direct Sensor Orientation (DiSO) and Integrated Sensor Orientation (InSO) methods for geo-registration are tested using 16 Ground Control Points (GCPs). Subsets of these GCPs are used to test for the effects of different numbers and spatial configurations of GCPs on positional accuracy. The horizontal and vertical Root Mean Squared Error (RMSE) is used as the primary metric of positional accuracy. Preliminary results from 1 of the 6 surveys show that the DiSO method (0 GCPs used) achieved an RMSE in the X, Y, and Z direction of 2.46 m, 1.04 m, and 1.91 m, respectively. InSO using 5 GCPs achieved an RMSE of 0.17 m, 0.13 m, and 0.44 m. InSO using 10 GCPs achieved an RMSE of 0.10 m, 0.09 m, and 0.12 m. 
Further analysis will identify the effects of GCP number and spatial configuration on positional accuracy across all 6 surveys.
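The accuracy metric used above is straightforward to compute from check-point residuals. A minimal sketch with hypothetical residuals in metres:

```python
import math

def rmse(errors):
    """Root mean squared error of a list of residuals."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Hypothetical check-point residuals after geo-registration (m)
dx = [0.1, -0.2, 0.15, -0.05]
dy = [0.05, 0.1, -0.1, 0.0]
dz = [0.3, -0.4, 0.2, -0.1]

rmse_x, rmse_y, rmse_z = rmse(dx), rmse(dy), rmse(dz)
horizontal = math.sqrt(rmse_x ** 2 + rmse_y ** 2)  # combined planimetric error
```

Comparing these values across GCP subsets is exactly how the DiSO and InSO configurations above are ranked.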

  3. Cardiovascular Magnetic Resonance in Cardiology Practice: A Concise Guide to Image Acquisition and Clinical Interpretation.

    PubMed

    Valbuena-López, Silvia; Hinojar, Rocío; Puntmann, Valentina O

    2016-02-01

    Cardiovascular magnetic resonance plays an increasingly important role in routine cardiology clinical practice. It is a versatile imaging modality that allows highly accurate, broad and in-depth assessment of cardiac function and structure and provides information on pertinent clinical questions in diseases such as ischemic heart disease, nonischemic cardiomyopathies, and heart failure, as well as allowing unique indications, such as the assessment and quantification of myocardial iron overload or infiltration. Increasing evidence for the role of cardiovascular magnetic resonance, together with the spread of knowledge and skill outside expert centers, has afforded greater access for patients and wider clinical experience. This review provides a snapshot of cardiovascular magnetic resonance in modern clinical practice by linking image acquisition and postprocessing with effective delivery of the clinical meaning.

  4. Horizon Acquisition for Attitude Determination Using Image Processing Algorithms- Results of HORACE on REXUS 16

    NASA Astrophysics Data System (ADS)

    Barf, J.; Rapp, T.; Bergmann, M.; Geiger, S.; Scharf, A.; Wolz, F.

    2015-09-01

    The aim of the Horizon Acquisition Experiment (HORACE) was to prove a new concept for a two-axis horizon sensor that uses algorithms processing ordinary images and remains operable at the high spin rates occurring during emergencies. The difficulty of coping with image distortions, which conventional horizon sensors avoid, was introduced on purpose, as we envision a system capable of using any optical data. During the flight on REXUS 16, which provided a suitable platform similar to the future application scenario, a malfunction of the payload cameras caused severe degradation of the collected scientific data. Nevertheless, with the aid of simulations we could show that the concept is accurate (±0.6°), fast (~100 ms/frame) and robust enough for coarse attitude determination during emergencies, and that it is also applicable to small satellites. In addition, technical knowledge regarding the design of REXUS experiments was gained, including the detection of interference between SATA and GPS.
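After edge detection, the core of such a horizon sensor reduces to fitting a line through detected horizon points and reading off the roll angle. A simplified Python sketch of that single step (it ignores the image distortions the experiment had to handle, and all names are illustrative):

```python
import numpy as np

def roll_from_horizon(points):
    """Least-squares line fit through horizon edge points
    (x, y in pixels); returns the apparent roll angle in degrees."""
    pts = np.asarray(points, dtype=float)
    slope, _ = np.polyfit(pts[:, 0], pts[:, 1], 1)
    return float(np.degrees(np.arctan(slope)))

# Synthetic horizon points from an image rolled by ~10 degrees
xs = np.arange(0, 100, 10)
ys = np.tan(np.radians(10.0)) * xs + 240.0
angle = roll_from_horizon(np.column_stack([xs, ys]))
```

The second axis (pitch) would come from the fitted line's vertical offset relative to the optical centre, which this sketch omits.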

  5. Wndchrm – an open source utility for biological image analysis

    PubMed Central

    Shamir, Lior; Orlov, Nikita; Eckley, D Mark; Macura, Tomasz; Johnston, Josiah; Goldberg, Ilya G

    2008-01-01

    Background Biological imaging is an emerging field, covering a wide range of applications in biological and clinical research. However, while machinery for automated experimenting and data acquisition has been developing rapidly in the past years, automated image analysis often introduces a bottleneck in high content screening. Methods Wndchrm is an open source utility for biological image analysis. The software works by first extracting image content descriptors from the raw image, image transforms, and compound image transforms. Then, the most informative features are selected, and the feature vector of each image is used for classification and similarity measurement. Results Wndchrm has been tested using several publicly available biological datasets, and provided results which are favorably comparable to the performance of task-specific algorithms developed for these datasets. The simple user interface allows researchers who are not knowledgeable in computer vision methods and have no background in computer programming to apply image analysis to their data. Conclusion We suggest that wndchrm can be effectively used for a wide range of biological image analysis tasks. Using wndchrm can allow scientists to perform automated biological image analysis while avoiding the costly challenge of implementing computer vision and pattern recognition algorithms. PMID:18611266

  6. MS1 Peptide Ion Intensity Chromatograms in MS2 (SWATH) Data Independent Acquisitions. Improving Post Acquisition Analysis of Proteomic Experiments.

    PubMed

    Rardin, Matthew J; Schilling, Birgit; Cheng, Lin-Yang; MacLean, Brendan X; Sorensen, Dylan J; Sahu, Alexandria K; MacCoss, Michael J; Vitek, Olga; Gibson, Bradford W

    2015-09-01

    Quantitative analysis of discovery-based proteomic workflows now relies on high-throughput large-scale methods for identification and quantitation of proteins and post-translational modifications. Advancements in label-free quantitative techniques, using either data-dependent or data-independent mass spectrometric acquisitions, have coincided with improved instrumentation featuring greater precision, increased mass accuracy, and faster scan speeds. We recently reported on a new quantitative method called MS1 Filtering (Schilling et al. (2012) Mol. Cell. Proteomics 11, 202-214) for processing data-independent MS1 ion intensity chromatograms from peptide analytes using the Skyline software platform. In contrast, data-independent acquisitions from MS2 scans, or SWATH, can quantify all fragment ion intensities when reference spectra are available. As each SWATH acquisition cycle typically contains an MS1 scan, these two independent label-free quantitative approaches can be acquired in a single experiment. Here, we have expanded the capability of Skyline to extract both MS1 and MS2 ion intensity chromatograms from a single SWATH data-independent acquisition in an Integrated Dual Scan Analysis approach. The performance of both MS1 and MS2 data was examined in simple and complex samples using standard concentration curves. Cases of interference in MS1 and MS2 ion intensity data were assessed, as were the differentiation and quantitation of phosphopeptide isomers in MS2 scan data. In addition, we demonstrated an approach for optimization of SWATH m/z window sizes to reduce interferences using MS1 scans as a guide. Finally, a correlation analysis was performed on both MS1 and MS2 ion intensity data obtained from SWATH acquisitions on a complex mixture using a linear model that automatically removes signals containing interferences. This work demonstrates the practical advantages of properly acquiring and processing MS1 precursor data in addition to MS2 fragment ion data.
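Extracting an ion intensity chromatogram from a series of scans amounts to summing intensity within an m/z tolerance per acquisition cycle. A toy Python sketch of the idea, not the Skyline implementation; the data layout and tolerance are illustrative:

```python
def extract_ion_chromatogram(scans, target_mz, tol=0.05):
    """Sum intensities within +-tol of target_mz for each scan,
    yielding an ion intensity chromatogram across cycles.

    scans: list of (mz_list, intensity_list) pairs, one per cycle."""
    xic = []
    for mzs, intensities in scans:
        total = sum(i for m, i in zip(mzs, intensities)
                    if abs(m - target_mz) <= tol)
        xic.append(total)
    return xic

# Three toy cycles, each with a precursor near 500.0 and an unrelated peak
scans = [
    ([500.01, 612.30], [100.0, 40.0]),
    ([500.02, 612.31], [300.0, 42.0]),
    ([500.00, 612.29], [150.0, 41.0]),
]
xic = extract_ion_chromatogram(scans, target_mz=500.0)
```

The same routine applied to MS1 scans gives a precursor chromatogram and applied to the SWATH MS2 scans gives fragment chromatograms, which is the pairing the Integrated Dual Scan Analysis exploits.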

  7. Design of multi-mode compatible image acquisition system for HD area array CCD

    NASA Astrophysics Data System (ADS)

    Wang, Chen; Sui, Xiubao

    2014-11-01

    In line with the current trends in video surveillance toward digitization and high definition, a multimode-compatible image acquisition system for an HD area-array CCD is designed. The hardware and software designs of the color video capture system for the HD area-array CCD KAI-02150, presented by the Truesense Imaging company, are analyzed, and the structure parameters of the HD area-array CCD and the color video gathering principle of the acquisition system are introduced. The CCD control sequence and the timing logic of the whole capture system are then realized. The noise in the video signal (KTC noise and 1/f noise) is filtered using the Correlated Double Sampling (CDS) technique to enhance the signal-to-noise ratio of the system. Compatible hardware and software designs are put forward for two other image sensors of the same series, the KAI-04050 and KAI-08050, which provide four million and eight million effective pixels, respectively. A Field Programmable Gate Array (FPGA) is adopted as the key controller of the system to perform a top-down modular design, which implements the hardware design in software and improves development efficiency. Finally, the required timing drive signals are simulated accurately in VHDL on the Quartus II 12.1 development platform. The result of the simulation indicates that the driving circuit is characterized by a simple framework, low power consumption, and strong anti-interference ability, meeting the current demand for miniaturization and high definition.
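The CDS step itself is a per-pixel subtraction of the reset sample from the signal sample, cancelling the KTC (reset) noise common to both samples. A minimal numeric illustration with hypothetical ADC values (in hardware this is done in the analog front end, not in software):

```python
def correlated_double_sample(reset_levels, signal_levels):
    """CDS output: subtract each pixel's reset level from its signal
    level; offsets shared by both samples (KTC noise) cancel out."""
    return [s - r for r, s in zip(reset_levels, signal_levels)]

# Per-pixel reset and signal samples (arbitrary ADC units); the shared
# ~1000-count offset represents reset noise present in both samples
reset  = [1000.0, 1003.0, 998.0]
signal = [1250.0, 1403.0, 1098.0]
video  = correlated_double_sample(reset, signal)
```

Because the reset-noise offset appears in both samples of each pixel, only the true photo-signal survives the subtraction, which is why CDS raises the signal-to-noise ratio.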

  8. Parallel image-acquisition in continuous-wave electron paramagnetic resonance imaging with a surface coil array: Proof-of-concept experiments.

    PubMed

    Enomoto, Ayano; Hirata, Hiroshi

    2014-02-01

    This article describes a feasibility study of parallel image-acquisition using a two-channel surface coil array in continuous-wave electron paramagnetic resonance (CW-EPR) imaging. Parallel EPR imaging was performed by multiplexing of EPR detection in the frequency domain. The parallel acquisition system consists of two surface coil resonators and radiofrequency (RF) bridges for EPR detection. To demonstrate the feasibility of this method of parallel image-acquisition with a surface coil array, three-dimensional EPR imaging was carried out using a tube phantom. Technical issues in the multiplexing method of EPR detection were also clarified. We found that degradation in the signal-to-noise ratio due to the interference of RF carriers is a key problem to be solved.

  9. Optimizing Federal Fleet Vehicle Acquisitions: An Eleven-Agency FY 2012 Analysis

    SciTech Connect

    Singer, M.; Daley, R.

    2015-02-01

    This report focuses on the National Renewable Energy Laboratory's (NREL) fiscal year (FY) 2012 effort that used the NREL Optimal Vehicle Acquisition (NOVA) analysis to identify optimal vehicle acquisition recommendations for eleven diverse federal agencies. Results of the study show that by following a vehicle acquisition plan that maximizes the reduction in greenhouse gas (GHG) emissions, significant progress is also made toward the mandated complementary goals of acquiring alternative fuel vehicles, petroleum use reduction, and alternative fuel use increase.

  10. Diffusion MRI of the neonate brain: acquisition, processing and analysis techniques.

    PubMed

    Pannek, Kerstin; Guzzetta, Andrea; Colditz, Paul B; Rose, Stephen E

    2012-10-01

    Diffusion MRI (dMRI) is a popular noninvasive imaging modality for the investigation of the neonate brain. It enables the assessment of white matter integrity, and is particularly suited for studying white matter maturation in the preterm and term neonate brain. Diffusion tractography allows the delineation of white matter pathways and assessment of connectivity in vivo. In this review, we address the challenges of performing and analysing neonate dMRI. Of particular importance in dMRI analysis is adequate data preprocessing to reduce image distortions inherent to the acquisition technique, as well as artefacts caused by head movement. We present a summary of techniques that should be used in the preprocessing of neonate dMRI data, and demonstrate the effect of these important correction steps. Furthermore, we give an overview of available analysis techniques, ranging from voxel-based analysis of anisotropy metrics including tract-based spatial statistics (TBSS) to recently developed methods of statistical analysis addressing issues of resolving complex white matter architecture. We highlight the importance of resolving crossing fibres for tractography and outline several tractography-based techniques, including connectivity-based segmentation, the connectome and tractography mapping. These techniques provide powerful tools for the investigation of brain development and maturation. PMID:22903761

  11. Flightspeed Integral Image Analysis Toolkit

    NASA Technical Reports Server (NTRS)

    Thompson, David R.

    2009-01-01

    The Flightspeed Integral Image Analysis Toolkit (FIIAT) is a C library that provides image analysis functions in a single, portable package. It provides basic low-level filtering, texture analysis, and subwindow descriptors for applications dealing with image interpretation and object recognition. Designed with spaceflight in mind, it addresses ease of integration (minimal external dependencies); fast, real-time operation using integer arithmetic where possible (useful for platforms lacking a dedicated floating-point processor); implementation entirely in C (easily modified); mostly static memory allocation; and 8-bit image data. The basic goal of the FIIAT library is to compute meaningful numerical descriptors for images or rectangular image regions. These n-vectors can then be used directly for novelty detection or pattern recognition, or as a feature space for higher-level pattern recognition tasks. The library provides routines for leveraging training data to derive the descriptors that are most useful for a specific data set. Its runtime algorithms exploit a structure known as the "integral image," a caching method that permits fast summation of values within rectangular regions of an image and thereby facilitates a wide range of fast image-processing functions. This toolkit is applicable to a wide range of autonomous image analysis tasks in the space-flight domain, including novelty detection, object and scene classification, target detection for autonomous instrument placement, and science analysis of geomorphology. It makes real-time texture and pattern recognition possible for platforms with severe computational constraints, providing an order-of-magnitude speed increase over alternative software libraries currently in use by the research community. Commercially, FIIAT can support intelligent video cameras used in intelligent surveillance, and it is also useful for object recognition by robots or other autonomous vehicles.
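    The integral-image structure itself is easy to sketch; the following is a generic Python illustration (not the FIIAT C code): each table entry holds the sum of all pixels above and to the left, so the sum over any axis-aligned rectangle reduces to four table lookups.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[r, c] = sum of img[:r, :c]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) using four table lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 3, 3))  # 5 + 6 + 9 + 10 = 30
```

    Because every rectangular sum is constant-time after a single linear-time pass over the image, box filters and window descriptors can be evaluated at many scales and positions cheaply, which is what makes the structure attractive on computationally constrained platforms.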

  12. A guide to human in vivo microcirculatory flow image analysis.

    PubMed

    Massey, Michael J; Shapiro, Nathan I

    2016-01-01

    Various noninvasive microscopic camera technologies have been used to visualize the sublingual microcirculation in patients. We describe a comprehensive approach to bedside in vivo sublingual microcirculation video image capture and analysis techniques in the human clinical setting. We present a user perspective and guide suitable for clinical researchers and developers interested in the capture and analysis of sublingual microcirculatory flow videos. We review basic differences in the cameras, optics, light sources, operation, and digital image capture. We describe common techniques for image acquisition and discuss aspects of video data management, including data transfer, metadata, and database design and utilization to facilitate the image analysis pipeline. We outline image analysis techniques and reporting including video preprocessing and image quality evaluation. Finally, we propose a framework for future directions in the field of microcirculatory flow videomicroscopy acquisition and analysis. Although automated scoring systems have not been sufficiently robust for widespread clinical or research use to date, we discuss promising innovations that are driving new development. PMID:26861691

  13. Image Analysis in Surgical Pathology.

    PubMed

    Lloyd, Mark C; Monaco, James P; Bui, Marilyn M

    2016-06-01

    Digitization of glass slides of surgical pathology samples facilitates a number of value-added capabilities beyond what a pathologist could previously do with a microscope. Image analysis is one of the most fundamental opportunities to leverage the advantages that digital pathology provides. The ability to quantify aspects of a digital image is an extraordinary opportunity to collect data with exquisite accuracy and reliability. In this review, we describe the history of image analysis in pathology and the present state of technology processes as well as examples of research and clinical use. PMID:27241112

  14. A robust adaptive sampling method for faster acquisition of MR images.

    PubMed

    Vellagoundar, Jaganathan; Machireddy, Ramasubba Reddy

    2015-06-01

    A robust adaptive k-space sampling method is proposed for faster acquisition and reconstruction of MR images. In this method, undersampling patterns are generated from the magnitude profile of a fully acquired 2-D k-space data set, and images are reconstructed using a compressive sampling reconstruction algorithm. Simulation experiments were performed to assess the performance of the proposed method under various signal-to-noise ratio (SNR) levels; it outperforms the non-adaptive variable-density sampling method when the k-space SNR is greater than 10 dB. The method was applied to fully acquired multi-slice raw k-space data and to quality-assurance phantom data, achieving data reduction of up to 60% in the multi-slice imaging data and 75% in the phantom imaging data. The results show that reconstruction accuracy is improved over non-adaptive or conventional variable-density sampling methods. The proposed sampling method is signal-dependent, and the estimation of sampling locations is robust to noise; as a result, it eliminates the need for the mathematical models and parameter tuning required to compute k-space sampling patterns in non-adaptive sampling methods.
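    The magnitude-guided idea can be sketched as follows (a simplified illustration on a synthetic phantom, not the authors' exact algorithm): the sampling probability of each k-space location is made proportional to the magnitude of a fully acquired reference, so the high-energy central region is retained preferentially while the periphery is undersampled.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fully sampled 2-D k-space of a synthetic phantom (stand-in for the reference scan).
image = np.zeros((64, 64))
image[20:44, 24:40] = 1.0
kspace = np.fft.fftshift(np.fft.fft2(image))

# Sampling probability proportional to k-space magnitude, scaled by a nominal
# sample budget and clipped to [0, 1]: high-magnitude locations are kept first.
mag = np.abs(kspace)
prob = np.clip(mag / mag.sum() * (0.40 * kspace.size), 0.0, 1.0)
mask = rng.random(kspace.shape) < prob

undersampled = kspace * mask      # input to a compressed-sensing reconstruction
print(f"kept {mask.mean():.0%} of k-space samples")
```

    The retained samples would then feed a compressed-sensing reconstruction; the sketch stops at mask generation, which is the signal-dependent part of the method.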

  15. SNR-optimized phase-sensitive dual-acquisition turbo spin echo imaging: a fast alternative to FLAIR.

    PubMed

    Lee, Hyunyeol; Park, Jaeseok

    2013-07-01

    Phase-sensitive dual-acquisition single-slab three-dimensional turbo spin echo imaging was recently introduced, producing high-resolution isotropic cerebrospinal-fluid-attenuated brain images without a long inversion-recovery preparation. Despite these advantages, the weighted-averaging-based technique suffers from noise amplification resulting from the different levels of cerebrospinal fluid signal modulation across the two acquisitions. The purpose of this work is to develop a signal-to-noise-ratio-optimized version of the phase-sensitive dual-acquisition single-slab three-dimensional turbo spin echo sequence. Variable refocusing flip angles in the first acquisition are calculated using a three-step prescribed signal evolution, while those in the second acquisition are calculated using a two-step pseudo-steady-state signal transition with a high-flip-angle pseudo-steady state at a later portion of the echo train, balancing the levels of cerebrospinal fluid signal in the two acquisitions. Low-spatial-frequency signals are sampled during the high-flip-angle pseudo-steady state to further suppress noise. Numerical simulations of the Bloch equations were performed to evaluate the signal evolution of brain tissues along the echo train and to optimize imaging parameters. In vivo studies demonstrate that, compared with the conventional phase-sensitive dual-acquisition single-slab three-dimensional turbo spin echo, the proposed optimization yields a 74% increase in apparent signal-to-noise ratio for gray matter and a 32% decrease in imaging time. The proposed method is a potential alternative to conventional fluid-attenuated imaging.

  16. Graph-based retrospective 4D image construction from free-breathing MRI slice acquisitions

    NASA Astrophysics Data System (ADS)

    Tong, Yubing; Udupa, Jayaram K.; Ciesielski, Krzysztof C.; McDonough, Joseph M.; Mong, Andrew; Campbell, Robert M.

    2014-03-01

    4D or dynamic imaging of the thorax has many potential applications [1, 2]. CT and MRI offer sufficient speed to acquire motion information via 4D imaging, but they have different constraints and requirements, and for both modalities prospective and retrospective respiratory gating and tracking techniques have been developed [3, 4]. For pediatric imaging, x-ray radiation is a primary concern, and MRI remains the de facto choice. The pediatric subjects we deal with often suffer from extreme malformations of the chest wall, diaphragm, and/or spine, such that the patient cooperation required by some gating and tracking techniques is difficult to obtain without causing discomfort. Moreover, we are interested in the mechanical function of the thorax in its natural form during tidal breathing. Free-breathing MRI acquisition is therefore the ideal imaging approach for these patients. In our setup, for each coronal (or sagittal) slice position, slice images are acquired at a rate of about 200-300 ms/slice over several natural breathing cycles. This typically produces several thousand slices containing both anatomic and dynamic information; however, it is not trivial to form a consistent, well-defined 4D volume from these data. In this paper, we present a novel graph-based combinatorial optimization solution for constructing the best possible 4D scene from such data entirely in the digital domain. Our method is purely image-based and requires neither breath holding nor external surrogates or instruments to record respiratory motion or tidal volume. Data from both adult and pediatric patients are used to illustrate the performance of the proposed method. Experimental results show that the reconstructed 4D scenes are smooth and consistent spatially and temporally, agreeing with the known shape and motion of the lungs.

  17. Image analysis for DNA sequencing

    NASA Astrophysics Data System (ADS)

    Palaniappan, Kannappan; Huang, Thomas S.

    1991-07-01

    There is a great deal of interest in automating the process of DNA (deoxyribonucleic acid) sequencing to support the analysis of genomic DNA, as in the Human and Mouse Genome projects. In one class of gel-based sequencing protocols, autoradiograph images are generated in the final step and usually require manual interpretation to reconstruct the DNA sequence represented by the image. The need to handle a large volume of sequence information necessitates automating the manual autoradiograph reading step through image analysis, in order to reduce the time required to obtain sequence data and to reduce transcription errors. Various adaptive image enhancement, segmentation, and alignment methods were applied to autoradiograph images. The methods adapt to local characteristics of the image such as noise, background signal, or the presence of edges. Once the two-dimensional data are converted to a set of aligned one-dimensional profiles, waveform analysis is used to determine the location of each band, which represents one nucleotide in the sequence. Different classification strategies, including a rule-based approach, are investigated to map the profile signals, augmented with the original two-dimensional image data as necessary, to textual DNA sequence information.

  18. Basic image analysis and manipulation in ImageJ.

    PubMed

    Hartig, Sean M

    2013-01-01

    Image analysis methods have been developed to provide quantitative assessment of microscopy data. In this unit, basic aspects of image analysis are outlined, including software installation, data import, image processing functions, and analytical tools that can be used to extract information from microscopy data using ImageJ. Step-by-step protocols for analyzing objects in a fluorescence image and extracting information from two-color tissue images collected by bright-field microscopy are included.

  19. Errors from Image Analysis

    SciTech Connect

    Wood, William Monford

    2015-02-23

    This report presents a systematic study of the standard analysis of rod-pinch radiographs for obtaining quantitative measurements of areal mass densities, and makes suggestions for improving the methodology of extracting quantitative information from radiographed objects.

  20. IT system supporting acquisition of image data used in the identification of grasslands

    NASA Astrophysics Data System (ADS)

    Mueller, Wojciech; Nowakowski, Krzysztof; Tomczak, Robert J.; Kujawa, Sebastian; Rudowicz-Nawrocka, Janina; Idziaszek, Przemysław; Zawadzki, Adrian

    2013-07-01

    The authors undertook a complex research project to develop a method for the automatic identification of grasslands using neural analysis of aerial photographs taken from relatively low altitude. Developing such a method requires the collection of a large amount of varied data. To manage these data and to automate their acquisition, an appropriate information system was developed in this study using a variety of commercial and free technologies. Technologies for processing and storing data in the form of raster and vector graphics were pivotal in the development of the research tool.

  1. Automatic analysis of macroarrays images.

    PubMed

    Caridade, C R; Marcal, A S; Mendonca, T; Albuquerque, P; Mendes, M V; Tavares, F

    2010-01-01

    The analysis of dot blot (macroarray) images is currently based on the human identification of positive/negative dots, which is a subjective and time consuming process. This paper presents a system for the automatic analysis of dot blot images, using a pre-defined grid of markers, including a number of ON and OFF controls. The geometric deformations of the input image are corrected, and the individual markers detected, both tasks fully automatically. Based on a previous training stage, the probability for each marker to be ON is established. This information is provided together with quality parameters for training, noise and classification, allowing for a fully automatic evaluation of a dot blot image. PMID:21097139

  2. Data acquisition system to interface between imaging instruments and the network: Applications in electron microscopy and ultrasound

    NASA Astrophysics Data System (ADS)

    Kapp, Oscar H.; Ruan, Shengyang

    1997-09-01

    A system for data acquisition for imaging instruments utilizing a computer network was created. Two versions of this system, both with the same basic design, were separately installed in conjunction with an electron microscope and a clinical ultrasound device. They serve the functions of data acquisition and data server to manage and to transfer images from these instruments. The virtues of this system are its simplicity of design, universality, cost effectiveness, ease of management, security for data, and instrument protection. This system, with little or no modification, may be used in conjunction with a broad range of data acquiring instruments in scientific, industrial, and medical laboratories.

  3. Three-dimensional ultrasonic imaging of concrete elements using different SAFT data acquisition and processing schemes

    SciTech Connect

    Schickert, Martin

    2015-03-31

    Ultrasonic testing systems using transducer arrays and SAFT (Synthetic Aperture Focusing Technique) reconstruction allow imaging of the internal structure of concrete elements. With one-sided access, three-dimensional representations of the concrete volume can be reconstructed in relatively great detail, permitting detection and localization of objects such as construction elements, built-in components, and flaws. Different SAFT data acquisition and processing schemes can be employed, which differ in their measuring and computational effort and in the reconstruction result. In this contribution, two methods are compared with respect to their principle of operation and their imaging characteristics. The first is the conventional single-channel SAFT algorithm, implemented using a virtual transducer that is moved within a transducer array by electronic switching. The second is the Combinational SAFT algorithm (C-SAFT), also named Sampling Phased Array (SPA) or Full Matrix Capture/Total Focusing Method (TFM/FMC), which is realized using a combination of virtual transducers within a transducer array. Five variants of these two methods are compared by means of measurements on test specimens containing objects typical of concrete elements. The automated SAFT imaging system FLEXUS, which includes a three-axis scanner with a 1.0 m × 0.8 m scan range and an electronically switched ultrasonic array of 48 transducers in 16 groups, is used for the measurements. On the basis of two-dimensional and three-dimensional reconstructed images, qualitative and some quantitative results for image resolution, signal-to-noise ratio, measurement time, and computational effort are discussed in view of the application characteristics of the SAFT variants.
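    The delay-and-sum principle underlying all SAFT variants can be sketched with a synthetic single-scatterer example (the wave speed, geometry, and sampling below are invented; this is not the FLEXUS implementation): each image point accumulates, over all transducer positions, the A-scan sample at the corresponding round-trip travel time.

```python
import numpy as np

c = 4000.0            # assumed longitudinal wave speed in concrete, m/s
fs = 2.0e6            # sampling rate, Hz
tx_x = np.linspace(0.0, 0.2, 16)   # transducer positions on the surface, m
scatterer = (0.10, 0.15)           # (x, z) of a point reflector, m

# Synthetic pulse-echo A-scans: one impulse per transducer at the round-trip time.
n_samples = 600
ascans = np.zeros((tx_x.size, n_samples))
for i, x in enumerate(tx_x):
    t = 2.0 * np.hypot(x - scatterer[0], scatterer[1]) / c
    ascans[i, int(round(t * fs))] = 1.0

# Delay-and-sum (SAFT) reconstruction over an image grid.
xs = np.linspace(0.0, 0.2, 81)
zs = np.linspace(0.05, 0.25, 81)
image = np.zeros((zs.size, xs.size))
for i, x in enumerate(tx_x):
    for zi, z in enumerate(zs):
        for xi, xp in enumerate(xs):
            idx = int(round(2.0 * np.hypot(x - xp, z) / c * fs))
            if idx < n_samples:
                image[zi, xi] += ascans[i, idx]

pk_z, pk_x = np.unravel_index(np.argmax(image), image.shape)
print(xs[pk_x], zs[pk_z])   # peak near the true scatterer position (0.10, 0.15)
```

    Single-channel SAFT and C-SAFT/FMC differ mainly in which transmit-receive pairs contribute to the sum; the focusing step itself is the same coherent summation shown here.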

  4. Acquisition of priori tissue optical structure based on non-rigid image registration

    NASA Astrophysics Data System (ADS)

    Wan, Wenbo; Li, Jiao; Liu, Lingling; Wang, Yihan; Zhang, Yan; Gao, Feng

    2015-03-01

    Shape-parameterized diffuse optical tomography (DOT), based on the prior assumption that the optical properties are uniformly distributed within each region, has proven effective for reconstructing the optical heterogeneities of complex biological tissue. The prior tissue optical structure can be acquired with the assistance of anatomical imaging methods such as X-ray computed tomography (XCT), which, however, suffers from low contrast for soft tissues comprising regions of differing optical characteristics. For the mouse model, a feasible strategy for acquiring the prior tissue optical structure is proposed based on a non-rigid image registration algorithm. During registration, a mapping matrix is calculated to elastically align the XCT image of a reference mouse to the XCT image of the target mouse. Applying this matrix to the reference atlas, a detailed mesh of the organs/tissues of the reference mouse, yields a registered atlas serving as the anatomical structure of the target mouse. By assigning optical parameters published in the literature to each organ of the corresponding anatomical structure, the optical structure of the target organism is obtained as prior information for the DOT reconstruction algorithm. When the non-rigid registration algorithm was applied to a target mouse transformed from the reference mouse, the minimum correlation coefficient improved from 0.2781 (before registration) to 0.9032 (after fine registration), and the maximum average Euclidean distance decreased from 12.80 mm (before registration) to 1.02 mm (after fine registration), verifying the effectiveness of the algorithm.

  5. Optimized acquisition time for x-ray fluorescence imaging of gold nanoparticles: a preliminary study using photon counting detector

    NASA Astrophysics Data System (ADS)

    Ren, Liqiang; Wu, Di; Li, Yuhua; Chen, Wei R.; Zheng, Bin; Liu, Hong

    2016-03-01

    X-ray fluorescence (XRF) is a promising spectroscopic technique for characterizing imaging contrast agents with high atomic numbers (Z), such as gold nanoparticles (GNPs), inside small objects. Its use in biomedical applications, however, has largely been limited to experimental research because of long data acquisition times. The objectives of this study are to apply a photon counting detector array to XRF imaging and to determine an optimized XRF data acquisition time at which the acquired XRF image is of acceptable quality, allowing the maximum reduction in radiation dose. A prototype laboratory XRF imaging configuration consisting of a pencil-beam X-ray source and a photon counting detector array (1 × 64 pixels) is employed to acquire XRF images by exciting prepared GNP/water solutions. To analyze the improvement in signal-to-noise ratio (SNR) with increasing exposure time, all XRF photons within the energy range of 63-76 keV, which includes the two Kα gold fluorescence peaks, are collected for 1 s, 2 s, 3 s, and so on, up to 200 s. The optimized XRF data acquisition time for imaging a given GNP solution is defined as the moment when the acquired XRF image reaches an SNR of 20 dB, which corresponds to acceptable image quality.
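    The stopping rule described above, accumulating counts until the image reaches a 20 dB quality threshold, can be sketched generically (the count rates below are invented for illustration and are not the paper's measured values):

```python
import numpy as np

# Assumed mean count rates (counts/s); both values are invented for illustration.
signal_rate, bg_rate = 40.0, 4.0

def expected_snr_db(t):
    """Expected SNR (in dB) of the net XRF counts after t seconds, assuming
    Poisson statistics: shot noise grows with the total collected counts."""
    net = signal_rate * t
    noise = np.sqrt((signal_rate + bg_rate) * t)
    return 20.0 * np.log10(net / noise)

# Shortest acquisition time whose expected image quality reaches 20 dB.
t_opt = next(t for t in range(1, 201) if expected_snr_db(t) >= 20.0)
print(t_opt, "seconds")
```

    Because shot-noise-limited SNR grows as the square root of the exposure time, the threshold crossing gives a well-defined minimum acquisition time for each GNP concentration.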

  6. In-situ Image Acquisition Strategy on Asteroid Surface by MINERVA Rover in HAYABUSA Mission

    NASA Astrophysics Data System (ADS)

    Yoshimitsu, T.; Sasaki, S.; Yanagisawa, M.

    The Institute of Space and Astronautical Science (ISAS) launched the engineering test spacecraft ``HAYABUSA'' (formerly called ``MUSES-C'') toward the near-Earth asteroid ``ITOKAWA (1998SF36)'' on May 9, 2003. HAYABUSA will reach the target asteroid after two years of interplanetary cruise and will descend onto the asteroid surface in 2005 to acquire fragments, which will be brought back to Earth in 2007. A tiny rover called ``MINERVA'', the first asteroid rover in the world, is aboard the HAYABUSA spacecraft. It will be deployed onto the surface immediately before the spacecraft touches the asteroid to acquire fragments. It will then move autonomously over the surface by hopping for a couple of days, and the data obtained at multiple sites will be transmitted to Earth via the mother spacecraft. Small cameras and thermometers are installed in the rover. This paper describes the image acquisition strategy of the cameras installed in the rover.

  7. A digital receiver module with direct data acquisition for magnetic resonance imaging systems

    NASA Astrophysics Data System (ADS)

    Tang, Weinan; Sun, Hongyu; Wang, Weimin

    2012-10-01

    A digital receiver module for magnetic resonance imaging (MRI) with detailed hardware implementations is presented. The module is based on a direct sampling scheme using the latest mixed-signal circuit design techniques. A single field-programmable gate array chip is employed to perform software-based digital down conversion for radio frequency signals. The modular architecture of the receiver allows multiple acquisition channels to be implemented on a highly integrated printed circuit board. To maintain the phase coherence of the receiver and the exciter in the context of direct sampling, an effective phase synchronization method was proposed to achieve a phase deviation as small as 0.09°. The performance of the described receiver module was verified in the experiments for both low- and high-field (0.5 T and 1.5 T) MRI scanners and was compared to a modern commercial MRI receiver system.
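    Software digital down conversion of the kind the FPGA performs can be sketched in a few lines (a generic illustration; the carrier frequency, sampling rate, and echo shape are invented): the sampled RF is mixed with a complex numerically controlled oscillator (NCO) and low-pass filtered to recover the baseband envelope.

```python
import numpy as np

fs = 10.0e6          # ADC sampling rate (assumed)
f_rf = 2.1e6         # carrier frequency (assumed)
t = np.arange(4096) / fs

# Simulated MR echo: a Gaussian envelope on the RF carrier.
envelope = np.exp(-0.5 * ((t - 2.0e-4) / 5.0e-5) ** 2)
rf = envelope * np.cos(2 * np.pi * f_rf * t)

# Digital down conversion: mix with a complex NCO, then low-pass filter.
nco = np.exp(-2j * np.pi * f_rf * t)
mixed = rf * nco                      # baseband term + image at 2*f_rf
kernel = np.ones(64) / 64             # crude moving-average low-pass FIR
baseband = np.convolve(mixed, kernel, mode="same")

# |baseband| approximates envelope/2 (the real mixer halves the amplitude).
print(np.max(np.abs(baseband)))       # ~0.5
```

    Phase coherence between receiver and exciter matters precisely because the NCO phase enters the recovered baseband signal directly; a fixed phase offset between channels shows up as a constant rotation of the complex data.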

  8. Recovery of phase inconsistencies in continuously moving table extended field of view magnetic resonance imaging acquisitions.

    PubMed

    Kruger, David G; Riederer, Stephen J; Rossman, Phillip J; Mostardi, Petrice M; Madhuranthakam, Ananth J; Hu, Houchun H

    2005-09-01

    MR images formed using extended FOV continuously moving table data acquisition can have signal falloff and loss of lateral spatial resolution at localized, periodic positions along the direction of table motion. In this work we identify the origin of these artifacts and provide a means for correction. The artifacts are due to a mismatch of the phase of signals acquired from contiguous sampling fields of view and are most pronounced when the central k-space views are being sampled. Correction can be performed using the phase information from a periodically sampled central view to adjust the phase of all other views of that view cycle, making the net phase uniform across each axial plane. Results from experimental phantom and contrast-enhanced peripheral MRA studies show that the correction technique substantially eliminates the artifact for a variety of phase encode orders.
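    The correction idea, reusing the phase of a periodically re-acquired central view to make the phase uniform across each view cycle, can be sketched in a simplified model (a constant phase offset per cycle and fully synthetic data; the actual moving-table correction is more involved):

```python
import numpy as np

rng = np.random.default_rng(3)
n_cycles, views_per_cycle, n_read = 8, 16, 64

# Synthetic complex k-space views; every cycle nominally re-acquires the same set.
true_views = (rng.normal(size=(views_per_cycle, n_read))
              + 1j * rng.normal(size=(views_per_cycle, n_read)))
data = np.tile(true_views, (n_cycles, 1, 1)).astype(complex)

# Each sampling field of view picks up an unknown constant phase offset.
offsets = np.exp(1j * rng.uniform(-np.pi, np.pi, n_cycles))
data *= offsets[:, None, None]

# View 0 of each cycle plays the role of the periodically sampled central view:
# its phase relative to the first cycle estimates that cycle's offset.
ref = data[:, 0, :]
est = np.array([np.exp(1j * np.angle(np.vdot(ref[0], ref[c])))
                for c in range(n_cycles)])
corrected = data * np.conj(est)[:, None, None]

# After correction, all cycles agree (up to floating-point error).
print(np.max(np.abs(corrected - corrected[0])))
```

    In the real acquisition the offsets vary along the table-motion direction and the correction is applied per view cycle, but the core operation is the same: multiply every view by the conjugate of the phase estimated from the shared central view.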

  9. A digital receiver module with direct data acquisition for magnetic resonance imaging systems.

    PubMed

    Tang, Weinan; Sun, Hongyu; Wang, Weimin

    2012-10-01

    A digital receiver module for magnetic resonance imaging (MRI) with detailed hardware implementations is presented. The module is based on a direct sampling scheme using the latest mixed-signal circuit design techniques. A single field-programmable gate array chip is employed to perform software-based digital down conversion for radio frequency signals. The modular architecture of the receiver allows multiple acquisition channels to be implemented on a highly integrated printed circuit board. To maintain the phase coherence of the receiver and the exciter in the context of direct sampling, an effective phase synchronization method was proposed to achieve a phase deviation as small as 0.09°. The performance of the described receiver module was verified in the experiments for both low- and high-field (0.5 T and 1.5 T) MRI scanners and was compared to a modern commercial MRI receiver system.

  10. Thermal Imaging of the Waccasassa Bay Preserve: Image Acquisition and Processing

    USGS Publications Warehouse

    Raabe, Ellen A.; Bialkowska-Jelinska, Elzbieta

    2010-01-01

    Thermal infrared (TIR) imagery was acquired along coastal Levy County, Florida, in March 2009 with the goal of identifying groundwater-discharge locations in Waccasassa Bay Preserve State Park (WBPSP). Groundwater discharge is thermally distinct in winter when Floridan aquifer temperature, 71-72 degrees F, contrasts with the surrounding cold surface waters. Calibrated imagery was analyzed to assess temperature anomalies and related thermal traces. The influence of warm Gulf water and image artifacts on small features was successfully constrained by image evaluation in three separate zones: Creeks, Bay, and Gulf. Four levels of significant water-temperature anomalies were identified, and 488 sites of interest were mapped. Among the sites identified, at least 80 were determined to be associated with image artifacts and human activity, such as excavation pits and the Florida Barge Canal. Sites of interest were evaluated for geographic concentration and isolation. High site densities, indicating interconnectivity and prevailing flow, were located at Corrigan Reef, No. 4 Channel, Winzy Creek, Cow Creek, Withlacoochee River, and at excavation sites. In other areas, low to moderate site density indicates the presence of independent vents and unique flow paths. A directional distribution assessment of natural seep features produced a northwest trend closely matching the strike direction of regional faults. Naturally occurring seeps were located in karst ponds and tidal creeks, and several submerged sites were detected in Waccasassa River and Bay, representing the first documentation of submarine vents in the Waccasassa region. Drought conditions throughout the region placed constraints on positive feature identification. Low discharge or displacement by landward movement of saltwater may have reduced or reversed flow during this season. Approximately two-thirds of seep locations in the overlap between 2009 and 2005 TIR night imagery were positively re-identified in 2009

  11. Multispectral Imaging Broadens Cellular Analysis

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Amnis Corporation, a Seattle-based biotechnology company, developed ImageStream to produce sensitive fluorescence images of cells in flow. The company responded to an SBIR solicitation from Ames Research Center, and proposed to evaluate several methods of extending the depth of field for its ImageStream system and implement the best as an upgrade to its commercial products. This would allow users to view whole cells at the same time, rather than just one section of each cell. Through Phase I and II SBIR contracts, Ames provided Amnis the funding the company needed to develop this extended functionality. For NASA, the resulting high-speed image flow cytometry process made its way into Medusa, a life-detection instrument built to collect, store, and analyze sample organisms from erupting hydrothermal vents, and has the potential to benefit space flight health monitoring. On the commercial end, Amnis has implemented the process in ImageStream, combining high-resolution microscopy and flow cytometry in a single instrument, giving researchers the power to conduct quantitative analyses of individual cells and cell populations at the same time, in the same experiment. ImageStream is also built for many other applications, including cell signaling and pathway analysis; classification and characterization of peripheral blood mononuclear cell populations; quantitative morphology; apoptosis (cell death) assays; gene expression analysis; analysis of cell conjugates; molecular distribution; and receptor mapping and distribution.

  12. From chaos to order: The MicroStar data acquisition and analysis system

    SciTech Connect

    Rathbun, W.

    1991-03-01

    The MicroStar data acquisition and analysis software consists of several independent processes, although to the user it looks like a single program. These programs handle such functions as data acquisition (if any), data analysis, interactive display and data manipulation, and process monitoring and control. An overview of these processes and functions can be found in the paper as well as a more detailed description and users' guide.

  13. Lock Acquisition and Sensitivity Analysis of Advanced LIGO Interferometers

    NASA Astrophysics Data System (ADS)

    Martynov, Denis

    The Laser Interferometer Gravitational-Wave Observatory (LIGO) consists of two complex large-scale laser interferometers designed for direct detection of gravitational waves from distant astrophysical sources in the frequency range 10 Hz - 5 kHz. Direct detection of space-time ripples will support Einstein's general theory of relativity and provide invaluable information and new insight into the physics of the Universe. The initial phase of LIGO started in 2002, and since then data were collected during six science runs. Instrument sensitivity improved from run to run through the effort of the commissioning team, and initial LIGO reached its design sensitivity during the last science run, which ended in October 2010. In parallel with commissioning and data analysis on the initial detector, the LIGO group worked on research and development of the next generation of detectors. The major instrument upgrade from initial to Advanced LIGO started in 2010 and lasted until 2014. This thesis describes results of commissioning work done at the LIGO Livingston site from 2013 until 2015, in parallel with and after the installation of the instrument. It also discusses new techniques and tools developed at the 40 m prototype, including adaptive filtering, estimation of quantization noise in digital filters, and design of isolation kits for ground seismometers. The first part of this thesis is devoted to methods for bringing the interferometer into the linear regime, where collection of data becomes possible. The states of the longitudinal and angular controls of the interferometer degrees of freedom during the lock acquisition process and in the low-noise configuration are discussed in detail. Once the interferometer is locked and transitioned to the low-noise regime, the instrument produces astrophysics data that must be calibrated to units of meters or strain. The second part of this thesis describes the online calibration technique set up in both observatories to monitor the quality of the collected data in

  14. Quantitative histogram analysis of images

    NASA Astrophysics Data System (ADS)

    Holub, Oliver; Ferreira, Sérgio T.

    2006-11-01

    A routine for histogram analysis of images has been written in the object-oriented, graphical development environment LabVIEW. The program converts an RGB bitmap image into an intensity-linear greyscale image according to selectable conversion coefficients. This greyscale image is subsequently analysed by plots of the intensity histogram and probability distribution of brightness, and by calculation of various parameters, including average brightness, standard deviation, variance, minimal and maximal brightness, mode, skewness and kurtosis of the histogram and the median of the probability distribution. The program allows interactive selection of specific regions of interest (ROI) in the image and definition of lower and upper threshold levels (e.g., to permit the removal of a constant background signal). The results of the analysis of multiple images can be conveniently saved and exported for plotting in other programs, which allows fast analysis of relatively large sets of image data. The program file accompanies this manuscript together with a detailed description of two application examples: the analysis of fluorescence microscopy images, specifically of tau-immunofluorescence in primary cultures of rat cortical and hippocampal neurons, and the quantification of protein bands by Western blot. The possibilities and limitations of this kind of analysis are discussed. Program summary: Title of program: HAWGC. Catalogue identifier: ADXG_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXG_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computers: Mobile Intel Pentium III, AMD Duron. Installations: No installation necessary; executable file together with necessary files for the LabVIEW Run-time engine. Operating systems under which the program has been tested: Windows ME/2000/XP. Programming language used: LabVIEW 7.0. Memory required to execute with typical data: ~16 MB for starting and ~160 MB used for
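    The histogram statistics computed by HAWGC can be reproduced outside LabVIEW. Below is a minimal Python/NumPy sketch (not the program's own code) of the same pipeline: weighted RGB-to-greyscale conversion, optional lower/upper thresholds, and the moment-based descriptors; the function name and default coefficients are illustrative.

```python
import numpy as np

def grayscale_stats(rgb, coeffs=(0.299, 0.587, 0.114), lo=None, hi=None):
    """Convert an RGB array to greyscale with selectable coefficients and
    return histogram statistics. `lo`/`hi` mirror the program's lower and
    upper threshold levels (e.g. for background removal)."""
    gray = rgb.astype(float) @ np.asarray(coeffs)   # contract the colour axis
    vals = gray.ravel()
    if lo is not None:
        vals = vals[vals >= lo]   # drop pixels below the lower threshold
    if hi is not None:
        vals = vals[vals <= hi]   # drop pixels above the upper threshold
    mean = vals.mean()
    std = vals.std()
    skew = ((vals - mean) ** 3).mean() / std ** 3
    kurt = ((vals - mean) ** 4).mean() / std ** 4 - 3.0
    return {"mean": mean, "std": std, "var": vals.var(),
            "min": vals.min(), "max": vals.max(),
            "median": np.median(vals), "skewness": skew, "kurtosis": kurt}
```

    Applying the same routine to every image in a directory gives the fast batch analysis the abstract describes.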

  15. Calibration of a flood inundation model using a SAR image: influence of acquisition time

    NASA Astrophysics Data System (ADS)

    Van Wesemael, Alexandra; Gobeyn, Sacha; Neal, Jeffrey; Lievens, Hans; Van Eerdenbrugh, Katrien; De Vleeschouwer, Niels; Schumann, Guy; Vernieuwe, Hilde; Di Baldassarre, Giuliano; De Baets, Bernard; Bates, Paul; Verhoest, Niko

    2016-04-01

    Flood risk management has long searched for effective prediction approaches, and the calibration of flood inundation models is continuously being improved. In practice, this calibration process consists of finding the optimal roughness parameters, both channel and floodplain Manning coefficients, since these values considerably influence the flood extent in a catchment. In addition, Synthetic Aperture Radar (SAR) images have proven to be a very useful tool in calibrating the flood extent: these images can distinguish between wet (flooded) and dry (non-flooded) pixels through the intensity of backscattered radio waves. To date, however, a satellite overpass often occurs only once during a flood event. Therefore, this study is specifically concerned with the effect of the timing of the SAR data acquisition on calibration results. In order to model the flood extent, the raster-based inundation model LISFLOOD-FP is used together with a high-resolution synthetic aperture radar image (ERS-2 SAR) of a flood event of the river Dee, Wales, in December 2006. As only one satellite image of the considered case study is available, a synthetic framework is implemented to generate a time series of SAR observations. These synthetic observations are then used to calibrate the model at different time instants. In doing so, the sensitivity of the model output to the channel and floodplain Manning coefficients is studied through time. The results suggest a clear difference in the spatial variability of water held within the floodplain, and these differences vary through time. Calibration against satellite flood observations obtained from the rising or receding limb would generally lead to more reliable results than calibration against near-peak-flow observations.
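    A common way to score such calibrations is a measure-of-fit between binary wet/dry maps. The sketch below (an illustrative implementation, not the study's code; the `calibrate` helper and its dictionary of simulations are hypothetical) uses the widely applied statistic F = A/(A+B+C), where A counts pixels flooded in both maps and B, C count pixels flooded in only one of them.

```python
import numpy as np

def flood_fit(observed, modelled):
    """Measure of fit F = A / (A + B + C) between two binary wet/dry maps:
    A = wet in both, B = wet only in the model, C = wet only in the
    observation. F = 1 means perfect overlap."""
    obs = np.asarray(observed, bool)
    mod = np.asarray(modelled, bool)
    a = np.sum(obs & mod)
    b = np.sum(~obs & mod)
    c = np.sum(obs & ~mod)
    return a / float(a + b + c)

def calibrate(observed, simulations):
    """Pick the (channel, floodplain) Manning pair whose simulated extent
    best matches the SAR-derived extent. `simulations` maps parameter
    pairs to binary flood maps (a hypothetical structure)."""
    return max(simulations, key=lambda p: flood_fit(observed, simulations[p]))
```

    Running `calibrate` against synthetic SAR observations taken at different times through the event is exactly the kind of experiment the abstract describes.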

  16. Signal displacement in spiral-in acquisitions: simulations and implications for imaging in SFG regions.

    PubMed

    Brewer, Kimberly D; Rioux, James A; Klassen, Martyn; Bowen, Chris V; Beyea, Steven D

    2012-07-01

    Susceptibility field gradients (SFGs) cause problems for functional magnetic resonance imaging (fMRI) in regions like the orbital frontal lobes, leading to signal loss and image artifacts (signal displacement and "pile-up"). Pulse sequences with spiral-in k-space trajectories are often used when acquiring fMRI data in SFG regions such as the inferior/medial temporal cortex because they are believed to have improved signal recovery and decreased signal displacement. Previously postulated theories offer differing explanations for why spiral-in appears to perform better than spiral-out; however, it is clear that multiple mechanisms operate in parallel. This study explores differences between spiral-in and spiral-out images using human and phantom empirical data, as well as simulations consistent with the phantom model. Using image simulations, the displacement of signal was characterized using point spread functions (PSFs) and target maps, the latter of which are conceptually inverse PSFs describing which spatial locations contribute signal to a particular voxel. The magnitude of both PSFs and target maps was found to be identical for spiral-out and spiral-in acquisitions, with signal in target maps being displaced from distant regions in both cases. However, differences in the phase of the signal displacement patterns, which consequently change the intervoxel phase coherence, were found to be a significant mechanism explaining the differences between the spiral sequences. The results demonstrate that spiral-in trajectories do preserve more total signal in SFG regions than spiral-out; however, spiral-in does not in fact exhibit decreased signal displacement. Given that this signal can be displaced by significant distances, its recovery may not be preferable for all fMRI applications.

  17. Roughness Estimation from Point Clouds - A Comparison of Terrestrial Laser Scanning and Image Matching by Unmanned Aerial Vehicle Acquisitions

    NASA Astrophysics Data System (ADS)

    Rutzinger, Martin; Bremer, Magnus; Ragg, Hansjörg

    2013-04-01

    Recently, terrestrial laser scanning (TLS) and matching of images acquired by unmanned aerial vehicles (UAVs) have become operationally used for 3D geodata acquisition in geoscience applications. However, the two systems cover different application domains in terms of acquisition conditions and data properties, i.e. accuracy and line of sight. In this study we investigate the major differences between the two platforms for terrain roughness estimation. Terrain roughness is an important input for various applications such as morphometry studies, geomorphological mapping, and natural process modeling (e.g. rockfall, avalanche, and hydraulic modeling). Data were collected simultaneously by TLS using an Optech ILRIS3D and a rotary UAV (an octocopter from twins.nrn) for a 900 m² test site located in a riverbed in Tyrol, Austria (Judenbach, Mieming). The TLS point cloud was acquired from three scan positions, which were registered using an iterative closest point algorithm and a target-based referencing approach. For registration, geometric targets (spheres) with a diameter of 20 cm were used; these targets were measured with dGPS for absolute georeferencing. The TLS point cloud has an average point density of 19,000 pts/m², which represents a point spacing of about 5 mm. 15 images were acquired by the UAV at a height of 20 m using a calibrated camera with a focal length of 18.3 mm. A 3D point cloud containing RGB attributes was derived using the APERO/MICMAC software, by a direct georeferencing approach based on the aircraft IMU data. The point cloud was finally co-registered with the TLS data to guarantee optimal preparation for the analysis. The UAV point cloud has an average point density of 17,500 pts/m², which represents a point spacing of 7.5 mm. After registration and georeferencing, the level of detail of the roughness representation in both point clouds has been compared considering elevation differences, roughness and representation of different grain
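    Terrain roughness from either point cloud can be estimated per grid cell. One common definition (an illustrative choice, not necessarily the exact metric used in this study) is the standard deviation of elevation residuals after removing a best-fit plane:

```python
import numpy as np

def cell_roughness(points, cell=0.25):
    """Grid an (n, 3) point cloud in XY and, per cell, report the standard
    deviation of elevation after removing a least-squares plane -- one
    common terrain-roughness definition. `cell` is the cell size in the
    point cloud's units (illustrative default)."""
    keys = np.floor(points[:, :2] / cell).astype(int)
    rough = {}
    for key in {tuple(k) for k in keys}:
        mask = np.all(keys == key, axis=1)
        p = points[mask]
        if len(p) < 4:          # need enough points to fit a plane
            continue
        # least-squares plane z = a*x + b*y + c
        A = np.c_[p[:, 0], p[:, 1], np.ones(len(p))]
        coef, *_ = np.linalg.lstsq(A, p[:, 2], rcond=None)
        resid = p[:, 2] - A @ coef
        rough[key] = resid.std()
    return rough
```

    Comparing the per-cell values from the TLS and UAV clouds (with their ~5 mm vs ~7.5 mm point spacings) is one way to quantify how much roughness detail each platform preserves.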

  18. Functional optoacoustic imaging of moving objects using microsecond-delay acquisition of multispectral three-dimensional tomographic data.

    PubMed

    Deán-Ben, Xosé Luís; Bay, Erwin; Razansky, Daniel

    2014-07-30

    The breakthrough capacity of optoacoustics for three-dimensional visualization of dynamic events in real time has recently been showcased. Yet, efficient spectral unmixing for functional imaging of entire volumetric regions is significantly challenged by motion artifacts in concurrent acquisitions at multiple wavelengths. Here, we introduce a method for simultaneous acquisition of multispectral volumetric datasets by introducing a microsecond-level delay between excitation laser pulses at different wavelengths. Robust performance is demonstrated by real-time volumetric visualization of functional blood parameters in human vasculature with a handheld matrix array optoacoustic probe. This approach can avert image artifacts imposed by velocities greater than 2 m/s; thus, it not only facilitates imaging influenced by respiratory, cardiac or other fast intrinsic movements in living tissues, but can also achieve artifact-free imaging in the presence of more significant motion, e.g. abrupt displacements during handheld-mode operation in a clinical environment.
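    Once per-wavelength volumes are acquired without motion blur, functional blood parameters follow from linear spectral unmixing. A minimal sketch, with made-up extinction coefficients (real values would come from tabulated HbO2/Hb spectra at the wavelengths actually interleaved):

```python
import numpy as np

# Illustrative 2x2 extinction matrix: rows = wavelengths, columns = [HbO2, Hb].
# These numbers are placeholders, not tabulated values.
E = np.array([[1.0, 3.0],
              [2.5, 0.8]])

def unmix(amplitudes):
    """Per-voxel unmixing: solve E @ c = a for chromophore concentrations c.
    `amplitudes` has shape (n_voxels, n_wavelengths)."""
    return np.linalg.solve(E, np.asarray(amplitudes, float).T).T

def so2(conc):
    """Oxygen saturation sO2 = [HbO2] / ([HbO2] + [Hb])."""
    return conc[:, 0] / conc.sum(axis=1)
```

    With more than two wavelengths the square solve would become a least-squares fit, but the principle is the same.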

  19. Age of acquisition and imageability ratings for a large set of words, including verbs and function words.

    PubMed

    Bird, H; Franklin, S; Howard, D

    2001-02-01

    Age of acquisition and imageability ratings were collected for 2,645 words, including 892 verbs and 213 function words. Words that were ambiguous as to grammatical category were disambiguated: Verbs were shown in their infinitival form, and nouns (where appropriate) were preceded by the indefinite article (such as to crack and a crack). Subjects were speakers of British English selected from a wide age range, so that differences in the responses across age groups could be compared. Within the subset of early acquired noun/verb homonyms, the verb forms were rated as later acquired than the nouns, and the verb homonyms of high-imageability nouns were rated as significantly less imageable than their noun counterparts. A small number of words received significantly earlier or later age of acquisition ratings when the 20-40 years and 50-80 years age groups were compared. These tend to comprise words that have come to be used more frequently in recent years (either through technological advances or social change), or those that have fallen out of common usage. Regression analyses showed that although word length, familiarity, and concreteness make independent contributions to the age of acquisition measure, frequency and imageability are the most important predictors of rated age of acquisition.

  20. Measurement of eye lens dose for Varian On-Board Imaging with different cone-beam computed tomography acquisition techniques.

    PubMed

    Deshpande, Sudesh; Dhote, Deepak; Thakur, Kalpna; Pawar, Amol; Kumar, Rajesh; Kumar, Munish; Kulkarni, M S; Sharma, S D; Kannan, V

    2016-01-01

    The objective of this work was to measure patient eye lens dose for different cone-beam computed tomography (CBCT) acquisition protocols of Varian's On-Board Imaging (OBI) system using optically stimulated luminescence dosimeters (OSLDs), and to study the variation in eye lens dose with patient geometry and the distance from the isocenter to the eye lens. During the measurements, an OSLD was placed on the patient between the eyebrows, in line with the nose, during CBCT image acquisition. The eye lens dose measurements were carried out for three cone-beam acquisition protocols of the Varian OBI: standard-dose head (SDH), low-dose head (LDH), and high-quality head (HQH). Measured doses were correlated with patient geometry and the distance between the isocenter and the eye lens. Measured eye lens doses for the SDH and HQH protocols were in the ranges of 1.8-3.2 mGy and 4.5-9.9 mGy, respectively, whereas for the LDH protocol they were in the range of 0.3-0.7 mGy. The measured data indicate that the eye lens dose depends on the selected imaging protocol; it does not depend on patient geometry but depends strongly on the distance between the eye lens and the treatment field isocenter. The undoubted advantages of an imaging system should not be counterbalanced by inappropriate selection of the imaging protocol, especially a very intense one. PMID:27651564

  1. Quantum dot imaging in the second near-infrared optical window: studies on reflectance fluorescence imaging depths by effective fluence rate and multiple image acquisition

    NASA Astrophysics Data System (ADS)

    Jung, Yebin; Jeong, Sanghwa; Nayoun, Won; Ahn, Boeun; Kwag, Jungheon; Geol Kim, Sang; Kim, Sungjee

    2015-04-01

    Quantum dot (QD) imaging capability was investigated in terms of imaging depth in the near-infrared second optical window (SOW; 1000 to 1400 nm), using time-modulated pulsed laser excitations to control the effective fluence rate. Various media, such as liquid phantoms, tissues, and in vivo small animals, were used, and the imaging depths were compared with our predicted values. The QD imaging depth under excitation with a continuous 20 mW/cm² laser was determined to be 10.3 mm for a 2 wt% hemoglobin phantom medium and 5.85 mm for a 1 wt% intralipid phantom; both were extended by more than two times on increasing the effective fluence rate to 2000 mW/cm². Bovine liver and porcine skin tissues showed similar enhancement in contrast-to-noise ratio (CNR) values. A QD sample was inserted into the abdomen of a mouse. With a higher effective fluence rate, the CNR increased more than twofold and the QD sample, which was completely undetectable under continuous excitation, became clearly visualized. Multiple acquisitions of QD images followed by pixel-by-pixel averaging were performed to overcome the thermal noise of the detector in the SOW, which yielded significant enhancement in imaging capability, showing up to a 1.5-fold increase in the CNR.
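    The CNR gain from multiple acquisitions can be illustrated numerically: pixel-by-pixel averaging of N frames suppresses uncorrelated detector noise by roughly √N. A toy NumPy sketch with synthetic frames (all numbers illustrative):

```python
import numpy as np

def cnr(image, sig_mask, bg_mask):
    """Contrast-to-noise ratio: (mean signal - mean background) / background std."""
    return (image[sig_mask].mean() - image[bg_mask].mean()) / image[bg_mask].std()

rng = np.random.default_rng(7)
truth = np.zeros((64, 64))
truth[24:40, 24:40] = 10.0                    # synthetic QD inclusion
sig_mask = truth > 0
bg_mask = ~sig_mask

# 16 frames, each corrupted by independent "thermal" noise
frames = truth + rng.normal(0.0, 5.0, size=(16, 64, 64))
cnr_single = cnr(frames[0], sig_mask, bg_mask)
cnr_avg = cnr(frames.mean(axis=0), sig_mask, bg_mask)   # pixel-by-pixel average
```

    With 16 frames the averaged image's CNR should be about √16 = 4 times that of a single frame.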

  2. Challenges and opportunities for quantifying roots and rhizosphere interactions through imaging and image analysis.

    PubMed

    Downie, H F; Adu, M O; Schmidt, S; Otten, W; Dupuy, L X; White, P J; Valentine, T A

    2015-07-01

    The morphology of roots and root systems influences the efficiency with which plants acquire nutrients and water, anchor themselves and provide stability to the surrounding soil. Plant genotype and the biotic and abiotic environment significantly influence root morphology, growth and ultimately crop yield. The challenge for researchers interested in phenotyping root systems is, therefore, not just to measure roots and link their phenotype to the plant genotype, but also to understand how the growth of roots is influenced by their environment. This review discusses progress in quantifying root system parameters (e.g. in terms of size, shape and dynamics) using imaging and image analysis technologies, and their potential for providing a better understanding of root:soil interactions. Significant progress has been made in image acquisition techniques; however, trade-offs exist between sample throughput, sample size, image resolution and information gained, all of which impact on downstream image analysis processes. While there have been significant advances in computational power, limitations still exist in the statistical processes involved in image analysis. Utilizing and combining different imaging systems, integrating measurements and image analysis where possible, and amalgamating data will allow researchers to gain a better understanding of root:soil interactions.

  3. Seismic acquisition parameters analysis for deep weak reflectors in the South Yellow Sea

    NASA Astrophysics Data System (ADS)

    Liu, Kai; Liu, Huaishan; Wu, Zhiqiang; Yue, Long

    2016-10-01

    The Mesozoic-Paleozoic marine residual basin in the South Yellow Sea (SYS) is a significant deep potential hydrocarbon reservoir. However, imaging of the deep prospecting target is quite challenging due to the specific seismic-geological conditions: in the Central and Wunansha Uplifts, penetration of the seismic wavefield is limited by shallow high-velocity layers (HVLs), and the reflections from the deep carbonate rocks are weak. With the conventional marine seismic acquisition technique, these deep weak reflections are difficult to image and identify. In this paper, we confirm through numerical simulation that combining a multi-level air-gun array with an extended cable in seismic acquisition is crucial for improving imaging quality. Based on a velocity model derived from the geological interpretation, we performed two-dimensional finite-difference forward modeling. The numerical simulation results show that the multi-level air-gun array enhances low-frequency energy and that the wide-angle reflections received at far offsets of the extended cable have a higher signal-to-noise ratio (SNR) and higher energy. We have therefore demonstrated that this unconventional wide-angle seismic acquisition technique can overcome the difficulty of imaging the deep weak reflectors of the SYS, and it may be useful for the design of practical seismic acquisition schemes in this region.

  4. Novel ultrahigh resolution data acquisition and image reconstruction for multi-detector row CT

    SciTech Connect

    Flohr, T. G.; Stierstorfer, K.; Suess, C.; Schmidt, B.; Primak, A. N.; McCollough, C. H.

    2007-05-15

    We present and evaluate a special ultrahigh resolution mode providing considerably enhanced spatial resolution both in the scan plane and in the z-axis direction for a routine medical multi-detector row computed tomography (CT) system. Data acquisition is performed by using a flying focal spot both in the scan plane and in the z-axis direction in combination with tantalum grids that are inserted in front of the multi-row detector to reduce the aperture of the detector elements both in-plane and in the z-axis direction. The dose utilization of the system for standard applications is not affected, since the grids are moved into place only when needed and are removed for standard scanning. By means of this technique, image slices with a nominal section width of 0.4 mm (measured full width at half maximum=0.45 mm) can be reconstructed in spiral mode on a CT system with a detector configuration of 32x0.6 mm. The measured 2% value of the in-plane modulation transfer function (MTF) is 20.4 lp/cm, the measured 2% value of the longitudinal (z axis) MTF is 21.5 lp/cm. In a resolution phantom with metal line pair test patterns, spatial resolution of 20 lp/cm can be demonstrated both in the scan plane and along the z axis. This corresponds to an object size of 0.25 mm that can be resolved. The new mode is intended for ultrahigh resolution bone imaging, in particular for wrists, joints, and inner ear studies, where a higher level of image noise due to the reduced aperture is an acceptable trade-off for the clinical benefit brought about by the improved spatial resolution.
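    The quoted "2% value of the MTF" is the spatial frequency at which the modulation transfer function falls to 0.02 of its zero-frequency value. The sketch below computes it for an assumed Gaussian PSF, a stand-in for the real scanner response (PSF width and sampling are illustrative, not the paper's measured values):

```python
import numpy as np

sigma_cm = 0.010                          # assumed Gaussian PSF width (0.1 mm)
dx = 0.0025                               # sampling step in cm (0.025 mm)
x = (np.arange(4096) - 2048) * dx
psf = np.exp(-x**2 / (2 * sigma_cm**2))   # toy point spread function

mtf = np.abs(np.fft.rfft(psf))
mtf /= mtf[0]                             # normalise to 1 at zero frequency
f = np.fft.rfftfreq(x.size, d=dx)         # spatial frequency in cycles/cm (lp/cm)

i = int(np.argmax(mtf < 0.02))            # first sample below the 2% level
f_2pct = np.interp(0.02, [mtf[i], mtf[i - 1]], [f[i], f[i - 1]])
```

    For a Gaussian PSF the MTF is exp(-2π²σ²f²), so the 2% point is at sqrt(ln 50 / 2)/(πσ), and the numeric value from the FFT should agree with that closed form.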

  5. Acquisition of Flexible Image Recognition by Coupling of Reinforcement Learning and a Neural Network

    NASA Astrophysics Data System (ADS)

    Shibata, Katsunari; Kawano, Tomohiko

    The authors have proposed a very simple autonomous learning system consisting of one neural network (NN), whose inputs are raw sensor signals and whose outputs are passed directly to actuators as control signals, and which is trained using reinforcement learning (RL). However, the prevailing opinion seems to be that such simple learning systems do not actually work on complicated tasks in the real world. In this paper, with a view to developing higher functions in robots, the authors argue for the necessity of autonomous learning in a massively parallel and cohesively flexible system with massive inputs, based on considerations of brain architecture and the sequential property of our consciousness. They also argue for placing more importance on “optimization” of the total system under a uniform criterion than on “understandability” for humans. Thus, the authors attempt to stress the importance of their proposed system for future research on robot intelligence. The experimental result in a real-world-like environment shows that image recognition from as many as 6240 visual signals can be acquired through RL under various backgrounds and light conditions, without providing any knowledge about image processing or the target object. It works even for camera image inputs that were not experienced during learning. In the hidden layer, template-like representations, division of roles between hidden neurons, and representations that detect the target regardless of light condition or background were observed after learning. The autonomous acquisition of such useful representations and functions suggests the potential to avoid the frame problem and to develop higher functions.

  6. MIRAGE: The data acquisition, analysis, and display system

    NASA Technical Reports Server (NTRS)

    Rosser, Robert S.; Rahman, Hasan H.

    1993-01-01

    Developed for the NASA Johnson Space Center and Life Sciences Directorate by GE Government Services, the Microcomputer Integrated Real-time Acquisition Ground Equipment (MIRAGE) system is a portable ground support system for Spacelab life sciences experiments. The MIRAGE system can acquire digital or analog data. Digital data may be NRZ-formatted telemetry packets or packets from a network interface; analog signals are digitized and stored in experiment packet format. Data packets from any acquisition source are archived to disk as they are received. Meta-parameters are generated from the data packet parameters by applying mathematical and logical operators. Parameters are displayed in text and graphical form or output to analog devices, and experiment data packets may be retransmitted through the network interface. Data stream definition, experiment parameter format, parameter displays, and other variables are configured using a spreadsheet database; a database can be developed to support virtually any data packet format. The user interface provides menu- and icon-driven program control, and the MIRAGE system can be integrated with other workstations to perform a variety of functions. Its generic capabilities, adaptability and ease of use make MIRAGE a cost-effective solution to many experimental data processing requirements.

  7. Segmentation and learning in the quantitative analysis of microscopy images

    NASA Astrophysics Data System (ADS)

    Ruggiero, Christy; Ross, Amy; Porter, Reid

    2015-02-01

    In material science and bio-medical domains the quantity and quality of microscopy images is rapidly increasing and there is a great need to automatically detect, delineate and quantify particles, grains, cells, neurons and other functional "objects" within these images. These are challenging problems for image processing because of the variability in object appearance that inevitably arises in real world image acquisition and analysis. One of the most promising (and practical) ways to address these challenges is interactive image segmentation. These algorithms are designed to incorporate input from a human operator to tailor the segmentation method to the image at hand. Interactive image segmentation is now a key tool in a wide range of applications in microscopy and elsewhere. Historically, interactive image segmentation algorithms have tailored segmentation on an image-by-image basis, and information derived from operator input is not transferred between images. But recently there has been increasing interest to use machine learning in segmentation to provide interactive tools that accumulate and learn from the operator input over longer periods of time. These new learning algorithms reduce the need for operator input over time, and can potentially provide a more dynamic balance between customization and automation for different applications. This paper reviews the state of the art in this area, provides a unified view of these algorithms, and compares the segmentation performance of various design choices.

  8. Image analysis: Applications in materials engineering

    SciTech Connect

    Wojnar, L.

    1999-07-01

    This new practical book describes the basic principles of image acquisition, enhancement, measurement, and interpretation in very simple nonmathematical terms. It also provides solution-oriented algorithms, examples, and case histories from industry and research, along with quick-reference information on various specific problems. Included are numerous tables, graphs, charts, and working examples in the detection of grain boundaries, pores, and chain structures.

  9. Three-dimensional fast imaging employing steady-state acquisition MRI and its diagnostic value for lumbar foraminal stenosis.

    PubMed

    Nemoto, Osamu; Fujikawa, Akira; Tachibana, Atsuko

    2014-07-01

    The aim of this study was to evaluate the usefulness of three-dimensional fast imaging employing steady-state acquisition (3D FIESTA) in the diagnosis of lumbar foraminal stenosis (LFS). Fifteen patients with LFS and 10 healthy volunteers were studied. All patients met the following criteria: (1) single L5 radiculopathy without a compressive lesion in the spinal canal, (2) pain reproduction during provocative radiculography, and (3) improvement of symptoms after surgery. We retrospectively compared the symptomatic nerve roots to the asymptomatic nerve roots on fast spin-echo (FSE) T1 sagittal, FSE T2 axial and reconstructed 3D FIESTA images. The κ values for interobserver agreement in determining the presence of LFS were 0.525 for FSE T1 sagittal images, 0.735 for FSE T2 axial images, and 0.750, 0.733 and 0.953 for 3D FIESTA sagittal, axial and coronal images, respectively. The sensitivities and specificities were 60 and 86% for FSE T1 sagittal images, 27 and 91% for FSE T2 axial images, 60 and 97% for 3D FIESTA sagittal images, 60 and 94% for 3D FIESTA axial images, and 100 and 97% for 3D FIESTA coronal images, respectively. 3D FIESTA can provide more reliable and additional information on the course of the lumbar nerve root compared with conventional magnetic resonance imaging. In particular, 3D FIESTA coronal images enable accurate diagnosis of LFS.
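    The reported figures follow from standard definitions. A small sketch (helper names are illustrative) computing Cohen's κ for interobserver agreement and sensitivity/specificity against the reference standard:

```python
import numpy as np

def sens_spec(pred, truth):
    """Sensitivity and specificity of binary stenosis calls against the
    reference standard (here, surgically confirmed LFS)."""
    pred = np.asarray(pred, bool)
    truth = np.asarray(truth, bool)
    sensitivity = (pred & truth).sum() / truth.sum()
    specificity = (~pred & ~truth).sum() / (~truth).sum()
    return sensitivity, specificity

def cohen_kappa(rater1, rater2):
    """Interobserver agreement (Cohen's kappa) for two readers' binary ratings:
    observed agreement corrected for chance agreement."""
    r1 = np.asarray(rater1, bool)
    r2 = np.asarray(rater2, bool)
    p_obs = (r1 == r2).mean()
    p_chance = r1.mean() * r2.mean() + (1 - r1.mean()) * (1 - r2.mean())
    return (p_obs - p_chance) / (1 - p_chance)
```

    κ = 1 means perfect agreement, κ = 0 means agreement no better than chance, which is the scale on which the 0.525-0.953 values above should be read.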

  10. A design of multiagent-based framework for volume image construction and analysis.

    PubMed

    Faheem, Hossam M

    2005-01-01

    This paper describes the design of a multiagent-based system that can be used to manage the acquisition and analysis of ultrasonograph images. The central concept is a management framework consisting of multiple intelligent agents that efficiently direct the ultrasonograph image acquisition and analysis operations carried out on a high-speed bit-parallel architecture, and that allow the construction of 3D images from 2D ones. Volume image operations need software that is reactive, autonomous, and intelligent; agents can therefore play an important role in enhancing the overall operation of medical image analysis. The system provides a set of image analysis operations including smoothing, noise removal, and enhancement techniques. These operations are implemented using parallel processing architectures, while the management framework consists of different agent types: simple reflex agents, agents that keep track of the world, goal-based agents, and utility-based agents. These agents interact with each other and exchange data in order to achieve a comprehensive speed in performing the volume image construction operations. Guided by the fact that an agent consists of a program and an architecture, the system deploys parallel processing architectures to implement the image analysis operations. The system is considered a step towards a complete multiagent-based framework for medical image acquisition and analysis.

  11. Optical hyperspectral imaging in microscopy and spectroscopy – a review of data acquisition

    PubMed Central

    Gao, Liang; Smith, R. Theodore

    2014-01-01

    Rather than simply acting as a photographic camera capturing two-dimensional (x, y) intensity images or a spectrometer acquiring spectra (λ), a hyperspectral imager measures entire three-dimensional (x, y, λ) datacubes for multivariate analysis, providing structural, molecular, and functional information about biological cells or tissue with unprecedented detail. Such data also gives clinical insights for disease diagnosis and treatment. We summarize the principles underpinning this technology, highlight its practical implementation, and discuss its recent applications at microscopic to macroscopic scales. PMID:25186815
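    The (x, y, λ) datacube is simply a 3D array: slicing along λ yields an intensity image at one wavelength, and slicing at a pixel yields its spectrum. A toy NumPy sketch (dimensions and the synthetic "tissue" patch are illustrative):

```python
import numpy as np

# A toy (x, y, lambda) datacube: 32x32 pixels, 50 spectral channels.
wavelengths = np.linspace(450, 720, 50)                  # nm
cube = np.zeros((32, 32, 50))
# central patch with a Gaussian spectral feature peaking near 550 nm
cube[8:24, 8:24, :] = np.exp(-0.5 * ((wavelengths - 550) / 30) ** 2)

# image at the channel nearest 550 nm, and the spectrum of one pixel
band_image = cube[:, :, np.argmin(np.abs(wavelengths - 550))]
spectrum = cube[16, 16, :]
```

    Multivariate analyses (classification, unmixing, PCA) then operate on the pixels × channels unfolding of the same array.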

  12. Statistical analysis of low-voltage EDS spectrum images

    SciTech Connect

    Anderson, I.M.

    1998-03-01

    The benefits of using low (≤5 kV) operating voltages for energy-dispersive X-ray spectrometry (EDS) of bulk specimens have been explored only during the last few years. This paper couples low-voltage EDS with two other emerging areas of characterization: spectrum imaging and multivariate statistical analysis (MSA). The specimen was a computer chip manufactured by a major semiconductor company. Data acquisition was performed with a Philips XL30-FEG SEM operated at 4 kV and equipped with an Oxford super-ATW detector and XP3 pulse processor. The specimen was normal to the electron beam and the take-off angle for acquisition was 35°. The microscope was operated with a 150 µm diameter final aperture at spot size 3, which yielded an X-ray count rate of ~2,000 s⁻¹. EDS spectrum images were acquired as Adobe Photoshop files with the 4pi plug-in module. (The spectrum images could also be stored as NIH Image files, but the raw data are automatically rescaled as maximum-contrast (0-255) 8-bit TIFF images, even at 16-bit resolution, which poses an inconvenience for quantitative analysis.) The 4pi plug-in module is designed for EDS X-ray mapping and allows simultaneous acquisition of maps from 48 elements plus an SEM image. The spectrum image was acquired by redefining the energy intervals of the 48 elements to form a series of contiguous 20 eV windows from 1.25 kV to 2.19 kV. A spectrum image of 450 x 344 pixels was acquired from the specimen with a sampling density of 50 nm/pixel and a dwell time of 0.25 live seconds per pixel, for a total acquisition time of ~14 h. The binary data files were imported into Mathematica for analysis with software developed by the author at Oak Ridge National Laboratory. A 400 x 300 pixel section of the original image was analyzed; the MSA required ~185 MB of memory and ~18 h of CPU time on a 300 MHz Power Macintosh 9600.
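    MSA of a spectrum image typically unfolds the (ny, nx, channels) cube into a pixels × channels matrix and applies principal component analysis; memory scales with that matrix, which is why the 400 × 300 pixel analysis above was demanding in 1998. A minimal NumPy sketch (an SVD-based PCA, not the author's Mathematica code):

```python
import numpy as np

def msa(spectrum_image, n_components=4):
    """PCA of a spectrum image: unfold the (ny, nx, n_channels) cube to a
    (pixels x channels) matrix, centre each channel, and decompose with an
    SVD. Returns per-pixel component scores, component 'spectra', and the
    fraction of variance carried by each component."""
    ny, nx, nch = spectrum_image.shape
    D = spectrum_image.reshape(ny * nx, nch).astype(float)
    D -= D.mean(axis=0)                      # centre each energy channel
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    scores = (U[:, :n_components] * s[:n_components]).reshape(ny, nx, n_components)
    loadings = Vt[:n_components]             # component "spectra"
    variance = s**2 / np.sum(s**2)
    return scores, loadings, variance
```

    For a specimen dominated by a few chemical phases, the leading components capture nearly all of the variance and the score maps delineate the phases.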

  13. Target identification by image analysis.

    PubMed

    Fetz, V; Prochnow, H; Brönstrup, M; Sasse, F

    2016-05-01

    Covering: 1997 to the end of 2015. Each biologically active compound induces phenotypic changes in target cells that are characteristic of its mode of action. These phenotypic alterations can be directly observed under the microscope or made visible by labelling structural elements or selected proteins of the cells with dyes. Comparing the cellular phenotype induced by a compound of interest with the phenotypes of reference compounds with known cellular targets allows its mode of action to be predicted. While this approach has been successfully applied to the characterization of natural products based on a visual inspection of images, recent studies have used automated microscopy and analysis software to increase speed and to reduce subjective interpretation. In this review, we give a general outline of the workflow for manual and automated image analysis, and we highlight natural products whose bacterial and eukaryotic targets could be identified through such approaches. PMID:26777141

  14. 3D imaging acquisition, modeling, and prototyping for facial defects reconstruction

    NASA Astrophysics Data System (ADS)

    Sansoni, Giovanna; Trebeschi, Marco; Cavagnini, Gianluca; Gastaldi, Giorgio

    2009-01-01

    A novel approach that combines optical three-dimensional imaging, reverse engineering (RE), and rapid prototyping (RP) for mold production in the prosthetic reconstruction of facial defects is presented. A commercial laser-stripe digitizer is used to perform the multiview acquisition of the patient's face; the point clouds are aligned and merged in order to obtain a polygonal model, which is then edited to sculpt the virtual prosthesis. Two physical models, of the deformed face and of the 'repaired' face, are obtained: they differ only in the defect zone. Depending on the material used for the actual prosthesis, the two prototypes can be used either to directly cast the final prosthesis or to fabricate the positive wax pattern. Two case studies are presented, referring to prosthetic reconstructions of an eye and of a nose. The results demonstrate the advantages over conventional techniques, as well as the improvements with respect to known automated manufacturing techniques in mold construction. The proposed method results in decreased patient discomfort, reduced dependence on the anaplastologist's skill, and increased repeatability and efficiency of the whole process.

  15. Gradient-based correction of chromatic aberration in the joint acquisition of color and near-infrared images

    NASA Astrophysics Data System (ADS)

    Sadeghipoor, Zahra; Lu, Yue M.; Süsstrunk, Sabine

    2015-02-01

    Chromatic aberration distortions such as wavelength-dependent blur are caused by imperfections in photographic lenses. These distortions are much more severe in the case of joint color and near-infrared (NIR) acquisition, as a wider band of wavelengths is captured. In this paper, we consider a scenario where the color image is in focus, and the NIR image captured with the same lens and the same focus settings is out of focus and blurred. To reduce chromatic aberration distortions, we propose an algorithm that estimates the blur kernel and deblurs the NIR image, using the sharp color image as a guide in both steps. In the deblurring step, we retrieve the lost details of the NIR image by exploiting the sharp edges of the color image, as the gradients of color and NIR images are often correlated. However, differences in scene reflections and lighting in the visible and NIR bands cause the gradients of the color and NIR images to differ in some regions of the image. To handle this issue, our algorithm measures the similarities and differences between the gradients of the NIR and color channels. The similarity measures guide the deblurring algorithm to efficiently exploit the gradients of the color image in reconstructing high-frequency details of the NIR image, without discarding the inherent differences between these images. Simulation results verify the effectiveness of our algorithm, both in estimating the blur kernel and in deblurring the NIR image, without producing the ringing artifacts inherent to the results of most deblurring methods.
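
    The gradient-similarity idea can be sketched per pixel with a cosine similarity between the gradient fields. This is an illustrative weighting choice, not the authors' exact measure:

```python
import numpy as np

def gradient_similarity(color, nir, eps=1e-6):
    """Per-pixel agreement between color and NIR gradient vectors.

    Returns weights near 1 where the gradients agree (safe to transfer color
    edges into the NIR reconstruction) and near 0 where they differ.
    """
    d0_c, d1_c = np.gradient(color)      # derivatives along axis 0 and axis 1
    d0_n, d1_n = np.gradient(nir)
    dot = d0_c * d0_n + d1_c * d1_n
    mag = np.hypot(d0_c, d1_c) * np.hypot(d0_n, d1_n)
    # Cosine similarity of the gradient vectors, clipped to [0, 1]
    return np.clip(dot / (mag + eps), 0.0, 1.0)

# A horizontal ramp present in both bands -> gradients parallel, weights ~1
color = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))
nir = 0.5 * color                        # same structure, half the contrast
w = gradient_similarity(color, nir)
```

    In a guided deblurring scheme, such weights would gate how strongly the color gradients constrain the NIR reconstruction at each pixel.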

  16. Uncooled thermal imaging and image analysis

    NASA Astrophysics Data System (ADS)

    Wang, Shiyun; Chang, Benkang; Yu, Chunyu; Zhang, Junju; Sun, Lianjun

    2006-09-01

    A thermal imager converts temperature differences into differences in electrical signal level, so it can be applied in medicine, for example to estimate blood flow speed and vessel location [1] and to assess pain [2]. As uncooled focal plane array (UFPA) technology matures, some simple medical functions can be performed with an uncooled thermal imager, such as rapid fever screening during the SARS outbreak. This requires stable imaging performance and sufficiently high spatial and temperature resolution. Among all performance parameters, noise equivalent temperature difference (NETD) is often used as the criterion of overall performance. The 320 x 240 α-Si microbolometer UFPA is currently in wide use for its stable performance and high responsivity. In this paper, the NETD of a UFPA and the relation between NETD and temperature are studied. Several vital parameters that affect NETD are listed and a universal formula is presented. Finally, images from this kind of thermal imager are analyzed for the purpose of detecting persons with fever. An applied thermal image enhancement method is introduced.
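
    NETD can be illustrated with its textbook first-order definition, temporal noise divided by thermal responsivity. This is a generic sketch with invented numbers, not the universal formula derived in the paper:

```python
def netd_mk(noise_v, responsivity_v_per_k):
    """First-order NETD in millikelvin.

    noise_v: RMS temporal noise at the detector output (volts).
    responsivity_v_per_k: output change per kelvin of scene temperature (V/K).
    """
    return noise_v / responsivity_v_per_k * 1000.0

# Invented example values: 0.2 mV RMS noise, 4 mV/K responsivity -> 50 mK NETD
example_netd = netd_mk(2e-4, 4e-3)
```

    Fever-screening applications need an NETD well below the ~1 K temperature differences being detected, which is why NETD serves as the universal performance criterion.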

  17. Visual Skills and Chinese Reading Acquisition: A Meta-Analysis of Correlation Evidence

    ERIC Educational Resources Information Center

    Yang, Ling-Yan; Guo, Jian-Peng; Richman, Lynn C.; Schmidt, Frank L.; Gerken, Kathryn C.; Ding, Yi

    2013-01-01

    This paper used meta-analysis to synthesize the relation between visual skills and Chinese reading acquisition based on the empirical results from 34 studies published from 1991 to 2011. We obtained 234 correlation coefficients from 64 independent samples, with a total of 5,395 participants. The meta-analysis revealed that visual skills as a…
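
    The pooling underlying such a correlation meta-analysis can be sketched with Fisher's r-to-z transform. The sample correlations and sizes below are illustrative, not the study's data:

```python
import math

def pool_correlations(rs, ns):
    """Fixed-effect pooling of correlations via Fisher's r-to-z transform.

    Each correlation r_i from a sample of size n_i is transformed to
    z_i = atanh(r_i), averaged with weights (n_i - 3), and back-transformed.
    """
    zs = [math.atanh(r) for r in rs]
    ws = [n - 3 for n in ns]
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(z_bar)

# Illustrative independent samples (r, n); not the 64 samples in the review
r_pooled = pool_correlations([0.30, 0.42, 0.25], [120, 80, 200])
```

    Published meta-analyses typically add random-effects weighting and corrections for measurement artifacts on top of this basic pooling step.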

  18. Space science technology: In-situ science. Sample Acquisition, Analysis, and Preservation Project summary

    NASA Technical Reports Server (NTRS)

    Aaron, Kim

    1991-01-01

    The Sample Acquisition, Analysis, and Preservation Project is summarized in outline and graphic form. The objective of the project is to develop component and system level technology to enable the unmanned collection, analysis and preservation of physical, chemical and mineralogical data from the surface of planetary bodies. Technology needs and challenges are identified and specific objectives are described.

  19. DDS-Suite - A Dynamic Data Acquisition, Processing, and Analysis System for Wind Tunnel Testing

    NASA Technical Reports Server (NTRS)

    Burnside, Jathan J.

    2012-01-01

    Wind tunnels have optimized their steady-state data systems for acquisition and analysis, and have even implemented large dynamic-data acquisition systems; however, development of near real-time processing and analysis tools for dynamic data has lagged. DDS-Suite is a set of tools used to acquire, process, and analyze large amounts of dynamic data. Each phase of the testing process (acquisition, processing, and analysis) is handled by a separate component, so that bottlenecks in one phase of the process do not affect the others, leading to a robust system. DDS-Suite is capable of acquiring 672 channels of dynamic data at a rate of 275 MB/s. More than 300 channels of the system use 24-bit analog-to-digital cards and are capable of producing data with less than 0.01 of phase difference at 1 kHz. System architecture, design philosophy, and examples of use during NASA Constellation and Fundamental Aerodynamic tests are discussed.
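
    The quoted aggregate rate can be turned into a rough per-channel figure. The 3-byte sample packing below is an assumption for illustration, not something the abstract states:

```python
# Back-of-envelope check of the DDS-Suite throughput figures from the abstract.
channels = 672
total_rate_mb_s = 275.0                       # aggregate acquisition rate, MB/s

per_channel_kb_s = total_rate_mb_s * 1024.0 / channels   # ~419 KB/s per channel

# Assuming 24-bit samples packed into 3 bytes (an assumption, not stated),
# the implied per-channel sample rate is on the order of 140 kS/s.
samples_per_s = per_channel_kb_s * 1024.0 / 3.0
```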

  20. Automated image analysis of uterine cervical images

    NASA Astrophysics Data System (ADS)

    Li, Wenjing; Gu, Jia; Ferris, Daron; Poirson, Allen

    2007-03-01

    Cervical cancer is the second most common cancer among women worldwide and the leading cause of cancer mortality of women in developing countries. If detected early and treated adequately, cervical cancer can be virtually prevented. Cervical precursor lesions and invasive cancer exhibit certain morphologic features that can be identified during a visual inspection exam. Digital imaging technologies allow us to assist the physician with a Computer-Aided Diagnosis (CAD) system. In colposcopy, epithelium that turns white after application of acetic acid is called acetowhite epithelium. Acetowhite epithelium is one of the major diagnostic features observed in detecting cancer and pre-cancerous regions. Automatic extraction of acetowhite regions from cervical images has been a challenging task due to specular reflection, various illumination conditions, and, most importantly, large intra-patient variation. This paper presents a multi-step acetowhite region detection system to analyze the acetowhite lesions in cervical images automatically. First, the system calibrates the color of the cervical images to be independent of screening devices. Second, the anatomy of the uterine cervix is analyzed in terms of the cervix region, external os region, columnar region, and squamous region. Third, the squamous region is further analyzed and subregions based on three levels of acetowhite are identified. The extracted acetowhite regions are accompanied by color scores to indicate the different levels of acetowhite. The system has been evaluated on data from 40 human subjects and demonstrates high correlation with experts' annotations.

  1. Elastic Scattering LIDAR Data Acquisition Visualization and Analysis

    1999-10-12

    ELASTIC/EVIEW is a software system that controls an elastic scattering atmospheric Light Detection and Ranging (LIDAR) instrument. It acquires elastic scattering LIDAR data and produces images of one-, two-, and three-dimensional atmospheric data on particulates and other atmospheric pollutants. The user interface is a modern menu-driven system with appropriate support for user configuration and printing files.

  2. Breath-hold black blood quantitative T1rho imaging of liver using single shot fast spin echo acquisition

    PubMed Central

    Chan, Queenie; Wáng, Yì-Xiáng J.

    2016-01-01

    Background Liver fibrosis is a key feature in most chronic liver diseases. T1rho magnetic resonance imaging is a potentially important technique for noninvasive diagnosis, severity grading, and therapy monitoring of liver fibrosis. However, it remains challenging to perform robust T1rho quantification of the liver in human subjects. One major reason is that the presence of rich blood signal in the liver can cause artificially high T1rho measurements and makes T1rho quantification susceptible to motion. Methods A pulse sequence based on single shot fast/turbo spin echo (SSFSE/SSTSE) acquisition, with theoretical analysis and simulation based on the extended phase graph (EPG) algorithm, is presented for breath-hold, single-slice quantitative T1rho imaging of the liver with suppression of the blood signal. The pulse sequence was evaluated in human subjects at 3.0 T with a 500 Hz spinlock frequency and times of spinlock (TSL) of 0, 10, 30 and 50 ms. Results Human scans demonstrated that the entire T1rho data set with four spinlock times can be acquired within a single breath-hold of 10 seconds with the black blood effect. T1rho quantification with suppression of the blood signal results in significantly reduced T1rho values of the liver compared to the results without blood suppression. Conclusions A signal-to-noise ratio (SNR) efficient pulse sequence is reported for T1rho quantification of the liver. The black blood effect, together with a short breath-hold, mitigates the risk of quantification errors that would occur with conventional methods. PMID:27190769
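
    T1rho mapping of this kind fits a monoexponential decay S(TSL) = S0·exp(−TSL/T1rho) across the spin-lock times. A minimal log-linear fit on synthetic, noiseless data (the T1rho and S0 values are invented; only the TSL list matches the paper):

```python
import numpy as np

def fit_t1rho(tsl_ms, signal):
    """Fit S = S0 * exp(-TSL / T1rho) by linear regression on log(S)."""
    slope, intercept = np.polyfit(np.asarray(tsl_ms, float), np.log(signal), 1)
    return -1.0 / slope, float(np.exp(intercept))   # (T1rho in ms, S0)

# Synthetic decay: T1rho = 40 ms, S0 = 100, at the paper's spin-lock times
tsl = [0.0, 10.0, 30.0, 50.0]
s = [100.0 * np.exp(-t / 40.0) for t in tsl]
t1rho, s0 = fit_t1rho(tsl, s)
```

    With noisy in-vivo data, a nonlinear least-squares fit per voxel is the more common choice, but the model is the same.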

  3. An overview of data acquisition, signal coding and data analysis techniques for MST radars

    NASA Technical Reports Server (NTRS)

    Rastogi, P. K.

    1986-01-01

    An overview is given of the data acquisition, signal processing, and data analysis techniques that are currently in use with high power MST/ST (mesosphere stratosphere troposphere/stratosphere troposphere) radars. This review supplements the works of Rastogi (1983) and Farley (1984) presented at previous MAP workshops. A general description is given of data acquisition and signal processing operations, and they are characterized on the basis of their disparate time scales. Signal coding, a brief description of frequently used codes, and their limitations are then discussed. Finally, several aspects of statistical data processing, such as signal statistics, power spectrum and autocovariance analysis, and outlier removal techniques, are discussed.
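
    The spectral step of such processing reduces to estimating the power spectrum and autocovariance of the complex time series at each range gate. A minimal sketch with a synthetic single-tone Doppler signal:

```python
import numpy as np

def power_spectrum(x):
    """Periodogram estimate of the power spectrum of one range gate."""
    x = np.asarray(x, dtype=complex)
    return np.abs(np.fft.fft(x)) ** 2 / len(x)

def autocovariance(x, max_lag):
    """Biased autocovariance estimate for lags 0..max_lag."""
    x = np.asarray(x, dtype=complex) - np.mean(x)
    n = len(x)
    return np.array([np.sum(x[k:] * np.conj(x[:n - k])) / n
                     for k in range(max_lag + 1)])

# A pure Doppler tone concentrates all power in a single spectral bin
n = 64
t = np.arange(n)
sig = np.exp(2j * np.pi * 5 * t / n)     # Doppler shift of 5 bins
spec = power_spectrum(sig)
acf = autocovariance(sig, 2)
```

    Real MST processing adds incoherent averaging of many such periodograms and outlier rejection before the Doppler moments are estimated.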

  4. An on-line remote supervisory system for microparticles based on image analysis

    NASA Astrophysics Data System (ADS)

    Liu, Wei-Hua; Jiang, Ming-Shun; Sui, Qing-Mei

    2011-11-01

    A new on-line remote particle analysis system based on image processing has been developed to measure microparticles. The system is composed of a particle collector sensor (PCS), a particle image sensor (PIS), an image remote transmission module, and an image processing system. Some details of the image processing are discussed. The main advantage of this system is greater convenience in particle sample collection and particle image acquisition. Particle size can be obtained with a deviation of less than 1 μm, and the particle number can be obtained without deviation. The developed system is also convenient and versatile for other microparticle analyses in academic and industrial applications.
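
    Counting and sizing particles in a binarized frame comes down to connected-component labelling. A minimal 4-connected sketch; a pixel-size calibration (e.g. μm/pixel) would be applied to the returned pixel counts to get physical sizes:

```python
import numpy as np

def label_particles(binary):
    """Label 4-connected foreground regions; return (label image, region sizes)."""
    labels = np.zeros(binary.shape, dtype=int)
    sizes = []
    for i, j in zip(*np.nonzero(binary)):
        if labels[i, j]:
            continue                       # already part of a labeled particle
        lab = len(sizes) + 1
        stack, count = [(int(i), int(j))], 0
        labels[i, j] = lab
        while stack:                       # flood fill from the seed pixel
            y, x = stack.pop()
            count += 1
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                        and binary[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = lab
                    stack.append((ny, nx))
        sizes.append(count)
    return labels, sizes

# Two particles: a 2x2 blob and a single isolated pixel
img = np.zeros((5, 5), dtype=bool)
img[1:3, 1:3] = True
img[4, 4] = True
labels_img, particle_sizes = label_particles(img)
```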

  5. A Rational Analysis of the Acquisition of Multisensory Representations

    ERIC Educational Resources Information Center

    Yildirim, Ilker; Jacobs, Robert A.

    2012-01-01

    How do people learn multisensory, or amodal, representations, and what consequences do these representations have for perceptual performance? We address this question by performing a rational analysis of the problem of learning multisensory representations. This analysis makes use of a Bayesian nonparametric model that acquires latent multisensory…

  6. Adaptive anisotropic gaussian filtering to reduce acquisition time in cardiac diffusion tensor imaging.

    PubMed

    Mazumder, Ria; Clymer, Bradley D; Mo, Xiaokui; White, Richard D; Kolipaka, Arunark

    2016-06-01

    Diffusion tensor imaging (DTI) is used to quantify myocardial fiber orientation based on helical angles (HA). Accurate HA measurements require multiple excitations (NEX) and/or several diffusion encoding directions (DED). However, increasing NEX and/or DED increases acquisition time (TA). Therefore, in this study, we propose to reduce TA by implementing a 3D adaptive anisotropic Gaussian filter (AAGF) on DTI data acquired from ex-vivo healthy and infarcted porcine hearts. DTI was performed on ex-vivo hearts [9 healthy, 3 with myocardial infarction (MI)] with several combinations of DED and NEX. AAGF, mean (AVF), and median (MF) filters were applied to the primary eigenvectors of the diffusion tensor prior to HA estimation. The performance of AAGF was compared against AVF and MF. Root mean square error (RMSE), concordance correlation coefficients, and Bland-Altman analysis were used to determine the optimal combination of DED and NEX that generated the best HA maps in the least possible TA. Lastly, the effect of implementing AAGF on the infarcted porcine hearts was also investigated. RMSE in HA estimation for AAGF was lower compared to AVF or MF. After filtering with AAGF, fewer DED and NEX were required to achieve HA maps with similar integrity to those obtained from higher NEX and/or DED. Pathological alterations in HA orientation in the MI model were preserved after filtering. Our results demonstrate that AAGF reduces TA without affecting the integrity of the myocardial microstructure. PMID:26843150

  7. Measurement of eye lens dose for Varian On-Board Imaging with different cone-beam computed tomography acquisition techniques

    PubMed Central

    Deshpande, Sudesh; Dhote, Deepak; Thakur, Kalpna; Pawar, Amol; Kumar, Rajesh; Kumar, Munish; Kulkarni, M. S.; Sharma, S. D.; Kannan, V.

    2016-01-01

    The objective of this work was to measure patient eye lens dose for different cone-beam computed tomography (CBCT) acquisition protocols of Varian's On-Board Imaging (OBI) system using optically stimulated luminescence dosimeters (OSLDs), and to study the variation in eye lens dose with patient geometry and the distance from the isocenter to the eye lens. During the experimental measurements, an OSLD was placed on the patient between the eyebrows, in line with the nose, during CBCT image acquisition to measure eye lens dose. The eye lens dose measurements were carried out for three different cone-beam acquisition protocols (standard dose head, low-dose head [LDH], and high-quality head [HQH]) of the Varian OBI. Measured doses were correlated with patient geometry and the distance between the isocenter and the eye lens. Measured eye lens doses for the standard head and HQH protocols were in the range of 1.8–3.2 mGy and 4.5–9.9 mGy, respectively. However, the measured eye lens dose for the LDH protocol was in the range of 0.3–0.7 mGy. The measured data indicate that the eye lens dose to the patient depends on the selected imaging protocol. It was also observed that eye lens dose does not depend on patient geometry but depends strongly on the distance between the eye lens and the treatment field isocenter. The undoubted advantages of an imaging system should not be counterbalanced by inappropriate selection of the imaging protocol, especially a very intense imaging protocol. PMID:27651564

  8. In-Depth Analysis of Computer Memory Acquisition Software for Forensic Purposes.

    PubMed

    McDown, Robert J; Varol, Cihan; Carvajal, Leonardo; Chen, Lei

    2016-01-01

    The comparison studies on random access memory (RAM) acquisition tools are either limited in metrics or the selected tools were designed to be executed in older operating systems. Therefore, this study evaluates seven widely used shareware or freeware/open-source RAM acquisition forensic tools that are compatible with the latest 64-bit Windows operating systems. The tools' user interface capabilities, platform limitations, reporting capabilities, total execution time, shared and proprietary DLLs, modified registry keys, and invoked files during processing were compared. We observed that Windows Memory Reader and Belkasoft's Live RAM Capturer leave the least fingerprints in memory when loaded. On the other hand, ProDiscover and FTK Imager perform poorly in memory usage, processing time, DLL usage, and unwanted artifacts introduced to the system. While Belkasoft's Live RAM Capturer is the fastest to obtain an image of the memory, ProDiscover takes the longest time to do the same job. PMID:27405017

  10. Automated Microarray Image Analysis Toolbox for MATLAB

    SciTech Connect

    White, Amanda M.; Daly, Don S.; Willse, Alan R.; Protic, Miroslava; Chandler, Darrell P.

    2005-09-01

    The Automated Microarray Image Analysis (AMIA) Toolbox for MATLAB is a flexible, open-source microarray image analysis tool that allows the user to customize the analysis of sets of microarray images. This tool provides several methods of identifying and quantifying spot statistics, as well as extensive diagnostic statistics and images to identify poor data quality or processing. The open nature of this software allows researchers to understand the algorithms used to provide intensity estimates and to modify them easily if desired.

  11. Motofit - integrating neutron reflectometry acquisition, reduction and analysis into one, easy to use, package

    NASA Astrophysics Data System (ADS)

    Nelson, Andrew

    2010-11-01

    The efficient use of complex neutron scattering instruments is often hindered by the complex nature of their operating software. This complexity exists at each experimental step: data acquisition, reduction, and analysis, with each step being as important as the previous one. For example, whilst command line interfaces are powerful for automated acquisition, they often reduce accessibility for novice users and sometimes reduce efficiency for advanced users. One solution is the development of a graphical user interface which allows the user to operate the instrument through a simple and intuitive "push button" approach. This approach was taken by the Motofit software package for analysis of multiple-contrast reflectometry data. Here we describe the extension of this package to cover the data acquisition and reduction steps for the Platypus time-of-flight neutron reflectometer. Consequently, the complete operation of an instrument is integrated into a single, easy to use program, leading to efficient instrument usage.

  12. Image gathering, coding, and processing: End-to-end optimization for efficient and robust acquisition of visual information

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.

    1990-01-01

    Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.
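
    A minimal one-level Laplacian pyramid of the kind referred to above, using 2x2 average-pool downsampling and nearest-neighbour upsampling for simplicity; standard implementations use Gaussian filtering instead:

```python
import numpy as np

def laplacian_pyramid_level(img):
    """Split an image into a half-resolution coarse level and a detail residual."""
    h, w = img.shape
    coarse = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))  # 2x2 average pool
    upsampled = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)
    detail = img - upsampled           # Laplacian (band-pass) residual
    return coarse, detail

rng = np.random.default_rng(0)
img = rng.random((8, 8))
coarse, detail = laplacian_pyramid_level(img)

# Perfect reconstruction: upsample the coarse level and add back the residual
reconstructed = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1) + detail
```

    Applying the split recursively to the coarse level yields the multi-scale pyramid; coding schemes exploit the fact that the detail bands are sparse and cheap to compress.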

  13. Split-screen display system and standardized methods for ultrasound image acquisition and multi-frame data processing

    NASA Technical Reports Server (NTRS)

    Selzer, Robert H. (Inventor); Hodis, Howard N. (Inventor)

    2011-01-01

    A standardized acquisition methodology assists operators to accurately replicate high resolution B-mode ultrasound images obtained over several spaced-apart examinations utilizing a split-screen display in which the arterial ultrasound image from an earlier examination is displayed on one side of the screen while a real-time "live" ultrasound image from a current examination is displayed next to the earlier image on the opposite side of the screen. By viewing both images, whether simultaneously or alternately, while manually adjusting the ultrasound transducer, an operator is able to bring into view the real-time image that best matches a selected image from the earlier ultrasound examination. Utilizing this methodology, dynamic material properties of arterial structures, such as IMT and diameter, are measured in a standard region over successive image frames. Each frame of the sequence has its echo edge boundaries automatically determined by using the immediately prior frame's true echo edge coordinates as initial boundary conditions. Computerized echo edge recognition and tracking over multiple successive image frames enhances measurement of arterial diameter and IMT and allows for improved vascular dimension measurements, including vascular stiffness and IMT determinations.

  14. Statistical analysis of biophoton image

    NASA Astrophysics Data System (ADS)

    Wang, Susheng

    1998-08-01

    A photon count image system has been developed to obtain ultra-weak bioluminescence images. Photon images of plants, animals, and a human hand have been detected. The biophoton image differs from a conventional image. In this paper, three characteristics of the biophoton image are analyzed. On the basis of these characteristics, the detection probability and detection limit of the photon count image system, and the detection limit of the biophoton image, are discussed. This research provides a scientific basis for experiment design and photon image processing.
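
    For photon-count imaging, the per-pixel detection probability follows Poisson statistics. A minimal sketch of that relationship; the count rates and exposure times below are illustrative, not the paper's measurements:

```python
import math

def detection_probability(rate_cps, exposure_s, dark_cps=0.0):
    """Probability of registering at least one count in a pixel,
    assuming Poisson-distributed photon and dark counts."""
    lam = (rate_cps + dark_cps) * exposure_s   # expected counts in the exposure
    return 1.0 - math.exp(-lam)

# Ultra-weak emission (0.05 counts/s) needs long exposures to be detected
p_short = detection_probability(0.05, 1.0)     # ~5% per 1 s frame
p_long = detection_probability(0.05, 60.0)     # ~95% over 60 s
```

    The detection limit then follows from requiring the signal-induced counts to exceed the dark-count fluctuations at a chosen confidence level.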

  15. Research and design of portable photoelectric rotary table data-acquisition and analysis system

    NASA Astrophysics Data System (ADS)

    Yang, Dawei; Yang, Xiufang; Han, Junfeng; Yan, Xiaoxu

    2015-02-01

    The photoelectric rotary table is a primary tracking-measurement platform, widely used at test ranges and in aerospace. To meet the demands of both laboratory and field applications of photoelectric test instruments within range photoelectric tracking-measurement systems, a portable data-acquisition and analysis system for photoelectric rotary tables was researched and designed. The system hardware design is based on a Xilinx Virtex-4 series FPGA device and its peripheral modules, and the host-computer software was developed on the VC++ 6.0 programming platform using MFC class libraries. The data-acquisition and analysis system integrates data acquisition, display and storage, commissioning control, analysis, laboratory waveform playback, transmission, and fault diagnosis, and has the advantages of small volume, embeddability, high speed, portability, and simple operation. Alignment experiments of the system hardware and software, with a photoelectric tracking turntable as the test object, show that the system can acquire, analyze, and process data from photoelectric tracking equipment and control turntable debugging well, with accurate and reliable measurement results and good maintainability and extensibility. This design is of significance for advancing the debugging, diagnosis, condition monitoring, and fault analysis of photoelectric tracking-measurement equipment, as well as for standardizing interfaces and improving equipment maintainability, and has innovative and practical value.

  16. Computer analysis of mammography phantom images (CAMPI)

    NASA Astrophysics Data System (ADS)

    Chakraborty, Dev P.

    1997-05-01

    Computer analysis of mammography phantom images (CAMPI) is a method for objective and precise measurements of phantom image quality in mammography. This investigation applied CAMPI methodology to the Fischer Mammotest Stereotactic Digital Biopsy machine. Images of an American College of Radiology phantom centered on the largest two microcalcification groups were obtained on this machine under a variety of x-ray conditions. Analyses of the images revealed that the precise behavior of the CAMPI measures could be understood from basic imaging physics principles. We conclude that CAMPI is sensitive to subtle image quality changes and can perform accurate evaluations of images, especially of directly acquired digital images.

  17. Quantifying the impact of respiratory-gated 4D CT acquisition on thoracic image quality: A digital phantom study

    SciTech Connect

    Bernatowicz, K. Knopf, A.; Lomax, A.; Keall, P.; Kipritidis, J.; Mishra, P.

    2015-01-15

    Purpose: Prospective respiratory-gated 4D CT has been shown to reduce tumor image artifacts by up to 50% compared to conventional 4D CT. However, to date no studies have quantified the impact of gated 4D CT on normal lung tissue imaging, which is important in performing dose calculations based on accurate estimates of lung volume and structure. To determine the impact of gated 4D CT on thoracic image quality, the authors developed a novel simulation framework incorporating a realistic deformable digital phantom driven by patient tumor motion patterns. Based on this framework, the authors test the hypothesis that respiratory-gated 4D CT can significantly reduce lung imaging artifacts. Methods: Our simulation framework synchronizes the 4D extended cardiac torso (XCAT) phantom with tumor motion data in a quasi real-time fashion, allowing simulation of three 4D CT acquisition modes featuring different levels of respiratory feedback: (i) “conventional” 4D CT that uses a constant imaging and couch-shift frequency, (ii) “beam paused” 4D CT that interrupts imaging to avoid oversampling at a given couch position and respiratory phase, and (iii) “respiratory-gated” 4D CT that triggers acquisition only when the respiratory motion fulfills phase-specific displacement gating windows based on prescan breathing data. Our framework generates a set of ground truth comparators, representing the average XCAT anatomy during beam-on for each of ten respiratory phase bins. Based on this framework, the authors simulated conventional, beam-paused, and respiratory-gated 4D CT images using tumor motion patterns from seven lung cancer patients across 13 treatment fractions, with a simulated 5.5 cm³ spherical lesion. Normal lung tissue image quality was quantified by comparing simulated and ground truth images in terms of overall mean square error (MSE) intensity difference, threshold-based lung volume error, and fractional false positive/false negative rates. Results
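
    The first two image-quality metrics named above (MSE intensity difference and threshold-based lung volume error) are straightforward to compute. A minimal sketch on tiny synthetic volumes; the threshold value and volumes are illustrative:

```python
import numpy as np

def image_metrics(simulated, truth, lung_threshold=0.5):
    """MSE intensity difference and threshold-based lung volume error.

    Voxels below lung_threshold are counted as lung (low CT intensity);
    the volume error is relative to the ground-truth lung volume.
    """
    mse = float(np.mean((simulated - truth) ** 2))
    vol_sim = int(np.count_nonzero(simulated < lung_threshold))
    vol_true = int(np.count_nonzero(truth < lung_threshold))
    vol_err = (vol_sim - vol_true) / vol_true
    return mse, vol_err

truth = np.ones((4, 4, 4))
truth[:2] = 0.0                # "lung" voxels: low intensity
sim = truth.copy()
sim[2, 0, 0] = 0.0             # one artifact voxel misclassified as lung
mse, vol_err = image_metrics(sim, truth)
```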

  18. Towards an Operant Analysis of the Acquisition of Conceptual Behavior.

    ERIC Educational Resources Information Center

    Brigham, Thomas A.

    A model for the analysis of simple human conceptual behavior, based on the apparent similarities of human conceptual behavior and that of infrahuman subjects, is developed. A minimum definition of conceptual behavior is given: A single response, verbal or nonverbal, under the discriminative control of a group of stimuli whose parameters are…

  19. A Stylistic Approach to Foreign Language Acquisition and Literary Analysis.

    ERIC Educational Resources Information Center

    Berg, William J.; Martin-Berg, Laurey K.

    This paper discusses an approach to teaching third-year college "bridge" courses, showing that students in a course that focuses on language and culture, as well as students in an introductory course on literary analysis, can benefit from using a stylistic approach to literary texts to understand both form and content. The paper suggests that a…

  20. Time series analysis of knowledge of results effects during motor skill acquisition.

    PubMed

    Blackwell, J R; Simmons, R W; Spray, J A

    1991-03-01

    Time series analysis was used to investigate the hypothesis that during acquisition of a motor skill, knowledge of results (KR) information is used to generate a stable internal referent about which response errors are randomly distributed. Sixteen subjects completed 50 acquisition trials of each of three movements whose spatial-temporal characteristics differed. Acquisition trials were either blocked, with each movement being presented in series, or randomized, with the presentation of movements occurring in random order. Analysis of movement time data indicated the contextual interference effect reported in previous studies was replicated in the present experiment. Time series analysis of the acquisition trial data revealed the majority of individual subject response patterns during blocked trials were best described by a model with a temporarily stationary, internal reference of the criterion and systematic, trial-to-trial variation of response errors. During random trial conditions, response patterns were usually best described by a "White-noise" model. This model predicts a permanently stationary, internal reference associated with randomly distributed response errors that are unaffected by KR information. These results are not consistent with previous work using time series analysis to describe motor behavior (Spray & Newell, 1986). PMID:2028084
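
    The model discrimination described here rests on the autocorrelation structure of the response-error series: a "white-noise" model predicts near-zero autocorrelation at every lag, while systematic trial-to-trial variation produces strong positive autocorrelation. A minimal lag-1 check on synthetic series (not the study's data):

```python
import numpy as np

def lag1_autocorr(errors):
    """Lag-1 autocorrelation of a response-error series."""
    e = np.asarray(errors, float) - np.mean(errors)
    return float(np.sum(e[1:] * e[:-1]) / np.sum(e * e))

rng = np.random.default_rng(1)
white = rng.normal(size=500)             # randomly distributed errors
drift = np.cumsum(rng.normal(size=500))  # systematic trial-to-trial variation
r_white = lag1_autocorr(white)           # near 0
r_drift = lag1_autocorr(drift)           # near 1
```

    Full time-series analyses fit ARIMA-type models rather than a single lag, but the lag-1 statistic already separates the two response patterns contrasted in the abstract.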

  1. Acquisition and analysis of primate physiologic data for the Space Shuttle

    NASA Technical Reports Server (NTRS)

    Eberhart, Russell C.; Hogrefe, Arthur F.; Radford, Wade E.; Sanders, Kermit H.; Dobbins, Roy W.

    1988-01-01

    This paper describes the design and prototypes of the Physiologic Acquisition and Telemetry System (PATS), which is a multichannel system, designed for large primates, for the data acquisition, telemetry, storage, and analysis of physiological data. PATS is expected to acquire data from units implanted in the abdominal cavities of rhesus monkeys that will be flown aboard the Spacelab. The system will telemeter both stored and real-time internal physiologic measurements to an external Flight Support Station (FSS) computer system. The implanted Data Acquisition and Telemetry Subsystem subunit will be externally activated, controlled and reprogrammed from the FSS.

  2. Acquisition and analysis of primate physiologic data for the Space Shuttle

    NASA Astrophysics Data System (ADS)

    Eberhart, Russell C.; Hogrefe, Arthur F.; Radford, Wade E.; Sanders, Kermit H.; Dobbins, Roy W.

    1988-03-01

    This paper describes the design and prototypes of the Physiologic Acquisition and Telemetry System (PATS), which is a multichannel system, designed for large primates, for the data acquisition, telemetry, storage, and analysis of physiological data. PATS is expected to acquire data from units implanted in the abdominal cavities of rhesus monkeys that will be flown aboard the Spacelab. The system will telemeter both stored and real-time internal physiologic measurements to an external Flight Support Station (FSS) computer system. The implanted Data Acquisition and Telemetry Subsystem subunit will be externally activated, controlled and reprogrammed from the FSS.

  3. Differentiating semantic categories during the acquisition of novel words: correspondence analysis applied to event-related potentials.

    PubMed

    Fargier, Raphaël; Ploux, Sabine; Cheylus, Anne; Reboul, Anne; Paulignan, Yves; Nazir, Tatjana A

    2014-11-01

    Growing evidence suggests that semantic knowledge is represented in distributed neural networks that include modality-specific structures. Here, we examined the processes underlying the acquisition of words from different semantic categories to determine whether the emergence of visual- and action-based categories could be tracked back to their acquisition. For this, we applied correspondence analysis (CA) to ERPs recorded at various moments during acquisition. CA is a multivariate statistical technique typically used to reveal distance relationships between words of a corpus. Applied to ERPs, it allows isolating factors that best explain variations in the data across time and electrodes. Participants were asked to learn new action and visual words by associating novel pseudowords with the execution of hand movements or the observation of visual images. Words were probed before and after training on two consecutive days. To capture processes that unfold during lexical access, CA was applied to the 100-400 msec post-word onset interval. CA isolated two factors that organized the data as a function of test sessions and word categories. Conventional ERP analyses further revealed a category-specific increase in the negativity of the ERPs to action and visual words at the frontal and occipital electrodes, respectively. The distinct neural processes underlying action and visual words can thus be tracked back to the acquisition of word-referent relationships and may have their origin in association learning. Given current evidence for the flexibility of language-induced sensory-motor activity, we argue that these associative links may serve functions beyond word understanding, that is, the elaboration of situation models.

  4. Whole-body magnetic resonance imaging featuring moving table continuous data acquisition with high-precision position feedback.

    PubMed

    Zenge, Michael O; Ladd, Mark E; Vogt, Florian M; Brauck, Katja; Barkhausen, Joerg; Quick, Harald H

    2005-09-01

    A novel setup for whole-body MR imaging with moving table continuous data acquisition has been developed and evaluated. The setup features a manually positioned moving table platform with integrated phased-array surface radiofrequency coils. A high-precision laser position sensor was integrated into the system to provide real-time positional data that were used to compensate for nonlinear manual table translation. This setup enables continuous 2D and 3D whole-body data acquisition during table movement with surface coil image quality. The concept has been successfully evaluated with whole-body steady-state free precession (TrueFISP) anatomic imaging in five healthy volunteers. Seamless coronal and sagittal slices of continually acquired whole-body data during table movement were accurately reconstructed. The proposed strategy is potentially useful for a variety of applications, including whole-body metastasis screening, whole-body MR angiography, large field-of-view imaging in short bore systems, and for moving table applications during MR-guided interventions.
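The core correction described above, using real-time laser position data to compensate for nonlinear manual table translation, amounts to resampling data tagged with measured positions onto a uniform spatial grid. A minimal sketch of that idea; the function name, linear interpolation, and scalar per-position samples are illustrative assumptions, not details from the paper:

```python
import numpy as np

def regrid_moving_table(samples, positions, grid_spacing):
    """Resample data acquired during nonuniform table motion onto a uniform
    spatial grid, using the position-sensor readings attached to each sample,
    so the data can be reconstructed as if the table moved at constant speed.
    `positions` must be monotonically increasing."""
    grid = np.arange(positions.min(), positions.max(), grid_spacing)
    return grid, np.interp(grid, positions, samples)
```

The same regridding idea generalizes to full k-space lines per position; only the interpolation target changes.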

  5. Compressive sensing based high-speed time-stretch optical microscopy for two-dimensional image acquisition.

    PubMed

    Guo, Qiang; Chen, Hongwei; Weng, Zhiliang; Chen, Minghua; Yang, Sigang; Xie, Shizhong

    2015-11-16

    In this paper, compressive sensing based high-speed time-stretch optical microscopy for two-dimensional (2D) image acquisition is proposed and experimentally demonstrated for the first time. A section of dispersion compensating fiber (DCF) is used to perform wavelength-to-time conversion and then ultrafast spectral shaping of broadband optical pulses can be achieved via high-speed intensity modulation. A 2D spatial disperser comprising a pair of orthogonally oriented dispersers is employed to produce spatially structured illumination for 2D image acquisition and a section of single mode fiber (SMF) is utilized for pulse compression in the optical domain. In our scheme, a 1.2-GHz photodetector and a 50-MHz analog-to-digital converter (ADC) are used to acquire the energy of the compressed pulses. Image reconstructions are demonstrated at a frame rate of 500 kHz and a sixteen-fold image compression is achieved in our proof-of-concept demonstration.

  6. Development of an acquisition protocol and a segmentation algorithm for wounds of cutaneous Leishmaniasis in digital images

    NASA Astrophysics Data System (ADS)

    Diaz, Kristians; Castañeda, Benjamín; Miranda, César; Lavarello, Roberto; Llanos, Alejandro

    2010-03-01

    We developed a protocol for the acquisition of digital images and an algorithm for a color-based automatic segmentation of cutaneous lesions of Leishmaniasis. The protocol for image acquisition provides control over the working environment to manipulate brightness, lighting and undesirable shadows on the injury using indirect lighting. Also, this protocol was used to accurately calculate the area of the lesion, expressed in mm², even on curved surfaces by combining the information from two consecutive images. Different color spaces were analyzed and compared using ROC curves in order to determine the color layer with the highest contrast between the background and the wound. The proposed algorithm is composed of three stages: (1) Location of the wound, determined by applying thresholding and mathematical morphology techniques to the H layer of the HSV color space, (2) Determination of the boundaries of the wound by analyzing the color characteristics in the YIQ space based on masks (for the wound and the background) estimated from the first stage, and (3) Refinement of the calculations obtained in the previous stages by using the discrete dynamic contours algorithm. The segmented regions obtained with the algorithm were compared with manual segmentations made by a medical specialist. Our results support the conclusion that color provides useful information during segmentation and measurement of wounds of cutaneous Leishmaniasis. Results from ten images showed 99% specificity, 89% sensitivity, and 98% accuracy.
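Stage (1) of the algorithm, thresholding the H layer of HSV and cleaning the result with mathematical morphology, can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the hue threshold, the 3×3 structuring element, and keeping only the largest connected component are assumptions:

```python
import numpy as np
from scipy import ndimage

def hue_channel(rgb):
    """Hue (H of HSV) in [0, 1) for an RGB image with values in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
    diff = np.where(mx > mn, mx - mn, 1.0)   # avoid division by zero
    h = np.zeros_like(mx)
    h = np.where(mx == r, ((g - b) / diff) % 6, h)
    h = np.where(mx == g, (b - r) / diff + 2, h)
    h = np.where(mx == b, (r - g) / diff + 4, h)
    return h / 6.0

def locate_wound(rgb, hue_thresh=0.05):
    """Stage-1 sketch: threshold reddish hues (near 0; a real implementation
    would also handle hue wrap-around), then apply morphological opening and
    keep the largest connected component as the candidate wound region."""
    mask = hue_channel(rgb) < hue_thresh
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)
```

Stages (2) and (3) would then refine this mask in YIQ space and with dynamic contours, as the abstract describes.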

  7. Principles and clinical applications of image analysis.

    PubMed

    Kisner, H J

    1988-12-01

    Image processing has traveled to the lunar surface and back, finding its way into the clinical laboratory. Advances in digital computers have improved the technology of image analysis, resulting in a wide variety of medical applications. Offering improvements in turnaround time, standardized systems, increased precision, and walkaway automation, digital image analysis has likely found a permanent home as a diagnostic aid in the interpretation of microscopic as well as macroscopic laboratory images.

  8. An adaptive undersampling scheme of wavelet-encoded parallel MR imaging for more efficient MR data acquisition

    NASA Astrophysics Data System (ADS)

    Xie, Hua; Bosshard, John C.; Hill, Jason E.; Wright, Steven M.; Mitra, Sunanda

    2016-03-01

    Magnetic Resonance Imaging (MRI) offers noninvasive high resolution, high contrast cross-sectional anatomic images through the body. The data of conventional MRI are collected in the spatial frequency (Fourier) domain, also known as k-space. Because there is still a great need to improve the temporal resolution of MRI, Compressed Sensing (CS) in MR imaging has been proposed to exploit the sparsity of MR images, showing great potential to reduce scan time significantly; however, it poses its own unique problems. This paper revisits wavelet-encoded MR imaging, which replaces phase encoding in conventional MRI data acquisition with wavelet encoding by applying wavelet-shaped spatially selective radiofrequency (RF) excitation, and keeps the readout direction as frequency encoding. The practicality of wavelet-encoded MRI by itself is limited due to the SNR penalties and poor time resolution compared to conventional Fourier-based MRI. To compensate for those disadvantages, this paper first introduces an undersampling scheme named significance map for sparse wavelet-encoded k-space to speed up data acquisition as well as allowing for various adaptive imaging strategies. The proposed adaptive wavelet-encoded undersampling scheme does not require prior knowledge of the subject to be scanned. Multiband (MB) parallel imaging is also incorporated with wavelet-encoded MRI by exciting multiple regions simultaneously for further reduction in scan time desirable for medical applications. Simulation and experimental results are presented, showing the feasibility of the proposed approach in further reduction of the redundancy of the wavelet k-space data while maintaining relatively high quality.

  9. IMAGE ANALYSIS ALGORITHMS FOR DUAL MODE IMAGING SYSTEMS

    SciTech Connect

    Robinson, Sean M.; Jarman, Kenneth D.; Miller, Erin A.; Misner, Alex C.; Myjak, Mitchell J.; Pitts, W. Karl; Seifert, Allen; Seifert, Carolyn E.; Woodring, Mitchell L.

    2010-06-11

    The level of detail discernable in imaging techniques has generally excluded them from consideration as verification tools in inspection regimes where information barriers are mandatory. However, if a balance can be struck between sufficient information barriers and feature extraction to verify or identify objects of interest, imaging may significantly advance verification efforts. This paper describes the development of combined active (conventional) radiography and passive (auto) radiography techniques for imaging sensitive items assuming that comparison images cannot be furnished. Three image analysis algorithms are presented, each of which reduces full image information to non-sensitive feature information and ultimately is intended to provide only a yes/no response verifying features present in the image. These algorithms are evaluated on both their technical performance in image analysis and their application with or without an explicitly constructed information barrier. The first algorithm reduces images to non-invertible pixel intensity histograms, retaining only summary information about the image that can be used in template comparisons. This one-way transform is sufficient to discriminate between different image structures (in terms of area and density) without revealing unnecessary specificity. The second algorithm estimates the attenuation cross-section of objects of known shape based on transition characteristics around the edge of the object’s image. The third algorithm compares the radiography image with the passive image to discriminate dense, radioactive material from point sources or inactive dense material. By comparing two images and reporting only a single statistic from the combination thereof, this algorithm can operate entirely behind an information barrier stage. Together with knowledge of the radiography system, these algorithms can be used in combination to improve verification capability in inspection regimes and improve
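The first algorithm's idea, a non-invertible reduction of an image to a pixel-intensity histogram used only for template comparison, can be illustrated with a short sketch. The bin count, normalization, and L1-distance tolerance here are illustrative assumptions, not details from the paper:

```python
import numpy as np

def histogram_signature(image, bins=32, lo=0.0, hi=1.0):
    """One-way transform: reduce an image to a normalized intensity
    histogram. The spatial arrangement of pixels cannot be recovered
    from it, so only summary (non-sensitive) information is retained."""
    hist, _ = np.histogram(image, bins=bins, range=(lo, hi))
    return hist / hist.sum()

def matches_template(signature, template, tol=0.05):
    """Yes/no verification against a stored template histogram via an
    L1 distance; only a boolean leaves the information barrier."""
    return bool(np.abs(signature - template).sum() < tol)
```

Because the histogram discards pixel positions, two images with the same intensity content but different layouts produce identical signatures, which is exactly the non-invertibility the information barrier relies on.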

  10. An Independent Workstation For CT Image Processing And Analysis

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Sewchand, Wilfred

    1988-06-01

    This manuscript describes an independent workstation which consists of a data acquisition and transfer system, a host computer, and a display and record system. The main tasks of the workstation include the collecting and managing of a vast amount of data, creating and processing 2-D and 3-D images, conducting quantitative data analysis, and recording and exchanging information. This workstation not only meets the requirements for routine clinical applications, but it is also used extensively for research purposes. It is stand-alone and works as a physician's workstation; moreover, it can be easily linked into a computer-network and serve as a component of PACS (Picture Archiving and Communication System).

  11. Dosimetric and image quality assessment of different acquisition protocols of a novel 64-slice CT scanner

    NASA Astrophysics Data System (ADS)

    Vite, Cristina; Mangini, Monica; Strocchi, Sabina; Novario, Raffaele; Tanzi, Fabio; Carrafiello, Gianpaolo; Conte, Leopoldo; Fugazzola, Carlo

    2006-03-01

    sharp fall below that value. A significant decrease in the effective dose to the patient, around 40%, was found; image quality analysis shows a further 10% dose reduction possibility.

  12. Patient-adaptive lesion metabolism analysis by dynamic PET images.

    PubMed

    Gao, Fei; Liu, Huafeng; Shi, Pengcheng

    2012-01-01

    Dynamic PET imaging provides important spatial-temporal information for metabolism analysis of organs and tissues, and generates a great reference for clinical diagnosis and pharmacokinetic analysis. Due to poor statistical properties of the measurement data in low count dynamic PET acquisition and disturbances from surrounding tissues, identifying small lesions inside the human body is still a challenging issue. The uncertainties in estimating the arterial input function will also limit the accuracy and reliability of the metabolism analysis of lesions. Furthermore, the sizes of the patients and the motions during PET acquisition will yield a mismatch against the general-purpose reconstruction system matrix, which will also affect the quantitative accuracy of metabolism analyses of lesions. In this paper, we present a dynamic PET metabolism analysis framework by defining a patient-adaptive system matrix to improve the lesion metabolism analysis. Both patient size information and potential small lesions are incorporated by simulations of phantoms of different sizes and individual point source responses. The new framework improves the quantitative accuracy of lesion metabolism analysis, and makes lesion identification more precise. The requirement of accurate input functions is also reduced. Experiments are conducted on a Monte Carlo simulated data set for quantitative analysis and validation, and on real patient scans for assessment of clinical potential. PMID:23286175

  13. SPLASSH: Open source software for camera-based high-speed, multispectral in-vivo optical image acquisition

    PubMed Central

    Sun, Ryan; Bouchard, Matthew B.; Hillman, Elizabeth M. C.

    2010-01-01

    Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software’s framework and provide details to guide users with development of this and similar software. PMID:21258475

  14. On the resolution of ECG acquisition systems for the reliable analysis of the P-wave.

    PubMed

    Censi, Federica; Calcagnini, Giovanni; Corazza, Ivan; Mattei, Eugenio; Triventi, Michele; Bartolini, Pietro; Boriani, Giuseppe

    2012-02-01

    The analysis of the P-wave on surface ECG is widely used to assess the risk of atrial arrhythmias. In order to provide reliable results, the automatic analysis of the P-wave must be precise and reliable and must take into account technical aspects, one of which is the resolution of the acquisition system. The aim of this note is to investigate the effects of the amplitude resolution of ECG acquisition systems on the P-wave analysis. Starting from ECG recorded by an acquisition system with a least significant bit (LSB) of 31 nV (24 bit on an input range of 524 mVpp), we reproduced an ECG signal as acquired by systems with lower resolution (16, 15, 14, 13 and 12 bit). We found that, when the LSB is of the order of 128 µV (12 bit), a single P-wave is not recognizable on ECG. However, when averaging is applied, a P-wave template can be extracted, apparently suitable for the P-wave analysis. Results obtained in terms of P-wave duration and morphology revealed that the analysis of ECG at the lowest resolutions (from 12 to 14 bit, LSB higher than 30 µV) could lead to misleading results. However, the resolution used nowadays in modern electrocardiographs (15 and 16 bit, LSB <10 µV) is sufficient for the reliable analysis of the P-wave.
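The LSB values quoted above follow directly from the ADC relation LSB = input range / 2^bits; a quick check of the figures in the abstract, taking the 524 mVpp input range from the text:

```python
# LSB (least significant bit) voltage of an ADC: LSB = input_range / 2**bits.
RANGE_V = 524e-3  # 524 mVpp input range, as stated in the abstract

def lsb_volts(bits, input_range=RANGE_V):
    """Smallest resolvable voltage step for a given bit depth."""
    return input_range / 2 ** bits

for bits in (24, 16, 15, 14, 13, 12):
    print(f"{bits} bit: LSB = {lsb_volts(bits) * 1e6:.3f} uV")
# 24 bit gives ~31 nV, 16 bit ~8 uV (below the 10 uV threshold),
# and 12 bit ~128 uV, matching the values in the abstract.
```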

  15. SU-E-J-112: The Impact of Cine EPID Image Acquisition Frame Rate On Markerless Soft-Tissue Tracking

    SciTech Connect

    Yip, S; Rottmann, J; Berbeco, R

    2014-06-01

    Purpose: Although reduction of the cine EPID acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion leading to poor auto-tracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87Hz on an AS1000 portal imager. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for auto-tracking. The difference between the programmed and auto-tracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at eleven field angles from four radiotherapy patients are manually and automatically tracked with varying frame averaging. δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the auto-tracking errors increased at frame rates lower than 4.29Hz. Above 4.29Hz, changes in errors were negligible with δ<1.60mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R=0.94) and patient studies (R=0.72). Moderate to poor correlation was found between image noise and tracking error with R = -0.58 and -0.19 for both studies, respectively. Conclusion: An image acquisition frame rate of at least 4.29Hz is recommended for cine EPID tracking. Motion blurring in images with frame rates below 4.29Hz can substantially reduce the
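The frame-rate arithmetic above (12.87 Hz native, dropping to 4.29 Hz when every 3 frames are averaged) and the associated noise reduction can be sketched with a toy example. The block-mean averaging scheme here is a generic assumption, not the study's exact pipeline:

```python
import numpy as np

NATIVE_RATE_HZ = 12.87  # cine EPID acquisition rate on the AS1000, per the abstract

def average_frames(frames, n):
    """Average each run of n consecutive frames. The effective frame rate
    drops to NATIVE_RATE_HZ / n (n=3 gives the 4.29 Hz threshold), while
    uncorrelated noise is reduced by roughly sqrt(n); for moving anatomy
    the same averaging introduces motion blur."""
    frames = np.asarray(frames, dtype=float)
    usable = (len(frames) // n) * n
    return frames[:usable].reshape(-1, n, *frames.shape[1:]).mean(axis=1)

rng = np.random.default_rng(1)
noisy = 100.0 + rng.normal(0.0, 5.0, size=(300, 8, 8))  # static scene + noise
averaged = average_frames(noisy, 3)
print(NATIVE_RATE_HZ / 3)              # effective rate, ~4.29 Hz
print(noisy.std(), averaged.std())     # noise drops by ~sqrt(3)
```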

  16. Acquisition and analysis of angle-beam wavefield data

    SciTech Connect

    Dawson, Alexander J.; Michaels, Jennifer E.; Levine, Ross M.; Chen, Xin; Michaels, Thomas E.

    2014-02-18

    Angle-beam ultrasonic testing is a common practical technique used for nondestructive evaluation to detect, locate, and characterize a variety of material defects and damage. Greater understanding of both the incident wavefield produced by an angle-beam transducer and the subsequent scattering from a variety of defects and geometrical features is anticipated to increase the reliability of data interpretation. The focus of this paper is on acquiring and analyzing propagating waves from angle-beam transducers in simple, defect-free plates as a first step in the development of methods for flaw characterization. Unlike guided waves, which excite the plate throughout its thickness, angle-beam bulk waves bounce back and forth between the plate surfaces, resulting in the well-known multiple “skips” or “V-paths.” The experimental setup consists of a laser vibrometer mounted on an XYZ scanning stage, which is programmed to move point-to-point on a rectilinear grid to acquire waveform data. Although laser vibrometry is now routinely used to record guided waves for which the frequency content is below 1 MHz, it is more challenging to acquire higher frequency bulk waves in the 1–10 MHz range. Signals were recorded on the surface of an aluminum plate that were generated from a 5 MHz, 65° refracted angle, shear wave transducer-wedge combination. Data are analyzed directly in the x-t domain, via a slant stack Radon transform in the τ-p (offset time-slowness) domain, and via a 2-D Fourier transform in the ω-k domain, thereby enabling identification of specific arrivals and modes. Results compare well to those expected from a simple ray tracing analysis except for the unexpected presence of a strong Rayleigh wave.
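The ω-k analysis mentioned above is a 2-D Fourier transform over the scan dimension and time, in which each arrival or mode separates out according to its phase velocity (its slope in ω-k). A minimal sketch, with axis conventions and the helper name as assumptions; a single on-grid plane wave shows up as a peak at its frequency and wavenumber:

```python
import numpy as np

def fk_transform(wavefield, dt, dx):
    """2-D FFT of a wavefield sampled along a line: rows are scan positions,
    columns are time samples. Returns the omega-k magnitude spectrum plus the
    frequency (Hz) and wavenumber (1/m) axes, both zero-centered."""
    spec = np.fft.fftshift(np.fft.fft2(wavefield))
    freqs = np.fft.fftshift(np.fft.fftfreq(wavefield.shape[1], dt))
    wavenums = np.fft.fftshift(np.fft.fftfreq(wavefield.shape[0], dx))
    return np.abs(spec), freqs, wavenums
```

With real experimental data the grid spacings `dt` and `dx` come from the digitizer rate and the scanning-stage step size, and windowing would normally be applied before the FFT to limit spectral leakage.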

  17. Standardization of infrared breast thermogram acquisition protocols and abnormality analysis of breast thermograms

    NASA Astrophysics Data System (ADS)

    Bhowmik, Mrinal Kanti; Gogoi, Usha Rani; Das, Kakali; Ghosh, Anjan Kumar; Bhattacharjee, Debotosh; Majumdar, Gautam

    2016-05-01

    The non-invasive, painless, radiation-free and cost-effective infrared breast thermography (IBT) makes a significant contribution to improving the survival rate of breast cancer patients by detecting the disease early. This paper presents a set of standard breast thermogram acquisition protocols to improve the potentiality and accuracy of infrared breast thermograms in early breast cancer detection. By maintaining all these protocols, an infrared breast thermogram acquisition setup has been established at the Regional Cancer Centre (RCC) of Government Medical College (AGMC), Tripura, India. The acquisition of breast thermogram is followed by the breast thermogram interpretation, for identifying the presence of any abnormality. However, due to the presence of complex vascular patterns, accurate interpretation of breast thermogram is a very challenging task. The bilateral symmetry of the thermal patterns in each breast thermogram is quantitatively computed by statistical feature analysis. A series of statistical features are extracted from a set of 20 thermograms of both healthy and unhealthy subjects. Finally, the extracted features are analyzed for breast abnormality detection. The key contributions made by this paper can be highlighted as follows: a) the design of a standard protocol suite for accurate acquisition of breast thermograms, b) creation of a new breast thermogram dataset by maintaining the protocol suite, and c) statistical analysis of the thermograms for abnormality detection. By doing so, this proposed work can minimize the rate of false findings in breast thermograms and thus, it will increase the utilization potentiality of breast thermograms in early breast cancer detection.

  18. A review of breast tomosynthesis. Part II. Image reconstruction, processing and analysis, and advanced applications.

    PubMed

    Sechopoulos, Ioannis

    2013-01-01

    Many important post-acquisition aspects of breast tomosynthesis imaging can impact its clinical performance. Chief among them is the reconstruction algorithm that generates the representation of the three-dimensional breast volume from the acquired projections. But even after reconstruction, additional processes, such as artifact reduction algorithms, computer aided detection and diagnosis, among others, can also impact the performance of breast tomosynthesis in the clinical realm. In this two part paper, a review of breast tomosynthesis research is performed, with an emphasis on its medical physics aspects. In the companion paper, the first part of this review, the research performed relevant to the image acquisition process is examined. This second part will review the research on the post-acquisition aspects, including reconstruction, image processing, and analysis, as well as the advanced applications being investigated for breast tomosynthesis. PMID:23298127

  19. A review of breast tomosynthesis. Part II. Image reconstruction, processing and analysis, and advanced applications

    PubMed Central

    Sechopoulos, Ioannis

    2013-01-01

    Many important post-acquisition aspects of breast tomosynthesis imaging can impact its clinical performance. Chief among them is the reconstruction algorithm that generates the representation of the three-dimensional breast volume from the acquired projections. But even after reconstruction, additional processes, such as artifact reduction algorithms, computer aided detection and diagnosis, among others, can also impact the performance of breast tomosynthesis in the clinical realm. In this two part paper, a review of breast tomosynthesis research is performed, with an emphasis on its medical physics aspects. In the companion paper, the first part of this review, the research performed relevant to the image acquisition process is examined. This second part will review the research on the post-acquisition aspects, including reconstruction, image processing, and analysis, as well as the advanced applications being investigated for breast tomosynthesis. PMID:23298127

  20. Proposed military handbook for dynamic data acquisition and analysis - An invitation to review

    NASA Technical Reports Server (NTRS)

    Himelblau, Harry; Wise, James H.; Piersol, Allan G.; Grundvig, Max R.

    1990-01-01

    A draft Military Handbook prepared under the sponsorship of the USAF Space Division is presently being distributed throughout the U.S. for review by the aerospace community. This comprehensive document provides recommended guidelines for the acquisition and analysis of structural dynamics and aeroacoustic data, and is intended to reduce the errors and variability commonly found in flight, ground and laboratory dynamic test measurements. In addition to the usual variety of measurement problems encountered in the definition of dynamic loads, the development of design and test criteria, and the analysis of failures, special emphasis is given to certain state-of-the-art topics, such as pyroshock data acquisition and nonstationary random data analysis.

  1. Microscopy image segmentation tool: Robust image data analysis

    SciTech Connect

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K.

    2014-03-15

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  2. Comparison of unwrapped image quality and acquisition speed from forward-looking and side-looking modes of the TCTBIS

    NASA Astrophysics Data System (ADS)

    Harpring, Larry J.; Pechersky, Martin J.

    2001-04-01

    A True Color Tube Bore Inspection System (TCTBIS) has been developed to aid in the visual nondestructive examination of the inside of small diameter tubes. The instrument was developed to inspect for the presence of contaminants and discoloration inside the tube. The tubes, which have a 1.5 - 1.7 millimeter inside diameter, are integrally attached to pressure vessels that are filled to high pressure through the tubes. The latest version of the TCTBIS can operate in two modes. In the forward-looking mode a borescope is used to look down the length of the tube. In the side-looking mode, a tube containing a 45° mirror is placed over the forward-looking borescope so that a direct view of the sidewall of the tube can be seen. The work reported here is a comparison of the relative performance of these two operating modes in terms of image quality and data acquisition speed. Each mode uses an entirely different method of image acquisition and unwrapped image reconstruction. These methods along with comparison results and suggestions for improvements will be discussed in detail.

  3. Revisiting Age-of-Acquisition Effects in Spanish Visual Word Recognition: The Role of Item Imageability

    ERIC Educational Resources Information Center

    Wilson, Maximiliano A.; Cuetos, Fernando; Davies, Rob; Burani, Cristina

    2013-01-01

    Word age-of-acquisition (AoA) affects reading. The mapping hypothesis predicts AoA effects when input--output mappings are arbitrary. In Spanish, the orthography-to-phonology mappings required for word naming are consistent; therefore, no AoA effects are expected. Nevertheless, AoA effects have been found, motivating the present investigation of…

  4. Digital breast tomosynthesis: studies of the effects of acquisition geometry on contrast-to-noise ratio and observer preference of low-contrast objects in breast phantom images.

    PubMed

    Goodsitt, Mitchell M; Chan, Heang-Ping; Schmitz, Andrea; Zelakiewicz, Scott; Telang, Santosh; Hadjiiski, Lubomir; Watcharotone, Kuanwong; Helvie, Mark A; Paramagul, Chintana; Neal, Colleen; Christodoulou, Emmanuel; Larson, Sandra C; Carson, Paul L

    2014-10-01

    The effect of acquisition geometry in digital breast tomosynthesis was evaluated with studies of contrast-to-noise ratios (CNRs) and observer preference. Contrast-detail (CD) test objects in 5 cm thick phantoms with breast-like backgrounds were imaged. Twelve different angular acquisitions (average glandular dose for each ~1.1 mGy) were performed ranging from narrow angle 16° with 17 projection views (16d17p) to wide angle 64d17p. Focal slices of SART-reconstructed images of the CD arrays were selected for CNR computations and the reader preference study. For the latter, pairs of images obtained with different acquisition geometries were randomized and scored by 7 trained readers. The total scores for all images and readings for each acquisition geometry were compared as were the CNRs. In general, readers preferred images acquired with wide angle as opposed to narrow angle geometries. The mean percent preferred was highly correlated with tomosynthesis angle (R = 0.91). The highest scoring geometries were 60d21p (95%), 64d17p (80%), and 48d17p (72%); the lowest scoring were 16d17p (4%), 24d9p (17%) and 24d13p (33%). The measured CNRs for the various acquisitions showed much overlap but were overall highest for wide-angle acquisitions. Finally, the mean reader scores were well correlated with the mean CNRs (R = 0.83). PMID:25211509
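
    The reader scores above are anchored to a quantitative CNR measurement. The abstract does not spell out the ROI protocol; a minimal sketch, assuming the common (mean signal - mean background) / background standard deviation definition, might look like:

```python
import numpy as np

def cnr(roi_signal, roi_background):
    """Contrast-to-noise ratio under one common definition:
    (mean_signal - mean_background) / std_background."""
    return (roi_signal.mean() - roi_background.mean()) / roi_background.std()

# Toy example: a slightly brighter ROI on a noisy background.
rng = np.random.default_rng(0)
background = rng.normal(100.0, 5.0, size=(50, 50))
signal = rng.normal(110.0, 5.0, size=(10, 10))
print(cnr(signal, background))
```

    In practice the ROIs would be drawn on the focal slice of each reconstructed contrast-detail object and its local background.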

  5. Study on clear stereo image pair acquisition method for small objects with big vertical size in SLM vision system.

    PubMed

    Wang, Yuezong; Jin, Yan; Wang, Lika; Geng, Benliang

    2016-05-01

    A microscopic vision system with a stereo light microscope (SLM) has been applied to surface profile measurement. If the vertical size of a small object exceeds the depth of field, its images will contain both clear and fuzzy regions. Hence, in order to obtain clear stereo images, we propose a microscopic sequence image fusion method suitable for SLM vision systems. First, a procedure to capture and align the image sequence is designed, which outputs aligned stereo image sequences. Second, we decompose the stereo image sequences by wavelet analysis and obtain a series of high- and low-frequency coefficients at different resolutions. Fused stereo images are then output based on the high- and low-frequency coefficient fusion rules proposed in this article. The results show that Δw1 (Δw2) and ΔZ of stereo images in a sequence have a linear relationship; hence, image alignment is necessary before fusion. Compared with other image fusion methods, our method outputs clear fused stereo images with better performance, is well suited to SLM vision systems, and helps avoid image blur caused by the large vertical size of small objects. PMID:26970109
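
    The article's exact fusion rules are not reproduced in the abstract, but the general scheme it describes (wavelet decomposition, then separate rules for low- and high-frequency coefficients) can be sketched with a self-contained one-level Haar transform: average the low-frequency bands, keep the larger-magnitude detail coefficient, and invert.

```python
import numpy as np

def haar2(x):
    """One-level 2D Haar transform; x must have even dimensions."""
    a = (x[0::2] + x[1::2]) / 2.0          # row averages
    d = (x[0::2] - x[1::2]) / 2.0          # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2] = a + d; x[1::2] = a - d
    return x

def fuse(img1, img2):
    """Fuse two aligned images: average the low band, take the
    larger-magnitude coefficient in each detail band."""
    c1, c2 = haar2(img1), haar2(img2)
    ll = (c1[0] + c2[0]) / 2.0
    details = [np.where(np.abs(b1) >= np.abs(b2), b1, b2)
               for b1, b2 in zip(c1[1:], c2[1:])]
    return ihaar2(ll, *details)
```

    Real systems use multi-level transforms and subtler activity measures; the invertible Haar pair keeps the sketch self-contained.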


  7. Image registration with uncertainty analysis

    DOEpatents

    Simonson, Katherine M.

    2011-03-22

    In an image registration method, edges are detected in a first image and a second image. A percentage of edge pixels in a subset of the second image that are also edges in the first image shifted by a translation is calculated. A best registration point is calculated based on a maximum percentage of edges matched. In a predefined search region, all registration points other than the best registration point are identified that are not significantly worse than the best registration point according to a predetermined statistical criterion.
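
    The scoring criterion described above can be sketched roughly as follows; the function names, the exhaustive search window, and the wrap-around shift via np.roll are illustrative choices, not the patent's implementation:

```python
import numpy as np

def edge_match_percent(edges1, edges2, dx, dy):
    """Fraction of edge pixels in edges2 that coincide with edges in
    edges1 shifted by (dx, dy); edges* are boolean arrays of equal shape."""
    shifted = np.roll(np.roll(edges1, dy, axis=0), dx, axis=1)
    n_edges = edges2.sum()
    return (shifted & edges2).sum() / n_edges if n_edges else 0.0

def best_registration(edges1, edges2, search=5):
    """Exhaustively score translations in a small search region and
    return the translation with the highest edge-match percentage."""
    scores = {(dx, dy): edge_match_percent(edges1, edges2, dx, dy)
              for dx in range(-search, search + 1)
              for dy in range(-search, search + 1)}
    return max(scores, key=scores.get)

# Toy example: a vertical edge shifted right by 2 pixels.
e1 = np.zeros((20, 20), bool); e1[:, 8] = True
e2 = np.zeros((20, 20), bool); e2[:, 10] = True
```

    The patent's second step, identifying all other registration points not significantly worse than the best under a statistical criterion, would be a filter over the same `scores` dictionary.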

  8. Millimeter-wave sensor image analysis

    NASA Technical Reports Server (NTRS)

    Wilson, William J.; Suess, Helmut

    1989-01-01

    Images from an airborne scanning radiometer operating at a frequency of 98 GHz have been analyzed. The mm-wave images were obtained in 1985/1986 using the JPL mm-wave imaging sensor. The goal of this study was to enhance the information content of these images and make their interpretation easier for human analysts. In this paper, a visual interpretative approach was used for information extraction from the images. This included the application of nonlinear transform techniques for noise reduction and for color, contrast, and edge enhancement. Results of the techniques on selected mm-wave images are presented.
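
    The abstract does not detail which nonlinear transforms were applied; histogram equalization is one standard nonlinear contrast enhancement and is sketched here purely as an illustration of the class of technique:

```python
import numpy as np

def equalize(img, levels=256):
    """Histogram-equalize an integer-valued image: map intensities through
    the normalized cumulative histogram to spread out the used gray levels."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(img.dtype)
    return lut[img]
```

    A two-level input maps to widely separated output levels, which is the contrast-stretching effect such transforms exploit.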

  9. apART: system for the acquisition, processing, archiving, and retrieval of digital images in an open, distributed imaging environment

    NASA Astrophysics Data System (ADS)

    Schneider, Uwe; Strack, Ruediger

    1992-04-01

    apART reflects the structure of an open, distributed environment. According to the general trend in the area of imaging, network-capable, general-purpose workstations with capabilities of open system image communication and image input are used. Several heterogeneous components like CCD cameras, slide scanners, and image archives can be accessed. The system is driven by an object-oriented user interface where devices (image sources and destinations), operators (derived from a commercial image processing library), and images (of different data types) are managed and presented uniformly to the user. Browsing mechanisms are used to traverse devices, operators, and images. An audit trail mechanism is offered to record interactive operations on low-resolution image derivatives. These operations are processed off-line on the original image. Thus, the processing of extremely high-resolution raster images is possible, and the performance of resolution-dependent operations is enhanced significantly during interaction. An object-oriented database system (APRIL), which can be browsed, is integrated into the system. Attribute retrieval is supported by the user interface. Other essential features of the system include: implementation on top of the X Window System (X11R4) and the OSF/Motif widget set; a SUN4 general-purpose workstation, including Ethernet, magneto-optical disc, etc., as the hardware platform for the user interface; complete graphical-interactive parametrization of all operators; support of different image interchange formats (GIF, TIFF, IIF, etc.); consideration of current IPI standard activities within ISO/IEC for further refinement and extensions.

  10. VPI - VIBRATION PATTERN IMAGER: A CONTROL AND DATA ACQUISITION SYSTEM FOR SCANNING LASER VIBROMETERS

    NASA Technical Reports Server (NTRS)

    Rizzi, S. A.

    1994-01-01

    The Vibration Pattern Imager (VPI) system was designed to control and acquire data from laser vibrometer sensors. The PC computer based system uses a digital signal processing (DSP) board and an analog I/O board to control the sensor and to process the data. The VPI system was originally developed for use with the Ometron VPI Sensor (Ometron Limited, Kelvin House, Worsley Bridge Road, London, SE26 5BX, England), but can be readily adapted to any commercially available sensor which provides an analog output signal and requires analog inputs for control of mirror positioning. VPI's graphical user interface allows the operation of the program to be controlled interactively through keyboard and mouse-selected menu options. The main menu controls all functions for setup, data acquisition, display, file operations, and exiting the program. Two types of data may be acquired with the VPI system: single point or "full field". In the single point mode, time series data is sampled by the A/D converter on the I/O board at a user-defined rate for the selected number of samples. The position of the measuring point, adjusted by mirrors in the sensor, is controlled via a mouse input. In the "full field" mode, the measurement point is moved over a user-selected rectangular area with up to 256 positions in both x and y directions. The time series data is sampled by the A/D converter on the I/O board and converted to a root-mean-square (rms) value by the DSP board. The rms "full field" velocity distribution is then uploaded for display and storage. VPI is written in C language and Texas Instruments' TMS320C30 assembly language for IBM PC series and compatible computers running MS-DOS. The program requires 640K of RAM for execution, and a hard disk with 10Mb or more of disk space is recommended. 
The program also requires a mouse, a VGA graphics display, a Four Channel analog I/O board (Spectrum Signal Processing, Inc.; Westborough, MA), a break-out box and a Spirit-30 board (Sonitech
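
    The core reduction in VPI's "full field" mode, collapsing each grid point's time series to a single rms velocity, is simple enough to sketch directly; the array layout below is an assumption for illustration:

```python
import numpy as np

def rms(samples):
    """Root-mean-square of one sampled time series, as computed per
    measurement point in VPI's full-field mode."""
    samples = np.asarray(samples, dtype=float)
    return np.sqrt(np.mean(samples ** 2))

def rms_map(block):
    """Reduce an (ny, nx, nt) block of time series, one per scan
    position, to an (ny, nx) rms velocity map."""
    return np.sqrt(np.mean(np.asarray(block, dtype=float) ** 2, axis=-1))
```

    In the original system this reduction ran on the DSP board before the map was uploaded for display.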

  11. System Matrix Analysis for Computed Tomography Imaging

    PubMed Central

    Flores, Liubov; Vidal, Vicent; Verdú, Gumersindo

    2015-01-01

    In practical applications of computed tomography imaging (CT), it is often the case that the set of projection data is incomplete owing to the physical conditions of the data acquisition process. On the other hand, the high radiation dose imposed on patients is also undesired. These issues demand that high quality CT images can be reconstructed from limited projection data. For this reason, iterative methods of image reconstruction have become a topic of increased research interest. Several algorithms have been proposed for few-view CT. We consider that the accurate solution of the reconstruction problem also depends on the system matrix that simulates the scanning process. In this work, we analyze the application of the Siddon method to generate elements of the matrix and we present results based on real projection data. PMID:26575482
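
    Siddon's method computes the exact intersection length of each ray with each pixel to fill a system-matrix row. The sketch below conveys the idea with dense point sampling along the ray instead of exact intersections, trading accuracy for brevity; grid conventions are assumptions:

```python
import numpy as np

def ray_row(n, x0, y0, x1, y1, n_samples=2000):
    """Approximate one system-matrix row: path length of the ray
    (x0, y0) -> (x1, y1) through each pixel of an n x n grid covering
    [0, n] x [0, n].  Each sample deposits length / n_samples into the
    pixel it falls in (Siddon would compute exact segment lengths)."""
    t = np.linspace(0.0, 1.0, n_samples)
    xs = x0 + t * (x1 - x0)
    ys = y0 + t * (y1 - y0)
    length = np.hypot(x1 - x0, y1 - y0)
    row = np.zeros(n * n)
    ix = np.clip(xs.astype(int), 0, n - 1)
    iy = np.clip(ys.astype(int), 0, n - 1)
    np.add.at(row, iy * n + ix, length / n_samples)
    return row
```

    Stacking one such row per projection ray yields the (sparse) system matrix that iterative few-view reconstruction methods repeatedly apply.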


  13. Quantitative Computed Tomography and Image Analysis for Advanced Muscle Assessment

    PubMed Central

    Edmunds, Kyle Joseph; Gíslason, Magnus K.; Arnadottir, Iris D.; Marcante, Andrea; Piccione, Francesco; Gargiulo, Paolo

    2016-01-01

    Medical imaging is of particular interest in the field of translational myology, as extant literature describes the utilization of a wide variety of techniques to non-invasively recapitulate and quantify various internal and external tissue morphologies. In the clinical context, medical imaging remains a vital tool for diagnostics and investigative assessment. This review outlines the results from several investigations on the use of computed tomography (CT) and image analysis techniques to assess muscle conditions and degenerative processes due to aging or pathological conditions. Herein, we detail the acquisition of spiral CT images and the use of advanced image analysis tools to characterize muscles in 2D and 3D. Results from these studies recapitulate changes in tissue composition within muscles, as visualized by the association of tissue types to specified Hounsfield Unit (HU) values for fat, loose connective tissue or atrophic muscle, and normal muscle, including fascia and tendon. We show how results from these analyses can be presented as both average HU values and compositions with respect to total muscle volumes, demonstrating the reliability of these tools to monitor, assess and characterize muscle degeneration. PMID:27478562
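
    Mapping voxels to tissue classes by HU window and reporting the composition of a muscle volume can be sketched as follows; the windows below are illustrative placeholders, not the thresholds used in the reviewed studies:

```python
import numpy as np

# Illustrative HU windows only; published studies use their own thresholds.
TISSUE_HU = {
    "fat": (-200, -10),
    "loose_connective_or_atrophic": (-9, 40),
    "normal_muscle": (41, 200),
}

def composition(hu_volume):
    """Percent of voxels falling in each HU window."""
    hu = np.asarray(hu_volume)
    total = hu.size
    return {name: 100.0 * np.count_nonzero((hu >= lo) & (hu <= hi)) / total
            for name, (lo, hi) in TISSUE_HU.items()}
```

    Mean HU per class and per-class volume fractions of the total segmented muscle follow directly from the same masks.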

  14. A 3D image analysis tool for SPECT imaging

    NASA Astrophysics Data System (ADS)

    Kontos, Despina; Wang, Qiang; Megalooikonomou, Vasileios; Maurer, Alan H.; Knight, Linda C.; Kantor, Steve; Fisher, Robert S.; Simonian, Hrair P.; Parkman, Henry P.

    2005-04-01

    We have developed semi-automated and fully-automated tools for the analysis of 3D single-photon emission computed tomography (SPECT) images. The focus is on the efficient boundary delineation of complex 3D structures that enables accurate measurement of their structural and physiologic properties. We employ intensity based thresholding algorithms for interactive and semi-automated analysis. We also explore fuzzy-connectedness concepts for fully automating the segmentation process. We apply the proposed tools to SPECT image data capturing variation of gastric accommodation and emptying. These image analysis tools were developed within the framework of a noninvasive scintigraphic test to measure simultaneously both gastric emptying and gastric volume after ingestion of a solid or a liquid meal. The clinical focus of the particular analysis was to probe associations between gastric accommodation/emptying and functional dyspepsia. Employing the proposed tools, we outline effectively the complex three dimensional gastric boundaries shown in the 3D SPECT images. We also perform accurate volume calculations in order to quantitatively assess the gastric mass variation. This analysis was performed both with the semi-automated and fully-automated tools. The results were validated against manual segmentation performed by a human expert. We believe that the development of an automated segmentation tool for SPECT imaging of the gastric volume variability will allow for other new applications of SPECT imaging where there is a need to evaluate complex organ function or tumor masses.
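
    At its core, the interactive thresholding step plus the volume calculation reduces to a mask and a voxel count; the per-voxel volume below is an assumed calibration parameter, and the fuzzy-connectedness segmentation the authors also explore is not shown:

```python
import numpy as np

def segment_and_volume(volume, threshold, voxel_ml=0.1):
    """Binary intensity threshold of a 3D SPECT-like array, plus the
    segmented volume as voxel count times an assumed per-voxel volume (ml)."""
    mask = volume >= threshold
    return mask, mask.sum() * voxel_ml
```

    Tracking this volume across time points gives the gastric volume variation the study quantifies.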

  15. Quantitative analysis of digital microscope images.

    PubMed

    Wolf, David E; Samarasekera, Champika; Swedlow, Jason R

    2013-01-01

    This chapter discusses quantitative analysis of digital microscope images and presents several exercises to illustrate the concepts. The basic concepts of quantitative imaging analysis rest on a well-established foundation of signal theory and quantitative data analysis. Several examples help in understanding the imaging process as a transformation from sample to image, along with the limits and considerations of quantitative analysis. The chapter introduces the concept of digitally correcting images and focuses on some of the more critical types of data transformation and some frequently encountered issues in quantization. Image processing is a form of data processing, of which there are many examples, such as fitting data to a theoretical curve. In all these cases, it is critical that care is taken during all steps of transformation, processing, and quantization. PMID:23931513
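
    "Digitally correcting the images" commonly means dark-frame and flat-field (shading) correction; a minimal sketch of that standard correction, not the chapter's specific exercises, is:

```python
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Standard shading correction: (raw - dark) / (flat - dark),
    rescaled so a uniform sample maps back to the mean flat-field level."""
    num = raw.astype(float) - dark
    den = np.clip(flat.astype(float) - dark, 1e-9, None)
    return num / den * (flat.astype(float) - dark).mean()
```

    Applying such corrections before quantization is exactly the kind of transformation the chapter urges care with.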


  17. Evaluation of spatial resolution in image acquisition by optical flatbed scanners for radiochromic film dosimetry

    NASA Astrophysics Data System (ADS)

    Asero, G.; Greco, C.; Gueli, A. M.; Raffaele, L.; Spampinato, S.

    2016-03-01

    Introduction: Radiochromic films are two-dimensional dosimeters that do not require developing and give values of absorbed dose with accuracy and precision. Since this dosimeter colours directly after irradiation, it can be digitized with commercial optical flatbed scanners to obtain a calibration curve that links the blackening of the film with dose. Although the film has an intrinsically high spatial resolution, the scanner determines the actual resolution of this dosimeter, in particular through the "dot per inch" (dpi) parameter. The present study investigates the effective spatial resolution of a scanner used for Gafchromic® XR-QA2 film (designed for radiology Quality Assurance) analysis. Material and methods: The quantitative evaluation of the resolution was performed with the Modulation Transfer Function (MTF) method, comparing the nominal resolution with the experimental one. The analysis was performed with two procedures. First, the 1951 USAF resolution test chart, a tool that tests the performance of optical devices, was used. Second, a combined system of a mammography X-ray tube, XR-QA2 film, and a bar pattern object was used. In both cases the MTF method was applied and the results were compared. Results: The USAF and film images were acquired at increasing dpi with a standard protocol for radiochromic analysis, to evaluate horizontal and vertical resolution. The effective resolution corresponds to the value of the MTF at 50%. For both procedures, it was verified that, beyond a certain dpi value, the effective resolution saturates. Conclusion: The study found that, for dosimetric applications, the dpi of the scanner has to be set to a reasonable value because, if too high, it requires long scanning and computation times without providing additional information.
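
    The MTF-at-50% criterion can be sketched numerically: take the Fourier magnitude of a measured line spread function, normalize it, and locate the frequency where it crosses 0.5. This is a generic illustration of the method, not the study's measurement pipeline:

```python
import numpy as np

def mtf_from_lsf(lsf, spacing_mm):
    """MTF as the normalized magnitude of the Fourier transform of a
    line spread function sampled every `spacing_mm` millimetres."""
    m = np.abs(np.fft.rfft(lsf))
    m /= m[0]
    f = np.fft.rfftfreq(len(lsf), d=spacing_mm)  # cycles per mm
    return f, m

def freq_at_half(f, m):
    """First frequency where the MTF falls to 50%, by linear interpolation."""
    idx = np.argmax(m < 0.5)
    f0, f1, m0, m1 = f[idx - 1], f[idx], m[idx - 1], m[idx]
    return f0 + (m0 - 0.5) * (f1 - f0) / (m0 - m1)
```

    Scanning at higher dpi shrinks `spacing_mm` but, past the saturation point the study reports, no longer moves the 50% frequency.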

  18. Multiscale Analysis of Solar Image Data

    NASA Astrophysics Data System (ADS)

    Young, C. A.; Myers, D. C.

    2001-12-01

    It is often said that the blessing and curse of solar physics is that there is too much data. Solar missions such as Yohkoh, SOHO, and TRACE have shown us the Sun with amazing clarity but have also cursed us with a larger amount of higher-complexity data than previous missions. We have improved our view of the Sun, yet we have not improved our analysis techniques. The standard techniques used for the analysis of solar images generally consist of observing the evolution of features in a sequence of byte-scaled images or byte-scaled difference images. The determination of features and structures in the images is done qualitatively by the observer; little quantitative and objective analysis is done with these images. Many advances in image processing techniques have occurred in the past decade, and many of these methods are possibly suited to solar image analysis. Multiscale/multiresolution methods are perhaps the most promising. These methods have been used to formulate the human ability to view and comprehend phenomena on different scales, so they could be used to quantify the image processing done by observers' eyes and brains. In this work we present a preliminary analysis of multiscale techniques applied to solar image data. Specifically, we explore the use of the 2-d wavelet transform and related transforms with EIT, LASCO, and TRACE images. This work was supported by NASA contract NAS5-00220.
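
    A simple way to see what a multiscale decomposition buys over byte-scaled difference images is a stack of detail planes from successive smoothings. The sketch below is a fixed-kernel simplification of an à trous-style scheme (a true à trous transform dilates the kernel at each level), not the transforms used in the work:

```python
import numpy as np

def box_blur(img):
    """3x3 box blur with edge padding."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def multiscale_planes(img, n_levels=3):
    """Successive smoothings; each detail plane isolates structure at one
    scale, and the final residual holds the largest scales.  The planes
    sum back to the original image exactly."""
    planes, current = [], img.astype(float)
    for _ in range(n_levels):
        smoothed = box_blur(current)
        planes.append(current - smoothed)
        current = smoothed
    planes.append(current)  # residual
    return planes
```

    Feature detection can then be done per plane, giving the quantitative, scale-resolved analysis the abstract argues for.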

  19. Light calibration and quality assessment methods for Reflectance Transformation Imaging applied to artworks' analysis

    NASA Astrophysics Data System (ADS)

    Giachetti, A.; Daffara, C.; Reghelin, C.; Gobbetti, E.; Pintus, R.

    2015-06-01

    In this paper we analyze some problems related to the acquisition of multiple illumination images for Polynomial Texture Maps (PTM) or generic Reflectance Transform Imaging (RTI). We show that intensity and directionality nonuniformity can be a relevant issue when acquiring manual sets of images with the standard highlight-based setup both using a flash lamp and a LED light. To maintain a cheap and flexible acquisition setup that can be used on field and by non-experienced users we propose to use a dynamic calibration and correction of the lights based on multiple intensity and direction estimation around the imaged object during the acquisition. Preliminary tests on the results obtained have been performed by acquiring a specifically designed 3D printed pattern in order to see the accuracy of the acquisition obtained both for spatial discrimination of small structures and normal estimation, and on samples of different types of paper in order to evaluate material discrimination. We plan to design and build from our analysis and from the tools developed and under development a set of novel procedures and guidelines that can be used to turn the cheap and common RTI acquisition setup from a simple way to enrich object visualization into a powerful method for extracting quantitative characterization both of surface geometry and of reflective properties of different materials. These results could have relevant applications in the Cultural Heritage domain, in order to recognize different materials used in paintings or investigate the ageing status of artifacts' surface.

  20. Computer programs for absolute neutron activation analysis on the nuclear data 6620 data acquisition system

    SciTech Connect

    Wade, J.W.; Emery, J.F.

    1982-03-01

    Five computer programs that provide multielement neutron activation analysis are discussed. The software package was designed for use on the Nuclear Data 6620 Data Acquisition System and interacts with existing Nuclear Data Corporation software. The programs were developed to make use of the capabilities of the 6620 system to analyze large numbers of samples and to assist with the large sample workload that had begun in the neutron activation analysis facility of the Oak Ridge Research Reactor. The Nuclear Data neutron activation software is unable to perform absolute activation analysis and was therefore inefficient and inadequate for our applications.
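
    Absolute activation analysis evaluates the standard activation equation directly rather than relying on comparator standards. A minimal sketch of that equation, with detector efficiency and gamma abundance omitted and all parameter names mine, is:

```python
import math

def induced_activity(n_atoms, sigma_cm2, flux, half_life_s, t_irr_s, t_decay_s):
    """Induced activity (Bq) from the standard activation equation:
    A = N * sigma * phi * (1 - exp(-lambda * t_irr)) * exp(-lambda * t_decay).
    Detector efficiency and gamma-ray abundance are omitted for brevity."""
    lam = math.log(2) / half_life_s
    saturation = 1.0 - math.exp(-lam * t_irr_s)
    decay = math.exp(-lam * t_decay_s)
    return n_atoms * sigma_cm2 * flux * saturation * decay
```

    Inverting this relation for `n_atoms` from a measured activity is what makes the method "absolute".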

  1. Multifunctional data acquisition and analysis and optical sensors: a Bonneville Power Administration (BPA) update

    NASA Astrophysics Data System (ADS)

    Erickson, Dennis C.; Donnelly, Matt K.

    1995-04-01

    The authors present a design concept describing a multifunctional data acquisition and analysis architecture for advanced power system monitoring. The system is tailored to take advantage of the salient features of low energy sensors, particularly optical types. The discussion of the system concept and optical sensors is based on research at BPA and PNL and on progress made at existing BPA installations and other sites in the western power system.

  2. A New Acquisition and Imaging System for Environmental Measurements: An Experience on the Italian Cultural Heritage

    PubMed Central

    Leccese, Fabio; Cagnetti, Marco; Calogero, Andrea; Trinca, Daniele; di Pasquale, Stefano; Giarnetti, Sabino; Cozzella, Lorenzo

    2014-01-01

    A new acquisition system for the remote control of wall paintings has been realized and tested in the field. The system measures temperature and atmospheric pressure in an archeological site where a fresco has been put under control. The measuring chain has been designed to be used in unfavorable environments where neither electric power nor telecommunication infrastructures are available. The environmental parameters obtained from the local monitoring are then transferred remotely, allowing easier management by experts in the field of conservation of cultural heritage. The local acquisition system uses an electronic card based on microcontrollers and sends the data to a central unit realized with a Raspberry Pi. The latter manages a high-quality camera to take pictures of the fresco. Finally, to realize remote control at a site not reached by internet signals, a connection combining different communication technologies, such as WiMAX, Ethernet, GPRS, and satellite, has been set up. PMID:24859030


  5. Explaining the "Natural Order of L2 Morpheme Acquisition" in English: A Meta-Analysis of Multiple Determinants

    ERIC Educational Resources Information Center

    Goldschneider, Jennifer M.; DeKeyser, Robert M.

    2005-01-01

    This meta-analysis pools data from 25 years of research on the order of acquisition of English grammatical morphemes by students of English as a second language (ESL). Some researchers have posited a "natural" order of acquisition common to all ESL learners, but no single cause has been shown for this phenomenon. Our study investigated whether a…

  6. The large-scale digital cell analysis system: an open system for nonperturbing live cell imaging.

    PubMed

    Davis, Paul J; Kosmacek, Elizabeth A; Sun, Yuansheng; Ianzini, Fiorenza; Mackey, Michael A

    2007-12-01

    The Large-Scale Digital Cell Analysis System (LSDCAS) was designed to provide a highly extensible open source live cell imaging system. Analysis of cell growth data has demonstrated a lack of perturbation in cells imaged using LSDCAS, through reference to cell growth data from cells growing in CO(2) incubators. LSDCAS consists of data acquisition, data management and data analysis software, and is currently a Core research facility at the Holden Comprehensive Cancer Center at the University of Iowa. Using LSDCAS analysis software, this report and others show that although phase-contrast imaging has no apparent effect on cell growth kinetics and viability, fluorescent image acquisition in the cell lines tested caused a measurable level of growth perturbation using LSDCAS. This report describes the current design of the system, reasons for the implemented design, and details its basic functionality. The LSDCAS software runs on the GNU/Linux operating system, and provides easy to use, graphical programs for data acquisition and quantitative analysis of cells imaged with phase-contrast or fluorescence microscopy (alone or in combination), and complete source code is freely available under the terms of the GNU Public Software License at the project website (http://lsdcas.engineering.uiowa.edu). PMID:18045324

  7. Image Reconstruction Using Analysis Model Prior.

    PubMed

    Han, Yu; Du, Huiqian; Lam, Fan; Mei, Wenbo; Fang, Liping

    2016-01-01

    The analysis model has been previously exploited as an alternative to the classical sparse synthesis model for designing image reconstruction methods. Applying a suitable analysis operator on the image of interest yields a cosparse outcome which enables us to reconstruct the image from undersampled data. In this work, we introduce an additional prior in the analysis context and theoretically study the uniqueness issues in terms of analysis operators in general position and the specific 2D finite difference operator. We establish bounds on the minimum measurement numbers which are lower than those in cases without using the analysis model prior. Based on the idea of iterative cosupport detection (ICD), we develop a novel image reconstruction model and an effective algorithm, achieving significantly better reconstruction performance. Simulation results on synthetic and practical magnetic resonance (MR) images are also shown to illustrate our theoretical claims. PMID:27379171
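
    The 2D finite difference operator the paper studies, and the cosparsity (cosupport size) it exploits, can be sketched in a few lines; this is a minimal illustration of the operator, not the paper's ICD algorithm:

```python
import numpy as np

def finite_difference(img):
    """2D finite difference analysis operator: horizontal and vertical
    first differences of the image."""
    dx = np.diff(img, axis=1)
    dy = np.diff(img, axis=0)
    return dx, dy

def cosparsity(img, tol=1e-12):
    """Number of (near-)zero analysis coefficients, i.e. the cosupport
    size that the analysis model exploits for reconstruction."""
    dx, dy = finite_difference(img)
    return int((np.abs(dx) <= tol).sum() + (np.abs(dy) <= tol).sum())
```

    A piecewise-constant image has nearly all of its analysis coefficients at zero, which is exactly why such operators pair well with undersampled reconstruction.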


  9. Fast Acquisition and Reconstruction of Optical Coherence Tomography Images via Sparse Representation

    PubMed Central

    Li, Shutao; McNabb, Ryan P.; Nie, Qing; Kuo, Anthony N.; Toth, Cynthia A.; Izatt, Joseph A.; Farsiu, Sina

    2014-01-01

    In this paper, we present a novel technique, based on compressive sensing principles, for reconstruction and enhancement of multi-dimensional image data. Our method is a major improvement and generalization of the multi-scale sparsity based tomographic denoising (MSBTD) algorithm we recently introduced for reducing speckle noise. Our new technique exhibits several advantages over MSBTD, including its capability to simultaneously reduce noise and interpolate missing data. Unlike MSBTD, our new method does not require an a priori high-quality image from the target imaging subject and thus offers the potential to shorten clinical imaging sessions. This novel image restoration method, which we termed sparsity based simultaneous denoising and interpolation (SBSDI), utilizes sparse representation dictionaries constructed from previously collected datasets. We tested the SBSDI algorithm on retinal spectral domain optical coherence tomography images captured in the clinic. Experiments showed that the SBSDI algorithm qualitatively and quantitatively outperforms other state-of-the-art methods. PMID:23846467

  10. Description, Recognition and Analysis of Biological Images

    SciTech Connect

    Yu Donggang; Jin, Jesse S.; Luo Suhuai; Pham, Tuan D.; Lai Wei

    2010-01-25

    The description, recognition, and analysis of biological images play an important role in describing and understanding the related biological information. The color images are separated by color reduction. A new and efficient linearization algorithm is introduced based on criteria of difference chain codes. A series of critical points is obtained from the linearized lines. The curvature angle, linearity, maximum linearity, convexity, concavity, and bend angle of the linearized lines are calculated from the starting line to the end line along all smoothed contours. This method can be used for shape description and recognition. The analysis, decision, and classification of biological images are based on the description of morphological structures, color information, and prior knowledge, which are associated with each other. The efficiency of the algorithms is demonstrated in two applications. One is the description, recognition, and analysis of color flower images; the other is the dynamic description, recognition, and analysis of cell-cycle images.
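
    One of the shape descriptors mentioned above, the bend angle at critical points of a linearized contour, can be sketched as follows; the function name and the angle convention (degrees between consecutive segments) are illustrative assumptions, not the paper's definitions:

    ```python
    import numpy as np

    def bend_angles(points):
        """Bend angle at each interior critical point of a linearized
        contour: the angle between the incoming and outgoing segments."""
        p = np.asarray(points, dtype=float)
        v1 = p[1:-1] - p[:-2]   # incoming segment vectors
        v2 = p[2:] - p[1:-1]    # outgoing segment vectors
        cosang = (v1 * v2).sum(axis=1) / (
            np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1))
        return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

    square = [[0, 0], [1, 0], [1, 1], [0, 1]]
    print(bend_angles(square))  # right-angle turns: 90 degrees at each corner
    ```

    Descriptors like linearity or convexity can be derived similarly from the signs and magnitudes of these angles along the contour.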

  11. System and method for improving ultrasound image acquisition and replication for repeatable measurements of vascular structures

    NASA Technical Reports Server (NTRS)

    Selzer, Robert H. (Inventor); Hodis, Howard N. (Inventor)

    2006-01-01

    High resolution B-mode ultrasound images of the common carotid artery are obtained with an ultrasound transducer using a standardized methodology. Subjects are supine with the head counter-rotated 45 degrees using a head pillow. The jugular vein and carotid artery are located and positioned in a vertically stacked orientation. The transducer is rotated 90 degrees around the centerline of the transverse image of the stacked structure to obtain a longitudinal image while maintaining the vessels in a stacked position. A computerized methodology assists operators to accurately replicate images obtained over several spaced-apart examinations. The methodology utilizes a split-screen display in which the arterial ultrasound image from an earlier examination is displayed on one side of the screen while a real-time live ultrasound image from a current examination is displayed next to the earlier image on the opposite side of the screen. By viewing both images, whether simultaneously or alternately, while manually adjusting the ultrasound transducer, an operator is able to bring into view the real-time image that best matches a selected image from the earlier ultrasound examination. Utilizing this methodology for the measurement of vascular dimensions such as carotid arterial IMT and diameter, the coefficient of variation is substantially reduced to values of approximately 1.0% to 1.25%. All images contain anatomical landmarks for reproducing probe angulation, including visualization of the carotid bulb and stacking of the jugular vein above the carotid artery, and the initial instrumentation settings used at the baseline measurement are maintained during all follow-up examinations.

  12. Digital live-tracking 3-dimensional minisensors for recording head orientation during image acquisition

    PubMed Central

    de Paula, Leonardo Koerich; Ackerman, James L.; Carvalho, Felipe de Assis Ribeiro; Eidson, Lindsey; Cevidanes, Lucia Helena Soares

    2013-01-01

    Introduction Our objective was to test the value of minisensors for recording unrestrained head position with 6 degrees of freedom during 3-dimensional stereophotogrammetry. Methods Four 3-dimensional pictures (3dMD, Atlanta, Ga) were taken of 20 volunteers as follows: (1) in unrestrained head position, (2) a repeat of picture 1, (3) in unrestrained head position wearing a headset with 3-dimensional live tracking sensors (3-D Guidance trackSTAR; Ascension Technology, Burlington, Vt), and (4) a repeat of picture 3. The sensors were used to track the x, y, and z coordinates (pitch, roll, and yaw) of the head in space. The patients were seated in front of a mirror and asked to stand and take a walk between each acquisition. Eight landmarks were identified in each 3-dimensional picture (nasion, tip of nose, subnasale, right and left lip commissures, midpoints of upper and lower lip vermilions, soft-tissue B-point). The distances between correspondent landmarks were measured between pictures 1 and 2 and 3 and 4 with software. The Student t test was used to test differences between unrestrained head position with and without sensors. Results Interlandmark distances for pictures 1 and 2 (head position without the sensors) and pictures 3 and 4 (head position with sensors) were consistent for all landmarks, indicating that roll, pitch, and yaw of the head are controlled independently of the sensors. However, interlandmark distances were on average 17.34 ± 0.32 mm between pictures 1 and 2. Between pictures 3 and 4, the distances averaged 6.17 ± 0.15 mm. All interlandmark distances were significantly different between the 2 methods (P<0.001). Conclusions The use of 3-dimensional live-tracking sensors aids the reproducibility of patient head positioning during repeated or follow-up acquisitions of 3-dimensional stereophotogrammetry. Even with sensors, differences in spatial head position between acquisitions still require additional registration procedures. PMID:22196193
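
    The interlandmark distance measurement between repeated acquisitions can be sketched as below; the coordinates are hypothetical, not data from the study:

    ```python
    import numpy as np

    def interlandmark_distances(landmarks_a, landmarks_b):
        """Euclidean distance between each pair of corresponding 3D
        landmarks identified on two repeated stereophotogrammetry
        acquisitions of the same subject."""
        a = np.asarray(landmarks_a, dtype=float)
        b = np.asarray(landmarks_b, dtype=float)
        return np.linalg.norm(a - b, axis=1)

    # Hypothetical coordinates (mm) for three of the eight landmarks.
    pic1 = [[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 5.0, 0.0]]
    pic2 = [[3.0, 0.0, 4.0], [10.0, 0.0, 0.0], [0.0, 5.0, 12.0]]
    d = interlandmark_distances(pic1, pic2)
    print(d)  # per-landmark displacements in mm: 5.0, 0.0, 12.0
    ```

    Averaging these per-landmark distances over subjects gives the kind of summary figures (e.g. 17.34 mm without sensors versus 6.17 mm with sensors) reported in the abstract.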

  13. Video-task acquisition in rhesus monkeys (Macaca mulatta) and chimpanzees (Pan troglodytes): a comparative analysis.

    PubMed

    Hopkins, W D; Washburn, D A; Hyatt, C W

    1996-04-01

    This study describes video-task acquisition in two nonhuman primate species. The subjects were seven rhesus monkeys (Macaca mulatta) and seven chimpanzees (Pan troglodytes). All subjects were trained to manipulate a joystick which controlled a cursor displayed on a computer monitor. Two criterion levels were used: one based on conceptual knowledge of the task and one based on motor performance. Chimpanzees and rhesus monkeys attained criterion in a comparable number of trials using a conceptually based criterion. However, using a criterion based on motor performance, chimpanzees reached criterion significantly faster than rhesus monkeys. Analysis of error patterns and latency indicated that the rhesus monkeys had a larger asymmetry in response bias and were significantly slower in responding than the chimpanzees. The results are discussed in terms of the relation between object manipulation skills and video-task acquisition.

  14. Video-task acquisition in rhesus monkeys (Macaca mulatta) and chimpanzees (Pan troglodytes): a comparative analysis

    NASA Technical Reports Server (NTRS)

    Hopkins, W. D.; Washburn, D. A.; Hyatt, C. W.; Rumbaugh, D. M. (Principal Investigator)

    1996-01-01

    This study describes video-task acquisition in two nonhuman primate species. The subjects were seven rhesus monkeys (Macaca mulatta) and seven chimpanzees (Pan troglodytes). All subjects were trained to manipulate a joystick which controlled a cursor displayed on a computer monitor. Two criterion levels were used: one based on conceptual knowledge of the task and one based on motor performance. Chimpanzees and rhesus monkeys attained criterion in a comparable number of trials using a conceptually based criterion. However, using a criterion based on motor performance, chimpanzees reached criterion significantly faster than rhesus monkeys. Analysis of error patterns and latency indicated that the rhesus monkeys had a larger asymmetry in response bias and were significantly slower in responding than the chimpanzees. The results are discussed in terms of the relation between object manipulation skills and video-task acquisition.

  15. Pulsed laser noise analysis and pump-probe signal detection with a data acquisition card.

    PubMed

    Werley, Christopher A; Teo, Stephanie M; Nelson, Keith A

    2011-12-01

    A photodiode and data acquisition card whose sampling clock is synchronized to the repetition rate of a laser are used to measure the energy of each laser pulse. Simple analysis of the data yields the noise spectrum from very low frequencies up to half the repetition rate and quantifies the pulse energy distribution. When two photodiodes for balanced detection are used in combination with an optical modulator, the technique is capable of detecting very weak pump-probe signals (ΔI/I₀ ~ 10⁻⁵ at 1 kHz), with a sensitivity that is competitive with a lock-in amplifier. Detection with the data acquisition card is versatile and offers many advantages including full quantification of noise during each stage of signal processing, arbitrary digital filtering in silico after data collection is complete, direct readout of percent signal modulation, and easy adaptation for fast scanning of delay between pump and probe.
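
    The noise-spectrum analysis described here can be outlined in a few lines of numpy, assuming one energy sample per pulse at the repetition rate; this is an illustrative sketch, not the authors' acquisition code:

    ```python
    import numpy as np

    def pulse_noise_spectrum(energies, rep_rate_hz):
        """One-sided amplitude spectrum of per-pulse energy fluctuations.
        Frequencies run from near DC up to half the repetition rate."""
        e = np.asarray(energies, dtype=float)
        fluct = e - e.mean()  # remove the DC (mean energy) component
        spec = np.abs(np.fft.rfft(fluct)) / len(e)
        freqs = np.fft.rfftfreq(len(e), d=1.0 / rep_rate_hz)
        return freqs, spec

    # Synthetic example: 1 kHz laser with a weak 50 Hz ripple on pulse energy.
    rng = np.random.default_rng(0)
    n, rep = 4096, 1000.0
    t = np.arange(n) / rep
    energies = 1.0 + 0.01 * np.sin(2 * np.pi * 50 * t) \
        + 0.001 * rng.standard_normal(n)
    freqs, spec = pulse_noise_spectrum(energies, rep)
    peak = freqs[np.argmax(spec[1:]) + 1]  # skip the DC bin
    print(peak)  # near 50 Hz
    ```

    Note that the highest resolvable frequency is exactly half the repetition rate (the Nyquist limit), matching the abstract's statement.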

  16. A Fast VME Data Acquisition System for Spill Analysis and Beam Loss Measurement

    NASA Astrophysics Data System (ADS)

    Hoffmann, T.; Liakin, D. A.; Forck, P.

    2002-12-01

    Particle counters control beam loss and slowly extracted currents at the heavy ion synchrotron (SIS) at GSI. For these devices, a new data acquisition system has been developed with the main intention of combining beam loss measurement, spill analysis, spill structure measurement, and matrix switching functionality in one single assembly. To provide a reasonable digital selection of counters at significant locations, a modular VME setup based on the GSI data acquisition software MBS (Multi Branch System) was chosen. An overview of the design regarding the digital electronics and the infrastructure is given. Of main interest, in addition to the high performance of the hardware used, is the development of a user-friendly software interface for hardware control, data evaluation, and presentation to the operator.

  17. Assessing the efficacy of low-level image content descriptors for computer-based fluorescence microscopy image analysis.

    PubMed

    Shamir, L

    2011-09-01

    The increasing prevalence of automated image acquisition systems and state-of-the-art information technology has enabled new types of microscopy experiments based on automatic processing of massive image data sets, and numerous methods of high-content screening using machine vision and pattern recognition methods have been proposed. However, as this is a relatively young discipline, it is important to validate these methods and ensure that the machine vision and pattern recognition techniques reliably reflect the actual morphology and can be effectively used for finding and validating scientific discoveries. In this report we show that some of the previously reported experimental results using automatic microscopy image analysis might be biased, and discuss practices and methods that can be used to obtain objective and reliable automatic analysis of microscopy images.

  18. Hospital integration and vertical consolidation: an analysis of acquisitions in New York State.

    PubMed

    Huckman, Robert S

    2006-01-01

    While prior studies tend to view hospital integration through the lens of horizontal consolidation, I provide an analysis of its vertical aspects. I examine the effect of hospital acquisitions in New York State on the distribution of market share for major cardiac procedures across providers in target markets. I find evidence of benefits to acquirers via business stealing, with the resulting redistribution of volume across providers having small effects, if any, on total welfare with respect to cardiac care. The results of this analysis, along with similar assessments for other services, can be incorporated into future studies of hospital consolidation.

  19. Optical Analysis of Microscope Images

    NASA Astrophysics Data System (ADS)

    Biles, Jonathan R.

    Microscope images were analyzed with coherent and incoherent light using analog optical techniques. These techniques were found to be useful for analyzing large numbers of nonsymbolic, statistical microscope images. In the first part phase coherent transparencies having 20-100 human multiple myeloma nuclei were simultaneously photographed at 100 power magnification using high resolution holographic film developed to high contrast. An optical transform was obtained by focussing the laser onto each nuclear image and allowing the diffracted light to propagate onto a one dimensional photosensor array. This method reduced the data to the position of the first two intensity minima and the intensity of successive maxima. These values were utilized to estimate the four most important cancer detection clues of nuclear size, shape, darkness, and chromatin texture. In the second part, the geometric and holographic methods of phase incoherent optical processing were investigated for pattern recognition of real-time, diffuse microscope images. The theory and implementation of these processors was discussed in view of their mutual problems of dimness, image bias, and detector resolution. The dimness problem was solved by either using a holographic correlator or a speckle free laser microscope. The latter was built using a spinning tilted mirror which caused the speckle to change so quickly that it averaged out during the exposure. To solve the bias problem low image bias templates were generated by four techniques: microphotography of samples, creation of typical shapes by computer graphics editor, transmission holography of photoplates of samples, and by spatially coherent color image bias removal. The first of these templates was used to perform correlations with bacteria images. The aperture bias was successfully removed from the correlation with a video frame subtractor. To overcome the limited detector resolution it is necessary to discover some analog nonlinear intensity

  20. Technique for real-time frontal face image acquisition using stereo system

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.; Vizilter, Yuri V.; Kudryashov, Yuri I.

    2013-04-01

    Most existing systems for face recognition are based on two-dimensional images, and the quality of recognition is rather high for frontal images of the face. But for other kinds of images the quality decreases significantly. It is necessary to compensate for the effect of a change in the posture of a person (the camera angle) for correct operation of such systems. There are methods for transforming a 2D image of the person to the canonical orientation. The efficiency of these methods depends on the accuracy of determination of specific anthropometric points. Problems can arise in cases of partial occlusion of the person's face. Another approach is to keep a set of person images for different view angles for further processing. But the need to store and process a large number of two-dimensional images makes this method considerably time-consuming. The proposed technique uses a stereo system for fast generation of a 3D model of the person's face and for obtaining a face image in a given orientation from this 3D model. Real-time performance is provided by implementing graph cut methods for 3D face surface reconstruction and applying the CUDA software library for parallel computation.

  1. Digital Image Analysis for DETECHIP® Code Determination

    PubMed Central

    Lyon, Marcus; Wilson, Mark V.; Rouhier, Kerry A.; Symonsbergen, David J.; Bastola, Kiran; Thapa, Ishwor; Holmes, Andrea E.

    2013-01-01

    DETECHIP® is a molecular sensing array used for identification of a large variety of substances. Previous methodology for the analysis of DETECHIP® used human vision to distinguish color changes induced by the presence of the analyte of interest. This paper describes several analysis techniques using digital images of DETECHIP®. Both a digital camera and a flatbed desktop photo scanner were used to obtain JPEG images. Color information within these digital images was obtained through the measurement of red-green-blue (RGB) values using software such as GIMP, Photoshop and ImageJ. Several different techniques were used to evaluate these color changes. It was determined that the flatbed scanner produced the clearest and most reproducible images. Furthermore, codes obtained using a macro written for use within ImageJ showed improved consistency versus previous methods. PMID:25267940
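
    A minimal sketch of the RGB measurement and color-change coding described above; the function names and the threshold are hypothetical, and the actual work used GIMP, Photoshop, and an ImageJ macro on scanned images:

    ```python
    import numpy as np

    def mean_rgb(image, x, y, w, h):
        """Mean red, green, and blue values over a rectangular region of
        interest, mimicking what an ImageJ macro reports for one spot."""
        roi = image[y:y + h, x:x + w, :3].reshape(-1, 3)
        return roi.mean(axis=0)

    def spot_code(before, after, roi, threshold=10.0):
        """1 if any RGB channel shifted by more than `threshold` after
        analyte exposure, else 0 -- a simple stand-in for the code."""
        b = mean_rgb(before, *roi)
        a = mean_rgb(after, *roi)
        return int(np.any(np.abs(a - b) > threshold))

    # Synthetic 'scans': one spot turns from gray toward red.
    before = np.full((50, 50, 3), 128.0)
    after = before.copy()
    after[10:20, 10:20, 0] += 40  # red channel increases in the spot
    print(spot_code(before, after, (10, 10, 10, 10)))  # prints 1
    ```

    Averaging RGB over a whole spot rather than sampling single pixels is one reason scanner images, with their uniform illumination, tend to give more reproducible codes.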

  2. An image acquisition and registration strategy for the fusion of hyperpolarized helium-3 MRI and x-ray CT images of the lung

    NASA Astrophysics Data System (ADS)

    Ireland, Rob H.; Woodhouse, Neil; Hoggard, Nigel; Swinscoe, James A.; Foran, Bernadette H.; Hatton, Matthew Q.; Wild, Jim M.

    2008-11-01

    The purpose of this ethics committee approved prospective study was to evaluate an image acquisition and registration protocol for hyperpolarized helium-3 magnetic resonance imaging (3He-MRI) and x-ray computed tomography. Nine patients with non-small cell lung cancer (NSCLC) gave written informed consent to undergo a free-breathing CT, an inspiration breath-hold CT and a 3D ventilation 3He-MRI in CT position using an elliptical birdcage radiofrequency (RF) body coil. 3He-MRI to CT image fusion was performed using a rigid registration algorithm which was assessed by two observers using anatomical landmarks and a percentage volume overlap coefficient. Registration of 3He-MRI to breath-hold CT was more accurate than to free-breathing CT; overlap 82.9 ± 4.2% versus 59.8 ± 9.0% (p < 0.001) and mean landmark error 0.75 ± 0.24 cm versus 1.25 ± 0.60 cm (p = 0.002). Image registration is significantly improved by using an imaging protocol that enables both 3He-MRI and CT to be acquired with similar breath holds and body position through the use of a birdcage 3He-MRI body RF coil and an inspiration breath-hold CT. Fusion of 3He-MRI to CT may be useful for the assessment of patients with lung diseases.
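
    The percentage volume overlap coefficient used to assess the registration can be sketched as below; treating it as the Dice coefficient expressed as a percentage is an assumption, since the abstract does not give the exact formula:

    ```python
    import numpy as np

    def percent_volume_overlap(mask_a, mask_b):
        """Percentage volume overlap between two binary segmentation
        masks, computed here as the Dice coefficient times 100."""
        a = np.asarray(mask_a, dtype=bool)
        b = np.asarray(mask_b, dtype=bool)
        inter = np.logical_and(a, b).sum()
        return 100.0 * 2.0 * inter / (a.sum() + b.sum())

    # Two offset toy 'lung' volumes on a small grid.
    a = np.zeros((20, 20, 20), dtype=bool); a[2:12, 2:12, 2:12] = True
    b = np.zeros((20, 20, 20), dtype=bool); b[4:14, 2:12, 2:12] = True
    print(percent_volume_overlap(a, b))  # 80.0
    ```

    Larger misalignment between the 3He-MRI and CT volumes (e.g. free-breathing versus breath-hold CT) shows up directly as a lower overlap percentage, as in the 59.8% versus 82.9% figures above.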

  3. A Model for the Omnidirectional Acquisition and Rendering of Stereoscopic Images for Human Viewing

    NASA Astrophysics Data System (ADS)

    Gurrieri, Luis E.; Dubois, Eric

    2015-12-01

    Interactive visual media enable the visualization and navigation of remote-world locations in all gaze directions. A large segment of such media is created using pictures from the remote sites thanks to the advance in panoramic cameras. A desirable enhancement is to facilitate the stereoscopic visualization of remote scenes in all gaze directions. In this context, a model for the signal to be acquired by an omnistereoscopic sensor is needed in order to design better acquisition strategies. This omnistereoscopic viewing model must take into account the geometric constraints imposed by our binocular vision system since we want to produce stereoscopic imagery capable to induce stereopsis consistently in any gaze direction; in this paper, we present such model. In addition, we discuss different approaches to sample or to approximate this function and we propose a general acquisition model for sampling the omnistereoscopic light signal. From this model, we propose that by acquiring and mosaicking sparse sets of partially overlapped stereoscopic snapshots, a satisfactory illusion of depth can be evoked. Finally, we show an example of the rendering pipeline to create the omnistereoscopic imagery.

  4. Cardiac Multidetector Computed Tomography: Basic Physics of Image Acquisition and Clinical Applications

    PubMed Central

    Bardo, Dianna M.E; Brown, Paul

    2008-01-01

    Cardiac MDCT is here to stay, and it is more than just imaging coronary arteries. Understanding the differences in and the benefits of one CT scanner over another will help you to optimize the capabilities of the scanner, but this requires a basic understanding of MDCT imaging physics. This review provides key information needed to understand the differences between the types of MDCT scanners, from 64 to 320 detectors, flat panels, single and dual source configurations, and step-and-shoot prospective and retrospective gating, and how each factor influences radiation dose, spatial and temporal resolution, and image noise. PMID:19936200

  5. An algorithm to unveil the inner structure of objects concealed by beam divergence in radiographic image acquisition systems

    SciTech Connect

    Almeida, G. L.; Silvani, M. I.; Lopes, R. T.

    2014-11-11

    Two main parameters rule the performance of an Image Acquisition System, namely, spatial resolution and contrast. For radiographic systems using cone beam arrangements, the farther the source, the better the resolution, but the contrast would diminish due to the lower statistics. A closer source would yield a higher contrast but it would no longer reproduce the attenuation map of the object, as the incoming beam flux would be reduced by unequal large divergences and attenuation factors. This work proposes a procedure to correct these effects when the object is composed of a hull - or encased in it - possessing a shape that can be described in analytical geometry terms. Such a description allows the construction of a matrix containing the attenuation factors undergone by the beam from the source until its final destination at each coordinate on the 2D detector. Each matrix element incorporates the attenuation suffered by the beam after its travel through the hull wall, as well as its reduction due to the square of the distance to the source and the angle at which it hits the detector surface. When the pixel intensities of the original image are corrected by these factors, the image contrast, reduced by the overall attenuation in the exposure phase, is recovered, allowing one to see details otherwise concealed by the low contrast. In order to verify the soundness of this approach, synthetic images of objects of different shapes, such as plates and tubes, incorporating defects and statistical fluctuation, have been generated, recorded for further comparison and afterwards processed to improve their contrast. The developed algorithm, which generates, processes, and plots the images, has been written in Fortran 90. As the resulting final images exhibit the expected improvements, it seemed worthwhile to carry out further tests with actual experimental radiographs.

  6. An algorithm to unveil the inner structure of objects concealed by beam divergence in radiographic image acquisition systems

    NASA Astrophysics Data System (ADS)

    Almeida, G. L.; Silvani, M. I.; Lopes, R. T.

    2014-11-01

    Two main parameters rule the performance of an Image Acquisition System, namely, spatial resolution and contrast. For radiographic systems using cone beam arrangements, the farther the source, the better the resolution, but the contrast would diminish due to the lower statistics. A closer source would yield a higher contrast but it would no longer reproduce the attenuation map of the object, as the incoming beam flux would be reduced by unequal large divergences and attenuation factors. This work proposes a procedure to correct these effects when the object is composed of a hull - or encased in it - possessing a shape that can be described in analytical geometry terms. Such a description allows the construction of a matrix containing the attenuation factors undergone by the beam from the source until its final destination at each coordinate on the 2D detector. Each matrix element incorporates the attenuation suffered by the beam after its travel through the hull wall, as well as its reduction due to the square of the distance to the source and the angle at which it hits the detector surface. When the pixel intensities of the original image are corrected by these factors, the image contrast, reduced by the overall attenuation in the exposure phase, is recovered, allowing one to see details otherwise concealed by the low contrast. In order to verify the soundness of this approach, synthetic images of objects of different shapes, such as plates and tubes, incorporating defects and statistical fluctuation, have been generated, recorded for further comparison and afterwards processed to improve their contrast. The developed algorithm, which generates, processes, and plots the images, has been written in Fortran 90. As the resulting final images exhibit the expected improvements, it seemed worthwhile to carry out further tests with actual experimental radiographs.
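
    The purely geometric part of the correction matrix described above (inverse-square distance to the source and the incidence angle on the detector; the hull-attenuation term, which the paper also models, is omitted here) might be sketched as:

    ```python
    import numpy as np

    def divergence_correction(shape, pixel_size, source_distance):
        """Per-pixel correction factors for a cone-beam geometry: the
        beam reaching pixel (i, j) is weakened by the squared source-to-
        pixel distance and by oblique incidence on the detector plane.
        Multiplying the image by these factors undoes both effects."""
        ny, nx = shape
        y = (np.arange(ny) - (ny - 1) / 2.0) * pixel_size
        x = (np.arange(nx) - (nx - 1) / 2.0) * pixel_size
        xx, yy = np.meshgrid(x, y)
        r2 = xx**2 + yy**2 + source_distance**2   # squared distance to source
        cos_theta = source_distance / np.sqrt(r2) # incidence angle factor
        flux = (source_distance**2 / r2) * cos_theta
        return 1.0 / flux

    factors = divergence_correction((101, 101), pixel_size=1.0,
                                    source_distance=100.0)
    print(factors[50, 50], factors[0, 0])  # 1.0 at the center, larger at corners
    ```

    The correction grows toward the detector edges, where both the distance and the obliquity reduce the incoming flux the most.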

  7. Analysis of dynamic brain imaging data.

    PubMed Central

    Mitra, P P; Pesaran, B

    1999-01-01

    Modern imaging techniques for probing brain function, including functional magnetic resonance imaging, intrinsic and extrinsic contrast optical imaging, and magnetoencephalography, generate large data sets with complex content. In this paper we develop appropriate techniques for analysis and visualization of such imaging data to separate the signal from the noise and characterize the signal. The techniques developed fall into the general category of multivariate time series analysis, and in particular we extensively use the multitaper framework of spectral analysis. We develop specific protocols for the analysis of fMRI, optical imaging, and MEG data, and illustrate the techniques by applications to real data sets generated by these imaging modalities. In general, the analysis protocols involve two distinct stages: "noise" characterization and suppression, and "signal" characterization and visualization. An important general conclusion of our study is the utility of a frequency-based representation, with short, moving analysis windows to account for nonstationarity in the data. Of particular note are 1) the development of a decomposition technique (space-frequency singular value decomposition) that is shown to be a useful means of characterizing the image data, and 2) the development of an algorithm, based on multitaper methods, for the removal of approximately periodic physiological artifacts arising from cardiac and respiratory sources. PMID:9929474
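
    The multitaper spectral framework mentioned above can be sketched with sine tapers, a simple stand-in (this is an assumption of the sketch) for the Slepian/DPSS tapers of the classical multitaper method:

    ```python
    import numpy as np

    def sine_multitaper_psd(x, fs, k=5):
        """Multitaper PSD estimate: average the eigenspectra obtained
        with k orthogonal sine tapers applied to the same data."""
        n = len(x)
        t = np.arange(1, n + 1)
        tapers = np.sqrt(2.0 / (n + 1)) * np.sin(
            np.pi * np.outer(np.arange(1, k + 1), t) / (n + 1))
        spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
        psd = spectra.mean(axis=0) / fs
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        return freqs, psd

    # A 10 Hz rhythm in noise, sampled at 100 Hz.
    rng = np.random.default_rng(1)
    fs, n = 100.0, 1024
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(n)
    freqs, psd = sine_multitaper_psd(x, fs)
    print(freqs[np.argmax(psd)])  # near 10 Hz
    ```

    Averaging over several orthogonal tapers reduces the variance of the spectral estimate, which is why the multitaper framework is favored for the short, moving analysis windows the paper advocates.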

  8. NIH Image to ImageJ: 25 years of image analysis.

    PubMed

    Schneider, Caroline A; Rasband, Wayne S; Eliceiri, Kevin W

    2012-07-01

    For the past 25 years NIH Image and ImageJ software have been pioneers as open tools for the analysis of scientific images. We discuss the origins, challenges and solutions of these two programs, and how their history can serve to advise and inform other software projects.

  9. Hardware acceleration of lucky-region fusion (LRF) algorithm for image acquisition and processing

    NASA Astrophysics Data System (ADS)

    Maignan, William; Koeplinger, David; Carhart, Gary W.; Aubailly, Mathieu; Kiamilev, Fouad; Liu, J. Jiang

    2013-05-01

    "Lucky-region fusion" (LRF) is an image processing technique that has proven successful in enhancing the quality of images distorted by atmospheric turbulence. The LRF algorithm extracts sharp regions of an image obtained from a series of short exposure frames, and "fuses" them into a final image with improved quality. In previous research, the LRF algorithm had been implemented on a PC using a compiled programming language. However, the PC usually does not have sufficient processing power to handle the real-time extraction, processing and reduction required when the LRF algorithm is applied not to single still images but rather to real-time video from fast, high-resolution image sensors. This paper describes a hardware implementation of the LRF algorithm on a Virtex 6 field programmable gate array (FPGA) to achieve real-time video processing. The novelty in our approach is the creation of a "black box" LRF video processing system with a standard camera link input, a user controller interface, and a standard camera link output.
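
    A minimal software sketch of the lucky-region idea: per-block selection of the sharpest frame by gradient energy, a stand-in for the paper's sharpness metric (the FPGA implementation is not reflected here):

    ```python
    import numpy as np

    def lucky_region_fusion(frames, block=8):
        """Fuse a stack of short-exposure frames block by block: for
        each block, keep the copy from the frame with the highest local
        gradient energy (a simple sharpness measure)."""
        stack = np.asarray(frames, dtype=float)
        _, h, w = stack.shape
        fused = np.empty((h, w))
        for i in range(0, h, block):
            for j in range(0, w, block):
                tiles = stack[:, i:i + block, j:j + block]
                gy = np.diff(tiles, axis=1)
                gx = np.diff(tiles, axis=2)
                sharp = (gy**2).sum(axis=(1, 2)) + (gx**2).sum(axis=(1, 2))
                fused[i:i + block, j:j + block] = tiles[np.argmax(sharp)]
        return fused

    # Frame 1 is sharp on the left half, frame 0 on the right half.
    sharp = np.tile([0., 1.], (16, 8))   # high-gradient column pattern
    blur = np.full((16, 16), 0.5)        # flat block = zero gradient
    f0 = np.hstack([blur[:, :8], sharp[:, :8]])
    f1 = np.hstack([sharp[:, :8], blur[:, :8]])
    fused = lucky_region_fusion([f0, f1], block=8)
    ```

    In the toy example, the fused result takes the sharp half from each frame and contains no blurred (flat) blocks at all.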

  10. Factor Analysis of the Image Correlation Matrix.

    ERIC Educational Resources Information Center

    Kaiser, Henry F.; Cerny, Barbara A.

    1979-01-01

    Whether to factor the image correlation matrix or to use a new model with an alpha factor analysis of it is mentioned, with particular reference to the determinacy problem. It is pointed out that the distribution of the images is sensibly multivariate normal, making for "better" factor analyses. (Author/CTM)

  11. Imaging MS Methodology for More Chemical Information in Less Data Acquisition Time Utilizing a Hybrid Linear Ion Trap-Orbitrap Mass Spectrometer

    SciTech Connect

    Perdian, D. C.; Lee, Young Jin

    2010-11-15

    A novel mass spectrometric imaging method is developed to reduce the data acquisition time and provide rich chemical information using a hybrid linear ion trap-orbitrap mass spectrometer. In this method, the linear ion trap and orbitrap are used in tandem to reduce the acquisition time by incorporating multiple linear ion trap scans during an orbitrap scan utilizing a spiral raster step plate movement. The data acquisition time was decreased by 43-49% in the current experiment compared to that of orbitrap-only scans; however, 75% or more time could be saved for higher mass resolution and with a higher repetition rate laser. Using this approach, a high spatial resolution of 10 µm was maintained at ion trap imaging, while orbitrap spectra were acquired at a lower spatial resolution, 20-40 µm, all with far less data acquisition time. Furthermore, various MS imaging methods were developed by interspersing MS/MS and MSn ion trap scans during orbitrap scans to provide more analytical information on the sample. This method was applied to differentiate and localize structural isomers of several flavonol glycosides from an Arabidopsis flower petal in which MS/MS, MSn, ion trap, and orbitrap images were all acquired in a single data acquisition.

  12. Comparison of the Number of Image Acquisitions and Procedural Time Required for Transarterial Chemoembolization of Hepatocellular Carcinoma with and without Tumor-Feeder Detection Software.

    PubMed

    Iwazawa, Jin; Ohue, Shoichi; Hashimoto, Naoko; Mitani, Takashi

    2013-01-01

    Purpose. To compare the number of image acquisitions and procedural time required for transarterial chemoembolization (TACE) with and without tumor-feeder detection software in cases of hepatocellular carcinoma (HCC). Materials and Methods. We retrospectively reviewed 50 cases involving software-assisted TACE (September 2011-February 2013) and 84 cases involving TACE without software assistance (January 2010-August 2011). We compared the number of image acquisitions, the overall procedural time, and the therapeutic efficacy in both groups. Results. Angiography acquisition per session reduced from 6.6 times to 4.6 times with software assistance (P < 0.001). Total image acquisition significantly decreased from 10.4 times to 8.7 times with software usage (P = 0.004). The mean procedural time required for a single session with software-assisted TACE (103 min) was significantly lower than that for a session without software (116 min, P = 0.021). For TACE with and without software usage, the complete (68% versus 63%, resp.) and objective (78% versus 80%, resp.) response rates did not differ significantly. Conclusion. In comparison with software-unassisted TACE, automated feeder-vessel detection software-assisted TACE for HCC involved fewer image acquisitions and could be completed faster while maintaining a comparable treatment response.

  13. Morphology enabled dipole inversion (MEDI) from a single-angle acquisition: comparison with COSMOS in human brain imaging.

    PubMed

    Liu, Tian; Liu, Jing; de Rochefort, Ludovic; Spincemaille, Pascal; Khalidov, Ildar; Ledoux, James Robert; Wang, Yi

    2011-09-01

    Magnetic susceptibility varies among brain structures and provides insights into the chemical and molecular composition of brain tissues. However, the determination of an arbitrary susceptibility distribution from the measured MR signal phase is a challenging, ill-conditioned inverse problem. Although a previous method named calculation of susceptibility through multiple orientation sampling (COSMOS) has solved this inverse problem both theoretically and experimentally using multiple angle acquisitions, it is often impractical to carry out on human subjects. Recently, the feasibility of calculating the brain susceptibility distribution from a single-angle acquisition was demonstrated using morphology enabled dipole inversion (MEDI). In this study, we further improved the original MEDI method by sparsifying the edges in the quantitative susceptibility map that do not have a corresponding edge in the magnitude image. Quantitative susceptibility maps generated by the improved MEDI were compared qualitatively and quantitatively with those generated by COSMOS. The results show a high degree of agreement between MEDI and COSMOS, and the practicality of MEDI allows many potential clinical applications.
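    The key ingredient described above is a weighting mask derived from the magnitude image: susceptibility-map gradients are penalized only where the magnitude image is smooth, so spurious QSM edges with no anatomical counterpart are sparsified. A minimal sketch of such a mask (the gradient-percentile threshold is an illustrative assumption, not the published parameterization):

    ```python
    import numpy as np

    def medi_edge_weight(magnitude, percentile=70.0):
        """Binary weighting mask in the spirit of MEDI: weight 1 (penalize
        susceptibility-map gradients) where the magnitude image is smooth,
        weight 0 where the magnitude image itself has an edge."""
        gx, gy = np.gradient(magnitude.astype(float))
        gmag = np.hypot(gx, gy)
        thresh = np.percentile(gmag, percentile)
        return (gmag <= thresh).astype(float)  # 1 = penalize, 0 = allow edge

    mag = np.zeros((8, 8))
    mag[:, 4:] = 1.0                   # toy magnitude image with one edge
    w = medi_edge_weight(mag)
    print(w[0, 3], w[0, 0])            # edge column allowed, smooth region penalized
    ```

    In the full method this mask multiplies the gradient term inside a regularized inversion of the dipole field; the sketch only shows how the mask itself is formed.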

  14. A Robust Actin Filaments Image Analysis Framework

    PubMed Central

    Alioscha-Perez, Mitchel; Benadiba, Carine; Goossens, Katty; Kasas, Sandor; Dietler, Giovanni; Willaert, Ronnie; Sahli, Hichem

    2016-01-01

    The cytoskeleton is a highly dynamical protein network that plays a central role in numerous cellular physiological processes, and is traditionally divided into three components according to its chemical composition, i.e. actin, tubulin and intermediate filament cytoskeletons. Understanding the cytoskeleton dynamics is of prime importance to unveil mechanisms involved in cell adaptation to any stress type. Fluorescence imaging of cytoskeleton structures allows analyzing the impact of mechanical stimulation on the cytoskeleton, but it also imposes additional challenges in the image processing stage, such as the presence of imaging-related artifacts and heavy blurring introduced by (high-throughput) automated scans. However, although there exists a considerable number of image-based analytical tools to address the image processing and analysis, most of them are unfit to cope with the aforementioned challenges. Filamentous structures in images can be considered as a piecewise composition of quasi-straight segments (at least at some finer or coarser scale). Based on this observation, we propose a three-step actin filament extraction methodology: (i) first the input image is decomposed into a ‘cartoon’ part corresponding to the filament structures in the image, and a noise/texture part, (ii) on the ‘cartoon’ image, we apply a multi-scale line detector coupled with (iii) a quasi-straight filament merging algorithm for fiber extraction. The proposed robust actin filaments image analysis framework allows extracting individual filaments in the presence of noise, artifacts and heavy blurring. Moreover, it provides numerous parameters such as filament orientation, position and length, useful for further analysis. Cell image decomposition is relatively under-exploited in biological image processing, and our study shows the benefits it provides when addressing such tasks. Experimental validation was conducted using publicly available datasets, and in osteoblasts
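    The multi-scale line detector at the heart of step (ii) can be sketched as a bank of Gaussian second-derivative (ridge) filters whose responses are maximized over scales. This is a toy simplification of the paper's detector, assuming bright filaments on a dark background:

    ```python
    import numpy as np
    from scipy import ndimage

    def multiscale_line_response(img, scales=(1.0, 2.0, 4.0)):
        """Toy multi-scale line detector: at each scale, the ridge strength
        is taken as the strongest negative second derivative along either
        axis (a crude proxy for a Hessian eigen-analysis), then the maximum
        response over scales is kept."""
        img = img.astype(float)
        best = np.zeros_like(img)
        for s in scales:
            sm = ndimage.gaussian_filter(img, s)
            dxx = ndimage.gaussian_filter(sm, 1, order=(0, 2))  # d2/dx2
            dyy = ndimage.gaussian_filter(sm, 1, order=(2, 0))  # d2/dy2
            resp = np.maximum(-dxx, -dyy)   # bright ridges have negative curvature
            best = np.maximum(best, resp)
        return best

    img = np.zeros((32, 32))
    img[16, :] = 1.0                        # one horizontal bright "filament"
    r = multiscale_line_response(img)
    print(r[16].mean() > r[4].mean())       # response peaks on the filament row
    ```

    The full pipeline would precede this with a cartoon/texture decomposition and follow it with the quasi-straight segment merging step.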

  15. A Robust Actin Filaments Image Analysis Framework.

    PubMed

    Alioscha-Perez, Mitchel; Benadiba, Carine; Goossens, Katty; Kasas, Sandor; Dietler, Giovanni; Willaert, Ronnie; Sahli, Hichem

    2016-08-01

    The cytoskeleton is a highly dynamical protein network that plays a central role in numerous cellular physiological processes, and is traditionally divided into three components according to its chemical composition, i.e. actin, tubulin and intermediate filament cytoskeletons. Understanding the cytoskeleton dynamics is of prime importance to unveil mechanisms involved in cell adaptation to any stress type. Fluorescence imaging of cytoskeleton structures allows analyzing the impact of mechanical stimulation on the cytoskeleton, but it also imposes additional challenges in the image processing stage, such as the presence of imaging-related artifacts and heavy blurring introduced by (high-throughput) automated scans. However, although there exists a considerable number of image-based analytical tools to address the image processing and analysis, most of them are unfit to cope with the aforementioned challenges. Filamentous structures in images can be considered as a piecewise composition of quasi-straight segments (at least at some finer or coarser scale). Based on this observation, we propose a three-step actin filament extraction methodology: (i) first the input image is decomposed into a 'cartoon' part corresponding to the filament structures in the image, and a noise/texture part, (ii) on the 'cartoon' image, we apply a multi-scale line detector coupled with (iii) a quasi-straight filament merging algorithm for fiber extraction. The proposed robust actin filaments image analysis framework allows extracting individual filaments in the presence of noise, artifacts and heavy blurring. Moreover, it provides numerous parameters such as filament orientation, position and length, useful for further analysis. Cell image decomposition is relatively under-exploited in biological image processing, and our study shows the benefits it provides when addressing such tasks. Experimental validation was conducted using publicly available datasets, and in osteoblasts grown in

  16. HTPheno: An image analysis pipeline for high-throughput plant phenotyping

    PubMed Central

    2011-01-01

    Background In the last few years high-throughput analysis methods have become state-of-the-art in the life sciences. One of the latest developments is automated greenhouse systems for high-throughput plant phenotyping. Such systems allow the non-destructive screening of plants over a period of time by means of image acquisition techniques. During such screening different images of each plant are recorded and must be analysed by applying sophisticated image analysis algorithms. Results This paper presents an image analysis pipeline (HTPheno) for high-throughput plant phenotyping. HTPheno is implemented as a plugin for ImageJ, an open source image processing software. It provides the possibility to analyse colour images of plants which are taken in two different views (top view and side view) during a screening. Within the analysis different phenotypical parameters for each plant such as height, width and projected shoot area of the plants are calculated for the duration of the screening. HTPheno is applied to analyse two barley cultivars. Conclusions HTPheno, an open source image analysis pipeline, supplies a flexible and adaptable ImageJ plugin which can be used for automated image analysis in high-throughput plant phenotyping and therefore to derive new biological insights, such as determination of fitness. PMID:21569390
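    The per-plant measurements described above (projected shoot area, height, width) reduce to segmenting plant pixels and summarizing the mask. A minimal sketch, assuming a crude green-dominance test as the segmentation (the real HTPheno pipeline is more elaborate):

    ```python
    import numpy as np

    def shoot_metrics(rgb):
        """Sketch of HTPheno-style phenotype measurements on one view:
        segment plant pixels by green dominance, then report projected
        shoot area, height and width in pixels."""
        r = rgb[..., 0].astype(int)
        g = rgb[..., 1].astype(int)
        b = rgb[..., 2].astype(int)
        plant = (g > r) & (g > b)             # crude green-dominance mask
        ys, xs = np.nonzero(plant)
        if ys.size == 0:
            return {"area_px": 0, "height_px": 0, "width_px": 0}
        return {"area_px": int(plant.sum()),
                "height_px": int(ys.max() - ys.min() + 1),
                "width_px": int(xs.max() - xs.min() + 1)}

    img = np.zeros((20, 20, 3), dtype=np.uint8)
    img[5:15, 8:12, 1] = 200                  # a 10x4 green "plant"
    print(shoot_metrics(img))                 # {'area_px': 40, 'height_px': 10, 'width_px': 4}
    ```

    Running this on the top and side views for each imaging day would yield the growth curves the pipeline derives; pixel-to-millimeter calibration is omitted here.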

  17. Design and implementation of photoelectric rotary table data acquisition and analysis system host computer software based on VC++ and MFC

    NASA Astrophysics Data System (ADS)

    Yang, Dawei; Yang, Xiufang; Han, Junfeng; Yan, Xiaoxu

    2015-02-01

    Photoelectric rotary tables are used mainly in the defense industry and military fields, playing an important role in shooting ranges, target tracking, target acquisition, and aerospace applications. To meet the field-test requirements of range photoelectric measurement equipment, and working with a portable photoelectric rotary table data acquisition hardware system, host computer software was developed on a VC++ programming platform with an MFC-based user interface, implementing data acquisition, analysis, processing, and debugging control for the photoelectric turntable. The host computer software design covers serial communication and its protocol, real-time data acquisition and display, real-time curve plotting, analog acquisition, a debugging guide, and an error analysis program, and specific design methods are given for each. Finally, in alignment tests with the photoelectric rotary table data acquisition hardware system, the experimental results show that the host computer software accomplishes data transmission with the lower machine, data acquisition, control, and analysis as intended; the entire software system runs stably and flexibly, with strong practicality, good reliability, and good scalability.
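    The serial-communication layer described above boils down to framing and parsing binary packets from the turntable. A minimal host-side sketch; the frame format here (0xAA 0x55 header, two little-endian floats for azimuth/elevation, one-byte checksum) is invented for illustration, since the abstract does not give the actual protocol:

    ```python
    import struct

    def parse_frame(frame: bytes):
        """Parse one hypothetical turntable frame:
        [0xAA 0x55][azimuth f32 LE][elevation f32 LE][checksum byte]."""
        if len(frame) != 11 or frame[:2] != b"\xAA\x55":
            raise ValueError("bad frame")
        if sum(frame[:-1]) & 0xFF != frame[-1]:
            raise ValueError("checksum mismatch")
        az, el = struct.unpack_from("<ff", frame, 2)
        return az, el

    # Build a valid frame and round-trip it.
    payload = b"\xAA\x55" + struct.pack("<ff", 123.5, -4.25)
    frame = payload + bytes([sum(payload) & 0xFF])
    print(parse_frame(frame))   # (123.5, -4.25)
    ```

    In the real application the bytes would arrive from a serial port read loop and feed the real-time display and curve-plotting modules.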

  18. The design and validation of a magnetic resonance imaging-compatible device for obtaining mechanical properties of plantar soft tissue via gated acquisition.

    PubMed

    Williams, Evan D; Stebbins, Michael J; Cavanagh, Peter R; Haynor, David R; Chu, Baocheng; Fassbind, Michael J; Isvilanonda, Vara; Ledoux, William R

    2015-10-01

    Changes in the mechanical properties of the plantar soft tissue in people with diabetes may contribute to the formation of plantar ulcers. Such ulcers have been shown to be in the causal pathway for lower extremity amputation. The hydraulic plantar soft tissue reducer (HyPSTER) was designed to measure in vivo, rate-dependent plantar soft tissue compressive force and three-dimensional deformations to help understand, predict, and prevent ulcer formation. These patient-specific values can then be used in an inverse finite element analysis to determine tissue moduli, and subsequently used in a foot model to show regions of high stress under a wide variety of loading conditions. The HyPSTER uses an actuator to drive a magnetic resonance imaging-compatible hydraulic loading platform. Pressure and actuator position were synchronized with gated magnetic resonance imaging acquisition. Achievable loading rates were slower than those found in normal walking because of a water-hammer effect (pressure wave ringing) in the hydraulic system when the actuator direction was changed rapidly. The subsequent verification tests were, therefore, performed at 0.2 Hz. The unloaded displacement accuracy of the system was within 0.31%. Compliance, presumably in the system's plastic components, caused a displacement loss of 5.7 mm during a 20-mm actuator test at 1354 N. This was accounted for with a target to actual calibration curve. The positional accuracy of the HyPSTER during loaded displacement verification tests from 3 to 9 mm against a silicone backstop was 95.9% with a precision of 98.7%. The HyPSTER generated minimal artifact in the magnetic resonance imaging scanner. Careful analysis of the synchronization of the HyPSTER and the magnetic resonance imaging scanner was performed. With some limitations, the HyPSTER provided key functionality in measuring dynamic, patient-specific plantar soft tissue mechanical properties. PMID:26405098
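    The target-to-actual calibration mentioned above amounts to fitting a correction from commanded to measured actuator displacement so compliance losses are compensated. A sketch with assumed sample data (a uniform loss matching the quoted 5.7 mm over a 20 mm stroke):

    ```python
    import numpy as np

    # Assumed calibration data: measured displacement falls short of the
    # command by a uniform compliance factor (5.7 mm lost over 20 mm).
    commanded = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
    measured = commanded * (1 - 5.7 / 20.0)

    # Fit the inverse map: what command is needed for a target displacement?
    slope, intercept = np.polyfit(measured, commanded, 1)

    def target_to_command(target_mm):
        """Commanded displacement needed to achieve a target displacement."""
        return slope * target_mm + intercept

    print(round(target_to_command(9.0), 2))  # command needed for 9 mm actual
    ```

    A real calibration would likely be measured under load (the compliance was observed at 1354 N) and might not be perfectly linear, in which case a higher-order fit or lookup table would replace the straight line.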

  19. Mapping Fire Severity Using Imaging Spectroscopy and Kernel Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Prasad, S.; Cui, M.; Zhang, Y.; Veraverbeke, S.

    2014-12-01

    Improved spatial representation of within-burn heterogeneity after wildfires is paramount to effective land management decisions and more accurate fire emissions estimates. In this work, we demonstrate the feasibility and efficacy of airborne imaging spectroscopy (hyperspectral imagery) for quantifying wildfire burn severity, using kernel based image analysis techniques. Two different airborne hyperspectral datasets, acquired over the 2011 Canyon and 2013 Rim fire in California using the Airborne Visible InfraRed Imaging Spectrometer (AVIRIS) sensor, were used in this study. The Rim Fire, covering parts of the Yosemite National Park, started on August 17, 2013, and was the third largest fire in California's history. The Canyon Fire occurred in the Tehachapi mountains and started on September 4, 2011. In addition to post-fire data for both fires, half of the Rim fire was also covered with pre-fire images. Fire severity was measured in the field using the Geo Composite Burn Index (GeoCBI). The field data were utilized to train and validate our models, wherein the trained models, in conjunction with imaging spectroscopy data, were used for GeoCBI estimation across wide geographical regions. This work presents an approach for using remotely sensed imagery combined with GeoCBI field data to map fire scars based on a non-linear (kernel based) epsilon-Support Vector Regression (e-SVR), which was used to learn the relationship between spectra and GeoCBI in a kernel-induced feature space. Classification of healthy vegetation versus fire-affected areas based on morphological multi-attribute profiles was also studied. The availability of pre- and post-fire imaging spectroscopy data over the Rim Fire provided a unique opportunity to evaluate the performance of bi-temporal imaging spectroscopy for assessing post-fire effects.
This type of data is currently constrained because of limited airborne acquisitions before a fire, but will become widespread with future spaceborne sensors such as those on
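    The spectra-to-GeoCBI regression in a kernel-induced feature space can be sketched compactly. The snippet below uses kernel ridge regression with an RBF kernel as a simple stand-in for the ε-SVR named above (both learn the same kind of non-linear mapping), on synthetic data standing in for spectra and field GeoCBI values:

    ```python
    import numpy as np

    def rbf_kernel(A, B, gamma=1.0):
        """Gaussian (RBF) kernel matrix between row-vectors of A and B."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def fit_kernel_ridge(X, y, gamma=1.0, lam=1e-3):
        """Kernel ridge regression: solve (K + lam*I) alpha = y, then
        predict with k(x_new, X) @ alpha."""
        K = rbf_kernel(X, X, gamma)
        alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
        return lambda Xnew: rbf_kernel(Xnew, X, gamma) @ alpha

    rng = np.random.default_rng(0)
    X = rng.uniform(size=(60, 5))              # synthetic "spectra" (60 pixels, 5 bands)
    y = X[:, 0] * 2 + np.sin(3 * X[:, 1])      # synthetic "GeoCBI"
    predict = fit_kernel_ridge(X, y)
    rmse = np.sqrt(np.mean((predict(X) - y) ** 2))
    print(f"training RMSE: {rmse:.3f}")
    ```

    In practice the model would be trained on GeoCBI field plots matched to image spectra and then applied pixel-wise across the scene to produce the severity map.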

  20. Development of automated imaging and analysis for zebrafish chemical screens.

    PubMed Central

    Vogt, Andreas; Codore, Hiba; Day, Billy W.; Hukriede, Neil A.; Tsang, Michael

    2010-01-01

    We demonstrate the application of image-based high-content screening (HCS) methodology to identify small molecules that can modulate the FGF/RAS/MAPK pathway in zebrafish embryos. The zebrafish embryo is an ideal system for in vivo high-content chemical screens. The 1-day-old embryo is approximately 1 mm in diameter and can be easily arrayed into 96-well plates, a standard format for high-throughput screening. During the first day of development, embryos are transparent with most of the major organs present, thus enabling visualization of tissue formation during embryogenesis. The complete automation of zebrafish chemical screens is still a challenge, however, particularly in the development of automated image acquisition and analysis. We previously generated a transgenic reporter line that expresses green fluorescent protein (GFP) under the control of FGF activity and demonstrated its utility in chemical screens 1. To establish methodology for high-throughput whole-organism screens, we developed a system for automated imaging and analysis of zebrafish embryos at 24-48 hours post fertilization (hpf) in 96-well plates 2. In this video we highlight the procedures for arraying transgenic embryos into multiwell plates at 24 hpf and the addition of a small molecule (BCI) that hyperactivates FGF signaling 3. The plates are incubated for 6 hours followed by the addition of tricaine to anesthetize larvae prior to automated imaging on a Molecular Devices ImageXpress Ultra laser scanning confocal HCS reader. Images are processed by Definiens Developer software using a Cognition Network Technology algorithm that we developed to detect and quantify expression of GFP in the heads of transgenic embryos. In this example we highlight the ability of the algorithm to measure dose-dependent effects of BCI on GFP reporter gene expression in treated embryos. PMID:20613708

  1. High-contrast 3D image acquisition using HiLo microscopy with an electrically tunable lens

    NASA Astrophysics Data System (ADS)

    Philipp, Katrin; Smolarski, André; Fischer, Andreas; Koukourakis, Nektarios; Stürmer, Moritz; Wallrabe, Ulricke; Czarske, Jürgen

    2016-04-01

    We present a HiLo microscope with an electrically tunable lens for high-contrast three-dimensional image acquisition. HiLo microscopy combines wide field and speckled illumination images to create optically sectioned images. Additionally, the depth-of-field is not fixed, but can be adjusted between wide field and confocal-like axial resolution. We incorporate an electrically tunable lens in the HiLo microscope for axial scanning, to obtain three-dimensional data without moving either the sample or the objective. The adaptive lens consists of a transparent polydimethylsiloxane (PDMS) membrane into which an annular piezo bending actuator is embedded. A transparent fluid fills the space between the membrane and the glass substrate. When actuated, the piezo generates a pressure in the lens which deflects the membrane and thus changes the refractive power. This technique enables a large tuning range of the refractive power, from 1/f = -24 1/m to +25 1/m. As the NA of the adaptive lens is only about 0.05, a fixed high-NA lens is included in the setup to provide high resolution. In this contribution, the scan properties and capabilities of the tunable lens in the HiLo microscope are analyzed, and exemplary measurements are presented and discussed.
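    The HiLo fusion of the two illumination images can be sketched as follows. This is a deliberately simplified toy version of the published algorithm, assuming Gaussian low/high-pass splitting and a crude local speckle-contrast weighting:

    ```python
    import numpy as np
    from scipy import ndimage

    def hilo_fuse(uniform_img, speckle_img, sigma=4.0):
        """Toy HiLo fusion: high spatial frequencies (Hi) come from the
        uniform-illumination image; low frequencies (Lo) come from the
        uniform image weighted by local speckle contrast, which is high
        only for in-focus structure and so rejects out-of-focus background."""
        lp = lambda im: ndimage.gaussian_filter(im.astype(float), sigma)
        hi = uniform_img - lp(uniform_img)          # in-focus high frequencies
        diff = speckle_img.astype(float) - uniform_img
        contrast = np.sqrt(lp(diff ** 2))           # local speckle contrast
        lo = lp(uniform_img * contrast / (contrast.max() + 1e-9))
        return hi + lo

    rng = np.random.default_rng(1)
    u = rng.random((64, 64))
    s = u * (1 + 0.5 * rng.standard_normal((64, 64)))   # speckled counterpart
    out = hilo_fuse(u, s)
    print(out.shape)   # (64, 64)
    ```

    The real algorithm applies a carefully matched filter pair so Hi and Lo bands join seamlessly at the crossover frequency; that bookkeeping is omitted here.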

  2. The Probabilistic Analysis of Language Acquisition: Theoretical, Computational, and Experimental Analysis

    ERIC Educational Resources Information Center

    Hsu, Anne S.; Chater, Nick; Vitanyi, Paul M. B.

    2011-01-01

    There is much debate over the degree to which language learning is governed by innate language-specific biases, or acquired through cognition-general principles. Here we examine the probabilistic language acquisition hypothesis on three levels: We outline a novel theoretical result showing that it is possible to learn the exact "generative model"…

  3. On image analysis in fractography (Methodological Notes)

    NASA Astrophysics Data System (ADS)

    Shtremel', M. A.

    2015-10-01

    As in other areas of image analysis, fractography has no universal method for information convolution. An effective characteristic of an image is found by analyzing the essence and origin of every class of objects. As follows from the geometric definition of a fractal curve, its projection onto any straight line covers a certain segment many times; therefore, neither a time series (a one-valued function of time) nor an image (a one-valued function on the plane) can be a fractal. For applications, multidimensional multiscale characteristics of an image are necessary. "Full" wavelet series break the law of conservation of information.

  4. Design and construction of the front-end electronics data acquisition for the SLD CRID (Cherenkov Ring Imaging Detector)

    SciTech Connect

    Hoeflich, J.; McShurley, D.; Marshall, D.; Oxoby, G.; Shapiro, S.; Stiles, P. ); Spencer, E. . Inst. for Particle Physics)

    1990-10-01

    We describe the front-end electronics for the Cherenkov Ring Imaging Detector (CRID) of the SLD at the Stanford Linear Accelerator Center. The design philosophy and implementation are discussed with emphasis on the low-noise hybrid amplifiers, signal processing and data acquisition electronics. The system receives signals from a highly efficient single-photoelectron detector. These signals are shaped and amplified before being stored in an analog memory and processed by a digitizing system. The data from several ADCs are multiplexed and transmitted via fiber optics to the SLD FASTBUS system. We highlight the technologies used, as well as the space, power dissipation, and environmental constraints imposed on the system. 16 refs., 10 figs.

  5. High-speed multiframe dynamic transmission electron microscope image acquisition system with arbitrary timing

    DOEpatents

    Reed, Bryan W.; Dehope, William J; Huete, Glenn; LaGrange, Thomas B.; Shuttlesworth, Richard M

    2016-06-21

    An electron microscope is disclosed which has a laser-driven photocathode and an arbitrary waveform generator (AWG) laser system ("laser"). The laser produces a train of temporally-shaped laser pulses of a predefined pulse duration and waveform, and directs the laser pulses to the laser-driven photocathode to produce a train of electron pulses. An image sensor is used along with a deflector subsystem. The deflector subsystem is arranged downstream of the target but upstream of the image sensor, and has two pairs of plates arranged perpendicular to one another. A control system controls the laser and a plurality of switching components synchronized with the laser, to independently control excitation of each one of the deflector plates. This allows each electron pulse to be directed to a different portion of the image sensor, as well as to be provided with an independently set duration and independently set inter-pulse spacings.
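    Steering each electron pulse to its own tile of the image sensor with two orthogonal deflector-plate pairs reduces to a small mapping from pulse index to plate voltages. A sketch, with a linear deflection model and a voltage scale that are assumptions for illustration (the patent does not specify them):

    ```python
    # Sketch: route pulse n of a 3x3 "movie" to sensor tile (row, col) by
    # driving the horizontal and vertical deflector-plate pairs.

    def plate_voltages(pulse_index, grid=3, volts_per_tile=10.0):
        """Return (vx, vy) for the two plate pairs, assuming deflection is
        linear in voltage and tiles are centered on the sensor."""
        row, col = divmod(pulse_index, grid)
        centre = (grid - 1) / 2
        vx = (col - centre) * volts_per_tile   # horizontal pair
        vy = (row - centre) * volts_per_tile   # vertical pair
        return vx, vy

    print([plate_voltages(i) for i in range(3)])
    # pulses 0..2 sweep the top row: [(-10.0, -10.0), (0.0, -10.0), (10.0, -10.0)]
    ```

    In the actual system a digital sequencer switches these excitations between pulses, synchronized with the AWG laser, which is what allows arbitrary per-pulse durations and spacings.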

  6. High-speed multi-frame dynamic transmission electron microscope image acquisition system with arbitrary timing

    DOEpatents

    Reed, Bryan W.; DeHope, William J.; Huete, Glenn; LaGrange, Thomas B.; Shuttlesworth, Richard M.

    2016-02-23

    An electron microscope is disclosed which has a laser-driven photocathode and an arbitrary waveform generator (AWG) laser system ("laser"). The laser produces a train of temporally-shaped laser pulses each being of a programmable pulse duration, and directs the laser pulses to the laser-driven photocathode to produce a train of electron pulses. An image sensor is used along with a deflector subsystem. The deflector subsystem is arranged downstream of the target but upstream of the image sensor, and has a plurality of plates. A control system having a digital sequencer controls the laser and a plurality of switching components, synchronized with the laser, to independently control excitation of each one of the deflector plates. This allows each electron pulse to be directed to a different portion of the image sensor, as well as to enable programmable pulse durations and programmable inter-pulse spacings.

  7. High-speed multiframe dynamic transmission electron microscope image acquisition system with arbitrary timing

    SciTech Connect

    Reed, Bryan W.; DeHope, William J.; Huete, Glenn; LaGrange, Thomas B.; Shuttlesworth, Richard M.

    2015-10-20

    An electron microscope is disclosed which has a laser-driven photocathode and an arbitrary waveform generator (AWG) laser system ("laser"). The laser produces a train of temporally-shaped laser pulses of a predefined pulse duration and waveform, and directs the laser pulses to the laser-driven photocathode to produce a train of electron pulses. An image sensor is used along with a deflector subsystem. The deflector subsystem is arranged downstream of the target but upstream of the image sensor, and has two pairs of plates arranged perpendicular to one another. A control system controls the laser and a plurality of switching components synchronized with the laser, to independently control excitation of each one of the deflector plates. This allows each electron pulse to be directed to a different portion of the image sensor, as well as to be provided with an independently set duration and independently set inter-pulse spacings.

  8. Malware analysis using visualized image matrices.

    PubMed

    Han, KyoungSoo; Kang, BooJoong; Im, Eul Gyu

    2014-01-01

    This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities for the image matrices. In particular, our proposed methods can also be applied to packed malware samples, by using the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overheads by extracting the opcode sequences only from the blocks that include the instructions related to staple behaviors such as function and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons for the classification of unknown samples; the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically, with accuracies of 0.9896 and 0.9732, respectively. PMID:25133202
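    The core idea (opcode sequences hashed into colored pixels, then compared by similarity) can be sketched briefly. The hashing scheme below (MD5 of opcode 3-grams mapped to coordinates and RGB values) is an assumption for illustration, not the paper's exact construction:

    ```python
    import hashlib
    import numpy as np

    def opcode_image(opcodes, size=8):
        """Map each opcode 3-gram to an (x, y) pixel and an RGB contribution
        via a hash, accumulating an image matrix for the sample."""
        img = np.zeros((size, size, 3), dtype=float)
        for i in range(len(opcodes) - 2):
            gram = ",".join(opcodes[i:i + 3]).encode()
            h = hashlib.md5(gram).digest()
            x, y = h[0] % size, h[1] % size
            img[y, x] += np.array(h[2:5], dtype=float)   # RGB contribution
        return img / (img.max() + 1e-9)

    def similarity(a, b):
        """Cosine similarity between two flattened image matrices."""
        av, bv = a.ravel(), b.ravel()
        return float(av @ bv / (np.linalg.norm(av) * np.linalg.norm(bv) + 1e-9))

    fam_a = ["push", "mov", "call", "pop", "ret"] * 4
    fam_b = ["xor", "jmp", "cmp", "jne", "ret"] * 4
    img_a, img_b = opcode_image(fam_a), opcode_image(fam_b)
    print(round(similarity(img_a, img_a), 3))   # 1.0 (self-similarity)
    ```

    Family classification then amounts to comparing an unknown sample's matrix against one representative matrix per family and picking the highest similarity.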

  9. Malware Analysis Using Visualized Image Matrices

    PubMed Central

    Im, Eul Gyu

    2014-01-01

    This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities for the image matrices. In particular, our proposed methods can also be applied to packed malware samples, by using the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overheads by extracting the opcode sequences only from the blocks that include the instructions related to staple behaviors such as function and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons for the classification of unknown samples; the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically, with accuracies of 0.9896 and 0.9732, respectively. PMID:25133202

  11. Acquisition of quantitative physiological data and computerized image reconstruction using a single scan TV system

    NASA Technical Reports Server (NTRS)

    Baily, N. A.

    1975-01-01

    Single scan operation of television X-ray fluoroscopic systems allows both analog and digital reconstruction of tomographic sections from single plane images. This type of system, combined with a minimum of statistical processing, showed excellent capabilities for delineating small changes in differential X-ray attenuation. Patient dose reduction is significant when compared to normal operation or film recording. Flat screen, low light level systems were both rugged and light in weight, making them applicable for a variety of special purposes. Three-dimensional information was available from the tomographic methods, and the recorded data were sufficient, when used with appropriate computer display devices, to give representative 3D images.

  12. Diagnosing Lung Nodules on Oncologic MR/PET Imaging: Comparison of Fast T1-Weighted Sequences and Influence of Image Acquisition in Inspiration and Expiration Breath-Hold

    PubMed Central

    Schwenzer, Nina F.; Seith, Ferdinand; Gatidis, Sergios; Brendle, Cornelia; Schmidt, Holger; Pfannenberg, Christina A.; laFougère, Christian; Nikolaou, Konstantin

    2016-01-01

    Objective First, to investigate the diagnostic performance of fast T1-weighted sequences for lung nodule evaluation in oncologic magnetic resonance (MR)/positron emission tomography (PET). Second, to evaluate the influence of image acquisition in inspiration and expiration breath-hold on diagnostic performance. Materials and Methods The study was approved by the local Institutional Review Board. PET/CT and MR/PET of 44 cancer patients were evaluated by 2 readers. PET/CT included lung computed tomography (CT) scans in inspiration and expiration (CTin, CTex). MR/PET included Dixon sequence for attenuation correction and fast T1-weighted volumetric interpolated breath-hold examination (VIBE) sequences (volume interpolated breath-hold examination acquired in inspiration [VIBEin], volume interpolated breath-hold examination acquired in expiration [VIBEex]). Diagnostic performance was analyzed for lesion-, lobe-, and size-dependence. Diagnostic confidence was evaluated (4-point Likert-scale; 1 = high). Jackknife alternative free-response receiver-operating characteristic (JAFROC) analysis was performed. Results Seventy-six pulmonary lesions were evaluated. Lesion-based detection rates were: CTex, 77.6%; VIBEin, 53.3%; VIBEex, 51.3%; and Dixon, 22.4%. Lobe-based detection rates were: CTex, 89.6%; VIBEin, 58.3%; VIBEex, 60.4%; and Dixon, 31.3%. In contrast to CT, inspiration versus expiration did not alter diagnostic performance in VIBE sequences. Diagnostic confidence was best for VIBEin and CTex and decreased in VIBEex and Dixon (1.2 ± 0.6; 1.2 ± 0.7; 1.5 ± 0.9; 1.7 ± 1.1, respectively). The JAFROC figure-of-merit of Dixon was significantly lower. All patients with malignant lesions were identified by CTex, VIBEin, and VIBEex, while 3 patients were false-negative in Dixon. Conclusion Fast T1-weighted VIBE sequences allow for identification of patients with malignant pulmonary lesions. The Dixon sequence is not recommended for lung nodule evaluation in oncologic MR

  13. Data acquisition, preprocessing and analysis for the Virginia Tech OLYMPUS experiment

    NASA Technical Reports Server (NTRS)

    Remaklus, P. Will

    1991-01-01

    Virginia Tech is conducting a slant path propagation experiment using the 12, 20, and 30 GHz OLYMPUS beacons. Beacon signal measurements are made using separate terminals for each frequency. In addition, short baseline diversity measurements are collected through a mobile 20 GHz terminal. Data collection is performed with a custom data acquisition and control system. Raw data are preprocessed to remove equipment biases and discontinuities prior to analysis. Preprocessed data are then statistically analyzed to investigate parameters such as frequency scaling, fade slope and duration, and scintillation intensity.

  14. The Open Microscopy Environment (OME) Data Model and XML file: open tools for informatics and quantitative analysis in biological imaging

    PubMed Central

    Goldberg, Ilya G; Allan, Chris; Burel, Jean-Marie; Creager, Doug; Falconi, Andrea; Hochheiser, Harry; Johnston, Josiah; Mellen, Jeff; Sorger, Peter K; Swedlow, Jason R

    2005-01-01

    The Open Microscopy Environment (OME) defines a data model and a software implementation to serve as an informatics framework for imaging in biological microscopy experiments, including representation of acquisition parameters, annotations and image analysis results. OME is designed to support high-content cell-based screening as well as traditional image analysis applications. The OME Data Model, expressed in Extensible Markup Language (XML) and realized in a traditional database, is both extensible and self-describing, allowing it to meet emerging imaging and analysis needs. PMID:15892875
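    Reading acquisition metadata out of an OME-style XML document is straightforward with a standard XML parser. The snippet below uses a simplified, namespace-free fragment in the OME style; real OME-XML carries namespaces and a much richer schema, so treat the element and attribute names here as illustrative:

    ```python
    import xml.etree.ElementTree as ET

    # A minimal OME-XML-like snippet (simplified; no namespaces).
    xml = """<OME>
      <Image ID="Image:0" Name="well_A1">
        <Pixels SizeX="512" SizeY="512" SizeC="2" PhysicalSizeX="0.65"/>
      </Image>
    </OME>"""

    root = ET.fromstring(xml)
    for image in root.findall("Image"):
        px = image.find("Pixels")
        print(image.get("Name"), int(px.get("SizeX")), float(px.get("PhysicalSizeX")))
    # well_A1 512 0.65
    ```

    The self-describing nature of the model means downstream tools can discover acquisition parameters and analysis results by walking the same tree rather than hard-coding a file layout.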

  15. Principal component analysis of scintimammographic images.

    PubMed

    Bonifazzi, Claudio; Cinti, Maria Nerina; Vincentis, Giuseppe De; Finos, Livio; Muzzioli, Valerio; Betti, Margherita; Nico, Lanconelli; Tartari, Agostino; Pani, Roberto

    2006-01-01

    The recent development of new gamma imagers based on scintillation arrays with high spatial resolution has strongly improved the possibility of detecting sub-centimeter cancers in scintimammography. However, Compton scattering contamination remains the main drawback, since it limits the sensitivity of tumor detection. Principal component image analysis (PCA), recently introduced in scintimammographic imaging, is a data reduction technique able to represent the radiation emitted from the chest and from healthy and damaged breast tissue as separate images. From these images a scintimammogram can be obtained in which the Compton contamination is "removed". In the present paper we compared the PCA-reconstructed images with the conventional scintimammographic images resulting from the photopeak (Ph) energy window. Data coming from a clinical trial were used. For both kinds of images, tumor presence was quantified by evaluating Student's t-statistic for independent samples as a measure of the signal-to-noise ratio (SNR). Owing to the absence of Compton scattering, the PCA-reconstructed images show better noise suppression and allow more reliable diagnostics than the images obtained from the photopeak energy window, reducing the tendency to produce false positives. PMID:17646004
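The separation idea can be sketched numerically. The example below builds a hypothetical stack of energy-windowed images (a synthetic "tumor" plus a smooth Compton-like scatter background, with invented mixing weights, not clinical data) and extracts the first principal-component image with a plain SVD; it illustrates the decomposition, not the authors' implementation:

```python
import numpy as np

# Hypothetical stack: one image per energy window, shape (n_windows, H, W).
rng = np.random.default_rng(1)
n_win, H, W = 8, 32, 32
photopeak = np.zeros((H, W)); photopeak[12:20, 12:20] = 1.0   # "tumor"
scatter = np.fromfunction(lambda y, x: 0.5 + 0.001 * x, (H, W))
weights = np.linspace(1.0, 0.2, n_win)        # photopeak fraction per window
stack = (weights[:, None, None] * photopeak
         + (1 - weights)[:, None, None] * scatter
         + rng.normal(0, 0.01, (n_win, H, W)))

X = stack.reshape(n_win, -1)
Xc = X - X.mean(axis=0)                        # center across energy windows
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Vt[0].reshape(H, W)                      # first principal-component image
```

Because the window-to-window variation is dominated by the photopeak/scatter mixing, `pc1` recovers an image strongly correlated (up to sign) with the tumor distribution.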

  16. Design and characterization of a digital image acquisition system for whole-specimen breast histopathology

    NASA Astrophysics Data System (ADS)

    Clarke, Gina M.; Peressotti, Chris; Mawdsley, Gordon E.; Yaffe, Martin J.

    2006-10-01

    We have developed a digital histopathology imaging system capable of producing a three-dimensional (3D) representation of histopathology from an entire lumpectomy specimen. The system has the potential to improve the accuracy of surgical margin assessment in the treatment of breast cancer by providing finer sampling and 3D visualization. A scanning light microscope was modified to allow digital photomicrography of a stack of large (up to 120 × 170 mm2) histology slides cut serially through the entire specimen. The images are registered and displayed in 2D and 3D. The design of the system, which reduces or eliminates the appearance of 'tiling' and 'seam' artefacts inherent in the scanning method, is described and its resolution, contrast/noise and coverage properties are characterized through measurements of the modulation transfer function (MTF), depth of field (DOF) and signal difference to noise ratio (SDNR). The imaging task requires a lateral resolution of 5 µm, an SDNR of 5 between relevant features, 'tiling artefact' at a level below the detectability threshold of the eye, and 'seam artefact' of less than 5-10 µm. The tests demonstrate that the system is largely adequate for the imaging task, although further optimizations are required to reduce the degradation of coverage incurred by seam artefact.
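The SDNR figure used in the characterization has a simple operational definition. One common convention is shown below (pooled region noise; this is an assumption for illustration, not necessarily the authors' exact estimator):

```python
import numpy as np

def sdnr(image, roi_a, roi_b):
    """Signal-difference-to-noise ratio between two regions of interest.
    roi_a/roi_b are boolean masks; noise is taken as the pooled standard
    deviation of the two regions (one common convention)."""
    a, b = image[roi_a], image[roi_b]
    noise = np.sqrt(0.5 * (a.var(ddof=1) + b.var(ddof=1)))
    return abs(a.mean() - b.mean()) / noise

# synthetic slide image: feature region 10 units brighter, noise sigma = 2
rng = np.random.default_rng(2)
img = rng.normal(100.0, 2.0, (64, 64))
img[:32] += 10.0
roi_a = np.zeros((64, 64), bool); roi_a[:32] = True
roi_b = ~roi_a
value = sdnr(img, roi_a, roi_b)   # close to 10 / 2 = 5
```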

  17. Teaching the Dance Class: Strategies to Enhance Skill Acquisition, Mastery and Positive Self-Image

    ERIC Educational Resources Information Center

    Mainwaring, Lynda M.; Krasnow, Donna H.

    2010-01-01

    Effective teaching of dance skills is informed by a variety of theoretical frameworks and individual teaching and learning styles. The purpose of this paper is to present practical teaching strategies that enhance the mastery of skills and promote self-esteem, self-efficacy, and positive self-image. The predominant thinking and primary research…

  18. Improving in situ data acquisition using training images and a Bayesian mixture model

    NASA Astrophysics Data System (ADS)

    Abdollahifard, Mohammad Javad; Mariethoz, Gregoire; Pourfard, Mohammadreza

    2016-06-01

    Estimating the spatial distribution of physical processes using a minimum number of samples is of vital importance in earth science applications where sampling is costly. In recent years, training image-based methods have received a lot of attention for interpolation and simulation. However, training images have never been employed to optimize the spatial sampling process. In this paper, a sequential compressive sampling method is presented which decides the location of new samples based on a training image. First, a Bayesian mixture model is developed based on the training patterns. Then, using this model, unknown values are estimated based on a limited number of random samples. Since the model is probabilistic, it allows estimating local uncertainty conditional on the available samples. Based on this, new samples are sequentially extracted from the locations with maximum uncertainty. Experiments show that compared to a random sampling strategy, the proposed supervised sampling method significantly reduces the number of samples needed to achieve the same level of accuracy, even when the training image is not optimally chosen. The method has the potential to reduce the number of observations necessary for the characterization of environmental processes.
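The sequential selection loop can be illustrated with a deliberately crude stand-in: instead of the paper's Bayesian mixture model, local uncertainty is proxied here by squared distance to the nearest existing sample (i.e. plain farthest-point sampling). Only the pick-the-most-uncertain-location logic is shown; everything else is invented for the sketch:

```python
import numpy as np

def sequential_sampling(field, n_samples):
    """Sequentially place samples at the location of maximum 'uncertainty'.
    Crude stand-in for the paper's Bayesian mixture model: uncertainty is
    proxied by squared distance to the nearest existing sample."""
    H, W = field.shape
    ys, xs = np.mgrid[0:H, 0:W]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    picked = [0]                                  # start at one corner
    for _ in range(n_samples - 1):
        d2 = ((coords[:, None, :] - coords[picked][None, :, :]) ** 2).sum(-1)
        uncertainty = d2.min(axis=1)              # distance to nearest sample
        picked.append(int(uncertainty.argmax()))  # sample where least informed
    return coords[picked].astype(int)

field = np.zeros((16, 16))
pts = sequential_sampling(field, 5)
```

In the real method the uncertainty map would come from the mixture model conditioned on the training image and the values already observed, so the picks would also reflect pattern structure, not just geometry.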

  19. Multi-image acquisition-based distance sensor using agile laser spot beam.

    PubMed

    Riza, Nabeel A; Amin, M Junaid

    2014-09-01

    We present a novel laser-based distance measurement technique that uses multiple-image-based spatial processing to enable distance measurements. Compared with the first-generation distance sensor using spatial processing, the modified sensor is no longer hindered by the classic Rayleigh axial resolution limit for the propagating laser beam at its minimum beam waist location. The proposed high-resolution distance sensor design uses an electronically controlled variable focus lens (ECVFL) in combination with an optical imaging device, such as a charge-coupled device (CCD), to produce and capture different laser spot size images on a target, with these beam spot sizes differing from the minimal spot size possible at that target distance. By exploiting the unique relationship of the target-located spot sizes with the varying ECVFL focal length for each target distance, the proposed distance sensor can compute the target distance with a distance measurement resolution better than the axial resolution given by the Rayleigh criterion. Using a 30 mW 633 nm He-Ne laser coupled with an electromagnetically actuated liquid ECVFL, along with a 20 cm focal length bias lens, and using five spot images captured per target position by a CCD-based Nikon camera, a proof-of-concept distance sensor was successfully implemented in the laboratory over target ranges from 10 to 100 cm with a demonstrated sub-cm axial resolution, which is better than the axial Rayleigh resolution limit at these target distances. Applications for this potentially cost-effective distance sensor are diverse and include industrial inspection and measurement and 3D object shape mapping and imaging.
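The core idea, inverting the spot-size-versus-focal-length relationship to recover distance, can be sketched with a simple geometric thin-lens spot model w(f) = r0*|1 - d/f|. This toy model and all the numbers below are invented for illustration; the actual sensor relies on a calibrated beam-propagation relationship:

```python
import numpy as np

def estimate_distance(focals, spots, r0, d_grid):
    """Grid-search the target distance d that best explains the measured
    spot radii under a geometric thin-lens model w(f) = r0*|1 - d/f|.
    A toy stand-in for the paper's calibration-based computation."""
    errs = [np.sum((r0 * np.abs(1 - d / focals) - spots) ** 2) for d in d_grid]
    return d_grid[int(np.argmin(errs))]

r0, d_true = 1.0, 55.0                          # cm, hypothetical values
focals = np.array([20.0, 30.0, 40.0, 60.0, 80.0])  # ECVFL focal settings, cm
spots = r0 * np.abs(1 - d_true / focals)        # noiseless "measurements"
d_grid = np.arange(10.0, 100.0, 0.1)
d_est = estimate_distance(focals, spots, r0, d_grid)
```

Because several spot-size measurements constrain a single unknown distance, the fit can localize d more finely than any single defocused spot would suggest, which is the qualitative point of the multi-image approach.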

  20. Automatic quantitative analysis of t-tubule organization in cardiac myocytes using ImageJ.

    PubMed

    Pasqualin, Côme; Gannier, François; Malécot, Claire O; Bredeloux, Pierre; Maupoil, Véronique

    2015-02-01

    The transverse tubule system in mammalian striated muscle is highly organized and contributes to optimal and homogeneous contraction. Diverse pathologies such as heart failure and atrial fibrillation include disorganization of t-tubules and contractile dysfunction. Few tools are available for the quantification of the organization of the t-tubule system. We developed a plugin for the ImageJ/Fiji image analysis platform developed by the National Institutes of Health. This plugin (TTorg) analyzes raw confocal microscopy images. Analysis options include the whole image, specific regions of the image (cropping), and z-axis analysis of the same image. Batch analysis of a series of images with identical criteria is also one of the options. There is no need to either reorientate any specimen to the horizontal or to do a thresholding of the image to perform analysis. TTorg includes a synthetic "myocyte-like" image generator to test the plugin's efficiency in the user's own experimental conditions. This plugin was validated on synthetic images for different simulated cell characteristics and acquisition parameters. TTorg was able to detect significant differences between the organization of the t-tubule systems in experimental data of mouse ventricular myocytes isolated from wild-type and dystrophin-deficient mice. TTorg is freely distributed, and its source code is available. It provides a reliable, easy-to-use, automatic, and unbiased measurement of t-tubule organization in a wide variety of experimental conditions.
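A minimal FFT-based regularity measure in the spirit of this kind of analysis (not TTorg's actual code; the pixel size and the 1.8 µm spacing below are invented) takes a 1-D intensity profile along the cell's long axis and reports the dominant t-tubule spacing together with the fraction of spectral power it carries:

```python
import numpy as np

def ttubule_regularity(profile, pixel_um):
    """Dominant periodic spacing and its spectral power fraction from a 1-D
    intensity profile, as a crude FFT-based organization index."""
    x = profile - profile.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=pixel_um)   # cycles per micrometer
    k = power[1:].argmax() + 1                    # skip the DC bin
    spacing = 1.0 / freqs[k]                      # micrometers per period
    regularity = power[k] / power[1:].sum()       # power concentration
    return spacing, regularity

pixel = 0.1                                        # µm per pixel, hypothetical
n = 512
xaxis = np.arange(n) * pixel
profile = 1.0 + 0.5 * np.sin(2 * np.pi * xaxis / 1.8)  # ~1.8 µm periodicity
spacing, reg = ttubule_regularity(profile, pixel)
```

A disorganized t-tubule system would spread power across many frequencies, lowering `regularity`, which is the kind of difference such a metric detects between wild-type and dystrophin-deficient myocytes.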

  1. Image analysis in comparative genomic hybridization

    SciTech Connect

    Lundsteen, C.; Maahr, J.; Christensen, B.

    1995-01-01

    Comparative genomic hybridization (CGH) is a new technique by which genomic imbalances can be detected by combining in situ suppression hybridization of whole genomic DNA and image analysis. We have developed software for rapid, quantitative CGH image analysis by a modification and extension of the standard software used for routine karyotyping of G-banded metaphase spreads in the Magiscan chromosome analysis system. The DAPI-counterstained metaphase spread is karyotyped interactively. Corrections for image shifts between the DAPI, FITC, and TRITC images are done manually by moving the three images relative to each other. The fluorescence background is subtracted. A mean filter is applied to smooth the FITC and TRITC images before the fluorescence ratio between the individual FITC and TRITC-stained chromosomes is computed pixel by pixel inside the area of the chromosomes determined by the DAPI boundaries. Fluorescence intensity ratio profiles are generated, and peaks and valleys indicating possible gains and losses of test DNA are marked if the ratio falls below 0.75 or rises above 1.25. By combining the analysis of several metaphase spreads, consistent findings of gains and losses in all or almost all spreads indicate chromosomal imbalance. Chromosomal imbalances are detected either by visual inspection of fluorescence ratio (FR) profiles or by a statistical approach that compares FR measurements of the individual case with measurements of normal chromosomes. The complete analysis of one metaphase can be carried out in approximately 10 minutes. 8 refs., 7 figs., 1 tab.
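The pixelwise ratio computation and the 0.75/1.25 thresholds described above can be sketched directly (smoothing, profile generation and manual shift correction are omitted, and the array sizes are toy values):

```python
import numpy as np

def cgh_ratio(fitc, tritc, dapi_mask, background=0.0):
    """Pixelwise FITC/TRITC fluorescence ratio inside the DAPI-defined
    chromosome mask, flagging gains above 1.25 and losses below 0.75."""
    f = np.clip(fitc - background, 1e-6, None)   # avoid division by zero
    t = np.clip(tritc - background, 1e-6, None)
    ratio = np.where(dapi_mask, f / t, np.nan)   # ratio only inside chromosomes
    gains = dapi_mask & (ratio > 1.25)
    losses = dapi_mask & (ratio < 0.75)
    return ratio, gains, losses

mask = np.ones((4, 4), bool)                     # toy chromosome mask
fitc = np.full((4, 4), 1.0); fitc[0, 0] = 2.0; fitc[3, 3] = 0.5
tritc = np.ones((4, 4))
ratio, gains, losses = cgh_ratio(fitc, tritc, mask)
```

In the real pipeline these per-pixel flags are aggregated into per-chromosome ratio profiles and compared across several metaphase spreads before an imbalance is called.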

  2. Data acquisition and analysis for the Fermilab Collider RunII

    SciTech Connect

    Paul L. G. Lebrun et al.

    2004-07-07

    Operating and improving the understanding of the Fermilab Accelerator Complex for the colliding beam experiments requires advanced software methods and tools. The Shot Data Acquisition and Analysis (SDA) system has been developed to fulfill this need. SDA takes a standard set of critical data at relevant stages during the complex series of beam manipulations leading to √s ≈ 2 TeV collisions. Data are stored in a relational database and are served to programs and users via Web-based tools. Summary tables are systematically generated during and after a store. Written entirely in Java, SDA supports both interactive tools and application interfaces used for in-depth analysis. In this talk, we present the architecture and describe some of our analysis tools. We also present some results on recent Tevatron performance as illustrations of the capabilities of SDA.

  3. Automated Confocal Laser Scanning Microscopy and Semiautomated Image Processing for Analysis of Biofilms

    PubMed Central

    Kuehn, Martin; Hausner, Martina; Bungartz, Hans-Joachim; Wagner, Michael; Wilderer, Peter A.; Wuertz, Stefan

    1998-01-01

    The purpose of this study was to develop and apply a quantitative optical method suitable for routine measurements of biofilm structures under in situ conditions. A computer program was designed to perform automated investigations of biofilms by using image acquisition and image analysis techniques. To obtain a representative profile of a growing biofilm, a nondestructive procedure was created to study and quantify undisturbed microbial populations within the physical environment of a glass flow cell. Key components of the computer-controlled processing described in this paper are the on-line collection of confocal two-dimensional (2D) cross-sectional images from a preset 3D domain of interest followed by the off-line analysis of these 2D images. With the quantitative extraction of information contained in each image, a three-dimensional reconstruction of the principal biological events can be achieved. The program is convenient to handle and was generated to determine biovolumes and thus facilitate the examination of dynamic processes within biofilms. In the present study, Pseudomonas fluorescens or a green fluorescent protein-expressing Escherichia coli strain, EC12, was inoculated into glass flow cells and the respective monoculture biofilms were analyzed in three dimensions. In this paper we describe a method for the routine measurements of biofilms by using automated image acquisition and semiautomated image analysis. PMID:9797255
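The biovolume extraction from a confocal z-stack reduces, in its simplest form, to thresholding each 2-D slice and counting biomass voxels. A minimal sketch with invented dimensions, threshold, and voxel size (the actual program also handles registration, 3D reconstruction and interactive control):

```python
import numpy as np

def biovolume(stack, threshold, voxel_um3):
    """Biovolume from a confocal z-stack: threshold each 2-D slice, count
    biomass voxels, multiply by the voxel volume."""
    binary = stack > threshold
    return binary.sum() * voxel_um3

# hypothetical 10-slice stack with one bright biofilm cluster
rng = np.random.default_rng(3)
stack = rng.uniform(0, 50, (10, 64, 64))
stack[2:6, 20:40, 20:40] += 200                  # 4 slices x 20 x 20 voxels
vol = biovolume(stack, threshold=100, voxel_um3=0.5 ** 3)
```

Repeating this over time on the same preset 3-D domain yields the growth curves of biovolume that the automated on-line acquisition makes possible.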

  4. Acquisition of quantitative physiological data and computerized image reconstruction using a single scan TV system

    NASA Technical Reports Server (NTRS)

    Baily, N. A.

    1976-01-01

    A single-scan radiography system has been interfaced to a minicomputer, and the combined system has been used with a variety of fluoroscopic systems and image intensifiers available in clinical facilities. The system's response range is analyzed, and several applications are described. These include determination of the gray scale for typical X-ray-fluoroscopic-television chains, measurement of gallstone volume in patients, localization of markers or other small anatomical features, determinations of organ areas and volumes, computer reconstruction of tomographic sections of organs in motion, and computer reconstruction of transverse axial body sections from fluoroscopic images. It is concluded that this type of system combined with a minimum of statistical processing shows excellent capabilities for delineating small changes in differential X-ray attenuation.

  5. Repeated-Measures Analysis of Image Data

    NASA Technical Reports Server (NTRS)

    Newton, H. J.

    1983-01-01

    It is suggested that using a modified analysis of variance procedure on data sampled systematically from a rectangular array of image data can provide a measure of homogeneity of means over that array in single directions and how variation in perpendicular directions interact. The modification of analysis of variance required to account for spatial correlation is described theoretically and numerically on simulated data.
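The idea can be illustrated with a plain two-way decomposition of a systematically sampled grid into row effects, column effects and interaction, as in a two-way ANOVA without replication (the spatial-correlation correction that the abstract describes is omitted from this sketch):

```python
import numpy as np

def two_way_effects(grid):
    """Decompose a sampled image grid into grand mean, row effects,
    column effects and interaction, with the associated sums of squares."""
    grand = grid.mean()
    row = grid.mean(axis=1) - grand          # vertical-direction effects
    col = grid.mean(axis=0) - grand          # horizontal-direction effects
    inter = grid - grand - row[:, None] - col[None, :]
    ss = {"rows": grid.shape[1] * (row ** 2).sum(),
          "cols": grid.shape[0] * (col ** 2).sum(),
          "interaction": (inter ** 2).sum()}
    return grand, row, col, ss

# purely additive toy grid: value = row trend + column trend
g = np.add.outer(np.array([0.0, 1.0, 2.0]), np.array([0.0, 10.0, 20.0, 30.0]))
grand, row, col, ss = two_way_effects(g)
```

On this additive grid the interaction sum of squares is zero; on real image data a large interaction term would signal that variation in perpendicular directions is not independent, which is what the modified procedure is designed to measure.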

  6. Design and demonstrate the performance of cryogenic components representative of space vehicles: Start basket liquid acquisition device performance analysis

    NASA Technical Reports Server (NTRS)

    1987-01-01

    The objective was to design, fabricate and test an integrated cryogenic test article incorporating both fluid and thermal propellant management subsystems. A 2.2 m (87 in) diameter aluminum test tank was outfitted with multilayer insulation, helium purge system, low-conductive tank supports, thermodynamic vent system, liquid acquisition device and immersed outflow pump. Tests and analysis performed on the start basket liquid acquisition device and studies of the liquid retention characteristics of fine mesh screens are discussed.

  7. Hybrid µCT-FMT imaging and image analysis

    PubMed Central

    Zafarnia, Sara; Babler, Anne; Jahnen-Dechent, Willi; Lammers, Twan; Lederle, Wiltrud; Kiessling, Fabian

    2015-01-01

    Fluorescence-mediated tomography (FMT) enables longitudinal and quantitative determination of the fluorescence distribution in vivo and can be used to assess the biodistribution of novel probes and to assess disease progression using established molecular probes or reporter genes. The combination with an anatomical modality, e.g., micro computed tomography (µCT), is beneficial for image analysis and for fluorescence reconstruction. We describe a protocol for multimodal µCT-FMT imaging including the image processing steps necessary to extract quantitative measurements. After preparing the mice and performing the imaging, the multimodal data sets are registered. Subsequently, an improved fluorescence reconstruction is performed, which takes into account the shape of the mouse. For quantitative analysis, organ segmentations are generated based on the anatomical data using our interactive segmentation tool. Finally, the biodistribution curves are generated using a batch-processing feature. We show the applicability of the method by assessing the biodistribution of a well-known probe that binds to bones and joints. PMID:26066033

  8. Optimization of magnetic flux density for fast MREIT conductivity imaging using multi-echo interleaved partial fourier acquisitions

    PubMed Central

    2013-01-01

    Background Magnetic resonance electrical impedance tomography (MREIT) has been introduced as a non-invasive method for visualizing the internal conductivity and/or current density of an electrically conductive object by externally injected currents. The injected current through a pair of surface electrodes induces a magnetic flux density distribution inside the imaging object, which results in additional magnetic flux density. To measure the magnetic flux density signal in MREIT, the phase difference approach in an interleaved encoding scheme cancels out the systematic artifacts accumulated in phase signals and also reduces the random noise effect by doubling the measured magnetic flux density signal. For practical applications of in vivo MREIT, it is essential to reduce the scan duration maintaining spatial-resolution and sufficient contrast. In this paper, we optimize the magnetic flux density by using a fast gradient multi-echo MR pulse sequence. To recover the one component of magnetic flux density Bz, we use a coupled partial Fourier acquisitions in the interleaved sense. Methods To prove the proposed algorithm, we performed numerical simulations using a two-dimensional finite-element model. For a real experiment, we designed a phantom filled with a calibrated saline solution and located a rubber balloon inside the phantom. The rubber balloon was inflated by injecting the same saline solution during the MREIT imaging. We used the multi-echo fast low angle shot (FLASH) MR pulse sequence for MRI scan, which allows the reduction of measuring time without a substantial loss in image quality. Results Under the assumption of a priori phase artifact map from a reference scan, we rigorously investigated the convergence ratio of the proposed method, which was closely related with the number of measured phase encode set and the frequency range of the background field inhomogeneity. In the phantom experiment with a partial Fourier acquisition, the total scan time was

  9. A Comparison of the Effects of Image-Schema-Based Instruction and Translation-Based Instruction on the Acquisition of L2 Polysemous Words

    ERIC Educational Resources Information Center

    Morimoto, Shun; Loewen, Shawn

    2007-01-01

    This quasi-experimental study investigated the effectiveness of two types of vocabulary instruction--image-schema-based instruction (ISBI) and translation-based instruction (TBI)--on the acquisition of second language (L2) polysemous words. Fifty-eight Japanese high school learners of English were divided into two treatment groups (ISBI and TBI)…

  10. Simultaneous Analysis and Quality Assurance for Diffusion Tensor Imaging

    PubMed Central

    Lauzon, Carolyn B.; Asman, Andrew J.; Esparza, Michael L.; Burns, Scott S.; Fan, Qiuyun; Gao, Yurui; Anderson, Adam W.; Davis, Nicole; Cutting, Laurie E.; Landman, Bennett A.

    2013-01-01

    Diffusion tensor imaging (DTI) enables non-invasive, cyto-architectural mapping of in vivo tissue microarchitecture through voxel-wise mathematical modeling of multiple magnetic resonance imaging (MRI) acquisitions, each differently sensitized to water diffusion. DTI computations are fundamentally estimation processes and are sensitive to noise and artifacts. Despite widespread adoption in the neuroimaging community, maintaining consistent DTI data quality remains challenging given the propensity for patient motion, artifacts associated with fast imaging techniques, and the possibility of hardware changes/failures. Furthermore, the quantity of data acquired per voxel, the non-linear estimation process, and numerous potential use cases complicate traditional visual data inspection approaches. Currently, quality inspection of DTI data has relied on visual inspection and individual processing in DTI analysis software programs (e.g. DTIPrep, DTI-studio). However, recent advances in applied statistical methods have yielded several different metrics to assess noise level, artifact propensity, quality of tensor fit, variance of estimated measures, and bias in estimated measures. To date, these metrics have been largely studied in isolation. Herein, we select complementary metrics for integration into an automatic DTI analysis and quality assurance pipeline. The pipeline completes in 24 hours, stores statistical outputs, and produces a graphical summary quality analysis (QA) report. We assess the utility of this streamlined approach for empirical quality assessment on 608 DTI datasets from pediatric neuroimaging studies. The efficiency and accuracy of quality analysis using the proposed pipeline is compared with quality analysis based on visual inspection. The unified pipeline is found to save a statistically significant amount of time (over 70%) while improving the consistency of QA between a DTI expert and a pool of research associates. 
Projection of QA metrics to a low

  11. Motion analysis of knee joint using dynamic volume images

    NASA Astrophysics Data System (ADS)

    Haneishi, Hideaki; Kohno, Takahiro; Suzuki, Masahiko; Moriya, Hideshige; Mori, Sin-ichiro; Endo, Masahiro

    2006-03-01

    Acquisition and analysis of the three-dimensional movement of the knee joint is desired in orthopedic surgery. We have developed two methods to obtain dynamic volume images of the knee joint. One is a 2D/3D registration method combining bi-plane dynamic X-ray fluoroscopy and a static three-dimensional CT; the other is a method using so-called 4D-CT, which uses a cone beam and a wide 2D detector. In this paper, we present two analyses of knee joint movement obtained by these methods: (1) transition of the nearest points between the femur and tibia, and (2) principal component analysis (PCA) of six parameters representing the three-dimensional movement of the knee. As preprocessing for the analysis, the femur and tibia regions are first extracted from the volume data at each time frame, and then registration of the tibia between different frames by an affine transformation consisting of rotation and translation is performed. The same transformation is applied to the femur as well. Using those image data, the movement of the femur relative to the tibia can be analyzed. Six movement parameters of the femur, consisting of three translation parameters and three rotation parameters, are obtained from those images. In analysis (1), the axis of each bone is first found and then the flexion angle of the knee joint is calculated. For each flexion angle, the minimum distance between femur and tibia and the location giving the minimum distance are found for both the lateral condyle and the medial condyle. As a result, it was observed that the movement of the lateral condyle is larger than that of the medial condyle. In analysis (2), it was found that the movement of the knee can be represented by the first three principal components with a precision of 99.58%, and those three components seem to relate strongly to three major movements of the femur in knee bending known in orthopedic surgery.
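Analysis (2), PCA of the six motion parameters, can be sketched with an SVD on a synthetic (frames x 6) parameter matrix dominated by a single flexion-like mode (all numbers below are invented; the real data would come from the registered volume sequences):

```python
import numpy as np

def explained_variance(params):
    """Fraction of variance captured by each principal component of a
    (frames x 6) matrix of motion parameters (3 translations, 3 rotations)."""
    X = params - params.mean(axis=0)
    _, s, _ = np.linalg.svd(X, full_matrices=False)
    var = s ** 2
    return var / var.sum()

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 100)
# hypothetical motion: one dominant flexion-like mode plus small noise
mode = np.outer(np.sin(2 * np.pi * t), [1.0, 0.2, 0.1, 2.0, 0.3, 0.1])
params = mode + rng.normal(0, 0.01, (100, 6))
ratios = explained_variance(params)
```

Summing the first three entries of `ratios` gives the kind of cumulative explained-variance figure (99.58% in the abstract) used to argue that knee motion is effectively low-dimensional.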

  12. Particle Pollution Estimation Based on Image Analysis.

    PubMed

    Liu, Chenbin; Tsow, Francis; Zou, Yi; Tao, Nongjian

    2016-01-01

    Exposure to fine particles can cause various diseases, and an easily accessible method to monitor the particles can help raise public awareness and reduce harmful exposures. Here we report a method to estimate PM air pollution based on analysis of a large number of outdoor images available for Beijing, Shanghai (China) and Phoenix (US). Six image features were extracted from the images, which were used, together with other relevant data, such as the position of the sun, date, time, geographic information and weather conditions, to predict PM2.5 index. The results demonstrate that the image analysis method provides good prediction of PM2.5 indexes, and different features have different significance levels in the prediction. PMID:26828757
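The prediction step can be sketched as an ordinary least-squares fit of a PM2.5 index on a design matrix of image features plus auxiliary variables. The data and the plain linear model below are invented for illustration; the paper does not specify this exact estimator:

```python
import numpy as np

# Hypothetical design: rows = outdoor photos, columns = six extracted image
# features plus auxiliary variables (sun position, hour, humidity, ...).
rng = np.random.default_rng(5)
n, p = 200, 9
X = rng.normal(size=(n, p))
true_w = np.array([3.0, -2.0, 1.5, 0.5, 0.0, 1.0, 0.8, -0.4, 2.0])
y = X @ true_w + 100 + rng.normal(0, 1.0, n)    # synthetic PM2.5 index

A = np.column_stack([X, np.ones(n)])            # add an intercept column
w, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ w
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

The magnitudes of the fitted coefficients in `w` then play the role of the per-feature "significance levels" mentioned in the abstract: features with near-zero weight contribute little to the prediction.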


  15. Automated video-microscopic imaging and data acquisition system for colloid deposition measurements

    DOEpatents

    Abdel-Fattah, Amr I.; Reimus, Paul W.

    2004-12-28

    A video microscopic visualization system and image processing and data extraction and processing method for in situ detailed quantification of the deposition of sub-micrometer particles onto an arbitrary surface and determination of their concentration across the bulk suspension. The extracted data includes (a) surface concentration and flux of deposited, attached and detached colloids, (b) surface concentration and flux of arriving and departing colloids, (c) distribution of colloids in the bulk suspension in the direction perpendicular to the deposition surface, and (d) spatial and temporal distributions of deposited colloids.

  16. Image Chain Analysis For Digital Image Rectification System

    NASA Astrophysics Data System (ADS)

    Arguello, Roger J.

    1981-07-01

    An image chain analysis, utilizing a comprehensive computer program, has been generated for the key elements of a digital image rectification system. System block diagrams and analyses for three system configurations employing film scanner input have been formulated with a parametric specification of pertinent element modulation transfer functions and input film scene spectra. The major elements of the system for this analysis include a high-resolution, high-speed charge-coupled device film scanner, three candidate digital resampling option algorithms (i.e., nearest neighbor, bilinear interpolation and cubic convolution methods), and two candidate printer reconstructor implementations (solid-state light-emitting diode printer and laser beam recorder). Suitable metrics for the digital rectification system, incorporating the effects of interpolation and resolution error, were established, and the image chain analysis program was used to perform a quantitative comparison of the three resampling options with the two candidate printer reconstructor implementations. The nearest neighbor digital resampling function is found to be a good compromise choice when cascaded with either a light-emitting diode printer or laser beam recorder. The resulting composite intensity point spread functions, including resampling, and both types of reconstruction are bilinear and quadratic, respectively.

  17. User's guide to noise data acquisition and analysis programs for HP9845: Nicolet analyzers

    NASA Technical Reports Server (NTRS)

    Mcgary, M. C.

    1982-01-01

    A software interface package was written for use with a desktop computer and two models of single channel Fast Fourier analyzers. This software features a portable measurement and analysis system with several options. Two types of interface hardware can alternately be used in conjunction with the software. Either an IEEE-488 Bus interface or a 16-bit parallel system may be used. Two types of storage medium, either tape cartridge or floppy disc can be used with the software. Five types of data may be stored, plotted, and/or printed. The data types include time histories, narrow band power spectra, and narrow band, one-third octave band, or octave band sound pressure level. The data acquisition programming includes a front panel remote control option for the FFT analyzers. Data analysis options include choice of line type and pen color for plotting.

  18. Data analysis for GOPEX image frames

    NASA Technical Reports Server (NTRS)

    Levine, B. M.; Shaik, K. S.; Yan, T.-Y.

    1993-01-01

    The data analysis based on the image frames received at the Solid State Imaging (SSI) camera of the Galileo Optical Experiment (GOPEX) demonstration conducted between 9-16 Dec. 1992 is described. Laser uplink was successfully established between the ground and the Galileo spacecraft during its second Earth-gravity-assist phase in December 1992. SSI camera frames were acquired which contained images of detected laser pulses transmitted from the Table Mountain Facility (TMF), Wrightwood, California, and the Starfire Optical Range (SOR), Albuquerque, New Mexico. Laser pulse data were processed using standard image-processing techniques at the Multimission Image Processing Laboratory (MIPL) for preliminary pulse identification and to produce public release images. Subsequent image analysis corrected for background noise to measure received pulse intensities. Data were plotted to obtain histograms on a daily basis and were then compared with theoretical results derived from applicable weak-turbulence and strong-turbulence considerations. Processing steps are described and the theories are compared with the experimental results. Quantitative agreement was found in both turbulence regimes, and better agreement would have been found, given more received laser pulses. Future experiments should consider methods to reliably measure low-intensity pulses, and through experimental planning to geometrically locate pulse positions with greater certainty.
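The background correction applied before measuring received pulse intensities can be sketched generically: sum the pixels in a small window around a detected pulse and subtract the median level of a surrounding annulus. The window radii and pixel values below are invented, and this is a generic version of such a correction, not the MIPL processing chain:

```python
import numpy as np

def pulse_intensity(frame, peak_yx, signal_r=2, bg_r=6):
    """Background-corrected pulse intensity: sum pixels within signal_r of
    the detected pulse, subtracting the median of the surrounding annulus."""
    y0, x0 = peak_yx
    yy, xx = np.mgrid[0:frame.shape[0], 0:frame.shape[1]]
    d = np.hypot(yy - y0, xx - x0)
    sig = d <= signal_r                       # pulse window
    bg = (d > signal_r) & (d <= bg_r)         # local background annulus
    return (frame[sig] - np.median(frame[bg])).sum()

frame = np.full((21, 21), 10.0)               # uniform sky background
frame[9:12, 9:12] += 50.0                     # synthetic pulse at the center
intensity = pulse_intensity(frame, (10, 10))  # → 450.0
```

Histograms of such corrected intensities over many frames are what the turbulence-model comparison in the abstract is based on; reliable measurement of low-intensity pulses mainly stresses the background-estimation step.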

  19. Design Criteria For Networked Image Analysis System

    NASA Astrophysics Data System (ADS)

    Reader, Cliff; Nitteberg, Alan

    1982-01-01

    Image systems design is currently undergoing a metamorphosis from the conventional computing systems of the past into a new generation of special purpose designs. This change is motivated by several factors, notable among which is the increased opportunity for high performance at low cost offered by advances in semiconductor technology. Another key factor is a maturing understanding of the problems and of the applicability of digital processing techniques. These factors allow the design of cost-effective systems that are functionally dedicated to specific applications and used in a utilitarian fashion. Following an overview of the above issues, the paper presents a top-down approach to the design of networked image analysis systems. The requirements for such a system are presented, with orientation toward the hospital environment. The three main areas are image data base management, viewing of image data, and image data processing. This is followed by a survey of the current state of the art, covering image display systems, data base techniques, communications networks, and software systems control. The paper concludes with a description of the functional subsystems and architectural framework for networked image analysis in a production environment.

  20. Proteomic analysis of formalin-fixed paraffin embedded tissue by MALDI imaging mass spectrometry

    PubMed Central

    Casadonte, Rita; Caprioli, Richard M

    2012-01-01

    Archived formalin-fixed paraffin-embedded (FFPE) tissue collections represent a valuable informational resource for proteomic studies. Multiple FFPE core biopsies can be assembled in a single block to form tissue microarrays (TMAs). We describe a protocol for analyzing proteins in FFPE-TMAs using matrix-assisted laser desorption/ionization (MALDI) imaging mass spectrometry (IMS). The workflow incorporates an antigen retrieval step following deparaffinization, in situ trypsin digestion, matrix application and then mass spectrometry signal acquisition. The direct analysis of FFPE-TMA tissue using IMS allows multiple tissue samples to be analyzed in a single experiment without extraction and purification of proteins. The advantages of high speed and throughput, easy sample handling and excellent reproducibility make this technology a favorable approach for the proteomic analysis of clinical research cohorts with large sample numbers. For example, TMA analysis of 300 FFPE cores would typically require 6 h of total time through data acquisition, not including data analysis. PMID:22011652

  1. Near-infrared hyperspectral imaging for quality analysis of agricultural and food products

    NASA Astrophysics Data System (ADS)

    Singh, C. B.; Jayas, D. S.; Paliwal, J.; White, N. D. G.

    2010-04-01

    Agricultural and food processing industries are always looking to implement real-time quality monitoring techniques as a part of good manufacturing practices (GMPs) to ensure the high quality and safety of their products. Near-infrared (NIR) hyperspectral imaging is gaining popularity as a powerful non-destructive tool for quality analysis of several agricultural and food products. This technique has the ability to analyse spectral data in a spatially resolved manner (i.e., each pixel in the image has its own spectrum) by applying both conventional image processing and the chemometric tools used in spectral analyses. Hyperspectral imaging has demonstrated potential in detecting defects and contaminants in meats, fruits, cereals, and processed food products. This paper discusses the methodology of hyperspectral imaging in terms of hardware, software, calibration, data acquisition and compression, and the development of prediction and classification algorithms, and it presents a thorough review of the current applications of hyperspectral imaging in the analyses of agricultural and food products.
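
    The spatially resolved spectral analysis described above is conventionally done by unfolding the hyperspectral cube into a matrix with one spectrum per pixel and applying a chemometric tool such as PCA. A self-contained sketch with synthetic data (the cube size, band count, and two pure spectra are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
rows, cols, bands = 16, 16, 64
pure_a = np.sin(np.linspace(0, np.pi, bands))          # two synthetic pure spectra
pure_b = np.linspace(1, 0, bands)
abundance = rng.random((rows, cols, 1))
cube = abundance * pure_a + (1 - abundance) * pure_b   # mixed-pixel cube
cube += 0.01 * rng.standard_normal(cube.shape)         # sensor noise

X = cube.reshape(-1, bands)                 # unfold: one spectrum per row
Xc = X - X.mean(axis=0)                     # mean-center before PCA
# PCA via SVD; scores fold back into per-component score images
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / (s**2).sum()
score_img = (Xc @ Vt[0]).reshape(rows, cols)   # first-component score map
```

    Because the synthetic pixels lie on a single mixing line, the first principal component captures nearly all of the variance; its score image is effectively an abundance map.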

  2. Cancer detection by quantitative fluorescence image analysis.

    PubMed

    Parry, W L; Hemstreet, G P

    1988-02-01

    Quantitative fluorescence image analysis is a rapidly evolving biophysical cytochemical technology with the potential for multiple clinical and basic research applications. We report the application of this technique for bladder cancer detection and discuss its potential usefulness as an adjunct to methods currently used by urologists for the diagnosis and management of bladder cancer. Quantitative fluorescence image analysis is a cytological method that incorporates 2 diagnostic techniques, quantitation of nuclear deoxyribonucleic acid and morphometric analysis, in a single semiautomated system to facilitate the identification of rare events, that is, individual cancer cells. When compared to routine cytopathology for detection of bladder cancer in symptomatic patients, quantitative fluorescence image analysis demonstrated greater sensitivity (76 versus 33 per cent) for the detection of low grade transitional cell carcinoma. The specificity of quantitative fluorescence image analysis in a small control group was 94 per cent, and with the manual method for quantitation of absolute nuclear fluorescence intensity in the screening of high risk asymptomatic subjects the specificity was 96.7 per cent. The more familiar flow cytometry is another fluorescence technique for measurement of nuclear deoxyribonucleic acid. However, rather than identifying individual cancer cells, flow cytometry identifies cellular pattern distributions, that is, the ratio of normal to abnormal cells. Numerous studies by others have shown that flow cytometry is a sensitive method to monitor patients with diagnosed urological disease. Based upon results in separate quantitative fluorescence image analysis and flow cytometry studies, it appears that these 2 fluorescence techniques may be complementary tools for urological screening, diagnosis and management, and that they also may be useful separately or in combination to elucidate the oncogenic process, determine the biological potential of tumors

  3. Advanced automated char image analysis techniques

    SciTech Connect

    Tao Wu; Edward Lester; Michael Cloke

    2006-05-15

    Char morphology is an important characteristic when attempting to understand coal behavior and coal burnout. In this study, an augmented algorithm has been proposed to identify char types using image analysis. On the basis of a series of image processing steps, a char image is singled out from the whole image, which then allows the important major features of the char particle to be measured, including size, porosity, and wall thickness. The techniques for automated char image analysis have been tested against char images taken from the ICCP Char Atlas as well as actual char particles derived from pyrolyzed char samples. Thirty different chars were prepared in a drop tube furnace operating at 1300°C, 1% oxygen, and 100 ms from 15 different world coals sieved into two size fractions (53-75 and 106-125 µm). The results from this automated technique are comparable with those from manual analysis, and the additional detail from the automated system has potential use in applications such as combustion modeling systems. Obtaining highly detailed char information with automated methods has traditionally been hampered by the difficulty of automatic recognition of individual char particles. 20 refs., 10 figs., 3 tabs.
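
    The particle measurements named above reduce to pixel counting once a binary mask of the char section is available. A simplified sketch (Python/NumPy; the synthetic particle and the pre-filled outline mask stand in for the paper's thresholding and hole-filling stages):

```python
import numpy as np

# Synthetic char cross-section: a disk-shaped particle with one internal void
yy, xx = np.mgrid[0:64, 0:64]
filled = (yy - 32) ** 2 + (xx - 32) ** 2 <= 20 ** 2   # particle outline, filled
pores = (yy - 32) ** 2 + (xx - 28) ** 2 <= 8 ** 2     # an internal void
solid = filled & ~pores                                # what thresholding sees

area = filled.sum()                        # particle size in pixels
porosity = 1.0 - solid.sum() / area        # pore fraction of the section
diameter = 2.0 * np.sqrt(area / np.pi)     # equivalent circular diameter
```

    Wall thickness, the third feature mentioned, would additionally need a distance transform from the pore boundary to the outer boundary, which is omitted here.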

  4. Integral imaging acquisition and processing for visualization of photon counting images in the mid-wave infrared range

    NASA Astrophysics Data System (ADS)

    Latorre-Carmona, P.; Pla, F.; Javidi, B.

    2016-06-01

    In this paper, we present an overview of our previously published work on the application of the maximum likelihood (ML) reconstruction method to integral images acquired with a mid-wave infrared detector on two different types of scenes: one consisting of a road, a group of trees, and a vehicle just behind one of the trees (with the vehicle at a distance of more than 200 m from the camera), and another consisting of a view of the Wright Air Force Base airfield, with several hangars and other types of installations (including warehouses) at distances ranging from 600 m to more than 2 km. Dark current noise is considered, taking into account the particular features of this type of sensor. Results show that this methodology improves visualization in the photon counting domain.
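
    The abstract does not reproduce the reconstruction details, but the ML family it refers to can be illustrated with the classic Richardson-Lucy (MLEM) update for Poisson photon-counting data. The 1-D sketch below is a stand-in only: the actual method operates on multiple integral-imaging perspectives and includes a dark-current term, both omitted here.

```python
import numpy as np

def mlem(measured, psf, iters=200):
    """1-D Richardson-Lucy deconvolution with a convolution forward model."""
    est = np.full_like(measured, measured.mean(), dtype=float)
    psf_flip = psf[::-1]
    for _ in range(iters):
        forward = np.convolve(est, psf, mode="same") + 1e-12
        ratio = measured / forward          # data / model prediction
        est *= np.convolve(ratio, psf_flip, mode="same")  # multiplicative update
    return est

truth = np.zeros(64)
truth[30] = 100.0                                  # a point source
psf = np.array([0.25, 0.5, 0.25])                  # normalized blur kernel
measured = np.convolve(truth, psf, mode="same")    # noiseless measurement
recon = mlem(measured, psf)
```

    The multiplicative update preserves non-negativity and total flux, which is why this family suits low-count photon-counting imagery.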

  5. Hormonal Contraception and the Risk of HIV Acquisition: An Individual Participant Data Meta-analysis

    PubMed Central

    Morrison, Charles S.; Chen, Pai-Lien; Kwok, Cynthia; Baeten, Jared M.; Brown, Joelle; Crook, Angela M.; Van Damme, Lut; Delany-Moretlwe, Sinead; Francis, Suzanna C.; Friedland, Barbara A.; Hayes, Richard J.; Heffron, Renee; Kapiga, Saidi; Karim, Quarraisha Abdool; Karpoff, Stephanie; Kaul, Rupert; McClelland, R. Scott; McCormack, Sheena; McGrath, Nuala; Myer, Landon; Rees, Helen; van der Straten, Ariane; Watson-Jones, Deborah; van de Wijgert, Janneke H. H. M.; Stalter, Randy; Low, Nicola

    2015-01-01

    Background Observational studies of a putative association between hormonal contraception (HC) and HIV acquisition have produced conflicting results. We conducted an individual participant data (IPD) meta-analysis of studies from sub-Saharan Africa to compare the incidence of HIV infection in women using combined oral contraceptives (COCs) or the injectable progestins depot-medroxyprogesterone acetate (DMPA) or norethisterone enanthate (NET-EN) with women not using HC. Methods and Findings Eligible studies measured HC exposure and incident HIV infection prospectively using standardized measures, enrolled women aged 15–49 y, recorded ≥15 incident HIV infections, and measured prespecified covariates. Our primary analysis estimated the adjusted hazard ratio (aHR) using two-stage random effects meta-analysis, controlling for region, marital status, age, number of sex partners, and condom use. We included 18 studies, including 37,124 women (43,613 woman-years) and 1,830 incident HIV infections. Relative to no HC use, the aHR for HIV acquisition was 1.50 (95% CI 1.24–1.83) for DMPA use, 1.24 (95% CI 0.84–1.82) for NET-EN use, and 1.03 (95% CI 0.88–1.20) for COC use. Between-study heterogeneity was mild (I2 < 50%). DMPA use was associated with increased HIV acquisition compared with COC use (aHR 1.43, 95% CI 1.23–1.67) and NET-EN use (aHR 1.32, 95% CI 1.08–1.61). Effect estimates were attenuated for studies at lower risk of methodological bias (compared with no HC use, aHR for DMPA use 1.22, 95% CI 0.99–1.50; for NET-EN use 0.67, 95% CI 0.47–0.96; and for COC use 0.91, 95% CI 0.73–1.41) compared to those at higher risk of bias (pinteraction = 0.003). Neither age nor herpes simplex virus type 2 infection status modified the HC–HIV relationship. Conclusions This IPD meta-analysis found no evidence that COC or NET-EN use increases women’s risk of HIV but adds to the evidence that DMPA may increase HIV risk, underscoring the need for additional safe
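
    The second stage of a two-stage random-effects meta-analysis like the one described can be sketched with DerSimonian-Laird pooling of study-level log hazard ratios. The study inputs below are invented for illustration and are not the paper's data:

```python
import math

def dersimonian_laird(log_hrs, ses):
    """Pool study-level log hazard ratios with DerSimonian-Laird weights."""
    w = [1 / se**2 for se in ses]                      # fixed-effect weights
    fe = sum(wi * y for wi, y in zip(w, log_hrs)) / sum(w)
    q = sum(wi * (y - fe) ** 2 for wi, y in zip(w, log_hrs))  # heterogeneity Q
    df = len(log_hrs) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                      # between-study variance
    w_re = [1 / (se**2 + tau2) for se in ses]          # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_hrs)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se_pooled),
            math.exp(pooled + 1.96 * se_pooled))

# Three hypothetical studies reporting hazard ratios of 1.6, 1.4 and 1.5
log_hrs = [math.log(h) for h in (1.6, 1.4, 1.5)]
ses = [0.15, 0.20, 0.10]
hr, lo, hi = dersimonian_laird(log_hrs, ses)
```

    When the between-study variance estimate is zero, as with these tightly agreeing inputs, the random-effects result coincides with the fixed-effect one.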

  6. Human movement analysis with image processing in real time

    NASA Astrophysics Data System (ADS)

    Fauvet, Eric; Paindavoine, Michel; Cannard, F.

    1991-04-01

    In the field of the human sciences, many applications need to know the kinematic characteristics of human movements. Psychology associates these characteristics with control mechanisms; sport and biomechanics associate them with the performance of the athlete or patient, so that trainers or doctors who know the motion properties can correct the subject's gesture to obtain better performance. Roherton's studies show the evolution of children's motion. Several investigation methods are able to measure human movement, but most current studies are based on image processing. Often the systems work at the TV standard (50 frames per second) and therefore permit the study of only very slow gestures. Having a human operator analyze the digitized film sequence manually makes the operation very expensive, especially long, and imprecise. On these grounds, many human movement analysis systems have been implemented. They consist of: markers which are fixed to the anatomically interesting points on the subject in motion, and image compression, i.e., the coding of picture data. Generally the compression is limited to calculating the centroid coordinates of each marker. These systems differ from one another in image acquisition and marker detection.
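
    The "compression to centroid coordinates" step mentioned above can be sketched as follows (Python/NumPy; the flood-fill labelling and the synthetic frame are illustrative, not the original system's implementation):

```python
import numpy as np

def marker_centroids(frame, threshold):
    """Reduce each bright marker blob in a frame to its centroid (row, col)."""
    mask = frame > threshold
    visited = np.zeros_like(mask)
    centroids = []
    for r in range(mask.shape[0]):
        for c in range(mask.shape[1]):
            if mask[r, c] and not visited[r, c]:
                stack, pixels = [(r, c)], []       # flood-fill one blob
                visited[r, c] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                ys, xs = zip(*pixels)
                centroids.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return centroids

frame = np.zeros((32, 32))
frame[5:8, 5:8] = 255      # marker 1: 3x3 blob centred at (6, 6)
frame[20:22, 10:12] = 255  # marker 2: 2x2 blob centred at (20.5, 10.5)
cents = marker_centroids(frame, 128)
```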

  7. Metabolome analysis of Arabidopsis thaliana roots identifies a key metabolic pathway for iron acquisition.

    PubMed

    Schmidt, Holger; Günther, Carmen; Weber, Michael; Spörlein, Cornelia; Loscher, Sebastian; Böttcher, Christoph; Schobert, Rainer; Clemens, Stephan

    2014-01-01

    Fe deficiency compromises both human health and plant productivity. Thus, it is important to understand plant Fe acquisition strategies for the development of crop plants which are more Fe-efficient under Fe-limited conditions, such as alkaline soils, and have higher Fe density in their edible tissues. Root secretion of phenolic compounds has long been hypothesized to be a component of the reduction strategy of Fe acquisition in non-graminaceous plants. We therefore subjected roots of Arabidopsis thaliana plants grown under Fe-replete and Fe-deplete conditions to comprehensive metabolome analysis by gas chromatography-mass spectrometry and ultra-pressure liquid chromatography electrospray ionization quadrupole time-of-flight mass spectrometry. Scopoletin and other coumarins were found among the metabolites showing the strongest response to two different Fe-limited conditions, the cultivation in Fe-free medium and in medium with an alkaline pH. A coumarin biosynthesis mutant defective in ortho-hydroxylation of cinnamic acids was unable to grow on alkaline soil in the absence of Fe fertilization. Co-cultivation with wild-type plants partially rescued the Fe deficiency phenotype indicating a contribution of extracellular coumarins to Fe solubilization. Indeed, coumarins were detected in root exudates of wild-type plants. Direct infusion mass spectrometry as well as UV/vis spectroscopy indicated that coumarins are acting both as reductants of Fe(III) and as ligands of Fe(II). PMID:25058345

  8. Metabolome Analysis of Arabidopsis thaliana Roots Identifies a Key Metabolic Pathway for Iron Acquisition

    PubMed Central

    Schmidt, Holger; Günther, Carmen; Weber, Michael; Spörlein, Cornelia; Loscher, Sebastian; Böttcher, Christoph; Schobert, Rainer; Clemens, Stephan

    2014-01-01

    Fe deficiency compromises both human health and plant productivity. Thus, it is important to understand plant Fe acquisition strategies for the development of crop plants which are more Fe-efficient under Fe-limited conditions, such as alkaline soils, and have higher Fe density in their edible tissues. Root secretion of phenolic compounds has long been hypothesized to be a component of the reduction strategy of Fe acquisition in non-graminaceous plants. We therefore subjected roots of Arabidopsis thaliana plants grown under Fe-replete and Fe-deplete conditions to comprehensive metabolome analysis by gas chromatography-mass spectrometry and ultra-pressure liquid chromatography electrospray ionization quadrupole time-of-flight mass spectrometry. Scopoletin and other coumarins were found among the metabolites showing the strongest response to two different Fe-limited conditions, the cultivation in Fe-free medium and in medium with an alkaline pH. A coumarin biosynthesis mutant defective in ortho-hydroxylation of cinnamic acids was unable to grow on alkaline soil in the absence of Fe fertilization. Co-cultivation with wild-type plants partially rescued the Fe deficiency phenotype indicating a contribution of extracellular coumarins to Fe solubilization. Indeed, coumarins were detected in root exudates of wild-type plants. Direct infusion mass spectrometry as well as UV/vis spectroscopy indicated that coumarins are acting both as reductants of Fe(III) and as ligands of Fe(II). PMID:25058345

  9. Automated eXpert Spectral Image Analysis

    2003-11-25

    AXSIA performs automated factor analysis of hyperspectral images. In such images, a complete spectrum is collected at each point in a 1-, 2- or 3-dimensional spatial array. One of the remaining obstacles to adopting these techniques for routine use is the difficulty of reducing the vast quantities of raw spectral data to meaningful information. Multivariate factor analysis techniques have proven effective for extracting the essential information from high dimensional data sets into a limited number of factors that describe the spectral characteristics and spatial distributions of the pure components comprising the sample. AXSIA provides tools to estimate different types of factor models including Singular Value Decomposition (SVD), Principal Component Analysis (PCA), PCA with factor rotation, and Alternating Least Squares-based Multivariate Curve Resolution (MCR-ALS). As part of the analysis process, AXSIA can automatically estimate the number of pure components that comprise the data and can scale the data to account for Poisson noise. The data analysis methods are fundamentally based on eigenanalysis of the data crossproduct matrix coupled with orthogonal eigenvector rotation and constrained alternating least squares refinement. A novel method for automatically determining the number of significant components, which is based on the eigenvalues of the crossproduct matrix, has also been devised and implemented. The data can be compressed spectrally via PCA and spatially through wavelet transforms, and algorithms have been developed that perform factor analysis in the transform domain while retaining full spatial and spectral resolution in the final result. These latter innovations enable the analysis of larger-than-core-memory spectrum-images. AXSIA was designed to perform automated chemical phase analysis of spectrum-images acquired by a variety of chemical imaging techniques. Successful applications include Energy Dispersive X-ray Spectroscopy, X
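
    Two of the ideas described, eigenanalysis of the data crossproduct matrix and eigenvalue-based estimation of the number of significant components, can be sketched on synthetic data. The simple "100x the median eigenvalue" cutoff below is a stand-in for AXSIA's actual criterion, which is not specified in the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)
pixels, channels, true_rank = 500, 80, 3
spectra = rng.random((true_rank, channels))            # pure-component spectra
conc = rng.random((pixels, true_rank))                 # per-pixel abundances
data = conc @ spectra + 0.01 * rng.standard_normal((pixels, channels))

cross = data.T @ data                                  # data crossproduct matrix
evals = np.sort(np.linalg.eigvalsh(cross))[::-1]       # descending eigenvalues

# Most eigenvalues sit at the noise floor; count those far above its median
noise_floor = np.median(evals)
n_components = int((evals > 100 * noise_floor).sum())
```

    With three pure components mixed into the synthetic data, three eigenvalues stand clear of the noise floor and the estimate recovers the rank.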

  10. Applications Of Binary Image Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Tropf, H.; Enderle, E.; Kammerer, H. P.

    1983-10-01

    After discussing the conditions under which binary image analysis techniques can be used, three new applications of the fast binary image analysis system S.A.M. (Sensorsystem for Automation and Measurement) are reported: (1) The human view direction is measured at TV frame rate while the subject's head remains freely movable. (2) Industrial parts hanging on a moving conveyor are classified prior to spray painting by robot. (3) In automotive wheel assembly, the eccentricity of the wheel is minimized by turning the tyre relative to the rim in order to balance the eccentricity of the components.
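
    Application (3) has a compact geometric core: rim and tyre eccentricities add as vectors, so mounting the tyre with its high point opposite the rim's minimizes the combined runout. A sketch (Python; the eccentricity magnitudes and angles are invented):

```python
import cmath

def best_mount_angle(rim_ecc, rim_angle, tyre_ecc, tyre_angle):
    """Rotate the tyre so its eccentricity vector opposes the rim's.

    Returns the rotation to apply and the residual combined eccentricity,
    which is the magnitude of the vector sum of the two eccentricities.
    """
    rotation = (rim_angle + cmath.pi) - tyre_angle   # high point to opposite side
    combined = abs(rim_ecc * cmath.exp(1j * rim_angle)
                   + tyre_ecc * cmath.exp(1j * (tyre_angle + rotation)))
    return rotation, combined

# Rim runout 0.30 mm at 0.2 rad; tyre runout 0.25 mm at 1.5 rad
rot, residual = best_mount_angle(0.30, 0.2, 0.25, 1.5)
```

    With the vectors opposed, the residual is simply the difference of the two magnitudes, here 0.05 mm.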

  11. Microscopical image analysis: problems and approaches.

    PubMed

    Bradbury, S

    1979-03-01

    This article reviews some of the problems which have been encountered in the application of automatic image analysis to problems in biology. Some of the questions involved in the actual formulation of such a problem for this approach are considered, as well as the difficulties in the analysis due to a lack of specific contrast in the image and to its complexity. Various practical methods which have been successful in overcoming these problems are outlined, and the question of the desirability of an opto-manual or semi-automatic system as opposed to a fully automatic version is considered.

  12. Analysis of facial sebum distribution using a digital fluorescent imaging system.

    PubMed

    Han, Byungkwan; Jung, Byungjo; Nelson, J Stuart; Choi, Eung-Ho

    2007-01-01

    Current methods for analysis of sebum excretion have limitations, such as irreproducible results in repeated measurements due to the point measurement method, user-dependent artifacts due to contact measurement or qualitative evaluation of the image, and long measurement time. A UV-induced fluorescent digital imaging system was developed to acquire facial images so that the distribution of sebum excretion on the face could be analyzed. The imaging system consists of a constant UV-A light source, a digital color camera, and a head-positioning device. The system for acquisition of a fluorescent facial image and the image analysis method are described. The imaging modality provides uniform light distribution and presents a discernible color fluorescent image. Valuable parameters of sebum excretion are obtained after image analysis. The imaging system, which provides a noncontact method, proves to be a useful tool to evaluate the amount and pattern of sebum excretion. When compared to conventional "Wood's lamp" and "Sebutape" methods that provide similar parameters for sebum excretion, the described method is simpler and more reliable for evaluating the dynamics of sebum excretion in near real-time. PMID:17343481

  13. Regulatory Forum opinion piece: image analysis-based cell proliferation studies using electronic images: the CEPA industry working group's proposal.

    PubMed

    Dölemeyer, Arno; Mudry, Maria Cristina De Vera; Kohler, Manfred; Schorsch, Frederic

    2013-01-01

    Electronic images of histopathological changes are commonly and increasingly used in toxicologic pathology for morphological evaluation, illustration, peer review, or reporting. Toxicity studies in which cell proliferation is an end point are also pivotal in determining the carcinogenic potential of new molecules. In this article, we describe the approach of the European Cell Proliferation and Apoptosis working group (CEPA) for performing cell proliferation studies and morphometry using electronic images. The Society of Toxicologic Pathology (STP) has published a position statement on handling of pathology image data in compliance with 21 Code of Federal Regulations (CFR) Parts 58 and 11. CEPA supports the STP position and shares the issues involved in the use of electronic images in pathology. However, considering the experience and current know-how of members, particularly in conducting cell proliferation studies, CEPA would like to recommend in this article that electronic images acquired using state-of-the-art slide imaging techniques, including whole slide scanning, need not be considered as raw data, and therefore are not subject to 21 CFR Parts 58 and 11 regulations for archiving. In this article, we detail the reasons why we came to this proposal and we describe the measures that are taken to ensure Good Laboratory Practice-compliant execution of cell proliferation studies, including acquisition and validation of imaging and image analysis systems, development and validation of methods for their intended use, and formulation and use of standard operating procedures.

  14. Motion Analysis From Television Images

    NASA Astrophysics Data System (ADS)

    Silberberg, George G.; Keller, Patrick N.

    1982-02-01

    The Department of Defense ranges have relied on photographic instrumentation for gathering data on firings for all types of ordnance. A large inventory of cameras is available on the market that can be used for these tasks. A new set of optical instrumentation is beginning to appear which, in many cases, can directly replace photographic cameras for a great deal of the work being performed now. These are television cameras modified so they can stop motion, see in the dark, perform under hostile environments, and provide real-time information. This paper discusses techniques for modifying television cameras so they can be used for motion analysis.

  15. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms.

    PubMed

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M; Asma, Evren; Kinahan, Paul E; De Man, Bruno

    2015-10-01

    For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 s. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.043 75 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-squares-error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction with little or no measurable effect on the PET image.
For the four ultra-low dose levels
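
    The sinogram-interpolation idea evaluated in the study can be sketched per detector channel: views missing from a sparse acquisition are estimated by interpolating between the acquired view angles. The synthetic sinogram below is illustrative; only the 123-of-984 view count follows the protocols listed above:

```python
import numpy as np

full_views, detectors = 984, 128
angles_full = np.arange(full_views)
sparse_step = 8                                  # keep every 8th view: 123 views
angles_sparse = angles_full[::sparse_step]

rng = np.random.default_rng(2)
# Smooth synthetic sinogram: one angular cycle, random per-channel amplitude
sinogram_full = (np.sin(2 * np.pi * angles_full[:, None] / full_views)
                 * rng.random(detectors)[None, :])
sinogram_sparse = sinogram_full[::sparse_step]

# Linearly interpolate each detector channel across view angle
sinogram_interp = np.empty_like(sinogram_full)
for d in range(detectors):
    sinogram_interp[:, d] = np.interp(angles_full, angles_sparse,
                                      sinogram_sparse[:, d])
rmse = np.sqrt(np.mean((sinogram_interp - sinogram_full) ** 2))
```

    For smooth projection data the angular interpolation error is small; real sinograms have sharper angular structure, which is why the paper compares interpolation against direct sparse-view reconstruction.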

  17. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M.; Asma, Evren; Kinahan, Paul E.; De Man, Bruno

    2015-09-01

    For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 s. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.043 75 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-squares-error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction with little or no measurable effect on the PET image. For the four ultra-low dose

  18. Monitoring of historical frescoes by timed infrared imaging analysis

    NASA Astrophysics Data System (ADS)

    Cadelano, G.; Bison, P.; Bortolin, A.; Ferrarini, G.; Peron, F.; Girotto, M.; Volinia, M.

    2015-03-01

    The subflorescence and efflorescence phenomena are widely acknowledged as major causes of permanent damage to fresco wall paintings. They are related to the occurrence of cycles of dry/wet conditions inside the walls. Therefore, it is essential to identify the presence of water on the decorated surfaces and inside the walls. Nondestructive testing in industrial applications has confirmed that active infrared thermography with continuous timed image acquisition can improve the outcomes of thermal analysis aimed at moisture identification. In spite of that, in cultural heritage investigations these techniques have not yet been used extensively on a regular basis. This paper illustrates an application of these principles to evaluate the decay of fresco mural paintings in a medieval chapel located in the north-west of Italy. One important feature of this study is the use of a robotic system called aIRview that automatically acquires and processes thermal images. Multiple accurate thermal views of the inside walls of the building were produced in a survey that lasted several days. Signal processing algorithms based on Fast Fourier Transform analysis were applied to the acquired data in order to formulate trustworthy hypotheses about the deterioration mechanisms.
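
    The FFT-based processing step can be sketched per pixel: the amplitude and phase of the daily temperature cycle are read off from the corresponding frequency bin, and anomalies in either can point at damp regions, which respond to the daily cycle with a damped and lagged swing. The frame rate, damping, and lag values below are invented:

```python
import numpy as np

frames_per_day, days = 24, 4                      # hourly frames, 4-day survey
t = np.arange(frames_per_day * days)
# Two pixels: a dry one, and a "moist" one with a damped, lagged daily swing
dry = 20 + 5.0 * np.cos(2 * np.pi * t / frames_per_day)
moist = 20 + 2.0 * np.cos(2 * np.pi * t / frames_per_day - 0.8)

def daily_component(series, period=frames_per_day):
    """Amplitude and phase of the daily cycle from the matching FFT bin."""
    spec = np.fft.rfft(series - series.mean())
    k = len(series) // period                     # bin of the daily frequency
    coeff = spec[k] * 2 / len(series)
    return abs(coeff), np.angle(coeff)

amp_dry, ph_dry = daily_component(dry)
amp_moist, ph_moist = daily_component(moist)
```

    Mapping these two quantities over all pixels yields amplitude and phase images in which moisture shows up as reduced amplitude and shifted phase.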

  19. Analysis of an interferometric Stokes imaging polarimeter

    NASA Astrophysics Data System (ADS)

    Murali, Sukumar

    Estimation of Stokes vector components from an interferometric fringe encoded image is a novel way of measuring the State Of Polarization (SOP) distribution across a scene. Imaging polarimeters employing interferometric techniques encode SOP information across a scene in a single image in the form of intensity fringes. The lack of moving parts and the use of a single image eliminate the problems of conventional polarimetry - vibration, spurious signal generation due to artifacts, beam wander, and the need for registration routines. However, interferometric polarimeters are limited by narrow bandpass and short exposure time operations, which decrease the Signal to Noise Ratio (SNR), defined as the ratio of the mean photon count to the standard deviation in the detected image. A simulation environment for designing an Interferometric Stokes Imaging polarimeter (ISIP) and a detector with noise effects is created and presented. Users of this environment are capable of imaging an object with defined SOP through an ISIP onto a detector, producing a digitized image output. The simulation also includes bandpass imaging capabilities, control of detector noise, and object brightness levels. The Stokes images are estimated from a fringe encoded image of a scene by means of a reconstructor algorithm. A spatial domain methodology involving the idea of a unit cell and slide approach is applied to the reconstructor model developed using Mueller calculus. The validation of this methodology and its effectiveness compared to a discrete approach are demonstrated with suitable examples. The pixel size required to sample the fringes and the minimum unit cell size required for reconstruction are investigated using condition numbers. The importance of the PSF of the fore-optics (telescope) used in imaging the object is investigated and analyzed using a point source imaging example, and a Nyquist criterion is presented. Reconstruction of fringe modulated images in the presence of noise involves choosing an
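
    The SNR definition quoted above (mean photon count over the standard deviation of the detected image) is direct to compute; this helper only illustrates that definition, not the thesis' simulation environment:

```python
import numpy as np

def detector_snr(image):
    """SNR as defined in the abstract: ratio of the mean photon
    count to the standard deviation in the detected image.
    For Poisson-limited detection this tends toward sqrt(mean)."""
    image = np.asarray(image, dtype=float)
    return float(image.mean() / image.std())
```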

  20. Acquisition of X-ray images by using a CNT cold emitter

    NASA Astrophysics Data System (ADS)

    Choi, H. Y.; Chang, W. S.; Kim, H. S.; Park, Y. H.; Kim, J. U.

    2006-08-01

    Carbon nanotubes (i.e., CNTs) are tubular carbon molecules with properties that make them potentially useful in extremely small scale electronic and mechanical applications. Because of this, CNTs are widely used in many fields such as field emission displays (FED), nanoscale sensors, vacuum electronic devices, and so on. In this study, CNTs were applied as an X-ray source. CNTs were grown on a Si-wafer substrate by the thermal CVD method, and the length of the grown CNTs was about 30-50 μm. The electrical properties of the grown CNT emitter were tested in an X-ray tube, which has a triode structure (i.e., a cathode as a CNT emitter, a grid, and an anode). Electron beam focusing characteristics as well as correlations between emission currents and grid mesh structures (or grid mesh voltage) were also studied by using OPERA 3D simulation. Detailed descriptions of the manufactured X-ray triode are reported and some preliminary X-ray images are presented.

  1. Visualizing Proteins and Macromolecular Complexes by Negative Stain EM: from Grid Preparation to Image Acquisition

    PubMed Central

    Booth, David S.; Avila-Sakar, Agustin; Cheng, Yifan

    2011-01-01

    Single particle electron microscopy (EM), of either negatively stained or frozen hydrated biological samples, has become a versatile tool in structural biology 1. In recent years, this method has achieved great success in studying structures of proteins and macromolecular complexes 2, 3. Compared with electron cryomicroscopy (cryoEM), in which frozen hydrated protein samples are embedded in a thin layer of vitreous ice 4, negative staining is a simpler sample preparation method in which protein samples are embedded in a thin layer of dried heavy metal salt to increase specimen contrast 5. The enhanced contrast of negative stain EM allows examination of relatively small biological samples. In addition to determining the three-dimensional (3D) structure of purified proteins or protein complexes 6, this method can be used for much broader purposes. For example, negative stain EM can easily be used to visualize purified protein samples, obtaining information such as the homogeneity/heterogeneity of the sample, the formation of protein complexes or large assemblies, or simply to evaluate the quality of a protein preparation. In this video article, we present a complete protocol for using an EM to observe negatively stained protein samples, from preparing carbon coated grids for negative stain EM to acquiring images of negatively stained samples in an electron microscope operated at a 120 kV accelerating voltage. These protocols have been used in our laboratory routinely and can be easily followed by novice users. PMID:22215030

  2. Front-end electronics and data acquisition system for imaging atmospheric Cherenkov telescopes

    NASA Astrophysics Data System (ADS)

    Chen, Y. T.; de La Taille, C.; Suomijärvi, T.; Cao, Z.; Deligny, O.; Dulucq, F.; Ge, M. M.; Lhenry-Yvon, I.; Martin-Chassard, G.; Nguyen Trung, T.; Wanlin, E.; Xiao, G.; Yin, L. Q.; Yun Ky, B.; Zhang, L.; Zhang, H. Y.; Zhang, S. S.; Zhu, Z.

    2015-09-01

    In this paper, front-end electronics based on an application-specific integrated circuit (ASIC) are presented for future imaging atmospheric Cherenkov telescopes (IACTs). To achieve this purpose, a 16-channel ASIC chip, PARISROC 2 (Photomultiplier ARray Integrated in SiGe ReadOut Chip), is used for analog signal processing and digitization. The digitized results are sent to the server by a user-defined User Datagram Protocol/Internet Protocol (UDP/IP) hardcore engine through Ethernet, managed by an FPGA. A prototype fulfilling the requirements of the Wide Field of View Cherenkov Telescope Array (WFCTA) of the Large High Altitude Air Shower Observatory (LHAASO) project has been designed, fabricated and tested to prove the concept of the design. A detailed description of the development, together with the results of the test measurements, is presented. By using a new input structure and a new configuration of the ASIC, the dynamic range of the circuit is extended. A high-precision time-calibration algorithm is also proposed, verified and optimized for mass production. The test results suggest that the proposed electronics design fulfills the general specification of future IACTs.

  3. Medical image analysis with artificial neural networks.

    PubMed

    Jiang, J; Trundle, P; Ren, J

    2010-12-01

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and of providing a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging.

  4. Image distortion analysis using polynomial series expansion.

    PubMed

    Baggenstoss, Paul M

    2004-11-01

    In this paper, we derive a technique for analysis of local distortions which affect data in real-world applications. In the paper, we focus on image data, specifically handwritten characters. Given a reference image and a distorted copy of it, the method is able to efficiently determine the rotations, translations, scaling, and any other distortions that have been applied. Because the method is robust, it is also able to estimate distortions for two unrelated images, thus determining the distortions that would be required to cause the two images to resemble each other. The approach is based on a polynomial series expansion using matrix powers of linear transformation matrices. The technique has applications in pattern recognition in the presence of distortions. PMID:15521492
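
    The paper's method expands distortions in a polynomial series of matrix powers of linear transformation matrices; as a simpler illustration of recovering a transformation by least squares, here is a sketch that fits a plain affine map between corresponding points (the function and approach are illustrative, not the paper's algorithm):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares fit of an affine map dst ~ A @ src + b.
    src, dst: (n_points, 2) arrays of corresponding coordinates,
    e.g. sampled from a reference image and its distorted copy."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    design = np.hstack([src, np.ones((len(src), 1))])  # columns [x, y, 1]
    params, *_ = np.linalg.lstsq(design, dst, rcond=None)
    A, b = params[:2].T, params[2]  # rotation/scale/shear and translation
    return A, b
```

The recovered A encodes rotation, scaling, and shear; b the translation. The paper's series expansion generalizes this to richer local distortions.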

  5. Fourier analysis: from cloaking to imaging

    NASA Astrophysics Data System (ADS)

    Wu, Kedi; Cheng, Qiluan; Wang, Guo Ping

    2016-04-01

    Regarding invisibility cloaks as an optical imaging system, we present a Fourier approach to analytically unify both Pendry cloaks and complementary media-based invisibility cloaks into one kind of cloak. By synthesizing different transfer functions, we can construct different devices to realize a series of interesting functions such as hiding objects (events), creating illusions, and performing perfect imaging. In this article, we give a brief review of recent works applying the Fourier approach to the analysis of invisibility cloaks and optical imaging through scattering layers. We show that, to construct devices to conceal an object, no constitutive materials with extreme properties are required, making most, if not all, of the above functions realizable by using naturally occurring materials. As instances, we experimentally verify a method of directionally hiding distant objects and creating illusions by using all-dielectric materials, and further demonstrate a non-invasive method of imaging objects completely hidden by scattering layers.
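
    Viewing a cloak as an imaging system amounts to assigning it a transfer function acting on the field's spatial spectrum; a minimal numerical sketch of that idea, with an identity transfer function standing in for a perfect cloak (the function is an illustration, not taken from the article):

```python
import numpy as np

def apply_transfer_function(field, transfer):
    """Treat a device as a linear system: multiply the field's
    2D spatial spectrum by a transfer function H and invert.
    H = 1 everywhere models a perfect cloak that restores the
    incident field; other H synthesize illusions or imaging."""
    spectrum = np.fft.fft2(field)
    return np.fft.ifft2(spectrum * transfer)
```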

  6. Principal Components Analysis In Medical Imaging

    NASA Astrophysics Data System (ADS)

    Weaver, J. B.; Huddleston, A. L.

    1986-06-01

    Principal components analysis, PCA, is basically a data reduction technique. PCA has been used in several problems in diagnostic radiology: processing radioisotope brain scans (Ref. 1), automatic alignment of radionuclide images (Ref. 2), processing MRI images (Ref. 3,4), analyzing first-pass cardiac studies (Ref. 5), correcting for attenuation in bone mineral measurements (Ref. 6), and dual-energy x-ray imaging (Ref. 6,7). This paper will progress as follows: a brief introduction to the mathematics of PCA will be followed by two brief examples of how PCA has been used in the literature. Finally, my own experience with PCA in dual-energy x-ray imaging will be given.
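
    As a refresher on the mathematics the paper introduces, PCA as data reduction can be sketched via the SVD of the mean-centered data matrix (a generic sketch, not the paper's implementation):

```python
import numpy as np

def pca(data, n_components):
    """Project mean-centered observations onto the leading
    right-singular vectors (the principal axes), keeping only
    n_components dimensions. Returns the projected scores,
    the axes, and the fraction of variance each axis explains."""
    centered = data - data.mean(axis=0)
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    scores = centered @ vt[:n_components].T
    explained = s[:n_components] ** 2 / np.sum(s ** 2)
    return scores, vt[:n_components], explained
```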

  7. The rate of acquisition of formal operational schemata in adolescence: A secondary analysis

    NASA Astrophysics Data System (ADS)

    Eckstein, Shulamith G.; Shemesh, Michal

    A theoretical model of cognitive development is applied to the study of the acquisition of formal operational schemata by adolescents. The model predicts that the proportion of adolescents who have not yet acquired the ability to perform a specific Piagetian-like task is an exponentially decreasing function of age. The model has been used to analyze the data of two large-scale studies performed in the United States and in Israel. The functional dependence upon age was found to be the same in both countries for tasks which are used to assess the following formal operations: proportional reasoning, probabilistic reasoning, correlations, and combinatorial analysis. A different functional dependence was found for tasks assessing conservation, control of variables, and propositional logic. These results support the unity hypothesis of cognitive development, that is, the hypothesis that the various schemata of formal thought appear simultaneously.
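
    An exponentially decreasing proportion can be fitted by ordinary log-linear least squares; this sketch assumes the simple form p(age) = p0·exp(-k·age), which is one reading of the abstract rather than the authors' exact parameterization:

```python
import math

def fit_exponential_decay(ages, proportions):
    """Log-linear least-squares fit of p(age) = p0 * exp(-k * age):
    regress ln(p) on age, then k is minus the slope and p0 the
    exponentiated intercept."""
    ys = [math.log(p) for p in proportions]
    n = len(ages)
    mx = sum(ages) / n
    my = sum(ys) / n
    k = -sum((x - mx) * (y - my) for x, y in zip(ages, ys)) \
        / sum((x - mx) ** 2 for x in ages)
    p0 = math.exp(my + k * mx)
    return p0, k
```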

  8. The use of an optical data acquisition system for bladed disk vibration analysis

    NASA Technical Reports Server (NTRS)

    Lawrence, C.; Meyn, E. H.

    1984-01-01

    A new concept in instrumentation was developed by engineers at NASA Lewis Research Center to collect vibration data from multi-bladed rotors. This new concept, known as the optical data acquisition system, uses optical transducers to measure blade tip deflections by reflecting light beams off the tips of the blades as they pass in front of the optical transducer. By using an array of transducers around the perimeter of the rotor, detailed vibration signals can be obtained. In this study, resonant frequencies and mode shapes were determined for a 56 bladed rotor using the optical system. Frequency data from the optical system was also compared to data obtained from strain gauge measurements and finite element analysis and was found to be in good agreement.

  9. The use of an optical data acquisition system for bladed disk vibration analysis

    NASA Technical Reports Server (NTRS)

    Lawrence, C.; Meyn, E. H.

    1985-01-01

    A new concept in instrumentation was developed by engineers at NASA Lewis Research Center to collect vibration data from multi-bladed rotors. This new concept, known as the optical data acquisition system, uses optical transducers to measure bladed tip deflections by reflection of light beams off the tips of the blades as they pass in front of the optical transducer. By using an array of transducers around the perimeter of the rotor, detailed vibration signals can be obtained. In this study, resonant frequencies and mode shapes were determined for a 56 bladed rotor using the optical system. Frequency data from the optical system was also compared to data obtained from strain gauge measurements and finite element analysis and was found to be in good agreement.

  10. Dual CARS and SHG image acquisition scheme that combines single central fiber and multimode fiber bundle to collect and differentiate backward and forward generated photons.

    PubMed

    Weng, Sheng; Chen, Xu; Xu, Xiaoyun; Wong, Kelvin K; Wong, Stephen T C

    2016-06-01

    In coherent anti-Stokes Raman scattering (CARS) and second harmonic generation (SHG) imaging, backward and forward generated photons exhibit different image patterns and thus capture salient intrinsic information of tissues from different perspectives. However, they are often mixed in collection using traditional image acquisition methods and thus are hard to interpret. We developed a multimodal scheme using a single central fiber and multimode fiber bundle to simultaneously collect and differentiate images formed by these two types of photons and evaluated the scheme in an endomicroscopy prototype. The ratio of these photons collected was calculated for the characterization of tissue regions with strong or weak epi-photon generation while different image patterns of these photons at different tissue depths were revealed. This scheme provides a new approach to extract and integrate information captured by backward and forward generated photons in dual CARS/SHG imaging synergistically for biomedical applications.

  11. Dual CARS and SHG image acquisition scheme that combines single central fiber and multimode fiber bundle to collect and differentiate backward and forward generated photons

    PubMed Central

    Weng, Sheng; Chen, Xu; Xu, Xiaoyun; Wong, Kelvin K.; Wong, Stephen T. C.

    2016-01-01

    In coherent anti-Stokes Raman scattering (CARS) and second harmonic generation (SHG) imaging, backward and forward generated photons exhibit different image patterns and thus capture salient intrinsic information of tissues from different perspectives. However, they are often mixed in collection using traditional image acquisition methods and thus are hard to interpret. We developed a multimodal scheme using a single central fiber and multimode fiber bundle to simultaneously collect and differentiate images formed by these two types of photons and evaluated the scheme in an endomicroscopy prototype. The ratio of these photons collected was calculated for the characterization of tissue regions with strong or weak epi-photon generation while different image patterns of these photons at different tissue depths were revealed. This scheme provides a new approach to extract and integrate information captured by backward and forward generated photons in dual CARS/SHG imaging synergistically for biomedical applications. PMID:27375938

  13. Vibration Pattern Imager (VPI): A control and data acquisition system for scanning laser vibrometers

    NASA Astrophysics Data System (ADS)

    Rizzi, Stephen A.; Brown, Donald E.; Shaffer, Thomas A.

    1993-01-01

    The Vibration Pattern Imager (VPI) system was designed to control and acquire data from scanning laser vibrometer sensors. The PC computer based system uses a digital signal processing (DSP) board and an analog I/O board to control the sensor and to process the data. The VPI system was originally developed for use with the Ometron VPI Sensor, but can be readily adapted to any commercially available sensor which provides an analog output signal and requires analog inputs for control of mirror positioning. The sensor itself is not part of the VPI system. A graphical interface program, which runs on a PC under the MS-DOS operating system, functions in an interactive mode and communicates with the DSP and I/O boards in a user-friendly fashion through the aid of pop-up menus. Two types of data may be acquired with the VPI system: single point or 'full field.' In the single point mode, time series data is sampled by the A/D converter on the I/O board (at a user-defined sampling rate for a selectable number of samples) and is stored by the PC. The position of the measuring point (adjusted by mirrors in the sensor) is controlled via a mouse input. The mouse input is translated to output voltages by the D/A converter on the I/O board to control the mirror servos. In the 'full field' mode, the measurement point is moved over a user-selectable rectangular area. The time series data is sampled by the A/D converter on the I/O board (at a user-defined sampling rate for a selectable number of samples) and converted to a root-mean-square (rms) value by the DSP board. The rms 'full field' velocity distribution is then uploaded for display and storage on the PC.
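
    The conversion performed by the DSP board in 'full field' mode is the standard root-mean-square of the sampled time series at each scan point; a one-line sketch:

```python
import math

def rms(samples):
    """Root-mean-square value of one scan point's time series,
    as computed per measurement point in 'full field' mode."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))
```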

  14. Measuring toothbrush interproximal penetration using image analysis

    NASA Astrophysics Data System (ADS)

    Hayworth, Mark S.; Lyons, Elizabeth K.

    1994-09-01

    An image analysis method of measuring the effectiveness of a toothbrush in reaching the interproximal spaces of teeth is described. Artificial teeth are coated with a stain that approximates real plaque and then brushed with a toothbrush on a brushing machine. The teeth are then removed and turned sideways so that the interproximal surfaces can be imaged. The areas of stain that have been removed within masked regions that define the interproximal regions are measured and reported. These areas correspond to the interproximal areas of the tooth reached by the toothbrush bristles. The image analysis method produces more precise results (10-fold decrease in standard deviation) in a fraction (22%) of the time as compared to our prior visual grading method.
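
    The masked-region measurement can be sketched as counting stained pixels cleared by brushing inside each interproximal mask; the binary-map representation and function name are assumptions of this sketch, not the authors' software:

```python
def stain_removed_fraction(before, after, mask):
    """Fraction of initially stained pixels, within a masked
    interproximal region, that were cleared by brushing.
    before/after: binary stain maps (1 = stained); mask: 1 inside
    the interproximal region of interest."""
    stained = cleared = 0
    for b_row, a_row, m_row in zip(before, after, mask):
        for b, a, m in zip(b_row, a_row, m_row):
            if m and b:
                stained += 1
                if not a:
                    cleared += 1
    return cleared / stained if stained else 0.0
```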

  15. Unsupervised hyperspectral image analysis using independent component analysis (ICA)

    SciTech Connect

    S. S. Chiang; I. W. Ginsberg

    2000-06-30

    In this paper, an ICA-based approach is proposed for hyperspectral image analysis. It can be viewed as a random version of the commonly used linear spectral mixture analysis, in which the abundance fractions in a linear mixture model are considered to be unknown independent signal sources. It does not require the full rank of the separating matrix or orthogonality as most ICA methods do. More importantly, the learning algorithm is designed based on the independency of the material abundance vector rather than the independency of the separating matrix generally used to constrain the standard ICA. As a result, the designed learning algorithm is able to converge to non-orthogonal independent components. This is particularly useful in hyperspectral image analysis since many materials extracted from a hyperspectral image may have similar spectral signatures and may not be orthogonal. The AVIRIS experiments have demonstrated that the proposed ICA provides an effective unsupervised technique for hyperspectral image classification.
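
    The linear spectral mixture model that the ICA approach randomizes writes each pixel as a mix of endmember signatures; here is a least-squares sketch of that baseline model only (not the paper's ICA learning algorithm, which treats the abundances as unknown independent sources):

```python
import numpy as np

def unmix(pixel, endmembers):
    """Linear spectral mixture model: pixel ~ M @ a, where the
    columns of M are endmember spectra. Solve for the abundance
    vector a by ordinary least squares."""
    M = np.asarray(endmembers, dtype=float)
    a, *_ = np.linalg.lstsq(M, np.asarray(pixel, dtype=float), rcond=None)
    return a
```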

  16. PIXE analysis and imaging of papyrus documents

    NASA Astrophysics Data System (ADS)

    Lövestam, N. E. Göran; Swietlicki, Erik

    1990-01-01

    The analysis of antique papyrus documents using an external milliprobe is described. Missing characters of text in the documents were made visible by means of PIXE analysis and X-ray imaging of the areas studied. The contrast between the papyrus and the ink was further increased when the information contained in all the elements was taken into account simultaneously using a multivariate technique (partial least-squares regression).

  17. Financial analysis of technology acquisition using fractionated lasers as a model.

    PubMed

    Jutkowitz, Eric; Carniol, Paul J; Carniol, Alan R

    2010-08-01

    Ablative fractional lasers are among the most advanced and costly devices on the market. Yet, there is a dearth of published literature on the cost and potential return on investment (ROI) of such devices. The objective of this study was to provide a methodological framework for physicians to evaluate ROI. To facilitate this analysis, we conducted a case study on the potential ROI of eight ablative fractional lasers. In the base case analysis, a 5-year lease and a 3-year lease were assumed as the purchase option with a $0 down payment and 3-month payment deferral. In addition to lease payments, service contracts, labor cost, and disposables were included in the total cost estimate. Revenue was estimated as price per procedure multiplied by total number of procedures in a year. Sensitivity analyses were performed to account for variability in model assumptions. Based on the assumptions of the model, all lasers had higher ROI under the 5-year lease agreement compared with that for the 3-year lease agreement. When comparing results between lasers, those with lower operating and purchase cost delivered a higher ROI. Sensitivity analysis indicates the model is most sensitive to purchase method. If physicians opt to purchase the device rather than lease, they can significantly enhance ROI. ROI analysis is an important tool for physicians who are considering making an expensive device acquisition. However, physicians should not rely solely on ROI and must also consider the clinical benefits of a laser.
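
    The cost and revenue structure described reduces to a simple first-year ROI formula; the parameter names below are illustrative, and a real analysis would also model payment deferrals, taxes, and multi-year discounting:

```python
def annual_roi(price_per_procedure, procedures_per_year,
               lease_payment_per_year, service_contract,
               labor_cost, disposables_per_procedure):
    """First-year ROI following the abstract's structure:
    revenue = price x volume; total cost = lease payments
    + service contract + labor + disposables."""
    revenue = price_per_procedure * procedures_per_year
    cost = (lease_payment_per_year + service_contract + labor_cost
            + disposables_per_procedure * procedures_per_year)
    return (revenue - cost) / cost
```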

  18. Visualization of Parameter Space for Image Analysis

    PubMed Central

    Pretorius, A. Johannes; Bray, Mark-Anthony P.; Carpenter, Anne E.; Ruddle, Roy A.

    2013-01-01

    Image analysis algorithms are often highly parameterized and much human input is needed to optimize parameter settings. This incurs a time cost of up to several days. We analyze and characterize the conventional parameter optimization process for image analysis and formulate user requirements. With this as input, we propose a change in paradigm by optimizing parameters based on parameter sampling and interactive visual exploration. To save time and reduce memory load, users are only involved in the first step - initialization of sampling - and the last step - visual analysis of output. This helps users to more thoroughly explore the parameter space and produce higher quality results. We describe a custom sampling plug-in we developed for CellProfiler - a popular biomedical image analysis framework. Our main focus is the development of an interactive visualization technique that enables users to analyze the relationships between sampled input parameters and corresponding output. We implemented this in a prototype called Paramorama. It provides users with a visual overview of parameters and their sampled values. User-defined areas of interest are presented in a structured way that includes image-based output and a novel layout algorithm. To find optimal parameter settings, users can tag high- and low-quality results to refine their search. We include two case studies to illustrate the utility of this approach. PMID:22034361
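
    The initialization step (parameter sampling) can be sketched as an exhaustive grid over user-chosen value lists; this is a generic illustration, not CellProfiler's plug-in interface:

```python
import itertools

def sample_parameter_space(param_ranges):
    """Exhaustive grid sampling of an image-analysis parameter
    space: yield one settings dict per combination of the
    user-chosen candidate values."""
    names = sorted(param_ranges)
    for values in itertools.product(*(param_ranges[n] for n in names)):
        yield dict(zip(names, values))
```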

  19. Using Image Analysis to Build Reading Comprehension

    ERIC Educational Resources Information Center

    Brown, Sarah Drake; Swope, John

    2010-01-01

    Content area reading remains a primary concern of history educators. In order to better prepare students for encounters with text, the authors propose the use of two image analysis strategies tied with a historical theme to heighten student interest in historical content and provide a basis for improved reading comprehension.

  20. Scale Free Reduced Rank Image Analysis.

    ERIC Educational Resources Information Center

    Horst, Paul

    In the traditional Guttman-Harris type image analysis, a transformation is applied to the data matrix such that each column of the transformed data matrix is the best least squares estimate of the corresponding column of the data matrix from the remaining columns. The model is scale free. However, it assumes (1) that the correlation matrix is…

  1. COMPUTER ANALYSIS OF PLANAR GAMMA CAMERA IMAGES

    EPA Science Inventory



    T Martonen1 and J Schroeter2

    1Experimental Toxicology Division, National Health and Environmental Effects Research Laboratory, U.S. EPA, Research Triangle Park, NC 27711 USA and 2Curriculum in Toxicology, Unive...

  2. Age of acquisition and imageability norms for base and morphologically complex words in English and in Spanish.

    PubMed

    Davies, Shakiela K; Izura, Cristina; Socas, Rosy; Dominguez, Alberto

    2016-03-01

    The extent to which processing words involves breaking them down into smaller units or morphemes or is the result of an interactive activation of other units, such as meanings, letters, and sounds (e.g., dis-agree-ment vs. disagreement), is currently under debate. Disentangling morphology from phonology and semantics is often a methodological challenge, because orthogonal manipulations are difficult to achieve (e.g., semantically unrelated words are often phonologically related: casual-casualty and, vice versa, sign-signal). The present norms provide a morphological classification of 3,263 suffixed derived words from two widely spoken languages: English (2,204 words) and Spanish (1,059 words). Morphologically complex words were sorted into four categories according to the nature of their relationship with the base word: phonologically transparent (friend-friendly), phonologically opaque (child-children), semantically transparent (habit-habitual), and semantically opaque (event-eventual). In addition, ratings were gathered for age of acquisition, imageability, and semantic distance (i.e., the extent to which the meaning of the complex derived form could be drawn from the meaning of its base constituents). The norms were completed by adding values for word frequency; word length in number of phonemes, letters, and syllables; lexical similarity, as measured by the number of neighbors; and morphological family size. A series of comparative analyses from the collated ratings for the base and derived words were also carried out. The results are discussed in relation to recent findings.

  3. Positron Emission Mammography with Multiple Angle Acquisition

    SciTech Connect

    Mark F. Smith; Stan Majewski; Raymond R. Raylman

    2002-11-01

    Positron emission mammography (PEM) of F-18 fluorodeoxyglucose (FDG) uptake in breast tumors with dedicated detectors typically has been accomplished with two planar detectors in a fixed position with the breast under compression. The potential use of PEM imaging at two detector positions to guide stereotactic breast biopsy has motivated us to use PEM coincidence data acquired at two or more detector positions together in a single image reconstruction. Multiple angle PEM acquisition and iterative image reconstruction were investigated using point source and compressed breast phantom acquisitions with 5, 9, 12 and 15 mm diameter spheres and a simulated tumor:background activity concentration ratio of 6:1. Image reconstruction was performed with an iterative MLEM algorithm that used coincidence events between any two detector pixels on opposed detector heads at each detector position. This present study compared two acquisition protocols: 2 angle acquisition with detector angular positions of -15 and +15 degrees and 11 angle acquisition with detector positions spaced at 3 degree increments over the range -15 to +15 degrees. Three-dimensional image resolution was assessed for the point source acquisitions, and contrast and signal-to-noise metrics were evaluated for the compressed breast phantom with different simulated tumor sizes. Radial and tangential resolutions were similar for the two protocols, while normal resolution was better for the 2 angle acquisition. Analysis is complicated by the asymmetric point spread functions. Signal-to-noise vs. contrast tradeoffs were better for 11 angle acquisition for the smallest visible 9 mm sphere, while tradeoff results were mixed for the larger and more easily visible 12 mm and 15 mm diameter spheres. Additional study is needed to better understand the performance of limited angle tomography for PEM. PEM tomography experiments with complete angular sampling are planned.
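
    The iterative MLEM algorithm referenced above has a compact generic form, x <- x * A^T(y / Ax) / A^T 1. This toy sketch uses a small dense system matrix rather than the detector-pixel coincidence geometry of the paper:

```python
import numpy as np

def mlem(system_matrix, counts, n_iters=20):
    """Basic MLEM reconstruction. system_matrix A maps image
    voxels to detector bins; counts y are the measured data.
    Each update multiplies the current image by the back-projected
    ratio of measured to forward-projected counts, normalized by
    the detector sensitivity A^T 1."""
    A = np.asarray(system_matrix, dtype=float)
    y = np.asarray(counts, dtype=float)
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])
    for _ in range(n_iters):
        proj = A @ x
        ratio = np.divide(y, proj, out=np.zeros_like(y), where=proj > 0)
        x = x * (A.T @ ratio) / sens
    return x
```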

  5. Good relationships between computational image analysis and radiological physics

    SciTech Connect

    Arimura, Hidetaka; Kamezawa, Hidemi; Jin, Ze; Nakamoto, Takahiro; Soufi, Mazen

    2015-09-30

    Close relationships between computational image analysis and radiological physics have been built to increase the accuracy of medical diagnostic imaging and radiation therapy. Computational image analysis is grounded in applied mathematics, physics, and engineering. This review paper introduces how computational image analysis is useful in radiation therapy with respect to radiological physics.

  6. Virtual finger boosts three-dimensional imaging and microsurgery as well as terabyte volume image visualization and analysis

    PubMed Central

    Peng, Hanchuan; Tang, Jianyong; Xiao, Hang; Bria, Alessandro; Zhou, Jianlong; Butler, Victoria; Zhou, Zhi; Gonzalez-Bellido, Paloma T.; Oh, Seung W.; Chen, Jichao; Mitra, Ananya; Tsien, Richard W.; Zeng, Hongkui; Ascoli, Giorgio A.; Iannello, Giulio; Hawrylycz, Michael; Myers, Eugene; Long, Fuhui

    2014-01-01

    Three-dimensional (3D) bioimaging, visualization and data analysis are in strong need of powerful 3D exploration techniques. We develop virtual finger (VF) to generate 3D curves, points and regions-of-interest in the 3D space of a volumetric image with a single finger operation, such as a computer mouse stroke, or click or zoom from the 2D-projection plane of an image as visualized with a computer. VF provides efficient methods for acquisition, visualization and analysis of 3D images for roundworm, fruitfly, dragonfly, mouse, rat and human. Specifically, VF enables instant 3D optical zoom-in imaging, 3D free-form optical microsurgery, and 3D visualization and annotation of terabytes of whole-brain image volumes. VF also leads to orders of magnitude better efficiency of automated 3D reconstruction of neurons and similar biostructures over our previous systems. We use VF to generate from images of 1,107 Drosophila GAL4 lines a projectome of a Drosophila brain. PMID:25014658

  7. Virtual finger boosts three-dimensional imaging and microsurgery as well as terabyte volume image visualization and analysis.

    PubMed

    Peng, Hanchuan; Tang, Jianyong; Xiao, Hang; Bria, Alessandro; Zhou, Jianlong; Butler, Victoria; Zhou, Zhi; Gonzalez-Bellido, Paloma T; Oh, Seung W; Chen, Jichao; Mitra, Ananya; Tsien, Richard W; Zeng, Hongkui; Ascoli, Giorgio A; Iannello, Giulio; Hawrylycz, Michael; Myers, Eugene; Long, Fuhui

    2014-07-11

    Three-dimensional (3D) bioimaging, visualization and data analysis are in strong need of powerful 3D exploration techniques. We develop virtual finger (VF) to generate 3D curves, points and regions-of-interest in the 3D space of a volumetric image with a single finger operation, such as a computer mouse stroke, or click or zoom from the 2D-projection plane of an image as visualized with a computer. VF provides efficient methods for acquisition, visualization and analysis of 3D images for roundworm, fruitfly, dragonfly, mouse, rat and human. Specifically, VF enables instant 3D optical zoom-in imaging, 3D free-form optical microsurgery, and 3D visualization and annotation of terabytes of whole-brain image volumes. VF also leads to orders of magnitude better efficiency of automated 3D reconstruction of neurons and similar biostructures over our previous systems. We use VF to generate from images of 1,107 Drosophila GAL4 lines a projectome of a Drosophila brain.

  8. Frequency domain analysis of knock images

    NASA Astrophysics Data System (ADS)

    Qi, Yunliang; He, Xin; Wang, Zhi; Wang, Jianxin

    2014-12-01

    High speed imaging-based knock analysis has mainly focused on time domain information, e.g. the spark triggered flame speed, the time when end gas auto-ignition occurs and the end gas flame speed after auto-ignition. This study presents a frequency domain analysis on the knock images recorded using a high speed camera with direct photography in a rapid compression machine (RCM). To clearly visualize the pressure wave oscillation in the combustion chamber, the images were high-pass-filtered to extract the luminosity oscillation. The luminosity spectrum was then obtained by applying fast Fourier transform (FFT) to three basic colour components (red, green and blue) of the high-pass-filtered images. Compared to the pressure spectrum, the luminosity spectra better identify the resonant modes of pressure wave oscillation. More importantly, the resonant mode shapes can be clearly visualized by reconstructing the images based on the amplitudes of luminosity spectra at the corresponding resonant frequencies, which agree well with the analytical solutions for mode shapes of gas vibration in a cylindrical cavity.
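
    The high-pass-then-FFT procedure can be illustrated on a one-dimensional synthetic luminosity trace (a stand-in for one colour component of the filtered image sequence); the frame rate, trend, and resonant frequency below are assumptions for illustration, not values from the study.

    ```python
    import numpy as np

    fs = 100_000.0                       # assumed camera frame rate, Hz
    t = np.arange(2048) / fs
    f_res = 6_000.0                      # assumed resonant frequency, Hz
    # Slowly rising combustion luminosity plus a pressure-wave oscillation.
    luminosity = 0.5 + 20.0 * t + 0.05 * np.sin(2 * np.pi * f_res * t)

    # "High-pass" step: subtract a moving-average trend so only the
    # oscillating component of the luminosity remains.
    kernel = np.ones(101) / 101
    trend = np.convolve(luminosity, kernel, mode="same")
    oscillation = luminosity - trend

    # FFT of the windowed oscillation; the spectral peak marks the resonant mode.
    window = np.hanning(len(oscillation))
    spectrum = np.abs(np.fft.rfft(oscillation * window))
    freqs = np.fft.rfftfreq(len(oscillation), d=1 / fs)
    peak_freq = freqs[np.argmax(spectrum)]
    ```

    In the paper this is done per colour channel on whole images; reconstructing an image from the per-pixel spectral amplitudes at a resonant frequency then visualizes the mode shape.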

  9. Digital imaging analysis to assess scar phenotype.

    PubMed

    Smith, Brian J; Nidey, Nichole; Miller, Steven F; Moreno Uribe, Lina M; Baum, Christian L; Hamilton, Grant S; Wehby, George L; Dunnwald, Martine

    2014-01-01

    In order to understand the link between the genetic background of patients and wound clinical outcomes, it is critical to have a reliable method to assess the phenotypic characteristics of healed wounds. In this study, we present a novel imaging method that provides reproducible, sensitive, and unbiased assessments of postsurgical scarring. We used this approach to investigate the possibility that genetic variants in orofacial clefting genes are associated with suboptimal healing. Red-green-blue digital images of postsurgical scars of 68 patients, following unilateral cleft lip repair, were captured using the 3dMD imaging system. Morphometric and colorimetric data of repaired regions of the philtrum and upper lip were acquired using ImageJ software, and the unaffected contralateral regions were used as patient-specific controls. Repeatability of the method was high with intraclass correlation coefficient score > 0.8. This method detected a very significant difference in all three colors, and for all patients, between the scarred and the contralateral unaffected philtrum (p ranging from 1.20×10⁻⁵ to 1.95×10⁻¹⁴). Physicians' clinical outcome ratings from the same images showed high interobserver variability (overall Pearson coefficient = 0.49) as well as low correlation with digital image analysis results. Finally, we identified genetic variants in TGFB3 and ARHGAP29 associated with suboptimal healing outcome.

  10. Ultrasonic image analysis for beef tenderness

    NASA Astrophysics Data System (ADS)

    Park, Bosoon; Thane, Brian R.; Whittaker, A. D.

    1993-05-01

    Objective measurement of meat tenderness has been a topic of concern for palatability evaluation. In this study, a real-time ultrasonic B-mode imaging method was used for noninvasively measuring beef palatability attributes such as juiciness, muscle fiber tenderness, connective tissue amount, overall tenderness, flavor intensity, and percent total collagen. A temporal averaging image enhancement method was used for image analysis. Ultrasonic image intensity, fractal dimension, attenuation, and statistical gray-tone spatial-dependence matrix image texture measurements were analyzed. The contrast textural feature was the parameter most correlated with palatability attributes. The longitudinal scanning method was better for juiciness, muscle fiber tenderness, flavor intensity, and percent soluble collagen, whereas the cross-sectional method was better for connective tissue amount and overall tenderness. Multivariate linear regression models were developed as functions of the textural features and image intensity parameters. The coefficients of determination of the regression models were R² = 0.97 for juiciness, 0.88 for percent total collagen, 0.75 for flavor intensity, 0.55 for muscle fiber tenderness, and 0.49 for overall tenderness, respectively.
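
    A multivariate linear regression of a palatability attribute on image-texture features, with its coefficient of determination, can be sketched as follows; the feature matrix and attribute scores are synthetic stand-ins, not the study's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-ins: 30 samples, 4 texture/intensity features
    # (e.g. contrast, image intensity, fractal dimension, attenuation).
    X = rng.normal(size=(30, 4))
    true_w = np.array([1.5, -0.8, 0.3, 0.0])
    juiciness = X @ true_w + rng.normal(scale=0.1, size=30)

    # Ordinary least squares with an intercept column.
    X1 = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(X1, juiciness, rcond=None)

    # Coefficient of determination R^2 (the statistic reported per attribute).
    pred = X1 @ coef
    ss_res = np.sum((juiciness - pred) ** 2)
    ss_tot = np.sum((juiciness - juiciness.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    ```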

  11. Digital imaging analysis to assess scar phenotype

    PubMed Central

    Smith, Brian J.; Nidey, Nichole; Miller, Steven F.; Moreno, Lina M.; Baum, Christian L.; Hamilton, Grant S.; Wehby, George L.; Dunnwald, Martine

    2015-01-01

    In order to understand the link between the genetic background of patients and wound clinical outcomes, it is critical to have a reliable method to assess the phenotypic characteristics of healed wounds. In this study, we present a novel imaging method that provides reproducible, sensitive and unbiased assessments of post-surgical scarring. We used this approach to investigate the possibility that genetic variants in orofacial clefting genes are associated with suboptimal healing. Red-green-blue (RGB) digital images of post-surgical scars of 68 patients, following unilateral cleft lip repair, were captured using the 3dMD image system. Morphometric and colorimetric data of repaired regions of the philtrum and upper lip were acquired using ImageJ software and the unaffected contralateral regions were used as patient-specific controls. Repeatability of the method was high with intraclass correlation coefficient score > 0.8. This method detected a very significant difference in all three colors, and for all patients, between the scarred and the contralateral unaffected philtrum (P ranging from 1.20×10⁻⁵ to 1.95×10⁻¹⁴). Physicians’ clinical outcome ratings from the same images showed high inter-observer variability (overall Pearson coefficient = 0.49) as well as low correlation with digital image analysis results. Finally, we identified genetic variants in TGFB3 and ARHGAP29 associated with suboptimal healing outcome. PMID:24635173

  12. Accelerating Data Acquisition, Reduction, and Analysis at the Spallation Neutron Source

    SciTech Connect

    Campbell, Stuart I; Kohl, James Arthur; Granroth, Garrett E; Miller, Ross G; Doucet, Mathieu; Stansberry, Dale V; Proffen, Thomas E; Taylor, Russell J; Dillow, David

    2014-01-01

    ORNL operates the world's brightest neutron source, the Spallation Neutron Source (SNS). Funded by the US DOE Office of Basic Energy Science, this national user facility hosts hundreds of scientists from around the world, providing a platform to enable breakthrough research in materials science, sustainable energy, and basic science. While the SNS provides scientists with advanced experimental instruments, the deluge of data generated from these instruments represents both a big data challenge and a big data opportunity. For example, instruments at the SNS can now generate multiple millions of neutron events per second, providing unprecedented experiment fidelity but leaving the user with a dataset that cannot be processed and analyzed in a timely fashion using legacy techniques. To address this big data challenge, ORNL has developed a near real-time streaming data reduction and analysis infrastructure. The Accelerating Data Acquisition, Reduction, and Analysis (ADARA) system provides a live streaming data infrastructure based on a high-performance publish-subscribe system; in situ data reduction, visualization, and analysis tools; and integration with a high-performance computing and data storage infrastructure. ADARA allows users of the SNS instruments to analyze an experiment as it runs, make changes to the experiment in real time, and visualize the results of those changes. In this paper we describe ADARA, provide a high-level architectural overview of the system, and present a set of use-cases and real-world demonstrations of the technology.

  13. A Flexible Software Platform for Fast-Scan Cyclic Voltammetry Data Acquisition and Analysis

    PubMed Central

    Bucher, Elizabeth S.; Brooks, Kenneth; Verber, Matthew D.; Keithley, Richard B.; Owesson-White, Catarina; Carroll, Susan; Takmakov, Pavel; McKinney, Collin J.; Wightman, R. Mark

    2013-01-01

    Over the last several decades, fast-scan cyclic voltammetry (FSCV) has proved to be a valuable analytical tool for the real-time measurement of neurotransmitter dynamics in vitro and in vivo. Indeed, FSCV has found application in a wide variety of disciplines including electrochemistry, neurobiology and behavioral psychology. The maturation of FSCV as an in vivo technique led users to pose increasingly complex questions that require a more sophisticated experimental design. To accommodate recent and future advances in FSCV application, our lab has developed High Definition Cyclic Voltammetry (HDCV). HDCV is an electrochemical software suite, and includes data acquisition and analysis programs. The data collection program delivers greater experimental flexibility and better user feedback through live displays. It supports experiments involving multiple electrodes with customized waveforms. It is compatible with TTL-based systems that are used for monitoring animal behavior and it enables simultaneous recording of electrochemical and electrophysiological data. HDCV analysis streamlines data processing with superior filtering options, seamlessly manages behavioral events, and integrates chemometric processing. Furthermore, analysis is capable of handling single files collected over extended periods of time, allowing the user to consider biological events on both sub-second and multi-minute time scales. Here we describe and demonstrate the utility of HDCV for in vivo experiments. PMID:24083898

  14. Flexible software platform for fast-scan cyclic voltammetry data acquisition and analysis.

    PubMed

    Bucher, Elizabeth S; Brooks, Kenneth; Verber, Matthew D; Keithley, Richard B; Owesson-White, Catarina; Carroll, Susan; Takmakov, Pavel; McKinney, Collin J; Wightman, R Mark

    2013-11-01

    Over the last several decades, fast-scan cyclic voltammetry (FSCV) has proved to be a valuable analytical tool for the real-time measurement of neurotransmitter dynamics in vitro and in vivo. Indeed, FSCV has found application in a wide variety of disciplines including electrochemistry, neurobiology, and behavioral psychology. The maturation of FSCV as an in vivo technique led users to pose increasingly complex questions that require a more sophisticated experimental design. To accommodate recent and future advances in FSCV application, our lab has developed High Definition Cyclic Voltammetry (HDCV). HDCV is an electrochemical software suite that includes data acquisition and analysis programs. The data collection program delivers greater experimental flexibility and better user feedback through live displays. It supports experiments involving multiple electrodes with customized waveforms. It is compatible with transistor-transistor logic-based systems that are used for monitoring animal behavior, and it enables simultaneous recording of electrochemical and electrophysiological data. HDCV analysis streamlines data processing with superior filtering options, seamlessly manages behavioral events, and integrates chemometric processing. Furthermore, analysis is capable of handling single files collected over extended periods of time, allowing the user to consider biological events on both subsecond and multiminute time scales. Here we describe and demonstrate the utility of HDCV for in vivo experiments. PMID:24083898

  15. Symmetric subspace learning for image analysis.

    PubMed

    Papachristou, Konstantinos; Tefas, Anastasios; Pitas, Ioannis

    2014-12-01

    Subspace learning (SL) is one of the most useful tools for image analysis and recognition. A large number of such techniques have been proposed utilizing a priori knowledge about the data. In this paper, new subspace learning techniques are presented that use symmetry constraints in their objective functions. The rationale behind this idea is to exploit the a priori knowledge that geometrical symmetry appears in several types of data, such as images, objects, faces, and so on. Experiments on artificial, facial expression recognition, face recognition, and object categorization databases highlight the superiority and the robustness of the proposed techniques, in comparison with standard SL techniques.

  16. SU-C-18C-06: Radiation Dose Reduction in Body Interventional Radiology: Clinical Results Utilizing a New Imaging Acquisition and Processing Platform

    SciTech Connect

    Kohlbrenner, R; Kolli, KP; Taylor, A; Kohi, M; Fidelman, N; LaBerge, J; Kerlan, R; Gould, R

    2014-06-01

    Purpose: To quantify the patient radiation dose reduction achieved during transarterial chemoembolization (TACE) procedures performed in a body interventional radiology suite equipped with the Philips Allura Clarity imaging acquisition and processing platform, compared to TACE procedures performed in the same suite equipped with the Philips Allura Xper platform. Methods: Total fluoroscopy time, cumulative dose area product, and cumulative air kerma were recorded for the first 25 TACE procedures performed to treat hepatocellular carcinoma (HCC) in a Philips body interventional radiology suite equipped with Philips Allura Clarity. The same data were collected for the prior 85 TACE procedures performed to treat HCC in the same suite equipped with Philips Allura Xper. Mean values from these cohorts were compared using two-tailed t tests. Results: Following installation of the Philips Allura Clarity platform, a 42.8% reduction in mean cumulative dose area product (3033.2 versus 1733.6 mGy·cm², p < 0.0001) and a 31.2% reduction in mean cumulative air kerma (1445.4 versus 994.2 mGy, p < 0.001) was achieved compared to similar procedures performed in the same suite equipped with the Philips Allura Xper platform. Mean total fluoroscopy time was not significantly different between the two cohorts (1679.3 versus 1791.3 seconds, p = 0.41). Conclusion: This study demonstrates a significant patient radiation dose reduction during TACE procedures performed to treat HCC after a body interventional radiology suite was converted to the Philips Allura Clarity platform from the Philips Allura Xper platform. Future work will focus on evaluation of patient dose reduction in a larger cohort of patients across a broader range of procedures and in specific populations, including obese patients and pediatric patients, and comparison of image quality between the two platforms. Funding for this study was provided by Philips Healthcare, with 5% salary support provided to authors K. Pallav
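
    The cohort comparison above is a two-tailed t test on per-procedure dose metrics. A minimal sketch on synthetic cohorts (the means and cohort sizes mimic those reported; the spreads are assumptions):

    ```python
    import numpy as np

    def welch_t(a, b):
        """Welch's two-sample t statistic for a two-tailed comparison of means."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
        return (a.mean() - b.mean()) / np.sqrt(va + vb)

    # Synthetic dose-area-product readings, mGy*cm^2 (spreads are assumed).
    rng = np.random.default_rng(1)
    xper = rng.normal(3033.2, 400.0, size=85)      # Allura Xper cohort
    clarity = rng.normal(1733.6, 300.0, size=25)   # Allura Clarity cohort

    percent_reduction = 100.0 * (1.0 - clarity.mean() / xper.mean())
    t_stat = welch_t(xper, clarity)
    ```

    With a statistic this large, the two-tailed p-value is far below 0.0001, matching the qualitative conclusion of the abstract.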

  17. Acquisition and processing pitfall with clipped traces in surface-wave analysis

    NASA Astrophysics Data System (ADS)

    Gao, Lingli; Pan, Yudi

    2016-02-01

    Multichannel analysis of surface waves (MASW) is widely used to estimate near-surface shear (S)-wave velocity. In the MASW method, generating a reliable dispersion image in the frequency-velocity (f-v) domain is an important processing step; a locus along the peaks of dispersion energy at different frequencies allows dispersion curves to be constructed for inversion. When offsets are short, the output seismic data may exceed the dynamic range of the geophones/seismograph, and as a result peaks and/or troughs of traces are squared off in the recorded shot gathers. Dispersion images generated from raw shot gathers with clipped traces are contaminated by artifacts, which might be misidentified as Rayleigh-wave phase velocities or body-wave velocities and potentially lead to incorrect results. We simulated several synthetic models containing clipped traces and analyzed the amplitude spectra of unclipped and clipped waves. The results indicate that the artifacts in the dispersion image depend on the level of clipping. A real-world example also shows how clipped traces affect the dispersion image. All the results suggest that clipped traces should be removed from shot gathers before generating dispersion images, in order to pick accurate phase velocities and set reasonable initial inversion models.
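
    A simple pre-processing check of the kind the authors recommend (flagging clipped traces before dispersion imaging) might look like this; the saturation level, flag threshold, and synthetic gather are assumptions for illustration.

    ```python
    import numpy as np

    def find_clipped_traces(gather, dyn_range, frac=0.01):
        """Flag traces whose samples sit at the recorder's saturation limit.

        gather:    (n_traces, n_samples) shot gather
        dyn_range: absolute amplitude at which the seismograph saturates
        frac:      fraction of saturated samples above which a trace is flagged
        """
        saturated = np.abs(gather) >= dyn_range
        return np.where(saturated.mean(axis=1) > frac)[0]

    # Synthetic gather: 3 traces, the middle (near-offset) one clipped.
    t = np.linspace(0, 1, 500)
    clean = np.sin(2 * np.pi * 10 * t)
    gather = np.vstack([0.5 * clean, 3.0 * clean, 0.8 * clean])
    gather = np.clip(gather, -1.0, 1.0)        # recorder saturates at +/-1

    bad = find_clipped_traces(gather, dyn_range=1.0)
    ```

    Flagged traces would then be removed from the gather before the f-v transform is computed.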

  18. Computer image analysis in obtaining characteristics of images: greenhouse tomatoes in the process of generating learning sets of artificial neural networks

    NASA Astrophysics Data System (ADS)

    Zaborowicz, M.; Przybył, J.; Koszela, K.; Boniecki, P.; Mueller, W.; Raba, B.; Lewicki, A.; Przybył, K.

    2014-04-01

    The aim of the project was to develop software that extracts the characteristics of a greenhouse tomato from its image. Data gathered during image analysis and processing were used to build learning sets for artificial neural networks. The program processes pictures in JPEG format, acquires statistical information about each picture, and exports it to an external file. The software is intended to batch-analyze the collected research material and save the obtained information as a CSV file. It computes 33 independent parameters that implicitly describe the tested image. The application is dedicated to the processing and image analysis of greenhouse tomatoes, but it can also be used for the analysis of other fruits and vegetables of spherical shape.
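
    The batch "image statistics to CSV" workflow can be sketched as follows; the specific statistics here (a handful of per-channel moments) are illustrative stand-ins, not the program's 33 parameters.

    ```python
    import csv
    import io
    import numpy as np

    def describe_image(rgb):
        """A few per-channel statistics such a tool might export
        (the original program computes 33 parameters; this is a sketch)."""
        stats = {}
        for i, name in enumerate(("red", "green", "blue")):
            ch = rgb[..., i].astype(float)
            stats[f"{name}_mean"] = ch.mean()
            stats[f"{name}_std"] = ch.std()
            stats[f"{name}_min"] = ch.min()
            stats[f"{name}_max"] = ch.max()
        return stats

    # Batch-analyze a list of (synthetic) images and write one CSV row each.
    rng = np.random.default_rng(2)
    images = [rng.integers(0, 256, size=(64, 64, 3)) for _ in range(3)]

    buf = io.StringIO()
    rows = [describe_image(img) for img in images]
    writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
    csv_text = buf.getvalue()
    ```

    In practice the rows would be written to a file on disk and each row would carry an image identifier; `io.StringIO` is used here only to keep the sketch self-contained.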

  19. Circumcision Status and Risk of HIV Acquisition during Heterosexual Intercourse for Both Males and Females: A Meta-Analysis

    PubMed Central

    WEI, Qiang; YANG, Lu; SONG, Tu run; YUAN, Hai chao; LV, Xiao; HAN, Ping

    2015-01-01

    In this study, we evaluated if male circumcision was associated with lower HIV acquisition for HIV(−) males and HIV(−) females during normal sexual behavior. We performed a systematic literature search of PubMed, EMBASE, and Cochrane Central Register of Controlled Trials (CENTRAL) databases to identify studies that compared HIV acquisition for the circumcised and uncircumcised groups. The reference lists of the included and excluded studies were also screened. Fifteen studies (4 RCTs and 11 prospective cohort studies) were included, and the related data were extracted and analyzed in a meta-analysis. Our study revealed strong evidence that male circumcision was associated with reduced HIV acquisition for HIV(−) males during sexual intercourse with females [pooled adjusted risk ratio (RR): 0.30, 95% CI 0.24–0.38, P < 0.00001] and provided a 70% protective effect. In contrast, no difference was detected in HIV acquisition for HIV(−) females between the circumcised and uncircumcised groups (pooled adjusted RR after sensitivity analysis: 0.68, 95% CI 0.40–1.15, P = 0.15). In conclusion, male circumcision could significantly protect males but not females from HIV acquisition at the population level. Male circumcision may serve as an additional approach toward HIV control, in conjunction with other strategies such as HIV counseling and testing, condom promotion, and so on. PMID:25942703
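
    Pooling adjusted risk ratios across studies, as in this meta-analysis, is commonly done by inverse-variance weighting on the log scale. A minimal fixed-effect sketch (the per-study RRs and confidence intervals below are hypothetical, not the included trials):

    ```python
    import math

    def pool_risk_ratios(rrs, ci_los, ci_his):
        """Fixed-effect inverse-variance pooling of risk ratios on the log
        scale. Study weights come from the 95% CI width:
        se = (ln(hi) - ln(lo)) / 3.92."""
        num = den = 0.0
        for rr, lo, hi in zip(rrs, ci_los, ci_his):
            se = (math.log(hi) - math.log(lo)) / 3.92
            w = 1.0 / se ** 2
            num += w * math.log(rr)
            den += w
        log_pooled = num / den
        se_pooled = math.sqrt(1.0 / den)
        ci = (math.exp(log_pooled - 1.96 * se_pooled),
              math.exp(log_pooled + 1.96 * se_pooled))
        return math.exp(log_pooled), ci

    # Hypothetical per-study adjusted RRs (illustrative values only).
    pooled, (lo, hi) = pool_risk_ratios(
        rrs=[0.40, 0.25, 0.30],
        ci_los=[0.25, 0.15, 0.20],
        ci_his=[0.64, 0.42, 0.45])
    ```

    A random-effects model (e.g. DerSimonian-Laird) would add a between-study variance term to each weight; the fixed-effect form above is the simplest case.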

  20. Transcriptomic analysis highlights reciprocal interactions of urea and nitrate for nitrogen acquisition by maize roots.

    PubMed

    Zanin, Laura; Zamboni, Anita; Monte, Rossella; Tomasi, Nicola; Varanini, Zeno; Cesco, Stefano; Pinton, Roberto

    2015-03-01

    Even though urea and nitrate are the two major nitrogen (N) forms applied as fertilizers in agriculture and occur concomitantly in soils, the reciprocal influence of these two N sources on the mechanisms of their acquisition are poorly understood. Therefore, molecular and physiological aspects of urea and nitrate uptake were investigated in maize (Zea mays), a crop plant consuming high amounts of N. In roots, urea uptake was stimulated by the presence of urea in the external solution, indicating the presence of an inducible transport system. On the other hand, the presence of nitrate depressed the induction of urea uptake and, at the same time, the induction of nitrate uptake was depressed by the presence of urea. The expression of about 60,000 transcripts of maize in roots was monitored by microarray analyses and the transcriptional patterns of those genes involved in nitrogen acquisition were analyzed by real-time reverse transcription-PCR (RT-PCR). In comparison with the treatment without added N, the exposure of maize roots to urea modulated the expression of only very few genes, such as asparagine synthase. On the other hand, the concomitant presence of urea and nitrate enhanced the overexpression of genes involved in nitrate transport (NRT2) and assimilation (nitrate and nitrite reductase, glutamine synthetase 2), and a specific response of 41 transcripts was determined, including glutamine synthetase 1-5, glutamine oxoglutarate aminotransferase, shikimate kinase and arogenate dehydrogenase. Also based on the real-time RT-PCR analysis, the transcriptional modulation induced by both sources might determine an increase in N metabolism promoting a more efficient assimilation of the N that is taken up. PMID:25524070

  2. Angiographic imaging using an 18.9 MHz swept-wavelength laser that is phase-locked to the data acquisition clock and resonant scanners (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Tozburun, Serhat; Blatter, Cedric; Siddiqui, Meena; Nam, Ahhyun S.; Vakoc, Benjamin J.

    2016-03-01

    In this study, we present an angiographic system comprising a novel 18.9 MHz swept-wavelength source integrated with a MEMS-based 23.7 kHz fast-axis scanner. The system provides rapid acquisition of frames and volumes on which a range of Doppler and intensity-based angiographic analyses can be performed. Notably, the source and the data acquisition computer can be directly phase-locked to provide an intrinsically phase-stable imaging system that supports Doppler measurements without the need for individual A-line triggers or post-processing phase-calibration algorithms. The system is integrated with a 1.8 gigasample-per-second (GS/s) acquisition card supporting continuous acquisition to computer RAM for 10 seconds. Using this system, we demonstrate phase-stable acquisitions across volumes acquired at a 60 Hz rate. We also highlight the ability to perform c-mode angiography, providing volume perfusion measurements with 30 Hz temporal resolution. Ultimately, the speed and phase stability of this laser and MEMS scanner platform can be leveraged to accelerate OCT-based angiography and both phase-sensitive and phase-insensitive extraction of blood flow velocity.

  3. Autonomous Image Analysis for Future Mars Missions

    NASA Technical Reports Server (NTRS)

    Gulick, V. C.; Morris, R. L.; Ruzon, M. A.; Bandari, E.; Roush, T. L.

    1999-01-01

    To explore high priority landing sites and to prepare for eventual human exploration, future Mars missions will involve rovers capable of traversing tens of kilometers. However, the current process by which scientists interact with a rover does not scale to such distances. Specifically, numerous command cycles are required to complete even simple tasks, such as pointing the spectrometer at a variety of nearby rocks. In addition, the time required by scientists to interpret image data before new commands can be given and the limited amount of data that can be downlinked during a given command cycle constrain rover mobility and achievement of science goals. Experience with rover tests on Earth supports these concerns. As a result, traverses to science sites as identified in orbital images would require numerous science command cycles over a period of many weeks, months or even years, perhaps exceeding rover design life and other constraints. Autonomous onboard science analysis can address these problems in two ways. First, it will allow the rover to preferentially transmit "interesting" images, defined as those likely to have higher science content. Second, the rover will be able to anticipate future commands. For example, a rover might autonomously acquire and return spectra of "interesting" rocks along with a high-resolution image of those rocks in addition to returning the context images in which they were detected. Such approaches, coupled with appropriate navigational software, help to address both the data volume and command cycle bottlenecks that limit both rover mobility and science yield. We are developing fast, autonomous algorithms to enable such intelligent on-board decision making by spacecraft. Autonomous algorithms developed to date have the ability to identify rocks and layers in a scene, locate the horizon, and compress multi-spectral image data. We are currently investigating the possibility of reconstructing a 3D surface from a sequence of images.

  4. Poka Yoke system based on image analysis and object recognition

    NASA Astrophysics Data System (ADS)

    Belu, N.; Ionescu, L. M.; Misztal, A.; Mazăre, A.

    2015-11-01

    Poka Yoke is a quality management method aimed at preventing faults from arising during production processes; it deals with "fail-safing" or "mistake-proofing". The Poka Yoke concept was created and developed by Shigeo Shingo for the Toyota Production System, and it is used in many fields, especially in the monitoring of production processes. In many cases, identifying faults in a production process involves a higher cost than the cost of disposal itself. Usual Poka Yoke solutions rely on multiple sensors to identify nonconformities, which requires additional mechanical and electronic equipment on the production line. This, coupled with the fact that the method itself is invasive and affects the production process, increases the cost of diagnostics, and the machines on which a Poka Yoke system is implemented become ever more sophisticated and bulky. In this paper we propose a Poka Yoke solution based on image analysis and fault identification. The solution consists of a module for image acquisition, a mid-level processing stage, and an object recognition module using associative memory (a Hopfield network). All are integrated into an embedded system with an AD (analog-to-digital) converter and a Zynq-7000 device (22 nm technology).
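
    The associative-memory recognition stage (a Hopfield-type network) can be sketched in a few lines; the pattern size, number of stored silhouettes, and noise model below are assumptions for illustration, not the authors' embedded implementation.

    ```python
    import numpy as np

    class Hopfield:
        """Minimal Hopfield associative memory: Hebbian weight storage and
        synchronous sign-threshold recall."""
        def __init__(self, patterns):
            p = np.asarray(patterns, dtype=float)   # rows: +/-1 patterns
            self.W = p.T @ p / p.shape[1]
            np.fill_diagonal(self.W, 0.0)           # no self-connections

        def recall(self, x, n_iter=10):
            s = np.asarray(x, dtype=float).copy()
            for _ in range(n_iter):                 # synchronous updates
                s = np.where(self.W @ s >= 0, 1.0, -1.0)
            return s

    # Store two reference part "silhouettes" (flattened +/-1 vectors) and
    # recall one of them from a corrupted camera observation.
    rng = np.random.default_rng(3)
    patterns = np.sign(rng.normal(size=(2, 64)))
    net = Hopfield(patterns)

    noisy = patterns[0].copy()
    noisy[:6] *= -1                                 # flip 6 of 64 pixels
    recalled = net.recall(noisy)
    ```

    In a fault-detection setting, a recalled pattern close to a stored reference indicates a conforming part; failure to converge to any stored silhouette flags a nonconformity.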

  5. Analysis of Fiber deposition using Automatic Image Processing Method

    NASA Astrophysics Data System (ADS)

    Belka, M.; Lizal, F.; Jedelsky, J.; Jicha, M.

    2013-04-01

    Fibers are a permanent threat to human health. They have the ability to penetrate deep into the human lung, deposit there, and cause health hazards such as lung cancer. An experiment was carried out to gain more data about the deposition of fibers. Monodisperse glass fibers were delivered into a realistic model of human airways at an inspiratory flow rate of 30 l/min. The replica included the human airways from the oral cavity up to the seventh generation of branching. After the delivery, deposited fibers were rinsed from the model and placed on nitrocellulose filters. A novel method was established for deposition data acquisition, based on the principle of image analysis. The images were captured by a high-definition camera attached to a phase-contrast microscope. Results of the new method were compared with the standard PCM method, which follows NIOSH methodology 7400, and a good match was found. The new method was found applicable for the evaluation of fibers, and deposition fraction and deposition efficiency were calculated afterwards.
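    The abstract does not detail the counting step, but evaluating fibers from filter images ultimately reduces to finding connected foreground regions in a thresholded micrograph. A pure-Python sketch of that step (the binary mask below is invented, standing in for a thresholded microscope image):

    ```python
    from collections import deque

    def count_objects(mask):
        """Count 4-connected foreground regions in a binary image (list of lists)."""
        h, w = len(mask), len(mask[0])
        seen = [[False] * w for _ in range(h)]
        count = 0
        for y in range(h):
            for x in range(w):
                if mask[y][x] and not seen[y][x]:
                    count += 1                       # new region found
                    q = deque([(y, x)]); seen[y][x] = True
                    while q:                         # flood-fill the region
                        cy, cx = q.popleft()
                        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            ny, nx = cy + dy, cx + dx
                            if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                                seen[ny][nx] = True
                                q.append((ny, nx))
        return count

    # Thresholded micrograph stand-in: three separate "fibers".
    img = [[0,1,1,0,0,0],
           [0,1,1,0,1,0],
           [0,0,0,0,1,0],
           [1,1,0,0,1,0]]
    print(count_objects(img))  # 3
    ```

    In practice the count per filter area, compared against the known delivered dose, yields the deposition fraction for each airway segment.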

  6. Morphological analysis of infrared images for waterjets

    NASA Astrophysics Data System (ADS)

    Gong, Yuxin; Long, Aifang

    2013-03-01

    High-speed waterjets have been widely used in industry and investigated as a model of free shearing turbulence. This paper presents an investigation involving the flow visualization of a high-speed water jet; the noise reduction of the raw thermogram using a high-pass morphological filter and a median filter; the image enhancement using a white top-hat filter; and the image segmentation using the multiple-thresholding method. The image processing results produced by the designed morphological (top-hat) filters proved ideal for further quantitative and in-depth analysis and can be used as a new morphological filter bank that may be of general use for analogous work.
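    The white top-hat transform mentioned above subtracts an image's morphological opening from the image itself, which isolates bright features smaller than the structuring element. A minimal NumPy sketch with a flat square element and a synthetic frame (not the paper's filter bank):

    ```python
    import numpy as np

    def erode(img, k):
        """Grayscale erosion with a flat k x k structuring element (min filter)."""
        pad = k // 2
        p = np.pad(img, pad, mode='edge')
        out = np.empty_like(img, dtype=float)
        h, w = img.shape
        for y in range(h):
            for x in range(w):
                out[y, x] = p[y:y + k, x:x + k].min()
        return out

    def dilate(img, k):
        """Grayscale dilation with a flat k x k structuring element (max filter)."""
        pad = k // 2
        p = np.pad(img, pad, mode='edge')
        out = np.empty_like(img, dtype=float)
        h, w = img.shape
        for y in range(h):
            for x in range(w):
                out[y, x] = p[y:y + k, x:x + k].max()
        return out

    def white_tophat(img, k=3):
        """Image minus its opening (erosion then dilation): keeps bright
        features smaller than the element, e.g. thin jet filaments."""
        return img - dilate(erode(img, k), k)

    # Flat background with one bright single-pixel spike: the top-hat isolates it.
    frame = np.full((5, 5), 10.0)
    frame[2, 2] = 50.0
    th = white_tophat(frame)
    print(th[2, 2], th[0, 0])  # 40.0 0.0
    ```

    The opening flattens the spike to the background level, so the subtraction leaves only the small bright feature, exactly the enhancement behavior the abstract describes.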

  7. Multimodal Imaging Brain Connectivity Analysis (MIBCA) toolbox

    PubMed Central

    Lacerda, Luis Miguel; Ferreira, Hugo Alexandre

    2015-01-01

    Aim. In recent years, connectivity studies using neuroimaging data have increased the understanding of the organization of large-scale structural and functional brain networks. However, data analysis is time consuming as rigorous procedures must be assured, from structuring data and pre-processing to modality-specific data procedures. Until now, no single toolbox was able to perform such investigations on truly multimodal image data from beginning to end, including the combination of different connectivity analyses. Thus, we have developed the Multimodal Imaging Brain Connectivity Analysis (MIBCA) toolbox with the goal of reducing time wasted in data processing and of allowing an innovative and comprehensive approach to brain connectivity. Materials and Methods. The MIBCA toolbox is a fully automated all-in-one connectivity toolbox that offers pre-processing, connectivity and graph-theoretical analyses of multimodal image data such as diffusion-weighted imaging, functional magnetic resonance imaging (fMRI) and positron emission tomography (PET). It was developed in the MATLAB environment and pipelines well-known neuroimaging software packages such as FreeSurfer, SPM, FSL, and Diffusion Toolkit. It further implements routines for the construction of structural, functional and effective or combined connectivity matrices, as well as routines for the extraction and calculation of imaging and graph-theory metrics, the latter also using functions from the Brain Connectivity Toolbox. Finally, the toolbox performs group statistical analysis and enables data visualization in the form of matrices, 3D brain graphs and connectograms. In this paper the MIBCA toolbox is presented by illustrating its capabilities using multimodal image data from a group of 35 healthy subjects (19–73 years old) with volumetric T1-weighted, diffusion tensor imaging, and resting-state fMRI data, and 10 subjects with 18F-Altanserin PET data. Results. It was observed both a high inter-hemispheric symmetry
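    Graph-theory metrics of the kind the toolbox extracts can be illustrated on a toy adjacency matrix. The sketch below computes node degree and the local clustering coefficient for an undirected binary graph (the MIBCA and Brain Connectivity Toolbox implementations are far more general; the connectome here is invented):

    ```python
    import numpy as np

    def degree(A):
        """Node degrees of an undirected, binary adjacency matrix."""
        return A.sum(axis=1)

    def clustering(A):
        """Local clustering coefficient: closed triangles over possible
        neighbor pairs, per node; 0 for nodes of degree < 2."""
        tri = np.diag(A @ A @ A) / 2          # triangles through each node
        k = A.sum(axis=1)
        with np.errstate(divide='ignore', invalid='ignore'):
            c = np.where(k > 1, 2 * tri / (k * (k - 1)), 0.0)
        return c

    # Toy 4-node "connectome": a triangle (0,1,2) plus a pendant node 3.
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]])
    deg = degree(A)
    cc = clustering(A)
    print(deg, cc)
    ```

    Nodes 0 and 1 sit entirely inside the triangle (clustering 1.0), node 2 also carries the pendant edge (clustering 1/3), and the degree-1 node 3 gets 0 by convention.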

  8. Multimodal Imaging Brain Connectivity Analysis (MIBCA) toolbox.

    PubMed

    Ribeiro, Andre Santos; Lacerda, Luis Miguel; Ferreira, Hugo Alexandre

    2015-01-01

    Aim. In recent years, connectivity studies using neuroimaging data have increased the understanding of the organization of large-scale structural and functional brain networks. However, data analysis is time consuming as rigorous procedures must be assured, from structuring data and pre-processing to modality-specific data procedures. Until now, no single toolbox was able to perform such investigations on truly multimodal image data from beginning to end, including the combination of different connectivity analyses. Thus, we have developed the Multimodal Imaging Brain Connectivity Analysis (MIBCA) toolbox with the goal of reducing time wasted in data processing and of allowing an innovative and comprehensive approach to brain connectivity. Materials and Methods. The MIBCA toolbox is a fully automated all-in-one connectivity toolbox that offers pre-processing, connectivity and graph-theoretical analyses of multimodal image data such as diffusion-weighted imaging, functional magnetic resonance imaging (fMRI) and positron emission tomography (PET). It was developed in the MATLAB environment and pipelines well-known neuroimaging software packages such as FreeSurfer, SPM, FSL, and Diffusion Toolkit. It further implements routines for the construction of structural, functional and effective or combined connectivity matrices, as well as routines for the extraction and calculation of imaging and graph-theory metrics, the latter also using functions from the Brain Connectivity Toolbox. Finally, the toolbox performs group statistical analysis and enables data visualization in the form of matrices, 3D brain graphs and connectograms. In this paper the MIBCA toolbox is presented by illustrating its capabilities using multimodal image data from a group of 35 healthy subjects (19-73 years old) with volumetric T1-weighted, diffusion tensor imaging, and resting-state fMRI data, and 10 subjects with 18F-Altanserin PET data. Results. It was observed both a high inter-hemispheric symmetry and

  9. 77 FR 26009 - CoStar Group, Inc., Lonestar Acquisition Sub, Inc., and LoopNet, Inc.; Analysis of Agreement...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-02

    ...Star Group, Inc., Lonestar Acquisition Sub, Inc., and LoopNet, Inc.; Analysis of Agreement Containing... ACoStar LoopNet, File No. 111 0172'' on your comment, and file your comment online at https... April 16, 2012. Write ACoStar LoopNet, File No. 111 0172'' on your comment. Your comment B...

  10. Scalable histopathological image analysis via active learning.

    PubMed

    Zhu, Yan; Zhang, Shaoting; Liu, Wei; Metaxas, Dimitris N

    2014-01-01

    Training an effective and scalable system for medical image analysis usually requires a large amount of labeled data, which incurs a tremendous annotation burden for pathologists. Recent progress in active learning can alleviate this issue, leading to a great reduction in the labeling cost without sacrificing much prediction accuracy. However, most existing active learning methods disregard the "structured information" that may exist in medical images (e.g., data from individual patients), and make the simplifying assumption that unlabeled data are independently and identically distributed. Neither assumption may be suitable for real-world medical images. In this paper, we propose a novel batch-mode active learning method which explores and leverages such structured information in annotations of medical images to enforce diversity among the selected data, thereby maximizing the information gain. We formulate the active learning problem as an adaptive submodular function maximization problem subject to a partition matroid constraint, and further present an efficient greedy algorithm that achieves a good solution with a theoretically proven bound. We demonstrate the efficacy of our algorithm on thousands of histopathological images of breast microscopic tissues. PMID:25320821
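    The abstract formulates batch selection as submodular maximization under a partition matroid (e.g., at most a few samples per patient). The paper's exact objective is not given here, so the sketch below uses a facility-location surrogate with a per-group cap; the feature vectors, group labels, and budget are all invented for illustration:

    ```python
    import numpy as np

    def select_batch(X, groups, budget, cap=1):
        """Greedy facility-location maximization under a partition matroid:
        at most `cap` samples per group, `budget` samples overall, so that
        the chosen set represents the whole unlabeled pool well (a stand-in
        for the paper's adaptive submodular objective)."""
        sim = X @ X.T                    # cosine similarity (rows assumed normalized)
        n = len(X)
        chosen, per_group = [], {}
        cover = np.zeros(n)              # best similarity of each sample to the batch
        for _ in range(budget):
            best, best_gain = None, 1e-12
            for i in range(n):
                if i in chosen or per_group.get(groups[i], 0) >= cap:
                    continue             # matroid constraint: group quota exhausted
                gain = np.maximum(cover, sim[i]).sum() - cover.sum()
                if gain > best_gain:     # marginal gain of adding sample i
                    best, best_gain = i, gain
            if best is None:
                break
            chosen.append(best)
            per_group[groups[best]] = per_group.get(groups[best], 0) + 1
            cover = np.maximum(cover, sim[best])
        return chosen

    # Six image features from three "patients"; pick 2, at most one per patient.
    X = np.array([[1, 0], [0.9, 0.1], [0, 1], [0.1, 0.9], [0.7, 0.7], [0.6, 0.8]], float)
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    picks = select_batch(X, groups=[0, 0, 1, 1, 2, 2], budget=2, cap=1)
    print(picks)
    ```

    Because marginal gains shrink for samples similar to those already chosen, and the cap blocks a second pick from the same patient, the greedy loop naturally enforces the diversity the paper argues for.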

  11. The synthesis and analysis of color images

    NASA Technical Reports Server (NTRS)

    Wandell, B. A.

    1985-01-01

    A method is described for performing the synthesis and analysis of digital color images. The method is based on two principles. First, image data are represented with respect to the separate physical factors, surface reflectance and the spectral power distribution of the ambient light, that give rise to the perceived color of an object. Second, the encoding is made efficient by using a basis expansion for the surface spectral reflectance and spectral power distribution of the ambient light that takes advantage of the high degree of correlation across the visible wavelengths normally found in such functions. Within this framework, the same basic methods can be used to synthesize image data for color display monitors and printed materials, and to analyze image data into estimates of the spectral power distribution and surface spectral reflectances. The method can be applied to a variety of tasks. Examples of applications include the color balancing of color images, and the identification of material surface spectral reflectance when the lighting cannot be completely controlled.
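    The two-principle linear model described above can be made concrete: with low-dimensional bases for surface reflectance and a known illuminant, sensor responses are linear in the surface weights, so the analysis step is a linear inversion. A numerical sketch with invented wavelength sampling, basis functions, and sensor sensitivities (not the paper's data):

    ```python
    import numpy as np

    # Coarse wavelength sampling, 400-700 nm in 7 bands (illustrative only).
    wl = np.linspace(400, 700, 7)

    # Hypothetical 3-term reflectance basis: constant, ramp, mid-band bump.
    S = np.stack([np.ones_like(wl),
                  (wl - 550) / 150,
                  np.exp(-((wl - 550) / 60) ** 2)])       # shape (3, 7)

    E = np.ones_like(wl)                                  # flat illuminant, assumed known
    # Three Gaussian "sensor" sensitivities centered at 450, 550, 650 nm.
    R = np.stack([np.exp(-((wl - c) / 50) ** 2) for c in (450, 550, 650)])

    # Forward (synthesis) model: response_k = sum_wl R_k * E * S(sigma).
    A = R @ np.diag(E) @ S.T                              # (3 sensors, 3 weights)
    sigma_true = np.array([0.5, 0.2, 0.3])                # surface weights
    responses = A @ sigma_true

    # Analysis step: invert the linear model to recover the surface weights.
    sigma_hat = np.linalg.lstsq(A, responses, rcond=None)[0]
    print(np.allclose(sigma_hat, sigma_true))  # True
    ```

    Because the basis expansion keeps the model low-dimensional, as many sensor channels as basis weights suffice for exact recovery in this noiseless sketch; the same matrix run forward synthesizes display data, run backward it estimates reflectance.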

  12. Automated acquisition and analysis of small angle X-ray scattering data

    NASA Astrophysics Data System (ADS)

    Franke, Daniel; Kikhney, Alexey G.; Svergun, Dmitri I.

    2012-10-01

    Small Angle X-ray Scattering (SAXS) is a powerful tool in the study of biological macromolecules providing information about the shape, conformation, assembly and folding states in solution. Recent advances in robotic fluid handling make it possible to perform automated high throughput experiments including fast screening of solution conditions, measurement of structural responses to ligand binding, changes in temperature or chemical modifications. Here, an approach to full automation of SAXS data acquisition and data analysis is presented, which advances automated experiments to the level of a routine tool suitable for large scale structural studies. The approach links automated sample loading, primary data reduction and further processing, facilitating queuing of multiple samples for subsequent measurement and analysis and providing means of remote experiment control. The system was implemented and comprehensively tested in user operation at the BioSAXS beamlines X33 and P12 of EMBL at the DORIS and PETRA storage rings of DESY, Hamburg, respectively, but is also easily applicable to other SAXS stations due to its modular design.

  13. Optimal Diphthongs: An OT Analysis of the Acquisition of Spanish Diphthongs

    ERIC Educational Resources Information Center

    Krause, Alice

    2013-01-01

    This dissertation investigates the acquisition of Spanish diphthongs by adult native speakers of English. The following research questions will be addressed: 1) How do adult native speakers of English pronounce sequences of two vowels in their L2 Spanish at different levels of acquisition? 2) Can OT learnability models, specifically the GLA,…

  14. Learned Attention in Adult Language Acquisition: A Replication and Generalization Study and Meta-Analysis

    ERIC Educational Resources Information Center

    Ellis, Nick C.; Sagarra, Nuria

    2011-01-01

    This study investigates associative learning explanations of the limited attainment of adult compared to child language acquisition in terms of learned attention to cues. It replicates and extends Ellis and Sagarra (2010) in demonstrating short- and long-term learned attention in the acquisition of temporal reference in Latin. In Experiment 1,…

  15. Library Catalog Log Analysis in E-Book Patron-Driven Acquisitions (PDA): A Case Study

    ERIC Educational Resources Information Center

    Urbano, Cristóbal; Zhang, Yin; Downey, Kay; Klingler, Thomas

    2015-01-01

    Patron-Driven Acquisitions (PDA) is a new model used for e-book acquisition by academic libraries. A key component of this model is to make records of ebooks available in a library catalog and let actual patron usage decide whether or not an item is purchased. However, there has been a lack of research examining the role of the library catalog as…

  16. Quantitative image analysis of celiac disease

    PubMed Central

    Ciaccio, Edward J; Bhagat, Govind; Lewis, Suzanne K; Green, Peter H

    2015-01-01

    We outline the use of quantitative techniques that are currently used for the analysis of celiac disease. Image processing techniques can be useful to statistically analyze the pixel data of endoscopic images acquired with standard or videocapsule endoscopy. It is shown how current techniques have evolved to become more useful for gastroenterologists who seek to understand celiac disease and to screen for it in suspected patients. New directions for focus in the development of methodology for the diagnosis and treatment of this disease are suggested. It is evident that there are still broad areas with potential to expand the use of quantitative techniques for improved analysis in suspected or known celiac disease patients. PMID:25759524

  17. Quantitative image analysis of celiac disease.

    PubMed

    Ciaccio, Edward J; Bhagat, Govind; Lewis, Suzanne K; Green, Peter H

    2015-03-01

    We outline the use of quantitative techniques that are currently used for the analysis of celiac disease. Image processing techniques can be useful to statistically analyze the pixel data of endoscopic images acquired with standard or videocapsule endoscopy. It is shown how current techniques have evolved to become more useful for gastroenterologists who seek to understand celiac disease and to screen for it in suspected patients. New directions for focus in the development of methodology for the diagnosis and treatment of this disease are suggested. It is evident that there are still broad areas with potential to expand the use of quantitative techniques for improved analysis in suspected or known celiac disease patients.

  18. Characterisation of mycelial morphology using image analysis.

    PubMed

    Paul, G C; Thomas, C R

    1998-01-01

    Image analysis is now well established in quantifying and characterising microorganisms from fermentation samples. In filamentous fermentations it has become an invaluable tool for characterising complex mycelial morphologies, although it is not yet used extensively in industry. Recent method developments include characterisation of spore germination from the inoculum stage and of the subsequent dispersed and pellet forms. Further methods include characterising vacuolation and simple structural differentiation of mycelia, also from submerged cultures. Image analysis can provide better understanding of the development of mycelial morphology, of the physiological states of the microorganisms in the fermenter, and of their interactions with the fermentation conditions. This understanding should lead to improved design and operation of mycelial fermentations. PMID:9468800

  19. X-ray flat-panel imager (FPI)-based cone-beam volume CT (CBVCT) under a circle-plus-two-arc data acquisition orbit

    NASA Astrophysics Data System (ADS)

    Tang, Xiangyang; Ning, Ruola; Yu, Rongfeng; Conover, David L.

    2001-06-01

    The potential of cone beam volume CT (CBVCT) to improve the data acquisition efficiency for volume tomographic imaging is well recognized. A novel x-ray FPI based CBVCT prototype and its preliminary performance evaluation are presented in this paper. To meet the data sufficiency condition, the CBVCT prototype employs a circle-plus-two-arc orbit accomplished by a tiltable circular gantry. A cone beam filtered back-projection (CB-FBP) algorithm is derived for this data acquisition orbit, which employs a window function in the Radon domain to exclude the redundancy between the Radon information obtained from the circular cone beam (CB) data and that from the arc CB data. The number of projection images along the circular sub-orbit and each arc sub-orbit is 512 and 43, respectively. The reconstruction exactness of the prototype x-ray FPI based CBVCT system is evaluated using a disc phantom in which seven acrylic discs are stacked at fixed intervals. Images reconstructed with this algorithm show that both the contrast and geometric distortion existing in the disc phantom images reconstructed by the Feldkamp algorithm are substantially reduced. Meanwhile, the imaging performance of the prototype, such as modulation transfer function (MTF) and low contrast resolution, are quantitatively evaluated in detail through corresponding phantom studies. Furthermore, the capability of the prototype to reconstruct an ROI within a longitudinally unbounded object is verified. The results obtained from this preliminary performance evaluation encou