Sample records for imaging system called

  1. PACS and teleradiology for on-call support of abdominal imaging

    NASA Astrophysics Data System (ADS)

    Horii, Steven C.; Garra, Brian S.; Mun, Seong K.; Zeman, Robert K.; Levine, Betty A.; Fielding, Robert

    1991-07-01

    One aspect of the Georgetown image management and communications system (IMACS or PACS) is a built-in capability to support teleradiology. Unlike many dedicated teleradiology systems, the support of this capability as a part of PACS means that any acquired images are remotely accessible, not just those specifically input for transmission. Over the past one and one-half years, two radiologists (SCH, BSG) in the abdominal imaging division of the department of radiology have been accumulating experience with teleradiology for on-call support of emergency abdominal imaging, chiefly in ultrasound. As of the time of this writing, use of the system during on-call (one of these attending radiologists primarily responsible) or back-up call (the attending responsible for the Fellow on primary call) has resulted in a marked reduction in the number of times one of them has to drive to the hospital at night or over the weekend. Approximately 80% of the time, use of the teleradiology system obviates having to go in to review a case. The remainder of the time, the radiologist has to perform a procedure (e.g., abscess drainage) or a scan (e.g., complex Doppler study) himself. This paper reviews the system used for teleradiology, how it is electronically and operationally integrated with the PACS, the clinical benefits and disadvantages of this use, and radiologist and referring physician acceptance.

  2. [The eye, the optic system and its anomalies].

    PubMed

    Cohen, S Y

    1993-09-15

    The eye is a perceptive system with extremely complex physiology, although its optical properties can be approximated by those of spherical refracting surfaces (diopters). Various approximations make it possible to reduce the eyeball to a single convex refracting surface. In a normal eye, the image of an object at infinity focuses on the retina; such an eye is called emmetropic. Otherwise, the eye is called ametropic. Several types of ametropia exist. When the image focuses in front of the retina, the eye is said to be myopic. When the image focuses behind the retina, the eye is called hypermetropic (or hyperopic). When the image of an object differs along different focusing axes, the eye is said to be astigmatic.

  3. Digital Images on the DIME

    NASA Technical Reports Server (NTRS)

    2003-01-01

    With NASA on its side, Positive Systems, Inc., of Whitefish, Montana, is veering away from the industry standards defined for producing and processing remotely sensed images. A top developer of imaging products for geographic information system (GIS) and computer-aided design (CAD) applications, Positive Systems is bucking traditional imaging concepts with a cost-effective and time-saving software tool called Digital Images Made Easy (DIME(trademark)). Like piecing a jigsaw puzzle together, DIME can integrate a series of raw aerial or satellite snapshots into a single, seamless panoramic image, known as a 'mosaic.' The 'mosaicked' images serve as useful backdrops to GIS maps - which typically consist of line drawings called 'vectors' - by allowing users to view a multidimensional map that provides substantially more geographic information.

  4. VIEW-Station software and its graphical user interface

    NASA Astrophysics Data System (ADS)

    Kawai, Tomoaki; Okazaki, Hiroshi; Tanaka, Koichiro; Tamura, Hideyuki

    1992-04-01

    VIEW-Station is a workstation-based image processing system which merges the state-of-the-art software environment of Unix with the computing power of a fast image processor. VIEW-Station has a hierarchical software architecture, which facilitates device independence when porting across various hardware configurations, and provides extensibility in the development of application systems. The core image computing language is V-Sugar. V-Sugar provides a set of image-processing datatypes and allows image processing algorithms to be simply expressed, using a functional notation. VIEW-Station provides a hardware-independent window system extension called VIEW-Windows. In terms of GUI (Graphical User Interface), VIEW-Station has two notable aspects. One is to provide various types of GUI as visual environments for image processing execution. Three types of interpreters called μV-Sugar, VS-Shell and VPL are provided. Users may choose whichever they prefer based on their experience and tasks. The other notable aspect is to provide facilities to create GUIs for new applications on the VIEW-Station system. A set of widgets is available for construction of task-oriented GUIs. A GUI builder called VIEW-Kid is developed for WYSIWYG interactive interface design.

  5. Optimisation and evaluation of hyperspectral imaging system using machine learning algorithm

    NASA Astrophysics Data System (ADS)

    Suthar, Gajendra; Huang, Jung Y.; Chidangil, Santhosh

    2017-10-01

    Hyperspectral imaging (HSI), also called imaging spectrometry, originated from remote sensing. Hyperspectral imaging is an emerging imaging modality for medical applications, especially in disease diagnosis and image-guided surgery. HSI acquires a three-dimensional dataset called a hypercube, with two spatial dimensions and one spectral dimension. The spatially resolved spectral information obtained by HSI provides diagnostic information about an object's physiology, morphology, and composition. The present work involves testing and evaluating the performance of a hyperspectral imaging system. The methodology involved manually acquiring reflectance images or scans of the objects, which were cabbage and tomato. The data were then converted to the required format and analyzed using a machine learning algorithm. The machine learning algorithms applied were able to distinguish between the objects present in the hypercube obtained by the scan. It was concluded from the results that the system was working as expected, as shown by the distinct spectra obtained using the machine learning algorithm.
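
    The abstract does not name the classifier. Purely as an illustration of per-pixel spectral classification, the sketch below assumes the hypercube is a NumPy array of shape (rows, cols, bands), that a few labeled training spectra are available, and that scikit-learn's SVM is an acceptable stand-in for the unspecified algorithm.

    ```python
    # Minimal sketch: label every pixel spectrum in a hyperspectral cube.
    # Assumptions (not from the paper): the cube is a NumPy array of shape
    # (rows, cols, bands) and labeled training spectra are available.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    def classify_hypercube(cube, train_spectra, train_labels):
        """Return a (rows, cols) label map for the hypercube."""
        rows, cols, bands = cube.shape
        pixels = cube.reshape(-1, bands)            # one spectrum per row
        model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        model.fit(train_spectra, train_labels)      # e.g. labels: cabbage / tomato
        return model.predict(pixels).reshape(rows, cols)
    ```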

  6. A Freely-Available Authoring System for Browser-Based CALL with Speech Recognition

    ERIC Educational Resources Information Center

    O'Brien, Myles

    2017-01-01

    A system for authoring browser-based CALL material incorporating Google speech recognition has been developed and made freely available for download. The system provides a teacher with a simple way to set up CALL material, including an optional image, sound or video, which will elicit spoken (and/or typed) answers from the user and check them…

  7. Minimizing Barriers in Learning for On-Call Radiology Residents-End-to-End Web-Based Resident Feedback System.

    PubMed

    Choi, Hailey H; Clark, Jennifer; Jay, Ann K; Filice, Ross W

    2018-02-01

    Feedback is an essential part of medical training, where trainees are provided with information regarding their performance and further directions for improvement. In diagnostic radiology, feedback entails a detailed review of the differences between the residents' preliminary interpretation and the attendings' final interpretation of imaging studies. While the on-call experience of independently interpreting complex cases is important to resident education, the more traditional synchronous "read-out" or joint review is impossible due to multiple constraints. Without an efficient method to compare reports, grade discrepancies, convey salient teaching points, and view images, valuable lessons in image interpretation and report construction are lost. We developed a streamlined web-based system, including report comparison and image viewing, to minimize barriers in asynchronous communication between attending radiologists and on-call residents. Our system provides real-time, end-to-end delivery of case-specific and user-specific feedback in a streamlined, easy-to-view format. We assessed quality improvement subjectively through surveys and objectively through participation metrics. Our web-based feedback system improved user satisfaction for both attending and resident radiologists, and increased attending participation, particularly with regard to cases where substantive discrepancies were identified.
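
    The report-comparison step at the heart of such a system can be illustrated with Python's standard difflib. This is a minimal sketch under my own assumptions, not the authors' implementation; the file labels and the discrepancy score are placeholders.

    ```python
    # Minimal sketch: compare a resident's preliminary report with the
    # attending's final report and produce a readable diff for feedback.
    # The labels and the similarity heuristic are illustrative assumptions.
    import difflib

    def report_diff(preliminary: str, final: str) -> str:
        """Return a unified diff between preliminary and final report text."""
        return "\n".join(difflib.unified_diff(
            preliminary.splitlines(), final.splitlines(),
            fromfile="resident_preliminary", tofile="attending_final",
            lineterm=""))

    def similarity(preliminary: str, final: str) -> float:
        """Rough agreement score (1.0 = identical) to help flag discrepancies."""
        return difflib.SequenceMatcher(None, preliminary, final).ratio()
    ```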

  8. Design of a dataway processor for a parallel image signal processing system

    NASA Astrophysics Data System (ADS)

    Nomura, Mitsuru; Fujii, Tetsuro; Ono, Sadayasu

    1995-04-01

    Recently, demands for high-speed signal processing have been increasing, especially in the fields of image data compression, computer graphics, and medical imaging. To achieve sufficient power for real-time image processing, we have been developing parallel signal-processing systems. This paper describes a communication processor called the 'dataway processor', designed for a new scalable parallel signal-processing system. The processor has six high-speed communication links (Dataways), a data-packet routing controller, a RISC CORE, and a DMA controller. Each communication link operates 8 bits in parallel in full-duplex mode at 50 MHz. Moreover, data routing, DMA, and CORE operations are processed in parallel, so sufficient throughput is available for high-speed digital video signals. The processor is designed in a top-down fashion using a CAD system called 'PARTHENON.' The hardware is fabricated using 0.5-micrometer CMOS technology and comprises about 200 K gates.

  9. Potential end-to-end imaging information rate advantages of various alternative communication systems

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1978-01-01

    Various communication systems were considered which are required to transmit both imaging data and a typically error-sensitive class of data called general science/engineering (gse) over a Gaussian channel. The approach jointly treats the imaging and gse transmission problems, allowing comparisons of systems which include various channel coding and data compression alternatives. Actual system comparisons include an Advanced Imaging Communication System (AICS), which exhibits the rather significant potential advantages of sophisticated data compression coupled with powerful yet practical channel coding.

  10. Potential medical applications of TAE

    NASA Technical Reports Server (NTRS)

    Fahy, J. Ben; Kaucic, Robert; Kim, Yongmin

    1986-01-01

    In cooperation with scientists in the University of Washington Medical School, a microcomputer-based image processing system for quantitative microscopy, called DMD1 (Digital Microdensitometer 1), was constructed. In order to make DMD1 transportable to different hosts and image processors, we have been investigating the possibility of rewriting the lower-level portions of the DMD1 software using Transportable Applications Executive (TAE) libraries and subsystems. If successful, we hope to produce a newer version of DMD1, called DMD2, running on an IBM PC/AT under the SCO XENIX System 5 operating system, using any of seven target image processors available in our laboratory. Following this implementation, copies of the system will be transferred to other laboratories with biomedical imaging applications. By integrating those applications into DMD2, we hope to eventually expand our system into a low-cost general-purpose biomedical imaging workstation. This workstation will be useful not only as a self-contained instrument for clinical or research applications, but also as part of a large-scale Digital Imaging Network and Picture Archiving and Communication System (DIN/PACS). Widespread application of these TAE-based image processing and analysis systems should facilitate software exchange and scientific cooperation not only within the medical community, but between the medical and remote sensing communities as well.

  11. Home teleradiology system

    NASA Astrophysics Data System (ADS)

    Komo, Darmadi; Garra, Brian S.; Freedman, Matthew T.; Mun, Seong K.

    1997-05-01

    The Home Teleradiology Server system has been developed and installed at the Department of Radiology, Georgetown University Medical Center. The main purpose of the system is to provide a service for on-call physicians to view patients' medical images at home during off-hours. This service reduces the overhead time required by on-call physicians to travel to the hospital, thereby increasing the efficiency of patient care and improving the total quality of health care. Typically, when a new case is conducted, the medical images generated by CT, US, and/or MRI modalities are transferred to a central server at the hospital via DICOM messages over an existing hospital network. The server has a DICOM network agent that listens for DICOM messages sent by the CT, US, and MRI modalities and stores them into separate DICOM files for sending purposes. The server also has general-purpose, flexible scheduling software that can be configured to send image files to specific user(s) at certain times on any day(s) of the week. The server then distributes the medical images to on-call physicians' homes via a high-speed modem. All file transmissions occur in the background, without human interaction, once the scheduling software has been configured. At the receiving end, the physicians' computers are high-end workstations with high-speed modems to receive the medical images sent by the central server from the hospital, and DICOM-compatible viewer software to view the transmitted medical images in DICOM format. A technician from the hospital will notify the physician(s) after all the image files have been completely sent. The physician(s) will then examine the medical images and decide whether it is necessary to travel to the hospital for further examination of the patients. Overall, the Home Teleradiology system provides on-call physicians with a cost-effective and convenient environment for viewing patients' medical images at home.
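
    The DICOM "store" listener described above can be sketched with the modern pynetdicom library; this is an assumption about tooling (pynetdicom is not the software used in the 1997 system), and the AE title, port, and file-naming scheme are placeholders.

    ```python
    # Minimal sketch of a DICOM store listener of the kind described above,
    # written with the modern pynetdicom library (an illustrative assumption,
    # not the original system's software).
    from pynetdicom import AE, evt, StoragePresentationContexts

    def handle_store(event):
        """Save each received image as a DICOM file for later forwarding."""
        ds = event.dataset
        ds.file_meta = event.file_meta
        ds.save_as(f"{ds.SOPInstanceUID}.dcm", write_like_original=False)
        return 0x0000  # DICOM Success status

    ae = AE(ae_title="HOME_TELERAD")                 # placeholder AE title
    ae.supported_contexts = StoragePresentationContexts
    ae.start_server(("0.0.0.0", 11112),              # placeholder port
                    evt_handlers=[(evt.EVT_C_STORE, handle_store)])
    ```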

  12. The properties of borderlines in discontinuous conservative systems

    NASA Astrophysics Data System (ADS)

    Wang, X.-M.; Fang, Z.-J.

    2006-02-01

    The properties of the set of images of the borderline in discontinuous conservative systems are investigated. The invertible system in which a stochastic web was found in 1999 is re-discussed here. The result shows that the set of images of the borderline actually forms the same stochastic web. The web has two typical local fine structures. Firstly, in some parts of the web the borderline crosses the manifold of hyperbolic points, so that the chaotic diffusion is damped greatly; secondly, in other parts of phase space many holes and elliptic islands appear in the stochastic layer. This local structure shows infinite self-similarity. The noninvertible system in which the so-called chaotic quasi-attractor was found in [X.-M. Wang et al., Eur. Phys. J. D 19, 119 (2002)] is also studied here. The numerical investigation shows that such a chaotic quasi-attractor is confined by the preceding lower-order images of the borderline. The mechanism of this confinement is revealed: there exists a forbidden zone that no orbit can visit, namely the sub-phase space on one side of the first image of the borderline. Each order of the images of the forbidden zone can be qualitatively divided into two sub-phase regions: one is the so-called escaping region, which provides the orbit with an escaping channel; the other is the so-called dissipative region, where the contraction of phase space occurs.

  13. Advances in Gamma-Ray Imaging with Intensified Quantum-Imaging Detectors

    NASA Astrophysics Data System (ADS)

    Han, Ling

    Nuclear medicine, an important branch of modern medical imaging, is an essential tool for both diagnosis and treatment of disease. As the fundamental element of nuclear medicine imaging, the gamma camera is able to detect gamma-ray photons emitted by radiotracers injected into a patient and form an image of the radiotracer distribution, reflecting biological functions of organs or tissues. Recently, an intensified CCD/CMOS-based quantum detector, called iQID, was developed at the Center for Gamma-Ray Imaging. Originally designed as a novel type of gamma camera, iQID demonstrated ultra-high spatial resolution (<100 microns) and many other advantages over traditional gamma cameras. This work focuses on advancing this conceptually proven gamma-ray imaging technology to make it ready for both preclinical and clinical applications. To start with, a Monte Carlo simulation of the key light-intensification device, the image intensifier, was developed, which revealed the dominating factor(s) that limit the energy resolution performance of iQID cameras. For preclinical imaging applications, a previously developed iQID-based single-photon-emission computed-tomography (SPECT) system, called FastSPECT III, was fully advanced in terms of data acquisition software, system sensitivity and effective FOV by developing and adopting a new photon-counting algorithm, thicker columnar scintillation detectors, and a system calibration method. Originally designed for mouse brain imaging, the system is now able to provide full-body mouse imaging with sub-350-micron spatial resolution. To further advance the iQID technology toward clinical imaging applications, a novel large-area iQID gamma camera, called LA-iQID, was developed from concept to prototype. Sub-mm system resolution in an effective FOV of 188 mm x 188 mm has been achieved. The camera architecture, system components, design and integration, data acquisition, camera calibration, and performance evaluation are presented in this work. Mounted on a counter-weighted clinical cart with casters, the camera also features portable and mobile capabilities for easy handling and on-site applications at remote locations where hospital facilities are not available.

  14. Brown Dwarfs in our Backyard

    NASA Image and Video Library

    2014-03-07

    The third-closest star system to the Sun, called WISE J104915.57-531906, is at the center of the larger image, which was taken by NASA's WISE. It appeared to be a single object, but a sharper image from the Gemini Observatory revealed that it is a binary star system.

  15. A system for verifying models and classification maps by extraction of information from a variety of data sources

    NASA Technical Reports Server (NTRS)

    Norikane, L.; Freeman, A.; Way, J.; Okonek, S.; Casey, R.

    1992-01-01

    Recent updates to a geographical information system (GIS) called VICAR (Video Image Communication and Retrieval)/IBIS are described. The system is designed to handle data from many different formats (vector, raster, tabular) and many different sources (models, radar images, ground truth surveys, optical images). All the data are referenced to a single georeference plane, and average or typical values for parameters defined within a polygonal region are stored in a tabular file, called an info file. The info file format allows tracking of data in time, maintenance of links between component data sets and the georeference image, conversion of pixel values to 'actual' values (e.g., radar cross-section, luminance, temperature), graph plotting, data manipulation, generation of training vectors for classification algorithms, and comparison between actual measurements and model predictions (with ground truth data as input).

  16. Quality based approach for adaptive face recognition

    NASA Astrophysics Data System (ADS)

    Abboud, Ali J.; Sellahewa, Harin; Jassim, Sabah A.

    2009-05-01

    Recent advances in biometric technology have pushed towards more robust and reliable systems. We aim to build systems that have low recognition errors and are less affected by variation in recording conditions. Recognition errors are often attributed to the usage of low-quality biometric samples. Hence, there is a need to develop new intelligent techniques and strategies to automatically measure/quantify the quality of biometric image samples and, if necessary, restore image quality according to the needs of the intended application. In this paper, we present no-reference image quality measures in the spatial domain that have an impact on face recognition. The first is called the symmetrical adaptive local quality index (SALQI) and the second is called middle halve (MH). Also, an adaptive strategy has been developed to select the best way to restore the image quality, called symmetrical adaptive histogram equalization (SAHE). The main benefits of using quality measures for the adaptive strategy are: (1) avoidance of excessive unnecessary enhancement procedures that may cause undesired artifacts, and (2) reduced computational complexity, which is essential for real-time applications. We test the success of the proposed measures and adaptive approach for a wavelet-based face recognition system that uses the nearest-neighbor classifier. We demonstrate noticeable improvements in the performance of the adaptive face recognition system over the corresponding non-adaptive scheme.
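
    SALQI, MH, and SAHE are specific to this paper. As a generic illustration of the adaptive strategy only, the sketch below measures a crude stand-in quality score and enhances an image only when that score falls below a threshold; the contrast-based score, the threshold value, and the simple contrast stretch are my assumptions, not the authors' measures.

    ```python
    # Generic sketch of the adaptive idea: restore an image only when a
    # no-reference quality score says it is needed.  The score, threshold,
    # and contrast stretch are stand-ins, not the paper's SALQI/MH/SAHE.
    import numpy as np

    def quality_score(img):
        """Crude no-reference proxy: global RMS contrast of an 8-bit image."""
        return float(img.astype(float).std() / 255.0)

    def contrast_stretch(img):
        """Map the image's intensity range onto the full 8-bit range."""
        lo, hi = int(img.min()), int(img.max())
        return ((img.astype(float) - lo) / max(hi - lo, 1) * 255).astype(np.uint8)

    def adaptive_preprocess(img, threshold=0.15):
        """Enhance only low-quality samples, avoiding unnecessary processing."""
        return contrast_stretch(img) if quality_score(img) < threshold else img
    ```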

  17. Computational and design methods for advanced imaging

    NASA Astrophysics Data System (ADS)

    Birch, Gabriel C.

    This dissertation merges the optical design and computational aspects of imaging systems to create novel devices that solve engineering problems in optical science, and attempts to expand the solution space available to the optical designer. This dissertation is divided into two parts: the first discusses a new active-illumination depth sensing modality, while the second part discusses a passive-illumination system called plenoptic, or lightfield, imaging. The new depth sensing modality introduced in part one is called depth through controlled aberration. This technique illuminates a target with a known, aberrated projected pattern and takes an image using a traditional, unmodified imaging system. Knowing how the added aberration in the projected pattern changes as a function of depth, we are able to quantitatively determine the depth of a series of points from the camera. A major advantage this method permits is the ability for the illumination and imaging axes to be coincident. Plenoptic cameras capture both spatial and angular data simultaneously. This dissertation presents a new set of parameters that permit the design and comparison of plenoptic devices outside the traditionally published plenoptic 1.0 and plenoptic 2.0 configurations. Additionally, a series of engineering advancements are presented, including full system raytraces of raw plenoptic images, Zernike compression techniques for raw image files, and non-uniform lenslet arrays to compensate for plenoptic system aberrations. Finally, a new snapshot imaging spectrometer is proposed based on the plenoptic configuration.

  18. Directional ratio based on parabolic molecules and its application to the analysis of tubular structures

    NASA Astrophysics Data System (ADS)

    Labate, Demetrio; Negi, Pooran; Ozcan, Burcin; Papadakis, Manos

    2015-09-01

    As advances in imaging technologies make more and more data available for biomedical applications, there is an increasing need to develop efficient quantitative algorithms for the analysis and processing of imaging data. In this paper, we introduce an innovative multiscale approach called the Directional Ratio, which is especially effective at distinguishing isotropic from anisotropic structures. This task is especially useful in the analysis of images of neurons, the main units of the nervous system, which consist of a main cell body called the soma and many elongated processes called neurites. We analyze the theoretical properties of our method on idealized models of neurons and develop a numerical implementation of this approach for the analysis of fluorescent images of cultured neurons. We show that this algorithm is very effective for the detection of somas and the extraction of neurites in images of small circuits of neurons.

  19. Video movie making using remote procedure calls and 4BSD Unix sockets on Unix, UNICOS, and MS-DOS systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, D.W.; Johnston, W.E.; Hall, D.E.

    1990-03-01

    We describe the use of the Sun Remote Procedure Call and Unix socket interprocess communication mechanisms to provide the network transport for a distributed, client-server based, image handling system. Clients run under Unix or UNICOS and servers run under Unix or MS-DOS. The use of remote procedure calls across local or wide-area networks to make video movies is addressed.
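
    As a present-day analogue of the RPC-based transport described above (the original used Sun RPC over 4BSD sockets), the sketch below moves a single movie frame from a client to a server with Python's standard xmlrpc modules; the procedure name, port, and file handling are illustrative assumptions rather than the original system's design.

    ```python
    # Modern stdlib analogue of the client-server image transport described
    # above.  The method name, port, and file naming are placeholders.
    from xmlrpc.server import SimpleXMLRPCServer
    import xmlrpc.client

    def store_frame(name, data):
        """Server-side procedure: write one received movie frame to disk."""
        with open(name, "wb") as f:
            f.write(data.data)          # xmlrpc.client.Binary wraps raw bytes
        return True

    def run_server(port=8000):
        server = SimpleXMLRPCServer(("0.0.0.0", port), allow_none=True)
        server.register_function(store_frame)
        server.serve_forever()

    def send_frame(host, name, raw_bytes, port=8000):
        """Client-side remote procedure call delivering one frame."""
        proxy = xmlrpc.client.ServerProxy(f"http://{host}:{port}/")
        return proxy.store_frame(name, xmlrpc.client.Binary(raw_bytes))
    ```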

  20. Mariner 9 Anniversary/Landslides on Mars Released 13 November 2002

    NASA Image and Video Library

    2002-11-15

    This canyon system, imaged here by NASA's Mars Odyssey, was named Valles Marineris in honor of its discoverer, NASA's Mariner 9 spacecraft. The image covers a portion of the canyon system called Melas Chasma. http://photojournal.jpl.nasa.gov/catalog/PIA04003

  1. Correction And Use Of Jitter In Television Images

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Fender, Derek H.; Fender, Antony R. H.

    1989-01-01

    Proposed system stabilizes jittering television image and/or measures jitter to extract information on motions of objects in image. Alternative version, system controls lateral motion on camera to generate stereoscopic views to measure distances to objects. In another version, motion of camera controlled to keep object in view. Heart of system is digital image-data processor called "jitter-miser", which includes frame buffer and logic circuits to correct for jitter in image. Signals from motion sensors on camera sent to logic circuits and processed into corrections for motion along and across line of sight.

  2. Cerberus Fossae

    NASA Image and Video Library

    2014-01-24

    The fractures in this image are part of a large system of fractures called Cerberus Fossae. Athabasca Valles is visible in the lower right corner of the image, as seen by NASA's 2001 Mars Odyssey spacecraft.

  3. Virtual microscopy in a veterinary curriculum.

    PubMed

    Sims, Michael H; Mendis-Handagama, Chamindrani; Moore, Robert N

    2007-01-01

    Teaching faculty in the University of Tennessee College of Veterinary Medicine assist students in their professional education by providing a new way of viewing microscopic slides digitally. Faculty who teach classes in which glass slides are used participate in a program called Virtual Microscopy. Glass slides are digitized using a state-of-the-art integrated system, and a personal computer functions as the "microscope." Additionally, distribution of the interactive images is enhanced because they are available to students online. The digital slide offers quality and resolution equivalent to the original glass slide viewed on a microscope and has several additional advantages over microscopes. Students can choose to examine the entire slide at any of several objectives; they are able to access the slides (called WebSlides) from the college's server, using either Internet Explorer or a special browser developed by Bacus Laboratories, Inc., called the WebSlide browser, which lets the student simultaneously view a low-objective image and one or two high-objective images of the same slide. The student can "move the slide" by clicking and dragging the image to a new location. Easy archiving, annotation of images, and Web conferencing are additional features of the system.

  4. Non-photorealistic rendering of virtual implant models for computer-assisted fluoroscopy-based surgical procedures

    NASA Astrophysics Data System (ADS)

    Zheng, Guoyan

    2007-03-01

    Surgical navigation systems visualize the positions and orientations of surgical instruments and implants as graphical overlays onto a medical image of the operated anatomy on a computer monitor. Orthopaedic surgical navigation systems can be categorized according to the image modalities that are used for the visualization of the surgical action. In so-called CT-based systems or 'surgeon-defined anatomy'-based systems, where a 3D volume or surface representation of the operated anatomy can be constructed from preoperatively acquired tomographic data or through intraoperatively digitized anatomical landmarks, photorealistic rendering of the surgical action has been identified to greatly improve the usability of these navigation systems. However, this may not hold true when the virtual representation of surgical instruments and implants is superimposed onto 2D projection images in a fluoroscopy-based navigation system, due to the so-called image occlusion problem. Image occlusion occurs when the field of view of the fluoroscopic image is occupied by the virtual representation of surgical implants or instruments. In these situations, the surgeon may miss part of the image details, even if transparency and/or wire-frame rendering is used. In this paper, we propose to use non-photorealistic rendering to overcome this difficulty. Laboratory testing results on foamed plastic bones during various computer-assisted fluoroscopy-based surgical procedures, including total hip arthroplasty and long bone fracture reduction and osteosynthesis, are shown.

  5. AHPCRC (Army High Performance Computing Research Center) Bulletin. Volume 3, Issue 1

    DTIC Science & Technology

    2011-01-01

    The rotating reflector antenna associated with airport traffic control systems is ... batteries and phased-array antennas. Power and efficiency studies evaluate on-board HPC systems and advanced image processing applications. 2010 marked ... giving way in some applications to a newer technology called the phased array antenna system (sometimes called a beamformer).

  6. Hyperspectral Remote Sensing of Atmospheric Profiles from Satellites and Aircraft

    NASA Technical Reports Server (NTRS)

    Smith, W. L.; Zhou, D. K.; Harrison, F. W.; Revercomb, H. E.; Larar, A. M.; Huang, H. L.; Huang, B.

    2001-01-01

    A future hyperspectral resolution remote imaging and sounding system, called the GIFTS (Geostationary Imaging Fourier Transform Spectrometer), is described. An airborne system, which produces the type of hyperspectral resolution sounding data to be achieved with the GIFTS, has been flown on high altitude aircraft. Results from simulations and from the airborne measurements are presented to demonstrate the revolutionary remote sounding capabilities to be realized with future satellite hyperspectral remote imaging/sounding systems.

  7. Are Your Bowels Moving?

    MedlinePlus

    ... also called the large intestine (say: in-TESS-tin), are the lower parts of your digestive system. ...

  8. Directional x-ray dark-field imaging of strongly ordered systems

    NASA Astrophysics Data System (ADS)

    Jensen, Torben Haugaard; Bech, Martin; Zanette, Irene; Weitkamp, Timm; David, Christian; Deyhle, Hans; Rutishauser, Simon; Reznikova, Elena; Mohr, Jürgen; Feidenhans'L, Robert; Pfeiffer, Franz

    2010-12-01

    Recently a novel grating based x-ray imaging approach called directional x-ray dark-field imaging was introduced. Directional x-ray dark-field imaging yields information about the local texture of structures smaller than the pixel size of the imaging system. In this work we extend the theoretical description and data processing schemes for directional dark-field imaging to strongly scattering systems, which could not be described previously. We develop a simple scattering model to account for these recent observations and subsequently demonstrate the model using experimental data. The experimental data includes directional dark-field images of polypropylene fibers and a human tooth slice.

  9. Anomaly-Based Intrusion Detection Systems Utilizing System Call Data

    DTIC Science & Technology

    2012-03-01

    Camouflage of the malware image: renaming its image; appending its image to victim ... a particular industrial plant. Exactly which one was targeted still remains unknown; however, a majority of the attacks took place in Iran [24]. Due ... the plant to an unstable phase and eventually physical damage. It is interesting to note that a particular block of code, block DB8061, is automatically ...

  10. Ophir Planum

    NASA Image and Video Library

    2002-07-17

    This image from NASA's Mars Odyssey spacecraft shows a region of Mars called Ophir Planum. The Valles Marineris system of canyons, which stretches for thousands of kilometers across Mars, is located just south of the area covered in the image.

  11. Advanced sensor-simulation capability

    NASA Astrophysics Data System (ADS)

    Cota, Stephen A.; Kalman, Linda S.; Keller, Robert A.

    1990-09-01

    This paper provides an overview of an advanced simulation capability currently in use for analyzing visible and infrared sensor systems. The software system, called VISTAS (Visible/Infrared Sensor Trades, Analyses, and Simulations), combines classical image processing techniques with detailed sensor models to produce static and time-dependent simulations of a variety of sensor systems, including imaging, tracking, and point-target detection systems. Systems modelled to date include space-based scanning line-array sensors as well as staring 2-dimensional array sensors, which can be used for either imaging or point source detection.

  12. A New Digital Holographic Instrument for Measuring Microphysical Properties of Contrails in the SASS (Subsonic Assessment) Program

    NASA Technical Reports Server (NTRS)

    Lawson, R. Paul

    2000-01-01

    SPEC Incorporated designed, built and operated a new instrument, called a pi-Nephelometer, on the NASA DC-8 for the SUCCESS field project. The pi-Nephelometer casts an image of a particle onto a 400,000-pixel solid-state camera by freezing the motion of the particle using a 25 ns pulsed, high-power (60 W) laser diode. Unique optical imaging and particle detection systems precisely detect particles and define the depth of field so that at least one particle in the image is almost always in focus. A powerful image processing engine processes frames from the solid-state camera, and identifies and records regions of interest (i.e., particle images) in real time. Images of ice crystals are displayed and recorded with 5 micron pixel resolution. In addition, a scattered-light system simultaneously measures the scattering phase function of the imaged particle. The system consists of twenty-eight 1-mm optical fibers connected to microlenses bonded on the surface of avalanche photodiodes (APDs). Data collected with the pi-Nephelometer during the SUCCESS field project were reported in a special issue of Geophysical Research Letters. The pi-Nephelometer provided the basis for development of a commercial imaging probe, called the cloud particle imager (CPI), which has been installed on several research aircraft and used in more than a dozen field programs.

  13. Flow Visualization Studies in the Novacor Left Ventricular Assist System CRADA PC91-002, Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borovetz, H.S.; Shaffer, F.; Schaub, R.

    This paper discusses a series of experiments to visualize and measure flow fields in the Novacor left ventricular assist system (LVAS). The experiments utilize a multiple-exposure optical imaging technique called fluorescent image tracking velocimetry (FITV) to track the motion of small, neutrally buoyant particles in a flowing fluid.

  14. Differential morphology and image processing.

    PubMed

    Maragos, P

    1996-01-01

    Image processing via mathematical morphology has traditionally used geometry to intuitively understand morphological signal operators and set or lattice algebra to analyze them in the space domain. We provide a unified view and analytic tools for morphological image processing that is based on ideas from differential calculus and dynamical systems. This includes ideas on using partial differential or difference equations (PDEs) to model distance propagation or nonlinear multiscale processes in images. We briefly review some nonlinear difference equations that implement discrete distance transforms and relate them to numerical solutions of the eikonal equation of optics. We also review some nonlinear PDEs that model the evolution of multiscale morphological operators and use morphological derivatives. Among the new ideas presented, we develop some general 2-D max/min-sum difference equations that model the space dynamics of 2-D morphological systems (including the distance computations) and some nonlinear signal transforms, called slope transforms, that can analyze these systems in a transform domain in ways conceptually similar to the application of Fourier transforms to linear systems. Thus, distance transforms are shown to be bandpass slope filters. We view the analysis of the multiscale morphological PDEs and of the eikonal PDE solved via weighted distance transforms as a unified area in nonlinear image processing, which we call differential morphology, and briefly discuss its potential applications to image processing and computer vision.
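
    For reference, the continuous models mentioned above take the following standard textbook forms, written here in generic notation rather than the paper's own derivation: the multiscale flat dilation/erosion PDE and the eikonal equation whose weighted-distance solutions relate to distance transforms.

    ```latex
    % Multiscale flat dilation (+) / erosion (-) of an image f by a disk,
    % with u(x, y, 0) = f(x, y) and scale parameter t:
    \frac{\partial u}{\partial t} = \pm\,\lVert \nabla u \rVert
    % Eikonal equation; \eta(x, y) is a slowness (refractive-index) field
    % and T is the weighted-distance (arrival-time) function:
    \lVert \nabla T(x, y) \rVert = \eta(x, y)
    ```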

  15. PIRIA: a general tool for indexing, search, and retrieval of multimedia content

    NASA Astrophysics Data System (ADS)

    Joint, Magali; Moellic, Pierre-Alain; Hede, P.; Adam, P.

    2004-05-01

    The Internet is a continuously expanding source of multimedia content and information. There are many products in development to search, retrieve, and understand multimedia content, but most current image search/retrieval engines rely on an image database manually pre-indexed with keywords. Computers are still powerless to understand the semantic meaning of still or animated image content. Piria (Program for the Indexing and Research of Images by Affinity), the search engine we have developed, brings this possibility closer to reality. Piria is a novel search engine that uses the query-by-example method. A user query is submitted to the system, which then returns a list of images ranked by similarity, obtained by a metric distance that operates on every indexed image signature. The indexed images are compared according to several different classifiers, not only Keywords but also Form, Color and Texture, taking into account geometric transformations and variations such as rotation, symmetry and mirroring. Form - edges extracted by an efficient segmentation algorithm. Color - histogram, semantic color segmentation and spatial color relationships. Texture - texture wavelets and local edge patterns. If required, Piria is also able to fuse results from multiple classifiers with a new classification of index categories: Single Indexer Single Call (SISC), Single Indexer Multiple Call (SIMC), Multiple Indexers Single Call (MISC) or Multiple Indexers Multiple Call (MIMC). Commercial and industrial applications will be explored and discussed, as well as current and future development.
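
    As an illustration of the query-by-example loop described above (not PIRIA's actual indexers), the sketch below uses a plain color histogram as the image signature and an L1 metric to rank database images against a query; both choices are stand-in assumptions.

    ```python
    # Minimal sketch of query-by-example ranking: a joint RGB histogram
    # stands in for PIRIA's form/color/texture signatures, and L1 distance
    # stands in for its metric.  Both are illustrative assumptions.
    import numpy as np

    def signature(img, bins=8):
        """Joint RGB histogram of an (H, W, 3) uint8 image, L1-normalised."""
        hist, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                                 range=((0, 256),) * 3)
        return hist.ravel() / hist.sum()

    def rank_by_similarity(query_img, database_imgs):
        """Return database indices sorted from most to least similar."""
        q = signature(query_img)
        dists = [np.abs(q - signature(img)).sum() for img in database_imgs]
        return np.argsort(dists)
    ```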

  16. Scene of Multiple Explosions

    NASA Image and Video Library

    2007-03-07

    This composite image from NASA's Galaxy Evolution Explorer shows Z Camelopardalis, or Z Cam, a double-star system featuring a collapsed, dead star, called a white dwarf, and a companion star, as well as a ghostly shell around the system.

  17. End-to-end imaging information rate advantages of various alternative communication systems

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1982-01-01

    The efficiency of various deep space communication systems which are required to transmit both imaging data and a typically error-sensitive class of data called general science and engineering (gse) is compared. The approach jointly treats the imaging and gse transmission problems, allowing comparisons of systems which include various channel coding and data compression alternatives. Actual system comparisons include an advanced imaging communication system (AICS), which exhibits the rather significant advantages of sophisticated data compression coupled with powerful yet practical channel coding. For example, under certain conditions the improved AICS efficiency could provide as much as a two-order-of-magnitude increase in imaging information rate compared to a single-channel uncoded, uncompressed system while maintaining the same gse data rate in both systems. Additional details describing AICS compression and coding concepts, as well as efforts to apply them, are provided in support of the system analysis.

  18. Computer-Aided Remote Driving

    NASA Technical Reports Server (NTRS)

    Wilcox, Brian H.

    1994-01-01

    System for remote control of robotic land vehicle requires only small radio-communication bandwidth. Twin video cameras on vehicle create stereoscopic images. Operator views cross-polarized images on two cathode-ray tubes through correspondingly polarized spectacles. By use of cursor on frozen image, remote operator designates path. Vehicle proceeds to follow path, by use of limited degree of autonomous control to cope with unexpected conditions. System concept, called "computer-aided remote driving" (CARD), potentially useful in exploration of other planets, military surveillance, firefighting, and clean-up of hazardous materials.

  19. Color Histogram Diffusion for Image Enhancement

    NASA Technical Reports Server (NTRS)

    Kim, Taemin

    2011-01-01

    Various color histogram equalization (CHE) methods have been proposed to extend grayscale histogram equalization (GHE) to color images. In this paper a new method called histogram diffusion, which extends the GHE method to arbitrary dimensions, is proposed. Ranges in a histogram are specified as overlapping bars of uniform height and variable width, with widths proportional to their frequencies. This diagram is called the vistogram. As an alternative approach to GHE, the squared error of the vistogram from the uniform distribution is minimized. Each bar in the vistogram is approximated by a Gaussian function. Gaussian particles in the vistogram diffuse as a nonlinear autonomous system of ordinary differential equations. CHE results on color images showed that the approach is effective.
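
    As a reference point for the method above, the sketch below implements the grayscale histogram equalization (GHE) baseline that histogram diffusion generalizes; the color/vistogram extension itself is the paper's contribution and is not reproduced here.

    ```python
    # Reference point only: plain grayscale histogram equalization (GHE),
    # the baseline that histogram diffusion extends to arbitrary dimensions.
    import numpy as np

    def grayscale_histogram_equalization(img):
        """Equalize an 8-bit grayscale image via its cumulative histogram."""
        hist = np.bincount(img.ravel(), minlength=256)
        cdf = hist.cumsum().astype(float)
        lut = np.round(255 * cdf / cdf[-1]).astype(np.uint8)  # gray-level map
        return lut[img]
    ```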

  20. Object and image retrieval over the Internet

    NASA Astrophysics Data System (ADS)

    Gilles, Sebastien; Winter, A.; Feldmar, J.; Poirier, N.; Bousquet, R.; Bussy, B.; Lamure, H.; Demarty, C.-H.; Nastar, Chahab

    2000-12-01

    In this article, we describe some of the work that was carried out at LookThatUp to design an infrastructure enabling image-based search over the Internet. The service was designed to be remotely accessible and easily integrated into partner sites. One application of the technology, called Image-Shopper, is described and demonstrated. The technological basis of the system is then reviewed.

  1. The New Maia Detector System: Methods For High Definition Trace Element Imaging Of Natural Material

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryan, C. G.; School of Physics, University of Melbourne, Parkville VIC; CODES Centre of Excellence, University of Tasmania, Hobart TAS

    2010-04-06

    Motivated by the need for megapixel high definition trace element imaging to capture intricate detail in natural material, together with faster acquisition and improved counting statistics in elemental imaging, a large energy-dispersive detector array called Maia has been developed by CSIRO and BNL for SXRF imaging on the XFM beamline at the Australian Synchrotron. A 96 detector prototype demonstrated the capacity of the system for real-time deconvolution of complex spectral data using an embedded implementation of the Dynamic Analysis method and acquiring highly detailed images up to 77 M pixels spanning large areas of complex mineral sample sections.

  2. The New Maia Detector System: Methods For High Definition Trace Element Imaging Of Natural Material

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryan, C.G.; Siddons, D.P.; Kirkham, R.

    2010-05-25

    Motivated by the need for megapixel high definition trace element imaging to capture intricate detail in natural material, together with faster acquisition and improved counting statistics in elemental imaging, a large energy-dispersive detector array called Maia has been developed by CSIRO and BNL for SXRF imaging on the XFM beamline at the Australian Synchrotron. A 96 detector prototype demonstrated the capacity of the system for real-time deconvolution of complex spectral data using an embedded implementation of the Dynamic Analysis method and acquiring highly detailed images up to 77 M pixels spanning large areas of complex mineral sample sections.

  3. Informatics in radiology: web-based preliminary reporting system for radiology residents with PACS integration.

    PubMed

    O'Connell, Timothy; Chang, Debra

    2012-01-01

    While on call, radiology residents review imaging studies and issue preliminary reports to referring clinicians. In the absence of an integrated reporting system at the training sites of the authors' institution, residents were typing and faxing preliminary reports. To partially automate the on-call resident workflow, a Web-based system for resident reporting was developed by using the free open-source xAMP Web application framework and an open-source DICOM (Digital Imaging and Communications in Medicine) software toolkit, with the goals of reducing errors and lowering barriers to education. This reporting system integrates with the picture archiving and communication system to display a worklist of studies. Patient data are automatically entered in the preliminary report to prevent identification errors and simplify the report creation process. When the final report for a resident's on-call study is available, the reporting system queries the report broker for the final report, and then displays the preliminary report side by side with the final report, thus simplifying the review process and encouraging review of all of the resident's reports. The xAMP Web application framework should be considered for development of radiology department informatics projects owing to its zero cost, minimal hardware requirements, ease of programming, and large support community.

  4. Part II: preparing and assessing first-year radiology resident on-call readiness technical implementation.

    PubMed

    Yam, Chun-Shan; Kruskal, Jonathan; Pedrosa, Ivan; Kressel, Herbert

    2006-06-01

    The effectiveness of using a Digital Imaging and Communications in Medicine (DICOM)-based interactive examination system in evaluating the readiness of first year radiology residents before taking overnight call in the emergency department (ED) was reported in part I of this article. This report describes technical aspects for the design and implementation of this system. The examination system consists of two modules: Data Collection and Image Viewing. The Data Collection module was a personal computer (PC)-based DICOM storage server based on a free public domain software package, the Mallinckrodt Central Test Node. The Image Viewing module was a Java-based DICOM viewer created using another freeware package: zDicom ActiveX component. The examination takes place once a year at the end of the first 6-month rotation. Cases selected for the examination were actual clinical cases according to the American Society of Emergency Radiology core curriculum. In the 3-hour timed examination, each resident was required to read the cases and provide clinical findings and recommendations. Upper-level residents also participated in the examination to serve as a control. Answers were scored by two staff radiologists. We have been using this examination system successfully in our institution since 2003 to evaluate the readiness of the first-year residents before they take overnight call in the ED. This report describes a step-by-step procedure for implementing this system into a PC-based platform. This DICOM viewing software is available as freeware to other academic radiology institutions. The total cost for implementing this system is approximately 2000 US dollars.

  5. Novel optical scanning cryptography using Fresnel telescope imaging.

    PubMed

    Yan, Aimin; Sun, Jianfeng; Hu, Zhijuan; Zhang, Jingtao; Liu, Liren

    2015-07-13

    We propose a new method called modified optical scanning cryptography using Fresnel telescope imaging technique for encryption and decryption of remote objects. An image or object can be optically encrypted on the fly by Fresnel telescope scanning system together with an encryption key. For image decryption, the encrypted signals are received and processed with an optical coherent heterodyne detection system. The proposed method has strong performance through use of secure Fresnel telescope scanning with orthogonal polarized beams and efficient all-optical information processing. The validity of the proposed method is demonstrated by numerical simulations and experimental results.

  6. A statistical, task-based evaluation method for three-dimensional x-ray breast imaging systems using variable-background phantoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Subok; Jennings, Robert; Liu Haimo

    Purpose: For the last few years, development and optimization of three-dimensional (3D) x-ray breast imaging systems, such as digital breast tomosynthesis (DBT) and computed tomography, have drawn much attention from the medical imaging community, in both academia and industry. However, there is still much room for understanding how to best optimize and evaluate the devices over a large space of many different system parameters and geometries. Current evaluation methods, which work well for 2D systems, do not incorporate the depth information from the 3D imaging systems. Therefore, it is critical to develop a statistically sound evaluation method to investigate the usefulness of including depth and background-variability information in the assessment and optimization of the 3D systems. Methods: In this paper, we present a mathematical framework for a statistical assessment of planar and 3D x-ray breast imaging systems. Our method is based on statistical decision theory, in particular, making use of the ideal linear observer called the Hotelling observer. We also present a physical phantom that consists of spheres of different sizes and materials for producing an ensemble of randomly varying backgrounds to be imaged for a given patient class. Lastly, we demonstrate our evaluation method by comparing laboratory mammography and three-angle DBT systems for signal detection tasks using the phantom's projection data. We compare the variable-phantom case to that of a phantom of the same dimensions filled with water, which we call the uniform phantom, based on the performance of the Hotelling observer as a function of signal size and intensity. Results: Detectability trends calculated using the variable and uniform phantom methods are different from each other for both mammography and DBT systems. Conclusions: Our results indicate that measuring the system's detection performance with consideration of background variability may lead to differences in system performance estimates and comparisons. For the assessment of 3D systems, to accurately determine trade-offs between image quality and radiation dose, it is critical to incorporate randomness arising from the imaging chain, including background variability, into system performance calculations.
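
    The Hotelling observer used above has a standard closed form, reproduced here in generic notation (g is the image data vector, the two class means correspond to signal-absent and signal-present backgrounds, and K is the data covariance including background variability); this is the textbook definition rather than anything specific to the phantom study.

    ```latex
    % Hotelling template, test statistic, and detectability (generic notation)
    w_{\mathrm{Hot}} = K^{-1}\left(\bar{g}_2 - \bar{g}_1\right), \qquad
    t(g) = w_{\mathrm{Hot}}^{\top} g, \qquad
    \mathrm{SNR}^2_{\mathrm{Hot}} =
        \left(\bar{g}_2 - \bar{g}_1\right)^{\top} K^{-1}
        \left(\bar{g}_2 - \bar{g}_1\right)
    ```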

  7. Multirotor micro air vehicle autonomous landing system based on image markers recognition

    NASA Astrophysics Data System (ADS)

    Skoczylas, Marcin; Gadomer, Lukasz; Walendziuk, Wojciech

    2017-08-01

    In this paper the idea of an autonomous drone landing system based on the detection of different markers is presented. Safe autonomous landing is one of the major issues connected with drone missions. The idea of the proposed system is to detect the landing place, marked with an image called a marker, using an image recognition algorithm, and to head toward this place during the landing procedure. Choosing the proper marker, which gives the best recognition quality, is the main problem addressed in this paper. Seven markers are tested and compared, and the achieved results are described and discussed.
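
    The paper compares seven markers without committing to a particular detector. Purely as an illustration, the sketch below assumes an ArUco-style fiducial and the pre-4.7 OpenCV aruco API, and returns the pixel offset between the detected marker and the image center that a landing controller could steer against.

    ```python
    # Illustrative sketch only: the paper does not say which detector it uses.
    # Assumes an ArUco-style fiducial and the pre-4.7 OpenCV "aruco" module.
    import cv2

    ARUCO_DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

    def landing_offset(frame_bgr):
        """Return (dx, dy) from image centre to marker centre, or None."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        corners, ids, _ = cv2.aruco.detectMarkers(gray, ARUCO_DICT)
        if ids is None:
            return None                       # marker not found in this frame
        centre = corners[0][0].mean(axis=0)   # mean of the 4 marker corners
        h, w = gray.shape
        return float(centre[0] - w / 2), float(centre[1] - h / 2)
    ```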

  8. Development of a Mars Surface Imager

    NASA Technical Reports Server (NTRS)

    Squyres, Steve W.

    1994-01-01

    The Mars Surface Imager (MSI) is a multispectral, stereoscopic, panoramic imager that allows imaging of the full scene around a Mars lander from the lander body to the zenith. It has two functional components: panoramic imaging and sky imaging. In the most recent version of the MSI, called PIDDP-cam, a very long multi-line color CCD, an innovative high-performance drive system, and a state-of-the-art wavelet image compression code have been integrated into a single package. The requirements for the flight version of the MSI and the current design are presented.

  9. Color Image of Snow White Trenches and Scraping

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This image was acquired by NASA's Phoenix Mars Lander's Surface Stereo Imager on the 31st Martian day of the mission, or Sol 31 (June 26, 2008), after the May 25, 2008, landing. This image shows the trenches informally called 'Snow White 1' (left) and 'Snow White 2' (right), and, within the Snow White 2 trench, the smaller scraping area called 'Snow White 3.' The Snow White 3 scraped area is about 5 centimeters (2 inches) deep. The dug and scraped areas are within the digging site called 'Wonderland.'

    The Snow White trenches and scraping prove that scientists can take surface soil samples, subsurface soil samples, and icy samples all from one unit. Scientists want to test samples to determine if some ice in the soil may have been liquid in the past during warmer climate cycles.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is led by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver

  10. Patient dose, gray level and exposure index with a computed radiography system

    NASA Astrophysics Data System (ADS)

    Silva, T. R.; Yoshimura, E. M.

    2014-02-01

    Computed radiography (CR) is gradually replacing the conventional screen-film system in Brazil. To assess image quality, manufacturers provide the calculation of an exposure index through the acquisition software of the CR system. The objective of this study is to verify whether the CR image can also be used to evaluate the patient absorbed dose, through a relationship between the entrance skin dose and the exposure index or the gray-level values obtained in the image. The CR system used for this study (Agfa model 30-X with NX acquisition software) calculates an exposure index called Log of the Median (lgM), related to the absorbed dose to the IP. The lgM value depends on the average gray level (called the Scan Average Level (SAL)) of the segmented pixel-value histogram of the whole image. A Rando male phantom was used to simulate a human body (chest and head), and was irradiated with an X-ray unit, using radiologic techniques typical for chest exams. Thermoluminescent dosimeters (LiF, TLD-100) were used to evaluate the entrance skin dose and exit dose. The results showed a logarithmic relation between entrance dose and SAL in the image center, regardless of the beam filtration. The exposure index varies linearly with the entrance dose, but the angular coefficient is beam-quality dependent. We conclude that, with an adequate calibration, the CR system can be used to evaluate the patient absorbed dose.
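
    The two stated relationships suggest a simple calibration procedure; the sketch below fits them by least squares, taking the measured dose, SAL, and lgM arrays as user-supplied inputs. The functional forms simply mirror the abstract's findings, and nothing is assumed about Agfa's internal definition of lgM.

    ```python
    # Sketch of the calibration fits implied by the results above: SAL varies
    # logarithmically with entrance skin dose, and the lgM exposure index
    # varies linearly with it.  Measured arrays are supplied by the user.
    import numpy as np

    def fit_sal_vs_dose(dose, sal):
        """Fit SAL = a * ln(dose) + b; returns (a, b)."""
        a, b = np.polyfit(np.log(np.asarray(dose, float)),
                          np.asarray(sal, float), 1)
        return a, b

    def fit_lgm_vs_dose(dose, lgm):
        """Fit lgM = m * dose + c; the slope m is beam-quality dependent."""
        m, c = np.polyfit(np.asarray(dose, float), np.asarray(lgm, float), 1)
        return m, c
    ```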

  11. High resolution imaging of a subsonic projectile using automated mirrors with large aperture

    NASA Astrophysics Data System (ADS)

    Tateno, Y.; Ishii, M.; Oku, H.

    2017-02-01

    Visual tracking of high-speed projectiles is required for studying the aerodynamics around such objects. One solution to this problem is a tracking method based on the so-called 1 ms Auto Pan-Tilt (1ms-APT) system that we proposed in previous work, which consists of rotational mirrors and a high-speed image processing system. However, the images obtained with that system did not have high enough resolution to permit detailed measurement of the projectiles, because of the size of the mirrors. In this study, we propose a new system consisting of enlarged mirrors for tracking high-speed projectiles so as to achieve higher-resolution imaging, and we confirmed the effectiveness of the system via an experiment in which a projectile flying at subsonic speed was tracked.

  12. Study on real-time images compounded using spatial light modulator

    NASA Astrophysics Data System (ADS)

    Xu, Jin; Chen, Zhebo; Ni, Xuxiang; Lu, Zukang

    2007-01-01

    Image compounding technology is often used in film and film production. Conventionally, image compounding relies on image processing algorithms: useful objects, details, backgrounds, or other elements are first extracted from the source images, and then all of this information is compounded into one image. With this method the film system needs a powerful processor, because the processing is complex, so the compounded image is obtained only after some delay. In this paper, we introduce a new method of real-time image compounding; with this method, images can be compounded at the same time as the movie is shot. The whole system is made up of two camera lenses, a spatial light modulator array, and an image sensor. The spatial light modulator can be a liquid crystal display (LCD), liquid crystal on silicon (LCoS), thin-film-transistor liquid crystal display (TFT-LCD), digital micromirror device (DMD), and so on. First, one camera lens, which we call the first imaging lens, images the object onto the spatial light modulator's panel. Second, we output an image to the panel of the spatial light modulator, so that the image of the object and the image output by the spatial light modulator are spatially compounded on the panel. Third, the other camera lens, which we call the second imaging lens, images the compounded image onto the image sensor. After these three steps, the compounded image is captured by the image sensor. Because the spatial light modulator can output images continuously, the compounding is also continuous, and the compounding procedure is completed in real time. With this method, if we want to put a real object into an invented background, we can output the invented background scene on the spatial light modulator while the real object is imaged by the first imaging lens, and the compounded images are obtained by the image sensor in real time. In the same way, if we want to put a real background behind an invented object, we can output the invented object on the spatial light modulator while the real background is imaged by the first imaging lens, and again the compounded images are obtained in real time. Since most spatial light modulators can only modulate light intensity, a single panel without a color filter can produce only black-and-white compounded images; to obtain colorful compounded images, a configuration similar to a three-panel spatial light modulator projector is needed. The paper gives the optical framework of the system. In all experiments, the spatial light modulator used was liquid crystal on silicon (LCoS). At the end of the paper, some original pictures and compounded pictures are shown. Although the system has a few shortcomings, we can conclude that this system compounds images with no delay from mathematical compounding processing, and so it is a truly real-time image compounding system.

  13. Urban, Indoor and Subterranean Navigation Sensors and Systems (Capteurs et systemes de navigation urbains, interieurs et souterrains)

    DTIC Science & Technology

    2010-11-01

    [Figure-list and contributor-address residue from the report; recoverable figure captions: 'Multiple Images of an Image Sequence', 'A Digital Magnetic Compass from KVH Industries', 'Earth's Magnetic Field'; contributor: SENER - Ingenieria y Sistemas S.A., Aerospace Division, Parque Tecnologico de Madrid, Calle Severo Ocho 4, 28760 Tres Cantos, Madrid.] Experts from government, academia, industry and the military produced an analysis of future navigation sensors and systems whose performance...

  14. Yardangs

    NASA Image and Video Library

    2016-08-17

    Today's VIS image is located in Aeolis Mensae, east of Gale Crater. The linear ridge/valley system near the center of the image was formed by unidirectional winds eroding poorly cemented material. These features are called yardangs. Orbit Number: 64265 Latitude: -5.37213 Longitude: 145.043 Instrument: VIS Captured: 2016-06-09 09:32 http://photojournal.jpl.nasa.gov/catalog/PIA20806

  15. Soil structure characterized using computed tomographic images

    Treesearch

    Zhanqi Cheng; Stephen H. Anderson; Clark J. Gantzer; J. W. Van Sambeek

    2003-01-01

    Fractal analysis of soil structure is a relatively new method for quantifying the effects of management systems on soil properties and quality. The objective of this work was to explore several methods of studying images to describe and quantify structure of soils under forest management. This research uses computed tomography and a topological method called Multiple...

  16. Pulmonary nocardiosis

    MedlinePlus

    ... you and listen to your lungs using a stethoscope. You may have abnormal lung sounds, called crackles. Alternative names: Nocardiosis - pulmonary; Mycetoma; Nocardia.

  17. Interfacing the PACS and the HIS: results of a 5-year implementation.

    PubMed

    Kinsey, T V; Horton, M C; Lewis, T E

    2000-01-01

    An interface was created between the Department of Defense's hospital information system (HIS) and its two picture archiving and communication system (PACS)-based radiology information systems (RISs). The HIS is called the Composite Healthcare Computer System (CHCS), and the RISs are called the Medical Diagnostic Imaging System (MDIS) and the Digital Imaging Network (DIN)-PACS. Extensive mapping between dissimilar data protocols was required to translate data from the HIS into both RISs. The CHCS uses a Health Level 7 (HL7) protocol, whereas the MDIS uses the American College of Radiology-National Electrical Manufacturers Association 2.0 protocol and the DIN-PACS uses the Digital Imaging and Communications in Medicine (DICOM) 3.0 protocol. An interface engine was required to change some data formats, as well as to address some nonstandard HL7 data being output from the CHCS. In addition, there are differences in terminology between fields and segments in all three protocols. This interface is in use at 20 military facilities throughout the world. The interface reduces the amount of manual entry into more than one automated system to the smallest level possible. Data mapping during installation saved time, improved productivity, and increased user acceptance during PACS implementation. It also resulted in more standardized database entries in both the HIS (CHCS) and the RIS (PACS).
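    The kind of translation an interface engine performs can be pictured with the short sketch below; the HL7 fields, DICOM attribute keywords, and the translate helper are generic illustrations and are not the actual CHCS/MDIS/DIN-PACS mapping tables.

```python
# Illustrative sketch of the kind of field mapping an interface engine performs
# between an HL7 order message and a DICOM-style worklist record. The segments,
# tags, and values are generic examples, not the actual CHCS/MDIS/DIN-PACS maps.

# A parsed HL7 order message, reduced to a dictionary of "SEGMENT-field" keys.
hl7_message = {
    "PID-3": "123456",          # patient ID
    "PID-5": "DOE^JOHN",        # patient name
    "OBR-4": "CHEST 2 VIEWS",   # requested procedure
    "ORC-2": "A0001",           # placer order number
}

# Mapping from HL7 fields to DICOM attribute keywords (illustrative only).
HL7_TO_DICOM = {
    "PID-3": "PatientID",
    "PID-5": "PatientName",
    "OBR-4": "RequestedProcedureDescription",
    "ORC-2": "AccessionNumber",
}

def translate(message, mapping):
    """Build a DICOM-style attribute dictionary from the mapped HL7 fields."""
    return {dicom_kw: message.get(hl7_field, "")
            for hl7_field, dicom_kw in mapping.items()}

print(translate(hl7_message, HL7_TO_DICOM))
```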

  18. CINCH (confocal incoherent correlation holography) super resolution fluorescence microscopy based upon FINCH (Fresnel incoherent correlation holography).

    PubMed

    Siegel, Nisan; Storrie, Brian; Bruce, Marc; Brooker, Gary

    2015-02-07

    FINCH holographic fluorescence microscopy creates high resolution super-resolved images with enhanced depth of focus. The simple addition of a real-time Nipkow disk confocal image scanner in a conjugate plane of this incoherent holographic system is shown to reduce the depth of focus, and the combination of both techniques provides a simple way to enhance the axial resolution of FINCH in a combined method called "CINCH". An important feature of the combined system allows for the simultaneous real-time image capture of widefield and holographic images or confocal and confocal holographic images for ready comparison of each method on the exact same field of view. Additional GPU based complex deconvolution processing of the images further enhances resolution.

  19. Algorithms for differentiating between images of heterogeneous tissue across fluorescence microscopes.

    PubMed

    Chitalia, Rhea; Mueller, Jenna; Fu, Henry L; Whitley, Melodi Javid; Kirsch, David G; Brown, J Quincy; Willett, Rebecca; Ramanujam, Nimmi

    2016-09-01

    Fluorescence microscopy can be used to acquire real-time images of tissue morphology and with appropriate algorithms can rapidly quantify features associated with disease. The objective of this study was to assess the ability of various segmentation algorithms to isolate fluorescent positive features (FPFs) in heterogeneous images and identify an approach that can be used across multiple fluorescence microscopes with minimal tuning between systems. Specifically, we show a variety of image segmentation algorithms applied to images of stained tumor and muscle tissue acquired with 3 different fluorescence microscopes. Results indicate that a technique called maximally stable extremal regions followed by thresholding (MSER + Binary) yielded the greatest contrast in FPF density between tumor and muscle images across multiple microscopy systems.
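    A minimal sketch of the MSER-followed-by-thresholding idea, using OpenCV's MSER detector, is given below; the parameter values, the fpf_density helper, and the file name are assumptions rather than the authors' implementation.

```python
# Minimal sketch of the "MSER + Binary" idea: detect maximally stable extremal
# regions in a fluorescence image, keep them as a binary mask, and report the
# fluorescent-positive-feature (FPF) density. Parameter values are assumptions.
import cv2
import numpy as np

def fpf_density(gray_image, intensity_threshold=50):
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray_image)

    # Rasterize the detected regions into a binary mask.
    mask = np.zeros(gray_image.shape, dtype=np.uint8)
    for pts in regions:
        mask[pts[:, 1], pts[:, 0]] = 255

    # Follow-up threshold: keep only sufficiently bright pixels within regions.
    bright = gray_image > intensity_threshold
    positive = np.logical_and(mask > 0, bright)

    return positive.sum() / positive.size  # fraction of FPF pixels in the image

img = cv2.imread("tissue.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
print(fpf_density(img))
```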

  20. Autonomous facial recognition system inspired by human visual system based logarithmical image visualization technique

    NASA Astrophysics Data System (ADS)

    Wan, Qianwen; Panetta, Karen; Agaian, Sos

    2017-05-01

    Autonomous facial recognition systems are widely used in real-life applications, such as homeland border security, law enforcement identification and authentication, and video-based surveillance analysis. Issues like low image quality, non-uniform illumination, and variations in pose and facial expression can impair the performance of recognition systems. To address the non-uniform illumination challenge, we present a novel robust autonomous facial recognition system inspired by the human visual system and based on a so-called logarithmical image visualization technique. In this paper, the proposed method, for the first time, utilizes the logarithmical image visualization technique coupled with the local binary pattern to perform discriminative feature extraction for a facial recognition system. The Yale database, the Yale-B database, and the ATT database are used to test accuracy and efficiency in computer simulations. The extensive computer simulation demonstrates the method's efficiency, accuracy, and robustness to illumination variation for facial recognition.
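    A hedged sketch of how a logarithmic dynamic-range compression step can be paired with local binary pattern features is shown below; it only illustrates the idea, and the log mapping used is a stand-in for the authors' logarithmical image visualization technique.

```python
# Sketch: logarithmic illumination compression followed by LBP histogram
# features for a face image. This illustrates the idea only; it is not the
# authors' exact logarithmical image visualization pipeline.
import numpy as np
from skimage.feature import local_binary_pattern

def log_lbp_features(gray_face, points=8, radius=1):
    # Logarithmic visualization step: compress the dynamic range so that poorly
    # illuminated regions contribute usable structure.
    img = gray_face.astype(np.float64)
    log_img = np.log1p(img) / np.log1p(img.max())
    log_u8 = (255 * log_img).astype(np.uint8)

    # Local binary pattern histogram on the illumination-compressed image.
    lbp = local_binary_pattern(log_u8, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist  # feature vector for a nearest-neighbour or SVM classifier
```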

  1. Reconciliation of diverse telepathology system designs. Historic issues and implications for emerging markets and new applications.

    PubMed

    Weinstein, Ronald S; Graham, Anna R; Lian, Fangru; Braunhut, Beth L; Barker, Gail R; Krupinski, Elizabeth A; Bhattacharyya, Achyut K

    2012-04-01

    Telepathology, the distant service component of digital pathology, is a growth industry. The word "telepathology" was introduced into the English Language in 1986. Initially, two different, competing imaging modalities were used for telepathology. These were dynamic (real time) robotic telepathology and static image (store-and-forward) telepathology. In 1989, a hybrid dynamic robotic/static image telepathology system was developed in Norway. This hybrid imaging system bundled these two primary pathology imaging modalities into a single multi-modality pathology imaging system. Similar hybrid systems were subsequently developed and marketed in other countries as well. It is noteworthy that hybrid dynamic robotic/static image telepathology systems provided the infrastructure for the first truly sustainable telepathology services. Since then, impressive progress has been made in developing another telepathology technology, so-called "virtual microscopy" telepathology (also called "whole slide image" telepathology or "WSI" telepathology). Over the past decade, WSI has appeared to be emerging as the preferred digital telepathology digital imaging modality. However, recently, there has been a re-emergence of interest in dynamic-robotic telepathology driven, in part, by concerns over the lack of a means for up-and-down focusing (i.e., Z-axis focusing) using early WSI processors. In 2010, the initial two U.S. patents for robotic telepathology (issued in 1993 and 1994) expired enabling many digital pathology equipment companies to incorporate dynamic-robotic telepathology modules into their WSI products for the first time. The dynamic-robotic telepathology module provided a solution to the up-and-down focusing issue. WSI and dynamic robotic telepathology are now, rapidly, being bundled into a new class of telepathology/digital pathology imaging system, the "WSI-enhanced dynamic robotic telepathology system". To date, six major WSI processor equipment companies have embraced the approach and developed WSI-enhanced dynamic-robotic digital telepathology systems, marketed under a variety of labels. Successful commercialization of such systems could help overcome the current resistance of some pathologists to incorporate digital pathology, and telepathology, into their routine and esoteric laboratory services. Also, WSI-enhanced dynamic robotic telepathology could be useful for providing general pathology and subspecialty pathology services to many of the world's underserved populations in the decades ahead. This could become an important enabler for the delivery of patient-centered healthcare in the future. © 2012 The Authors APMIS © 2012 APMIS.

  2. ART-Ada design project, phase 2

    NASA Technical Reports Server (NTRS)

    Lee, S. Daniel; Allen, Bradley P.

    1990-01-01

    Interest in deploying expert systems in Ada has increased. An Ada based expert system tool is described called ART-Ada, which was built to support research into the language and methodological issues of expert systems in Ada. ART-Ada allows applications of an existing expert system tool called ART-IM (Automated Reasoning Tool for Information Management) to be deployed in various Ada environments. ART-IM, a C-based expert system tool, is used to generate Ada source code which is compiled and linked with an Ada based inference engine to produce an Ada executable image. ART-Ada is being used to implement several expert systems for NASA's Space Station Freedom Program and the U.S. Air Force.

  3. Facial fluid synthesis for assessment of acne vulgaris using luminescent visualization system through optical imaging and integration of fluorescent imaging system

    NASA Astrophysics Data System (ADS)

    Balbin, Jessie R.; Dela Cruz, Jennifer C.; Camba, Clarisse O.; Gozo, Angelo D.; Jimenez, Sheena Mariz B.; Tribiana, Aivje C.

    2017-06-01

    Acne vulgaris, commonly called acne, is a skin problem that occurs when oil and dead skin cells clog a person's pores. This happens because hormonal changes make the skin oilier. The problem is that people do not have a real assessment of the sensitivity of their skin in terms of the fluid development on their faces that tends to lead to acne vulgaris, and thus they experience more complications. This research aims to assess acne vulgaris using a luminescent visualization system through optical imaging and the integration of image processing algorithms. Specifically, this research aims to design a prototype for facial fluid analysis using a luminescent visualization system through optical imaging and the integration of a fluorescent imaging system, and to classify the different facial fluids present in each person. Throughout the process, some structures and layers of the face are excluded, leaving only a mapped facial structure with acne regions. Facial fluid regions are distinguished from the acne regions as they are characterized differently.

  4. The Spectral Image Processing System (SIPS) - Interactive visualization and analysis of imaging spectrometer data

    NASA Technical Reports Server (NTRS)

    Kruse, F. A.; Lefkoff, A. B.; Boardman, J. W.; Heidebrecht, K. B.; Shapiro, A. T.; Barloon, P. J.; Goetz, A. F. H.

    1993-01-01

    The Center for the Study of Earth from Space (CSES) at the University of Colorado, Boulder, has developed a prototype interactive software system called the Spectral Image Processing System (SIPS) using IDL (the Interactive Data Language) on UNIX-based workstations. SIPS is designed to take advantage of the combination of high spectral resolution and spatial data presentation unique to imaging spectrometers. It streamlines analysis of these data by allowing scientists to rapidly interact with entire datasets. SIPS provides visualization tools for rapid exploratory analysis and numerical tools for quantitative modeling. The user interface is X-Windows-based, user friendly, and provides 'point and click' operation. SIPS is being used for multidisciplinary research concentrating on use of physically based analysis methods to enhance scientific results from imaging spectrometer data. The objective of this continuing effort is to develop operational techniques for quantitative analysis of imaging spectrometer data and to make them available to the scientific community prior to the launch of imaging spectrometer satellite systems such as the Earth Observing System (EOS) High Resolution Imaging Spectrometer (HIRIS).

  5. Model-based restoration using light vein for range-gated imaging systems.

    PubMed

    Wang, Canjin; Sun, Tao; Wang, Tingfeng; Wang, Rui; Guo, Jin; Tian, Yuzhen

    2016-09-10

    The images captured by an airborne range-gated imaging system are degraded by many factors, such as light scattering, noise, defocus of the optical system, atmospheric disturbances, platform vibrations, and so on. The characteristics of low illumination, few details, and high noise make the state-of-the-art restoration method fail. In this paper, we present a restoration method especially for range-gated imaging systems. The degradation process is divided into two parts: the static part and the dynamic part. For the static part, we establish the physical model of the imaging system according to the laser transmission theory, and estimate the static point spread function (PSF). For the dynamic part, a so-called light vein feature extraction method is presented to estimate the fuzzy parameter of the atmospheric disturbance and platform movement, which make contributions to the dynamic PSF. Finally, combined with the static and dynamic PSF, an iterative updating framework is used to restore the image. Compared with the state-of-the-art methods, the proposed method can effectively suppress ringing artifacts and achieve better performance in a range-gated imaging system.
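    The restoration step can be pictured as deconvolution with the combined static and dynamic blur; the sketch below uses Richardson-Lucy deconvolution as a stand-in for the paper's iterative updating framework, and the PSF shapes and iteration count are assumptions.

```python
# A minimal sketch of the restoration step described above: combine a static
# PSF (from the imaging-system model) with a dynamic PSF (from disturbance
# estimation) and deconvolve iteratively. Richardson-Lucy is used here as a
# stand-in for the paper's iterative updating framework.
import numpy as np
from scipy.signal import fftconvolve
from skimage.restoration import richardson_lucy

def restore(degraded, psf_static, psf_dynamic, iterations=30):
    # The overall blur is the convolution of the static and dynamic parts.
    psf_total = fftconvolve(psf_static, psf_dynamic, mode="full")
    psf_total /= psf_total.sum()

    # Iterative restoration with the combined PSF (image assumed scaled to [0, 1]).
    return richardson_lucy(degraded, psf_total, iterations)
```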

  6. Video Guidance Sensor and Time-of-Flight Rangefinder

    NASA Technical Reports Server (NTRS)

    Bryan, Thomas; Howard, Richard; Bell, Joseph L.; Roe, Fred D.; Book, Michael L.

    2007-01-01

    A proposed video guidance sensor (VGS) would be based mostly on the hardware and software of a prior Advanced VGS (AVGS), with some additions to enable it to function as a time-of-flight rangefinder (in contradistinction to a triangulation or image-processing rangefinder). It would typically be used at distances of the order of 2 or 3 kilometers, where a typical target would appear in a video image as a single blob, making it possible to extract the direction to the target (but not the orientation of the target or the distance to the target) from a video image of light reflected from the target. As described in several previous NASA Tech Briefs articles, an AVGS system is an optoelectronic system that provides guidance for automated docking of two vehicles. In the original application, the two vehicles are spacecraft, but the basic principles of design and operation of the system are applicable to aircraft, robots, objects maneuvered by cranes, or other objects that may be required to be aligned and brought together automatically or under remote control. In a prior AVGS system of the type upon which the now-proposed VGS is largely based, the tracked vehicle is equipped with one or more passive targets that reflect light from one or more continuous-wave laser diode(s) on the tracking vehicle, a video camera on the tracking vehicle acquires images of the targets in the reflected laser light, the video images are digitized, and the image data are processed to obtain the direction to the target. The design concept of the proposed VGS does not call for any memory or processor hardware beyond that already present in the prior AVGS, but does call for some additional hardware and some additional software. It also calls for assignment of some additional tasks to two subsystems that are parts of the prior VGS: a field-programmable gate array (FPGA) that generates timing and control signals, and a digital signal processor (DSP) that processes the digitized video images. The additional timing and control signals generated by the FPGA would cause the VGS to alternate between an imaging (direction-finding) mode and a time-of-flight (range-finding mode) and would govern operation in the range-finding mode.
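    In the range-finding mode, the distance follows from the round-trip time of the reflected pulse; a minimal sketch of that conversion (with an illustrative round-trip time) is:

```python
# Minimal sketch of the time-of-flight range computation used by a
# range-finding mode: range is half the round-trip distance of the light pulse.
C = 299_792_458.0  # speed of light, m/s

def range_from_round_trip(round_trip_seconds):
    return C * round_trip_seconds / 2.0

# A 20-microsecond round trip corresponds to roughly 3 kilometers.
print(range_from_round_trip(20e-6))  # ~2998 m
```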

  7. Identification of suitable fundus images using automated quality assessment methods.

    PubMed

    Şevik, Uğur; Köse, Cemal; Berber, Tolga; Erdöl, Hidayet

    2014-04-01

    Retinal image quality assessment (IQA) is a crucial process for automated retinal image analysis systems to obtain an accurate and successful diagnosis of retinal diseases. Consequently, the first step in a good retinal image analysis system is measuring the quality of the input image. We present an approach for finding medically suitable retinal images for retinal diagnosis. We used a three-class grading system that consists of good, bad, and outlier classes. We created a retinal image quality dataset with a total of 216 consecutive images called the Diabetic Retinopathy Image Database. We identified the suitable images within the good images for automatic retinal image analysis systems using a novel method. Subsequently, we evaluated our retinal image suitability approach using the Digital Retinal Images for Vessel Extraction and Standard Diabetic Retinopathy Database Calibration level 1 public datasets. The results were measured through the F1 metric, which is a harmonic mean of precision and recall metrics. The highest F1 scores of the IQA tests were 99.60%, 96.50%, and 85.00% for good, bad, and outlier classes, respectively. Additionally, the accuracy of our suitable image detection approach was 98.08%. Our approach can be integrated into any automatic retinal analysis system with sufficient performance scores.
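    The F1 metric quoted above is the harmonic mean of precision and recall; a small sketch of its computation from per-class counts (the counts are made up, not the study's data) is:

```python
# F1 score as the harmonic mean of precision and recall for one class.
def f1_score(true_positives, false_positives, false_negatives):
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return 2 * precision * recall / (precision + recall)

# Example with made-up counts, not the study's data.
print(f1_score(96, 3, 4))  # about 0.965
```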

  8. Image Analysis Program for Measuring Particles with the Zeiss CSM 950 Scanning Electron Microscope (SEM)

    DTIC Science & Technology

    1990-01-01

    TECHNICAL REPORT NATICK/TR-90/014: Image Analysis Program for Measuring Particles with the Zeiss CSM 950 Scanning... The image analysis program for measuring particles using the Zeiss CSM 950/Kontron system is as follows: A>CSM calls the image analysis program. Press D to... [List-of-tables residue; recoverable table titles: 'Image Analysis Program for Measuring Spherical Particles'; 'Printout of Statistical Data From Table 1'.]

  9. Image texture segmentation using a neural network

    NASA Astrophysics Data System (ADS)

    Sayeh, Mohammed R.; Athinarayanan, Ragu; Dhali, Pushpuak

    1992-09-01

    In this paper we use a neural network called the Lyapunov associative memory (LYAM) system to segment image texture into different categories or clusters. The LYAM system is constructed by a set of ordinary differential equations which are simulated on a digital computer. The clustering can be achieved by using a single tuning parameter in the simplest model. Pattern classes are represented by the stable equilibrium states of the system. Design of the system is based on synthesizing two local energy functions, namely, the learning and recall energy functions. Before the implementation of the segmentation process, a Gauss-Markov random field (GMRF) model is applied to the raw image. This application suitably reduces the image data and prepares the texture information for the neural network process. We give a simple image example illustrating the capability of the technique. The GMRF-generated features are also used for a clustering, based on the Euclidean distance.

  10. D3D augmented reality imaging system: proof of concept in mammography.

    PubMed

    Douglas, David B; Petricoin, Emanuel F; Liotta, Lance; Wilson, Eugene

    2016-01-01

    The purpose of this article is to present images from simulated breast microcalcifications and assess the pattern of the microcalcifications with a technical development called "depth 3-dimensional (D3D) augmented reality". A computer, head display unit, joystick, D3D augmented reality software, and an in-house script of simulated data of breast microcalcifications in a ductal distribution were used. No patient data was used and no statistical analysis was performed. The D3D augmented reality system demonstrated stereoscopic depth perception by presenting a unique image to each eye, focal point convergence, head position tracking, 3D cursor, and joystick fly-through. The D3D augmented reality imaging system offers image viewing with depth perception and focal point convergence. The D3D augmented reality system should be tested to determine its utility in clinical practice.

  11. A-Track: A New Approach for Detection of Moving Objects in FITS Images

    NASA Astrophysics Data System (ADS)

    Kılıç, Yücel; Karapınar, Nurdan; Atay, Tolga; Kaplan, Murat

    2016-07-01

    Small planet and asteroid observations are important for understanding the origin and evolution of the Solar System. In this work, we have developed a fast and robust pipeline, called A-Track, for detecting asteroids and comets in sequential telescope images. The moving objects are detected using a modified line detection algorithm, called ILDA. We have coded the pipeline in Python 3, where we have made use of various scientific modules in Python to process the FITS images. We tested the code on photometrical data taken by an SI-1100 CCD with a 1-meter telescope at TUBITAK National Observatory, Antalya. The pipeline can be used to analyze large data archives or daily sequential data. The code is hosted on GitHub under the GNU GPL v3 license.
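    A hedged sketch of the underlying idea, differencing two sequential FITS frames so that static stars cancel and residual sources become moving-object candidates, is shown below; A-Track's actual ILDA line-detection stage is more involved, and the file names are hypothetical.

```python
# Sketch of the basic idea behind detecting moving objects in sequential FITS
# frames: difference two aligned exposures and flag residual sources.
import numpy as np
from astropy.io import fits

frame1 = fits.getdata("field_001.fits").astype(np.float64)  # hypothetical file names
frame2 = fits.getdata("field_002.fits").astype(np.float64)

diff = frame2 - frame1                       # static stars largely cancel
threshold = diff.mean() + 5.0 * diff.std()   # simple 5-sigma detection level
candidates = np.argwhere(diff > threshold)   # pixel coordinates of candidate movers

print(len(candidates), "candidate moving-object pixels")
```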

  12. SELF IMAGES AND COMMUNITY IMAGES OF THE ELEMENTARY SCHOOL PRINCIPAL--FINDINGS AND IMPLICATIONS OF A SOCIOLOGICAL INQUIRY.

    ERIC Educational Resources Information Center

    FOSKETT, JOHN M.; WOLCOTT, HARRY F.

    The system of rules that guides the behavior of elementary school principals was investigated. This body of rules, termed "the normative structure of the community as it pertains to school administrators," was studied by means of an instrument called the "Role Norm Inventory." Separate inventories were developed for elementary…

  13. Magnetic Interactions and the Method of Images: A Wealth of Educational Suggestions

    ERIC Educational Resources Information Center

    Bonanno, A.; Camarca, M.; Sapia, P.

    2011-01-01

    Under some conditions, the method of images (well known in electrostatics) may be implemented in magnetostatic problems too, giving an excellent example of the usefulness of formal analogies in the description of physical systems. In this paper, we develop a quantitative model for the magnetic interactions underlying the so-called Geomag[TM]…

  14. Phoenix Deepens Trenches on Mars

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The Surface Stereo Imager on NASA's Phoenix Mars Lander took this false color image on Oct. 21, 2008, during the 145th Martian day, or sol, since landing. The bluish-white areas seen in these trenches are part of an ice layer beneath the soil.

    The trench on the upper left, called 'Dodo-Goldilocks,' is about 38 centimeters (15 inches) long and 4 centimeters (1.5 inches) deep. The trench on the right, called 'Upper Cupboard,' is about 60 centimeters (24 inches) long and 3 centimeters (1 inch) deep. The trench in the lower middle is called 'Stone Soup.'

    The Phoenix mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  15. Tenth Planet Discovered

    NASA Image and Video Library

    2005-08-03

    These time-lapse images of a newfound dwarf planet in our solar system, formerly known as 2003 UB313 or Xena, and now called Eris, were taken using the Samuel Oschin Telescope at the Palomar Observatory.

  16. A segmentation-free approach to Arabic and Urdu OCR

    NASA Astrophysics Data System (ADS)

    Sabbour, Nazly; Shafait, Faisal

    2013-01-01

    In this paper, we present a generic Optical Character Recognition system for Arabic script languages called Nabocr. Nabocr uses OCR approaches specific for Arabic script recognition. Performing recognition on Arabic script text is relatively more difficult than Latin text due to the nature of Arabic script, which is cursive and context sensitive. Moreover, Arabic script has different writing styles that vary in complexity. Nabocr is initially trained to recognize both Urdu Nastaleeq and Arabic Naskh fonts. However, it can be trained by users to be used for other Arabic script languages. We have evaluated our system's performance for both Urdu and Arabic. In order to evaluate Urdu recognition, we have generated a dataset of Urdu text called UPTI (Urdu Printed Text Image Database), which measures different aspects of a recognition system. The performance of our system for Urdu clean text is 91%. For Arabic clean text, the performance is 86%. Moreover, we have compared the performance of our system against Tesseract's newly released Arabic recognition, and the performance of both systems on clean images is almost the same.

  17. Classification by diagnosing all absorption features (CDAF) for the most abundant minerals in airborne hyperspectral images

    NASA Astrophysics Data System (ADS)

    Mobasheri, Mohammad Reza; Ghamary-Asl, Mohsen

    2011-12-01

    Imaging through hyperspectral technology is a powerful tool that can be used to spectrally identify and spatially map materials based on their specific absorption characteristics in electromagnetic spectrum. A robust method called Tetracorder has shown its effectiveness at material identification and mapping, using a set of algorithms within an expert system decision-making framework. In this study, using some stages of Tetracorder, a technique called classification by diagnosing all absorption features (CDAF) is introduced. This technique enables one to assign a class to the most abundant mineral in each pixel with high accuracy. The technique is based on the derivation of information from reflectance spectra of the image. This can be done through extraction of spectral absorption features of any minerals from their respected laboratory-measured reflectance spectra, and comparing it with those extracted from the pixels in the image. The CDAF technique has been executed on the AVIRIS image where the results show an overall accuracy of better than 96%.
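    One common way to diagnose an absorption feature, and the kind of spectral comparison a technique like CDAF builds on, is continuum removal followed by a band-depth measurement; the sketch below illustrates this with placeholder wavelengths and spectra, not data from the study.

```python
# Illustrative sketch of diagnosing an absorption feature by continuum removal
# and band depth. Wavelength positions and spectra below are placeholders.
import numpy as np

def band_depth(wavelengths, reflectance, left, center, right):
    """Band depth of an absorption feature after removing a linear continuum
    drawn between the feature shoulders at `left` and `right` (same units as wavelengths)."""
    r_left = np.interp(left, wavelengths, reflectance)
    r_right = np.interp(right, wavelengths, reflectance)
    r_center = np.interp(center, wavelengths, reflectance)
    # Linear continuum value interpolated at the band center.
    continuum = np.interp(center, [left, right], [r_left, r_right])
    return 1.0 - r_center / continuum

wl = np.linspace(2.0, 2.5, 51)                                    # micrometers, placeholder grid
pixel_spectrum = 0.4 - 0.05 * np.exp(-((wl - 2.2) ** 2) / 0.002)  # synthetic absorption near 2.2 um
print(band_depth(wl, pixel_spectrum, 2.1, 2.2, 2.3))
```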

  18. JPRS Report, Science & Technology, Japan, 27th Aircraft Symposium

    DTIC Science & Technology

    1990-10-29

    ...screen; the relative attitude is then determined. 2) Video Sensor System: specific patterns (grapple target, etc.) drawn on the target spacecraft, or the... entire target spacecraft, is imaged by camera. Navigation information is obtained by on-board image processing, such as extraction of contours and... A standard figure called "grapple target" located in the vicinity of the grapple fixture on the target spacecraft is imaged by camera. Contour lines and...

  19. Magnetic resonance electrical impedance tomography (MREIT): simulation study of J-substitution algorithm.

    PubMed

    Kwon, Ohin; Woo, Eung Je; Yoon, Jeong-Rock; Seo, Jin Keun

    2002-02-01

    We developed a new image reconstruction algorithm for magnetic resonance electrical impedance tomography (MREIT). MREIT is a new EIT imaging technique integrated into a magnetic resonance imaging (MRI) system. Based on the assumption that the internal current density distribution is obtained using an MRI technique, the new image reconstruction algorithm, called the J-substitution algorithm, produces cross-sectional static images of resistivity (or conductivity) distributions. Computer simulations show that the spatial resolution of the resistivity image is comparable to that of MRI. MREIT provides accurate high-resolution cross-sectional resistivity images, making resistivity values of various human tissues available for many biomedical applications.
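    The abstract does not reproduce the algorithm itself; as a schematic indication only, and under simplifying assumptions (isotropic resistivity, measured current density magnitude, potential computed from the current resistivity estimate), a pointwise update of the kind that gives the J-substitution algorithm its name can be written as:

```latex
% Schematic J-substitution style update (simplified, isotropic case; not
% necessarily the authors' exact formulation):
\nabla \cdot \left( \frac{1}{\rho^{k}} \nabla u^{k} \right) = 0 \ \text{in } \Omega,
\qquad
\rho^{k+1}(\mathbf{r}) = \frac{\lvert \nabla u^{k}(\mathbf{r}) \rvert}{\lvert \mathbf{J}^{\mathrm{meas}}(\mathbf{r}) \rvert}.
```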

  20. CINCH (confocal incoherent correlation holography) super resolution fluorescence microscopy based upon FINCH (Fresnel incoherent correlation holography)

    PubMed Central

    Siegel, Nisan; Storrie, Brian; Bruce, Marc

    2016-01-01

    FINCH holographic fluorescence microscopy creates high resolution super-resolved images with enhanced depth of focus. The simple addition of a real-time Nipkow disk confocal image scanner in a conjugate plane of this incoherent holographic system is shown to reduce the depth of focus, and the combination of both techniques provides a simple way to enhance the axial resolution of FINCH in a combined method called “CINCH”. An important feature of the combined system allows for the simultaneous real-time image capture of widefield and holographic images or confocal and confocal holographic images for ready comparison of each method on the exact same field of view. Additional GPU based complex deconvolution processing of the images further enhances resolution. PMID:26839443

  1. A simple microfluidic device for live cell imaging of Arabidopsis cotyledons, leaves, and seedlings.

    PubMed

    Vang, Shia; Seitz, Kati; Krysan, Patrick J

    2018-06-01

    One of the challenges of performing live-cell imaging in plants is establishing a system for securing the sample during imaging that allows for the rapid addition of treatments. Here we report how a commercially available device called a HybriWell ™ can be repurposed to create an imaging chamber suitable for Arabidopsis seedlings, cotyledons and leaves. Liquid in the imaging chamber can be rapidly exchanged to introduce chemical treatments via microfluidic passive pumping. When used in conjunction with fluorescent biosensors, this system can facilitate live-cell imaging studies of signal transduction pathways triggered by different treatments. As a demonstration, we show how the HybriWell can be used to monitor flg22-induced calcium transients using the R-GECO1 calcium indicator in detached Arabidopsis leaves.

  2. A novel design for scintillator-based neutron and gamma imaging in inertial confinement fusion

    NASA Astrophysics Data System (ADS)

    Geppert-Kleinrath, Verena; Cutler, Theresa; Danly, Chris; Madden, Amanda; Merrill, Frank; Tybo, Josh; Volegov, Petr; Wilde, Carl

    2017-10-01

    The LANL Advanced Imaging team has been providing reliable 2D neutron imaging of the burning fusion fuel at NIF for years, revealing possible multi-dimensional asymmetries in the fuel shape, and therefore calling for additional views. Adding a passive imaging system using image plate techniques along a new polar line of sight has recently demonstrated the merit of 3D neutron image reconstruction. Now, the team is in the process of designing a new active neutron imaging system for an additional equatorial view. The design will include a gamma imaging system as well, to allow for the imaging of carbon in the ablator of the NIF fuel capsules, constraining the burning fuel shape even further. The selection of ideal scintillator materials for a position-sensitive detector system is the key component for the new design. A comprehensive study of advanced scintillators has been carried out at the Los Alamos Neutron Science Center and the OMEGA Laser Facility in Rochester, NY. Neutron radiography using a fast-gated CCD camera system delivers measurements of resolution, light output and noise characteristics. The measured performance parameters inform the novel design, for which we conclude the feasibility of monolithic scintillators over pixelated counterparts.

  3. Active imaging system performance model for target acquisition

    NASA Astrophysics Data System (ADS)

    Espinola, Richard L.; Teaney, Brian; Nguyen, Quang; Jacobs, Eddie L.; Halford, Carl E.; Tofsted, David H.

    2007-04-01

    The U.S. Army RDECOM CERDEC Night Vision & Electronic Sensors Directorate has developed a laser-range-gated imaging system performance model for the detection, recognition, and identification of vehicle targets. The model is based on the established US Army RDECOM CERDEC NVESD sensor performance models of the human system response through an imaging system. The Java-based model, called NVLRG, accounts for the effect of active illumination, atmospheric attenuation, and turbulence effects relevant to LRG imagers, such as speckle and scintillation, and for the critical sensor and display components. This model can be used to assess the performance of recently proposed active SWIR systems through various trade studies. This paper will describe the NVLRG model in detail, discuss the validation of recent model components, present initial trade study results, and outline plans to validate and calibrate the end-to-end model with field data through human perception testing.

  4. An Integrated System for Superharmonic Contrast-Enhanced Ultrasound Imaging: Design and Intravascular Phantom Imaging Study.

    PubMed

    Li, Yang; Ma, Jianguo; Martin, K Heath; Yu, Mingyue; Ma, Teng; Dayton, Paul A; Jiang, Xiaoning; Shung, K Kirk; Zhou, Qifa

    2016-09-01

    Superharmonic contrast-enhanced ultrasound imaging, also called acoustic angiography, has previously been used for the imaging of microvasculature. This approach excites microbubble contrast agents near their resonance frequency and receives echoes at nonoverlapping superharmonic bandwidths. No integrated system currently exists that can fully support this application. To fulfill this need, an integrated dual-channel transmit/receive system for superharmonic imaging was designed, built, and characterized experimentally. The system was uniquely designed for superharmonic imaging and high-resolution B-mode imaging. A complete ultrasound system, including a pulse generator, a data acquisition unit, and a signal processing unit, was integrated into a single package. The system was controlled by a field-programmable gate array, on which multiple user-defined modes were implemented. A 6-, 35-MHz dual-frequency, dual-element intravascular ultrasound transducer was designed and used for imaging. The system successfully obtained high-resolution B-mode images of a coronary artery ex vivo with 45-dB dynamic range. The system was capable of acquiring in vitro superharmonic images of a vasa vasorum mimicking phantom with 30-dB contrast. It could detect a contrast-agent-filled tissue-mimicking tube of 200 μm diameter. For the first time, high-resolution B-mode images and superharmonic images were obtained in an intravascular phantom, made possible by the dedicated integrated system proposed. The system greatly reduced the cost and complexity of superharmonic imaging intended for preclinical study. Significance: The system showed promise for high-contrast intravascular microvascular imaging, which may have significant importance in assessment of the vasa vasorum associated with atherosclerotic plaques.

  5. Space Radar Image of Wadi Kufra, Libya

    NASA Image and Video Library

    1998-04-14

    The ability of a sophisticated radar instrument to image large regions of the world from space, using different frequencies that can penetrate dry sand cover, produced the discovery in this image: a previously unknown branch of an ancient river, buried under thousands of years of windblown sand in a region of the Sahara Desert in North Africa. This area is near the Kufra Oasis in southeast Libya, centered at 23.3 degrees north latitude, 22.9 degrees east longitude. The image was acquired by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture (SIR-C/X-SAR) imaging radar when it flew aboard the space shuttle Endeavour on its 60th orbit on October 4, 1994. This SIR-C image reveals a system of old, now inactive stream valleys, called "paleodrainage systems, http://photojournal.jpl.nasa.gov/catalog/PIA01310

  6. Object-oriented design of medical imaging software.

    PubMed

    Ligier, Y; Ratib, O; Logean, M; Girard, C; Perrier, R; Scherrer, J R

    1994-01-01

    A special software package for interactive display and manipulation of medical images was developed at the University Hospital of Geneva, as part of a hospital wide Picture Archiving and Communication System (PACS). This software package, called Osiris, was especially designed to be easily usable and adaptable to the needs of noncomputer-oriented physicians. The Osiris software has been developed to allow the visualization of medical images obtained from any imaging modality. It provides generic manipulation tools, processing tools, and analysis tools more specific to clinical applications. This software, based on an object-oriented paradigm, is portable and extensible. Osiris is available on two different operating systems: the Unix X-11/OSF-Motif based workstations, and the Macintosh family.

  7. EROS main image file - A picture perfect database for Landsat imagery and aerial photography

    NASA Technical Reports Server (NTRS)

    Jack, R. F.

    1984-01-01

    The Earth Resources Observation System (EROS) Program was established by the U.S. Department of the Interior in 1966 under the administration of the Geological Survey. It is primarily concerned with the application of remote sensing techniques for the management of natural resources. The retrieval system employed to search the EROS database is called INORAC (Inquiry, Ordering, and Accounting). A description is given of the types of images identified in EROS, taking into account Landsat imagery, Skylab images, Gemini/Apollo photography, and NASA aerial photography. Attention is given to retrieval commands, geographic coordinate searching, refinement techniques, various online functions, and questions regarding the access to the EROS Main Image File.

  8. Real-time FPGA-based radar imaging for smart mobility systems

    NASA Astrophysics Data System (ADS)

    Saponara, Sergio; Neri, Bruno

    2016-04-01

    The paper presents an X-band FMCW (Frequency Modulated Continuous Wave) Radar Imaging system, called X-FRI, for surveillance in smart mobility applications. X-FRI allows for detecting the presence of targets (e.g. obstacles in a railway crossing or urban road crossing, or ships in a small harbor), as well as their speed and their position. With respect to alternative solutions based on LIDAR or camera systems, X-FRI operates in real-time also in bad lighting and weather conditions, night and day. The radio-frequency transceiver is realized through COTS (Commercial Off The Shelf) components on a single-board. An FPGA-based baseband platform allows for real-time Radar image processing.
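    The range estimate in an FMCW radar follows from the beat frequency between the transmitted and received chirps; a minimal sketch of that relation (with illustrative parameters, not the X-FRI design values) is:

```python
# Hedged sketch of the basic FMCW relation: the beat frequency between the
# transmitted and received chirps is proportional to target range.
C = 299_792_458.0  # speed of light, m/s

def fmcw_range(beat_frequency_hz, bandwidth_hz, chirp_duration_s):
    chirp_slope = bandwidth_hz / chirp_duration_s        # Hz per second
    return C * beat_frequency_hz / (2.0 * chirp_slope)   # range in meters

# Example: a 150 MHz sweep over 1 ms with a 100 kHz beat gives ~100 m.
print(fmcw_range(100e3, 150e6, 1e-3))
```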

  9. Natural Resource Monitoring of Rheum tanguticum by Multilevel Remote Sensing

    PubMed Central

    Xie, Caixiang; Song, Jingyuan; Suo, Fengmei; Li, Xiwen; Li, Ying; Yu, Hua; Xu, Xiaolan; Luo, Kun; Li, Qiushi; Xin, Tianyi; Guan, Meng; Xu, Xiuhai; Miki, Eiji; Takeda, Osami; Chen, Shilin

    2014-01-01

    Remote sensing has been extensively applied in agriculture for its objectiveness and promptness. However, few applications are available for monitoring natural medicinal plants. In this paper, a multilevel monitoring system, which includes satellite and aerial remote sensing as well as ground investigation, was initially proposed to monitor the natural Rheum tanguticum resource in Baihe Pasture, Zoige County, Sichuan Province. The amount of R. tanguticum estimated from images is M = S·ρ, where S is the vegetation coverage obtained from satellite imaging and ρ is the R. tanguticum density obtained from low-altitude imaging. Only R. tanguticum plants whose coverage exceeded 1 m² could be recognized from the remote sensing image, because of its 0.1 m resolution (this fraction is called the effective resource), whereas the results of the ground investigation represented the amount of R. tanguticum of all sizes (called the future resource). The data in the paper showed that the presently available amount of R. tanguticum accounted for 4% to 5% of the total quantity. The quantity information and the population structure of R. tanguticum in the Baihe Pasture were initially confirmed by this system. It is feasible to monitor the quantitative distribution of natural medicinal plants with scattered distribution. PMID:25101134

  10. Development of image mappers for hyperspectral biomedical imaging applications

    PubMed Central

    Kester, Robert T.; Gao, Liang; Tkaczyk, Tomasz S.

    2010-01-01

    A new design and fabrication method is presented for creating large-format (>100 mirror facets) image mappers for a snapshot hyperspectral biomedical imaging system called an image mapping spectrometer (IMS). To verify this approach a 250 facet image mapper with 25 multiple-tilt angles is designed for a compact IMS that groups the 25 subpupils in a 5 × 5 matrix residing within a single collecting objective's pupil. The image mapper is fabricated by precision diamond raster fly cutting using surface-shaped tools. The individual mirror facets have minimal edge eating, tilt errors of <1 mrad, and an average roughness of 5.4 nm. PMID:20357875

  11. The Role of Prototype Learning in Hierarchical Models of Vision

    ERIC Educational Resources Information Center

    Thomure, Michael David

    2014-01-01

    I conduct a study of learning in HMAX-like models, which are hierarchical models of visual processing in biological vision systems. Such models compute a new representation for an image based on the similarity of image sub-parts to a number of specific patterns, called prototypes. Despite being a central piece of the overall model, the issue of…

  12. Real-time access of large volume imagery through low-bandwidth links

    NASA Astrophysics Data System (ADS)

    Phillips, James; Grohs, Karl; Brower, Bernard; Kelly, Lawrence; Carlisle, Lewis; Pellechia, Matthew

    2010-04-01

    Providing current, time-sensitive imagery and geospatial information to deployed tactical military forces or first responders continues to be a challenge. This challenge is compounded through rapid increases in sensor collection volumes, both with larger arrays and higher temporal capture rates. Focusing on the needs of these military forces and first responders, ITT developed a system called AGILE (Advanced Geospatial Imagery Library Enterprise) Access as an innovative approach based on standard off-the-shelf techniques to solving this problem. The AGILE Access system is based on commercial software called Image Access Solutions (IAS) and incorporates standard JPEG 2000 processing. Our solution system is implemented in an accredited, deployable form, incorporating a suite of components, including an image database, a web-based search and discovery tool, and several software tools that act in concert to process, store, and disseminate imagery from airborne systems and commercial satellites. Currently, this solution is operational within the U.S. Government tactical infrastructure and supports disadvantaged imagery users in the field. This paper presents the features and benefits of this system to disadvantaged users as demonstrated in real-world operational environments.

  13. [Development of image quality assurance support system using image recognition technology in radiography in lacked images of chest and abdomen].

    PubMed

    Shibuya, Toru; Kato, Kyouichi; Eshima, Hidekazu; Sumi, Shinichirou; Kubo, Tadashi; Ishida, Hideki; Nakazawa, Yasuo

    2012-01-01

    In order to provide precise radiography for diagnosis, radiographs with defects must be avoided through adequate evaluation. Conventionally, evaluation was performed only by observation by a radiological technologist (RT). An evaluation support system was developed to provide high quality assurance without depending on RT observation alone. The evaluation support system, called the Image Quality Assurance Support System (IQASS), is characterized by its use of image recognition technology for diagnostic radiography of the chest and abdomen; this is the technique evaluated in this study. Of the 259 samples of posterior-anterior (PA) chest, lateral chest, and upright abdominal x-rays, the sensitivity and specificity were 93.1% and 91.8% for the PA chest, 93.3% and 93.6% for the lateral chest, and 95.0% and 93.8% for the upright abdominal x-rays. In light of these results, it is suggested that IQASS could be applied to practical usage for the RT.

  14. Phoenix's 'Dodo' Trench

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This image was taken by NASA's Phoenix Mars Lander's Robotic Arm Camera (RAC) on the ninth Martian day of the mission, or Sol 9 (June 3, 2008). The center of the image shows a trench informally called 'Dodo' after the second dig. 'Dodo' is located within the previously determined digging area, informally called 'Knave of Hearts.' The light square to the right of the trench is the Robotic Arm's Thermal and Electrical Conductivity Probe (TECP). The Robotic Arm has scraped to a bright surface which indicated the Arm has reached a solid structure underneath the surface, which has been seen in other images as well.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  15. Space Object and Light Attribute Rendering (SOLAR) Projection System

    DTIC Science & Technology

    2017-05-08

    Distribution unlimited; approved for public release. A state-of-the-art planetarium-style projection system called Space Object and Light Attribute Rendering (SOLAR)... a planetarium-style projection system for emulation of a variety of close proximity and long range imaging experiments. University at Buffalo's Space...

  16. Statistical characterization of portal images and noise from portal imaging systems.

    PubMed

    González-López, Antonio; Morales-Sánchez, Juan; Verdú-Monedero, Rafael; Larrey-Ruiz, Jorge

    2013-06-01

    In this paper, we consider the statistical characteristics of the so-called portal images, which are acquired prior to radiotherapy treatment, as well as of the noise present in portal imaging systems, in order to analyze whether the noise and image features that are well known in other image modalities, such as natural images, can also be found in the portal imaging modality. The study is carried out in the spatial image domain, in the Fourier domain, and finally in the wavelet domain. The probability density of the noise in the spatial image domain, the power spectral densities of the image and noise, and the marginal, joint, and conditional statistical distributions of the wavelet coefficients are estimated. Moreover, the statistical dependencies between noise and signal are investigated. The obtained results are compared with practical and useful references, such as the characteristics of natural images and white noise. Finally, we discuss the implications of the results for several noise reduction methods that operate in the wavelet domain.
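    A small sketch of the sort of wavelet-domain characterization described, computing per-subband spread and kurtosis of the detail coefficients, is given below; the wavelet family and decomposition level are assumptions, not the parameters used in the paper.

```python
# Sketch: decompose an image and summarize the marginal statistics of the
# wavelet detail coefficients (natural images typically show heavy-tailed
# marginals). Wavelet choice and level are assumptions.
import numpy as np
import pywt
from scipy.stats import kurtosis

def detail_statistics(image, wavelet="db4", level=3):
    coeffs = pywt.wavedec2(image.astype(np.float64), wavelet, level=level)
    stats = []
    # coeffs[0] is the approximation; the remaining entries are detail subbands.
    for lvl, (ch, cv, cd) in enumerate(coeffs[1:], start=1):
        for name, band in (("H", ch), ("V", cv), ("D", cd)):
            stats.append((lvl, name, band.std(), kurtosis(band, axis=None)))
    return stats  # (level, orientation, std, excess kurtosis) per subband
```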

  17. High speed multiphoton imaging

    NASA Astrophysics Data System (ADS)

    Li, Yongxiao; Brustle, Anne; Gautam, Vini; Cockburn, Ian; Gillespie, Cathy; Gaus, Katharina; Lee, Woei Ming

    2016-12-01

    Intravital multiphoton microscopy has emerged as a powerful technique to visualize cellular processes in vivo. Real-time processes revealed through live imaging provide many opportunities to capture cellular activities in living animals. The typical parameters that determine the performance of multiphoton microscopy are speed, field of view, 3D imaging, and imaging depth; many of these are important for acquiring data in vivo. Here, we provide a full exposition of a flexible polygon-mirror-based high-speed laser scanning multiphoton imaging system, built around a PCI-6110 card (National Instruments) and a high-speed analog frame grabber card (Matrox Solios eA/XA), which allows for rapid adjustment of frame rates, i.e., 5 Hz to 50 Hz at 512 × 512 pixels. Furthermore, a motion correction algorithm is used to mitigate motion artifacts. A customized control software package called Pscan 1.0 was developed for the system. This is followed by calibration of the imaging performance of the system and a series of quantitative in-vitro and in-vivo imaging experiments in neuronal tissues and mice.

  18. OPSO - The OpenGL based Field Acquisition and Telescope Guiding System

    NASA Astrophysics Data System (ADS)

    Škoda, P.; Fuchs, J.; Honsa, J.

    2006-07-01

    We present OPSO, a modular pointing and auto-guiding system for the coudé spectrograph of the Ondřejov observatory 2m telescope. The current field and slit viewing CCD cameras with image intensifiers are giving only standard TV video output. To allow the acquisition and guiding of very faint targets, we have designed an image enhancing system working in real time on TV frames grabbed by BT878-based video capture card. Its basic capabilities include the sliding averaging of hundreds of frames with bad pixel masking and removal of outliers, display of median of set of frames, quick zooming, contrast and brightness adjustment, plotting of horizontal and vertical cross cuts of seeing disk within given intensity range and many more. From the programmer's point of view, the system consists of three tasks running in parallel on a Linux PC. One C task controls the video capturing over Video for Linux (v4l2) interface and feeds the frames into the large block of shared memory, where the core image processing is done by another C program calling the OpenGL library. The GUI is, however, dynamically built in Python from XML description of widgets prepared in Glade. All tasks are exchanging information by IPC calls using the shared memory segments.
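    The core enhancement step, a sliding average over grabbed frames with bad-pixel masking and outlier removal, can be sketched as below; this is an illustration only and does not reproduce OPSO's real-time C/OpenGL implementation.

```python
# Hedged sketch of the core image-enhancement step: average grabbed TV frames
# with a bad-pixel mask and simple per-pixel outlier removal.
import numpy as np

def enhance(frames, bad_pixel_mask, n_sigma=3.0):
    """frames: (N, H, W) array of grabbed frames; bad_pixel_mask: (H, W) bool array."""
    stack = frames.astype(np.float64)

    # Reject per-pixel outliers relative to the per-pixel median.
    median = np.median(stack, axis=0)
    sigma = stack.std(axis=0) + 1e-9
    outliers = np.abs(stack - median) > n_sigma * sigma
    stack[outliers] = np.nan

    averaged = np.nanmean(stack, axis=0)

    # Replace masked bad pixels with the median of the good pixels (simplistic).
    averaged[bad_pixel_mask] = np.median(averaged[~bad_pixel_mask])
    return averaged
```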

  19. A projective surgical navigation system for cancer resection

    NASA Astrophysics Data System (ADS)

    Gan, Qi; Shao, Pengfei; Wang, Dong; Ye, Jian; Zhang, Zeshu; Wang, Xinrui; Xu, Ronald

    2016-03-01

    Near infrared (NIR) fluorescence imaging can provide precise and real-time information about tumor location during cancer resection surgery. However, many intraoperative fluorescence imaging systems are based on wearable devices or stand-alone displays, leading to distraction of the surgeons and suboptimal outcomes. To overcome these limitations, we designed a projective fluorescence imaging system for surgical navigation. The system consists of an LED excitation light source, a monochromatic CCD camera, a host computer, a mini projector, and a CMOS camera. A software program written in C++ calls OpenCV functions to calibrate and correct the fluorescence images captured by the CCD camera under excitation illumination from the LED source. The images are projected back onto the surgical field by the mini projector. The imaging performance of this projective navigation system is characterized on a tumor-simulating phantom. Image-guided surgical resection is demonstrated in an ex-vivo chicken tissue model. In all the experiments, the images projected by the projector match the locations of fluorescence emission well. Our experimental results indicate that the proposed projective navigation system can be a powerful tool for pre-operative surgical planning, intraoperative surgical guidance, and postoperative assessment of surgical outcome. We have integrated the optoelectronic elements into a compact and miniaturized system in preparation for further clinical validation.
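    The calibration and correction step can be pictured as mapping the camera's fluorescence image into projector coordinates with a homography estimated from a few corresponding points; the sketch below illustrates this with OpenCV, and the point coordinates, projector resolution, and file name are placeholders rather than the system's actual calibration.

```python
# Sketch: warp the CCD camera's fluorescence frame into projector coordinates
# so the projected overlay lands on the surgical field. Values are placeholders.
import cv2
import numpy as np

# Corresponding points: where calibration markers appear in the camera image
# and where the projector must draw them (projector pixel coordinates).
camera_pts = np.array([[102, 85], [540, 92], [530, 410], [110, 400]], dtype=np.float32)
projector_pts = np.array([[0, 0], [853, 0], [853, 479], [0, 479]], dtype=np.float32)

H, _ = cv2.findHomography(camera_pts, projector_pts)

fluorescence = cv2.imread("nir_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
overlay = cv2.warpPerspective(fluorescence, H, (854, 480))         # assumed projector resolution

# `overlay` would then be sent to the mini projector for display.
```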

  20. A Motionless Camera

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Omniview, a motionless, noiseless, exceptionally versatile camera was developed for NASA as a receiving device for guiding space robots. The system can see in one direction and provide as many as four views simultaneously. Developed by Omniview, Inc. (formerly TRI) under a NASA Small Business Innovation Research (SBIR) grant, the system's image transformation electronics produce a real-time image from anywhere within a hemispherical field. Lens distortion is removed, and a corrected "flat" view appears on a monitor. Key elements are a high resolution charge coupled device (CCD), image correction circuitry and a microcomputer for image processing. The system can be adapted to existing installations. Applications include security and surveillance, teleconferencing, imaging, virtual reality, broadcast video and military operations. Omniview technology is now called IPIX. The company was founded in 1986 as TeleRobotics International, became Omniview in 1995, and changed its name to Interactive Pictures Corporation in 1997.

  1. Serendipia: Castilla-La Mancha telepathology network

    PubMed Central

    Peces, Carlos; García-Rojo, Marcial; Sacristán, José; Gallardo, Antonio José; Rodríguez, Ambrosio

    2008-01-01

    Nowadays, there is no standard solution for the acquisition, archiving and communication of digital pathology images. In addition, no commercial Pathology Information System (LIS) can manage the relationship between the reports generated by the pathologist and their corresponding images. For this reason, the Healthcare Service of Castilla-La Mancha decided to create a completely digital Pathology Department through a project called SERENDIPIA. The SERENDIPIA project provides all the image acquisition devices needed to cover every kind of image that can be generated in a Pathology Department. In addition, an Information System was developed within the project that, on the one hand, covers the daily workflow of a Pathology Department (including the storage and management of reports and their images) and, on the other hand, provides a web telepathology portal with collaborative tools such as second opinion. PMID:18673519

  2. Digital micromirror device camera with per-pixel coded exposure for high dynamic range imaging.

    PubMed

    Feng, Wei; Zhang, Fumin; Wang, Weijing; Xing, Wei; Qu, Xinghua

    2017-05-01

    In this paper, we overcome the limited dynamic range of conventional digital cameras and propose a method for realizing high dynamic range imaging (HDRI) with a novel programmable imaging system called a digital micromirror device (DMD) camera. The unique feature of the proposed method is that the spatial and temporal information of incident light in our DMD camera can be flexibly modulated, which enables the camera pixels always to receive a reasonable exposure intensity through DMD pixel-level modulation. More importantly, it allows different light intensity control algorithms to be used in our programmable imaging system to achieve HDRI. We implement an optical system prototype, analyze the theory of per-pixel coded exposure for HDRI, and put forward an adaptive light intensity control algorithm to effectively modulate the different light intensities and recover high dynamic range images. Via experiments, we demonstrate the effectiveness of our method and achieve HDRI on different objects.
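    The per-pixel coded exposure idea can be sketched as follows: each pixel's effective exposure is scaled by its DMD duty cycle, so radiance is recovered by dividing the measurement by the per-pixel attenuation, and the duty cycle is adapted to keep every pixel within the usable range. This is a hedged stand-in, not the authors' adaptive algorithm; the update rule and parameter names are assumptions.

    ```python
    import numpy as np

    def recover_radiance(measured, duty_cycle, dark_level=0.0):
        """Estimate scene radiance from a per-pixel coded-exposure frame.

        measured:   captured image (float array) after DMD modulation
        duty_cycle: per-pixel DMD on-time fraction in (0, 1], same shape
        """
        return (measured - dark_level) / np.clip(duty_cycle, 1e-3, 1.0)

    def update_duty_cycle(measured, duty_cycle, target=0.5, full_scale=255.0,
                          gain=0.8, dc_min=1e-3, dc_max=1.0):
        """One step of a simple adaptive intensity-control loop (illustrative only):
        pixels near saturation get a shorter on-time, dark pixels a longer one."""
        level = measured / full_scale
        # Multiplicative update toward the target mid-scale level.
        new_dc = duty_cycle * (1.0 + gain * (target - level))
        return np.clip(new_dc, dc_min, dc_max)
    ```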

  3. 3D automatic anatomy recognition based on iterative graph-cut-ASM

    NASA Astrophysics Data System (ADS)

    Chen, Xinjian; Udupa, Jayaram K.; Bagci, Ulas; Alavi, Abass; Torigian, Drew A.

    2010-02-01

    We call the computerized assistive process of recognizing, delineating, and quantifying organs and tissue regions in medical imaging, occurring automatically during clinical image interpretation, automatic anatomy recognition (AAR). The AAR system we are developing includes five main parts: model building, object recognition, object delineation, pathology detection, and organ system quantification. In this paper, we focus on the delineation part. For the modeling part, we employ the active shape model (ASM) strategy. For recognition and delineation, we integrate several hybrid strategies of combining purely image based methods with ASM. In this paper, an iterative Graph-Cut ASM (IGCASM) method is proposed for object delineation. An algorithm called GC-ASM, which attempted to synergistically combine ASM and GC, was presented at this symposium last year for object delineation in 2D images. Here, we extend this method to 3D medical image delineation. The IGCASM method effectively combines the rich statistical shape information embodied in ASM with the globally optimal delineation capability of the GC method. We propose a new GC cost function, which effectively integrates the specific image information with the ASM shape model information. The proposed methods are tested on a clinical abdominal CT data set. The preliminary results show that: (a) it is feasible to explicitly bring prior 3D statistical shape information into the GC framework; (b) the 3D IGCASM delineation method improves on ASM and GC and can provide practical operational time on clinical images.
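    The abstract does not reproduce the new GC cost function. As a hedged illustration only, a graph-cut energy augmented with an ASM shape prior typically takes a form such as:

    ```latex
    E(L) \;=\; \sum_{p \in P} \Big[ D_p(L_p) \;+\; \gamma\, S_p\!\left(L_p \mid \hat{\mathbf{s}}_{\mathrm{ASM}}\right) \Big]
    \;+\; \lambda \sum_{(p,q) \in N} V_{pq}(L_p, L_q),
    ```

    where D_p is the image-based data term, S_p penalizes labelings that disagree with the current ASM shape estimate, V_pq is the boundary smoothness term, and λ, γ weight the terms; an iterative scheme of this kind would alternate between minimizing E with graph cuts and refitting the ASM. The exact terms used by IGCASM may differ.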

  4. Promoting calls to a quitline: quantifying the influence of message theme, strong negative emotions and graphic images in television advertisements.

    PubMed

    Farrelly, Matthew C; Davis, Kevin C; Nonnemaker, James M; Kamyab, Kian; Jackson, Christine

    2011-07-01

    To understand the relative effectiveness of television advertisements that differ in their thematic focus and portrayals of negative emotions and/or graphic images in promoting calls to a smokers' quitline. Regression analysis is used to explain variation in quarterly media market-level per smoker calls to the New York State Smokers' Quitline from 2001 to 2009. The primary independent variable is quarterly market-level delivery of television advertisements measured by target audience rating points (TARPs). Advertisements were characterised by their overall objective--promoting cessation, highlighting the dangers of secondhand smoke (SHS) or other--and by their portrayals of strong negative emotions and graphic images. Per smoker call volume is positively correlated with total TARPs (p<0.001), and cessation advertisements are more effective than SHS advertisements in promoting quitline call volume. Advertisements with graphic images only or neither strong negative emotions nor graphic images are associated with higher call volume with similar effect sizes. Call volume was not significantly associated with the number of TARPs for advertisements with strong negative emotions only (p=0.71) or with both graphic images and strong emotions (p=0.09). Exposure to television advertisements is strongly associated with quitline call volume, and both cessation and SHS advertisements can be effective. The use of strong negative emotions in advertisements may be effective in promoting smoking cessation in the population but does not appear to influence quitline call volume. Further research is needed to understand the role of negative emotions in promoting calls to quitlines and cessation more broadly among the majority of smokers who do not call quitlines.

  5. Brute Force Matching Between Camera Shots and Synthetic Images from Point Clouds

    NASA Astrophysics Data System (ADS)

    Boerner, R.; Kröhnert, M.

    2016-06-01

    3D point clouds, acquired by state-of-the-art terrestrial laser scanning (TLS) techniques, provide spatial information with accuracies of up to several millimetres. Unfortunately, common TLS data has no spectral information about the covered scene. However, the matching of TLS data with images is important for monoplotting purposes and point cloud colouration. Well-established methods solve this issue by matching close-range images with point cloud data, either by mounting optical camera systems on top of the laser scanners or by using ground control points. The approach addressed in this paper aims at matching 2D image and 3D point cloud data from a freely moving camera within an environment covered by a large 3D point cloud, e.g. a 3D city model. The key advantage of the free movement benefits augmented reality applications and real-time measurements. Therefore, a so-called real image, captured by a smartphone camera, has to be matched with a so-called synthetic image, which consists of 3D point cloud data projected back to a synthetic projection centre whose exterior orientation parameters match those of the real image, assuming an ideal distortion-free camera.
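    Rendering such a synthetic image amounts to projecting the coloured 3D points through a pinhole camera whose exterior orientation matches the smartphone shot, resolving occlusions with a z-buffer. The sketch below is a minimal illustration under those assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def render_synthetic(points, colors, K, R, t, width, height):
        """Project 3-D points (N,3) with per-point uint8 colors (N,3) into a
        synthetic image using intrinsics K and exterior orientation (R, t)."""
        cam = (R @ points.T + t.reshape(3, 1)).T          # world -> camera frame
        in_front = cam[:, 2] > 0
        cam, colors = cam[in_front], colors[in_front]
        uvw = (K @ cam.T).T
        u = (uvw[:, 0] / uvw[:, 2]).astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).astype(int)
        ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
        u, v, z, colors = u[ok], v[ok], cam[ok, 2], colors[ok]

        image = np.zeros((height, width, 3), dtype=np.uint8)
        zbuf = np.full((height, width), np.inf)
        order = np.argsort(-z)        # draw far points first, near points last
        for ui, vi, zi, ci in zip(u[order], v[order], z[order], colors[order]):
            if zi <= zbuf[vi, ui]:    # crude per-pixel z-buffer
                zbuf[vi, ui] = zi
                image[vi, ui] = ci
        return image
    ```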

  6. Collaborative Workspaces within Distributed Virtual Environments.

    DTIC Science & Technology

    1996-12-01

    such as a text document, a 3D model, or a captured image using a collaborative workspace called the InPerson Whiteboard. The Whiteboard contains a... commands for editing objects drawn on the screen. Finally, when the call is completed, the Whiteboard can be saved to a file for future use. IRIS Annotator... use, and a shared whiteboard that includes a number of multimedia annotation tools. Both systems are also mindful of bandwidth limitations and can

  7. The mass remote sensing image data management based on Oracle InterMedia

    NASA Astrophysics Data System (ADS)

    Zhao, Xi'an; Shi, Shaowei

    2013-07-01

    With the development of remote sensing technology, more and more image data are being acquired, and how to apply and manage these massive image data safely and efficiently has become an urgent problem. Considering the methods and characteristics of mass remote sensing image data management and application, this paper puts forward a new method that uses the Oracle Call Interface and Oracle InterMedia to store the image data, and then uses these components to realize the system function modules. Finally, the image data storage and management system is successfully implemented with VC and the Oracle InterMedia component.

  8. Camera Ready to Install on Mars Reconnaissance Orbiter

    NASA Image and Video Library

    2005-01-07

    A telescopic camera called the High Resolution Imaging Science Experiment, or HiRISE (right), was installed onto the main structure of NASA's Mars Reconnaissance Orbiter (left) on Dec. 11, 2004, at Lockheed Martin Space Systems, Denver.

  9. Science & Technology Review November 2006

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radousky, H

    This month's issue has the following articles: (1) Expanded Supercomputing Maximizes Scientific Discovery--Commentary by Dona Crawford; (2) Thunder's Power Delivers Breakthrough Science--Livermore's Thunder supercomputer allows researchers to model systems at scales never before possible. (3) Extracting Key Content from Images--A new system called the Image Content Engine is helping analysts find significant but hard-to-recognize details in overhead images. (4) Got Oxygen?--Oxygen, especially oxygen metabolism, was key to evolution, and a Livermore project helps find out why. (5) A Shocking New Form of Laserlike Light--According to research at Livermore, smashing a crystal with a shock wave can result in coherent light.

  10. The robot's eyes - Stereo vision system for automated scene analysis

    NASA Technical Reports Server (NTRS)

    Williams, D. S.

    1977-01-01

    Attention is given to the robot stereo vision system which maintains the image produced by solid-state detector television cameras in a dynamic random access memory called RAPID. The imaging hardware consists of sensors (two solid-state image arrays using a charge injection technique), a video-rate analog-to-digital converter, the RAPID memory, various types of computer-controlled displays, and preprocessing equipment (for reflexive actions, processing aids, and object detection). The software is aimed at locating objects and determining traversability. An object-tracking algorithm is discussed and it is noted that tracking speed is in the 50-75 pixels/s range.

  11. An earth imaging camera simulation using wide-scale construction of reflectance surfaces

    NASA Astrophysics Data System (ADS)

    Murthy, Kiran; Chau, Alexandra H.; Amin, Minesh B.; Robinson, M. Dirk

    2013-10-01

    Developing and testing advanced ground-based image processing systems for earth-observing remote sensing applications presents a unique challenge that requires advanced imagery simulation capabilities. This paper presents an earth-imaging multispectral framing camera simulation system called PayloadSim (PaySim) capable of generating terabytes of photorealistic simulated imagery. PaySim leverages previous work in 3-D scene-based image simulation, adding a novel method for automatically and efficiently constructing 3-D reflectance scenes by draping tiled orthorectified imagery over a geo-registered Digital Elevation Map (DEM). PaySim's modeling chain is presented in detail, with emphasis given to the techniques used to achieve computational efficiency. These techniques as well as cluster deployment of the simulator have enabled tuning and robust testing of image processing algorithms, and production of realistic sample data for customer-driven image product development. Examples of simulated imagery of Skybox's first imaging satellite are shown.
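    The draping step can be illustrated with a minimal sketch: every orthoimage pixel becomes a 3-D surface point whose height is read from the co-registered DEM, yielding a reflectance surface that a renderer can then image. This is an illustration under assumed co-registration, not PaySim's actual modeling chain.

    ```python
    import numpy as np

    def drape_ortho_over_dem(ortho, dem, origin, gsd):
        """Build a 3-D reflectance surface from an orthoimage and a co-registered DEM.

        ortho:  (H, W) or (H, W, bands) orthorectified reflectance image
        dem:    (H, W) elevations on the same grid (co-registration assumed)
        origin: (east0, north0) map coordinates of the upper-left pixel centre
        gsd:    ground sample distance in metres per pixel
        Returns (H*W, 3) surface points and the matching (H*W, bands) reflectances.
        """
        h, w = dem.shape
        cols, rows = np.meshgrid(np.arange(w), np.arange(h))
        east = origin[0] + cols * gsd
        north = origin[1] - rows * gsd      # northing decreases with row index
        points = np.stack([east, north, dem], axis=-1).reshape(-1, 3)
        reflectance = ortho.reshape(h * w, -1)
        return points, reflectance
    ```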

  12. Multi-frequency fine resolution imaging radar instrumentation and data acquisition. [side-looking radar for airborne imagery

    NASA Technical Reports Server (NTRS)

    Rendleman, R. A.; Champagne, E. B.; Ferris, J. E.; Liskow, C. L.; Marks, J. M.; Salmer, R. J.

    1974-01-01

    Development of a dual polarized L-band radar imaging system to be used in conjunction with the present dual polarized X-band radar is described. The technique used called for heterodyning the transmitted frequency from X-band to L-band and again heterodyning the received L-band signals back to X-band for amplification, detection, and recording.

  13. GLO-Roots: an imaging platform enabling multidimensional characterization of soil-grown root systems

    PubMed Central

    Rellán-Álvarez, Rubén; Lobet, Guillaume; Lindner, Heike; Pradier, Pierre-Luc; Sebastian, Jose; Yee, Muh-Ching; Geng, Yu; Trontin, Charlotte; LaRue, Therese; Schrager-Lavelle, Amanda; Haney, Cara H; Nieu, Rita; Maloof, Julin; Vogel, John P; Dinneny, José R

    2015-01-01

    Root systems develop different root types that individually sense cues from their local environment and integrate this information with systemic signals. This complex multi-dimensional amalgam of inputs enables continuous adjustment of root growth rates, direction, and metabolic activity that define a dynamic physical network. Current methods for analyzing root biology balance physiological relevance with imaging capability. To bridge this divide, we developed an integrated-imaging system called Growth and Luminescence Observatory for Roots (GLO-Roots) that uses luminescence-based reporters to enable studies of root architecture and gene expression patterns in soil-grown, light-shielded roots. We have developed image analysis algorithms that allow the spatial integration of soil properties, gene expression, and root system architecture traits. We propose GLO-Roots as a system that has great utility in presenting environmental stimuli to roots in ways that evoke natural adaptive responses and in providing tools for studying the multi-dimensional nature of such processes. DOI: http://dx.doi.org/10.7554/eLife.07597.001 PMID:26287479

  14. GLO-Roots: An imaging platform enabling multidimensional characterization of soil-grown root systems

    DOE PAGES

    Rellan-Alvarez, Ruben; Lobet, Guillaume; Lindner, Heike; ...

    2015-08-19

    Root systems develop different root types that individually sense cues from their local environment and integrate this information with systemic signals. This complex multi-dimensional amalgam of inputs enables continuous adjustment of root growth rates, direction, and metabolic activity that define a dynamic physical network. Current methods for analyzing root biology balance physiological relevance with imaging capability. To bridge this divide, we developed an integrated-imaging system called Growth and Luminescence Observatory for Roots (GLO-Roots) that uses luminescence-based reporters to enable studies of root architecture and gene expression patterns in soil-grown, light-shielded roots. We have developed image analysis algorithms that allow the spatial integration of soil properties, gene expression, and root system architecture traits. We propose GLO-Roots as a system that has great utility in presenting environmental stimuli to roots in ways that evoke natural adaptive responses and in providing tools for studying the multi-dimensional nature of such processes.

  15. A New Tool for Quality Control

    NASA Technical Reports Server (NTRS)

    1988-01-01

    Diffracto, Ltd. is now offering a new product inspection system that allows detection of minute flaws previously difficult or impossible to observe. Called D-Sight, it represents a revolutionary technique for inspection of flat or curved surfaces to find such imperfections as dings, dents and waviness. The system amplifies defects, making them highly visible to simplify decisions about corrective measures or to identify areas that need further study. The CVA 3000 employs a camera, high intensity lamps and a special reflective screen to produce a D-Sight image of light reflected from a surface. The image is captured and stored in a computerized vision system, then analyzed by a computer program. A live image of the surface is projected onto a video display and compared with a stored master image to identify imperfections. Localized defects measuring less than 1/1000 of an inch are readily detected.
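    The live-versus-master comparison can be illustrated with simple image differencing and thresholding. The commercial analysis is proprietary; the sketch below is only a generic stand-in with assumed parameter values.

    ```python
    import cv2

    def flag_defects(live, master, blur=5, thresh=25, min_area=20):
        """Compare a live D-Sight-style image with a stored master image and return
        bounding boxes of regions that differ enough to be flagged as defects."""
        a = cv2.GaussianBlur(cv2.cvtColor(live, cv2.COLOR_BGR2GRAY), (blur, blur), 0)
        b = cv2.GaussianBlur(cv2.cvtColor(master, cv2.COLOR_BGR2GRAY), (blur, blur), 0)
        diff = cv2.absdiff(a, b)
        _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
    ```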

  16. Detecting Exoplanets with the New Worlds Observer: The Problem of Exozodiacal Dust

    NASA Technical Reports Server (NTRS)

    Roberge, A.; Noecker, M. C.; Glassman, T. M.; Oakley, P.; Turnbull, M. C.

    2009-01-01

    Dust coming from asteroids and comets will strongly affect direct imaging and characterization of terrestrial planets in the Habitable Zones of nearby stars. Such dust in the Solar System is called the zodiacal dust (or 'zodi' for short). Higher levels of similar dust are seen around many nearby stars, confined in disks called debris disks. Future high-contrast images of an Earth-like exoplanet will very likely be background-limited by light scattered off both the local Solar System zodi and the circumstellar dust in the extrasolar system (the exozodiacal dust). Clumps in the exozodiacal dust, which are expected in planet-hosting systems, may also be a source of confusion. Here we discuss the problems associated with imaging an Earth-like planet in the presence of unknown levels of exozodiacal dust. Basic formulae for the exoplanet imaging exposure time as a function of star, exoplanet, zodi, exozodi, and telescope parameters will be presented. To examine the behavior of these formulae, we apply them to the New Worlds Observer (NWO) mission. NWO is a proposed 4-meter UV/optical/near-IR telescope, with a free flying starshade to suppress the light from a nearby star and achieve the high contrast needed for detection and characterization of a terrestrial planet in the star's Habitable Zone. We find that NWO can accomplish its science goals even if exozodiacal dust levels are typically much higher than the Solar System zodi level. Finally, we highlight a few additional problems relating to exozodiacal dust that have yet to be solved.
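    The NWO formulae themselves are not reproduced in the abstract; as a rough, assumed illustration, background-limited direct imaging obeys a scaling of the form:

    ```latex
    t_{\mathrm{exp}} \;\approx\; \mathrm{SNR}^2 \,
    \frac{F_{\mathrm{planet}} + F_{\mathrm{zodi}} + F_{\mathrm{exozodi}} + F_{\mathrm{leak}}}
         {F_{\mathrm{planet}}^{2}\, A\, \eta},
    ```

    where the F terms are photon rates per unit collecting area from the planet, local zodi, exozodi, and residual starlight, A is the telescope collecting area, and η the end-to-end throughput. Once the exozodi dominates the background, the required exposure time grows roughly linearly with its level.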

  17. Reengineering the picture archiving and communication system (PACS) process for digital imaging networks PACS.

    PubMed

    Horton, M C; Lewis, T E; Kinsey, T V

    1999-05-01

    Prior to June 1997, military picture archiving and communications systems (PACS) were planned, procured, and installed with key decisions on the system, equipment, and even funding sources made through a research and development office called Medical Diagnostic Imaging Systems (MDIS). Beginning in June 1997, the Joint Imaging Technology Project Office (JITPO) initiated a collaborative and consultative process for planning and implementing PACS into military treatment facilities through a new Department of Defense (DoD) contract vehicle called digital imaging networks (DIN)-PACS. The JITPO reengineered this process incorporating multiple organizations and politics. The reengineered PACS process administered through the JITPO transformed the decision process and accountability from a single office to a consultative method that increased end-user knowledge, responsibility, and ownership in PACS. The JITPO continues to provide information and services that assist multiple groups and users in rendering PACS planning and implementation decisions. Local site project managers are involved from the outset, and this end-user collaboration has made the sometimes difficult transition to PACS an easier and more acceptable process for all involved. Corporately, this process saved DoD sites millions by having PACS plans developed within the government first, and then having vendors respond specifically to those plans. The integrity and efficiency of the process have reduced the opportunity for implementing nonstandard systems while sharing resources and reducing wasted government dollars. This presentation will describe the chronology of changes, encountered obstacles, and lessons learned within the reengineering of the PACS process for DIN-PACS.

  18. Towards Scalable 1024 Processor Shared Memory Systems

    NASA Technical Reports Server (NTRS)

    Ciotti, Robert B.; Thigpen, William W. (Technical Monitor)

    2001-01-01

    Over the past 3 years, NASA Ames has been involved in a cooperative effort with SGI to develop the largest single system image systems available. Currently a 1024-processor Origin3000 is under development, with first boot expected later in the summer of 2001. This paper discusses some early results with a 512p Origin3000 system and some arcane IRIX system calls that can dramatically improve scaling performance.

  19. Do drug warnings and market withdrawals have an impact on the number of calls to teratogen information services?

    PubMed

    Sheehy, O; Gendron, M-P; Martin, B; Bérard, A

    2012-06-01

    IMAGe provides information on the risks and benefits of medication use during pregnancy and lactation. The aim of this study was to determine the impact of Health Canada warnings on the number of calls received at IMAGe. We analyzed calls received between January 2003 and March 2008. The impact of the following warnings/withdrawal was studied: paroxetine and the risk of cardiac malformations (09/29/2005), selective serotonin reuptake inhibitors (SSRIs) and the risk of persistent pulmonary hypertension of the newborn (PPHN) (03/10/2006), and the rofecoxib market withdrawal (09/30/2004). Interrupted auto-regressive integrated moving average (ARIMA) analyses were used to test the impact of each warning on the number of calls received at IMAGe. 61,505 calls were analyzed. The paroxetine warning had a temporary effect, increasing the overall number of calls to IMAGe, and an abrupt permanent effect on the number of calls related to antidepressant exposures. The PPHN warning had no impact, but we observed a significant increase in the number of calls following the rofecoxib market withdrawal. Health Canada needs to consider the increase in the demand for information at IMAGe following warnings on the risks of medication use during pregnancy. © Georg Thieme Verlag KG Stuttgart · New York.
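    An interrupted time-series analysis of this kind can be sketched with statsmodels by regressing the call series on a step indicator for the warning date within an ARIMA error model. The exact ARIMA orders and variables used by the authors are not given in the abstract; the series name, column name and order below are hypothetical.

    ```python
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    # calls: a monthly pandas Series of call counts indexed by date (hypothetical data).
    def intervention_model(calls: pd.Series, warning_date: str, order=(1, 0, 1)):
        # Step indicator: 0 before the Health Canada warning, 1 afterwards
        # (an abrupt, permanent intervention; a pulse would model a temporary one).
        step = (calls.index >= pd.Timestamp(warning_date)).astype(float)
        exog = pd.DataFrame({"step": step}, index=calls.index)
        fit = ARIMA(calls, exog=exog, order=order).fit()
        return fit  # fit.params["step"] estimates the level shift in calls

    # Example: effect of the paroxetine warning (2005-09-29) on antidepressant-related calls.
    # result = intervention_model(antidepressant_calls, "2005-09-29")
    ```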

  20. Enantiomer‐specific measurements of current‐use pesticides in aquatic systems

    EPA Science Inventory

    Some current‐use pesticides are chiral and have nonsuperimposable mirror images called enantiomers that exhibit identical physical–chemical properties but can behave differently when in contact with other chiral molecules (e.g., regarding degradation and uptake). Thes...

  1. 'Rosy Red' Soil in Phoenix's Scoop

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This image shows fine-grained material inside the Robotic Arm scoop as seen by the Robotic Arm Camera (RAC) aboard NASA's Phoenix Mars Lander on June 25, 2008, the 30th Martian day, or sol, of the mission.

    The image shows fine, fluffy, red soil particles collected in a sample called 'Rosy Red.' The sample was dug from the trench named 'Snow White' in the area called 'Wonderland.' Some of the Rosy Red sample was delivered to Phoenix's Optical Microscope and Wet Chemistry Laboratory for analysis.

    The RAC provides its own illumination, so the color seen in RAC images is color as seen on Earth, not color as it would appear on Mars.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  2. Applications of superconducting bolometers in security imaging

    NASA Astrophysics Data System (ADS)

    Luukanen, A.; Leivo, M. M.; Rautiainen, A.; Grönholm, M.; Toivanen, H.; Grönberg, L.; Helistö, P.; Mäyrä, A.; Aikio, M.; Grossman, E. N.

    2012-12-01

    Millimeter-wave (MMW) imaging systems are currently being deployed worldwide for airport security screening applications. Security screening through MMW imaging is facilitated by the relatively good transmission of these wavelengths through common clothing materials. Given the long wavelength of operation (frequencies between 20 GHz and ~100 GHz, corresponding to wavelengths between 1.5 cm and 3 mm), existing systems are suited for close-range imaging only, due to the substantial diffraction effects associated with practical aperture diameters. Present and emerging security challenges call for systems that are capable of imaging concealed threat items at stand-off ranges beyond 5 meters at near video frame rates, requiring a substantial increase in operating frequency in order to achieve useful spatial resolution. The construction of such imaging systems operating at several hundred GHz has been hindered by the lack of submm-wave low-noise amplifiers. In this paper we summarize our efforts in developing a submm-wave video camera which utilizes cryogenic antenna-coupled microbolometers as detectors. While superconducting detectors impose the use of a cryogenic system, we argue that the resulting increase in back-end complexity is a favorable trade-off compared to complex and expensive room temperature submm-wave LNAs, both in performance and in system cost.
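    The push to several hundred GHz follows directly from the diffraction limit. As a back-of-the-envelope illustration (the 0.5 m aperture below is an assumed value, not taken from the paper):

    ```latex
    \theta_{\min} \;\approx\; 1.22\,\frac{\lambda}{D},
    \qquad
    \Delta x \;\approx\; 1.22\,\frac{\lambda\,R}{D},
    ```

    where D is the aperture diameter, R the stand-off range, and Δx the achievable spatial resolution. At R = 5 m with D = 0.5 m, a 100 GHz system (λ = 3 mm) resolves only about 3.7 cm, whereas 500 GHz (λ = 0.6 mm) brings this to roughly 7 mm.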

  3. Lagrangian formulation of irreversible thermodynamics and the second law of thermodynamics.

    PubMed

    Glavatskiy, K S

    2015-05-28

    We show that the equations which describe irreversible evolution of a system can be derived from a variational principle. We suggest a Lagrangian, which depends on the properties of the normal and the so-called "mirror-image" system. The Lagrangian is symmetric in time and therefore compatible with microscopic reversibility. The evolution equations in the normal and mirror-imaged systems are decoupled and describe therefore independent irreversible evolution of each of the systems. The second law of thermodynamics follows from a symmetry of the Lagrangian. Entropy increase in the normal system is balanced by the entropy decrease in the mirror-image system, such that there exists an "integral of evolution" which is a constant. The derivation relies on the property of local equilibrium, which states that the local relations between the thermodynamic quantities in non-equilibrium are the same as in equilibrium.

  4. Designing a wearable navigation system for image-guided cancer resection surgery

    PubMed Central

    Shao, Pengfei; Ding, Houzhu; Wang, Jinkun; Liu, Peng; Ling, Qiang; Chen, Jiayu; Xu, Junbin; Zhang, Shiwu; Xu, Ronald

    2015-01-01

    A wearable surgical navigation system is developed for intraoperative imaging of surgical margin in cancer resection surgery. The system consists of an excitation light source, a monochromatic CCD camera, a host computer, and a wearable headset unit in either of the following two modes: head-mounted display (HMD) and Google glass. In the HMD mode, a CMOS camera is installed on a personal cinema system to capture the surgical scene in real-time and transmit the image to the host computer through a USB port. In the Google glass mode, a wireless connection is established between the glass and the host computer for image acquisition and data transport tasks. A software program is written in Python to call OpenCV functions for image calibration, co-registration, fusion, and display with augmented reality. The imaging performance of the surgical navigation system is characterized in a tumor simulating phantom. Image-guided surgical resection is demonstrated in an ex vivo tissue model. Surgical margins identified by the wearable navigation system are co-incident with those acquired by a standard small animal imaging system, indicating the technical feasibility for intraoperative surgical margin detection. The proposed surgical navigation system combines the sensitivity and specificity of a fluorescence imaging system and the mobility of a wearable goggle. It can be potentially used by a surgeon to identify the residual tumor foci and reduce the risk of recurrent diseases without interfering with the regular resection procedure. PMID:24980159

  5. Designing a wearable navigation system for image-guided cancer resection surgery.

    PubMed

    Shao, Pengfei; Ding, Houzhu; Wang, Jinkun; Liu, Peng; Ling, Qiang; Chen, Jiayu; Xu, Junbin; Zhang, Shiwu; Xu, Ronald

    2014-11-01

    A wearable surgical navigation system is developed for intraoperative imaging of surgical margin in cancer resection surgery. The system consists of an excitation light source, a monochromatic CCD camera, a host computer, and a wearable headset unit in either of the following two modes: head-mounted display (HMD) and Google glass. In the HMD mode, a CMOS camera is installed on a personal cinema system to capture the surgical scene in real-time and transmit the image to the host computer through a USB port. In the Google glass mode, a wireless connection is established between the glass and the host computer for image acquisition and data transport tasks. A software program is written in Python to call OpenCV functions for image calibration, co-registration, fusion, and display with augmented reality. The imaging performance of the surgical navigation system is characterized in a tumor simulating phantom. Image-guided surgical resection is demonstrated in an ex vivo tissue model. Surgical margins identified by the wearable navigation system are co-incident with those acquired by a standard small animal imaging system, indicating the technical feasibility for intraoperative surgical margin detection. The proposed surgical navigation system combines the sensitivity and specificity of a fluorescence imaging system and the mobility of a wearable goggle. It can be potentially used by a surgeon to identify the residual tumor foci and reduce the risk of recurrent diseases without interfering with the regular resection procedure.
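    The calibration, co-registration, fusion and augmented-reality display chain described above maps onto standard OpenCV calls. The sketch below shows one plausible fusion step, a fluorescence-over-color overlay; the homography `H`, the pseudocolor choice and the blending parameters are assumptions, not the authors' code.

    ```python
    import cv2
    import numpy as np

    def overlay_fluorescence(color_frame, fluo_frame, H, alpha=0.4, thresh=30):
        """Fuse a co-registered fluorescence image onto the color surgical scene.

        fluo_frame: single-channel 8-bit fluorescence image
        H:          3x3 homography mapping fluorescence-camera pixels into the
                    color frame, assumed estimated in a prior co-registration step
        """
        h, w = color_frame.shape[:2]
        fluo_reg = cv2.warpPerspective(fluo_frame, H, (w, h))
        # Pseudocolor the fluorescence signal and blend it where it is significant.
        heat = cv2.applyColorMap(fluo_reg, cv2.COLORMAP_JET)
        mask = (fluo_reg > thresh)[..., None]
        blended = cv2.addWeighted(color_frame, 1.0 - alpha, heat, alpha, 0)
        return np.where(mask, blended, color_frame)
    ```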

  6. Excitation-scanning hyperspectral imaging system for microscopic and endoscopic applications

    NASA Astrophysics Data System (ADS)

    Mayes, Sam A.; Leavesley, Silas J.; Rich, Thomas C.

    2016-04-01

    Current microscopic and endoscopic technologies for cancer screening utilize white-light illumination sources. Hyperspectral imaging has been shown to improve sensitivity while retaining specificity when compared to white-light imaging in both microscopy and in vivo imaging. However, hyperspectral imaging methods have historically suffered from slow acquisition times due to the narrow bandwidth of spectral filters; often minutes are required to gather a full image stack. We have developed a novel approach called excitation-scanning hyperspectral imaging that provides 2-3 orders of magnitude increased signal strength. This reduces acquisition times significantly, allowing for live video acquisition. Here, we describe a preliminary prototype excitation-scanning hyperspectral imaging system that can be coupled with endoscopes or microscopes for hyperspectral imaging of tissues and cells. Our system is comprised of three subsystems: illumination, transmission, and imaging. The illumination subsystem employs light-emitting diode arrays to illuminate at different wavelengths. The transmission subsystem utilizes a unique geometry of optics and a liquid light guide. Software controls allow us to interface with and control the subsystems and components. Digital and analog signals are used to coordinate wavelength intensity, cycling, and camera triggering. Testing of the system shows it can cycle 16 wavelengths in as little as 1 ms per cycle. Additionally, more than 18% of the light transmits through the system. Our setup should allow for hyperspectral imaging of tissue and cells in real time.

  7. 4-mm-diameter three-dimensional imaging endoscope with steerable camera for minimally invasive surgery (3-D-MARVEL).

    PubMed

    Bae, Sam Y; Korniski, Ronald J; Shearn, Michael; Manohara, Harish M; Shahinian, Hrayr

    2017-01-01

    High-resolution three-dimensional (3-D) imaging (stereo imaging) by endoscopes in minimally invasive surgery, especially in space-constrained applications such as brain surgery, is one of the most desired capabilities. Such capability exists at larger than 4-mm overall diameters. We report the development of a stereo imaging endoscope of 4-mm maximum diameter, called Multiangle, Rear-Viewing Endoscopic Tool (MARVEL) that uses a single-lens system with complementary multibandpass filter (CMBF) technology to achieve 3-D imaging. In addition, the system is endowed with the capability to pan from side-to-side over an angle of [Formula: see text], which is another unique aspect of MARVEL for such a class of endoscopes. The design and construction of a single-lens, CMBF aperture camera with integrated illumination to generate 3-D images, and the actuation mechanism built into it is summarized.

  8. [The dilemma of data flood - reducing costs and increasing quality control].

    PubMed

    Gassmann, B

    2012-09-05

    Digitization is found everywhere in sonography. Printing of ultrasound images on special paper with a video printer is now done only in isolated cases. Sonographic procedures are increasingly documented by saving image sequences instead of still frames. Echocardiography is now routinely recorded with so-called R-R loops. In contrast-enhanced ultrasound, recording of sequences is necessary to obtain a thorough impression of the vascular structure of interest. Working with this flood of data in daily practice requires specialized software. Comparison of stored and recent images/sequences during follow-up is very helpful. Nevertheless, quality control of the ultrasound system and the transducers is simple and safe: using a phantom for detail resolution and general image quality, the stored images/sequences are comparable over the life cycle of the system. Comparison at follow-up immediately reveals decreased image quality and transducer defects.

  9. EIR: enterprise imaging repository, an alternative imaging archiving and communication system.

    PubMed

    Bian, Jiang; Topaloglu, Umit; Lane, Cheryl

    2009-01-01

    The enormous number of studies performed at the Nuclear Medicine Department of the University of Arkansas for Medical Sciences (UAMS) generates a huge amount of PET/CT images daily. A DICOM workstation had been used as a "mini-PACS" to route all studies, an arrangement that has proven slow for various reasons. However, replacing the workstation with a commercial PACS server is not only cost-inefficient; more often, the PACS vendors are reluctant to take responsibility for the final integration of these components. Therefore, in this paper, we propose an alternative imaging archiving and communication system called the Enterprise Imaging Repository (EIR). EIR consists of two distinct components: an image processing daemon and a user-friendly web interface. EIR not only reduces the overall waiting time for transferring a study from the modalities to the radiologists' workstations, but also provides a preferable presentation.

  10. A New Approach to Image Fusion Based on Cokriging

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; LeMoigne, Jacqueline; Mount, David M.; Morisette, Jeffrey T.

    2005-01-01

    We consider the image fusion problem involving remotely sensed data. We introduce cokriging as a method to perform fusion. We investigate the advantages of fusing Hyperion with ALI. The evaluation is performed by comparing the classification of the fused data with that of the input images and by calculating well-chosen quantitative fusion quality metrics. We consider the Invasive Species Forecasting System (ISFS) project as our fusion application. The fusion of ALI with Hyperion data is studied using PCA and wavelet-based fusion. We then propose utilizing a geostatistics-based interpolation method called cokriging as a new approach for image fusion.
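    The abstract does not spell out the estimator. For reference, ordinary cokriging of a primary band Z1 (e.g., an ALI band) aided by a co-located secondary band Z2 (e.g., a Hyperion band) has the textbook form below, which is not necessarily the exact variant used in the paper:

    ```latex
    \hat{Z}_1(\mathbf{x}_0) \;=\; \sum_{i=1}^{n_1} \lambda_i\, Z_1(\mathbf{x}_i)
    \;+\; \sum_{j=1}^{n_2} \mu_j\, Z_2(\mathbf{x}_j),
    \qquad
    \sum_i \lambda_i = 1,\quad \sum_j \mu_j = 0,
    ```

    with the weights λ_i, μ_j obtained by solving the cokriging system built from the direct and cross variograms of the two sensors.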

  11. Optical Coherence Tomography

    PubMed Central

    Huang, David; Swanson, Eric A.; Lin, Charles P.; Schuman, Joel S.; Stinson, William G.; Chang, Warren; Hee, Michael R.; Flotte, Thomas; Gregory, Kenton; Puliafito, Carmen A.; Fujimoto, James G.

    2015-01-01

    A technique called optical coherence tomography (OCT) has been developed for noninvasive cross-sectional imaging in biological systems. OCT uses low-coherence interferometry to produce a two-dimensional image of optical scattering from internal tissue microstructures in a way that is analogous to ultrasonic pulse-echo imaging. OCT has longitudinal and lateral spatial resolutions of a few micrometers and can detect reflected signals as small as ~10−10 of the incident optical power. Tomographic imaging is demonstrated in vitro in the peripapillary area of the retina and in the coronary artery, two clinically relevant examples that are representative of transparent and turbid media, respectively. PMID:1957169

  12. Wind Etching

    NASA Image and Video Library

    2016-08-09

    Today's VIS image is located in a region that has been heavily modified by wind action. The narrow ridge/valley systems seen in this image are features called yardangs. Yardangs form when unidirectional winds blow across poorly cemented materials. Multiple yardang directions can indicate changes in regional wind regimes. Orbit Number: 64188 Latitude: -0.629314 Longitude: 206.572 Instrument: VIS Captured: 2016-06-03 01:20 http://photojournal.jpl.nasa.gov/catalog/PIA20799

  13. 3D Lunar Terrain Reconstruction from Apollo Images

    NASA Technical Reports Server (NTRS)

    Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.

    2009-01-01

    Generating accurate three-dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high resolution scanned images from the Apollo 15 mission.
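    The bundle-adjustment step in (1) follows the standard least-squares formulation; as a reference formulation (not code or equations quoted from the Ames Stereo Pipeline), it minimizes the total reprojection error:

    ```latex
    \min_{\{\mathbf{R}_j,\,\mathbf{C}_j\},\,\{\mathbf{X}_i\}}
    \;\sum_{i,j} \big\| \mathbf{x}_{ij} - \pi\!\left(\mathbf{R}_j(\mathbf{X}_i - \mathbf{C}_j)\right) \big\|^2 ,
    ```

    where X_i are 3-D surface points, C_j and R_j the camera station positions and orientations, x_ij the observed image measurements, and π the perspective projection; the stochastic plane-fitting matcher in step (2) would then supply the dense correspondences from which the elevation model is triangulated.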

  14. Demonstration of a real-time implementation of the ICVision holographic stereogram display

    NASA Astrophysics Data System (ADS)

    Kulick, Jeffrey H.; Jones, Michael W.; Nordin, Gregory P.; Lindquist, Robert G.; Kowel, Stephen T.; Thomsen, Axel

    1995-07-01

    There is increasing interest in real-time autostereoscopic 3D displays. Such systems allow 3D objects or scenes to be viewed by one or more observers with correct motion parallax without the need for glasses or other viewing aids. Potential applications of such systems include mechanical design, training and simulation, medical imaging, virtual reality, and architectural design. One approach to the development of real-time autostereoscopic display systems has been to develop real-time holographic display systems. The approach taken by most of the systems is to compute and display a number of holographic lines at one time, and then use a scanning system to replicate the images throughout the display region. The approach taken in the ICVision system being developed at the University of Alabama in Huntsville is very different. In the ICVision display, a set of discrete viewing regions called virtual viewing slits are created by the display. Each pixel is required to fill every viewing slit with different image data. When the images presented in two virtual viewing slits separated by an interocular distance are filled with stereoscopic pair images, the observer sees a 3D image. The images are computed so that a different stereo pair is presented each time the viewer moves 1 eye pupil diameter (approximately mm), thus providing a series of stereo views. Each pixel is subdivided into smaller regions, called partial pixels. Each partial pixel is filled with a diffraction grating that is just that required to fill an individual virtual viewing slit. The sum of all the partial pixels in a pixel then fills all the virtual viewing slits. The final version of the ICVision system will form diffraction gratings in a liquid crystal layer on the surface of VLSI chips in real time. Processors embedded in the VLSI chips will compute the display in real time. In the current version of the system, a commercial AMLCD is sandwiched with a diffraction grating array. This paper will discuss the design details of a portable 3D display based on the integration of a diffractive optical element with a commercial off-the-shelf AMLCD. The diffractive optic contains several hundred thousand partial-pixel gratings and the AMLCD modulates the light diffracted by the gratings.

  15. Autonomous control systems: applications to remote sensing and image processing

    NASA Astrophysics Data System (ADS)

    Jamshidi, Mohammad

    2001-11-01

    One of the main challenges of any control (or image processing) paradigm is being able to handle complex systems under unforeseen uncertainties. A system may be called complex here if its dimension (order) is too high, its model (if available) is nonlinear and interconnected, and information on the system is so uncertain that classical techniques cannot easily handle the problem. Examples of complex systems are power networks, space robotic colonies, the national air traffic control system, an integrated manufacturing plant, the Hubble Telescope, the International Space Station, etc. Soft computing, a consortium of methodologies such as fuzzy logic, neuro-computing, genetic algorithms and genetic programming, has proven to be a powerful tool for adding autonomy and semi-autonomy to many complex systems. For such systems the size of a soft computing control architecture will be nearly infinite. In this paper new paradigms using soft computing approaches are utilized to design autonomous controllers and image enhancers for a number of application areas. These applications are satellite array formations for synthetic aperture radar interferometry (InSAR) and enhancement of analog and digital images.

  16. Three Dimensional Visualization of GOES Cloud Data Using Octress

    DTIC Science & Technology

    1993-06-01

    structure for CAD of integrated circuits that can subdivide the cubes into more complex polyhedrons. Medical imaging is also taking advantage of the... [fragments of FORTRAN source follow in the record, prompting for a GOES image name (CIGOES), opening the PARAM database with OPENDB, and reporting errors via FRIMERR]

  17. DSCOVR Transcendance

    NASA Astrophysics Data System (ADS)

    Herman, J. R.; Boccara, M.; Albers, S. C.

    2017-12-01

    The Earth Polychromatic Imaging Camera (EPIC) onboard the DSCOVR satellite continuously views the sun-illuminated portion of the Earth with spectral coverage in the visible band, among others. Ideally, such a system would be able to provide a video with continuous coverage up to real time. However due to limits in onboard storage, bandwidth, and antenna coverage on the ground, we can receive at most 20 images a day, separated by at least one hour. Also, the processing time to generate the visible image out of the separate RGB channels delays public images delivery by a day or two. Finally, occasional remote tuning of instruments can cause several day periods where the imagery is completely missing. We are proposing a model-based method to fill these gaps and restore images lost in real-time processing. We are combining two sets of algorithms. The first, called Blueturn, interpolates successive images while projecting them on a 3-D model of the Earth, all this being done in real-time using the GPU. The second, called Simulated Weather Imagery (SWIM), makes EPIC-like images utilizing a ray-tracing model of scattering and absorption of sunlight by clouds, atmospheric gases, aerosols, and land surface. Clouds are obtained from 3-D gridded analyses and forecasts using weather modeling systems such as the Local Analysis and Prediction System (LAPS), and the Flow-following finite-volume Finite Icosahedral Model (FIM). SWIM uses EPIC images to validate its models. Typical model grid spacing is about 20km and is roughly commensurate with the EPIC imagery. Calculating one image per hour is enough for Blueturn to generate a smooth video. The synthetic images are designed to be visually realistic and aspire to be indistinguishable from the real ones. Resulting interframe transitions become seamless, and real-time delay is reduced to 1 hour. With Blueturn already available as a free online app, streaming EPIC images directly from NASA's public website, and with another SWIM server to ensure constant interval between key images, this work brings transcendance to EPIC's tribute. Enriched by two years of actual service in space, the most real holistic view of the Earth will be continued at a high degree of fidelity, regardless of EPIC limitations or interruptions.

  18. Development of a real time multiple target, multi camera tracker for civil security applications

    NASA Astrophysics Data System (ADS)

    Åkerlund, Hans

    2009-09-01

    A surveillance system has been developed that can use multiple TV cameras to detect and track personnel and objects in real time in public areas. This document describes the development and the system setup. The system is called NIVS (Networked Intelligent Video Surveillance). Persons in the images are tracked and displayed on a 3D map of the surveyed area.

  19. Calibration for single multi-mode fiber digital scanning microscopy imaging system

    NASA Astrophysics Data System (ADS)

    Yin, Zhe; Liu, Guodong; Liu, Bingguo; Gan, Yu; Zhuang, Zhitao; Chen, Fengdong

    2015-11-01

    Single multimode fiber (MMF) digital scanning imaging is a development trend in modern endoscopy. We concentrate on the calibration method for this imaging system. Calibration comprises two processes: forming scanning focused spots and calibrating the coupling factors, which vary with position. An adaptive parallel coordinate (APC) algorithm is adopted to form the focused spots at the MMF output. Compared with other algorithms, APC has many merits, i.e. high speed, a small amount of calculation and no iterations. The ratio of the optical power captured by the MMF to the intensity of the focused spot is called the coupling factor. We set up a calibration experimental system to form the scanning focused spots and calculate the coupling factors for different object positions. The experimental results show that the coupling factor is higher in the center than at the edge.

  20. Immersive telepresence system using high-resolution omnidirectional movies and a locomotion interface

    NASA Astrophysics Data System (ADS)

    Ikeda, Sei; Sato, Tomokazu; Kanbara, Masayuki; Yokoya, Naokazu

    2004-05-01

    Technology that enables users to experience a remote site virtually is called telepresence. A telepresence system using real-environment images is expected to be used in fields such as entertainment, medicine and education. This paper describes a novel telepresence system which enables users to walk through a photorealistic virtualized environment by actually walking. To realize such a system, a wide-angle high-resolution movie is projected on an immersive multi-screen display to present the virtualized environment to users, and a treadmill is controlled according to the detected locomotion of the user. In this study, we use an omnidirectional multi-camera system to acquire images of a real outdoor scene. The proposed system provides users with a rich sense of walking in a remote site.

  1. Developing tools for digital radar image data evaluation

    NASA Technical Reports Server (NTRS)

    Domik, G.; Leberl, F.; Raggam, J.

    1986-01-01

    The refinement of radar image analysis methods has led to a need for a systems approach to radar image processing software. Developments stimulated through satellite radar are combined with standard image processing techniques to create a user environment to manipulate and analyze airborne and satellite radar images. One aim is to create radar products for the user from the original data to enhance the ease of understanding the contents. The results are called secondary image products and derive from the original digital images. Another aim is to support interactive SAR image analysis. Software methods permit use of a digital height model to create ortho images, synthetic images, stereo-ortho images, radar maps or color combinations of different component products. Efforts are ongoing to integrate individual tools into a combined hardware/software environment for interactive radar image analysis.

  2. Parallel object-oriented data mining system

    DOEpatents

    Kamath, Chandrika; Cantu-Paz, Erick

    2004-01-06

    A data mining system uncovers patterns, associations, anomalies and other statistically significant structures in data. Data files are read and displayed. Objects in the data files are identified. Relevant features for the objects are extracted. Patterns among the objects are recognized based upon the features. Data from the Faint Images of the Radio Sky at Twenty Centimeters (FIRST) sky survey was used to search for bent doubles. This test was conducted on data from the Very Large Array in New Mexico which seeks to locate a special type of quasar (radio-emitting stellar object) called bent doubles. The FIRST survey has generated more than 32,000 images of the sky to date. Each image is 7.1 megabytes, yielding more than 100 gigabytes of image data in the entire data set.

  3. Enhanced fluorescence microscope and its application

    NASA Astrophysics Data System (ADS)

    Wang, Susheng; Li, Qin; Yu, Xin

    1997-12-01

    A high gain fluorescence microscope is developed to meet needs in medical and biological research. With the help of an image intensifier with a luminance gain of 4 x 10^4, the sensitivity of the system can reach the 10^-6 lx level, 10^4 times higher than an ordinary fluorescence microscope. Ultra-weak fluorescence images can be detected with it. The concentration of fluorescent label and the emitted light intensity of the system are decreased as much as possible; therefore, the natural environment of the detected cell can be kept. The CCD image acquisition set-up controlled by computer obtains the quantitative data of each point according to the gray scale. The relation between luminous intensity and the output of the CCD is obtained by using wide-range weak-light photometry. So the system not only shows the image of the ultra-weak fluorescence distribution but also gives the fluorescence intensity at each point. Using this system, we obtained images of the distribution of hypocrellin A (HA) in Hela cells, and images of Hela cells being protected by the antioxidant reagents Vit. E, SF and BHT. The images show that the digitized ultra-sensitive fluorescence microscope is a useful tool for medical and biological research.

  4. Implementation of sobel method to detect the seed rubber plant leaves

    NASA Astrophysics Data System (ADS)

    Suyanto; Munte, J.

    2018-03-01

    This research was conducted to develop a system that can identify and recognize the type of rubber tree based on the pattern of the plant's leaves. The research steps start with image data acquisition, followed by image processing, image edge detection and identification using the template matching method. Edge detection uses the Sobel operator. For pattern recognition, a detected image is taken as input and compared with other images in a database called templates. Experiments were carried out in one phase, identification of the leaf edge, using 14 superior rubber plant leaf images and 5 test images for each type (clone) of the plant. From the experimental results, a recognition rate of 91.79% was obtained.
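    A minimal sketch of the Sobel edge-detection and template-matching steps described above, assuming leaf photographs and same-size (or smaller) edge-map templates; the thresholds and the normalized cross-correlation matcher are illustrative choices, not necessarily those of the authors.

    ```python
    import cv2
    import numpy as np

    def sobel_edges(leaf_image, thresh=60):
        """Binary Sobel edge map of a leaf photograph."""
        gray = cv2.cvtColor(leaf_image, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (5, 5), 0)
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)   # horizontal gradient
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)   # vertical gradient
        mag = cv2.magnitude(gx, gy)
        return (mag > thresh).astype(np.uint8) * 255

    def best_template_match(edges, templates):
        """Template matching against stored edge maps of known clones
        (normalized cross-correlation; templates assumed no larger than edges)."""
        scores = [cv2.matchTemplate(edges, t, cv2.TM_CCOEFF_NORMED).max() for t in templates]
        return int(np.argmax(scores)), max(scores)
    ```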

  5. Information and image integration: project spectrum

    NASA Astrophysics Data System (ADS)

    Blaine, G. James; Jost, R. Gilbert; Martin, Lori; Weiss, David A.; Lehmann, Ron; Fritz, Kevin

    1998-07-01

    The BJC Health System (BJC) and the Washington University School of Medicine (WUSM) formed a technology alliance with industry collaborators to develop and implement an integrated, advanced clinical information system. The industry collaborators include IBM, Kodak, SBC and Motorola. The activity, called Project Spectrum, provides an integrated clinical repository for the multiple hospital facilities of the BJC. The BJC System consists of 12 acute care hospitals serving over one million patients in Missouri and Illinois. An interface engine manages transactions from each of the hospital information systems, lab systems and radiology information systems. Data is normalized to provide a consistent view for the primary care physician. Access to the clinical repository is supported by web-based server/browser technology which delivers patient data to the physician's desktop. An HL7 based messaging system coordinates the acquisition and management of radiological image data and sends image keys to the clinical data repository. Access to the clinical chart browser currently provides radiology reports, laboratory data, vital signs and transcribed medical reports. A chart metaphor provides tabs for the selection of the clinical record for review. Activation of the radiology tab facilitates a standardized view of radiology reports and provides an icon used to initiate retrieval of available radiology images. The selection of the image icon spawns an image browser plug-in and utilizes the image key from the clinical repository to access the image server for the requested image data. The Spectrum system is collecting clinical data from five hospital systems and imaging data from two hospitals. Domain specific radiology imaging systems support the acquisition and primary interpretation of radiology exams. The spectrum clinical workstations are deployed to over 200 sites utilizing local area networks and ISDN connectivity.

  6. First Map of Alien World animation

    NASA Image and Video Library

    2007-05-09

    This image shows the first-ever map of the surface of an exoplanet, or a planet beyond our solar system. Showing temperature variations across the cloudy tops of a gas giant called HD 189733b, the infrared data were taken by NASA's Spitzer Space Telescope.

  7. Discriminative Projection Selection Based Face Image Hashing

    NASA Astrophysics Data System (ADS)

    Karabat, Cagatay; Erdogan, Hakan

    Face image hashing is an emerging method used in biometric verification systems. In this paper, we propose a novel face image hashing method based on a new technique called discriminative projection selection. We apply the Fisher criterion for selecting the rows of a random projection matrix in a user-dependent fashion. Moreover, another contribution of this paper is to employ a bimodal Gaussian mixture model at the quantization step. Our simulation results on three different databases demonstrate that the proposed method has superior performance in comparison to previously proposed random projection based methods.
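    The core idea of discriminative projection selection can be sketched as follows: project enrollment samples of the genuine user and of others, score each row of the user-specific random projection matrix by a Fisher-style between-class versus within-class ratio, and keep the most discriminative rows before quantization. This is a hedged illustration, not the authors' exact scheme; in particular, the simple threshold quantizer below stands in for the bimodal Gaussian mixture quantizer used in the paper.

    ```python
    import numpy as np

    def select_discriminative_rows(R, genuine, impostor, k):
        """Select the k most discriminative rows of random projection matrix R.

        R:        (m, d) random projection matrix (user-specific seed assumed)
        genuine:  (n_g, d) feature vectors of the enrolled user
        impostor: (n_i, d) feature vectors of other users
        Fisher-style score per row: (mu_g - mu_i)^2 / (var_g + var_i).
        """
        pg = genuine @ R.T             # (n_g, m) projections
        pi = impostor @ R.T            # (n_i, m)
        num = (pg.mean(axis=0) - pi.mean(axis=0)) ** 2
        den = pg.var(axis=0) + pi.var(axis=0) + 1e-12
        keep = np.argsort(num / den)[::-1][:k]
        return R[keep], keep

    def hash_face(feature, R_sel, thresholds=None):
        """Binarize the selected projections into a hash (simple threshold quantizer)."""
        proj = R_sel @ feature
        if thresholds is None:
            thresholds = np.zeros_like(proj)
        return (proj > thresholds).astype(np.uint8)
    ```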

  8. Characterizing Complexity of Containerized Cargo X-ray Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Guangxing; Martz, Harry; Glenn, Steven

    X-ray imaging can be used to inspect cargos imported into the United States. In order to better understand the performance of X-ray inspection systems, the X-ray characteristics (density, complexity) of cargo need to be quantified. In this project, an image complexity measure called integrated power spectral density (IPSD) was studied using both DNDO engineered cargos and stream-of-commerce (SOC) cargos. A joint distribution of cargo density and complexity was obtained. A support vector machine was used to classify the SOC cargos into four categories to estimate the relative fractions.
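    The IPSD details are not given in the abstract; one plausible reading, sketched below, is to take the 2-D FFT of the windowed radiograph, form the power spectral density, and integrate it over a spatial-frequency band to obtain a scalar complexity value. The band limits and windowing are assumptions.

    ```python
    import numpy as np

    def integrated_psd(image, f_lo=0.02, f_hi=0.5):
        """Scalar image-complexity measure: power spectral density integrated over
        a normalized spatial-frequency band [f_lo, f_hi] in cycles/pixel."""
        img = image.astype(np.float64)
        img -= img.mean()
        win = np.outer(np.hanning(img.shape[0]), np.hanning(img.shape[1]))
        F = np.fft.fftshift(np.fft.fft2(img * win))
        psd = np.abs(F) ** 2 / img.size

        h, w = img.shape
        fy = np.fft.fftshift(np.fft.fftfreq(h))
        fx = np.fft.fftshift(np.fft.fftfreq(w))
        radius = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)   # cycles/pixel
        band = (radius >= f_lo) & (radius <= f_hi)
        return psd[band].sum()
    ```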

  9. Micro-optical system based 3D imaging for full HD depth image capturing

    NASA Astrophysics Data System (ADS)

    Park, Yong-Hwa; Cho, Yong-Chul; You, Jang-Woo; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Kwon, Jong-Oh; Lee, Seung-Wan

    2012-03-01

    A 20 MHz-switching high-speed image shutter device for 3D image capturing and its application to a system prototype are presented. For 3D image capturing, the system utilizes the Time-of-Flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, the so-called 'optical shutter'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure having diffractive mirrors and an optical resonance cavity, which maximizes the magnitude of optical modulation. The optical shutter device is specially designed and fabricated to realize low resistance-capacitance cell structures with a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image. The suggested novel optical shutter device enables capturing of a full HD depth image with depth accuracy at the mm scale, which is the largest depth-image resolution among the state of the art, previously limited to VGA. The 3D camera prototype realizes a color/depth concurrent sensing optical architecture to capture 14 Mp color and full HD depth images simultaneously. The resulting high-definition color/depth image and its capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical shutter design, fabrication, characterization, 3D camera system prototype and image test results.

  10. A smart technique for attendance system to recognize faces through parallelism

    NASA Astrophysics Data System (ADS)

    Prabhavathi, B.; Tanuja, V.; Madhu Viswanatham, V.; Rajashekhara Babu, M.

    2017-11-01

    The face is a major part of recognising a person, and with the help of image processing techniques we can exploit a person's physical features. In the old approach used in schools and colleges, the professor calls each student's name and then marks that student's attendance. This paper deviates from the old approach and adopts a new one based on image processing techniques, presenting automatic attendance marking for students in a classroom. First, an image of the classroom is captured and stored in a data record. To the images stored in the database we apply a processing pipeline that includes steps such as histogram classification, noise removal, face detection and face recognition. Using these steps we detect the faces and then compare them with the database; attendance is marked automatically if the system recognizes the faces.
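
    A minimal sketch of the detection-and-matching step, using OpenCV's bundled Haar cascade. The `recognizer` callable and `known_faces` database are hypothetical placeholders for the enrolment and face-recognition stage the abstract describes, and the cascade path via cv2.data assumes a standard opencv-python installation.

      import cv2

      cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

      def mark_attendance(classroom_image_path, known_faces, recognizer):
          """Detect faces in a classroom photo and hand each crop to a recognizer."""
          img = cv2.imread(classroom_image_path)
          gray = cv2.equalizeHist(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))  # crude contrast normalization
          boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(40, 40))
          present = set()
          for (x, y, w, h) in boxes:
              student_id = recognizer(gray[y:y + h, x:x + w], known_faces)  # e.g. template/embedding match
              if student_id is not None:
                  present.add(student_id)
          return present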

  11. Hologlyphics: volumetric image synthesis performance system

    NASA Astrophysics Data System (ADS)

    Funk, Walter

    2008-02-01

    This paper describes a novel volumetric image synthesis system and artistic technique, which generate moving volumetric images in real-time, integrated with music. The system, called the Hologlyphic Funkalizer, is performance-based, wherein the images and sound are controlled by a live performer, for the purposes of entertaining a live audience and creating a performance art form unique to volumetric and autostereoscopic images. While currently configured for a specific parallax barrier display, the Hologlyphic Funkalizer's architecture is completely adaptable to various volumetric and autostereoscopic display technologies. Sound is distributed through a multi-channel audio system; currently a quadraphonic speaker setup is implemented. The system controls volumetric image synthesis, production of music and spatial sound via acoustic analysis and human gestural control, using a dedicated control panel, motion sensors, and multiple musical keyboards. Music can be produced by external acoustic instruments, pre-recorded sounds or custom audio synthesis integrated with the volumetric image synthesis. Aspects of the sound can control the evolution of images and vice versa. Sounds can be associated and interact with images, for example voice synthesis can be combined with an animated volumetric mouth, where nuances of generated speech modulate the mouth's expressiveness. Different images can be sent to up to 4 separate displays. The system applies many novel volumetric special effects, and extends several film and video special effects into the volumetric realm. Extensive and various content has been developed and shown to live audiences by a live performer. Real-world applications will be explored, with feedback on the human factors.

  12. Utilizing HDTV as Data for Space Flight

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney; Lindblom, Walt

    2006-01-01

    In the aftermath of the Space Shuttle Columbia accident on February 1, 2003, the Columbia Accident Investigation Board recognized the need for better video data from launch, on-orbit, and landing to assess the status and safety of the shuttle orbiter fleet. The board called on NASA to improve its imagery assets and update the Agency's methods for analyzing video. This paper will feature details of several projects implemented prior to the return to flight of the Space Shuttle, including an airborne HDTV imaging system called the WB-57 Ascent Video Experiment, use of true 60 Hz progressive scan HDTV for ground and airborne HDTV camera systems, and the decision to utilize a wavelet compression system for recording. This paper will include results of compression testing, imagery from the launch of STS-114, and details of how commercial components were utilized to image the shuttle launch from an aircraft flying at 400 knots at 60,000 feet altitude. The paper will conclude with a review of future plans to expand on the upgrades made prior to return to flight.

  13. Standardisation of DNA quantitation by image analysis: quality control of instrumentation.

    PubMed

    Puech, M; Giroud, F

    1999-05-01

    DNA image analysis is frequently performed in clinical practice as a prognostic tool and to improve diagnosis. The precision of prognosis and diagnosis depends on the accuracy of analysis and particularly on the quality of image analysis systems. It has been reported that image analysis systems used for DNA quantification differ widely in their characteristics (Thunissen et al.: Cytometry 27: 21-25, 1997). This induces inter-laboratory variations when the same sample is analysed in different laboratories. In microscopic image analysis, the principal instrumentation errors arise from the optical and electronic parts of systems. They bring about problems of instability, non-linearity, and shading and glare phenomena. The aim of this study is to establish tools and standardised quality control procedures for microscopic image analysis systems. Specific reference standard slides have been developed to control instability, non-linearity, shading and glare phenomena and segmentation efficiency. Some systems have been controlled with these tools and these quality control procedures. Interpretation criteria and accuracy limits of these quality control procedures are proposed according to the conclusions of a European project called PRESS project (Prototype Reference Standard Slide). Beyond these limits, tested image analysis systems are not qualified to realise precise DNA analysis. The different procedures presented in this work determine if an image analysis system is qualified to deliver sufficiently precise DNA measurements for cancer case analysis. If the controlled systems are beyond the defined limits, some recommendations are given to find a solution to the problem.

  14. Versatile illumination platform and fast optical switch to give standard observation camera gated active imaging capacity

    NASA Astrophysics Data System (ADS)

    Grasser, R.; Peyronneaudi, Benjamin; Yon, Kevin; Aubry, Marie

    2015-10-01

    CILAS, a subsidiary of Airbus Defense and Space, develops, manufactures and sells laser-based optronics equipment for defense and homeland security applications. Part of its activity is related to active systems for threat detection, recognition and identification. Active surveillance and active imaging systems are often required to achieve identification capacity in the case of long-range observation in adverse conditions. In order to ease the deployment of active imaging systems, which are often complex and expensive, CILAS suggests a new concept. It consists of the association of two devices working together. On one side, a patented versatile laser platform enables high peak-power laser illumination for long-range observation. On the other side, a small camera add-on works as a fast optical switch to select only photons with a specific time of flight. Together, the versatile illumination platform and the fast optical switch form an independent body, the so-called "flash module", giving virtually any passive observation system gated active imaging capacity in the NIR and SWIR.

  15. [Differential and ontogenetic meaningfulness of iconic, linguistic and formal codes on cognitive development: new questions].

    PubMed

    Wittwer, J

    1990-01-01

    Man is a semiotic functioning animal, i.e. civilizations are units of symbolic (architectural, iconic, linguistic, formal, etc.) organizations. These units can only initially develop when enabled, but not necessarily produced, by a material base of a bio-physical nature, namely the central nervous system. In short, to take but three of the more academic factors, images, texts, and algebra, for example, are grasped by this material base. However, it is clear that the effects produced on children (and on adults, for that matter) are not equal. Scholastic goals, however, emphasize "fables" and "equations" whereas social mediatization emphasizes "images" and economic mediatization "equations". Hence the problems of appropriation of linguistic codes. To show the danger of an imbalance in these appropriations, the concept of differential semanticization is called upon: images are over-semanticized, with identification at risk; algebra is under-semanticized, at risk of obsessionalization. Texts, themselves, call upon the imagination and not on an imaginary structure imposed by a multivocal iconic pressure nor an imaginary structure rarefied by the prevalence of systems with univocal elements. Hence the importance of reading and writing for maintaining a nondepersonalizing semiotic balance.

  16. Analyses of requirements for computer control and data processing experiment subsystems: Image data processing system (IDAPS) software description (7094 version), volume 2

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A description of each of the software modules of the Image Data Processing System (IDAPS) is presented. The changes in the software modules are the result of additions to the application software of the system and an upgrade of the IBM 7094 Mod(1) computer to a 1301 disk storage configuration. Necessary information about IDAPS software is supplied to the computer programmer who desires to make changes in the software system or who desires to use portions of the software outside of the IDAPS system. Each software module is documented with: module name, purpose, usage, common block(s) description, method (algorithm of subroutine), flow diagram (if needed), subroutines called, and storage requirements.

  17. J-substitution algorithm in magnetic resonance electrical impedance tomography (MREIT): phantom experiments for static resistivity images.

    PubMed

    Khang, Hyun Soo; Lee, Byung Il; Oh, Suk Hoon; Woo, Eung Je; Lee, Soo Yeol; Cho, Min Hyoung; Kwon, Ohin; Yoon, Jeong Rock; Seo, Jin Keun

    2002-06-01

    Recently, a new static resistivity image reconstruction algorithm was proposed, utilizing internal current density data obtained by the magnetic resonance current density imaging technique. This new imaging method is called magnetic resonance electrical impedance tomography (MREIT). The derivation and performance of the J-substitution algorithm in MREIT have been reported, via computer simulations, as a new, accurate and high-resolution static impedance imaging technique. In this paper, we present experimental procedures, denoising techniques, and image reconstructions using a 0.3-tesla (T) experimental MREIT system and saline phantoms. MREIT using the J-substitution algorithm effectively utilizes the internal current density information, resolving the problem inherent in conventional EIT, that is, the low sensitivity of boundary measurements to any changes of internal tissue resistivity values. Resistivity images of saline phantoms show an accuracy of 6.8%-47.2% and a spatial resolution of 64 x 64. Both of them can be significantly improved by using an MRI system with a better signal-to-noise ratio.
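
    For orientation, one common statement of the J-substitution idea found in the MREIT literature is sketched below (the exact iteration and normalizations in this paper may differ): solve the forward problem with the current resistivity estimate, then substitute the measured current-density magnitude into Ohm's law to update the resistivity.

      % sketch: u_k is the potential computed with resistivity rho_k, J^meas the measured current density
      \nabla \cdot \left( \rho_k^{-1}\, \nabla u_k \right) = 0 \quad \text{in } \Omega,
      \qquad
      \rho_{k+1}(\mathbf{r}) = \frac{\lvert \nabla u_k(\mathbf{r}) \rvert}{\lvert \mathbf{J}^{\mathrm{meas}}(\mathbf{r}) \rvert}.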

  18. Implementing An Image Understanding System Architecture Using Pipe

    NASA Astrophysics Data System (ADS)

    Luck, Randall L.

    1988-03-01

    This paper will describe PIPE and how it can be used to implement an image understanding system. Image understanding is the process of developing a description of an image in order to make decisions about its contents. The tasks of image understanding are generally split into low level vision and high level vision. Low level vision is performed by PIPE, a high-performance parallel processor with an architecture specifically designed for processing video images at up to 60 fields per second. High level vision is performed by one of several types of serial or parallel computers, depending on the application. An additional processor called ISMAP performs the conversion from iconic image space to symbolic feature space. ISMAP plugs into one of PIPE's slots and is memory mapped into the high level processor. Thus it forms the high speed link between the low and high level vision processors. The mechanisms for bottom-up, data driven processing and top-down, model driven processing are discussed.

  19. Speckle imaging with the MAMA detector: Preliminary results

    NASA Technical Reports Server (NTRS)

    Horch, E.; Heanue, J. F.; Morgan, J. S.; Timothy, J. G.

    1994-01-01

    We report on the first successful speckle imaging studies using the Stanford University speckle interferometry system, an instrument that uses a multianode microchannel array (MAMA) detector as the imaging device. The method of producing high-resolution images is based on the analysis of so-called 'near-axis' bispectral subplanes and follows the work of Lohmann et al. (1983). In order to improve the signal-to-noise ratio in the bispectrum, the frame-oversampling technique of Nakajima et al. (1989) is also employed. We present speckle imaging results of binary stars and other objects from V magnitude 5.5 to 11, and the quality of these images is studied. While the Stanford system is capable of good speckle imaging results, it is limited by the overall quantum efficiency of the current MAMA detector (which is due to the response of the photocathode at visible wavelengths and other detector properties) and by channel saturation of the microchannel plate. Both affect the signal-to-noise ratio of the power spectrum and bispectrum.
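
    The quantity at the heart of this method is easy to write down: the frame-averaged bispectrum B(u, v) = <F(u) F(v) F*(u+v)>, whose phase is invariant to image shifts, so averaging over randomly displaced speckle frames preserves object phase information. The 1-D toy below, with random shifts standing in for atmospheric tilt, only illustrates that definition, not the near-axis subplane machinery or the Stanford pipeline.

      import numpy as np

      def average_bispectrum(frames):
          """Frame-averaged 1-D bispectrum B(u, v) = <F(u) F(v) conj(F(u + v))>."""
          n = frames.shape[1]
          idx = np.add.outer(np.arange(n), np.arange(n)) % n      # index matrix for u + v (mod n)
          acc = np.zeros((n, n), dtype=complex)
          for frame in frames:
              F = np.fft.fft(frame)
              acc += np.outer(F, F) * np.conj(F[idx])
          return acc / len(frames)

      rng = np.random.default_rng(2)
      obj = np.zeros(64); obj[30] = 1.0; obj[38] = 0.6            # toy "binary star"
      frames = np.array([np.roll(obj, rng.integers(0, 64)) + 0.01 * rng.normal(size=64)
                         for _ in range(200)])                    # randomly shifted, noisy frames
      B = average_bispectrum(frames)                              # arg(B) estimates the object bispectrum phase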

  20. Anima: Modular Workflow System for Comprehensive Image Data Analysis

    PubMed Central

    Rantanen, Ville; Valori, Miko; Hautaniemi, Sampsa

    2014-01-01

    Modern microscopes produce vast amounts of image data, and computational methods are needed to analyze and interpret these data. Furthermore, a single image analysis project may require tens or hundreds of analysis steps, starting from data import and pre-processing, through segmentation and statistical analysis, and ending with visualization and reporting. To manage such large-scale image data analysis projects, we present here a modular workflow system called Anima. Anima is designed for comprehensive and efficient image data analysis development, and it contains several features that are crucial in high-throughput image data analysis: programming language independence, batch processing, easily customized data processing, interoperability with other software via application programming interfaces, and advanced multivariate statistical analysis. The utility of Anima is shown with two case studies focusing on testing different algorithms developed in different imaging platforms and an automated prediction of alive/dead C. elegans worms by integrating several analysis environments. Anima is fully open source and available with documentation at www.anduril.org/anima. PMID:25126541

  1. Toward image guided robotic surgery: system validation.

    PubMed

    Herrell, Stanley D; Kwartowitz, David Morgan; Milhoua, Paul M; Galloway, Robert L

    2009-02-01

    Navigation for current robotic assisted surgical techniques is primarily accomplished through a stereo pair of laparoscopic camera images. These images provide standard optical visualization of the surface but provide no subsurface information. Image guidance methods allow the visualization of subsurface information to determine the current position in relationship to that of tracked tools. A robotic image guided surgical system was designed and implemented based on our previous laboratory studies. A series of experiments using tissue mimicking phantoms with injected target lesions was performed. The surgeon was asked to resect "tumor" tissue with and without the augmentation of image guidance using the da Vinci robotic surgical system. Resections were performed and compared to an ideal resection based on the radius of the tumor measured from preoperative computerized tomography. A quantity called the resection ratio, that is the ratio of resected tissue compared to the ideal resection, was calculated for each of 13 trials and compared. The mean +/- SD resection ratio of procedures augmented with image guidance was smaller than that of procedures without image guidance (3.26 +/- 1.38 vs 9.01 +/- 1.81, p <0.01). Additionally, procedures using image guidance were shorter (average 8 vs 13 minutes). It was demonstrated that there is a benefit from the augmentation of laparoscopic video with updated preoperative images. Incorporating our image guided system into the da Vinci robotic system improved overall tissue resection, as measured by our metric. Adding image guidance to the da Vinci robotic surgery system may result in the potential for improvements such as the decreased removal of benign tissue while maintaining an appropriate surgical margin.
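
    Read literally, the metric is the following (a sketch of the obvious interpretation; the paper may define the ideal volume with an added surgical margin):

      \mathrm{RR} \;=\; \frac{V_{\text{resected}}}{V_{\text{ideal}}},
      \qquad
      V_{\text{ideal}} \;=\; \tfrac{4}{3}\pi r^{3}\ \ \text{with } r \text{ the tumour radius measured on preoperative CT.}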

  2. In vivo liver visualizations with magnetic particle imaging based on the calibration measurement approach

    NASA Astrophysics Data System (ADS)

    Dieckhoff, J.; Kaul, M. G.; Mummert, T.; Jung, C.; Salamon, J.; Adam, G.; Knopp, T.; Ludwig, F.; Balceris, C.; Ittrich, H.

    2017-05-01

    Magnetic particle imaging (MPI) facilitates the rapid determination of 3D in vivo magnetic nanoparticle distributions. In this work, liver MPI following intravenous injections of ferucarbotran (Resovist®) was studied. The image reconstruction was based on a calibration measurement, the so-called system function. The application of an enhanced system function sample reflecting the particle mobility and aggregation status of ferucarbotran resulted in significantly improved image reconstructions. The finding was supported by characterizations of different ferucarbotran compositions with the magnetorelaxometry and magnetic particle spectroscopy technique. For instance, similar results were obtained between ferucarbotran embedded in freeze-dried mannitol sugar and liver tissue harvested after a ferucarbotran injection. In addition, the combination of multiple shifted measurement patches for a joint reconstruction of the MPI data enlarged the field of view and increased the covering of liver MPI on magnetic resonance images noticeably.

  3. In vivo liver visualizations with magnetic particle imaging based on the calibration measurement approach.

    PubMed

    Dieckhoff, J; Kaul, M G; Mummert, T; Jung, C; Salamon, J; Adam, G; Knopp, T; Ludwig, F; Balceris, C; Ittrich, H

    2017-05-07

    Magnetic particle imaging (MPI) facilitates the rapid determination of 3D in vivo magnetic nanoparticle distributions. In this work, liver MPI following intravenous injections of ferucarbotran (Resovist®) was studied. The image reconstruction was based on a calibration measurement, the so-called system function. The application of an enhanced system function sample reflecting the particle mobility and aggregation status of ferucarbotran resulted in significantly improved image reconstructions. The finding was supported by characterizations of different ferucarbotran compositions with the magnetorelaxometry and magnetic particle spectroscopy technique. For instance, similar results were obtained between ferucarbotran embedded in freeze-dried mannitol sugar and liver tissue harvested after a ferucarbotran injection. In addition, the combination of multiple shifted measurement patches for a joint reconstruction of the MPI data enlarged the field of view and increased the covering of liver MPI on magnetic resonance images noticeably.

  4. Architecture of distributed picture archiving and communication systems for storing and processing high resolution medical images

    NASA Astrophysics Data System (ADS)

    Tokareva, Victoria

    2018-04-01

    New-generation medicine demands better quality of analysis, increasing the amount of data collected during checkups while simultaneously decreasing the invasiveness of procedures. It therefore becomes urgent not only to develop advanced modern hardware, but also to implement the special software infrastructure for using it in everyday clinical practice, so-called Picture Archiving and Communication Systems (PACS). Developing a distributed PACS is a challenging task in present-day medical informatics. The paper discusses the architecture of a distributed PACS server for processing large, high-quality medical images, with respect to the technical specifications of modern medical imaging hardware as well as international standards in medical imaging software. The MapReduce paradigm is proposed for image reconstruction on the server, and the details of utilizing the Hadoop framework for this task are discussed in order to make the design of the distributed PACS as ergonomic and as well adapted to the needs of end users as possible.

  5. Geospatial Analysis | Energy Analysis | NREL

    Science.gov Websites

    [Image: a triangle divided into sections labeled Market, Economic, and Technical.] Featured Study: U.S. Renewable Energy Technical Potentials: A GIS-Based Analysis summarizes the achievable energy generation, or technical potential, of specific renewable energy technologies given system...

  6. Digital tomosynthesis (DTS) with a Circular X-ray tube: Its image reconstruction based on total-variation minimization and the image characteristics

    NASA Astrophysics Data System (ADS)

    Park, Y. O.; Hong, D. K.; Cho, H. S.; Je, U. K.; Oh, J. E.; Lee, M. S.; Kim, H. J.; Lee, S. H.; Jang, W. S.; Cho, H. M.; Choi, S. I.; Koo, Y. S.

    2013-09-01

    In this paper, we introduce an effective imaging system for digital tomosynthesis (DTS) with a circular X-ray tube, the so-called circular-DTS (CDTS) system, and its image reconstruction algorithm based on the total-variation (TV) minimization method for low-dose, high-accuracy X-ray imaging. Here, the X-ray tube is equipped with a series of cathodes distributed around a rotating anode, and the detector remains stationary throughout the image acquisition. We considered a TV-based reconstruction algorithm that exploited the sparsity of the image with substantially high image accuracy. We implemented the algorithm for the CDTS geometry and successfully reconstructed images of high accuracy. The image characteristics were investigated quantitatively by using some figures of merit, including the universal-quality index (UQI) and the depth resolution. For selected tomographic angles of 20, 40, and 60°, the corresponding UQI values in the tomographic view were estimated to be about 0.94, 0.97, and 0.98, and the depth resolutions were about 4.6, 3.1, and 1.2 voxels in full width at half maximum (FWHM), respectively. We expect the proposed method to be applicable to developing a next-generation dental or breast X-ray imaging system.
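
    The reconstruction objective the abstract refers to is of the familiar "data fidelity + total variation" form. The sketch below shows the TV part only, as gradient descent on 0.5*||x - b||^2 + lambda*TV(x) (i.e. TV denoising of a noisy phantom rather than the full limited-angle CDTS forward model); the step size, lambda, and boundary handling are arbitrary illustrative choices.

      import numpy as np

      def tv_grad(x, eps=1e-6):
          """Gradient of a smoothed isotropic total-variation term (circular boundaries for brevity)."""
          dx = np.diff(x, axis=1, append=x[:, -1:])
          dy = np.diff(x, axis=0, append=x[-1:, :])
          mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
          px, py = dx / mag, dy / mag
          return -((px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0)))   # -div(grad x / |grad x|)

      def tv_reconstruct(b, lam=0.15, step=0.2, iters=200):
          x = b.copy()
          for _ in range(iters):
              x -= step * ((x - b) + lam * tv_grad(x))
          return x

      rng = np.random.default_rng(3)
      phantom = np.zeros((128, 128)); phantom[40:90, 30:100] = 1.0
      recon = tv_reconstruct(phantom + 0.2 * rng.normal(size=phantom.shape))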

  7. Integrated approach to ischemic heart disease. The one-stop shop.

    PubMed

    Kramer, C M

    1998-05-01

    Magnetic resonance imaging is unique in its variety of applications for imaging the cardiovascular system. A thorough assessment of myocardial structure, function, and perfusion; assessment of coronary artery anatomy and flow; and spectroscopic evaluation of cardiac energetics can be readily performed by magnetic resonance imaging. One key to the advancement of cardiac magnetic resonance imaging as a clinical tool is the ability to combine these evaluations in a single integrated examination, the so-called one-stop shop. Improvements in magnetic resonance hardware, software, and imaging speed now permit this integrated examination. Cardiac magnetic resonance is a powerful technique with the potential to replace or complement other commonly used techniques in the diagnostic armamentarium of physicians caring for patients with ischemic heart disease.

  8. Poly-Pattern Compressive Segmentation of ASTER Data for GIS

    NASA Technical Reports Server (NTRS)

    Myers, Wayne; Warner, Eric; Tutwiler, Richard

    2007-01-01

    Pattern-based segmentation of multi-band image data, such as ASTER, produces one-byte and two-byte approximate compressions. This is a dual segmentation consisting of nested coarser and finer level pattern mappings called poly-patterns. The coarser A-level version is structured for direct incorporation into geographic information systems in the manner of a raster map. GIS renderings of this A-level approximation are called pattern pictures, which have the appearance of color-enhanced images. The two-byte version consisting of thousands of B-level segments provides a capability for approximate restoration of the multi-band data in selected areas or entire scenes. Poly-patterns are especially useful for purposes of change detection and landscape analysis at multiple scales. The primary author has implemented the segmentation methodology in a public domain software suite.

  9. 'Dodo' and 'Baby Bear' Trenches

    NASA Technical Reports Server (NTRS)

    2008-01-01

    NASA's Phoenix Mars Lander's Surface Stereo Imager took this image on Sol 11 (June 5, 2008), the eleventh day after landing. It shows the trenches dug by Phoenix's Robotic Arm. The trench on the left is informally called 'Dodo' and was dug as a test. The trench on the right is informally called 'Baby Bear.' The sample dug from Baby Bear will be delivered to the Phoenix's Thermal and Evolved-Gas Analyzer, or TEGA. The Baby Bear trench is 9 centimeters (3.1 inches) wide and 4 centimeters (1.6 inches) deep.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  10. Biological imaging in radiation therapy: role of positron emission tomography.

    PubMed

    Nestle, Ursula; Weber, Wolfgang; Hentschel, Michael; Grosu, Anca-Ligia

    2009-01-07

    In radiation therapy (RT), staging, treatment planning, monitoring and evaluation of response are traditionally based on computed tomography (CT) and magnetic resonance imaging (MRI). These radiological investigations have the significant advantage of showing the anatomy with high resolution, and are therefore also called anatomical imaging. In recent years, so-called biological imaging methods, which visualize metabolic pathways, have been developed. These methods offer complementary imaging of various aspects of tumour biology. To date, the most prominent biological imaging system in use is positron emission tomography (PET), whose diagnostic properties have been clinically evaluated for years. The aim of this review is to discuss the valences and implications of PET in RT. We will focus our evaluation on the following topics: the role of biological imaging for tumour tissue detection/delineation of the gross tumour volume (GTV) and for the visualization of heterogeneous tumour biology. We will discuss the role of fluorodeoxyglucose-PET in lung and head and neck cancer and the impact of amino acids (AA)-PET in target volume delineation of brain gliomas. Furthermore, we summarize the literature data on tumour hypoxia and proliferation visualized by PET. We conclude that, regarding treatment planning in radiotherapy, PET offers advantages in terms of tumour delineation and the description of biological processes. However, to define the real impact of biological imaging on clinical outcome after radiotherapy, further experimental, clinical and cost/benefit analyses are required.

  11. TOPICAL REVIEW: Biological imaging in radiation therapy: role of positron emission tomography

    NASA Astrophysics Data System (ADS)

    Nestle, Ursula; Weber, Wolfgang; Hentschel, Michael; Grosu, Anca-Ligia

    2009-01-01

    In radiation therapy (RT), staging, treatment planning, monitoring and evaluation of response are traditionally based on computed tomography (CT) and magnetic resonance imaging (MRI). These radiological investigations have the significant advantage of showing the anatomy with high resolution, and are therefore also called anatomical imaging. In recent years, so-called biological imaging methods, which visualize metabolic pathways, have been developed. These methods offer complementary imaging of various aspects of tumour biology. To date, the most prominent biological imaging system in use is positron emission tomography (PET), whose diagnostic properties have been clinically evaluated for years. The aim of this review is to discuss the valences and implications of PET in RT. We will focus our evaluation on the following topics: the role of biological imaging for tumour tissue detection/delineation of the gross tumour volume (GTV) and for the visualization of heterogeneous tumour biology. We will discuss the role of fluorodeoxyglucose-PET in lung and head and neck cancer and the impact of amino acids (AA)-PET in target volume delineation of brain gliomas. Furthermore, we summarize the literature data on tumour hypoxia and proliferation visualized by PET. We conclude that, regarding treatment planning in radiotherapy, PET offers advantages in terms of tumour delineation and the description of biological processes. However, to define the real impact of biological imaging on clinical outcome after radiotherapy, further experimental, clinical and cost/benefit analyses are required.

  12. A frameless stereotaxic operating microscope for neurosurgery.

    PubMed

    Friets, E M; Strohbehn, J W; Hatch, J F; Roberts, D W

    1989-06-01

    A new system, which we call the frameless stereotaxic operating microscope, is discussed. Its purpose is to display CT or other image data in the operating microscope in the correct scale, orientation, and position without the use of a stereotaxic frame. A nonimaging ultrasonic rangefinder allows the position of the operating microscope and the position of the patient to be determined. Discrete fiducial points on the patient's external anatomy are located in both image space and operating room space, linking the image data and the operating room. Physician-selected image information, e.g., tumor contours or guidance to predetermined targets, is projected through the optics of the operating microscope using a miniature cathode ray tube and a beam splitter. Projected images superpose the surgical field, reconstructed from image data to match the focal plane of the operating microscope. The algorithms on which the system is based are described, and the sources and effects of errors are discussed. The system's performance is simulated, providing an estimate of accuracy. Two phantoms are used to measure accuracy experimentally. Clinical results and observations are given.
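
    The fiducial step described here amounts to a least-squares rigid registration between image space and operating-room space. A compact sketch using the standard SVD (Kabsch/Horn) solution is below; the fiducial coordinates are made up, and the paper's own registration and error analysis may differ in detail.

      import numpy as np

      def rigid_fit(image_pts, room_pts):
          """Rotation R and translation t minimizing ||R p + t - q||^2 over paired fiducials."""
          P, Q = np.asarray(image_pts, float), np.asarray(room_pts, float)
          cp, cq = P.mean(axis=0), Q.mean(axis=0)
          U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
          R = Vt.T @ D @ U.T
          return R, cq - R @ cp

      image_fids = np.array([[0, 0, 0], [50, 0, 0], [0, 60, 0], [0, 0, 40]], float)    # e.g. CT mm
      room_fids  = np.array([[10, 5, 2], [10, 55, 2], [-50, 5, 2], [10, 5, 42]], float)
      R, t = rigid_fit(image_fids, room_fids)
      fre = np.linalg.norm(image_fids @ R.T + t - room_fids, axis=1)    # per-fiducial registration error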

  13. Entropy reduction via simplified image contourization

    NASA Technical Reports Server (NTRS)

    Turner, Martin J.

    1993-01-01

    The process of contourization is presented which converts a raster image into a set of plateaux or contours. These contours can be grouped into a hierarchical structure, defining total spatial inclusion, called a contour tree. A contour coder has been developed which fully describes these contours in a compact and efficient manner and is the basis for an image compression method. Simplification of the contour tree has been undertaken by merging contour tree nodes thus lowering the contour tree's entropy. This can be exploited by the contour coder to increase the image compression ratio. By applying general and simple rules derived from physiological experiments on the human vision system, lossy image compression can be achieved which minimizes noticeable artifacts in the simplified image.
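
    The entropy being lowered here is, at first order, just the Shannon entropy of the grey-level distribution, and merging plateaux can only reduce (or preserve) it. The sketch below uses uniform grey-level bucketing as a crude stand-in for the paper's contour-tree node merging.

      import numpy as np

      def shannon_entropy(img):
          """First-order entropy (bits/pixel) of the grey-level histogram."""
          _, counts = np.unique(img, return_counts=True)
          p = counts / counts.sum()
          return float(-(p * np.log2(p)).sum())

      def merge_plateaux(img, step):
          """Coarsen grey levels into plateaux of width `step` (entropy cannot increase)."""
          return (img // step) * step

      rng = np.random.default_rng(4)
      image = rng.integers(0, 256, size=(128, 128))
      print(shannon_entropy(image), shannon_entropy(merge_plateaux(image, 8)))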

  14. Web-based document and content management with off-the-shelf software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schuster, J

    1999-03-18

    This, then, is the current status of the project: Since we made the switch to Intradoc, we are now treating the project as a document and image management system. In reality, it could be considered a document and content management system since we can manage almost any file input to the system such as video or audio. At present, however, we are concentrating on images. As mentioned above, my CRADA funding was only targeted at including thumbnails of images in Intradoc. We still had to modify Intradoc so that it would compress images submitted to the system. All processing of files submitted to Intradoc is handled in what is called the Document Refinery. Even though MrSID created thumbnails in the process of compressing an image, work needed to be done to somehow build this capability into the Document Refinery. Therefore we made the decision to contract the Intradoc Engineering Team to perform this custom development work. To make Intradoc even more capable of handling images, we have also contracted for customization of the Document Refinery to accept Adobe Photoshop and Illustrator files in their native formats.

  15. Using the phase-space imager to analyze partially coherent imaging systems: bright-field, phase contrast, differential interference contrast, differential phase contrast, and spiral phase contrast

    NASA Astrophysics Data System (ADS)

    Mehta, Shalin B.; Sheppard, Colin J. R.

    2010-05-01

    Various methods that use large illumination aperture (i.e. partially coherent illumination) have been developed for making transparent (i.e. phase) specimens visible. These methods were developed to provide qualitative contrast rather than quantitative measurement; coherent illumination has been relied upon for quantitative phase analysis. Partially coherent illumination has some important advantages over coherent illumination and can be used for measurement of the specimen's phase distribution. However, quantitative analysis and image computation in partially coherent systems have not been explored fully due to the lack of a general, physically insightful and computationally efficient model of image formation. We have developed a phase-space model that satisfies these requirements. In this paper, we employ this model (called the phase-space imager) to elucidate five different partially coherent systems mentioned in the title. We compute images of an optical fiber under these systems and verify some of them with experimental images. These results and simulated images of a general phase profile are used to compare the contrast and the resolution of the imaging systems. We show that, for quantitative phase imaging of a thin specimen with matched illumination, differential phase contrast offers linear transfer of specimen information to the image. We also show that the edge enhancement properties of spiral phase contrast are compromised significantly as the coherence of illumination is reduced. The results demonstrate that the phase-space imager model provides a useful framework for analysis, calibration, and design of partially coherent imaging methods.

  16. Enhanced Gender Recognition System Using an Improved Histogram of Oriented Gradient (HOG) Feature from Quality Assessment of Visible Light and Thermal Images of the Human Body.

    PubMed

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-07-21

    With higher demand from users, surveillance systems are currently being designed to provide more information about the observed scene, such as the appearance of objects, types of objects, and other information extracted from detected objects. Although the recognition of gender of an observed human can be easily performed using human perception, it remains a difficult task when using computer vision system images. In this paper, we propose a new human gender recognition method that can be applied to surveillance systems based on quality assessment of human areas in visible light and thermal camera images. Our research is novel in the following two ways: First, we utilize the combination of visible light and thermal images of the human body for a recognition task based on quality assessment. We propose a quality measurement method to assess the quality of image regions so as to remove the effects of background regions in the recognition system. Second, by combining the features extracted using the histogram of oriented gradient (HOG) method and the measured qualities of image regions, we form a new image feature, called the weighted HOG (wHOG), which is used for efficient gender recognition. Experimental results show that our method produces more accurate estimation results than the state-of-the-art recognition method that uses human body images.
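
    A toy version of a quality-weighted HOG descriptor, assuming aligned visible and thermal body crops scaled to [0, 1]: split each image into horizontal strips, weight every strip's HOG vector by a simple contrast-based quality proxy, and concatenate. The strip layout, the quality proxy, and the weighting are illustrative stand-ins for the paper's region-quality measure, not its actual wHOG definition.

      import numpy as np
      from skimage.feature import hog

      def weighted_hog(visible, thermal, n_strips=4):
          feats = []
          for img in (visible, thermal):
              for strip in np.array_split(img, n_strips, axis=0):
                  quality = float(np.clip(strip.std() / 0.25, 0.0, 1.0))   # arbitrary contrast-based proxy
                  h = hog(strip, orientations=9, pixels_per_cell=(8, 8),
                          cells_per_block=(2, 2), feature_vector=True)
                  feats.append(quality * h)                                # down-weight low-quality regions
          return np.concatenate(feats)

      rng = np.random.default_rng(5)
      visible_body, thermal_body = rng.random((128, 64)), rng.random((128, 64))   # stand-in crops
      descriptor = weighted_hog(visible_body, thermal_body)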

  17. Enhanced Gender Recognition System Using an Improved Histogram of Oriented Gradient (HOG) Feature from Quality Assessment of Visible Light and Thermal Images of the Human Body

    PubMed Central

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-01-01

    With higher demand from users, surveillance systems are currently being designed to provide more information about the observed scene, such as the appearance of objects, types of objects, and other information extracted from detected objects. Although the recognition of gender of an observed human can be easily performed using human perception, it remains a difficult task when using computer vision system images. In this paper, we propose a new human gender recognition method that can be applied to surveillance systems based on quality assessment of human areas in visible light and thermal camera images. Our research is novel in the following two ways: First, we utilize the combination of visible light and thermal images of the human body for a recognition task based on quality assessment. We propose a quality measurement method to assess the quality of image regions so as to remove the effects of background regions in the recognition system. Second, by combining the features extracted using the histogram of oriented gradient (HOG) method and the measured qualities of image regions, we form a new image feature, called the weighted HOG (wHOG), which is used for efficient gender recognition. Experimental results show that our method produces more accurate estimation results than the state-of-the-art recognition method that uses human body images. PMID:27455264

  18. Forensic imaging tools for law enforcement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    SMITHPETER,COLIN L.; SANDISON,DAVID R.; VARGO,TIMOTHY D.

    2000-01-01

    Conventional methods of gathering forensic evidence at crime scenes are encumbered by difficulties that limit local law enforcement efforts to apprehend offenders and bring them to justice. Working with a local law-enforcement agency, Sandia National Laboratories has developed a prototype multispectral imaging system that can speed up the investigative search task and provide additional and more accurate evidence. The system, called the Criminalistics Light-imaging Unit (CLU), has demonstrated the capabilities of locating fluorescing evidence at crime scenes under normal lighting conditions and of imaging other types of evidence, such as untreated fingerprints, by direct white-light reflectance. CLU employs state-of-the-art technology that provides for viewing and recording of the entire search process on videotape. This report describes the work performed by Sandia to design, build, evaluate, and commercialize CLU.

  19. Hybrid Feature Extraction-based Approach for Facial Parts Representation and Recognition

    NASA Astrophysics Data System (ADS)

    Rouabhia, C.; Tebbikh, H.

    2008-06-01

    Face recognition is a specialized image processing task which has attracted considerable attention in computer vision. In this article, we develop a new facial recognition system operating on video sequence images, dedicated to identifying persons whose faces are partially occluded. The system is based on a hybrid image feature extraction technique called ACPDL2D (Rouabhia et al. 2007), which combines two-dimensional principal component analysis and two-dimensional linear discriminant analysis with a neural network. We performed the feature extraction task on the eye and nose images separately, and then a Multi-Layer Perceptron classifier is used. Compared to the whole face, the simulation results favor the facial parts in terms of memory capacity and recognition rate (99.41% for the eyes, 98.16% for the nose, and 97.25% for the whole face).

  20. A Possible Landing Site in Aram Dorsum for the ExoMars Rover

    NASA Image and Video Library

    2014-08-27

    This image, captured by NASA's Mars Reconnaissance Orbiter, is of an area called Aram Dorsum (also known by its old name, Oxia Palus) that has been suggested for the 2018/2020 ExoMars Rover because it contains an ancient, exhumed alluvial system.

  1. Correspondence Search Mitigation Using Feature Space Anti-Aliasing

    DTIC Science & Technology

    2007-01-01

    trackers are widely used in astro-inertial navigation systems for long-range aircraft, space navigation, and ICBM guidance. When ground images are to be... frequency-domain representation of the point spread function, H(fx, fy), is called the optical transfer function. Applying the Fourier transform to the... frequency-domain representation of the image: I(fx, fy, t) = O(fx, fy, t) H(fx, fy) (4). In most conditions, the projected scene can be treated as a

  2. Automated Blazar Light Curves Using Machine Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Spencer James

    Every night in a remote clearing called Fenton Hill high in the Jemez Mountains of central New Mexico, a bank of robotically controlled telescopes tilt their lenses to the sky for another round of observation through digital imaging. Los Alamos National Laboratory’s Thinking Telescopes project is watching for celestial transients including high-power cosmic flashes called, and like all science, it can be messy work. To keep the project clicking along, Los Alamos scientists routinely install equipment upgrades, maintain the site, and refine the sophisticated machine-learning computer programs that process those images and extract useful data from them. Each week the system amasses 100,000 digital images of the heavens, some of which are compromised by clouds, wind gusts, focus problems, and so on. For a graduate student at the Lab taking a year’s break between master’s and Ph.D. studies, working with state-of-the-art autonomous telescopes that can make fundamental discoveries feels light years beyond the classroom.

  3. Color View 'Dodo' and 'Baby Bear' Trenches

    NASA Technical Reports Server (NTRS)

    2008-01-01

    NASA's Phoenix Mars Lander's Surface Stereo Imager took this image on Sol 14 (June 8, 2008), the 14th Martian day after landing. It shows two trenches dug by Phoenix's Robotic Arm.

    Soil from the right trench, informally called 'Baby Bear,' was delivered to Phoenix's Thermal and Evolved-Gas Analyzer, or TEGA, on Sol 12 (June 6). The following several sols included repeated attempts to shake the screen over TEGA's oven number 4 to get fine soil particles through the screen and into the oven for analysis.

    The trench on the left is informally called 'Dodo' and was dug as a test.

    Each of the trenches is about 9 centimeters (3 inches) wide. This view is presented in approximately true color by combining separate exposures taken through different filters of the Surface Stereo Imager.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  4. The system analysis of light field information collection based on the light field imaging

    NASA Astrophysics Data System (ADS)

    Wang, Ye; Li, Wenhua; Hao, Chenyang

    2016-10-01

    Augmented reality (AR) technology is becoming a focus of study, and the AR effect of light field imaging makes research on light field cameras attractive. A micro-array structure has been adopted in most light field information acquisition systems (LFIAS) since the emergence of the light field camera, mainly micro lens array (MLA) and micro pinhole array (MPA) systems. This paper reviews the structure of the LFIAS commonly used in light field cameras in recent years and analyzes the LFIAS based on the theory of geometrical optics. Meanwhile, this paper presents a novel LFIAS, a plane grating system, which we call the "micro aperture array (MAA)", and analyzes it based on information optics. The paper shows that there is little difference among the multiple images produced by the plane grating system, and that the plane grating system can collect and record both the amplitude and the phase information of the light field.

  5. Sharing programming resources between Bio* projects through remote procedure call and native call stack strategies.

    PubMed

    Prins, Pjotr; Goto, Naohisa; Yates, Andrew; Gautier, Laurent; Willis, Scooter; Fields, Christopher; Katayama, Toshiaki

    2012-01-01

    Open-source software (OSS) encourages computer programmers to reuse software components written by others. In evolutionary bioinformatics, OSS comes in a broad range of programming languages, including C/C++, Perl, Python, Ruby, Java, and R. To avoid writing the same functionality multiple times for different languages, it is possible to share components by bridging computer languages and Bio* projects, such as BioPerl, Biopython, BioRuby, BioJava, and R/Bioconductor. In this chapter, we compare the two principal approaches for sharing software between different programming languages: either by remote procedure call (RPC) or by sharing a local call stack. RPC provides a language-independent protocol over a network interface; examples are RSOAP and Rserve. The local call stack provides a between-language mapping not over the network interface, but directly in computer memory; examples are R bindings, RPy, and languages sharing the Java Virtual Machine stack. This functionality provides strategies for sharing of software between Bio* projects, which can be exploited more often. Here, we present cross-language examples for sequence translation, and measure throughput of the different options. We compare calling into R through native R, RSOAP, Rserve, and RPy interfaces, with the performance of native BioPerl, Biopython, BioJava, and BioRuby implementations, and with call stack bindings to BioJava and the European Molecular Biology Open Software Suite. In general, call stack approaches outperform native Bio* implementations and these, in turn, outperform RPC-based approaches. To test and compare strategies, we provide a downloadable BioNode image with all examples, tools, and libraries included. The BioNode image can be run on VirtualBox-supported operating systems, including Windows, OSX, and Linux.
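
    The RPC-versus-call-stack trade-off can be felt with nothing but the standard library: the same toy "sequence translation" function is called once over XML-RPC on localhost and once in-process. This only illustrates the two integration styles compared in the chapter, not its Bio*/R benchmark; the codon table, port number, and repetition counts are arbitrary.

      import threading, time, xmlrpc.client
      from xmlrpc.server import SimpleXMLRPCServer

      CODON = {"ATG": "M", "TGG": "W", "TAA": "*"}          # tiny toy codon table

      def translate(seq):
          return "".join(CODON.get(seq[i:i + 3], "X") for i in range(0, len(seq) - 2, 3))

      # RPC route: expose translate() over XML-RPC on localhost.
      server = SimpleXMLRPCServer(("127.0.0.1", 8899), logRequests=False)
      server.register_function(translate)
      threading.Thread(target=server.serve_forever, daemon=True).start()
      proxy = xmlrpc.client.ServerProxy("http://127.0.0.1:8899")

      seq = "ATGTGGTAA" * 100
      t0 = time.perf_counter(); [proxy.translate(seq) for _ in range(200)]; rpc_s = time.perf_counter() - t0
      t0 = time.perf_counter(); [translate(seq) for _ in range(200)]; local_s = time.perf_counter() - t0
      print(f"RPC: {rpc_s:.3f}s  in-process: {local_s:.3f}s")  # the call-stack route avoids serialization and network hops
      server.shutdown()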

  6. License Plate Recognition System for Indian Vehicles

    NASA Astrophysics Data System (ADS)

    Sanap, P. R.; Narote, S. P.

    2010-11-01

    We consider the task of recognition of Indian vehicle number plates (also called license plates or registration plates in other countries). A system for Indian number plate recognition must cope with wide variations in the appearance of the plates. Each state uses its own range of designs with font variations between the designs. Also, vehicle owners may place the plates inside glass covered frames or use plates made of nonstandard materials. These issues compound the complexity of automatic number plate recognition, making existing approaches inadequate. We have developed a system that incorporates a novel combination of image processing and artificial neural network technologies to successfully locate and read Indian vehicle number plates in digital images. Commercial application of the system is envisaged.

  7. Design and Verification of Remote Sensing Image Data Center Storage Architecture Based on Hadoop

    NASA Astrophysics Data System (ADS)

    Tang, D.; Zhou, X.; Jing, Y.; Cong, W.; Li, C.

    2018-04-01

    The data center is a new concept of data processing and application proposed in recent years. It is a new processing approach based on data, parallel computing, and compatibility with different hardware clusters. While optimizing the data storage management structure, it fully utilizes the computing nodes of cluster resources and improves the efficiency of data-parallel applications. This paper used mature Hadoop technology to build a large-scale distributed image management architecture for remote sensing imagery. Using MapReduce parallel processing, it calls on many computing nodes to process image storage blocks and pyramids in the background, improving the efficiency of image reading and application and meeting the need for concurrent multi-user high-speed access to remotely sensed data. By building an actual Hadoop service system, the paper verified the rationality, reliability and superiority of the system design, testing the storage efficiency for different image data and multiple users and analyzing the distributed storage architecture to improve the application efficiency of remote sensing images.
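
    A Hadoop-streaming-flavoured sketch of the kind of background job described here: the mapper reads one line per stored image block (a key naming the scene and pyramid zoom level is assumed, e.g. "scene042/z3/x17_y09.tif"), and the reducer counts tiles per scene and level. The file layout, key format, and launch command are assumptions for illustration, not the paper's implementation.

      #!/usr/bin/env python3
      # Illustrative launch (jar path and options vary by installation):
      #   hadoop jar hadoop-streaming.jar -files tiles.py -input blocks.txt -output tile_counts \
      #       -mapper "tiles.py map" -reducer "tiles.py reduce"
      import sys

      def mapper():
          for line in sys.stdin:                              # e.g. "scene042/z3/x17_y09.tif\t<hdfs-path>"
              scene, zoom = line.split("\t", 1)[0].split("/")[:2]
              print(f"{scene}.{zoom}\t1")

      def reducer():
          current, total = None, 0
          for line in sys.stdin:                              # streaming delivers key-sorted input
              key, value = line.rstrip("\n").split("\t")
              if key != current and current is not None:
                  print(f"{current}\t{total}")
                  total = 0
              current = key
              total += int(value)
          if current is not None:
              print(f"{current}\t{total}")

      if __name__ == "__main__":
          (reducer if len(sys.argv) > 1 and sys.argv[1] == "reduce" else mapper)()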

  8. An Automated Self-Learning Quantification System to Identify Visible Areas in Capsule Endoscopy Images.

    PubMed

    Hashimoto, Shinichi; Ogihara, Hiroyuki; Suenaga, Masato; Fujita, Yusuke; Terai, Shuji; Hamamoto, Yoshihiko; Sakaida, Isao

    2017-08-01

    Visibility in capsule endoscopic images is presently evaluated through intermittent analysis of frames selected by a physician. It is thus subjective and not quantitative. A method to automatically quantify the visibility on capsule endoscopic images has not been reported. Generally, when designing automated image recognition programs, physicians must provide a training image; this process is called supervised learning. We aimed to develop a novel automated self-learning quantification system to identify visible areas on capsule endoscopic images. The technique was developed using 200 capsule endoscopic images retrospectively selected from each of three patients. The rate of detection of visible areas on capsule endoscopic images between a supervised learning program, using training images labeled by a physician, and our novel automated self-learning program, using unlabeled training images without intervention by a physician, was compared. The rate of detection of visible areas was equivalent for the supervised learning program and for our automatic self-learning program. The visible areas automatically identified by self-learning program correlated to the areas identified by an experienced physician. We developed a novel self-learning automated program to identify visible areas in capsule endoscopic images.

  9. Using an image-extended relational database to support content-based image retrieval in a PACS.

    PubMed

    Traina, Caetano; Traina, Agma J M; Araújo, Myrian R B; Bueno, Josiane M; Chino, Fabio J T; Razente, Humberto; Azevedo-Marques, Paulo M

    2005-12-01

    This paper presents a new Picture Archiving and Communication System (PACS), called cbPACS, which has content-based image retrieval capabilities. The cbPACS answers range and k-nearest-neighbor similarity queries, employing a relational database manager extended to support images. The images are compared through their features, which are extracted by an image-processing module and stored in the extended relational database. The database extensions were developed aiming at efficiently answering similarity queries by taking advantage of specialized indexing methods. The main concept supporting the extensions is the definition, inside the relational manager, of distance functions based on features extracted from the images. An extension to the SQL language enables the construction of an interpreter that intercepts the extended commands and translates them to standard SQL, allowing any relational database server to be used. To date, the implemented system works on features based on the color distribution of the images through normalized histograms as well as metric histograms. Metric histograms are invariant regarding scale, translation and rotation of images and also to brightness transformations. The cbPACS is prepared to integrate new image features, based on texture and shape of the main objects in the image.
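
    A stripped-down sketch of the feature-and-distance machinery the extensions index: normalized grey-level histograms compared with an L1 distance, and a brute-force k-nearest-neighbour query over an in-memory dictionary standing in for the SQL-extended server (metric histograms and the actual index structures are omitted).

      import numpy as np

      def normalized_histogram(image, bins=32):
          h, _ = np.histogram(image, bins=bins, range=(0, 256))
          return h / h.sum()

      def knn_query(query_feat, database, k=5):
          """Return the ids of the k images whose histograms are closest in L1 distance."""
          scored = sorted(database.items(), key=lambda kv: float(np.abs(query_feat - kv[1]).sum()))
          return [image_id for image_id, _ in scored[:k]]

      rng = np.random.default_rng(6)
      db = {f"img_{i:03d}": normalized_histogram(rng.integers(0, 256, (64, 64))) for i in range(100)}
      query = normalized_histogram(rng.integers(0, 256, (64, 64)))
      print(knn_query(query, db, k=3))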

  10. Deeply learnt hashing forests for content based image retrieval in prostate MR images

    NASA Astrophysics Data System (ADS)

    Shah, Amit; Conjeti, Sailesh; Navab, Nassir; Katouzian, Amin

    2016-03-01

    The deluge in the size and heterogeneity of medical image databases necessitates content-based retrieval systems for their efficient organization. In this paper, we propose such a system to retrieve prostate MR images which share similarities in appearance and content with a query image. We introduce deeply learnt hashing forests (DL-HF) for this image retrieval task. DL-HF effectively leverages the semantic descriptiveness of deep learnt Convolutional Neural Networks. This is used in conjunction with hashing forests which are unsupervised random forests. DL-HF hierarchically parses the deep-learnt feature space to encode subspaces with compact binary code words. We propose a similarity preserving feature descriptor called Parts Histogram which is derived from DL-HF. Correlation defined on this descriptor is used as a similarity metric for retrieval from the database. Validation on a publicly available multi-center prostate MR image database established the validity of the proposed approach. The proposed method is fully automated without any user interaction and is not dependent on any external image standardization like image normalization and registration. This image retrieval method is generalizable and is well-suited for retrieval in heterogeneous databases, other imaging modalities, and anatomies.

  11. 42 CFR 410.78 - Telehealth services.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... telecommunications system in single media format does not include telephone calls, images transmitted via facsimile... photograph of a skin lesion, may be considered to meet the requirement of a single media format under this... clinical psychologist as described in § 410.71. (vii) A clinical social worker as described in § 410.73...

  12. 42 CFR 410.78 - Telehealth services.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... telecommunications system in single media format does not include telephone calls, images transmitted via facsimile... photograph of a skin lesion, may be considered to meet the requirement of a single media format under this... described in § 410.71. (vii) A clinical social worker as described in § 410.73. (viii) A registered...

  13. 42 CFR 410.78 - Telehealth services.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... telecommunications system in single media format does not include telephone calls, images transmitted via facsimile... photograph of a skin lesion, may be considered to meet the requirement of a single media format under this... clinical psychologist as described in § 410.71. (vii) A clinical social worker as described in § 410.73...

  14. 42 CFR 410.78 - Telehealth services.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... telecommunications system in single media format does not include telephone calls, images transmitted via facsimile... photograph of a skin lesion, may be considered to meet the requirement of a single media format under this... § 410.77. (vi) A clinical psychologist as described in § 410.71. (vii) A clinical social worker as...

  15. Vision Aided Inertial Navigation System Augmented with a Coded Aperture

    DTIC Science & Technology

    2011-03-24

    as the change in blur at different distances from the pixel plane can be inferred. Cameras with a micro lens array (called plenoptic cameras...images from 8 slightly different perspectives [14,43]. Dappled photography is similar to the plenoptic camera approach except that a cosine mask

  16. Gradient Index Optics at DARPA

    DTIC Science & Technology

    2013-11-01

    four efforts were selected for further development and demonstration: fluidic adaptive zoom lenses, foveated imaging, photon sieves, and nanolayer...2-4 1. Fluidic Adaptive Zoom Lenses... gastropod mollusks. In simple optical systems such as the fish lens, the focal length is a function of the wavelength of light. This distortion is called

  17. Classifying cotton bark and grass extraneous matter using image analysis

    USDA-ARS?s Scientific Manuscript database

    Cotton extraneous matter (EM) and special conditions are the only cotton quality attributes still determined manually by USDA-AMS classers. To develop a machine EM classing system, a better understanding of what triggers a classer EM call is needed. The goal of this work was to develop new informati...

  18. Advanced NDE research in electromagnetic, thermal, and coherent optics

    NASA Technical Reports Server (NTRS)

    Skinner, S. Ballou

    1992-01-01

    A new inspection technology called magneto-optic/eddy current imaging was investigated. The magneto-optic imager makes irregularities and inconsistencies in airframe components readily visible. Other research observed in electromagnetics included (1) disbond detection via resonant modal analysis; (2) AC magnetic field frequency dependence of magnetoacoustic emission; and (3) multi-view magneto-optic imaging. Research observed in the thermal group included (1) thermographic detection and characterization of corrosion in aircraft aluminum; (2) a multipurpose infrared imaging system for thermoelastic stress detection; (3) thermal diffusivity imaging of stress-induced damage in composites; and (4) detection and measurement of ice formation on the space shuttle main fuel tank. Research observed in the optics group included advancements in optical nondestructive evaluation (NDE).

  19. Analysis of x-ray hand images for bone age assessment

    NASA Astrophysics Data System (ADS)

    Serrat, Joan; Vitria, Jordi M.; Villanueva, Juan J.

    1990-09-01

    In this paper we describe a model-based system for the assessment of skeletal maturity on hand radiographs by the TW2 method. The problem consists in classifying a set of bones appearing in an image into one of several stages described in an atlas. A first approach, consisting of independent pre-processing, segmentation and classification phases, is also presented. However, it is only well suited for well-contrasted, low-noise images without superimposed bones, where edge detection by zero crossings of second directional derivatives is able to extract all bone contours, perhaps with small gaps and a few false edges in the background. Hence the use of all available knowledge about the problem domain is needed to build a rather general system. We have designed a rule-based system to narrow down the range of possible stages for each bone and to guide the analysis process. It calls procedures written in conventional languages for matching stage models against the image and for extracting the features needed in the classification process.
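
    The following sketch illustrates only the edge-detection step named above, using a Laplacian-of-Gaussian response as a stand-in for the second directional derivatives of the paper; the sigma value and function name are assumptions.

    import numpy as np
    from scipy import ndimage

    def zero_crossing_edges(image, sigma=2.0):
        """Boolean edge map from zero crossings of the Laplacian-of-Gaussian response."""
        response = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
        # A zero crossing exists where the response changes sign within a 3x3 neighborhood.
        minima = ndimage.minimum_filter(response, size=3)
        maxima = ndimage.maximum_filter(response, size=3)
        return (minima < 0) & (maxima > 0)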

  20. Land cover classification of Landsat 8 satellite data based on Fuzzy Logic approach

    NASA Astrophysics Data System (ADS)

    Taufik, Afirah; Sakinah Syed Ahmad, Sharifah

    2016-06-01

    The aim of this paper is to propose a method to classify the land cover of a satellite image based on a fuzzy rule-based system approach. The study uses Landsat 8 bands and derived indices, such as the Normalized Difference Water Index (NDWI), the Normalized Difference Built-up Index (NDBI) and the Normalized Difference Vegetation Index (NDVI), as inputs for the fuzzy inference system. The three selected indices represent our three main classes, called water, built-up land, and vegetation. The combination of the original multispectral bands and the selected indices provides more information about the image. The parameter selection for the fuzzy memberships is performed by using a supervised method known as ANFIS (adaptive neuro-fuzzy inference system) training. The fuzzy system is tested for classification on a land cover image that covers the Klang Valley area. The results showed that the fuzzy system approach is effective and can be explored and implemented for other areas of Landsat data.
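
    A minimal sketch of the index computation and a winner-takes-all fuzzy decision is given below; it is not the ANFIS-trained system of the paper, the membership break points are assumed placeholders, and the band variables follow the Landsat 8 layout (green = band 3, red = band 4, NIR = band 5, SWIR1 = band 6).

    import numpy as np

    def ndvi(red, nir):
        return (nir - red) / (nir + red + 1e-9)

    def ndwi(green, nir):
        return (green - nir) / (green + nir + 1e-9)

    def ndbi(swir1, nir):
        return (swir1 - nir) / (swir1 + nir + 1e-9)

    def tri(x, a, b, c):
        """Triangular membership function on [a, c], peaking at b."""
        return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0, 1)

    def classify(green, red, nir, swir1):
        """Return 0=water, 1=built-up, 2=vegetation per pixel (winner takes all)."""
        water = tri(ndwi(green, nir), 0.0, 0.5, 1.0)
        built = tri(ndbi(swir1, nir), 0.0, 0.5, 1.0)
        veg = tri(ndvi(red, nir), 0.2, 0.6, 1.0)
        return np.argmax(np.stack([water, built, veg]), axis=0)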

  1. A secure online image trading system for untrusted cloud environments.

    PubMed

    Munadi, Khairul; Arnia, Fitri; Syaryadhi, Mohd; Fujiyoshi, Masaaki; Kiya, Hitoshi

    2015-01-01

    In conventional image trading systems, images are usually stored unprotected on a server, rendering them vulnerable to untrusted server providers and malicious intruders. This paper proposes a conceptual image trading framework that enables secure storage and retrieval over Internet services. The process involves three parties: an image publisher, a server provider, and an image buyer. The aim is to facilitate secure storage and retrieval of original images for commercial transactions, while preventing untrusted server providers and unauthorized users from gaining access to the true contents. The framework exploits the Discrete Cosine Transform (DCT) coefficients and the moment invariants of images. Original images are visually protected in the DCT domain and stored on a repository server. Small representations of the original images, called thumbnails, are generated and made publicly accessible for browsing. When a buyer is interested in a thumbnail, he/she sends a query to retrieve the visually protected image. The thumbnails and protected images are matched using the DC component of the DCT coefficients and the moment invariant features. After the matching process, the server returns the corresponding protected image to the buyer. However, the image remains visually protected unless a key is granted. Our target application is the online market, where publishers sell their stock images over the Internet using public cloud servers.
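
    The matching step can be pictured with the following sketch (OpenCV and NumPy); it is not the authors' exact scheme, simply a comparison of signatures built from the DCT DC term and the Hu moment invariants, with hypothetical function names.

    import cv2
    import numpy as np

    def signature(gray):
        """Matching feature: the DCT DC term plus the seven Hu moment invariants."""
        gray = gray[:gray.shape[0] // 2 * 2, :gray.shape[1] // 2 * 2]  # cv2.dct expects even sizes
        dct = cv2.dct(gray.astype(np.float32))
        hu = cv2.HuMoments(cv2.moments(gray)).flatten()
        return np.concatenate(([dct[0, 0]], hu))

    def best_match(thumbnail, protected_images):
        """Index of the stored image whose signature is closest to the thumbnail's."""
        query = signature(thumbnail)
        distances = [np.linalg.norm(signature(img) - query) for img in protected_images]
        return int(np.argmin(distances))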

  2. Simultaneous transmission for an encrypted image and a double random-phase encryption key

    NASA Astrophysics Data System (ADS)

    Yuan, Sheng; Zhou, Xin; Li, Da-Hai; Zhou, Ding-Fu

    2007-06-01

    We propose a method to simultaneously transmit a double random-phase encryption key and an encrypted image by making use of the fact that an acceptable decryption result can be obtained when only partial data of the encrypted image are used in the decryption process. First, the original image data are encoded as an encrypted image by a double random-phase encryption technique. Second, the double random-phase encryption key is encoded as an encoded key by the Rivest-Shamir-Adleman (RSA) public-key encryption algorithm. Then the amplitude of the encrypted image is modulated by the encoded key to form what we call an encoded image. Finally, the encoded image, which carries both the encrypted image and the encoded key, is delivered to the receiver. Based on this method, the receiver can obtain an acceptable decryption result, and secure transmission is guaranteed by the RSA cipher system.

  3. Simultaneous transmission for an encrypted image and a double random-phase encryption key.

    PubMed

    Yuan, Sheng; Zhou, Xin; Li, Da-hai; Zhou, Ding-fu

    2007-06-20

    We propose a method to simultaneously transmit a double random-phase encryption key and an encrypted image by making use of the fact that an acceptable decryption result can be obtained when only partial data of the encrypted image are used in the decryption process. First, the original image data are encoded as an encrypted image by a double random-phase encryption technique. Second, the double random-phase encryption key is encoded as an encoded key by the Rivest-Shamir-Adleman (RSA) public-key encryption algorithm. Then the amplitude of the encrypted image is modulated by the encoded key to form what we call an encoded image. Finally, the encoded image, which carries both the encrypted image and the encoded key, is delivered to the receiver. Based on this method, the receiver can obtain an acceptable decryption result, and secure transmission is guaranteed by the RSA cipher system.
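
    The double random-phase encryption step that both records describe can be sketched as follows (NumPy only; the RSA encoding of the key and the amplitude-modulation step are omitted, and the phase masks are simply passed around as arrays).

    import numpy as np

    rng = np.random.default_rng(1)

    def drpe_encrypt(image, phase1, phase2):
        """Encrypt: random phase in the space domain, then in the Fourier domain."""
        field = image * np.exp(2j * np.pi * phase1)
        return np.fft.ifft2(np.fft.fft2(field) * np.exp(2j * np.pi * phase2))

    def drpe_decrypt(cipher, phase1, phase2):
        """Decrypt with the conjugate phase masks; returns the recovered amplitude."""
        field = np.fft.ifft2(np.fft.fft2(cipher) * np.exp(-2j * np.pi * phase2))
        return np.abs(field * np.exp(-2j * np.pi * phase1))

    # img = ...  # real-valued 2-D array in [0, 1]
    # p1, p2 = rng.random(img.shape), rng.random(img.shape)
    # recovered = drpe_decrypt(drpe_encrypt(img, p1, p2), p1, p2)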

  4. Computer control of a scanning electron microscope for digital image processing of thermal-wave images

    NASA Technical Reports Server (NTRS)

    Gilbert, Percy; Jones, Robert E.; Kramarchuk, Ihor; Williams, Wallace D.; Pouch, John J.

    1987-01-01

    Using a recently developed technology called thermal-wave microscopy, NASA Lewis Research Center has developed a computer controlled submicron thermal-wave microscope for the purpose of investigating III-V compound semiconductor devices and materials. This paper describes the system's design and configuration and discusses the hardware and software capabilities. Knowledge of the Concurrent 3200 series computers is needed for a complete understanding of the material presented. However, concepts and procedures are of general interest.

  5. HALO: a reconfigurable image enhancement and multisensor fusion system

    NASA Astrophysics Data System (ADS)

    Wu, F.; Hickman, D. L.; Parker, Steve J.

    2014-06-01

    Contemporary high definition (HD) cameras and affordable infrared (IR) imagers are set to dramatically improve the effectiveness of security, surveillance and military vision systems. However, the quality of imagery is often compromised by camera shake, or poor scene visibility due to inadequate illumination or bad atmospheric conditions. A versatile vision processing system called HALO™ is presented that can address these issues, by providing flexible image processing functionality on a low size, weight and power (SWaP) platform. Example processing functions include video distortion correction, stabilisation, multi-sensor fusion and image contrast enhancement (ICE). The system is based around an all-programmable system-on-a-chip (SoC), which combines the computational power of a field-programmable gate array (FPGA) with the flexibility of a CPU. The FPGA accelerates computationally intensive real-time processes, whereas the CPU provides management and decision making functions that can automatically reconfigure the platform based on user input and scene content. These capabilities enable a HALO™ equipped reconnaissance or surveillance system to operate in poor visibility, providing potentially critical operational advantages in visually complex and challenging usage scenarios. The choice of an FPGA based SoC is discussed, and the HALO™ architecture and its implementation are described. The capabilities of image distortion correction, stabilisation, fusion and ICE are illustrated using laboratory and trials data.

  6. Network-based reading system for lung cancer screening CT

    NASA Astrophysics Data System (ADS)

    Fujino, Yuichi; Fujimura, Kaori; Nomura, Shin-ichiro; Kawashima, Harumi; Tsuchikawa, Megumu; Matsumoto, Toru; Nagao, Kei-ichi; Uruma, Takahiro; Yamamoto, Shinji; Takizawa, Hotaka; Kuroda, Chikazumi; Nakayama, Tomio

    2006-03-01

    This research aims to support chest computed tomography (CT) medical checkups in order to decrease the death rate from lung cancer. We have developed a remote cooperative reading system for lung cancer screening over the Internet, with a secure transmission function and a cooperative reading environment. It is called the Network-based Reading System. A telemedicine system involves many issues, such as network costs and data security, if it is used over the Internet, which is an open network. In Japan, broadband access is widespread and its cost is the lowest in the world. We developed our system with the human-machine interface and security in mind. It consists of data entry terminals, a database server, a computer-aided diagnosis (CAD) system, and several reading terminals. It uses a secure Digital Imaging and Communications in Medicine (DICOM) encryption method and Public Key Infrastructure (PKI) based secure DICOM image data distribution. We carried out an experimental trial over the Japan Gigabit Network (JGN), the testbed for the Japanese next-generation network, and conducted verification experiments on secure screening image distribution, several kinds of data addition, and remote cooperative reading. We found that a network bandwidth of about 1.5 Mbps enabled distribution of screening images and cooperative reading, and that the encryption and image distribution methods we proposed were applicable to the encryption and distribution of general DICOM images via the Internet.

  7. A new omni-directional multi-camera system for high resolution surveillance

    NASA Astrophysics Data System (ADS)

    Cogal, Omer; Akin, Abdulkadir; Seyid, Kerem; Popovic, Vladan; Schmid, Alexandre; Ott, Beat; Wellig, Peter; Leblebici, Yusuf

    2014-05-01

    Omni-directional high resolution surveillance has a wide application range in defense and security fields. Early systems used for this purpose are based on parabolic mirror or fisheye lens where distortion due to the nature of the optical elements cannot be avoided. Moreover, in such systems, the image resolution is limited to a single image sensor's image resolution. Recently, the Panoptic camera approach that mimics the eyes of flying insects using multiple imagers has been presented. This approach features a novel solution for constructing a spherically arranged wide FOV plenoptic imaging system where the omni-directional image quality is limited by low-end sensors. In this paper, an overview of current Panoptic camera designs is provided. New results for a very-high resolution visible spectrum imaging and recording system inspired from the Panoptic approach are presented. The GigaEye-1 system, with 44 single cameras and 22 FPGAs, is capable of recording omni-directional video in a 360°×100° FOV at 9.5 fps with a resolution over (17,700×4,650) pixels (82.3MP). Real-time video capturing capability is also verified at 30 fps for a resolution over (9,000×2,400) pixels (21.6MP). The next generation system with significantly higher resolution and real-time processing capacity, called GigaEye-2, is currently under development. The important capacity of GigaEye-1 opens the door to various post-processing techniques in surveillance domain such as large perimeter object tracking, very-high resolution depth map estimation and high dynamic-range imaging which are beyond standard stitching and panorama generation methods.

  8. Machine vision for real time orbital operations

    NASA Technical Reports Server (NTRS)

    Vinz, Frank L.

    1988-01-01

    Machine vision for automation and robotic operation of Space Station era systems has the potential for increasing the efficiency of orbital servicing, repair, assembly and docking tasks. A machine vision research project is described in which a TV camera is used for inputing visual data to a computer so that image processing may be achieved for real time control of these orbital operations. A technique has resulted from this research which reduces computer memory requirements and greatly increases typical computational speed such that it has the potential for development into a real time orbital machine vision system. This technique is called AI BOSS (Analysis of Images by Box Scan and Syntax).

  9. OmniBird: a miniature PTZ NIR sensor system for UCAV day/night autonomous operations

    NASA Astrophysics Data System (ADS)

    Yi, Steven; Li, Hui

    2007-04-01

    Through SBIR funding from NAVAIR, we have successfully developed an innovative, miniaturized, and lightweight PTZ UCAV imager called OmniBird for UCAV taxiing. The proposed OmniBird is able to fit in a small space. The designed zoom capability allows it to acquire focused images of targets ranging from 10 to 250 feet away. The innovative panning mechanism also allows the system to have a field of view of +/- 100 degrees within the limited space provided (6 cubic inches). The integrated optics, camera sensor, and mechanics solution allows the OmniBird to stay optically aligned and shock-proof under harsh environments.

  10. Using sparsity information for iterative phase retrieval in x-ray propagation imaging.

    PubMed

    Pein, A; Loock, S; Plonka, G; Salditt, T

    2016-04-18

    For iterative phase retrieval algorithms in near field x-ray propagation imaging experiments with a single distance measurement, it is indispensable to have a strong constraint based on a priori information about the specimen; for example, information about the specimen's support. Recently, Loock and Plonka proposed to use the a priori information that the exit wave is sparsely represented in a certain directional representation system, a so-called shearlet system. In this work, we extend this approach to complex-valued signals by applying the new shearlet constraint to amplitude and phase separately. Further, we demonstrate its applicability to experimental data.
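
    A heavily simplified sketch of such a constrained iteration is given below; the near-field propagator is replaced by a plain Fourier transform and the shearlet-domain sparsity constraint by pixel-domain soft-thresholding, applied to amplitude and phase separately as in the extension described above. All names and parameter values are assumptions.

    import numpy as np

    def soft(x, t):
        """Soft-thresholding operator promoting sparsity."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def retrieve_phase(measured_magnitude, n_iter=200, thresh=1e-3):
        """Alternate between the measurement constraint and the sparsity constraint."""
        obj = np.ones_like(measured_magnitude, dtype=complex)
        for _ in range(n_iter):
            # Measurement constraint: keep the propagated phase, impose the magnitude.
            det = np.fft.fft2(obj)
            det = measured_magnitude * np.exp(1j * np.angle(det))
            obj = np.fft.ifft2(det)
            # Sparsity constraint, applied separately to amplitude and phase.
            amp = soft(np.abs(obj), thresh)
            pha = soft(np.angle(obj), thresh)
            obj = amp * np.exp(1j * pha)
        return obj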

  11. An acquisition system for CMOS imagers with a genuine 10 Gbit/s bandwidth

    NASA Astrophysics Data System (ADS)

    Guérin, C.; Mahroug, J.; Tromeur, W.; Houles, J.; Calabria, P.; Barbier, R.

    2012-12-01

    This paper presents a high data throughput acquisition system for pixel detector readout such as CMOS imagers. This CMOS acquisition board offers a genuine 10 Gbit/s bandwidth to the workstation and can provide an on-line and continuous high frame rate imaging capability. On-line processing can be implemented either on the Data Acquisition Board or on the multi-cores workstation depending on the complexity of the algorithms. The different parts composing the acquisition board have been designed to be used first with a single-photon detector called LUSIPHER (800×800 pixels), developed in our laboratory for scientific applications ranging from nano-photonics to adaptive optics. The architecture of the acquisition board is presented and the performances achieved by the produced boards are described. The future developments (hardware and software) concerning the on-line implementation of algorithms dedicated to single-photon imaging are tackled.

  12. Wireless local area networking for linking a PC reporting system and PACS: clinical feasibility in emergency reporting.

    PubMed

    Yoshihiro, Akiko; Nakata, Norio; Harada, Junta; Tada, Shimpei

    2002-01-01

    Although local area networks (LANs) are commonplace in hospital-based radiology departments today, wireless LANs are still relatively unknown and untried. A linked wireless reporting system was developed to improve work throughput and efficiency. It allows radiologists, physicians, and technologists to review current radiology reports and images and instantly compare them with reports and images from previous examinations. This reporting system also facilitates creation of teaching files quickly, easily, and accurately. It consists of a Digital Imaging and Communications in Medicine 3.0-based picture archiving and communication system (PACS), a diagnostic report server, and portable laptop computers. The PACS interfaces with magnetic resonance imagers, computed tomographic scanners, and computed radiography equipment. The same kind of functionality is achievable with a wireless LAN as with a wired LAN, with comparable bandwidth but with less cabling infrastructure required. This wireless system is presently incorporated into the operations of the emergency and radiology departments, with future plans calling for applications in operating rooms, outpatient departments, all hospital wards, and intensive care units. No major problems have been encountered with the system, which is in constant use and appears to be quite successful. Copyright RSNA, 2002

  13. Unification of two fractal families

    NASA Astrophysics Data System (ADS)

    Liu, Ying

    1995-06-01

    Barnsley and Hurd classify fractal images into two families: iterated function system fractals (IFS fractals) and fractal transform fractals, or local iterated function system fractals (LIFS fractals). We will call IFS fractals class 2 fractals and LIFS fractals class 3 fractals. In this paper, we unify these two approaches plus another family of fractals, the class 5 fractals. The basic idea is as follows: a dynamical system can be represented by a digraph, and the nodes of a digraph can be divided into two parts, transient states and persistent states. For bilevel images, a persistent node is a black pixel and a transient node is a white pixel. For images with more than two gray levels, a stochastic digraph is used. A transient node is a pixel with an intensity of 0; the intensity of a persistent node is determined by a relative frequency. In this way, the two families of fractals can be generated by a common mechanism. In this paper, we first present a classification of dynamical systems and introduce the transformation based on digraphs, then we unify the two approaches for fractal binary images. We compare the decoding algorithms of the two families. Finally, we generalize the discussion to continuous-tone images.
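
    As a small illustration of the first family (class 2, IFS fractals), the chaos game below renders the attractor of three contractive maps, a Sierpinski triangle, into a bilevel image; its black pixels play the role of the persistent nodes in the digraph view described above. Names and sizes are arbitrary.

    import numpy as np

    def sierpinski(n_points=200_000, size=512):
        """Render the attractor of a three-map IFS (Sierpinski triangle) as a bilevel image."""
        rng = np.random.default_rng(0)
        vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
        image = np.zeros((size, size), dtype=np.uint8)
        point = np.array([0.5, 0.5])
        for _ in range(n_points):
            point = (point + vertices[rng.integers(3)]) / 2.0  # apply a randomly chosen map
            col = int(point[0] * (size - 1))
            row = int((1.0 - point[1]) * (size - 1))
            image[row, col] = 1
        return image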

  14. An objectively-analyzed method for measuring the useful penetration of x-ray imaging systems.

    PubMed

    Glover, Jack L; Hudson, Lawrence T

    2016-06-01

    The ability to detect wires is an important capability of the cabinet x-ray imaging systems that are used in aviation security as well as the portable x-ray systems that are used by domestic law enforcement and military bomb squads. A number of national and international standards describe methods for testing this capability using the so called useful penetration test metric, where wires are imaged behind different thicknesses of blocking material. Presently, these tests are scored based on human judgments of wire visibility, which are inherently subjective. We propose a new method in which the useful penetration capabilities of an x-ray system are objectively evaluated by an image processing algorithm operating on digital images of a standard test object. The algorithm advantageously applies the Radon transform for curve parameter detection that reduces the problem of wire detection from two dimensions to one. The sensitivity of the wire detection method is adjustable and we demonstrate how the threshold parameter can be set to give agreement with human-judged results. The method was developed to be used in technical performance standards and is currently under ballot for inclusion in a US national aviation security standard.
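
    The detection principle can be sketched as follows with scikit-image: a straight wire maps to a localized peak in the Radon transform (sinogram), so its presence can be scored against the background without human judgment. The scoring rule and threshold below are assumed placeholders, not the calibrated criterion of the standard.

    import numpy as np
    from skimage.transform import radon

    def wire_detected(image, threshold=5.0):
        """Return True if the sinogram contains a peak well above the background."""
        theta = np.linspace(0.0, 180.0, 180, endpoint=False)
        sinogram = radon(image.astype(float), theta=theta, circle=False)
        score = (sinogram.max() - sinogram.mean()) / (sinogram.std() + 1e-9)
        return score > threshold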

  15. Automated 100-Position Specimen Loader and Image Acquisition System for Transmission Electron Microscopy

    PubMed Central

    Lefman, Jonathan; Morrison, Robert; Subramaniam, Sriram

    2007-01-01

    We report the development of a novel, multi-specimen imaging system for high-throughput transmission electron microscopy. Our cartridge-based loading system, called the “Gatling”, permits the sequential examination of as many as 100 specimens in the microscope for room-temperature electron microscopy, using mechanisms for rapid and automated specimen exchange. The software for the operation of the Gatling and for automated data acquisition has been implemented in an updated version of our in-house program AutoEM. In the current implementation of the system, the time required to deliver 95 specimens into the microscope and collect overview images from each is about 13 hours. Regions of interest are identified from a low-magnification atlas generated from each specimen, and an unlimited number of higher-magnification images can subsequently be acquired from these regions using fully automated data acquisition procedures that can be controlled from a remote interface. We anticipate that the availability of the Gatling will greatly accelerate the speed of data acquisition for a variety of applications in biology, materials science and nanotechnology that require rapid screening and image analysis of multiple specimens. PMID:17240161

  16. An objectively-analyzed method for measuring the useful penetration of x-ray imaging systems

    PubMed Central

    Glover, Jack L.; Hudson, Lawrence T.

    2016-01-01

    The ability to detect wires is an important capability of the cabinet x-ray imaging systems that are used in aviation security as well as the portable x-ray systems that are used by domestic law enforcement and military bomb squads. A number of national and international standards describe methods for testing this capability using the so called useful penetration test metric, where wires are imaged behind different thicknesses of blocking material. Presently, these tests are scored based on human judgments of wire visibility, which are inherently subjective. We propose a new method in which the useful penetration capabilities of an x-ray system are objectively evaluated by an image processing algorithm operating on digital images of a standard test object. The algorithm advantageously applies the Radon transform for curve parameter detection that reduces the problem of wire detection from two dimensions to one. The sensitivity of the wire detection method is adjustable and we demonstrate how the threshold parameter can be set to give agreement with human-judged results. The method was developed to be used in technical performance standards and is currently under ballot for inclusion in a US national aviation security standard. PMID:27499586

  17. An objectively-analyzed method for measuring the useful penetration of x-ray imaging systems

    NASA Astrophysics Data System (ADS)

    Glover, Jack L.; Hudson, Lawrence T.

    2016-06-01

    The ability to detect wires is an important capability of the cabinet x-ray imaging systems that are used in aviation security as well as the portable x-ray systems that are used by domestic law enforcement and military bomb squads. A number of national and international standards describe methods for testing this capability using the so called useful penetration test metric, where wires are imaged behind different thicknesses of blocking material. Presently, these tests are scored based on human judgments of wire visibility, which are inherently subjective. We propose a new method in which the useful penetration capabilities of an x-ray system are objectively evaluated by an image processing algorithm operating on digital images of a standard test object. The algorithm advantageously applies the Radon transform for curve parameter detection that reduces the problem of wire detection from two dimensions to one. The sensitivity of the wire detection method is adjustable and we demonstrate how the threshold parameter can be set to give agreement with human-judged results. The method was developed to be used in technical performance standards and is currently under ballot for inclusion in an international aviation security standard.

  18. Determination of the microbolometric FPA's responsivity with imaging system's radiometric considerations

    NASA Astrophysics Data System (ADS)

    Gogler, Slawomir; Bieszczad, Grzegorz; Krupinski, Michal

    2013-10-01

    Thermal imagers and the infrared array sensors used therein are subject to a calibration procedure and an evaluation of their voltage sensitivity to incident radiation during the manufacturing process. The calibration procedure is especially important in so-called radiometric cameras, where accurate radiometric quantities, given in physical units, are of concern. Even though non-radiometric cameras are not expected to stand up to such elevated standards, it is still important that the image faithfully represents temperature variations across the scene. The detectors used in a thermal camera are illuminated by infrared radiation transmitted through an infrared-transmitting optical system. Often an optical system, when exposed to a uniform Lambertian source, forms a non-uniform irradiance distribution in its image plane. In order to carry out an accurate non-uniformity correction, it is essential to correctly predict the irradiance distribution produced by a uniform source. In this article, a non-uniformity correction method is presented that takes the optical system's radiometry into account. Predictions of the irradiance distribution have been compared with measured irradiance values. The presented radiometric model allows fast and accurate non-uniformity correction to be carried out.
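
    A minimal sketch of the idea is shown below: the irradiance roll-off predicted for the optics (a cos^4 profile here, standing in for the paper's radiometric model) is divided out of two uniform-source calibration frames before the per-pixel gain and offset are estimated, so that only detector non-uniformity is corrected. All names and the profile parameters are assumptions.

    import numpy as np

    def cos4_profile(shape, f_number_factor=1.0):
        """Assumed relative irradiance formed by the optics for a uniform source."""
        rows, cols = np.indices(shape)
        r = np.hypot(rows - shape[0] / 2, cols - shape[1] / 2)
        angle = np.arctan(f_number_factor * r / max(shape))
        return np.cos(angle) ** 4

    def two_point_nuc(frame_cold, frame_hot, t_cold, t_hot, profile):
        """Per-pixel gain and offset from two uniform-source frames, optics removed."""
        cold = frame_cold / profile
        hot = frame_hot / profile
        gain = (t_hot - t_cold) / (hot - cold)
        offset = t_cold - gain * cold
        return gain, offset

    def correct(frame, gain, offset, profile):
        return gain * (frame / profile) + offset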

  19. FBI Fingerprint Image Capture System High-Speed-Front-End throughput modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rathke, P.M.

    1993-09-01

    The Federal Bureau of Investigation (FBI) has undertaken a major modernization effort called the Integrated Automated Fingerprint Identification System (IAFIS). This system will provide centralized identification services using automated fingerprint, subject descriptor, mugshot, and document processing. A high-speed Fingerprint Image Capture System (FICS) is under development as part of the IAFIS program. The FICS will capture digital and microfilm images of FBI fingerprint cards for input into a central database. One FICS design supports two front-end scanning subsystems, known as the High-Speed-Front-End (HSFE) and the Low-Speed-Front-End, to supply image data to a common data processing subsystem. The production rate of the HSFE is critical to meeting the FBI's fingerprint card processing schedule. A model of the HSFE has been developed to help identify the issues driving the production rate, assist in the development of component specifications, and guide the evolution of an operations plan. A description of the model development is given, the assumptions are presented, and some HSFE throughput analysis is performed.

  20. Web-based video monitoring of CT and MRI procedures

    NASA Astrophysics Data System (ADS)

    Ratib, Osman M.; Dahlbom, Magdalena; Kho, Hwa T.; Valentino, Daniel J.; McCoy, J. Michael

    2000-05-01

    Web-based video transmission of images from CT and MRI consoles was implemented in an Intranet environment for real-time monitoring of ongoing procedures. Images captured from the consoles are compressed to video resolution and broadcast through a web server. When called upon, the attending radiologists can view these live images on any computer within the secured Intranet network. With adequate compression, these images can be displayed simultaneously in different locations at a rate of 2 to 5 images/sec over a standard LAN. Although the quality of the images is insufficient for diagnostic purposes, our user survey showed that they were suitable for supervising a procedure, positioning the imaging slices, and performing routine quality checks before completion of a study. The system was implemented at UCLA to monitor 9 CTs and 6 MRIs distributed in 4 buildings. The system significantly improved the radiologists' productivity by saving precious time otherwise spent on trips between reading rooms and examination rooms. It also improved patient throughput by reducing the time spent waiting for a radiologist to come and check a study before moving the patient from the scanner.

  1. Halo CME

    NASA Image and Video Library

    2017-12-08

    A giant cloud appears to expand outward from the sun in all directions in this image from Sept. 28, 2012, which is called a halo CME. This kind of image occurs when a CME moves toward Earth – as here – or directly away from it. Credit: ESA/NASA/SOHO CME WEEK: What To See in CME Images Two main types of explosions occur on the sun: solar flares and coronal mass ejections. Unlike the energy and x-rays produced in a solar flare – which can reach Earth at the speed of light in eight minutes – coronal mass ejections are giant, expanding clouds of solar material that take one to three days to reach Earth. Once at Earth, these ejections, also called CMEs, can impact satellites in space or interfere with radio communications. During CME WEEK from Sept. 22 to 26, 2014, we explore different aspects of these giant eruptions that surge out from the star we live with. When a coronal mass ejection blasts off the sun, scientists rely on instruments called coronagraphs to track their progress. Coronagraphs block out the bright light of the sun, so that the much fainter material in the solar atmosphere -- including CMEs -- can be seen in the surrounding space. CMEs appear in these images as expanding shells of material from the sun's atmosphere -- sometimes a core of colder, solar material (called a filament) from near the sun's surface moves in the center. But mapping out such three-dimensional components from a two-dimensional image isn't easy. Watch the slideshow to find out how scientists interpret what they see in CME pictures. The images in the slideshow are from the three sets of coronagraphs NASA currently has in space. One is on the joint European Space Agency and NASA Solar and Heliospheric Observatory, or SOHO. SOHO launched in 1995, and sits between Earth and the sun about a million miles away from Earth. The other two coronagraphs are on the two spacecraft of the NASA Solar Terrestrial Relations Observatory, or STEREO, mission, which launched in 2006. The two STEREO spacecraft are both currently viewing the far side of the sun. Together these instruments help scientists create a three-dimensional model of any CME as its journey unfolds through interplanetary space. Such information can show why a given characteristic of a CME close to the sun might lead to a given effect near Earth, or any other planet in the solar system.

  2. Phoenix Deepens Trenches on Mars (3D)

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The Surface Stereo Imager on NASA's Phoenix Mars Lander took this anaglyph on Oct. 21, 2008, during the 145th Martian day, or sol. Phoenix landed on Mars' northern plains on May 25, 2008.

    The trench on the upper left, called 'Dodo-Goldilocks,' is about 38 centimeters (15 inches) long and 4 centimeters (1.5 inches) deep. The trench on the right, called 'Upper Cupboard,' is about 60 centimeters (24 inches) long and 3 centimeters (1 inch) deep. The trench in the lower middle is called 'Stone Soup.'

    The Phoenix mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  3. Introduction to the concepts of TELEDEMO and TELEDIMS

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Schlutsmeyer, A. P.

    1982-01-01

    An introduction to the system concepts: TELEDEMO and TELEDIMS is provided. TELEDEMO is derived primarily from computer graphics and, via incorporation of sophisticated image data compression, enables effective low cost teleconferencing at data rates as low as 1K bit/second using dial-up phone lines. Combining TELEDEMO's powerful capabilities for the development of presentation material with microprocessor-based Information Management Systems (IMS) yields a truly all electronic IMS called TELEDIMS.

  4. Opto-mechanical design of an image slicer for the GRIS spectrograph at GREGOR

    NASA Astrophysics Data System (ADS)

    Vega Reyes, N.; Esteves, M. A.; Sánchez-Capuchino, J.; Salaun, Y.; López, R. L.; Gracia, F.; Estrada Herrera, P.; Grivel, C.; Vaz Cedillo, J. J.; Collados, M.

    2016-07-01

    An image slicer has been proposed for the Integral Field Spectrograph [1] of the 4-m European Solar Telescope (EST) [2]. The image slicer for EST is called MuSICa (Multi-Slit Image slicer based on collimator-Camera) [3]; it is a telecentric system with diffraction-limited optical quality, offering the possibility to obtain high-resolution integral field solar spectroscopy or spectro-polarimetry by coupling a polarimeter after the generated slit (or slits). Considering the technical complexity of the proposed Integral Field Unit (IFU), a prototype has been designed for the GRIS spectrograph at the GREGOR telescope at Teide Observatory (Tenerife), composed of the optical elements of the image slicer itself, a scanning system (to cover a larger field of view with sequential adjacent measurements) and an appropriate re-imaging system. All these subsystems are placed on a bench specially designed to facilitate their alignment, integration and verification, and their easy installation in front of the spectrograph. This communication describes the opto-mechanical solution adopted to upgrade GRIS while ensuring repeatability between the observational modes, IFU and long-slit. Results from several tests performed to validate the opto-mechanical prototypes are also presented.

  5. Face recognition system using multiple face model of hybrid Fourier feature under uncontrolled illumination variation.

    PubMed

    Hwang, Wonjun; Wang, Haitao; Kim, Hyunwoo; Kee, Seok-Cheol; Kim, Junmo

    2011-04-01

    The authors present a robust face recognition system for large-scale data sets taken under uncontrolled illumination variations. The proposed face recognition system consists of a novel illumination-insensitive preprocessing method, hybrid Fourier-based facial feature extraction, and a score fusion scheme. First, in the preprocessing stage, a face image is transformed into an illumination-insensitive image, called an "integral normalized gradient image," by normalizing and integrating the smoothed gradients of a facial image. Then, for feature extraction by complementary classifiers, multiple face models based upon hybrid Fourier features are applied. The hybrid Fourier features are extracted from different Fourier domains in different frequency bandwidths, and each feature is individually classified by linear discriminant analysis. In addition, multiple face models are generated from plural normalized face images that have different eye distances. Finally, to combine scores from the multiple complementary classifiers, a log-likelihood-ratio-based score fusion scheme is applied. The proposed system is evaluated using the face recognition grand challenge (FRGC) experimental protocols; FRGC is a large available data set. Experimental results on the FRGC version 2.0 data sets have shown that the proposed method achieves an average 81.49% verification rate on 2-D face images under various environmental variations such as illumination changes, expression changes, and time lapses.

  6. [Development of a secure and cost-effective infrastructure for the access of arbitrary web-based image distribution systems].

    PubMed

    Hackländer, T; Kleber, K; Schneider, H; Demabre, N; Cramer, B M

    2004-08-01

    The aim was to build an infrastructure that gives radiologists on call and external users teleradiological access to the HTML-based image distribution system inside the hospital via the Internet. In addition, no investment costs should arise on the user side, and the image data should be sent renamed using cryptographic techniques. A purely HTML-based system manages the image distribution inside the hospital, and an open-source project extends this system through a secure gateway outside the firewall of the hospital. The gateway handles the communication between the external users and the HTML server within the network of the hospital. A second firewall is installed between the gateway and the external users and builds up a virtual private network (VPN). A connection between the gateway and an external user is only acknowledged if the computers involved authenticate each other via certificates and the external users authenticate via a multi-stage password system. All data are transferred encrypted. External users only get access to images that have previously been renamed to a pseudonym by automated processing. With an ADSL Internet connection, external users achieve an image loading rate of 0.4 CT images per second. More than 90 % of the delay during image transfer results from security checks within the firewalls. Data passing the gateway induce no measurable delay. The project goals were realized by means of an infrastructure that works vendor-independently with any HTML-based image distribution system. The requirements of data security were met using state-of-the-art web techniques. Adequate access and transfer speed led to widespread acceptance of the system by external users.

  7. 42 CFR 410.78 - Telehealth services.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... telecommunications system in single media format does not include telephone calls, images transmitted via facsimile... photograph of a skin lesion, may be considered to meet the requirement of a single media format under this... described in § 410.77. (vi) A clinical psychologist as described in § 410.71. (vii) A clinical social worker...

  8. A Hybrid Computing Testbed for Mobile Threat Detection and Enhanced Research and Education in Information

    DTIC Science & Technology

    2014-11-20

    techniques to defend against stealthy malware, i.e., rootkits. For example, we have been developing a new virtualization-based security service called AirBag ...for mobile devices. AirBag is a virtualization-based system that enables dynamic switching of (guest) Android images in one VM, with one image

  9. The Economics of Managed Print and Imaging Services

    DTIC Science & Technology

    2011-06-01

    process are called poka-yokes, which are methods to prevent mistakes. This combination of controls is designed to make a system foolproof because it...must be reformatted prior to turn-in.” This sticker serves as a poka-yoke, as mentioned in Chapter IV. A breach of PII can also result from

  10. Toward a fusion of optical coherence tomography and hyperspectral imaging for poultry meat quality assessment

    USDA-ARS?s Scientific Manuscript database

    An emerging poultry meat quality concern is associated with chicken breast fillets having an uncharacteristically hard or rigid feel (called the wooden breast condition). The cause of the wooden breast condition is still largely unknown, and there is no single objective evaluation method or system k...

  11. The Effects of Emotional Memory Skills on Public Speaking Anxiety: A First Look.

    ERIC Educational Resources Information Center

    Holtz, James; Reynolds, Gayla

    This paper focuses on the use of emotional memory skills to reduce communication apprehension, pioneered as a new cognitive intervention treatment called "The Imaging System for Public Speaking" (Keaten et al, 1994). The paper briefly explains other cognitive intervention strategies commonly used, including rational-emotive therapy,…

  12. TL dosimetry for quality control of CR mammography imaging systems

    NASA Astrophysics Data System (ADS)

    Gaona, E.; Nieto, J. A.; Góngora, J. A. I. D.; Arreola, M.; Enríquez, J. G. F.

    The aim of this work is to estimate the average glandular dose with thermoluminescent (TL) dosimetry and to compare it with image quality in computed radiography (CR) mammography. For dose measurement, the Food and Drug Administration (FDA) and the American College of Radiology (ACR) use a phantom, so that dose and image quality are assessed with the same test object. Mammography is a radiological imaging technique used to visualize early biological manifestations of breast cancer. Digital systems have two types of image-capturing devices: full-field digital mammography (FFDM) and CR mammography. In Mexico, there are several CR mammography systems in clinical use, but only one system has been approved for use by the FDA. CR mammography uses a photostimulable phosphor (PSP) detector system. Most CR plates are made of 85% BaFBr and 15% BaFI doped with europium (Eu), commonly called barium fluorohalide. We carried out an exploratory survey of six CR mammography units from three different manufacturers and six dedicated X-ray mammography units with fully automatic exposure. The results show that three CR mammography units (50%) deliver a dose greater than 3.0 mGy without demonstrating improved image quality. The differences between the average doses obtained with the TLD system and with an ionization-chamber dosimeter are less than 10%. The TLD system is a good option for measuring the average glandular dose for X rays with the HVL (0.35-0.38 mmAl) and kVp (24-26) used in quality control procedures with the ACR Mammography Accreditation Phantom.

  13. Commercially available high-speed system for recording and monitoring vocal fold vibrations.

    PubMed

    Sekimoto, Sotaro; Tsunoda, Koichi; Kaga, Kimitaka; Makiyama, Kiyoshi; Tsunoda, Atsunobu; Kondo, Kenji; Yamasoba, Tatsuya

    2009-12-01

    We have developed a special purpose adaptor making it possible to use a commercially available high-speed camera to observe vocal fold vibrations during phonation. The camera can capture dynamic digital images at speeds of 600 or 1200 frames per second. The adaptor is equipped with a universal-type attachment and can be used with most endoscopes sold by various manufacturers. Satisfactory images can be obtained with a rigid laryngoscope even with the standard light source. The total weight of the adaptor and camera (including battery) is only 1010 g. The new system comprising the high-speed camera and the new adaptor can be purchased for about $3000 (US), while the least expensive stroboscope costs about 10 times that price, and a high-performance high-speed imaging system may cost 100 times as much. Therefore the system is both cost-effective and useful in the outpatient clinic or casualty setting, on house calls, and for the purpose of student or patient education.

  14. Noise-immune complex correlation for vasculature imaging based on standard and Jones-matrix optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Makita, Shuichi; Kurokawa, Kazuhiro; Hong, Young-Joo; Li, En; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    A new optical coherence angiography (OCA) method, called correlation mapping OCA (cmOCA), is presented; it uses an SNR-corrected complex correlation. A theory for SNR correction of the complex correlation calculation is presented. The method also integrates a motion-artifact-removal step for the decorrelation artifact induced by sample motion. The theory is further extended to compute a more reliable correlation by using multi-channel OCT systems, such as Jones-matrix OCT. High-contrast vasculature imaging of the in vivo human posterior eye has been obtained. Composite imaging of cmOCA and the degree of polarization uniformity indicates abnormalities of the vasculature and pigmented tissues simultaneously.

  15. Martian Surface Beneath Phoenix

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This is an image of the Martian surface beneath NASA's Phoenix Mars Lander. The image was taken by Phoenix's Robotic Arm Camera (RAC) on the eighth Martian day of the mission, or Sol 8 (June 2, 2008). The light feature in the middle of the image below the leg is informally called 'Holy Cow.' The dust, shown in the dark foreground, has been blown off of 'Holy Cow' by Phoenix's thruster engines.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  16. Coating on Rock Beside a Young Martian Crater

    NASA Image and Video Library

    2010-03-24

    This image from the microscopic imager on NASA Mars Exploration Rover Opportunity shows details of the coating on a rock called Chocolate Hills, which the rover found and examined at the edge of a young crater called Concepción.

  17. Gaucher cell, photomicrograph (image)

    MedlinePlus

    Gaucher disease is called a "lipid storage disease" where abnormal amounts of lipids called "glycosphingolipids" are stored in special cells called reticuloendothelial cells. Classically, the nucleus is ...

  18. Interference Mitigation Effects on Synthetic Aperture Radar Coherent Data Products

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Musgrove, Cameron

    2014-05-01

    For synthetic aperture radar image products, interference can degrade the quality of the images, while techniques to mitigate the interference also reduce the image quality. Usually the radar system designer will try to balance the amount of mitigation against the amount of interference to optimize the image quality. This may work well for many situations, but coherent data products derived from the image products are more sensitive than the human eye to distortions caused by interference and by the mitigation of interference. This dissertation examines the effect that interference and mitigation of interference have upon coherent data products. An improvement to the standard notch mitigation is introduced, called the equalization notch. Other methods are suggested to mitigate interference while improving the quality of coherent data products over existing methods.

  19. Mercury Transit (Composite Image)

    NASA Image and Video Library

    2017-12-08

    On May 9, 2016, Mercury passed directly between the sun and Earth. This event – which happens about 13 times each century – is called a transit. NASA’s Solar Dynamics Observatory, or SDO, studies the sun 24/7 and captured the entire seven-and-a-half-hour event. This composite image of Mercury’s journey across the sun was created with visible-light images from the Helioseismic and Magnetic Imager on SDO. Image Credit: NASA's Goddard Space Flight Center/SDO/Genna Duberstein

  20. Image Format Conversion to DICOM and Lookup Table Conversion to Presentation Value of the Japanese Society of Radiological Technology (JSRT) Standard Digital Image Database.

    PubMed

    Yanagita, Satoshi; Imahana, Masato; Suwa, Kazuaki; Sugimura, Hitomi; Nishiki, Masayuki

    2016-01-01

    The Japanese Society of Radiological Technology (JSRT) standard digital image database contains many useful cases of chest X-ray images and has been used in many state-of-the-art studies. However, the pixel values of all the images are simply digitized as relative density values using a scanned-film digitizer. As a result, the pixel values are completely different from the standardized display-system input value of Digital Imaging and Communications in Medicine (DICOM), called the presentation value (P-value), which maintains visual consistency when images are observed on displays with different luminance. Therefore, we converted all images of the JSRT standard digital image database to DICOM format, followed by conversion of the pixel values to P-values, using an original program that we developed. Consequently, the JSRT standard digital image database has been modified so that the visual consistency of the images is maintained across displays with different luminance.
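
    A minimal pydicom sketch of such a conversion is given below; the attribute values are assumed (secondary-capture SOP class, 12 bits stored), and the P-value conversion is reduced to a linear rescale placeholder rather than the grayscale-standard-display-function mapping used by the authors.

    import numpy as np
    from pydicom.dataset import FileDataset, FileMetaDataset
    from pydicom.uid import ExplicitVRLittleEndian, generate_uid

    def to_dicom(pixels, path="jsrt_case.dcm"):
        meta = FileMetaDataset()
        meta.MediaStorageSOPClassUID = "1.2.840.10008.5.1.4.1.1.7"  # Secondary Capture
        meta.MediaStorageSOPInstanceUID = generate_uid()
        meta.TransferSyntaxUID = ExplicitVRLittleEndian

        ds = FileDataset(path, {}, file_meta=meta, preamble=b"\x00" * 128)
        ds.is_little_endian, ds.is_implicit_VR = True, False
        ds.SOPClassUID = meta.MediaStorageSOPClassUID
        ds.SOPInstanceUID = meta.MediaStorageSOPInstanceUID
        ds.Modality = "CR"
        ds.Rows, ds.Columns = pixels.shape
        ds.SamplesPerPixel = 1
        ds.PhotometricInterpretation = "MONOCHROME2"
        ds.BitsAllocated, ds.BitsStored, ds.HighBit = 16, 12, 11
        ds.PixelRepresentation = 0

        # Placeholder for the P-value conversion: linearly rescale relative density
        # into a 12-bit output range for a standardized display system.
        p_values = np.interp(pixels, (pixels.min(), pixels.max()), (0, 4095))
        ds.PixelData = p_values.astype(np.uint16).tobytes()
        ds.save_as(path)
        return ds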

  1. X-space MPI: magnetic nanoparticles for safe medical imaging.

    PubMed

    Goodwill, Patrick William; Saritas, Emine Ulku; Croft, Laura Rose; Kim, Tyson N; Krishnan, Kannan M; Schaffer, David V; Conolly, Steven M

    2012-07-24

    One quarter of all iodinated contrast X-ray clinical imaging studies are now performed on Chronic Kidney Disease (CKD) patients. Unfortunately, the iodine contrast agent used in X-ray is often toxic to CKD patients' weak kidneys, leading to significant morbidity and mortality. Hence, we are pioneering a new medical imaging method, called Magnetic Particle Imaging (MPI), to replace X-ray and CT iodinated angiography, especially for CKD patients. MPI uses magnetic nanoparticle contrast agents that are much safer than iodine for CKD patients. MPI already offers superb contrast and extraordinary sensitivity. The iron oxide nanoparticle tracers required for MPI are also used in MRI, and some are already approved for human use, but the contrast agents are far more effective at illuminating blood vessels when used in the MPI modality. We have recently developed a systems theoretic framework for MPI called x-space MPI, which has already dramatically improved the speed and robustness of MPI image reconstruction. X-space MPI has allowed us to optimize the hardware for five MPI scanners. Moreover, x-space MPI provides a powerful framework for optimizing the size and magnetic properties of the iron oxide nanoparticle tracers used in MPI. Currently MPI nanoparticles have diameters in the 10-20 nanometer range, enabling millimeter-scale resolution in small animals. X-space MPI theory predicts that larger nanoparticles could enable up to 250 micrometer resolution imaging, which would represent a major breakthrough in safe imaging for CKD patients.

  2. A novel microaneurysms detection approach based on convolutional neural networks with reinforcement sample learning algorithm.

    PubMed

    Budak, Umit; Şengür, Abdulkadir; Guo, Yanhui; Akbulut, Yaman

    2017-12-01

    Microaneurysms (MAs) are known as early signs of diabetic retinopathy and appear as red lesions in color fundus images. Detection of MAs in fundus images requires highly skilled physicians or eye angiography. Eye angiography is an invasive and expensive procedure; therefore, an automatic detection system to identify MA locations in fundus images is in demand. In this paper, we propose a system to detect MAs in colored fundus images. The proposed method is composed of three stages. In the first stage, a series of pre-processing steps is used to make the input images more suitable for MA detection. To this end, green channel decomposition, Gaussian filtering, median filtering, background determination, and subtraction operations are applied to the input colored fundus images. After pre-processing, a candidate MA extraction procedure is applied to detect potential regions. A five-step procedure is adopted to obtain the potential MA locations. Finally, a deep convolutional neural network (DCNN) with a reinforcement sample learning strategy is used to train the proposed system. The DCNN is trained with color image patches collected from ground-truth MA locations and non-MA locations. We conducted extensive experiments on the ROC dataset to evaluate our proposal. The results are encouraging.
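
    The pre-processing chain described above can be sketched with OpenCV as follows; the kernel sizes are assumed, and the candidate-extraction and DCNN stages are omitted.

    import cv2
    import numpy as np

    def preprocess_fundus(bgr_image):
        """Background-subtracted green-channel image that highlights red lesions."""
        green = bgr_image[:, :, 1]
        smoothed = cv2.GaussianBlur(green, (5, 5), 0)
        denoised = cv2.medianBlur(smoothed, 5)
        # A large median filter approximates the slowly varying retinal background.
        background = cv2.medianBlur(denoised, 31)
        return cv2.subtract(background, denoised)  # MAs are darker than the background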

  3. Optical switch probes and optical lock-in detection (OLID) imaging microscopy: high-contrast fluorescence imaging within living systems.

    PubMed

    Yan, Yuling; Marriott, M Emma; Petchprayoon, Chutima; Marriott, Gerard

    2011-02-01

    Few to single molecule imaging of fluorescent probe molecules can provide information on the distribution, dynamics, interactions and activity of specific fluorescently tagged proteins during cellular processes. Unfortunately, these imaging studies are made challenging in living cells because of fluorescence signals from endogenous cofactors. Moreover, related background signals within multi-cell systems and intact tissue are even higher and reduce signal contrast even for ensemble populations of probe molecules. High-contrast optical imaging within high-background environments will therefore require new ideas on the design of fluorescence probes, and the way their fluorescence signals are generated and analysed to form an image. To this end, in the present review we describe recent studies on a new family of fluorescent probe called optical switches, with descriptions of the mechanisms that underlie their ability to undergo rapid and reversible transitions between two distinct states. Optical manipulation of the fluorescent and non-fluorescent states of an optical switch probe generates a modulated fluorescence signal that can be isolated from a larger unmodulated background by using OLID (optical lock-in detection) techniques. The present review concludes with a discussion on select applications of synthetic and genetically encoded optical switch probes and OLID microscopy for high-contrast imaging of specific proteins and membrane structures within living systems.

  4. Pandora Cluster Seen by Spitzer

    NASA Image and Video Library

    2016-09-28

    This image of galaxy cluster Abell 2744, also called Pandora's Cluster, was taken by the Spitzer Space Telescope. The gravity of this galaxy cluster is strong enough that it acts as a lens to magnify images of more distant background galaxies. This technique is called gravitational lensing. The fuzzy blobs in this Spitzer image are the massive galaxies at the core of this cluster, but astronomers will be poring over the images in search of the faint streaks of light created where the cluster magnifies a distant background galaxy. The cluster is also being studied by NASA's Hubble Space Telescope and Chandra X-Ray Observatory in a collaboration called the Frontier Fields project. In this image, light from Spitzer's infrared channels is colored blue at 3.6 microns and green at 4.5 microns. http://photojournal.jpl.nasa.gov/catalog/PIA20920

  5. Gaucher cell, photomicrograph #2 (image)

    MedlinePlus

    Gaucher disease is called a "lipid storage disease" where abnormal amounts of lipids called "glycosphingolipids" are stored in special cells called reticuloendothelial cells. Classically, the nucleus is ...

  6. Person Recognition System Based on a Combination of Body Images from Visible Light and Thermal Cameras.

    PubMed

    Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung

    2017-03-16

    The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using the information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use images of the human body for recognizing individuals. To overcome the limitations of previous studies on body-based person recognition that use only visible light images for recognition, we use human body images captured by two different kinds of camera, including a visible light camera and a thermal camera. The use of two different kinds of body image helps us to reduce the effects of noise, background, and variation in the appearance of a human body. Second, we apply a state-of-the-art method, the convolutional neural network (CNN), among various available methods for image feature extraction, in order to overcome the limitations of traditional hand-designed image feature extraction methods. Finally, with the image features extracted from body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method is efficient for enhancing recognition accuracy compared to systems that use only visible light or thermal images of the human body.
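
    Once features have been extracted from the visible-light and thermal body images, recognition reduces to a distance comparison against enrolled templates. The sketch below assumes pre-computed feature vectors and a simple weighted score-level fusion of the two modalities; the feature extractor, the fusion weight, and all variable names are assumptions for illustration, not details taken from the paper.

      import numpy as np

      def match(probe_vis, probe_thr, gallery_vis, gallery_thr, w=0.5):
          """Return the index of the closest enrolled identity.

          probe_vis / probe_thr : 1-D feature vectors from the visible and thermal images.
          gallery_vis / gallery_thr : arrays of shape (n_identities, feature_dim).
          w : fusion weight between the two modalities (equal weighting assumed by default).
          """
          d_vis = np.linalg.norm(gallery_vis - probe_vis, axis=1)   # Euclidean distances, visible
          d_thr = np.linalg.norm(gallery_thr - probe_thr, axis=1)   # Euclidean distances, thermal
          fused = w * d_vis + (1.0 - w) * d_thr                     # score-level fusion
          return int(np.argmin(fused)), fused

      # Toy usage with random vectors standing in for real CNN features.
      rng = np.random.default_rng(1)
      gallery_vis = rng.standard_normal((10, 128))
      gallery_thr = rng.standard_normal((10, 128))
      probe_id, scores = match(gallery_vis[3] + 0.05 * rng.standard_normal(128),
                               gallery_thr[3] + 0.05 * rng.standard_normal(128),
                               gallery_vis, gallery_thr)
      print(probe_id)   # expected to recover identity 3 in this toy example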

  7. Continuous blood pressure recordings simultaneously with functional brain imaging: studies of the glymphatic system

    NASA Astrophysics Data System (ADS)

    Zienkiewicz, Aleksandra; Huotari, Niko; Raitamaa, Lauri; Raatikainen, Ville; Ferdinando, Hany; Vihriälä, Erkki; Korhonen, Vesa; Myllylä, Teemu; Kiviniemi, Vesa

    2017-03-01

    The lymph system is responsible for cleaning the tissues of metabolic waste products, soluble proteins, and other harmful fluids. Lymph flow in the body is driven by body movements and muscle contractions. Moreover, it is indirectly dependent on the cardiovascular system, where the heartbeat and blood pressure maintain pressure in the lymphatic channels. Over the last few years, studies have revealed that the brain contains the so-called glymphatic system, which is the counterpart of the systemic lymphatic system in the brain. Similarly, the flow in the glymphatic system is assumed to be mostly driven by physiological pulsations such as cardiovascular pulses. Thus, continuous measurement of blood pressure and heart function simultaneously with functional brain imaging is of great interest, particularly in studies of the glymphatic system. We present our MRI-compatible, optics-based sensing system for continuous blood pressure measurement and show our current results on the effects of blood pressure variations on brain dynamics, with a focus on the glymphatic system. Blood pressure was measured simultaneously with near-infrared spectroscopy (NIRS) combined with an ultrafast functional brain imaging (fMRI) sequence, magnetic resonance encephalography (MREG; 3D whole-brain imaging at a 10 Hz sampling rate).

  8. Lagrangian formulation of irreversible thermodynamics and the second law of thermodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glavatskiy, K. S.

    We show that the equations which describe irreversible evolution of a system can be derived from a variational principle. We suggest a Lagrangian, which depends on the properties of the normal and the so-called “mirror-image” system. The Lagrangian is symmetric in time and therefore compatible with microscopic reversibility. The evolution equations in the normal and mirror-imaged systems are decoupled and describe therefore independent irreversible evolution of each of the systems. The second law of thermodynamics follows from a symmetry of the Lagrangian. Entropy increase in the normal system is balanced by the entropy decrease in the mirror-image system, such that there exists an “integral of evolution” which is a constant. The derivation relies on the property of local equilibrium, which states that the local relations between the thermodynamic quantities in non-equilibrium are the same as in equilibrium.

  9. ARC-1990-A91-2000

    NASA Image and Video Library

    1990-02-12

    Range: 1 million miles (1.63 million km). This image of the planet Venus was taken by NASA's Galileo spacecraft shortly before 10 pm PST when the spacecraft was directly above Venus' equator. This is the 66th of more than 80 Venus images Galileo was programmed to take and record during its Venus flyby. In the picture, cloud features as small as 25 miles (40 km) can be seen. Patches of waves and convective clouds are superimposed on the swirl of the planet's broad weather patterns, marked by the dark chevron at the center. North is at the top. The several ring-shaped shadows are blemishes, not planetary features. The spacecraft imaging system has a 1500-mm, f/8.5 reflecting telescope; the exposure time was 1/40 second. The image was taken through the violet filter (0.41 micron). It was produced by the imaging system in digital form, as a set of numbers representing the brightness perceived in each of the 640,000 picture elements defined on the solid-state plate, called a charge-coupled device or CCD, on which the image was focused.

  10. Red Lesion Detection Using Dynamic Shape Features for Diabetic Retinopathy Screening.

    PubMed

    Seoud, Lama; Hurtut, Thomas; Chelbi, Jihed; Cheriet, Farida; Langlois, J M Pierre

    2016-04-01

    The development of an automatic telemedicine system for computer-aided screening and grading of diabetic retinopathy depends on reliable detection of retinal lesions in fundus images. In this paper, a novel method for automatic detection of both microaneurysms and hemorrhages in color fundus images is described and validated. The main contribution is a new set of shape features, called Dynamic Shape Features, that do not require precise segmentation of the regions to be classified. These features represent the evolution of the shape during image flooding and allow discrimination between lesions and vessel segments. The method is validated per-lesion and per-image using six databases, four of which are publicly available. It proves to be robust with respect to variability in image resolution, quality, and acquisition system. On the Retinopathy Online Challenge's database, the method achieves an FROC score of 0.420, which ranks it fourth. On the Messidor database, when detecting images with diabetic retinopathy, the proposed method achieves an area under the ROC curve of 0.899, comparable to the score of human experts, and it outperforms state-of-the-art approaches.

  11. Automated baseline change detection -- Phases 1 and 2. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byler, E.

    1997-10-31

    The primary objective of this project is to apply robotic and optical sensor technology to the operational inspection of mixed toxic and radioactive waste stored in barrels, using Automated Baseline Change Detection (ABCD), based on image subtraction. Absolute change detection is based on detecting any visible physical changes, regardless of cause, between a current inspection image of a barrel and an archived baseline image of the same barrel. Thus, in addition to rust, the ABCD system can also detect corrosion, leaks, dents, and bulges. The ABCD approach and method rely on precise camera positioning and repositioning relative to the barrel and on feature recognition in images. The ABCD image processing software was installed on a robotic vehicle developed under a related DOE/FETC contract DE-AC21-92MC29112 Intelligent Mobile Sensor System (IMSS) and integrated with the electronics and software. This vehicle was designed especially to navigate in DOE Waste Storage Facilities. Initial system testing was performed at Fernald in June 1996. After some further development and more extensive integration the prototype integrated system was installed and tested at the Radioactive Waste Management Facility (RWMC) at INEEL beginning in April 1997 through the present (November 1997). The integrated system, composed of ABCD imaging software and IMSS mobility base, is called MISS EVE (Mobile Intelligent Sensor System--Environmental Validation Expert). Evaluation of the integrated system in RWMC Building 628, containing approximately 10,000 drums, demonstrated an easy-to-use system with the ability to properly navigate through the facility, image all the defined drums, and process the results into a report delivered to the operator on a GUI interface and on hard copy. Further work is needed to make the brassboard system more operationally robust.
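
    The image-subtraction core of the ABCD idea can be sketched with standard OpenCV calls, assuming the camera has been repositioned precisely enough that the baseline and current images are already approximately registered. File names, thresholds, and the minimum region size below are illustrative assumptions, not values from the project.

      import cv2
      import numpy as np

      # Baseline-change detection sketch: difference the current inspection image
      # against the archived baseline of the same barrel, then keep coherent changes.
      baseline = cv2.imread("barrel_baseline.png", cv2.IMREAD_GRAYSCALE)   # hypothetical archived image
      current = cv2.imread("barrel_current.png", cv2.IMREAD_GRAYSCALE)     # hypothetical new inspection image

      diff = cv2.absdiff(current, baseline)                  # pixel-wise absolute difference
      _, changes = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

      # Remove isolated noise pixels and keep only coherent change regions.
      kernel = np.ones((5, 5), np.uint8)
      changes = cv2.morphologyEx(changes, cv2.MORPH_OPEN, kernel)

      # Any sufficiently large connected region is reported as a physical change.
      n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(changes)
      regions = [i for i in range(1, n_labels) if stats[i, cv2.CC_STAT_AREA] > 50]
      print(f"{len(regions)} candidate change region(s) detected")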

  12. Surveillance and reconnaissance ground system architecture

    NASA Astrophysics Data System (ADS)

    Devambez, Francois

    2001-12-01

    Modern conflicts induce various modes of deployment, depending on the type of conflict, the type of mission, and the phase of the conflict. It is therefore impossible to define fixed-architecture systems for surveillance ground segments. Thales has developed a structure for a ground segment based on the operational functions required and on the definition of modules and networks. These modules are software and hardware modules, including communications and networks. This ground segment is called the MGS (Modular Ground Segment) and is intended for use in airborne reconnaissance systems, surveillance systems, and UAV systems. The main parameters for the definition of a modular ground image exploitation system are: compliance with various operational configurations; easy adaptation to the evolution of these configurations; interoperability with NATO and multinational forces; security; multi-sensor, multi-platform capability; technical modularity; evolutivity; and reduction of life-cycle cost. The general performances of the MGS are presented: type of sensors, acquisition process, exploitation of images, report generation, database management, dissemination, and interface with C4I. The MGS is then described as a set of hardware and software modules, together with their organization to build numerous operational configurations. Architectures range from a minimal configuration intended for a mono-sensor image exploitation system to a full image intelligence center for multilevel exploitation of multiple sensors.

  13. Enhanced optical design by distortion control

    NASA Astrophysics Data System (ADS)

    Thibault, Simon; Gauvin, Jonny; Doucet, Michel; Wang, Min

    2005-09-01

    The control of optical distortion is useful for the design of a variety of optical systems. The most popular is the F-theta lens used in laser scanning systems to produce a constant scan velocity across the image plane. Many authors have designed distortion-control correctors during the last 20 years. Today, many challenging digital imaging systems can use distortion to enhance their imaging capability. A well-known example is a reversed telephoto type: if the barrel distortion is increased instead of being corrected, the result is a so-called fish-eye lens. However, if we control the barrel distortion instead of only increasing it, the resulting system can have enhanced imaging capability. This paper presents some lens designs and real system examples that clearly demonstrate how distortion control can improve system performance, such as resolution. We present an innovative optical system which increases the resolution in the field of view of interest to meet the needs of specific applications. One critical issue when designing with distortion is optimization management. As in most challenging lens designs, automatic optimization is less reliable. Proper management keeps the lens design within the correct range, which is critical for optimal performance (size, cost, manufacturability). Many of the lens designs presented tailor a custom merit function and approach.

  14. An automated mapping satellite system ( Mapsat).

    USGS Publications Warehouse

    Colvocoresses, A.P.

    1982-01-01

    The favorable environment of space permits a satellite to orbit the Earth with very high stability as long as no local perturbing forces are involved. Solid-state linear-array sensors have no moving parts and create no perturbing force on the satellite. Digital data from highly stabilized stereo linear arrays are amenable to simplified processing to produce both planimetric imagery and elevation data. A satellite imaging system, called Mapsat, incorporating this concept has been proposed to produce data from which automated mapping in near real time can be accomplished. Image maps as large as 1:50 000 scale with contours as close as a 20-m interval may be produced from Mapsat data. -from Author

  15. Earth benefits from space life sciences

    NASA Technical Reports Server (NTRS)

    Garshnek, V.; Nicogossian, A. E.; Griffiths, L.

    1988-01-01

    The applications to medicine of various results from space exploration are examined. Improvements have been made in the management of cardiovascular disease, in particular the use of the ultrasonic scanner to image arteries in three dimensions, the use of excimer lasers to disrupt arterial plaques in coronary blood vessels, and the use of advanced electrodes for cardiac monitoring. A bone stiffness analyzer has helped to diagnose osteoporosis and aid in its treatment. An automated light microscope system is used for chromosome analysis, and an X-ray image intensifier called Lixiscope is used in emergency medical care. An advanced portable defibrillator has been developed for the heart, and an insulin delivery system has been derived from space microminiaturization techniques.

  16. A Design Verification of the Parallel Pipelined Image Processings

    NASA Astrophysics Data System (ADS)

    Wasaki, Katsumi; Harai, Toshiaki

    2008-11-01

    This paper presents a case study of the design and verification of a parallel and pipelined image processing unit based on an extended Petri net called a Logical Colored Petri Net (LCPN). This is suitable for Flexible Manufacturing System (FMS) modeling and discussion of structural properties. The LCPN is another family of colored place/transition net (CPN) with the addition of the following features: integer value assignment of marks, representation of firing conditions as formulae based on mark values, and coupling of output procedures with transition firing. Therefore, to study the behavior of a system modeled with this net, we provide a means of searching the reachability tree for markings.

  17. Welcome to Outer Space

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This video gives a brief history of the Jet Propulsion Laboratory, current missions, and what the future may hold. Scenes include various planets in the solar system, robotic exploration of space, discussions on the Hubble Space Telescope, the source of life, and solar winds. This video was narrated by Jodie Foster. Animations include: a close-up image of the Moon; close-up images of the surface of Mars; robotic exploration of Mars; the first mapping assignment of Mars; animated views of Jupiter; animated views of Saturn; and views of a giant storm on Neptune called the Great Dark Spot.

  18. Signal to Noise Studies on Thermographic Data with Fabricated Defects for Defense Structures

    NASA Technical Reports Server (NTRS)

    Zalameda, Joseph N.; Rajic, Nik; Genest, Marc

    2006-01-01

    There is a growing international interest in thermal inspection systems for asset life assessment and management of defense platforms. The efficacy of flash thermography is generally enhanced by applying image processing algorithms to the observations of raw temperature. Improving the defect signal to noise ratio (SNR) is of primary interest to reduce false calls and allow for easier interpretation of a thermal inspection image. Several factors affecting defect SNR were studied such as data compression and reconstruction using principal component analysis and time window processing.
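
    Principal component reconstruction of a flash-thermography sequence and a simple defect SNR can be illustrated as follows. The sketch uses an SVD on a synthetic cooling sequence; the defect and sound-region windows, the retained rank, and the SNR definition are assumptions for illustration rather than the study's exact processing.

      import numpy as np

      # Synthetic thermogram sequence: exponential-like cooling plus a slightly hotter defect patch.
      n_frames, h, w = 100, 64, 64
      t = np.arange(1, n_frames + 1)[:, None, None]
      stack = 100.0 / np.sqrt(t) * np.ones((n_frames, h, w))
      stack[:, 30:38, 30:38] += 3.0 / np.sqrt(t)        # subsurface defect signature (assumed)
      stack += 0.5 * np.random.default_rng(0).standard_normal(stack.shape)

      # PCA (via SVD) on the time-by-pixel matrix; keep a few components and reconstruct.
      X = stack.reshape(n_frames, -1)
      mean = X.mean(axis=0)
      U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
      rank = 5
      recon = ((U[:, :rank] * s[:rank]) @ Vt[:rank] + mean).reshape(n_frames, h, w)

      # Defect SNR on one frame: contrast between defect and sound regions over background noise.
      frame = recon[20]
      defect = frame[30:38, 30:38].mean()
      sound = frame[5:13, 5:13]
      snr = (defect - sound.mean()) / sound.std()
      print(f"defect SNR ~ {snr:.1f}")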

  19. The So-Called Face

    NASA Image and Video Library

    2002-05-21

    The so-called Face on Mars can be seen slightly above center and to the right in this NASA Mars Odyssey image. This 3-km-long knob was first imaged by NASA's Viking spacecraft in the 1970s and to some resembled a face carved into the rocks of Mars.

  20. Harnessing the power of multimedia in offender-based law enforcement information systems

    NASA Astrophysics Data System (ADS)

    Zimmerman, Alan P.

    1997-02-01

    Criminal offenders are increasingly administratively processed by automated multimedia information systems. During this processing, case and offender biographical data, mugshot photos, fingerprints and other valuable information and media are collected by law enforcement officers. As part of their criminal investigations, law enforcement officers are routinely called to solve criminal cases based upon limited evidence . . . evidence increasingly comprised of human DNA, ballistic casings and projectiles, chemical residues, latent fingerprints, surveillance camera facial images and voices. As multimedia systems receive greater use in law enforcement, traditional approaches used to index text data are not appropriate for images and signal data which comprise a multimedia database. Multimedia systems with integrated advanced pattern matching tools will provide law enforcement the ability to effectively locate multimedia information based upon content, without reliance upon the accuracy or completeness of text-based indexing.

  1. Design and realization of an active SAR calibrator for TerraSAR-X

    NASA Astrophysics Data System (ADS)

    Dummer, Georg; Lenz, Rainer; Lutz, Benjamin; Kühl, Markus; Müller-Glaser, Klaus D.; Wiesbeck, Werner

    2005-10-01

    TerraSAR-X is a new earth observing satellite which will be launched in spring 2006. It carries a high resolution X-band SAR sensor. For high image data quality, accurate ground calibration targets are necessary. This paper describes a novel system concept for an active and highly integrated, digitally controlled SAR system calibrator. A total of 16 active transponder and receiver systems and 17 receiver only systems will be fabricated for a calibration campaign. The calibration units serve for absolute radiometric calibration of the SAR image data. Additionally, they are equipped with an extra receiver path for two dimensional satellite antenna pattern recognition. The calibrator is controlled by a dedicated digital Electronic Control Unit (ECU). The different voltages needed by the calibrator and the ECU are provided by the third main unit called Power Management Unit (PMU).

  2. IMDISP - INTERACTIVE IMAGE DISPLAY PROGRAM

    NASA Technical Reports Server (NTRS)

    Martin, M. D.

    1994-01-01

    The Interactive Image Display Program (IMDISP) is an interactive image display utility for the IBM Personal Computer (PC, XT and AT) and compatibles. Until recently, efforts to utilize small computer systems for display and analysis of scientific data have been hampered by the lack of sufficient data storage capacity to accommodate large image arrays. Most planetary images, for example, require nearly a megabyte of storage. The recent development of the "CDROM" (Compact Disk Read-Only Memory) storage technology makes possible the storage of up to 680 megabytes of data on a single 4.72-inch disk. IMDISP was developed for use with the CDROM storage system which is currently being evaluated by the Planetary Data System. The latest disks to be produced by the Planetary Data System are a set of three disks containing all of the images of Uranus acquired by the Voyager spacecraft. The images are in both compressed and uncompressed format. IMDISP can read the uncompressed images directly, but special software is provided to decompress the compressed images, which cannot be processed directly. IMDISP can also display images stored on floppy or hard disks. A digital image is a picture converted to numerical form so that it can be stored and used in a computer. The image is divided into a matrix of small regions called picture elements, or pixels. The rows and columns of pixels are called "lines" and "samples", respectively. Each pixel has a numerical value, or DN (data number) value, quantifying the darkness or brightness of the image at that spot. In total, each pixel has an address (line number, sample number) and a DN value, which is all that the computer needs for processing. DISPLAY commands allow the IMDISP user to display all or part of an image at various positions on the display screen. The user may also zoom in and out from a point on the image defined by the cursor, and may pan around the image. To enable more or all of the original image to be displayed on the screen at once, the image can be "subsampled." For example, if the image were subsampled by a factor of 2, every other pixel from every other line would be displayed, starting from the upper left corner of the image. Any positive integer may be used for subsampling. The user may produce a histogram of an image file, which is a graph showing the number of pixels per DN value, or per range of DN values, for the entire image. IMDISP can also plot the DN value versus pixels along a line between two points on the image. The user can "stretch" or increase the contrast of an image by specifying low and high DN values; all pixels with values lower than the specified "low" will then become black, and all pixels higher than the specified "high" value will become white. Pixels between the low and high values will be evenly shaded between black and white. IMDISP is written in a modular form to make it easy to change it to work with different display devices or on other computers. The code can also be adapted for use in other application programs. There are device-dependent image display modules, general image display subroutines, image I/O routines, and image label and command line parsing routines. The IMDISP system is written in C-language (94%) and Assembler (6%). It was implemented on an IBM PC with the MS DOS 3.21 operating system. IMDISP has a memory requirement of about 142k bytes. IMDISP was developed in 1989 and is a copyrighted work with all copyright vested in NASA. (A brief illustrative sketch of the contrast stretch and subsampling operations follows this record.)
Additional planetary images can be obtained from the National Space Science Data Center at (301) 286-6695.
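
    A small numpy sketch of the two display operations described in this record, the linear contrast stretch between user-chosen low and high DN values and integer-factor subsampling, is given below. It is an illustration only, not IMDISP's C/Assembler code, and the DN limits and factor are arbitrary examples.

      import numpy as np

      def stretch(image, low, high):
          """Linear contrast stretch: DN <= low maps to black, DN >= high maps to white."""
          out = (image.astype(np.float64) - low) / float(high - low)
          return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)

      def subsample(image, factor):
          """Keep every 'factor'-th pixel of every 'factor'-th line, from the upper-left corner."""
          return image[::factor, ::factor]

      # Example: an 8-bit image with a narrow DN range, stretched and subsampled by 2.
      dn = np.random.default_rng(0).integers(90, 140, size=(512, 512), dtype=np.uint8)
      display = subsample(stretch(dn, low=100, high=130), factor=2)
      print(display.shape, display.min(), display.max())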

  3. A New Presentation and Exploration of Human Cerebral Vasculature Correlated with Surface and Sectional Neuroanatomy

    ERIC Educational Resources Information Center

    Nowinski, Wieslaw L.; Thirunavuukarasuu, Arumugam; Volkau, Ihar; Marchenko, Yevgen; Aminah, Bivi; Gelas, Arnaud; Huang, Su; Lee, Looi Chow; Liu, Jimin; Ng, Ting Ting; Nowinska, Natalia G.; Qian, Guoyu Yu; Puspitasari, Fiftarina; Runge, Val M.

    2009-01-01

    The increasing complexity of human body models enabled by advances in diagnostic imaging, computing, and growing knowledge calls for the development of a new generation of systems for intelligent exploration of these models. Here, we introduce a novel paradigm for the exploration of digital body models illustrating cerebral vasculature. It enables…

  4. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors

    PubMed Central

    Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung

    2018-01-01

    Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases. PMID:29495417
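
    The hybrid-feature idea, deep features concatenated with local binary pattern histograms and classified by an SVM, can be sketched as follows. The deep extractor is left as a placeholder callable because no specific network architecture is reproduced here, and scikit-image's single-scale local_binary_pattern stands in for the paper's multi-level LBP; all data and names are illustrative.

      import numpy as np
      from skimage.feature import local_binary_pattern
      from sklearn.svm import SVC

      def lbp_histogram(gray, P=8, R=1):
          """Uniform LBP histogram of a grayscale face crop (single scale only)."""
          codes = local_binary_pattern(gray, P, R, method="uniform")
          hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
          return hist

      def hybrid_features(gray, deep_extractor):
          """Concatenate deep features and handcrafted LBP features into one vector."""
          deep = deep_extractor(gray)            # placeholder: any CNN embedding function
          return np.concatenate([deep, lbp_histogram(gray)])

      # Toy usage with a random projection standing in for a real CNN.
      rng = np.random.default_rng(0)
      proj = rng.standard_normal((64 * 64, 32))
      fake_cnn = lambda g: g.reshape(-1).astype(np.float64) @ proj / 1e4

      real = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]     # stand-ins for real faces
      attack = [rng.integers(0, 128, (64, 64), dtype=np.uint8) for _ in range(20)]   # stand-ins for attack images
      X = np.array([hybrid_features(g, fake_cnn) for g in real + attack])
      y = np.array([0] * 20 + [1] * 20)
      clf = SVC(kernel="rbf").fit(X, y)           # real vs. presentation-attack classifier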

  5. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors.

    PubMed

    Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung

    2018-02-26

    Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases.

  6. A study of glasses-type color CGH using a color filter considering reduction of blurring

    NASA Astrophysics Data System (ADS)

    Iwami, Saki; Sakamoto, Yuji

    2009-02-01

    We have developed a glasses-type color computer-generated hologram (CGH) by using a color filter. The proposed glasses consist of two "lenses" made of overlapping holograms and color filters. The holograms, which are calculated to reconstruct images in each primary color, are divided into small areas, which we call cells, and superimposed on one hologram. In the same way, the colors of the filter correspond to the hologram cells. We can configure it very simply without a complex optical system, and the configuration yields a small and lightweight system suitable for glasses. When the cell is small enough, the colors are mixed and reconstructed color images are observed. In addition, the color expression of the reconstructed images improves, too. However, using small cells blurs the reconstructed images for the following reasons: (1) interference between cells because of the correlation between them, and (2) reduction of resolution caused by the size of the cell hologram. We are investigating how to make a hologram that reconstructs high-resolution color images without ghost images. In this paper, we discuss (1) the details of the proposed glasses-type color CGH, (2) the appropriate cell size for the eye system, (3) the effects of cell shape on the reconstructed images, and (4) a new method to reduce the blurring of the images.

  7. Analysis of dual tree M-band wavelet transform based features for brain image classification.

    PubMed

    Ayalapogu, Ratna Raju; Pabboju, Suresh; Ramisetty, Rajeswara Rao

    2018-04-29

    The most complex organ in the human body is the brain. The unrestrained growth of cells in the brain is called a brain tumor. The cause of a brain tumor is still unknown and the survival rate is lower than for other types of cancer. Hence, early detection is very important for proper treatment. In this study, an efficient computer-aided diagnosis (CAD) system is presented for brain image classification by analyzing MRI of the brain. At first, the MRI brain images of normal and abnormal categories are modeled by using the statistical features of the dual tree m-band wavelet transform (DTMBWT). A maximum-margin classifier, the support vector machine (SVM), is then used for the classification and validated with a k-fold approach. Results show that the system provides promising results on a repository of molecular brain neoplasia data (REMBRANDT) with 97.5% accuracy using 4th-level statistical features of the DTMBWT. In view of the experimental results, we conclude that the system gives satisfactory performance for brain image classification. © 2018 International Society for Magnetic Resonance in Medicine.
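
    The general pipeline of wavelet-domain statistical features followed by an SVM with k-fold validation can be sketched with common libraries. PyWavelets does not provide the dual tree M-band transform used in the paper, so a standard 2-D discrete wavelet transform is used below purely as a stand-in; the chosen statistics (mean, standard deviation, energy per subband) and the toy data are assumptions.

      import numpy as np
      import pywt
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      def wavelet_features(image, wavelet="db4", level=4):
          """Statistical features (mean, std, energy) from each subband of a 2-D DWT."""
          coeffs = pywt.wavedec2(image, wavelet, level=level)
          feats = []
          for c in [coeffs[0]] + [band for lvl in coeffs[1:] for band in lvl]:
              c = np.asarray(c)
              feats += [c.mean(), c.std(), np.sum(c ** 2) / c.size]
          return np.array(feats)

      # Toy dataset standing in for normal / abnormal MRI slices.
      rng = np.random.default_rng(0)
      normal = [rng.random((128, 128)) for _ in range(15)]
      abnormal = [rng.random((128, 128)) + 0.3 * rng.random((128, 128)) ** 2 for _ in range(15)]
      X = np.array([wavelet_features(img) for img in normal + abnormal])
      y = np.array([0] * 15 + [1] * 15)

      scores = cross_val_score(SVC(kernel="rbf", gamma="scale"), X, y, cv=5)   # k-fold validation
      print("mean CV accuracy:", scores.mean())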

  8. Visual Image Sensor Organ Replacement

    NASA Technical Reports Server (NTRS)

    Maluf, David A.

    2014-01-01

    This innovation is a system that augments human vision through a technique called "Sensing Super-position" using a Visual Instrument Sensory Organ Replacement (VISOR) device. The VISOR device translates visual and other sensors (i.e., thermal) into sounds to enable very difficult sensing tasks. Three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. Because the human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns, the translation of images into sounds reduces the risk of accidentally filtering out important clues. The VISOR device was developed to augment the current state-of-the-art head-mounted (helmet) display systems. It provides the ability to sense beyond the human visible light range, to increase human sensing resolution, to use wider angle visual perception, and to improve the ability to sense distances. It also allows compensation for movement by the human or changes in the scene being viewed.
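
    A generic image-to-sound mapping of the kind described, brightness as a function of row and column turned into an audio signal as a function of frequency and time, can be sketched as a column-by-column scan in which rows drive sinusoid frequencies and pixel brightness sets their amplitude. This is an illustrative sonification sketch, not the VISOR device's actual mapping; the sweep duration, frequency range, and row-to-pitch convention are assumptions.

      import numpy as np

      def image_to_audio(image, fs=22050, sweep_s=1.0, f_lo=200.0, f_hi=5000.0):
          """Scan the image left to right; each row drives one sinusoid, brightness sets its amplitude."""
          h, w = image.shape
          freqs = np.linspace(f_hi, f_lo, h)          # top rows -> high pitches (assumed convention)
          samples_per_col = int(fs * sweep_s / w)
          t = np.arange(samples_per_col) / fs
          audio = []
          for col in range(w):
              amps = image[:, col].astype(np.float64)
              amps /= (amps.max() + 1e-9)
              tone = (amps[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
              audio.append(tone)
          audio = np.concatenate(audio)
          return audio / (np.abs(audio).max() + 1e-9)  # normalized mono signal

      # Example: sonify a synthetic 64x64 brightness map over a one-second sweep.
      img = np.zeros((64, 64)); img[10:20, 30:50] = 1.0
      signal = image_to_audio(img)
      print(signal.shape)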

  9. Darkfield Adapter for Whole Slide Imaging: Adapting a Darkfield Internal Reflection Illumination System to Extend WSI Applications

    PubMed Central

    Kawano, Yoshihiro; Higgins, Christopher; Yamamoto, Yasuhito; Nyhus, Julie; Bernard, Amy; Dong, Hong-Wei; Karten, Harvey J.; Schilling, Tobias

    2013-01-01

    We present a new method for whole slide darkfield imaging. Whole Slide Imaging (WSI), also sometimes called virtual slide or virtual microscopy technology, produces images that simultaneously provide high resolution and a wide field of observation that can encompass the entire section, extending far beyond any single field of view. For example, a brain slice can be imaged so that both overall morphology and individual neuronal detail can be seen. We extended the capabilities of traditional whole slide systems and developed a prototype system for darkfield internal reflection illumination (DIRI). Our darkfield system uses an ultra-thin light-emitting diode (LED) light source to illuminate slide specimens from the edge of the slide. We used a new type of side illumination, a variation on the internal reflection method, to illuminate the specimen and create a darkfield image. This system has four main advantages over traditional darkfield: (1) no oil condenser is required for high-resolution imaging; (2) there is less scatter from dust and dirt on the slide specimen; (3) there is less halo, providing a more natural darkfield contrast image; and (4) the motorized system produces darkfield, brightfield and fluorescence images. The WSI method sometimes allows us to image using fewer stains. For instance, diaminobenzidine (DAB) and fluorescent staining are helpful tools for observing protein localization and volume in tissues. However, these methods usually require counter-staining in order to visualize tissue structure, limiting the accuracy of localization of labeled cells within the complex multiple regions of typical neurohistological preparations. Darkfield imaging works on the basis of light scattering from refractive index mismatches in the sample. It is a label-free method of producing contrast in a sample. We propose that adapting darkfield imaging to WSI is very useful, particularly when researchers require additional structural information without the use of further staining. PMID:23520500

  10. A Control System and Streaming DAQ Platform with Image-Based Trigger for X-ray Imaging

    NASA Astrophysics Data System (ADS)

    Stevanovic, Uros; Caselle, Michele; Cecilia, Angelica; Chilingaryan, Suren; Farago, Tomas; Gasilov, Sergey; Herth, Armin; Kopmann, Andreas; Vogelgesang, Matthias; Balzer, Matthias; Baumbach, Tilo; Weber, Marc

    2015-06-01

    High-speed X-ray imaging applications play a crucial role in non-destructive investigations of the dynamics in material science and biology. On-line data analysis is necessary for quality assurance and data-driven feedback, leading to a more efficient use of beam time and increased data quality. In this article we present a smart camera platform with embedded Field Programmable Gate Array (FPGA) processing that is able to stream and process data continuously in real time. The setup consists of a Complementary Metal-Oxide-Semiconductor (CMOS) sensor, an FPGA readout card, and a readout computer. It is seamlessly integrated into a new custom experiment control system called Concert, which provides a more efficient way of operating a beamline by integrating device control, experiment process control, and data analysis. The potential of the embedded processing is demonstrated by implementing an image-based trigger. It records the temporal evolution of physical events with increased speed while maintaining the full field of view. The complete data acquisition system, with Concert and the smart camera platform, was successfully integrated and used for fast X-ray imaging experiments at KIT's synchrotron radiation facility ANKA.
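
    An image-based trigger decides, inside the data path, whether a frame contains an event worth keeping. The software sketch below uses a simple frame-difference activity metric on a frame stream; the FPGA implementation, the actual trigger criterion, and the threshold value are not reproduced from the article and are assumptions for illustration.

      import numpy as np

      def image_trigger(frames, threshold=6.0):
          """Yield (index, frame) whenever the mean absolute difference to the
          previous frame exceeds the threshold (i.e., the scene changed abruptly)."""
          prev = None
          for i, frame in enumerate(frames):
              f = frame.astype(np.float32)
              if prev is not None and float(np.mean(np.abs(f - prev))) > threshold:
                  yield i, frame
              prev = f

      # Toy stream: mostly static noise with a transient bright event in frames 20-24.
      rng = np.random.default_rng(0)
      stream = [rng.integers(0, 10, (64, 64), dtype=np.uint8) for _ in range(50)]
      for k in range(20, 25):
          stream[k][20:40, 20:40] += 100
      kept = [i for i, _ in image_trigger(stream)]
      print("triggered frames:", kept)   # the difference metric spikes when the event appears (20) and disappears (25)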

  11. Scheimpflug with computational imaging to extend the depth of field of iris recognition systems

    NASA Astrophysics Data System (ADS)

    Sinharoy, Indranil

    Despite the enormous success of iris recognition in close-range and well-regulated spaces for biometric authentication, it has hitherto failed to gain wide-scale adoption in less controlled, public environments. The problem arises from a limitation in imaging called the depth of field (DOF): the limited range of distances outside which subjects appear blurry in the image. The loss of spatial details in the iris image outside the small DOF limits the iris image capture to a small volume, the capture volume. Existing techniques to extend the capture volume are usually expensive, computationally intensive, or afflicted by noise. Is there a way to combine the classical Scheimpflug principle with modern computational imaging techniques to extend the capture volume? The solution we found is surprisingly simple; yet it provides several key advantages over existing approaches. Our method, called Angular Focus Stacking (AFS), consists of capturing a set of images while rotating the lens, followed by registration, and blending of the in-focus regions from the images in the stack. The theoretical underpinnings of AFS arose from a pair of new and general imaging models we developed for Scheimpflug imaging that directly incorporates the pupil parameters. The model revealed that we could register the images in the stack analytically if we pivot the lens at the center of its entrance pupil, rendering the registration process exact. Additionally, we found that a specific lens design further reduces the complexity of image registration, making AFS suitable for real-time performance. We have demonstrated up to an order of magnitude improvement in the axial capture volume over conventional image capture without sacrificing optical resolution and signal-to-noise ratio. The total time required for capturing the set of images for AFS is less than the time needed for a single-exposure, conventional image for the same DOF and brightness level. The net reduction in capture time can significantly relax the constraints on subject movement during iris acquisition, making it less restrictive.
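
    The blending step of AFS, keeping the sharpest contribution at every pixel of a registered stack, is essentially focus stacking. The sketch below assumes an already registered list of grayscale images and uses a smoothed absolute Laplacian as the focus measure; it illustrates only the blending idea, not the Scheimpflug capture geometry or the analytic registration developed in the dissertation.

      import cv2
      import numpy as np

      def focus_stack(registered_gray_images, blur=9):
          """Blend a registered focal stack by picking, per pixel, the image with the highest local sharpness."""
          stack = np.stack([img.astype(np.float32) for img in registered_gray_images])
          # Focus measure: smoothed absolute Laplacian response of each image.
          sharpness = np.stack([
              cv2.GaussianBlur(np.abs(cv2.Laplacian(img, cv2.CV_32F)), (blur, blur), 0)
              for img in stack
          ])
          best = np.argmax(sharpness, axis=0)          # index of the sharpest image per pixel
          rows, cols = np.indices(best.shape)
          return stack[best, rows, cols].astype(np.uint8)

      # Usage (hypothetical file names): images captured at different lens rotations, already registered.
      # imgs = [cv2.imread(f, cv2.IMREAD_GRAYSCALE) for f in ["afs_0.png", "afs_1.png", "afs_2.png"]]
      # fused = focus_stack(imgs)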

  12. Near-real-time biplanar fluoroscopic tracking system for the video tumor fighter

    NASA Astrophysics Data System (ADS)

    Lawson, Michael A.; Wika, Kevin G.; Gilles, George T.; Ritter, Rogers C.

    1991-06-01

    We have developed software capable of the three-dimensional tracking of objects in the brain volume, and the subsequent overlaying of an image of the object onto previously obtained MR or CT scans. This software has been developed for use with the Magnetic Stereotaxis System (MSS), also called the 'Video Tumor Fighter' (VTF). The software was written for a Sun 4/110 SPARC workstation with an ANDROX ICS-400 image processing card installed to manage this task. At present, the system uses input from two orthogonally-oriented, visible-light cameras and a simulated scene to determine the three-dimensional position of the object of interest. The coordinates are then transformed into MR or CT coordinates and an image of the object is displayed in the appropriate intersecting MR slice on a computer screen. This paper describes the tracking algorithm and discusses how it was implemented in software. The system's hardware is also described. The limitations of the present system are discussed and plans for incorporating bi-planar, x-ray fluoroscopy are presented.
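
    With two orthogonally oriented cameras, the 3-D position of a tracked object can be recovered by combining the 2-D detections from each view. The sketch below assumes idealized, calibrated views that share a common vertical axis and a single scale factor, which is a simplification of the real camera model; all numbers are illustrative.

      import numpy as np

      def triangulate_orthogonal(front_uv, side_uv, mm_per_px=0.5):
          """Combine detections from two ideal orthogonal views into a 3-D point (millimetres).

          front_uv : (u, v) pixel coordinates in the front camera  -> gives x and z.
          side_uv  : (u, v) pixel coordinates in the side camera   -> gives y and z.
          The shared vertical coordinate z is averaged between the two views as a consistency check.
          """
          xf, zf = front_uv
          ys, zs = side_uv
          x = xf * mm_per_px
          y = ys * mm_per_px
          z = 0.5 * (zf + zs) * mm_per_px
          residual = abs(zf - zs) * mm_per_px   # disagreement between views on the shared axis
          return np.array([x, y, z]), residual

      point, err = triangulate_orthogonal((120.0, 240.0), (310.0, 238.0))
      print(point, "vertical disagreement [mm]:", err)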

  13. Evaluation of glymphatic system activity with the diffusion MR technique: diffusion tensor image analysis along the perivascular space (DTI-ALPS) in Alzheimer's disease cases.

    PubMed

    Taoka, Toshiaki; Masutani, Yoshitaka; Kawai, Hisashi; Nakane, Toshiki; Matsuoka, Kiwamu; Yasuno, Fumihiko; Kishimoto, Toshifumi; Naganawa, Shinji

    2017-04-01

    The activity of the glymphatic system is impaired in animal models of Alzheimer's disease (AD). We evaluated the activity of the human glymphatic system in cases of AD with a diffusion-based technique called diffusion tensor image analysis along the perivascular space (DTI-ALPS). Diffusion tensor images were acquired to calculate diffusivities in the x, y, and z axes of the plane of the lateral ventricle body in 31 patients. We evaluated the diffusivity along the perivascular spaces as well as along the projection fibers and association fibers separately, to acquire an index of diffusivity along the perivascular space (the ALPS index), and correlated it with the Mini-Mental State Examination (MMSE) score. We found a significant negative correlation between the diffusivities along the projection fibers and the association fibers. We also observed a significant positive correlation between the diffusivity along the perivascular spaces, expressed as the ALPS index, and the MMSE score, indicating lower water diffusivity along the perivascular space in relation to AD severity. The activity of the glymphatic system may thus be evaluated with diffusion images. Lower diffusivity along the perivascular space on DTI-ALPS seems to reflect impairment of the glymphatic system. This method may be useful for evaluating the activity of the glymphatic system.
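
    For readers unfamiliar with the index, a commonly cited formulation of the ALPS index (stated here as an assumption; it should be checked against the original paper before use) is the ratio of the mean diffusivity along the perivascular direction, i.e. the x-axis diffusivity in the projection- and association-fiber regions, to the mean of the diffusivities perpendicular to each fiber population, i.e. the y-axis diffusivity in the projection area and the z-axis diffusivity in the association area.

      def alps_index(d_xx_proj, d_xx_assoc, d_yy_proj, d_zz_assoc):
          """ALPS index from region-of-interest diffusivities (assumed formulation).

          Ratio of the mean diffusivity along the perivascular (x) direction in the
          projection- and association-fiber ROIs to the mean of the diffusivities
          perpendicular to each fiber population. Values near 1 suggest little
          preferential water movement along the perivascular space; higher values suggest more.
          """
          return ((d_xx_proj + d_xx_assoc) / 2.0) / ((d_yy_proj + d_zz_assoc) / 2.0)

      # Illustrative (made-up) diffusivities in units of 1e-3 mm^2/s:
      print(alps_index(1.1, 1.0, 0.7, 0.8))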

  14. Are reconstruction filters necessary?

    NASA Astrophysics Data System (ADS)

    Holst, Gerald C.

    2006-05-01

    Shannon's sampling theorem (also called the Shannon-Whittaker-Kotel'nikov theorem) was developed for the digitization and reconstruction of sinusoids. Strict adherence is required when frequency preservation is important. Three conditions must be met to satisfy the sampling theorem: (1) the signal must be band-limited, (2) the digitizer must sample the signal at an adequate rate, and (3) a low-pass reconstruction filter must be present. In an imaging system, the signal is band-limited by the optics. For most imaging systems, the signal is not adequately sampled, resulting in aliasing. While the aliasing seems excessive mathematically, it does not significantly affect the perceived image. The human visual system detects intensity differences, spatial differences (shapes), and color differences. The eye is less sensitive to frequency effects and therefore sampling artifacts have become quite acceptable. Indeed, we love our television even though it is significantly undersampled. The reconstruction filter, although absolutely essential, is rarely discussed. It converts digital data (which we cannot see) into a viewable analog signal. There are several reconstruction filters: electronic low-pass filters, the display media (monitor, laser printer), and your eye. These are often used in combination to create a perceived continuous image. Each filter modifies the MTF in a unique manner. Therefore image quality and system performance depend upon the reconstruction filter(s) used. The selection depends upon the application.

  15. Vision communications based on LED array and imaging sensor

    NASA Astrophysics Data System (ADS)

    Yoo, Jong-Ho; Jung, Sung-Yoon

    2012-11-01

    In this paper, we propose a brand new communication concept, called "vision communication," based on an LED array and an image sensor. This system consists of an LED array as the transmitter and a digital device that includes an image sensor, such as a CCD or CMOS sensor, as the receiver. In order to transmit data, the proposed communication scheme simultaneously uses digital image processing and an optical wireless communication scheme. Therefore, a cognitive communication scheme is possible with the help of the recognition techniques used in vision systems. To increase the data rate, our scheme can use an LED array consisting of several multi-spectral LEDs. Because each arranged LED can emit a multi-spectral optical signal, such as visible, infrared, or ultraviolet light, an increase in the data rate is possible, similar to the WDM and MIMO techniques used in traditional optical and wireless communications. In addition, this multi-spectral capability also makes it possible to avoid optical noise in the communication environment. In our vision communication scheme, the data packet is composed of sync data and information data. The sync data are used to detect the transmitter area and calibrate the distorted image snapshots obtained by the image sensor. By matching the optical rate of the LED array to the frame rate (frames per second) of the image sensor, we can decode the information data included in each image snapshot based on image processing and optical wireless communication techniques. Through experiments on a practical test-bed system, we confirm the feasibility of the proposed vision communication based on an LED array and an image sensor.

  16. Spotlight-Mode Synthetic Aperture Radar Processing for High-Resolution Lunar Mapping

    NASA Technical Reports Server (NTRS)

    Harcke, Leif; Weintraub, Lawrence; Yun, Sang-Ho; Dickinson, Richard; Gurrola, Eric; Hensley, Scott; Marechal, Nicholas

    2010-01-01

    During the 2008-2009 year, the Goldstone Solar System Radar was upgraded to support radar mapping of the lunar poles at 4 m resolution. The finer resolution of the new system and the accompanying migration through resolution cells called for spotlight, rather than delay-Doppler, imaging techniques. A new pre-processing system supports fast-time Doppler removal and motion compensation to a point. Two spotlight imaging techniques which compensate for phase errors due to i) out of focus-plane motion of the radar and ii) local topography, have been implemented and tested. One is based on the polar format algorithm followed by a unique autofocus technique, the other is a full bistatic time-domain backprojection technique. The processing system yields imagery of the specified resolution. Products enabled by this new system include topographic mapping through radar interferometry, and change detection techniques (amplitude and coherent change) for geolocation of the NASA LCROSS mission impact site.

  17. Hadoop-based implementation of processing medical diagnostic records for visual patient system

    NASA Astrophysics Data System (ADS)

    Yang, Yuanyuan; Shi, Liehang; Xie, Zhe; Zhang, Jianguo

    2018-03-01

    We introduced the Visual Patient (VP) concept and a method to visually represent and index patient imaging diagnostic records (IDR) at last year's SPIE Medical Imaging conference (SPIE MI 2017); it enables a doctor to review a large amount of a patient's IDR in a limited appointed time slot. In this presentation, we present a new approach to designing the data processing architecture of the VP system (VPS) to acquire, process, and store various kinds of IDR and build a VP instance for each patient in a hospital environment, based on a Hadoop distributed processing structure. We designed this system architecture, called the Medical Information Processing System (MIPS), as a combination of the Hadoop batch processing architecture and the Storm stream processing architecture. The MIPS implements efficient parallel processing of various kinds of clinical data coming from disparate hospital information systems such as PACS, RIS, LIS, and HIS.

  18. Theoretical foundations of spatially-variant mathematical morphology part ii: gray-level images.

    PubMed

    Bouaynaya, Nidhal; Schonfeld, Dan

    2008-05-01

    In this paper, we develop a spatially-variant (SV) mathematical morphology theory for gray-level signals and images in the Euclidean space. The proposed theory preserves the geometrical concept of the structuring function, which provides the foundation of classical morphology and is essential in signal and image processing applications. We define the basic SV gray-level morphological operators (i.e., SV gray-level erosion, dilation, opening, and closing) and investigate their properties. We demonstrate the ubiquity of SV gray-level morphological systems by deriving a kernel representation for a large class of systems, called V-systems, in terms of the basic SV graylevel morphological operators. A V-system is defined to be a gray-level operator, which is invariant under gray-level (vertical) translations. Particular attention is focused on the class of SV flat gray-level operators. The kernel representation for increasing V-systems is a generalization of Maragos' kernel representation for increasing and translation-invariant function-processing systems. A representation of V-systems in terms of their kernel elements is established for increasing and upper-semi-continuous V-systems. This representation unifies a large class of spatially-variant linear and non-linear systems under the same mathematical framework. Finally, simulation results show the potential power of the general theory of gray-level spatially-variant mathematical morphology in several image analysis and computer vision applications.
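
    A concrete way to see what a flat spatially-variant erosion does is to let the structuring element change from pixel to pixel. The sketch below is a direct, unoptimized numpy implementation with a location-dependent square window; it illustrates the SV concept only and does not reproduce the kernel-representation theory developed in the paper.

      import numpy as np

      def sv_erosion(image, radius_map):
          """Flat spatially-variant erosion: at each pixel, take the minimum over a square
          window whose half-width is given by radius_map at that pixel."""
          h, w = image.shape
          out = np.empty_like(image)
          for i in range(h):
              for j in range(w):
                  r = int(radius_map[i, j])
                  i0, i1 = max(0, i - r), min(h, i + r + 1)
                  j0, j1 = max(0, j - r), min(w, j + r + 1)
                  out[i, j] = image[i0:i1, j0:j1].min()
          return out

      # Example: erosion that is gentle (r=1) on the left half of the image and aggressive (r=3) on the right.
      rng = np.random.default_rng(0)
      img = rng.integers(0, 256, (64, 64)).astype(np.uint8)
      radii = np.tile(np.where(np.arange(64) < 32, 1, 3), (64, 1))
      eroded = sv_erosion(img, radii)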

  19. Automatic DNA Diagnosis for 1D Gel Electrophoresis Images using Bio-image Processing Technique.

    PubMed

    Intarapanich, Apichart; Kaewkamnerd, Saowaluck; Shaw, Philip J; Ukosakit, Kittipat; Tragoonrung, Somvong; Tongsima, Sissades

    2015-01-01

    DNA gel electrophoresis is a molecular biology technique for separating different sizes of DNA fragments. Applications of DNA gel electrophoresis include DNA fingerprinting (genetic diagnosis), size estimation of DNA, and DNA separation for Southern blotting. Accurate interpretation of DNA banding patterns from electrophoretic images can be laborious and error prone when a large number of bands are interrogated manually. Although many bio-imaging techniques have been proposed, none of them can fully automate the typing of DNA owing to the complexities of migration patterns typically obtained. We developed an image-processing tool that automatically calls genotypes from DNA gel electrophoresis images. The image processing workflow comprises three main steps: 1) lane segmentation, 2) extraction of DNA bands and 3) band genotyping classification. The tool was originally intended to facilitate large-scale genotyping analysis of sugarcane cultivars. We tested the proposed tool on 10 gel images (433 cultivars) obtained from polyacrylamide gel electrophoresis (PAGE) of PCR amplicons for detecting intron length polymorphisms (ILP) on one locus of the sugarcanes. These gel images demonstrated many challenges in automated lane/band segmentation in image processing including lane distortion, band deformity, high degree of noise in the background, and bands that are very close together (doublets). Using the proposed bio-imaging workflow, lanes and DNA bands contained within are properly segmented, even for adjacent bands with aberrant migration that cannot be separated by conventional techniques. The software, called GELect, automatically performs genotype calling on each lane by comparing with an all-banding reference, which was created by clustering the existing bands into the non-redundant set of reference bands. The automated genotype calling results were verified by independent manual typing by molecular biologists. This work presents an automated genotyping tool from DNA gel electrophoresis images, called GELect, which was written in Java and made available through the imageJ framework. With a novel automated image processing workflow, the tool can accurately segment lanes from a gel matrix, intelligently extract distorted and even doublet bands that are difficult to identify by existing image processing tools. Consequently, genotyping from DNA gel electrophoresis can be performed automatically allowing users to efficiently conduct large scale DNA fingerprinting via DNA gel electrophoresis. The software is freely available from http://www.biotec.or.th/gi/tools/gelect.

  20. Linearization of an annular image by using a diffractive optic

    NASA Technical Reports Server (NTRS)

    Matthys, Donald R.

    1996-01-01

    The goal for this project is to develop the algorithms for fracturing the zones defined by the mapping transformation, and to actually produce the binary optic in an appropriate setup. In 1984 a side-viewing panoramic viewing system was patented, consisting of a single piece of glass with spherical surfaces which produces a 360 degree view of the region surrounding the lens which extends about 25 degrees in front of and 20 degrees behind the lens. The system not only produces images of good quality, it is also afocal, i.e., images stay in focus for objects located right next to the lens as well as those located far from the lens. The lens produced a panoramic view in an annular shaped image, and so the lens was called a PAL (panoramic annular lens). When applying traditional measurements to PAL images, it is found advantageous to linearize the annular image. This can easily be done with a computer and such a linearized image can be produced within about 40 seconds on current microcomputers. However, this process requires a frame-grabber and a computer, and is not real-time. Therefore, it was decided to try to perform this linearization optically by using a diffractive optic.
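
    The linearization the author aims to perform optically is, in software, a polar-to-rectangular resampling of the annular image. The sketch below shows the digital version using OpenCV's remap; the annulus center and radii are assumptions that would normally come from a calibration step.

      import cv2
      import numpy as np

      def unwrap_annulus(image, center, r_inner, r_outer, out_w=1024):
          """Resample an annular (PAL) image into a rectangular panorama.

          Columns of the output sweep the 360-degree azimuth; rows run from the inner
          to the outer radius of the annulus.
          """
          out_h = int(r_outer - r_inner)
          theta = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
          radius = np.linspace(r_inner, r_outer, out_h)
          rr, tt = np.meshgrid(radius, theta, indexing="ij")
          map_x = (center[0] + rr * np.cos(tt)).astype(np.float32)
          map_y = (center[1] + rr * np.sin(tt)).astype(np.float32)
          return cv2.remap(image, map_x, map_y, cv2.INTER_LINEAR)

      # Usage with an assumed calibration (center and radii in pixels):
      # pal = cv2.imread("pal_frame.png")
      # panorama = unwrap_annulus(pal, center=(512, 512), r_inner=150, r_outer=480)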

  1. Using 3D dosimetry to quantify the Electron Return Effect (ERE) for MR-image-guided radiation therapy (MR-IGRT) applications

    NASA Astrophysics Data System (ADS)

    Lee, Hannah J.; Choi, Gye Won; Alqathami, Mamdooh; Kadbi, Mo; Ibbott, Geoffrey

    2017-05-01

    Image-guided radiation therapy (IGRT) using computed tomography (CT), cone-beam CT, MV on-board imager (OBI), and kV OBI systems have allowed for more accurate patient positioning prior to each treatment fraction. While these imaging modalities provide excellent bony anatomy image quality, MRI surpasses them in soft tissue image contrast for better visualization and tracking of soft tissue tumors with no additional radiation dose to the patient. A pre-clinical integrated 1.5 T magnetic resonance imaging and 7 MV linear accelerator system (MR-linac) allows for real-time tracking of soft tissues and adaptive treatment planning prior to each treatment fraction. However, due to the presence of a strong magnetic field from the MR component, there is a three dimensional (3D) change in dose deposited by the secondary electrons. Especially at nonhomogeneous anatomical sites with tissues of very different densities, dose enhancements and reductions can occur due to the Lorentz force influencing the trajectories of secondary electrons. These dose changes at tissue interfaces are called the electron return effect or ERE. This study investigated the ERE using 3D dosimeters.

  2. New public dataset for spotting patterns in medieval document images

    NASA Astrophysics Data System (ADS)

    En, Sovann; Nicolas, Stéphane; Petitjean, Caroline; Jurie, Frédéric; Heutte, Laurent

    2017-01-01

    With advances in technology, a large part of our cultural heritage is becoming digitally available. In particular, in the field of historical document image analysis, there is now a growing need for indexing and data mining tools, thus allowing us to spot and retrieve the occurrences of an object of interest, called a pattern, in a large database of document images. Patterns may present some variability in terms of color, shape, or context, making the spotting of patterns a challenging task. Pattern spotting is a relatively new field of research, still hampered by the lack of available annotated resources. We present a new publicly available dataset named DocExplore dedicated to spotting patterns in historical document images. The dataset contains 1500 images and 1464 queries, and allows the evaluation of two tasks: image retrieval and pattern localization. A standardized benchmark protocol along with ad hoc metrics is provided for a fair comparison of the submitted approaches. We also provide some first results obtained with our baseline system on this new dataset, which show that there is room for improvement and that should encourage researchers of the document image analysis community to design new systems and submit improved results.

  3. Automated image analysis of uterine cervical images

    NASA Astrophysics Data System (ADS)

    Li, Wenjing; Gu, Jia; Ferris, Daron; Poirson, Allen

    2007-03-01

    Cervical cancer is the second most common cancer among women worldwide and the leading cause of cancer mortality of women in developing countries. If detected early and treated adequately, cervical cancer can be virtually prevented. Cervical precursor lesions and invasive cancer exhibit certain morphologic features that can be identified during a visual inspection exam. Digital imaging technologies allow us to assist the physician with a Computer-Aided Diagnosis (CAD) system. In colposcopy, epithelium that turns white after application of acetic acid is called acetowhite epithelium. Acetowhite epithelium is one of the major diagnostic features observed in detecting cancer and pre-cancerous regions. Automatic extraction of acetowhite regions from cervical images has been a challenging task due to specular reflection, various illumination conditions, and most importantly, large intra-patient variation. This paper presents a multi-step acetowhite region detection system to analyze the acetowhite lesions in cervical images automatically. First, the system calibrates the color of the cervical images to be independent of screening devices. Second, the anatomy of the uterine cervix is analyzed in terms of cervix region, external os region, columnar region, and squamous region. Third, the squamous region is further analyzed and subregions based on three levels of acetowhite are identified. The extracted acetowhite regions are accompanied by color scores to indicate the different levels of acetowhite. The system has been evaluated on data from 40 human subjects and demonstrates high correlation with experts' annotations.

  4. Classification Comparisons Between Compact Polarimetric and Quad-Pol SAR Imagery

    NASA Astrophysics Data System (ADS)

    Souissi, Boularbah; Doulgeris, Anthony P.; Eltoft, Torbjørn

    2015-04-01

    Recent interest in dual-pol SAR systems has led to a novel approach, the so-called compact polarimetric (CP) imaging mode, which attempts to reconstruct fully polarimetric information based on a few simple assumptions. In this work, the CP image is simulated from the full quad-pol (QP) image. We present here an initial comparison of the polarimetric information content of the QP and CP imaging modes. The analysis of multi-look polarimetric covariance matrix data uses an automated statistical clustering method based upon the expectation maximization (EM) algorithm for finite mixture modeling, using the complex Wishart probability density function. Our results show some differences in characteristics between the QP and CP modes. The classification is demonstrated using E-SAR and Radarsat-2 polarimetric SAR images acquired over DLR Oberpfaffenhofen in Germany and Algiers in Algeria, respectively.
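
    For reference, the per-class likelihood behind such Wishart-based EM clustering is usually the complex Wishart density of the L-look sample covariance matrix C given the class-average covariance matrix Sigma (standard notation, not quoted from the abstract):

```latex
p(\mathbf{C}\mid\boldsymbol{\Sigma})
  = \frac{L^{Lq}\,|\mathbf{C}|^{\,L-q}\,
          \exp\!\bigl(-L\,\operatorname{tr}(\boldsymbol{\Sigma}^{-1}\mathbf{C})\bigr)}
         {K(L,q)\,|\boldsymbol{\Sigma}|^{\,L}},
\qquad
K(L,q) = \pi^{\,q(q-1)/2}\,\Gamma(L)\cdots\Gamma(L-q+1)
```

    where q is the polarimetric channel dimension (3 for quad-pol, 2 for compact-pol).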

  5. Informatics in radiology (infoRAD): navigating the fifth dimension: innovative interface for multidimensional multimodality image navigation.

    PubMed

    Rosset, Antoine; Spadola, Luca; Pysher, Lance; Ratib, Osman

    2006-01-01

    The display and interpretation of images obtained by combining three-dimensional data acquired with two different modalities (eg, positron emission tomography and computed tomography) in the same subject require complex software tools that allow the user to adjust the image parameters. With the current fast imaging systems, it is possible to acquire dynamic images of the beating heart, which add a fourth dimension of visual information-the temporal dimension. Moreover, images acquired at different points during the transit of a contrast agent or during different functional phases add a fifth dimension-functional data. To facilitate real-time image navigation in the resultant large multidimensional image data sets, the authors developed a Digital Imaging and Communications in Medicine-compliant software program. The open-source software, called OsiriX, allows the user to navigate through multidimensional image series while adjusting the blending of images from different modalities, image contrast and intensity, and the rate of cine display of dynamic images. The software is available for free download at http://homepage.mac.com/rossetantoine/osirix. (c) RSNA, 2006.

  6. Visualization of GPM Standard Products at the Precipitation Processing System (PPS)

    NASA Astrophysics Data System (ADS)

    Kelley, O.

    2010-12-01

    Many of the standard data products for the Global Precipitation Measurement (GPM) constellation of satellites will be generated at and distributed by the Precipitation Processing System (PPS) at NASA Goddard. PPS will provide several means to visualize these data products. These visualization tools will be used internally by PPS analysts to investigate potential anomalies in the data files, and these tools will also be made available to researchers. Currently, a free data viewer called THOR, the Tool for High-resolution Observation Review, can be downloaded and installed on Linux, Windows, and Mac OS X systems. THOR can display swath and grid products, and to a limited degree, the low-level data packets that the satellite itself transmits to the ground system. Observations collected since the 1997 launch of the Tropical Rainfall Measuring Mission (TRMM) satellite can be downloaded from the PPS FTP archive, and in the future, many of the GPM standard products will also be available from this FTP site. To provide easy access to this 80-terabyte and growing archive, PPS currently operates an on-line ordering tool called STORM that provides geographic and time searches, browse-image display, and the ability to order user-specified subsets of standard data files. Prior to the anticipated 2013 launch of the GPM core satellite, PPS will expand its visualization tools by integrating an on-line version of THOR within STORM to provide on-the-fly image creation of any portion of an archived data file at a user-specified degree of magnification. PPS will also provide OpenDAP access to the data archive and OGC WMS image creation of both swath and gridded data products. During the GPM era, PPS will continue to provide real-time globally gridded 3-hour rainfall estimates to the public in a compact binary format (3B42RT) and in a GIS format (2-byte TIFF images + ESRI WorldFiles).

  7. Multiplexed capillary electrophoresis system

    DOEpatents

    Yeung, Edward S.; Li, Qingbo; Lu, Xiandan

    1998-04-21

    The invention provides a side-entry optical excitation geometry for use in a multiplexed capillary electrophoresis system. A charge-injection device is optically coupled to capillaries in the array such that the interior of a capillary is imaged onto only one pixel. In Sanger-type 4-label DNA sequencing reactions, nucleotide identification ("base calling") is improved by using two long-pass filters to split fluorescence emission into two emission channels. A binary poly(ethyleneoxide) matrix is used in the electrophoretic separations.

  8. Multiplexed capillary electrophoresis system

    DOEpatents

    Yeung, Edward S.; Chang, Huan-Tsang; Fung, Eliza N.; Li, Qingbo; Lu, Xiandan

    1996-12-10

    The invention provides a side-entry optical excitation geometry for use in a multiplexed capillary electrophoresis system. A charge-injection device is optically coupled to capillaries in the array such that the interior of a capillary is imaged onto only one pixel. In Sanger-type 4-label DNA sequencing reactions, nucleotide identification ("base calling") is improved by using two long-pass filters to split fluorescence emission into two emission channels. A binary poly(ethyleneoxide) matrix is used in the electrophoretic separations.

  9. Multiplexed capillary electrophoresis system

    DOEpatents

    Yeung, E.S.; Li, Q.; Lu, X.

    1998-04-21

    The invention provides a side-entry optical excitation geometry for use in a multiplexed capillary electrophoresis system. A charge-injection device is optically coupled to capillaries in the array such that the interior of a capillary is imaged onto only one pixel. In Sanger-type 4-label DNA sequencing reactions, nucleotide identification (``base calling``) is improved by using two long-pass filters to split fluorescence emission into two emission channels. A binary poly(ethyleneoxide) matrix is used in the electrophoretic separations. 19 figs.

  10. Multiplexed capillary electrophoresis system

    DOEpatents

    Yeung, E.S.; Chang, H.T.; Fung, E.N.; Li, Q.; Lu, X.

    1996-12-10

    The invention provides a side-entry optical excitation geometry for use in a multiplexed capillary electrophoresis system. A charge-injection device is optically coupled to capillaries in the array such that the interior of a capillary is imaged onto only one pixel. In Sanger-type 4-label DNA sequencing reactions, nucleotide identification (``base calling``) is improved by using two long-pass filters to split fluorescence emission into two emission channels. A binary poly(ethyleneoxide) matrix is used in the electrophoretic separations. 19 figs.

  11. Two-tier tissue decomposition for histopathological image representation and classification.

    PubMed

    Gultekin, Tunc; Koyuncu, Can Fahrettin; Sokmensuer, Cenk; Gunduz-Demir, Cigdem

    2015-01-01

    In digital pathology, devising effective image representations is crucial to designing robust automated diagnosis systems. To this end, many studies have proposed to develop object-based representations, instead of directly using image pixels, since a histopathological image may contain a considerable amount of noise, typically at the pixel level. These previous studies mostly employ color information to define their objects, which approximately represent histological tissue components in an image, and then use the spatial distribution of these objects for image representation and classification. Thus, object definition has a direct effect on the way the image is represented, which in turn affects classification accuracy. In this paper, our aim is to design a classification system for histopathological images. Towards this end, we present a new model for effective representation of these images that will be used by the classification system. The contributions of this model are twofold. First, it introduces a new two-tier tissue decomposition method for defining a set of multityped objects in an image. Different from previous studies, these objects are defined by combining texture, shape, and size information, and they may correspond to individual histological tissue components as well as local tissue subregions of different characteristics. As its second contribution, it defines a new metric, which we call dominant blob scale, to characterize the shape and size of an object with a single scalar value. Our experiments on colon tissue images reveal that this new object definition and characterization provides a distinguishing representation of normal and cancerous histopathological images, which yields more accurate classification results than its counterparts.

  12. Identification of Buried Objects in GPR Using Amplitude Modulated Signals Extracted from Multiresolution Monogenic Signal Analysis

    PubMed Central

    Qiao, Lihong; Qin, Yao; Ren, Xiaozhen; Wang, Qifu

    2015-01-01

    It is necessary to detect the target reflections in ground penetrating radar (GPR) images so that buried metal targets can be identified successfully. In order to accurately locate buried metal objects, a novel method called the Multiresolution Monogenic Signal Analysis (MMSA) system is applied to ground penetrating radar (GPR) images. This process includes four steps. First, the image is decomposed by the MMSA to extract the amplitude component of the B-scan image. The amplitude component enhances the target reflection and suppresses the direct wave and the reflective wave to a large extent. Then we use a region-of-interest extraction method to separate genuine target reflections from spurious reflections by calculating the normalized variance of the amplitude component. To find the apexes of the targets, a Hough transform is used in the restricted area. Finally, we estimate the horizontal and vertical position of the target. In terms of buried object detection, the proposed system exhibits promising performance, as shown in the experimental results. PMID:26690146
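
    The first two steps lend themselves to a compact sketch: the amplitude envelope of the monogenic signal can be computed with a Riesz transform in the frequency domain, and a normalized-variance map can then highlight candidate target regions. The code below is a single-scale illustration only (the paper's multiresolution decomposition and subsequent Hough step are not reproduced), and the window size is an arbitrary placeholder.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def monogenic_amplitude(img):
    """Amplitude of the (single-scale) monogenic signal, obtained from the
    two Riesz-transform components computed in the frequency domain."""
    rows, cols = img.shape
    u = np.fft.fftfreq(cols)[None, :]
    v = np.fft.fftfreq(rows)[:, None]
    radius = np.sqrt(u**2 + v**2)
    radius[0, 0] = 1.0                           # avoid division by zero at DC
    F = np.fft.fft2(img - img.mean())
    r1 = np.real(np.fft.ifft2(F * (-1j * u / radius)))
    r2 = np.real(np.fft.ifft2(F * (-1j * v / radius)))
    even = np.real(np.fft.ifft2(F))
    return np.sqrt(even**2 + r1**2 + r2**2)

def normalized_variance(amplitude, size=15):
    """Local variance normalized by the squared local mean, used here as a
    simple region-of-interest score."""
    m = uniform_filter(amplitude, size)
    m2 = uniform_filter(amplitude * amplitude, size)
    return np.maximum(m2 - m * m, 0.0) / (m * m + 1e-12)
```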

  13. Gravity assisted recovery of liquid xenon at large mass flow rates

    NASA Astrophysics Data System (ADS)

    Virone, L.; Acounis, S.; Beaupère, N.; Beney, J.-L.; Bert, J.; Bouvier, S.; Briend, P.; Butterworth, J.; Carlier, T.; Chérel, M.; Crespi, P.; Cussonneau, J.-P.; Diglio, S.; Manzano, L. Gallego; Giovagnoli, D.; Gossiaux, P.-B.; Kraeber-Bodéré, F.; Ray, P. Le; Lefèvre, F.; Marty, P.; Masbou, J.; Morteau, E.; Picard, G.; Roy, D.; Staempflin, M.; Stutzmann, J.-S.; Visvikis, D.; Xing, Y.; Zhu, Y.; Thers, D.

    2018-06-01

    We report on a liquid xenon gravity assisted recovery method for nuclear medical imaging applications. The experimental setup consists of an elevated detector enclosed in a cryostat connected to a storage tank called ReStoX. Both elements are part of XEMIS2 (XEnon Medical Imaging System): an innovative medical imaging facility for pre-clinical research that uses pure liquid xenon as detection medium. Tests based on liquid xenon transfer from the detector to ReStoX have been successfully performed showing that an unprecedented mass flow rate close to 1 ton per hour can be reached. This promising achievement as well as future areas of improvement will be discussed in this paper.

  14. Image Processor Electronics (IPE): The High-Performance Computing System for NASA SWIFT Mission

    NASA Technical Reports Server (NTRS)

    Nguyen, Quang H.; Settles, Beverly A.

    2003-01-01

    Gamma Ray Bursts (GRBs) are believed to be the most powerful explosions that have occurred in the Universe since the Big Bang and are a mystery to the scientific community. Swift, a NASA mission that includes international participation, was designed and built in preparation for a 2003 launch to help determine the origin of Gamma Ray Bursts. Locating the position in the sky where a burst originates requires intensive computing, because the duration of a GRB can range from a few milliseconds up to approximately a minute. The instrument data system must constantly accept multiple images representing large regions of the sky that are generated by sixteen gamma ray detectors operating in parallel. It then must process the received images very quickly in order to determine the existence of possible gamma ray bursts and their locations. The high-performance instrument data computing system that accomplishes this is called the Image Processor Electronics (IPE). The IPE was designed, built, and tested by NASA Goddard Space Flight Center (GSFC) in order to meet these challenging requirements. The IPE is a small-size, low-power, high-performance computing system for space applications. This paper addresses the system implementation and the system hardware architecture of the IPE. The paper concludes with the IPE system performance that was measured during end-to-end system testing.

  15. Pointing and control system performance and improvement strategies for the SOFIA Airborne Telescope

    NASA Astrophysics Data System (ADS)

    Graf, Friederike; Reinacher, Andreas; Jakob, Holger; Lampater, Ulrich; Pfueller, Enrico; Wiedemann, Manuel; Wolf, Jürgen; Fasoulas, Stefanos

    2016-07-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) has already successfully conducted over 300 flights. In its early science phase, SOFIA's pointing requirements, and especially the image jitter requirement of less than 1 arcsec rms, have driven the design of the control system. Since the first observation flights, the image jitter has been gradually reduced by various control mechanisms. During smooth flight conditions, the current pointing and control system allows us to achieve the standards set for early science on SOFIA. However, the increasing demands on the image size require an image jitter of less than 0.4 arcsec rms during light turbulence to reach SOFIA's scientific goals. The major portion of the remaining image motion is caused by deformation and excitation of the telescope structure over a wide range of frequencies due to aircraft motion and aerodynamic and aeroacoustic effects. Therefore the so-called Flexible Body Compensation (FBC) system is used: a set of fixed-gain filters that counteract the structural bending and deformation. Thorough testing of the current system under various flight conditions has revealed a variety of opportunities for further improvements. The currently applied filters were developed based solely on an FEM analysis. By incorporating the in-flight measurements into a simulation and optimization, an improved fixed-gain compensation method was identified. This paper will discuss promising results from various jitter measurements recorded with sampling frequencies of up to 400 Hz using the fast imaging tracking camera.

  16. A Spot Reminder System for the Visually Impaired Based on a Smartphone Camera

    PubMed Central

    Takizawa, Hotaka; Orita, Kazunori; Aoyagi, Mayumi; Ezaki, Nobuo; Mizuno, Shinji

    2017-01-01

    The present paper proposes a smartphone-camera-based system to assist visually impaired users in recalling their memories related to important locations, called spots, that they visited. The memories are recorded as voice memos, which can be played back when the users return to the spots. Spot-to-spot correspondence is determined by image matching based on the scale invariant feature transform. The main contribution of the proposed system is to allow visually impaired users to associate arbitrary voice memos with arbitrary spots. The users do not need any special devices or systems except smartphones and do not need to remember the spots where the voice memos were recorded. In addition, the proposed system can identify spots in environments that are inaccessible to the global positioning system. The proposed system has been evaluated by two experiments: image matching tests and a user study. The experimental results suggested the effectiveness of the system to help visually impaired individuals, including blind individuals, recall information about regularly-visited spots. PMID:28165403

  17. A Spot Reminder System for the Visually Impaired Based on a Smartphone Camera.

    PubMed

    Takizawa, Hotaka; Orita, Kazunori; Aoyagi, Mayumi; Ezaki, Nobuo; Mizuno, Shinji

    2017-02-04

    The present paper proposes a smartphone-camera-based system to assist visually impaired users in recalling their memories related to important locations, called spots, that they visited. The memories are recorded as voice memos, which can be played back when the users return to the spots. Spot-to-spot correspondence is determined by image matching based on the scale invariant feature transform. The main contribution of the proposed system is to allow visually impaired users to associate arbitrary voice memos with arbitrary spots. The users do not need any special devices or systems except smartphones and do not need to remember the spots where the voice memos were recorded. In addition, the proposed system can identify spots in environments that are inaccessible to the global positioning system. The proposed system has been evaluated by two experiments: image matching tests and a user study. The experimental results suggested the effectiveness of the system to help visually impaired individuals, including blind individuals, recall information about regularly-visited spots.
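
    Spot-to-spot matching of the kind described above can be sketched with OpenCV's SIFT implementation and Lowe's ratio test; the thresholds below are illustrative guesses, and the decision rule is not the paper's exact one.

```python
import cv2

def same_spot(query_path, enrolled_path, ratio=0.75, min_matches=12):
    """Return True if two photographs appear to show the same spot, based on
    SIFT keypoint matching with the ratio test."""
    sift = cv2.SIFT_create()
    img1 = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(enrolled_path, cv2.IMREAD_GRAYSCALE)
    _, des1 = sift.detectAndCompute(img1, None)
    _, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]
    return len(good) >= min_matches
```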

  18. High sensitive volumetric imaging of renal microcirculation in vivo using ultrahigh sensitive optical microangiography

    NASA Astrophysics Data System (ADS)

    Zhi, Zhongwei; Jung, Yeongri; Jia, Yali; An, Lin; Wang, Ruikang K.

    2011-03-01

    We present a non-invasive, label-free imaging technique called Ultrahigh Sensitive Optical Microangiography (UHS-OMAG) for highly sensitive volumetric imaging of the renal microcirculation. The UHS-OMAG imaging system is based on spectral domain optical coherence tomography (SD-OCT) and uses a CCD camera with a 47,000 A-line/s scan rate to achieve an imaging speed of 150 frames per second, so that a 3D image is acquired in only ~7 seconds. The technique, capable of measuring slow blood flow down to 4 um/s, is sensitive enough to image capillary networks, such as the peritubular capillaries and glomeruli within the renal cortex. We show the superior performance of UHS-OMAG in providing depth-resolved volumetric images of the rich renal microcirculation. We monitored the dynamics of the renal microvasculature during renal ischemia and reperfusion. An obvious reduction of renal microvascular density due to renal ischemia was visualized and quantitatively analyzed. This technique can be helpful for the assessment of chronic kidney disease (CKD), which is associated with abnormal microvasculature.

  19. Radiation dose reduction in digital breast tomosynthesis (DBT) by means of deep-learning-based supervised image processing

    NASA Astrophysics Data System (ADS)

    Liu, Junchi; Zarshenas, Amin; Qadir, Ammar; Wei, Zheng; Yang, Limin; Fajardo, Laurie; Suzuki, Kenji

    2018-03-01

    To reduce cumulative radiation exposure and lifetime risks for radiation-induced cancer from breast cancer screening, we developed a deep-learning-based supervised image-processing technique called neural network convolution (NNC) for radiation dose reduction in DBT. NNC employed patch-based neural network regression in a convolutional manner to convert lower-dose (LD) to higher-dose (HD) tomosynthesis images. We trained our NNC with quarter-dose (25% of the standard dose: 12 mAs at 32 kVp) raw projection images and corresponding "teaching" higher-dose (HD) images (200% of the standard dose: 99 mAs at 32 kVp) of a breast cadaver phantom acquired with a DBT system (Selenia Dimensions, Hologic, CA). Once trained, NNC no longer requires HD images. It converts new LD images to images that look like HD images; thus the term "virtual" HD (VHD) images. We reconstructed tomosynthesis slices on a research DBT system. To determine a dose reduction rate, we acquired 4 studies of another test phantom at 4 different radiation doses (1.35, 2.7, 4.04, and 5.39 mGy entrance dose). The Structural SIMilarity (SSIM) index was used to evaluate the image quality. For testing, we collected half-dose (50% of the standard dose: 32+/-14 mAs at 33+/-5 kVp) and full-dose (standard dose: 68+/-23 mAs at 33+/-5 kVp) images of 10 clinical cases with the DBT system at University of Iowa Hospitals and Clinics. NNC converted half-dose DBT images of the 10 clinical cases to VHD DBT images that were equivalent to full-dose DBT images. Our cadaver phantom experiment demonstrated a 79% dose reduction.
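
    The SSIM comparison used for image-quality evaluation can be reproduced in a few lines with scikit-image; this is a generic sketch, not the authors' evaluation code, and it assumes the two slices are already co-registered arrays.

```python
from skimage.metrics import structural_similarity

def ssim_against_full_dose(virtual_hd, full_dose):
    """SSIM index between a denoised (virtual HD) slice and the matching
    full-dose slice, returned as a single scalar quality figure."""
    data_range = float(full_dose.max() - full_dose.min())
    return structural_similarity(virtual_hd, full_dose, data_range=data_range)
```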

  20. Self-Illuminating 64Cu-Doped CdSe/ZnS Nanocrystals for in Vivo Tumor Imaging

    PubMed Central

    2015-01-01

    Construction of self-illuminating semiconducting nanocrystals, also called quantum dots (QDs), has attracted much attention recently due to their potential as highly sensitive optical probes for biological imaging applications. Here we prepared a self-illuminating QD system by doping positron-emitting radionuclide 64Cu into CdSe/ZnS core/shell QDs via a cation-exchange reaction. The 64Cu-doped CdSe/ZnS QDs exhibit efficient Cerenkov resonance energy transfer (CRET). The signal of 64Cu can accurately reflect the biodistribution of the QDs during circulation with no dissociation of 64Cu from the nanoparticles. We also explored this system for in vivo tumor imaging. This nanoprobe showed high tumor-targeting ability in a U87MG glioblastoma xenograft model (12.7% ID/g at 17 h time point) and feasibility for in vivo luminescence imaging of tumor in the absence of excitation light. The availability of these self-illuminating integrated QDs provides an accurate and convenient tool for in vivo tumor imaging and detection. PMID:24401138

  1. Planetary exploration with optical imaging systems review: what is the best sensor for future missions

    NASA Astrophysics Data System (ADS)

    Michaelis, H.; Behnke, T.; Bredthauer, R.; Holland, A.; Janesick, J.; Jaumann, R.; Keller, H. U.; Magrin, D.; Greggio, D.; Mottola, Stefano; Thomas, N.; Smith, P.

    2017-11-01

    When we talk about planetary exploration missions, most people spontaneously think of fascinating images from other planets or close-up pictures of small planetary bodies such as asteroids and comets. Such images come in most cases from VIS/NIR imaging systems, simply called 'cameras', which were typically built by institutes in collaboration with industry. Until now, they have nearly all been based on silicon CCD sensors, they have filter wheels, and they often have power-hungry electronics. The question is what the challenges for future missions are and what can be done to improve performance and scientific output. The exploration of Mars is ongoing. NASA and ESA are planning future missions to the outer planets, such as to the icy Jovian moons. Exploration of asteroids and comets is the focus of several recent and future missions. Furthermore, the detection and characterization of exoplanets will keep us busy for generations to come. The paper discusses the challenges and visions of imaging sensors for future planetary exploration missions. The focus of the talk is monolithic VIS/NIR detectors.

  2. Whole-body ring-shaped confocal photoacoustic computed tomography of small animals in vivo.

    PubMed

    Xia, Jun; Chatni, Muhammad R; Maslov, Konstantin; Guo, Zijian; Wang, Kun; Anastasio, Mark; Wang, Lihong V

    2012-05-01

    We report a novel small-animal whole-body imaging system called ring-shaped confocal photoacoustic computed tomography (RC-PACT). RC-PACT is based on a confocal design of free-space ring-shaped light illumination and 512-element full-ring ultrasonic array signal detection. The free-space light illumination maximizes the light delivery efficiency, and the full-ring signal detection ensures a full two-dimensional view aperture for accurate image reconstruction. Using cylindrically focused array elements, RC-PACT can image a thin cross section with 0.10 to 0.25 mm in-plane resolutions and 1.6 s/frame acquisition time. By translating the mouse along the elevational direction, RC-PACT provides a series of cross-sectional images of the brain, liver, kidneys, and bladder.

  3. Whole-body ring-shaped confocal photoacoustic computed tomography of small animals in vivo

    NASA Astrophysics Data System (ADS)

    Xia, Jun; Chatni, Muhammad R.; Maslov, Konstantin; Guo, Zijian; Wang, Kun; Anastasio, Mark; Wang, Lihong V.

    2012-05-01

    We report a novel small-animal whole-body imaging system called ring-shaped confocal photoacoustic computed tomography (RC-PACT). RC-PACT is based on a confocal design of free-space ring-shaped light illumination and 512-element full-ring ultrasonic array signal detection. The free-space light illumination maximizes the light delivery efficiency, and the full-ring signal detection ensures a full two-dimensional view aperture for accurate image reconstruction. Using cylindrically focused array elements, RC-PACT can image a thin cross section with 0.10 to 0.25 mm in-plane resolutions and 1.6 s/frame acquisition time. By translating the mouse along the elevational direction, RC-PACT provides a series of cross-sectional images of the brain, liver, kidneys, and bladder.

  4. Fpack and Funpack Utilities for FITS Image Compression and Uncompression

    NASA Technical Reports Server (NTRS)

    Pence, W.

    2008-01-01

    Fpack is a utility program for optimally compressing images in the FITS (Flexible Image Transport System) data format (see http://fits.gsfc.nasa.gov). The associated funpack program restores the compressed image file back to its original state (as long as a lossless compression algorithm is used). These programs may be run from the host operating system command line and are analogous to the gzip and gunzip utility programs except that they are optimized for FITS format images and offer a wider choice of compression algorithms. Fpack stores the compressed image using the FITS tiled image compression convention (see http://fits.gsfc.nasa.gov/fits_registry.html). Under this convention, the image is first divided into a user-configurable grid of rectangular tiles, and then each tile is individually compressed and stored in a variable-length array column in a FITS binary table. By default, fpack usually adopts a row-by-row tiling pattern. The FITS image header keywords remain uncompressed for fast access by FITS reading and writing software. The tiled image compression convention can in principle support any number of different compression algorithms. The fpack and funpack utilities call on routines in the CFITSIO library (http://heasarc.gsfc.nasa.gov/fitsio) to perform the actual compression and uncompression of the FITS images, which currently supports the GZIP, Rice, H-compress, and PLIO IRAF pixel list compression algorithms.
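
    The tiled-image compression convention that fpack implements is also exposed by common FITS libraries; the sketch below uses astropy (an assumption on my part, not something the abstract mentions) to write and read back a Rice-compressed image under that convention.

```python
import numpy as np
from astropy.io import fits

# Write an image using the FITS tiled-image compression convention with the
# Rice algorithm, then read it back and confirm the round trip is lossless.
image = np.random.poisson(100, size=(1024, 1024)).astype(np.int32)
fits.CompImageHDU(data=image, compression_type='RICE_1').writeto('compressed.fits', overwrite=True)

with fits.open('compressed.fits') as hdul:
    restored = hdul[1].data                  # the compressed HDU is extension 1
    assert np.array_equal(restored, image)   # Rice is lossless for integer data
```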

  5. Person Recognition System Based on a Combination of Body Images from Visible Light and Thermal Cameras

    PubMed Central

    Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung

    2017-01-01

    The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use images of the human body for recognizing individuals. To overcome the limitations of previous studies on body-based person recognition that use only visible light images for recognition, we use human body images captured by two different kinds of camera: a visible light camera and a thermal camera. The use of two different kinds of body image helps us to reduce the effects of noise, background, and variation in the appearance of a human body. Second, we apply a state-of-the-art method, the convolutional neural network (CNN), for image feature extraction in order to overcome the limitations of traditional hand-designed image feature extraction methods. Finally, with the image features extracted from the body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method is efficient for enhancing recognition accuracy compared to systems that use only visible light or thermal images of the human body. PMID:28300783
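
    The final matching step described above (fusing features from the two cameras and comparing them against enrolled samples by distance) can be sketched as follows; the CNN feature extractor itself is not reproduced, and the feature vectors are assumed to be precomputed NumPy arrays.

```python
import numpy as np

def identify(visible_feat, thermal_feat, gallery):
    """Rank enrolled identities by Euclidean distance to the fused probe
    features; `gallery` maps person IDs to (visible, thermal) feature pairs."""
    probe = np.concatenate([visible_feat, thermal_feat])
    distances = {pid: np.linalg.norm(probe - np.concatenate(pair))
                 for pid, pair in gallery.items()}
    return min(distances, key=distances.get), distances
```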

  6. Design and development of a very high resolution thermal imager

    NASA Astrophysics Data System (ADS)

    Kuerbitz, Gunther; Duchateau, Ruediger

    1998-10-01

    The design goal of this project was to develop a thermal imaging system with ultimate geometrical resolution without sacrificing thermal sensitivity. It was necessary to fulfil the criteria for a future advanced video standard, the so-called HDTV (High Definition TeleVision) standard. The thermal imaging system is a parallel scanning system working in the 7-11 micrometer spectral region. The detector for that system has to have 576 x n (n = number of TDI stages) detector elements, taking into account twofold interlace. It must be carefully optimized in terms of range performance and size of the optics entrance pupil as well as producibility and yield. This was done in close interaction with the detector manufacturer. The 16:9 aspect ratio of the HDTV standard, together with the high number of 1920 pixels/line, imposes high demands on the scanner design in terms of scan efficiency and linearity. As an advanced second-generation thermal imager, the system has an internal thermal reference. The electronics is fully digitized and comprises circuits for Non-Uniformity Correction (NUC), scan conversion, electronic zoom, auto gain and level, edge enhancement, up/down and left/right reversal, etc.

  7. Quasi-real-time telemedical checkup system for x-ray examination of UGI tract based on high-speed network

    NASA Astrophysics Data System (ADS)

    Sakano, Toshikazu; Yamaguchi, Takahiro; Fujii, Tatsuya; Okumura, Akira; Furukawa, Isao; Ono, Sadayasu; Suzuki, Junji; Ando, Yutaka; Kohda, Ehiichi; Sugino, Yoshinori; Okada, Yoshiyuki; Amaki, Sachi

    2000-05-01

    We constructed a high-speed medical information network testbed, which is one of the largest testbeds in Japan, and applied it to practical medical checkups for the first time. The constructed testbed, which we call IMPACT, consists of a Super-High Definition Imaging system, a video conferencing system, a remote database system, and a 6-135 Mbps ATM network. The interconnected facilities include the School of Medicine at Keio University, a company's clinic, and an NTT R&D center, all in and around Tokyo. We applied IMPACT to the mass screening of the upper gastrointestinal (UGI) tract at the clinic. All 5419 radiographic images acquired at the clinic for 523 employees were digitized (2048 x 1698 x 12 bits) and transferred to a remote database at NTT. We then selected about 50 images from five patients and sent them to nine radiological specialists at Keio University. The processing, which includes film digitization, image data transfer, and database registration, took 574 seconds per patient on average. The average reading time at Keio Univ. was 207 seconds. The overall processing time was estimated to be 781 seconds per patient. From these experimental results, we conclude that quasi-real-time telemedical checkups are possible with our prototype system.

  8. Clinical skin imaging using color spatial frequency domain imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Yang, Bin; Lesicko, John; Moy, Austin J.; Reichenberg, Jason; Tunnell, James W.

    2016-02-01

    Skin diseases are typically associated with underlying biochemical and structural changes compared with normal tissue, which alter the optical properties of the skin lesions, such as tissue absorption and scattering. Although widely used in dermatology clinics, conventional dermatoscopes do not have the ability to selectively image tissue absorption and scattering, which may limit their diagnostic power. Here we report a novel clinical skin imaging technique called color spatial frequency domain imaging (cSFDI), which enhances contrast by rendering the color spatial frequency domain (SFD) image at high spatial frequency. Moreover, by tuning the spatial frequency, we can obtain both absorption-weighted and scattering-weighted images. We developed a handheld imaging system specifically for clinical skin imaging. The flexible configuration of the system allows for better access to skin lesions in hard-to-reach regions. A total of 48 lesions from 31 patients were imaged under 470 nm, 530 nm, and 655 nm illumination at a spatial frequency of 0.6 mm^-1. The SFD reflectance images at 470 nm, 530 nm, and 655 nm were assigned to the blue (B), green (G), and red (R) channels to render a color SFD image. Our results indicated that color SFD images at f = 0.6 mm^-1 revealed properties that were not seen in standard color images. Structural features were enhanced and absorption features were reduced, which helped to identify the sources of the contrast. This imaging technique provides additional insights into skin lesions and may better assist clinical diagnosis.

  9. Vision Algorithms Catch Defects in Screen Displays

    NASA Technical Reports Server (NTRS)

    2014-01-01

    Andrew Watson, a senior scientist at Ames Research Center, developed a tool called the Spatial Standard Observer (SSO), which models human vision for use in robotic applications. Redmond, Washington-based Radiant Zemax LLC licensed the technology from NASA and combined it with its imaging colorimeter system, creating a powerful tool that high-volume manufacturers of flat-panel displays use to catch defects in screens.

  10. SkySat-1: very high-resolution imagery from a small satellite

    NASA Astrophysics Data System (ADS)

    Murthy, Kiran; Shearn, Michael; Smiley, Byron D.; Chau, Alexandra H.; Levine, Josh; Robinson, M. Dirk

    2014-10-01

    This paper presents details of the SkySat-1 mission, which is the first microsatellite-class commercial earth-observation system to generate sub-meter resolution panchromatic imagery, in addition to sub-meter resolution 4-band pan-sharpened imagery. SkySat-1 was built and launched for an order of magnitude lower cost than similarly performing missions. The low-cost design enables the deployment of a large imaging constellation that can provide imagery with both high temporal resolution and high spatial resolution. One key enabler of the SkySat-1 mission was simplifying the spacecraft design and instead relying on ground-based image processing to achieve high performance at the system level. The imaging instrument consists of a custom-designed high-quality optical telescope and commercially-available high frame rate CMOS image sensors. While each individually captured raw image frame shows moderate quality, ground-based image processing algorithms improve the raw data by combining data from multiple frames to boost image signal-to-noise ratio (SNR) and decrease the ground sample distance (GSD) in a process Skybox calls "digital TDI". Careful quality assessment and tuning of the spacecraft, payload, and algorithms was necessary to generate high-quality panchromatic, multispectral, and pan-sharpened imagery. Furthermore, the framing sensor configuration enabled the first commercial High-Definition full-frame rate panchromatic video to be captured from space, with approximately 1 meter ground sample distance. Details of the SkySat-1 imaging instrument and ground-based image processing system are presented, as well as an overview of the work involved with calibrating and validating the system. Examples of raw and processed imagery are shown, and the raw imagery is compared to pre-launch simulated imagery used to tune the image processing algorithms.
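
    Skybox's actual "digital TDI" pipeline is proprietary, but the underlying principle is that co-adding N aligned frames of a static scene improves SNR by roughly the square root of N. The toy simulation below (synthetic data, illustrative numbers only) demonstrates that scaling; it is not the mission's processing chain.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = rng.uniform(50, 200, size=(64, 64))        # static ground scene
n_frames, sigma = 16, 20.0

# simulate noisy, already-registered frames and co-add them
frames = truth + rng.normal(0.0, sigma, size=(n_frames, 64, 64))
stacked = frames.mean(axis=0)

print("single-frame noise:", np.std(frames[0] - truth))   # ~ sigma
print("stacked noise:     ", np.std(stacked - truth))     # ~ sigma / sqrt(n_frames)
```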

  11. Dendro-dendritic interactions between motion-sensitive large-field neurons in the fly.

    PubMed

    Haag, Juergen; Borst, Alexander

    2002-04-15

    For visual course control, flies rely on a set of motion-sensitive neurons called lobula plate tangential cells (LPTCs). Among these cells, the so-called CH (centrifugal horizontal) cells shape by their inhibitory action the receptive field properties of other LPTCs called FD (figure detection) cells specialized for figure-ground discrimination based on relative motion. Studying the ipsilateral input circuitry of CH cells by means of dual-electrode and combined electrical-optical recordings, we find that CH cells receive graded input from HS (large-field horizontal system) cells via dendro-dendritic electrical synapses. This particular wiring scheme leads to a spatial blur of the motion image on the CH cell dendrite, and, after inhibiting FD cells, to an enhancement of motion contrast. This could be crucial for enabling FD cells to discriminate object from self motion.

  12. ChRIS--A web-based neuroimaging and informatics system for collecting, organizing, processing, visualizing and sharing of medical data.

    PubMed

    Pienaar, Rudolph; Rannou, Nicolas; Bernal, Jorge; Hahn, Daniel; Grant, P Ellen

    2015-01-01

    The utility of web browsers for general-purpose computing, long anticipated, is only now coming to fruition. In this paper we present a web-based medical image data and information management software platform called ChRIS ([Boston] Children's Research Integration System). ChRIS' deep functionality allows for easy retrieval of medical image data from resources typically found in hospitals; organizes and presents information in a modern feed-like interface; provides access to a growing library of plugins that process these data, typically on a connected High Performance Compute Cluster; allows for easy data sharing between users and instances of ChRIS; and provides powerful 3D visualization and real-time collaboration.

  13. Pulse-compression ghost imaging lidar via coherent detection.

    PubMed

    Deng, Chenjin; Gong, Wenlin; Han, Shensheng

    2016-11-14

    Ghost imaging (GI) lidar, as a novel remote sensing technique, has been receiving increasing interest in recent years. By combining the pulse-compression technique and coherent detection with GI, we propose a new lidar system called pulse-compression GI lidar. Our analytical results, which are backed up by numerical simulations, demonstrate that pulse-compression GI lidar can obtain the target's spatial intensity distribution, range, and moving velocity. Compared with a conventional pulsed GI lidar system, pulse-compression GI lidar can readily achieve high single-pulse energy by using a long pulse without degrading the range resolution, and its coherent detection mechanism can eliminate the influence of stray light, which helps to improve the detection sensitivity and detection range.
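
    The pulse-compression idea invoked here is the standard matched-filter one: a long linear-FM (chirp) pulse is correlated with a replica of itself, concentrating the energy into a short peak whose width sets the range resolution, with a gain that scales with the time-bandwidth product. The toy example below illustrates only this generic step, not the ghost-imaging reconstruction; all numbers are placeholders.

```python
import numpy as np

fs, T, B = 1e6, 1e-3, 2e5                    # sample rate, pulse length, bandwidth
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (B / T) * t**2)  # linear-FM transmit pulse

echo = np.concatenate([np.zeros(500), chirp, np.zeros(500)])  # delayed return
compressed = np.correlate(echo, chirp, mode='same')           # matched filter

print("peak at sample:", np.argmax(np.abs(compressed)))
print("time-bandwidth (compression) gain ~", int(T * B))
```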

  14. Laser designator protection filter for see-spot thermal imaging systems

    NASA Astrophysics Data System (ADS)

    Donval, Ariela; Fisher, Tali; Lipman, Ofir; Oron, Moshe

    2012-06-01

    In some cases the FLIR has an open window in the 1.06 micrometer wavelength range; this capability is called 'see spot' and allows a laser designator spot to be seen using the FLIR. A problem arises when the returned laser energy is too high for the camera and can therefore damage the sensor. We propose a non-linear, solid-state dynamic filter solution that protects against damage in a passive way. Our filter blocks transmission only if the power exceeds a certain threshold, as opposed to spectral filters, which block a certain wavelength permanently. In this paper we introduce the Wideband Laser Protection Filter (WPF) solution for thermal imaging systems possessing the ability to see the laser spot.

  15. How Radiologists Think: Understanding Fast and Slow Thought Processing and How It Can Improve Our Teaching.

    PubMed

    van der Gijp, Anouk; Webb, Emily M; Naeger, David M

    2017-06-01

    Scholars have identified two distinct ways of thinking. This "Dual Process Theory" distinguishes a fast, nonanalytical way of thinking, called "System 1," and a slow, analytical way of thinking, referred to as "System 2." In radiology, we use both methods when interpreting and reporting images, and both should ideally be emphasized when educating our trainees. This review provides practical tips for improving radiology education, by enhancing System 1 and System 2 thinking among our trainees. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  16. Theory and Application of Image Enhancement

    DTIC Science & Technology

    1994-02-01

    [Abstract not available; the record text is an OCR fragment of a BASIC listing for collecting RGB data and displaying an image, which draws a display box and prompts the user for an image filename.]

  17. Capillaries for use in a multiplexed capillary electrophoresis system

    DOEpatents

    Yeung, Edward S.; Chang, Huan-Tsang; Fung, Eliza N.

    1997-12-09

    The invention provides a side-entry optical excitation geometry for use in a multiplexed capillary electrophoresis system. A charge-injection device is optically coupled to capillaries in the array such that the interior of a capillary is imaged onto only one pixel. In Sanger-type 4-label DNA sequencing reactions, nucleotide identification ("base calling") is improved by using two long-pass filters to split fluorescence emission into two emission channels. A binary poly(ethyleneoxide) matrix is used in the electrophoretic separations.

  18. Capillaries for use in a multiplexed capillary electrophoresis system

    DOEpatents

    Yeung, E.S.; Chang, H.T.; Fung, E.N.

    1997-12-09

    The invention provides a side-entry optical excitation geometry for use in a multiplexed capillary electrophoresis system. A charge-injection device is optically coupled to capillaries in the array such that the interior of a capillary is imaged onto only one pixel. In Sanger-type 4-label DNA sequencing reactions, nucleotide identification (``base calling``) is improved by using two long-pass filters to split fluorescence emission into two emission channels. A binary poly(ethyleneoxide) matrix is used in the electrophoretic separations. 19 figs.

  19. Deformable Mirrors Correct Optical Distortions

    NASA Technical Reports Server (NTRS)

    2010-01-01

    By combining the high sensitivity of space telescopes with revolutionary imaging technologies consisting primarily of adaptive optics, the Terrestrial Planet Finder is slated to have imaging power 100 times greater than the Hubble Space Telescope. To this end, Boston Micromachines Corporation, of Cambridge, Massachusetts, received Small Business Innovation Research (SBIR) contracts from the Jet Propulsion Laboratory for space-based adaptive optical technology. The work resulted in a microelectromechanical systems (MEMS) deformable mirror (DM) called the Kilo-DM. The company now offers a full line of MEMS DMs, which are being used in observatories across the world, in laser communication, and microscopy.

  20. Twisting Blob of Plasma

    NASA Image and Video Library

    2017-12-08

    A twisted blob of solar material – a hot, charged gas called plasma – can be seen erupting off the side of the sun on Sept. 26, 2014. The image is from NASA's Solar Dynamics Observatory, focusing in on ionized Helium at 60,000 degrees C. Credit: NASA/SDO

  1. An Automated Platform for High-Resolution Tissue Imaging Using Nanospray Desorption Electrospray Ionization Mass Spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lanekoff, Ingela T.; Heath, Brandi S.; Liyu, Andrey V.

    2012-10-02

    An automated platform has been developed for acquisition and visualization of mass spectrometry imaging (MSI) data using nanospray desorption electrospray ionization (nano-DESI). The new system enables robust operation of the nano-DESI imaging source over many hours. This is achieved by controlling the distance between the sample and the probe by mounting the sample holder onto an automated XYZ stage and defining the tilt of the sample plane. This approach is useful for imaging of relatively flat samples such as thin tissue sections. Custom software called MSI QuickView was developed for visualization of large data sets generated in imaging experiments. MSI QuickView enables fast visualization of the imaging data during data acquisition and detailed processing after the entire image is acquired. The performance of the system is demonstrated by imaging rat brain tissue sections. High-resolution mass analysis combined with MS/MS experiments enabled identification of lipids and metabolites in the tissue section. In addition, the high dynamic range and sensitivity of the technique allowed us to generate ion images of low-abundance isobaric lipids. A high-spatial-resolution image acquired over a small region of the tissue section revealed the spatial distribution of an abundant brain metabolite, creatine, in the white and gray matter that is consistent with literature data obtained using magnetic resonance spectroscopy.

  2. A 2D/3D hybrid integral imaging display by using fast switchable hexagonal liquid crystal lens array

    NASA Astrophysics Data System (ADS)

    Lee, Hsin-Hsueh; Huang, Ping-Ju; Wu, Jui-Yi; Hsieh, Po-Yuan; Huang, Yi-Pai

    2017-05-01

    This paper proposes a new display that can switch between 2D and 3D images on a monitor, which we call a Hybrid Display. In 3D display technologies, the reduction of image resolution is still an important issue: the more angular information is offered to the observer, the less spatial resolution remains for the image, because the panel resolution is fixed. For example, in an integral photography system, the part of the image without depth, such as the background, loses resolution in the transformation from a 2D to a 3D image. Therefore, we propose a method that uses a liquid crystal component to quickly switch between the 2D image and the 3D image. Meanwhile, the 2D image is set as a background to compensate for the resolution. In the experiment, a hexagonal liquid crystal lens array is used in place of a fixed lens array. Moreover, in order to increase the lens power of the hexagonal LC lens array, we applied a high-resistance (Hi-R) layer structure on the electrode. The Hi-R layer creates a gradient electric field and shapes the lens profile. We also use a panel with 801 PPI to display the integral image in our system. Hence, a full-resolution 2D background combined with the 3D depth object forms the Hybrid Display.

  3. A global "imaging" view on systems approaches in immunology.

    PubMed

    Ludewig, Burkhard; Stein, Jens V; Sharpe, James; Cervantes-Barragan, Luisa; Thiel, Volker; Bocharov, Gennady

    2012-12-01

    The immune system exhibits an enormous complexity. High throughput methods such as the "-omic" technologies generate vast amounts of data that facilitate dissection of immunological processes at ever finer resolution. Using high-resolution data-driven systems analysis, causal relationships between complex molecular processes and particular immunological phenotypes can be constructed. However, processes in tissues, organs, and the organism itself (so-called higher level processes) also control and regulate the molecular (lower level) processes. Reverse systems engineering approaches, which focus on the examination of the structure, dynamics and control of the immune system, can help to understand the construction principles of the immune system. Such integrative mechanistic models can properly describe, explain, and predict the behavior of the immune system in health and disease by combining both higher and lower level processes. Moving from molecular and cellular levels to a multiscale systems understanding requires the development of methodologies that integrate data from different biological levels into multiscale mechanistic models. In particular, 3D imaging techniques and 4D modeling of the spatiotemporal dynamics of immune processes within lymphoid tissues are central for such integrative approaches. Both dynamic and global organ imaging technologies will be instrumental in facilitating comprehensive multiscale systems immunology analyses as discussed in this review. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Sparse image reconstruction for molecular imaging.

    PubMed

    Ting, Michael; Raich, Raviv; Hero, Alfred O

    2009-06-01

    The application that motivates this paper is molecular imaging at the atomic level. When discretized at subatomic distances, the volume is inherently sparse. Noiseless measurements from an imaging technology can be modeled by convolution of the image with the system point spread function (psf). Such is the case with magnetic resonance force microscopy (MRFM), an emerging technology where imaging of an individual tobacco mosaic virus was recently demonstrated with nanometer resolution. We also consider additive white Gaussian noise (AWGN) in the measurements. Many prior works on sparse estimators have focused on the case in which the system matrix H has low coherence; however, in our application H is the convolution matrix for the system psf. A typical convolution matrix has high coherence. This paper, therefore, does not assume a low-coherence H. A discrete-continuous form of the Laplacian and atom at zero (LAZE) p.d.f. used by Johnstone and Silverman is formulated, and two sparse estimators are derived by maximizing the joint p.d.f. of the observation and image conditioned on the hyperparameters. A thresholding rule that generalizes the hard and soft thresholding rules appears in the course of the derivation. This so-called hybrid thresholding rule, when used in the iterative thresholding framework, gives rise to the hybrid estimator, a generalization of the lasso. Estimates of the hyperparameters for the lasso and hybrid estimator are obtained via Stein's unbiased risk estimate (SURE). A numerical study with a Gaussian psf and two sparse images shows that the hybrid estimator outperforms the lasso.
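
    The paper's hybrid thresholding rule generalizes the hard and soft rules, but its exact form is not given in this abstract; as a stand-in, the sketch below uses the standard soft-thresholding (lasso) rule inside an iterative-thresholding deconvolution loop with a convolutional forward model, which is the generic framework the abstract refers to.

```python
import numpy as np
from scipy.signal import fftconvolve

def soft(x, t):
    """Soft-thresholding (the lasso shrinkage rule)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_deconv(y, psf, lam=0.05, n_iter=200):
    """Iterative soft-thresholding for sparse deconvolution of y = psf * x + noise."""
    psf_flip = psf[::-1, ::-1]                    # adjoint of the convolution
    step = 1.0 / (np.sum(np.abs(psf)) ** 2)       # safe step: 1/L with L >= ||H||^2
    x = np.zeros_like(y)
    for _ in range(n_iter):
        resid = y - fftconvolve(x, psf, mode='same')
        x = soft(x + step * fftconvolve(resid, psf_flip, mode='same'), lam * step)
    return x
```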

  5. Towards nonionizing photoacoustic cystography

    NASA Astrophysics Data System (ADS)

    Kim, Chulhong; Jeon, Mansik; Wang, Lihong V.

    2012-02-01

    Normally, urine flows down from kidneys to bladders. Vesicoureteral reflux (VUR) is the abnormal flow of urine from bladders back to kidneys. VUR commonly follows urinary tract infection and leads to renal infection. Fluoroscopic voiding cystourethrography and direct radionuclide voiding cystography have been clinical gold standards for VUR imaging, but these methods are ionizing. Here, we demonstrate the feasibility of a novel and nonionizing process for VUR mapping in vivo, called photoacoustic cystography (PAC). Using a photoacoustic (PA) imaging system, we have successfully imaged a rat bladder filled with clinically being used methylene blue dye. An image contrast of ~8 was achieved. Further, spectroscopic PAC confirmed the accumulation of methylene blue in the bladder. Using a laser pulse energy of less than 1 mJ/cm2, bladder was clearly visible in the PA image. Our results suggest that this technology would be a useful clinical tool, allowing clinicians to identify bladder noninvasively in vivo.

  6. Fundamental limits of reconstruction-based superresolution algorithms under local translation.

    PubMed

    Lin, Zhouchen; Shum, Heung-Yeung

    2004-01-01

    Superresolution is a technique that can produce images of a higher resolution than that of the originally captured ones. Nevertheless, improvement in resolution using such a technique is very limited in practice. This makes it important to study the question: "Do fundamental limits exist for superresolution?" In this paper, we focus on a major class of superresolution algorithms, called reconstruction-based algorithms, which compute high-resolution images by simulating the image formation process. Assuming local translation among the low-resolution images, this paper is the first attempt to determine the explicit limits of reconstruction-based algorithms under both real and synthetic conditions. Based on the perturbation theory of linear systems, we obtain the superresolution limits from a conditioning analysis of the coefficient matrix. Moreover, we determine the number of low-resolution images that is sufficient to achieve the limit. Both real and synthetic experiments are carried out to verify our analysis.
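
    The conditioning analysis referred to above can be illustrated with a toy one-dimensional model: each low-resolution frame is a blurred, translated, and decimated copy of the high-resolution signal, the per-frame operators are stacked into one coefficient matrix, and its condition number indicates how strongly noise is amplified by reconstruction. The construction below is a self-contained illustration with arbitrary sizes, not the paper's analysis.

```python
import numpy as np

def sr_system_matrix(n_hr=64, decim=4, shifts=(0, 1, 2, 3), psf_width=3):
    """Stack blur + translate + decimate operators for several LR frames into
    the coefficient matrix of a reconstruction-based superresolution problem."""
    blur = np.zeros((n_hr, n_hr))
    for i in range(n_hr):
        for k in range(psf_width):                       # circular box blur
            blur[i, (i + k - psf_width // 2) % n_hr] += 1.0 / psf_width
    blocks = []
    for s in shifts:
        translate = np.roll(np.eye(n_hr), s, axis=1)     # local (circular) translation
        blocks.append((blur @ translate)[::decim, :])    # keep every decim-th row
    return np.vstack(blocks)

print("4 frames:", np.linalg.cond(sr_system_matrix()))               # finite but large
print("2 frames:", np.linalg.cond(sr_system_matrix(shifts=(0, 1))))  # effectively singular
```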

  7. iMAGE cloud: medical image processing as a service for regional healthcare in a hybrid cloud environment.

    PubMed

    Liu, Li; Chen, Weiping; Nie, Min; Zhang, Fengjuan; Wang, Yu; He, Ailing; Wang, Xiaonan; Yan, Gen

    2016-11-01

    To handle the emergence of the regional healthcare ecosystem, physicians and surgeons in various departments and healthcare institutions must process medical images securely, conveniently, and efficiently, and must integrate them with electronic medical records (EMRs). In this manuscript, we propose a software-as-a-service (SaaS) cloud called the iMAGE cloud. A three-layer hybrid cloud was created to provide medical image processing services in the smart city of Wuxi, China, in April 2015. In the first step, medical images and EMR data were received and integrated via the hybrid regional healthcare network. Then, traditional and advanced image processing functions were proposed and computed in a unified manner in the high-performance cloud units. Finally, the image processing results were delivered to regional users using virtual desktop infrastructure (VDI) technology. Security infrastructure was also taken into consideration. Integrated information queries and many advanced medical image processing functions, such as coronary extraction, pulmonary reconstruction, vascular extraction, intelligent detection of pulmonary nodules, image fusion, and 3D printing, were made available to local physicians and surgeons in various departments and healthcare institutions. Implementation results indicate that the iMAGE cloud can provide convenient, efficient, compatible, and secure medical image processing services in regional healthcare networks. The iMAGE cloud has been proven to be valuable in applications in the regional healthcare system, and it could have a promising future in the healthcare system worldwide.

  8. DREAMS and IMAGE: A Model and Computer Implementation for Concurrent, Life-Cycle Design of Complex Systems

    NASA Technical Reports Server (NTRS)

    Hale, Mark A.; Craig, James I.; Mistree, Farrokh; Schrage, Daniel P.

    1995-01-01

    Computing architectures are being assembled that extend concurrent engineering practices by providing more efficient execution and collaboration on distributed, heterogeneous computing networks. Built on the successes of initial architectures, requirements for a next-generation design computing infrastructure can be developed. These requirements concentrate on those needed by a designer in decision-making processes from product conception to recycling and can be categorized in two areas: design process and design information management. A designer both designs and executes design processes throughout design time to achieve better product and process capabilities while expending fewer resources. In order to accomplish this, information, or more appropriately design knowledge, needs to be adequately managed during product and process decomposition as well as recomposition. A foundation has been laid that captures these requirements in a design architecture called DREAMS (Developing Robust Engineering Analysis Models and Specifications). In addition, a computing infrastructure, called IMAGE (Intelligent Multidisciplinary Aircraft Generation Environment), is being developed that satisfies design requirements defined in DREAMS and incorporates enabling computational technologies.

  9. Prior image constrained image reconstruction in emerging computed tomography applications

    NASA Astrophysics Data System (ADS)

    Brunner, Stephen T.

    Advances have been made in computed tomography (CT), especially in the past five years, by incorporating prior images into the image reconstruction process. In this dissertation, we investigate prior image constrained image reconstruction in three emerging CT applications: dual-energy CT, multi-energy photon-counting CT, and cone-beam CT in image-guided radiation therapy. First, we investigate the application of Prior Image Constrained Compressed Sensing (PICCS) in dual-energy CT, which has been called "one of the hottest research areas in CT." Phantom and animal studies are conducted using a state-of-the-art 64-slice GE Discovery 750 HD CT scanner to investigate the extent to which PICCS can enable radiation dose reduction in material density and virtual monochromatic imaging. Second, we extend the application of PICCS from dual-energy CT to multi-energy photon-counting CT, which has been called "one of the 12 topics in CT to be critical in the next decade." Numerical simulations are conducted to generate multiple energy bin images for a photon-counting CT acquisition and to investigate the extent to which PICCS can enable radiation dose efficiency improvement. Third, we investigate the performance of a newly proposed prior image constrained scatter correction technique to correct scatter-induced shading artifacts in cone-beam CT, which, when used in image-guided radiation therapy procedures, can assist in patient localization, and potentially, dose verification and adaptive radiation therapy. Phantom studies are conducted using a Varian 2100 EX system with an on-board imager to investigate the extent to which the prior image constrained scatter correction technique can mitigate scatter-induced shading artifacts in cone-beam CT. Results show that these prior image constrained image reconstruction techniques can reduce radiation dose in dual-energy CT by 50% in phantom and animal studies in material density and virtual monochromatic imaging, can lead to radiation dose efficiency improvement in multi-energy photon-counting CT, and can mitigate scatter-induced shading artifacts in cone-beam CT in full-fan and half-fan modes.
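
    For reference, a commonly cited form of the PICCS objective (summarized from the general literature, not reproduced from this record) is

      \[ \mathbf{x}^{\ast} = \arg\min_{\mathbf{x}} \; \alpha \, \lVert \Psi_{1}(\mathbf{x}-\mathbf{x}_{\mathrm{P}}) \rVert_{1} + (1-\alpha)\, \lVert \Psi_{2}\,\mathbf{x} \rVert_{1} \quad \text{subject to} \quad A\mathbf{x} = \mathbf{y}, \]

    where x_P is the prior image, Psi_1 and Psi_2 are sparsifying transforms, A is the system matrix, y is the measured data, and alpha in [0,1] weights the prior-image term.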

  10. Architecture for biomedical multimedia information delivery on the World Wide Web

    NASA Astrophysics Data System (ADS)

    Long, L. Rodney; Goh, Gin-Hua; Neve, Leif; Thoma, George R.

    1997-10-01

    Research engineers at the National Library of Medicine are building a prototype system for the delivery of multimedia biomedical information on the World Wide Web. This paper discusses the architecture and design considerations for the system, which will be used initially to make images and text from the third National Health and Nutrition Examination Survey (NHANES) publicly available. We categorized our analysis as follows: (1) fundamental software tools: we analyzed trade-offs among use of conventional HTML/CGI, X Window Broadway, and Java; (2) image delivery: we examined the use of unconventional TCP transmission methods; (3) database manager and database design: we discuss the capabilities and planned use of the Informix object-relational database manager and the planned schema for the NHANES database; (4) storage requirements for our Sun server; (5) user interface considerations; (6) the compatibility of the system with other standard research and analysis tools; (7) image display: we discuss considerations for consistent image display for end users. Finally, we discuss the scalability of the system in terms of incorporating larger or more databases of similar data, and the extendibility of the system for supporting content-based retrieval of biomedical images. The system prototype is called the Web-based Medical Information Retrieval System. An early version was built as a Java applet and tested on Unix, PC, and Macintosh platforms. This prototype used the MiniSQL database manager to do text queries on a small database of records of participants in the second NHANES survey. The full records and associated x-ray images were retrievable and displayable on a standard Web browser. A second version has now been built, also a Java applet, using the MySQL database manager.

  11. UV-sensitive scientific CCD image sensors

    NASA Astrophysics Data System (ADS)

    Vishnevsky, Grigory I.; Kossov, Vladimir G.; Iblyaminova, A. F.; Lazovsky, Leonid Y.; Vydrevitch, Michail G.

    1997-06-01

    The investigation of the interaction of probe laser radiation with substances contained in the environment has long been a recognized technique for contamination detection and identification. For this purpose, near- and mid-range-IR laser radiation is traditionally used. However, as many works presented at recent ecological monitoring conferences show, systems using laser radiation in the near-UV range (250 - 500 nm) are growing rapidly in addition to the traditional ones. The use of CCD imagers is one of the prerequisites for this, allowing the development of multi-channel computer-based spectral research systems. To identify and analyze contaminating impurities in the environment, methods such as laser fluorescence analysis, UV absorption and differential spectroscopy, and Raman scattering are commonly used. These methods are used to identify a large number of impurities (petrol, toluene, xylene isomers, SO2, acetone, methanol), to detect and identify food pathogens in real time, to measure the concentrations of NH3, SO2 and NO in combustion emissions, to detect oil products in water, to analyze contamination in ground water, to determine the ozone distribution in the atmospheric profile, and to monitor various chemical processes including radioactive materials manufacturing, heterogeneous catalytic reactions, polymer production, etc. A multi-element image sensor with enhanced UV sensitivity, low optical non-uniformity, low intrinsic noise and high dynamic range is a key element of all the above systems. Thus, so-called Virtual Phase (VP) CCDs, which possess all these features, seem promising for ecological monitoring spectral measuring systems. Presently, a family of VP CCDs with different architectures and pixel counts has been developed and is being manufactured. All CCDs from this family are supported by a precise slow-scan digital image acquisition system that can be used in various image processing systems in astronomy, biology, medicine, ecology, etc. Images are displayed directly on a PC monitor through the supporting software.

  12. A new concept of real-time security camera monitoring with privacy protection by masking moving objects

    NASA Astrophysics Data System (ADS)

    Yabuta, Kenichi; Kitazawa, Hitoshi; Tanaka, Toshihisa

    2006-02-01

    Recently, the number of security monitoring cameras has been increasing rapidly. However, it is normally difficult to know when and where we are monitored by these cameras and how the recorded images are stored and/or used. Therefore, how to protect privacy in the recorded images is a crucial issue. In this paper, we address this problem and introduce a framework for security monitoring systems that takes privacy protection into account. We state requirements for monitoring systems in this framework and propose a possible implementation that satisfies those requirements. To protect the privacy of recorded objects, they are made invisible by appropriate image processing techniques. Moreover, the original objects are encrypted and watermarked into the image containing the "invisible" objects, which is coded by the JPEG standard. Therefore, the image decoded by a normal JPEG viewer shows the objects as unrecognizable or invisible. We also introduce a so-called "special viewer" to decrypt and display the original objects. This special viewer can be used by a limited set of users when necessary, for example for crime investigation. The special viewer allows the user to choose which objects to decode and display. Moreover, the proposed system supports real-time processing, since no future frame is needed to generate a bitstream.
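
    As a rough illustration of the masking-plus-encryption idea described above, here is a hedged Python sketch (not the paper's implementation; the opencv-python and cryptography packages, the MOG2 background subtractor, and Fernet encryption are stand-ins chosen for illustration). It blanks out moving objects and returns an encrypted copy of the original frame.

      import cv2
      from cryptography.fernet import Fernet

      key = Fernet.generate_key()                  # held only by authorized viewers
      cipher = Fernet(key)
      subtractor = cv2.createBackgroundSubtractorMOG2()

      def protect_frame(frame):
          # Return (masked_frame, encrypted_original) for one video frame.
          mask = subtractor.apply(frame)                        # moving-object mask
          _, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)
          masked = frame.copy()
          masked[mask > 0] = 0                                  # objects made invisible
          original_bytes = cv2.imencode(".png", frame)[1].tobytes()
          return masked, cipher.encrypt(original_bytes)         # payload to embed/store

    An authorized viewer holding the key could reverse the last step with cipher.decrypt() and cv2.imdecode(); embedding the encrypted payload as a watermark in the JPEG bitstream, as the paper describes, is not shown here.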

  13. Thermoelectric infrared imager and automotive applications

    NASA Astrophysics Data System (ADS)

    Hirota, Masaki; Satou, Fuminori; Saito, Masanori; Kishi, Youichi; Nakajima, Yasushi; Uchiyama, Makato

    2001-10-01

    This paper describes a newly developed thermoelectric infrared imager having a 48 X 32 element thermoelectric focal plane array (FPA) and an experimental vehicle featuring a blind spot pedestrian warning system, which employs four infrared imagers. The imager measures 100 mm in width, 60 mm in height and 80 mm in depth, weighs 400 g, and has an overall field of view (FOV) of 40 deg X 20 deg. The power consumption of the imager is 3 W. The pedestrian detection program is stored in a CPU chip on a printed circuit board (PCB). The FPA provides high responsivity of 2,100 V/W, a time constant of 25 msec, and a low cost potential. Each element has external dimensions of 190 μm x 190 μm, and consists of six pairs of thermocouples and an Au-black absorber that is precisely patterned by low-pressure evaporation and lift-off technologies. The experimental vehicle is called the Nissan ASV-2 (Advanced Safety Vehicle-2), which incorporates a wide range of integrated technologies aimed at reducing traffic accidents. The blind spot pedestrian warning system alerts the driver to the presence of a pedestrian in a blind spot by detecting the infrared radiation emitted from the person's body. This system also prevents the vehicle from moving in the direction of the pedestrian.

  14. Vision 20/20: Simultaneous CT-MRI — Next chapter of multimodality imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Ge, E-mail: wangg6@rpi.edu; Xi, Yan; Gjesteby, Lars

    Multimodality imaging systems such as positron emission tomography-computed tomography (PET-CT) and MRI-PET are widely available, but a simultaneous CT-MRI instrument has not been developed. Synergies between independent modalities, e.g., CT, MRI, and PET/SPECT, can be realized with image registration, but such postprocessing suffers from registration errors that can be avoided with synchronized data acquisition. The clinical potential of simultaneous CT-MRI is significant, especially in cardiovascular and oncologic applications where studies of the vulnerable plaque, response to cancer therapy, and kinetic and dynamic mechanisms of targeted agents are limited by current imaging technologies. The rationale, feasibility, and realization of simultaneous CT-MRI are described in this perspective paper. The enabling technologies include interior tomography, unique gantry designs, open magnet and RF sequences, and source and detector adaptation. Based on the experience with PET-CT, PET-MRI, and MRI-LINAC instrumentation, where hardware innovation and performance optimization were instrumental in constructing commercial systems, the authors provide top-level concepts for simultaneous CT-MRI to meet clinical requirements and new challenges. Simultaneous CT-MRI fills a major gap of modality coupling and represents a key step toward the so-called “omnitomography” defined as the integration of all relevant imaging modalities for systems biology and precision medicine.

  15. A signature dissimilarity measure for trabecular bone texture in knee radiographs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woloszynski, T.; Podsiadlo, P.; Stachowiak, G. W.

    Purpose: The purpose of this study is to develop a dissimilarity measure for the classification of trabecular bone (TB) texture in knee radiographs. Problems associated with the traditional extraction and selection of texture features and with the invariance to imaging conditions such as image size, anisotropy, noise, blur, exposure, magnification, and projection angle were addressed. Methods: In the method developed, called a signature dissimilarity measure (SDM), a sum of earth mover's distances calculated for roughness and orientation signatures is used to quantify dissimilarities between textures. Scale-space theory was used to ensure scale and rotation invariance. The effects of image size, anisotropy, noise, and blur on the SDM developed were studied using computer generated fractal texture images. The invariance of the measure to image exposure, magnification, and projection angle was studied using x-ray images of human tibia head. For the studies, Mann-Whitney tests with a significance level of 0.01 were used. A comparison study between the performance of an SDM-based classification system and two other systems in the classification of Brodatz textures and the detection of knee osteoarthritis (OA) was conducted. The other systems are based on weighted neighbor distance using compound hierarchy of algorithms representing morphology (WND-CHARM) and local binary patterns (LBP). Results: Results obtained indicate that the SDM developed is invariant to image exposure (2.5-30 mA s), magnification (x1.00-x1.35), noise associated with film graininess and quantum mottle (<25%), blur generated by a sharp film screen, and image size (>64x64 pixels). However, the measure is sensitive to changes in projection angle (>5 deg.), image anisotropy (>30 deg.), and blur generated by a regular film screen. For the classification of Brodatz textures, the SDM-based system produced results comparable to the LBP system. For the detection of knee OA, the SDM-based system achieved 78.8% classification accuracy and outperformed the WND-CHARM system (64.2%). Conclusions: The SDM is well suited for the classification of TB texture images in knee OA detection and may be useful for the texture classification of medical images in general.
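
    As a hedged illustration of the measure's stated form (a sum of earth mover's distances over roughness and orientation signatures), the sketch below uses SciPy's 1-D Wasserstein distance; the signatures, bins, and toy data are illustrative assumptions, not the published algorithm.

      import numpy as np
      from scipy.stats import wasserstein_distance

      def signature_dissimilarity(bins, rough_a, rough_b, orient_a, orient_b):
          # Sum of earth mover's distances over the two signature types.
          return (wasserstein_distance(bins, bins, rough_a, rough_b) +
                  wasserstein_distance(bins, bins, orient_a, orient_b))

      bins = np.linspace(0.0, 1.0, 16)              # shared signature support (toy)
      rng = np.random.default_rng(0)
      rough_a, rough_b, orient_a, orient_b = (rng.random(16) for _ in range(4))
      print(signature_dissimilarity(bins, rough_a, rough_b, orient_a, orient_b))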

  16. Coronal Loops Reveal Magnetic Dance

    NASA Image and Video Library

    2015-01-20

    Magnetic Dance: Solar material traces out giant magnetic fields soaring through the sun to create what's called coronal loops. Here they can be seen as white lines in a sharpened AIA image from Oct. 24, 2014, laid over data from SDO's Helioseismic and Magnetic Imager, which shows magnetic fields on the sun's surface in false color. Credit: NASA/SDO/HMI/AIA/LMSAL Read more: www.nasa.gov/content/goddard/sdo-telescope-collects-its-1...

  17. Particle image velocimetry for the Surface Tension Driven Convection Experiment using a particle displacement tracking technique

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.; Pline, Alexander D.

    1991-01-01

    The Surface Tension Driven Convection Experiment (STDCE) is a Space Transportation System flight experiment to study both transient and steady thermocapillary fluid flows aboard the USML-1 Spacelab mission planned for 1992. One of the components of data collected during the experiment is a video record of the flow field. This qualitative data is then quantified using an all electronic, two-dimensional particle image velocimetry technique called particle displacement tracking (PDT) which uses a simple space domain particle tracking algorithm. The PDT system is successful in producing velocity vector fields from the raw video data. Application of the PDT technique to a sample data set yielded 1606 vectors in 30 seconds of processing time. A bottom viewing optical arrangement is used to image the illuminated plane, which causes keystone distortion in the final recorded image. A coordinate transformation was incorporated into the system software to correct this viewing angle distortion. PDT processing produced 1.8 percent false identifications, due to random particle locations. A highly successful routine for removing the false identifications was also incorporated, reducing the number of false identifications to 0.2 percent.
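
    A hedged sketch of the kind of coordinate transformation mentioned above: a projective (keystone) correction defined by four corner correspondences. The corner coordinates and the use of OpenCV are illustrative assumptions, not the STDCE software.

      import cv2
      import numpy as np

      # Corner correspondences: where the corners of the illuminated plane appear
      # in the keystoned frame (src) versus a rectified view (dst).  The pixel
      # coordinates are placeholders, not STDCE calibration values.
      src = np.float32([[60, 10], [580, 10], [0, 470], [640, 470]])
      dst = np.float32([[0, 0], [640, 0], [0, 480], [640, 480]])
      H = cv2.getPerspectiveTransform(src, dst)        # 3x3 projective transform

      frame = np.zeros((480, 640), dtype=np.uint8)     # stand-in video frame
      rectified = cv2.warpPerspective(frame, H, (640, 480))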

  18. Ambulatory diffuse optical tomography and multimodality physiological monitoring system for muscle and exercise applications

    NASA Astrophysics Data System (ADS)

    Hu, Gang; Zhang, Quan; Ivkovic, Vladimir; Strangman, Gary E.

    2016-09-01

    Ambulatory diffuse optical tomography (aDOT) is based on near-infrared spectroscopy (NIRS) and enables three-dimensional imaging of regional hemodynamics and oxygen consumption during a person's normal activities. Although NIRS has been previously used for muscle assessment, it has been notably limited in terms of the number of channels measured, the extent to which subjects can be ambulatory, and/or the ability to simultaneously acquire synchronized auxiliary data such as electromyography (EMG) or electrocardiography (ECG). We describe the development of a prototype aDOT system, called NINscan-M, capable of ambulatory tomographic imaging as well as simultaneous auxiliary multimodal physiological monitoring. Powered by four AA size batteries and weighing 577 g, the NINscan-M prototype can synchronously record 64-channel NIRS imaging data, eight channels of EMG, ECG, or other analog signals, plus force, acceleration, rotation, and temperature for 24+ h at up to 250 Hz. We describe the system's design, characterization, and performance characteristics. We also describe examples of isometric, cycle ergometer, and free-running ambulatory exercise to demonstrate tomographic imaging at 25 Hz. NINscan-M represents a multiuse tool for muscle physiology studies as well as clinical muscle assessment.

  20. Low-dose magnetic-field-immune biplanar fluoroscopy for neurosurgery

    NASA Astrophysics Data System (ADS)

    Ramos, P. A.; Lawson, Michael A.; Wika, Kevin G.; Allison, Stephen W.; Quate, E. G.; Molloy, J. A.; Ritter, Rogers C.; Gilles, George T.

    1991-07-01

    The imaging chain of a bi-planar fluoroscopic system is described for a new neurosurgical technique: the Video Tumor Fighter (VTF). The VTF manipulates a small intracranially implanted magnet, called a thermoseed, by a large external magnetic field gradient. The thermoseed is heated by rf-induction to kill proximal tumor cells. For accurately guiding the seed through the brain, the x-ray tubes are alternately pulsed up to four times per second, each for as much as two hours. Radio-opaque reference markers, attached to the skull, enable the thermoseed's three dimensional position to be determined and then projected onto a displayed MRI brain scan. The imaging approach, similar to systems at the University of Arizona and the Mayo Clinic, includes a 20 cm diameter phosphor screen viewed by a proximity focused microchannel plate image intensifier coupled via fiberoptic taper to a solid state camera. The most important performance specifications are magnetic field immunity and, due to the procedure duration, low dosage per image. A preliminary arrangement designed in the laboratories yielded usable images at approximately 100 (mu) R exposure per frame. In this paper, the results of a series of studies of the effects of magnetic fields on microchannel plate image intensifiers used in the image detection chain are presented.

  1. Microscopy using source and detector arrays

    NASA Astrophysics Data System (ADS)

    Sheppard, Colin J. R.; Castello, Marco; Vicidomini, Giuseppe; Duocastella, Martí; Diaspro, Alberto

    2016-03-01

    There are basically two types of microscope, which we call conventional and scanning. The former type is a full-field imaging system. In the latter type, the object is illuminated with a probe beam, and a signal detected. We can generalize the probe to a patterned illumination. Similarly we can generalize the detection to a patterned detection. Combining these we get a range of different modalities: confocal microscopy, structured illumination (with full-field imaging), spinning disk (with multiple illumination points), and so on. The combination allows the spatial frequency bandwidth of the system to be doubled. In general we can record a four dimensional (4D) image of a 2D object (or a 6D image from a 3D object, using an acoustic tuneable lens). The optimum way to directly reconstruct the resulting image is by image scanning microscopy (ISM). But the 4D image is highly redundant, so deconvolution-based approaches are also relevant. ISM can be performed in fluorescence, bright field or interference microscopy. Several different implementations have been described, with associated advantages and disadvantages. In two-photon microscopy, the illumination and detection point spread functions are very different. This is also the case when using pupil filters or when there is a large Stokes shift.

  2. Enhanced facial recognition for thermal imagery using polarimetric imaging.

    PubMed

    Gurton, Kristan P; Yuffa, Alex J; Videen, Gorden W

    2014-07-01

    We present a series of long-wave-infrared (LWIR) polarimetric-based thermal images of facial profiles in which polarization-state information of the image-forming radiance is retained and displayed. The resultant polarimetric images show enhanced facial features, additional texture, and details that are not present in corresponding conventional thermal imagery. It has been generally thought that conventional thermal imagery (MidIR or LWIR) could not produce the detailed spatial information required for reliable human identification due to the so-called "ghosting" effect often seen in thermal imagery of human subjects. By using polarimetric information, we are able to extract subtle surface features of the human face, thus improving subject identification. Polarimetric image sets considered include the conventional thermal intensity image, S0, the two Stokes images, S1 and S2, and a Stokes image product called the degree-of-linear-polarization image.
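
    For reference, a hedged sketch of the degree-of-linear-polarization product computed from the Stokes images named above (DoLP = sqrt(S1^2 + S2^2)/S0 is the standard definition; the epsilon guard is an implementation convenience, not from the paper):

      import numpy as np

      def degree_of_linear_polarization(s0, s1, s2, eps=1e-12):
          # DoLP image: sqrt(S1^2 + S2^2) / S0, evaluated per pixel.
          return np.sqrt(s1 ** 2 + s2 ** 2) / (s0 + eps)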

  3. Neural network for intelligent query of an FBI forensic database

    NASA Astrophysics Data System (ADS)

    Uvanni, Lee A.; Rainey, Timothy G.; Balasubramanian, Uma; Brettle, Dean W.; Weingard, Fred; Sibert, Robert W.; Birnbaum, Eric

    1997-02-01

    Examiner is an automated fired cartridge case identification system utilizing a dual-use neural network pattern recognition technology, called the statistical-multiple object detection and location system (S-MODALS), developed by Booz·Allen & Hamilton, Inc. in conjunction with Rome Laboratory. S-MODALS was originally designed for automatic target recognition (ATR) of tactical and strategic military targets using multisensor fusion of electro-optical (EO), infrared (IR), and synthetic aperture radar (SAR) sensors. Since S-MODALS is a learning system readily adaptable to problem domains other than automatic target recognition, the pattern matching problem of microscopic marks for firearms evidence was analyzed using S-MODALS. The physics, phenomenology, discrimination and search strategies, robustness requirements, and error-level and confidence-level propagation that apply to the pattern matching problem of military targets were found to be applicable to the ballistic domain as well. The Examiner system uses S-MODALS to rank a set of queried cartridge case images from the most similar to the least similar image in reference to an investigative fired cartridge case image. The paper presents three independent tests and evaluation studies of the Examiner system utilizing the S-MODALS technology for the Federal Bureau of Investigation.

  4. A spatial data handling system for retrieval of images by unrestricted regions of user interest

    NASA Technical Reports Server (NTRS)

    Dorfman, Erik; Cromp, Robert F.

    1992-01-01

    The Intelligent Data Management (IDM) project at NASA/Goddard Space Flight Center has prototyped an Intelligent Information Fusion System (IIFS), which automatically ingests metadata from remote sensor observations into a large catalog which is directly queryable by end-users. The greatest challenge in the implementation of this catalog was supporting spatially-driven searches, where the user has a possibly complex region of interest and wishes to recover those images that overlap all or simply a part of that region. A spatial data management system is described, which is capable of storing and retrieving records of image data regardless of their source. This system was designed and implemented as part of the IIFS catalog. A new data structure, called a hypercylinder, is central to the design. The hypercylinder is specifically tailored for data distributed over the surface of a sphere, such as satellite observations of the Earth or space. Operations on the hypercylinder are regulated by two expert systems. The first governs the ingest of new metadata records and maintains the efficiency of the data structure as it grows. The second translates, plans, and executes users' spatial queries, performing incremental optimization as partial query results are returned.

  5. Is there a need for biomedical CBIR systems in clinical practice? Outcomes from a usability study

    NASA Astrophysics Data System (ADS)

    Antani, Sameer; Xue, Zhiyun; Long, L. Rodney; Bennett, Deborah; Ward, Sarah; Thoma, George R.

    2011-03-01

    Articles in the literature routinely describe advances in Content Based Image Retrieval (CBIR) and its potential for improving clinical practice, biomedical research and education. Several systems have been developed to address particular needs, however, surprisingly few are found to be in routine practical use. Our collaboration with the National Cancer Institute (NCI) has identified a need to develop tools to annotate and search a collection of over 100,000 cervigrams and related, anonymized patient data. One such tool developed for a projected need for retrieving similar patient images is the prototype CBIR system, called CervigramFinder, which retrieves images based on the visual similarity of particular regions on the cervix. In this article we report the outcomes from a usability study conducted at a primary meeting of practicing experts. We used the study to not only evaluate the system for software errors and ease of use, but also to explore its "user readiness", and to identify obstacles that hamper practical use of such systems, in general. Overall, the participants in the study found the technology interesting and bearing great potential; however, several challenges need to be addressed before the technology can be adopted.

  6. A novel algorithm to detect glaucoma risk using texton and local configuration pattern features extracted from fundus images.

    PubMed

    Acharya, U Rajendra; Bhat, Shreya; Koh, Joel E W; Bhandary, Sulatha V; Adeli, Hojjat

    2017-09-01

    Glaucoma is an optic neuropathy defined by characteristic damage to the optic nerve and accompanying visual field deficits. Early diagnosis and treatment are critical to prevent irreversible vision loss and ultimate blindness. Current techniques for computer-aided analysis of the optic nerve and retinal nerve fiber layer (RNFL) are expensive and require keen interpretation by trained specialists. Hence, an automated system is highly desirable for a cost-effective and accurate screening for the diagnosis of glaucoma. This paper presents a new methodology and a computerized diagnostic system. Adaptive histogram equalization is used to convert color images to grayscale images followed by convolution of these images with Leung-Malik (LM), Schmid (S), and maximum response (MR4 and MR8) filter banks. The convolution process produces textons, the basic microstructures of typical images. Local configuration pattern (LCP) features are extracted from these textons. The significant features are selected using a sequential floating forward search (SFFS) method and ranked using the statistical t-test. Finally, various classifiers are used for classification of images into normal and glaucomatous classes. A high classification accuracy of 95.8% is achieved using six features obtained from the LM filter bank and the k-nearest neighbor (kNN) classifier. A glaucoma risk index (GRI) is also formulated to obtain a reliable and effective system. Copyright © 2017 Elsevier Ltd. All rights reserved.
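
    A hedged, minimal sketch of the overall pattern (filter-bank convolution, a per-filter feature, and a kNN classifier); the two-kernel filter bank, the energy feature, and the random stand-in images are illustrative assumptions and do not reproduce the LM/S/MR banks, LCP features, or SFFS selection used in the paper.

      import numpy as np
      from scipy import ndimage
      from sklearn.neighbors import KNeighborsClassifier

      def energy_features(gray_image, filter_bank):
          # One feature per filter: mean absolute response after convolution.
          return np.array([np.mean(np.abs(ndimage.convolve(gray_image, f)))
                           for f in filter_bank])

      # Toy two-kernel "filter bank" (horizontal and vertical derivatives).
      bank = [np.array([[1.0, -1.0]]), np.array([[1.0], [-1.0]])]

      rng = np.random.default_rng(0)
      images = rng.random((10, 64, 64))        # stand-in fundus image crops
      labels = rng.integers(0, 2, size=10)     # toy labels: 0 normal, 1 glaucoma
      X = np.vstack([energy_features(im, bank) for im in images])
      clf = KNeighborsClassifier(n_neighbors=3).fit(X, labels)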

  7. Mediaprocessors in medical imaging for high performance and flexibility

    NASA Astrophysics Data System (ADS)

    Managuli, Ravi; Kim, Yongmin

    2002-05-01

    New high performance programmable processors, called mediaprocessors, have been emerging since the early 1990s for various digital media applications, such as digital TV, set-top boxes, desktop video conferencing, and digital camcorders. Modern mediaprocessors, e.g., TI's TMS320C64x and Hitachi/Equator Technologies MAP-CA, can offer high performance utilizing both instruction-level and data-level parallelism. During this decade, with continued performance improvement and cost reduction, we believe that the mediaprocessors will become a preferred choice in designing imaging and video systems due to their flexibility in incorporating new algorithms and applications via programming and faster-time-to-market. In this paper, we will evaluate the suitability of these mediaprocessors in medical imaging. We will review the core routines of several medical imaging modalities, such as ultrasound and DR, and present how these routines can be mapped to mediaprocessors and their resultant performance. We will analyze the architecture of several leading mediaprocessors. By carefully mapping key imaging routines, such as 2D convolution, unsharp masking, and 2D FFT, to the mediaprocessor, we have been able to achieve comparable (if not better) performance to that of traditional hardwired approaches. Thus, we believe that future medical imaging systems will benefit greatly from these advanced mediaprocessors, offering significantly increased flexibility and adaptability, reducing the time-to-market, and improving the cost/performance ratio compared to the existing systems while meeting the high computing requirements.

  8. SPARX, a new environment for Cryo-EM image processing.

    PubMed

    Hohn, Michael; Tang, Grant; Goodyear, Grant; Baldwin, P R; Huang, Zhong; Penczek, Pawel A; Yang, Chao; Glaeser, Robert M; Adams, Paul D; Ludtke, Steven J

    2007-01-01

    SPARX (single particle analysis for resolution extension) is a new image processing environment with a particular emphasis on transmission electron microscopy (TEM) structure determination. It includes a graphical user interface that provides a complete graphical programming environment with a novel data/process-flow infrastructure, an extensive library of Python scripts that perform specific TEM-related computational tasks, and a core library of fundamental C++ image processing functions. In addition, SPARX relies on the EMAN2 library and cctbx, the open-source computational crystallography library from PHENIX. The design of the system is such that future inclusion of other image processing libraries is a straightforward task. The SPARX infrastructure intelligently handles retention of intermediate values, even those inside programming structures such as loops and function calls. SPARX and all dependencies are free for academic use and available with complete source.

  9. Hard and soft nanoparticles for image-guided surgery in nanomedicine

    NASA Astrophysics Data System (ADS)

    Locatelli, Erica; Monaco, Ilaria; Comes Franchini, Mauro

    2015-08-01

    The use of hard and/or soft nanoparticles for therapy, collectively called nanomedicine, has great potential in the battle against cancer. Major research efforts are underway in this area leading to development of new drug delivery approaches and imaging techniques. Despite this progress, the vast majority of patients who are affected by cancer today sadly still need surgical intervention, especially in the case of solid tumors. An important perspective for researchers is therefore to provide even more powerful tools to the surgeon for pre- and post-operative approaches. In this context, image-guided surgery, in combination with nanotechnology, opens a new strategy to win this battle. In this perspective, we will analyze and discuss the recent progress with nanoparticles of both metallic and biomaterial composition, and their use to develop powerful systems to be applied in image-guided surgery.

  10. Going "open" with mesoscopy: a new dimension on multi-view imaging.

    PubMed

    Gualda, Emilio; Moreno, Nuno; Tomancak, Pavel; Martins, Gabriel G

    2014-03-01

    OpenSPIM and OpenSpinMicroscopy emerged as open access platforms for Light Sheet and Optical Projection Imaging, often called optical mesoscopy techniques. Both projects can be easily reproduced using comprehensive online instructions that should foster the implementation and further development of optical imaging techniques with sample rotation control. This additional dimension in an open system makes multi-view microscopy easy to modify and will complement the emerging commercial solutions. Furthermore, it is deeply based on other open platforms such as MicroManager and Arduino, enabling development of tailored setups for very specific biological questions. In our perspective, the open access principle of OpenSPIM and OpenSpinMicroscopy is a game-changer, helping the concepts of light sheet and optical projection tomography (OPT) to enter the mainstream of biological imaging.

  11. Prostate tissue characterization/classification in 144 patient population using wavelet and higher order spectra features from transrectal ultrasound images.

    PubMed

    Pareek, Gyan; Acharya, U Rajendra; Sree, S Vinitha; Swapna, G; Yantri, Ratna; Martis, Roshan Joy; Saba, Luca; Krishnamurthi, Ganapathy; Mallarini, Giorgio; El-Baz, Ayman; Al Ekish, Shadi; Beland, Michael; Suri, Jasjit S

    2013-12-01

    In this work, we have proposed an on-line computer-aided diagnostic system called "UroImage" that classifies a Transrectal Ultrasound (TRUS) image into cancerous or non-cancerous with the help of non-linear Higher Order Spectra (HOS) features and Discrete Wavelet Transform (DWT) coefficients. The UroImage system consists of an on-line system where five significant features (one DWT-based feature and four HOS-based features) are extracted from the test image. These on-line features are transformed by the classifier parameters obtained using the training dataset to determine the class. We trained and tested six classifiers. The dataset used for evaluation had 144 TRUS images which were split into training and testing sets. Three-fold and ten-fold cross-validation protocols were adopted for training and estimating the accuracy of the classifiers. The ground truth used for training was obtained using the biopsy results. Among the six classifiers, using 10-fold cross-validation technique, Support Vector Machine and Fuzzy Sugeno classifiers presented the best classification accuracy of 97.9% with equally high values for sensitivity, specificity and positive predictive value. Our proposed automated system, which achieved more than 95% values for all the performance measures, can be an adjunct tool to provide an initial diagnosis for the identification of patients with prostate cancer. The technique, however, is limited by the limitations of 2D ultrasound guided biopsy, and we intend to improve our technique by using 3D TRUS images in the future.
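
    A hedged, minimal sketch of the wavelet-plus-classifier pattern described above (level-1 DWT sub-band energies fed to an SVM); the higher-order-spectra features are omitted, and the specific wavelet and the stand-in data are assumptions, not the UroImage implementation.

      import numpy as np
      import pywt
      from sklearn.svm import SVC

      def dwt_energy_features(image):
          # Energies of the four level-1 DWT sub-bands (cA, cH, cV, cD).
          cA, (cH, cV, cD) = pywt.dwt2(image, "db1")
          return np.array([np.sum(band ** 2) for band in (cA, cH, cV, cD)])

      rng = np.random.default_rng(0)
      images = rng.random((20, 64, 64))          # stand-in TRUS image patches
      labels = rng.integers(0, 2, size=20)       # toy labels: 0 benign, 1 cancer
      X = np.vstack([dwt_energy_features(im) for im in images])
      clf = SVC(kernel="rbf").fit(X, labels)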

  12. Web conferencing systems: Skype and MSN in telepathology

    PubMed Central

    Klock, Clóvis; Gomes, Regina de Paula Xavier

    2008-01-01

    Virtual pathology is a very important tool that can be used in several ways, including interconsultations with specialists in many areas and for frozen sections. In this work we considered the use of Windows Live Messenger and Skype for image transmission. The conference was conducted over broadband Internet using a Nikon E 200 microscope and a Samsung SCC-131 digital colour camera. Internet transmission speed varied from 400 Kb/s to 2.0 Mb/s. Both programs allow voice transmission concomitant with the image, so communication between the involved pathologists was possible using microphones and speakers. A live image could be seen by the receiving pathologist, who could ask for the field to be moved or the magnification to be increased or decreased. No phone call or typing was required. MSN and Skype can be used in many ways and with different operating systems installed on the computer. The capture system is simple and relatively cheap, which demonstrates the viability of using the system in developing countries and in cities where no pathologists are available. With improvements in software and digital image quality, together with the use of high-speed broadband Internet, this could become a new modality in surgical pathology. PMID:18673501

  13. Web conferencing systems: Skype and MSN in telepathology.

    PubMed

    Klock, Clóvis; Gomes, Regina de Paula Xavier

    2008-07-15

    Virtual pathology is a very important tool that can be used in several ways, including interconsultations with specialists in many areas and for frozen sections. In this work we considered the use of Windows Live Messenger and Skype for image transmission. The conference was conducted over broadband Internet using a Nikon E 200 microscope and a Samsung SCC-131 digital colour camera. Internet transmission speed varied from 400 Kb/s to 2.0 Mb/s. Both programs allow voice transmission concomitant with the image, so communication between the involved pathologists was possible using microphones and speakers. A live image could be seen by the receiving pathologist, who could ask for the field to be moved or the magnification to be increased or decreased. No phone call or typing was required. MSN and Skype can be used in many ways and with different operating systems installed on the computer. The capture system is simple and relatively cheap, which demonstrates the viability of using the system in developing countries and in cities where no pathologists are available. With improvements in software and digital image quality, together with the use of high-speed broadband Internet, this could become a new modality in surgical pathology.

  14. Towards establishing compact imaging spectrometer standards

    USGS Publications Warehouse

    Slonecker, E. Terrence; Allen, David W.; Resmini, Ronald G.

    2016-01-01

    Remote sensing science is currently undergoing a tremendous expansion in the area of hyperspectral imaging (HSI) technology. Spurred largely by the explosive growth of Unmanned Aerial Vehicles (UAV), sometimes called Unmanned Aircraft Systems (UAS), or drones, HSI capabilities that once required access to one of only a handful of very specialized and expensive sensor systems are now miniaturized and widely available commercially. Small compact imaging spectrometers (CIS) now on the market offer a number of hyperspectral imaging capabilities in terms of spectral range and sampling. The potential uses of HSI/CIS on UAVs/UASs seem limitless. However, the rapid expansion of unmanned aircraft and small hyperspectral sensor capabilities has created a number of questions related to technological, legal, and operational capabilities. Lightweight sensor systems suitable for UAV platforms are being advertised in the trade literature at an ever-expanding rate with no standardization of system performance specifications or terms of reference. To address this issue, both the U.S. Geological Survey and the National Institute of Standards and Technology are developing draft standards. This paper presents the outline of a combined USGS/NIST cooperative strategy to develop and test a characterization methodology to meet the needs of a new and expanding UAV/CIS/HSI user community.

  15. How to Directly Image a Habitable Planet Around Alpha Centauri with a 30-45 cm Space Telescope

    NASA Technical Reports Server (NTRS)

    Belikov, Ruslan; Bendek, Eduardo; Thomas, Sandrine; Males, Jared

    2015-01-01

    Several mission concepts are being studied to directly image planets around nearby stars. It is commonly thought that directly imaging a potentially habitable exoplanet around a Sun-like star requires space telescopes with apertures of at least 1 m. A notable exception to this is Alpha Centauri (A and B), which is an extreme outlier among FGKM stars in terms of apparent habitable zone size: the habitable zones are approximately 3x wider in apparent size than around any other FGKM star. This enables an approximately 30-45 cm visible light space telescope equipped with a modern high performance coronagraph or star shade to resolve the habitable zone at high contrast and directly image any potentially habitable planet that may exist in the system. The raw contrast requirements for such an instrument can be relaxed to 1e-8 if the mission spends 2 years collecting tens of thousands of images on the same target, enabling a factor of 500-1000 speckle suppression in post-processing using a new technique called Orbital Difference Imaging (ODI). The raw light leak from both stars is controllable with a special wavefront control algorithm known as Multi-Star Wavefront Control (MSWC), which independently suppresses diffraction and aberrations from both stars using independent modes on the deformable mirror. This paper will present an analysis of the challenges involved with direct imaging of Alpha Centauri with a small telescope and how the above technologies are used together to solve them. We also show an example of a small coronagraphic mission concept that takes advantage of this opportunity, called "ACESat: Alpha Centauri Exoplanet Satellite," submitted to NASA's Small Explorer (SMEX) program in December 2014.

  16. Imaging Tests for Lower Back Pain: When You Need Them -- and When You Don't

    MedlinePlus

    Imaging Tests for Lower-Back Pain: You probably do ... X-rays, CT scans, and MRIs are called imaging tests because they take pictures, or images, of ...

  17. Overview of LBTI: A Multipurpose Facility for High Spatial Resolution Observations

    NASA Technical Reports Server (NTRS)

    Hinz, P. M.; Defrere, D.; Skemer, A.; Bailey, V.; Stone, J.; Spalding, E.; Vaz, A.; Pinna, E.; Puglisi, A.; Esposito, S.

    2016-01-01

    The Large Binocular Telescope Interferometer (LBTI) is a high spatial resolution instrument developed for coherent imaging and nulling interferometry using the 14.4 m baseline of the 2x8.4 m LBT. The unique telescope design, comprising dual apertures on a common elevation-azimuth mount, enables a broad range of observing modes. The full system comprises dual adaptive optics systems, a near-infrared phasing camera, a 1-5 micrometer camera (called LMIRCam), and an 8-13 micrometer camera (called NOMIC). The key program for LBTI is the Hunt for Observable Signatures of Terrestrial planetary Systems (HOSTS), a survey using nulling interferometry to constrain the typical brightness of exozodiacal dust around nearby stars. Additional observations focus on the detection and characterization of giant planets in the thermal infrared, high spatial resolution imaging of complex scenes such as Jupiter's moon Io, planets forming in transition disks, and the structure of active Galactic Nuclei (AGN). Several instrumental upgrades are currently underway to improve and expand the capabilities of LBTI. These include: improving the performance and limiting magnitude of the parallel adaptive optics systems; quadrupling the field of view of LMIRcam (increasing to 20"x20"); adding an integral field spectrometry mode; and implementing a new algorithm for path length correction that accounts for dispersion due to atmospheric water vapor. We present the current architecture and performance of LBTI, as well as an overview of the upgrades.

  18. High bit depth infrared image compression via low bit depth codecs

    NASA Astrophysics Data System (ADS)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    2017-08-01

    Future infrared remote sensing systems, such as monitoring of the Earth's environment by satellites, infrastructure inspection by unmanned airborne vehicles, etc., will require 16 bit depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped with low power embedded platforms where image or video data is compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8-bit VPUs can provide advantages compared with using higher bit depth image compression directly. We propose to compress 16 bit depth images via 8 bit depth codecs in the following way. First, an input 16 bit depth image is mapped into 8 bit depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second one contains only the least significant bytes (LSB image). Then each image is compressed by an image or video codec with 8 bits per pixel input format. We analyze how the compression parameters for both MSB and LSB images should be chosen to provide the maximum objective quality for a given compression ratio. Finally, we apply the proposed infrared image compression method utilizing JPEG and H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with JPEG2000, JPEG-XT and H.265/HEVC codecs supporting direct compression of infrared images in 16 bit depth format. A preliminary result shows that two 8-bit H.264/AVC codecs can achieve results similar to a 16-bit HEVC codec.
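
    The byte-plane mapping described above is easy to make concrete. A minimal sketch, assuming NumPy arrays for the 16-bit frames:

      import numpy as np

      def split_msb_lsb(img16):
          # Map one 16-bit image to two 8-bit planes.
          msb = (img16 >> 8).astype(np.uint8)     # most significant bytes
          lsb = (img16 & 0xFF).astype(np.uint8)   # least significant bytes
          return msb, lsb

      def merge_msb_lsb(msb, lsb):
          # Inverse mapping back to a 16-bit image.
          return (msb.astype(np.uint16) << 8) | lsb.astype(np.uint16)

      img16 = np.random.default_rng(0).integers(0, 2 ** 16, (4, 4)).astype(np.uint16)
      msb, lsb = split_msb_lsb(img16)
      assert np.array_equal(merge_msb_lsb(msb, lsb), img16)

    The split/merge itself is lossless; overall quality is then governed by how the two 8-bit codecs are configured, which is the trade-off the paper analyzes.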

  19. NDSI products system based on Hadoop platform

    NASA Astrophysics Data System (ADS)

    Zhou, Yan; Jiang, He; Yang, Xiaoxia; Geng, Erhui

    2015-12-01

    Snow is a solid state of water resources on Earth and plays an important role in human life. Satellite remote sensing is well suited to snow extraction, with the advantages of periodicity, broad coverage, comprehensiveness, objectivity, and timeliness. With the continuous development of remote sensing technology, remote sensing data are trending toward multiple platforms, multiple sensors, and multiple viewing angles, and the demand for compute-intensive processing of remote sensing data is gradually increasing. However, current production systems for remote sensing products run in a serial mode, are used mostly by professional remote sensing researchers, and relatively few of them support automatic or semi-automatic production. Faced with massive remote sensing data, the traditional serial production system, with its low efficiency, has difficulty meeting the requirement of timely and efficient processing. To improve the production efficiency of NDSI products and meet the demand for timely and efficient processing of large-scale remote sensing data, this paper builds an NDSI product production system on the Hadoop platform; the system mainly comprises a remote sensing image management module, an NDSI production module, and a system service module. The main research contents and results are as follows. (1) The remote sensing image management module includes image import and image metadata management. It imports massive base IRS images and NDSI product images (the output of the system's production tasks) into the HDFS file system; at the same time, it reads the corresponding orbit row/column numbers, maximum/minimum longitude and latitude, product date, HDFS storage path, Hadoop task ID (for NDSI products), and other metadata, creates thumbnails, assigns a unique ID to each record, and imports the records into the base/product image metadata database. (2) The NDSI production module includes index calculation and production-task submission and monitoring. It reads the HDF images related to a production task as a byte stream and uses the Beam library to parse the byte stream into Product objects; it uses the MapReduce distributed framework to execute production tasks while monitoring task status; when a production task completes, it calls the remote sensing image management module to store the NDSI products. (3) The system service module provides both image search and NDSI product download. Given image metadata attributes described in JSON format, it returns the sequence IDs of matching images stored in the HDFS file system; for a given MapReduce task ID, it packages the NDSI products output by that task into a ZIP file and returns a download link. (4) System evaluation: massive remote sensing data were downloaded and processed with the system to produce NDSI products for performance testing, and the results show that the system has high extensibility, strong fault tolerance, fast production speed, and highly accurate image processing results.
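
    For reference, a hedged sketch of the per-pixel index the production module computes (the band names are assumptions; the record does not state which sensor bands are used):

      import numpy as np

      def ndsi(green, swir, eps=1e-10):
          # Normalized Difference Snow Index: (Green - SWIR) / (Green + SWIR).
          return (green - swir) / (green + swir + eps)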

  20. A system and methodology for high-content visual screening of individual intact living cells in suspension

    NASA Astrophysics Data System (ADS)

    Renaud, Olivier; Heintzmann, Rainer; Sáez-Cirión, Asier; Schnelle, Thomas; Mueller, Torsten; Shorte, Spencer

    2007-02-01

    Three dimensional imaging provides high-content information from living intact biology, and can serve as a visual screening cue. In the case of single cell imaging the current state of the art uses so-called "axial through-stacking". However, three-dimensional axial through-stacking requires that the object (i.e. a living cell) be adherently stabilized on an optically transparent surface, usually glass; evidently precluding use of cells in suspension. Aiming to overcome this limitation we present here the utility of dielectric field trapping of single cells in three-dimensional electrode cages. Our approach allows gentle and precise spatial orientation and vectored rotation of living, non-adherent cells in fluid suspension. Using various modes of widefield, and confocal microscope imaging we show how so-called "microrotation" can provide a unique and powerful method for multiple point-of-view (three-dimensional) interrogation of intact living biological micro-objects (e.g. single-cells, cell aggregates, and embryos). Further, we show how visual screening by micro-rotation imaging can be combined with micro-fluidic sorting, allowing selection of rare phenotype targets from small populations of cells in suspension, and subsequent one-step single cell cloning (with high-viability). Our methodology combining high-content 3D visual screening with one-step single cell cloning, will impact diverse paradigms, for example cytological and cytogenetic analysis on haematopoietic stem cells, blood cells including lymphocytes, and cancer cells.

  1. Sound imaging of nocturnal animal calls in their natural habitat.

    PubMed

    Mizumoto, Takeshi; Aihara, Ikkyu; Otsuka, Takuma; Takeda, Ryu; Aihara, Kazuyuki; Okuno, Hiroshi G

    2011-09-01

    We present a novel method for imaging acoustic communication between nocturnal animals. Investigating the spatio-temporal calling behavior of nocturnal animals, e.g., frogs and crickets, has been difficult because of the need to distinguish many animals' calls in noisy environments without being able to see them. Our method visualizes the spatial and temporal dynamics using dozens of sound-to-light conversion devices (called "Firefly") and an off-the-shelf video camera. The Firefly, which consists of a microphone and a light emitting diode, emits light when it captures nearby sound. Deploying dozens of Fireflies in a target area, we record calls of multiple individuals through the video camera. We conduct two experiments, one indoors and the other in the field, using Japanese tree frogs (Hyla japonica). The indoor experiment demonstrates that our method correctly visualizes Japanese tree frogs' calling behavior. It has confirmed the known behavior; two frogs call synchronously or in anti-phase synchronization. The field experiment (in a rice paddy where Japanese tree frogs live) also visualizes the same calling behavior to confirm anti-phase synchronization in the field. Experimental results confirm that our method can visualize the calling behavior of nocturnal animals in their natural habitat.

  2. NutriNet: A Deep Learning Food and Drink Image Recognition System for Dietary Assessment.

    PubMed

    Mezgec, Simon; Koroušić Seljak, Barbara

    2017-06-27

    Automatic food image recognition systems are alleviating the process of food-intake estimation and dietary assessment. However, due to the nature of food images, their recognition is a particularly challenging task, which is why traditional approaches in the field have achieved a low classification accuracy. Deep neural networks have outperformed such solutions, and we present a novel approach to the problem of food and drink image detection and recognition that uses a newly-defined deep convolutional neural network architecture, called NutriNet. This architecture was tuned on a recognition dataset containing 225,953 512 × 512 pixel images of 520 different food and drink items from a broad spectrum of food groups, on which we achieved a classification accuracy of 86.72%, along with an accuracy of 94.47% on a detection dataset containing 130,517 images. We also performed a real-world test on a dataset of self-acquired images, combined with images from Parkinson's disease patients, all taken using a smartphone camera, achieving a top-five accuracy of 55%, which is an encouraging result for real-world images. Additionally, we tested NutriNet on the University of Milano-Bicocca 2016 (UNIMIB2016) food image dataset, on which we improved upon the provided baseline recognition result. An online training component was implemented to continually fine-tune the food and drink recognition model on new images. The model is being used in practice as part of a mobile app for the dietary assessment of Parkinson's disease patients.

  3. DICOM: a standard for medical imaging

    NASA Astrophysics Data System (ADS)

    Horii, Steven C.; Bidgood, W. Dean

    1993-01-01

    Since 1983, the American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA) have been engaged in developing standards related to medical imaging. This alliance of users and manufacturers was formed to meet the needs of the medical imaging community as its use of digital imaging technology increased. The development of electronic picture archiving and communications systems (PACS), which could connect a number of medical imaging devices together in a network, led to the need for a standard interface and data structure for use on imaging equipment. Since medical image files tend to be very large and include much text information along with the image, the need for a fast, flexible, and extensible standard was quickly established. The ACR-NEMA Digital Imaging and Communications Standards Committee developed a standard which met these needs. The standard (ACR-NEMA 300-1988) was first published in 1985 and revised in 1988. It is increasingly available from equipment manufacturers. The current work of the ACR- NEMA Committee has been to extend the standard to incorporate direct network connection features, and build on standards work done by the International Standards Organization in its Open Systems Interconnection series. This new standard, called Digital Imaging and Communication in Medicine (DICOM), follows an object-oriented design methodology and makes use of as many existing internationally accepted standards as possible. This paper gives a brief overview of the requirements for communications standards in medical imaging, a history of the ACR-NEMA effort and what it has produced, and a description of the DICOM standard.
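
    A DICOM file is an object-oriented data set of tagged elements (patient, study, and image attributes alongside the pixel data). As a hedged illustration, the sketch below reads such a file with the third-party pydicom library (not part of the standard itself); the file name is a placeholder.

      # Minimal sketch: inspecting a DICOM file's data elements with the pydicom
      # library (a third-party reader, not part of the DICOM standard itself).
      import pydicom

      ds = pydicom.dcmread("example.dcm")      # placeholder file; parses the DICOM data set

      # Data elements can be addressed by keyword or by (group, element) tag.
      print(ds.Modality)                       # e.g. "CT", "MR", "US"
      print(ds.SOPClassUID)                    # identifies the information object class
      print(ds[0x0008, 0x0060].value)          # same Modality element, addressed by tag

      pixels = ds.pixel_array                  # decoded image matrix (requires NumPy and pixel data)
      print(pixels.shape, pixels.dtype)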

  4. Simulation of the Inferior Mirage

    ERIC Educational Resources Information Center

    Branca, Mario

    2010-01-01

    A mirage can occur when a continuous variation in the refractive index of the air causes light rays to follow a curved path. As a result, the image we see is displaced from the location of the object. If the image appears higher in the air than the object, it is called a "superior" mirage, while if it appears lower it is called an "inferior"…

  5. Technical design and system implementation of region-line primitive association framework

    NASA Astrophysics Data System (ADS)

    Wang, Min; Xing, Jinjin; Wang, Jie; Lv, Guonian

    2017-08-01

    Apart from regions, image edge lines are an important information source, and they deserve more attention in object-based image analysis (OBIA) than they currently receive. In the region-line primitive association framework (RLPAF), we promote straight-edge lines to line primitives to achieve a more powerful OBIA. Along with regions, straight lines become basic units for subsequent extraction and analysis of OBIA features. This study develops a new software system called remote-sensing knowledge finder (RSFinder) to implement RLPAF for engineering application purposes. This paper introduces the extended technical framework, a comprehensively designed feature set, the key technology, and the software implementation. To our knowledge, RSFinder is the world's first OBIA system based on two types of primitives, namely, regions and lines. It is fundamentally different from other well-known region-only-based OBIA systems, such as eCognition and the ENVI feature extraction module. This paper provides an important reference for the development of similarly structured OBIA systems and line-involved extraction algorithms for remote sensing information.
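
    RSFinder's own line-primitive extraction is not detailed in the abstract, so the sketch below uses a generic Canny-plus-probabilistic-Hough pipeline (OpenCV) merely to illustrate how straight-edge line primitives can be pulled from an image for subsequent region-line analysis; the file name is a placeholder.

      # Minimal sketch of straight-edge line primitive extraction (a generic
      # Canny + probabilistic Hough pipeline, not RSFinder's own algorithm).
      import cv2
      import numpy as np

      image = cv2.imread("scene.tif", cv2.IMREAD_GRAYSCALE)   # placeholder file name
      edges = cv2.Canny(image, 50, 150)

      # Each detected segment is returned as (x1, y1, x2, y2).
      segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                                 minLineLength=30, maxLineGap=5)

      if segments is not None:
          for (x1, y1, x2, y2) in segments[:, 0]:
              length = np.hypot(x2 - x1, y2 - y1)
              angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
              print(f"segment length={length:.1f}px angle={angle:.1f}deg")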

  6. Programmable spectral engine design of hyperspectral image projectors based on digital micro-mirror device (DMD)

    NASA Astrophysics Data System (ADS)

    Wang, Xicheng; Gao, Jiaobo; Wu, Jianghui; Li, Jianjun; Cheng, Hongliang

    2017-02-01

    Recently, hyperspectral image projectors (HIP) have been developed in the field of remote sensing. Because of its value for system-level validation, target detection, and hyperspectral image calibration, HIP has great potential for military, medical, and commercial applications, among others. HIP is based on digital micro-mirror device (DMD) and projection technology and is capable of projecting arbitrary programmable spectra (controlled by a PC) into each pixel of the instrument under test (IUT), such that the projected image can simulate the realistic scenes a hyperspectral imager would measure during its use, enabling system-level performance testing and validation. In this paper, we build a visible hyperspectral image projector, also called a visible target simulator, with two DMDs: the first DMD is used to produce the selected monochromatic light over the wavelength range of 410 to 720 nm, and this light is relayed to the second DMD. A computer then loads an image of a realistic scene onto the second DMD, so that the target and background can be projected by the second DMD using the selected monochromatic light. The target conditions can thus be simulated, and the experiment can be controlled and repeated in the laboratory, allowing the detector instrument to be tested there. Here we focus on the spectral engine design, including the optical system, the DMD programmable spectrum, and the spectral resolution of the selected spectrum. Details are presented.

  7. Design and evaluation of web-based image transmission and display with different protocols

    NASA Astrophysics Data System (ADS)

    Tan, Bin; Chen, Kuangyi; Zheng, Xichuan; Zhang, Jianguo

    2011-03-01

    There are many Web-based image access technologies used in the medical imaging area, such as component-based (ActiveX control) thick-client Web display, zero-footprint thin-client Web viewers (also called server-side processing Web viewers), Flash Rich Internet Application (RIA), or HTML5-based Web display. Different Web display methods perform differently in different network environments. In this presentation, we evaluate two Web-based image display systems we developed. The first one is used for thin-client Web display: it works between a PACS Web server with a WADO interface and a thin client, with the PACS Web server providing JPEG-format images to HTML pages. The second one is for thick-client Web display: it works between a PACS Web server with a WADO interface and a thick client running in a browser containing an ActiveX control, a Flash RIA program, or HTML5 scripts, with the PACS Web server providing native DICOM-format images or a JPIP stream for these clients.

  8. ADMultiImg: a novel missing modality transfer learning based CAD system for diagnosis of MCI due to AD using incomplete multi-modality imaging data

    NASA Astrophysics Data System (ADS)

    Liu, Xiaonan; Chen, Kewei; Wu, Teresa; Weidman, David; Lure, Fleming; Li, Jing

    2018-02-01

    Alzheimer's Disease (AD) is the most common cause of dementia and currently has no cure. Treatments targeting early stages of AD such as Mild Cognitive Impairment (MCI) may be most effective at decelerating AD, thus attracting increasing attention. However, MCI has substantial heterogeneity in that it can be caused by various underlying conditions, not only AD. To detect MCI due to AD, NIA-AA published updated consensus criteria in 2011, in which the use of multi-modality images was highlighted as one of the most promising methods. It is of great interest to develop a CAD system based on automatic, quantitative analysis of multi-modality images and machine learning algorithms to help physicians more adequately diagnose MCI due to AD. The challenge, however, is that multi-modality images are not universally available for many patients due to cost, access, safety, and lack of consent. We developed a novel Missing Modality Transfer Learning (MMTL) algorithm capable of utilizing whatever imaging modalities are available for an MCI patient to diagnose the patient's likelihood of MCI due to AD. Furthermore, we integrated MMTL with radiomics steps including image processing, feature extraction, and feature screening, and a post-processing step for uncertainty quantification (UQ), and developed a CAD system called "ADMultiImg" to assist clinical diagnosis of MCI due to AD using multi-modality images together with patient demographic and genetic information. Tested on ADNI data, our system can generate a diagnosis with high accuracy even for patients with only partially available image modalities (AUC=0.94), and therefore may have broad clinical utility.

  9. Quickbird Satellite in-orbit Modulation Transfer Function (MTF) Measurement Using Edge, Pulse and Impulse Methods for Summer 2003

    NASA Technical Reports Server (NTRS)

    Helder, Dennis; Choi, Taeyoung; Rangaswamy, Manjunath

    2005-01-01

    The spatial characteristics of an imaging system cannot be expressed by a single number or simple statement. However, the Modulation Transfer Function (MTF) is one approach to measure the spatial quality of an imaging system. Basically, MTF is the normalized spatial frequency response of an imaging system. The frequency response of the system can be evaluated by applying an impulse input. The resulting impulse response is termed the Point Spread Function (PSF). This function is a measure of the amount of blurring present in the imaging system and is itself a useful measure of spatial quality. An underlying assumption is that the imaging system is linear and shift-independent. The Fourier transform of the PSF is called the Optical Transfer Function (OTF) and the normalized magnitude of the OTF is the MTF. In addition to using an impulse input, a knife-edge technique has also been used in this project. The sharp edge exercises an imaging system at all spatial frequencies. The profile of an edge response from an imaging system is called an Edge Spread Function (ESF). Differentiation of the ESF results in a one-dimensional version of the Point Spread Function (PSF). Finally, MTF can be calculated through use of the Fourier transform of the PSF as stated previously. Every image includes noise to some degree, which makes MTF or PSF estimation more difficult. To avoid the noise effects, many MTF estimation approaches use smooth numerical models. Historically, Gaussian models and Fermi functions were applied to reduce the random noise in the output profiles. The pulse-input method was used to measure the MTF of the Landsat Thematic Mapper (TM) using 8th order even functions over the San Mateo Bridge in San Francisco, California. Because the bridge width was smaller than the 30-meter ground sample distance (GSD) of the TM, the Nyquist frequency was located before the first zero-crossing point of the sinc function from the Fourier transformation of the bridge pulse. To avoid the zero-crossing points in the frequency domain from a pulse, the pulse width should be less than the width of two pixels (or 2 GSD's), but the short extent of the pulse results in a poor signal-to-noise ratio. Similarly, for a high-resolution satellite imaging system such as Quickbird, the input pulse width was critical because of the zero crossing points and noise present in the background area. It is important, therefore, that the width of the input pulse be appropriately sized. Finally, the MTF was calculated by taking the ratio between the Fourier transform of the output and the Fourier transform of the input. Regardless of whether the edge, pulse, or impulse target method is used, the orientation of the targets is critical in order to obtain uniformly spaced sub-pixel data points. When the orientation is incorrect, sample data points tend to be located in clusters that result in poor reconstruction of the edge or pulse profiles. Thus, a compromise orientation must be selected so that all spectral bands can be accommodated. This report continues by outlining the objectives in Section 2, procedures followed in Section 3, descriptions of the field campaigns in Section 4, results in Section 5, and a brief summary in Section 6.
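
    The edge-method chain summarized above (ESF, differentiation to a spread function, Fourier transform, normalization to the MTF) can be illustrated numerically. The sketch below uses a synthetic, noiseless, oversampled edge; real satellite work additionally needs sub-pixel sample alignment and the noise-suppressing model fits mentioned in the abstract.

      # Minimal sketch of the edge-method MTF chain: ESF -> LSF (derivative) -> MTF
      # (normalized Fourier magnitude). Uses a synthetic noiseless edge.
      import numpy as np

      dx = 0.1                                   # sample spacing in pixels (oversampled edge)
      x = np.arange(-20, 20, dx)
      esf = 0.5 * (1 + np.tanh(x / 1.2))         # synthetic edge spread function

      lsf = np.gradient(esf, dx)                 # line spread function (1-D PSF slice)
      lsf /= lsf.sum()                           # normalize area so that MTF(0) = 1

      mtf = np.abs(np.fft.rfft(lsf))             # OTF magnitude
      freqs = np.fft.rfftfreq(lsf.size, d=dx)    # cycles per pixel

      nyquist = 0.5                              # cycles/pixel at the native sampling
      mtf_at_nyquist = np.interp(nyquist, freqs, mtf)
      print(f"MTF at Nyquist: {mtf_at_nyquist:.3f}")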

  10. Computer-aided diagnostic system for detection of Hashimoto thyroiditis on ultrasound images from a Polish population.

    PubMed

    Acharya, U Rajendra; Sree, S Vinitha; Krishnan, M Muthu Rama; Molinari, Filippo; Zieleźnik, Witold; Bardales, Ricardo H; Witkowska, Agnieszka; Suri, Jasjit S

    2014-02-01

    Computer-aided diagnostic (CAD) techniques aid physicians in better diagnosis of diseases by extracting objective and accurate diagnostic information from medical data. Hashimoto thyroiditis is the most common type of inflammation of the thyroid gland. The inflammation changes the structure of the thyroid tissue, and these changes are reflected as echogenic changes on ultrasound images. In this work, we propose a novel CAD system (a class of systems called ThyroScan) that extracts textural features from a thyroid sonogram and uses them to aid in the detection of Hashimoto thyroiditis. In this paradigm, we extracted grayscale features based on stationary wavelet transform from 232 normal and 294 Hashimoto thyroiditis-affected thyroid ultrasound images obtained from a Polish population. Significant features were selected using a Student t test. The resulting feature vectors were used to build and evaluate the following 4 classifiers using a 10-fold stratified cross-validation technique: support vector machine, decision tree, fuzzy classifier, and K-nearest neighbor. Using 7 significant features that characterized the textural changes in the images, the fuzzy classifier had the highest classification accuracy of 84.6%, sensitivity of 82.8%, specificity of 87.0%, and a positive predictive value of 88.9%. The proposed ThyroScan CAD system uses novel features to noninvasively detect the presence of Hashimoto thyroiditis on ultrasound images. Compared to manual interpretations of ultrasound images, the CAD system offers a more objective interpretation of the nature of the thyroid. The preliminary results presented in this work indicate the possibility of using such a CAD system in a clinical setting after evaluating it with larger databases in multicenter clinical trials.
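
    The screening and classification stage described above (Student t-test feature selection followed by 10-fold stratified cross-validation) can be sketched with SciPy and scikit-learn. The feature matrix below is a random placeholder standing in for the paper's stationary-wavelet features.

      # Minimal sketch of the screening/classification stage: t-test feature
      # selection followed by 10-fold stratified cross-validation.
      import numpy as np
      from scipy.stats import ttest_ind
      from sklearn.model_selection import StratifiedKFold, cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(526, 24))             # 232 normal + 294 affected, 24 candidate features
      y = np.r_[np.zeros(232), np.ones(294)]

      # Keep features whose class means differ significantly (two-sample t-test).
      _, p_values = ttest_ind(X[y == 0], X[y == 1], axis=0)
      selected = p_values < 0.05
      X_sel = X[:, selected] if selected.any() else X

      cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
      scores = cross_val_score(SVC(kernel="rbf"), X_sel, y, cv=cv)
      print(f"{selected.sum()} features kept, CV accuracy {scores.mean():.3f}")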

  11. Applications and challenges of digital pathology and whole slide imaging.

    PubMed

    Higgins, C

    2015-07-01

    Virtual microscopy is a method for digitizing images of tissue on glass slides and using a computer to view, navigate, change magnification, focus and mark areas of interest. Virtual microscope systems (also called digital pathology or whole slide imaging systems) offer several advantages for biological scientists who use slides as part of their general, pharmaceutical, biotechnology or clinical research. The systems usually are based on one of two methodologies: area scanning or line scanning. Virtual microscope systems enable automatic sample detection, virtual-Z acquisition and creation of focal maps. Virtual slides are layered with multiple resolutions at each location, including the highest resolution needed to allow more detailed review of specific regions of interest. Scans may be acquired at 2, 10, 20, 40, 60 and 100 × or a combination of magnifications to highlight important detail. Digital microscopy starts when a slide collection is put into an automated or manual scanning system. The original slides are archived, then a server allows users to review multilayer digital images of the captured slides either by a closed network or by the internet. One challenge for adopting the technology is the lack of a universally accepted file format for virtual slides. Additional challenges include maintaining focus in an uneven sample, detecting specimens accurately, maximizing color fidelity with optimal brightness and contrast, optimizing resolution and keeping the images artifact-free. There are several manufacturers in the field and each has not only its own approach to these issues, but also its own image analysis software, which provides many options for users to enhance the speed, quality and accuracy of their process through virtual microscopy. Virtual microscope systems are widely used and are trusted to provide high quality solutions for teleconsultation, education, quality control, archiving, veterinary medicine, research and other fields.

  12. Juno on Jupiter Doorstep

    NASA Image and Video Library

    2016-06-24

    NASA's Juno spacecraft obtained this color view on June 21, 2016, at a distance of 6.8 million miles (10.9 million kilometers) from Jupiter. As Juno makes its initial approach, the giant planet's four largest moons -- Io, Europa, Ganymede and Callisto -- are visible, and the alternating light and dark bands of the planet's clouds are just beginning to come into view. Juno is approaching over Jupiter's north pole, affording the spacecraft a unique perspective on the Jupiter system. Previous missions that imaged Jupiter on approach saw the system from much lower latitudes, closer to the planet's equator. The scene was captured by the mission's imaging camera, called JunoCam, which is designed to acquire high resolution views of features in Jupiter's atmosphere from very close to the planet. http://photojournal.jpl.nasa.gov/catalog/PIA20701

  13. Forward light scatter analysis of the eye in a spatially-resolved double-pass optical system.

    PubMed

    Nam, Jayoung; Thibos, Larry N; Bradley, Arthur; Himebaugh, Nikole; Liu, Haixia

    2011-04-11

    An optical analysis is developed to separate forward light scatter of the human eye from the conventional wavefront aberrations in a double pass optical system. To quantify the separate contributions made by these micro- and macro-aberrations, respectively, to the spot image blur in the Shack-Hartmann aberrometer, we develop a metric called radial variance for spot blur. We prove an additivity property for radial variance that allows us to distinguish between spot blurs from macro-aberrations and micro-aberrations. When the method is applied to tear break-up in the human eye, we find that micro-aberrations in the second pass account for about 87% of the double pass image blur in the Shack-Hartmann wavefront aberrometer under our experimental conditions. © 2011 Optical Society of America
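
    The paper's exact definition of radial variance is not reproduced in the abstract, so the sketch below uses one plausible formulation, the intensity-weighted variance of radial distance about the spot centroid, applied to a synthetic Shack-Hartmann-style spot.

      # Sketch of a radial-variance-style spot blur metric: intensity-weighted
      # variance of radial distance from the spot centroid (one plausible
      # formulation, not necessarily the paper's exact definition).
      import numpy as np

      def radial_variance(spot):
          spot = spot.astype(float)
          total = spot.sum()
          yy, xx = np.indices(spot.shape)
          cy = (yy * spot).sum() / total          # intensity-weighted centroid
          cx = (xx * spot).sum() / total
          r2 = (yy - cy) ** 2 + (xx - cx) ** 2    # squared radial distance per pixel
          return (r2 * spot).sum() / total        # weighted mean of r^2 about the centroid

      # Synthetic Gaussian spot: this metric approaches 2 * sigma^2.
      y, x = np.mgrid[:64, :64]
      spot = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2 * 3.0 ** 2))
      print(radial_variance(spot))                # ~18 for sigma = 3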

  14. Feasibility study of imaging spectroscopy to monitor the quality of online welding.

    PubMed

    Mirapeix, Jesús; García-Allende, P Beatriz; Cobo, Adolfo; Conde, Olga M; López-Higuera, José M

    2009-08-20

    An online welding quality system based on the use of imaging spectroscopy is proposed and discussed. Plasma optical spectroscopy has already been successfully applied in this context by establishing a direct correlation between some spectroscopic parameters, e.g., the plasma electronic temperature and the resulting seam quality. Given that the use of the so-called hyperspectral devices provides both spatial and spectral information, we propose their use for the particular case of arc welding quality monitoring in an attempt to determine whether this technique would be suitable for this industrial situation. Experimental welding tests are presented, and the ability of the proposed solution to identify simulated defects is proved. Detailed spatial analyses suggest that this additional dimension can be used to improve the performance of the entire system.

  15. An Approach Using Parallel Architecture to Storage DICOM Images in Distributed File System

    NASA Astrophysics Data System (ADS)

    Soares, Tiago S.; Prado, Thiago C.; Dantas, M. A. R.; de Macedo, Douglas D. J.; Bauer, Michael A.

    2012-02-01

    Telemedicine is a very important area of the medical field that is expanding daily, driven by many researchers interested in improving medical applications. In Brazil, a project started in 2005 in the State of Santa Catarina developed a server called CyclopsDCMServer, whose purpose is to adopt HDF for the manipulation of medical (DICOM) images on a distributed file system. Since then, several research efforts have been initiated in order to seek better performance. Our approach adds a parallel implementation of the server's I/O operations, since HDF version 5 provides a feature essential to our work: support for parallel I/O based upon the MPI paradigm. Early experiments using four parallel nodes provide good performance when compared to the serial HDF implementation in CyclopsDCMServer.
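
    The CyclopsDCMServer implementation itself is not shown here; the sketch below only illustrates the general pattern of HDF5 parallel I/O through the MPI-IO driver with h5py and mpi4py (this requires an MPI-enabled h5py build), with each rank writing its own slice of a shared dataset.

      # Minimal sketch of parallel HDF5 I/O via the MPI-IO driver (general
      # pattern, not the CyclopsDCMServer implementation). Requires h5py built
      # with parallel support; run e.g. with `mpiexec -n 4 python this_script.py`.
      from mpi4py import MPI
      import h5py
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, nprocs = comm.Get_rank(), comm.Get_size()

      rows_per_rank = 256                        # each rank contributes one image tile
      with h5py.File("images.h5", "w", driver="mpio", comm=comm) as f:
          # Dataset creation is collective: every rank makes the same call.
          dset = f.create_dataset("pixels", (nprocs * rows_per_rank, 512), dtype="uint16")
          start = rank * rows_per_rank           # this rank's slice of the dataset
          dset[start:start + rows_per_rank, :] = np.full((rows_per_rank, 512), rank,
                                                         dtype="uint16")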

  16. Invisible Cirrus Clouds

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The Moderate-resolution Imaging Spectroradiometer's (MODIS') cloud detection capability is so sensitive that it can detect clouds that would be indistinguishable to the human eye. This pair of images highlights MODIS' ability to detect what scientists call 'sub-visible cirrus.' The image on top shows the scene using data collected in the visible part of the electromagnetic spectrum-the part our eyes can see. Clouds are apparent in the center and lower right of the image, while the rest of the image appears to be relatively clear. However, data collected at 1.38um (lower image) show that a thick layer of previously undetected cirrus clouds obscures the entire scene. These kinds of cirrus are called 'sub-visible' because they can't be detected using only visible light. MODIS' 1.38um channel detects electromagnetic radiation in the infrared region of the spectrum. These images were made from data collected on April 4, 2000. Image courtesy Mark Gray, MODIS Atmosphere Team

  17. Venus - Lakshmi Planum

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This image is a full-resolution mosaic of several Magellan images and is centered at 61 degrees north latitude and 341 degrees east longitude. The image is 250 kilometers wide (150 miles). The radar smooth region in the northern part of the image is Lakshmi Planum, a high plateau region roughly 3.5 kilometers (2.2 miles) above the mean planetary radius. Lakshmi Planum is ringed by intensely deformed terrain, some of which is shown in the southern portion of the image and is called Clotho Tessera. The 64-kilometer (40 mile) diameter circular feature in the image is a depression called Siddons and may be a volcanic caldera. This view is supported by the collapsed lava tubes surrounding the feature. By carefully studying this and other surrounding images scientists hope to discover what tectonic and volcanic processes formed this complex region. The solid black parts of the image represent data gaps that may be filled in by the Magellan extended mission.

  18. SERODS optical data storage with parallel signal transfer

    DOEpatents

    Vo-Dinh, Tuan

    2003-09-02

    Surface-enhanced Raman optical data storage (SERODS) systems having increased reading and writing speeds, that is, increased data transfer rates, are disclosed. In the various SERODS read and write systems, the surface-enhanced Raman scattering (SERS) data is written and read using a two-dimensional process called parallel signal transfer (PST). The various embodiments utilize laser light beam excitation of the SERODS medium, optical filtering, beam imaging, and two-dimensional light detection. Two- and three-dimensional SERODS media are utilized. The SERODS write systems employ either a different laser or a different level of laser power.

  19. SERODS optical data storage with parallel signal transfer

    DOEpatents

    Vo-Dinh, Tuan

    2003-06-24

    Surface-enhanced Raman optical data storage (SERODS) systems having increased reading and writing speeds, that is, increased data transfer rates, are disclosed. In the various SERODS read and write systems, the surface-enhanced Raman scattering (SERS) data is written and read using a two-dimensional process called parallel signal transfer (PST). The various embodiments utilize laser light beam excitation of the SERODS medium, optical filtering, beam imaging, and two-dimensional light detection. Two- and three-dimensional SERODS media are utilized. The SERODS write systems employ either a different laser or a different level of laser power.

  20. A digital-signal-processor-based optical tomographic system for dynamic imaging of joint diseases

    NASA Astrophysics Data System (ADS)

    Lasker, Joseph M.

    Over the last decade, optical tomography (OT) has emerged as a viable biomedical imaging modality. Various imaging systems have been developed that are employed in preclinical as well as clinical studies, mostly targeting breast imaging, brain imaging, and cancer-related studies. Of particular interest are so-called dynamic imaging studies, where one attempts to image changes in optical properties and/or physiological parameters as they occur during a system perturbation. To successfully perform dynamic imaging studies, great effort is put towards system development that offers increasingly enhanced signal-to-noise performance at ever shorter data acquisition times, thus capturing high fidelity tomographic data within narrower time periods. Towards this goal, I have developed in this thesis a dynamic optical tomography system that is, unlike currently available analog instrumentation, based on digital data acquisition and filtering techniques. At the core of this instrument is a digital signal processor (DSP) that collects, collates, and processes the digitized data set. Complementary protocols between the DSP and a complex programmable logic device synchronize the sampling process and organize data flow. Instrument control is implemented through a comprehensive graphical user interface which integrates automated calibration, data acquisition, and signal post-processing. Real-time data is generated at frame rates as high as 140 Hz. An extensive dynamic range (~190 dB) accommodates a wide scope of measurement geometries and tissue types. Performance analysis demonstrates very low system noise (~1 pW rms noise equivalent power), excellent signal precision (~0.04%-0.2%) and long term system stability (~1% over 40 min). Experiments on tissue phantoms validate spatial and temporal accuracy of the system. As a potential new application of dynamic optical imaging, I present the first use of this method to exploit vascular hemodynamics as a means of characterizing joint diseases, especially effects of rheumatoid arthritis (RA) in the proximal interphalangeal finger joints. Using a dual-wavelength tomographic imaging system and a previously implemented reconstruction scheme, I have performed initial dynamic imaging case studies on healthy volunteers and patients diagnosed with RA. These studies support our hypothesis that differences in the vascular and metabolic reactivity exist between affected and unaffected joints and can be used for diagnostic purposes.

  1. Function representation with circle inversion map systems

    NASA Astrophysics Data System (ADS)

    Boreland, Bryson; Kunze, Herb

    2017-01-01

    The fractals literature develops the now well-known concept of local iterated function systems (using affine maps) with grey-level maps (LIFSM) as an approach to function representation in terms of the associated fixed point of the so-called fractal transform. While originally explored as a method to achieve signal (and 2-D image) compression, more recent work has explored various aspects of signal and image processing using this machinery. In this paper, we develop a similar framework for function representation using circle inversion map systems. Given a circle C with centre o and radius r, inversion with respect to C transforms a point p to the point p′, such that p and p′ lie on the same radial half-line from o and d(o, p)·d(o, p′) = r², where d is Euclidean distance. We demonstrate the results with an example.
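
    A minimal sketch of the inversion map defined above: given a circle with centre o and radius r, a point p maps to the point p′ on the same radial half-line with d(o, p)·d(o, p′) = r².

      # Minimal sketch of inversion in a circle with centre o and radius r:
      # p maps to p' on the same radial half-line with d(o, p) * d(o, p') = r**2.
      import numpy as np

      def invert(p, o, r):
          p, o = np.asarray(p, float), np.asarray(o, float)
          v = p - o
          d2 = np.dot(v, v)                 # squared distance d(o, p)^2 (p != o assumed)
          return o + (r ** 2 / d2) * v      # rescales |v| to r^2 / d(o, p)

      o, r = np.array([0.0, 0.0]), 1.0
      p = np.array([2.0, 0.0])
      p_inv = invert(p, o, r)
      print(p_inv)                                              # [0.5 0. ]
      print(np.linalg.norm(p - o) * np.linalg.norm(p_inv - o))  # = r**2 = 1.0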

  2. Autostereoscopic 3D display system with dynamic fusion of the viewing zone under eye tracking: principles, setup, and evaluation [Invited].

    PubMed

    Yoon, Ki-Hyuk; Kang, Min-Koo; Lee, Hwasun; Kim, Sung-Kyu

    2018-01-01

    We study optical technologies for viewer-tracked autostereoscopic 3D display (VTA3D), which provides improved 3D image quality and extended viewing range. In particular, we utilize a technique-the so-called dynamic fusion of viewing zone (DFVZ)-for each 3D optical line to realize image quality equivalent to that achievable at optimal viewing distance, even when a viewer is moving in a depth direction. In addition, we examine quantitative properties of viewing zones provided by the VTA3D system that adopted DFVZ, revealing that the optimal viewing zone can be formed at viewer position. Last, we show that the comfort zone is extended due to DFVZ. This is demonstrated by a viewer's subjective evaluation of the 3D display system that employs both multiview autostereoscopic 3D display and DFVZ.

  3. Cognition-based development and evaluation of ergonomic user interfaces for medical image processing and archiving systems.

    PubMed

    Demiris, A M; Meinzer, H P

    1997-01-01

    Whether or not a computerized system enhances the conditions of work in the application domain depends very much on the user interface. Graphical user interfaces seem to attract the interest of users but often ignore some basic rules of visual information processing, thus leading to systems which are difficult to use, lowering productivity and increasing working stress (cognitive and work load). In this work we present some fundamental ergonomic considerations and their application to the medical image processing and archiving domain. We introduce the extensions to an existing concept needed to control and guide the development of GUIs with respect to domain-specific ergonomics. The suggested concept, called Model-View-Controller Constraints (MVCC), can be used to programmatically implement ergonomic constraints, and thus has some advantages over written style guides. We conclude with the presentation of existing norms and methods to evaluate user interfaces.

  4. New imaging algorithm in diffusion tomography

    NASA Astrophysics Data System (ADS)

    Klibanov, Michael V.; Lucas, Thomas R.; Frank, Robert M.

    1997-08-01

    A novel imaging algorithm for diffusion/optical tomography is presented for the case of the time dependent diffusion equation. Numerical tests are conducted for ranges of parameters realistic for applications to early breast cancer diagnosis using ultrafast laser pulses. This is a perturbation-like method which works for both homogeneous and heterogeneous background media. Its main innovation lies in a new approach for a novel linearized problem (LP). Such an LP is derived and reduced to a boundary value problem for a coupled system of elliptic partial differential equations. As is well known, the solution of such a system amounts to the factorization of well-conditioned, sparse matrices with few non-zero entries clustered along the diagonal, which can be done very rapidly. Thus, the main advantages of this technique are that it is fast and accurate. The authors call this approach the elliptic systems method (ESM). The ESM can be extended for other data collection schemes.

  5. Imaging in Central Nervous System Drug Discovery.

    PubMed

    Gunn, Roger N; Rabiner, Eugenii A

    2017-01-01

    The discovery and development of central nervous system (CNS) drugs is an extremely challenging process requiring substantial resources, long timelines, and high costs. The high probability of failure leads to high levels of risk. Over the past couple of decades, PET imaging has become a central component of the CNS drug-development process, enabling decision-making in phase I studies, where early discharge of risk provides increased confidence to progress a candidate to more costly later phase testing at the right dose level or, alternatively, to kill a compound through failure to meet key criteria. The so-called "3 pillars" of drug survival, namely tissue exposure, target engagement, and pharmacologic activity, are particularly well suited for evaluation by PET imaging. This review introduces the process of CNS drug development before considering how PET imaging of the "3 pillars" has advanced to provide valuable tools for decision-making on the critical path of CNS drug development. Finally, we review the advances in PET science of biomarker development and analysis that enable sophisticated drug-development studies in man. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Telemedicine in acute plastic surgical trauma and burns.

    PubMed Central

    Jones, S. M.; Milroy, C.; Pickford, M. A.

    2004-01-01

    BACKGROUND: Telemedicine is a relatively new development within the UK, but is increasingly useful in many areas of medicine including plastic surgery. Plastic surgery centres often work on a hub-and-spoke basis with many district hospitals referring to one tertiary centre. The Queen Victoria Hospital is one such centre, receiving calls from more than 28 hospitals in the Southeast of England, resulting in approximately 20 referrals a day. OBJECTIVE: A telemedicine system was developed to improve trauma management. This study was designed to establish whether digital images were sufficiently accurate to aid decision-making. A store-and-forward telemedicine system was devised and the images of 150 trauma referrals were evaluated in terms of injury severity and operative priority by each member of the plastic surgical team. RESULTS: Correlation scores for assessed images were high. Accuracy of the "transmitted image" in comparison to the injury on examination scored > 97%. Operative priority scores tended to be higher than injury severity. CONCLUSIONS: Telemedicine is an accurate method by which to transfer information on plastic surgical trauma including burns. PMID:15239862

  7. SparCLeS: dynamic l₁ sparse classifiers with level sets for robust beard/moustache detection and segmentation.

    PubMed

    Le, T Hoang Ngan; Luu, Khoa; Savvides, Marios

    2013-08-01

    Robust facial hair detection and segmentation is a highly valued soft biometric attribute for carrying out forensic facial analysis. In this paper, we propose a novel and fully automatic system, called SparCLeS, for beard/moustache detection and segmentation in challenging facial images. SparCLeS uses the multiscale self-quotient (MSQ) algorithm to preprocess facial images and deal with illumination variation. Histogram of oriented gradients (HOG) features are extracted from the preprocessed images and a dynamic sparse classifier is built using these features to classify a facial region as either containing skin or facial hair. A level set based approach, which makes use of the advantages of both global and local information, is then used to segment the regions of a face containing facial hair. Experimental results demonstrate the effectiveness of our proposed system in detecting and segmenting facial hair regions in images drawn from three databases, i.e., the NIST Multiple Biometric Grand Challenge (MBGC) still face database, the NIST Color Facial Recognition Technology FERET database, and the Labeled Faces in the Wild (LFW) database.
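
    Only the HOG feature-extraction step is sketched below (with scikit-image, on a random placeholder patch); the MSQ preprocessing, dynamic sparse classifier, and level-set segmentation that make up the rest of SparCLeS are not reproduced.

      # Minimal sketch of the HOG feature-extraction step only, on a placeholder
      # patch; the resulting descriptor would feed a skin vs. facial-hair classifier.
      import numpy as np
      from skimage.feature import hog

      patch = np.random.default_rng(0).random((64, 64))   # stand-in for a facial region

      features = hog(patch,
                     orientations=9,
                     pixels_per_cell=(8, 8),
                     cells_per_block=(2, 2),
                     block_norm="L2-Hys")

      print(features.shape)    # (1764,) for a 64x64 patch with these settings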

  8. CARES: Completely Automated Robust Edge Snapper for carotid ultrasound IMT measurement on a multi-institutional database of 300 images: a two stage system combining an intensity-based feature approach with first order absolute moments

    NASA Astrophysics Data System (ADS)

    Molinari, Filippo; Acharya, Rajendra; Zeng, Guang; Suri, Jasjit S.

    2011-03-01

    The carotid intima-media thickness (IMT) is the most widely used marker for the progression of atherosclerosis and the onset of cardiovascular diseases. Computer-aided measurements improve accuracy, but usually require user interaction. In this paper we characterized a new and completely automated technique for carotid segmentation and IMT measurement based on the merits of two previously developed techniques. We used an integrated approach of intelligent image feature extraction and line fitting for automatically locating the carotid artery in the image frame, followed by wall interface extraction based on a Gaussian edge operator. We call our system CARES. We validated CARES on a multi-institutional database of 300 carotid ultrasound images. IMT measurement bias was 0.032 +/- 0.141 mm, better than other automated techniques and comparable to that of user-driven methodologies. CARES processed 96% of the images, leading to a figure of merit of 95.7%. CARES ensured complete automation and high accuracy in IMT measurement; hence it could be a suitable clinical tool for processing large datasets in multicenter studies involving atherosclerosis.

  9. Stereo imaging with spaceborne radars

    NASA Technical Reports Server (NTRS)

    Leberl, F.; Kobrick, M.

    1983-01-01

    Stereo viewing is a valuable tool in photointerpretation and is used for the quantitative reconstruction of the three-dimensional shape of a topographical surface. Stereo viewing refers to a visual perception of space by presenting an overlapping image pair to an observer so that a three-dimensional model is formed in the brain. Some of the observer's function is performed by machine correlation of the overlapping images - so-called automated stereo correlation. The direct perception of space with two eyes is often called natural binocular vision; techniques of generating three-dimensional models of the surface from two sets of monocular image measurements are the topic of stereology.

  10. On some dynamical chameleon systems

    NASA Astrophysics Data System (ADS)

    Burkin, I. M.; Kuznetsova, O. I.

    2018-03-01

    It is now well known that dynamical systems can be categorized into systems with self-excited attractors and systems with hidden attractors. A self-excited attractor has a basin of attraction that is associated with an unstable equilibrium, while a hidden attractor has a basin of attraction that does not intersect with small neighborhoods of any equilibrium points. Hidden attractors play an important role in engineering applications because they allow unexpected and potentially disastrous responses to perturbations in a structure like a bridge or an airplane wing. In addition, the complex behaviors of chaotic systems have been applied in various areas, from image watermarking, audio encryption, asymmetric color pathological image encryption, and chaotic masking communication to random number generation. Recently, researchers have discovered the so-called “chameleon systems”. These systems were so named because they demonstrate self-excited or hidden oscillations depending on the values of their parameters. The present paper offers a simple algorithm for synthesizing one-parameter chameleon systems. The authors trace the evolution of the Lyapunov exponents and the Kaplan-Yorke dimension of such systems as the parameters change.

  11. Arcsecond and Sub-arcsecond Imaging with X-ray Multi-Image Interferometer and Imager for (very) small satellites

    NASA Astrophysics Data System (ADS)

    Hayashida, K.; Kawabata, T.; Nakajima, H.; Inoue, S.; Tsunemi, H.

    2017-10-01

    The best angular resolution of 0.5 arcsec is realized with the X-ray mirror onboard the Chandra satellite. Nevertheless, achieving comparable or better resolution is anticipated to be difficult in the near future; in fact, the goal of the ATHENA telescope is an angular resolution of 5 arcsec. We propose a new type of X-ray interferometer consisting simply of an X-ray absorption grating and an X-ray spectral imaging detector, such as X-ray CCDs or new-generation CMOS detectors, which works by stacking the multiple images created by Talbot interference (Hayashida et al. 2016). This system, which we now call the Multi Image X-ray Interferometer Module (MIXIM), enables arcsecond resolution with very small satellites of 50 cm size, and sub-arcsecond resolution with small satellites. We have performed ground experiments in which a micro-focus X-ray source, a grating with a pitch of 4.8 μm, and a 30 μm pixel detector were placed about 1 m from the source. We obtained the self-image (interferometric fringe) of the grating for a wide band pass around 10 keV. This result corresponds to about 2 arcsec resolution for parallel-beam incidence. MIXIM is useful for high angular resolution imaging of relatively bright sources; searching for supermassive black holes and resolving AGN tori would be targets for this system.

  12. The Topo-trigger: a new concept of stereo trigger system for imaging atmospheric Cherenkov telescopes

    NASA Astrophysics Data System (ADS)

    López-Coto, R.; Mazin, D.; Paoletti, R.; Blanch Bigas, O.; Cortina, J.

    2016-04-01

    Imaging atmospheric Cherenkov telescopes (IACTs) such as the Major Atmospheric Gamma-ray Imaging Cherenkov (MAGIC) telescopes endeavor to reach the lowest possible energy threshold. In doing so, the trigger system is a key element. Reducing the trigger threshold is hampered by the rapid increase of accidental triggers generated by ambient light (the so-called Night Sky Background, NSB). In this paper we present a topological trigger, dubbed Topo-trigger, which rejects events on the basis of their relative orientation in the telescope cameras. We have simulated and tested the trigger selection algorithm in the MAGIC telescopes. The algorithm was tested using Monte Carlo simulations and shows a rejection of 85% of the accidental stereo triggers while preserving 99% of the gamma rays. A full implementation of this trigger system would achieve an increase in collection area between 10 and 20% at the energy threshold. The analysis energy threshold of the instrument is expected to decrease by ~8%. The selection algorithm was tested on real MAGIC data taken with the current trigger configuration and no γ-like events were found to be lost.

  13. The infrared video image pseudocolor processing system

    NASA Astrophysics Data System (ADS)

    Zhu, Yong; Zhang, JiangLing

    2003-11-01

    This paper introduces an infrared video image pseudo-color processing system, emphasizing the algorithm and its implementation for rendering a measured object's 2D temperature distribution using pseudo-color technology. The data of a measured object's thermal image are an objective presentation of its surface temperature distribution, but color is closely tied to people's subjective cognition. Pseudo-color technology bridges this gap between subjectivity and objectivity and represents the measured object's temperature distribution directly and intelligibly. The pseudo-color algorithm is based on distance in IHS space, and from it a definition of pseudo-color visual resolution is put forward. The software (which realizes the map from the sample data to the color space) and the hardware (which carries out the conversion from the color space to the palette in HDL) cooperate, so a two-level mapping, consisting of a logical map and a physical map, is presented. The system has been widely used in failure diagnosis of electric power devices, fire protection for lifesaving, and recently even SARS detection in China.
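
    The authors' IHS-distance algorithm is not given in the abstract; as a generic stand-in, the sketch below maps a synthetic 2D temperature field to pseudo-color by sweeping hue with normalized temperature in HSV space.

      # Generic pseudo-colour sketch: map a 2-D temperature field to colour by
      # sweeping hue with normalized temperature in HSV space (a stand-in
      # illustration, not the authors' IHS-distance algorithm).
      import numpy as np
      import colorsys

      def pseudocolor(temperature):
          t = (temperature - temperature.min()) / np.ptp(temperature)   # normalize to [0, 1]
          rgb = np.empty(temperature.shape + (3,))
          for idx in np.ndindex(temperature.shape):
              # Hue from blue (cold, 240 deg) to red (hot, 0 deg), full saturation/value.
              hue = (1.0 - t[idx]) * (240.0 / 360.0)
              rgb[idx] = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
          return rgb

      # Synthetic "thermal image": a warm spot on a cooler background.
      y, x = np.mgrid[:120, :160]
      temp = 20.0 + 60.0 * np.exp(-((x - 80) ** 2 + (y - 60) ** 2) / (2 * 20.0 ** 2))
      print(pseudocolor(temp).shape)    # (120, 160, 3) RGB image with values in [0, 1]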

  14. Image Filtering with Boolean and Statistical Operators.

    DTIC Science & Technology

    1983-12-01

    [The indexed abstract consists of OCR-garbled Fortran fragments: COMPLEX array declarations AMAT(256,4), BMAT(256,4), CMAT(256,4); file-handling calls (IOF, OPEN, CHECK, RDBLK, WRBLK); and a double DO loop over K=1,4 and J=1,256 that combines AMAT and BMAT element-wise into CMAT before writing the result back to disk.]

  15. An Optimised System for Generating Multi-Resolution Dtms Using NASA Mro Datasets

    NASA Astrophysics Data System (ADS)

    Tao, Y.; Muller, J.-P.; Sidiropoulos, P.; Veitch-Michaelis, J.; Yershov, V.

    2016-06-01

    Within the EU FP-7 iMars project, a fully automated multi-resolution DTM processing chain, called Co-registration ASP-Gotcha Optimised (CASP-GO) has been developed, based on the open source NASA Ames Stereo Pipeline (ASP). CASP-GO includes tiepoint based multi-resolution image co-registration and an adaptive least squares correlation-based sub-pixel refinement method called Gotcha. The implemented system guarantees global geo-referencing compliance with respect to HRSC (and thence to MOLA), provides refined stereo matching completeness and accuracy based on the ASP normalised cross-correlation. We summarise issues discovered from experimenting with the use of the open-source ASP DTM processing chain and introduce our new working solutions. These issues include global co-registration accuracy, de-noising, dealing with failure in matching, matching confidence estimation, outlier definition and rejection scheme, various DTM artefacts, uncertainty estimation, and quality-efficiency trade-offs.

  16. On the Development of a Computing Infrastructure that Facilitates IPPD from a Decision-Based Design Perspective

    NASA Technical Reports Server (NTRS)

    Hale, Mark A.; Craig, James I.; Mistree, Farrokh; Schrage, Daniel P.

    1995-01-01

    Integrated Product and Process Development (IPPD) embodies the simultaneous application of both system and quality engineering methods throughout an iterative design process. The use of IPPD results in the time-conscious, cost-saving development of engineering systems. Georgia Tech has proposed the development of an Integrated Design Engineering Simulator that will merge Integrated Product and Process Development with interdisciplinary analysis techniques and state-of-the-art computational technologies. To implement IPPD, a Decision-Based Design perspective is encapsulated in an approach that focuses on the role of the human designer in product development. The approach has two parts and is outlined in this paper. First, an architecture, called DREAMS, is being developed that facilitates design from a decision-based perspective. Second, a supporting computing infrastructure, called IMAGE, is being designed. The current status of development is given and future directions are outlined.

  17. Obstacles encountered in the development of the low vision enhancement system.

    PubMed

    Massof, R W; Rickman, D L

    1992-01-01

    The Johns Hopkins Wilmer Eye Institute and the NASA Stennis Space Center are collaborating on the development of a new high technology low vision aid called the Low Vision Enhancement System (LVES). The LVES consists of a binocular head-mounted video display system, video cameras mounted on the head-mounted display, and real-time video image processing in a system package that is battery powered and portable. Through a phased development approach, several generations of the LVES can be made available to the patient in a timely fashion. This paper describes the LVES project with major emphasis on technical problems encountered or anticipated during the development process.

  18. Development of RT-components for the M-3 Strawberry Harvesting Robot

    NASA Astrophysics Data System (ADS)

    Yamashita, Tomoki; Tanaka, Motomasa; Yamamoto, Satoshi; Hayashi, Shigehiko; Saito, Sadafumi; Sugano, Shigeki

    We are developing a strawberry harvesting robot, the “M-3” prototype robot system, under the 4th urgent project of MAFF. In order to develop the control software of the M-3 robot more efficiently, we adopted the RT-middleware “OpenRTM-aist” software platform. In this system, we developed 9 kinds of RT-Components (RTCs): a robot task sequence player RTC, a proxy RTC for the image processing software, a DC motor controller RTC, an arm kinematics RTC, and so on. In this paper, we discuss the advantages of developing with RT-middleware and the problems of end-users operating the RTC-configured robotic system.

  19. Selective visual region of interest to enhance medical video conferencing

    NASA Astrophysics Data System (ADS)

    Bonneau, Walt, Jr.; Read, Christopher J.; Shirali, Girish

    1998-06-01

    The continued economic pressure being placed upon the healthcare industry creates both a challenge and an opportunity to develop cost-effective healthcare tools. Tools that improve the quality of medical care while at the same time improving the distribution of efficient care will create product demand. Video conferencing systems are one of the latest product technologies making their way into healthcare applications, and systems that provide quality bi-directional video and imaging at the lowest system and communication cost are creating many possible options for the healthcare industry. A method that uses only 128 kbits/sec of ISDN bandwidth while providing quality video images in selected regions is applied to echocardiograms using a low-cost video conferencing system operating within a basic rate ISDN line bandwidth. Within a given display area (frame), it has been observed that only selected informational areas of the frame are of value when viewing for detail and precision within an image, much in the same manner that a photograph is cropped. If a method to accomplish region of interest (ROI) coding were applied to video conferencing using the H.320, H.263 (compression), and H.281 (camera control) international standards, medical image quality could be achieved in a cost-effective manner. For example, the cardiologist could be provided with a selectable three- to eight-end-point ROI polygon that defines the ROI in the image. This is achieved by the video system calculating the selected regional end-points and creating an alpha mask that signals the importance of the ROI to the compression processor. This region is then applied to the compression algorithm so that the majority of the video conferencing processor cycles are focused on the ROI of the image; an occasional update of the non-ROI area is processed to maintain total image coherence, and the user could control these non-ROI updates. Providing encoder-side ROI specification is of value; however, the power of this capability is improved if remote access and selection of the ROI is also provided. Using the H.281 camera standard and proposing an additional option to the standard to allow remote ROI selection would make this possible. When ROI coding is applied, the equivalent of 384 kbits/sec ISDN quality may be achieved or exceeded using 128 kbits/sec, depending upon the size of the selected ROI. This opens an additional opportunity to establish international calling and to reduce call rates by up to sixty-six percent, making recurring communication costs attractive. Rates of twenty to thirty quality ROI updates could be achieved. It is, however, important to understand that this technique is still under development.
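
    The H.263/H.281 codec side is beyond a short example, but the alpha-mask idea can be sketched: build a binary ROI mask from the viewer-selected polygon end-points (five hypothetical vertices below, via scikit-image), which could then weight encoder effort toward the ROI.

      # Minimal sketch: build a binary ROI mask from user-selected polygon
      # end-points (hypothetical vertices). The codec integration is not shown.
      import numpy as np
      from skimage.draw import polygon

      frame_shape = (288, 352)                 # CIF-sized video frame (rows, cols)
      rows = np.array([40, 60, 200, 240, 120]) # hypothetical ROI vertices chosen by the viewer
      cols = np.array([80, 280, 300, 120, 60])

      mask = np.zeros(frame_shape, dtype=np.uint8)
      rr, cc = polygon(rows, cols, shape=frame_shape)
      mask[rr, cc] = 1                         # 1 inside the ROI, 0 elsewhere

      roi_fraction = mask.mean()
      print(f"ROI covers {roi_fraction:.1%} of the frame")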

  20. Snowy CME

    NASA Image and Video Library

    2017-12-08

    A solar flare associated with the coronal mass ejection seen in this image generated a flurry of fast-moving solar protons. As each one hits the CCD camera on SOHO, it produces a brief snow-like speckle in the image. Credit: NASA/SOHO CME WEEK: What To See in CME Images Two main types of explosions occur on the sun: solar flares and coronal mass ejections. Unlike the energy and x-rays produced in a solar flare – which can reach Earth at the speed of light in eight minutes – coronal mass ejections are giant, expanding clouds of solar material that take one to three days to reach Earth. Once at Earth, these ejections, also called CMEs, can impact satellites in space or interfere with radio communications. During CME WEEK from Sept. 22 to 26, 2014, we explore different aspects of these giant eruptions that surge out from the star we live with. When a coronal mass ejection blasts off the sun, scientists rely on instruments called coronagraphs to track their progress. Coronagraphs block out the bright light of the sun, so that the much fainter material in the solar atmosphere -- including CMEs -- can be seen in the surrounding space. CMEs appear in these images as expanding shells of material from the sun's atmosphere -- sometimes a core of colder, solar material (called a filament) from near the sun's surface moves in the center. But mapping out such three-dimensional components from a two-dimensional image isn't easy. The images are from the three sets of coronagraphs NASA currently has in space. One is on the joint European Space Agency and NASA Solar and Heliospheric Observatory, or SOHO. SOHO launched in 1995, and sits between Earth and the sun about a million miles away from Earth. The other two coronagraphs are on the two spacecraft of the NASA Solar Terrestrial Relations Observatory, or STEREO, mission, which launched in 2006. The two STEREO spacecraft are both currently viewing the far side of the sun. Together these instruments help scientists create a three-dimensional model of any CME as its journey unfolds through interplanetary space. Such information can show why a given characteristic of a CME close to the sun might lead to a given effect near Earth, or any other planet in the solar system.

  1. Combined Images

    NASA Image and Video Library

    2017-12-08

    Four different instruments on SOHO show a large CME on Nov. 6, 1997. The sun is at the center, with three coronagraph images of different sizes around it. The streaks of white light are from protons hitting the SOHO cameras producing a snowy effect typical of a significant flare. Credit: NASA/SOHO

  2. Automatic Generation of Issue Maps: Structured, Interactive Outputs for Complex Information Needs

    DTIC Science & Technology

    2012-09-01

    [OCR fragments from the report: a binary occurrence table over entities such as Ronald Goldman, Neil Lewis, and Judge Lance Ito, plus excerpts noting that too much of this can result in behaviour similar to shortest-path chains, that "Connecting the Dots" has also been explored in non-textual domains, that the authors of [Heath et al., 2010] propose building graphs called Image Webs, and that one could imagine a metro map summarizing a dataset of medical records.]

  3. Image correlation microscopy for uniform illumination.

    PubMed

    Gaborski, T R; Sealander, M N; Ehrenberg, M; Waugh, R E; McGrath, J L

    2010-01-01

    Image cross-correlation microscopy is a technique that quantifies the motion of fluorescent features in an image by measuring the temporal autocorrelation function decay in a time-lapse image sequence. Image cross-correlation microscopy has traditionally employed laser-scanning microscopes because the technique emerged as an extension of laser-based fluorescence correlation spectroscopy. In this work, we show that image correlation can also be used to measure fluorescence dynamics in uniform illumination or wide-field imaging systems and we call our new approach uniform illumination image correlation microscopy. Wide-field microscopy is not only a simpler, less expensive imaging modality, but it offers the capability of greater temporal resolution over laser-scanning systems. In traditional laser-scanning image cross-correlation microscopy, lateral mobility is calculated from the temporal de-correlation of an image, where the characteristic length is the illuminating laser beam width. In wide-field microscopy, the diffusion length is defined by the feature size using the spatial autocorrelation function. Correlation function decay in time occurs as an object diffuses from its original position. We show that theoretical and simulated comparisons between Gaussian and uniform features indicate the temporal autocorrelation function depends strongly on particle size and not particle shape. In this report, we establish the relationships between the spatial autocorrelation function feature size, temporal autocorrelation function characteristic time and the diffusion coefficient for uniform illumination image correlation microscopy using analytical, Monte Carlo and experimental validation with particle tracking algorithms. Additionally, we demonstrate uniform illumination image correlation microscopy analysis of adhesion molecule domain aggregation and diffusion on the surface of human neutrophils.
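
    To make the measurement concrete, the following is a minimal sketch (not the authors' code) of the temporal autocorrelation computation that uniform illumination image correlation microscopy relies on: intensity fluctuations of a wide-field time-lapse stack are correlated at increasing lag times, and the decay time of the resulting curve is what gets related to feature size and diffusion coefficient. The array shapes and normalization below are illustrative assumptions.

        import numpy as np

        def temporal_autocorrelation(stack):
            # stack: time-lapse images shaped (T, H, W); fluctuations are taken
            # about the time-averaged image before correlating.
            stack = np.asarray(stack, dtype=float)
            delta = stack - stack.mean(axis=0)
            T = stack.shape[0]
            g = np.empty(T)
            for tau in range(T):
                g[tau] = (delta[:T - tau] * delta[tau:]).mean()  # <dI(t) dI(t+tau)>
            return g / g[0]  # normalized so G(0) = 1; fit the decay to get the diffusion time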

  4. X-ray ‘ghost images’ could cut radiation doses

    NASA Astrophysics Data System (ADS)

    Chen, Sophia

    2018-03-01

    On its own, a single-pixel camera captures pictures that are pretty dull: squares that are completely black, completely white, or some shade of gray in between. All it does, after all, is detect brightness. Yet by connecting a single-pixel camera to a patterned light source, a team of physicists in China has made detailed x-ray images using a statistical technique called ghost imaging, first pioneered 20 years ago in infrared and visible light. Researchers in the field say future versions of this system could take clear x-ray photographs with cheap cameras—no need for lenses and multipixel detectors—and less cancer-causing radiation than conventional techniques.
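
    As a rough illustration of the reconstruction principle (a classical computational ghost-imaging toy example, not the Chinese team's x-ray setup), the image is recovered by correlating the single-pixel "bucket" signal with the known illumination patterns; the sizes and test object below are invented for the demonstration.

        import numpy as np

        rng = np.random.default_rng(0)
        H, W, N = 32, 32, 5000
        obj = np.zeros((H, W))
        obj[8:24, 12:20] = 1.0                      # unknown transmissive object (toy)
        patterns = rng.random((N, H, W))            # known structured illumination patterns
        bucket = (patterns * obj).sum(axis=(1, 2))  # single-pixel detector readings

        # Correlate bucket fluctuations with the patterns to form the ghost image.
        ghost = ((bucket - bucket.mean())[:, None, None] * patterns).mean(axis=0)
        ghost = (ghost - ghost.min()) / (ghost.max() - ghost.min())  # rescale for display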

  5. Thermo-elastic optical coherence tomography.

    PubMed

    Wang, Tianshi; Pfeiffer, Tom; Wu, Min; Wieser, Wolfgang; Amenta, Gaetano; Draxinger, Wolfgang; van der Steen, Antonius F W; Huber, Robert; Soest, Gijs van

    2017-09-01

    The absorption of nanosecond laser pulses induces rapid thermo-elastic deformation in tissue. A sub-micrometer scale displacement occurs within a few microseconds after the pulse arrival. In this Letter, we investigate the laser-induced thermo-elastic deformation using a 1.5 MHz phase-sensitive optical coherence tomography (OCT) system. A displacement image can be reconstructed, which enables a new modality of phase-sensitive OCT, called thermo-elastic OCT. An analysis of the results shows that the optical absorption is a dominating factor for the displacement. Thermo-elastic OCT is capable of visualizing inclusions that do not appear on the structural OCT image, providing additional tissue type information.
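
    The displacement reconstruction rests on the standard phase-sensitive OCT relation between axial motion and phase change; a minimal sketch is given below, with the center wavelength and tissue refractive index as illustrative assumptions rather than the values used in the Letter.

        import numpy as np

        def displacement_from_phase(phi_before, phi_after, wavelength=1.3e-6, n_tissue=1.38):
            # Axial displacement per pixel from the phase change between two A-scans:
            # d = lambda0 * dphi / (4 * pi * n), with dphi wrapped into (-pi, pi].
            dphi = np.angle(np.exp(1j * (np.asarray(phi_after) - np.asarray(phi_before))))
            return wavelength * dphi / (4.0 * np.pi * n_tissue)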

  6. SKETCH 4B: An Image Understanding Operating System

    DTIC Science & Technology

    1989-06-14

    ... [LISP Nlambda Function] equivalent to: standard FRANZ liszt, but modified so that it may be called with no arguments, will read and execute ... Western Electric device-independent troff (unless you do not want to print documentation). 4. FRANZ LISP from Franz Inc. if Sun3 (not necessary if VAX ... reserved. Developed at Lincoln Laboratory. Chapters: 1. Introduction. 2. LISP tutorial. 3. FRANZ extensions. 4. Atoms. 5. Objects. 6. Catalogs.

  7. Telecommunications and data acquisition systems support for the Viking 1975 mission to Mars

    NASA Technical Reports Server (NTRS)

    Mudgway, D. J.

    1983-01-01

    The background for the Viking Lander Monitor Mission (VLMM) is given, and the technical and operational aspects of the tracking and data acquisition support that the Network was called upon to provide are described. An overview of the science results obtained from the imaging, meteorological, and radio science data is also given. The intensive efforts that were made to recover the mission are described.

  8. Circumventing Therapeutic Resistance and the Emergence of Disseminated Breast Cancer Cells Through Non-Invasive Optical Imaging

    DTIC Science & Technology

    2015-06-01

    Herein we explore a series of optically distinct near-infrared emissive polymersomes (NIREPs; biodegradable polymer vesicles that ... characterized into subtypes, without the need for a biopsy. Our system uses non-toxic, biodegradable nanoparticles (called "NIREPs"), which when injected ... natural photosynthesis, which exploits NIR-absorbing dyes such as chlorophylls and pheophytins [11]. Relative to the tremendous attention that has ...

  9. Experimental demonstration of reduced tilt-to-length coupling by using imaging systems in precision interferometers

    NASA Astrophysics Data System (ADS)

    Tröbs, M.; Chwalla, M.; Danzmann, K.; Fernández Barránco, G.; Fitzsimons, E.; Gerberding, O.; Heinzel, G.; Killow, C. J.; Lieser, M.; Perreur-Lloyd, M.; Robertson, D. I.; Schuster, S.; Schwarze, T. S.; Ward, H.; Zwetz, M.

    2017-09-01

    Angular misalignment of one of the interfering beams in laser interferometers can couple into the interferometric length measurement and is called tilt-to-length (TTL) coupling in the following. In the noise budget of the planned space-based gravitational-wave detector evolved Laser Interferometer Space Antenna (eLISA) [1, 2] TTL coupling is the second largest noise source after shot noise [3].

  10. Pattern recognition neural-net by spatial mapping of biology visual field

    NASA Astrophysics Data System (ADS)

    Lin, Xin; Mori, Masahiko

    2000-05-01

    The method of spatial mapping found in the biological visual field is applied to artificial neural networks for pattern recognition. Using a coordinate transform called the complex-logarithm mapping together with the Fourier transform, input images are transformed into scale-, rotation- and shift-invariant patterns and then fed into a multilayer neural network for learning and recognition. The results of a computer simulation and an optical experimental system are described.
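
    A minimal sketch of this invariant preprocessing chain (a Fourier-Mellin-style digital approximation, not the authors' optical implementation) is shown below: the FFT magnitude removes translation, a complex-logarithm (log-polar) resampling turns rotation and scaling into shifts, and a second FFT magnitude removes those shifts. The output grid size is an arbitrary choice.

        import numpy as np
        from scipy.ndimage import map_coordinates

        def invariant_pattern(img, out_shape=(64, 64)):
            # 1) FFT magnitude removes translation.
            mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
            h, w = mag.shape
            cy, cx = h / 2.0, w / 2.0
            n_r, n_t = out_shape
            # 2) Complex-logarithm (log-polar) resampling: rotation and scaling
            #    become shifts along the angle and log-radius axes.
            log_r = np.linspace(0.0, np.log(min(cy, cx)), n_r)
            theta = np.linspace(0.0, 2 * np.pi, n_t, endpoint=False)
            rr = np.exp(log_r)[:, None]
            yy = cy + rr * np.sin(theta)[None, :]
            xx = cx + rr * np.cos(theta)[None, :]
            logpolar = map_coordinates(mag, [yy, xx], order=1)
            # 3) A second FFT magnitude removes those residual shifts.
            return np.abs(np.fft.fft2(logpolar))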

  11. Comparative Study of Speckle Filtering Methods in PolSAR Radar Images

    NASA Astrophysics Data System (ADS)

    Boutarfa, S.; Bouchemakh, L.; Smara, Y.

    2015-04-01

    Images acquired by polarimetric SAR (PolSAR) radar systems are characterized by the presence of a noise called speckle. This noise has a multiplicative nature, corrupts both the amplitude and phase images, complicates data interpretation, degrades segmentation performance and reduces the detectability of targets. Hence the need to preprocess the images with adapted filtering methods before analysis. In this paper, we present a comparative study of implemented methods for reducing speckle in PolSAR images. The developed filters are: the refined Lee filter based on estimation of the minimum mean square error (MMSE); the improved Sigma filter with detection of strong scatterers, based on calculation of the coherency matrix to detect the different scatterers in order to preserve the polarization signature and maintain structures that are necessary for image interpretation; filtering by the stationary wavelet transform (SWT) using multi-scale edge detection and a technique for improving the wavelet coefficients called SSC (sum of squared coefficients); and the Turbo filter, a combination of two complementary filters, the refined Lee filter and the SWT wavelet transform, in which one filter can boost the results of the other. The originality of our work lies in applying these methods to several types of images (amplitude, intensity and complex, from satellite and airborne radars) and in optimizing the wavelet filtering by adding a parameter to the calculation of the threshold. This parameter controls the filtering effect and gives a good compromise between smoothing homogeneous areas and preserving linear structures. The methods are applied to fully polarimetric RADARSAT-2 images (HH, HV, VH, VV) acquired over Algiers, Algeria, in C-band and to three polarimetric E-SAR images (HH, HV, VV) acquired over the Oberpfaffenhofen area near Munich, Germany, in P-band. To evaluate the performance of each filter, we used the following criteria: smoothing of homogeneous areas, preservation of edges and preservation of polarimetric information. Experimental results are included to illustrate the different implemented methods.
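
    For orientation, a simplified single-channel Lee MMSE filter is sketched below (the refined Lee and polarimetric variants studied in the paper add edge-aligned windows and operate on the coherency matrix); the window size and equivalent number of looks are placeholder values.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def lee_filter(intensity, win=7, enl=4.0):
            # Basic Lee MMSE speckle filter for an intensity SAR image.
            mean = uniform_filter(intensity, win)
            mean_sq = uniform_filter(intensity ** 2, win)
            var = np.maximum(mean_sq - mean ** 2, 0.0)
            cu2 = 1.0 / enl                              # squared noise variation coefficient
            ci2 = var / np.maximum(mean ** 2, 1e-12)     # squared local variation coefficient
            w = np.clip(1.0 - cu2 / np.maximum(ci2, 1e-12), 0.0, 1.0)
            return mean + w * (intensity - mean)         # smooth where homogeneous, keep edges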

  12. Interferometric synthetic aperture radar: Building tomorrow's tools today

    USGS Publications Warehouse

    Lu, Zhong

    2006-01-01

    A synthetic aperture radar (SAR) system transmits electromagnetic (EM) waves at a wavelength that can range from a few millimeters to tens of centimeters. The radar wave propagates through the atmosphere and interacts with the Earth’s surface. Part of the energy is reflected back to the SAR system and recorded. Using a sophisticated image processing technique, called SAR processing (Curlander and McDonough, 1991), both the intensity and phase of the reflected (or backscattered) signal of each ground resolution element (a few meters to tens of meters) can be calculated in the form of a complex-valued SAR image representing the reflectivity of the ground surface. The amplitude or intensity of the SAR image is determined primarily by terrain slope, surface roughness, and dielectric constants, whereas the phase of the SAR image is determined primarily by the distance between the satellite antenna and the ground targets, slowing of the signal by the atmosphere, and the interaction of EM waves with ground surface. Interferometric SAR (InSAR) imaging, a recently developed remote sensing technique, utilizes the interaction of EM waves, referred to as interference, to measure precise distances. Very simply, InSAR involves the use of two or more SAR images of the same area to extract landscape topography and its deformation patterns.
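
    The interference step itself is compact: given two co-registered complex (single-look) SAR images, the interferometric phase is the phase of one image multiplied by the complex conjugate of the other, usually averaged over a small window ("multilooking") to reduce noise. A minimal sketch, with an assumed multilook factor, follows.

        import numpy as np

        def interferogram(slc1, slc2, looks=(4, 4)):
            # slc1, slc2: co-registered complex SAR images of the same scene.
            ifg = slc1 * np.conj(slc2)
            ly, lx = looks
            h = (ifg.shape[0] // ly) * ly
            w = (ifg.shape[1] // lx) * lx
            # Block-average (multilook) the complex interferogram before taking the phase.
            ifg = ifg[:h, :w].reshape(h // ly, ly, w // lx, lx).mean(axis=(1, 3))
            return np.angle(ifg)  # wrapped phase in (-pi, pi]: fringes of topography/deformation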

  13. Comparative study between the radiopacity levels of high viscosity and of flowable composite resins, using digital imaging.

    PubMed

    Arita, Emiko S; Silveira, Gilson P; Cortes, Arthur R; Brucoli, Henrique C

    2012-01-01

    The development of countless types and brands of high-viscosity and flowable composite resins, with different physical and chemical properties applicable to their broad use in dental clinics, calls for further study of their radiopacity levels. The aim of this study was to evaluate the radiopacity levels of high-viscosity and flowable composite resins using digital imaging. Ninety-six composite resin discs, 5 mm in diameter and 3 mm thick, were radiographed and analyzed. The image acquisition system used was the Digora® Phosphor Storage System and the images were analyzed with the Digora software for Windows. The exposure conditions were 70 kVp, 8 mA, and 0.2 s, with a focal distance of 40 cm. The image densities were obtained from the pixel values of the materials in the digital image. Most of the high-viscosity composite resins presented higher radiopacity levels than the flowable composite resins, with statistically significant differences between the brands and groups analyzed (P < 0.05). Among the high-viscosity composite resins, Tetric®Ceram presented the highest radiopacity levels and Glacier® presented the lowest. Among the flowable composite resins, Tetric®Flow presented the highest radiopacity levels and Wave® presented the lowest.

  14. Optoelectronic image processing for cervical cancer screening

    NASA Astrophysics Data System (ADS)

    Narayanswamy, Ramkumar; Sharpe, John P.; Johnson, Kristina M.

    1994-05-01

    Automation of the Pap-smear cervical screening method is highly desirable as it relieves tedium for the human operators, reduces cost, and should increase accuracy and provide repeatability. We present here the design for a high-throughput optoelectronic system which forms the first stage of a two-stage system to automate Pap-smear screening. We use a mathematical morphological technique called the hit-or-miss transform to identify the suspicious areas on a Pap-smear slide. This algorithm is implemented using a VanderLugt architecture and a time-sequential ANDing smart pixel array.
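
    As an illustration of the hit-or-miss idea in software (the paper implements it optically with a VanderLugt correlator), the transform flags pixels whose neighbourhood matches both a required foreground template and a required background template; the templates below are hypothetical stand-ins for the cell-nucleus shapes of interest.

        import numpy as np
        from scipy.ndimage import binary_hit_or_miss

        def hit_or_miss_candidates(binary_img):
            # Flag locations with a solid 3x3 core that must be foreground,
            # surrounded by a 5x5 ring that must be background.
            core = np.ones((3, 3), dtype=bool)                           # pixels required to be "on"
            ring = np.zeros((5, 5), dtype=bool)
            ring[0, :] = ring[-1, :] = ring[:, 0] = ring[:, -1] = True   # pixels required to be "off"
            return binary_hit_or_miss(binary_img, structure1=core, structure2=ring)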

  15. A digital system for surface reconstruction

    USGS Publications Warehouse

    Zhou, Weiyang; Brock, Robert H.; Hopkins, Paul F.

    1996-01-01

    A digital photogrammetric system, STEREO, was developed to determine three dimensional coordinates of points of interest (POIs) defined with a grid on a textureless and smooth-surfaced specimen. Two CCD cameras were set up with unknown orientation and recorded digital images of a reference model and a specimen. Points on the model were selected as control or check points for calibrating or assessing the system. A new algorithm for edge-detection called local maximum convolution (LMC) helped extract the POIs from the stereo image pairs. The system then matched the extracted POIs and used a least squares “bundle” adjustment procedure to solve for the camera orientation parameters and the coordinates of the POIs. An experiment with STEREO found that the standard deviation of the residuals at the check points was approximately 24%, 49% and 56% of the pixel size in the X, Y and Z directions, respectively. The average of the absolute values of the residuals at the check points was approximately 19%, 36% and 49% of the pixel size in the X, Y and Z directions, respectively. With the graphical user interface, STEREO demonstrated a high degree of automation and its operation does not require special knowledge of photogrammetry, computers or image processing.

  16. Improving LUC estimation accuracy with multiple classification system for studying impact of urbanization on watershed flood

    NASA Astrophysics Data System (ADS)

    Dou, P.

    2017-12-01

    Guangzhou has experienced a rapid urbanization period, called "small change in three years and big change in five years" since the reform of China, resulting in significant land use/cover change (LUC). To overcome the disadvantages of a single classifier for remote sensing image classification accuracy, a multiple classifier system (MCS) is proposed to improve the quality of remote sensing image classification. The new method combines the advantages of different learning algorithms and achieves higher accuracy (88.12%) than any single classifier did. With the proposed MCS, land use/cover (LUC) on Landsat images from 1987 to 2015 was obtained, and the LUCs were used on three watersheds (Shijing river, Chebei stream, and Shahe stream) to estimate the impact of urbanization on watershed floods. The results show that with the high-accuracy LUC, the uncertainty in flood simulations is reduced effectively (for Shijing river, Chebei stream, and Shahe stream, the uncertainty was reduced by 15.5%, 17.3% and 19.8%, respectively).
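
    The abstract does not list the member classifiers, so the sketch below only illustrates the general multiple-classifier idea with common scikit-learn learners combined by soft voting; X would hold per-pixel spectral features from the Landsat scenes and y the LUC class labels.

        from sklearn.ensemble import RandomForestClassifier, VotingClassifier
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.svm import SVC

        def build_mcs():
            # Combine several base learners; soft voting averages their class probabilities.
            return VotingClassifier(
                estimators=[
                    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                    ("svm", SVC(probability=True, random_state=0)),
                    ("knn", KNeighborsClassifier(n_neighbors=5)),
                ],
                voting="soft",
            )

        # Usage (hypothetical arrays): mcs = build_mcs(); mcs.fit(X_train, y_train); labels = mcs.predict(X_pixels)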

  17. Earth resources shuttle imaging radar. [systems analysis and design analysis of pulse radar for earth resources information system

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A report is presented on a preliminary design of a Synthetic Array Radar (SAR) intended for experimental use with the space shuttle program. The radar is called the Earth Resources Shuttle Imaging Radar (ERSIR). Its primary purpose is to determine the usefulness of SAR in monitoring and managing earth resources. The design of the ERSIR, along with tradeoffs made during its evolution, is discussed. The ERSIR consists of a flight sensor for collecting the raw radar data and a ground sensor used both for reducing these radar data to images and for extracting earth resources information from the data. The flight sensor consists of two high-powered, coherent pulse radars, one operating at L-band and the other at X-band. Radar data, recorded on tape, can either be transmitted via a digital data link to a ground terminal, or the tape can be delivered to the ground station after the shuttle lands. A description of data processing equipment and display devices is given.

  18. A knowledge based system for scientific data visualization

    NASA Technical Reports Server (NTRS)

    Senay, Hikmet; Ignatius, Eve

    1992-01-01

    A knowledge-based system, called visualization tool assistant (VISTA), which was developed to assist scientists in the design of scientific data visualization techniques, is described. The system derives its knowledge from several sources which provide information about data characteristics, visualization primitives, and effective visual perception. The design methodology employed by the system is based on a sequence of transformations which decomposes a data set into a set of data partitions, maps this set of partitions to visualization primitives, and combines these primitives into a composite visualization technique design. Although the primary function of the system is to generate an effective visualization technique design for a given data set by using principles of visual perception, the system also allows users to interactively modify the design and renders the resulting image using a variety of rendering algorithms. The current version of the system primarily supports visualization techniques applicable in the earth and space sciences, although it may easily be extended to include other techniques useful in other disciplines such as computational fluid dynamics, finite-element analysis, and medical imaging.

  19. Permission to Ponder

    ERIC Educational Resources Information Center

    Cutler, Kay M.; Moeller, Mary R.

    2017-01-01

    "In many ways, images are the vehicle of comprehension, thought, and action. We integrate parts of images, we remember images, we manipulate images." This quote from James E. Zull clarifies the rationale for a discussion protocol called Visual Thinking Strategies (VTS), in which teachers focus students' attention on an image and ask…

  20. Optical images of visible and invisible percepts in the primary visual cortex of primates

    PubMed Central

    Macknik, Stephen L.; Haglund, Michael M.

    1999-01-01

    We optically imaged a visual masking illusion in primary visual cortex (area V-1) of rhesus monkeys to ask whether activity in the early visual system more closely reflects the physical stimulus or the generated percept. Visual illusions can be a powerful way to address this question because they have the benefit of dissociating the stimulus from perception. We used an illusion in which a flickering target (a bar oriented in visual space) is rendered invisible by two counter-phase flickering bars, called masks, which flank and abut the target. The target and masks, when shown separately, each generated correlated activity on the surface of the cortex. During the illusory condition, however, optical signals generated in the cortex by the target disappeared although the image of the masks persisted. The optical image thus was correlated with perception but not with the physical stimulus. PMID:10611363

  1. Hubble Team Unveils Most Colorful View of Universe Captured by Space Telescope

    NASA Image and Video Library

    2014-06-04

    Astronomers using NASA's Hubble Space Telescope have assembled a comprehensive picture of the evolving universe – among the most colorful deep space images ever captured by the 24-year-old telescope. Researchers say the image, part of a new study called the Ultraviolet Coverage of the Hubble Ultra Deep Field, provides the missing link in star formation. The Hubble Ultra Deep Field 2014 image is a composite of separate exposures taken from 2003 to 2012 with Hubble's Advanced Camera for Surveys and Wide Field Camera 3. Credit: NASA/ESA Read more: 1.usa.gov/1neD0se

  2. Flight Results from the HST SM4 Relative Navigation Sensor System

    NASA Technical Reports Server (NTRS)

    Naasz, Bo; Eepoel, John Van; Queen, Steve; Southward, C. Michael; Hannah, Joel

    2010-01-01

    On May 11, 2009, Space Shuttle Atlantis roared off of Launch Pad 39A en route to the Hubble Space Telescope (HST) to undertake its final servicing of HST, Servicing Mission 4. Onboard Atlantis was a small payload called the Relative Navigation Sensor experiment, which included three cameras of varying focal ranges and avionics to record images and estimate, in real time, the relative position and attitude (aka "pose") of the telescope during rendezvous and deployment. The avionics package, known as SpaceCube and developed at the Goddard Space Flight Center, performed image processing using field programmable gate arrays to accelerate this process, and in addition executed two different pose algorithms in parallel: the Goddard Natural Feature Image Recognition and the ULTOR Passive Pose and Position Engine (P3E) algorithms.

  3. Namibia and Central Angola

    Atmospheric Science Data Center

    2013-04-15

    ... The images on the left are natural color (red, green, blue) images from MISR's vertical-viewing (nadir) camera. The images on the ... one of MISR's derived surface products. The radiance (light intensity) in each pixel of the so-called "top-of-atmosphere" images on ...

  4. Overall design of imaging spectrometer on-board light aircraft

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhongqi, H.; Zhengkui, C.; Changhua, C.

    1996-11-01

    Aerial remote sensing is the earliest remote sensing technical system and has developed rapidly in recent years. The development of aerial remote sensing was dominated by high- to medium-altitude platforms in the past, and it is now characterized by a diversity of platforms including aircraft of high, medium and low flying altitude, helicopters, airships, remotely controlled airplanes, gliders, and balloons. The most widely used and rapidly developed platform recently is the light aircraft. In the late 1970s, Beijing Research Institute of Uranium Geology began aerial photography and geophysical survey using light aircraft, and put forward the overall design scheme of a light aircraft imaging spectral application system (LAISAS) in the 1990s. LAISAS is comprised of four subsystems: the measuring platform, the data acquisition subsystem, the ground testing subsystem, and the data processing subsystem. The principal instruments of LAISAS include a measuring platform controlled by an inertial gyroscope, an aerial spectrometer with high spectral resolution, an imaging spectrometer, a 3-channel scanner, a 128-channel imaging spectrometer, GPS, an illuminance-meter, and devices for atmospheric parameter measurement, ground testing, and data correction and processing. LAISAS has the features of integrity, from data acquisition to data processing to application; of stability, which guarantees image quality and is provided by the measuring, ground testing, and indoor data correction systems; of exemplariness, integrating the technologies of GIS, GPS, and image processing systems; and of practicality, which gives LAISAS flexibility and a high ratio of performance to cost. It can therefore be used in fundamental remote sensing research and in large-scale mapping for resource exploration, environmental monitoring, calamity prediction, and military purposes.

  5. Detection of small surface vessels in near, medium, and far infrared spectral bands

    NASA Astrophysics Data System (ADS)

    Dulski, R.; Milewski, S.; Kastek, M.; Trzaskawka, P.; Szustakowski, M.; Ciurapinski, W.; Zyczkowski, M.

    2011-11-01

    Protection of naval bases and harbors requires close co-operation between security and access control systems covering land areas and those monitoring sea approach routes. The typical location of naval bases and harbors - usually next to a large city - makes it difficult to detect and identify a threat in the dense regular traffic of various sea vessels (i.e. merchant ships, fishing boats, tourist ships). Due to the properties of vessel control systems such as AIS (Automatic Identification System), and the effectiveness of radar and optoelectronic systems against different targets, it seems that fast motor boats called RIBs (Rigid Inflatable Boats) could be the most serious threat to ships and harbor infrastructure. In this paper the process and conditions for the detection and identification of high-speed boats in the areas of ports and naval bases in the near, medium and far infrared are presented. Based on the results of measurements and recorded thermal images, the actual temperature contrast delta T (RIB / sea) will be determined, which will further allow the theoretical ranges of detection and identification of RIB-type targets to be specified for an operating security system. The data will also help to determine the possible advantages of image fusion, where the component images are taken in different spectral ranges. This will increase the probability of identifying the object by a multi-sensor security system equipped additionally with the appropriate algorithms for detecting, tracking and performing the fusion of images from the visible and infrared cameras.

  6. Face recognition system and method using face pattern words and face pattern bytes

    DOEpatents

    Zheng, Yufeng

    2014-12-23

    The present invention provides a novel system and method for identifying individuals and for face recognition utilizing facial features for face identification. The system and method of the invention comprise creating facial features or face patterns, called face pattern words and face pattern bytes, for face identification. The invention also provides for pattern recognition for identification tasks other than face recognition. The invention further provides a means for identifying individuals based on visible and/or thermal images of those individuals by utilizing computer software implemented by instructions on a computer or computer system and a computer-readable medium containing instructions on a computer system for face recognition and identification.

  7. 3D Modeling of Industrial Heritage Building Using COTSs System: Test, Limits and Performances

    NASA Astrophysics Data System (ADS)

    Piras, M.; Di Pietra, V.; Visintini, D.

    2017-08-01

    The role of UAV systems in applied geomatics is continuously increasing in several applications such as inspection, surveying and geospatial data acquisition. This evolution is mainly due to two factors: new technologies and new algorithms for data processing. Regarding technologies, for some years there has been very wide use of commercial UAVs, even COTS (Commercial Off-The-Shelf) systems. Moreover, these UAVs allow oblique images to be easily acquired, giving the possibility to overcome the limitations of the nadir approach related to the field of view and occlusions. In order to test the potential and issues of COTS systems, the Italian Society of Photogrammetry and Topography (SIFET) has organised the SBM2017, a benchmark in which anyone can participate in a shared experience. This benchmark, called "Photogrammetry with oblique images from UAV: potentialities and challenges", makes it possible to collect considerations from the users, highlight the potential of these systems, define the critical aspects and the technological challenges, and compare distinct approaches and software. The case study is the "Fornace Penna" in Scicli (Ragusa, Italy), an inaccessible monument of industrial architecture from the early 1900s. The datasets (images and video) have been acquired from three different UAV systems: Parrot Bebop 2, DJI Phantom 4 and Flytop Flynovex. The aim of this benchmark is to generate the 3D model of the "Fornace Penna", making an analysis considering different software, imaging geometries and processing strategies. This paper describes the surveying strategies, the methodologies and five different photogrammetric results (sensor calibration, external orientation, dense point cloud and two orthophotos), obtained using separately the single images and the frames extracted from the video acquired with the DJI system.

  8. Efficacy of a novel IGS system in atrial septal defect repair

    NASA Astrophysics Data System (ADS)

    Mefleh, Fuad N.; Baker, G. Hamilton; Kwartowitz, David M.

    2013-03-01

    Congenital heart disease occurs in 107.6 out of 10,000 live births, with Atrial Septal Defects (ASD) accounting for 10% of these conditions. Historically, ASDs were treated with open heart surgery using cardiopulmonary bypass, allowing a patch to be sewn over the defect. In 1976, King et al. demonstrated use of a transcatheter occlusion procedure, thus reducing the invasiveness of ASD repair. Localization during these catheter-based procedures has traditionally relied on bi-plane fluoroscopy; more recently, trans-esophageal echocardiography (TEE) and intra-cardiac echocardiography (ICE) have been used to navigate these procedures. Although there is a high success rate using the transcatheter occlusion procedure, fluoroscopy poses a radiation dose risk to both patient and clinician. The impact of this dose on patients is important, as many of those undergoing this procedure are children, who have an increased risk associated with radiation exposure. Their longer life expectancy than adults provides a larger window of opportunity for expressing the damaging effects of ionizing radiation. In addition, epidemiologic studies of exposed populations have demonstrated that children are considerably more sensitive to the carcinogenic effects of radiation. Image-guided surgery (IGS) uses pre-operative and intra-operative images to guide surgery or an interventional procedure. Central to every IGS system is a software application capable of processing and displaying patient images, registering between multiple coordinate systems, and interfacing with a tool tracking system. We have developed a novel image-guided surgery framework called Kit for Navigation by Image Focused Exploration (KNIFE). In this work we assess the efficacy of this image-guided navigation system for ASD repair using a series of mock clinical experiments designed to simulate ASD repair device deployment.

  9. Black light - How sensors filter spectral variation of the illuminant

    NASA Technical Reports Server (NTRS)

    Brainard, David H.; Wandell, Brian A.; Cowan, William B.

    1989-01-01

    Visual sensor responses may be used to classify objects on the basis of their surface reflectance functions. In a color image, the image data are represented as a vector of sensor responses at each point in the image. This vector depends both on the surface reflectance functions and on the spectral power distribution of the ambient illumination. Algorithms designed to classify objects on the basis of their surface reflectance functions typically attempt to overcome the dependence of the sensor responses on the illuminant by integrating sensor data collected from multiple surfaces. In machine vision applications, it is shown that it is often possible to design the sensor spectral responsivities so that the vector direction of the sensor responses does not depend upon the illuminant. The conditions under which this is possible are given and an illustrative calculation is performed. In biological systems, where the sensor responsivities are fixed, it is shown that some changes in the illumination cause no change in the sensor responses. Such changes in illuminant are called black illuminants. It is possible to express any illuminant as the sum of two unique components. One component is a black illuminant. The second component is called the visible component. The visible component of an illuminant completely characterizes the effect of the illuminant on the vector of sensor responses.
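
    The black/visible split can be written as a projection: with the sensor responsivities stacked as columns of a matrix, the visible component of an illuminant is its projection onto the span of those columns and the black component is the orthogonal residual, which by construction produces zero sensor response. A small numerical sketch (sampled spectra treated as plain vectors) follows; shapes and the least-squares projection are illustrative assumptions.

        import numpy as np

        def black_decomposition(illuminant_spd, responsivities):
            # illuminant_spd: length-N vector over wavelength samples.
            # responsivities: N x k matrix, one column per sensor class.
            R = responsivities
            visible = R @ np.linalg.solve(R.T @ R, R.T @ illuminant_spd)  # projection onto span(R)
            black = illuminant_spd - visible                              # invisible to the sensors
            return visible, black

        # Check: responsivities.T @ black is numerically zero, so adding a black
        # illuminant component leaves every sensor response unchanged.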

  10. A spatially-variant deconvolution method based on total variation for optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Almasganj, Mohammad; Adabi, Saba; Fatemizadeh, Emad; Xu, Qiuyun; Sadeghi, Hamid; Daveluy, Steven; Nasiriavanaki, Mohammadreza

    2017-03-01

    Optical Coherence Tomography (OCT) has great potential to elicit clinically useful information from tissues due to its high axial and transversal resolution. In practice, an OCT setup cannot reach its theoretical resolution due to imperfections of its components, which make its images blurry. The blurriness differs across regions of the image; thus, it cannot be modeled by a single point spread function (PSF). In this paper, we investigate the use of solid phantoms to estimate the PSF of each sub-region of the imaging system. We then utilize Lucy-Richardson, Hybr and total variation (TV) based iterative deconvolution methods to mitigate the resulting spatially variant blurriness. It is shown that the TV based method suppresses the so-called speckle noise in OCT images better than the two other approaches. The performance of the proposed algorithm is tested on various samples, including several skin tissues as well as a test image blurred with a synthetic PSF map, demonstrating qualitatively and quantitatively the advantage of the TV based deconvolution method using a spatially-variant PSF for enhancing image quality.
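
    A crude way to exploit per-sub-region PSFs is to deconvolve the image tile by tile, each tile with its own measured PSF, and stitch the results; the sketch below does this with scikit-image's Richardson-Lucy routine (the paper's TV-regularized and Hybr variants, and any blending across tile seams, are omitted). The tile size, iteration count and psf_map structure are assumptions.

        import numpy as np
        from skimage.restoration import richardson_lucy

        def blockwise_deconvolution(img, psf_map, tile=64):
            # img: float image scaled to roughly [0, 1].
            # psf_map[(i, j)]: 2-D PSF measured for tile row i, column j (e.g. from phantoms).
            out = np.zeros_like(img, dtype=float)
            for i in range(0, img.shape[0], tile):
                for j in range(0, img.shape[1], tile):
                    patch = img[i:i + tile, j:j + tile]
                    psf = psf_map[(i // tile, j // tile)]
                    out[i:i + tile, j:j + tile] = richardson_lucy(patch, psf, 30)
            return out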

  11. LandingNav: a precision autonomous landing sensor for robotic platforms on planetary bodies

    NASA Astrophysics Data System (ADS)

    Katake, Anup; Bruccoleri, Chrisitian; Singla, Puneet; Junkins, John L.

    2010-01-01

    Increased interest in the exploration of extraterrestrial planetary bodies calls for an increase in the number of spacecraft landing on remote planetary surfaces. Currently, imaging and radar based surveys are used to determine regions of interest and a safe landing zone. The purpose of this paper is to introduce LandingNav, a sensor system solution for autonomous landing on planetary bodies that enables landing on unknown terrain. LandingNav is based on a novel multiple field of view imaging system that leverages the integration of different state of the art technologies for feature detection, tracking, and 3D dense stereo map creation. In this paper we present the test flight results of the LandingNav system prototype. Sources of errors due to hardware limitations and processing algorithms were identified and will be discussed. This paper also shows that addressing the issues identified during the post-flight test data analysis will reduce the error down to 1-2%, thus providing a high-precision 3D range map sensor system.

  12. A New Concept for Geothermal Energy Extraction: The Radiator - Enhanced Geothermal System

    NASA Astrophysics Data System (ADS)

    Hilpert, M.; Geiser, P.; Marsh, B. D.; Malin, P. E.; Moore, S.

    2014-12-01

    Enhanced Geothermal Systems (EGS) in hot dry rock frequently underperform or fail due to insufficient reservoir characterization and poorly controlled permeability stimulation. Our new EGS design is based on the concept of a cooling radiator of an internal combustion engine, which we call the Radiator EGS (RAD-EGS). Within a hot sedimentary aquifer, we propose to construct vertically extensive heat exchanger vanes, which consist of rubblized zones of high permeability and which emulate a hydrothermal system. A "crows-foot" lateral drilling pattern at multiple levels is used to form a vertical array that includes S1 and Shmax. To create the radiator, we propose to use propellant fracing. System cool-down is delayed by regional background flow and induced upward flow of the coolant which initially heats the rock. Tomographic Fracture Imaging is used to image and control the permeability field changes. Preliminary heat transfer calculations suggest that the RAD-EGS will allow for commercial electricity production for at least several tens of years.

  13. Effective 3-D surface modeling for geographic information systems

    NASA Astrophysics Data System (ADS)

    Yüksek, K.; Alparslan, M.; Mendi, E.

    2013-11-01

    In this work, we propose a dynamic, flexible and interactive urban digital terrain platform (DTP) with spatial data and query processing capabilities of Geographic Information Systems (GIS), multimedia database functionality and graphical modeling infrastructure. A new data element, called Geo-Node, which stores image, spatial data and 3-D CAD objects is developed using an efficient data structure. The system effectively handles data transfer of Geo-Nodes between main memory and secondary storage with an optimized Directional Replacement Policy (DRP) based buffer management scheme. Polyhedron structures are used in Digital Surface Modeling (DSM) and smoothing process is performed by interpolation. The experimental results show that our framework achieves high performance and works effectively with urban scenes independent from the amount of spatial data and image size. The proposed platform may contribute to the development of various applications such as Web GIS systems based on 3-D graphics standards (e.g. X3-D and VRML) and services which integrate multi-dimensional spatial information and satellite/aerial imagery.

  14. Effective 3-D surface modeling for geographic information systems

    NASA Astrophysics Data System (ADS)

    Yüksek, K.; Alparslan, M.; Mendi, E.

    2016-01-01

    In this work, we propose a dynamic, flexible and interactive urban digital terrain platform with spatial data and query processing capabilities of geographic information systems, multimedia database functionality and graphical modeling infrastructure. A new data element, called Geo-Node, which stores image, spatial data and 3-D CAD objects is developed using an efficient data structure. The system effectively handles data transfer of Geo-Nodes between main memory and secondary storage with an optimized directional replacement policy (DRP) based buffer management scheme. Polyhedron structures are used in digital surface modeling and smoothing process is performed by interpolation. The experimental results show that our framework achieves high performance and works effectively with urban scenes independent from the amount of spatial data and image size. The proposed platform may contribute to the development of various applications such as Web GIS systems based on 3-D graphics standards (e.g., X3-D and VRML) and services which integrate multi-dimensional spatial information and satellite/aerial imagery.

  15. Statistical framework for the utilization of simultaneous pupil plane and focal plane telemetry for exoplanet imaging. I. Accounting for aberrations in multiple planes.

    PubMed

    Frazin, Richard A

    2016-04-01

    A new generation of telescopes with mirror diameters of 20 m or more, called extremely large telescopes (ELTs), has the potential to provide unprecedented imaging and spectroscopy of exoplanetary systems, if the difficulties in achieving the extremely high dynamic range required to differentiate the planetary signal from the star can be overcome to a sufficient degree. Fully utilizing the potential of ELTs for exoplanet imaging will likely require simultaneous and self-consistent determination of both the planetary image and the unknown aberrations in multiple planes of the optical system, using statistical inference based on the wavefront sensor and science camera data streams. This approach promises to overcome the most important systematic errors inherent in the various schemes based on differential imaging, such as angular differential imaging and spectral differential imaging. This paper is the first in a series on this subject, in which a formalism is established for the exoplanet imaging problem, setting the stage for the statistical inference methods to follow in the future. Every effort has been made to be rigorous and complete, so that validity of approximations to be made later can be assessed. Here, the polarimetric image is expressed in terms of aberrations in the various planes of a polarizing telescope with an adaptive optics system. Further, it is shown that current methods that utilize focal plane sensing to correct the speckle field, e.g., electric field conjugation, rely on the tacit assumption that aberrations on multiple optical surfaces can be represented as aberration on a single optical surface, ultimately limiting their potential effectiveness for ground-based astronomy.

  16. Synergetic computer and holonics - information dynamics of a semantic computer

    NASA Astrophysics Data System (ADS)

    Shimizu, H.; Yamaguchi, Y.

    1987-12-01

    The dynamics of semantic information in biosystems is studied based on holons, generators of mutual relations. Any biosystem has an internal world, a so-called "self", which has an intrinsic purpose: keeping the system continuously alive and developing as much as possible against a fluctuating external world. External signals reaching the system through sensory organs are classified by the self into two basic categories: semantic information with some meaning and value for that purpose, and inputs from background and noise sources. Due to this breaking of semantic symmetry, input signals are transformed into a figure and a background, respectively. As a typical example, the visual perception of vertebrates is studied. For such semantic transformation the external signal is first decomposed and converted into a number of elementary signs named "syntons", which are then transmitted into a sensory area of the cortex corresponding to an image synthesizer. The synthesizer is a sort of autonomic parallel processor composed of autonomic units, "holons", which are characterized by many internal modes. Syntons are fed into the holons one by one. A set of elementary meanings, the so-called "semons", provided to the synton is encoded in the internal modes of the holon; that is, each internal mode encodes a semon. A dynamic information theory for the transformation of external signals into semantic information is developed based on our model, which we call holovision. Holovision is a dynamic model of visual perception that possesses an autonomic ability to self-organize visual images. Autonomous oscillators are utilized as the line processors to encode line elements with specific orientations in their phases as semons. An information space is defined according to the assembly of holons; the spatial plane on which the holons are arranged is a syntactic subspace, while the internal modes of the holons span a semantic subspace in the orthogonal direction. In this information space, the image of a figure is self-organized - as a sort of spatiotemporal pattern - by autonomic coordination of the holons that select relevant internal modes, accompanied by compression of irrelevant syntons that correspond to the background. Holons coded by a synton are relevantly connected by means of coherent relations, i.e., dynamic connections with time-coherence, in order to represent the image that varies in time depending on the instantaneous state of the external object. These connections depend on the internal modes that are cooperatively selected by the holons. The image is regarded as a bridge between the external and internal world that has both external and internal consistency. The meaning of the image, i.e., the transformed semantic information, is spontaneously transferred from semantic items that have a coherent relation with the image, and the external signal is perceived by the self through the image. We demonstrate that images are indeed self-organized in holovision in the previously described sense. Simulated processes of the creation of semantic information in holovision are shown to display typical features of the foregoing steps of information compression. Based on these results, we propose quantitative indices that represent the value of semantic information in the image processor as well as in the memory.

  17. Dental magnetic resonance imaging: making the invisible visible.

    PubMed

    Idiyatullin, Djaudat; Corum, Curt; Moeller, Steen; Prasad, Hari S; Garwood, Michael; Nixdorf, Donald R

    2011-06-01

    Clinical dentistry is in need of noninvasive and accurate diagnostic methods to better evaluate dental pathosis. The purpose of this work was to assess the feasibility of a recently developed magnetic resonance imaging (MRI) technique, called SWeep Imaging with Fourier Transform (SWIFT), to visualize dental tissues. Three in vitro teeth, representing a limited range of clinical conditions of interest, were imaged using a 9.4T system with scanning times ranging from 100 seconds to 25 minutes. In vivo imaging of a subject was performed using a 4T system with a 10-minute scanning time. SWIFT images were compared with traditional two-dimensional radiographs, three-dimensional cone-beam computed tomography (CBCT) scanning, the gradient-echo MRI technique, and histological sections. A resolution of 100 μm was obtained from the in vitro teeth. SWIFT also identified the presence and extent of dental caries and fine structures of the teeth, including cracks and accessory canals, which are not visible with existing clinical radiography techniques. Intraoral positioning of the radiofrequency coil produced initial images of multiple adjacent teeth at a resolution of 400 μm. SWIFT MRI offers simultaneous three-dimensional hard- and soft-tissue imaging of teeth without the use of ionizing radiation. Furthermore, it has the potential to image minute dental structures within clinically relevant scanning times. This technology has implications for endodontists because it offers a potential method to longitudinally evaluate teeth where pulp and root structures have been regenerated. Copyright © 2011 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  18. Computerized follow-up of discrepancies in image interpretation between emergency and radiology departments.

    PubMed

    Siegel, E; Groleau, G; Reiner, B; Stair, T

    1998-08-01

    Radiographs are ordered and interpreted for immediate clinical decisions 24 hours a day by emergency physicians (EPs). The Joint Commission for Accreditation of Health Care Organizations requires that all these images be reviewed by radiologists and that there be some mechanism for quality improvement (QI) for discrepant readings. There must be a log of discrepancies and documentation of follow-up activities, but this alone does not guarantee effective QI. Radiologists reviewing images from the previous day and night often must guess at the preliminary interpretation of the EP and whether follow-up action is necessary. EPs may remain ignorant of the final reading and falsely assume the initial diagnosis and treatment were correct. Some hospitals use a paper system in which the EP writes a preliminary interpretation on the requisition slip, which will be available when the radiologist dictates the final reading. Some hospitals use a classification of discrepancies based on clinical import and urgency, communicated to the EP on duty at the time of the official reading, but may not communicate discrepancies to the EPs who initially read the images. Our computerized radiology department and picture archiving and communications system (PACS) have increased technologist and radiologist productivity and decreased retakes and lost films. There are fewer face-to-face consultations between radiologists and clinicians, but more communication by telephone and electronic annotation of PACS images. We have integrated the QI process for emergency department (ED) images into the PACS and gained advantages over the traditional discrepancy log. Requisitions including clinical indications are entered into the Hospital Information System and then appear on the PACS along with the images for reading. The initial impression, the time of review, and the initials of the EP are available to the radiologist dictating the official report. The radiologist decides if there is a discrepancy, and whether it is category I (potentially serious, needs immediate follow-up), category II (moderate risk, follow-up in one day), or category III (low risk, follow-up in several days). During the working day, the radiologist calls immediately for category I discrepancies. Those noted from the evening, night, or weekend before are called in to the EP the next morning. All discrepancies with the preliminary interpretation are communicated to the EP and are kept in a computerized log for review by a radiologist at a weekly ED teaching conference. This system has reduced the need for the radiologist to ask or guess what the impression was in the ED the night before. It has reduced the variability in the recording of impressions by EPs, in communication back from radiologists, in the clinical follow-up made, and in the documentation of the whole QI process. This system ensures that EPs receive notification of their discrepant readings and provides continuing education to all the EPs on interpreting images of their patients.

  19. Lucky imaging multiplicity studies of exoplanet host stars

    NASA Astrophysics Data System (ADS)

    Ginski, C.; Mugrauer, M.; Neuhäuser, R.

    2014-03-01

    The multiplicity of stars is an important parameter for understanding star and planet formation. In the past decades extrasolar planets have been discovered around more than 600 stars with the radial velocity and transit techniques. Many of these systems present extreme cases of massive planetary objects at very close separations from their primary stars. Explaining the configurations of such systems is hence a continued challenge in the development of formation theories. It will be very interesting to determine whether there are significant differences between planets in single and multiple star systems. In our ongoing study we use high-resolution imaging techniques to clarify the multiplicity status of nearby (within 250 pc) planet host stars. For targets in the northern hemisphere we employ the lucky imaging instrument Astralux at the 2.2 m telescope of the Calar Alto Observatory. The lucky imaging approach consists of taking several thousand short images with integration times shorter than the speckle coherence time, to sample the speckle variations during the observation window. We then choose only the so-called "lucky shots" with a very high Strehl ratio in one of the speckles to shift and add, resulting in a final image with the highest possible Strehl ratio and therefore the highest possible angular resolution. We will present recent results of our study at the Calar Alto Observatory, as well as observations undertaken with the RTK camera at the 20 cm guiding telescope in our own observatory in Großschwabhausen.
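
    The core of the lucky-imaging pipeline is simple to sketch: score each short exposure with a sharpness proxy (here the peak pixel value, standing in for the Strehl ratio), keep the best few percent, re-centre each kept frame on its brightest speckle, and average. The selection fraction and the crude integer re-centering below are illustrative simplifications, not the Astralux reduction pipeline.

        import numpy as np

        def lucky_shift_and_add(frames, keep_fraction=0.1):
            # frames: stack of short exposures shaped (N, H, W).
            frames = np.asarray(frames, dtype=float)
            scores = frames.max(axis=(1, 2))                   # sharpness proxy per frame
            n_keep = max(1, int(keep_fraction * len(frames)))
            best = frames[np.argsort(scores)[-n_keep:]]        # the "lucky shots"
            h, w = best.shape[1:]
            acc = np.zeros((h, w))
            for f in best:
                y, x = np.unravel_index(np.argmax(f), f.shape) # brightest speckle
                acc += np.roll(np.roll(f, h // 2 - y, axis=0), w // 2 - x, axis=1)
            return acc / n_keep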

  20. Blueberries Inside Popcorn

    NASA Image and Video Library

    2004-08-18

    This view from the microscopic imager on NASA's Mars Exploration Rover Opportunity shows a type of light-colored, rough-textured spherule that scientists call "popcorn," in contrast to the darker, smoother spherules called "blueberries."

  1. SUMIRAD: a near real-time MMW radiometer imaging system for threat detection in an urban environment

    NASA Astrophysics Data System (ADS)

    Dill, Stephan; Peichl, Markus; Rudolf, Daniel

    2012-10-01

    The armed forces are nowadays confronted with a wide variety of types of operations. During peacekeeping missions in an urban environment, where small units patrol the streets with armored vehicles, the team leader is confronted with a very complex threat situation. The asymmetric threat arises in most cases from so-called IEDs (improvised explosive devices), which are found in a multitude of versions. In order to avoid risky situations, the early detection of possible threats by advanced reconnaissance and surveillance sensors will provide an important advantage. A European consortium consisting of GMV S.A. (Spain, "Grupo Tecnològico e Industrial"), RMA (Belgium, "Royal Military Academy"), TUM ("Technische Universität München") and DLR (Germany, "Deutsches Zentrum für Luft- und Raumfahrt") developed in the SUM project (Surveillance in an urban environment using mobile sensors) a low-cost, multi-sensor, vehicle-based surveillance system in order to enhance situational awareness for moving security and military patrols as well as for static checkpoints. The project was funded by the European Defence Agency (EDA) in the Joint Investment Program on Force Protection (JIP-FP). The SUMIRAD (SUM imaging radiometer) system, developed by DLR, is a fast radiometric imager and part of the SUM sensor suite. This paper presents the principle of the SUMIRAD system and its key components. Furthermore, the image processing is described. Imaging results from several measurement campaigns are presented. The overall SUM system and the individual subsystems are presented in more detail in separate papers at this conference.

  2. Multiple Active Contours Guided by Differential Evolution for Medical Image Segmentation

    PubMed Central

    Cruz-Aceves, I.; Avina-Cervantes, J. G.; Lopez-Hernandez, J. M.; Rostro-Gonzalez, H.; Garcia-Capulin, C. H.; Torres-Cisneros, M.; Guzman-Cabrera, R.

    2013-01-01

    This paper presents a new image segmentation method based on multiple active contours guided by differential evolution, called MACDE. The segmentation method uses differential evolution over a polar coordinate system to increase the exploration and exploitation capabilities regarding the classical active contour model. To evaluate the performance of the proposed method, a set of synthetic images with complex objects, Gaussian noise, and deep concavities is introduced. Subsequently, MACDE is applied on datasets of sequential computed tomography and magnetic resonance images which contain the human heart and the human left ventricle, respectively. Finally, to obtain a quantitative and qualitative evaluation of the medical image segmentations compared to regions outlined by experts, a set of distance and similarity metrics has been adopted. According to the experimental results, MACDE outperforms the classical active contour model and the interactive Tseng method in terms of efficiency and robustness for obtaining the optimal control points and attains a high accuracy segmentation. PMID:23983809
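
    To show how differential evolution can drive a polar-parameterised active contour (a single-contour toy version; MACDE itself evolves multiple contours with its own operators), the sketch below searches for radii that place the contour on strong image gradients while penalising roughness. The point count, bounds and weights are arbitrary assumptions.

        import numpy as np
        from scipy.ndimage import map_coordinates, sobel
        from scipy.optimize import differential_evolution

        def segment_polar_contour(img, center, n_points=24, r_max=60):
            # Edge map used as the external energy term.
            grad = np.hypot(sobel(img.astype(float), 0), sobel(img.astype(float), 1))
            theta = np.linspace(0.0, 2 * np.pi, n_points, endpoint=False)
            cy, cx = center

            def energy(radii):
                ys = cy + radii * np.sin(theta)
                xs = cx + radii * np.cos(theta)
                edge = map_coordinates(grad, [ys, xs], order=1).mean()   # want strong edges
                smooth = np.mean(np.diff(radii, append=radii[0]) ** 2)   # want a smooth contour
                return -edge + 0.05 * smooth

            result = differential_evolution(energy, bounds=[(2, r_max)] * n_points, seed=0)
            return result.x  # optimised radius for each control point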

  3. Fires in Australia's Northern Territory and Bathurst Island

    NASA Image and Video Library

    2017-12-08

    The Aqua satellite collected this natural-color image of fires in Australia with the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument on June 30, 2017. The image looks at multiple fires and smoke from those fires burning in northern Australia and the island of Bathurst on June 30, 2017. The Northern Territory fire incident map does show some incidents of grass and shrub fires in the past 24 hours, but it also shows areas of what are called "strategic fires," which are those set by fire experts to rid an area of overgrowth, brush, dead grass and shrubs to prevent fires from spreading in the event of a lightning strike. NASA image courtesy Jeff Schmaltz, MODIS Rapid Response Team

  4. Hubble Solves Mystery on Source of Supernova in Nearby Galaxy

    NASA Image and Video Library

    2017-12-08

    NASA image release January 11, 2012 Using NASA's Hubble Space Telescope, astronomers have solved a longstanding mystery on the type of star, or so-called progenitor, that caused a supernova in a nearby galaxy. The finding yields new observational data for pinpointing one of several scenarios that could trigger such outbursts. Based on previous observations from ground-based telescopes, astronomers knew that a kind of supernova called a Type Ia created a remnant named SNR 0509-67.5, which lies 170,000 light-years away in the Large Magellanic Cloud galaxy. The type of system that leads to this kind of supernova explosion has long been a high importance problem with various proposed solutions but no decisive answer. All these solutions involve a white dwarf star that somehow increases in mass to the highest limit. Astronomers failed to find any companion star near the center of the remnant, and this rules out all but one solution, so the only remaining possibility is that this one Type Ia supernova came from a pair of white dwarfs in close orbit. To read more go to: www.nasa.gov/mission_pages/hubble/science/supernova-sourc... Image Credit: NASA, ESA, CXC, SAO, the Hubble Heritage Team (STScI/AURA), and J. Hughes (Rutgers University)

  5. Designing and researching of the virtual display system based on the prism elements

    NASA Astrophysics Data System (ADS)

    Vasilev, V. N.; Grimm, V. A.; Romanova, G. E.; Smirnov, S. A.; Bakholdin, A. V.; Grishina, N. Y.

    2014-05-01

    The design of virtual display systems for augmented reality placed near the observer's eye (so-called head-worn displays) with light-guide prismatic elements is considered. An augmented-reality system is a complex consisting of an image generator (most often a microdisplay with an illumination system, if the display is not self-luminous), an objective that forms the display image practically at infinity, and a combiner that splits the light so that the observer sees the information on the microdisplay and the surrounding environment as background at the same time. This work deals with a system whose combiner is based on a composite structure of prism elements. Three cases of the prism combiner design are considered, and results of modeling with optical design software are presented. The model is used to analyze the question of a large pupil zone and the discontinuous character (mosaic structure) of the angular field when information is transmitted from the microdisplay to the observer's eye through the prismatic structure.

  6. Fluoromodule-based reporter/probes designed for in vivo fluorescence imaging

    PubMed Central

    Zhang, Ming; Chakraborty, Subhasish K.; Sampath, Padma; Rojas, Juan J.; Hou, Weizhou; Saurabh, Saumya; Thorne, Steve H.; Bruchez, Marcel P.; Waggoner, Alan S.

    2015-01-01

    Optical imaging of whole, living animals has proven to be a powerful tool in multiple areas of preclinical research and has allowed noninvasive monitoring of immune responses, tumor and pathogen growth, and treatment responses in longitudinal studies. However, fluorescence-based studies in animals are challenging because tissue absorbs and autofluoresces strongly in the visible light spectrum. These optical properties drive development and use of fluorescent labels that absorb and emit at longer wavelengths. Here, we present a far-red absorbing fluoromodule–based reporter/probe system and show that this system can be used for imaging in living mice. The probe we developed is a fluorogenic dye called SC1 that is dark in solution but highly fluorescent when bound to its cognate reporter, Mars1. The reporter/probe complex, or fluoromodule, produced peak emission near 730 nm. Mars1 was able to bind a variety of structurally similar probes that differ in color and membrane permeability. We demonstrated that a tool kit of multiple probes can be used to label extracellular and intracellular reporter–tagged receptor pools with 2 colors. Imaging studies may benefit from this far-red excited reporter/probe system, which features tight coupling between probe fluorescence and reporter binding and offers the option of using an expandable family of fluorogenic probes with a single reporter gene. PMID:26348895

  7. Fast Imaging Solar Spectrograph System in New Solar Telescope

    NASA Astrophysics Data System (ADS)

    Park, Y.-D.; Kim, Y. H.; Chae, J.; Goode, P. R.; Cho, K. S.; Park, H. M.; Nah, J. K.; Jang, B. H.

    2010-12-01

    In 2004, Big Bear Solar Observatory in California, USA, launched a project to construct the world's largest-aperture solar telescope (D = 1.6 m), called the New Solar Telescope (NST). The University of Hawaii (UH) and the Korea Astronomy and Space Science Institute (KASI) partly collaborate on the project. The NST is designed as an off-axis parabolic Gregorian reflector with very high spatial resolution (0.07 arcsec at 5000 A) and is equipped with several scientific instruments such as the Visible Imaging Magnetograph (VIM), the InfraRed Imaging Magnetograph (IRIM), and so on. Since these instruments focus on studies of the solar photosphere, a post-focus instrument is needed for the NST to study the fine structures and dynamic patterns of the solar chromosphere and low Transition Region (TR) layer, including filaments/prominences, spicules, jets, micro flares, etc. For this reason, we developed and installed a Fast Imaging Solar Spectrograph (FISS) system on the NST, with the advantages of a compact design, high spectral resolution, and small aberration, as well as the ability to record many solar spectral lines in single and/or dual band mode. FISS was installed in May 2010, and test observations are now under way. In this talk, we introduce the FISS system and the results of the test observations after installation.

  8. Portable image-manipulation software: what is the extra development cost?

    PubMed

    Ligier, Y; Ratib, O; Funk, M; Perrier, R; Girard, C; Logean, M

    1992-08-01

    A hospital-wide picture archiving and communication system (PACS) project is currently under development at the University Hospital of Geneva. The visualization and manipulation of images provided by different imaging modalities constitutes one of the most challenging components of a PACS. It was necessary to provide this visualization software on a number of types of workstations because of the varying requirements imposed by the range of clinical uses it must serve. The user interface must be the same, independent of the underlying workstation. In addition to a standard set of image-manipulation and processing tools, there is a need for more specific clinical tools that can be easily adapted to specific medical requirements. To achieve this goal, it was decided to develop modular and portable software called OSIRIS. This software is available on two different operating systems (the UNIX standard X-11/OSF-Motif based workstations and the Macintosh family) and can be easily ported to other systems. The extra effort required to design such software in a modular and portable way was worthwhile because it resulted in a platform that can be easily expanded and adapted to a variety of specific clinical applications. Its portability allows users to benefit from the rapidly evolving workstation technology and to adapt the performance to suit their needs.

  9. NutriNet: A Deep Learning Food and Drink Image Recognition System for Dietary Assessment

    PubMed Central

    Koroušić Seljak, Barbara

    2017-01-01

    Automatic food image recognition systems are alleviating the process of food-intake estimation and dietary assessment. However, due to the nature of food images, their recognition is a particularly challenging task, which is why traditional approaches in the field have achieved a low classification accuracy. Deep neural networks have outperformed such solutions, and we present a novel approach to the problem of food and drink image detection and recognition that uses a newly-defined deep convolutional neural network architecture, called NutriNet. This architecture was tuned on a recognition dataset containing 225,953 512 × 512 pixel images of 520 different food and drink items from a broad spectrum of food groups, on which we achieved a classification accuracy of 86.72%, along with an accuracy of 94.47% on a detection dataset containing 130,517 images. We also performed a real-world test on a dataset of self-acquired images, combined with images from Parkinson’s disease patients, all taken using a smartphone camera, achieving a top-five accuracy of 55%, which is an encouraging result for real-world images. Additionally, we tested NutriNet on the University of Milano-Bicocca 2016 (UNIMIB2016) food image dataset, on which we improved upon the provided baseline recognition result. An online training component was implemented to continually fine-tune the food and drink recognition model on new images. The model is being used in practice as part of a mobile app for the dietary assessment of Parkinson’s disease patients. PMID:28653995
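
    For orientation only, the sketch below shows the general shape of such a training pipeline in PyTorch; it is not the NutriNet architecture or training recipe. A stock ResNet-18 stands in for the custom network, the "food_images/" directory is a placeholder, and only the output width (520 classes) and the 512 x 512 input size are taken from the abstract.

      # Hedged sketch of a food/drink classifier training loop (stand-in model).
      import torch
      import torch.nn as nn
      from torch.utils.data import DataLoader
      from torchvision import datasets, transforms, models

      NUM_CLASSES = 520

      def build_model():
          model = models.resnet18(weights=None)        # stand-in backbone, not NutriNet
          model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
          return model

      def train(data_dir="food_images/", epochs=1, lr=1e-3):
          tfm = transforms.Compose([
              transforms.Resize((512, 512)),           # input size from the abstract
              transforms.ToTensor(),
          ])
          ds = datasets.ImageFolder(data_dir, transform=tfm)
          loader = DataLoader(ds, batch_size=16, shuffle=True, num_workers=2)

          device = "cuda" if torch.cuda.is_available() else "cpu"
          model = build_model().to(device)
          opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
          loss_fn = nn.CrossEntropyLoss()

          model.train()
          for _ in range(epochs):
              for images, labels in loader:
                  images, labels = images.to(device), labels.to(device)
                  opt.zero_grad()
                  loss = loss_fn(model(images), labels)
                  loss.backward()
                  opt.step()
          return model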

  10. Spectral Prior Image Constrained Compressed Sensing (Spectral PICCS) for Photon-Counting Computed Tomography

    PubMed Central

    Yu, Zhicong; Leng, Shuai; Li, Zhoubo; McCollough, Cynthia H.

    2016-01-01

    Photon-counting computed tomography (PCCT) is an emerging imaging technique that enables multi-energy imaging with only a single scan acquisition. To enable multi-energy imaging, the detected photons corresponding to the full x-ray spectrum are divided into several subgroups of bin data that correspond to narrower energy windows. Consequently, noise in each energy bin increases compared to the full-spectrum data. This work proposes an iterative reconstruction algorithm for noise suppression in the narrower energy bins used in PCCT imaging. The algorithm is based on the framework of prior image constrained compressed sensing (PICCS) and is called spectral PICCS; it uses the full-spectrum image reconstructed using conventional filtered back-projection as the prior image. The spectral PICCS algorithm is implemented using a constrained optimization scheme with adaptive iterative step sizes such that only two tuning parameters are required in most cases. The algorithm was first evaluated using computer simulations, and then validated by both physical phantoms and in-vivo swine studies using a research PCCT system. Results from both computer-simulation and experimental studies showed substantial image noise reduction in narrow energy bins (43~73%) without sacrificing CT number accuracy or spatial resolution. PMID:27551878
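
    A heavily simplified, unconstrained sketch of the PICCS idea is given below: the reconstruction penalizes the total variation of both the energy-bin image and its difference from the full-spectrum prior, plus a data-fidelity term, and is initialized with the prior image. The actual spectral PICCS algorithm uses a constrained optimization scheme with adaptive iterative step sizes; the forward projector A, its adjoint At, and all weights here are placeholders.

      # Simplified, unconstrained sketch of the PICCS-style objective.
      import numpy as np

      def tv_grad(x, eps=1e-6):
          # Gradient of a smoothed (isotropic) total-variation term.
          dx = np.diff(x, axis=1, append=x[:, -1:])
          dy = np.diff(x, axis=0, append=x[-1:, :])
          mag = np.sqrt(dx**2 + dy**2 + eps)
          px, py = dx / mag, dy / mag
          div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
          return -div

      def spectral_piccs(y, A, At, x_prior, alpha=0.5, lam=0.1, step=1e-3, iters=200):
          x = x_prior.copy()                     # start from the full-spectrum prior image
          for _ in range(iters):
              grad = (alpha * tv_grad(x - x_prior)        # stay close to the prior's structure
                      + (1.0 - alpha) * tv_grad(x)        # keep the bin image itself smooth
                      + lam * At(A(x) - y))               # data fidelity for this energy bin
              x -= step * grad
          return x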

  11. Spectral prior image constrained compressed sensing (spectral PICCS) for photon-counting computed tomography

    NASA Astrophysics Data System (ADS)

    Yu, Zhicong; Leng, Shuai; Li, Zhoubo; McCollough, Cynthia H.

    2016-09-01

    Photon-counting computed tomography (PCCT) is an emerging imaging technique that enables multi-energy imaging with only a single scan acquisition. To enable multi-energy imaging, the detected photons corresponding to the full x-ray spectrum are divided into several subgroups of bin data that correspond to narrower energy windows. Consequently, noise in each energy bin increases compared to the full-spectrum data. This work proposes an iterative reconstruction algorithm for noise suppression in the narrower energy bins used in PCCT imaging. The algorithm is based on the framework of prior image constrained compressed sensing (PICCS) and is called spectral PICCS; it uses the full-spectrum image reconstructed using conventional filtered back-projection as the prior image. The spectral PICCS algorithm is implemented using a constrained optimization scheme with adaptive iterative step sizes such that only two tuning parameters are required in most cases. The algorithm was first evaluated using computer simulations, and then validated by both physical phantoms and in vivo swine studies using a research PCCT system. Results from both computer-simulation and experimental studies showed substantial image noise reduction in narrow energy bins (43-73%) without sacrificing CT number accuracy or spatial resolution.

  12. Power cavitation-guided blood-brain barrier opening with focused ultrasound and microbubbles

    NASA Astrophysics Data System (ADS)

    Burgess, M. T.; Apostolakis, I.; Konofagou, E. E.

    2018-03-01

    Image-guided monitoring of microbubble-based focused ultrasound (FUS) therapies relies on the accurate localization of FUS-stimulated microbubble activity (i.e. acoustic cavitation). Passive cavitation imaging with ultrasound arrays can achieve this, but with insufficient spatial resolution. In this study, we address this limitation and perform high-resolution monitoring of acoustic cavitation-mediated blood-brain barrier (BBB) opening with a new technique called power cavitation imaging. By synchronizing the FUS transmit and passive receive acquisition, high-resolution passive cavitation imaging was achieved by using delay and sum beamforming with absolute time delays. Since the axial image resolution is now dependent on the duration of the received acoustic cavitation emission, short pulses of FUS were used to limit its duration. Image sets were acquired at high-frame rates for calculation of power cavitation images analogous to power Doppler imaging. Power cavitation imaging displays the mean intensity of acoustic cavitation over time and was correlated with areas of acoustic cavitation-induced BBB opening. Power cavitation-guided BBB opening with FUS could constitute a standalone system that may not require MRI guidance during the procedure. The same technique can be used for other acoustic cavitation-based FUS therapies, for both safety and guidance.
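
    The sketch below illustrates, in simplified form, the two ingredients described above: passive delay-and-sum beamforming with absolute time delays (emission time plus one-way travel time to each element), followed by averaging the beamformed intensity over a high-frame-rate set of frames to form a power image. The linear-array geometry, sound speed, and sampling parameters are generic assumptions rather than the authors' system settings.

      # Hedged sketch of passive delay-and-sum beamforming and power imaging.
      import numpy as np

      def das_passive(rf, element_x, grid_x, grid_z, fs, c=1540.0, t0=0.0):
          # rf: (n_elements, n_samples) passively received channel data for one frame.
          n_elem, n_samp = rf.shape
          image = np.zeros((grid_z.size, grid_x.size))
          for iz, z in enumerate(grid_z):
              for ix, x in enumerate(grid_x):
                  # Absolute delay: emission time t0 plus one-way travel time to each element.
                  d = np.sqrt((element_x - x) ** 2 + z ** 2)
                  idx = np.round((t0 + d / c) * fs).astype(int)
                  valid = (idx >= 0) & (idx < n_samp)
                  image[iz, ix] = rf[np.arange(n_elem)[valid], idx[valid]].sum()
          return image

      def power_cavitation_image(frames, element_x, grid_x, grid_z, fs):
          # Mean beamformed intensity over a high-frame-rate acquisition.
          return np.mean(
              [das_passive(f, element_x, grid_x, grid_z, fs) ** 2 for f in frames],
              axis=0)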

  13. Power cavitation-guided blood-brain barrier opening with focused ultrasound and microbubbles.

    PubMed

    Burgess, M T; Apostolakis, I; Konofagou, E E

    2018-03-15

    Image-guided monitoring of microbubble-based focused ultrasound (FUS) therapies relies on the accurate localization of FUS-stimulated microbubble activity (i.e. acoustic cavitation). Passive cavitation imaging with ultrasound arrays can achieve this, but with insufficient spatial resolution. In this study, we address this limitation and perform high-resolution monitoring of acoustic cavitation-mediated blood-brain barrier (BBB) opening with a new technique called power cavitation imaging. By synchronizing the FUS transmit and passive receive acquisition, high-resolution passive cavitation imaging was achieved by using delay and sum beamforming with absolute time delays. Since the axial image resolution is now dependent on the duration of the received acoustic cavitation emission, short pulses of FUS were used to limit its duration. Image sets were acquired at high-frame rates for calculation of power cavitation images analogous to power Doppler imaging. Power cavitation imaging displays the mean intensity of acoustic cavitation over time and was correlated with areas of acoustic cavitation-induced BBB opening. Power cavitation-guided BBB opening with FUS could constitute a standalone system that may not require MRI guidance during the procedure. The same technique can be used for other acoustic cavitation-based FUS therapies, for both safety and guidance.

  14. A digital video tracking system

    NASA Astrophysics Data System (ADS)

    Giles, M. K.

    1980-01-01

    The Real-Time Videotheodolite (RTV) was developed in connection with the requirement to replace film as a recording medium to obtain the real-time location of an object in the field-of-view (FOV) of a long focal length theodolite. Design philosophy called for a system capable of discriminatory judgment in identifying the object to be tracked with 60 independent observations per second, capable of locating the center of mass of the object projection on the image plane within about 2% of the FOV in rapidly changing background/foreground situations, and able to generate a predicted observation angle for the next observation. A description is given of a number of subsystems of the RTV, taking into account the processor configuration, the video processor, the projection processor, the tracker processor, the control processor, and the optics interface and imaging subsystem.
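
    A toy software analogue of those functions (the real RTV used dedicated hardware processors) is sketched below: a weighted centroid locates the object's center of mass in each frame, and a simple alpha-beta filter produces the predicted position for the next observation at 60 observations per second. All thresholds and gains are illustrative.

      # Toy centroid tracker with an alpha-beta predictor (illustrative only).
      import numpy as np

      def centroid(frame, threshold):
          mask = frame > threshold
          if not mask.any():
              return None
          ys, xs = np.nonzero(mask)
          w = frame[ys, xs]
          return np.array([np.average(ys, weights=w), np.average(xs, weights=w)])

      class AlphaBetaPredictor:
          def __init__(self, alpha=0.85, beta=0.005, dt=1 / 60.0):  # 60 observations/s
              self.alpha, self.beta, self.dt = alpha, beta, dt
              self.pos = None
              self.vel = np.zeros(2)

          def update(self, measurement):
              if self.pos is None:
                  self.pos = measurement
                  return self.pos
              pred = self.pos + self.vel * self.dt
              residual = measurement - pred
              self.pos = pred + self.alpha * residual
              self.vel = self.vel + (self.beta / self.dt) * residual
              return self.pos + self.vel * self.dt   # predicted next observation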

  15. Earth Observing System Data Gateway

    NASA Technical Reports Server (NTRS)

    Pfister, Robin; McMahon, Joe; Amrhein, James; Sefert, Ed; Marsans, Lorena; Solomon, Mark; Nestler, Mark

    2006-01-01

    The Earth Observing System Data Gateway (EDG) software provides a "one-stop-shopping" standard interface for exploring and ordering Earth-science data stored at geographically distributed sites. EDG enables a user to do the following: 1) Search for data according to high-level criteria (e.g., geographic location, time, or satellite that acquired the data); 2) Browse the results of a search, viewing thumbnail sketches of data that satisfy the user's criteria; and 3) Order selected data for delivery to a specified address on a chosen medium (e.g., compact disk or magnetic tape). EDG consists of (1) a component that implements a high-level client/server protocol, and (2) a collection of C-language libraries that implement the passing of protocol messages between an EDG client and one or more EDG servers. EDG servers are located at sites usually called "Distributed Active Archive Centers" (DAACs). Each DAAC may allow access to many individual data items, called "granules" (e.g., single Landsat images). Related granules are grouped into collections called "data sets." EDG enables a user to send a search query to multiple DAACs simultaneously, inspect the resulting information, select browseable granules, and then order selected data from the different sites in a seamless fashion.

  16. Effect analysis of oil paint on the space optical contamination

    NASA Astrophysics Data System (ADS)

    Lu, Chun-lian; Lv, He; Han, Chun-xu; Wei, Hai-Bin

    2013-08-01

    Contamination of spacecraft surfaces is a central topic in spacecraft environment engineering and protection. Since the late 20th century, many American satellites have malfunctioned because of space contamination. Space optical systems are usually exposed to the external space environment, and particulate contamination degrades their detection ability; we call this optical damage. It also degrades the spectral imaging quality of the whole system. In this paper, the effects of contamination on spectral imaging are discussed. An experiment was designed to observe the magnitude of the effect, and numerical curve fitting was used to analyze the relationship between the optical damage factor (transmittance decay factor) and the contamination degree of the optical system. Results are given for six specific wavelengths from 450 to 700 nm, yielding a function relating the optical damage factor to the contamination degree. Three colors of oil paint were compared. Through numerical curve fitting and data processing, we obtained the mass thickness at which the transmittance decreases to 50% and to 30% for the different colors of oil paint. Comparisons and research conclusions about the contamination effects of oil paint on the spectral imaging system are given.
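
    The sketch below shows the kind of curve fitting described, under an assumed Beer-Lambert-style exponential model relating the transmittance decay factor to contaminant mass thickness; the measurement values are hypothetical and the paper's actual fitted function is not reproduced. Inverting the fitted model gives the mass thickness at which transmittance falls to 50% or 30%.

      # Hedged sketch: fit transmittance decay vs. mass thickness and invert it.
      import numpy as np
      from scipy.optimize import curve_fit

      def decay_model(thickness, k):
          # Assumed Beer-Lambert-like attenuation; k is an effective extinction coefficient.
          return np.exp(-k * thickness)

      # Hypothetical measurements: mass thickness (ug/cm^2) vs. transmittance decay factor.
      thickness = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
      transmittance = np.array([1.00, 0.82, 0.66, 0.45, 0.20])

      (k_fit,), _ = curve_fit(decay_model, thickness, transmittance, p0=[0.05])

      for target in (0.5, 0.3):
          print(f"thickness at T={target:.0%}: {-np.log(target) / k_fit:.1f} ug/cm^2")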

  17. Imaging Radar in the Mojave Desert-Death Valley Region

    NASA Technical Reports Server (NTRS)

    Farr, Tom G.

    2001-01-01

    The Mojave Desert-Death Valley region has had a long history as a test bed for remote sensing techniques. Along with visible-near infrared and thermal IR sensors, imaging radars have flown and orbited over the area since the 1970's, yielding new insights into the geologic applications of these technologies. More recently, radar interferometry has been used to derive digital topographic maps of the area, supplementing the USGS 7.5' digital quadrangles currently available for nearly the entire area. As for their shorter-wavelength brethren, imaging radars were tested early in their civilian history in the Mojave Desert-Death Valley region because it contains a variety of surface types in a small area without the confounding effects of vegetation. The earliest imaging radars to be flown over the region included military tests of short-wavelength (3 cm) X-band sensors. Later, the Jet Propulsion Laboratory began its development of imaging radars with an airborne sensor, followed by the Seasat orbital radar in 1978. These systems were L-band (25 cm). Following Seasat, JPL embarked upon a series of Space Shuttle Imaging Radars: SIRA (1981), SIR-B (1984), and SIR-C (1994). The most recent in the series was the most capable radar sensor flown in space and acquired large numbers of data swaths in a variety of test areas around the world. The Mojave Desert-Death Valley region was one of those test areas, and was covered very well with 3 wavelengths, multiple polarizations, and at multiple angles. At the same time, the JPL aircraft radar program continued improving and collecting data over the Mojave Desert Death Valley region. Now called AIRSAR, the system includes 3 bands (P-band, 67 cm; L-band, 25 cm; C-band, 5 cm). Each band can collect all possible polarizations in a mode called polarimetry. In addition, AIRSAR can be operated in the TOPSAR mode wherein 2 antennas collect data interferometrically, yielding a digital elevation model (DEM). Both L-band and C-band can be operated in this way, with horizontal resolution of about 5 m and vertical errors less than 2 m. The findings and developments of these earlier investigations are discussed.

  18. Ultrasound-Mediated Biophotonic Imaging: A Review of Acousto-Optical Tomography and Photo-Acoustic Tomography

    PubMed Central

    Wang, Lihong V.

    2004-01-01

    This article reviews two types of ultrasound-mediated biophotonic imaging–acousto-optical tomography (AOT, also called ultrasound-modulated optical tomography) and photo-acoustic tomography (PAT, also called opto-acoustic or thermo-acoustic tomography)–both of which are based on non-ionizing optical and ultrasonic waves. The goal of these technologies is to combine the contrast advantage of the optical properties and the resolution advantage of ultrasound. In these two technologies, the imaging contrast is based primarily on the optical properties of biological tissues, and the imaging resolution is based primarily on the ultrasonic waves that either are provided externally or produced internally, within the biological tissues. In fact, ultrasonic mediation overcomes both the resolution disadvantage of pure optical imaging in thick tissues and the contrast and speckle disadvantages of pure ultrasonic imaging. In our discussion of AOT, the relationship between modulation depth and acoustic amplitude is clarified. Potential clinical applications of ultrasound-mediated biophotonic imaging include early cancer detection, functional imaging, and molecular imaging. PMID:15096709

  19. True 3D display and BeoWulf connectivity

    NASA Astrophysics Data System (ADS)

    Jannson, Tomasz P.; Kostrzewski, Andrew A.; Kupiec, Stephen A.; Yu, Kevin H.; Aye, Tin M.; Savant, Gajendra D.

    2003-09-01

    We propose a novel true 3-D display based on holographic optics, called HAD (Holographic Autostereoscopic Display), or, in its latest generation, Holographic Inverse Look-around and Autostereoscopic Reality (HILAR). Unlike state-of-the-art 3-D systems, it does not require goggles, and it offers a table-like 360° look-around capability. Novel 3-D image-rendering software based on Beowulf PC cluster hardware is also discussed.

  20. Astronaut Jerry Linenger with sheet of TIPS correspondence

    NASA Image and Video Library

    1994-09-15

    STS064-23-025 (9-20 Sept. 1994) --- With scissors in hand, astronaut Jerry M. Linenger, STS-64 mission specialist, prepares to cut off a lengthy sheet of correspondence from ground controllers. Called the Thermal Imaging Printing System (TIPS), the message center occupies a stowage locker on the space shuttle Discovery's middeck. Astronaut L. Blaine Hammond, pilot, retrieves a clothing item from a nearby locker. Photo credit: NASA or National Aeronautics and Space Administration

  1. Laser Research

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Eastman Kodak Company, Rochester, New York is a broad-based firm which produces photographic apparatus and supplies, fibers, chemicals and vitamin concentrates. Much of the company's research and development effort is devoted to photographic science and imaging technology, including laser technology. Eastman Kodak is using a COSMIC computer program called LACOMA in the analysis of laser optical systems and camera design studies. The company reports that use of the program has provided development time savings and reduced computer service fees.

  2. Iran: The Post-Revolutionary Evolution

    DTIC Science & Technology

    2008-12-01

    posts should have knowledge of Sharia law (Islamic law), and the country’s ruler should be a faqih, someone...whose knowledge of Islamic law and justice was superior to that of others. Iran’s political system is a complex mix of Islamic theocracy and...relations theorist, attempts to understand the causes of international war. To do this, he considers three levels of analysis, what he calls “images.” [Figure 1: Iran’s Government Structure]

  3. Correcting spacecraft jitter in HiRISE images

    USGS Publications Warehouse

    Sutton, S. S.; Boyd, A.K.; Kirk, Randolph L.; Cook, Debbie; Backer, Jean; Fennema, A.; Heyd, R.; McEwen, A.S.; Mirchandani, S.D.; Wu, B.; Di, K.; Oberst, J.; Karachevtseva, I.

    2017-01-01

    Mechanical oscillations or vibrations on spacecraft, also called pointing jitter, cause geometric distortions and/or smear in high resolution digital images acquired from orbit. Geometric distortion is especially a problem with pushbroom type sensors, such as the High Resolution Imaging Science Experiment (HiRISE) instrument on board the Mars Reconnaissance Orbiter (MRO). Geometric distortions occur at a range of frequencies that may not be obvious in the image products, but can cause problems with stereo image correlation in the production of digital elevation models, and in measuring surface changes over time in orthorectified images. The HiRISE focal plane comprises a staggered array of fourteen charge-coupled devices (CCDs) with pixel IFOV of 1 microradian. The high spatial resolution of HiRISE makes it both sensitive to, and an excellent recorder of jitter. We present an algorithm using Fourier analysis to resolve the jitter function for a HiRISE image that is then used to update instrument pointing information to remove geometric distortions from the image. Implementation of the jitter analysis and image correction is performed on selected HiRISE images. Resulting corrected images and updated pointing information are made available to the public. Results show marked reduction of geometric distortions. This work has applications to similar cameras operating now, and to the design of future instruments (such as the Europa Imaging System).
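
    As a hedged illustration of the Fourier-analysis step (not the HiRISE/ISIS pipeline itself), the sketch below takes line-by-line pixel offsets measured between overlapping detectors, picks out the dominant spectral peaks, and reconstructs a smooth jitter function that could then be used to update the pointing history. The line rate and number of retained peaks are assumptions.

      # Hedged sketch: recover dominant jitter frequencies from measured offsets.
      import numpy as np

      def estimate_jitter(offsets, line_rate_hz, n_peaks=3):
          n = offsets.size
          detrended = offsets - offsets.mean()
          spectrum = np.fft.rfft(detrended)
          freqs = np.fft.rfftfreq(n, d=1.0 / line_rate_hz)
          # Keep only the strongest spectral peaks (excluding DC) as the jitter model.
          peak_idx = np.argsort(np.abs(spectrum[1:]))[::-1][:n_peaks] + 1
          model = np.zeros_like(spectrum)
          model[peak_idx] = spectrum[peak_idx]
          jitter = np.fft.irfft(model, n)
          return freqs[peak_idx], jitter   # dominant frequencies and reconstructed jitter

      # offsets - jitter (or an equivalent pointing update) would then remove the distortion.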

  4. Super-Resolution Image Reconstruction Applied to Medical Ultrasound

    NASA Astrophysics Data System (ADS)

    Ellis, Michael

    Ultrasound is the preferred imaging modality for many diagnostic applications due to its real-time image reconstruction and low cost. Nonetheless, conventional ultrasound is not used in many applications because of limited spatial resolution and soft tissue contrast. Most commercial ultrasound systems reconstruct images using a simple delay-and-sum architecture on receive, which is fast and robust but does not utilize all information available in the raw data. Recently, more sophisticated image reconstruction methods have been developed that make use of far more information in the raw data to improve resolution and contrast. One such method is the Time-Domain Optimized Near-Field Estimator (TONE), which employs a maximum a priori estimation to solve a highly underdetermined problem, given a well-defined system model. TONE has been shown to significantly improve both the contrast and resolution of ultrasound images when compared to conventional methods. However, TONE's lack of robustness to variations from the system model and extremely high computational cost hinder it from being readily adopted in clinical scanners. This dissertation aims to reduce the impact of TONE's shortcomings, transforming it from an academic construct to a clinically viable image reconstruction algorithm. By altering the system model from a collection of individual hypothetical scatterers to a collection of weighted, diffuse regions, dTONE is able to achieve much greater robustness to modeling errors. A method for efficient parallelization of dTONE is presented that reduces reconstruction time by more than an order of magnitude with little loss in image fidelity. An alternative reconstruction algorithm, called qTONE, is also developed and is able to reduce reconstruction times by another two orders of magnitude while simultaneously improving image contrast. Each of these methods for improving TONE are presented, their limitations are explored, and all are used in concert to reconstruct in vivo images of a human testicle. In all instances, the methods presented here outperform conventional image reconstruction methods by a significant margin. As TONE and its variants are general image reconstruction techniques, the theories and research presented here have the potential to significantly improve not only ultrasound's clinical utility, but that of other imaging modalities as well.
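
    To make the MAP idea concrete, the minimal sketch below treats the echo data as a linear function of scatterer amplitudes, y = Hx, and applies a Gaussian prior, which reduces the estimate to ridge-regularized least squares on a deliberately underdetermined toy problem. The real TONE, dTONE, and qTONE estimators use far richer system models and priors; H, the noise level, and the prior variance here are placeholders.

      # Minimal MAP-as-ridge-regression sketch for an underdetermined linear model.
      import numpy as np

      def map_estimate(H, y, noise_var=1.0, prior_var=10.0):
          # x_hat = argmin ||y - Hx||^2 / noise_var + ||x||^2 / prior_var
          n = H.shape[1]
          lam = noise_var / prior_var
          return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)

      rng = np.random.default_rng(0)
      H = rng.standard_normal((64, 256))        # underdetermined: 64 samples, 256 unknowns
      x_true = np.zeros(256)
      x_true[[40, 120, 200]] = 1.0
      y = H @ x_true + 0.01 * rng.standard_normal(64)
      print(np.argsort(map_estimate(H, y))[-3:])   # indices of the strongest estimates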

  5. History and structures of telecommunication in pathology, focusing on open access platforms.

    PubMed

    Kayser, Klaus; Borkenfeld, Stephan; Djenouni, Amina; Kayser, Gian

    2011-11-07

    Telecommunication has matured into a broadly applied tool in diagnostic pathology. Together with the development of fast electronic communication lines (Integrated Services Digital Network (ISDN), broadband connections, and fibre optics) and digital imaging technology (the digital camera), telecommunication in tissue-based diagnosis (telepathology) has matured. Open access (internet) and server-based communication have prompted the development of specific medical information platforms, such as iPATH, UICC-TPCC (the telepathology consultation centre of the Union International against Cancer), and the Armed Forces Institute of Pathology (AFIP) teleconsultation system. These have been closed and are to be replaced by specific open-access forums (the Medical Electronic Expert Communication System (MECES) with embedded virtual slide (VS) technology). MECES uses the PHP language, a database-driven MySQL architecture, an X/L-AMPP infrastructure, and browser-friendly, W3C-conformant standards. The server-based medical communication systems (AFIP, iPATH, UICC-TPCC) have been reported to be useful and easy-to-handle tools for expert consultation. Correct sampling and expert evaluation of transmitted still images revealed no or only minor differences from the original images and good practice of the involved experts. Beta tests with the new generation of medical expert consultation systems (MECES) showed superior results in terms of performance, still-image viewing, and system handling, especially as the design is closely related to so-called social forums (Facebook, YouTube, etc.). In addition to the acknowledged advantages of the formerly established systems (assistance for pathologists working in developing countries, diagnosis confirmation, international information exchange, etc.), the new generation offers additional benefits such as acoustic information transfer, assistance in image screening, VS technology, and teaching in diagnostic sampling, judgement, and verification.

  6. IoSiS: a radar system for imaging of satellites in space

    NASA Astrophysics Data System (ADS)

    Jirousek, M.; Anger, S.; Dill, S.; Schreiber, E.; Peichl, M.

    2017-05-01

    Space debris is nowadays one of the main threats to satellite systems, especially in low Earth orbit (LEO). More than 700,000 debris objects with the potential to destroy or damage a satellite are estimated to exist. The effects of an impact often cannot be identified directly from the ground, and high-resolution radar images are helpful in analyzing possible damage. Therefore, DLR is currently developing a radar system called IoSiS (Imaging of Satellites in Space), based on an existing steering antenna structure and our multi-purpose high-performance radar system GigaRad for experimental investigations. GigaRad is a multi-channel system operating at X band and using a bandwidth of up to 4.4 GHz in the IoSiS configuration, providing fully separated transmit (TX) and receive (RX) channels and separate antennas. For the observation of small satellites or space debris, a high-power traveling-wave-tube amplifier (TWTA) is mounted close to the TX antenna feed. For the experimental phase, IoSiS uses a 9 m TX and a 1 m RX antenna mounted on a common steerable positioner. High-resolution radar images are obtained using inverse synthetic aperture radar (ISAR) techniques. Guided tracking of known objects during an overpass allows wide azimuth observation angles, so an azimuth resolution comparable to the range resolution can be achieved. This paper outlines the main technical characteristics of the IoSiS radar system, including the basic antenna setup, the radar instrument with its RF error correction, and the measurement strategy. A short description of a simulation tool for the whole instrument and the expected images is also given.
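
    As a quick back-of-the-envelope companion to the abstract, the snippet below computes the slant-range resolution implied by the 4.4 GHz bandwidth (c / 2B) and an ISAR cross-range resolution for an assumed X-band carrier and an assumed 20-degree observation-angle span; the angular span and carrier frequency are illustrative values, not IoSiS specifications.

      # Back-of-the-envelope ISAR resolution numbers (assumed carrier and angle span).
      import numpy as np

      c = 299_792_458.0          # speed of light, m/s
      B = 4.4e9                  # radar bandwidth from the abstract, Hz
      wavelength = c / 10e9      # ~3 cm at an assumed 10 GHz X-band carrier

      range_res = c / (2 * B)                      # slant-range resolution
      delta_theta = np.deg2rad(20.0)               # assumed ISAR angular span
      cross_range_res = wavelength / (2 * delta_theta)

      print(f"slant-range resolution ~ {range_res * 100:.1f} cm")
      print(f"cross-range resolution ~ {cross_range_res * 100:.1f} cm")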

  7. Direct-conversion flat-panel imager with avalanche gain: Feasibility investigation for HARP-AMFPI

    PubMed Central

    Wronski, M. M.; Rowlands, J. A.

    2008-01-01

    The authors are investigating the concept of a direct-conversion flat-panel imager with avalanche gain for low-dose x-ray imaging. It consists of an amorphous selenium (a-Se) photoconductor partitioned into a thick drift region for x-ray-to-charge conversion and a relatively thin region called high-gain avalanche rushing photoconductor (HARP) in which the charge undergoes avalanche multiplication. An active matrix of thin film transistors is used to read out the electronic image. The authors call the proposed imager HARP active matrix flat panel imager (HARP-AMFPI). The key advantages of HARP-AMFPI are its high spatial resolution, owing to the direct-conversion a-Se layer, and its programmable avalanche gain, which can be enabled during low dose fluoroscopy to overcome electronic noise and disabled during high dose radiography to prevent saturation of the detector elements. This article investigates key design considerations for HARP-AMFPI. The effects of electronic noise on the imaging performance of HARP-AMFPI were modeled theoretically and system parameters were optimized for radiography and fluoroscopy. The following imager properties were determined as a function of avalanche gain: (1) the spatial frequency dependent detective quantum efficiency; (2) fill factor; (3) dynamic range and linearity; and (4) gain nonuniformities resulting from electric field strength nonuniformities. The authors results showed that avalanche gains of 5 and 20 enable x-ray quantum noise limited performance throughout the entire exposure range in radiography and fluoroscopy, respectively. It was shown that HARP-AMFPI can provide the required gain while maintaining a 100% effective fill factor and a piecewise dynamic range over five orders of magnitude (10−7–10−2 R∕frame). The authors have also shown that imaging performance is not significantly affected by the following: electric field strength nonuniformities, avalanche noise for x-ray energies above 1 keV and direct interaction of x rays in the gain region. Thus, HARP-AMFPI is a promising flat-panel imager structure that enables high-resolution fully quantum noise limited x-ray imaging over a wide exposure range. PMID:19175080

  8. Direct-conversion flat-panel imager with avalanche gain: Feasibility investigation for HARP-AMFPI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wronski, M. M.; Rowlands, J. A.

    2008-12-15

    The authors are investigating the concept of a direct-conversion flat-panel imager with avalanche gain for low-dose x-ray imaging. It consists of an amorphous selenium (a-Se) photoconductor partitioned into a thick drift region for x-ray-to-charge conversion and a relatively thin region called high-gain avalanche rushing photoconductor (HARP) in which the charge undergoes avalanche multiplication. An active matrix of thin film transistors is used to read out the electronic image. The authors call the proposed imager HARP active matrix flat panel imager (HARP-AMFPI). The key advantages of HARP-AMFPI are its high spatial resolution, owing to the direct-conversion a-Se layer, and its programmable avalanche gain, which can be enabled during low dose fluoroscopy to overcome electronic noise and disabled during high dose radiography to prevent saturation of the detector elements. This article investigates key design considerations for HARP-AMFPI. The effects of electronic noise on the imaging performance of HARP-AMFPI were modeled theoretically and system parameters were optimized for radiography and fluoroscopy. The following imager properties were determined as a function of avalanche gain: (1) the spatial frequency dependent detective quantum efficiency; (2) fill factor; (3) dynamic range and linearity; and (4) gain nonuniformities resulting from electric field strength nonuniformities. The authors' results showed that avalanche gains of 5 and 20 enable x-ray quantum noise limited performance throughout the entire exposure range in radiography and fluoroscopy, respectively. It was shown that HARP-AMFPI can provide the required gain while maintaining a 100% effective fill factor and a piecewise dynamic range over five orders of magnitude (10^-7 to 10^-2 R/frame). The authors have also shown that imaging performance is not significantly affected by the following: electric field strength nonuniformities, avalanche noise for x-ray energies above 1 keV and direct interaction of x rays in the gain region. Thus, HARP-AMFPI is a promising flat-panel imager structure that enables high-resolution fully quantum noise limited x-ray imaging over a wide exposure range.

  9. Amateurs to take a Crack at Juno Images

    NASA Image and Video Library

    2011-08-03

    Data from the camera onboard NASA's Juno mission, called JunoCam, will be made available to the public for processing into their own images. This is illustrated here with an image of Jupiter taken by NASA's Voyager mission.

  10. Local structure-based image decomposition for feature extraction with applications to face recognition.

    PubMed

    Qian, Jianjun; Yang, Jian; Xu, Yong

    2013-09-01

    This paper presents a robust but simple image feature extraction method, called image decomposition based on local structure (IDLS). It is assumed that in the local window of an image, the macro-pixel (patch) of the central pixel, and those of its neighbors, are locally linear. IDLS captures the local structural information by describing the relationship between the central macro-pixel and its neighbors. This relationship is represented with the linear representation coefficients determined using ridge regression. One image is actually decomposed into a series of sub-images (also called structure images) according to a local structure feature vector. All the structure images, after being down-sampled for dimensionality reduction, are concatenated into one super-vector. Fisher linear discriminant analysis is then used to provide a low-dimensional, compact, and discriminative representation for each super-vector. The proposed method is applied to face recognition and examined using our real-world face image database, NUST-RWFR, and five popular, publicly available, benchmark face image databases (AR, Extended Yale B, PIE, FERET, and LFW). Experimental results show the performance advantages of IDLS over state-of-the-art algorithms.
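
    A compact sketch of the descriptor construction is given below: for each pixel, the central macro-pixel (patch) is expressed as a ridge-regression combination of its neighbors' patches, and the coefficient vector becomes the local feature; stacking one coefficient per neighbor position yields the "structure images". Patch size, window size, and the ridge parameter are illustrative choices, and the down-sampling and Fisher discriminant steps are omitted.

      # Hedged sketch of IDLS-style local-structure features via ridge regression.
      import numpy as np

      def idls_features(image, patch=3, window=5, lam=0.1):
          h, w = image.shape
          p, r = patch // 2, window // 2
          n_neighbors = window * window - 1
          features = np.zeros((h, w, n_neighbors))
          for y in range(r + p, h - r - p):
              for x in range(r + p, w - r - p):
                  center = image[y - p:y + p + 1, x - p:x + p + 1].ravel()
                  neighbors = []
                  for dy in range(-r, r + 1):
                      for dx in range(-r, r + 1):
                          if dy == 0 and dx == 0:
                              continue
                          ny, nx = y + dy, x + dx
                          neighbors.append(
                              image[ny - p:ny + p + 1, nx - p:nx + p + 1].ravel())
                  A = np.stack(neighbors, axis=1)            # (patch^2, n_neighbors)
                  coef = np.linalg.solve(A.T @ A + lam * np.eye(n_neighbors),
                                         A.T @ center)       # ridge regression
                  features[y, x] = coef
          return features   # one "structure image" per neighbor position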

  11. Lossless compression of VLSI layout image data.

    PubMed

    Dai, Vito; Zakhor, Avideh

    2006-09-01

    We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.
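
    The following sketch shows the textbook enumerative-coding idea behind "combinatorial coding" (it is not the C4 implementation): a binary block of length n containing k ones is represented by (n, k, rank), where rank is the block's lexicographic index among all C(n, k) such blocks, so the code length approaches the entropy bound while encoding and decoding need only integer arithmetic.

      # Textbook enumerative (combinatorial) coder for a fixed-length binary block.
      from math import comb

      def encode(bits):
          n, k = len(bits), sum(bits)
          rank, ones_left = 0, k
          for i, b in enumerate(bits):
              if b:
                  # Count the blocks that put a 0 here instead (they rank lower).
                  rank += comb(n - i - 1, ones_left)
                  ones_left -= 1
          return n, k, rank

      def decode(n, k, rank):
          bits, ones_left = [], k
          for i in range(n):
              c = comb(n - i - 1, ones_left)   # blocks with a 0 at this position
              if ones_left > 0 and rank >= c:
                  bits.append(1)
                  rank -= c
                  ones_left -= 1
              else:
                  bits.append(0)
          return bits

      block = [0, 1, 1, 0, 0, 0, 1, 0]
      assert decode(*encode(block)) == block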

  12. Hubble snaps a beautiful supernova explosion some 160,000 light-years from Earth

    NASA Image and Video Library

    2017-12-08

    Of all the varieties of exploding stars, the ones called Type Ia are perhaps the most intriguing. Their predictable brightness lets astronomers measure the expansion of the universe, which led to the discovery of dark energy. Yet the cause of these supernovae remains a mystery. Do they happen when two white dwarf stars collide? Or does a single white dwarf gorge on gases stolen from a companion star until bursting? If the second theory is true, the normal star should survive. Astronomers used NASA's Hubble Space Telescope to search the gauzy remains of a Type Ia supernova in a neighboring galaxy called the Large Magellanic Cloud. They found a sun-like star that showed signs of being associated with the supernova. Further investigations will be needed to learn if this star is truly the culprit behind a white dwarf's fiery demise. This image, taken with NASA's Hubble Space Telescope, shows the supernova remnant SNR 0509-68.7, also known as N103B. It is located 160,000 light-years from Earth in a neighboring galaxy called the Large Magellanic Cloud. N103B resulted from a Type Ia supernova, whose cause remains a mystery. One possibility would leave behind a stellar survivor, and astronomers have identified a possible candidate. The actual supernova remnant is the irregular shaped dust cloud, at the upper center of the image. The gas in the lower half of the image and the dense concentration of stars in the lower left are the outskirts of the star cluster NGC 1850. The Hubble image combines visible and near-infrared light taken by the Wide Field Camera 3 in June 2014. Credit: NASA, ESA and H.-Y. Chu (Academia Sinica, Taipei) NASA image use policy. NASA Goddard Space Flight Center enables NASA’s mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA’s accomplishments by contributing compelling scientific knowledge to advance the Agency’s mission. Follow us on Twitter Like us on Facebook Find us on Instagram

  13. A scalable multi-DLP pico-projector system for virtual reality

    NASA Astrophysics Data System (ADS)

    Teubl, F.; Kurashima, C.; Cabral, M.; Fels, S.; Lopes, R.; Zuffo, M.

    2014-03-01

    Virtual Reality (VR) environments can offer immersion, interaction and realistic images to users. A VR system is usually expensive and requires special equipment in a complex setup. One approach is to use Commodity-Off-The-Shelf (COTS) desktop multi-projectors manually or camera based calibrated to reduce the cost of VR systems without significant decrease of the visual experience. Additionally, for non-planar screen shapes, special optics such as lenses and mirrors are required thus increasing costs. We propose a low-cost, scalable, flexible and mobile solution that allows building complex VR systems that projects images onto a variety of arbitrary surfaces such as planar, cylindrical and spherical surfaces. This approach combines three key aspects: 1) clusters of DLP-picoprojectors to provide homogeneous and continuous pixel density upon arbitrary surfaces without additional optics; 2) LED lighting technology for energy efficiency and light control; 3) smaller physical footprint for flexibility purposes. Therefore, the proposed system is scalable in terms of pixel density, energy and physical space. To achieve these goals, we developed a multi-projector software library called FastFusion that calibrates all projectors in a uniform image that is presented to viewers. FastFusion uses a camera to automatically calibrate geometric and photometric correction of projected images from ad-hoc positioned projectors, the only requirement is some few pixels overlapping amongst them. We present results with eight Pico-projectors, with 7 lumens (LED) and DLP 0.17 HVGA Chipset.

  14. Some inversion formulas for the cone transform

    NASA Astrophysics Data System (ADS)

    Terzioglu, Fatma

    2015-11-01

    Several novel imaging applications have recently led to a variety of Radon-type transforms in which integration is performed over a family of conical surfaces. We call them cone transforms (in 2D they are also called V-line or broken-ray transforms). Most prominently, they arise in so-called Compton camera imaging, which appears in medical diagnostics, astronomy, and lately in homeland security applications. Several specific incarnations of the cone transform have been considered separately. In this paper, we address the most general (and overdetermined) cone transform, obtain integral relations between cone and Radon transforms in R^n, and derive a variety of inversion formulas. In many applications (e.g., in homeland security), the signal-to-noise ratio is very low, so if overdetermined data are collected (as in the case of Compton imaging), attempts to reduce the dimensionality might essentially eliminate the signal. Thus, our main focus is on obtaining formulas that involve the overdetermined data.

  15. Instrument performance enhancement and modification through an extended instrument paradigm

    NASA Astrophysics Data System (ADS)

    Mahan, Stephen Lee

    An extended instrument paradigm is proposed, developed and shown in various applications. The CBM (Chin, Blass, Mahan) method is an extension to the linear systems model of observing systems. In the most obvious and practical application of image enhancement of an instrument characterized by a time-invariant instrumental response function, CBM can be used to enhance images or spectra through a simple convolution application of the CBM filter for a resolution improvement of as much as a factor of two. The CBM method can be used in many applications. We discuss several within this work including imaging through turbulent atmospheres, or what we've called Adaptive Imaging. Adaptive Imaging provides an alternative approach for the investigator desiring results similar to those obtainable with adaptive optics, however on a minimal budget. The CBM method is also used in a backprojected filtered image reconstruction method for Positron Emission Tomography. In addition, we can use information theoretic methods to aid in the determination of model instrumental response function parameters for images having an unknown origin. Another application presented herein involves the use of the CBM method for the determination of the continuum level of a Fourier transform spectrometer observation of ethylene, which provides a means for obtaining reliable intensity measurements in an automated manner. We also present the application of CBM to hyperspectral image data of the comet Shoemaker-Levy 9 impact with Jupiter taken with an acousto-optical tunable filter equipped CCD camera to an adaptive optics telescope.

  16. Where on Earth...? MISR Mystery Image Quiz #8:Yarlung Tsangpo River, China

    NASA Image and Video Library

    2002-05-15

    The mighty river featured in this image is called the Yarlung Tsangpo in China and is known as the Dikrong during its passage through the Indian state of Arunachal Pradesh. This image from NASA's Terra satellite is MISR Mystery Image Quiz #8.

  17. First Imaging of Laser-Induced Spark on Mars

    NASA Image and Video Library

    2014-07-16

    NASA's Curiosity Mars rover used the Mars Hand Lens Imager (MAHLI) camera on its arm to catch the first images of sparks produced by the rover's laser being fired at a rock on Mars. The left image is from before the laser zapped this rock, called Nova.

  18. Mobile, hybrid Compton/coded aperture imaging for detection, identification and localization of gamma-ray sources at stand-off distances

    NASA Astrophysics Data System (ADS)

    Tornga, Shawn R.

    The Stand-off Radiation Detection System (SORDS) program is an Advanced Technology Demonstration (ATD) project through the Department of Homeland Security's Domestic Nuclear Detection Office (DNDO) with the goal of detection, identification and localization of weak radiological sources in the presence of large dynamic backgrounds. The Raytheon-SORDS Tri-Modal Imager (TMI) is a mobile truck-based, hybrid gamma-ray imaging system able to quickly detect, identify and localize, radiation sources at standoff distances through improved sensitivity while minimizing the false alarm rate. Reconstruction of gamma-ray sources is performed using a combination of two imaging modalities; coded aperture and Compton scatter imaging. The TMI consists of 35 sodium iodide (NaI) crystals 5x5x2 in3 each, arranged in a random coded aperture mask array (CA), followed by 30 position sensitive NaI bars each 24x2.5x3 in3 called the detection array (DA). The CA array acts as both a coded aperture mask and scattering detector for Compton events. The large-area DA array acts as a collection detector for both Compton scattered events and coded aperture events. In this thesis, developed coded aperture, Compton and hybrid imaging algorithms will be described along with their performance. It will be shown that multiple imaging modalities can be fused to improve detection sensitivity over a broader energy range than either alone. Since the TMI is a moving system, peripheral data, such as a Global Positioning System (GPS) and Inertial Navigation System (INS) must also be incorporated. A method of adapting static imaging algorithms to a moving platform has been developed. Also, algorithms were developed in parallel with detector hardware, through the use of extensive simulations performed with the Geometry and Tracking Toolkit v4 (GEANT4). Simulations have been well validated against measured data. Results of image reconstruction algorithms at various speeds and distances will be presented as well as localization capability. Utilizing imaging information will show signal-to-noise gains over spectroscopic algorithms alone.

  19. STEREO's View

    NASA Image and Video Library

    2017-12-08

    STEREO witnessed the March 5, 2013, CME from the side of the sun – Earth is far to the left of this picture. While the SOHO images show a halo CME, STEREO shows the CME clearly moving away from Earth. Credit: NASA/STEREO --- CME WEEK: What To See in CME Images Two main types of explosions occur on the sun: solar flares and coronal mass ejections. Unlike the energy and x-rays produced in a solar flare – which can reach Earth at the speed of light in eight minutes – coronal mass ejections are giant, expanding clouds of solar material that take one to three days to reach Earth. Once at Earth, these ejections, also called CMEs, can impact satellites in space or interfere with radio communications. During CME WEEK from Sept. 22 to 26, 2014, we explore different aspects of these giant eruptions that surge out from the star we live with. When a coronal mass ejection blasts off the sun, scientists rely on instruments called coronagraphs to track their progress. Coronagraphs block out the bright light of the sun, so that the much fainter material in the solar atmosphere -- including CMEs -- can be seen in the surrounding space. CMEs appear in these images as expanding shells of material from the sun's atmosphere -- sometimes a core of colder, solar material (called a filament) from near the sun's surface moves in the center. But mapping out such three-dimensional components from a two-dimensional image isn't easy. Watch the slideshow to find out how scientists interpret what they see in CME pictures. The images in the slideshow are from the three sets of coronagraphs NASA currently has in space. One is on the joint European Space Agency and NASA Solar and Heliospheric Observatory, or SOHO. SOHO launched in 1995, and sits between Earth and the sun about a million miles away from Earth. The other two coronagraphs are on the two spacecraft of the NASA Solar Terrestrial Relations Observatory, or STEREO, mission, which launched in 2006. The two STEREO spacecraft are both currently viewing the far side of the sun. Together these instruments help scientists create a three-dimensional model of any CME as its journey unfolds through interplanetary space. Such information can show why a given characteristic of a CME close to the sun might lead to a given effect near Earth, or any other planet in the solar system. NASA image use policy. NASA Goddard Space Flight Center enables NASA’s mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA’s accomplishments by contributing compelling scientific knowledge to advance the Agency’s mission. Follow us on Twitter Like us on Facebook Find us on Instagram

  20. Twisted Fields

    NASA Image and Video Library

    2017-12-08

    SOHO captured this image of a CME from the side – but the structure looks much different from the classic light bulb CME. The filament of material bursting off the sun has a helical magnetic structure, which is unraveling like a piece of yarn during the eruption. Credit: ESA/NASA/SOHO --- CME WEEK: What To See in CME Images Two main types of explosions occur on the sun: solar flares and coronal mass ejections. Unlike the energy and x-rays produced in a solar flare – which can reach Earth at the speed of light in eight minutes – coronal mass ejections are giant, expanding clouds of solar material that take one to three days to reach Earth. Once at Earth, these ejections, also called CMEs, can impact satellites in space or interfere with radio communications. During CME WEEK from Sept. 22 to 26, 2014, we explore different aspects of these giant eruptions that surge out from the star we live with. When a coronal mass ejection blasts off the sun, scientists rely on instruments called coronagraphs to track their progress. Coronagraphs block out the bright light of the sun, so that the much fainter material in the solar atmosphere -- including CMEs -- can be seen in the surrounding space. CMEs appear in these images as expanding shells of material from the sun's atmosphere -- sometimes a core of colder, solar material (called a filament) from near the sun's surface moves in the center. But mapping out such three-dimensional components from a two-dimensional image isn't easy. Watch the slideshow to find out how scientists interpret what they see in CME pictures. The images in the slideshow are from the three sets of coronagraphs NASA currently has in space. One is on the joint European Space Agency and NASA Solar and Heliospheric Observatory, or SOHO. SOHO launched in 1995, and sits between Earth and the sun about a million miles away from Earth. The other two coronagraphs are on the two spacecraft of the NASA Solar Terrestrial Relations Observatory, or STEREO, mission, which launched in 2006. The two STEREO spacecraft are both currently viewing the far side of the sun. Together these instruments help scientists create a three-dimensional model of any CME as its journey unfolds through interplanetary space. Such information can show why a given characteristic of a CME close to the sun might lead to a given effect near Earth, or any other planet in the solar system. NASA image use policy. NASA Goddard Space Flight Center enables NASA’s mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA’s accomplishments by contributing compelling scientific knowledge to advance the Agency’s mission. Follow us on Twitter Like us on Facebook Find us on Instagram

  1. Look at my Arms!

    NASA Image and Video Library

    2005-07-25

    This image shows the hidden spiral arms that were discovered around the galaxy called NGC 4625 (top) by the ultraviolet eyes of NASA's Galaxy Evolution Explorer. An armless companion galaxy called NGC 4618 is pictured below.

  2. Spatio-temporal Hotelling observer for signal detection from image sequences

    PubMed Central

    Caucci, Luca; Barrett, Harrison H.; Rodríguez, Jeffrey J.

    2010-01-01

    Detection of signals in noisy images is necessary in many applications, including astronomy and medical imaging. The optimal linear observer for performing a detection task, called the Hotelling observer in the medical literature, can be regarded as a generalization of the familiar prewhitening matched filter. Performance on the detection task is limited by randomness in the image data, which stems from randomness in the object, randomness in the imaging system, and randomness in the detector outputs due to photon and readout noise, and the Hotelling observer accounts for all of these effects in an optimal way. If multiple temporal frames of images are acquired, the resulting data set is a spatio-temporal random process, and the Hotelling observer becomes a spatio-temporal linear operator. This paper discusses the theory of the spatio-temporal Hotelling observer and estimation of the required spatio-temporal covariance matrices. It also presents a parallel implementation of the observer on a cluster of Sony PLAYSTATION 3 gaming consoles. As an example, we consider the use of the spatio-temporal Hotelling observer for exoplanet detection. PMID:19550494
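
    In its simplest spatial form, the Hotelling observer applies the linear template w = K^-1 (g_signal_mean - g_background_mean) to an image g, where K is the covariance of the image data; the spatio-temporal case follows the same pattern with the frames stacked into one long vector. The sketch below estimates the template from sample images, with a small ridge term added because the sample covariance of high-dimensional image data is typically singular; it is a didactic sketch, not the parallel PLAYSTATION 3 implementation described.

      # Didactic sketch of a Hotelling-observer template estimated from samples.
      import numpy as np

      def hotelling_template(imgs_signal, imgs_background, ridge=1e-3):
          # Each input: (n_images, n_pixels) array of flattened images.
          g1, g0 = imgs_signal.mean(axis=0), imgs_background.mean(axis=0)
          resid = np.vstack([imgs_signal - g1, imgs_background - g0])
          K = np.cov(resid, rowvar=False) + ridge * np.eye(resid.shape[1])
          return np.linalg.solve(K, g1 - g0)       # w = K^-1 (mean difference)

      def hotelling_statistic(template, image):
          # Larger values favor the signal-present hypothesis.
          return template @ image.ravel()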

  3. Spatio-temporal Hotelling observer for signal detection from image sequences.

    PubMed

    Caucci, Luca; Barrett, Harrison H; Rodriguez, Jeffrey J

    2009-06-22

    Detection of signals in noisy images is necessary in many applications, including astronomy and medical imaging. The optimal linear observer for performing a detection task, called the Hotelling observer in the medical literature, can be regarded as a generalization of the familiar prewhitening matched filter. Performance on the detection task is limited by randomness in the image data, which stems from randomness in the object, randomness in the imaging system, and randomness in the detector outputs due to photon and readout noise, and the Hotelling observer accounts for all of these effects in an optimal way. If multiple temporal frames of images are acquired, the resulting data set is a spatio-temporal random process, and the Hotelling observer becomes a spatio-temporal linear operator. This paper discusses the theory of the spatio-temporal Hotelling observer and estimation of the required spatio-temporal covariance matrices. It also presents a parallel implementation of the observer on a cluster of Sony PLAYSTATION 3 gaming consoles. As an example, we consider the use of the spatio-temporal Hotelling observer for exoplanet detection.

  4. Prospective motion correction of high-resolution magnetic resonance imaging data in children.

    PubMed

    Brown, Timothy T; Kuperman, Joshua M; Erhart, Matthew; White, Nathan S; Roddey, J Cooper; Shankaranarayanan, Ajit; Han, Eric T; Rettmann, Dan; Dale, Anders M

    2010-10-15

    Motion artifacts pose significant problems for the acquisition and analysis of high-resolution magnetic resonance imaging data. These artifacts can be particularly severe when studying pediatric populations, where greater patient movement reduces the ability to clearly view and reliably measure anatomy. In this study, we tested the effectiveness of a new prospective motion correction technique, called PROMO, as applied to making neuroanatomical measures in typically developing school-age children. This method attempts to address the problem of motion at its source by keeping the measurement coordinate system fixed with respect to the subject throughout image acquisition. The technique also performs automatic rescanning of images that were acquired during intervals of particularly severe motion. Unlike many previous techniques, this approach adjusts for both in-plane and through-plane movement, greatly reducing image artifacts without the need for additional equipment. Results show that the use of PROMO notably enhances subjective image quality, reduces errors in Freesurfer cortical surface reconstructions, and significantly improves the subcortical volumetric segmentation of brain structures. Further applications of PROMO for clinical and cognitive neuroscience are discussed. Copyright 2010 Elsevier Inc. All rights reserved.

  5. Using cellular automata to generate image representation for biological sequences.

    PubMed

    Xiao, X; Shao, S; Ding, Y; Huang, Z; Chen, X; Chou, K-C

    2005-02-01

    A novel approach to visualize biological sequences is developed based on cellular automata (Wolfram, S. Nature 1984, 311, 419-424), a set of discrete dynamical systems in which space and time are discrete. By transforming the symbolic sequence codes into digital codes and using some optimal space-time evolution rules of cellular automata, a biological sequence can be represented by a unique image, the so-called cellular automata image. Many important features, which are originally hidden in a long and complicated biological sequence, can be clearly revealed through its cellular automata image. With the number of biological sequences deposited in databanks rapidly increasing in the post-genomic era, it is anticipated that the cellular automata image will become a very useful vehicle for investigating their key features, identifying their function, and revealing their "fingerprint". It is also anticipated that, by using the concept of the pseudo amino acid composition (Chou, K.C. Proteins: Structure, Function, and Genetics, 2001, 43, 246-255), the cellular automata image approach can be used to improve the quality of predicting protein attributes, such as structural class and subcellular location.
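
    The abstract does not spell out the coding scheme or evolution rules, so the sketch below only illustrates the general construction with an elementary (Wolfram-style) rule: sequence symbols are mapped to digits, the mapped sequence forms the first image row, and each subsequent row is one cellular-automaton time step. The symbol mapping and the rule number are assumptions, not the paper's.

    import numpy as np

    # Hypothetical symbol-to-binary mapping; the paper defines its own digital coding.
    CODE = {"A": 0, "C": 1, "G": 1, "T": 0}

    def ca_image(seq, rule=110, steps=64):
        """Render a sequence as a cellular-automata image: the coded sequence is the
        first row, and each later row is one step of an elementary CA with `rule`."""
        rule_bits = [(rule >> i) & 1 for i in range(8)]
        row = np.array([CODE[s] for s in seq], dtype=np.uint8)
        rows = [row]
        for _ in range(steps - 1):
            left, right = np.roll(row, 1), np.roll(row, -1)   # periodic boundaries
            idx = (left << 2) | (row << 1) | right            # 3-cell neighborhood as a 3-bit index
            row = np.array([rule_bits[i] for i in idx], dtype=np.uint8)
            rows.append(row)
        return np.stack(rows)                                 # shape (steps, len(seq)), 0/1 image

    img = ca_image("ACGTGCATTGACCAGT" * 4)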

  6. Computational photography with plenoptic camera and light field capture: tutorial.

    PubMed

    Lam, Edmund Y

    2015-11-01

    Photography is a cornerstone of imaging. Ever since cameras became consumer products more than a century ago, we have witnessed great technological progress in optics and recording media, with digital sensors replacing photographic films in most instances. The latest revolution is computational photography, which seeks to make image reconstruction computation an integral part of the image formation process; in this way, there can be new capabilities or better performance in the overall imaging system. A leading effort in this area is called the plenoptic camera, which aims at capturing the light field of an object; proper reconstruction algorithms can then adjust the focus after the image capture. In this tutorial paper, we first illustrate the concept of the plenoptic function and light field from the perspective of geometric optics. This is followed by a discussion of early attempts and recent advances in the construction of the plenoptic camera. We then describe the imaging model and the computational algorithms that can reconstruct images at different focus points, using mathematical tools from ray optics and Fourier optics. Last, but not least, we consider the trade-off in spatial resolution and highlight some research work to increase the spatial resolution of the resulting images.
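
    As a hedged illustration of post-capture refocusing (the simplest shift-and-add approach, not necessarily the algorithms developed in the tutorial), the sketch below shifts each sub-aperture image of a 4D light field in proportion to its aperture offset and averages the result; the array layout and the `alpha` parameterization are assumptions.

    import numpy as np
    from scipy.ndimage import shift as nd_shift

    def refocus(lightfield, alpha):
        """Shift-and-add refocusing of a 4D light field.

        lightfield: array of shape (U, V, H, W) -- one sub-aperture image per (u, v).
        alpha: refocus parameter; its magnitude sets how far the synthetic focal plane moves.
        """
        U, V, H, W = lightfield.shape
        uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
        out = np.zeros((H, W))
        for u in range(U):
            for v in range(V):
                # Shift each view in proportion to its offset from the central aperture.
                dy, dx = alpha * (u - uc), alpha * (v - vc)
                out += nd_shift(lightfield[u, v], (dy, dx), order=1, mode="nearest")
        return out / (U * V)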

  7. Regularization of soft-X-ray imaging in the DIII-D tokamak

    DOE PAGES

    Wingen, A.; Shafer, M. W.; Unterberg, E. A.; ...

    2015-03-02

    We developed an image inversion scheme for the soft X-ray imaging system (SXRIS) diagnostic at the DIII-D tokamak in order to obtain the local soft X-ray emission at a poloidal cross-section from the spatially line-integrated image taken by the SXRIS camera. The scheme uses the Tikhonov regularization method since the inversion problem is generally ill-posed. The regularization technique uses the generalized singular value decomposition to determine a solution that depends on a free regularization parameter. The latter has to be chosen carefully, and the so-called L-curve method for finding the optimum regularization parameter is outlined. A representative test image is used to study the properties of the inversion scheme with respect to inversion accuracy, amount/strength of regularization, image noise, and image resolution. The optimum inversion parameters are identified, and the L-curve method successfully computes the optimum regularization parameter. Noise is found to be the most limiting issue, but sufficient regularization is still possible at noise-to-signal ratios up to 10%-15%. Finally, the inversion scheme is applied to measured SXRIS data, and the line-integrated SXRIS image is successfully inverted.
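
    For orientation only, the sketch below shows standard-form Tikhonov inversion via the SVD together with a crude L-curve scan; the actual SXRIS scheme uses the generalized singular value decomposition with a nontrivial regularization operator, which this identity-regularized toy version does not reproduce, and the toy system `A`, `b` is an assumption.

    import numpy as np

    def tikhonov(A, b, lam):
        """Standard-form Tikhonov solution x = argmin ||A x - b||^2 + lam^2 ||x||^2, via the SVD."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        f = s**2 / (s**2 + lam**2)                 # filter factors damping small singular values
        return Vt.T @ (f * (U.T @ b) / s)

    def l_curve(A, b, lambdas):
        """Scan regularization parameters and return (residual norm, solution norm) pairs;
        the corner of this curve on a log-log plot picks the regularization strength."""
        pts = []
        for lam in lambdas:
            x = tikhonov(A, b, lam)
            pts.append((np.linalg.norm(A @ x - b), np.linalg.norm(x)))
        return np.array(pts)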

  8. Image ratio features for facial expression recognition application.

    PubMed

    Song, Mingli; Tao, Dacheng; Liu, Zicheng; Li, Xuelong; Zhou, Mengchu

    2010-06-01

    Video-based facial expression recognition is a challenging problem in computer vision and human-computer interaction. To target this problem, texture features have been extracted and widely used, because they can capture image intensity changes raised by skin deformation. However, existing texture features encounter problems with albedo and lighting variations. To solve both problems, we propose a new texture feature called image ratio features. Compared with previously proposed texture features, e.g., high gradient component features, image ratio features are more robust to albedo and lighting variations. In addition, to further improve facial expression recognition accuracy based on image ratio features, we combine image ratio features with facial animation parameters (FAPs), which describe the geometric motions of facial feature points. The performance evaluation is based on the Carnegie Mellon University Cohn-Kanade database, our own database, and the Japanese Female Facial Expression database. Experimental results show that the proposed image ratio feature is more robust to albedo and lighting variations, and the combination of image ratio features and FAPs outperforms each feature alone. In addition, we study asymmetric facial expressions based on our own facial expression database and demonstrate the superior performance of our combined expression recognition system.
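
    The exact feature construction is defined in the paper; as a hedged sketch of why ratios suppress albedo, under a Lambertian model a pixel intensity factors into albedo times shading, so dividing an expression frame by a neutral reference frame of the same face cancels the albedo term and leaves the intensity change caused by skin deformation. The function below is only that illustrative ratio, not the published feature.

    import numpy as np

    def ratio_image(expression, neutral, eps=1e-3):
        """Illustrative per-pixel ratio of an expression frame to a neutral reference;
        eps guards against division by zero in dark regions."""
        return (expression.astype(float) + eps) / (neutral.astype(float) + eps)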

  9. Feasibility study for application of the compressed-sensing framework to interior computed tomography (ICT) for low-dose, high-accurate dental x-ray imaging

    NASA Astrophysics Data System (ADS)

    Je, U. K.; Cho, H. M.; Cho, H. S.; Park, Y. O.; Park, C. K.; Lim, H. W.; Kim, K. S.; Kim, G. A.; Park, S. Y.; Woo, T. H.; Choi, S. I.

    2016-02-01

    In this paper, we propose a next-generation type of CT examination, the so-called Interior Computed Tomography (ICT), which may reduce dose to the patient outside the target region of interest (ROI) in dental x-ray imaging. Here the x-ray beam from each projection position covers only a relatively small ROI containing the diagnostic target within the examined structure, which brings imaging benefits such as reduced scatter, lower system cost, and lower imaging dose. We considered the compressed-sensing (CS) framework, rather than common filtered-backprojection (FBP)-based algorithms, for more accurate ICT reconstruction. We implemented a CS-based ICT algorithm and performed a systematic simulation to investigate its imaging characteristics. Simulation conditions covered two ROI-to-phantom size ratios of 0.28 and 0.14 and four projection numbers of 360, 180, 90, and 45. We successfully reconstructed ICT images of high quality by using the CS framework even with few-view projection data, while still preserving sharp edges in the images.
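
    The paper's CS-based ICT algorithm is not reproduced in the abstract; as a generic stand-in for the compressed-sensing flavor of reconstruction, the sketch below runs ISTA (iterative soft thresholding) on a small linear system, minimizing a least-squares data term plus an L1 sparsity penalty. The system matrix `A`, measurement vector `b`, and penalty weight `lam` are placeholders for a few-view ROI projector, the measured sinogram, and a tuning parameter.

    import numpy as np

    def ista(A, b, lam=0.1, n_iter=200):
        """ISTA for min 0.5 * ||A x - b||^2 + lam * ||x||_1."""
        L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the data-term gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)               # gradient of the least-squares term
            z = x - grad / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-threshold step
        return x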

  10. Hole Lotta Grindin Going On

    NASA Image and Video Library

    2004-03-06

    The red marks in this image, taken by the Mars Exploration Rover Opportunity's panoramic camera, indicate holes made by the rover's rock abrasion tool, located on its instrument deployment device, or "arm." The lower hole, located on a target called "McKittrick," was made on the 30th martian day, or sol, of Opportunity's journey. The upper hole, located on a target called "Guadalupe," was made on sol 34 of the rover's mission. The mosaic image was taken using a blue filter at the "El Capitan" region of the rock outcrop in Meridiani Planum, Mars. The image, shown in a vertical-perspective map projection, consists of images acquired on sols 27, 29 and 30 of the rover's mission. http://photojournal.jpl.nasa.gov/catalog/PIA05513

  11. Polygon on Mars

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This image shows a small-scale polygonal pattern in the ground near NASA's Phoenix Mars Lander. This pattern is similar in appearance to polygonal structures in icy ground in the arctic regions of Earth.

    Phoenix touched down on the Red Planet at 4:53 p.m. Pacific Time (7:53 p.m. Eastern Time), May 25, 2008, in an arctic region called Vastitas Borealis, at 68 degrees north latitude, 234 degrees east longitude.

    This image was acquired by the Surface Stereo Imager shortly after landing. On the Phoenix mission calendar, landing day is known as Sol 0, the first Martian day of the mission.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  12. Detection and enforcement of failure-to-yield in an emergency vehicle preemption system

    NASA Technical Reports Server (NTRS)

    Bachelder, Aaron (Inventor); Wickline, Richard (Inventor)

    2007-01-01

    An intersection controlled by an intersection controller receives trigger signals from on-coming emergency vehicles responding to an emergency call. The intersection controller initiates surveillance of the intersection via cameras installed at the intersection in response to a received trigger signal. The surveillance may begin immediately upon receipt of the trigger signal from an emergency vehicle, or may wait until the intersection controller determines that the signaling emergency vehicle is in the field of view of the cameras at the intersection. Portions of the captured images are tagged by the intersection controller based on tag signals transmitted by the vehicle or based on detected traffic patterns that indicate a potential traffic violation. The captured images are downloaded to a processing facility that analyzes the images and automatically issues citations for captured traffic violations.
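
    As a hedged sketch of the control flow described above (class and method names are invented for illustration, not taken from the patent), an intersection controller might be organized as follows: a trigger starts surveillance immediately or once the vehicle is expected to be in view, frames are tagged on tag signals or detected traffic patterns, and tagged frames are handed off for citation processing.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class IntersectionController:
        """Illustrative control flow for trigger-driven intersection surveillance."""
        recording: bool = False
        tagged_frames: List[int] = field(default_factory=list)
        frame_count: int = 0

        def on_trigger(self, vehicle_in_view: bool) -> None:
            # Begin surveillance immediately, or defer until the signaling
            # emergency vehicle is expected to be within the cameras' field of view.
            if vehicle_in_view or self.record_on_receipt():
                self.recording = True

        def record_on_receipt(self) -> bool:
            return True  # placeholder policy: start as soon as the trigger arrives

        def on_frame(self, violation_detected: bool, tag_signal: bool) -> None:
            if not self.recording:
                return
            self.frame_count += 1
            # Tag frames flagged by the vehicle's tag signal or by detected traffic patterns.
            if violation_detected or tag_signal:
                self.tagged_frames.append(self.frame_count)

        def download(self) -> List[int]:
            """Hand tagged frames off to the processing facility for citation review."""
            tagged, self.tagged_frames, self.recording = self.tagged_frames, [], False
            return tagged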

  13. A variable resolution x-ray detector for computed tomography: II. Imaging theory and performance.

    PubMed

    DiBianca, F A; Zou, P; Jordan, L M; Laughter, J S; Zeman, H D; Sebes, J

    2000-08-01

    A computed tomography (CT) imaging technique called variable resolution x-ray (VRX) detection provides variable image resolution ranging from that of clinical body scanning (1 cy/mm) to that of microscopy (100 cy/mm). In this paper, an experimental VRX CT scanner based on a rotating subject table and an angulated storage phosphor screen detector is described and tested. The measured projection resolution of the scanner is ≥ 20 lp/mm. Using this scanner, 4.8-s CT scans are made of specimens of human extremities and of in vivo hamsters. In addition, the system's projected spatial resolution is calculated to exceed 100 cy/mm for a future on-line CT scanner incorporating smaller focal spots (0.1 mm) than those currently used and a 1008-channel VRX detector with 0.6-mm cell spacing.

  14. Interactive stereo games to improve vision in children with amblyopia using dichoptic stimulation

    NASA Astrophysics Data System (ADS)

    Herbison, Nicola; Ash, Isabel M.; MacKeith, Daisy; Vivian, Anthony; Purdy, Jonathan H.; Fakis, Apostolos; Cobb, Sue V.; Hepburn, Trish; Eastgate, Richard M.; Gregson, Richard M.; Foss, Alexander J. E.

    2015-03-01

    Amblyopia is a common condition affecting 2% of all children, and traditional treatment consists of either wearing a patch or penalisation. We have developed a treatment using stereo technology, not to provide a 3D image but to allow dichoptic stimulation. This involves presenting an image with the same background to both eyes, but with features of interest removed from the image presented to the normal eye, with the aim of preferentially stimulating visual development in the amblyopic, or lazy, eye. Our system, called I-BiT, can use either a game or a video (DVD) source as input. Pilot studies show that this treatment is effective with short treatment times, and it has proceeded to a randomised controlled clinical trial. The early indications are that the treatment has a high degree of acceptability and correspondingly good compliance.

  15. How Phoenix Looks Under Itself

    NASA Technical Reports Server (NTRS)

    2008-01-01

    [figure removed for brevity, see original site] Click on image for animation

    This is an animation of NASA's Phoenix Mars Lander reaching with its Robotic Arm and taking a picture of the surface underneath the lander. The image at the conclusion of the animation was taken by Phoenix's Robotic Arm Camera (RAC) on the eighth Martian day of the mission, or Sol 8 (June 2, 2008). The light feature in the middle of the image below the leg is informally called 'Holy Cow.' The dust, shown in the dark foreground, has been blown off of 'Holy Cow' by Phoenix's thruster engines.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  16. Relative and Absolute Calibration of a Multihead Camera System with Oblique and Nadir Looking Cameras for a Uas

    NASA Astrophysics Data System (ADS)

    Niemeyer, F.; Schima, R.; Grenzdörffer, G.

    2013-08-01

    Numerous unmanned aerial systems (UAS) are currently flooding the market, and UAVs are specially designed and used for a wide variety of applications. Micro and mini UAS (maximum take-off weight up to 5 kg) are of particular interest, because the legal restrictions are still manageable and the payload capacities are sufficient for many imaging sensors. A camera system with four oblique and one nadir-looking camera is currently under development at the Chair for Geodesy and Geoinformatics. The so-called "Four Vision" camera system was successfully built and tested in the air. An MD4-1000 UAS from microdrones is used as the carrier system. Lightweight industrial cameras are used and controlled by a central computer. For further photogrammetric image processing, each individual camera, as well as all the cameras together, has to be calibrated. This paper focuses on the determination of the relative orientation between the cameras with the "Australis" software and gives an overview of the results and experiences from test flights.

  17. Feasibility of digital imaging to characterize earth materials : part 2.

    DOT National Transportation Integrated Search

    2012-06-06

    This study demonstrated the feasibility of digital imaging to characterize earth materials. Two rapid, relatively low cost image-based methods were developed for determining the grain size distribution of soils and aggregates. The first method, calle...

  18. Feasibility of digital imaging to characterize earth materials : part 6.

    DOT National Transportation Integrated Search

    2012-06-06

    This study demonstrated the feasibility of digital imaging to characterize earth materials. Two rapid, relatively low cost image-based methods were developed for determining the grain size distribution of soils and aggregates. The first method, calle...

  19. Feasibility of digital imaging to characterize earth materials : part 3.

    DOT National Transportation Integrated Search

    2012-06-06

    This study demonstrated the feasibility of digital imaging to characterize earth materials. Two rapid, relatively low cost image-based methods were developed for determining the grain size distribution of soils and aggregates. The first method, calle...

  20. Image Analyzed by Mars Rover for Selection of Target

    NASA Image and Video Library

    2010-03-23

    NASA's Opportunity rover used newly developed and uploaded software called AEGIS to analyze images and identify features that best matched the criteria for selecting an observation target; the criteria in this image were rocks that are larger and darker than others.
