Science.gov

Sample records for acoustic video images

  1. Video Toroid Cavity Imager

    DOEpatents

    Gerald, II, Rex E.; Sanchez, Jairo; Rathke, Jerome W.

    2004-08-10

    A video toroid cavity imager for in situ measurement of electrochemical properties of an electrolytic material sample includes a cylindrical toroid cavity resonator containing the sample and employs NMR and video imaging for providing high-resolution spectral and visual information of molecular characteristics of the sample on a real-time basis. A large magnetic field is applied to the sample under controlled temperature and pressure conditions to simultaneously provide NMR spectroscopy and video imaging capabilities for investigating electrochemical transformations of materials or the evolution of long-range molecular aggregation during cooling of hydrocarbon melts. The video toroid cavity imager includes a miniature commercial video camera with an adjustable lens, a modified compression coin cell imager with a flat circular principal detector element, and a sample mounted on a transparent circular glass disk, and provides NMR information as well as a video image of a sample, such as a polymer film, with micrometer resolution.

  2. Video image position determination

    DOEpatents

    Christensen, Wynn; Anderson, Forrest L.; Kortegaard, Birchard L.

    1991-01-01

    An optical beam position controller in which a video camera captures an image of the beam in its video frames, and conveys those images to a processing board which calculates the centroid coordinates for the image. The image coordinates are used by motor controllers and stepper motors to position the beam in a predetermined alignment. In one embodiment, system noise, used in conjunction with Bernoulli trials, yields higher resolution centroid coordinates.
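
    The abstract above describes computing centroid coordinates of the beam image in each video frame, with system noise and Bernoulli trials used to push the precision below a pixel. As a rough, hedged illustration of the centroid step only (not the patented processing-board implementation), an intensity-weighted centroid of one frame could be computed as follows; the threshold, function name, and synthetic frame are assumptions.

        import numpy as np

        def beam_centroid(frame, threshold=0.1):
            """Intensity-weighted centroid of a beam image.

            frame: 2-D array of pixel intensities.
            threshold: fraction of the peak below which pixels are ignored
                       (crude background rejection).
            Returns (row, col) centroid coordinates in pixels.
            """
            img = frame.astype(float)
            img[img < threshold * img.max()] = 0.0
            total = img.sum()
            if total == 0:
                raise ValueError("no beam found above threshold")
            rows, cols = np.indices(img.shape)
            return (rows * img).sum() / total, (cols * img).sum() / total

        # Example: synthetic Gaussian beam spot centered near (239.7, 321.4)
        y, x = np.mgrid[0:480, 0:640]
        frame = np.exp(-((x - 321.4) ** 2 + (y - 239.7) ** 2) / (2 * 15 ** 2))
        print(beam_centroid(frame))   # approximately (239.7, 321.4)

    Averaging such centroids over many noisy frames is the usual route to sub-pixel precision, in the spirit of the Bernoulli-trial refinement mentioned in the abstract.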

  3. Ultrasound Imaging System Video

    NASA Technical Reports Server (NTRS)

    2002-01-01

    In this video, astronaut Peggy Whitson uses the Human Research Facility (HRF) Ultrasound Imaging System in the Destiny Laboratory of the International Space Station (ISS) to image her own heart. The Ultrasound Imaging System provides three-dimensional image enlargement of the heart and other organs, muscles, and blood vessels. It is capable of high-resolution imaging in a wide range of applications, both research and diagnostic, such as echocardiography (ultrasound of the heart), abdominal, vascular, gynecological, muscle, tendon, and transcranial ultrasound.

  4. Observation of hydrothermal flows with acoustic video camera

    NASA Astrophysics Data System (ADS)

    Mochizuki, M.; Asada, A.; Tamaki, K.; Scientific Team Of Yk09-13 Leg 1

    2010-12-01

    Ridge 18-20°S, where hydrothermal plume signatures were previously perceived. DIDSON was mounted on top of Shinkai 6500 in order to obtain acoustic video images of hydrothermal plumes. In this cruise, seven dives of Shinkai 6500 were conducted, and acoustic video images of the hydrothermal plumes were captured in three of the seven dives. Very few acoustic video images of hydrothermal plumes have been obtained to date. Processing and analysis of the acoustic video image data are ongoing. We will report an overview of the acoustic video images of the hydrothermal plumes and discuss the potential of DIDSON as an observation tool for seafloor hydrothermal activity.

  5. Video Image Stabilization and Registration

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor); Meyer, Paul J. (Inventor)

    2002-01-01

    A method of stabilizing and registering a video image in multiple video fields of a video sequence provides accurate determination of the image change in magnification, rotation and translation between video fields, so that the video fields may be accurately corrected for these changes in the image in the video sequence. In a described embodiment, a key area of a key video field is selected which contains an image which it is desired to stabilize in a video sequence. The key area is subdivided into nested pixel blocks and the translation of each of the pixel blocks from the key video field to a new video field is determined as a precursor to determining change in magnification, rotation and translation of the image from the key video field to the new video field.
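
    As a hedged sketch of the per-block step described above (finding the translation of a pixel block from the key video field to a new video field), one common approach is an exhaustive block-matching search; the search range, block layout, and sum-of-absolute-differences criterion below are assumptions, not the patented procedure.

        import numpy as np

        def block_translation(key_field, new_field, top, left, size, search=8):
            """Integer (dy, dx) shift of one pixel block from key_field to
            new_field, found by minimizing the sum of absolute differences."""
            block = key_field[top:top + size, left:left + size].astype(float)
            best, best_shift = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y0, x0 = top + dy, left + dx
                    if (y0 < 0 or x0 < 0 or
                            y0 + size > new_field.shape[0] or
                            x0 + size > new_field.shape[1]):
                        continue
                    cand = new_field[y0:y0 + size, x0:x0 + size].astype(float)
                    sad = np.abs(block - cand).sum()
                    if sad < best:
                        best, best_shift = sad, (dy, dx)
            return best_shift

        # The shifts of the nested, successively subdivided blocks can then be
        # combined to estimate global translation, magnification, and rotation.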

  6. Video Image Stabilization and Registration

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor); Meyer, Paul J. (Inventor)

    2003-01-01

    A method of stabilizing and registering a video image in multiple video fields of a video sequence provides accurate determination of the image change in magnification, rotation and translation between video fields, so that the video fields may be accurately corrected for these changes in the image in the video sequence. In a described embodiment, a key area of a key video field is selected which contains an image which it is desired to stabilize in a video sequence. The key area is subdivided into nested pixel blocks and the translation of each of the pixel blocks from the key video field to a new video field is determined as a precursor to determining change in magnification, rotation and translation of the image from the key video field to the new video field.

  7. Video and image quality

    NASA Astrophysics Data System (ADS)

    Aldridge, Jim

    1995-09-01

    This paper presents some of the results of a UK government research program into methods of improving the effectiveness of CCTV surveillance systems. The paper identifies the major components of video security systems and the primary causes of unsatisfactory images. A method is outlined for relating the picture-detail limitations imposed by each system component to overall system performance. The paper also points out some possible difficulties arising from the use of emerging new technology.

  8. Non-intrusive telemetry applications in the oilsands: from visible light and x-ray video to acoustic imaging and spectroscopy

    NASA Astrophysics Data System (ADS)

    Shaw, John M.

    2013-06-01

    While the production, transport and refining of oils from the oilsands of Alberta, and comparable resources elsewhere, is performed at industrial scales, numerous technical and technological challenges and opportunities persist due to the ill-defined nature of the resource. For example, bitumen and heavy oil comprise multiple bulk phases, with self-organizing constituents at the microscale (liquid crystals) and the nanoscale. There are no quantitative measures available at the molecular level. Non-intrusive telemetry is providing promising paths toward solutions, be they enabling technologies targeting process design, development or optimization, or more prosaic process control or process monitoring applications. Operational examples include automated detection of large objects and poor-quality ore during mining, and monitoring the thickness and location of oil-water interfacial zones within separation vessels. These applications involve real-time video image processing. X-ray transmission video imaging is used to enumerate the organic phases present within a vessel, and to determine individual phase volumes, densities and elemental compositions. This is an enabling technology that provides phase equilibrium and phase composition data for production and refining process development, and for fluid property myth debunking. A high-resolution two-dimensional acoustic mapping technique, now at the proof-of-concept stage, is expected to provide simultaneous fluid flow and fluid composition data within porous inorganic media. Again this is an enabling technology, targeting visualization of diverse oil production process fundamentals at the pore scale. Far-infrared spectroscopy, coupled with detailed quantum mechanical calculations, may provide the characteristic molecular motifs and intermolecular association data required for fluid characterization and process modeling. X-ray scattering (SAXS/WAXS/USAXS) provides characteristic supramolecular structure information that impacts fluid rheology and process

  9. Acoustic imaging system

    DOEpatents

    Smith, Richard W.

    1979-01-01

    An acoustic imaging system for displaying an object viewed by a moving array of transducers as the array is pivoted about a fixed point within a given plane. A plurality of transducers are fixedly positioned and equally spaced within a laterally extending array and operatively directed to transmit and receive acoustic signals along substantially parallel transmission paths. The transducers are sequentially activated along the array to transmit and receive acoustic signals according to a preestablished sequence. Means are provided for generating output voltages for each reception of an acoustic signal, corresponding to the coordinate position of the object viewed as the array is pivoted. Receptions from each of the transducers are presented on the same display at coordinates corresponding to the actual position of the object viewed to form a plane view of the object scanned.

  10. Electromagnetic acoustic imaging.

    PubMed

    Emerson, Jane F; Chang, David B; McNaughton, Stuart; Jeong, Jong Seob; Shung, K K; Cerwin, Stephen A

    2013-02-01

    Electromagnetic acoustic imaging (EMAI) is a new imaging technique that uses long-wavelength RF electromagnetic (EM) waves to induce ultrasound emission. Signal intensity and image contrast have been found to depend on spatially varying electrical conductivity of the medium in addition to conventional acoustic properties. The resultant conductivity-weighted ultrasound data may enhance the diagnostic performance of medical ultrasound in cancer and cardiovascular applications because of the known changes in conductivity of malignancy and blood-filled spaces. EMAI has a potential advantage over other related imaging techniques because it combines the high resolution associated with ultrasound detection with the generation of the ultrasound signals directly related to physiologically important electrical properties of the tissues. Here, we report the theoretical development of EMAI, implementation of a dual-mode EMAI/ultrasound apparatus, and successful demonstrations of EMAI in various phantoms designed to establish feasibility of the approach for eventual medical applications.

  11. Synergy of seismic, acoustic, and video signals in blast analysis

    SciTech Connect

    Anderson, D.P.; Stump, B.W.; Weigand, J.

    1997-09-01

    The range of mining applications from hard rock quarrying to coal exposure to mineral recovery leads to a great variety of blasting practices. A common characteristic of many of the sources is that they are detonated at or near the earth's surface and thus can be recorded by camera or video. Although the primary interest is in the seismic waveforms that these blasts generate, the visual observations of the blasts provide important constraints that can be applied to the physical interpretation of the seismic source function. In particular, high-speed images can provide information on detonation times of individual charges, the timing and amount of mass movement during the blasting process and, in some instances, evidence of wave propagation away from the source. All of these characteristics can be valuable in interpreting the equivalent seismic source function for a set of mine explosions and quantifying the relative importance of the different processes. This paper documents work done at the Los Alamos National Laboratory and Southern Methodist University to take standard Hi-8 video of mine blasts, recover digital images from them, and combine them with ground motion records for interpretation. The steps in data acquisition, processing, display, and interpretation are outlined. The authors conclude that the combination of video with seismic and acoustic signals can be a powerful diagnostic tool for the study of blasting techniques and seismology. A low-cost system for generating similar diagnostics using a consumer-grade video camera and direct-to-disk video hardware is proposed. Application is to verification of the Comprehensive Test Ban Treaty.

  12. Normalization method for video images

    SciTech Connect

    Donohoe, G.W.; Hush, D.R.

    1992-12-31

    The present invention relates to a method and apparatus for automatically and adaptively normalizing analog signals representative of video images in object detection systems. Such normalization maximizes the average information content of the video images and, thereby, provides optimal digitized images for object detection and identification. The present invention manipulates two system control signals -- gain control signal and offset control signal -- to convert an analog image signal into a transformed analog image signal, such that the corresponding digitized image contains the maximum amount of information achievable with a conventional object detection system. In some embodiments of the present invention, information content is measured using parameters selected from image entropy, image mean, and image variance.
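
    As a hedged sketch of the idea in this abstract (choosing a gain and offset that maximize the information content of the digitized image, here measured by histogram entropy), a brute-force search over a small grid of candidate gain/offset pairs might look as follows; the parameter ranges, the 8-bit digitizer model, and the synthetic image are assumptions, not the patented apparatus.

        import numpy as np

        def digitize(analog, gain, offset, bits=8):
            """Simulate digitization of an analog image after gain/offset adjustment."""
            levels = 2 ** bits
            return np.clip(gain * analog + offset, 0, levels - 1).astype(np.uint8)

        def entropy(img, bits=8):
            """Shannon entropy (bits/pixel) of the digitized image histogram."""
            hist, _ = np.histogram(img, bins=2 ** bits, range=(0, 2 ** bits))
            p = hist / hist.sum()
            p = p[p > 0]
            return -(p * np.log2(p)).sum()

        def normalize(analog, gains, offsets):
            """Pick the (gain, offset) pair that maximizes entropy of the digitized image."""
            return max((entropy(digitize(analog, g, o)), g, o)
                       for g in gains for o in offsets)

        # Example: a dim, low-contrast analog image
        rng = np.random.default_rng(0)
        analog = 20 + 5 * rng.random((240, 320))
        print(normalize(analog, gains=np.linspace(1, 40, 20),
                        offsets=np.linspace(-800, 0, 20)))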

  13. Video image stabilization and registration--plus

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor)

    2009-01-01

    A method of stabilizing a video image displayed in multiple video fields of a video sequence includes the steps of: subdividing a selected area of a first video field into nested pixel blocks; determining horizontal and vertical translation of each of the pixel blocks in each of the pixel block subdivision levels from the first video field to a second video field; and determining translation of the image from the first video field to the second video field by determining a change in magnification of the image from the first video field to the second video field in each of horizontal and vertical directions, and determining shear of the image from the first video field to the second video field in each of the horizontal and vertical directions.

  14. Imaging of Acoustic Waves in Sand

    SciTech Connect

    Deason, Vance Albert; Telschow, Kenneth Louis; Watson, Scott Marshall

    2003-08-01

    There is considerable interest in detecting objects such as landmines shallowly buried in loose earth or sand. Various techniques involving microwave, acoustic, thermal and magnetic sensors have been used to detect such objects. Acoustic and microwave sensors have shown promise, especially if used together. In most cases, the sensor package is scanned over an area to eventually build up an image or map of anomalies. We are proposing an alternate, acoustic method that directly provides an image of acoustic waves in sand or soil, and their interaction with buried objects. The INEEL Laser Ultrasonic Camera utilizes dynamic holography within photorefractive recording materials. This permits one to image and demodulate acoustic waves on surfaces in real time, without scanning. A video image is produced where intensity is directly and linearly proportional to surface motion. Both specular and diffusely reflecting surfaces can be accommodated and surface motion as small as 0.1 nm can be quantitatively detected. This system was used to directly image acoustic surface waves in sand as well as in solid objects. Waves at frequencies of 16 kHz were generated using modified acoustic speakers. These waves were directed through sand toward partially buried objects. The sand container was not on a vibration isolation table, but sat on the lab floor. Interaction of wavefronts with buried objects showed reflection, diffraction and interference effects that could provide clues to the location and characteristics of buried objects. Although results are preliminary, success in this effort suggests that this method could be applied to detection of buried landmines or other near-surface items such as pipes and tanks.

  15. Image processing techniques for acoustic images

    NASA Astrophysics Data System (ADS)

    Murphy, Brian P.

    1991-06-01

    The primary goal of this research is to test the effectiveness of various image processing techniques applied to acoustic images generated in MATLAB. The simulated acoustic images have the same characteristics as those generated by a computer model of a high-resolution imaging sonar. Edge detection and segmentation are the two image processing techniques discussed in this study. The two methods tested are a modified version of Kalman filtering, and median filtering.
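
    Of the two filters tested, median filtering is simple to illustrate. The hedged sketch below applies a median filter and a gradient-based edge map to a synthetic speckled "sonar" image; the exponential noise model, kernel size, and threshold are assumptions, and the original study used MATLAB rather than Python.

        import numpy as np
        from scipy.ndimage import median_filter, sobel

        # Synthetic acoustic image: a bright target on a speckled background
        rng = np.random.default_rng(1)
        img = rng.exponential(scale=0.2, size=(128, 128))   # speckle-like noise
        img[40:80, 50:90] += 1.0                             # "target" return

        # Median filtering suppresses speckle while preserving target edges
        filtered = median_filter(img, size=5)

        # Simple gradient-magnitude edge map as a basis for segmentation
        edges = np.hypot(sobel(filtered, axis=0), sobel(filtered, axis=1))
        mask = edges > edges.mean() + 2 * edges.std()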

  16. Nondestructive Acoustic Imaging Techniques

    NASA Astrophysics Data System (ADS)

    Schmitz, Volker

    Acoustic imaging techniques are used in the field of nondestructive testing of technical components to measure defects such as lack of side wall fusion or cracks in welded joints. Data acquisition is performed by a remote-controlled manipulator and a PC for the mass storage of the high-frequency time-of-flight data at each probe position. The quality of the acoustic images and the interpretation relies on the proper understanding of the transmitted wave fronts and the arrangement of the probes in pulse-echo mode or in pitch-and-catch arrangement. The use of the Synthetic Aperture Focusing Technique allows the depth-dependent resolution to be replaced by a depth-independent resolution and the signal-to-noise ratio to be improved. Examples with surface-connected cracks are shown to demonstrate the improved features. The localization accuracy could be improved by entering 2-dimensional or 3-dimensional reconstructed data into the environment of a 3-dimensional CAD drawing. The propagation of ultrasonic waves through austenitic welds is disturbed by the anisotropic and inhomogeneous structure of the material. The effect is more or less severe depending upon the longitudinal or shear wave modes. To optimize the performance of an inspection software tool, a 3-dimensional CAD-Ray program has been implemented, where the shape of the inhomogeneous part of a weld can be simulated together with the grain structure based on the elastic constants. Ray-tracing results are depicted for embedded and for surface-connected defects.

  17. Aerial Video Imaging

    NASA Technical Reports Server (NTRS)

    1991-01-01

    When Michael Henry wanted to start an aerial video service, he turned to Johnson Space Center for assistance. Two NASA engineers - one of whom had designed and developed TV systems for the Apollo, Skylab, Apollo-Soyuz and Space Shuttle programs - designed a wing-mounted fiberglass camera pod. The camera head and angles are adjustable, and the pod is shaped to reduce vibration. The controls are located so a solo pilot can operate the system. A microprocessor displays latitude, longitude, and bearing, and a GPS receiver provides position data for possible legal references. The service has been successfully utilized by railroads, oil companies, real estate companies, etc.

  18. Radiation effects on video imagers

    NASA Astrophysics Data System (ADS)

    Yates, G. J.; Bujnosek, J. J.; Jaramillo, S. A.; Walton, R. B.; Martinez, T. M.; Black, J. P.

    Radiation sensitivity of several photoconductive, photoemissive, and solid state silicon-based video imagers was measured by analyzing stored photocharge induced by irradiation with continuous and pulsed sources of high energy photons and neutrons. Transient effects as functions of absorbed dose, dose rate, fluences, and ionizing particle energy are presented.

  19. Radiation effects on video imagers

    NASA Astrophysics Data System (ADS)

    Yates, G. J.; Bujnosek, J. J.; Jaramillo, S. A.; Walton, R. B.; Martinez, T. M.

    1986-02-01

    Radiation sensitivity of several photoconductive, photoemissive, and solid state silicon-based video imagers was measured by analyzing stored photocharge induced by irradiation with continuous and pulsed sources of high energy photons and neutrons. Transient effects as functions of absorbed dose, dose rate, fluences, and ionizing particle energy are presented.

  20. Computerized tomography using video recorded fluoroscopic images

    NASA Technical Reports Server (NTRS)

    Kak, A. C.; Jakowatz, C. V., Jr.; Baily, N. A.; Keller, R. A.

    1975-01-01

    A computerized tomographic imaging system is examined which employs video-recorded fluoroscopic images as input data. By hooking the video recorder to a digital computer through a suitable interface, such a system permits very rapid construction of tomograms.

  1. Acoustic subwavelength imaging of subsurface objects with acoustic resonant metalens

    SciTech Connect

    Cheng, Ying; Liu, XiaoJun; Zhou, Chen; Wei, Qi; Wu, DaJian

    2013-11-25

    Early research into acoustic metamaterials has shown the possibility of achieving subwavelength near-field acoustic imaging. However, a major restriction of acoustic metamaterials is that the imaging objects must be placed in close vicinity of the devices. Here, we present an approach for acoustic imaging of subsurface objects far below the diffraction limit. An acoustic metalens made of holey-structured metamaterials is used to magnify evanescent waves, which can rebuild an image at the central plane. Without changing the physical structure of the metalens, our proposed approach can image objects located at certain distances from the input surface, which provides subsurface signatures of the objects with subwavelength spatial resolution.

  2. Acoustic imaging system

    NASA Technical Reports Server (NTRS)

    Kendall, J. M., Jr.

    1977-01-01

    Tool detects noise sources by scanning sound "scene" and displaying relative location of noise-producing elements in area. System consists of ellipsoidal acoustic mirror and microphone and a display device.

  3. Video Snapshots: Creating High-Quality Images from Video Clips.

    PubMed

    Sunkavalli, Kalyan; Joshi, Neel; Kang, Sing Bing; Cohen, Michael F; Pfister, Hanspeter

    2012-11-01

    We describe a unified framework for generating a single high-quality still image ("snapshot") from a short video clip. Our system allows the user to specify the desired operations for creating the output image, such as super resolution, noise and blur reduction, and selection of best focus. It also provides a visual summary of activity in the video by incorporating saliency-based objectives in the snapshot formation process. We show examples on a number of different video clips to illustrate the utility and flexibility of our system.

  4. Real-time video image processing

    NASA Astrophysics Data System (ADS)

    Smedley, Kirk G.; Yool, Stephen R.

    1990-11-01

    Lockheed has designed and implemented a prototype real-time Video Enhancement Workbench (VEW) using commercial off-the-shelf hardware and custom software. The hardware components include a Sun workstation, Aspex PIPE image processor, time base corrector, VCR, video camera, and real-time disk subsystem. A comprehensive set of image processing functions can be invoked by the analyst at any time during processing, enabling interactive enhancement and exploitation of video sequences. Processed images can be transmitted and stored within the system in digital or video form. VEW also provides image output to a laser printer and to Interleaf technical publishing software.

  5. Enhancement system of nighttime infrared video image and visible video image

    NASA Astrophysics Data System (ADS)

    Wang, Yue; Piao, Yan

    2016-11-01

    Visibility of nighttime video imagery is of great significance for military and medical applications, but nighttime video is typically of such poor quality that neither the target nor the background can be recognized. We therefore enhance nighttime video by fusing infrared video images with visible video images. Based on the characteristics of infrared and visible images, we propose an improved SIFT algorithm and an αβ-weighted algorithm to fuse heterologous nighttime images. A transfer matrix is derived from the improved SIFT algorithm and used to rapidly register the heterologous nighttime images, and the αβ-weighted algorithm can be applied to any scene. In the video image fusion system, the transfer matrix is used to register every frame and the αβ-weighted method is then used to fuse every frame, which meets the timing requirements of video. The fused video image not only retains the clear target information of the infrared video image, but also retains the detail and color information of the visible video image, and the fused video plays back fluently.
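
    Assuming each frame pair is already registered (the paper derives a transfer matrix from its improved SIFT step for that), the per-frame weighted-fusion stage can be sketched as a per-pixel blend. The fixed weights, the crude luminance model, and the function name below are illustrative assumptions, not the paper's αβ weighting rule.

        import numpy as np

        def fuse_frames(visible_rgb, infrared, alpha=0.6, beta=0.4):
            """Weighted fusion of a registered visible (H, W, 3) frame with an
            infrared (H, W) frame: luminance is blended, color is kept from
            the visible frame."""
            vis = visible_rgb.astype(float) / 255.0
            ir = infrared.astype(float) / 255.0
            lum = vis.mean(axis=2)                   # crude luminance estimate
            fused_lum = alpha * lum + beta * ir      # weighted blend
            scale = np.divide(fused_lum, lum,
                              out=np.ones_like(lum), where=lum > 1e-3)
            fused = np.clip(vis * scale[..., None], 0, 1)
            return (fused * 255).astype(np.uint8)

        # Applying fuse_frames to every registered frame pair yields the fused video.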

  6. Acoustic waves in medical imaging and diagnostics.

    PubMed

    Sarvazyan, Armen P; Urban, Matthew W; Greenleaf, James F

    2013-07-01

    Up until about two decades ago acoustic imaging and ultrasound imaging were synonymous. The term ultrasonography, or its abbreviated version sonography, meant an imaging modality based on the use of ultrasonic compressional bulk waves. Beginning in the 1990s, there started to emerge numerous acoustic imaging modalities based on the use of a different mode of acoustic wave: shear waves. Imaging with these waves was shown to provide very useful and very different information about the biological tissue being examined. We discuss the physical basis for the differences between these two basic modes of acoustic waves used in medical imaging and analyze the advantages associated with shear acoustic imaging. A comprehensive analysis of the range of acoustic wavelengths, velocities and frequencies that have been used in different imaging applications is presented. We discuss the potential for future shear wave imaging applications.

  7. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA Marshall Space Flight Center, atmospheric scientist Paul Meyer (left) and solar physicist Dr. David Hathaway, have developed promising new software, called Video Image Stabilization and Registration (VISAR), that may help law enforcement agencies to catch criminals by improving the quality of video recorded at crime scenes. VISAR stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects; produces clearer images of moving objects; smoothes jagged edges; enhances still images; and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality, and it would be especially useful for tornado studies, tracking whirling objects and helping to determine a tornado's wind speed. This image shows two scientists reviewing an enhanced video image of a license plate taken from a moving automobile.

  8. Acoustic imaging microscope

    DOEpatents

    Deason, Vance A.; Telschow, Kenneth L.

    2006-10-17

    An imaging system includes: an object wavefront source and an optical microscope objective all positioned to direct an object wavefront onto an area of a vibrating subject surface encompassed by a field of view of the microscope objective, and to direct a modulated object wavefront reflected from the encompassed surface area through a photorefractive material; and a reference wavefront source and at least one phase modulator all positioned to direct a reference wavefront through the phase modulator and to direct a modulated reference wavefront from the phase modulator through the photorefractive material to interfere with the modulated object wavefront. The photorefractive material has a composition and a position such that interference of the modulated object wavefront and modulated reference wavefront occurs within the photorefractive material, providing a full-field, real-time image signal of the encompassed surface area.

  9. Real-time video-image analysis

    NASA Technical Reports Server (NTRS)

    Eskenazi, R.; Rayfield, M. J.; Yakimovsky, Y.

    1979-01-01

    Digitizer and storage system allow rapid random access to video data by computer. RAPID (random-access picture digitizer) uses two commercially available, charge-injection, solid-state TV cameras as sensors. It can continuously update its memory with each frame of the video signal, or it can hold a given frame in memory. In either mode, it generates a composite video output signal representing the digitized image in memory.

  10. Video and acoustic camera techniques for studying fish under ice: a review and comparison

    SciTech Connect

    Mueller, Robert P.; Brown, Richard S.; Hop, Haakon H.; Moulton, Larry

    2006-09-05

    Researchers attempting to study the presence, abundance, size, and behavior of fish species in northern and arctic climates during winter face many challenges, including the presence of thick ice cover, snow cover, and, sometimes, extremely low temperatures. This paper describes and compares the use of video and acoustic cameras for determining fish presence and behavior in lakes, rivers, and streams with ice cover. Methods are provided for determining fish density and size, identifying species, and measuring swimming speed, and successful applications from previous surveys of fish under the ice are described. These methods include drilling ice holes, selecting batteries and generators, deploying pan-and-tilt cameras, and using paired colored lasers to determine fish size and habitat associations. We also discuss the use of infrared and white light to enhance image-capturing capabilities, deployment of digital recording systems and time-lapse techniques, and the use of imaging software. Data are presented from initial surveys with video and acoustic cameras in the Sagavanirktok River Delta, Alaska, during late winter 2004. These surveys represent the first known successful application of a dual-frequency identification sonar (DIDSON) acoustic camera under the ice that achieved fish detection and sizing at camera ranges up to 16 m. Feasibility tests of video and acoustic cameras for determining fish size and density at various turbidity levels are also presented. Comparisons are made of the different techniques in terms of suitability for achieving various fisheries research objectives. This information is intended to assist researchers in choosing the equipment that best meets their study needs.

  11. Reflective echo tomographic imaging using acoustic beams

    SciTech Connect

    Kisner, Roger; Santos-Villalobos, Hector J

    2014-11-25

    An inspection system includes a plurality of acoustic beamformers, each of which includes a plurality of acoustic transmitter elements. The system also includes at least one controller configured for causing each of the plurality of acoustic beamformers to generate an acoustic beam directed to a point in a volume of interest during a first time. Based on the reflected wave intensity detected at a plurality of acoustic receiver elements, an image of the volume of interest can be generated.
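
    The patent describes beamformers steering acoustic beams to a common point in the volume of interest and forming an image from the reflected intensity. A minimal, hedged sketch of the per-element transmit delays needed to focus one array of elements on a target point is given below; the array geometry, focal point, and sound speed are assumptions, not the patented controller.

        import numpy as np

        def focus_delays(element_positions, focal_point, c=1500.0):
            """Transmit delays (seconds) that focus an array on focal_point.

            element_positions: (N, 3) element coordinates in meters.
            focal_point: (3,) target coordinates in meters.
            c: speed of sound in the medium (m/s); 1500 m/s assumes water.
            """
            dist = np.linalg.norm(element_positions - focal_point, axis=1)
            # Fire the farthest element first so all wavefronts arrive together.
            return (dist.max() - dist) / c

        # Example: 16-element linear array along x, focused 0.5 m away on axis
        elems = np.stack([np.linspace(-0.075, 0.075, 16),
                          np.zeros(16), np.zeros(16)], axis=1)
        print(focus_delays(elems, np.array([0.0, 0.0, 0.5])))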

  12. Enhanced Video Surveillance (EVS) with speckle imaging

    SciTech Connect

    Carrano, C J

    2004-01-13

    Enhanced Video Surveillance (EVS) with Speckle Imaging is a high-resolution imaging system that substantially improves resolution and contrast in images acquired over long distances. This technology will increase image resolution up to an order of magnitude or greater for video surveillance systems. The system's hardware components are all commercially available and consist of a telescope or large-aperture lens assembly, a high-performance digital camera, and a personal computer. The system's software, developed at LLNL, extends standard speckle-image-processing methods (used in the astronomical community) to solve the atmospheric blurring problem associated with imaging over medium to long distances (hundreds of meters to tens of kilometers) through horizontal or slant-path turbulence. This novel imaging technology will not only enhance national security but also will benefit law enforcement, security contractors, and any private or public entity that uses video surveillance to protect their assets.

  13. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR). VISAR may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. In this photograph, the single frame at left, taken at night, was brightened in order to enhance details and reduce noise or snow. To further overcome the video defects in one frame, law enforcement officials can use VISAR software to add information from multiple frames to reveal a person. Images from less than a second of videotape were added together to create the clarified image at right. VISAR stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects, producing clearer images of moving objects, smoothes jagged edges, enhances still images, and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  14. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR), which is illustrated in this Quick Time movie. VISAR is a computer algorithm that stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects, producing clearer images of moving objects, smoothes jagged edges, enhances still images, and reduces video noise or snow. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. VISAR could also have applications in law enforcement, medical, and meteorological imaging. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  15. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR), which is illustrated in this Quick Time movie. VISAR is a computer algorithm that stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects, producing clearer images of moving objects, smoothes jagged edges, enhances still images, and reduces video noise or snow. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. VISAR could also have applications in law enforcement, medical, and meteorological imaging. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  16. Digital imaging and video: principles and applications.

    PubMed

    Rosen, Andrew L; Hausman, Michael

    2003-01-01

    Digital imaging has provided orthopaedic surgeons with new, powerful tools that offer a multitude of applications. Already integral to several common medical devices, digital images can be used for case documentation and presentation as well as for diagnostic and surgical patient care information. Educational presentation has been transformed by the use of computers and digital projectors. Understanding the basic foundations of digital imaging technology is important for effectively creating digital images, videos, and presentations.

  17. Acoustic emissions of digital data video projectors- Investigating noise sources and their change during product aging

    NASA Astrophysics Data System (ADS)

    White, Michael Shane

    2005-09-01

    Acoustic emission testing continues to be a growing part of IT and telecommunication product design, as product noise is increasingly becoming a differentiator in the marketplace. This is especially true for digital/video display companies, such as InFocus Corporation, considering the market shift of these products to the home entertainment consumer as retail prices drop and performance factors increase. Projectors and displays using Digital Light Processing(tm) [DLP(tm)] technology incorporate a device known as a ColorWheel(tm) to generate the colors displayed at each pixel in the image. These ColorWheel(tm) devices spin at very high speeds and can generate high-frequency tones not typically heard in liquid crystal displays and other display technologies. Also, acoustic emission testing typically occurs at the beginning of product life and is a measure of acoustic energy emitted at this point in the lifecycle. Since the product is designed to be used over a long period of time, there is concern as to whether the acoustic emissions change over the lifecycle of the product, whether these changes result in a level of nuisance to the average customer, and whether this nuisance begins to develop before the end of the product's intended lifetime.

  18. Acoustic 3D imaging of dental structures

    SciTech Connect

    Lewis, D.K.; Hume, W.R.; Douglass, G.D.

    1997-02-01

    Our goal for the first year of this three-dimensional electrodynamic imaging project was to determine how to combine flexible, individually addressable arrays; preprocessing of array source signals; spectral extrapolation of received signals; acoustic tomography codes; and acoustic propagation modeling codes. We investigated flexible, individually addressable acoustic array materials to find the best match in power, sensitivity and cost, and settled on PVDF sheet arrays and 3-1 composite material.

  19. Image and video compression for HDR content

    NASA Astrophysics Data System (ADS)

    Zhang, Yang; Reinhard, Erik; Agrafiotis, Dimitris; Bull, David R.

    2012-10-01

    High Dynamic Range (HDR) technology can offer high levels of immersion with a dynamic range meeting and exceeding that of the Human Visual System (HVS). A primary drawback with HDR images and video is that memory and bandwidth requirements are significantly higher than for conventional images and video. Many bits can be wasted coding redundant imperceptible information. The challenge is therefore to develop means for efficiently compressing HDR imagery to a manageable bit rate without compromising perceptual quality. In this paper, we build on previous work of ours and propose a compression method for both HDR images and video, based on an HVS optimised wavelet subband weighting method. The method has been fully integrated into a JPEG 2000 codec for HDR image compression and implemented as a pre-processing step for HDR video coding (an H.264 codec is used as the host codec for video compression). Experimental results indicate that the proposed method outperforms previous approaches and operates in accordance with characteristics of the HVS, tested objectively using an HDR Visible Difference Predictor (VDP). Aiming to further improve the compression performance of our method, we additionally present the results of a psychophysical experiment, carried out with the aid of a high dynamic range display, to determine the difference in the noise visibility threshold between HDR and Standard Dynamic Range (SDR) luminance edge masking. Our findings show that noise has increased visibility on the bright side of a luminance edge. Masking is more consistent on the darker side of the edge.

  20. System identification by video image processing

    NASA Astrophysics Data System (ADS)

    Shinozuka, Masanobu; Chung, Hung-Chi; Ichitsubo, Makoto; Liang, Jianwen

    2001-07-01

    Emerging image processing techniques demonstrate their potential applications in earthquake engineering, particularly in the area of system identification. In this respect, the objectives of this research are to demonstrate the underlying principle that permits system identification, non-intrusively and remotely, with the aid of a video camera and, for the purpose of proof of concept, to apply the principle to a system identification problem involving relative motion, on the basis of the images. In structural control, accelerations at different stories of a building are usually measured and fed back for processing and control. As an alternative, this study attempts to identify the relative motion between different stories of a building for the purpose of on-line structural control by digitizing the images taken by a video camera. For this purpose, video images of the vibration of a structure base-isolated by a friction device on a shaking table were used successfully to observe the relative displacement between the isolated structure and the shaking table. This proof-of-concept experiment demonstrates that the proposed identification method based on digital image processing can be used, with appropriate modifications, to identify many other quantities of engineering significance remotely. In addition to the system identification study in structural dynamics mentioned above, results of a preliminary study are described involving video imaging of the state of crack damage of road and highway pavement.

  1. Progress in video immersion using Panospheric imaging

    NASA Astrophysics Data System (ADS)

    Bogner, Stephen L.; Southwell, David T.; Penzes, Steven G.; Brosinsky, Chris A.; Anderson, Ron; Hanna, Doug M.

    1998-09-01

    Having demonstrated significant technical and marketplace advantages over other modalities for video immersion, PanosphericTM Imaging (PI) continues to evolve rapidly. This paper reports on progress achieved since AeroSense 97. The first practical field deployment of the technology occurred in June-August 1997 during the NASA-CMU 'Atacama Desert Trek' activity, where the Nomad mobile robot was teleoperated via immersive PanosphericTM imagery from a distance of several thousand kilometers. Research using teleoperated vehicles at DRES has also verified the exceptional utility of the PI technology for achieving high levels of situational awareness, operator confidence, and mission effectiveness. Important performance enhancements have been achieved with the completion of the 4th Generation PI DSP-based array processor system. The system is now able to provide dynamic full video-rate generation of spatial and computational transformations, resulting in a programmable and fully interactive immersive video telepresence. A new multi-CCD camera architecture has been created to exploit the bandwidth of this processor, yielding a well-matched PI system with greatly improved resolution. While the initial commercial application for this technology is expected to be video teleconferencing, it also appears to have excellent potential for application in the 'Immersive Cockpit' concept. Additional progress is reported in the areas of Long Wave Infrared PI Imaging, Stereo PI concepts, PI based Video-Servoing concepts, PI based Video Navigation concepts, and Foveation concepts (to merge localized high-resolution views with immersive views).

  2. Intergraph video and images exploitation capabilities

    NASA Astrophysics Data System (ADS)

    Colla, Simone; Manesis, Charalampos

    2013-08-01

    The current paper focuses on the capture, fusion and processing of aerial imagery in order to leverage full motion video, giving analysts the ability to collect, analyze, and maximize the value of video assets. Unmanned aerial vehicles (UAVs) have provided critical real-time surveillance and operational support to military organizations, and are a key source of intelligence, particularly when integrated with other geospatial data. In the current workflow, the UAV operators first plan the flight using flight planning software. During the flight the UAV sends a live video stream directly to the field to be processed by Intergraph software, which generates and disseminates georeferenced images through a service-oriented architecture based on the ERDAS Apollo suite. The raw video-based data sources provide the most recent view of a situation and can augment other forms of geospatial intelligence - such as satellite imagery and aerial photos - to provide a richer, more detailed view of the area of interest. To effectively use video as a source of intelligence, however, the analyst needs to seamlessly fuse the video with these other types of intelligence, such as map features and annotations. Intergraph has developed an application that automatically generates mosaicked, georeferenced images and tags along the video route, which can then be seamlessly integrated with other forms of static data, such as aerial photos, satellite imagery, or geospatial layers and features. Consumers will finally have the ability to use a single, streamlined system to complete the entire geospatial information lifecycle: capturing geospatial data using sensor technology; processing vector, raster, and terrain data into actionable information; managing, fusing, and sharing geospatial data and video together; and finally, rapidly and securely delivering integrated information products, ensuring individuals can make timely decisions.

  3. Acoustic Imaging of Combustion Noise

    NASA Technical Reports Server (NTRS)

    Ramohalli, K. N.; Seshan, P. K.

    1984-01-01

    Ellipsoidal acoustic mirror used to measure sound emitted at discrete points in burning turbulent jets. Mirror deemphasizes sources close to target source and excludes sources far from target. At acoustic frequency of 20 kHz, mirror resolves sound from region 1.25 cm wide. Currently used by NASA for research on jet flames. Produces clearly identifiable and measurable variation of acoustic spectral intensities along length of flame. Utilized in variety of monitoring or control systems involving flames or other reacting flows.

  4. Group-Based Image Retrieval Method for Video Image Annotation

    NASA Astrophysics Data System (ADS)

    Murabayashi, Noboru; Kurahashi, Setsuya; Yoshida, Kenichi

    This paper proposes a group-based image retrieval method for video image annotation systems. Although the widespread use of video camera recorders has increased the demand for automated annotation systems for personal videos, conventional image retrieval methods cannot achieve enough accuracy to be used as an annotation engine. Recording conditions, such as changes in brightness due to weather and shadows cast by the surroundings, affect the quality of images recorded by personal video camera recorders, and the degraded images make the retrieval task difficult. Furthermore, it is difficult to discriminate similar images without any auxiliary information. To cope with these difficulties, this paper proposes a group-based image retrieval method. Its characteristics are 1) the use of image similarity based on wavelet-transform-based features and scale-invariant feature transform (SIFT) based features, and 2) the pre-grouping of related images and screening using group information. Experimental results show that the proposed method can improve image retrieval accuracy to 90%, up from 40% for the conventional method.

  5. Video surveillance with speckle imaging

    DOEpatents

    Carrano, Carmen J.; Brase, James M.

    2007-07-17

    A surveillance system looks through the atmosphere along a horizontal or slant path. Turbulence along the path causes blurring. The blurring is corrected by speckle processing short exposure images recorded with a camera. The exposures are short enough to effectively freeze the atmospheric turbulence. Speckle processing is used to recover a better quality image of the scene.

  6. Video-based noncooperative iris image segmentation.

    PubMed

    Du, Yingzi; Arslanturk, Emrah; Zhou, Zhi; Belcher, Craig

    2011-02-01

    In this paper, we propose a video-based noncooperative iris image segmentation scheme that incorporates a quality filter to quickly eliminate images without an eye, employs a coarse-to-fine segmentation scheme to improve the overall efficiency, uses a direct least squares fitting of ellipses method to model the deformed pupil and limbic boundaries, and develops a window gradient-based method to remove noise in the iris region. A remote iris acquisition system is set up to collect noncooperative iris video images. An objective method is used to quantitatively evaluate the accuracy of the segmentation results. The experimental results demonstrate the effectiveness of this method. The proposed method would make noncooperative iris recognition or iris surveillance possible.
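
    The segmentation scheme above models the deformed pupil and limbic boundaries with a direct least squares ellipse fit. The hedged sketch below uses a plain algebraic least squares fit of the general conic (normalized so the constant term is 1), a simpler relative of the constrained "direct" method cited in the paper; the boundary points are assumed to have been extracted already, and the synthetic example is purely illustrative.

        import numpy as np

        def fit_ellipse(x, y):
            """Algebraic least squares fit of A x^2 + B x y + C y^2 + D x + E y = 1
            to boundary points (x, y). Returns the conic coefficients and center."""
            X = np.column_stack([x * x, x * y, y * y, x, y])
            coeffs, *_ = np.linalg.lstsq(X, np.ones_like(x), rcond=None)
            A, B, C, D, E = coeffs
            # Center is where both partial derivatives of the conic vanish.
            center = np.linalg.solve([[2 * A, B], [B, 2 * C]], [-D, -E])
            return coeffs, center

        # Example: noisy points on an ellipse centered at (120, 80)
        rng = np.random.default_rng(2)
        t = np.linspace(0, 2 * np.pi, 200)
        x = 120 + 40 * np.cos(t) + rng.normal(0, 0.5, t.size)
        y = 80 + 25 * np.sin(t) + rng.normal(0, 0.5, t.size)
        _, center = fit_ellipse(x, y)
        print(center)   # approximately [120, 80]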

  7. Video guidance, landing, and imaging systems

    NASA Technical Reports Server (NTRS)

    Schappell, R. T.; Knickerbocker, R. L.; Tietz, J. C.; Grant, C.; Rice, R. B.; Moog, R. D.

    1975-01-01

    The adaptive potential of video guidance technology for earth orbital and interplanetary missions was explored. The application of video acquisition, pointing, tracking, and navigation technology was considered for three primary missions: planetary landing, earth resources satellite, and spacecraft rendezvous and docking. It was found that an imaging system can be mechanized to provide a spacecraft or satellite with a considerable amount of adaptability with respect to its environment. It also provides a level of autonomy essential to many future missions and enhances their data-gathering ability. The feasibility of an autonomous video guidance system capable of observing a planetary surface during terminal descent and selecting the most acceptable landing site was successfully demonstrated in the laboratory. The techniques developed for acquisition, pointing, and tracking show promise for recognizing and tracking coastlines, rivers, and other constituents of interest. Routines were written and checked for rendezvous, docking, and station-keeping functions.

  8. Latino Film and Video Images.

    ERIC Educational Resources Information Center

    Vazquez, Blanca, Ed.

    1990-01-01

    This theme issue of the "Centro Bulletin" examines media stereotypes of Latinos and presents examples of alternatives. "From Assimilation to Annihilation: Puerto Rican Images in U.S. Films" (R. Perez) traces the representation of Puerto Ricans from the early days of television to the films of the 1970s. "The Latino 'Boom'…

  9. Pulsed-Source Interferometry in Acoustic Imaging

    NASA Technical Reports Server (NTRS)

    Shcheglov, Kirill; Gutierrez, Roman; Tang, Tony K.

    2003-01-01

    A combination of pulsed-source interferometry and acoustic diffraction has been proposed for use in imaging subsurface microscopic defects and other features in such diverse objects as integrated-circuit chips, specimens of materials, and mechanical parts. A specimen to be inspected by this technique would be mounted with its bottom side in contact with an acoustic transducer driven by a continuous-wave acoustic signal at a suitable frequency, which could be as low as a megahertz or as high as a few hundred gigahertz. The top side of the specimen would be coupled to an object that would have a flat (when not vibrating) top surface and that would serve as the acoustical analog of an optical medium (in effect, an acoustical "optic").

  10. Magnetic resonance acoustic radiation force imaging.

    PubMed

    McDannold, Nathan; Maier, Stephan E

    2008-08-01

    Acoustic radiation force impulse imaging is an elastography method developed for ultrasound imaging that maps displacements produced by focused ultrasound pulses systematically applied to different locations. The resulting images are "stiffness weighted" and yield information about local mechanical tissue properties. Here, the feasibility of magnetic resonance acoustic radiation force imaging (MR-ARFI) was tested. Quasistatic MR elastography was used to measure focal displacements using a one-dimensional MRI pulse sequence. A 1.63 or 1.5 MHz transducer supplied ultrasound pulses which were triggered by the magnetic resonance imaging hardware to occur before a displacement-encoding gradient. Displacements in and around the focus were mapped in a tissue-mimicking phantom and in an ex vivo bovine kidney. They were readily observed and increased linearly with acoustic power in the phantom (R² = 0.99). At higher acoustic power levels, the displacement substantially increased and was associated with irreversible changes in the phantom. At these levels, transverse displacement components could also be detected. Displacements in the kidney were also observed and increased after thermal ablation. While the measurements need validation, the authors have demonstrated the feasibility of detecting small displacements induced by low-power ultrasound pulses using an efficient magnetic resonance imaging pulse sequence that is compatible with tracking of a dynamically steered ultrasound focal spot, and that the displacement increases with acoustic power. MR-ARFI has potential for elastography or to guide ultrasound therapies that use low-power pulsed ultrasound exposures, such as drug delivery.

  11. Megapixel ion imaging with standard video

    SciTech Connect

    Li Wen; Chambreau, Steven D.; Lahankar, Sridhar A.; Suits, Arthur G.

    2005-06-15

    We present an ion imaging approach employing a real-time ion counting method with standard video. This method employs a center-of-mass calculation of each ion spot (spread over more than 3×3 pixels) prior to integration. The results of this algorithm are subpixel-precision position data for the corresponding ion spots. These addresses are then converted to the final image with user-selected resolution, which can be up to ten times higher than the standard video camera resolution (640×480). This method removes the limiting factor imposed by the resolution of standard video cameras and does so at very low cost. The technique is used in conjunction with dc slice imaging, replacing the local-maximum searching algorithm developed by Houston and co-workers [B. Y. Chang, R. C. Hoetzlein, J. A. Mueller, J. D. Geiser, and P. L. Houston, Rev. Sci. Instrum. 69, 1665 (1998)]. The performance is demonstrated using HBr and DBr photodissociation at 193 nm with 3+1 resonance enhanced multiphoton ionization detection of hydrogen and deuterium atom products. The measured velocity resolution for DBr dissociation is 0.50% (Δv/v), mainly limited in this case by the bandwidth of the photolysis laser. Issues affecting slice imaging resolution and performance are also discussed.
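
    As a rough illustration of the counting step described above (center-of-mass of each ion spot prior to integration), the hedged sketch below labels connected bright regions in a frame and accumulates their sub-pixel centroids into a higher-resolution counting image; the threshold, the fourfold upscale factor, and the function name are assumptions, not the paper's implementation.

        import numpy as np
        from scipy import ndimage

        def accumulate_ion_spots(frame, image_accum, threshold, upscale=4):
            """Centroid each ion spot in a video frame and add it, at sub-pixel
            precision, to a higher-resolution accumulated counting image."""
            mask = frame > threshold
            labels, n = ndimage.label(mask)
            if n == 0:
                return
            centroids = ndimage.center_of_mass(frame, labels,
                                               index=np.arange(1, n + 1))
            for r, c in centroids:
                ri, ci = int(round(r * upscale)), int(round(c * upscale))
                if 0 <= ri < image_accum.shape[0] and 0 <= ci < image_accum.shape[1]:
                    image_accum[ri, ci] += 1

        # The accumulated image has upscale times the camera resolution, e.g. a
        # 640 x 480 camera yields a 2560 x 1920 counting image.
        accum = np.zeros((480 * 4, 640 * 4), dtype=np.uint32)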

  12. Optical and opto-acoustic interventional imaging.

    PubMed

    Sarantopoulos, Athanasios; Beziere, Nicolas; Ntziachristos, Vasilis

    2012-02-01

    Many clinical interventional procedures, such as surgery or endoscopy, are today still guided by human vision and perception. Human vision however is not sensitive or accurate in detecting a large range of disease biomarkers, for example cellular or molecular processes characteristic of disease. For this reason advanced optical and opto-acoustic (photo-acoustic) methods are considered for enabling a more versatile, sensitive and accurate detection of disease biomarkers and complement human vision in clinical decision making during interventions. Herein, we outline developments in emerging fluorescence and opto-acoustic sensing and imaging techniques that can lead to practical implementations toward improving interventional vision.

  13. Computing Displacements And Strains From Video Images

    NASA Technical Reports Server (NTRS)

    Russell, Samuel S.; Mcneill, Stephen R.; Lansing, Matthew D.

    1996-01-01

    Subpixel digital video image correlation (SDVIC) is a technique for measuring in-plane displacements on surfaces of objects under load, without contact. Used for analyses of experimental research specimens or actual service structures of virtually any size or material. Only minimal preparation of test objects is needed, and there is no need to isolate test objects from minor vibrations or fluctuating temperatures. Technique implemented by SDVIC software, producing color-graduated, full-field representations of in-plane displacements and partial derivatives with respect to position along both principal directions in each image plane. From these representations, linear strains, shear strains, and rotation fields are determined. Written in C language.
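
    A hedged sketch of the core idea of digital image correlation follows: locate a subset of the reference image in the deformed image by normalized cross-correlation, then refine the correlation peak to sub-pixel precision with a parabolic fit. This is a generic illustration in Python under assumed parameters, not the SDVIC software (which is written in C) described above.

        import numpy as np

        def subpixel_peak(corr):
            """Refine the integer peak of a correlation surface with a
            parabolic fit along each axis."""
            r, c = np.unravel_index(np.argmax(corr), corr.shape)
            dr = dc = 0.0
            if 0 < r < corr.shape[0] - 1:
                a, b, c2 = corr[r - 1, c], corr[r, c], corr[r + 1, c]
                den = a - 2 * b + c2
                dr = 0.5 * (a - c2) / den if den != 0 else 0.0
            if 0 < c < corr.shape[1] - 1:
                a, b, c2 = corr[r, c - 1], corr[r, c], corr[r, c + 1]
                den = a - 2 * b + c2
                dc = 0.5 * (a - c2) / den if den != 0 else 0.0
            return r + dr, c + dc

        def subset_displacement(ref, cur, top, left, size, search=10):
            """Displacement (dy, dx), in pixels, of a square subset of the
            reference image ref, located in the current image cur."""
            sub = ref[top:top + size, left:left + size].astype(float)
            sub = (sub - sub.mean()) / sub.std()
            corr = np.full((2 * search + 1, 2 * search + 1), -1.0)
            for i, dy in enumerate(range(-search, search + 1)):
                for j, dx in enumerate(range(-search, search + 1)):
                    y0, x0 = top + dy, left + dx
                    if y0 < 0 or x0 < 0:
                        continue
                    win = cur[y0:y0 + size, x0:x0 + size].astype(float)
                    if win.shape != sub.shape or win.std() == 0:
                        continue
                    win = (win - win.mean()) / win.std()
                    corr[i, j] = (sub * win).mean()
            pr, pc = subpixel_peak(corr)
            return pr - search, pc - search

    Repeating this over a grid of subsets yields the full-field displacement maps from which strains and rotations can be differentiated.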

  14. Feature-preserving image/video compression

    NASA Astrophysics Data System (ADS)

    Al-Jawad, Naseer; Jassim, Sabah

    2005-10-01

    Advances in digital image processing, the advent of multimedia computing, and the availability of affordable high-quality digital cameras have led to increased demand for digital images and videos. There has been fast growth in the number of information systems that benefit from digital imaging techniques, and these systems present many tough challenges. In this paper we are concerned with applications for which image quality is a critical requirement. The fields of medicine, remote sensing, real-time surveillance, and image-based automatic fingerprint/face identification systems are but a few examples of such applications. Medical care is increasingly dependent on imaging for diagnostics, surgery, and education. It is estimated that medium-sized hospitals in the US generate terabytes of MRI and X-ray images, which are stored in very large databases that are frequently accessed and searched for research and training. On the other hand, the rise of international terrorism and the growth of identity theft have added urgency to the development of new efficient biometric-based person verification/authentication systems. In the future, such systems can provide an additional layer of security for online transactions or for real-time surveillance.

  15. Image and video restorations via nonlocal kernel regression.

    PubMed

    Zhang, Haichao; Yang, Jianchao; Zhang, Yanning; Huang, Thomas S

    2013-06-01

    A nonlocal kernel regression (NL-KR) model is presented in this paper for various image and video restoration tasks. The proposed method exploits both the nonlocal self-similarity and local structural regularity properties in natural images. The nonlocal self-similarity is based on the observation that image patches tend to repeat themselves in natural images and videos, and the local structural regularity observes that image patches have regular structures where accurate estimation of pixel values via regression is possible. By unifying both properties explicitly, the proposed NL-KR framework is more robust in image estimation, and the algorithm is applicable to various image and video restoration tasks. In this paper, we apply the proposed model to image and video denoising, deblurring, and superresolution reconstruction. Extensive experimental results on both single images and realistic video sequences demonstrate that the proposed framework performs favorably with previous works both qualitatively and quantitatively.

  16. Underwater imaging with a moving acoustic lens.

    PubMed

    Kamgar-Parsi, B; Rosenblum, L J; Belcher, E O

    1998-01-01

    The acoustic lens is a high-resolution, forward-looking sonar for three-dimensional (3-D) underwater imaging. We discuss processing the lens data for recreating and visualizing the scene. Acoustical imaging, compared to optical imaging, is sparse and low resolution. To achieve higher resolution, we obtain a denser sample by mounting the lens on a moving platform and passing over the scene. This introduces the problem of data fusion from multiple overlapping views for scene formation, which we discuss. We also discuss the improvements in object reconstruction by combining data from several passes over an object. We present algorithms for pass registration and show that this process can be done with enough accuracy to improve the image and provide greater detail about the object. The results of in-water experiments show the degree to which size and shape can be obtained under (nearly) ideal conditions.

  17. Snapshot spectral and polarimetric imaging; target identification with multispectral video

    NASA Astrophysics Data System (ADS)

    Bartlett, Brent D.; Rodriguez, Mikel D.

    2013-05-01

    As the number of pixels continues to grow in consumer and scientific imaging devices, it has become feasible to collect the incident light field. In this paper, an imaging device developed around light field imaging is used to collect multispectral and polarimetric imagery in a snapshot fashion. The sensor is described and a video data set is shown highlighting the advantage of snapshot spectral imaging. Several novel computer vision approaches are applied to the video cubes to perform scene characterization and target identification. It is shown how the addition of spectral and polarimetric data to the video stream allows for multi-target identification and tracking not possible with traditional RGB video collection.

  18. Accuracy of video imaging in mandibular surgery.

    PubMed

    Carter, A C; Larson, B E; Guenthner, T A

    1996-01-01

    Video imaging can simulate combined orthodontic-orthognathic surgical treatment to assist in treatment planning and patient education. Video imaging predictions were compared with actual posttreatment results for 18 patients who received orthodontic and mandibular orthognathic surgical treatments. Three untreated control subjects were also studied. The locations of 13 soft tissue landmarks relative to horizontal and vertical reference planes were compared between predictions and posttreatment photographs, and significant variation (+/- 5 mm) was found for many of the landmarks. Comparisons of various steps repeated during the prediction process were also completed to test for reproducibility. Relatively small differences, generally less than +/- 2 mm, were attributed to the process of linking the cephalogram and photograph and to the manual steps to create surgical treatment objectives. The largest proportion of the total variation, about 80%, was estimated to arise from inaccuracy inherent in the software program. Other contributions to the total variation likely came from physiologic facial changes over time and nonstandardized head positions in the photographs.

  19. Quantitative imaging of acoustic reflection and interference

    NASA Astrophysics Data System (ADS)

    Malkin, Robert; Todd, Thomas; Robert, Daniel

    2015-01-01

    This paper presents a method for time-resolved quantitative imaging of acoustic waves. We present the theoretical background, the experimental method, and the comparison between experimental and numerical reconstructions of acoustic reflection and interference. Laser Doppler vibrometry is used to detect the modulation of the propagation velocity of light, c, due to pressure-dependent changes in the refractive index of air. Variation in c is known to be proportional to variation in acoustic pressure and thus can be used to quantify sound pressure fluctuations. The method requires the laser beam to travel through the sound field, in effect integrating pressure along a transect line. We investigate the applicability of the method, in particular the effect of the geometry of the sound radiator on line integration. Both experimental and finite element reconstructions of the sound field are in good agreement, corroborating point pressure measurements from a precision microphone. Spatial limitations and accuracy of the method are presented and discussed.

  20. Full-Field Imaging of Acoustic Motion at Nanosecond Time and Micron Length Scales

    SciTech Connect

    Telschow, Kenneth Louis; Deason, Vance Albert; Cottle, David Lynn; Larson III, John D.

    2002-10-01

    A full-field view laser ultrasonic imaging method has been developed that measures acoustic motion at a surface without scanning. Images are recorded at normal video frame rates by employing dynamic holography using photorefractive interferometric detection. By extending the approach to ultra high frequencies, an acoustic microscope has been developed capable of operation on the nanosecond time and micron length scales. Both acoustic amplitude and phase are recorded, allowing full calibration and determination of phases to within a single arbitrary constant. Results are presented of measurements at frequencies of 800-900 MHz illustrating a multitude of normal mode behavior in electrically driven thin film acoustic resonators. Coupled with microwave electrical impedance measurements, this imaging mode provides an exceptionally fast method for evaluation of electric to acoustic coupling and performance of these devices. Images of 256x240 pixels are recorded at 18 Hz rates synchronized to obtain both in-phase and quadrature detection of the acoustic motion. Simple averaging provides sensitivity to the subnanometer level calibrated over the image using interferometry. Identification of specific acoustic modes and their relationship to electrical impedance characteristics shows the advantages and overall high speed of the technique.

  1. Methane distribution in porewaters of the Eastern Siberian Shelf Sea - chemical, acoustic, and video observations

    NASA Astrophysics Data System (ADS)

    Bruchert, V.; Sawicka, J. E.; Samarkin, V.; Noormets, R.; Stockmann, G. J.; Bröder, L.; Rattray, J.; Steinbach, J.

    2015-12-01

    We present porewater methane and sulfate concentrations, and the isotope composition of carbon dioxide from 18 sites in areas of reported high methane water column concentrations on the Siberian shelf. Echosounder imaging and video imagery of the benthic environment were used to detect potential bubble emission from the sea bottom and to locate high methane emission areas. In areas where bubble flares were identified by acoustic echosounder imaging, recovered sediment cores provided evidence for slightly elevated porewater methane concentrations 10 cm below the sediment surface relative to sites without flares. Throughout the recovered sediment depth intervals, porewater concentrations of methane were more than a factor of 300 below the gas saturation limit at sea surface pressure. In addition, surface sediment video recordings provided no evidence for bubble emissions in the investigated methane hotspot areas, although at nearby sites bubbles were detected higher in the water column. The conflicting observations of acoustic indications of rising bubbles and the absence of bubbles and methane oversaturation in any of the sediment cores during the whole SWERUS cruise suggest that advective methane seepage is a spatially limited phenomenon that is difficult to capture with routine ship-based core sampling methods in this field area. Recovery of a sediment core from one high-activity site indicated steep gradients in dissolved sulfate and methane in the first 8 cm of sediment, pointing to the presence of anaerobic methane oxidation at a site with a high upward flux of methane. Based on the decrease of methane towards the sediment surface and the rates of sulfate reduction-coupled methane oxidation, most of the upward-transported methane was oxidized within the sediment. This conclusion is further supported by the stable isotope composition of dissolved carbon dioxide in porewaters and the precipitation of calcium carbonate minerals only found in sediment at this site.

  2. VLSI-based Video Event Triggering for Image Data Compression

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    1994-01-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.
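    The pre-trigger/post-trigger archiving idea can be sketched in software, though the NASA design is a VLSI state machine with fuzzy-logic monitoring rather than the simple inter-frame difference used here. The buffer depths and trigger threshold below are illustrative assumptions.

```python
# Illustrative software sketch (not the NASA VLSI design) of pre/post-trigger capture:
# a ring buffer holds the most recent frames, a mean inter-frame difference stands in
# for the event detector, and only frames around a detected event are kept.
import numpy as np
from collections import deque

PRE_FRAMES = 30      # frames retained before the trigger (assumption)
POST_FRAMES = 30     # frames retained after the trigger (assumption)
THRESHOLD = 5.0      # mean absolute inter-frame difference that counts as an event

def monitor(frames):
    """Yield lists of (pre + post) frames around each detected video event."""
    ring = deque(maxlen=PRE_FRAMES)
    prev, post_left, event = None, 0, []
    for frame in frames:
        if prev is not None and post_left == 0:
            change = np.mean(np.abs(frame.astype(float) - prev.astype(float)))
            if change > THRESHOLD:
                event = list(ring)          # trigger: copy the pre-trigger history
                post_left = POST_FRAMES
        if post_left > 0:
            event.append(frame)             # keep archiving the post-trigger frames
            post_left -= 1
            if post_left == 0:
                yield event
                event = []
        ring.append(frame)
        prev = frame
```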

  3. Image and video fingerprinting: forensic applications

    NASA Astrophysics Data System (ADS)

    Lefebvre, Frédéric; Chupeau, Bertrand; Massoudi, Ayoub; Diehl, Eric

    2009-02-01

    Fighting movie piracy often requires automatic content identification. The most common technique to achieve this uses watermarking, but not all copyrighted content is watermarked. Video fingerprinting is an efficient alternative solution to identify content, to manage multimedia files in UGC sites or P2P networks and to register pirated copies with master content. When registering by matching copy fingerprints with master ones, a model of distortion can be estimated. In case of in-theater piracy, the model of geometric distortion allows the estimation of the capture location. A step even further is to determine, from passive image analysis only, whether different pirated versions were captured with the same camcorder. In this paper we present three such fingerprinting-based forensic applications: UGC filtering, estimation of capture location and source identification.

  4. Video indexing based on image and sound

    NASA Astrophysics Data System (ADS)

    Faudemay, Pascal; Montacie, Claude; Caraty, Marie-Jose

    1997-10-01

    Video indexing is a major challenge for both scientific and economic reasons. Information extraction can sometimes be easier from the sound channel than from the image channel. We first present a multi-channel and multi-modal query interface, to query sound, image and script through 'pull' and 'push' queries. We then summarize the segmentation phase, which needs information from the image channel. Detection of critical segments is proposed. It should speed up both automatic and manual indexing. We then present an overview of the information extraction phase. Information can be extracted from the sound channel, through speaker recognition, vocal dictation with unconstrained vocabularies, and script alignment with speech. We present experimental results for these various techniques. Speaker recognition methods were tested on the TIMIT and NTIMIT databases. Vocal dictation was tested on newspaper sentences spoken by several speakers. Script alignment was tested on part of a cartoon movie, 'Ivanhoe'. For good quality sound segments, error rates are low enough for use in indexing applications. Major issues are the processing of sound segments with noise or music, and performance improvement through the use of appropriate, low-cost architectures or networks of workstations.

  5. Development and calibration of acoustic video camera system for moving vehicles

    NASA Astrophysics Data System (ADS)

    Yang, Diange; Wang, Ziteng; Li, Bing; Lian, Xiaomin

    2011-05-01

    In this paper, a new acoustic video camera system is developed and its calibration method is established. The system is built on binocular vision and acoustical holography technology. With the binocular vision method, the spatial distance between the microphone array and the moving vehicle is obtained, and the sound reconstruction plane is automatically established close to the moving vehicle surface. The sound video is then accurately regenerated close to the moving vehicle by the acoustic holography method. With this system, moving and stationary sound sources are treated differently and automatically, which makes the sound visualization of moving vehicles quicker, more intuitive, and more accurate. To verify the system, experiments with a stationary speaker and a non-stationary speaker were carried out, and further verification experiments with an outdoor moving vehicle were also conducted. The successful video visualization results not only confirm the validity of the system but also suggest that it can be a potentially useful tool in vehicle noise identification, because it allows users to locate noise sources easily from the videos. We believe the newly developed system has great potential for the noise identification and control of moving vehicles.

  6. Multiresolutional encoding and decoding in embedded image and video coders

    NASA Astrophysics Data System (ADS)

    Xiong, Zixiang; Kim, Beong-Jo; Pearlman, William A.

    1998-07-01

    We address multiresolutional encoding and decoding within the embedded zerotree wavelet (EZW) framework for both images and video. By varying a resolution parameter, one can obtain decoded images at different resolutions from one single encoded bitstream, which is already rate scalable for EZW coders. Similarly one can decode video sequences at different rates and different spatial and temporal resolutions from one bitstream. Furthermore, a layered bitstream can be generated with multiresolutional encoding, from which the higher resolution layers can be used to increase the spatial/temporal resolution of the images/video obtained from the low resolution layer. In other words, we have achieved full scalability in rate and partial scalability in space and time. This added spatial/temporal scalability is significant for emerging multimedia applications such as fast decoding, image/video database browsing, telemedicine, multipoint video conferencing, and distance learning.

  7. Nonlinear ultrasound imaging of nanoscale acoustic biomolecules.

    PubMed

    Maresca, David; Lakshmanan, Anupama; Lee-Gosselin, Audrey; Melis, Johan M; Ni, Yu-Li; Bourdeau, Raymond W; Kochmann, Dennis M; Shapiro, Mikhail G

    2017-02-13

    Ultrasound imaging is widely used to probe the mechanical structure of tissues and visualize blood flow. However, the ability of ultrasound to observe specific molecular and cellular signals is limited. Recently, a unique class of gas-filled protein nanostructures called gas vesicles (GVs) was introduced as nanoscale (∼250 nm) contrast agents for ultrasound, accompanied by the possibilities of genetic engineering, imaging of targets outside the vasculature and monitoring of cellular signals such as gene expression. These possibilities would be aided by methods to discriminate GV-generated ultrasound signals from anatomical background. Here, we show that the nonlinear response of engineered GVs to acoustic pressure enables selective imaging of these nanostructures using a tailored amplitude modulation strategy. Finite element modeling predicted a strongly nonlinear mechanical deformation and acoustic response to ultrasound in engineered GVs. This response was confirmed with ultrasound measurements in the range of 10 to 25 MHz. An amplitude modulation pulse sequence based on this nonlinear response allows engineered GVs to be distinguished from linear scatterers and other GV types with a contrast ratio greater than 11.5 dB. We demonstrate the effectiveness of this nonlinear imaging strategy in vitro, in cellulo, and in vivo.

  8. Nonlinear ultrasound imaging of nanoscale acoustic biomolecules

    NASA Astrophysics Data System (ADS)

    Maresca, David; Lakshmanan, Anupama; Lee-Gosselin, Audrey; Melis, Johan M.; Ni, Yu-Li; Bourdeau, Raymond W.; Kochmann, Dennis M.; Shapiro, Mikhail G.

    2017-02-01

    Ultrasound imaging is widely used to probe the mechanical structure of tissues and visualize blood flow. However, the ability of ultrasound to observe specific molecular and cellular signals is limited. Recently, a unique class of gas-filled protein nanostructures called gas vesicles (GVs) was introduced as nanoscale (˜250 nm) contrast agents for ultrasound, accompanied by the possibilities of genetic engineering, imaging of targets outside the vasculature and monitoring of cellular signals such as gene expression. These possibilities would be aided by methods to discriminate GV-generated ultrasound signals from anatomical background. Here, we show that the nonlinear response of engineered GVs to acoustic pressure enables selective imaging of these nanostructures using a tailored amplitude modulation strategy. Finite element modeling predicted a strongly nonlinear mechanical deformation and acoustic response to ultrasound in engineered GVs. This response was confirmed with ultrasound measurements in the range of 10 to 25 MHz. An amplitude modulation pulse sequence based on this nonlinear response allows engineered GVs to be distinguished from linear scatterers and other GV types with a contrast ratio greater than 11.5 dB. We demonstrate the effectiveness of this nonlinear imaging strategy in vitro, in cellulo, and in vivo.

  9. Learning Multiscale Sparse Representations for Image and Video Restoration (PREPRINT)

    DTIC Science & Technology

    2007-07-01

    video denoising [35]. In this paper, we extend the basic K-SVD work, providing a framework for learning multiscale and sparse image representation. In... denoising algorithm [1], the extensions to color image denoising, non-homogeneous noise, and inpainting [25], and the K-SVD for denoising videos [35]. Section... improvements to the original single-scale K-SVD. Section 6 presents some applications of the multiscale K-SVD, covering grayscale and color image denoising

  10. Extended image differencing for change detection in UAV video mosaics

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Krüger, Wolfgang; Schumann, Arne

    2014-03-01

    Change detection is one of the most important tasks when using unmanned aerial vehicles (UAV) for video reconnaissance and surveillance. We address changes on a short time scale, i.e., observations taken at time intervals from several minutes up to a few hours. Each observation is a short video sequence acquired by the UAV in near-nadir view and the relevant changes are, e.g., recently parked or moved vehicles. In this paper we extend our previous approach of image differencing for single video frames to video mosaics. A precise image-to-image registration combined with a robust matching approach is needed to stitch the video frames to a mosaic. Additionally, this matching algorithm is applied to mosaic pairs in order to align them to a common geometry. The resulting registered video mosaic pairs are the input of the change detection procedure based on extended image differencing. A change mask is generated by an adaptive threshold applied to a linear combination of difference images of intensity and gradient magnitude. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples of non-relevant changes are stereo disparity at 3D structures of the scene, changed size of shadows, and compression or transmission artifacts. The special effects of video mosaicking such as geometric distortions and artifacts at moving objects have to be considered, too. In our experiments we analyze the influence of these effects on the change detection results by considering several scenes. The results show that for video mosaics this task is more difficult than for single video frames. Therefore, we extended the image registration by estimating an elastic transformation using a thin plate spline approach. The results for mosaics are comparable to those for single video frames and are useful for interactive image exploitation due to a larger scene coverage.
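    The change-mask step can be sketched as follows, assuming the two mosaics are already co-registered; the mixing weight and the mean-plus-k-sigma threshold rule are illustrative stand-ins for the authors' adaptive threshold.

```python
# Sketch of a change mask from a linear combination of intensity and gradient-magnitude
# difference images, thresholded adaptively. alpha and k are illustrative assumptions.
import numpy as np

def change_mask(mosaic_a, mosaic_b, alpha=0.5, k=3.0):
    """Binary change mask between two co-registered float grayscale mosaics."""
    d_int = np.abs(mosaic_a - mosaic_b)                       # intensity difference

    def grad_mag(img):
        gy, gx = np.gradient(img)
        return np.hypot(gx, gy)

    d_grad = np.abs(grad_mag(mosaic_a) - grad_mag(mosaic_b))  # gradient-magnitude difference
    combined = alpha * d_int + (1.0 - alpha) * d_grad         # linear combination
    threshold = combined.mean() + k * combined.std()          # simple adaptive threshold
    return combined > threshold
```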

  11. Acoustic imaging of subtle porosity variations in ceramics

    NASA Technical Reports Server (NTRS)

    Generazio, E. R.; Roth, D. J.; Baaklini, G. Y.

    1988-01-01

    Acoustic images of silicon carbide ceramic disks were obtained using a precision scanning contact pulse-echo technique. Phase and cross-correlation velocity and attenuation maps were used to form color images of microstructural variations. These acoustic images reveal microstructural variations not observable with X-radiography.

  12. Method and apparatus for acoustic imaging of objects in water

    DOEpatents

    Deason, Vance A.; Telschow, Kenneth L.

    2005-01-25

    A method, system and underwater camera for acoustic imaging of objects in water or other liquids includes an acoustic source for generating an acoustic wavefront for reflecting from a target object as a reflected wavefront. The reflected acoustic wavefront deforms a screen on an acoustic side and correspondingly deforms the opposing optical side of the screen. An optical processing system is optically coupled to the optical side of the screen and converts the deformations on the optical side of the screen into an optical intensity image of the target object.

  13. Video Imaging System Particularly Suited for Dynamic Gear Inspection

    NASA Technical Reports Server (NTRS)

    Broughton, Howard (Inventor)

    1999-01-01

    A digital video imaging system that captures the image of a single tooth of interest of a rotating gear is disclosed. The video imaging system detects the complete rotation of the gear and divides that rotation into discrete time intervals so that each tooth of interest of the gear is precisely determined when it is at a desired location that is illuminated in unison with a digital video camera so as to record a single digital image for each tooth. The digital images are available to provide instantaneous analysis of the tooth of interest, or to be stored and later provide images that yield a history that may be used to predict gear failure, such as gear fatigue. The imaging system is completely automated by a controlling program so that it may run for several days acquiring images without supervision from the user.

  14. VICAR - VIDEO IMAGE COMMUNICATION AND RETRIEVAL

    NASA Technical Reports Server (NTRS)

    Wall, R. J.

    1994-01-01

    VICAR (Video Image Communication and Retrieval) is a general purpose image processing software system that has been under continuous development since the late 1960's. Originally intended for data from the NASA Jet Propulsion Laboratory's unmanned planetary spacecraft, VICAR is now used for a variety of other applications including biomedical image processing, cartography, earth resources, and geological exploration. The development of this newest version of VICAR emphasized a standardized, easily-understood user interface, a shield between the user and the host operating system, and a comprehensive array of image processing capabilities. Structurally, VICAR can be divided into roughly two parts; a suite of applications programs and an executive which serves as the interfaces between the applications, the operating system, and the user. There are several hundred applications programs ranging in function from interactive image editing, data compression/decompression, and map projection, to blemish, noise, and artifact removal, mosaic generation, and pattern recognition and location. An information management system designed specifically for handling image related data can merge image data with other types of data files. The user accesses these programs through the VICAR executive, which consists of a supervisor and a run-time library. From the viewpoint of the user and the applications programs, the executive is an environment that is independent of the operating system. VICAR does not replace the host computer's operating system; instead, it overlays the host resources. The core of the executive is the VICAR Supervisor, which is based on NASA Goddard Space Flight Center's Transportable Applications Executive (TAE). Various modifications and extensions have been made to optimize TAE for image processing applications, resulting in a user friendly environment. The rest of the executive consists of the VICAR Run-Time Library, which provides a set of subroutines (image

  15. Acoustic and photoacoustic molecular imaging of cancer.

    PubMed

    Wilson, Katheryne E; Wang, Tzu Yin; Willmann, Jürgen K

    2013-11-01

    Ultrasound and combined optical and ultrasonic (photoacoustic) molecular imaging have shown great promise in the visualization and monitoring of cancer through imaging of vascular and extravascular molecular targets. Contrast-enhanced ultrasound with molecularly targeted microbubbles can detect early-stage cancer through the visualization of targets expressed on the angiogenic vasculature of tumors. Ultrasonic molecular imaging can be extended to the imaging of extravascular targets through use of nanoscale, phase-change droplets and photoacoustic imaging, which provides further molecular information on cancer given by the chemical composition of tissues and by targeted nanoparticles that can interact with extravascular tissues at the receptor level. A new generation of targeted contrast agents goes beyond merely increasing imaging signal at the site of target expression but shows activatable and differential contrast depending on their interactions with the tumor microenvironment. These innovations may further improve our ability to detect and characterize tumors. In this review, recent developments in acoustic and photoacoustic molecular imaging of cancer are discussed.

  16. Research on defogging technology of video image based on FPGA

    NASA Astrophysics Data System (ADS)

    Liu, Shuo; Piao, Yan

    2015-03-01

    As a result of scattering by atmospheric particles, video images captured by outdoor surveillance systems have low contrast and brightness, which directly affects the application value of such systems. Traditional defogging techniques are mostly implemented in software and operate on single frames; moreover, the algorithms are computationally expensive and have high time complexity. Video-image defogging based on a Digital Signal Processor (DSP) suffers from complex peripheral circuitry, cannot achieve real-time processing, and is hard to debug and upgrade. In this paper, using an improved dark channel prior algorithm, we propose a video-image defogging technique based on a Field Programmable Gate Array (FPGA). Compared with traditional defogging methods, high-resolution video can be processed in real time. Furthermore, the function modules of the system have been designed in a hardware description language. The results show that the FPGA-based defogging system can process video with a minimum resolution of 640×480 in real time. After defogging, the brightness and contrast of the video are improved effectively. The proposed defogging technique therefore has a wide variety of applications, including aviation, forest fire prevention, national security, and other important surveillance tasks.
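    For reference, a sketch of the standard single-image dark channel prior, which the FPGA design builds on, is shown below; the window size, omega, and atmospheric-light estimate are common textbook choices, not the paper's improved, hardware-optimized version.

```python
# Sketch of the standard single-image dark channel prior (He et al. style), without the
# paper's FPGA-specific improvements. Parameters are common illustrative defaults.
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze(img, window=15, omega=0.95, t0=0.1):
    """img: HxWx3 float array in [0, 1]; returns a dehazed image of the same shape."""
    dark = minimum_filter(img.min(axis=2), size=window)        # dark channel

    # Atmospheric light: mean color of the brightest 0.1% dark-channel pixels.
    n = max(1, int(dark.size * 0.001))
    idx = np.argpartition(dark.ravel(), -n)[-n:]
    A = img.reshape(-1, 3)[idx].mean(axis=0)

    # Transmission estimate from the dark channel of the normalized image.
    norm_dark = minimum_filter((img / A).min(axis=2), size=window)
    t = np.clip(1.0 - omega * norm_dark, t0, 1.0)

    # Recover the scene radiance J = (I - A) / t + A.
    return np.clip((img - A) / t[..., None] + A, 0.0, 1.0)
```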

  17. Acoustic noise during functional magnetic resonance imaging.

    PubMed

    Ravicz, M E; Melcher, J R; Kiang, N Y

    2000-10-01

    Functional magnetic resonance imaging (fMRI) enables sites of brain activation to be localized in human subjects. For studies of the auditory system, acoustic noise generated during fMRI can interfere with assessments of this activation by introducing uncontrolled extraneous sounds. As a first step toward reducing the noise during fMRI, this paper describes the temporal and spectral characteristics of the noise present under typical fMRI study conditions for two imagers with different static magnetic field strengths. Peak noise levels were 123 and 138 dB re 20 microPa in a 1.5-tesla (T) and a 3-T imager, respectively. The noise spectrum (calculated over a 10-ms window coinciding with the highest-amplitude noise) showed a prominent maximum at 1 kHz for the 1.5-T imager (115 dB SPL) and at 1.4 kHz for the 3-T imager (131 dB SPL). The frequency content and timing of the most intense noise components indicated that the noise was primarily attributable to the readout gradients in the imaging pulse sequence. The noise persisted above background levels for 300-500 ms after gradient activity ceased, indicating that resonating structures in the imager or noise reverberating in the imager room were also factors. The gradient noise waveform was highly repeatable. In addition, the coolant pump for the imager's permanent magnet and the room air-handling system were sources of ongoing noise lower in both level and frequency than gradient coil noise. Knowledge of the sources and characteristics of the noise enabled the examination of general approaches to noise control that could be applied to reduce the unwanted noise during fMRI sessions.

  18. Imaging of acoustic fields using optical feedback interferometry.

    PubMed

    Bertling, Karl; Perchoux, Julien; Taimre, Thomas; Malkin, Robert; Robert, Daniel; Rakić, Aleksandar D; Bosch, Thierry

    2014-12-01

    This study introduces optical feedback interferometry as a simple and effective technique for the two-dimensional visualisation of acoustic fields. We present imaging results for several pressure distributions including those for progressive waves, standing waves, as well as the diffraction and interference patterns of the acoustic waves. The proposed solution has the distinct advantage of extreme optical simplicity and robustness thus opening the way to a low cost acoustic field imaging system based on mass produced laser diodes.

  19. Acoustical Imaging Cameras for the Inspection and Condition Assessment of Hydraulic Structures

    DTIC Science & Technology

    2010-08-01

    feasibility of using acoustical imaging for underwater inspection of structures. INTRODUCTION: Visibility in clear water for the human eye and optical... but higher resolution than sidescan or multibeam acoustical images • Nonhomogeneity of returned signal caused by variation in angles of signals... acoustical imaging. To obtain higher resolutions than other acoustical imaging technologies such as multibeam and sidescan systems, acoustical camera

  20. Image and Video Compression with VLSI Neural Networks

    NASA Technical Reports Server (NTRS)

    Fang, W.; Sheu, B.

    1993-01-01

    An advanced motion-compensated predictive video compression system based on artificial neural networks has been developed to effectively eliminate the temporal and spatial redundancy of video image sequences and thus reduce the bandwidth and storage required for the transmission and recording of the video signal. The VLSI neuroprocessor for high-speed high-ratio image compression based upon a self-organization network and the conventional algorithm for vector quantization are compared. The proposed method is quite efficient and can achieve near-optimal results.

  1. Efficient block error concealment code for image and video transmission

    NASA Astrophysics Data System (ADS)

    Min, Jungki; Chan, Andrew K.

    1999-05-01

    Image and video compression standards such as JPEG, MPEG, and H.263 are highly sensitive to errors during transmission. Among the typical error propagation mechanisms in video compression schemes, loss of block synchronization produces the worst image degradation. Even a single bit error in block synchronization may cause data to be placed in wrong positions through spatial shifts. Our proposed efficient block error concealment code (EBECC) virtually guarantees block synchronization, and it improves coding efficiency by several hundredfold over the error resilient entropy code (EREC), proposed by N. G. Kingsbury and D. W. Redmill, depending on the image format and size. In addition, the EBECC produces slightly better resolution in the reconstructed images or video frames than the EREC. Another important advantage of the EBECC is that it does not require redundancy, in contrast to the EREC, which requires 2-3 percent redundancy. Our preliminary results show the EBECC is 240 times faster than the EREC for encoding and 330 times faster for decoding, based on the CIF format of the H.263 video coding standard. The EBECC can be used with most of the popular image and video compression schemes such as JPEG, MPEG, and H.263. Additionally, it is especially useful in wireless networks, in which the percentage of image and video data is high.

  2. Interpreting Underwater Acoustic Images of the Upper Ocean Boundary Layer

    ERIC Educational Resources Information Center

    Ulloa, Marco J.

    2007-01-01

    A challenging task in physical studies of the upper ocean using underwater sound is the interpretation of high-resolution acoustic images. This paper covers a number of basic concepts necessary for undergraduate and postgraduate students to identify the most distinctive features of the images, providing a link with the acoustic signatures of…

  3. High-Frequency Acoustic Impedance Imaging of Cancer Cells.

    PubMed

    Fadhel, Muhannad N; Berndl, Elizabeth S L; Strohm, Eric M; Kolios, Michael C

    2015-10-01

    Variations in the acoustic impedance throughout cells and tissue can be used to gain insight into cellular microstructures and the physiologic state of the cell. Ultrasound imaging can be used to create a map of the acoustic impedance, on which fluctuations can be used to help identify the dominant ultrasound scattering source in cells, providing information for ultrasound tissue characterization. The physiologic state of a cell can be inferred from the average acoustic impedance values, as many cellular physiologic changes are linked to an alteration in their mechanical properties. A recently proposed method, acoustic impedance imaging, has been used to measure the acoustic impedance maps of biological tissues, but the method has not been used to characterize individual cells. Using this method to image cells can result in more precise acoustic impedance maps of cells than obtained previously using time-resolved acoustic microscopy. We employed an acoustic microscope using a transducer with a center frequency of 375 MHz to calculate the acoustic impedance of normal (MCF-10 A) and cancerous (MCF-7) breast cells. The generated acoustic impedance maps and simulations suggest that the position of the nucleus with respect to the polystyrene substrate may have an effect on the measured acoustic impedance value of the cell. Fluorescence microscopy and confocal microscopy were used to correlate acoustic impedance images with the position of the nucleus within the cell. The average acoustic impedance statistically differed between normal and cancerous breast cells (1.636 ± 0.010 MRayl vs. 1.612 ± 0.006 MRayl), indicating that acoustic impedance could be used to differentiate between normal and cancerous cells.

  4. An Acoustic Charge Transport Imager for High Definition Television

    NASA Technical Reports Server (NTRS)

    Hunt, William D.; Brennan, Kevin; May, Gary; Glenn, William E.; Richardson, Mike; Solomon, Richard

    1999-01-01

    This project, over its term, included funding to a variety of companies and organizations. In addition to Georgia Tech, these included Florida Atlantic University with Dr. William E. Glenn as the P.I., Kodak with Mr. Mike Richardson as the P.I., and M.I.T./Polaroid with Dr. Richard Solomon as the P.I. The focus of the work conducted by these organizations was the development of camera hardware for High Definition Television (HDTV). The focus of the research at Georgia Tech was the development of new semiconductor technology to achieve a next generation solid state imager chip that would operate at a high frame rate (170 frames per second), operate at low light levels (via the use of avalanche photodiodes as the detector element) and contain 2 million pixels. The actual cost required to create this new semiconductor technology was probably at least 5 or 6 times the investment made under this program and hence we fell short of achieving this rather grand goal. We did, however, produce a number of spin-off technologies as a result of our efforts. These include, among others, improved avalanche photodiode structures, significant advancement of the state of understanding of ZnO/GaAs structures and significant contributions to the analysis of general GaAs semiconductor devices and the design of Surface Acoustic Wave resonator filters for wireless communication. More of these will be described in the report. The work conducted at the partner sites resulted in the development of 4 prototype HDTV cameras. The HDTV camera developed by Kodak uses the Kodak KAI-2091M high-definition monochrome image sensor. This progressively-scanned charge-coupled device (CCD) can operate at video frame rates and has 9 µm square pixels. The photosensitive area has a 16:9 aspect ratio and is consistent with the "Common Image Format" (CIF). It features an active image area of 1928 horizontal by 1084 vertical pixels and has a 55% fill factor. The camera is designed to operate in continuous mode.

  5. PIZZARO: Forensic analysis and restoration of image and video data.

    PubMed

    Kamenicky, Jan; Bartos, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozamsky, Adam; Saic, Stanislav; Sroubek, Filip; Sorel, Michal; Zita, Ales; Zitova, Barbara; Sima, Zdenek; Svarc, Petr; Horinek, Jan

    2016-07-01

    This paper introduces a set of methods for image and video forensic analysis. They were designed to help to assess image and video credibility and origin and to restore and increase image quality by diminishing unwanted blur, noise, and other possible artifacts. The motivation came from the best practices used in criminal investigations utilizing images and/or videos. The determination of the image source, the verification of the image content, and image restoration were identified as the most important issues whose automation can facilitate criminalists' work. Novel theoretical results complemented with existing approaches (LCD re-capture detection and denoising) were implemented in the PIZZARO software tool, which consists of the image processing functionality as well as reporting and archiving functions to ensure the repeatability of image analysis procedures and thus fulfill formal aspects of the image/video analysis work. A comparison of the newly proposed methods with state-of-the-art approaches is shown. Real use cases are presented, which illustrate the functionality of the developed methods and demonstrate their applicability in different situations. The use cases as well as the method design were solved in tight cooperation between scientists from the Institute of Criminalistics, National Drug Headquarters of the Criminal Police and Investigation Service of the Police of the Czech Republic, and image processing experts from the Czech Academy of Sciences.

  6. Method and apparatus for reading meters from a video image

    DOEpatents

    Lewis, Trevor J.; Ferguson, Jeffrey J.

    1997-01-01

    A method and system to enable acquisition of data about an environment from one or more meters using video images. One or more meters are imaged by a video camera and the video signal is digitized. Then, each region of the digital image which corresponds to the indicator of the meter is calibrated and the video signal is analyzed to determine the value indicated by each meter indicator. Finally, from the value indicated by each meter indicator in the calibrated region, a meter reading is generated. The method and system offer the advantages of automatic data collection in a relatively non-intrusive manner without making any complicated or expensive electronic connections, and without requiring intensive manpower.

  7. Method and apparatus for reading meters from a video image

    SciTech Connect

    Lewis, T.J.; Ferguson, J.J.

    1995-12-31

    A method and system enable acquisition of data about an environment from one or more meters using video images. One or more meters are imaged by a video camera and the video signal is digitized. Then, each region of the digital image which corresponds to the indicator of the meter is calibrated and the video signal is analyzed to determine the value indicated by each meter indicator. Finally, from the value indicated by each meter indicator in the calibrated region, a meter reading is generated. The method and system offer the advantages of automatic data collection in a relatively non-intrusive manner without making any complicated or expensive electronic connections, and without requiring intensive manpower.
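    Both records above describe the same calibrate-then-analyze pipeline. One plausible software realization for a single analog needle gauge is sketched below; the pivot location, calibration angles and values, and the dark-needle threshold are all hypothetical quantities that would come from the calibration step rather than from the patent text.

```python
# Hypothetical sketch of reading one analog needle gauge from a calibrated image region:
# dark pixels are taken as the needle, their median angle about the pivot gives the
# needle direction, and the angle is mapped linearly onto the calibrated scale.
# Assumes image coordinates (y down) and a scale that does not cross the +/-pi wrap.
import numpy as np

def read_gauge(region, pivot, angle_min, angle_max, value_min, value_max, thresh=60):
    """Estimate the reading of a needle gauge from a calibrated grayscale image region.

    region:    2-D array containing only the gauge face
    pivot:     (y, x) needle pivot inside the region (from calibration)
    angle_*:   needle angles (radians) at the scale endpoints (from calibration)
    value_*:   meter values at those endpoints
    """
    ys, xs = np.nonzero(region < thresh)                  # dark pixels = needle
    if ys.size == 0:
        raise ValueError("no needle pixels found")
    angles = np.arctan2(ys - pivot[0], xs - pivot[1])     # angle of each needle pixel
    needle_angle = np.median(angles)                      # robust needle direction
    frac = (needle_angle - angle_min) / (angle_max - angle_min)
    return value_min + np.clip(frac, 0.0, 1.0) * (value_max - value_min)
```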

  8. Video enhancement workbench: an operational real-time video image processing system

    NASA Astrophysics Data System (ADS)

    Yool, Stephen R.; Van Vactor, David L.; Smedley, Kirk G.

    1993-01-01

    Video image sequences can be exploited in real-time, giving analysts rapid access to information for military or criminal investigations. Video-rate dynamic range adjustment subdues fluctuations in image intensity, thereby assisting discrimination of small or low- contrast objects. Contrast-regulated unsharp masking enhances differentially shadowed or otherwise low-contrast image regions. Real-time removal of localized hotspots, when combined with automatic histogram equalization, may enhance resolution of objects directly adjacent. In video imagery corrupted by zero-mean noise, real-time frame averaging can assist resolution and location of small or low-contrast objects. To maximize analyst efficiency, lengthy video sequences can be screened automatically for low-frequency, high-magnitude events. Combined zoom, roam, and automatic dynamic range adjustment permit rapid analysis of facial features captured by video cameras recording crimes in progress. When trying to resolve small objects in murky seawater, stereo video places the moving imagery in an optimal setting for human interpretation.
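    Two of the operations named above, unsharp masking and frame averaging, are simple enough to sketch; the blur scale, gain, and averaging depth below are illustrative assumptions rather than the workbench's actual settings.

```python
# Minimal sketches of unsharp masking (lift low-contrast detail) and running frame
# averaging (suppress zero-mean noise); parameters are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(frame, sigma=3.0, gain=1.5):
    """Sharpen a grayscale frame by adding back the high-pass residual."""
    blurred = gaussian_filter(frame.astype(float), sigma)
    return np.clip(frame + gain * (frame - blurred), 0, 255)

def frame_average(frames, n=8):
    """Yield the running mean of the last n frames (noise drops roughly as sqrt(n))."""
    buffer = []
    for frame in frames:
        buffer.append(frame.astype(float))
        if len(buffer) > n:
            buffer.pop(0)
        yield sum(buffer) / len(buffer)
```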

  9. Acoustic images of gel dosimetry phantoms

    NASA Astrophysics Data System (ADS)

    Vieira, Silvio L.; Baggio, André; Kinnick, Randall R.; Fatemi, M.; Carneiro, Antonio Adilton O.

    2010-01-01

    This work presents Vibro-acoustography (VA) as a tool to visualize absorbed dose in a polymer gel dosimetry phantom. VA relies on the mechanical excitation introduced by the acoustic radiation force of focused modulated ultrasound in a small region of the object. A hydrophone or microphone is used to measure the sound emitted from the object in response to the excitation, and by using the amplitude or phase of this signal, an image of the object can be generated. To study the phenomena of dose distribution in a gel dosimetry phantom, continuous wave (CW), tone burst and multi-frequency VA were used to image this phantom. The phantom was designed using 'MAGIC' gel polymer with the addition of glass microspheres at 2% w/w, with diameters in the range of 40-75 μm. The gel was irradiated using conventional 10 MeV X-rays from a linear accelerator. The field size at the surface of the phantom was 1.0×1.0 cm2, with a source-surface distance (SSD) of 100 cm. The irradiated volume was approximately 8.0 cm3, to which a dose of 50 gray was delivered. Polymer gel dosimeters are sensitive to radiation-induced chemical changes that occur in the irradiated polymer. VA images of the gel dosimeter showed the irradiated area. It is concluded that VA imaging has potential to visualize dose distribution in a polymer gel dosimeter.

  10. Magneto-acoustic imaging by continuous-wave excitation.

    PubMed

    Shunqi, Zhang; Zhou, Xiaoqing; Tao, Yin; Zhipeng, Liu

    2017-04-01

    The electrical characteristics of tissue yield valuable information for early diagnosis of pathological changes. Magneto-acoustic imaging is a functional approach for imaging of electrical conductivity. This study proposes a continuous-wave magneto-acoustic imaging method. A kHz-range continuous signal with an amplitude range of several volts is used to excite the magneto-acoustic signal and improve the signal-to-noise ratio. The magneto-acoustic signal amplitude and phase are measured to locate the acoustic source via lock-in technology. An optimisation algorithm incorporating nonlinear equations is used to reconstruct the magneto-acoustic source distribution based on the measured amplitude and phase at various frequencies. Validation simulations and experiments were performed in pork samples, and the experimental and simulation results agreed well. Even when the excitation current was reduced to 10 mA, the acoustic signal magnitude still reached 10⁻⁷ Pa. Experimental reconstruction of the pork tissue showed that the image resolution reached mm levels when the excitation signal was in the kHz range. The signal-to-noise ratio of the detected magneto-acoustic signal was improved by more than 25 dB at 5 kHz when compared to classical 1 MHz pulse excitation. The results reported here will aid further research into magneto-acoustic generation mechanisms and internal tissue conductivity imaging.
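    The lock-in step can be sketched as follows: the recorded signal is mixed with in-phase and quadrature references at the known excitation frequency and low-pass filtered by averaging. The sampling rate and excitation frequency in the example are illustrative.

```python
# Sketch of software lock-in detection of amplitude and phase at a known excitation
# frequency; sampling rate and frequency values are illustrative assumptions.
import numpy as np

def lock_in(signal, fs, f_exc):
    """Return (amplitude, phase) of the component of `signal` at frequency f_exc."""
    t = np.arange(signal.size) / fs
    i = np.mean(signal * np.cos(2 * np.pi * f_exc * t))   # in-phase product, averaged
    q = np.mean(signal * np.sin(2 * np.pi * f_exc * t))   # quadrature product, averaged
    amplitude = 2.0 * np.hypot(i, q)                      # factor 2 restores peak amplitude
    phase = np.arctan2(-q, i)                             # phase of A*cos(2*pi*f_exc*t + phase)
    return amplitude, phase

# Example: recover a 5 kHz tone buried in noise, sampled at 1 MHz (illustrative values).
# fs, f_exc = 1e6, 5e3
# amp, ph = lock_in(recorded_trace, fs, f_exc)
```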

  11. Transthoracic Cardiac Acoustic Radiation Force Impulse Imaging

    NASA Astrophysics Data System (ADS)

    Bradway, David Pierson

    This dissertation investigates the feasibility of a real-time transthoracic Acoustic Radiation Force Impulse (ARFI) imaging system to measure myocardial function non-invasively in a clinical setting. Heart failure is an important cardiovascular disease and contributes to the leading cause of death in developed countries. Patients exhibiting heart failure with a low left ventricular ejection fraction (LVEF) can often be identified by clinicians, but patients with preserved LVEF might be undetected if they do not exhibit other signs and symptoms of heart failure. These cases motivate development of transthoracic ARFI imaging to aid the early diagnosis of the structural and functional heart abnormalities leading to heart failure. M-Mode ARFI imaging utilizes ultrasonic radiation force to displace tissue several micrometers in the direction of wave propagation. Conventional ultrasound tracks the response of the tissue to the force. This measurement is repeated rapidly at a location through the cardiac cycle, measuring timing and relative changes in myocardial stiffness. ARFI imaging was previously shown capable of measuring myocardial properties and function via invasive open-chest and intracardiac approaches. The prototype imaging system described in this dissertation is capable of rapid acquisition, processing, and display of ARFI images and shear wave elasticity imaging (SWEI) movies. Also presented is a rigorous safety analysis, including finite element method (FEM) simulations of tissue heating, hydrophone intensity and mechanical index (MI) measurements, and thermocouple transducer face heating measurements. For the pulse sequences used in later animal and clinical studies, results from the safety analysis indicate that transthoracic ARFI imaging can be safely applied at rates and levels realizable on the prototype ARFI imaging system. Preliminary data are presented from in vivo trials studying changes in myocardial stiffness occurring under normal and abnormal

  12. Passive Imaging in Nondiffuse Acoustic Wavefields

    SciTech Connect

    Mulargia, Francesco; Castellaro, Silvia

    2008-05-30

    A main property of diffuse acoustic wavefields is that, given any two points, each of them can be seen as the source of waves and the other as the recording station. This property is shown to follow simply from array azimuthal selectivity and Huygens principle in a locally isotropic wavefield. Without time reversal, this property holds approximately also in anisotropic azimuthally uniform wavefields, implying much looser constraints for undistorted passive imaging than those required by a diffuse field. A notable example is the seismic noise field, which is generally nondiffuse, but is found to be compatible with a finite aperture anisotropic uniform wavefield. The theoretical predictions were confirmed by an experiment on seismic noise in the mainland of Venice, Italy.

  13. Aerospace video imaging systems for rangeland management

    NASA Technical Reports Server (NTRS)

    Everitt, J. H.; Escobar, D. E.; Richardson, A. J.; Lulla, K.

    1990-01-01

    This paper presents an overview on the application of airborne video imagery (VI) for assessment of rangeland resources. Multispectral black-and-white video with visible/NIR sensitivity; color-IR, normal color, and black-and-white MIR; and thermal IR video have been used to detect or distinguish among many rangeland and other natural resource variables such as heavy grazing, drought-stressed grass, phytomass levels, burned areas, soil salinity, plant communities and species, and gopher and ant mounds. The digitization and computer processing of VI have also been demonstrated. VI does not have the detailed resolution of film, but these results have shown that it has considerable potential as an applied remote sensing tool for rangeland management. In the future, spaceborne VI may provide additional data for monitoring and management of rangelands.

  14. Laser-induced acoustic imaging of underground objects

    NASA Astrophysics Data System (ADS)

    Li, Wen; DiMarzio, Charles A.; McKnight, Stephen W.; Sauermann, Gerhard O.; Miller, Eric L.

    1999-02-01

    This paper introduces a new demining technique based on the photo-acoustic interaction, together with results from photo-acoustic experiments. We have buried different types of targets (metal, rubber and plastic) in different media (sand, soil and water) and imaged them by measuring reflection of acoustic waves generated by irradiation with a CO2 laser. Research has been focused on the signal acquisition and signal processing. A deconvolution method using Wiener filters is utilized in data processing. Using a uniform spatial distribution of laser pulses at the ground's surface, we obtained 3D images of buried objects. The images give us a clear representation of the shapes of the underground objects. The quality of the images depends on the mismatch of acoustic impedance of the buried objects, the bandwidth and center frequency of the acoustic sensors and the selection of filter functions.
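    The Wiener-filter deconvolution mentioned above can be sketched for a single 1-D acoustic trace, assuming the system impulse response is known; the noise-to-signal constant is an illustrative parameter.

```python
# Sketch of FFT-based Wiener deconvolution of a recorded acoustic trace, given an
# (assumed known) impulse response; nsr is an illustrative noise-to-signal constant.
import numpy as np

def wiener_deconvolve(trace, impulse_response, nsr=0.01):
    """Recover the reflectivity series from a trace blurred by impulse_response."""
    n = trace.size
    H = np.fft.rfft(impulse_response, n)         # transfer function of the system
    Y = np.fft.rfft(trace, n)                    # spectrum of the recorded trace
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)      # Wiener filter
    return np.fft.irfft(W * Y, n)
```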

  15. Far-field image magnification for acoustic waves using anisotropic acoustic metamaterials.

    PubMed

    Ao, Xianyu; Chan, C T

    2008-02-01

    A kind of two-dimensional acoustic metamaterial is designed so that it exhibits strong anisotropy along two orthogonal directions. Based on the rectangular equal frequency contour of this metamaterial, magnifying lenses for acoustic waves, analogous to electromagnetic hyperlenses demonstrated recently in the optical regime, can be realized. Such metamaterial may offer applications in imaging for systems that obey scalar wave equations.

  16. Optimization of a Biometric System Based on Acoustic Images

    PubMed Central

    Izquierdo Fuente, Alberto; Del Val Puente, Lara; Villacorta Calvo, Juan J.; Raboso Mateos, Mariano

    2014-01-01

    On the basis of an acoustic biometric system that captures 16 acoustic images of a person for 4 frequencies and 4 positions, a study was carried out to improve the performance of the system. In a first stage, an analysis to determine which images provide more information to the system was carried out, showing that a set of 12 images allows the system to obtain results that are equivalent to using all 16 images. Finally, optimization techniques were used to obtain the set of weights associated with each acoustic image that maximizes the performance of the biometric system. These results significantly improve the performance of the preliminary system, while reducing the time of acquisition and computational burden, since the number of acoustic images was reduced. PMID:24616643

  17. Optimization of a biometric system based on acoustic images.

    PubMed

    Izquierdo Fuente, Alberto; Del Val Puente, Lara; Villacorta Calvo, Juan J; Raboso Mateos, Mariano

    2014-01-01

    On the basis of an acoustic biometric system that captures 16 acoustic images of a person for 4 frequencies and 4 positions, a study was carried out to improve the performance of the system. In a first stage, an analysis to determine which images provide more information to the system was carried out, showing that a set of 12 images allows the system to obtain results that are equivalent to using all 16 images. Finally, optimization techniques were used to obtain the set of weights associated with each acoustic image that maximizes the performance of the biometric system. These results significantly improve the performance of the preliminary system, while reducing the time of acquisition and computational burden, since the number of acoustic images was reduced.
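    An illustrative sketch of the weight-optimization idea (not the authors' procedure) is given below: per-image match scores are fused by a weighted sum, and the weights are searched to maximize the separation between genuine and impostor comparisons. The d-prime criterion and the Nelder-Mead optimizer are assumptions.

```python
# Illustrative sketch of searching for per-image fusion weights that best separate
# genuine from impostor comparisons; criterion and optimizer are assumptions.
import numpy as np
from scipy.optimize import minimize

def optimize_weights(genuine_scores, impostor_scores):
    """genuine_scores, impostor_scores: (n_trials, n_images) arrays of match scores."""
    n_images = genuine_scores.shape[1]

    def neg_dprime(w):
        w = np.abs(w) / np.abs(w).sum()                 # keep weights positive, sum to 1
        g = genuine_scores @ w
        i = impostor_scores @ w
        d = (g.mean() - i.mean()) / np.sqrt(0.5 * (g.var() + i.var()) + 1e-12)
        return -d                                       # minimize negative separation

    result = minimize(neg_dprime, np.ones(n_images) / n_images, method="Nelder-Mead")
    w = np.abs(result.x) / np.abs(result.x).sum()
    return w
```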

  18. Image Effects in the Appreciation of Video Rock.

    ERIC Educational Resources Information Center

    Zillman, Dolf; Mundorf, Norbert

    1987-01-01

    Assesses the addition of sexual and/or violent images to rock music videos whose originals were both nonsexual and nonviolent. Notes that sexual stimuli intensified music appreciation in males and females, that violent stimuli sometimes did the same, but that the combination of sexual and violent images failed to enhance appreciation of the music.…

  19. Transducer Arrays Suitable for Acoustic Imaging

    DTIC Science & Technology

    1978-06-01

    attention is placed on achieving high transduction efficiency and angular beamwidths of at least ±15°. Design techniques based on the transmission line ... approximation so that the acoustic beam is caused to come to a focus in the exact analogue of a normal lens. The reference phase delays necessary to ... focus the acoustic beam are provided by a tapped surface acoustic wave delay line. A surface acoustic wave is launched down the delay line with a

  20. Acoustic radiation force elasticity imaging in diagnostic ultrasound.

    PubMed

    Doherty, Joshua R; Trahey, Gregg E; Nightingale, Kathryn R; Palmeri, Mark L

    2013-04-01

    The development of ultrasound-based elasticity imaging methods has been the focus of intense research activity since the mid-1990s. In characterizing the mechanical properties of soft tissues, these techniques image an entirely new subset of tissue properties that cannot be derived with conventional ultrasound techniques. Clinically, tissue elasticity is known to be associated with pathological conditions, and with the ability to image these features in vivo, elasticity imaging methods may prove to be invaluable tools for the diagnosis and/or monitoring of disease. This review focuses on ultrasound-based elasticity imaging methods that generate an acoustic radiation force to induce tissue displacements. These methods can be performed noninvasively during routine exams to provide either qualitative or quantitative metrics of tissue elasticity. A brief overview of soft tissue mechanics relevant to elasticity imaging is provided, including a derivation of acoustic radiation force, and an overview of the various acoustic radiation force elasticity imaging methods.

  1. Magnetic resonance imaging of acoustic streaming: absorption coefficient and acoustic field shape estimation.

    PubMed

    Madelin, Guillaume; Grucker, Daniel; Franconi, Jean-Michel; Thiaudiere, Eric

    2006-07-01

    In this study, magnetic resonance imaging (MRI) is used to visualize acoustic streaming in liquids. A single-shot spin echo sequence (HASTE) with a saturation band perpendicular to the acoustic beam permits the acquisition of an instantaneous image of the flow due to the application of ultrasound. An average acoustic streaming velocity can be estimated from the MR images, from which the ultrasonic absorption coefficient and the bulk viscosity of different glycerol-water mixtures can be deduced. In the same way, this MRI method could be used to assess the acoustic field and time-average power of ultrasonic transducers in water (or other liquids with known physical properties), after calibration of a geometrical parameter that is dependent on the experimental setup.

  2. Perceptual watermarks for digital images and video

    NASA Astrophysics Data System (ADS)

    Wolfgang, Raymond B.; Podilchuk, Christine I.; Delp, Edward J., III

    1999-04-01

    The growth of new imaging technologies has created a need for techniques that can be used for copyright protection of digital images. One approach for copyright protection is to introduce an invisible signal known as a digital watermark in the image. In this paper, we describe digital image watermarking techniques known as perceptual watermarks that are designed to exploit aspects of the human visual system in order to produce a transparent, yet robust watermark.

  3. CO2 leak detection through acoustic sensing and infrared imaging

    NASA Astrophysics Data System (ADS)

    Cui, Xiwang; Yan, Yong; Ma, Lin; Ma, Yifan; Han, Xiaojuan

    2014-04-01

    When CO2 leakage occurs from a high pressure enclosure, the CO2 jet formed can produce fierce turbulent flow generating acoustic emission with possible phase change, depending on the pressure of the enclosure, and a significant temperature drop in the region close to the releasing point. Acoustic Emission (AE) and infrared imaging technologies are promising methods for on-line monitoring of such accidental leakage. In this paper, leakage experiments were carried out with a CO2 container under well controlled conditions in a laboratory. Acoustic signals and temperature distribution at the leakage area were acquired using an acoustic sensor and an infrared thermal imaging camera. The acoustic signal was analyzed in both time and frequency domains. The characteristics of the signal frequencies are identified, and their suitability for leakage detection is investigated. The location of the leakage can be identified by seeking the lowest temperature area or point in the infrared image.

  4. Calibration method for video and radiation imagers

    DOEpatents

    Cunningham, Mark F.; Fabris, Lorenzo; Gee, Timothy F.; Goddard, Jr., James S.; Karnowski, Thomas P.; Ziock, Klaus-peter

    2011-07-05

    The relationship between the high energy radiation imager pixel (HERIP) coordinate and real-world x-coordinate is determined by a least square fit between the HERIP x-coordinate and the measured real-world x-coordinates of calibration markers that emit high energy radiation and reflect visible light. Upon calibration, a high energy radiation imager pixel position may be determined based on a real-world coordinate of a moving vehicle. Further, a scale parameter for said high energy radiation imager may be determined based on the real-world coordinate. The scale parameter depends on the y-coordinate of the moving vehicle as provided by a visible light camera. The high energy radiation imager may be employed to detect radiation from moving vehicles in multiple lanes, which correspondingly have different distances to the high energy radiation imager.
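
    The calibration described above reduces, for the x-direction, to a least-squares fit between imager pixel coordinates and measured real-world coordinates of the markers. A minimal numpy sketch of that one-dimensional step is given below; the marker coordinates are invented for illustration, and the patent's y-dependent scale parameter is not reproduced.

    import numpy as np

    def fit_pixel_to_world(pixel_x, world_x):
        """Least-squares linear map: world_x ~ scale * pixel_x + offset."""
        A = np.column_stack([pixel_x, np.ones_like(pixel_x)])
        (scale, offset), *_ = np.linalg.lstsq(A, world_x, rcond=None)
        return scale, offset

    # Hypothetical calibration markers (pixel column vs. metres along the lane)
    pixel_x = np.array([12.0, 55.0, 101.0, 160.0])
    world_x = np.array([0.5, 2.1, 3.8, 6.0])
    scale, offset = fit_pixel_to_world(pixel_x, world_x)
    predicted_world_x = scale * 80.0 + offset   # map an arbitrary pixel column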

  5. Submillimeter video imaging with a superconducting bolometer array

    NASA Astrophysics Data System (ADS)

    Becker, Daniel Thomas

    Millimeter wavelength radiation holds promise for detection of security threats at a distance, including suicide bombers and maritime threats in poor weather. The high sensitivity of superconducting Transition Edge Sensor (TES) bolometers makes them ideal for passive imaging of thermal signals at millimeter and submillimeter wavelengths. I have built a 350 GHz video-rate imaging system using an array of feedhorn-coupled TES bolometers. The system operates at standoff distances of 16 m to 28 m with a measured spatial resolution of 1.4 cm (at 17 m). It currently contains one 251-detector sub-array, and can be expanded to contain four sub-arrays for a total of 1004 detectors. The system has been used to take video images that reveal the presence of weapons concealed beneath a shirt in an indoor setting. This dissertation describes the design, implementation and characterization of this system. It presents an overview of the challenges associated with standoff passive imaging and how these problems can be overcome through the use of large-format TES bolometer arrays. I describe the design of the system and cover the results of detector and optical characterization. I explain the procedure used to generate video images using the system, and present a noise analysis of those images. This analysis indicates that the Noise Equivalent Temperature Difference (NETD) of the video images is currently limited by artifacts of the scanning process. More sophisticated image processing algorithms can eliminate these artifacts and reduce the NETD to 100 mK, which is the target value for the most demanding passive imaging scenarios. I finish with an overview of future directions for this system.

  6. High-sensitivity hyperspectral imager for biomedical video diagnostic applications

    NASA Astrophysics Data System (ADS)

    Leitner, Raimund; Arnold, Thomas; De Biasio, Martin

    2010-04-01

    Video endoscopy allows physicians to visually inspect inner regions of the human body using a camera and only minimally invasive optical instruments. It has become an everyday routine in clinics all over the world. Recently a technological shift was made to increase the resolution from PAL/NTSC to HDTV. But, despite a vast literature on in-vivo and in-vitro experiments with multi-spectral point and imaging instruments that suggests that a wealth of information for diagnostic overlays is available in the visible spectrum, the technological evolution from colour to hyper-spectral video endoscopy is overdue. There were two approaches (NBI, OBI) that tried to increase the contrast for a better visualisation by using more than three wavelengths. But controversial discussions about the real benefit of a contrast enhancement alone motivated a more comprehensive approach using the entire spectrum and pattern recognition algorithms. Up to now the hyper-spectral equipment was too slow to acquire a multi-spectral image stack at reasonable video rates, rendering video endoscopy applications impossible. Recently, the availability of fast and versatile tunable filters with switching times below 50 microseconds made an instrumentation for hyper-spectral video endoscopes feasible. This paper describes a demonstrator for hyper-spectral video endoscopy and the results of clinical measurements using this demonstrator after otolaryngoscopic investigations and thorax surgeries. The application investigated here is the detection of dysplastic tissue, although hyper-spectral video endoscopy is of course not limited to cancer detection. Other applications are the detection of dysplastic tissue or polyps in the colon or the gastrointestinal tract.

  7. Learning Multiscale Sparse Representations for Image and Video Restoration

    DTIC Science & Technology

    2007-07-01

    25], and more recently to video denoising [35]. In this paper, we extend the basic K-SVD work, providing a framework for learning multiscale and ... The original K-SVD denoising algorithm [1], the extensions to color image denoising, non-homogeneous noise, and inpainting [25], and the K-SVD for ... section, we briefly review these algorithms. 2.1. The grayscale image denoising K-SVD algorithm. We now briefly review the main ideas of the K-SVD

  8. Image-guided transorbital procedures with endoscopic video augmentation

    PubMed Central

    DeLisi, Michael P.; Mawn, Louise A.; Galloway, Robert L.

    2014-01-01

    Purpose: Surgical interventions to the orbital space behind the eyeball are limited to highly invasive procedures due to the confined nature of the region along with the presence of several intricate soft tissue structures. A minimally invasive approach to orbital surgery would enable several therapeutic options, particularly new treatment protocols for optic neuropathies such as glaucoma. The authors have developed an image-guided system for the purpose of navigating a thin flexible endoscope to a specified target region behind the eyeball. Navigation within the orbit is particularly challenging despite its small volume, as the presence of fat tissue occludes the endoscopic visual field while the surgeon must constantly be aware of optic nerve position. This research investigates the impact of endoscopic video augmentation to targeted image-guided navigation in a series of anthropomorphic phantom experiments. Methods: A group of 16 surgeons performed a target identification task within the orbits of four skull phantoms. The task consisted of identifying the correct target, indicated by the augmented video and the preoperative imaging frames, out of four possibilities. For each skull, one orbital intervention was performed with video augmentation, while the other was done with the standard image guidance technique, in random order. Results: The authors measured a target identification accuracy of 95.3% and 85.9% for the augmented and standard cases, respectively, with statistically significant improvement in procedure time (Z = −2.044, p = 0.041) and intraoperator mean procedure time (Z = 2.456, p = 0.014) when augmentation was used. Conclusions: Improvements in both target identification accuracy and interventional procedure time suggest that endoscopic video augmentation provides valuable additional orientation and trajectory information in an image-guided procedure. Utilization of video augmentation in transorbital interventions could further minimize

  9. Sparse Reconstruction for Micro Defect Detection in Acoustic Micro Imaging

    PubMed Central

    Zhang, Yichun; Shi, Tielin; Su, Lei; Wang, Xiao; Hong, Yuan; Chen, Kepeng; Liao, Guanglan

    2016-01-01

    Acoustic micro imaging has been proven to be sufficiently sensitive for micro defect detection. In this study, we propose a sparse reconstruction method for acoustic micro imaging. A finite element model with a micro defect is developed to emulate the physical scanning. Then we obtain the point spread function, a blur kernel for sparse reconstruction. We reconstruct deblurred images from the oversampled C-scan images based on l1-norm regularization, which can enhance the signal-to-noise ratio and improve the accuracy of micro defect detection. The method is further verified by experimental data. The results demonstrate that the sparse reconstruction is effective for micro defect detection in acoustic micro imaging. PMID:27783040

  10. Sparse Reconstruction for Micro Defect Detection in Acoustic Micro Imaging.

    PubMed

    Zhang, Yichun; Shi, Tielin; Su, Lei; Wang, Xiao; Hong, Yuan; Chen, Kepeng; Liao, Guanglan

    2016-10-24

    Acoustic micro imaging has been proven to be sufficiently sensitive for micro defect detection. In this study, we propose a sparse reconstruction method for acoustic micro imaging. A finite element model with a micro defect is developed to emulate the physical scanning. Then we obtain the point spread function, a blur kernel for sparse reconstruction. We reconstruct deblurred images from the oversampled C-scan images based on l₁-norm regularization, which can enhance the signal-to-noise ratio and improve the accuracy of micro defect detection. The method is further verified by experimental data. The results demonstrate that the sparse reconstruction is effective for micro defect detection in acoustic micro imaging.
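
    Both records above describe deblurring oversampled C-scan images with a known point spread function under an l1-norm penalty. The sketch below shows one standard solver for that problem class, iterative shrinkage-thresholding (ISTA); the paper's actual solver, step sizes and parameter values are not given in the abstract, so everything here is an illustrative assumption.

    import numpy as np
    from scipy.signal import fftconvolve

    def ista_deblur(y, psf, lam=0.01, n_iter=200):
        """l1-regularized deblurring of a 2-D image by ISTA.

        y   : blurred image (float array)
        psf : odd-sized 2-D point spread function
        lam : sparsity weight
        """
        psf_flip = psf[::-1, ::-1]                 # adjoint of 'same'-mode convolution
        L = np.sum(np.abs(psf)) ** 2               # Lipschitz bound for A^T A
        x = np.zeros_like(y, dtype=float)
        for _ in range(n_iter):
            residual = fftconvolve(x, psf, mode="same") - y
            grad = fftconvolve(residual, psf_flip, mode="same")
            z = x - grad / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
        return x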

  11. Incident Wave Removal for Defect Enhancement in Acoustic Wavefield Imaging

    NASA Astrophysics Data System (ADS)

    Master, Zubin M.; Michaels, Thomas E.; Michaels, Jennifer E.

    2007-03-01

    The method of Acoustic Wavefield Imaging (AWI) offers many advantages over conventional ultrasonic techniques for nondestructive evaluation, and also provides a means of incorporating fixed ultrasonic sensors used for structural health monitoring into subsequent inspections. AWI utilizes these fixed sensors as wave sources and an externally scanned ultrasonic transducer (or laser interferometer) as a receiver to acquire complete waveform data over the surface. When displayed as time-dependent images, these signals show the propagation of acoustic waves through a structure and subsequent interactions of these waves with both defects and structural geometry. Defect areas appear as stationary scattering sources on these images, but such scattered wave energy is often obscured by the stronger incident acoustic wavefield. The objective of the work presented here is to develop multidimensional signal processing algorithms to enhance the appearance of structural defects on wavefield images via removal of the incident wave. Results are presented for analysis of images from aluminum plate and solid laminate composite specimens.

  12. Shape-adaptable hyperlens for acoustic magnifying imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Hongkuan; Zhou, Xiaoming; Hu, Gengkai

    2016-11-01

    Previous prototypes of acoustic hyperlenses consist of rigid channels, which are unable to adapt in shape to the object under detection. We propose to overcome this limitation by employing soft plastic tubes that can guide acoustic waves with robustness against bending deformation. Based on the idea of soft-tube acoustics, an acoustic magnifying hyperlens with planar input and output surfaces has been fabricated and validated experimentally. The shape-adaptation capability of the soft-tube hyperlens is demonstrated by a controlled experiment, in which the magnifying super-resolution images remain stable when the lens input surface is curved. Our study suggests a feasible route toward constructing flexible channel-structured acoustic metamaterials with shape-adaptation capability, thereby opening an additional degree of freedom for full control of sound.

  13. Techniques for estimating blood pressure variation using video images.

    PubMed

    Sugita, Norihiro; Obara, Kazuma; Yoshizawa, Makoto; Abe, Makoto; Tanaka, Akira; Homma, Noriyasu

    2015-01-01

    It is important to know about a sudden blood pressure change that occurs in everyday life and may pose a danger to human health. However, monitoring the blood pressure variation in daily life is difficult because a bulky and expensive sensor is needed to measure the blood pressure continuously. In this study, a new non-contact method is proposed to estimate the blood pressure variation using video images. In this method, the pulse propagation time difference or instantaneous phase difference is calculated between two pulse waves obtained from different parts of a subject's body captured by a video camera. The forehead, left cheek, and right hand are selected as regions to obtain pulse waves. Both the pulse propagation time difference and instantaneous phase difference were calculated from the video images of 20 healthy subjects performing the Valsalva maneuver. These indices are considered to have a negative correlation with the blood pressure variation because they approximate the pulse transit time obtained from a photoplethysmograph. However, the experimental results showed that the correlation coefficients between the blood pressure and the proposed indices were approximately 0.6 for the pulse wave obtained from the right hand. This result is considered to be due to the difference in the transmission depth into the skin between the green and infrared light used as light sources for the video image and conventional photoplethysmogram, respectively. In addition, the difference in the innervation of the face and hand may be related to the results.
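
    The method above tracks the instantaneous phase difference between two video-derived pulse waves as a proxy for pulse transit time. A minimal Python sketch of that step, assuming band-pass filtering around the cardiac fundamental and an analytic signal from the Hilbert transform, is shown below; the pass band and signal names are assumptions, not values from the paper.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def instantaneous_phase_difference(ppg_a, ppg_b, fs, band=(0.7, 3.0)):
        """Phase difference between two pulse signals extracted from video regions.

        ppg_a, ppg_b : pulse waveforms from two skin regions (e.g. forehead, hand)
        fs           : video frame rate in Hz
        band         : assumed cardiac pass band in Hz
        """
        b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        pa = filtfilt(b, a, ppg_a)
        pb = filtfilt(b, a, ppg_b)
        phase_a = np.unwrap(np.angle(hilbert(pa)))
        phase_b = np.unwrap(np.angle(hilbert(pb)))
        return phase_a - phase_b    # tracks changes in pulse transit time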

  14. Investigation of an acoustical holography system for real-time imaging

    NASA Astrophysics Data System (ADS)

    Fecht, Barbara A.; Andre, Michael P.; Garlick, George F.; Shelby, Ronald L.; Shelby, Jerod O.; Lehman, Constance D.

    1998-07-01

    A new prototype imaging system based on ultrasound transmission through the object of interest -- acoustical holography -- was developed which incorporates significant improvements in acoustical and optical design. This system is being evaluated for potential clinical application in the musculoskeletal system, interventional radiology, pediatrics, monitoring of tumor ablation, vascular imaging and breast imaging. System limiting resolution was estimated using a line-pair target with decreasing line thickness and equal separation. For a swept frequency beam from 2.6 - 3.0 MHz, the minimum resolution was 0.5 lp/mm. Apatite crystals were suspended in castor oil to approximate breast microcalcifications. Crystals from 0.425 - 1.18 mm in diameter were well resolved in the acoustic zoom mode. Needle visibility was examined with both a 14-gauge biopsy needle and a 0.6 mm needle. The needle tip was clearly visible throughout the dynamic imaging sequence as it was slowly inserted into a RMI tissue-equivalent breast biopsy phantom. A selection of human images was acquired in several volunteers: a 25 year-old female volunteer with normal breast tissue, a lateral view of the elbow joint showing muscle fascia and tendon insertions, and the superficial vessels in the forearm. Real-time video images of these studies will be presented. In all of these studies, conventional sonography was used for comparison. These preliminary investigations with the new prototype acoustical holography system showed favorable results in comparison to state-of-the-art pulse-echo ultrasound and demonstrate it to be suitable for further clinical study. The new patient interfaces will facilitate orthopedic soft tissue evaluation, study of superficial vascular structures and potentially breast imaging.

  15. Neuroscience. Video game images persist despite amnesia.

    PubMed

    Helmuth, L

    2000-10-13

    On page 350, researchers report the results of new work in which they used the computer game Tetris--which involves using spatial reasoning to slot falling blocks strategically into place--to study how the brain reviews what it has learned. The researchers found that people who have just learned to play Tetris have vivid images of the game pieces floating before their eyes as they fall asleep, a phenomenon the researchers say is critical for building memories. Much more surprisingly, the team also found that the images appear to people with amnesia who have played the game--even though they have no recollection of having done so.

  16. Modeling quantization matrices for perceptual image / video encoding

    NASA Astrophysics Data System (ADS)

    Zhang, Huipin; Cote, Guy

    2008-01-01

    Quantization matrix is an important encoding tool for discrete cosine transform (DCT) based perceptual image / video encoding in that DCT coefficients can be quantized according to the sensitivity of the human visual system to the coefficients' corresponding spatial frequencies. A quadratic model is introduced to parameterize the quantization matrices. This model is then used to optimize quantization matrices for a specific bitrate or bitrate range by maximizing the expected encoding quality via a trial based multidimensional numerical search method. The model is simple yet it characterizes the slope and the convexity of the quantization matrices along the horizontal, the vertical and the diagonal directions. The advantage of the model for improving perceptual video encoding quality is demonstrated with simulations using H.264 / AVC video encoding.
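
    The abstract above parameterizes quantization matrices with a quadratic model but does not give its functional form. The sketch below is one plausible quadratic parameterization over the horizontal, vertical and diagonal DCT frequency indices, offered purely as an illustration of how a handful of parameters can generate a full 8x8 matrix; it is not the model from the paper.

    import numpy as np

    def quadratic_quant_matrix(base, slope_h, slope_v, slope_d,
                               curv_h, curv_v, curv_d, size=8):
        """Assumed form: Q(u, v) = base + linear terms in u, v, (u+v) + quadratic terms."""
        u, v = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")
        q = (base + slope_h * u + slope_v * v + slope_d * (u + v)
             + curv_h * u ** 2 + curv_v * v ** 2 + curv_d * u * v)
        return np.clip(np.round(q), 1, 255).astype(np.uint8)

    q_matrix = quadratic_quant_matrix(16, 1.5, 1.5, 0.5, 0.2, 0.2, 0.1)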

  17. Robust web image/video super-resolution.

    PubMed

    Xiong, Zhiwei; Sun, Xiaoyan; Wu, Feng

    2010-08-01

    This paper proposes a robust single-image super-resolution method for enlarging low quality web image/video degraded by downsampling and compression. To simultaneously improve the resolution and perceptual quality of such web image/video, we bring forward a practical solution which combines adaptive regularization and learning-based super-resolution. The contribution of this work is twofold. First, we propose to analyze the image energy change characteristics during the iterative regularization process, i.e., the energy change ratio between primitive (e.g., edges, ridges and corners) and nonprimitive fields. Based on the revealed convergence property of the energy change ratio, appropriate regularization strength can then be determined to well balance compression artifacts removal and primitive components preservation. Second, we verify that this adaptive regularization can steadily and greatly improve the pair matching accuracy in learning-based super-resolution. Consequently, their combination effectively eliminates the quantization noise and meanwhile faithfully compensates the missing high-frequency details, yielding robust super-resolution performance in the compression scenario. Experimental results demonstrate that our solution produces visually pleasing enlargements for various web images/videos.

  18. Applying Image Matching to Video Analysis

    DTIC Science & Technology

    2010-09-01

    "American Classics VII: Don't be a Chicken of Dumplings". The frames were extracted using the ffmpeg program [29]. The first two images from the set ... F. "ffmpeg software". http://www.ffmpeg.org/. 30: Hess, R. "SIFT software". http://web.engr.oregonstate.edu/hess. 31: Bay, H., Van Gool, L. and

  19. Photoacoustic imaging using acoustic reflectors to enhance planar arrays.

    PubMed

    Ellwood, Robert; Zhang, Edward; Beard, Paul; Cox, Ben

    2014-12-01

    Planar sensor arrays have advantages when used for photoacoustic imaging: they do not require the imaging target to be enclosed, and they are easier to manufacture than curved arrays. However, planar arrays have a limited view of the acoustic field due to their finite size; therefore, not all of the acoustic waves emitted from a photoacoustic source can be recorded. This loss of data results in artifacts in the reconstructed photoacoustic image. A detection array configuration which combines a planar Fabry–Pérot sensor with perpendicular acoustic reflectors is described and experimentally implemented. This retains the detection advantages of the planar sensor while increasing the effective detection aperture in order to improve the reconstructed photoacoustic image.

  20. Acoustic imaging in a water filled metallic pipe

    SciTech Connect

    Kolbe, W.F.; Turko, B.T.; Leskovar, B.

    1984-04-01

    A method is described for the imaging of the interior of a water filled metallic pipe using acoustical techniques. The apparatus consists of an array of 20 acoustic transducers mounted circumferentially around the pipe. Each transducer is pulsed in sequence, and the echoes resulting from bubbles in the interior are digitized and processed by a computer to generate an image. The electronic control and digitizing system and the software processing of the echo signals are described. The performance of the apparatus is illustrated by the imaging of simulated bubbles consisting of thin walled glass spheres suspended in the pipe.

  1. Assessing the variability in respiratory acoustic thoracic imaging (RATHI).

    PubMed

    Charleston-Villalobos, S; Torres-Jiménez, A; González-Camarena, R; Chi-Lem, G; Aljama-Corrales, T

    2014-02-01

    Multichannel analysis of lung sounds (LSs) has enabled the generation of a functional image for the temporal and spatial study of LS intensities in healthy and diseased subjects; this method is known as respiratory acoustic thoracic imaging (RATHI). This acoustic imaging technique has been applied to diverse pulmonary conditions, but it is important to contribute to the understanding of RATHI characteristics, such as acoustic spatial distribution, dependence on airflow and variability. The purpose of the current study is to assess the intra-subject and inter-subject RATHI variabilities in a cohort of 12 healthy male subjects (24.3±1.5 years) using diverse quantitative indices. The indices were obtained directly from the acoustic image and did not require scores from human raters, which helps to prevent inter-observer variability. To generate the acoustic image, LSs were acquired at 25 positions on the posterior thoracic surface by means of airborne sound sensors with a wide frequency band from 75 up to 1000 Hz under controlled airflow conditions at 1.0, 1.5 and 2.0 L/s. To assess intra-subject variability, the degree of similitude between inspiratory acoustic images was evaluated through quadratic mutual information based on the Cauchy-Schwartz inequality (I(CS)). The inter-subject variability was assessed by an image registration procedure between RATHIs and X-ray images to allow the computation of average and variance acoustic image in the same coordinate space. The results indicated that intra-subject RATHI similitude, reflected by I(CS-global), averaged 0.960±0.008, 0.958±0.008 and 0.960±0.007 for airflows of 1.0, 1.5, and 2.0 L/s, respectively. As for the inter-subject variability, the variance image values for three airflow conditions indicated low image variability as they ranged from 0.01 to 0.04. In conclusion, the assessment of intra-subject and inter-subject variability by similitude indices indicated that the acoustic image pattern is repeatable along

  2. Time-Reversal Acoustics and Maximum-Entropy Imaging

    SciTech Connect

    Berryman, J G

    2001-08-22

    Target location is a common problem in acoustical imaging using either passive or active data inversion. Time-reversal methods in acoustics have the important characteristic that they provide a means of determining the eigenfunctions and eigenvalues of the scattering operator for either of these problems. Each eigenfunction may often be approximately associated with an individual scatterer. The resulting decoupling of the scattered field from a collection of targets is a very useful aid to localizing the targets, and suggests a number of imaging and localization algorithms. Two of these are linear subspace methods and maximum-entropy imaging.
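
    The record above notes that time-reversal methods yield the eigenfunctions and eigenvalues of the scattering operator, which can then be associated with individual targets. The sketch below shows the standard numerical route, a singular value decomposition of a single-frequency multistatic response matrix; the matrix layout and variable names are assumptions for illustration and do not reproduce the paper's imaging algorithms.

    import numpy as np

    def time_reversal_modes(K):
        """Eigen-decomposition of the time-reversal operator K^H K at one frequency.

        K : complex multistatic response matrix (receivers x sources); K[i, j] is
            the signal on element i when element j transmits. The right singular
            vectors give element excitations that refocus on well-resolved scatterers.
        """
        U, s, Vh = np.linalg.svd(K)
        eigenvalues = s ** 2            # eigenvalues of the time-reversal operator
        excitations = Vh.conj().T       # one column per dominant scatterer
        return eigenvalues, excitations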

  3. Acoustic imaging in a water filled metallic pipe

    NASA Astrophysics Data System (ADS)

    Kolbe, W. F.; Turko, B. T.; Leskovar, B.

    1984-04-01

    A method is described for imaging the interior of a water filled metallic pipe using acoustical techniques. The apparatus consists of an array of 20 acoustic transducers mounted circumferentially around the pipe. Each transducer is pulsed in sequence, and the echoes resulting from bubbles in the interior are digitized and processed by a computer to generate an image. The electronic control and digitizing system and the software processing of the echo signals are described. The performance of the apparatus is illustrated by the imaging of simulated bubbles consisting of thin walled glass spheres suspended in the pipe.

  4. System engineering for image and video systems

    NASA Astrophysics Data System (ADS)

    Talbot, Raymond J., Jr.

    1997-02-01

    The National Law Enforcement and Corrections Technology Centers (NLECTC) support public law enforcement agencies with technology development, evaluation, planning, architecture, and implementation. The NLECTC Western Region has a particular emphasis on surveillance and imaging issues. Among its activities, working with government and industry, NLECTC-WR produces 'Guides to Best Practices and Acquisition Methodologies' that facilitate government organizations in making better informed purchasing and operational decisions. This presentation includes specific examples from current activities. Through these systematic procedures, it is possible to design solutions optimally matched to the desired outcomes and provide a process for continuous improvement and greater public awareness of success.

  5. Quantitative Determination of Lateral Mode Dispersion in Film Bulk Acoustic Resonators through Laser Acoustic Imaging

    SciTech Connect

    Ken Telschow; John D. Larson III

    2006-10-01

    Film Bulk Acoustic Resonators are useful for many signal processing applications. Detailed knowledge of their operational properties is needed to optimize their design for specific applications. The finite size of these resonators precludes their use in single acoustic modes; rather, multiple wave modes, such as lateral wave modes, are always excited concurrently. In order to determine the contributions of these modes, we have been using a newly developed full-field laser acoustic imaging approach to directly measure their amplitude and phase throughout the resonator. This paper describes new results comparing modeling of both elastic and piezoelectric effects in the active material with imaging measurements of all excited modes. Fourier transformation of the acoustic amplitude and phase displacement images provides a quantitative determination of excited mode amplitude and wavenumber at any frequency. Images combined at several frequencies form a direct visualization of lateral mode excitation and dispersion for the device under test, allowing mode identification and comparison with predicted operational properties. Discussion and analysis are presented for modes near the first longitudinal thickness resonance (~900 MHz) in an AlN thin film resonator. Plate wave modeling, taking into account material crystalline orientation, elastic and piezoelectric properties and overlayer metallic films, will be discussed in relation to direct image measurements.
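
    The quantitative step described above, Fourier transforming the measured amplitude and phase images to obtain mode amplitudes and wavenumbers, can be illustrated with a short numpy sketch. The scan steps and array names are assumptions; repeating the transform over drive frequencies and tracking the spectral peaks would trace out the lateral-mode dispersion.

    import numpy as np

    def wavenumber_spectrum(displacement, dx, dy):
        """Spatial Fourier transform of a complex displacement image at one frequency.

        displacement : 2-D complex array, amplitude * exp(i * phase), over the resonator
        dx, dy       : scan step sizes in metres
        Returns (kx, ky, |U|); spectral peaks mark the excited lateral-mode wavenumbers.
        """
        U = np.fft.fftshift(np.fft.fft2(displacement))
        ny, nx = displacement.shape
        kx = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(nx, d=dx))
        ky = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(ny, d=dy))
        return kx, ky, np.abs(U)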

  6. Acoustic-optical imaging without immersion

    NASA Technical Reports Server (NTRS)

    Liu, H.

    1979-01-01

    A system using the membranous end wall of a Bragg cell to separate the test specimen from the acoustic transmission medium operates in real time and uses readily available optical components. The system can be easily set up and maintained by people with little or no training in holography.

  7. Overview of parallel processing approaches to image and video compression

    NASA Astrophysics Data System (ADS)

    Shen, Ke; Cook, Gregory W.; Jamieson, Leah H.; Delp, Edward J., III

    1994-05-01

    In this paper we present an overview of techniques used to implement various image and video compression algorithms using parallel processing. Approaches used can largely be divided into four areas. The first is the use of special purpose architectures designed specifically for image and video compression. An example of this is the use of an array of DSP chips to implement a version of MPEG1. The second approach is the use of VLSI techniques. These include various chip sets for JPEG and MPEG1. The third approach is algorithm driven, in which the structure of the compression algorithm describes the architecture, e.g. pyramid algorithms. The fourth approach is the implementation of algorithms on high performance parallel computers. Examples of this approach are the use of a massively parallel computer such as the MasPar MP-1 or the use of a coarse-grained machine such as the Intel Touchstone Delta.

  8. Synthetic aperture acoustic imaging of non-metallic cords

    NASA Astrophysics Data System (ADS)

    Glean, Aldo A. J.; Good, Chelsea E.; Vignola, Joseph F.; Judge, John A.; Ryan, Teresa J.; Bishop, Steven S.; Gugino, Peter M.; Soumekh, Mehrdad

    2012-06-01

    This work presents a set of measurements collected with a research prototype synthetic aperture acoustic (SAA) imaging system. SAA imaging is an emerging technique that can serve as an inexpensive alternative or logical complement to synthetic aperture radar (SAR). The SAA imaging system uses an acoustic transceiver (speaker and microphone) to project acoustic radiation and record backscatter from a scene. The backscattered acoustic energy is used to generate information about the location, morphology, and mechanical properties of various objects. SAA detection has a potential advantage when compared to SAR in that non-metallic objects are not readily detectable with SAR. To demonstrate basic capability of the approach with non-metallic objects, targets are placed in a simple, featureless scene. Nylon cords of five diameters, ranging from 2 to 15 mm, and a joined pair of 3 mm fiber optic cables are placed in various configurations on flat asphalt that is free of clutter. The measurements were made using a chirp with a bandwidth of 2-15 kHz. The recorded signal is reconstructed to form a two-dimensional image of the distribution of acoustic scatterers within the scene. The goal of this study was to identify basic detectability characteristics for a range of sizes and configurations of non-metallic cord. It is shown that for sufficiently small angles relative to the transceiver path, the SAA approach creates adequate backscatter for detectability.

  9. Performance Evaluation of a Biometric System Based on Acoustic Images

    PubMed Central

    Izquierdo-Fuente, Alberto; del Val, Lara; Jiménez, María I.; Villacorta, Juan J.

    2011-01-01

    An acoustic electronic scanning array for acquiring images from a person using a biometric application is developed. Based on pulse-echo techniques, multifrequency acoustic images are obtained for a set of positions of a person (front, front with arms outstretched, back and side). Two Uniform Linear Arrays (ULA) with 15 λ/2-equispaced sensors have been employed, using different spatial apertures in order to reduce sidelobe levels. Working frequencies have been designed on the basis of the main lobe width, the grating lobe levels and the frequency responses of people and sensors. For a case-study with 10 people, the acoustic profiles, formed by all images acquired, are evaluated and compared in a mean square error sense. Finally, system performance, using False Match Rate (FMR)/False Non-Match Rate (FNMR) parameters and the Receiver Operating Characteristic (ROC) curve, is evaluated. On the basis of the obtained results, this system could be used for biometric applications. PMID:22163708

  10. Evaluation of Skybox Video and Still Image products

    NASA Astrophysics Data System (ADS)

    d'Angelo, P.; Kuschk, G.; Reinartz, P.

    2014-11-01

    The SkySat-1 satellite launched by Skybox Imaging on November 21, 2013 opens a new chapter in civilian earth observation, as it is the first civilian satellite to image a target in high definition panchromatic video for up to 90 seconds. The small satellite with a mass of 100 kg carries a telescope with 3 frame sensors. Two products are available: panchromatic video with a resolution of around 1 meter and a frame size of 2560 × 1080 pixels at 30 frames per second. Additionally, the satellite can collect still imagery with a swath of 8 km in the panchromatic band, and multispectral images with 4 bands. Using super-resolution techniques, sub-meter accuracy is reached for the still imagery. The paper provides an overview of the satellite design and imaging products. The still imagery product consists of 3 stripes of frame images with a footprint of approximately 2.6 × 1.1 km. Using bundle block adjustment, the frames are registered, and their accuracy is evaluated. Image quality of the panchromatic, multispectral and pansharpened products is evaluated. The video product used in this evaluation consists of a 60 second gazing acquisition of Las Vegas. A DSM is generated by dense stereo matching. Multiple techniques such as pairwise matching or multi image matching are used and compared. As no ground truth height reference model is available to the authors, comparisons on flat surfaces and between differently matched DSMs are performed. Additionally, visual inspection of the DSM and DSM profiles shows a detailed reconstruction of small features and large skyscrapers.

  11. Acoustically modulated x-ray phase contrast imaging.

    PubMed

    Hamilton, Theron J; Bailat, Claude J; Rose-Petruck, Christoph; Diebold, Gerald J

    2004-11-07

    We report the use of ultrasonic radiation pressure with phase contrast x-ray imaging to give an image proportional to the space derivative of a conventional phase contrast image in the direction of propagation of an ultrasonic beam. Intense ultrasound is used to exert forces on objects within a body giving displacements of the order of tens to hundreds of microns. Subtraction of images made with and without the ultrasound field gives an image that removes low spatial frequency features and highlights high frequency features. The method acts as an acoustic 'contrast agent' for phase contrast x-ray imaging, which in soft tissue acts to highlight small density changes.

  12. Two-dimensional acoustic metamaterial structure for potential image processing

    NASA Astrophysics Data System (ADS)

    Sun, Hongwei; Han, Yu; Li, Ying; Pai, Frank

    2015-12-01

    This paper presents modeling and analysis techniques for, and experiments on, a two-dimensional acoustic metamaterial structure for filtering acoustic waves. For a unit cell of an infinite two-dimensional acoustic metamaterial structure, governing equations are derived using the extended Hamilton principle. The concepts of negative effective mass and stiffness, and how the spring-mass-damper subsystems create a stopband, are explained in detail. Numerical simulations reveal that the actual working mechanism of the proposed acoustic metamaterial structure is based on the concept of conventional mechanical vibration absorbers. It uses the incoming wave in the structure to resonate the integrated membrane-mass-damper absorbers in their optical mode at frequencies close to but above their local resonance frequencies, creating shear forces and bending moments that straighten the panel and stop the wave propagation. Moreover, a two-dimensional acoustic metamaterial structure consisting of lumped masses and an elastic membrane was fabricated in the lab. Experiments on the model validate the concept and show that two vibration modes exist for the two-dimensional acoustic metamaterial structure. For wave absorption, the mass of each cell should be considered in the design. With appropriate design calculations, the proposed two-dimensional acoustic metamaterial structure can be used for absorption of low-frequency waves. Hence this special structure can be used to filter waves, and it can potentially increase ultrasonic imaging quality.

  13. Discovering thematic objects in image collections and videos.

    PubMed

    Yuan, Junsong; Zhao, Gangqiang; Fu, Yun; Li, Zhu; Katsaggelos, Aggelos K; Wu, Ying

    2012-04-01

    Given a collection of images or a short video sequence, we define a thematic object as the key object that frequently appears and is the representative of the visual contents. Successful discovery of the thematic object is helpful for object search and tagging, video summarization and understanding, etc. However, this task is challenging because 1) there lacks a priori knowledge of the thematic objects, such as their shapes, scales, locations, and times of re-occurrences, and 2) the thematic object of interest can be under severe variations in appearances due to viewpoint and lighting condition changes, scale variations, etc. Instead of using a top-down generative model to discover thematic visual patterns, we propose a novel bottom-up approach to gradually prune uncommon local visual primitives and recover the thematic objects. A multilayer candidate pruning procedure is designed to accelerate the image data mining process. Our solution can efficiently locate thematic objects of various sizes and can tolerate large appearance variations of the same thematic object. Experiments on challenging image and video data sets and comparisons with existing methods validate the effectiveness of our method.

  14. Block-based embedded color image and video coding

    NASA Astrophysics Data System (ADS)

    Nagaraj, Nithin; Pearlman, William A.; Islam, Asad

    2004-01-01

    The Set Partitioned Embedded bloCK coder (SPECK) has been found to perform comparably to the best-known still grayscale image coders such as EZW, SPIHT and JPEG2000. In this paper, we first propose Color-SPECK (CSPECK), a natural extension of SPECK to handle color still images in the YUV 4:2:0 format. Extensions to other YUV formats are also possible. PSNR results indicate that CSPECK is among the best known color coders, while the perceptual quality of reconstruction is superior to that of SPIHT and JPEG2000. We then propose a moving picture based coding system called Motion-SPECK with CSPECK as the core algorithm in an intra-based setting. Specifically, we demonstrate two modes of operation of Motion-SPECK, namely the constant-rate mode, where every frame is coded at the same bit-rate, and the constant-distortion mode, where we ensure the same quality for each frame. Results on well-known CIF sequences indicate that Motion-SPECK performs comparably to Motion-JPEG2000, while the visual quality of the sequence is in general superior. Both CSPECK and Motion-SPECK automatically inherit all the desirable features of SPECK, such as embeddedness, low computational complexity, highly efficient performance, fast decoding and low dynamic memory requirements. The intended applications of Motion-SPECK would be high-end and emerging video applications such as high-quality digital video recording systems, Internet video and medical imaging.

  15. Laser Imaging of Airborne Acoustic Emission by Nonlinear Defects

    NASA Astrophysics Data System (ADS)

    Solodov, Igor; Döring, Daniel; Busse, Gerd

    2008-06-01

    Strongly nonlinear vibrations of near-surface fractured defects driven by an elastic wave radiate acoustic energy into adjacent air in a wide frequency range. The variations of pressure in the emitted airborne waves change the refractive index of air thus providing an acoustooptic interaction with a collimated laser beam. Such an air-coupled vibrometry (ACV) is proposed for detecting and imaging of acoustic radiation of nonlinear spectral components by cracked defects. The photoelastic relation in air is used to derive induced phase modulation of laser light in the heterodyne interferometer setup. The sensitivity of the scanning ACV to different spatial components of the acoustic radiation is analyzed. The animated airborne emission patterns are visualized for the higher harmonic and frequency mixing fields radiated by planar defects. The results confirm a high localization of the nonlinear acoustic emission around the defects and complicated directivity patterns appreciably different from those observed for fundamental frequencies.

  16. Optimal flushing agents for integrated optical and acoustic imaging systems

    NASA Astrophysics Data System (ADS)

    Li, Jiawen; Minami, Hataka; Steward, Earl; Ma, Teng; Mohar, Dilbahar; Robertson, Claire; Shung, Kirk; Zhou, Qifa; Patel, Pranav; Chen, Zhongping

    2015-05-01

    An increasing number of integrated optical and acoustic intravascular imaging systems have been developed and hold great promise for accurately diagnosing vulnerable plaques and guiding atherosclerosis treatment. However, in any intravascular environment, the vascular lumen is filled with blood, a high-scattering source for optical and high-frequency ultrasound signals. Blood must be flushed away to provide clearer images. To our knowledge, no research has been performed to find the ideal flushing agent for combined optical and acoustic imaging techniques. We selected three solutions as potential flushing agents for their image-enhancing effects: mannitol, dextran, and iohexol. Testing of these flushing agents was performed in a closed-loop circulation model and in vivo on rabbits. We found that a high concentration of dextran was the most useful for simultaneous intravascular ultrasound and optical coherence tomography imaging.

  17. Optimal flushing agents for integrated optical and acoustic imaging systems.

    PubMed

    Li, Jiawen; Minami, Hataka; Steward, Earl; Ma, Teng; Mohar, Dilbahar; Robertson, Claire; Shung, Kirk; Zhou, Qifa; Patel, Pranav; Chen, Zhongping

    2015-05-01

    An increasing number of integrated optical and acoustic intravascular imaging systems have been developed and hold great promise for accurately diagnosing vulnerable plaques and guiding atherosclerosis treatment. However, in any intravascular environment, the vascular lumen is filled with blood, a high-scattering source for optical and high-frequency ultrasound signals. Blood must be flushed away to provide clearer images. To our knowledge, no research has been performed to find the ideal flushing agent for combined optical and acoustic imaging techniques. We selected three solutions as potential flushing agents for their image-enhancing effects: mannitol, dextran, and iohexol. Testing of these flushing agents was performed in a closed-loop circulation model and in vivo on rabbits. We found that a high concentration of dextran was the most useful for simultaneous intravascular ultrasound and optical coherence tomography imaging.

  18. Epipolar geometry of opti-acoustic stereo imaging.

    PubMed

    Negahdaripour, Shahriar

    2007-10-01

    Optical and acoustic cameras are suitable imaging systems to inspect underwater structures, both in regular maintenance and security operations. Despite high resolution, optical systems have limited visibility range when deployed in turbid waters. In contrast, the new generation of high-frequency (MHz) acoustic cameras can provide images with enhanced target details in highly turbid waters, though their range is reduced by one to two orders of magnitude compared to traditional low-/midfrequency (10s-100s kHz) sonar systems. It is conceivable that an effective inspection strategy is the deployment of both optical and acoustic cameras on a submersible platform, to enable target imaging in a range of turbidity conditions. Under this scenario and where visibility allows, registration of the images from both cameras arranged in binocular stereo configuration provides valuable scene information that cannot be readily recovered from each sensor alone. We explore and derive the constraint equations for the epipolar geometry and stereo triangulation in utilizing these two sensing modalities with different projection models. Theoretical results supported by computer simulations show that an opti-acoustic stereo imaging system outperforms a traditional binocular vision with optical cameras, particularly for increasing target distance and (or) turbidity.

  19. Application of acoustic reflection tomography to sonar imaging.

    PubMed

    Ferguson, Brian G; Wyber, Ron J

    2005-05-01

    Computer-aided tomography is a technique for providing a two-dimensional cross-sectional view of a three-dimensional object through the digital processing of many one-dimensional views (or projections) taken at different look directions. In acoustic reflection tomography, insonifying the object and then recording the backscattered signal provides the projection information for a given look direction (or aspect angle). Processing the projection information for all possible aspect angles enables an image to be reconstructed that represents the two-dimensional spatial distribution of the object's acoustic reflectivity function when projected on the imaging plane. The shape of an idealized object, which is an elliptical cylinder, is reconstructed by applying standard backprojection, Radon transform inversion (using both convolution and filtered backprojections), and direct Fourier inversion to simulated projection data. The relative merits of the various reconstruction algorithms are assessed and the resulting shape estimates compared. For bandpass sonar data, however, the wave number components of the acoustic reflectivity function that are outside the passband are absent. This leads to the consideration of image reconstruction for bandpass data. Tomographic image reconstruction is applied to real data collected with an ultra-wideband sonar transducer to form high-resolution acoustic images of various underwater objects when the sonar and object are widely separated.
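
    The reconstruction algorithms compared above include filtered backprojection of the projections recorded at each aspect angle. The following Python sketch implements a bare-bones filtered backprojection with a ramp filter; it is a generic textbook version under stated assumptions, not the authors' processing chain, and it ignores the bandpass issues the record discusses.

    import numpy as np

    def filtered_backprojection(sinogram, angles_deg):
        """Reconstruct an n x n image from projections (one sinogram row per angle)."""
        n = sinogram.shape[1]
        freqs = np.fft.fftfreq(n)
        # Ramp filter applied row by row in the frequency domain
        filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * np.abs(freqs), axis=1))
        x = np.arange(n) - n / 2
        X, Y = np.meshgrid(x, x)
        image = np.zeros((n, n))
        for row, theta in zip(filtered, np.deg2rad(angles_deg)):
            t = X * np.cos(theta) + Y * np.sin(theta) + n / 2    # detector coordinate
            image += np.interp(t.ravel(), np.arange(n), row, left=0.0, right=0.0).reshape(n, n)
        return image * np.pi / len(angles_deg)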

  20. Ohio International Television and Video Festival Award Winners from the Imaging Technology Center (ITC)

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Ohio International Television and Video Festival award winners from the Imaging Technology Center (ITC): Kevin Burke, Bill Fletcher, Gary Nolan and Emery Adanich, for the video entitled "Icing for Regional and Corporate Pilots".

  1. Imaging and detection of mines from acoustic measurements

    NASA Astrophysics Data System (ADS)

    Witten, Alan J.; DiMarzio, Charles A.; Li, Wen; McKnight, Stephen W.

    1999-08-01

    A laboratory-scale acoustic experiment is described where a buried target, a hockey puck cut in half, is shallowly buried in a sand box. To avoid the need for source and receiver coupling to the host sand, an acoustic wave is generated in the subsurface by a pulsed laser suspended above the air-sand interface. Similarly, an airborne microphone is suspended above this interface and moved in unison with the laser. After some pre-processing of the data, reflections from the target, although weak, could clearly be identified. While the existence and location of the target can be determined by inspection of the data, its unique shape cannot. Since target discrimination is important in mine detection, a 3D imaging algorithm was applied to the acquired acoustic data. This algorithm yielded a reconstructed image where the shape of the target was resolved.

  2. Multifeature image and video content-based storage and retrieval

    NASA Astrophysics Data System (ADS)

    Ardizzone, Edoardo; La Cascia, Marco

    1996-11-01

    In this paper we present the most recent evolution of JACOB, a system we developed for image and video content-based storage and retrieval. The system is based on two separate archives: a 'features DB' and a 'raw-data DB'. When a user submits a query, a search is done in the 'features DB'; the selected items are taken from the 'raw-data DB' and shown to the user. Two kinds of sessions are allowed: 'database population' and 'database querying'. During a 'database population' session the user inserts new data into the archive. The input data can consist of digital images or videos. Videos are split into shots and for each shot one or more representative frames are automatically extracted. Shots and r-frames are then characterized, either in an automatic or semi-automatic way, and stored in the archives. Automatic feature extraction consists of computing some low-level global features. Semi-automatic feature extraction is done using annotation tools that perform operations that are not currently possible with fully automatic methods. To this aim, semi-automatic motion-based segmentation and labeling tools have been developed. During a 'database querying' session, direct queries and queries by example are allowed. Queries may be iterated and variously combined to satisfy the query in the smallest number of steps. Multifeature querying is based on statistical analysis of the feature space.

  3. Detection and magnification of bridge displacements using video images

    NASA Astrophysics Data System (ADS)

    Terán, Leticia; Ordóñez, Celestino; García-Cortés, Silverio; Menéndez, Agustín.

    2016-11-01

    Monitoring displacements of some structures such as large bridges is essential to study their structural performance in order to avoid severe damage or even collapse. In this work, we use images obtained with digital video cameras to estimate the displacements of a metallic bridge by means of cross-correlation. In this way, it was possible to detect millimetric displacements for camera-to-bridge distances of over ten meters. In order to obtain a better representation of the structural displacements along the bridge and its modal shapes, a video magnification technique was also applied. The results obtained show that the combination of both techniques can provide relevant information for a structural analysis of the bridge.
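
    The displacement estimation above relies on cross-correlating image patches between video frames. As one possible illustration, the sketch below uses phase correlation to recover an integer-pixel shift of a tracked patch; the sub-pixel refinement needed for millimetric accuracy and the video magnification step are not reproduced, and all names are assumptions.

    import numpy as np

    def phase_correlation_shift(ref_patch, cur_patch):
        """Integer-pixel (dy, dx) shift of a target patch between two frames."""
        F1 = np.fft.fft2(ref_patch)
        F2 = np.fft.fft2(cur_patch)
        cross_power = F1 * np.conj(F2)
        cross_power /= np.abs(cross_power) + 1e-12
        corr = np.real(np.fft.ifft2(cross_power))
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # Map shifts larger than half the patch size to negative displacements
        if dy > corr.shape[0] // 2:
            dy -= corr.shape[0]
        if dx > corr.shape[1] // 2:
            dx -= corr.shape[1]
        return dy, dx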

  4. Acoustic imaging for temperature distribution reconstruction

    NASA Astrophysics Data System (ADS)

    Jia, Ruixi; Xiong, Qingyu; Liang, Shan

    2016-12-01

    For several industrial processes, such as burning and drying, the temperature distribution is important because it reflects the internal running state of industrial equipment and helps to develop control strategies and ensure safe operation. The principle of this technique is mainly based on the relationship between acoustic velocity and temperature. In this paper, an algorithm for temperature distribution reconstruction is considered. In simulation experiments comparing the least squares algorithm with the proposed one, the latter gives a better reflection of the temperature distribution and relatively higher reconstruction accuracy.
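
    The reconstruction above rests on the relationship between acoustic velocity and gas temperature, c = sqrt(gamma * R * T / M) for an ideal gas, so that a measured time of flight along a known path yields a path-averaged temperature. The sketch below shows only that single-path conversion with assumed air properties; a full reconstruction would combine many crossing paths in a tomographic inversion such as the least squares or proposed algorithm mentioned in the record.

    GAMMA = 1.4        # ratio of specific heats for air (assumed)
    R = 8.314          # universal gas constant, J/(mol*K)
    M_AIR = 0.029      # molar mass of dry air, kg/mol (assumed)

    def path_average_temperature(path_length_m, time_of_flight_s):
        """Path-averaged temperature (K) from an acoustic time-of-flight measurement."""
        c = path_length_m / time_of_flight_s          # average sound speed on the path
        return c ** 2 * M_AIR / (GAMMA * R)

    # Illustrative: a 5 m path traversed in 12.5 ms gives c = 400 m/s, T ≈ 399 K
    temperature_k = path_average_temperature(5.0, 0.0125)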

  5. Video Multiple Watermarking Technique Based on Image Interlacing Using DWT

    PubMed Central

    Ibrahim, Mohamed M.; Abdel Kader, Neamat S.; Zorkany, M.

    2014-01-01

    Digital watermarking is one of the important techniques to secure digital media files in the domains of data authentication and copyright protection. In nonblind watermarking systems, the need for the original host file in the watermark recovery operation imposes an overhead on system resources, doubling the required memory capacity and communications bandwidth. In this paper, a robust video multiple watermarking technique is proposed to solve this problem. This technique is based on image interlacing. In this technique, a three-level discrete wavelet transform (DWT) is used as the watermark embedding/extracting domain, the Arnold transform is used as the watermark encryption/decryption method, and different types of media (gray image, color image, and video) are used as watermarks. The robustness of this technique is tested by applying different types of attacks such as geometric, noising, format-compression, and image-processing attacks. The simulation results show the effectiveness and good performance of the proposed technique in saving system resources, memory capacity, and communications bandwidth. PMID:25587570

  6. Video multiple watermarking technique based on image interlacing using DWT.

    PubMed

    Ibrahim, Mohamed M; Abdel Kader, Neamat S; Zorkany, M

    2014-01-01

    Digital watermarking is one of the important techniques to secure digital media files in the domains of data authentication and copyright protection. In the nonblind watermarking systems, the need of the original host file in the watermark recovery operation makes an overhead over the system resources, doubles memory capacity, and doubles communications bandwidth. In this paper, a robust video multiple watermarking technique is proposed to solve this problem. This technique is based on image interlacing. In this technique, three-level discrete wavelet transform (DWT) is used as a watermark embedding/extracting domain, Arnold transform is used as a watermark encryption/decryption method, and different types of media (gray image, color image, and video) are used as watermarks. The robustness of this technique is tested by applying different types of attacks such as: geometric, noising, format-compression, and image-processing attacks. The simulation results show the effectiveness and good performance of the proposed technique in saving system resources, memory capacity, and communications bandwidth.

  7. Refocusing images and videos with a conventional compact camera

    NASA Astrophysics Data System (ADS)

    Kang, Lai; Wu, Lingda; Wei, Yingmei; Song, Hanchen; Yang, Zheng

    2015-03-01

    Digital refocusing is an interesting and useful tool for generating dynamic depth-of-field (DOF) effects in many types of photography such as portraits and creative photography. Since most existing digital refocusing methods rely on a four-dimensional light field captured by special, precisely manufactured devices or a sequence of images captured by a single camera, existing systems are either expensive for wide practical use or incapable of handling dynamic scenes. We present a low-cost approach for refocusing high-resolution (up to 8 megapixels) images and videos based on a single shot using an easy-to-build camera-mirror stereo system. Our proposed method consists of four main steps, namely system calibration, image rectification, disparity estimation, and refocusing rendering. The effectiveness of our proposed method has been evaluated extensively using both static and dynamic scenes with various depth ranges. Promising experimental results demonstrate that our method is able to simulate various controllable realistic DOF effects. To the best of our knowledge, our method is the first that allows one to refocus high-resolution images and videos of dynamic scenes captured by a conventional compact camera.

  8. Acoustic imaging for diagnostics of chemically reacting systems

    NASA Technical Reports Server (NTRS)

    Ramohalli, K.; Seshan, P.

    1983-01-01

    The concept of local diagnostics in chemically reacting systems with acoustic imaging is developed. The elements of acoustic imaging through ellipsoidal mirrors are theoretically discussed. In a general plan of the experimental program, the first system chosen in these studies is a simple open-jet, non-premixed turbulent flame. Methane is the fuel and enriched air is the oxidizer. This simple chemically reacting flow system is established at a Reynolds number (based on cold viscosity) of 50,000. A 1.5 m diameter high resolution acoustic mirror with an f-number of 0.75 is used to map the acoustic source zone along the axis of the flame. The results are presented as acoustic power spectra at various distances from the nozzle exit. It is seen that most of the reaction intensity is localized in a zone within 8 diameters from the exit. The bulk reactions (possibly around the periphery of the larger eddies) are evenly distributed along the length of the flame. Possibilities are seen for locally diagnosing single zones in a multiple cluster of reaction zones that occur frequently in practice. A brief outline is given of the future of this work, which will be to apply this technique to chemically reacting flows not limited to combustion.

  9. Ideal flushing agents for integrated optical acoustic imaging systems

    NASA Astrophysics Data System (ADS)

    Li, Jiawen; Minami, Hataka; Steward, Earl; Ma, Teng; Mohar, Dilbahar; Robertson, Claire; Shung, K. Kirk; Zhou, Qifa; Patel, Pranav M.; Chen, Zhongping

    2015-02-01

    An increasing number of integrated optical-acoustic intravascular imaging systems have been investigated and hold great promise for accurately diagnosing vulnerable plaques and for guiding atherosclerosis treatment. However, in any intravascular environment, the vascular lumen is filled with blood, which strongly scatters both optical and high-frequency ultrasound signals. Blood must be flushed away to make images clear. To our knowledge, no research has been performed to find the ideal flushing agent that works for both optical and acoustic imaging techniques. We selected three solutions, mannitol, dextran and iohexol, as flushing agents because of their image-enhancing effects and low toxicities. Quantitative testing of these flushing agents was performed in a closed-loop circulation model and in vivo on rabbits.

  10. Evaluation schemes for video and image anomaly detection algorithms

    NASA Astrophysics Data System (ADS)

    Parameswaran, Shibin; Harguess, Josh; Barngrover, Christopher; Shafer, Scott; Reese, Michael

    2016-05-01

    Video anomaly detection is a critical research area in computer vision. It is a natural first step before applying object recognition algorithms. Many algorithms that detect anomalies (outliers) in videos and images have been introduced in recent years. However, these algorithms behave and perform differently based on differences in the domains and tasks to which they are subjected. In order to better understand the strengths and weaknesses of outlier algorithms and their applicability in a particular domain/task of interest, it is important to measure and quantify their performance using appropriate evaluation metrics. Many evaluation metrics have been used in the literature, such as precision curves, precision-recall curves, and receiver operating characteristic (ROC) curves. In order to construct these different metrics, it is also important to choose an appropriate evaluation scheme that decides when a proposed detection is considered a true or a false detection. Choosing the right evaluation metric and the right scheme is very critical since the choice can introduce positive or negative bias in the measuring criterion and may favor (or work against) a particular algorithm or task. In this paper, we review evaluation metrics and popular evaluation schemes that are used to measure the performance of anomaly detection algorithms on videos and imagery with one or more anomalies. We analyze the biases introduced by these schemes by measuring the performance of an existing anomaly detection algorithm.
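
    A minimal sketch of one common evaluation scheme of the kind surveyed here: an intersection-over-union matching rule that decides which detections count as true positives before precision and recall are computed. The threshold value and function names are illustrative assumptions, and changing the matching rule is exactly the kind of scheme choice that can bias the reported metrics.

        def iou(a, b):
            """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
            ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
            ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
            inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
            area_a = (a[2] - a[0]) * (a[3] - a[1])
            area_b = (b[2] - b[0]) * (b[3] - b[1])
            return inter / float(area_a + area_b - inter)

        def precision_recall(detections, ground_truth, thresh=0.5):
            """Score detections against ground truth under an IoU-based scheme."""
            matched, tp = set(), 0
            for det in detections:
                hits = [i for i, gt in enumerate(ground_truth)
                        if i not in matched and iou(det, gt) >= thresh]
                if hits:
                    matched.add(hits[0])
                    tp += 1
            fp = len(detections) - tp
            fn = len(ground_truth) - len(matched)
            precision = tp / float(tp + fp) if detections else 0.0
            recall = tp / float(tp + fn) if ground_truth else 0.0
            return precision, recall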

  11. Videos and images from 25 years of teaching compressible flow

    NASA Astrophysics Data System (ADS)

    Settles, Gary

    2008-11-01

    Compressible flow is a very visual topic due to refractive optical flow visualization and the public fascination with high-speed flight. Films, video clips, and many images are available to convey this in the classroom. An overview of this material is given and selected examples are shown, drawn from educational films, the movies, television, etc., and accumulated over 25 years of teaching basic and advanced compressible-flow courses. The impact of copyright protection and the doctrine of fair use is also discussed.

  12. Acoustic property measurements in a photoacoustic imager

    NASA Astrophysics Data System (ADS)

    Willemink, René G. H.; Manohar, Srirang; Slump, Cornelis H.; van der Heijden, Ferdi; van Leeuwen, Ton

    2007-07-01

    Photoacoustics is a hybrid imaging technique that combines the contrast available to optical imaging with the resolution that is possessed by ultrasound imaging. The technique is based on generating ultrasound from absorbing structures in tissue using pulsed light. In photoacoustic (PA) computerized tomography (CT) imaging, reconstruction of the optical absorption in a subject is performed for example by filtered backprojection. The backprojection is performed along circular paths in image space instead of along straight lines as in X-ray CT imaging. To achieve this, the speed-of-sound through the subject is usually assumed constant. An unsuitable speed-of-sound can degrade resolution and contrast. We discuss here a method of actually measuring the speed-of-sound distribution using ultrasound transmission through the subject under photoacoustic investigation. This is achieved in a simple approach that does not require any additional ultrasound transmitter. The method uses a passive element (carbon fiber) that is placed in the imager in the path of the illumination, which generates ultrasound by the photoacoustic effect and behaves as an ultrasound source. Measuring the time-of-flight of this ultrasound transient by the same detector used for conventional photoacoustics allows a speed-of-sound image to be reconstructed. This concept is validated on phantoms.

  13. Using underwater video imaging as an assessment tool for coastal condition

    EPA Science Inventory

    As part of an effort to monitor ecological conditions in nearshore habitats, from 2009-2012 underwater videos were captured at over 400 locations throughout the Laurentian Great Lakes. This study focuses on developing a video rating system and assessing video images. This ratin...

  14. Synchrotron x-ray imaging of acoustic cavitation bubbles induced by acoustic excitation

    NASA Astrophysics Data System (ADS)

    Jung, Sung Yong; Park, Han Wook; Park, Sung Ho; Lee, Sang Joon

    2017-04-01

    The cavitation induced by acoustic excitation has been widely applied in various biomedical applications because cavitation bubbles can enhance the exchanges of mass and energy. In order to minimize the hazardous effects of the induced cavitation, it is essential to understand the spatial distribution of cavitation bubbles. The spatial distribution of cavitation bubbles visualized by the synchrotron x-ray imaging technique is compared to that obtained with a conventional x-ray tube. Cavitation bubbles with high density in the region close to the tip of the probe are visualized using the synchrotron x-ray imaging technique; however, the spatial distribution of cavitation bubbles in the whole ultrasound field is not detected. In this study, the effects of the ultrasound power of acoustic excitation and working medium on the shape and density of the induced cavitation bubbles are examined. As a result, the synchrotron x-ray imaging technique is useful for visualizing spatial distributions of cavitation bubbles, and it could be used for optimizing the operation conditions of acoustic cavitation.

  15. Reconstruction of an acoustic pressure field in a resonance tube by particle image velocimetry.

    PubMed

    Kuzuu, K; Hasegawa, S

    2015-11-01

    A technique for estimating an acoustic field in a resonance tube is suggested. The estimation of an acoustic field in a resonance tube is important for the development of the thermoacoustic engine, and can be conducted employing two sensors to measure pressure. While this measurement technique is known as the two-sensor method, care needs to be taken with the location of pressure sensors when conducting pressure measurements. In the present study, particle image velocimetry (PIV) is employed instead of a pressure measurement by a sensor, and two-dimensional velocity vector images are extracted as sequential data from only a one-time recording made by the PIV video camera. The spatial velocity amplitude is obtained from those images, and a pressure distribution is calculated from velocity amplitudes at two points by extending the equations derived for the two-sensor method. By means of this method, problems relating to the locations and calibrations of multiple pressure sensors are avoided. Furthermore, to verify the accuracy of the present method, experiments are conducted employing the conventional two-sensor method and laser Doppler velocimetry (LDV). Then, results by the proposed method are compared with those obtained with the two-sensor method and LDV.
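
    The extension described can be illustrated with a simplified, lossless plane-wave sketch (an assumption; the paper's equations account for the actual tube conditions, and all names below are illustrative): complex axial velocity amplitudes at two positions determine the forward and backward wave amplitudes, from which the pressure amplitude follows anywhere in the tube.

        import numpy as np

        def pressure_from_two_velocities(u1, u2, x1, x2, x_eval, freq_hz,
                                         rho=1.2, c=343.0):
            """Lossless plane-wave reconstruction of complex pressure amplitude.

            u1, u2: complex axial velocity amplitudes measured (e.g., by PIV)
            at positions x1, x2; x_eval: positions where pressure is wanted.
            Viscous and thermal losses are ignored in this sketch.
            """
            k = 2 * np.pi * freq_hz / c
            # rho*c*u(x) = A*exp(-i*k*x) - B*exp(+i*k*x)
            M = np.array([[np.exp(-1j * k * x1), -np.exp(1j * k * x1)],
                          [np.exp(-1j * k * x2), -np.exp(1j * k * x2)]])
            A, B = np.linalg.solve(M, rho * c * np.array([u1, u2]))
            x = np.asarray(x_eval, float)
            return A * np.exp(-1j * k * x) + B * np.exp(1j * k * x)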

  16. Opto-acoustic breast imaging with co-registered ultrasound

    NASA Astrophysics Data System (ADS)

    Zalev, Jason; Clingman, Bryan; Herzog, Don; Miller, Tom; Stavros, A. Thomas; Oraevsky, Alexander; Kist, Kenneth; Dornbluth, N. Carol; Otto, Pamela

    2014-03-01

    We present results from a recent study involving the ImagioTM breast imaging system, which produces fused real-time two-dimensional color-coded opto-acoustic (OA) images that are co-registered and temporally interleaved with real-time gray scale ultrasound using a specialized duplex handheld probe. The use of dual optical wavelengths provides functional blood map images of breast tissue and tumors displayed with high contrast based on total hemoglobin and oxygen saturation of the blood. This provides functional diagnostic information pertaining to tumor metabolism. OA also shows morphologic information about tumor neo-vascularity that is complementary to the morphological information obtained with conventional gray scale ultrasound. This fusion technology conveniently enables real-time analysis of the functional opto-acoustic features of lesions detected by readers familiar with anatomical gray scale ultrasound. We demonstrate co-registered opto-acoustic and ultrasonic images of malignant and benign tumors from a recent clinical study that provide new insight into the function of tumors in-vivo. Results from the Feasibility Study show preliminary evidence that the technology may have the capability to improve characterization of benign and malignant breast masses over conventional diagnostic breast ultrasound alone and to improve overall accuracy of breast mass diagnosis. In particular, OA improved specificity over that of conventional diagnostic ultrasound, which could potentially reduce the number of negative biopsies performed without missing cancers.

  17. Optical and opto-acoustic imaging.

    PubMed

    Ntziachristos, Vasilis; Razansky, Daniel

    2013-01-01

    Since the inception of the microscope, optical imaging has served biological discovery for more than four centuries. With the recent emergence of methods appropriate for in vivo staining, such as bioluminescence, fluorescent molecular probes, and proteins, as well as nanoparticle-based targeted agents, significant attention has shifted toward in vivo interrogation of different dynamic biological processes at the molecular level. This progress has been largely supported by the development of advanced optical tomographic imaging technologies suitable for obtaining volumetric visualization of biomarker distributions in small animals at a whole-body or whole-organ scale, an imaging frontier that is not accessible by the existing tissue-sectioning microscopic techniques due to intensive light scattering beyond the depth of a few hundred microns. Biomedical optoacoustics has also emerged in the recent decade as a powerful tool for high-resolution visualization of optical contrast, overcoming a variety of longstanding limitations imposed by light scattering in deep tissues. By detecting tiny sound vibrations, resulting from selective absorption of light at multiple wavelengths, multispectral optoacoustic tomography methods can now "hear color" in three dimensions, i.e., deliver volumetric spectrally enriched (color) images from deep living tissues at high spatial resolution and in real time. These new-found imaging abilities directly relate to preclinical screening applications in animal models and are foreseen to significantly impact clinical decision making as well.

  18. Tracking of multiple points using color video image analyzer

    NASA Astrophysics Data System (ADS)

    Nennerfelt, Leif

    1990-08-01

    The Videomex-X is a new product intended for use in biomechanical measurement. It tracks up to six points at 60 frames per second using colored markers placed on the subject. The system can be used for applications such as gait analysis, studying facial movements, or tracking the pattern of movements of individuals in a group. The Videomex-X is comprised of a high speed color image analyzer, an RBG color video camera, an IBM AT compatible computer and motion analysis software. The markers are made from brightly colored plastic disks and each marker is a different color. Since the markers are unique, the problem of misidentification of markers does not occur. The Videomex-X performs realtime analysis so that the researcher can get immediate feedback on the subject's performance. High speed operation is possible because the system uses distributed processing. The image analyzer is a hardwired parallel image processor which identifies the markers within the video picture and computes their x-y locations. The image analyzer sends the x-y coordinates to the AT computer which performs additional analysis and presents the result. The x-y coordinate data acquired during the experiment may be streamed to the computer's hard disk. This allows the data to be re-analyzed repeatedly using different analysis criteria. The original Videomex-X tracked in two dimensions. However, a 3-D system has recently been completed. The algorithm used by the system to derive performance results from the x-y coordinates is contained in a separate ASCII file. These files can be modified by the operator to produce the required type of data reduction.
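
    The per-marker computation the hardware performs can be sketched in software as follows (illustrative only; the Videomex-X implements this in a hardwired parallel image processor, and the thresholding scheme and names here are assumptions): threshold the frame on the marker's color and take the centroid of the matching pixels.

        import numpy as np

        def marker_centroid(frame_rgb, target_rgb, tol=40):
            """Locate a colored marker and return its (x, y) centroid in pixels.

            frame_rgb: H x W x 3 array; target_rgb: nominal marker color;
            tol: per-channel tolerance. Returns None if no pixels match.
            """
            diff = np.abs(frame_rgb.astype(int) - np.asarray(target_rgb))
            mask = np.all(diff < tol, axis=-1)
            ys, xs = np.nonzero(mask)
            if xs.size == 0:
                return None
            return xs.mean(), ys.mean()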

  19. Acoustic and photoacoustic microscopy imaging of single leukocytes

    NASA Astrophysics Data System (ADS)

    Strohm, Eric M.; Moore, Michael J.; Kolios, Michael C.

    2016-03-01

    An acoustic/photoacoustic microscope was used to create micrometer resolution images of stained cells from a blood smear. Pulse echo ultrasound images were made using a 1000 MHz transducer with 1 μm resolution. Photoacoustic images were made using a fiber coupled 532 nm laser, where energy losses through stimulated Raman scattering enabled output wavelengths from 532 nm to 620 nm. The laser was focused onto the sample using a 20x objective, and the laser spot co-aligned with the 1000 MHz transducer opposite the laser. The blood smear was stained with Wright-Giemsa, a common metachromatic dye that differentially stains the cellular components for visual identification. A neutrophil, lymphocyte and a monocyte were imaged using acoustic and photoacoustic microscopy at two different wavelengths, 532 nm and 600 nm. Unique features in each imaging modality enabled identification of the different cell types. This imaging method provides a new way of imaging stained leukocytes, with applications towards identifying and differentiating cell types, and detecting disease at the single cell level.

  20. Feature Extraction in Sequential Multimedia Images: with Applications in Satellite Images and On-line Videos

    NASA Astrophysics Data System (ADS)

    Liang, Yu-Li

    Multimedia data is increasingly important in scientific discovery and people's daily lives. The content of massive multimedia collections is often diverse and noisy, and motion between frames is sometimes crucial in analyzing those data. Among all formats, still images and videos are the most commonly used. Images are compact in size but do not contain motion information. Videos record motion but are sometimes too big to be analyzed. Sequential images, which are sets of continuous images with low frame rate, stand out because they are smaller than videos and still retain motion information. This thesis investigates features in different types of noisy sequential images, and proposes solutions that intelligently combine multiple features to successfully retrieve visual information from on-line videos and cloudy satellite images. The first task is detecting supraglacial lakes above the ice sheet in sequential satellite images. The dynamics of supraglacial lakes on the Greenland ice sheet deeply affect glacier movement, which is directly related to sea level rise and global environmental change. Detecting lakes above ice suffers from diverse image qualities and unexpected clouds. A new method is proposed to efficiently extract prominent lake candidates with irregular shapes and heterogeneous backgrounds, even in cloudy images. The proposed system fully automates the procedure for tracking lakes with high accuracy. We further cooperated with geoscientists to examine the tracked lakes and obtained new scientific findings. The second task is detecting obscene content in on-line video chat services, such as Chatroulette, that randomly match pairs of users in video chat sessions. A big problem encountered in such systems is the presence of flashers and obscene content. Because of the variety of obscene content and the unstable quality of videos captured by home web-cameras, detecting misbehaving users is a highly challenging task. We propose SafeVchat, which is the first solution that achieves satisfactory

  1. Image reconstruction with acoustic radiation force induced shear waves

    NASA Astrophysics Data System (ADS)

    McAleavey, Stephen A.; Nightingale, Kathryn R.; Stutz, Deborah L.; Hsu, Stephen J.; Trahey, Gregg E.

    2003-05-01

    Acoustic radiation force may be used to induce localized displacements within tissue. This phenomenon is used in Acoustic Radiation Force Impulse Imaging (ARFI), where short bursts of ultrasound deliver an impulsive force to a small region. The application of this transient force launches shear waves which propagate normally to the ultrasound beam axis. Measurements of the displacements induced by the propagating shear wave allow reconstruction of the local shear modulus, by wave tracking and inversion techniques. Here we present in vitro, ex vivo and in vivo measurements and images of shear modulus. Data were obtained with a single transducer, a conventional ultrasound scanner and specialized pulse sequences. Young's modulus values of 4 kPa, 13 kPa and 14 kPa were observed for fat, breast fibroadenoma, and skin. Shear modulus anisotropy in beef muscle was observed.
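
    The modulus values quoted can be related to the tracked shear-wave speed with a short worked sketch (assuming a soft-tissue density of about 1000 kg/m3 and near-incompressibility; the function names are illustrative): mu = rho * c_s**2 and E is approximately 3 * mu.

        def shear_modulus(shear_wave_speed_m_s, density_kg_m3=1000.0):
            """Local shear modulus: mu = rho * c_s**2 (Pa)."""
            return density_kg_m3 * shear_wave_speed_m_s ** 2

        def youngs_modulus(shear_wave_speed_m_s, density_kg_m3=1000.0):
            """E ~= 3 * mu for nearly incompressible soft tissue (nu ~= 0.5)."""
            return 3.0 * shear_modulus(shear_wave_speed_m_s, density_kg_m3)

        # Example: a 2.1 m/s shear wave gives E = 3 * 1000 * 2.1**2 ~= 13 kPa,
        # the order of magnitude reported above for a breast fibroadenoma.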

  2. Scanning Michelson interferometer for imaging surface acoustic wave fields.

    PubMed

    Knuuttila, J V; Tikka, P T; Salomaa, M M

    2000-05-01

    A scanning homodyne Michelson interferometer is constructed for two-dimensional imaging of high-frequency surface acoustic wave (SAW) fields in SAW devices. The interferometer possesses a sensitivity of ~10⁻⁵ nm/√Hz, and it is capable of directly measuring SAWs with frequencies ranging from 0.5 MHz up to 1 GHz. The fast scheme used for locating the optimum operation point of the interferometer facilitates high measuring speeds, up to 50,000 points/h. The measured field image has a lateral resolution of better than 1 μm. The fully optical noninvasive scanning system can be applied to SAW device development and research, providing information on acoustic wave distribution that cannot be obtained by merely electrical measurements.

  3. Ultra high frequency imaging acoustic microscope

    DOEpatents

    Deason, Vance A.; Telschow, Kenneth L.

    2006-05-23

    An imaging system includes: an object wavefront source and an optical microscope objective all positioned to direct an object wavefront onto an area of a vibrating subject surface encompassed by a field of view of the microscope objective, and to direct a modulated object wavefront reflected from the encompassed surface area through a photorefractive material; and a reference wavefront source and at least one phase modulator all positioned to direct a reference wavefront through the phase modulator and to direct a modulated reference wavefront from the phase modulator through the photorefractive material to interfere with the modulated object wavefront. The photorefractive material has a composition and a position such that interference of the modulated object wavefront and modulated reference wavefront occurs within the photorefractive material, providing a full-field, real-time image signal of the encompassed surface area.

  4. Acquisition and Analysis of Dynamic Responses of a Historic Pedestrian Bridge using Video Image Processing

    NASA Astrophysics Data System (ADS)

    O'Byrne, Michael; Ghosh, Bidisha; Schoefs, Franck; O'Donnell, Deirdre; Wright, Robert; Pakrashi, Vikram

    2015-07-01

    Video based tracking is capable of analysing bridge vibrations that are characterised by large amplitudes and low frequencies. This paper presents the use of video images and associated image processing techniques to obtain the dynamic response of a pedestrian suspension bridge in Cork, Ireland. This historic structure is one of the four suspension bridges in Ireland and is notable for its dynamic nature. A video camera is mounted on the river-bank and the dynamic responses of the bridge have been measured from the video images. The dynamic response is assessed without the need of a reflector on the bridge and in the presence of various forms of luminous complexities in the video image scenes. Vertical deformations of the bridge were measured in this regard. The video image tracking for the measurement of dynamic responses of the bridge was based on correlating patches in time-lagged scenes in video images and utilising a zero mean normalised cross correlation (ZNCC) metric. The bridge was excited by designed pedestrian movement and by individual cyclists traversing the bridge. The time series data of dynamic displacement responses of the bridge were analysed to obtain the frequency domain response. Frequencies obtained from video analysis were checked against accelerometer data from the bridge obtained while carrying out the same set of experiments used for video image based recognition.
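
    The ZNCC metric used for the patch matching can be written compactly (a sketch assuming NumPy; the function name is illustrative). Sliding this score over a search window in the time-lagged frame and taking the peak location gives the patch displacement for that frame.

        import numpy as np

        def zncc(patch_a, patch_b):
            """Zero mean normalised cross correlation of two equal-size patches.

            Returns a value in [-1, 1]; 1 indicates a perfect match.
            """
            a = patch_a.astype(float) - patch_a.mean()
            b = patch_b.astype(float) - patch_b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            return float((a * b).sum() / denom) if denom > 0 else 0.0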

  6. An introduction to video image compression and authentication technology for safeguards applications

    SciTech Connect

    Johnson, C.S.

    1995-07-01

    Verification of a video image has been a major problem for safeguards for several years. Various verification schemes have been tried on analog video signals ever since the mid-1970s. These schemes have provided a measure of protection but have never been widely adopted. The development of reasonably priced complex video processing integrated circuits makes it possible to digitize a video image and then compress the resulting digital file into a smaller file without noticeable loss of resolution. Authentication and/or encryption algorithms can be more easily applied to digital video files that have been compressed. The compressed video files require less time for algorithm processing and image transmission. An important safeguards application for authenticated, compressed, digital video images is in unattended video surveillance systems and remote monitoring systems. The use of digital images in the surveillance system makes it possible to develop remote monitoring systems that send images over narrow bandwidth channels such as the common telephone line. This paper discusses the video compression process, authentication algorithm, and data format selected to transmit and store the authenticated images.

  7. A novel Kalman filter based video image processing scheme for two-photon fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Sun, Wenqing; Huang, Xia; Li, Chunqiang; Xiao, Chuan; Qian, Wei

    2016-03-01

    Two-photon fluorescence microscopy (TPFM) is a perfect optical imaging equipment to monitor the interaction between fast moving viruses and hosts. However, due to strong unavoidable background noises from the culture, videos obtained by this technique are too noisy to elaborate this fast infection process without video image processing. In this study, we developed a novel scheme to eliminate background noises, recover background bacteria images and improve video qualities. In our scheme, we modified and implemented the following methods for both host and virus videos: correlation method, round identification method, tree-structured nonlinear filters, Kalman filters, and cell tracking method. After these procedures, most of noises were eliminated and host images were recovered with their moving directions and speed highlighted in the videos. From the analysis of the processed videos, 93% bacteria and 98% viruses were correctly detected in each frame on average.
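
    A minimal sketch of the Kalman filtering stage used in such a tracking pipeline (a constant-velocity state model is assumed here for illustration; the paper's exact state model, noise settings, and the other stages such as the tree-structured nonlinear filters are not reproduced):

        import numpy as np

        class ConstantVelocityKalman:
            """Minimal 2-D constant-velocity Kalman filter for centroid tracking."""

            def __init__(self, x0, y0, dt=1.0, q=1e-2, r=1.0):
                self.x = np.array([x0, y0, 0.0, 0.0])        # [x, y, vx, vy]
                self.P = np.eye(4) * 10.0                    # state covariance
                self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                                   [0, 0, 1, 0], [0, 0, 0, 1]], float)
                self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
                self.Q = np.eye(4) * q                       # process noise
                self.R = np.eye(2) * r                       # measurement noise

            def step(self, z):
                """Predict, then update with the measured centroid z = (x, y)."""
                self.x = self.F @ self.x
                self.P = self.F @ self.P @ self.F.T + self.Q
                y = np.asarray(z, float) - self.H @ self.x   # innovation
                S = self.H @ self.P @ self.H.T + self.R
                K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
                self.x = self.x + K @ y
                self.P = (np.eye(4) - K @ self.H) @ self.P
                return self.x[:2]                            # filtered position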

  8. The Sharper Image: Implementing a Fast Fourier Transform (FFT) to Enhance a Video-Captured Image.

    DTIC Science & Technology

    1994-01-01

    mathematical system to quantitatively analyze and compare complex wave forms. In 1807, Baron Jean-Baptiste-Joseph Fourier proved that any periodic wave can be... Most visually impaired persons fail to discern the higher spatial frequencies present in an image. Based on the Fourier analysis of vision, Peli et al
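
    The kind of enhancement the report title describes can be sketched as frequency-domain filtering that boosts the higher spatial frequencies low-vision observers tend to miss (illustrative only; the cutoff, gain, and function name are assumptions, not the report's parameters):

        import numpy as np

        def emphasize_high_frequencies(image, boost=2.0, cutoff=0.1):
            """Sharpen an image by boosting spatial frequencies above a cutoff.

            cutoff is a fraction of the Nyquist frequency; boost > 1 amplifies
            the high-frequency band.
            """
            spectrum = np.fft.fftshift(np.fft.fft2(image.astype(float)))
            rows, cols = image.shape
            u = np.fft.fftshift(np.fft.fftfreq(rows))[:, None]
            v = np.fft.fftshift(np.fft.fftfreq(cols))[None, :]
            radius = np.sqrt(u ** 2 + v ** 2)
            gain = np.where(radius > cutoff * 0.5, boost, 1.0)  # 0.5 = Nyquist
            out = np.fft.ifft2(np.fft.ifftshift(spectrum * gain)).real
            return np.clip(out, 0, 255)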

  9. Video Skimming and Characterization through the Combination of Image and Language Understanding Techniques

    NASA Technical Reports Server (NTRS)

    Smith, Michael A.; Kanade, Takeo

    1997-01-01

    Digital video is rapidly becoming important for education, entertainment, and a host of multimedia applications. With the size of the video collections growing to thousands of hours, technology is needed to effectively browse segments in a short time without losing the content of the video. We propose a method to extract the significant audio and video information and create a "skim" video which represents a very short synopsis of the original. The goal of this work is to show the utility of integrating language and image understanding techniques for video skimming by extraction of significant information, such as specific objects, audio keywords and relevant video structure. The resulting skim video is much shorter, where compaction is as high as 20:1, and yet retains the essential content of the original segment.

  10. Enhancement of Video Images Degraded by Turbid Water

    DTIC Science & Technology

    1986-12-01

    NAVAL POSTGRADUATE SCHOOL, Monterey, California. Thesis: Enhancement of Video Images Degraded by Turbid Water, by Jorge A. ...

  11. Space Shuttle Video Images: An Example of Warm Cloud Lightning

    NASA Technical Reports Server (NTRS)

    Vaughan, Otha H., Jr.; Boeck, William L.

    1998-01-01

    Warm cloud lightning has been reported in several tropical locations. We have been using the intensified monochrome TV cameras at night during a number of shuttle flights to observe large active thunderstorms and their associated lightning. During a nighttime orbital pass of the STS-70 mission on 17 July 1995 at 07:57:42 GMT, the controllers obtained video imagery of a small cloud that was producing lightning. Data from a GOES infrared image establishes that the cloud top had a temperature of about 271 degrees Kelvin ( -2 degrees Celsius). Since this cloud was electrified to the extent that a lightning discharge did occur, it may be another case of lightning in a cloud that presents little if any evidence of frozen or melting precipitation.

  12. Video-rate terahertz electric-field vector imaging

    SciTech Connect

    Takai, Mayuko; Takeda, Masatoshi; Sasaki, Manabu; Tachizaki, Takehiro; Yasumatsu, Naoya; Watanabe, Shinichi

    2014-10-13

    We present an experimental setup to dramatically reduce a measurement time for obtaining spatial distributions of terahertz electric-field (E-field) vectors. The method utilizes the electro-optic sampling, and we use a charge-coupled device to detect a spatial distribution of the probe beam polarization rotation by the E-field-induced Pockels effect in a 〈110〉-oriented ZnTe crystal. A quick rotation of the ZnTe crystal allows analyzing the terahertz E-field direction at each image position, and the terahertz E-field vector mapping at a fixed position of an optical delay line is achieved within 21 ms. Video-rate mapping of terahertz E-field vectors is likely to be useful for achieving real-time sensing of terahertz vector beams, vector vortices, and surface topography. The method is also useful for a fast polarization analysis of terahertz beams.

  13. Degraded visual environment image/video quality metrics

    NASA Astrophysics Data System (ADS)

    Baumgartner, Dustin D.; Brown, Jeremy B.; Jacobs, Eddie L.; Schachter, Bruce J.

    2014-06-01

    A number of image quality metrics (IQMs) and video quality metrics (VQMs) have been proposed in the literature for evaluating techniques and systems for mitigating degraded visual environments. Some require both pristine and corrupted imagery. Others require patterned target boards in the scene. None of these metrics relates well to the task of landing a helicopter in conditions such as a brownout dust cloud. We have developed and used a variety of IQMs and VQMs related to the pilot's ability to detect hazards in the scene and to maintain situational awareness. Some of these metrics can be made agnostic to sensor type. Not only are the metrics suitable for evaluating algorithm and sensor variation, they are also suitable for choosing the most cost effective solution to improve operating conditions in degraded visual environments.

  14. Passive 350 GHz Video Imaging Systems for Security Applications

    NASA Astrophysics Data System (ADS)

    Heinz, E.; May, T.; Born, D.; Zieger, G.; Anders, S.; Zakosarenko, V.; Meyer, H.-G.; Schäffel, C.

    2015-10-01

    Passive submillimeter-wave imaging is a concept that has been in the focus of interest as a promising technology for personal security screening for a number of years. In contrast to established portal-based millimeter-wave scanning techniques, it allows for scanning people from a distance in real time with high throughput and without a distinct inspection procedure. This opens up new possibilities for scanning, which directly address an urgent security need of modern societies: protecting crowds and critical infrastructure from the growing threat of individual terror attacks. Considering the low radiometric contrast of indoor scenes in the submillimeter range, this objective calls for an extremely high detector sensitivity that can only be achieved using cooled detectors. Our approach to this task is a series of passive standoff video cameras for the 350 GHz band that represent an evolving concept and a continuous development since 2007. Arrays of superconducting transition-edge sensors (TES), operated at temperatures below 1 K, are used as radiation detectors. By this means, background limited performance (BLIP) mode is achieved, providing the maximum possible signal to noise ratio. At video rates, this leads to a temperature resolution well below 1 K. The imaging system is completed by reflector optics based on free-form mirrors. For object distances of 5-25 m, a field of view up to 2 m height and a diffraction-limited spatial resolution in the order of 1-2 cm is provided. Opto-mechanical scanning systems are part of the optical setup and capable of frame rates of up to 25 frames per second.

  15. Airframe noise measurements by acoustic imaging

    NASA Technical Reports Server (NTRS)

    Kendall, J. M.

    1977-01-01

    Studies of the noise produced by flow past wind tunnel models are presented. The central objective of these is to find the specific locations within a flow which are noisy, and to identify the fluid dynamic processes responsible, with the expectation that noise reduction principles will be discovered. The models tested are mostly simple shapes which result in types of flow that are similar to those occurring on, for example, aircraft landing gear and wheel cavities. A model landing gear and a flap were also tested. Turbulence has been intentionally induced as appropriate in order to simulate full-scale effects more closely. The principal technique involves use of a highly directional microphone system which is scanned about the flow field to be analyzed. The data so acquired are presented as a pictorial image of the noise source distribution. An important finding is that the noise production is highly variable within a flow field and that sources can be attributed to various fluid dynamic features of the flow. Flow separation was not noisy, but separation closure usually was.

  16. Energy Efficient Image/Video Data Transmission on Commercial Multi-Core Processors

    PubMed Central

    Lee, Sungju; Kim, Heegon; Chung, Yongwha; Park, Daihee

    2012-01-01

    In transmitting image/video data over Video Sensor Networks (VSNs), energy consumption must be minimized while maintaining high image/video quality. Although image/video compression is well known for its efficiency and usefulness in VSNs, the excessive costs associated with encoding computation and complexity still hinder its adoption for practical use. However, it is anticipated that high-performance handheld multi-core devices will be used as VSN processing nodes in the near future. In this paper, we propose a way to improve the energy efficiency of image and video compression with multi-core processors while maintaining the image/video quality. We improve the compression efficiency at the algorithmic level or derive the optimal parameters for the combination of a machine and compression based on the tradeoff between the energy consumption and the image/video quality. Based on experimental results, we confirm that the proposed approach can improve the energy efficiency of the straightforward approach by a factor of 2∼5 without compromising image/video quality. PMID:23202181

  17. An application of backprojection for video SAR image formation exploiting a subaperture circular shift register

    NASA Astrophysics Data System (ADS)

    Miller, J.; Bishop, E.; Doerry, A.

    2013-05-01

    This paper details a Video SAR (Synthetic Aperture Radar) mode that provides a persistent view of a scene centered at the Motion Compensation Point (MCP). The radar platform follows a circular flight path. An objective is to form a sequence of SAR images while observing dynamic scene changes at a selectable video frame rate. A formulation of backprojection meets this objective. Modified backprojection equations take into account changes in the grazing angle or squint angle that result from non-ideal flight paths. The algorithm forms a new video frame relying upon much of the signal processing performed in prior frames. The method described applies an appropriate azimuth window to each video frame for window sidelobe rejection. A Cardinal Direction Up (CDU) coordinate frame forms images with the top of the image oriented along a given cardinal direction for all video frames. Using this coordinate frame helps characterize a moving target's target response. Generation of synthetic targets with linear motion including both constant velocity and constant acceleration is described. The synthetic target video imagery demonstrates dynamic SAR imagery with expected moving target responses. The paper presents 2011 flight data collected by General Atomics Aeronautical Systems, Inc. (GA-ASI) implementing the video SAR mode. The flight data demonstrates good video quality showing moving vehicles. The flight imagery demonstrates the real-time capability of the video SAR mode. The video SAR mode uses a circular shift register of subapertures. The radar employs a Graphics Processing Unit (GPU) in order to implement this algorithm.
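
    The subaperture circular shift register idea can be sketched as follows (a simplified illustration assuming each subaperture has already been backprojected onto a common scene grid; the class and method names are assumptions, and the azimuth windowing and cardinal-direction-up rotation described above are omitted):

        from collections import deque
        import numpy as np

        class VideoSarFramer:
            """Form video SAR frames from a circular register of subapertures.

            Each incoming complex subaperture image is pushed into a
            fixed-length deque; a video frame is the coherent sum of the stored
            subapertures, so consecutive frames reuse almost all of the
            processing performed for prior frames.
            """

            def __init__(self, subapertures_per_frame):
                self.buffer = deque(maxlen=subapertures_per_frame)

            def add_subaperture(self, subap_image):
                self.buffer.append(subap_image)
                frame = np.sum(self.buffer, axis=0)              # coherent sum
                return 20 * np.log10(np.abs(frame) + 1e-12)      # dB magnitude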

  18. Identifying Vulnerable Plaques with Acoustic Radiation Force Impulse Imaging

    NASA Astrophysics Data System (ADS)

    Doherty, Joshua Ryan

    The rupture of arterial plaques is the most common cause of ischemic complications including stroke, the fourth leading cause of death and number one cause of long-term disability in the United States. Unfortunately, because conventional diagnostic tools fail to identify plaques that confer the highest risk, often a disabling stroke and/or sudden death is the first sign of disease. A diagnostic method capable of characterizing plaque vulnerability would likely enhance the predictive ability and ultimately the treatment of stroke before the onset of clinical events. This dissertation evaluates the hypothesis that Acoustic Radiation Force Impulse (ARFI) imaging can noninvasively identify lipid regions, that have been shown to increase a plaque's propensity to rupture, within carotid artery plaques in vivo. The work detailed herein describes development efforts and results from simulations and experiments that were performed to evaluate this hypothesis. To first demonstrate feasibility and evaluate potential safety concerns, finite-element method simulations are used to model the response of carotid artery plaques to an acoustic radiation force excitation. Lipid pool visualization is shown to vary as a function of lipid pool geometry and stiffness. A comparison of the resulting Von Mises stresses indicates that stresses induced by an ARFI excitation are three orders of magnitude lower than those induced by blood pressure. This thesis also presents the development of a novel pulse inversion harmonic tracking method to reduce clutter-imposed errors in ultrasound-based tissue displacement estimates. This method is validated in phantoms and was found to reduce bias and jitter displacement errors for a marked improvement in image quality in vivo. Lastly, this dissertation presents results from a preliminary in vivo study that compares ARFI imaging derived plaque stiffness with spatially registered composition determined by a Magnetic Resonance Imaging (MRI) gold standard

  19. Single-channel stereoscopic video imaging modality based on a transparent rotating deflector

    NASA Astrophysics Data System (ADS)

    Radfar, Edalat; Park, Jihoon; Jun, Eunkwon; Ha, Myungjin; Lee, Sangyeob; Yu, SungKon; Jang, Seul G.; Jung, Byungjo

    2015-03-01

    This paper introduces a stereoscopic video imaging modality based on a transparent rotating deflector (TRD). Sequential two-dimensional (2D) left and right images were obtained by rotating the TRD on a stepping motor synchronized with a complementary metal-oxide semiconductor camera, and the components of the imaging modality were controlled through general purpose input/output ports using a microcontroller unit. In this research, live stereoscopic videos were visualized on a personal computer by both active shutter 3D and passive polarization 3D methods. The imaging modality was characterized by evaluating the stereoscopic video image generation and the rotation characteristics of the TRD. The level of 3D conception was estimated in terms of simplified human stereovision. The results show that the single-channel stereoscopic video imaging modality has the potential to become an economical compact stereoscopic device, as the system components are amenable to miniaturization, and could be applied in a wide variety of fields.

  20. Frequency Identification of Vibration Signals Using Video Camera Image Data

    PubMed Central

    Jeng, Yih-Nen; Wu, Chia-Hung

    2012-01-01

    This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most dominant modes of vibration signal, but may involve the non-physical modes induced by the insufficient frame rates. Using a simple model, frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera from, for instance, 0 to 256 levels, was enhanced by summing gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line on the surface of the vibration system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has the critical frequency of inducing the false mode at 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were proven to have the effect of partially suppressing the non-physical modes, but they cannot eliminate them completely. Two examples, the prominent vibration modes of which are less than the associated critical frequencies, are examined to demonstrate the performances of the proposed systems. In general, the experimental data show that the non-contact type image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system. PMID:23202026
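
    A minimal sketch of the core measurement and of predicting the non-physical (aliased) modes mentioned above (assuming NumPy; the region-summed gray-level signal is passed in as a 1-D array, and function names are illustrative):

        import numpy as np

        def dominant_frequencies(gray_level_signal, frame_rate_hz, n_peaks=3):
            """Strongest spectral peaks of a region-summed gray-level signal."""
            signal = np.asarray(gray_level_signal, float)
            signal -= signal.mean()                       # remove DC component
            spectrum = np.abs(np.fft.rfft(signal))
            freqs = np.fft.rfftfreq(signal.size, d=1.0 / frame_rate_hz)
            order = np.argsort(spectrum)[::-1]
            return freqs[order[:n_peaks]]

        def aliased_frequency(true_freq_hz, frame_rate_hz):
            """Apparent (non-physical) frequency of a mode above Nyquist."""
            folded = true_freq_hz % frame_rate_hz
            return min(folded, frame_rate_hz - folded)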

  1. Frequency identification of vibration signals using video camera image data.

    PubMed

    Jeng, Yih-Nen; Wu, Chia-Hung

    2012-10-16

    This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most dominant modes of vibration signal, but may involve the non-physical modes induced by the insufficient frame rates. Using a simple model, frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera from, for instance, 0 to 256 levels, was enhanced by summing gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line on the surface of the vibration system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has the critical frequency of inducing the false mode at 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were proven to have the effect of partially suppressing the non-physical modes, but they cannot eliminate them completely. Two examples, the prominent vibration modes of which are less than the associated critical frequencies, are examined to demonstrate the performances of the proposed systems. In general, the experimental data show that the non-contact type image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system.

  2. An infrared high rate video imager for various space applications

    NASA Astrophysics Data System (ADS)

    Svedhem, Hâkan; Koschny, Detlef

    2010-05-01

    Modern spacecraft with high data transmission capabilities have opened up the possibility to fly video rate imagers in space. Several fields concerned with observations of transient phenomena can benefit significantly from imaging at video frame rate. Some applications are observations and characterization of bolides/meteors, sprites, lightning, volcanic eruptions, and impacts on airless bodies. Applications can be found both on low and high Earth orbiting spacecraft as well as on planetary and lunar orbiters. The optimum wavelength range varies depending on the application but we will focus here on the near infrared, partly since it allows exploration of a new field and partly because it, in many cases, allows operation both during day and night. Such an instrument has to our knowledge never flown in space so far. The only sensors of a similar kind fly on US defense satellites for monitoring launches of ballistic missiles. The data from these sensors, however, is largely inaccessible to scientists. We have developed a bread-board version of such an instrument, the SPOSH-IR. The instrument is based on an earlier technology development - SPOSH - a Smart Panoramic Optical Sensor Head, for operation in the visible range, but with the sensor replaced by a cooled IR detector and new optics. The instrument is using a Sofradir 320x256 pixel HgCdTe detector array with 30 µm pixel size, mounted directly on top of a four stage thermoelectric Peltier cooler. The detector-cooler combination is integrated into an evacuated closed package with a glass window on its front side. The detector has a sensitive range between 0.8 and 2.5 µm. The optical part is a seven lens design with a focal length of 6 mm and a FOV of 90 deg by 72 deg, optimized for use at SWIR. The detector operates at 200 K while the optics operates at ambient temperature. The optics and electronics for the bread-board have been designed and built by Jena-Optronik, Jena, Germany. This talk will present the design and the

  3. Feasibility of High Frequency Acoustic Imaging for Inspection of Containments

    SciTech Connect

    C.N. Corrado; J.E. Bondaryk; V. Godino

    1998-08-01

    The Nuclear Regulatory Commission has a program at the Oak Ridge National Laboratory to provide assistance in their assessment of the effects of potential degradation on the structural integrity and leaktightness of metal containment vessels and steel liners of concrete containment in nuclear power plants. One of the program objectives is to identify a technique(s) for inspection of inaccessible portions of the containment pressure boundary. Acoustic imaging has been identified as one of these potential techniques. A numerical feasibility study investigated the use of high-frequency bistatic acoustic imaging techniques for inspection of inaccessible portions of the metallic pressure boundary of nuclear power plant containment. The range-dependent version of the OASES Code developed at the Massachusetts Institute of Technology was utilized to perform a series of numerical simulations. OASES is a well developed and extensively tested code for evaluation of the acoustic field in a system of stratified fluid and/or elastic layers. Using the code, an arbitrary number of fluid or solid elastic layers are interleaved, with the outer layers modeled as halfspaces. High frequency vibrational sources were modeled to simulate elastic waves in the steel. The received field due to an arbitrary source array can be calculated at arbitrary depth and range positions. In this numerical study, waves that reflect and scatter from surface roughness caused by modeled degradations (e.g., corrosion) are detected and used to identify and map the steel degradation. Variables in the numerical study included frequency, flaw size, interrogation distance, and sensor incident angle. Based on these analytical simulations, it is considered unlikely that acoustic imaging technology can be used to investigate embedded steel liners of reinforced concrete containment. The thin steel liner and high signal losses to the concrete make this application difficult. Results for portions of steel containment

  4. Multi-crack imaging using nonclassical nonlinear acoustic method

    NASA Astrophysics Data System (ADS)

    Zhang, Lue; Zhang, Ying; Liu, Xiao-Zhou; Gong, Xiu-Fen

    2014-10-01

    Solid materials with cracks exhibit the nonclassical nonlinear acoustical behavior. The micro-defects in solid materials can be detected by nonlinear elastic wave spectroscopy (NEWS) method with a time-reversal (TR) mirror. While defects lie in viscoelastic solid material with different distances from one another, the nonlinear and hysteretic stress-strain relation is established with Preisach-Mayergoyz (PM) model in crack zone. Pulse inversion (PI) and TR methods are used in numerical simulation and defect locations can be determined from images obtained by the maximum value. Since false-positive defects might appear and degrade the imaging when the defects are located quite closely, the maximum value imaging with a time window is introduced to analyze how defects affect each other and how the fake one occurs. Furthermore, NEWS-TR-NEWS method is put forward to improve NEWS-TR scheme, with another forward propagation (NEWS) added to the existing phases (NEWS and TR). In the added phase, scanner locations are determined by locations of all defects imaged in previous phases, so that whether an imaged defect is real can be deduced. NEWS-TR-NEWS method is proved to be effective to distinguish real defects from the false-positive ones. Moreover, it is also helpful to detect the crack that is weaker than others during imaging procedure.

  5. An acoustic charge transport imager for high definition television applications

    NASA Technical Reports Server (NTRS)

    Hunt, W. D.; Brennan, Kevin F.

    1994-01-01

    The primary goal of this research is to develop a solid-state high definition television (HDTV) imager chip operating at a frame rate of about 170 frames/sec at 2 Megapixels per frame. This imager offers an order of magnitude improvement in speed over CCD designs and will allow for monolithic imagers operating from the IR to the UV. The technical approach of the project focuses on the development of the three basic components of the imager and their integration. The imager chip can be divided into three distinct components: (1) image capture via an array of avalanche photodiodes (APD's), (2) charge collection, storage and overflow control via a charge transfer transistor device (CTD), and (3) charge readout via an array of acoustic charge transport (ACT) channels. The use of APD's allows for front end gain at low noise and low operating voltages while the ACT readout enables concomitant high speed and high charge transfer efficiency. Currently work is progressing towards the development of manufacturable designs for each of these component devices. In addition to the development of each of the three distinct components, work towards their integration is also progressing. The component designs are considered not only to meet individual specifications but to provide overall system level performance suitable for HDTV operation upon integration. The ultimate manufacturability and reliability of the chip constrains the design as well. The progress made during this period is described in detail in Sections 2-4.

  6. Diagnostic agreement when comparing still and video imaging for the medical evaluation of child sexual abuse.

    PubMed

    Killough, Emily; Spector, Lisa; Moffatt, Mary; Wiebe, Jan; Nielsen-Parker, Monica; Anderst, Jim

    2016-02-01

    Still photo imaging is often used in medical evaluations of child sexual abuse (CSA) but video imaging may be superior. We aimed to compare still images to videos with respect to diagnostic agreement regarding hymenal deep notches and transections in post-pubertal females. Additionally, we evaluated the role of experience and expertise on agreement. We hypothesized that videos would result in improved diagnostic agreement of multiple evaluators as compared to still photos. This was a prospective quasi-experimental study using imaging modality as the quasi-independent variable. The dependent variable was diagnostic agreement of participants regarding presence/absence of findings indicating penetrative trauma on non-acute post-pubertal genital exams. Participants were medical personnel who regularly perform CSA exams. Diagnostic agreement was evaluated utilizing a retrospective selection of videos and still photos obtained directly from the videos. Videos and still photos were embedded into an on-line survey as sixteen cases. One-hundred sixteen participants completed the study. Participant diagnosis was more likely to agree with study center diagnosis when using video (p<0.01). Use of video resulted in statistically significant changes in diagnosis in four of eight cases. In two cases, the diagnosis of the majority of participants changed from no hymenal transection to transection present. No difference in agreement was found based on experience or expertise. Use of video vs. still images resulted in increased agreement with original examiner and changes in diagnostic impressions in review of CSA exams. Further study is warranted, as video imaging may have significant impacts on diagnosis.

  7. Respiratory acoustic thoracic imaging (RATHI): assessing intrasubject variability.

    PubMed

    Torres-Jimenez, A; Charleston-Villalobos, S; Gonzalez-Camarena, R; Chi-Lem, G; Aljama-Corrales, T

    2008-01-01

    Respiratory acoustic thoracic imaging (RATHI) permits analysis of the temporal and spatial distribution of lung sounds (LS); however, a deeper understanding of RATHI repeatability in relation to pulmonary function is needed. Consequently, in the current work the intrasubject variability of RATHI is evaluated at different airflows. To generate RATHIs, LS were acquired at the posterior thoracic surface, and the associated image was computed at the inspiratory phases by interpolation with a Hermite function. The acoustic information of eleven subjects was considered at airflows of 1.0, 1.5 and 2.0 L/s, and several RATHIs were generated for each subject according to the number of acquired inspiratory phases. Quadratic mutual information based on the Cauchy-Schwarz inequality (I(CS)) was used to evaluate the degree of similarity between intrasubject RATHIs. The results indicated that, for the same subject, I(CS) averaged 0.893, 0.897, and 0.902 for airflows of 1.0, 1.5, and 2.0 L/s, respectively. In addition, as airflow increased, increases in intensity and in the dispersion of the spatial distribution of the RATHIs were observed. In conclusion, since the intrasubject variability of RATHI was low for airflows between 1.0 and 2.0 L/s, the pattern of sound distribution under airflow variations is repeatable, although differences in sound intensity should be considered.
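
    To make the similarity idea concrete, the following minimal Python sketch compares two acoustic intensity maps with a Cauchy-Schwarz ratio. It is only an illustrative stand-in for the quadratic mutual information used in the study; the toy RATHI arrays and the normalization choice are assumptions, not the authors' implementation.

      import numpy as np

      def cauchy_schwarz_similarity(img_a, img_b):
          """Cauchy-Schwarz similarity between two non-negative intensity maps.

          Treats each image as an (unnormalized) discrete distribution; the value
          is 1.0 for maps identical up to scale and decreases toward 0 as the
          spatial distributions diverge.
          """
          a = np.asarray(img_a, dtype=float).ravel()
          b = np.asarray(img_b, dtype=float).ravel()
          # Cauchy-Schwarz: <a, b>^2 <= <a, a><b, b>, so the ratio lies in [0, 1].
          return (a @ b) ** 2 / ((a @ a) * (b @ b))

      # Toy example: two RATHI-like 2D maps of inspiratory sound intensity.
      rng = np.random.default_rng(0)
      rathi_1 = rng.random((32, 32))
      rathi_2 = rathi_1 + 0.05 * rng.random((32, 32))   # slightly perturbed repeat
      print(cauchy_schwarz_similarity(rathi_1, rathi_2))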

  8. Imaging of contact acoustic nonlinearity using synthetic aperture technique.

    PubMed

    Yun, Dongseok; Kim, Jongbeom; Jhang, Kyung-Young

    2013-09-01

    The angle beam incidence and reflection technique for evaluating contact acoustic nonlinearity (CAN) at solid-solid contact interfaces (e.g., closed cracks) has recently been developed to overcome the need to access both the inner and outer surfaces of a structure for attaching the transmitting and receiving transducers, as required by the through-transmission, normal-incidence technique. This paper proposes a technique for B-mode imaging of CAN based on the above reflection technique, which uses the synthetic aperture focusing technique (SAFT) and the short-time Fourier transform (STFT) to visualize the distribution of the CAN-induced second harmonic magnitude as well as the nonlinear parameter. To verify the usefulness of the proposed method, a solid-solid contact interface was tested, and the change in contact acoustic nonlinearity with increasing contact pressure was visualized in images of the second harmonic magnitude and the relative nonlinear parameter. The experimental results showed good agreement with the previously developed theory describing the dependence of the scattered second harmonics on the contact pressure. This technique can be used to detect closed cracks, which are difficult to detect using conventional linear ultrasonic techniques, and to improve the accuracy with which they are sized.

  9. Long range acoustic imaging of the continental shelf environment: the Acoustic Clutter Reconnaissance Experiment 2001.

    PubMed

    Ratilal, Purnima; Lai, Yisan; Symonds, Deanelle T; Ruhlmann, Lilimar A; Preston, John R; Scheer, Edward K; Garr, Michael T; Holland, Charles W; Goff, John A; Makris, Nicholas C

    2005-04-01

    An active sonar system is used to image wide areas of the continental shelf environment by long-range echo sounding at low frequency. The bistatic system, deployed in the STRATAFORM area south of Long Island in April-May of 2001, imaged a large number of prominent clutter events over ranges spanning tens of kilometers in near real time. Roughly 3000 waveforms were transmitted into the water column. Wide-area acoustic images of the ocean environment were generated in near real time for each transmission. Between roughly 10 and more than 100 discrete, localized scatterers were registered in each image. This amounts to a total of at least 30000 scattering events that could be confused with those from submerged vehicles over the period of the experiment. Bathymetric relief in the STRATAFORM area is extremely benign, with slopes typically less than 0.5 degrees according to high resolution (30 m sampled) bathymetric data. Most of the clutter occurs in regions where the bathymetry is locally level and does not coregister with seafloor features. No statistically significant difference is found in the frequency of occurrence per unit area of repeatable clutter inside versus outside of areas occupied by subsurface river channels.

  10. Objectification of perceptual image quality for mobile video

    NASA Astrophysics Data System (ADS)

    Lee, Seon-Oh; Sim, Dong-Gyu

    2011-06-01

    This paper presents an objective video quality evaluation method for quantifying the subjective quality of digital mobile video. The proposed method aims to objectify the subjective quality by extracting edgeness and blockiness parameters. To evaluate the performance of the proposed algorithms, we carried out subjective video quality tests with the double-stimulus continuous quality scale method and obtained differential mean opinion score values for 120 mobile video clips. We then compared the performance of the proposed methods with that of existing methods in terms of the differential mean opinion score with 120 mobile video clips. Experimental results showed that the proposed methods were approximately 10% better than the edge peak signal-to-noise ratio of the J.247 method in terms of the Pearson correlation.
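

    As a small worked example of the evaluation step described above, the sketch below computes the Pearson correlation between an objective quality metric and differential mean opinion scores (DMOS). The score arrays are synthetic placeholders, not the paper's 120-clip data set.

      import numpy as np

      def pearson_correlation(x, y):
          """Pearson linear correlation coefficient between two score vectors."""
          x = np.asarray(x, dtype=float)
          y = np.asarray(y, dtype=float)
          xc, yc = x - x.mean(), y - y.mean()
          return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

      # Placeholder data standing in for 120 clips: objective scores vs. DMOS.
      objective_scores = np.array([0.81, 0.64, 0.92, 0.55, 0.73, 0.69])
      dmos             = np.array([18.0, 34.0, 11.0, 46.0, 27.0, 31.0])
      print(pearson_correlation(objective_scores, dmos))   # negative: lower DMOS = better quality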

  11. An acoustic charge transport imager for high definition television applications

    NASA Technical Reports Server (NTRS)

    Hunt, W. D.; Brennan, K. F.; Summers, C. J.

    1994-01-01

    The primary goal of this research is to develop a solid-state high definition television (HDTV) imager chip operating at a frame rate of about 170 frames/sec at 2 Megapixels/frame. This imager will offer an order of magnitude improvement in speed over CCD designs and will allow for monolithic imagers operating from the IR to the UV. The technical approach of the project focuses on the development of the three basic components of the imager and their subsequent integration. The camera chip can be divided into three distinct functions: (1) image capture via an array of avalanche photodiodes (APD's); (2) charge collection, storage, and overflow control via a charge transfer transistor device (CTD); and (3) charge readout via an array of acoustic charge transport (ACT) channels. The use of APD's allows for front end gain at low noise and low operating voltages, while the ACT readout enables concomitant high speed and high charge transfer efficiency. Currently, work is progressing towards the optimization of each of these component devices. In addition to the development of each of the three distinct components, work towards their integration and manufacturability is also progressing. The component designs are considered not only to meet individual specifications but to provide overall system level performance suitable for HDTV operation upon integration. The ultimate manufacturability and reliability of the chip constrains the design as well. The progress made during this period is described in detail.

  12. Field methods to measure surface displacement and strain with the Video Image Correlation method

    NASA Technical Reports Server (NTRS)

    Maddux, Gary A.; Horton, Charles M.; Mcneill, Stephen R.; Lansing, Matthew D.

    1994-01-01

    The objective of this project was to develop methods and application procedures to measure displacement and strain fields during the structural testing of aerospace components using paint speckle in conjunction with the Video Image Correlation (VIC) system.

  13. Do Stereotypic Images in Video Games Affect Attitudes and Behavior? Adolescents' Perspectives.

    PubMed

    Henning, Alexandra; Brenick, Alaina; Killen, Melanie; O'Connor, Alexander; Collins, Michael J

    This study examined adolescents' attitudes about video games along with their self-reported play frequency. Ninth and eleventh grade students (N = 361), approximately evenly divided by grade and gender, were surveyed about whether video games have stereotypic images, involve harmful consequences or affect one's attitudes, whether game playing should be regulated by parents or the government, and whether game playing is a personal choice. Adolescents who played video games frequently showed decreased concern about the effects that games with negatively stereotyped images may have on the players' attitudes compared to adolescents who played games infrequently or not at all. With age, adolescents were more likely to view images as negative, but were also less likely to recognize stereotypic images of females as harmful and more likely to judge video-game playing as a personal choice. The paper discusses other findings in relation to research on adolescents' social cognitive judgments.

  14. Do Stereotypic Images in Video Games Affect Attitudes and Behavior? Adolescents’ Perspectives

    PubMed Central

    Henning, Alexandra; Brenick, Alaina; Killen, Melanie; O’Connor, Alexander; Collins, Michael J.

    2015-01-01

    This study examined adolescents’ attitudes about video games along with their self-reported play frequency. Ninth and eleventh grade students (N = 361), approximately evenly divided by grade and gender, were surveyed about whether video games have stereotypic images, involve harmful consequences or affect one’s attitudes, whether game playing should be regulated by parents or the government, and whether game playing is a personal choice. Adolescents who played video games frequently showed decreased concern about the effects that games with negatively stereotyped images may have on the players’ attitudes compared to adolescents who played games infrequently or not at all. With age, adolescents were more likely to view images as negative, but were also less likely to recognize stereotypic images of females as harmful and more likely to judge video-game playing as a personal choice. The paper discusses other findings in relation to research on adolescents’ social cognitive judgments. PMID:25729336

  15. A torsional eye movement calculation algorithm for low contrast images in video-oculography.

    PubMed

    Jansen, S H; Kingma, H; Peeters, R M; Westra, R L

    2010-01-01

    Video-oculography (VOG) is a frequently used clinical technique to detect eye movements. In this research, small head-mounted video cameras and IR illumination are employed to image the eye. Many algorithms have been developed to extract horizontal and vertical eye movements from the video images. Designing a method to determine torsional eye movements is a more complex task. The use of IR wavelengths required for illumination in certain clinical tests results in very low image contrast. In such images, iris textures are almost invisible, making them unsuited for direct application of the standard matching algorithms used to calculate torsional eye movements. This research presents the design and implementation of a robust torsional eye movement detection algorithm for VOG. This algorithm uses a new approach to measure torsional eye movement and is suitable for low contrast videos. The algorithm is implemented in a clinical device and its performance is compared to that of alternative techniques.

  16. Comparison of sonochemiluminescence images using image analysis techniques and identification of acoustic pressure fields via simulation.

    PubMed

    Tiong, T Joyce; Chandesa, Tissa; Yap, Yeow Hong

    2017-05-01

    One common method to determine the existence of cavitational activity in power ultrasonics systems is to capture images of sonoluminescence (SL) or sonochemiluminescence (SCL) in a dark environment. Conventionally, the light emitted from SL or SCL was detected based on the number of photons. Though this method is effective, it cannot identify the sonochemical zones of an ultrasonic system. SL/SCL images, on the other hand, enable identification of 'active' sonochemical zones. However, these images often provide only qualitative data, as harvesting light intensity data from the images is tedious and requires high resolution images. In this work, we propose a new image analysis technique using pseudo-coloured images to quantify the SCL zones based on the intensities of the SCL images, followed by comparison of the active SCL zones with COMSOL-simulated acoustic pressure zones.
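
    The sketch below illustrates the intensity-zoning step behind such pseudo-colouring: SCL pixel intensities are binned into bands and the area fraction of each band is reported. The band edges and the synthetic frame are assumptions for illustration only, not the authors' processing chain.

      import numpy as np

      def scl_zone_fractions(scl_image, band_edges):
          """Quantize an SCL intensity image into bands and return area fractions.

          band_edges: increasing intensity thresholds; pixels are binned between
          consecutive edges, mimicking a pseudo-colour mapping of 'active' zones.
          """
          img = np.asarray(scl_image, dtype=float)
          labels = np.digitize(img, band_edges)          # 0 .. len(band_edges)
          counts = np.bincount(labels.ravel(), minlength=len(band_edges) + 1)
          return counts / img.size

      # Synthetic SCL frame: a brighter centre stands in for an active cavitation zone.
      y, x = np.mgrid[-1:1:64j, -1:1:64j]
      frame = np.exp(-(x**2 + y**2) * 4)
      print(scl_zone_fractions(frame, band_edges=[0.2, 0.5, 0.8]))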

  17. Quantification of nearshore morphology based on video imaging

    USGS Publications Warehouse

    Alexander, P.S.; Holman, R.A.

    2004-01-01

    The Argus network is a series of video cameras with aerial views of beaches around the world. Intensity contrasts in time exposure images reveal areas of preferential breaking, which are closely tied to underlying bed morphology. This relationship was further investigated, including the effect of tidal elevation and wave height on the presence of wave breaking and its cross-shore position over sand bars. Computerized methods of objectively extracting shoreline and sand bar locations were developed, allowing the vast quantity of data generated by Argus to be more effectively examined. Once features were identified in the images, daily alongshore mean values were taken to create time series of shoreline and sand bar location, which were analyzed for annual cycles and cross-correlated with wave data to investigate environmental forcing and response. These data extraction techniques were applied to images from four of the Argus camera sites. A relationship between wave height and shoreline location was found in which increased wave heights resulted in more landward shoreline positions; given the short lag times over which this correlation was significant, and that the strong annual signal in wave height was not replicated in the shoreline time series, it is likely that this relationship is a result of set-up during periods of large waves. Wave height was also found to have an effect on sand bar location, whereby an increase in wave height resulted in offshore bar migration. This correlation was significant over much longer time lags than the relationship between wave height and shoreline location, and a strong annual signal was found in the location of almost all observed bars, indicating that the sand bars are migrating with changes in wave height. In the case of the site with multiple sand bars, the offshore bars responded more significantly to changes in wave height, whereas the innermost bar seemed to be shielded from incident wave energy by breaking over the other

  18. Video Data Management System Archives and Provides Online Access to NOAA Deep-Sea Corals Digital Video and Image Data

    DTIC Science & Technology

    2008-09-01

    System (OAS) and NCDDC’s MERMAid catalogs, CoRIS and OER Digital Atlas databases. II. VIDEO AND IMAGE DATA MANAGEMENT The primary media currently...protocol. • A crosswalk and converter in the MERMAid system (NCDDC online catalog) enables sharing common metadata in both FGDC and MARC21 metadata...records provide the descriptive information for both NOAALINC and MERMAid metadata discovery tools (Figure 4: FGDC-MARCXML-MARC21 metadata crosswalk).

  19. Acoustic resonances in microfluidic chips: full-image micro-PIV experiments and numerical simulations.

    PubMed

    Hagsäter, S M; Jensen, T Glasdam; Bruus, H; Kutter, J P

    2007-10-01

    We show that full-image micro-PIV analysis in combination with images of transient particle motion is a powerful tool for experimental studies of acoustic radiation forces and acoustic streaming in microfluidic chambers under piezo-actuation in the MHz range. The measured steady-state motion of both large (5 μm) and small (1 μm) particles can be understood in terms of the acoustic eigenmodes, or standing ultrasound waves, in the given experimental microsystems. This interpretation is supported by numerical solutions of the corresponding acoustic wave equation.

  20. Simultaneous compression and encryption of closely resembling images: application to video sequences and polarimetric images.

    PubMed

    Aldossari, M; Alfalou, A; Brosseau, C

    2014-09-22

    This study presents and validates an optimized method of simultaneous compression and encryption designed to process images with close spectra. This approach is well adapted to the compression and encryption of images of a time-varying scene, but also to static polarimetric images. We use the recently developed spectral fusion method [Opt. Lett. 35, 1914-1916 (2010)] to deal with the close resemblance of the images. The spectral plane (containing the information to send and/or to store) is decomposed into several independent areas which are assigned in a specific way. In addition, each spectrum is shifted in order to minimize their overlap. The dual purpose of these operations is to optimize the spectral plane, allowing us to keep the low- and high-frequency information (compression) and to introduce an additional noise for reconstructing the images (encryption). Our results show not only that control of the spectral plane can increase the number of spectra to be merged, but also that a compromise between the compression rate and the quality of the reconstructed images can be tuned. We use a root-mean-square (RMS) optimization criterion to treat compression. Image encryption is realized at different security levels. Firstly, we add a specific encryption level related to the different areas of the spectral plane, and then we make use of several random phase keys. An in-depth analysis of the spectral fusion methodology is carried out in order to find a good trade-off between the compression rate and the quality of the reconstructed images. Our newly proposed spectral shift allows us to minimize the image overlap. We further analyze the influence of the spectral shift on the reconstructed image quality and compression rate. The performance of the multiple-image optical compression and encryption method is verified by analyzing several video sequences and polarimetric images.

  1. A system for the real-time display of radar and video images of targets

    NASA Technical Reports Server (NTRS)

    Allen, W. W.; Burnside, W. D.

    1990-01-01

    Described here is a software and hardware system for the real-time display of radar and video images for use in a measurement range. The main purpose is to give the reader a clear idea of the software and hardware design and its functions. The system is designed around a Tektronix XD88-30 graphics workstation, used to display radar images superimposed on video images of the actual target. The system's purpose is to provide a platform for the analysis and documentation of radar images and their associated targets in a menu-driven, user-oriented environment.

  2. Standing tree decay detection by using acoustic tomography images

    NASA Astrophysics Data System (ADS)

    Espinosa, Luis F.; Arciniegas, Andres F.; Prieto, Flavio A.; Cortes, Yolima; Brancheriau, Loïc.

    2015-04-01

    The acoustic tomographic technique is used in the diagnosis of standing trees. This paper presents a segmentation methodology to separate defective regions in cross-section tomographic images obtained with the Arbotom® device. A set of experiments was proposed using two trunk samples obtained from a eucalyptus tree, simulating defects by drilling holes with known geometry, size and position and using different numbers of sensors. In addition, tomographic images from trees presenting real defects were studied, by testing two different species with significant internal decay. Tomographic images and photographs of the trunk cross-section were processed to align the propagation velocity data with the corresponding region, healthy or defective. The segmentation was performed by finding a velocity threshold value to separate the defective region; a logistic regression model was fitted to obtain the value that maximizes a performance criterion, with the geometric mean selected as that criterion. Segmentation accuracy increased as the number of sensors was augmented; the defect position also influenced the result, with improved results obtained for centric defects.
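
    The sketch below shows one simple way to pick a velocity threshold by maximizing the geometric mean of sensitivity and specificity against labelled ground truth. It sweeps candidate thresholds directly rather than fitting the logistic regression used in the study, and the velocity and decay labels are synthetic.

      import numpy as np

      def best_velocity_threshold(velocities, is_defective):
          """Pick the velocity threshold that maximizes sqrt(sensitivity * specificity).

          velocities   : propagation-velocity values sampled over the cross-section
          is_defective : boolean ground-truth labels (True where the region is decayed)
          """
          v = np.asarray(velocities, dtype=float)
          d = np.asarray(is_defective, dtype=bool)
          best_t, best_g = None, -1.0
          for t in np.unique(v):
              predicted = v < t                      # low velocity -> defective
              sens = np.mean(predicted[d]) if d.any() else 0.0
              spec = np.mean(~predicted[~d]) if (~d).any() else 0.0
              g = np.sqrt(sens * spec)
              if g > best_g:
                  best_t, best_g = t, g
          return best_t, best_g

      # Synthetic example: decayed wood shows lower acoustic velocity.
      vel   = np.array([900, 950, 1020, 1500, 1550, 1600, 1580, 980])
      decay = np.array([True, True, True, False, False, False, False, True])
      print(best_velocity_threshold(vel, decay))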

  3. Performance comparison of motion estimation algorithms on digital video images

    NASA Astrophysics Data System (ADS)

    Ali, N. A.; Ja'Afar, A. S.; Anathakrishnan, K. S.

    2009-12-01

    This paper presents a comparative study of techniques for achieving high compression ratios in video coding. The focus is on Block Matching Motion Estimation (BMME) techniques, which are used in various coding standards. In the BMME, the search pattern and the center-biased characteristics of the motion vector (MV) have a large impact on the search speed and the quality of the video. Three fast Block Matching Algorithms (BMAs) for motion estimation through block matching have been implemented and their performance has been tested using MATLAB software. The Cross Diamond Search (CDS) is compared with the Full Search (FS) and Cross Search (CS) algorithms based on search points (search speed) and peak signal-to-noise ratio (PSNR) as the measure of video quality. The CDS algorithm was designed to fit the cross-center-biased (CCB) MV distribution characteristics of real-world video sequences. CDS compares favorably with the other algorithms for low motion sequences in terms of speed, quality and computational complexity. Keywords: Block-matching, motion estimation, digital video compression, cross-centered biased, cross diamond search.
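
    For context, the sketch below implements the full-search (FS) baseline against which fast searches such as CDS are compared: SAD block matching for one block, plus a PSNR helper. The block size, search range and toy frames are illustrative assumptions, and the code is written in Python rather than the MATLAB used in the study.

      import numpy as np

      def full_search_mv(ref, cur, top, left, block=16, search=7):
          """Full-search SAD block matching for one block of the current frame."""
          target = cur[top:top + block, left:left + block].astype(float)
          best, best_mv = np.inf, (0, 0)
          for dy in range(-search, search + 1):
              for dx in range(-search, search + 1):
                  y, x = top + dy, left + dx
                  if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                      continue
                  sad = np.abs(ref[y:y + block, x:x + block].astype(float) - target).sum()
                  if sad < best:
                      best, best_mv = sad, (dy, dx)
          return best_mv, best

      def psnr(a, b, peak=255.0):
          mse = np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2)
          return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

      # Toy frames: the current frame is the reference rolled by (2, 3) pixels,
      # so the block's source in the reference lies at offset (-2, -3).
      rng = np.random.default_rng(1)
      ref = rng.integers(0, 256, (64, 64)).astype(np.uint8)
      cur = np.roll(ref, shift=(2, 3), axis=(0, 1))
      print(full_search_mv(ref, cur, top=16, left=16))   # -> ((-2, -3), 0.0)
      print(psnr(ref, cur))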

  4. Performance comparison of motion estimation algorithms on digital video images

    NASA Astrophysics Data System (ADS)

    Ali, N. A.; Ja'afar, A. S.; Anathakrishnan, K. S.

    2010-03-01

    This paper presents a comparative study of techniques for achieving high compression ratios in video coding. The focus is on Block Matching Motion Estimation (BMME) techniques, which are used in various coding standards. In the BMME, the search pattern and the center-biased characteristics of the motion vector (MV) have a large impact on the search speed and the quality of the video. Three fast Block Matching Algorithms (BMAs) for motion estimation through block matching have been implemented and their performance has been tested using MATLAB software. The Cross Diamond Search (CDS) is compared with the Full Search (FS) and Cross Search (CS) algorithms based on search points (search speed) and peak signal-to-noise ratio (PSNR) as the measure of video quality. The CDS algorithm was designed to fit the cross-center-biased (CCB) MV distribution characteristics of real-world video sequences. CDS compares favorably with the other algorithms for low motion sequences in terms of speed, quality and computational complexity. Keywords: Block-matching, motion estimation, digital video compression, cross-centered biased, cross diamond search.

  5. Acoustic-integrated dynamic MR imaging for a patient with obstructive sleep apnea.

    PubMed

    Chen, Yunn-Jy; Shih, Tiffany Ting-Fang; Chang, Yi-Chung; Hsu, Ying-Chieh; Huon, Leh-Kiong; Lo, Men-Tzung; Pham, Van-Truong; Lin, Chen; Wang, Pa-Chun

    2015-12-01

    Obstructive sleep apnea syndrome (OSAS) is caused by multi-level upper airway obstruction. Anatomic changes at the sites of obstruction may modify the physical or acoustic properties of snores. The surgical success of OSAS treatment depends upon precise localization of the obstructed levels. We present a case of OSAS in which simultaneous dynamic MRI and snore acoustic recordings were obtained. The synchronized image and acoustic information successfully characterized the sites of temporal obstruction during sleep-disordered breathing events.

  6. The compressed average image intensity metric for stereoscopic video quality assessment

    NASA Astrophysics Data System (ADS)

    Wilczewski, Grzegorz

    2016-09-01

    The following article describes the design, creation and testing of a new metric for 3DTV video quality evaluation. The Compressed Average Image Intensity (CAII) mechanism is based upon stereoscopic video content analysis, with its core feature and functionality set to serve as a versatile tool for effective 3DTV service quality assessment. Being an objective quality metric, it may be utilized as a reliable source of information about the actual performance of a given 3DTV system under strict provider evaluation. Concerning testing and the overall performance analysis of the CAII metric, the paper presents a comprehensive study of results gathered across several testing routines on a selected set of stereoscopic video content samples. As a result, the designed method for stereoscopic video quality evaluation is investigated across a range of synthetic visual impairments injected into the original video stream.

  7. From computer images to video presentation: Enhancing technology transfer

    NASA Technical Reports Server (NTRS)

    Beam, Sherilee F.

    1994-01-01

    With NASA placing increased emphasis on transferring technology to outside industry, NASA researchers need to evaluate many aspects of their efforts in this regard. Often it may seem like too much self-promotion to many researchers. However, industry's use of video presentations in sales, advertising, public relations and training should be considered. Today, the most typical presentation at NASA is through the use of vu-graphs (overhead transparencies) which can be effective for text or static presentations. For full blown color and sound presentations, however, the best method is videotape. In fact, it is frequently more convenient due to its portability and the availability of viewing equipment. This talk describes techniques for creating a video presentation through the use of a combined researcher and video professional team.

  8. Platforms for hyperspectral imaging, in-situ optical and acoustical imaging in urbanized regions

    NASA Astrophysics Data System (ADS)

    Bostater, Charles R.; Oney, Taylor

    2016-10-01

    Hyperspectral measurements of the water surface of urban coastal waters are presented. Oblique bidirectional reflectance factor imagery was acquired in a turbid coastal sub-estuary of the Indian River Lagoon, Florida, and along coastal surf zone waters of the nearby Atlantic Ocean. Imagery was also collected using a pushbroom hyperspectral imager mounted on a fixed platform with a calibrated circular mechatronic rotation stage. Oblique imagery of the shoreline and subsurface features clearly shows subsurface bottom features and rip current features within the surf zone water column. In-situ hyperspectral optical signatures were acquired from a vessel as a function of depth to determine the attenuation spectrum in Palm Bay. A unique stationary platform methodology was used to acquire subsurface acoustic images showing moving bottom boundary nephelometric layers passing through the acoustic fan beam. The acoustic fan beam imagery indicated the presence of oscillatory subsurface waves in the urbanized coastal estuary. Hyperspectral imaging using the fixed platform techniques is being used to collect hyperspectral bidirectional reflectance factor (BRF) measurements from buildings and bridges in order to provide new opportunities to advance our scientific understanding of aquatic environments in urbanized regions.

  9. Subsurface defect of amorphous carbon film imaged by near field acoustic microscopy

    NASA Astrophysics Data System (ADS)

    Zeng, J. T.; Zhao, K. Y.; Zeng, H. R.; Song, H. Z.; Zheng, L. Y.; Li, G. R.; Yin, Q. R.

    2008-05-01

    Amorphous carbon films were examined by low frequency scanning-probe acoustic microscopy (LF-SPAM). Local elastic properties as well as topography were imaged in the acoustic mode. Two kinds of subsurface defects were revealed by the LF-SPAM method, and the influence of the subsurface defects on the elastic properties is discussed. The ability to image subsurface defects depended on the scan area and the scan speed. Our results show that low frequency scanning-probe acoustic microscopy is a useful method for imaging subsurface defects with high resolution.

  10. Video image processor on the Spacelab 2 Solar Optical Universal Polarimeter /SL2 SOUP/

    NASA Technical Reports Server (NTRS)

    Lindgren, R. W.; Tarbell, T. D.

    1981-01-01

    The SOUP instrument is designed to obtain diffraction-limited digital images of the sun with high photometric accuracy. The Video Processor originated from the requirement to provide onboard real-time image processing, both to reduce the telemetry rate and to provide meaningful video displays of scientific data to the payload crew. This original concept has evolved into a versatile digital processing system with a multitude of other uses in the SOUP program. The central element in the Video Processor design is a 16-bit central processing unit based on 2900 family bipolar bit-slice devices. All arithmetic, logical and I/O operations are under control of microprograms, stored in programmable read-only memory and initiated by commands from the LSI-11. Several functions of the Video Processor are described, including interface to the High Rate Multiplexer downlink, cosmetic and scientific data processing, scan conversion for crew displays, focus and exposure testing, and use as ground support equipment.

  11. Correction of spatially varying image and video motion blur using a hybrid camera.

    PubMed

    Tai, Yu-Wing; Du, Hao; Brown, Michael S; Lin, Stephen

    2010-06-01

    We describe a novel approach to reduce spatially varying motion blur in video and images using a hybrid camera system. A hybrid camera is a standard video camera that is coupled with an auxiliary low-resolution camera sharing the same optical path but capturing at a significantly higher frame rate. The auxiliary video is temporally sharper but at a lower resolution, while the lower frame-rate video has higher spatial resolution but is susceptible to motion blur. Our deblurring approach uses the data from these two video streams to reduce spatially varying motion blur in the high-resolution camera with a technique that combines both deconvolution and super-resolution. Our algorithm also incorporates a refinement of the spatially varying blur kernels to further improve results. Our approach can reduce motion blur from the high-resolution video as well as estimate new high-resolution frames at a higher frame rate. Experimental results on a variety of inputs demonstrate notable improvement over current state-of-the-art methods in image/video deblurring.

  12. Thinking Images: Doing Philosophy in Film and Video

    ERIC Educational Resources Information Center

    Parkes, Graham

    2009-01-01

    Over the past several decades film and video have been steadily infiltrating the philosophy curriculum at colleges and universities. Traditionally, teachers of philosophy have not made much use of "audiovisual aids" in the classroom beyond the chalk board or overhead projector, with only the more adventurous playing audiotapes, for example, or…

  13. Three-dimensional photoacoustic imaging system with a 4f aspherical acoustic lens

    NASA Astrophysics Data System (ADS)

    Jen, En; Lin, Hsintien; Chiang, Huihua Kenny

    2016-08-01

    Photoacoustic (PA) imaging is a modality for achieving high-contrast images of blood vessels or tumors. Most PA imaging systems use complex reconstruction algorithms with conventional linear array transducers. We introduced an optical simulation method to improve the acoustic lens design, and a PA imaging system with improved spatial resolution (a 0.5-mm point spread function and a lateral image resolution of more than 1 mm) was realized using a 4f aspherical acoustic lens. The acoustic lens approach improved the image resolution and enabled direct reconstruction of three-dimensional (3-D) PA images. The system demonstrated a lateral resolution of more than 1 mm, a field of view of 8.5 deg, and a depth of focus of 10 mm. The system displays great potential for developing a real-time 3-D PA camera system for biomedical ultrasound imaging applications.

  14. Music video shot segmentation using independent component analysis and keyframe extraction based on image complexity

    NASA Astrophysics Data System (ADS)

    Li, Wei; Chen, Ting; Zhang, Wenjun; Shi, Yunyu; Li, Jun

    2012-04-01

    In recent years, music video data has been increasing at an astonishing rate. Shot segmentation and keyframe extraction are fundamental steps in organizing, indexing and retrieving video content. In this paper, a unified framework is proposed to detect shot boundaries and extract the keyframe of each shot. The music video is first segmented into shots using an illumination-invariant chromaticity histogram in the independent component (IC) analysis feature space. We then present a new metric, image complexity, computed from the ICs, to extract the keyframe of a shot. Experimental results show that the framework is effective and performs well.
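
    The sketch below illustrates only the histogram-comparison step of shot segmentation: a chromaticity histogram is computed per frame and a boundary is declared where the histogram distance between consecutive frames exceeds a threshold. It deliberately omits the ICA feature space and the image-complexity keyframe metric of the paper; the threshold and toy frames are assumptions.

      import numpy as np

      def chromaticity_histogram(frame, bins=16):
          """2D (r, g) chromaticity histogram of an RGB frame, normalized to sum to 1."""
          rgb = frame.astype(float) + 1e-6
          s = rgb.sum(axis=2)
          r, g = rgb[..., 0] / s, rgb[..., 1] / s
          hist, _, _ = np.histogram2d(r.ravel(), g.ravel(), bins=bins, range=[[0, 1], [0, 1]])
          return hist / hist.sum()

      def shot_boundaries(frames, threshold=0.4):
          """Indices where the L1 distance between consecutive histograms exceeds threshold."""
          hists = [chromaticity_histogram(f) for f in frames]
          return [i for i in range(1, len(hists))
                  if np.abs(hists[i] - hists[i - 1]).sum() > threshold]

      # Toy sequence: three reddish frames followed by three bluish frames -> one boundary.
      red  = np.zeros((32, 32, 3), np.uint8); red[..., 0] = 200
      blue = np.zeros((32, 32, 3), np.uint8); blue[..., 2] = 200
      print(shot_boundaries([red, red, red, blue, blue, blue]))   # -> [3]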

  15. Achieving real-time capsule endoscopy (CE) video visualization through panoramic imaging

    NASA Astrophysics Data System (ADS)

    Yi, Steven; Xie, Jean; Mui, Peter; Leighton, Jonathan A.

    2013-02-01

    In this paper, we present a novel real-time capsule endoscopy (CE) video visualization concept based on panoramic imaging. Typical CE videos run about 8 hours and are manually reviewed by physicians to locate diseases such as bleeding and polyps. To date, there is no commercially available tool capable of providing stabilized and processed CE video that is easy to analyze in real time, so the burden on physicians' disease-finding efforts is substantial. In fact, since the CE camera sensor has a limited forward-looking view and a low image frame rate (typically 2 frames per second), and captures very close range imagery of the GI tract surface, it is no surprise that traditional visualization methods based on tracking and registration often fail. This paper presents a novel concept for real-time CE video stabilization and display. Instead of working directly on traditional forward-looking FOV (field of view) images, we work on panoramic images to bypass many problems facing traditional imaging modalities. Methods for panoramic image generation based on optical lens principles, leading to real-time data visualization, are presented. In addition, non-rigid panoramic image registration methods are discussed.

  16. Experimental design and analysis of JND test on coded image/video

    NASA Astrophysics Data System (ADS)

    Lin, Joe Yuchieh; Jin, Lina; Hu, Sudeng; Katsavounidis, Ioannis; Li, Zhi; Aaron, Anne; Kuo, C.-C. Jay

    2015-09-01

    The visual Just-Noticeable-Difference (JND) metric is characterized by the minimum detectable difference between two visual stimuli. Conducting the subjective JND test is a labor-intensive task. In this work, we present a novel interactive method for performing the visual JND test on compressed images/video. JND has been used to enhance perceptual visual quality in the context of image/video compression. Given a set of coding parameters, a JND test is designed to determine the distinguishable quality level against a reference image/video, which is called the anchor. The JND metric can be used to save coding bitrate by exploiting the characteristics of the human visual system. The proposed JND test is conducted using a binary forced choice, which is often adopted to discriminate differences in perception in psychophysical experiments. The assessors are asked to compare coded image/video pairs and determine whether they are of the same quality or not. A bisection procedure is designed to find the JND locations so as to reduce the required number of comparisons over a wide range of bitrates. We demonstrate the efficiency of the proposed JND test and report experimental results on image and video JND tests.
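
    The sketch below shows the bisection idea in its simplest form: given quality levels ordered from worst to best and a forced-choice comparison callback, it locates the lowest level still judged indistinguishable from the anchor in O(log N) comparisons. The simulated oracle and the monotonicity assumption are illustrative, not the paper's exact protocol.

      def find_jnd_level(levels, same_quality, anchor_index):
          """Bisection search for the lowest quality level still indistinguishable
          from the anchor, assuming indistinguishability is monotone in quality.

          levels       : quality levels ordered from worst to best
          same_quality : callable(level, anchor_level) -> True if judged identical
          """
          anchor = levels[anchor_index]
          lo, hi = 0, len(levels) - 1          # the top level is trivially indistinguishable
          while lo < hi:
              mid = (lo + hi) // 2
              if same_quality(levels[mid], anchor):
                  hi = mid                     # still indistinguishable: try lower quality
              else:
                  lo = mid + 1                 # visibly worse: move toward the anchor
          return levels[hi]

      # Simulated assessor: levels below 60 look different from the (best) anchor.
      levels = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
      oracle = lambda level, anchor: level >= 60
      print(find_jnd_level(levels, oracle, anchor_index=len(levels) - 1))   # -> 60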

  17. [Development of a video image system for wireless capsule endoscopes based on DSP].

    PubMed

    Yang, Li; Peng, Chenglin; Wu, Huafeng; Zhao, Dechun; Zhang, Jinhua

    2008-02-01

    A video image recorder was designed to record video pictures from wireless capsule endoscopes. A TMS320C6211 DSP from Texas Instruments Inc. is the core processor of this system. Images are periodically acquired from a Composite Video Broadcast Signal (CVBS) source and scaled by a video decoder (SAA7114H). Video data is transported from a high speed First-in First-out (FIFO) buffer to the Digital Signal Processor (DSP) under the control of a Complex Programmable Logic Device (CPLD). This paper adopts the JPEG algorithm for image coding, and the compressed data in the DSP is stored to a Compact Flash (CF) card. The TMS320C6211 DSP is mainly used for image compression and data transport. A fast Discrete Cosine Transform (DCT) algorithm and a fast coefficient quantization algorithm are used to accelerate the operation speed of the DSP and to reduce the executable code. At the same time, a proper address is assigned to each memory, each of which has a different speed; the memory structure is also optimized. In addition, the system makes extensive use of Extended Direct Memory Access (EDMA) to transport and process image data, which results in stable and high performance.
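
    For readers unfamiliar with the DCT-plus-quantization stage referred to above, the following Python/SciPy sketch round-trips a single 8x8 block JPEG-style. It uses a flat quantization table purely for illustration (real JPEG encoders use a perceptually weighted luminance table scaled by a quality factor) and bears no relation to the optimized fixed-point code running on the DSP.

      import numpy as np
      from scipy.fft import dctn, idctn

      # Illustrative flat quantization table (assumption; not a JPEG standard table).
      QTABLE = np.full((8, 8), 16.0)

      def encode_block(block):
          """2D DCT of one 8x8 block (level-shifted) followed by quantization."""
          coeffs = dctn(block.astype(float) - 128.0, norm='ortho')
          return np.round(coeffs / QTABLE).astype(int)

      def decode_block(qcoeffs):
          """Dequantize and inverse-DCT back to pixel values."""
          block = idctn(qcoeffs * QTABLE, norm='ortho') + 128.0
          return np.clip(np.round(block), 0, 255).astype(np.uint8)

      # Round-trip a random 8x8 block and report the worst-case pixel error.
      rng = np.random.default_rng(2)
      block = rng.integers(0, 256, (8, 8)).astype(np.uint8)
      restored = decode_block(encode_block(block))
      print(np.abs(block.astype(int) - restored.astype(int)).max())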

  18. [Cap image--a new kind of computer-assisted video image analysis system for dynamic capillary microscopy].

    PubMed

    Klyscz, T; Jünger, M; Jung, F; Zeintl, H

    1997-06-01

    We describe a newly developed multi-function video image analysis system for the computer-aided evaluation of capillaroscopic findings in microcirculation research. The Cap image analysis system comprises an IBM-compatible PC with a Matrox image processing card and real-time video tape digitalization. The video recorder is driven by a personal computer to which it is connected via an RS-232 interface. In contrast to currently available systems, the program presented here makes it possible to select any of several integrated image analysis functions, depending on the quality of the video image. Some examples of the analysis functions available are measurements of erythrocyte flow velocity using the line shift diagram method, the spatial correlation method, and the auto flying spot method. The standard features of the new program include a number of special functions and automatic movement correction. The system thus makes it possible not only to measure numerous morphological parameters such as capillary diameter, length, torquation index and capillary density, but also to perform video densitometric analysis, for example using fluorescent dyes.

  19. Characterizing response to elemental unit of acoustic imaging noise: an FMRI study.

    PubMed

    Tamer, Gregory G; Luh, Wen-Ming; Talavage, Thomas M

    2009-07-01

    Acoustic imaging noise produced during functional magnetic resonance imaging (fMRI) studies can hinder auditory fMRI research by altering the properties of the acquired time-series data. Acoustic imaging noise can be especially confounding when estimating the time course of the hemodynamic response (HDR) in auditory event-related fMRI experiments. This study is motivated by the desire to establish a baseline function that can serve not only as a reference against which other quantities of acoustic imaging noise can be compared to determine how detrimental one's experimental noise is, but also as a foundation for a model that compensates for the response to acoustic imaging noise. Therefore, the amplitude and spatial extent of the HDR to the elemental unit of acoustic imaging noise (i.e., a single ping) associated with echoplanar acquisition were characterized and modeled. Results from this fMRI study at 1.5 T indicate that the group-averaged HDR in left and right auditory cortex to acoustic imaging noise (duration of 46 ms) has an estimated peak magnitude of 0.29% (right) to 0.48% (left) signal change from baseline, peaks between 3 and 5 s after stimulus presentation, and returns to baseline, remaining within the noise range, approximately 8 s after stimulus presentation.
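
    For comparison with the reported 3-5 s peak latency, the sketch below evaluates a generic double-gamma HDR model and locates its peak. The shape parameters are the commonly used SPM-style defaults, chosen here as an assumption for illustration; they are not the values estimated in this study.

      import numpy as np
      from scipy.stats import gamma

      def double_gamma_hrf(t, peak_shape=6.0, under_shape=16.0, ratio=1.0 / 6.0):
          """Canonical double-gamma hemodynamic response (SPM-style default shapes)."""
          t = np.asarray(t, dtype=float)
          return gamma.pdf(t, peak_shape) - ratio * gamma.pdf(t, under_shape)

      # Sample the model on a fine grid and locate its peak latency.
      t = np.arange(0.0, 20.0, 0.05)
      hrf = double_gamma_hrf(t)
      print("peak latency (s):", t[np.argmax(hrf)])   # ~5 s with these defaults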

  20. Negative refraction induced acoustic concentrator and the effects of scattering cancellation, imaging, and mirage

    NASA Astrophysics Data System (ADS)

    Wei, Qi; Cheng, Ying; Liu, Xiao-jun

    2012-07-01

    We present a three-dimensional acoustic concentrator capable of significantly enhancing the sound intensity in the compressive region with scattering cancellation, imaging, and mirage effects. The concentrator shell is built by isotropic gradient negative-index materials, which together with an exterior host medium slab constructs a pair of complementary media. The enhancement factor, which can approach infinity by tuning the geometric parameters, is always much higher than that of a traditional concentrator made by positive-index materials with the same size. The acoustic scattering theory is applied to derive the pressure field distribution of the concentrator, which is consistent with the numerical full-wave simulations. The inherent acoustic impedance match at the interfaces of the shell as well as the inverse processes of “negative refraction—progressive curvature—negative refraction” for arbitrary sound rays can exactly cancel the scattering of the concentrator. In addition, the concentrator shell can also function as an acoustic spherical magnifying superlens, which produces a perfect image with the same shape, with bigger geometric and acoustic parameters located at a shifted position. Then some acoustic mirages are observed whereby the waves radiated from (scattered by) an object located in the center region may seem to be radiated from (scattered by) its image. Based on the mirage effect, we further propose an intriguing acoustic transformer which can transform the sound scattering pattern of one object into another object at will with arbitrary geometric, acoustic, and location parameters.

  1. Registering aerial video images using the projective constraint.

    PubMed

    Jackson, Brian P; Goshtasby, A Ardeshir

    2010-03-01

    To separate object motion from camera motion in an aerial video, consecutive frames are registered at their planar background. Feature points are selected in consecutive frames and those that belong to the background are identified using the projective constraint. Corresponding background feature points are then used to register and align the frames. By aligning video frames at the background and knowing that objects move against the background, a means to detect and track moving objects is provided. Only scenes with planar background are considered in this study. Experimental results show improvement in registration accuracy when using the projective constraint to determine the registration parameters as opposed to finding the registration parameters without the projective constraint.
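
    The sketch below illustrates background registration of consecutive frames with a RANSAC-estimated homography using OpenCV. The choice of ORB features and brute-force matching is an assumption made for the example; the paper does not specify this tool chain, and RANSAC here simply stands in for the projective-constraint selection of background points.

      import cv2
      import numpy as np

      def register_to_background(prev_gray, curr_gray):
          """Estimate the homography aligning curr_gray onto prev_gray's background."""
          orb = cv2.ORB_create(1000)
          kp1, des1 = orb.detectAndCompute(prev_gray, None)
          kp2, des2 = orb.detectAndCompute(curr_gray, None)
          matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
          src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
          dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
          # RANSAC keeps the dominant (planar background) motion; feature points on
          # moving objects become outliers, which is what makes them detectable later.
          H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
          aligned = cv2.warpPerspective(curr_gray, H,
                                        (prev_gray.shape[1], prev_gray.shape[0]))
          return H, aligned

      # Usage (file names are placeholders):
      # prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
      # curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
      # H, aligned = register_to_background(prev, curr)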

  2. Acoustic angiography: a new high frequency contrast ultrasound technique for biomedical imaging

    NASA Astrophysics Data System (ADS)

    Shelton, Sarah E.; Lindsey, Brooks D.; Gessner, Ryan; Lee, Yueh; Aylward, Stephen; Lee, Hyunggyun; Cherin, Emmanuel; Foster, F. Stuart; Dayton, Paul A.

    2016-05-01

    Acoustic Angiography is a new approach to high-resolution contrast-enhanced ultrasound imaging enabled by ultra-broadband transducer designs. The high frequency imaging technique provides both high resolution and signal separation from tissue, which does not produce significant harmonics in the same frequency range. This approach enables imaging of microvasculature in vivo with high resolution and signal-to-noise ratio, producing images that resemble x-ray angiography. Data show that acoustic angiography can provide important information about the presence of disease based on vascular patterns, and may enable a new paradigm in medical imaging.

  3. Computer Vision Tools for Finding Images and Video Sequences.

    ERIC Educational Resources Information Center

    Forsyth, D. A.

    1999-01-01

    Computer vision offers a variety of techniques for searching for pictures in large collections of images. Appearance methods compare images based on the overall content of the image using certain criteria. Finding methods concentrate on matching subparts of images, defined in a variety of ways, in hope of finding particular objects. These ideas…

  4. Negative refraction imaging of acoustic metamaterial lens in the supersonic range

    SciTech Connect

    Han, Jianning; Wen, Tingdun; Yang, Peng; Zhang, Lu

    2014-05-15

    Acoustic metamaterials with a negative refractive index are the most promising route to overcoming the diffraction limit of acoustic imaging and achieving ultrahigh resolution. In this paper, we use a locally resonant phononic crystal as the unit cell to construct an acoustic negative-refraction lens. Based on the vibration model of the phononic crystal, negative parameters of the lens are obtained when it is excited near the system resonance frequency. Simulation results show that negative refraction by the acoustic lens can be achieved when a sound wave transmits through the phononic crystal plate. The patterns of the imaging field agree well with those of the incident wave, while the dispersion is very weak. The unit cell size in the simulation is 0.0005 m and the wavelength of the sound source is 0.02 m, showing that acoustic signals can be manipulated by structures with dimensions much smaller than the wavelength of the incident wave.

  5. Change Detection in Uav Video Mosaics Combining a Feature Based Approach and Extended Image Differencing

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Krüger, Wolfgang

    2016-06-01

    Change detection is an important task when using unmanned aerial vehicles (UAV) for video surveillance. We address changes on short time scales, using observations separated by a few hours. Each observation (previous and current) is a short video sequence acquired by a UAV in a near-nadir view. Relevant changes are, e.g., recently parked or moved vehicles. Examples of non-relevant changes are parallaxes caused by 3D structures in the scene, shadow and illumination changes, and compression or transmission artifacts. In this paper we present (1) a new feature-based approach to change detection, (2) a combination with extended image differencing (Saur et al., 2014), and (3) the application to video sequences using temporal filtering. In the feature-based approach, information about local image features, e.g., corners, is extracted in both images. The label "new object" is generated at image points where features occur in the current image and no or weaker features are present in the previous image. The label "vanished object" corresponds to missing or weaker features in the current image and present features in the previous image. This leads to two "directed" change masks and differs from image differencing, where only one "undirected" change mask is extracted, combining both label types into the single label "changed object". The combination of both algorithms is performed by merging the change masks of both approaches. A color mask showing the different contributions is used for visual inspection by a human image interpreter.
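
    A much simplified sketch of the merging idea follows: directed "new"/"vanished" labels derived from a feature-strength map are kept only where an undirected differencing mask also flags change. It assumes the two frames are already co-registered, and the Harris response and thresholds are illustrative choices, not the authors' detector or fusion rule.

      import cv2
      import numpy as np

      def directed_change_masks(prev_gray, curr_gray, corner_thresh=0.01, diff_thresh=30):
          """Combine feature-based 'new'/'vanished' labels with image differencing.

          Assumes prev_gray and curr_gray are already registered to each other.
          """
          # Harris corner response as a simple local-feature strength map.
          r_prev = cv2.cornerHarris(np.float32(prev_gray), 2, 3, 0.04)
          r_curr = cv2.cornerHarris(np.float32(curr_gray), 2, 3, 0.04)
          t_prev = corner_thresh * r_prev.max()
          t_curr = corner_thresh * r_curr.max()

          new_object      = (r_curr > t_curr) & (r_prev <= t_prev)   # feature appears
          vanished_object = (r_prev > t_prev) & (r_curr <= t_curr)   # feature disappears

          # Undirected mask from plain image differencing.
          changed = cv2.absdiff(prev_gray, curr_gray) > diff_thresh

          # Merge: keep directed labels only where differencing also flags change.
          return new_object & changed, vanished_object & changed

      # Usage (frames are placeholders and must be pre-registered grayscale images):
      # new_mask, gone_mask = directed_change_masks(prev_frame, curr_frame)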

  6. VQone MATLAB toolbox: A graphical experiment builder for image and video quality evaluations.

    PubMed

    Nuutinen, Mikko; Virtanen, Toni; Rummukainen, Olli; Häkkinen, Jukka

    2016-03-01

    This article presents VQone, a graphical experiment builder, written as a MATLAB toolbox, developed for image and video quality ratings. VQone contains the main elements needed for the subjective image and video quality rating process. This includes building and conducting experiments and data analysis. All functions can be controlled through graphical user interfaces. The experiment builder includes many standardized image and video quality rating methods. Moreover, it enables the creation of new methods or modified versions from standard methods. VQone is distributed free of charge under the terms of the GNU general public license and allows code modifications to be made so that the program's functions can be adjusted according to a user's requirements. VQone is available for download from the project page (http://www.helsinki.fi/psychology/groups/visualcognition/).

  7. Phase-sensitive imaging of tissue acoustic vibrations using spectrally encoded interferometry.

    PubMed

    Ilgayev, Ovadia; Yelin, Dvir

    2013-08-26

    Acoustic vibrations in tissue are often difficult to image, requiring high-speed scanning, high sensitivity and nanometer-scale axial resolution. Here we use spectrally encoded interferometry to measure the vibration pattern of two-dimensional surfaces, including the skin of a volunteer, at nanometric resolution, without the need for rapid lateral scanning and with no prior knowledge of the driving acoustic waveform. Our results demonstrate the feasibility of this technique for measuring tissue biomechanics using simple and compact imaging probes.

  8. Tracking Energy Flow Using a Volumetric Acoustic Intensity Imager (VAIM)

    NASA Technical Reports Server (NTRS)

    Klos, Jacob; Williams, Earl G.; Valdivia, Nicolas P.

    2006-01-01

    A new measurement device has been invented at the Naval Research Laboratory which instantaneously images the intensity vector throughout a three-dimensional volume nearly a meter on a side. The measurement device consists of a nearly transparent spherical array of 50 inexpensive microphones optimally positioned on an imaginary spherical surface of radius 0.2 m. Front-end signal processing uses coherence analysis to produce multiple phase-coherent holograms in the frequency domain, each related to references located on suspect sound sources in an aircraft cabin. The analysis uses either SVD or Cholesky decomposition methods applied to ensemble averages of the cross-spectral density with the fixed references. The holograms are mathematically processed using spherical NAH (nearfield acoustical holography) to convert the measured pressure field into a vector intensity field in the volume of maximum radius 0.4 m centered on the sphere origin. The utility of this probe is evaluated in a detailed analysis of a recent in-flight experiment conducted in cooperation with Boeing and NASA on NASA's Aries 757 aircraft. In this experiment the trim panels and insulation were removed over a section of the aircraft, and the bare panels and windows were instrumented with accelerometers to use as references for the VAIM. Results show excellent success at locating and identifying the sources of interior noise in flight in the frequency range of 0 to 1400 Hz. This work was supported by NASA and the Office of Naval Research.

  9. ACOUSTICAL IMAGING AND MECHANICAL PROPERTIES OF SOFT ROCK AND MARINE SEDIMENTS

    SciTech Connect

    Thurman E. Scott, Jr.; Younane Abousleiman

    2004-04-01

    The research during this project has concentrated on developing a correlation between rock deformation mechanisms and their acoustic velocity signature. This has included investigating: (1) the acoustic signature of drained and undrained unconsolidated sands, (2) the acoustic emission signature of deforming high porosity rocks (in comparison to their low porosity, high strength counterparts), (3) the effects of deformation on anisotropic elastic and poroelastic moduli, and (4) the acoustic tomographic imaging of damage development in rocks. Each of these four areas involves triaxial experimental testing of weak porous rocks or unconsolidated sand and involves measuring acoustic properties. The research is directed at determining the seismic velocity signature of damaged rocks so that 3-D or 4-D seismic imaging can be utilized to image rock damage. These four areas of study are described in the report: (1) Triaxial compression experiments have been conducted on unconsolidated Oil Creek sand at high confining pressures. (2) Initial experiments measuring the acoustic emission activity from deforming high porosity Danian chalk were accomplished, and these indicate that the AE activity was of very low amplitude. (3) A series of triaxial compression experiments was conducted to investigate the effects of induced stress on the anisotropy developed in dynamic elastic and poroelastic parameters in rocks. (4) Tomographic acoustic imaging was utilized to image the internal damage in a deforming porous limestone sample. Results indicate that the deformation damage induced in rocks during laboratory experimentation can be imaged tomographically in the laboratory. By extension, the results also indicate that 4-D seismic imaging of a reservoir may become a powerful tool for imaging reservoir deformation (including compaction and subsidence) and for imaging zones where drilling operations may encounter hazardous shallow water flows.

  10. Temporal pattern of acoustic imaging noise asymmetrically modulates activation in the auditory cortex.

    PubMed

    Ranaweera, Ruwan D; Kwon, Minseok; Hu, Shuowen; Tamer, Gregory G; Luh, Wen-Ming; Talavage, Thomas M

    2016-01-01

    This study investigated the hemisphere-specific effects of the temporal pattern of imaging related acoustic noise on auditory cortex activation. Hemodynamic responses (HDRs) to five temporal patterns of imaging noise corresponding to noise generated by unique combinations of imaging volume and effective repetition time (TR), were obtained using a stroboscopic event-related paradigm with extra-long (≥27.5 s) TR to minimize inter-acquisition effects. In addition to confirmation that fMRI responses in auditory cortex do not behave in a linear manner, temporal patterns of imaging noise were found to modulate both the shape and spatial extent of hemodynamic responses, with classically non-auditory areas exhibiting responses to longer duration noise conditions. Hemispheric analysis revealed the right primary auditory cortex to be more sensitive than the left to the presence of imaging related acoustic noise. Right primary auditory cortex responses were significantly larger during all the conditions. This asymmetry of response to imaging related acoustic noise could lead to different baseline activation levels during acquisition schemes using short TR, inducing an observed asymmetry in the responses to an intended acoustic stimulus through limitations of dynamic range, rather than due to differences in neuronal processing of the stimulus. These results emphasize the importance of accounting for the temporal pattern of the acoustic noise when comparing findings across different fMRI studies, especially those involving acoustic stimulation.

  11. Temporal pattern of acoustic imaging noise asymmetrically modulates activation in the auditory cortex

    PubMed Central

    Ranaweera, Ruwan D.; Kwon, Minseok; Hu, Shuowen; Tamer, Gregory G.; Luh, Wen-Ming; Talavage, Thomas M.

    2015-01-01

    This study investigated the hemisphere-specific effects of the temporal pattern of imaging related acoustic noise on auditory cortex activation. Hemodynamic responses (HDRs) to five temporal patterns of imaging noise corresponding to noise generated by unique combinations of imaging volume and effective repetition time (TR), were obtained using a stroboscopic event-related paradigm with extra-long (≥27.5s) TR to minimize inter-acquisition effects. In addition to confirmation that fMRI responses in auditory cortex do not behave in a linear manner, temporal patterns of imaging noise were found to modulate both the shape and spatial extent of hemodynamic responses, with classically non-auditory areas exhibiting responses to longer duration noise conditions. Hemispheric analysis revealed the right primary auditory cortex to be more sensitive than the left to the presence of imaging related acoustic noise. Right primary auditory cortex responses were significantly larger during all the conditions. This asymmetry of response to imaging related acoustic noise could lead to different baseline activation levels during acquisition schemes using short TR, inducing an observed asymmetry in the responses to an intended acoustic stimulus through limitations of dynamic range, rather than due to differences in neuronal processing of the stimulus. These results emphasize the importance of accounting for the temporal pattern of the acoustic noise when comparing findings across different fMRI studies, especially those involving acoustic stimulation. PMID:26519093

  12. An on-line video image processing system for real-time neutron radiography

    NASA Astrophysics Data System (ADS)

    Fujine, Shigenori; Yoneda, Kenji; Kanda, Keiji

    1983-09-01

    The neutron radiography system installed at the E-2 experimental hole of the KUR (Kyoto University Reactor) has been used for some NDT applications in the nuclear field. The on-line video image processing system of this facility is introduced in this paper. A 0.5 mm resolution in images was obtained by using a super high quality TV camera developed for X-radiography viewing an NE-426 neutron-sensitive scintillator. The image of the NE-426 on a CRT can be observed directly and visually; thus, many test samples can be sequentially observed when necessary for industrial purposes. The video image signals from the TV camera are digitized, with a 33 ms delay, through a video A/D converter (ADC) and can be stored in the image buffer (32 KB DRAM) of a microcomputer (Z-80) system. The digitized pictures are taken with 16 levels of gray scale and resolved to 240×256 picture elements (pixels) on a monochrome CRT, with the capability also to display 16 distinct colors on an RGB video display. The direct image of this system could be satisfactory for penetrating the side plates to test MTR-type reactor fuels and for the investigation of moving objects.
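
    As a rough illustration of the digitization step described above (16 grey levels over a 240×256 pixel grid), a short NumPy sketch is given below; the resampling strategy and the input frame size are assumptions for illustration, not details from the paper.

        import numpy as np

        def quantize_frame(frame_8bit: np.ndarray) -> np.ndarray:
            """Illustrative reduction of an 8-bit video frame to the 16 grey
            levels and 240x256 pixel grid described for the KUR system."""
            # Resample to the stated pixel count (nearest-neighbour for brevity).
            rows = np.linspace(0, frame_8bit.shape[0] - 1, 240).astype(int)
            cols = np.linspace(0, frame_8bit.shape[1] - 1, 256).astype(int)
            small = frame_8bit[np.ix_(rows, cols)]
            # Map 0..255 onto 16 grey levels (0..15).
            return (small.astype(np.uint16) * 16 // 256).astype(np.uint8)

        # Example: a synthetic 480x512 frame standing in for the TV camera output.
        frame = np.random.randint(0, 256, (480, 512), dtype=np.uint8)
        quantized = quantize_frame(frame)
        print(quantized.shape, quantized.max())   # (240, 256), <= 15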

  13. A professional and cost effective digital video editing and image storage system for the operating room.

    PubMed

    Scollato, A; Perrini, P; Benedetto, N; Di Lorenzo, N

    2007-06-01

    We propose an easy-to-construct digital video editing system ideal for producing video documentation and still images. A digital video editing system applicable to many video sources in the operating room is described in detail. The proposed system has proved easy to use and permits one to obtain videography quickly and easily. By mixing different streams of video input from all the devices in use in the operating room and applying filters and effects, a final, professional end-product is produced. Recording on a DVD provides an inexpensive, portable and easy-to-use medium to store, re-edit, or tape the material at a later time. From stored videography it is easy to extract high-quality still images useful for teaching, presentations and publications. In conclusion, digital videography and still photography can easily be recorded by the proposed system, producing high-quality video recording. The use of firewire ports provides good compatibility with next-generation hardware and software. The high standard of quality makes the proposed system one of the lowest priced products available today.

  14. Acoustic and optical borehole-wall imaging for fractured-rock aquifer studies

    USGS Publications Warehouse

    Williams, J.H.; Johnson, C.D.

    2004-01-01

    Imaging with acoustic and optical televiewers results in continuous and oriented 360° views of the borehole wall from which the character, relation, and orientation of lithologic and structural planar features can be defined for studies of fractured-rock aquifers. Fractures are more clearly defined under a wider range of conditions on acoustic images than on optical images, including conditions such as dark-colored rocks, cloudy borehole water, and coated borehole walls. However, optical images allow for the direct viewing of the character of and relation between lithology, fractures, foliation, and bedding. The most powerful approach is the combined application of acoustic and optical imaging with integrated interpretation. Imaging of the borehole wall provides information useful for the collection and interpretation of flowmeter and other geophysical logs, core samples, and hydraulic and water-quality data from packer testing and monitoring. © 2003 Elsevier B.V. All rights reserved.

  15. The path to COVIS: A review of acoustic imaging of hydrothermal flow regimes

    NASA Astrophysics Data System (ADS)

    Bemis, Karen G.; Silver, Deborah; Xu, Guangyu; Light, Russ; Jackson, Darrell; Jones, Christopher; Ozer, Sedat; Liu, Li

    2015-11-01

    Acoustic imaging of hydrothermal flow regimes started with the incidental recognition of a plume on a routine sonar scan for obstacles in the path of the human-occupied submersible ALVIN. Developments in sonar engineering, acoustic data processing and scientific visualization have been combined to develop technology which can effectively capture the behavior of focused and diffuse hydrothermal discharge. This paper traces the development of these acoustic imaging techniques for hydrothermal flow regimes from their conception through to the development of the Cabled Observatory Vent Imaging Sonar (COVIS). COVIS has monitored such flow eight times a day for several years. Successful acoustic techniques for estimating plume entrainment, bending, vertical rise, volume flux, and heat flux are presented as is the state-of-the-art in diffuse flow detection.

  16. Sub-component modeling for face image reconstruction in video communications

    NASA Astrophysics Data System (ADS)

    Shiell, Derek J.; Xiao, Jing; Katsaggelos, Aggelos K.

    2008-08-01

    Emerging communications trends point to streaming video as a new form of content delivery. These systems are implemented over wired systems, such as cable or ethernet, and wireless networks, cell phones, and portable game systems. These communications systems require sophisticated methods of compression and error-resilience encoding to enable communications across band-limited and noisy delivery channels. Additionally, the transmitted video data must be of high enough quality to ensure a satisfactory end-user experience. Traditionally, video compression makes use of temporal and spatial coherence to reduce the information required to represent an image. In many communications systems, the communications channel is characterized by a probabilistic model which describes the capacity or fidelity of the channel. The implication is that information is lost or distorted in the channel, and requires concealment on the receiving end. We demonstrate a generative model based transmission scheme to compress human face images in video, which has the advantages of a potentially higher compression ratio, while maintaining robustness to errors and data corruption. This is accomplished by training an offline face model and using the model to reconstruct face images on the receiving end. We propose a sub-component AAM modeling the appearance of sub-facial components individually, and show face reconstruction results under different types of video degradation using a weighted and non-weighted version of the sub-component AAM.

  17. Acoustics

    NASA Technical Reports Server (NTRS)

    Goodman, Jerry R.; Grosveld, Ferdinand

    2007-01-01

    The acoustics environment in space operations is important to maintain at manageable levels so that the crewperson can remain safe, functional, effective, and reasonably comfortable. High acoustic levels can produce temporary or permanent hearing loss, or cause other physiological symptoms such as auditory pain, headaches, discomfort, strain in the vocal cords, or fatigue. Noise is defined as undesirable sound. Excessive noise may result in psychological effects such as irritability, inability to concentrate, decrease in productivity, annoyance, errors in judgment, and distraction. A noisy environment can also result in the inability to sleep, or sleep well. Elevated noise levels can affect the ability to communicate, understand what is being said, hear what is going on in the environment, degrade crew performance and operations, and create habitability concerns. Superfluous noise emissions can also create the inability to hear alarms or other important auditory cues such as an equipment malfunctioning. Recent space flight experience, evaluations of the requirements in crew habitable areas, and lessons learned (Goodman 2003; Allen and Goodman 2003; Pilkinton 2003; Grosveld et al. 2003) show the importance of maintaining an acceptable acoustics environment. This is best accomplished by having a high-quality set of limits/requirements early in the program, the "designing in" of acoustics in the development of hardware and systems, and by monitoring, testing and verifying the levels to ensure that they are acceptable.

  18. Opto-acoustic image fusion technology for diagnostic breast imaging in a feasibility study

    NASA Astrophysics Data System (ADS)

    Zalev, Jason; Clingman, Bryan; Herzog, Don; Miller, Tom; Ulissey, Michael; Stavros, A. T.; Oraevsky, Alexander; Lavin, Philip; Kist, Kenneth; Dornbluth, N. C.; Otto, Pamela

    2015-03-01

    Functional opto-acoustic (OA) imaging was fused with gray-scale ultrasound acquired using a specialized duplex handheld probe. Feasibility Study findings indicated the potential to more accurately characterize breast masses for cancer than conventional diagnostic ultrasound (CDU). The Feasibility Study included OA imagery of 74 breast masses that were collected using the investigational Imagio® breast imaging system. Superior specificity and equal sensitivity to CDU were demonstrated, suggesting that OA fusion imaging may potentially obviate the need for negative biopsies without missing cancers in a certain percentage of breast masses. Preliminary results from a 100-subject Pilot Study are also discussed. A larger Pivotal Study (n=2,097 subjects) is underway to confirm the Feasibility Study and Pilot Study findings.

  19. Collaborative real-time motion video analysis by human observer and image exploitation algorithms

    NASA Astrophysics Data System (ADS)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2015-05-01

    Motion video analysis is a challenging task, especially in real-time applications. In most safety and security critical applications, a human observer is an obligatory part of the overall analysis system. Over the last years, substantial progress has been made in the development of automated image exploitation algorithms. Hence, we investigate how the benefits of automated video analysis can be integrated suitably into the current video exploitation systems. In this paper, a system design is introduced which strives to combine both the qualities of the human observer's perception and the automated algorithms, thus aiming to improve the overall performance of a real-time video analysis system. The system design builds on prior work where we showed the benefits for the human observer by means of a user interface which utilizes the human visual focus of attention revealed by the eye gaze direction for interaction with the image exploitation system; eye tracker-based interaction allows much faster, more convenient, and equally precise moving target acquisition in video images than traditional computer mouse selection. The system design also builds on prior work we did on automated target detection, segmentation, and tracking algorithms. Beside the system design, a first pilot study is presented, where we investigated how the participants (all non-experts in video analysis) performed in initializing an object tracking subsystem by selecting a target for tracking. Preliminary results show that the gaze + key press technique is an effective, efficient, and easy to use interaction technique when performing selection operations on moving targets in videos in order to initialize an object tracking function.

  20. Biosonar acoustic images for target localization and classification by bats

    NASA Astrophysics Data System (ADS)

    Simmons, James A.

    1997-07-01

    Echolocating bats use sonar to guide interception of insects, recognize objects by shape, and even track prey in clutter. Broadcasts of the big brown bat are 0.5 to 20 ms FM signals in the 20-100 kHz ultrasonic band. Insects consist of several reflecting glints, each equivalent in cross-section to a small sphere of 2 mm to 2 cm radius, while clutter is typically composed of numerous glints distributed over a large volume. The bats' signals extend in space for many target lengths, while ka values for each glint are 0.5 to 30 across the broadcast band. Bats perceive acoustic images having echo delay as their primary dimension, and space is perceived in terms of the distribution of target glints in range. Range disparities between the ears provide two 'looks' at each target from slightly different locations as well as information about azimuth. The bat's auditory system encodes the FM sweeps of broadcasts and echoes as linear-period spectrograms with integration times of 300-400 μs. Bats nevertheless perceive individual glints in targets for echo-delay separations well inside the integration-time window. Deconvolution is achieved by spectrogram correlation in the time domain and spectral shape transformation in the frequency domain, with all output evidently being displayed in the time domain. Neural responses in the bat's auditory system seem limited in time precision to 20-50 μs at best and 300 μs to 3 ms in a broader sample, and stimulus phase is thought to be lost for frequencies above 1-3 kHz. Yet bats perceive echo delay with an accuracy of 10-15 ns and have two-echo resolution of about 2 μs. Moreover, bats perceive echo phase-shifts as the correctly corresponding shifts in echo delay. Successive images are subtracted to enhance perception of shape from multiple 'looks', and echo phase is an integral part of this critical process. Utterly novel time-scale magnification appears in the bat's neural responses to
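
    To put the quoted delay figures in spatial terms, the short sketch below converts two-way echo delay to one-way range using a nominal speed of sound in air of 343 m/s; the values are rough illustrations, not measurements from the paper.

        C_AIR = 343.0  # speed of sound in air, m/s (nominal)

        def delay_to_range(delay_s: float) -> float:
            """Two-way echo delay -> one-way target range."""
            return C_AIR * delay_s / 2.0

        # 10-15 ns delay accuracy corresponds to micrometre-scale range accuracy,
        # and 2 us two-echo resolution to glints roughly 0.3 mm apart.
        print(delay_to_range(10e-9) * 1e6, "um")   # ~1.7 um
        print(delay_to_range(2e-6) * 1e3, "mm")    # ~0.34 mm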

  1. High-speed video imaging and digital analysis of microscopic features in contracting striated muscle cells

    NASA Astrophysics Data System (ADS)

    Roos, Kenneth P.; Taylor, Stuart R.

    1993-02-01

    The rapid motion of microscopic features such as the cross striations of single contracting muscle cells is difficult to capture with conventional optical microscopes, video systems, and image processing approaches. An integrated digital video imaging microscope system specifically designed to capture images from single contracting muscle cells at speeds of up to 240 Hz and to analyze images to extract features critical for the understanding of muscle contraction is described. This system consists of a brightfield microscope with immersion optics coupled to a high-speed charge-coupled device (CCD) video camera, super-VHS (S-VHS) and optical media disk video recording (OMDR) systems, and a semiautomated digital image analysis system. Components are modified to optimize spatial and temporal resolution to permit the evaluation of submicrometer features in real physiological time. This approach permits the critical evaluation of the magnitude, time course, and uniformity of contractile function throughout the volume of a single living cell with higher temporal and spatial resolutions than previously possible.

  2. Image enhancement using a range gated MCPII video system with a 180-ps FWHM shutter

    SciTech Connect

    Thomas, M.C.; Yates, G.J.; Zadgarino, P.

    1995-09-01

    The video image of a target submerged in a scattering medium was improved through the use of range gating techniques. The target, an Air Force resolution chart, was submerged in 18 in. of a colloidal suspension of tincture green soap in water. The target was illuminated with pulsed light from a Raman-shifted, frequency-doubled Nd:YAG laser having a wavelength of 559 nm and a width of 20 ps FWHM. The laser light reflected by the target, along with the light scattered by the soap, was imaged onto a microchannel-plate image intensifier (MCPII). The output from the MCPII was then recorded with an RS-170 video camera and a video digitizer. The MCPII was gated on with a pulse synchronously timed to the laser pulse. The relative timing between the reflected laser pulse and the shuttering of the MCPII determined the distance to the imaged region. The resolution of the image was influenced by the MCPII's shutter time. A comparison was made between the resolution of images obtained with 6 ns, 500 ps and 180 ps FWHM (8 ns, 750 ps and 250 ps off-to-off) shutter times. It was found that the image resolution was enhanced by using the faster shutter since the longer exposures allowed light scattered by the water to be recorded too. The presence of scattered light in the image increased the noise, thereby reducing the contrast and the resolution.
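
    The range-gating principle above hinges on matching the MCPII gate delay to the round-trip light travel time; the sketch below estimates that delay for the 18 in. water path, assuming a refractive index of about 1.33 (a nominal value, not a figure from the paper).

        C_VACUUM = 2.998e8        # speed of light in vacuum, m/s
        N_WATER = 1.33            # refractive index of water (approximate)

        def round_trip_delay(depth_m: float) -> float:
            """Two-way light travel time to a target at `depth_m` in water."""
            return 2.0 * depth_m * N_WATER / C_VACUUM

        depth = 18 * 0.0254       # 18 inches of suspension, as in the experiment
        print(f"{round_trip_delay(depth) * 1e12:.0f} ps round trip")  # ~4060 ps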

  3. Enhancing thermal video using a public database of images

    NASA Astrophysics Data System (ADS)

    Qadir, Hemin; Kozaitis, S. P.; Ali, Ehsan

    2014-05-01

    We present a system to display nighttime imagery with natural colors using a public database of images. We initially combined two spectral bands of images, thermal and visible, to enhance night vision imagery; however, the fused image gave an unnatural color appearance. Therefore, a color transfer based on a look-up table (LUT) was used to replace the false color appearance with a colormap derived from a daytime reference image obtained from a public database using the GPS coordinates of the vehicle. Because of the computational demand in deriving the colormap from the reference image, we created an additional local database of colormaps. Reference images from the public database were compared to a compact local database to retrieve one of a limited number of colormaps that represented several driving environments. Each colormap in the local database was stored with an image from which it was derived. To retrieve a colormap, we compared the histogram of the fused image with histograms of images in the local database. The colormap of the best match was then used for the fused image. Continuously selecting and applying colormaps using this approach offered a convenient way to color night vision imagery.
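
    A minimal sketch of the histogram-matching step with OpenCV is shown below; the database layout (a list of reference image/colormap pairs), the bin count, and the correlation metric are assumptions for illustration rather than the authors' implementation.

        import cv2
        import numpy as np

        def best_colormap(fused_gray: np.ndarray, local_db: list):
            """Pick the colormap whose stored reference image best matches the
            histogram of the fused frame. `local_db` is a hypothetical list of
            (reference_image, colormap) pairs."""
            h_query = cv2.calcHist([fused_gray], [0], None, [64], [0, 256])
            h_query = cv2.normalize(h_query, h_query).flatten()
            best, best_score = None, -1.0
            for ref_img, cmap in local_db:
                h_ref = cv2.calcHist([ref_img], [0], None, [64], [0, 256])
                h_ref = cv2.normalize(h_ref, h_ref).flatten()
                score = cv2.compareHist(h_query, h_ref, cv2.HISTCMP_CORREL)
                if score > best_score:
                    best, best_score = cmap, score
            return best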

  4. Fusion of intraoperative cone-beam CT and endoscopic video for image-guided procedures

    NASA Astrophysics Data System (ADS)

    Daly, M. J.; Chan, H.; Prisman, E.; Vescan, A.; Nithiananthan, S.; Qiu, J.; Weersink, R.; Irish, J. C.; Siewerdsen, J. H.

    2010-02-01

    Methods for accurate registration and fusion of intraoperative cone-beam CT (CBCT) with endoscopic video have been developed and integrated into a system for surgical guidance that accounts for intraoperative anatomical deformation and tissue excision. The system is based on a prototype mobile C-Arm for intraoperative CBCT that provides low-dose 3D image updates on demand with sub-mm spatial resolution and soft-tissue visibility, and also incorporates subsystems for real-time tracking and navigation, video endoscopy, deformable image registration of preoperative images and surgical plans, and 3D visualization software. The position and pose of the endoscope are geometrically registered to 3D CBCT images by way of real-time optical tracking (NDI Polaris) for rigid endoscopes (e.g., head and neck surgery), and electromagnetic tracking (NDI Aurora) for flexible endoscopes (e.g., bronchoscopes, colonoscopes). The intrinsic (focal length, principal point, non-linear distortion) and extrinsic (translation, rotation) parameters of the endoscopic camera are calibrated from images of a planar calibration checkerboard (2.5×2.5 mm2 squares) obtained at different perspectives. Video-CBCT registration enables a variety of 3D visualization options (e.g., oblique CBCT slices at the endoscope tip, augmentation of video with CBCT images and planning data, virtual reality representations of CBCT [surface renderings]), which can reveal anatomical structures not directly visible in the endoscopic view - e.g., critical structures obscured by blood or behind the visible anatomical surface. Video-CBCT fusion is evaluated in pre-clinical sinus and skull base surgical experiments, and is currently being incorporated into an ongoing prospective clinical trial in CBCT-guided head and neck surgery.
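
    The intrinsic calibration from checkerboard views can be sketched with OpenCV as below; the inner-corner count and file names are placeholders (the paper only states 2.5×2.5 mm2 squares), so this is an illustrative outline rather than the authors' pipeline.

        import cv2
        import numpy as np

        PATTERN = (9, 6)                      # assumed inner-corner layout
        objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * 2.5  # mm

        obj_pts, img_pts = [], []
        for fname in ["view0.png", "view1.png", "view2.png"]:   # placeholder files
            gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, PATTERN)
            if found:
                obj_pts.append(objp)
                img_pts.append(corners)
                img_size = gray.shape[::-1]   # (width, height)

        # Recovers focal length, principal point and distortion coefficients.
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_pts, img_pts, img_size, None, None)
        print("reprojection RMS:", rms)
        print("intrinsics:\n", K)
        print("distortion:", dist.ravel())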

  5. Evaluation of the Accuracy of Grading Indirect Ophthalmoscopy Video Images for Retinopathy of Prematurity Screening

    PubMed Central

    Prakalapakorn, Sasapin G.; Wallace, David K.; Dolland, Riana S.; Freedman, Sharon F.

    2015-01-01

    Purpose To determine whether digital retinal images obtained using a video indirect ophthalmoscopy system (Keeler) can be accurately graded for zone, stage, and plus or pre-plus disease and used to screen for type 1 retinopathy of prematurity (ROP). Methods We retrospectively reviewed the charts of 114 infants who had retinal video images acquired using the Keeler system during routine ROP examinations. Two masked ophthalmologists (1 expert and 1 non-expert in ROP screening) graded these videos for image quality, zone, stage, and pre-plus or plus disease. We compared the ophthalmologists’ grades of the videos against the clinical examination results, which served as the reference standard. We then determined the sensitivity/specificity of 2 predefined criteria for referral in detecting disease requiring treatment (i.e. type 1 ROP). Results Of images the expert considered fair or good quality (n=68), the expert and non-expert correctly identified zone (75% vs. 74%, respectively), stage (75% vs. 40%, respectively), and the presence of pre-plus or plus disease in 79% of images. Expert and non-expert judgment of prethreshold disease, pre-plus or plus disease had 100% sensitivity and 75% vs. 79% specificity, respectively, for detecting type 1 ROP. Expert and non-expert judgment of pre-plus or plus disease had 92% vs. 100% sensitivity and 77% vs. 82% specificity, respectively, for detecting type 1 ROP. Conclusions High-quality retinal video images can be read with high sensitivity and acceptable specificity to screen for type 1 ROP. Grading for pre-plus or plus disease alone may be sufficient for the purpose of ROP screening. PMID:25608280
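
    For reference, the reported sensitivity and specificity follow directly from the referral outcomes; the sketch below uses hypothetical counts (not the study's data) simply to show the computation.

        def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int):
            """Sensitivity = TP/(TP+FN); Specificity = TN/(TN+FP)."""
            return tp / (tp + fn), tn / (tn + fp)

        # Hypothetical counts only: 12 type 1 ROP eyes all referred, and 56
        # non-referrable eyes of which 44 were correctly not referred.
        sens, spec = sensitivity_specificity(tp=12, fn=0, tn=44, fp=12)
        print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # 1.00, 0.79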

  6. Real Time Speed Estimation of Moving Vehicles from Side View Images from an Uncalibrated Video Camera

    PubMed Central

    Doğan, Sedat; Temiz, Mahir Serhan; Külür, Sıtkı

    2010-01-01

    In order to estimate the speed of a moving vehicle with side view camera images, velocity vectors of a sufficient number of reference points identified on the vehicle must be found using frame images. This procedure involves two main steps. In the first step, a sufficient number of points from the vehicle is selected, and these points must be accurately tracked on at least two successive video frames. In the second step, by using the displacement vectors of the tracked points and the elapsed time, the velocity vectors of those points are computed. Computed velocity vectors are defined in the video image coordinate system and displacement vectors are measured in pixel units. The magnitudes of the computed vectors in image space must then be transformed to object space to find their absolute values. This transformation requires image-to-object space information in a mathematical sense, which is achieved by means of the calibration and orientation parameters of the video frame images. This paper presents proposed solutions for the problems of using side view camera images mentioned here. PMID:22399909
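
    A minimal sketch of the two-step computation (pixel displacements over the frame interval, then a scale from image to object space) is given below; the constant metres-per-pixel scale is a simplifying assumption, whereas the paper derives the transformation from calibration and orientation parameters.

        import numpy as np

        def estimate_speed(pts_t0: np.ndarray, pts_t1: np.ndarray,
                           dt: float, metres_per_pixel: float) -> float:
            """Average speed of tracked points between two frames.

            pts_t0, pts_t1   : (N, 2) pixel coordinates of the same points in
                               two successive frames.
            dt               : time between frames in seconds, e.g. 1/25 at 25 fps.
            metres_per_pixel : assumed constant image-to-object scale (illustrative).
            """
            disp_px = np.linalg.norm(pts_t1 - pts_t0, axis=1)   # pixel displacements
            speeds = disp_px * metres_per_pixel / dt            # m/s per point
            return float(np.mean(speeds))

        pts0 = np.array([[100.0, 200.0], [150.0, 210.0]])
        pts1 = np.array([[108.0, 200.5], [158.0, 210.4]])
        print(estimate_speed(pts0, pts1, dt=1 / 25, metres_per_pixel=0.05), "m/s")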

  7. Video and image retrieval beyond the cognitive level: the needs and possibilities

    NASA Astrophysics Data System (ADS)

    Hanjalic, Alan

    2001-01-01

    The worldwide research efforts in the area of image and video retrieval have concentrated so far on increasing the efficiency and reliability of extracting the elements of image and video semantics, and thus on improving the search and retrieval performance at the cognitive level of content abstraction. At this abstraction level, the user is searching for 'factual' or 'objective' content such as an image showing a panorama of San Francisco, an outdoor or an indoor image, a broadcast news report on a defined topic, a movie dialog between actors A and B, or the parts of a basketball game showing fast breaks, steals and scores. These efforts, however, do not address the retrieval applications at the so-called affective level of content abstraction where the 'ground truth' is not strictly defined. Such applications are, for instance, those where subjectivity of the user plays the major role, e.g. the task of retrieving all images that the user 'likes most', and those that are based on 'recognizing emotions' in audiovisual data. Typical examples are searching for all images that 'radiate happiness', identifying all 'sad' movie fragments and looking for the 'romantic landscapes', 'sentimental' movie segments, 'movie highlights' or 'most exciting' moments of a sport event. This paper discusses the needs and possibilities for widening the current scope of research in the area of image and video search and retrieval in order to enable applications at the affective level of content abstraction.

  8. Video and image retrieval beyond the cognitive level: the needs and possibilities

    NASA Astrophysics Data System (ADS)

    Hanjalic, Alan

    2000-12-01

    The worldwide research efforts in the area of image and video retrieval have concentrated so far on increasing the efficiency and reliability of extracting the elements of image and video semantics, and thus on improving the search and retrieval performance at the cognitive level of content abstraction. At this abstraction level, the user is searching for 'factual' or 'objective' content such as an image showing a panorama of San Francisco, an outdoor or an indoor image, a broadcast news report on a defined topic, a movie dialog between actors A and B, or the parts of a basketball game showing fast breaks, steals and scores. These efforts, however, do not address the retrieval applications at the so-called affective level of content abstraction where the 'ground truth' is not strictly defined. Such applications are, for instance, those where subjectivity of the user plays the major role, e.g. the task of retrieving all images that the user 'likes most', and those that are based on 'recognizing emotions' in audiovisual data. Typical examples are searching for all images that 'radiate happiness', identifying all 'sad' movie fragments and looking for the 'romantic landscapes', 'sentimental' movie segments, 'movie highlights' or 'most exciting' moments of a sport event. This paper discusses the needs and possibilities for widening the current scope of research in the area of image and video search and retrieval in order to enable applications at the affective level of content abstraction.

  9. Video rate imaging at 1.5 THz via frequency upconversion to the near-IR

    NASA Astrophysics Data System (ADS)

    Tekavec, Patrick F.; Kozlov, Vladimir G.; McNee, Ian; Lee, Yun-Shik; Vodopyanov, Konstantin

    2015-05-01

    We demonstrate video rate THz imaging in both reflection and transmission by frequency upconverting the THz image to the near-IR. In reflection, the ability to resolve images generated at different depths is shown. By mixing the THz pulses with a portion of the fiber laser pump (1064 nm) in a quasi-phase matched gallium arsenide crystal, distinct sidebands are observed at 1058 nm and 1070 nm, corresponding to sum and difference frequency generation of the pump pulse with the THz pulse. By using a polarizer and long pass filter, the strong pump light can be removed, leaving a nearly background free signal at 1070 nm. We have obtained video rate images with a spatial resolution of 1 mm and a field of view ca. 20 mm in diameter without any post processing of the data.
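
    The quoted sideband wavelengths follow from simple sum- and difference-frequency arithmetic, sketched below with nominal values (1064 nm pump, 1.5 THz); this is a consistency check, not part of the authors' processing.

        C = 2.998e8  # speed of light, m/s

        def sidebands(pump_nm: float, thz: float):
            """Sum/difference-frequency wavelengths for a pump mixed with a THz wave."""
            f_pump = C / (pump_nm * 1e-9)
            f_sum, f_diff = f_pump + thz, f_pump - thz
            return C / f_sum * 1e9, C / f_diff * 1e9

        upper, lower = sidebands(1064.0, 1.5e12)
        print(f"sum-frequency sideband: {upper:.1f} nm")    # ~1058 nm
        print(f"difference sideband:   {lower:.1f} nm")     # ~1070 nm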

  10. Progress toward a video-rate, passive millimeter-wave imager for brownout mitigation

    NASA Astrophysics Data System (ADS)

    Mackrides, Daniel G.; Schuetz, Christopher A.; Martin, Richard D.; Dillon, Thomas E.; Yao, Peng; Prather, Dennis W.

    2011-05-01

    Currently, brownout is the single largest contributor to military rotary-wing losses. Millimeter-wave radiation penetrates these dust clouds effectively, thus millimeter-wave imaging could provide pilots with valuable situational awareness during hover, takeoff, and landing operations. Herein, we detail efforts towards a passive, video-rate imager for use as a brownout mitigation tool. The imager presented herein uses a distributed-aperture, optically-upconverted architecture that provides real-time, video-rate imagery with minimal size and weight. Specifically, we detail phenomenology measurements in brownout environments, show developments in enabling component technologies, and present results from a 30-element aperiodic array imager that has recently been fabricated.

  11. High-accuracy 3D image-based registration of endoscopic video to C-arm cone-beam CT for image-guided skull base surgery

    NASA Astrophysics Data System (ADS)

    Mirota, Daniel J.; Uneri, Ali; Schafer, Sebastian; Nithiananthan, Sajendra; Reh, Douglas D.; Gallia, Gary L.; Taylor, Russell H.; Hager, Gregory D.; Siewerdsen, Jeffrey H.

    2011-03-01

    Registration of endoscopic video to preoperative CT facilitates high-precision surgery of the head, neck, and skull-base. Conventional video-CT registration is limited by the accuracy of the tracker and does not use the underlying video or CT image data. A new image-based video registration method has been developed to overcome the limitations of conventional tracker-based registration. This method adds to a navigation system based on intraoperative C-arm cone-beam CT (CBCT), in turn providing high-accuracy registration of video to the surgical scene. The resulting registration enables visualization of the CBCT and planning data within the endoscopic video. The system incorporates a mobile C-arm, integrated with an optical tracking system, video endoscopy, deformable registration of preoperative CT with intraoperative CBCT, and 3D visualization. Similarly to the tracker-based approach, in the image-based video-CBCT registration the endoscope is first localized with the optical tracking system, followed by a direct 3D image-based registration of the video to the CBCT. In this way, the system achieves video-CBCT registration that is both fast and accurate. Application in skull-base surgery demonstrates overlay of critical structures (e.g., carotid arteries) and surgical targets with sub-mm accuracy. Phantom and cadaver experiments show consistent improvement of target registration error (TRE) in video overlay over conventional tracker-based registration, e.g., 0.92 mm versus 1.82 mm for image-based and tracker-based registration, respectively. The proposed method represents a two-fold advance: first, through registration of video to up-to-date intraoperative CBCT, and second, through direct 3D image-based video-CBCT registration, which together provide more confident visualization of target and normal tissues within up-to-date images.

  12. Efficient super-resolution image reconstruction applied to surveillance video captured by small unmanned aircraft systems

    NASA Astrophysics Data System (ADS)

    He, Qiang; Schultz, Richard R.; Chu, Chee-Hung Henry

    2008-04-01

    The concept surrounding super-resolution image reconstruction is to recover a highly-resolved image from a series of low-resolution images via between-frame subpixel image registration. In this paper, we propose a novel and efficient super-resolution algorithm, and then apply it to the reconstruction of real video data captured by a small Unmanned Aircraft System (UAS). Small UAS aircraft generally have a wingspan of less than four meters, so that these vehicles and their payloads can be buffeted by even light winds, resulting in potentially unstable video. This algorithm is based on a coarse-to-fine strategy, in which a coarsely super-resolved image sequence is first built from the original video data by image registration and bi-cubic interpolation between a fixed reference frame and every additional frame. It is well known that the median filter is robust to outliers. If we calculate pixel-wise medians in the coarsely super-resolved image sequence, we can restore a refined super-resolved image. The primary advantage is that this is a noniterative algorithm, unlike traditional approaches based on highly-computational iterative algorithms. Experimental results show that our coarse-to-fine super-resolution algorithm is not only robust, but also very efficient. In comparison with five well-known super-resolution algorithms, namely the robust super-resolution algorithm, bi-cubic interpolation, projection onto convex sets (POCS), the Papoulis-Gerchberg algorithm, and the iterated back projection algorithm, our proposed algorithm gives both strong efficiency and robustness, as well as good visual performance. This is particularly useful for the application of super-resolution to UAS surveillance video, where real-time processing is highly desired.
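
    A minimal sketch of the coarse-to-fine idea is given below, assuming the inter-frame translations have already been estimated by registration; the purely translational motion model and the interpolation settings are simplifying assumptions.

        import numpy as np
        import cv2

        def coarse_to_fine_sr(frames, shifts, scale=2):
            """Upsample each registered low-resolution frame bicubically onto the
            reference grid, then take pixel-wise medians (robust to outliers).

            frames : list of greyscale frames (H, W), float32
            shifts : list of (dx, dy) sub-pixel translations of each frame
                     relative to the reference frame (assumed already estimated)
            """
            h, w = frames[0].shape
            stack = []
            for frame, (dx, dy) in zip(frames, shifts):
                # Warp each frame onto the reference frame, then bicubic upsample.
                M = np.float32([[1, 0, -dx], [0, 1, -dy]])
                aligned = cv2.warpAffine(frame, M, (w, h), flags=cv2.INTER_CUBIC)
                up = cv2.resize(aligned, (w * scale, h * scale),
                                interpolation=cv2.INTER_CUBIC)
                stack.append(up)
            # Pixel-wise median over the coarsely super-resolved stack.
            return np.median(np.stack(stack, axis=0), axis=0)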

  13. Acoustic imaging and mirage effects with high transmittance in a periodically perforated metal slab

    NASA Astrophysics Data System (ADS)

    Zhao, Sheng-Dong; Wang, Yue-Sheng; Zhang, Chuanzeng

    2016-11-01

    In this paper, we present a high-quality superlens to focus acoustic waves using a periodically perforated metallic structure which is made of zinc and immersed in water. By changing a geometrical parameter gradually, a kind of gradient-index phononic crystal lens is designed to attain the mirage effects. The acoustic waves can propagate along an arc-shaped trajectory which is precisely controlled by the angle and frequency of the incident waves. The negative refraction imaging effect depends delicately on the transmittance of the solid structure. The acoustic impedance matching between the solid and the liquid proposed in this article, which is determined by the effective density and group velocity of the unit-cell, is significant for overcoming the inefficiency problem of acoustic devices. This study focuses on how to obtain the high transmittance imaging and mirage effects based on the adequate material selection and geometrical design.

  14. Lidar-Incorporated Traffic Sign Detection from Video Log Images of Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Li, Y.; Fan, J.; Huang, Y.; Chen, Z.

    2016-06-01

    A Mobile Mapping System (MMS) simultaneously collects Lidar points and video log images of a scene with a laser profiler and a digital camera. Besides the textural details of the video log images, it also captures the 3D geometric shape of the point cloud. It is widely used in many transportation agencies to survey the street view and roadside transportation infrastructure, such as traffic signs and guardrails. Although much literature on traffic sign detection is available, it focuses on either the Lidar or the imagery data of traffic signs. Based on the well-calibrated extrinsic parameters of the MMS, 3D Lidar points are, for the first time, incorporated into 2D video log images to enhance the detection of traffic signs both physically and visually. Based on the local elevation, the 3D pavement area is first located. Within a certain distance and height of the pavement, points of the overhead and roadside traffic signs can be obtained according to the setup specifications of traffic signs in different transportation agencies. The 3D candidate planes of traffic signs are then fitted to those points using RANSAC plane fitting. By projecting the candidate planes onto the image, Regions of Interest (ROIs) of traffic signs are found physically with the geometric constraints between laser profiling and camera imaging. Random forest learning of the visual color and shape features of traffic signs is adopted to validate the sign ROIs from the video log images. The sequential occurrence of a traffic sign among consecutive video log images is defined by the geometric constraints of the imaging geometry and GPS movement. Candidate ROIs are predicted in this temporal context to double-check the salient traffic sign among video log images. The proposed algorithm is tested on a diverse set of scenarios on the interstate highway G-4 near Beijing, China under varying lighting conditions and occlusions. Experimental results show the proposed algorithm enhances the rate of detecting
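
    The RANSAC plane-fitting step referred to above can be sketched as a basic sampling loop over the candidate Lidar points; the threshold and iteration count below are illustrative choices, not the paper's settings.

        import numpy as np

        def ransac_plane(points: np.ndarray, n_iter=500, thresh=0.05, rng=None):
            """Fit a plane to 3D Lidar points with a basic RANSAC loop.
            points : (N, 3) array; thresh is the inlier distance in point units.
            Returns ((normal, d) for the plane n.x + d = 0, inlier mask)."""
            rng = rng or np.random.default_rng(0)
            best_inliers = np.zeros(len(points), dtype=bool)
            best_model = None
            for _ in range(n_iter):
                sample = points[rng.choice(len(points), 3, replace=False)]
                n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
                norm = np.linalg.norm(n)
                if norm < 1e-9:            # degenerate (collinear) sample
                    continue
                n = n / norm
                d = -n.dot(sample[0])
                dist = np.abs(points @ n + d)
                inliers = dist < thresh
                if inliers.sum() > best_inliers.sum():
                    best_inliers, best_model = inliers, (n, d)
            return best_model, best_inliers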

  15. An adaptive optimal bandwidth sensor for video imaging and sparsifying basis

    NASA Astrophysics Data System (ADS)

    Noor, Imama

    Many compressive sensing architectures have shown promise towards reducing the bandwidth for image acquisition significantly. In order to use these architectures for video acquisition, we need a scheme that is able to effectively exploit temporal redundancies in a sequence. In this thesis we study a method to efficiently sample and reconstruct specific video sequences. The method is suitable for implementation using a single-pixel detector along with a digital micromirror device (DMD) or other forms of spatial light modulators (SLMs). Conventional implementations of single-pixel cameras are able to spatially compress the signal, but the compressed measurements make it difficult to exploit temporal redundancies directly. Moreover, a single-pixel camera needs to make measurements in a sequential manner before the scene changes, making it inefficient for video imaging. In this thesis we discuss a measurement scheme that exploits sparsity along the time axis for video imaging. After acquiring all measurements required for the first frame, measurements are only acquired from the areas which change in subsequent frames. We segment the first frame, detect the magnitude and direction of change for each segment, and acquire compressed measurements for the changing segments in the predicted direction. TV minimization is used to reconstruct the dynamic areas, and PSNR variation is studied against different parameters of the proposed scheme. We show reconstruction results for a few test sequences commonly used for performance analysis and demonstrate the practical utility of the scheme. A comparison is made with existing algorithms to show the effectiveness of the proposed method for specific video sequences. We also discuss the use of a customized transform to improve reconstruction for a submillimeter-wave single-pixel imager. We use a sparseness-inducing transformation on the measurements and optimize the result using l1 minimization algorithms. We demonstrate improvement in the results of several
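
    A minimal sketch of the per-segment change test described above is given below; the segmentation map, the mean-absolute-difference statistic, and the threshold are assumptions standing in for the thesis's change detection, which also estimates direction of change.

        import numpy as np

        def changed_segments(prev: np.ndarray, curr: np.ndarray,
                             labels: np.ndarray, tau: float = 8.0):
            """Return segment labels whose mean absolute frame difference exceeds
            `tau`; only these regions would be re-measured for the next frame.
            labels : integer segmentation map of the first frame (same shape)."""
            diff = np.abs(curr.astype(np.float32) - prev.astype(np.float32))
            changed = []
            for lab in np.unique(labels):
                if diff[labels == lab].mean() > tau:
                    changed.append(int(lab))
            return changed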

  16. Modeling Hemodynamic Responses in Auditory Cortex at 1.5T Using Variable Duration Imaging Acoustic Noise

    PubMed Central

    Hu, Shuowen; Olulade, Olumide; Gonzalez, Javier Castillo; Santos, Joseph; Kim, Sungeun; Tamer, Gregory G.; Luh, Wen-Ming; Talavage, Thomas M.

    2009-01-01

    A confound for functional magnetic resonance imaging (fMRI), especially for auditory studies, is the presence of imaging acoustic noise generated mainly as a byproduct of rapid gradient switching during volume acquisition and, to a lesser extent, the radio-frequency transmit. This work utilized a novel pulse sequence to present actual imaging acoustic noise for characterization of the induced hemodynamic responses and assessment of linearity in the primary auditory cortex with respect to noise duration. Results show that responses to brief-duration (46 ms) imaging acoustic noise are highly nonlinear while responses to longer-duration (>1 s) imaging acoustic noise become approximately linear, with the right primary auditory cortex exhibiting a higher degree of nonlinearity than the left for the investigated noise durations. This study also assessed the spatial extent of activation induced by imaging acoustic noise, showing that the use of modeled responses (specific to imaging acoustic noise) as the reference waveform revealed additional activations in the auditory cortex not observed with a canonical gamma variate reference waveform, suggesting an improvement in detection sensitivity for imaging acoustic noise-induced activity. Longer-duration (1.5 s) imaging acoustic noise was observed to induce activity that expanded outwards from Heschl’s gyrus to cover the superior temporal gyrus as well as parts of the middle temporal gyrus and insula, potentially affecting higher level acoustic processing. PMID:19948232
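
    For illustration, a gamma-variate response shape of the kind used as a reference waveform can be written as below and convolved with a noise-burst timeline; the parameterisation and values are generic assumptions, not the study's fitted responses.

        import numpy as np

        def gamma_variate_hrf(t: np.ndarray, delay=6.0, dispersion=0.9, scale=1.0):
            """Gamma-variate response peaking at `delay` seconds:
            h(t) = scale * (t/delay)**(delay/dispersion) * exp(-(t-delay)/dispersion).
            (One common parameterisation; values here are illustrative.)"""
            t = np.asarray(t, dtype=float)
            h = np.zeros_like(t)
            pos = t > 0
            a = delay / dispersion
            h[pos] = scale * (t[pos] / delay) ** a * np.exp(-(t[pos] - delay) / dispersion)
            return h

        t = np.arange(0, 30, 0.5)        # seconds
        noise_on = (t >= 0) & (t < 1.5)  # a 1.5 s burst of imaging acoustic noise
        predicted = np.convolve(noise_on.astype(float), gamma_variate_hrf(t))[:len(t)]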

  17. An Adaptive Framework for Image and Video Sensing

    DTIC Science & Technology

    2005-03-01

    bandwidth on the camera transmission or memory is not optimally utilized. In this paper we outline a framework for an adaptive sensor where the spatial and...scene can be realized, with small distortion. Keywords: Adaptive Imaging, Varying Sampling Rate, Image Content Measure, Scene Adaptive, Camera ...second order effect on the spatio-temporal trade-off. Figure 1 is an example of the spatio-temporal sampling rate tradeoff in a typical camera (e.g

  18. Laser Imaging Video Camera Sees Through Fire, Fog, Smoke

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Under a series of SBIR contracts with Langley Research Center, inventor Richard Billmers refined a prototype for a laser imaging camera capable of seeing through fire, fog, smoke, and other obscurants. Now, Canton, Ohio-based Laser Imaging through Obscurants (LITO) Technologies Inc. is demonstrating the technology as a perimeter security system at Glenn Research Center and planning its future use in aviation, shipping, emergency response, and other fields.

  19. Contrast Enhancement for Thermal Acoustic Breast Cancer Imaging via Resonant Stimulation

    DTIC Science & Technology

    2009-03-01

    Olsen and J. C. Lin, “Acoustic imaging of a model of a human hand using pulsed microwave irradiation,” Bioelectromagnetics, vol. 4, pp. 397–400, 1983. [2...E. Steen and B. Olstad, “Volume rendering of 3-D medical ultrasound data using direct feature mapping,” IEEE Trans. Med. Imag., vol. 13, no. 6, pp

  20. Digital Video Needs for Oceanographic Images for the National Oceanic and Atmospheric Administration (NOAA): Phase 2

    DTIC Science & Technology

    2003-09-01

    www.lib.noaa.gov) and the worldwide online library catalog, WorldCat (http://www.oclc.org/ worldcat /). WorldCat is the world’s largest and richest database...OE expeditions will also be available through NOAALINC and WorldCat . E. The Digital Atlas Working Group This group has developed a Geographic...future NOAA video portal, and WorldCat . Input will be provided from both the NOAA Data Centers and the Internet for video images. Multi-platform

  1. [Specifics of perception of acoustic image of intrinsic bioelectric brain activity].

    PubMed

    Konstantinov, K V; Leonova, M K; Miroshnikov, D B; Klimenko, V M

    2014-06-01

    We studied the particularities of perception of the acoustic image of intrinsic EEG. We found that the assessment of perception of sounds whose presentation was synchronized with and matched to current bioelectric brain activity is higher than the assessment of perception of an acoustic EEG image presented in recorded form. Presentation of a recorded acoustic image of the EEG is accompanied by increased beta-band activity in the frontal areas, while real-time presentation of the acoustic EEG image is accompanied by an increase of slow-wave activity in the theta and delta bands over the occipital areas of the brain. Increased activity in the theta and delta bands over the occipital areas during sessions of listening to the acoustic image of the EEG in real time depends on the baseline frequency structure of the EEG and correlates with the expression of the alpha, beta and theta bands of bioelectric brain activity in both frontal and occipital areas. We suppose that the presentation of sounds synchronized with and matched to the current bioelectric activity activates regulatory brain structures.

  2. High-spatial-resolution sub-surface imaging using a laser-based acoustic microscopy technique.

    PubMed

    Balogun, Oluwaseyi; Cole, Garrett D; Huber, Robert; Chinn, Diane; Murray, Todd W; Spicer, James B

    2011-01-01

    Scanning acoustic microscopy techniques operating at frequencies in the gigahertz range are suitable for the elastic characterization and interior imaging of solid media with micrometer-scale spatial resolution. Acoustic wave propagation at these frequencies is strongly limited by energy losses, particularly from attenuation in the coupling media used to transmit ultrasound to a specimen, leading to a decrease in the depth in a specimen that can be interrogated. In this work, a laser-based acoustic microscopy technique is presented that uses a pulsed laser source for the generation of broadband acoustic waves and an optical interferometer for detection. The use of a 900-ps microchip pulsed laser facilitates the generation of acoustic waves with frequencies extending up to 1 GHz which allows for the resolution of micrometer-scale features in a specimen. Furthermore, the combination of optical generation and detection approaches eliminates the use of an ultrasonic coupling medium, and allows for elastic characterization and interior imaging at penetration depths on the order of several hundred micrometers. Experimental results illustrating the use of the laser-based acoustic microscopy technique for imaging micrometer-scale subsurface geometrical features in a 70-μm-thick single-crystal silicon wafer with a (100) orientation are presented.

  3. Liver reserve function assessment by acoustic radiation force impulse imaging

    PubMed Central

    Sun, Xiao-Lan; Liang, Li-Wei; Cao, Hui; Men, Qiong; Hou, Ke-Zhu; Chen, Zhen; Zhao, Ya-E

    2015-01-01

    AIM: To evaluate the utility of liver reserve function by acoustic radiation force impulse (ARFI) imaging in patients with liver tumors. METHODS: Seventy-six patients with liver tumors were enrolled in this study. Serum biochemical indexes, such as alanine aminotransferase (ALT), aspartate aminotransferase (AST), serum albumin (ALB), total bilirubin (T-Bil), and other indicators were observed. Liver stiffness (LS) was measured by ARFI imaging; measurements were repeated 10 times, and the average value of the results was taken as the final LS value. Indocyanine green (ICG) retention was performed, and ICG-K and ICG-R15 were recorded. Child-Pugh (CP) scores were calculated based on each patient’s preoperative biochemical tests and physical condition. Correlations among CP scores, ICG-R15, ICG-K and LS values were observed and analyzed using either the Pearson correlation coefficient or the Spearman rank correlation coefficient. The Kruskal-Wallis test was used to compare LS values across CP scores, and the receiver-operator characteristic (ROC) curve was used to analyze liver reserve function assessment accuracy. RESULTS: LS in the ICG-R15 10%-20% group was higher than in the ICG-R15 < 10% group, and the difference was statistically significant (2.19 ± 0.27 vs 1.59 ± 0.32, P < 0.01). LS in the ICG-R15 > 20% group was higher than in the ICG-R15 < 10% group, and the difference was statistically significant (2.92 ± 0.29 vs 1.59 ± 0.32, P < 0.01). The LS value in patients with CP class A was lower than in patients with CP class B (1.57 ± 0.34 vs 1.86 ± 0.27, P < 0.05), while the LS value in patients with CP class B was lower than in patients with CP class C (1.86 ± 0.27 vs 2.47 ± 0.33, P < 0.01). LS was positively correlated with ICG-R15 (r = 0.617, P < 0.01) and CP score (r = 0.772, P < 0.01). Meanwhile, LS was negatively correlated with ICG-K (r = -0.673, P < 0.01). AST, ALT and T-Bil were positively correlated with LS, while ALB was negatively
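
    The statistical workflow (Pearson/Spearman correlations and an ROC analysis) can be sketched as below; the arrays are hypothetical placeholders, not the study's measurements.

        import numpy as np
        from scipy.stats import pearsonr, spearmanr
        from sklearn.metrics import roc_curve, auc

        # Hypothetical per-patient values standing in for the real data.
        ls = np.array([1.4, 1.6, 1.8, 2.1, 2.3, 2.6, 2.9])            # ARFI liver stiffness
        icg_r15 = np.array([6.0, 8.5, 11.0, 14.0, 18.0, 22.0, 27.0])  # ICG-R15 (%)
        cp_score = np.array([5, 5, 6, 7, 8, 9, 11])                   # Child-Pugh score

        print("LS vs ICG-R15 (Pearson):", pearsonr(ls, icg_r15))
        print("LS vs CP score (Spearman):", spearmanr(ls, cp_score))

        # ROC for using LS to flag poor reserve (here defined as ICG-R15 > 20%).
        labels = (icg_r15 > 20).astype(int)
        fpr, tpr, _ = roc_curve(labels, ls)
        print("AUC:", auc(fpr, tpr))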

  4. Compact Video Microscope Imaging System Implemented in Colloid Studies

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2002-01-01

    [Figure: fiber-optic light source, microscope, and charge-coupled device (CCD) camera head connected to the camera body; the CCD camera body feeding data to an image acquisition board in a PC; and a Cartesian robot controlled via a PC board.] The Compact Microscope Imaging System (CMIS) is a diagnostic tool with intelligent controls for use in space, industrial, medical, and security applications. CMIS can be used in situ with a minimum amount of user intervention. This system can scan, find areas of interest in, focus on, and acquire images automatically. Many multiple-cell experiments require microscopy for in situ observations; this is feasible only with compact microscope systems. CMIS is a miniature machine vision system that combines intelligent image processing with remote control. The software also has a user-friendly interface, which can be used independently of the hardware for further post-experiment analysis. CMIS has been successfully developed in the SML Laboratory at the NASA Glenn Research Center, has been adapted for use in colloid studies, and is available for telescience experiments. The main innovations this year are an improved interface, optimized algorithms, and the ability to control conventional full-sized microscopes in addition to compact microscopes. The CMIS software-hardware interface is being integrated into our SML Analysis package, which will be a robust general-purpose image-processing package that can handle over 100 space and industrial applications.

  5. Video-rate dual polarization multispectral endoscopic imaging

    NASA Astrophysics Data System (ADS)

    Pigula, Anne; Clancy, Neil T.; Arya, Shobhit; Hanna, George B.; Elson, Daniel S.

    2015-03-01

    Cancerous and precancerous growths often exhibit changes in scattering, and therefore depolarization, as well as collagen breakdown, causing changes in birefringent effects. Simple difference of linear polarization imaging is unable to represent anisotropic effects like birefringence, and Mueller polarimetry is time-consuming and difficult to implement clinically. This work presents a dual-polarization endoscope to collect co- and cross-polarized images for each of two polarization states, and further incorporates narrow band detection to increase vascular contrast, particularly vascular remodeling present in diseased tissue, and provide depth sensitivity. The endoscope was shown to be sensitive to both isotropic and anisotropic materials in phantom and in vivo experiments.
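
    The co- and cross-polarized image pairs support a simple difference-of-linear-polarization map, as sketched below for one illumination state; the normalisation and the small epsilon guard are implementation assumptions, not the authors' processing chain.

        import numpy as np

        def dolp(i_co: np.ndarray, i_cross: np.ndarray, eps: float = 1e-6):
            """Degree of linear polarization from co- and cross-polarized images:
            DoLP = (I_co - I_cross) / (I_co + I_cross)."""
            i_co = i_co.astype(np.float32)
            i_cross = i_cross.astype(np.float32)
            return (i_co - i_cross) / (i_co + i_cross + eps)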

  6. Video-rate chemical identification and visualization with snapshot hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Bodkin, Andrew; Sheinis, A.; Norton, A.; Daly, J.; Roberts, C.; Beaven, S.; Weinheimer, J.

    2012-06-01

    Hyperspectral imaging has important benefits in remote sensing and target discrimination applications. This paper describes a class of snapshot-mode hyperspectral imaging systems which utilize a unique optical processor that provides video-rate hyperspectral datacubes. This system consists of numerous parallel optical paths which collect the full three-dimensional (two spatial, one spectral) hyperspectral datacube with each video frame and are ideal for recording data from transient events, or on unstable platforms. We will present the results of laboratory and field tests for several of these imagers operating at visible, near-infrared, MWIR and LWIR wavelengths. Measurement results for nitrate detection and identification as well as additional chemical identification and analysis will be presented.

  7. Standoff passive video imaging at 350 GHz with 251 superconducting detectors

    NASA Astrophysics Data System (ADS)

    Becker, Daniel; Gentry, Cale; Smirnov, Ilya; Ade, Peter; Beall, James; Cho, Hsiao-Mei; Dicker, Simon; Duncan, William; Halpern, Mark; Hilton, Gene; Irwin, Kent; Li, Dale; Paulter, Nicholas; Reintsema, Carl; Schwall, Robert; Tucker, Carole

    2014-06-01

    Millimeter wavelength radiation holds promise for detection of security threats at a distance, including suicide bomb belts and maritime threats in poor weather. The high sensitivity of superconducting Transition Edge Sensor (TES) detectors makes them ideal for passive imaging of thermal signals at these wavelengths. We have built a 350 GHz video-rate imaging system using a large-format array of feedhorn-coupled TES bolometers. The system operates at a standoff distance of 16 m to 28 m with a spatial resolution of 1.4 cm (at 17 m). It currently contains one 251-detector subarray, and will be expanded to contain four subarrays for a total of 1004 detectors. The system has been used to take video images which reveal the presence of weapons concealed beneath a shirt in an indoor setting. We present a summary of this work.
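
    As a rough consistency check on the quoted spatial resolution, a Rayleigh-criterion estimate is sketched below; the aperture diameter used is an assumed round number, not the system's specification.

        import math

        def diffraction_limited_spot(wavelength_m, aperture_m, range_m):
            """Rayleigh-criterion spot size ~ 1.22 * lambda * R / D (rough estimate)."""
            return 1.22 * wavelength_m * range_m / aperture_m

        wavelength = 3e8 / 350e9            # ~0.86 mm at 350 GHz
        # A 1 m aperture is an assumed value used only for illustration.
        print(f"{diffraction_limited_spot(wavelength, 1.0, 17.0) * 100:.1f} cm")  # ~1.8 cm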

  8. 3D CT-Video Fusion for Image-Guided Bronchoscopy

    PubMed Central

    Higgins, William E.; Helferty, James P.; Lu, Kongkuo; Merritt, Scott A.; Rai, Lav; Yu, Kun-Chang

    2008-01-01

    Bronchoscopic biopsy of the central-chest lymph nodes is an important step for lung-cancer staging. Before bronchoscopy, the physician first visually assesses a patient’s three-dimensional (3D) computed tomography (CT) chest scan to identify suspect lymph-node sites. Next, during bronchoscopy, the physician guides the bronchoscope to each desired lymph-node site. Unfortunately, the physician has no link between the 3D CT image data and the live video stream provided during bronchoscopy. Thus, the physician must essentially perform biopsy blindly, and the skill levels between different physicians differ greatly. We describe an approach that enables synergistic fusion between the 3D CT data and the bronchoscopic video. Both the integrated planning and guidance system and the internal CT-video registration and fusion methods are described. Phantom, animal, and human studies illustrate the efficacy of the methods. PMID:18096365

  9. Segmentation of the spinous process and its acoustic shadow in vertebral ultrasound images.

    PubMed

    Berton, Florian; Cheriet, Farida; Miron, Marie-Claude; Laporte, Catherine

    2016-05-01

    Spinal ultrasound imaging is emerging as a low-cost, radiation-free alternative to conventional X-ray imaging for the clinical follow-up of patients with scoliosis. Currently, deformity measurement relies almost entirely on manual identification of key vertebral landmarks. However, the interpretation of vertebral ultrasound images is challenging, primarily because acoustic waves are entirely reflected by bone. To alleviate this problem, we propose an algorithm to segment these images into three regions: the spinous process, its acoustic shadow and other tissues. This method consists, first, in the extraction of several image features and the selection of the most relevant ones for the discrimination of the three regions. Then, using this set of features and linear discriminant analysis, each pixel of the image is classified as belonging to one of the three regions. Finally, the image is segmented by regularizing the pixel-wise classification results to account for some geometrical properties of vertebrae. The feature set was first validated by analyzing the classification results across a learning database. The database contained 107 vertebral ultrasound images acquired with convex and linear probes. Classification rates of 84%, 92% and 91% were achieved for the spinous process, the acoustic shadow and other tissues, respectively. Dice similarity coefficients of 0.72 and 0.88 were obtained respectively for the spinous process and acoustic shadow, confirming that the proposed method accurately segments the spinous process and its acoustic shadow in vertebral ultrasound images. Furthermore, the centroid of the automatically segmented spinous process was located at an average distance of 0.38 mm from that of the manually labeled spinous process, which is on the order of image resolution. This suggests that the proposed method is a promising tool for the measurement of the Spinous Process Angle and, more generally, for assisting ultrasound-based assessment of scoliosis
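
    The pixel-wise classification stage (linear discriminant analysis over per-pixel features) can be sketched with scikit-learn as below; the feature set, image size, and labels are placeholders, and the geometric regularisation step is only indicated in a comment.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        # Placeholder data: each pixel described by a feature vector and labelled
        # 0 = spinous process, 1 = acoustic shadow, 2 = other tissue.
        X_train = np.random.rand(3000, 6)          # (n_pixels, n_features)
        y_train = np.random.randint(0, 3, 3000)    # per-pixel class labels

        lda = LinearDiscriminantAnalysis()
        lda.fit(X_train, y_train)

        X_image = np.random.rand(480 * 640, 6)     # features for every pixel of a frame
        label_map = lda.predict(X_image).reshape(480, 640)
        # A regularisation step exploiting vertebra geometry (e.g. morphological
        # cleanup) would follow, as described in the paper.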

  10. Segmentation and classification of shallow subbottom acoustic data, using image processing and neural networks

    NASA Astrophysics Data System (ADS)

    Yegireddi, Satyanarayana; Thomas, Nitheesh

    2014-06-01

    A subbottom acoustic profiler provides acoustic imaging of the subbottom structure constituting the upper sediment layers of the seabed, which is essential for geological and offshore geo-engineering studies. Delineation of the subbottom structure from noisy acoustic data and classification of the sediment strata are challenging tasks with conventional signal processing techniques. Image processing techniques utilise the spatial variability of image characteristics and are known for their potential in medical imaging and pattern recognition applications. In the present study, they are found to be good at demarcating the boundaries of sediment layers associated with weak acoustic reflectivity, masked by a noisy background. The study deals with the application of image processing techniques, like segmentation, in the identification of subbottom features and the extraction of textural feature vectors using grey level co-occurrence matrix statistics. Classification was also attempted using a Self-Organised Map (SOM), an unsupervised neural network model, utilising these feature vectors. The methodology was successfully demonstrated in demarcating the different sediment layers from the subbottom images and established that the sediments constituting the four inferred subsurface sediment layers differ from each other. The network model was also tested for its consistency with repeated runs of different network configurations. The ability of the trained network was also tested using a few untrained test images representing a similar environment, and the classification results show good agreement with what was anticipated.
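
    A minimal sketch of the grey-level co-occurrence feature extraction is given below using scikit-image's graycomatrix/graycoprops helpers (spelled greycomatrix in older releases); the patch size, distances, and angles are illustrative choices, and the resulting vectors would feed the Self-Organised Map classifier.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def glcm_features(patch: np.ndarray) -> np.ndarray:
            """Grey-level co-occurrence texture features for one image patch.
            `patch` is an 8-bit greyscale array; distances/angles are example choices."""
            glcm = graycomatrix(patch, distances=[1, 2], angles=[0, np.pi / 2],
                                levels=256, symmetric=True, normed=True)
            feats = [graycoprops(glcm, p).ravel()
                     for p in ("contrast", "homogeneity", "energy", "correlation")]
            return np.concatenate(feats)

        patch = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # placeholder patch
        print(glcm_features(patch).shape)   # one feature vector per patch, fed to the SOM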

  11. High performance computational integral imaging system using multi-view video plus depth representation

    NASA Astrophysics Data System (ADS)

    Shi, Shasha; Gioia, Patrick; Madec, Gérard

    2012-12-01

    Integral imaging is an attractive auto-stereoscopic three-dimensional (3D) technology for next-generation 3DTV. But its application is obstructed by poor image quality, huge data volume and high processing complexity. In this paper, a new computational integral imaging (CII) system using multi-view video plus depth (MVD) representation is proposed to solve these problems. The originality of this system lies in three aspects. Firstly, a particular depth-image-based rendering (DIBR) technique is used in the encoding process to exploit the inter-view correlation between different sub-images (SIs). Thereafter, the same DIBR method is applied on the display side to interpolate virtual SIs and improve the reconstructed 3D image quality. Finally, a novel parallel group projection (PGP) technique is proposed to simplify the reconstruction process. According to experimental results, the proposed CII system improves compression efficiency and displayed image quality, while reducing calculation complexity.
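
    The inter-view interpolation idea can be illustrated with a very simplified depth-image-based rendering step, in which each pixel of a reference sub-image is shifted horizontally by a disparity derived from its depth to synthesise a neighbouring virtual view. The scaling constants and hole handling below are illustrative assumptions only, not the particular DIBR technique of the paper:

        # Simplified DIBR warp: shift pixels horizontally by a depth-derived disparity.
        import numpy as np

        def render_virtual_view(ref_view, depth, baseline=0.05, focal=100.0):
            """ref_view: (H, W, 3) image; depth: (H, W) positive depth map."""
            h, w = depth.shape
            virtual = np.zeros_like(ref_view)
            disparity = np.round(baseline * focal / depth).astype(int)  # nearer -> larger shift
            cols = np.arange(w)
            for r in range(h):
                target = cols + disparity[r]                 # destination column per pixel
                valid = (target >= 0) & (target < w)
                virtual[r, target[valid]] = ref_view[r, cols[valid]]
            return virtual                                   # unfilled pixels remain as holes

        view = np.random.rand(64, 64, 3)
        depth = np.random.uniform(1.0, 5.0, size=(64, 64))
        synth = render_virtual_view(view, depth)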

  12. Coastal morphodynamic features/patterns analysis through a video-based system and image processing

    NASA Astrophysics Data System (ADS)

    Santos, Fábio; Pais-Barbosa, Joaquim; Teodoro, Ana C.; Gonçalves, Hernâni; Baptista, Paolo; Moreira, António; Veloso-Gomes, Fernando; Taveira-Pinto, Francisco; Gomes-Costa, Paulo; Lopes, Vítor; Neves-Santos, Filipe

    2012-10-01

    The Portuguese coastline, like many others worldwide, is often subjected to extreme events resulting in erosion; thus, the acquisition of high-quality field measurements has become a common concern. Nearshore survey systems have traditionally been based on in situ measurements or on satellite- or aircraft-mounted remote sensing systems. As an alternative, video-monitoring systems have proved to be an economic and efficient way to collect useful, continuous data and to document extreme events. In this context, the project MoZCo (Advanced Methodologies and Techniques Development for Coastal Zone Monitoring) is under development, which intends to develop and implement monitoring techniques for the coastal zone based on a low-cost video monitoring system. The pilot study area is Ofir beach (north of Portugal), a critical coastal area. At the beginning of this project (2010) a video monitoring station was developed, collecting snapshots and 10-minute videos every hour. In order to process the data, several video image processing algorithms were implemented in Matlab®, allowing the main video-monitoring products, such as shoreline detection, to be obtained. An algorithm based on image processing techniques was developed using the HSV color space: a study area and a sample area containing pixels associated with dry and wet regions are selected, over which thresholding and morphological operators are applied. After comparison with manual digitization, promising results were achieved despite the method's simplicity, and the method is under continuous development in order to optimize the results.
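
    A bare-bones version of such an HSV-based wet/dry separation is sketched below with OpenCV; the synthetic input frame, saturation threshold and morphological kernel size are illustrative guesses, whereas the actual MoZCo algorithm (implemented in Matlab) derives its threshold from the sampled dry and wet pixels:

        # Sketch: wet/dry segmentation of a shoreline snapshot in HSV space.
        import cv2
        import numpy as np

        # Stand-in for a camera snapshot of the beach (random image, hypothetical data).
        frame = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        saturation = hsv[:, :, 1]

        # Threshold chosen here arbitrarily; in practice it would come from the
        # sampled dry-sand and wet-sand pixel distributions.
        _, wet_mask = cv2.threshold(saturation, 60, 255, cv2.THRESH_BINARY)

        # Morphological opening/closing to remove speckle and fill small gaps.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
        wet_mask = cv2.morphologyEx(wet_mask, cv2.MORPH_OPEN, kernel)
        wet_mask = cv2.morphologyEx(wet_mask, cv2.MORPH_CLOSE, kernel)

        # The shoreline is taken as the boundary of the largest wet region.
        contours, _ = cv2.findContours(wet_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        shoreline = max(contours, key=cv2.contourArea) if contours else None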

  13. An acoustic charge transport imager for high definition television applications

    NASA Technical Reports Server (NTRS)

    Hunt, William D.; Brennan, Kevin F.; Summers, Chris J.

    1992-01-01

    In this report we present the progress during the second six-month period of the project. This includes both experimental and theoretical work on the acoustic charge transport (ACT) portion of the chip, theoretical modelling of both the avalanche photodiode (APD) and the charge transfer and overflow transistor, and the materials growth and fabrication part of the program.

  14. Method and apparatus for detecting internal structures of bulk objects using acoustic imaging

    DOEpatents

    Deason, Vance A.; Telschow, Kenneth L.

    2002-01-01

    Apparatus for producing an acoustic image of an object according to the present invention may comprise an excitation source for vibrating the object to produce at least one acoustic wave therein. The acoustic wave results in the formation of at least one surface displacement on the surface of the object. A light source produces an optical object wavefront and an optical reference wavefront and directs the optical object wavefront toward the surface of the object to produce a modulated optical object wavefront. A modulator operatively associated with the optical reference wavefront modulates the optical reference wavefront in synchronization with the acoustic wave to produce a modulated optical reference wavefront. A sensing medium positioned to receive the modulated optical object wavefront and the modulated optical reference wavefront combines the modulated optical object and reference wavefronts to produce an image related to the surface displacement on the surface of the object. A detector detects the image related to the surface displacement produced by the sensing medium. A processing system operatively associated with the detector constructs an acoustic image of interior features of the object based on the phase and amplitude of the surface displacement on the surface of the object.

  15. Tonpilz piezoelectric transducers with acoustic matching plates for underwater color image transmission.

    PubMed

    Inoue, T; Nada, T; Tsuchiya, T; Nakanishi, T; Miyama, T; Konno, M

    1993-01-01

    Tonpilz piezoelectric transducers with multiple acoustic matching plates are suitable for color image acoustic transmission, to achieve wideband low-ripple characteristics as well as high-efficiency high-power transmitting capability. The design method for the transducers was investigated on the basis of multiple-mode filter synthesis theory. For transducers with single, double, and triple matching plates, optimum specific acoustic impedances and lengths were calculated. Moreover, based on this design method, a 24 kHz array comprising nine identical transducers with single matching plates was built and evaluated. As a result, this array showed high-efficiency, low-ripple, and wideband characteristics. Excellent agreement between theoretical values and experimental results was obtained. A field test was carried out on color image transmission from a 3500 m sea depth, using the fabricated array, during which good color images were received.

  16. A four-dimensional snapshot hyperspectral video-endoscope for bio-imaging applications

    NASA Astrophysics Data System (ADS)

    Lim, Hoong-Ta; Murukeshan, Vadakke Matham

    2016-04-01

    Hyperspectral imaging has proven significance in bio-imaging applications and it has the ability to capture up to several hundred images of different wavelengths offering relevant spectral signatures. To use hyperspectral imaging for in vivo monitoring and diagnosis of the internal body cavities, a snapshot hyperspectral video-endoscope is required. However, such reported systems provide only about 50 wavelengths. We have developed a four-dimensional snapshot hyperspectral video-endoscope with a spectral range of 400-1000 nm, which can detect 756 wavelengths for imaging, significantly more than such systems. Capturing the three-dimensional datacube sequentially gives the fourth dimension. All these are achieved through a flexible two-dimensional to one-dimensional fiber bundle. The potential of this custom designed and fabricated compact biomedical probe is demonstrated by imaging phantom tissue samples in reflectance and fluorescence imaging modalities. It is envisaged that this novel concept and developed probe will contribute significantly towards diagnostic in vivo biomedical imaging in the near future.

  17. A four-dimensional snapshot hyperspectral video-endoscope for bio-imaging applications

    PubMed Central

    Lim, Hoong-Ta; Murukeshan, Vadakke Matham

    2016-01-01

    Hyperspectral imaging has proven significance in bio-imaging applications and it has the ability to capture up to several hundred images of different wavelengths offering relevant spectral signatures. To use hyperspectral imaging for in vivo monitoring and diagnosis of the internal body cavities, a snapshot hyperspectral video-endoscope is required. However, such reported systems provide only about 50 wavelengths. We have developed a four-dimensional snapshot hyperspectral video-endoscope with a spectral range of 400–1000 nm, which can detect 756 wavelengths for imaging, significantly more than such systems. Capturing the three-dimensional datacube sequentially gives the fourth dimension. All these are achieved through a flexible two-dimensional to one-dimensional fiber bundle. The potential of this custom designed and fabricated compact biomedical probe is demonstrated by imaging phantom tissue samples in reflectance and fluorescence imaging modalities. It is envisaged that this novel concept and developed probe will contribute significantly towards diagnostic in vivo biomedical imaging in the near future. PMID:27044607

  18. A novel rain removal technology based on video image

    NASA Astrophysics Data System (ADS)

    Liu, Shuo; Piao, Yan

    2016-11-01

    Bad weather conditions often cause visual distortions in images captured by outdoor vision systems, and rain is one specific example. Generally, rain streaks are small and fall at high velocity. Traditional rain removal methods often produce a blurred visual effect, have high time complexity, and leave some rain streaks in the de-rained image. Based on the characteristics of rain streaks, a novel rain removal technology is proposed. The proposed method not only removes rain streaks effectively but also retains much detail information. Experiments show that the proposed method outperforms traditional rain removal methods. It can be widely used in intelligent traffic, civilian surveillance, national security, and so on.
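
    The paper's algorithm is not spelled out in the abstract; as a generic baseline for comparison only (a common simple de-raining approach, not the proposed method), the sketch below suppresses short-lived rain streaks by taking a per-pixel temporal median over a small sliding window of frames:

        # Baseline sketch: temporal-median de-raining over a sliding window of frames.
        import numpy as np

        def derain_temporal_median(frames, window=5):
            """frames: (N, H, W) or (N, H, W, 3) array from a static camera."""
            frames = np.asarray(frames)
            half = window // 2
            out = np.empty_like(frames)
            for i in range(len(frames)):
                lo, hi = max(0, i - half), min(len(frames), i + half + 1)
                # Rain streaks are transient, so the per-pixel median over time suppresses them.
                out[i] = np.median(frames[lo:hi], axis=0)
            return out

        video = np.random.rand(30, 120, 160, 3)   # placeholder clip
        clean = derain_temporal_median(video)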

  19. Using image analysis to develop reference standards for the video trashmeter

    NASA Astrophysics Data System (ADS)

    Thibodeaux, Devron P.; Evans, Janice P.

    1995-01-01

    Results of research to develop a reference method for calibrating standard dot image tiles and cotton trash boxes are reported. The tiles and trash boxes are produced by the Agricultural Marketing Service for use in standardizing HVI Video Trashmeters. The reference method involves the use of a highly sensitive image analysis system (the Quantimet 970) to measure the number and percent area fraction of particle images produced on the replica tiles or of real trash particles placed on the surface of cotton encased in plastic boxes. Calibration data for a set of tiles and boxes are presented. The effects of magnification (system resolution) and detection threshold on measurement accuracy are investigated.
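
    The two reference quantities mentioned, particle count and percent area fraction, can be computed from a binarized trash image as in this sketch; the fixed intensity threshold and random test image are assumptions standing in for the Quantimet's detection threshold and real imagery:

        # Sketch: count trash particles and compute their percent area fraction.
        import numpy as np
        from skimage.measure import label

        def trash_statistics(gray_image, threshold=0.5):
            """gray_image: 2-D array in [0, 1]; dark particles on a light cotton background."""
            particles = gray_image < threshold          # binary particle mask
            labeled = label(particles)                  # connected-component labelling
            count = labeled.max()                       # number of distinct particles
            area_fraction = 100.0 * particles.mean()    # percent of image area covered
            return count, area_fraction

        img = np.random.rand(512, 512)
        n, pct = trash_statistics(img)
        print(f"{n} particles, {pct:.2f}% area fraction")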

  20. Microcomputer video image processing technology in working posture analysis: application to seated postures of keyboard operators.

    PubMed

    Wrigley, T V; Green, R A; Briggs, C A

    1991-02-01

    A new two-dimensional video-based technique for the recording and analysis of working posture has been developed and applied to seated work. After initial set-up of the portable equipment in the workplace, and attachment of small adhesive retroreflective joint markers to the subject, an operator is not required for the remainder of the recording session. Analysis of the video recording is conducted in the laboratory using custom-written software and a commercially available image-processing package running on an IBM AT-compatible computer. After interactive set-up of a reference frame, the system is able to analyse the full video tape automatically, extracting video frames for analysis at approximately 30-s intervals. Poor image quality may occasionally necessitate interactive analysis. The system is capable of determining postural angles to an accuracy of 1-2 degrees, and thus represents a substantial improvement on other postural analysis systems that can readily be used in the workplace.
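
    Postural angles of the kind reported (to 1-2 degrees) can be derived from tracked retroreflective marker coordinates as in this small sketch; the marker names and coordinates are hypothetical:

        # Sketch: included joint angle from three 2-D marker positions (e.g. hip-shoulder-elbow).
        import numpy as np

        def joint_angle(a, b, c):
            """Angle at marker b (degrees) formed by segments b->a and b->c."""
            a, b, c = map(np.asarray, (a, b, c))
            v1, v2 = a - b, c - b
            cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
            return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

        # Hypothetical marker coordinates (pixels) from one analysed video frame.
        hip, shoulder, elbow = (320, 400), (330, 250), (420, 260)
        print(f"Trunk-upper arm angle: {joint_angle(hip, shoulder, elbow):.1f} deg")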

  1. Image processing and computer controls for video profile diagnostic system in the ground test accelerator (GTA)

    SciTech Connect

    Wright, R.M.; Zander, M.E.; Brown, S.K.; Sandoval, D.P.; Gilpatrick, J.D.; Gibson, H.E.

    1992-09-01

    This paper describes the application of video image processing to beam profile measurements on the Ground Test Accelerator (GTA). A diagnostic was needed to measure beam profiles in the intermediate matching section (IMS) between the radio-frequency quadrupole (RFQ) and the drift tube linac (DTL). Beam profiles are measured by injecting puffs of gas into the beam. The light emitted from the beam-gas interaction is captured and processed by a video image processing system, generating the beam profile data. A general purpose, modular and flexible video image processing system, imagetool, was used for the GTA image profile measurement. The development of both software and hardware for imagetool and its integration with the GTA control system (GTACS) will be discussed. The software includes specialized algorithms for analyzing data and calibrating the system. The underlying design philosophy of imagetool was tested by the experience of building and using the system, pointing the way for future improvements. The current status of the system will be illustrated by samples of experimental data.
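
    Conceptually, a beam profile of this kind is obtained by integrating the beam-gas fluorescence image across the beam and summarising the resulting distribution; the sketch below (with a synthetic Gaussian image) shows a centroid and RMS-width computation, not the imagetool implementation:

        # Sketch: horizontal beam profile, centroid and RMS width from a beam-gas image.
        import numpy as np

        def beam_profile(image):
            profile = image.sum(axis=0)                        # integrate along the vertical axis
            x = np.arange(profile.size)
            total = profile.sum()
            centroid = (x * profile).sum() / total
            rms_width = np.sqrt(((x - centroid) ** 2 * profile).sum() / total)
            return profile, centroid, rms_width

        # Synthetic Gaussian "beam" image as a stand-in for the video frame.
        yy, xx = np.mgrid[0:100, 0:200]
        image = np.exp(-((xx - 120) ** 2) / (2 * 15 ** 2) - ((yy - 50) ** 2) / (2 * 10 ** 2))
        profile, c, w = beam_profile(image)
        print(f"centroid = {c:.1f} px, rms width = {w:.1f} px")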

  2. Image processing and computer controls for video profile diagnostic system in the ground test accelerator (GTA)

    SciTech Connect

    Wright, R.M.; Zander, M.E.; Brown, S.K.; Sandoval, D.P.; Gilpatrick, J.D.; Gibson, H.E.

    1992-01-01

    This paper describes the application of video image processing to beam profile measurements on the Ground Test Accelerator (GTA). A diagnostic was needed to measure beam profiles in the intermediate matching section (IMS) between the radio-frequency quadrupole (RFQ) and the drift tube linac (DTL). Beam profiles are measured by injecting puffs of gas into the beam. The light emitted from the beam-gas interaction is captured and processed by a video image processing system, generating the beam profile data. A general purpose, modular and flexible video image processing system, imagetool, was used for the GTA image profile measurement. The development of both software and hardware for imagetool and its integration with the GTA control system (GTACS) will be discussed. The software includes specialized algorithms for analyzing data and calibrating the system. The underlying design philosophy of imagetool was tested by the experience of building and using the system, pointing the way for future improvements. The current status of the system will be illustrated by samples of experimental data.

  3. The effects of video compression on acceptability of images for monitoring life sciences experiments

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Chuang, Sherry L.

    1992-01-01

    Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine if video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Expert Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters

  4. Acoustic imaging of vapor bubbles through optically non-transparent media

    NASA Astrophysics Data System (ADS)

    Kolbe, W. F.; Turko, B. T.; Leskovar, B.

    1983-10-01

    A preliminary investigation of the feasibility of acoustic imaging of vapor bubbles through optically nontransparent media is described. Measurements are reported showing the echo signals produced by air filled glass spheres of various sizes positioned in an aqueous medium as well as signals produced by actual vapor bubbles within a water filled steel pipe. In addition, the influence of the metallic wall thickness and material on the amplitude of the echo signals is investigated. Finally several examples are given of the imaging of spherical bubbles within metallic pipes using a simulated array of acoustic transducers mounted circumferentially around the pipe. The measurement procedures and a description of the measuring system are also given.

  5. Exploration of amphoteric and negative refraction imaging of acoustic sources via active metamaterials

    NASA Astrophysics Data System (ADS)

    Wen, Jihong; Shen, Huijie; Yu, Dianlong; Wen, Xisen

    2013-11-01

    The present work describes the design of three flat superlens structures for acoustic source imaging and explores an active acoustic metamaterial (AAM) to realise such a design. The first two lenses are constructed via the coordinate transform method (CTM), and their constituent materials are anisotropic. The third lens consists of a material that has both a negative density and a negative bulk modulus. In these lenses, the quality of the images is “clear” and sharp; thus, the diffraction limit of classical lenses is overcome. Finally, a multi-control strategy is developed to achieve the desired parameters and to eliminate coupling effects in the AAM.

  6. Picturing Video

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Video Pics is a software program that generates high-quality photos from video. The software was developed under an SBIR contract with Marshall Space Flight Center by Redhawk Vision, Inc.--a subsidiary of Irvine Sensors Corporation. Video Pics takes information content from multiple frames of video and enhances the resolution of a selected frame. The resulting image has enhanced sharpness and clarity like that of a 35 mm photo. The images are generated as digital files and are compatible with image editing software.

  7. Viral video: Live imaging of virus-host encounters

    NASA Astrophysics Data System (ADS)

    Son, Kwangmin; Guasto, Jeffrey S.; Cubillos-Ruiz, Andres; Chisholm, Sallie W.; Sullivan, Matthew B.; Stocker, Roman

    2014-11-01

    Viruses are non-motile infectious agents that rely on Brownian motion to encounter and subsequently adsorb to their hosts. Paradoxically, the viral adsorption rate is often reported to be larger than the theoretical limit imposed by the virus-host encounter rate, highlighting a major gap in the experimental quantification of virus-host interactions. Here we present the first direct quantification of the viral adsorption rate, obtained using live imaging of individual host cells and viruses for thousands of encounter events. The host-virus pair consisted of Prochlorococcus MED4, an 800-nm non-motile bacterium that dominates photosynthesis in the oceans, and its virus PHM-2, a myovirus that has an 80-nm icosahedral capsid and a 200-nm-long rigid tail. We simultaneously imaged hosts and viruses moving by Brownian motion using two-channel epifluorescence microscopy in a microfluidic device. This detailed quantification of viral transport yielded a 20-fold smaller adsorption efficiency than previously reported, indicating the need for a major revision in infection models for marine and likely other ecosystems.
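
    The theoretical encounter-rate limit referred to above is commonly estimated with the Smoluchowski diffusion-limited rate; the sketch below evaluates it for spheres of roughly the quoted sizes (the effective radii, temperature and viscosity are illustrative assumptions, not values from the study):

        # Sketch: Smoluchowski diffusion-limited encounter rate between host and virus.
        import math

        kB = 1.380649e-23      # Boltzmann constant, J/K
        T = 293.0              # temperature, K (assumed)
        eta = 1.0e-3           # viscosity of water, Pa*s (assumed)

        def stokes_einstein_D(radius_m):
            return kB * T / (6 * math.pi * eta * radius_m)

        r_host = 0.4e-6        # ~800 nm diameter cell -> 400 nm radius
        r_virus = 0.1e-6       # effective virus radius (rough assumption for capsid + tail)

        D = stokes_einstein_D(r_host) + stokes_einstein_D(r_virus)   # relative diffusivity
        k = 4 * math.pi * D * (r_host + r_virus)                     # m^3/s per host-virus pair
        k_mL_per_day = k * 1e6 * 86400                               # convert to mL/day
        print(f"encounter rate constant ~ {k_mL_per_day:.2e} mL/day")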

  8. MO-A-BRD-06: In Vivo Cherenkov Video Imaging to Verify Whole Breast Irradiation Treatment

    SciTech Connect

    Zhang, R; Glaser, A; Jarvis, L; Gladstone, D; Andreozzi, J; Hitchcock, W; Pogue, B

    2014-06-15

    Purpose: To show that in vivo video imaging of Cherenkov emission (Cherenkoscopy) can be acquired in the clinical treatment room without affecting the normal process of external beam radiation therapy (EBRT). Applications of Cherenkoscopy, such as patient positioning, movement tracking, treatment monitoring and superficial dose estimation, were examined. Methods: In a phase 1 clinical trial including 12 patients undergoing post-lumpectomy whole breast irradiation, Cherenkov emission was imaged with a time-gated ICCD camera synchronized to the radiation pulses during 10 fractions of the treatment. Images from different treatment days were compared by calculating the 2-D correlations with respect to the averaged image. An edge detection algorithm was utilized to highlight biological features, such as the blood vessels. Superficial doses deposited at the sampling depth were derived from the Eclipse treatment planning system (TPS) and compared with the Cherenkov images. Skin reactions were graded weekly according to the Common Toxicity Criteria and digital photographs were obtained for comparison. Results: Real-time (4.8 fps) imaging of Cherenkov emission was feasible, and tests indicated that it could be improved to video rate (30 fps) with system improvements. Dynamic field changes due to fast MLC motion were imaged in real time. The average 2-D correlation was about 0.99, suggesting that the stability of this imaging technique and the repeatability of patient positioning were outstanding. Edge-enhanced images of blood vessels were observed and could serve as unique biological markers for patient positioning and movement tracking (breathing). Small discrepancies exist between the Cherenkov images and the superficial dose predicted from the TPS, but the former agreed better with actual skin reactions than did the latter. Conclusion: Real-time Cherenkoscopy imaging during EBRT is a novel imaging tool that could be utilized for patient positioning, movement tracking
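
    The day-to-day repeatability figure (mean 2-D correlation near 0.99) corresponds to a normalized correlation between each fraction's image and the average image, which can be computed as in this short sketch (array shapes and data are placeholders):

        # Sketch: 2-D correlation of each day's Cherenkov image with the average image.
        import numpy as np

        def correlation_2d(a, b):
            """Pearson correlation coefficient between two images of identical shape."""
            return np.corrcoef(a.ravel(), b.ravel())[0, 1]

        daily_images = np.random.rand(10, 256, 256)          # placeholder: 10 fractions
        mean_image = daily_images.mean(axis=0)
        corrs = [correlation_2d(img, mean_image) for img in daily_images]
        print("mean 2-D correlation:", np.mean(corrs))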

  9. Body movement analysis during sleep for children with ADHD using video image processing.

    PubMed

    Nakatani, Masahiro; Okada, Shima; Shimizu, Sachiko; Mohri, Ikuko; Ohno, Yuko; Taniike, Masako; Makikawa, Masaaki

    2013-01-01

    In recent years, the number of children with sleep disorders that cause arousal during sleep or light sleep has been increasing. Attention-deficit hyperactivity disorder (ADHD) is one cause of such sleep disturbance; children with ADHD have frequent body movement during sleep. Therefore, we investigated the body movement during sleep of children with and without ADHD using video imaging. We analysed gross body movements (GM) and obtained the GM rate and the rest duration. There were differences between the body movements of children with ADHD and normally developed children. The children with ADHD moved frequently, so their rest duration was shorter than that of the normally developed children. Additionally, the rate of gross body movement showed a significant difference during REM sleep (p < 0.05). In the future, we will develop a new device that can easily diagnose children with ADHD, using video image processing.
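
    A minimal video-based movement detector in the spirit described (not the authors' algorithm) thresholds inter-frame differences and reports movement flags and rest durations, as sketched here with placeholder thresholds and data:

        # Sketch: detect gross body movement epochs from inter-frame differences.
        import numpy as np

        def movement_flags(frames, diff_thresh=10.0, frac_thresh=0.02):
            """frames: (N, H, W) grayscale video; returns a boolean movement flag per frame pair."""
            frames = np.asarray(frames, dtype=float)
            diffs = np.abs(np.diff(frames, axis=0))                  # pixel-wise change
            changed_fraction = (diffs > diff_thresh).mean(axis=(1, 2))
            return changed_fraction > frac_thresh                    # True where the body moved

        def rest_durations(moving, fps=30):
            """Lengths (seconds) of consecutive still periods."""
            durations, run = [], 0
            for m in moving:
                if m:
                    if run:
                        durations.append(run / fps)
                    run = 0
                else:
                    run += 1
            if run:
                durations.append(run / fps)
            return durations

        video = np.random.randint(0, 256, size=(300, 120, 160))
        flags = movement_flags(video)
        print("GM rate:", flags.mean(), "rest durations (s):", rest_durations(flags)[:5])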

  10. Synthetic Aperture Acoustic Imaging for Roadside Detection of Solid Objects

    DTIC Science & Technology

    2014-11-20

    ... degrees azimuth. These are only one example that validated the approach. The next step was to develop a system that could be used to collect data along a ... [Figure 5.3 caption: The sample grow box is shown; on the right side is germinated Kentucky perennial grass, on the left side is soil that has been sieved to ...] Objects that are electromagnetically opaque, like the chain link fence, are transparent acoustically. An important next step in this research is to collect target data using ...

  11. 3D Underwater Imaging Using Vector Acoustic Sensors

    DTIC Science & Technology

    2007-12-01

    ... infidelity. Directionality also can be lost when two waves from different directions arrive simultaneously. Figure 3 shows a hodograph of the direct ... (red) deviated substantially from the axis. [Figure 3 caption: Hodograph of the x-axis response, showing the sensor motions caused by the scattered waves from the targets.] This hodograph illustrates the directional information in vector acoustic data.

  12. The effect of music video clips on adolescent boys' body image, mood, and schema activation.

    PubMed

    Mulgrew, Kate E; Volcevski-Kostas, Diana; Rendell, Peter G

    2014-01-01

    There is limited research that has examined experimentally the effects of muscular images on adolescent boys' body image, with no research specifically examining the effects of music television. The aim of the current study was to examine the effects of viewing muscular and attractive singers in music video clips on early, mid, and late adolescent boys' body image, mood, and schema activation. Participants were 180 boys in grade 7 (mean age = 12.73 years), grade 9 (mean age = 14.40 years) or grade 11 (mean age = 16.15 years) who completed pre- and post-test measures of mood and body satisfaction after viewing music videos containing male singers of muscular or average appearance. They also completed measures of schema activation and social comparison after viewing the clips. The results showed that the boys who viewed the muscular clips reported poorer upper body satisfaction, lower appearance satisfaction, lower happiness, and more depressive feelings compared to boys who viewed the clips depicting singers of average appearance. There was no evidence of increased appearance schema activation but the boys who viewed the muscular clips did report higher levels of social comparison to the singers. The results suggest that music video clips are a powerful form of media in conveying information about the male ideal body shape and that negative effects are found in boys as young as 12 years.

  13. 2D-pattern matching image and video compression: theory, algorithms, and experiments.

    PubMed

    Alzina, Marc; Szpankowski, Wojciech; Grama, Ananth

    2002-01-01

    In this paper, we propose a lossy data compression framework based on an approximate two-dimensional (2D) pattern matching (2D-PMC) extension of the Lempel-Ziv (1977, 1978) lossless scheme. This framework forms the basis upon which higher level schemes relying on differential coding, frequency domain techniques, prediction, and other methods can be built. We apply our pattern matching framework to image and video compression and report on theoretical and experimental results. Theoretically, we show that the fixed database model used for video compression leads to suboptimal but computationally efficient performance. The compression ratio of this model is shown to tend to the generalized entropy. For image compression, we use a growing database model for which we provide an approximate analysis. The implementation of 2D-PMC is a challenging problem from the algorithmic point of view. We use a range of techniques and data structures such as k-d trees, generalized run length coding, adaptive arithmetic coding, and variable and adaptive maximum distortion level to achieve good compression ratios at high compression speeds. We demonstrate bit rates in the range of 0.25-0.5 bpp for high-quality images and data rates in the range of 0.15-0.5 Mbps for a baseline video compression scheme that does not use any prediction or interpolation. We also demonstrate that this asymmetric compression scheme is capable of extremely fast decompression making it particularly suitable for networked multimedia applications.

  14. Feature point tracking and trajectory analysis for video imaging in cell biology.

    PubMed

    Sbalzarini, I F; Koumoutsakos, P

    2005-08-01

    This paper presents a computationally efficient, two-dimensional, feature point tracking algorithm for the automated detection and quantitative analysis of particle trajectories as recorded by video imaging in cell biology. The tracking process requires no a priori mathematical modeling of the motion, it is self-initializing, it discriminates spurious detections, and it can handle temporary occlusion as well as particle appearance and disappearance from the image region. The efficiency of the algorithm is validated on synthetic video data where it is compared to existing methods and its accuracy and precision are assessed for a wide range of signal-to-noise ratios. The algorithm is well suited for video imaging in cell biology relying on low-intensity fluorescence microscopy. Its applicability is demonstrated in three case studies involving transport of low-density lipoproteins in endosomes, motion of fluorescently labeled Adenovirus-2 particles along microtubules, and tracking of quantum dots on the plasma membrane of live cells. The present automated tracking process enables the quantification of dispersive processes in cell biology using techniques such as moment scaling spectra.
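
    The core linking step of such a tracker can be illustrated with a greedy nearest-neighbour association between detections in consecutive frames; the published algorithm uses a more robust cost minimisation with occlusion handling, so this is only a conceptual sketch with synthetic detections:

        # Sketch: greedy nearest-neighbour linking of feature-point detections between frames.
        import numpy as np

        def link_frames(points_a, points_b, max_disp=10.0):
            """Return a list of (i, j) index pairs linking points_a[i] -> points_b[j]."""
            links, used = [], set()
            for i, p in enumerate(points_a):
                d = np.linalg.norm(points_b - p, axis=1)
                d[list(used)] = np.inf                     # each detection used at most once
                j = int(np.argmin(d))
                if d[j] <= max_disp:                       # reject implausibly large jumps
                    links.append((i, j))
                    used.add(j)
            return links

        frame1 = np.random.rand(20, 2) * 100
        frame2 = frame1 + np.random.randn(20, 2)           # small Brownian-like displacements
        print(link_frames(frame1, frame2)[:5])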

  15. Measurement of thigmomorphogenesis and gravitropism by non-intrusive computerized video image processing

    NASA Technical Reports Server (NTRS)

    Jaffe, M. J.

    1984-01-01

    A video image processing instrument, DARWIN (Digital Analyser of Resolvable Whole-pictures by Image Numeration), was developed. It was programmed to measure stem or root growth and bending, and coupled to a specially mounted video camera so that it could automatically generate growth and bending curves during gravitropism. The growth of the plant is recorded on a video cassette recorder with a specially modified time-lapse function. At the end of the experiment, DARWIN analyses the growth or movement and prints out bending and growth curves. This system was used to measure thigmomorphogenesis in light-grown corn plants. If the plant is rubbed with an applied force load of 0.38 N, it grows faster than the unrubbed control, whereas 1.14 N retards its growth. Image analysis shows that most of the change in the rate of growth occurs in the first hour after rubbing. When DARWIN was used to measure gravitropism in dark-grown oat seedlings, it was found that the top side of the shoot contracts during the first hour of gravitational stimulus, whereas the bottom side begins to elongate after 10 to 15 minutes.

  16. Experimental study on acoustic subwavelength imaging based on zero-mass metamaterials

    NASA Astrophysics Data System (ADS)

    Xu, Xianchen; Li, Pei; Zhou, Xiaoming; Hu, Gengkai

    2015-01-01

    Anisotropic zero-mass acoustic metamaterials are able to transmit evanescent waves to a far distance without decay, and have been used for near-field acoustic subwavelength imaging. In this work, we design and fabricate such a metamaterial lens based on clamped paper membrane units. The zero-mass frequency is determined by normal-incidence acoustic transmission measurement. At this frequency, we verify experimentally that the fabricated metamaterial lens is able to distinguish clearly two sound sources separated by a distance of 0.16λ0 (λ0 is the wavelength in air), below the diffraction limit. We also demonstrate that the imaging frequency is invariant to changes in the lens thickness.

  17. A combined microphone and camera calibration technique with application to acoustic imaging.

    PubMed

    Legg, Mathew; Bradley, Stuart

    2013-10-01

    We present a calibration technique for an acoustic imaging microphone array, combined with a digital camera. Computer vision and acoustic time of arrival data are used to obtain microphone coordinates in the camera reference frame. Our new method allows acoustic maps to be plotted onto the camera images without the need for additional camera alignment or calibration. Microphones and cameras may be placed in an ad-hoc arrangement and, after calibration, the coordinates of the microphones are known in the reference frame of a camera in the array. No prior knowledge of microphone positions, inter-microphone spacings, or air temperature is required. This technique is applied to a spherical microphone array and a mean difference of 3 mm was obtained between the coordinates obtained with this calibration technique and those measured using a precision mechanical method.

  18. Segmentation and outline detection in underwater video images using particle filters

    NASA Astrophysics Data System (ADS)

    Yoerger, Edward J.; Charalampidis, Dimitrios; Ioup, George E.; Ioup, Juliette W.

    2016-05-01

    Recently we have been concerned with locating and tracking images of fish in underwater videos. While edge detection and region growing have assisted in obtaining some advances in this effort, a more extensive, non-linear approach appears necessary for improved results. In particular, the use of particle filtering applied to contour detection in natural images has met with some success. Following recent ideas in the literature, we are proposing to use a recursive Bayesian model which employs a sequential Monte Carlo approach, also known as the particle filter. This approach uses the corroboration between two scales of an image to produce various local features which characterize the different probability densities required by the particle filter. Since our data consist of video images of fish recorded by a stationary camera, we are capable of augmenting this process by means of background subtraction. Moreover, we are proposing a method that does not require the pre-computation of the distributions required by the particle filter. The above capabilities are applied to our dataset for the purpose of using contour detection with the aim of eventual segmentation of the fish images and fish classification. Although our dataset consists of fish images, the proposed techniques can be employed in applications involving different kinds of non-stationary underwater objects. We present results and examples of this analysis and discuss the particle filter application to our dataset.
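
    A stripped-down sequential Monte Carlo (particle filter) tracker of the general kind described is sketched below for following a single fish-like blob in background-subtracted frames; the random-walk motion model, intensity likelihood and synthetic clip are simplistic assumptions, not the contour-detection formulation of the paper:

        # Sketch: SIR particle filter tracking a blob position in background-subtracted frames.
        import numpy as np

        rng = np.random.default_rng(0)

        def track(frames, n_particles=500, motion_std=3.0):
            h, w = frames[0].shape
            particles = np.column_stack([rng.uniform(0, w, n_particles),
                                         rng.uniform(0, h, n_particles)])
            trajectory = []
            for frame in frames:
                # Predict: diffuse particles with a random-walk motion model.
                particles += rng.normal(0, motion_std, particles.shape)
                particles[:, 0] = np.clip(particles[:, 0], 0, w - 1)
                particles[:, 1] = np.clip(particles[:, 1], 0, h - 1)
                # Update: weight by foreground intensity at each particle location.
                weights = frame[particles[:, 1].astype(int), particles[:, 0].astype(int)] + 1e-12
                weights /= weights.sum()
                trajectory.append(weights @ particles)          # posterior mean estimate
                # Resample (multinomial for brevity; systematic resampling would be better).
                idx = rng.choice(n_particles, n_particles, p=weights)
                particles = particles[idx]
            return np.array(trajectory)

        # Synthetic background-subtracted clip with a bright blob drifting to the right.
        frames = []
        for t in range(30):
            f = np.zeros((120, 160))
            f[50:60, 20 + 3 * t: 30 + 3 * t] = 1.0
            frames.append(f)
        print(track(frames)[-1])   # estimated (x, y) in the last frame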

  19. A low-cost, high-resolution, video-rate imaging optical radar

    SciTech Connect

    Sackos, J.T.; Nellums, R.O.; Lebien, S.M.; Diegert, C.F.; Grantham, J.W.; Monson, T.

    1998-04-01

    Sandia National Laboratories has developed a unique type of portable low-cost range imaging optical radar (laser radar or LADAR). This innovative sensor is comprised of an active floodlight scene illuminator and an image-intensified CCD camera receiver. It is a solid-state device (no moving parts) that offers significant size, performance, reliability, and simplicity advantages over other types of 3-D imaging sensors. This unique flash LADAR is based on low-cost, commercially available hardware, and is well suited for many government and commercial uses. This paper presents an update of Sandia's development of the Scannerless Range Imager technology and applications, and discusses the progress that has been made in evolving the sensor into a compact, low-cost, high-resolution, video-rate Laser Dynamic Range Imager.

  20. Using image processing technology combined with decision tree algorithm in laryngeal video stroboscope automatic identification of common vocal fold diseases.

    PubMed

    Jeffrey Kuo, Chung-Feng; Wang, Po-Chun; Chu, Yueng-Hsiang; Wang, Hsing-Won; Lai, Chun-Yu

    2013-10-01

    This study used actual laryngeal video stroboscope videos taken by physicians in clinical practice as the samples for experimental analysis. The samples were dynamic vocal fold videos. Image processing technology was used to automatically capture the image with the largest glottal area from each video to obtain the physiological data of the vocal folds. In this study, an automatic vocal fold disease identification system was designed, which obtains the physiological parameters for normal vocal folds, vocal paralysis and vocal nodules from image processing according to the pathological features. The decision tree algorithm was used as the classifier of the vocal fold diseases. The identification rate was 92.6%, and it could be improved to 98.7% with an image-recognition refinement procedure applied after classification. Hence, the proposed system has value in clinical practice.
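
    The classification stage can be mimicked with scikit-learn's decision tree on extracted glottal parameters; the feature names, labels and data below are invented placeholders, not the study's measurements:

        # Sketch: decision-tree classification of vocal fold conditions from glottal parameters.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)
        # Hypothetical features per video: [max glottal area, left/right area ratio, edge irregularity]
        X = rng.normal(size=(300, 3))
        y = rng.integers(0, 3, size=300)   # 0 = normal, 1 = paralysis, 2 = nodules (placeholder labels)

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)
        print(f"identification rate: {clf.score(X_test, y_test):.1%}")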

  1. Acoustic Neuroma Educational Video

    MedlinePlus Videos and Cool Tools

  2. A real-time remote video streaming platform for ultrasound imaging.

    PubMed

    Ahmadi, Mehdi; Gross, Warren J; Kadoury, Samuel

    2016-08-01

    Ultrasound is a viable imaging technology in remote and resource-limited areas. Ultrasonography is a user-dependent skill that requires a high degree of training and hands-on experience. However, there is a limited number of skilled sonographers located in remote areas. In this work, we aim to develop a real-time video streaming platform that allows specialist physicians to remotely monitor ultrasound exams. To this end, an ultrasound stream is captured and transmitted through a wireless network to remote computers, smartphones and tablets. In addition, the system is equipped with a camera to track the position of the ultrasound probe. The main advantage of our work is the use of an open-source platform for video streaming, which gives us more control over streaming parameters than the available commercial products. The transmission delays of the system were evaluated for several ultrasound video resolutions, and the results show that ultrasound videos close to high-definition (HD) resolution can be received and displayed on an Android tablet with a delay of 0.5 seconds, which is acceptable for accurate real-time diagnosis.

  3. High-Performance Motion Estimation for Image Sensors with Video Compression

    PubMed Central

    Xu, Weizhi; Yin, Shouyi; Liu, Leibo; Liu, Zhiyong; Wei, Shaojun

    2015-01-01

    It is important to reduce the time cost of video compression for image sensors in video sensor networks. Motion estimation (ME) is the most time-consuming part of video compression. Previous work on ME exploited intra-frame data reuse in a reference frame to improve time efficiency but neglected inter-frame data reuse. We propose a novel inter-frame data reuse scheme which can exploit both intra-frame and inter-frame data reuse for ME in video compression (VC-ME). Pixels of reconstructed frames are kept on-chip until they are used by the next current frame to avoid off-chip memory access. On-chip buffers with smart schedules of data access are designed to perform the new data reuse scheme. Three levels of the proposed inter-frame data reuse scheme are presented and analyzed. They give different choices with a tradeoff between off-chip bandwidth requirement and on-chip memory size. All three levels have better data reuse efficiency than their intra-frame counterparts, so off-chip memory traffic is reduced effectively. Comparing the new inter-frame data reuse scheme with the traditional intra-frame data reuse scheme, the memory traffic can be reduced by 50% for VC-ME. PMID:26307996
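
    For reference, the computational pattern whose memory traffic the paper optimises is full-search block-matching motion estimation; a direct, unoptimised version is sketched below, where every current-frame block scans a search window of the reference frame (block and search sizes are typical values, not the paper's configuration):

        # Sketch: full-search block-matching ME with sum of absolute differences (SAD).
        import numpy as np

        def motion_vectors(cur, ref, block=16, search=8):
            """Return an (H//block, W//block, 2) array of (dy, dx) motion vectors."""
            h, w = cur.shape
            mvs = np.zeros((h // block, w // block, 2), dtype=int)
            for by in range(0, h - block + 1, block):
                for bx in range(0, w - block + 1, block):
                    cur_blk = cur[by:by+block, bx:bx+block].astype(int)
                    best, best_mv = None, (0, 0)
                    for dy in range(-search, search + 1):
                        for dx in range(-search, search + 1):
                            y, x = by + dy, bx + dx
                            if 0 <= y <= h - block and 0 <= x <= w - block:
                                sad = np.abs(cur_blk - ref[y:y+block, x:x+block].astype(int)).sum()
                                if best is None or sad < best:
                                    best, best_mv = sad, (dy, dx)
                    mvs[by // block, bx // block] = best_mv
            return mvs

        ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
        cur = np.roll(ref, (2, 3), axis=(0, 1))       # known global shift for the sketch
        print(motion_vectors(cur, ref)[2, 2])         # inner block: expect a vector near (-2, -3)

    The reuse schemes analysed in the paper reorder exactly this kind of computation so that reference pixels loaded on-chip serve both neighbouring blocks and subsequent frames instead of being re-fetched from off-chip memory.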

  4. Biologically relevant photoacoustic imaging phantoms with tunable optical and acoustic properties.

    PubMed

    Vogt, William C; Jia, Congxian; Wear, Keith A; Garra, Brian S; Joshua Pfefer, T

    2016-10-01

    Established medical imaging technologies such as magnetic resonance imaging and computed tomography rely on well-validated tissue-simulating phantoms for standardized testing of device image quality. The availability of high-quality phantoms for optical-acoustic diagnostics such as photoacoustic tomography (PAT) will facilitate standardization and clinical translation of these emerging approaches. Materials used in prior PAT phantoms do not provide a suitable combination of long-term stability and realistic acoustic and optical properties. Therefore, we have investigated the use of custom polyvinyl chloride plastisol (PVCP) formulations for imaging phantoms and identified a dual-plasticizer approach that provides biologically relevant ranges of relevant properties. Speed of sound and acoustic attenuation were determined over a frequency range of 4 to 9 MHz and optical absorption and scattering over a wavelength range of 400 to 1100 nm. We present characterization of several PVCP formulations, including one designed to mimic breast tissue. This material is used to construct a phantom comprised of an array of cylindrical, hemoglobin-filled inclusions for evaluation of penetration depth. Measurements with a custom near-infrared PAT imager provide quantitative and qualitative comparisons of phantom and tissue images. Results indicate that our PVCP material is uniquely suitable for PAT system image quality evaluation and may provide a practical tool for device validation and intercomparison.

  5. Biologically relevant photoacoustic imaging phantoms with tunable optical and acoustic properties

    NASA Astrophysics Data System (ADS)

    Vogt, William C.; Jia, Congxian; Wear, Keith A.; Garra, Brian S.; Joshua Pfefer, T.

    2016-10-01

    Established medical imaging technologies such as magnetic resonance imaging and computed tomography rely on well-validated tissue-simulating phantoms for standardized testing of device image quality. The availability of high-quality phantoms for optical-acoustic diagnostics such as photoacoustic tomography (PAT) will facilitate standardization and clinical translation of these emerging approaches. Materials used in prior PAT phantoms do not provide a suitable combination of long-term stability and realistic acoustic and optical properties. Therefore, we have investigated the use of custom polyvinyl chloride plastisol (PVCP) formulations for imaging phantoms and identified a dual-plasticizer approach that provides biologically relevant ranges of relevant properties. Speed of sound and acoustic attenuation were determined over a frequency range of 4 to 9 MHz and optical absorption and scattering over a wavelength range of 400 to 1100 nm. We present characterization of several PVCP formulations, including one designed to mimic breast tissue. This material is used to construct a phantom comprised of an array of cylindrical, hemoglobin-filled inclusions for evaluation of penetration depth. Measurements with a custom near-infrared PAT imager provide quantitative and qualitative comparisons of phantom and tissue images. Results indicate that our PVCP material is uniquely suitable for PAT system image quality evaluation and may provide a practical tool for device validation and intercomparison.

  6. A Correlated Microwave-Acoustic Imaging method for early-stage cancer detection.

    PubMed

    Gao, Fei; Zheng, Yuanjin

    2012-01-01

    Microwave-based imaging techniques show large potential for detecting early-stage cancer due to the significant dielectric contrast between tumor and surrounding healthy tissue. In this paper, we present a new method, named Correlated Microwave-Acoustic Imaging (CMAI), that combines two microwave-based imaging modalities: confocal microwave imaging (CMI), which detects the scattered microwave signal, and microwave-induced thermo-acoustic imaging (TAI), which detects the acoustic signal arising from microwave energy absorption and thermal expansion. The necessity of combining CMI and TAI is analyzed theoretically, and by applying a simple algorithm to CMI and TAI separately, we propose an image correlation approach that merges CMI and TAI to achieve better performance in terms of resolution and contrast. Preliminary numerical simulation shows promising results in low-contrast and large-variation scenarios. A UWB transmitter is designed and tested for future complete system implementation. This preliminary study inspires us to develop a new medical imaging modality, CMAI, to achieve real-time, high-resolution and high-contrast imaging simultaneously.

  7. Biologically relevant photoacoustic imaging phantoms with tunable optical and acoustic properties

    PubMed Central

    Vogt, William C.; Jia, Congxian; Wear, Keith A.; Garra, Brian S.; Joshua Pfefer, T.

    2016-01-01

    Abstract. Established medical imaging technologies such as magnetic resonance imaging and computed tomography rely on well-validated tissue-simulating phantoms for standardized testing of device image quality. The availability of high-quality phantoms for optical-acoustic diagnostics such as photoacoustic tomography (PAT) will facilitate standardization and clinical translation of these emerging approaches. Materials used in prior PAT phantoms do not provide a suitable combination of long-term stability and realistic acoustic and optical properties. Therefore, we have investigated the use of custom polyvinyl chloride plastisol (PVCP) formulations for imaging phantoms and identified a dual-plasticizer approach that provides biologically relevant ranges of relevant properties. Speed of sound and acoustic attenuation were determined over a frequency range of 4 to 9 MHz and optical absorption and scattering over a wavelength range of 400 to 1100 nm. We present characterization of several PVCP formulations, including one designed to mimic breast tissue. This material is used to construct a phantom comprised of an array of cylindrical, hemoglobin-filled inclusions for evaluation of penetration depth. Measurements with a custom near-infrared PAT imager provide quantitative and qualitative comparisons of phantom and tissue images. Results indicate that our PVCP material is uniquely suitable for PAT system image quality evaluation and may provide a practical tool for device validation and intercomparison. PMID:26886681

  8. Noninvasive estimation of temperature elevations in biological tissues using acoustic nonlinearity parameter imaging.

    PubMed

    Liu, Xiaozhou; Gong, Xiufen; Yin, Chang; Li, Junlun; Zhang, Dong

    2008-03-01

    A method for noninvasively imaging temperature would assist the development of hyperthermia. In this study, the relationships between the acoustic nonlinearity parameters and the temperatures in porcine fat and liver were obtained. The temperature elevations induced by ultrasound irradiation of porcine fat and liver were then derived inversely from acoustic nonlinearity parameter imaging. These temperature elevations were compared with theoretical predictions and with those measured by a thermocouple. The temperature elevations at the focus in the fat and liver samples measured via a thermocouple were 21.1 +/- 0.8 degrees C and 15.7 +/- 0.6 degrees C, respectively, which coincided with those obtained by acoustic nonlinearity parameter imaging (22.0 +/- 1.4 degrees C in fat and 16.9 +/- 1.1 degrees C in liver). These may be compared with the theoretical predictions of elevations of 24.0 degrees C in fat and 19.7 degrees C in liver. The results of this study show that acoustic nonlinearity imaging may be a novel method for temperature evaluation in hyperthermia. (E-mail: xzliu@nju.edu.cn).

  9. Selective magnetic resonance imaging of magnetic nanoparticles by Acoustically Induced Rotary Saturation (AIRS)

    PubMed Central

    Zhu, Bo; Witzel, Thomas; Jiang, Shan; Huang, Susie Y.; Rosen, Bruce R.; Wald, Lawrence L.

    2016-01-01

    Purpose We introduce a new method to selectively detect iron oxide contrast agents using an acoustic wave to perturb the spin-locked water signal in the vicinity of the magnetic particles. The acoustic drive can be externally modulated to turn the effect on and off, allowing sensitive and quantitative statistical comparison and removal of confounding image background variations. Methods We demonstrate the effect in spin-locking experiments using piezoelectric actuators to generate vibrational displacements of iron oxide samples. We observe a resonant behavior of the signal changes with respect to the acoustic frequency where iron oxide is present. We characterize the effect as a function of actuator displacement and contrast agent concentration. Results The resonant effect allows us to generate block-design “modulation response maps” indicating the contrast agent’s location, as well as positive contrast images with suppressed background signal. We show the AIRS effect stays approximately constant across acoustic frequency, and behaves monotonically over actuator displacement and contrast agent concentration. Conclusion AIRS is a promising method capable of using acoustic vibrations to modulate the contrast from iron oxide nanoparticles and thus perform selective detection of the contrast agents, potentially enabling more accurate visualization of contrast agents in clinical and research settings. PMID:25537578

  10. Acoustic imaging of underground storage tank wastes: A feasibility study. Final report

    SciTech Connect

    Turpening, R.; Zhu, Z.; Caravana, C.; Matarese, J.; Turpening, W.

    1995-12-31

    The objectives of this underground storage tank (UST) imaging investigation are: (1) to assess the feasibility of using acoustic methods in UST wastes and, if shown to be feasible, to develop and assess imaging strategies; and (2) to assess the validity of using chemical simulants for the development of acoustic methods and equipment. This investigation examined the velocity of surrogates, both salt cake and sludge surrogates, and collected seismic cross-well data in a real tank (114-TX) on the Hanford Reservation. Lastly, drawing on the knowledge of the simulants and the estimates of the velocities of the waste in tank 114-TX, the authors generated a hypothetical model of waste in a tank and showed that non-linear travel time tomographic imaging would faithfully image that stratigraphy.

  11. Video rate imaging of narrow band THz radiation based on frequency upconversion

    NASA Astrophysics Data System (ADS)

    Tekavec, Patrick F.; Kozlov, Vladimir G.; Mcnee, Ian; Spektor, Igor E.; Lebedev, Sergey P.

    2015-03-01

    We demonstrate video rate THz imaging by detecting a frequency upconverted signal with a CMOS camera. A fiber laser pumped, doubly resonant optical parametric oscillator generates THz pulses via difference frequency generation in a quasi-phasematched gallium arsenide (QPM-GaAs) crystal located inside the OPO cavity. The output produced THz pulses centered at 1.5 THz, with an average power up to 1 mW, a linewidth of <100 GHz, and peak power of >2 W. By mixing the THz pulses with a portion of the fiber laser pump (1064 nm) in a second QPM-GaAs crystal, distinct sidebands are observed at 1058 nm and 1070 nm, corresponding to sum and difference frequency generation of the pump pulse with the THz pulse. By using a polarizer and a long pass filter, the strong pump light can be removed, leaving a nearly background-free signal at 1070 nm. For imaging, a Fourier imaging geometry is used, with the object illuminated by the THz beam located one focal length from the GaAs crystal. The spatial Fourier transform is upconverted with a large-diameter pump beam, after which a second lens inverse transforms the upconverted spatial components, and the image is detected with a CMOS camera. We have obtained video rate images with a spatial resolution of 1 mm and a field of view ca. 20 mm in diameter without any post-processing of the data.

  12. Real-time three-dimensional Fourier-domain optical coherence tomography video image guided microsurgeries

    NASA Astrophysics Data System (ADS)

    Kang, Jin U.; Huang, Yong; Zhang, Kang; Ibrahim, Zuhaib; Cha, Jaepyeong; Lee, W. P. Andrew; Brandacher, Gerald; Gehlbach, Peter L.

    2012-08-01

    The authors describe the development of an ultrafast three-dimensional (3D) optical coherence tomography (OCT) imaging system that provides real-time intraoperative video images of the surgical site to assist surgeons during microsurgical procedures. This system is based on a full-range complex conjugate free Fourier-domain OCT (FD-OCT). The system was built in a CPU-GPU heterogeneous computing architecture capable of video OCT image processing. The system displays at a maximum speed of 10 volume/s for an image volume size of 160×80×1024 (X×Y×Z) pixels. We have used this system to visualize and guide two prototypical microsurgical maneuvers: microvascular anastomosis of the rat femoral artery and ultramicrovascular isolation of the retinal arterioles of the bovine retina. Our preliminary experiments using 3D-OCT-guided microvascular anastomosis showed optimal visualization of the rat femoral artery (diameter<0.8 mm), instruments, and suture material. Real-time intraoperative guidance helped facilitate precise suture placement due to optimized views of the vessel wall during anastomosis. Using the bovine retina as a model system, we have performed "ultra microvascular" feasibility studies by guiding handheld surgical micro-instruments to isolate retinal arterioles (diameter˜0.1 mm). Isolation of the microvessels was confirmed by successfully passing a suture beneath the vessel in the 3D imaging environment.

  13. Active millimeter-wave video rate imaging with a staring 120-element microbolometer array

    NASA Astrophysics Data System (ADS)

    Luukanen, Arttu; Miller, Aaron J.; Grossman, Erich N.

    2004-08-01

    Passive indoors imaging of weapons concealed under clothing poses a formidable challenge for millimeter-wave imagers due to the sub-picowatt signal levels present in the scene. Moreover, video-rate imaging requires a large number of pixels, which leads to a very complex and expensive front end for the imager. To meet the concealed weapons detection challenge, our approach uses a low cost pulsed-noise source as an illuminator and an array of room-temperature antenna-coupled microbolometers as the detectors. The reflected millimeter-wave power is detected by the bolometers, gated, integrated and amplified by audio-frequency amplifiers, and after digitization, displayed in real time on a PC display. We present recently acquired videos obtained with the 120-element array, and comprehensively describe the performance characteristics of the array in terms of sensitivity, optical efficiency, uniformity and spatial resolution. Our results show that active imaging with antenna-coupled microbolometers can yield imagery comparable to that obtained with systems using MMIC amplifiers but with a cost per pixel that is orders of magnitude lower.

  14. Real-time three-dimensional Fourier-domain optical coherence tomography video image guided microsurgeries

    PubMed Central

    Huang, Yong; Zhang, Kang; Ibrahim, Zuhaib; Cha, Jaepyeong; Lee, W. P. Andrew; Brandacher, Gerald; Gehlbach, Peter L.

    2012-01-01

    Abstract. The authors describe the development of an ultrafast three-dimensional (3D) optical coherence tomography (OCT) imaging system that provides real-time intraoperative video images of the surgical site to assist surgeons during microsurgical procedures. This system is based on a full-range complex conjugate free Fourier-domain OCT (FD-OCT). The system was built in a CPU-GPU heterogeneous computing architecture capable of video OCT image processing. The system displays at a maximum speed of 10  volume/s for an image volume size of 160×80×1024 (X×Y×Z) pixels. We have used this system to visualize and guide two prototypical microsurgical maneuvers: microvascular anastomosis of the rat femoral artery and ultramicrovascular isolation of the retinal arterioles of the bovine retina. Our preliminary experiments using 3D-OCT-guided microvascular anastomosis showed optimal visualization of the rat femoral artery (diameter<0.8  mm), instruments, and suture material. Real-time intraoperative guidance helped facilitate precise suture placement due to optimized views of the vessel wall during anastomosis. Using the bovine retina as a model system, we have performed “ultra microvascular” feasibility studies by guiding handheld surgical micro-instruments to isolate retinal arterioles (diameter∼0.1  mm). Isolation of the microvessels was confirmed by successfully passing a suture beneath the vessel in the 3D imaging environment. PMID:23224164

  15. Method and system to synchronize acoustic therapy with ultrasound imaging

    NASA Technical Reports Server (NTRS)

    Owen, Neil (Inventor); Bailey, Michael R. (Inventor); Hossack, James (Inventor)

    2009-01-01

    Interference in ultrasound imaging when used in connection with high intensity focused ultrasound (HIFU) is avoided by employing a synchronization signal to control the HIFU signal. Unless the timing of the HIFU transducer is controlled, its output will substantially overwhelm the signal produced by the ultrasound imaging system and obscure the image it produces. The synchronization signal employed to control the HIFU transducer is obtained without requiring modification of the ultrasound imaging system. Signals corresponding to scattered ultrasound imaging waves are collected using either the HIFU transducer or a dedicated receiver. A synchronization processor manipulates the scattered ultrasound imaging signals to derive the synchronization signal, which is then used to control the HIFU bursts so as to substantially reduce or eliminate HIFU interference in the ultrasound image. The synchronization processor can alternatively be implemented using a computing device or an application-specific circuit.

  16. Analysis of Decorrelation Transform Gain for Uncoded Wireless Image and Video Communication.

    PubMed

    Ruiqin Xiong; Feng Wu; Jizheng Xu; Xiaopeng Fan; Chong Luo; Wen Gao

    2016-04-01

    An uncoded transmission scheme called SoftCast has recently shown great potential for wireless video transmission. Unlike conventional approaches, SoftCast processes input images only by a series of transformations and modulates the coefficients directly to a dense constellation for transmission. The transmission is uncoded and lossy in nature, with its noise level commensurate with the channel condition. This paper presents a theoretical analysis of uncoded visual communication, focusing on developing a quantitative measure of the efficiency of the decorrelation transform in a generalized uncoded transmission framework. Our analysis reveals that the energy distribution among signal elements is critical to the efficiency of uncoded transmission. A decorrelation transform can potentially bring a significant performance gain by boosting the energy diversity in the signal representation. Numerical results on Markov random processes and real image and video signals are reported to evaluate the performance gain of using different transforms in uncoded transmission. The analysis presented in this paper is verified by simulated SoftCast transmissions and provides guidelines for designing efficient uncoded video transmission schemes.
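
    A minimal sketch of the transform-gain idea discussed above, assuming an AR(1) (first-order Markov) source as a stand-in for image rows and the common SoftCast-style figure of merit in which uncoded-transmission distortion scales with the squared sum of per-element standard deviations; the model parameters and the metric are illustrative assumptions, not values taken from the paper.

      import numpy as np
      from scipy.fft import dct

      rng = np.random.default_rng(0)
      n, trials, rho = 64, 5000, 0.95

      # Generate AR(1) rows: x[k] = rho * x[k-1] + noise (unit marginal variance)
      x = np.zeros((trials, n))
      x[:, 0] = rng.standard_normal(trials)
      for k in range(1, n):
          x[:, k] = rho * x[:, k - 1] + np.sqrt(1.0 - rho**2) * rng.standard_normal(trials)

      y = dct(x, axis=1, norm="ortho")          # decorrelating transform (DCT-II)

      var_x = x.var(axis=0)                     # per-element energy, pixel domain
      var_y = y.var(axis=0)                     # per-coefficient energy, DCT domain

      cost = lambda v: np.sum(np.sqrt(v)) ** 2  # distortion proxy for uncoded transmission
      print("pixel-domain cost:", cost(var_x))
      print("DCT-domain cost  :", cost(var_y))
      print("approximate transform gain (dB):", 10 * np.log10(cost(var_x) / cost(var_y)))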

  17. The temporomandibular joint in video motion--noninvasive image techniques to present the functional anatomy.

    PubMed

    Kordass, B

    1999-01-01

    Presenting the functional anatomy of the temporomandibular joint (TMJ) is difficult when dynamic aspects are of prime interest and the anatomy must be demonstrated at the highest resolution. Noninvasive techniques such as MRI and sonography are usually available for presenting the function of the temporomandibular joint in video motion. Such images reflect the functional anatomy far better than single still pictures can. In combination with computer-aided records of the condyle movements, the video motion of MR and sonographic images provides tools for better understanding the relationships between functional or dysfunctional patterns and the morphological or dysmorphological shape and structure of the temporomandibular joint. The possibilities of such tools are explained and discussed in detail, including the loading effects caused by occlusal pressure transmitted onto the joint compartments. Under pressure, the condyle slides mainly more or less retrocranially, whereas the articular disc takes up a more displaced position and a deformed shape. In a few extreme cases the disc prolapses out of the joint space. These video pictures offer new aspects for the diagnosis of disc-condyle stability and can also be used for explicit educational programs on the complex dysfunction-dysmorphology relationship of temporomandibular diseases.

  18. Image Size Scalable Full-parallax Coloured Three-dimensional Video by Electronic Holography

    PubMed Central

    Sasaki, Hisayuki; Yamamoto, Kenji; Ichihashi, Yasuyuki; Senoh, Takanori

    2014-01-01

    In electronic holography, various methods have been considered for using multiple spatial light modulators (SLM) to increase the image size. In a previous work, we used a monochrome light source for a method that located an optical system containing lens arrays and other components in front of multiple SLMs. This paper proposes a colourization technique for that system based on time division multiplexing using laser light sources of three colours (red, green, and blue). The experimental device we constructed was able to perform video playback (20 fps) in colour of full parallax holographic three-dimensional (3D) images with an image size of 63 mm and a viewing-zone angle of 5.6 degrees without losing any part of the 3D image. PMID:24499811

  19. Image Size Scalable Full-parallax Coloured Three-dimensional Video by Electronic Holography

    NASA Astrophysics Data System (ADS)

    Sasaki, Hisayuki; Yamamoto, Kenji; Ichihashi, Yasuyuki; Senoh, Takanori

    2014-02-01

    In electronic holography, various methods have been considered for using multiple spatial light modulators (SLM) to increase the image size. In a previous work, we used a monochrome light source for a method that located an optical system containing lens arrays and other components in front of multiple SLMs. This paper proposes a colourization technique for that system based on time division multiplexing using laser light sources of three colours (red, green, and blue). The experimental device we constructed was able to perform video playback (20 fps) in colour of full parallax holographic three-dimensional (3D) images with an image size of 63 mm and a viewing-zone angle of 5.6 degrees without losing any part of the 3D image.

  20. Video-rate imaging of microcirculation with single-exposure oblique back-illumination microscopy

    PubMed Central

    Mertz, Jerome

    2013-01-01

    Abstract. Oblique back-illumination microscopy (OBM) is a new technique for simultaneous, independent measurements of phase gradients and absorption in thick scattering tissues based on widefield imaging. To date, OBM has been used with sequential camera exposures, which reduces temporal resolution, and can produce motion artifacts in dynamic samples. Here, a variation of OBM that allows single-exposure operation with wavelength multiplexing and image splitting with a Wollaston prism is introduced. Asymmetric anamorphic distortion induced by the prism is characterized and corrected in real time using a graphics-processing unit. To demonstrate the capacity of single-exposure OBM to perform artifact-free imaging of blood flow, video-rate movies of microcirculation in ovo in the chorioallantoic membrane of the developing chick are presented. Imaging is performed with a high-resolution rigid Hopkins lens suitable for endoscopy. PMID:23733023

  1. Video-rate imaging of microcirculation with single-exposure oblique back-illumination microscopy

    NASA Astrophysics Data System (ADS)

    Ford, Tim N.; Mertz, Jerome

    2013-06-01

    Oblique back-illumination microscopy (OBM) is a new technique for simultaneous, independent measurements of phase gradients and absorption in thick scattering tissues based on widefield imaging. To date, OBM has been used with sequential camera exposures, which reduces temporal resolution, and can produce motion artifacts in dynamic samples. Here, a variation of OBM that allows single-exposure operation with wavelength multiplexing and image splitting with a Wollaston prism is introduced. Asymmetric anamorphic distortion induced by the prism is characterized and corrected in real time using a graphics-processing unit. To demonstrate the capacity of single-exposure OBM to perform artifact-free imaging of blood flow, video-rate movies of microcirculation in ovo in the chorioallantoic membrane of the developing chick are presented. Imaging is performed with a high-resolution rigid Hopkins lens suitable for endoscopy.

  2. Portable video rate time domain terahertz line imager for security and aerospace nondestructive examination

    NASA Astrophysics Data System (ADS)

    Zimdars, David; Fichter, G.; Megdanoff, C.; Murdock, M.; Duling, Irl; White, Jeffrey; Williamson, S. L.

    2010-04-01

    A portable video rate time-domain terahertz (TD-THz) reflection line-scanner suitable for aerospace nondestructive examination (NDE) and security inspection is described. The imager scans a line 6 inches wide and collects a TD-THz cross-sectional "B-scan" of the sub-surface structure at rates up to 30 Hz. The imager is hand-held. By rolling the scanner over a surface, a radiographic two-dimensional "C-scan" image can be stitched together from the individual lines at a rate of 1-4 inches per second (depending on desired resolution). The case is 8.7 in. wide (12.9 in. with wheels), 12.5 in. long, and 7.9 in. high. The weight is approximately 11 lbs. Example images of radome THz NDE taken with the scanner are shown.

  3. Time-resolved coherent X-ray diffraction imaging of surface acoustic waves.

    PubMed

    Nicolas, Jan-David; Reusch, Tobias; Osterhoff, Markus; Sprung, Michael; Schülein, Florian J R; Krenner, Hubert J; Wixforth, Achim; Salditt, Tim

    2014-10-01

    Time-resolved coherent X-ray diffraction experiments of standing surface acoustic waves, illuminated under grazing incidence by a nanofocused synchrotron beam, are reported. The data have been recorded in stroboscopic mode at controlled and varied phase between the acoustic frequency generator and the synchrotron bunch train. At each time delay (phase angle), the coherent far-field diffraction pattern in the small-angle regime is inverted by an iterative algorithm to yield the local instantaneous surface height profile along the optical axis. The results show that periodic nanoscale dynamics can be imaged at high temporal resolution in the range of 50 ps (pulse length).

  4. A synchronized particle image velocimetry and infrared thermography technique applied to an acoustic streaming flow

    PubMed Central

    Sou, In Mei; Layman, Christopher N.; Ray, Chittaranjan

    2013-01-01

    Subsurface coherent structures and surface temperatures are investigated using simultaneous measurements of particle image velocimetry (PIV) and infrared (IR) thermography. Results for coherent structures from acoustic streaming and the associated heat transfer in a rectangular tank with an acoustic horn mounted horizontally at the sidewall are presented. An observed vortex pair develops and propagates in the direction along the centerline of the horn. From the PIV velocity field data, distinct kinematic regions are found with the Lagrangian coherent structure (LCS) method. The implications of this analysis with respect to heat transfer and related sonochemical applications are discussed. PMID:24347810

  5. Time-resolved coherent X-ray diffraction imaging of surface acoustic waves

    PubMed Central

    Nicolas, Jan-David; Reusch, Tobias; Osterhoff, Markus; Sprung, Michael; Schülein, Florian J. R.; Krenner, Hubert J.; Wixforth, Achim; Salditt, Tim

    2014-01-01

    Time-resolved coherent X-ray diffraction experiments of standing surface acoustic waves, illuminated under grazing incidence by a nanofocused synchrotron beam, are reported. The data have been recorded in stroboscopic mode at controlled and varied phase between the acoustic frequency generator and the synchrotron bunch train. At each time delay (phase angle), the coherent far-field diffraction pattern in the small-angle regime is inverted by an iterative algorithm to yield the local instantaneous surface height profile along the optical axis. The results show that periodic nanoscale dynamics can be imaged at high temporal resolution in the range of 50 ps (pulse length). PMID:25294979

  6. Characteristics of luminous structures in the stratosphere above thunderstorms as imaged by low-light video

    SciTech Connect

    Lyons, W.A. (Ft. Collins, CO)

    1994-05-15

    An experiment was conducted in which an image-intensified, low-light video camera systematically monitored the stratosphere above distant (100-800 km) mesoscale convective systems over the high plains of the central US for 21 nights between 6 July and 27 August 1993. Complex, luminous structures were observed above large thunderstorm clusters on eleven nights, with one storm system (7 July 1993) yielding 248 events in 410 minutes. Their duration ranged from 33 to 283 ms, with an average of 98 ms. The luminous structures, generally not visible to the naked, dark-adapted eye, exhibited on video a wide variety of brightness levels and shapes including streaks, aurora-like curtains, smudges, fountains and jets. The structures were often more than 10 km wide and their upper portions extended to above 50 km msl. 14 refs., 4 figs.

  7. Characteristics of luminous structures in the stratosphere above thunderstorms as imaged by low-light video

    NASA Technical Reports Server (NTRS)

    Lyons, Walter A.

    1994-01-01

    An experiment was conducted in which an image-intensified, low-light video camera systematically monitored the stratosphere above distant (100-800 km) mesoscale convective systems over the high plains of the central U.S. for 21 nights between 6 July and 27 August 1993. Complex, luminous structures were observed above large thunderstorm clusters on eleven nights, with one storm system (7 July 1993) yielding 248 events in 410 minutes. Their duration ranged from 33 to 283 ms, with an average of 98 ms. The luminous structures, generally not visible to the naked, dark-adapted eye, exhibited on video a wide variety of brightness levels and shapes including streaks, aurora-like curtains, smudges, fountains and jets. The structures were often more than 10 km wide and their upper portions extended to above 50 km msl.

  8. Abnormal Image Detection in Endoscopy Videos Using a Filter Bank and Local Binary Patterns.

    PubMed

    Nawarathna, Ruwan; Oh, JungHwan; Muthukudage, Jayantha; Tavanapong, Wallapak; Wong, Johnny; de Groen, Piet C; Tang, Shou Jiang

    2014-11-20

    Finding mucosal abnormalities (e.g., erythema, blood, ulcer, erosion, and polyp) is one of the most essential tasks during endoscopy video review. Since these abnormalities typically appear in a small number of frames (around 5% of the total number of frames), automated detection of frames with an abnormality can save physicians' time significantly. In this paper, we propose a new multi-texture analysis method that effectively discerns images showing mucosal abnormalities from those without any abnormality, since most abnormalities in endoscopy images have textures that are clearly distinguishable from normal textures using an advanced image texture analysis method. The method uses a "texton histogram" of an image block as features. The histogram captures the distribution of different "textons" representing various textures in an endoscopy image. The textons are representative response vectors of an application of a combination of the Leung and Malik (LM) filter bank (i.e., a set of image filters) and a set of Local Binary Patterns on the image. Our experimental results indicate that the proposed method achieves 92% recall and 91.8% specificity on wireless capsule endoscopy (WCE) images and 91% recall and 90.8% specificity on colonoscopy images.
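
    A minimal sketch of block-level texture features in the spirit of the method above, assuming a plain uniform-LBP histogram per block as a stand-in for the full texton vocabulary built from the Leung-Malik filter bank combined with Local Binary Patterns; the block size and LBP parameters are illustrative choices, not the authors' settings.

      import numpy as np
      from skimage import io, color
      from skimage.feature import local_binary_pattern

      P, R = 8, 1          # LBP neighbourhood: 8 samples on a radius-1 circle
      BLOCK = 64           # block size in pixels

      def block_texture_features(path):
          gray = color.rgb2gray(io.imread(path))
          lbp = local_binary_pattern(gray, P, R, method="uniform")
          n_bins = P + 2   # number of distinct uniform LBP codes
          feats = []
          for r in range(0, gray.shape[0] - BLOCK + 1, BLOCK):
              for c in range(0, gray.shape[1] - BLOCK + 1, BLOCK):
                  patch = lbp[r:r + BLOCK, c:c + BLOCK]
                  hist, _ = np.histogram(patch, bins=n_bins, range=(0, n_bins), density=True)
                  feats.append(hist)          # one texture histogram per block
          return np.array(feats)

      # Histograms from labelled normal/abnormal frames could then feed a standard
      # classifier (e.g. an SVM) to flag frames likely to contain abnormalities.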

  9. Double-channel, frequency-steered acoustic transducer with 2-D imaging capabilities.

    PubMed

    Baravelli, Emanuele; Senesi, Matteo; Ruzzene, Massimo; De Marchi, Luca; Speciale, Nicolò

    2011-07-01

    A frequency-steerable acoustic transducer (FSAT) is employed for imaging of damage in plates through guided wave inspection. The FSAT is a shaped array with a spatial distribution that defines a spiral in wavenumber space. Its resulting frequency-dependent directional properties allow beam steering to be performed by a single two-channel device, which can be used for the imaging of a two-dimensional half-plane. Ad hoc signal processing algorithms are developed and applied to the localization of acoustic sources and scatterers when FSAT arrays are used as part of pitch-catch and pulse-echo configurations. Localization schemes rely on the spectrogram analysis of received signals upon dispersion compensation through frequency warping and the application of the frequency-angle map characteristic of the FSAT. The effectiveness of the FSAT designs and associated imaging schemes is demonstrated through numerical simulations and experiments. Preliminary experimental validation is performed by forming a discrete array through the points of the measurement grid of a scanning laser Doppler vibrometer. The presented results demonstrate the frequency-dependent directionality of the spiral FSAT and suggest its application for frequency-selective acoustic sensors, for the localization of broadband acoustic events, or for the directional generation of Lamb waves for active interrogation of structural health.

  10. Design and Evaluation of a Scalable and Reconfigurable Multi-Platform System for Acoustic Imaging

    PubMed Central

    Izquierdo, Alberto; Villacorta, Juan José; del Val Puente, Lara; Suárez, Luis

    2016-01-01

    This paper proposes a scalable and multi-platform framework for signal acquisition and processing, which allows for the generation of acoustic images using planar arrays of MEMS (Micro-Electro-Mechanical Systems) microphones with low development and deployment costs. Acoustic characterization of the MEMS sensors was performed, and the beam patterns of a module based on an 8 × 8 planar array and of several clusters of modules were obtained. A flexible framework, formed by an FPGA, an embedded processor, a desktop computer, and a graphics processing unit, was defined. The processing times of the algorithms used to obtain the acoustic images, including signal processing and wideband beamforming via FFT, were evaluated in each subsystem of the framework. Based on this analysis, three frameworks are proposed, defined by the specific subsystems used and the algorithms shared. Finally, a set of acoustic images obtained from sound reflected from a person is presented as a case study in the field of biometric identification. These results reveal the feasibility of the proposed system. PMID:27727174
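
    A minimal sketch of frequency-domain delay-and-sum beamforming for a small planar microphone array, in the spirit of the wideband FFT beamforming mentioned above; the array geometry, sample rate and steering convention are illustrative assumptions, not the authors' implementation.

      import numpy as np

      C = 343.0        # speed of sound in air, m/s
      FS = 48000       # sample rate, Hz (assumed)
      PITCH = 5e-3     # element spacing, m (assumed)
      N = 8            # 8 x 8 planar array

      ix, iy = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
      pos = np.stack([ix.ravel(), iy.ravel()], axis=1) * PITCH   # (64, 2) element coordinates

      def steer_power(frames, az, el):
          """Beamformed output power for one block of samples.

          frames: (64, L) array, one row of time samples per microphone.
          az, el: steering azimuth and elevation in radians.
          """
          L = frames.shape[1]
          spec = np.fft.rfft(frames, axis=1)             # per-microphone spectra
          freqs = np.fft.rfftfreq(L, d=1.0 / FS)
          # Arrival direction projected onto the array plane
          u = np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az)])
          delays = pos @ u / C                           # (64,) relative delays, s
          phases = np.exp(2j * np.pi * np.outer(delays, freqs))
          return np.sum(np.abs(np.sum(spec * phases, axis=0)) ** 2)

      # Scanning steer_power over a grid of (az, el) angles for each block of
      # samples yields one acoustic image per block.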

  11. Multifunctional single beam acoustic tweezer for non-invasive cell/organism manipulation and tissue imaging

    PubMed Central

    Lam, Kwok Ho; Li, Ying; Li, Yang; Lim, Hae Gyun; Zhou, Qifa; Shung, Koping Kirk

    2016-01-01

    Non-contact precise manipulation of single microparticles, cells, and organisms has attracted considerable interest in biophysics and biomedical engineering. Similar to optical tweezers, acoustic tweezers have been proposed to be capable of manipulating microparticles and even cells. Although there have been concerted efforts to develop tools for non-contact manipulation, no alternative to complex, unifunctional tweezers has yet been found. Here we report a simple, low-cost, multifunctional single beam acoustic tweezer (SBAT) that is capable of manipulating an individual micrometer-scale non-spherical cell in the Rayleigh regime and even a single millimeter-scale organism in the Mie regime, and of imaging tissue as well. We experimentally demonstrate that the SBAT, with an ultralow f-number (f# = focal length/aperture size), can manipulate an individual red blood cell and a single 1.6 mm-diameter fertilized zebrafish egg, respectively. In addition, in vitro rat aorta images were collected successfully at dynamic foci, in which the lumen and the outer surface of the aorta could be clearly seen. With the ultralow f-number, the SBAT offers the combination of large acoustic radiation force and narrow beam width, leading to strong trapping and high-resolution imaging capabilities. These attributes enable the feasibility of using a single acoustic device to perform non-invasive multi-functions simultaneously for biomedical and biophysical applications. PMID:27874052

  12. Multifunctional single beam acoustic tweezer for non-invasive cell/organism manipulation and tissue imaging

    NASA Astrophysics Data System (ADS)

    Lam, Kwok Ho; Li, Ying; Li, Yang; Lim, Hae Gyun; Zhou, Qifa; Shung, Koping Kirk

    2016-11-01

    Non-contact precise manipulation of single microparticles, cells, and organisms has attracted considerable interest in biophysics and biomedical engineering. Similar to optical tweezers, acoustic tweezers have been proposed to be capable of manipulating microparticles and even cells. Although there have been concerted efforts to develop tools for non-contact manipulation, no alternative to complex, unifunctional tweezers has yet been found. Here we report a simple, low-cost, multifunctional single beam acoustic tweezer (SBAT) that is capable of manipulating an individual micrometer-scale non-spherical cell in the Rayleigh regime and even a single millimeter-scale organism in the Mie regime, and of imaging tissue as well. We experimentally demonstrate that the SBAT, with an ultralow f-number (f# = focal length/aperture size), can manipulate an individual red blood cell and a single 1.6 mm-diameter fertilized zebrafish egg, respectively. In addition, in vitro rat aorta images were collected successfully at dynamic foci, in which the lumen and the outer surface of the aorta could be clearly seen. With the ultralow f-number, the SBAT offers the combination of large acoustic radiation force and narrow beam width, leading to strong trapping and high-resolution imaging capabilities. These attributes enable the feasibility of using a single acoustic device to perform non-invasive multi-functions simultaneously for biomedical and biophysical applications.

  13. Integrating Acoustic Imaging of Flow Regimes With Bathymetry: A Case Study, Main Endeavor Field

    NASA Astrophysics Data System (ADS)

    Bemis, K. G.; Rona, P. A.; Jackson, D. R.; Jones, C. D.

    2003-12-01

    A unified view of the seafloor and the hydrothermal flow regimes (plumes and diffuse flow) is constructed for three major vent clusters in the Main Endeavour Field (e.g., Grotto, S&M, and Salut) of the Endeavour Segment, Juan de Fuca Ridge. The Main Endeavour Field is one of RIDGE 2000's Integrated Study Sites. A variety of visualization techniques are used to reconstruct the plumes (3D) and the diffuse flow field (2D) based on our acoustic imaging data set (July 2000 cruise). Plumes are identified as volumes of high backscatter intensity (indicating high particulate content or sharp density contrasts due to temperature variations) that remained high intensity when successive acoustic pings were subtracted (indicating that the acoustic targets producing the backscatter were in motion). Areas of diffuse flow are detected using our acoustic scintillation technique (AST). For the Grotto vent region (where a new Doppler technique was used to estimate vertical velocities in the plume), we estimate the areal partitioning between black smoker and diffuse flow in terms of volume fluxes. The volumetric and areal regions, where plume and diffuse flow were imaged, are registered over the bathymetry and compared to geologic maps of each region. The resulting images provide a unified view of the seafloor by integrating hydrothermal flow with geology.

  14. Synthetic streak images (x-t diagrams) from high-speed digital video records

    NASA Astrophysics Data System (ADS)

    Settles, Gary

    2013-11-01

    Modern digital video cameras have entirely replaced the older photographic drum and rotating-mirror cameras for recording high-speed physics phenomena. They are superior in almost every regard except, at speeds approaching one million frames/s, sensor segmentation results in severely reduced frame size, especially height. However, if the principal direction of subject motion is arranged to be along the frame length, a simple Matlab code can extract a row of pixels from each frame and stack them to produce a pseudo-streak image or x-t diagram. Such a 2-D image can convey the essence of the large volume of information contained in a high-speed video sequence, and can be the basis for the extraction of quantitative velocity data. Examples include streak shadowgrams of explosions and gunshots, streak schlieren images of supersonic cavity-flow oscillations, and direct streak images of shock-wave motion in polyurea samples struck by gas-gun projectiles, from which the shock Hugoniot curve of the polymer is measured. This approach is especially useful, since commercial streak cameras remain very expensive and rooted in 20th-century technology.
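
    A minimal Python sketch of the row-stacking step described above (the article itself refers to a simple Matlab code); the default centre-row choice and the file names are illustrative.

      import cv2
      import numpy as np

      def streak_image(video_path, row=None):
          """Stack one pixel row per frame into a pseudo-streak (x-t) image."""
          cap = cv2.VideoCapture(video_path)
          rows = []
          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
              r = gray.shape[0] // 2 if row is None else row   # default: centre row
              rows.append(gray[r, :])
          cap.release()
          return np.vstack(rows)      # shape (n_frames, frame_width): time vs. x

      # xt = streak_image("shadowgraph.avi")
      # cv2.imwrite("xt_diagram.png", xt)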

  15. VISDTA: A video imaging system for detection, tracking, and assessment: Prototype development and concept demonstration

    SciTech Connect

    Pritchard, D.A.

    1987-05-01

    It has been demonstrated that thermal imagers are an effective surveillance and assessment tool for security applications because: (1) they work day or night due to their sensitivity to thermal signatures; (2) their penetrability through fog, rain, dust, etc., is better than that of human eyes; (3) short or long range operation is possible with various optics; and (4) they are strictly passive devices providing visible imagery which is readily interpreted by the operator with little training. Unfortunately, most thermal imagers also require the setup of a tripod, connection of batteries, cables, display, etc. When this is accomplished, the operator must manually move the camera back and forth searching for signs of aggressor activity. VISDTA is designed to provide automatic panning, and in a sense, ''watch'' the imagery in place of the operator. The idea behind the development of VISDTA is to provide a small, portable, rugged system to automatically scan areas and detect targets by computer processing of images. It would use a thermal imager and possibly an intensified day/night TV camera, a pan/tilt mount, and a computer for system control. If mounted on a dedicated vehicle or on a tower, VISDTA will perform video motion detection functions on incoming video imagery, and automatically scan predefined patterns in search of abnormal conditions which may indicate attempted intrusions into the field-of-regard. In that respect, VISDTA is capable of improving the ability of security forces to maintain security of a given area of interest by augmenting present techniques and reducing operator fatigue.

  16. Modern Techniques in Acoustical Signal and Image Processing

    SciTech Connect

    Candy, J V

    2002-04-04

    Acoustical signal processing problems can lead to some complex and intricate techniques to extract the desired information from noisy, sometimes inadequate, measurements. The challenge is to formulate a meaningful strategy that is aimed at performing the processing required even in the face of uncertainties. This strategy can be as simple as a transformation of the measured data to another domain for analysis or as complex as embedding a full-scale propagation model into the processor. The aims of both approaches are the same--to extract the desired information and reject the extraneous, that is, to develop a signal processing scheme to achieve this goal. In this paper, we briefly discuss this underlying philosophy from a ''bottom-up'' approach enabling the problem to dictate the solution rather than vice versa.

  17. An acoustic charge transport imager for high definition television applications

    NASA Astrophysics Data System (ADS)

    Hunt, William D.; Brennan, Kevin F.; Summers, Christopher J.

    1993-09-01

    This report covers: (1) invention of a new, ultra-low noise, low operating voltage APD which is expected to offer far better performance than the existing volume doped APD device; (2) performance of a comprehensive series of experiments on the acoustic and piezoelectric properties of ZnO films sputtered on GaAs which can possibly lead to a decrease in the required rf drive power for ACT devices by 15dB; (3) development of an advanced, hydrodynamic, macroscopic simulator used for evaluating the performance of ACT and CTD devices and aiding in the development of the next generation of devices; (4) experimental development of CTD devices which utilize a p-doped top barrier demonstrating charge storage capacity and low leakage currents; (5) refinements in materials growth techniques and in situ controls to lower surface defect densities to record levels as well as increase material uniformity and quality.

  18. An acoustic charge transport imager for high definition television applications

    NASA Technical Reports Server (NTRS)

    Hunt, William D.; Brennan, Kevin F.; Summers, Christopher J.

    1993-01-01

    This report covers: (1) invention of a new, ultra-low noise, low operating voltage APD which is expected to offer far better performance than the existing volume doped APD device; (2) performance of a comprehensive series of experiments on the acoustic and piezoelectric properties of ZnO films sputtered on GaAs which can possibly lead to a decrease in the required rf drive power for ACT devices by 15dB; (3) development of an advanced, hydrodynamic, macroscopic simulator used for evaluating the performance of ACT and CTD devices and aiding in the development of the next generation of devices; (4) experimental development of CTD devices which utilize a p-doped top barrier demonstrating charge storage capacity and low leakage currents; (5) refinements in materials growth techniques and in situ controls to lower surface defect densities to record levels as well as increase material uniformity and quality.

  19. A flexible software architecture for scalable real-time image and video processing applications

    NASA Astrophysics Data System (ADS)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2012-06-01

    Real-time image and video processing applications require skilled architects, and recent trends in the hardware platform make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility because they are normally oriented towards particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty of reuse and inefficient execution on multicore processors. This paper presents a novel software architecture for real-time image and video processing applications which addresses these issues. The architecture is divided into three layers: the platform abstraction layer, the messaging layer, and the application layer. The platform abstraction layer provides a high level application programming interface for the rest of the architecture. The messaging layer provides a message passing interface based on a dynamic publish/subscribe pattern. Topic-based filtering, in which messages are published to topics, is used to route the messages from the publishers to the subscribers interested in a particular type of message. The application layer provides a repository for reusable application modules designed for real-time image and video processing applications. These modules, which include acquisition, visualization, communication, user interface and data processing modules, take advantage of the power of other well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, we present different prototypes and applications to show the possibilities of the proposed architecture.
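
    A minimal sketch of the topic-based publish/subscribe routing that the messaging layer is described as using; the class and method names here are illustrative, not the authors' API.

      from collections import defaultdict
      from typing import Any, Callable

      class MessageBus:
          def __init__(self):
              self._subscribers = defaultdict(list)    # topic -> list of callbacks

          def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
              self._subscribers[topic].append(callback)

          def publish(self, topic: str, message: Any) -> None:
              # Only subscribers registered for this topic receive the message
              for callback in self._subscribers[topic]:
                  callback(message)

      bus = MessageBus()
      bus.subscribe("frames/raw", lambda img: print("processing frame", img["id"]))
      bus.subscribe("frames/raw", lambda img: print("displaying frame", img["id"]))
      bus.publish("frames/raw", {"id": 1, "data": None})   # both subscribers fire
      bus.publish("frames/stats", {"mean": 0.5})           # no subscriber, silently dropped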

  20. Screen-imaging guidance using a modified portable video macroscope for middle cerebral artery occlusion.

    PubMed

    Zhu, Xingbao; Luo, Junli; Liu, Yun; Chen, Guolong; Liu, Song; Ruan, Qiangjin; Deng, Xunding; Wang, Dianchun; Fan, Quanshui; Pan, Xinghua

    2012-04-25

    The use of operating microscopes is limited by the focal length. Surgeons using these instruments cannot simultaneously view and access the surgical field and must choose one or the other. The longer focal length (more than 1 000 mm) of an operating telescope permits a position away from the operating field, above the surgeon and out of the field of view. This gives the telescope an advantage over an operating microscope. We developed a telescopic system using screen-imaging guidance and a modified portable video macroscope constructed from a Computar MLH-10 × macro lens, a DFK-21AU04 USB CCD Camera and a Dell laptop computer as monitor screen. This system was used to establish a middle cerebral artery occlusion model in rats. Results showed that magnification of the modified portable video macroscope was appropriate (5-20 ×) even though the Computar MLH-10 × macro lens was placed 800 mm away from the operating field rather than at the specified working distance of 152.4 mm with a zoom of 1-40 ×. The screen-imaging telescopic technique was clear, life-like, stereoscopic and matched the actual operation. Screen-imaging guidance led to an accurate, smooth, minimally invasive and comparatively easy surgical procedure. The success rate of model establishment, evaluated by neurological function using the modified neurological score system, was 74.07%. There was no significant difference in model establishment time, sensorimotor deficit and infarct volume percentage. Our findings indicate that the telescopic lens is effective in the screen surgical operation mode referred to as "long distance observation and short distance operation" and that screen-imaging guidance using a modified portable video macroscope can be utilized for the establishment of a middle cerebral artery occlusion model and micro-neurosurgery.

  1. Phase Time and Envelope Time in Time-Distance Analysis and Acoustic Imaging

    NASA Technical Reports Server (NTRS)

    Chou, Dean-Yi; Duvall, Thomas L.; Sun, Ming-Tsung; Chang, Hsiang-Kuang; Jimenez, Antonio; Rabello-Soares, Maria Cristina; Ai, Guoxiang; Wang, Gwo-Ping; Goode, Philip; Marquette, William; Ehgamberdiev, Shuhrat; Landenkov, Oleg

    1999-01-01

    Time-distance analysis and acoustic imaging are two related techniques for probing the local properties of the solar interior. In this study, we discuss the relation of phase time and envelope time between the two techniques. The location of the envelope peak of the cross correlation function in time-distance analysis is identified as the travel time of the wave packet formed by modes with the same ω/l. The phase time of the cross correlation function provides information about the phase change accumulated along the wave path, including the phase change at the boundaries of the mode cavity. The acoustic signals constructed with the technique of acoustic imaging contain both phase and intensity information. The phase of constructed signals can be studied by computing the cross correlation function between time series constructed with ingoing and outgoing waves. In this study, we use the data taken with the Taiwan Oscillation Network (TON) instrument and the Michelson Doppler Imager (MDI) instrument. The analysis is carried out for the quiet Sun. We use the relation of envelope time versus distance measured in time-distance analyses to construct the acoustic signals in acoustic imaging analyses. The phase time of the cross correlation function of constructed ingoing and outgoing time series is twice the difference between the phase time and envelope time in time-distance analyses, as predicted. The envelope peak of the cross correlation function between constructed ingoing and outgoing time series is located at zero time, as predicted, for results of one bounce at 3 mHz for all four data sets and two bounces at 3 mHz for two TON data sets, but it is different from zero for other cases. The cause of the deviation of the envelope peak from zero is not known.

  2. Partial-aperture array imaging in acoustic waveguides

    NASA Astrophysics Data System (ADS)

    Tsogka, Chrysoula; Mitsoudis, Dimitrios A.; Papadimitropoulos, Symeon

    2016-12-01

    We consider the problem of imaging extended reflectors in waveguides using partial-aperture array, i.e. an array that does not span the whole depth of the waveguide. For this imaging, we employ a method that back-propagates a weighted modal projection of the usual array response matrix. The challenge in this setup is to correctly define this projection matrix in order to maintain good energy concentration properties for the imaging method, which were obtained previously by Tsogka et al (2013 SIAM J. Imaging Sci. 6 2714-39) for the full-aperture case. In this paper we propose a way of achieving this and study the properties of the resulting imaging method.

  3. Imaging of transient surface acoustic waves by full-field photorefractive interferometry

    SciTech Connect

    Xiong, Jichuan; Xu, Xiaodong E-mail: christ.glorieux@fys.kuleuven.be; Glorieux, Christ E-mail: christ.glorieux@fys.kuleuven.be; Matsuda, Osamu; Cheng, Liping

    2015-05-15

    A stroboscopic full-field imaging technique based on photorefractive interferometry for the visualization of rapidly changing surface displacement fields using a standard charge-coupled device (CCD) camera is presented. The photorefractive buildup of the space charge field during and after the probe laser pulses is simulated numerically. The resulting anisotropic diffraction upon the refractive index grating and the interference between the polarization-rotated diffracted reference beam and the transmitted signal beam are modeled theoretically. The method is experimentally demonstrated by full-field imaging of the propagation of photoacoustically generated surface acoustic waves (SAWs) with a temporal resolution of nanoseconds. The surface acoustic wave propagation in a 23 mm × 17 mm area on an aluminum plate was visualized with 520 × 696 pixels of the CCD sensor, yielding a spatial resolution of 33 μm. The short pulse duration (8 ns) of the probe laser yields the capability of imaging SAWs with frequencies up to 60 MHz.

  4. Multi-acoustic lens design methodology for a low cost C-scan photoacoustic imaging camera

    NASA Astrophysics Data System (ADS)

    Chinni, Bhargava; Han, Zichao; Brown, Nicholas; Vallejo, Pedro; Jacobs, Tess; Knox, Wayne; Dogra, Vikram; Rao, Navalgund

    2016-03-01

    We have designed and implemented a novel acoustic-lens-based focusing technology in a prototype photoacoustic imaging camera. All photoacoustically generated waves from laser-exposed absorbers within a small volume are focused simultaneously by the lens onto an image plane. We use a multi-element ultrasound transducer array to capture the focused photoacoustic signals. The acoustic lens eliminates the need for expensive data acquisition hardware systems, is faster compared to electronic focusing, and enables real-time image reconstruction. Using this photoacoustic imaging camera, we have imaged more than 150 ex-vivo human prostate, kidney and thyroid specimens, each several centimeters in size, at millimeter resolution for cancer detection. In this paper, we share our lens design strategy and how we evaluate the resulting quality metrics (on- and off-axis point spread function, depth of field and modulation transfer function) through simulation. An advanced toolbox in MATLAB was adapted and used for simulating a two-dimensional gridded model that incorporates realistic photoacoustic signal generation and acoustic wave propagation through the lens, with medium properties defined at each grid point. Two-dimensional point spread functions have been generated and compared with experiments to demonstrate the utility of our design strategy. Finally, we present results from work in progress on the use of a two-lens system aimed at further improving some of the quality metrics of our system.

  5. Reflection imaging in the millimeter-wave range using a video-rate terahertz camera

    NASA Astrophysics Data System (ADS)

    Marchese, Linda E.; Terroux, Marc; Doucet, Michel; Blanchard, Nathalie; Pancrati, Ovidiu; Dufour, Denis; Bergeron, Alain

    2016-05-01

    The ability of millimeter waves (1-10 mm, or 30-300 GHz) to penetrate dense materials, such as leather, wool, wood and gyprock, and to transmit over long distances due to low atmospheric absorption, makes them ideal for numerous applications, such as body scanning, building inspection and seeing in degraded visual environments. Current drawbacks of millimeter-wave imaging systems are that they use single detectors or linear arrays that require scanning, or that their two-dimensional arrays are bulky, often consisting of rather large antenna-coupled focal plane arrays (FPAs). Previous work from INO has demonstrated the capability of its compact lightweight camera, based on a 384 x 288 microbolometer pixel FPA with custom optics, for active video-rate imaging at wavelengths of 118 μm (2.54 THz), 432 μm (0.69 THz), 663 μm (0.45 THz), and 750 μm (0.4 THz). Most of that work focused on transmission imaging, as a first step, but some preliminary demonstrations of reflection imaging at these wavelengths were also reported. In addition, previous work showed that the broadband FPA remains sensitive to wavelengths at least up to 3.2 mm (94 GHz). The work presented here demonstrates the ability of the INO terahertz camera for reflection imaging at millimeter wavelengths. Snapshots of objects taken at video rates show the excellent quality of the images. In addition, a description of the imaging system that includes the terahertz camera and different millimeter-wave sources is provided.

  6. The research on binocular stereo video imaging and display system based on low-light CMOS

    NASA Astrophysics Data System (ADS)

    Xie, Ruobing; Li, Li; Jin, Weiqi; Guo, Hong

    2015-10-01

    Low-light night-vision helmets are commonly equipped with binocular viewers based on image intensifiers. Such equipment provides not only night-vision capability but also a sense of stereo vision, allowing better perception and understanding of the visual field. However, since the image intensifier is a direct-view device, it is difficult to apply modern image processing technology. As a result, developing digital video technology for night vision is of great significance. In this paper, we design a low-light night-vision helmet with a digital imaging device. It consists of three parts: a set of two low-illumination CMOS cameras, a binocular OLED micro display and an image processing PCB. Stereopsis is achieved through the binocular OLED micro display. We choose the Speeded-Up Robust Features (SURF) algorithm for image registration. Based on the image matching information and the cameras' calibration parameters, disparity can be calculated in real time. We then derive the constraints of binocular stereo display. The sense of stereo vision is obtained by dynamically adjusting the content of the binocular OLED micro display. There is sufficient space for function extensions in our system. The performance of this low-light night-vision helmet can be further enhanced in combination with HDR and image fusion technology.
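
    A minimal sketch of feature-based registration between rectified left and right camera frames; note that ORB is substituted here for the SURF detector named in the abstract (SURF requires a non-free OpenCV build), and taking the horizontal keypoint offset as the disparity is an illustrative simplification that assumes a rectified pair.

      import cv2
      import numpy as np

      def match_disparities(left_gray, right_gray, max_matches=100):
          orb = cv2.ORB_create(nfeatures=1000)
          kL, dL = orb.detectAndCompute(left_gray, None)
          kR, dR = orb.detectAndCompute(right_gray, None)
          matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
          matches = sorted(matcher.match(dL, dR), key=lambda m: m.distance)[:max_matches]
          # Horizontal offset of each matched pair approximates the disparity
          return np.array([kL[m.queryIdx].pt[0] - kR[m.trainIdx].pt[0] for m in matches])

      # disp = match_disparities(cv2.imread("left.png", 0), cv2.imread("right.png", 0))
      # print("median disparity (px):", np.median(disp))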

  7. Human pose tracking from monocular video by traversing an image motion mapped body pose manifold

    NASA Astrophysics Data System (ADS)

    Basu, Saurav; Poulin, Joshua; Acton, Scott T.

    2010-01-01

    Tracking human pose from monocular video sequences is a challenging problem due to the large number of independent parameters affecting image appearance and nonlinear relationships between generating parameters and the resultant images. Unlike the current practice of fitting interpolation functions to point correspondences between underlying pose parameters and image appearance, we exploit the relationship between pose parameters and image motion flow vectors in a physically meaningful way. Change in image appearance due to pose change is realized as navigating a low dimensional submanifold of the infinite dimensional Lie group of diffeomorphisms of the two dimensional sphere S2. For small changes in pose, image motion flow vectors lie on the tangent space of the submanifold. Any observed image motion flow vector field is decomposed into the basis motion vector flow fields on the tangent space and combination weights are used to update corresponding pose changes in the different dimensions of the pose parameter space. Image motion flow vectors are largely invariant to style changes in experiments with synthetic and real data where the subjects exhibit variation in appearance and clothing. The experiments demonstrate the robustness of our method (within +/-4° of ground truth) to style variance.
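
    A minimal sketch of the decomposition step described above, assuming the observed image motion flow field is expressed as a linear combination of precomputed basis flow fields by ordinary least squares; the basis flows would come from perturbing each pose parameter separately, and this formulation is an illustrative reading of the abstract rather than the authors' exact procedure.

      import numpy as np

      def flow_weights(observed_flow, basis_flows):
          """observed_flow: (H, W, 2) array; basis_flows: list of (H, W, 2) arrays."""
          b = observed_flow.ravel()
          A = np.stack([f.ravel() for f in basis_flows], axis=1)   # (H*W*2, n_params)
          weights, *_ = np.linalg.lstsq(A, b, rcond=None)
          return weights   # per-dimension pose-parameter updates (up to a step size)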

  8. Psychophysical Comparison Of A Video Display System To Film By Using Bone Fracture Images

    NASA Astrophysics Data System (ADS)

    Seeley, George W.; Stempski, Mark; Roehrig, Hans; Nudelman, Sol; Capp, M. P.

    1982-11-01

    This study investigated the possibility of using a video display system instead of film for radiological diagnosis. Also investigated were the relationships between characteristics of the system and the observer's accuracy level. Radiologists were used as observers. Thirty-six clinical bone fractures were separated into two matched sets of equal difficulty. The difficulty parameters and ratings were defined by a panel of expert bone radiologists at the Arizona Health Sciences Center, Radiology Department. These two sets of fracture images were then matched with verifiably normal images using parameters such as film type, angle of view, size, portion of anatomy, the film's density range, and the patient's age and sex. The two sets of images were then displayed, using a counterbalanced design, to each of the participating radiologists for diagnosis. Whenever a response was given to a video image, the radiologist used enhancement controls to "window in" on the grey levels of interest. During the TV phase, the radiologist was required to record the settings of the calibrated controls of the image enhancer during interpretation. At no time did any single radiologist see the same film in both modes. The study was designed so that a standard analysis of variance would show the effects of viewing mode (film vs TV), the effects due to stimulus set, and any interactions with observers. A signal detection analysis of observer performance was also performed. Results indicate that the TV display system is almost as good as the view box display; an average of only two more errors were made on the TV display. The difference between the systems has been traced to four observers who had poor accuracy on a small number of films viewed on the TV display. This information is now being correlated with the video system's signal-to-noise ratio (SNR), signal transfer function (STF), and resolution measurements, to obtain information on the basic display and enhancement requirements for a

  9. Structural changes and imaging signatures of acoustically sensitive microcapsules under ultrasound.

    PubMed

    Sridhar-Keralapura, Mallika; Thirumalai, Shruthi; Mobed-Miremadi, Maryam

    2013-07-01

    The ultrasound drug delivery field is actively designing new agents that would obviate the problems of using microbubbles alone for drug delivery. Microbubbles have a very short circulation time (minutes), low payload and large size (2-10 μm), none of which is ideal for systemic drug delivery. However, microbubble carriers provide excellent image contrast, and their use for image guidance can be exploited. In this paper, we suggest an alternative approach by developing acoustically sensitive microcapsule reservoirs with future applications for treating large ischemic tumors through intratumoral therapy. We call these agents Acoustically Sensitized Microcapsules (ASMs); they are not intended for circulation. ASMs are very simple in their formulation, robust and reproducible. They have been designed to offer high payload (because of their large size), to be acoustically sensitive and reactive (because of the encapsulated ultrasound contrast agents, UCAs), and to be mechanically robust for future injection/implantation within tumors. We describe three different aspects: (1) the effect of therapeutic ultrasound; (2) the mechanical properties; and (3) the imaging signatures of these agents. Under therapeutic ultrasound, the formation of a cavitation bubble was seen prior to rupture. The time to rupture was size dependent. Size dependency was also seen when measuring the mechanical properties of these ASMs. Alginate percentage and permeability also affected the Young's modulus estimates. To study the imaging signatures of these agents, we show six schemes. For example, with harmonic imaging, tissue phantoms and controls did not generate higher harmonic components; only ASM phantoms created a harmonic signal, whose sensitivity increased with applied acoustic pressure. Future work includes developing schemes combining both sonication and imaging to help detect ASMs before, during and after release of the drug substance.

  10. Acoustic imaging with time reversal methods: From medicine to NDT

    NASA Astrophysics Data System (ADS)

    Fink, Mathias

    2015-03-01

    This talk will present an overview of the research conducted on ultrasonic time-reversal methods applied to biomedical imaging and to non-destructive testing. We will first describe iterative time-reversal techniques that allow ultrasonic waves to be focused either on reflectors in tissues (kidney stones, micro-calcifications, contrast agents) or on flaws in solid materials. We will also show that time-reversal focusing does not need the presence of bright reflectors but can be achieved solely from the speckle noise generated by random distributions of non-resolved scatterers. We will describe the applications of this concept to correcting distortions and aberrations in ultrasonic imaging and in NDT. In the second part of the talk we will describe the concept of time-reversal processors for obtaining ultrafast ultrasonic images with typical frame rates on the order of 10,000 frames/s. This field of ultrafast ultrasonic imaging has many medical applications and can be of great interest in NDT. We will describe some applications in the biomedical domain: quantitative elasticity imaging of tissues, by following shear wave propagation, to improve cancer detection, and ultrafast Doppler imaging, which allows ultrasonic functional imaging.

  11. Synchronized imaging and acoustic analysis of the upper airway in patients with sleep-disordered breathing.

    PubMed

    Chang, Yi-Chung; Huon, Leh-Kiong; Pham, Van-Truong; Chen, Yunn-Jy; Jiang, Sun-Fen; Shih, Tiffany Ting-Fang; Tran, Thi-Thao; Wang, Yung-Hung; Lin, Chen; Tsao, Jenho; Lo, Men-Tzung; Wang, Pa-Chun

    2014-12-01

    Progressive narrowing of the upper airway increases airflow resistance and can produce snoring sounds and apnea/hypopnea events associated with sleep-disordered breathing due to airway collapse. Recent studies have shown that acoustic properties during snoring can be altered with anatomic changes at the site of obstruction. To evaluate the instantaneous association between acoustic features of snoring and the anatomic sites of obstruction, a novel method was developed and applied in nine patients to extract the snoring sounds during sleep while performing dynamic magnetic resonance imaging (MRI). The degree of airway narrowing during the snoring events was then quantified by the collapse index (ratio of airway diameter preceding and during the events) and correlated with the synchronized acoustic features. A total of 201 snoring events (102 pure retropalatal and 99 combined retropalatal and retroglossal events) were recorded, and the collapse index as well as the soft tissue vibration time were significantly different between pure retropalatal (collapse index, 2 ± 11%; vibration time, 0.2 ± 0.3 s) and combined (retropalatal and retroglossal) snores (collapse index, 13 ± 7% [P ≤ 0.0001]; vibration time, 1.2 ± 0.7 s [P ≤ 0.0001]). The synchronized dynamic MRI and acoustic recordings successfully characterized the sites of obstruction and established the dynamic relationship between the anatomic site of obstruction and snoring acoustics.

  12. Methods And Systems For Using Reference Images In Acoustic Image Processing

    DOEpatents

    Moore, Thomas L.; Barter, Robert Henry

    2005-01-04

    A method and system of examining tissue are provided in which a field, including at least a portion of the tissue and one or more registration fiducials, is insonified. Scattered acoustic information, including both transmitted and reflected waves, is received from the field. A representation of the field, including both the tissue and the registration fiducials, is then derived from the received acoustic radiation.

  13. Fast photoacoustic imaging with a line scanning optical-acoustical resolution photoacoustic microscope (LS-OAR-PAM)

    NASA Astrophysics Data System (ADS)

    Nuster, Robert; Paltauf, Guenther

    2015-07-01

    We present the concept, the setup and a preliminary experiment using optical ultrasound detection with a CCD camera combined with focused line excitation for photoacoustic microscopy. The line scanning optical-acoustical resolution photoacoustic microscope (LS-OAR-PAM) with optical ultrasound detection is capable of real-time B-scan imaging, providing acoustical resolution within the individual B-scans and optical out-of-plane resolution up to a depth limited by optical diffusion. A 3D image is composed of reconstructed B-scan images recorded while scanning the excitation line along the sample surface. Proof of concept is shown by imaging a phantom containing black human hairs and carbon fibers. The obtained C-scan image clearly shows the different resolution in the two perpendicular directions: diffraction-limited by optical focusing in the scan direction, and acoustically limited, by the properties of acoustic wave propagation, in the direction parallel to the line orientation.

  14. Focused acoustic beam imaging of grain structure and local Young's modulus with Rayleigh and surface skimming longitudinal waves

    SciTech Connect

    Martin, R. W.; Sathish, S.; Blodgett, M. P.

    2013-01-25

    The interaction of a focused acoustic beam with materials generates Rayleigh surface waves (RSW) and surface skimming longitudinal waves (SSLW). Acoustic microscopy investigations have used RSW amplitude and velocity measurements extensively for grain structure analysis. Although the presence of SSLW has been recognized, it is rarely used in acoustic imaging. This paper presents an approach to perform microstructure imaging and local elastic modulus measurements by combining both RSW and SSLW. The acoustic imaging of grain structure was performed by measuring the amplitude of the RSW and SSLW signals. The microstructure images obtained on the same region of the samples with RSW and SSLW are compared, and the difference in the contrast observed is discussed based on the propagation characteristics of the individual surface waves. The velocity measurements are determined by the two-point defocus method. The surface wave velocities of RSW and SSLW from the same regions of the sample are combined and presented as an average Young's modulus image.

  15. Three dimensional full-wave nonlinear acoustic simulations: Applications to ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Pinton, Gianmarco

    2015-10-01

    Characterization of acoustic waves that propagate nonlinearly in an inhomogeneous medium has significant applications to diagnostic and therapeutic ultrasound. The generation of an ultrasound image of human tissue is based on the complex physics of acoustic wave propagation: diffraction, reflection, scattering, frequency dependent attenuation, and nonlinearity. The nonlinearity of wave propagation is used to the advantage of diagnostic scanners that use the harmonic components of the ultrasonic signal to improve the resolution and penetration of clinical scanners. One approach to simulating ultrasound images is to make approximations that can reduce the physics to systems that have a low computational cost. Here a maximalist approach is taken and the full three dimensional wave physics is simulated with finite differences. This paper demonstrates how finite difference simulations for the nonlinear acoustic wave equation can be used to generate physically realistic two and three dimensional ultrasound images anywhere in the body. A specific intercostal liver imaging scenario is simulated for two cases: with the ribs in place, and with the ribs removed. This configuration provides an imaging scenario that cannot be performed in vivo but that can test the influence of the ribs on image quality. Several imaging properties are studied, in particular the beamplots, the spatial coherence at the transducer surface, the distributed phase aberration, and the lesion detectability for imaging at the fundamental and harmonic frequencies. The results indicate, counterintuitively, that at the fundamental frequency the beamplot improves due to the apodization effect of the ribs but at the same time there is more degradation from reverberation clutter. At the harmonic frequency there is significantly less improvement in the beamplot and also significantly less degradation from reverberation. It is shown that even though simulating the full propagation physics is computationally challenging it

  16. Three dimensional full-wave nonlinear acoustic simulations: Applications to ultrasound imaging

    SciTech Connect

    Pinton, Gianmarco

    2015-10-28

    Characterization of acoustic waves that propagate nonlinearly in an inhomogeneous medium has significant applications to diagnostic and therapeutic ultrasound. The generation of an ultrasound image of human tissue is based on the complex physics of acoustic wave propagation: diffraction, reflection, scattering, frequency dependent attenuation, and nonlinearity. The nonlinearity of wave propagation is used to the advantage of diagnostic scanners that use the harmonic components of the ultrasonic signal to improve the resolution and penetration of clinical scanners. One approach to simulating ultrasound images is to make approximations that can reduce the physics to systems that have a low computational cost. Here a maximalist approach is taken and the full three dimensional wave physics is simulated with finite differences. This paper demonstrates how finite difference simulations for the nonlinear acoustic wave equation can be used to generate physically realistic two and three dimensional ultrasound images anywhere in the body. A specific intercostal liver imaging scenario is considered for two cases: with the ribs in place, and with the ribs removed. This configuration provides an imaging scenario that cannot be performed in vivo but that can test the influence of the ribs on image quality. Several imaging properties are studied, in particular the beamplots, the spatial coherence at the transducer surface, the distributed phase aberration, and the lesion detectability for imaging at the fundamental and harmonic frequencies. The results indicate, counterintuitively, that at the fundamental frequency the beamplot improves due to the apodization effect of the ribs but at the same time there is more degradation from reverberation clutter. At the harmonic frequency there is significantly less improvement in the beamplot and also significantly less degradation from reverberation. It is shown that even though simulating the full propagation physics is computationally challenging it

  17. Video and thermal imaging system for monitoring interiors of high temperature reaction vessels

    DOEpatents

    Saveliev, Alexei V [Chicago, IL; Zelepouga, Serguei A [Hoffman Estates, IL; Rue, David M [Chicago, IL

    2012-01-10

    A system and method for real-time monitoring of the interior of a combustor or gasifier wherein light emitted by the interior surface of a refractory wall of the combustor or gasifier is collected using an imaging fiber optic bundle having a light receiving end and a light output end. Color information in the light is captured with primary color (RGB) filters or complementary color (GMCY) filters placed over individual pixels of color sensors disposed within a digital color camera in a Bayer mosaic layout, producing RGB signal outputs or GMCY signal outputs. The signal outputs are processed using intensity ratios of the primary color filters or the complementary color filters, producing video images and/or thermal images of the interior of the combustor or gasifier.

  18. Video-rate volumetric functional imaging of the brain at synaptic resolution.

    PubMed

    Lu, Rongwen; Sun, Wenzhi; Liang, Yajie; Kerlin, Aaron; Bierfeld, Jens; Seelig, Johannes D; Wilson, Daniel E; Scholl, Benjamin; Mohar, Boaz; Tanimoto, Masashi; Koyama, Minoru; Fitzpatrick, David; Orger, Michael B; Ji, Na

    2017-04-01

    Neurons and neural networks often extend hundreds of micrometers in three dimensions. Capturing the calcium transients associated with their activity requires volume imaging methods with subsecond temporal resolution. Such speed is a challenge for conventional two-photon laser-scanning microscopy, because it depends on serial focal scanning in 3D and indicators with limited brightness. Here we present an optical module that is easily integrated into standard two-photon laser-scanning microscopes to generate an axially elongated Bessel focus, which when scanned in 2D turns frame rate into volume rate. We demonstrated the power of this approach in enabling discoveries for neurobiology by imaging the calcium dynamics of volumes of neurons and synapses in fruit flies, zebrafish larvae, mice and ferrets in vivo. Calcium signals in objects as small as dendritic spines could be resolved at video rates, provided that the samples were sparsely labeled to limit overlap in their axially projected images.

  19. Imaging of dense cell cultures by multiwavelength lens-free video microscopy.

    PubMed

    Allier, C; Morel, S; Vincent, R; Ghenim, L; Navarro, F; Menneteau, M; Bordy, T; Hervé, L; Cioni, O; Gidrol, X; Usson, Y; Dinten, J-M

    2017-02-27

    The authors present results for lens-free microscopy for the imaging of dense cell cultures. With this aim, they use multiwavelength LED illumination with well-separated wavelengths, together with an appropriate holographic reconstruction algorithm. This allows for a fast and efficient reconstruction of the phase image of densely packed cells (up to 700 cells/mm²) over a large field of view of 29.4 mm². Combined with the compactness of the system, which fits altogether inside an incubator, lens-free microscopy becomes a unique tool to monitor cell cultures over several days. The high-contrast phase-shift images provide robust cell segmentation and tracking, and enable high-throughput monitoring of individual cell dimensions, dry mass, and motility. The authors tested the multiwavelength lens-free video microscope on a broad range of cell lines, including mesenchymal, endothelial, and epithelial cells. © 2017 International Society for Advancement of Cytometry.
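
    The record above does not include the reconstruction algorithm itself; as a rough illustration of the kind of computation involved, the following sketch applies generic single-wavelength angular-spectrum back-propagation to a hologram. The wavelength, pixel pitch, propagation distance, and the amplitude-only initialization are illustrative assumptions, not the multiwavelength method of the paper.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, z):
    """Propagate a complex field by distance z using the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)          # spatial frequencies (cycles per meter)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    k = 2 * np.pi / wavelength
    kz_sq = k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))
    # Keep propagating components only; evanescent waves are suppressed.
    H = np.exp(1j * kz * z) * (kz_sq > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Illustrative use: back-propagate a recorded hologram intensity to the sample plane.
hologram = np.random.rand(512, 512)            # placeholder for one recorded frame
field = np.sqrt(hologram).astype(complex)      # crude amplitude-only initialization
reconstruction = angular_spectrum_propagate(field, wavelength=520e-9,
                                            pitch=1.67e-6, z=-1.0e-3)
phase_image = np.angle(reconstruction)
```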

  20. Video-rate spectral imaging of gas leaks in the longwave infrared

    NASA Astrophysics Data System (ADS)

    Hagen, Nathan; Kester, Robert T.; Morlier, Christopher G.; Panek, Jeffrey A.; Drayton, Paul; Fashimpaur, Dave; Stone, Paul; Adams, Elizabeth

    2013-05-01

    We have recently constructed and tested a gas cloud imager which demonstrates the first-ever video-rate detection (15 frames/sec) of gas leaks using an uncooled LWIR detector array. Laboratory and outdoor measurements, taken in collaboration with BP Products North America Inc. and IES Inc., show detection sensitivities comparable to existing cooled systems for detecting hydrocarbon gases. Gases imaged for these experiments include methane, propane, propylene, ethane, ethylene, butane, and iso-butylene, but any gases with absorption features in the LWIR band could potentially be detected, such as sarin and other toxic gases. These results show that practical continuous monitoring of gas leaks with uncooled imaging sensors is now possible.

  1. Classification of coral reef images from underwater video using neural networks

    NASA Astrophysics Data System (ADS)

    Marcos, Ma. Sheila Angeli C.; Soriano, Maricor N.; Saloma, Caesar A.

    2005-10-01

    We use a feedforward backpropagation neural network to classify close-up images of coral reef components into three benthic categories: living coral, dead coral, and sand. We have achieved a success rate of 86.5% (false positive = 6.7%) for test images that were not in the training set, which is high considering that corals occur in an immense variety of appearances. Color and texture features derived from video stills of coral reef transects from the Great Barrier Reef were used as inputs to the network. We also developed a rule-based decision tree classifier according to how marine scientists classify corals from texture and color, and obtained a lower recognition rate of 79.7% for the same set of images.
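
    As a minimal illustration of the classification setup described above (a feedforward network trained by backpropagation on color and texture features, three benthic classes), the sketch below uses scikit-learn rather than the authors' original implementation; the feature files and network size are hypothetical placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Placeholder: rows are per-image feature vectors (e.g., color histogram bins and
# texture statistics); labels are 0 = living coral, 1 = dead coral, 2 = sand.
X = np.load("coral_features.npy")      # hypothetical precomputed feature file
y = np.load("coral_labels.npy")

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Single hidden layer trained with backpropagation (handled by the solver).
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```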

  2. ΤND: a thyroid nodule detection system for analysis of ultrasound images and videos.

    PubMed

    Keramidas, Eystratios G; Maroulis, Dimitris; Iakovidis, Dimitris K

    2012-06-01

    In this paper, we present a computer-aided-diagnosis (CAD) system prototype, named TND (Thyroid Nodule Detector), for the detection of nodular tissue in ultrasound (US) thyroid images and videos acquired during thyroid US examinations. The proposed system incorporates an original methodology that involves a novel algorithm for automatic definition of the boundaries of the thyroid gland, and a novel approach for the extraction of noise-resilient image features effectively representing the textural and echogenic properties of the thyroid tissue. Through extensive experimental evaluation on real thyroid US data, its accuracy in thyroid nodule detection has been estimated to exceed 95%. These results attest to the feasibility of the clinical application of TND for providing a second, more objective opinion to radiologists by exploiting image evidence.

  3. Synthetic aperture acoustic imaging of canonical targets with a 2-15 kHz linear FM chirp

    NASA Astrophysics Data System (ADS)

    Vignola, Joseph F.; Judge, John A.; Good, Chelsea E.; Bishop, Steven S.; Gugino, Peter M.; Soumekh, Mehrdad

    2011-06-01

    Synthetic aperture image reconstruction applied to outdoor acoustic recordings is presented. Acoustic imaging is an alternative method with several militarily relevant advantages: it is immune to RF jamming, offers superior spatial resolution, is capable of standoff side- and forward-looking scanning, and has relatively low cost, weight, and size when compared to 0.5-3 GHz ground penetrating radar technologies. Synthetic aperture acoustic imaging is similar to synthetic aperture radar, but more akin to synthetic aperture sonar technologies owing to the nature of longitudinal or compressive wave propagation in the surrounding acoustic medium. The system's transceiver is a quasi-monostatic microphone and audio speaker pair mounted on a rail 5 meters in length. The received data sampling rate is 80 kHz with a 2-15 kHz linear frequency modulated (LFM) chirp, a pulse repetition frequency (PRF) of 10 Hz, and an inter-pulse period (IPP) of 50 milliseconds. Targets are positioned within the acoustic scene at slant ranges of two to ten meters on grass, dirt, or gravel surfaces, with and without intervening metallic chain-link fencing. Acoustic image reconstruction provides a means for literal interpretation and quantifiable analysis. A rudimentary technique characterizes acoustic scatter at the ground surfaces. Targets within the acoustic scene are first digitally spotlighted and further processed, providing frequency- and aspect-angle-dependent signature information.
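
    The abstract specifies the waveform parameters, so a small sketch of pulse compression with a matching 2-15 kHz LFM chirp sampled at 80 kHz may help; the synthetic echo, pulse length, and noise level are assumptions, and the full synthetic aperture reconstruction is not attempted here.

```python
import numpy as np
from scipy.signal import chirp

fs = 80e3                      # sampling rate (Hz), as stated in the abstract
T = 0.01                       # assumed pulse length of 10 ms
t = np.arange(0, T, 1 / fs)
tx = chirp(t, f0=2e3, t1=T, f1=15e3, method="linear")   # 2-15 kHz LFM pulse

# Synthetic echo: the pulse delayed by 20 ms (about 3.4 m one way at 340 m/s) plus noise.
delay = int(0.02 * fs)
rx = np.zeros(8000)
rx[delay:delay + tx.size] += 0.5 * tx
rx += 0.05 * np.random.randn(rx.size)

# Matched filtering (pulse compression): cross-correlate the echo with the transmit pulse.
compressed = np.correlate(rx, tx, mode="valid")
range_bin = int(np.argmax(np.abs(compressed)))
print("echo delay estimate:", range_bin / fs * 1e3, "ms")   # ~20 ms
```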

  4. A Robust Mine Detection Algorithm for Acoustic and Radar Images

    DTIC Science & Technology

    2000-10-01

    Hough transforms as demonstrated on an NVL mine hunting SBIR and on SAR ground target detection. The fundamental detection technique will be...Williams, “IA-CHAMELEON: A SAR Wide Area Image Analysis Aid,” Proc. ATRWG Workshop, Baltimore, MD, July 1996 The adaptive detection algorithm will...University, Mississippi 38677, September 15, 1998 Systems Incorporated (PSI) Ground Penetrating Radar (GPR), and on synthetic aperture radar (SAR) images

  5. Performance measure of image and video quality assessment algorithms: subjective root-mean-square error

    NASA Astrophysics Data System (ADS)

    Nuutinen, Mikko; Virtanen, Toni; Häkkinen, Jukka

    2016-03-01

    Evaluating algorithms used to assess image and video quality requires performance measures. Traditional performance measures (e.g., Pearson's linear correlation coefficient, Spearman's rank-order correlation coefficient, and root mean square error) compare quality predictions of algorithms to subjective mean opinion scores (mean opinion score/differential mean opinion score). We propose a subjective root-mean-square error (SRMSE) performance measure for evaluating the accuracy of algorithms used to assess image and video quality. The SRMSE performance measure takes into account dispersion between observers. The other important property of the SRMSE performance measure is its measurement scale, which is calibrated to units of the number of average observers. The results of the SRMSE performance measure indicate the extent to which the algorithm can replace the subjective experiment (as the number of observers). Furthermore, we have presented the concept of target values, which define the performance level of the ideal algorithm. We have calculated the target values for all sample sets of the CID2013, CVD2014, and LIVE multiply distorted image quality databases. The target values and MATLAB implementation of the SRMSE performance measure are available on the project page of this study.
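
    The record does not reproduce the exact SRMSE definition; the sketch below shows one plausible formulation consistent with the description (prediction error normalized by the dispersion of individual observer scores), intended only to illustrate the idea of reporting error on a scale of average observers. The authors' published definition may differ.

```python
import numpy as np

def srmse_like(pred, observer_scores):
    """Illustrative observer-dispersion-normalized RMSE (not the published formula).

    pred:             algorithm quality predictions, shape (n_items,)
    observer_scores:  raw subjective scores, shape (n_items, n_observers)
    """
    mos = observer_scores.mean(axis=1)                  # mean opinion score per item
    # Standard error of the MOS reflects how much one "average observer" varies.
    sem = observer_scores.std(axis=1, ddof=1) / np.sqrt(observer_scores.shape[1])
    return np.sqrt(np.mean(((pred - mos) / sem) ** 2))

# Example with synthetic data: 50 items rated by 20 observers.
rng = np.random.default_rng(0)
scores = rng.normal(3.0, 0.8, size=(50, 20))
predictions = scores.mean(axis=1) + rng.normal(0, 0.1, 50)
print(srmse_like(predictions, scores))
```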

  6. Monitoring an eruption fissure in 3D: video recording, particle image velocimetry and dynamics

    NASA Astrophysics Data System (ADS)

    Witt, Tanja; Walter, Thomas R.

    2015-04-01

    The processes during an eruption are very complex, and several parameters are measured to understand them better. One of these parameters is the velocity of particles and patterns, such as ash and emitted magma, and of the volcano itself. The resulting velocity field provides insights into the dynamics of a vent. Here we test our algorithm for three-dimensional velocity fields on videos of the second fissure eruption of Bárdarbunga in 2014, where we acquired videos of lava fountains at the main fissure with two high-speed cameras separated by small angles. Additionally, we test the algorithm on videos of the geyser Strokkur, where we had three cameras and larger angles between them. The velocity is calculated by a correlation in Fourier space of consecutive images. Considering that we only have the velocity field of the surface, smaller angles result in a better resolution of the velocity field in the near field. For general movements, larger angles can also be useful, e.g., to obtain the direction, height, and velocity of eruption clouds. In summary, 3D velocimetry can be used for several applications and with different setups, depending on the application.
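
    As a sketch of the core operation mentioned above (velocity estimation by correlating consecutive frames in Fourier space), the following estimates an integer pixel shift between two frames; the extension to multi-camera 3D velocity fields is not covered, and the toy frames are synthetic.

```python
import numpy as np

def shift_between(frame_a, frame_b):
    """Estimate the integer (dy, dx) displacement of frame_b relative to frame_a
    via cross-correlation computed in the Fourier domain."""
    F = np.fft.fft2(frame_a)
    G = np.fft.fft2(frame_b)
    corr = np.fft.ifft2(np.conj(F) * G)
    peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    shape = np.array(corr.shape)
    shifts = np.array(peak, dtype=float)
    # Wrap shifts larger than half the frame into negative displacements.
    wrap = shifts > shape / 2
    shifts[wrap] -= shape[wrap]
    return shifts

# Toy example: frame_b is frame_a moved by (3, -5) pixels.
rng = np.random.default_rng(1)
frame_a = rng.random((128, 128))
frame_b = np.roll(frame_a, shift=(3, -5), axis=(0, 1))
print(shift_between(frame_a, frame_b))   # prints approximately [ 3. -5.]
```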

  7. 3D surface reconstruction based on image stitching from gastric endoscopic video sequence

    NASA Astrophysics Data System (ADS)

    Duan, Mengyao; Xu, Rong; Ohya, Jun

    2013-09-01

    This paper proposes a method for reconstructing detailed 3D structures of internal organs, such as the gastric wall, from endoscopic video sequences. The proposed method consists of four major steps: feature-point-based 3D reconstruction, 3D point cloud stitching, dense point cloud creation, and Poisson surface reconstruction. Before the first step, we partition one video sequence into groups, where each group consists of two successive frames (image pairs), and each pair in each group contains one overlapping part, which is used as a stitching region. First, the 3D point cloud of each group is reconstructed by utilizing structure from motion (SFM). Secondly, a scheme based on SIFT features registers and stitches the obtained 3D point clouds by estimating the transformation matrix of the overlapping part between different groups with high accuracy and efficiency. Thirdly, we select the most robust SIFT feature points as the seed points and then obtain the dense point cloud from the sparse point cloud via a depth-testing method presented by Furukawa. Finally, by utilizing Poisson surface reconstruction, polygonal patches for the internal organs are obtained. Experimental results demonstrate that the proposed method achieves high accuracy and efficiency for 3D reconstruction of the gastric surface from an endoscopic video sequence.
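
    A sketch of the SIFT-based registration idea used in the stitching step is given below with OpenCV; here a 2D homography between overlapping frames stands in for the 3D point-cloud transformation estimated in the paper, the image file names are placeholders, and an OpenCV build with SIFT support is assumed.

```python
import cv2
import numpy as np

# Placeholder frames from the overlapping region of two successive groups.
img1 = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Ratio-test matching of SIFT descriptors.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]

# Robustly estimate a transform between the overlapping parts (2D stand-in here).
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
print("inlier matches:", int(inliers.sum()))
```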

  8. ACOUSTICAL IMAGING AND MECHANICAL PROPERTIES OF SOFT ROCK AND MARINE SEDIMENTS

    SciTech Connect

    Thurman E. Scott, Jr., Ph.D.; Younane Abousleiman, Ph.D.; Musharraf Zaman, Ph.D., P.E.

    2001-01-31

    During this phase of the project, the research team concentrated on acquisition of acoustic emission data from the high-porosity rock samples. The initial experiments indicated that the acoustic emission activity from high-porosity Danian chalk was of a very low amplitude. Even though the sample underwent yielding and significant plastic deformation, it did not generate significant AE activity. This was somewhat surprising. These initial results call into question the validity of attempting to locate AE activity in this weak rock type. As a result, the testing program was slightly altered to include measuring the acoustic emission activity from many of the rock types listed in the research program. The preliminary experimental results indicate that AE activity in the sandstones is much higher than in the carbonate rocks (i.e., the chalks and limestones). This observation may be particularly important for planning microseismic imaging of reservoir rocks in the field environment. The preliminary results suggest that microseismic imaging of reservoir rock from acoustic emission activity generated by matrix deformation (during compaction and subsidence) would be extremely difficult to accomplish.

  9. Imaging textural variation in the acoustoelastic coefficient of aluminum using surface acoustic waves.

    PubMed

    Ellwood, R; Stratoudaki, T; Sharples, S D; Clark, M; Somekh, M G

    2015-11-01

    Much interest has arisen in nonlinear acoustic techniques because of their reported sensitivity to variations in residual stress, fatigue life, and creep damage when compared to traditional linear ultrasonic techniques. However, there is also evidence that nonlinear acoustic properties are sensitive to material microstructure. As many industrially relevant materials have a polycrystalline structure, this could potentially complicate the monitoring of material processes when using nonlinear acoustics. Variations in the nonlinear acoustoelastic coefficient on the same length scale as the microstructure of a polycrystalline sample of aluminum are investigated in this paper. This is achieved by the development of a measurement protocol that allows imaging of the acoustoelastic response of a material across a sample's surface at the same time as imaging the microstructure. The development, validation, and limitations of this technique are discussed. The nonlinear acoustic response is found to vary spatially by a large factor (>20) between different grains. A relationship is observed when the spatial variation of the acoustoelastic coefficient is compared to the variation in material microstructure.

  10. In situ calibration of an infrared imaging video bolometer in the Large Helical Device

    SciTech Connect

    Mukai, K. Peterson, B. J.; Pandya, S. N.; Sano, R.

    2014-11-15

    The InfraRed imaging Video Bolometer (IRVB) is a powerful diagnostic to measure multi-dimensional radiation profiles in plasma fusion devices. In the Large Helical Device (LHD), four IRVBs have been installed with different fields of view to reconstruct three-dimensional profiles using a tomography technique. For the application of the measurement to plasma experiments using deuterium gas in LHD in the near future, the long-term effect of the neutron irradiation on the heat characteristics of an IRVB foil should be taken into account by regular in situ calibration measurements. Therefore, in this study, an in situ calibration system was designed.

  11. The architecture of a video image processor for the space station

    NASA Technical Reports Server (NTRS)

    Yalamanchili, S.; Lee, D.; Fritze, K.; Carpenter, T.; Hoyme, K.; Murray, N.

    1987-01-01

    The architecture of a video image processor for space station applications is described. The architecture was derived from a study of the requirements of algorithms that are necessary to produce the desired functionality of many of these applications. Architectural options were selected based on a simulation of the execution of these algorithms on various architectural organizations. A great deal of emphasis was placed on the ability of the system to evolve and grow over the lifetime of the space station. The result is a hierarchical parallel architecture that is characterized by high-level language programmability, modularity, and extensibility, and that can meet the required performance goals.

  12. Automated video-microscopic imaging and data acquisition system for colloid deposition measurements

    DOEpatents

    Abdel-Fattah, Amr I.; Reimus, Paul W.

    2004-12-28

    A video microscopic visualization system and image processing and data extraction and processing method for in situ detailed quantification of the deposition of sub-micrometer particles onto an arbitrary surface and determination of their concentration across the bulk suspension. The extracted data includes (a) surface concentration and flux of deposited, attached and detached colloids, (b) surface concentration and flux of arriving and departing colloids, (c) distribution of colloids in the bulk suspension in the direction perpendicular to the deposition surface, and (d) spatial and temporal distributions of deposited colloids.

  13. Rocket engine plume diagnostics using video digitization and image processing - Analysis of start-up

    NASA Technical Reports Server (NTRS)

    Disimile, P. J.; Shoe, B.; Dhawan, A. P.

    1991-01-01

    Video digitization techniques have been developed to analyze the exhaust plume of the Space Shuttle Main Engine. Temporal averaging and a frame-by-frame analysis provide data used to evaluate the capabilities of image processing techniques for use as measurement tools. These capabilities include determining the time required for the Mach disk to reach a fully developed state. Other results show that the Mach disk tracks the nozzle for short time intervals, and that dominant frequencies exist for the nozzle and Mach disk movement.

  14. Acoustic Reciprocity of Spatial Coherence in Ultrasound Imaging

    PubMed Central

    Bottenus, Nick; Üstüner, Kutay F.

    2015-01-01

    A conventional ultrasound image is formed by transmitting a focused wave into tissue, time-shifting the backscattered echoes received on an array transducer and summing the resulting signals. The van Cittert-Zernike theorem predicts a particular similarity, or coherence, of these focused signals across the receiving array. Many groups have used an estimate of the coherence to augment or replace the B-mode image in an effort to suppress noise and stationary clutter echo signals, but this measurement requires access to individual receive channel data. Most clinical systems have efficient pipelines for producing focused and summed RF data without any direct way to individually address the receive channels. We describe a method for performing coherence measurements that is more accessible for a wide range of coherence-based imaging. The reciprocity of the transmit and receive apertures in the context of coherence is derived and equivalence of the coherence function is validated experimentally using a research scanner. The proposed method is implemented on a Siemens ACUSON SC2000™ ultrasound system and in vivo short-lag spatial coherence imaging is demonstrated using only summed RF data. The components beyond the acquisition hardware and beamformer necessary to produce a real-time ultrasound coherence imaging system are discussed. PMID:25965679
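
    For orientation, the sketch below computes the conventional receive-channel spatial coherence curve (and a short-lag sum) from per-element RF data; the paper's actual contribution, obtaining an equivalent coherence measurement from summed RF data via transmit/receive reciprocity, is not reproduced, and the channel data here are random placeholders.

```python
import numpy as np

def spatial_coherence(channel_rf, max_lag):
    """Normalized coherence vs. element lag for focused receive-channel RF data.

    channel_rf: array of shape (n_elements, n_samples) for one axial kernel.
    Returns an array of length max_lag + 1 (lags 0 .. max_lag).
    """
    x = channel_rf - channel_rf.mean(axis=1, keepdims=True)
    n_elem = x.shape[0]
    coh = np.zeros(max_lag + 1)
    for lag in range(max_lag + 1):
        vals = []
        for i in range(n_elem - lag):
            a, b = x[i], x[i + lag]
            denom = np.sqrt(np.dot(a, a) * np.dot(b, b))
            if denom > 0:
                vals.append(np.dot(a, b) / denom)
        coh[lag] = np.mean(vals)
    return coh

# Short-lag spatial coherence (SLSC) value: sum coherence over the first few lags.
rng = np.random.default_rng(0)
rf = rng.standard_normal((64, 40))          # placeholder channel data, 64 elements
slsc = spatial_coherence(rf, max_lag=12)[1:].sum()
print(slsc)
```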

  15. Observations of Brine Pool Surface Characteristics and Internal Structure Through Remote Acoustic and Structured Light Imaging

    NASA Astrophysics Data System (ADS)

    Smart, C.; Roman, C.; Michel, A.; Wankel, S. D.

    2015-12-01

    Observations and analysis of the surface characteristics and internal structure of deep-sea brine pools are currently limited to discrete in-situ observations. Complementary acoustic and structured-light imaging sensors mounted on a remotely operated vehicle (ROV) have demonstrated the ability to systematically detect variations in the surface characteristics of a brine pool, reveal internal stratification, and detect areas of active hydrocarbon seepage. The presented visual and acoustic sensors, combined with a stereo camera pair, are mounted on the 4000 m rated ROV Hercules (Ocean Exploration Trust). These three independent sensors operate simultaneously from a typical 3 m altitude, resulting in visual and bathymetric maps with sub-centimeter resolution. Applying this imaging technology to 2014 and 2015 brine pool surveys in the Gulf of Mexico revealed acoustic and visual anomalies due to the density changes inherent in the brine. Such distinct changes in acoustic impedance allowed the high-frequency 1350 kHz multibeam sonar to detect multiple interfaces. For instance, distinct acoustic reflections were observed at 3 m and 5.5 m below the vehicle. Subsequent verification using a CTD and lead line indicated that the acoustic return from the brine surface was the signal at 3 m, while a thicker, muddier, and more saline interface occurred at 5.5 m; the bottom of the brine pool was not located but is assumed to be deeper than 15 m. The multibeam is also capable of remotely detecting emitted gas bubbles within the brine pool, indicative of active hydrocarbon seeps. Bubbles associated with these seeps were not consistently visible above the brine while using the HD camera on the ROV. Additionally, while imaging the surface of the brine pool, the structured-light sheet laser became diffuse, refracting across the main interface. Analysis of this refraction, combined with varying acoustic returns, allows for systematic and remote detection of the density, stratification and activity levels within and

  16. Near-Field Imaging with Sound: An Acoustic STM Model

    ERIC Educational Resources Information Center

    Euler, Manfred

    2012-01-01

    The invention of scanning tunneling microscopy (STM) 30 years ago opened up a visual window to the nano-world and sparked off a bunch of new methods for investigating and controlling matter and its transformations at the atomic and molecular level. However, an adequate theoretical understanding of the method is demanding; STM images can be…

  17. Study of acoustic shadow moire for imaging technique

    NASA Astrophysics Data System (ADS)

    Yaqoub, Mahmoud

    This research utilizes ultrasound waves and the moiré phenomenon to establish a new imaging technology for industrial and medical applications. The theory and mathematical description are presented in this work. Numerical simulation is performed to prove the concept; COMSOL simulation, which uses a finite difference technique, is employed. The results are compared with experimental results obtained by a researcher from NIU at Santec Systems Inc., Wheeling, IL. The diffraction of the ultrasound waves depends on the wavelength. Because the sound wavelength is large, a diffraction grating of wider pitch is used; therefore, using ultrasound in shadow moiré imaging will be limited by the pitch of the diffraction grating. The Talbot image of the grating was studied using numerical simulation, and the simulation results were found to be in agreement with experimental results. This is evidence that ultrasound shadow moiré has the same characteristics as light shadow moiré. This work simulates the imaging of an inclined specimen at two different angles, 20 and 25 degrees. The distance between the first two moiré fringes is found to be close to 5.5 mm, meaning that the second fringe is a locus of constant out-of-plane elevation of 4.2 mm with respect to the first fringe. This simulation shows an error of 17.7% compared with the experimental and theoretical results. This difference can be attributed to the fact that the experimental conditions are not ideal and to the paraxial and Fresnel approximations used in the analytical equations.

  18. Microscopic imaging of residual stress using a scanning phase-measuring acoustic microscope

    NASA Astrophysics Data System (ADS)

    Meeks, Steven W.; Peter, D.; Horne, D.; Young, K.; Novotny, V.

    1989-10-01

    A high-resolution scanning phase-measuring acoustic microscope (SPAM) has been developed and used to image the near-surface residual stress field around features etched in sputtered alumina via the acoustoelastic effect. This microscope operates at 670 MHz and has a resolution of 5-10 microns, depending upon the amount of defocus. Relative velocity changes of sample surface waves as small as 50 ppm are resolved. Images of the stress field at the tip of a 400-micron-wide slot etched in alumina are presented and compared with a finite element simulation. The SPAM uses an unconventional acoustic lens with an anisotropic illumination pattern which can measure anisotropic effects and map residual stress fields with several-micron resolution and a stress sensitivity of 1/3 MPa in an alumina film.

  19. Acoustical imaging and processing of blood vessel and the related materials using ultrasound Doppler effect.

    PubMed

    Yokobori, A T; Ohkuma, T; Yoshinari, H; Yokobori, T; Ohuchi, H; Mori, S

    1991-01-01

    In the present paper, a method is proposed to measure the degree of degradation of elasticity in natural blood vessels and related materials using the ultrasound Doppler effect. It was found that the deformation rate and its acceleration in the radial direction of the blood vessel can be detected by acoustical imaging and processing using this method. These results were shown to correspond to the degree of degradation of elasticity, that is, the degree of viscoelasticity of the blood vessel, based on the detected wave-versus-time pattern and its simple analysis. This method was applied to noninvasively predicting arteriosclerosis of human blood vessels by acoustical imaging and processing of the viscoelastic characteristics of the vessels.

  20. Performance characterization of image and video analysis systems at Siemens Corporate Research

    NASA Astrophysics Data System (ADS)

    Ramesh, Visvanathan; Jolly, Marie-Pierre; Greiffenhagen, Michael

    2000-06-01

    There has been a significant increase in commercial products using imaging analysis techniques to solve real-world problems in diverse fields such as manufacturing, medical imaging, document analysis, transportation and public security, etc. This has been accelerated by various factors: more advanced algorithms, the availability of cheaper sensors, and faster processors. While algorithms continue to improve in performance, a major stumbling block in translating improvements in algorithms to faster deployment of image analysis systems is the lack of characterization of limits of algorithms and how they affect total system performance. The research community has realized the need for performance analysis and there have been significant efforts in the last few years to remedy the situation. Our efforts at SCR have been on statistical modeling and characterization of modules and systems. The emphasis is on both white-box and black-box methodologies to evaluate and optimize vision systems. In the first part of this paper we review the literature on performance characterization and then provide an overview of the status of research in performance characterization of image and video understanding systems. The second part of the paper is on performance evaluation of medical image segmentation algorithms. Finally, we highlight some research issues in performance analysis in medical imaging systems.

  1. Adaptive regularization of the NL-means: application to image and video denoising.

    PubMed

    Sutour, Camille; Deledalle, Charles-Alban; Aujol, Jean-François

    2014-08-01

    Image denoising is a central problem in image processing and is often a necessary step prior to higher-level analysis such as segmentation, reconstruction, or super-resolution. The nonlocal means (NL-means) perform denoising by exploiting the natural redundancy of patterns inside an image: they perform a weighted average of pixels whose neighborhoods (patches) are close to each other. This significantly reduces the noise while preserving most of the image content. While it performs well on flat areas and textures, it suffers from two opposite drawbacks: it might over-smooth low-contrast areas or leave residual noise around edges and singular structures. Denoising can also be performed by total variation minimization (the Rudin, Osher, and Fatemi model), which restores regular images but is prone to over-smoothing of textures, staircasing effects, and contrast losses. We introduce in this paper a variational approach that corrects the over-smoothing and reduces the residual noise of the NL-means by adaptively regularizing nonlocal methods with the total variation. The proposed regularized NL-means algorithm combines these methods and reduces both of their respective drawbacks by minimizing an adaptive total variation with a nonlocal data fidelity term. Besides, this model adapts to different noise statistics, and a fast solution can be obtained in the general case of the exponential family. We develop this model for image denoising and adapt it to video denoising with 3D patches.
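
    The two ingredients combined in the paper are both available off the shelf; the sketch below simply chains scikit-image's NL-means and total-variation denoisers as a naive stand-in, which is not the authors' adaptive variational model coupling a nonlocal data-fidelity term with TV regularization.

```python
import numpy as np
from skimage import data, img_as_float
from skimage.util import random_noise
from skimage.restoration import denoise_nl_means, denoise_tv_chambolle, estimate_sigma

clean = img_as_float(data.camera())
noisy = random_noise(clean, var=0.01)

sigma = float(np.mean(estimate_sigma(noisy)))
# NL-means: weighted average of pixels with similar patch neighborhoods.
nlm = denoise_nl_means(noisy, h=1.15 * sigma, patch_size=5, patch_distance=6,
                       fast_mode=True)
# Naive post-regularization with total variation (not the paper's coupled model).
combined = denoise_tv_chambolle(nlm, weight=0.02)

print("residual RMS:", np.sqrt(np.mean((combined - clean) ** 2)))
```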

  2. Security SVGA image sensor with on-chip video data authentication and cryptographic circuit

    NASA Astrophysics Data System (ADS)

    Stifter, P.; Eberhardt, K.; Erni, A.; Hofmann, K.

    2005-10-01

    Security applications of sensors in a networking environment have a strong demand for sensor authentication and secure data transmission, due to the possibility of man-in-the-middle and address-spoofing attacks. A secure sensor system should therefore fulfil the three standard requirements of cryptography, namely data integrity, authentication, and non-repudiation. This paper presents a unique sensor development by AIM, the so-called SecVGA, which is a high-performance, monochrome (B/W) CMOS active-pixel image sensor. The device is capable of capturing still and motion images with a resolution of 800x600 active pixels and converting the image into a digital data stream. The distinguishing feature of this development, in comparison to standard imaging sensors, is the on-chip cryptographic engine which provides sensor authentication based on a one-way challenge/response protocol. The implemented protocol results in the exchange of a session key which secures the subsequent video data transmission. This is achieved by calculating a cryptographic checksum derived from a stateful hash value of the complete image frame. Every sensor contains an EEPROM memory cell for the non-volatile storage of a unique identifier. The imager is programmable via a two-wire I2C-compatible interface which controls the integration time, the active window size of the pixel array, the frame rate, and various operating modes including the authentication procedure.
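
    The on-chip protocol itself is not described in detail in this record; the sketch below only illustrates the general pattern mentioned (a challenge/response exchange yielding a session key that keys a stateful per-frame checksum), using standard Python primitives and hypothetical secrets and frame payloads rather than the SecVGA's actual scheme.

```python
import hashlib
import hmac
import os

# Hypothetical long-term secret stored in the sensor's non-volatile memory.
DEVICE_SECRET = b"per-device secret provisioned at manufacture"

def respond_to_challenge(challenge: bytes) -> tuple[bytes, bytes]:
    """Answer a host challenge and derive a session key (illustrative only)."""
    response = hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).digest()
    session_key = hashlib.sha256(DEVICE_SECRET + challenge).digest()
    return response, session_key

def frame_checksum(session_key: bytes, prev_state: bytes, frame: bytes) -> bytes:
    """Stateful per-frame checksum: each frame's tag chains in the previous state."""
    return hmac.new(session_key, prev_state + frame, hashlib.sha256).digest()

# Usage: the host sends a random challenge, then verifies a tag per video frame.
challenge = os.urandom(16)
response, key = respond_to_challenge(challenge)
state = b"\x00" * 32
for frame in (b"frame-0-bytes", b"frame-1-bytes"):      # placeholder frame payloads
    state = frame_checksum(key, state, frame)
    print(state.hex()[:16])
```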

  3. Overview of image processing tools to extract physical information from JET videos

    NASA Astrophysics Data System (ADS)

    Craciunescu, T.; Murari, A.; Gelfusa, M.; Tiseanu, I.; Zoita, V.; EFDA Contributors, JET

    2014-11-01

    In magnetic confinement nuclear fusion devices such as JET, the last few years have witnessed a significant increase in the use of digital imagery, not only for the surveying and control of experiments, but also for the physical interpretation of results. More than 25 cameras are routinely used for imaging on JET in the infrared (IR) and visible spectral regions. These cameras can produce up to tens of Gbytes per shot and their information content can be very different, depending on the experimental conditions. However, the relevant information about the underlying physical processes is generally of much reduced dimensionality compared to the recorded data. The extraction of this information, which allows full exploitation of these diagnostics, is a challenging task. The image analysis consists, in most cases, of inverse problems which are typically ill-posed mathematically. The typology of objects to be analysed is very wide, and usually the images are affected by noise, low levels of contrast, low grey-level in-depth resolution, reshaping of moving objects, etc. Moreover, the plasma events have time constants of ms or tens of ms, which imposes tough conditions for real-time applications. On JET, in the last few years new tools and methods have been developed for physical information retrieval. The methodology of optical flow has allowed, under certain assumptions, the derivation of information about the dynamics of video objects associated with different physical phenomena, such as instabilities, pellets and filaments. The approach has been extended in order to approximate the optical flow within the MPEG compressed domain, allowing the manipulation of the large JET video databases and, in specific cases, even real-time data processing. The fast visible camera may provide new information that is potentially useful for disruption prediction. A set of methods, based on the extraction of structural information from the visual scene, have been developed for the
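
    As a pointer to the optical-flow step mentioned above, the sketch below runs OpenCV's dense Farnebäck flow on two consecutive frames and thresholds the motion magnitude; the MPEG compressed-domain approximation used on JET is not shown, and the frame files and threshold are placeholders.

```python
import cv2

# Two consecutive frames from a camera video (placeholder file names).
prev = cv2.imread("jet_frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("jet_frame_001.png", cv2.IMREAD_GRAYSCALE)

# Dense Farneback optical flow: one (dx, dy) vector per pixel.
# Positional args: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)

magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
# Pixels moving faster than an assumed threshold flag candidate moving structures
# (e.g., filaments or pellets).
moving = magnitude > 2.0
print("moving-pixel fraction:", float(moving.mean()))
```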

  4. Acoustic radiation force impulse imaging for evaluation of renal parenchyma elasticity in diabetic nephropathy.

    PubMed

    Goya, Cemil; Kilinc, Faruk; Hamidi, Cihad; Yavuz, Alpaslan; Yildirim, Yasar; Cetincakmak, Mehmet Guli; Hattapoglu, Salih

    2015-02-01

    OBJECTIVE. The goal of this study is to evaluate the changes in the elasticity of the renal parenchyma in diabetic nephropathy using acoustic radiation force impulse imaging. SUBJECTS AND METHODS. The study included 281 healthy volunteers and 114 patients with diabetic nephropathy. In healthy volunteers, the kidney elasticity was assessed quantitatively by measuring the shear-wave velocity using acoustic radiation force impulse imaging based on age, body mass index, and sex. The changes in the renal elasticity were compared between the different stages of diabetic nephropathy and the healthy control group. RESULTS. In healthy volunteers, there was a statistically significant correlation between the shear-wave velocity values and age and sex. The shear-wave velocity values for the kidneys were 2.87, 3.14, 2.95, 2.68, and 2.55 m/s in patients with stage 1, 2, 3, 4, and 5 diabetic nephropathy, respectively, compared with 2.35 m/s for healthy control subjects. Acoustic radiation force impulse imaging was able to distinguish between the different diabetic nephropathy stages (except for stage 5) in the kidneys. The threshold value for predicting diabetic nephropathy was 2.43 m/s (sensitivity, 84.1%; specificity, 67.3%; positive predictive value, 93.1%; negative predictive value 50.8%; accuracy, 72.1%; positive likelihood ratio, 2.5; and negative likelihood ratio, 0.23). CONCLUSION. Acoustic radiation force impulse imaging could be used for the evaluation of the renal elasticity changes that are due to secondary structural and functional changes in diabetic nephropathy.

  5. High-resolution acoustic imaging at low frequencies using 3D-printed metamaterials

    NASA Astrophysics Data System (ADS)

    Laureti, S.; Hutchins, D. A.; Davis, L. A. J.; Leigh, S. J.; Ricci, M.

    2016-12-01

    An acoustic metamaterial has been constructed using 3D printing. It contained an array of air-filled channels, whose size and shape could be varied within the design and manufacture process. In this paper we analyze both numerically and experimentally the properties of this polymer metamaterial structure, and demonstrate its use for the imaging of a sample with sub-wavelength dimensions in the audible frequency range.

  6. Improvement of the imaging of moving acoustic sources by the knowledge of their motion

    NASA Astrophysics Data System (ADS)

    Hay, J.

    1981-03-01

    An analytical and experimental study is presented showing that, thanks to a more precise definition of nonstationary noises of a certain class and to the preprocessing of microphone signals (termed 'coherent dedopplerization'), one can obtain acoustic images of sources whose velocity is greater than can be handled by conventional methods, without generating blurs of the same order as the antenna field. A useful application of these techniques would be to two-dimensional antennas.

  7. High Resolution X-Ray Phase Contrast Imaging With Acoustic Tissue-Selective Contrast Enhancement

    DTIC Science & Technology

    2006-06-01

    microfocus x-ray source. Rev. Sci. Instr. 68, 2774 (1997). 8. Krol, A. et al. Laser-based microfocused x-ray source for mammography: Feasibility study...W81XWH-04-1-0481 TITLE: High Resolution X-ray Phase Contrast Imaging With Acoustic Tissue-Selective Contrast Enhancement PRINCIPAL...REPORT TYPE Annual 3. DATES COVERED (From - To) 1 Jun 2005 – 31 May 2006 4. TITLE AND SUBTITLE 5a. CONTRACT NUMBER High Resolution X-ray

  8. High Resolution X-Ray Phase Contrast Imaging with Acoustic Tissue-Selective Contrast Enhancement

    DTIC Science & Technology

    2005-06-01

    microfocus x-ray source. Rev. Sci. Instr. 68, 2774 (1997). 8. Krol, A. et al. Laser-based microfocused x-ray ...high spatial coherence, such as synchrotrons 46, microfocus x-ray tubes 7, or laser plasma x-ray sources 8,9 are employed is the phase contrast component...imaging apparatus to determine the deflection of the bead as a function of acoustic pressure. The x-rays, generated by a microfocus x-ray tube

  9. Contrast Enhancement for Thermal Acoustic Breast Cancer Imaging via Resonant Stimulation

    DTIC Science & Technology

    2008-03-01

    Wang, “Time-domain reconstruction for thermoacoustic tomography in a spherical geometry,” IEEE Trans. Med. Imag., vol. 21, no. 7, pp. 814–822, Jul...excited into resonance via EM stimulation, the effective acoustic scattering cross-section may increase by a factor in excess of 100 based on

  10. Acoustic output of multi-line transmit beamforming for fast cardiac imaging: a simulation study.

    PubMed

    Santos, Pedro; Tong, Ling; Ortega, Alejandra; Løvstakken, Lasse; Samset, Eigil; D'hooge, Jan

    2015-07-01

    Achieving higher frame rates in cardiac ultrasound could unveil short-lived myocardial events and lead to new insights on cardiac function. Multi-line transmit (MLT) beamforming (i.e., simultaneously transmitting multiple focused beams) is a potential approach to achieve this. However, two challenges come with it: first, it leads to cross-talk between the MLT beams, appearing as imaging artifacts, and second, it presents acoustic summation in the near field, where multiple MLT beams overlap. Although several studies have focused on the former, no studies have looked into the implications of the latter on acoustic safety. In this paper, the acoustic field of 4-MLT was simulated and compared with single-line transmit (SLT). The findings suggest that standard MLT does present potential concerns. Compared with SLT, it shows a 2-fold increase in mechanical index (MI) (from 1.0 to 2.3), a 6-fold increase in spatial-peak pulse-average intensity (I(sppa)) (from 99 to 576 W∙cm(-2)) and a 12-fold increase in spatial-peak temporal-average intensity (I(spta)) (from 119 to 1407 mW∙cm(-2)). Subsequently, modifications of the transmit pulse and delay line of MLT were studied. These modifications allowed for a change in the spatio-temporal distribution of the acoustic output, thereby significantly decreasing the safety indices (MI = 1.2, I(sppa) = 92 W∙cm(-2) and I(spta) = 366 mW∙cm(-2)). Accordingly, they help mitigate the concerns around MLT, reducing potential tradeoffs between acoustic safety and image quality.

  11. Underwater Acoustic Matched Field Imaging Based on Compressed Sensing

    PubMed Central

    Yan, Huichen; Xu, Jia; Long, Teng; Zhang, Xudong

    2015-01-01

    Matched field processing (MFP) is an effective method for underwater target imaging and localizing, but its performance is not guaranteed due to the nonuniqueness and instability problems caused by the underdetermined essence of MFP. By exploiting the sparsity of the targets in an imaging area, this paper proposes a compressive sensing MFP (CS-MFP) model from wave propagation theory by using randomly deployed sensors. In addition, the model’s recovery performance is investigated by exploring the lower bounds of the coherence parameter of the CS dictionary. Furthermore, this paper analyzes the robustness of CS-MFP with respect to the displacement of the sensors. Subsequently, a coherence-excluding coherence optimized orthogonal matching pursuit (CCOOMP) algorithm is proposed to overcome the high coherent dictionary problem in special cases. Finally, some numerical experiments are provided to demonstrate the effectiveness of the proposed CS-MFP method. PMID:26457708

  12. Underwater Acoustic Matched Field Imaging Based on Compressed Sensing.

    PubMed

    Yan, Huichen; Xu, Jia; Long, Teng; Zhang, Xudong

    2015-10-07

    Matched field processing (MFP) is an effective method for underwater target imaging and localizing, but its performance is not guaranteed due to the nonuniqueness and instability problems caused by the underdetermined essence of MFP. By exploiting the sparsity of the targets in an imaging area, this paper proposes a compressive sensing MFP (CS-MFP) model from wave propagation theory by using randomly deployed sensors. In addition, the model's recovery performance is investigated by exploring the lower bounds of the coherence parameter of the CS dictionary. Furthermore, this paper analyzes the robustness of CS-MFP with respect to the displacement of the sensors. Subsequently, a coherence-excluding coherence optimized orthogonal matching pursuit (CCOOMP) algorithm is proposed to overcome the high coherent dictionary problem in special cases. Finally, some numerical experiments are provided to demonstrate the effectiveness of the proposed CS-MFP method.
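
    As background to the recovery step, the sketch below implements plain orthogonal matching pursuit on a toy sparse problem; the coherence-excluding CCOOMP variant and the physical acoustic propagation dictionary of CS-MFP are not reproduced, and the random dictionary is only a stand-in.

```python
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse x with y ~= A @ x via orthogonal matching pursuit."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # Pick the dictionary atom most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit coefficients on the selected support by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Toy problem: 3 active sources in a 200-column random "propagation" dictionary.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200))
A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(200)
x_true[[20, 75, 140]] = [1.0, -0.7, 0.5]
y = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = omp(A, y, k=3)
print(np.flatnonzero(np.abs(x_hat) > 0.1))   # expected: [ 20  75 140]
```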

  13. Algorithm design for automated transportation photo enforcement camera image and video quality diagnostic check modules

    NASA Astrophysics Data System (ADS)

    Raghavan, Ajay; Saha, Bhaskar

    2013-03-01

    Photo enforcement devices for traffic rules such as red lights, tolls, stops, and speed limits are increasingly being deployed in cities and counties around the world to ensure smooth traffic flow and public safety. These are typically unattended fielded systems, and so it is important to periodically check them for potential image/video quality problems that might interfere with their intended functionality. There is interest in automating such checks to reduce the operational overhead and human error involved in manually checking large camera device fleets. Examples of problems affecting such camera devices include exposure issues, focus drifts, obstructions, misalignment, download errors, and motion blur. Furthermore, in some cases, in addition to the sub-algorithms for individual problems, one also has to carefully design the overall algorithm and logic to check for and accurately classify these individual problems. Some of these issues can occur in tandem or have the potential to be confused for each other by automated algorithms. Examples include camera misalignment that can cause some scene elements to go out of focus for wide-area scenes, or download errors that can be misinterpreted as an obstruction. Therefore, the sequence in which the sub-algorithms are utilized is also important. This paper presents an overview of these problems along with no-reference and reduced-reference image and video quality solutions to detect and classify such faults.
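
    Two of the simpler no-reference checks mentioned above (focus and exposure) can be illustrated with common heuristics, variance of the Laplacian and histogram clipping; the thresholds, file name, and overall fault-classification logic below are assumptions, not the paper's algorithms.

```python
import cv2
import numpy as np

def focus_score(gray):
    """Variance of the Laplacian; low values suggest defocus or motion blur."""
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def exposure_flags(gray, clip_fraction=0.05):
    """Flag frames with a large fraction of clipped dark or bright pixels."""
    dark = float(np.mean(gray <= 5))
    bright = float(np.mean(gray >= 250))
    return {"underexposed": dark > clip_fraction, "overexposed": bright > clip_fraction}

frame = cv2.imread("enforcement_frame.png", cv2.IMREAD_GRAYSCALE)   # placeholder file
report = {"focus_score": focus_score(frame), **exposure_flags(frame)}
report["blurry"] = report["focus_score"] < 100.0                     # assumed threshold
print(report)
```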

  14. Guidance for horizontal image translation (HIT) of high definition stereoscopic video production

    NASA Astrophysics Data System (ADS)

    Broberg, David K.

    2011-03-01

    Horizontal image translation (HIT) is an electronic process for shifting the left-eye and right-eye images horizontally as a way to alter the stereoscopic characteristics and alignment of 3D content after signals have been captured by stereoscopic cameras. When used cautiously and with full awareness of the impact on other interrelated aspects of the stereography, HIT is a valuable tool in the post production process as a means to modify stereoscopic content for more comfortable viewing. Most commonly it is used to alter the zero parallax setting (ZPS), to compensate for stereo window violations or to compensate for excessive positive or negative parallax in the source material. As more and more cinematic 3D content migrates to television distribution channels the use of this tool will likely expand. Without proper attention to certain guidelines the use of HIT can actually harm the 3D viewing experience. This paper provides guidance on the most effective use and describes some of the interrelationships and trade-offs. The paper recommends the adoption of the cinematic 2K video format as a 3D source master format for high definition television distribution of stereoscopic 3D video programming.
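
    The basic operation itself is a horizontal shift of the two eye views in opposite directions; the sketch below shows one sign convention with zero-filled edges. Deciding how far to shift, and handling the exposed edge columns and the resulting change in parallax budget, are the stereographic judgments the paper is about and are not addressed here.

```python
import numpy as np

def horizontal_image_translation(left, right, shift_px):
    """Shift the left-eye view right and the right-eye view left by shift_px/2 each,
    moving the zero-parallax plane; columns exposed by the shift are zero-filled."""
    half = shift_px // 2
    left_out = np.zeros_like(left)
    right_out = np.zeros_like(right)
    if half > 0:
        left_out[:, half:] = left[:, :-half]
        right_out[:, :-half] = right[:, half:]
    else:
        left_out[:] = left
        right_out[:] = right
    return left_out, right_out

# Toy usage on random "frames".
L = np.random.rand(1080, 1920, 3)
R = np.random.rand(1080, 1920, 3)
L2, R2 = horizontal_image_translation(L, R, shift_px=8)
```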

  15. Estimation of respiratory rate from photoplethysmographic imaging videos compared to pulse oximetry.

    PubMed

    Karlen, Walter; Garde, Ainara; Myers, Dorothy; Scheffer, Cornie; Ansermino, J Mark; Dumont, Guy A

    2015-07-01

    We present a study evaluating two respiratory rate estimation algorithms using videos obtained from placing a finger on the camera lens of a mobile phone. The two algorithms, based on Smart Fusion and empirical mode decomposition (EMD), consist of previously developed signal processing methods to detect features and extract respiratory induced variations in photoplethysmographic signals to estimate respiratory rate. With custom-built software on an Android phone, photoplethysmographic imaging videos were recorded from 19 healthy adults while breathing spontaneously at respiratory rates between 6 to 32 breaths/min. Signals from two pulse oximeters were simultaneously recorded to compare the algorithms' performance using mobile phone data and clinical data. Capnometry was recorded to obtain reference respiratory rates. Two hundred seventy-two recordings were analyzed. The Smart Fusion algorithm reported 39 recordings with insufficient respiratory information from the photoplethysmographic imaging data. Of the 232 remaining recordings, a root mean square error (RMSE) of 6 breaths/min was obtained. The RMSE for the pulse oximeter data was lower at 2.3 breaths/min. RMSE for the EMD method was higher throughout all data sources as, unlike the Smart Fusion, the EMD method did not screen for inconsistent results. The study showed that it is feasible to estimate respiratory rates by placing a finger on a mobile phone camera, but that it becomes increasingly challenging at respiratory rates greater than 20 breaths/min, independent of data source or algorithm tested.

  16. SVD-based quality metric for image and video using machine learning.

    PubMed

    Narwaria, Manish; Lin, Weisi

    2012-04-01

    We study the use of machine learning for visual quality evaluation with comprehensive singular value decomposition (SVD)-based visual features. In this paper, the two-stage process and the relevant work in the existing visual quality metrics are first introduced followed by an in-depth analysis of SVD for visual quality assessment. Singular values and vectors form the selected features for visual quality assessment. Machine learning is then used for the feature pooling process and demonstrated to be effective. This is to address the limitations of the existing pooling techniques, like simple summation, averaging, Minkowski summation, etc., which tend to be ad hoc. We advocate machine learning for feature pooling because it is more systematic and data driven. The experiments show that the proposed method outperforms the eight existing relevant schemes. Extensive analysis and cross validation are performed with ten publicly available databases (eight for images with a total of 4042 test images and two for video with a total of 228 videos). We use all publicly accessible software and databases in this study, as well as making our own software public, to facilitate comparison in future research.
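
    A rough sketch of the two-stage idea (per-block singular-value features, then a learned pooling stage) is given below; scikit-learn's SVR stands in for the machine-learned pooling, the feature definition is simplified, and the training data are synthetic placeholders rather than the databases used in the paper.

```python
import numpy as np
from sklearn.svm import SVR

def svd_features(reference, distorted, block=32):
    """Per-block singular-value differences between reference and distorted images."""
    feats = []
    for i in range(0, reference.shape[0] - block + 1, block):
        for j in range(0, reference.shape[1] - block + 1, block):
            s_ref = np.linalg.svd(reference[i:i+block, j:j+block], compute_uv=False)
            s_dis = np.linalg.svd(distorted[i:i+block, j:j+block], compute_uv=False)
            feats.append(np.linalg.norm(s_ref - s_dis))
    return np.array(feats)

# Synthetic stand-in: each "image pair" yields a feature vector and a subjective score.
rng = np.random.default_rng(0)
ref = rng.random((128, 128))
pairs = [(ref, np.clip(ref + rng.normal(0, s, ref.shape), 0, 1), 5 - 10 * s)
         for s in rng.uniform(0.01, 0.3, 40)]
X = np.array([svd_features(r, d) for r, d, _ in pairs])
y = np.array([score for _, _, score in pairs])

pooler = SVR(kernel="rbf").fit(X, y)          # learned feature pooling
print(pooler.predict(X[:3]))
```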

  17. A Deep Learning Pipeline for Image Understanding and Acoustic Modeling

    DTIC Science & Technology

    2014-01-01

    a combination of both. Results are reported on two different datasets, the validation and held-out sets. In early experiments, we used Dropout on only...indicates that the network is very confident in the location of the object, as opposed to being spread out randomly. The top left image shows that it can...however outperforms object proposal methods on the ILSVRC13 detection dataset. Krizhevsky et al. [21] demonstrated impressive localization performance

  18. Development of single-channel stereoscopic video imaging modality for real-time retinal imaging

    NASA Astrophysics Data System (ADS)

    Radfar, Edalat; Park, Jihoon; Lee, Sangyeob; Ha, Myungjin; Yu, Sungkon; Jang, Seulki; Jung, Byungjo

    2016-03-01

    Stereoscopic retinal images can effectively help doctors. Most stereo-imaging surgical microscopes are based on dual optical channels and benefit from dual cameras, in which left and right cameras capture the corresponding left- and right-eye views. This study developed a single-channel stereoscopic retinal imaging modality based on a transparent rotating deflector (TRD). Two different viewing angles are generated by imaging through the TRD, which is mounted on a motor synchronized with a camera and placed in a single optical channel. Because the objective lens of the imaging modality generates a stereo image of an object located at its focal point, and given the structure of the eye, the optical setup becomes compatible with retinal imaging when the cornea and eye lens are engaged with the objective lens.

  19. Finite element modelling for the investigation of edge effect in acoustic micro imaging of microelectronic packages

    NASA Astrophysics Data System (ADS)

    Shen Lee, Chean; Zhang, Guang-Ming; Harvey, David M.; Ma, Hong-Wei; Braden, Derek R.

    2016-02-01

    In acoustic micro imaging of microelectronic packages, the edge effect often appears as artifacts in C-scan images, which may potentially obscure the detection of defects such as cracks and voids in the solder joints. The cause of the edge effect is debatable. In this paper, a 2D finite element model is developed, on the basis of acoustic micro imaging of a flip-chip package using a 230 MHz focused transducer, to investigate acoustic propagation inside the package in an attempt to elucidate the fundamental mechanism that causes the edge effect. A virtual transducer is designed in the finite element model to reduce the coupling fluid domain, and its performance is characterised against the physical transducer specification. The numerical results showed that the under bump metallization (UBM) structure inside the package has a significant impact on the edge effect. Simulated wavefields also showed that the edge effect is mainly attributed to horizontal scatter, which is observed at the interface between the silicon die and the outer radius of the solder bump. The horizontal scatter occurs even for a flip-chip package without the UBM structure.

  20. A magnetic resonance imaging study on the articulatory and acoustic speech parameters of Malay vowels.

    PubMed

    Zourmand, Alireza; Mirhassani, Seyed Mostafa; Ting, Hua-Nong; Bux, Shaik Ismail; Ng, Kwan Hoong; Bilgen, Mehmet; Jalaludin, Mohd Amin

    2014-07-25

    The phonetic properties of six Malay vowels are investigated using magnetic resonance imaging (MRI) to visualize the vocal tract in order to obtain dynamic articulatory parameters during speech production. To resolve image blurring due to tongue movement during the scanning process, a method based on active contour extraction is used to track tongue contours. The proposed method efficiently tracks tongue contours despite the partial blurring of MRI images. Consequently, the articulatory parameters are effectively measured as the tongue movement is observed, and the specific shape of the tongue and its position for all six uttered Malay vowels are determined. Speech rehabilitation procedures demand some kind of visually perceivable prototype of speech articulation. To investigate the validity of the measured articulatory parameters based on the acoustic theory of speech production, an acoustic analysis of the vowels uttered by the subjects has been performed. As the acoustic and articulatory parameters of the uttered speech were examined, a correlation between formant frequencies and articulatory parameters was observed. The experiments reported a positive correlation between the constriction location of the tongue body and the first formant frequency, as well as a negative correlation between the constriction location of the tongue tip and the second formant frequency. The results demonstrate that the proposed method is an effective tool for the dynamic study of speech production.

  1. High-quality and small-capacity e-learning video featuring lecturer-superimposing PC screen images

    NASA Astrophysics Data System (ADS)

    Nomura, Yoshihiko; Murakami, Michinobu; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko

    2006-10-01

    Information processing and communication technology are progressing quickly and are prevailing throughout various technological fields. The development of such technology should therefore respond to the need for improved quality in e-learning education systems. The authors propose a new video-image compression processing system that ingeniously exploits the features of the lecturing scene. While the dynamic lecturing scene is shot with a digital video camera, screen images are stored electronically by PC screen-capture software at relatively long intervals during a practical class. A lecturer and a lecture stick are then extracted from the digital video images by pattern recognition techniques, and the extracted images are superimposed on the appropriate PC screen images by off-line processing. Thus, we succeeded in creating a high-quality and small-capacity (HQ/SC) video-on-demand educational content featuring the following advantages: high image sharpness, small electronic file capacity, and realistic lecturer motion.
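
    The superimposition step lends itself to a minimal compositing sketch: assuming the pattern-recognition stage yields a binary mask of the lecturer (and lecture stick) for each video frame, the masked pixels are simply copied onto the stored PC screen image. The function and array names below are hypothetical illustrations, not taken from the paper.

    ```python
    import numpy as np

    def superimpose_lecturer(screen_img, video_frame, lecturer_mask):
        """Overlay the extracted lecturer region onto a captured PC screen image.

        screen_img    : HxWx3 uint8 array, the stored screen capture
        video_frame   : HxWx3 uint8 array, the lecture-room video frame (aligned, same size)
        lecturer_mask : HxW boolean array, True where the lecturer or lecture stick was detected
        """
        out = screen_img.copy()
        out[lecturer_mask] = video_frame[lecturer_mask]   # copy only the extracted pixels
        return out
    ```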

  2. Low-complexity camera digital signal imaging for video document projection system

    NASA Astrophysics Data System (ADS)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.
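
    As a hedged illustration of one of the listed stages, the sketch below implements a simple gray-world automatic white balance; this is a generic low-complexity textbook method, not necessarily the algorithm used in the proposed DSP core.

    ```python
    import numpy as np

    def gray_world_white_balance(rgb):
        """Gray-world white balance: scale each channel so its mean matches the
        global mean, a common low-cost choice in camera pipelines."""
        rgb = rgb.astype(np.float64)
        channel_means = rgb.reshape(-1, 3).mean(axis=0)      # mean of R, G, B
        gains = channel_means.mean() / channel_means         # per-channel gains
        return np.clip(rgb * gains, 0, 255).astype(np.uint8)
    ```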

  3. Video rate passive millimeter-wave imager utilizing optical upconversion with improved size, weight, and power

    NASA Astrophysics Data System (ADS)

    Martin, Richard D.; Shi, Shouyuan; Zhang, Yifei; Wright, Andrew; Yao, Peng; Shreve, Kevin P.; Schuetz, Christopher A.; Dillon, Thomas E.; Mackrides, Daniel G.; Harrity, Charles E.; Prather, Dennis W.

    2015-05-01

    In this presentation we will discuss the performance and limitations of our 220 channel video rate passive millimeter wave imaging system based on a distributed aperture with optical upconversion architecture. We will cover our efforts to reduce the cost, size, weight, and power (CSWaP) requirements of our next generation imager. To this end, we have developed custom integrated circuit silicon-germanium (SiGe) low noise amplifiers that have been designed to efficiently couple with our high performance lithium niobate upconversion modules. We have also developed millimeter wave packaging and components in multilayer liquid crystal polymer (LCP) substrates which greatly improve the manufacturability of the upconversion modules. These structures include antennas, substrate integrated waveguides, filters, and substrates for InP and SiGe mmW amplifiers.

  4. Frequency-space prediction filtering for acoustic clutter and random noise attenuation in ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Shin, Junseob; Huang, Lianjie

    2016-04-01

    Frequency-space prediction filtering (FXPF), also known as FX deconvolution, is a technique originally developed for random noise attenuation in seismic imaging. FXPF attempts to reduce random noise in seismic data by modeling only real signals that appear as linear or quasilinear events in the aperture domain. In medical ultrasound imaging, channel radio frequency (RF) signals from the main lobe appear as horizontal events after receive delays are applied while acoustic clutter signals from off-axis scatterers and electronic noise do not. Therefore, FXPF is suitable for preserving only the main-lobe signals and attenuating the unwanted contributions from clutter and random noise in medical ultrasound imaging. We adapt FXPF to ultrasound imaging, and evaluate its performance using simulated data sets from a point target and an anechoic cyst. Our simulation results show that using only 5 iterations of FXPF achieves contrast-to-noise ratio (CNR) improvements of 67 % in a simulated noise-free anechoic cyst and 228 % in a simulated anechoic cyst contaminated with random noise of 15 dB signal-to-noise ratio (SNR). Our findings suggest that ultrasound imaging with FXPF attenuates contributions from both acoustic clutter and random noise and therefore, FXPF has great potential to improve ultrasound image contrast for better visualization of important anatomical structures and detection of diseased conditions.
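
    A minimal sketch of the FXPF idea follows, assuming receive-delayed channel RF data arranged as (channels x samples): each temporal frequency is predicted across channels with a least-squares linear predictor, so laterally coherent (main-lobe) energy is retained while unpredictable clutter and noise are attenuated. The filter length, iteration scheme, and forward-only prediction are simplifications for illustration, not the authors' exact implementation.

    ```python
    import numpy as np

    def fxpf(channel_rf, filter_len=4, n_iter=5):
        """Simplified frequency-space prediction filtering (FX deconvolution).

        channel_rf : 2D array (n_channels, n_samples) of receive-delayed RF data.
        Returns filtered RF data with laterally incoherent energy suppressed.
        """
        n_ch, n_t = channel_rf.shape
        spec = np.fft.rfft(channel_rf, axis=1)          # temporal spectrum of each channel
        for k in range(spec.shape[1]):                  # loop over temporal frequencies
            x = spec[:, k]
            for _ in range(n_iter):
                # least-squares predictor: x[n] ~ sum_j a[j] * x[n-1-j]
                A = np.array([x[i:i + filter_len][::-1]
                              for i in range(n_ch - filter_len)])
                b = x[filter_len:]
                a, *_ = np.linalg.lstsq(A, b, rcond=None)
                x = x.copy()
                x[filter_len:] = A @ a                  # keep only the predictable part
            spec[:, k] = x
        return np.fft.irfft(spec, n=n_t, axis=1)
    ```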

  5. Military jet noise source imaging using multisource statistically optimized near-field acoustical holography.

    PubMed

    Wall, Alan T; Gee, Kent L; Neilsen, Tracianne B; McKinley, Richard L; James, Michael M

    2016-04-01

    The identification of acoustic sources is critical to targeted noise reduction efforts for jets on high-performance tactical aircraft. This paper describes the imaging of acoustic sources from a tactical jet using near-field acoustical holography techniques. The measurement consists of a series of scans over the hologram with a dense microphone array. Partial field decomposition methods are performed to generate coherent holograms. Numerical extrapolation of data beyond the measurement aperture mitigates artifacts near the aperture edges. A multisource equivalent wave model is used that includes the effects of the ground reflection on the measurement. Multisource statistically optimized near-field acoustical holography (M-SONAH) is used to reconstruct apparent source distributions between 20 and 1250 Hz at four engine powers. It is shown that M-SONAH produces accurate field reconstructions for both inward and outward propagation in the region spanned by the physical hologram measurement. Reconstructions across the set of engine powers and frequencies suggest that directivity depends mainly on estimated source location; sources farther downstream radiate at a higher angle relative to the inlet axis. At some frequencies and engine powers, reconstructed fields exhibit multiple radiation lobes originating from overlapping source regions, a phenomenon only recently reported for full-scale jets.

  6. Acoustic property reconstruction of a pygmy sperm whale (Kogia breviceps) forehead based on computed tomography imaging.

    PubMed

    Song, Zhongchang; Xu, Xiao; Dong, Jianchen; Xing, Luru; Zhang, Meng; Liu, Xuecheng; Zhang, Yu; Li, Songhai; Berggren, Per

    2015-11-01

    Computed tomography (CT) imaging and sound experimental measurements were used to reconstruct the acoustic properties (density, velocity, and impedance) of the forehead tissues of a deceased pygmy sperm whale (Kogia breviceps). The forehead was segmented along the body axis and sectioned into cross section slices, which were further cut into sample pieces for measurements. Hounsfield units (HUs) of the corresponding measured pieces were obtained from CT scans, and regression analyses were conducted to investigate the linear relationships between the tissues' HUs and velocity, and HUs and density. The distributions of the acoustic properties of the head at axial, coronal, and sagittal cross sections were reconstructed, revealing that the nasal passage system was asymmetric and the cornucopia-shaped spermaceti organ was in the right nasal passage, surrounded by tissues and airsacs. A distinct dense theca was discovered in the posterior-dorsal area of the melon, which was characterized by low velocity in the inner core and high velocity in the outer region. Statistical analyses revealed significant differences in density, velocity, and acoustic impedance between all four structures, melon, spermaceti organ, muscle, and connective tissue (p < 0.001). The obtained acoustic properties of the forehead tissues provide important information for understanding the species' bioacoustic characteristics.
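
    The HU-to-property mapping described above amounts to a pair of linear regressions; a minimal sketch with hypothetical argument names is given below. Acoustic impedance then follows from the reconstructed maps as Z = density x velocity.

    ```python
    import numpy as np

    def fit_hu_relations(hu, velocity, density):
        """Fit linear relationships velocity(HU) and density(HU) by least squares.

        hu, velocity, density : 1D arrays of Hounsfield units, measured sound
        speed (m/s), and density (kg/m^3) for the measured tissue pieces.
        Returns (slope, intercept) pairs, e.g. v = a_v * HU + b_v.
        """
        a_v, b_v = np.polyfit(hu, velocity, 1)
        a_d, b_d = np.polyfit(hu, density, 1)
        return (a_v, b_v), (a_d, b_d)
    ```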

  7. Cherenkov Video Imaging Allows for the First Visualization of Radiation Therapy in Real Time

    SciTech Connect

    Jarvis, Lesley A.; Zhang, Rongxiao; Gladstone, David J.; Jiang, Shudong; Hitchcock, Whitney; Friedman, Oscar D.; Glaser, Adam K.; Jermyn, Michael; Pogue, Brian W.

    2014-07-01

    Purpose: To determine whether Cherenkov light imaging can visualize radiation therapy in real time during breast radiation therapy. Methods and Materials: An intensified charge-coupled device (CCD) camera was synchronized to the 3.25-μs radiation pulses of the clinical linear accelerator with the intensifier set × 100. Cherenkov images were acquired continuously (2.8 frames/s) during fractionated whole breast irradiation with each frame an accumulation of 100 radiation pulses (approximately 5 monitor units). Results: The first patient images ever created are used to illustrate that Cherenkov emission can be visualized as a video during conditions typical for breast radiation therapy, even with complex treatment plans, mixed energies, and modulated treatment fields. Images were generated correlating to the superficial dose received by the patient and potentially the location of the resulting skin reactions. Major blood vessels are visible in the image, providing the potential to use these as biological landmarks for improved geometric accuracy. The potential for this system to detect radiation therapy misadministrations, which can result from hardware malfunction or patient positioning setup errors during individual fractions, is shown. Conclusions: Cherenkoscopy is a unique method for visualizing surface dose resulting in real-time quality control. We propose that this system could detect radiation therapy errors in everyday clinical practice at a time when these errors can be corrected to result in improved safety and quality of radiation therapy.

  8. Realization of a video-rate distributed aperture millimeter-wave imaging system using optical upconversion

    NASA Astrophysics Data System (ADS)

    Schuetz, Christopher; Martin, Richard; Dillon, Thomas; Yao, Peng; Mackrides, Daniel; Harrity, Charles; Zablocki, Alicia; Shreve, Kevin; Bonnett, James; Curt, Petersen; Prather, Dennis

    2013-05-01

    Passive imaging using millimeter waves (mmWs) has many advantages and applications in the defense and security markets. All terrestrial bodies emit mmW radiation and these wavelengths are able to penetrate smoke, fog/clouds/marine layers, and even clothing. One primary obstacle to imaging in this spectrum is that longer wavelengths require larger apertures to achieve the resolutions desired for many applications. Accordingly, lens-based focal plane systems and scanning systems tend to require large aperture optics, which increase the achievable size and weight of such systems to beyond what can be supported by many applications. To overcome this limitation, a distributed aperture detection scheme is used in which the effective aperture size can be increased without the associated volumetric increase in imager size. This distributed aperture system is realized through conversion of the received mmW energy into sidebands on an optical carrier. This conversion serves, in essence, to scale the mmW sparse aperture array signals onto a complementary optical array. The side bands are subsequently stripped from the optical carrier and recombined to provide a real time snapshot of the mmW signal. Using this technique, we have constructed a real-time, video-rate imager operating at 75 GHz. A distributed aperture consisting of 220 upconversion channels is used to realize 2.5k pixels with passive sensitivity. Details of the construction and operation of this imager as well as field testing results will be presented herein.

  9. Tunable far-field acoustic imaging by two-dimensional sonic crystal with concave incident surface

    NASA Astrophysics Data System (ADS)

    Shen, Feng-Fu; Lu, Dan-Feng; Zhu, Hong-Wei; Ji, Chang-Ying; Shi, Qing-Fan

    2017-01-01

    An additional concave incident surface comprised of two-dimensional (2D) sonic crystals (SCs) is employed to tune the acoustic image in the far-field region. The tunability is realized through changing the curvature of the concave surface. To explain the tuning mechanism, a simple ray-trace analysis is demonstrated based on the wave-beam negative refractive law. Then, a numerical confirmation is carried out. Results show that both the position and the intensity of the image can be tuned by the introduced concave surface.

  10. A comparison of traffic estimates of nocturnal flying animals using radar, thermal imaging, and acoustic recording.

    PubMed

    Horton, Kyle G; Shriver, W Gregory; Buler, Jeffrey J

    2015-03-01

    There are several remote-sensing tools readily available for the study of nocturnally flying animals (e.g., migrating birds), each possessing unique measurement biases. We used three tools (weather surveillance radar, thermal infrared camera, and acoustic recorder) to measure temporal and spatial patterns of nocturnal traffic estimates of flying animals during the spring and fall of 2011 and 2012 in Lewes, Delaware, USA. Our objective was to compare measures among different technologies to better understand their animal detection biases. For radar and thermal imaging, the greatest observed traffic rate tended to occur at, or shortly after, evening twilight, whereas for the acoustic recorder, peak bird flight-calling activity was observed just prior to morning twilight. Comparing traffic rates during the night for all seasons, we found that mean nightly correlations between acoustics and the other two tools were weakly correlated (thermal infrared camera and acoustics, r = 0.004 ± 0.04 SE, n = 100 nights; radar and acoustics, r = 0.14 ± 0.04 SE, n = 101 nights), but highly variable on an individual nightly basis (range = -0.84 to 0.92, range = -0.73 to 0.94). The mean nightly correlations between traffic rates estimated by radar and by thermal infrared camera during the night were more strongly positively correlated (r = 0.39 ± 0.04 SE, n = 125 nights), but also were highly variable for individual nights (range = -0.76 to 0.98). Through comparison with radar data among numerous height intervals, we determined that flying animal height above the ground influenced thermal imaging positively and flight call detections negatively. Moreover, thermal imaging detections decreased with the presence of cloud cover and increased with mean ground flight speed of animals, whereas acoustic detections showed no relationship with cloud cover presence but did decrease with increased flight speed. We found sampling methods to be positively correlated when comparing mean nightly
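
    The tool-to-tool comparisons described above reduce to per-night Pearson correlations between paired traffic-rate series, summarized as a mean and standard error across nights; a hedged sketch with hypothetical variable names follows.

    ```python
    import numpy as np

    def nightly_correlations(rates_a, rates_b):
        """Per-night Pearson correlations between two tools' traffic-rate series.

        rates_a, rates_b : lists of 1D arrays, one array of within-night traffic
        rates per night, already aligned in time between the two tools.
        Returns (per-night r values, mean r, standard error of the mean).
        """
        r = np.array([np.corrcoef(a, b)[0, 1] for a, b in zip(rates_a, rates_b)])
        return r, r.mean(), r.std(ddof=1) / np.sqrt(len(r))
    ```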

  11. Automatic detection of motion blur in intravital video microscopy image sequences via directional statistics of log-Gabor energy maps.

    PubMed

    Ferrari, Ricardo J; Pinto, Carlos H Villa; da Silva, Bruno C Gregório; Bernardes, Danielle; Carvalho-Tavares, Juliana

    2015-02-01

    Intravital microscopy is an important experimental tool for the study of cellular and molecular mechanisms of leukocyte-endothelial interactions in the microcirculation of various tissues and in different inflammatory conditions of in vivo specimens. However, due to the limited control over the conditions of image acquisition, motion blur and artifacts, resulting mainly from the heartbeat and respiratory movements of the in vivo specimen, are very often present. This problem can significantly undermine the results of either visual or computerized analysis of the acquired video images. Since only a fraction of the total number of images is usually corrupted by severe motion blur, a procedure is needed to automatically identify such images in the video for either further restoration or removal. This paper proposes a new technique for the detection of motion blur in intravital video microscopy based on directional statistics of local energy maps computed using a bank of 2D log-Gabor filters. Quantitative assessment using both artificially corrupted images and real microscopy data was conducted to test the effectiveness of the proposed method. Results showed an area under the receiver operating characteristic curve (AUC) of 0.95 (95% CI 0.93-0.97) when tested on 329 video images visually ranked by four observers.
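
    A rough sketch of the building blocks is shown below: a frequency-domain log-Gabor filter bank produces orientation-wise energy maps, and a simple spread statistic over orientations serves as a stand-in blur score. The directional statistic and decision rule actually used by the authors are not reproduced here; the score below is only a hypothetical proxy, and all parameter values are illustrative.

    ```python
    import numpy as np

    def log_gabor_bank(shape, f0=0.1, sigma_ratio=0.65, n_orient=6, ang_sigma=0.4):
        """Frequency-domain log-Gabor filters at several orientations (a sketch)."""
        rows, cols = shape
        fy = np.fft.fftfreq(rows)[:, None]
        fx = np.fft.fftfreq(cols)[None, :]
        radius = np.sqrt(fx**2 + fy**2)
        radius[0, 0] = 1.0                                  # avoid log(0) at DC
        radial = np.exp(-(np.log(radius / f0))**2 / (2 * np.log(sigma_ratio)**2))
        radial[0, 0] = 0.0                                  # zero DC response
        theta = np.arctan2(fy, fx)
        bank = []
        for i in range(n_orient):
            ang0 = i * np.pi / n_orient
            d = np.angle(np.exp(1j * (theta - ang0)))       # wrapped angular distance
            bank.append(radial * np.exp(-d**2 / (2 * ang_sigma**2)))
        return bank

    def blur_score(img):
        """Spread of log-Gabor energy across orientations (hypothetical proxy):
        directional motion blur suppresses energy along some orientations, so a
        large spread relative to the mean is taken here as evidence of blur."""
        F = np.fft.fft2(img.astype(float))
        energies = np.array([np.abs(np.fft.ifft2(F * g)).sum()
                             for g in log_gabor_bank(img.shape)])
        return energies.std() / (energies.mean() + 1e-12)
    ```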

  12. Effects of acoustic heterogeneities on transcranial brain imaging with microwave-induced thermoacoustic tomography

    PubMed Central

    Jin, Xing; Li, Changhui; Wang, Lihong V.

    2008-01-01

    The effects of acoustic heterogeneities on transcranial brain imaging with microwave-induced thermoacoustic tomography were studied. A numerical model for calculating the propagation of thermoacoustic waves through the skull was developed and experimentally examined. The model takes into account wave reflection and refraction at the skull surfaces and therefore provides improved accuracy for the reconstruction. To evaluate when the skull-induced effects could be ignored in reconstruction, the reconstructed images obtained by the proposed method were further compared with those obtained with the method based on homogeneous acoustic properties. From simulation and experimental results, it was found that when the target region is close to the center of the brain, the effects caused by the skull layer are minimal and both reconstruction methods work well. As the target region becomes closer to the interface between the skull and brain tissue, however, the skull-induced distortion becomes increasingly severe, and the reconstructed image would be strongly distorted without correcting those effects. In this case, the proposed numerical method can improve image quality by taking into consideration the wave refraction and mode conversion at the skull surfaces. This work is important for obtaining good brain images when the thickness of the skull cannot be ignored. PMID:18697545

  13. Effects of acoustic heterogeneities on transcranial brain imaging with microwave-induced thermoacoustic tomography.

    PubMed

    Jin, Xing; Li, Changhui; Wang, Lihong V

    2008-07-01

    The effects of acoustic heterogeneities on transcranial brain imaging with microwave-induced thermoacoustic tomography were studied. A numerical model for calculating the propagation of thermoacoustic waves through the skull was developed and experimentally examined. The model takes into account wave reflection and refraction at the skull surfaces and therefore provides improved accuracy for the reconstruction. To evaluate when the skull-induced effects could be ignored in reconstruction, the reconstructed images obtained by the proposed method were further compared with those obtained with the method based on homogeneous acoustic properties. From simulation and experimental results, it was found that when the target region is close to the center of the brain, the effects caused by the skull layer are minimal and both reconstruction methods work well. As the target region becomes closer to the interface between the skull and brain tissue, however, the skull-induced distortion becomes increasingly severe, and the reconstructed image would be strongly distorted without correcting those effects. In this case, the proposed numerical method can improve image quality by taking into consideration the wave refraction and mode conversion at the skull surfaces. This work is important for obtaining good brain images when the thickness of the skull cannot be ignored.

  14. Modeling of video traffic in packet networks, low rate video compression, and the development of a lossy+lossless image compression algorithm

    NASA Technical Reports Server (NTRS)

    Sayood, K.; Chen, Y. C.; Wang, X.

    1992-01-01

    During this reporting period we have worked on three somewhat different problems. These are modeling of video traffic in packet networks, low rate video compression, and the development of a lossy + lossless image compression algorithm, which might have some application in browsing algorithms. The lossy + lossless scheme is an extension of work previously done under this grant. It provides a simple technique for incorporating browsing capability. The low rate coding scheme is also a simple variation on the standard discrete cosine transform (DCT) coding approach. In spite of its simplicity, the approach provides surprisingly high quality reconstructions. The modeling approach is borrowed from the speech recognition literature, and seems to be promising in that it provides a simple way of obtaining an idea about the second order behavior of a particular coding scheme. Details about these are presented.

  15. Study of a prototype high quantum efficiency thick scintillation crystal video-electronic portal imaging device.

    PubMed

    Samant, Sanjiv S; Gopal, Arun

    2006-08-01

    Image quality in portal imaging suffers significantly from the loss in contrast and spatial resolution that results from the excessive Compton scatter associated with megavoltage x rays. In addition, portal image quality is further reduced due to the poor quantum efficiency (QE) of current electronic portal imaging devices (EPIDs). Commercial video-camera-based EPIDs or VEPIDs that utilize a thin phosphor screen in conjunction with a metal buildup plate to convert the incident x rays to light suffer from reduced light production due to low QE (<2% for Eastman Kodak Lanex Fast-B). Flat-panel EPIDs that utilize the same luminescent screen along with an a-Si:H photodiode array provide improved image quality compared to VEPIDs, but they are expensive and can be susceptible to radiation damage to the peripheral electronics. In this article, we present a prototype VEPID system for high quality portal imaging at sub-monitor-unit (subMU) exposures based on a thick scintillation crystal (TSC) that acts as a high QE luminescent screen. The prototype TSC system utilizes a 12 mm thick transparent CsI(Tl) (thallium-activated cesium iodide) scintillator for QE = 0.24, resulting in significantly higher light production compared to commercial phosphor screens. The 25 × 25 cm² CsI(Tl) screen is coupled to a high spatial and contrast resolution Video-Optics plumbicon-tube camera system (1240 × 1024 pixels, 250 µm pixel width at isocenter, 12-bit ADC). As a proof-of-principle prototype, the TSC system with user-controlled camera target integration was adapted for use in an existing clinical gantry (Siemens BEAMVIEW PLUS) with the capability for online intratreatment fluoroscopy. Measurements of modulation transfer function (MTF) were conducted to characterize the TSC spatial resolution. The measured MTF along with measurements of the TSC noise power spectrum (NPS) were used to determine the system detective quantum efficiency (DQE). A theoretical expression of DQE(0) was developed
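
    As background (not a result specific to this study), the frequency-dependent detective quantum efficiency is commonly assembled from these measured quantities as the ratio of squared output to squared input signal-to-noise ratio; one widely used form, with q the incident photon fluence and d-bar the mean detector signal, is

    ```latex
    \mathrm{DQE}(f) \;=\; \frac{\mathrm{SNR}_{\mathrm{out}}^{2}(f)}{\mathrm{SNR}_{\mathrm{in}}^{2}(f)}
                    \;=\; \frac{\bar{d}^{\,2}\,\mathrm{MTF}^{2}(f)}{q\,\mathrm{NPS}(f)}
    ```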

  16. High-resolution, high-speed, three-dimensional video imaging with digital fringe projection techniques.

    PubMed

    Ekstrand, Laura; Karpinsky, Nikolaus; Wang, Yajun; Zhang, Song

    2013-12-03

    Digital fringe projection (DFP) techniques provide dense 3D measurements of dynamically changing surfaces. Like the human eyes and brain, DFP uses triangulation between matching points in two views of the same scene at different angles to compute depth. However, unlike a stereo-based method, DFP uses a digital video projector to replace one of the cameras [1]. The projector rapidly projects a known sinusoidal pattern onto the subject, and the surface of the subject distorts these patterns in the camera's field of view. Three distorted patterns (fringe images) from the camera can be used to compute the depth using triangulation. Unlike other 3D measurement methods, DFP techniques lead to systems that tend to be faster, lower in equipment cost, more flexible, and easier to develop. DFP systems can also achieve the same measurement resolution as the camera. For this reason, DFP and other digital structured light techniques have recently been the focus of intense research (as summarized in [1-5]). Taking advantage of DFP, the graphics processing unit, and optimized algorithms, we have developed a system capable of 30 Hz 3D video data acquisition, reconstruction, and display for over 300,000 measurement points per frame [6,7]. Binary defocusing DFP methods can achieve even greater speeds [8]. Diverse applications can benefit from DFP techniques. Our collaborators have used our systems for facial function analysis [9], facial animation [10], cardiac mechanics studies [11], and fluid surface measurements, but many other potential applications exist. This video will teach the fundamentals of DFP techniques and illustrate the design and operation of a binary defocusing DFP system.
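
    As context for how three fringe images yield phase (the standard three-step phase-shifting relation with 2π/3 shifts, not a formula quoted from this article), the wrapped phase at each camera pixel is

    ```latex
    \phi(x,y) \;=\; \tan^{-1}\!\left(\frac{\sqrt{3}\,\bigl(I_{1}(x,y)-I_{3}(x,y)\bigr)}{2I_{2}(x,y)-I_{1}(x,y)-I_{3}(x,y)}\right)
    ```

    Phase unwrapping and triangulation with the calibrated projector-camera geometry then convert the phase map to depth.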

  17. Microstructure Imaging Using Frequency Spectrum Spatially Resolved Acoustic Spectroscopy F-Sras

    NASA Astrophysics Data System (ADS)

    Sharples, S. D.; Li, W.; Clark, M.; Somekh, M. G.

    2010-02-01

    Material microstructure can have a profound effect on the mechanical properties of a component, such as strength and resistance to creep and fatigue. SRAS—spatially resolved acoustic spectroscopy—is a laser ultrasonic technique which can image microstructure using highly localized surface acoustic wave (SAW) velocity as a contrast mechanism, as this is sensitive to crystallographic orientation. The technique is noncontact, nondestructive, rapid, can be used on large components, and is highly tolerant of acoustic aberrations. Previously, the SRAS technique has been demonstrated using a fixed frequency excitation laser and a variable grating period (k-vector) to determine the most efficiently generated SAWs, and hence the velocity. Here, we demonstrate an implementation which uses a fixed grating period with a broadband laser excitation source. The velocity is determined by analyzing the measured frequency spectrum. Experimental results using this "frequency spectrum SRAS" (f-SRAS) method are presented. Images of microstructure on an industrially relevant material are compared to those obtained using the previous SRAS method ("k-SRAS"); excellent agreement is observed. Moreover, f-SRAS is much simpler and potentially much more rapid than k-SRAS as the velocity can be determined at each sample point in a single laser shot, rather than by scanning the grating period.
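
    In f-SRAS the SAW velocity at a sample point follows directly from the measured frequency spectrum and the fixed grating period: a minimal sketch, with hypothetical argument names, locates the spectral peak and converts it to velocity via v = Lambda * f_peak.

    ```python
    import numpy as np

    def fsras_velocity(waveform, sample_rate_hz, grating_period_m):
        """Estimate SAW velocity from a single broadband f-SRAS measurement.

        The most efficiently generated SAW frequency appears as the peak of the
        measured spectrum; with fixed grating period Lambda, v = Lambda * f_peak.
        """
        spectrum = np.abs(np.fft.rfft(waveform))
        freqs = np.fft.rfftfreq(len(waveform), d=1.0 / sample_rate_hz)
        f_peak = freqs[np.argmax(spectrum[1:]) + 1]     # skip the DC bin
        return grating_period_m * f_peak
    ```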

  18. Symmetry analysis for nonlinear time reversal methods applied to nonlinear acoustic imaging

    NASA Astrophysics Data System (ADS)

    Dos Santos, Serge; Chaline, Jennifer

    2015-10-01

    Using symmetry invariance, nonlinear Time Reversal (TR) and reciprocity properties, the classical NEWS methods are supplemented and improved by new excitations having the intrinsic property of enlarging the frequency analysis bandwidth and time domain scales, with both medical acoustics and electromagnetic applications. The analysis of invariant quantities is a well-known tool often used in nonlinear acoustics to simplify complex equations. Based on a fundamental physical principle known as symmetry analysis, this approach consists in finding judicious variables, intrinsically scale dependent, that are able to describe all stages of behaviour on the same theoretical foundation. Building on previously published results in the nonlinear acoustics area, practical implementations are proposed as a new way to define TR-NEWS based methods applied to NDT and medical bubble-based non-destructive imaging. This paper shows how symmetry analysis can help define new methodologies and new experimental set-ups involving modern signal processing tools. Examples of practical realizations are proposed in the context of biomedical non-destructive imaging using Ultrasound Contrast Agents (UCAs), where symmetry and invariance properties allow us to define a microscopic scale-invariant experimental set-up describing intrinsic symmetries of the microscopic complex system.

  19. Eigenfunction analysis of stochastic backscatter for characterization of acoustic aberration in medical ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Varslot, Trond; Krogstad, Harald; Mo, Eirik; Angelsen, Bjørn A.

    2004-06-01

    Presented here is a characterization of aberration in medical ultrasound imaging. The characterization is optimal in the sense of maximizing the expected energy in a modified beamformer output of the received acoustic backscatter. Aberration correction based on this characterization takes the form of an aberration correction filter. The situation considered is frequently found in applications when imaging organs through a body wall: aberration is introduced in a layer close to the transducer, and acoustic backscatter from a scattering region behind the body wall is measured at the transducer surface. The scattering region consists of scatterers randomly distributed with very short correlation length compared to the acoustic wavelength of the transmit pulse. The scatterer distribution is therefore assumed to be δ correlated. This paper shows how maximizing the expected energy in a modified beamformer output signal naturally leads to eigenfunctions of a Fredholm integral operator, where the associated kernel function is a spatial correlation function of the received stochastic signal. Aberration characterization and aberration correction are presented for simulated data constructed to mimic aberration introduced by the abdominal wall. The results compare well with what is obtainable using data from a simulated point source.
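
    In discrete form, the Fredholm eigenproblem described above reduces to an eigendecomposition of the spatial correlation matrix of the received element signals; a hedged sketch (one frequency bin or range gate at a time, hypothetical variable names) is given below.

    ```python
    import numpy as np

    def aberration_filter(element_signals):
        """Estimate an aberration correction filter from stochastic backscatter.

        element_signals : 2D complex array (n_realizations, n_elements) holding
        the received signal per transducer element over many realizations of the
        delta-correlated scatterer distribution (e.g., different range gates).
        Returns the dominant eigenvector of the spatial correlation matrix, the
        discrete analogue of the maximizing eigenfunction (up to normalization).
        """
        x = element_signals - element_signals.mean(axis=0, keepdims=True)
        R = x.conj().T @ x / x.shape[0]             # spatial correlation matrix
        eigvals, eigvecs = np.linalg.eigh(R)        # Hermitian eigendecomposition
        return eigvecs[:, np.argmax(eigvals)]       # eigenvector of the largest eigenvalue
    ```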

  20. An efficient system for reliably transmitting image and video data over low bit rate noisy channels

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Huang, Y. F.; Stevenson, Robert L.

    1994-01-01

    This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistical-based image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative Trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the properties of the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy. The outcome of this investigation included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model. This results in improved image decompression. These studies are summarized and the technical papers included in the appendices.

  1. Bond-selective photoacoustic imaging by converting molecular vibration into acoustic waves

    PubMed Central

    Hui, Jie; Li, Rui; Phillips, Evan H.; Goergen, Craig J.; Sturek, Michael; Cheng, Ji-Xin

    2016-01-01

    The quantized vibration of chemical bonds provides a way of detecting specific molecules in a complex tissue environment. Unlike pure optical methods, for which imaging depth is limited to a few hundred micrometers by significant optical scattering, photoacoustic detection of vibrational absorption breaks through the optical diffusion limit by taking advantage of diffused photons and weak acoustic scattering. Key features of this method include both high scalability of imaging depth from a few millimeters to a few centimeters and chemical bond selectivity as a novel contrast mechanism for photoacoustic imaging. Its biomedical applications span detection of white matter loss and regeneration, assessment of breast tumor margins, and diagnosis of vulnerable atherosclerotic plaques. This review provides an overview of the recent advances made in vibration-based photoacoustic imaging and various biomedical applications enabled by this new technology. PMID:27069873

  2. Bond-selective photoacoustic imaging by converting molecular vibration into acoustic waves.

    PubMed

    Hui, Jie; Li, Rui; Phillips, Evan H; Goergen, Craig J; Sturek, Michael; Cheng, Ji-Xin

    2016-03-01

    The quantized vibration of chemical bonds provides a way of detecting specific molecules in a complex tissue environment. Unlike pure optical methods, for which imaging depth is limited to a few hundred micrometers by significant optical scattering, photoacoustic detection of vibrational absorption breaks through the optical diffusion limit by taking advantage of diffused photons and weak acoustic scattering. Key features of this method include both high scalability of imaging depth from a few millimeters to a few centimeters and chemical bond selectivity as a novel contrast mechanism for photoacoustic imaging. Its biomedical applications span detection of white matter loss and regeneration, assessment of breast tumor margins, and diagnosis of vulnerable atherosclerotic plaques. This review provides an overview of the recent advances made in vibration-based photoacoustic imaging and various biomedical applications enabled by this new technology.

  3. Full-wave iterative image reconstruction in photoacoustic tomography with acoustically inhomogeneous media.

    PubMed

    Huang, Chao; Wang, Kun; Nie, Liming; Wang, Lihong V; Anastasio, Mark A

    2013-06-01

    Existing approaches to image reconstruction in photoacoustic computed tomography (PACT) with acoustically heterogeneous media are limited to weakly varying media, are computationally burdensome, and/or cannot effectively mitigate the effects of measurement data incompleteness and noise. In this work, we develop and investigate a discrete imaging model for PACT that is based on the exact photoacoustic (PA) wave equation and facilitates the circumvention of these limitations. A key contribution of the work is the establishment of a procedure to implement a matched forward and backprojection operator pair associated with the discrete imaging model, which permits application of a wide-range of modern image reconstruction algorithms that can mitigate the effects of data incompleteness and noise. The forward and backprojection operators are based on the k-space pseudospectral method for computing numerical solutions to the PA wave equation in the time domain. The developed reconstruction methodology is investigated by use of both computer-simulated and experimental PACT measurement data.
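
    For reference, the photoacoustic wave equation underlying such discrete imaging models is commonly written (assuming negligible thermal diffusion and, in this simple form, constant density) as

    ```latex
    \left(\nabla^{2} - \frac{1}{c^{2}(\mathbf{r})}\,\frac{\partial^{2}}{\partial t^{2}}\right) p(\mathbf{r},t)
    \;=\; -\,\frac{\beta}{C_{p}}\,\frac{\partial H(\mathbf{r},t)}{\partial t}
    ```

    where c(r) is the (possibly heterogeneous) sound speed, β the thermal expansion coefficient, C_p the specific heat capacity, and H the absorbed optical energy per unit volume per unit time; the k-space pseudospectral operators in this work discretize a model of this general type.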

  4. Guiding synchrotron X-ray diffraction by multimodal video-rate protein crystal imaging

    DOE PAGES

    Newman, Justin A.; Zhang, Shijie; Sullivan, Shane Z.; ...

    2016-05-16

    Synchronous digitization, in which an optical sensor is probed synchronously with the firing of an ultrafast laser, was integrated into an optical imaging station for macromolecular crystal positioning prior to synchrotron X-ray diffraction. Using the synchronous digitization instrument, second-harmonic generation, two-photon-excited fluorescence and bright field by laser transmittance were all acquired simultaneously with perfect image registry at up to video-rate (15 frames s–1). A simple change in the incident wavelength enabled simultaneous imaging by two-photon-excited ultraviolet fluorescence, one-photon-excited visible fluorescence and laser transmittance. Development of an analytical model for the signal-to-noise enhancement afforded by synchronous digitization suggests a 15.6-fold improvement over previous photon-counting techniques. This improvement in turn allowed acquisition on nearly an order of magnitude more pixels than the preceding generation of instrumentation and reductions of well over an order of magnitude in image acquisition times. These improvements have allowed detection of protein crystals on the order of 1 µm in thickness under cryogenic conditions in the beamline. Lastly, these capabilities are well suited to support serial crystallography of crystals approaching 1 µm or less in dimension.

  5. Guiding synchrotron X-ray diffraction by multimodal video-rate protein crystal imaging

    SciTech Connect

    Newman, Justin A.; Zhang, Shijie; Sullivan, Shane Z.; Dow, Ximeng Y.; Becker, Michael; Sheedlo, Michael J.; Stepanov, Sergey; Carlsen, Mark S.; Everly, R. Michael; Das, Chittaranjan; Fischetti, Robert F.; Simpson, Garth J.

    2016-05-16

    Synchronous digitization, in which an optical sensor is probed synchronously with the firing of an ultrafast laser, was integrated into an optical imaging station for macromolecular crystal positioning prior to synchrotron X-ray diffraction. Using the synchronous digitization instrument, second-harmonic generation, two-photon-excited fluorescence and bright field by laser transmittance were all acquired simultaneously with perfect image registry at up to video-rate (15 frames s–1). A simple change in the incident wavelength enabled simultaneous imaging by two-photon-excited ultraviolet fluorescence, one-photon-excited visible fluorescence and laser transmittance. Development of an analytical model for the signal-to-noise enhancement afforded by synchronous digitization suggests a 15.6-fold improvement over previous photon-counting techniques. This improvement in turn allowed acquisition on nearly an order of magnitude more pixels than the preceding generation of instrumentation and reductions of well over an order of magnitude in image acquisition times. These improvements have allowed detection of protein crystals on the order of 1 µm in thickness under cryogenic conditions in the beamline. Lastly, these capabilities are well suited to support serial crystallography of crystals approaching 1 µm or less in dimension.

  6. ACOUSTICAL IMAGING AND MECHANICAL PROPERTIES OF SOFT ROCK AND MARINE SEDIMENTS

    SciTech Connect

    Thurman E. Scott, Jr., Ph.D.; Younane Abousleiman, Ph.D.; Musharraf Zaman, Ph.D., P.E.

    2002-11-18

    During the sixth quarter of this research project the research team developed a method and the experimental procedures for acquiring the data needed for ultrasonic tomography of rock core samples under triaxial stress conditions as outlined in Task 10. Traditional triaxial compression experiments, where compressional and shear wave velocities are measured, provide little or no information about the internal spatial distribution of mechanical damage within the sample. The velocities measured between platen-to-platen or sensor-to-sensor reflects an averaging of all the velocities occurring along that particular raypath across the boundaries of the rock. The research team is attempting to develop and refine a laboratory equivalent of seismic tomography for use on rock samples deformed under triaxial stress conditions. Seismic tomography, utilized for example in crosswell tomography, allows an imaging of the velocities within a discrete zone within the rock. Ultrasonic or acoustic tomography is essentially the extension of that field technology applied to rock samples deforming in the laboratory at high pressures. This report outlines the technical steps and procedures for developing this technology for use on weak, soft chalk samples. Laboratory tests indicate that the chalk samples exhibit major changes in compressional and shear wave velocities during compaction. Since chalk is the rock type responsible for the severe subsidence and compaction in the North Sea it was selected for the first efforts at tomographic imaging of soft rocks. Field evidence from the North Sea suggests that compaction, which has resulted in over 30 feet of subsidence to date, is heterogeneously distributed within the reservoir. The research team will attempt to image this very process in chalk samples. The initial tomographic studies (Scott et al., 1994a,b; 1998) were accomplished on well cemented, competent rocks such as Berea sandstone. The extension of the technology to weaker samples is

  7. Acoustic Image Models for Obstacle Avoidance with Forward-Looking Sonar

    NASA Astrophysics Data System (ADS)

    Masek, T.; Kölsch, M.

    Long-range forward-looking sonars (FLS) have recently been deployed in autonomous unmanned vehicles (AUV). We present models for various features in acoustic images, with the goal of using this sensor for altitude maintenance, obstacle detection and obstacle avoidance. First, we model the backscatter and FLS noise as pixel-based, spatially-varying intensity distributions. Experiments show that these models predict noise with an accuracy of over 98%. Next, the presence of acoustic noise from two other sources including a modem is reliably detected with a template-based filter and a threshold learned from training data. Lastly, the ocean floor location and orientation is estimated with a gradient-descent method using a site-independent template, yielding sufficiently accurate results in 95% of the frames. Temporal information is expected to further improve the performance.

  8. Quantitative Analysis Of Sperm Motion Kinematics From Real-Time Video-Edge Images

    NASA Astrophysics Data System (ADS)

    Davis, Russell O...; Katz, David F.

    1988-02-01

    A new model of sperm swimming kinematics, which uses signal processing methods and multivariate statistical techniques to identify individual cell-motion parameters and unique cell populations, is presented. Swimming paths of individual cells are obtained using real-time, video-edge digitization. Raw paths are adaptively filtered to identify average paths, and measurements of space-time oscillations about average paths are made. Time-dependent frequency information is extracted from spatial variations about average paths using harmonic analysis. Raw-path and average-path measures such as curvature, curve length, and straight-line length, and measures of oscillations about average paths such as time-dependent amplitude and frequency variations, are used in a multivariate, cluster analysis to identify unique cell populations. The entire process, including digitization of sperm video images, is computer-automated. Preliminary results indicate that this method of tracking, digitization, and kinematic analysis accurately identifies unique cell subpopulations, including: the relative numbers of cells in each subpopulation, how subpopulations differ, and the extent and significance of such differences. With appropriate work, this approach may be useful for clinical discrimination between normal and abnormal semen specimens.

  9. Investigation of acoustic changes resulting from contrast enhancement in through-transmission ultrasonic imaging.

    PubMed

    Rothstein, Tamara; Gaitini, Diana; Gallimidi, Zahava; Azhari, Haim

    2010-09-01

    Through-transmitted ultrasonic waves can be used for computed projection imaging of the breast. The goal of this research was to analyze the changes in acoustic properties associated with the propagation of ultrasonic waves through media before and after ultrasound contrast agent (UCA) injection and to study the feasibility of a new imaging method combining projection imaging and UCA. Two transmission techniques were examined: Gaussian pulses and pulse inversion. In the latter, three different double inverted pulses were studied: double Gaussian, double square and double sine. A computerized automatic ultrasonic scanning system was used for imaging. To simulate blood vessels, a phantom, consisting of a latex tube through which saline was circulated, was assembled. The phantom was placed within the scanner and sets of acoustic projection images were acquired. Then, a suspension of the UCA Definity was added to the saline and a new set of images was obtained. The pre and postcontrast images were quantitatively compared in terms of amplitude and time-of-flight (TOF). In addition, nonlinearity was evaluated by comparing the relative alteration of the positive and negative parts of the signal. Statistically significant (p < 0.001) changes in the projection images resulting from the UCA injection were observed in wave amplitude (22% ± 13%), TOF (7.9 ns ± 6.3 ns) and nonlinear properties (35% ± 32% and 56% ± 17% for Gaussian pulses and pulse inversion, respectively). One in vivo study of a female breast is also presented and its preliminary outcomes discussed. Together, these results indicate the technical feasibility of the suggested method and its potential to detect breast tumors.

  10. ACOUSTICAL IMAGING AND MECHANICAL PROPERTIES OF SOFT ROCK AND MARINE SEDIMENTS

    SciTech Connect

    Thurman E. Scott, Jr., Ph.D.; Younane Abousleiman, Ph.D.; Musharraf Zaman, Ph.D., P.E.

    2002-11-18

    During the seventh quarter of the project the research team analyzed some of the acoustic velocity data and rock deformation data. The goal is to create a series of "deformation-velocity maps" which can outline the types of rock deformational mechanisms that can occur at high pressures and then associate those with specific compressional or shear wave velocity signatures. During this quarter, we began to analyze both the acoustical and deformational properties of the various rock types. Some of the preliminary velocity data from the Danian chalk will be presented in this report. This rock type was selected for the initial efforts as it will be used in the tomographic imaging study outlined in Task 10. This is one of the more important rock types in the study, as the Danian chalk is thought to represent an excellent analog to the Ekofisk chalk that has caused so many problems in the North Sea. Some of the preliminary acoustic velocity data obtained during this phase of the project indicate that during pore collapse and compaction of this chalk, the acoustic velocities can change by as much as 200 m/s. Theoretically, this significant velocity change should be detectable in repeated successive 3-D seismic images. In addition, research continues with an analysis of the unconsolidated sand samples at high confining pressures obtained in Task 9. The analysis of the results indicates that sands with 10% volume of fines can undergo liquefaction at lower stress conditions than sand samples which do not have fines added. This liquefaction and/or sand flow is similar to "shallow water" flows observed during drilling in the offshore Gulf of Mexico.

  11. Assembly of a Multi-channel Video System to Simultaneously Record Cerebral Emboli with Cerebral Imaging

    PubMed Central

    Stoner-Duncan, Benjamin; Kim, Sae Jin; Mergeche, Joanna L.; Anastasian, Zirka H.; Heyer, Eric J.

    2011-01-01

    Stroke remains a significant risk of carotid revascularization for atherosclerotic disease. Emboli generated at the time of treatment either using endarterectomy or stent-angioplasty may progress with blood flow and lodge in brain arteries. Recently, the use of protection devices to trap emboli created at the time of revascularization has helped to establish a role for stent-supported angioplasty compared with endarterectomy. Several devices have been developed to reduce or detect emboli that may be dislodged during carotid artery stenting (CAS) to treat carotid artery stenosis. A significant challenge in assessing the efficacy of these devices is precisely determining when emboli are dislodged in real-time. To address this challenge, we devised a method of simultaneously recording fluoroscopic images, transcranial Doppler (TCD) data, vital signs, and digital video of the patient/physician. This method permits accurate causative analysis and allows procedural events to be precisely correlated to embolic events in real-time. PMID:21441834

  12. Laboratory observations of self-excited dust acoustic shock waves

    NASA Astrophysics Data System (ADS)

    Merlino, Robert L.; Heinrich, Jonathon R.; Kim, Su-Hyun

    2009-11-01

    Dust acoustic waves have been discussed in connection with dust density structures in Saturn's rings and the Earth's mesosphere, and as a possible mechanism for triggering condensation of small grains in dust molecular clouds. Dust acoustic waves are a ubiquitous occurrence in laboratory dusty plasmas formed in glow discharges. We report observations of repeated, self-excited dust acoustic shock waves in a dc glow discharge dusty plasma using high-speed video imaging. Two major observations will be presented: (1) the self-steepening of a nonlinear dust acoustic wave into a saw-tooth wave with a sharp gradient in dust density, very similar to those found in numerical solutions [1] of the fully nonlinear fluid equations for nondispersive dust acoustic waves, and (2) the collision and confluence of two dust acoustic shock waves. [1] B. Eliasson and P. K. Shukla, Phys. Rev. E 69, 067401 (2004).

  13. Review of passive-blind detection in digital video forgery based on sensing and imaging techniques

    NASA Astrophysics Data System (ADS)

    Tao, Junjie; Jia, Lili; You, Ying

    2016-01-01

    Advances in digital video compression and IP communication technologies have raised new issues and challenges concerning the integrity and authenticity of surveillance videos. It is essential that such systems ensure that, once recorded, a video cannot be altered, so that the audit trail remains intact for evidential purposes. This paper gives an overview of passive techniques of Digital Video Forensics, which are based on intrinsic fingerprints inherent in digital surveillance videos. We conducted a thorough review of the literature on video manipulation detection methods that accomplish blind authentication without reference to any auxiliary information. We review the various existing methods and conclude that much more work remains to be done in this field of video forensics based on video data analysis and observation of surveillance systems.

  14. Video Image Analysis of Turbulent Buoyant Jets Using a Novel Laboratory Apparatus

    NASA Astrophysics Data System (ADS)

    Crone, T. J.; Colgan, R. E.; Ferencevych, P. G.

    2012-12-01

    Turbulent buoyant jets play an important role in the transport of heat and mass in a variety of environmental settings on Earth. Naturally occurring examples include the discharges from high-temperature seafloor hydrothermal vents and from some types of subaerial volcanic eruptions. Anthropogenic examples include flows from industrial smokestacks and the flow from the damaged well after the Deepwater Horizon oil leak of 2010. Motivated by a desire to find non-invasive methods for measuring the volumetric flow rates of turbulent buoyant jets, we have constructed a laboratory apparatus that can generate these types of flows with easily adjustable nozzle velocities and fluid densities. The jet fluid comprises a variable mixture of nitrogen and carbon dioxide gas, which can be injected at any angle with respect to the vertical into the quiescent surrounding air. To make the flow visible we seed the jet fluid with a water fog generated by an array of piezoelectric diaphragms oscillating at ultrasonic frequencies. The system can generate jets that have initial densities ranging from approximately 2-48% greater than the ambient air. We obtain independent estimates of the volumetric flow rates using well-calibrated rotameters, and collect video image sequences for analysis at frame rates up to 120 frames per second using a machine vision camera. We are using this apparatus to investigate several outstanding problems related to the physics of these flows and their analysis using video imagery. First, we are working to better constrain several theoretical parameters that describe the trajectory of these flows when their initial velocities are not parallel to the buoyancy force. The ultimate goal of this effort is to develop well-calibrated methods for establishing volumetric flow rates using trajectory analysis. Second, we are working to refine optical plume velocimetry (OPV), a non-invasive technique for estimating flow rates using temporal cross-correlation of image

  15. Capturing and displaying microscopic images used in medical diagnostics and forensic science using 4K video resolution - an application in higher education.

    PubMed

    Maier, Hans; de Heer, Gert; Ortac, Ajda; Kuijten, Jan

    2015-11-01

    To analyze, interpret and evaluate microscopic images, used in medical diagnostics and forensic science, video images for educational purposes were made with a very high resolution of 4096 × 2160 pixels (4K), which is four times as many pixels as High-Definition Video (1920 × 1080 pixels). The unprecedented high resolution makes it possible to see details that remain invisible to any other video format. The images of the specimens (blood cells, tissue sections, hair, fibre, etc.) are recorded using a 4K video camera which is attached to a light microscope. After processing, this resulted in very sharp and highly detailed images. This material was then used in education for classroom discussion. Spoken explanation by experts in the field of medical diagnostics and forensic science was also added to the high-resolution video images to make it suitable for self-study.

  16. Fast Orientation of Video Images of Buildings Acquired from a UAV without Stabilization.

    PubMed

    Kedzierski, Michal; Delis, Paulina

    2016-06-23

    The aim of this research was to assess the possibility of conducting an absolute orientation procedure for video imagery, in which the external orientation for the first image was typical for aerial photogrammetry whereas the external orientation of the second was typical for terrestrial photogrammetry. Starting from the collinearity equations, assuming that the camera tilt angle is equal to 90°, a simplified mathematical model is proposed. The proposed method can be used to determine the X, Y, Z coordinates of points based on a set of collinearity equations of a pair of images. The use of simplified collinearity equations can considerably shorten the processing time of image data from Unmanned Aerial Vehicles (UAVs), especially in low cost systems. The conducted experiments have shown that it is possible to carry out a complete photogrammetric project of an architectural structure using a camera tilted 85°-90° (φ or ω) and simplified collinearity equations. It is also concluded that there is a correlation between the speed of the UAV and the discrepancy between the established and actual camera tilt angles.
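
    The paper's simplified closed form is not reproduced here; as a generic hedged sketch of how X, Y, Z follow from a pair of collinearity conditions, the code below performs standard least-squares intersection of the two viewing rays defined by the exterior orientations and image measurements. All names are illustrative.

    ```python
    import numpy as np

    def triangulate_two_rays(c1, d1, c2, d2):
        """Least-squares intersection of two viewing rays (generic triangulation).

        c1, c2 : 3-vectors, camera projection centres from the exterior orientations.
        d1, d2 : 3-vectors, ray directions obtained by rotating the image-space ray
                 (x - x0, y - y0, -f) into object space with each camera's rotation matrix.
        Returns the object point (X, Y, Z) minimizing the summed squared distance to both rays.
        """
        A, b = [], []
        for c, d in ((c1, d1), (c2, d2)):
            d = d / np.linalg.norm(d)
            P = np.eye(3) - np.outer(d, d)      # projector onto the plane normal to d
            A.append(P)
            b.append(P @ c)
        A = np.vstack(A)
        b = np.concatenate(b)
        X, *_ = np.linalg.lstsq(A, b, rcond=None)
        return X
    ```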

  17. Multiwavelength Fluorescence Otoscope for Video-Rate Chemical Imaging of Middle Ear Pathology

    PubMed Central

    2015-01-01

    A common motif in otolaryngology is the lack of certainty regarding diagnosis for middle ear conditions, resulting in many patients being overtreated under the worst-case assumption. Although pneumatic otoscopy and adjunctive tests offer additional information, white light otoscopy has been the main tool for diagnosis of external auditory canal and middle ear pathologies for over a century. In middle ear pathologies, the inability to avail high-resolution structural and/or molecular imaging is particularly glaring, leading to a complicated and erratic decision analysis. Here, we propose a novel multiwavelength fluorescence-based video-rate imaging strategy that combines readily available optical elements and software components to create a novel otoscopic device. This modified otoscope enables low-cost, detailed and objective diagnosis of common middle ear pathological conditions. Using the detection of congenital cholesteatoma as a specific example, we demonstrate the feasibility of fluorescence imaging to differentiate this proliferative lesion from uninvolved middle ear tissue based on the characteristic autofluorescence signals. Availability of real-time, wide-field chemical information should enable more complete removal of cholesteatoma, allowing for better hearing preservation and substantially reducing the well-documented risks, costs and psychological effects of repeated surgical procedures. PMID:25226556

  18. Fast Orientation of Video Images of Buildings Acquired from a UAV without Stabilization

    PubMed Central

    Kedzierski, Michal; Delis, Paulina

    2016-01-01

    The aim of this research was to assess the possibility of conducting an absolute orientation procedure for video imagery, in which the external orientation for the first image was typical for aerial photogrammetry whereas the external orientation of the second was typical for terrestrial photogrammetry. Starting from the collinearity equations, assuming that the camera tilt angle is equal to 90°, a simplified mathematical model is proposed. The proposed method can be used to determine the X, Y, Z coordinates of points based on a set of collinearity equations of a pair of images. The use of simplified collinearity equations can considerably shorten the processing time of image data from Unmanned Aerial Vehicles (UAVs), especially in low-cost systems. The conducted experiments have shown that it is possible to carry out a complete photogrammetric project of an architectural structure using a camera tilted 85°–90° (φ or ω) and simplified collinearity equations. It is also concluded that there is a correlation between the speed of the UAV and the discrepancy between the established and actual camera tilt angles. PMID:27347954

  19. Transactions and Answer Judging in Multimedia Instruction: A Way to Transact with Features Appearing in Video and Graphic Images.

    ERIC Educational Resources Information Center

    Casey, Carl

    1992-01-01

    Discussion of transactions in computer-based instruction for ill-structured and visual domains focuses on two transactions developed for meteorology training that provide the capability to interact with video and graphic images at a very detailed level. Potential applications for the transactions are suggested, and early evaluation reports are…

  20. Video flowmeter

    DOEpatents

    Lord, David E.; Carter, Gary W.; Petrini, Richard R.

    1983-01-01

    A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid (10) containing entrained particles (12) is formed and positioned by a rod optic lens assembly (31) on the raster area of a low-light level television camera (20). The particles (12) are illuminated by light transmitted through a bundle of glass fibers (32) surrounding the rod optic lens assembly (31). Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen (40). The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid (10).

  1. Video flowmeter

    DOEpatents

    Lord, D.E.; Carter, G.W.; Petrini, R.R.

    1981-06-10

    A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid containing entrained particles is formed and positioned by a rod optic lens assembly on the raster area of a low-light level television camera. The particles are illuminated by light transmitted through a bundle of glass fibers surrounding the rod optic lens assembly. Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen. The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid.

  2. Video flowmeter

    DOEpatents

    Lord, D.E.; Carter, G.W.; Petrini, R.R.

    1983-08-02

    A video flowmeter is described that is capable of specifying flow nature and pattern and, at the same time, the quantitative value of the rate of volumetric flow. An image of a determinable volumetric region within a fluid containing entrained particles is formed and positioned by a rod optic lens assembly on the raster area of a low-light level television camera. The particles are illuminated by light transmitted through a bundle of glass fibers surrounding the rod optic lens assembly. Only particle images having speeds on the raster area below the raster line scanning speed may be used to form a video picture which is displayed on a video screen. The flowmeter is calibrated so that the locus of positions of origin of the video picture gives a determination of the volumetric flow rate of the fluid. 4 figs.

  3. Sensing the delivery and endocytosis of nanoparticles using magneto-photo-acoustic imaging

    PubMed Central

    Qu, M.; Mehrmohammadi, M.; Emelianov, S.Y.

    2015-01-01

    Many biomedical applications necessitate a targeted intracellular delivery of the nanomaterial to specific cells. Therefore, a non-invasive and reliable imaging tool is required to detect both the delivery and cellular endocytosis of the nanoparticles. Herein, we demonstrate that magneto-photo-acoustic (MPA) imaging can be used to monitor the delivery and to identify endocytosis of magnetic and optically absorbing nanoparticles. The relationship between photoacoustic (PA) and magneto-motive ultrasound (MMUS) signals from the in vitro samples was analyzed to identify the delivery and endocytosis of nanoparticles. The results indicated that during the delivery of nanoparticles to the vicinity of the cells, the PA and MMUS signals are almost linearly proportional. However, accumulation of nanoparticles within the cells leads to a nonlinear MMUS-PA relationship, due to non-linear MMUS signal amplification. Therefore, through longitudinal MPA imaging, it is possible to monitor the delivery of nanoparticles and identify the endocytosis of the nanoparticles by living cells. PMID:26640773

  4. Sparsity-based acoustic inversion in cross-sectional multiscale optoacoustic imaging

    SciTech Connect

    Han, Yiyong; Tzoumas, Stratis; Nunes, Antonio; Ntziachristos, Vasilis; Rosenthal, Amir

    2015-09-15

    Purpose: With recent advancement in hardware of optoacoustic imaging systems, highly detailed cross-sectional images may be acquired at a single laser shot, thus eliminating motion artifacts. Nonetheless, other sources of artifacts remain due to signal distortion or out-of-plane signals. The purpose of image reconstruction algorithms is to obtain the most accurate images from noisy, distorted projection data. Methods: In this paper, the authors use the model-based approach for acoustic inversion, combined with a sparsity-based inversion procedure. Specifically, a cost function is used that includes the L1 norm of the image in sparse representation and a total variation (TV) term. The optimization problem is solved by a numerically efficient implementation of a nonlinear gradient descent algorithm. TV–L1 model-based inversion is tested in the cross section geometry for numerically generated data as well as for in vivo experimental data from an adult mouse. Results: In all cases, model-based TV–L1 inversion showed a better performance over the conventional Tikhonov regularization, TV inversion, and L1 inversion. In the numerical examples, the images reconstructed with TV–L1 inversion were quantitatively more similar to the originating images. In the experimental examples, TV–L1 inversion yielded sharper images and weaker streak artifact. Conclusions: The results herein show that TV–L1 inversion is capable of improving the quality of highly detailed, multiscale optoacoustic images obtained in vivo using cross-sectional imaging systems. As a result of its high fidelity, model-based TV–L1 inversion may be considered as the new standard for image reconstruction in cross-sectional imaging.
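
    As a rough sketch of the kind of cost function described in this record (not the authors' code; the forward operator A, step size and smoothing constant are assumed placeholders, and sparsity is taken in the image domain for simplicity), a gradient-descent TV-L1 inversion might look like:

```python
import numpy as np

def smoothed_abs(x, eps=1e-3):
    return np.sqrt(x * x + eps)

def tv_l1_inversion(A, y, shape, lam_l1=0.01, lam_tv=0.05,
                    step=1e-3, n_iter=500, eps=1e-3):
    """Minimise ||A x - y||^2 + lam_l1*||x||_1 + lam_tv*TV(x) by gradient
    descent on smoothed surrogates of the absolute value."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        img = x.reshape(shape)
        gx = np.diff(img, axis=1, append=img[:, -1:])   # horizontal differences
        gy = np.diff(img, axis=0, append=img[-1:, :])   # vertical differences
        sx, sy = gx / smoothed_abs(gx, eps), gy / smoothed_abs(gy, eps)
        # derivative of the (anisotropic, smoothed) TV term w.r.t. each pixel
        d_tv = np.zeros(shape)
        d_tv[:, :-1] -= sx[:, :-1]
        d_tv[:, 1:] += sx[:, :-1]
        d_tv[:-1, :] -= sy[:-1, :]
        d_tv[1:, :] += sy[:-1, :]
        grad = (2 * A.T @ (A @ x - y)
                + lam_l1 * x / smoothed_abs(x, eps)
                + lam_tv * d_tv.ravel())
        x -= step * grad
    return x.reshape(shape)
```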

  5. Mapping preictal and ictal haemodynamic networks using video-electroencephalography and functional imaging.

    PubMed

    Chaudhary, Umair J; Carmichael, David W; Rodionov, Roman; Thornton, Rachel C; Bartlett, Phillipa; Vulliemoz, Serge; Micallef, Caroline; McEvoy, Andrew W; Diehl, Beate; Walker, Matthew C; Duncan, John S; Lemieux, Louis

    2012-12-01

    Ictal patterns on scalp-electroencephalography are often visible only after propagation, therefore rendering localization of the seizure onset zone challenging. We hypothesized that mapping haemodynamic changes before and during seizures using simultaneous video-electroencephalography and functional imaging will improve the localization of the seizure onset zone. Fifty-five patients with ≥2 refractory focal seizures/day, and who had undergone long-term video-electroencephalography monitoring were included in the study. 'Preictal' (30 s immediately preceding the electrographic seizure onset) and ictal phases, 'ictal-onset', 'ictal-established' and 'late ictal', were defined based on the evolution of the electrographic pattern and clinical semiology. The functional imaging data were analysed using statistical parametric mapping to map ictal phase-related haemodynamic changes consistent across seizures. The resulting haemodynamic maps were overlaid on co-registered anatomical scans, and the spatial concordance with the presumed and invasively defined seizure onset zone was determined. Twenty patients had typical seizures during functional imaging. Seizures were identified on video-electroencephalography in 15 of 20, on electroencephalography alone in two and on video alone in three patients. All patients showed significant ictal-related haemodynamic changes. In the six cases that underwent invasive evaluation, the ictal-onset phase-related maps had a degree of concordance with the presumed seizure onset zone for all patients. The most statistically significant haemodynamic cluster within the presumed seizure onset zone was between 1.1 and 3.5 cm from the invasively defined seizure onset zone, which was resected in two of three patients undergoing surgery (Class I post-surgical outcome) and was not resected in one patient (Class III post-surgical outcome). In the remaining 14 cases, the ictal-onset phase-related maps had a degree of concordance with the presumed seizure onset zone.

  6. Imaging of Acoustically Coupled Oscillations Due to Flow Past a Shallow Cavity: Effect of Cavity Length Scale

    SciTech Connect

    P. Oshkai; M. Geveci; D. Rockwell; M. Pollack

    2002-12-12

    Flow-acoustic interactions due to fully turbulent inflow past a shallow axisymmetric cavity mounted in a pipe are investigated using a technique of high-image-density particle image velocimetry in conjunction with unsteady pressure measurements. This imaging leads to patterns of velocity, vorticity, streamline topology, and hydrodynamic contributions to the acoustic power integral. Global instantaneous images, as well as time-averaged images, are evaluated to provide insight into the flow physics during tone generation. Emphasis is on the manner in which the streamwise length scale of the cavity alters the major features of the flow structure. These image-based approaches allow identification of regions of the unsteady shear layer that contribute to the instantaneous hydrodynamic component of the acoustic power, which is necessary to maintain a flow tone. In addition, combined image analysis and pressure measurements allow categorization of the instantaneous flow patterns that are associated with types of time traces and spectra of the fluctuating pressure. In contrast to consideration based solely on pressure spectra, it is demonstrated that locked-on tones may actually exhibit intermittent, non-phase-locked images, apparently due to low damping of the acoustic resonator. Locked-on flow tones (without modulation or intermittency), locked-on flow tones with modulation, and non-locked-on oscillations with short-term, highly coherent fluctuations are defined and represented by selected cases. Depending on which of these regimes occurs, the time-averaged Q (quality)-factor and the dimensionless peak pressure are substantially altered.

  7. Contribution of the supraglottic larynx to the vocal product: imaging and acoustic analysis

    NASA Astrophysics Data System (ADS)

    Gracco, L. Carol

    1996-04-01

    Horizontal supraglottic laryngectomy is a surgical procedure to remove a mass lesion located in the region of the pharynx superior to the true vocal folds. In contrast to full or partial laryngectomy, patients who undergo horizontal supraglottic laryngectomy often present with little or no involvement of the true vocal folds. This population provides an opportunity to examine the acoustic consequences of altering the pharynx while sparing the laryngeal sound source. Acoustic and magnetic resonance imaging (MRI) data were acquired in a group of four patients before and after supraglottic laryngectomy. Acoustic measures included the identification of vocal tract resonances and the fundamental frequency of vocal fold vibration. 3D reconstructions of the pharyngeal portion of each subject's vocal tract were made from MRIs taken during phonation, and volume measures were obtained. These measures reveal a variable, but often dramatic, difference in the surgically altered area of the pharynx and changes in the formant frequencies of the vowel /i/ post-surgically. In some cases the presence of the tumor created a deviation from the expected formant values pre-operatively, with post-operative values approaching normal. Patients who also underwent radiation treatment post-surgically tended to have greater constriction in the pharyngeal area of the vocal tract.

  8. Acoustic property reconstruction of a neonate Yangtze finless porpoise's (Neophocaena asiaeorientalis) head based on CT imaging.

    PubMed

    Wei, Chong; Wang, Zhitao; Song, Zhongchang; Wang, Kexiong; Wang, Ding; Au, Whitlow W L; Zhang, Yu

    2015-01-01

    The reconstruction of the acoustic properties of a neonate finless porpoise's head was performed using X-ray computed tomography (CT). The head of the deceased neonate porpoise was also segmented across the body axis and cut into slices. The averaged sound velocity and density were measured, and the Hounsfield units (HU) of the corresponding slices were obtained from computed tomography scanning. A regression analysis was employed to show the linear relationships between the Hounsfield unit and both sound velocity and density of samples. Furthermore, the CT imaging data were used to compare the HU value, sound velocity, density and acoustic characteristic impedance of the main tissues in the porpoise's head. The results showed that the linear relationships between HU and both sound velocity and density were qualitatively consistent with previous studies on Indo-Pacific humpback dolphins and Cuvier's beaked whales. However, there was no significant increase of the sound velocity and acoustic impedance from the inner core to the outer layer in this neonate finless porpoise's melon.

  9. Investigating the emotional response to room acoustics: A functional magnetic resonance imaging study.

    PubMed

    Lawless, M S; Vigeant, M C

    2015-10-01

    While previous research has demonstrated the powerful influence of pleasant and unpleasant music on emotions, the present study utilizes functional magnetic resonance imaging (fMRI) to assess the positive and negative emotional responses as demonstrated in the brain when listening to music convolved with varying room acoustic conditions. During fMRI scans, subjects rated auralizations created in a simulated concert hall with varying reverberation times. The analysis detected activations in the dorsal striatum, a region associated with anticipation of reward, for two individuals for the highest rated stimulus, though no activations were found for regions associated with negative emotions in any subject.

  10. Radon transform imaging: low-cost video compressive imaging at extreme resolutions

    NASA Astrophysics Data System (ADS)

    Sankaranarayanan, Aswin C.; Wang, Jian; Gupta, Mohit

    2016-05-01

    Most compressive imaging architectures rely on programmable light-modulators to obtain coded linear measurements of a signal. As a consequence, the properties of the light modulator place fundamental limits on the cost, performance, practicality, and capabilities of the compressive camera. For example, the spatial resolution of the single pixel camera is limited to that of its light modulator, which is seldom greater than 4 megapixels. In this paper, we describe a novel approach to compressive imaging that avoids the use of a spatial light modulator. In its place, we use novel cylindrical optics and a rotation gantry to directly sample the Radon transform of the image focused on the sensor plane. We show that the reconstruction problem is identical to sparse tomographic recovery and we can leverage the vast literature in compressive magnetic resonance imaging (MRI) to good effect. The proposed design has many important advantages over existing compressive cameras. First, we can achieve a resolution of N × N pixels using a sensor with N photodetectors; hence, with commercially available SWIR line-detectors with 10k pixels, we can potentially achieve spatial resolutions of 100 megapixels, a capability that is unprecedented. Second, our design scales more gracefully across wavebands of light since we only require sensors and optics that are optimized for the wavelengths of interest; in contrast, spatial light modulators like DMDs require expensive coatings to be effective in non-visible wavebands. Third, we can exploit properties of line-detectors including electronic shutters and pixels with large aspect ratios to optimize light throughput. On the flip side, a drawback of our approach is the need for moving components in the imaging architecture.
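
    Because the acquisition geometry is that of parallel-beam tomography, the recovery step can be prototyped with the Radon-transform utilities in scikit-image (an illustrative stand-in, not the authors' reconstruction; the phantom, image size and angle count are arbitrary):

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon_sart, resize

# One line-detector readout per gantry angle is one Radon projection.
image = resize(shepp_logan_phantom(), (128, 128))
angles = np.linspace(0.0, 180.0, 60, endpoint=False)   # deliberately under-sampled
sinogram = radon(image, theta=angles)

# SART is used here as a simple stand-in for the compressive (MRI-style)
# sparse recovery mentioned in the abstract; a second pass refines the result.
recon = iradon_sart(sinogram, theta=angles)
recon = iradon_sart(sinogram, theta=angles, image=recon)
print(recon.shape)
```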

  11. Acoustic radiation force impulse (ARFI) imaging of zebrafish embryo by high-frequency coded excitation sequence.

    PubMed

    Park, Jinhyoung; Lee, Jungwoo; Lau, Sien Ting; Lee, Changyang; Huang, Ying; Lien, Ching-Ling; Kirk Shung, K

    2012-04-01

    Acoustic radiation force impulse (ARFI) imaging has been developed as a non-invasive method for quantitative illustration of tissue stiffness or displacement. Conventional ARFI imaging (2-10 MHz) has been implemented in commercial scanners for illustrating elastic properties of several organs. The image resolution, however, is too coarse to study mechanical properties of micro-sized objects such as cells. This article thus presents a high-frequency coded excitation ARFI technique, with the ultimate goal of displaying elastic characteristics of cellular structures. Tissue mimicking phantoms and zebrafish embryos are imaged with a 100-MHz lithium niobate (LiNbO₃) transducer, by cross-correlating tracked RF echoes with the reference. The phantom results show that the contrast of the ARFI image with coded excitation (14 dB) is better than that of the conventional ARFI image (9 dB). The depths of penetration are 2.6 and 2.2 mm, respectively. The stiffness data of the zebrafish demonstrate that the envelope is harder than the embryo region. The temporal displacement changes at the embryo and the chorion are as large as 36 and 3.6 μm, respectively. Consequently, this high-frequency ARFI approach may serve as a remote palpation imaging tool that reveals viscoelastic properties of small biological samples.
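
    The displacement-tracking step (cross-correlating tracked RF echoes with a reference) can be sketched as a windowed 1-D cross-correlation; this is a generic illustration rather than the authors' 100-MHz implementation, and the window length, step and sampling parameters are assumptions:

```python
import numpy as np

def track_displacement(ref_line, trk_line, fs, c=1540.0, win=64, step=16):
    """Estimate axial displacement versus depth (in metres) by cross-correlating
    windows of a tracked RF line against the corresponding reference windows."""
    shifts = []
    for start in range(0, len(ref_line) - win, step):
        r = ref_line[start:start + win] - np.mean(ref_line[start:start + win])
        t = trk_line[start:start + win] - np.mean(trk_line[start:start + win])
        xc = np.correlate(t, r, mode="full")
        lag = int(np.argmax(xc)) - (win - 1)     # delay in samples at peak correlation
        shifts.append(lag * c / (2.0 * fs))      # convert two-way delay to displacement
    return np.asarray(shifts)
```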

  12. Video-rate in vivo fluorescence imaging with a line-scanned dual-axis confocal microscope

    NASA Astrophysics Data System (ADS)

    Chen, Ye; Wang, Danni; Khan, Altaz; Wang, Yu; Borwege, Sabine; Sanai, Nader; Liu, Jonathan T. C.

    2015-10-01

    Video-rate optical-sectioning microscopy of living organisms would allow for the investigation of dynamic biological processes and would also reduce motion artifacts, especially for in vivo imaging applications. Previous feasibility studies, with a slow stage-scanned line-scanned dual-axis confocal (LS-DAC) microscope, have demonstrated that LS-DAC microscopy is capable of imaging tissues with subcellular resolution and high contrast at moderate depths of up to several hundred microns. However, the sensitivity and performance of a video-rate LS-DAC imaging system, with low-numerical aperture optics, have yet to be demonstrated. Here, we report on the construction and validation of a video-rate LS-DAC system that possesses sufficient sensitivity to visualize fluorescent contrast agents that are topically applied or systemically delivered in animal and human tissues. We present images of murine oral mucosa that are topically stained with methylene blue, and images of protoporphyrin IX-expressing brain tumor from glioma patients that have been administered 5-aminolevulinic acid prior to surgery. In addition, we demonstrate in vivo fluorescence imaging of red blood cells trafficking within the capillaries of a mouse ear, at frame rates of up to 30 fps. These results can serve as a benchmark for miniature in vivo microscopy devices under development.

  13. Video-rate in vivo fluorescence imaging with a line-scanned dual-axis confocal microscope

    PubMed Central

    Chen, Ye; Wang, Danni; Khan, Altaz; Wang, Yu; Borwege, Sabine; Sanai, Nader; Liu, Jonathan T. C.

    2015-01-01

    Abstract. Video-rate optical-sectioning microscopy of living organisms would allow for the investigation of dynamic biological processes and would also reduce motion artifacts, especially for in vivo imaging applications. Previous feasibility studies, with a slow stage-scanned line-scanned dual-axis confocal (LS-DAC) microscope, have demonstrated that LS-DAC microscopy is capable of imaging tissues with subcellular resolution and high contrast at moderate depths of up to several hundred microns. However, the sensitivity and performance of a video-rate LS-DAC imaging system, with low-numerical aperture optics, have yet to be demonstrated. Here, we report on the construction and validation of a video-rate LS-DAC system that possesses sufficient sensitivity to visualize fluorescent contrast agents that are topically applied or systemically delivered in animal and human tissues. We present images of murine oral mucosa that are topically stained with methylene blue, and images of protoporphyrin IX-expressing brain tumor from glioma patients that have been administered 5-aminolevulinic acid prior to surgery. In addition, we demonstrate in vivo fluorescence imaging of red blood cells trafficking within the capillaries of a mouse ear, at frame rates of up to 30 fps. These results can serve as a benchmark for miniature in vivo microscopy devices under development. PMID:26509413

  14. Temperature-dependent differences in the nonlinear acoustic behavior of ultrasound contrast agents revealed by high-speed imaging and bulk acoustics.

    PubMed

    Mulvana, Helen; Stride, Eleanor; Tang, Mengxing; Hajnal, Jo V; Eckersley, Robert

    2011-09-01

    Previous work by the authors has established that increasing the temperature of the suspending liquid from 20°C to body temperature has a significant impact on the bulk acoustic properties and stability of an ultrasound contrast agent suspension (SonoVue, Bracco Suisse SA, Manno, Lugano, Switzerland). In this paper the influence of temperature on the nonlinear behavior of microbubbles is investigated, because this is one of the most important parameters in the context of diagnostic imaging. High-speed imaging showed that raising the temperature significantly influences the dynamic behavior of individual microbubbles. At body temperature, microbubbles exhibit greater radial excursion and oscillate less spherically, with a greater incidence of jetting and gas expulsion, and therefore collapse, than they do at room temperature. Bulk acoustics revealed an associated increase in the harmonic content of the scattered signals. These findings emphasize the importance of conducting laboratory studies at body temperature if the results are to be interpreted for in vivo applications.

  15. Thermal image analysis of plastic deformation and fracture behavior by a thermo-video measurement system

    NASA Astrophysics Data System (ADS)

    Ohbuchi, Yoshifumi; Sakamoto, Hidetoshi; Nagatomo, Nobuaki

    2016-12-01

    The visualization of the plastic region and the measurement of its size are necessary and indispensable to evaluate the deformation and fracture behavior of a material. In order to evaluate the plastic deformation and fracture behavior in a structural member with some flaws, the authors paid attention to the surface temperature which is generated by plastic strain energy. The visualization of the plastic deformation was developed by analyzing the relationship between the extension of the plastic deformation range and the surface temperature distribution, which was obtained by an infrared thermo-video system. Furthermore, FEM elasto-plastic analysis was carried out alongside the experiment, and the effectiveness of this non-contact measurement system for the plastic deformation and fracture process based on a thermography system was discussed. The evaluation method using an infrared imaging device proposed in this research has a feature that does not exist in current evaluation methods: the heat distribution on the material surface is measured over a wide area, without contact, in 2D and at high speed. The new measuring technique proposed here can measure the macroscopic plastic deformation distribution on the material surface widely, precisely and at high speed as a 2D image, calculated from the heat generation and heat propagation distributions.

  16. A Marker-less Monitoring System for Movement Analysis of Infants Using Video Images

    NASA Astrophysics Data System (ADS)

    Shima, Keisuke; Osawa, Yuko; Bu, Nan; Tsuji, Tokuo; Tsuji, Toshio; Ishii, Idaku; Matsuda, Hiroshi; Orito, Kensuke; Ikeda, Tomoaki; Noda, Shunichi

    This paper proposes a marker-less motion measurement and analysis system for infants. The system calculates eight types of evaluation indices related to the movement of an infant, such as “amount of body motion” and “activity of body”, from binary images that are extracted from video images using background difference and frame difference. Thus, medical doctors can intuitively understand the movements of infants without long-term observations, and this may be helpful in supporting their diagnoses and detecting disabilities and diseases in the early stages. The distinctive feature of this system is that the movements of infants can be measured without any motion-capture markers, so it is expected that the natural and inherent tendencies of infants can be analyzed and evaluated. In this paper, the evaluation indices and features of movements of full-term infants (FTIs) and low birth weight infants (LBWIs) are compared using the developed prototype. We found that the amount of body motion and the symmetry of upper and lower body movements of LBWIs were lower than those of FTIs. The difference between the movements of FTIs and LBWIs can be evaluated using the proposed system.
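
    A minimal version of the “amount of body motion” index described in this record can be computed from consecutive frames with OpenCV (a hedged sketch using plain frame differencing; the threshold value and the reduction to a single per-frame number are assumptions):

```python
import cv2
import numpy as np

def body_motion_index(video_path, thresh=25):
    """Return, per frame, the fraction of pixels that changed since the
    previous frame (a crude 'amount of body motion' measure)."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise IOError("could not read video: " + video_path)
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    indices = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev)                          # frame difference
        _, binary = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
        indices.append(np.count_nonzero(binary) / binary.size)  # changed-pixel ratio
        prev = gray
    cap.release()
    return indices
```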

  17. Acoustic structure quantification by using ultrasound Nakagami imaging for assessing liver fibrosis

    PubMed Central

    Tsui, Po-Hsiang; Ho, Ming-Chih; Tai, Dar-In; Lin, Ying-Hsiu; Wang, Chiao-Yin; Ma, Hsiang-Yang

    2016-01-01

    Acoustic structure quantification (ASQ) is a recently developed technique widely used for detecting liver fibrosis. Ultrasound Nakagami parametric imaging based on the Nakagami distribution has been widely used to model echo amplitude distribution for tissue characterization. We explored the feasibility of using ultrasound Nakagami imaging as a model-based ASQ technique for assessing liver fibrosis. Standard ultrasound examinations were performed on 19 healthy volunteers and 91 patients with chronic hepatitis B and C (n = 110). Liver biopsy and ultrasound Nakagami imaging analysis were conducted to compare the METAVIR score and Nakagami parameter. The diagnostic value of ultrasound Nakagami imaging was evaluated using receiver operating characteristic (ROC) curves. The Nakagami parameter obtained through ultrasound Nakagami imaging decreased with an increase in the METAVIR score (p < 0.0001), representing an increase in the extent of pre-Rayleigh statistics for echo amplitude distribution. The area under the ROC curve (AUROC) was 0.88 for the diagnosis of any degree of fibrosis (≥F1), whereas it was 0.84, 0.69, and 0.67 for ≥F2, ≥F3, and ≥F4, respectively. Ultrasound Nakagami imaging is a model-based ASQ technique that can be beneficial for the clinical diagnosis of early liver fibrosis. PMID:27605260
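
    For reference, the Nakagami shape parameter underlying this kind of ASQ analysis is commonly estimated from moments of the echo envelope; a minimal, generic estimator (not the scanner-specific pipeline of the study) is:

```python
import numpy as np

def nakagami_m(envelope):
    """Moment-based estimate of the Nakagami m parameter from echo-envelope
    amplitudes: m = E[R^2]^2 / Var(R^2).
    m < 1 indicates pre-Rayleigh statistics, m = 1 Rayleigh, m > 1 post-Rayleigh."""
    r2 = np.asarray(envelope, dtype=float) ** 2
    return np.mean(r2) ** 2 / np.var(r2)

# Example: a Rayleigh-distributed envelope should give m close to 1.
rng = np.random.default_rng(0)
print(nakagami_m(rng.rayleigh(scale=1.0, size=100_000)))
```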

  18. Acoustic structure quantification by using ultrasound Nakagami imaging for assessing liver fibrosis.

    PubMed

    Tsui, Po-Hsiang; Ho, Ming-Chih; Tai, Dar-In; Lin, Ying-Hsiu; Wang, Chiao-Yin; Ma, Hsiang-Yang

    2016-09-08

    Acoustic structure quantification (ASQ) is a recently developed technique widely used for detecting liver fibrosis. Ultrasound Nakagami parametric imaging based on the Nakagami distribution has been widely used to model echo amplitude distribution for tissue characterization. We explored the feasibility of using ultrasound Nakagami imaging as a model-based ASQ technique for assessing liver fibrosis. Standard ultrasound examinations were performed on 19 healthy volunteers and 91 patients with chronic hepatitis B and C (n = 110). Liver biopsy and ultrasound Nakagami imaging analysis were conducted to compare the METAVIR score and Nakagami parameter. The diagnostic value of ultrasound Nakagami imaging was evaluated using receiver operating characteristic (ROC) curves. The Nakagami parameter obtained through ultrasound Nakagami imaging decreased with an increase in the METAVIR score (p < 0.0001), representing an increase in the extent of pre-Rayleigh statistics for echo amplitude distribution. The area under the ROC curve (AUROC) was 0.88 for the diagnosis of any degree of fibrosis (≥F1), whereas it was 0.84, 0.69, and 0.67 for ≥F2, ≥F3, and ≥F4, respectively. Ultrasound Nakagami imaging is a model-based ASQ technique that can be beneficial for the clinical diagnosis of early liver fibrosis.

  19. Acoustic quasi-holographic images of scattering by vertical cylinders from one-dimensional bistatic scans.

    PubMed

    Baik, Kyungmin; Dudley, Christopher; Marston, Philip L

    2011-12-01

    When synthetic aperture sonar (SAS) is used to image elastic targets in water, subtle features can be present in the images associated with the dynamical response of the target being viewed. In an effort to improve the understanding of such responses, as well as to explore alternative image processing methods, a laboratory-based system was developed in which targets were illuminated by a transient acoustic source, and bistatic responses were recorded by scanning a hydrophone along a rail system. Images were constructed using a relatively conventional bistatic SAS algorithm and were compared with images based on supersonic holography. The holographic method is a simplification of one previously used to view the time evolution of a target's response [Hefner and Marston, ARLO 2, 55-60 (2001)]. In the holographic method, the space-time evolution of the scattering was used to construct a two-dimensional image with cross range and time as coordinates. Various features for vertically hung cylindrical targets were interpreted using high frequency ray theory. This includes contributions from guided surface elastic waves, as well as transmitted-wave features and specular reflection.

  20. Analysis and segmentation of images in case of solving problems of detecting and tracing objects on real-time video

    NASA Astrophysics Data System (ADS)

    Ezhova, Kseniia; Fedorenko, Dmitriy; Chuhlamov, Anton

    2016-04-01

    The article deals with methods of image segmentation based on colour space conversion, which allow efficient detection of a single colour against a complex background and under varying lighting, as well as detection of objects against a homogeneous background. The results of the analysis of segmentation algorithms of this type are presented, together with the possibility of implementing them in software. The implemented algorithm is computationally expensive, which limits its applicability to video analysis; however, it makes it possible to analyse objects in an image when no image dictionary or knowledge base is available, and to solve the problem of choosing optimal frame-quantization parameters for video analysis.
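
    A minimal example of the colour-space-conversion segmentation analysed in the article, written with OpenCV (the HSV bounds are arbitrary placeholders for whatever target colour is being detected, and the morphological cleanup is an added assumption):

```python
import cv2
import numpy as np

def detect_color(frame_bgr, lower_hsv=(100, 80, 50), upper_hsv=(130, 255, 255)):
    """Convert a BGR frame to HSV and return a binary mask plus bounding boxes
    of connected regions that fall inside the given hue/saturation/value range."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    # small morphological opening to suppress isolated noisy pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours]
    return mask, boxes
```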

  1. A novel imaging technique based on the spatial coherence of backscattered waves: demonstration in the presence of acoustical clutter

    NASA Astrophysics Data System (ADS)

    Dahl, Jeremy J.; Pinton, Gianmarco F.; Lediju, Muyinatu; Trahey, Gregg E.

    2011-03-01

    In the last 20 years, the number of suboptimal and inadequate ultrasound exams has increased. This trend has been linked to the increasing population of overweight and obese individuals. The primary causes of image degradation in these individuals are often attributed to phase aberration and clutter. Phase aberration degrades image quality by distorting the transmitted and received pressure waves, while clutter degrades image quality by introducing incoherent acoustical interference into the received pressure wavefront. Although significant research efforts have pursued the correction of image degradation due to phase aberration, few efforts have characterized or corrected image degradation due to clutter. We have developed a novel imaging technique that is capable of differentiating ultrasonic signals corrupted by acoustical interference. The technique, named short-lag spatial coherence (SLSC) imaging, is based on the spatial coherence of the received ultrasonic wavefront at small spatial distances across the transducer aperture. We demonstrate comparative B-mode and SLSC images using full-wave simulations that include the effects of clutter and show that SLSC imaging generates contrast-to-noise ratios (CNR) and signal-to-noise ratios (SNR) that are significantly better than B-mode imaging under noise-free conditions. In the presence of noise, SLSC imaging significantly outperforms conventional B-mode imaging in all image quality metrics. We demonstrate the use of SLSC imaging in vivo and compare B-mode and SLSC images of human thyroid and liver.
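
    The short-lag spatial coherence metric at the core of SLSC imaging can be sketched directly from receive-delayed channel data (a generic illustration; the lag limit and kernel handling are assumptions, and beamforming details are omitted):

```python
import numpy as np

def slsc_value(channel_data, max_lag=5):
    """Compute the short-lag spatial coherence for one pixel.
    channel_data: 2-D array (n_elements, n_samples) of receive-delayed RF
    samples in a small axial kernel centred on the pixel."""
    n_elem, _ = channel_data.shape
    total = 0.0
    for lag in range(1, max_lag + 1):
        coh = []
        for i in range(n_elem - lag):
            a, b = channel_data[i], channel_data[i + lag]
            denom = np.sqrt(np.sum(a * a) * np.sum(b * b))
            if denom > 0:
                coh.append(np.sum(a * b) / denom)   # normalised spatial correlation
        total += np.mean(coh)                        # sum of mean coherence over short lags
    return total
```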

  2. Stress-Induced Fracturing of Reservoir Rocks: Acoustic Monitoring and μCT Image Analysis

    NASA Astrophysics Data System (ADS)

    Pradhan, Srutarshi; Stroisz, Anna M.; Fjær, Erling; Stenebråten, Jørn F.; Lund, Hans K.; Sønstebø, Eyvind F.

    2015-11-01

    Stress-induced fracturing in reservoir rocks is an important issue for the petroleum industry. While productivity can be enhanced by a controlled fracturing operation, it can trigger borehole instability problems by reactivating existing fractures/faults in a reservoir. However, safe fracturing can improve the quality of operations during CO2 storage, geothermal installation and gas production at and from the reservoir rocks. Therefore, understanding the fracturing behavior of different types of reservoir rocks is a basic need for planning field operations toward these activities. In our study, stress-induced fracturing of rock samples has been monitored by acoustic emission (AE) and post-experiment computer tomography (CT) scans. We have used hollow cylinder cores of sandstones and chalks, which are representatives of reservoir rocks. The fracture-triggering stress has been measured for different rocks and compared with theoretical estimates. The population of AE events shows the location of the main fracture arms, which is in good agreement with post-test CT image analysis, and the fracture patterns inside the samples are visualized through 3D image reconstructions. The amplitudes and energies of acoustic events clearly indicate initiation and propagation of the main fractures. Time evolution of the radial strain measured in the fracturing tests will later be compared to model predictions of fracture size.

  3. Acoustic wavefield and Mach wave radiation of flashing arcs in strombolian explosion measured by image luminance

    NASA Astrophysics Data System (ADS)

    Genco, Riccardo; Ripepe, Maurizio; Marchetti, Emanuele; Bonadonna, Costanza; Biass, Sebastien

    2014-10-01

    Explosive activity often generates visible flashing arcs in the volcanic plume, considered evidence of shock-front propagation induced by supersonic dynamics. High-speed image processing is used to visualize the pressure wavefield associated with flashing arcs observed in strombolian explosions. Image luminance is converted into a virtual acoustic signal compatible with the signal recorded by a pressure transducer. Luminance variations move as a spherical front at a velocity of 344.7 m/s. Flashing arcs travel at the speed of sound as little as 14 m above the vent and are not necessarily evidence of a supersonic explosive dynamics. However, seconds later, the velocity of small fragments increases, and the spherical acousto-luminance wavefront becomes planar, recalling the Mach wave radiation generated by large-scale turbulence in a high-speed jet. This planar wavefront forms a Mach angle of 55° with the explosive jet axis, suggesting an explosive dynamics moving at a Mach number Mo = 1.22.
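
    The reported Mach number follows from the Mach-angle relation sin(μ) = 1/M; a one-line check using the 55° angle quoted in the abstract:

```python
import math

mach_angle_deg = 55.0
mach_number = 1.0 / math.sin(math.radians(mach_angle_deg))
print(round(mach_number, 2))   # 1.22, consistent with the reported Mo
```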

  4. Noncontact photoacoustic imaging achieved by using a low-coherence interferometer as the acoustic detector.

    PubMed

    Wang, Yi; Li, Chunhui; Wang, Ruikang K

    2011-10-15

    We report on a noncontact photoacoustic imaging (PAI) technique in which a low-coherence interferometer [(LCI), optical coherence tomography (OCT) hardware] is utilized as the acoustic detector. A synchronization approach is used to lock the LCI system at its highly sensitive region for photoacoustic detection. The technique is experimentally verified by the imaging of a scattering phantom embedded with hairs and the blood vessels within a mouse ear in vitro. The system's axial and lateral resolutions are evaluated at 60 and 30 μm, respectively. The experimental results indicate that PAI in a noncontact detection mode is possible with high resolution and high bandwidth. The proposed approach lends itself to a natural integration of PAI with OCT, rather than a combination of two separate and independent systems.

  5. Design factors of intravascular dual frequency transducers for super-harmonic contrast imaging and acoustic angiography.

    PubMed

    Ma, Jianguo; Martin, K Heath; Li, Yang; Dayton, Paul A; Shung, K Kirk; Zhou, Qifa; Jiang, Xiaoning

    2015-05-07

    Imaging of coronary vasa vasorum may lead to assessment of the vulnerable plaque development in diagnosis of atherosclerosis diseases. Dual frequency transducers capable of detection of microbubble super-harmonics have shown promise as a new contrast-enhanced intravascular ultrasound (CE-IVUS) platform with the capability of vasa vasorum imaging. Contrast-to-tissue ratio (CTR) in CE-IVUS imaging can be closely associated with low frequency transmitter performance. In this paper, transducer designs encompassing different transducer layouts, transmitting frequencies, and transducer materials are compared for optimization of imaging performance. In the layout selection, the stacked configuration showed superior super-harmonic imaging compared with the interleaved configuration. In the transmitter frequency selection, a decrease in frequency from 6.5 MHz to 5 MHz resulted in an increase of CTR from 15 dB to 22 dB when receiving frequency was kept constant at 30 MHz. In the material selection, the dual frequency transducer with the lead magnesium niobate-lead titanate (PMN-PT) 1-3 composite transmitter yielded higher axial resolution compared to single crystal transmitters (70 μm compared to 150 μm pulse length). These comparisons provide guidelines for the design of intravascular acoustic angiography transducers.

  6. Design factors of intravascular dual frequency transducers for super-harmonic contrast imaging and acoustic angiography

    NASA Astrophysics Data System (ADS)

    Ma, Jianguo; Martin, K. Heath; Li, Yang; Dayton, Paul A.; Shung, K. Kirk; Zhou, Qifa; Jiang, Xiaoning

    2015-05-01

    Imaging of coronary vasa vasorum may lead to assessment of the vulnerable plaque development in diagnosis of atherosclerosis diseases. Dual frequency transducers capable of detection of microbubble super-harmonics have shown promise as a new contrast-enhanced intravascular ultrasound (CE-IVUS) platform with the capability of vasa vasorum imaging. Contrast-to-tissue ratio (CTR) in CE-IVUS imaging can be closely associated with low frequency transmitter performance. In this paper, transducer designs encompassing different transducer layouts, transmitting frequencies, and transducer materials are compared for optimization of imaging performance. In the layout selection, the stacked configuration showed superior super-harmonic imaging compared with the interleaved configuration. In the transmitter frequency selection, a decrease in frequency from 6.5 MHz to 5 MHz resulted in an increase of CTR from 15 dB to 22 dB when receiving frequency was kept constant at 30 MHz. In the material selection, the dual frequency transducer with the lead magnesium niobate-lead titanate (PMN-PT) 1-3 composite transmitter yielded higher axial resolution compared to single crystal transmitters (70 μm compared to 150 μm pulse length). These comparisons provide guidelines for the design of intravascular acoustic angiography transducers.

  7. Design factors of intravascular dual frequency transducers for super-harmonic contrast imaging and acoustic angiography

    PubMed Central

    Ma, Jianguo; Martin, K. Heath; Li, Yang; Dayton, Paul A.; Shung, K. Kirk; Zhou, Qifa; Jiang, Xiaoning

    2015-01-01

    Imaging of coronary vasa vasorum may lead to assessment of the vulnerable plaque development in diagnosis of atherosclerosis diseases. Dual frequency transducers capable of detection of microbubble super-harmonics have shown promise as a new contrast-enhanced intravascular ultrasound (CE-IVUS) platform with the capability of vasa vasorum imaging. Contrast-to-tissue ratio (CTR) in CE-IVUS imaging can be closely associated with the low frequency transmitter performance. In this paper, transducer designs encompassing different transducer layouts, transmitting frequencies, and transducer materials are compared for optimization of imaging performance. In the layout selection, the stacked configuration showed superior super-harmonic imaging compared with the interleaved configuration. In the transmitter frequency selection, a decrease in frequency from 6.5 MHz to 5 MHz resulted in an increase of CTR from 15 dB to 22 dB when receiving frequency was kept constant at 30 MHz. In the material selection, the dual frequency transducer with the lead magnesium niobate-lead titanate (PMN-PT) 1-3 composite transmitter yielded higher axial resolution compared to single crystal transmitters (70 μm compared to 150 μm pulse length). These comparisons provide guidelines for design of intravascular acoustic angiography transducers. PMID:25856384

  8. ACOUSTICAL IMAGING AND MECHANICAL PROPERTIES OF SOFT ROCK AND MARINE SEDIMENTS

    SciTech Connect

    Thurman E. Scott, Jr., Ph.D.; Musharraf Zaman, Ph.D.; Younane Abousleiman, Ph.D.

    2001-04-01

    The oil and gas industry has encountered significant problems in the production of oil and gas from weak rocks (such as chalks and limestones) and from unconsolidated sand formations. Problems include subsidence, compaction, sand production, and catastrophic shallow water sand flows during deep water drilling. Together these cost the petroleum industry hundreds of millions of dollars annually. The goal of this first quarterly report is to document progress on the project, which provides data on the acoustic imaging and mechanical properties of soft rock and marine sediments. The project is intended to determine the geophysical (acoustic velocities) rock properties of weak, poorly cemented rocks and unconsolidated sands. In some cases these weak formations can create problems for reservoir engineers. For example, it cost Phillips Petroleum 1 billion dollars to repair offshore production facilities damaged during the unexpected subsidence and compaction of the Ekofisk Field in the North Sea (Sulak 1991). Another example is the problem of shallow water flows (SWF) occurring in sands just below the seafloor encountered during deep water drilling operations. In these cases the unconsolidated sands uncontrollably flow up around the annulus of the borehole, resulting in loss of the drill casing. The $150 million loss of the Ursa development project in the U.S. Gulf Coast resulted from an uncontrolled SWF (Furlow 1998a,b; 1999a,b). The first three tasks outlined in the work plan are: (1) obtain rock samples, (2) construct new acoustic platens, (3) calibrate and test the equipment. These have been completed as scheduled. Rock Mechanics Institute researchers at the University of Oklahoma have obtained eight different types of samples for the experimental program. These include: (a) Danian Chalk, (b) Cordoba Cream Limestone, (c) Indiana Limestone, (d) Ekofisk Chalk, (e) Oil Creek Sandstone, (f) unconsolidated Oil Creek sand, and (g) unconsolidated Brazos river sand

  9. Passive element enriched photoacoustic computed tomography (PER PACT) for simultaneous imaging of acoustic propagation properties and light absorption.

    PubMed

    Jose, Jithin; Willemink, Rene G H; Resink, Steffen; Piras, Daniele; van Hespen, J C G; Slump, Cornelis H; Steenbergen, Wiendelt; van Leeuwen, Ton G; Manohar, Srirang

    2011-01-31

    We present a 'hybrid' imaging approach which can image both light absorption properties and acoustic transmission properties of an object in a two-dimensional slice using a computed tomography (CT) photoacoustic imager. The ultrasound transmission measurement method uses a strong optical absorber of small cross-section placed in the path of the light illuminating the sample. This absorber, which we call a passive element acts as a source of ultrasound. The interaction of ultrasound with the sample can be measured in transmission, using the same ultrasound detector used for photoacoustics. Such measurements are made at various angles around the sample in a CT approach. Images of the ultrasound propagation parameters, attenuation and speed of sound, can be reconstructed by inversion of a measurement model. We validate the method on specially designed phantoms and biological specimens. The obtained images are quantitative in terms of the shape, size, location, and acoustic properties of the examined heterogeneities.

  10. Variable ultrasound trigger delay for improved magnetic resonance acoustic radiation force imaging

    NASA Astrophysics Data System (ADS)

    Mougenot, Charles; Waspe, Adam; Looi, Thomas; Drake, James M.

    2016-01-01

    Magnetic resonance acoustic radiation force imaging (MR-ARFI) allows the quantification of microscopic displacements induced by ultrasound pulses, which are proportional to the local acoustic intensity. This study describes a new method to acquire MR-ARFI maps, which reduces the measurement noise in the quantification of displacement as well as improving its robustness in the presence of motion. Two MR-ARFI sequences were compared in this study. The first sequence, ‘variable MSG’, involves switching the polarity of the motion sensitive gradient (MSG) between odd and even image frames. The second sequence, named ‘static MSG’, involves a variable ultrasound trigger delay to sonicate during the first or second MSG for odd and even image frames, respectively. As previously published, the data acquired with a variable MSG required the use of reference data acquired prior to any sonication to process displacement maps. In contrast, data acquired with a static MSG were converted to displacement maps without using reference data acquired prior to the sonication. Displacement maps acquired with both sequences were compared by performing sonications under three different conditions: in a polyacrylamide phantom, in the leg muscle of a freely breathing pig and in the leg muscle of a pig under apnea. The comparison of images acquired at even image frames and odd image frames indicates that the sequence with a static MSG provides a significantly better steady state (p < 0.001 based on a Student’s t-test) than the images acquired with a variable MSG. In addition, no reference data prior to sonication were required to process displacement maps for data acquired with a static MSG. The absence of reference data prior to sonication provided a 41% reduction of the spatial distribution of noise (p < 0.001 based on a Student’s t-test) and reduced the sensitivity to motion for displacements acquired with a static MSG. No significant differences were expected and

  11. Comparison of Inter-Observer Variability and Diagnostic Performance of the Fifth Edition of BI-RADS for Breast Ultrasound of Static versus Video Images.

    PubMed

    Youk, Ji Hyun; Jung, Inkyung; Yoon, Jung Hyun; Kim, Sung Hun; Kim, You Me; Lee, Eun Hye; Jeong, Sun Hye; Kim, Min Jung

    2016-09-01

    Our aim was to compare the inter-observer variability and diagnostic performance of the Breast Imaging Reporting and Data System (BI-RADS) lexicon for breast ultrasound of static and video images. Ninety-nine breast masses visible on ultrasound examination from 95 women 19-81 y of age at five institutions were enrolled in this study. They were scheduled to undergo biopsy or surgery or had been stable for at least 2 y of ultrasound follow-up after benign biopsy results or typically benign findings. For each mass, representative long- and short-axis static ultrasound images were acquired; real-time long- and short-axis B-mode video images through the mass area were separately saved as cine clips. Each image was reviewed independently by five radiologists who were asked to classify ultrasound features according to the fifth edition of the BI-RADS lexicon. Inter-observer variability was assessed using kappa (κ) statistics. Diagnostic performance on static and video images was compared using the area under the receiver operating characteristic curve. No significant difference was found in κ values between static and video images for all descriptors, although κ values of video images were higher than those of static images for shape, orientation, margin and calcifications. After receiver operating characteristic curve analysis, the video images (0.83, range: 0.77-0.87) had higher areas under the curve than the static images (0.80, range: 0.75-0.83; p = 0.08). Inter-observer variability and diagnostic performance of video images were similar to those of static images on breast ultrasonography according to the new edition of BI-RADS.

  12. Application of Video Image Correlation Techniques to the Space Shuttle External Tank Foam Materials

    NASA Technical Reports Server (NTRS)

    Hilburger, Mark W.; Nemeth, Michael P.

    2006-01-01

    Results that illustrate the use of a video-image-correlation-based displacement and strain measurement system to assess the effects of material nonuniformities on the behavior of the sprayed-on foam insulation (SOFI) used for the thermal protection system on the Space Shuttle External Tank are presented. Standard structural verification specimens for the SOFI material with and without cracks and subjected to mechanical or thermal loading conditions were tested. Measured full-field displacements and strains are presented for selected loading conditions to illustrate the behavior of the foam and the viability of the measurement technology. The results indicate that significant strain localization can occur in the foam because of material nonuniformities. In particular, elongated cells in the foam can interact with other geometric or material discontinuities in the foam and develop large-magnitude localized strain concentrations that likely initiate failures. Furthermore, some of the results suggest that continuum mechanics and linear elastic fracture mechanics might not adequately represent the physical behavior of the foam, and failure predictions based on homogeneous linear material models are likely to be inadequate.

  13. Application of Video Image Correlation Techniques to the Space Shuttle External Tank Foam Materials

    NASA Technical Reports Server (NTRS)

    Hilburger, Mark W.; Nemeth, Michael P.

    2005-01-01

    Results that illustrate the use of a video-image-correlation-based displacement and strain measurement system to assess the effects of material nonuniformities on the behavior of the sprayed-on foam insulation (SOFI) used for the thermal protection system on the Space Shuttle External Tank are presented. Standard structural verification specimens for the SOFI material with and without cracks and subjected to mechanical or thermal loading conditions were tested. Measured full-field displacements and strains are presented for selected loading conditions to illustrate the behavior of the foam and the viability of the measurement technology. The results indicate that significant strain localization can occur in the foam because of material nonuniformities. In particular, elongated cells in the foam can interact with other geometric or material discontinuities in the foam and develop large-magnitude localized strain concentrations that likely initiate failures. Furthermore, some of the results suggest that continuum mechanics and linear elastic fracture mechanics might not adequately represent the physical behavior of the foam, and failure predictions based on homogeneous linear material models are likely to be inadequate.

  14. All-optical video-image encryption with enforced security level using independent component analysis

    NASA Astrophysics Data System (ADS)

    Alfalou, A.; Mansour, A.

    2007-10-01

    In the last two decades, wireless communications have been introduced in various applications. However, the transmitted data can be intercepted at any moment by unauthorized people. That could explain why data encryption and secure transmission have gained enormous popularity. In order to secure data transmission, we should pay attention to two aspects: transmission rate and encryption security level. In this paper, we address these two aspects by proposing a new video-image transmission scheme. This new system takes advantage of the high optical transmission rate and of powerful signal-processing tools to secure the transmitted data. The main idea of our approach is to secure the transmitted information at two levels: at the classical level, by using an adaptation of standard optical techniques, and at a second level (spatial diversity), by using independent transmitters. At the second level, a hacker would need to intercept not only one channel but all of them in order to retrieve information. At the receiver, we can easily apply ICA algorithms to decrypt the received signals and retrieve the information.
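
    On the receiver side, the ICA separation step described above can be prototyped with scikit-learn's FastICA (a hedged sketch on synthetic one-dimensional mixtures; the mixing matrix and signals are placeholders, not the optical channels of the paper):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Two flattened "image" signals transmitted over independent channels,
# observed only as unknown linear mixtures (the interception scenario).
s1 = np.sin(np.linspace(0, 50, 4096))
s2 = rng.uniform(-1, 1, 4096)
sources = np.column_stack([s1, s2])
mixing = np.array([[0.7, 0.3], [0.4, 0.6]])      # unknown to the receiver
observations = sources @ mixing.T

# The legitimate receiver, which sees all channels, recovers the sources
# (up to permutation and scale) with ICA.
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(observations)
print(recovered.shape)  # (4096, 2)
```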

  15. Behavior and identification of ephemeral sand dunes at the backshore zone using video images.

    PubMed

    Guimarães, Pedro V; Pereira, Pedro S; Calliari, Lauro J; Ellis, Jean T

    2016-09-01

    The backshore zone is a transitional environment strongly affected by ocean, air and sand movements. On dissipative beaches, the formation of ephemeral dunes over the backshore zone makes a significant contribution to beach morphodynamics and the sediment budget. The aim of this work is to describe a novel method to identify ephemeral dunes in the backshore region and to discuss their morphodynamic behavior. The beach morphology is identified using Argus video imagery, which reveals the behavior of morphologies at Cassino Beach, Rio Grande do Sul, Brazil. Daily images from 2005 to 2007, topographic profiles, meteorological data, and sedimentological parameters were used to determine the frequency and pervasiveness of these features on the backshore. Results indicated that the coastline orientation relative to the dominant NE and E winds and the dissipative morphological beach state favored aeolian sand transport towards the backshore. Prevailing NE winds increase sand transport to the backshore, resulting in the formation of barchan, transverse, and barchanoid-linguoid dunes. Precipitation inhibits aeolian transport and ephemeral dune formation and maintains the existing morphologies during strong SE and SW winds, provided the storm surge is not too high.

  16. A new engineering approach to reveal correlation of physiological change and spontaneous expression from video images

    NASA Astrophysics Data System (ADS)

    Yang, Fenglei; Hu, Sijung; Ma, Xiaoyun; Hassan, Harnani; Wei, Dongqing

    2015-03-01

    Spontaneous expression is associated with physiological states, i.e., heart rate, respiration, oxygen saturation (SpO2%), and heart rate variability (HRV). To date, there has been little effort to explore the correlation between physiological change and spontaneous expression. This study examines how spontaneous expression is associated with physiological changes, using an approved protocol and videos from the Denver Intensity of Spontaneous Facial Action Database. Unlike posed expressions, motion artefact in spontaneous expression is one of the inevitable challenges to be overcome in the study. To obtain physiological signs from a region of interest (ROI), a new engineering approach is developed that combines an artefact-reduction method, 3D active appearance model (AAM) based tracking, and affine-transformation-based alignment with opto-physiological-model-based imaging photoplethysmography. A statistical association space is also used to interpret the correlation of spontaneous expressions and physiological states, including their probability densities, by means of a Gaussian Mixture Model. The present work opens a new avenue for studying associations between spontaneous expressions and physiological states, with prospective applications in physiological and psychological assessment.
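    Editorial aside: the imaging-photoplethysmography step referred to above can be illustrated with a minimal sketch that estimates a pulse rate from the mean green-channel intensity of a tracked facial ROI (the AAM tracking and opto-physiological model are not reproduced here). The frame rate, ROI coordinates, filter band and variable names are assumptions for illustration.

      # Sketch: pulse-rate estimate from the mean green-channel signal of an ROI.
      # `frames` is assumed to be a (n_frames, H, W, 3) array and `fps` the
      # video frame rate; both are illustrative assumptions.
      import numpy as np
      from scipy.signal import butter, filtfilt

      def pulse_rate_bpm(frames, fps, roi=(slice(100, 160), slice(120, 200))):
          green = frames[:, roi[0], roi[1], 1].mean(axis=(1, 2))  # raw iPPG trace
          green = green - green.mean()
          b, a = butter(3, [0.7 / (fps / 2), 3.5 / (fps / 2)], btype="band")
          filtered = filtfilt(b, a, green)                        # ~42-210 bpm band
          spectrum = np.abs(np.fft.rfft(filtered))
          freqs = np.fft.rfftfreq(filtered.size, d=1.0 / fps)
          return 60.0 * freqs[np.argmax(spectrum)]                # beats per minute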

  17. Optimized Graph Learning Using Partial Tags and Multiple Features for Image and Video Annotation.

    PubMed

    Song, Jingkuan; Gao, Lianli; Nie, Feiping; Shen, Heng Tao; Yan, Yan; Sebe, Nicu

    2016-11-01

    In multimedia annotation, due to the time constraints and the tediousness of manual tagging, it is quite common to utilize both tagged and untagged data to improve the performance of supervised learning when only limited tagged training data are available. This is often done by adding a geometry-based regularization term in the objective function of a supervised learning model. In this case, a similarity graph is indispensable to exploit the geometrical relationships among the training data points, and the graph construction scheme essentially determines the performance of these graph-based learning algorithms. However, most of the existing works construct the graph empirically and are usually based on a single feature without using the label information. In this paper, we propose a semi-supervised annotation approach by learning an optimized graph (OGL) from multi-cues (i.e., partial tags and multiple features), which can more accurately embed the relationships among the data points. Since OGL is a transductive method and cannot deal with novel data points, we further extend our model to address the out-of-sample issue. Extensive experiments on image and video annotation show the consistent superiority of OGL over the state-of-the-art methods.
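    Editorial aside: for readers unfamiliar with the graph-regularized setup this abstract builds on, below is a minimal harmonic label-propagation sketch on a fixed similarity graph. The paper's contribution is learning the graph jointly from partial tags and multiple features, which this toy snippet does not do; the RBF kernel, its width, and all names are assumptions.

      # Sketch: harmonic label propagation on a fixed RBF similarity graph.
      # X: (n, d) features; y: (n,) integer labels with -1 for untagged samples.
      import numpy as np

      def propagate_labels(X, y, gamma=1.0):
          d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
          W = np.exp(-gamma * d2)                  # similarity graph
          np.fill_diagonal(W, 0.0)
          L = np.diag(W.sum(axis=1)) - W           # graph Laplacian
          labeled = y >= 0
          classes = np.unique(y[labeled])
          Y = (y[labeled, None] == classes[None, :]).astype(float)
          # Harmonic solution: solve L_uu f_u = -L_ul Y for the untagged nodes.
          Luu = L[~labeled][:, ~labeled]
          Lul = L[~labeled][:, labeled]
          f_u = np.linalg.solve(Luu, -Lul @ Y)
          out = y.copy()
          out[~labeled] = classes[f_u.argmax(axis=1)]
          return out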

  18. Comparison of ultrasound B-mode, strain imaging, acoustic radiation force impulse displacement and shear wave velocity imaging using real time clinical breast images

    NASA Astrophysics Data System (ADS)

    Manickam, Kavitha; Machireddy, Ramasubba Reddy; Raghavan, Bagyam

    2016-04-01

    It has been observed that many pathological processes increase the elastic modulus of soft tissue compared to normal tissue. In order to image tissue stiffness using ultrasound, a mechanical compression is applied to the tissues of interest and the local tissue deformation is measured. Based on the mechanical excitation, ultrasound stiffness imaging methods are classified as compression (strain) imaging, which is based on external compression, and Acoustic Radiation Force Impulse (ARFI) imaging, which is based on the force generated by focused ultrasound. When ultrasound is focused in tissue, a shear wave is generated in the lateral direction, and the shear wave velocity increases with the stiffness of the tissue. The work presented in this paper investigates strain elastography and ARFI imaging in clinical cancer diagnostics using real-time patient data. Ultrasound B-mode imaging, strain imaging, ARFI displacement imaging, and ARFI shear wave velocity imaging were conducted on 50 patients (31 benign and 23 malignant categories) using a Siemens S2000 machine. True modulus contrast values were calculated from the measured shear wave velocities. For ultrasound B-mode, ARFI displacement imaging, and strain imaging, the observed image contrast and Contrast-to-Noise Ratio were calculated for benign and malignant cancers. Observed contrast values were compared against the true modulus contrast values calculated from shear wave velocity imaging. In addition, Student's unpaired t-test was conducted for all four techniques and box plots are presented. Results show that strain imaging is better for malignant cancers, whereas ARFI imaging is superior to strain imaging and B-mode for representing benign lesions.
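    Editorial aside: two quantities used in comparisons of this kind can be written compactly. Under the usual incompressible, linear-elastic assumption, the Young's modulus follows from the shear wave speed as E = 3*rho*c^2, and the contrast-to-noise ratio is computed from the mean and variance inside and outside the lesion. The sketch below is illustrative; densities, units and names are assumptions.

      # Sketch: modulus contrast from shear wave speeds and CNR from ROI statistics.
      # Inputs (speeds in m/s, ROIs as numpy arrays) are illustrative assumptions.
      import numpy as np

      def youngs_modulus_kpa(c_shear, rho=1000.0):
          """E = 3*rho*c^2 for an incompressible, linear-elastic medium (kPa)."""
          return 3.0 * rho * c_shear ** 2 / 1000.0

      def modulus_contrast(c_lesion, c_background):
          return youngs_modulus_kpa(c_lesion) / youngs_modulus_kpa(c_background)

      def cnr(roi_in, roi_out):
          """Contrast-to-noise ratio between lesion and background ROIs."""
          return abs(roi_in.mean() - roi_out.mean()) / np.sqrt(roi_in.var() + roi_out.var())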

  19. Preliminary study of copper oxide nanoparticles acoustic and magnetic properties for medical imaging

    NASA Astrophysics Data System (ADS)

    Perlman, Or; Weitz, Iris S.; Azhari, Haim

    2015-03-01

    The implementation of multimodal imaging in medicine is highly beneficial, as different physical properties may provide complementary information, augmented detection ability, and diagnosis verification. Nanoparticles have recently been used as contrast agents for various imaging modalities. Their significant advantage over conventional large-scale contrast agents is the ability of detection at early stages of the disease, being less prone to obstacles on their path to the target region, and possible conjugation to therapeutics. Copper ions play an essential role in human health. They are used as a cofactor for multiple key enzymes involved in various fundamental biochemical processes. Extremely small copper oxide nanoparticles (CuO-NPs) are readily soluble in water with high colloidal stability, yielding high bioavailability. The goal of this study was to examine the magnetic and acoustic characteristics of CuO-NPs in order to evaluate their potential to serve as a contrast imaging agent for both MRI and ultrasound. CuO-NPs 7 nm in diameter were synthesized by a hot-solution method. The particles were scanned using a 9.4 T MRI and demonstrated a concentration-dependent T1 relaxation time shortening phenomenon. In addition, it was revealed that CuO-NPs can be detected using ultrasonic B-scan imaging. Finally, speed-of-sound based ultrasonic computed tomography was applied and showed that CuO-NPs can be clearly imaged. In conclusion, the preliminary results obtained positively indicate that CuO-NPs may be imaged by both MRI and ultrasound. The results motivate additional in-vivo studies, in which the clinical utility of fused images derived from both modalities for diagnosis improvement will be studied.
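    Editorial aside: the concentration-dependent T1 shortening reported above is conventionally summarized by a linear relaxivity model (the relaxivity r_1 of CuO-NPs is not given in the abstract and is not assumed here):

      \frac{1}{T_1(C)} = \frac{1}{T_1(0)} + r_1\, C

    where C is the agent concentration, T_1(0) the relaxation time without agent, and r_1 the longitudinal relaxivity.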

  20. Spatial Prediction Filtering of Acoustic Clutter and Random Noise in Medical Ultrasound Imaging.

    PubMed

    Shin, Junseob; Huang, Lianjie

    2017-02-01

    One of the major challenges in array-based medical ultrasound imaging is the image quality degradation caused by sidelobes and off-axis clutter, which is an inherent limitation of the conventional delay-and-sum (DAS) beamforming operating on a finite aperture. Ultrasound image quality is further degraded in imaging applications involving strong tissue attenuation and/or low transmit power. In order to effectively suppress acoustic clutter from off-axis targets and random noise in a robust manner, we introduce in this paper a new adaptive filtering technique called frequency-space (F-X) prediction filtering or FXPF, which was first developed in seismic imaging for random noise attenuation. Seismologists developed FXPF based on the fact that linear and quasilinear events or wavefronts in the time-space (T-X) domain are manifested as a superposition of harmonics in the frequency-space (F-X) domain, which can be predicted using an auto-regressive (AR) model. We describe the FXPF technique as a spectral estimation or a direction-of-arrival problem, and explain why adaptation of this technique into medical ultrasound imaging is beneficial. We apply our new technique to simulated and tissue-mimicking phantom data. Our results demonstrate that FXPF achieves CNR improvements of 26% in simulated noise-free anechoic cyst, 109% in simulated anechoic cyst contaminated with random noise of 15 dB SNR, and 93% for experimental anechoic cyst from a custom-made tissue-mimicking phantom. Our findings suggest that FXPF is an effective technique to enhance ultrasound image contrast and has potential to improve the visualization of clinically important anatomical structures and diagnosis of diseased conditions.
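    Editorial aside: to make the F-X idea concrete, the sketch below (not the authors' implementation) transforms each channel line to the frequency domain and, at each temporal frequency, fits a short autoregressive predictor across the spatial (channel) axis; the predictable part corresponds to laterally coherent wavefronts and the residual is treated as clutter or noise. The one-sided predictor, its order, and all names are assumptions.

      # Sketch of F-X prediction filtering: per-frequency AR prediction across
      # channels. `data` is a (n_samples, n_channels) array; assumes
      # n_channels > order.
      import numpy as np

      def fx_prediction_filter(data, order=4):
          spectra = np.fft.rfft(data, axis=0)              # T-X  ->  F-X
          filtered = np.zeros_like(spectra)
          n_ch = data.shape[1]
          for k in range(spectra.shape[0]):                # loop over temporal freqs
              x = spectra[k]
              # Predict x[n] from the `order` previous channels (least squares).
              A = np.array([x[n - order:n] for n in range(order, n_ch)])
              b = x[order:]
              coef, *_ = np.linalg.lstsq(A, b, rcond=None)
              pred = x.copy()
              pred[order:] = A @ coef                      # keep predictable part
              filtered[k] = pred
          return np.fft.irfft(filtered, n=data.shape[0], axis=0)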

  1. Integrated homeland security system with passive thermal imaging and advanced video analytics

    NASA Astrophysics Data System (ADS)

    Francisco, Glen; Tillman, Jennifer; Hanna, Keith; Heubusch, Jeff; Ayers, Robert

    2007-04-01

    A complete detection, management, and control security system is absolutely essential to preempting criminal and terrorist assaults on key assets and critical infrastructure. According to Tom Ridge, former Secretary of the US Department of Homeland Security, "Voluntary efforts alone are not sufficient to provide the level of assurance Americans deserve and they must take steps to improve security." Further, it is expected that Congress will mandate private sector investment of over $20 billion in infrastructure protection between 2007 and 2015, which is incremental to funds currently being allocated to key sites by the Department of Homeland Security. Nearly 500,000 individual sites have been identified by the US Department of Homeland Security as critical infrastructure sites that would suffer severe and extensive damage if a security breach should occur. In fact, one major breach in any of 7,000 critical infrastructure facilities threatens more than 10,000 people. And one major breach in any of 123 facilities, identified as "most critical" among the 500,000, threatens more than 1,000,000 people. Current visible, night-vision, or near-infrared imaging technology alone has limited foul-weather viewing capability, poor nighttime performance, and limited nighttime range. And many systems today yield excessive false alarms, are managed by fatigued operators, are unable to manage the voluminous data captured, or lack the ability to pinpoint where an intrusion occurred. In our 2006 paper, "Critical Infrastructure Security Confidence Through Automated Thermal Imaging", we showed how a highly effective security solution can be developed by integrating what are now available "next-generation technologies", which include: thermal imaging for the highly effective detection of intruders in the dark of night and in challenging weather conditions at the sensor imaging level (we refer to this as the passive thermal sensor level detection building block); automated software detection

  2. Imaging of human tooth using ultrasound based chirp-coded nonlinear time reversal acoustics.

    PubMed

    Dos Santos, Serge; Prevorovsky, Zdenek

    2011-08-01

    Human tooth imaging sonography is investigated experimentally with an acousto-optic noncoupling set-up based on the chirp-coded nonlinear time reversal acoustic concept. The complexity of the tooth internal structure (enamel-dentine interface, cracks between internal tubules) is analyzed by adapting nonlinear elastic wave spectroscopy (NEWS) with the objective of damage tomography. Optimization of excitations using intrinsic symmetries, such as time reversal (TR) invariance, reciprocity, and correlation properties, is then proposed and implemented experimentally. The proposed medical application of this TR-NEWS approach is implemented on a third molar human tooth and constitutes an alternative to noncoupling echodentography techniques. A 10 MHz bandwidth ultrasonic instrumentation has been developed, including a laser vibrometer and a 20 MHz contact piezoelectric transducer. The calibrated chirp-coded TR-NEWS imaging of the tooth is obtained using symmetrized excitations, pre- and post-signal processing, and the highly sensitive 14-bit resolution TR-NEWS instrumentation previously calibrated. A nonlinear signature arising from the symmetry properties is observed experimentally in the tooth using this bi-modal TR-NEWS imaging before and after the focusing induced by the time-compression process. The TR-NEWS polar B-scan of the tooth is described and suggested as a potential application for modern echodentography. It constitutes the basis of self-consistent harmonic imaging sonography for monitoring crack propagation in the dentine, which governs the structural health of the human tooth.

  3. Full-Wave Iterative Image Reconstruction in Photoacoustic Tomography With Acoustically Inhomogeneous Media

    PubMed Central

    Huang, Chao; Wang, Kun; Nie, Liming; Wang, Lihong V.; Anastasio, Mark A.

    2014-01-01

    Existing approaches to image reconstruction in photoacoustic computed tomography (PACT) with acoustically heterogeneous media are limited to weakly varying media, are computationally burdensome, and/or cannot effectively mitigate the effects of measurement data incompleteness and noise. In this work, we develop and investigate a discrete imaging model for PACT that is based on the exact photoacoustic (PA) wave equation and facilitates the circumvention of these limitations. A key contribution of the work is the establishment of a procedure to implement a matched forward and backprojection operator pair associated with the discrete imaging model, which permits application of a wide-range of modern image reconstruction algorithms that can mitigate the effects of data incompleteness and noise. The forward and backprojection operators are based on the k-space pseudospectral method for computing numerical solutions to the PA wave equation in the time domain. The developed reconstruction methodology is investigated by use of both computer-simulated and experimental PACT measurement data. PMID:23529196
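    Editorial aside: the role of a matched forward/backprojection pair can be illustrated with a generic least-squares solver. In the sketch below, `forward` and `adjoint` are placeholders for the k-space pseudospectral operators described in the abstract (here replaced by a small dense matrix purely so the snippet runs); wrapping them in a SciPy LinearOperator lets any iterative solver, such as LSQR, reconstruct the image.

      # Sketch: iterative reconstruction from a matched operator pair (A, A^T).
      import numpy as np
      from scipy.sparse.linalg import LinearOperator, lsqr

      n_pixels, n_meas = 1024, 2048
      H = np.random.default_rng(0).standard_normal((n_meas, n_pixels)) * 1e-2

      def forward(x):                 # placeholder forward projection
          return H @ x

      def adjoint(y):                 # matched backprojection (exact adjoint)
          return H.T @ y

      A = LinearOperator((n_meas, n_pixels), matvec=forward, rmatvec=adjoint)
      phantom = np.zeros(n_pixels)
      phantom[300:360] = 1.0                         # stand-in object
      data = forward(phantom)                        # stand-in measurements
      recon = lsqr(A, data, iter_lim=50)[0]          # regularized variants possible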

  4. Evaluating the intensity of the acoustic radiation force impulse (ARFI) in intravascular ultrasound (IVUS) imaging: Preliminary in vitro results.

    PubMed

    Shih, Cho-Chiang; Lai, Ting-Yu; Huang, Chih-Chung

    2016-08-01

    The ability to measure the elastic properties of plaques and vessels is significant in clinical diagnosis, particularly for detecting a vulnerable plaque. A novel concept of combining intravascular ultrasound (IVUS) imaging and acoustic radiation force impulse (ARFI) imaging has recently been proposed. This method has potential in elastography for distinguishing between the stiffness of plaques and arterial vessel walls. However, the intensity of the acoustic radiation force requires calibration as a standard for the further development of an ARFI-IVUS imaging device that could be used in clinical applications. In this study, a dual-frequency transducer with 11 MHz and 48 MHz elements was used to measure the association between the biological tissue displacement and the applied acoustic radiation force. The output intensity of the acoustic radiation force generated by the pushing element ranged from 1.8 to 57.9 mW/cm^2, as measured using a calibrated hydrophone. The results reveal that all of the acoustic intensities produced by the transducer in the experiments were within the limits specified by FDA regulations and could still displace the biological tissues. Furthermore, blood clots with different hematocrits, which have elastic properties similar to the lipid pool of plaques, with stiffness ranging from 0.5 to 1.9 kPa, could be displaced from 1 to 4 μm, whereas the porcine arteries with stiffness ranging from 120 to 291 kPa were displaced from 0.4 to 1.3 μm when an acoustic intensity of 57.9 mW/cm^2 was used. The in vitro ARFI images of the artery with a blood clot and artificial arteriosclerosis showed a clear distinction of the stiffness distributions of the vessel wall. All the results reveal that ARFI-IVUS imaging has the potential to distinguish the elastic properties of plaques and vessels. Moreover, the acoustic intensity used in ARFI imaging has been experimentally quantified. Although the size of this two-element transducer is unsuitable for IVUS imaging, the

  5. Evaluation of graft stiffness using acoustic radiation force impulse imaging after living donor liver transplantation.

    PubMed

    Ijichi, Hideki; Shirabe, Ken; Matsumoto, Yoshihiro; Yoshizumi, Tomoharu; Ikegami, Toru; Kayashima, Hiroto; Morita, Kazutoyo; Toshima, Takeo; Mano, Yohei; Maehara, Yoshihiko

    2014-11-01

    Acoustic radiation force impulse (ARFI) imaging is an ultrasound-based modality to evaluate tissue stiffness using short-duration acoustic pulses in the region of interest. Virtual touch tissue quantification (VTTQ), which is an implementation of ARFI, allows quantitative assessment of tissue stiffness. Twenty recipients who underwent living donor liver transplantation (LDLT) for chronic liver diseases were enrolled. Graft types included left lobes with the middle hepatic vein and caudate lobes (n = 11), right lobes (n = 7), and right posterior segments (n = 2). They underwent measurement of graft VTTQ during the early post-LDLT period. The VTTQ value level rose after LDLT, reaching a maximum level on postoperative day 4. There were no significant differences in the VTTQ values between the left and right lobe graft types. Significant correlations were observed between the postoperative maximum value of VTTQ and graft volume-to-recipient standard liver volume ratio, portal venous flow to graft volume ratio, and post-LDLT portal venous pressure. The postoperative maximum serum alanine aminotransferase level and ascites fluid production were also significantly correlated with VTTQ. ARFI may be a useful diagnostic tool for the noninvasive and quantitative evaluation of the severity of graft dysfunction after LDLT.

  6. Imaging of 3D Ocean Turbulence Microstructure Using Low Frequency Acoustic Waves

    NASA Astrophysics Data System (ADS)

    Minakov, Alexander; Kolyukhin, Dmitriy; Keers, Henk

    2015-04-01

    In the past decade, the technique of imaging the ocean structure with low-frequency signals (Hz), produced by air-guns and typically employed during conventional multichannel seismic data acquisition, has emerged. The method is based on extracting and stacking the acoustic energy back-scattered by the ocean temperature and salinity micro- and meso-structure (1 - 100 meters). However, a good understanding of the link between the scattered wavefield utilized in seismic oceanography and physical processes in the ocean is still lacking. We describe the theory and numerical implementation of a 3D time-dependent stochastic model of ocean turbulence. The velocity and temperature are simulated as homogeneous Gaussian isotropic random fields with the Kolmogorov-Obukhov energy spectrum in the inertial subrange. A numerical modeling technique is employed to sample realizations of random fields with a given spatial-temporal spectral tensor. The model used is shown to be representative over a wide range of scales. Using this model, we provide a framework to solve the forward and inverse acoustic scattering problems using marine seismic data. Our full-waveform inversion method is based on the ray-Born approximation, which is specifically suitable for the modelling of small velocity perturbations in the ocean. This is illustrated by showing a good match between synthetic seismograms computed using ray-Born and synthetic seismograms produced with a more computationally expensive finite-difference method.
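    Editorial aside: one standard way to sample a homogeneous Gaussian random field of this kind (not necessarily the authors' sampler) is spectral synthesis: shape white noise in the wavenumber domain with the square root of the target spectral density and inverse-FFT. The grid size, the handling of the k = 0 mode, and the use of the inertial-range exponent everywhere are illustrative assumptions for a 3-D isotropic field.

      # Sketch: 3-D Gaussian random field with a Kolmogorov-type inertial-range
      # spectrum (E(k) ~ k^{-5/3}, i.e. 3-D spectral density ~ k^{-11/3}),
      # generated by spectral filtering of white noise.
      import numpy as np

      def kolmogorov_field(n=64, seed=0):
          rng = np.random.default_rng(seed)
          k = np.fft.fftfreq(n)
          kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
          kmag = np.sqrt(kx**2 + ky**2 + kz**2)
          kmag[0, 0, 0] = np.inf                   # suppress the mean (k = 0)
          amplitude = kmag ** (-11.0 / 6.0)        # sqrt of the k^{-11/3} spectrum
          noise = np.fft.fftn(rng.standard_normal((n, n, n)))
          field = np.fft.ifftn(noise * amplitude).real
          return field / field.std()               # unit-variance fluctuation field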

  7. On the efficiency of image completion methods for intra prediction in video coding with large block structures

    NASA Astrophysics Data System (ADS)

    Doshkov, Dimitar; Jottrand, Oscar; Wiegand, Thomas; Ndjiki-Nya, Patrick

    2013-02-01

    Intra prediction is a fundamental tool in video coding with a hybrid block-based architecture. Recent investigations have shown that one of the most beneficial elements for higher compression performance in high-resolution videos is the incorporation of larger block structures. Thus, in this work, we investigate the performance of novel intra prediction modes based on different image completion techniques in a new video coding scheme with large block structures. Image completion methods address the fact that high-frequency image regions yield high coding costs when using classical H.264/AVC prediction modes. This problem is tackled by investigating the incorporation of several intra predictors using the concept of the Laplace partial differential equation (PDE), Least-Squares (LS) based linear prediction, and the Auto-Regressive (AR) model. A major aspect of this article is the evaluation of the coding performance in a quantitative (i.e., coding efficiency) manner. Experimental results show significant improvements in compression (up to 7.41%) by integrating the LS-based linear intra prediction.
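    Editorial aside: a least-squares linear intra predictor of the kind mentioned above can be sketched as follows. The 3-pixel causal template (left, top, top-left), the training-window size, and the assumption that the block is not on the top/left image border are illustrative choices, not the paper's configuration.

      # Sketch: least-squares linear intra prediction from causal neighbors.
      # `img` is a 2-D array of already-reconstructed pixels; (y0, x0) is the
      # top-left corner of the block to predict (assumed away from the border).
      import numpy as np

      def ls_intra_predict(img, y0, x0, block=16, train=16):
          # Collect training pairs from the causal window above/left of the block.
          rows, targets = [], []
          for y in range(max(1, y0 - train), y0):
              for x in range(max(1, x0 - train), x0 + block):
                  rows.append([img[y, x - 1], img[y - 1, x], img[y - 1, x - 1]])
                  targets.append(img[y, x])
          w, *_ = np.linalg.lstsq(np.array(rows, float), np.array(targets, float),
                                  rcond=None)
          # Predict the block in raster order, reusing already-predicted samples.
          pred = img.astype(float).copy()
          for y in range(y0, y0 + block):
              for x in range(x0, x0 + block):
                  neigh = np.array([pred[y, x - 1], pred[y - 1, x], pred[y - 1, x - 1]])
                  pred[y, x] = neigh @ w
          return pred[y0:y0 + block, x0:x0 + block]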

  8. Simultaneous bilateral real-time 3-d transcranial ultrasound imaging at 1 MHz through poor acoustic windows.

    PubMed

    Lindsey, Brooks D; Nicoletto, Heather A; Bennett, Ellen R; Laskowitz, Daniel T; Smith, Stephen W

    2013-04-01

    Ultrasound imaging has been proposed as a rapid, portable alternative imaging modality to examine stroke patients in pre-hospital or emergency room settings. However, in performing transcranial ultrasound examinations, 8%-29% of patients in a general population may present with window failure, in which case it is not possible to acquire clinically useful sonographic information through the temporal bone acoustic window. In this work, we describe the technical considerations, design and fabrication of low-frequency (1.2 MHz), large aperture (25.3 mm) sparse matrix array transducers for 3-D imaging in the event of window failure. These transducers are integrated into a system for real-time 3-D bilateral transcranial imaging (the ultrasound brain helmet), and color flow imaging capabilities at 1.2 MHz are directly compared with arrays operating at 1.8 MHz in a flow phantom with attenuation comparable to the in vivo case. Contrast-enhanced imaging allowed visualization of arteries of the Circle of Willis in 5 of 5 subjects and 8 of 10 sides of the head despite probe placement outside of the acoustic window. Results suggest that this type of transducer may allow acquisition of useful images either in individuals with poor windows or outside of the temporal acoustic window in the field.

  9. Short term exposure to attractive and muscular singers in music video clips negatively affects men's body image and mood.

    PubMed

    Mulgrew, K E; Volcevski-Kostas, D

    2012-09-01

    Viewing idealized images has been shown to reduce men's body satisfaction; however no research has examined the impact of music video clips. This was the first study to examine the effects of exposure to muscular images in music clips on men's body image, mood and cognitions. Ninety men viewed 5 min of clips containing scenery, muscular or average-looking singers, and completed pre- and posttest measures of mood and body image. Appearance schema activation was also measured. Men exposed to the muscular clips showed poorer posttest levels of anger, body and muscle tone satisfaction compared to men exposed to the scenery or average clips. No evidence of schema activation was found, although potential problems with the measure are noted. These preliminary findings suggest that even short term exposure to music clips can produce negative effects on men's body image and mood.

  10. High-fidelity video and still-image communication based on spectral information: natural vision system and its applications

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Masahiro; Haneishi, Hideaki; Fukuda, Hiroyuki; Kishimoto, Junko; Kanazawa, Hiroshi; Tsuchida, Masaru; Iwama, Ryo; Ohyama, Nagaaki

    2006-01-01

    In addition to the great advancement of high-resolution and large-screen imaging technology, the issue of color is now receiving considerable attention as an aspect distinct from image resolution. It is difficult to reproduce the original color of a subject in conventional imaging systems, and this obstructs the application of visual communication systems in telemedicine, electronic commerce, and digital museums. To break through the limitation of conventional RGB 3-primary systems, the "Natural Vision" project aims at an innovative video and still-image communication technology with high-fidelity color reproduction capability, based on spectral information. This paper summarizes the results of the NV project, including the development of multispectral and multiprimary imaging technologies and experimental investigations of applications to medicine, digital archives, electronic commerce, and computer graphics.

  11. Near-Field Acoustical Imaging using Lateral Bending Mode of Atomic Force Microscope Cantilevers

    NASA Astrophysics Data System (ADS)

    Caron, A.; Rabe, U.; Rödel, J.; Arnold, W.

    Scanning probe microscopy techniques enable one to investigate surface properties such as contact stiffness and friction between the probe tip and a sample with nm resolution. So far, the bending and torsional eigenmodes of an atomic force microscope cantilever have been used to image variations of elasticity and shear elasticity, respectively. Such images are near-field images with the resolution given by the contact radius, typically between 10 nm and 50 nm. We show that the flexural modes of a cantilever oscillating in the width direction and parallel to the sample surface can also be used for imaging. In addition to the dominant in-plane component of the oscillation, the lateral modes exhibit a vertical component as well, provided there is an asymmetry in the cross-section of the cantilever or in its suspension. The out-of-plane deflection renders the lateral modes detectable by the optical position sensors used in atomic force microscopes. We studied cracks which were generated by Vickers indents in submicro- and nanocrystalline ZrO2. Images of the lateral contact stiffness were obtained by vibrating the cantilever close to a contact-resonance frequency. A change in contact stiffness causes a shift of the resonant frequency and hence a change of the cantilever vibration amplitude. The lateral contact-stiffness images close to the crack faces display a contrast that we attribute to altered elastic properties indicating a process zone. This could be caused by a stress-induced phase transformation during crack propagation. Using the contact mode of an atomic force microscope, we measured the crack-opening displacement as a function of distance from the crack tip, and we determined the crack-tip toughness Ktip. Furthermore, K1c was inferred from the length of radial cracks of Vickers indents that were measured using classical scanning acoustic microscopy.

  12. Sentential influences on acoustic-phonetic processing: A Granger causality analysis of multimodal imaging data

    PubMed Central

    Gow, David W.; Olson, Bruna B.

    2015-01-01

    Sentential context influences the way that listeners identify phonetically ambiguous or perceptually degraded speech sounds. Unfortunately, inherent inferential limitations on the interpretation of behavioral or BOLD imaging results make it unclear whether context influences perceptual processing directly, or acts at a post-perceptual decision stage. In this paper, we use Kalman-filter-enabled Granger causality analysis of MR-constrained MEG/EEG data to distinguish between these possibilities. Using a retrospective probe verification task, we found that sentential context strongly affected the interpretation of words with ambiguous initial voicing (e.g. DUSK-TUSK). This behavioral context effect coincided with increased influence by brain regions associated with lexical representation on regions associated with acoustic-phonetic processing. These results support an interactive view of sentence context effects on speech perception. PMID:27595118
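    Editorial aside: a plain bivariate Granger-causality test of the kind underlying this analysis (without the Kalman-filter state-space machinery or MR constraints used in the paper) can be sketched as below; the autoregressive order, the F-statistic formulation, and all names are assumptions.

      # Sketch: bivariate Granger causality via restricted vs. full AR fits.
      # x, y are 1-D time series; `order` is the AR model order.
      import numpy as np

      def granger_f_stat(x, y, order=5):
          """F statistic for 'x Granger-causes y' (larger = stronger evidence)."""
          n = len(y)
          rows_full, rows_restr, target = [], [], []
          for t in range(order, n):
              rows_restr.append(y[t - order:t])
              rows_full.append(np.concatenate([y[t - order:t], x[t - order:t]]))
              target.append(y[t])
          target = np.array(target)

          def rss(rows):
              A = np.column_stack([np.ones(len(rows)), np.array(rows)])
              coef, *_ = np.linalg.lstsq(A, target, rcond=None)
              r = target - A @ coef
              return float(r @ r)

          rss_r, rss_f = rss(rows_restr), rss(rows_full)
          df1, df2 = order, len(target) - 2 * order - 1
          return ((rss_r - rss_f) / df1) / (rss_f / df2)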

  13. Acoustic characterization of ultrasound contrast microbubbles and echogenic liposomes: Applications to imaging and drug-delivery

    NASA Astrophysics Data System (ADS)

    Paul, Shirshendu

    Micron- to nanometer-sized ultrasound agents, like encapsulated microbubbles and echogenic liposomes (ELIPs), are being actively developed for possible clinical implementations in diagnostic imaging and ultrasound-mediated drug/gene delivery. The primary objective of this thesis is to characterize the acoustic behavior of, and the ultrasound-mediated contents release from, these contrast agents for developing multi-functional ultrasound contrast agents. Subharmonic imaging using contrast microbubbles can improve image quality by providing a higher signal-to-noise ratio. However, the design and development of contrast microbubbles with favorable subharmonic behavior requires accurate mathematical models capable of predicting their nonlinear dynamics. To this goal, 'strain-softening' viscoelastic interfacial models of the encapsulation were developed and subsequently utilized to simulate the dynamics of encapsulated microbubbles. A hierarchical, two-pronged modeling approach (a model is applied to one set of experimental data to obtain the model parameters, i.e., material characterization, and then the model is validated against a second independent experiment) is demonstrated in this thesis for two lipid-coated (Sonazoid and Definity) and a few polymer (polylactide) encapsulated microbubbles. The proposed models were successful in predicting several experimentally observed behaviors, e.g., low subharmonic thresholds and "compression-only" radial oscillations. Results indicate that neglecting the polydisperse size distribution of contrast agent suspensions, a common practice in the literature, can lead to inaccurate results. In vitro experimental investigation of the dependence of the subharmonic response of these microbubbles on the ambient pressure is also in conformity with recent numerical investigations, showing either an increase or a decrease under appropriate excitation conditions. Experimental characterization of the ELIPs and polymersomes was performed
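    Editorial aside: the general interfacial-rheology form used in this family of encapsulated-bubble models can be written as below; the specific strain-softening constitutive law for the effective surface tension gamma(R) is defined in the thesis and is not reproduced here, and compressibility corrections are omitted.

      \rho\left(R\ddot{R} + \tfrac{3}{2}\dot{R}^2\right)
        = P_G(R) - P_0 - P_{ac}(t)
          - \frac{4\mu\dot{R}}{R}
          - \frac{2\gamma(R)}{R}
          - \frac{4\kappa_s\dot{R}}{R^2}

    where R(t) is the bubble radius, P_G the gas pressure, P_0 the ambient pressure, P_ac the acoustic driving, mu the liquid viscosity, gamma(R) the radius-dependent interfacial tension of the encapsulation, and kappa_s its dilatational surface viscosity.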

  14. The Automatically Triggered Video or Imaging Station (ATVIS): An Inexpensive Way to Catch Geomorphic Events on Camera

    NASA Astrophysics Data System (ADS)

    Wickert, A. D.

    2010-12-01

    To understand how single events can affect landscape change, we must catch the landscape in the act. Direct observations are rare and often dangerous. While video is a good alternative, commercially-available video systems for field installation cost $11,000, weigh ~100 pounds (45 kg), and shoot 640x480 pixel video at 4 frames per second. This is the same resolution as a cheap point-and-shoot camera, with a frame rate that is nearly an order of magnitude worse. To overcome these limitations of resolution, cost, and portability, I designed and built a new observation station. This system, called ATVIS (Automatically Triggered Video or Imaging Station), costs $450-500 and weighs about 15 pounds. It can take roughly 3 hours of 1280x720 pixel video, 6.5 hours of 640x480 video, or 98,000 1600x1200 pixel photos (one photo every 7 seconds for 8 days). The design calls for a simple Canon point-and-shoot camera fitted with custom firmware that allows 5V pulses through its USB cable to trigger it to take a picture or to initiate or stop video recording. These pulses are provided by a programmable microcontroller that can take input from either sensors or a data logger. The design is easily modifiable to a variety of camera and sensor types, and can also be used for continuous time-lapse imagery. We currently have prototypes set up at a gully near West Bijou Creek on the Colorado high plains and at tributaries to Marble Canyon in northern Arizona. Hopefully, a relatively inexpensive and portable system such as this will allow geomorphologists to supplement sensor networks with photo or video monitoring and allow them to see, and better quantify, the fantastic array of processes that modify landscapes as they unfold. Figure caption: Camera station set up at Badger Canyon, Arizona. Inset: view into box. Clockwise from bottom right: camera, microcontroller (blue), DC converter (red), solar charge controller, 12V battery. Materials and installation assistance courtesy of Ron Griffiths and the
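    Editorial aside: the sensor-to-trigger logic described above can be sketched in MicroPython (a Python dialect for microcontrollers); this is not the ATVIS firmware. The pin numbers, the ADC sensor, the threshold, and the pulse width are illustrative assumptions, and the MicroPython `machine` module APIs (Pin, ADC, read_u16) are used as documented.

      # Sketch (MicroPython): drive a trigger pulse to a camera remote input
      # when a sensor threshold is exceeded.
      from machine import Pin, ADC
      import time

      trigger = Pin(15, Pin.OUT, value=0)     # switches the trigger pulse to the camera
      sensor = ADC(26)                        # e.g., a stage or turbidity sensor

      THRESHOLD = 40000                       # raw ADC counts; site-specific assumption

      while True:
          if sensor.read_u16() > THRESHOLD:
              trigger.value(1)                # assert pulse: take a photo, or
              time.sleep_ms(200)              # start/stop video, per camera firmware
              trigger.value(0)
              time.sleep(5)                   # simple lockout between triggers
          time.sleep_ms(100)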

  15. From Video to Photo

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Ever wonder whether a still shot from a home video could serve as a "picture perfect" photograph worthy of being framed and proudly displayed on the mantle? Wonder no more. A critical imaging code used to enhance video footage taken from spaceborne imaging instruments is now available within a portable photography tool capable of producing an optimized, high-resolution image from multiple video frames.

  16. Precisely shaped acoustic ablation of tumors utilizing steerable needle and 3D ultrasound image guidance

    NASA Astrophysics Data System (ADS)

    Boctor, Emad M.; Stolka, Philipp; Kang, Hyun-Jae; Clarke, Clyde; Rucker, Caleb; Croom, Jordon; Burdette, E. Clif; Webster, Robert J., III

    2010-02-01

    Many recent studies have demonstrated the efficacy of interstitial ablative approaches for the treatment of hepatic tumors. Despite these promising results, current systems remain highly dependent on operator skill, and cannot treat many tumors because there is little control of the size and shape of the zone of necrosis, and no control over ablator trajectory within tissue once insertion has taken place. Additionally, tissue deformation and target motion make it extremely difficult to place the ablator device precisely into the target. Irregularly shaped target volumes typically require multiple insertions and several overlapping (thermal) lesions, which are even more challenging to accomplish in a precise, predictable, and timely manner without causing excessive damage to surrounding normal tissues. In answer to these problems, we have developed a steerable acoustic ablator called the ACUSITT with the ability of directional energy delivery to precisely shape the applied thermal dose. In this paper, we address image guidance for this device, proposing an innovative method for accurate tracking and tool registration with spatially-registered intra-operative three-dimensional US volumes, without relying on an external tracking device. This method is applied to guidance of the flexible, snake-like, lightweight, and inexpensive ACUSITT to facilitate precise placement of its ablator tip within the liver, with ablation monitoring via strain imaging. Recent advancements in interstitial high-power ultrasound applicators enable controllable and penetrating heating patterns which can be dynamically altered. This paper summarizes the design and development of the first synergistic system that integrates a novel steerable interstitial acoustic ablation device with a novel trackerless 3DUS guidance strategy.

  17. A review of techniques for the identification and measurement of fish in underwater stereo-video image sequences

    NASA Astrophysics Data System (ADS)

    Shortis, Mark R.; Ravanbakskh, Mehdi; Shaifat, Faisal; Harvey, Euan S.; Mian, Ajmal; Seager, James W.; Culverhouse, Philip F.; Cline, Danelle E.; Edgington, Duane R.

    2013-04-01

    Underwater stereo-video measurement systems are used widely for counting and measuring fish in aquaculture, fisheries and conservation management. To determine population counts, spatial or temporal frequencies, and age or weight distributions, snout to fork length measurements are captured from the video sequences, most commonly using a point and click process by a human operator. Current research aims to automate the measurement and counting task in order to improve the efficiency of the process and expand the use of stereo-video systems within marine science. A fully automated process will require the detection and identification of candidates for measurement, followed by the snout to fork length measurement, as well as the counting and tracking of fish. This paper presents a review of the techniques used for the detection, identification, measurement, counting and tracking of fish in underwater stereo-video image sequences, including consideration of the changing body shape. The review will analyse the most commonly used approaches, leading to an evaluation of the techniques most likely to be a general solution to the complete process of detection, identification, measurement, counting and tracking.
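    Editorial aside: the core snout-to-fork length computation in a calibrated stereo-video system reduces to triangulating the two annotated points and taking the Euclidean distance between them. The sketch below assumes a rectified, calibrated rig with positive disparity and ignores lens and refraction corrections; the focal length, baseline, and principal point values are placeholders.

      # Sketch: snout-to-fork length from a calibrated, rectified stereo pair.
      # f: focal length (px), B: baseline (m), (cx, cy): principal point (px).
      import numpy as np

      def triangulate(pt_left, pt_right, f, B, cx, cy):
          xl, yl = pt_left
          xr, _ = pt_right
          disparity = xl - xr                   # rectified pair: same image row
          Z = f * B / disparity                 # depth along the optical axis
          X = (xl - cx) * Z / f
          Y = (yl - cy) * Z / f
          return np.array([X, Y, Z])

      def fork_length(snout_l, snout_r, fork_l, fork_r, f=1400.0, B=0.6,
                      cx=960.0, cy=540.0):
          p1 = triangulate(snout_l, snout_r, f, B, cx, cy)
          p2 = triangulate(fork_l, fork_r, f, B, cx, cy)
          return float(np.linalg.norm(p1 - p2))  # metres, given B in metres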

  18. A Spinal Cord Window Chamber Model for In Vivo Longitudinal Multimodal Optical and Acoustic Imaging in a Murine Model

    PubMed Central

    Maeda, Azusa; Conroy, Leigh; McMullen, Jesse D.; Silver, Jason I.; Stapleton, Shawn; Vitkin, Alex; Lindsay, Patricia; Burrell, Kelly; Zadeh, Gelareh; Fehlings, Michael G.; DaCosta, Ralph S.

    2013-01-01

    In vivo and direct imaging of the murine spinal cord and its vasculature using multimodal (optical and acoustic) imaging techniques could significantly advance preclinical studies of the spinal cord. Such intrinsically high resolution and complementary imaging technologies could provide a powerful means of quantitatively monitoring changes in anatomy, structure, physiology and function of the living cord over time after traumatic injury, onset of disease, or therapeutic intervention. However, longitudinal in vivo imaging of the intact spinal cord in rodent models has been challenging, requiring repeated surgeries to expose the cord for imaging or sacrifice of animals at various time points for ex vivo tissue analysis. To address these limitations, we have developed an implantable spinal cord window chamber (SCWC) device and procedures in mice for repeated multimodal intravital microscopic imaging of the cord and its vasculature in situ. We present methodology for using our SCWC to achieve spatially co-registered optical-acoustic imaging performed serially for up to four weeks, without damaging the cord or induction of locomotor deficits in implanted animals. To demonstrate the feasibility, we used the SCWC model to study the response of the normal spinal cord vasculature to ionizing radiation over time using white light and fluorescence microscopy combined with optical coherence tomography (OCT) in vivo. In vivo power Doppler ultrasound and photoacoustics were used to directly visualize the cord and vascular structures and to measure hemoglobin oxygen saturation through the complete spinal cord, respectively. The model was also used for intravital imaging of spinal micrometastases resulting from primary brain tumor using fluorescence and bioluminescence imaging. Our SCWC model overcomes previous in vivo imaging challenges, and our data provide evidence of the broader utility of hybridized optical-acoustic imaging methods for obtaining multiparametric and rich

  19. Post Treatment of Acoustic Neuroma

    MedlinePlus

  20. Imaging of Acoustically Coupled Oscillations Due to Flow Past a Shallow Cavity: Effect of Cavity Length Scale

    SciTech Connect

    P Oshkai; M Geveci; D Rockwell; M Pollack

    2004-05-24

    Flow-acoustic interactions due to fully turbulent inflow past a shallow axisymmetric cavity mounted in a pipe, which give rise to flow tones, are investigated using a technique of high-image-density particle image velocimetry in conjunction with unsteady pressure measurements. This imaging leads to patterns of velocity, vorticity, streamline topology, and hydrodynamic contributions to the acoustic power integral. Global instantaneous images, as well as time-averaged images, are evaluated to provide insight into the flow physics during tone generation. Emphasis is on the manner in which the streamwise length scale of the cavity alters the major features of the flow structure. These image-based approaches allow identification of regions of the unsteady shear layer that contribute to the instantaneous hydrodynamic component of the acoustic power, which is necessary to maintain a flow tone. In addition, combined image analysis and pressure measurements allow categorization of the instantaneous flow patterns that are associated with types of time traces and spectra of the fluctuating pressure. In contrast to consideration based solely on pressure spectra, it is demonstrated that locked-on tones may actually exhibit intermittent, non-phase-locked images, apparently due to low damping of the acoustic resonator. Locked-on flow tones (without modulation or intermittency), locked-on flow tones with modulation, and non-locked-on oscillations with short-term, highly coherent fluctuations are defined and represented by selected cases. Depending on which of these regimes occur, the time-averaged Q (quality)-factor and the dimensionless peak pressure are substantially altered.
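    Editorial aside: the hydrodynamic contribution to the acoustic power referred to above is commonly evaluated with Howe's integral formulation, in which vorticity interacting with the acoustic particle velocity does net work on the resonant field:

      \Pi = -\rho_0 \int_V \left\langle (\boldsymbol{\omega} \times \mathbf{u}) \cdot \mathbf{u}_{ac} \right\rangle \, dV

    where omega is the vorticity, u the hydrodynamic velocity, u_ac the acoustic particle velocity, rho_0 the fluid density, and the angle brackets denote a time average over the acoustic cycle.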